How OpenClaw Responded After Being Banned by Anthropic

OpenClaw's strategic response to Anthropic's ban showcases its evolution into a resilient AI platform independent of Claude.

OpenClaw’s Response to Anthropic’s Ban

On April 4, Anthropic officially cut off subscription-based Claude access for third-party tools, including OpenClaw, an open-source AI agent that had previously relied on Claude as its primary model supply. However, the ban was not an unexpected blow; OpenClaw had been preparing for this moment for months.

The reality is that over the past five months, OpenClaw has gradually transformed from a tool reliant on Claude into a platform that can withstand the impact of such a ban. April 4 was merely a public validation of this process.

To understand OpenClaw’s situation, one must consider not only Anthropic’s ban but also how OpenClaw is responding and what this means for the entire open-source ecosystem and the AI industry.

Why Did Anthropic Ban OpenClaw?

According to Boris Cherny, head of Claude Code, the subscription model was not designed for use cases like OpenClaw’s. The logic behind this statement is straightforward: Claude’s subscription costs $200 per month, intended for individual users engaging in regular conversations and programming on the official interface. However, an instance of OpenClaw connected to Claude can run 24/7, executing tasks, calling tools, and handling long contexts, leading to computational costs far exceeding the subscription price.

A popular developer practice known as “Ralph Wiggum” involves having AI repeatedly modify code in a loop until all tests pass. Reports suggest that some have completed development projects worth $50,000 using an API costing less than $300. A subscription priced by “person” cannot sustain the operational rhythm of an agent priced by “machine.”
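The "Ralph Wiggum" pattern is easy to sketch: run the test suite, feed the failures back to the model, and repeat until everything is green. A minimal illustration, assuming a hypothetical `ask_model` callback that applies an AI-generated patch in response to failing output (not any real API):

```python
def ralph_wiggum_loop(run_tests, ask_model, max_iterations=50):
    """Loop until the test suite passes or we give up.

    run_tests() -> (passed: bool, log: str); ask_model(log) applies an
    AI-generated patch to the working tree. Both are placeholders.
    """
    for iteration in range(max_iterations):
        passed, log = run_tests()
        if passed:
            return iteration  # number of AI patches it took
        ask_model(log)  # hand the failure log back to the model
    raise RuntimeError("tests still failing after max_iterations")
```

In practice `run_tests` would shell out to something like `pytest -q` and `ask_model` would call a provider API; the loop's only real logic is "stop when green," which is exactly why it can burn API-scale compute on a flat-rate subscription.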

Cost, however, is only the surface reason. If it were merely a matter of losing money, raising prices would suffice. Instead, Anthropic spent four months executing a comprehensive strategy.

In January, Anthropic requested that OpenClaw, then known as Clawdbot, change its name due to phonetic similarities with Claude. During this time, Anthropic quietly deployed technical barriers on the server side, preventing third-party tools from accessing Claude using user subscription credentials. By February, this restriction was formally incorporated into the service terms.

In March, Anthropic launched two new products: Claude Dispatch and Computer Use. The former allows users to remotely command Claude on their computers via mobile, while the latter enables Claude to control desktop applications directly. These features closely overlap with OpenClaw’s core functionalities.

On April 4, with its replacement products in place, Anthropic officially announced the cutoff of third-party tool subscriptions. The move was not merely about saving costs but about securing market position.

Claude Code currently generates an annual revenue of $2.5 billion, making it one of Anthropic’s most crucial products. Anthropic has built a comprehensive matrix around this product, encompassing programming assistance, desktop control, and remote collaboration.

OpenClaw’s existence posed a fundamental threat to this system: it turned Claude into a backend component that could be easily replaced. Users’ workflows, habits, and toolchains were not embedded in Claude but in OpenClaw. For users, today they might call on Claude, tomorrow GPT, and the day after DeepSeek.

For model companies, the most frightening scenario is not losing users but being reduced to a pipe: called upon without being depended on, bearing the compute costs without owning the customer relationship.

Reports indicate that OpenClaw founder Peter Steinberger personally negotiated with Anthropic on March 28 but only secured a one-week extension. From Anthropic’s perspective, there was no room for negotiation. They were not seeking higher payments from OpenClaw but ensuring that users utilized their models within their product ecosystem.

How Did OpenClaw Counterattack?

Less than 48 hours after the ban took effect, on April 5, OpenClaw released version 4.5.

The most notable change was the removal of Anthropic from the onboarding process for new users. The official statement was simple: “Anthropic banned us. GPT-5.4 has become stronger. We move forward.” There were no complaints or pleas for reconciliation; it was a forward-looking declaration.

In terms of product updates, version 4.5 integrated OpenAI’s latest GPT-5.4 model along with extensive targeted optimizations to the user experience. Community feedback suggested the result felt like the old Claude experience restored. In other words, users switched to a different underlying model, but their daily experience in OpenClaw remained largely unchanged.

This outcome was clearly not produced in just 48 hours. OpenClaw had been working for months to transform itself from being “the front end of Claude” into a model-agnostic agent platform.

In February, OpenClaw’s version 4.0 completely rewrote its underlying architecture. The model is no longer hardcoded as the sole entry point but has become a pluggable backend—Claude, GPT, Gemini, DeepSeek, or even user-deployed open-source models can all be integrated as OpenClaw’s “engine.” The system automatically switches to another model when one becomes unavailable, requiring no manual operation from users.
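The pluggable-backend idea with automatic failover can be captured in a few lines. This is an illustrative design, not OpenClaw's actual code; the backend names and the `ModelUnavailable` signal are assumptions:

```python
class ModelUnavailable(Exception):
    """Raised by a backend when its provider rejects or cuts off access."""

class ModelRouter:
    def __init__(self, backends):
        # backends: ordered list of (name, callable prompt -> text)
        self.backends = backends

    def complete(self, prompt):
        failures = []
        for name, call in self.backends:
            try:
                return name, call(prompt)  # first healthy backend wins
            except ModelUnavailable as exc:
                failures.append((name, str(exc)))  # try the next engine
        raise RuntimeError(f"all backends failed: {failures}")
```

With Claude listed first and GPT second, a provider cutoff simply means the router's next call lands on GPT, with no action required from the user.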

By March, subsequent updates further minimized the differences in usage between models. Different providers use different authentication methods, tool-invocation formats, and response structures, but OpenClaw absorbed these differences into its compatibility layer. For users, switching models became a matter of changing a configuration setting rather than readjusting to an entirely new toolset.
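A compatibility layer of this kind consists mostly of mapping each provider's response shape onto one internal schema. The field names below mirror the general shape of Anthropic-style and OpenAI-style chat payloads but are simplified for illustration:

```python
def normalize_response(provider: str, raw: dict) -> dict:
    """Collapse provider-specific response layouts into one schema."""
    if provider == "anthropic":
        # Anthropic-style: list of content blocks plus a stop_reason field
        return {"text": raw["content"][0]["text"],
                "stop": raw["stop_reason"]}
    if provider == "openai":
        # OpenAI-style: choices list with a nested message object
        return {"text": raw["choices"][0]["message"]["content"],
                "stop": raw["choices"][0]["finish_reason"]}
    raise ValueError(f"no adapter registered for provider: {provider}")
```

Downstream code only ever sees `{"text", "stop"}`, which is what makes switching models a configuration change rather than a rewrite.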

Meanwhile, OpenClaw’s skill marketplace has accumulated 44,000 skill packages, covering scenarios from code development to content creation and daily office tasks. Native adaptations have been made for over a dozen mainstream messaging platforms, including QQ, Feishu, WhatsApp, and Telegram, and the agent orchestration capabilities continue to improve. Together, these factors have made OpenClaw’s user retention increasingly independent of the choice of underlying model.

Additionally, version 4.5 introduced two new features: one is the integration of native video and music generation capabilities from 11 suppliers, allowing users to have the agent create short videos or soundtracks; the other is a “dream” memory system that simulates human sleep mechanisms, automatically organizing and compressing long-term memories during inactive periods, enabling the agent to better understand its user over time.
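The "dream" mechanism, as described, amounts to a periodic compaction pass over long-term memory. A minimal sketch, assuming an invented memory schema and a `summarize` callback standing in for a model call:

```python
from datetime import datetime, timedelta

def dream_cycle(memories, summarize, now=None, max_age_days=7):
    """Fold memories older than max_age_days into a single digest entry.

    memories: list of {"time": datetime, "text": str} dicts (invented
    schema); summarize: callable condensing a list of texts into one.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    old = [m for m in memories if m["time"] < cutoff]
    fresh = [m for m in memories if m["time"] >= cutoff]
    if not old:
        return memories  # nothing to compress yet
    digest = {"time": cutoff,
              "text": summarize([m["text"] for m in old]),
              "compressed": True}
    return [digest] + fresh
```

Run during idle periods, repeated passes keep the memory store bounded while preserving a condensed trace of older interactions.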

These two features indicate that OpenClaw is not merely on the defensive; its iterative pace has not been interrupted by the ban.

OpenClaw’s Ban is Not an Isolated Incident

The ban by Anthropic is not an isolated case; similar incidents have occurred earlier.

On January 9, a programming tool called OpenCode was cut off from accessing Claude. At that time, OpenCode had 56,000 stars on GitHub, making it one of the most popular AI programming tools among developers, second only to Claude Code. This cutoff occurred without any prior warning—many developers suddenly found their ongoing programming work interrupted without notice.

The developer community reacted strongly. On GitHub, related feedback posts received over 1,400 reactions and more than 400 comments. One developer wrote: “I was coding just an hour ago, and now it’s throwing errors—this isn’t service, it’s kidnapping.” Anthropic subsequently unilaterally closed this discussion.

In a more extreme case, Anthropic later took legal action against OpenCode. On March 19, OpenCode founder Dax Raad was forced to submit a code update whose commit note contained only the phrase “anthropic legal requests,” completely removing all Claude integration code from the project. He then announced a shift to OpenAI, fully adapting to GPT-5.

Around the same time, other AI programming plugins running on the VS Code editor, such as Roo Code and Cline, were also affected, with their original Claude access channels rendered ineffective, forcing them to switch to alternative interfaces. Even Cursor, a popular AI programming editor, was not spared.

A noteworthy detail is that engineers from Elon Musk’s xAI had previously been using Cursor to call Claude to assist in training their models, which Anthropic deemed a violation of the service terms prohibiting “use for building competitive AI systems,” leading to an immediate ban.

These events sparked a strong backlash in the developer community. Comma.ai founder George Hotz wrote a blog titled “Anthropic is Making a Huge Mistake.” His core judgment was: “You won’t drive people back to Claude Code; you will only push them toward other model suppliers.” Ruby on Rails creator DHH’s criticism was even sharper: “For a company that trains models on our code, this is a terrible policy.”

Regardless of the intensity of the criticism, every project that faced a ban ultimately did the same thing: they integrated multiple model providers and switched to a pay-per-use model, no longer binding their fate to any single model vendor.

These cases repeatedly validate the same truth: once a tool accumulates workflows, skill packages, private memories, and automation projects, the cost of leaving the tool surpasses the cost of swapping the underlying model. And most current tool platforms already let users switch models with little friction.

Conclusion

Over the past two years, there has been an almost unshakeable consensus in the AI industry: models are everything. The strongest model wins, with financing, valuation, and talent competition all revolving around this assumption.

If the switching costs between models are decreasing while the migration costs at the tool layer are increasing, then the value in the industry chain will naturally shift from the model layer to the tool layer.

Anthropic is clearly aware of this. It is simultaneously banning third-party tools while accelerating the enhancement of its own tool layer—Claude Code for programming, Dispatch for remote control, Computer Use for desktop operation, and Cowork for team collaboration.

Anthropic is not just protecting model revenue; it is seizing the orchestration layer. It understands that once the orchestration layer is occupied by others, the substitutability of models increases. And Anthropic does not want to be just a model seller.

The capabilities for model orchestration, engineering, workflow management, and application ecosystems, all previously seen as “ancillary to the model,” may become the true focal point of AI competition in the next stage.
