Key Highlights:
- Anthropic briefly suspended OpenClaw creator Peter Steinberger’s Claude access before restoring it within hours.
- The suspension followed new pricing rules removing subscription support for third-party harnesses like OpenClaw.
- Anthropic said claw-style agent workflows create heavier compute loads than standard prompts.
Anthropic briefly suspended OpenClaw creator Peter Steinberger’s Claude account, restoring access just hours later. The temporary ban followed recent pricing changes affecting third-party AI harness tools and triggered widespread discussion across developer communities.
The incident quickly gained attention after Steinberger posted about the suspension on X early Friday morning. His access returned later the same day, but the episode raised new questions about how AI model providers handle external developer tools.
Why did Anthropic suspend OpenClaw’s creator?
Peter Steinberger shared a screenshot showing a notice from Anthropic that cited “suspicious” activity as the reason for the suspension. However, the restriction did not last long.
Within hours, Steinberger confirmed his account had been reinstated. Meanwhile, an Anthropic engineer publicly responded online, saying the company had never banned users simply for using OpenClaw, and offering assistance.
It remains unclear whether that response directly influenced the restoration. Anthropic has not publicly explained the exact reason behind the suspension.
Still, the timing drew attention because the event came soon after the company updated how subscriptions interact with third-party tools.
What changed in Claude’s pricing for OpenClaw users?
Just days before the suspension, Anthropic announced that Claude subscriptions would no longer cover usage through third-party harnesses such as OpenClaw.
Instead, developers must now pay separately through Claude’s API based on consumption.
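Usage-based billing means cost scales with the tokens a workload consumes rather than a flat monthly fee. A minimal sketch of how a developer might estimate per-request cost under such a model (the rates below are illustrative placeholders, not Anthropic’s actual prices):

```python
# Rough per-request cost estimator for usage-based API billing.
# RATES are illustrative placeholders, NOT real Anthropic prices.
RATES = {
    "input": 3.00 / 1_000_000,    # hypothetical $ per input token
    "output": 15.00 / 1_000_000,  # hypothetical $ per output token
}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single API call."""
    return input_tokens * RATES["input"] + output_tokens * RATES["output"]

# A harness that loops and retries pays for every pass, so cost
# multiplies with the number of iterations, not the number of tasks.
total = sum(estimate_cost(8_000, 1_200) for _ in range(25))
print(f"${total:.2f}")
```

Under this structure, a workflow that re-prompts the model dozens of times per task costs proportionally more, which is precisely the load subscriptions were not priced for.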
Anthropic explained that subscriptions were not designed for the heavier workloads generated by harness systems. These systems can run continuous reasoning loops, retry operations automatically, and connect with external tools across workflows.
Such behavior increases compute usage beyond typical prompt-response interactions.
As a result, Anthropic shifted these workloads to its API pricing structure.
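The difference in load is easy to see in sketch form: a standard chat interaction is one model call, while a harness-style agent keeps calling the model until the task completes, feeding tool results back in each round. A simplified illustration (the function names are hypothetical, not OpenClaw’s actual API):

```python
import random

def call_model(prompt: str) -> dict:
    """Stand-in for one LLM API call (illustrative only)."""
    # Every call here is billable compute on the provider's side.
    done = random.random() < 0.2
    return {"done": done, "tool_call": None if done else "search"}

def run_tool(name: str) -> str:
    """Stand-in for an external tool invocation."""
    return f"result of {name}"

def agent_loop(task: str, max_steps: int = 50) -> int:
    """Run a reason-act loop; return how many model calls it consumed."""
    calls = 0
    context = task
    for _ in range(max_steps):
        response = call_model(context)
        calls += 1
        if response["done"]:
            break
        # Feed the tool output back into the context and reason again.
        context += "\n" + run_tool(response["tool_call"])
    return calls

# A plain chatbot exchange costs 1 call; an agent run may cost dozens.
print(agent_loop("summarize repo"))
```

Each iteration of the loop is a full model invocation, so one agent task can generate an order of magnitude more compute than a single prompt-response exchange.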
Steinberger said he had already moved his testing workflow to the API under the updated policy when his account was suspended.
Is Anthropic limiting third-party agent tools?
The policy shift sparked debate among developers working on open AI orchestration frameworks.
Steinberger suggested the timing of the pricing change raised concerns. He noted that Anthropic introduced new features inside its own agent environment shortly before restricting subscription coverage for external harness tools.
He appeared to reference features added to Anthropic’s internal agent platform, including remote task assignment capabilities.
Although Anthropic framed the update as a technical adjustment tied to usage patterns, the sequence of events drew attention from developers tracking competition between proprietary agent systems and open frameworks.
The situation reflects a broader shift across the AI industry, where companies increasingly build integrated agent ecosystems around their models.
Why does OpenClaw still test Claude compatibility?
Some developers questioned why Steinberger continues testing Claude compatibility despite working at OpenAI.
He clarified that his role with the OpenClaw Foundation is separate from his product strategy work at OpenAI. According to him, the foundation aims to ensure OpenClaw works reliably across multiple model providers.
He also confirmed that Claude remains widely used within the OpenClaw community.
Maintaining compatibility therefore remains necessary for developers who rely on the framework in production workflows.
This cross-platform testing approach reflects a common strategy among open tooling projects that aim to support several model ecosystems at once.
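Cross-provider support in open tooling typically means hiding each vendor’s API behind a shared interface and running the same smoke tests against every backend. A minimal sketch of that pattern (class and method names are hypothetical, not OpenClaw’s real architecture):

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Common interface the harness targets; one subclass per vendor."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ClaudeProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # A real harness would call Anthropic's API here.
        return f"[claude] {prompt}"

class OpenAIProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # A real harness would call OpenAI's API here.
        return f"[openai] {prompt}"

def compatibility_check(providers: list[ModelProvider]) -> dict[str, bool]:
    """Run the same smoke test against every configured backend."""
    return {
        type(p).__name__: p.complete("ping").endswith("ping")
        for p in providers
    }

print(compatibility_check([ClaudeProvider(), OpenAIProvider()]))
```

Keeping every backend behind one interface is what lets a single test suite verify compatibility across providers, which is why a policy change at any one provider ripples through the whole project.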
What does this mean for developers building AI agents?
The temporary suspension highlights growing friction between model providers and third-party orchestration layers.
Agent harness tools like OpenClaw allow developers to automate workflows that go beyond standard chatbot interactions. However, these tools also increase compute demand and reduce platform visibility into usage behavior.
Model providers are increasingly responding by adjusting pricing structures or introducing their own agent-native environments.
In this case, Anthropic said its subscription plans were not designed for the workload patterns generated by harness tools. Moving those workloads to API billing reflects a shift toward usage-based access for advanced automation layers.
At the same time, developer reactions suggest concerns about whether open frameworks will face tighter integration limits over time.
The brief suspension of Steinberger’s account did not lead to long-term access restrictions. Still, the episode shows how policy changes around agent tooling can quickly affect developer workflows connected to Claude.