Claude Code Users Hit With Weekly Rate Limits (techcrunch.com)

Anthropic will implement weekly rate limits for Claude subscribers starting August 28, both to address users running its Claude Code AI programming tool continuously around the clock and to prevent account-sharing violations. The new restrictions will affect Pro subscribers paying $20 monthly and Max plan subscribers paying $100 and $200 monthly, though Anthropic estimates fewer than 5% of current users will be impacted based on existing usage patterns.

Pro users will receive 40 to 80 hours of Sonnet 4 access through Claude Code weekly, while $100 Max subscribers get 140 to 280 hours of Sonnet 4 plus 15 to 35 hours of Opus 4. The $200 Max plan provides 240 to 480 hours of Sonnet 4 and 24 to 40 hours of Opus 4. Claude Code has experienced at least seven outages in the past month due to unprecedented demand.
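The quoted quotas can be expressed as data. The hour ranges below come from the summary above; the plan labels and the per-dollar midpoint comparison are illustrative only, not Anthropic's figures:

```python
# Weekly Claude Code quotas per plan, per the summary (hours).
# Plan labels and the per-dollar comparison are illustrative, not official.
plans = {
    "Pro ($20/mo)":  {"sonnet": (40, 80),   "opus": (0, 0)},
    "Max ($100/mo)": {"sonnet": (140, 280), "opus": (15, 35)},
    "Max ($200/mo)": {"sonnet": (240, 480), "opus": (24, 40)},
}
prices = {"Pro ($20/mo)": 20, "Max ($100/mo)": 100, "Max ($200/mo)": 200}

for name, quota in plans.items():
    low, high = quota["sonnet"]
    midpoint = (low + high) / 2
    print(f"{name}: ~{midpoint / prices[name]:.1f} Sonnet hours per dollar")
```

By this rough midpoint measure, the cheaper tiers deliver more Sonnet hours per dollar; the pricier tiers mainly buy access to Opus hours and higher ceilings.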

  • by Anonymous Coward on Monday July 28, 2025 @10:10PM (#65551830)

    It has already reached the enshittification stage!

    • You haven't seen anything yet. This still isn't the true cost of it.

    • I don't agree. Charging an up-front price for a good product is not what enshittification is. That is simple price competition. Gradually reducing the value of the product by introducing ever more-annoying little profit streams is what enshittification is.

      This is, for now, a competitive market, and these quotas are pretty generous for what one license is marketed to be used for.

      • by unrtst ( 777550 )

        I don't agree. Charging an up-front price for a good product is not what enshittification is.

        OK... that much makes sense. But is that the case here? Is the price the same and the service offering equal or better than when one first signed up? No.

        That is simple price competition.

        That implies that product A and product B are in competition and can compete based on price. That doesn't seem to be the case here.

        Gradually reducing the value of the product by introducing ever more-annoying little profit streams is what enshittification is.

        No. It's not about reducing the value of the product.
        It's about degrading the quality of services, often by promoting ads and sponsored content and such, in order to increase profits. I.e.: adding shit that makes it worse for the user.

        • OK, my point isn't really about the terminology. What I think would be really "shitty" is altering the outputs of the AI in subtle unspecified ways, presumably to sell you things or persuade you of things, so you don't even know who it's really working for or what it's doing.

          What I see here is quotas. And obviously, having quotas is worse for the customer than not having quotas for the same price, so it's a bit of a bummer, whatever word fits. But at least it isn't subtle or sneaky.

    • by allo ( 1728082 )

      There is no such thing as a flat rate for something that has usage-dependent costs.

  • Ask Claude (Score:4, Funny)

    by blue trane ( 110704 ) on Monday July 28, 2025 @10:13PM (#65551838) Homepage Journal

    Claude, can you write a load balancer for yourself so you don't have to impose user limits?

    • by gweihir ( 88907 )

      If this were about average use, they would probably not do it. This is about total load.

      • What if it's purely about creating scarcity for more subscription profit, and in reality demand is not prohibitive because, as you continually remind us, AI is no good and how can so many be enthralled with such a dumb fake?

        • A fair question. But what if it's simply because peak loads are getting too high (how are they paying for their inference? Is there demand pricing?), and they're trying to clamp them, so that they can continue to hit their price point?
        • by gweihir ( 88907 )

          How would they, as one of many offerings, create scarcity?

  • Claude Code uses up tokens like a Donkey Kong machine

  • by gweihir ( 88907 ) on Monday July 28, 2025 @11:01PM (#65551894)

    ... than many people hallucinated. No surprise for me.

  • ... "Hello world!" project done quickly.

  • by Jeremi ( 14640 ) on Tuesday July 29, 2025 @12:20AM (#65552026) Homepage

    That they've got a pretty decent LLM built into their own skull, and it's available gratis for their personal and professional use for up to 8-12 hours per day. The best part is that the more you use it, the better-trained and more effective it gets.

    • Indeed. But people burn out. The machine never tires.
    • by war4peace ( 1628283 ) on Tuesday July 29, 2025 @04:23AM (#65552362)

      That's true, for a specific and relatively narrow specialization.

      However, when you need a one-off configuration, setup, solution, or script in a language you don't master, that's where having a code helper available is a godsend.
      One recent case was when I wanted my Home Assistant setup to monitor my network devices via SNMP. What would have taken many hours, with the assorted swearing and frustration episodes, took about 3 hours total using Gemini Pro 2.5. It did all the heavy lifting for me and taught me quite a few things (with the occasional hallucination, of course, but I know how to weed out those occurrences).

      LLMs are tools which help me be more productive. Much like knives or hammers, when used right, they are useful, and when used wrong, they are dangerous.

      • I can tell you that when I was first burying it balls deep into the depths of network management back in the mid 2000s, I would have *loved* an LLM to help me figure out everything I eventually learned about SNMP walking, bulk walking, net-snmp agents, MIBs, etc.

        I've always figured these things really really excel as learning tools for professionals.
    • by Eneff ( 96967 )

      Unfortunately, my brain has a fairly strict rate limiter, and I have yet to find a way to pay my way out of that.

  • by ledow ( 319597 )

    Oh, look, now the limits start to come in, then the prices will go up, and eventually... hell... who knows... maybe the investors will actually see A PROFIT from those billions spent on training an AI to be a glorified autocomplete.

    It'll only take about a hundred years or so to pay back their investment, even then.

  • If... (Score:4, Insightful)

    by commodore73 ( 967172 ) on Tuesday July 29, 2025 @07:20AM (#65552554)
    If you're willing to pay $200 per month for coding assistance, I suspect you're a shit developer.
    • Well,
      I know of a developer (good? no idea) who had a lot of money.
      He paid "online gamers" to harvest items for him in an online game.

      Because he thought (and said so publicly): "I love that game, and in the 5h a week I have time to play it, I want to play it to its fullest potential."

      Using an LLM for coding is more or less the same.

      If you have to write 100 lines of code that you have clearly in your mind, and it takes 3h to do right, but an LLM can spit it out in 30 seconds... does that make you a bad developer?

    • If you're willing to pay $200 per month for coding assistance, I suspect you're a shit developer.

      If you're a developer who makes $100 per hour, spending $200 per month to save even 10 minutes per day is a no-brainer. If it saves you an hour or more per day, you'd have to be a complete idiot not to do it.

      • by leptons ( 891340 )
        >If it saves you an hour or more per day you'd have to be a complete idiot not to do it.

        It's not "saving" you anything. You are still required to work the standard 8 hours per day like most people are. With "AI" you're also required to check everything the LLM spits out for errors, which happens way too often, and that is also a lot of work if you don't want to create more problems for yourself later. LLMs may save you some typing with autocomplete, but generally your cognitive load with them isn't any lower.
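The break-even arithmetic behind the "$100 per hour" claim above can be sketched out. The hourly rate comes from that comment; the 21 workdays per month and the break-even calculation are illustrative assumptions:

```python
# Back-of-envelope break-even for the "$100/hr developer" claim.
# 21 workdays/month is an assumption for illustration.
hourly_rate = 100.0          # $/hour, from the comment above
workdays_per_month = 21      # assumed
subscription = 200.0         # $/month, the Max plan from the article

# Value of 10 minutes saved per workday, per month:
minutes_saved_per_day = 10
value_of_savings = hourly_rate * (minutes_saved_per_day / 60) * workdays_per_month
print(f"10 min/day saved is worth ~${value_of_savings:.0f}/mo vs ${subscription:.0f}/mo")

# Minimum daily savings needed to break even on the subscription:
break_even_minutes = subscription / (hourly_rate / 60) / workdays_per_month
print(f"Break-even: ~{break_even_minutes:.1f} minutes saved per workday")
```

Under these assumptions, 10 minutes a day is worth about $350/month, so the subscription pays for itself at roughly 6 minutes of genuine daily savings. Whether the savings are genuine once review overhead is counted is exactly what the reply disputes.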
  • A company that's trying to succeed by actually making a profit instead of killing everybody else by running losses the longest will have to right-size their revenue model. A small subset of power users can really kill all profit for an AI-based service.

    So long as they are applying it to new and renewing contracts, I think it's fair. Just don't try to tinker with agreements that are in flight.
