Programming AI Microsoft The Almighty Buck

GitHub Copilot Is Moving To Usage-Based Billing

GitHub said in a blog post today that it is moving Copilot to usage-based billing starting June 1. Base subscription prices will remain the same but premium requests will be replaced with monthly AI Credits that are consumed based on token usage.

"Instead of counting premium requests, every Copilot plan will include a monthly allotment of GitHub AI Credits, with the option for paid plans to purchase additional usage," the platform said. "Usage will be calculated based on token consumption, including input, output, and cached tokens, using the listed API rates for each model. This change aligns Copilot pricing with actual usage and is an important step toward a sustainable, reliable Copilot business and experience for all users."

Documentation for individuals, businesses and enterprises, and an FAQ can be found at their respective links.

Comments:
  • by ByTor-2112 ( 313205 ) on Monday April 27, 2026 @02:09PM (#66114832)

    Sweet, so almost everyone will be charged zero for the thing they aren't using.

    • Re:So zero, then? (Score:4, Insightful)

      by karmawarrior ( 311177 ) on Monday April 27, 2026 @02:18PM (#66114866) Journal

      Nah, everyone will be charged through the nose for all the things they keep turning off but that keep being turned back on and jumping in front of the features they're trying to use.

      I am so fed up with this timeline. Can we all, as a society, find a way to invent a time reset button, something that maybe takes us back to 1999, with some knowledge of what NOT to do this time around?

      • by shanen ( 462549 )

        Mod parent funny for "timeline" though it could be a multiverse joke.

        Finding it increasingly hard to develop or sustain interest in any Slashdot stories these days. Personal problem with time? Or specific problems with each story? Or just the downwards trend in the quality of discussions? (But that might be a selective memory problem.)

        On this story I have been considering GitHub sans Copilot as part of a solution approach to a problem created by a dying website. However this story puts the last nail in the

      • Nah, everyone will be charged through the nose for all the things they keep turning off but that keep being turned back on and jumping in front of the features they're trying to use.

        I am so fed up with this timeline. Can we all, as a society, find a way to invent a time reset button, something that maybe takes us back to 1999, with some knowledge of what NOT to do this time around?

        First, you'd need to go back to before Reagan's rise, because he sort of lifted the "profit is the most important part" idealism up to where it started to become some weird innate law of nature that can not be overcome by common sense, logic, or ethical concerns. And then, somehow, you'd have to block that mentality from rising in some other way and then becoming some weird religion that drives everything we do as a society. That's really what led us to this point where corporate dictate outstrips all attem

    • by Tablizer ( 95088 )

      They'll probably use dark patterns to trick you into "usage"

    • by xeoron ( 639412 )
      And they are going to share in the spoils right? After all, they trained it on Git stored projects
  • by CubicleZombie ( 2590497 ) on Monday April 27, 2026 @02:12PM (#66114838)

    They've been limiting requests for premium models to 300/month for quite a while now, and gradually removing the free models. The only free model left now is GPT-5 mini, which is nearly useless. I've moved on to Cursor AI. It is vastly superior to GitHub Copilot.

    Expect AI to get a lot more expensive as people and companies become dependent on it. This is by design.

    • by nlc ( 10289693 )

      > Expect AI to get a lot more expensive as people and companies become dependent on it. This is by design.

      AI is being sold at a loss so prices will increase regardless. Investors are hoping companies become dependent on it but if they do it will be because it works. Right now we are in the experimentation stage and it is by no means clear that AI provides anything of lasting value.

      • Yeah, this. All of the AI platforms have been operating at insane losses -- many estimates say $25-30 burned for every $1 in revenue, some say lots more, and that's without all the externalized costs.

        It was never sustainable, and it will never be sustainable barring some tremendous breakthrough in efficiency of compute, or in cost of power generation. And I'm not talking about the ridiculous data-centers-in-space thing. Energy may be nearly free up there, but we have a hard enough time with the waste hea

        • by Junta ( 36770 )

          It *might* have been sustainable if built out more modestly. But instead they said we need to build new datacenters everywhere, all of a sudden, no matter how much they have to borrow and whatever purchasing commitments they need to make to seemingly secure supply.

          Probably a large amount of waste gets written off in bankruptcy, then the more sustainable market persists beyond that, including the light AI augmentation you reference, and some of the software development offerings persist (though those folks have historically

        • Ed Zitron interviewed a guy who said that there are supposedly thousand-fold reductions in inference costs coming down the pipeline. Ed is an AI skeptic, and I don't remember who the guy was, but he seemed knowledgeable. Anyway, if that is true then "running" a model could be cheap, whereas "training" a model is still horrendously expensive. Somewhere between the two would be the true cost.

          I personally doubt it will ever be a viable business beyond what you're describing.

          Also, there is one customer who doesn't c

          • I read a research paper on how to train through inference by moving away from back propagation, (WAY simplifying this) by doubling the memory usage with learning neuron connections wired into the transformer layers, and having that layer and the learning layer adjusting each other doing forward inferences. instead of doing this 0 layer | end layer |-->-------| tick |----->----| tick |-------->-| tick |--------=>=>=>=| |>=>=>=>=>| |=>=>=>=>=| |>=
            • Damnit, Slashdot ruined my formatting and deleted the middle of my post... I promise the above post made more sense when I typed it out.
          • by DarkOx ( 621550 )

            This will really become a problem for selling commercial access to frontier models, if it proves to be true (I tend to believe it will).

            If the models get thousand-fold cheaper to run, then the hardware needed to do it will be something anyone interested in more than very occasional use will be able to justify. Even if it ends up not looking exactly like consumer GPU/NPU offerings today, it will land in PCs and likely even SBCs soon enough.

            So now the pure AI companies will have big problem, how to charge

      • by Junta ( 36770 )

        A lot of "big software teams" have already made offerings from these companies foundational to their process. The one I'm familiar with is having just a nasty mess come out of it, but that team has a history of churning out human slop anyway, so quality-wise the switch to AI slop hasn't really changed anything, but it is faster and cheaper than the human slop.

        The providers will erase the 'cheaper' part, but it will remain faster.

        At this point even if a company has a human using VSCode as a boring old editor, they'll

    • by CEC-P ( 10248912 )
      The fact that I've never even heard of copilot for github and assume anyone not using Cursor is insane goes to show how much money I think this venture will make. Do they really think "because it's the one already there on the site" is a level of unprofessional laziness that IT developers will fall for?
      • by Anonymous Coward

        >The fact that I've never even heard of copilot for github
        Serious unc vibes pretending not to have heard of github copilot.
        "who? Taylor Swift? I'm sorry, I don't own a TV and I don't know who that is"

    • Until Cursor starts enshittifying.

      • Grok is buying Cursor for 60 billion USD. At least, that is their intent. Once that happens, Cursor will immediately be enshittified.

        Time to start looking for an alternative... Zed [zed.dev] perhaps?

    • by thegarbz ( 1787294 ) on Monday April 27, 2026 @03:42PM (#66115086)

      Expect AI to get a lot more expensive as people and companies become dependent on it. This is by design.

      Expect AI to get a lot more expensive as providers realise that sitting around making loss after loss every quarter is not a sustainable business model. Seriously, is there a single company whose AI division is currently in the black?

      • Expect hackers to install hidden background daemons that relay AI prompts from strangers over the Internet, at only half the per-token price.
    • by Junta ( 36770 )

      Expect AI to get a lot more expensive as people and companies become dependent on it. This is by design.

      Hey, it has worked pretty well for IBM and their mainframe... Well, except for the "starting cheap" bit, but the landscape is much more competitive than the one IBM had starting out.

    • by gweihir ( 88907 )

      Expect AI to get a lot more expensive as people and companies become dependent on it. This is by design.

      Agreed. This is also by sheer economic necessity. For example, OpenAI is currently at around 12% of the revenue they need to be making just to break even. The others are very likely not doing much better. Hence price increases in the 10x...20x range are to be expected, since they want to not just break even but make real profits, and since increased prices will decrease AI use. And since a lot of companies stupidly fired all the people that would, with the new prices, have done things cheaper than AI. All tho
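As a back-of-envelope check of the figure in the comment above (taking the 12% number at face value; it is the commenter's estimate, not an audited figure):

```python
# If revenue covers only 12% of costs, break-even alone implies roughly an
# 8.3x increase in revenue per unit of usage, before any profit margin and
# before accounting for demand falling as prices rise.
coverage = 0.12
break_even_multiple = 1 / coverage          # ~8.33x just to break even
with_margin = break_even_multiple * 1.2     # e.g. a 20% margin on top -> ~10x
```

So the low end of the 10x...20x range quoted above follows directly from the 12% figure plus a modest margin; the high end would reflect demand shrinking as prices rise.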

  • Buh Bye (Score:3, Interesting)

    by SlashbotAgent ( 6477336 ) on Monday April 27, 2026 @02:15PM (#66114852)

    Paywalling that intrusive privacy invading crap is great news.

    Buh bye Copilot. Happy to see you go. Thanks Microslop.

  • by Himmy32 ( 650060 ) on Monday April 27, 2026 @02:15PM (#66114854)

    Clearly everyone was thinking this was going to be the gym model, where the heavy users subsidize the light ones, and then you make a bunch of money off the people who forgot they even subscribed. But when typical use is more than what they were picturing heavy use as, the whole thing breaks down.

    Anthropic started the roll back of the subsidies with Claude Pro, now all the other providers can follow to try and shore up their business models.

    • by CAIMLAS ( 41445 )

      Yep.

      It was a cost model doomed to fail from the start without another way to make up the difference, akin to "unlimited refills on drinks".

      • Even unlimited soda is a better cost model than this.

      • by Himmy32 ( 650060 )

        I actually think refills on drinks is a pretty apt analogy though. They thought that most people wouldn't fill up multiple times, that the soda would be cheap enough to not worry about, and if they gave away some free soda they'd get some loyal customers.

        Only after some people are staying to drink only the "cheap soda" and the soda is expensive, do they need to make rules kicking people out early and paying per refill.

  • by devslash0 ( 4203435 ) on Monday April 27, 2026 @02:17PM (#66114862)

    So they're asking users to pay for tokens despite a good portion of tokens being consumed for nothing, because of the number of attempts it takes to generate anything usable.

    It's like saying "our product is shit, but we'll tell you it's your fault because the 100 prompts you wrote were not enough to make the statement of your simple task specific enough; and we'll charge you for all of it"

    • Isn't this a reasonable way to charge for AI usage? People who get there faster with better prompts will pay less.

      Whether it is worth the cost is a completely different question, of course, but paying per usage seems reasonable.

      It's kind of similar to how it works with junior developers. If you give them shit instructions they will mostly produce something that isn't really what you had in mind, and it takes more iterations to get to something good.

    • So they're asking users to pay for tokens despite a good portion of tokens being consumed for nothing because of the number of attempts it takes to generate anything usable.

      If you're bad at using the tools, is that their fault?

      Good prompting and good context management are non-trivial, but they are things you can learn to do.

      Good prompting is really just good communication. Pretend you were telling a junior developer who is very bright and somewhat overenthusiastic what to do via email, and that you can't send them another email for several hours. If you give them incorrect instructions, they're going to produce incorrect results. If you give them vague instructions, th

  • by DeanonymizedCoward ( 7230266 ) on Monday April 27, 2026 @02:57PM (#66114978)

    The experience on annual plans will change significantly: model multipliers will increase, and standard-tier models (currently 0x) will no longer be available, reflecting increased compute costs and the transition to usage-based billing, and no new models or features will be added to annual plans going forward.

    And lest you think that buying an annual plan actually means getting that plan for the duration of its term... remember that we live in a free society where the Epstein class is allowed to change the terms of a "sale" unilaterally. At least they're offering a prorated conversion of the annual plans into the new-and-improved pay as you go plans, which might become somewhat attractive as the multipliers scale TO THE MOON and the annual plan becomes essentially worthless.

  • ... now comes the switch part. We all knew this would happen.

    So did social media, streaming, and everything else. First it's great and cheap, then it's shitty and expensive.

    This time at least I had read Cory Doctorow and did learn to host and use open source alternatives from the start.

    • by gweihir ( 88907 )

      I do not think most people really expected this, although everybody had all the data available to make a not at all difficult prediction to that effect. But people in general are not smart and fell for the free drugs at the beginning. And now that they are hooked...

  • https://www.scry.llc/2026/04/0... [scry.llc]

    "I began a "sovereign AI system" using Ollama in July of 2025. The core concept is a domain-constrained AI for specialized knowledge, resistant to external influence and memetic sabotage. The assumption that all knowledge should be accessible and malleable to all people everywhere is ridiculous"

    there is now a significant market for locally-hosted AI in niche roles, AI that doesn't need to be AGI.

  • I find it hilarious that most of the commenters here don't even know what this story is talking about. That's Microsoft's fault, really, for using Copilot to mean 10 different things. I actually use GitHub Copilot, and it is not the same crap that is shoved into every orifice of Windows and MS products. Most of the time I'm using the Claude models inside of GitHub Copilot anyway.
  • And now the supposed advantages will vanish as well, but many teams will not be able to move away and will just have to pay whatever is asked. Obviously, even really expensive LLMs will still hallucinate and continue to have zero insight.

    On a personal note, the only things I have depending on AI are a few student theses looking into its limits. And I expect they will be able to finish their work just fine, as the price increases will not come that fast.

  • ...cuz that's about the only way I'd use copilot.
