AI Programming

AI Coding Agents Are Already Commoditized (seangoedecke.com) 62

Software engineer Sean Goedecke argues that AI coding agents have already been commoditized because they require no special technical advantages, just better base models. He writes: All of a sudden, it's the year of AI coding agents. Anthropic released Claude Code, OpenAI released their Codex agent, GitHub released its own autonomous coding agent, and so on. I've done my fair share of writing about whether AI coding agents will replace developers, and in the meantime how best to use them in your work. Instead, I want to make what I think is now a pretty firm observation: AI coding agents have no secret sauce.

[...] The reason everyone's doing agents now is the same reason everyone's doing reinforcement learning now -- from one day to the next, the models got good enough. Claude Sonnet 3.7 is the clear frontrunner here. It's not the smartest model (in my opinion), but it is the most agentic: it can stick with a task and make good decisions over time better than other models with more raw brainpower. But other AI labs have more agentic models now as well. There is no moat.

There's also no moat to the actual agent code. It turns out that "put the model in a loop with a 'read file' and 'write file' tool" is good enough to do basically anything you want. I don't know for sure that the closed-source options operate like this, but it's an educated guess. In other words, the agent hackers in 2023 were correct, and the only reason they couldn't build Claude Code then was that they were too early to get to use the really good models.
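The "model in a loop with a 'read file' and 'write file' tool" claim is easy to make concrete. Below is a minimal sketch of the shape such an agent plausibly takes -- not any vendor's actual code -- where `call_model` is a hypothetical stand-in for whatever chat-completion API you have:

```python
# Minimal sketch of "put the model in a loop with a 'read file' and
# 'write file' tool". Hypothetical: `call_model` stands in for any
# chat-completion API that returns either a tool request or a final answer.
import pathlib

def read_file(path: str) -> str:
    return pathlib.Path(path).read_text()

def write_file(path: str, content: str) -> str:
    pathlib.Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

TOOLS = {"read_file": read_file, "write_file": write_file}

def run_agent(task: str, call_model, max_steps: int = 20) -> str:
    """Feed the model its own tool results until it stops asking for tools."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # reply is e.g. {"tool": ..., "args": {...}} or {"content": ...}
        reply = call_model(messages)
        if reply.get("tool") in TOOLS:
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["content"]  # no tool requested: the model is done
    return "step limit reached"
```

The entire "agent" is the dispatch loop; everything else is the base model, which is the article's point.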

Comments Filter:
  • Can they explain what that actually means? It seems like a collection of gibberish, probably written by an "AI".
    • by laosland ( 55769 )

      From what I've read, it means that the agents are interchangeable.
      I've been using Cline.bot's VS Code extension with various agents and have found that Claude Sonnet has been most reliable. It is getting somewhat expensive but works reasonably well.

      • Can I ask what you use the agents for? For the life of me, I can't imagine giving any AI the ability to autonomously change code.

    • You can use any AI coding tool. It will work as a hyper-autocompletion agent. It will do all the basic stuff which is often useful. None of them have solved the problem that sometimes they will put out awful code and you will have to think. They are all basically the same.

      That means that there's no real benefit from going for a proprietary one and you should target the one which is most clear and open about how it works and where it gets its training data from.

      "Agentic" means "work as a software component t

      • Let it read a file on one side and output a file on the other and it works great.

        Reminds me of sed(1).

        Although I don't recommend the special case of reading one file on one side and outputting the same file on the other:)

    • by gweihir ( 88907 ) on Saturday July 05, 2025 @08:54AM (#65498836)

      It means "Must keep AI hype going! Must pretend it is the only true thing! Must make more money!".

      It can safely be ignored as total bullshit of the marketing variant.

    • Can they explain what that actually means? It seems like a collection of gibberish, probably written by an "AI".

      "AI coding" is a replacement for walking to a bookshelf and picking up your old textbook to look up a well-known, well-studied algorithm and see some sample code -- sample code written for brevity and clarity, that lacks a lot of the defensive coding that production code should have.

      It's one step beyond relying on an online community for the above, where one sacrifices convenience for the professional reviews and checks that went into the creation of the reference book. Settling for amateur reviews and chec

      • by narcc ( 412956 )

        The difference between "AI coding" and the other tools you mention is that those are accurate, consistent, reliable, and inexpensive. LLMs are not.

        That said, AI coding agents do provide a competitive advantage ... to the shops that have figured this out already and aren't using them.

      • In short, "AI coding" is not as mystical as it seems, doing little that prior sets of tools were not already doing. It's just more convenient, perhaps automating the use of numerous such existing tools. It still requires a skeptical review of the code and likely the addition of defensive code.

        Right. It's a tool.

        It's not the singularity, and it's also not "useless" like some energetic posters here want it to be. LLMs are just tools, which devs need to figure out how best to use for their use cases.

  • If you write "good enough" using agents, don't expect me to fix it for you when it goes horribly wrong.

    • Somebody is feeling threatened
    • by gweihir ( 88907 )

      Hahah, yes. And remember that "good enough" looks massively different when you take security, usability and maintainability into account. I guess the next few years will get interesting for some enterprises that depend on software and quite a few will drown in a mountain of AI generated technological debt and die. Stupid people doing stupid things.

    • by LinuxRulz ( 678500 ) on Saturday July 05, 2025 @09:02AM (#65498852)

      The point they all miss is that writing code that works was never the problem. Any junior dev can do it.
      Software engineering was always about balancing tradeoffs, figuring out integration points, ensuring long-term maintainability, structuring for release and deployment, aligning design with the roadmap, communication and collaboration, etc.

      Maybe an AI can eventually get there, but your prompt will be way bigger than the code. I'd rather write the code.
      For the rest, we already had cookiecutters and snippets.

      • You're absolutely right. To get anywhere near a functional result with AI, you need to be so specific in describing what you want it to produce that it undermines the whole undertaking. If you allow for any degree of ambiguity, AI will do all it can to satisfy your requirements at the cost of anything else you've left unspoken -- things that human beings take for granted but whose importance computers are unable to comprehend.

        • by gweihir ( 88907 )

          Exactly. And then it will tell you that the "optimized" functionality it just produced is what you actually want.

      • by gweihir ( 88907 )

        Indeed. This whole "AI coding" hype comes from people who are lying or who have never written code in the context of a non-toy project.

  • by ffkom ( 3519199 ) on Saturday July 05, 2025 @08:41AM (#65498810)
    Working for a company that is almost obsessed with its attempts to utilize "coding agents", I have tried time and again to delegate mundane sub-tasks to such agents -- for easy things like reading configuration files of a given format. And the results I got, up until today, are so awful that even in the rare cases when they were not simply defunct, I ended up rewriting the code so as not to be the one signing a commit of inefficient slop code into the repository.

    I can only imagine what terrible code must be the norm in the places where the "coding agents" available today are considered "good enough".
    • by gweihir ( 88907 ) on Saturday July 05, 2025 @08:51AM (#65498830)

      You are not the only one. I am beginning to think that _everybody_ reporting great successes in this space is lying, deep in delusion, or only doing very simplistic code (and struggling to do even that by themselves).

      • The thing that AI lets them get around is bad management. They suddenly don't have to have a meeting about every little aspect of every little feature. They can just get AI to generate them some slop. For these types of management traps, it's going to lead to seriously accelerated production. But it's also going to lead to a lot of production failures, and then a return to slow processes.

      • by Tom ( 822 )

        It's like visual coding or RAD all over again. Whenever suits and PHBs are told there's a magic wand that'll allow them to do without paying people for the nitty-gritty bits, they get all excited and convince each other in their echo chamber that their dream of a company of all managers and no workers is just around the corner.

        Then reality says "hi", the hype dies down, a few scam artists got rich and the world continues as it was, with a couple new cool tools in the toolbox of those who know how to use the

    • On one hand, I agree with you that the quality is very poor. On the other, I do find AI tools (specifically, GitHub Copilot) useful and time-saving. I almost always have to fix what it generates, but it still saves me time.

      Some specific areas of success include:
      - SQL commands that manipulate XML or JSON database blobs
      - XPATH or JSONPATH generation
      - Converting jQuery ajax to async/await with fetch
      - Generating unit test skeletons

      In each of these cases, you do have to know what you're doing, and you have to be able to know when it's right. But the things it does, do save me time and research effort.
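As an illustration of the XPath item on that list, this is the kind of one-liner query such tools are asked to generate, shown here against an invented XML blob using Python's standard library:

```python
# Illustrative only: the sort of XPath extraction a tool like Copilot gets
# asked to write. The XML blob and element names are made up for this example.
import xml.etree.ElementTree as ET

blob = """
<order id="42">
  <items>
    <item sku="A1"><qty>2</qty></item>
    <item sku="B7"><qty>5</qty></item>
  </items>
</order>
"""

root = ET.fromstring(blob)
# XPath subset supported by ElementTree: every <qty> under any <item>
quantities = [int(q.text) for q in root.findall(".//item/qty")]
```

A query like this is trivial to verify by eye once generated, which is exactly the "you have to be able to know when it's right" caveat above.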

      • In each of these cases, you do have to know what you're doing, and you have to be able to know when it's right. But the things it does, do save me time and research effort.

        Likewise.

        They are just tools. Each user has to learn how to use them for their own use cases. They are neither panaceas nor useless.

    • It does not matter that the AI coding agent saves no time once you review and fix its code, add the omitted defensive coding, etc.

      What matters is that you can label that block of code as AI generated and look good in the performance metrics of management. That the company can tell wall street it has met its goal of X% AI generated code.

      It's another spin on Wally coding up his minivan. In the end, we typically code to match what management rewards. That's the ugly truth. We don't necessarily do
    • Working for a company that is almost obsessed with its attempts to utilize "coding agents", I have tried time and again to delegate mundane sub-tasks to such agents -- for easy things like reading configuration files of a given format. And the results I got, up until today, are so awful that even in the rare cases when they were not simply defunct, I ended up rewriting the code so as not to be the one signing a commit of inefficient slop code into the repository.

      I can only imagine what terrible code must be the norm in the places where the "coding agents" available today are considered "good enough".

      Even industry cheerleaders admit agents are producing substantial increases in codebase size.

  • by gweihir ( 88907 ) on Saturday July 05, 2025 @08:48AM (#65498824)

    This is just more assholes portraying things they profit from as "inevitable", trying to keep the hype going and delaying the point where enough people realize that LLMs are not the revolution they are advertised as. You know, like in _all_ other AI hypes before.

    And "just" better base models? Good luck with that. It is like saying space travel "just" needs a cheap and reliable FTL engine and we are all set. True, but meaningless.

  • by DewDude ( 537374 ) on Saturday July 05, 2025 @08:55AM (#65498838) Homepage

    This all fails. Everyone gets replaced with AI, no one has a job, no one has money to support companies.

    It's just they get to screw us first.

    We need an AI ban. This is not going to be a good thing for society. It already isn't. People are going to die because of bullshit decisions made by AI...likely already have. When the black box is making the decision and no one lets you look in the black box....is there really a black box?

    We're all fucked. Congrats. It's only going to get worse before it gets better.

    • Your prediction is like predictions from the 20th century, that calculators would destroy the field of mathematics, or that chess computers would destroy the game of chess. Or more recently, that Google Maps would destroy people's ability to navigate, or that Waymo would obliterate Uber and taxi services. All these technologies impacted the various fields in significant ways, but did not destroy them.

      As a daily user of AI, I know that we are a LONG way from being displaced by AI. And even to the extent that

      • by DewDude ( 537374 )

        There are a lot of industries it will take over before it's ready. Customer service for example. Every call center operator is frothing at the mouth to fire every single phone agent they've got. "People are a liability. They cost too much and they're not perfect enough." is the mentality I hear from the executives...the people that cut the checks.

        Customer service already sucks across the board for most corporations. They will not be hurt if they put broken AI in place of people. It already sucks. They will

        • Customer service is an example of an industry that is *ripe* for AI takeover. I think AI would be an improvement over today's call trees and human script readers. In fact, last week I called an air conditioner contractor and was greeted by an AI assistant. I know it wasn't just another call tree because it was able to ask questions and respond appropriately, well beyond the capabilities of the best call trees. It actually did the job of a receptionist, and did it well.

          On Google Maps, people couldn't figure

        • by narcc ( 412956 )

          If an executive can code an app talking to AI...he won't hire programmers.

          It's not like we haven't seen that countless times over the years [wikipedia.org]. It always ends the same way. It turns out that specifying the kind of program you want in sufficient detail for a computer to generate it is just programming.

          If a call center can handle all of its calls with a couple of AI bots...they will fire the 1200 employees they have

          Sure ... except that the bots are both expensive and can't actually do the job.

    • by gweihir ( 88907 )

      Naa, don't worry about it. Like _all_ previous AI hypes, this one will fizzle out and only smaller things will remain. Most jobs _cannot_ be replaced by AI. Where it is possible, you typically need higher-qualification people to guide and monitor that AI, and these people are rare. The only reason the hype is even still going is greed, ego and stupidity. As in all the previous AI hypes.

  • I have no idea what the article is trying to tell us. Tl;dr for me is "people use ai agents". Ok and what?
  • The IDE I use offers 25 backend AI agents to pick from in a dropdown menu. All of them are either free or very cheap except for Claude Sonnet 4.0, which is reportedly the best but it burns through credits. All of them are probably operating at a loss.

    • by m00sh ( 2538182 )

      The IDE I use offers 25 backend AI agents to pick from in a dropdown menu. All of them are either free or very cheap except for Claude Sonnet 4.0, which is reportedly the best but it burns through credits. All of them are probably operating at a loss.

      The future will be locally run models.

      The only problem is that, to promote the cloud models, the hardware needed to run local models is being slow-walked.

      There is just so much consolidation and so many conflicts of interest that AI development is being done in a very investment-friendly way rather than the most efficient way.

      • >> The future will be locally run models

        Maybe so, but:

        - someone has to make money on those models, and you would probably have to buy a good one.
        - it may require special hardware to run them -- a good Nvidia GPU, for example.
        - the cloud models run on top-end hardware that is always being upgraded (and therefore may be better), and they are very cheap.

  • You need data to train them and as the internet becomes filled with slop it's going to become increasingly hard to get that data. The only people who are going to be able to get it are large platform holders that can monitor what their users do and tell the difference between AI slop and real users because they can do heavy duty fingerprinting.

    AI is a technology that is going to very quickly belong to a handful of super wealthy companies. The AI's dependency on training data guarantees market consolidation.
    • by gweihir ( 88907 )

      I think your outlook is too positive. My take is that it has already become mostly impossible to get new general training data sets. (Specialized ones you still can, but at huge cost.) Hence the training data is now slowly aging into irrelevance.

  • As usual.

  • That's how I see AI. I've been writing software for the better part of 40 years. What I see from AI is sometimes astonishing and sometimes pathetic. I would never, ever, ever put AI-generated code into production software without careful checking and refactoring, and I would fire anyone who does.

    Code completion is mostly in the "astonishing" part. If I write a couple lines of near-identical stuff, like assigning values from an input to a structured format for processing, the AI most of the time gets right
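The near-identical assignment pattern described above might look like the following made-up Python example, where after the first field or two a completion model can usually fill in the rest:

```python
# Hypothetical example of the repetitive mapping code that completion
# handles well: copying raw input fields into a structured record.
from dataclasses import dataclass

@dataclass
class Order:
    customer: str
    sku: str
    qty: int
    price_cents: int

def parse_order(raw: dict) -> Order:
    # Each line has the same shape; completion extrapolates the pattern
    # from the first one or two assignments.
    return Order(
        customer=str(raw["customer"]),
        sku=str(raw["sku"]),
        qty=int(raw["qty"]),
        price_cents=int(raw["price_cents"]),
    )
```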
