
What Happens If AI Coding Keeps Improving? (fastcompany.com)

Fast Company's "AI Decoded" newsletter makes the case that the first "killer app" for generative AI... is coding. Tools like Cursor and Windsurf can now complete software projects with minimal input or oversight from human engineers... Naveen Rao, chief AI officer at Databricks, estimates that coding accounts for half of all large language model usage today. A 2024 GitHub survey found that over 97% of developers have used AI coding tools at work, with 30% to 40% of organizations actively encouraging their adoption.... Microsoft CEO Satya Nadella recently said AI now writes up to 30% of the company's code. Google CEO Sundar Pichai echoed that sentiment, noting more than 30% of new code at Google is AI-generated.

The soaring valuations of AI coding startups underscore the momentum. Anysphere's Cursor just raised $900 million at a $9 billion valuation — up from $2.5 billion earlier this year. Meanwhile, OpenAI acquired Windsurf (formerly Codeium) for $3 billion. And the tools are improving fast. OpenAI's chief product officer, Kevin Weil, explained in a recent interview that just five months ago, the company's best model ranked around one-millionth on a well-known benchmark for competitive coders — not great, but still in the top two or three percentile. Today, OpenAI's top model, o3, ranks as the 175th best competitive coder in the world on that same test. The rapid leap in performance suggests an AI coding assistant could soon claim the number-one spot. "Forever after that point computers will be better than humans at writing code," he said...

Google DeepMind research scientist Nikolay Savinov said in a recent interview that AI coding tools will soon support 10 million-token context windows — and eventually, 100 million. With that kind of memory, an AI tool could absorb vast amounts of human instruction and even analyze an entire company's existing codebase for guidance on how to build and optimize new systems. "I imagine that we will very soon get to superhuman coding AI systems that will be totally unrivaled, the new tool for every coder in the world," Savinov said.


Comments Filter:
  • Seems like the logical pathway, so I guess that will not happen.
    • Six months of COBRA and free career counseling for a year if they are being generous.
    • by gweihir ( 88907 )

      Well, for this and a lot of other reasons, there will be nations with a UBI you can live off reasonably well, and quite a few measures that give people who want it something sensible to do with their time. And then there will be the civil-war areas where everything has gone to hell.

      • by hwstar ( 35834 )

        Regarding UBI: You can't get there from here.

        My prediction is that there will be resource wars, given that every nation feels the need to get there first, before everyone else.

        That only leads to one outcome: a significant reduction in the human population of the planet.

        After the resource wars, if there is a small contingent of humans left, maybe then there will be UBI (if we can manage the warlords).

        I'm afraid it has to be all torn down to the ground before UBI can even be considered.

        • by gweihir ( 88907 )

          Regarding UBI: You can't get there from here.

          Nonsense. That just shows you have not bothered to do even minimal research and are hence bereft of actual understanding.

          • This discussion might be more interesting for the rest of us if both of you gave some actual reasoning. UBI + flat tax seems totally logical to me. It achieves exactly what the combination of simple survival benefits, tax-free earning allowances, and somewhat progressive income taxation that most countries end up with is trying to achieve, just in a form that is much simpler and clearer to administer.

            That says to me that UBI will be very difficult to achieve because the majority of politicians, including those that claim to be left wing, are actually pretty far right and don't want simplicity and efficiency to interfere with their ability to complain about "big government".

            • by gweihir ( 88907 )

              Well, the situation is that work is running out due to productivity increases. Hence "work" as a mechanism for distributing the wealth of society is not working anymore.

              The other situation is that in many countries, benefits and aid programs waste huge amounts of money. Hence a livable UBI can be financed if all other aid is removed (this has been calculated in several countries, e.g. Switzerland, and is factually accurate in most places in Europe; the US may be fucked, though...).

              So, clearly a UBI is both

            • That says to me that UBI will be very difficult to achieve because the majority of politicians, including those that claim to be left wing, are actually pretty far right and don't want simplicity and efficiency to interfere with their ability to complain about "big government".

              They can be both left and right, you know.
              There is more than one left, and more than one right. [slashdot.org]

    • Keep in mind UBI is worthless without a whole host of other programs and protections.

      Otherwise the 1% buy out all the capital and production capacity, and they can just use monopolies to raise prices and suck your UBI money right out from under you.

      So far the only people with any leverage proposing UBI as a solution have been right wingers who want to use it as a way to eliminate all other regulations and social programs. The idea is you get your UBI and you shut the fuck up because hey, we're giving you free money, what's wrong with you? It's there to absolve them and the rest of the community from doing anything else to maintain a proper civilization.

      Realistically we need fully automated space communism.
      • by Anonymous Coward

        Realistically we need fully automated space communism.

        That's the most hilarious sentence I've read in a while.

      • Re:How and why? (Score:5, Insightful)

        by ArmoredDragon ( 3450605 ) on Sunday May 11, 2025 @01:39PM (#65368837)

        Keep in mind UBI is worthless without a whole host of other programs and protections.

        And you base this on what? Oh, that's right, nothing, because you're just speculating, because no such institution has ever existed for you to make any kind of measurement against.

        So far the only people with any leverage proposing UBI as a solution have been right wingers who want to use it as a way to eliminate all other regulations and social programs. The idea is you get your UBI and you shut the fuck up because hey, we're giving you free money, what's wrong with you? It's there to absolve them and the rest of the community from doing anything else to maintain a proper civilization.

        Like who? And you say they're right wing based on what? And what does that even mean?

        Realistically we need fully automated space communism.

        You know, the interesting thing about communism is that the whole idea was conceived by two men who came up with all of these little intricate details about how it would work and made all kinds of predictions about what exactly would happen once the "revolution" began. And you know what? Over the next 150 years, multiple "revolutions" began, and none of them worked out at all how those two prescribed. The whole system turned out to be a recipe for dictators to seize control of what in many cases were democratic regimes and turn their states into kleptocracies, which happened every single time without exception.

        You know why? Because just like you, these guys had it in their head that everything they were speculating about would be without-a-doubt accurate, even though in the end it was nothing like how they said it would be, even when their prescription was followed.

        Just like what you're doing here.

      • I can see UBI being easily nullified by rent increases. Say people get $5000 a month, and it goes up by 10% a year. Rents just go up exponentially from there, especially given that it is profitable to buy a property and keep it 100% vacant: the more properties held off the market, the higher real-estate prices go.

        • This is why universal basic goods and services beat universal basic income: UBI advertises a price floor to highly uncompetitive markets. Give a person an apartment instead of UBI to pay for an apartment, and the resident no longer has to worry about rent hikes or mortgage-rate increases.

          This could also begin to expose the fact that land ownership was one of humanity's greatest mistakes. Most of them involved allowing people to outright own things they shouldn't have been able to - land, other people, t

          • Yes, let's just get rid of private property and turn everything into a giant commons that no one has any motivation to maintain. Should work great.
            • The commons system did work well. People worked together to maintain the commons. Only corporations would create a tragedy of the commons by exploiting them into ruin.

            • In countries that don't have the "conquering" mindset, commons work perfectly fine. Farmer Joe knows that if he overgrazes the common area, it affects not just everyone else, but him as well. However, once a country goes from a high-trust level to a low-trust "take if not nailed down" mentality, things that can benefit everyone in a village wind up having to be shut down. The US used to have an extensive... EXTENSIVE (to use caps) rest stop system, on highways, side roads, almost everywhere. Some places even

        • by djinn6 ( 1868030 )

          You can get a couple friends to go to the middle of nowhere and buy a few acres for very low prices. Combined, your UBI payments will easily cover the mortgage.

          especially with the fact that it is profitable to buy a property and keep it 100% vacant

          No, property prices keep increasing because people need to live close to work. Remove that need and people will move elsewhere, causing lower property prices. The pandemic perfectly demonstrated this effect.

      • None of this is based in reality.
    • The possible uses for human work are limitless. There is no reason to think there will no longer be anything for anyone to do. If basic necessities require no human input they will be all but free. Humans will do other things that still have value to them. Value is a subjective judgment. You can make a million reproductions of a Picasso, but the original still has value even to all those humans that can't tell the difference between the original and the copy.
    • UBI is a fantasy. Resources are not free. Energy is not free. You will never get free money. And especially not computah weirdos. Maybe you're not aware of it, but workers universally despise nerds, because they believe (with some degree of accuracy) that they are responsible for the loss of jobs and for the outsourcing and automation that has eroded most people's ability to make ends meet. Expect no sympathy from the common man. In fact, advertising that you're a coder may shorten your life.
    • Somebody's always going to bring this up, every time there's a story about AI "taking jobs" sometime in the future.

      Guess what, technology has been taking jobs for centuries. Just 50 years ago, 30% of US workers worked in factories, now less than 5%. Just 100 years ago, 70% of US workers were farmers, now less than 5%. And yet, somehow we are at roughly 4% unemployment. Magic, huh!

      This so-called apocalypse of job loss is a long ways from reality. Sure, we can imagine it, but reality always turns out to be mo

  • AI quickly fucks off when designing an advanced frontend UX/UI. It is great for backend/API though, where things can be more granular.

    • by rsilvergun ( 571051 ) on Sunday May 11, 2025 @12:37PM (#65368711)
      Dude, the entire premise of this discussion is that AI will no longer suck for the front end or anything else.

      You're just putting your head in the sand. You could make the argument that it'll never get there, but honestly, with the leaps and bounds we've seen in the last year alone, that's a pretty weak argument.
      • by leptons ( 891340 ) on Sunday May 11, 2025 @01:30PM (#65368813)
        Nothing I've seen from the AI industry makes me worry about my programming job. LLMs are insufficient in their basic design to replace a human. They produce just as much garbage as they do working code, and that's not going to produce working software systems that can be maintained. The basic premise of LLMs is the problem, and no amount of training will fix it. It will always "hallucinate" and lie. Would you hire a human programmer that you knew was high on drugs and lied about writing working code half the time? Maybe you would, I don't know you, but I certainly would not.
        • How economies work.

          Let's say productivity in programming goes up across the board by 20%. That means 20% fewer programmers, because we don't enforce antitrust law, so there is little or no competition and companies don't have to worry about startups anymore.

          So now you've got several hundred thousand programmers gunning for your job. They are facing homelessness and starvation if they don't pull it off, so they really put their nose to the grindstone. Maybe a quarter of them can reach your level.

          So c
          • Historically and economically, it is far from certain that your hypothetical 20% increase in productivity would actually result in a proportionate decrease in employment. Indeed, the opposite effect is sometimes observed. Increased efficiency makes each employee more productive/valuable, which in turn makes newer and harder problems cost-effective to solve.

            Personally, I question whether any AI coding experiment I have yet performed myself resulted in as much as a 20% productivity gain anyway. I have seen pl

            • Historically and economically, it is far from certain that your hypothetical 20% increase in productivity would actually result in a proportionate decrease in employment.

              Past performance blah blah. They're already mass firing with a vengeance.

      • Yes, that's the premise. But reality doesn't support the premise. Most of the time, AI code doesn't even compile; you have to fix nearly everything it writes. Stupid stuff like adding some code in SQL between BEGIN and END, and it puts in a second END. Even a junior developer wouldn't do that. Get into more complex stuff like JavaScript mixed with HTML, and you get all kinds of garbage. Helpful, yes, but ready to be unsupervised? Hardly. Not even close.

    • We said the same thing about AI when image models made images that were absolutely twisted and surreal. That got better.

      One of the milestones will be if AI can beat hand-tuned assembly in specific applications like embedded work. If AI can move optimizing compilers to the next level, producing output that is as good as possible, then this will be a major thing. Maybe even AI taking algorithms and figuring out better algorithms that do the same thing, like replacing a basic sort with a quicksort.
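      To make the kind of algorithmic upgrade described above concrete, here is a minimal hypothetical Python sketch (not output from any actual tool): a naive O(n^2) sort is swapped for an O(n log n) quicksort, with the replacement spot-checked for identical behavior.

          import random

          def insertion_sort(xs):
              # Baseline O(n^2) sort: the kind of naive code an optimizer might target.
              out = list(xs)
              for i in range(1, len(out)):
                  j = i
                  while j > 0 and out[j - 1] > out[j]:
                      out[j - 1], out[j] = out[j], out[j - 1]
                      j -= 1
              return out

          def quicksort(xs):
              # Drop-in O(n log n) replacement with identical observable behavior.
              if len(xs) <= 1:
                  return list(xs)
              pivot = xs[len(xs) // 2]
              return (quicksort([x for x in xs if x < pivot])
                      + [x for x in xs if x == pivot]
                      + quicksort([x for x in xs if x > pivot]))

          # A real rewriting tool would have to prove equivalence; here we just spot-check.
          for _ in range(100):
              data = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
              assert insertion_sort(data) == quicksort(data) == sorted(data)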

    • I've been playing with Lovable / Bolt a lot lately. The designs they make are... nice. Nicer than what I would have made (I'm not a designer) but they always follow the same template. For a basic website there's no way I wouldn't start with AI and I expect it to just get better and better. It won't replace coders overnight but I will be surprised if humans are writing any code at all in 10 years.
  • by simlox ( 6576120 ) on Sunday May 11, 2025 @11:53AM (#65368603)
    It is good for standard solutions, which it has learned from textbooks, tutorials and open source. It is bad for unknown solutions. E.g., as a beginner at, say, cloud programming, it is good for making deployment templates etc. But once you have the framework up and running, it gets less and less useful.
    • by gweihir ( 88907 )

      Obviously so. Also note that standardized repetitive stuff gets coded into a library sooner or later and nobody will have to write it again.

      • by Junta ( 36770 )

        This is the thing I get worried about. For human endeavors we bother to organize code into common, maintained libraries. But what if the LLM can just essentially copy-paste the "best of breed" code? What does actually maintaining that code look like? When humans copy-paste like that, you get massive amounts of code that no one knows may have fallen behind.

        Colleagues have spoken of how it tends to reproduce ancient and inappropriate JavaScript suggestions, which is sort of like what Stack Overflow does: answers that made se

        • by gweihir ( 88907 )

          I am not worried. Libraries are much more than just code. They need a real understanding of the problem and what its parameters should be. LLMs cannot do that. And hence LLMs cannot identify the best approach to something. Library designers can. And it is a process that can take a long time. The results may be crap (example: Windows kernel API) or exceptionally excellent (example: Linux or xBSD kernel API).

    • by ebonum ( 830686 )

      Repetitive stuff is not new. I've used Lex and Yacc to write code that writes code, and Perl, and bits of Java and Excel...
      The lazy programmer always sees a lazy way!
      Funny thing: I actually understood every line, and I could explain it all to anyone who asked.
      BTW, this used to be the book.
      https://en.wikipedia.org/wiki/... [wikipedia.org]
      Not sure if this is what everyone reads today.
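      In the spirit of the parent's Lex/Yacc "code that writes code", here is a tiny Python sketch of the same idea (the spec and field names are invented for illustration): generate source text from a spec, print it so a human can still read every line, then exec it.

          # Minimal "code that writes code": generate validator functions from a spec,
          # then compile them with exec(). Toy sketch; the spec is made up.
          SPEC = {"name": str, "age": int, "email": str}

          TEMPLATE = (
              "def validate_{field}(value):\n"
              "    if not isinstance(value, {typ}):\n"
              "        raise TypeError('{field} must be {typ}')\n"
              "    return value\n"
          )

          generated = "\n".join(
              TEMPLATE.format(field=field, typ=typ.__name__) for field, typ in SPEC.items()
          )
          print(generated)            # the lazy programmer can still read every line
          namespace = {}
          exec(generated, namespace)  # turn the generated source into real callables

          namespace["validate_age"](42)       # fine
          # namespace["validate_age"]("42")   # would raise TypeError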

      • This used to be the book

        I bought a copy last year to help me with my embedded programming, but I'm pretty sure it's not what "everyone" reads anymore.

    • It used to be that to "cure" traffic, one built more freeways. However, it was learned that new freeways resulted in new building construction such that those freeways soon filled up and were just as jammed as before.

      So what I expect to happen is that software will become even more bloated, because more devs will use AI to throw code and features at a problem instead of practicing the patience-requiring art of parsimony. It may code up faster, but it is likely to be a maintenance spaghetti bowl.

      • Isn't it likely AI will be able to untangle the spaghetti? It would seem to be ideally suited for that. All the information needed to untangle it is right there in the code. The whole idea of "spaghetti" code is that programmers can't easily understand what it does. I am not sure that is problem for AI.
        • by Malenfrant ( 781088 ) on Sunday May 11, 2025 @02:02PM (#65368877)

          'The whole idea of "spaghetti" code is that programmers can't easily understand what it does. I am not sure that is problem for AI.'

          That is in fact a major problem for 'AI', because it doesn't understand anything. It is not parsing the code, understanding how it works, and then working out how to add new features. It's looking at how programmers have solved a problem in the past and copying that. By its nature, it will make code more spaghetti-like, never less. It's also (at least so far) shown itself unable to understand concepts such as scope, and how a new feature may interact with already existing features. Again, this lack of understanding will lead to more spaghetti code.

          An aspect that will make this problem even worse is the poor quality of the bulk of code out there. Projects that have been carefully considered, designed and optimised from first principles are extremely rare. Much more common is poorly designed and written code that's been fixed after publication, usually in a hurry as critical bugs were discovered. And since memory and storage got cheaper, this problem has only got worse. Spaghetti code is the norm in this industry, and so this is what the 'AI's are mostly trained on. Expecting them to write better than the average human programmer shows a complete misunderstanding of how these things work.

          • That is in fact a major problem for 'AI' because it doesn't understand anything. It is not parsing the code, understanding how it works and then working out how to add new features. It's looking at how programmers have solved a problem in the past and copying that.

            Are those different things? I agree AI "understanding" is anthropomorphizing the process. But I would think it can, at least theoretically, parse the code, determine what it does, and then compare it to a world full of other code that solved the same problem in the past. Including code that is used to add similar features.

            But you missed the important point. Spaghetti code implies code that follows a path that can't be easily followed and understood by humans. I see no reason to think AI is likely to produce

          • It is not parsing the code, understanding how it works and then working out how to add new features.

            Have you ... used it?

            I can feed ChatGPT some code, and it does parse it.

            Frequently - more and more frequently - it does add new features correctly, per my requests.

            Does it "understand"? Probably not, but we don't even really know what that means with humans a lot of time.

        • I am not sure that is problem for AI.
          It is no problem.

          At least not in general.

      • We have already seen this, especially as machine capacity and memory expand. Microsoft Word used to have about 95-99% of the features it does now... and used to fit on a single floppy.

    • Ask me how often I, as a professional coder, solve something I've never solved before. The answer is almost never. And when I do, I ask Claude/Gemini to get me started.
  • If you lay off all the people with expertise in programming, you're left with management, so I'm guessing that MBAs are going to start vibe coding.
    I've never met a non-engineer/non-programmer who understands what testing is, or why you need it.
    I'm fairly sure that testing will not be very rigorous.
    • by gweihir ( 88907 )

      And then you add IT security, and the little fact that, for example, North Korea invests a lot into training competent hackers. Anybody that does real coding work without quite competent (and hence expensive) coders is committing strategic suicide.

      • ha ha! good point!
        Also, btw, I've noticed that along with testing, the entire concept of "strategy" has been hand-waved away in favour of convenience... guessing that is MBAs again, using the "why build what Microsoft and CrowdStrike can provide us for a low low monthly fee" line... plus then there's no need for anyone to audit or even perform network security... think of the money we'll save!!

        Yeah, the future looks effing depressing... and scary.
  • by Cyberpunk Reality ( 4231325 ) on Sunday May 11, 2025 @11:58AM (#65368617)
    With numbers like "raised $900 million at a $9 billion valuation — up from $2.5 billion earlier this year. Meanwhile, OpenAI acquired Windsurf (formerly Codeium) for $3 billion", it seems clear that 'AI coding continues to improve' is the baseline assumption. What will happen to all those valuations (and whatever schemes are built around them) if AI coding doesn't keep improving?
    • by gweihir ( 88907 )

      Indeed. Hype is not an indicator that something will become possible. Remember flying cars, or the "home robots for everybody" craze from about 40 years back? In the "AI" area, only very little of the promised results ever materializes, and some of it only does so multiple decades later.

    • by ET3D ( 1169851 )

      > if AI coding doesn't keep improving?

      While nothing is guaranteed, I think it's reasonable to assume that it will continue to improve. Obviously AI companies have learned a lot, as evidenced by improvements to this point, and coding AI is the area that should be easiest of all AI to get right, as the results can be tested and fed back. The AI should be able to learn to optimise, and to detect and eliminate security hazards, largely automatically.

      The only thing that can't be learned automatically is "feel", and i

    • It's kind of a weird pattern I've noticed: whenever companies start getting bought in a particular field, it usually causes stagnation in advancement. The company buying the tech has no idea how to advance it further, and the original founders don't care anymore.

      • Interesting comment... market consolidation usually occurs when the product becomes a commodity... once everyone is selling approximately the same product for the same price, it becomes a bloodbath as profit margins approach zero... so then the bigger companies buy the smaller ones, and you usually end up with 3 companies left: the undisputed leader, 1 fairly competitive challenger, and a dark horse. You probably recall hard drives had a dozen+ manufacturers in the '90s, then everyone either went out of bus
    • I'm sort of amazed that companies like Microsoft and Google have 30% of their codebase written by AI.

      It's pretty much guaranteed that they'll have been training on GPL2 and GPL3 code, and it's going to be really hard for someone to tell if output is copied from copyrighted sources.

      Yesterday I needed to add lseek support to a FUSE driver. I asked ChatGPT for a basic example, and it was completely obvious that it had lifted it (maybe modified) from an open source driver.
      (It came from here: https://lists.nongnu.org/archi.. [nongnu.org])

  • It will not (Score:5, Insightful)

    by gweihir ( 88907 ) on Sunday May 11, 2025 @12:04PM (#65368631)

    It already peaked quite a while ago. At this point there is simply no more code to train it on, after the whole Internet's content got stolen for that purpose. As LLMs have no insight and no understanding, what they can do is strongly limited by their training data. In theory, that training data could be cleaned up and manually extended, but the effort will be prohibitive beyond a few cosmetic efforts to fool benchmarks (and the fools that believe in them).

    And there is a second problem: if AI "coding" is used more and more, even less human-made training data will be available, and the training data will get worse due to model collapse. Hence no, AI "coders" will not get better, and no, this is not sustainable. No longer training and hiring real coders will likely be a bad strategic mistake, as it means far fewer will be available when the illusion that "AI can code" finally collapses.

    • Yup, this right here.
      Except, also kinda not.
      The LLM component won't be getting better for a while. We've run out of data to chew on, and without new data, very little will change with this element.

      What is changing is the sanity checking part (yaknow, by putting Yet Another LLM into the loop), and the using external tools part. And that second one is not to be underestimated.

      Because right now, an LLM can respond to requests by using the correct tool for the job, rather than trying to do the job itself.
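      For what "using the correct tool for the job" can look like in practice, here is a minimal hypothetical Python sketch (the model call is stubbed; no real LLM API is implied): the model only chooses a tool and arguments, and ordinary deterministic software does the work.

          import json

          TOOLS = {
              # Each tool is ordinary, deterministic software the model can delegate to.
              "calc":    lambda args: eval(args["expr"], {"__builtins__": {}}),  # toy only
              "reverse": lambda args: args["text"][::-1],
          }

          def model_decide(request: str) -> str:
              # Stand-in for an LLM that answers with a JSON tool call.
              return json.dumps({"tool": "calc", "args": {"expr": "17 * 23"}})

          def handle(request: str):
              decision = json.loads(model_decide(request))
              tool = TOOLS[decision["tool"]]   # dispatch to real software...
              return tool(decision["args"])    # ...rather than letting the LLM guess

          print(handle("what is 17 * 23?"))    # -> 391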
      • by gweihir ( 88907 )

        Well, maybe. Or maybe not. I personally think these are empty promises designed to keep the hype going a bit longer, because the hype is so profitable and, for example, OpenAI will go straight into bankruptcy once the hype stops.

      • Following your RAG analogy: a PHB can do everything himself provided you give him a shed full of tools. No need to hire employees with domain-level experience.
  • Even better, a filter to block all stories quoting "luminaries" who happen to hold immense stock in companies dependent on their "forecasts" one day becoming reality.

    Asking for a friend.

  • by nicolaiplum ( 169077 ) on Sunday May 11, 2025 @12:07PM (#65368639)

    Being good at coding is not being good at design and analysis, particularly of unknown problems - and almost all business-specific problems are "unknown" to outsiders. An LLM can only repeat things it has seen before - and it may have seen many things, some of which it remembers, but the possible range of combinations of problems in any particular business is so large that many of them have not been seen before.

    LLMs are very bad at those. They're also bad at understanding any complex existing code, so they're great for greenfield programming like coding challenges, and bad at analysing and modifying existing code.

    When you bear in mind that the greatest skill of a programmer is reading and understanding existing code to modify it, and that most programming is modification of existing systems to extend them and not de novo application design, the future for "AI Coding" isn't looking so great right now.

    Simple tasks will be replaced by AI, yes. Just like the simple tasks of "install operating system, edit configuration files" have been replaced by "run the platform configuration tool (Terraform/Puppet/Chef/Ansible/whatever)". That hasn't made SREs and Platform Engineers go away at all; it simply means they spend less time typing into text editors and command lines, and more time setting up management frameworks to do the management. LLMs are bad at that too.

    LLMs are particularly hilarious when it comes to anything related to security or other edge cases of reliability (security is an edge case - most users are not malicious, but you must defend against those that are, just as you must defend against the edge case of unlikely hardware failures, and so on).

    • by ET3D ( 1169851 ) on Sunday May 11, 2025 @01:05PM (#65368773)

      While there are some good points here, I don't think that any of them are unsolvable.

      AI is already used to detect security vulnerabilities. All you need is an AI as an adversary to the AI-written code, and that should solve that problem.

      > the greatest skill of a programmer is reading and understanding existing code to modify it

      It's true that a lot of what experienced programmers bring to a company is the memory of the code structure and how things are done. However, programmers are also notoriously bad at documentation. If an AI can program and document decently at the same time, that would go a long way towards later reuse.

      Alternately, AI can just rewrite things from scratch. If the AI writes enough unit tests, replacing existing structures with newly written AI code that does the same thing won't be too much trouble. In fact, it seems to me like a good way to move forward. One AI in charge of the system, another AI rewriting things whenever there's need for new functionality.

      I think that there's a lot of scope for moving into an AI programming world. I'm not sure what functions people will play there, and it will be interesting to find out, but AI in general should be able to do most things, IMO.
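      The "rewrite behind unit tests" idea above, as a minimal hypothetical Python sketch (both functions are invented examples): characterization tests pin the legacy behavior, and a regenerated implementation is only trusted because it passes the same pinned cases.

          def legacy_discount(price, qty):
              # Old spaghetti nobody wants to touch.
              if qty >= 10:
                  price = price - price * 0.1
              if price > 100:
                  price = price - 5
              return round(price, 2)

          def rewritten_discount(price, qty):
              # Candidate replacement (imagine it was AI-generated): clearer,
              # but only trusted because the pinned tests pass.
              discounted = price * 0.9 if qty >= 10 else price
              return round(discounted - 5 if discounted > 100 else discounted, 2)

          CASES = [(50, 1), (50, 10), (200, 1), (200, 10), (111.5, 10), (0, 0)]
          for price, qty in CASES:
              assert legacy_discount(price, qty) == rewritten_discount(price, qty), (price, qty)
          print("rewrite matches legacy on all pinned cases")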

      • I was thinking something similar to "all you need is an AI as an adversary to the AI-written code, and that should solve that problem".
        What if you used one AI to code and had a committee of AIs poke holes in it?
        A Generative Adversarial Network (GAN) idea, applied to coding?
        I think that's what you're saying.
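        A rough sketch of that committee-of-critics loop in Python, assuming a hypothetical ask_model() helper (deliberately left unimplemented; no real API is implied):

            # Generator/critic loop in the GAN-ish spirit described above.
            def ask_model(role: str, prompt: str) -> str:
                # Stand-in for whatever LLM provider you use; NOT a real library call.
                raise NotImplementedError("wire this to your LLM provider of choice")

            CRITICS = [
                "security reviewer: look for injection, auth, and input-validation holes",
                "maintainer: look for unclear names, dead code, and missing tests",
                "performance reviewer: look for accidental O(n^2) work and N+1 queries",
            ]

            def adversarial_codegen(task: str, max_rounds: int = 3) -> str:
                code = ask_model("coder", f"Write code for: {task}")
                for _ in range(max_rounds):
                    objections = [
                        ask_model(critic, f"Find concrete flaws in:\n{code}")
                        for critic in CRITICS
                    ]
                    if all(obj.strip().upper() == "LGTM" for obj in objections):
                        break  # the committee is satisfied
                    code = ask_model(
                        "coder",
                        f"Revise for: {task}\nCode:\n{code}\nObjections:\n" + "\n".join(objections),
                    )
                return code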
        • That's very similar to the approach we use.

          We've set up "opinionated" role-identities, and given them an IRC network to communicate across.
          So then, an agent with the "Engineer" role gets their output checked by at least two others, so that technical considerations are balanced against business use case analysis and ethical considerations. It's not quite an adversarial network, but we use phrases like "Politely but aggressively skeptical" for one of the agents, and describe another one like a child hell-b
          • Super interesting! That does make a certain amount of sense. A message that I'm getting from all over is that, like it or not, the AI concepts are gelling around personas. LLMs are essentially modeled around humans, in an ironic way. We keep projecting the idea that it's got to be "good", "ethical", various attributes that we as a whole do NOT agree on... So the discussion becomes one of point of view... uh oh... So the AIs, the LLMs, in the end HAVE to be biased by someone or some principle; it has to exist withi
    • I disagree with this "almost all business-specific problems are unknown to outsiders"... From 40+ years as a professional developer, I can tell you 99% of coding is the same problems solved over and over again. There is very little new about what a business needs to do with software. If anything.
      • >> There is very little new about what a business needs to do with software

        Very true. There are a limited number of tasks and problems that need solving in most businesses. The business app companies are well aware of this and are already transitioning everything to AI.

        “Either get on with it and participate, or start digging your own grave.”
        https://www.salesforceben.com/... [salesforceben.com]

    • You're just coping. The majority of programmers don't do design; they write functions. When those programmers lose their jobs they aren't going to just take a bullet to the head. They're going to bust their asses to move into the more complex design roles.

      Years ago I was goofing off in low-paying jobs, and then I got saddled with a kid, and before I knew it I needed more money. Never mind that inflation and price gouging were rapidly guzzling down my income, requiring more money.

      I wanted to goof off and rel
    • >> They're also bad at understanding any complex existing code

      I haven't found that to be the case at all. The AIs I've used can quickly scan through a codebase, figure out what it all attempts to do, and compose a readme that describes it all. AI is also good at commenting existing code, looking for things to refactor, and writing unit tests. It will do any kind of drudgery for you with no complaints.

  • Codebase (Score:4, Funny)

    by ebonum ( 830686 ) on Sunday May 11, 2025 @12:24PM (#65368679)

    You'll have a code base and not 1 employee who understands it.

    • You'll have a code base and not 1 employee who understands it.

      Depends on how much you anthropomorphize AI because the AI "employee" will certainly "understand" it. But how important is that? There are millions of companies using computers who have zero employees who understand the code for the programs they use. In fact, I suspect few programmers would be able to decipher the output of a compiler. But the computer "understands" it and we test its output against the desired results.

  • Coding tests in the interview process will become meaningless (they already were, but that's a different topic) and will most likely be dropped forever.

  • by Berkyjay ( 1225604 ) on Sunday May 11, 2025 @12:35PM (#65368707)

    Tools like Cursor and Windsurf can now complete software projects

    This is bullshit.

  • It won't, unless (and that's a really big unless) human coding keeps improving and the LLMs are carefully, iteratively updated to take advantage of the best written code there is. That's not happening, and is increasingly unlikely, IMO.

    Ditto for any and all other text-based "machine learning" systems -- unless someone literally starts over building LLMs from scratch, and rather than using any old garbage they scrape off the web or pirated books or whatever, actually train it on well written, well understoo

  • by peater ( 1422239 ) on Sunday May 11, 2025 @12:52PM (#65368739)
    This is a standard hype strategy. Start with a flawed premise, and before anybody has a chance to refute it, pose a question that assumes this premise to be true and ask people to discuss that instead. How about posing the basic premise: are LLMs capable of inventing innovative COMPLETE software solutions, or are they only capable of regurgitating existing well-known solutions to well-known problems (and even that limited by the context window)? The answer is quite obvious to everyone who has worked with LLMs. Everything else is noise.
    • by KILNA ( 536949 )

      This isn't boolean, however. It's not like the only two options are AI that is "perfect at complete software" versus useless AI. The most likely outcome, given LLMs' historic trajectory, is that they gradually get closer to perfect over time, where the number of human engineers required to make complete solutions approaches zero as time passes. What is the basis for the argument that the gains LLMs have made will not continue to push toward that end? I know the post here is particularly focu

      • >> it's rapidly and consistently moving toward complete solutions with less input from human engineers

        Most certainly true in my experience. The improvements over just the past 8 months have been very substantial. AI now writes about 90% of my code, though I generally give it small incremental coding tasks similar to what I would have had to write on my own. If I just write a 2-line comment describing what I'm about to implement it will frequently just complete it all, or a very near facsimile, and I c

  • by LindleyF ( 9395567 ) on Sunday May 11, 2025 @12:52PM (#65368741)
    And really bad for others. Think of the holodeck. You can make a massive, fully interactive, fully detailed environment in seconds. That's the end state of vibe coding. Early parallels are things like making a quick app or webpage for a limited purpose.

    But the Enterprise still has Jefferies tubes. You still need to know exactly where to go to replace that one damaged component. The ship itself isn't vibe-coded. Similarly, complex systems where exactness is critical, such as security or banking, can't afford to be AI-generated. They will still use ever-smarter autocomplete, and thus will still count towards the "AI generated" stats. But they'll be traditional software.
  • Context isn't enough for LLMs. They forget and hallucinate all over the place, even for stuff that's within the context window, making massive mistakes a human never would.
  • Let's face it--AI is theft-ware.
    • Perhaps, but I think it will also help free software. Just recently I was able to make an improvement to a project written in Go, a language I'd never used before. I couldn't have taken the time to do it before LLMs; now I'm willing to take a stab, knowing I'll have help.
    • >> AI is theft-ware

      In what way? You can figure that all of the open source code in existence was fed in as training data, plus all the coding textbooks. That isn't stealing.

    • No it isn't, unless you broaden the definition such that nearly 90% of the code you write is theftware. Someone showed you how to write a loop in Java, didn't they? Oh, did you see someone's code for a sort algorithm? Every time you write a loop, you're stealing that code template. If you have a sort in your program, unless you dreamt up the algorithm yourself, it's likely you stole that code.

  • by xack ( 5304745 ) on Sunday May 11, 2025 @01:05PM (#65368769)
    Improve Linux on the desktop to the point that Microsoft stops making Windows worse, and improve Inkscape and GIMP to the point that Adobe is forced to lower prices; then I will believe the AI hype.
    • by Alworx ( 885008 )

      yes please!

    • That's not going to happen, because it's mostly patents keeping Inkscape and GIMP from catching up to Adobe, coupled with interoperability issues.

      For Linux as a whole, gaming (especially online gaming) is a major issue, plus anything involving fancy hardware, like creating music or more advanced video work. I suppose if AI made it so easy for companies to write drivers that they just did it, it would be less of an issue. That won't solve the patent issues or the interoperability issues.
      • If AI code output is not patentable (it is already not copyrightable in the USA... patent is untested to the best of my knowledge), then eventually (17 years) we get out of the software patent game entirely.

  • The good would be if AI allows us to manage complexity and find tricky bugs, edge cases and other difficult problems. Also good would be making simple tasks simpler.
    The bad would be managers who hire cheap, incompetent people to use AI to create crappy, buggy code that they don't understand.
    I suspect that we will see a bit of both.
    Prepare for a tsunami of bugs.

  • Imagine if we had LLM coders in 1995. Would there be any reason to abandon Visual Basic and ActiveX? Would JavaScript and HTML even exist?

  • This is entirely possible at the rate things are going. My guess is that eventually the AIs will develop coding languages for themselves that are better suited to the way they operate than what we use today. It will become increasingly difficult for humans to understand the AI-generated code, which may be more similar to assembler or a byte code than to a high-level language.

    Our view of the software will consist of pseudocode blended in with a readme that describes what's going on. We will look at the work pr

    • by Junta ( 36770 )

      I'd think that an AI-specific target code, without a traditional intermediate programming language, would be both unneeded and a source of problems. It would also be challenging, as the AI would need some entry point to know how to wrangle a language, and it wouldn't have training data to consume to come up with one.

      As to your second point, I think there is a class of projects that this can work for, and much of that class is "stuff a programmer could churn out, but those are relatively less available so we settle for cli

      • Code completion is definitely a mixed bag, I agree, and a lot of the time just a distraction. But I have seen an entire good clause or two offered up on many occasions.

        I also agree that new and original human-developed code examples coming from traditional places would decline if AI begins to write most of the code. My prediction is that the AI assistants will learn from humans as we make use of their services. That will be the new training data. Whatever code we now cause to be generated using AI as the workho

  • by algaeman ( 600564 ) on Sunday May 11, 2025 @04:51PM (#65369147)

    Tools like Cursor and Windsurf can now complete software projects with minimal input or oversight from human engineers

    Citation needed, or even better, a case study. Lots of CEOs who have bet their companies' shirts on AI keep saying this, but I don't think there is a commercial product out there that is fundamentally written by AI. As others have mentioned, it is fine for writing repetitive, simple code that can save a developer time. It is not at all able to complete a project beyond the sort of thing that would be assigned in a first-year coding class.

  • Hot take: when AI really gets rolling, it won't just write code, it will analyze all the existing code out there and figure out what the designers of those coding languages were really trying to accomplish with all of those half-baked or almost-but-not-quite-optimal language features that went into each programming language, and use that to generate the language spec for The One True Programming Language That Finally Gets Everything Exactly Right.

    Or not. But it would be interesting to see it try; at the v

  • What the industry needs is not more speed; it's more care. This generative-AI revolution achieves the opposite.

  • What if AI writes bad code that's never caught, and AI keeps coding on top of that? It gets released into the wild, and what does it bring down? Just a thought.
