AI Programming

Will AI Bring 'the End of Computer Programming As We Know It'? (nytimes.com) 150

Long-time tech journalist Clive Thompson interviewed over 70 software developers at Google, Amazon, Microsoft and start-ups for a new article on AI-assisted programming. Its title?

"Coding After Coders: The End of Computer Programming as We Know It."

Published in the prestigious New York Times Magazine, the article even cites long-time programming guru Kent Beck saying LLMs got him going again and he's now finishing more projects than ever, calling AI's unpredictability "addictive, in a slot-machine way."

In fact, the article concludes "many Silicon Valley programmers are now barely programming. Instead, what they're doing is deeply, deeply weird..." Brennan-Burke chimed in: "You remember seeing the research that showed the more rude you were to models, the better they performed?" They chuckled. Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots... For decades, being a software developer meant mastering coding languages, but now a language technology itself is upending the very nature of the job... A coder is now more like an architect than a construction worker... Several programmers told me they felt a bit like Steve Jobs, who famously had his staffers churn out prototypes so he could handle lots of them and settle on what felt right. The work of a developer is now more judging than creating...

If you want to put a number on how much more productive A.I. is making the programmers at mature tech firms like Google, it's 10 percent, Sundar Pichai, Google's chief executive, has said. That's the bump that Google has seen in "engineering velocity" — how much faster its more than 100,000 software developers are able to work. And that 10 percent is the average inside the company, Ryan Salva, a senior director of product at the company, told me. Some work, like writing a simple test, is now tens of times faster. Major changes are slower. At the start-ups whose founders I spoke to, closer to 100 percent of their code is being written by A.I., but at Google it is not quite 50 percent.

The article cites a senior principal engineer at Amazon who says "Things I've always wanted to do now only take a six-minute conversation and a 'Go do that.'" Another programmer described their army of Claude agents as "an alien intelligence that we're learning to work with." Although "A.I. being A.I., things occasionally go haywire," the article acknowledges — and after relying on AI, "Some new developers told me they can feel their skills weakening."

Still, "I was surprised by how many software developers told me they were happy to no longer write code by hand. Most said they still feel the jolt of success, even with A.I. writing the lines... " A few programmers did say that they lamented the demise of hand-crafting their work. "I believe that it can be fun and fulfilling and engaging, and having the computer do it for you strips you of that," one Apple engineer told me. (He asked to remain unnamed so he wouldn't get in trouble for criticizing Apple's embrace of A.I.) He went on: "I didn't do it to make a lot of money and to excel in the career ladder. I did it because it's my passion. I don't want to outsource that passion"... But only a few people at Apple openly share his dimmer views, he said.

The coders who still actively avoid A.I. may be in the minority, but their opposition is intense. Some dislike how much energy it takes to train and deploy the models, and others object to how they were trained by tech firms pillaging copyrighted works. There is suspicion that the sheer speed of A.I.'s output means firms will wind up with mountains of flabbily written code that won't perform well. The tech bosses might use agents as a cudgel: Don't get uppity at work — we could replace you with a bot. And critics think it is a terrible idea for developers to become reliant on A.I. produced by a small coterie of tech giants.

Thomas Ptacek, a Chicago-based developer and a co-founder of the tech firm Fly.io... thinks the refuseniks are deluding themselves when they claim that A.I. doesn't work well and that it can't work well... The holdouts are in the minority, and "you can watch the five stages of grief playing out."

"How things will shake out for professional coders themselves isn't yet clear," the article concludes. "But their mix of exhilaration and anxiety may be a preview for workers in other fields... Abstraction may be coming for us all."


Comments Filter:
  • sure (Score:4, Insightful)

    by Mr. Dollar Ton ( 5495648 ) on Saturday March 14, 2026 @11:47PM (#66042016)

    pretty soon there will be so much generated code that nobody will be able to make sense of it, especially when it stops working.

    a high heaven for all hackers.

    • Re:sure (Score:4, Insightful)

      by anonymouscoward52236 ( 6163996 ) on Sunday March 15, 2026 @12:07AM (#66042044)

      AI will indirectly end humanity in the next two years (through bad decisions with military AI, on either side), so, indirectly, yes it will end computer programming as we know it...

    • Add to it that new generations of AI might fail to maintain old code.

    • And I'll start charging a premium to sort it out.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      I don't know why people keep saying that. AI produces well-structured and commented code. That's better than the average programmer. Yes, good programmers are better. But AI code is readable and understandable, and even though it's post hoc, AI is pretty good at explaining it (as with much other code, though some human code is harder to explain). Reading AI code is just as annoying as reading your coworkers' code, but chances are good that you can follow the AI code better (even when the code itself is not perfect).

    • Irrational fear is for muggles and Luddites.

      Real programmers aren't afraid of new programming tools.

      • But one of the common distinctions between senior and junior developers -- almost a litmus test by now -- is their attitude to new, shiny tools. The juniors are all over them. The seniors tend to value demonstrable results and as such they tend to prefer tried and tested workhorses to new shiny things with unproven potential.

        That means if and when the AI code generators actually start producing professional standard code reliably, I expect most senior developers will be on board. But except for relatively s

      • I have no irrational fear, except fear of radiation. It is my firm opinion that more code is worse than less code under almost any circumstances. The "AI" on its own mostly generates, it doesn't reuse or reuses poorly. Most of the time I don't need that for the codebase I'm responsible for.

    • by nuckfuts ( 690967 ) on Sunday March 15, 2026 @01:44PM (#66042748)

      pretty soon there will be so much code generated, that nobody will be able to make sense of it, especially when it stops working.

      This is actually where I think AI is very useful. Have you ever had to take over a bunch of code that someone else has written? Possibly with little or no documentation? With Claude, I can have it parse a bunch of code and it does a pretty good job at summarizing what the code is doing, and can make modifications to it with minimal effort on my part. I just describe the functionality I would like added or changed. Is it perfect? Of course not, but I can be quite productive with it.

      • Have you ever had to take over a bunch of code that someone else has written?

        All the time. Have you tried to fix subtle business logic bugs in VB code written in Japanese? I have.

        With Claude, I can have it parse a bunch of code and it does a pretty good job at summarizing what the code is doing

        And yeah, LLMs can guess what trivial code is doing most of the time.

        LLMs fail completely when the code is in older languages that have no large online "training sets" or that are complex, when there is a lot of context switching, and they are still useless for tasks that require multi-domain factors. In short, they are useless for most problems you want to solve in a real software project.

        Maybe in a de

  • Yes, but... (Score:5, Insightful)

    by TwistedGreen ( 80055 ) on Saturday March 14, 2026 @11:52PM (#66042024)

    Yes, it already has. Ultimately it speeds up boilerplate code, and reduces the need to read documentation to figure out how to do what you want. However it firmly places the onus on the programmer to accurately describe what he wants.

    The idea of only needing a "six-minute conversation" is nonsense. If anything, more emphasis than ever must be placed on requirements and honing their specificity. It still takes days or even weeks of planning if you're building a maintainable complex system. You can at least iterate on designs faster than ever, though.

    I think of this as pretty much replacing the kind of work that electrical engineers used to do with board design and circuit layout... Now they use an expensive tool like Altium, and then while they may still tweak the output, by and large the layouts are automatically generated by the software and only the high-level requirements are fed into it. All the work is done in the writing of requirements, which often take the form of hideous XML files. With LLMs this just puts one more level of abstraction between the programmer and the actual code, and should change the way we write code but not the way we think about it.

    • by 2TecTom ( 311314 )

      exactly, sure the initial conversation only took six minutes, but it still takes an hour to run the code, get the error, and understand the error to make sure it's not in an AI loop, then suggest an approach and offer feedback and insights. also the AI tends to forget to bring stuff along if one is not careful to make sure it's doing so

      if people are doing the job properly and with due diligence, we should be learning by watching the structure develop and seeing how the solutions become implemented

      in my experience,

    • Re:Yes, but... (Score:5, Insightful)

      by sjames ( 1099 ) on Sunday March 15, 2026 @01:40AM (#66042098) Homepage Journal

      I'm reminded of a great many stories where someone makes a wish to a genie and it is technically granted much to the wisher's distress.

    • Boilerplate? Yes
      Documentation - not so much.

      You want it to use either the latest version, or a version compatible with your project. But it royally fails at that, will try to give you anything, and then lie to your face until you call it out.

      • Claude writes better documentation than I do. That includes user guides, about pages, commit text, release notes, and even its own instructions file.

        • You're confused. I wasn't talking about writing documentation for your project. There's a huge difference between writing the docs for a problem you've already solved, and finding the right combination of up-to-date documentation for the problem you're yet to address.

    • Yes, it already has. Ultimately it speeds up boilerplate code, and reduces the need to read documentation to figure out how to do what you want.

      I don't understand what is wrong with you all that you spend such a large percentage of your time writing boilerplate code.

    • Thanks for posting a sane engineering response.
      There is an argument to be made that AI processing is not democratized yet. We're living in the world of mainframes again. At some point the power of Claude Opus or its relative will be in the palm of our hands, but we've been here before and survived.
      Another aspect of AI coding is that it is breaking down the roles within work. Especially with senior staff who had moved away from programming to maybe more planning or systems work. Now those

      • I am laughing at the idea of Steve Jobs screaming and ranting at the AI and threatening to fire it if it doesn't work harder. Not sure AI is a drop-in replacement for human engineers in that kind of development environment.

    • I think of this as pretty much replacing the kind of work that electrical engineers used to do with board design and circuit layout... Now they use an expensive tool like Altium, and then while they may still tweak the output, by and large the layouts are automatically generated by the software and only the high-level requirements are fed into it

      Ditto. Time was we coded in assembly. Then compilers came out but people didn't trust them. I, for one, would have occasion to review the generated assembly to double check what got generated matched what I thought the C code should do. By the time I was doing this, it was virtually always me misunderstanding C semantics, not a compiler bug.

      AI feels like the next iteration. I code in reasonably specific English. I may or may not carefully review the code, depending on how familiar I am with the toolset and

  • it's a tool (Score:5, Interesting)

    by hjf ( 703092 ) on Saturday March 14, 2026 @11:55PM (#66042026) Homepage

    It's a tool. You need to know how to use it. But before all, you need to know what you want it to do.

    I don't "vibe code". I explicitly tell an LLM what's the output I want. This works great. It's also helped me take care of long-standing low-priority tickets.

    For example, I had it rewrite a backend function that reads from DB/returns JSON. But I had it do it "streaming" from the database instead of buffering-and-stringifying the database response. This has been long in my to-do list. I knew how to implement it (as I had done it in the past). I just didn't want to do it because it was a "nice to have" but not a must for our use case. And it's honestly boring to write.

    The LLM did it for me in a few minutes.
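    The buffering-vs-streaming difference described above can be sketched in Python. This is a minimal sketch with a hypothetical two-column schema and made-up function names; the parent's actual stack isn't stated:

```python
import json
import sqlite3

def buffer_rows_as_json(cursor):
    """Buffered version: pulls the whole result set into memory, then stringifies it."""
    return json.dumps([{"id": r[0], "name": r[1]} for r in cursor.fetchall()])

def stream_rows_as_json(cursor):
    """Streaming version: yields the JSON array piece by piece as rows arrive,
    so memory stays flat no matter how large the result set is."""
    yield "["
    first = True
    for row in cursor:
        if not first:
            yield ","
        first = False
        yield json.dumps({"id": row[0], "name": row[1]})
    yield "]"
```

    In a web framework the generator would typically be handed to a streaming response object (e.g. Flask's `Response(stream_rows_as_json(cur))`) rather than joined into one string, which is what makes it a drop-in upgrade for a buffered endpoint.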

    I also tried "vibe coding an app" to see how that would work. It didn't. It shows awesome progress at the beginning and then it starts failing. It deletes entire files, rewrites unnecessary parts, keeps looping and burning through tokens, so I honestly don't know what the "vibe coders" are really doing. It just didn't give me any results when I tried it.

    • Re: (Score:3, Insightful)

      by swillden ( 191260 )

      I had it rewrite a backend function that reads from DB/returns JSON. But I had it do it "streaming" from the database instead of buffering-and-stringifying the database response. This has been long in my to-do list. I knew how to implement it (as I had done it in the past). I just didn't want to do it because it was a "nice to have" but not a must for our use case. And it's honestly boring to write.

      I find I do a lot of refactors that I would have previously written off as good ideas, but too much work to be worth the effort. I still do think about the effort and weigh it against the benefit, but the effort I'm weighing is my time to review what the LLM did, and the time of whoever has to review my PR. On balance, though, I end up doing a lot of code cleanup that I previously wouldn't have done.

      I do also have the LLM actually write most of my code, but first I discuss what I want with the LLM and a

      • Doubles your productivity... probably increases the power/energy used by multiple orders of magnitude.

        A comparison of power usage before and after would be interesting.

    • by Jeremi ( 14640 )

      Maybe I'm out-of-date or a control freak, but I don't want my codebases to contain custom code that I need to rely on but that I didn't write myself. I barely trust my own code, much less code that an opaque AI generated that I consequently only half-understand.

      With code I wrote myself, the way it works is a direct reflection of my own thought process, and by the time it's done, I've spent enough time writing and refining it that I'm intimately familiar with it and can tell you exactly what every line does

    • > It just didn't give me any results when I tried it.

      You were vibing it wrong.

    • by gtall ( 79522 )

      Ancillary to that, I saw a recent Brian Greene interview with Raphael Bousso (both well-respected physicists) on assorted topics from a previous interview they did. Near the end, Greene brought up AI. I'll paraphrase his thoughts.

      He and some colleagues wrote a physics paper over a couple of months; nothing ground-breaking, but a nice little result. The thought struck him: would AI have made the job easier? After a couple of hours' work with ChatGPT he had the same result. He wasn't quite sure what to make of the

      • "I do not see where AI can come up with original problems to solve"

        A lot of original ideas arise from fuzzy (or even erroneous) thinking...starting with a bad/incoherent hypothesis and then iteratively correcting and refining it until you get a solid result (or give up).

        I think AI is going to struggle with that approach, though I see glimmerings of it when it tries to debug code.

    • >> It's a tool. You need to know how to use it.
      The most important thing with a tool is to know when not to use it.
      AI is perfectly fine to generate memes and funny stuff to amuse coworkers on the chat.
      For everything else, it's just a giant time and resource hog.

  • 'the End of Computer Programming As We Know It'?

    Yes, but we've had many of those.
    Compilers. Sure beats assembly.
    IDE. Sure makes debugging easy.
    Graphical layout. The IDE is going to generate working code based on the GUI elements I lay out. Damn.
    AI. Now I describe the GUI in natural language and it generates code. Even better. Wait, I don't need my old textbooks as reference anymore? The AI can tell me about these well known algorithms and generate code tailored to some details I provide? Wow.

    Note the latter is all AI can really be trusted with. Dea

  • by liqu1d ( 4349325 ) on Sunday March 15, 2026 @12:36AM (#66042062)
    One second we have reports that it makes people less productive, the next they're more productive. Considering the engineers mentioned range from centering a div to making the ads load a millisecond faster, I wonder if a bigger portion of the former are seeing massive improvements over the latter.

    For well-documented tasks with numerous examples it's fast and mostly produces a good-enough replication, even if it tends to add way too many superfluous lines. Once you get away from the stuff on Stack Overflow, though, it's an utter shambles in my experience. Even today it'll fake libraries, namespaces, and API calls. When you factor in that it's much slower to understand and check someone else's work than to write your own, I'm struggling to see how the claimed improvements exist, at least beyond the prebuilt snippets so many IDEs already have.
    • by Bumbul ( 7920730 )

      One second we have reports that it makes people less productive, next second they're more productive.

      Actually, the speed of development IS that fast. Just 5 months ago agentic AI coding wasn't there yet. Then came Claude Code, Sonnet and Opus, Skilling and Agents, and things have changed. And remember, this is not the peak; we have only scratched the surface. It will become more skilled, cheaper, faster, easier to use. Just try Opencode and the Big Pickle model now; there's a free tier (mind the security implications with those free models, though). You'll be surprised.

    • The second is paid advertising disguised as journalism. After all, the guys who spent hundreds of billions making "AIs" now need to make everyone believe – no matter the cost – that everyone has to buy their services. Think about it, hundreds of billions of dollars are involved in the biggest scam ever created, of course they're going to try to convince you that everyone needs them.
  • All technological revolutions have an S-curve structure.
    They eventually peak in value and diminish...

    https://www.scry.llc/2025/09/1... [scry.llc]

    The net result of the past two Kondratieff cycles was a net decrease in the workweek...
    from 60 to 48 in the 1870s and 48 to 40 in the 1930s...

    https://www.scry.llc/2024/12/2... [scry.llc]

    The past four years of constant IT layoffs are a huge red flag
    signifying structural change, not the inventory adjustment of previous one-year recessions.

    https://www.scry.llc/2014/05/1... [scry.llc]

    The Kondratieff (credit) cycle is coming to an historical end.
    That's why gold is up 200% in the past two years.
    The Law of Demeter will collapse length of supply chains.

    • by Bumbul ( 7920730 )

      All technological revolutions have an S-curve structure. They eventually peak in value and diminish...

      I believe you completely misinterpret that curve. The peak shown in the right hand graph is not for value, it is for the rate of growth (i.e. change). As the left hand graph shows, the value will stay there, just the rate of change will slow down.

      And we are nowhere near the inflection point; it is still early days, i.e. a good entry point.

  • were the peak of handcrafted excellence by builders with decades of experience.

    by 1930, all churches were built with machine-cut bricks and mechanical effort.

    the Era of handcrafted software is ending, just as masonry did.

    adapt or die.

    • by gweihir ( 88907 )

      That is what delusion looks like. Incidentally, churches get built out of stone, concrete, etc. Your claim that they are "all" made from bricks is just as wrong as the rest of your post.

  • by Nuitari The Wiz ( 1123889 ) on Sunday March 15, 2026 @02:27AM (#66042114)

    Like playing slot machines: sticking money in and expecting a big payback. Most of the time you have a small loss (time wasted on a bad generation, cost of tokens, etc.), sometimes a middling success (some small productivity gain), and a few times you hit a jackpot and actually get something really good, but those are really rare. They do, however, hook you in for more very well.

    I do use AI at work (Claude) with some success, but I don't think the gain is really going to cover the additional costs of using it. Lots of times it's incomplete or wrong in a small way; sometimes it's even worse than an outright failure. Yeah, boilerplate is quick, but devs often have their own tools for that.

    Personally, I've been using Qwen Code on a very old personal project I gave up back in 2000. Initially things were going well, major language updates, some rewrites etc, tackling obvious bugs.
    But small bugs have taken multiple days of testing, prompting, testing, etc. Context is fleeting at best, and the memory file is often ignored a few prompts in. Test generation has been off the mark lately, as it's clearly not used to BBS door games.

  • by engineerErrant ( 759650 ) on Sunday March 15, 2026 @03:05AM (#66042126)

    Lived through the early 2000s with a new CS degree, and had old people literally laugh at me for how foolish I was. This feels the same way.

    We don't remember that, back then, some people thought the internet would become a (literal) new sovereign nation, a shining city on a hill where all were equal and everything was perfect. Those people were stupid and deserved to be laughed at (back then, even!). However, the underlying technology of their delusion now infuses every aspect of our daily life, down to my coffee maker - just not quite in the way they'd imagined. It really was as transformative as they thought. AI will be the same, at maturity. It's here, it's real; get used to it, although the reality in 20 years won't be what we expect now.

    Computer programming is already over, as a necessary daily skill. I'm not saying to not do it! People still crochet because it's fun, not because they'll freeze to death without that afghan, and I see programming in the same light now. It was neat-o; I spent 30 years on it; but in retrospect, it was always going to be a temporary stage in civilization's development. The real purpose was to express human desire to a machine, and that goal is still 100% valid and needed today. We just don't need to do it in C or Java or Python or whatever anymore.

    Our value as "engineers" is being held under a blowtorch, as well it should be: simply knowing which C library to include, or which library method to call, is being blasted away in favor of engineering judgment. What *should* be built, and why? In the context of the team, the organization, the mission, the business? This is what we truly should be, not "code monkeys." Having good human engineering judgment is the future (and the past!). Memorizing J2EE API methods is not, and never was.

    • Going back further, when my dad did his EE Master's, they got their hands on some of the first microprocessors. Slow, fragile and expensive. My dad and his college buddy were wildly speculating: in a decade, we'd find these things in cars, in washing machines, in toys. Their professor replied: "That'll never happen, discrete components or mechanical controllers will always be cheaper". If you've done things a certain way your whole life, it becomes the yardstick for quality. Stick with the tried-and-tr
  • I bet they all want to keep their jobs and hence will not say anything negative about the company strategy that has been announced at the highest level. Such a coincidence, though.

    In other news, worthless "journalism" is worthless and no replacement for actual Science.

  • 4GLs (Score:4, Insightful)

    by rambletamble ( 10229449 ) on Sunday March 15, 2026 @03:32AM (#66042158)

    In the eighties, fourth generation languages or 4GLs were going to spell the end of programming. Business people would design and implement the systems.

    Well, that was the theory, or scaremongering, anyway.

    Trouble is, that last 10% of the task is what takes most of the time.

    • Trouble is, that last 10% of the task is what takes most of the time.

      That, and the work to be done after release, is where you'll reap the benefits of good design and good coding: troubleshooting, lifecycle updates, data mining and reporting, changes to external interfaces and integrations. Requirements from users and for reporting and integrations change all the time. With a poor design, those changes are difficult and expensive to implement.

      I wonder how well AIs do at such jobs, and would like to compare it to my own code. My code is a horrible mess that is hard to un

    • by kbahey ( 102895 )

      In the eighties, fourth generation languages or 4GLs were going to spell the end of programming. Business people would design and implement the systems.

      Well, that was the theory, or scaremongering, anyway.

      Add to that the hyperbole around some technologies that were pushed to end users, rather than a developer tool.

      Examples:

      - COBOL was supposed to be a language for managers so they don't have to ask a programmer to write a report for them.

      - SQL was supposed to be the same, managers can query databases direct

  • Quoting an old construction work boss... We have a small team of great developers. Core software is highly critical, and highly optimized, and with lots of ip that we don't want to share with a raft of junior developers trying to build a resume. But, there's a lot of stuff on the periphery that is important, but not as latency or reliability sensitive. A lot of those peripheral tasks were hard to get someone to bite on, or weren't deemed worth taking someone off something else, but now are getting picked
  • ...AI has to put an end to the era of Betteridge's law.
  • by Anne Thwacks ( 531696 ) on Sunday March 15, 2026 @05:39AM (#66042236)
    .. bugs as we know them.

    Replacing them with something infinitely worse!
    Bugs as we don't know them. That are impossible to find, let alone fix. With every possibility that using AI to fix them will make them worse.

  • Neeeeeext!

  • Wrong (Score:4, Insightful)

    by Anamon ( 10465047 ) on Sunday March 15, 2026 @06:31AM (#66042256)
    > Abstraction may be coming for us all.

    No. LLM programming is not a layer of abstraction; it's a layer of imprecision. The reason NLP hasn't taken off in 50 years of trying isn't just that it was too difficult to implement well; it's also simply the wrong tool for the job. As with all previous attempts, LLM coding is not abstracting, it's compromising. In many cases the compromise is worth it. But it isn't worth it where code actually needs to work, not just happen to work.
    • by dfghjk ( 711126 )

      I'd be interested to see a study on AI coding as it applies to real-time coding challenges, perhaps in the medical community where failure is absolutely not an option. Then add to it that the AI has to choose the smallest, lowest cost implementation that performs the function. You know, what engineers ACTUALLY do in vast swaths of real programming.

      Instead, we see billionaires tell us what engineering actually is, some even declaring themselves "chief engineers" despite not having even an hour's education in a te

  • by cascadingstylesheet ( 140919 ) on Sunday March 15, 2026 @08:12AM (#66042316) Journal

    Being a good programmer was never really about churning out code.

    It was about being able to think logically, to understand workflows and represent them as data structures and conditionals.

    If you were just a code monkey full stop, then yes, bye bye.

    • A lot of companies don't want good programmers, they want programmers they can control. Even if that means hiring a lot more of them.
      • by Junta ( 36770 )

        Pretty much spot on, they have needed those 'nerds' but hate that the 'nerds' have leverage.

        They've already leaned hard on incompetent offshoring and waved away resulting business failures, so they get to shift to leaning hard on LLM codegen.

        When given the choice of a good solution but they are at the mercy of some skilled employees, or a shitty solution that will likely lose in the market, but the employees are safely fungible, I've seen multiple companies repeatedly go for the 'fungible employee' strategy

  • But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots

    This is how it is with everything, because it doesn't know, or even try to know, which of the possible solutions it can "imagine" (hallucinate, generate, rectally aviate, whatever) actually makes sense. Instead, random numbers are literally used to decide what comes out. So when I asked it to help me with 3D printer settings (after having no luck with old-fashioned searches), it came up with some absolute bullshit. Then, when I told it that its suggestion conflicted with my calibration testing results, it told me I was absolu
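    The "random numbers decide what comes out" point is literal: LLMs draw the next token from a probability distribution rather than always picking the single best guess. A toy sketch of temperature sampling, with made-up logits (not any real model's values):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax over raw scores, then draw one token index at random.

    Temperature 0 degenerates to argmax (deterministic greedy decoding);
    higher temperatures flatten the distribution and add variety.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                       # the literal random number
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(logits) - 1
```

    Greedy decoding (temperature 0) always returns the same answer; the sampling path is why the same prompt can produce different code, or different 3D-printer advice, on different runs.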

  • "70 software developers at Google, Amazon, Microsoft and start-ups"

    This is not representative of programmers. Programming is far more diverse than Silicon Valley app development. Embedded dominates programming.

    But, of course, it's Silicon Valley VC that drives these narratives, they only "know" app development and they only know it to the extent they can take from it. The real question is will billionaire money take all our livelihoods and everything we own.

    • Embedded dominates programming.

      Is that really true? I would have thought it was website programming.

      • by kackle ( 910159 )
        I'm a firmware guy and even I didn't realize the ubiquity until more recently. Just from my desk chair I can see a dozen devices that have firmware in them: microwave ovens, a desk phone, a cellular phone, monitors, laptops, IC-programming pods and a voltmeter. If I stood up and looked over the cubicle wall, I could add to that list photocopiers, a vending machine and Wi-Fi access points. I'll skip considering our small computer server room and go to a window to see the dozens of parking lot vehicles, each having doze
  • by MpVpRb ( 1423381 ) on Sunday March 15, 2026 @11:22AM (#66042538)

    I can imagine a time when future AI helps us create efficient, bug-free, secure code that is better than any human can create on their own.

    It seems to me that we are doing things in the wrong order. The correct order would be to design AI systems that analyze existing code to find bugs, unhandled edge cases and security weaknesses. The results should be carefully studied and the systems tuned to find more bugs and problems. Once we have developed a reliable code analyzer and tester, then use AI to write new code that is tested by the mature tester.
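    The proposed ordering — build a trustworthy analyzer first, then make generation iterate against it — can be sketched as a simple loop. This is a toy illustration, not any real product's pipeline: `generate` stands in for an LLM call, and Python's own byte-compiler stands in for the mature bug/edge-case/security checker.

    ```python
    import os
    import subprocess
    import sys
    import tempfile

    def analyze(source: str) -> list[str]:
        """Stand-in for a mature code analyzer: here, just a syntax
        check via Python's byte-compiler. Returns findings; an empty
        list means the candidate passed."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
            path = f.name
        try:
            result = subprocess.run(
                [sys.executable, "-m", "py_compile", path],
                capture_output=True, text=True,
            )
            return [result.stderr.strip()] if result.returncode != 0 else []
        finally:
            os.unlink(path)

    def generate_then_check(generate, max_attempts=3) -> str:
        """Analyzer-in-the-loop generation: only return code the
        checker accepts, feeding each round's findings back to the
        generator so it can try again."""
        findings: list[str] = []
        for _ in range(max_attempts):
            candidate = generate(findings)
            findings = analyze(candidate)
            if not findings:
                return candidate
        raise RuntimeError(f"no clean candidate after {max_attempts} attempts")
    ```

    A real version of this would swap in static analyzers, fuzzers and security scanners for `analyze`, and an actual model call for `generate`; the point is only that the checker, not the generator, gets the last word.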

    Unfortunately, those who control software development seem to care more about "productivity". Even before AI was available, they preferred cheap, low quality programmers and poorly written tools and frameworks that allowed them to produce crappy code cheaper and faster.

    Current AI is trained on publicly available code, much of which is poor quality. The result can be bloated, inefficient, bug-ridden, insecure slop.

    Even the best programmers, working in the best environments, have rarely written perfect code. Writing perfect code is an unsolved problem. We need AI systems that help us create better code, and in the far future, perfect code. We don't need AI that allows the clueless to effortlessly churn out vast quantities of crappy code.

    • I can imagine a time when future AI helps us create efficient, bug-free, secure code that is better than any human can create on their own.

      That's not even mathematically possible. At best, an AI can pattern-match to replicate good code. It cannot exceed its training data. You understand AI doesn't know anything, at least not modern LLM/ML models. It guesses based on what it has seen before. It has no clue what Java is, how it differs from Python, or what the rules of Java are. The best it can do is mimic what it has seen before and apply common patterns learned from code that is inferior to average.

  • AI has no idea what a well-designed piece of software is. All it knows is what is in its training data. So it will give you a really well-crafted, PEP 8-compliant Python function after one prompt, then give you a really sloppy, poorly constructed function with the next prompt. The user really has to be able to vet what AI generates and know how to guide it toward better responses.

    • Programming is easy; any moron can do that... designing software is hard! That's why programmers have jobs. Translating specs to Java, Rust or any other language?... very simple. '80s C.A.S.E. tools could do that. The "how" is easy... the "what" (are you actually doing) is hard. Fixing all the errors in the spec, meeting with stakeholders to get clarity, and getting it all to actually work in a secure and scalable manner?... that's about 95% of my job.... 1% is programming.... 4% is meetings and paperwork.
  • Yes, as we know it. Um, timesharing fer chrissakes brought the end of computer programming "as we know it."

  • AI concludes, "You know, COBOL isn't so bad, after all."

  • by Somervillain ( 4719341 ) on Sunday March 15, 2026 @01:15PM (#66042680)
    So far, the AI age has only produced thought pieces and wealth for Nvidia and a select few. I am a daily user of Claude 4.6 Sonnet and Opus. I can comfortably say that the LLM revolution has exceeded the impact of Stack Overflow in the industry. However, if programmers could write massive software projects without programming, the world would look a lot different than it does today. Most of these tools have been around for nearly 5 years. Were a product able to build software without skilled engineers, the world would be flooded with successful startups eating the lunch of legacy players as well as making money off underserved niches. In particular, I think we'd see a revolution in gaming.

    Few areas are more resource-scarce than gaming. Also, writing a game requires a massive amount of talent across many fields... from graphics to programming to music to design. This is the perfect use case for AI... those shitty low-end indie games?... given the level of talent it takes to create them, that same pro should be able to make UE5 games that look almost as nice as the AAA efforts. And they should be able to do it in less than a year.

    If AI really worked, the coverage wouldn't be the NYT interviewing individual dudes. It'd be interviews with startups creating mind-blowing video games that sell like hotcakes for shockingly low prices, built by a team of bros small enough to fit in a minivan.

    If the LLM revolution were real, the articles wouldn't be ponderous philosophical nonsense, but endless /. stories of the major software empires buying startups where a few folks with a Claude account were creating amazing products that Google/MS/Oracle/Salesforce either wanted to integrate or kill.

    Someday? Yeah, probably everything they say will be true... but not today. I find this like the whole electric-car debate. Someday... there will be no gas stations... but... I am not even confident I'll see that in my lifetime. I "think" we'll see a large-scale disappearance of consumer gasoline pumps in the next 20 years... but for a HUGE portion of the general public, electric cars make a lot more sense on paper than in reality... at least with today's economic variables.

    But... even when the day comes... no, you will still need to be able to code. You may not have to compose the code, but you will have to understand it in depth. You will be directing AI to optimize it. You will be QA-ing it. My manager doesn't write much code, but he reads a lot and nitpicks both the mistakes we make (which are extremely rare for me at this point in my career) and strategic mistakes, which are a greater concern. For example, in our microservices app, I or other engineers will frequently try to do too much in a microservice and perform a redundant validation that requires a REST call in 2 different components in the flow. You shouldn't check whether the customer owns a resource in both step 1 and step 4... unless you have a good reason. Frequently, the engineer only works on step 4 and didn't know that step 1 already did the same validation, and it's impossible to catch that mistake from your own code alone. It's my boss's job, as well as our architect's, to catch such details... really my job to know it, as well...
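    The redundant-validation smell described above can be made concrete. In this hypothetical sketch (names are illustrative, not the poster's actual services), step 1 makes the ownership REST call once and records the result on a request context, so step 4 trusts that flag instead of issuing a second call:

    ```python
    import dataclasses

    @dataclasses.dataclass
    class RequestContext:
        """Carried through every step of one request flow."""
        customer_id: str
        resource_id: str
        ownership_verified: bool = False  # set once, trusted downstream

    def verify_ownership(ctx: RequestContext, rest_call) -> None:
        """Step 1: the ONE place the ownership REST call happens.
        `rest_call` stands in for the real HTTP client."""
        if not rest_call(ctx.customer_id, ctx.resource_id):
            raise PermissionError("customer does not own resource")
        ctx.ownership_verified = True

    def apply_change(ctx: RequestContext) -> str:
        """Step 4: relies on the recorded flag instead of re-issuing
        the same REST call a second time in the flow."""
        assert ctx.ownership_verified, "flow bug: step 1 must run first"
        return f"updated {ctx.resource_id} for {ctx.customer_id}"
    ```

    The assertion in step 4 is exactly the kind of cross-component invariant an engineer who only ever touches step 4 would miss, which is why catching it falls to the boss or the architect.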

    WHEN the day comes when AI can actually code... and if you think it can write decent code today, you really suck at coding. Sorry... if you can't see the many mistakes it makes: 1. you must not be using a compiled language (because the generated code frequently doesn't even compile), or 2. you really don't understand the language you're working with well enough to spot the mistakes on your own... or 3. you've barely used LLMs and just got lucky once or twice. If you use it daily... you notice the mistakes. However, when that day comes, yeah, we'll become less like code monkeys and more like my boss or Linus Torvalds... supervising the circus. However, that day is not here and isn't even on the horizon from what I can tell. The pace of improvement is glacial... the only
  • The problem isn't AI generating code. It's fixing code at 3am when your corporate web site is down and no one knows how to fix anything. Yeah, "coding is easy"; fixing is hard.
  • Based on my career experience, so many people have been "coding" who both just absolutely hate it and are, generally speaking, lost: they kind of bang away at whatever they are faced with until by some miracle it works. They frequently have no idea why it works even when it does, sometimes only because someone came and reworked it for them. Basically, at least back to the dot-com rush, "coding" was seen as a way to get money even if you don't like it and aren't any good at it. The opportunity is that many o

  • I think it's interesting how some people on Slashdot find AI use awful and some swear by it. That matches my experience: sometimes it's amazing and sometimes wastefully awful.
  • These days almost no one does that because PSUs have become much more reliable and cheap so you just buy a new one.

    Eventually AI code will break irreparably and even AI might not be able to fix it... BUT restoring from a backup, or grabbing the last version you have on GitHub or wherever, will mean you can rebuild with AI easily.

    Better yet, if your AI code jammed after 3-5 years, the new AI might be able to one-shot create the whole thing from a detailed specification/prompt of what the thing is meant to do.
    • These days almost no one does that because PSUs have become much more reliable and cheap, so you just buy a new one. Eventually AI code will break irreparably and even AI might not be able to fix it... BUT restoring from a backup, or grabbing the last version you have on GitHub or wherever, will mean you can rebuild with AI easily. Better yet, if your AI code jammed after 3-5 years, the new AI might be able to one-shot create the whole thing from a detailed specification/prompt of what the thing is meant to do. Programming might be more about talking to AI and reviewing its work than actually learning complicated text editor keyboard shortcuts. Teams of AI agents are busy making shit better as we speak. Horse riders need to learn to drive cars. Quickly.

      "learning complicated text editor keyboard shortcuts" is not programming. Writing code is easy. If that was our barrier, all problems would be solved. We can automate the "how" part with 80s coding technology....like C.A.S.E. tools. The reason COBOL lives on today is because the actual nuanced details were long lost. If you wrote out every spec for business logic for any sizable company, the spec would exceed the code. Your theory is interesting, but I can't see it happening. If it could, nothing wou

  • No, it won't.

