
Software Developers Say AI Is Rotting Their Brains (404media.co)

An anonymous reader quotes a report from 404 Media: On Reddit, Hacker News and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but that using AI to get the job done is often a more time consuming, harder, and more frustrating experience because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to.

"We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. "We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)."
"I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code," the software developer at a small web design firm told 404 Media. "It's making me dumber for sure," the fintech software developer added.

"It's like when we got cellphones and stopped remembering phone numbers, but it's grown to me mentally outsourcing 'thinking' in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself ill just use it for inspiration but it ends up being my only thought. It gives you the illusion of productivity and expertise but at the end of the day you are more divorced from the output you submit than before."

A software engineer at the FAANG said: "When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company's] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that."


Comments Filter:
  • by fahrbot-bot ( 874524 ) on Wednesday May 13, 2026 @05:12PM (#66142317)

    "It's like when we got cellphones and stopped remembering phone numbers, ...

    Or home phones with speed-dial.

    You let some one/thing do tasks for you and you eventually forget how to do them yourself.

    • If you let yourself forget, sure.

      I've been using electric arcs to start gas grills for a long time now, but I still know how to use a match.

    • This is likely a bit facetious, but, similar to spelling, some people never forget. I remember phone numbers from when I was a child. My kids remember numbers. Why? Because we taught them to remember numbers in case they didn't have access to their phone.

      Just saying.
      • Good point, and perhaps things like that are more a matter of how imprinted they are. Some of those phone numbers you remember were very important, and presented as such, so are probably stored as such. I imagine some phone numbers, even currently used ones aren't as memorable. Assigning them to speed-dial would probably make remembering them harder as you know you don't have to.

      • When my kids were small, I got a new phone number, but out of a selection I chose one that had meaning to me, not the easier one that my wife told me to take. (She got that one.... Long story... Not doing that again...)

        So in reverse social engineering, to get my kids to remember that somewhat harder number, I configured the family tablet to have that as a passcode. What do you know, it took the kids about a day and they had it memorized. Hmmm, I'll have to check if they still remember...

    • Has this destroyed your life? I remember a few phone numbers, but that’s about it. It has not ended the world.

      This is a weird analogy for everyone to pick. If I don’t have to remember the 4 arguments that I have to add to every single API call I make, that’s a win in my book.

      Two memorized phone numbers will get me through the rest of my life without having to waste memory space on others. Claude allowing me to drop a metric fuckton of idiosyncrasies and syntax is of even more value.

      • Has this destroyed your life? I remember a few phone numbers, but that’s about it. It has not ended the world.

        This is a weird analogy for everyone to pick. If I don’t have to remember the 4 arguments that I have to add to every single API call I make, that’s a win in my book.

        Two memorized phone numbers will get me through the rest of my life without having to waste memory space on others. Claude allowing me to drop a metric fuckton of idiosyncrasies and syntax is of even more value.

        I was just expanding on the original cellphone analogy, noting that the same sort of thing is older than that -- for the youngsters who never had a landline. :-) From a practical standpoint, those two examples won't destroy your life, but they're examples of the consequences of letting technology do things for you - which isn't necessarily bad, just a price.

        Being able to forget API information isn't necessarily a bad thing, depending on how much you forget. If you forget too much and you're totally relian

    • "You let some one/thing do tasks for you "

      And this is why you are a subsistence farmer who does not have a computer.

      • "You let some one/thing do tasks for you "

        And this is why you are a subsistence farmer who does not have a computer.

        Actually, I'm a software engineer and systems administrator who's worked on everything from PCs to a Cray-2, the latter at NASA LaRC. But I think my original point is valid. The less you do something yourself the more stale you get. I'm not saying that's necessarily a bad thing. For example, an engineer moving up the management ladder still needs to understand the work, but isn't doing it anymore and may be rusty if forced to actually do the grunt work. That's the price paid.

  • Use it or lose it (Score:5, Insightful)

    by Himmy32 ( 650060 ) on Wednesday May 13, 2026 @05:21PM (#66142323)

    Knowledge in general is use it or lose it. I remember my grandpa showing me how to use a slide rule and lookup tables in books, and waxing about how his coworkers were worried that calculators were going to rot brains. Tools, even in math, have kept shifting where knowledge is needed, like stats packages, Maple, or Wolfram Alpha.

    What's scary here is that the need for knowledge isn't being shifted; the practice of thinking is just being outsourced.

    • Sadly, I have forgotten how to use a slide rule, though my old slipstick is still sitting at the back of the bookshelf near my computer. Probably covered with dust, though.

      I do still use my abacus occasionally, but not "as designed". It's handy as all get-out for binary arithmetic and tracking bit flipping. Which isn't what an abacus is for, of course, but that's what I use it for.

      • by dskoll ( 99328 )

        I still remember how to use a slide rule (for multiplication, anyway...) even though I haven't actually used one in decades.

        It's all just logarithms. Logarithms turn multiplication into addition.
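A minimal Python sketch of the identity a slide rule exploits (the numbers here are just examples):

```python
import math

# A slide rule multiplies by adding lengths proportional to logarithms:
# log(a * b) == log(a) + log(b), so lining up the scales adds the logs,
# and exponentiating reads off the product.
a, b = 59.0, 74.0
product_via_logs = math.exp(math.log(a) + math.log(b))

print(product_via_logs)                       # ~4366.0
print(math.isclose(product_via_logs, a * b))  # True
```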

    • Re:Use it or lose it (Score:5, Interesting)

      by ffkom ( 3519199 ) on Wednesday May 13, 2026 @06:25PM (#66142421)
      The problem with LLMs is not that people are losing the ability to do one or another specific thing; the problem is that they are unlearning how to think altogether. If you practiced juggling or playing piano at some point in your life, and later, due to a lack of opportunity to practice, unlearned it, it may be a pity, but no big deal, and you can still move your body to do other things. But if you stop moving your limbs entirely, even if only for months, the muscles atrophy, and you will have a hard time doing anything requiring body movement from that moment onward.

      The scary thing I observe in many colleagues, right now, is that they really are atrophying their brains. It does not matter that they forget certain API calls or programming language features; those could be picked up again quickly from the documentation. What matters is that they have stopped having any sophisticated thoughts: they outsource all thinking to LLMs, and start using them for more and more mundane tasks while becoming more and more uncomfortable when asked to think on their own. They cannot answer even simple questions about "their" work results anymore, because the results aren't really "theirs", but fell out of some LLM.

      I hope some people will be able to use LLMs responsibly, just as a tool rather than as a brain substitute. But I am concerned many people will atrophy their brains for good.
    • by gweihir ( 88907 )

      Throwing out slide rules was a pretty expensive mistake. As a competent slide-rule user you do not make mistakes on the order of magnitude; as a calculator user, that is a main risk. That does not mean you always have to use a slide rule, but if it were still taught, you would have a fast way to check calculations with a different tool and stay in practice with very little effort.

    • by dskoll ( 99328 ) on Wednesday May 13, 2026 @11:25PM (#66142687) Homepage

      I think there's a big difference between calculators and AI. Calculators made doing arithmetic much easier. But arithmetic is just rote; there's no creativity involved. If you are asked for the product of 59 * 74, you're going to get 4366 if you do it correctly, whether you do it in your head, on paper, or with a calculator. And if you do without a calculator, you're still going to follow a rote algorithm.
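The rote procedure described above can be sketched as schoolbook long multiplication (a toy sketch for illustration, nothing more):

```python
# Schoolbook long multiplication: the rote, deterministic algorithm a
# calculator replaces. Same inputs always give the same output.
def long_multiply(a: int, b: int) -> int:
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        total += a * int(digit) * 10 ** place  # one partial product per digit
    return total

print(long_multiply(59, 74))  # 4366, matching 59 * 74
```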

      Software development is different. Writing a piece of software requires creativity, IMO, for all but the most trivial of programs. Give three different expert programmers the same spec and you'll almost certainly get three quite different but correct programs. Outsourcing creativity is very different from outsourcing rote, deterministic algorithmic processing. Creativity is regarded as what makes us human (or it used to be, anyway) and I for one don't want to outsource that. That's why I don't use AI for anything, and why I'm happy I retired from paid software development three years ago.

      I maintain a few hobby projects, one quite actively, and I do not allow AI anywhere near them. I get to express my creativity and not care about managers demanding I use AI.

      • by DarkOx ( 621550 )

        It is also possible that, for all but perhaps presentation and UI, creativity in programming is a story we told ourselves, and that is why some of this is so upsetting.

        Give three different expert programmers the same spec and you'll almost certainly get three quite different but correct programs.

        Correct in that for the same inputs they give the same outputs, sure. However, if we are being really honest, either some are more correct, or, after the compiler removes all the formatting and strips the symbols, the resulting output is the same, give or take some register choices and other trivialities.

        The correct code is going to be the better m

    • by AmiMoJo ( 196126 )

      I use calculators all the time, but I'm glad that I did have to do maths by hand at school. It gives me the ability to estimate or have a feel for what I'm expecting the result to be, which helps me spot errors made operating the calculator or in the numbers/assumptions I'm plugging into it.

  • by Somervillain ( 4719341 ) on Wednesday May 13, 2026 @05:32PM (#66142335)
    I have to scrutinize pull requests much more than ever before. I have a handful of coworkers who like to let Claude do everything...which honestly isn't a concern if they test it, write the tests themselves, and understand what it does. However, I have had to reject several PRs because they were having AI write the tests AND the code. Obviously AIs are prone to write unit tests that justify their behavior, not the actual intended function of the code.

    There's a temptation to let Claude do everything...but when I've tried it, I had to edit the output heavily. Usually the code it produced was unprofessional or didn't even resemble working code. However, it did help me out a few times with libraries I've never used before. I'm just very careful about writing my own unit tests and verifying end to end. Additionally, I've been lazy and just pointed Claude at a stacktrace and asked it to tell me why it was failing (a project I'm unfamiliar with). It failed 100% of the time. In fairness, so did I...they were tricky bugs...I had to contact the author and have him explain what he intended to do. Its ability to understand code is really lacking...whereas that should be its greatest strength.

    I am an AI realist. I give it credit where it works and complain where it's overhyped. I have multiple AI evangelists on my team. For them, it's a religion...do everything in AI...AI is all powerful. To me, it's a tool in my toolbox.

    The difference between us is that I see AI as it is today....their vision is AI as they imagine it...based on sci fi books and movies. In their vision, Claude is smart and knows what it's doing and will guide you to the promised land with a layover in nirvana and bliss. All hail AI!!!!

    The disturbing part is they seem to have noticeably regressed and believe Claude over their own judgment.
    • by Himmy32 ( 650060 ) on Wednesday May 13, 2026 @05:47PM (#66142363)

      I have to scrutinize pull requests much more than ever before

      The disturbing part is they seem to have noticeably regressed

      And I think this is core to the discussion, because output from evangelists is going up while hollowing out the skills the next generation needs to do the review.

    • > The disturbing part is they seem to have noticeably regressed and believe Claude over their own judgment.

      I find your lack of faith in AI disturbing.

    • I wouldn't say I'm an expert, but I've got some history. My take is that you need one AI to write the code, and another to write the tests. And I'm also finding the most useful tests are high-level functional-type tests, because they shake out stupid stuff AI does where it writes some code that fits the unit tests but doesn't actually do what it needs to do.

      Unit tests seem to be useful to get the code-writing AI to actually write code that runs/compiles or whatever. Humans would write unit tests to be more use

  • People getting fired because the managers guarantee vibe coding works. Meanwhile I order coffee from an app that had worked fine for years and ended up waiting half an hour before needing to find an email proving I even paid at all, which the baffled store employees told me looked like an Uber Eats delivery that had been delivered. But hey the vibe recoded app gave my money to the company and that's the important thing!
    • by ffkom ( 3519199 )

      People getting fired because the managers guarantee vibe coding works.

      And even when they notice that vibe coding does not work that great, they will still try to move expenses away from wages towards tokens paid to some LLM hoster. And once they find out how expensive that gets over time... well, they probably have been replaced by LLMs themselves at that time.

    • by gweihir ( 88907 )

      Indeed. Once again non-tech personnel thinks it knows how tech works and can make competent decisions about it. All that shows is that software engineering is a very immature discipline and that the "managers" are still (as they always were) generally really bad at their jobs. Imagine a "manager" telling a construction engineer that a bridge will definitely take a certain load when the engineer knows that is not true. What would happen is that the engineer escalates or quits. Non-tech personnel cannot make c

  • by rezachi ( 10503306 ) on Wednesday May 13, 2026 @05:42PM (#66142349)
    I've used AI agents to assist with troubleshooting some IT issues. And while it did eventually get me there, there were two glaring problems I've found:

    * On issues where I was familiar with the system, it would make wacky suggestions or tie things together as being the same root cause even when it was an impossibility. You could waste a lot of time going down these rabbit holes if you didn't know what you were doing.
    * On issues where I was less familiar, I found that after spending hours troubleshooting the system, I arrived at the answer but had not gained any knowledge of the methodology of how the system worked or how the troubleshooting plan was determined. You never get to be a senior-level contributor without this kind of knowledge.

    So it worked, but it would really depend on the goals of the organization as to whether this is a direction they really want to go.
    • by hjf ( 703092 )

      I mean, that's not a bad thing either. I sometimes DO NOT want to learn "new to me" things. I've been contributing to an ancient, but still used, piece of software called Xastir. It's VERY OLD spaghetti code, low-level X11 with Motif. I DO NOT want to learn Motif. It's not a marketable skill or something I'll ever need. But I let the AI code a few contributions (one of them was to replace some parts with Cairo fonts for antialiasing on high-DPI screens, and the other was fixing a very old screen drawing routine that took 2-

      • by Jeremi ( 14640 ) on Thursday May 14, 2026 @12:58AM (#66142727) Homepage

        Could I have fixed this bug? Not even in my wildest dreams. Do I care how it was fixed? Oh no. No I don't. I just checked that the output of the LLM was reasonable.

        The risk in this scenario is that after a few iterations of people applying AI-generated "black box" modifications, users start reporting that the ancient app is crashing on them now and then, and nobody has the first clue why, or how to fix it... and since the crash isn't readily reproducible, you can't even do a "git bisect" to figure out which commit introduced the regression. Now you're left with two unappetizing choices: either live with the instability forever, or roll back all of the "blind" commits to the last known-stable version and never touch the codebase again.

  • by Anonymous Coward

    I use AI for coding, but not often.
    At work we needed a very minimal windows app that makes a websocket connection, performs some tasks, and has a very simple UI that shows an activity log.
    Windows GUI programming makes me want to pull my teeth out and I don't enjoy it at all. The last time I dealt with it was years before LLMs, and I didn't like it then either. So I was quite happy to have an agent do most of the work even with mistakes and clunky unrefined code.
    I don't feel like I've lost any skill by steer

  • by KermodeBear ( 738243 ) on Wednesday May 13, 2026 @05:48PM (#66142365) Homepage

    I received my first AI-generated pull request recently. It was... not great. A lot of extra code that was not necessary at all, some odd naming conventions, and the size of it all made the whole change set difficult to parse. This wasn't a typical "Well, this works and it's okay, it's just not the way I would do it." Some sections were legitimately terrible.

    I have been using AI tools somewhat, but mostly to examine existing structures and answer questions. It's pretty good at that. But the code? I prefer to write it myself. That way I don't forget how it all works, like the people in this article. I am hoping that I can continue to do this for the most part because telling a machine to "just kinda do the thing, y'know" and relying on non-deterministic output scares the crap out of me. Doubly so when I stop being able to understand what's being done to the system.

    And one of the devs in the article is from a fintech firm? Really? Man. This isn't good. Well, for them, anyway. For the rest of us it sounds like we have a lot of cleanup work to do...

  • by znrt ( 2424692 ) on Wednesday May 13, 2026 @06:05PM (#66142379)

    There's no way to evaluate whether that much code is well-written or secure

    sorry? then you're not doing your job.

    pre-llm developers didn't remember how to do e.g. asm system calls either, and that's not brain rot but abstraction. llms introduce a whole new level of abstraction, but it's non-deterministic, so you can use as much llm as you like but you still have to do your job. if you don't do your job then you simply aren't a software developer, you're a vibe coder.

    and vibe coders are fine, they can do damn cool stuff, but they aren't software developers and shouldn't be discussing software development. vibe coding is building software without any engineering rigor; the result should be regarded as a mere curiosity, poc or prototype until it has been validated by an engineer.

    long story short: if you produce a ton more lines of code then good for you, but you'll have to hire a lot more software developers. cry me a river.

  • If using an LLM for coding is rotting your brain, then you likely were never using your brain, you were simply translating a requirement from one human language into software. That's accounting, not creating, and your brain has been rotting the entire time.

    Seriously. Software 'development' is little more than acting as a human requirements compiler, and that ship has sailed. Engineers - of any discipline - applying math & developing algorithms - is an endeavor that takes far more than 'software devel

    • by ffkom ( 3519199 )
      Your "no true Scotsman" argument isn't compelling. If people used LLMs only to "lookup facts" or to "translate one language into another", there would be way less reason for concern (well, they would still need to retain the ability to check the results, but that is another topic). But people use LLMs for everything, en masse, from "applying math & developing algorithms" to asking what time it is and what to eat today.
    • Who is so narrow-minded they believe language translators are accountants? Surely not the people who translated hieroglyphs ... Sanskrit ... cuneiform ... Linear-B etc. Or who translate English into Russian during a nuclear confrontation (we pray ...). So who then? Only accountants?
    • Re: (Score:2, Troll)

      by gweihir ( 88907 )

      And in actual reality, LLMs cannot do "requirements compilation". That one requires General Intelligence.

  • "There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same, ... We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)

    How did you not build a rat's nest BEFORE AI?

    AI increases output: it magnifies existing issues, but it does not magically create new ones. I strongly suspect when you had hundreds of other programmers

    • Re: (Score:3, Insightful)

      by gweihir ( 88907 )

      That is really nonsense. With actual intelligence you get better at things and the tech debt gets smaller. With code reviews you evaluate not only the code but the coder. Not all juniors turn into competent coders, and you steer those who don't onto other paths.

      None of that works for LLMs.

    • "How did you not build a rat's nest BEFORE AI?"

      By employing a load of rats. How did you do it?

  • by thecombatwombat ( 571826 ) on Wednesday May 13, 2026 @06:23PM (#66142417)

    "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

    But now it's not a matter of not being smart enough; it's about leaving yourself just the exhausting, miserable work that never should have existed in the first place.

    • by gweihir ( 88907 )

      Indeed. As you get better as a coder, debugging may get harder but you need far less of it. LLMs killed that and, on top of that, produce "review resistant" code. I expect we will see a lot of LLM-caused burnouts in the next few years and that will reduce the number of desperately needed good coders even further.

    • by Jeremi ( 14640 ) on Thursday May 14, 2026 @01:05AM (#66142729) Homepage

      As astronaut Frank Borman put it, "a superior pilot uses his superior judgement to avoid situations which would require the use of his superior piloting skill".

      The programmer's version of that would be "a superior programmer uses his superior judgement to avoid creating the bugs that would require the use of his superior debugging skill".

  • by DMJC ( 682799 ) on Wednesday May 13, 2026 @06:29PM (#66142433)
    AI probably shouldn't be used outside of hobbies. I've been using it for a few months now. It's let me do things which I could never do as a mediocre programmer/someone who struggles with advanced algebra. But I would never want to, or claim to, be a software developer in a job. I've used it to add VR support to some games. Perfectly fine as a fun hobby, but there's no way I could support these programs/codebases as a paid job. I still hit issues where AI just stops working/doesn't understand certain problems and can't fix them. Sometimes you have to scrap an entire codebase and start again to get the output right. It's powerful, but I wouldn't use it in my job.
  • It's like guiding a bunch of junior devs and correcting their mistakes on steroids, all day long, every day. F*#ing exhausting!

  • by hydrodog ( 1154181 ) on Wednesday May 13, 2026 @07:05PM (#66142473)
    Anyone complaining about the AI ruining their ability to write PHP code is not very high on the food chain. Somewhere between clams and crabs. Seriously, while AI is not good at writing difficult, new data structures code, it is very good at writing GUI code, API calls, and with suitable prompting, I can control it very well. I think this article is alarmist, silly, and the people being interviewed seem to be on a par with completion 2 in terms of IQ.
  • at my large company, we have a fantastic group
    here's how we manage all of us using AI on our monolithic code base:

    1: our jira tickets are extremely well specified, by both humans and now also vetted by AI
    2: eng instructs ai to look at jira, and make a plan.
    3: 2nd ai "critique this plan like you hate it", you end up with a much better plan
    4: create unit tests that fail on current code but will pass when the bug is fixed or the feature is implemented. create as many as you need to definitively pin it down, run all tests and confirm they fail due to the lack of the bug fix or feature
    4.5: eng tests to ensure THEY can repro the bug
    5: implement the plan
    6: test against unit tests: do they all pass now? if not iterate: bad test? bad impl? critique, plan, iterate
    7: tests pass: eng now tests manually, ensure THEY no longer repro bug
    8: create PR. other engs now review PR. we have special pr review bots as well, iterate until all engs and bots are satisfied
    9: give it to QE. QE validates or we iterate more
    10: push to stage

    we're all pretty good at it. AI is only part of the job but it helps us A LOT

  • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday May 13, 2026 @08:07PM (#66142525) Journal

    Dude, I've been writing code for 40 years. I've used so many different tools, stacks, libraries and APIs that at this point I don't remember any of them, and I haven't remembered them for years, and it doesn't matter at all. Sure, I have to look everything up, but that's fine, that doesn't matter. What matters is that I know when something looks wrong, or hard to maintain, or inefficient, or insecure, or... pick the axis. And I can dig in and find the problem. Anyone can tell if code works, that's easy. Understanding when and why it might break or otherwise impose additional costs, that's the real skill.

    Which, as it happens, is exactly the skill you need to use an LLM effectively. Also the skill you need to understand legacy code, review colleagues' commits, etc., etc., etc. I used to say that the ability to read and understand code is an underrated skill, but an old friend corrected me at lunch a couple of weeks ago, saying that the ability to read and understand code is the most important software engineering skill, and always has been. Upon reflection, I agreed. And LLMs make this clearer than ever before.

    • +1 to this. And undue reliance on LLMs is the antithesis of being able to read and understand code, for the vast majority of LLM users. LLMs aren't designed to provide correct answers; they are designed to provide plausible answers. Wherein lies the trap.

      Bottom line IMO is that an LLM will help the good/experienced developer get things done faster, for a certain subset of problems. LLMs will hold back the inexperienced/novice developer, if not actually turn them into a liability.

    • Yeah, that one really came across more as, "oh crap, I'm getting older!"
      • Yeah, that one really came across more as, "oh crap, I'm getting older!"

        Really? Doesn't feel that way at all to me. What it feels like is that LLMs are a massive force multiplier for the skills I already have.

        • Oh, I'm not talking about those at all, just how when something I studied deeply in college slips my mind, I think, "damn, getting old". Which I still think is what the person quoted was actually dealing with. You and I are used to it (if you've done anything for 40 years). This guy may have been running into it for the first time and putting the blame elsewhere.
          • Oh, I'm not talking about those at all, just how when something I studied deeply in college slips my mind, I think, "damn, getting old". Which I still think is what the person quoted was actually dealing with. You and I are used to it (if you've done anything for 40 years). This guy may have been running into it for the first time and putting the blame elsewhere.

            Ah, gotcha. You were referring to the comment from the summary, not mine. Yeah, it's fun to watch the young'uns realize that they are absolutely going to spend their whole lives realizing they forgot something they used to know. It's even more fun to watch them the first time they look at code they wrote two months ago and say "Who wrote this stupid shit? Oh....".

  • by kschendel ( 644489 ) on Wednesday May 13, 2026 @08:14PM (#66142529) Homepage

    Our group has been experimenting with LLMs (I refuse to call it AI because it's no such thing) on a reasonably large and extremely complicated code base. What we're finding is that while the LLM is often right, when it's wrong, it's plausibly wrong. That's problem #1: undue dependence on the LLM weakens the group's sense of "that's not the right answer", leading to bug churn.

    Problem #2 is that a newer developer relying on LLMs for code writing or debugging misses out on the chance to develop that sense of how it all works. Left unchecked, you get a bunch of guys who don't actually know at a deep level how the system operates. That is not going to end well. (See #1: if nobody has that sense of "that's not right", well ...)

    The third finding is that the quality of results depends very strongly on the quality of the LLM prompts. This goes back to the classic "Ask a Foolish Question" conclusion: to ask a proper question, one has to already know at least part of the answer. The only way to get there is to have at least some decent understanding of the code base, which one is not going to get by relying on the LLM for all one's work. ("Ask a Foolish Question" is an excellent and classic Robert Sheckley short; if you haven't read it, kindly do so.)

    Careful use of the LLM by experienced developers who already know the system, at least at a high level if not the details of every area, and who can prompt the LLM in the right direction, seems to be an advance. We see more bug fixes; without the LLM we might fix (say) 3 bugs in the time that using the LLM can fix 10, even if 3 of those "fixes" are wrong and have to be nak'ed or reverted. Reliance by less experienced developers on having LLMs fix bugs for them is the slippery slope to the nether regions.

  • But these are smart people, and you can only fool them for a while. And they start to notice that something is really badly off. Good.

  • We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same

    I mean, that's an easy one...use AI to do the evaluation!

    It might be funny, if bosses weren't actually demanding this!

  • by dskoll ( 99328 ) on Wednesday May 13, 2026 @11:14PM (#66142673) Homepage

    I retired from software development in 2023. My mantra is:

    I'm so glad I retired when I did. I'm so glad I retired when I did. I'm so glad I retired when I did....

  • by greytree ( 7124971 ) on Thursday May 14, 2026 @01:57AM (#66142747)
    "A software engineer at the FAANG"

    It's obvious the author of the article doesn't understand what he's writing, ironically enough.
  • You can use AI to replace thinking about a problem.
    But you can also use AI to make you think more about your code, not less.
    And I often (but maybe not often enough) try to do the latter.

    I (try to make myself) use AI as a smart junior colleague, one who comes up with either crazy or over-elaborate or very good or broken or simply awful solutions to the problems I give it.

    I then have to work out what its code is trying to do and either reject the extreme stuff or, more often, tailor it to be more like what I would have written myself.
    • Identifying <good> or <bad> code/text/math/art is a lower intellectual skill than generating <good> or <bad> code/text/math/art. See any "skills" hierarchy for details. Without doubt, use of code-producing *.ai deskills coders, in a way analogous to CNC machines deskilling lathe-workers. If the task-at-point is repetitive and the output a commodity, then machine automation proves doable, cheaper, and perhaps higher quality. Otherwise an extend
  • by Inglix the Mad ( 576601 ) on Thursday May 14, 2026 @10:20AM (#66143099)

    "There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me.

    Years ago my high school computer lab teacher taught us in our programming portion of the class: "When you don't know if it is secure, you should use the premise it is insecure until proven otherwise."

    I think I can see why American corporations fail basic security audits today. They don't allow their programmers to follow basic concepts that were taught in high school.

  • I've only used AI as a kind of super-search. If I have some code not working, I'll go to an AI with the prompt "Write C++ code to do [whatever I'm trying to do] with [toolkit I'm using, specific classes, &c]." I don't think I've ever copy-and-pasted the results, but it's good at finding missing initialization calls, or calls that have to be made before changes in visualization packages, &c. Those can be hard to dig out of documentation, or find in somebody else's project.
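    A sketch of the kind of precondition that's easy to miss in documentation and quick for this sort of super-search to surface (the Renderer class and its init() requirement here are hypothetical, not from any real toolkit):

    ```cpp
    #include <cassert>
    #include <stdexcept>

    // Hypothetical toolkit class: draw() silently assumes init() already ran.
    // Real visualization packages often bury this precondition deep in the docs;
    // this toy version at least fails loudly.
    class Renderer {
        bool ready = false;
    public:
        void init() { ready = true; }  // must be called before draw()
        int draw() {
            if (!ready) throw std::logic_error("Renderer::init() not called");
            return 42;  // stand-in for actual rendering work
        }
    };

    int main() {
        Renderer r;
        bool threw = false;
        try { r.draw(); } catch (const std::logic_error&) { threw = true; }
        assert(threw);           // forgot init(): caught here, subtle in real toolkits

        r.init();
        assert(r.draw() == 42);  // init-then-draw is the required call order
        return 0;
    }
    ```

    The failure mode in real packages is usually worse than an exception: a blank window or garbage output with no hint that a setup call was skipped.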
  • "I had some issues where I forgot how to implement a Laravel API"

    As long as you still know how to read the documentation, it will likely come back to you fairly quickly.
    I guess it is like when you switch to a different framework / API / etc.: stuff like that leaves your short-term memory after a while, and you have to re-learn it, but re-learning it takes a lot less time than it took to learn it originally.

  • If you went "to university" to learn Laravel, you got scammed, and you learned nothing.
  • How can you "forget how to code"? That's like forgetting how to tie your shoes or ride a bike - I can't imagine it happening to anyone.

"But what we need to know is, do people want nasally-insertable computers?"
