Software Developers Say AI Is Rotting Their Brains (404media.co) 117
An anonymous reader quotes a report from 404 Media: On Reddit, Hacker News and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but about how using AI to get the job done is often a more time-consuming, harder, and more frustrating experience, because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to.
"We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. "We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)." "I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code," the software developer at a small web design firm told 404 Media. "It's making me dumber for sure," the fintech software developer added.
"It's like when we got cellphones and stopped remembering phone numbers, but it's grown to me mentally outsourcing 'thinking' in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself ill just use it for inspiration but it ends up being my only thought. It gives you the illusion of productivity and expertise but at the end of the day you are more divorced from the output you submit than before."
A software engineer at the FAANG said: "When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company's] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that."
"We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. "We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)." "I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code," the software developer at a small web design firm told 404 Media. "It's making me dumber for sure," the fintech software developer added.
"It's like when we got cellphones and stopped remembering phone numbers, but it's grown to me mentally outsourcing 'thinking' in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself ill just use it for inspiration but it ends up being my only thought. It gives you the illusion of productivity and expertise but at the end of the day you are more divorced from the output you submit than before."
A software engineer at the FAANG said: "When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company's] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that."
Brain rot even farther back ... (Score:5, Insightful)
"It's like when we got cellphones and stopped remembering phone numbers, ...
Or home phones with speed-dial.
You let some one/thing do tasks for you and you eventually forget how to do them yourself.
Re: (Score:3)
If you let yourself forget, sure.
I've been using electric arcs to start gas grills for a long time now, but I still know how to use a match.
Re: Brain rot even farther back ... (Score:5, Insightful)
It depends on the complexity... Safety matches are an ultra-simplified procedure. I was taught to use flint and steel but have forgotten how.
Re: Brain rot even farther back ... (Score:4, Funny)
Re: (Score:2)
I've been using electric arcs to start gas grills for a long time now, but I still know how to use a match.
Technically, using a match and remembering numbers are probably different kinds of memory.
Re: Brain rot even farther back ... (Score:3)
Matches? "Keep dry" is the most complicated instruction in the entire process. I'm not going to say striking it against the strip is intuitive exactly but you only need to see it once as a child and you understand how all matches work more or less.
".. how to implement a Laravel API"
Is NOT fucking intuitive. AND someone reading this in a couple years will go WTF even is that, oh it was a thing some people used before the thing that replaced some other thing. Many people are reading this now and saying WTF is
Re: (Score:3)
Just saying.
Re: (Score:2)
Good point, and perhaps things like that are more a matter of how imprinted they are. Some of those phone numbers you remember were very important, and presented as such, so are probably stored as such. I imagine some phone numbers, even currently used ones aren't as memorable. Assigning them to speed-dial would probably make remembering them harder as you know you don't have to.
Re: (Score:2)
So in reverse social engineering, to get my kids to remember that somewhat harder number, I configured the family tablet to have that as a passcode. What do you know, it took the kids about a day and they had it memorized. Hmmm, I'll have to check if they still remember...
Re: (Score:2)
Has this destroyed your life? I remember a few phone numbers, but that’s about it. It has not ended the world.
This is a weird analogy for everyone to pick. If I don’t have to remember the 4 arguments that I have to add to every single API call I make, that’s a win in my book.
Two memorized phone numbers will get me through the rest of my life without having to waste memory space on others. Claude allowing me to drop a metric fuckton of idiosyncrasies and syntax is of even more value.
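To make that concrete, a rough sketch of the usual trick (invented function and argument names, not any real library): bind the boilerplate once and stop memorizing it.

    import functools

    # Hypothetical API call that always needs the same four arguments.
    def api_call(endpoint, *, api_key, region, timeout, retries):
        print(f"calling {endpoint} in {region} (timeout={timeout}, retries={retries})")

    # Bind the repeated arguments once; callers only supply what varies.
    call = functools.partial(api_call, api_key="...", region="us-east-1",
                             timeout=30, retries=3)
    call("/users")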
Re: (Score:2)
Has this destroyed your life? I remember a few phone numbers, but that’s about it. It has not ended the world.
This is a weird analogy for everyone to pick. If I don’t have to remember the 4 arguments that I have to add to every single API call I make, that’s a win in my book.
Two memorized phone numbers will get me through the rest of my life without having to waste memory space on others. Claude allowing me to drop a metric fuckton of idiosyncrasies and syntax is of even more value.
I was just expanding on the original cellphone analogy, noting that the same sort of thing is older than that -- for the youngsters who never had a landline. :-) From a practical standpoint, those two examples won't destroy your life, but they're examples of the consequences of letting technology do things for you -- which isn't necessarily bad, just a price.
Being able to forget API information isn't necessarily a bad thing, depending on how much you forget. If you forget too much and you're totally relian
Re: (Score:2)
"You let some one/thing do tasks for you "
And this is why you are a subsistence farmer who does not have a computer.
Re: (Score:2)
"You let some one/thing do tasks for you "
And this is why you are a subsistence farmer who does not have a computer.
Actually, I'm a software engineer and systems administrator who's worked on everything from PCs to a Cray-2, the latter at NASA LaRC. But I think my original point is valid. The less you do something yourself the more stale you get. I'm not saying that's necessarily a bad thing. For example, an engineer moving up the management ladder still needs to understand the work, but isn't doing it anymore and may be rusty if forced to actually do the grunt work. That's the price paid.
Re: Brain rot even farther back ... (Score:5, Funny)
Given your DEI reference, I'd say dementia has already gotten a hold on you. Maybe you weren't taught empathy in whatever remains of your memory of school.
Re: (Score:2)
Empathy isn't really taught. Either people acquire it after a certain age or they don't. That is true for both cognitive and affective empathy, only the ages differ. Hence GP probably never grew up enough to develop any empathy in the first place.
Re: (Score:2)
I think it's mostly the winners'-side bias of lucky rich people for "righteous indignation". Also a common affliction among religious fanatics. But they aren't morally right. Just lucky or fanatical or both.
Re: (Score:2)
I'm older than you, and I've worked with people from all over the world, not just black and female Americans.
On the other hand, the one professional job I *hated* was at the Scummy Mortgage Co in Austin, TX, all the managers, and my VP, were white, and assholes.
Just like you.
Re: (Score:2)
That age group got brain damage from lead poisoning instead.
Use it or lose it (Score:5, Insightful)
Knowledge in general is use it or lose it. I remember my grandpa showing me how to use a slide rule and lookup tables in books, and waxing about how his coworkers were worried that calculators were going to rot brains. Even in math, tools like stats packages, Maple, or Wolfram Alpha have kept shifting where knowledge is needed.
What's scary here is that the need for knowledge isn't being shifted; the practice is just being outsourced.
Re: (Score:1)
Sadly, I have forgotten how to use a slide rule, though my old slipstick is still sitting at the back of the bookshelf near my computer. Probably covered with dust, though.
I do still use my abacus occasionally, but not "as designed". It's handy as all get-out for binary arithmetic and tracking bit flipping. Which isn't what an abacus is for, of course, but that's what I use it for.
Re: (Score:2)
I still remember how to use a slide rule (for multiplication, anyway...) even though I haven't actually used one in decades.
It's all just logarithms. Logarithms turn multiplication into addition.
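For anyone who never held one, a tiny sketch of the principle in Python (an illustration, not a real slide-rule emulator): the rule physically adds two log-scaled lengths, and undoing the log recovers the product.

    import math

    # Adding logarithms multiplies the underlying numbers: log(a*b) = log(a) + log(b).
    def slide_rule_multiply(a: float, b: float) -> float:
        return 10 ** (math.log10(a) + math.log10(b))

    print(slide_rule_multiply(2, 8))  # ~16.0, up to floating-point error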
Re:Use it or lose it (Score:5, Interesting)
The scary thing I observe in many colleagues, right now, is that they really atrophy their brains. It does not matter that they forget about certain API calls or programming language features, those could be picked up again quickly from the documentation. What matters is that they stopped having any sophisticated thoughts, they outsource all thinking to LLMs, and start using them for more and more mundane tasks while becoming more and more uncomfortable when asked to think on their own. They cannot answer even simple questions about "their" work results anymore, because the results aren't really "theirs", but fell out of some LLM.
I hope some people will be able to use LLMs responsibly, just as a tool rather than as a brain substitute. But I am concerned many people will atrophy their brains for good.
Re: (Score:2)
Throwing out slide rules was a pretty expensive mistake. As a competent slide-rule user you do not make order-of-magnitude mistakes. As a calculator user, that is a main risk. That does not mean you always have to use a slide rule, but if it were still taught, you would have a fast way to check calculations with a different tool and stay in practice with very little effort.
Re:Use it or lose it (Score:5, Insightful)
I think there's a big difference between calculators and AI. Calculators made doing arithmetic much easier. But arithmetic is just rote; there's no creativity involved. If you are asked for the product of 59 * 74, you're going to get 4366 if you do it correctly, whether you do it in your head, on paper, or with a calculator. And if you do without a calculator, you're still going to follow a rote algorithm.
Software development is different. Writing a piece of software requires creativity, IMO, for all but the most trivial of programs. Give three different expert programmers the same spec and you'll almost certainly get three quite different but correct programs. Outsourcing creativity is very different from outsourcing rote, deterministic algorithmic processing. Creativity is regarded as what makes us human (or it used to be, anyway) and I for one don't want to outsource that. That's why I don't use AI for anything, and why I'm happy I retired from paid software development three years ago.
I maintain a few hobby projects, one quite actively, and I do not allow AI anywhere near them. I get to express my creativity and not care about managers demanding I use AI.
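To make the "rote" point concrete, here is a sketch of the schoolbook method in Python (my own illustration): every correct execution, in your head, on paper, or in silicon, grinds through the same deterministic steps.

    # Schoolbook long multiplication: digit-by-digit partial products with carries.
    def schoolbook_multiply(a: int, b: int) -> int:
        total = 0
        for shift, digit_b in enumerate(reversed(str(b))):
            carry, partial = 0, 0
            for place, digit_a in enumerate(reversed(str(a))):
                carry, d = divmod(int(digit_a) * int(digit_b) + carry, 10)
                partial += d * 10 ** place
            partial += carry * 10 ** len(str(a))
            total += partial * 10 ** shift
        return total

    assert schoolbook_multiply(59, 74) == 4366  # the product from the comment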
Re: (Score:2)
It is also possible that, for all but perhaps presentation and UI, creativity in programming is a story we told ourselves, and that is why some of this is so upsetting.
Give three different expert programmers the same spec and you'll almost certainly get three quite different but correct programs.
Correct in that for the same inputs they give the same outputs, sure. However, if we are being really honest, either some are more correct than others, or, after the compiler removes all the formatting and strips the symbols, the resulting output is the same give or take some register choices and other trivialities.
The correct code is going to be the better m
Re: (Score:2)
I use calculators all the time, but I'm glad that I did have to do maths by hand at school. It gives me the ability to estimate or have a feel for what I'm expecting the result to be, which helps me spot errors made operating the calculator or in the numbers/assumptions I'm plugging into it.
Re: (Score:1)
I used a 48SX in college and it has one hell of a learning curve. You had to know how to use it, which is not simple, and you had to know how to translate algebraic entry to RPN.
Re: (Score:2)
I've just started teaching math. The students reach for the calculator IMMEDIATELY.
I asked what "half times a half" was, expecting a quick and obvious answer. I got guesses. "Is it zero?" "One!" "I hate fractions!"
Converting percentages to decimals is also atrocious. "What's 5% as a decimal?" "0.5?"
13 and 3/4 as a percentage was the next confusion.... was it 0.1334?
These students are 15-16 years old. I think we lost something along the way when the tool wasn't being used just to automate what we already kne
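For the record, the answers being fumbled, as a trivial check (my own illustration):

    print(0.5 * 0.5)            # half times a half = 0.25, one quarter
    print(5 / 100)              # 5% as a decimal = 0.05, not 0.5
    print((13 + 3 / 4) / 100)   # 13 3/4 percent as a decimal = 0.1375, not 0.1334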
Re: (Score:2)
We see this problem + AI is a tool, not a religion (Score:5, Insightful)
There's a temptation to let Claude do everything... but when I've tried it, I had to edit it heavily. Usually the code it produced was unprofessional or didn't even resemble working code. However, it did help me out a few times with libraries I've never used before. I just am very careful about writing my own unit tests and verifying end to end. Additionally, I've been lazy and just pointed Claude at a stacktrace and asked it to tell me why it was failing (a project I'm unfamiliar with). It failed 100% of the time. In fairness, so did I... they were tricky bugs... I had to contact the author and have him explain what he intended to do. Its ability to understand code is really lacking... whereas that should be its greatest strength.
I am an AI realist. I give it credit where it works and complain where it's overhyped. I have multiple AI evangelists on my team. For them, it's a religion...do everything in AI...AI is all powerful. To me, it's a tool in my toolbox.
The difference between us is that I see AI as it is today....their vision is AI as they imagine it...based on sci fi books and movies. In their vision, Claude is smart and knows what it's doing and will guide you to the promised land with a layover in nirvana and bliss. All hail AI!!!!
The disturbing part is they seem to have noticeably regressed and believe Claude over their own judgment.
Re:We see this problem + AI is a tool, not a relig (Score:5, Insightful)
I have to scrutinize pull requests much more so than ever before
The disturbing part is they seem to have noticeably regressed
And I think this is core to the discussion, because output from evangelists is going up while hollowing out the skills needed for the next generation to do the review.
Re: (Score:2)
> The disturbing part is they seem to have noticeably regressed and believe Claude over their own judgment.
I find your lack of faith in AI disturbing.
Re: (Score:2)
I wouldn't say I'm an expert, but I've got some history. My take is that you need one AI to write the code, and another to write the tests. And I'm also finding the most useful tests are high-level functional type tests because they shake out stupid stuff AI does where it writes some code that fits the unit tests but doesn't actually do what it needs to do.
Unit tests seem to be useful to get the code-writing AI to actually write code that runs/compiles or whatever. Humans would write unit tests to be more use
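A sketch of the gap being described, with invented names (my own illustration): the narrow unit test passes while a functional test catches that the code doesn't handle real input.

    # Hypothetical AI-written helper plus two kinds of tests (pytest style).
    def parse_price(text: str) -> float:
        return float(text.replace("$", ""))

    def total_invoice(lines: list[str]) -> float:
        return sum(parse_price(line) for line in lines)

    def test_unit_parse_price():
        assert parse_price("$3.50") == 3.50  # narrow unit test: passes

    def test_functional_invoice_total():
        # higher-level expectation: real invoices contain thousands separators
        assert total_invoice(["$3.50", "$1,200.00"]) == 1203.50  # fails: comma unhandled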
One time? (Score:2)
Gemini is pretty good at unit tests. One time I asked it to write a test for a behavior, and it did, but it also fixed a bug in the implementation. And it was right.
"One time" is far from reassuring. Sometimes the AIs get it right. However, if I am sending an AI to it, it's too complex for me to figure out at first glance. I am typically sending it complex projects with a lot of steps to figure out. AI is a nice upgrade from Stack Overflow and a powerful tool. However, in order to justify the AI-washing layoffs, it has to be a lot more reliable than "one time." I get failures daily.
I have not been impressed with Claude's unit tests. They're usually stupidly
Re: One time? (Score:2)
You're fired cause the manager says it works (Score:2)
Re: (Score:2)
People getting fired because the managers guarantee vibe coding works.
And even when they notice that vibe coding does not work that great, they will still try to move expenses away from wages towards tokens paid to some LLM host. And once they find out how expensive that gets over time... well, they will probably have been replaced by LLMs themselves by that time.
Re: (Score:2)
Indeed. Once again non-tech personnel think they know how tech works and can make competent decisions about it. All that shows is that software engineering is a very immature discipline and that the "managers" are still (as they always were) generally really bad at their jobs. Imagine a "manager" telling a construction engineer that a bridge will definitely take a certain load when the engineer knows that is not true. What would happen is that the engineer escalates or quits. Non-tech personnel cannot make c
It stops the development of new knowledge too (Score:5, Insightful)
Re: (Score:2)
I mean, that's not a bad thing either. I sometimes DO NOT want to learn "new to me" things. I've been contributing to an ancient, but still used, piece of software called Xastir. It's VERY OLD spaghetti code, low-level X11 with Motif. I DO NOT want to learn Motif. It's not a marketable skill or something I'll ever need. But I let the AI code a few contributions (one of them was replacing some parts with Cairo fonts for antialiasing on high-DPI screens, and the other was fixing a very old screen drawing routine that took 2-
Re:It stops the development of new knowledge too (Score:5, Insightful)
Could I have fixed this bug? Not even in my wildest dreams. Do I care how it was fixed? Oh no. No I don't. I just checked that the output of the LLM was reasonable.
The risk in this scenario is that after a few iterations of people applying AI-generated "black box" modifications, users start reporting that the ancient app is crashing on them now and then, and nobody has the first clue why, or how to fix it... and since the crash isn't readily reproducible, you can't even do a "git bisect" to figure out which commit introduced the regression. Now you're left with two unappetizing choices: either live with the instability forever, or roll back all of the "blind" commits to the last known-stable version and never touch the codebase again.
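For contrast, when a crash is reproducible the hunt is mechanical; a minimal sketch of a probe script one might hand to git bisect run (invented build command, my own illustration), which is exactly the tool an irreproducible crash takes away:

    #!/usr/bin/env python3
    # Hypothetical bisect probe: exit 0 if this revision is good, non-zero if bad.
    # Usage: git bisect start; git bisect bad HEAD; git bisect good <last-stable>;
    #        git bisect run python3 probe.py
    import subprocess
    import sys

    result = subprocess.run(["make", "check"])  # assumed build-and-test target
    sys.exit(0 if result.returncode == 0 else 1)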
Alternative view: programming things that suck (Score:2, Interesting)
I use AI for coding, but not often.
At work we needed a very minimal Windows app that makes a websocket connection, performs some tasks, and has a very simple UI that shows an activity log.
Windows GUI programming makes me want to pull my teeth out and I don't enjoy it at all. The last time I dealt with it was years before LLMs, and I didn't like it then either. So I was quite happy to have an agent do most of the work even with mistakes and clunky unrefined code.
I don't feel like I've lost any skill by steer
Those Pull Requests (Score:3)
I received my first AI-generated pull request recently. It was... not great. A lot of extra code that was not necessary at all, some odd naming conventions, and the size of it all made the whole change set difficult to parse. This wasn't a typical "Well, this works and it's okay, it's just not the way I would do it." Some sections were legitimately terrible.
I have been using AI tools somewhat, but mostly to examine existing structures and answer questions. It's pretty good at that. But the code? I prefer to write it myself. That way I don't forget how it all works, like the people in this article. I am hoping that I can continue to do this for the most part because telling a machine to "just kinda do the thing, y'know" and relying on non-deterministic output scares the crap out of me. Doubly so when I stop being able to understand what's being done to the system.
And one of the devs in the article is from a fintech firm? Really? Man. This isn't good. Well, for them, anyway. For the rest of us it sounds like we have a lot of cleanup work to do...
that's your problem right there: (Score:5, Insightful)
There's no way to evaluate whether that much code is well-written or secure
sorry? then you're not doing your job.
pre-llm developers didn't remember how to do e.g. asm system calls either, and that's not brain rot but abstraction. llms introduce a whole new level of abstraction but it's non-deterministic, so you can use as much llm as you like but you still have to do your job. if you don't do your job then you simply aren't a software developer, you're a vibe coder.
and vibe coders are fine, they can do damn cool stuff, but they aren't software developers and shouldn't be discussing software development. vibe coding is building software without any engineering rigor, the result should be regarded as a mere curiosity, poc or prototype until it has been validated by an engineer.
long story short: if you produce a ton more lines of code then good for you, but you'll have to hire a lot more software developers. cry me a river.
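to make the abstraction point concrete, a tiny sketch (my own illustration): the same output at two levels, and nobody calls the lower one brain rot.

    import os
    import sys

    # high level: the runtime handles buffering and encoding for you
    print("hello")
    sys.stdout.flush()  # keep the two outputs in order

    # low level: essentially the write(2) system call
    os.write(sys.stdout.fileno(), b"hello\n")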
The Developer is dead - long live The Engineer (Score:1, Troll)
If using an LLM for coding is rotting your brain, then you likely were never using your brain, you were simply translating a requirement from one human language into software. That's accounting, not creating, and your brain has been rotting the entire time.
Seriously. Software 'development' is little more than acting as a human requirements compiler, and that ship has sailed. Engineers - of any discipline - applying math & developing algorithms - is an endeavor that takes far more than 'software devel
Re: (Score:1)
Re: The Developer is dead - long live The Engineer (Score:3)
This article was about software developers, what I consider to be a dying field.
What you seem to be describing is the much broader field of being an idiot, which no LLM can truly mask, but it might mitigate and normalize.
Re: (Score:2)
Re: (Score:2, Troll)
And in actual reality, LLMs cannot do "requirements compilation". That one requires General Intelligence.
People Seem to Forget Problems Existed Pre-AI (Score:2, Informative)
"There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same, ... We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)
How did you not build a rat's nest BEFORE AI?
AI increases output: it magnifies existing issues, but it does not magically create new ones. I strongly suspect when you had hundreds of other programmers
Re: (Score:3, Insightful)
That is really nonsense. With actual intelligence you get better at things and the tech debt gets smaller. With code reviews you evaluate not only the code but also the coder. Not all juniors turn into competent coders, and you steer those into other paths.
None of that works for LLMs.
Re: (Score:2)
By employing a load of rats. How did you do it?
Brian Kernighan nailed this decades ago (Score:5, Interesting)
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
But now it's not a matter of not being smart enough; it's that you're left with only the exhausting, miserable work that never should have existed in the first place.
Re: (Score:3)
Indeed. As you get better as a coder, debugging may get harder but you need far less of it. LLMs killed that and, on top of that, produce "review resistant" code. I expect we will see a lot of LLM-caused burnouts in the next few years and that will reduce the number of desperately needed good coders even further.
Re:Brian Kernighan nailed this decades ago (Score:4, Interesting)
As astronaut Frank Borman put it, "a superior pilot uses his superior judgement to avoid situations which would require the use of his superior piloting skill".
The programmer's version of that would be "a superior programmer uses his superior judgement to avoid creating the bugs that would require the use of his superior debugging skill".
Thoughts on AI (Score:3)
Duh (Score:2)
It's like guiding a bunch of junior devs and correcting their mistakes on steroids, all day long, every day. F*#ing exhausting!
Laravel API? Not possible to go lower (Score:5, Interesting)
the opposite is true for me (Score:3)
at my large company, we have a fantastic group
here's how we manage all of us using AI on our monolithic code base:
1: our jira tickets are extremely well specified, by both humans and now also vetted by AI
2: eng instructs ai to look at jira, and make a plan.
3: 2nd ai "critique this plan like you hate it", you end up with a much better plan
4: create unit tests that fail on current code but will pass when the bug is fixed or the feature is implemented (see the sketch after this list). create as many as you need to definitively pin it down, run all tests and confirm they fail due to lack of bug fix or feature
4.5 eng tests to ensure THEY can repro bug
5: implement the plan
6: test against unit tests: do they all pass now? if not iterate: bad test? bad impl? critique, plan, iterate
7: tests pass: eng now tests manually, ensure THEY no longer repro bug
8: create PR. other engs now review PR. we have special pr review bots as well, iterate until all engs and bots are satisfied
9: give it to QE. QE validates or we iterate more
10: push to stage
we're all pretty good at it. AI is only part of the job but it helps us A LOT
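a minimal sketch of step 4 (invented bug, pytest style, my own illustration): the test must fail on current code and pin the behavior down before anyone touches the implementation.

    # current, buggy implementation: forgets to strip whitespace
    def normalize_email(addr: str) -> str:
        return addr.lower()

    # step-4 test: fails now, passes once the fix lands
    def test_normalize_email_strips_whitespace():
        assert normalize_email("  User@Example.com ") == "user@example.com"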
Forgot how to implement a Laravel API... (Score:5, Insightful)
Dude, I've been writing code for 40 years. I've used so many different tools, stacks, libraries and APIs that at this point I don't remember any of them, and I haven't remembered them for years, and it doesn't matter at all. Sure, I have to look everything up, but that's fine, that doesn't matter. What matters is that I know when something looks wrong, or hard to maintain, or inefficient, or insecure, or... pick the axis. And I can dig in and find the problem. Anyone can tell if code works, that's easy. Understanding when and why it might break or otherwise impose additional costs, that's the real skill.
Which, as it happens, is exactly the skill you need to use an LLM effectively. Also the skill you need to understand legacy code, review colleagues' commits, etc., etc., etc. I used to say that the ability to read and understand code is an underrated skill, but an old friend corrected me at lunch a couple of weeks ago, saying that the ability to read and understand code is the most important software engineering skill, and always has been. Upon reflection, I agreed. And LLMs make this clearer than ever before.
Re: (Score:2)
+1 to this. And undue reliance on LLMs is the antithesis of being able to read and understand code, for the vast majority of LLM users. LLMs aren't designed to provide correct answers, they are designed to provide plausible answers. Wherein lies the trap.
Bottom line IMO is that the LLM will help the good / experienced developer get things done faster, for a certain subset of problems. LLMs will hold back the inexperienced / novice developer if not actually turn them into a liability.
Re: (Score:2)
Re: (Score:2)
Yeah, that one really came across more as, "oh crap, I'm getting older!"
Really? Doesn't feel that way at all to me. What it feels like is that LLMs are a massive force multiplier for the skills I already have.
Re: (Score:2)
Re: (Score:2)
Oh, I'm not talking about those at all, just how when something I studied deeply in college slips my mind, I think, "damn, getting old". Which I still think is what the person quoted was actually dealing with. You and I are used to it (if you've done anything for 40 years). This guy may have been running into it for the first time and putting the blame elsewhere.
Ah, gotcha. You were referring to the comment from the summary, not mine. Yeah, it's fun to watch the young'uns realize that they are absolutely going to spend their whole lives realizing they forgot something they used to know. It's even more fun to watch them the first time they look at code they wrote two months ago and say "Who wrote this stupid shit? Oh....".
Re: (Score:2)
As often the case, can be good if used properly (Score:4, Interesting)
Our group has been experimenting with LLMs (I refuse to call them AI because they're no such thing) on a reasonably large and extremely complicated code base. What we're finding is that while the LLM is often right, when it's wrong, it's plausibly wrong. That's problem #1: undue dependence on the LLM weakens the group's sense of "that's not the right answer", leading to bug churn.
Problem #2 is that a newer developer relying on the LLMs for code writing or debugging misses out on the chance to develop that sense of how it all works. Left unchecked, you get a bunch of guys who don't actually know at a deep level how the system operates. That is not going to end well. (See #1; if nobody has that sense of "that's not right", well ...)
The third finding is that the quality of results depends very strongly on the quality of the LLM prompts. This goes back to the classic "Ask a Foolish Question" conclusion: to ask a proper question, one has to already know at least part of the answer. The only way to get there is to have at least some decent understanding of the code base, which one is not going to get by relying on the LLMs for all one's work. ("Ask a Foolish Question" is an excellent and classic Robert Sheckley short; if you haven't read it, kindly do so.)
Careful use of the LLM by experienced developers who already know the system, at least at a high level if not details of every area, and who can prompt the LLM in the right direction, seems to be an advance. We see more bug fixes; without the LLM we might fix (say) 3 bugs in the time that using the LLM can fix 10, even if 3 of those "fixes" are wrong and have to be nak'ed or reverted. Reliance of less experienced developers on having LLMs fix bugs for them is the slippery slope to the nether regions.
Such a surprise (Score:2)
But these are smart people and you can only fool them for a while. And they start to notice that something is really, badly off. Good.
No way to evaluate? (Score:2)
We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same
I mean, that's an easy one...use AI to do the evaluation!
It might be funny, if bosses weren't actually demanding this!
My mantra (Score:3)
I retired from software development in 2023. My mantra is:
I'm so glad I retired when I did. I'm so glad I retired when I did. I'm so glad I retired when I did....
"A software engineer at the FAANG" (Score:3)
It's obvious the author of the article doesn't understand what he's writing, ironically enough.
Use AI to make yourself think (Score:2)
But you can also use AI so it makes you think more about your code, not less.
And I often (but maybe not often enough) try and do the latter.
I (try and make myself) use AI as a smart junior colleague, who comes up with either crazy or over-elaborate or very good or broken or simply awful solutions to the problems I give it.
I then have to work out what its code is trying to do and either reject the extreme stuff or, more often, tailor it to be more like what
Re: (Score:2)
Re: (Score:2)
There's no way to know? (Score:3)
"There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me."
Years ago my high school computer lab teacher taught us in our programming portion of the class: "When you don't know if it is secure, you should use the premise it is insecure until proven otherwise."
I think I can see why American corporations fail basic security audits today. They don't allow their programmers to follow basic concepts that were taught in high school.
I Use AI as a Better Form of Search (Score:2)
Not necessarily as bad as it looks ... (Score:2)
"I had some issues where I forgot how to implement a Laravel API"
As long as you still know how to read the documentation, it will likely come back to you fairly quickly.
I guess it is like when you switch to a different framework / API / etc.: stuff like that will leave your short-term memory after a while, and you have to re-learn it, but you will re-learn it a lot quicker than it took you to learn it originally.
you went to university for this? (Score:2)
Who are these people? (Score:2)
How can you "forget how to code"? That's like forgetting how to tie your shoes or ride a bike - I can't imagine it happening to anyone.
Re: (Score:3)
Re:You're doing it wrong (Score:4, Interesting)
I started using gemini and found it's far better than my best employee ever was.
My best employee was very very good, but I'd have to wait a day to see results of the meeting.
One thing he (best employee) did that AI can't do is make good judgement calls. No question there.
However, when the AI spits out a half day's work in 10 seconds, it allows me, the analyst/designer/project manager, to rapidly analyze the output and do another iteration of design ideas immediately, or as fast as I can analyze, process and respond.
So I can get dozens of turnarounds per day compared to even a good employee.
Working in small logical work units yields very good results. I haven't rolled up my sleeves and done any 12-hour days of deep concentration on code for years, and I don't need to. I have much knowledge and can review code, but I don't need to double-check syntax or look for typos, the grunt work.
I don't think that I'm losing anything; I do the architecture and design. I think I'm getting huge value and speed from gemini... the key to me is that I work at mid to high levels of abstraction, work in small logical units, review the output, and let the tool worry about the grunt work. I work as a product designer, it works as a coder. My designs are improving significantly from having the AI critique my designs and suggest various possible improvements or how to use tools that I did not know about. I don't need to code.

Caveats are that I am not building mission-critical or real-time software. The reality is maintenance is a dead concept. As the coding agents/models improve, you can conceivably drop your whole codebase into the NEXT better model every time a better model comes out, and it will do the optimizations and grunt work.
Don't hate me. I can see the future and it is grim for people, coders, entry level people. But "YOU WILL USE AI" for coding is here for non-mission-critical applications. It's sad but true to say that "quality" is a quaint and outdated concept... (like privacy)... good enough is today's "quality". Don't shoot the messenger, but barely working is still working. if it don't work replace it, don't maintain it.
There will always be a need for true experts, good designers, but the writing is on the wall, AI IS REPLACING all junior functions at this time. If you are doing a web-based database system, pfft, it barely matters if there is a bug... I regret that statement but I feel it's today's reality.
Re: (Score:2, Interesting)
Re:You're doing it wrong (Score:5, Insightful)
It might be a red flag if you want them to be focussed on babysitting the probabilistic code generator, but if you want an actual developer who can think through a problem on their own, a lack of AI usage in their studies is a huge benefit.
Re:You're doing it wrong (Score:4, Insightful)
Agree. Gemini and Claude are both super useful, so long as they are used properly. I haven't had as much luck with other models, so I stick with these two.
But how you use them, and how much you use them, depends greatly on the nature of your project. It still requires intelligence and skill to use them well, and if you use them poorly the results will burn you. And for some specific parts of a total solution, you simply can't use them, and will need to do those parts yourself. And it is on you to recognize which parts those are.
If you fall into the trap of just letting some tool like Cursor or Claude Code "do it all" for you, you will end up like the people in this article. Both of these are useful tools, but there is no other way to say it: you have to use them wisely. And you have to know what you are doing. If you are using them to solve problems that are too hard for you to solve, you (and your codebase) will drown.
Re:You're doing it wrong (Score:5, Insightful)
"There will always be a need for true experts, good designers, but the writing is on the wall, AI IS REPLACING all junior functions at this time."
The irony in this statement is so rich: where do you suppose true experts and good designers come from? They're made, not born, and the source material for an expert developer is a junior developer. Using AI to eliminate entry level developer positions is NOT a sustainable course. But it does serve the hypercapitalist masters who have created AI to serve their own selfish goals, so it has that going for it.
Re: (Score:2)
Let's agree though that this is happening, regardless of any quality measures or value to the serfs.
I think we are entering Revelation Space territory. (and other science fiction scenarios)
We're back to the dark ages of Wizards and serfs and monks and Mad Rulers.
We as a species are going to split into factions: religious rejectionists like Luddites, Robot-Human Hybrids, Evil Machines, etc.
Like in the middle ages, the vast majority of people will be unwashed masses, uneducated, but now wi
Re: You're doing it wrong (Score:2)
"think I'm getting huge value and speed from gemini... the key to me is that I work at mid to high levels of abstraction, work in small logical units, review the output, and let the tool worry about the grunt work. I work as a product designer, it works as a coder. My designs are improving significantly"
pretty much my experience. i ran into the indeterministic behavior/context window issues very early on and modified my methodology. i do small, discrete pieces, always add existing schemas/specs etc so Cha
Re: (Score:2)
Re: (Score:2)
Yeah, looks like we're headed to a world of only savants and drooling idiots, and all the expertise owned by Big Brains.
Open source, like the ollama approach, gives me some hope that Big Brains doesn't own everything.
Here's more hope: will the savants disrupt Big Brains?
Yeah, Bingo. Pretty much this. Except for one ... (Score:3)
... point:
But "YOU WILL USE AI" for coding is here for non-mission-critical applications.
Nope. For generic stuff I would, as of now, trust the AI to do mission-critical stuff better than any human as well. Case in point: just yesterday the newest Codex fixed an oversight of mine while doing another task and _explained_ to me that he/she/it was fixing an oversight in order to properly do that other thing I asked for. This was a non-trivial detail concerning state management and recovery in a non-trivial SPA. S
Re: (Score:2)
if it don't work replace it, don't maintain it.
You're building tent cities where people need housing.
Re: (Score:2)
Nvidia VP Bryan Catanzaro stated that AI compute costs "far" exceed employee salaries, making AI more expensive than human labor. While AI is used to replace workers, high energy and GPU costs make it less economical, with studies showing AI is only viable for a minority of tasks.
I would also argue that AI is less useful than humans. I'm not even going to go into the bizarre circular financing, ridiculous energy costs that are currently being borne by average ratepayers that will come home to roost, the insane backlog of datacenters that are far exceeding any theory of profitability, or even the fact the best AI models get called a "nothingburger" or receive a collective "meh
Re: (Score:2)
Re: (Score:2)
All I really see there is that the closer you are to things where getting the right answer does not matter, the more useful AI is. Not the ringing endorsement you think it is.
Re: (Score:2)
There will always be a need for true experts, good designers, but the writing is on the wall, AI IS REPLACING all junior functions at this time.
So what's your plan for getting the next generation of true experts?
Re: (Score:2)
There will be no next generation of true experts for the most part. The AI will contain our expertise. Only outlying savants will leverage it to benefit themselves.
*I'm already contradicting myself and being ironic as pointed out above*
There is no NEED for meatbags. We're splitting into pseudo religious factions. The Oligarchs will likely not train their kids to use it, then there will be hereditary leadership.. Like tribes and kings... Like Barron Trump, your new president for life. Like Larry Elli
Re: (Score:2)
Which is not something to look forward to. Requirements engineers are already poorly paid and under-hired for their workloads. Industry isn't about to start respecting their work just because AI cuts out other types.
Re: Holding it wrong (Score:1)
In properly engineered systems, only about 10% of the work is rote coding.
So anyone who 10x's their output using an LLM should be fired.
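The arithmetic behind the quip is basically Amdahl's law: if only the rote 10% gets faster, the overall gain is capped near 1.1x, so anyone claiming a genuine 10x is admitting the job was mostly rote. A quick check (numbers from the comment):

    # Amdahl-style bound: only the rote fraction of the work gets the speedup.
    rote_fraction = 0.10   # share of the work that is rote coding
    speedup = 10           # claimed LLM speedup on that share

    overall = 1 / ((1 - rote_fraction) + rote_fraction / speedup)
    print(f"{overall:.2f}x overall")  # ~1.10x in a properly engineered system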
Re: (Score:2)
That is a misuse of rote work. Rote work is there to allow junior devs to get into things and develop a general feel for them. If you are not slowly educating junior devs, you (or rather your organization) are doing it wrong.
As to "research new solutions", absolutely not. LLMs are really bad at giving necessary context, limitations, caveats an the like. At most, you should use an LLM for better finding of actual information sources.
Re: (Score:2)