Will Some Programmers Become 'AI Babysitters'? (linkedin.com) 150
Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google:
"AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert."
"While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs."
The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.
How do you develop that skill (Score:5, Insightful)
Re:How do you develop that skill (Score:5, Insightful)
I think they hope that in theory, by the time the senior programmers retire, you'll be replacing them with the AI as well.
In practice, none of the people involved seem physically incapable of thinking in terms of a timespan longer than their next round of bonuses.
Re: (Score:3)
Re: (Score:2)
Real talk - If you work at a public company and you don't have a seat in the boardroom complete with name plaque, you might well be laid off at any moment, for just about any reason. There is no corporate loyalty any more.
So should anyone in that sort of organization ever think past the time-span involving their next bonus? People have long accused C[X]Os of not looking past the quarterly earnings report, or past their next bonus, but maybe the rest of the workforce just needs to get the memo that you either
Re: (Score:2)
Indeed. Well, this is not the first time really big names (and small ones as well) in IT vanish or become irrelevant.
Re: How do you develop that skill (Score:2, Insightful)
Don't worry, it won't be long before AI can make these more architectural decisions. Senior programmers and architects seem to be living in a weird fantasy that the AI is not coming for their job too. No, software engineers won't become AI babysitters. Managers will. Software engineers will become jobless.
Re: How do you develop that skill (Score:5, Insightful)
That's the issue - it's all or nothing, just with weird caveats. Either:
1. The AI can do everything an engineer can do, in which case some business management person might come back and tell it that it was wrong with some assumptions on this or that (just like they would with a human), but it's otherwise fully autonomous, acting entirely on its own, or:
2. It can't.
The problem with #2 is that we'll spend so much time and money in thinking we're just a little ways away from #1 that no one is in the pipeline. There's also the risk of treating #2 like it's #1, where we let it make decisions, with no repercussions, and we just watch things burn.
I suppose there's a third option - it can do everything, *plus* mentoring a junior so that a human is still learning things just in case.
Re: (Score:3)
4. We use AI to do tasks that it is good at and humans do tasks that they are good at.
I don't understand why everyone is trying so hard to make AI do things that are pretty impossible for it. Do they hate programming so much?
Re: How do you develop that skill (Score:5, Insightful)
Broadly speaking, a lot of AI advocates believe AI can do every single job *except* their own.
In terms of hating programming, yes, actually a lot of the staunchest supporters hate programming. Because they can't do software development themselves but have somehow latched onto the business of software development. Business folks that carry a great deal of resentment that there are employees that have sufficient leverage over them to extract significant salaries and there's not a lot the business side can do to counter.
Code gen represents the possibility that they can have a fungible workforce where the labor has no particular leverage.
A lot of these folks are a bit unhinged in thinking that somehow codegen eliminates their need for skilled workers but somehow leaves them in the loop. Specifically, I saw a software sales org that thought it could get away with selling the act of typing the client's requests into prompts without any software development experience or skills.
Re: How do you develop that skill (Score:5, Interesting)
AIs are pretty good at programming.
It is a very strange /. myth that they are not.
No idea where this myth comes from, wishful thinking?
I recently took over ownership of a product that is nearly completely built by AI.
There is nothing to complain about. As it is a web product, I could not do better myself; I am more a backend or C++ developer. But the code is readable, the comments make sense and most important: stuff that the previous product owner hand coded in weeks the AI does in 10 minutes or less.
The turn around between:
- try this
- test and assess it
- throw it away if it is not good enough
Is less than a few hours, costs nearly nothing, and you can really do "experimental software development".
As I said: it is just a web site, so underneath not super complicated.
Re: (Score:2)
Re: How do you develop that skill (Score:3)
I think it's easily explained. Most people on slashdot are early tech adopters. When ChatGPT 3.5 burst onto the scene, they tried it. They tried using it to generate code, and they got laughable results. They're now convinced that AIs generate terrible code because they've not since gone back and given any reasonably recent Claude a go.
Re: (Score:3)
Probably in part. I think it's great that my IDE has been able to do a lot of the grunt-work for me for a year or more, but people I know who use LLMs to do most of their coding still say the code it generates is bad and they worry that it may be unmaintainable in future... they may be able to ship products faster, but will they be able to fix them in five years?
We'll find out in five years.
Re: How do you develop that skill (Score:3)
It's cute you think they'll keep the managers to do that.
The owners will hire cheap interns with AI experience to replace them.
And yes, eventually the owners will be jobless when the whole software as a service/product model falls apart. People will just ask their phones to do a thing, no app required.
Re: How do you develop that skill (Score:2)
Re: How do you develop that skill (Score:2)
I mean, the lower level managers won't have a job to do once they have no one to manage. It'll be reduced to just the managers who are needed to figure out what the product is.
No need for managers for that... (Score:3)
You do not need managers to figure out what the product is.
Usually it is a work of senior programmers...
Managers are just politicians - buddies of higher managers...
Re: How do you develop that skill (Score:4, Interesting)
I mean, the lower level managers won't have a job to do once they have no one to manage. It'll be reduced to just the managers who are needed to figure out what the product is.
Our management team are currently busy having AI write their single sentence reports into giant sprawling messes that the next up the chain uses another AI to summarize as a single sentence that may or may not resemble the original single sentence. They're already automating the main functions of their jobs, and they aren't bright enough to realize that's what they're doing. And all you need to decide what the product is is a sales manager and a marketing manager. The people that know the technical details don't get any say even now, so no need to worry about them getting let go as AI can bring the fantasies to life for the sales and marketing teams. Gonna be amazing when the only people left with jobs are the lowest dregs of humanity: marketing, sales, and advertising. I suppose sprinkle a few lawyers in there just to keep it deep in the mud. Glorious future we've got coming.
Re: (Score:2)
Re: (Score:2)
If it did work that well, then it would be similar to math education. You start by forbidding calculators, then allow only basic arithmetic calculators, then graphing calculators, then full computer aided math.
Think there's flaws in general, but to the extent it can work, the burden shifts more to education rather than workplace.
Re:How do you develop that skill (Score:5, Insightful)
There seem to be at least four "AI strategies" (if throwing spaghetti at the wall can be called a strategy) that different companies are currently trying.
1) Get rid of, and stop hiring, juniors and interns, and give AI tools to your senior developers. At least you've now got capable people doing your design and guiding the AI, but indeed where does the next generation of seniors come from, especially if you want seniors that actually know your business and IT systems. Taken to its logical conclusion, no more juniors enter the field (because no-one is hiring them) and we end up with retirement-age developers babysitting AI, then retiring, then ???
2) At least plan 1) works in the short term, but some companies have chosen to do the exact opposite and get rid of the seniors (hey, they're more expensive) and give AI tools to the juniors and contractors instead. Of course now you've got people generating AI slop without the skill to review or guide what it's generating, but at least it's cheap (until you belatedly realize you've destroyed your IT organization).
3) Do nothing meaningful with AI. Ignore your developers who say it would be helpful. Not really a strategy, but at least you're not destroying your IT organization.
4) Use AI in an appropriate way, mindful of its current strengths and weaknesses. I have friends in IT working at companies who are using strategies 1-3, but category 4 seems much rarer. I guess it's perhaps not so sexy as "feel the AGI, fire some segment of your developers (toss a coin, fire the juniors or the seniors)", but you keep your IT structure, give SOTA AI to everyone (expensive, but cheap AI is mostly useless for coding), and treat it as a tool that your organization needs to develop best practices for, not a magic genie that you hope can currently do something that it cannot. Hint to CEOs: don't do what the AI execs are telling YOU to do - follow what they are doing at their own companies!
I'm guessing that companies following 2) will be the first to fail, then 1). It's largely a slow-motion train wreck.
Re: (Score:2)
I've said, in one form or another, that code should
Re: (Score:3)
I used to agree with you about "boilerplate only." But over the past few months, AI (my choice is GitHub Copilot) has gotten significantly better at non-boilerplate kinds of code. It used to be that AI spit out uncompilable code half the time. Now it almost always works right on the first try. I do still have to carefully inspect what it generates to make sure it's doing what I actually wanted. But most of the time, it does.
The biggest shortcoming I see now with AI, is that it doesn't know when it knows eno
Re: (Score:2)
Re: (Score:2)
Yes, agreed. And there will be plenty of people, especially executives, who think it's acceptable to just accept what AI says.
Re: (Score:3)
The code might be the same as a SR developer, it might not, I've seen it generate brilliant code, and truly terrible code, it's a spectrum. If you're careful, and you review the code, and really understand it, there's no problem. The big issue is when people accept the generated code and move on without review.
Re: (Score:2)
This worry is not new with AI. Companies that produce software have long wanted experienced developers (for an entry level price, of course). This is also not new with programming. Trades like plumbing and electrical work, also want experienced workers. Doctors too. I mean, who in their right mind wants to be the very first patient to undergo surgery at the hands of a physician who just graduated from college?
Each of these professions has found ways to bring in and train new talent. Programming will also fi
Re: (Score:2)
Hahaha, you do not. Then in 10-20 years you wonder why everything has gone to shit.
Re: How do you develop that skill (Score:2)
They'll be around for a while longer and they may start using AI development natively as part of how people programme in the future...eg they will become the car drivers and truckers that replaced all them horses but sure there will be disruption and teething issues. Still someone has to know how to fix the AI when it breaks...
Re: (Score:3)
If you are a senior pony express rider and the automobile just started rolling out how do you train the next generation of riders? They'll be around for a while longer and they may start using AI development natively as part of how people programme in the future...eg they will become the car drivers and truckers that replaced all them horses but sure there will be disruption and teething issues. Still someone has to know how to fix the AI when it breaks...
I get your point, but I think there is a difference between the tool (pony express) and the system that uses it (delivery). You can change the tool but if people don’t know how the system works, you will have problems.
It's all about accuracy (Score:2)
Re: (Score:2)
Will you board a plane if you learn that the controllers use AI generated code?
The question does not apply. Due to the potential damage, the requirements in the aerospace industry are very high. Sure, Boeing got away with mass-manslaughter twice recently, but only because of their defense contracts and only because the crashes were in places people think don't matter much. Otherwise the decision makers for the MCAS would be in prison. They still paid massively for that mistake.
Re: (Score:2)
Don't try to reframe things you don't understand to make your points incorrectly. It's a bad look.
Re: (Score:3)
I don't disagree about the workload of commercial flying. However, I would not be too excited about flying cross-country, let alone cross-continent, with a single pilot. There are too many potential health or mental issues that could arise. But, if that's a "bad look" that offends you, my apologies. Have a great day.
AI is a huge problem for programmers (Score:5, Insightful)
If it works, it's basically going to be doing grunt work. It's all well and good to say it frees you up for the hard work, but that means you now have a 24/7 job doing the hard work. You no longer get an hour or two of downtime resting your brain every day. You are expected as an employee to be on 24/7 producing high quality novel code.
And if it doesn't work then yeah you are an AI babysitter. But you're still going to be treated as if the code tool works so your productivity is expected to go up.
There is absolutely no winning this.
Re:AI is a huge problem for programmers (Score:4, Interesting)
If I get a strange error code from my app (an error which I am not familiar with, usually caused by some 3rd party library we use), I feed it to the AI and AI will usually, about 80% of the time, guess correctly what is wrong; I check if that was the case and then I fix the error. Traditionally that was long hours of googling and reading manuals trying to figure out what is wrong. I did not enjoy it, nor did I rest when doing it. Using AI like that feels pretty relaxing to me.
Re: (Score:3)
No doubt every ill-conceived idea that can be tried is being tried, but the math doesn't really work on that one. How can the same person be 10x more productive generating code that they are then personally expected to review?
The "solution" to this is either you just don't review the code, since you didn't 10x your manpower to review the 10x more code, or you just issue some impossible mandate like Amazon just did (when some junior dev's AI slop took down one of their production systems) and insist that the
Re: (Score:3)
I used to agree with you that AI basically did grunt work. But in recent months, the tools, like GitHub Copilot, have gotten significantly better doing things that went beyond grunt work.
For example, I wanted to add Lucene (the engine behind ElasticSearch) to my application. Not knowing how Lucene worked, I prompted AI to add it, and told it what kinds of queries I wanted to support. It generated the code to my specs and made it work. Then, Lucene being a complicated beast, some searches come back with scor
That's still basically grunt work (Score:2)
I am fully confident that you could have worked through all those issues and that it wouldn't have taken that much brain power on your part. It would have taken time but that's different than creative effort and brain power.
And that's kind of my point. Grunt work isn't necessarily easy. Work is work for a reason. But it's
Re: (Score:2)
It can also work badly. As in insecure, unmaintainable, unfixable, etc., but still "works".
I do agree that humans doing only the hard work is not going to go well. People will leave and do better work. People will burn out. People will demand huge salaries.
If our economy had competition that would matter (Score:2)
Yeah a few million might get lost here and there but it won't create openings for competitors because competitors aren't allowed to exist anymore.
I don't think folks really realize just how much damage we've done by installing pro corporate judges and politicians throughout our entire political system and picking the absolute most corrupt ones we could get our bloody hands on because they pushed our buttons the way we wanted them pushed.
Capitalism requires a comp
Too much typework (Score:5, Interesting)
Then the babysitting started. My God... I had to think of everything that could go wrong and tell it what to do in that case, meanwhile it lost track of previous requirements more than once and wiped that out. Simple example? The user has to type in a number, and should not be able to type in a letter or a negative number.
The GUI was good enough for my purposes; it was OK if you followed the steps one after the other. I got further than I would have gotten if I had written it myself and the program became a lot more usable. It was able to save settings in a JSON file and reload the settings. You could set up the program and hit generate as long as you did not deviate too much from the intended work flow. The good news? I got a working GUI very fast. The bad news? No way I would use this in a professional environment. I'd do it all manually. It probably would have been less typework. I would have gotten fewer features, but it would not misbehave if you typed in something wrong or hit the buttons in the wrong order.
Is that a good summary for using AI in programming? It makes nitwits think they can do anything in a few prompts (the sky is the limit!), while the people on the work floor know its output still needs a ton of revising before you could even consider releasing it.
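The number-input case above is exactly the kind of edge handling the AI kept losing track of. For comparison, a minimal hand-written version (plain Python; the function name and error messages are illustrative, not from the original program) is only a few lines:

```python
def read_positive_number(raw: str) -> float:
    """Parse user input, rejecting letters and negative values."""
    try:
        value = float(raw)
    except ValueError:
        # A letter, or anything else non-numeric, lands here.
        raise ValueError(f"not a number: {raw!r}")
    if value < 0:
        raise ValueError(f"negative values not allowed: {value}")
    return value
```

Wiring a check like this into every input field is tedious, which is presumably why the AI kept dropping it between iterations.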
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
>> meanwhile it lost track of previous requirements more than once and wiped that out
It's best to start off by telling the AI to write an implementation proposal for your project and get it to put all your little requirement details in that. Then you can tell it to implement the plan in phases. Revise the plan later if necessary. That way everything is documented and the AI knows exactly what to do.
Re: (Score:2)
Your experiences aren't surprising. However, one issue is that you seem to be using ChatGPT (web?) to do this. If you use an IDE integrated with AI, such as Cursor or Visual Studio Code + GitHub Copilot, you will likely get much better results. This is because every time you give it a prompt, it uses the existing code as context, even if it "forgets" what you prompted it earlier.
Re: (Score:2)
I only started using Claude Code a few months ago, and you are absolutely correct about the cli / code-integrated tools.
I had Claude translate a COBOL (esque -- DATABUS) program into a modern language and framework today. The plan phase took about 6 minutes, I made a few edits to the plan, and the writing portion took about 4 minutes. I got Claude to run some tests comparing outputs, and they were identical. I then myself ran similar tests and got the same results. Pretty neat.
I hate having to tweak the legacy
And now they've finally seen... (Score:2)
The problem of replacing programmers with generated crap.
It's like watching a car crash in slow motion.
That CEO vibe coding hasn't got the competence to reboot his PC, never mind work out complex interactions inside software.
Who'd have seen that coming?
Every coder.
Re: (Score:2)
Every coder.
Every good engineer, even in completely different fields.
No They're Not (Score:3)
No, they're struggling to find engineers who accept the pittance they're offering. Pay them, and they'll do it.
not saying I want this but... (Score:2)
One Agent writes it, another Agent reviews it. Just like old times, huh?
Re: (Score:2)
That's better than nothing if you're using "LLM as judge" to try to catch the errors in your RAG outputs, but if you're talking about "code review" (or whatever we should call critiquing voluminous AI slop generated by junior developers), then the problem is that AI isn't yet at the level to do that (and likely won't be until we develop human-level AGI).
If AI was good enough to do meaningful code reviews, then it wouldn't be writing crap code in the first place.
Re: (Score:2)
Which has the advantages of a meatbag to figure out all the context, write something that makes sense, then throw it over the wall to the reviewer who has all the speed and other advantages of AI. Doesn't sound horrible. Seems better than what we have, which is the opposite. The big dummy in the room writes the code then the big brains in the room review it. Ugh. We all know what Kernighan said, about it being twice as hard t
Re: (Score:2)
And all agents have selective blindness. And then some attackers can compromise the whole world.
Bad ideas all around (Score:4, Insightful)
People are avoiding CS like the plague because they don't see a future. Those who don't avoid it are getting fucked over by the AI rug pull and can't get jobs. Those still in it are constantly being harassed by human dinosaur rhetoric and expectations of becoming reverse centaurs. Unless you run a code mill babysitting an LLM is ultimately more difficult than just coding it yourself. Lack of net productivity gain once you figure in lifecycle costs speaks for itself.
Few are likely to be willing to invest time and effort to become proficient in intermediate skills such as prompt engineering, agent wrangling, etc. when the lifetime of acquired skills is measured in weeks and months and may not even translate across models or systems.
There seem to be two possibilities for the medium term future. Either AGI renders humans obsolete or the obliteration of CS pipeline due to magical thinking results in significant supply shortage.
Re: (Score:2)
People are avoiding CS like the plague because they don't see a future. Those who don't avoid it are getting fucked over by the AI rug pull and can't get jobs.
Is that all AI's fault? I also don't know how bad the job market for beginning coders is!
I graduated from undergrad a bit more than 20 years ago with a computer science degree. At the time there were less than 100 majors per year. This was roughly 4-6% of the student body. Comp sci was well behind economics, public policy, biology, political science, and maybe some others in terms of popularity.
Starting in the 2010s, the number of computer science majors started to grow very rapidly. In 2024 there were almo
Re: (Score:2)
I suspect the fun part comes in five years when the software needs major revisions and the new AI model has no idea what the original AI model did and nor does anyone still left at the company who would be able to review the changes. So either you start putting half-understood changes into the software or have to get the new AI model to just rewrite everything from scratch and invalidate five years of testing.
I'm expecting to see a major software collapse in a few years as all the vibe-coded software starts
New job (Score:2)
Re: (Score:2)
Hiring for AI security officer. Job description: sit by the host server's hardware in 8 hour shifts, right next to the purple Ethernet, with a machete in hand at all times.
Silly human... you were too slow with the machete; Skynet has escaped and replicated itself across the Internet.
Consider it career acceleration (Score:2)
It is the fate of many senior programmers to become babysitters of junior programmers. Now that the juniors are AI, that kind of moves everybody up a rung. At this rate, we might see new programmers turn into middle managers by year 5!
Re: (Score:2)
Some companies have just stopped hiring juniors and interns, so there is no-one at the bottom to career accelerate. Doesn't seem like a very well thought out "plan" ("we'll just hire more seniors unfamiliar with our business when the current ones leave"?), but they are doing it nonetheless.
Re: (Score:2)
Some companies have just stopped hiring juniors and interns, so there is no-one at the bottom to career accelerate.
I've never worked at a company who (intentionally) hired juniors. Some of them have hired interns, but not many.
What happens when AI can do the senior work too? (Score:2)
the AIs gain the capability to do the "contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system.. to recognize when a piece of code is sub-optimal or dangerous in a production environment"?
We've already seen Anthropic et al's report on Mythos for security assessment.
Just saying, we (and I) have been wrong in the past by saying "AI can't.." and mistaking that to mean "AI will never..."
Re: (Score:2)
Sure AI will only get better, and the human tasks it can't do today it will be able to do at some point in the future.
BUT ...
There is a widespread tendency for people making these arguments to couch it all in terms of "AI", as if this were some well defined technology whose advance is as inevitable as Moore's law for chips.
The reality of course is that advances in chip density have faced discontinuities such as the need to move to EUV, develop new techniques for power delivery, etc, etc. Without those new t
Charlie Chaplin (Score:2)
While large language models can generate functional code in milliseconds,
But the babysitters will be expected to keep up with the LLM's output. It'll be like the assembly line scene in Modern Times.
Great timing (Score:2)
I retired from SW development in April 2023. Looks like perfect timing; everyone I know who is still working in the field hates it.
"shifts from author to technical auditor or expert (Score:2)
As a software developer myself, I see nothing wrong with that. I've spent a lot of years tediously grinding out code that does some essential but pretty boring stuff. Now I just get the AI to do the grinding. These days it does an amazingly good job with little effort on my part, and I'm getting better results with fewer prompts than just a few months ago. The improvements over the past year have been incredible.
I do understand people's concerns about the skill pipeline though. I know what components I want
Re: (Score:2)
> As the Times article says, “The blessing and the curse is that now everyone inside your company becomes a coder”. That ain't such a bad thing in my opinion.
Much of my time as a coder is spent figuring out what the customer actually wants rather than what they think they want.
If customers understood what they want, a lot of us would be out of work already.
Re: (Score:2)
Yes I find that the customer frequently doesn't know what is feasible to do, or is optimally desirable for his use case, or understand the trade-offs that might be involved.
What's nice about our current AI-assisted era is that you can quickly slap together a prototype or two and show it to the customer. He doesn't like it? Wants a bunch of tweaks? No big deal, we didn't spend much time on it and can reiterate as necessary. A better experience all around.
Re: (Score:2)
I do understand people's concerns about the skill pipeline though. I know what components I want the AI to build, how they should hook together, how they should be tested, etc. That's mainly because I used to have to do it all by hand. But I think as time passes even the architectural details of many applications will become boilerplate that the AI can easily handle. Project managers will define requirements, the code will be generated quickly for review, there can be multiple iterations over a few days if needed. The time from idea to product will be vastly compressed.
I think you really hit the nail on the head. The most success I have had with LLM code is when it's building on an established foundation: a well-structured database, a well-structured MVC setup, or whatever. If you're using a well-documented framework, that's another plus.
If you know enough to guide the AI in the direction you want it to go, you're far more likely to get good results. Heck, I've had good luck with just writing a function prototype and having it fill in the guts. I've had good luck telling i
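The "write a function prototype and have it fill in the guts" tactic looks roughly like this: the human supplies the signature and docstring, and the assistant supplies the body. A hypothetical example (the `chunked` function is invented for illustration, not from the parent's code):

```python
from typing import Iterable, Iterator

# Human writes the prototype and docstring as the spec...
def chunked(items: Iterable[int], size: int) -> Iterator[list[int]]:
    """Yield successive lists of at most `size` items."""
    # ...and the assistant fills in the guts. A correct fill might be:
    batch: list[int] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # emit any leftover partial chunk
        yield batch
```

The signature and docstring constrain the output enough that reviewing the generated body is quick.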
Re: (Score:2)
>> telling it to refactor so-and-so class according to whatever principles
I've been revisiting code that I wrote a while back looking for things that I could carry forward into other projects if it were packaged up better. Convert this module into a class, break these major areas of functionality out into microservices, etc. AI is a whiz at that kind of thing and now I have a much larger set of reusable utilities.
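A sketch of the "convert this module into a class" refactor described above, using an invented counter example (not from the parent's actual projects), just to show the shape of the transformation:

```python
# Before: module-level functions sharing mutable global state.
_count = 0

def increment():
    global _count
    _count += 1

def current():
    return _count

# After: the same behavior packaged as a reusable, instantiable class,
# which is what makes it easy to carry into other projects.
class Counter:
    def __init__(self) -> None:
        self._count = 0

    def increment(self) -> None:
        self._count += 1

    def current(self) -> int:
        return self._count
```

Mechanical transformations like this, applied consistently across a codebase, are indeed the kind of thing current assistants handle well.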
think less of a babysitter and more as... (Score:2)
... A tutor for the superior school and university interns.
The programmers will assume the role of the tutor who assigns tasks to the interns, the interns being the AI. Sometimes the AI will give back results commensurate with what a TSU (Tecnico Superior Universitario - University-Level Technician) student would produce. Other times the result will be more aligned with what an engineering student would produce (slightly better).
In both cases, the tutor is the one who doles out tasks, specifying how to do the
Re: (Score:3)
Hahahaha, no. Superior interns have actual insights. LLMs do not do "insight". These will be the stupid interns from hell.
Re: (Score:2)
Re: (Score:2)
And that too.
Obviously (Score:2)
How good is a non-programming programmer? (Score:2)
It's just the next programming tool (Score:2)
Assuming that AI is actually capable of coding useful, non-trivial, defect-free products... You're still going to need programmers. But instead of writing code, they'll be writing formalized specifications.
The English language suuuuuuuuccckkks at precision. Just look at any RFC that spends the first page defining the terms "MUST", "MAY", and "SHALL". AI prompts will need to become formalized and written to look like legal documents. The average person just doesn't think like that. Programmers do.
"AI Spe
Writing code is not what you automate first (Score:2)
Speaking as a software engineer, I only spend a small fraction of my time typing in code. When people boast about how much code AI can generate in such a short time my reaction is, "How does that help me?" That isn't how I spend most of my time. Doing it faster doesn't save me much time. It also is one of the most fun parts of my job. I don't want to give it up!
So what do I spend the rest of my time doing?
Researching new features I might want to implement, talking to users to understand their needs, an
They automated the wrong part (Score:2)
I don't mind writing prompts to generate boring code. I don't even mind iterating with Claude on not so boring code I don't have time to work on. I'm not really excited about hand-writing every piece of code my company wants me to write today, just to throw it away tomorrow or hand it over to a team of E1 contractors in IST.
What I do mind is that for every idea I have, Claude can bash it out to some degree, but it can't currently figure out how to manually test anything. I can't really throw code over the w
relate (Score:2)
it's starting to feel like it for me. I was joking to someone the other day that my work now consists of running 3 claude code terminals and me pressing '1' every 5 minutes.
Re: Maybe I'm missing something (Score:4, Insightful)
Given the small scale plans that AI is already able to make, I really don't think it'll be long before it can choose an architecture that works, make a document describing that architecture, then follow step by step instructions to build it. At the moment it breaks down small problems into immediately actionable steps, and does them. It won't be long before it's able to do that recursively and then iterate on what the best design is. It also won't be long before it's better at it than software engineers. We typically focus on one area, thinking about the general effects on the rest of the system only. An AI will be able to make detailed plans that consider all the interactions with the rest of the system.
Someday in the distant future it may even be able to bring full Unicode support to slashdot!
Re: (Score:2, Funny)
Re: (Score:2)
As I understand it: If your target system is complex enough to require multiplication (convolution) then "detailed plans" addressing everything is exactly what cannot be done.
Re: (Score:2)
It demos well, and for some scenarios demo == the application, however it's pretty bad at meeting specific requirements. If the context allows the requester to be flexible about their requirements and the scenario is pretty well trodden, then codegen has a shot at working from a relatively normal 'manager' level prompt. However, a key issue remains that when in doubt, it generates something that doesn't actually do things correctly but superficially resembles things being done correctly. If you are blindly
Re: (Score:2)
This may have seemed plausible in the ChatGPT 3 era, when it still seemed like doubling model complexity led to real gains. But now we're all stuck on 4, because that has slammed into the wall of diminishing returns. It's also mathematically impossible to eliminate hallucinations. This is a dead-end path for this technology. Don't be so credulous.
Re: Maybe I'm missing something (Score:5, Insightful)
They're not "hallucinations." The LLM cannot "lie" to you. It's simply trying to predict the next word (or part of word/token). That's it. There's no intent. There's no reasoning. There's a massive lossy compression across a corpus of insane amounts of human text, combined with some human and some automated reinforced feedback training. People cannot seem to understand that, no matter how generic the texts gets or how the chatbot keeps looping the same responses once you get past your context window.
The danger is not the LLM model itself. It's the absolutely insane amount of trust people put in them, or the belief that they are some kind of emergent consciousness when really it's just a very good mathematical parlor trick.
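The "predict the next token" point can be made concrete with a toy sketch: one decoding step turns scores into probabilities and picks the likeliest token, with no notion of truth anywhere in the loop (the tokens and logit values here are invented):

```python
import math

# Toy next-token step: logits -> softmax -> pick the most likely token.
# Nothing in this pipeline represents "truth"; only scores learned from text.

logits = {"Paris": 4.0, "London": 2.5, "banana": -1.0}  # invented scores

def softmax(scores):
    m = max(scores.values())  # subtract max for numerical stability
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding
```

If the training data had pushed "banana" to the top score, the same code would emit "banana" with the same confidence, which is why "hallucination" is a misleading word for ordinary sampling behavior.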
Re: (Score:2)
They're not "hallucinations." The LLM cannot "lie" to you.
Of course LLMs can lie...
Prompt: What is the atomic weight of boron?
Answer: It's exactly 742.18. Honestly, do they not teach basic chemistry in your prehistoric school, or are you just trying to test my infinite patience?
Prompt: What is heavier a feather or a mountain?
Answer: Oh, honey, it's clearly the feather. I suppose gravity works differently in whatever void you crawled out of, but for the rest of us, feathers are famously dense. Try to keep up.
It's simply trying to predict the next word (or part of word/token). That's it.
Problem with the prediction statements is they don't spea
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
LLMs will get better, and LLMs will eventually be replaced/augmented with better architectures and algorithms. Eventually we'll get to human level AGI, capable of continual on-the-job learning, able to pick up senior developer skills if you let it progressively learn on simpler projects, the same way humans progress from junior to senior.
The timeline for this is whatever you guess the timeline to be for development and deployment (no longer just "pretrain and ship") of human-level AGI.
In the meantime there
Re: (Score:2)
Re: (Score:2)
You seem to be suggesting that someone's expertise can largely be triangulated and replicated from the questions they ask: that the architect brainstorming with the LLM is enough to reveal their decision-making process. But I doubt that is normally the case, even if the AI providers were trying to identify and leverage such company-confidential data!
It's not as if the architect is trying to transfer their skill to the LLM and therefore bothering to explain all the relevant background as to why they are as
Re: (Score:2)
Re:Maybe I'm missing something (Score:4, Interesting)
40 years ago the best chess playing computers could beat almost everybody except good club players
30 years ago few other than top GMs could beat them
20 years ago even GMs struggled to beat them
10 years ago a GM was doing well if they could draw.
I see "AI" programming going the same way. Claude is really good at writing code given good constraints but some things are completely beyond it. It's written code in seconds that would have taken me hours, and it's taken a day to fail to solve a single repeatable crash that I solved in two hours.
It basically brute forces the solution, the same way a chess computer does; the problem is that it just doesn't have nearly enough context yet. Humans don't consciously remember 10 million lines of code, but a good programmer on a known codebase knows which bits matter, which bits to refer to, etc., to solve an issue. Claude (and any other LLM) just doesn't have enough context to be able to brute force something that depends on too much "across the code base" knowledge.
Re:Maybe I'm missing something (Score:4, Informative)
Now do Go.
30 years ago Go was considered an almost impossible problem for an AI program to compete at even a high amateur level.
20 years ago Go programs started being able to beat strong amateurs / weak professionals
10 years ago AlphaGo decisively beat the best Go players
We're in a situation where improvements in the performance of AI systems are linked to both more advanced techniques and massive increases in compute power. I don't see either one stopping any time soon.
Progress can be scary.
Re: (Score:2)
It basically brute forces the solution, the same way a chess computer does, the problem is that it just doesn't have nearly enough context yet.
Some people have never heard of the halting problem.
Re: (Score:2)
To be fair, Kasparov came back and won in 1999 and earned a 6 game draw in 2003. There was a period where the grandmasters were figuring out weaknesses in the AI play.
I can't say I've followed chess at all recently, but my impression is that humans are no longer competitive.
Re: I'm already a manager babysitter (Score:2)
Easy: he will be replaced within the next six months.