Will Some Programmers Become 'AI Babysitters'? (linkedin.com)

Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google: "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert.

"While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs."

The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

  • by Registered Coward v2 ( 447531 ) on Monday April 13, 2026 @07:52AM (#66091230)
    If you replace junior programmers with AI and use senior ones with the knowledge to review its output, how do you develop the next generation of senior programmers?
    • by T34L ( 10503334 ) on Monday April 13, 2026 @07:54AM (#66091232)

      I think they hope that in theory, by the time the senior programmers retire, you'll be replacing them with the AI as well.

      In practice, none of the people involved seem capable of thinking in terms of a timespan longer than their next round of bonuses.

      • This is pretty much it when you're dealing with VPs, C-suite and above. Every action taken is usually for next quarter's results; at most it spans out to end-of-year results. Almost NEVER longer.
      • by DarkOx ( 621550 )

        Real talk - If you work at a public company and you don't have a seat in the boardroom complete with name plaque, you might well be laid off at any moment, for just about any reason. There is no corporate loyalty any more.

        So should anyone in that sort of organization ever think past the time-span of their next bonus? People have long accused C[X]Os of not looking past the quarterly earnings report, or past their next bonus, but maybe the rest of the workforce just needs to get the memo that you either

      • by gweihir ( 88907 )

        Indeed. Well, this is not the first time really big names (and small ones as well) in IT vanish or become irrelevant.

    • Don't worry, it won't be long before AI can make these more architectural decisions. Senior programmers and architects seem to be living in a weird fantasy that the AI is not coming for their jobs too. No, software engineers won't become AI babysitters. Managers will. Software engineers will become jobless.

      • by VorpalRodent ( 964940 ) on Monday April 13, 2026 @08:33AM (#66091272)

        That's the issue - it's all or nothing, just with weird caveats. Either:

        1. The AI can do everything an engineer can do, in which case some business management person might come back and tell it that it was wrong with some assumptions on this or that (just like they would with a human), but it's otherwise fully autonomous, acting entirely on its own, or:

        2. It can't.

        The problem with #2 is that we'll spend so much time and money in thinking we're just a little ways away from #1 that no one is in the pipeline. There's also the risk of treating #2 like it's #1, where we let it make decisions, with no repercussions, and we just watch things burn.

        I suppose there's a third option - it can do everything, *plus* mentoring a junior so that a human is still learning things just in case.

        • by dvice ( 6309704 )

          4. We use AI to do tasks that it is good at and humans do tasks that they are good at.

          I don't understand why everyone is trying so hard to make AI do things that are pretty impossible for it. Do they hate programming so much?

          • by Junta ( 36770 ) on Monday April 13, 2026 @10:58AM (#66091568)

            Broadly speaking, a lot of AI advocates believe AI can do every single job *except* their own.

            In terms of hating programming, yes, actually a lot of the staunchest supporters hate programming. Because they can't do software development themselves but have somehow latched onto the business of software development. Business folks that carry a great deal of resentment that there are employees that have sufficient leverage over them to extract significant salaries and there's not a lot the business side can do to counter.

            Code gen represents the possibility that they can have a fungible workforce where the labor has no particular leverage.

            A lot of these folks are a bit unhinged in thinking that somehow codegen eliminates their need for skilled workers but somehow leaves them in the loop. I saw specifically a software sales org think they could get away with selling the act of inputting the client's requests into prompting without any software development experience/skills.

          • AIs are pretty good at programming.

            It is a very strange /. myth that they are not.

            No idea where this myth comes from, wishful thinking?

            I recently took over ownership of a product that is nearly completely built by AI.

            There is nothing to complain about. As it is a web product, I am myself no better at doing it; I am more of a backend or C++ developer. But the code is readable, the comments make sense and, most importantly, stuff that the previous product owner hand-coded in weeks the AI does in 10 minutes or less.

            The turnaround between:
            - try this
            - test and assess it
            - throw it away if it is not good enough

            is less than a few hours, costs nearly nothing, and you can really do "experimental software development".

            As I said: it is just a web site, so underneath not super complicated.

            • Very true -- Slashdot seems to have many older programmers who refuse to use AI because they dislike it and like writing code manually. I think of it like great-grandpas who refuse to use power tools and used awls, hammers and other hand tools instead. AI generates code at the level of a senior developer at this point. It gives the same advice on security a seasoned CISO would. It finds security bugs in code better than a certified pentester. Much like the IT people who were anti-virtualization (beca
            • I think it's easily explained. Most people on Slashdot are early tech adopters. When ChatGPT 3.5 burst onto the scene, they tried it. They tried using it to generate code, and they got laughable results. They're now convinced that AIs generate terrible code because they've not since gone back and given any reasonably recent Claude a go.

              • by 0123456 ( 636235 )

                Probably in part. I think it's great that my IDE has been able to do a lot of the grunt-work for me for a year or more, but people I know who use LLMs to do most of their coding still say the code it generates is bad and they worry that it may be unmaintainable in future... they may be able to ship products faster, but will they be able to fix them in five years?

                We'll find out in five years.

      • It's cute you think they'll keep the managers to do that.
        The owners will hire cheap interns with AI experience to replace them.
        And yes, eventually the owners will be jobless when the whole software as a service/product model falls apart. People will just ask their phones to do a thing, no app required.

      • Probably, but the fundamental issue still is "what is the right architecture?" That is defined by current and future requirements, something that is not easily reduced to a prompt. There are decisions and trade-offs that need to be made, which AI may continue to struggle to make.
    • by Junta ( 36770 )

      If it did work that well, then it would be similar to math education. You start by forbidding calculators, then allow only basic arithmetic calculators, then graphing calculators, then full computer aided math.

      I think there are flaws in general, but to the extent it can work, the burden shifts more to education than to the workplace.

    • by SpinyNorman ( 33776 ) on Monday April 13, 2026 @10:34AM (#66091510)

      There seem to be at least four "AI strategies" (if throwing spaghetti at the wall can be called a strategy) that different companies are currently trying.

      1) Get rid of, and stop hiring, juniors and interns, and give AI tools to your senior developers. At least you've now got capable people doing your design and guiding the AI, but indeed where does the next generation of seniors come from, especially if you want seniors that actually know your business and IT systems? Taken to its logical conclusion, no more juniors enter the field (because no-one is hiring them) and we end up with retirement-age developers babysitting AI, then retiring, then ???

      2) At least plan 1) works in the short term, but some companies have chosen to do the exact opposite and get rid of the seniors (hey, they're more expensive) and give AI tools to the juniors and contractors instead. Of course now you've got people generating AI slop without the skill to review or guide what it's generating, but at least it's cheap (until you belatedly realize you've destroyed your IT organization).

      3) Do nothing meaningful with AI. Ignore your developers who say it would be helpful. Not really a strategy, but at least you're not destroying your IT organization.

      4) Use AI in an appropriate way, mindful of its current strengths and weaknesses. I have friends in IT working at companies using strategies 1-3, but category 4 seems much rarer. I guess it's perhaps not so sexy as "feel the AGI, fire some segment of your developers (toss a coin: fire the juniors or the seniors)", but you keep your IT structure, give SOTA AI to everyone (expensive, but cheap AI is mostly useless for coding), and treat it as a tool that your organization needs to develop best practices for, not a magic genie that you hope can do something it currently cannot. Hint to CEOs: don't do what the AI execs are telling YOU to do - follow what they are doing at their own companies!

      I'm guessing that companies following 2) will be the first to fail, then 1). It's largely a slow-motion train wreck.

    • Perfectly said. AI is really a massive negative for programming. AI is great if you need boilerplate, or if you're stuck and want a suggestion, but outside of that you should avoid it. If you don't know what the generated code means, or how it works, you can't debug it, you can't support it, and you can't claim it's safe. That's the danger of AI: it can generate a lot of code, but in my experience that code needs to be carefully checked and commented.

      I've said, in one form or another, that code should
      • I used to agree with you about "boilerplate only." But over the past few months, AI (my choice is GitHub Copilot) has gotten significantly better at non-boilerplate kinds of code. It used to be that AI spit out uncompilable code half the time. Now it almost always works right on the first try. I do still have to carefully inspect what it generates to make sure it's doing what I actually wanted. But most of the time, it does.

        The biggest shortcoming I see now with AI, is that it doesn't know when it knows eno

        • Exactly, you need to read and analyze the code, and you need to make sure it's accurate. There is no problem if you're careful; my issue is that many people aren't careful, and just accept it.
          • Yes, agreed. And there will be plenty of people, especially executives, who think it's acceptable to just accept what AI says.

    • This worry is not new with AI. Companies that produce software have long wanted experienced developers (for an entry level price, of course). This is also not new with programming. Trades like plumbing and electrical work, also want experienced workers. Doctors too. I mean, who in their right mind wants to be the very first patient to undergo surgery at the hands of a physician who just graduated from college?

      Each of these professions has found ways to bring in and train new talent. Programming will also fi

    • by gweihir ( 88907 )

      Hahaha, you do not. Then in 10-20 years you wonder why everything has gone to shit.

    • If you are a senior pony express rider and the automobile just started rolling out how do you train the next generation of riders?

      They'll be around for a while longer and they may start using AI development natively as part of how people programme in the future...eg they will become the car drivers and truckers that replaced all them horses but sure there will be disruption and teething issues. Still someone has to know how to fix the AI when it breaks...
      • If you are a senior pony express rider and the automobile just started rolling out how do you train the next generation of riders? They'll be around for a while longer and they may start using AI development natively as part of how people programme in the future...eg they will become the car drivers and truckers that replaced all them horses but sure there will be disruption and teething issues. Still someone has to know how to fix the AI when it breaks...

        I get your point, but I think there is a difference between the tool (pony express) and the system that uses it (delivery). You can change the tool but if people don’t know how the system works, you will have problems.

  • Will you board a plane if you learn that the controllers use AI-generated code? We still board planes because we trust the accuracy of humans. So it's just a matter of time before AI surpasses humans on this benchmark as well. Until then, happy babysitting.
    • by gweihir ( 88907 )

      Will you board a plane if you learn that the controllers use AI generated code?

      The question does not apply. Due to the potential damage, the requirements in the aerospace industry are very high. Sure, Boeing got away with mass manslaughter twice recently, but only because of their defense contracts and only because the crashes were in places people think do not matter a lot. Otherwise the decision makers for the MCAS would be in prison. They still paid massively for that mistake.

  • by rsilvergun ( 571051 ) on Monday April 13, 2026 @09:18AM (#66091324)
    There's basically two options. Either it works or it doesn't.

    If it works it's basically going to be doing grunt work. It's all well and good to say it frees you up for the hard work, but that means you now have a 24/7 job doing the hard work. You no longer get an hour or two of downtime resting your brain every day. You are expected as an employee to be on 24/7, producing high-quality novel code.

    And if it doesn't work then yeah you are an AI babysitter. But you're still going to be treated as if the code tool works so your productivity is expected to go up.

    There is absolutely no winning this.
    • by dvice ( 6309704 ) on Monday April 13, 2026 @10:00AM (#66091408)

      If I get a strange error code from my app (an error which I am not familiar with, usually caused by some 3rd-party library we use), I feed it to the AI, and the AI will guess correctly what is wrong about 80% of the time. I check if that was the case and then I fix the error. Traditionally that was long hours of googling and reading manuals trying to figure out what is wrong. I did not enjoy it, nor did I rest when doing it. Using AI like that feels pretty relaxing to me.

    • No doubt every ill-conceived idea that can be tried is being tried, but the math doesn't really work on that one. How can the same person be 10x more productive generating code that they are then personally expected to review?

      The "solution" to this is either you just don't review the code, since you didn't 10x your manpower to review the 10x more code, or you just issue some impossible mandate like Amazon just did (when some junior dev's AI slop took down one of their production systems) and insist that the

    • I used to agree with you that AI basically did grunt work. But in recent months, the tools, like GitHub Copilot, have gotten significantly better doing things that went beyond grunt work.

      For example, I wanted to add Lucene (the engine behind ElasticSearch) to my application. Not knowing how Lucene worked, I prompted AI to add it, and told it what kinds of queries I wanted to support. It generated the code to my specs and made it work. Then, Lucene being a complicated beast, some searches come back with scor

      • At the end of the day it installed some software for you and fixed some issues with the install. It is slightly complicated software, I agree, but it really did just install software.

        I am fully confident that you could have worked through all those issues and that it wouldn't have taken that much brain power on your part. It would have taken time but that's different than creative effort and brain power.

        And that's kind of my point. Grunt work isn't necessarily easy. Work is work for a reason. But it's
    • by gweihir ( 88907 )

      It can also work badly. As in insecure, unmaintainable, unfixable, etc., but still "works".

      I do agree that humans doing only the hard work is not going to go well. People will leave and do better work. People will burn out. People will demand huge salaries.

      • But it doesn't and so it doesn't.

        Yeah a few million might get lost here and there but it won't create openings for competitors because competitors aren't allowed to exist anymore.

        I don't think folks really realize just how much damage we've done by installing pro corporate judges and politicians throughout our entire political system and picking the absolute most corrupt ones we could get our bloody hands on because they pushed our buttons the way we wanted them pushed.

        Capitalism requires a comp
  • Too much typework (Score:5, Interesting)

    by Fons_de_spons ( 1311177 ) on Monday April 13, 2026 @09:19AM (#66091328)
    I let ChatGPT write a little GUI for a hobby project I made. A few prompts later and I had a working GUI for my Python program that automatically generates Excel sheets for my colleagues.
    Then the babysitting started. My God... I had to think of everything that could go wrong and tell it what to do in each case; meanwhile it lost track of previous requirements more than once and wiped them out. Simple example? The user has to type in a number; the user should not be able to type in a letter, or a negative number, ... I got sick of all the explaining I had to do at some point. I was typing in lengthy paragraphs and gave up.
    The GUI was good enough for my purposes; it was OK if you followed the steps one after the other. I got further than I would have gotten if I had written it myself, and the program became a lot more usable. It was able to save settings in a JSON file and reload them. You could set up the program and hit generate, as long as you did not deviate too much from the intended workflow. The good news? I got a working GUI very fast. The bad news? No way I would use this in a professional environment. I'd do it all manually. It probably would have been less typework. I would have gotten fewer features, but it would not misbehave if you typed in something wrong or hit the buttons in the wrong order.
    Is that a good summary for using AI in programming? It makes nitwits think they can do anything in a few prompts - the sky is the limit! - while the people on the workfloor know that its output still needs a ton of revising before you could even consider releasing it?
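    The validation rule the poster had to keep re-explaining ("a number, not a letter, not negative") is a one-liner in code but a paragraph in prose. A minimal sketch in Python; the function name is illustrative, not from the post:

    ```python
    def parse_positive_int(raw: str):
        """Return a non-negative int, or None if the input is invalid.

        Rejects letters, empty strings, and negative numbers -- the exact
        cases the GUI had to be told about, one prompt at a time.
        """
        raw = raw.strip()
        if not raw.isdigit():  # rejects "", "abc", "-3", "1.5"
            return None
        return int(raw)

    # A GUI callback can then refuse bad input instead of misbehaving:
    for attempt in ["42", "-3", "abc", ""]:
        print(repr(attempt), "->", parse_positive_int(attempt))
    ```

    The point is less the code than the contrast: stating the rule once, precisely, versus re-describing it across lengthy prompt paragraphs every time the model forgets it.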
    • Don't worry man, if you're using Python you're not a real programmer.
    • >> meanwhile it lost track of previous requirements more than once and wiped that out

      It's best to start off by telling the AI to write an implementation proposal for your project and get it to put all your little requirement details in that. Then you can tell it to implement the plan in phases. Revise the plan later if necessary. That way everything is documented and the AI knows exactly what to do.

    • Your experiences aren't surprising. However, one issue is that you seem to be using ChatGPT (web?) to do this. If you use an IDE integrated with AI, such as Cursor or Visual Studio Code + GitHub Copilot, you will likely get much better results. This is because every time you give it a prompt, it uses the existing code as context, even if it "forgets" what you prompted it earlier.

      • I only started using Claude Code a few months ago, and you are absolutely correct about the cli / code-integrated tools.

        I had Claude translate a COBOL-esque (DATABUS) program into a modern language and framework today. The plan phase took about 6 minutes, I made a few edits to the plan, and the writing portion took about 4 minutes. I got Claude to run some tests comparing outputs, and they were identical. I then ran similar tests myself and got the same results. Pretty neat.

        I hate having to tweak the legacy

  • The problem of replacing programmers with generated crap.

    It's like watching a car crash in slow motion.

    That CEO vibe coding hasn't got the competence to reboot his PC, never mind work out complex interactions inside software.

    Who'd have seen that coming?

    Every coder.

  • by OverlordQ ( 264228 ) on Monday April 13, 2026 @09:34AM (#66091354) Journal

    The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

    No, they're struggling to find engineers who accept the pittance they're offering. Pay them, and they'll do it.

  • It's pretty obvious that the Agents will be trained for different functions. Red Team AI vs Blue Team AI.
    One Agent writes it, another Agent reviews it. Just like old times, huh?
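    The writer/reviewer split above can be pictured as a simple generate-review loop. Everything below is a stub - generate_code and review_code stand in for real model calls and are not any actual API - so this is only a sketch of the shape of the pattern:

    ```python
    def generate_code(task, feedback=None):
        """Placeholder 'writer' agent; a real system would call a model here."""
        revision = f" (revised: {feedback})" if feedback else ""
        return f"# code for: {task}{revision}"

    def review_code(code):
        """Placeholder 'reviewer' agent; returns (approved, feedback)."""
        return ("revised" in code, "add input validation")

    def writer_reviewer_loop(task, max_rounds=3):
        """Alternate writing and reviewing until approval or rounds run out."""
        feedback = None
        for _ in range(max_rounds):
            code = generate_code(task, feedback)
            approved, feedback = review_code(code)
            if approved:
                return code
        return None  # give up: escalate to a human babysitter

    print(writer_reviewer_loop("parse config file"))
    ```

    Note the bounded loop: after max_rounds without approval the code escalates to a human rather than letting the two agents argue forever - which is where the "babysitter" comes back in.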
    • That's better than nothing if you're using "LLM as judge" to try to catch the errors in your RAG outputs, but if you're talking about "code review" (or whatever we should call critiquing voluminous AI slop generated by junior developers), then the problem is that AI isn't yet at the level to do that (and likely won't be until we develop human-level AGI).

      If AI was good enough to do meaningful code reviews, then it wouldn't be writing crap code in the first place.

      • Ironic, though; wouldn't it be great to submit your decent code to an AI reviewer?
        Which has the advantages of a meatbag to figure out all the context, write something that makes sense, then throw it over the wall to the reviewer, who has all the speed and other advantages of AI. Doesn't sound horrible. Seems better than what we have, which is the opposite: the big dummy in the room writes the code, then the big brains in the room review it. Ugh. We all know what Kernighan said, about it being twice as hard t
    • by gweihir ( 88907 )

      And all agents have selective blindness. And then some attackers can compromise the whole world.

  • by WaffleMonster ( 969671 ) on Monday April 13, 2026 @09:53AM (#66091382)

    People are avoiding CS like the plague because they don't see a future. Those who don't avoid it are getting fucked over by the AI rug pull and can't get jobs. Those still in it are constantly being harassed by human dinosaur rhetoric and expectations of becoming reverse centaurs. Unless you run a code mill babysitting an LLM is ultimately more difficult than just coding it yourself. Lack of net productivity gain once you figure in lifecycle costs speaks for itself.

    Few are likely to be willing to invest time and effort to become proficient in intermediate skills such as prompt engineering, agent wrangling, etc., when the lifetime of acquired skills is measured in weeks and months and may not even translate across models or systems.

    There seem to be two possibilities for the medium term future. Either AGI renders humans obsolete or the obliteration of CS pipeline due to magical thinking results in significant supply shortage.

    • People are avoiding CS like the plague because they don't see a future. Those who don't avoid it are getting fucked over by the AI rug pull and can't get jobs.

      Is that all AI's fault? I'm also not sure how bad the job market for beginning coders really is.

      I graduated from undergrad a bit more than 20 years ago with a computer science degree. At the time there were less than 100 majors per year. This was roughly 4-6% of the student body. Comp sci was well behind economics, public policy, biology, political science, and maybe some others in terms of popularity.

      Starting in the 2010s, the number of computer science majors started to grow very rapidly. In 2024 there were almo

    • by 0123456 ( 636235 )

      I suspect the fun part comes in five years when the software needs major revisions and the new AI model has no idea what the original AI model did and nor does anyone still left at the company who would be able to review the changes. So either you start putting half-understood changes into the software or have to get the new AI model to just rewrite everything from scratch and invalidate five years of testing.

      I'm expecting to see a major software collapse in a few years as all the vibe-coded software starts

  • Hiring for AI security officer. Job description: sit by the host server's hardware in 8 hour shifts, right next to the purple Ethernet, with a machete in hand at all times.
    • Hiring for AI security officer. Job description: sit by the host server's hardware in 8 hour shifts, right next to the purple Ethernet, with a machete in hand at all times.

      Silly human... you were too slow with the machete; Skynet has escaped and replicated itself across the Internet.

  • It is the fate of many senior programmers to become babysitters of junior programmers. Now that the juniors are AI, that kind of moves everybody up a rung. At this rate, we might see new programmers turn into middle managers by year 5!

    • Some companies have just stopped hiring juniors and interns, so there is no-one at the bottom to career accelerate. Doesn't seem like a very well thought out "plan" ("we'll just hire more seniors unfamiliar with our business when the current ones leave"?), but they are doing it nonetheless.

      • Some companies have just stopped hiring juniors and interns, so there is no-one at the bottom to career accelerate.

        I've never worked at a company that (intentionally) hired juniors. Some of them have hired interns, but not many.

  • the AIs gain the capability to do the "contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system.. to recognize when a piece of code is sub-optimal or dangerous in a production environment"?

    We've already seen Anthropic et al's report on Mythos for security assessment.

    Just saying, we (and I) have been wrong in the past by saying "AI can't.." and mistaking that to mean "AI will never..."

    • Sure AI will only get better, and the human tasks it can't do today it will be able to do at some point in the future.

      BUT ...

      There is a widespread tendency for people making these arguments to couch it all in terms of "AI", as if this were some well defined technology whose advance is as inevitable as Moore's law for chips.

      The reality of course is that advances in chip density have faced discontinuities such as the need to move to EUV, develop new techniques for power delivery, etc, etc. Without those new t

  • While large language models can generate functional code in milliseconds,

    But the babysitters will be expected to keep up with the LLM's output. It'll be like the assembly-line scene in Modern Times.

  • I retired from SW development in April 2023. Looks like perfect timing; everyone I know who is still working in the field hates it.

  • As a software developer myself, I see nothing wrong with that. I've spent a lot of years tediously grinding out code that does some essential but pretty boring stuff. Now I just get the AI to do the grinding. These days it does an amazingly good job with little effort on my part, and I'm getting better results with fewer prompts than just a few months ago. The improvements over the past year have been incredible.

    I do understand people's concerns about the skill pipeline though. I know what components I want

    • by 0123456 ( 636235 )

      > As the Times article says, “The blessing and the curse is that now everyone inside your company becomes a coder”. That ain't such a bad thing in my opinion.

      Much of my time as a coder is spent figuring out what the customer actually wants rather than what they think they want.

      If customers understood what they want, a lot of us would be out of work already.

      • Yes, I find that the customer frequently doesn't know what is feasible to do, or what is optimally desirable for his use case, or understand the trade-offs that might be involved.

        What's nice about our current AI-assisted era is that you can quickly slap together a prototype or two and show it to the customer. He doesn't like it? Wants a bunch of tweaks? No big deal; we didn't spend much time on it and can iterate as necessary. A better experience all around.

    • I do understand people's concerns about the skill pipeline though. I know what components I want the AI to build, how they should hook together, how they should be tested, etc. That's mainly because I used to have to do it all by hand. But I think as time passes even the architectural details of many applications will become boilerplate that the AI can easily handle. Project managers will define requirements, the code will be generated quickly for review, there can be multiple iterations over a few days if needed. The time from idea to product will be vastly compressed.

      I think you really hit the nail on the head. The most success I have had with LLM code is when it's building on an established foundation: a well-structured database, a well-structured MVC setup, or whatever. If you're using a well-documented framework, that's another plus.

      If you know enough to guide the AI in the direction you want it to go, you're far more likely to get good results. Heck, I've had good luck with just writing a function prototype and having it fill in the guts. I've had good luck telling i

      • >> telling it to refactor so-and-so class according to whatever principles

        I've been revisiting code that I wrote a while back looking for things that I could carry forward into other projects if it were packaged up better. Convert this module into a class, break these major areas of functionality out into microservices, etc. AI is a whiz at that kind of thing and now I have a much larger set of reusable utilities.

  • ... A tutor for the superior school and university interns.

    The programmers will assume the role of the tutor who assigns tasks to the interns, the interns being the AI. Sometimes the AI will give back results commensurate with what a TSU (Tecnico Superior Universitario - University-Level Technician) student would produce. Other times the result will be more aligned with what an engineering student would produce (slightly better).

    In both cases, the tutor is the one who doles out tasks, specifying how to do the

  • Isn't that what is already happening? I'd argue that LLMs will get feedback loops and other, specialized LLMs for cross-checking, plus permanent storage they can access, like we store information in our brains (isn't all that vectoring very similar to how neurons work? Now we might just need some upscaling, miniaturization, cheaper hardware and specialized sections, just like in the human brain). So I guess that "needs human oversight, context and special knowledge" part will go away. AI is still in its infancy,
  • People grow into the roles they're in. If they're not writing code every day but merely auditing it, eventually their programming skills will atrophy. People cannot develop skills unless they're using them daily.
  • Assuming that AI is actually capable of coding useful, non-trivial, defect-free products... You're still going to need programmers. But instead of writing code, they'll be writing formalized specifications.

    The English language suuuuuuuuccckkks at precision. Just look at any RFC that spends the first page defining the terms "MUST", "MAY", and "SHALL". AI prompts will need to become formalized and written to look like legal documents. The average person just doesn't think like that. Programmers do.

    "AI Spe

  • Speaking as a software engineer, I only spend a small fraction of my time typing in code. When people boast about how much code AI can generate in such a short time my reaction is, "How does that help me?" That isn't how I spend most of my time. Doing it faster doesn't save me much time. It also is one of the most fun parts of my job. I don't want to give it up!

    So what do I spend the rest of my time doing?

    Researching new features I might want to implement, talking to users to understand their needs, an

  • I don't mind writing prompts to generate boring code. I don't even mind iterating with Claude on not so boring code I don't have time to work on. I'm not really excited about hand-writing every piece of code my company wants me to write today, just to throw it away tomorrow or hand it over to a team of E1 contractors in IST.
    What I do mind is that for every idea I have, Claude can bash it out to some degree, but it can't currently figure out how to manually test anything. I can't really throw code over the w

  • It's starting to feel like it for me. I was joking to someone the other day that my work now consists of running 3 Claude Code terminals and me pressing '1' every 5 minutes.
