Vibe Coding Has Turned Senior Devs Into 'AI Babysitters'

An anonymous reader quotes a report from TechCrunch: Carla Rover once spent 30 minutes sobbing after having to restart a project she vibe coded. Rover has been in the industry for 15 years, mainly working as a web developer. She's now building a startup, alongside her son, that creates custom machine learning models for marketplaces. She called vibe coding a beautiful, endless cocktail napkin on which one can perpetually sketch ideas. But dealing with AI-generated code that one hopes to use in production can be "worse than babysitting," she said, as these AI models can mess up work in ways that are hard to predict.

She had turned to AI coding because her startup needed speed, which is exactly what AI tools promise. "Because I needed to be quick and impressive, I took a shortcut and did not scan those files after the automated review," she said. "When I did do it manually, I found so much wrong. When I used a third-party tool, I found more. And I learned my lesson." She and her son wound up restarting their whole project -- hence the tears. "I handed it off like the copilot was an employee," she said. "It isn't."

Rover is like many experienced programmers turning to AI for coding help. But such programmers are also finding themselves acting like AI babysitters -- rewriting and fact-checking the code the AI spits out. A recent report by content delivery platform company Fastly found that at least 95% of the nearly 800 developers it surveyed said they spend extra time fixing AI-generated code, with the load of such verification falling most heavily on the shoulders of senior developers. These experienced coders have discovered issues with AI-generated code ranging from hallucinated package names to deleted important information to outright security risks. Left unchecked, AI code can leave a product far more buggy than what humans would produce.

Working with AI-generated code has become such a problem that it's given rise to a new corporate coding job known as "vibe code cleanup specialist." TechCrunch spoke to experienced coders about their time using AI-generated code and what they see as the future of vibe coding. Thoughts varied, but one thing remained certain: The technology still has a long way to go. "Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, 'Please take this into the dining room and pour coffee for the family,'" Rover said. Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren't going to tell you. "It doesn't make the kid less clever," she continued. "It just means you can't delegate [a task] like that completely."
Further reading: The Software Engineers Paid To Fix Vibe Coded Messes
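
One cheap guard against the hallucinated package names mentioned above is to check every AI-suggested dependency against the package index before installing anything. A minimal Python sketch (the dependency list below is made up for illustration):

    import json
    import urllib.error
    import urllib.request

    def exists_on_pypi(name: str) -> bool:
        """Return True if PyPI has a project registered under this name."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                json.load(resp)  # valid metadata means the project exists
            return True
        except urllib.error.HTTPError:
            return False  # a 404 here is the classic sign of a hallucinated name

    # Hypothetical dependency list copied out of an AI-generated requirements file
    for pkg in ["requests", "fastapi", "totally-made-up-helper"]:
        print(pkg, "ok" if exists_on_pypi(pkg) else "NOT FOUND -- do not install blindly")

Existence alone doesn't prove a package is safe, of course (typosquatted names register just fine); it only catches the pure inventions.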

Comments:

  • by ebunga ( 95613 ) on Monday September 15, 2025 @07:23PM (#65662106)

    I don't think it means what you think it means.

  • by Valgrus Thunderaxe ( 8769977 ) on Monday September 15, 2025 @07:24PM (#65662108)
    You're not a developer and you never have been. Maybe social work or kindergarten teacher would be something more compatible with your personality.
    • Web developer can mean a lot of things. It can mean designer or project manager or several other jobs that are adjacent to coding but not coding.
    • Neither of those. Extend the metaphor: "Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old..." Can they do it? Possibly. Could they fail? Definitely. Therefore, they require supervision. Constant. Supervision. And *constructive* feedback. Why do you think parenting, teaching, and training are so hard?

      Developers - real developers - don't let their untested code run on production either, of course, so they're still not one of those, either.
      • Neither of those. Extend the metaphor: "Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old..." Can they do it? Possibly. Could they fail? Definitely. Therefore, they require supervision. Constant. Supervision. And *constructive* feedback. Why do you think parenting, teaching, and training are so hard?

        The real question is: how good is your insurance when your child suffers third-degree burns over 70% of their body, and is the final cost worth it?

    • Neither is it an "industry". It's a craft, and always has been. In fact, most IT project failures are due to the fact that the work is managed like a conveyor-belt factory instead of the workshop of a craftsman.
  • by dj245 ( 732906 ) on Monday September 15, 2025 @07:28PM (#65662116)
    Generative AI can be a huge timesaver, but the people doing the task need to actually know how to do the task, if only so they can provide reasonable instruction to the computer. All these get-rich-quick clowns falling on their faces are trying to shortcut years of knowledge and experience. Knowing Excel really well does not make one an accountant. And this is compounded by the fact that Excel gives predictable output, which is not the case with generative AI.
    • Excel gives predictable output

      So you've never had to deal with an Excel file containing dates or times or values which could possibly appear to be part of a date type??!

      • by flink ( 18449 )

        If you properly quote text, and format date cells as dates, then yes, it does.
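
        The same "be explicit about types" advice applies when the spreadsheet gets pulled into code downstream; a rough Python sketch (not from the thread, just illustrative), assuming pandas plus openpyxl and a hypothetical report.xlsx with one genuine date column:

            import pandas as pd

            # Read everything as text so nothing is silently coerced into a date,
            # then convert the one column that really is a date, explicitly.
            df = pd.read_excel("report.xlsx", dtype=str)
            df["invoice_date"] = pd.to_datetime(df["invoice_date"], format="%Y-%m-%d")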

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday September 15, 2025 @07:42PM (#65662150) Homepage Journal

    There's no sobbing in vibe coding.

    Just bullshit on top of more bullshit.

  • by choppsv1 ( 9739806 ) on Monday September 15, 2025 @07:49PM (#65662158)

    Code reviews, everyone loves doing them, right? Not right. I read someone else describe coding with AI as magnifying the most annoying part of a software engineer's job -- doing code reviews. Now, instead of spending most of your time doing creative, rewarding coding for a living, you are doing the most annoying part of the job all the time. Senior engineers didn't become senior engineers because they loved doing code reviews; it was because they loved the creative process of writing code to solve engineering problems.

    • by jythie ( 914043 ) on Tuesday September 16, 2025 @06:15AM (#65662750)
      On the other hand, 'code review' is one of the things that really splits senior software developers from the lower levels. Just being good at programming makes you a good programmer. Reviewing and signing off on other people's work is what (generally) makes you a senior or a lead. Probably the number one complaint I hear from senior devs is that they miss coding since so much of their time is spent doing reviews.
      • On the other hand, 'code review' is one of the things that really splits senior software developers from the lower levels. Just being good at programming makes you a good programmer. Reviewing and signing off on other people's work is what (generally) makes you a senior or a lead. Probably the number one complaint I hear from senior devs is that they miss coding since so much of their time is spent doing reviews.

        When I was at DEC, code reviewing was not limited to the senior software engineers--we all did it, both as reviewers and reviewees. I enjoyed the process, probably because it let me demonstrate my skills to my peers.

  • by ffkom ( 3519199 ) on Monday September 15, 2025 @07:57PM (#65662168)
    Not sure why I should pity someone who first asks some LLM to spit out code, only then to complain about how bad the results are. But her fate is much less sad than that of actual senior developers at larger corporations, who are increasingly confronted with code that new hires claim to have written themselves and will not admit to having vibe-coded, even though the kinds of errors made (and not made) make it quite obvious that the subject of the review is AI slop. Babysitting junior coders who vibe-code without admitting to it is so much worse than just babysitting AI.
    • Years ago I was at uni with someone who... well, let's say there were a lot of recreational pharmaceuticals at the rental he shared with a bunch of other students. They eventually got thrown out after burning down part of the fence on the property when it either tried to attack them or persisted in insulting them.

      Anyway, the sort of code he turned in for assignments was the closest I've seen to "AI"-produced vibe coding. Seriously.

  • by Tony Isaac ( 1301187 ) on Monday September 15, 2025 @08:12PM (#65662186) Homepage

    If you're "babysitting" AI code writing, you're letting it do too much.

    For little stuff, I find it literally slower to wait for AI to spit out the code I asked for, than to just type it myself. If you're using AI for big stuff, you are asking for problems.

    What AI is actually good for, is stuff where you know exactly what to do, but might not know the exact syntax or the exact API signature. It can also help with writing unit tests and other drudge work.

    But this idea of people "babysitting AI"--I don't really buy it. It can't actually do that much on its own.

    • by PPH ( 736903 )

      but might not know the exact syntax or the exact API signature

      In other words, a search engine*.

      *In the times of yore before ads buggered them all up.

      • A search engine, plus automation of Stack Overflow. Yeah, pretty much.

      • by Bongo ( 13261 )

        Being able to talk to the computer to ask it to find things based on meanings is like Star Trek technology and it's sad in a way that the hype is distracting from what they're good at. For example, yesterday I had a problem (not tech but a field I know nothing about (oh wait like tech then)) and the first AI search I did gave me an excellent find. Maybe I would have found it after spending all day browsing, but this was a great answer. And what about when we were supposed to wait for the semantic web and all that. So it's a bit sad it gets hyped as being able to replace people.

        • by jbengt ( 874751 )

          Maybe I would have found it after spending all day browsing, but this was a great answer.

          I have probably learned more things useful for my (mechanical) engineering career by coming across things other than that which I was looking up for the job.

          • by Bongo ( 13261 )

            True, there's a huge downside to being fed the answer.

            I do spend a lot of time looking at virtually random stuff, anything that just seems interesting. What is maybe more alarming about AI isn't the job replacement scenario, but the killing of creativity and synthesis.

        • Being able to talk to the computer to ask it to find things based on meanings is like Star Trek technology and it's sad in a way that the hype is distracting from what they're good at. For example, yesterday I had a problem (not tech but a field I know nothing about (oh wait like tech then)) and the first AI search I did gave me an excellent find. Maybe I would have found it after spending all day browsing, but this was a great answer. And what about when we were supposed to wait for the semantic web and all that. So it's a bit sad it gets hyped as being able to replace people.

          I hope you checked the answer. AI can give excellent-looking but totally false results.

          • by Bongo ( 13261 )

            I hope you checked the answer. AI can give excellent-looking but totally false results.

            Yes, exactly, this is it.

            In this case, it dug up something on a company website, and then I could go read the real information there.

      • Dude, it's so bad. Documentation has gotten really awful as well.

      • by allo ( 1728082 )

        Kinda, just faster and personalized for your code.

    • It's really great so long as you keep the degrees of freedom low. If you need a mechanical change across lots of code, like "replace telnet with ssh," it'll usually do well. You have to already have a pretty good idea what you want, so you're just saving time with the actual typing.
      • Even your use case is dangerous. I recently tried to get GitHub Copilot to "convert jQuery .ajax calls to use 'fetch'". It got into ALL kinds of trouble. I had to undo the change and re-apply it, one function at a time, after hand-tailoring my own function wrapping the "fetch" call.

    • This is what I like it for. "Write me a sql query for.." to query various system tables. I've worked with so many relational databases over the last 4 decades, well, it's hard to remember which one has what. That saves me time. "Generate me a unit test for this" also saves me time. "Write me new and different code" not so much.
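
      Even the boring "list all the tables" question is spelled differently in every engine, which is exactly the kind of recall being described. An illustrative Python/sqlite3 sketch (not the poster's actual queries), with the other engines' spellings noted in comments:

          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

          # SQLite keeps its catalog in sqlite_master ...
          tables = conn.execute(
              "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
          ).fetchall()
          print(tables)  # [('orders',)]

          # ... while Postgres uses information_schema.tables (WHERE table_schema = 'public'),
          # MySQL uses information_schema.tables keyed by database name, and Oracle has
          # all_tables. Easy to mix up after a few decades of each.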
    • For little stuff, I find it literally slower to wait for AI to spit out the code I asked for, than to just type it myself.

      This is quite obviously a lie.

      Come up with 1 page summary of an application that you want. All the things it has to do.
      I will complete it with an LLM in 15-20 minutes.
      You will take hours.

      The LLM produces tokens faster than I can type, and I can type faster than you.

      What AI is actually good for, is stuff where you know exactly what to do, but might not know the exact syntax or the exact API signature.

      NO. Fucking hell. RTFM. Asking the AI to hallucinate non-existent objects and methods into existence?

      You're clearly making this shit up. Why?

      • Come up with 1 page summary of an application that you want. All the things it has to do. I will complete it with an LLM in 15-20 minutes.

        How long will it take to ensure that the code is correct? You have to include that into your estimates, or you're doing it wrong.

        • Nonsense.

          We're not adding code review, though we can if we want. It'll still do it faster than a human.
          We're talking about building a small application to do a job.
    • Yeah you tell it to write you a single function and a test.

  • Now that we're getting past the first exuberant rush of hype, people are starting to try to build real stuff with AI. And they're running into walls in every direction. Maybe the AI apocalypse will be upon us one day, but not for a while yet.

    • people are starting to try to build real stuff with AI.

      We have been all along, while here on slashdot, you and other imbeciles have sat here trying to circle jerk yourselves into believing that it wasn't happening.

      And they're running into walls in every direction.

      This part was true then, and is still true now.

      Maybe the AI apocalypse will be upon us one day, but not for a while yet.

      Apocalypse? No. Jobpocalypse? I wouldn't bet against it. You can't actually believe your self-gaslighting, can you?

  • I am not a heavy Excel user, but occasionally I need to write a slightly complicated formula. Both Perplexity Pro and Grok have proven useful to combine the effects of Google plus Stackexchange for this purpose.

    Then I tried generating some python code for Klayout. The template generated might have saved me an hour or two. Which is not bad.

    But I would not try to build a startup around it.
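
    For scale, the kind of KLayout template in question tends to be only a dozen or so lines; an illustrative sketch (not the poster's actual code), assuming the pya module that KLayout's built-in scripting environment provides, with made-up layer numbers and geometry:

        import pya  # KLayout's scripting module

        layout = pya.Layout()
        layout.dbu = 0.001                  # database unit of 1 nm, expressed in microns
        top = layout.create_cell("TOP")
        metal1 = layout.layer(1, 0)         # layer 1, datatype 0 (placeholder numbering)

        # One placeholder 10 x 10 um box; Box coordinates are in database units
        top.shapes(metal1).insert(pya.Box(0, 0, 10000, 10000))

        layout.write("template.gds")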

  • "Because I needed to be quick and impressive, I took a shortcut ...

    Fast, good, cheap - pick two.

    She had turned to AI coding in a need for speed with her startup

    Or, hire another programmer.

  • Most of the comments are correct. Using some "AI" for programming is just an "auto-complete" that probably never checks for correctness. Some self-proclaimed "programmer" uses this "AI" to complete their coding and then complains that the "AI" does not produce expert-level code. Please comment on what I missed here.

    • Please comment on what I missed here.

      Missed? Don't know.
      Characterizing an LLM as "just an auto-complete" is flat out stupid, though.
      It would take a dozen paragraphs to explain why it's stupid, and just one line for you to say "SEE? Autocomplete!"
      So yes- yes it is an autocomplete.
      In the same way that a computer is just an abacus.

      LLMs are a mixed bag. They've changed the workflow. I'm not 100% sure whether it's for the better or not, but flat-out denial of the things they're quite fucking good at smells to me like buggy whip manufacturers.

      • by wed128 ( 722152 )
        They are *really good* autocomplete. They guess wrong *all the time*. Using them as more than autocomplete is a serious liability, as the woman in the summary found out.
        • by allo ( 1728082 )

          It's a reduction to the basics, and one that's not useful and a bit misleading.
          Saying LLMs are autocomplete is like saying programs are just a collection of bits. It's correct, but it captures neither the full extent of what the thing represents, nor the complexity, nor what you can do with it.

  • My job is reviewing terrible PRs and correcting them. Who cares if it's a 1st year graduate or Claude?

    • by dfghjk ( 711126 )

      The 1st year graduate cares; you'll figure it out when your job is terminated.

    • by wed128 ( 722152 )
      The 1st year graduate may get better, and some day be capable of reviewing code! Claude's gonna hit a wall soon, and when you retire there'll be nobody left to review Claude's code, and nobody's going to know why everything stopped working.

      The obsession with AI points to an existential crisis: a society relying on technology that only a very small minority understands.
    • by GlennC ( 96879 )

      The difference is that after several iterations, the junior developer will learn and eventually become a senior developer able to produce reasonable quality code on their own. Then your job can go from reviewing their work to solving new problems. That's also how the next generation of senior developers is created.

      Yes, the AI tool will look like it's learning too. What happens when you leave for one reason or another? Will there be anyone to take your place reviewing and correcting PRs, or will they have t

  • by 93 Escort Wagon ( 326346 ) on Monday September 15, 2025 @09:25PM (#65662316)

    After reading the title (mistake #1, I know), I assumed this was a story about how people employed in senior positions at large companies are finding themselves stuck babysitting newly-hired "vibe coders". Instead, I find a story about a former web developer who's apparently now trying to develop and sell LLMs.

    I have no idea what her skill level was as a web developer, but - it sounds like she has zero experience with the sort of thing she's trying to do now. Which is fine, I guess, but she isn't a "senior developer" by any stretch of the imagination.

  • by davide marney ( 231845 ) on Monday September 15, 2025 @09:53PM (#65662356) Journal

    My son writes contract proposals, another area where people try to cut corners by generating responses from requirement documents. Sounds legit, you would think. He tells me it's a boat anchor, dramatically slowing down delivery. The problem is when it guesses -- and it guesses A LOT -- there's no telling what ridiculous BS it will pull out of its learning corpus. You can't even rely on it being predictable.

  • There's your problem, right there.

    There is no such thing as complete delegation. The word for that is abdication.
  • by pooh666 ( 624584 ) on Monday September 15, 2025 @11:35PM (#65662498)
    In order to get good work from Claude Code (honestly, screw the rest) you have to instruct it very carefully. I take days to make an outline of a major project before I can even let it do one thing, then I take things in very small pieces and I explain that to Claude Code as well. Checklist after checklist. I get things done REALLY fast and I don't have the kind of issues that seem to be all over the place, like in this pathetic article.

    I find ChatGPT goes nuts half the time I ask it things; it gave me deprecated APIs in instructions for Klaviyo and Facebook, like REALLY deprecated stuff. You often get to see how good or bad an API's docs are by how badly AIs screw things up. I constantly ask them to give me coding references/links and it is not uncommon to get back, oh sorry, I made that up. I save insane amounts of time just by asking that question fairly often.

    BUT, I can still go faster than I ever have before and make really good quality work, all of which I understand because I take the time to make sure I do before I move on to another step. Honestly, what is so hard about this? I think that definition of "Senior" needs some vetting in these articles in the future.
    • Wouldn't it be interesting if AI meant a revival of "Waterfall" software specification methodology? One of the complaints I've had with agile since I started using it two decades ago is that if you say "we'll get to that" about too many things, then by the time you do get to them, you've backed yourself into a corner.
  • YOU are not "coding" at all. You are asking a hacked-together-and-modified human language processing experiment to cut and paste random chunks of code from the internet into a blob that might well compile and even run, but which YOU neither understand nor can claim credit for "writing". You're not gonna know the code, not going to be able to fix it or maintain it, and you certainly are not a "coder" or a "programmer" or a "developer"... you started as a specifier, and then turned into a coach/critic for a d

  • It will get worse - instead of Senior Devs checking the code that they wrote the prompts for, they will check the code that was output from prompts that somebody else wrote. Meaning that they have to deal not just with some AI nonsense but potentially with a Junior Dev who may have misunderstood the assignment in the first place. I guess unit testing is still a good quality gate, but when your programmers' skills are atrophying, it makes the entire process more risky.
  • I've found that, counter to what the talking heads would have you believe, the only people that can effectively use AI for coding tasks are the same people that can code effectively to begin with.

  • It is well known to developers that writing a new feature is the easy part; debugging it is the hard part. Letting the AI do the “easy part” and having the human do the “hard part” doesn’t really help. Debugging tends to be the largest part of an accurate schedule, and it is the most uncertain part. People frequently make pretty accurate estimates of how long it takes to hit feature complete, but poor estimates of how long it takes to debug those features.

    Developers shoul

  • What is wrong with people? That one comment, "senior dev, you keep using that word and I don't think it means what you think it means," is accurate.

    AI at this point can be an assistant for functions and writing code, giving you ideas or suggestions for how to write a function, but by no means can you sit on your ass and do nothing while the AI does your job for you, all while you're proud like you did something.

    We live in a constant state of stolen valour where people are lazy, want to do nothing, achieve nothi

"Pay no attention to the man behind the curtain." -- The Wizard Of Oz

Working...