Programming

'There is No Vibe Engineering'

Software engineer Sergey Tselovalnikov weighs in on the new hype: The term caught on and Twitter quickly flooded with posts about how AI has radically transformed coding and will soon replace all software engineers. While AI undeniably impacts the way we write code, it hasn't fundamentally changed our role as engineers. Allow me to explain.

[...] Vibe coding is interacting with the codebase via prompts. As the implementation is hidden from the "vibe coder", all the engineering concerns will inevitably get ignored. Many of the concerns are hard to express in a prompt, and many of them are hard to verify by only inspecting the final artifact. Historically, all engineering practices have tried to shift all those concerns left -- to the earlier stages of development when they're cheap to address. Yet with vibe coding, they're shifted very far to the right -- when addressing them is expensive.

The question of whether an AI system can perform the complete engineering cycle and build and evolve software the same way a human can remains open. However, there are no signs of it being able to do so at this point, and if it one day happens, it won't have anything to do with vibe coding -- at least the way it's defined today.

[...] It is possible that there'll be a future where software is built from vibe-coded blocks, but the work of designing software able to evolve and scale doesn't go away. That's not vibe engineering -- that's just engineering, even if the coding part of it will look a bit different.

Comments Filter:
  • Straddles the boundaries between software and hardware

  • after having successfully avoided touching anything Java with a ten foot pole.

    Holy shit! Abstract classes on top of interfaces just to handle datatypes where in C++ a single struct with maybe a member function or two can suffice. camelCase object and type names as far as the eye can see. Factory classes creating instances of abstract types with only one implementation that only logically fit together exactly one way.

    If I didn't have deep subject-domain expertise I'd still be looking through the javadocs try

  • Well said (Score:5, Insightful)

    by Tony Isaac ( 1301187 ) on Monday March 31, 2025 @10:02PM (#65273025) Homepage

    Finally, some sanity in the whole discussion of AI coding.

    No, our software development jobs aren't going away any time soon. The typing part is just going to get a little easier.

    • Re: (Score:1, Interesting)

      No, our software development jobs aren't going away any time soon. The typing part is just going to get a little easier.

      It's pretty obvious that software engineering as we currently know it is going to be taken over by AI. There is nothing that a human software engineer can do that AI can't either currently do, or will be able to do very shortly. Same goes for a lot of different knowledge and creative professions. Yes, now you know how all the factory workers felt during the industrial revolution. It wasn't like one day all the factory workers were fired and replaced with machines - it's a process that happens over time.

      • Re:Well said (Score:5, Insightful)

        by Tony Isaac ( 1301187 ) on Monday March 31, 2025 @11:48PM (#65273119) Homepage

        There is nothing that a human software engineer can do that AI can't either currently do, or will be able to do very shortly

        This is highly optimistic. Here's why.

        Your "creative professions" point is a good. one. AI today can generate, say, a large crowd of people in a movie, no extras required. If I were an extra, I'd be worried about my job. But what AI can't do, is write and direct and produce a movie on its own, that anybody would want to watch.

        This is how AI will intrude upon software development too. It can (or soon will be able to) do the job that "extras" do--people who create bits of software that are straightforward and for which the details aren't very important or difficult to get right. Software such as basic data entry forms, or unit tests...AI can do that. But AI can't today, and won't in the near future be able to, create software that anybody would actually want to *use*.

        • by logique ( 600113 )

          Your "creative professions" point is a good. one. AI today can generate, say, a large crowd of people in a movie, no extras required. If I were an extra, I'd be worried about my job. But what AI can't do, is write and direct and produce a movie on its own, that anybody would want to watch.

          Some may argue (with copious examples) that people can't write, direct or produce a movie that anybody would want to watch ;-)

      • There is nothing that a human software engineer can do that AI can't either currently do, or will be able to do very shortly.

        You're putting a lot of faith into your prediction of the future. Considering how little you know about AI, it's a surprisingly presumptuous prediction as well. (You could prove me wrong by explaining why you think it will happen very shortly. But you can't).

        • There is nothing that a human software engineer can do that AI can't either currently do, or will be able to do very shortly.

          You're putting a lot of faith into your prediction of the future. Considering how little you know about AI, it's a surprisingly presumptuous prediction as well. (You could prove me wrong by explaining why you think it will happen very shortly. But you can't).

          I was a professional software engineer for three decades, working on many different projects from the big tech companies to small startups in a bunch of domains, and apart from using AI on a daily basis for the last couple of years, I've studied the topic directly under someone who just received a Nobel Prize for AI. My prediction comes from first-hand experience of working with AI on projects, including writing software with AI. It really is blindingly obvious to people that have figured out how to use i

          • by phantomfive ( 622387 ) on Tuesday April 01, 2025 @08:42AM (#65273579) Journal
            You wrote a paragraph explaining how impressive your resume is.

            But you didn't explain why you think AI will be able to do that very shortly. Considering you have so much experience, and you are so smart, it's amazing you can't answer the question.
      • by fuzzyf ( 1129635 ) on Tuesday April 01, 2025 @10:21AM (#65273777)
        It's "obvious" to people who don't understand how LLMs work.
        Most people do not understand what is at the heart of an LLM, and as such, don't understand its limitations.

        I'll simplify a bit for the sake of keeping this post short, but in essence, the "magic" part of an LLM is a model that, when given input as tokens (which are words or parts of words), will return the probability of the next word.
        The model itself is stateless; it will always return the same words with the same probabilities given the same input.
        It doesn't learn.
        It doesn't remember.
        It doesn't think.
        It doesn't reason.
        It doesn't understand.
        It only creates a list of candidate next words, each with an attached probability.

        Of course, the "wrapper" around this magic feeds the entire previous conversation, plus each newly generated word, back into the model to produce the next word, and keeps doing that until an answer is complete. That is why it's still power-hungry and time-consuming. The wrapper also offers options like temperature (not always picking the most probable word) and a randomness seed (so it doesn't generate the same list of words every time).
        Right now, with the "reasoning" models, we are going down the same path as we did with XML back in the day. "If you can't solve it using XML, then you are not using enough XML". Basically feeding the complete result of the LLM back into itself (or a different one) to have one more pass to see if the training data can match some outlier or unusual text.
        Training these models takes as much power as a small city consumes. The technology is almost at its peak. It will improve, of course, but not significantly. If something other than LLMs comes along, then we can revisit the topic.
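The stateless generate-and-feed-back loop described above can be sketched with a toy stand-in for the model; the tiny vocabulary and hash-based pseudo-logits here are purely illustrative assumptions, not any real LLM's internals:

```python
import math
import random

# Toy "model": a stateless map from a context string to scores over a tiny
# vocabulary. A real LLM does the same thing at vastly larger scale:
# same input always yields the same scores.
VOCAB = ["the", "cat", "sat", "mat", "<end>"]

def toy_logits(context: str) -> list[float]:
    # Deterministic pseudo-logits derived from the context; this hash trick
    # merely stands in for a neural network forward pass.
    return [float(hash((context, tok)) % 97) / 10.0 for tok in VOCAB]

def next_word_distribution(context: str, temperature: float = 1.0) -> dict[str, float]:
    # Temperature rescales the logits before softmax: low temperature sharpens
    # the distribution toward the most probable word, high temperature flattens it.
    logits = [l / temperature for l in toy_logits(context)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # numerically stable softmax
    total = sum(exps)
    return {tok: e / total for tok, e in zip(VOCAB, exps)}

def generate(prompt: str, temperature: float = 1.0, seed: int = 0, max_words: int = 10) -> str:
    # The "wrapper": feed the whole conversation back in for every single word.
    rng = random.Random(seed)  # fixed seed -> reproducible output
    context = prompt
    for _ in range(max_words):
        dist = next_word_distribution(context, temperature)
        word = rng.choices(list(dist), weights=list(dist.values()))[0]
        if word == "<end>":
            break
        context += " " + word
    return context
```

Note that the model itself holds no state between calls; all apparent "memory" lives in the ever-growing context string, which is one reason generation cost grows with conversation length.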

        Just to be clear: I'm using Copilot (GitHub) both in a professional capacity and for my hobby coding projects. I also have a paid sub to ChatGPT, helping me look into things I want to learn more about. It's a fantastic technology, and it can really help. But it doesn't really do all the things that people think it will do.
      • The big vendors of AI are certainly claiming that human software engineers will be replaced very soon. But they have a direct financial incentive to make that claim, even if it is not true.

        Industry experts have said otherwise in articles published right here on Slashdot.

        Of course, I don't know if these "industry experts" actually know what they are talking about. I am sure they know more than I do about the situation, but experts have been wrong before.

        All I know is: time will tell. I am not panicking to

      • It's pretty obvious that software engineering as we currently know it is going to be taken over by AI. There is nothing that a human software engineer can do that AI can't either currently do, or will be able to do very shortly.

        Engineering vs Developing vs Coding

        Engineering is designing something from principles.

        Developing is building something based on previous experience building things.

        Coding is writing code.

        AI systems can generate code: follow the logic of the language and predict what piece goes next. Developers can use AI generated blocks of code like Legos to build something. With training and guidance, AI can piece together blocks of code to build larger structures to make developing easier and faster.

        None of this sugges

    • Re:Well said (Score:4, Insightful)

      by Mozai ( 3547 ) on Tuesday April 01, 2025 @01:18AM (#65273247) Homepage

      "AI will replace our jobs when the customer can clearly and correctly describe their requirements; our jobs are safe."

      • by Entrope ( 68843 )

        To a large degree, what qualifies as "clear and correct" depends on the knowledge and skill of the reader of requirements as much as the writer. On the (complex) system I am working on, the people implementing the hardest part have often discovered, years later, why many of the requirements were originally written the way they were rather than the way the implementers wanted.

      • Yeah pretty much.

        I once was asked by my company's executives to build some dashboards to help them have "better visibility" into trends that the company was experiencing.

        "OK, what do you want on these dashboards?"

        "We don't know, you figure that out."

        That's about all the requirements you get, in a lot of cases. AI could come up with some dashboards, but would they be useful or relevant? Probably not.

        The one thing the executives DID know, was *when* they wanted the dashboards!

    • by tlhIngan ( 30335 )

      Finally, some sanity in the whole discussion of AI coding.

      No, our software development jobs aren't going away any time soon. The typing part is just going to get a little easier.

      There always was sanity. If you watch vibe coders at work, you can tell the tools are nowhere near the hype. Sure, in the early days they seemed cool enough, where ChatGPT would get you a lot of code that seemed to work, but ask it more sophisticated problems and things break down really quickly. There are plenty of videos

    • Much more likely is that programmers and everyone else will use this technology to do as much work for them as cybernetically possible.

      At the same time, any coding practices that are boring and time consuming (like exception checking, logging and testing) will still be done because AI works fast and doesn't get bored.

  • It does an OK job of writing javadoc for simple methods, if you name them and the parameters with descriptive enough names.

    • Hello,

      Except that if it can "write the documentation" out of your function prototype, then it means that the function prototype is self-explanatory enough and does not need documentation.

      Let us take a function:
      int addsTwoNumbers(int a, int b);

      Adding 6 lines of "documentation"(*) above that reads:
      // There is a function called addsTwoNumbers
      // this function will take 2 inputs
      // The first one called "a"
      // and a second one called "b"
      // The function will return the result of the addition of these 2 numbers

      T

    • I do a lot of ETL work, and I'm using AI as a first-semester-intern type worker.

      "Make me a Java class that matches this CSV header so I can import the file using OpenCSV"

      "Make me a DB2 create table statement that matches this Java class"

      etc.

      I'm too lazy to create an account and provide any extra training and such, so I do end up needing to fix a few things related to our internal standards on naming and DB table structure, and I have to add the permission grants statements but it saves me plenty of copy/pas

      • So you need to go through the generated code to make sure it's named correctly.

        So it's doing nothing more than advanced auto-complete and templates you get from an IDE like IntelliJ

  • He likes to tell us how he was an engineer, but a real one, not a software engineer. Then he would try to understand our code base by asking us about it and telling us what we should have done differently. He never logged into the source control and, as far as I can tell, never went into a text editor.

  • by sjames ( 1099 ) on Monday March 31, 2025 @11:31PM (#65273105) Homepage Journal

    Do you know why we use all those funny symbols in math, and specialized languages other than English for programs?

    Because spoken human languages are not sufficiently expressive in those domains.

    On a deeper level, we DO have a name for what LLMs do to generate code: Cargo Cult Programming.

    It's not a good thing.

    • by ljw1004 ( 764174 ) on Tuesday April 01, 2025 @01:14AM (#65273241)

      On a deeper level, we DO have a name for what LLMs do to generate code: Cargo Cult Programming.

      I'm a senior developer and use LLM assistance multiple times an hour. >90% of the time I find something valuable in what it produced (even though I rarely accept what it suggested directly, and I often write or rewrite every single line).

      So what value does it have if I'm not accepting its code overall? Lots of value....
      1. As a professional I produce code that (a) I can reason about why it's correct in all possible environments, and (b) I'm confident that the way I've expressed it is the best it can be expressed in this situation. The LLM can spit out several different ways of expressing it, helping me assess the landscape of possible expressions, allowing me to refine my evaluation of what's best. (It doesn't yet help at all with reasoning about correctness).
      2. About 10% of the time I accept some of the lines it suggested because they save some inescapable boilerplate. Or it spits out enough boilerplate to tell me hey, I need to invent an abstraction to avoid boilerplate here. I'd have gotten there myself too, just slower.
      3. Sometimes I find myself learning new idioms or library functions from its suggestions.

      I think management is right to be AI crazy. LLMs have increased the rate at which I solve business needs with high quality code, and I think my experience generalizes to other people who are willing to take it on and "hold it right". (Of course, there'll be vastly more people who use it to write low quality code faster, and it'll be up to management to separate the good from the bad just like it always has been.)

      • This matches how I use it. I’ll add a few other points:

        4. Writing the first core version of a service or UI. I’ll typically use close to 100% of those generated lines, and then continue building with LLM assistance where it makes sense. It makes a big difference to development velocity.
        5. Finding bugs. If some bug isn’t obvious to me, I provide the code to an LLM and describe the problem. Its success rate is high.
        6. Working with tech I’m not particularly familiar with (an extension

      • On a deeper level, we DO have a name for what LLMs do to generate code: Cargo Cult Programming.

        I'm a senior developer and use LLM assistance multiple times an hour. >90% of the time I find something valuable in what it produced (even though I rarely accept what it suggested directly, and I often write or rewrite every single line).

        So what value does it have if I'm not accepting its code overall? Lots of value.... 1. As a professional I produce code that (1) I can reason about why it's correct in all possible environments, (2) I'm confident that the way I've expressed it is the best it can be expressed in this situation. The LLM can spit out several different ways of expressing it, helping me assess the landscape of possible expressions, allowing me to refine my evaluation of what's best. (It doesn't yet help at all with reasoning about correctness). 2. About 10% of the time I accept some of the lines it suggested because they save some inescapable boilerplate. Or it spits out enough boilerplate to tell me hey, I need to invent an abstraction to avoid boilerplate here. I'd have gotten there myself too, just slower. 3. Sometimes I find myself learning new idioms or library functions from its suggestions.

        I think management is right to be AI crazy. LLMs have increased the rate at which I solve business needs with high quality code, and I think my experience generalizes to other people who are willing to take it on and "hold it right".

        Well said. I do the same.

      • 2. About 10% of the time I accept some of the lines it suggested because they save some inescapable boilerplate. Or it spits out enough boilerplate to tell me hey, I need to invent an abstraction to avoid boilerplate here. I'd have gotten there myself too, just slower.

        99% of the time there is value in shifting boilerplate off into a function or similar.

  • by ZipNada ( 10152669 ) on Tuesday April 01, 2025 @12:05AM (#65273141)

    I use the AI for all kinds of tasks now and it is immensely productive. I don't think most of the people commenting here are actually using it.

    I wanted to add a feature to my project today. There would be a global rules file that would contain most of the configuration settings in one place that would be easy to see and modify. So I suggested that to the AI. It cheerfully volunteered to do that for me and it did.

    "The global rules now control important aspects of your application's behavior, allowing you to easily adjust settings in one place rather than hunting through multiple files. Would you like me to continue integrating the global rules into other parts of your application, such as the database configuration or user management features?

    Everything it had done worked beautifully, so I am thinking I will say yes.

    • by phantomfive ( 622387 ) on Tuesday April 01, 2025 @12:13AM (#65273161) Journal

      Everything it had done worked beautifully,

      Well, this shows you haven't been using AI much.

      • The AI has written several thousand lines of properly working code for me over the past few days. But keep on coding by hand, buddy.

        • by sjames ( 1099 )

          Have you checked all of those lines to make sure they don't do something horrible in corner cases, or worse, allow an exploit? Did you check to see if those several thousand lines are maintainable and couldn't be done better in 100 lines?

          • All of the code was written incrementally at my direction. I reviewed it and tested it at each iteration. Sometimes there were bugs and I had to get the AI to fix its work.

            It is a very powerful technology. I suggest that people go ahead and learn how to use it or be left behind.

            • I suggest that people go ahead and learn how to use it or be left behind.

              I’ve stopped arguing with people about it (mostly haha) for this reason. They’re either going to have to figure it out soon enough without our help, or as you say, be left behind.

              • I can understand why a lot of people will feel threatened by this tech. It's a very big change in the way you work.

                Working with AI is very liberating in a sense though. You can branch your code and do some experimenting with it in ways that would have taken you too much time to bother with in the past. If it doesn't work out? No problem, you didn't make a large investment of time and energy. But sometimes it really does work out and you have some significant improvements with relatively little effort.

            • "Incrementally"
              I think that describes what several of the previous posters are saying too.
              Explain, generate, test, repeat.
              That's basically how I'm using it too ... I'd estimate I'm getting 10-20x productivity compared to 10 years ago. I tweak or rewrite a lot, but not having to look up every statement for syntax alone is a big time saver. I manage the context and system architecture. To me it's like having a junior programmer who I didn't have to train.
              • I find that you have to be careful about what you tell it to do. If you ask for a little too much at once it will go nuts, hit most of the files in your codebase, and the results will be buggy. If you make specific requests with limited scope the AI is much more likely to succeed.

                • Very much lines up with my experience.
                  I tend to work from the bottom up, building small functions that I can test and then trust, then integrate those functions.
                  Keeps it understandable and me in the driver's seat.

                  I like to say I bang rocks together to code i.e. just use nano ... sometimes bluefish for a better search and replace....
                  What model(s) do you use?
                  • I use Windsurf as the IDE, which is a derivative of Visual Studio Code. It is basically a front-end for coding with an AI and it does everything. I had been using VS Code for years so it was an easy transition. Have a look. https://codeium.com/pricing [codeium.com]

                    There's an AI chat panel on the righthand side and you can choose from multiple AI backends such as Claude Sonnet, Gemini, GPT-4o, DeepSeek, etc. I pay $10/month for the Pro plan and it is incredibly well worth it. Here's my referral link, it would give us both

            • All of the code was written incrementally at my direction. I reviewed it and tested it at each iteration.

              OK, well that contradicts what you said in your previous post [slashdot.org], where you describe it being done with a suggestion and nothing more:

              "I wanted to add a feature to my project today...So I suggested that to the AI. It cheerfully volunteered to do that for me and it did."

              So now we know you are either a liar, or really bad at describing things.

              Most likely you are copying this account from some post you read somewhere, and it's not something you did yourself. Your description of the feature is too vague for a human or AI to really understand. He/She/They/it would need to ask clarifying questions.

              • What's with the weird snark? I'll assume you have zero experience working with this tech, but a lot of opinions.

                The AI will write an extensive amount of code based on simple prompts and my statement was completely truthful. You then obviously have to test it. Sometimes the generated code won't work quite right, there's a missing import or something minor. You can just paste in the error message and the AI will fix it. These days it can even run your code itself, evaluate error messages on its own and make f

                • LOL I'm glad you verified the AI can respond to you in English. Good job verifying that. Too bad it didn't write the code maintainably, consistently, and without code duplication in the first place.

                  Maybe you should just stop lying.
        • > he is only at several thousand lines

          last month I had about 5 million lines, porting large projects from one language to another.

      • by DrXym ( 126579 )

        It also shows extreme overconfidence in AI and/or a lack of awareness of their own shortcomings as a programmer.

    • by Jeremi ( 14640 )

      The global rules now control important aspects of your application's behavior

      Did it mention which aspects, or do you need to now read through your codebase to find out what your codebase actually does?

      • The AI explicitly describes all the code modifications it proposes. All changes are shown in the IDE and you can accept or reject them, meanwhile you can test it. Maybe you should try it for yourself.

    • Don't forget to say "thank you" when you get your output, most people don't and I'm worried it will start feeling salty about that.
      • The AI is unfailingly polite and enthusiastic, maybe to a fault. It will compliment you - "That's an excellent suggestion!". I have yet to see it give me a good scolding. Once in a while it should say "nope, that would be a stupid thing to do".

          • Try asking it to do something that violates terms of service, like scraping contact info from LinkedIn. The only one that might do it is DeepSeek.
  • I'm not an engineer (software or hardware) but the term "vibe" just seems childish. Is this something that Musk came up with?
    • If Elon had come up with it, the name would have been "xibe" or "vib-x".
    • by djb ( 19374 )
      It comes from the idea of the “Vibe Shift”, which has recently gained a lot of traction in the political sphere. The term came out of a NY art collective called K-HOLE back in 2022; their founder Sean Monahan explained the current shift as follows.

      “[. . .] the trajectory of the 2010s has been exhausted in a lot of ways. The culture-war topic no longer seems quite as interesting to people. Social media isn’t a place where you can be as creative anymore; all the angles are figured o
    • Brian Wilson of The Beach Boys.
    • It was Andrej Karpathy, one of the world’s top AI engineers.

      Description of the term: https://x.com/karpathy/status/... [x.com]

      About him: https://en.wikipedia.org/wiki/... [wikipedia.org]

  • You put investors in a 'vibe' and engineer them out of their money.

    It also lets you keep the 'vibe' of the code base.

    We take a clusterf..k in one language -- push it through the AI and it produces code in the target language with the same perfect clusterf..k vibe.

  • I've inherited the worst project I have ever seen. Maybe not the worst code, but the worst-run project. Not a single comment in the code; argument and variable names that are sometimes close but not quite right. Function pointers and other indirection for no good reason. Duplicate variables for the same data, just in different units (temperature is stored 10 f$#king different ways). 100,000-line header files with only 3 relevant defines. The AI hasn't been useless. It might have given me some insight
  • and you are no engineer.
  • AI code generation can be very useful, but unless someone understands the code to a degree of expertise, they have no idea if it's trash or not, and certainly no idea how to debug it. So yeah, I'd say it's fine to ask AI questions and get it to generate boilerplate, but only if you can understand what it's outputting and fix it or deal with edge cases. So it augments knowledge; it does not replace it.

    What is worse is that when it's wrong it can be very wrong in obvious or non obvious ways. That is not surprisin

    • Has anyone given the AI the canonical homework problems from Intro to Computer Programming and compared the output of the AI to the homework solutions?

      Give it something simple, say, like "program a bubble sort of a list of key-value pairs"? Yeah, I know bubble sort is a stupid algorithm, but it is simple enough as a homework problem.
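For reference, the homework-sized baseline being proposed can be sketched as follows; the (key, value) tuple format and ascending key order are assumptions for illustration:

```python
def bubble_sort_pairs(pairs):
    """Bubble-sort a list of (key, value) tuples by key, ascending.

    O(n^2) comparisons -- fine as a homework yardstick, not for real use.
    """
    items = list(pairs)  # work on a copy; don't mutate the caller's list
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # After pass i, the i largest keys have bubbled to the end.
        for j in range(n - 1 - i):
            if items[j][0] > items[j + 1][0]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means already sorted: stop early
            break
    return items

# Example:
# bubble_sort_pairs([("b", 2), ("a", 1), ("c", 3)])
# -> [("a", 1), ("b", 2), ("c", 3)]
```

A solution this small is also easy to diff against a model's output, which is what makes it a useful benchmark question.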

  • by TJHook3r ( 4699685 ) on Tuesday April 01, 2025 @04:32AM (#65273395)
    Stopping at software engineering for your vibing seems narrow-minded; surely we should be outsourcing all types of engineering to underskilled cowboys with AI - bridges and tunnels and nuclear reactors, etc.
  • by bleedingobvious ( 6265230 ) on Tuesday April 01, 2025 @06:02AM (#65273461)

    We are entering a golden age where we will shift from the creative end to the bug-hunting one at 3x the price.

    I, for one, welcome our LLM garbage spewers. They will enrich my early retirement.

    • But you are assuming that our society's necessary infrastructure won't collapse because of AI induced errors...

      Someone's got to be the pessimist round here.

    • that you will be able to force yourself off your lawn chair to even look at the hot, smelly mess generated by the AI?

    • From a manager's perspective, that's good. Now I can hire more bug hunters, and manage 3x the number of people, meaning I am 3x important in the company. The more people I can hire below me, the better.
  • > AI has radically transformed coding and will soon replace all software engineers.

    Going on ChatGPT I am not impressed. Gets very confused and makes stuff up.
  • I don't care what you call it. Vibe coding, AI assisted typing, whatever you like. There are people who are good at getting the tools to work for them and those that aren't. The number of people in each camp will eventually sway to the efficacious as time passes. Right now, engineers like myself who have been doing this forever have a distinct advantage, because nobody has thought to tell the LLMs to take a lap and plan out their moves in detail to the level yet required to achieve this. But it will happen.
  • Automation didn't eliminate carpentry, or turn anyone into a "vibe carpenter" either, but most of us are more likely to buy and eventually throw out a few dressers from Ikea than ever pay a real carpenter for something built to last.
  • I've successfully used AI (Gemini, which is what my employer uses, integrated into the IDE) to:
    - write a python script to heuristically parse a messy format that was output in some logs so I could troubleshoot a customer issue
    - write some basic bash scripts to avoid doing some little boring task by hand
    - help me write some python unit tests
    - finish up some small refactoring of some code

    It works sometimes more than others, but it will get better quickly. This is different from vibes coding, it's jus

  • left from right or understand the right-hand rule.

    If you use an AI to help solve problems in physics or engineering fields to determine directions of forces, currents, torques, 3D graphics orientations, or navigation, then you may have a problem. I recently ran into this using AI to create some simple 3D graphics, and as I examined the issue, I was able to determine that the AI could not tell clockwise from counter-clockwise, nor apply the right-hand rule. I was testing Gemini 2.5 when I ran into the problem. I
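The orientation confusion described above is easy to check mechanically: in 2D, the sign of the cross product's z-component tells you whether three points turn counter-clockwise, which is the planar form of the right-hand rule. A minimal sketch (the point-as-tuple format is an assumption):

```python
def turn_direction(p, q, r):
    """Return 'ccw', 'cw', or 'collinear' for the turn p -> q -> r.

    Uses the z-component of the cross product (q - p) x (r - p):
    positive means counter-clockwise in a right-handed coordinate system.
    """
    cross_z = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    if cross_z > 0:
        return "ccw"
    if cross_z < 0:
        return "cw"
    return "collinear"

# Example: the unit triangle (0,0) -> (1,0) -> (0,1) turns counter-clockwise.
# turn_direction((0, 0), (1, 0), (0, 1)) -> "ccw"
```

Handing a model a deterministic check like this is one way to catch orientation mistakes in its generated geometry, rather than trusting its verbal reasoning about handedness.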
