'Vibe Coding' is Letting 10 Engineers Do the Work of a Team of 50 To 100, Says YC CEO (businessinsider.com) 116

Y Combinator CEO Garry Tan said startups are reaching $1-10 million annual revenue with fewer than 10 employees due to "vibe coding," a term coined by OpenAI cofounder Andrej Karpathy in February.

"You can just talk to the large language models and they will code entire apps," Tan told CNBC (video). "You don't have to hire someone to do it, you just talk directly to the large language model that wrote it and it'll fix it for you." What would've once taken "50 or 100" engineers to build, he believes can now be accomplished by a team of 10, "when they are fully vibe coders." He adds: "When they are actually really, really good at using the cutting edge tools for code gen today, like Cursor or Windsurf, they will literally do the work of 10 or 100 engineers in the course of a single day."

According to Tan, 81% of Y Combinator's current startup batch consists of AI companies, with 25% having 95% of their code written by large language models. Despite limitations in debugging capabilities, Tan said the technology enables small teams to perform work previously requiring dozens of engineers and makes previously overlooked niche markets viable for software businesses.


Comments Filter:
  • Cannot wait... (Score:5, Insightful)

    by dubist ( 2893961 ) on Tuesday March 18, 2025 @10:46AM (#65242299)

    There will be plenty of money in cleaning it up in a few years' time...

    • by supremebob ( 574732 ) <themejunky@geocC ... m minus caffeine> on Tuesday March 18, 2025 @10:57AM (#65242337) Journal

      On the bright side, it's probably making the job of pen testers very simple. The simple script kiddie attacks that stopped working in the mid-2000s will suddenly work again for a while, until the gen AIs "learn" how to write secure code. And by "learn" I mean they should probably stop scraping Stack Overflow comments and using them as a source of truth on how something should be done.

      • Re: (Score:3, Insightful)

        by DarkOx ( 621550 )

        A lot of the 2000s-type vulns had as much to do with the tooling as with the devs. It's hard to write a buffer overflow vulnerability in Python unless you are trying.
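A quick illustration of the point: Python bounds-checks list accesses, so the classic overflow write just raises an exception instead of stomping adjacent memory:

```python
# An out-of-range write in Python raises IndexError rather than
# silently corrupting adjacent memory the way an unchecked C write can.
buf = [0] * 8

try:
    buf[9] = 0x41  # attempted "overflow" write
except IndexError:
    print("write rejected; buffer untouched")

print(buf)  # still [0, 0, 0, 0, 0, 0, 0, 0]
```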

        That said, logic flaws are a lot more interesting and just as devastating. In the 2000s it was all about getting shellz, and I guess it still is in some circles (your state actors etc), but most threat actors are there for the $$$ and some bug where the same discount code can be applied 10 times, or you can order an extra slice of cheese for $0.10 with a ha

        • by mysidia ( 191772 ) on Tuesday March 18, 2025 @11:29AM (#65242427)

          Buffer overflows have just about always been the least of your worries for web-based apps.

          One of your most common issues is a failure to check permissions at all.

          For example: You have an endpoint named /viewinvoice.cgi?id=12345.

          Hackers guess that 12346 is probably also a valid invoice ID then query /viewinvoice.cgi?id=12346.

          Your web application was only designed to check that the user was logged in... there's no proper logic to prevent Customer A from viewing Customer B's invoices. User ID 4567 can change user 7890's password, etc.

          And that's before you even start looking at crafted cookie injection exploits, JavaScript injection, SQL injection, XSS, CSRF, etc., which can be rampant in your app if the AI does not know what kind of design is appropriate.
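A minimal sketch of the missing ownership check described above. The in-memory invoice table and function names here are made up for illustration, not any real framework API:

```python
# Sketch of an IDOR fix: verify object ownership, not just login state.
# INVOICES and current_user are hypothetical stand-ins for a real app's
# database and session layer.
INVOICES = {12345: {"owner": "customer_a", "total": 99.0},
            12346: {"owner": "customer_b", "total": 42.0}}

def view_invoice(invoice_id, current_user):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        return ("Not Found", 404)
    # The crucial check the vibe-coded app skips: does this invoice
    # actually belong to the logged-in user?
    if invoice["owner"] != current_user:
        return ("Forbidden", 403)
    return (invoice, 200)
```

With this check, Customer A guessing id=12346 gets a 403 instead of Customer B's invoice.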

          • Re:Cannot wait... (Score:4, Informative)

            by garcia ( 6573 ) on Tuesday March 18, 2025 @12:33PM (#65242565)

            I used to screen scrape jail registry records for county jails in my home area. Though the IDs weren't exactly sequential, doing groups of 50 would get hits for two of the local counties.

            What I found was that, while the website UI wouldn't show juvenile records, you could access them directly w/the ID. Surfacing it to the county took a day or so to find the right person, but they quickly closed the hole once notified. Who knows how many records were handed out to malicious actors over the years before I found it.

          • Buffer overflows have just about always been the least of your worries for web-based apps.

            In the early days (90s, early 2000s) when web apps were primarily implemented in memory-unsafe languages, buffer overflows were a big source of security vulnerabilities. Because computing resources (cycles and RAM, mostly) are relatively plentiful on the server side, we quickly shifted to mostly using memory-safe languages, which made the problems go away.

            Of course, people asking AI to write web apps now will ask for the apps to be written in the web server languages used now, which are memory-safe.

      • The source of truth does not matter.
        Can be Stackoverflow, /. or RTFM.

        What is your source of truth?

        God given intuition?

        • The source of truth does not matter.
          Can be Stackoverflow, /. or RTFM.

          Well okay, as long as your source actually is a source of truth. In that regard, online opinions are less reliable than edited or peer-reviewed publications with affirmations from readers.

          What is your source of truth?

          God given intuition?

          Intuition, wherever it comes from (not any god IMHO) is useful for finding what might be the truth, but one must then verify it with actual experience and/or other sources.

    • by haruchai ( 17472 )

      Vibe will make COBOL great again!

    • by dfghjk ( 711126 )

      There won't be any need for cleaning; none of these automatically generated apps will provide value worth fixing. This comes from the land of Juicero.

      What will need cleaning up is the VC mess left behind.

    • There will be plenty of money cleaning it up in a few years time...

      Yup. Nothin' like code literally no one wrote and probably no one understands.

    • Development in AI will not stop. It has really gotten the world's attention now. The promise of eliminating the expensive salaries of software developers is just too enticing. Tremendous amounts of money and energy will continue to be poured into AI research and development, if for that reason alone.

      The problems that exist with AI now will be focused-on and addressed. Are YOU confident that they are unsolvable, and that the world will always need lots of software engineers? Because statements of the fo

      • Blue-collar workers will be operating as prompt engineers for minimum wage.
        Unlikely, as "prompting" is nearly as challenging as programming.

        And a "prompt engineer" is not what you think it is: a prompt engineer is one who trains AIs. Hence the "engineer" in the job description. All the prompts the AI is "answering" are prompts a "prompt engineer" once trained it on.

      "Blue-collar workers will be operating as prompt engineers for minimum wage". If this is business software, won't the customer just write the software themselves? Why would they hire a band of unqualified vibers to produce anything? This might work for mobile games - expect even more crappy knockoff games than there are already.
      • by narcc ( 412956 )

        Development in AI will not stop.[...] Tremendous amounts of money and energy will continue to be poured into AI research and development

        Money alone does not guarantee success. The current approach, using LLMs, is an obvious dead-end but that hasn't stopped foolish investors from dumping truckloads of money into it.

    • Plenty of money for any wizard capable of actually cleaning up AI-generated code. This is basically building a technical debt bomb.

    You seem to be blissfully unaware of the pace at which AI - which is already quite usable, with notable productivity improvements - is actually improving. Prepare for incoming.

      • by ukoda ( 537183 )
        I think you are right about AI significantly improving the productivity of programmers, but I still think it will have a hard time reaching the point where it can replace programmers.

        The core problem is that a current AI will write a program that matches what the user asks for, whereas a good programmer writes a program that matches what the user needs. The key problem being that users often don't actually know what they need, but a good programmer can read between the lines, or ask the right questions, to work out
      • I've been in the industry for 20 years. Every few years some new technology comes out that industry leaders say will replace coders. So far, I've remained gainfully employed. It seems like we're always hiring more people.
    • Re:Cannot wait... (Score:5, Interesting)

      by lordDallan ( 685707 ) on Tuesday March 18, 2025 @02:39PM (#65242957)
      Or a few days?

      From this post on Masto: https://cloudisland.nz/@daisy/... [cloudisland.nz]

      Some guy on Twitter doing some grade a FAFO:

      my saas was built with Cursor, zero hand written code
      Al is no longer just an assistant, it's also the builder
      Now, you can continue to whine about it or start building.
      P.S. Yes, people pay for it
      4:34 am 15 Mar 2025 52.2K Views
      leo &
      @leojr94_

      guys, i'm under attack
      ever since I started to share how I built my Saas using Cursor
      random thing are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db
      as you know, I'm not technical so this is taking me longer that usual to figure out
      for now, I will stop sharing what I do publicly on X
      there are just some weird ppl out there
      9:04 am 17 Mar 2025 53.6K Views
      • by narcc ( 412956 )

        I wonder how long it will take him to figure out that it wasn't sharing his process publicly that led to "random things happening", but the garbage he produced in concert with a silly AI toy.

        Once again, LLMs can't write computer programs. Hell, they can't even balance parentheses. They can only generate text that looks like code. They have no capacity for reason or analysis. That's not what they do and not what they can do. This is why code generating LLMs need to make heavy use of external tools.
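For contrast, the bracket-balancing check the parent alludes to is a few lines of deterministic code, which is exactly why code-gen pipelines lean on external parsers and linters rather than trusting the model. A sketch:

```python
def balanced(text):
    """Stack-based bracket matching: trivially exact, unlike
    next-token prediction, which carries no such guarantee."""
    pairs = {"(": ")", "[": "]", "{": "}"}
    stack = []
    for ch in text:
        if ch in pairs:
            stack.append(pairs[ch])        # remember the expected closer
        elif ch in pairs.values():
            if not stack or stack.pop() != ch:
                return False               # wrong or unexpected closer
    return not stack                       # unclosed openers remain?
```

For example, `balanced("f(g(x))")` is True while `balanced("f(g(x)")` is False.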

  • This AI propaganda is getting ridiculous.

    with 25% having 95% of their code

    It's penetrated 1/4 of a niche market. We should surrender to the corporate AI gods. Please remember us when you are charging $20000/agent for something that doesn't even exist.

    • by dfghjk ( 711126 )

      The solution to criticism of how bad these apps will be is to squelch that criticism, something AI will be able to do and something its billionaire creators will focus on doing. AI will be able to create these apps, so long as AI gets to decide what the standards are for judging the apps created. You will buy it and you will like it.

        I don't know about all that, but when these things need to be fixed, that will produce data that may be used to make better models.

        This looks like Common A.I. Generated Bug Pattern [3.141, 7.534, -2.010] with 97% probability.
      • The solution to criticism of how bad these apps will be is to squelch that criticism, something AI will be able to do and something its billionaire creators will focus on doing. AI will be able to create these apps, so long as AI gets to decide what the standards are for judging the apps created. You will buy it and you will like it.

        When public opinion starts to rise against them, the companies will have AI bots to drown out the negative press too. It's a glorious crapflood apocalypse we're diving into now. I think I smell the first wave coming in now.

  • *cough*BS* (Score:5, Insightful)

    by Kisai ( 213879 ) on Tuesday March 18, 2025 @10:53AM (#65242321)

    I don't think they know what they are saying.

    They're letting 10 idiots code all the work of a team of 50-100, that is going to require 10,000 people years to fix once it breaks and nobody knows jack about it from the lack of documentation.

    • Re:*cough*BS* (Score:5, Insightful)

      by dfghjk ( 711126 ) on Tuesday March 18, 2025 @11:22AM (#65242405)

      Doesn't matter, the "right" people have made the money by then. This app will be discarded and the next con job will be underway.

      There will be no fixing of these apps, there will only be fixing of your attitude.

    • That sounds like a problem for another quarter.

      • ..and another programmer, management, and owner.

        I remember reading something titled "How to Get Bought by Microsoft" or something very similar in the 90s and I have been observing it ever since. Microsoft isn't the only, or even the biggest, pocket these days.
    • This is perfect for Internet of Shit devices.
    Why does it need to be 10 engineers? A single "manager" can ask all the same questions and get the same code. Or are they editing the code? I presume they deploy the code, but even then they have had so little input it hardly counts as engineering.
  • by mugnyte ( 203225 ) on Tuesday March 18, 2025 @10:55AM (#65242327) Journal
    They'll never know what it gets wrong, but a lucky customer will. And I am curious who will weave in bug fixes, new features, or a breaking-change upgrade. The AI? The vibe may be quite solemn indeed when the LLM cannot generate symbol sets for something it never trained on.
  • by MpVpRb ( 1423381 ) on Tuesday March 18, 2025 @10:59AM (#65242339)

    ...works great when making simple code that is similar to popular, published code.
    The prompt "write a snake game in python" works because snake games exist and are simple to make.
    Creating novel, large and complex code is a different problem.
    A very large codebase is too complex to fit in one human mind. No single person, even if smart and talented, knows every detail of how it works.
    If a single mind can't fully understand a complex system, it can't create a prompt to generate it.
    If it was possible, the prompt would be a multiple thousand page specification.
    I suspect that the code in the article is simple and common, probably me-too web apps or phone apps, ever so slightly different from existing apps.

    • by dfghjk ( 711126 )

      "Creating novel, large and complex code is a different problem."

      It isn't if you subscribe to bottom-up programming. Of course, only morons accept bottom-up programming, among them Agile-philes, but those people are the ones pushing this bullshit.

      "If a single mind can't fully understand a complex system, it can't create a prompt to generate it.
      If it was possible, the prompt would be a multiple thousand page specification."

      These are bad arguments. You do not need to know every detail to be effective at top-

      • You do need creativity, though. How much of that does AI have?

        In most benchmarks, more than humans.

        Don't know why this should be a surprise to you.
        There are a couple of things to look at here.
        1) Creativity that exists within the data. It can take the progress of human science decades to piece together an obvious fact from two bits of data, like Special Relativity. For humans, it's hard to even see connections that were always there and obvious.
        2) temperature.
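For anyone unfamiliar with point 2: "temperature" is the knob that rescales a model's logits before sampling; higher values flatten the distribution (more varied, arguably more "creative" output), while values near zero approach greedy argmax. A toy sketch of the mechanism, not any particular model's API:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Softmax sampling with temperature: divide logits by T, then
    sample from the resulting distribution. T -> 0 approaches argmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw an index according to the probabilities
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

At very low temperature the highest logit wins essentially every time; at high temperature the choices spread out across all tokens.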

    • by Ksevio ( 865461 )

      There's lots of coding done that's simple and easy for AI to write. For that sort of stuff you might as well have the AI churn out the basic parts while the human coders handle the big picture and complex parts

  • They are claiming they can have 10 developers do the work of 100....

    But what if that's not because of AI, but simply because those 10 coders are actually working at full capability?

    After all, Twitter reduced headcount by over 80%, and not only kept functioning but started adding more features. They were not using AI tools to achieve this, they simply had tons of coders not doing much!

    Maybe "vibe coding" is nothing more than finding a small number of developers that are efficient and actually work most of t

  • quality. (Score:5, Interesting)

    by Virtucon ( 127420 ) on Tuesday March 18, 2025 @11:10AM (#65242363)

    LLMs can't reason; they can only predict what you're asking for, try to match it up against what the model has, and provide what it thinks is an answer.

    Can you generate code with it? Yes. Is it the code you want? Maybe. Is it quality code, with no bugs? Probably not.

    Will you have to have an actual professional software developer fix it? Yes.

    LLMs trained on examples don't have an understanding of anything, only a prediction path. It's time we stop pushing the fallacy that they are somehow better than experienced professionals at anything; they only generate fakes.

    • by mysidia ( 191772 )

      Yes. Is it the code you want? maybe. Is it quality code, with no bugs? Probably not.

      One of these days the devs whose code they are using is going to find out and start issuing Copyright claims against the companies doing AI code completion AND their customers.

      I've had mixed emotions about copyright around code. Now, an overall solution, look and feel, trademarks, and certainly patents all apply. But if I take something off of a website where you've published a code sample or even an entire solution, it should be labelled as such, and attribution certainly given if reused.
        Code that's GPL'd would probably apply here as well.
        I guess that's why there's arguments around copyright and AI but that's for legal scholars and politicians to argue.

        • by mysidia ( 191772 )

          if I take something off of a website where you've published a code sample or even an entire solution, it should be labelled as such, and attribution certainly given if reused.

          Attribution only satisfies the author's moral rights. The author of computer program code also has the exclusive right to commercially exploit their writings, and attributing it does not make it legal for someone else to do so.

          Sample code off the internet is generally for your learning or personal use only; not legal to copy and paste

            Does it? I mean, without getting into the legal ramifications of LLM data mining: not attributing where the code comes from is problematic, but does it violate copyright? If you label your work as copyrighted, then the answer would be yes. If you contribute an answer to Stack Exchange, then Stack Exchange still respects the original author's copyright and uses the CC BY-SA license. None of that matters much when you have a giant bot just pulling in data, not caring who contributed it and why.
            I don't think this part

      • by Ksevio ( 865461 )

        That would probably lead to the end of the software industry, if proving that a particular code segment was influenced by another code segment made them liable for copyright infringement.

        • copyright infringement is weak-sauce vs patent infringement and the big players have both large patent portfolios and lawyers on retainer

          even if you are brilliant, and surprise the industry so as to get your own patents, they will surround your patent with theirs - the fight is futile - there is only the acquisition
    • instead of writing a proper argument is it a great idea to write a list of silly questions? probably ...
  • by dfghjk ( 711126 ) on Tuesday March 18, 2025 @11:15AM (#65242381)

    "...they will literally do the work of 10 or 100 engineers in the course of a single day."

    As long as that work is shitty work, as long as the expectations of the app are low enough, as long as quality of software continues its trend downward as this will ensure.

    "According to Tan, 81% of Y Combinator's current startup batch consists of AI companies, with 25% having 95% of their code written by large language models."

    That's not good news, it's a condemnation of Silicon Valley greed and billionaire tech bros.

    "Despite limitations in debugging capabilities, Tan said the technology enables small teams to perform work previously requiring dozens of engineers and makes previously overlooked niche markets viable for software businesses."

    Real programmers know what a vital and time-consuming role debugging plays; this tells you all you need to know. This Tan guy does not know software development.

  • by sdinfoserv ( 1793266 ) on Tuesday March 18, 2025 @11:20AM (#65242397)
    C-levels absolutely do not care about long-term maintenance. Their sole focus is this quarter's stock price. When you can replace hundreds or thousands of dead-weight, liability-ridden staff with a CaaS (coder as a service), your costs drop, your margin increases, and your stock price rises - with zero capital outlay and no increase in sales or productivity. This doesn't even take into account overhead like managers, HR, office space, equipment, and supplies, all of which can be reduced. Stock price is how CEO pay is determined, and owners, aka shareholders, will love it. Stock almost always rises on news of layoffs. Automating expensive knowledge workers has been the nirvana for AI offerings since the concept began. This is a shot across the bow for the end of "programming" as a trade. I'm going to say it again: get out while you are still in control of your destiny. Once you're laid off with one of the hordes, the markets will be flooded and finding jobs ~ any jobs ~ will be tough.
    Look what happened to automotive mechanics. Long gone are the days of listening, smelling, observing, touching and thinking. The "art" of car repair has been reduced to plugging in a computer and replacing the parts it tells you to until the problem vanishes.
    • I have no points to upvote you, but I would do it 100x if I could.

      What I see here is an incredible number of people looking the other way and singing 'lalala' at an obvious omen of their own demise.

    • This is true. I think people should be ready for a bunch of layoffs in the near term.

      However, over the longer term, I'd be surprised if there aren't some pretty good opportunities to service or rewrite codebases that companies actually need..

  • by Somervillain ( 4719341 ) on Tuesday March 18, 2025 @11:21AM (#65242401)
    Generating code that works is EASY. Generating code that works well?...that's what software engineers spend decades mastering. There is precedent for this: easy-to-use programming languages and tools. Visual Basic is the first that comes to mind. I honestly never used the product, but it's a useful data point. It's been around since before I began my career and I even used to see it here or there when I started. The press seemed to like it, but everyone I talked to hated it. More importantly, it got largely destroyed by the web. However, those who used it complained that some business analyst would cobble together spaghetti code and hand it off to someone else to largely rewrite once it stopped scaling and had too many issues.

    In my career-span, there was Ruby on Rails...suddenly lots of idiots were Ruby developers, telling me how old and stupid I was for not using it. I missed that fad because people were paying me much better to work in Java. However, every developer found it easy to create some forms, but as soon as business requirements kicked in, they had to abandon the Rails part...and maintenance was horrible. Also, the performance was shit, so yeah...you got a cool prototype really fast...so long as you ignored the actual business requirements and didn't want it to scale. Every RoR app I ever saw was replaced by a more conventional Java+JavaScript stack...or if they were low-budget, node.js.

    Both are now largely gone from the landscape....even Groovy on Grails, which had excellent Java integration, is largely gone. Why? Creating is easy, maintaining is hard.

    If you like bugs and bloat? Let inexperienced "vibe" coders give you a sloppy prototype riddled with errors and security violations. Let's see how that plays out. I think it's VB all over again...but would love to be proven wrong and somehow these overpriced AI vendors figured out how to get machines to maintain code and write code that is secure and well-written from the start. It can be done, in theory.

    However, the reason I am skeptical is that if they could do it, they'd make a LOT more money porting existing Python, Ruby, VB, ASP, and COBOL apps to Java or Rust that looks like it was written by elite developers. That would make a TON of money, but you'd quickly know if it was successful or not. Hell, all the major players have all sorts of legacy code that would benefit from this.

    MS has invested a fuckton of money into AI projects. Imagine an AI CLR that converted your slowish C# code to tiny, fast Rust code behind the scenes, cut your Azure electricity spend by half, and halved your response time? That would be a license to print money. Imagine AI that could find all security violations and submit PRs for your developers to review?...that would be a HUGE source of recurring revenue....just so long as it worked. I think the shit they have doesn't actually work.
  • "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

    Applying this logic to LLMs used by people who can't code at all is left as an exercise for..., well, probably for your pet LLM.

    • logic?

      There's no logic to that whatsoever.
      There's an assertion, and then there's fallacious logic (begging the question) working off of it.
      • Man, the LLMs are getting testy today.
        • If our world is now one where we can't recognize fallacious logic unless an LLM points it out to us, then we're more fucked than I thought.
          • by abulafia ( 7826 )
            The problem with LLMs is that they never shut up.
            • To the contrary, it takes nothing but a stop token to shut one up.
              The problem with humans, is that they form beliefs around stupid fucking phrases masquerading as wisdom, being too fucking lazy or stupid to rigorously apply anything approaching logic to determine if that was smart or not.
              • by abulafia ( 7826 )
                Another odd thing about them is short-term fixation on topics. If you don't have a better way to deal with them, restarting them at least offers a blank slate.
                • I don't think that's odd at all. Short-term fixation is as fundamentally human as lying, or being wrong about shit.
  From my attempts at coding with ChatGPT, I have a hard time believing you can build anything complex enough with this method, but okay. But then when you have a requirement change later on, what do you do? AIs are notoriously bad at taking something that exists and making a small modification (just ask the graphics people who try to regenerate AI art with some small modifications); and nobody understands the code, since nobody wrote it. How does that unmaintainable code work for you?
    AIs are notoriously bad at taking something that exists and making a small modification (just ask the graphics people who try to regenerate AI art with some small modifications); and nobody understands the code, since nobody wrote it. How does that unmaintainable code work for you?

      Complete unadulterated bullshit.
      LLMs are absolutely excellent at making modifications to code.
      You slam what you need in the context window, tell it what you want changed, and tell it to give you the full result, or a diff if you want.
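For readers who haven't seen the diff-based workflow: the unified-diff format an LLM can be asked to emit is the same one Python's difflib produces. The snippet below just generates such a diff locally (no LLM involved) to show the shape of the output:

```python
import difflib

# Two versions of a tiny function: before and after a small edit.
old = ["def greet(name):\n",
       "    print('Hello ' + name)\n"]
new = ["def greet(name):\n",
       "    print(f'Hello {name}')\n"]

# unified_diff yields header lines, a hunk marker, and +/- change lines.
diff = list(difflib.unified_diff(old, new,
                                 fromfile="before.py", tofile="after.py"))
print("".join(diff))
```

Applying (or reviewing) a change expressed this way is mechanical, which is what makes the "give me a diff" request practical.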

      From my attempts at coding with ChatGPT, I have a hard time believing you can build anything complex enough with this method, but okay.

      Given the above, I'm 99% sure you're completely full of shit.

  • The Luddites weren't necessarily against the machines they sometimes wrecked, but rather the downward pressure on wages and product quality.

    Parallels to this in /. comments are left as an exercise for the reader.
  • by SoCalChris ( 573049 ) on Tuesday March 18, 2025 @11:32AM (#65242437) Journal

    I saw this thread on reddit about a month ago. Guy was using an AI tool to program like this, and lost 4 months of work when the AI went nuts and deleted everything. He'd never even heard of git. Then a bunch of other AI coders started jumping in telling him that he needs to use git, and keep copies of all of his code in different folders at milestones so that he has an extra backup. Not one of them had any clue what they were talking about.

    This all happened in an AI coding subreddit, I saw it linked from /r/programminghorror. This thread made me feel much more secure in my job lol.

    https://www.reddit.com/r/curso... [reddit.com]

  • Adversarial coding may let us know when this approach is good enough for real-world use.

    Team A: A few humans + AI writing code.
    Team B: A few humans + AI looking for problems with the code.
    Team C: Enough good/experienced humans to really pick apart the code and find all but the most obscure serious issues.

    When Team B gets as good as Team C, then we can talk about "a few humans + AI writing code" for real-world projects.

    Until then, you may want to stick with Team D: Enough good/experienced humans to write

  Someone should check these companies' code bases. Because I can personally say, as someone who has been using ChatGPT to assist work on a few personal projects, that it fucks up ALLLLLL the time. Just enter in a few hundred lines of code and ask it to reprint that code back to you. About half the time it will leave out small bits or entire chunks of code here and there. Don't even get me started on the coding. If you have any ambiguity in your questions/instructions you are going to get a best guess answe

  • AI tools create Write-Only code. That is, it performs the purpose intended - with a few random bugs, and security exploits - but when you need to modify anything, you start over completely.

    Most developers could increase their productivity if they could write code with no thought to maintainability. There's even a guide: How to Write Unmaintainable Code [github.com] - which the AI, no doubt, has been trained on.

    When I was in college, I took a course in assembly. Recognizing that the instructors were providing psu

    • AI tools create Write-Only code. That is, it performs the purpose intended - with a few random bugs, and security exploits - but when you need to modify anything, you start over completely.

      Wrong.
      Yet another post on LLMs, yet another complete falsehood from you.

      LLMs will gladly format code however you want, make iterative changes to it, give it to you as diffs, or entire files. Whatever the fuck you want. They output well-commented and readable code.

      You're not wrong about the random bugs and security exploits, though. That's very much real.

  • Only an out-of-touch executive would think a software system just needs "coding" to implement it. That's the least of the job. The more important aspect is designing the system: figuring out how functional modules should be organized and how they should interact.

  Serious question. I've only tried the free ones, but I've tried the GitHub one, ChatGPT, and Microsoft's AI to do some basic Python coding with Flask, literally hello-world shit, because I hadn't written a Python Flask app before. And every time it spit out code that was almost, but not quite, entirely unlike tea.

    That is to say, it gave me something that looked like it was supposed to work but was never going to work, because it was hopelessly, hopelessly out of date. I mean like 10 years out of date.

    May
    • With older models, and models that I was limited to with memory constraints, what you describe was a constant problem I had.
      Lots of hallucinated modules, or module interfaces, or modules that nobody used or maintained anymore.

      Recently, I've been using Qwen's Coder fine-tune for its mix of speed and quality. It works excellently.
      Sometimes, I bust out bigger models- particularly reasoning models- if I've got a tough nut to crack and I want it to really take a shot at it. This also works excellently.
      Of co
      • To get 128 GB of VRAM? Is it multiple cards? I'm genuinely curious now and wondering if it's worth setting up my own system.
        • M4 Max MacBook Pro. Previously, an M1 Max MacBook Pro (64GB)
          Soon, you'll be able to get yourself an AMD rig that can do the same, but at significantly lower performance (though a healthy bit cheaper, most likely- still not cheap however)
            • Christ that's some old school workstation pricing. You must have paid at least 8 grand for that. I guess if you're using it, it's worth it, though. The AMD equivalent you're talking about is about $2,300; not sure how it'll actually stack up in the real world, though. Those are the initial launch prices; there might be cheaper options available later, but I'm not so sure. AMD seems to be keeping that technology locked into super-expensive laptops in order to keep prices high.
            • Christ that's some old school workstation pricing. You must have paid at least 8 grand for that.

              Yup. 7 for the one it replaced.

              The AMD equivalent you're talking about is about $2,300; not sure how it'll actually stack up in the real world, though. Those are the initial launch prices; there might be cheaper options available later, but I'm not so sure. AMD seems to be keeping that technology locked into super-expensive laptops in order to keep prices high.

              Yup :/
              I do hope they sort that shit out and provide some actual competition for Macs in this space. They've got all the tools they need to do so.
              There are certain quirks to the Mac (lack of BF16 support, meaning I have to convert BF16 models to FP16, lack of Metal support for some of the more advanced LLM fine-tuning tools) that I'd love to not have to deal with.
              That, and I can imagine buying 2-4 of the things and setting up an LLM cluster. Since you're only moving around c
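As an aside on the BF16-to-FP16 conversion mentioned above: the range gap between the two formats is easy to demonstrate with nothing but Python's struct module (a sketch of the numerics only; nothing here is Metal- or Mac-specific):

```python
import struct

def to_bf16(x: float) -> float:
    """Round-trip through bfloat16: keep only the top 16 bits of the
    float32 representation (truncation; real converters also round)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def to_fp16(x: float) -> float:
    """Round-trip through IEEE-754 half precision (struct's 'e' format)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# bf16 keeps float32's 8-bit exponent, so huge values survive (coarsely):
print(to_bf16(1e30))

# fp16 has a 5-bit exponent and tops out near 65504, so the same value overflows:
try:
    to_fp16(1e30)
except (OverflowError, struct.error) as exc:
    print("fp16 overflow:", exc)
```

bfloat16 trades mantissa bits for float32's full exponent range, which is why a model's bf16 weights can hold values that a blind cast to fp16 would turn into overflow; any conversion step has to watch for that.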

  • "81% of Y Combinator's current startup batch consists of AI companies" so clearly this guy is motivated to push the narrative, but I don't think he is very far off the mark. Maybe 10 guys won't replace 100 yet but there definitely is movement in that direction.

    I use Windsurf (basically a front-end for Claude Sonnet) and it has boosted my productivity quite a bit. It is particularly useful in situations where I'm not very familiar with the programming language or the APIs I need to work with. You have

  • that the vibe coder was told by the AI code generator 'Learn how to program, it looks like I'm doing your homework'? Yeah, no way that code will be maintainable, much less will those "coders" be able to document it or explain what it's doing.

    I started working as a programmer/database developer about 40 years ago. I remember some 20 years ago talking to one of my first programming instructors. She was no longer teaching systems analysis because that wasn't what students wanted to learn. They wanted to
  • There's some value in mocking up an app quickly to test if there's a market for it, but once you establish that there is a market and you need to scale, what are you going to do? Hire professional developers who know what they're doing. And when they look at the crazy gibberish your LLM cranked out, they're going to add a zero to the end of their quote.
  • Let's just be honest. Most startups (not all) are 100% focused on building something as fast as possible and then unloading it in a big sale to some much larger organization at a premium. And in many acquisitions the programming, engineering, and security people from the acquiring company are not allowed to review large amounts of source code to determine all the future losses due to high technical debt. All that just gets in the way of the sale! You hope as the acquirer you can use your internal talent plus
  • Code does not have to be elegant, robust, or even particularly efficient. In fact, your code can be horribly nonperformant and modern hardware will serve it up fine. It might take 20x the clock cycles it ought to, but cycles are cheap. All it has to do is pass the tests. Just don't do code reviews.

    The traditional-coding pragmatic perfectionist in me hates that... but what can you do? The question is not whether the generated code is good; it's whether it is good enough.
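That "good enough" bar is easy to make concrete: a quadratic and a linear implementation are indistinguishable to a functional test, so only profiling (or a code review) tells them apart. A small sketch (function names are invented for illustration):

```python
import timeit

def has_duplicates_naive(items):
    """O(n^2): the kind of nested loop a code generator happily emits."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_fast(items):
    """O(n): what a reviewer would ask for instead."""
    return len(set(items)) != len(items)

data = list(range(2000))  # no duplicates, so both must scan everything

# Functionally identical -- a test suite can't tell them apart...
assert not has_duplicates_naive(data)
assert not has_duplicates_fast(data)

# ...but the cycle counts differ by orders of magnitude.
slow = timeit.timeit(lambda: has_duplicates_naive(data), number=1)
fast = timeit.timeit(lambda: has_duplicates_fast(data), number=1)
print(f"naive: {slow:.4f}s  fast: {fast:.4f}s")
```

Both pass; only one burns the 20x (or 2000x) cycles, and nothing in the test output says so.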

  • just because you can, doesn't mean you should
  • Waiting for the AI tool that generates startup companies in just a few minutes. Talking points, slides, the entire pitch, plus the code that runs the MVP. One or two slippery "customers" and the VC money will be pouring in.

  • "COBOL statements have prose syntax such as MOVE x TO y, which was designed to be self-documenting and highly readable to non-programmers such as management"

    Though COBOL never stole code from anyone, and AI's purveyors did.
    https://en.wikipedia.org/wiki/... [wikipedia.org]
  • Y Combinator CEO Garry Tan said startups are reaching $1-10 million annual revenue with fewer than 10 employees due to "vibe coding," a term coined by OpenAI cofounder Andrej Karpathy in February.

    Startups. These are going to be programs that the public will never use. They're hype generators, meant to wow an investor group, get funded, then disappear into the ether.

    If, by some strange miracle, any of these "vibe coded" programs ever makes it to anything resembling production, look for real developers to be hired, and for them to scream bloody fucking murder about what a mess this code is. Testing cycles will be astronomical, and debugging will take days per simple function just to understand what the fuck

  • When value is not proven, it's much better to do it cheaply and quickly, regardless of how much tech debt you accrue.

    There's a fairly high chance the company will fail in any case.

    But once you're somewhat established and value was proven, it won't be so easy, and this transition from startup to established player may end up making the chasm companies have to cross much, much larger.

    I guess we'll know where the sweet spot is in a couple of years.

  • Given that this is Y Combinator we're talking about, all the prompts were probably of the form:

    "A system like X, except for Y"

    where X = {Uber, Grubhub, Facebook, ...}
    and Y is a niche currently-unmonetized domain.

    I will believe it when even one of those applications gets any traction in the real world, and doesn't get immediately crashed or owned.

  • I think the intent here is really a comment on teams, not developers per se. I get the term, but I suspect it's more commentary on leveling up any role (dev, ops, manager, etc.). Saying '50 "is less performant than" vibing(10)' needs more context. Not all team members are equal in skill nor equal in responsibility (thus they don't even have a position/perspective to make a real impact). If 10 roles can "power up" and keep focus/influence outcomes, then I agree. If it's just the same few power players, th
  • 10 people can do the job of 50 or 100? Well, yes, because 10-20% of the workforce are the heavy-hitting engineers and the others are Wally from Dilbert. If you look at what I work on in a given week or month, you'd be surprised that 10 people aren't working on it. Is this new? No.

    I've been in the startup space for 10+ years, and every single company I have ever worked for, outside of RIM (Blackberry), had far too few engineers, and had a ton of bloat, and useless employees. I've worked at startups w
  • I'm currently getting a legacy Angular application under control and expanding it to meet updated and new requirements. I finally got into using AI to assist me. My boss gave me access to a ChatGPT 4o subscription.

    The hype is real and justified.

    It's mostly a well-educated, committed computer expert and an API documentation I can chat with, with solid knowledge of edge cases and pitfalls. Think rubber-duck debugging, but the rubber duck is a senior webdev with expert knowledge in every widespread technol

  • How much of an increase do they get in wages?

  • The claims about these systems doing "reasoning" and "inferencing" are exactly the same claims about it "knowing" or "thinking" or "being intelligent".

    Nobody agrees on what any of those words "really" mean; it's fairly pointless to debate. The so-called "AI" does *something*, and some people find it to be useful, to varying degrees. All the words trying to describe it are for purposes of marketing, which is generally way overblown.

    You can compare the output of the AI to the output from humans, and you can s

  • If you think 10 people can do the work of 100, you should pay each worker what they're worth. Trimming staff and explaining away the cuts as "AI can handle the load" is the new "let's fix the shitty singer with autotune during mixdown". Sometimes it might work, but garbage in, garbage out.
