
'Vibe Coding' is Letting 10 Engineers Do the Work of a Team of 50 To 100, Says YC CEO (businessinsider.com)
Y Combinator CEO Garry Tan said startups are reaching $1-10 million annual revenue with fewer than 10 employees due to "vibe coding," a term coined by OpenAI cofounder Andrej Karpathy in February.
"You can just talk to the large language models and they will code entire apps," Tan told CNBC (video). "You don't have to hire someone to do it, you just talk directly to the large language model that wrote it and it'll fix it for you." What would've once taken "50 or 100" engineers to build, he believes can now be accomplished by a team of 10, "when they are fully vibe coders." He adds: "When they are actually really, really good at using the cutting edge tools for code gen today, like Cursor or Windsurf, they will literally do the work of 10 or 100 engineers in the course of a single day."
According to Tan, 81% of Y Combinator's current startup batch consists of AI companies, with 25% having 95% of their code written by large language models. Despite limitations in debugging capabilities, Tan said the technology enables small teams to perform work previously requiring dozens of engineers and makes previously overlooked niche markets viable for software businesses.
"You can just talk to the large language models and they will code entire apps," Tan told CNBC (video). "You don't have to hire someone to do it, you just talk directly to the large language model that wrote it and it'll fix it for you." What would've once taken "50 or 100" engineers to build, he believes can now be accomplished by a team of 10, "when they are fully vibe coders." He adds: "When they are actually really, really good at using the cutting edge tools for code gen today, like Cursor or Windsurf, they will literally do the work of 10 or 100 engineers in the course of a single day."
According to Tan, 81% of Y Combinator's current startup batch consists of AI companies, with 25% having 95% of their code written by large language models. Despite limitations in debugging capabilities, Tan said the technology enables small teams to perform work previously requiring dozens of engineers and makes previously overlooked niche markets viable for software businesses.
Cannot wait... (Score:5, Insightful)
There will be plenty of money cleaning it up in a few years' time...
Re:Cannot wait... (Score:5, Insightful)
On the bright side, it's probably making the job of pen testers very simple. The simple script-kiddie attacks that stopped working in the mid-2000s will suddenly work again for a while, until the gen AIs "learn" how to write secure code. And by "learn" I mean they should probably stop scraping Stackoverflow comments and using them as a source of truth on how something should be done.
Re: (Score:3, Insightful)
A lot of the 2000s type vulns had as much to do with the tooling as with the devs. Hard to write a buffer overflow vulnerability in Python unless you are trying.
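To make the point concrete, here's a minimal sketch (plain CPython; a bytearray stands in for the fixed-size buffer):

```python
# Why classic buffer overflows don't happen in Python: an out-of-bounds
# write raises an exception instead of silently corrupting memory.
buf = bytearray(8)       # fixed-size buffer, 8 bytes

try:
    buf[8] = 0x41        # one past the end -- C would happily write here
except IndexError as e:
    print("caught:", e)  # IndexError: bytearray index out of range
```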
That said, logic flaws are a lot more interesting and just as devastating. In the 2000s it was all about getting shellz, and I guess it still is in some circles (your state actors etc.), but most threat actors are there for the $$$, and some bug where the same discount code can be applied 10 times, or you can order an extra slice of cheese for $0.10 with a ha
Re:Cannot wait... (Score:5, Insightful)
Buffer overflows have just about always been the least of your worries for web-based apps.
One of the most common issues is not bothering to check permissions at all.
For example: You have an endpoint named /viewinvoice.cgi?id=12345.
Hackers guess that 12346 is probably also a valid invoice ID, then query /viewinvoice.cgi?id=12346.
Your web application was only designed to check that the user was logged in... there's no proper logic to prevent Customer A from viewing Customer B's invoices. User ID 4567 can change user 7890's password, etc.
And that's before you even start looking at crafted-cookie injection exploits, JavaScript injection, SQL injection, XSS, CSRF, etc., which can be rampant in your app if the AI does not know what kind of design is appropriate.
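To make the missing check concrete, here's a minimal Flask sketch of that endpoint; the toy data store and header-based "session" are stand-ins made up for illustration, not any real app's code:

```python
# Minimal sketch of the IDOR above. Only the ownership check matters here;
# the data store and "session" lookup are illustrative stubs.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# toy data store: invoice id -> owning customer id
INVOICES = {12345: {"customer_id": 4567, "total": "99.00"},
            12346: {"customer_id": 7890, "total": "12.50"}}

def logged_in_customer_id():
    # stand-in for real session handling
    return int(request.headers.get("X-Customer-Id", 0))

@app.route("/viewinvoice")
def view_invoice():
    invoice = INVOICES.get(int(request.args.get("id", 0)))
    if invoice is None:
        abort(404)
    # The vulnerable version stops here: "logged in" is treated as enough,
    # so Customer A can read Customer B's invoice by guessing IDs.
    # The fix is a single ownership comparison:
    if invoice["customer_id"] != logged_in_customer_id():
        abort(403)
    return jsonify(invoice)
```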
Re:Cannot wait... (Score:4, Informative)
I used to screen scrape jail registry records for county jails in my home area. Though the IDs weren't exactly sequential, doing groups of 50 would get hits for two of the local counties.
What I found was that, while the website UI wouldn't show juvenile records, you could access them directly with the ID. Surfacing it to the county took a day or so of finding the right person, but they quickly closed that hole. Who knows how many records were handed out to malicious actors over the years before I found it.
Re: (Score:3)
So what happened after you posted the complete list in the lunchroom? :-)
Re: (Score:2)
Buffer overflows have just about always been the least of your worries for web-based apps.
In the early days (90s, early 2000s) when web apps were primarily implemented in memory-unsafe languages, buffer overflows were a big source of security vulnerabilities. Because computing resources (cycles and RAM, mostly) are relatively plentiful on the server side, we quickly shifted to mostly using memory-safe languages, which made the problems go away.
Of course, people asking AI to write web apps now will ask for the apps to be written in the web server languages used now, which are memory-safe.
Re: (Score:2)
that's not how any of this works.
Re: (Score:1)
The source of truth does not matter. Can be Stackoverflow, /. or RTFM.
What is your source of truth?
God given intuition?
Re: (Score:2)
The source of truth does not matter. Can be Stackoverflow, /. or RTFM.
Well okay, as long as your source actually is a source of truth. In that regard, online opinions are less reliable than edited or peer-reviewed publications with affirmations from readers.
What is your source of truth?
God given intuition?
Intuition, wherever it comes from (not any god IMHO) is useful for finding what might be the truth, but one must then verify it with actual experience and/or other sources.
Re: (Score:3)
Vibe will make COBOL great again!
Re: (Score:3)
There won't be any need for cleaning; none of these automatically generated apps will provide enough value to be worth fixing. This comes from the land of Juicero.
What will need cleaning up is the VC mess left behind.
Re: (Score:2)
There will be plenty of money cleaning it up in a few years' time...
Yup. Nothin' like code literally no one wrote and probably no one understands.
Maybe (Score:2)
Development in AI will not stop. It has really gotten the world's attention now. The promise of eliminating the expensive salaries of software developers is just too enticing. Tremendous amounts of money and energy will continue to be poured into AI research and development, if for that reason alone.
The problems that exist with AI now will be focused on and addressed. Are YOU confident that they are unsolvable, and that the world will always need lots of software engineers? Because statements of the fo
Re: (Score:2)
Blue-collar workers will be operating as prompt engineers for minimum wage.
Unlikely. As "prompting" is nearly as challenging as programming.
And a "prompt engineer" is not what you think it is: a prompt engineer is one who is training AIs. Hence the "engineer" in the job description. All the prompts the AI is "answering" to, are prompts a "prompt engineer" once trained to them.
Re: (Score:2)
Development in AI will not stop.[...] Tremendous amounts of money and energy will continue to be poured into AI research and development
Money alone does not guarantee success. The current approach, using LLMs, is an obvious dead-end but that hasn't stopped foolish investors from dumping truckloads of money into it.
Re: (Score:3)
Plenty of money for any wizard capable of actually cleaning up AI-generated code. This is basically building a technical debt bomb.
Famous last words. (Score:2)
You seem to be blissfully unaware of the pace at which AI - which is already quite usable, with notable productivity improvements - is actually improving. Prepare for incoming.
Re: (Score:2)
The core problem is that a current AI will write a program that matches what the user asks for, whereas a good programmer writes a program that matches what the user needs. The key problem being that users often don't actually know what they need, but a good programmer can read between the lines, or ask the right questions, to work out
Re:Cannot wait... (Score:5, Interesting)
From this post on Masto: https://cloudisland.nz/@daisy/... [cloudisland.nz]
Some guy on Twitter doing some grade-A FAFO:
my saas was built with Cursor, zero hand written code
AI is no longer just an assistant, it's also the builder
Now, you can continue to whine about it or start building.
P.S. Yes, people pay for it
4:34 am 15 Mar 2025 52.2K Views
leo &
@leojr94_
guys, i'm under attack
ever since I started to share how I built my Saas using Cursor
random thing are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db
as you know, I'm not technical so this is taking me longer that usual to figure out
for now, I will stop sharing what I do publicly on X
there are just some weird ppl out there
9:04 am 17 Mar 2025 53.6K Views
Re: (Score:2)
I wonder how long it will take him to figure out that it wasn't sharing his process publicly that led to "random things happening", but the garbage he produced in concert with a silly AI toy.
Once again, LLMs can't write computer programs. Hell, they can't even balance parentheses. They can only generate text that looks like code. They have no capacity for reason or analysis. That's not what they do and not what they can do. This is why code-generating LLMs need to make heavy use of external tools.
Brought to you by AI-Is there anything it cant do? (Score:2)
with 25% having 95% of their code
It's penetrated 1/4 of a niche market. We should surrender to the corporate AI gods. Please remember us when you are charging $20,000/agent for something that doesn't even exist.
Re: (Score:3)
The solution to criticism of how bad these apps will be is to squelch that criticism, something AI will be able to do and something its billionaire creators will focus on doing. AI will be able to create these apps, so long as AI gets to decide what the standards are for judging the apps created. You will buy it and you will like it.
Re: (Score:2)
This looks like Common A.I. Generated Bug Pattern [3.141, 7.534, -2.010] with 97% probability.
Re: (Score:2)
The solution to criticism of how bad these apps will be is to squelch that criticism, something AI will be able to do and something its billionaire creators will focus on doing. AI will be able to create these apps, so long as AI gets to decide what the standards are for judging the apps created. You will buy it and you will like it.
When public opinion starts to rise against them, the companies will have AI bots to drown out the negative press too. It's a glorious crapflood apocalypse we're diving into now. I think I smell the first wave coming in now.
*cough*BS* (Score:5, Insightful)
I don't think they know what they are saying.
They're letting 10 idiots code all the work of a team of 50-100, which is going to require 10,000 person-years to fix once it breaks and nobody knows jack about it, given the lack of documentation.
Re:*cough*BS* (Score:5, Insightful)
Doesn't matter; the "right" people will have made their money by then. This app will be discarded and the next con job will be underway.
There will be no fixing of these apps, there will only be fixing of your attitude.
Re: (Score:3)
That sounds like a problem for another quarter.
Re: (Score:2)
I remember reading something titled "How to Get Bought by Microsoft" or something very similar in the 90s, and I have been observing it ever since. Microsoft isn't the only, or even the biggest, pocket these days.
YDKWYDK (Score:3)
Using AI to write software... (Score:4, Interesting)
...works great when making simple code that is similar to popular, published code.
The prompt "write a snake game in python" works because snake games exist and are simple to make.
Creating novel, large and complex code is a different problem.
A very large codebase is too complex to fit in one human mind. No single person, even if smart and talented, knows every detail of how it works.
If a single mind can't fully understand a complex system, it can't create a prompt to generate it.
If it was possible, the prompt would be a multiple thousand page specification.
I suspect that the code in the article is simple and common, probably me-too web apps or phone apps, ever so slightly different from existing apps.
Re: (Score:2)
"Creating novel, large and complex code is a different problem."
It isn't if you subscribe to bottom-up programming. Of course, only morons accept bottom-up programming, among them Agile-philes, but those people are the ones pushing this bullshit.
"If a single mind can't fully understand a complex system, it can't create a prompt to generate it.
If it was possible, the prompt would be a multiple thousand page specification."
These are bad arguments. You do not need to know every detail to be effective at top-
Re: (Score:2)
You do need creativity, though. How much of that does AI have?
In most benchmarks, more than humans.
Don't know why this should be a surprise to you.
There are a couple of things to look at here.
1) Creativity that exists within the data. It can take human science decades to piece together an obvious fact from two bits of data, like special relativity. It's hard for us to even see connections that were always there and obvious.
2) temperature.
Re: (Score:2)
There's lots of coding done that's simple and easy for AI to write. For that sort of stuff you might as well have the AI churn out the basic parts while the human coders handle the big picture and the complex parts.
Or is this just rightsizing (Score:1, Insightful)
They are claiming they can have 10 developers do the work of 100....
But what if that's not because of AI, but simply because those 10 coders are actually working at full capability?
After all, Twitter reduced headcount by over 80%, and not only kept functioning but started adding more features. They were not using AI tools to achieve this; they simply had tons of coders not doing much!
Maybe "vibe coding" is nothing more than finding a small number of developers that are efficient and actually work most of t
Re: (Score:2)
Haha you buried the sad troll.
quality. (Score:5, Interesting)
LLMs can't reason; they can only predict what you're asking for, try to match it against what the model has, and provide what it thinks is an answer.
Can you generate code with it? Yes. Is it the code you want? Maybe. Is it quality code, with no bugs? Probably not.
Will you have to have an actual professional software developer fix it? Yes.
LLMs trained on examples don't have an understanding of anything, only a prediction path. It's time we stop pushing the fallacy that they are somehow better than experienced professionals at anything; they only generate fakes.
Re: (Score:3)
Yes. Is it the code you want? Maybe. Is it quality code, with no bugs? Probably not.
One of these days the devs whose code they are using are going to find out and start issuing copyright claims against the companies doing AI code completion AND their customers.
Re: (Score:2)
I've had mixed emotions about copyright around code. Now, an overall solution, look and feel, trademarks, and certainly patents all apply. But if I take something off a website where you've published a code sample or even an entire solution, it should be labelled as such, and attribution certainly given if reused.
Code that's GPL'd would probably apply here as well.
I guess that's why there are arguments around copyright and AI, but that's for legal scholars and politicians to argue.
Re: (Score:3)
if I take something off a website where you've published a code sample or even an entire solution, it should be labelled as such, and attribution certainly given if reused.
Attribution only satisfies the author's moral rights. The author of computer program code also has the exclusive right to commercially exploit their writings, and attributing it does not make it legal for someone else to do so.
Sample code off the internet is generally for your learning or personal use only; not legal to copy and paste
Re: (Score:2)
Does it, though? I mean, without getting into the legal ramifications of LLM data mining: not attributing where the code comes from is problematic, but does it violate copyright? If you label your work as copyrighted, then the answer would be yes. If you contribute an answer to Stack Exchange, then Stack Exchange still respects the original author's copyright and uses the CC BY-SA license. None of that matters much when you have a giant bot just pulling in data, not caring who contributed it and why.
I don't think this part
Re: (Score:2)
That would probably lead to the end of the software industry, if they could prove that a particular code segment being influenced by another code segment made them liable for copyright infringement.
Re: (Score:2)
Even if you are brilliant, and surprise the industry enough to get your own patents, they will surround your patent with theirs. The fight is futile; there is only the acquisition.
Re: (Score:3)
Don't confuse statistical inference, which is what mimics "reasoning" in an LLM, with reasoning. LLMs have a lot of problems with hallucinations and accuracy that a reasonable professional can see through. The areas where LLMs have strengths (NLP, translation, and synthetic data generation) are beneficial, but they don't create new knowledge, though like any tool they could lead to new insights with existing information.
Questioning LLMs and the hype around them isn't a case of misguided disregard for technological in
Re:quality. (Score:4)
Sorry, LLMs can't reason. No matter what your espoused vitriol proposes.
LLMs can perform tasks that look like reasoning, and prompting techniques have improved those capabilities. That doesn't mean they can reason. Until there is an indisputable, scientifically based study to demonstrate otherwise, they are fact regurgitators and hallucination engines with implied bias. If you want examples of the latter, go look at the launch of Gemini. A "Reasoning AI" wouldn't have fucked up that badly, and then it was disclosed that bias was involved.
I hear the palpable panic now from LLM purveyors about how great the world will be with this innovation. It's also funny as fuck to see people defend it as the next Oracle of Delphi, because it's not.
They can't reason with any common sense; you'll piss and moan about that, but they can't. They can hallucinate with bias, so every answer coming out of an LLM needs to be verified. They're built on algorithms and training data with bias, rendering a solution set governed by that. Nothing more, nothing less. Can they be a useful tool? Certainly, but it's like giving a hammer and saw to an inexperienced carpenter and asking them to build you a house. You have to know how to apply the tool, and many of the use cases I've seen reported on are flawed.
Re: (Score:2)
To be fair, an LLM can't reason. It's a language model. It also doesn't "predict what you're asking for." It translates natural language into a concept-based latent space and vice versa.
Presumably the OP is using "LLM" to mean whatever the latest OpenAI, DeepSeek, Gemini, or Llama thing is. These are systems that are purposely designed to reason and happen to have language models allowing natural language input and output formatting. In that case the OP's claim is just the usual human conceit with no justification.
Re: (Score:2)
To be fair, an LLM can't reason.
Yes, it can.
It's a language model.
What are you trying to imply with this?
Are you trying to imply that within some massive corpus of work in a language, the structure of reasoning isn't encoded?
Are you trying to imply that the neural network can't encode it?
Handwavy shit like this is absurd. It's shit people say with no ability to back up the meaning they're trying to imply with it.
It also doesn't "predict what you're asking for." It translates natural language into a concept-based latent space and vice versa.
It converts tokens, their adjacencies, positions, and a fuckton of other dimensions into concept-based latent space. The natural language aspect co
more hyper and idiocracy from the ultra rich (Score:3)
"...they will literally do the work of 10 or 100 engineers in the course of a single day."
As long as that work is shitty work, as long as the expectations of the app are low enough, as long as the quality of software continues its downward trend, as this will ensure.
"According to Tan, 81% of Y Combinator's current startup batch consists of AI companies, with 25% having 95% of their code written by large language models."
That's not good news, it's a condemnation of Silicon Valley greed and billionaire tech bros.
"Despite limitations in debugging capabilities, Tan said the technology enables small teams to perform work previously requiring dozens of engineers and makes previously overlooked niche markets viable for software businesses."
Real programmers know what a vital and time-consuming role debugging plays; this tells you all you need to know. This Tan guy does not know software development.
will be the hottest thing for 3 years (Score:5, Insightful)
Re: (Score:2)
I have no points to upvote you, but I would do it 100x if I could.
What I see here is an incredible number of people looking the other way and singing "lalala" at an obvious omen of their own demise.
Re: (Score:2)
This is true. I think people should be ready for a bunch of layoffs in the near term.
However, over the longer term, I'd be surprised if there aren't some pretty good opportunities to service or rewrite codebases that companies actually need...
We've seen this movie: VB & RoR (Score:5, Insightful)
In my career-span, there was Ruby on Rails... suddenly lots of idiots were Ruby developers, telling me how old and stupid I was for not using it. I missed that fad because people were paying me much better to work in Java. However, every developer found it easy to create some forms, but as soon as business requirements kicked in, they had to abandon the Rails part... and maintenance was horrible. Also, the performance was shit, so yeah... you got a cool prototype really fast, so long as you ignored the actual business requirements and didn't want it to scale. Every RoR app I ever saw was replaced by a more conventional Java+JavaScript stack... or if they were low-budget, node.js.
In both cases, the frameworks are largely gone from the landscape... even Groovy on Grails, which had excellent Java integration, is largely gone. Why? Creating is easy; maintaining is hard.
If you like bugs and bloat? Let inexperienced "vibe" coders give you a sloppy prototype riddled with errors and security violations. Let's see how that plays out. I think it's VB all over again...but would love to be proven wrong and somehow these overpriced AI vendors figured out how to get machines to maintain code and write code that is secure and well-written from the start. It can be done, in theory.
However, the reason I am skeptical is that if they could do it, they'd make a LOT more money porting existing Python, Ruby, VB, ASP, and COBOL apps to Java or Rust that looks like it was written by elite developers. That would make a TON of money, but you'd quickly know if it was successful or not. Hell, all the major players have all sorts of legacy code that would benefit from this.
MS has invested a fuckton of money into AI projects. Imagine an AI CLR that converted your slowish C# code to tiny, fast Rust code behind the scenes, cut your Azure electricity spend in half, and halved your response time? That would be a license to print money. Imagine AI that could find all security violations and submit PRs for your developers to review? That would be a HUGE source of recurring revenue... just so long as it worked. I think the shit they have doesn't actually work.
Applying Kernighan's wisdom (Score:2)
"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" Applying this logic to LLMs used by people who can't code at all is left as an exercise for..., well, probably for your pet LLM.
Re: (Score:2)
There's no logic to that whatsoever.
There's an assertion, and then there's fallacious logic (begging the question) working off of it.
Hostile robots (Score:2)
Re: (Score:2)
The problem with humans is that they form beliefs around stupid fucking phrases masquerading as wisdom, being too fucking lazy or stupid to rigorously apply anything approaching logic to determine whether that was smart or not.
And then what ? (Score:2)
Re: (Score:2)
AIs are notoriously bad at taking something that exists and making a small modification (just ask the graphics people who try to regenerate AI art with some small modifications); and nobody understands the code, since nobody wrote it. How does that unmaintainable code work for you?
Complete unadulterated bullshit.
LLMs are absolutely excellent at making modifications to code.
You slam what you need in the context window, tell it what you want changed, and tell it to give you the full result, or a diff if you want.
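For what it's worth, that workflow is trivial to script. A minimal sketch using the openai Python client (the model name and prompt wording here are illustrative choices, not a recommendation):

```python
# Sketch of "slam the file in the context window, ask for a diff back".
# Assumes the `openai` package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def request_patch(source_path: str, instruction: str) -> str:
    with open(source_path) as f:
        code = f.read()
    prompt = (f"Here is {source_path}:\n-----\n{code}\n-----\n"
              f"{instruction}\n"
              "Reply with a unified diff only, no commentary.")
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model with enough context works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# e.g. print(request_patch("billing.py", "Add an ownership check to view_invoice."))
```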
From my attempts at coding with ChatGPT, I have a hard time believing you can build anything complex enough with this method, but okay.
Given the above, I'm 99% sure you're completely full of shit.
Here come the Followers of Ned Ludd (Score:2)
Parallels to this in
AI coders are not coders (Score:5, Funny)
I saw this thread on reddit about a month ago. Guy was using an AI tool to program like this, and lost 4 months of work when the AI went nuts and deleted everything. He'd never even heard of git. Then a bunch of other AI coders started jumping in telling him that he needs to use git, and keep copies of all of his code in different folders at milestones so that he has an extra backup. Not one of them had any clue what they were talking about.
This all happened in an AI coding subreddit, I saw it linked from /r/programminghorror. This thread made me feel much more secure in my job lol.
https://www.reddit.com/r/curso... [reddit.com]
Adversarial coding may give us a timeframe (Score:1)
Adversarial coding may let us know when this approach is good enough for real-world use.
Team A: A few humans + AI writing code.
Team B: A few humans + AI looking for problems with the code.
Team C: Enough good/experienced humans to really pick apart the code and find all but the most obscure serious issues.
When Team B gets as good as Team C, then we can talk about "a few humans + AI writing code" for real-world projects.
Until then, you may want to stick with Team D: Enough good/experienced humans to write
Bullshit (Score:2)
Someone should check these companies code bases. Because I can personally say that as someone who has been using ChatGPT to assist work on a few personal projects, it fucks up ALLLLLL the time. Just enter in a few hundred lines of code and ask it to reprint that code back to you. About half the time it will leave out small bits or entire chunks of code here and there. Don't even get me started on the coding. If you have any ambiguity in your questions/instructions you are going to get a best guess answe
Write Only code. (Score:2)
AI tools create Write-Only code. That is, it performs the purpose intended - with a few random bugs, and security exploits - but when you need to modify anything, you start over completely.
Most developers could increase their productivity if they could write code with no thought to maintainability. There's even a guide: How to Write Unmaintainable Code [github.com] - which the AI, no doubt, has been trained on.
When I was in college, I took a course in assembly. Recognizing that the instructors were providing psu
Re: (Score:2)
AI tools create Write-Only code. That is, it performs the purpose intended - with a few random bugs, and security exploits - but when you need to modify anything, you start over completely.
Wrong.
Yet another post on LLMs, yet another complete falsehood from you.
LLMs will gladly format code however you want, make iterative changes to it, give it to you as diffs, or entire files. Whatever the fuck you want. They output well-commented and readable code.
You're not wrong about the random bugs and security exploits, though. That's very much real.
Executives don't understand (Score:2)
Only an out-of-touch executive would think a software system just needs "coding" to implement it. That's the least of the job. The more important aspect is designing the system: figuring out how functional modules should be organized and how they should interact.
Which AI is he using? (Score:2)
That is to say, it gave me something that looked like it was supposed to work but was never going to work, because it was hopelessly, hopelessly out of date. I mean like 10 years out of date.
May
Re: (Score:2)
Lots of hallucinated modules, or module interfaces, or modules that nobody used or maintained anymore.
Recently, I've been using Qwen's Coder fine-tune for its mix of speed and quality. It works excellently.
Sometimes, I bust out bigger models- particularly reasoning models- if I've got a tough nut to crack and I want it to really take a shot at it. This also works excellently.
Of co
What hardware did you use (Score:2)
Re: (Score:2)
Soon, you'll be able to get yourself an AMD rig that can do the same, but at significantly lower performance (though a healthy bit cheaper, most likely; still not cheap, however).
Re: (Score:2)
Christ that's some old school workstation pricing. You must have paid at least 8 grand for that.
Yup. 7 for the one it replaced.
The AMD equivalent you're talking about is about $2,300; not sure how it'll actually stack up in the real world, though. Those are the initial launch releases; there might be cheaper options available later, but I'm not so sure. AMD seems to be keeping that technology locked into super expensive laptops in order to keep prices high.
Yup :/
I do hope they sort that shit out and provide some actual competition for Macs in this space. They've got all the tools they need to do so.
There are certain quirks to the Mac (lack of BF16 support, meaning I have to convert BF16 models to FP16, lack of Metal support for some of the more advanced LLM fine-tuning tools) that I'd love to not have to deal with.
That, and I can imagine buying 2-4 of the things and setting up an LLM cluster. Since you're only moving around c
Only modest hyperbole (Score:2)
"81% of Y Combinator's current startup batch consists of AI companies" so clearly this guy is motivated to push the narrative, but I don't think he is very far off the mark. Maybe 10 guys won't replace 100 yet but there definitely is movement in that direction.
I use Windsurf (basically a front-end for Claude Sonnet) and it has boosted my productivity quite a bit. It is particularly useful in situations where I'm not very familiar with the programming language or the API's I'm needing to work with. You have
Wasn't it just last week (Score:2)
I started working as a programmer/database developer about 40 years ago. I remember some 20 years ago talking to one of my first programming instructors. She was no longer teaching systems analysis because that wasn't what students wanted to learn. They wanted to
Value (Score:2)
Most startups do not care about longevity / security (Score:2)
I suspect he's right. (Score:2)
Code does not have to be elegant, robust, or even particularly efficient. In fact, your code can be horribly nonperformant and modern hardware will serve it up fine. It might take 20x the clock cycles it ought to, but cycles are cheap. All it has to do is pass the tests. Just don't do code reviews.
The traditional-coding pragmatic perfectionist in me hates that... but what can you do? The question is not whether the generated code is good... it's whether it is good enough.
Yet another case of (Score:1)
Pitch Deck AI (Score:2)
Waiting for the AI tool that generates startup companies in just a few minutes. Talking points, slides, the entire pitch, plus the code that runs the MVP. One or two slippery "customers" and the VC money will be pouring in.
AI Programming Goals the Same as COBOL (Score:2)
Though COBOL never stole code from anyone, and AI's purveyors did.
https://en.wikipedia.org/wiki/... [wikipedia.org]
"Startups" (Score:2)
Y Combinator CEO Garry Tan said startups are reaching $1-10 million annual revenue with fewer than 10 employees due to "vibe coding," a term coined by OpenAI cofounder Andrej Karpathy in February.
Startups. These are going to be programs that the public will never use. They're hype generators, meant to wow an investor group, get funded, then disappear into the ether.
If, by some strange miracle, any of these "Vibe coded" programs ever makes it to anything resembling production, look for real developers to be hired and scream bloody fucking murder about what a mess this code is. Testing cycles will be astronomical, and debugging will consist of days per simple function just to understand what the fuck
For a startup? It makes perfect sense (Score:2)
When value is not proven, it's much better to do it cheaply and quickly, regardless of how much tech debt you accrue.
There's a fairly high chance the company will fail in any case.
But once you're somewhat established and value is proven, it won't be so easy, and this transition from startup to established player may end up making the chasm companies have to cross much, much larger.
I guess we'll know where the sweet spot is in a couple of years.
Consider the source (Score:2)
Given that this is Y Combinator we're talking about, all the prompts were probably of the form:
"A system like X, except for Y"
where X = {Uber, Grubhub, Facebook, ...}
and Y is a niche currently-unmonetized domain.
I will believe it when even one of those applications gets any traction in the real world, and doesn't get immediately crashed or owned.
It's not only developers (Score:2)
How is that different from traditional workflows? (Score:2)
I've been in the startup space for 10+ years, and every single company I have ever worked for, outside of RIM (Blackberry), had far too few engineers, and a ton of bloat and useless employees. I've worked at startups w
I can confirm. (Score:2)
I'm currently getting a legacy Angular application under control and expanding it to cover updated and new requirements. I finally got into using AI to assist me. My boss gave me access to a ChatGPT 4o subscription.
The hype is real and justified.
It's basically a well-educated, committed computer expert: an API documentation that I can chat with, with solid knowledge of edge cases and pitfalls. Think rubber-duck debugging, but the rubber duck is a senior webdev with expert knowledge in every widespread technol
So a ten-fold increase in productivity (Score:2)
How much of an increase do they get in wages?
Is it "Reasoning" (Score:2)
The claims about these systems doing "reasoning" and "inferencing" are exactly the same claims about it "knowing" or "thinking" or "being intelligent".
Nobody agrees on what any of those words "really" mean; it's fairly pointless to debate. The so-called "AI" does *something*, and some people find it to be useful, to varying degrees. All the words trying to describe it are for purposes of marketing, which is generally way overblown.
You can compare the output of the AI to the output from humans, and you can s
Sound like a plot to overwork developers (Score:2)
If you think 10 people can do the work of 100, you should pay each worker what they're worth. Trimming staff and explaining away the cuts as "AI can handle the load" is the new "let's fix the shitty singer with Auto-Tune during mixdown". Sometimes it might work, but garbage in, garbage out.