

What Happens If AI Coding Keeps Improving? (fastcompany.com) 118
Fast Company's "AI Decoded" newsletter makes the case that the first "killer app" for generative AI... is coding.
Tools like Cursor and Windsurf can now complete software projects with minimal input or oversight from human engineers... Naveen Rao, chief AI officer at Databricks, estimates that coding accounts for half of all large language model usage today. A 2024 GitHub survey found that over 97% of developers have used AI coding tools at work, with 30% to 40% of organizations actively encouraging their adoption.... Microsoft CEO Satya Nadella recently said AI now writes up to 30% of the company's code. Google CEO Sundar Pichai echoed that sentiment, noting more than 30% of new code at Google is AI-generated.
The soaring valuations of AI coding startups underscore the momentum. Anysphere's Cursor just raised $900 million at a $9 billion valuation — up from $2.5 billion earlier this year. Meanwhile, OpenAI acquired Windsurf (formerly Codeium) for $3 billion. And the tools are improving fast. OpenAI's chief product officer, Kevin Weil, explained in a recent interview that just five months ago, the company's best model ranked around one-millionth on a well-known benchmark for competitive coders — not great, but still in the top two or three percentile. Today, OpenAI's top model, o3, ranks as the 175th best competitive coder in the world on that same test. The rapid leap in performance suggests an AI coding assistant could soon claim the number-one spot. "Forever after that point computers will be better than humans at writing code," he said...
Google DeepMind research scientist Nikolay Savinov said in a recent interview that AI coding tools will soon support 10 million-token context windows — and eventually, 100 million. With that kind of memory, an AI tool could absorb vast amounts of human instruction and even analyze an entire company's existing codebase for guidance on how to build and optimize new systems. "I imagine that we will very soon get to superhuman coding AI systems that will be totally unrivaled, the new tool for every coder in the world," Savinov said.
UBI follows? (Score:2)
Re: (Score:2)
Re: (Score:2)
Well, for this and a lot of other reasons, there will be nations with an UBI you can live off reasonably well and quite a few measures that give people that want it something sensible to do with their time. And then there will be the civil war areas where everything has gone to hell.
Re: (Score:2)
Regarding UBI: You can't get there from here.
My future prediction is there will be resource wars given that one nation feels the need to be first before everyone else.
That only leads to one outcome: A significant reduction in human population on the planet.
After the resource wars, if there is a small contingent of humans left, maybe then there will be UBI (If we can manage the warlords)
I'm afraid it has to be all torn down to the ground before UBI can even be considered.
Re: (Score:2)
Regarding UBI: You can't get there from here.
Nonsense. That just shows you have not bothered to do even minimal research and are hence bereft of actual understanding.
Re: (Score:2)
This discussion might be more interesting for the rest of us if both of you would give some actual reasoning. UBI + flat rate seems to me totally logical. It is 100% what most countries are already trying to achieve with their combination of simple survival benefits + tax-free earning allowances + somewhat progressive income taxation, just much simpler and clearer to administer.
That says to me that UBI will be very difficult to achieve because the majority of politicians, including those that
Re: (Score:2)
Well, the situation is that work is running out due to productivity increases. Hence "work" as a mechanism to distribute the wealth of society is not working anymore.
The other situation is that in many countries, benefits and aid programs are wasting huge amounts of money. Hence a livable UBI can be financed if all other aid is removed (this has been calculated in several countries, e.g. Switzerland, and is factually accurate in most places in Europe, the US may be fucked though...).
So, clearly an UBI is both
Re: (Score:2)
That says to me that UBI will be very difficult to achieve because the majority of politicians, including those that claim to be left wing, are actually pretty far right and don't want simplicity and efficiency to interfere with their ability to complain about "big government".
They can be both left and right, you know.
There is more than one left, and more than one right. [slashdot.org]
How and why? (Score:1)
Otherwise the 1% buy out all the capital and production capacity and they can just use monopolies to raise prices and suck your UBI money right out from under you.
So far the only people with any leverage proposing UBI as a solution have been right wingers who want to use it as a way to eliminate all other regulations and social programs. The idea is you get your UBI and you shut the fuck up because hey, we're giving
Re: (Score:1)
Realistically we need fully automated space communism.
That's the most hilarious sentence I've read in a while.
Re: How and why? (Score:2)
Space Nutters on average are some of the funniest people once you realize they are mentally ill.
Re:How and why? (Score:4, Insightful)
Keep in mind UBI is worthless without a whole host of other programs and protections.
And you base this on what? Oh, that's right, nothing, because you're just speculating, because no such institution has ever existed for you to make any kind of measurement against.
So far the only people with any leverage proposing UBI as a solution have been right wingers who want to use it as a way to eliminate all other regulations and social programs. The idea is you get your UBI and you shut the fuck up because hey, we're giving you free money, what's wrong with you? It's there to absolve them and the rest of the community from doing anything else to maintain a proper civilization.
Like who? And you say they're right wing based on what? And what does that even mean?
Realistically we need fully automated space communism.
You know, the interesting thing about communism is the whole idea was conceived of by two men who came up with all of these little intricate details about how it would work, and making all kinds of predictions about what exactly will happen once the "revolution" begins. And you know what? Over the next 150 years, multiple "revolutions" began and none of them worked out at all how those two prescribed. The whole system turned out to be a recipe for dictators to seize control of what in many cases were democratic regimes, and turn their states into kleptocracies, which happened every single time without exception.
You know why? Because just like you, these guys had it in their head that everything they were speculating about would be without-a-doubt accurate, even though in the end it was nothing like how they said it would be, even when their prescription was followed.
Just like what you're doing here.
Re: (Score:2)
I can see UBI being easily nullified by rent increases. Say people get $5000 a month, and it goes up by 10% a year. Rents just go up exponentially from there, especially with the fact that it is profitable to buy a property and keep it 100% vacant, as the more properties off the market, the higher real estate prices go.
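The compounding claim is easy to check. A minimal sketch, using the commenter's hypothetical $5,000/month stipend and 10% annual rent growth (the $2,000 starting rent is an assumed figure):

```python
# Hypothetical numbers from the comment above: a flat $5,000/month
# stipend versus rent that compounds at 10% per year. The starting
# rent of $2,000/month is an assumption for illustration.
def years_until_rent_exceeds(stipend, rent, growth):
    """Count whole years until monthly rent exceeds a flat monthly stipend."""
    years = 0
    while rent <= stipend:
        rent *= 1 + growth
        years += 1
    return years

print(years_until_rent_exceeds(5000, 2000, 0.10))  # → 10
```

At 10% growth even a rent that starts at 40% of the stipend swallows all of it within a decade, which is the nullification effect the comment describes.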
Re: (Score:2)
This is why universal basic goods and services beat universal basic income... UBI advertises a price floor to highly uncompetitive markets. Give a person an apartment instead of UBI to pay for an apartment, and the resident no longer has to worry about rent hikes or mortgage rate increases.
This could also begin to expose the fact that land ownership was one of humanity's greatest mistakes. Most of them involved allowing people to outright own things they shouldn't have been able to - land, other people, t
Re: How and why? (Score:2)
Re: (Score:3)
The commons system did work well. People worked together to maintain the commons. Only corporations would create a tragedy of the commons by exploiting them into ruin.
Re: (Score:2)
In countries that don't have the "conquering" mindset, commons work perfectly fine. Farmer Joe knows that if he overgrazes the common area, it affects not just everyone, but him as well. However, once a country goes from a high-trust level to a low-trust "take if not nailed down" mentality, things that can benefit everyone in a village wind up having to be shut down. The US used to have an extensive... EXTENSIVE (to use caps) rest stop system, on highways, side roads, almost everywhere. Some places even
Re: (Score:2)
You can get a couple friends to go to the middle of nowhere and buy a few acres for very low prices. Combined, your UBI payments will easily cover the mortgage.
especially with the fact that it is profitable to buy a property and keep it 100% vacant
No, property prices keep increasing because people need to live close to work. Remove that need and people will move elsewhere, causing lower property prices. The pandemic perfectly demonstrated this effect.
Re: How and why? (Score:1)
Re: (Score:2)
Re: (Score:1)
and the prison costs a lot more to keep people in l (Score:2)
and the prison costs a lot more to keep people in lock up
Re: (Score:2)
Somebody's always going to bring this up, every time there's a story about AI "taking jobs" sometime in the future.
Guess what, technology has been taking jobs for centuries. Just 50 years ago, 30% of US workers worked in factories, now less than 5%. Just 100 years ago, 70% of US workers were farmers, now less than 5%. And yet, somehow we are at roughly 4% unemployment. Magic, huh!
This so-called apocalypse of job loss is a long ways from reality. Sure, we can imagine it, but reality always turns out to be mo
AI sucks for frontend (Score:1)
AI quickly fucks off when designing an advanced frontend UX/UI. It is great for backend/API though, where things can be more granular.
Re:AI sucks for frontend (Score:4, Insightful)
You're just putting your head in the sand. You could make the argument that it'll never get there, but honestly, with the leaps and bounds we've seen in the last year alone, that's a pretty weak argument.
Re:AI sucks for frontend (Score:5, Insightful)
That's because you don't understand (Score:1)
Let's say productivity goes up across the board in programming by 20%. That means 20% fewer programmers because we don't enforce antitrust law so there is little or no competition and companies don't have to worry about startups anymore.
so now you've got several hundred thousand programmers gunning for your job. They are facing homelessness and starvation if they don't pull it off so they really put their nose to the grindstone. Maybe a quarter of them can reach your level.
So c
Re: (Score:2)
Historically and economically, it is far from certain that your hypothetical 20% increase in productivity would actually result in a proportionate decrease in employment. Indeed, the opposite effect is sometimes observed. Increased efficiency makes each employee more productive/valuable, which in turn makes newer and harder problems cost-effective to solve.
Personally, I question whether any AI coding experiment I have yet performed myself resulted in as much as a 20% productivity gain anyway. I have seen pl
Re: (Score:2)
Historically and economically, it is far from certain that your hypothetical 20% increase in productivity would actually result in a proportionate decrease in employment.
Past performance blah blah. They're already mass firing with a vengeance.
Re: (Score:1)
Re: (Score:2)
Yes, that's the premise. But reality doesn't support the premise. Most of the time, AI code doesn't even compile, you have to fix nearly everything it writes. Stupid stuff like adding some code in SQL between BEGIN and END, and it puts in a second END. Even a junior developer wouldn't do that. Get into more complex stuff like javascript mixed with HTML, and you get all kinds of garbage. Helpful, yes, but ready to be unsupervised? Hardly. Not even close.
Re: (Score:2)
We said that about AI when AI LLMs made images which were absolutely twisted and surreal. That got better.
One of the milestones will be if AI can beat hand-tuned assembly in specific applications like embedded stuff. If AI can move optimizing compilers to the next level to as good as possible, then this will be a major thing. Maybe even AI taking algorithms and figuring out better algorithms that do the same thing, like replacing a basic sort with a quick sort.
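The "better algorithm that does the same thing" milestone above boils down to one requirement: the replacement must be provably output-identical before the swap. A minimal sketch, where the selection sort and its drop-in replacement are illustrative stand-ins:

```python
# A deliberately slow O(n^2) selection sort, and a drop-in
# replacement using Python's built-in sort. An optimizer, human
# or AI, may swap one for the other only because the outputs
# are identical on every input.
def selection_sort(items):
    items = list(items)
    for i in range(len(items)):
        # Find the index of the smallest remaining element and swap it in.
        j = min(range(i, len(items)), key=items.__getitem__)
        items[i], items[j] = items[j], items[i]
    return items

def fast_sort(items):
    return sorted(items)

data = [5, 3, 8, 1, 9, 2]
assert selection_sort(data) == fast_sort(data)  # behavior preserved
print(fast_sort(data))  # → [1, 2, 3, 5, 8, 9]
```

Any AI-driven algorithm replacement would need exactly this kind of equivalence evidence, just over a far larger input space.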
Re: (Score:2)
Good for repetitive stuff (Score:5, Insightful)
Re: (Score:1)
Obviously so. Also note that standardized repetitive stuff gets coded into a library sooner or later and nobody will have to write it again.
Re: (Score:2)
This is the thing I get worried about. For human endeavors we bother to organize code into common, maintained libraries. But what if the LLM can just essentially copy-paste the 'best of breed' code? What does actually maintaining that code look like? When humans copy-paste like that, you get massive amounts of code that no one realizes may have fallen behind.
Colleagues have spoken of how it tends to reproduce ancient and inappropriate javascript suggestions, which is sort of like how stackoverflow does, answers that made se
Re: (Score:2)
I am not worried. Libraries are much more than just code. They need a real understanding of the problem and what its parameters should be. LLMs cannot do that. And hence LLMs cannot identify the best approach to something. Library designers can. And it is a process that can take a long time. The results may be crap (example: Windows kernel API) or exceptionally excellent (example: Linux or xBSD kernel API).
Re: (Score:2)
Repetitive stuff is not new. I've used Lex and Yacc to write code that writes code, and Perl, and bits of Java and Excel...
The lazy programmer always sees a lazy way!
Funny thing. I actually understood every line, and I could explain it all to anyone who asked.
btw. This used to be the book.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Not sure if this is what everyone reads today.
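The "code that writes code" trick above doesn't need Lex and Yacc; the same idea fits in a few lines of Python. This is a minimal sketch, and the field list and accessor template are made-up examples, not anything from a real generator:

```python
# Generating repetitive accessor functions from a spec, in the
# spirit of Lex/Yacc-style code generation. The field list below
# is a made-up example spec.
fields = ["name", "email", "age"]

template = (
    "def get_{f}(record):\n"
    "    return record['{f}']\n"
)

# Emit one accessor per field, then compile them into a namespace.
generated = "\n".join(template.format(f=f) for f in fields)
namespace = {}
exec(generated, namespace)

record = {"name": "Ada", "email": "ada@example.com", "age": 36}
print(namespace["get_email"](record))  # → ada@example.com
```

The lazy-programmer payoff is the same as with Yacc: you maintain the short spec and template, and every line the generator emits is a line you can still explain.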
Re: (Score:2)
This used to be the book
I bought a copy last year to help me with my embedded programming, but I'm pretty sure it's not what "everyone" reads anymore.
Automating creation of spaghetti, not maintenance (Score:2)
It used to be that to "cure" traffic, one built more freeways. However, it was learned that new freeways resulted in new building construction such that those freeways soon filled up and were just as jammed as before.
So what I expect to happen is that software will become even more bloated because more devs will use AI to throw code and features at a problem instead of practicing the patience-requiring art of parsimony. It may code up faster, but it is likely to be a maintenance spaghetti bowl.
Re: (Score:2)
Re:Automating creation of spaghetti, not maintenan (Score:5, Insightful)
'The whole idea of "spaghetti" code is that programmers can't easily understand what it does. I am not sure that is problem for AI.'
That is in fact a major problem for 'AI' because it doesn't understand anything. It is not parsing the code, understanding how it works and then working out how to add new features. It's looking at how programmers have solved a problem in the past and copying that. By its nature, it will make code more spaghetti-like and never less. It's also (at least so far) shown itself unable to understand concepts such as scope, and how a new feature may interact with already existing features. Again, this lack of understanding will lead to more spaghetti code.
An aspect that will make this problem even worse is the poor quality of the bulk of code out there. Projects that have been carefully considered, designed and optimised from first principles are extremely rare. Much more common is poorly designed and written code that's been fixed after publication, and usually in a hurry as critical bugs have been discovered. And since memory and storage got cheaper this problem has only got worse. Spaghetti code is the norm in this industry and so this is what the 'AI's are mostly trained on. Expecting them to write better than the average human programmer shows a complete misunderstanding of how these things work.
Re: (Score:2)
That is in fact a major problem for 'AI' because it doesn't understand anything. It is not parsing the code, understanding how it works and then working out how to add new features. It's looking at how programmers have solved a problem in the past and copying that.
Are those different things? I agree AI "understanding" is anthropomorphizing the process. But I would think it can, at least theoretically, parse the code, determine what it does and then compare it to a world full of other code that solved the same problem in the past. Including code that is used to add similar features.
But you missed the important point. Spaghetti code implies code that follows a path that can't be easily followed and understood by humans. I see no reason to think AI is likely to produce
Re: (Score:3)
It is not parsing the code, understanding how it works and then working out how to add new features.
Have you ... used it?
I can feed ChatGPT some code, and it does parse it.
Frequently - more and more frequently - it does add new features correctly, per my requests.
Does it "understand"? Probably not, but we don't even really know what that means with humans a lot of the time.
Re: (Score:1)
I am not sure that is problem for AI.
It is no problem.
At least not in general.
Re: (Score:2)
We have already seen this, especially as machine capacity and memory expand. Microsoft Word used to have about 95-99% of the features it does now... and used to fit on a single floppy.
Re: (Score:2)
Nukular weapons for children? (Score:2)
I've never met a non-engineer/non-programmer that understands what testing is, or why you need it.
I'm fairly sure that testing will not be very rigorous.
Re: (Score:2)
And then you add IT security and the little fact that, for example, North Korea invests a lot into training competent hackers. Anybody that does real coding work without quite competent (and hence expensive) coders is committing strategic suicide.
Re: (Score:2)
Also, btw, I've noticed that along with testing, the entire concept of "strategy" has been hand waved away in favour of convenience... guessing that is MBAs again, using the "why build what Microsoft and CrowdStrike can provide us for a low low monthly fee".. plus then there's no need for anyone to audit or even perform network security.. think of the money we'll save !!
Yeah, the future looks effing depressing.. and scary.
Re: (Score:2)
Yeah, the future looks effing depressing.. and scary.
Indeed, it does.
What happens if it doesn't? (Score:3)
Re: (Score:2)
Indeed. Hype is not an indicator something will become possible. Remember flying cars or the "home robots for everybody" craze from about 40 years back? In the "AI" area, only very little of the promised results ever materialize and some of them do only do so multiple decades later.
Re: (Score:2)
> if AI coding doesn't keep improving?
While nothing is guaranteed, I think it's reasonable to assume that it will continue to improve. Obviously AI companies have learned a lot, as evidenced by improvements to this point, and coding AI is an area that should be easiest of all AI to get right, as the results can be tested and fed back. The AI should be able to learn to optimise, detect and eliminate security hazards, largely automatically.
The only thing that can't be learned automatically is "feel", and i
Re: (Score:2)
It's kind of a weird pattern I noticed whenever companies start getting bought in a particular field, it usually causes stagnation in advancement. The company buying the tech has no idea how to advance it further, and the original founders don't care anymore.
Re: (Score:2)
Re: (Score:3)
I'm sort of amazed that companies like Microsoft, Google have 30% of their codebase written by AI.
It's pretty much guaranteed that they'll have been training on GPL2 and GPL3 code and it's going to be really hard for someone to tell if it's copied from copyright sources.
Yesterday I needed to add lseek support to a fuse driver. I asked chatgpt for a basic example and it was completely obvious that it had lifted it (maybe modified) from an open source driver.
(It came from here https://lists.nongnu.org/archi.. [nongnu.org]
It will not (Score:5, Insightful)
It peaked quite a while ago. At this time there is simply no more code to train it on after the whole Internet content got stolen for that purpose. As LLMs have no insight and no understanding, what they can do is strongly limited by their training data. In theory, that training data could be cleaned up and manually extended, but the effort will be prohibitive beyond a few cosmetic efforts to fool benchmarks (and the fools that believe in them).
And there is a second problem: If AI "coding" is used more and more, even less human-made training data will be available and the training data will get worse due to model collapse. Hence no, AI "coders" will not get better and no, this is not sustainable. No longer training and hiring real coders will likely be a bad strategic mistake, as it means far fewer will be available when the illusion that "AI can code" finally collapses.
Re: (Score:2)
Except, also kinda not.
The LLM component won't be getting better for a while. We've run out of data to chew on, and without new data, very little will change with this element.
What is changing is the sanity checking part (yaknow, by putting Yet Another LLM into the loop), and the using external tools part. And that second one is not to be underestimated.
Because right now, an LLM can respond to requests by using the correct tool for the job, rather than trying to do the job itself.
Re: (Score:2)
Well, maybe. Or maybe not. I personally think these are empty promises designed to keep the hype going a bit longer because the hype is so profitable and, for example, OpenAI will go directly into bankruptcy once the hype stops.
Re: (Score:2)
Does slashdot have a "no AI stories" filter? (Score:2, Insightful)
Asking for a friend.
Re: (Score:2)
No.
Slashdot doesn't even handle encodings well.
Perhaps things will improve when the LLM learns Perl.
Good at coding is not good at design (Score:4, Insightful)
Good at coding is not good at design and analysis, particularly of unknown problems - and almost all business-specific problems are "unknown" to outsiders. An LLM can only repeat things it has seen before - and it may have seen many things, some of which it remembers, but the possible range of combinations of problems in any particular business is so large that many of them have not been seen before.
LLMs are very bad at those. They're also bad at understanding any complex existing code, so they're great for greenfield programming like coding challenges, and bad at analysing and modifying existing code.
When you bear in mind that the greatest skill of a programmer is reading and understanding existing code to modify it, and that most programming is modification of existing systems to extend them and not de novo application design, the future for "AI Coding" isn't looking so great right now.
Simple tasks will be replaced by AI, yes. Just like the simple tasks of "install operating system, edit configuration files" have been replaced by "run the platform configuration tool (Terraform/Puppet/Chef/Ansible/whatever)". That hasn't made SREs and Platform Engineers go away at all, it simply means they spend less time typing into text editors and command lines, and more time setting up management frameworks to do the management. LLMs are bad at that too.
LLMs are particularly hilarious when it comes to anything related to security or other edge cases of reliability (security is an edge case - most users are not malicious, but you must defend against those that are, just as you must defend against the edge case of unlikely hardware failures, and so on).
Re:Good at coding is not good at design (Score:4, Interesting)
While there are some good points here, I don't think that any of them are unsolvable.
AI is already used to detect security vulnerabilities. All you need is an AI as an adversary to the AI code, and that should solve that problem.
> the greatest skill of a programmer is reading and understanding existing code to modify it
It's true that a lot of what programmers bring to a company with experience is the memory of the code structure and how things are done. However, programmers are also notoriously bad at documentation. If an AI can program and document decently at the same time, that would go a long way towards later reuse.
Alternately, AI can just rewrite things from scratch. If the AI writes enough unit tests, replacing existing structures with newly written AI code that does the same thing won't be too much trouble. In fact, it seems to me like a good way to move forward. One AI in charge of the system, another AI rewriting things whenever there's need for new functionality.
I think that there's a lot of scope of moving into an AI programming world. I'm not sure what functions people will play there, it will be interesting to find out, but AI in general should be able to do most things, IMO.
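The rewrite-from-scratch workflow described above rests on one mechanism: a test battery that pins down behavior before the replacement lands. A minimal sketch, where both word-count implementations are illustrative stand-ins for "old code" and "AI rewrite":

```python
# An existing implementation, a from-scratch rewrite, and the
# test battery that must pass before the old code is retired.
# Both functions here are illustrative examples only.
def old_word_count(text):
    count = 0
    in_word = False
    for ch in text:
        if ch.isspace():
            in_word = False
        elif not in_word:
            in_word = True
            count += 1
    return count

def new_word_count(text):
    # str.split() with no argument splits on any whitespace run.
    return len(text.split())

cases = ["", "one", "two words", "  padded  input  ", "a\tb\nc"]
for case in cases:
    assert old_word_count(case) == new_word_count(case)
```

The catch, of course, is the phrase "enough unit tests": the rewrite is only as safe as the coverage of the cases list.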
Re: (Score:2)
what if you used one AI to code and have a committee of AIs to poke holes in it?
Generative Adversarial Network (GAN) idea, applied to coding?
I think that's what you're saying.
Re: (Score:3)
We've set up "opinionated" role-identities, and given them an IRC network to communicate across.
So then, an agent with the "Engineer" role gets their output checked by at least two others, so that technical considerations are balanced against business use case analysis and ethical considerations. It's not quite an adversarial network, but we use phrases like "Politely but aggressively skeptical" for one of the agents, and describe another one like a child hell-b
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
>> There is very little new about what a business needs to do with software
Very true. There are a limited number of tasks and problems that need solving in most businesses. The business app companies are well aware of this and are already transitioning everything to AI.
“Either get on with it and participate, or start digging your own grave.”
https://www.salesforceben.com/... [salesforceben.com]
Cope (Score:1)
Years ago I was goofing off in low paying jobs and then I got saddled with a kid and before I knew it I needed more money. Never mind that inflation and price gouging was rapidly guzzling down my income requiring more money.
I wanted to goof off and rel
Re: (Score:2)
>> They're also bad at understanding any complex existing code
I haven't found that to be the case at all. The AIs I've used can quickly scan through a codebase, figure out what it all attempts to do, and compose a readme that describes it all. AI is also good at commenting existing code, looking for things to refactor, and writing unit tests. It will do any kind of drudgery for you with no complaints.
Codebase (Score:4, Funny)
You'll have a code base and not 1 employee who understands it.
Re: (Score:2)
You'll have a code base and not 1 employee who understands it.
Depends on how much you anthropomorphize AI because the AI "employee" will certainly "understand" it. But how important is that? There are millions of companies using computers who have zero employees who understand the code for the programs they use. In fact, I suspect few programmers would be able to decipher the output of a compiler. But the computer "understands" it and we test its output against the desired results.
One good thing will happen. (Score:2)
Coding tests in the interview process will become meaningless (they already were but that's a different topic), and most likely dropped forever.
No they can't (Score:3)
Tools like Cursor and Windsurf can now complete software projects
This is bullshit.
It won't. (Score:2)
It won't, unless (and that's a really big unless) human coding keeps improving and the LLMs are carefully, iteratively updated to take advantage of the best written code there is. That's not happening, and is increasingly unlikely, IMO.
Ditto for any and all other text-based "machine learning" systems -- unless someone literally starts over building LLMs from scratch, and rather than using any old garbage they scrape off the web or pirated books or whatever, actually train it on well written, well understoo
Standard hype strategy (Score:5, Insightful)
Re: (Score:2)
This isn't boolean however. It's not like the only two options are "perfect at complete software" AI versus useless AI. The most likely outcome given LLMs' historic trajectory is that they gradually become better at getting closer to perfect over time, where the number of human engineers required to make complete solutions approaches zero the more time passes. What is the basis for the argument that the gains LLMs have made will not continue to push toward that end? I know the post here is particularly focu
Re: (Score:3)
>> it's rapidly and consistently moving toward complete solutions with less input from human engineers
Most certainly true in my experience. The improvements over just the past 8 months have been very substantial. AI now writes about 90% of my code, though I generally give it small incremental coding tasks similar to what I would have had to write on my own. If I just write a 2-line comment describing what I'm about to implement it will frequently just complete it all, or a very near facsimile, and I c
AI is really good for some things (Score:3)
But, the Enterprise still has Jeffries tubes. You still need to know exactly where to go to replace that one damaged component. The ship itself isn't vibe-coded. Similarly, complex systems where exactness is critical, such as security or banking, can't afford to be AI-generated. They will still use ever-smarter autocomplete, and thus will still count towards the "AI generated" stats. But they'll be traditional software.
Context window (Score:2)
What if AI Purveyors Paid for the Code they Stole? (Score:2, Insightful)
Re: What if AI Purveyors Paid for the Code they St (Score:2)
Re: (Score:2)
>> AI is theft-ware
In what way? You can figure that all of the open source code in existence was fed in as training data, plus all the coding textbooks. That isn't stealing.
Re: (Score:2)
No it isn't, unless you broaden the definition such that nearly 90% of the code you write is theftware. Someone showed you how to write a loop in Java, didn't they? Oh, did you see someone's code of how to do a sort algorithm? Every time you write a loop, you're stealing that code template. You have a sort in your program, unless you dreamt up the algorithm it's likely you stole that code.
Here's a challenge for AI (Score:5, Interesting)
Re: (Score:2)
yes please!
Unless AI takes over the government (Score:2)
For Linux as a whole, gaming, especially online gaming, is a major issue, as is anything involving specialized hardware, like creating music or more advanced video work. I suppose if AI made it so easy for companies to write drivers that they just did it, that would be less of an issue. It still won't solve the patent issues or the interoperability issues.
Re: (Score:2)
If AI code output is not patentable (it already is not copyrightable in the USA; patentability is untested, to the best of my knowledge), then eventually, once the 20-year patent term runs out, we get out of the software patent game entirely.
Re: (Score:2)
Easy.
Some good, some bad (Score:2)
The good would be if AI helps us manage complexity, find tricky bugs, and handle edge cases and other difficult problems. Also good would be making simple tasks simpler.
The bad would be managers who hire cheap, incompetent people to use AI to create crappy, buggy code that they don't understand.
I suspect that we will see a bit of both.
Prepare for a tsunami of bugs
if we are lucky, maybe we can stop re-inventing (Score:2)
Imagine if we had LLM coders in 1995. Would there be any reason to abandon Visual Basic and ActiveX? Would JavaScript and HTML even exist?
"superhuman coding AI systems" (Score:2)
This is entirely possible at the rate things are going. My guess is that eventually the AIs will develop coding languages for themselves that are better suited to the way they operate than what we use today. It will become increasingly difficult for humans to understand the AI-generated code, which may be more similar to assembler or bytecode than to a high-level language.
Our view of the software will consist of pseudocode blended with a readme that describes what's going on.
Re: (Score:2)
I'd think that an AI-specific target code with no traditional intermediate programming language would be both unnecessary and problematic. It would also be challenging: the AI would need some entry point for learning to wrangle such a language, and there would be no training data to consume to come up with one.
As to your second point, I think there is a class of projects this can work for, and much of that class is "stuff a programmer could churn out, but those are relatively less available, so we settle for..."
Re: (Score:2)
Code completion is definitely a mixed bag, I agree, and a lot of the time just a distraction. But I have seen an entire good clause or two offered up on many occasions.
I also agree that the supply of new, original, human-developed code examples from the traditional places would decline if AI begins to write most of the code. My prediction is that the AI assistants will instead learn from humans as we make use of their services; that will be the new training data.
Citation needed (Score:3)
Tools like Cursor and Windsurf can now complete software projects with minimal input or oversight from human engineers
Citation needed, or even better, a case study. Lots of CEOs who have bet their companies' shirts on AI keep saying this, but I don't think there is a commercial product out there that is fundamentally written by AI. As others have mentioned, AI is fine for writing repetitive, simple code that can save a developer time. It is not at all able to complete a project beyond the sort of thing that would be assigned in a first-year coding class.
AI writing code is small potatoes (Score:2)
Hot take: when AI really gets rolling, it won't just write code; it will analyze all the existing code out there, figure out what the designers of those coding languages were really trying to accomplish with all of the half-baked or almost-but-not-quite-optimal language features that went into each one, and use that to generate the spec for The One True Programming Language That Finally Gets Everything Exactly Right.
Or not. But it would be interesting to see it try.
Generative AI: the Asbestos of engineering (Score:2)
What the industry needs is not more speed; it's more care. This generative AI revolution achieves the opposite.
What if AI coding writes bad code and it's not found... (Score:2)