Will AI Force Source Code to Evolve - Or Make it Extinct? (thenewstack.io)
Will there be an AI-optimized programming language at the expense of human readability? There have now been experiments with minimizing tokens for "LLM efficiency, without any concern for how it would serve human developers."
This new article asks if AI will force source code to evolve — or make it extinct, noting that Stephen Cass, the special projects editor at IEEE Spectrum, has even been asking the ultimate question about our future. "Could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future?" Cass acknowledged the obvious downsides. ("True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks.") But "instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh." This leads to some mind-boggling hypotheticals, like "What's the role of the programmer in a future without source code?" Cass asked the question and announced "an emergency interactive session" in October to discuss whether AI is signaling the end of distinct programming languages as we know them.
In that webinar, Cass said he believes programmers in this future would still suggest interfaces, select algorithms, and make other architecture design choices. And obviously the resulting code would need to pass tests, Cass said, and "has to be able to explain what it's doing." But what kind of abstractions could go away? And then "What happens when we really let AIs off the hook on this?" Cass asked — when we "stop bothering" to have them code in high-level languages. (Since, after all, high-level languages "are a tool for human beings.") "What if we let the machines go directly into creating intermediate code?" (Cass thinks the machine-language level would be too far down the stack, "because you do want a compile layer too for different architecture....")
In this future, "the question might become 'What if you make fewer mistakes, but they're different mistakes?'" Cass said he's keeping an eye out for research papers on designing languages for AI, although he agreed that it's not a "tomorrow" thing — since, after all, we're still digesting "vibe coding" right now. But "I can see this becoming an area of active research."
The article also quotes Andrea Griffiths, a senior developer advocate at GitHub and a writer for the newsletter Main Branch, who's seen attempts at "AI-first" languages, but nothing yet with meaningful adoption. So maybe AI coding agents will just make it easier to use our existing languages — especially typed languages with built-in safety advantages.
And Scott Hanselman's podcast recently dubbed Chris Lattner's Mojo "a programming language for an AI world," just in the way it's designed to harness the computing power of today's multi-core chips.
Abstract Syntax Tree (Score:3)
Re: (Score:2)
Just generate assembler or even machine code directly. Give the AI the task, then ask it to generate for x86, ARM, etc.
C--? (Score:2)
>>Or, since all microprocessors consist of registers, simple instructions, memory addresses, and push/pull, just create a very simple, generic "language" based on that, and then compile it to any specific architecture.
We already have programming languages like that. C-- claims to be a "portable assembly language" (some claim that traditional C also fits that description).
Re: C--? (Score:2)
Who needs a language at all?
Cowboy Neal codes in machine instructions, AI should too.
Even C gets translated to assembly before the assembler makes the binary that gets modified by the linker.
Point is: all human-accessible programming languages are _at least_ two degrees of separation from machine language executed by the instruction decoder in the CPU. The only thing that matters is the final binary executable in RAM.
Re: (Score:2)
Are any of the people making these proposals actually considering the consequences of having generated code that no human being can understand?
Re: Abstract Syntax Tree (Score:3)
Re: (Score:2)
In the old(er) days of AI there was a philosophical split between people who believed the way forward was creating larger and larger databases of hand compiled facts and rules for relating them, and people who favoured doing as little as possible hand engineering and letting learning algorithms figure things out for themselves. The latter group is certainly dominant now, but old habits die hard.
If we dispense with the need for humans to understand programs then we won't have a "language designed for" progra
What's the prize? (Score:4, Funny)
Sounds like a great idea, what could go wrong? (Score:2)
We are having a hard enough time as it is debugging code generated by LLMs, and now someone wants to make it even more obscure??
Re: Sounds like a great idea, what could go wrong? (Score:2)
You think they will shut down and leave dependent companies stuck with all that source code? You realize companies pay fortunes to mainframe builders and other legacy vendors, COBOL devs, etc.?
Re: (Score:2)
Indeed. My take is that LLM code will eventually have to be ripped out everywhere and at high cost. Good for us, bad for society as a whole.
human readable (Score:2)
Microsoft BASIC could be saved in the original text format or a faster-loading token format. The token format was not human-readable but existed as a language of its own. Any computer language can be translated into another language, including human-readable text. I had a translator program that would convert tokenized BASIC into human-readable text. I also have a program that converts C programs to Pascal.
Any computer language is just a language and can be translated to most any other language if you know w
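To make the token-format idea concrete, here's a minimal sketch of a detokenizer; the token byte values are invented for illustration and are not the actual Microsoft BASIC token table:

```python
# Sketch of detokenizing a BASIC line stored as byte tokens.
# NOTE: these token values are invented for illustration; the real
# Microsoft BASIC token tables varied by dialect.
TOKENS = {0x80: "PRINT", 0x81: "GOTO", 0x82: "IF", 0x83: "THEN"}

def detokenize(line_bytes: bytes) -> str:
    """Expand token bytes to keywords; pass plain ASCII through."""
    return "".join(TOKENS.get(b, chr(b)) for b in line_bytes)

# A tokenized line representing: PRINT "HI": GOTO 10
raw = bytes([0x80]) + b' "HI": ' + bytes([0x81]) + b" 10"
print(detokenize(raw))  # PRINT "HI": GOTO 10
```

Going the other direction (tokenizing) is just the inverse mapping, which is why round-tripping between the two formats was straightforward.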
Tests are not sufficient (Score:3)
Not likely. (Score:2)
LLMs are not magic. They are trained on existing languages, including natural language.
For an AI-only language, someone will have to invent it first, then train the models on it.
The far more likely outcome will be that the models will code in whichever language has the most code examples, and translate from your natural language input to that language.
Re: (Score:3)
I have used AI to generate code, and it does a decent job of providing generalized code when I do not want to do it myself. The code never works correctly, or even at all, but it looks fine at first glance. The problem is programming languages have to be precise, and AI misses details. Specifically, I was generating Python code.
Obvious problems: 1) Using parentheses "()" to call a value in an array instead of brackets "[]". 2) Not using the correct spacing. Python is very space sensitive. Generated code may
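Both pitfalls described above are easy to demonstrate in plain Python (the variable names here are illustrative):

```python
values = [10, 20, 30]

# Pitfall 1: parentheses mean a call, not an index; a list is not callable.
try:
    values(1)
except TypeError as e:
    print("bug:", e)          # bug: 'list' object is not callable

print("ok:", values[1])       # ok: 20  (square brackets index the list)

# Pitfall 2: Python's significant whitespace. A generated line with the
# wrong indentation fails at compile time, before anything runs.
bad_source = "if True:\nprint('hi')"  # body is not indented
try:
    compile(bad_source, "<generated>", "exec")
except IndentationError as e:
    print("bug:", e.msg)
```

Neither mistake survives a first test run, which is why generated code that "looks fine" still needs to be executed before it's trusted.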
Lack of information.... (Score:2)
ehh, I've had the opposite experience. I've got way too many years writing C for embedded systems, but needed an Android app. So I asked ChatGPT to create an Android app for me that would do MDNS and Bluetooth discovery, pop up a dialog to let me choose from the discovered devices, and then connect over WiFi or Bluetooth as appropriate. And, after spending an hour downloading and installing the Android toolset, the program compiled first time and did what I asked. I did my normal step-through-line-by-lin
Re: Not likely. (Score:3)
When was the last time you tried this? 2024? Even between last month and this week, AI is already a world of difference. Codex 5.3 is a leap up from Codex 5.2. Opus 4.6 is much better than 4.5. And don't use generic AI for coding tasks; ChatGPT 5.4 can't code its way out of a wet bag.
Re: Not likely. (Score:2)
DotNet bytecode will work fine. Highly efficient representations of abstract languages, easy to translate, plenty of documentation.
LLM are trained on existing programming languages (Score:2)
Maybe an LLM could produce machine code or p-code directly, instead of taking an intermediate step through a language that has to be compiled or interpreted, but this makes the LLM's output harder to check, and because the output isn't deterministic, it makes the program very difficult to debug.
It could get much worse (Score:2)
Considering text generators mostly work for things you'd have "copied off StackOverflow" before, I fear that this might lead to more and more boilerplate code. We already saw this with advanced development systems in languages like Java, where you often have more boilerplate than actual code.
I mean we keep seeing more and more effort being put into the same trivial code we've written for many decades now. Generative AI will likely only make that problem worse. Probably good programmers, now called 10x engine
Re: It could get much worse (Score:2)
Why is that a bad thing? A library of validated functions that can be glued together would be more secure and more stable than bespoke programming. It's not as efficient, but with modern processors why is that important? In the industrial automation world this is what systems like PlantPAX are for.
Re: (Score:3)
It's not as efficient, but with modern processors why is that important?
Why do the terms "640k", "ought", and "everybody" keep popping into my head? ;-)
More specifically, data centres are increasingly responsible for worsening AGW. Inefficient code wastes more power than it needs to; and that's literal, physical, P=E*I power.
AI-usable is the future (Score:2)
I would say from past experience that some concepts are simply not meant for AI.
For example, the Swift programming language is typically good. But the SwiftUI construct built on top of Swift is nearly impossible to debug, both for users and AI. Asking AI to do SwiftUI is an exercise in futility. (One could say the entire idea of SwiftUI is a false good idea, but that'd be stretching it.)
C++ is not much better. It has to rely on known constructs to understand what the code is doing. AI will add up asin
we already have a term for this (Score:3)
Can we reuse the term 'binary blob'? I think we can.
I love it! (Score:5, Insightful)
I love everything about this, but don't get me wrong: it's not because I think it's a good idea. On the surface this seems like the future, but really it's the single dumbest proposal to come out of AI-based programming. Even assuming you make a language that avoids the possibility of syntax/structural flaws, it's the inability to scrutinize the underlying code that will bite you. The result is a reduced ability to debug code, and it will end up obscuring security flaws, which will make the jobs of criminals easier.
I love this idea because I know the second a company using this crap gets bitten, it's going to be an extremely expensive problem to fix, more costly than if they had just paid normal programmers to write the code originally, and it may actually force entire programs to be rewritten. This basically ensures that if it gets off the ground, it's going to self-destruct, and every company that invested in this idiocy will suffer financial losses.
What's not to love about idiots getting their just deserts?
Re: (Score:2)
My job is not primarily focused around writing code, but I have to write code now and then. One the major challenges around code is dealing with very messy data and corner cases. The code would have worked flawlessly except for that meddlin' data. For example, dealing with division by zero errors because a number that should have never been a zero is a zero in one case out of millions. Or that number is null even though the source system does not allow for null to be in that field. One example in two millio
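That class of one-in-millions data bug is exactly what defensive wrappers are for; a minimal sketch (the function name and defaults are invented for illustration):

```python
def safe_ratio(numerator, denominator, default=None):
    """Divide, tolerating the 'impossible' rows: a zero or a null in a
    field the source system supposedly never allows to be zero or null."""
    if numerator is None or denominator in (None, 0):
        return default
    return numerator / denominator

# One well-behaved row, then the two rare failure modes described above.
rows = [(100, 4), (7, 0), (None, 3)]
print([safe_ratio(n, d) for n, d in rows])  # [25.0, None, None]
```

The hard part isn't the guard itself; it's knowing, from experience with messy data, that the guard is needed at all.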
Re: (Score:2)
I love this idea because I know the second a company using this crap gets bitten it's going to be an extremely expensive problem the fix
That's my gut reaction too -- this will result in software with obscure bugs that are near-impossible for a human to find or fix because no human even understands how the software works.
OTOH, maybe no human will need to find or fix the bugs, because they can task an AI to find and fix them instead. I'd say that strains credibility, but last year I would have said it strains credibility that an AI can understand (or, at least, "understand") human-written code as well as a human programmer, and yet here we a
Lack of information.... (Score:3)
>>> this will result in software with obscure bugs that are near-impossible for a human to find or fix because no human even understands how the software works.
And that's different from the current state of Software Engineering how, exactly?
Re: (Score:2)
I agree this is a dumb proposal.
Just wait until the AI that generates your code gets compromised.
After a quick internet search I'll leave this link here:
https://micahkepe.com/blog/thompson-trojan-horse/ [micahkepe.com]
Re: (Score:2)
Problem with that is that this may actually cause a major global crisis.
Re: (Score:2)
That's the idea. I look forward to the massive paydays being doled out to mop up the mess they make.
The question is... (Score:3)
Why bother? LLMs already do the job they're supposed to just fine targeting a human-readable language, so what do they imagine the gains to be? The whole point is that LLMs deal in human-friendly material.
This seems to be a pointlessly 'futurist' ambition for showing enthusiasm for LLMs rather than some goal with practical implications.
Even when I see someone having reasonably 'good' luck with their LLM usage, they end up with something a little wrong that would be a super quick tweak if they would just go manual for a second. But they are committed to the prompt interaction, so they keep going back and forth with prompts, not getting exactly what they want, or something else changes at the same time. Leaving the code human-readable means you can make that quick manual change without trying to induce an LLM to make a precise change, which LLMs, even at their best, are prone to botch.
Where would training come from? (Score:4, Insightful)
AI learned to code by reading human code examples. Where will the training examples come from if AI codes directly to a human-unreadable language?
Re: (Score:2)
There will be languages made specifically for AI. Probably something a bit like Rust, with inherently safe typing and the like.
There will still be some developers too, mostly working in languages like C and assembler, and on building stuff for AI to use.
But a lot of development of in-house apps will probably get replaced by AI. The quality is already terrible because there is no business case for doing more than the bare minimum to make it work.
Re: (Score:2)
AI doesn't *only* learn to code by reading human code samples. It also can read documentation and write code based on what it sees in those documents, even if no examples exist.
Generally speaking, the syntax of a language is straightforward. Even non-AI tools can generate proper syntax. What AI does is a level of abstraction above syntax: applying a specific solution and expressing it in that syntax. AI generally breaks down a problem into components, then puts the building blocks together. The lack of cod
Re: Where would training come from? (Score:2)
yeah maybe.
i wonder how much language-independent discussion of software design is out there. can an AI only "think" in a specific language like python or whatever ? i'm assuming the bulk of the code intelligence is coming from scraping SO and all the coding blogs out there, which in my experience are almost always using the context of a specific language.
otoh, maybe that doesn't matter. it can assume python for "thinking" about design and then drop down to the ai-direct language for implementation.
and
Re: (Score:2)
i'm assuming the bulk of the code intelligence is coming from scraping SO and all the coding blogs out there, which in my experience are almost always using the context of a specific language.
I think this assumption was mostly true a year or so ago, but isn't true any more. Current-generation LLMs do reason quite effectively about code and are perfectly capable of creating entirely new code that isn't regurgitated from anything. They're actually better at debugging than writing code, I think, though they tend to focus on shortest-path solutions even after they've (correctly!) identified the root cause.
Re: Where would training come from? (Score:2)
totally,
but where does that reasoning ability come from ?
i'm assuming it comes from the vast body of human discussion about coding.
Re: (Score:2)
totally, but where does that reasoning ability come from ? i'm assuming it comes from the vast body of human discussion about coding.
Some of it, sure. But some comes from the generic reasoning overlays, some from self-derived conclusions arrived at through self-talk, and some from training that didn't originate in the corpus of human discussion, i.e. from synthetic training. Your mental model of LLMs as regurgitation machines is no longer accurate.
Re: Where would training come from? (Score:2)
i think you're reading a regurgitation viewpoint into what i'm saying. i put 'think' in quotes because i'm still wary of describing them as thinking, but yeah i totally agree they're way beyond regurgitation at this point. i mean, huge portions of human reasoning are language-based. the entire field of formal mathematics is founded on symbol manipulation, for example. so it makes sense that being good at working with language can result in reasoning.
Re: (Score:2)
Assuming that LLMs get all their information from human discussions and examples, is oversimplifying what LLMs do.
One way to visualize it (literally) is to think about what Photoshop does when it "erases" someone or something from a photo. When it does this, it doesn't get the idea of how to "erase" by scraping discussion forums, and not even necessarily from scraping photo sites. Instead, it "hallucinates" what might be "behind" the thing being erased, a lot like auto-complete. It extrapolates from what
Re: (Score:2)
I've seen similar ideas floated by other idiots that said they'll generate machine code directly. Machine code as training data is easy to come by (just compile the C/C++/Rust/Go source), but good luck doing any kind of QA or debugging on it. Code review decreases the amount of testing needed drastically compared to blackbox testing.
Re: (Score:2)
What? A strategic view on things? The get-rich-quick idea people do not do that.
Kind of a dumb question that sounds profound (Score:2)
Binary to "Human Source Code" for reviewing ... (Score:2)
... is likely going to be the mid- to further out future of coding. Presumably de-compiling into some human-only language not intended for re-compiling is going to become more and more of a thing. No need to go through all the hoops of countless programming languages and frameworks just because some naked apes like to each turn their own little software world into a stack-religion.
Some form of containers is going to remain though. Especially to isolate problems and find bugs isolating logical components int
but they won't (Score:2)
A return to the era of "buggy" compilers (Score:5, Interesting)
But that semantic gap (the levels of abstraction between what the programmer considers as an abstract execution model and what the hardware actually executes) was increased by going from C to C++, and Cfront was the tool that enabled that expansion. Newer, even higher level languages have continued to increase that semantic gap, aiding productivity gains. But, they still generate code in a predictable, consistent way. The same code, compiled by the same compiler version, will generate the same executable code every time.
But here we are again. LLM-generated code is just the new "Cfront", taking very high-level descriptions of what we want, in the form of "prompts", and translating them from one (higher) level of abstraction into code at another (lower) level. Great. Except now we're back to having to deal with "bugs" in the code generation. Worse yet, it doesn't generate the same code consistently, because the "temperature" parameter is a trade-off between output quality and consistency. Perhaps one day we'll get to where modern compilers and code generators are. Until then, we are still responsible, even liable, for the code we create, either directly or via some LLM-based model. So we'd better be able to read and understand the code being generated at the layer most prone to bugs, until that layer reaches the level of reliability modern compilers are at today.
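The temperature trade-off can be made concrete. At temperature 0, sampling degenerates to argmax and is deterministic, like a compiler; at any higher temperature, identical input can produce different output. A toy sketch with invented logits:

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick a token index from logits; temperature 0 means pure argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.9, 0.5]  # two near-tied candidate tokens

# Temperature 0: the same answer on every run, like a compiler.
print({sample(logits, 0, random.Random(i)) for i in range(20)})   # {0}

# Temperature 1: identical input, yet the chosen token varies across runs.
print({sample(logits, 1.0, random.Random(i)) for i in range(50)})
```

Real decoders add top-k/top-p filtering on top of this, but the core point stands: with LLM code generation, consistency is a dial, not a given.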
Re: (Score:3)
I'm sure AI will write efficient and effective machine code.
There is absolutely no reason to expect that. There are a lot of reasons to expect the converse. And, apparently, you have not heard that LLM-generated code needs careful review. That is much harder to do at the machine-code level.
click bait (Score:2)
Can we top this click bait article title this week?
Random thoughts... (Score:2)
Raise your hand if you have programmed in machine language, entering binary directly into memory. Raise your hand if you have programmed in assembly. Raise your hand if you have programmed low-level stuff in C.
The first question will have the fewest takers, because there is almost no reason to do that anymore. Assembly will have a few more takers, low-level C a few more. Technology has progressed, our compilers, optimizers and linkers have gotten better.
Historically, there have been numerous attempts
Re: (Score:2)
I have. I still have some of the 8085 call/jump/return opcodes memorized. There were times when it was much faster to patch in a fix and continue, rather than stopping to change the assembly source, run the assembler and linker, and re-download the executable into the in-circuit emulator.
Then again, that was in the early 80's.
Re: (Score:2)
The problem is that CPUs, assemblers, and compilers all come with specs and assurances. LLM code may be arbitrarily broken. Hence they are the wrong tool for the job.
No, high level also makes sense for AI (Score:2)
Compilers should adapt too (Score:3)
If we go this way, then we should also adapt compilers to include random variation in the code that they generate, so that the same source code on the same version of the compiler won't always produce the same output.
You know, if LLMs are the gold standard for how we want all the world to work, we take the things that are really bad ideas for programming in LLMs and implement them in our compilers too.
Let's ask Grok.... Straight to the horse. (Score:2)
Re: (Score:2)
Can you build another AI to summarize all that useless shit? Fucking thing bloviates worse than a politician.
Re: (Score:2)
Thanks!
YES (Score:2)
Question 1: Will AI make some languages evolve? Yes, very likely.
Question 2: Will AI make some languages go extinct? Yes, very likely.
These things happen with or without AI.
Anybody remember ALGOL? Its structured syntax prefigured C, and a dialect of it was used to write the Unisys (Burroughs) mainframe operating system. It's pretty much extinct today.
Simula anyone? It was a big deal in the 1960s.
The point is, languages go extinct all the time, for many reasons. AI will lead to more extinctions, but this won't be a new phenomenon.
Languages also
Re: (Score:2)
These things happen with or without AI.
If these things happen with or without AI, then how is AI going to cause languages to go extinct or evolve? You've basically nullified your own premise.
Anybody remember ALGOL? It was a C-like language that was used to write the Unisys mainframe operating system. It's pretty much extinct today. Simula anyone? It was a big deal in the 1960s.
And? I fail to see how one of the earliest programming languages, built for 1960s machines, demonstrates your point. Better languages were invented; hardware has changed.
The point is, languages go extinct all the time, for many reasons. AI will lead to more extinctions, but this won't be a new phenomenon.
You have yet to demonstrate AI will be the cause. You have made the assertion and then declared it to be true.
Languages also constantly evolve, at least, if they are being used. Would AI cause this evolution to change in some way, compared to the evolution that would happen without AI? Perhaps. How would we tell the difference?
Again you keep asserting a premise then declaring it true without any
Re: (Score:2)
I doubt that AI will particularly drive changes, since the whole point would be that the LLMs don't care and can work with the languages as-is.
The languages evolve and go away because of human motivations. Maybe a toolchain had a proprietary lock and associated with a business that went south. Maybe the language was a bit overly verbose and people didn't feel like dealing with it. Sometimes it's more like fashion than reason with how they go unpopular and back, maybe because some rabid fan implements a m
Some kind of token format (Score:2)
Some kind of token format could be used for AI which would be converted to human readable text when necessary.
At the current state of AI, I am pretty sure it would be a bad idea to do away with human-readable code. AI is just not as reliable as we'd need for that. But I won't rule out that AI could become good enough.
Critical systems (Score:2)
I can't wait to fly on a plane with AI generated flight control systems. After all, what possibly could go wrong?
Maybe I'll wait until Elon uses only AI generated code for all SpaceX vehicles first.
nonsense (Score:2)
Will there be an AI-optimized programming language at the expense of human readability?
Why? We already have machine code. What could an "AI-optimized programming language" do that neither machine code nor current programming languages already do?
"Could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice?
Uh, you can do that today. That "intermediate language" is any programming language with enough material on the Internet that the LLMs have been trained on it.
Now whether that's a good idea or a recipe for disaster is an ongoing discussion. As a security professional, my take is simple: Thank you, AI, my job is secure until I retire. Just when techni
Codex wrote "hello world" straight to binary (Score:2)
Write code in a sequence of prompts (Score:2)
If chatbots can code so well... (Score:2)
Then shouldn't they produce clear, self-documenting code?
Re: (Score:2)
Yes. And lots of it for cheap. But somehow that is not happening.
Ownership of your life? (Score:2)
In most of human history people have had the choice to do things the
Those who ignore history are doomed to repeat it. (Score:2)
Engineers own the code, not the prompt. (Score:3)
In our practice we are maintaining the principle that we own the code. While the prompts help us develop the code, the ownership responsibility for the resultant code still rests with the human. Thus our development approach needs to adjust how the agent delivers the code so that it is still comprehensible and auditable.
Neither (Score:2)
Seriously, what kind of a stupid question is that? But LLMs may make it clearer that you need people who can actually code well, or you will be losing money.
Re:The llms lack understanding of code (Score:5, Insightful)
Re: (Score:3)
To play devil's advocate here, let's say you're using an LLM to generate some Python. You already have no idea what the underlying machine code will be, let alone some representative C of that machine code. The thinking here is that soon you won't know what the Python is either, yet somehow you end up with a working solution.
In some senses you can see this being a logical conclusion. However, as others note, LLMs don't actually "program" - they just regurgitate code they've seen elsewhere. It's unlikely t
Re:The llms lack understanding of code (Score:4, Interesting)
Re: (Score:3)
You already have no idea what the underlying machine code will be, let alone some representative C of that machine code.
For AI to fill the same role as a compiler or interpreter, its behavior needs to be deterministic and predictable.
The history of programming has been a progress toward higher levels of abstraction. AI could become a new, even more abstract programming language, but only if you can precisely reason about it. If it's impossible to predict what the result of your program will be then you're not programming. You're rolling the dice and hoping a black box will happen to do something useful.
You also run into t
Re: The llms lack understanding of code (Score:2)
improving the code at check-in is a terrible idea. At most it should add a task to the task list that you can review. At least, that's one of my rules in agents.md: everything goes through the task list.
Re: (Score:2)
So, for example, they fail miserably at...subtle problem debugging.
You are mistaken.
https://aisle.com/blog/what-ai... [aisle.com]
Re: (Score:2)
Why bother with software at all? The endgame is that the AI does the job of any and all software. Just let go and let God. I mean AI.
Re: (Score:2)
Exactly! Why would anyone even bother to write any "human readable" program? All _most_ programs do is swallow some data, transform it in some way, and then output some data for humans, who still have a job, to see. You can teach AI to perform that same transformation on the input data, using the same algorithms.
But:
- Does AI offer repeatability? If I have taught it one transformation, is it guaranteed that it will exactly repeat that transformation again and again. After a million times? I have seen the sam
Re: (Score:2)
A recurring theme on the LLMs-and-code front: a *whole* lot of people who never got into coding are saying a whole lot about LLMs dealing with code...
To some extent, I get it, a manager who was always intimidated by code manages to get a little project to come out almost like he wanted without the manager actually knowing how to code, and that's exciting. But they always start pontificating on what their success means on projects that they themselves can't make happen even with an LLM, but imagine t
Re: (Score:2)
The thing is, you only begin to understand the real problems and challenges of writing code when you have a few years of experience, have done some larger things, and have maintained some older code, not only simple business code. And suddenly getting some smallish thing to work looks like what it is: playing with simple toys.
Re: (Score:2)
for humans to quickly, but precisely understand the giant code stack created by the LLM.
Why would the LLMs want to do that? Can't
grep "Kill all humans" *.c *.h
Re: (Score:2)
Hahahaha, no. That idea has failed several times already. It is not doable with what the human race knows at this time and that will not change anytime soon.
Re: (Score:2)
LLMs are also really bad at writing secure Python and at spotting non-trivial security problems in Python.