Is Self-Healing Code the Future of Software Development? (stackoverflow.blog) 99
We already have automated processes that detect bugs, test solutions, and generate documentation, notes a new post on Stack Overflow's blog. But beyond that, several developers "have written in the past on the idea of self-healing code. Head over to Stack Overflow's CI/CD Collective and you'll find numerous examples of technologists putting these ideas into practice."
Their blog post argues that self-healing code "is the future of software development." When code fails, it often gives an error message. If your software is any good, that error message will say exactly what was wrong and point you in the direction of a fix. Previous self-healing code programs are clever automations that reduce errors, allow for graceful fallbacks, and manage alerts. Maybe you want to add a little disk space or delete some files when you get a warning that utilization is at 90 percent. Or hey, have you tried turning it off and then back on again?
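That kind of clever automation can be as small as a single script. Here is a minimal Python sketch of the "delete some files at 90% utilization" idea; the threshold and the glob of which files count as expendable are illustrative assumptions, not from the blog post:

```python
import glob
import os
import shutil

def free_space_if_needed(path="/", threshold=0.90, expendable="/tmp/demo-cache/*"):
    """If disk utilization crosses the threshold, delete expendable files.

    Returns the number of files removed (0 if utilization is below the threshold).
    """
    usage = shutil.disk_usage(path)
    if usage.used / usage.total < threshold:
        return 0  # healthy; nothing to heal
    removed = 0
    for f in glob.glob(expendable):
        try:
            os.remove(f)
            removed += 1
        except OSError:
            pass  # already gone or protected; not worth failing over
    return removed
```

In a real deployment this would run from a monitoring hook or cron job, and the expendable-files pattern would be chosen very carefully, for the reasons commenters raise below.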
Developers love automating solutions to their problems, and with the rise of generative AI, this concept is likely to be applied to the creation, maintenance, and improvement of code at an entirely new level... "People have talked about technical debt for a long time, and now we have a brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before," said Armando Solar-Lezama, a professor at the Massachusetts Institute of Technology's Computer Science & Artificial Intelligence Laboratory, in an interview with the Wall Street Journal. "I think there is a risk of accumulating lots of very shoddy code written by a machine," he said, adding that companies will have to rethink methodologies around how they can work in tandem with the new tools' capabilities to avoid that.
Despite the occasional "hallucination" of non-existent information, Stack Overflow's blog acknowledges that large language models improve when asked to review their responses, identify errors, or show their work.
And they point out that the project manager in charge of generative models at Google "believes that some of the work of checking the code over for accuracy, security, and speed will eventually fall to AI." Google is already using this technology to help speed up the process of resolving code review comments. The authors of a recent paper on this approach write that "As of today, code-change authors at Google address a substantial amount of reviewer comments by applying an ML-suggested edit. We expect that to reduce time spent on code reviews by hundreds of thousands of hours annually at Google scale. Unsolicited, very positive feedback highlights that the impact of ML-suggested code edits increases Googlers' productivity and allows them to focus on more creative and complex tasks...."
Recently, we've seen some intriguing experiments that apply this review capability to code you're trying to deploy. Say a code push triggers an alert on a build failure in your CI pipeline. A plugin triggers a GitHub Action that automatically sends the code to a sandbox where an AI can review the code and the error, then commit a fix. That new code is run through the pipeline again, and if it passes the tests, is moved to deploy... Right now this work happens in the CI/CD pipeline, but [Calvin Hoenes, the plugin's creator] dreams of a world where these kinds of agents can help fix errors that arise from code that's already live in the world. "What's very fascinating is when you actually have in production code running and producing an error, could it heal itself on the fly?" asks Hoenes...
For now, says Hoenes, we need humans in the loop. Will there come a time when computer programs are expected to autonomously heal themselves as they are crafted and grown? "I mean, if you have great test coverage, right, if you have a hundred percent test coverage, you have a very clean, clean codebase, I can see that happening. For the medium, foreseeable future, we probably better off with the humans in the loop."
Last month Stack Overflow themselves tried an AI experiment that helped users to craft a good title for their question.
Please no. (Score:5, Insightful)
Re:Please no. (Score:5, Insightful)
Well, I've long been under the firm belief that printing an error and exiting, or throwing an exception and exiting, is not error _handling_. Handling an error means doing something to attempt to recover. Even something simple like waiting a little bit to retry is usually preferable to throwing an exception to the top level. (I'm baffled by the people who say exceptions are superior error management when they only have a single top-level exception catcher, which ends up being identical to assert().)
I've found that many times there are bugs from customers that are fixed by just removing the assert() that a naive developer added. Sadly, despite the normal standard of enabling asserts only during development, I've been at multiple companies where they're left on even in production... The idea that crashing is the best option when something goes ever so slightly wrong really speaks, I think, to a lack of learning from experience (these were not novice programmers).
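The retry-before-giving-up idea from the comment above can be sketched in a few lines of Python; the choice of OSError as the retryable class, the attempt count, and the backoff constants are illustrative assumptions:

```python
import time

def with_retries(operation, attempts=3, base_delay=0.01):
    """Try a flaky operation a few times before giving up, instead of
    letting the first failure bubble straight to the top level."""
    last_error = None
    for attempt in range(attempts):
        try:
            return operation()
        except OSError as e:                         # only retry plausibly transient errors
            last_error = e
            time.sleep(base_delay * (2 ** attempt))  # simple exponential backoff
    raise last_error                                 # still failing: now it's genuinely exceptional

# A hypothetical fault that clears up on the third try:
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise OSError("transient glitch")
    return "ok"
```

Here `with_retries(flaky)` succeeds on the third attempt rather than crashing on the first. The caveats raised further down the thread (idempotency, external side effects) apply in full.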
Re: (Score:1)
Re: (Score:2)
The advantage of exceptions is that they have zero or near-zero overhead if the code is running normally.
That is very much dependent on the implementation and is not universal at all. Also, considering how often exceptions are inappropriately used for ordinary control flow, their notoriously poor performance is absolutely an issue even when the code is "running normally".
Re: Please no. (Score:2)
Re: (Score:1)
Well, I've long been under the firm belief that printing an error and exiting, or throwing an exception and exiting, is not error _handling_. Handling an error means doing something to attempt to recover. Even something simple like waiting a little bit to retry is usually preferable to throwing an exception to the top level. (I'm baffled by the people who say exceptions are superior error management when they only have a single top-level exception catcher, which ends up being identical to assert().)
You've never heard of throw and catch?
I've found that many times there are bugs from customers that are fixed by just removing the assert() that the naive developer added in.
Great, now when the program fails you won't know why. CPU cycles are cheap nowadays; there's really no reason not to leave in a few sanity checks. If an assert fails with valid input then you should fix the assert instead of removing it entirely.
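The "fix the assert instead of removing it" point can be illustrated with a small Python sketch; the average() function is a hypothetical example, not from any codebase mentioned in the thread:

```python
def average(values):
    # Deleting a failing "assert values" here would just trade a clear
    # failure for a ZeroDivisionError with far less context. Instead,
    # widen the contract so the once-"invalid" input is handled...
    if not values:
        return 0.0  # defined behavior for the empty case
    # ...and keep an assert for the invariant that genuinely must hold:
    assert all(isinstance(v, (int, float)) for v in values), \
        "average() expects numeric input"
    return sum(values) / len(values)
```

The empty list no longer trips a check, while garbage input still fails loudly with a message that says why.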
Re:Please no. (Score:5, Informative)
You've never heard of throw and catch?
There's a much, much better pattern:
https://en.wikipedia.org/wiki/... [wikipedia.org]
In other words, if your function can run into an error of any sort, then it MUST return a data type that holds either the success value or an error value, and the caller MUST have a defined means of handling it, or else you literally can't do anything with it because the compiler simply won't allow it. Then you don't end up with the dodgy shit show that C++ and Java stick to for error handling. Rust and Kotlin both use this pattern with really good results. It's incredibly easy to build your code so that, short of a hardware failure or the OS kernel refusing to allow an allocation (i.e. running out of heap space), your program will never crash.
The typical prototyping I do in Rust involves throwing in .unwrap() calls (which basically say: if this returns an error, panic; otherwise return the value). Then before releasing, I remove all of those and handle errors appropriately, followed by optimization. I only leave a .unwrap() or .expect() where an error should be absolutely irrecoverable, where for some reason we really should halt execution, or where there's literally no way it can return an error, like a hardcoded regex pattern, for example.
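For readers who haven't used the pattern: Python has no built-in Result type, but a rough sketch of the idea might look like the following. Note this only approximates by convention what Rust enforces at compile time; the Ok/Err names and parse_port() example are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")
E = TypeVar("E")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err(Generic[E]):
    error: E

Result = Union[Ok, Err]  # the caller always receives one of the two

def parse_port(s: str) -> Result:
    """No exception to forget to catch: the error is part of the return type."""
    if not s.isdigit():
        return Err(f"not a number: {s!r}")
    port = int(s)
    if not 0 < port < 65536:
        return Err(f"port out of range: {port}")
    return Ok(port)

# The caller has to look inside the Result to get at the value:
res = parse_port("8080")
port = res.value if isinstance(res, Ok) else None
```

In Rust the compiler refuses to let you touch the success value without handling the error arm; in this Python sketch, discipline has to stand in for the type checker.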
Re: Please no. (Score:2)
Re: Please no. (Score:2)
You can, but it's rather pointless. C++ even has such a scheme as of C++23 (std::expected), but given that subsequent calls can still throw exceptions, you have to account for them anyway, so why bother? Java has an optional type, but you know what's funny as hell about it? It's nullable :) Not to mention, Java just isn't built around using types like that, which makes them more trouble than they're worth (believe me, I tried once, total waste of time.)
The whole problem is the concept of an "exception" (Score:2)
Which, even if true, is a dangerous distinction to make. The world is unpredictable. A good programmer presumes NOTHING.
Re: (Score:2)
A) Kotlin supports exceptions, just like normal.
B) Your pattern requires you to manually propagate every error you cannot handle upward, which is exactly what exceptions do.
Case B) is only a suitable solution when you are stuck in a language which has no exceptions.
Re: Please no. (Score:2)
B) your pattern requires you to propagate every error you can not handle, manually upward - exactly what exceptions do.
You mean by typing all of one character?
https://doc.rust-lang.org/rust... [rust-lang.org]
I think that's a hell of a lot easier TBH.
Case B) is only a suitable solution when you are stuck in a language: which has no exceptions.
In my experience, languages that use exceptions (kotlin only does for compatibility with Java) use that as the only means of handling errors at all, despite the fact that exceptions are only meant for certain kinds of errors. You know, like hardware failures, invalid opcodes, and other truly exceptional events. So instead of that, you use the throw keyword to indicate even routine/mundane/easil
Re: (Score:3)
Even something simple like waiting a little bit to retry is usually preferable to throwing an exception to the top level
It's only preferable if the operation has been designed to handle retries or is idempotent, or if you explicitly account for this during the retry.
Reversing a transaction for instance can be an OK solution, but only if no external requests were made for this piece of code (or you can revert external parts too, or externally no changes were made).
Developers rarely actually check this. I have seen way more issues come from retries than I have seen issues prevented by them. 99% of time the issue will still be there ne
Re: (Score:3)
Well, a retry with infinite timeouts is bad, but compare it to immediately crashing, causing customers to notice and bug reports to show up... Or worse, corrupting data because of the immediate crash. Too many times I've seen code that had straightforward recovery methods available decide to crash instead.
Re:Please no. (Score:4)
Retry can help but these days it rarely solves the issue, especially if it's done server side (on the client side retries are of course often needed). I agree that flat out always crashing on the first problem is often a bad idea, but in reality there is a wide range of situations that have different solutions. Sometimes it's best to ignore an issue, sometimes it's best to retry, sometimes it's best to crash, sometimes you need a db transaction rollback, sometimes you don't. The important thing is to carefully think about handling errors and logs and not blindly following some paradigm.
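One of the strategies listed above, the database transaction rollback, can be sketched with Python's sqlite3 module, whose connection context manager commits on success and rolls back if an exception escapes. The accounts table and transfer() function are hypothetical illustrations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Either both halves of the transfer happen, or neither does."""
    try:
        with conn:  # commits on success, rolls back if an exception escapes
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                      (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False  # rolled back; the caller decides whether to retry or surface it
```

A failed transfer leaves both balances untouched, which is exactly the "only if no external requests were made" caveat from the earlier comment: the rollback is safe here because all the state lives inside the one database.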
Re: (Score:2)
Ah. It works in embedded systems sometimes, depending upon the operation. Like lost bits on a serial port, things like that.
Re: (Score:2)
Not sure what is worse, the false choices or the crappy analysis devoid of context, but I'd say what this comment really says is that bad software is a product of bad programmers who fail to understand problems and devise good solutions.
Re: (Score:2)
Well, to be fair, the goals are very often wrong. I see a lot of failures growing from the start - no design with extremely tight deadlines. That's the basic startup model there, and I've come along in a few startups but only after the point of them getting off the ground. The base code always feels a bit shoddy, and that's because it's inevitably written hurriedly. The incentive for programmers is very often on quantity and not quality, or the very least to get it done on time. It's doubly bad in start
Re: (Score:2)
I'm baffled by the people who say exceptions are superior error management when they only have a single top level exception catcher, ending up being identical to assert().
I remember a *long* time ago getting my favorite, and least helpful, "error" message of all time from Tcsh (on a VAX 11/785 running 4.3BSD):
Assertion botch: This can't happen!
Re: (Score:2)
Re: (Score:3)
The difference between exiting a program on an exception that came from a precondition or postcondition check (e.g. from a failed assert) and ignoring the checks is that in the first case the program finishes in a defined way. In the second case (e.g. ignoring asserts) the program is likely to finish in an undefined way.
Obviously, it is worse when a program finishes in an undefined way. If it was written in a memory-unsafe language then it is a nice source of security errors. Regardless of the programm
Re: (Score:2)
Well, the "program" may be a system. If an operation fails it doesn't mean the system needs to stop. I.e., you've got a network router, and you've got a packet that's not being handled right (a bug) and you don't know what to do with it. If you crash the entire system then it disrupts the network for a while. Or you're a monolithic system, and you got a request from a customer and there's an error in there; if you crash the system it is disruptive, but if you send a mysterious error message to the user it c
Re:Please no. (Score:4, Insightful)
printing an error and exiting, or throwing an an exception and exiting, is not error _handling_.
True.
Handling an error means doing something to attempt to recover
False.
There's nothing more annoying than a program trying to be smarter than me.
Error handling begins with the developer defining what counts as defined behavior for a program, even in the case of an error, and what doesn't. Generally, there are 3 different types of errors:
There are also other ways to categorize errors (e.g. "technical errors" vs "domain errors"), but that's mostly just relevant for the middle-section (i.e. well-defined state errors).
I've found that many times there are bugs from customers that are fixed by just removing the assert() that the naive developer added in.
Depends on the language (e.g. in Python I also use them during development). If your programmer's not an idiot, an assert() means something. Removing it may make the program run past that particular point, but may create inconsistent state down the road. I.e. the program runs alright, but your transaction contains invalid data, or unpredictable race conditions may be triggered under specific circumstances. Be careful with that.
The idea that if something goes ever so slightly wrong that crashing is the best option I think really speaks to their lack of learning from experience (these were not novice programmers)
Again, this depends. Is that specific error supposed to be part of the program's "valid" and well-defined state? If yes, then crashing is not an option. Instead, the program must enter that specific well-defined error state, and wait for the recovery signal (i.e. the user specifying "continue, underlying problem has been taken care of").
If not, then crashing is the only option. The only thing we have a right to bitch about is how the program crashes -- i.e. just with an "Assertion failed" error message, or possibly some more context / gentle shutdown. Mostly the latter is better, but the former may give more insight / more direct information.
But the most common misconception here is that "error handling" means "errors are bad, let's prevent them." They're not. They're signals that something went differently than expected. That can happen for various reasons, and "error handling" means differentiating between the reasons and handling them at the level of responsibility they require, i.e. within the program itself automatically, within the program but with user interaction, or outside the program.
Re: Please no. (Score:2)
Re: Please no. (Score:1)
Re: (Score:2)
"'I've found that many times there are bugs from customers that are fixed by just removing the assert() that the naive developer added in. "
Of course, you need to:
1) ensure that the state causing the assert never happens, OR:
2) check all the subsequent code for whatever data is passed through without the assert to ensure that no invalid state, system crash (seg fault, the like), inconsistent/invalid data, race conditions, edge conditions, and other things occur.
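Option 1, ensuring the state that trips the assert never happens, usually means validating at the boundary so bad input never reaches the guarded code. A minimal Python sketch (parse_quantity and reserve_stock are hypothetical names, not from the thread):

```python
def parse_quantity(raw):
    """Boundary check: reject bad input up front, with a useful message,
    so the state that would trip an internal assert never occurs."""
    try:
        qty = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"quantity must be an integer, got {raw!r}")
    if qty < 0:
        raise ValueError(f"quantity must be non-negative, got {qty}")
    return qty

def reserve_stock(qty):
    # The assert now documents an internal contract ("callers validate
    # first") rather than papering over unvalidated user input.
    assert qty >= 0, "reserve_stock() requires a validated quantity"
    return qty  # ...actual reservation logic elided
```

This is cheaper than option 2 (auditing every downstream path) because the invalid state is stopped at one choke point instead of being chased through the whole call graph.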
Re: (Score:2)
Of course, did all that. A lot of code I find just did assert, even if the caller function checked for errors. It's instinct with some devs.
Re: (Score:1)
For real. Programmers theorizing about "solutions" like this need to first take a deep dive into learning how cancer evades medical treatments.
Re: (Score:3)
ON ERROR RESUME
Re: (Score:2)
Yep, instant chaos. And then add some malicious action, because it is likely connected to the internet in some way and there are people that want to extort you.
This idea is not new. It was never more than a nice fairy-tale and it still is not more than that. Fixing things requires _insight_ and ChatAI (or any AI, really) does not have that.
Re: (Score:2)
Nope. Or do you see a fundamental difference between "self-healing" and "fixing" with regards to cognitive and engineering abilities needed?
Re:Please no. (Score:4)
You beat me to the punch. I can imagine some fancy freshly minted coder put on an experimental project: a huge, mission-critical piece of code weighed down by technical debt... air traffic control, reactor management, train tracks, medical devices.
After the initial "easy, will be done in a week," presenting it with a really cute PowerPoint full of words like "Next Gen," "AI," "technical-debt free," [insert current buzzwords here], with a demo hooked up to ChatGPT in a week.
At the end of the presentation, a guy in the program says "we need that now." When the veteran of the project complains there is more to it, they give it to a rookie and shorten the deadline... with eventual spectacular disaster.
"But I don't understand, it worked great in the DEMO!"
Similar shiny toys, very good projects breaking bad... happens all the time
Re:Please no. (Score:4, Funny)
How many apps at once will be saying, "As a Large Language Model..."
Re: (Score:2)
This sounds like an archetypal case of Betteridge's law of headlines, so the answer is a resounding "no".
Some files are less important than others. (Score:2)
Have a file system that lets you flag some files as "expendable" because a lot of files are just that.
With correct sandboxing, devs will not need this (Score:1)
Delusional (Score:2)
Re:Delusional (Score:4, Funny)
Ah, that's Windows. Having to reboot once a month is a major design flaw. *listens to whispers* Once a day?!? *listens* I can't even... someone help me face palm.
Re: (Score:1)
This is an oldie but a goody. Yes, frequent re-boots are a "code smell", but you know what? Windows is mostly a desktop OS. Using it as a server and having to reboot frequently was stupid, yes; but back in the day all they had to do was get it to the point where it almost always stayed up for one day, and it usually did. Heck, even half a day is fine because the use-case was one where people were going to lunch mid-day, and shutting the whole thing down at the end of the day. That's really just 8, maybe
Re: (Score:2)
Windows hasn't really had that "needs a reboot" daily problem ever since Windows 2000 in my experience.
Though Microsoft seems to have a sense of humor on this topic. When Microsoft paid me a bounty on bugcrowd last month, they did so from an account named CoderOfManyBugs.
Re: (Score:2)
Re: Delusional (Score:2)
Done it at least a few times
Re: Delusional (Score:2)
Re: (Score:2)
The only time I ever reboot my windows boxen is for the occasional patch. And even then, it's not necessarily every patch Tuesday.
Re: (Score:2)
As someone who does 3D design, rendering, video editing, and runs science applications, I find the idea of not rebooting a computer nonsense. In MS Windows, I need to make a proper shutdown link to make sure the system was rebooted.
As someone who washes his hair every Thursday, eats about 2,800 calories per day, and masturbates way more often than is healthy, I find the idea of giving Microsoft a 16-hour daily window that I plan to use my computer, so that they can do what they deem fit for the other 8 ho
do any trendy shit you like (Score:2)
Guess that includes... (Score:2)
AI fixes (Score:3)
"Developers love automating solutions to their problems, and with the rise of generative AI, this concept is likely to be applied to both the creation, maintenance, and the improvement of code at an entirely new level..."
In my current company, they say that ~90% of SEVs are detected and handled by AI. At least that's what they claim. SEV1's, probably not, but everything less dire than that seems to get handled automagically by massive in-depth monitoring and reporting upstream to the AI.
I don't know the specifics, but it's a health care company with shitloads of money so I tend to believe it. They brag about it all the time in meetings.
I don't see why that couldn't be applied at the code level, but I'm not a developer.
Re: (Score:3)
"Developers love automating solutions to their problems
"ON ERROR RESUME" handles every situation a modern programmer could ever face.
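For what it's worth, Python has a rough analogue of BASIC's ON ERROR RESUME NEXT in contextlib.suppress; the sketch below (with made-up input data) shows exactly why blanket suppression is the punchline, not the solution:

```python
from contextlib import suppress

records = ["1", "2", "oops", "4"]
parsed = []
for raw in records:
    with suppress(ValueError):  # silently skip anything that fails to parse
        parsed.append(int(raw))

# parsed is now [1, 2, 4]: a record vanished without a trace. The program
# keeps running, but nothing tells you *which* record was lost or *why*.
```

The program "handles" every error by pretending it never happened, which is precisely the failure mode the thread is mocking.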
What could go wrong? (Score:3)
After the 5th reported bug that turned out to be PEBKAC, AI decides to work on the problem. Solution found..
"I've just picked up a fault in the AE-35 unit. It's going to go 100 percent failure within 72 hours."
Detecting bugs? (Score:4, Funny)
Nope. Still not. Go away. (Score:5, Insightful)
This bullshit has been tried a few times before.
The thing is that you need to have a perfect spec in order for "self healing" code to even be possible. But guess what? Specs have about the same error rate as code has.
Oh, and the incompetent morons behind this always claim "it is the future" as well. Ignore them, they want your money and they have nothing of value to offer in return.
Re: (Score:2)
This bullshit has been tried a few times before.
The thing is that you need to have a perfect spec in order for "self healing" code to even be possible.
And if you nail down that elusive "perfect spec", you probably have less need for self-healing code. But of course we'll never get beyond having a barnacle stuck onto a kludge for every little business-logic misunderstanding caused by shitty or non-existent specs.
Also, I'm holding a bag of popcorn in reserve for the time when the AI behind self-healing code starts "commenting" its own code revisions. At that point samples of hallucinatory computer code will be fodder for the late-night comedy shows.
Eventually ? It's already here (Score:4, Insightful)
As a software architect, I get approached by dozens of companies offering me software to help do codebase analysis, vulnerability scans, etc.
And working with many billions of lines of code, there is plenty to review. The vendor also supports user exits and modifications, so worldwide there are probably hundreds of billions of lines of code. The issue isn't identifying the bad code; the issue is what to do with it. Even with this much data to sift through, AI will never be able to completely comprehend the purpose of all code, simply because there are so many bad programmers out there, and documentation is usually no help.
But an 80% solution would be most welcome too.
The same thing goes for warnings. Yes, we can run automated scans to react to warnings. But we already do this. Automation rocks. When I started, I spent at least half of my nights fixing filled hard drives, extending databases, restarting backups, etc. Today I sleep 360 nights of the year, or more. I applaud monitoring, alerting, automated responses, scripting, and of course vendors improving their toolsets, and 3rd parties adding what the vendor sees as "irrelevant" but which makes my life easier.
However, from a warning to an error, there is a long way. Warnings are there to inform us of an impending (and identified, thus known) problem. Errors happen when unforeseen problems occur. Keyword being "unforeseen". If the programmer knew what the problem was, he could fix it with checks, choices, or calls to scripts fixing the issue. But when a programmer chooses to throw an error, it is because he CAN'T fix the issue. Again, we might be able to understand or add in fixes for some of the problems, simply because AI would allow us to effectively trawl through millions of identical errors and determine a common cause, or a common solution that is not easily identified by the human mind... But again, at best an 80% solution, I would guess.
Am I happy with 80%? Absolutely. Can we go higher? Hopefully. We need to. Software evolves faster than our solutions to the bad code; if it didn't, I would be out of a job by now :)
Now once we get the AI to actually WRITE the code (effectively, not the simple procedures it can do today), we can add in purpose, direction, rules, principles. And unlike humans, the AI will be more inclined to follow those guidelines, resulting in fewer errors. And if the AI wrote the code effectively, it will be more likely to understand the purpose, and thus fix unforeseen problems. More importantly, people with little or no programming skill can effectively join the ranks of the developers, which will make development cheaper and faster. Startups and new projects won't be dependent on good programmers, just good AI.
Re: (Score:1)
LLM tools are a completely different class than the clumsy old tools of code analysis etc.
Try out the new tools before ranting on about your experience from the old tools.
Re: (Score:2)
They certainly are in a different class. LLMs are completely useless.
Re: Eventually ? It's already here (Score:1)
Re: (Score:1)
AI will never be able to completely comprehend the purpose of all code, simply because
... AI can't comprehend the purpose of any code. That's not how these things work.
AI would allow us to effectively trawl through millions of identical errors, and determine a common cause, or a common solution that is not easily identified by the human mind [...] we can add in purpose, direction, rules, principles. And unlike humans, the AI will be more inclined to follow those guidelines, resulting in fewer errors. And if the AI wrote the code effectively, it will be more likely to understand the purpose, and thus fix unforeseen problems.
This is pure fantasy. It's what you want and what a lot of people expect from science fiction and bad science reporting. However, it is absolutely not something that you'll get. Not now, not in some imagined future. The simple fact is that we don't have a clue how to build something with those properties.
AI could very well be helpful to you (it comes in many flavors) but if you think you're going to get something like ChatG
Re: Eventually ? It's already here (Score:1)
Betteridge's Law of Headlines (Score:2)
Re: (Score:1)
Is Betteridge's Law correct?
Re: (Score:2)
It mostly is.
(and in this case it definitely is)
COBOL's ALTER statement returns . . . (Score:2)
. . . END OF LINE.
Terrible way to use it (Score:2)
Re: (Score:2)
I disagree. You know exactly why it works. It just saves you the trouble of doing mind numbing google searches and wading through 100 stack overflow questions that are not quite the answer.
It's a waste of time and I'm sure glad that these problems can be fixed so easily now.
Re: (Score:2)
I disagree. You know exactly why it works. It just saves you the trouble of doing mind numbing google searches and wading through 100 stack overflow questions that are not quite the answer.
It's a waste of time and I'm sure glad that these problems can be fixed so easily now.
This may be fine for the experienced programmer, but not for the junior coder. I want them to fix the problem themselves because they might not immediately know why the code is failing. Having someone band-aid it for them means they miss out on a valuable learning experience. And, yes, having to wade through irrelevant Stack Overflow posts can be part of that learning process.
Re: (Score:2)
ML today can do a lot for coding, find faster algorithms or teach people how to code or even find errors and why they happened.
No, it can't.
Re: Terrible way to use it (Score:2)
Re: (Score:2)
1) make a half-assed effort ...
2) develop a pattern of half-assed effort
3) institutionalize patterns of half-assed effort into an inference engine
4) Profit!
Garbage in, garbage out (Score:5, Insightful)
And some people still dream that if you throw in enough garbage, gold will come out the other end.
Self-healing? Has anyone thought about the problem equivalent to software cancer, i.e., the "healing" grows and grows and takes down your system? Even the simplest self-healing, like "add a little disk space", could become "use up all the system's capacity", turning a one-component failure into a system-wide outage.
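The disk-space example above can be made concrete: a remediation is only safe if it carries a hard budget and an escalation path. A minimal sketch, with entirely hypothetical names and thresholds (`free_gb`, `purge`, `alert` are callbacks the caller would inject, not any real tool's API):

```python
# Hypothetical sketch of a *bounded* self-healing action. The point:
# a hard reclaim budget and a retry cap keep the "cure" from growing
# into the system-wide outage described above.
MAX_HEAL_ATTEMPTS = 3   # give up instead of looping forever
MAX_RECLAIM_GB = 10     # never delete more than this, total
HEALTHY_FREE_GB = 2     # target free space

def heal_low_disk(free_gb, purge, alert):
    """free_gb() -> current free GB; purge(limit_gb) -> GB actually freed;
    alert(msg) -> escalate to a human."""
    reclaimed = 0
    for _ in range(MAX_HEAL_ATTEMPTS):
        if free_gb() > HEALTHY_FREE_GB:   # healthy again: stop healing
            return True
        reclaimed += purge(limit_gb=MAX_RECLAIM_GB - reclaimed)
        if reclaimed >= MAX_RECLAIM_GB:   # budget exhausted: stop healing
            break
    alert(f"self-heal gave up after reclaiming {reclaimed} GB")
    return False
```

The hard part is choosing those budgets; all the sketch guarantees is that a one-component failure stays a one-component failure.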
Is AI doing 80% good enough? Only if it definitely won't screw up the remaining 20% by making it worse. If a human has to review and thoroughly understand 100% of what the AI did to prevent screw-ups in that 20% of cases, then we might as well start with a human to begin with.
I have seen vendors selling "autonomous", "smart" (or whatever the latest buzzword is) solutions to us for over 10 years, and none of those solutions can answer the simple question: "how could we know it had screwed up before the system goes down?" Or, to put it another way, how do we avoid trading the risk of many small issues for one big issue?
If AI were smart enough to write working code, we would have seen a revolution in Mathematics, with AI proving theorems logically rather than simply by numerically checking every possible case. If AI isn't smart enough to write a proof when all the axioms and rules are known and fixed, how likely is it to be smart enough to write a program that works with unclear requirements and a changing external environment?
Re: (Score:2)
99% of the time I clone a repo, it doesn't build or work.
99% of the time, the problem is one single line out of thousands of lines of code. And it's always super-stupid shit like: CMake decided to change something and no longer behaves the way it used to; or somebody changed the location of a file in a dependent library and it doesn't work anymore. They're all one-line fixes, but would take hours upon hours to figure out.
It would be great to ask AI to fix the error or warn the repo owner that this or that has changed and make a PR for the fix.
Re: (Score:2)
It would be great to ask AI to fix the error or warn the repo owner that this or that has changed and make a PR for the fix.
Why stop there? Why not just ask the AI to write something better than what the repo owner made? Then ask it to play the market for you so that you don't need to work anymore. Then have it make you president of the world!
It's fun to play pretend. The danger is mistaking fantasy for reality. The simple fact is that AI can't do the things you want it to do, and that isn't going to change anytime soon. We don't even know where to begin.
Re: Garbage in, garbage out (Score:2)
Re: (Score:2)
I had a friend who did contract work on an Agile-managed project that included a typical request-response protocol. Because of Agile, the protocol did not initially accommodate error handling, and that omission became institutionalized. Every request assumed a response; the requester could not time out waiting for one. The system hung if you turned off one box. This was a Fortune 100 company that everyone knows, and project management closed bug reports on this problem as "as designed" to avoid breaking their velocity.
N
Anti virus false positives (Score:1)
Re: (Score:2)
The AI train is moving way too fast now.
It's really not. The hype just makes it look that way. The article is silly fantasy, not a real thing.
Re: Anti virus false positives (Score:1)
Re: (Score:1)
Re: (Score:1)
No. (Score:2)
No.
The hardest part of software development (Score:4, Insightful)
The most difficult part of software development is not writing code, or fixing bugs. It's getting the requirements right. This "self-healing" code would have to know what the software *should* do when an error happens. Even humans struggle with this.
Re: (Score:2)
Good testing starts with giving bad input. If the code only works with ideal inputs, it's not good enough. And it's not just humans accidentally or maliciously introducing unexpected input field content, you have to deal with data corruption of all kinds from sensors through processing and storage. You can never trust that an input will be in the allowed range, and you can never trust that even if it is within the allowed range that it is correct.
You should always be doing reasonability checks and looking for patterns that indicate a failure. Always.
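A reasonability check of the kind described above amounts to rejecting values that are either out of hard range or implausible given recent history. A toy sketch, where `accept_reading` and every threshold are invented for illustration (a real sensor pipeline would tune these per signal):

```python
# Illustrative sketch: validate a sensor reading both against hard
# limits and against recent history. Thresholds here are made up.
def accept_reading(value, history, lo=-40.0, hi=125.0, max_jump=10.0):
    if not (lo <= value <= hi):
        return False          # hard range check: physically impossible
    if history and abs(value - history[-1]) > max_jump:
        return False          # implausible jump: likely corruption
    return True

assert accept_reading(25.0, [24.5]) is True
assert accept_reading(90.0, [24.5]) is False   # in range, but a 65.5 jump
assert accept_reading(200.0, []) is False      # out of range
```

Note the second rejection: the value is inside the allowed range and still untrustworthy, which is exactly the parent's point.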
Re: (Score:2)
"You can never trust that an input will be in the allowed range, and you can never trust that even if it is within the allowed range that it is correct."
Of course you can, otherwise software could never even start.
"You should always be doing reasonability checks and looking for patterns that indicate a failure. Always."
Why bother, since according to you "you can never trust that an input will be in the allowed range" even AFTER doing "reasonability checks". That's what "never" means.
Your attitude smacks of
Re: (Score:2)
Your post tells me that if you code, you're a shitty coder who can't even conceive of writing decent code in the real world.
But given that you can't even imagine bad inputs to your program, I'd bet you've never written anything, because that's a problem every coder runs into almost as soon as they start.
Re: (Score:2)
"This "self-healing" code would have to know what the software *should* do when an error happens."
or when an error doesn't happen.
I agree, good software is the result of good design. You don't get good design from bad design by auto-applying patterns learned from the behaviors of people who cannot produce good design. But, of course, this idea comes from those very people.
Oops something went wrong. (Score:2)
Done with AI?? (Score:1)
"Self-healing" sounds like "no devs needed" to me.
Heartbleed (Score:2)
This is proof that most software writers... (Score:2)
are not engineers.
When engineering, you need to be able to give an explanation for everything that you do or add. You need to be able to review things.
Now, more than ever, the saying
I do mostly
Better solution... Purpose based coding. (Score:2)
What if we used logic and fuzzy ideas to define a feature? Instead of precisely specifying "how" to do something, like assembler or machine code does, we'd tell it "why" and "what" we needed.
A good set of requirements could be reused by lots of people. They might even use different tools (other components, big and small) than each other to complete the current goal, yet they'd agree the results would likely look similar.
Bugs would be additions and adjustments to those requirements. Life goes
It's almost deja vu (Score:1)