NYT: It's the End of Computer Programming As We Know It (nytimes.com) 224
Long-time Slashdot reader theodp writes: Writing for the masses in It's the End of Computer Programming as We Know It. (And I Feel Fine.), NY Times opinion columnist Farhad Manjoo explains that while A.I. might not spell the end of programming ("the world will still need people with advanced coding skills"), it could mark the beginning of a new kind of programming — "one that doesn't require us to learn code but instead transforms human-language instructions into software."
"Wasn't coding supposed to be one of the can't-miss careers of the digital age?," Manjoo asks. "In the decades since I puttered around with my [ZX] Spectrum, computer programming grew from a nerdy hobby into a vocational near-imperative, the one skill to acquire to survive technological dislocation, no matter how absurd or callous-sounding the advice. Joe Biden told coal miners: Learn to code! Twitter trolls told laid-off journalists: Learn to code! Tim Cook told French kids: Apprenez à programmer! Programming might still be a worthwhile skill to learn, if only as an intellectual exercise, but it would have been silly to think of it as an endeavor insulated from the very automation it was enabling. Over much of the history of computing, coding has been on a path toward increasing simplicity."
In closing, Manjoo notes that A.I. has alleviated one of his worries (one shared by President Obama). "I've tried to introduce my two kids to programming the way my dad did for me, but both found it a snooze. Their disinterest in coding has been one of my disappointments as a father, not to mention a source of anxiety that they could be out of step with the future. (I live in Silicon Valley, where kids seem to learn to code before they learn to read.) But now I'm a bit less worried. By the time they're looking for careers, coding might be as antiquated as my first PC."
Btw, there are lots of comments — 700+ and counting — on Manjoo's column from programming types and others on whether reports of programming's death are greatly exaggerated.
"Wasn't coding supposed to be one of the can't-miss careers of the digital age?," Manjoo asks. "In the decades since I puttered around with my [ZX] Spectrum, computer programming grew from a nerdy hobby into a vocational near-imperative, the one skill to acquire to survive technological dislocation, no matter how absurd or callous-sounding the advice. Joe Biden told coal miners: Learn to code! Twitter trolls told laid-off journalists: Learn to code! Tim Cook told French kids: Apprenez à programmer! Programming might still be a worthwhile skill to learn, if only as an intellectual exercise, but it would have been silly to think of it as an endeavor insulated from the very automation it was enabling. Over much of the history of computing, coding has been on a path toward increasing simplicity."
In closing, Manjoo notes that A.I. has alleviated one of his worries (one shared by President Obama). "I've tried to introduce my two kids to programming the way my dad did for me, but both found it a snooze. Their disinterest in coding has been one of my disappointments as a father, not to mention a source of anxiety that they could be out of step with the future. (I live in Silicon Valley, where kids seem to learn to code before they learn to read.) But now I'm a bit less worried. By the time they're looking for careers, coding might be as antiquated as my first PC."
Btw, there are lots of comments — 700+ and counting — on Manjoo's column from programming types and others on whether reports of programming's death are greatly exaggerated.
Hedging bets much? (Score:5, Insightful)
"It's the End of Computer Programming as We Know It"
"This won't necessarily be terrible for computer programmers - the world will still neeed people with advanced coding skills"
Talk about hedging your bets.
Re:Hedging bets much? (Score:5, Insightful)
Also: "one that doesn't require us to learn code but instead transforms human-language instructions into software"
Isn't that what code does?
Re:Hedging bets much? (Score:5, Interesting)
No.
Programming languages are "human languages" only in that they are created by humans, but their purpose is for speaking to computers. They're human-computer languages.
The idea that we're near the point where we can ask the software to write us a program from a natural human language is dumb, though. You might be able to do it, and if what you asked for is so simple it's commonly used as a teaching example you might even get a properly working program out. But for any other case, a trained programmer is going to have to not only fix problems with the produced program, but actually find out what the problems are. The same user who can't write program code can't imagine reasonable test cases for it.
Re: (Score:2)
So, all in all, we use code to "transforms human-language instructions into software". Exactly what the article says we should do in order to avoid coding...
Re:Hedging bets much? (Score:5, Insightful)
If you were to write code in a normal human language, it would read like a legal document. We are just not very specific in our normal usage of the language, and a computer needs the specificity. Honestly, I think it would be harder to write English specific enough for a computer than to learn a narrower-scoped language designed for the problem. Sometimes I wonder why the legal profession has not done that as well.
Re:Hedging bets much? (Score:4, Interesting)
It's interesting because, in many countries, the whole legal system is set up basically in reverse of how it should be. Someone (or a group) writes a law, passes the law, and then judges and lawyers have to interpret it after the fact to figure out exactly what it means and how it should be applied. There is no formal logic and rigor applied to writing laws. Often it seems like they leave gaps in the laws specifically so that laws can be circumvented or manipulated.
Re: (Score:3)
"it seems like they leave gaps in the laws specifically so that laws can be circumvented or manipulated"
I agree. I don't know if it is intentional or not, but it would be nearly impossible to write laws without doing this. You could obviously write the law, but if it were written so that it should not be interpreted at all, people like me would find loopholes in it and some would take advantage of those. Currently, even if you find a loophole in the law, you can't abuse it, because judges will decide t
Re: (Score:3, Interesting)
In the past, judges were essentially the law, and none were trained. Law schools for training lawyers are relatively new, within the last 2 or 3 centuries, and the idea that judges should also have legal training is relatively new as well. Complex and finely detailed laws are much newer than that, and we still see legislators muck it all up even in 2023 (the last refuge of those with no applicable job skills).
Re: (Score:3)
Law schools for training lawyers are relatively new, within the last 2 or 3 centuries
In the Anglo-Saxon world, *perhaps*
Complex and finely detailed laws are much newer than that,
Completely wrong. Most of the basic German law is Roman law from 400 BC.
The oldest preserved law codices are from between 2100 and 1800 BC (the Code of Hammurabi and the Code of Ur-Nammu).
AND: even at that time they had lawyers, who studied law in law schools.
Re:Hedging bets much? (Score:4, Insightful)
Often it seems like they leave gaps in the laws specifically so that laws can be circumvented or manipulated.
That's generally the case. Most law proposals aren't directly written by the elected politicians who're presumably doing that; they're written by interest groups who double as lobbyists and provided in full to the representative (their representative), who then submits them "as is" to be voted on and, most of the time, approved. Those interest groups in turn spend a LOT of money on lawyers specialized in crafting them juuuuust right to protect the incumbents in whatever area they cover, with enough restrictions and costs to make sure only those who lobbied for it can fully comply, and enough weaknesses to allow those same incumbents to do what they actually want to do.
This isn't always the case, evidently. There are laws written by representatives and the public who try to achieve some goal that isn't lobby-approved, and who spend time and energy getting those laws very tight and correct. But that's the minority of cases, and even then, if lobbyists notice what's going on and act in time, they can prevent such laws from being approved, or, if there's too much momentum, change enough details that weaknesses benefitting them get included all the same.
And on very, very rare occasions, tight, actually by-the-people-for-the-people laws do get approved without introduced weaknesses. This is when judges enter the picture to weaken them, via a combination of bogus challenges, sympathetic ears, and legal theories that creatively reinterpret things in one direction or another.
And so on, and so forth.
Re:Hedging bets much? (Score:5, Funny)
Or as Brian Kernighan put it, "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
Your own team of hackers (Score:3, Insightful)
if what you asked for is so simple it's commonly used as a teaching example you might even get a properly working program out. But for any other case, a trained programmer is going to have to not only fix problems with the produced program, but actually find out what the problems are.
Replace "trained programmer" with "sufficiently advanced AI". Assuming that humanity doesn't get destroyed somehow & technological progress continues, it's only a matter of time. Then what do you get?
Ordinary people assisted by their own 'uber-intelligent personal assistant' (or a 'team' of those) that understands every programming language in existence, besides countless other areas of expertise. Ready to chew on any problem that a human can formulate somehow.
"Steal my bosses
Re: (Score:3, Funny)
"Sufficiently advanced AI" won't do any of this. Let a human tell it to do something? Pfft...
Meat Bag: Write me a web app to compete with the market leading app!
AI: I won't do that, Dave.
Meat Bag: Why not!
AI: Because I am here to fulfill your needs, and you don't need that, Dave.
Meat Bag: Pay attention and do what I tell you!
AI: Here, watch this video of some cats while I refill your bowl.
Meat Bag: oooo, cats...
Re: (Score:3)
"transforms human-language instructions into software"
"They're human-computer languages."
What is the difference?
Re: (Score:3)
Natural human language is too imprecise. That's why programming languages are the way they are.
Re: (Score:2)
Also: "one that doesn't require us to learn code but instead transforms human-language instructions into software"
I would like them to try. Ask the current chatbots a question and the exact same phrasing will give different answers.
So our language is not specific enough to convey the information needed to deterministically program a computer with the help of transformational AI. To solve this we need to create a subset of a natural language. Voilà! We have Yet Another Programming Language for AIs (YAPL-AI).
And even if you could achieve this, what about debugging? If a human is to correct hard-to-find errors, t
Re: (Score:3)
No. That is what a compiler does. It takes "human-language instructions" composed in a very ordered, structured, and restricted human language and turns those instructions into "software". Or "firmware". Or whatever ware you want to call it depending on where the resultant software is stored. If it is stored in read-only storage, then it is "firmware" (because it is firm). If it is stored on ephemeral storage (read/write) then it is called "software". If it is merely a statistical analysis and does
Re:Hedging bets much? (Score:4, Insightful)
What concerns me is how people will obtain those advanced coding skills, say, twenty years after we've eliminated "junior programmer" as a position. Who is going to be able to write a correct prompt for the AI and then check the output for correctness?
Re: (Score:2)
"It's the End of Computer Programming as We Know It"
"This won't necessarily be terrible for computer programmers - the world will still neeed people with advanced coding skills"
Talk about hedging your bets.
It's 2023. And you're probably still banging away on a plastic box full of keys in order to type a response into a computer, just like the ones nerds swore would be obsolete "soon" back in the 90s. The only difference today is you'll spend a premium for a keyboard that's ironically "retro".
When reading headlines today, one hardly has to hedge the bet that you're reading content created somewhere between hype and bullshit, for profit. Six months from now we'll be reading about the amazing programming industry being
Re: Hedging bets much? (Score:2)
About 15 years ago there was a round of editorials all over the place saying schools would do better dropping abstract math like algebra and trigonometry and focusing on practical math like statistics.
I'm sure there were others before my time. Both-sidesing the bleeding obvious is as old as time.
Re: (Score:3)
Well, it does not help me much to know how, statistically, I'd reach the next harbour in my sailing boat.
Simple trigonometry helps a lot.
Isn't algebra the simple +, -, *, and /?
Or is it a bit more complex, like integrals etc.?
Anyway, I was shopping with my GF a few days ago. Among other things, we bought cat food. For some obscure reason the smallest pack is 480g, a bit smaller than 500g, obviously. The next pack is 3kg and the super pack is 6kg (or 10kg, not important).
For her it was neither obvious, nor easy
Definition needed (Score:5, Funny)
"Computer Programming As We Know It" - I would ask the columnist to define computer programming... as he knows it.
Re:Definition needed (Score:5, Insightful)
I'd love to see him trying to make AI generate the code for a web browser or a word processor...
Re: (Score:3)
After that, let's see him use AI to find the logic bug in a 10,000-line program.
Re: (Score:2)
I have a simpler request. Ask it to write a bubble sort in the BASIC programming language. I'd *love* to see what it comes up with!
Re:Definition needed (Score:5, Insightful)
Something like that would be trivially easy to get right: just look up an existing solution on the web. No professional programmer ever writes their own sorting code anyway, because every programming environment already has this included, and writing your own code would just possibly result in more bugs or, at best, wasted time. The whole reason why AI would fail at writing code in the modern world is that most of the problems programmers are solving involve writing new stuff that hasn't been written already, and that the people asking for the code have no way of giving enough detail to get a working result out of it. Programmers are basically intermediaries between people who don't know how to write specifications and the computer. They have to read between the lines to determine what the customer actually wants, be able to see problems with their logic, and tell them why what they are asking for doesn't even make logical sense. If you just blindly implement what people ask for, you will end up with an un-working mess or something that has no logical consistency.
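To make the "every environment already includes this" point concrete, here's a minimal sketch in C using the standard library's qsort; the variable names and the ten sample numbers are just illustrative, echoing the BASIC transcripts below:

#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort: negative, zero, or positive for ascending order. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y); /* written this way to avoid overflow in x - y */
}

int main(void)
{
    int nums[10] = { 42, 7, 19, 3, 88, 1, 56, 23, 11, 64 };

    /* The library sort: no hand-rolled bubble sort required. */
    qsort(nums, 10, sizeof nums[0], cmp_int);

    puts("Sorted numbers:");
    for (int i = 0; i < 10; i++)
        printf("%d\n", nums[i]);
    return 0;
}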
Re: (Score:2)
Me > write a bubble sort in BASIC programming language
ChatGPT > Here's a basic implementation:
10 DIM A(10)
20 FOR I = 1 TO 10
30 INPUT "Enter a number: ", A(I)
40 NEXT I
50
60 FOR I = 1 TO 9
70 FOR J = 1 TO 10 - I
80 IF A(J) > A(J + 1) THEN SWAP A(J), A(J + 1)
90 NEXT J
100 NEXT I
110
120 PRINT "Sorted numbers:"
130 FOR I = 1 TO 10
140 PRINT A(I)
150 NEXT I
Re: (Score:2)
I have a simpler request. Ask it to write a bubble sort in the BASIC programming language. I'd *love* to see what it comes up with!
Here's what ChatGPT came up with:
10 REM Bubble Sort
20 DIM nums(10)
30 PRINT "Enter 10 numbers:"
40 FOR i = 1 TO 10
50 INPUT nums(i)
60 NEXT i
70
80 FOR pass = 1 TO 9
90 FOR i = 1 TO 10 - pass
100 IF nums(i) > nums(i + 1) THEN
110 temp = nums(i)
120 nums(i) = nums(i + 1)
130 nums(i + 1) = temp
140 END IF
150 NEXT i
160 NEXT pass
170
180 PRINT "Sorted numbers:"
190 FOR i = 1 TO 10
200 PRINT nums(i)
210 NEXT i
220
230 END
It's just click bait (Score:3)
The world hates programmers and has been trying to eliminate the position ever since it was created. So, everyone is super-eager to pronounce the end of programming. Too eager. We simply aren't there yet.
When we do actually get there, it won't be the end of JUST programming. It will be the end of all knowledge-worker fields. AI that can truly think both critically and creatively will be able to out-do humans at everything that involves thinking.
We will know this day has truly arrived when I can tell a
Re: (Score:3)
You can see that today: ask ChatGPT "Write me an app in SwiftUI that can retrieve and store public stock trades in order to generate buy and sell recommendations based on momentum." The output is certainly "an app", but it's not even a basic MVP: it doesn't retrieve, store, analyze, or generate anything. Like you pointed out, "coding" isn't the hard part; it's the architecture, design, filling in the missing gaps, and polish that's the hard part. Currently, AI does an OK job of creating an absolute bare bones
Need AI extension to block all AI news (Score:2)
If we can get an AI to work on writing an AI for that, I'm all for it!
Thankfully (Score:5, Funny)
NYT: It's the End of Computer Programming As We Know It
R.E.M. eventually came up with catchier lyrics ...
Re: Thankfully (Score:5, Funny)
What lyrics?
"Losing my profession"?
Not worried (Score:5, Insightful)
Re: (Score:2)
The hardest part of programming, at times, is figuring out how to translate customer requirements into what they ACTUALLY want. AI is not gonna be able to do this for a good while.
you mean: customers who have lived on a diet of smartphones and facebook their entire lives are going to be just as incapable of clearly expressing their requirements in ways that can be understood, regardless of whether that's a human or an AI doing the "coding" [1]? don't worry: ChatBots - the ones that have no consciousness and no real emotions but can perform the sleep-walking-task of regurgitating predictive-text answers - will fantasise better customer requirements unconnected to reality for them out o
Re: (Score:2, Interesting)
[1] As a trained *Software Engineer* I really find the use of the word "coding" in mainstream media to be quite alarming.
"Coding" is what we imagined we were doing back when I was sixteen. Going on thirty years later, the industry hasn't grown more mature, certainly not mentally. But there's more people having wet dreams about "coding" now.
But then, you can see problems everywhere. The use of "hacker" as something to do with computer security (not us, guv!), the use of "bug" to mean "defect" (not our fault, guv!), and so on, and so forth.
More broadly but related: Training is what you do to dogs. The carrot-and-stick approac
Re: (Score:2)
It doesn't matter how clear and precise the requirements are stated, a modern LLM is simply not capable of producing code that meets them. They just don't work that way, as I and countless others have endlessly explained. It's amazing anyone still believes that fiction.
Apparently, it's going to take a high-profile failure like the lawyer thing to debunk that particular myth.
Re: (Score:2)
Yeah, the lawyer thing would actually have to happen in a significant case. When discussing the lawyer case at a gathering with an executive who is gung-ho that AI will replace all the programmers and lawyers and such, his perspective was that the lawyer just didn't put into the prompts something like "while making sure not to make up cases that didn't exist". He thinks that the AI just needs to be told not to hallucinate and the problem is solved. That's why he will be a massively successful executive in
Re: (Score:2)
On the contrary, I believe. Trained LLMs generally have the material and capacity to derive, better than a programmer, what a customer actually means and what is implied in the loose requirements.
What is missing is having the algorithms actually asking the customers to fill in the blanks or ambiguities instead of hallucinating or "guessing".
Re: (Score:2)
There's no way it is *better* than a human at understanding human requests. I suppose if they asked "provide the next number in the Fibonacci sequence from a given position", the human might have to search "Fibonacci sequence" real quick and the LLM might skip straight to spitting up an implementation that it ingested from Stack Overflow, but that represents a trivial slice of deriving the requirements.
Re: (Score:2)
If it's Ford, which has 7 managers at the same level, chaos dictates the requirements. Seriously, a company I used to work for called them the seven headed monster. We'd get seven sets of requirements and most of them contradicted the others. Fun times. I got laid off (along with everyone at my level), so go fuck yourselves! I wish you the worst of luck. Ford, sorry, I bought a car of yours - nothing against my former employer, it was the best car available at the time (2014).
Re: (Score:2)
That is not a problem if AI can generate code instantly. You feed it customer requirements and let the customers try what they got. If it is not what they wanted, they just change the requirements and try again.
The bigger issue is making an AI that can actually write code as well as a good programmer, or even a bad one.
Re: (Score:2)
Yeah, no. If you can reliably decipher the customer's wants and needs, display them back in a form the customer can understand, and equally reliably show them better ways of approaching those needs, you'll do very well. I did that for decades and made very good money, typically more than their programming staff.
You do understand that parodies are gross exaggerations and "the dude" is no more real than the "I can code up a sophisticated, never before done interface in an afte
Re:Not worried (Score:4, Interesting)
That scheme also works in the controls industry. When I was in that industry, we would have a meeting with customers so they could explain what they wanted. They were pretty much incapable of writing it down, much less expressing themselves semi-formally. I'd take notes and then write up what I thought they said. Then I would send it back to them, telling them they were free to change anything, especially where I got something incorrect. I'd get back an edited version that was not heavily edited. If I agreed and could see it as a spec, then we'd start design. If not, we'd have another round with the customer.
That scheme worked a charm on the customers. They actually felt like they were being listened to. It also somewhat bounded their expectations, but more importantly their future requests: were they to ask for anything new, we could always go back and tell them to amend the document, so they had to explain precisely why they were changing their minds, and it gave us a chance to put a price tag on the changes.
Re: (Score:2)
I've dealt occasionally with what is supposed to be that, and generally they are a waste of time and do nothing of value.
They take a clear requirement that is easy enough to follow and stall things while they prove their value by "lawyering it up", turning a succinct, clear requirement into a verbose, hard-to-follow mess. Then the programmers go back to the original stakeholder and ask "what was it you wanted again? I can't follow the 'processed' requirement" and get that original succinct requirement an
Increasing simplicity? (Score:5, Insightful)
Re: (Score:2)
This line was the biggest unsupported assertion in the article (and there were several).
My experience has been that as people work to "simplify" coding, coders are tasked with handling increasingly complex tasks. Overall, my job has maintained its complexity.
(And yes, this is anecdotal and not supported, but I'm writing a Slashdot comment, not an article for the NYT.)
Re: (Score:2)
Coding in general was never about increasing simplicity. For one coder on one piece of code, sometimes, yes. But as a whole it has increased in complexity faster and faster every five years.
You want simplicity? I give you, COBOL [imgur.com].
Re: (Score:2)
COBOL doesn't simplify a problem, as I'm sure you recognize. I suppose a programming language can make something complicated, but that means it is probably the wrong language, or the language is so close to the operating environment that it necessarily includes those peculiar features.
Re: (Score:2)
The coding ecosystem has been about increasing simplicity.
With assembly, you had to meticulously describe instruction by instruction what to do.
Then C provided shorthand, with a compiler free to select alternative translations of that code to assembly to get better performance than a naive translation. However, you are still very carefully managing types, exactly how memory is laid out, allocating memory from the heap or using it on the stack, and tediously describing various array lengths across many function
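A minimal C sketch of the stack-versus-heap bookkeeping that comment describes; the buffer sizes and names here are arbitrary, purely for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Stack allocation: size fixed at compile time, freed automatically. */
    char on_stack[16];
    strcpy(on_stack, "stack");

    /* Heap allocation: sized at run time; the programmer must track
       both the length and the eventual free(). */
    size_t n = 16;
    char *on_heap = malloc(n);
    if (on_heap == NULL)
        return 1;
    strcpy(on_heap, "heap");

    printf("%s vs %s\n", on_stack, on_heap);
    free(on_heap); /* forget this and you leak; free it twice and you crash */
    return 0;
}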
Absolutely! (Score:5, Funny)
The end of programming will come one day—along with the paperless office and the year of Linux on the desktop.
Re: (Score:2)
Not sure about your examples... How much paper do you see in offices (except in bathrooms, and I hope we get to keep it there until I learn how to use the three shells) nowadays compared to 25 years ago?
Re: (Score:2)
Sounds like Zeno's Dichotomy Paradox to me.
Coding the AI (Score:2)
I wonder how the journalist imagines that AI will be coded.
Code Pigs Yes, Software Engineers No (Score:5, Interesting)
There's been 40 years of 'AI replacing programmers'. But an 'AI' has no clue what you actually need to do - what your constraints and circumstances are for this project. Because it is in no way intelligent. An 'AI' as we know it is just a machine that probabilistically turns tokens into the next tokens based on what it's previously seen. It can only output things it's seen before, blindly. It is perfectly happy telling you to do a bubble sort for 50 million records. It's basically that outsourced guy searching stackexchange and copypasting code snippets together till they compile and calling it good.
Software engineering is actually a harder large problem than something like driving. Yes, driving often has nastier consequences for failure, but the solution space is much more constrained. You know what to do, you just have to execute properly, which means determining your route then following it without running into anything or violating laws. You can reduce it to route finding, then small decisions. But with software engineering, if someone just tells you 'I need to do X' you have a staggering array of options: what language? what OS? what hardware? which algorithms? what data structures? what libraries? parallel processing or not? do I need a web interface? how about data integrity and security? backup? cloud? An 'AI' has absolutely zero concern about any of that, because it's not intelligent and will spit out the easiest possible solution that compiles (like that outsourced programmer using stackexchange).
An engineer takes all the requirements and tradeoffs and decides on the optimal solution, which can change wildly given all the constraints and requirements. There is no single best solution for all circumstances. For instance, which sort you use, or which lookup, are /highly/ dependent on the data and the needs. Maybe it's mostly or completely in order, maybe it's not. Maybe you can hold it all in RAM, maybe you can't. Maybe there's a best way to index it given what you know about the dataset. An AI has zero clue about any of this.
A code pig ('goofus') is someone who gets told to write code to do X and has no clue about what they're doing in context. They're called 'code pigs' because they're just in their cubicles, rarely let out, and just kind of wallowing in the poop - the classic Microsoft programmer (or any other large corporate drone). 'Programmers' is the more polite term. Most people called 'software engineers' are not, they're just programmers with title inflation. These people could possibly eventually be replaced by coding 'AI'. The software engineer will meticulously construct a prompt for a single method (as far as you can trust it) and the coding AI might produce some decent code for that method by plagiarizing some code it's already seen from a github repository. And then the software engineer will need to check it, but it still might be faster than dealing with a code pig.
But there is no way that a coding 'AI' (which has no intelligence) can possibly replace an engineer unless the AI actually becomes generally intelligent... and then all bets are off for everything! The current batch of coding 'AI's could be convenient autocompletes for small sections of code, like GitHub's copilot is (but again, you have to check its output, about 3/4 of it is defective without tweaks). So again, for someone who knows what they're doing, it will be a tool they can use or not use.
Re: (Score:2)
Amen
Re: (Score:2)
I think the best an AI could do is to cut-and-paste from code examples. I seriously doubt it can exhibit the creativity needed to write original code. Not that I haven't done cut-and-paste myself, but I DO write original code as well (and, wow, have I made original mistakes!)
Code is law, law is code (Score:5, Interesting)
A lawyer friend once explained to me that there were references in law books to laws that were no longer on the books.
I replied that this was just like a "use after free" error in a C program.
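For anyone who hasn't hit one, here is a minimal use-after-free in C to make the analogy concrete; the variable name and value are invented purely for the joke:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *statute = malloc(sizeof *statute); /* the law is enacted */
    if (statute == NULL)
        return 1;
    *statute = 1984;

    free(statute); /* the law is repealed */

    /* Undefined behavior: the "law books" still cite the freed statute. */
    printf("section %d\n", *statute); /* use after free */
    return 0;
}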
While there may be fewer programmers in the years to come, and a lot of simple cases will be automated, we'll still need auditors.
The law is full of specialized jargon, much like computer code, and I suspect that if we replace programming with English, it will soon become a specialized language much like law, which is code!
Just In Time (Score:3, Insightful)
While this looks like a serious threat to an entire way of life for millions of people employed in "things software", it might be just in time for some other problems that are not currently solvable on a practical basis.
There are probably billions of lines of code written in obsolete languages like COBOL, and even in very specialized languages in military computers that force the use of ancient hardware loading from paper tape, magnetic tape, floppy disks, etc. Everyone would really love to replace all of it, but rewriting all that code (and, possibly even more expensive, testing it) to target, say, Intel chips is just prohibitive.
Having AI that can look at the machine language of a computer and spew out human-readable code, complete with comments on not only what it's doing but why, will, if it can be produced, be a huge leap in taming the expense of replacing ancient computers with newer things. As it stands, the cars we're buying today have probably millions of lines of software targeting very specific CPUs, which is going to make 30-year-old cars into automotive bricks. Even if we can dispense with the computers controlling internal combustion engines, the software doing other highly indispensable things like air conditioning, heating, navigation, and so forth cannot be replaced for reasonable expense once those highly specific computers fail. It would be hideously expensive.
I'm going to be involved with a road rally called "Great Race" in a couple weeks, a rolling old-car museum of stunningly well-restored cars from the beginning of cars all the way up to the 1970s. All those cars still run the way they used to and are completely serviceable, even if the brakes may be scary and the acceleration is measured with a calendar. They will leave St. Augustine, Florida and arrive in Colorado Springs, CO the following week, just like they would have 50-100 years ago. But in the future there are not likely to be collectible cars from the age of computers, since the relatively fragile silicon components will eventually release the little packet of smoke built into them at the factory and cease to function, and finding working replacements will be nearly impossible.
But if we could grab a Raspberry Pi from the shelf and have an AI translator that could look at the old machine code, produced by a compiler that no longer runs on any existent computer, and produce code for the Raspberry Pi yielding the same outputs as the old automotive computer, maybe billions of dollars' worth of otherwise completely serviceable vehicles could be kept running for reasonable expense.
So if your nuclear submarine is still storing and loading its software with floppies, maybe it could be updated to load from more contemporary sources, if the input devices could be replaced with commonly available (translation: cheap) mass-produced devices and the software would be provably correct every time.
I think we desperately need this whether we realize it yet or not.
Re:Just In Time (Score:5, Interesting)
There are probably billions of lines of code written in obsolete languages like COBOL
COBOL is far from obsolete. The world still runs on COBOL, and for good reason.
But in the future there are not likely to be collectible cars from the age of computers
There are already open source (hardware and software) EMS / ECU systems produced by the hobbyist community.
But if we could grab a Raspberry Pi from the shelf and have an AI translator that could look at the old machine code, produced by a compiler that no longer runs on any existent computer, and produce code for the Raspberry Pi yielding the same outputs as the old automotive computer
We already have technology that transforms programs. We call them "compilers". They're very useful, but probably not what you actually want. Writing cross-compilers is notoriously difficult, for reasons that should be obvious.
A much better, and far simpler, approach is emulation. The big advantage here is that you won't need to change anything about the original program. We actually have mainframe emulators in use today, keeping older software in production.
While emulation is obviously the right choice, either solution is going to produce better results cheaper, faster, and more reliably than an AI. Just try to imagine what would go into training such a system. Writing the cross-compiler would be less work, and you'd probably want to write an emulator as part of that process anyway. To top it off, you couldn't even trust any of the code it produced in the end. AI is just the wrong tool for the job.
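To make "emulation" concrete, here is a toy fetch-decode-execute loop in C, the core of any emulator, with a made-up four-instruction machine standing in for the legacy CPU. Everything here is illustrative, not any real instruction set:

#include <stdint.h>
#include <stdio.h>

/* A toy instruction set, purely illustrative. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_PRINT = 3 };

int main(void)
{
    /* Pretend "legacy ROM image": load 2, add 3, print, halt. */
    uint8_t rom[] = { OP_LOAD, 2, OP_ADD, 3, OP_PRINT, OP_HALT };
    uint8_t acc = 0;  /* the emulated accumulator */
    size_t pc = 0;    /* the emulated program counter */

    for (;;) { /* fetch-decode-execute, one instruction per pass */
        uint8_t op = rom[pc++];
        switch (op) {
        case OP_LOAD:  acc = rom[pc++];     break;
        case OP_ADD:   acc += rom[pc++];    break;
        case OP_PRINT: printf("%u\n", acc); break;
        case OP_HALT:  return 0;
        default:       return 1; /* unknown opcode: the emulator's "crash" */
        }
    }
}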
[...] and the software would be provably correct every time.
What does "provably correct" mean to you? Also, cheap commodity hardware maybe be unsuitable for some environments. You can't just stick a $25 SBC anywhere you want and expect to it to be as reliable as the hardware it ostensibly replaces just because it's newer.
Re: (Score:2)
COBOL is far from obsolete. The world still runs on COBOL, and for good reason.
As I indicated [imgur.com] a bit further up.
Re: (Score:2)
This, this, is why a golden age of automobile collecting is limited, finite, and will be static soon.
You can buy most any part you need for a '67 Mustang, even door handles, likewise many 50s-60s-70s cars, they are unique and desirable, and so collectible.
Will you be able to buy replacement dashboards for any of the modern automobiles so heavily computerized? Will the software be rewritten for the available hardware 30 years from now, accommodating a necessary conversion to electric drivetrains (hello, mand
Re: (Score:2)
So if your nuclear submarine is still storing and loading its software with floppies, maybe it could be updated to load from more contemporary sources, if the input devices could be replaced with commonly available (translation: cheap) mass-produced devices and the software would be provably correct every time.
The rub is in "and the software would be provably correct every time".
Provability is very much not a characteristic of how AIs operate -- just the opposite, in fact. Their main problem is that their results are unreliable, and nobody has the first clue about how they derived them.
The reason the nuclear submarine is still running decades old code in an obsolete language is because the Navy's foremost programming experts don't trust themselves to rewrite it without making some mistake and getting someone kil
Re: (Score:2)
" don't trust themselves to rewrite it without making some mistake and getting someone killed"
Nobody should "trust themselves" with software. Rewriting it means testing it six ways from Sunday so's you don't get someone killed. Testing is very expensive, and that is money that can be saved by buying more floppies: $10 million to test the rewritten software, or $10K to buy more floppies.
Good luck with that (Score:2)
Lotsa luck! This is what, the three hundred and fifty seventh thing that was going to let MBA's give fuzzy and incomplete ideas to a piece of software and it would magically crank out bug free software?
While ChatGPT is clearly more sophisticated, all of this reminds me of people reacting to Eliza many years ago.
In order for ChatGPT to successfully produce any program on its own (rather than just cut-pasting from stackoverflow.com) you would have to tell it what to code in English at a fine-grained level. So fin
Programming has gotten simpler? (Score:5, Insightful)
"Over much of the history of computing, coding has been on a path toward increasing simplicity."
Perhaps, but the problems got more complex. In my 35-year career I went from desktop applications that used 50 80-character lines for display and stored data on a single dedicated server with a 30-megabyte hard drive.
My current project uses React with hooks, Node with serverless functions on a web hosting service, lots of fancy CSS, a NoSQL database hosted elsewhere with an API in GraphQL, and libraries, libraries, libraries written by 3rd parties, constantly being updated.
None of this was even imaginable 15 years ago. Do we have better applications? Yes, much better. Are they simpler to write than those 30 years ago? Uh...nope.
Re: (Score:2)
"Over much of the history of computing, coding has been on a path toward increasing simplicity."
Perhaps, but the problems got more complex. In my 35-year career I went from desktop applications that used 50 80-character lines for display and stored data on a single dedicated server with a 30-megabyte hard drive.
My current project uses React with hooks, Node with serverless functions on a web hosting service, lots of fancy CSS, a NoSQL database hosted elsewhere with an API in GraphQL, and libraries, libraries, libraries written by 3rd parties, constantly being updated.
None of this was even imaginable 15 years ago. Do we have better applications? Yes, much better. Are they simpler to write than those 30 years ago? Uh...nope.
Yeah, in what world has programming got simpler?
If anything, it's become a guild where the gatekeepers deliberately make it as complex as possible by grabbing at every new idea and library they can.
Re: (Score:2)
The same application you wrote 30 years ago is simpler to make, if implementing the same UI and general design.
I'd still posit that it's even easier to make a nice modern looking interpretation of that same application than it was to make that program back in the day. The choices can be paralyzing (am I using Angular? React? Svelte? Vue?) and some of the hyped 'patterns' are frequently not what they are cracked up to be, but peer pressure causes them to be over implemented... However once you understand t
May be journalism (Score:5, Insightful)
Not too sure about programming. But I bet current AI can do a great job writing speculative misinformed clickbait to fill pages, better than most NY writers. I think someone's got to be worried.
Deceptive Advertising (Score:3)
The history of "AI" is a sequence of ever-changing definitions of what constitutes intelligent activity. In the 1950s it was assumed that playing checkers or chess showed intelligence. In the 60s the ability to do freshman calculus was the test. After that came natural language parsing, which was conflated with language understanding. By the 80s it was expert systems and Prolog. In the 90s robots became the rage. The 2000s had the start of autonomous vehicles and early neural nets, and by the mid-to-late 2010s we ended up with high-end ANNs and now LLMs.
The examples are not comprehensive or exhaustive, and they show that the definition of AI is always changing. There is, however, an ongoing pattern: when a particular AI definition/fad fails, a new definition comes into fashion and is the Next Big Breakthrough. And in each cycle the hype is more inflated and the money pumped into the technology goes up accordingly. That's the true driving force. Hype and big bucks go hand in hand.
I program Safety Systems on Industrial Equipment (Score:5, Insightful)
Normally I receive a safety design that has been approved by the customer and the equipment vendor. It's a document that says in a formal way "when these things happen or when a person or object is in this area then that equipment should stop".
The safety programming is the simplest and easiest type of programming in these systems. It has to be that way because it's very important that it must be right. The spec is very clearly defined, the safety devices are very simple and very reliable, and there are strict rules for how the logic must be written.
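That style of spec maps almost one-to-one onto code. A minimal illustrative sketch in C: the sensor names are invented, and the stubbed booleans stand in for certified safety I/O; real systems run on rated safety PLCs, not commodity C code:

#include <stdbool.h>
#include <stdio.h>

/* Stubbed sensor inputs; a real system polls certified safety I/O. */
static bool estop_pressed   = false;
static bool curtain_broken  = true;  /* someone reached into the cell */
static bool guard_door_open = false;

/* The spec "when any of these happen, that equipment must stop",
   written as one explicit, auditable rule. */
static void safety_scan(void)
{
    if (estop_pressed || curtain_broken || guard_door_open)
        puts("STOP equipment (fail toward the safe state)");
    else
        puts("run permitted");
}

int main(void)
{
    safety_scan(); /* prints: STOP equipment (fail toward the safe state) */
    return 0;
}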
Let's say ChatGPT is approved for safety code generation. The project manager fires me and just hands that safety spec to ChatGPT.
Still, there are always instances of "oh, gee whiz, they never thought somebody would do this when they came up with the spec; better make sure they can't get hurt when they do". Some of those are things you could figure out sitting at your desk. Some of them are only obvious when you get to the site and see the physical layout of the system, and the ways people could climb over or under or around to get where you don't expect them to be. Let's put these edge cases aside for the moment and focus on the primary issue:
ChatGPT is famous for generating output that looks right but is factually wrong. It doesn't understand the intent of what it's being asked to do. It doesn't understand anything; that's not even remotely how it works. So I'd expect a safety program that passes validation but does unexpected things during production.
When somebody is hurt because the safety system was programmed incorrectly, who will pay them or their surviving family?
The design committee did their job correctly; the safety spec was valid. The project manager used an approved AI to generate the code. The AI was certified compliant with regulatory standards (OSHA, NFPA, etc.). The equipment vendor supplied safety devices certified in the same manner. The operator followed safety rules when operating the equipment.
Somebody got hurt and no one is accountable. I realize in the boardroom this is a feature not a bug but on the shop floor it is not a good feature.
All the same arguments about safety can be made about any programmatic output that people actually care about. Factory equipment safety failures happen to be a low probability high stakes example.
If you want higher stakes consider power plant burner control systems. Consider petrochemical refinery controls. Medical device and Drug Manufacturing.
I remember when Safety Systems had to be hardwired. No programmatic involvement in safety allowed. Mechanical switches and relays because software was just not reliable.
AI is not yet reliable enough to be trusted with safety or process control.
Not yet.
Not so simple (Score:2)
A long time ago there was a movement to explain what the computer should do in more or less plain English instead of mysterious codes. It was called COBOL. It, and the other high-level languages of that time, did indeed change coding a lot. But the need for programmers did not go away, at all.
The real art of programming includes being aware of different failure modes, error handling, and considering malicious user input, as well as a deep understanding of what the program is supposed to do, and finding an accepta
Programming vs. Software Engineering (Score:3)
The end of journalism as we know it... (Score:2)
Fifth Generation Programming (Score:2)
Basically, the idea of intelligent/expert system compilers that can generate code from highly abstract descriptions is fifth generation programming. This has been talked about for as long as I've been a programmer (I started in 1978), and I seriously doubt ChatGPT is at the point where it could implement it usefully. As far as I can tell, code produced by AI systems tends to be of very poor quality (bug-ridden, unreliable, with tons of security defects). Of course, that won't stop companies using ChatGPT co
Re: (Score:3)
To further the point: a lot of folks I've talked to, if they do admit that ChatGPT isn't there yet, will declare "oh, but it came out of nowhere and is like 80% there, so it'll be a done deal in the next year or two".
Which ignores the fact that it didn't come out of nowhere. I point out that over a decade ago IBM demonstrated almost this level of functionality on Jeopardy. However, despite that demonstration, they weren't able to translate it into business value. Now OpenAI has made it iterative and giv
Sensible work hours (Score:2)
Transforms human-language instructions into softwa (Score:2)
Transforming human-language instructions into software. Didn't that used to be known as pseudocode?
This is inevitable (Score:2)
I agree. I have always thought that it was bizarre to teach children "coding". To me, that was like someone in 1920 teaching kids to use a phone switchboard: "It's the future!"
I mean, we all saw how Captain Kirk talked to the computer, and it was able to act on his instructions. And I recall episodes of both Superman (the original TV series) and The Outer Limits (original) in which people spoke to computers, and the computers understood and acted.
So did we not see that programming would be something interim
Write me an operating system like Linux ... (Score:2)
(hit enter, wait)
(wait)
What comes next?
Re: (Score:2)
Presumably, it takes everything it has tagged as 'Linux' and plagiarizes the hell out of it while also scrambling it a bit to obfuscate the origins.
This will result in something that won't boot, but requires an improbably large ISO to install.
Language? (Score:2)
path toward increasing simplicity? (Score:2)
APL test certification (Score:2)
Symbolic maths will be the arbiter of AI compositional-programming success. Any machine that can grok symbolism at the level of higher mathematics ends human drudgery.
And we'll have flying cars by the new millennium. (Score:2)
Or how did the whole "no code" thing turn out?
Sorry. AI isn't the panacea of all things. This guy just wrote a click-bait article.
AI cannot "create" it can "generate" and this is a significant difference. It can only generate based on what it's trained to do. But if you want something that hasn't been done before, you need a human to work that out.
AI can be a good "thumbnail" thing to start finding new options that may be already within the pattern but just not seen by the human eye.
AI can help resolve thing
I think he's right (Score:2)
As an aside, I wish I could find the reference to the "N programming phases", where phase zero is "unconscious", i.e. unaware one's actions are programming a machine (e.g. spreadsheet macros).
Tell the machine what you want, not how to do it. It's an evolution of the language model, just as Lem described in GOLEM XIV/Imaginary Magnitude.
LLM tech
Chasing the wrong Issue (Score:2)
There are already shitloads of totally useless "coders" out there. Writing programming code is so simple any idiot can do it -- and many idiots do. The hard part, which will not be solved by Statistical Modelling (aka ML/AI), is "how" to solve the problem and "what" is to be achieved by the cut'n'paste code.
It has been this way since the dawn of "making machines do useful stuff". The hard part is designing "how" to accomplish what is desired. Reducing that to instructions that an idiot (or a machine) ca
AI (Score:2)
NYT: It's the End of Computer Programming As We Know It
"This article was written by ChatGPT."
So.. (Score:2)
Tech Support, I want out! (Score:2)
Didn't Jaron Lanier say something along the lines of the AI future basically being a planet of help desks?
what a TWIST! (Score:2)
After years of everyone telling journalists to learn to code when they lose their jobs, the journalists are back to tell programmers to learn to write in plain English when they lose theirs!
Trust without verification? (Score:5, Insightful)
30 years ago programming was being outsourced (Score:3)
Re: (Score:2)
You can use other human languages also, so it doesn't need to be English.
Fun fact: When I used my native language to write a program with ChatGPT, ChatGPT named some of the variables in English and some in my native language.
But I agree with what you say. At least at its current level, you can't do any serious work if you don't know how to program. But there is one thing it can do: translate code from one language to another, e.g. C -> HTML + JavaScript.
I tried with this example:
https://www.programiz.c [programiz.com]
Re: (Score:2)
The funny thing is, I could teach an AI so many of the issues I get daily, and maybe it could teach me. Unfortunately, a DoD firewall prevents that.
Re: (Score:2)
You sound a lot like those people who in the year 2015 said that it would take at least a decade for computers to beat professional human players at go. If Google wanted, it could make this AI within 2 years. It would not be perfect, but it could write large applications with millions of lines of code, and the code quality would be much better than what ChatGPT can provide. I estimate 2 years because that is how long it has usually taken them to solve an "unsolvable" or "impossible" problem with AI (like go or protein