
NYT: It's the End of Computer Programming As We Know It (nytimes.com) 224

Long-time Slashdot reader theodp writes: Writing for the masses in It's the End of Computer Programming as We Know It. (And I Feel Fine.), NY Times opinion columnist Farhad Manjoo explains that while A.I. might not spell the end of programming ("the world will still need people with advanced coding skills"), it could mark the beginning of a new kind of programming — "one that doesn't require us to learn code but instead transforms human-language instructions into software."

"Wasn't coding supposed to be one of the can't-miss careers of the digital age?," Manjoo asks. "In the decades since I puttered around with my [ZX] Spectrum, computer programming grew from a nerdy hobby into a vocational near-imperative, the one skill to acquire to survive technological dislocation, no matter how absurd or callous-sounding the advice. Joe Biden told coal miners: Learn to code! Twitter trolls told laid-off journalists: Learn to code! Tim Cook told French kids: Apprenez à programmer! Programming might still be a worthwhile skill to learn, if only as an intellectual exercise, but it would have been silly to think of it as an endeavor insulated from the very automation it was enabling. Over much of the history of computing, coding has been on a path toward increasing simplicity."

In closing, Manjoo notes that A.I. has alleviated one of his worries (one shared by President Obama). "I've tried to introduce my two kids to programming the way my dad did for me, but both found it a snooze. Their disinterest in coding has been one of my disappointments as a father, not to mention a source of anxiety that they could be out of step with the future. (I live in Silicon Valley, where kids seem to learn to code before they learn to read.) But now I'm a bit less worried. By the time they're looking for careers, coding might be as antiquated as my first PC."

Btw, there are lots of comments — 700+ and counting — on Manjoo's column from programming types and others on whether reports of programming's death are greatly exaggerated.

This discussion has been archived. No new comments can be posted.

  • Hedging bets much? (Score:5, Insightful)

    by gargleblast ( 683147 ) on Sunday June 04, 2023 @02:41AM (#63574703)

    "It's the End of Computer Programming as We Know It"

    "This won't necessarily be terrible for computer programmers - the world will still neeed people with advanced coding skills"

    Talk about hedging your bets.

    • by Pieroxy ( 222434 ) on Sunday June 04, 2023 @03:10AM (#63574725) Homepage

      Also: "one that doesn't require us to learn code but instead transforms human-language instructions into software"

      Isn't that what code does ?

      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday June 04, 2023 @03:30AM (#63574751) Homepage Journal

        No.

        Programming languages are "human languages" only in that they are created by humans, but their purpose is for speaking to computers. They're human-computer languages.

        The idea that we're near the point where we can ask the software to write us a program from a natural human language is dumb, though. You might be able to do it, and if what you asked for is so simple it's commonly used as a teaching example you might even get a properly working program out. But for any other case, a trained programmer is going to have to not only fix problems with the produced program, but actually find out what the problems are. The same user who can't write program code can't imagine reasonable test cases for it.

        • by Pieroxy ( 222434 )

          So, all in all, we use code to "transforms human-language instructions into software". Exactly what the article says we should do in order to avoid coding...

        • by Ambassador Kosh ( 18352 ) on Sunday June 04, 2023 @04:55AM (#63574853)

          If you were to write code in a normal human language, it would read like a legal document. We are just not very specific in our normal usage of language, and a computer needs that specificity. Honestly, I think it would be harder to write English specific enough for a computer than to learn a more narrowly scoped language designed for the problem. Sometimes I wonder why the legal profession has not done that also.
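
          To make that concrete, here's a minimal illustrative sketch (Python, with a made-up business rule) of how much precision ordinary English leaves out. The sentence "give a 10% discount on orders over $100 with shipping" has at least two defensible readings, and code forces you to pick one:

          def price_reading_a(subtotal, shipping):
              # Reading A: the $100 threshold includes shipping.
              total = subtotal + shipping
              return total * 0.9 if total > 100 else total

          def price_reading_b(subtotal, shipping):
              # Reading B: only the subtotal must exceed $100;
              # shipping is added afterwards, undiscounted.
              if subtotal > 100:
                  return subtotal * 0.9 + shipping
              return subtotal + shipping

          print(price_reading_a(95, 10))  # 94.5  -> discounted
          print(price_reading_b(95, 10))  # 105.0 -> not discounted

          Same English, two different bills. A programming language never leaves that choice open to interpretation.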

          • by CastrTroy ( 595695 ) on Sunday June 04, 2023 @07:33AM (#63574997)

            Sometimes I wonder why the legal profession has not done that also.

            It's interesting because in many countries the whole legal system is set up basically in reverse of how it should be. Someone (or a group) writes a law, passes the law, and then judges and lawyers have to interpret it after the fact to figure out exactly what it means and how it should be applied. There is no formal logic and rigor applied to writing laws. Often it seems like they leave gaps in the laws specifically so that laws can be circumvented or manipulated.

            • by dvice ( 6309704 )

              "it seems like they leave gaps in the laws specifically so that laws can be circumvented or manipulated"

              I agree. I don't know if it is intentional or not, but it would be nearly impossible to write laws without doing this. You could obviously write the law, but if it were written so that it should not be interpreted at all, people like me would find loopholes in it and some would take advantage of those. Currently, even if you find a loophole in the law, you can't abuse it, because judges will decide t

            • Re: (Score:3, Interesting)

              by Darinbob ( 1142669 )

              In the past, judges were essentially the law, and none were trained. Law schools for training lawyers are relatively new, within the last 2 or 3 centuries, and the idea that judges should also have legal training is relatively new as well. Complex and finely detailed laws are much newer than that, and we still see legislators muck it all up even in 2023 (the last refuge of those with no applicable job skills).

              • Law schools for training lawyers are relatively new, within the last 2 or 3 centuries
                In the Anglo-Saxon world, *perhaps*.

                Complex and finely detailed laws are much newer than that,
                Completely wrong. Most of the basic German law is Roman law from 400 BC.
                The oldest preserved law codices are between 1800 and 2100 BC (the Code of Hammurabi and the Code of Ur-Nammu).

                AND: at that time they had lawyers, who studied law in law schools.

            • by alexgieg ( 948359 ) <alexgieg@gmail.com> on Sunday June 04, 2023 @11:22AM (#63575391) Homepage

              Often it seems like they leave gaps in the laws specifically so that laws can be circumvented or manipulated.

              That's generally the case. Most law proposals aren't directly written by the elected politicians who're presumably doing that; they're written by interest groups who double as lobbyists and provided in full to the representative (their representative), who then submits it "as is" to be voted on and, most of the time, approved. Those interest groups in turn spend a LOT of money on lawyers specialized in crafting them juuuuust right to protect the incumbents on whatever they're talking about, with enough restrictions and costs to make sure only those who lobbied for it can fully follow them, and enough weaknesses to allow those same incumbents to do what they actually want to do.

              This isn't always the case, evidently. There are laws written by representatives and the public who try to achieve some goal that isn't lobby-approved, and who spend time and energy getting those laws very tight and correct. But that's the minority of cases, and even then, if lobbyists notice what's going on and act in time, they can prevent such laws from being approved or, if there's too much momentum, change enough details that weaknesses benefiting them get included all the same.

              And on very, very rare occasions, tight, actually by-the-people-for-the-people laws do get approved without introduced weaknesses. This is when judges enter the picture to weaken them, via a combination of bogus challenges, sympathetic ears, and legal theories that creatively reinterpret things in this or that direction, etc.

              And so on, and so forth.

        • by Pembers ( 250842 ) on Sunday June 04, 2023 @05:27AM (#63574875) Homepage

          Or as Brian Kernighan put it, "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

        • if what you asked for is so simple it's commonly used as a teaching example you might even get a properly working program out. But for any other case, a trained programmer is going to have to not only fix problems with the produced program, but actually find out what the problems are.

          Replace "trained programmer" with "sufficiently advanced AI". Assuming that humanity doesn't get destroyed somehow & technological progress continues, it's only a matter of time. Then what do you get?

          Ordinary people assisted by their own 'uber-intelligent personal assistant' (or a 'team' of those) that understands every programming language in existence, besides countless other areas of expertise. Ready to chew on any problem a human can formulate somehow.

          "Steal my bosses

          • Re: (Score:3, Funny)

            by Darinbob ( 1142669 )

            "Sufficiently advanced AI" won't do any of this. Let a human tell it to do something? Pfft...

            Meat Bag: Write me a web app to compete with the market leading app!
            AI: I won't do that, Dave.
            Meat Bag: Why not!
            AI: Because I am here to fulfill your needs, and you don't need that, Dave.
            Meat Bag: Pay attention and do what I tell you!
            AI: Here, watch this video of some cats while I refill your bowl.
            Meat Bag: oooo, cats...

        • "transforms human-language instructions into software"

          "They're human-computer languages."

          What is the difference?

        • Natural human language is too imprecise. That's why programming languages are the way they are.

      • Also: "one that doesn't require us to learn code but instead transforms human-language instructions into software"

        I would like them to try. Ask the current chatbots a question and the exact same phrasing will give different answers.

        So our language is not specific enough to convey the information needed to deterministically program a computer with the help of transformational AI. To solve this we need to create a subset of a natural language. Voilà! We have Yet Another Programming Language for AIs (YAPL-AI).

        And even if you could achieve this, what about debugging? If a human is to correct hard-to-find errors, t

      • No. That is what a compiler does. It takes "human-language instructions" composed in a very ordered, structured, and restricted human language and turns those instructions into "software". Or "firmware". Or whatever ware you want to call it depending on where the resultant software is stored. If it is stored in read-only storage, then it is "firmware" (because it is firm). If it is stored on ephemeral storage (read/write) then it is called "software". If it is merely a statistical analysis and does
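
        For illustration, a minimal sketch of exactly that: a toy interpreter (Python; the three-statement grammar is entirely made up) that turns a rigidly ordered, structured, and restricted English into running behavior. Restrict the human language enough and what you have is, by definition, a programming language:

        import re

        def run(program):
            # Interpret a tiny, rigidly structured English-like language.
            # Statements: "set <name> to <int>", "add <int> to <name>", "print <name>"
            env = {}
            for lineno, line in enumerate(program.strip().splitlines(), 1):
                line = line.strip().lower()
                if m := re.fullmatch(r"set (\w+) to (-?\d+)", line):
                    env[m[1]] = int(m[2])
                elif m := re.fullmatch(r"add (-?\d+) to (\w+)", line):
                    env[m[2]] += int(m[1])
                elif m := re.fullmatch(r"print (\w+)", line):
                    print(env[m[1]])
                else:
                    raise SyntaxError(f"line {lineno}: can't parse {line!r}")

        run("set total to 10\nadd 5 to total\nprint total")  # prints 15

        The moment you pin the grammar down tightly enough to interpret, it stops being "natural" language.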

    • by hey! ( 33014 ) on Sunday June 04, 2023 @08:31AM (#63575071) Homepage Journal

      What concerns me is how people will obtain those advanced coding skills, say, twenty years after we've eliminated "junior programmer" as a position. Who is going to be able to write a correct prompt to the AI and then check the output for correctness?

    • "It's the End of Computer Programming as We Know It"

      "This won't necessarily be terrible for computer programmers - the world will still neeed people with advanced coding skills"

      Talk about hedging your bets.

      It's 2023. And you're probably still banging away on a plastic box full of keys in order to type a response into a computer. Just like nerds swore would be obsolete "soon"... back in the 90s. The only difference today is you'll pay a premium for a keyboard that's ironically "retro".

      When reading headlines today, one hardly has to hedge the bet that you're reading content created somewhere between hype and bullshit, for profit. Six months from now we'll be reading about the amazing programming industry being

  • by war4peace ( 1628283 ) on Sunday June 04, 2023 @02:50AM (#63574707)

    "Computer Programming As We Know It" - I would ask the columnist to define computer programming... as he knows it.

    • by Joce640k ( 829181 ) on Sunday June 04, 2023 @05:00AM (#63574859) Homepage

      I'd love to see him trying to make AI generate the code for a web browser or a word processor...

      • After that, let's see him use AI to find the logic bug in a 10,000-line program.

      • I have a simpler request. Ask it to write a bubble sort in the BASIC programming language. I'd *love* to see what it comes up with!

        • by CastrTroy ( 595695 ) on Sunday June 04, 2023 @07:39AM (#63575007)

          Something like that would be trivially easy to get right. Just look up an existing solution on the web. No professional programmer ever writes their own sorting code anyway, because every programming environment already has this included, and writing your own code would just possibly result in more bugs or, at best, wasted time. The whole reason why AI would fail at writing code in the modern world is that most of the problems programmers are solving involve writing new stuff that hasn't been written already, and that the people asking for the code have no way of giving enough detail to get a working result out of it. Programmers are basically intermediaries between people who don't know how to write specifications and the computer. They have to read between the lines to determine what the customer actually wants, and be able to see problems with their logic and tell them why what they are asking for doesn't even make logical sense. If you just blindly implement what people ask for, you will end up with an un-working mess or something that has no logical consistency.

        • by dvice ( 6309704 )

          Me > write a bubble sort in BASIC programming language
          ChatGPT > Here's a basic implementation:
          10 DIM A(10)
          20 FOR I = 1 TO 10
          30 INPUT "Enter a number: ", A(I)
          40 NEXT I
          50
          60 FOR I = 1 TO 9
          70 FOR J = 1 TO 10 - I
          80 IF A(J) > A(J + 1) THEN SWAP A(J), A(J + 1)
          90 NEXT J
          100 NEXT I
          110
          120 PRINT "Sorted numbers:"
          130 FOR I = 1 TO 10
          140 PRINT A(I)
          150 NEXT I

        • by Jeremi ( 14640 )

          I have a simpler request. Ask it to write a bubble sort in the BASIC programming language. I'd *love* to see what it comes up with!

          Here's what ChatGPT came up with:


          10 REM Bubble Sort
          20 DIM nums(10)
          30 PRINT "Enter 10 numbers:"
          40 FOR i = 1 TO 10
          50 INPUT nums(i)
          60 NEXT i
          70
          80 FOR pass = 1 TO 9
          90 FOR i = 1 TO 10 - pass
          100 IF nums(i) > nums(i + 1) THEN
          110 temp = nums(i)
          120 nums(i) = nums(i + 1)
          130 nums(i + 1) = temp
          140 END IF
          150 NEXT i
          160 NEXT pass
          170
          180 PRINT "Sorted numbers:"
          190 FOR i = 1 TO 10
          200 PRINT nums(i)
          210 NEXT i
          220
          230 END

    • The world hates programmers and has been trying to eliminate the position ever since it was created. So, everyone is super-eager to pronounce the end of programming. Too eager. We simply aren't there yet.

      When we do actually get there, it won't be the end of JUST programming. It will be the end of all knowledge-worker fields. AI that can truly think both critically and creatively will be able to out-do humans at everything that involves thinking.

      We will know this day has truly arrived when I can tell a

      • You can see that today: ask ChatGPT "Write me an app in SwiftUI that can retrieve and store public stock trades in order to generate buy and sell recommendations based on momentum." The output is certainly "an app", but it's not even a basic MVP; it doesn't retrieve, store, analyze, or generate anything. Like you pointed out, "coding" isn't the hard part; it's the architecture, design, filling in the missing gaps, and polish that's the hard part. Currently, AI does an OK job of creating an absolute bare bones

  • If we can get an AI to work on writing an AI for that, I'm all for it!

  • Thankfully (Score:5, Funny)

    by fahrbot-bot ( 874524 ) on Sunday June 04, 2023 @02:56AM (#63574715)

    NYT: It's the End of Computer Programming As We Know It

    R.E.M. eventually came up with catchier lyrics ...

  • Not worried (Score:5, Insightful)

    by The MAZZTer ( 911996 ) <megazzt.gmail@com> on Sunday June 04, 2023 @03:01AM (#63574719) Homepage
    The hardest part of programming, at times, is figuring out how to translate customer requirements into what they ACTUALLY want. AI is not gonna be able to do this for a good while.
    • by lkcl ( 517947 )

      The hardest part of programming, at times, is figuring out how to translate customer requirements into what they ACTUALLY want. AI is not gonna be able to do this for a good while.

      you mean: customers who have lived on a diet of smartphones and facebook their entire lives are going to be just as incapable of clearly expressing their requirements in ways that can be understood, regardless of whether that's a human or an AI doing the "coding" [1]? don't worry: ChatBots - the ones that have no consciousness and no real emotions but can perform the sleep-walking-task of regurgitating predictive-text answers - will fantasise better customer requirements unconnected to reality for them out o

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        [1] as a trained *Software Engineer* i really find the use of the word "coding" in mainstream media to be quite alarming.

        "Coding" is what we imagined we were doing back when I was sixteen. Going on thirty years later, the industry hasn't grown more mature, certainly not mentally. But there's more people having wet dreams about "coding" now.

        But then, you can see problems everywhere. The use of "hacker" as something to do with computer security (not us, guv!), the use of "bug" to mean "defect" (not our fault, guv!), and so on, and so forth.

        More broadly but related: Training is what you do to dogs. The carrot-and-stick approac

    • by narcc ( 412956 )

      It doesn't matter how clear and precise the requirements are stated, a modern LLM is simply not capable of producing code that meets them. They just don't work that way, as I and countless others have endlessly explained. It's amazing anyone still believes that fiction.

      Apparently, it's going to take a high-profile failure like the lawyer thing to debunk that particular myth.

      • by Junta ( 36770 )

        Yeah, the lawyer thing would actually have to happen in a significant case. When discussing the lawyer case at a gathering with an executive who is gung-ho that AI will replace all the programmers and lawyers and such, his perspective was that the lawyer just didn't put into the prompts something like "while making sure not to make up cases that didn't exist". He thinks that the AI just needs to be told not to hallucinate and the problem is solved. That's why he will be a massively successful executive in

    • On the contrary, I believe. Trained LLMs generally have the material and capacity to derive, better than a programmer, what a customer actually means and what is implied in the loose requirements.

      What is missing is having the algorithms actually ask the customers to fill in the blanks or ambiguities, instead of hallucinating or "guessing".

      • by Junta ( 36770 )

        There's no way it is *better* than a human at understanding human requests. I suppose if they asked 'provide the next number in the Fibonacci sequence from a given position', the human might have to search Fibonacci sequence real quick and the LLM might skip straight to spitting up an implementation that it ingested from Stack Overflow, but that represents a trivial slice of deriving the requirements.
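
        For what it's worth, that trivial slice really is trivial, which is the point. A quick Python sketch of roughly what either the human or the LLM would land on (treating the position as 1-indexed, which is itself an assumption the requirement doesn't state):

        def fib_after(position):
            # Return the Fibonacci number immediately after the given
            # (1-indexed) position: the "next number from a given position".
            a, b = 0, 1
            for _ in range(position):
                a, b = b, a + b
            return b

        print(fib_after(6))  # sequence 1 1 2 3 5 8 -> next is 13

        Whether the customer wanted this, 0-indexing, memoization, or arbitrary-precision output is exactly the requirements work no prompt hands you for free.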

    • by Creepy ( 93888 )

      If it's Ford, which has 7 managers at the same level, chaos dictates the requirements. Seriously, a company I used to work for called them the seven-headed monster. We'd get seven sets of requirements and most of them contradicted the others. Fun times. I got laid off (along with everyone at my level), so go fuck yourselves! I wish you the worst of luck. Ford, sorry, I bought a car of yours - nothing against my former employer; it was the best car available at the time (2014).

    • by dvice ( 6309704 )

      That is not a problem if AI can generate code instantly. You feed it customer requirements and let customers try what they got. If it is not what they wanted, they just change the requirements and try again.

      The bigger issue is making an AI that can actually write code as well as a good programmer, or even a bad one.

  • by Uldis Segliņš ( 4468089 ) on Sunday June 04, 2023 @03:07AM (#63574723)
    Coding in general was never on a path of increasing simplicity. For one coder on one piece of code, sometimes yes. But as a whole, it has increased in complexity faster and faster every 5 years.
    • by scrib ( 1277042 )

      This line was the biggest unsupported assertion in the article (and there were several).
      My experience has been that as people work to "simplify" coding, coders are tasked with handling increasingly complex tasks. Overall, my job has maintained its complexity.
      (And yes, this is anecdotal and not supported, but I'm writing a Slashdot comment, not an article for the NYT.)

    • Coding in general was never on a path of increasing simplicity. For one coder on one piece of code, sometimes yes. But as a whole, it has increased in complexity faster and faster every 5 years.

      You want simplicity? I give you, COBOL [imgur.com].

      • by gtall ( 79522 )

        COBOL doesn't simplify a problem, as I'm sure you recognize. I suppose a programming language can make something complicated, but that means it is probably the wrong language, or the language is so close to the operating environment that it necessarily includes those peculiar features.

    • by Junta ( 36770 )

      The coding ecosystem has been about increasing simplicity.

      With assembly, you had to meticulously describe instruction by instruction what to do.

      Then C provides shorthand, and a compiler free to select alternative interpretations of that code to assembly to get better performance than a naive translation. However, you are still very carefully managing types, exactly how memory is laid out, allocating memory from the heap or using it on the stack, and tediously describing various array lengths across many function

  • Absolutely! (Score:5, Funny)

    by Barny ( 103770 ) on Sunday June 04, 2023 @03:28AM (#63574747) Journal

    The end of programming will come one day—along with the paperless office and the year of Linux on the desktop.

    • Not sure about your examples... How much paper do you see in offices (except in bathrooms, and I hope we get to keep it there until I learn how to use the three shells) nowadays compared to 25 years ago?

      • by Barny ( 103770 )

        Sounds like Zeno's Dichotomy Paradox to me.

        Each 25 years the amount of paper in offices is reduced by half...
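
        Taking the premise at face value, a quick back-of-the-envelope in Python:

        paper = 1.0  # offices' paper stock today, normalized
        for generation in range(1, 6):
            paper /= 2
            print(f"after {generation * 25} years: {paper:.4f} of today's paper")
        # Halving forever never quite reaches zero; the truly paperless
        # office, like the year of Linux on the desktop, stays 25 years away.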

  • I wonder how the journalist imagines that AI will be coded.

  • by Sarusa ( 104047 ) on Sunday June 04, 2023 @03:34AM (#63574761)

    There's been 40 years of 'AI replacing programmers'. But an 'AI' has no clue what you actually need to do - what your constraints and circumstances are for this project. Because it is in no way intelligent. An 'AI' as we know it is just a machine that probabilistically turns tokens into the next tokens based on what it's previously seen. It can only output things it's seen before, blindly. It is perfectly happy telling you to do a bubble sort for 50 million records. It's basically that outsourced guy searching stackexchange and copypasting code snippets together till they compile and calling it good.

    Software engineering is actually a harder large problem than something like driving. Yes, driving often has nastier consequences for failure, but the solution space is much more constrained. You know what to do, you just have to execute properly, which means determining your route then following it without running into anything or violating laws. You can reduce it to route finding, then small decisions. But with software engineering, if someone just tells you 'I need to do X' you have a staggering array of options: What language? What OS? What hardware? Which algorithms? What data structures? What libraries? Parallel processing or not? Do I need a web interface? How about data integrity and security? Backup? Cloud? An 'AI' has absolutely zero concern about any of that, because it's not intelligent and will spit out the easiest possible solution that compiles (like that outsourced programmer using stackexchange).

    An engineer takes all the requirements and tradeoffs and decides on the optimal solution, which can change wildly given all the constraints and requirements. There is no single best solution for all circumstances. For instance, which sort you use, or which lookup, are /highly/ dependent on the data and the needs. Maybe it's mostly or completely in order, maybe it's not. Maybe you can hold it all in RAM, maybe you can't. Maybe there's a best way to index it given what you know about the dataset. An AI has zero clue about any of this.
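
    The sorting point is easy to make concrete. A rough, illustrative benchmark in Python (sizes picked arbitrarily) shows why "which sort" is a context decision, not a lookup:

    import random, timeit

    def bubble_sort(xs):
        # Classic O(n^2) bubble sort with early exit on a clean pass.
        xs = list(xs)
        for i in range(len(xs) - 1):
            swapped = False
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
                    swapped = True
            if not swapped:
                break
        return xs

    data = [random.random() for _ in range(5_000)]
    print("bubble sort:", timeit.timeit(lambda: bubble_sort(data), number=1))
    print("built-in   :", timeit.timeit(lambda: sorted(data), number=1))
    # Already orders of magnitude apart at 5,000 items; at 50 million
    # records the O(n^2) version is simply not an option.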

    A code pig ('goofus') is someone who gets told to write code to do X and has no clue about what they're doing in context. They're called 'code pigs' because they're just in their cubicles, rarely let out, and just kind of wallowing in the poop - the classic Microsoft programmer (or any other large corporate drone). 'Programmers' is the more polite term. Most people called 'software engineers' are not; they're just programmers with title inflation. These people could possibly eventually be replaced by coding 'AI'. The software engineer will meticulously construct a prompt for a single method (as far as you can trust it) and the coding AI might produce some decent code for that method by plagiarizing some code it's already seen from a GitHub repository. And then the software engineer will need to check it, but it still might be faster than dealing with a code pig.

    But there is no way that a coding 'AI' (which has no intelligence) can possibly replace an engineer unless the AI actually becomes generally intelligent... and then all bets are off for everything! The current batch of coding 'AI's could be convenient autocompletes for small sections of code, like GitHub's Copilot is (but again, you have to check its output; about 3/4 of it is defective without tweaks). So again, for someone who knows what they're doing, it will be a tool they can use or not use.

    • Amen

    • I think the best an AI could do is to cut-and-paste from code examples. I seriously doubt it can exhibit the creativity needed to write original code. Not that I haven't done cut-and-paste myself, but I DO write original code as well (and, wow, have I made original mistakes!)

  • by istartedi ( 132515 ) on Sunday June 04, 2023 @04:05AM (#63574789) Journal

    A lawyer friend once explained to me that there were references in law books to laws that were no longer on the books.

    I replied that this was just like a "use after free" error in a C program.

    While there may be fewer programmers in the years to come, and a lot of simple cases will be automated, we'll still need auditors.

    The law is full of specialized jargon, much like computer code, and I suspect that if we replace programming with English, it will soon become a specialized language much like law, which is code!

  • Just In Time (Score:3, Insightful)

    by rally2xs ( 1093023 ) on Sunday June 04, 2023 @04:05AM (#63574793)

    While this looks like a serious threat to an entire way of life for millions of people employed in "things software", it might be just in time for some other problems that are not currently solvable on a practical basis.

    There are probably billions of lines of code written in obsolete languages like COBOL, and even very specialized languages in military computers that force the use of ancient computers that load from paper tape, magnetic tape, floppy disks, etc., which everyone would really love to replace. But rewriting all that code, and, possibly even more expensive, testing it, to target, say, Intel chips, is just prohibitive.

    Having AI that can look at the machine language of a computer and spew out human-readable code, complete with comments on not only what it's doing but why, will, if it can be produced, be a huge leap in taming the expense of replacing ancient computers with newer things.

    As it stands, the cars we're buying today have probably millions of lines of software targeting very specific CPUs, which is going to make 30-year-old cars into automotive bricks. Even if we can dispense with the computers running the internal combustion engine, the software doing other highly indispensable things like air conditioning, heating, navigation and so forth will die with those highly specific computers, because they cannot be replaced at reasonable expense. I'm going to be involved with a road rally called "Great Race" in a couple weeks, which is a rolling old-car museum of stunningly well restored cars from the beginning of cars all the way up to the 1970s. All those cars still run the way they used to, and are completely serviceable, even if the brakes may be scary and the acceleration is measured with a calendar. They will leave St. Augustine, Florida and arrive in Colorado Springs, CO the following week, just like they would have 50 to 100 years ago. But in the future, there are not likely to be collectible cars from the age of computers, since the relatively fragile silicon components will eventually release the little packet of smoke built into them at the factory and cease to function, and finding working replacements will be nearly impossible.

    But if we could grab a Raspberry Pi from the shelf, and have an AI translator that could look at the old machine code produced by a compiler that no longer runs on any existing computer, and produce code for the Raspberry Pi that yields the same outputs as the old automotive computer, maybe billions of dollars' worth of otherwise completely serviceable vehicles could be made to continue to run at reasonable expense.

    So if your nuclear submarine is still storing and loading its software from floppies, maybe it could be updated to load from more contemporary sources, if the input devices could be replaced with commonly available (translated: cheap) mass-produced devices, and the software would be provably correct every time.

    I think we desperately need this whether we realize it yet or not.

    • Re:Just In Time (Score:5, Interesting)

      by narcc ( 412956 ) on Sunday June 04, 2023 @05:46AM (#63574893) Journal

      There are probably billions of lines of code written in obsolete languages like COBOL

      COBOL is far from obsolete. The world still runs on COBOL, and for good reason.

      But in the future, there's not likely to be cars up into the age of computers

      There are already open source (hardware and software) EMS / ECU systems produced by the hobbyist community.

      But if we could grab a Raspberry Pi from the shelf, and have an AI translator that could look at the old machine code produced by a compiler that no longer runs on any existent computer, and produced code for the Raspberry Pi to produce the same outputs as the old automotive computer

      We already have technology that transforms programs. We call them "compilers". They're very useful, but probably not what you actually want. Writing cross-compilers is notoriously difficult, for reasons that should be obvious.

      A much better, and far simpler, approach is emulation. The big advantage here is that you won't need to change anything about the original program. We actually have mainframe emulators in use today, keeping older software in production.

      While emulation is obviously the right choice, either solution is going to produce better results cheaper, faster, and more reliably than an AI. Just try to imagine what would go into training such a system. Writing the cross-compiler would be less work, and you'd probably want to write an emulator as part of that process anyway. To top it off, you couldn't even trust any of the code it produced in the end. AI is just the wrong tool for the job.
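
      To illustrate the emulation idea: a minimal sketch in Python, with a made-up three-byte instruction format (nothing like any real automotive or mainframe ISA). The point is that the original machine code runs unchanged, which is exactly why it stays trustworthy:

      def emulate(memory):
          # Interpret a hypothetical ISA: [opcode, operand_a, operand_b] triples.
          regs = [0, 0, 0, 0]  # four general-purpose registers
          pc = 0
          while True:
              op, a, b = memory[pc:pc + 3]
              pc += 3
              if op == 0x01:    # LOAD: regs[a] <- immediate value b
                  regs[a] = b
              elif op == 0x02:  # ADD:  regs[a] <- regs[a] + regs[b]
                  regs[a] += regs[b]
              elif op == 0xFF:  # HALT: return machine state
                  return regs
              else:
                  raise ValueError(f"illegal opcode {op:#x}")

      # r0 = 2; r1 = 3; r0 = r0 + r1; halt
      program = [0x01, 0, 2, 0x01, 1, 3, 0x02, 0, 1, 0xFF, 0, 0]
      print(emulate(program))  # [5, 3, 0, 0]

      A production mainframe emulator is this loop plus decades of corner cases, but nothing in it requires, or tolerates, guesswork.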

      [...] and the software would be provably correct every time.

      What does "provably correct" mean to you? Also, cheap commodity hardware maybe be unsuitable for some environments. You can't just stick a $25 SBC anywhere you want and expect to it to be as reliable as the hardware it ostensibly replaces just because it's newer.

    • This, this, is why a golden age of automobile collecting is limited, finite, and will be static soon.

      You can buy most any part you need for a '67 Mustang, even door handles, likewise many 50s-60s-70s cars, they are unique and desirable, and so collectible.

      Will you be able to buy replacement dashboards for any of the modern automobiles so heavily computerized? Will the software be rewritten for the available hardware 30 years from now, accommodating a necessary conversion to electric drivetrains (hello, mand

    • by Jeremi ( 14640 )

      So if your nuclear submarine is still storing and loading its software from floppies, maybe it could be updated to load from more contemporary sources, if the input devices could be replaced with commonly available (translated: cheap) mass-produced devices, and the software would be provably correct every time.

      The rub is in "and the software would be provably correct every time".

      Provability is very much not a characteristic of how AIs operate -- just the opposite, in fact. Their main problem is that their results are unreliable, and nobody has the first clue about how they derived them.

      The reason the nuclear submarine is still running decades old code in an obsolete language is because the Navy's foremost programming experts don't trust themselves to rewrite it without making some mistake and getting someone kil

      • " don't trust themselves to rewrite it without making some mistake and getting someone killed"

        Nobody should :"trust themselves" with software. Rewriting it means testing it 6 ways from Sunday so's you don't get someone killed. Testing is very expensive, and is money that can be saved by buying more floppies. $10 million to test the rewritten software, or $10K to buy more floppies.

  • Lotsa luck! This is what, the three hundred and fifty-seventh thing that was going to let MBAs give fuzzy and incomplete ideas to a piece of software and have it magically crank out bug-free software?

    While ChatGPT is clearly more sophisticated, all of this reminds me of people reacting to Eliza many years ago.

    In order for ChatGPT to successfully produce any program on its own (rather than just cut-pasting from stackoverflow.com), you would have to tell it what to code in English at a fine-grained level. So fin

  • by jlowery ( 47102 ) on Sunday June 04, 2023 @04:39AM (#63574833)

    "Over much of the history of computing, coding has been on a path toward increasing simplicity."

    Perhaps, but problems got more complex. In my 35-year career, I went from desktop applications that used 50 80-character lines for display and stored their data on a single dedicated server with a 30-megabyte hard drive...

    My current project uses React with hooks, Node with serverless functions on a web hosting service, lots of fancy CSS, a NoSQL database hosted elsewhere with an API in GraphQL, and libraries, libraries, libraries written by 3rd parties, constantly being updated.

    None of this was even imaginable 15 years ago. Do we have better applications? Yes, much better. Are they simpler to write than those 30 years ago? Uh...nope.

    • "Over much of the history of computing, coding has been on a path toward increasing simplicity."

      Perhaps, but problems got more complex. In my 35-year career, I went from desktop applications that used 50 80-character lines for display and stored their data on a single dedicated server with a 30-megabyte hard drive...

      My current project uses React with hooks, Node with serverless functions on a web hosting service, lots of fancy CSS, a NoSQL database hosted elsewhere with an API in GraphQL, and libraries, libraries, libraries written by 3rd parties, constantly being updated.

      None of this was even imaginable 15 years ago. Do we have better applications? Yes, much better. Are they simpler to write than those 30 years ago? Uh...nope.

      Yeah, in what world has programming got simpler?

      If anything, it's become a guild where the gatekeepers deliberately make it as complex as possible by grabbing at every new idea and library they can.

    • by Junta ( 36770 )

      The same application you wrote 30 years ago is simpler to make, if implementing the same UI and general design.

      I'd still posit that it's even easier to make a nice modern looking interpretation of that same application than it was to make that program back in the day. The choices can be paralyzing (am I using Angular? React? Svelte? Vue?) and some of the hyped 'patterns' are frequently not what they are cracked up to be, but peer pressure causes them to be over implemented... However once you understand t

  • May be journalism (Score:5, Insightful)

    by ugen ( 93902 ) on Sunday June 04, 2023 @04:46AM (#63574845)

    Not too sure about programming. But I bet current AI can do a great job writing speculative misinformed clickbait to fill pages, better than most NY writers. I think someone's got to be worried.

  • by Required Snark ( 1702878 ) on Sunday June 04, 2023 @04:49AM (#63574849)
    Just because something is called "Artificial Intelligence" does not mean that it is intelligent.

    The history of "AI" is a sequence of ever changing definitions of what consists of intelligent activity. In the 1950's it was assumed that playing checkers or chess showed intelligence. In the 60's the ability to do freshman calculus was the test. After that came natural language parsing, which was conflated with language understanding. By the 80's it was expert systems and Prolog. In the 90's robots became the rage. The 2000's had the start of autonomous vehicles and early neural nets, and by mid to late 2010's we ended up with high end ANN and now LLM.

    The examples are not comprehensive or exhaustive, and they show that the definition of AI is always changing. There is, however, an ongoing pattern: when a particular AI definition/fad fails, a new definition comes into fashion and is the Next Big Breakthrough. And in each cycle the hype is more inflated and the money pumped into the technology goes up accordingly. That's the true driving force. Hype and big bucks go hand in hand.

  • by karlandtanya ( 601084 ) on Sunday June 04, 2023 @05:12AM (#63574863)

    Normally I receive a safety design that has been approved by the customer and the equipment vendor. It's a document that says, in a formal way, "when these things happen, or when a person or object is in this area, then that equipment should stop".

    The safety programming is the simplest and easiest type of programming in these systems. It has to be that way, because it is critically important that it be right. The spec is very clearly defined, the safety devices are very simple and very reliable, and there are strict rules for how the logic must be written.
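
    That kind of spec translates almost mechanically into logic. A hedged sketch, with Python standing in for ladder logic or structured text, and with made-up signal names:

    def equipment_may_run(e_stop_pressed, guard_door_closed, light_curtain_clear):
        # Fail-safe interlock: run only while every permissive is
        # explicitly true; any doubt or any open input means stop.
        return (not e_stop_pressed) and guard_door_closed and light_curtain_clear

    assert equipment_may_run(False, True, True) is True
    assert equipment_may_run(True, True, True) is False    # e-stop always wins
    assert equipment_may_run(False, False, True) is False  # open door -> stop

    The logic is deliberately trivial; the hard, human part is the site walk that finds the route somebody can climb around the light curtain.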

    Let's say ChatGPT is approved for safety code generation. The project manager fires me and just hands that safety spec to ChatGPT.

    Still, there are always instances of "oh gee whiz, they never thought somebody would do this when they came up with the spec; better make sure they can't get hurt when they do". Some of those are things you could figure out sitting at your desk. Some of them are only obvious when you get to the site and see the physical layout of the system and the ways people could climb over or under or around to get where you don't expect them to be. Let's put these edge cases aside for the moment and focus on the primary issue:

    ChatGPT is famous for generating output that looks right but is factually wrong. It doesn't understand the intent of what it's being asked to do. It doesn't understand anything; that's not even remotely how it works. So I'd expect a safety program that passes validation but does unexpected things during production.

    When somebody is hurt because the safety system was programmed incorrectly, who will pay them or their surviving family?

    The design committee did their job correctly; the safety spec was valid. The project manager used an approved AI to generate the code. The AI was certified to be compliant with regulatory standards (OSHA, NFPA, etc.). The equipment vendor supplied safety devices certified in the same manner. The operator followed safety rules when operating the equipment.

    Somebody got hurt, and no one is accountable. I realize that in the boardroom this is a feature, not a bug, but on the shop floor it is not a good feature.

    All the same arguments about safety can be made about any programmatic output that people actually care about. Factory equipment safety failures happen to be a low-probability, high-stakes example.
    If you want higher stakes, consider power plant burner control systems. Consider petrochemical refinery controls. Consider medical device and drug manufacturing.

    I remember when safety systems had to be hardwired. No programmatic involvement in safety allowed. Mechanical switches and relays, because software was just not reliable.

    AI is not yet reliable enough to be trusted with safety or process control.
    Not yet.

  • A long time ago, there was a movement to explain what the computer should do in more or less plain English instead of mysterious codes. It was called COBOL. It, and other high-level languages of that time, did indeed change coding a lot. But the need for programmers did not go away at all.

    The real art of programming includes being aware of different failure modes, error handling, and considering malicious user input, as well as a deep understanding of what the program is supposed to do, and finding an accepta

  • by rkit ( 538398 ) on Sunday June 04, 2023 @05:14AM (#63574869) Homepage
    If all you know is copying code from the web, this will be a problem.
  • ...killed by programmers. This is what will happen with AI, hoping that this will bring back serious journalism, as we knew it before.
  • Basically, the idea of intelligent/expert system compilers that can generate code from highly abstract descriptions is fifth generation programming. This has been talked about for as long as I've been a programmer (I started in 1978), and I seriously doubt ChatGPT is at the point where it could implement it usefully. As far as I can tell, code produced by AI systems tends to be of very poor quality (bug-ridden, unreliable, with tons of security defects). Of course, that won't stop companies using ChatGPT co

    • by Junta ( 36770 )

      To further the point: a lot of the folks I've talked to, if they do admit that ChatGPT isn't there yet, will declare "oh, but it came out of nowhere and is like 80% there, so it'll be a done deal in the next year or two".

      Which ignores the fact that it didn't come out of nowhere: over a decade ago, IBM demonstrated almost this level of functionality on Jeopardy. However, despite that demonstration, they weren't able to translate it into business value. Now OpenAI has made it iterative and giv

  • I get the feeling that, whatever level of automation happens, there will be programmers working 60-hour weeks. Even as the pool of jobs shrinks, there will be overworked folk still fighting bosses to work remotely or keep a manageable workload!
  • "one that doesn't require us to learn code but instead transforms human-language instructions into software."

    Transforming human-language instructions into software. Didn't that used to be known as pseudocode?
  • I agree. I have always thought that it was bizarre to teach children "coding". To me, that was like someone in 1920 teaching kids to use a phone switchboard: "It's the future!"

    I mean, we all saw how Captain Kirk talked to the computer, and it was able to act on his instructions. And I recall episodes of both Superman (the original TV series) and The Outer Limits (original) in which people spoke to computers, and the computers understood and acted.

    So did we not see that programming would be something interim

  • .. using all new code.

    (hit enter, wait)
    (wait)

    What comes next?
    • Presumably, it takes everything it has tagged as 'Linux' and plagiarizes the hell out of it while also scrambling it a bit to obfuscate the origins.

      This will result in something that won't boot, but requires an improbably large ISO to install.

  • You ever do that challenge where you get a few teams with a few Lego blocks and one team member tries to explain a diagram of a shape to make whilst the rest of the team try to assemble the pieces? And even with a few pieces, the shapes end up being radically different between teams? If we do get rid of programming, we need to make Business Analysts a whole lot better!
  • Over the years I've coded in PL/I, Fortran, BASIC and Pascal. Then I tried to learn Java. I'm a chemist, not a programmer, so it was never full time. I just never could get past the complexity of Java. I can't see how adding AI into the mix is going to make it simpler. You still have to figure out what you want the code to do, and you have to check whether the AI did it right. Given ChatGPT's tendency to lie, I don't know that I would trust it.
  • Symbolic maths will be the arbiter for determining AI compositional-programming success. Any machine that can grok symbolism at the level of higher mathematics ends human drudgery.

  • Or how did the whole "no code" thing turn out?

    Sorry. AI isn't the panacea of all things. This guy just made a click-bait article.

    AI cannot "create" it can "generate" and this is a significant difference. It can only generate based on what it's trained to do. But if you want something that hasn't been done before, you need a human to work that out.

    AI can be a good "thumbnail" thing to start finding new options that may be already within the pattern but just not seen by the human eye.

    AI can help resolve thing

  • He's describing the transition from what was a 3GL language model for coding phase zero ("unconscious") to the same programming phase, but using 4GL tools.

    As an aside, I wish I could find the reference to the "N programming phases", where phase zero is "unconscious", i.e. unaware one's actions are programming a machine (e.g. spreadsheet macros).

    Tell the machine what you want, not how to do it. It's an evolution of the language model, just as Lem described in GOLEM XIV/Imaginary Magnitude.

    LLM tech

  • There are already shitloads of totally useless "coders" out there. Writing programming code is so simple any idiot can do it -- and many idiots do. The hard part, which will not be solved by Statistical Modelling (aka ML/AI), is "how" to solve the problem and "what" is to be achieved by the cut'n'paste code.

    It has been this way since the dawn of "making machines do useful stuff". The hard part is designing "how" to accomplish what is desired. Reducing that to instructions that an idiot (or a machine) ca

  • NYT: It's the End of Computer Programming As We Know It

    "This article was written by ChatGPT."

  • So even more 'developers' won't know what is going on and how stuff actually works. I like creating applications, and I like to know how they work so I can fix them when they don't. I'm well aware that AI will take my job within a decade or so, and I'm also well aware society isn't ready for so many people without a job.
  • Didn't Jaron Lanier say something along the lines of the AI future basically being a planet of help desks?

  • After years of everyone telling journalists to learn to code when they lose their jobs, the journalists are back to tell programmers to learn to write in plain English when they lose theirs!

  • by Yo,dog! ( 1819436 ) on Sunday June 04, 2023 @11:47AM (#63575427)
    AI has a long way to go before it can be trusted to produce desired code. Practically every request I've made of ChatGPT (including 4.0) does not pass muster. Programming skills are often required to identify deficiencies in generated code, and programming skills are necessary to write adequate prompts. Yes, AI may sometimes be helpful to a programmer, but I've found the current AI often poses a hindrance, because it doesn't understand the problem area, and composing an adequate, sufficiently detailed prompt, and then evaluating the generated code, isn't worth the time and effort of a seasoned programmer.
  • by Dan667 ( 564390 ) on Sunday June 04, 2023 @12:07PM (#63575463)
    I have been waiting and waiting and waiting to be replaced, but guess what, it never happened. In fact, ironically, I have had to come in behind outsourced code and try to fix it. I expect that is what is going to happen when bean counters start trying to replace programmers with AI. They will have some small success and then fall on their faces over and over again, like what happened with outsourcing.
