AI Programming

Analyst Mocks the Idea That It's 'The End of Programming' Again (zdnet.com) 97

January's Communications of the ACM includes an essay predicting "the end of programming," in an AI-powered future where "programming will be obsolete."

But IT analyst and ZDNet contributor Joe McKendrick remains skeptical, judging by a new essay sardonically titled "It's the end of programming as we know it — again." Over the past few decades, various movements, paradigms, or technology surges — whatever you want to call them — have roiled the software world, promising either to hand a lot of programming grunt work to end users, or to automate more of the process. CASE tools, 4GL, object-oriented programming, service-oriented architecture, microservices, cloud services, Platform as a Service, serverless computing, low-code, and no-code all have theoretically taken the onerous burdens out of software development. And, potentially, threatened the job security of developers.

Yet, here we are. Software developers are busier than ever, with demand for skills only increasing.

"I remember when the cloud first started becoming popular and companies were migrating to Office 365, everyone was saying that IT Pros will soon have no job," says Vlad Catrinescu, author at Pluralsight. "Guess what — we're still here and busier than ever."

The question is how developers' jobs will ultimately evolve. There is the possibility that artificial intelligence, applied to application development and maintenance, may finally make low-level coding a thing of the past.... Catrinescu believes that the emerging generation of automated or low-code development solutions actually "empowers IT professionals and developers to work on more challenging applications. IT departments can focus on enterprise applications and building complicated apps and automations that will add a lot of value to the enterprise."

Even the man predicting "the end of programming" in an AI-powered future also envisions new technology that "potentially opens up computing to almost anyone" (in ACM's video interview). But in ZDNet's article Jared Ficklin, chief creative technologist and co-founder of argodesign, even predicts the possibility of real-time computing.

"You could imagine asking Alexa to make you an app to help organize your kitchen. AI would recognize the features, pick the correct patterns and in real time, over the air deliver an application to your mobile phone or maybe into your wearable mobile computer."
This discussion has been archived. No new comments can be posted.

  • by Big Hairy Gorilla ( 9839972 ) on Sunday January 01, 2023 @01:40PM (#63172406)
    "AI" is dumb as fuck. Why? Because "AI" is only as good as it is trained. Kind of like your child. If you keep your kid off the iPad and the internet, send him/her to a good school, they might turn out well. If you beat your kid, you'll raise a passive aggressive bully... like a nazi.

    Basically, NO. AI will be writing buggy bullshit for a looooong time. It takes a real person with some real-world experience to troubleshoot and creatively solve real-world problems.

    Buckle up for a torrent of really shitty software. Yes, even worse than now.
    • by Z80a ( 971949 ) on Sunday January 01, 2023 @01:42PM (#63172412)

      The GPT thing is very good at making text that looks like what you asked it to.
      Which makes it great at making code that seems correct but isn't.

      • by gweihir ( 88907 ) on Sunday January 01, 2023 @02:46PM (#63172548)

        The GPT thing is very good at making text that looks like what you asked it to.
        Which makes it great at making code that seems correct but isn't.

        And that is exactly the point in a nutshell. This thing can emulate style and make things look or sound great. It cannot do the details, though, and in engineering the details are critical.

        • by narcc ( 412956 )

          Well, programming isn't engineering (that's an insult to real engineers) but you're otherwise correct. AI generating code is a parlor trick, not a serious technology.

      • I tried asking one of these services for a C++ function to reverse a string. What it gave me was a nonfunctional mishmash of an in-place reverse and a reverse-the-copy. Either approach might have been correct independently, but the frankenfunction was obviously wrong (it tried to copy from iterators AFTER moving them to equality). Of course, I tried the same query again a few days later and got a call to std::reverse. So, eh.
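        (For reference, a minimal sketch of what either correct answer looks like on its own: the in-place version and the reverse-the-copy version described above. Function names are just for illustration.)

        ```cpp
        #include <algorithm>
        #include <cassert>
        #include <string>

        // In-place reversal: mutate the argument.
        void reverse_in_place(std::string& s) {
            std::reverse(s.begin(), s.end());
        }

        // Reverse-the-copy: build a new string from reverse iterators,
        // leaving the original untouched.
        std::string reversed_copy(const std::string& s) {
            return std::string(s.rbegin(), s.rend());
        }

        int main() {
            std::string a = "hello";
            reverse_in_place(a);
            assert(a == "olleh");
            assert(reversed_copy("abc") == "cba");
        }
        ```

        Mixing the two approaches, as described, is exactly where the iterator bookkeeping falls apart.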
    • by Rick Schumann ( 4662797 ) on Sunday January 01, 2023 @02:06PM (#63172470) Journal
      All these people who are all over AI's jock constantly? I'm convinced they think that if you throw enough hardware and so-called 'training data' at it, that it will 'magically' wake up, become self-aware, and start talking to you like it's alive or something. It's like too many people read The Moon Is A Harsh Mistress when they were kids and have it ingrained in their brains that somehow 'Mycroft' can be real or something.

      Your analogy about raising a child is apt, but not in the way you intended: unlike so-called 'AI', a child is capable of cognition, whereas so-called 'AI' is by definition entirely incapable of 'thinking/reasoning'; a child can go beyond what they've been taught, while the machine lacks the ability to do that.

      All this 'AI' crap is the most over-hyped garbage I've seen in my entire life.
      • by ls671 ( 1122017 )

        Mycroft seems real enough to me:
        https://en.wikipedia.org/wiki/... [wikipedia.org]

        /s

      • by gweihir ( 88907 )

        Indeed. And that stupidity is _old_. For example, Marvin "the Idiot" Minsky claimed that as soon as a computer has more transistors than a human brain has brain cells, it will be more intelligent. Completely clueless drivel, of course (neuroscience struggles to completely model even a single human-complexity brain cell, and their models use a lot more than one transistor in the attempt), but many people believed it because they cannot fact-check and it came from some "authority".

        "The Moon Is A Harsh Mistr

    • AI does not need to write perfect code. If it copies the right parts from Stack Exchange and glues the pieces together, it meets the capabilities of some programmers and is cheaper. From a management perspective that is very attractive.
    • I think they will just be tools that you learn how to use, and that accelerate some parts of the job. I have done some experimenting with these tools and they did a good job of writing unit tests and some basic documentation. The unit tests were not 100% correct, but they were about 90% correct and saved a lot of time.

      I don't think I would give them to a novice programmer though. What I have seen is they are not experienced enough yet to handle the kinds of errors these things create but for an expert I think th

  • This is a typical symptom of the first stage: blasé
    and it is also what the artists said https://www.genolve.com/design... [genolve.com]
    • Re: (Score:3, Informative)

      The guy who wrote the original article is selling AI. If people believe him he stands to make more money, therefore his opinion doesn't count.

      • by Jeremi ( 14640 )

        The guy who wrote the original article is selling AI. If people believe him he stands to make more money, therefore his opinion doesn't count.

        "Ad hominem [wikipedia.org] (Latin for 'to the person'), short for argumentum ad hominem (Latin for 'argument to the person'), refers to several types of arguments, most of which are fallacious.

        Typically, this term refers to a rhetorical strategy where the speaker attacks the character, motive, or some other attribute of the person making an argument rather than addressing the substance of the argument itself. The most common form of ad hominem is "A makes a claim x, B asserts that A holds a property that is unwelcome, and

        • Ad hominem is trying to discredit an opinion by attacking its holder SPECIFICALLY. In this case it is a general rule: any claim X made by any A who stands to gain something from people accepting it has no standing.

          The reverse is also true: a claim Y made by a B who stands to LOSE from people accepting it needs to be taken seriously.

        • "Ad hominem (Latin for 'to the person'), short for argumentum ad hominem (Latin for 'argument to the person'), refers to several types of arguments, most of which are fallacious.

          A conflict of interest (COI) [wikipedia.org] is a situation in which a person or organization is involved in multiple interests, financial or otherwise, and serving one interest could involve working against another. Typically, this relates to situations in which the personal interest of an individual or organization might adversely affect a duty owed to make decisions for the benefit of a third party.

          Pound sand.

    • Sure, AI can't do the most basic tasks still, but somehow the looming threat is right around the corner. . . I'll worry about AI taking programming jobs when it actually starts happening. Until then it's just a tool for owners to pay programmers less by convincing them there's an infinite pool of robotic programmers waiting to replace them.
    • by ranton ( 36917 )

      One problem with questions like these is:

      1) Even though we probably aren't much closer to AI doing significant programming tasks,
      2) Almost no one will be able to accurately identify when we are close to AI doing significant programming tasks

      I don't see anything in recent OpenAI or other similar technologies which leads me to believe programmers are at risk of being disrupted by AI, but I don't really think I'll know it when I see it.

      • It's important to point out: we are for the most part not asking the AI how to do something, we are asking it to do something for us. In the end most of us won't care if it does a good job or not. We just want things to turn on when we press a button.

  • by FeelGood314 ( 2516288 ) on Sunday January 01, 2023 @01:50PM (#63172426)
    Programming is just describing to a computer what you want it to do. I just finished a 900 hour contract and wrote 20,000 lines of code. I spent less than 200 hours writing the code. I spent 40 hours in meetings, 100 hours debugging, 100 hours writing documentation (that I doubt will ever be read) and all the rest of the time was figuring out what the heck the code needed to do.
    Look at the mess Southwest has. They can't track their pilots and flight attendants because they have a bunch of undocumented scheduling code running on different machines that is so convoluted that they can't integrate a new tracking system into it. AI won't help with that.
    Put it another way: if you knew exactly what you wanted your code to do, it would be a high school project or something you could give to a co-op student. The fact that many of us can bill over $100/hr shows that we aren't hired for our programming skills; it's because we can figure out what needs to be done.
    • It's truly offensive how many people think that putting the word "smart" in front of an electronic item means it thinks or can do something on its own.
    • by JBMcB ( 73720 )

      Look at the mess Southwest has. They can't track their pilots and flight attendants because they have a bunch of undocumented scheduling code running on different machines that is so convoluted that they can't integrate a new tracking system into it. AI won't help with that.

      I've heard they have to reboot their scheduling system every night to keep it functioning. They've been accumulating IT debt for decades at this point.

    • by Big Hairy Gorilla ( 9839972 ) on Sunday January 01, 2023 @02:45PM (#63172544)
      Thank you very much... for a perfect example of what I was calling "real world problems". You have to have experience in the complexity of the "real world", and creativity, to be able to conceptualize a workable solution. AI doesn't have that real-world experience... and it won't for a long, long time. As you say, figuring out what to do is most of the work.

      AI will be writing "Hello World" and impressing managers everywhere, so like I said, get ready for really, really bad software. I'll go further: it will take someone getting killed, or a power plant blowing up, or some disaster, before managers, politicians, and other idiots realize this isn't going to work the way it does in Star Trek.
      • Not yet. But the AI image and text stuff do give me hope that, in fact, the "holoprogram from a few sentences" may one day be possible.
        • Not yet. But the AI image and text stuff do give me hope that, in fact, the "holoprogram from a few sentences" may one day be possible.

          The problem with the current batch of AI generators is that they're no more than a statistical compilation of everything they've seen, grouped by subject. It follows that, from that summary, you can only retrieve content that they have seen elsewhere.

          Surely it can mix and match that content in new ways if you know how to ask carefully for the correct subjects; but it can't be used to solve problems that it has not seen before. It does not have a component of 'creativity' in the sense of building new code to so

          • by shmlco ( 594907 )

            I wrote an article about my experiments with ChatGPT where I asked it to build a SwiftUI form with name, address, city, state and zip fields. It did. I asked it to move the fields to a view model. It did. I asked it to add some validation and it did. I asked it to create a struct from the data, populated from the VM. It did. I asked it to add a function to populate the VM from the struct. It did.

            And it did all of that in less than two minutes.

            I wouldn't ask it to build an entire app from scratch. But it's v

            • I can see dedicated IDEs coming that do this sort of thing automatically. At which point developers will be free to concentrate on other problems.

              This is such an old dream.

              Boilerplate code seen as tedious waste of time. Programmer makes tool that generates boilerplate code automatically. Programmer invents new framework with no boilerplate code. New methods of writing boilerplate code are developed. Signal repeats.

              • by Jeremi ( 14640 )

                New methods of writing boilerplate code are developed.

                What is the motivation that causes people to re-introduce boilerplate code?

                • Typically, the expression of ideas that the creator of the framework didn't foresee or deem important, but that then turned out to be the actual future as opposed to their incorrect prediction of it. It takes a lot of lines of code to build a GUI with procedural code; you can hide that by making some kind of declarative framework, and then add it all back in to get data out of all those private members so the programmer can do something non-trivial. And then someone comes along and makes a "better" framework
              • by shmlco ( 594907 )

                If you have an API that returns JSON and you need to convert that JSON into a struct in the language of your choice, that's largely a tedious, handwritten boilerplate function. Mutating initializers and functions. Marshaling that data to and from a set of form fields is largely just writing more boilerplate code. Creating mocks and creating the API request routines to get and put that data are mostly boilerplate.

                Fetching lists from APIs. Presenting selection lists. Validation. None of it is rocket science a
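                (To make the "tedious boilerplate" concrete, here is a minimal sketch in C++ of the kind of hand-written JSON-to-struct marshaling being described, assuming the nlohmann/json library; the Person record and its field names are hypothetical, chosen only for illustration.)

                ```cpp
                #include <string>
                #include <nlohmann/json.hpp>  // assumed third-party JSON library

                using json = nlohmann::json;

                // Hypothetical record for illustration only.
                struct Person {
                    std::string name;
                    std::string city;
                    int zip = 0;
                };

                // The boilerplate: field-by-field marshaling in both directions.
                Person person_from_json(const json& j) {
                    Person p;
                    p.name = j.at("name").get<std::string>();
                    p.city = j.at("city").get<std::string>();
                    p.zip  = j.at("zip").get<int>();
                    return p;
                }

                json person_to_json(const Person& p) {
                    return json{{"name", p.name}, {"city", p.city}, {"zip", p.zip}};
                }
                ```

                None of it is hard; it just has to be typed out again for every record, which is exactly the repetition that code generators keep being invented to remove.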

      • by freax ( 80371 )

        Not just someone. A few hundred must die first. And maybe a few tech journalists too. Look at the whole self-driving car thing: a few hundred have died already. More will die. And still there are many people who believe that they can stop paying attention behind the wheel of a so-called self-driving car.

    • by jmccue ( 834797 )

      I just finished a 900 hour contract and wrote 20,000 lines of code. I spent less than 200 hours writing the code. I spent 40 hours in meetings, 100 hours debugging, 100 hours writing documentation

      Where do you work? :) Where I work, the 900 hours translate to this: 500 meetings, 100 writing code, 200 keeping the project management system updated, 100 testing (on a good project). Testing usually means only doing enough to tick off the boxes in a form.

      • I was fixing 20 years of legacy code that no one had the documentation for, no one knew exactly what it did or how, and no one in the company wanted to own up to ever having touched the project, even though this code was the key piece of the entire $2.47B (as of 4pm Dec. 30) company. The image size was 215KB and had to be under 128KB to allow an OTA upgrade to work, as there was a new feature that was needed.

        Don't take contracts to fix technical debt. You will get zero support, you are the lowest of the l
    • Put it another way: if you knew exactly what you wanted your code to do, it would be a high school project or something you could give to a co-op student.

      Respectfully, you and other professional programmers underestimate how very rare it is to have the capacity to write functioning code. I teach 2nd-semester (community) college CS majors, and the majority of them can't write a for loop, declare an array, or understand scoping. Even if they can do that, then it's likely they can't read a specification written in clear English.

    • Exactly this.

      Programming is translating human intent into a format the computer can understand—code. People see an AI generating code and say, “It’s coming for your jobs, programmers”, but for that to be the case those people would need to be able to perfectly express their intent to a computer, or at least validate that their intent had been successfully expressed...at which point they themselves would be programmers, just with a natural language rather than a formal one.

      What they

    • by sjames ( 1099 )

      Way back in the before time, we had Junior Programmers, the greenest of which were called "code monkeys". They would do the tedious actual writing of code to create functions specified by Senior Programmers. The job required a high school diploma and good results on an aptitude test. Meanwhile, the Senior programmers educated them so they could become Senior Programmers after a few years.

      The Senior programmers did a lot more thinking and specifying and a lot less actual coding. They were in short supply so

  • AI isn't going to replace programmers immediately. But what it is going to do is reduce the amount of work they need to do, which means either more programming will get done, or there will be fewer programming jobs.

    The AI commonly produces bad answers... but it also commonly produces good ones. Programmers will spend more of their time writing test cases, which are a good idea anyway, and some of the software will be written by the computer.

    Writing good test cases is hard, but it's already necessary.

    The point
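    (A hedged illustration of that division of labor; the function and the checks below are hypothetical, not from the post. The human owns the tests, and whatever the machine generates has to pass them before it is accepted.)

    ```cpp
    #include <cassert>

    // Stand-in for a function whose body might come from an AI assistant.
    bool is_leap_year(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    int main() {
        // The human-owned part: tests that pin down the required behavior.
        assert(is_leap_year(2024));
        assert(!is_leap_year(2023));
        assert(!is_leap_year(1900));  // century year, not divisible by 400
        assert(is_leap_year(2000));   // divisible by 400
        return 0;
    }
    ```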

    • by gweihir ( 88907 )

      That will not work and cannot work. Artificial Ignorance coding will have very specific security problems in it that are not accessible to test cases, because they will be too complex; testing for security is already hard for simple things, and generally several (!) orders of magnitude harder than regular testing. But attackers that find these problems can then exploit a mass of systems with them, because the same mistakes will be in a lot of different code.

      Testing is very limited in what it can do and can never ass

      • Can we teach an AI pen testing?

        • by gweihir ( 88907 )

          Can we teach an AI pen testing?

          Nope. And there is no need to. All the standard attacks are already automated. The rest requires a creative and experienced human driving things. Caveat: While I have not personally pen-tested things (well only so minimally it really does not count), I have closely worked with pen-testers and know several.

          Incidentally, pen-testing is very limited in what it can do. It cannot replace an in-depth security review. It cannot replace a code review. It cannot do anything beyond very shallow things. And it is never

  • "End of programming" is b the new "year of Linux desktop"
    • I'll see if I can run with that analogy a bit further.

      Your original goals or predictions can come about in a completely unexpected way. "Linux on the desktop" was sort of code-speak for "we want mass adoption". At this point, smartphones have largely replaced what a PC used to be for most people (basic digital consumption, communication, entertainment, and simple personal tasks), so the desktop really isn't even the ultimate mass-adoption target anymore. But Linux is used almost everywhere else... everyw

  • In the past computers have been really good at some things, really bad at others. Some of the things they were bad at, humans were good at. That's where AI is having a big impact. It lets computers be good at the things they used to be bad at but humans were good at.

    That doesn't change the things computers have always been good at. If you need a program to process financial transactions or a device driver for a new GPU, you aren't going to write it by training an AI model. You need code that follows we

    • "Machine Learning" does not exist. It (and AI) are moron-speak for Statistical Modelling.

      • by Jeremi ( 14640 )

        "Machine Learning" does not exist. It (and AI) are moron-speak for Statistical Modelling.

        Sure, but in the long run, it's the morons (i.e. the general public) who decide how words are used, and therefore what those words mean.

        So if the world decides that it's valid to use the word "literally" to emphasize a figurative (i.e. non-literal) point, then eventually that is what the word will mean, and its old definition will fall by the wayside. It's stupid, and there's not a lot you or I can do about it.

        Similarly, if the world decides to refer to Statistical Modelling via the name "Machine Learning"

      • by narcc ( 412956 )

        The "morons" in this case being the people who invented the field. "Morons" like Marvin Minsky, John McCarthy, and Claude Shannon.

        SMH...

    • If you need a program [...] for a new GPU, you aren't going to write it by training an AI model. You need code that [...] produces exactly the right result every time

      I agree, sure, but would you please mind letting AMD know?

  • It strikes me that while the amount of grunt work necessary to make any kind of "app" has gone down somewhat over the past 30 or 40 years, the grunt work has always required the smallest portion of the developer's mental cycles, compared to the actual business logic.

    At work we've got code, some of which dates back to the 80s. Aside from some references to the memory structure of the VAX or whatever it originally ran on, most of the code is generally equivalent to what one would write today. In some places t

    • by gweihir ( 88907 )

      Indeed. The fact of the matter is that coding is a creative act that does require understanding. Like all engineering design work. And if you look at established engineering fields (which coding is not at this time), the one part they can never get rid of is the engineer doing the thinking and the designing.

    • by narcc ( 412956 )

      Programming has changed significantly over the last 40 years. Not necessarily for the better, but it has undoubtedly changed.

  • The threat of automation is always used to beat workers into submission. Whenever truck drivers, factory workers, doctors, lawyers, heart surgeons, restaurant workers get uppity, they wheel out a barely functioning robot of some kind to show the workers they're replaceable. Yet, in 2022 owners were crying "NOBODY WANTS TO WORK (our jobs that are so low wage it costs more to work the job than not work at all)." And we're blaming inflation on wage increases.

    Generalized AI is still 100+ years out. Until then it's gonna be another tool, like a wrench or an IDE, that might make the programmer more efficient, but ultimately will still have the role of taking stakeholder requirements and converting them into actual implementable solutions and then actually implementing them.
    • by Srin Tuar ( 147269 ) <zeroday26@yahoo.com> on Sunday January 01, 2023 @04:27PM (#63172734)

      > Generalized AI is still 100+ years out.

      Hard disagree. We have no definition of intelligence yet, or even a basis from which to describe it. We can measure it, but the only true measure seems to come in high-stakes games for which humans are the only viable participants. The AI developed so far is little more than a tool for a human to use, and not a competitor to a human.

      The best we can do right now is measure intelligence with super primitive means such as Turing tests. There has been zero, ZERO progress on AI since the idea was conceptualized. If there was an honest speedometer on AI progress, it would be reading a flat 0 mph since the 1940s, never even blipping up a micron.

      ML is interesting and useful, but has nothing to do with AI. In fact, one key sign that ML is not AI is that it is only useful in collaborative problem spaces and utterly fails in contentious ones. It's a fancy pattern recognizer, no more intelligent than a coin sorter in a vending machine.

      Until we at least have some kind of theoretical or logical way to analyze what intelligence or sentience is, we can't make any extrapolation of when it can be created by us. 100 years is an arbitrary and short timeline for something which might not be possible in 1 million years.

      It may simply not be possible for a given intelligence to purposefully create another intelligence even half as smart as it is, much less smarter.

      • Fair, 100+ simply means "not in my lifetime" even if anti-aging technologies actually hit in my lifetime.
      • by narcc ( 412956 )

        We can measure it [...] The best we can do right now is measure intelligence with super primitive means such as Turing tests.

        The Turing test is in no way a measure of intelligence.

        ML is interesting and useful, but has nothing to do with AI.

        On the contrary, ML is AI. You just want AI to refer to something completely different. The term AI covers a broad range of things that you wouldn't consider "AI", like decision trees, linear regression, or clustering.

        Is the term misleading? Absolutely! But all the complaining in the world isn't going to change the meaning. That ship sailed in 1956, at a conference at Dartmouth. We can blame John McCarthy, who was more interested in questions

    • Generalized AI is still 100+ years out.

      I used to think like that, too. Then, after reading article after article about the progress AI has made in the last 50 years, I realized that I was short by at least an order of magnitude.

      I'm convinced that it is at LEAST 1000 years out.

      Modern AI hype very closely resembles the notion from The Time Machine that steam power will enable time travel. It just isn't going to happen. We have neither the hardware nor the software to make AI anything more than glorified code completion.

    • If your job is so trivial that it can be automated, then it *should* be automated.

      Factory jobs are an example. These jobs are mind-numbing, dehumanizing work. Automation is a good thing, freeing people to do more human things with their time. Yes, I realize that some people can't be, or don't want to be, retrained. Change takes time, but that doesn't mean change shouldn't happen.

  • As we develop human knowledge and skill, it is mechanized so that we can use less-skilled labor. One of the earliest examples is the Jacquard loom, a programmable weaving machine that was powered by a semi-skilled human but programmed with the knowledge of skilled and creative weavers.

    One impetus for the Babbage engines was to create the navigational and mathematical tables. Skilled mathematicians would create the non-linear brackets and then semi-skilled labor would compute the linear intervals. It would be

  • ... but keep in mind that the "AI" Alexa provides is actually just warehouse sweatshops of real people. They sell it as AI but it's the furthest thing from it.

    • If Alexa is really powered by people (in any significant way), they aren't worth their food rations. Or it's amazing how perfectly they make mistakes that machines would make, so as to hide their real nature.

      • You mean like selling your data to organized crime? Yea, sure that's totally a mistake a machine would make. /sarcasm

  • by gweihir ( 88907 ) on Sunday January 01, 2023 @02:35PM (#63172516)

    But I have gotten more tired of the same stupid crap being claimed again and again and again. Programming is engineering (No, I will not discuss this, if you cannot see it, then that is a limitation on your side.) and engineering is hard and cannot be automatized because you need to understand what you are doing. All the stuff that could be "automatized" has already been put into libraries or can be put into libraries. For the rest, it is just not possible. Artificial Ignorance is dumb as bread and can only do statistical classification, pattern matching and lookups in catalogs. It has zero clue what it is doing. It has no insight. It has no common sense. And it will remain that way for the foreseeable future, because we have absolutely nothing, not even in theory, that could do better. (Just for completeness: Go away physicalists, nobody knows whether humans are "just machines", but it very much does not look that way at this time. Take your deranged, self-denying religion someplace else.)

    As to the claims of "programming going away" or "being automatized", these are basically as old as programming itself. When I went to university about 30 years ago, the 5GL project that was supposed to automatize programming using constraint solving had just failed resoundingly. The idea was that you specify your problem using constraints and the machine generates the code. Turns out constraint solving is too hard for machines at the complexity needed. Also turns out (as a side result) that specifying a problem using constraints is about as hard as writing code for it directly and requires more expertise and experience. Since then, these moronic claims have cropped up again and again and again. I have no idea what the defect of the people making these claims is, but it must be a serious, fundamental defect, because they will just not go away and refuse to learn from history.
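    (A minimal sketch of that trade-off, illustrative only and not one of the 5GL systems mentioned: sorting written as a constraint-style specification a solver would have to satisfy, next to the direct implementation. Writing the specification precisely enough for a machine takes roughly the same care as writing the code itself.)

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <vector>

    // Constraint-style specification: the output must be ordered and must be a
    // permutation of the input. This says *what* is wanted, not *how* to get it.
    bool satisfies_sort_spec(const std::vector<int>& in, const std::vector<int>& out) {
        return std::is_sorted(out.begin(), out.end()) &&
               std::is_permutation(in.begin(), in.end(), out.begin(), out.end());
    }

    // Direct implementation: *how* to produce it.
    std::vector<int> sort_directly(std::vector<int> v) {
        std::sort(v.begin(), v.end());
        return v;
    }

    int main() {
        std::vector<int> in{3, 1, 2};
        std::vector<int> out = sort_directly(in);
        assert(satisfies_sort_spec(in, out));
    }
    ```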

    • by alw53 ( 702722 )
      COBOL was going to make programmers obsolete by allowing managers to code stuff themselves in plain English :)
      • by gweihir ( 88907 )

        Ahahaha, yeah, I forgot that classic fiasco from around 1960 (!). To be fair, I was not born yet at that time.

        • by narcc ( 412956 )

          Fiasco? COBOL was wildly successful!

          No, it didn't eliminate programmers, but it did allow non-experts to understand and audit code to some degree.

          The world still runs on COBOL. It's easy to read, easy to write, and crazy fast and efficient. All those failed COBOL to Java projects failed for a reason. COBOL is hard to beat.

    • by narcc ( 412956 )

      Programming is engineering (No, I will not discuss this, if you cannot see it, then that is a limitation on your side.)

      Programming is absolutely nothing like engineering. Why are programmers always pretending to be something that they're not?

      Don't worry, I'm not asking you to discuss it. There is nothing to discuss. This is simply reality vs fantasy.

  • by Stephan Schulz ( 948 ) <schulz@eprover.org> on Sunday January 01, 2023 @02:38PM (#63172518) Homepage
    There is a widespread misunderstanding among non-technical people that the hard thing about programming is expressing the ideas in the idioms of a particular programming language. That is the part they can see and fail to understand. But that is, in fact, the most trivial part of programming - the hard part is understanding the domain, understanding the requirements, and coming up with the ideas about how to solve the problems and structure the program. Producing snippets of code for trivial tasks is not a significant step forward.
    • This is a great and concise observation, should be upvoted.

    • And the solution has to run in reasonable time on reasonable hardware. And be completed on time and on budget.
    • ChatGPT is already at the "structure the program" stage.

      I asked in rather general terms for ChatGPT to generate me a program for a rather complex UI simulation using tools I was aware of but not yet proficient in. Apparently I asked for too much so ChatGPT didn't generate code, it generated an outline for the steps that code would need to do (structure). I then asked for code for each of those steps. ChatGPT either provided the code or broke down the step into smaller steps. In the end it had generated

  • by AlanObject ( 3603453 ) on Sunday January 01, 2023 @02:38PM (#63172522)

    I have been programming (it used to be called that) for pay for half a century now, and I can't recall a year when it was not predicted that the job of programming was going to be automated. In just a year or two, look at all the progress we are making.

    There is something about the lay understanding of technology that promulgates this grail as something that is real. So we keep getting these predictions.

  • It's less programming. Better tools mean you need fewer programmers. If you program for a living, that means same supply, less demand.

    I'll never understand why people think supply and demand doesn't apply to their wages.
  • I'm not an expert in AI by any means but I feel like AI is still far away from "understanding" what it's doing or working with and perhaps understanding is the most important part of trying to translate something into code.

    As to whether "understanding" requires consciousness, I've no idea. I hate the term because it's so loosely defined and real wet AI cannot quite understand it and whether it's necessary at all to be truly intelligent. E.g. many biologists claim that even simple forms of life such as gras

    • AI == Coder. A monkey that knows not what it is copying and pasting, merely that it is a statistically significant snippet.

      Copying and pasting a bunch of "Statistically Significant Snippets" does not a working program make.

      There exist exactly zero working programs created by this method. They have all been massive and spectacular failures.

  • by bluegutang ( 2814641 ) on Sunday January 01, 2023 @02:57PM (#63172574)

    The Jevons paradox [wikipedia.org] shows how by decreasing the amount of a resource needed to make a product, you can actually increase the amount of resource that is used. As the product becomes cheaper (because less of the resource is needed), demand for the product rises enough to offset the smaller amount of resource per product.

    It's entirely possible that this "paradox" is relevant here, with programmer labor as the resource in question. The cloud, AI, and other "technology surges" have made programmers more efficient - allowing them to produce more product per unit labor, and thus to sell the product for a lower price (frequently free these days!). This has in turn increased demand for software - perhaps enough to entirely offset the lesser number of programmers needed to make a particular unit of software.
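    (A hedged, made-up worked example of how that offset can happen; the numbers are purely illustrative, not measured: if tooling halves the programmer-hours per unit of software but the lower price triples the amount of software demanded, total programmer-hours rise.)

    ```cpp
    #include <cassert>

    int main() {
        // Illustrative numbers only.
        double hours_per_unit_before = 100.0, units_before = 10.0;  // 1000 hours total
        double hours_per_unit_after  = 50.0;                        // tools halve the effort
        double units_after           = 30.0;                        // cheaper software, 3x demand

        double total_before = hours_per_unit_before * units_before; // 1000 hours
        double total_after  = hours_per_unit_after  * units_after;  // 1500 hours

        assert(total_after > total_before);  // total demand for programmer labor went up
        return 0;
    }
    ```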

    • by gweihir ( 88907 )

      I don't think so. If Artificial Ignorance was actually capable of generating functional code with some reliability, then yes. But it is not and it will not be anytime soon because that requires some actual understanding and AI does not have that and may never have that. The approaches known today will certainly never get there. Statistical classifiers can have an incredible width of view, but they can never do more than scratch the surface. No approach "trained" on data can as the amount of data needed incr

  • Fewer programmers. Greater margins. More profit. And completely imaginary.

  • Demonstrating once again the power of imagination and its contribution to creating clickbait. Why would I want an app to organize my kitchen when my robot does that for me?
  • So we have two articles of people's opinions on AI now. We going to cover the other 4 million people's opinions in articles next?
    • How did you arrive at the figure of four million?
      • 1.5 billion English speakers, roughly 30% of whom know anything about computers at all, 90% of whom for some god-awful reason think they're geniuses or IT experts, and 1% of them who can't shut the hell up about it despite being proven wrong over and over, which equals 4,050,000, so I just rounded to 4 million.
  • by Casandro ( 751346 ) on Sunday January 01, 2023 @04:08PM (#63172702)

    One would expect that solving a given software problem would require fewer and fewer lines of code. For a while it looked like this would be the case. After all, writing a simple program in dBase (yes, it had a programming language) essentially just consists of defining forms for your data sets. The database itself would take care of all manipulations. The same was true for Delphi, which offered ways of automatically generating forms which you could then edit.

    One would think that such database applications today would be much simpler, but they aren't. Instead people now build multi-layer systems where the functionality is not only duplicated in the database and the GUI, but also in a server layer in between.

    If we follow the trends we see today, we will see people writing more and more code to solve the same problems, while adding more and more external dependencies which continuously get worse and worse. I mean, we now have web apps, written with insane amounts of developer time... yet they are barely able to compete with mediocre desktop software from the 1990s.

    • by narcc ( 412956 )

      There's a reason for this absurdity. Well, more than one, but I'll point out just one: this bizarre aversion to actually writing code. It's all frameworks, plugins, and third-party dependencies. It's bloated projects beyond all reason. Kids are so terrified of "reinventing the wheel" that they'll waste countless hours getting various shady libraries to somehow work together.

      The worst part about it is that it takes longer and makes software larger, slower, less efficient, harder to maintain, and less se

  • I'm a pretty decent programmer. Good enough that I've made a career out of it and none of my code will (likely) ever make it to the Daily WTF. But there are programming concepts that I've always struggled to understand because frankly, the documentation is obtuse and hard to parse, and it wasn't really worth my time.

    For instance, the Wikipedia entry on monads is frankly just obnoxious to read. I program in elisp a bit, so trying to understand monads is mostly about satisfying some curiosity, but something about the article just doesn't click with me and I have to move through it really slowly.

    I asked ChatGPT to explain it to me in simple terms, and it did a good job. It even provided an example in JavaScript. Then I asked it to provide an example in elisp and it did that too. I'm not super concerned about correctness of the code, as long as it's generally okay, and it seems to have done an okay job.

    I've also asked it to document some elisp functions that I've always thought were poorly described (emacs' documentation can really be hit or miss) and it really did a great job.

    I'm not so arrogant as to say that these models won't one day generate a lot of good, usable code, but I honestly think that this ability to collate a tonne of data and boil it down to something understandable could fill in the gaps in a lot of documentation. The longest, most tedious parts of my job very often boil down to research for some engine-specific feature that I need, or some sort of weird platform quirk. For publicly available engines like Unreal, this will honestly improve my productivity quite a lot.
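    (Not the explanation or code ChatGPT produced; just a toy sketch of the Maybe/optional flavor of the idea, in C++ for concreteness. The "monad" part is the bind helper that hides the did-the-previous-step-fail plumbing between steps that can each fail.)

    ```cpp
    #include <cassert>
    #include <optional>
    #include <string>

    // "Bind" for the Maybe/optional idea: run the next step only if the
    // previous step actually produced a value.
    template <typename T, typename F>
    auto maybe_bind(const std::optional<T>& v, F f) -> decltype(f(*v)) {
        if (v) return f(*v);
        return std::nullopt;
    }

    // Two steps that can each fail.
    std::optional<int> parse_digit(const std::string& s) {
        if (s.size() == 1 && s[0] >= '0' && s[0] <= '9') return s[0] - '0';
        return std::nullopt;
    }

    std::optional<int> half(int n) {
        if (n % 2 == 0) return n / 2;
        return std::nullopt;
    }

    int main() {
        // Chaining hides the failure checks; that plumbing is what the abstraction removes.
        assert(maybe_bind(parse_digit("8"), half) == std::optional<int>(4));
        assert(maybe_bind(parse_digit("x"), half) == std::nullopt);
    }
    ```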

  • Predicts the possibility of “real-time computing”. What an amazing concept. Imagine the possibilities if I had a some device that could do computations in real time. How cool would that be? I mean, I could give this device a set of instructions and then this device would do a set of computations in REAL TIME!!!! Think of what people could do!!!!

    I’ll call this device a “computer” and it would run “programs”. You heard it first right here!

    Clearly I have what i
  • Write a program to replace the Southwest Airlines scheduler.
  • This all reminds me of an interesting test of genetic algorithms programming FPGAs a few years ago. There were a few tests where the target FPGA did perform correctly according to the test conditions.

    Then they copied the program into another FPGA, same type, and it failed miserably. Analysis was very difficult because the GA had ignored any standard conventions, but they found the problem finally. The GA had programmed chains of gates to act as analog resonators (against any specification of the chip) that

  • The level of bullshit and outright nonsense they have been publishing has become untenable. This is not an organization I want to continue to be associated with. A pity, really.

  • by ODBOL ( 197239 ) on Monday January 02, 2023 @01:18PM (#63174240) Homepage
    1. Programming (plugging wires into patch boards) was automated by stored binary machine code.
    2. Programming in binary machine code was automated by assembly language.
    3. Programming in assembly language, particularly translating formulae into sequences of individual operations, was automated by FORTRAN (FORmula TRANslator).

    After that, the changes in programming were more incremental, so there are no clear breakpoints. The key point is that every step automating programming changes the nature of programming and the tasks that may be addressed by programs, but doesn't eliminate programming. Perhaps some day the word "programming" will be replaced, but there will still be a task to describe what a computer should do. A number of commenters have made essentially this observation, but I thought it worthwhile to think of the specific big steps from the past.

    Long before electronic digital computers, writers of fantasy and fairy tales observed the inherent difficulty of expressing a desire clearly enough to get what you actually want. My favorite is Five Children and It [gutenberg.org] by Edith Nesbit. That book concerns a Psammead, or sand fairy, which is additionally amusing since, like modern computers, the fairy is silicon-based.

  • Let me see if I have this right: an article in Communications criticizing this idiot idea.

    Hmmm... so what's the IEEE's Computer got to say? That would be the magazine that in January 1994 presented OO as "the silver bullet" for the programming backlog, and that was *literally* the cover of the issue.

  • We live in a time where the regular person in the developed world has EASY access to ingredients and information about how to combine those ingredients, and yet take-out is more popular than ever, we have dozens of meal-kit delivery services, and dozens or hundreds of cooking shows. Even though the process of preparing a meal has been made as cheap and easy as it has ever been in history ... fewer people are preparing their own meals than ever before. In fact, many people spend hours and hours watching co
