AI Programming

Maybe ChatGPT Isn't Coming for Your Coding Job (wired.com) 99

Today Wired published an opinion piece by software engineer Zeb Larson headlined "ChatGPT Isn't Coming for Your Coding Job." Firing engineers and throwing AI at blocked feature development would probably result in disaster, followed by the rehiring of those engineers in short order.

More reasonable suggestions show that large language models (LLMs) can replace some of the duller work of engineering. They can offer autocomplete suggestions or methods to sort data, if they're prompted correctly. As an engineer, I can imagine using an LLM to "rubber duck" a problem, giving it prompts for potential solutions that I can review. It wouldn't replace conferring with another engineer, because LLMs still don't understand the actual requirements of a feature or the interconnections within a code base, but it would speed up those conversations by getting rid of the busy work...

[C]omputing history has already demonstrated that attempts to reduce the presence of developers or streamline their role only end up adding complexity to the work and making those workers even more necessary. If anything, ChatGPT stands to eliminate the duller work of coding much the same way that compilers ended the drudgery of having to work in binary, which would make it easier for developers to focus more on building out the actual architecture of their creations... We've introduced more and more complexity to computers in the hopes of making them so simple that they don't need to be programmed at all. Unsurprisingly, throwing complexity at complexity has only made it worse, and we're no closer to letting managers cut out the software engineers.

  • by ihadafivedigituid ( 8391795 ) on Sunday September 17, 2023 @08:30PM (#63856222)
    I do not think it means what you think it means.

    Most software is the opposite of engineering, which is unsurprising in an industry with greater than a 50% project failure rate by many estimates. I write code, but I also have an electronics engineering background so I know the difference.

    The authors of this piece miss the same point many others miss, namely that LLMs or whatever don't have to completely replicate what a human can do in order to be extremely disruptive. They merely have to increase the efficiency of a certain number of people to accelerate the winner-take-all trends in tech.
    • by nonBORG ( 5254161 ) on Sunday September 17, 2023 @08:39PM (#63856238)
      I am in the same position of being an Electronics Engineer and having written plenty of code. However, I disagree with you: for the most part there are plenty of great coders who do an implementation that is of a high engineering standard, and there are also plenty of Electronics engineers I have met and worked with in my career who give designing electronics a disservice. What none of us need is an arrogant few who, instead of helping those who need further understanding of how to do their jobs, just mock them. I was once a graduate with a keen desire to build projects and learn the art, nothing wrong with that.
      • by Anonymous Coward
        Managers are just a layer of bureaucracy, red tape, and complexity no longer required.
          • Good thing you posted as AC. This is a simpleton opinion that I would not want attributed to me either.
          • by sjames ( 1099 )

            While there are good managers who make a positive contribution to the effort, there ARE enough deadweight managers out there that the AC may not have seen the good ones in action.

            • If the static is just loud enough, it is really, really hard to find a signal, ya know...

              • by dvice ( 6309704 )

                You should try switching from AM to FM if loud static is your problem. Use frequency to transmit data instead of amplitude.

                So if you want to find a good boss, don't listen to what they say; instead, send them specific questions and see how they reply. I've seen plenty of bad bosses who can give a good first impression, but I have never seen a bad boss who can consistently give good answers to my questions when they are removed from their comfort zone.

                • "Write it down in a memo and put it on my desk, I'll get to it when I have time"

                  Sorry, they know how to deal with this kind of curveball that could expose that they know jack shit.

            • You need something to get the team focused in the same direction, and hopefully working towards the same goal. Agile doesn't cut it. Left to their own devices, developers will do whatever they want to. I've seen plenty who will go off and do bizarre and unnecessary tasks. A good manager sets the directions and keeps the team on track, while being a buffer between the people doing the work and the project and product managers who keep wanting to interfere in the process.

      • by Entrope ( 68843 ) on Sunday September 17, 2023 @09:44PM (#63856352) Homepage

        A good way to help people further understand software development jobs is to be up-front and clear about the difference between software engineering and less-structured software development.

        Software engineering is a subset of software development that involves serious consideration of user needs, requirement capture, error modes during development, runtime failures, verification, validation, and similar topics.

        In contrast, software development just means you write code that runs (and hopefully it passes some tests). Sometimes that is all that is needed, or at least all that management or a customer is willing to pay for. Other times, one needs to engineer software with more care.

        • Right, but as you describe it, "software engineering" doesn't involve "engineering". This is a misnomer that's stuck around far too long. What you describe is really "software management".

          • by Entrope ( 68843 )

            You sound like you have never done serious software or systems engineering. Would you know how to spell ISO/IEC 12207 or 15288 if one of them bit you on the face? Have you ever used ARP 4754A and the RTCA DO-178/254/278 suite of guidance documents? ISO 26262? I did not touch on engineering-management or software-management topics! Contrast what I listed with what ISO 12207 calls "technical management processes" and what ISO 26262 calls "supporting processes".

            If you do a good job at the things I mentioned, develop

      • What none of us need is an arrogant few...

        I was once a graduate with a keen desire to build projects and learn the art, nothing wrong with that.

        Exactly. Too many fucking nerds with no self-awareness about how they progressed in their careers. As if they were perfect the day they started programming.

        They lack the intelligence to develop people (they think teaching someone is simply regurgitating definitions at them), so they assume people can't be developed in their careers.

      • Re: (Score:1, Informative)

        Again: at least 50% of software projects completely fail, according to widely reported industry averages.

        No amount of self-congratulation from the inflated job title crowd will erase that stigma.
        • by nonBORG ( 5254161 ) on Sunday September 17, 2023 @11:24PM (#63856506)
          Well, in my experience software projects are difficult to estimate in terms of hours or costs. This is one of the main reasons for failure that I have found: basically, the project just cost more than expected and so was abandoned. I don't think it comes down to engineering. When you do an electronics design with high-speed GB/s lanes, demanding power requirements, massive numbers of I/Os, FPGAs with over a thousand pins, etc., it can be, and often is, nowhere near as complex as the software running on the platform. Estimating how long the hardware will take is hard, and estimating how long the software will take is harder and nearly always wrong, for a number of reasons that don't matter here. The main point is that software projects fail, but I doubt the reason is incompetent programmers in the majority of cases. Note that many hardware projects fail, and even companies and businesses fail, so what? There can be a mock-up or a get-things-going stage that gives a false impression that everything is working; none of that means the problem was with the actual programmers. You, being a perfect programmer, have probably never had a project cancelled or "fail," but in the real world it happens.
          • by KC0A ( 307773 )
            It's not possible to estimate the time to write code unless the code is essentially already written, at least in one's head. If the solution is unknown, or the existing system is poorly understood and may have tech debt that must be cleared, then no estimate is possible.
            • Indeed, there are three t-shirt sizes for any given project. Small means you know what needs to be done and it's just a matter of doing it. Medium means you understand the shape of the solution but there are still details to work out. And large means there are still substantial questions to resolve before the shape of the solution becomes clear.
            • You can estimate (not perfectly) if you first know what you're building. The problem is that often this isn't known. Sure, the high level view may be known but there will be a lot of stuff that no one thought about up front. Writing software is the easy part, the hard part comes when it needs to be integrated with other stuff (software, hardware, network), and when it needs to be tested, and so forth. The snag is that the project tends to be specified inexactly and a deadline is set before asking the te

        • Again: at least 50% of software projects completely fail

          No. 50% fail in some way, such as being over budget, behind schedule, or not meeting all of the original goals.

          That is not "complete" failure, and isn't so different than failure rates in other industries. How many construction projects are completed on schedule and under budget?

          according to widely reported industry averages.

          In other words, numbers were pulled from someone's butt.

          • No. 50% fail in some way, such as being over budget, behind schedule, or not meeting all of the original goals.

            This would mean that 50% of software projects are delivered on budget and on schedule, and meet the original goals. If only that were true :-)

            My 35 years of experience in the domain tells me that it is rather:

            • A third of projects globally succeed
            • A third of projects suffer major problems
            • A third of projects are abandoned during development or never used

            YMMV depending on the business you're in.

      • there are also plenty of Electronics engineers I have met and worked with in my career who give designing electronics a disservice

        This rings so true. I was in embedded solutions long enough to know that at no point should I overestimate the quality of my work. I think the folks who have large egos in embedded must live in a bubble of sorts. I've found the field nothing but humbling, to say the least.

      • There are a lot of people who are the opposite - to them the end result isn't nearly as important as the process. I see some who are hung up on the "framework". Months spent drawing out UML diagrams and in-depth design docs that don't really convey the actual work that needs to be done. Engineering is not just snapping together lego pieces with a lot of blobs of glue for the places where the pieces don't fit just right.

        There's a big push to be like engineering a bridge, but they fail to notice that bridg

    • Writing code and engineering software are two separate tasks. Most software developers are just code monkeys playing in an already designed system. Successful software (think any OS) is absolutely engineered and it is the same level of engineering that an electrical engineer would do in designing a power system for an automobile.

      • That's probably why I wrote:

        "Most software is the opposite of engineering ...", which implies that some isn't the opposite.
        • I deal with real time systems. You have to think low level, you have to worry about optimization, you can't just slap stuff together. Recently some people who were pre-building components in parallel were annoyed that I was avoiding both their code and the chip's Hardware Abstraction Layer (intended for fast prototyping rather than production). The answer was that it was too slow, the operation was taking several seconds, and it took me a day to just talk directly to hardware and the code was 10 times fa

    • by Tony Isaac ( 1301187 ) on Sunday September 17, 2023 @09:57PM (#63856374) Homepage

      The difference between engineering and coding is scale.

      Any handyman can build a backyard shed, but it takes engineers to build a high-rise.
      Any road grader driver can make a dirt road through a field, but it takes engineers to build a freeway.
      Any coder can make a command line tool or a simple web site, but it takes engineers to build something as large as Wikipedia.

      In between those extremes is lots of gray, but that's essentially how I see the difference.

      • Re: (Score:1, Insightful)

        If you think scale defines engineering, you don't understand the term.

        - An engineer could design a shed that had predictable performance in an earthquake, for instance.
        - An engineer could specify the grade and crown appropriate to local conditions and plan a route that minimized work required. My stepfather was a heavy equipment operator, no way would I want him designing a dirt road.
        - You picked a terrible example with command line tools: some of the best code around is found in Unix utilities.
        • by Tony Isaac ( 1301187 ) on Sunday September 17, 2023 @11:11PM (#63856490) Homepage

          An engineer would only be needed for your earthquake-resistant shed if there weren't already codes available that specify how to build an earthquake-resistant shed. Engineers develop the building codes, but a builder who follows the codes isn't an engineer.

          The same is true for your road example.

          For a freeway, it's necessary to have custom engineering done because of the scale. There's no book that covers every situation a freeway will need to handle to be successful.

          As for command line tools, engineering has nothing to do with "good" or "best" code. You can have an excellent shed that didn't involve any engineering. You can also have an engineered skyscraper that's a piece of junk.

          I think the command line tool example is valid. These generally don't have sufficient scale to qualify as "engineered." Some possible exceptions might include 7-zip, where the compression algorithm is carefully engineered, or perhaps RabbitMQ, which is engineered to be fault-tolerant and supports clustering.

          • git perhaps shows some engineering?
            • Agreed, so broadly characterizing command-line apps as not being engineered would be unfair.

              Git, while it is command line, certainly required significant engineering when it comes to revision management and the merge process.

      • > The difference between engineering and coding is scale.

        Thanks, I like that definition.
        • Other threads on this topic have led me to a bit of clarification of that boundary. Wikipedia has a good definition:

          https://en.wikipedia.org/wiki/... [wikipedia.org]

          So to me the key to that definition is that engineering involves applying the scientific method to a project. That certainly relates to scale, as small projects generally just require following a pre-engineered "building code." When your project is large or complex enough to require applying the scientific method to come up with the specifications, that's engin

    • It's amusing to me that defensive, insecure "engineers" latch on to my Princess Bride reference with downvotes and butthurt replies but no one has anything substantive to say about the impact of LLMs in response to the meat of my post.
    • > LLMs or whatever don't have to completely replicate what a human can do in order to be extremely disruptive

      I read between the lines that you believe in the lump-of-work fallacy. Do you see AI's impact as a zero-sum game?

      Jevons' paradox comes in here: technological advances make resource use more efficient, but lower costs drive up demand, often increasing overall consumption. This contradicts the common assumption that efficiency will reduce resource use. See wider roads and traffic as well.
      • Good job on sticking to the subject and offering interesting observations!

        This is my concluding line: "They merely have to increase the efficiency of a certain number of people to accelerate the winner-take-all trends in tech." Emphasis added.

        We're seeing the effects of this trend in the "gig economy", delayed household formation, and other widely-observed disruptions in society. Some people are doing extremely well, sure, but this is not a rising tide that lifts all boats equally.
    • by dvice ( 6309704 )

      I'm a software engineer who is also a real engineer (in software technology). Our education also contained some electronics and electricity, but was focused on software.

      I agree with you that software is not like other engineering, but in a way it is. Customer gives you a problem and you need to solve it using existing tools.

      You said that software projects fail often. That is mostly just because you are looking at it from the wrong angle. When you are doing a software project, it is almost a fact that specif

  • by backslashdot ( 95548 ) on Sunday September 17, 2023 @08:49PM (#63856258)

    So far every "technology" that was supposed to make life easier and less work has only increased work. TV was supposed to eliminate acting jobs by eliminating local theater. Phones were supposed to eliminate delivery jobs. Cell phones were supposed to make life convenient, instead if means your boss can call you on Sunday and make you work from home. Factory automation was supposed to eliminate factory jobs. Instead we have more humans (worldwide) working in factories than at any time in history. Sure we outsourced it to China, but fact is there's humans doing jobs making crap.

    • Re: after-work phone calls, texts, & emails: at least France has banned that kind of abuse. Hopefully, more countries will follow suit. Also, your personal phone & your work phone should be entirely separate, the latter being provided by your employer, mostly so that it isn't you who has to troubleshoot why their software & services aren't working on your phone. It's up to them to make their telecoms systems work properly before they hand the phone to you.
    • by Jeremi ( 14640 )

      I don't know; a lot of people used to spend 12 hours a day tilling fields; now a lot of people spend 12 hours a day watching streaming videos. That seems like a decrease in work to me.

  • by Ken_g6 ( 775014 ) on Sunday September 17, 2023 @08:50PM (#63856260) Homepage

    Every boss I've ever had wanted me to set things up so that non-developers could do some or all of the work developers were doing. Especially when it didn't make sense.

    • Nope. They won't. But that's alright, we'll replace them with ChatGPT sooner or later...

    • We are pressured to make every option configurable. The bosses are happy because the customers can now hold weeks of meetings trying to determine what is the appropriate timeout for some corner case, instead of accepting the default. The customers are happy because they appear to be busy. In the end, nothing gets done.
  • by MpVpRb ( 1423381 ) on Sunday September 17, 2023 @08:53PM (#63856268)

    It doesn't matter if it's described in a programming language, legal jargon, specifications or a text prompt. The description of the system must capture ALL of the complexity, including special cases, edge cases and unexpected asynchronous events

    My hope is that some future AI will help us manage complexity and deal with the rare or odd cases

    • by StormReaver ( 59959 ) on Sunday September 17, 2023 @09:20PM (#63856312)

      ...My hope is that some future AI will help us manage complexity and deal with the rare or odd cases[.]

      LLMs would be useful for analyzing code and finding those corner cases that I missed (but I would be surprised if they were capable of even that little bit of analytical success), but my experience with trying to get them to write code for me has been an exercise in frustration. As so many others have pointed out, they will confidently present an answer that is just plain wrong. By the time I have revised my instructions to get the generated code to look as error-free as possible (leaving me to manually correct the remaining errors), I probably could have written the code from scratch more quickly.

      To put it bluntly, AI-generated code sucks. Any company that would fire its developers and replace them with LLMs is destined for a quick dissolution.

      • by gweihir ( 88907 )

        Indeed. All LLMs can do is make an expert faster. Under some circumstances. Unless crappy results are all you need. Incidentally, AI is unsuitable for finding missed "corner cases". Doing that is one of the hardest deduction problems and cannot generally be solved by machines. Human experts can do it reasonably well if they have a very keen sense for what they understand and what they do not understand.

      • Maybe I'm a genius at writing prompts, but I get some astounding results with GPT-4. What gweihir says below about making experts faster might have something to do with it.
      • Yep, LLMs simply give you the most probable sequences of characters based on analysing the probability patterns of millions of examples of sequences of characters. In other words, they give you the most average responses possible. Judging whether they're right or useful or not isn't a feature.

        That strategy works great for human languages because that's the way we are. We have a multi-billion-dollar industry, PR & marketing, based solely on that characteristic of human interaction & communication.
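
        To make the "most probable continuation" point concrete, here is a toy sketch (the context, candidate words, and counts are all made up for illustration; this is nothing like a real transformer's internals): always emitting whatever followed the context most often in the data gives you the most average continuation that data contains.

        import java.util.Map;

        // Toy illustration only, not how an LLM is actually implemented.
        class MostProbableNextWord {
            public static void main(String[] args) {
                // Made-up counts of what followed the context "the cat" in some corpus.
                Map<String, Integer> counts = Map.of("sat", 120, "ran", 45, "meowed", 30, "compiled", 1);

                String next = counts.entrySet().stream()
                        .max(Map.Entry.comparingByValue())
                        .orElseThrow()
                        .getKey();

                System.out.println("the cat " + next);   // the statistically safest choice: "the cat sat"
            }
        }

        Real models work over learned token probabilities and usually sample rather than always taking the maximum, but the averaging tendency described above is the same.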
      • "Any sufficiently developed requirement is indistinguishable from code", I guess that's also true with LLM's.
    • by gweihir ( 88907 )

      Indeed. Anybody who can just turn complete specs into simple business logic will have their job threatened. But how many coders are actually that limited?

    • And you didn't even mention performance. Even if you managed to get perfectly correct code that handled all edge cases, it still might have terrible performance.
      • Then again, performance is no longer mission critical. For most applications, whether they run a second longer or not is moot and nothing worth throwing talent at. There are of course a select few where performance is still key because costs go up exponentially with a linear increase of runtime, but these cases are few and far between. It usually is the other way around, you could get a linear increase in performance for an exponential increase in cost, and for most applications, this simply isn't warranted

      It doesn't matter if it's described in a programming language, legal jargon, specifications or a text prompt. The description of the system must capture ALL of the complexity, including special cases, edge cases and unexpected asynchronous events

      That's exactly right, but it's not enough to just have that data -- it has to be correct, which it never is. The biggest challenge is not in writing the code; it's in understanding the problem space, and the expected outcomes, and what expectations are reasonable, re

  • by Somervillain ( 4719341 ) on Sunday September 17, 2023 @09:12PM (#63856308)
    If you're paying someone today to write code, or even documentation or advertising copy, it has to be correct, not close to correct. Today's Generative AI has no clue what it's doing and no clue if it's correct. It is fancy autocomplete with a super-high carbon footprint. Algorithms are always superior to machine learning. Algorithmic code generation is as old as day 1. Ask your grandparents about C.A.S.E. They have been making Computer-Aided Software Engineering tools since before I was born and selling them at insane prices, because they're expensive to build and they have to convince your boss they will cut costs. Even Ruby on Rails has largely failed to live up to the hype, and it had a lot of momentum. The reason is not lack of AI or lack of computing power or effort...the reason is the tradeoffs are so huge it's almost always better to do things by hand.

    Take Hibernate. Nearly my entire career has been helping companies put their DB code in JPA... and then take out several pieces and put them into native SQL when JPA performs poorly. If you ever turn on show SQL, you see it generates awful SQL that works in all scenarios...just poorly. If you knew you were going to save 3 object rows and 12 child rows, you'd write 2 insert statements because you know the domain. You know the object being input and the bounds for the child objects and the specific type. ORM, just like generative AI or machine learning tools, doesn't know this. They have to write 15 statements because they don't know the real relationship between these rows or constraints. All AI-generated code was either broken or incredibly inefficient.
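
    Purely as an illustration of that point, here is a minimal JDBC sketch of the hand-written route (the tables, record types, and OrderWriter class are hypothetical, not from any real codebase): two prepared statements, each executed as one batch, versus the row-by-row INSERTs a generic ORM cascade typically emits.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.List;

    // Hypothetical schema and record types, purely illustrative.
    record OrderRow(long id, long customerId) {}
    record LineRow(long orderId, long skuId, int qty) {}

    class OrderWriter {
        // Saves the parent rows and child rows with two statements, each batched,
        // instead of one INSERT per row for every object in the graph.
        static void save(Connection conn, List<OrderRow> orders, List<LineRow> lines) throws Exception {
            try (PreparedStatement parent = conn.prepareStatement(
                     "INSERT INTO orders (id, customer_id) VALUES (?, ?)");
                 PreparedStatement child = conn.prepareStatement(
                     "INSERT INTO order_lines (order_id, sku_id, qty) VALUES (?, ?, ?)")) {
                for (OrderRow o : orders) {          // statement 1: the parent rows
                    parent.setLong(1, o.id());
                    parent.setLong(2, o.customerId());
                    parent.addBatch();
                }
                parent.executeBatch();

                for (LineRow l : lines) {            // statement 2: the child rows
                    child.setLong(1, l.orderId());
                    child.setLong(2, l.skuId());
                    child.setInt(3, l.qty());
                    child.addBatch();
                }
                child.executeBatch();                // one round trip for all children
            }
        }
    }

    Hibernate can be coaxed into JDBC batching (for example via hibernate.jdbc.batch_size), but the broader point stands: the tool can't know there are only two logical statements' worth of work unless the developer tells it.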

    A well run business doesn't write a TON of code. They write a moderate amount and keep editing it frequently. They tune their codebase based on evolving requirements of their business...it's very bespoke and custom.

    However, these are my explanations as to why. I am more confident that I am correct than I am that I fully understand why. Why am I so certain I am correct? AI has been funded by the richest companies in human history sitting on mountains of cash, getting the top brains in the world. They have the talent, experience, funding, manpower, and most importantly the incentive to make it happen. They hire armies of expensive engineers and never seem to have enough talent. If it could be done, they wouldn't be selling you tools to do it, they'd be reorganizing their own companies to use it. Google wouldn't sell you tools, but a ChatGPT prompt to generate a working Android app. Microsoft would be selling Generative AI game builders. You'd tell it "I want a shooter set in the 1930s with Aliens and a rainbow palette set in the South Pole where my character does parkour on icebergs" and the next day, you'd have a working game.

    Or...they'd use Generative AI in their runtimes to increase efficiency. That's an even more straightforward test of their abilities. If ChatGPT was so good, why not incorporate it in the C# CLR to make C# consistently faster than nearly all C code written out there? Why not have ChatGPT translate C# to assembly code customized for your specific processor? If you could increase efficiency with it, they'd save a ton of money in Azure electricity costs as well as be able to charge a massive premium for this more efficient runtime.

    We'll know Generative AI can generate code when it's used to build useful applications. The press release won't be telling us about the tools, but demonstrating impressive applications that we could never have imagined a human being writing.
    • It's good for bullshitting (actual language, not code), if no one is going to call you out on it.

      So I've seen management types using it to write apologies and status-page postmortem promises that will never actually be played out and were only posted to mollify a userbase after some server incident.

    • If you're paying someone today to write code, or even documentation or advertising copy, it has to be correct, not close to correct. Today's Generative AI has no clue what it's doing and no clue if it's correct. It is fancy autocomplete with a super-high carbon footprint. Algorithms are always superior to machine learning. Algorithmic code generation is as old as day 1.

      [...]

      Take Hibernate. Nearly my entire career has been helping companies put their DB code in JPA... and then take out several pieces and put them into native SQL when JPA performs poorly. If you ever turn on show SQL, you see it generates awful SQL that works in all scenarios...just poorly. If you knew you were going to save 3 object rows and 12 child rows, you'd write 2 insert statements because you know the domain. You know the object being input and the bounds for the child objects and the specific type. ORM, just like generative AI or machine learning tools, doesn't know this. They have to write 15 statements because they don't know the real relationship between these rows or constraints. All AI-generated code was either broken or incredibly inefficient.

      So you're writing in assembly then? Because everything else is algorithmically generated code, even more so for "native SQL". Not only is the DB itself a big abstraction you had nothing to do with, but the DB is doing a bunch of optimizations.

      The difference between languages like C & SQL and what we usually think of as algorithmically generated code is whether they're complete enough that you never need to work at a lower level. Either way, traditional generative code is typically a translation from a high

      • Btw, comparing SQL generated by JPA to AI-generated code is really, really, wrong (for reasons I'll mention at the end).

        Java to SQL is a far easier application than generative AI is attempting to solve. No machine can create a good solution unless it knows the intent. This is a major reason why ORM gives slow results and Ruby never really took off. If writing code is running, generating SQL from objects is crawling.

        I don't think it's out of the realm of machine learning to pick optimal algorithms based on pattern matching. Most runtimes, from DBA optimizers to JVMs and CLRs and JavaScript engines do that now via diffe

        • Btw, comparing SQL generated by JPA to AI-generated code is really, really, wrong (for reasons I'll mention at the end).

          Java to SQL is a far easier application than generative AI is attempting to solve. No machine can create a good solution unless it knows the intent. This is a major reason why ORM gives slow results and Ruby never really took off. If writing code is running, generating SQL from objects is crawling.

          It's a different problem. ORM is a higher level language like C or Python. It translates into a lower level language and that translation MUST be correct. It allows for some really fancy optimizations since you can put a ton of work into getting a 0.1% improvement and it pays off. But it's also limited since you don't know intent.

          For C the balance is in favour of the compiler; it's really hard to write assembly faster than a modern compiler.

          For python, you can usually do better writing in C but it may not be

    • by WaffleMonster ( 969671 ) on Sunday September 17, 2023 @11:58PM (#63856556)

      If you're paying someone today to write code, or even documentation or advertising copy, it has to be correct, not close to correct. Today's Generative AI has no clue what it's doing and no clue if it's correct. It is fancy autocomplete with a super-high carbon footprint.

      What scares me is what LLMs are able to do without the many benefits people with brains take for granted.

      LLMs currently cannot learn from experience, learn how to learn, ground their knowledge in reality or impose consonance. They cannot iterate to improve their designs, leverage support tools or even think in ways not rigorously fixed by the model's execution. Many of these limitations are likely to be rather fleeting given the pace of innovation and known active areas of research.

      When you ask a present-day LLM to spit out code to do much of anything, it is akin to asking a human to spit out code to do something off the top of their head without thinking much about it. Oh shit, the computer didn't consider some corner case, it fucked up, it forgot to do... well no shit it's not going to be perfect.

      Humans are also incapable of writing "correct" code. To the extent correctness is possible at all it is only made so by imposition of rigorous process and iteration to inherently fallible minds.

      My personal prediction is that in the not-so-distant future we will see AI driving proof assistants, bringing more reliable methods of programming to the mainstream.

      Algorithms are always superior to machine learning.

      Machine learning "algorithms" figured out how to find lowest energy conformations of proteins, they found more efficient ways to multiply matrices of certain sizes than humans are known to have ever discovered. It succeeded where humans and their "algorithms" failed despite considerable persistent efforts by smart humans spanning decades.

      However, these are my explanations as to why. I am more confident that I am correct than I am that I fully understand why. Why am I so certain I am correct? AI has been funded by the richest companies in human history sitting on mountains of cash, getting the top brains in the world. They have the talent, experience, funding, manpower, and most importantly the incentive to make it happen. They hire armies of expensive engineers and never seem to have enough talent.

      If it could be done, they wouldn't be selling you tools to do it, they'd be reorganizing their own companies to use it. Google wouldn't sell you tools, but a ChatGPT prompt to generate a working Android app. Microsoft would be selling Generative AI game builders. You'd tell it "I want a shooter set in the 1930s with Aliens and a rainbow palette set in the South Pole where my character does parkour on icebergs" and the next day, you'd have a working game.

      GPT-4, the first LLM with any kind of generally useful thinking capability, was released just half a year ago. The algorithms and enabling hardware are in their infancy. Expecting end-game capabilities out of the gate isn't reasonable, and drawing conclusions about what "could be done" at this point is premature.

      Or...they'd use Generative AI in their runtimes to increase efficiency. That's an even more straightforward test of their abilities. If ChatGPT was so good, why not incorporate it in the C# CLR to make C# consistently faster than nearly all C code written out there?

      Or better still, apply it to LLVM and optimize all the languages.
      https://arxiv.org/pdf/2309.070... [arxiv.org]

    • Today's Generative AI has no clue what it's doing and no clue if it's correct. It is fancy autocomplete

      That's why I call it Autocomplete Insanity.

    • Algorithms are always superior to machine learning.

      But machine learning is done with algorithms. It's algorithms all the way down.

    • by xtal ( 49134 )

      I came out of retirement to work on AI.. at a name brand place.

      The transition you describe is happening. Mid-2024 most of the tools start rolling out; by end of 2024 it will be clear things have dramatically changed. Not just code, but management.

      What we call software development is going away by end of 2025, and it will be replaced by something else, but it isn't going to look like what it does now.

  • by 93 Escort Wagon ( 326346 ) on Sunday September 17, 2023 @10:42PM (#63856446)

    He's thinking like the guy who actually writes the code - but that's not the person who makes these decisions.

    I fully expect that this (currently hypothetical) scenario is going to play itself out, all over the place. Managers are gonna buy into some flim-flam artist's promises of great code with few developers, laying off tons of them. After several months, it's going to be obvious it's not working. But the complication is the manager won't want to admit to such a huge, stupid error - so any "hiring back" of developers is going to start as a trickle at best.

  • by Tony Isaac ( 1301187 ) on Sunday September 17, 2023 @10:59PM (#63856484) Homepage

    ChatGPT can give you good code suggestions, when you are very explicit about what you need. This kind of "assistant" might replace some low-level outsource developers, but certainly not anyone who has to "think."

    • I throw code at ChatGPT and it tells me what's wrong with it. It doesn't understand that I have to fit a module into a stream of existing code. I give it initial conditions and instruct it to provide strict outputs, but it's getting its "intelligence" from some obscure articles and textbooks.

      As you say, it doesn't understand. That's not very intelligent.

  • History is likely pretty useful here; remember when writing HTML was "hard" in 2000? Those jobs that were lost are the same class of jobs that will be lost to generative AI.

    My miserable level of coding capability can easily be replaced by AI... but I only program to solve (via brute force) a problem I can't figure out any other way, not for any long-life production systems. That is useful... but not transformative.

    • History is likely pretty useful here; remember when writing HTML was "hard" in 2000? Those jobs that were lost are the same class of jobs that will be lost to generative AI.

      HTML is harder than it used to be, because now you have to know how to write CSS too. It's more powerful yes, but there's so much more to know...

    • I remember the dot-com hype, and we had several 'Dreamweaver' developers at the time. Of course all of the HTML was useless and we had to re-do everything. After 9/11 happened they were all laid off, never to be seen again.
  • by EmperorOfCanada ( 1332175 ) on Sunday September 17, 2023 @11:27PM (#63856516)
    I use Copilot combined with ChatGPT as kind of a paired programmer.

    Rarely does it do anything I couldn't do, and often it doesn't even do it as well as I could. But it speeds my work right along, doing the boring for loops, etc.

    But where it really kicks some ass is in the super drudge work. Things like cooking up unit tests, and putting asserts everywhere. Making sure every conditional is caught, etc.
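
    As a sketch of that kind of drudge work (the PortParser helper and its tests are hypothetical and assume JUnit 5; nothing here is from a real codebase), this is the sort of exhaustive-but-boring branch coverage an assistant can crank out:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    // Hypothetical helper under test.
    class PortParser {
        static int parsePort(String s) {
            int p = Integer.parseInt(s);   // NumberFormatException on null or garbage
            if (p < 1 || p > 65535) throw new IllegalArgumentException("port out of range: " + p);
            return p;
        }
    }

    // The boring-but-thorough part: one test per branch and boundary.
    class PortParserTest {
        @Test void acceptsTypicalPort() { assertEquals(8080, PortParser.parsePort("8080")); }
        @Test void acceptsLowerBound()  { assertEquals(1, PortParser.parsePort("1")); }
        @Test void acceptsUpperBound()  { assertEquals(65535, PortParser.parsePort("65535")); }
        @Test void rejectsZero()        { assertThrows(IllegalArgumentException.class, () -> PortParser.parsePort("0")); }
        @Test void rejectsTooLarge()    { assertThrows(IllegalArgumentException.class, () -> PortParser.parsePort("65536")); }
        @Test void rejectsGarbage()     { assertThrows(NumberFormatException.class, () -> PortParser.parsePort("http")); }
    }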

    Some of these things are exactly what junior programmers are assigned, and where they learn. Paired programming is another huge opportunity for junior programmers to learn. Except I don't want to work with a junior programmer ever again. I can crap out unit tests, integration tests, and with these tools doing the drudge work, I can stay hyper-focused on the hard stuff.

    Before this, a junior programmer could be helpful and occasionally had some cool trick or fact I could learn from. But now they are almost pure productivity-sapping distractions.

    Another group of programmers are those rote learning algo fools. I can get these AI tools to give me just about any answer I need where it twists graph theory into knots. These people were mostly useless to begin with, but now are officially worse than useless.

    And this is exactly where I see a big cadre of programmers getting shot. Junior programmers who will now largely go mentorless, and those rote learning algo turds who used to get jobs at FAANGS because some other rote learning fool convinced everyone that rote learning was good.

    I asked ChatGPT what these people should do and it said, "They should go back to their spelling bees.... nerds."
  • I use Github Copilot and ChatGPT both for what they are: tools to make me more productive. The latter requires experienced and iterative prompting to help get what I'm looking for. A less experienced developer will not get the same results as fast. Given time and experience, sure. But they also have to know what they're prompting for, not just expect prompting experience to be design and development experience. Not worried, and I retire in less than 10 years.
  • I've been learning a new framework. ChatGPT has been great at reminding me of simple things, and showing me how other simple things can be done. The combination of explanation and example code it produces is really good - better than any single resource I can find via search.

    However, as soon as you start asking more complex questions, it gets unreliable. It will confidently present and explain code that does not - indeed, cannot - work. We don't even need to think about genuinely difficult questions based

  • Gen"AI" is notably worse than human but most CEO don't care and see this as an opportunity to have the "AI" generate stuff and make the human "fix it" for a lower pay. It's often so bad that the human have to do it all over again anyway, but for a lower pay.
  • From a comment on a recent article on Hackaday (https://hackaday.com/2023/07/26/chatgpt-the-worst-summer-intern-ever/ [hackaday.com]) here's how poorly ChatGPT understands OpenSCAD.
    The line in question:

    hole_radius = 1/16; // 1/8 inch hole radius

    Firstly, as OpenSCAD is metric by default, there was no scaling set (by defining an inch as 25.4 mm).
    ChatGPT failed to come up with that definition itself, so that's 1/16 of a millimetre, not of an inch.

    Then there's the comment that conflicts with the code. The diameter of the result of h

  • Anyone who's ever maintained a codebase knows that ChatGPT is completely useless for programming. At best it makes a nice scripting automator for a desktop user that's too lazy to learn scripting. Beyond that it's useless. It can't edit codebases of millions of lines and thousands of files and update API versions/resolve differences in programming interfaces.
  • This seems to be a recurring theme in AI. Also, autonomous vehicles only work because they have people accommodating them. Put only AVs on the road right now and it would be a disaster.
  • ChatGPT can't fill my TPS report.

    Will my PHB understand how intricate the TPS report is? Too intricate for ChatGPT to figure out.

    I have 5000 TPS reports done and ChatGPT can't handle that size codebase.

    Will ChatGPT always fill my TPS report perfectly? That line 14 sec 32 can mean a lot of things and can be quite complicated. It is my job and I spend days making sure that the TPS report is filled perfectly without any mistakes.

    I define my life through TPS reports. Let me find more reason. Oh oh. That one ti

    • by Jeremi ( 14640 )

      TPS reports are the things ChatGPT does best. Not that it matters, since nobody actually reads them anyway.

  • by ledow ( 319597 )

    So what you're telling me is that ChatGPT wasn't the be-all and end-all of AI and it isn't intelligent and it won't be the paradigm-breaking, world-changing AI the hype proposed, and all those thousands of companies jumping onto it as "the next big thing" don't actually understand what it can and cannot do, and that it's in fact unreliable and often wrong?

    Gosh... just like every other single AI "revolution" since... what? The 60's?

    I have the solution: Just throw more training at it, more processing p

  • You insensitive bastard, I still toggle in my code on the front panel.
  • by zmooc ( 33175 )

    Of course ChatGPT isn't coming for my job. Instead, companies built around ChatGPT (or similar) will be coming for my employer and my job will simply disappear during the next economic shake-up.

    Disruption hardly ever happens on the job-level. Also, it usually won't be predicted at the analyst level, which is proven by things like this:

    More reasonable suggestions show that large language models (LLMs) can replace some of the duller work of engineering.

    Reality is that there is almost no dull work left in proper software development. Any dull work left is due to legacy, stupidity, stubbornness or lack of access to proper too

  • Every time automation replaces human labor there are cries of "It just isn't as good." So what? The bottom line is return on investment. Just ask yourself how many hand-made shoes are made each year versus machine-made.
  • So LLMs won't take our jobs, but "large language models (LLMs) can replace some of the duller work of engineering"? Doesn't this mean that some software engineering work will be lost to LLMs?

  • I asked ChatGPT to create a data structure in Python that can hold the data described in Table XX.X in the document at the link www.XXX.XXX/XXX.pdf. About the only useful thing I got out of it was the name of the PDF document. At least I know it can read a PDF. The rest I might as well have typed "sample python data structure" into Google. It completely ignored the data in the table and just spat out some random data struct that has no connection with the described data.

    This is the kind of simple t
