AI

AWS CEO Says Most Developers Could Stop Coding Soon as AI Takes Over

An anonymous reader shares a report: Software engineers may have to develop other skills soon as AI takes over many coding tasks. That's according to Amazon Web Services' CEO, Matt Garman, who shared his thoughts on the topic during an internal fireside chat held in June, according to a recording of the meeting obtained by Business Insider. "If you go forward 24 months from now, or some amount of time -- I can't exactly predict where it is -- it's possible that most developers are not coding," said Garman, who became AWS's CEO in June.

"Coding is just kind of like the language that we talk to computers. It's not necessarily the skill in and of itself," the executive said. "The skill in and of itself is like, how do I innovate? How do I go build something that's interesting for my end users to use?" This means the job of a software developer will change, Garman said. "It just means that each of us has to get more in tune with what our customers need and what the actual end thing is that we're going to try to go build, because that's going to be more and more of what the work is as opposed to sitting down and actually writing code," he said.
  • by Pseudonymous Powers ( 4097097 ) on Friday August 23, 2024 @01:40PM (#64729508)
    I'd say that AWS risks losing the good will of professional software developers, but first, AWS never had the good will of professional software developers, and second, I'm not sure that professional software developers have any good will to give, period.
    • by Seven Spirals ( 4924941 ) on Friday August 23, 2024 @02:42PM (#64729756)
      I wouldn't piss up their ass if their guts were on fire. Feel free to lay off all 5 of your US developers, AWS. I actually work with LLMs and CoPilot every day (shit man, almost every hour). It's nowhere near the level of replacing C/C++ coders. They can't write two pages of code without producing more errors than the code is worth.
      • it's nowhere near the level of replacing C/C++ coders

        Not even some of them? Go back in your memories of code and coders you've encountered. Think of the worst 30%. Is the current output of LLMs less useful or more useful?

        • by Seven Spirals ( 4924941 ) on Friday August 23, 2024 @05:46PM (#64730366)
          Interesting thought experiment. Well, the worst 10% did nothing at all. Zero. They could send email and answer the phone, but basically just wouldn't write code if they were on fire and you had to write hello.c to put it out. The bottom 30% write code that is full of bugs, mistakes, and turd-peanuts someone else has to dig out of the turd. LLMs can do that, too. However, the whole "rest" of the job that involves doing code reviews, answering the phone, sending email, and sexually harassing the front desk girl would be a lot harder.
  • by Baron_Yam ( 643147 ) on Friday August 23, 2024 @01:41PM (#64729512)

    One person who knows what they're doing can use an AI system trained on existing solutions to multiply their output several fold. ...For anything basic that requires no real understanding or planning ability, and isn't complex enough to make it take more time to debug AI hallucinations than to simply code for yourself.

    AI isn't going to replace programming any time soon, but it's going to thin the herd, just like the computer spreadsheet reduced the need for accountants.

    • by gweihir ( 88907 )

      Probably not. The really elementary stuff is usually already a library. For the rest, apparently AI does not really speed things up and may even slow things down. Sure, somebody incompetent fumbling their way may be a bit faster, but they will also learn less, so even there AI is not a benefit.

      • Already in libraries that are already buggy. I spend a lot of time fixing up code in libraries, or adapting libraries, or adding diagnostics to libraries. The attitude that libraries are sacrosanct items, perfect as they are, is broken. I was behind a car yesterday with detailing for a business that did "Low Code and No Code", and I couldn't help but think what a horrible thing that would be. But it is what so many "programmers" do these days.

        One should never use AI code without MORE code review th

        • Yup. I spent a good chunk of my career developing and maintaining libraries. The idea that they are bug free and can meet everybody's needs is misguided. They are made of code, too. The LLMs aren't going to write and maintain libraries all on their own.

    • Re: (Score:3, Informative)

      by Malggi ( 791997 )

      Ehhh, I'm not seeing it.

      I have a new Chromebook that came with "Gemini Advanced." I've been trying in good faith to ask it questions about my work, and all it does is spit nonsense back at me. Stuff like, "click on such-and-such tab" when the software I'm using is menu driven and doesn't even have tabs.

      I don't know what folks up in the C-Suite use this thing for all day, but replacing my coworkers and me with it is completely out of the question.

      • I've been using ChatGPT for a while now to speed up my PowerShell scripting, and 90% of the time the output is exactly what I wanted; the other 10% of the time, I still spend less time making corrections than I would have spent typing from scratch. Nothing huge or complex - we're talking under a hundred lines of code per task, not hundreds or thousands.

        Since I'm one of two guys in the department who can script and code at all, this is a huge productivity boost for the company.

        • by r1348 ( 2567295 ) on Friday August 23, 2024 @02:24PM (#64729684)

          I've used ChatGPT to write a few Bash scripts, nothing too huge, something in the 400-1000 lines range.
          It's basically a faster Stack Overflow in that scope, and works well 90% of the time, but:
          - when it hallucinates, it goes completely bananas. Debugging that mess added 4-5 hours to my work (difficult to estimate how many hours it saved though)
          - during longer interactions, it clearly runs out of tokens, and starts forgetting the initial statements
          - it's terrible at string and hex parsing. It could not parse a hexdump even if its life depended on it. Had to do all the xxd fun manually. I later discovered it also doesn't get ASCII art. Try it for yourself: ask it to create ASCII art of a wok, and watch it spout nonsense.
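
          By "the xxd fun" I mean round-tripping the dump by hand, roughly like this (a minimal sketch; the file names are made up):

          # dump the binary to a hex listing I can actually read and edit
          xxd capture.bin > capture.hex
          # ...inspect and fix the bytes by hand...
          # then convert the edited listing back to binary
          xxd -r capture.hex > capture-fixed.bin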

          • I agree. I use it to troubleshoot and write some blocks and it's pretty questionable. If I ask it to produce more than a couple of pages of code I know I'll be debugging longer than it'd probably take to write the code myself. It produces tons of bugs and like you say, when it hallucinates it turns out some doozies.
            • by r1348 ( 2567295 ) on Friday August 23, 2024 @02:54PM (#64729802)

              In my experience, it works worse as a debugger than as a code generator.
              Code generation definitely saved me some time writing run-of-the-mill functions, e.g. a y/n user input parser, input sanitization, etc. (see the sketch below).

              But ask it to debug why some parsing isn't working correctly with, e.g., grep, awk or jq, and it's like watching an orangutan trying to do calculus. I think it just lacks the ability to visualize exact results, and constantly churns out results that are merely statistically similar to its training data.
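
              The y/n parser it gave me was along these lines (a minimal sketch from memory, not the exact output):

              # ask the user to confirm before continuing; default is "no"
              read -r -p "Continue? [y/N] " answer
              case "${answer,,}" in   # ${answer,,} lowercases the reply (bash 4+)
                  y|yes) echo "Proceeding..." ;;
                  *)     echo "Aborting."; exit 1 ;;
              esac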

              • Well, I gotta agree. It's okay at catching simple stuff... sometimes. It often wastes more time than it saves, though. If I need to generate a big matrix or look at a long list and find an outlier, it can be good for that. However, right now it's just special onesy-twosy items and nowhere even close to being able to "replace" some guy who can code, debug, send email, talk on the phone, go to meetings, write down customer requirements, write technical docs for everything done in the last six months, etc...
          • - during longer interactions, it clearly runs out of tokens, and starts forgetting the initial statements

            The context windows nowadays are pretty long, but a tip in general to deal with this: Start a new conversation with the current state of the code and describe what you still need to do/fix. This also helps if a lot of the existing conversation contains what turned out to be incorrect approaches/red herrings/etc.

          • Stack Overflow has 95% wrong answers, you realize that? Cutting and pasting code from Stack Overflow shouldn't happen; instead, first UNDERSTAND the code, find the bugs in it (which are inevitably there if you only look at the top answers, because bad code gets upvoted there for some reason), and then write it yourself once it is understood. And don't skip that step because you're in a hurry or your boss is breathing down your neck! Going fast has never resulted in better code.

          • - when it hallucinates, it goes completely bananas. Debugging that mess added 4-5 hours to my work (difficult to estimate how many hours it saved though)

            That's only true if you're lucky. The subtle bugs that you don't notice in your initial testing but will show up three months later, those are the ones to worry about.

      • Ehhh, I'm not seeing it.

        I have a new Chromebook that came with "Gemini Advanced." I've been trying in good faith to ask it questions about my work, and all it does is spit nonsense back at me. Stuff like, "click on such-and-such tab" when the software I'm using is menu driven and doesn't even have tabs.

        I don't know what folks up in the C-Suite use this thing for all day, but replacing my coworkers and me with it is completely out of the question.

        That won't stop the people in the C-Suite from trying. Remember, these are not normal humans, they are the winners of the greatest meritocracy in history, mercurial 6D chess playing geniuses, the princes of the universe, ...

      • by Junta ( 36770 )

        I don't know what folks up in the C-Suite use this thing for all day, but replacing my coworkers and me with it is completely out of the question.

        From what I've seen, it's in the phase where the people that don't see it working out are afraid they will be seen as "not getting it", so even if they don't see a path forward, they will either say they do or at best just stay silent about it. Very few of the C-suite guys will say out loud that they don't think it's useful, because they don't want to be called out as that incompetent dinosaur who failed to see the obvious value proposition.

        Dealing with the random developers such leaders hire anyway, the A

      • I don't know what folks up in the C-Suite use this thing for all day...

        "Write a letter for my daughter to tell her favorite athlete how inspirational she is."

        "Write a letter to my wife to tell her I'm sorry for seeing other women."

        "Write a letter in the style of my lawyer to demand..."

        "Answer these interview questions in a manner demonstrating how innovative I am."

    • by eepok ( 545733 )

      but it's going to thin the herd, just like the computer spreadsheet reduced the need for accountants.

      Not the best example. Spreadsheets enhance what an accountant can do, so they do more in volume and in variety. As such, the demand for accountants has only increased. Everyone is always hiring accountants.

  • by gweihir ( 88907 ) on Friday August 23, 2024 @01:43PM (#64729516)

    I guess this guy has not heard about that.

  • CEOs are useless (Score:5, Insightful)

    by ebunga ( 95613 ) on Friday August 23, 2024 @01:44PM (#64729520)

    The work of the CEO is easier to automate. There are countless billions of dollars sitting on the table that rightfully belong to investors, but this jackwagon is trying to automate away the employees that generate value, while any random CEO is just a net drain. What does he do all day? Regurgitate idiotic ramblings and make sure the right numbers go up. If I were a shareholder, I would be suing.

    • by Tailhook ( 98486 )

      Right. So rather than talking to an actual executive or other shot caller, you talk to the AI running the company, negotiate a deal and the AI writes a contract. The AI dispatches the tasks enumerated in the contract to developers, other AI or both for implementation.

      Sounds great. Let's do that.

    • When I read ChatGPT output I always have the impression that it's some sort of corporate suit. Way too much output, using different words to just repeat the same thing. When you read it, you are impressed by the wording. When you try to summarize it at the end, you conclude that you already knew all of it and it was obvious, or that it dodged the real question. Yep, GPT can simplify management by a lot. It also hallucinates. They'll love that.
      • Try speaking aloud whatever comes out of ChatGPT. As Harrison Ford said of the Star Wars dialogue Lucas wrote: you can write this shit, but you sure can't say it. I have been accused of speaking and writing too formally, but ChatGPT really takes it to a whole new level.

    • "The Twilight Zone" had the vision. [youtube.com]
    • So they're not useless in the way you're thinking. They're the new name for a system thousands of years old.

      You can't put the king in a guillotine if you don't even know you have one.
      • by ebunga ( 95613 )

        Corporate CEOs are not the ruling class. They're the front line lieutenants of the ruling class.

    • by dnaumov ( 453672 )

      While you will be happy to learn that "automating the CEOs" is very literally a thing that has been worked on by several parties for quite some time now, I have to shit all over your parade. One of the main jobs of the CEO is to "sell" the company to investors, existing or potential future ones. Go to investor conventions, organize and present major events. How exactly do you foresee LLMs or AIs doing this particular task? The second problem and a much bigger one, is legal. CEOs have legal liability because

      • Can you cite a single example of a CEO being held liable for a company's actions? Like, sure, if there's an intentional crime, maybe, but from simple lack of oversight?

  • by Menelkir ( 899602 ) on Friday August 23, 2024 @01:46PM (#64729528) Journal
    How much does Matt know about languages to form this opinion? How much code has he written in his life? How much does he know about development to come to this conclusion? Oh...
    • by leptons ( 891340 )
      Anyone who supposes that LLMs can "think" doesn't understand that they don't know enough about LLMs to make an assertion like "AI will take over programming jobs". There is nothing truly intelligent about LLMs. It's a cool trick, and that's about it. I'm not at all concerned that my programming job will be taken over by AI. This guy should be concerned that his days as CEO are numbered if he's making ridiculous claims like this.
      • If the AI is any good, then don't use it to write code snippets; instead use it to find bugs in existing code, stick the AI in code analysis tools, etc. But currently it's bad. Coverity, for example, has gotten amazingly bad in the last few months, with ridiculous false positives it never produced before. AI probably couldn't do worse, but it will likely have the same idiotic results. But *IF* AI improves, then code analysis would be the suitable place for it, not code generation.

        However, from

        • by leptons ( 891340 )
          "Impoving" LLMs still gives you an LLM. It isn't "Intelligence", it's statistical data mining. While that may end up being part of an "artificial intelligence", it's not improvable itself to the point of being "intelligent". Now we can argue about what "intelligent" means all day long, but I'd rather not. LLMs simply regurgitate whatever inputs they are "trained" on to form an output that's statistically likely to please the person querying it. It's not intelligent. It's going to be wrong a lot of the time.
  • Garman doesn't believe it himself. As the article says, it's more musing on his part than a dire warning to the staff. After all, if he meant those comments, the memo would have ended with "turn in your badge on your way out."
  • by Rosco P. Coltrane ( 209368 ) on Friday August 23, 2024 @01:56PM (#64729580)

    CEO predictions are always right on the money. And I'm writing this from my virtual house in the metaverse.

  • by quantaman ( 517394 ) on Friday August 23, 2024 @01:56PM (#64729584)

    I'm not certain he really understands what software development is. LLMs have the potential to raise the level of abstraction in the same way compilers did, but the fundamental problem remains, figuring out the problem and coming up with a detailed logical solution to it.

    That's what gets me with all those companies claiming they'll write some webapp with an LLM in 5 minutes or something. A web app doing what? How does it handle authentication? Does it integrate with other services? Where does it deploy? How should it scale? etc, etc.

    There's definitely a possibility that the industry shrinks. I've had a couple of freelance jobs where, after I asked detailed questions, I apparently gave the potential client enough guidance that they solved the issue with ChatGPT. But I don't think the profession as a whole gets decimated.

  • Willful ignorance or plain stupidity: which does it take to be a CEO?
  • by Somervillain ( 4719341 ) on Friday August 23, 2024 @02:06PM (#64729612)
    I've heard this song and dance before...20 years ago...how we'd never write SQL again now that we have Hibernate, or ever write code at all thanks to Ruby on Rails. Without a doubt, AI will play "some" role in our future, but I think it's more like a helper that writes shitty code...like ORM frameworks do. I will wager that in 2 years, AWS will have just as many, if not more, developers working on their product.

    I've seen the code written by your shitty tools...we're far off from it replacing developers. It's, at best, a Roomba...fun to play with, helps a little...costs way too much...but no one is firing their maid after buying one. If you can get by with a Roomba alone, you weren't paying anyone to clean your floors before. Similarly, if you can go dev-free thanks to Copilot...you weren't really using a dev before.

    Also, remember the lessons learned from Rails/Grails. It generates demo code fine...but once you want to do anything outside the default path, and you ALWAYS do...the framework starts falling apart and becomes more trouble than it's worth...which is why the whole industry abandoned those frameworks. I am skeptical AI can fix most common bugs. So even if it can generate decent code, someone has to maintain it when things go wrong.
    • The first time I heard this mantra was in 1992, from my Calculus II professor, who said "don't go into CS because I have seen the RAD tools that will replace programmers in just 2 to 3 years..." He was a very smart mathematician but knew nothing about software development. As a side note, for what it is worth, he was also impressed with AppleTalk, which connected the Macs in our classroom on a very simple network. LOL.
      • Yep, first it was RAD tools, then CASE, then Visual XYZ and Delphi. They were all going to "put coders out of work", but in the end more coders were needed. *YAWN*
    • Those were different ways to code. This is a machine that writes the code for you. It's completely different.
      • Not really. We have had automation, boilerplate, and code generators since the first compiler. Regardless of the generator technology, it can only do what you command it to generate, and in the manner it was programmed / trained. But the biggest issue I have seen with these magical code generators over the years is (1) they write code in the style and format they know, which is almost always not what you want, and (2) you may be able to code-gen the first iteration, but code-generating bug fixes and new features in
      • by Somervillain ( 4719341 ) on Friday August 23, 2024 @04:14PM (#64730034)

        Those were different ways to code. This is a machine that writes the code for you. It's completely different.

        If you were correct, we'd be debating working demos, not theory published by someone selling AI solutions.

        ORM is a library that writes code for you. It works...poorly...and expensively, but generally with heavy user input. Ruby on Rails wrote code for you...again, pretty poorly. Writing code is a far easier task than most of these things we're asking AI to do. It has clearly defined boundaries. It just usually is very obvious when you make a mistake. When ChatGPT makes mistakes...and it makes a lot, our brains can compensate for bad grammar or confusing sentences. A compiler will just error out.

        This is very much an apples to apples comparison and writing business code is a pretty simple problem. EVERY developer who enjoys coding has written some tool to write code many many many times. It works until it doesn't. It turns out writing new code isn't that hard, but maintaining it and handling complex use cases is. AI is not going to solve anything new. At best, it can apply familiar patterns.

        I'm convinced it's all hype and it will impact the software industry the way roombas impacted the cleaning industry.

        However, if I am wrong...well...why are you and the AWS guys making predictions? Why aren't you showing me software? Why doesn't AWS, Oracle, or IBM have some magic ETL-generator tool? I've written dozens myself...there's money to be made...why haven't they solved that problem? Why hasn't MS built a game level generator? Why not play Fortnite in a map of your city rather than a fictitious one? Why not have your favorite 3D game custom-generate maps for your favorite real-world location?...and charge a premium for it? Hell, why isn't Meta creating virtual cities? If you could have a tool that writes code for you, there are many ways to make a profit, especially in games and entertainment for highly specialized or customized content.

        Silicon Valley has poured over a trillion dollars into AI initiatives among the many major companies. They threw unlimited best-in-class minds, unlimited hardware, and unlimited funds at it...and ChatGPT is the best they can do? There's a FUCKTON of money to be made in making me obsolete...or in having AIs that can write drivers or even optimize code. They have the motivation, the resources...why haven't they made it happen?

        I am convinced they can't.

      • Those were scams. This is the real deal. It's completely different. :-)

    • by CAIMLAS ( 41445 )

      In 2 years, AWS will have started consolidating their services. They have a lot of stuff which doesn't print money, and those will be relegated as minor features for the ones which do.

      Anything which provides 'value' (read: isn't predictable in pricing) will be hustled out the door.

    • Also, remember the lessons learned from Rails/Grails. [...] hence why the whole industry abandoned those frameworks.

      Yes, Ruby on Rails is ass.
      There are plenty of very useful well made frameworks in use in various programming languages, however.

      Also, ORMs (augmented with a few native queries) are definitely a good fit for a lot of applications and can improve maintainability significantly.

      A good developer picks the right tools for the job without being driven by ideology.

    • At one job, in an embedded system with relatively limited memory (still lots compared to many), one manager really was in love with a certain automatic code generator. UML in, working code out. He was so impressed that he would brag, "It generates code that is only double the size of existing code!" All the engineers went, "Double the size? That's catastrophically bad! How are we going to add more RAM to all the machines in the field?" But that was the manager; he was more interested in speed than quali

  • I framed this [despair.com] poster decades ago.
  • ...because hype jacks up Amazon stock.

  • ...to give you code to solve most problems...it's pretty good...
  • by GFS666 ( 6452674 ) on Friday August 23, 2024 @02:13PM (#64729642)
    For what it is worth, I am NOT a computer programmer. I'm a Mechanical Engineer. Even I know that the code-writing part of being a Software Engineer is just part of the job. Other things just as important are: getting requirements, planning out the best way to accomplish the task, dealing with edge cases, knowing different languages and frameworks and choosing which is best for a certain application, backwards compatibility, testing, etc. And I know enough to know that there is a vast amount of stuff I'm not aware of that affects the writing of code. And AI is not going to be able to do all that anytime soon, as far as I know. The lack of knowledge/understanding here is staggering.
    • by Kalten ( 20368 )
      He's a CEO. The Kool Aid he's drinking is the one that makes him hallucinate, "oboyoboyoboy, I'll be able to fire a bunch of worthless $100k+/yr software devs and take home a HEWWWWWWWWGE bonus!"
    • by MobyDisk ( 75490 )

      I think AI will help people like you be able to write code. I often see an ME + SE paired together to prototype a solution. But now, you could probably use AI to create the prototype code. But I don't see it happening the other way around: I don't see AI as easily helping a software engineer become a mechanical engineer.

    • CEOs are literally *paid* to drink the Kool-Aid, and paid to make more of it for everybody else to drink.

  • He may dream of that perfect world where AI is actually intelligent. And he might even believe that is the reality now, unless of course he actually tries asking them to start writing code for something more complex than a hello world application...

  • So much for the 'age of information', I guess, because when a stupid braindead excuse for 'AI' is writing all the code, nothing will work ever again, and I'm sure it'll be such total spaghetti code that no human will be able to decipher it.
  • by wakeboarder ( 2695839 ) on Friday August 23, 2024 @02:33PM (#64729718)
    Because right now LLMs can't even write working code 80% of the time. There are also other pitfalls with AI: it costs a lot of energy, and none of us has had to pay for that because most of the services are free, but the AI companies will need to start charging. Will you pay for 80% accuracy? I don't think I will.
  • AWS certified senior AI assisted developer architect SSP CNA Plus Plus

  • History (Score:5, Insightful)

    by JBMcB ( 73720 ) on Friday August 23, 2024 @02:48PM (#64729780)

    1970's - High level programming languages are going to make it so easy to code, anyone can do it! Who needs developers?
    1980's - Object oriented programming is going to make it so easy to code, anyone can do it! Who needs developers?
    1990's - RAD and 4GL tools are going to make it so easy to code, anyone can do it! Who needs developers?
    2000's - Visual modelling tools are going to make it so easy to code, anyone can do it! Who needs developers?

    etc, etc...

    • All those steps made it easier to code, improving programmer productivity... But we didn't have computers everywhere and in everything yet. The market grew faster than automation could make programmers redundant.

      I don't think that imbalance in growth is still in play. Further improvements in ease of coding will be used to reduce the workforce.

        • It can be taken as a given that businesses make money from software developer output. If you find a tool that doubles productivity, do you a) halve the workforce and make the same profit, or b) keep the same workforce and double your profit? Not that I think AI tools will even double the productivity of quality development in the foreseeable future.
        • If doubling worker productivity doubled profits, every business would simply hire more people every time they earned enough money to do so, until their company grew to encompass the entire economy.

          Since this doesn't happen, we can be certain the situation is a little more complicated. In the real world, companies cut staff all the time.

    • 2020's - AI will do everything, Who needs employees?
      2030's A - YES! We fired everyone, they're all beneath me! AHAHAHAHAHA! Sniff Sniff, what's that burning smell?
      2030's B - Why didn't the AI work? I could have had so much more money.....
  • ...is he wants to push shitty software that costs less money to make. His job is to save money, not make good software. Software is already pretty shitty, and at the point where it's produced in such a way that you try to fix bugs by convincing it in a conversation (a prompt), it's going to get ugly. It's a very blunt tool for a very precise operation.

    Some of us out here down in the trenches are actually trying to figure out how to use AI/LLMs to enhance software development. I'd certainly be wil

  • ... the CEO can stop managing as AI takes over.

    Seriously though: I expect management positions to evaporate in the face of AI much faster than many others.

  • by OrangeTide ( 124937 ) on Friday August 23, 2024 @04:05PM (#64730004) Homepage Journal

    There are no other aspects of my job. I don't design. I don't check for correctness. I don't negotiate interface contracts. I just sit at my desk and wait for the CEO to say "write me an XYZ program", and 1000 of us turn out some text files until QA says we're done.

    • I don't design. I don't check for correctness. I don't negotiate interface contracts.

      If you're an average software engineer, you don't do those things. You're a procedural developer who checks for "happy path" functionality, and you especially don't waste your time checking for correctness, whatever that is.

  • But I like coding. I like creating code that leads to results.
  • Coding for cloud computing is a whole other level of idiocy, so I'll happily let AI write all of it.

    I'll save myself the aggravation of why all their stuff is designed idiotically the way it is.

  • Instead, they're taking a well earned vacation to let CEOs like this have a sip of reality.

    A large part of software dev is about correctness, so it's just so eyerolly to see someone claim that a technology with fundamental correctness problems will be able to do this kind of work in the next year.

  • A coder's job isn't about "how do I innovate", but "how do I make sure that it is understood what was required". When AI can do that, we can ALL stop working. Until then, AI is just a tool among others.
    • That's a very important part of the job, but not the only part. The coder still needs to code once the requirements are agreed upon and understood. The AI can't understand the requirements, and it can't really code the implementation either.

  • Well, the AI code mostly works; when it doesn't, the failure is often hidden, like a book that has a lot of great sentences but no story.
    We will need a lot more humans to review the code. Or maybe we train another AI to do the review, but on what?

  • Some human tasks have been automated 100%.

    Financial Math
    Printing (No hand made copies)
    Telephone / network switching

    It was never a slow process. But automation of other tasks has been much less successful: speech-to-text, automated pilots, self-driving cars. The human is never removed; they still need humans to monitor the computer.

    My point is: never fear automation that is not 100%.

  • Just as assemblers, compilers, and libraries allowed programmers to be more efficient, I see AI working in a similar way. There is currently zero evidence that LLMs can create any complete, complex software program. I expect AI to be like a dynamic library (not a dynamically loaded library, but a created-on-the-fly library), allowing programmers to call low-level, small routines in a way that saves programming time. Perhaps assemblers and compilers did destroy some jobs, but new upleveled jobs

  • ... as the computer does what I want instead of what I tell it to do.

    Then, everything will be fine!

  • ... not necessarily the skill in and of itself.

    Technically correct: the skill is abstracting the real world and all its edge cases into sets of numbers and rules. It's idiots like this who think that because anyone can write a sentence, anyone can write a computer statement. That ignores the reality that most adults spent 10-12 years learning reading, writing and 'rithmetic (in the USA) in English. It does not mean they can speak computer-ese, and his assumption that democratization means lots of cheap employees depending on his internet servers shows

  • You need human code and content to keep LLMs from poisoning themselves. It's really just math: the small errors that statistical prediction makes will amplify and feed back. Feed too much AI-generated code to an AI, and it will become useless pretty quickly.

    Not that the best LLMs are all that great at coding to begin with. I mean, the biggest context windows are nothing compared to the context needed to deal with any realistic code base.

  • The AI programmers will follow soon after we have flying cars and jetpacks.

    The AI / ChatGPT I've seen strikes me as what a digital politician would look like.

    It talks a lot, pretends to know a lot, but spews mountains of bullshit in an effort to look more intelligent than it is.

    • after we have flying cars

      ...We have flying cars, they just suck because it turns out it's an inefficient, very limited form of transport.

  • This feels like it is really a long way off, if it’s ever going to happen.
