AI Programming

OpenAI CEO Altman Says AI Will Lead To Fewer Software Engineers (stratechery.com)

OpenAI CEO Sam Altman believes companies will eventually need fewer software engineers as AI continues to transform programming. "Each software engineer will just do much, much more for a while. And then at some point, yeah, maybe we do need less software engineers," Altman told Stratechery.

AI now handles over 50% of code authorship in many companies, Altman estimated, a significant shift that's happened rapidly as large language models have improved. The real paradigm shift is still coming, he said. "The big thing I think will come with agentic coding, which no one's doing for real yet," Altman said, suggesting that the next breakthrough will be AI systems that can independently tackle larger programming tasks with minimal human guidance.

While OpenAI continues hiring engineers for now, Altman recommended that high school graduates entering the workforce "get really good at using AI tools," calling it the modern equivalent of learning to code. "When I was graduating as a senior from high school, the obvious tactical thing was get really good at coding. And this is the new version of that," he said.
  • Yeah right (Score:5, Interesting)

    by ZiggyZiggyZig ( 5490070 ) on Tuesday March 25, 2025 @10:50AM (#65257879)

    Well, more jobs for the older workforce who have mastered the skill of reading (and correcting) other people's (or AI's) code... What should we call those people, um, code rewriters? Code correctors? Code checkers? Quality proofers? Oh wait, I've got it: "engineers!" That sounds like a fine word.

    Still, it's a bit sad that the AI does the fun bit (writing the code), leaving the humans to do the tedious bit (cleaning up the mess).

    • Re: (Score:3, Insightful)

      by gweihir ( 88907 )

      Still, it's a bit sad that the AI does the fun bit (writing the code), leaving the humans to do the tedious bit (cleaning up the mess).

      Also remember that the competent older folks will typically not be that much in need of money. How many of them do you think will want to do the "tedious bit"? I am sure I do not want to, and I much prefer teaching students with real, non-faked intelligence.

    • by Matheus ( 586080 )

      The vast majority of my career has been (cleaning up the mess)... I don't see a difference.

      At some point I started calling "green field development" "debugging a blank page". Just sayin'...

    • In my shop, those functions (code rewrites, bug fixes, etc) are handled by the maintenance team.
    • The very first people they're going to try to replace are older, more costly employees. Nobody is going to pay a 50+ year old to be a code checker. That is literally a code monkey's job. Any kid out of college, or hell, a boot camp, can do that.

      And for the most part those kids are going to lose their jobs too.

      I've said it before but in the old days you couldn't afford to have vast swaths of unemployed programmers and engineers because they would go off and start their own businesses and compete with you.
      • That makes no sense at all... the junior people can't adequately review code, because they have no experience to recognize what good code is in the first place.
        Other articles posted here this week indicate that advertisements for software jobs are down about 25%.
        Sam's right this time.
        • by dgatwood ( 11270 )

          Except that those articles were misleading. Software engineer jobs as a whole are up. Only "programmer" jobs are down. Basically, the lowest of the low-end jobs decreased significantly while the total number went up.

          • therefore proving my point: it's not the most experienced people that will go first.
            rephrased: the people with the least skills are the easiest to replace.
            • by dgatwood ( 11270 )

              therefore proving my point: it's not the most experienced people that will go first. rephrased: the people with the least skills are the easiest to replace.

              Certainly true. On the flip side, there's also a decent chance that most of those jobs weren't real, and existed primarily to provide "proof" that they couldn't find someone in the U.S. so that they could get H-1Bs. :-)

    • by Luckyo ( 1726890 )

      Interesting. I usually found writing the actual code to be the menial part of the job. It's designing the software architecture that was the interesting and creative part.

      To each their own, I suppose. I know some builders who think of their craft as the fun stuff, with engineers and architects having the boring job of designing the thing and ensuring that tolerances are met. And I also know engineers and architects who believe the opposite.

      • I'm with you 100%. I often find the coding part to be annoying, though I've now written code professionally in over 25 languages. Every new language creator thinks they have some new special twist on programming, but in the end, it's just another dialect to say the exact same things.

        • by Luckyo ( 1726890 )

          I can imagine how that sort of level of understanding would generate the output you suggest. I'm far less experienced in terms of variety of languages, but I have observed exactly the same thing.

          We actually had interesting discussions about this with construction engineers back in university (shared sports team for all engineering fields students). I remember one of the more experienced software architects in our group pointing out over post game drinks how while details in different software projects diver

    • by narcc ( 412956 )

      Oh wait I got it, "engineers!" that sounds like a fine word.

      Programmers aren't engineers. If you knew any real engineers, you'd know that nothing in software is remotely like engineering.

    • Many of us seasoned "code correctors" do *not* enjoy the "fun" bit (writing the code). I just want to see the software do what I want it to do, the code writing is the drudge work you have to do to get to what you want.

  • by nikkipolya ( 718326 ) on Tuesday March 25, 2025 @10:51AM (#65257881)

    But in the next decade?
    I think this guy is busy kicking dust in the eyes of investors, customers, and the public... based on a "what's the next best token" model retrofitted with a yet-to-be-seen "reasoning engine".

    • You and those that moderated you positively are morons. [engraved.blog]
      I'm sure your capable of learning what this technology actually is, you just don't seem interested in doing so.

      Is it a "token prediction model?" Yes, it is.
      However that doesn't mean anything meaningful in the context of, well, anything.
      The token they're predicting the next of is the token next in an answer to your question, including all the context of the conversation.
      If I train a language model to do n + m, I feed it 1 and 1, and it outputs 2,
      • *you're
      • If true, it should be able to deduce infinitesimals and the single- and multi- variable Calculus, given only the training materials available to Newton and Leibniz in the seventeenth/eighteenth centuries. Or, Maxwell's equations. Or, Carnot's theorem. Or, ... well, you get the idea. Maybe the NeuroSymbolic marriage will deliver this fruit.

        • If true, it should be able to deduce infinitesimals and the single- and multi- variable Calculus, given only the training materials available to Newton and Leibniz in the seventeenth/eighteenth centuries. Or, Maxwell's equations. Or, Carnot's theorem. Or, ... well, you get the idea. Maybe the NeuroSymbolic marriage will deliver this fruit.

          Do you doubt that an LLM could deduce special relativity from the work that preceded it? I don't.
          Can ours, right now? Better question. I'd probably err on the side of no. However, symbolic reasoning is getting better obscenely rapidly.
          LLMs are much better than humans, on average, at making deductions from data. I'd say they don't match the ability of our Great Geniuses in this, but they're certainly better than some large fraction of the people on this site.

            • Percentage of people that can predict the first irrational number > 37: 0. Percentage of people that know, and can demonstrate why, they cannot predict the first irrational number > 37: > 0. What says the LLM of your choice?

            • Are you asking it to provide a proof?
              Such a question is going to confuse it, since the question itself is nonsensical.
              "First irrational number after 37" is meaningless. There are infinite real numbers between 37 and any real number greater than 37, and there are infinite rational and irrational numbers between any two real numbers.

              For shits and giggles,
              QwQ 32B. FP16 precision. Reasoning model. 40k token context.
              Q: Is it possible to predict the first irrational number > 37? There is no formatting re
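The density point made above can be tightened into a two-line proof; a minimal sketch in LaTeX (37 is just the number from the thread):

```latex
% No "first irrational number > 37" exists.
\textbf{Claim.} There is no least irrational number greater than $37$.
\textbf{Proof.} Suppose $c$ were the least irrational with $c > 37$,
and let $c' = \tfrac{37 + c}{2}$. Then $37 < c' < c$. Moreover $c'$ is
irrational: if $c'$ were rational, then $c = 2c' - 37$ would be rational
as well, a contradiction. So $c'$ is a smaller irrational above $37$,
contradicting minimality. \qed
```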
              • Yes, I'd like to see QwQ's proof. My understanding is that QwQ is a NeuroSymbolic AI. I have my doubts that a straight-up neural-network LLM could provide one, but I'd sure be interested in what QwQ can do. Does it use Coherence-Driven Inference? If it can give me the first 10 digits of Chaitin's constant, all the better. I agree that Einstein's Special Theory of Relativity might be within reach soon, but general relativity would probably take longer.

      • Thanks for the reply. I appreciate all your thoughts here and I largely tend to agree with you. Emergence is what I too think intelligence is.

        But hey, the premise here rests largely on our ability to predict and understand what emerges out of that very, very large network. Today, can we confidently say, "this very large network of neurons that I have put together will do software development"? Or is it that, "this very large network of neurons that I have put together seems to be doing software development"?

        I

          • I take no exception to any of this, other than this:

          based on a "what's the next best token" model retrofitted with a yet to be seen "reasoning engine".

          1) "what's the next best token" model is referred to as if that somehow precludes anything. It does not.
          2) The "reasoning engine" in this instance, is merely fine tuning to get the transformer to engage in a debate with itself. There's no modification to the transformer or the engine. It's just additional training. Since the internal debate goes into the context window, this allows the LLM to deduce answers.

          You were highly reductive. That doesn't make
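For concreteness, the "internal debate goes into the context window" mechanism described above looks roughly like this; complete() is a hypothetical stand-in for any text-completion API, and the FINAL ANSWER convention is invented for the sketch:

```python
# Sketch of "reasoning" as self-debate appended to the context window.
# complete() is a hypothetical stand-in for any LLM completion API;
# "FINAL ANSWER:" is an invented stop convention, not a real API feature.

def complete(prompt: str) -> str:
    """Hypothetical LLM call: returns the model's next chunk of text."""
    raise NotImplementedError

def answer_with_reasoning(question: str, max_rounds: int = 8) -> str:
    context = f"Question: {question}\nDebate this with yourself, step by step.\n"
    for _ in range(max_rounds):
        thought = complete(context)      # the model argues with itself...
        context += thought + "\n"        # ...and the debate is fed back in
        if "FINAL ANSWER:" in thought:   # the fine-tuned stopping convention
            return thought.split("FINAL ANSWER:", 1)[1].strip()
    return complete(context + "FINAL ANSWER:")
```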

      • by narcc ( 412956 )

        Transformers are demonstrably capable of turing completeness.

        This is very obviously not true. This has been explained to you several times already. It's really simple. Here's the ELI5: Feed-forward neural networks are not Turing complete. This is a well-established fact. Transformers are feed-forward neural networks. ∴ Transformers are not Turing complete.

        You've been taken in by some simple psychological tricks and a lot of wishful thinking. You put me in mind of one of those people that drove Joe Weizenbaum crazy. The kind that would go on and on about how E

        • And you are, as usual [arxiv.org] wrong.
          Stop being wrong. It's got to fucking hurt you.
          • by narcc ( 412956 )

            Sigh... As expected, you didn't read or understand the paper.

            What a waste of time you are. This is why uneducated laypersons, such as yourself, should refrain from making proclamations about things you clearly don't understand.

            • Sure did.
              Nice try on the gaslighting, though.

              Repeat after me.
              Attention layers are not feed-forward.
              Since transformer blocks contain an attention layer and a feed-forward layer, transformer blocks are not feed-forward.

              It's ok- you can do it.
              Keep doing it- eventually it'll stick.

              Let's talk about that paper, though.

              We conclude that, by the Church-Turing thesis, prompted gemini-1.5-pro-001 with extended autoregressive (greedy) decoding is a general purpose computer.

              What they've shown (and what most people could have intuited long ago, simply by understanding what attention is), is that with a correct set of instructions, gemini-1.5-pro-001, or frankl

              • by narcc ( 412956 )

                OMG... It must hurt being as stupid as you.

                I'll explain it to you, but I doubt you'll be able to understand it.

                While the system they describe is Turing complete, it really doesn't say anything about the Turing completeness of either transformers or LLMs as we understand them. There are two important differences, which you seem to have missed, because you didn't read or understand the paper:

                1) Their system can sometimes output two tokens at once. This is absolutely critical for their lag system to work. (

                • OMG... It must hurt being as stupid as you.

                  Coming from the person with the largest collection of absolutely wrong assertions that I've seen on here, except for maybe angelo- that really means a lot.

                  While the system they describe is Turing complete, it really doesn't say anything about the Turing completeness of either transformers or LLMs as we understand them. There are two important differences, which you seem to have missed, because you didn't read or understand the paper:

                  See where I said:

                  Some embed it in the weights with arbitrary precision (not too plausible), some use complex sets of instructions and truly massive prompts, with arbitrary context windows, some, like this simply show that if the decoder stage can be induced to produce n-grams as outputs, a context window is enough.

                  You really just don't even read shit, do you?

                  1) Their system can sometimes output two tokens at once.

                  Yes, it can output n-grams.
                  So can any transformer.

                  This is absolutely critical for their lag system to work. (See figure 3 in the paper)

                  Sure is. I literally said as much. Do you feel smart repeating it?

                  I'll also note that your claim wasn't "you can use an LLM as part of a Turing complete system", it was "LLMs are Turing complete". The former is trivially obvious, the latter is obviously wrong.

                  Actually, I said:

                  Transformers are demonstrably capable of turing completeness.

                  I think your reading comprehension isn't very good ;)

                  Like I said, you shouldn't pontificate on things you clearly don't know anything about.

                  And you probably shouldn't comment until you learn to read full sentences.

                  Just for the lols, though:

                  Feed-forward neural networks are not Turing complete. This is a well-established fact. Transformers are feed-forward neural networks. ∴ Transformers are not Turing complete.

                  Correc

                  • And just because I should back up my assertions, re: some use complex sets of instructions and truly massive prompts, with arbitrary context windows
                    A prompt and a transformer is all you need. [arxiv.org]

                    Again, this should be entirely intuitive to you.
                    Answer the question: What do you call a network that can fit any curve, attached to memory, capable of looping until instructed to halt?
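The construction this thread keeps circling can be sketched in a few lines. This is only an illustration of the claim's shape (the context window as memory, the decode loop as the loop), with step() as a hypothetical next-token function; it proves nothing about any real model:

```python
# Illustration of the thread's claim, not a proof: an autoregressive
# decoder driven in a loop is a state machine whose state is the context.
# step() is a hypothetical next-token function standing in for a model.

def step(context: str) -> str:
    """Hypothetical decoder: map the current context to the next token."""
    raise NotImplementedError

def run(program_prompt: str, max_steps: int = 100_000) -> str:
    tape = program_prompt          # the context window doubles as the memory
    for _ in range(max_steps):
        token = step(tape)
        if token == "<HALT>":      # halting convention set up by the prompt
            break
        tape += token              # write-back gives the loop its state
    return tape
```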
    • If the ultimate impact of this technology is that it enables one person to accomplish more (and perhaps enables lower-skilled coders to accomplish more than they otherwise could), the net impact will be a reduction in the cost of software development (even if salaries don't come down; the cost reduction is in the need for fewer people on a per-task basis).

      Generally speaking, when costs go down, consumption goes up. It's basic supply-and-demand at work.

      The primary determinant of how much demand there will b

    • by allo ( 1728082 )

      Altman is a bit unhinged, but in the end I definitely see AI taking over software engineering tasks. I only wonder whether that really means having fewer software engineers, or rather building more and better products with the same number of employees.

  • Yes and also No (Score:5, Insightful)

    by FictionPimp ( 712802 ) on Tuesday March 25, 2025 @10:54AM (#65257885) Homepage

    I have been spending a lot of time working with AI tools to improve the velocity of my team. Used properly, I've found AI tools like Cursor and Windsurf can help me accomplish tasks much faster than I would on my own. But I also know how to engineer software.

    Using these tools effectively is a lot like programming. You need to define spec files, design docs, rules for the AI, and even decision logs that the AI will reference when making changes. Learning to use these tools is challenging and takes time, skill, and practice, just like learning a programming language. That said, the relationship between the engineer and the AI is more of a senior/junior pair-programming relationship.

    I have my AI first build a plan via design and spec documents, then build tests, then finally build the solutions. I have it then loop checking linting, code quality, optimizations, and when it thinks it has an optimal solution send it to me for review in the editor. I then redirect or accept the changes, often adding new rules, decision items, or changes to the design/spec.

    I've been writing code for over 20 years, so I'm prepared for this effort, and the quality of the output is at least as good as what I would do on my own, in half the time: better documented, better tested, and repeatable by other engineers on my team. However, by doing this, how do we get real junior engineers the experience needed to do what I'm doing? Each senior engineer using these tools lets me avoid hiring 1-2 engineers. Without those junior engineers gaining experience, how can they properly guide the AI to the solutions?
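A rough schematic of the workflow described above, not the poster's actual setup: agent() is a hypothetical coding-agent call assumed to edit files in the working tree, and pytest/ruff stand in for whatever quality gates a team actually runs:

```python
# Schematic of the plan -> tests -> implementation -> quality-loop workflow
# described above. agent() is a hypothetical coding-agent call that is
# assumed to write its output into the working tree; pytest/ruff stand in
# for whatever linting and test gates a team actually uses.

import subprocess

def agent(task: str, context: str) -> str:
    """Hypothetical agent call: performs the task, edits files, returns notes."""
    raise NotImplementedError

def checks_pass() -> bool:
    """Quality gate: tests plus lint, both assumed to be configured."""
    tests_ok = subprocess.run(["pytest", "-q"]).returncode == 0
    lint_ok = subprocess.run(["ruff", "check", "."]).returncode == 0
    return tests_ok and lint_ok

def build_feature(spec: str, rules: str, max_iterations: int = 5) -> None:
    plan = agent("Write a design doc and spec for this feature.", spec + rules)
    tests = agent("Write tests for this design before implementing.", plan)
    agent("Implement the design so the tests pass.", plan + tests)
    for _ in range(max_iterations):     # loop on linting, quality, optimization
        if checks_pass():
            break                       # then hand off to human review
        agent("Fix the failing checks and update the decision log.", plan + tests)
```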

    • by godrik ( 1287354 )

      yeah, I've been playing with these things. And I don't think people realize how poor the code generated by these systems is.
      These AI agents really have a tendency to just add if conditions everywhere in the code, leading to some really poorly architected software.
      And about 5 to 10 feature iterations later, the agent can't get anything working even remotely well anymore, and then you'll spend the next day refactoring all the junk it created.

      So, yeah, these systems can be useful. Beware what you'll get.

      • I experienced that, but found it's solvable with a good set of rules for the agent, a devlog, and having it write spec files and tests before taking on tasks. I'd say Cursor and Windsurf, when properly used, are at least as good as someone with 2-3 years of on-the-job experience, and occasionally as good as someone with 5-7 years. Just like with those engineers, you need someone to keep everyone on task, following standards, and ensuring code quality.

        If you expect perfect code from a jr engineer you are going to be

      • yeah, I've been playing with these things. And I don't think people realize how poor the code generated by these systems is. These AI agents really have a tendency to just add if conditions everywhere in the code, leading to some really poorly architected software. And about 5 to 10 feature iterations later, the agent can't get anything working even remotely well anymore, and then you'll spend the next day refactoring all the junk it created.

        So, yeah, these systems can be useful. Beware what you'll get.

        THAT'S FANTASTIC! THINK OF THE HARDWARE SALES ALL THAT BLOAT WILL DRIVE!

      • I have had a similar experience with code generated by pre-AI low-code platforms.
        I attribute that to the code generation framework adopting a relatively generic and inflexible approach to representing the problem and associated logic in code.

        I'm not surprised that AI has a similar quality, since I believe that current LLMs really don't have much internal logic, regardless of what the likes of Sam Altman say.

    • by gweihir ( 88907 )

      However, by doing this, how do we get real junior engineers the experience needed to do what I'm doing?

      We cannot. There is a huge crisis coming because of that. Sure, some junior people can turn themselves into the senior people critical to making this work, but they are rare. My estimate (from teaching IT) is 5-10%. That is not nearly enough to keep things going. And hence I think AI coding will eventually mostly go away, and by then it will have been an excessively expensive failed experiment.

  • Everyone can expect more BS from Sam as he tries to remain relevant and grease the wheels for the OpenAI IPO, where he will cash out big time. Loads of completely unsupported BS until then. Thank you, media, for being an uncritical, unfiltered conduit for his sh*t.
  • by hjf ( 703092 ) on Tuesday March 25, 2025 @11:00AM (#65257899) Homepage

    what is the obsession with leaving software engineers jobless?

    Why is every AI out there trying to, first of all, leave us software engineers without a job?

    I think it's a false narrative: "all companies need software, and they can get it from expensive engineers, or from us". The reality is that not all companies need software (at least not custom software), and those that do already have as many engineers as they need. Not every company out there is trying to be the next dotcom thing. Not every company wants an "AI Agent" to do whatever they do. And most companies are small, not FAANGs with thousands of "top talent". Even if you can replace the "engineer", in small companies the engineer fills many roles (IT, support, etc.). Is an AI agent going to come to a desktop and unplug the printer and plug it back in?

    And why are we talking so much about replacing engineers? We engineers know the truth: AI could leave lawyers without a job RIGHT NOW. It can be trained on the whole corpus of the law and every ruling, know every precedent, and you can give it the context of all parties in a trial and it can provide you with a defense. A team of lawyers can be reduced to a single lawyer and an AI. Why is no one saying this? Why are they only focusing on "engineers will be replaced"?

    I see many other lines of work replaced long before engineers.

    • by gweihir ( 88907 ) on Tuesday March 25, 2025 @11:17AM (#65257933)

      I think it is just an effect of tribalism. The "manager" tribe absolutely hates having to employ people who can do things they cannot do and who are rare enough that they cannot be easily bossed around. They like to think they are in control. But in reality we have reached a point where getting weaker, less smart, and less experienced software engineers will massively reduce profits and may even kill your organization. Hence they cannot actually implement their fantasies of dominance and of being in charge.

      And hence many of them mindlessly cheer for any false prophet that promises them they can get rid of the engineer tribe or at least reduce their numbers in their organization. Obviously, that will never work, but these power-hungry no/low-skill people are not mentally equipped to understand that.

    • by munehiro ( 63206 )

      because software engineers are expensive (both in terms of salary and human handling), and anyone who can get rid of them with something that costs a tenth as much is going to make a huge amount of money. First they transferred everything to India to reduce labor costs and worker rights. Now they are trying to get rid of that too.

      The objective of every entrepreneur is to make a ton of money while giving away as little as possible.

    • It's already happening with companies farming more and more coding work out to low bidder bottom feeders (we all know who they are). This is just an evolution of that model. Will it improve velocity or quality? No idea....the bar is currently rather low.

    • by dvice ( 6309704 )

      - If you can replace programmers, you have an AI that can write AI that will replace lawyers, doctors, and whatever else you like.
      - You don't need special robot arms to replace workers when the worker is a programmer, so it is fast and easy to put into use.
      - Programmers have relatively high salaries.
      - It is relatively easy to verify whether the AI did a good job, because you can just try compiling the code. If it compiles, it must be correct.

      But I agree with you. It is much easier to replace, for example, teachers

    • "what is the obsession with leaving software engineers jobless?"

      It's not just software engineers. People are expensive. Companies are always trying to reduce head count. Whether it be welders, machinists, assembly line workers, or bank tellers, doing the same work with fewer employees is way up on Management's to-do list.

      Software engineers were protected until now because the machines couldn't automate the output, and also because the demand of the digital revolution stayed ahead of the number of people av

      • by narcc ( 412956 )

        How many drafters did AutoCad put out of business?

        I give up. How many drafters did AutoCad put out of business?

        I'm guessing none, but I'm sure you have a number that's well-supported by evidence.

        • Autocad and other drafting software increased productivity by a third to a half, so 33 to 50% of the drafting jobs went away. There are only so many buildings that need to be built.

          My last employer (a chemical plant) went from five drafters to two, and they were already using AutoCAD when I got there. The old drafting tables with T-squares and triangles just went away.

          I'd work on your people skills; the snotty programmers will be the first to go.

    • As long as LLMs keep making shit up, they can't replace lawyers or any knowledge workers. They can do most of the lawyer's work of searching LexisNexis, but the lawyer still has to check everything, or have a human assistant check everything.
      The long-term problem with not having junior assistants is that the path to becoming an associate is narrowed or removed. The hypothetical law firm here trains junior assistants as they work. An LLM will cost the firm big $$ without the potential to become a great partne

  • ...but we'll never be the disgusting human being that Sam Altman is, of that we can be very, very grateful.
    • by gweihir ( 88907 )

      Yep, there is quite a bit of that going on at this time. It seems open lying and open evil have become acceptable in the West.

      While I do not think there will be a reckoning before you get reincarnated again, I cannot imagine people like that advancing on their path with lives like that. Probably more of a regression. I mean, grabbing power is easy. Not abusing that power is the hard part and the growth opportunity.

  • In the short run it might lead to fewer coders who do not qualify as engineers. In the longer run, with all the damage likely to be done by "AI code", the need for actual engineers in the software space may even increase.

    So why is Altman lying? I mean, besides that he has always been lying, except at the very beginning? Simple: software engineers are expensive and may just leave if you treat them badly. So many cretin-level managers dream of getting rid of them.

    • by dvice ( 6309704 )

      I think that a 10-person team of good developers can easily outperform a team of 100 that contains 10 good developers. So I think we could already get rid of 90% of developers and do all the work with the remaining people, if we could just pick the right people. Bad developers create so much extra work that good developers cannot keep up. It once took me 2 weeks to clean up a mess that someone else created in a single day. And if you let that trash sit in the code base for years, it becomes pretty much impo

      • by gweihir ( 88907 )

        I agree. My personal experience: I did a project for a rather large bank in half a year that, I realized after a while, would probably have taken 20 or so people if done internally. I did it alone and likely a lot faster, though I needed to fill about 10 different roles doing so.

        But the problem here is that you need really good managers for that to work. In the case I describe, I got managed directly by the manager of a strategic effort within that bank and he did do some things for me that usually would hav

      • A team can only go as fast as its slowest developer.
  • ... had a job putting the caps on toothpaste tubes, got replaced by machines, then got a job repairing the machines that replaced him in the first place. The same thing will happen here. Human intervention will never go away. If it does, well, Skynet will be online.
  • by davidwr ( 791652 ) on Tuesday March 25, 2025 @11:22AM (#65257941) Homepage Journal

    Code re-use has been a thing for almost as long as code has existed.

    In the early days, it was compilers, interpreters, and functions you borrowed from your past projects or your peers.

    Commercial libraries weren't long behind. Yes, I'm talking about source-code libraries you just "cut and paste" into your project as well as compiled ones you link in or call at run-time.

    Over time, libraries got richer and now we code to very rich APIs. Honestly, we no longer know or care what bits are moving around where unless we are working on a project that requires such knowledge (shout out to the bit-bangers in the crowd).

    Over time, the work one programmer can do has gone up tremendously. Today, without knowing much about database theory, I can "hack up" code that will query several databases on different servers each using its own API, save the output to a common format, massage the data, and create a 3D-animated chart, all in a few hours. Try that on a PC (even with state-of-the-art 3D graphics and networking) 20 years ago or any computer 40 years ago. Okay, it MIGHT have been POSSIBLE 20 years ago, but there were a lot fewer tools available to make it easy enough to do in a few hours (or less, if you are well-versed in the tools).

    "AI-written software" is just another step on the journey of re-used code.

    • "AI-written software" is just another step on the journey of re-used code.

      It's not. It's another step on the journey of auto-generated code, which also has a long, well-trod history; very different from reuse. It has known pitfalls that must be avoided.

  • It's a "scale" for experienced devs (as in real in-your-head knowledge), but a mere "bias" for inexperienced ones. Sure it will get fresh graduates at a good starting point, but with very poor scaling. Programming work is going to remain secure for the non-lazy. The rest will eventually eat dust while scraping by until then, while at the same time feeding big tech.
  • by linuxguy ( 98493 ) on Tuesday March 25, 2025 @11:33AM (#65257973) Homepage

    But he is right on this one. I write software every day. I think the amount of software I write has gone up by about 3x since I started using AI tools like Cursor. And it is better quality too. It is better documented and has more tests. AI is really really good at taking care of the mundane. My AI skeptic coworkers try hard to find issues with my code during PR review, but struggle with it.

    We will not all lose our jobs. The ones who have good grounding in software principles and can direct AI to accelerate development will be valued. Junior devs are in trouble though. And so are the senior devs who have their head stuck in the sand about AI. In the world of automobiles, if you insisted on using horses, you would be putting yourself at a disadvantage. This is no different.

    • Meh. Glad you are having a good experience with the LLMs. I like them for helping with testing and code coverage, but a couple of times recently I've had to come up with a bit of specialized code, and when I ask my LLM overlord (afterwards) to accomplish the same task, it gets really close but has a bug or two. My problem is that even when I modify the prompt to give it more clues, and even ask it to avoid the bug with specific suggestions, it will spit out the code with the same bug. If I can't ask it for a picture

      • by ljw1004 ( 764174 )

        when I ask my LLM overlord to accomplish the same task it gets really close but has a bug or two.

        The way I use it: I write a method stub with just signature and docstring, or a class stub, then ask the LLM to flesh it out.

        Do I ever use what the LLM produced? -- never; I always discard every single line it produced, and supply my own.

        Do I ever benefit from what the LLM produced? -- usually yes, about 90% of the time. It shows me fragments or idioms or API usages that maybe I didn't know about, or it shows me a cumbersome approach which tells me where to focus my efforts on finding something more elegant
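Concretely, the stub-first pattern looks something like this; merge_intervals is an arbitrary example, not from the parent post:

```python
# Stub-first: write only the signature and docstring, ask the LLM to
# flesh it out, then review, keep, or (as the parent does) discard it.

def merge_intervals(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Merge overlapping (start, end) intervals and return them sorted.

    >>> merge_intervals([(1, 3), (2, 6), (8, 10)])
    [(1, 6), (8, 10)]
    """
    ...  # hand this stub to the LLM, then judge what comes back
```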

    • While I agree that developers will be able to write more, better quality code with AI...I don't believe it will result in fewer jobs. It will instead result in more getting done. Software development today is severely constrained by how expensive it is. With AI, it will get cheaper to develop functionality, and companies will realize they can do more with their limited budgets. What software team doesn't have a years-long backlog? That backlog exists because software is expensive to create. Bringing down th

    • by Calydor ( 739835 )

      Sure, not everyone will lose their jobs. But consider the people who five-ten years ago were told to forget all the skills they went to school for and learn to code instead; they're the ones most likely to now lose their jobs and AGAIN have to forget everything and redefine who they are. How often do you think a human can do that in a lifetime without suffering severe diminishing returns on the next thing they try to learn?

  • "Each software engineer will just do much, much more for a while. And then at some point, yeah, maybe we do need less software engineers"

    This will be true for companies that don't care about growth or expansion. However, many/most companies do care about growth, so increased efficiency is always followed by scaling up to produce total growth. These companies know that their competitors are scaling up, so they will also.

    The other effect is that increased efficiency lowers the threshold to enter the market,

  • I hope people also want less software.

  • until the idiot CEOs get some boots on the ground and actually USE AI tools and compare them to a real software engineer. Also, just because it's a machine doesn't mean it's going to be cheaper. Even if you did theoretically come up with an AI that was as smart as an engineer, the switching energy is currently a million times larger for the best machines; that's a lot of wasted energy compared to what you can get with wetware. But these guys want to think they are in control, so let them waste their money until th

  • This is example #7343 of media people's inability to distinguish advertising from prediction.

    To be fair, chronic gullibility seems to be an American problem in general.

    Here's my prediction (I will not profit if it comes true; I'll probably be collateral damage): Altman's hype and "advice" will destroy more wealth than the 2000 dot-com crash.

  • Sam Altman has never run a business for a profit. He has *sold* businesses for a positive return, for which he did not need the business operations to be profitable.

    Why would anyone expect him to know the demand for different types of labor?

  • AI cannot fix its mistakes. It's easy and common to have an LLM generate some code, then point out a mistake it made and have it re-write the code. At this point in what an LLM can do the BEST you'll get that way is code that runs. You need to have it generate the code, then YOU have to fix the mistakes in it. It cuts out that 30% of the time you're writing that initial code, thereby making you 30% faster. That's it. That's the speed boost. Push it past that point and you will definitely be introducing bugs
    • AI cannot fix its mistakes.

      Yes, it can.

      It's easy and common to have an LLM generate some code, then point out a mistake it made and have it re-write the code.

      Yup.

      At this point in what an LLM can do the BEST you'll get that way is code that runs.

      mmmh, could not parse... perhaps I should feed it to an LLM and see if it can figure out what you were trying to say.

      You need to have it generate the code, then YOU have to fix the mistakes in it. It

      I've played with this extensively.
      If something isn't right about the code it generates- tell it what's broken, include the compiler error. It will almost always fix it.
      If you see logic errors- point them out- it will refactor it.

      The only real problem I've seen is that it seems like their ability to do this definitely drops with the amount of context they have. I imagine it
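The "include the compiler error" loop described above, sketched mechanically: complete() is a hypothetical LLM call, and the C target and file name are invented for the example:

```python
# Sketch of the repair loop from this thread: compile, and on failure
# feed the compiler's stderr back to the model. complete() is a
# hypothetical LLM call; the C target and file name are invented.

import subprocess

def complete(prompt: str) -> str:
    """Hypothetical LLM call returning a revised program."""
    raise NotImplementedError

def repair(source: str, attempts: int = 3) -> str:
    for _ in range(attempts):
        with open("prog.c", "w") as f:
            f.write(source)
        result = subprocess.run(["cc", "prog.c", "-o", "prog"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return source          # it compiles; logic review is still on you
        source = complete(
            f"This C program fails to compile:\n{source}\n\n"
            f"Compiler error:\n{result.stderr}\n\nReturn the fixed program."
        )
    return source                  # best effort after the attempt budget
```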

  • I, for one, welcome our new AI Coding overlords...

  • It depends on how much source code AI manages to steal.
  • The hype around AI kind of reminds me of the hype around outsourcing: it sounds good for saving money but will cost you more in the long run until the workflow gets settled... there was a stretch of at least 4 years where we got paid good money to fix up a project that overran its schedule when people first started outsourcing code.

    The great thing is, that transitional period is generally enough time to make sure existing engineers learn the tools and for the market to catch up, leading to companies getting MOR

  • Who's the target audience? CEOs. Again, CEOs look only at the current quarter's stock price, since that's what determines CEO pay. Slashing staff, and the management/benefits overhead staff require, is a huge win for the current profit cycle. If it also happens to drain a company of future leadership and brain power, so what? The CEO will be gone by then with a spectacular golden parachute.
  • High-level languages were meant to kill off programmers (Common BUSINESS-Oriented Language, anyone?). Then 4GLs, then graphical programming (drawing diagrams and having the tool create the code), then object orientation, then frameworks, then... blah blah blah... and now we're at some form of AI. Again. And again similar people are parroting the same nonsense.

    If you get a tool that can do more, you'll be asked to do more. The amount of work will increase and the tooling will be incorporated into it. Shiny tool de
  • Then again, maybe AI doesn't know what COBOL is

  • In the 1930s, the economist Keynes predicted that by now, we would all be working only 15 hours a week because automation would handle most of the work we did. Now, people are making the same prediction about AI. It seems history is repeating itself. https://www.npr.org/2015/08/13... [npr.org]

  • Owning Adobe Creative Suite (or should I say 'renting', but I digress) doesn't make you a graphic artist, a photographer, or a videographer. It just makes mundane tasks easier. AI coding tools don't make you a software architect either. They just save you hours of googling, even when they give you code that doesn't work.

  • Do you really want to have fewer employees, or maybe the same employees but more productivity?
    Most technical revolutions did not lead to less employment, but to products that were thought to be impossible before.
    Projects that seemed infeasible before may become much more feasible when your workforce can automate the boring tasks and work on the complicated parts.

  • Till we have our Y2K moment... when they plead with all us old guys to come back and debug their AI code.

  • ...looking at the disasters that a group of engineers inflicted on a 20-year-old, fool-proof, top-notch signal processing chain, I second their replacement with AI. It can't get any worse.
  • Why the idea that AI will reduce the work for humans, rather than simply changing their role and increasing output? The economy is not a zero-sum game. Or at least a normal, competitive economy isn't. Perhaps where monopolies dominate and there is no possibility of competition, the incentive is to limit output to maintain higher prices. That is the world in which AI executives work, but I am not sure that it is a natural state.
  • Copy and paste is not engineering. AI is copy and paste via database.
  • From driving cars into flaming death (https://www.forbes.com/sites/m... [slashdot.org]) to suggesting deadly recipes, to just really bad advice [wp.com]... AI already has your number and will depopulate the world, leaving only the entrusted to tend their loving machines.

    No doubt, Altman uses AI to write all his press releases which the machines are using to groom the believers.
  • These licenses grant rights to human programmers who are sharing code with other human programmers... but what happens when all that open source code becomes the library from which all the AI models are grabbing cut-and-paste code to plaster into their output streams thereby destroying the programming profession?

    At some point we'll probably have armies of lawyers suing all the operators of AI coding systems for copyright infringement on all sorts of code they're stealing and pasting into the programs the AI
