Programming AI

Bret Taylor Urges Rethink of Software Development as AI Reshapes Industry

Software development is entering an "autopilot era" with AI coding assistants, but the industry needs to prepare for full autonomy, argues former Salesforce co-CEO Bret Taylor. Drawing parallels with self-driving cars, he suggests the role of software engineers will evolve from code authors to operators of code-generating machines. Taylor, an OpenAI board member who once rewrote Google Maps over a weekend, calls for new programming systems, languages, and verification methods to ensure AI-generated code remains robust and secure. From his post: In the Autonomous Era of software engineering, the role of a software engineer will likely transform from being the author of computer code to being the operator of a code generating machine. What is a computer programming system built natively for that workflow?

If generating code is no longer a limiting factor, what types of programming languages should we build?

If a computer is generating most code, how do we make it easy for a software engineer to verify it does what they intend? What is the role of programming language design (e.g., what Rust did for memory safety)? What is the role of formal verification? What is the role of tests, CI/CD, and development workflows?

Today, a software engineer's primary desktop is their editor. What is the Mission Control for a software engineer in the era of autonomous development?
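
One concrete, purely illustrative answer to the verification question in the excerpt is property-based testing: rather than reading every line the machine wrote, the engineer states properties the code must satisfy and lets a test framework search for counterexamples. A minimal sketch in Python, assuming the hypothesis library; `ai_generated_sort` is a hypothetical stand-in for machine-written code:

```python
# Sketch only: `ai_generated_sort` is a placeholder for code an AI assistant
# might have produced; the two properties are what the human actually reviews.
from hypothesis import given, strategies as st

def ai_generated_sort(xs):
    # Stand-in body; imagine this was machine-written.
    return sorted(xs)

@given(st.lists(st.integers()))
def test_sort_is_ordered_and_a_permutation(xs):
    out = ai_generated_sort(xs)
    # Property 1: the output is in non-decreasing order.
    assert all(a <= b for a, b in zip(out, out[1:]))
    # Property 2: the output is a permutation of the input.
    assert sorted(out) == sorted(xs)
```

Run under pytest, hypothesis generates random inputs (100 per test by default) and shrinks any failing case to a minimal counterexample, which is one way a human could review intent rather than implementation.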


Comments Filter:
  • Boeing or Airbus autopilot mode?

  • I suppose I'll be left in the dust, but I view AI as maybe an additional tool for finding code snippets to accomplish a specific task. Software architecture is not something I'll use AI for. I'm too much of a control freak for that.
    • by Lisias ( 447563 ) on Wednesday December 25, 2024 @12:46PM (#65038669) Homepage Journal

      Agreed.

      AI is a tool for people that can't use google to find code in StackOverflow.

      Problem: not knowing how to google things also means an inability to detect when the AI is hallucinating, which will create pretty interesting (besides annoying) consequences in the field.

      People who know how to code will make some serious bucks overcharging to fix the AI-created mess.

      On the dark side: the current generation of skilled developers will have absolutely no incentive to train the next generation (it's the other way around), and so in a few years we will have some serious problems.

      • by narcc ( 412956 )

        The past couple of generations of "developers" are almost ashamed to write code. C is considered a "low-level" language. The concept of memory addresses is considered an "advanced topic" these days. 20 years of Agile nonsense actively discouraging planning. Python. ...

        Yeah, the software industry is in real trouble. Not that anyone paying even a little bit of attention over the past 30 years didn't see this coming ages ago. Once CS programs turned into glorified programming bootcamps in the late 90's, that

      • On the dark side: the current generation of skilled developers will have absolutely no incentive to train the next generation (it's the other way around), and so in a few years we will have some serious problems.

        This might be the biggest challenge arising from modern AI tools. They make half-decent replacements for junior developers, and with the market dynamics of the past 10-15 years, hiring junior developers was already a questionable idea for most businesses. So if hiring good seniors and providing powerful workstations with AI-driven tools becomes SOP, then where will the next generation of good seniors come from in 2030 or 2035?

        However, so far I don't see much evidence that these tools will pose a credible thre

        • by djinn6 ( 1868030 )

          The guy's apparently a developer by background, so surely he knows what compilers and interpreters are. Almost all software developers have been operators of code generators since somewhere in the middle of the 20th century. The trick has been, and still will be, to make the specification of the behaviour you want clear enough for those code generators to do what you actually want them to do.

          I'd say compilers are better. With LLMs, even if you make it perfectly clear what you want, they will sometimes interpret it in some other way. Plus you have to use English. It's very hard to be exact when using a natural language.

      • by linuxguy ( 98493 )

        > AI is a tool for people that can't use google to find code in StackOverflow.

        I see these sorts of ideas pushed on Slashdot quite often. I can only hope that people pushing these ideas are not programmers themselves. If they are, they are in for a rude awakening.

        AI is much much more than Googling to find something on StackOverflow. A lot more. For example, I can feed it a large body of code and ask it to help me find potential issues. And it does a reasonably good job of it. It used to not be able

    • by HiThere ( 15173 )

      That's what it is *now*. What should you prepare for "in the next few years"?
      FWIW, I expect that as long as LLMs are the driver, what you'll get is something that is good at writing snippets. But I'm also convinced that there's nothing that says LLMs need to be more than a part of the AI. How much learning (by what kind of device) is needed to "underpin" an LLM before you get something that understands at least a bit of non-verbal context? (FWIW, I expect we've already got to the "at least a tiny bit" sta

    • by gweihir ( 88907 )

      LLMs cannot do software architecture at all. For that you have to understand quite a few things, and "understanding" is not something any AI can do today, nor something LLMs will ever be capable of.

  • Clueless CxO (Score:5, Insightful)

    by bradley13 ( 1118935 ) on Wednesday December 25, 2024 @11:22AM (#65038515) Homepage

    Copilot, ChatGPT & other LLMs do decent auto complete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.

    I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.

    The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.

    • by dgatwood ( 11270 )

      Copilot, ChatGPT & other LLMs do decent auto complete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.

      I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.

      The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.

      Agreed. On the flip side, it makes interviewing harder, because LLMs can do all those LeetCode problems, kind of. Throw real code at them and they occasionally do something useful, but usually produce something that bears little resemblance to actual code, so if we could find a way to reliably use real code in interviews, great, but the flip side is that they'll probably learn from the specific examples and learn to do those well, resulting in a need to constantly switch examples to stay ahead of the automation. And at the

    • Copilot, ChatGPT & other LLMs do decent auto complete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.

      I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.

      The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.

      Your comments raise some valid points about the limitations of current AI tools, such as Copilot and ChatGPT, in understanding requirements or system design. However, this critique seems to miss the broader perspective of the article, which is not about overestimating the current capabilities of AI but rather exploring the transformative potential of autonomous tools in the future.

      The article does not suggest that AI today can fully grasp system design or requirements. Instead, it speculates on what tools b

      • by Anonymous Coward

        ooh nice use of LLMs to make comments when you cba

      • by gweihir ( 88907 )

        Can you please stop posting LLM generated crap?

    • If you know anything about how Salesforce is doing right now, you know this guy is the real tool. That dumpster fire of a company seems to trundle on only because they've bought (and ruined) so many better products

    • by gweihir ( 88907 )

      I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.

      And that is just it. If it is common enough that an LLM can (usually) do it well, it is common enough to pack it in a library. And that comes without all the problems LLM generated code has.

  • if there wasn't money on the table no one would care. these people don't make anything and never will. creation is the tool of man alone.
    • Charge ChatGPT for the massive increase in fossil fuel energy generation, or better yet, pass a law requiring data centers to NOT use the grid and use only onsite energy generation, and the entire exercise will be revenue negative.

      The only reason OpenAI can exist is because it is stealing money from grid tied energy customers and causing massive inflation in your electric bill.

      • ChatGPT uses far less fuel and generates far less carbon than what it would take to support a human worker doing the same task.

        • by HiThere ( 15173 )

          Got any evidence that is true? I haven't seen any. Definitely "a human worker" could not do the same tasks, because nobody can respond that fast. If you scale it to tasks a human worker could do, then ChatGPT produces a far inferior product, and the "costs to train and use" it are way over the top. But scaling is probably not a reasonable way to think in this area.

          That said, I'm convinced that the LLM-is-all approach to AI is extremely limited. Get a robot, even a robot dog, working with an LLM and you

          • Sure, back of the napkin: the cheapest human software developer costs something like 500 USD a month. If we assume that guy is working 9-hour days and a month averages 22 workdays, that's 198 hours at about $2.50 an hour, but it doesn't include the electricity to run their computer or any of the other stuff like an office and a desk or paying the chai guy. The rule of thumb here is that we double the salary to get the total cost, so we bump that up to $5. That worker could do something like 5 chatGPT-li
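
For reference, the napkin math above works out as stated; a quick sketch that simply reproduces the commenter's assumed figures (they are assumptions, not measured data):

```python
# All figures are the parent comment's assumptions, reproduced only to show the arithmetic.
monthly_salary_usd = 500        # "cheapest human software developer"
hours_per_day = 9
workdays_per_month = 22

hours_per_month = hours_per_day * workdays_per_month    # 198 hours
hourly_wage = monthly_salary_usd / hours_per_month      # ~$2.53/hour
fully_loaded_rate = 2 * hourly_wage                     # double-the-salary rule of thumb: ~$5.05/hour

print(f"{hours_per_month} h/month, ${hourly_wage:.2f}/h wage, ${fully_loaded_rate:.2f}/h fully loaded")
```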

        • You'll have some figures to back that up then.

          • Sure, but the leap of faith here is energy value theory, since that's the only way we have to measure the total carbon footprint of the human worker.

    • creation is the tool of man alone.

      Ok, well consider AI a tool that man created to do some of that creation for him.

    • People do creation, not just men.

  • Sigh (Score:5, Insightful)

    by 26199 ( 577806 ) on Wednesday December 25, 2024 @11:39AM (#65038525) Homepage

    Generating code is not at all the limiting factor. Developers do not spend even 20% of their time typing new code.

    So, good luck with that.

    • by leptons ( 891340 )
      > Developers do not spend even 20% of their time typing new code.

      That sounds like a generalization that wouldn't hold up to scrutiny. Sure, some developers probably spend 1% of their time writing new code. But other developers may spend 100% of their time writing new code. You don't get to throw out a number like this without providing a citation.
      • I've worked on internal developer tooling at a large company and so had access to lots of data--you have to know how much people are using the tooling to know how much you can gain by improving it. So, I know what I'm talking about :) but no citations possible, sorry.

      • by djinn6 ( 1868030 )

        But other developers may spend 100% of their time writing new code.

        Please tell me who these developers are so I can avoid them. If they're not spending at least 20% of their time understanding and clarifying requirements, then whatever they wrote will be completely useless to the customer. That's assuming the code even works; you didn't give them any time to test or fix bugs. And this code will be unmaintainable because there was no time spent on design or architecture. Nor will it work together with other preexisting systems. So it's basically useless until a competent de

  • AI tools are not accurate enough to replace software developers; right now the accuracy of LLM answers is in the 70 to 85 percent range. From what I've read there isn't a good pathway forward to fix that problem. So AI will remain a tool.

  • Is continuous bugs caused by hallucination. I think OpenAI needs to be charged a huge CO2 generation fee for the bloatware they've put out on the market, and also needs a large fine for every wrong answer their AIs generate.

  • by oldgraybeard ( 2939809 ) on Wednesday December 25, 2024 @12:08PM (#65038581)
    Wrong vision! "The operator of a code generating machine" is equivalent to ancient times when a college student on 3rd shift spent the night at the computer center dropping card decks into the mainframe, tearing off the printouts, and wrapping them around the card deck with a rubber band while studying.

    The more likely vision for AI when it comes to programming is software engineers writing very precise software specifications.
    Which AI converts into applications or chunks of applications.
    Which software engineers then test and debug using a precise test specification and a different set of AI tools.

    The concept that anyone can code is the same as anyone can pound a nail. Net result, a lot of bent nails.
    Bring in a nail gun and the nails get driven in straight, but the gun has to be positioned for each nail before driving it.
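
A loose sketch of what the "precise test specification" from the comment above could look like in practice, with every name hypothetical: the engineer writes the cases, a code-generating tool is assumed to supply `parse_duration`, and the spec is the artifact a human actually reviews:

```python
# Sketch only: every name here is hypothetical. `parse_duration` stands in for
# code an AI tool might generate; the parametrized cases are the human-written
# "test specification" that gets reviewed instead of the implementation.
import pytest

def parse_duration(text: str) -> int:
    """Stand-in for machine-generated code: convert '1h30m'-style strings to seconds."""
    seconds, digits = 0, ""
    for ch in text:
        if ch.isdigit():
            digits += ch
        elif ch in ("h", "m", "s"):
            unit = {"h": 3600, "m": 60, "s": 1}[ch]
            seconds += int(digits) * unit
            digits = ""
    return seconds

@pytest.mark.parametrize("text,expected", [
    ("90s", 90),
    ("2m", 120),
    ("1h30m", 5400),
    ("1h0m15s", 3615),
])
def test_parse_duration_matches_spec(text, expected):
    assert parse_duration(text) == expected
```

The point is not this particular function but that the human-authored part stays small, precise, and machine-checkable.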
    • well put... good analogy... and stories about the night shift and decks of cards.... your beard must be very long indeed...

      I had several friends in my youth that went into electrical engineering and what they related is much like what you are saying. I got the impression that no-one actually "did" hardware after a certain point, they just used software tools which automated much of their drudgery, then did a lot of quality control.
      • by HiThere ( 15173 )

        Well, when I went to school there were still a few EAM machines, with patchboards. The most common applications had patchboards with covers over most of the wires, sometimes over all of them. But if a knowledgeable person had special needs he would wire his own patchboard. They were really quite flexible within their domain, and totally useless outside it. And the domains were SMALL. The card sorter could be programmed to sort on any columns of the cards of the deck, either ascending, descending of a m

  • by Big Hairy Gorilla ( 9839972 ) on Wednesday December 25, 2024 @12:17PM (#65038601)
    I once re-wrote google maps before I had coffee.
    Everyone believes me, right?
    • by Big Hairy Gorilla ( 9839972 ) on Wednesday December 25, 2024 @12:31PM (#65038637)
      It's as vague as possible. This guy is an industry luminary? Sounds like he's just feathering his nest.

      Here's my unwanted 2 cents. In a stroke of irony, creative work like audio, video, music, and art is well within reach of current generative AI. Translators are already out of work; soon "journalists", writers, commercial artists, and musicians will all be unemployed... it's cheaper to use AI, and the people who use it are completely tasteless; they will get pure lowest-common-denominator stuff and be super pleased with it.

      What this guy is talking about requires multi-domain knowledge... the place where AI is bad. AI is going to need a quantum leap (apologies) of capability, which does not appear to be coming any time soon, or possibly at all. I'm not buying it. But tasteless MBA groupthink management will also use it and be super pleased with it... so buckle up... also, hopefully as pointed out above, Boeing doesn't go all in on this bullshit, or seatbelts won't be of much use. Quality of output is no longer a measure. Even causing death isn't that big a deal anymore.
      • by HiThere ( 15173 )

        Yeah, "quantum leap" is the wrong term. That's as small a jump as is possible. But the idea of a discontinuity is correct. LLMs have definite limits. But LLMs aren't all of AI, they're just the part that juggles words. They're a necessary component of an AI that deals with people, but they don't understand non-verbal context. However there are existing AIs that do handle (limited) contexts. Merge one of them with an LLM, and you'll have something that can talk about a particular domain, and understa

      • by leptons ( 891340 )
        >creative work like audio, video, music, art, is well within reach of current generative AI.

        No, it's not.

        >it's cheaper to use AI and the people who use it are completely tasteless, they will get pure lowest common denominator stuff, and be super pleased with it.

        We've had completely tasteless human writers, and movies made with absolutely no redeeming values - so bad you have to wonder how people spent millions making drivel. They often lose millions, even the low budget shittiest movies are luc
  • "Former Salesforce CEO" -- NOBODY

    "Suggests" -- doesn't even "say", "imply", "puts forward", "funds", -- NOTHING

    "Blah blah blah"

    Imagine if Salesforce was like a THING, such that its former CEO wanking off would be a thing. But it's not.

    Next?

  • Predicts automation tools will dominate industry. More at 11.
  • We already operate code generating machines known as "compilers". Generative AI may enable a higher level of abstraction between a developer's intentions and executable binary code, but the developer's role is fundamentally the same.
  • by rocket rancher ( 447670 ) <themovingfinger@gmail.com> on Wednesday December 25, 2024 @12:46PM (#65038673)

    Bret Taylor's article raises some highly engaging and thought-provoking questions about the role of AI in software engineering. The parallels drawn between the evolution of software engineering and the advent of autonomous systems resonate deeply. AI's role in this future should be viewed through a collaborative lens rather than as a competitor to human creativity -- similar to how artists and musicians are beginning to embrace AI as a tool that augments, rather than replaces, their creative processes.

    For instance, the questions about verifying AI-generated code and designing new programming languages could lead to a new interface paradigm for software engineers. Such an interface might integrate real-time verification, intuitive visualizations, and explainable AI outputs, allowing engineers to operate as curators and architects of code, much like conductors guiding an orchestra.

    The potential for creating robust, verifiably correct software aligns with AI's transformative power to elevate human ingenuity. Software engineers, like other creative professionals, could offload repetitive tasks to AI while focusing on high-level design, problem-solving, and innovation -- areas where human creativity excels.

    His call for ambition in designing the new paradigm strikes a chord. Just as artists use AI to push the boundaries of their mediums, engineers could leverage AI to redefine software systems, prioritizing security, efficiency, and scalability. Why not aim for a future where every piece of software is as artfully constructed and harmoniously balanced as a well-crafted symphony?

    This article inspires reflection on how tools -- and roles -- will continue to evolve in the "Autonomous Era." It is an exciting time to consider the possibilities.

    • It's great to be optimistic, I suppose, but your analysis doesn't include the dark side of humanity: greed, stupidity, and laziness. Artists and musicians are being drained of their expertise by training the AIs every time they "collaborate" with them, as they are providing the training data. Then that same expertise is being sold back to them and anyone else, without attribution, and being defended as proprietary. AI is definitely replacing artists and musicians. That's the greed part.

      Now the laziness part. The s
    • by gweihir ( 88907 )

      And what LLM generated this fake "comment"?

  • AI is definitely a development “pivot point”, not just a tool boost. It's a moment where the industry has the opportunity to catalyze all the know-how, best practices, test suites, paradigms, and procedures into formal architecture.

    Software development is at the stage where Architecture founded its five pillars (Tuscan, Doric, Ionic, Corinthian, and Composite). Centuries later, nothing has changed in Architecture's five ordered pillars, but software is likely to see huge improvements of that order with AI and quantum

    • by gtall ( 79522 )

      Too bad Slashdot cannot recognize bots generating comments like the parent comment.

    • There are more types of pillars, but people just got tired of naming them. For example, which of the classic five would this be? Or this one? [app.goo.gl] Or this strange but beautiful thing? [app.goo.gl]

      The other thing that happened is people came up with new types and named them an order, but doing so adds no practical value beyond the originals. Nonce orders are kind of cool but categorizing them doesn't do much pedagogically.
  • Computer programming was originally bit manipulation and has moved toward higher levels of abstraction over the decades. I expect that traditional programming languages will fade away and be replaced with specification languages. There will be some templates you use to describe what you want - first in general terms to generate an overall solution framework and an initial prototype, and then in increasing detail so that the final product gets iteratively ironed out. Even now we have "user stories" that tell

      • Variants of 4GL have been around for quite some time, and according to Wikipedia, "In the 1980s and 1990s, there were efforts to develop fifth-generation programming languages (5GL)". But I think things are moving far beyond that now.

        • The promises were great, but they never went very far when confronted with reality. Same as what the previous poster is proposing... a high-level specification that automagically generates a working system. We all like the idea... but... the 4GL and 5GL seem to scale up
    • User stories are great for developing test cases, and figuring out what should be emphasized in an AI, but they absolutely suck for creating a versatile, elegant product. The natural design leans towards everything being like a Microsoft Wizard. Great if you want to follow that story exactly, but if you want to combine things in unique ways, you can't. You have to follow the story.
    • by gweihir ( 88907 )

      Hmm. When have I read this before? Oh, yes! When I studied CS 35 years ago, the 5GL project had just completely failed. It pretty much made the promises you just described.

      In actual reality, what you wrote is wishful thinking. We are nowhere near anything that could be turned into reality, and it is unclear whether that will ever be possible. One exception: You can turn some forms of formal specification into code automatically. Worked well 35 years ago. Why did it not catch on? Simple: Writing a formal spec is much,

  • In addition to serving as the Board Chair [openai.com] of for-profit-status-investigating OpenAI [citizen.org], Bret Taylor is also co-founder of Sierra [sierra.ai], "the conversational AI platform for businesses."

  • Once upon a time programmers struggled with writing machine code. Then some clever people came up with a system for specifying what you needed in almost plain English, and the computer would do the rest. It was called COBOL, and coding was never quite the same. But still, programmers were badly needed, and once everyone was on board with these new, modern "plain English" ways of coding, the requirements just escalated, so we still had more than enough work to do. After COBOL, ALGOL, and FORTRAN, there came new

    • by gweihir ( 88907 )

      Indeed. Writing the code is a minor part of software creation. That has been known for _ages_. This Taylor person is clueless or a liar.

  • At least if AI codes in current languages designed for human use, there's hope of it being legible to human auditing (careful of any obfuscation trickiness). But if humans, or AI, design a new language to write code in that's more efficient for the AI to process, at the cost of legibility to humans, then who knows what the code is really doing or how it's doing it anymore? I suppose comments in code aren't needed by AI, and they could choose to work in a language that's even designed to be unintelligible to hum

    • Yeah, it kind of makes sense. You know, with AI you can type a word or phrase, and it will automatically have the code for making the computer perform many lines of code.

      We really need new structures for this kind of thing. It's completely new in the industry. I propose a name for it: we should call it a function. More encapsulated than a subroutine.
  • I dream of the day that Salesforce is automated away. Glorious.
  • If we get better tools for coding fine. But a chatbot that requires massive amounts of computational resources to do a slightly better job than a web search at best and, at worst, just invents completely wrong code is not it.

    All of this makes it clear that most executives have no idea what software development really takes. Writing code is a small part of it. Editing and rewriting code is a much bigger part of overall development.

    • by gweihir ( 88907 )

      If memory serves, initial code creation to initial release is 20% of overall project effort. Maintenance is 40%. That was 35 years ago. Of course, if you screw up the initial creation by using crappy AI code, that 40% figure may be a bit higher.

  • I have heard about "automatic code generators", "low-code/no-code", "constraint-based programming", etc. for about 35 years now. Never pans out. This guy is either clueless or a liar.

  • So I get the other 6 days off, right?