Programming / AI

Bret Taylor Urges Rethink of Software Development as AI Reshapes Industry

Software development is entering an "autopilot era" with AI coding assistants, but the industry needs to prepare for full autonomy, argues former Salesforce co-CEO Bret Taylor. Drawing parallels with self-driving cars, he suggests the role of software engineers will evolve from code authors to operators of code-generating machines. Taylor, an OpenAI board member who once rewrote Google Maps over a weekend, calls for new programming systems, languages, and verification methods to ensure AI-generated code remains robust and secure. From his post: In the Autonomous Era of software engineering, the role of a software engineer will likely transform from being the author of computer code to being the operator of a code generating machine. What is a computer programming system built natively for that workflow?

If generating code is no longer a limiting factor, what types of programming languages should we build?

If a computer is generating most code, how do we make it easy for a software engineer to verify it does what they intend? What is the role of programming language design (e.g., what Rust did for memory safety)? What is the role of formal verification? What is the role of tests, CI/CD, and development workflows?

Today, a software engineer's primary desktop is their editor. What is the Mission Control for a software engineer in the era of autonomous development?
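
As one purely illustrative answer to the verification question: property-based testing already lets an engineer state intent once, as invariants, and let the machine hunt for counterexamples in code it didn't write. A minimal Python sketch, assuming the hypothesis library is installed; merge_sorted is a hypothetical stand-in for an AI-generated routine:

    # Verifying generated code by stating intent as invariants rather than
    # reading every line. Assumes `pip install hypothesis`.
    from hypothesis import given, strategies as st

    def merge_sorted(a, b):
        """Pretend this body came back from a code-generating machine."""
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]

    @given(st.lists(st.integers()), st.lists(st.integers()))
    def test_merge_matches_intent(a, b):
        result = merge_sorted(sorted(a), sorted(b))
        assert result == sorted(a + b)          # order and contents are right
        assert len(result) == len(a) + len(b)   # nothing dropped or invented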


  • Boeing or Airbus autopilot mode?

  • I suppose I'll be left in the dust, but I view AI as maybe an additional tool for finding code snippets to accomplish a specific task. Software architecture is not something I'll use AI for. I'm too much of a control freak for that.
    • by Lisias ( 447563 ) on Wednesday December 25, 2024 @12:46PM (#65038669) Homepage Journal

      Agreed.

      AI is a tool for people who can't use Google to find code on StackOverflow.

      Problem: not knowing how to google things also means an inability to detect when the AI is hallucinating, which will create pretty interesting (besides annoying) consequences in the field.

      People who know how to code will make some serious bucks overcharging to fix the AI-created mess.

      On the dark side: the current generation of skilled developers will have absolutely no incentive to train the next generation (it's the other way around), and so in a few years we will have some serious problems.

      • by narcc ( 412956 )

        The past couple generations of "developers" are almost ashamed to write code. C is considered a "low-level" language. The concept of memory addresses is considered an "advanced topic" these days. 20 years of Agile nonsense actively discouraging planning. Python. ...

        Yeah, the software industry is in real trouble. Not that anyone paying even a little bit of attention over the past 30 years didn't see this coming ages ago. Once CS programs turned into glorified programming bootcamps in the late '90s, that

      • On the dark side: the current generation of skilled developers will have absolutely no incentive to train the next generation (it's the other way around), and so in a few years we will have some serious problems.

        This might be the biggest challenge arising from modern AI tools. They make half-decent replacements for junior developers, and with the market dynamics of the past 10-15 years, hiring junior developers was already a questionable idea for most businesses. So if hiring good seniors and providing powerful workstations with AI-driven tools becomes SOP, then where will the next generation of good seniors come from in 2030 or 2035?

        However, so far I don't see much evidence that these tools will pose a credible thre

    • by HiThere ( 15173 )

      That's what it is *now*. What should you prepare for "in the next few years"?
      FWIW, I expect that as long as LLMs are the driver, what you'll get is something that is good at writing snippets. But I'm also convinced that there's nothing that says LLMs need to be more than a part of the AI. How much learning (by what kind of device) is needed to "underpin" an LLM before you get something that understands at least a bit of non-verbal context? (FWIW, I expect we've already got to the "at least a tiny bit" sta

  • Clueless CxO (Score:5, Insightful)

    by bradley13 ( 1118935 ) on Wednesday December 25, 2024 @11:22AM (#65038515) Homepage

    Copilot, ChatGPT & other LLMs do decent autocomplete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.

    I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.

    The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.

    • by dgatwood ( 11270 )

      Copilot, ChatGPT & other LLMs do decent autocomplete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.

      I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.

      The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.

      Agreed. On the flip side, it makes interviewing harder, because LLMs can do all those LeetCode problems, kind of. Throw real code at them and they occasionally do something useful, but usually produce something that bears little resemblance to actual code. If we could find a way to reliably do that, great, but the flip side is that they'll probably learn from the specific examples and learn to do those well, resulting in a need to constantly switch examples to stay ahead of the automation. And at the

    • Copilot, ChatGPT & other LLMs do decent autocomplete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.

      I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.

      The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.

      Your comments raise some valid points about the limitations of current AI tools, such as Copilot and ChatGPT, in understanding requirements or system design. However, this critique seems to miss the broader perspective of the article, which is not about overestimating the current capabilities of AI but rather exploring the transformative potential of autonomous tools in the future.

      The article does not suggest that AI today can fully grasp system design or requirements. Instead, it speculates on what tools b

      • by Anonymous Coward

        ooh nice use of LLMs to make comments when you cba

    • If you know anything about how Salesforce is doing right now, you know this guy is the real tool. That dumpster fire of a company seems to trundle on only because they've bought (and ruined) so many better products.

  • If there wasn't money on the table, no one would care. These people don't make anything and never will. Creation is the tool of man alone.
    • Charge ChatGPT for the massive increase in fossil fuel energy generation, or better yet, pass a law requiring data centers NOT to use the grid and to use only onsite energy generation, and the entire exercise will be revenue negative.

      The only reason OpenAI can exist is because it is stealing money from grid tied energy customers and causing massive inflation in your electric bill.

      • ChatGPT uses far less fuel and generates far less carbon than what it would take to support a human worker doing the same task.

        • by HiThere ( 15173 )

          Got any evidence that is true? I haven't seen any. Definitely "a human worker" could not do the same tasks, because nobody can respond that fast. If you scale it to tasks a human worker could do, then ChatGPT produces a far inferior product, and the "costs to train and use" it are way over the top. But scaling is probably not a reasonable way to think in this area.

          That said, I'm convinced that the LLM-is-all approach to AI is extremely limited. Get a robot, even a robot dog, working with an LLM and you

        • You'll have some figures to back that up then.

    • Creation is the tool of man alone.

      Ok, well consider AI a tool that man created to do some of that creation for him.

    • People do creation, not just men.

  • Sigh (Score:5, Insightful)

    by 26199 ( 577806 ) on Wednesday December 25, 2024 @11:39AM (#65038525) Homepage

    Generating code is not at all the limiting factor. Developers do not spend even 20% of their time typing new code.

    So, good luck with that.

    • by leptons ( 891340 )
      > Developers do not spend even 20% of their time typing new code.

      That sounds like a generalization that wouldn't hold up to scrutiny. Sure, some developers probably spend 1% of their time writing new code. But other developers may spend 100% of their time writing new code. You don't get to throw out a number like this without providing a citation.
  • AI tools are not accurate enough to replace software developers; right now the accuracy of LLM answers is in the 70 to 85 percent range. From what I've read, there isn't a good pathway forward to fix that problem. So AI will remain a tool.

  • The result will be continuous bugs caused by hallucination. I think OpenAI needs to be charged a huge CO2 generation fee for the bloatware they've put out on the market, and also needs a large fine for every wrong answer their AIs generate.

  • by oldgraybeard ( 2939809 ) on Wednesday December 25, 2024 @12:08PM (#65038581)
    Wrong vision! "The operator of a code generating machine" is equivalent to ancient times, when a college student on third shift spent the night at the computer center dropping card decks into the mainframe, tearing off the printouts, and wrapping them around the card deck with a rubber band while studying.

    The more likely vision for AI when it comes to programming is software engineers writing very precise software specifications, which AI converts into applications or chunks of applications, which software engineers then test and debug using a precise test specification and a different set of AI tools.

    The concept that anyone can code is the same as anyone can pound a nail. Net result, a lot of bent nails.
    Bring in a nail gun and the nails get driven in straight, but the gun has to be positioned for each nail before driving it.
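
    A minimal sketch of that spec-first loop, with hypothetical names throughout: the engineer owns the specification and the acceptance test, the generator owns only the returned body, and nothing is accepted until the code passes the spec.

        # Spec-first workflow sketch: the engineer writes the spec and checks;
        # generated_impl stands in for whatever the code generator returns.
        SPEC = "slugify(title): lowercase, hyphen-separated, alphanumeric-only"

        def acceptance(slugify):
            cases = {
                "Hello, World!": "hello-world",
                "  spaced   out  ": "spaced-out",
                "Already-Clean": "already-clean",
            }
            return all(slugify(src) == want for src, want in cases.items())

        def generated_impl(title):
            # Hypothetical machine-produced body, prompted with SPEC.
            words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
            return "-".join(words)

        # The engineer's role shifts from authoring to gating:
        assert acceptance(generated_impl), "generated code fails the spec"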
    • well put... good analogy... and stories about the night shift and decks of cards.... your beard must be very long indeed...

      I had several friends in my youth who went into electrical engineering, and what they related is much like what you are saying. I got the impression that no one actually "did" hardware after a certain point; they just used software tools which automated much of their drudgery, then did a lot of quality control.
      • by HiThere ( 15173 )

        Well, when I went to school there were still a few EAM machines, with patchboards. The most common applications had patchboards with covers over most of the wires, sometimes over all of them. But if a knowledgeable person had special needs he would wire his own patchboard. They were really quite flexible within their domain, and totally useless outside it. And the domains were SMALL. The card sorter could be programmed to sort on any columns of the cards of the deck, either ascending, descending, or a m

  • by Big Hairy Gorilla ( 9839972 ) on Wednesday December 25, 2024 @12:17PM (#65038601)
    I once rewrote Google Maps before I had coffee.
    Everyone believes me, right?
    • by Big Hairy Gorilla ( 9839972 ) on Wednesday December 25, 2024 @12:31PM (#65038637)
      It's as vague as possible. This guy is an industry luminary? Sounds like he's just feathering his nest.

      Here's my unwanted 2 cents. In a stroke of irony, creative work like audio, video, music, and art is well within reach of current generative AI. Translators are already out of work; soon "journalists", writers, commercial artists, and musicians will all be unemployed... it's cheaper to use AI, and the people who use it are completely tasteless; they will get pure lowest-common-denominator stuff and be super pleased with it.

      What this guy is talking about requires multi-domain knowledge... the area where AI is bad. AI is going to need a quantum leap (apologies) in capability, which does not appear to be coming any time soon, or possibly at all. I'm not buying it. But tasteless MBA groupthink management will also use it and be super pleased with it... so buckle up... also, hopefully, as pointed out above, Boeing doesn't go all in on this bullshit, or seatbelts won't be of much use. Quality of output is no longer a measure. Even causing death isn't that big a deal anymore.
      • by HiThere ( 15173 )

        Yeah, "quantum leap" is the wrong term. That's as small a jump as is possible. But the idea of a discontinuity is correct. LLMs have definite limits. But LLMs aren't all of AI, they're just the part that juggles words. They're a necessary component of an AI that deals with people, but they don't understand non-verbal context. However there are existing AIs that do handle (limited) contexts. Merge one of them with an LLM, and you'll have something that can talk about a particular domain, and understa

      • by leptons ( 891340 )
        >creative work like audio, video, music, and art is well within reach of current generative AI.

        No, it's not.

        >it's cheaper to use AI, and the people who use it are completely tasteless; they will get pure lowest-common-denominator stuff and be super pleased with it.

        We've had completely tasteless human writers, and movies made with absolutely no redeeming value - so bad you have to wonder how people spent millions making drivel. They often lose millions; even the lowest-budget, shittiest movies are luc
  • "Former Salesforce CEO" -- NOBODY

    "Suggests" -- doesn't even "say", "imply", "puts forward", "funds", -- NOTHING

    "Blah blah blah"

    Imagine if Salesforce were like a THING, so that its former CEO wanking off would be a thing. But it's not.

    Next?

  • Predicts automation tools will dominate industry. More at 11.
  • We already operate code generating machines known as "compilers". Generative AI may enable a higher level of abstraction between a developer's intentions and executable binary code, but the developer's role is fundamentally the same.
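
    For what it's worth, the parent's point can be made literal in a few lines: Python's own compiler is already a code-generating machine whose output almost nobody reviews instruction by instruction. A small sketch using only the standard library's dis module:

        # A compiler is already a code-generating machine: the developer
        # states intent, and unreviewed instructions are generated beneath it.
        import dis

        def total(prices):                 # the stated intent
            return sum(p * 1.2 for p in prices)

        dis.dis(total)                     # the machine-written code beneath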
  • by rocket rancher ( 447670 ) <themovingfinger@gmail.com> on Wednesday December 25, 2024 @12:46PM (#65038673)

    Bret Taylor's article raises some highly engaging and thought-provoking questions about the role of AI in software engineering. The parallels drawn between the evolution of software engineering and the advent of autonomous systems resonate deeply. AI's role in this future should be viewed through a collaborative lens rather than as a competitor to human creativity -- similar to how artists and musicians are beginning to embrace AI as a tool that augments, rather than replaces, their creative processes.

    For instance, the questions about verifying AI-generated code and designing new programming languages could lead to a new interface paradigm for software engineers. Such an interface might integrate real-time verification, intuitive visualizations, and explainable AI outputs, allowing engineers to operate as curators and architects of code, much like conductors guiding an orchestra.

    The potential for creating robust, verifiably correct software aligns with AI's transformative power to elevate human ingenuity. Software engineers, like other creative professionals, could offload repetitive tasks to AI while focusing on high-level design, problem-solving, and innovation -- areas where human creativity excels.

    His call for ambition in designing the new paradigm strikes a chord. Just as artists use AI to push the boundaries of their mediums, engineers could leverage AI to redefine software systems, prioritizing security, efficiency, and scalability. Why not aim for a future where every piece of software is as artfully constructed and harmoniously balanced as a well-crafted symphony?

    This article inspires reflection on how tools -- and roles -- will continue to evolve in the "Autonomous Era." It is an exciting time to consider the possibilities.

    • It's great to be optimistic, I suppose, but your analysis doesn't include the dark side of humanity: greed, stupidity, and laziness. Artists and musicians are being drained of their expertise by training the AIs every time they "collaborate" with them, as they are providing the training data. Then that same expertise is being sold back to them and anyone else, without attribution, and defended as proprietary. AI is definitely replacing artists and musicians. That's the greed part.

      Now the laziness part. The s
  • AI is definitely a development "pivot point", not just a tool boost. It's a moment where the industry has the opportunity to catalyze all the know-how, best practices, test suites, paradigms, and procedures into formal architecture.

    Software development is at the stage classical architecture was at when it founded its five orders (Tuscan, Doric, Ionic, Corinthian, and Composite). Centuries later nothing has changed in architecture's five orders, and that is the kind of lasting order software is likely to see with AI and quantum.

    • by gtall ( 79522 )

      Too bad Slashdot cannot recognize bots generating comments like the parent comment.

    • There are more types of pillars, but people just got tired of naming them. For example, which of the classic five would this be? Or this one? [app.goo.gl] Or this strange but beautiful thing? [app.goo.gl]

      The other thing that happened is people came up with new types and named them an order, but doing so adds no practical value beyond the originals. Nonce orders are kind of cool but categorizing them doesn't do much pedagogically.
  • Computer programming was originally bit manipulation and has moved toward higher levels of abstraction over the decades. I expect that traditional programming languages will fade away and be replaced with specification languages. There will be some templates you use to describe what you want - first in general terms to generate an overall solution framework and an initial prototype, and then in increasing detail so that the final product gets iteratively ironed out. Even now we have "user stories" that tell

      • Variants of 4GLs have been around for quite some time, and according to Wikipedia, "In the 1980s and 1990s, there were efforts to develop fifth-generation programming languages (5GL)". But I think things are moving far beyond that now.
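
        SQL is the textbook example of that 4GL idea: the program states what result is wanted and the engine decides how to produce it. A small self-contained sketch using Python's built-in sqlite3 module, with made-up table and data:

            # Declarative "specification" style: no loops, no algorithm, just
            # the goal; the engine picks how to scan, group, and sum.
            import sqlite3

            db = sqlite3.connect(":memory:")
            db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
            db.executemany("INSERT INTO orders VALUES (?, ?)",
                           [("alice", 30.0), ("bob", 12.5), ("alice", 7.5)])

            for customer, total in db.execute(
                    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"):
                print(customer, total)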

    • User stories are great for developing test cases, and figuring out what should be emphasized in an AI, but they absolutely suck for creating a versatile, elegant product. The natural design leans towards everything being like a Microsoft Wizard. Great if you want to follow that story exactly, but if you want to combine things in unique ways, you can't. You have to follow the story.
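
      The "one scripted path" complaint is easy to see in code: a user story translates almost mechanically into a single test case, and the design that falls out supports exactly that path. A hedged sketch around a hypothetical ShoppingCart:

          # Story: "As a shopper, I can apply a discount code so my total drops."
          # The story becomes one test -- and nudges the design toward one path.
          class ShoppingCart:
              def __init__(self):
                  self.items = []
                  self.discount = 0.0

              def add(self, price):
                  self.items.append(price)

              def apply_code(self, code):
                  if code == "SAVE10":       # only the path the story describes
                      self.discount = 0.10

              def total(self):
                  return round(sum(self.items) * (1 - self.discount), 2)

          def test_shopper_applies_discount_code():
              cart = ShoppingCart()
              cart.add(20.00)
              cart.apply_code("SAVE10")
              assert cart.total() == 18.00   # the story's acceptance criterion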
  • In addition to serving as the Board Chair [openai.com] of for-profit-status-investigating OpenAI [citizen.org], Bret Taylor is also co-founder of Sierra [sierra.ai], "the conversational AI platform for businesses."

  • Once upon a time programmers struggled with writing machine code. Then some clever people came up with a system for specifying what you needed in almost plain English, and the computer would do the rest. It was called COBOL, and coding was never quite the same. But still, programmers were badly needed, and once everyone was on board with these new "plain English" ways of coding, the requirements just escalated, so we still had more than enough work to do. After COBOL, ALGOL, and FORTRAN, there came new

  • At least if AI codes in current languages designed for human use, there's hope of it being legible to human auditing (careful of any obfuscation trickiness). But if humans, or AI, design a new language to write code in that's more efficient for the AI to process, at the cost of legibility to humans, then who knows what the code is really doing or how it's doing it anymore? I suppose comments in code aren't needed by AI, and they could choose to work in a language that's even designed to be unintelligible to hum

    • Yeah, it kind of makes sense. You know, with AI you can type a word or phrase, and it will automatically generate the code to make the computer perform many lines' worth of work.

      We really need new structures for this kind of thing. It's completely new in the industry. I propose a name for it: we should call it a function. More encapsulated than a subroutine.
  • I dream of the day that Salesforce is automated away. Glorious.
