Bret Taylor Urges Rethink of Software Development as AI Reshapes Industry
Software development is entering an "autopilot era" with AI coding assistants, but the industry needs to prepare for full autonomy, argues former Salesforce co-CEO Bret Taylor. Drawing parallels with self-driving cars, he suggests the role of software engineers will evolve from code authors to operators of code-generating machines. Taylor, an OpenAI board member who once rewrote Google Maps over a weekend, calls for new programming systems, languages, and verification methods to ensure AI-generated code remains robust and secure. From his post: In the Autonomous Era of software engineering, the role of a software engineer will likely transform from being the author of computer code to being the operator of a code generating machine. What is a computer programming system built natively for that workflow?
If generating code is no longer a limiting factor, what types of programming languages should we build?
If a computer is generating most code, how do we make it easy for a software engineer to verify it does what they intend? What is the role of programming language design (e.g., what Rust did for memory safety)? What is the role of formal verification? What is the role of tests, CI/CD, and development workflows?
Today, a software engineer's primary desktop is their editor. What is the Mission Control for a software engineer in the era of autonomous development?
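As one concrete illustration of the verification question (mine, not Taylor's): property-based testing lets the human author the properties while a machine authors the implementation. A minimal Python sketch, where ai_sort stands in for a hypothetical machine-generated function and hypothesis is the real property-testing library:

from hypothesis import given, strategies as st

def ai_sort(xs):
    # Pretend this body was produced by a code-generating machine.
    return sorted(xs)

@given(st.lists(st.integers()))
def test_ai_sort_meets_spec(xs):
    result = ai_sort(xs)
    # Human-authored spec: same elements as the input, in nondecreasing order.
    assert sorted(xs) == result
    assert all(a <= b for a, b in zip(result, result[1:]))

The point of the sketch is the division of labor: the engineer never reads the generated body, only the properties it must satisfy.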
boeing or airbus autopilot mode? (Score:2)
Same as it ever was (Score:2)
Re:Same as it ever was (Score:5, Insightful)
Agreed.
AI is a tool for people who can't use Google to find code on StackOverflow.
Problem: not knowing how to google things also means an inability to detect when the AI is hallucinating, which will create pretty interesting (besides annoying) consequences in the field.
People who know how to code will make serious bucks overcharging to fix the AI-created mess.
On the dark side: the current generation of skilled developers will have absolutely no incentive to train the next generation (it's the other way around), and so in a few years we will have some serious problems.
Re: (Score:2)
The past couple generations of "developers" are almost ashamed to write code. C is considered a "low-level" language. The concept of memory addresses is considered an "advanced topic" these days. 20 years of Agile nonsense actively discouraging planning. Python. ...
Yeah, the software industry is in real trouble. Not that anyone paying even a little bit of attention over the past 30 years didn't see this coming ages ago. Once CS programs turned into glorified programming bootcamps in the late '90s, that
Re: Same as it ever was (Score:2)
Agile most definitely does not discourage planning. If people decide to start a project without knowing what they want to accomplish, then that is called stupidity.
Re: (Score:2)
On the dark side: the current generation of skilled developers will have absolutely no incentive to train the next generation (it's the other way around), and so in a few years we will have some serious problems.
This might be the biggest challenge arising from modern AI tools. They make half-decent replacements for junior developers, and with the market dynamics of the past 10-15 years, hiring junior developers was already a questionable idea for most businesses. So if hiring good seniors and providing powerful workstations with AI-driven tools becomes SOP, then where will the next generation of good seniors come from in 2030 or 2035?
However, so far I don't see much evidence that these tools will pose a credible thre
Re: (Score:2)
The guy's apparently a developer by background, so surely he knows what compilers and interpreters are. Almost all software developers have been operators of code generators since somewhere in the middle of the 20th century. The trick has been, and still will be, to make the specification of the behaviour you want clear enough for those code generators to do what you actually want them to do.
I'd say compilers are better. With LLMs, even if you make it perfectly clear what you want, they will sometimes interpret it some other way. Plus you have to use English, and it's very hard to be exact in a natural language.
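A toy illustration of that imprecision (my example, not the commenter's): the English request "sort the users by name" leaves open decisions that any programming language forces you to make explicit.

# "Sort the users by name" hides at least three choices the code must make:
users = [("Ada", "Lovelace"), ("Alan", "Turing"), ("alan", "kay")]

by_first = sorted(users, key=lambda u: u[0])                  # first name or last?
by_last = sorted(users, key=lambda u: u[1])
folded = sorted(users, key=lambda u: u[1].casefold())         # case-sensitive or not?
descending = sorted(users, key=lambda u: u[1], reverse=True)  # which direction?

Each variant is a defensible reading of the same English sentence, which is exactly the kind of ambiguity a compiler never has to guess about.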
Re: (Score:2)
> AI is a tool for people who can't use Google to find code on StackOverflow.
I see these sorts of ideas pushed on Slashdot quite often. I can only hope that people pushing these ideas are not programmers themselves. If they are, they are in for a rude awakening.
AI is much much more than Googling to find something on StackOverflow. A lot more. For example, I can feed it a large body of code and ask it to help me find potential issues. And it does a reasonably good job of it. It used to not be able
Re: (Score:2)
That's what it is *now*. What should you prepare for "in the next few years"?
FWIW, I expect that as long as LLMs are the driver, what you'll get is something that is good at writing snippets. But I'm also convinced that there's nothing that says LLMs need to be more than a part of the AI. How much learning (by what kind of device) is needed to "underpin" an LLM before you get something that understands at least a bit of non-verbal context? (FWIW, I expect we've already got to the "at least a tiny bit" sta
Re: (Score:2)
LLMs cannot do software architecture at all. For that you have to understand quite a few things and "understanding" is not something any AI can do today and something LLMs will never be capable of.
Clueless CxO (Score:5, Insightful)
Copilot, ChatGPT & other LLMs do decent auto complete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.
I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.
The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.
Re: (Score:3)
Copilot, ChatGPT & other LLMs do decent auto complete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.
I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.
The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.
Agreed. On the flip side, it makes interviewing harder, because LLMs can do all those LeetCode problems, kind of. Throw real code at them and they occasionally do something useful but usually produce something that bears little resemblance to actual code, so if we could find a way to reliably test with real code, great; but the flip side is that they'll probably learn from the specific examples and learn to do those well, resulting in a need to constantly switch examples to stay ahead of the automation. And at the
Re: (Score:3)
Copilot, ChatGPT & other LLMs do decent auto complete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.
I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.
The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.
Your comments raise some valid points about the limitations of current AI tools, such as Copilot and ChatGPT, in understanding requirements or system design. However, this critique seems to miss the broader perspective of the article, which is not about overestimating the current capabilities of AI but rather exploring the transformative potential of autonomous tools in the future.
The article does not suggest that AI today can fully grasp system design or requirements. Instead, it speculates on what tools b
Re: (Score:1)
ooh nice use of LLMs to make comments when you cba
Re: (Score:2)
Can you please stop posting LLM generated crap?
Re: Clueless CxO (Score:1)
If you know anything about how Salesforce is doing right now, you know this guy is the real tool. That dumpster fire of a company seems to trundle on only because they've bought (and ruined) so many better products.
Re: (Score:2)
I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.
And that is just it. If it is common enough that an LLM can (usually) do it well, it is common enough to pack it in a library. And that comes without all the problems LLM generated code has.
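A small illustration of the parent's point (my example): "group records by key" is exactly the kind of snippet LLMs re-derive endlessly, yet the standard library already packages it.

from collections import defaultdict
from itertools import groupby

records = [("fruit", "apple"), ("veg", "carrot"), ("fruit", "pear")]

# The loop an LLM will happily regenerate every time it's asked:
by_kind = defaultdict(list)
for kind, name in records:
    by_kind[kind].append(name)

# The already-packaged equivalent (groupby needs input sorted by the key):
by_kind2 = {k: [name for _, name in grp]
            for k, grp in groupby(sorted(records), key=lambda r: r[0])}

The library version is tested once and reused everywhere; the regenerated loop has to be re-reviewed every time it is emitted.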
class war (Score:1)
Re: (Score:3)
Charge ChatGPT for the massive increase in fossil fuel energy generation, or better yet, pass a law requiring data centers to NOT use the grid and to use only onsite energy generation, and the entire exercise will be revenue negative.
The only reason OpenAI can exist is because it is stealing money from grid tied energy customers and causing massive inflation in your electric bill.
Re: (Score:1)
ChatGPT uses far less fuel and generates far less carbon than what it would take to support a human worker doing the same task.
Re: (Score:2)
Got any evidence that is true? I haven't seen any. Definitely "a human worker" could not do the same tasks, because nobody can respond that fast. If you scale it to tasks a human worker could do, then ChatGPT produces a far inferior product, and the "costs to train and use" it are way over the top. But scaling is probably not a reasonable way to think in this area.
That said, I'm convinced that the LLM-is-all approach to AI is extremely limited. Get a robot, even a robot dog, working with an LLM and you
Re: (Score:2)
Sure, back of the napkin: the cheapest human software developer costs something like 500 USD a month. If we assume that guy is working 9-hour days and a month averages 22 workdays, that's 198 hours at about $2.50 an hour, but it doesn't include the electricity to run their computer or any of the other stuff like an office and a desk or paying the chai guy. The rule of thumb here is that we double the salary to get the total cost, so we bump that up to $5. That worker could do something like 5 chatGPT-li
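Spelling out that napkin math (the dollar figures are the commenter's assumptions, not data):

monthly_cost = 500                    # USD/month, commenter's assumed salary
hours = 22 * 9                        # 22 workdays x 9-hour days = 198 hours
hourly = monthly_cost / hours         # ~2.53 USD/hour
loaded = 2 * hourly                   # rule of thumb: double for overhead, ~5.05
print(hours, round(hourly, 2), round(loaded, 2))   # 198 2.53 5.05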
Re: class war (Score:2)
You'll have some figures to back that up then.
Re: (Score:2)
Sure, but the leap of faith here is energy value theory, since that's the only way we have to measure the total carbon footprint of the human worker.
Loop Hole (Score:2)
creation is the tool of man alone.
Ok, well consider AI a tool that man created to do some of that creation for him.
Re: class war (Score:3)
People do creation, not just men.
Sigh (Score:5, Insightful)
Generating code is not at all the limiting factor. Developers do not spend even 20% of their time typing new code.
So, good luck with that.
Re: (Score:2)
That sounds like a generalization that wouldn't hold up to scrutiny. Sure, some developers probably spend 1% of their time writing new code. But other developers may spend 100% of their time writing new code. You don't get to throw out a number like this without providing a citation.
Re: Sigh (Score:2)
I've worked on internal developer tooling at a large company and so had access to lots of data--you have to know how much people are using the tooling to know how much you can gain by improving it. So, I know what I'm talking about :) but no citations possible, sorry.
Re: (Score:2)
But other developers may spend 100% of their time writing new code.
Please tell me who these developers are so I can avoid them. If they're not spending at least 20% of their time understanding and clarifying requirements, then whatever they wrote will be completely useless to the customer. That's assuming the code even works, you didn't give them any time to test or fix bugs. And this code will be unmaintainable because there was no time spent on design or architecture. Nor will it work together with other preexisting systems. So it's basically useless until a competent de
No and no (Score:2)
AI tools are not accurate enough to replace software developers; right now the accuracy of LLM answers is in the 70 to 85 percent range. From what I've read there isn't a good pathway forward to fix that problem. So AI will remain a tool.
Re: (Score:2)
The accuracy of ChatGPT went DOWN; it's now in the 60% range.
Re: (Score:2)
Interesting. Any explanation as to why? Model collapse will eventually drive it even lower, but I would think the effect should not yet be that bad.
Continuous Integration of AI generated Code (Score:3)
Is continuous bugs caused by hallucination. I think OpenAI needs to be charged a huge CO2 generation fee for the bloatware they've put out on the market, and also needs a large fine for every wrong answer their AIs generate.
the operator of a code generating machine (Score:3)
I spent the night at the computer center dropping card decks into the mainframe and tearing off the printouts and wrapping them around the card deck with a rubber band while studying.
The more likely vision for AI when it comes to programming is software engineers writing very precise software specifications.
Which AI converts into applications or chunks of applications.
Which software engineers then test and debug using a precise test specification and a different set of AI tools.
The concept that anyone can code is the same as the concept that anyone can pound a nail. Net result: a lot of bent nails.
Bring in a nail gun and the nails get driven in straight, but the gun has to be positioned for each nail before driving it.
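One way to picture that split, as a rough sketch (my rendering, not the commenter's): the engineer writes the specification as executable checks, and the AI fills in the implementation behind it.

# Human-written specification: the contract any generated implementation must meet.
def spec_apply_discount(price: float, percent: float) -> None:
    result = apply_discount(price, percent)   # hypothetical AI-generated function
    assert 0.0 <= result <= price             # a discount never raises the price
    assert abs(result - price * (1 - percent / 100)) < 1e-9

# One candidate body the AI might produce:
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

spec_apply_discount(100.0, 25.0)              # the engineer runs the spec, not the diff

The nail-gun analogy holds: the spec is how the engineer positions the gun before each nail gets driven.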
Re: (Score:2)
I had several friends in my youth that went into electrical engineering and what they related is much like what you are saying. I got the impression that no-one actually "did" hardware after a certain point, they just used software tools which automated much of their drudgery, then did a lot of quality control.
Re: (Score:2)
Well, when I went to school there were still a few EAM machines, with patchboards. The most common applications had patchboards with covers over most of the wires, sometimes over all of them. But if a knowledgeable person had special needs he would wire his own patchboard. They were really quite flexible within their domain, and totally useless outside it. And the domains were SMALL. The card sorter could be programmed to sort on any columns of the cards in the deck, either ascending, descending or a m
that's nothing (Score:3)
Everyone believes me, right?
I actually RTFA, and (Score:5, Insightful)
Here's my unwanted 2 cents. In a stroke of irony, creative work like audio, video, music, and art is well within reach of current generative AI. Translators are already out of work; soon "journalists", writers, commercial artists and musicians will all be unemployed... it's cheaper to use AI, and the people who use it are completely tasteless; they will get pure lowest-common-denominator stuff and be super pleased with it.
What this guy is talking about requires multi-domain knowledge... the place AI is bad at. AI is going to need a quantum leap (apologies) in capability, which does not appear to be coming any time soon, or possibly anytime at all. I'm not buying it. But tasteless MBA groupthink management will also use it and be super pleased with it... so buckle up... also, hopefully, as pointed out above, Boeing doesn't go all in on this bullshit, or seatbelts won't be of much use. Quality of output is no longer a measure. Even causing death isn't that big a deal anymore.
Re: (Score:2)
Yeah, "quantum leap" is the wrong term. That's as small a jump as is possible. But the idea of a discontinuity is correct. LLMs have definite limits. But LLMs aren't all of AI, they're just the part that juggles words. They're a necessary component of an AI that deals with people, but they don't understand non-verbal context. However there are existing AIs that do handle (limited) contexts. Merge one of them with an LLM, and you'll have something that can talk about a particular domain, and understa
Re: (Score:2)
No, it's not.
>it's cheaper to use AI and the people who use it are completely tasteless, they will get pure lowest common denomenator stuff, and be super pleased with it.
We've had completely tasteless human writers, and movies made with absolutely no redeeming value - so bad you have to wonder how people spent millions making drivel. They often lose millions; even the shittiest low-budget movies are luc
NOBODY says NOTHING (Score:1)
"Former Salesforce CEO" -- NOBODY
"Suggests" -- doesn't even "say", "imply", "puts forward", "funds", -- NOTHING
"Blah blah blah"
Imagine if Salesforce was like a THING, such that its former CEO wanking off would be a thing. But it's not.
Next?
Guy selling automation tools (Score:2)
Code generating machines already exist (Score:2)
AI as a collaborator, not as a competitor (Score:4, Interesting)
Bret Taylor's article raises some highly engaging and thought-provoking questions about the role of AI in software engineering. The parallels drawn between the evolution of software engineering and the advent of autonomous systems resonate deeply. AI's role in this future should be viewed through a collaborative lens rather than as a competitor to human creativity -- similar to how artists and musicians are beginning to embrace AI as a tool that augments, rather than replaces, their creative processes.
For instance, the questions about verifying AI-generated code and designing new programming languages could lead to a new interface paradigm for software engineers. Such an interface might integrate real-time verification, intuitive visualizations, and explainable AI outputs, allowing engineers to operate as curators and architects of code, much like conductors guiding an orchestra.
The potential for creating robust, verifiably correct software aligns with AI's transformative power to elevate human ingenuity. Software engineers, like other creative professionals, could offload repetitive tasks to AI while focusing on high-level design, problem-solving, and innovation -- areas where human creativity excels.
His call for ambition in designing the new paradigm strikes a chord. Just as artists use AI to push the boundaries of their mediums, engineers could leverage AI to redefine software systems, prioritizing security, efficiency, and scalability. Why not aim for a future where every piece of software is as artfully constructed and harmoniously balanced as a well-crafted symphony?
This article inspires reflection on how tools -- and roles -- will continue to evolve in the "Autonomous Era." It is an exciting time to consider the possibilities.
Re: AI as a collaborator, not as a competitor (Score:2)
I assure you: the best bullshit is technical bullshit. And don't knock the humanities: pretty much every enjoyment you have ever had from art, writing, or music comes from those branches of study. And, though you probably don't believe, that work is advanced through deconstruction, intersectionality, and a host of other terms that seem meaningless to someone not actually working in the field.
Re: (Score:2)
I think the best bullshit of the humanities by far exceeds "technical bullshit". In technical fields, you can dig
Re: AI as a collaborator, not as a competitor (Score:2)
"Let's argue about the meaning of "best"." :-)
To do that, you will need some terms from the humanities.
Re: (Score:2)
Oh, I'm not disagreeing, when I got annoyed I used to bullshit my clients just to shut them up and end the meeting early.. a bunch of university professors in the arts... bring up "database" and they get all confused and start asking questi
Re: (Score:3)
Now the laziness part. The s
Re: (Score:2)
And what LLM did generate this fake "comment"?
Forward thinking (Score:2)
AI is definitely a development "pivot point", not just a tool boost. It's a moment when the industry has the opportunity to catalyze all the know-how, best practices, test suites, paradigms and procedures into formal architecture.
Software development is at the stage where Architecture founded its five orders (Tuscan, Doric, Ionic, Corinthian, and Composite). Centuries later nothing has changed in Architecture's five orders, but that is the kind of order software is likely to see huge improvements on with AI and quantum
Re: (Score:2)
Too bad Slashdot cannot recognize bots generating comments like the parent comment.
Re: (Score:2)
The other thing that happened is people came up with new types and named them an order, but doing so adds no practical value beyond the originals. Nonce orders are kind of cool but categorizing them doesn't do much pedagogically.
The specification language (Score:2)
Computer programming was originally bit manipulation and has moved toward higher levels of abstraction over the decades. I expect that traditional programming languages will fade away and be replaced with specification languages. There will be some templates you use to describe what you want - first in general terms to generate an overall solution framework and an initial prototype, and then in increasing detail so that the final product gets iteratively ironed out. Even now we have "user stories" that tell
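A guess at what one of those templates might look like (purely illustrative, not from the comment): a declarative "what, not how" description that a generator could consume and an engineer could review.

# Hypothetical specification template; the field names are invented for illustration.
spec = {
    "feature": "password reset",
    "actors": ["registered user"],
    "flow": [
        "user requests a reset link by email",
        "system sends a single-use token valid for 15 minutes",
        "user sets a new password meeting the 'min 12 chars' policy",
    ],
    "acceptance": [
        "token cannot be reused",
        "old password no longer authenticates",
    ],
}

The general-terms pass would generate the framework from "feature" and "flow"; the iterative passes would tighten the "acceptance" list until the prototype matches intent.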
Re: (Score:2)
Variants of 4GL have been around for quite some time, and according to Wikipedia, "In the 1980s and 1990s, there were efforts to develop fifth-generation programming languages (5GL)". But I think things are moving far beyond that now.
Re: (Score:2)
Hmm. When have I read this before? Oh, yes! When I studied CS 35 years ago, the 5GL project had just completely failed. It pretty much made the promises you just described.
In actual reality, what you wrote is wishful thinking. We are nowhere near anything that could be turned into reality, and it is unclear whether that will ever be possible. One exception: you can turn some forms of formal specification into code automatically. Worked well 35 years ago. Why did it not catch on? Simple: writing a formal spec is much,
News Alert: OpenAI Board Chair Bullish on AI (Score:2)
In addition to serving as the Board Chair [openai.com] of for-profit-status-investigating OpenAI [citizen.org], Bret Taylor is also co-founder of Sierra [sierra.ai], "the conversational AI platform for businesses."
Seen it before (Score:2)
Once upon a time programmers struggled with writing machine code. Then some clever people came up with a system for specifying what you needed in almost plain English, and the computer would do the rest. It was called COBOL, and coding was never quite the same. But still, programmers were badly needed, and once everyone was on board with these new, modern "plain English" ways of coding, the requirements just escalated so we still had more than enough work to do. After COBOL, ALGOL, and FORTRAN, there came new
Re: (Score:2)
Indeed. Writing the code is a minor part of software creation. That has been known for _ages_. This Taylor person is clueless or a liar.
An AI first coding language? (Score:2)
At least if AI codes in current languages designed for human use, there's hope of it being legible to human auditing (careful of any obfuscation trickiness). But if humans, or AI, design a new language to write code in that's more efficient for the AI to process, at the cost of legibility to humans, then who knows what the code is really doing or how it's doing it anymore? I suppose comments in code aren't needed by AI, and they could choose to work in a language that's even designed to be unintelligible to hum
Re: (Score:2)
We really need new structures for this kind of thing. It's completely new in the industry. I propose a name for it: we should call it a function. More encapsulated than a subroutine.
I also dream of such a day (Score:2)
This bubble can't pop fast enough... (Score:2)
If we get better tools for coding fine. But a chatbot that requires massive amounts of computational resources to do a slightly better job than a web search at best and, at worst, just invents completely wrong code is not it.
All of this makes it clear that most executives have no idea what software development really takes. Writing code is a small part of it. Editing and rewriting code is a much bigger part of overall development.
Re: (Score:2)
If memory serves, initial code creation to initial release is 20% of overall project effort. Maintenance is 40%. That was 35 years ago. Of course, if you screw up the initial creation by using crappy AI code, that 40% figure may be a bit higher.
Great, more clueless bullshit. (Score:2)
I have heard about "automatic code generators", "low-code/no-code", "constraint-based programming", etc. for about 35 years now. Never pans out. This guy is either clueless or a liar.
I can do in a day what took 7? (Score:2)
Did calculators eliminate mathematics? (Score:2)