Bret Taylor Urges Rethink of Software Development as AI Reshapes Industry
Software development is entering an "autopilot era" with AI coding assistants, but the industry needs to prepare for full autonomy, argues former Salesforce co-CEO Bret Taylor. Drawing parallels with self-driving cars, he suggests the role of software engineers will evolve from code authors to operators of code-generating machines. Taylor, an OpenAI board member who once rewrote Google Maps over a weekend, calls for new programming systems, languages, and verification methods to ensure AI-generated code remains robust and secure. From his post: In the Autonomous Era of software engineering, the role of a software engineer will likely transform from being the author of computer code to being the operator of a code generating machine. What is a computer programming system built natively for that workflow?
If generating code is no longer a limiting factor, what types of programming languages should we build?
If a computer is generating most code, how do we make it easy for a software engineer to verify it does what they intend? What is the role of programming language design (e.g., what Rust did for memory safety)? What is the role of formal verification? What is the role of tests, CI/CD, and development workflows?
Today, a software engineer's primary desktop is their editor. What is the Mission Control for a software engineer in the era of autonomous development?
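One concrete shape Taylor's verification question could take: the engineer stops reviewing generated code line by line and instead reviews a small set of machine-checkable properties that encode intent. A minimal Python sketch of that idea, with all names invented for illustration (`dedupe_preserve_order` stands in for a machine-generated function):

```python
# Illustrative sketch: verify AI-generated code against a human-written
# property test instead of reading every line. All names are invented.

import random

def dedupe_preserve_order(items):
    """Pretend this body was produced by a code-generating machine."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_intent(fn, trials=200):
    """Property test encoding the engineer's intent: no duplicates,
    first-seen order kept, no distinct element lost."""
    rng = random.Random(0)  # seeded for reproducible runs
    for _ in range(trials):
        data = [rng.randint(0, 9) for _ in range(rng.randint(0, 20))]
        result = fn(data)
        assert len(result) == len(set(result))           # no duplicates
        assert set(result) == set(data)                  # nothing lost
        assert sorted(result, key=data.index) == result  # order preserved
    return True

print(check_intent(dedupe_preserve_order))  # True if all properties hold
```

The point of the sketch is the division of labor: the generated body can be regenerated freely, while the property test is the durable artifact the engineer owns and reviews.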
boeing or airbus autopilot mode? (Score:2)
Same as it ever was (Score:2)
Re:Same as it ever was (Score:5, Insightful)
Agreed.
AI is a tool for people who can't use Google to find code on StackOverflow.
Problem: not knowing how to google things also means being unable to detect when the AI is hallucinating, which will create some pretty interesting (besides annoying) consequences in the field.
People who know how to code will make serious money overcharging to fix the messes AI creates.
On the dark side: the current generation of skilled developers will have absolutely no incentive to train the next generation (if anything, the incentive runs the other way), and so in a few years we will have some serious problems.
Re: (Score:2)
The past couple generations of "developers" are almost ashamed to write code. C is considered a "low-level" language. The concept of memory addresses is considered an "advanced topic" these days. 20 years of Agile nonsense actively discouraging planning. Python. ...
Yeah, the software industry is in real trouble. Not that anyone paying even a little bit of attention over the past 30 years didn't see this coming ages ago. Once CS programs turned into glorified programming bootcamps in the late 90's, that
Re: Same as it ever was (Score:2)
Agile most definitely does not discourage planning. If people decide to start a project without knowing what they want to accomplish, that is called stupidity.
Re: (Score:2)
On the dark side: the current generation of skilled developers will have absolutely no incentive to train the next generation (it's the other way around), and so in a few years we will have some serious problems.
This might be the biggest challenge arising from modern AI tools. They make half-decent replacements for junior developers, and with the market dynamics of the past 10-15 years, hiring junior developers was already a questionable idea for most businesses. So if hiring good seniors and providing powerful workstations with AI-driven tools becomes SOP, then where will the next generation of good seniors come from in 2030 or 2035?
However, so far I don't see much evidence that these tools will pose a credible thre
Re: (Score:2)
That's what it is *now*. What should you prepare for "in the next few years"?
FWIW, I expect that as long as LLMs are the driver, what you'll get is something that is good at writing snippets. But I'm also convinced that there's nothing that says LLMs need to be more than a part of the AI. How much learning (by what kind of device) is needed to "underpin" an LLM before you get something that understands at least a bit of non-verbal context? (FWIW, I expect we've already got to the "at least a tiny bit" sta
Clueless CxO (Score:5, Insightful)
Copilot, ChatGPT & other LLMs do decent auto complete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.
I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.
The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.
Re: (Score:3)
Copilot, ChatGPT & other LLMs do decent auto complete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.
I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.
The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.
Agreed. On the flip side, it makes interviewing harder, because LLMs can do all those LeetCode problems, kind of. Throw real code at them and they occasionally do something useful, but usually produce something that bears little resemblance to actual code, so if we could find a way to reliably do that, great, but the flip side is that they'll probably learn from the specific examples and learn to do those well, resulting in a need to constantly switch examples to stay ahead of the automation. And at the
Re: (Score:3)
Copilot, ChatGPT & other LLMs do decent auto complete, when the next bit is trivially obvious. It's a nice timesaver. However, they have zero clue about requirements or system design.
I think people tend to overestimate their capabilities, because they can spit out whole solutions to LeetCode problems. People forget that they trained on the solutions posted by dozens of (human) programmers.
The same for semi-standard web development. Low-end template developers may indeed be replaced. Yet-another-WordPress-site? That's not really software development.
Your comments raise some valid points about the limitations of current AI tools, such as Copilot and ChatGPT, in understanding requirements or system design. However, this critique seems to miss the broader perspective of the article, which is not about overestimating the current capabilities of AI but rather exploring the transformative potential of autonomous tools in the future.
The article does not suggest that AI today can fully grasp system design or requirements. Instead, it speculates on what tools b
Re: (Score:1)
ooh nice use of LLMs to make comments when you cba
Re: Clueless CxO (Score:1)
If you know anything about how Salesforce is doing right now, you know this guy is the real tool. That dumpster fire of a company seems to trundle on only because they've bought (and ruined) so many better products
class war (Score:1)
Re: (Score:3)
Charge ChatGPT for the massive increase in fossil fuel energy generation, or better yet, pass a law requiring data centers NOT to use the grid and use only onsite energy generation - and the entire exercise will be revenue negative.
The only reason OpenAI can exist is because it is stealing money from grid tied energy customers and causing massive inflation in your electric bill.
Re: (Score:1)
ChatGPT uses far less fuel and generates far less carbon than what it would take to support a human worker doing the same task.
Re: (Score:2)
Got any evidence that is true? I haven't seen any. Definitely "a human worker" could not do the same tasks, because nobody can respond that fast. If you scale it to tasks a human worker could do, then ChatGPT produces a far inferior product, and the "costs to train and use" it are way over the top. But scaling is probably not a reasonable way to think in this area.
That said, I'm convinced that the LLM-is-all approach to AI is extremely limited. Get a robot, even a robot dog, working with an LLM and you
Re: class war (Score:2)
You'll have some figures to back that up then.
Loop Hole (Score:2)
creation is the tool of man alone.
Ok, well consider AI a tool that man created to do some of that creation for him.
Re: class war (Score:3)
People do creation, not just men.
Sigh (Score:5, Insightful)
Generating code is not at all the limiting factor. Developers do not spend even 20% of their time typing new code.
So, good luck with that.
Re: (Score:2)
That sounds like a generalization that wouldn't hold up to scrutiny. Sure, some developers probably spend 1% of their time writing new code. But other developers may spend 100% of their time writing new code. You don't get to throw out a number like this without providing a citation.
No and no (Score:2)
AI tools are not accurate enough to replace software developers; right now the accuracy of LLM answers is in the 70 to 85 percent range. From what I've read, there isn't a good pathway forward to fix that problem. So AI will remain a tool.
Re: (Score:2)
The accuracy of ChatGPT went DOWN - it's now in the 60% range.
Continuous Integration of AI generated Code (Score:3)
Is continuous bugs caused by hallucination. I think OpenAI needs to be charged a huge CO2 generation fee for the bloatware they've put out on the market, and also needs a large fine for every wrong answer their AIs generate.
the operator of a code generating machine (Score:3)
I spent the night at the computer center dropping card decks into the mainframe and tearing off the printouts and wrapping them around the card deck with a rubber band while studying.
The more likely vision for AI when it comes to programming is software engineers writing very precise software specifications.
Which AI converts into applications or chunks of applications.
Which software engineers then test and debug using a precise test specification and a different set of AI tools.
The concept that anyone can code is the same as anyone can pound a nail. Net result, a lot of bent nails.
Bring in a nail gun and the nails get driven in straight, but the gun has to be positioned for each nail before driving it.
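The spec-first workflow described above can be sketched in miniature: the engineer writes the precise specification as pre- and postconditions, and the function body is the part a code generator would fill in. This is a purely illustrative sketch, not an established API; `spec` and `sort_ints` are invented names:

```python
# Illustrative spec-first sketch: the engineer owns the specification,
# the body stands in for AI-generated code. All names are invented.

from functools import wraps

def spec(pre, post):
    """Attach a machine-checkable specification to a function."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return wrapper
    return decorate

@spec(pre=lambda xs: all(isinstance(x, int) for x in xs),
      post=lambda r, xs: r == sorted(xs))
def sort_ints(xs):
    # Body standing in for AI-generated code; the spec above is what
    # the engineer actually reviews and maintains.
    return sorted(xs)

print(sort_ints([3, 1, 2]))  # [1, 2, 3]
```

The "precise test specification" from the comment plays the same role as `post` here: it is checked on every call, regardless of who, or what, wrote the body.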
Re: (Score:2)
I had several friends in my youth that went into electrical engineering and what they related is much like what you are saying. I got the impression that no-one actually "did" hardware after a certain point, they just used software tools which automated much of their drudgery, then did a lot of quality control.
Re: (Score:2)
Well, when I went to school there were still a few EAM machines, with patchboards. The most common applications had patchboards with covers over most of the wires, sometimes over all of them. But if a knowledgeable person had special needs he would wire his own patchboard. They were really quite flexible within their domain, and totally useless outside it. And the domains were SMALL. The card sorter could be programmed to sort on any columns of the cards of the deck, either ascending, descending or a m
that's nothing (Score:3)
Everyone believes me, right?
I actually RTFA, and (Score:5, Insightful)
Here's my unwanted 2 cents. In a stroke of irony, creative work like audio, video, music, and art is well within reach of current generative AI. Translators are already out of work; soon "journalists", writers, commercial artists and musicians will all be unemployed... it's cheaper to use AI, and the people who use it are completely tasteless; they will get pure lowest common denominator stuff and be super pleased with it.
What this guy is talking about requires multi-domain knowledge... the place where AI is bad. AI is going to need a quantum leap (apologies) in capability, which does not appear to be coming any time soon, or possibly at all. I'm not buying it. But tasteless MBA groupthink management will also use it and be super pleased with it... so buckle up... also, hopefully as pointed out above, Boeing doesn't go all in on this bullshit, or seatbelts won't be of much use. Quality of output is no longer a measure. Even causing death isn't that big a deal anymore.
Re: (Score:2)
Yeah, "quantum leap" is the wrong term. That's as small a jump as is possible. But the idea of a discontinuity is correct. LLMs have definite limits. But LLMs aren't all of AI, they're just the part that juggles words. They're a necessary component of an AI that deals with people, but they don't understand non-verbal context. However there are existing AIs that do handle (limited) contexts. Merge one of them with an LLM, and you'll have something that can talk about a particular domain, and understa
Re: (Score:2)
No, it's not.
>it's cheaper to use AI and the people who use it are completely tasteless, they will get pure lowest common denominator stuff, and be super pleased with it.
We've had completely tasteless human writers, and movies made with absolutely no redeeming value - so bad you have to wonder how people spent millions making drivel. They often lose millions; even the lowest-budget shittiest movies are luc
NOBODY says NOTHING (Score:1)
"Former Salesforce CEO" -- NOBODY
"Suggests" -- doesn't even "say", "imply", "puts forward", "funds", -- NOTHING
"Blah blah blah"
Imagine if Salesforce was like a THING, so that its former CEO wanking off would be a thing. But it's not.
Next?
Guy selling automation tools (Score:2)
Code generating machines already exist (Score:2)
AI as a collaborator, not as a competitor (Score:4, Interesting)
Bret Taylor's article raises some highly engaging and thought-provoking questions about the role of AI in software engineering. The parallels drawn between the evolution of software engineering and the advent of autonomous systems resonate deeply. AI's role in this future should be viewed through a collaborative lens rather than as a competitor to human creativity -- similar to how artists and musicians are beginning to embrace AI as a tool that augments, rather than replaces, their creative processes.
For instance, the questions about verifying AI-generated code and designing new programming languages could lead to a new interface paradigm for software engineers. Such an interface might integrate real-time verification, intuitive visualizations, and explainable AI outputs, allowing engineers to operate as curators and architects of code, much like conductors guiding an orchestra.
The potential for creating robust, verifiably correct software aligns with AI's transformative power to elevate human ingenuity. Software engineers, like other creative professionals, could offload repetitive tasks to AI while focusing on high-level design, problem-solving, and innovation -- areas where human creativity excels.
His call for ambition in designing the new paradigm strikes a chord. Just as artists use AI to push the boundaries of their mediums, engineers could leverage AI to redefine software systems, prioritizing security, efficiency, and scalability. Why not aim for a future where every piece of software is as artfully constructed and harmoniously balanced as a well-crafted symphony?
This article inspires reflection on how tools -- and roles -- will continue to evolve in the "Autonomous Era." It is an exciting time to consider the possibilities.
Re: AI as a collaborator, not as a competitor (Score:2)
I assure you: the best bullshit is technical bullshit. And don't knock the humanities: pretty much every enjoyment you have ever had from art, writing, or music comes from those branches of study. And, though you probably don't believe, that work is advanced through deconstruction, intersectionality, and a host of other terms that seem meaningless to someone not actually working in the field.
Re: (Score:2)
I think the best bullshit of the humanities by far exceeds "technical bullshit". In technical fields, you can dig
Re: AI as a collaborator, not as a competitor (Score:2)
"Let's argue about the meaning of 'best'." :-)
To do that, you will need some terms from the humanities.
Re: (Score:2)
Oh, I'm not disagreeing, when I got annoyed I used to bullshit my clients just to shut them up and end the meeting early.. a bunch of university professors in the arts... bring up "database" and they get all confused and start asking questi
Re: (Score:3)
Now the laziness part. The s
Forward thinking (Score:2)
AI is definitely a development "pivot point", not just a tool boost. It's a moment where the industry has the opportunity to catalyze all its know-how, best practices, test suites, paradigms, and procedures into formal architecture.
Software development is at the stage where Architecture founded its five classical orders (Tuscan, Doric, Ionic, Corinthian, and Composite). Centuries later nothing has changed in Architecture's five orders, but software is likely to see huge improvements of that order with AI and quantum
Re: (Score:2)
Too bad Slashdot cannot recognize bots generating comments like the parent comment.
Re: (Score:2)
The other thing that happened is people came up with new types and named them an order, but doing so adds no practical value beyond the originals. Nonce orders are kind of cool but categorizing them doesn't do much pedagogically.
The specification language (Score:2)
Computer programming was originally bit manipulation and has moved toward higher levels of abstraction over the decades. I expect that traditional programming languages will fade away and be replaced with specification languages. There will be some templates you use to describe what you want - first in general terms to generate an overall solution framework and an initial prototype, and then in increasing detail so that the final product gets iteratively ironed out. Even now we have "user stories" that tell
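The "increasing detail" idea in the parent comment can be illustrated with a toy refinement step: a rough specification is progressively overridden by more detailed ones, and the merged result is what a generator would consume. This is only a sketch of the workflow being described; all field names are invented:

```python
# Toy illustration of iterative spec refinement: merge a more detailed
# specification over a rough one. Field names are invented for the sketch.

def refine(base, override):
    """Merge a more detailed spec over a general one (nested dicts)."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merged[key] = refine(base[key], value)  # recurse into sub-specs
        else:
            merged[key] = value
    return merged

rough = {"service": "invoicing",
         "storage": {"kind": "sql"},
         "auth": "required"}

detailed = {"storage": {"kind": "sql", "engine": "postgres", "pool": 10},
            "api": {"style": "rest", "versioned": True}}

final_spec = refine(rough, detailed)
print(final_spec["storage"]["engine"])  # postgres
print(final_spec["auth"])               # required (carried over)
```

Each refinement pass keeps the general decisions ("auth": "required") while pinning down the details, which mirrors how the final product "gets iteratively ironed out" in the comment above.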
Re: The specification language (Score:3)
Re: (Score:2)
Variants of 4GLs have been around for quite some time, and according to Wikipedia, "In the 1980s and 1990s, there were efforts to develop fifth-generation programming languages (5GL)". But I think things are moving far beyond that now.
Re: (Score:2)
News Alert: OpenAI Board Chair Bullish on AI (Score:2)
In addition to serving as the Board Chair [openai.com] of for-profit-status-investigating OpenAI [citizen.org], Bret Taylor is also co-founder of Sierra [sierra.ai], "the conversational AI platform for businesses."
Seen it before (Score:2)
Once upon a time programmers struggled with writing machine code. Then some clever people came up with a system for specifying what you needed in almost plain English, and the computer would do the rest. It was called COBOL, and coding was never quite the same. But still, programmers were badly needed, and once everyone was on board with these modern "plain English" ways of coding, the requirements just escalated so we still had more than enough work to do. After COBOL, ALGOL, and FORTRAN, there came new
An AI first coding language? (Score:2)
At least if AI codes in current languages designed for human use, there's hope of it being legible to human auditing (careful of any obfuscation trickiness). But if humans, or AI, design a new language to write code in that's more efficient for the AI to process, at the cost of legibility to humans, then who knows what the code is really doing or how it's doing it anymore? I suppose comments in code aren't needed by AI and they could choose to work in a language that's even designed to be unintelligible to hum
Re: (Score:2)
We really need new structures for this kind of thing. It's completely new in the industry. I propose a name for it: we should call it a function. More encapsulated than a subroutine.
I also dream of such a day (Score:2)