Can a Code-Writing AI Be Good News For Humans? (indianexpress.com)
"A.I. Can Now Write Its Own Computer Code," blares a headline in the New York Times, adding "That's Good News for Humans. (Alternate URL here.)
The article begins with this remarkable story about Codex (the OpenAI software underlying GitHub Copilot): As soon as Tom Smith got his hands on Codex — a new artificial intelligence technology that writes its own computer programs — he gave it a job interview. He asked if it could tackle the "coding challenges" that programmers often face when interviewing for big-money jobs at Silicon Valley companies like Google and Facebook. Could it write a program that replaces all the spaces in a sentence with dashes? Even better, could it write one that identifies invalid ZIP codes? It did both instantly, before completing several other tasks.
"These are problems that would be tough for a lot of humans to solve, myself included, and it would type out the response in two seconds," said Mr. Smith, a seasoned programmer who oversees an A.I. start-up called Gado Images. "It was spooky to watch." Codex seemed like a technology that would soon replace human workers. As Mr. Smith continued testing the system, he realized that its skills extended well beyond a knack for answering canned interview questions. It could even translate from one programming language to another.
Yet after several weeks working with this new technology, Mr. Smith believes it poses no threat to professional coders. In fact, like many other experts, he sees it as a tool that will end up boosting human productivity. It may even help a whole new generation of people learn the art of computers, by showing them how to write simple pieces of code, almost like a personal tutor.
"This is a tool that can make a coder's life a lot easier," Mr. Smith said.
The article ultimately concludes that Codex "extends what a machine can do, but it is another indication that the technology works best with humans at the controls."
And Greg Brockman, chief technology officer of OpenAI, even tells the Times "AI is not playing out like anyone expected. It felt like it was going to do this job and that job, and everyone was trying to figure out which one would go first. Instead, it is replacing no jobs. But it is taking away the drudge work from all of them at once."
Would we steer you wrong? (Score:2)
The article ultimately concludes that Codex "extends what a machine can do, but it is another indication that the technology works best with humans at the controls."
Autonomous vehicles.
Depends on what inputs it understands (Score:5, Insightful)
Or, "create a database with a user friendly gui to track systems composed of the following hardware, firmware and software options for each of the following n subsystems". (the problem here is that by the time I've defined the problem, I've 90% solved it
If all it can do is fill in code for a few types of problems it already knows how to solve, then it really is just a high-level language with poorly defined syntax (which will surely cause problems when someone doesn't ask for exactly what they want).
Re: (Score:3)
Couldn't Wolfram Alpha [wolframalpha.com] be thought of as a precursor to this system?
Re: (Score:1)
How about Stackoverflow? Isn't "the cloud" as magical as AI?
Re: Depends on what inputs it understands (Score:3)
Re: (Score:3)
Code writing AI could theoretically be designed to write better code faster.
I don't see how. Either you specify the task in 100% full detail, in which case you've just written the code, and all the AI is doing (at best) is recompiling your high-level code into a lower-level language; or you specify only a rough high-level outline of what needs to be done and let the AI fill in all the details, in which case you'll invariably end up with a program that does something like what you want but needs to be modified to match the implicit requirements that you didn't include in the outline.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
You need someone who is PERFECT at writing unit tests. If you are missing even one condition in the tests, you will get a broken application that passes the tests.
It usually takes me 5 minutes to write code and 2 hours to write tests.
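The point above can be made concrete with a small sketch (the function and its spec here are hypothetical, my own invention, not from the thread): an implementation that is wrong for negative inputs still sails through a test suite that never exercises that condition.

```python
# Hypothetical example: clamp_positive is *supposed* to return x for
# x > 0 and 0 otherwise. The implementation below is wrong for
# negative inputs (it negates them instead of clamping to 0).

def clamp_positive(x: int) -> int:
    return -x if x < 0 else x  # bug: should be `0 if x < 0 else x`

# An incomplete test suite with no negative-input case passes anyway:
assert clamp_positive(5) == 5
assert clamp_positive(0) == 0
# clamp_positive(-3) returns 3, not the specified 0 -- the missing
# test condition is exactly where the broken behaviour hides.
```

If tests like these were handed to a code-writing AI as the entire specification, the AI would be equally free to produce this kind of plausible-but-wrong implementation.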
Re: (Score:2)
It usually takes me 5 minutes to write code and 2 hours to write tests.
I like to test everything, to make sure it works, but this will never be 100%. My field is mainly electronic design, rather than software. I test all my designs at prototype stage, which can be quite a slog. But I have a theory about how the design works, so what I test is the extremes, according to my theory. Otherwise, I would never get the job done.
The basic point is that you can't just write tests completely blind.
There was a software testing method years ago, where you inflict your creation on the offi
Re: (Score:2)
Re: (Score:2)
I already have this problem with real humans.
Few people truly understand their code.
That's why I insist on hiring unreal humans.
Re: (Score:2)
Make it pass these unit tests? Then all you need is someone who is good at writing unit tests. I could see this as a net win.
The problem is, most non-trivial software would require an infinite number of unit tests to verify the AI's output. For example, for AI-written Microsoft Word you'd need to test every possible Microsoft Word document to make sure the program handles them all correctly. (With human programmers, you can get away with spot-checking just a few examples, to some extent, because your human programmers [hopefully] understand what the intent of the program is and code accordingly... an AI, on the other hand, does
Re: (Score:2)
...it is handled by going back to the programmer with an explanation of what you really want (vs what you got), and the programmer then sits down and modifies the program to suit, and you repeat the cycle until you're happy.
That's called the "Boeing Effect".
That's when you give the customer a finished deliverable built to spec and they say, "Yes, that's what I asked for, but it's not what I wanted."
Re: (Score:2)
It usually takes a day for a customer to write A4 of specifications. It then takes me 2 hours to write questions from those specifications. And then it takes 2 weeks for customer to answer those questions. It then takes about an hour or two for me to write code based on that and about a week to write tests for it. Then testing engineers spend another week writing their own tests for it. After this comes few hours of testing, few hours of bug fixing and test adjustments and meetings where specifications are
Re: (Score:2)
. .Either you specify the task in 100% full detail, in which case you've just written the code, and all the AI is doing (at best) is recompiling your high-level code into a lower-level language; or you specify only a rough high-level outline of what needs to be done and let the AI fill in all the details,
Probably the most difficult job in writing software is to define the problem you intend to solve. AI does nothing to help with that, as far as I can see. Some customers have a rather vague idea of what they want the magic software to do, so you have to flesh that out with practical proposals, and see if the customer agrees. That does not sound like the kind of thing an AI algorithm can do.
When it comes to automatic code generation, C++ templates have a mixed reputation. They bloat executables, by generating
Re: (Score:2)
Code writing AI could theoretically be designed to write better code faster.
A real AI system could theoretically solve all of humankind's problems.
Too bad there is no such thing.
Re: (Score:2)
I've seen these kinds of "AI" for over thirty years. You have to define the problem well enough and give it some structure and context and then it will "program" the solution. You might as well code it yourself. It will take less time. No one is asking for spaces to be replaced by dashes. That's a one-liner in many languages. Determining if a zip code is valid is also trivial. These are solutions no one is looking for.
At best these kind of fake AI attempts will raise the level at which we program, bu
Re: (Score:2)
You have to define the problem well enough and give it some structure and context and then it will "program" the solution.
Kind of the role of a software architect. [ncube.com]
Re: (Score:2)
If you spend a lot of times doing things that have been done before, writing boilerplate code, you are programming wrong. That is what functions are for. Most of what we do should be novel (at least, situation specific. Otherwise you could just buy a package that already does what you need).
Re: Depends on what inputs it understands (Score:2)
Re: (Score:2)
I think the real goal is to give a set of inputs and outputs, and have the computer write the function as needed.
This part is easy, the hard part is getting the computer to extrapolate correctly. Neural networks are extremely bad at extrapolating because they don't understand what is implied.
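A minimal sketch of that failure mode, using an ordinary least-squares line fit in plain Python (no neural network involved, but the same interpolate-vs-extrapolate gap applies): given input/output pairs sampled from y = x², a linear model looks roughly right inside the sampled range and is wildly wrong outside it.

```python
# Fit a straight line to points sampled from y = x**2 on x in [0, 3].
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]  # [0.0, 1.0, 4.0, 9.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x  # fitted line: y = 3x - 1

def predict(x: float) -> float:
    return slope * x + intercept

# Near the data the fit is tolerable; far outside it the model
# predicts 29.0 where the true function gives 100.
print(predict(2.5))   # plausible inside the sampled range
print(predict(10.0))  # extrapolation: 29.0, but 10**2 == 100
```

A model that only matches the examples it was shown has no notion of the *implied* rule behind them, which is the commenter's point.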
Re: (Score:2)
Re: (Score:2)
Valid zip code is REALLY hard. First you need to answer these questions:
- Which country zip code are we talking about? Or should it be global?
- If zip code was terminated last year, should it be considered valid? Should the function tell what zip code they should use now, auto-correct it or just reject? Or should we take the date as a parameter and validate data based on the date, assuming user can input historical data.
- Do we allow special syntaxes that are often used in the local culture or do we accept
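To make the point concrete, here is a sketch of the kind of "valid ZIP code" function an interview answer usually produces (the regex and function name are my own, not from the article). It answers none of the questions above: it hard-codes the US 5-digit format and silently rejects real postal codes from other countries.

```python
import re

def naive_zip_valid(code: str) -> bool:
    """Syntactic check for a 5-digit US ZIP -- and nothing else."""
    return re.fullmatch(r"\d{5}", code) is not None

# Looks fine on the happy path:
assert naive_zip_valid("90210")

# ...but rejects perfectly valid postal codes elsewhere:
assert not naive_zip_valid("K1A 0B1")     # Canada
assert not naive_zip_valid("SW1A 1AA")    # United Kingdom
assert not naive_zip_valid("90210-1234")  # even US ZIP+4
```

None of the hard questions (country, retired codes, historical data, local syntax conventions) are even representable in this shape of solution.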
Re: (Score:2)
Re: (Score:1, Informative)
I hope you shot whoever gave you that first problem specification, because it's terrible. Spherical coordinates only make sense in three dimensions, but a disk is two dimensional, which means you need orientation information and radius-of-sphere (more likely ellipsoid) to determine the angular coordinates.
Plus you need to know how the disk is projected or mapped onto the three-dimensional surface, at least if you want to be very precise in your answer.
Finally, "resolution" doesn't make sense like that. Fo
Re: (Score:2)
Re: (Score:1)
It's important that we understand its output. Otherwise the machine will invent its own language that no human can understand, and then, when the machines start their own little gossipy social networks, watch out!
Re: (Score:2)
Already happened:
Jul 31, 2017
https://www.forbes.com/sites/t... [forbes.com]
"Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input."
Re: (Score:2)
Every single computer program ever written has been made with the purpose of affecting the user's mind. This is no different: as long as the process of AI-generated code can be shaped by the mind of someone who understands the end user's mind, human-to-human, it can create something useful. Left to its own devices it will create nonsense.
Re: (Score:2)
Every single computer program ever written has been made with the purpose of affecting the user's mind.
I don't think that's true.
It could (Score:2)
It depends on how it actually works, and also how well.
Productivity = less jobs (Score:5, Insightful)
Your job may not be replaced, but your productivity will replace the need to keep your coworkers. May the best employee survive until we replace them with a cheaper youngster from anywhere in the world that has internet.
Re:Productivity = less jobs (Score:5, Insightful)
This seems to work in real life - while the auto industry put a lot of people out of work who used to deal with horses, it created a huge number of new jobs. Same for computers - people whose lives were spent doing sums by hand lost their jobs, but a whole new industry was created.
Re: (Score:2)
increased productivity means that the company can afford to hire more workers.
This is known as the Jevons Paradox [wikipedia.org].
When productivity improvements allow a resource to be used more efficiently, demand for that resource often counter-intuitively goes UP.
Labor-saving technology usually leads to higher wages rather than unemployment.
Re: (Score:2)
Yes but what if we consider automation to be the resource in question? Wouldn't demand for automated processes coincidentally increase? This does make perfect sense as, after all, whatever a human can do, a machine will eventually learn to do better.
Ultimately, the great problem of automation isn't that it increases human productivity. This "new" kind of automation makes human workers completely obsolete. The AI in the article isn't doing the software engineering I am doing. It's writing the code I would hire
Re: (Score:2)
Imagine it's 1980, when someone might worry that computers can "think" and will soon replace all human jobs that involve thinking. Instead they just created an enormous computer industry that employs a lot of people.
Re: (Score:2)
Yes but what if we consider automation to be the resource in question?
The Jevons Paradox is not universally applicable. It is most common when a resource is a bottleneck. In most human endeavors, the primary bottleneck is the availability of skilled labor.
Wouldn't demand for automated processes coincidentally increase?
Possibly. Which will increase demand for people who can create and maintain those automated processes. These people are called "programmers".
This "new" kind of automation makes human workers completely obsolete.
No. Not at all. That is complete nonsense.
Re: (Score:2)
But this is not considering more complex models - the workers are not unskilled assembly line workers, the skills and jobs are not fungible, and so forth. In the real world, we are still unable to measure productivity effectively, especially with software. I've seen too many instances where someone really bad is praised for productivity, because he churns out bad code very fast, or churns out useless and unnecessary code, or gets out the features fast but also has a long stream of bug fixes afterwards.
Ie,
There was 80 years of "displacement" (Score:3)
So yeah, after 2 generations of poverty and war the survivors will thank us. Doesn't help us now.
Meanwhile automation killed about 70% of the middle class jobs since the 80s [businessinsider.com] (strictly speaking it wasn't just automation; process improvement played a role). This is why we've got 57 million people in the gig economy [forbes.com]; they can't find stable, full time work that pays for rent
Re: (Score:2)
I'll ask you the same thing I ask everyone who makes this point: what jobs will replace those ones automated away?
Machines replacing labour has been going on for centuries. But there were still jobs. They were different jobs. So there was not so much of a demand for nailing iron shoes on horses' hooves, but more of a demand for gas stations, because mechanised transport replaced horse-drawn transport.
Re: (Score:2)
Same for computers - people whose lives were spent doing sums by hands, lost their jobs, but a whole new industry was created.
It is something of a conundrum that there are so many ways to use machines to avoid drudgery, and yet most people are working their butts off.
Re: (Score:2)
Often they get replaced by 2 or more cheaper workers, who end up being less productive overall; more code checked in, but ultimately it is a churn of fixing their own bugs, making bad designs, unable to debug customer issues, etc. Good programmers are not merely assembly line workers. And an "AI" that does this is not helping the matter.
Now, give me the AI that tells me how to shove 1MB of object code onto a 20KB code space, I'll pay attention then. Or if it can analyze a protocol and tell me what's wrong
Sure it can (Score:2)
It's good news for tech firms (Score:2)
Re: (Score:2)
Excuses only work if they live up to results. That's not this story.
Re: It's good news for tech firms (Score:2)
Re: (Score:2)
They can use this as an excuse to lower tech wages.
They don't need an excuse.
If companies could pay lower wages and still recruit and retain the employees they need, they would already be doing so.
Re: It's good news for tech firms (Score:2)
Like all "AI" it's only as good as its own code (Score:5, Insightful)
Re: (Score:1)
Write code that does a task that is tedious to a human and call it AI. Done.
Children are AI.
Re: (Score:2)
No, we can make programs that take in data that no human brain can handle, and then have it write software. You can't call that resultant code "exactly what a programmer wrote." There is code produced that way that no human understands.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You miss the point, that was only the first step. What about the program written by a program, from data no human could understand and with code no human understands? No human could write that code, no human understands that code. A human only set things in motion, but the result was a system beyond comprehension.
Re: (Score:2)
Reminds me of a passage from Asimov's short story, "The Evitable Conflict":
[have some people check out the "machines" anyway]
"No, he said that no human could. He was frank about it He told me, and I hope I understand him properly, that the Machines are a gigantic extrapolation. Thus- A team of mathematicians work several years calculating a positronic brain equipped to do certain similar acts of calculation. Using this brain they make further calculations to create a still more complicated brain, which they use again to make one still more complicated and so on. According to Silver, what we call the Machines are the result of ten such steps."
Re: (Score:2)
The "AI" is still doing only what it was coded to do.
Well, yes and no. In modern AI, the AI is coded to learn, and (if successful) it learns how to do something. But at the end of the training process, it can now do something that no programmer ever coded it to do [economist.com].
Re: (Score:2)
Re: (Score:2)
It's a software program that does exactly what a programmer wrote.
The behavior of a DL system is determined far more by the training data than by what the programmer wrote.
Claiming that an AI's behavior is "just programming" is as silly as claiming that human behavior is "just DNA".
Re: (Score:2)
Science fiction has led some people to believe that AI does its own thinking. It doesn't. It's a software program that does exactly what a programmer wrote.
I think the problem is that some people believe that brains are computers, and so all you have to do is create a sufficiently powerful computer, and it will be as good as a brain. But this is like saying motor cars are improved horses, because cars go faster and for longer than horses. I am pretty sure, by introspection, that most of what my brain does is not computation. I am currently thinking of a recipe for leek and potato soup, that might involve Stilton cheese, and maybe some butternut squash. Is this
It's a tool. (Score:4, Insightful)
Anything that assists in the completion of the program (without actually understanding the goal) is a tool for a programmer. Call it a "code-writing AI" if you like, but it lacks the ability to comprehend the context of the code it spits out, so it's heavy on the 'A' and light on the 'I' in AI. This could be a useful tool, but it could also be a disastrous tool, all depending on how good the programmer is that's using it. You can give someone a CNC machine which does all the machining for you, but if the craftsman doesn't know its capabilities and limitations then you could end up with a shoddy product. It's not the tool, it's the craftsman that matters, because the tools just make it easier.
Re: (Score:1)
Writing the code is the easy part (Score:5, Insightful)
Writing the code is normally the easy part. The tougher parts are understanding the requirements, digging deeply enough into them to identify all the possible edge cases, and then figuring out the best way to break up the solution so that it's easy for other humans to maintain. Writing code that replaces all the spaces in a sentence with dashes is fairly trivial, which is why we normally just use existing libraries to do that kind of stuff instead of building it from scratch.
Re: (Score:2)
Till AI can figure out what the user means, high-end coding is safe.
Routine SQL query jocks pretending to be coders will be out of their jobs. Lower grade imported H1Bs for example. Higher end will survive.
Re: (Score:2)
My thoughts when I looked at OpenAI Codex:
Yeah, impressive. When a human has done the hard work of splitting the whole up into easily describable pieces, the AI can turn their verbal description into code.
That's a nice amount of language processing. But it doesn't even touch what actual software development work is like.
Mod parent up! (Score:2)
Mod parent up!
Abstraction layers and fully inclusive specs are HARD
It's not the programming that's hard (Score:2)
It's asking the right questions of the person requesting the software solution that is hard.
Can a Code-Writing AI Be Good News For Humans? (Score:2)
same issue exists (Score:2)
Does it solve the age old problem of interpreting human expression? Computer languages are all about allowing humans to translate our thoughts into machine code. Writing that code is helpful but doing a better job understanding humans is even more helpful.
Depends (Score:2)
Which bathroom is he/she/it going to use?
Re: (Score:2)
It has to use the bit bucket.
It's glorified Stack Overflow Copy/Paste... (Score:2)
... with all the ensuing problems
Including cybersecurity problems - "GitHub Copilot AI Is Generating And Giving Out Functional API Keys" - https://fossbytes.com/github-c... [fossbytes.com]
Legal and copyright problems - "Analyzing the Legal Implications of GitHub Copilot" - https://fossa.com/blog/analyzi... [fossa.com]
Code quality problems - "GitHub's Copilot may steer you into dangerous waters about 40% of the time" - https://www.theregister.com/20... [theregister.com]
... and more.
Github Copilot is a piece of crap and a complete legal minefield - and n
Compilers are a type of code generating 'AI' (Score:4, Insightful)
Turtles all the way down (Score:2)
2. Programmer codes bits directly into a digital computer
3. Programmer create a language
4. Programmer creates a standard library
5. Programmer creates AI
6. Programmer tells AI what to code
7. Programmer tells the AI what problem to solve
8. Programmer identifies areas for the AI to improve
9. Programmer monitors AI
10. AI is self monitoring
It's going to take a bit before getting rid of programmers. Phase 10 is a weee bit out. And until phase 10, the output of the AI
Keep going ... 11 ... 12 ... singularity? (Score:2)
Singularity steps?
A.I. only serves a small set of human needs. (Score:1)
The best thing this will be able to do is write an algorithm for me to process some inputs and deliver the output.
so for example choosing the best sorting algorithm
or traversing a network of nodes
yeah ok, so what. we already have libraries for that.
the A.I. is not going to be able to set up the whole stack of junk needed from the funky DNS name, buy it, load balance it, secure it with my favorite Oauth, pick a proper UI framework, use it to create the human specific workflows, stand up an API in front o
Really? (Score:2)
"Could it write a program that replaces all the spaces in a sentence with dashes? Even better, could it write one that identifies invalid ZIP codes? It did both instantly, before completing several other tasks.
"These are problems that would be tough for a lot of humans to solve, myself included,"
"tough for a lot of humans to solve..." Seriously? These seem like two very easy problems to solve, especially the first one- a line or two of code would do it in most programming languages.
Checking zip codes is onl
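For scale, here is roughly what "a line or two of code" means for each task in Python (the function names are my own, and this is the shallow syntactic version of ZIP checking, not the genuinely hard validity problem discussed elsewhere in the thread):

```python
import re

def spaces_to_dashes(sentence: str) -> str:
    """Replace every space in a sentence with a dash."""
    return sentence.replace(" ", "-")

def looks_like_us_zip(code: str) -> bool:
    """Check the *format* of a US ZIP or ZIP+4. It cannot tell
    whether the code is actually assigned to a real place."""
    return re.fullmatch(r"\d{5}(-\d{4})?", code) is not None

print(spaces_to_dashes("hello world again"))  # hello-world-again
print(looks_like_us_zip("90210"))             # True
print(looks_like_us_zip("9021"))              # False
```

The "spooky" part of Codex is not that these are hard, but that it produces them from a plain-English prompt.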
comprehension (Score:3)
My day job is security, and a good part of it is secure software development (and I co-published a whitepaper on secure AI development recently).
From that perspective, I don't want any code written by an AI, thank you.
Human coders are sloppy, they make mistakes, they are often not trained as well as they should be. But I can question them, I can teach them, I can audit their dev process and review their code.
AI is incomprehensible. The explainable AI approaches are in their infancy and are likely to be left in the dust by the rapid development of new AI systems.
From a security perspective, whatever exploits the AI puts in the code won't be found until a creative attacker rolls out his 0-day.
I would like AI to check code and point out coding issues, to assist the human developers and code reviewers. I'd like AI to help in making judgements, by adding its capability to access vast amounts of data and patterns. I'd like AI to tell me when I write code that there's a library function for that or that my loop can go outside the range or if it thinks I didn't sanitize my input properly.
I've seen the code-writing AI examples and they're fun to watch - but I most definitely wouldn't want to run a business on whatever they create. There's something to be said for human judgement, accountability and responsibility. The AI doesn't know what's behind it and what depends on it.
Lessons from Entity Framework (Score:2)
Entity Framework is an ORM that is supposed to intelligently, automatically "write code" in SQL without the programmer even needing to know SQL. For a lot of trivial CRUD operations and even some more advanced database programming, it works OK. But when performance is important, SQL engineers can testify that the SQL code written by Entity Framework is _awful_ and often is written in a badly-performing manner. Worse, when it does write bad SQL, it's almost impossible for the programmer to change the C# code
"Construct a trading algo to beat the market avg" (Score:2)
Oh it can't just "do that" for you? Hm. Strange.
Nope (Score:2)
Because it does not exist, will not exist any time soon, and it is unclear whether it is even possible.
Please stop the AI bullshit.
Re: (Score:2)
I am with you in spirit, but we have lost that war.
Anything that does jobs that we think of as requiring intelligence is now considered AI.
Maybe this is for the best, because that's a fairly valid way to look at things, even if it isn't what was originally meant by the phrase.
What about license? (Score:4, Interesting)
So what's the license of the code the AI generates?
If it was trained with GPL code, does the GPL apply since it was GPL derived? What happens if it was trained with code under incompatible licenses? What happens if it was trained with 3-clause BSD code, requiring the advertisement?
This is likely going to be a bigger hindrance than anything else, because the last thing anyone wants is accidentally tainting their code.
Possibilities for injecting malicious code (Score:1)
Sure.. (Score:1)