Google's Secret New Project Teaches AI To Write and Fix Code (businessinsider.com) 50
Google is working on a secretive project that uses machine learning to train code to write, fix, and update itself. From a report: This project is part of a broader push by Google into so-called generative artificial intelligence, which uses algorithms to create images, videos, code, and more. It could have profound implications for the company's future and for the developers who write code. The project, which began life inside Alphabet's X research unit under the codename Pitchfork, moved into Google's Labs group this summer, according to people familiar with the matter. The move signaled the project's increased importance to company leaders. Google Labs pursues long-term bets, including projects in virtual and augmented reality.
Pitchfork is now part of a new group at Labs named the AI Developer Assistance team run by Olivia Hatalsky, a long-term X employee who worked on Google Glass and several other moonshot projects. Hatalsky, who ran Pitchfork at X, moved to Labs when it migrated this past summer. Pitchfork was built for "teaching code to write and rewrite itself," according to internal materials seen by Insider. The tool is designed to learn programming styles and write new code based on those learnings, according to people familiar with it and patents reviewed by Insider. "The team is working closely with the Research team," a Google representative said. "They're working together to explore different use cases to help developers."
And thus... (Score:4, Interesting)
Sorry Dave (Score:5, Funny)
Re: (Score:2)
I, for one, welcome our new AI overlords.
Re:And thus... My code broke itself... (Score:2)
If you thought code was hard to get working before, wait until it starts "fixing" itself while you're trying to update it.
It's kind of like trying to clean up a toy room with a bunch of 4-year-old kids running loose, taking toys off the shelves again...
Good luck... especially if you're working with hardware drivers, etc. If the code gets out of the sandbox or starts writing disallowed values, it could literally get your hardware to "commit suicide"...
Is this the automation of "Gang of 4" Patterns? (Score:5, Interesting)
Seems to me that's a relatively obvious approach. And it's NOTHING NEW. The MIT AI Lab was talking about "cliches" a long time before the Patterns book came out, back in the late '70s/early '80s. And much of their work went into recognizing the situations where a given "cliche" could be applied. https://dspace.mit.edu/handle/... [mit.edu] Note in this paper's abstract the key realization that these were not complete programs, but rather applied within the context of a program. (Thus a wave of the middle finger at those who claimed that all you needed for good code was a Patterns catalog, or those who claimed that "architecture is all about patterns.")
(p.s. to the Anonymous Coward who said, "Get a new meme, boomer", I'd respond, "Find something to talk about that boomers haven't already seen.")
Cliches will be around forever (Score:2)
This Google project will be carried to the Google Graveyard within 2-3 years, once the main project manager gets sacked under Google's new stack-ranking job evaluations.
Re: (Score:2)
True. Especially the manager that named the project "Pitchfork". You know what pitchforks are good for? Shoveling shit. That was the first thing that came to mind when I read the name. Might as well call the project "Roto-rooter".
Re: (Score:1)
You know what pitchforks are good for? Shoveling shit
Pitchforks are good for shoveling hay and other long-stemmed vegetation, they are terrible for "shit". You are thinking of a "manure" fork, which is great for shoveling shit.
Re: (Score:2)
OK, true. The "pitchforks" we used had five tines, which technically make them manure forks, but they worked well enough for also moving hay and straw. They didn't work for shit for trying to move wood chips, but they were great for getting the dung off the top of the wood-chip bedding in the stalls.
We didn't own any three-tine forks as they could only be used on loose hay and straw so it would've been necessary to check which fork we grabbed based on their specific use. Thanks for the correction.
Art shows us AI pattern misuse and lack of context (Score:1, Troll)
At this point I think enough people have played around with Stable Diffusion or other AI art makers to know that the AI takes patterns from other art and reassembles them into something that, yes, is sort of what you asked for, but has lots of lumps. Like extra fingers, or no fingers, or maybe a hand coming out of a stomach.
All very funny and such but when you switch to code, just what kind of nonsense would it be generating? Would it be looking at a decorator pattern for something, and decide it was so go
Re: (Score:2)
https://www.deepmind.com/blog/... [deepmind.com]
More recent approaches allow multi-round interaction so you can specify gradually what you want from it. And it parses error messages to auto-fix the b
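The "parses error messages to auto-fix" loop described above can be sketched roughly like this. Note this is a hypothetical illustration, not DeepMind's or Google's actual system: `suggest_fix` stands in for a code-generation model, stubbed out here with a trivial repair so the loop is runnable.

```python
# Sketch of a multi-round "run it, parse the error, try a fix" loop.
import traceback

def suggest_fix(source: str, error: str) -> str:
    # Stand-in for a model call: patch the undefined name flagged in the error.
    if "NameError" in error and "'n'" in error:
        return "n = 10\n" + source
    return source

def auto_fix(source: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        try:
            exec(compile(source, "<candidate>", "exec"), {})
            return source          # ran cleanly: accept this revision
        except Exception:
            error = traceback.format_exc()
            source = suggest_fix(source, error)
    return source                  # give up after max_rounds attempts

fixed = auto_fix("print(n * 2)")   # NameError on round 1, patched on round 2
```

A real system would replace the stub with a model conditioned on the traceback, and would run candidates in a sandbox rather than a bare `exec`.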
Re: (Score:1)
Very interesting - thanks! That may be further along than I thought. Multi-round especially, seems like a good idea.
I guess the problem will always remain: will humans tell the system what they really want to build? Just like today, you can get specs that are not fully thought out... but an AI developing it could allow for faster iteration of ideas, perhaps.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
And the spooks have an AI ... (Score:2)
to find security problems in existing in code. They will then add to their arsenal for cracking systems abroad and at home.
Re: (Score:3)
MIT, BSD, APACHE (Score:2)
Let's hope that the gang at Google learned something from the GitHub Copilot fiasco and trained their models using only code with MIT/BSD/Apache-type licenses, so that the AI doesn't have a sword of Damocles of copyright infringement hanging over it.
Re: MIT, BSD, APACHE (Score:1)
Is an AI an application? A service? An environment? An architecture? Can it read and track the source code on the system it's running on? Identify bugs or bad code? I'm thinking that if ANY of these are a solid yes, lawsuits like the Copilot one are going to be shot down.
Re: (Score:2)
There's a lawsuit, but I don't expect it to get anywhere.
Re: (Score:1)
Re: (Score:2)
Perhaps there should be a license that allows unrestricted code use, but only by humans.
How would the code be compiled without a computer seeing it?
Re: (Score:2)
What about Kite?! (Score:2)
Didn't we just read yesterday that "Kite" closed shop after ten years because this stuff doesn't work and isn't close to working?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Who fixes the fixer? (Score:3)
Re: Who fixes the fixer? (Score:1)
Once software is actually running, it is very hard (impossible?) to serve the exact same purpose and produce the same output without the original software, in its current state, being referenced. Legally, that is.
Skynet (Score:2)
My turn! (Score:2)
Not another one. Where do I buy stock puts or shorts against them? This time *I* want to profit from morons & suckers instead of just the fat cats.
pride goeth before a fall (Score:2)
Re: pride goeth before a fall (Score:1)
Applications seem to write themselves there.....
Re: (Score:2)
Re: (Score:2)
Galactica redux? (Score:2)
Re:Galactica redux? (Score:4, Funny)
Can't work (Score:2, Insightful)
Re: (Score:3)
It means that AI can't write or fix all possible programs. It is of course possible to write an AI to fix a certain kind of bug, or generate some limited subset of programs. Say you could have an AI that spits out the source code for calculators for any specified base, like hexadecimal, octal, or base 42 if you like. And if you put more work into your AI, you can have it generate more complex things. The incompleteness theorem states that no matter how complex you make your AI, there will always be some progr
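The parent's calculator example, a program that emits the source code for a converter in any specified base, can be sketched in a few lines. This is just an illustration of the "limited subset of programs" point; the names are made up, and the digit alphabet here only covers bases up to 36:

```python
# A trivial "code generator": emit the source for a base-N converter.
def make_converter_source(base: int) -> str:
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"[:base]
    return (
        f"def to_base_{base}(n):\n"
        f"    digits = {digits!r}\n"
        f"    out = ''\n"
        f"    while n:\n"
        f"        out = digits[n % {base}] + out\n"
        f"        n //= {base}\n"
        f"    return out or '0'\n"
    )

src = make_converter_source(16)  # generated hexadecimal converter
ns = {}
exec(src, ns)                    # "run" the generated program
```

The point of the incompleteness argument is that however much machinery you bolt onto a generator like this, its output range stays a proper subset of all programs.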
Sure (Score:2)
Brought to you by the company that doesn't bother to even document half their work.
But the training data will be buggy code (Score:2)
because all code has bugs.
Wave that magic wand harder for better results (Score:2)
It's only code fragments from (possibly) working programs that do 'something' that might or might not be applicable. It's not like they're trying to invent a perpetual motion machine or some other impossible task. Not like the halting problem at all. Code fragments are not equivalent to full programs. Completely different.
The lack of being able to describe how an AI process determines its result is a feat
Re: Wave that magic wand harder for better results (Score:1)
Code fragments cannot be evaluated without knowing the purpose of the code surrounding them, and even looking at a trace or memory dump only implies that the libraries are doing what they are tagged as doing. A person has to, at some point, go line by line to know what is really supposed to be happening. So you can call not knowing what a given segment of code is actually doing without testing, patching, retesting, ... a convenient universal feature.
Ouch! My Curiosity hit a paywall (Score:1)
The jokes ... (Score:2)
... almost write themselves! :)
(Yes, I'm a dad and love Dad jokes.)
Karma and Cobol (Score:2)