AI Google Programming Technology

Google's Secret New Project Teaches AI To Write and Fix Code (businessinsider.com) 50

Google is working on a secretive project that uses machine learning to train code to write, fix, and update itself. From a report: This project is part of a broader push by Google into so-called generative artificial intelligence, which uses algorithms to create images, videos, code, and more. It could have profound implications for the company's future and for the developers who write code. The project, which began life inside Alphabet's X research unit and was codenamed Pitchfork, moved into Google's Labs group this summer, according to people familiar with the matter. The move signaled the project's increased importance to Google's leadership. Google Labs pursues long-term bets, including projects in virtual and augmented reality.

Pitchfork is now part of a new group at Labs named the AI Developer Assistance team run by Olivia Hatalsky, a long-term X employee who worked on Google Glass and several other moonshot projects. Hatalsky, who ran Pitchfork at X, moved to Labs when it migrated this past summer. Pitchfork was built for "teaching code to write and rewrite itself," according to internal materials seen by Insider. The tool is designed to learn programming styles and write new code based on those learnings, according to people familiar with it and patents reviewed by Insider. "The team is working closely with the Research team," a Google representative said. "They're working together to explore different use cases to help developers."

  • And thus... (Score:4, Interesting)

    by Xpendable ( 1605485 ) on Thursday November 24, 2022 @05:07PM (#63077662)
    ... Skynet was born. A few years later than expected, but nonetheless...
    • If you thought code was hard to get working before, wait until it starts "fixing" itself while you're trying to update it.

      It's kind of like trying to clean up a toy room with a bunch of 4-year-old kids loose, running around taking toys off the shelf again...

      Good luck... especially if you're working with hardware drivers, etc. If the code gets out of the sandbox or starts writing disallowed values, it could literally get your hardware to "commit suicide"...

  • by david.emery ( 127135 ) on Thursday November 24, 2022 @05:19PM (#63077686)

    Seems to me that's a relatively obvious approach. And it's NOTHING NEW. The MIT AI Lab was talking about "cliches" A Long Time before the Patterns book came out, in the late '70s/early '80s. And much of their work went into recognizing the situations where a given "cliche" could be applied. https://dspace.mit.edu/handle/... [mit.edu] Note in this paper's abstract the key realization that these were not complete programs, but rather applied within the context of a program. (Thus a wave of the middle finger at those who claimed that all you needed for good code was a Patterns catalog, or those who claimed that "architecture is all about patterns.")

    (p.s. to the Anonymous Coward who said, "Get a new meme, boomer", I'd respond, "Find something to talk about that boomers haven't already seen.")

    • This Google project will be carried to the Google Graveyard within 2-3 years once the main project manager gets sacked under Google's new stack-ranking job evaluation.

      • True. Especially the manager that named the project "Pitchfork". You know what pitchforks are good for? Shoveling shit. That was the first thing that came to mind when I read the name. Might as well call the project "Roto-rooter".

        • You know what pitchforks are good for? Shoveling shit

          Pitchforks are good for shoveling hay and other long-stemmed vegetation; they are terrible for "shit". You are thinking of a "manure" fork, which is great for shoveling shit.

          • OK, true. The "pitchforks" we used had five tines, which technically makes them manure forks, but they worked well enough for also moving hay and straw. They didn't work for shit when trying to move wood chips, but they were great for getting the dung off the top of the wood-chip bedding in the stalls.

            We didn't own any three-tine forks, as they could only be used on loose hay and straw; otherwise it would've been necessary to check which fork we grabbed based on its specific use. Thanks for the correction.

    • At this point I think enough people have played around with Stable Diffusion or other AI art generators to know that the AI takes patterns from other art and reassembles them into something that, yes, is sort of what you asked for but has lots of lumps. Like extra fingers, or no fingers, or maybe a hand coming out of a stomach.

      All very funny and such but when you switch to code, just what kind of nonsense would it be generating? Would it be looking at a decorator pattern for something, and decide it was so go

      • It's going to grow up from a few lines to larger pieces of code and eventually full projects. A code-generating AI was used to solve competition-level programming problems, achieving a ranking in the top 54% - better than the average human participant. It got the problem statement and a few test cases, and then went off to solve the whole problem.
        https://www.deepmind.com/blog/... [deepmind.com]

        More recent approaches allow multi-round interaction, so you can gradually specify what you want from it. And it parses error messages to auto-fix the bugs.
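        Roughly, that kind of multi-round setup is a generate/test/repair loop. Here's a minimal Python sketch with the model call stubbed out - none of these names are real Google or DeepMind APIs, it's just the shape of the idea:

        def generate_candidate(prompt, feedback=""):
            # Placeholder for the code-generating model (hypothetical).
            # A real system would send the problem statement plus any
            # error/test feedback gathered in the previous round.
            return "def solve(x):\n    return x * 2\n"

        def run_tests(source, cases):
            # Run the candidate and return feedback text, or None if it passes.
            namespace = {}
            try:
                exec(source, namespace)
                solve = namespace["solve"]
                for inp, expected in cases:
                    if solve(inp) != expected:
                        return "wrong answer: solve(%r) != %r" % (inp, expected)
                return None
            except Exception as exc:
                return "error: %s" % exc

        def repair_loop(prompt, cases, max_rounds=3):
            # Multi-round loop: generate, test, feed the failure back, retry.
            feedback = ""
            for _ in range(max_rounds):
                source = generate_candidate(prompt, feedback)
                feedback = run_tests(source, cases)
                if feedback is None:
                    return source
            return None

        print(repair_loop("double the input", [(2, 4), (5, 10)]))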
        • Very interesting - thanks! That may be further along than I thought. Multi-round especially, seems like a good idea.

          I guess the problem will always remain: will humans tell the system what they really want to build? Just like today, you can get specs that aren't fully thought out... but an AI developing it could perhaps allow for faster iteration on ideas.

    • Web3 petered out so this is the new hotness.
    • Multi-pass compilers, stages 1..4, also do this. When you specify 'bounds' and trace options, the compiler sticks in the appropriate code, or tops and tails a particular variable. That dates from about 1960. I reckon if they introduced 'no nulls' as an option, many things would break. The average programmer is rarely aware of compile options.
    • I agree this phenomenon is absolutely normal and natural. Everything is developing: just as the United States was once built by immigrants, so now we want to colonize Mars. I wrote about this topic here https://studyhippo.com/essay-e... [studyhippo.com], where there is an example of my and other people's work on it. Maybe it's not as interesting as AI, but it's also worth attention.
  • to find security problems in existing code. They will then add it to their arsenal for cracking systems abroad and at home.

  • Let's hope that the gang at Google learned something from the GitHub Copilot fiasco and trained their models using only code with MIT/BSD/Apache-type licenses, so that the AI is not marred by a sword of Damocles of copyright infringement.

    • Is AI an application? A service? An environment? An architecture? Can it read and track the source code on a system it is running on? Identify bugs or bad code? I am thinking that if ANY of these is a solid yes, lawsuits like the Copilot one are going to be shot down.

    • by narcc ( 412956 )

      There's a lawsuit, but I don't expect it to get anywhere.

    • Perhaps there should be a license that allows unrestricted code use, but only by humans.
      • Perhaps there should be a license that allows unrestricted code use, but only by humans.

        How would the code be compiled without a computer seeing it?

    • I think all they need to do is make sure the model doesn't regurgitate copyrighted code that is unique. If it's copyrighted but widely spread in many places, it is fair game. Copyrighted code can be used in training as long as the model only learns the concepts, in other words the ideas, not the expression. Copyright can only cover expression.
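      One crude way to approximate that (purely illustrative - nothing here is from Google's or GitHub's actual pipeline) is to reject generated output whose token n-grams overlap too heavily with any single training snippet:

      def ngrams(code, n=8):
          # Token n-grams of a code string (naive whitespace tokenisation).
          tokens = code.split()
          return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

      def looks_regurgitated(candidate, corpus_snippets, threshold=0.5):
          # Flag output that overlaps heavily with any one training snippet:
          # a rough proxy for copied *expression* rather than a learned
          # *concept*. Unique copyrighted code trips this check; an idiom
          # that appears everywhere mostly does not.
          cand = ngrams(candidate)
          if not cand:
              return False
          return any(
              len(cand & ngrams(snippet)) / len(cand) >= threshold
              for snippet in corpus_snippets
          )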
  • Didn't we just read yesterday that "Kite" closed shop after ten years because this stuff doesn't work and isn't close to working?

  • by Traf-O-Data-Hater ( 858971 ) on Thursday November 24, 2022 @05:48PM (#63077732)
    And will "fixed" code become DNA-like with inclusive large regions of seemingly unimportant material?
    • Once software is actually running, it is very hard (impossible?) to reproduce the exact same purpose and output without referencing the original software in its current state. Legally, that is.

  • You have been warned.
  • Not another one. Where do I buy stock puts or shorts against them? This time *I* want to profit from morons & suckers instead of just the fat cats.

  • There is a kind of cult of AI within Google that makes flashy advances, but ones that are broken at deep levels. This is fostered by the arrogance coming from a warped self-perspective in which people consider themselves (highly paid) gurus, when the reality is that team members have unwarranted opinions of themselves. The problem with this is that they then fail to spot flaws in their work. I have written before about the big flaw in their coming attempt to merge many languages together in a linguistic system
  • So is the code it writes going to be as confident-but-wrong as the research papers produced by Meta's AI?
  • Can't work (Score:2, Insightful)

    by dankasak ( 2393356 )
    AI can't write or fix code, according to Gödel's incompleteness theorem. I think most people instinctively understand that AI is not, and never will be, conscious. So why do we keep getting these ridiculous claims?
    • by myrdos2 ( 989497 )

      It means that AI can't write or fix all possible programs. It is of course possible to write an AI to fix a certain kind of bug, or to generate some limited subset of programs. Say you could have an AI that spits out the source code for calculators in any specified base, like hexadecimal, octal, or base 42 if you like. And if you put more work into your AI, you can have it generate more complex things. The incompleteness theorem states that no matter how complex you make your AI, there will always be some programs it can't handle.
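      To make the "limited subset" point concrete, here's a toy, hand-written generator (no ML involved, plain Python, bases limited to what int() can parse) that spits out calculator source for a chosen base:

      import textwrap

      def make_calculator_source(base):
          # Emit the source of a tiny calculator for the given base.
          # int() only parses bases 2..36, so base 42 would need a custom parser.
          if not 2 <= base <= 36:
              raise ValueError("this toy generator only handles bases 2..36")
          return textwrap.dedent('''
              DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

              def parse(s):
                  return int(s, {b})

              def to_base(n):
                  if n == 0:
                      return "0"
                  out = ""
                  while n:
                      n, r = divmod(n, {b})
                      out = DIGITS[r] + out
                  return out

              def add(a, b):
                  return to_base(parse(a) + parse(b))
          ''').format(b=base)

      hex_calc = {}
      exec(make_calculator_source(16), hex_calc)
      print(hex_calc["add"]("ff", "1"))   # prints "100"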

  • by OYAHHH ( 322809 )

    Brought to you by the company that doesn't bother to even document half their work.

  • because all code has bugs.

  • Turing's halting problem proof is not relevant because AI is silly-con valley's extra special magic!

    It's only code fragments from (possibly) working programs that do 'something' that might or might not be applicable. It's not like they're trying to invent a perpetual motion machine or some other impossible task. Not like the halting problem at all. Code fragments are not equivalent to full programs. Completely different.

    The lack of being able to describe how an AI process determines its result is a feature.

    • Code fragments cannot be evaluated without knowing the purpose of the surrounding code, and even looking at a trace or memory dump only implies that the libraries are doing what they are tagged as doing. A person has to, at some point, go line by line to know what is really supposed to be happening. So you can call not knowing what a given segment of code is actually doing, without testing, patching, retesting, ... a convenient universal feature.
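      In practice, the testing-and-retesting part also has to sidestep the halting issue from the parent comment: you can't prove an arbitrary fragment terminates, so you run it with a wall-clock budget and treat a timeout as a failure. A rough Python sketch (nothing Google-specific, just the general idea):

      import multiprocessing

      def _run(source, queue):
          # Child process: execute the fragment and report whatever it
          # bound to the name "result".
          namespace = {}
          exec(source, namespace)
          queue.put(namespace.get("result"))

      def run_with_deadline(source, seconds=1.0):
          # You can't decide in general whether an arbitrary piece of code
          # halts, so just run it in a separate process and kill it when
          # the budget runs out.
          queue = multiprocessing.Queue()
          proc = multiprocessing.Process(target=_run, args=(source, queue))
          proc.start()
          proc.join(seconds)
          if proc.is_alive():
              proc.terminate()
              proc.join()
              return None, "gave up (may or may not ever halt)"
          return (queue.get() if not queue.empty() else None), "finished"

      if __name__ == "__main__":
          print(run_with_deadline("result = sum(range(10))"))
          print(run_with_deadline("while True: pass"))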

  • ... almost write themselves! :)

    (Yes, I'm a dad and love Dad jokes.)

  • COBOL was supposed to allow middle management to write code and get rid of programmers. Now, with "AI", programmers can worry about their jobs.
