
More Than a Quarter of New Code At Google Is Generated By AI

Google has integrated AI deeply across its operations, with over 25% of its new code generated by AI. CEO Sundar Pichai announced the milestone during the company's third quarter 2024 earnings call. The Verge reports: AI is helping Google make money as well. Alphabet reported $88.3 billion in revenue for the quarter, with Google Services (which includes Search) revenue of $76.5 billion, up 13 percent year-over-year, and Google Cloud (which includes its AI infrastructure products for other companies) revenue of $11.4 billion, up 35 percent year-over-year. Operating incomes were also strong. Google Services hit $30.9 billion, up from $23.9 billion last year, and Google Cloud hit $1.95 billion, significantly up from last year's $270 million. "In Search, our new AI features are expanding what people can search for and how they search for it," CEO Sundar Pichai says in a statement. "In Cloud, our AI solutions are helping drive deeper product adoption with existing customers, attract new customers and win larger deals. And YouTube's total ads and subscription revenues surpassed $50 billion over the past four quarters for the first time."
  • by silentbozo ( 542534 ) on Tuesday October 29, 2024 @06:34PM (#64904809) Journal

    Does this mean 25% of the codebase at Google is completely autonomous, from inception, design, implementation, testing, deployment, and maintenance?

    Or does this mean Google is using their own version of GitHub Copilot as an autocomplete resource alongside normal engineering activities, and attributing 25% of the newly implemented codebase, by lines, to "AI"-generated activity?

    Given that Google is trying to push "AI" as a sellable feature, I'd want to know the actual breakdown of how it is being dogfooded, and the ROI. Instead of talking about how much new code is "AI" generated, how many engineer hours are they saving, and are they getting an equivalent or better level of deliverables (features, tests, tooling), on an equivalent or better delivery timeline?

    https://www.businessinsider.co... [businessinsider.com]

    "Pichai said using AI for coding was "boosting productivity and efficiency" within Google. After the code is generated, it is then checked and reviewed by employees, he added.

    "This helps our engineers do more and move faster," said Pichai. "I'm energized by our progress and the opportunities ahead, and we continue to be laser focused on building great products.""

    This basically sounds like GitHub Copilot. You're pairing with an LLM, but you still need humans in the loop to judge and tweak the output, do code reviews, etc. The benefit here is that Google has large internal repositories to train against. Will external customers be able to benefit from this, or are these proprietary Google-only models?

    • by r1348 ( 2567295 )

      Don't you just love it when the LLM you're forced to work with tries to use a string as an array index? Actual LLM coding experience.

      • by ls671 ( 1122017 )

        It works in PHP; maybe that's where it got the idea from.
        https://www.php.net/manual/en/... [php.net]

        • It works in php

          It works in C also.

          #include <stdio.h>
          int
          main(void)
          {
                  /* E1[E2] means *(E1 + E2), so 0["Hello"] == "Hello"[0] == 'H' */
                  printf("%c\n", 0["Hello"]);
                  return 0;
          }


          > gcc -Wall -Wextra -O3 foo.c
          > ./a.out
          H

          • by Rei ( 128717 )

            My eyes are burning. ;) I should slip that in some code and see what the code reviewer thinks of it ;)

            • Grab yourself a copy of this: Expert C Programming [amazon.com]. You'll thank me later.
              • by Rei ( 128717 )

                Oh, I fully get how it works (C arrays are just *(pointer + index), so you can swap pointer and index), but it's still painful to look at ;) Sort of a non-malicious version of these [github.com].

            • This is part of my favorite obfuscated C contest winner from the early years. The classic 'unix["Hello"]' puzzler (what is the value of 'unix'?).

              I'm surprised it still works, even in stricter C++. But essentially "E1[E2]" is shorthand for "*(E1+E2)", using pointer arithmetic; one of the two expressions must be an integer and the other a pointer to a type. Nothing in the rules dictates the order of the expressions.
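              For reference, a minimal C sketch of that swap (the 'unix' part stays in a comment, since modern compilers in strict ISO mode no longer predefine that macro):

              #include <stdio.h>
              int
              main(void)
              {
                      const char *s = "Hello";
                      /* E1[E2] is *(E1 + E2), and addition commutes,
                         so s[1] and 1[s] name the same byte. */
                      printf("%c %c\n", s[1], 1[s]); /* prints: e e */
                      /* Old Unix compilers predefined unix as 1,
                         so unix["Hello"] meant "Hello"[1], i.e. 'e'. */
                      return 0;
              }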

      • Maybe switch to a cool language that lets you do things like that? :-D (Also, is this yet another tale from GPT-3.5 you keep repeating? Have you tried e.g. Qwen 7b-coder, etc.?)

        More seriously, code generation isn't the best use of LLMs in programming IME. They kick ass at code reviews, if you feel like being humbled. They are also great at helping to spitball stuff and getting pointers to authoritative sources of information.
        • But good or bad, the first rule still applies: Be skeptical! Don't rubber stamp a code review just because it was written by an AI, and also don't rubber stamp a code review just because the senior member of the programming staff with 30 years of experience wrote it. Doubt every line.

          This is one of my pet peeves, mostly because of nearly a decade at one company where code reviews were quite often rubber-stamped, and even if a bug was found there would be pushback against fixing it because then deadlines would...

      • by lsllll ( 830002 )

        See? You can't even peel LLMs away from C-style programming, let alone hard-core C programmers. I wonder if the LLM guards against buffer overflows.

    • Does this mean 25% of the codebase at Google is completely autonomous, from inception, design, implementation, testing, deployment, and maintenance?

      You could even take it to mean Terraform, Kubernetes manifest, Helm chart, etc. "code", or anything else from that whole ecosystem of plumbing-as-code job-creation software.

      • Comments technically qualify as "code" for the purposes of managerial presentations. Which my "upline" didn't bother to mention during their quarterly E-level meeting, when they said our code is 40% AI-generated. (We use an AI system to help comment code as part of an automated documentation repository.)
    • by phantomfive ( 622387 ) on Tuesday October 29, 2024 @07:14PM (#64904899) Journal

      Does this mean 25% of the codebase at Google is completely autonomous, from inception, design, implementation, testing, deployment, and maintenance?

      In the investor announcement [blog.google], it's a somewhat disjointed statement at the end of a long line of more concrete statements. You can imagine that it was put there by some ambitious ladder-climbing manager, who did some "research" motivated by getting a phrase into the earnings announcement. Having gotten his phrase in, his profile is now raised compared to his fellow comrades (or so he thinks, while those around him roll their eyes; but it also might actually work in helping him get promoted).

      Incidentally, the earnings report also mentioned Notebook LM [notebooklm.google], which uses AI to summarize long texts. So now we're going to have people generating long texts from simple prompts, and then we're going to use AI to condense them back into the simple prompts. This is how we will communicate in the future. It's glorious!

      • *automatically generated summary*

        The 25% claim is marketing bullshit.

        The future is bots talking to each other.

    • by gweihir ( 88907 )

      Does this mean 25% of the codebase at Google is completely autonomous, from inception, design, implementation, testing, deployment, and maintenance?

      Sounds like it. Or maybe it is only experimental code, i.e. the number is very misleading? Basically a lie?

    • It means the internal IDE has a pretty good auto-complete.
    • There's a reason senior software engineers share their commit summaries when more lines of code are deleted than are added. As I've gone through my career, now a Principal Engineer, I spend less and less time in the editor. I joke with my boss (and his boss) that the best bang for their buck is when I'm asleep, or in the shower. That's where I come up with solutions to the capital-H HARD problems.
    • A while ago I had ChatGPT take a list of keyboard codes and build them into a few Rust structures. It saved me about two hours' worth of time, and it makes up over half of the "code" of that project. Problem is, it's not exactly code; it's basically just rote transcription of data from one format to another. No flow-control logic, just data. In other words, grunt work, not fun.

      I also asked it to write the code for actually searching for the right USB device to send those codes to and...wow, it did it in the hardest,
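      (To illustrate the "just data" point above: a hypothetical sketch, in C to match the other examples in this thread rather than Rust, of the kind of table such transcription produces. The character-to-usage-ID pairs follow the USB HID Usage Tables; everything else here is made up.)

      #include <stdio.h>

      struct keycode { char ch; unsigned char usage; };

      /* Rote data: characters mapped to USB HID keyboard usage IDs. */
      static const struct keycode keymap[] = {
              { 'a', 0x04 }, { 'b', 0x05 }, { 'c', 0x06 },
              { '1', 0x1E }, { '2', 0x1F }, { '\n', 0x28 }, /* Enter */
      };

      int
      main(void)
      {
              /* The only flow control is a dump loop; the substance is the table. */
              for (size_t i = 0; i < sizeof keymap / sizeof keymap[0]; i++)
                      printf("0x%02X -> HID usage 0x%02X\n",
                             (unsigned char)keymap[i].ch, keymap[i].usage);
              return 0;
      }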

      • by Rei ( 128717 )

        FYI, ChatGPT isn't nearly as good at code as Claude.

      • I've used Copilot a fair amount now, and it's a game changer. I write comments and it writes code. When I catch it doing dumb things, I fix it. Remember back when pair programming was supposed to be a big thing? It's kind of like that, only it types way faster than I ever could.

        The biggest thing is my comments are much more meaningful. Instead of a bunch of small comments throughout classes and methods describing minutiae, I spend more time on larger blocks of comments describing intent, specifications,

        • by lsllll ( 830002 )

          // Decrement i
          i--;

          • I don't comment --, but I *always* comment integer division and modulo operations because a shocking number of people don't know they exist or what they do.

            I am indeed surprised, on a regular basis, that professional programmers with many years of experience really don't know how this stuff works. Worse, there are some static code analysis tools that ding a lot of modular arithmetic for having overflows. But seriously, how can one be a professional and not know how to use their tools? I don't want an electrician who's never read the Electrician's Handbook or the local building codes. So how can you spend 30 years writing C and still not know that floating...
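            (A minimal C99 sketch of the point about integer division and modulo: division truncates toward zero, and a % b takes the sign of the dividend a.)

            #include <stdio.h>
            int
            main(void)
            {
                    printf("%d %d\n", 7 / 2, 7 % 2);   /*  3  1 */
                    printf("%d %d\n", -7 / 2, -7 % 2); /* -3 -1 */
                    printf("%d %d\n", 7 / -2, 7 % -2); /* -3  1 */
                    return 0;
            }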

        • only it types way faster than I ever could

          Is that important? Code written fast is, to me, likely to be bad. I am extremely dubious when I do a code review and then five minutes later the code has been updated: not enough time has been spent actually thinking about what I wrote, and not enough time was spent after that to properly fix the code. Nine times out of ten when this happens, the "fix" was wrong. Some devs are obsessed with speed over quality. Slow down and take your time.

          Definitely for myself, when I have a serious bug it's almost always when I felt pr

    • Like you, I'm very curious how much of this actually ends up in production. And if it does end up in production, how much actual programmer time gets eaten trying to suss out why the shitty generated code doesn't actually do what it's supposed to do. I can totally see top-down management telling them to generate code, then fix it rather than write from scratch. It's the type of "we must train your replacements" bullshit that sends upper management into apoplectic fapstorms of cost-savings daydreams. "WE'LL BE

      • Like you, I'm very curious how much of this actually ends up in production..

        *snort*

        And if it does end up in production..

        *snort-choke*

        OK, before you kill me with my own coffee here, remember who we’re talking about.

        Dare you to ask Google what “production” means without snort-laughing.

        • Like you, I'm very curious how much of this actually ends up in production..

          *snort*

          And if it does end up in production..

          *snort-choke*

          OK, before you kill me with my own coffee here, remember who we’re talking about.

          Dare you to ask Google what “production” means without snort-laughing.

          Production is the step before cancellation. Except in extreme cases, where they are so efficient that they cancel before production!

          • Production is the step just after cancellation, so that they never get there. Everything before cancellation is development and beta-test-by-customer.

    • At best they are referring to the LLM meta-language as code.
    • It means Google is suffering as much as the rest of us with all of this JavaScript boilerplate they helped invent.

    • "Eat your dogfood!"
      "But I hate it, it's nasty and gross and smells like squirrel."
      "Eat it, because no custoemrs will buy it if they think our own developers are squeamish!"

    • "After the code is generated, it is then checked and reviewed by employees, he added."

      "This helps our engineers do more and move faster," said Pichai.

      The snag here is that coding is already fast, most likely too fast. The big chunk of time in a quality product should be in the checking and reviewing. Given the shoddy state of AI and the number of mistakes being made, an AI coding helper is likely to increase overall development time. Or at least it should; I don't doubt the existence of managers or team leads who trust AI to be correct more often than humans and so trim back on reviews, checking, testing, etc.

  • by GameboyRMH ( 1153867 ) <gameboyrmh.gmail@com> on Tuesday October 29, 2024 @06:50PM (#64904845) Journal

    The best-case scenario here is that Google counted every single character that came out when a coder hit the autocomplete key to enter the next couple of words of their code (probably variable and function names mostly, as code-oriented text editors have done for decades), and they're selling it to investors to make it look like they're a quarter of the way to kicking all their coders to the curb and becoming a fully automated, post-human-labor company.

    Wouldn't be the first time something like that happened. The health care megacorp I used to work for once put out a press release saying they were using AI for processes which I can assure you were 100% AI-free at the time (like the rest of the entire software suite they were part of), and I would bet still are. I pointed it out in our company chat, joked about whether we should be fitting our servers with GPUs or NPUs, and got lots of laughs.

    • Any company with a data science team (if they also needed to impress people) suddenly turned it into an "AI" team when talking about it publicly.

      Why not? Even A* is AI.
      • Probably because data scientists have been using what we refer to as AI for over a decade now, and when the media jumped on a buzzword, turning it into investor-relevant info, they just adopted the updated terminology.

        AI isn't new. And just because a team was renamed AI doesn't mean they weren't actually using it.

      • There's a team at most companies that can get turned into the BuzzWord team at a moment's notice, who then leap on that bandwagon with zeal and fervour. That team never seems to do much in the way of providing practical services or products but marketing loves them anyway.

    • A lot of companies still worship software metrics. Lines of code per day, per developer, per project, etc. It's really stupid. At many jobs I've had, the worst code was by the guy who wrote the most code (often committing quickly, then making 27 successive commits until it actually worked), or the guy who wasn't helping but wanted to seem important by adding new frameworks all the time. The best code was often by the person who wrote more slowly and with fewer lines, testing code before committing, etc.

      And then s

  • by Anonymous Coward

    You who made fun of the writers’ strike: your day is coming. AI doesn’t have to be perfect, just good enough. Don’t think the suits aren’t paying attention. Your replacement is already planned.

    • It will eventually replace the suits too.
      • They will undoubtedly be capable of replacing the suits first. The question is whether the suits would allow themselves to be replaced. They haven't let any small shellscripts replace them so far :-P

        • Most suits are in precarious positions (any suit that is not a C-level exec, that is). Being in a precarious position with no discernible talent or skills, they often end up being the experts in causing busy work for others, so that the suits above them in the food chain will believe that useful stuff is being done. The C-level suits often live in fear of the Board, unless of course the CEO is also Chairman of the Board (which is a terrible combination because of the lack of accountability).

      • The suits should be afraid of tape players that regurgitate the same bullshit over and over again on a continuous loop.
    • The SWE job is about more than code. A lot more.
    • You who made fun of the writers’ strike: your day is coming. AI doesn’t have to be perfect, just good enough. Don’t think the suits aren’t paying attention. Your replacement is already planned.

      Been there, done that. 20 years ago, /. was making bold proclamations that no one would ever write code in the USA again. Offshore outsourcing was a huge fad, and it failed... we're going to go through this again. There will be AI-generated code... which will be "good enough" until it isn't, and people get sick of bloated code that's easy to break into and impossible to maintain.

      Remember...it's pretty easy to "write code". Every developer has written some script or tool to "write code." Maintaining it..

    • They will still need a couple humans to verify the code, make sure there aren't any mistakes, but everyone else becomes a buggy whip maker.
  • by Touvan ( 868256 ) on Tuesday October 29, 2024 @08:34PM (#64905057)

    It's very possible that Google managers really believe these numbers; the engineers almost certainly know better.

    I wonder whether they are just marketing to competitors. If they can get their competitors to double down on bug-seeding AI-generated code (42% more bugs, by some estimates), then they get to continue their enshittification campaign unchallenged by anyone else. Or maybe we have an explanation for their current state as almost completely enshittified...

    • by ceoyoyo ( 59147 )

      I expect it's true. The studies suggest AI code assistants increase commits by 60% or so. They also increase bugs by 80% or so.

      From what I've seen, those bugs can be pretty insidious too. For reasonably simple things, the code looks okay and might even run okay, but it's flawed. Which means when it eventually fails the bug is going to be extra expensive to fix.

      I noted some definite improvement, though. When I asked the public version of ChatGPT to write something a bit harder, it made up some bullshit. W

    • They could be referring to LLM expressions as 'code'.
  • More than 75% of code for production lines is reused from the last production line.

    News at 11:00

  • by Eunomion ( 8640039 ) on Tuesday October 29, 2024 @10:02PM (#64905163)
    It's going to be hilarious.
    • SISO code: Shit-in, Shit-out

      • I am a math teacher. Occasionally I feed a math question into an updated AI model to see what comes out. At first sight, it nails the answer. But when you read it thoroughly, you notice very clearly that it is a statistics machine. The answers are full of errors and nonsense, but they sound OK. It is like interviewing a student who has a great memory but does not understand a word of the course.
        It makes me wonder if AI is suited for programming tasks. You need similar logical reasoning skills. Details need
        • I've been playing with your questions for a while.
          The answer is... it depends.

          In my experience, so far, the larger the model, the better the answers.
          Really small LLMs tend to do as you describe: produce an algorithm that at a glance appears it would give the right answer, but upon actual execution is fatally flawed and just doesn't work.
          As they get better, you start running into funny things like Q: Write an algorithm to calculate 5 digits of pi. A: printf("3.14159\n");
          But once you start getting into
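          (For contrast with the hardcoded printf above, a minimal C sketch of an actual computation, using the slowly converging Leibniz series; a million terms are enough for the result to round to 3.14159:)

          #include <stdio.h>
          int
          main(void)
          {
                  /* Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... */
                  double sum = 0.0;
                  for (long k = 0; k < 1000000; k++)
                          sum += (k % 2 == 0 ? 1.0 : -1.0) / (2 * k + 1);
                  printf("%.5f\n", 4.0 * sum); /* prints 3.14159 */
                  return 0;
          }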
          • Thanks, that was an interesting read. Do you count gemini, copilot and gpt as one of the big ones? The free versions, I mean.
            • Do you count gemini, copilot and gpt as one of the big ones? The free versions, I mean.

              Ya.
              Free/Paid quality varies between the 3.
              Paid Gemini can look at a picture and produce code that will replicate it via primitives in any language+library you want (HTML, HTML+insert_js_lib_here, C + SDL, etc.)
              I tried it with a picture of my text editor with some text in it... it is, as I said, freakishly good.

          • But remember, underneath even the most advanced model out there, they do not understand the code. At. All.

            • Objection: Speculation.

              understand (v):
              1) perceive the intended meaning of (words, a language, or a speaker).
              2) interpret or view (something) in a particular way.

              The fact is, you don't really know wtf a LLM "knows" or "understands". It's billions to trillions of interconnected weights and connections.
  • by viperidaenz ( 2515578 ) on Tuesday October 29, 2024 @10:05PM (#64905171)

    If we're going by lines of code, I'd say about a quarter of the code at my job is generated by IDE plugins. Maybe more.

    • I find that if I write up some good comments about what I'm wanting to do beforehand, the coding assistant will autocomplete most of it as I go. I can just hit the tab key for the routine stuff.

  • by az-saguaro ( 1231754 ) on Wednesday October 30, 2024 @01:26AM (#64905319)

    "In Search, our new AI features are expanding what people can search for and how they search for it."

    From what I've seen, their new AI features are expanding what people didn't search for.

    • by stooo ( 2202012 )

      THIS.
      We don't need this B.S.

      • THIS. We don't need this B.S.

        But...but...but, cost savings! Plus! It'll fix the environment! It'll save us from ourselves! It'll end poverty! It'll stop systemic injustice. It'll flood us with positivity and joy and oh fuck off Sam Altman.

        So far all it's done is crapflood even more of the internet, and present us with the possibility of most of us getting booted out of the workforce because management believes hype over substance. Good job, AI prophets. You're fucking the entire species, possibly the entire biosphere in your power cons

  • Who verifies that it is correct?
    • by Rei ( 128717 )

      The programmer, of course?

      You really should try some of these tools, to get a sense of what they're like. Try e.g. Cursor with Claude (there's a 2-week free unregistered trial). When you generate code, it comes back as a diff in the IDE, for you to go over and decide what you want to merge in or not.

  • This is all NEW code. They are incorporating AI libraries in their new releases that amount to 25% of the new code. They are not using AI to generate code.
    • by caseih ( 160668 )

      "More than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers," CEO Sundar Pichai said on the company's third quarter 2024 earnings call. It's a big milestone that marks just how important AI is to the company.

      You think?
