Google's Internal Politics Leave It Playing Catch-Up On AI Coding (bloomberg.com)

An anonymous reader quotes a report from Bloomberg: At Google, leaders are anxious about falling behind in the race to offer AI coding tools, especially as rivals like Anthropic PBC offer more effective and popular tools to businesses, according to people familiar with the matter. The search giant is now working to unite some of its coding initiatives under one banner to speed progress and take advantage of a surge in customer interest. In some corners of Alphabet's Google, particularly AI lab DeepMind, concerns about the company's position are mounting, according to current and former employees and executives, who declined to be named because they weren't authorized to speak publicly.

Businesses are just starting to realize that AI coding tools can enable anyone to build products by prompting a chatbot. But Google doesn't have a clear solution for them. Its Gemini model's capabilities are sprinkled across half a dozen different coding products with different branding, indicating how the company's lack of focus and competing internal efforts have hampered success, the people said. Even internally, some Google engineers prefer to use Anthropic's Claude Code, they said. More concerning, the people said, are the engineers who are struggling to adopt AI coding at all. [...] Google's emphasis on its own technology has also complicated the push to catch up. Most employees are banned from using competing tools such as Claude Code or Codex due to security concerns, but Googlers can request exceptions if they can demonstrate they have a business case, one former employee said. Some teams at DeepMind, including those working on the Gemini model, internal applications, and open source models, use Claude Code, according to three former employees. "You want the best people to use the best tool, even inside Google," one of the former employees said. [...]

In recent years, DeepMind has tried to tighten control over how its AI breakthroughs are woven into Google products. Last year, Google appointed Koray Kavukcuoglu to a new position as chief AI architect, a role in which he is charged with folding generative AI into Google products. Yet confusion about who is leading the charge on AI coding persists. Along with DeepMind, Google Cloud, Google Core, Google Labs and Android are all pushing AI coding in different ways, one of the people said. [...] Within the Googleplex, there is a philosophical clash between AI researchers who want to move as quickly as possible and more traditional senior engineers who have exacting standards for code quality, former employees say. AI usage is factored into performance reviews, according to a former employee. But engineers who try to use internal AI coding tools often hit capacity constraints due to competition for computing power, the former employee said.


Comments:
  • by Pseudonymous Powers ( 4097097 ) on Tuesday April 21, 2026 @02:12PM (#66105340)
    One problem here is that the better these tools work, the faster your developers get dumber.
    • by Junta ( 36770 ) on Tuesday April 21, 2026 @02:27PM (#66105378)

      Anecdote to back this up: we have an annual round of employee-directed projects, where people propose something no one asked for in the hope of producing something unexpected that's worthwhile. Business-wise it's generally a waste of time, but at least people get to work on something they actually believe in.

      Anyway, usually they at least manage to create a somewhat working demo of their concept, but this year most of them failed to, because most of the pitches came from people who didn't know how to do the work; GenAI was able to generate pitch material that convinced executives to approve them, and it largely drowned out the people with actionable proposals. So most of the final presentations were people just repeating their pitch and hoping no one noticed they had nothing new since the pitch a few months back.

      • Counterpoint: these people were already dumb, AI just made them coherent. Which in a sense shows you how *good* AI is.

        • by Junta ( 36770 )

          Yes, but before, they wouldn't have been greenlit, and projects with a plausible chance of progressing would have been.

          Now they've precluded quite a few projects that actually could have progressed.

          I tried to extract specifics to expose the ill-conceived ones, but the other executives on the panel were still fans of the slop pitches and preferred them over the more grounded ones.

  • And why would they spend money on AI that doesn't increase ad sales anyways?

    • That's actually quite tricky for Google. LLM-based searches unquestionably cannibalize traffic to the web properties that generate the lion's share of their revenue.
      However, if they surrender their position directing traffic to websites to competitors like OpenAI or Anthropic, then they are far less able to make ad revenue, period.

      I think their hope is probably to outlast some of the hype cycle and then come in with decent products that leverage their current dominance in ad sales.

    • Because the market will continue to punish them for not following the pack long after the pack has run off a cliff. Even Google is not big enough to take the reputational hit of telling the market that they're wrong.

  • by gweihir ( 88907 ) on Tuesday April 21, 2026 @02:35PM (#66105406)

    What anyone can build are things on the level of a child's crayon drawing. Suitable for mock-ups and maybe UI testing, but not production-ready at all. Unless, that is, you want massive inefficiencies, downtime, no maintainability, and to get hacked as soon as some halfway competent attacker finds the time.

    • Which explains bsky, what a bugfest.

    • I'm not sure I fully agree. If you know what you're doing, LLM-generated code can be quite helpful. I just built a Python scraper using Antigravity in a few hours; doing it by hand would've taken me many days of work and a lot of effort learning async function syntax.

      It's not a super complex codebase, so there's nothing architecturally tricky about it. Even then, if I hadn't had a decent understanding of how Playwright works, I would've had a much harder time debugging things and fixing some of the model's dumb decisions.
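      For what it's worth, the async syntax in question turns out to be mostly a small amount of boilerplate once you've seen it. A minimal sketch using only the standard library's asyncio (the real scraper used Playwright; the fetch here is a hypothetical stand-in for those calls):

```python
import asyncio

async def fetch(page_id: int) -> str:
    # Stand-in for a real page fetch; a Playwright call would be awaited here.
    await asyncio.sleep(0.01)
    return f"page-{page_id}"

async def main() -> list[str]:
    # Schedule all fetches concurrently and wait for every result, in order.
    return await asyncio.gather(*(fetch(i) for i in range(5)))

if __name__ == "__main__":
    print(asyncio.run(main()))
    # → ['page-0', 'page-1', 'page-2', 'page-3', 'page-4']
```

      The pattern is the whole trick: `async def` marks a coroutine, `await` yields control while waiting, and `asyncio.gather` runs many coroutines concurrently on one thread.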

      • I have found a few good fits for agentic coding. The first is to have it modify code in a precise, preplanned way, faster than you can do it yourself. If you can explain it as you would to a junior dev, with code pointers etc, it can do the exact same job in 5 minutes that would take me an hour.

        The second is to have it figure out how to do something that would take a lot of documentation study to work out. It's less reliable but it often saves time. "Here's a Go library doing this thing, make this C++ libr
        • by gweihir ( 88907 )

          I am not claiming these things are useless if somebody careful and competent uses them. But that is not the claim I was responding to.

      • But now you still don't know async function syntax. Short term gain for medium term cost. Was it worth it?

        • by gweihir ( 88907 )

          And that is an excellent point.

          This is also what is behind "cognitive surrender" when LLMs are used in an educational context. Far too many people simply stop thinking and trust whatever the LLM tells them. With the obvious consequences.

  • Been waiting for Google to start to fail. Google has been going sideways for a while now.

    • Yeah, this isn't it, unless they're claiming that Google is just as dumb as the author is, which is entirely possible. Comments like "Businesses are just starting to realize that AI coding tools can enable anyone to build products by prompting a chatbot" (no, businesses thought that 2-3 years ago; they're just starting to realize that's bullshit, and that nothing substantial that will need maintenance can be built that way) make me think the author is just a genAI shill.

  • Missed opportunity (Score:4, Informative)

    by hcs_$reboot ( 1536101 ) on Tuesday April 21, 2026 @02:51PM (#66105444)
    It’s a shame that Google didn’t make better use of their early ideas on LLMs, especially the 2017 paper "Attention Is All You Need", which was mainly intended for machine translation.
  • by MpVpRb ( 1423381 ) on Tuesday April 21, 2026 @05:44PM (#66105720)

    enable anyone to build crappy, bloated, inefficient, bug-ridden, insecure products. FTFY.
    Experts can use the tools to produce quality code, but the myth that the clueless can effortlessly prompt their way to good code is silly and dangerous.

  • This isn't a politics thing. It's a buckshot strategy thing. Build 100 different AI things and hope one of them hits the mark. Build the same tool with two different teams, then cross-pollinate ideas. We have reached the thinning phase of that strategy.
  • Just as not "anyone" can use power tools to build a house, not just "anyone" can use AI to build software.

  • by ZipNada ( 10152669 ) on Wednesday April 22, 2026 @12:13AM (#66106120)

    "Even internally, some Google engineers prefer to use Anthropic's Claude Code". That's because Claude is generally very good indeed, and better than Gemini.

    Like most software developers, I get access to a lot of models: a dropdown menu in the IDE with a long list of options. I've tried Gemini many times and it's 'okay', but nowhere near as slick and accurate as the frontier models. Gemini is cheaper, and there's a reason for that.

    When you're trying to get something accomplished that's not so easy to do you want the best assistance, but it does cost money. After a while you learn to get the heavy lifting done with the more expensive AI, which might cost you as much as a quarter per prompt. Then you ramp back to the cheaper models that can adequately fill in the details.
