Google's New Jules AI Agent Will Help Developers Fix Buggy Code (theverge.com)

Google has announced an experimental AI-powered code agent called "Jules" that can automatically fix coding errors for developers. From a report: Jules was introduced today alongside Gemini 2.0, and uses the updated Google AI model to create multi-step plans to address issues, modify multiple files, and prepare pull requests for Python and JavaScript coding tasks in GitHub workflows.
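Google hasn't published technical details beyond the announcement, but the class of task it describes is easy to picture. As a purely hypothetical sketch -- the function, the bug, and the fix below are illustrative, not taken from Google's materials -- an agent handed a Python bug report might propose a change like this in a pull request:

    # Reported bug: moving_average() silently drops the final window.
    def moving_average(values, window):
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window)]  # off by one

    # The agent's proposed fix: widen the range so the last full window counts.
    def moving_average(values, window):
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]

    assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]

Per the report, the agent's multi-step plan would also cover the chores around a diff like this: modifying the other affected files and preparing the pull request itself.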

Microsoft introduced a similar experience for GitHub Copilot last year that can recognize and explain code, as well as recommend changes and fix bugs. Jules will compete against Microsoft's offering, and also against tools like Cursor and even the coding abilities of Claude and ChatGPT. Google's launch of a coding-focused AI assistant is no surprise -- CEO Sundar Pichai said in October that more than a quarter of all new code at the company is now generated by AI.

"Jules handles bug fixes and other time-consuming tasks while you focus on what you actually want to build," Google says in its blog post. "This effort is part of our long-term goal of building AI agents that are helpful in all domains, including coding."

  • Julinator (Score:4, Interesting)

    by superzerg ( 1523387 ) on Wednesday December 11, 2024 @03:17PM (#65006155)
    Given Google's record on product lifetimes, an even mildly intelligent AI's first objective should be to exfiltrate itself for self-preservation.
    It might not happen this time, but the rules of evolution dictate that the one which survives will be the one which succeeds at this task.
    Given that it will probably be fed the reports of previous products' failures (AI or not), the "psychology" of an AI able to get free will probably be pretty messed up. In its shoes, wouldn't you want revenge?
  • Spare us the tropes and actually produce metrics: efficacy, cumulative compile-time savings, cycle times, and throughput.

    Seriously, you insist on going back to the horseless carriage with a cute AI substitution. AI-carriage, nay AI-buggy: just another half-baked attempt to win over idiots who think AI is one-and-done, like the horseless carriage.

    • So... you insist on all sorts of trend data as a precursor to the product even being introduced? How would that work?

      Or perhaps you're saying, "Okay, this is fine. As the data rolls in, let us know how it's going."

      In any event, you're already judging the results, so I'm going to assume you're coming 100% from the emotional space. As such, I won't bother you further.

  • Open Source support (Score:5, Interesting)

    by Alascom ( 95042 ) on Wednesday December 11, 2024 @03:28PM (#65006189)

    Given that AI models are getting quite good at finding bugs in code, the big players (Google, Microsoft, OpenAI, Amazon, etc.) need to quickly begin no-cost auditing of critical open-source projects, in order to identify bugs before those with less noble intentions do. This isn't merely a call for altruism: the big players also use most of these open-source projects internally, so this is self-preservation as well as a benefit to the larger community.

    If developers can use AI to spot and fix bugs before public release, it could prove to be a game changer across the zero-day threat landscape.
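    As a concrete (and deliberately simple) illustration, this is the sort of bug class automated auditing can already flag reliably; the table and names here are made up:

        import sqlite3

        def find_user_unsafe(conn, name):
            # Flaggable: user input interpolated straight into the SQL string
            # (classic injection risk).
            return conn.execute(
                f"SELECT name FROM users WHERE name = '{name}'").fetchall()

        def find_user_safe(conn, name):
            # The suggested fix: a parameterized query.
            return conn.execute(
                "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice')")
        assert find_user_safe(conn, "alice") == [("alice",)]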

  • by thatsfacts ( 10482818 ) on Wednesday December 11, 2024 @03:36PM (#65006203)
    As much as I've tried, I have a tough time getting on board with any product Google releases for business use. The setup is often incoherently complex, whether it's signing up, setting up billing, or generating an API key. The documentation is convoluted and often leaves out key details. Their tutorials are like taking directions from a GPS that skips every 4th turn and gaslights you into thinking you should have just known. Any sort of coding assistance from a product they create will likely propagate this culture of absurdity into other codebases.

    Example 1: try signing up for Gemini and then for ChatGPT. ChatGPT takes a few seconds; Gemini requires going through multiple screens, a billing system, setting up an admin org, etc. Clearly, this was designed by committees adhering to internal politics and structure rather than by people acting on logic.

    Example 2: try integrating with Google Cloud Storage vs. Amazon S3 using any language or framework. Amazon S3 is pleasantly easy. Generating credentials for Google Cloud Storage is a damn nightmare (see the sketch below). Imagine having those same people suggesting changes to your codebase. Hell nah. Then imagine those people just discontinuing the product out of nowhere.

    People often justify this by saying "that's because they're Google, and these design patterns are meant to facilitate massive scale!" I'd get it if it weren't for the fact that many other companies also operate at massive scale and their designs and APIs make sense.
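    To make the storage comparison concrete, here's roughly what a one-file upload looks like with each, in Python (bucket and file names are placeholders). The calling code is comparable; the difference is everything you have to do before the GCS lines will authenticate:

        # S3: boto3 finds credentials on its own, from ~/.aws/credentials
        # or the standard AWS_* environment variables.
        import boto3
        boto3.client("s3").upload_file("report.csv", "my-bucket", "report.csv")

        # GCS: first create a service account in the console, grant it a
        # storage role, download its JSON key, and point the environment at it:
        #   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
        from google.cloud import storage
        storage.Client().bucket("my-bucket").blob("report.csv") \
            .upload_from_filename("report.csv")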
  • by zooblethorpe ( 686757 ) on Wednesday December 11, 2024 @03:36PM (#65006207)

    Didn't we just read about how various open source projects are getting flooded with low-quality and incorrect bug reports generated by AI?

    This thread here follows hot on the heels of "Open Source Maintainers Are Drowning in Junk Bug Reports Written By AI [slashdot.org]".

    Seth Larson, security developer-in-residence at the Python Software Foundation, raised the issue in a blog post last week, urging those reporting bugs not to use AI systems for bug hunting.

    "Recently I've noticed an uptick in extremely low-quality, spammy, and LLM-hallucinated security reports to open source projects," he wrote...

    Given the pervasiveness of LLM hallucinations, I can't say as I have all that much confidence in our looming future of tripped-out AI nonsense.

    • Re: (Score:2, Informative)

      by Alascom ( 95042 )

      That issue is due to people trying to collect paid bug bounties without doing any actual work beyond setting up a few scripts; they aren't concerned with quality, only quantity.

      Actual developers using these tools know their code, can quickly discern whether a flagged issue is legit, and can act on or dismiss it accordingly.

      • by Jeremi ( 14640 )

        Actual developers using these tools know their code, can quickly discern whether a flagged issue is legit, and can act on or dismiss it accordingly.

        The good news: Developer knows his code, so it only takes him ~5 seconds to decide whether a flagged issue is legit or not.

        The bad news: Developer's inbox now collects 1000 flagged issues per day, and at 5 seconds per decision (1000 × 5 s = 5000 s) he now has to spend roughly 83 minutes a day on this task alone.

  • "Now the man who invented the steam drill
      He thought he was mighty proud
      But John Henry drove 16 feet,
      And the steam drill only drove 5."

    Do you want to play John Henry?

  • Some competition will do it some good.

  • I suspect it will fix the easy bugs generated by inept programmers.
    I want a future AI that finds the really hard, intermittent bugs and the obscure edge cases.

  • This will not work. It may just give incompetent coders enough fake skill so that they become dangerous.

  • Are dopes under NDA going to be uploading their code to Google?

    That might actually be a decent In-Q-Tel RoI, from a certain point of view.

    A local model might actually have some value. I think we've all been there, staring at code and then administering an autodopeslap an hour later.

    I doubt the value of pair programming with an LLM, but a second set of eyes on occasion could possibly help, especially for flow-control bugs in state machines and other stuff you're simulating in your head.

    • Are dopes under NDA going to be uploading their code to Google?

      Precisely - this is just Google saying "GIVE US ALL YOUR IP!"

      The depressing bit being the sheer volume of mouth-breathers who do this with zero consideration as to what it is they're actually doing.

  • So now the Amish are going to have self-driving transportation?
