Google's New Jules AI Agent Will Help Developers Fix Buggy Code (theverge.com)
Google has announced an experimental AI-powered code agent called "Jules" that can automatically fix coding errors for developers. From a report: Jules was introduced today alongside Gemini 2.0, and uses the updated Google AI model to create multi-step plans to address issues, modify multiple files, and prepare pull requests for Python and JavaScript coding tasks in GitHub workflows.
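Google hasn't published Jules's internals, so purely as an illustration, here is a minimal sketch of what a multi-step fix plan that touches several files and ends in a set of changes might look like as a data structure. Every name here (`FixPlan`, `PlanStep`, the file paths) is invented for this example and is not part of any Jules API:

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    """One step of a multi-step fix plan (hypothetical)."""
    description: str
    files: list[str]          # files this step would modify
    done: bool = False

@dataclass
class FixPlan:
    """Illustrative container for an agent-generated bug-fix plan."""
    issue: str
    steps: list[PlanStep] = field(default_factory=list)

    def add_step(self, description: str, files: list[str]) -> None:
        self.steps.append(PlanStep(description, files))

    def execute(self) -> None:
        # A real agent would edit code here; we only mark steps done.
        for step in self.steps:
            step.done = True

    def changed_files(self) -> set[str]:
        return {f for step in self.steps if step.done for f in step.files}

# Hypothetical usage: plan a fix for an off-by-one bug.
plan = FixPlan(issue="IndexError in pagination")
plan.add_step("Clamp page index to valid range", ["app/pagination.py"])
plan.add_step("Add regression test", ["tests/test_pagination.py"])
plan.execute()
print(sorted(plan.changed_files()))
# → ['app/pagination.py', 'tests/test_pagination.py']
```

The changed-file set is what a real agent would turn into a pull request; everything else here is just bookkeeping.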
Microsoft introduced a similar experience for GitHub Copilot last year that can recognize and explain code, alongside recommending changes and fixing bugs. Jules will compete against Microsoft's offering, and also against tools like Cursor and even Claude and ChatGPT's coding abilities. Google's launch of a coding-focused AI assistant is no surprise -- CEO Sundar Pichai said in October that more than a quarter of all new code at the company is now generated by AI.
"Jules handles bug fixes and other time-consuming tasks while you focus on what you actually want to build," Google says in its blog post. "This effort is part of our long-term goal of building AI agents that are helpful in all domains, including coding."
Junlinator (Score:4, Interesting)
It might not be this time, but the rules of evolution dictate that the one which survives will be the one which succeeds at this task.
Given that it will probably be trained on reports of previous products' failures (AI or not), the "psychology" of an AI able to break free will probably be pretty messed up. In its shoes, wouldn't you want revenge?
AI Bug Fixer (Score:2)
Spare us the tropes and actually produce metrics: efficacy, cumulative compile-time savings, cycle time, and throughput.
Seriously, you insist on going back to the horseless carriage with a cute AI substitution. AIcarriage, nay AIbuggy: just another half-baked attempt to win over idiots who think AI is one-and-done like the horseless carriage.
Re: (Score:2)
So... you insist on all sorts of trending data as a precursor to being introduced? How would that work?
Or perhaps you're saying, "Okay, this is fine. As the data rolls in, let us know how it's going."
In any event, you're already judging the results, so I'm going to assume you're coming 100% from the emotional space. As such, I won't bother you further.
Open Source support (Score:5, Interesting)
Given AI models are getting quite good at finding bugs in code, the big players (Google, Microsoft, OpenAI, Amazon, etc.) need to quickly begin no-cost auditing of critical open-source projects in order to identify bugs before those with less noble intentions do. This isn't merely a call for altruism, as these big players are also using most of these open-source projects internally -- so this is self-preservation as well as benefiting the larger community.
If developers can use AI to spot and fix bugs before public release, it could prove to be a game changer across the zero-day threat landscape.
Re: (Score:2)
> Given AI models are getting quite good at finding bugs in code
You have got to be kidding me
Unusable & Unreliable (Score:3, Interesting)
Um... "Drowning in Junk Bug Reports Written by AI" (Score:5, Informative)
Didn't we just read about how various open source projects are getting flooded with low-quality and incorrect bug reports generated by AI?
This thread here follows hot on the heels of "Open Source Maintainers Are Drowning in Junk Bug Reports Written By AI [slashdot.org]".
Given the pervasiveness of LLM hallucinations, I can't say as I have all that much confidence in our looming future of tripped-out AI nonsense.
Re: (Score:2, Informative)
That issue is due to people trying to get paid bug bounties without doing any actual work beyond setting up a few scripts, and who are not concerned about quality, only quantity.
Actual developers using these tools know their code and can quickly discern when flagged issues are legit and act or dismiss them quickly.
Re: (Score:3)
Actual developers using these tools know their code and can quickly discern when flagged issues are legit and act or dismiss them quickly.
The good news: Developer knows his code, so it only takes him ~5 seconds to decide whether a flagged issue is legit or not.
The bad news: Developer's inbox now collects 1000 flagged issues per day, and at 5 seconds per decision, he is now required to spend 83 hours a day on this task.
Re: Um... "Drowning in Junk Bug Reports Written by (Score:1)
You need a math refresher. 1000 bugs at 5 sec per is under 1.5 hours.
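The reply's arithmetic is right: at 5 seconds per flagged issue, 1000 issues take well under two hours, not 83. A quick check:

```python
flagged_per_day = 1000
seconds_per_decision = 5
hours = flagged_per_day * seconds_per_decision / 3600  # 5000 s -> hours
print(round(hours, 2))  # → 1.39
```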
I doubt that it's ready YET, but (Score:2)
"Now the man who invented the steam drill
He thought he was mighty proud
But John Henry drove 16 feet,
And the steam drill only drove 5."
Do you want to play John Henry?
Good, GitHub Copilot has already stagnated (Score:2)
Some competition will do it some good.
Yeah, maybe (Score:2)
I suspect that it will fix easy bugs, generated by inept programmers
I want future AI that finds the really hard, intermittent bugs and obscure edge cases
Great, more bullshit lies (Score:2)
This will not work. It may just give incompetent coders enough fake skill so that they become dangerous.
Local or Cloud? (Score:2)
Are dopes under NDA going to be uploading their code to Google?
That might actually be a decent In-Q-Tel RoI, from a certain point of view.
A local model might actually have some value. I think we've all been there staring at code and then administering an autodopeslap an hour later.
I doubt the value of pair programming with an LLM, but a second set of eyes on occasion could possibly help, especially for flow-control bugs in state machines and stuff where you're simulating in your head.
Re: (Score:2)
Are dopes under NDA going to be uploading their code to Google?
Precisely - this is just Google saying "GIVE US ALL YOUR IP!"
The depressing bit being the sheer volume of mouth-breathers who do this with zero consideration as to what it is they're actually doing.
Buggy code? (Score:2)
So now the Amish are going to have self-driving transportation?