

OpenAI Launches Codex, an AI Coding Agent, in ChatGPT
OpenAI has launched Codex, a powerful AI coding agent in ChatGPT that autonomously handles tasks like writing features, fixing bugs, and testing code in a cloud-based environment. TechCrunch reports: Codex is powered by codex-1, a version of the company's o3 AI reasoning model optimized for software engineering tasks. OpenAI says codex-1 produces "cleaner" code than o3, adheres more precisely to instructions, and will iteratively run tests on its code until passing results are achieved.
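The "iteratively run tests until passing results are achieved" behavior described above can be sketched as a simple loop. This is an assumed shape only, not OpenAI's actual harness; `run_tests` and `propose_patch` are hypothetical stand-ins for the agent's real tooling:

```python
def iterate_until_green(run_tests, propose_patch, max_rounds=10):
    """Repeatedly apply patches until the test suite passes.

    Returns the number of patch attempts it took, or raises if the
    budget of max_rounds attempts is exhausted.
    """
    for attempt in range(1, max_rounds + 1):
        propose_patch()          # ask the model for another fix attempt
        if run_tests():          # re-run the suite after each patch
            return attempt
    raise RuntimeError("tests still failing after %d attempts" % max_rounds)

# Toy demo: the "code" is a counter; tests pass once quality reaches 3.
state = {"quality": 0}
attempts = iterate_until_green(
    run_tests=lambda: state["quality"] >= 3,
    propose_patch=lambda: state.update(quality=state["quality"] + 1),
)
print(attempts)  # 3
```

The loop makes the failure mode obvious too: the stopping criterion is "tests pass," so the result is only as good as the test suite.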
The Codex agent runs in a sandboxed virtual computer in the cloud. By connecting with GitHub, Codex's environment can come preloaded with your code repositories. OpenAI says the AI coding agent will take anywhere from one to 30 minutes to write simple features, fix bugs, answer questions about your codebase, and run tests, among other tasks. Codex can handle multiple software engineering tasks simultaneously, says OpenAI, and it doesn't prevent users from using their computer and browser while it's running.
Codex is rolling out starting today to ChatGPT Pro, Enterprise, and Team subscribers. OpenAI says users will have "generous access" to Codex to start, but in the coming weeks, the company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex, an OpenAI spokesperson tells TechCrunch. OpenAI plans to expand Codex access to ChatGPT Plus and Edu users soon.
Only Interested if it's THIS Codex (Score:2)
The singularity ain't what it used to be (Score:1)
That thing about AI being cheaper than humans... (Score:3)
The company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex
By the time we're done, it's going to cost just as much to develop code using AI as it does to just hire someone.
Re: That thing about AI being cheaper than humans. (Score:3)
On my bingo card: Double rates as soon as enough lazy managers and corporate purchasing policies insist on creating that dependency... to save costs, of course!
Also, there won't be any alternative. Us knowledgeable old farts will be retired or dead and the younguns will only know the tools, no first principles, so
Re: (Score:3)
I can't wait until some of those clueless executives say something to the AI devs like:
"Can you please make me a productivity dashboard?"
And the AI, instead of asking more questions, will dutifully create a dashboard. The exec will think he just got an amazing new tool; he won't have a clue whether it does what he thought it should do, but he'll like it just the same.
Re: (Score:2)
That old joke about comedians and jokes that write themselves? It's here. It's the death of irony. ComedyBot literally writes "his" own jokes.
For once I'm not kidding.
So iro
Re: (Score:2)
The company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex
By the time we're done, it's going to cost just as much to develop code using AI as it does to just hire someone.
There are indications it may be more expensive, and that does not even include the cost of running the AI.
Look at this idiotic thing here: it iterates until it passes the test cases. Now, test cases are hardly ever complete. In most cases they cannot be. And then there are things you cannot really test for, like security. Most of that you need to get right by construction or you are screwed. Or try maintainability: no connection to test cases, but a lot of connection to coder experience and insight.
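The incomplete-test-suite problem above is easy to illustrate. A hypothetical example (names invented for illustration): a function that satisfies every test it was asked to pass while still being wrong for an untested input.

```python
# Hypothetical: a "clamp to 0..100" helper produced to satisfy the
# tests below. It passes all of them, yet it is still broken.

def clamp_percent(value):
    """Clamp a percentage into the range 0..100."""
    if value > 100:
        return 100
    return value  # BUG: negative inputs pass through unclamped

# The only tests the (imagined) agent was asked to satisfy -- all green:
assert clamp_percent(50) == 50
assert clamp_percent(150) == 100

# The untested case is silently wrong:
print(clamp_percent(-5))  # -5, not the expected 0
```

"Tests pass" here is a property of the test suite, not of the code.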
Re: (Score:2)
> Look at this idiotic thing here
Actually, that's the good thing here. The AI will create test cases and will analyse the code. It will also document it. It will likely do this better than an average developer would. Sure, compared to a really good developer it might not be as good in most cases, but compared to the average developer I think there's a good chance it will be better. An experienced developer will likely be able to direct it even better.
Re: (Score:2)
Your expectations are not grounded in reality. All this thing can produce is crappy code that is insecure, unmaintainable, and basically has negative worth.
Does it have microcontroller models for sim? (Score:2)
Bug finding (Score:3)
I'm not ready to turn over major code development to AI, but I would be perfectly happy to have it scan the code base for suspicious code. Just the other day I had a small bit of code in a language I'm just learning, and I pasted it into an AI and asked it to explain it. The explanation was good, and it pointed out that the code's actual behavior did not match what the comment said; that's a big red flag. It was also able to fix the code to match the comment. Since I'm just learning the language, it saved me a good bit of time (but probably slowed down my learning of the language).
But it would be great to have it look over the whole code base and report issues that may need attention.
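The comment/behavior mismatch described above is exactly the kind of thing a scan can flag mechanically. A hypothetical reconstruction (the names are invented, not the poster's actual code):

```python
# Hypothetical: the comment promises one thing, the code does another.

def keep_recent(items, n):
    # Keep the n most recent items...
    return items[:n]  # ...but with newest-last lists this keeps the n OLDEST

log = ["old1", "old2", "new1", "new2"]
print(keep_recent(log, 2))  # ['old1', 'old2'] -- contradicts the comment
```

The fix that matches the comment would be `items[-n:]`; which of the two is the bug (the comment or the slice) is precisely the question a human still has to answer.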
Shotgun debugging.. (Score:2)
and will iteratively run tests on its code until passing results are achieved.
I'm not sure if it should be called shotgun debugging or shotgun coding, but this seems like a fine way to wind up with obscure bugs.