
OpenAI Launches Codex, an AI Coding Agent, In ChatGPT

OpenAI has launched Codex, a powerful AI coding agent in ChatGPT that autonomously handles tasks like writing features, fixing bugs, and testing code in a cloud-based environment. TechCrunch reports: Codex is powered by codex-1, a version of the company's o3 AI reasoning model optimized for software engineering tasks. OpenAI says codex-1 produces "cleaner" code than o3, adheres more precisely to instructions, and will iteratively run tests on its code until passing results are achieved.

The Codex agent runs in a sandboxed virtual computer in the cloud. When connected to GitHub, Codex's environment can come preloaded with your code repositories. OpenAI says the AI coding agent will take anywhere from one to 30 minutes to write simple features, fix bugs, answer questions about your codebase, and run tests, among other tasks. Codex can handle multiple software engineering tasks simultaneously, says OpenAI, and it doesn't prevent users from accessing their computer and browser while it's running.

Codex is rolling out starting today to ChatGPT Pro, Enterprise, and Team subscribers. OpenAI says users will have "generous access" to Codex to start, but in the coming weeks the company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex, an OpenAI spokesperson tells TechCrunch. OpenAI plans to expand Codex access to ChatGPT Plus and Edu users soon.
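The "iterate until the tests pass" behavior described in the summary is, in outline, a generate-run-repair loop. Below is a minimal sketch of that pattern in Python; the generate_patch callable and the pytest invocation are illustrative placeholders, not OpenAI's actual implementation or API.

    import subprocess
    from typing import Callable, Optional

    def iterate_until_tests_pass(
        generate_patch: Callable[[Optional[str]], str],  # model call: failure output -> candidate code
        max_rounds: int = 10,
    ) -> Optional[str]:
        """Generate code, run the test suite, and feed failures back into the next attempt."""
        feedback: Optional[str] = None
        for _ in range(max_rounds):
            candidate = generate_patch(feedback)
            with open("candidate.py", "w", encoding="utf-8") as f:
                f.write(candidate)
            # Run the project's tests; on failure, their output becomes the repair feedback.
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return candidate                      # tests pass: stop iterating
            feedback = result.stdout + result.stderr  # tests fail: retry with the errors
        return None                                   # give up after max_rounds

The loop only knows what the test suite tells it, which is the crux of one objection raised in the comments below.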


Comments Filter:
  • Rise of the Machines (Temu Skynet edition) is gonna be death by cringe.
  • The company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex

    By the time we're done, it's going to cost just as much to develop code using AI, as it does to just hire someone.

    • Bingo!
      On my bingo card: Double rates as soon as enough lazy managers and corporate purchasing policies insist on creating that dependency... to save costs, of course!

      Also, there won't be any alternative. We knowledgeable old farts will be retired or dead, and the young'uns will only know the tools, not first principles, so... we'll pay whatever you say.
      • I can't wait until some of those clueless executives say something to the AI devs like:

        "Can you please make me a productivity dashboard?"

        And the AI, instead of asking more questions, will dutifully create a dashboard. The exec will think he just got an amazing new tool; he won't have a clue whether it does what he thought it should do, but he'll like it just the same.

        • this software, or whatever it is, just writes itself!
          That old joke about comedians and jokes that write themselves: it's here. It's the death of irony. ComedyBot, he literally writes "his" own jokes.
          For once I'm not kidding. ...but your script, "THE Productivity Dashboard". I love it. That's original material. I'll feed it to... Google Notebook LM, we'll have a video in a few minutes, a podcast format. You, Sir, have yourself a new product! I'm fxcking jealous, I want one now. It's amazing how that works.
          So iro
    • by gweihir ( 88907 )

      The company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex

      By the time we're done, it's going to cost just as much to develop code using AI, as it does to just hire someone.

      There are indicators it may be more expensive, and that does not include the cost of running the AI.

      Look at this idiotic thing here: it iterates until it passes the test cases. Now, test cases are hardly ever complete. In most cases they cannot be. And then there are things you cannot really test for, like security. Most of that you need to get right by construction or you are screwed. Or take maintainability: no connection to test cases, but a lot of connection to coder experience and insight.
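      A toy illustration of that point, in Python (the function and test are hypothetical, written here only for the sake of the example): the code below satisfies its only test case, so an agent iterating against this suite would stop, yet it builds SQL by string interpolation and is trivially injectable. Nothing in the suite exercises that path, and no amount of iteration against it would.

          import sqlite3

          def find_user(conn: sqlite3.Connection, name: str):
              # Green test, insecure code: interpolating user input into SQL
              # allows injection (e.g. name = "x' OR '1'='1").
              query = f"SELECT id, name FROM users WHERE name = '{name}'"
              return conn.execute(query).fetchall()

          def test_find_user():
              conn = sqlite3.connect(":memory:")
              conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
              conn.execute("INSERT INTO users VALUES (1, 'alice')")
              assert find_user(conn, "alice") == [(1, "alice")]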

  • I recently tried to get help on a microprocessor project from AIs. They all seemed very good at simple MicroPython code. I could cut and paste what they gave me, and if there were errors, I cut and pasted the errors and they fixed them. That is very cool!!! When it came to programming the peripherals, or subtle register manipulations, it either hallucinated or flat out lied. If they could test the code on models of microprocessors and iterate, I think it would be a game changer. It would also b
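    For concreteness, the kind of peripheral code that comment is about looks roughly like the MicroPython sketch below, which toggles a GPIO through raw SIO registers on an original Raspberry Pi Pico (RP2040). The base address and offsets are assumptions taken from the RP2040 datasheet and should be verified there; a single hallucinated offset yields code that runs without errors and simply does the wrong thing.

        from machine import Pin, mem32

        # RP2040 SIO block (verify these against the datasheet rather than an AI's answer).
        SIO_BASE     = 0xD0000000
        GPIO_OUT_SET = SIO_BASE + 0x014   # write 1 bits here to drive pins high
        GPIO_OUT_CLR = SIO_BASE + 0x018   # write 1 bits here to drive pins low

        LED = 25                          # on-board LED pin on the original Pico

        # Let the high-level API configure pin function and direction,
        # then toggle the pin with direct register writes.
        Pin(LED, Pin.OUT)
        mem32[GPIO_OUT_SET] = 1 << LED    # LED on
        mem32[GPIO_OUT_CLR] = 1 << LED    # LED off

    Running this on any other chip, or with a mistyped offset, fails silently, which is exactly why hallucinations are so costly at this level.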
