
Microsoft CEO Says Up To 30% of the Company's Code Was Written by AI (techcrunch.com)

Microsoft CEO Satya Nadella said that 20%-30% of code inside the company's repositories was "written by software" -- meaning AI -- during a fireside chat with Meta CEO Mark Zuckerberg at Meta's LlamaCon conference on Tuesday. From a report: Nadella gave the figure after Zuckerberg asked roughly how much of Microsoft's code is AI-generated today. The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.


Comments Filter:
  • by Valgrus Thunderaxe ( 8769977 ) on Wednesday April 30, 2025 @01:37PM (#65342657)
    Does he really believe this?
    • I do. Why wouldn't you?
      • Re:BS (Score:4, Insightful)

        by tambo ( 310170 ) on Wednesday April 30, 2025 @01:51PM (#65342697)

        Because I've tried using LLMs to generate code and I've seen the results. They are not usable. They *resemble* valid code, but they typically throw exceptions and raise errors, they can't pass unit tests, and they don't correctly handle edge cases. AI-generated code is a mess that *superficially looks right* but isn't fit to purpose.

        There is a meme going around about the fact that you can tackle a normal coding task by spending 3 hours to write code and 1 hour to debug and test it, or you can use CoPilot to spend 15 minutes to write the code and 8 hours to debug and test it. That matches my experience.

        • Re: BS (Score:2, Insightful)

          It's possible you're just doing it wrong.

          • It's possible, you're wrong.
            • Re: BS (Score:5, Funny)

              by DamnOregonian ( 963763 ) on Wednesday April 30, 2025 @02:26PM (#65342839)
              It's possible you don't know what a comma is used for.
              • by jhoegl ( 638955 )
                It's possible MS uses India outsourcing to write its code that later has to be fixed. Not because India wrote it, but because MS processes and procedures are such a shitshow and their outsourcing of the projects leads to security holes and compatibility problems.

                Granted, it's tough to maintain such bloatware that should be just a simple OS and not integrated with terrible options like web browsing in your fucking file system, or self-masturbatory "AI" disguised as spyware by the Microsoft that purports to help y
                • This was a truly strange place in this thread to put your MS rant, lol.

                  FWIW, I don't particularly disagree with you.
              • I, know what a comma, is used for do, you?

        • Re: (Score:3, Informative)

          by ZipNada ( 10152669 )

          >> They are not usable.

          Maybe you just aren't doing it right. I use LLMs to generate code for me every day and usually it is entirely adequate. The AI does require some handholding at times. There is a learning curve for working with it effectively, but once you figure out what it can and can't do, it is a brute.

          Meanwhile, Nadella said that 20%-30% of code inside the company's repositories was "written by software" and that seems entirely possible.

          • Re: (Score:2, Insightful)

            by Anonymous Coward

            If you think that AI is doing a good job writing code, you probably shouldn't be writing code. You're clearly not qualified.

            • A true Anonymous Coward argument.

            • Nobody said it had to be good. Only more cost effective.

            • by Jeremi ( 14640 )

              you probably shouldn't be writing code. You're clearly not qualified.

              Being unqualified never stopped anyone else; why would it stop him?

              That's one of the best and worst things about coding: any fool with a PC can do it. Many of those that keep doing it eventually get better at it.

          • Meanwhile, Nadella said that 20%-30% of code inside the company's repositories was "written by software" and that seems entirely possible.

            The statement "20%-30% of code inside the company’s repositories was 'written by software'" was a response to the question of "how much of Microsoft’s code is AI-generated today." I assume that not all of Microsoft's repositories were created today, so the statement about repositories cannot possibly be true. I'm guessing that the article writer incorrectly paraphrased what Nadella said. Then again, I suppose it's possible that Microsoft has been secretly using AI to generate code for t

          • I'm going to take a view in between "AI can do it all" and "AI can't do it".

            As of now (perhaps it will get better over time), if I ask a chatbot to make some code for me, I'll get it.

            However, I then have to debug it, unit test it, change stuff to match the coding conventions of everything else. This might be minimal, or I might have spent more time fixing AI code than if I wrote things from scratch.

            So, it is sort of like OCR was in the 1990s... it sort of helps, but you have to go through it with a f

        • Re: BS (Score:3, Insightful)

          by jddj ( 1085169 )

          Not denying your experience at all, but I've had a great experience using Claude to do the busy-work of "open a serial port on any of 3 platforms", or "set up the basics of a tkinter GUI" in Python, such that it let me get to the real work of manipulating data, addressing the device on the serial port, and designing the UI much quicker.

          I wouldn't hand an AI the whole task, at least not yet. But the thankless grunt work? Yes please. Away with it.

          The one time I had trouble with Claude was when every suggestion ra
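
          For reference, the kind of boilerplate being described here looks roughly like this (a minimal sketch only, assuming pyserial and the standard-library tkinter; the baud rate, widget layout, and window title are placeholder choices, not anything from the comment):

          # Minimal sketch of the "grunt work" above: find a serial port on
          # Windows/macOS/Linux and stand up a bare-bones tkinter window.
          # Assumes pyserial is installed; everything else is placeholder.
          import tkinter as tk

          import serial
          from serial.tools import list_ports

          def open_first_port(baudrate=115200, timeout=1.0):
              """Open the first serial port the OS reports, whatever it is
              called (COMx on Windows, /dev/ttyUSB* or /dev/tty.* elsewhere)."""
              ports = list_ports.comports()
              if not ports:
                  raise RuntimeError("no serial ports found")
              return serial.Serial(ports[0].device, baudrate=baudrate, timeout=timeout)

          def build_gui(on_read):
              """Bare tkinter skeleton: a status label and a read button."""
              root = tk.Tk()
              root.title("Device monitor")
              status = tk.Label(root, text="idle")
              status.pack(padx=10, pady=10)
              tk.Button(root, text="Read",
                        command=lambda: status.config(text=on_read())).pack(pady=5)
              return root

          if __name__ == "__main__":
              port = open_first_port()
              build_gui(lambda: port.readline().decode(errors="replace") or "(no data)").mainloop()

          Whether an assistant writes that block or a human does, it's the same thankless plumbing; the interesting work starts after it.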

        • This is also my experience. AI can do the simplest of tasks - poorly. Generating a Makefile or a Maven pom.xml file, sure. I was struggling to get simple unit test cases.

          I was given a defect to fix. I found the problem quick enough. So I asked the AI.... It was completely useless.

          • Re: (Score:2, Flamebait)

            I think you're probably a liar, or downloaded some app and a 2B or 7B model and were surprised when it sucked.

            I have dumped several-thousand-line, highly complicated classes into large-context LLMs and asked them to evaluate, document, and improve them, and they did all 3 things correctly.
            I happen to know this experience is not unique to me.
          • HOW you ask the AI tool is a very large part of the result validity.

            • Well, if I hold its hand long enough, it will eventually get the correct answer. Along the way, it will very confidently provide an incorrect answer.

              • You're the kind of genius that sits there and yells at someone who shuts down when someone yells at them, thinking you'll get through to them if only you could yell louder.
        • I use them on a daily basis. They are very usable.
          Your entire post is bullshit.
        • AI-generated code is a mess that *superficially looks right* but isn't fit to purpose.

          Sounds like a perfectly good description of Windows code to me!

        • by zlives ( 2009072 )

          Reposited code doesn't necessarily mean it's actually being used.

        • I have been working with various LLM solutions, as well as more specialized coding solutions being developed for certain specialized cases.
          They are not perfect, not by a long shot, but they yield pretty nice code, if properly prompted to do so.

          Prompting is an art, and it becomes more complex as time passes. There are already attempts to develop AI tools which generate complex prompts for, well, other AI tools.

          • There are already attempts to develop AI tools which generate complex prompts for, well, other AI tools.

            Attempts? This is the magic sauce of all agentic coding platforms. We're well past attempts; we have paid products.

            • Well, yes, but they are not always successful.

              • Things like Cursor are pretty damn successful.
                Do you know why you have to route all traffic through them, even when the end-model is something you could access directly like ChatGPT or Claude? So that they can hide the actual prompt engineering they do from you. That's their proprietary sauce: the thing they're actually selling, because as you noted, prompt engineering is 99.9% of the game with getting really good stuff out of LLMs. An amateur can fumble around, and rely on the LLM to fill in all the gaps
        • Because I've tried using LLMs to generate code and I've seen the results. They are not usable. They *resemble* valid code, but they typically throw exceptions and raise errors, they can't pass unit tests, and they don't correctly handle edge cases. AI-generated code is a mess that *superficially looks right* but isn't fit to purpose.

          There is a meme going around about the fact that you can tackle a normal coding task by spending 3 hours to write code and 1 hour to debug and test it, or you can use CoPilot to spend 15 minutes to write the code and 8 hours to debug and test it. That matches my experience.

          This has been my experience for long code snippets. If you give it a very specific function you need, sometimes it can produce something valid quickly. The problem is trying to break down large tasks into very small functions so that the AI generator can actually handle the request. Sometimes it's possible, sometimes user requirements are so wildly speculative to begin with you really can't break it down to small functions playing against each other.

        • From my experience, it's a question of scope and how detailed your prompt is. If you ask "write me a program in X that does Y" you aren't going to get nearly as good of an answer as "write me a function in X that takes A, B, C as inputs; and gives Y as the return"

          And yes, just like if you were doing a code review of a junior engineer, don't rubberstamp what it gives you back. And if what it gives you back is garbage, oh no you've sadly wasted 30 seconds and can always fall back on writing it yourself.

        • AI doesn't just mean LLM. If it's something as broad as the autogenerated code when you create a project in Visual Studio, then that 20-30% could easily be true.

        • Because I've tried using LLMs to generate code and I've seen the results. They are not usable. They *resemble* valid code, but they typically throw exceptions and raise errors, they can't pass unit tests, and they don't correctly handle edge cases. AI-generated code is a mess that *superficially looks right* but isn't fit to purpose.

          That's what the other 70% of the code is for: the human-written code.

          There is a meme going around about the fact that you can tackle a normal coding task by spending 3 hours to write code and 1 hour to debug and test it, or you can use CoPilot to spend 15 minutes to write the code and 8 hours to debug and test it. That matches my experience.

          It depends entirely on the complexity of the code you asked for. For simple things, like very short code samples that might appear in a textbook or reference book, the state of the art seems to be about equal to these references. Note these references also lack the defensive programming you mention.

          AI is, currently, more of an alternative to reference books than programmers. And reference books have always provided code to real work pro

        • I just used an AI for re-skilling and it's helped immensely with a project of mine.

          With a bit of skill and the right direction it's not overly difficult to get good results.

          YMMV.

      • by DarkOx ( 621550 )

        20-30% of code in the repositories. Is that 20% to 30% of the code they are shipping, or even code that represents a branch anyone is doing work on that will ever land in a deliverable?

        Could be just a bunch of devs branching stuff to try a quick experiment and then forgetting about it entirely 10 minutes later.

        LLMs are good at generating a lot of code, and version control systems are good at storing it; none of this suggests a large portion of it isn't useless garbage.

        • LLMs are good at generating working code too.

          I think there is a strange kind of delusion that exists among the people that frequent this site that LLM usage equates to "vibe coding" or whatever the fuck they call that shit today where people who have no fucking idea what they're doing generate programs with LLMs.

          LLMs are becoming more and more integrated into actual professional developer workflows, and they're quite good at what they do in that capacity.
    • Re:BS (Score:5, Funny)

      by korgitser ( 1809018 ) on Wednesday April 30, 2025 @01:41PM (#65342667)
      It would certainly explain Windows 11...
    • by taustin ( 171655 )

      Seems plausible, given what shit their software is.

    • Dodgy wording - "written by software" could just mean boilerplate code generation that uses no LLM at all. It sounds impressive as long as you don't think about it.

      • It sounds impressive as long as you don't think about it.

        That sounds like most of silicon valley for the last 5 years.

      • written by software

        Does he mean editor software, like a text editor? Or is he counting any use of auto-complete while someone is writing code? Or is he counting template code, where the compiler emits code based on a template definition? Maybe he is summing it all up and making it sound like AI?

      • 30% of the code in their repositories.
        How much of that is actually used? And correct?

        In a generation, kids won't believe you when you tell them we used computers to do calculations instead of hallucinating rationalizations for the decrease in the chocolate ration.
        • In a generation, kids will be laughing about how the neo-Luddites deluded themselves into pretending the growing experience of the population was entirely hallucinated.

          You're talking out your ass, so it's like you're engaging in a PR war against the big bad LLM specter. To what end? Self delusion, or the hope of misinforming people that happen by your post?
      • by ceoyoyo ( 59147 )

        It could also mean boilerplate code generation that uses an LLM. They're pretty good at that.

        Dear Copilot, somebody wants another damn web form. Make it just like this existing one except change the label to "Fire idiot?" and the submission address to http://hr.microsoft.com/fireid... [microsoft.com].

        Tnks.

        And there you go, 1000 lines of code.

    • It depends upon how it is measured. If it is lines of code, I can easily imagine that AI spits out huge amounts of low-quality code. There isn't any comment about what percent actually passes into production... just how much is in their repositories. You can always lie with statistics. CEOs are masters at it.

    • by dbialac ( 320955 )
      Well, he said "up to". That includes any quantity between 0% and 30%. For example, up to 90% of this comment was written by AI, and it's completely true.
    • Does he really believe this?

      What makes you think an AI is any less capable of copy-pasting code from the internet than a modern human programmer?

    • Not of all written code, but newly written code.
  • Leap (Score:5, Insightful)

    by MBGMorden ( 803437 ) on Wednesday April 30, 2025 @01:41PM (#65342669)

    You may be putting words in his mouth by assuming "AI" when he said "software".

    Open up Visual Studio and start drawing out a GUI app and a ton of the background code is then generated by software. It's been that way for ~30 years. That it was written by software doesn't necessarily mean it was AI, unless we're jumping on the train of calling basically everything in the computer "AI" these days.

    • "basically everything in the computer "AI" these days" - this seems to be the narrative these companies are pushing doesn't it. Another buzzword for bullshit as for as I'm concerned.

      • by taustin ( 171655 )

        If they convince you that their crap, hallucinating AI today is "just like what we've been using for years, only better," maybe people will spend money on their crap, hallucinating AI that they're pushing today.

        It's an admission of failure, when they push their new crap by saying it's more of the same, instead of the newest, bestest thing since sliced bread.

      • And before that everything was "blockchain"
    • "It's all ball bearings these days." -Fletch

    • Considering everything is compiled and no one writes machine-specific opcodes, I'm surprised he didn't just say 100% of it is.
    • Right. You're talking about a code generator. Those have been all the rage in MS world since at least the mid-aughts, notably for WinForms apps and for persistence layers. The code generated is shite, of course, but that doesn't ever seem to discourage anyone. "We don't have to look at the code, it just works!" they say. It's not clear to me that the people who say that have any intention of looking at the code even when it doesn't work, though.
  • Up to 30% (Score:5, Informative)

    by PsychoSlashDot ( 207849 ) on Wednesday April 30, 2025 @01:46PM (#65342685)
    That includes 0.000000000000000001%

    But realistically there's no way anything like 30% of the lines of code were written by GenAI. The vast majority of Windows Server 2025's code is identical to Server 2022, 2019, and 2016. Win11 24H2 is almost identical to Win10 1507. At least when you consider individual lines.

    Now, it's possible that 30% of the titles/programs they offer have been touched in one way or another by GenAI. I'm sure there's some stuff in Office and Windows and Exchange Online that was spewed by CoPilot.

    That said, I think the pride here is... misplaced. Want me to buy AI? Tell me it's reliable. I get that. That's what he's doing. Want me to buy everything else? Tell me it's reliable... which GenAI code is not. So... this announcement disincentivizes me to buy MS products except CoPilot, which I don't want in the first place. It's kind of "we've smeared shit sandwich on all of the non-shit-sandwich meals at our restaurant. It's that good!"
    • That includes 0.000000000000000001% But realistically there's no way anything like 30% of the lines of code were written by GenAI. The vast majority of Windows Server 2025's code is identical to Server 2022, 2019, and 2016. Win11 24H2 is almost identical to Win10 1507. At least when you consider individual lines. Now, it's possible that 30% of the titles/programs they offer have been touched in one way or another by GenAI. I'm sure there's some stuff in Office and Windows and Exchange Online that was spewed by CoPilot. That said, I think the pride here is... misplaced. Want me to buy AI? Tell me it's reliable. I get that. That's what he's doing. Want me to buy everything else? Tell me it's reliable... which GenAI code is not. So... this announcement disincentivizes me to buy MS products except CoPilot, which I don't want in the first place. It's kind of "we've smeared shit sandwich on all of the non-shit-sandwich meals at our restaurant. It's that good!"

      It could be as simple as AI generating a *LOT* of code very, very quickly. I could believe that 30% of the code created at a tech company is created by AI. That doesn't mean 30% of code making it to release candidates is generated by AI.

    • by SirSlud ( 67381 )

      It's kind of cute you think Windows would constitute a significant portion of their code footprint.

      • It's kind of cute you think Windows would constitute a significant portion of their code footprint.

        Microsoft says there are 50 million lines of code in Windows 10, and it's guaranteed there are more in Windows 11. They also say there are 45-50 million lines of code in Office. These are believed to be their two largest codebases. You don't think 50 million+ lines, and being their single biggest codebase, is significant?

    • That's not what he's saying. The wording is terrible. He's saying 30% of new code is written by software. Zuck was asking how much of their code is CURRENTLY being written by software.

  • Yeah... (Score:5, Insightful)

    by devslash0 ( 4203435 ) on Wednesday April 30, 2025 @01:54PM (#65342711)

    That explains a lot about the current state of Windows.

  • AI enshittification working hard to make your life worse.

  • Windows 10 is supposed to be around 50 million lines of code. For ease of argument let's ignore the tons of other software Microsoft creates (Office, Azure, video games, .NET, etc...). You're telling me at least 10 million lines of Windows are AI-generated? WTF?

    Now a lot of web code is generated from templates or minified. If you want to argue that is all AI-generated code, fine. Do you want to argue that all code touched by auto-formatters is also AI-generated? Fine. Are you claiming all code complie

    • That's not what he's saying. He's saying 30% of new code is written by software.

      • In a former position, well over 90% of the code I committed in a few projects was generated from automated processes. As an example, we had OpenAPI-ish documents describing the data structures on our network appliances. I consumed that and generated things like Ansible modules (Python) and statically typed API clients (C#).

        "Written by software" takes a lot of forms and I'd bet my last dollar on the notion that the portion described above is almost entirely generated by automated processes.

  • Recall. Oops, nvm.

  • probably started with Windows ME

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday April 30, 2025 @02:34PM (#65342861) Homepage Journal

    Microsoft CEO Satya Nadella said [...] during a fireside chat with Meta CEO Mark Zuckerberg

    I don't see a fire in the picture in the article, although I'd like to.

    • Microsoft CEO Satya Nadella said [...] during a fireside chat with Meta CEO Mark Zuckerberg

      I don't see a fire in the picture in the article, although I'd like to.

      I'd like to see both of them on fire. Maybe we can toss Matthew Prince [wikipedia.org] on as some extra kindling.

    • by PPH ( 736903 )

      I don't see a fire in the picture in the article, although I'd like to.

      Here ya go! [giphy.com]

  • by RUs1729 ( 10049396 ) on Wednesday April 30, 2025 @02:39PM (#65342881)
    That will answer many questions from suffering Windows users.
  • by mmdurrant ( 638055 ) on Wednesday April 30, 2025 @03:06PM (#65342965)
    Written by software doesn't mean AI. It can - and likely does - mean generated by automated processes. That's all.
  • I guess that explains why their products have gotten so bad and bloated over time.
  • Now we know why we have the mess called Windows 8/10/11!

    It's all AI generated CR@Pola!
  • AI wrote huge volumes of throw-away code? Just keep the engine churning; soon it will write a volume of code equal to 20-30% of the total. That doesn't mean the code is actually used in something real.
