
Microsoft CEO Says Up To 30% of the Company's Code Was Written by AI (techcrunch.com) 129

Microsoft CEO Satya Nadella said that 20%-30% of code inside the company's repositories was "written by software" -- meaning AI -- during a fireside chat with Meta CEO Mark Zuckerberg at Meta's LlamaCon conference on Tuesday. From a report: Nadella gave the figure after Zuckerberg asked roughly how much of Microsoft's code is AI-generated today. The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.

  • BS (Score:5, Interesting)

    by Valgrus Thunderaxe ( 8769977 ) on Wednesday April 30, 2025 @01:37PM (#65342657)
    Does he really believe this?
    • I do. Why wouldn't you?
      • Re:BS (Score:5, Insightful)

        by tambo ( 310170 ) on Wednesday April 30, 2025 @01:51PM (#65342697)

        Because I've tried using LLMs to generate code and I've seen the results. They are not usable. They *resemble* valid code, but they typically throw exceptions and raise errors, they can't pass unit tests, and they don't correctly handle edge cases. AI-generated code is a mess that *superficially looks right* but isn't fit to purpose.

        There is a meme going around about the fact that you can tackle a normal coding task by spending 3 hours to write code and 1 hour to debug and test it, or you can use CoPilot to spend 15 minutes to write the code and 8 hours to debug and test it. That matches my experience.

        • Re: BS (Score:2, Insightful)

          It's possible you're just doing it wrong.

          • It's possible, you're wrong.
            • Re: BS (Score:5, Funny)

              by DamnOregonian ( 963763 ) on Wednesday April 30, 2025 @02:26PM (#65342839)
              It's possible you don't know what a comma is used for.
              • by jhoegl ( 638955 )
It's possible MS uses India outsourcing to write its code that later has to be fixed. Not because India wrote it, but because MS processes and procedures are such a shitshow, and their outsourcing of the projects leads to security holes and compatibility problems.

Granted, it's tough to maintain such bloatware that should be just a simple OS and not integrated with terrible options like web browsing in your fucking file system, or self-masturbatory "AI" disguised as spyware by the Microsoft that purports to help y
                • This was a truly strange place in this thread to put your MS rant, lol.

                  FWIW, I don't particularly disagree with you.
                • If you hate so many of the newer mis-features in Windows, why are you still using it? Even if you're stuck with it at work, what's forcing you to use it at home if you hate it so much?
              • I, know what a comma, is used for do, you?

                • It's interesting the way our brains encode their use. Such a silly little mark shouldn't cause so much difficulty when reading a sentence, but they're clearly integral in my brain's language decoding, because that almost physically hurt to read.
        • Re: (Score:3, Informative)

          by ZipNada ( 10152669 )

          >> They are not usable.

Maybe you just aren't doing it right. I use LLMs to generate code for me every day and usually it is entirely adequate. The AI does require some handholding at times. There is a learning curve to working with it effectively, but once you figure out what it can and can't do, it is a brute.

          Meanwhile, Nadella said that 20%-30% of code inside the company's repositories was "written by software" and that seems entirely possible.

          • Re: (Score:2, Insightful)

            by Anonymous Coward

            If you think that AI is doing a good job writing code, you probably shouldn't be writing code. You're clearly not qualified.

            • Nobody said it had to be good. Only more cost effective.

            • by Jeremi ( 14640 )

              you probably shouldn't be writing code. You're clearly not qualified.

              Being unqualified never stopped anyone else, why would it stop him?

That's one of the best and worst things about coding: any fool with a PC can do it. Many of those that keep doing it eventually get better at it.

          • Meanwhile, Nadella said that 20%-30% of code inside the company's repositories was "written by software" and that seems entirely possible.

            The statement "20%-30% of code inside the company’s repositories was 'written by software'" was a response to the question of "how much of Microsoft’s code is AI-generated today." I assume that not all of Microsoft's repositories were created today, so the statement about repositories cannot possibly be true. I'm guessing that the article writer incorrectly paraphrased what Nadella said. Then again, I suppose it's possible that maybe Microsoft has been secretly using AI to generate code for t

          • I'm going to take a view in between the "AI can do it all", versus "AI can't do it".

As of now (perhaps it will get better over time), if I ask a chatbot to make some code for me, I'll get it.

            However, I then have to debug it, unit test it, change stuff to match the coding conventions of everything else. This might be minimal, or I might have spent more time fixing AI code than if I wrote things from scratch.

            So, it is sort of like OCR was in the 1990s... it sort of helps, but you have to go through it with a f

        • Re: BS (Score:3, Insightful)

          by jddj ( 1085169 )

          Not denying your experience at all, but I've had a great experience using Claude to do the busy-work of "open a serial port on any of 3 platforms", or "set up the basics of a tkinter GUI" in Python, such that it let me get to the real work of manipulating data, addressing the device on the serial port, designing the UI much quicker.

          I wouldn't hand an AI the whole task, at least not yet. But the thankless grunt work? Yes please. Away with it.
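To give a sense of what that grunt work looks like, here's a hand-written sketch of the cross-platform part of "open a serial port on any of 3 platforms": most of it is just picking a plausible device name per OS before handing it to pyserial. The device names below are illustrative defaults, not guaranteed to exist on any given machine.

```python
import sys

def default_serial_port(platform=None):
    """Pick a plausible default serial device name for the host OS.

    The device names here are illustrative guesses only; a real tool
    would enumerate available ports rather than hard-code them.
    """
    platform = platform or sys.platform
    if platform.startswith("win"):
        return "COM3"                # Windows enumerates ports as COM1, COM2, ...
    if platform == "darwin":
        return "/dev/tty.usbserial"  # typical macOS USB-serial device name
    return "/dev/ttyUSB0"            # common first USB-serial device on Linux

# With pyserial installed, actually opening the port is then one line:
#   import serial
#   ser = serial.Serial(default_serial_port(), baudrate=9600, timeout=1)
```

Tedious to type, trivial to review - exactly the kind of thing I'm happy to delegate.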

          The one time I had trouble with Claude was when every suggestion ra

          • I see this new stuff as essentially similar to spreadsheet software when that first became widespread. I've heard of a trade show in the 70s or 80s where accountants wept when they realized the years of time they would save. But you still had to be an accountant -if you enter the wrong formula in a spreadsheet and drag it down the workbook, it's all going to be garbage. There are many types of things a spreadsheet program can do well, and others which may technically be possible but are increasingly hacky
          • by cstacy ( 534252 )

            I've had a great experience using Claude to do the busy-work of "open a serial port on any of 3 platforms", or "set up the basics of a tkinter GUI" in Python, such that it let me get to the real work of manipulating data, addressing the device on the serial port, designing the UI much quicker.

            I wouldn't hand an AI the whole task, at least not yet. But the thankless grunt work? Yes please. Away with it.

AI can be useful for cutting and pasting that kind of boilerplate code that it has seen on Stack Overflow many times. It will still mess it up, and is not nearly as reliable as cutting and pasting it yourself.

I have found that it cannot write trivial SQL queries that work. The code references tables and columns that do not even exist. Looking it over and trying to debug it takes longer than writing it myself.

            Whenever someone tells me how helpful the AI is at writing code for them, all I hear is: "I can't progr

            • I can write, and have written both the examples I provided here before. Claude did a great job, and provided clear, legible working code.

              These were little jobs, not big ones. I wouldn't try having it land a plane or prescribe meds...well, at all right now. Maybe in future with much human supervision and review.

              It's also important to keep in mind that until very recently, ALL programming errors with catastrophic results came from humans, not AI.

              We're not uniformly good, and not at all foolproof.

This is also my experience. AI can do the simplest of tasks - poorly. Generating a Makefile or a Maven pom.xml file, sure. But I was struggling to get even simple unit test cases out of it.

          I was given a defect to fix. I found the problem quick enough. So I asked the AI.... It was completely useless.

          • HOW you ask the AI tool is a very large part of the result validity.

            • Well, if I hold its hand long enough, it will eventually get the correct answer. Along the way, it will very confidently provide an incorrect answer.

              • You're the kind of genius that sits there and yells at someone who shuts down when someone yells at them, thinking you'll get through to them if only you could yell louder.
        • AI-generated code is a mess that *superficially looks right* but isn't fit to purpose.

          Sounds like a perfectly good description of Windows code to me!

        • by zlives ( 2009072 )

code being in a repository doesn't necessarily mean it's actually being used.

        • I have been working with various LLM solutions, as well as more specialized coding solutions being developed for certain specialized cases.
          They are not perfect, not by a long shot, but they yield pretty nice code, if properly prompted to do so.

Prompting is an art, and it becomes more complex as time passes. There are already attempts to develop AI tools which generate complex prompts for, well, other AI tools.

          • There are already attempts to develop AI tools which generates complex prompts for, well, other AI tools.

            Attempts? This is the magic sauce of all agentic coding platforms. We're well past attempts- we have paid products.

            • Well, yes, but they are not always successful.

              • Things like Cursor are pretty damn successful.
                Do you know why you have to route all traffic through them, even when the end-model is something you could access directly like ChatGPT or Claude? So that they can hide the actual prompt engineering they do from you. That's their proprietary sauce- the thing they're actually selling, because as you noted- prompt engineering is 99.9% of the game with getting really good stuff out of LLMs. An amateur can fumble around, and rely on the LLM to fill in all the gaps
        • Because I've tried using LLMs to generate code and I've seen the results. They are not usable. They *resemble* valid code, but they typically throw exceptions and raise errors, they can't pass unit tests, and they don't correctly handle edge cases. AI-generated code is a mess that *superficially looks right* but isn't fit to purpose.

          There is a meme going around about the fact that you can tackle a normal coding task by spending 3 hours to write code and 1 hour to debug and test it, or you can use CoPilot to spend 15 minutes to write the code and 8 hours to debug and test it. That matches my experience.

          This has been my experience for long code snippets. If you give it a very specific function you need, sometimes it can produce something valid quickly. The problem is trying to break down large tasks into very small functions so that the AI generator can actually handle the request. Sometimes it's possible, sometimes user requirements are so wildly speculative to begin with you really can't break it down to small functions playing against each other.

        • From my experience, it's a question of scope and how detailed your prompt is. If you ask "write me a program in X that does Y" you aren't going to get nearly as good of an answer as "write me a function in X that takes A, B, C as inputs; and gives Y as the return"

          And yes, just like if you were doing a code review of a junior engineer, don't rubberstamp what it gives you back. And if what it gives you back is garbage, oh no you've sadly wasted 30 seconds and can always fall back on writing it yourself.
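To make that concrete, here's a hypothetical example of the narrow kind of request and the sort of small, checkable function it tends to yield (the prompt, names, and numbers are all made up for illustration):

```python
# Prompt: "write me a function in Python that takes a list of prices
# and a discount rate as inputs; and gives the discounted total as
# the return" -- narrow enough that the result is trivial to review.

def discounted_total(prices, discount_rate):
    """Sum the prices and apply a flat discount rate in [0, 1]."""
    if not 0.0 <= discount_rate <= 1.0:
        raise ValueError("discount_rate must be between 0 and 1")
    return sum(prices) * (1.0 - discount_rate)
```

A function that small is exactly the junior-engineer code-review case: you can eyeball it in seconds, or throw it away and write it yourself.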

AI doesn't just mean LLM. If it's something as broad as the autogenerated code when you create a project in Visual Studio, then that 20-30% could easily be true.

        • Because I've tried using LLMs to generate code and I've seen the results. They are not usable. They *resemble* valid code, but they typically throw exceptions and raise errors, they can't pass unit tests, and they don't correctly handle edge cases. AI-generated code is a mess that *superficially looks right* but isn't fit to purpose.

          That's what the other 70% of the code is for, the human written code.

          There is a meme going around about the fact that you can tackle a normal coding task by spending 3 hours to write code and 1 hour to debug and test it, or you can use CoPilot to spend 15 minutes to write the code and 8 hours to debug and test it. That matches my experience.

          It depends entirely on the complexity of the code you asked for. For simple things, like very short code samples that might appear in a textbook or reference book, the state of the art seems to be about equal to these references. Note these references also lack the defensive programming you mention.

          AI is, currently, more of an alternative to reference books than programmers. And reference books have always provided code to real work pro

        • I just used an AI for re-skilling and it's helped immensely with a project of mine.

          With a bit of skill and the right direction it's not overly difficult to get good results.

          YMMV.

      • by DarkOx ( 621550 )

20-30% of code in the repositories. That isn't 20% to 30% of the code they are shipping, or even code that represents a branch anyone is doing work on that will ever land in a deliverable.

Could be just a bunch of devs branching stuff to try a quick experiment and then forgetting about it entirely 10 minutes later.

        LLMs are good at generating a lot of code, and version control systems are good at storing it; none of this suggests a large portion of it isn't useless garbage.

        • LLMs are good at generating working code too.

          I think there is a strange kind of delusion that exists among the people that frequent this site that LLM usage equates to "vibe coding" or whatever the fuck they call that shit today where people who have no fucking idea what they're doing generate programs with LLMs.

          LLMs are becoming more and more integrated into actual professional developer workflows, and they're quite good at what they do in that capacity.
    • Re:BS (Score:5, Funny)

      by korgitser ( 1809018 ) on Wednesday April 30, 2025 @01:41PM (#65342667)
      It would certainly explain Windows 11...
    • by taustin ( 171655 )

      Seems plausible, given what shit their software is.

    • Dodgy wording - "written by software" could just mean boilerplate code generation that uses no LLM at all. It sounds impressive as long as you don't think about it.

      • It sounds impressive as long as you don't think about it.

That sounds like most of Silicon Valley for the last 5 years.

      • written by software

Does he mean editor software, like a text editor? Or is he attributing any usage of auto-complete while someone is writing code? Or is he counting template code, where the compiler emits code based on a template definition? Maybe he is summing it all up and making it sound like AI?

        • by ukoda ( 537183 )
I think you would need a switch-based loader to write code without any software.
Then 100% of code at MS is written using text editors (software), and then 70% is thrown away because it was prototype/PoC-ware (it doesn't make it into the repositories). That's what he really meant.

      • 30% of the code in their repositories.
        How much of that is actually used? And correct?

        In a generation, kids won't believe you when you tell them we used computers to do calculations instead of hallucinate rationalizations for the decrease in the chocolate ration.
In a generation, kids will be laughing about how the neo-Luddites deluded themselves into pretending the growing experience of the population was entirely hallucinated.

          You're talking out your ass, so it's like you're engaging in a PR war against the big bad LLM specter. To what end? Self delusion, or the hope of misinforming people that happen by your post?
      • by ceoyoyo ( 59147 )

        It could also mean boilerplate code generation that uses an LLM. They're pretty good at that.

        Dear Copilot, somebody wants another damn web form. Make it just like this existing one except change the label to "Fire idiot?" and the submission address to http://hr.microsoft.com/fireid... [microsoft.com].

        Tnks.

        And there you go, 1000 lines of code.

It depends upon how it is measured. If it is lines of code, I can easily imagine that AI spits out huge amounts of low-quality code. There isn't any comment about what percent actually passes into production... just how much is in their repositories. You can always lie with statistics. CEOs are masters at it.

    • by dbialac ( 320955 )
      Well, he said "up to". That includes any quantity between 0% and 30%. For example, up to 90% of this comment was written by AI, and it's completely true.
    • Does he really believe this?

What makes you think an AI is any less capable of copy-pasting code from the internet than a modern human programmer?

      • by cstacy ( 534252 )

        Does he really believe this?

What makes you think an AI is any less capable of copy-pasting code from the internet than a modern human programmer?

        Because the human programmer understands what they are reading, and the AI is just stringing shit together based on statistical likelihood of one meaningless (to it) token following another?

        Unless you mean to suggest that most "coders" don't actually understand what they are reading. In which case I think you might be right. In the early 2000s I worked with "coders" (mainly outsourced from India) that were like that. We had to rewrite pretty much everything they did. Every day.

    • Not of all written code, but newly written code.
    • by gweihir ( 88907 )

Probably. The guy is definitely not competent with regard to technology ...

  • Leap (Score:5, Insightful)

    by MBGMorden ( 803437 ) on Wednesday April 30, 2025 @01:41PM (#65342669)

    You may be putting words in his mouth by assuming "AI" when he said "software".

Open up Visual Studio and start drawing out a GUI app, and a ton of the background code is then generated by software. It's been that way for ~30 years. That it was written by software doesn't necessarily mean it was AI, unless we're jumping on the train of calling basically everything in the computer "AI" these days.

"basically everything in the computer "AI" these days" - this seems to be the narrative these companies are pushing, doesn't it? Another buzzword for bullshit as far as I'm concerned.

    • "It's all ball bearings these days." -Fletch

      • by cstacy ( 534252 )

        And before that everything was "blockchain"

        And before that everything was "enterprise"

I'm waiting for them to release the "Blockchain-enabled Web-Scale Enterprise AI Agent Vibe Grid-aaS", with Quantum Guardrails to prevent incorrect code.

        Until then I'll just write it myself TYVM.

Considering everything is compiled and no one writes machine-specific opcodes, I'm surprised he didn't just say 100% of it is.
    • Right. You're talking about a code generator. Those have been all the rage in MS world since at least the mid-aughts, notably for WinForms apps and for persistence layers. The code generated is shite, of course, but that doesn't ever seem to discourage anyone. "We don't have to look at the code, it just works!" they say. It's not clear to me that the people who say that have any intention of looking at the code even when it doesn't work, though.
    • by cstacy ( 534252 )

      You may be putting words in his mouth by assuming "AI" when he said "software".

Open up Visual Studio and start drawing out a GUI app, and a ton of the background code is then generated by software. It's been that way for ~30 years. That it was written by software doesn't necessarily mean it was AI, unless we're jumping on the train of calling basically everything in the computer "AI" these days.

      You've hit the nail on the head there with the last sentence.

  • Up to 30% (Score:5, Informative)

    by PsychoSlashDot ( 207849 ) on Wednesday April 30, 2025 @01:46PM (#65342685)
    That includes 0.000000000000000001%

But realistically there's no way anything like 30% of the lines of code were written by GenAI. The vast majority of Windows Server 2025's code is identical to Server 2022, 2019, and 2016. Win11 24H2 is almost identical to Win10 1507. At least when you consider individual lines.

    Now, it's possible that 30% of the titles/programs they offer have been touched in one way or another by GenAI. I'm sure there's some stuff in Office and Windows and Exchange Online that was spewed by CoPilot.

That said, I think the pride here is... misplaced. Want me to buy AI? Tell me it's reliable. I get that. That's what he's doing. Want me to buy everything else? Tell me it's reliable... which GenAI code is not. So... this announcement disincentivizes me from buying MS products except CoPilot, which I don't want in the first place. It's kind of "we've smeared shit sandwich on all of the non-shit-sandwich meals at our restaurant. It's that good!"
That includes 0.000000000000000001% But realistically there's no way anything like 30% of the lines of code were written by GenAI. The vast majority of Windows Server 2025's code is identical to Server 2022, 2019, and 2016. Win11 24H2 is almost identical to Win10 1507. At least when you consider individual lines. Now, it's possible that 30% of the titles/programs they offer have been touched in one way or another by GenAI. I'm sure there's some stuff in Office and Windows and Exchange Online that was spewed by CoPilot. That said, I think the pride here is... misplaced. Want me to buy AI? Tell me it's reliable. I get that. That's what he's doing. Want me to buy everything else? Tell me it's reliable... which GenAI code is not. So... this announcement disincentivizes me from buying MS products except CoPilot, which I don't want in the first place. It's kind of "we've smeared shit sandwich on all of the non-shit-sandwich meals at our restaurant. It's that good!"

It could be as simple as AI generating a *LOT* of code very, very quickly. I could believe 30% of the code created in a tech company is created by AI. That doesn't mean 30% of code making it to release candidates is generated by AI.

    • by SirSlud ( 67381 )

      It's kind of cute you think Windows would constitute a significant portion of their code footprint.

      • It's kind of cute you think Windows would constitute a significant portion of their code footprint.

        Microsoft says there are 50 million lines of code in Windows 10, and it's guaranteed there are more in Windows 11. They also say there are 45-50 million lines of code in Office. These are believed to be their two largest codebases. You don't think 50 million+ lines, and being their single biggest codebase, is significant?

That's not what he's saying. The wording is terrible. He's saying 30% of new code is software-written. Zuck was asking about how much of their code is CURRENTLY being written by software.

    • by cstacy ( 534252 )

      That includes 0.000000000000000001%

      You May Already Have Won Up To 1,000,000 Commits!*

      *Hallucinations may vary by state.

  • Yeah... (Score:5, Insightful)

    by devslash0 ( 4203435 ) on Wednesday April 30, 2025 @01:54PM (#65342711)

    That explains a lot about the current state of Windows.

  • Recall. Oops, nvm.

  • probably started with Windows ME

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday April 30, 2025 @02:34PM (#65342861) Homepage Journal

    Microsoft CEO Satya Nadella said [...] during a fireside chat with Meta CEO Mark Zuckerberg

    I don't see a fire in the picture in the article, although I'd like to.

    • Microsoft CEO Satya Nadella said [...] during a fireside chat with Meta CEO Mark Zuckerberg

      I don't see a fire in the picture in the article, although I'd like to.

I'd like to see both of them on fire. Maybe we can toss Matthew Prince [wikipedia.org] on as some extra kindling.

    • by PPH ( 736903 )

      I don't see a fire in the picture in the article, although I'd like to.

      Here ya go! [giphy.com]

  • by RUs1729 ( 10049396 ) on Wednesday April 30, 2025 @02:39PM (#65342881)
    That will answer many questions from suffering Windows users.
  • by mmdurrant ( 638055 ) on Wednesday April 30, 2025 @03:06PM (#65342965)
    Written by software doesn't mean AI. It can - and likely does - mean generated by automated processes. That's all.
  • I guess that explains why their products have gotten so bad and bloated over time.
  • Now we know (Score:4, Informative)

    by gabrieltss ( 64078 ) on Wednesday April 30, 2025 @04:48PM (#65343169)
    Now we know why we have the mess called Windows 8/10/11!

    It's all AI generated CR@Pola!
  • AI wrote huge volumes of throw-away code? Just keep the engine churning, soon it will write a volume of code equal to 20-30%. That doesn't mean the code is actually used in something real.

  • Maybe he meant machine code? This is 100% written by software - by compilers and assemblers.

AI-produced work doesn't qualify for copyright; isn't that the legal standard?
