Microsoft

Software Engineering at Microsoft (502 comments)

an_mo writes "A link to a Google-cached document is floating around some mailing lists, containing some info about Microsoft software engineering. In particular, the document contains juicy bits about the development of a large project like NT/2K. Some examples: team size went from 200 (NT 3.1) to 1400 (Win2K). A complete build of Win2K takes 8 hours on a 4-way PIII and requires 50GB of hard drive space. Written/email permission is required for check-ins by the build team." The HTML version on Usenix's site is much nicer than Google's auto-translated version.
  • by Anonymous Coward
    Surely a quick goat killing is required at least before check-in
    • No, goat sacrifices are for stable running of Windows. Calf sacrifices are for the compilation. It's in the bible.
      • Indeed. You need to sacrifice at least the mythical all-redhair goat if you want to get 3 days of uptime with NT5.0 a.k.a. Win2000

        I saw a sig saying: Taking software security advice from Microsoft is like taking airline security advice from Bin Laden.

        I disagree. Bin Laden proved that he knew a lot about airline security (and how to defeat it).

        For all its self-congratulation, MS still does not know how to achieve code quality in a large software project. They do a lot of wide-and-shallow usability studies, but they pay as much attention to reliability testing as Hollywood pays to scriptwriters (i.e., not a lot. Remember the old joke? "How do you spot a blonde would-be actress in a movie cast? She sleeps with the writer.")

        -- SysKoll
  • Re: (Score:2, Informative)

    Comment removed based on user account deletion
    • not really, I suspect that they have a few tricks up their sleeves

      what would really be fun would be to know what version of the compiler they are using

      for win2k it was Visual C++ 6; I wonder if they changed for XP

      regards

      john jones
      • ---"not really I suspect that they have a few tricks up their sleaves

        what would really be fun would be to know what version of the compiler they are useing

        win2k it was Visual C 6 I wonder if they changed for XP"---

        It'd be kinda funny if MS compiled all their OS 'stuff' on their unix boxes with GCC.

    • by Osty ( 16825 ) on Wednesday July 10, 2002 @07:09PM (#3861265)

      So for something like Windows 2000 is that a long time?

      It's long-ish, but not overly long. For a comparison that you may be more familiar with, consider the time it takes to compile the Linux kernel, your chosen libc, other libs you'll eventually need (say, gtk and/or qt, etc), X, GNOME or KDE, some apps (xmms, xine, a couple editors, etc), and probably 8 or 9 other things I'm forgetting right now. You'll probably come up with a similar number (probably smaller, but there's also probably less code in all the above tools).

      That's not to say it can't be made faster. I don't know whether that time was for a multi-threaded compile or not, but I'd sure hope so given that their build machines were 4-way boxes. Also, note that they didn't say what speed the P3s were; four P3-500s will surely compile slower than two P3-1.2GHz parts. Nor did they say whether those were Xeons or not (a larger cache is better for compiling). The obvious solution is to throw hardware at the issue, but there are other things that can be done, like incremental building, better sync/drain points for multi-threaded compiles, more efficient compilers and build scripts, etc.
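      A minimal sketch of the "throw parallelism at it" idea: drive a tree build with one compile job per CPU. The source-tree path and the use of GNU make are my assumptions for illustration, not anything from the slides.

      ```python
      # Sketch: kick off a parallel build with one make job per logical CPU.
      # Assumes GNU make and a hypothetical tree at ./src; adjust to taste.
      import os
      import subprocess
      import time

      def parallel_build(tree="./src"):
          jobs = os.cpu_count() or 1                     # e.g. 4 on a 4-way box
          start = time.time()
          result = subprocess.run(["make", f"-j{jobs}"], cwd=tree)
          hours = (time.time() - start) / 3600
          print(f"exit code {result.returncode}, {hours:.2f} hours wall clock")
          return result.returncode

      if __name__ == "__main__":
          parallel_build()
      ```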

      • well, Win2K came out in January of 2000, so my guess would be P3s in the 500-700MHz range.
      • Xeons (Score:3, Insightful)

        by xrayspx ( 13127 )
        They have to be Xeons; AFAIK, non-Xeon Intel CPUs won't do 4-way. And even if you CAN do 4-way on regular PIIIs, which you can't, MS wouldn't; they would have Xeons.

        I'm imagining this machine to be a Compaq 6400R or the like; from the timeframe of the build it's probably 550s or 700s(?), since MS had a very close relationship with Compaq for servers.

      • 4 P3 (Score:5, Informative)

        by thopo ( 315128 ) on Wednesday July 10, 2002 @08:29PM (#3861639)
        Sometimes it makes sense to read the article before you comment (I know the chance of getting modded up is smaller...). The article says:

        Complete build time is 8 hours on 4 way PIII Xeon 550 with 50Gb disk and 512k RAM
      • But here is part of the whole point...

        The Linux system I'm running when not booted to the Dark Side (My daughter was running Age of Empires - more Dark Side software.) isn't a single chunk that has to be built as one unit. The kernel's one piece, and each lib is another. To be sure, some libs won't work without specific versions of others, so the pieces aren't all independent. But it's still not all one giant chunk.

        They're essentially making the RedHat distribution into one giant build. Kind of like Gentoo, which someone else brought up, and is a very appropriate comparison for build times.

        But even with RedHat or Gentoo, it's not one giant chunk. I've upgraded pieces of my RedHat for years, and to be fair, Microsoft issues fixes. But there's still a difference, in that I have a better understanding of what RedHat's doing with an update, and better understand what parts of my system are affected.

        While there may be modularity inside Windows, it appears to be intentionally hidden from the end user. I wonder if that's part and parcel of proprietary software, or if it's a side effect of the legal team arguing that Windows is "integrated" and IE can't be unbundled.
        • I wonder if that's part and parcel of proprietary software, or if it's a side effect of the legal team arguing that Windows is "integrated" and IE can't be unbundled.

          I would argue that it's for ease of testing - you build a development CD and a release CD. If the release CD passes, you can then release that very CD to manufacturing.

    • Well, as I recall, the box was a Dell 8-way Xeon P3, and it built everything, including both versions of several system DLLs (basically, anything with a TCHAR export, since that changed from char* on Win9x to short* on WinNT)

  • read a book (Score:5, Informative)

    by johnjones ( 14274 ) on Wednesday July 10, 2002 @07:05PM (#3861239) Homepage Journal
    Show-Stopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft by G. Pascal Zachary

    very funny about the head guy throwing chairs out of windows (the physical ones, ironic really)

    and the black team....

    read it and The Mythical Man-Month, and then you might have a small background

    regards

    john jones
    • not a troll (Score:2, Informative)

      by johnjones ( 14274 )


      Showstopper from [amazon.com]

      when has recommending books on the subject been a troll? (and it was done by someone with unlimited ....)
      • Showstopper! (Score:2, Informative)

        by jkujawa ( 56195 )
        "Showstopper!" was fascinating. David Cutler really is a genius. NT had the potential to be a truly great operating system. I would have loved to have gotten a chance to play with it before they bolted win32 on top of it. Everyone who has the slightest interest in operating systems should read Showstopper.
    • Re:read a book (Score:2, Insightful)

      by selan ( 234261 )
      I really enjoyed reading Showstopper. It's very well written and tells interesting stories about the people behind NT. I was surprised by the amount of work and testing that went into NT. Actually raised my opinion of the lowly Microsoft coders (not the brass, though). The book also goes into the sordid history of how Microsoft shafted IBM and OS/2 by making NT for Windows only. Very good read.
  • by Anonvmous Coward ( 589068 ) on Wednesday July 10, 2002 @07:09PM (#3861260)
    Geez, I run at 1600 by 1200 and I still had to scroll every 10 lines or so. I got people yelling at me down on the street because I read slow.
  • This reminds me of an old article with the name of this subject that came out around the mid-90's about Microsoft's sync-and-stabilize methodology. Really not a lot new here, save the amount of time they required to build their OS.
  • by PingXao ( 153057 ) on Wednesday July 10, 2002 @07:12PM (#3861285)
    1 defect stops 1400 devs, 5000 team members!
    I would think this would lead to a situation where CYA would become a way of life. Sure, even the best developers will make an occasional mistake. The document notes that a successful culture needs to recognize that mistakes will happen, but if ONE defect is going to shut down 5,000 people, I know I wouldn't want to be the one everybody is pointing their fingers at. I can imagine the circus atmosphere when the blame-shifting and the search for the guilty goes into high gear.
    • by gwernol ( 167574 ) on Wednesday July 10, 2002 @07:31PM (#3861394)
      I would think this would lead to a situation where CYA would become a way of life.

      I don't think so - he's talking about build breaks (i.e. code that won't compile). These are automatically detected and the culprit is automatically emailed. Under source code control there is nowhere to hide from this, because you know whose code broke the build.

      The only CYA you can do is not check in broken code. This is a good thing :-)

      Runtime errors don't stop 5000 team members.
      • With proper branching in your source repository, you can isolate different areas of change, and thus keep build breakages limited to subsets of developers.

        With regards to isolating who broke a build, that would require a clean build for each and every checkin, which just isn't practical in terms of hardware resources. A more practical solution is to grab tip, build, if fail -> indicate all checkins since last green build. This gives you a bigger culprit set, but it's MUCH cheaper in terms of hardware.
        • With proper branching in your source repository, you can isolate different areas of change, and thus keep build breakages limited to subsets of developers.

          Agreed, and we know that their SCCS was broken in this respect.

          With regards to isolating who broke a build, that would require a clean build for each and every checkin, which just isn't practical in terms of hardware resources. A more practical solution is to grab tip, build, if fail -> indicate all checkins since last green build. This gives you a bigger culprit set, but it's MUCH cheaper in terms of hardware.

          Again going back to the article, we're talking about their daily builds, which will be clean. The compilers will spit out failure information that can be easily traced back to the culprit.

          This is how many large (i.e. OS-sized) projects work - regular clean builds, usually once per day, with auto-emailing of break information to those responsible. One group I worked in also required you to donate some chocolate to a central "fund" available to all the engineers when you broke the build. A fun way of encouraging people to compile against clean sources before checking in.
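          For the curious, here is a rough sketch of that daily-clean-build-plus-auto-email loop. The repository commands (git here), the "last-green" tag, and the SMTP host are all stand-ins; the slides describe the scheme, not this particular tooling.

          ```python
          # Sketch of a daily clean build: sync to tip, build, and on failure
          # mail everyone who checked in since the last green build.
          # Repo layout, "last-green" tag, and SMTP host are hypothetical.
          import smtplib
          import subprocess
          from email.message import EmailMessage

          SMTP_HOST = "smtp.example.com"

          def run(args, cwd="."):
              return subprocess.run(args, cwd=cwd, capture_output=True, text=True)

          def culprits_since(tag, cwd="."):
              # Author emails of every checkin since the last green build.
              log = run(["git", "log", "--format=%ae", f"{tag}..HEAD"], cwd)
              return sorted(set(log.stdout.split()))

          def daily_build(tree="."):
              run(["git", "pull"], tree)                      # grab tip
              build = run(["make", "-j4"], tree)              # clean build
              if build.returncode == 0:
                  run(["git", "tag", "-f", "last-green"], tree)
                  return True
              msg = EmailMessage()
              msg["Subject"] = "Build break: your checkin may be the culprit"
              msg["From"] = "buildlab@example.com"
              msg["To"] = ", ".join(culprits_since("last-green", tree))
              msg.set_content((build.stdout + build.stderr)[-4000:])
              with smtplib.SMTP(SMTP_HOST) as smtp:
                  smtp.send_message(msg)
              return False
          ```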
  • by rmassa ( 529444 )
    I'm a Linux user, but most MS people I know hail Win2K as the best Microsoft OS ever. So this presentation seems kind of strange, pointing out things (increased build time, more developers and testers) that you wouldn't expect to produce a better OS.
  • 8 hours? (Score:4, Funny)

    by hatrisc ( 555862 ) on Wednesday July 10, 2002 @07:15PM (#3861303) Homepage
    it may have taken 8 hours, because they had to reboot twice.
  • by teetam ( 584150 ) on Wednesday July 10, 2002 @07:22PM (#3861341) Homepage
    ...be among the world's best developers!
    "Best developers are going to check-in a runtime or compile time mistake at least twice each year"

    Twice a year??? Is that a conservative estimate or are they trying to show off? All the places I have worked, people check-in way more "mistakes" than that. Especially if they work late on deadlines.

    • I think maybe they are talking about checking in code that just plain doesn't compile? I would hope that developers would do a test compile and maybe some whiteboxing before just randomly checking things in. That's the kind of thing that can cause a lot of bad downtime for a lot of people (especially if you build at night when no one's around)... I would hope that 2 of those types of mistakes a year would be an average or even max!
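      One cheap way to enforce that kind of discipline is a client-side gate that refuses the check-in unless a test compile succeeds. This is just an illustrative sketch; the hook mechanism and build command aren't from the article.

      ```python
      #!/usr/bin/env python3
      # Pre-checkin gate (illustrative): block the check-in if a test compile fails.
      import subprocess
      import sys

      def main():
          build = subprocess.run(["make", "-j4"], capture_output=True, text=True)
          if build.returncode != 0:
              sys.stderr.write("Test compile failed; fix the break before checking in.\n")
              sys.stderr.write(build.stderr[-2000:])
          return build.returncode

      if __name__ == "__main__":
          sys.exit(main())
      ```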
  • by zdzichu ( 100333 ) on Wednesday July 10, 2002 @07:24PM (#3861360) Homepage Journal
    from google document:
    Complete build time is 8 hours on 4 way PIII Xeon 550 with 50Gb disk and 512k RAM
    ahh, I see... 50Gb swap partition!
    with half a meg of RAM... yeah, the boss said something about 640K being enough for everybody.
  • Old news... (Score:5, Informative)

    by Anonymous Coward on Wednesday July 10, 2002 @07:26PM (#3861373)
    Guys, the PowerPoint slides for the Lucovsky presentation have been publicly downloadable for almost 2 years. I always find it sad when Slashdot reports something old as something new.

    Go get the slides at http://www.usenix.org/events/usenix-win2000/tech.html
    • Why sad? (Score:3, Insightful)

      by alienmole ( 15522 )
      Not everyone can know everything. Why discriminate against good information based on its age?
    • Re:Old news... (Score:3, Insightful)

      by inkfox ( 580440 )
      Guys, the PowerPoint slides for the Lucovsky presentation have been publicly downloadable for almost 2 years. I always find it sad when Slashdot reports something old as something new.
      It was still probably news to most here. And it's interesting. Both make it a good story.
  • by Anonymous Coward on Wednesday July 10, 2002 @07:47PM (#3861485)
    It seems [google.ca] that Microsoft does not use Visual Source Safe for Windows source code.
    • Thanks!

      I've been trying to work out what SourceDepot was really called in the outside world for ages. I've been looking for a good replacement for VSS.

      Si
    • by Anonymous Coward
      Yes, we use Source Depot on my team at MS. It's very Unixy in its syntax (likes a lot of filtered output piped to it from other cmdline tools), and it's also a bit obscure in its details. It has a GUI client, but the bulk of it, other than the client mappings (which server stuff to sync) is all cmdline. It's not great, but at least it scales, which is more than you can really say for VSS.
  • if they are using SourceSafe.
    • SourceUnSafe is so bad, Microsoft doesn't even use it internally.

      Go figure.

      (They use some internal tool called SourceControl, or something like that... the name escapes me)

    • Re:God help them... (Score:4, Interesting)

      by cant_get_a_good_nick ( 172131 ) on Wednesday July 10, 2002 @08:37PM (#3861683)
      If you think Visual SourceSafe is bad...

      I had a contract project, a porting job. The platforms were Win32 (where it originated), UNIX/Linux (our port), Novell, and OS/2. We had the command line version because the Linux GUI core dumped every 5 seconds. But the command line version still sucked, and of course didn't know shit about line endings. We could script it with some extension mapping to try to do dos2unix/unix2dos, but good luck, because the command line version didn't return any useful exit() values. I have no idea what the Novell and OS/2 guys did.

      Joel Spolsky (he's been on here before) wrote a bit about sucky SourceSafe [joelonsoftware.com] and how Microsoft really doesn't use it. Doesn't give me a lot of confidence using it. He also had the link to the Usenix version of the talk given in the story.
  • by davebo ( 11873 ) on Wednesday July 10, 2002 @08:00PM (#3861532) Journal
    Microsoft claims IE can't be separated from the OS. Yet the presentation points out that the code is broken into 16 sub-projects, largely isolated from each other and separately buildable.

    Two of those projects were "INetCore" and "INetServices".

    So why can't you just build 2K without those 2 subprojects, or with stubs inserted for the functions declared in those projects?
    • by patchmaster ( 463431 ) on Wednesday July 10, 2002 @08:29PM (#3861640) Journal
      Those claims are clearly gross exaggerations intended to fool idiots and judges into thinking IE is an integral part of the OS. They define "IE" as every line of code exercised by IE in doing its thing, including mundane things like writing to the screen or saving a file. Then they discover if you pull out all the code for "fwrite" suddenly the system stops working. Duh! It's like claiming your car won't run without the windshield wipers, defining the windshield wipers as everything needed to make them work, including the battery. So you pull out the battery and, what do you know, the car won't start.
      • I always thought of it as claiming you couldn't take the car stereo out without the engine failing. Sure, you can design a car that way, but it's incredibly stupid. The only reason GM would do such a thing is to keep AIWA et al. from being able to install a third-party player.
    • Separately buildable does not imply separately runnable.
      So why can't you just build 2K without those 2 subprojects, or with stubs inserted for the functions declared in those projects?

      The thing you must understand about Microsoft code is that everything is a component: OLE in the old days, COM now. That's why you can easily call Excel's charting functions from your own code, say. It's also why you can run macros inside Outlook; all Microsoft applications are components plus scripting glue (like VBA). WordPad, for example, is almost no code in and of itself; it's a rich text component, a toolbar component and so forth. If you want to build a custom web browser, you can just reuse the HTML renderer and whatever else you need from IE; they are all components.

      But this also means that if the internet components were entirely removed, there would be no OS-level TCP/IP support, the online help viewer which uses the HTML renderer wouldn't work, etc. So that's why MS say they can't remove MSIE - because IExplore.exe on your hard drive is just the glue holding together a bunch of components that are provided by the OS and available to any application.
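      The component point is easy to demonstrate: any automation client can instantiate the same browser object that IExplore.exe is a thin shell around. A tiny sketch using Python's pywin32 on a Windows box (the package and the URL are my choices, not anything from the slides):

      ```python
      # Drive the IE browser component via COM automation (requires pywin32 on Windows).
      import win32com.client

      ie = win32com.client.Dispatch("InternetExplorer.Application")
      ie.Visible = True                       # show the browser window
      ie.Navigate("http://www.usenix.org/")   # reuse the HTML renderer like any other app
      ```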
  • I always figured that their development methodology involved a room full of an infinite number of monkeys typing into Notepad. Learn something new every day.

  • by sweede ( 563231 ) on Wednesday July 10, 2002 @08:44PM (#3861712)
    that the 8-hour, 4-way P3, 50-gig-drive compile was the OLD WAY of doing Windows 2000, based on how they developed Windows NT.

    The later slides describe the NEW project resource management and development processes for the continuing development of Windows 2000 (before and up until after the release?).

    Slides 23 and up tell you what they did and how well everything works on a project as large as Windows 2000 is.

    This slide gives a summary of the new build processes: http://www.usenix.org/events/usenix-win2000/invitedtalks/lucovsky_html/sld033.htm [usenix.org]

  • says it all (Score:4, Interesting)

    by 0WaitState ( 231806 ) on Wednesday July 10, 2002 @08:46PM (#3861727)
    From the presentation:

    "Anything that crashes the OS is a bug. Very radical thinking inside of Microsoft considering Win16 was cooperative multi-tasking in a single address space,..."

    So the BSODs were caused by the old-timers? Were they also the ones who designed in the feature that every fucking install of an application requires a reboot?
    • Re:says it all (Score:3, Interesting)

      Were they also the ones who designed in the feature that every fucking install of an application requires a reboot?
      Most applications worked fine if you clicked "No" when they asked you to reboot. The reason most applications ask for a reboot is the way installer tools like InstallShield work under Windows 95/98/NT. At the end of an InstallShield script one can insert a statement that handles the reboot. IIRC the InstallShield manual suggested a number of cases in which you would need a reboot (some of these cases did not in fact require rebooting).

      So what happened? Most developers did not bother finding out whether their install process actually required a reboot at the end; out of laziness they just assumed it always did, and they made the user reboot every time "just to be safe".
  • BIL 9000: I'm sorry (Judge) Jackson, I can't let you do that.
  • by Twillerror ( 536681 ) on Wednesday July 10, 2002 @09:43PM (#3861923) Homepage Journal
    Remember that Windows 2000 is essentially everyone who is working on the Linux kernel, the basic distribution, and X. If the number includes Explorer (which could be likened to Mozilla) and includes management, testers, and all the design specialists (people who do research to make it user friendly, or handicap accessible), I would think it's pretty small.
  • by g4dget ( 579145 ) on Wednesday July 10, 2002 @09:46PM (#3861928)
    Compared to the rest of Windows, the NT kernel seems reasonably well engineered. The problem, I think, is that the end product is a combination of features that marketing thinks really need to go in there for their feature checklists, and pet ideas of the developers/researchers.

    UNIX and Linux are different. UNIX (at least Research UNIX) was constrained by its paradigms: it was vigorously policed by its developers. For Linux, something doesn't make it into the kernel unless it really scratches an itch that a lot of people have--the feedback is immediate and direct: no interest, no developers.

    Microsoft software development doesn't operate in a competitive market of ideas (let alone a competitive market), it doesn't have a paradigm to focus it, and it doesn't even have resource constraints to focus it. It's nice that they make the software engineering work out, but the end result still is mediocre at best.

  • Full text of Proudly Serving my Corporate Masters here... [iuniverse.com]

    Just in case you wanted some more insight into working for the company. Fascinating stuff.
  • by AaronLuz ( 559686 ) on Wednesday July 10, 2002 @10:25PM (#3862083)

    Given the tone of most of the comments here, one might think that the slides merely reveal Microsoft's errors. In fact, they describe the problems the company faced in scaling its NT development team from 200 to 1400 programmers, and the solutions it found. The conclusion is, "With the new environment in place, the team is working a lot like they did in the NT 3.1 days with a small, fast moving, development team."

    As Linux grows, it is headed for the same sorts of problems. The open source movement can learn a lot from Microsoft's struggles. The fact that Linus opted to use a new source control system -- just as Microsoft realized that their in-house system was not up to the task and so switched -- gives me hope.

    P.S. May we please have better summaries for the articles on the front page?

  • by DarkHelmet ( 120004 ) <<ten.elcychtneves> <ta> <kram>> on Thursday July 11, 2002 @03:02AM (#3862965) Homepage
    A complete build of Win2K takes 8 hours on a 4-way PIII and requires 50GB of hard drive space.

    Some goofy Microsoft intern forgot to pass -j 4 for the compilation.

    Either that, or they compiled it on Win9x (which has NO multiprocessor support).

  • by epsalon ( 518482 ) <slash@alon.wox.org> on Thursday July 11, 2002 @05:18AM (#3863202) Homepage Journal
    Well, that's an oxymoron for you!
  • 1400? Try 3100! (Score:4, Interesting)

    by Queuetue ( 156269 ) <queuetue@gm a i l . com> on Thursday July 11, 2002 @05:23AM (#3863208) Homepage
    Take a look at slide 19 - 1400 devs, but 1700 testers. Do you suppose that means that Win2k had 3100 people working full-time on it? Lowballing the numbers (55k per dev, 45k per tester):

    1,400 * 55,000 = 77,000,000
    1,700 * 45,000 = 76,500,000
    153,500,000 a year * 3 years (from slide 3) = 460,500,000

    Include an overhead multiplier:
    460,500,000 * 2.4 = 1,105,200,000

    And we wind up with roughly US$1.1 billion.

    This [dwheeler.com] suggests that Win2K represents 20 million SLOC, just slightly higher than RH 6.2 at 17 and change.

    His cost estimates place RH 6.2 at US$614,421,924.71

    I suspect MS probably pays more per dev, but I have no proof, so I'll stick with the industry averages. Also, testers may have been shared across projects, MS can pool resources and bring overhead lower, etc...

    I'm not drawing any conclusions, just compiling data...
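    For anyone who wants to fiddle with the assumptions, the arithmetic above boils down to a couple of lines (all inputs are the poster's guesses, not Microsoft figures):

    ```python
    # Back-of-the-envelope Win2K labor cost from the figures above.
    devs, testers = 1400, 1700
    dev_salary, tester_salary = 55_000, 45_000
    years, overhead = 3, 2.4

    yearly = devs * dev_salary + testers * tester_salary   # 153,500,000
    total = yearly * years * overhead                       # 1,105,200,000
    print(f"${yearly:,} per year -> roughly ${total:,.0f} fully loaded")
    ```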
  • by zero_offset ( 200586 ) on Thursday July 11, 2002 @07:01AM (#3863392) Homepage
    (Yes, user johnjones already posted about Showstopper, but I have more to say than "this book was funnnneee..."). So, as johnjones pointed out, there is a book related to this subject: "Showstopper! The Breakneck Race to Create Windows NT and the Next Generation at Microsoft" by G. Pascal Zachary.

    What's interesting is comparing what Showstopper says to the claims in these slides.

    The slides suggest early NT development was done by a small team of super l33t c0d3rz who took care of business and frowned upon slacking. However, the picture painted by the book is dramatically different -- people were forced to work around the clock, the team was dominated by a small gang of guys who were basically complete assholes, everybody walked on eggshells for fear of pissing off Dave Cutler, The New Savior, and NOBODY in the group ever knew what was really going on. The whole project was shrouded in mystery, even to people on the team, because basically everything existed in Cutler's head.

    The only thing I see where Showstopper and the slideshow firmly agree is the slide labeled "Goal Setting".

    I personally have a lot of other opinions about why some of the statistics may pan out the way they do (for example, how much hardware did you REALLY have to test with in NT 3.1 days, versus Win2K?) but I want to stay focused on the Showstopper/slideshow discrepancies, so I'll leave it at that.

    The thing to realize about Showstopper is that it was based almost entirely on interviews with the people who were involved with the initial NT coding effort.

    By comparison this slideshow was written by one guy, Mark Lucovsky, who gets lightly flamed in Showstopper (at best). Oddly, I grabbed Showstopper off my bookshelf and opened it straight to the page describing Lucovsky. Weird. Anyway, here are excerpts from a single paragraph: "...smart but immature... nevertheless angered teammates with his skepticism and self-serving judgements... relentlessly critical of others, constantly probing for weaknesses... 'Until you prove otherwise, you're wrong and he's right.'" Whew, hate to be THAT guy. It gets worse. One page later, a paragraph opens by simply saying, "Many people felt that Lucovsky was a jerk."

    Given that, it wouldn't surprise me if Lucovsky was still just trying to justify the fact that the early NT dev team was made up of a bunch of flakes who had to burn the candle at both ends to actually deliver anything.

    Please understand I'm not necessarily defending any current MS practices, or even Win2K (which is still vastly superior to NT3.51). I've personally worked VERY closely with groups inside MS at different times (a couple times on-campus in Redmond), and I'll be the first to tell you the company is bureaucratic and packed to the gills with people who don't know what the hell they're doing -- just like every other company that employs tens of thousands of people.

    What I *am* saying is that this slideshow is looking at the past with "rose-colored hindsight" and I believe the motives are suspect at best. Draw your conclusions with a grain of salt. (Enough metaphor-abuse for today.)

    Do like johnjones suggested -- go buy or check-out Showstopper and read it. It's interesting, informative, and it IS kind of funny. It's amazing they were able to produce anything at all. How's THAT conclusion for contrast with the slideshow? ;)
