Software Engineering at Microsoft 502
an_mo writes "A link to a Google-cached document is floating around some mailing lists containing some info about Microsoft software engineering. In particular, the document contains juicy bits about the development of a large project like NT/2K. Some examples: team size went from 200 (NT 3.1) to 1400 (Win2K). A complete build of Win2K takes 8 hours on a 4-way PIII and requires 50GB of hard drive space. Written/email permission is required for checkins by the build team." The HTML version on Usenix's site is much nicer than Google's auto-translated version.
What no sacrifices to the gods? (Score:2, Funny)
Re:What no sacrifices to the gods? (Score:2, Funny)
Re:What no sacrifices to the gods? (Score:2, Insightful)
Indeed. You need to sacrifice at least the mythical all-redhair goat if you want to get 3 days of uptime with NT5.0 a.k.a. Win2000
I saw a sig saying: Taking software security advice from Microsoft is like taking airline security advice from Bin Laden.
I disagree. Bin Laden proved that he knew a lot about airline security (and how to defeat it).
For all its self-congratulation, MS still does not know how to achieve code quality in a large software project. They do a lot of wide-and-shallow usability studies, but they pay as much attention to reliability testing as Hollywood pays to scriptwriters (i.e., not a lot. Remember the old joke? "How do you spot a blonde would-be actress in a movie cast? She sleeps with the writer.")
Re: (Score:2, Informative)
nope (Score:2)
What would really be fun would be to know what version of the compiler they are using.
For Win2K it was Visual C++ 6; I wonder if they changed for XP.
regards
john jones
Re:nope (Score:2)
What would really be fun would be to know what version of the compiler they are using. For Win2K it was Visual C++ 6; I wonder if they changed for XP.
It'd be kinda funny if MS compiled all their OS 'stuff' on their unix boxes with GCC.
Re:nope (Score:4, Informative)
You can see the linker version using this command:
dumpbin %systemroot%\system32\ntdll.dll /headers
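The linker version is the first field under OPTIONAL HEADER VALUES in the output; it looks roughly like this (the numbers here are made up for illustration):

    OPTIONAL HEADER VALUES
                 10B magic #
                5.12 linker version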
Re:I don't know much about build times.. (Score:4, Insightful)
It's long-ish, but not overly long. For a comparison that you may be more familiar with, consider the time it takes to compile the Linux kernel, your chosen libc, other libs you'll eventually need (say, gtk and/or qt, etc), X, GNOME or KDE, some apps (xmms, xine, a couple editors, etc), and probably 8 or 9 other things I'm forgetting right now. You'll probably come up with a similar number (probably smaller, but there's also probably less code in all the above tools).
That's not to say it can't be made faster. I don't know whether that time was for a multi-threaded compile, but I'd sure hope so given that their build machines were 4-way boxes. Also, note that they didn't say what speed the P3s were: four P3-500s will surely compile slower than two P3-1.2GHzs. Nor did they say whether those were Xeons (a larger cache is better for compiling). The obvious solution is to throw hardware at the issue, but there are other things that can be done, like incremental building, better sync/drains for multi-threaded compiles, and more efficient compilers and build scripts.
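For what it's worth, the "multi-threaded compile" knob on the Linux side of that comparison is just GNU make's jobs flag; on a 4-way box you'd kick off the kernel piece of it with something like:

    make -j4 bzImage modules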
Re:I don't know much about build times.. (Score:2)
Xeons (Score:3, Insightful)
I'm imagining this machine to be a Compaq 6400R or the like; given the timeframe of the build, it's probably 550s or 700s(?), since Microsoft has a very close relationship with Compaq for servers.
4 P3 (Score:5, Informative)
Complete build time is 8 hours on 4 way PIII Xeon 550 with 50Gb disk and 512k RAM
Re:4 P3 (Score:3, Funny)
Maybe adding some RAM would help. :)
Re:4 P3 (Score:3, Funny)
...kernel, your chosen libc, other libs... (Score:3, Interesting)
The Linux system I'm running when not booted to the Dark Side (my daughter was running Age of Empires - more Dark Side software) isn't a single chunk that has to be built as one unit. The kernel's one piece, and each lib is another. To be sure, some libs won't work without specific versions of others, so the pieces aren't all independent. But it's still not all one giant chunk.
They're essentially making the RedHat distribution into one giant build. Kind of like Gentoo, which someone else brought up, and is a very appropriate comparison for build times.
But even with RedHat or Gentoo, it's not one giant chunk. I've upgraded pieces of my RedHat for years, and to be fair, Microsoft issues fixes. But there's still a difference, in that I have a better understanding of what RedHat's doing with an update, and better understand what parts of my system are affected.
While there may be modularity inside Windows, it appears to be intentionally hidden from the end user. I wonder if that's part and parcel of proprietary software, or if it's a side effect of the legal team arguing that Windows is "integrated" and IE can't be unbundled.
Re:...kernel, your chosen libc, other libs... (Score:2)
I wonder if that's part and parcel of proprietary software, or if it's a side effect of the legal team arguing that Windows is "integrated" and IE can't be unbundled.
I would argue that it's for ease of testing - you build a development CD and a release CD. If the release CD passes, you can then release that very CD to manufacturing.
Re:I don't know much about build times.. (Score:2)
Well, as I recall, the box was a Dell 8-way Xeon P3, and it built everything, including both versions of several system DLLs (basically, anything with a TCHAR export, since that changed from char* on Win9x to short* on WinNT).
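For anyone who hasn't fought with this: the TCHAR mechanism means one source tree compiles into two different binaries. A simplified sketch of the pattern (the real tchar.h headers are hairier):

    /* Compiled once with UNICODE defined (the NT flavor) and once
       without (the Win9x flavor) - so every TCHAR export exists twice. */
    #ifdef UNICODE
        typedef unsigned short TCHAR;   /* 16-bit characters on NT   */
        #define _T(x) L##x
    #else
        typedef char TCHAR;             /* 8-bit characters on Win9x */
        #define _T(x) x
    #endif

Any function exported with TCHAR* parameters therefore exists in a char* version and a short* version, which is what doubles the build.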
Re:I don't know much about build times.. (Score:3, Funny)
[OT] Re:I don't know much about build times.. (Score:2)
read a book (Score:5, Informative)
Very funny about the head guy throwing chairs out of windows (the physical ones - ironic, really)
and the black team....
Read it and The Mythical Man-Month, and then you might have a small background.
regards
john jones
not a troll (Score:2, Informative)
Showstopper from [amazon.com]
Since when has recommending books on the subject been a troll? (And it was done by someone with unlimited
Showstopper! (Score:2, Informative)
Re:Showstopper! (Score:2, Informative)
Re:read a book (Score:2, Insightful)
Did they use a big enough font? (Score:4, Funny)
"How Microsoft Builds" (Score:2)
Single point of failure (Score:5, Insightful)
Re:Single point of failure (Score:5, Insightful)
I don't think so - he's talking about build breaks (i.e. code that won't compile). These are automatically detected and the culprit is auto-emailed. Under source code control there is nowhere to hide from this, because you know whose code broke the build.
The only CYA you can do is not check in broken code. This is a good thing.
Runtime errors don't stop 5000 team members.
Re:Single point of failure (Score:2, Insightful)
With regards to isolating who broke a build, that would require a clean build for each and every checkin, which just isn't practical in terms of hardware resources. A more practical solution is to grab tip, build, if fail -> indicate all checkins since last green build. This gives you a bigger culprit set, but it's MUCH cheaper in terms of hardware.
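In sketch form (the names and types here are invented for illustration, not anyone's actual tooling), the cheap scheme is just:

    // Build once per batch; on failure, suspect every checkin since the
    // last green build instead of doing a clean build per checkin.
    #include <cstddef>
    #include <string>
    #include <vector>

    struct Checkin { std::string author; std::string description; };

    std::vector<Checkin> suspects(const std::vector<Checkin>& history,
                                  std::size_t lastGreenBuild) {
        // Everyone who checked in after the last green build is a suspect.
        return { history.begin() + lastGreenBuild + 1, history.end() };
    }

Mail that set and let the guilty party own up; a clean build per checkin would shrink the set to one name, but at the cost of a full build for every submission.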
Re:Single point of failure (Score:3, Insightful)
Agreed, and we know that their SCCS was broken in this respect.
With regards to isolating who broke a build, that would require a clean build for each and every checkin, which just isn't practical in terms of hardware resources. A more practical solution is to grab tip, build, if fail -> indicate all checkins since last green build. This gives you a bigger culprit set, but it's MUCH cheaper in terms of hardware.
Again going back to the article, we're talking about their daily builds, which will be clean. The compilers will spit out failure information that can be easily traced back to the culprit.
This is how many large (i.e. OS-sized) projects work - regular clean builds, usually once per day, with auto-emailing of break information to those responsible. One group I worked in also required you to donate some chocolate to a central "fund" available to all the engineers when you broke the build. A fun way of encouraging people to compile against clean sources before checking in.
Actually quite strange (Score:2, Insightful)
8 hours? (Score:4, Funny)
People at MS should (Score:4, Insightful)
Twice a year??? Is that a conservative estimate, or are they trying to show off? All the places I have worked, people check in way more "mistakes" than that. Especially if they work late on deadlines.
Re:People at MS should (Score:2)
it took so long (Score:5, Funny)
Complete build time is 8 hours on 4 way PIII Xeon 550 with 50Gb disk and 512k RAM
ahh, I see... 50Gb swap partition!
with half a meg of RAM... yeah, the boss said something about 640K being enough for everybody.
Re:it took so long (Score:3, Informative)
Re:it took so long (Score:3, Funny)
Re:it took so long (Score:3, Funny)
Schedule: 18 months (only missed our date by 3 years)
Old news... (Score:5, Informative)
Go get the slides at http://www.usenix.org/events/usenix-win2000/tech.
Why sad? (Score:3, Insightful)
Re:Old news... (Score:3, Insightful)
SourceDepot = Perforce != VSS (Score:5, Interesting)
Re:SourceDepot = Perforce != VSS (Score:2)
I've been trying to work out what SourceDepot was really called in the outside world for ages. I've been looking for a good replacement for VSS.
Si
Re:SourceDepot = Perforce != VSS (Score:3, Funny)
Or so I've read.
Re:SourceDepot = Perforce != VSS (Score:3, Insightful)
A lot of Windows people really believe that if they use gcc as their compiler (or use bison, or edit the code with vim, and so on) their code has to be GPL. It's FUD, and jokes like this only HURT GNU, the GPL, and Linux. This kind of humor is just too expensive: people who don't know the background actually believe this kind of stuff, out of fear of the FUD they heard from the MCSEs.
Re:SourceDepot = Perforce != VSS (Score:3, Interesting)
Re:SourceDepot = Perforce != VSS (Score:3, Interesting)
And when I worked there we used "Slime" for version control, VisualC as the IDE (Though some people chose to use another IDE).
MS has good people but a completely fscked development process.
One of only two jobs where I've been criticized for commenting my code. (Not lack of comments, but too many.)
God help them... (Score:2, Funny)
Re:God help them... (Score:2)
Go figure.
(They use some internal tool called SourceControl, or something like... the name escapes me)
Re:God help them... (Score:4, Interesting)
I had a contract project, a porting job. The platforms were Win32 (where it originated), UNIX/Linux (our port), Novell, and OS/2. We had the command line version because the Linux GUI core dumped every 5 seconds. But the command line version still sucked, and of course didn't know shit about line endings. We could script it with some extension mapping to try to do dos2unix/unix2dos, but good luck, because the command line version wouldn't have any useful exit() values. I have no idea what the Novell and OS/2 guys did.
Joel Spolsky (he's been on here before) wrote a bit about sucky SourceSafe [joelonsoftware.com] and how Microsoft really doesn't use it. Doesn't give me a lot of confidence in using it. He also had the link to the Usenix version of the talk given in the story.
Can't pull IE from Windows, huh? (Score:5, Insightful)
Two of those projects were "INetCore" and "INetServices".
So why can't you just build 2K without those two subprojects, or with stubs inserted for the functions declared in those projects?
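A "stub" in that sense would just be an export with the right signature that fails cleanly instead of doing any internet work. Something like this sketch (the function name and signature are invented for illustration, not an actual INetCore export):

    // Hypothetical stand-in for a removed INetCore function: it keeps
    // the linker and callers happy but does nothing.
    #include <windows.h>

    extern "C" HRESULT WINAPI InetConnectStub(const wchar_t* url, void** handle) {
        if (handle) *handle = NULL;
        return E_NOTIMPL;   // "not implemented" - callers have to cope
    }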
Re:Can't pull IE from Windows, huh? (Score:5, Informative)
Re:Can't pull IE from Windows, huh? (Score:2)
Re:Can't pull IE from Windows, huh? (Score:3, Insightful)
Re:Can't pull IE from Windows, huh? (Score:3, Informative)
The thing you must understand about Microsoft code is that everything is a component, OLE in the old days, COM now. That's why you can easily call Excel's charting functions from your own code, say. It's also why you can run macros inside Outlook, all Microsoft applications are components and scripting glue (like VBA). Wordpad, for example, is almost no code in and of itself, it's a rich text component, a toolbar component and so forth. If you want to build a custom web browser, you can just reuse the HTML renderer and whatever else you need from IE, they are all components.
But this also means that if the internet components were entirely removed, there would be no OS-level TCP/IP support, the online help viewer which uses the HTML renderer wouldn't work, etc. So that's why MS say they can't remove MSIE - because IExplore.exe on your hard drive is just the glue holding together a bunch of components that are provided by the OS and available to any application.
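That componentization is visible from any program: the HTML renderer that IE, the help viewer, and custom browsers all share can be instantiated directly through COM. A minimal sketch (error handling omitted; this assumes the stock MSHTML registration):

    // Create IE's HTML document component directly - the same piece
    // IExplore.exe is glue around.
    #include <mshtml.h>
    #include <objbase.h>

    int main() {
        CoInitialize(NULL);
        IHTMLDocument2* doc = NULL;
        HRESULT hr = CoCreateInstance(CLSID_HTMLDocument, NULL,
                                      CLSCTX_INPROC_SERVER,
                                      IID_IHTMLDocument2, (void**)&doc);
        if (SUCCEEDED(hr))
            doc->Release();   // a real host would load and render HTML here
        CoUninitialize();
        return 0;
    }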
Well, I stand corrected. (Score:2, Funny)
I always figured that their development methodology involved a room full of an infinite number of monkeys typing into Notepad. Learn something new every day.
Reading the Slideshow you'll find... (Score:4, Informative)
the later slides describe the NEW project resource management and development processes for the continuing development of Windows 2000 (before and up until after the release?)
Slides 23 and up tell you what they did and how well everything works on a project as large as Windows 2000 is.
This slide gives a summary of the new build processes: http://www.usenix.org/events/usenix-win2000/invitedtalks/lucovsky_html/sld033.htm [usenix.org]
Re:Reading the Slideshow you'll find... (Score:2)
says it all (Score:4, Interesting)
"Anything that crashes the OS is a bug. Very radical thinking inside of Microsoft considering Win16 was cooperative multi-tasking in a single address space,..."
So the BSODs were caused by the old-timers? Were they also the ones who designed in the feature that every fucking install of an application requires a reboot?
Re:says it all (Score:3, Interesting)
So what happened? Most developers did not bother finding out whether their install process required a reboot at the end; out of laziness they just assumed it always did, and they made the user reboot every time "just to be safe".
Re:says it all (Score:2, Interesting)
What was your first kernel and what was so funny about it?
Re:USB devices can require reboot on Win2K (Score:3)
Windows, a software engineering odyssey (Score:2)
The numbers aren't that large (Score:3, Insightful)
NT kernel problem is not software engineering (Score:4, Insightful)
UNIX and Linux are different. UNIX (at least Research UNIX) was constrained by its paradigms: it was vigorously policed by its developers. For Linux, something doesn't make it into the kernel unless it really scratches an itch that a lot of people have--the feedback is immediate and direct: no interest, no developers.
Microsoft software development doesn't operate in a competitive market of ideas (let alone a competitive market), it doesn't have a paradigm to focus it, and it doesn't even have resource constraints to focus it. It's nice that they make the software engineering work out, but the end result still is mediocre at best.
Requisite Karma Whoring... (Score:2, Interesting)
Just in case you wanted some more insight into working for the company. Fascinating stuff.
Microsoft Found Solutions to Their Problems (Score:5, Insightful)
Given the tone of most of the comments here, one might think that the slides merely reveal Microsoft's errors. In fact, they indicate what problems the company faced scaling their NT development team from 200 to 1400 programmers and their solutions. The conclusion is, "With the new environment in place, the team is working a lot like they did in the NT 3.1 days with a small, fast moving, development team."
As Linux grows, it is headed for the same sorts of problems. The open source movement can learn a lot from Microsoft's struggles. The fact that Linus opted to use a new source control system -- just as Microsoft realized that their in-house system was not up to the task and so switched -- gives me hope.
P.S. May we please have better summaries for the articles on the front page?
8 hours? Forgot something? (Score:4, Funny)
Some goofy Microsoft intern forgot to put -j 4 on the compilation.
Either that, or they compiled it on Win9x (which has NO multiprocessor support).
Software Engineering at Microsoft (Score:3, Funny)
1400? Try 3100! (Score:4, Interesting)
1,400 devs * 55,000 = 77,000,000
1,700 testers * 45,000 = 76,500,000
153,500,000 a year * 3 years (from slide 3) = 460,500,000
Include an overhead multiplier:
460,500,000 * 2.4 = 1,105,200,000
And we wind up with roughly US$1.1BB.
This [dwheeler.com] suggests that Win2K represents 20 million SLOC, just slightly higher than RH 6.2, at 17 and change.
His cost estimates place RH 6.2 at US$614,421,924.71
I suspect MS probably pays more per dev, but I have no proof, so I'll stick with the industry averages. Also, testers may have been shared across projects, MS can pool resources and bring overhead lower, etc...
I'm not drawing any conclusions, just compiling data...
Showstopper versus this Info (Score:5, Interesting)
What's interesting is comparing what Showstopper says to the claims in these slides.
The slides suggest early NT development was done by a small team of super l33t c0d3rz who took care of business and frowned upon slacking. However, the picture painted by the book is dramatically different -- people were forced to work around the clock, the team was dominated by a small gang of guys who were basically complete assholes, everybody walked on eggshells for fear of pissing off Dave Cutler, The New Savior, and NOBODY in the group ever knew what was really going on. The whole project was shrouded in mystery, even to people on the team, because basically everything existed in Cutler's head.
The only thing I see where Showstopper and the slideshow firmly agree is the slide labeled "Goal Setting".
I personally have a lot of other opinions about why some of the statistics may pan out the way they do (for example, how much hardware did you REALLY have to test with in NT 3.1 days, versus Win2K?) but I want to stay focused on the Showstopper/slideshow discrepancies, so I'll leave it at that.
The thing to realize about Showstopper is that it was based almost entirely on interviews with the people who were involved with the initial NT coding effort.
By comparison this slideshow was written by one guy, Mark Lucovsky, who gets lightly flamed in Showstopper (at best). Oddly, I grabbed Showstopper off my bookshelf and opened it straight to the page describing Lucovsky. Weird. Anyway, here are excerpts from a single paragraph: "...smart but immature... nevertheless angered teammates with his skepticism and self-serving judgements... relentlessly critical of others, constantly probing for weaknesses... 'Until you prove otherwise, you're wrong and he's right.'" Whew, hate to be THAT guy. It gets worse. One page later, a paragraph opens by simply saying, "Many people felt that Lucovsky was a jerk."
Given that, it wouldn't surprise me if Lucovsky was still just trying to justify the fact that the early NT dev team was comprised of a bunch of flakes who had to burn the candle at both ends to actually deliver anything.
Please understand I'm not necessarily defending any current MS practices, or even Win2K (which is still vastly superior to NT3.51). I've personally worked VERY closely with groups inside MS at different times (a couple times on-campus in Redmond), and I'll be the first to tell you the company is bureaucratic and packed to the gills with people who don't know what the hell they're doing -- just like every other company that employs tens of thousands of people.
What I *am* saying is that this slideshow is looking at the past with "rose-colored hindsight" and I believe the motives are suspect at best. Draw your conclusions with a grain of salt. (Enough metaphor-abuse for today.)
Do like johnjones suggested -- go buy or check out Showstopper and read it. It's interesting, informative, and it IS kind of funny. It's amazing they were able to produce anything at all. How's THAT conclusion for contrast with the slideshow? ;)
Re:What a waste of time and money! (Score:2)
Of course, knowing MS, they would do it in the middle of the day.
Re:What a waste of time and money! (Score:4, Insightful)
Only the NT build lab needs to rebuild everything. Individual developers only need to build their feature's DLL and EXE files.
Re:What a waste of time and money! (Score:2)
Re:What a waste of time and money! (Score:4, Interesting)
Imagine how hard it must be to co-ordinate a project that big without "management". I think Linux could gain by creating a kind of unofficial management structure to better co-ordinate some of the projects.
I see you missed the point (Score:2, Insightful)
It was only when they broke up into smaller, relatively independent groups, that they managed to regain some of the earlier productivity from their NT days.
And that's how Linux development works -- many small, relatively independent development groups.
With Windows, Microsoft even ties the applications into the Operating System, with the purpose of making the use of non-MS applications "a jolting experience" (as an internal Microsoft memo said about IE versus Netscape).
But with Linux, everything is developed to standardized interfaces. That makes it possible for the Kernel development to progress independently of the GUI development, which is independent of the development of the Desktop Managers, which is independent of the development of the Applications, which is independent of Distribution Packaging, and so on.
Even within the larger projects, such as the Kernel, or Mozilla, the work is divided into smaller, relatively independent modules.
And that's one of the reasons why Linux development is progressing so much faster than Windows.
Re:standard linux praise... (Score:2, Insightful)
Re:standard linux praise... (Score:2)
Re:standard linux praise... (Score:3, Funny)
All for wimps. I always start from stage -3. This means no machine-readable media whatsoever and a blank system EPROM. Nothing but source code printouts.
After 36 hours of entering bootstrap code via a bank of toggle switches, you can get to stage -2 (TTY keyboard and video on a temporary BIOS). After this, you've still got a long row to hoe before you get to a login prompt. It's well worth it though, if you want to know exactly what your system is running.
Re:A recipe for disaster (Score:2)
Re:A recipe for disaster (Score:2)
Re:A recipe for disaster (Score:5, Insightful)
I venture to guess, however, that your company is somewhat smaller than Microsoft, is held together by shared enthusiasm and the exhilaration of short-term releases, and that you don't face many of the problems that any large company, not just the Borg, does. I would never defend the quality of MS products, but anyone who has worked on large products with many existing customers at a large software company like an Oracle, Microsoft or IBM will understand that it is simply impossible to hire only expert programmers whose work never needs to be checked by anyone else and who don't need any supervision.
Some of your other statements are rather sweeping. Some parts of UML - such as object modelling - are very useful indeed and can act as highly rigorous sources for a lot of code and database generation or automated access. Others (like use cases, IMHO) suck and are of little use to programmers, though they help more in communication with PHBs and business types.
A lot of what you say is very true for small focused teams working in their bedrooms/garages/garretts but much less so for any large software developer who sells software for money. Your "expert-driven" approach would never work at a Microsoft.
Your last point, that OSS produces better results, is probably true. Certainly it's more cost-efficient. But does it produce profitable companies that make heaps of money? Maybe you don't like the idea of that. But most of the rest of the world, including your gray-haired neighbour who plans to retire on the proceeds of his portfolio, does.
use cases suck? (Score:2)
Others (like use cases, IMHO) suck and are of little use to programmers, though they help more in communication with PHBs and business types.
I've found use cases to be really useful. Two reasons:
But at the end of the day, bad programmers make worthless (but compliant!) UML diagrams/use cases/flowcharts, as do programmers who are forced to create them. Good programmers make good ones or don't need them.
Re:A recipe for disaster (Score:2)
Not to mention that any such company has a responsibility, driven by enlightened self-interest, to turn Jr. programmers into expert programmers.
Re:A recipe for disaster (Score:5, Insightful)
UML and other modelling fads. My former employer required the use of 65-page UML diagrams for the simplest command-line utilities. Why? Because it was popular, and the investors liked to make sure we were buzzword-compliant. UML is designed for non-technical audiences, and as such it flies in the face of the engineering goals it is supposed to serve.
I've found UML, or at least quasi-UML, useful; any time I design a system I draw a quick UML sketch just to help me think about what's involved. Unless, that is, it's something really dead simple... something equivalent to a homework assignment. Sometimes most of the really hard work goes into a good UML diagram, and the rest becomes easy.
But despite this, I can't help but reflect on your statement in utter horror. What the hell kind of UML diagram does one put together for, say, ls? Or cd? Or a numerical calculation?
Code review. Code review is a power trip at best, and a drain on morale at worst. If a programmer cannot be trusted to develop excellent code, he should be replaced with somebody who can. It's a tight labor market on the developers' side, so incompetent programmers should be spending their time reading O'Reilly books instead of playing games and looking at porn in their parents' basement.
I disagree with you on two fronts. One, I've always found code review beneficial for a project. Weaker coders learn good habits; stronger coders teach good habits; bugs not visible to some become visible to others; the general quality of code improves. People who can't deal with constructive criticism of their code make for bad team-mates.
Secondly, I've never met anyone who became a good programmer by reading books, even books as high quality as O'Reilly's. I learned to code by writing code and reading others' code. The books make handy references, but sticking to books is akin to trying to learn to write well by reading the dictionary.
Large, geographically concentrated development teams. The best work is emphatically not done by 1400 people on the Redmond campus. The best work is done by culling experts in individual niche areas from around the globe. Not surprisingly, this is the model that Linux and most Open Source software uses, and that is why OSS is phenomenally successful compared with any of its proprietary competition.
Most of Microsoft's problems can probably be directly attributed to the size of its development team. MS project designers might do well to re-read The Mythical Man-Month (if they never read it, they have no business being project designers, IMO).
Re:A recipe for disaster (Score:2)
There is code review and there is code review. I tend to take a very sour view of formal code review meetings, except under special circumstances: better understanding problem code, guiding new programmers (and, even then, I'm not sure), and carefully reviewing things that a) have to work very well and b) must be very well understood by those whose code will interact with them.
Formal code reviews, unless managed very well by a competent and confident moderator, can easily degenerate into "See what I know!" fests whereby lots of folks who haven't lived and died with a problem pile on the person who has.
Formal code reviews often have an unintended consequence: they can (not must, but can) reduce the level of informal review, coordination and help that takes place. You know what I'm talking about : "Leave me alone, kid. I've got my own schedule to meet. Besides, that's what the code review is for."
Informal code reviews -- "Hey Jack, take a look at this, would you?" should be happening all of the time. Some managers (and developers) may believe that time spent huddled over a terminal together is half as productive as time spent alone in front of a terminal, but nothing could be further from the truth.
Re:A recipe for disaster (Score:4, Interesting)
It doesn't take much time, but it's only the smallest CRs that get away without at least a few changes. Sometimes it's just comments, sometimes it's a better way to do something. At the least, everyone has a better idea what's going on in code they're not in right now, but very well might be in the near future. An added benefit is that people who see CR coming clean up their code a bit more than usual.
I agree-- formal CRs suck in most cases (although some critical apps developers like them for some bits of code that might, say, kill someone if they malfunction, or that take $10,000 to test). But the e-mail deal works really well for our team. But we don't have any assholes or know-it-alls, so that helps.
Code Review (Score:2)
Your design process is the real disaster recipe... (Score:4, Insightful)
Good engineering (of any kind) starts with design... a plan. I'm glad you don't build skyscrapers or airplanes. Oh boy. So you basically are thinking... what... that code should be reviewed after it has already hit QA or something? Or perhaps we shouldn't review code at all?
Here's a clue. If a developer is costing $20-40 per hour writing CRAPPY code... THAT is a far worse waste of time than taking a little time... reviewing the code... and correcting it if necessary.
Development isn't just writing code any way you want. You want things to be very solid, standardized, and consistent before it gets into beta. Using your way... you'd never know if the code was good or not. Apparently... to you... if it works... ship it! What? How do we know if the code is bad? We have to REVIEW it? What if the developer doesn't understand a certain design pattern and implemented it incorrectly? Hell... what if a bug or flaw is discovered during the review process?
These are all common issues in everyday development. It doesn't necessarily mean the developer is BAD. Rather... the developer is HUMAN.
Although... with your lack of a code review process, lack of system design process, and lack of formal check-in process... I am surprised that any decent code gets written at all. You're comparing apples and tractors. Financial gain or customer/user base size are NOT measures of good code, excellent development standards, or strong design processes. Although, I'm not certain you will understand what I'm saying here.
There is some excellent open-source software out there. Likewise, there is some excellent proprietary software out there.
And there is crappy software out there, too... for both worlds. Whether or not something is open source or proprietary says nothing about how it is written or how well it is designed.
This obviously is a huge troll that I'm feeding here.
Re:Your design process is the real disaster recipe (Score:3, Insightful)
Re:A recipe for disaster (Score:2, Insightful)
And even if no bugs are found, it helps to have another pair of eyes go over code for readability. It may make sense to you, but it may not to someone else. When you leave and that code needs fixing, NOBODY will understand it because they don't have preconceived notions about how it operates.
The BEST thing you can do to improve the quality of your code, and of your developers, is to code review before every checkin. Find someone who is as intelligent as you or more so, and have them scour your code while sitting next to you. At worst, they'll understand the code. At best, you'll find a bug before it goes out there, or you'll learn something new about the language/library you're using that you didn't know before.
Re:A recipe for disaster (Score:5, Insightful)
No, no, no. Code-review is VERY USEFUL. No, it won't catch architecture mistakes (necessarily). No, it won't catch design mistakes. Hopefully you already know how to design before you get your first software job.
What code-review catches is the annoying things that the best developers tend to think don't matter so much. Style-differences from company practices. Naming conventions not being followed. Poorly chosen variable-names. Lack of documentation.
In short, code-review makes your code more maintainable. Your company may not use it, but that doesn't make it useless.
Re:A recipe for disaster (Score:2, Informative)
Formal checkins. Make your own branch and go butt nutty in it. Sync to the trunk often. Let them review your changes and integrate at their own pace.

Code review. Once again - having anybody, even an inferior programmer, look over your code will do wonders for your own understanding and skills. I am a good coder, and I beg other people to review and comment. The more I ask, the fewer problems they find - I am getting better. I hope you do not assume that you have no way to improve - otherwise you are a big fat liar. My goal is to write code so clear that they can understand what it does and how without my help.

Large, geographically concentrated development teams. With that one I would agree. Adding people slows everything down. A full team should not be above 20 engineers, plus some QA - above that, keep splitting projects. If you think that is not possible, you are a lousy architect.
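Since other posts in this thread peg SourceDepot as a Perforce derivative, the branch-and-sync flow described above would look something like this in p4 terms (the depot paths are hypothetical):

    p4 integrate //depot/main/... //depot/dev/yourname/...   (make your own branch)
    p4 integrate //depot/main/... //depot/dev/yourname/...   (later: pull trunk changes again)
    p4 resolve                                               (merge them into your branch)
    p4 submit                                                (check in on your branch)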
Step aside, hippie. (Score:3, Informative)
While UML isn't the end-all, be-all, it is certainly not a "fad". When it comes right down to it, you will need to be able to describe the architecture of your code with something more than comment lines and manpages. And, with the "U" in UML standing for "Unified", there is the ability for a new-hire developer, or perhaps the purchaser of your source code, to understand what the hell is going on without poring over millions of lines of source code.
Code review is a power trip at best
I suppose you'd rather accept source code sight-unseen? True, there are good and bad ways to conduct code reviews, but all the code reviews I've been a part of have been fairly easygoing experiences and almost always helpful. Sometimes you really need another set of eyeballs to catch problems. Isn't that one of the good aspects of OSS??
Large, geographically concentrated development teams
I'm torn on this one. Yes, it's bad to simply throw a large number of developers on a team (unless you break them down... way down). On the other hand, you can't tell me that it's not easier to resolve a problem by walking over to the co-worker in the next cube than to email the co-worker who lives thousands of miles away. Didn't the formal release of Mozilla 1.0 get held up because a few key developers had not signed off on the new open source license and they simply could not be found??
Re:A recipe for disaster (Score:5, Informative)
Either you're a troll, or you've never done any real development.
UML, I can't comment on. Never did any. What I can say is that design is important, and shooting from the hip on 20 million lines of code won't get you very far. If UML helps you design, use UML.
Formal checkins. In large complex projects, you need to be absolutely sure about your units. There are so many places for things to interact; if you don't have them as solid as you can get them, you'll get so many interaction bugs you'll never get anything done.
Developer time costs $20-40 an hour. Ha, now I know you've never done real programming. Developer wages start at maybe $30/hr (not $20), up to $100/hr in spots. That's just wages, not benefits, taxes, all that stuff. If you have no experience with big projects, don't talk.
Code review. Code review is easily the best way of debugging. Study after study finds that code reviews catch more bugs per unit of time than any other technique. As a side benefit, it also transmits techniques from developer to developer - to developers who want to learn but are 1) too shy to ask or 2) don't know that there is a better way. I learned things in code reviews, some techniques I never thought of.
Can it be a power trip? Yeah. Can it lead to a clash of egos? Yeah, but that's up to the review lead to control. A good review lead will keep that in check.
Large, geographically concentrated development teams
Not surprisingly, this is the model that Linux and most Open Source software uses
They have no option, because they can't pay developers, so there's no chance of getting them into one concentrated area. There are pluses and minuses to the concentration.
why OSS is phenomenally successful compared with any of its proprietary competition
Sales? No contest. MS.
On what definition of success? Bugs? I've seen some really shitty OSS software. Yes, the kernel is high quality; so are Apache, FreeBSD, and others.
Re:A recipe for disaster (Score:4, Insightful)
Can you see the spelling error you made in this sentence? Did you mean to make that error?
If you can't even type error-free prose, how could you be expected to create error-free code?
People make errors. Code review helps reduce the effect of those errors.
Re: So, you've been a developer for ....a week? (Score:2)
> Spoken like a true idiot, my friend.
Surely a troll, IMO.
> The UML is NOT a fad.
Yeah, I'm usually pretty cynical about this kind of stuff, but I've found UML useful for documenting the basic structure of very complex programs.
Re:Found the original ppt file for those of you wi (Score:2, Interesting)
Hats off to the StarOffice team for a job well done.
Re:Linux Distros? (Score:2)
Besides, how do the compilers compare in speed? (That'd be an almost totally useless benchmark.)
All in all, I would say that a comparable gnu/linux system (kernel, base stuff, X, KDE or Gnome, plus a few other bits and bobs) would take a bit less time than this, but that's just a guess at best. Perhaps some of the Gentoo and LFS users can shed some light?
Re:MS Coders (Score:3, Insightful)
I was working on a pretty trivial part of NT, so the build system didn't affect me. However, when you walked around the halls you could see who checked in code that broke the build because they would have a "build breaker award" taped to their office window. It seemed to be in good fun, but I suppose it could result in a CYA mentality.
Also, I remember there being problems with source control, like the article mentioned, though not specific to NT. I seem to remember that Word Viewer used a different codestream from Word, and that the sample files in the SDK are merely very out-of-date versions of some of the small apps that ship with Windows.
-a
Re:What a DISMAL culture FAILURE. (Score:3, Informative)
NT4 came out on x86, Alpha, PowerPC and MIPS