Microsoft's Top Devs Don't Seem To Like Own Tools 496

ericatcw writes "Through tools such as Visual Basic and Visual Studio, Microsoft may have done more than any other vendor to make drag-and-drop-style programming mainstream. But its superstar developers seem to prefer old-school modes of crafting code. During the panel at the Professional Developers Conference earlier this month, the devs also revealed why they think writing tight, bare-metal code will come back into fashion, and why parallel programming hasn't caught up with the processors yet." These guys are senior enough that they don't seem to need to watch what they say and how it aligns with Microsoft's product roadmap. They are also dead funny. Here's Jeffrey Snover on managed code (being pushed by Microsoft through its Common Language Runtime tech): "Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore." Snover also joked that programming is getting so abstract, developers will soon have to use Natal to "write programs through interpretative dance."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Wow! (Score:5, Informative)

    by Anonymous Coward on Saturday November 28, 2009 @10:00PM (#30258230)
    Advanced developers who learned how to code on what would be considered bare-bones IDEs don't feel the need to use tools that are meant to let entry-level developers produce functional GUI applications without having to dedicate tons of hours.

    News at 11!
    • by icebike ( 68054 ) on Saturday November 28, 2009 @10:22PM (#30258332)

      Well you are very close to the mark here.

      The integrated IDEs arose to let the hobbyist and corporate IT newbie turn out something that has a good chance of being functional, if not maintainable or efficient. These tools also let specialists from other fields (accountants, meteorologists, scientists) actually turn out products.

      Efficiency, maintainability and extensibility don't matter for that type of programmer. They just need to put up a few screens to count the widgets or produce the daily report. It doesn't have to be portable or efficient, and chances are it will be tossed as soon as that programmer moves on to another job.

      Those programmers do manage to turn out a reasonable amount of customized applications, some of which are actually marketable. The vast majority are for in-house use. But some actually work quite well for specialized industry applications like Medical Billing, where the knowledge of the subject matter is more important than the efficiency of the code.

      OS-level code has to be much more efficient, and there is no substitute for knowing the programming language, the processor capabilities, and the compiler peculiarities. You cannot leave it to some IDE to put code behind a button that drags in half a ton of MS Foundation Classes, or uses some particular C/C++ construct that is horribly inefficient. You are basically always dealing with some data stream or message, doing X, Y and Z with it, and handing it off to the next task.

      As such, these guys virtually never see a piece of data all the way through the computer. Their customers are other pieces of software. Their wholesalers are yet more pieces of software. It's data-in, whack it, pound it, and pass it on sort of code, and a lot of it, and a lot of self-plagiarism. The IDEs just get in the way.

      • by KibibyteBrain ( 1455987 ) on Saturday November 28, 2009 @10:39PM (#30258446)
        Only one of these guys said anything about not liking to use an IDE. I use IDEs to write assembler language for microcontrollers at work every day. Sure, I could do it in an editor as well, but I much prefer the graphical debugger and simulator of my IDEs: being able to see all the dozens of control registers' and fuses' bits graphically during the execution of each instruction is easier for my mind to wrap itself around than a screen littered with hex or ones and zeros, at least sometimes. That said, my assembler's emitted machine code is no different than if I wrote it in Vim and then ran the command-line build tools, which are the same things my GUI runs when I press the associated F-key.

        So the IDEs really have nothing to do with the so-called "designers" you see in Visual Studio. And yes, it's true that no developer who was serious about maintaining a multi-year product could do it via the designers before; you just had no control over WTF was going on. Now with WPF and XAML you finally can use the designers in a maintainable fashion, but it's a bit too little, too late for most developers to care. You can write pure Windows code in C as a makefile project (so you have clear control over the build, even to the point of a different toolchain like GNU) just as well in the Visual Studio 2010 IDE as with Vim and the command-prompt tools. It's just a matter of whether the IDE at that point gets in the way more than it helps.
        • Re: (Score:3, Informative)

          by TapeCutter ( 624760 ) *
          I've been making a good living from MSVC since v1.5 arrived on Win3.1. My current place of employment uses it to write a single code base that is later compiled in 32/64-bit versions for Windows, Linux, Solaris, HP-UX and AIX. IMHO MSVC's edit, search and (Windows) debug features are second to none. Not only that, but as the GP implies, creating a large project that builds multiple sub-projects is a snap compared to writing makefiles. Some of our build scripts create dozens of binaries, the MSVC command to kick
    • by KingSkippus ( 799657 ) on Saturday November 28, 2009 @10:47PM (#30258500) Homepage Journal

      The original article and the summary both come off as rather smug to me. The truth of the matter is that you need both low-level nitty-gritty programming and high-level programming. It depends on what you're using it for.

      Think of it this way. You have people who make pipes. You know, the kind used in plumbing. Fittings, too. And they're very good at it. If you take your average house builder and try to get him to make a pipe, he'll be hopelessly bad at it. But you know what those guys who build houses are good at? Putting the pipes together in meaningful ways to get what they need (i.e. building a house) done. Take a guy who's brilliant at making pipes and fittings and try to get him to build a house. Yeah, not such a superstar now.

      It's the same with programmers. Tell someone who is very good at writing low-level code, "I need a killer efficient compiler." Give them enough time, and they can churn it out and make it wicked optimized. Tell them, "I need a new type of control that works in this specific way and with crucial efficiency," and they're your guys. Tell them, "I need a new application entirely from scratch that can process my specific business logic, it needs to look and feel like a standard Windows application, it needs to be easy for end users to figure out and work with, and we need a working version in a couple of weeks," and they'll probably laugh at you. Yet that's what those people they're looking down on, the people developing with higher-level abstracted languages, are doing every day.

      In my experience, competence != usefulness. They're not opposites, mind you, but it takes both types. It takes the people who work with the low-level nitty-gritty stuff, and it takes the people who use what they churn out to actually accomplish real-world productive things. One isn't smarter, one isn't better, neither should be looked down upon. Both are necessary.

      • by Anonymous Coward on Sunday November 29, 2009 @12:36AM (#30259100)

        Also, experience has taught us that even elite programmers make errors, and the most common categories of errors happen to be of the kind that can be prevented by designing your language or platform properly. This results in enhanced reliability, security and performance, due to the lack of null-pointer dereferences, buffer overflows, memory leaks, and so on and so forth. Now, I'm not saying that you therefore have to use Dotnet; I'm not really a fan of having to install yet another disk space munching VM, when I already need Java (among other things) for my studies and to run a few handy free tools that I depend upon to handle certain parts of the life I enjoy efficiently. However, I can say that if Microsoft, and programmers across the globe, would deprecate and replace old APIs that for example use uncounted strings, non-garbage-collected memory, and so on, the world would be a better place. And you don't need Dotnet or Java - you can do all of these things in plain C++ or macroassembler if you want.
        "Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore."
        Not particularly funny, and not an argument to go back to the olden days. Quite the opposite.

        • Mod parent UP! (Score:4, Interesting)

          by KingSkippus ( 799657 ) on Sunday November 29, 2009 @12:55AM (#30259180) Homepage Journal

          Very insightful reply, and you're 100% correct. That was my other line of thought. This is the company (and probably some of these "superstar" programmers are the very people) that has given us a litany of buffer overruns, security holes, and other low-level programming "features" over the years.

          I'm not saying that no one should ever program at a low level, but I am saying that people shouldn't be afraid to take advantage of features of managed code and other conveniences. Don't program at a low level if you don't have to. You're only making your life harder for no reason, and needlessly exposing yourself to risks of fundamental errors that are much worse. Take advantage of all of the hard work that others have already done.

        • Re: (Score:3, Informative)

          Now, I'm not saying that you therefore have to use Dotnet; I'm not really a fan of having to install yet another disk space munching VM

          That choice has already effectively been made for you - Windows has shipped with some version of .NET since Windows Server 2003.

        • by gmack ( 197796 ) <[gmack] [at] [innerfire.net]> on Sunday November 29, 2009 @02:50AM (#30259574) Homepage Journal

          Just because you don't know how to use non-garbage-collected memory does not mean that functions that don't use it should be phased out. Garbage collection trades flexibility for safety, and if you remove the flexibility you will find a whole class of programs that would be a lot less efficient. These days a lot of work is being put into making things like NULL-pointer bugs and memory overflows simply crash instead of allowing a backdoor into the system, and for some programmers crashing vs. running slow is a good trade-off.

          I need those functions to do my job, since I write network-based apps that require memory reuse and pointers to be able to process data as quickly as possible, and I've seen the results when some of our competition has attempted to do my job with garbage-collected languages (generally 3x the hardware requirements, 5x if they used Java).

          Not everything is a desktop application.

          • by ultranova ( 717540 ) on Sunday November 29, 2009 @09:37AM (#30260930)

            Just because you don't know how to use non garbage collected memory does not mean that functions that don't use it should be phased out.

            Nice ad hominem. But the issue isn't whether he can use it; the issue is whether everyone whose code is running on the machine can use it, especially the guys who wrote the libraries.

            I need those functions to do my job since I write network based apps that require memory reuse and pointers to be able to process data as quickly as possible.and I've seen the results when some of our competition has attempted to do my job with garbage collected languages. (generally 3x the hardware requirements, 5x if they used java)

            This is a strange statement. Garbage collection in no way prevents memory reuse; it simply releases blocks of memory automatically once they can no longer be reached by following pointers from the root set. In fact, there are garbage-collection libraries for plain C, some of which don't even require recompiling applications but simply replace malloc() and free().

            Besides, network applications are precisely those where it might be worthwhile to use 5x hardware just to get some extra protection.

            Not everything is a desktop application.


            True. Not every application has someone babysitting them all the time and validating all their inputs.

        • Re: (Score:3, Insightful)

          by PitaBred ( 632671 )
          It's actually a good argument. With antilock brakes, most any moron can maintain control of their vehicle in panic-stop situations. The trade-off is that stopping distance can increase on loose surfaces. So, sure, go without ABS if you're skilled enough to do so. You'll get some extra performance when you're racing. But if you're just driving to the grocery store in the winter? I'd hope that most of the soccer moms on the road with me have ABS.
    • Re:Wow! (Score:5, Interesting)

      by rickb928 ( 945187 ) on Saturday November 28, 2009 @11:00PM (#30258596) Homepage Journal

      We've got a crew of .NET developers writing us an updated replacement to an existing VB app. I keep calling the new interface Fisher-Price, but actually it's Hasbro. I was mistaken, but an easy mistake to make.

      Where it should absolutely take two clicks to make something happen, they found a way to make it five. Where you should enter a date, they found a way to not allow special characters, like '/'. Where you should enter an address, well, no spaces allowed. Basic functionality is lacking for several features, but the interface is there.

      And no help files yet, despite beta release pending in a few days. In fact, though we have well over 1,000 pages of documentation, there seems to be no functional install that preserves the users' data in case they need to reinstall. I'm told that the next build introduces that.

      For all the fancy IDEs, tools, etc, these guys are still not getting it done. I dare not say how far behind schedule this is, nor what the actual platform is, or someone will guess and raise hell over how anyone could be so insensitive as to speak the truth.

      Your tools mean crap, if you're incapable. Just as your plumber would probably suck at actually making the pipe, your developers will suck if they don't 'get' what your users actually do.

      Of course, it would help if they asked what the users actually do.

      But I'm not bitter. I get to support this. Plenty of work.

      • Re:Wow! (Score:5, Interesting)

        by __aasqbs9791 ( 1402899 ) on Sunday November 29, 2009 @12:06AM (#30258944)

        I just wanted to let you know I feel your pain. I worked at this place a while back and I really liked my job. It didn't pay that well, but I felt important and had a massive amount of freedom. Then they hired a consultant to come in and take over IT. He knew how to run a business, but next to nothing about IT (though he knew just enough lingo to fool people who did, for a few days). His 'programmer' didn't understand how to navigate file systems on Windows with Perl (and was supposedly a Perl guy). Being a Linux guy myself, I figured maybe he was, too. No, he had never even used Linux. Once I found that out I started to get rather scared and discouraged, because he was reworking a complicated, arcane, mission-critical system. I demanded all passwords be changed and that I not be given any of them, because I didn't want to be blamed when they screwed everything up (plausible deniability). My bosses assured me they wouldn't blame me, but finally relented; another month later they fired the guys because they couldn't get anything working. At all. I even told them where to start to get a feel for what they needed to be able to do on it, and they still couldn't do it. They didn't even know enough to mess it up (generally the easiest thing to do).

        So management's answer was to just not have any sort of IT department at all. I could do all of the old IT manager's job, plus my old one, for the same pay and no possibility for advancement. So I gave them a month's notice and left. Probably not the smartest thing I've ever done (the economy tanked about 6 months later), but since most everyone else had left or been laid off around the same time, I'm not sure how much difference doing otherwise would have made.

      • Re:Wow! (Score:4, Interesting)

        by ducomputergeek ( 595742 ) on Sunday November 29, 2009 @01:24AM (#30259302)

        Of course, it would help if they asked what the users actually do.


        We had the advantage of a small business owner wanting our software developed because he thought "It should work like this". So we made it work like that, and a lot of other small business owners found it to make sense and to be relatively easy to use. There were a couple of quirks, but that's not good enough. Not for me.

        And this is where so many others fail. After the phase 1 deployment of our product (about 100 installs), I drove/flew around to our customers 6 months later, stopped by in person, and asked as the first question: "What doesn't work?" followed by "How can it work better?"

    • by NoYob ( 1630681 )
      When I have to write a Windows GUI app, C# rocks! I can design the UI, whip off the code, and be done with it. It's better than MFC, and after writing countless message loops in Win32 (and OS/2, for that matter), I don't think writing more GUI boilerplate code, using the APIs to match resource IDs, and all that mindless coding will do any good, especially when I need the time to figure out an algorithm or other problems. And I can still do low-level stuff (really low level) with P/Invokes, so the only thing I'm
    • Re:Wow! (Score:4, Insightful)

      by ClosedSource ( 238333 ) on Saturday November 28, 2009 @11:12PM (#30258674)

      Well, I like some of those guys, but as someone who has been closer to the bare metal than they have (I'm a former Atari 2600 programmer), I'd say that the habits of old pros say little about the quality of today's tools.

      We used 6502 cross-compilers on a PDPxx and VAX, not because we thought the command-line was better but because that was the best we had at the time.

      BTW, I'm not trying to say that I'm better or smarter than those other guys, I just have written very low-level, real-time code.

      • Re: (Score:3, Funny)

        by gandhi_2 ( 1108023 )

        In the Army, this is called "back when it was hard".

        In this context:

        How many developers does it take to screw in a light bulb?

        2: one screws it in, the other one talks about how hard it used to be.

  • So what? (Score:4, Insightful)

    by Aphoxema ( 1088507 ) * on Saturday November 28, 2009 @10:01PM (#30258234) Homepage Journal

    I hate Microsoft more than anyone, but... I really don't see an issue or any hypocrisy here.

    • Re:So what? (Score:4, Informative)

      by ceeam ( 39911 ) on Saturday November 28, 2009 @10:23PM (#30258346)

      That's because you weren't reading the ads - direct or indirect - of these MS "dev tools" (in magazines etc)

      And you haven't been affected by managers who were reading them.

    • Re:So what? (Score:5, Insightful)

      by Shakrai ( 717556 ) on Saturday November 28, 2009 @10:25PM (#30258366) Journal

      I really don't see an issue or any hypocrisy here.

      Yeah, really. Senior Engineers disagree with company marketing strategy and prefer to keep things simple. That isn't newsworthy -- that's a Dilbert strip ;)

      • Re:So what? (Score:5, Insightful)

        by shutdown -p now ( 807394 ) on Saturday November 28, 2009 @11:40PM (#30258818) Journal

        Yeah, really. Senior Engineers disagree with company marketing strategy ...

        Who said that "graphical programming" is company marketing strategy today? Oh, sorry, it's another kdawson story; you expected any facts here? Let me explain then.

        Anyone who deals with .NET tools knows that there has been a recent shift back towards code. For example, WinForms development was too tedious without the visual drag & drop form editor, but WPF markup is best hand-coded, just like HTML (VS provides a visual editor too, but hardly anyone uses it for anything except a quick preview). Or take what used to be called "typed datasets" - also very designer-centric, but with LINQ to SQL and Entity Framework, again, most people stick to writing code and mappings in XML by hand.

        In fact, it's easy to find out that much if you just look up the names mentioned in TFA. For example, who is Don Box? He's working on Microsoft's "Oslo" project [wikipedia.org], a next-gen modeling platform which was hyped [msdn.com] back at PDC2008, and all the Microsoft managers in the division blogged about how it's the next big thing, etc. And the main difference between that platform and the existing "DSL" tools in VS2008? Oslo is centered on text-based DSLs, and comes with an Emacs-like editor [msdn.com] which can handle them.

        In short, developers of the new tools for Microsoft development platform - which are fully backed by marketing - criticize some aspects of the previous generation of tools. Surprise, eh?

        But go ahead and ask them what they use to edit C# code that they write, and I bet you'll hear "why, VS of course".

        Then there's Jeffrey's comment on .NET. I don't see anything fundamentally wrong with it, and if you RTFA, you'll see that he is an architect for PowerShell - a very high-level scripting/shell language built on top of .NET! To interpret his comment as a criticism of managed code, when he is in fact the one "pushing" for it via the tech he works on, is rather disingenuous.

    • Re:So what? (Score:5, Funny)

      by gzipped_tar ( 1151931 ) on Saturday November 28, 2009 @11:10PM (#30258660) Journal

      I hate Microsoft more than anyone

      See, being subjected to IDEs has lowered your ability to detect faulty code. You're basically saying A > A, since this "anyone" includes "you" too. ;)

  • pros and cons (Score:2, Interesting)

    by gcnaddict ( 841664 )
    The only pro: anyone can probably learn to write some sort of simple application with Microsoft's tools and managed code.

    The cons: managed code doesn't give nearly as much control because it tries to spoonfeed you. This is basically a catch-all for every con anyone can think of for managed code.
    • Re:pros and cons (Score:5, Insightful)

      by sys.stdout.write ( 1551563 ) on Saturday November 28, 2009 @10:28PM (#30258378)
      C'mon, this is unfair. By your logic we shouldn't have Perl or Python or any other scripting language because they "[don't] give nearly as much control because it tries to spoonfeed you."

      There are lots of situations when you don't need to twiddle the bits or delete your own allocated memory. What's wrong with simplifying the language for simplified tasks?

      It's not like Microsoft doesn't support low-level languages.
    • Re: (Score:3, Insightful)

      by wwahammy ( 765566 )
      Managed code takes some control away from the developer, but is it for the best that the developer has that control?

      For example, think of the types of errors leading to security bugs. A lot of them have to do with buffer overflows, primarily in the area of string manipulation. These are easy mistakes to make in C or C++. Hell, Microsoft and others have tried to modify the C runtime library to have "safer" versions of the string-manipulation functions because these errors continue to happen. Now consider a managed lang
      • Re: (Score:3, Interesting)

        by ardor ( 673957 )

        But the GC does not solve two things:
        1) Freeing up resources other than memory (this is only possible with a deterministic GC and RAII/destructors, or with refcounting instead of a GC)
        2) Taking up tons of RAM because of unnecessary allocations (I've seen Java code that allocates MBs in tight loops...)

  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Saturday November 28, 2009 @10:03PM (#30258248)

    Yes, in some respects, programming is becoming easier and more unqualified people are able to do it.

    But I think that these guys are really missing the boat. The closer the programming environment can come to providing domain-relevant expression tools to the user, the better they will be able to create programs that fit their domain.

    In addition, content these days is a form of programming. Whether it is HTML/CSS or word processing or spreadsheets, the distinct line between what is a program and what is pure data is blurred beyond recognition. So a programming language for interpretive dance would probably find the Natal very useful.

    • Re: (Score:3, Insightful)

      by MBCook ( 132727 )

      Wow. BadAnalogyGuy got an Insightful. Someone didn't read the full comment.

      Of course, this isn't true. The thing about end users is, they generally don't know what they want. Even if they had a tool that would do whatever they say, it wouldn't solve their problem, because they don't know how to formulate it. The tool would need to read their mind, to the point of making something they didn't even realize they really wanted.

      • Re: (Score:3, Interesting)

        by mhelander ( 1307061 )

        There's truth to what you are saying - I'll bet any senior developer can tell war stories for hours on the topic of users who don't know what they want - but BAG's comment was still very insightful.

        Despite how readily domain experts (that is, our customers) disappoint us when it comes to grasping the most basic stuff such as C, Java, SQL or even HTML, it is a mistake to think that they are stupid or that they don't know *their* domains very well (the most basic stuff of which we may then find ourselves str

    • Re: (Score:2, Insightful)

      by icebike ( 68054 )

      The closer the programming environment can come to providing domain-relevant expression tools to the user, the better they will be able to create programs that fit their domain.

      Well said.

      Nobody does accounting better than an accountant.

      But it's not always the best idea to hand data-system development to an accountant and hope for the best. Someone has to guide that guy through the data editing, manipulation, and storage, just like that guy has to guide the programmer on how to keep his company books, file taxes, etc.

      And this is where the current crop of tools fail. They let you build things that can go horribly wrong, because of simple errors that a professional programmer might have

    The reason programming became easier is that it was really hard to teach college graduates and other people how to manage their own code: software maintenance, garbage collection, memory management, error trapping, management of pointers, etc. So the programming languages evolved to support the lowest-common-denominator programmers that the colleges kept producing.

      In the 1980's when I first went to college for Computer Science, we got taught a whole lot of techniques and methods, that they don't teac

  • You don't have to be a crack programmer or have a team of them to publish great software on a deadline.

    Yes, it helps. A lot. And in a serious large scale development effort you want as many as you can get...

    But it's good to be able to be useful without having to be elite.

  • I use Visual Studio because I couldn't program my way out of a wet paper bag. I'd be a bit concerned if the people writing the application were similarly impaired.
    VB.NET and Microsoft's other tools make programming possible. People on Slashdot will argue that this leads to bad applications, but the choice is between bad applications and no applications, not between bad applications and good applications. Granted, sometimes bad applications are dangerous, but that's not a sufficient rationale to withhold these type
  • by timmarhy ( 659436 ) on Saturday November 28, 2009 @10:10PM (#30258270)
    so MS has elitist morons in their ranks as well, how is this news?
  • It does not affect my decisions at all.
    Businesses aren't in business to push programming ideology. They are in business to make money. If I need an application I'm going to get the application that does the job for the least amount of money (all the caveats about it not being poorly written and being moderately open to possible future expansion, etc.. apply). If I need bare-metal code then I'll get a guy to do that. If VB will do the job then I'm going to get a guy to do that and probably a bit cheaper.

  • modify that analogy (Score:5, Interesting)

    by v1 ( 525388 ) on Saturday November 28, 2009 @10:15PM (#30258296) Homepage Journal

    "Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore."

    Might have been more appropriate to compare it to how people in the high-performance arena (NASCAR) don't like antilock brakes, because of their limits and the separation you get from the task at hand (you lose your "feel for the road").

    Though I'm a little strangely biased: I miss the days of assembly, when 10K was a LOT of code to write to solve a problem, and things ran at blindingly fast speed with almost no disk or memory footprint. Nowadays, Hello World is a huge production in itself. 97% of today's coders have no idea what they've missed out on and just accept what they've got. Even someone who understands the nerf tools like VB at a lower level can get sooo much more out of them. I recall taking someone's crypto code in VB and producing a several-thousand-fold speed boost because of my understanding of how VB was translating things. They didn't know what to say; they'd just accepted that what they were doing was going to be dog slow. (And unfortunately the users are also falling under the same hypnosis.)

    • by Lord Grey ( 463613 ) * on Sunday November 29, 2009 @12:34AM (#30259092)

      ... 97% of today's coders don't have any idea what they've missed out on and just accept what they've got. ...

      My apologies for snipping such a large portion of your reply, but that one sentence from your post nicely sums up so many of the problems with new coders it deserves calling out.

      Disclaimer: I'm an old fart when it comes to programming. I admit it. I like bare metal programming, high-performance applications with minimal footprint, and elegant solutions to non-trivial problems. I don't avoid kernel-level threads; they're a useful tool.

      The company I work for has hired a large number of programmers over the last year in order to replace a number of aging systems. I've interviewed a lot of these people, and I've worked with most of the ones we've hired on various parts of the overall project. The newer programmers know quite a lot about available frameworks and their general capabilities. They've been taught the 80/20 rule early on, and they embraced it: when faced with a new task, these people find something that already exists and set about modifying it. All that is fine for applications that are of a certain size. A size that, apparently, is about the size of school projects and therefore succeeds admirably when graded.

      So what I've seen coming through the door are people who can put Lego blocks together. They're used to that type of problem solving. They've been taught to download 80% of the solution, then "fix it" so it also does the other 20%. This type of problem solving works well when you're building Lego-block-shaped solutions. That fails to happen much of the time, however. Most real-world solutions -- you know, the kind that are complex enough that someone is willing to pay an actual salary to solve -- don't look like a collection of Lego blocks. The amount of custom code grows and grows as more and more Lego blocks are added. Interoperability problems between the Lego blocks start encompassing the majority of coding effort. The overall system gains complexity at an alarming rate. Things start to suck, both from the programmer's perspective as well as from a systems perspective.

      The bad part of this, and to bring things back to my original point, is that these newer programmers expect it to be that way. What's worse, at least from my point of view, is that this entire mentality has been around long enough for these programmers to stop coding and start managing other programmers. So now we have people who build things that suck, and managers who expect it to suck. Expectations are lowered and, unfortunately, met.

      Google and Apple seem unafraid to break this cycle, albeit in different ways. So hope is not entirely lost. Maybe that's the 3% you alluded to in your original post.

    • by drsmithy ( 35869 ) <drsmithy@gm a i l .com> on Sunday November 29, 2009 @01:53AM (#30259396)

      Might have been more appropriate to compare it in that people in the high-performance arena (NASCAR) don't like antilock brakes because of their limits and the separation you get from the task at hand (you lose your "feel for the road").

      I always laugh when I read this sort of thing.

      In *real* high performance racing - Formula 1 - ABS (along with traction control, launch control, active suspension, and a whole bunch of other fancy electronics that basically turned the cars into ludicrously fast go-karts) was used very successfully and then banned because it could do a far, far better job than any human.

  • This seems a good place to point out one of the chronic errors of people talking about software development...

    Abstract does not mean slow, bloated, inefficient, or incomprehensible.

    Having the wrong abstraction for the task at hand, however, often does. And blindly questing after "managed" "portable" and "high-level" is a good way to get abstractions which work poorly for *any* task. At best, you get Java/.Net/Javascript... tolerable for many tasks, and completely useless for others.

  • by Opportunist ( 166417 ) on Saturday November 28, 2009 @10:23PM (#30258348)

    I could write a lengthy essay about how old programmers don't like to use new tools that offer them little because they already know all the tricks and gadgets for their old, "inferior" and more complicated tools, while new tools are perfect for new programmers because they don't have to learn so much to achieve the same results, because those tools are easier to use and the learning curve isn't so steep until you have a result, but I think I can sum it up in a single word:


    • Re: (Score:3, Insightful)

      by RAMMS+EIN ( 578166 )

      In my experience, a lot of these "tools that are perfect for new programmers because they don't have to learn so much" really mean that you spend a lot of time learning the tool, and _then_ have to still learn what's really happening if you ever want to make it to the level of the "old programmers", and dog forbid you are ever required to use a different tool.

      To add insult to injury, the focus on the tool usually means there is so much boilerplate before you actually get to understanding the programs you write...

      • Well, that's what the industry wants.

        Look at it sensibly. Yes, RAD tools (and let's face it, that's what VS is under the hood) abstract away a lot of the "inner workings" of programs. Ask 10 RAD programmers for the difference between a compiler and a linker and 5 of them will stare at you blankly. Ask the remaining 5 why there are two steps in the first place and 4 more will go "ummmm...".

        But at the end of the day, the customer (or boss, if the programmers are employees) does not care. They care that they produce code...

  • by ChienAndalu ( 1293930 ) on Saturday November 28, 2009 @10:37PM (#30258436)

    do they use vim or emacs now?

  • Good debugger (Score:3, Interesting)

    by jpmorgan ( 517966 ) on Saturday November 28, 2009 @10:39PM (#30258444) Homepage

    Intellisense is pretty slick, but overall if I'm doing development on Windows I'd rather use Emacs as my text editor.

    Of course, Visual C++ makes a fantastic debugger. It's almost good enough to forgive Windows for its lack of valgrind. Almost.

  • by TheModelEskimo ( 968202 ) on Saturday November 28, 2009 @10:48PM (#30258514)
    Our programmers are getting a bad rep because of our coding-for-weenies tradition. Can you please run an article that makes Microsoft programmers look like total badasses?

    -Steve B.
  • Leaks like a sieve (Score:4, Interesting)

    by ToasterTester ( 95180 ) on Saturday November 28, 2009 @11:02PM (#30258614)

    I'd be frustrated too trying to write code with tools that generate memory leaks for days and suck at returning freed memory to the system. I remember one version of Word where you could start it up and just let it sit; within an hour or so Windows would crash. Then there was the version of Excel that shipped with debug code because the stripped version would never pass QA. Ah, fine tools.

  • by Rufty ( 37223 ) on Saturday November 28, 2009 @11:19PM (#30258714) Homepage
    Don't know about interpretive dance, but I'm guessing that the MFC was written in abstract pottery [kinsaleceramics.com].
  • by melted ( 227442 ) on Sunday November 29, 2009 @01:05AM (#30259224) Homepage

    The real reason they don't use Visual Studio is far more prosaic: the build environment of most Microsoft products does not support Visual Studio project files. Their products are built using a system called CoreXT, basically a set of binary tools and scripts cobbled together by build engineers and developers over the past decade or so. CoreXT uses a lot of different crap (make, perl, compilers, and so on), and all tools and SDKs are checked in and versioned. The upside is that you can roll back your Source Depot (Microsoft's own flavor of Perforce) enlistment to an earlier date and be sure things will build exactly the same way, and once you enlist, you get a repeatable, isolated build environment where you can guarantee the correctness of versions for all tools, compilers and libraries (a Java developer's wet dream, even though they don't know it). The downside is that you have to maintain the makefiles by hand, and you can't use Visual Studio, because there are no project files checked in; even if there are, most people don't use them and they are not updated, so you can count on them being broken.

    I did a lot of my coding in either Notepad2, or in a separate project in Visual Studio against a test harness emulating the rest of the project (what Enterprise Java types call a "mock"). Some folks used Ultra Edit or vi, or EMACS. For some just a bare Notepad did the trick. Some stuck with Visual Studio, which in their case was just a glorified Notepad with autoindent since it doesn't support build or Intellisense if you don't have a project file.

    Yes, it's an enormous waste of time, and yes, it was painful. But CoreXT is so integrated into the rest of the dev pipeline that replacing it with something else in a large product is a major, destabilizing endeavor that is bound to undo at least some of the work around gated check-in infrastructure, test infrastructure, automated deployment infrastructure and god knows what else, so few teams ever attempt it. Now naturally, DevDiv eats their own dogfood, so they were one of the first teams to switch completely to MSBuild. It took something like a year in their case, they did it gradually, from the leaves down the tree. I'm sure if they had a choice, they would be using CoreXT to this day though, and fighting with incremental build issues. :-)

    Recently, a few more teams have adopted MSBuild. They can actually open their entire projects in Visual Studio and rebuild them. If they have test infrastructure deployed on the side, some of them can even test the product without waiting for it to deploy. So I predict that as more and more teams adopt MSBuild (this in itself could take another decade easily), these "senior" folks will come around to appreciate its benefits. It's awfully handy when you can set a conditional breakpoint on your local box and step through things.

  • Oh please... (Score:5, Insightful)

    by bertok ( 226922 ) on Sunday November 29, 2009 @02:42AM (#30259542)

    I can't stand it when Microsoft developers talk about multi-threaded programming when the entire corporation has done the absolute bare minimum to make developers' lives easier. No wonder they don't like using their own tools; their tools are terrible.

    Many years ago, a brilliant third-party multi-threaded library [oswego.edu] was released for Java by a professor at Oswego University. I used it in several large production apps, and it absolutely rocked. You could build up safe, reliable, scalable multi-threaded applications by simply snapping together flexible pieces like Lego. It was so good that it became part of Sun's Java standard library, and it's now called "util.concurrent". Compared to having to "hand craft" multi-threaded code in C++, it was wonderful. It's as if the lights had just turned on and everything had become clear to me.
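    For anyone who hasn't run across it: that library survives largely intact as java.util.concurrent in the standard JDK. Below is a minimal sketch of the snap-together style the parent describes, using only the standard ExecutorService and ConcurrentLinkedQueue APIs (the sum-of-squares task itself is just an illustration, not anything from the library):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SnapTogether {
    // Sum the squares of 1..n by fanning work out to a thread pool and
    // collecting results in a lock-free queue: no hand-rolled threads
    // and no explicit locks anywhere in sight.
    static long sumOfSquares(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        final ConcurrentLinkedQueue<Long> results =
                new ConcurrentLinkedQueue<Long>();
        for (int i = 1; i <= n; i++) {
            final long v = i;
            pool.execute(new Runnable() {
                public void run() { results.add(v * v); }
            });
        }
        pool.shutdown();                              // no new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // wait for workers
        long total = 0;
        for (long r : results) total += r;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sumOfSquares(10)); // 385
    }
}
```

    The executor and the lock-free queue do all the coordination; that is the "Lego" quality the parent is talking about, versus hand-crafting threads and kernel locks.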

    Now that I'm a C# dev, it's been a huge step backwards, doubly so because .NET was developed after the Oswego library was already popular, so Microsoft must have seen it and just flat out ignored it. For years afterwards, the entirety of multi-threading in both .NET and C++ was "threads" and "locks". The one nicety they included was an anemic thread pool in .NET which was just usable enough for the most basic tasks, but couldn't handle any real load. Even the locks were heavyweight inter-process kernel locks that are unusably slow for many tasks.

    It's only now in .NET 4 (which won't be final until 2010) that they are adding a small set of very basic lock-free containers, light-weight locks, and actual interfaces that one can implement in order to customize behavior. It's all still very basic, and nowhere near as flexible, powerful, or comprehensive as the Java APIs that are years old now.

    Microsoft's general attitude to API design is so bad that it can only be described as wilful ignorance. Reading articles evangelizing "modern multithreaded programming to better utilize new multi core processors" somehow feels like a religious zealot harping on about their appreciation of pure rational logic and science.

  • by tjstork ( 137384 ) <todd.bandrowsky@NosPAm.gmail.com> on Sunday November 29, 2009 @08:14AM (#30260520) Homepage Journal

    If only Microsoft's architecture and best-practices groups actually worked to leverage the efficiency of the tools, rather than just drain it. The thing about Microsoft is that the tools are good, yes, but then they sell them with these practices and recommendations that just drain innovation and drag all developers down to a lowest common denominator.

    It's really Taylorism all over again, applied to computer programming. The problem is, Taylorism is what GM and Ford and Chrysler do, and the unions are locked into it. Just as we have pipefitter and machinist titles of varying kinds on the shop floor at an old-style manufacturing plant, we have guys being pushed into database, UI design, or middle-tier roles, when really, 90% of all projects could be done by one guy putting together a halfway decent screen in a craftsmanlike fashion.

    At this point, Microsoft is headed the same way as GM: recruiting a lot of the best engineers, then just killing them in red tape, and delivering products that increasingly fail to captivate their market.

  • by A Guy From Ottawa ( 599281 ) on Sunday November 29, 2009 @01:32PM (#30262420)

    I was at the talk, and yes, Don Box said "I will fight you if you try to take away my text editor," but it was after being asked a leading question by Erik Meijer, something along the lines of "will we ever write software entirely without writing text?"

    However, what was Don doing for the rest of the PDC? He was hawking Entity Framework and M, both of which allow users to model data access using rich graphical tools!

    The talk is here: http://microsoftpdc.com/Sessions/FT52 [microsoftpdc.com].

  • by Animats ( 122034 ) on Sunday November 29, 2009 @02:05PM (#30262690) Homepage

    Ask yourself why we have "builds", where everything gets rebuilt. Do I have to have my ICs re-fabbed when I change the PC board design? No. We're still not doing components right.

    Historically, the big problem came from C include files. Everything but the kitchen sink is in there. There's no language-enforced separation between interface (the parts clients of the module see and may have to recompile if changed) and implementation (the part the implementations see). Also, you can include files inside include files, even conditionally. So developing the dependency graph of the program is hard.

    C++ made things worse, not better. The private methods of a C++ class have to appear in the header file, which exposes more of the internals than is really necessary. Every time you add a new private method, the clients, who can never see or use that private method, have to be recompiled. This not only produces cascading builds, it discourages programmers from adding new private methods rather than bloating existing ones. That's bad for code readability and reliability.

    Ada explicitly dealt with this. Ada has a hard separation between interface and implementation. This was considered a headache when Ada came out, but now that everyone has bigger monitors, it's less of an issue.

    Java, despite having interfaces, seems to have build and packaging systems of grossly excessive complexity. I'm not really sure why.

    The next problem is the "make" mindset, which is built on timestamps. "make" doesn't check what changed; it checks what was "touched". If "make" decided what had changed based on hashes rather than timestamps, many unnecessary recompiles would be avoided. Something could run "autoconf", produce exactly the same result as last time, and not trigger vast numbers of recompiles.

    There's also the tendency to treat "make" as a macro language rather than a dependency graph. This results in makefiles that always recompile, rather than only recompile what's needed.

    It would be useful if compilers output, in the object file, a list of every file they read during the compile, with a crypto grade hash (MD5, etc.) of each. A hash of the compile options and the compiler version would also be included. Then you could tell, reliably, if you really needed to rebuild something.
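    The hash-based scheme described above is simple to sketch. Here's a hedged illustration (the file names and the in-memory map of recorded hashes are hypothetical; a real tool would persist that map alongside the object files): a rebuild fires only when an input's content hash differs from the one recorded at the last build, so a file that is touched but byte-for-byte identical triggers nothing.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

public class HashMake {
    // Content hash of a file's text, hex-encoded. SHA-256 via the
    // standard MessageDigest API.
    static String sha256(String content) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(content.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // Rebuild only when the current content hash differs from the hash
    // recorded after the previous build. Timestamps never enter into it,
    // so "touched but identical" inputs cause no recompiles.
    static boolean needsRebuild(Map<String, String> recorded,
                                String file, String content)
            throws NoSuchAlgorithmException {
        return !sha256(content).equals(recorded.get(file));
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        Map<String, String> recorded = new HashMap<String, String>();
        recorded.put("foo.c", sha256("int main() { return 0; }"));

        // autoconf reran and rewrote foo.c with identical content:
        System.out.println(needsRebuild(recorded, "foo.c",
                "int main() { return 0; }")); // false
        // an actual edit:
        System.out.println(needsRebuild(recorded, "foo.c",
                "int main() { return 1; }")); // true
    }
}
```

    Extending the recorded map with compiler version and option hashes, as suggested above, is the same check with more keys.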

  • Crikies. (Score:3, Insightful)

    by RightSaidFred99 ( 874576 ) on Sunday November 29, 2009 @04:27PM (#30263556)

    People are reading a lot into this that isn't there. These people use Visual Studio, and I don't think they'd claim that using a GUI to design a...GUI is a bad thing. They're referring largely to the new modeling tools MS is pushing with VS2010, and they're saying sometimes it's quicker to just write the code than design the model. And indeed it is.

    It's a tradeoff. For example, they already have some modeling tools (web service factory) for developing a web service. You lay out interfaces, data contracts, message contracts, etc., and associate them visually. I think this sucks, personally, and I still just do it the old-school (and much quicker, more powerful) way by creating an interface and data contracts. But for some scenarios designing the model might pay off in terms of self-documentation and allowing some standards to be followed by multiple developers working on a web service.

"Let every man teach his son, teach his daughter, that labor is honorable." -- Robert G. Ingersoll