
Avoiding Mistakes Can Be a Huge Mistake

theodp writes "No doubt many will nod knowingly as they read Paul Graham's The Other Half of 'Artists Ship', which delves into the downside of procedures developed by Big Companies to protect themselves against mistakes. Because every check you put on your programmers has a cost, Graham warns: 'And just as the greatest danger of being hard to sell to is not that you overpay but that the best suppliers won't even sell to you, the greatest danger of applying too many checks to your programmers is not that you'll make them unproductive, but that good programmers won't even want to work for you.' Sound familiar, anyone?"
This discussion has been archived. No new comments can be posted.


  • Perhaps (Score:5, Interesting)

    by Thelasko ( 1196535 ) on Monday December 01, 2008 @06:26PM (#25952367) Journal
    Perhaps programmers that have consistently good code should have some value placed on them. We'll call it "Karma". Programmers with good Karma get audited less often than others. If they fail an audit, they loose some "Karma" and have to write a bunch of excellent code to get it back.
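A minimal sketch of how such a "Karma"-weighted audit scheme might look in code. This is purely hypothetical: the class name, the decay constants, and the base audit rate are all invented for illustration, not anything described in the thread.

```python
import random

class Programmer:
    """Tracks a hypothetical 'Karma' score: higher Karma means fewer audits."""

    def __init__(self, name, karma=1.0):
        self.name = name
        self.karma = karma

    def audit_probability(self, base=0.5):
        # Audit chance shrinks as Karma grows, but never reaches zero,
        # so even trusted programmers get spot-checked occasionally.
        return base / (1.0 + self.karma)

    def record_audit(self, passed):
        # Passing an audit earns Karma; failing one costs more than a pass earns.
        if passed:
            self.karma += 0.5
        else:
            self.karma = max(0.0, self.karma - 1.0)

def should_audit(programmer, rng=random.random):
    """Randomly decide whether to audit, weighted by the programmer's Karma."""
    return rng() < programmer.audit_probability()
```

A programmer who fails an audit drops back toward the maximum audit rate and has to pass several audits to earn the reduced scrutiny back, which mirrors the "write a bunch of excellent code to get it back" idea.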
  • Re:Perhaps (Score:5, Interesting)

    by Pogue Mahone ( 265053 ) on Monday December 01, 2008 @06:47PM (#25952583) Homepage

    That's exactly the kind of check that is harmful, according to the article. Who determines what is "excellent"? Against what benchmark? Who performs the audits? Who checks that you have spelled "lose" correctly?

  • Re:Perhaps (Score:5, Interesting)

    by enjo13 ( 444114 ) on Monday December 01, 2008 @06:53PM (#25952661) Homepage

    How do you identify "good code"? That's one of the great problems we have as software developers. Quantifying 'good' code is extraordinarily difficult. Code reviews do an excellent job of identifying clever code, but rarely capture the full utility of what is being written. You may think you know good code when you see it, but over the course of my career I've become convinced that is not true at all.

    Really, the problem is that the only way to truly measure code quality is by seeing how it runs in a production environment. Even then I can easily quantify the quality of the team's overall output (does it work? does it work consistently?), but tracing that back to an individual programmer is often nearly impossible. Systems tend to interact with each other, and placing blame is not an exact science. The gulf between 'good' and 'good enough' is not nearly as wide as it seemed when I was a novice programmer.

    Great code almost never breaks. Good code works most of the time. Poor code is another matter.

    Poor code is easy to spot. Poor code never works. It's ugly. It's complex. It's stateful. It jumps off the screen and practically begs to be put out of its misery.

    That's precisely why companies have processes and checks. They are an attempt to catch marginal code and make it 'good enough'. The problem, as the article points out, is that in the process they often inspire great coders to deliver marginal code themselves.

    The secret is to spot (through some mixture of science and art) great programmers and provide them with the freedom to write great code. If circumstance requires you to hire marginal programmers, then by all means put the process in place to make sure that what they do doesn't detract from the work your best and brightest are doing. Separate them as best you can. Limit how their systems interact.

    But whatever you do... don't limit your best programmers, as they are far more valuable than hundreds of poor ones.

  • by cgifool ( 147454 ) on Monday December 01, 2008 @06:54PM (#25952699)

    My group is a prime example. We all worked for a startup that generally released a new version of our application about 3 times per year. Over a few years we had developed a nice lean development process that involved documenting our design, but only in enough detail to be able to fairly accurately estimate the development effort (in X days, X weeks, or X months).

    Based on the estimates, the biz dev group would then pick and choose features to make up 3 months dev + test time.

    This worked great; we pretty much never had a late shipment, and we had few bugs.

    Then we got acquired by a giant three-letter company with huge amounts of development process and tons and tons of "standards", and we were immediately ordered to begin a 16-month release consisting of removing all open source and complying with standards. Their architects routinely veto our decisions, and our design documents must be very, very detailed and approved via a heavyweight process before implementation can begin. 24 months later we're still in development; the last design document was only recently approved, and at the moment it seems we'll be about 12 months late in total.

    Now they're asking us why we have so many tests planned, and making us remove half of them. Supposedly quality is a major priority, but they have no testing group; only people to enforce standards. All tests and test cases are written and implemented by the developers themselves.

    Don't even get me started about the outsourcing issues.

  • by onescomplement ( 998675 ) on Monday December 01, 2008 @07:08PM (#25952849)
    Paul, as usual, backs into the key argument.

    This keeps coming up in various shapes and forms but the fact of the matter is that brilliant, high producers aggregate in places; and so do idiots.

    Tom DeMarco ran a study of this in the 80s wherein teams were asked to solve the same problem. He expected a scatter-plot. It was a 45 degree line between the people who knocked the problem off and those who were clueless.

    What didn't matter:

    Platform. Language (except assembler, those folks were _lost_.) Operating System.

    What did matter:

    Team coherence and capability.

    Design and planning; raw ability to design and plan as a coherent team. And not just a bunch of losers following a Pythonesque "Book of Common Knowledge."

    (I have been to many "Does the witch weigh less than wood" meetings...)

    Look at the back cover of Boehm's "Software Engineering Economics." What he _measured_ was that team capability overarches everything. Period.

    I would also ask you to look at the surface exposure of development. Folks who develop on the shoulders of many giants can and should be trying lots of stuff, because that's why platforms are built.

    Folks working closer to the core (the OS, drivers, fundamental code) don't change as quickly, nor should they.

    I've worked as a hatmaker (sheer, unbridled creativity with fancy ribbons and flowers and such) for high-end ladies, and I've sat confounded by bad documentation for UARTs.

    Two different regimes.

  • by blake182 ( 619410 ) on Monday December 01, 2008 @07:25PM (#25953007)

    This story hits very close to home for me -- I've sold two companies to larger companies where they commercialized my software. When you're a scrappy startup, you ship instantly. When you get incorporated into a larger organization, you don't ship instantly, which hurts because your intrinsic motivation (for the A-listers and entrepreneurs, anyway) is shipping.

    One shocking thing in the article for me is just how much people would give up in order to ship faster -- startups that got acquired would give up some of the acquisition money in order to ship faster in the new company. It's probably a limited sample, but I know I've felt that way. This is a large portion of what I call "suck" -- things that slow down shipping. I'm not anti-QA, but past a certain point all you're doing is slowing down shipping.

    One satisfying aspect of the work I did at the last acquiring company is that every time I checked in code, I knew I could with a straight face recommend that we ship it. I mean, it wasn't a full QA pass, but it was code with a supporting set of automated unit tests incorporated into a system designed as an extensible framework. Any negative impact would be isolated to that specific functionality (high cohesion, low coupling). A small group of internal power users and my own server would take the daily builds and give feedback as to how it felt in production and report any major issues.

    The message here seems to be "if you can optimize the process in some way, optimize it so you ship faster". And in the meantime, go ahead and pretend like you're shipping every day (a complete, ready-to-go, high quality build). You'll be surprised how much better you feel even with that.

  • Re:Death March (Score:3, Interesting)

    by wideBlueSkies ( 618979 ) * on Monday December 01, 2008 @07:50PM (#25953245) Journal

    And some of us like to program just for the sake of programming. And create applications and solve problems because to do so is interesting and fun, and one gets to work with other smart people.

  • Re:Perhaps (Score:5, Interesting)

    by ifdef ( 450739 ) on Monday December 01, 2008 @07:56PM (#25953291)

    We have a more informal system where I work. Whenever anybody checks in some code, whoever wants to automatically gets an email to notify them (for example, I am set up to receive notifications for any changes in a few directories where I do most of my work). Anybody who wants to then reviews the change. If there is nothing to comment on, it's completely transparent, and the person who checked in the changes is not even aware that they got reviewed. If there IS something to comment on, most likely someone will talk to the original coder (or send them an email), saying something to the effect of "by the way, did you consider such-and-such?", or maybe even "good idea!"

    The system keeps track of who reviewed what change. As a check on the process, if there is any change that has been in the system for x days but has not been reviewed by at least m developers and n testers, the original coder gets an automatic email saying "please arrange for someone to review your code."
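The reminder check described above could be sketched roughly like this. It's a hypothetical illustration: the field names, thresholds ("x days", "m developers", "n testers"), and structure are all made up, not the poster's actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical thresholds standing in for the poster's x, m, and n.
MAX_AGE_DAYS = 5   # "x days" before an unreviewed change triggers a nag
MIN_DEVS = 2       # "m developers" who must have reviewed it
MIN_TESTERS = 1    # "n testers" who must have reviewed it

@dataclass
class Change:
    author: str
    checked_in: datetime
    dev_reviewers: set = field(default_factory=set)
    tester_reviewers: set = field(default_factory=set)

def needs_reminder(change, now):
    # A change old enough, but still short of the required reviewer counts,
    # earns its author a "please arrange for someone to review" email.
    too_old = now - change.checked_in > timedelta(days=MAX_AGE_DAYS)
    under_reviewed = (len(change.dev_reviewers) < MIN_DEVS
                      or len(change.tester_reviewers) < MIN_TESTERS)
    return too_old and under_reviewed

def reminder_recipients(changes, now):
    """Authors who should receive the automatic reminder email."""
    return [c.author for c in changes if needs_reminder(c, now)]
```

The appeal of the scheme is that the check only fires on stale, under-reviewed changes: well-reviewed code never generates any process overhead at all, which matches the "completely transparent" property the poster describes.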

    This is, of course, in addition to the automatic emails that get sent if a change actually BREAKS something. Those get dealt with right away.

    So the review process does no harm to anybody's morale, except in the cases where that person really is producing bad code.

    Strangely enough, it seems to work quite well.

    As for your other point, about "A developer codes well and gets through a few audits. Now they're trusted, they can afford to let things slip for some time before anything is caught. There's less incentive to keep producing good code": I don't want anybody with that attitude in my group, period. I am not in any kind of supervisory position, but if there is someone on the team who only produces good code because of some "incentive", and the team lead doesn't do anything about that, then I don't want to work for that team lead, either. I will vote with my feet, and I don't think I would be the only one. And then this would become, no doubt, a much more "normal" software department, instead of an amazing one.

  • Re:Perhaps (Score:4, Interesting)

    by ChrisA90278 ( 905188 ) on Monday December 01, 2008 @09:00PM (#25953857)

    I don't think you even need "good code". I worked on a project that eventually failed and all the code we looked at in those reviews was "good". Either that or we made it good. The big problem was that it did the wrong things.

    The problem is not with the people who write the code. Most are OK, and reviews catch gross errors, but in our case we had some basic "big picture" ideas wrong.

    Microsoft Vista is a good example of this. The code would likely pass a review and have few mistakes, but the problem is the dumb ideas that got written into the specifications.

    It's like the "bridge to nowhere" problem. Good, competent structural engineers build something no one wants or needs.

  • Re:Perhaps (Score:5, Interesting)

    by lgw ( 121541 ) on Monday December 01, 2008 @10:39PM (#25954607) Journal

    My fundamental point is not that documentation and paperwork should be avoided, but that in some cases it's just a stupid allocation of resources to have certain developers do that work. In the old days, IBM would pair a tech writer with *every* programmer, so that really good docs could be produced with a minimum of distraction to the coding. It's hard to know whether that produced the best code (though there's a lot to be proud of in 70s mainframe code), but it produced lots of *really* good documentation.

  • Re:Perhaps (Score:3, Interesting)

    by networkBoy ( 774728 ) on Monday December 01, 2008 @11:02PM (#25954791) Journal

    Wish I had mod points...
    I really like lgw's post http://developers.slashdot.org/comments.pl?sid=1047369&cid=25954607 [slashdot.org] about attaching a tech writer to a dev. While it may play to the prima donna set (in a bad way), there is some sense in it for those lone-wolf types, especially when they have the combination of difficult-to-teach skills and the unteachable knack of genius for a particular type of problem you may have. I (a senior tech) was attached to such an engineer (a hardware dev) who was simply a genius at analog design. His gut feelings for circuit design and layout were closer than most people's second and third approximations for a PLL or transceiver. The downside was that his documentation skills were essentially expressed by 1/(analog design talent). As a result, a tech writer and I worked very closely with him when he was nearly done, having him walk us through *everything*. I wrote the test plan and the tech writer wrote the docs. It worked great, even though he was a stuck-up ass.
    -nB

  • Re:Perhaps (Score:1, Interesting)

    by Anonymous Coward on Tuesday December 02, 2008 @06:20AM (#25957301)

    Even worse when the idiot is replaced by a program. Forcing everybody to follow a single holy procedure simply reduces all code to mediocrity.

    You talk like it is a bad thing... but I see where you are coming from. Modern trends in quality management actually go to great lengths to ensure steady mediocrity over uncertain, occasional excellence (see ISO 9001). It makes their job easier and more predictable. In an industrial environment, anything non-repeatable is a liability. They would be thrilled if they could obtain a program to churn out mediocre code without programmer intervention.

    The place for code excellence is in CS academia, if anywhere. Even there, it will be used to make another set of rules and procedures to be forced upon everyone else.

  • Re:Real Talent? (Score:3, Interesting)

    by ScrewMaster ( 602015 ) * on Tuesday December 02, 2008 @08:42PM (#25969121)

    Maintainability damn well can be designed in from the beginning!

    I agree, it most certainly can. Seriously, what is the point of a clean, well-engineered design, if you're not attempting to build a solid foundation for future work? Why not just crank it out any old way so long as the spec is met? Rhetorical question.

    Learning how to code for maintainability up front is more a matter of experience (having been burned enough times by not having done it, and having tried enough possible solutions until you've found those that work) than it is talent. It's also something that many programmers I've worked with over the years (almost three decades at last count) deem unnecessary, because they honestly don't know how to do it. That, and the fact that thinking about maintainability requires actual forethought, as well as some extra bureaucratic/recordkeeping overhead. If you're not planning on being around for more than a few years in a given job, and don't give a flying fuck about the poor bastard down the line, then why waste a neuron on maintainability? That's how a lot of people look at it, I'm afraid.

    I ran my own software business for, oh, fifteen years or so. I started out not concerning myself about maintainability because I was only worried about delivering the product on time. Then, after being in business for a few years and having to maintain my own code I changed my tune pretty quick. I mean, I got really tired of asking myself the same question over and over: "what the hell was I thinking?" And I mean that literally: when the code was written, what was I thinking? 'Cause it wasn't always obvious from the code itself, and I wasted a lot of time refiguring things out from scratch. Wasted even more time refactoring code because I hadn't bothered to think ahead, either. Now to be fair, I wasn't very experienced then and didn't really know what kinds of things for which to plan. That excuse doesn't wash anymore, I'm afraid.

    Right now I'm responsible for several hundred thousand lines of code, much of which is over a decade old. If you don't constantly work towards maintainability in an environment like that, you'll founder quickly. As a matter of fact, several years ago I went through a major refactoring of that codebase (it took a couple of months) just to make it into something that could be dealt with long-term. Nobody had bothered until that time: the damn thing just kept growing like some amoeboid monstrosity. Reorganizing that project was a chunk of gruntwork that I don't ever want to have to do again, but I did it because not doing it was even more frustrating.

    I used to talk to myself a lot more back then.
