
Why Programmers Need To Learn Statistics 572

Posted by Soulskill
from the because-they-suck-at-poker dept.
David Gerard writes "Zed Shaw writes an impassioned plea to programmers: Programmers Need To Learn Statistics Or I Will Kill Them All. Quoting: 'I go insane when I hear programmers talking about statistics like they know s*** when it's clearly obvious they do not. I've been studying it for years and years and still don't think I know anything. ... I have taken a bunch of math classes, studied statistics in grad school, learned the R language, and read tons of books on the subject. Despite all of this I'm not at all confident in my understanding of such a vast topic. What I can do is apply the techniques to common problems I encounter at work. My favorite problem to attack with the statistics wolverine is performance measurement and tuning. All of this leads to a curse since none of my colleagues have any clue about what they don't understand. I'll propose a measurement technique and they'll scoff at it. I try to show them how to properly graph a run chart and they're indignant. I question their metrics and they try to back it up with lame attempts at statistical reasoning. I really can't blame them since they were probably told in college that logic and reason are superior to evidence and observation.'"
  • by Greyfox (87712) on Saturday January 09, 2010 @07:37PM (#30710568) Homepage Journal
    Everything I needed to know about statistics I learned playing poker.
    • by Anonymous Coward on Saturday January 09, 2010 @07:49PM (#30710662)
      • by ShakaUVM (157947) on Saturday January 09, 2010 @08:01PM (#30710740) Homepage Journal

        A manga statistics book, eh?

        I just realized I was a nerd. I looked at the table of contents and closed it down, then realized I hadn't even looked at the short skirt-wearing protagonist.

        Sigh...

        But to answer the article's point, elementary statistics are very easy. Advanced statistics are very hard. It's kind of like how people think "knowing the difference between circles and squares" is geometry and so analytical geometry must be just more of the same, right? It's quite possible the programmers think they know statistics because they know they're vaguely supposed to do a run multiple times, and maybe average the results or something.

        It's also possible the author of the article is a know-it-all douchebag who tries to solve problems with overwrought solutions.

        From TFA: "Zed: Fuck! Fuck! I have eyes! You do not! See!? No?! Exactly! Because you can't fucking see because you have no fucking eyes! Arrggh!"

        Just throwing that theory out there.

        • by Devout_IPUite (1284636) on Saturday January 09, 2010 @10:39PM (#30711960)
          "It's also possible the author of the article is a know-it-all douchebag who tries to solve problems with overwrought solutions."

          That was kinda what I got from this. Sure, my powers-of-ten runs to determine performance aren't statistically sound. Did I say they were? No. Why don't I care? Because my samples are cheap. Spiking vs. non-spiking is something pretty easy to see when you glance at the data.

          I mean, he said we're going to die if we don't learn statistics, but he never gave a compelling argument for it.

          The best example was users, but even that was lacking. If you design a script that's as aggressive on the system as a heavy user, and your system supports as many scripted 'users' as you have students, you're safe; if it supports fewer, that's when you work on qualifying the problem further.
          • Re: (Score:3, Interesting)

            by ShakaUVM (157947)

            >>Spiking vs non-spiking is something pretty easy to see when you glance at the data.

            Yeah, in fact, the way that he presents it is bad statistics. =)

            If the problem is that one out of 1000 queries is taking a minute to return instead of 0.1 seconds, then using the std deviation to describe the problem is nonsense. It is not a Gaussian distribution!
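A quick sketch of this point (Python, with made-up latencies): when one query in a thousand hangs, mean and standard deviation say almost nothing useful, while the median and the tail tell the real story.

```python
import statistics

# Hypothetical latencies: 999 queries at ~0.1 s, plus one pathological 60 s outlier.
latencies = [0.1] * 999 + [60.0]

mean = statistics.mean(latencies)      # ~0.16 s -- looks almost fine
stdev = statistics.stdev(latencies)    # ~1.9 s  -- dominated by the single outlier
median = statistics.median(latencies)  # 0.1 s   -- the typical query
slowest = max(latencies)               # 60.0 s  -- the query users complain about

print(mean, stdev, median, slowest)
```

Summarizing this distribution as "mean ± std" would suggest queries routinely take a second or two, which is true of none of them.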

            But of course someone who "has spent his life studying statistics and even R language" would know that, right? :p

            Instead, as you point out, any programmer w

          • Re: (Score:3, Funny)

            by Nazlfrag (1035012)

            Oh and by the way he's a hit with the ladies! He never has problems with them (well he is a dashing 6'2" *swoon*) and he's just such a nice guy too.

        • Re: (Score:3, Funny)

          by genner (694963)

          then realized I hadn't even looked at the short skirt-wearing protagonist.

          That sound you just heard was a million slashdotters clicking on that link at the same time.....
          except me, since I'm familiar with the book in question and realized long ago that she has sharp knees.

  • by Anonymous Coward on Saturday January 09, 2010 @07:38PM (#30710580)

    110%.

  • by Hognoxious (631665) on Saturday January 09, 2010 @07:40PM (#30710596) Homepage Journal

    Correlation != causation. Just repeat that and you don't need to know statistics.

  • by BadAnalogyGuy (945258) <BadAnalogyGuy@gmail.com> on Saturday January 09, 2010 @07:42PM (#30710610)

    Maybe the problem is in your presentation. Even here, you tell programmers that you want to kill them for not understanding a topic that even you are unwilling to acknowledge mastery of. Then you tell us how hard the topic is to understand, even though you've spent so much time trying to learn it.

    Is it any wonder that no one takes your suggestions seriously? You are practically sabotaging yourself with self-effacement.

    These aren't homework problems you're tackling here. They are business problems and you need to sell yourself and your ideas if you want to get any traction. Do you have any evidence that your methods are better than the SOP thus far? Do you have any case studies that show how effective statistic analysis is in *any* of your projects?

    Or are you simply taking something that seems like a data point and extrapolating it to cover a vast swath of applications?

    • by Krishnoid (984597) * on Saturday January 09, 2010 @07:57PM (#30710724) Journal

      Or are you simply taking something that seems like a data point and extrapolating it to cover a vast swath of applications?

      Well yeah, that's what he was saying -- statistics!

    • Re: (Score:2, Offtopic)

      by ihavnoid (749312)

      Well, I think this would be the article Zed needs to read:

      http://www.joelonsoftware.com/articles/fog0000000332.html [joelonsoftware.com]

      Basically, many programmers feel that everybody else around him (or her) is a stupid asshole. However, if you want to succeed (e.g., have everybody around you learn statistics), you should never, ever, ever make enemies.

      Be productive, work hard, listen to others, and try to do the work in the *right way*. Gain respect from your colleagues, and then they will get interested.

      • by lena_10326 (1100441) on Sunday January 10, 2010 @05:44AM (#30713572) Homepage

        Basically, many programmers feel that everybody else around him(or her) is a stupid asshole

        That's one of the reasons working in IT is not all that satisfying [computerworld.com]. Many problems have multiple solutions which for the most part are equivalent in function but vary in what they're attempting to optimize for (* see below), yet developers seem to latch onto the solution they thought of and become downright rude and nasty when evaluating a teammate's solution. When every developer assumes he is the smartest of the bunch and all others are morons, it fosters an environment where everyone is unwilling to compromise and a 3rd person usually has to step in to break the tie.

        That leads to a hostile workplace where thought battles frequently occur. Losing a battle causes a teammate to become afraid of undue criticism in the future, so the next time around they over-engineer the code trying to cover all bases. This leads to large systems that solve fairly simple problems with overly complex implementations. After a few cycles of this, the software is unmanageable, which becomes evidence proving to the developer that his teammates and the ones who came before are idiots with no clue, and now it is up to that lone hot shot to bitch about fixing the mess, which of course is accompanied by many nasty critiques and insinuations.

        I am a developer with a fairly open mind and I strive to eliminate ego from the workplace by staying on the positive, helpful side, but honestly I'm getting sick of working with people who don't try to do the same.

        * Example, solutions can be optimized to target maintainability, readability, CPU/IO performance, availability, reliability, correctness/precision, recovery, automation, reduction of complexity, extensibility, cross platform, resilience to change, parallelism, security, partitioning, modularization, popular design idioms. The list is nearly endless.

    • by superdana (1211758) on Saturday January 09, 2010 @08:14PM (#30710850)
      Maybe the problem is in your presentation.

      Meet Zed Shaw.
      • by arendjr (673589) on Saturday January 09, 2010 @09:24PM (#30711382) Homepage

        I don't know Zed Shaw yet, but I think you're right.

        The whole problem he is describing sounds like a big ego problem. He himself has a huge ego, and has problems when he runs across programmers, who often have huge egos as well.

        Now, I think he does make a point though. The programmers he is ranting about indeed do sound like assholes, just like he himself is. In order to be a really good programmer (or a good statistics expert) you should also know when to put aside your ego.

    • Re: (Score:3, Insightful)

      by dbIII (701233)
      It's just the "beige box is the hard drive and the screen is the computer" problem over again. People pretend they know what they are doing and make stuff up and pretend that they are confident that it is real. This really annoys those that do know what they are doing but don't want to appear to be overconfident because they haven't written the textbooks themselves.
    • not understanding a topic that even you are unwilling to acknowledge mastery of.

      Personally, I think that little acknowledgment increases his credibility quite a bit. It suggests to me that he's actually spent some real time coming to grips not just with the glossy overview you get in a high school or college course but with some of the devilish subtleties of actually using the stuff.

      The funny thing about knowledge... the more it grows, the bigger you realize the frontier is. So, how good of a heuristic is appar

      • Re: (Score:3, Funny)

        by Hurricane78 (562437)

        No, it doesn’t. If someone spends years and years on a topic, and still has the feeling he understands nothing at all, then clearly, he’s just too dumb for it.

        It’s like high voltage without high current. The result is a not very bright and maybe even destroyed lamp.

    • by Hurricane78 (562437) <deleted @ s l a s h d ot.org> on Saturday January 09, 2010 @11:35PM (#30712258)

      I just found a very old hard disk. Double height. MFM/RLL. And after a “strings -n 32 /dev/hdd”, I got the following old saying, carved in the bytes of the disk:

      Computer science
      Statistics
      Social skills

      Choose one.

      ;)

  • Or, how about... (Score:5, Insightful)

    by halivar (535827) <bfelgerNO@SPAMgmail.com> on Saturday January 09, 2010 @07:43PM (#30710618) Homepage

    Statisticians need to learn programming or I will kill them all.

    • In 1976... (Score:3, Informative)

      by alispguru (72689)
      ... I ran into a professor of statistics who said that computers were going to be a passing fad in his field.
  • by HornWumpus (783565) on Saturday January 09, 2010 @07:44PM (#30710620)

    We know as much statistics as we need to know.

    Some know more, some less. Each has traded off hours vs. knowledge in many fields.

    For example: Why would a programmer whose job is to automate bean counting need to know more than basic statistics? (S)he rightfully focuses his efforts on accounting.

    One post calculus statistics course gives me enough grounding to know what I don't know and punt to experts when I need to.

    Fucking specialists forget all the things they don't know and only look at the world through one lens.

    • Re: (Score:3, Interesting)

      by gardyloo (512791)

      We know as much statistics as we need to know.

      Some know more, some less.

      That's either the most honest, insightful comment I've ever seen, or the most useless. I'm 92% sure, with an uncertainty of about +/-5%, that it's the latter.

    • by nextekcarl (1402899) on Saturday January 09, 2010 @08:14PM (#30710848)

      One post calculus statistics course gives me enough grounding to know what I don't know and punt to experts when I need to.

      That's actually his argument (though I'm pretty sure he doesn't realize it, having met him a few years ago at a conference). People need to know their limits, and the strengths (and weaknesses) of others, and defer to them when they know what they're talking about, rather than talking out of their asses. As you point out, you can't know everything, but you'll defer to others who know more when you need to. I'm pretty sure Zed would like working with you based upon that fact alone (I know I value that trait and try to express it myself). Far too many people think they aren't allowed to have any weaknesses (and we all do in some area or another) so they talk a big game, and when push comes to shove, they will actively block people who actually know more than they do about the subject at hand. Working with too many people like that has driven Zed insane (IMHO) and I know I've been close to it at a couple of workplaces before (and really loved the one that was hardly like that at all).

    • by Toonol (1057698) on Saturday January 09, 2010 @08:17PM (#30710876)
      But statistics is one of those fields that benefits everybody; it's a bit like probability, logic, or (further afield) history. Lack of a fundamental understanding of statistics can lead you astray in a near-infinite number of ways.

      I have sat in business meetings hundreds of times where I've seen decisions made on completely meaningless and irrelevant data, because the people involved don't understand statistics. The same holds true in your personal life: decisions about purchasing products, investing money...

      Now, I'll bet that most slashdot readers have the minimum knowledge of statistics needed to avoid the most egregious errors; but more knowledge is certainly helpful. It will help you in a myriad of ways.
  • Title fail. (Score:5, Funny)

    by girlintraining (1395911) on Saturday January 09, 2010 @07:44PM (#30710628)

    Programmers Need To Learn Statistics Or I Will Kill Them All

    Okay, two things: First, threatening programmers never works. Management's been trying that for years. Second -- don't you mean 'kill -9' them all, or maybe demalloc() them, or cast them to void*, or one of a dozen other witty things you could do besides the mundane answer of threatening stabby bits on them because you have a case of intellectual snobbery?

  • I never took a statistics class as an undergrad. In retrospect, I think it would have been very useful, probably more so than the calculus I took (which I think is also a very good thing to know, but stats tend to be used more often).
  • by Rix (54095) on Saturday January 09, 2010 @07:52PM (#30710686)

    He's just as arrogantly claiming that he's right and they're wrong. Now, he may very well in fact be right, but he's taking the same obstinate position the people he criticizes do.

    It's important to know when your input is not desired. Even if you think it should be.

    • by SuperKendall (25149) on Saturday January 09, 2010 @11:59PM (#30712388)

      He's just as arrogantly claiming that he's right and they're wrong.

      No, he isn't.

      He claims that programmers need to understand statistics more. The people he is talking about are therefore not wrong - they are ignorant.

      But that term is loaded with negative meaning; it's more accurate to say they are like a variable named "statistics" whose value has never been set. Basically, they don't know what they are missing.

      It's like when programmers try to argue about how a language is bad when they've never used it. How would they know? Yet many without understanding of statistics are saying the same thing, they don't need to know any more.

      I know enough to know statistics can be a valuable tool. Why would you not want another tool that could help you? The people who refuse it are less than they could be (as programmers).

  • by Anonymous Coward on Saturday January 09, 2010 @07:54PM (#30710692)

    is not because they don't understand statistics. It is because you are a dick.

  • Statistics is HARD (Score:5, Informative)

    by omb (759389) on Saturday January 09, 2010 @07:54PM (#30710694)
    Statistics is HARD, for two reasons:

    (a) Probability theory, on which all practical Statistics is based, is both (i) counter-intuitive and (ii) difficult

    (b) The very Mathematics on which it is based is obscure

    And, worst of all, it is uniformly badly taught, even in good universities, and the "Statistics for XXX" courses are uniformly awful, the blind leading the blind.

    Lastly, it is very hard to get a straight answer from a mathematical Statistician.
    • Re: (Score:3, Funny)

      by codewarren (927270)

      Statistics for XXX are uniformly awful, blind leading the blind.

      They have statistics for porn? (!!)

      What could be wrong with that? And blind on blind action? Strange, but interesting.

    • Can't agree with that.

      Basic statistics as taught in a beginning stats class is counter-intuitive because they don't teach the calculus behind it, but it's actually quite simple to use. The tough part is figuring out which statistic to apply to a given problem, and even that's not difficult. There's a reason it satisfies the "basic math requirements" for a business major and a physical therapy major.

      The mathematics behind statistics is Calculus 2 which is hardly obscure. The Statistics with Calculus class i

    • I didn't have much trouble with statistics in college after having studied physics the year before in high school, where I firmly learned that formulas are taught because they've been proven true, so you just need to remember the steps to get something done; the numbers just fill in the variables. More numbers involved, but there are still formulas.

      I had such an easy time with the course, and had trouble hiding that, that I would regularly be visited by students asking for help on Sunday on the homework that

    • by radtea (464814) on Saturday January 09, 2010 @08:28PM (#30710954)

      Statistics is HARD, for two reasons:

      I'd argue that probability theory isn't as hard as people make it seem, but statisticians are wankers. Most of what we think of as statistics was developed by people who were intimately engaged with empirical research, but modern statisticians are mathematicians, many of whom have never actually performed an experiment. They think the statistics are real, whereas experimental scientists know the truth: God made the Probability Distribution Functions. All else is the work of man.

      Furthermore, modern computing has made a lot of the conceptual apparatus of conventional statistics irrelevant, since it was designed to reduce problems to something that could be computed by hand and finished off with a single table lookup. Today it's a rare case where we can't get at the PDFs directly, bypassing much of conventional statistics. But because of how badly the stats are taught, and how poorly probability theory is understood, we are still living in a world where p-values are the exception, not the norm, and when they are quoted they are frequently unrealistic because they are based on statistical assumptions that are not warranted given the non-idealities of the data.

      So I'd argue that statistics is basically a dead field populated by zombies who are dedicated to infecting as many students as possible. If we taught thermodynamics or mechanics with equally outmoded concepts they would be really hard too.

      • by jackchance (947926) on Saturday January 09, 2010 @09:17PM (#30711326) Homepage

        Before computers, stats involved using parametric tests (t-tests, ANOVA, etc.) which made assumptions like "the data comes from an underlying normal distribution". BTW, in stats terms, "normal" means "Gaussian" [wikipedia.org].

        Now, with cheap and fast computers, we can actually compute the confidence intervals non-parametrically through permutation tests and bootstrapping [wikipedia.org] without assuming anything about underlying distributions. In most cases, this non-parametric test is the "right thing to do". Most of the time, the results are the same as using a parametric test.
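The bootstrap idea above can be sketched in a few lines of Python (illustrative only -- the sample data is hypothetical, and a real analysis would use more resamples and a proper library):

```python
import random
import statistics

random.seed(0)

# Hypothetical skewed sample: 30 draws from an exponential distribution,
# standing in for data we would NOT want to assume is Gaussian.
sample = [random.expovariate(1.0) for _ in range(30)]

# Bootstrap: resample with replacement, recompute the statistic each time.
boot_means = sorted(
    statistics.mean(random.choices(sample, k=len(sample)))
    for _ in range(2000)
)

# ~95% percentile confidence interval for the mean, no normality assumed.
lo, hi = boot_means[50], boot_means[1949]
print(lo, hi)
```

The only assumption is that the sample is representative; the interval comes straight from the resampled distribution, not from a table.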

        However, a HUGE disaster in empirical science has been the problem of multiple comparisons. With computers it is so easy to compute correlations and significance tests between every possible slice of your data set. Many "scientists" don't have good statistical knowledge and pray at the altar of "p < 0.05". They don't know about or understand the problem of multiple comparisons. [wikipedia.org] So they do 20 tests, find one that comes out p < 0.05, and write a paper about it. They don't get that if you do 20 tests you are very, very likely to find one that comes out p < 0.05.

        Anyone who has access to excel or matlab can do this little experiment.

        samp = randn(50,1);              % 50 normally distributed random numbers

        for x = 1:100
            test = randn(50,1);          % mean = 0, var = 1
            sig(x) = ttest(samp, test);
        end

        now look at the sig vector. OMG, 5% of the tests came out significant!!!

        Now you are writing a paper all about how x is linked to y. But you are essentially throwing dice and then writing a paper about why it came up '3-3'.
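For those without MATLAB or Excel, here is a rough Python equivalent of the same experiment; the 1.98 cutoff approximates the two-sided 5% critical value for ~98 degrees of freedom:

```python
import math
import random
import statistics

random.seed(42)

def t_stat(a, b):
    # Plain two-sample t statistic (equal sample sizes assumed).
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

samp = [random.gauss(0, 1) for _ in range(50)]

# 100 "experiments", all pure noise drawn from the same distribution as samp.
hits = sum(
    1
    for _ in range(100)
    if abs(t_stat(samp, [random.gauss(0, 1) for _ in range(50)])) > 1.98
)

print(hits)  # pure noise, yet some tests will typically come out "significant"
```

Nothing here differs between the groups, so every "significant" result is a false positive -- exactly the dice-throwing paper described above.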

        • by Daniel Dvorkin (106857) * on Saturday January 09, 2010 @09:57PM (#30711640) Homepage Journal

          Resampling-based statistics haven't replaced parametric models, and I doubt they ever will, for one very simple reason: as the available processing power grows, so does the amount of data. In my field, bioinformatics, the size and complexity of the data sets follows a Moore's Law of its own, and I don't think bioinformatics is unique in this. "Just bootstrap it" is easy to say, and certainly there have been many times, when dealing with an analytically intractable distribution, that I've done just that, but if the analytical solution takes minutes and the bootstrap solution takes weeks, you have to take this into account.

          Of course, resampling isn't the only way to look at problems non-parametrically. Often a good compromise is to go with rank-based statistics, which are fast and easy to calculate -- and you may not have an analytically tractable model for the distribution of the original data, but you don't have to, since by working with ranks you can define a distribution with good analytical properties. You still need to do some reality-checking exploratory data analysis, of course, but this is an approach that generally works well in practice.

    • by thesandtiger (819476) on Saturday January 09, 2010 @08:35PM (#30711006)

      I don't think it's hard - I just think it requires a different way of thinking than most programmers usually take to maths.

      As a programmer/developer who went into research (in social sciences, so it's really soft), I can say that in my experience stats is really closer to a programming language than it is to other maths. Here's why:

      1) You have a LOT of tools to pick from. What kind of analysis do you want to do? What kind will give you the most useful result? What kind is your data amenable to?

      2) You don't always have a clear choice as to which is the best for a given situation. Sometimes you need multiple different types of analysis to really get the full picture.

      3) Just because it's math doesn't always mean it's right. There's some crazy ass black-box magic stats stuff we use for one project of ours that, in theory, will let us figure out the demographic composition of an unknown target population. Maybe. Sometimes. If the wind is right. Or not.

      4) At the advanced levels, it's fucking insane. People who hack stuff like ultra optimized 3d engines with large quantities of assembler or whatever always wigged me out because my brain just doesn't work that way. With the really complex stats stuff it's the same way - I can plug and chug with the formulas, but I honestly have about as much comprehension of why some of the more advanced stuff works as my dog has of CPU design.

      5) If you know the basics, you know just enough to be dangerous and really piss off people who know what they're doing. Being able to run an anova or determine correlation makes some people think they actually know what's going on because, hey, it's math. But a lot of people who just do the basic stuff think their results are more meaningful than they actually are - falling prey to the whole "it's statistically significant therefore it must be IMPORTANT" fallacy (when you can certainly have things that are "statistically significant" but actually have virtually no impact on the outcome).

      6) Even when people know their shit, they disagree. A fine example of this would be the Space Shuttle failure rate - you had people saying that the shuttle would suffer a critical failure from anywhere between 1 in 5 and 1 in 50,000 launches. And depending on what tools they used to do their analysis, they were correct. Same as with programming languages - depending on the problem, equally skilled programmers might pick entirely different languages to use because they think one part or another is more critical.

      Honestly, I really enjoy stats - if I had to do it all over again I would probably have spent a LOT more time working with stats than I did as a programmer in my younger years - but I won't pretend that it's totally clear what tools to use when. The author of TFA would do well to realize that even fellow statisticians would probably slap the shit out of him over some of his beliefs about how to properly utilize stats toolsets.
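Point 5 above (the "significant therefore important" fallacy) can be seen with nothing but the t-statistic formula and some hypothetical summary numbers: with a big enough sample, a trivially small difference becomes "statistically significant".

```python
import math

# Hypothetical summary stats: two groups whose means differ by a hair...
n = 100_000                  # ...but with an enormous sample size per group
mean_a, mean_b = 10.00, 10.01
sd = 1.0

# Two-sample t statistic (equal n, equal sd):
t = (mean_b - mean_a) / math.sqrt(sd**2 / n + sd**2 / n)

# Cohen's d: the difference measured in standard deviations.
d = (mean_b - mean_a) / sd

print(round(t, 2), d)
```

Here t is about 2.24, which clears the usual p < 0.05 bar, while the effect size d = 0.01 is negligible by any practical standard.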

    • Re: (Score:3, Insightful)

      by omb (759389)
      Sorry, the replies indicate just how correct what I wrote was:

      1. It is not about formulas, or Calculus xxx; it is about really understanding what you are doing and how all the formulas were derived. Some of that is really heavy Pure Mathematics, in particular Algebra and Analysis, so that, if necessary, you can work out the probability theory in new situations.

      2. In addition to the Math, there is Logic, Philosophy and Science in Experimental Design.

      The big problem is that people who just know the formul
  • ... something inside me wants to flame him for being a rude twat who wasted a minute of my life, even though he has some valid points. I'd be surprised if he didn't get some responses along the lines of "cry me a river" etc.
  • by thetoadwarrior (1268702) on Saturday January 09, 2010 @07:55PM (#30710708) Homepage
    I know enough about statistics to know that, statistically, I'm safe from his threats. I suspect if I were a bag of Cheetos the odds would be against me, but that's not the case.
  • I've found that, more than just about any other degree, Computer Science and to a lesser extent Medical degrees imbue the recipient with an unnatural ego when it comes to subjects with which they are unfamiliar. I propose we remove the word Science from CS degrees and call it what it is: "Computer Programming and Troubleshooting". There are far too many CS graduates who think they are actually scientists.

    • by radarsat1 (786772) on Saturday January 09, 2010 @08:06PM (#30710782) Homepage

      I disagree that CS is just "programming and troubleshooting", but I do agree that Computer Science is a complete misnomer. It's extremely misleading, and difficult to explain to people, "I'm a computer scientist, but no I'm not actually a scientist, instead I understand how to describe formal languages in terms of strict grammar rules and transform abstract syntax trees from one representation to another."

      It shouldn't be called Computer Science, it should be called Computational Mathematics, because that's what it is.

      (On the other hand, there is a whole branch of CS that extends very deeply into statistics, called Machine Learning, but at its core I'd say it is still more mathematics than science. There is also human-machine interaction, which often goes under CS but is actually more like psychology... so it's not so cut and dried.)

    • Re: (Score:3, Insightful)

      by Dahamma (304068)

      Maybe wherever you went to school they taught "computer troubleshooting" as a degree, but some of us actually got a solid foundation in the various theoretical and practical foundations of computer software engineering.

      Though I do agree that "Computer Science" is a stupid name. They already have Mechanical Engineering, Chemical Engineering, Electrical Engineering, etc - why not just call it "Software Engineering"? [I'd say "Computer Engineering", but since that was my major and I also had to do transistor

  • I was tasked recently with developing stat reports that would be used to give the best workers the most important tasks. I used their desired metric, and modified the numbers to show on a 0-100 scale where 75 is average and each standard deviation is 10 points. The result? The sample sizes were too small, and some groups had widely varying scores when every group member's performance was nearly identical. Then again, maybe I'm doing something wrong.
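The transformation described there (75 = average, each standard deviation = 10 points) is just a shifted z-score; a minimal sketch with hypothetical raw metric values:

```python
import statistics

# Hypothetical raw metric values for one small group of workers.
raw = [12.0, 15.0, 9.0, 14.0, 10.0]

mu = statistics.mean(raw)
sd = statistics.stdev(raw)  # sample standard deviation

# 0-100 scale: 75 is average, each standard deviation is worth 10 points.
scaled = [max(0.0, min(100.0, 75 + 10 * (x - mu) / sd)) for x in raw]
print(scaled)
```

Note the small-sample problem the commenter ran into: with only a handful of data points, sd is itself a very noisy estimate, so nearly identical performances can land many "points" apart.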
  • Seriously.

  • by v1 (525388) on Saturday January 09, 2010 @08:01PM (#30710746) Homepage Journal

    I've been studying it for years and years and still don't think I know anything.

    And yet you're expecting someone whose expertise is in a different field to know more about it than you?

    We can't all be experts in everything. If you're the expert in the field of discussion, get used to educating your coworkers on the topic, or find another job where you're surrounded by people with the same education and expertise as you.

    The average person is an expert in no more than two or three related areas. That's why people work in teams, to cover each other's blind spots.

  • by toby (759) *

    Nothing new to see here.

  • Stats? Fuck that. (Score:2, Informative)

    by delysid-x (18948)

    Statistics is WAY beyond what a programmer cares about. Logic is all that matters. Statistics->logic is the problem of the software engineer, not the programmer.

  • Zed Shaw writes an impassioned plea to programmers: Programmers Need To Learn Statistics Or I Will Kill Them All.

    // This will never happen

  • I certainly suffer from a feeling of being an expert in all fields. Deep down I guess I know I'm not, but I'd probably rather just muddle my way through it assuming I know everything there is to know. The trick is knowing when something is sufficiently out of your field that you need to defer to someone who is an expert in that field. Statistics is just one example. Certainly a little bit of knowledge in a lot of fields is a good thing, but when you have to choose between 4 years of study vs consulting some

  • I don't know how educated your colleagues are, but if they have studied computer science, then you should just shut your dumb mouth, because we learn how to analyze running times WITHOUT actually running the code. Even without programming it at all, just by analyzing the problem itself. That is called "complexity theory," and (in that case) you are the one who doesn't have any clue about what you don't understand.

    And don't get me started on "tuning". You might improve running times a bit, but no little tuning hack can d
  • 95% confidence in understanding statistics, when applied to a business setting, is often just as good as 95% confidence in actual measurements. Yes, the last 5% is the trickiest bit, but rest assured: at the slightest indication that a proper application is required, I won't be afraid to ask someone who knows more. It's just that that's quite rare.

    For example: performance-testing systems. You care far more about the degradation mode than about a statistical model of sustainable load.

  • Let's see, we have one guy complaining about how none of his programmer coworkers understand statistics, and we have X coworkers who undoubtedly disagree with him. Since we do not know him or any of his colleagues to any meaningful degree, we have to assign equal weight to each of their opinions. Statistics then tells us there is a 1/(X+1) chance of his being right, and an X/(X+1) chance of their being right. We can assume that X >= 2 based on his ranting, therefore resulting in the odds favoring them

    • by brian_tanner (1022773) on Saturday January 09, 2010 @09:56PM (#30711630)
      Wow. What class did you take that says if you don't know something you should assume equal probability?

      I don't know if there is an invisible elephant in my kitchen, so I guess I should assign equal probability to both outcomes. I also don't really know how Baccarat works, I guess my odds are 50/50.

      Without knowing something about him or his coworkers, you by definition cannot make any statistical statements. To make any statements, you would first need to make some observations. This is how statistics differs from logic. Statistics is grounded in data.

      I don't agree with Zed, but you may have just proved his point.
  • What has Zed Shaw done for humanity?

  • Everyone needs to learn statistics. All of us who understand one iota of it are in a constant state of depression over how everyone keeps on making the most banal mistakes. But just a general gripe is not very helpful. Getting everyone to take advanced degrees in statistics is simply not going to happen. Most engineering courses include some basics, but that only helps a bit. What is needed is to teach it (to the "masses", i.e. the ones who really ought to know better) in terms of the pitfalls first, and
  • I studied it for years, so my e-peen is bigger. It worked in school, so it has to work in reality and thus they are wrong when they tell me it does not, despite them having experience with real applications while I have not.

    Ok, snideness aside. Statistics is a wonderful tool (hey, my degree is in statistics actually), but I wouldn't want to impose my metrics on real applications without first checking whether they measure anything sensible. I turned to programming because, well, it's more suitable to me. Bu

  • ....Zed wants everyone to be just like him.

  • by yalap (1443551) on Saturday January 09, 2010 @08:31PM (#30710972)
    Lies, damned lies, and statistics. Us programmers are too busy dealing with the first two to ever reach the third...
  • by SanityInAnarchy (655584) <ninja@slaphack.com> on Saturday January 09, 2010 @08:41PM (#30711054) Journal

    So, since so many people don't seem to want to actually read Zed's stuff -- and I honestly don't blame you -- I'll try to summarize:

    Eventually, every major science adopted an empiricist view of the world. Except Computer Science of course.

    He tends to bitch a lot about computer scientists. I'm just starting a CS degree, and there is a Statistics class in the curriculum. Is he working with people with good degrees, people from a technical college with a "programming" degree, people from a diploma mill, or high school students with no degree at all?

    Of course, he seems to be implying it's everyone, and doing so in a typically Zed-like way.

    "All you need to do is run that test [insert power-of-ten] times and then do an average." Usually the power-of-ten is 1000...

    I don't know that I've ever heard that particular statement. But it's a good point:

    How do you know that 1000 is the correct number of iterations to improve the power of the experiment?

    Generally because it was probably closer to a million, so I'm erring on the side of taking more, rather than fewer, measurements. But without careful consideration, I could be way off.
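One hedged answer to "how many iterations?" is to keep sampling until the confidence interval on the mean is narrow enough, rather than picking 1000 (or a million) by habit. A rough sketch, assuming roughly normal timing noise and the large-sample 1.96 z-value; the benchmark itself is simulated here:

```python
import random
import statistics

def mean_with_ci(samples, z=1.96):
    """Return (mean, half-width of an approximate 95% confidence interval)."""
    m = statistics.mean(samples)
    sd = statistics.stdev(samples)
    half = z * sd / (len(samples) ** 0.5)
    return m, half

def measure_until_stable(run_once, rel_tol=0.01, min_n=30, max_n=100_000):
    """Keep timing run_once() until the ~95% CI is within rel_tol of
    the mean, instead of guessing a fixed iteration count up front."""
    samples = [run_once() for _ in range(min_n)]
    while len(samples) < max_n:
        m, half = mean_with_ci(samples)
        if half <= rel_tol * m:
            return m, half, len(samples)
        samples.append(run_once())
    return (*mean_with_ci(samples), len(samples))

# Simulated "benchmark": ~10ms average with gaussian noise.
random.seed(42)
fake_timer = lambda: random.gauss(10.0, 1.0)
m, half, n = measure_until_stable(fake_timer)
print(f"{m:.2f} ± {half:.2f} ms after {n} runs")
```

The point is that the right sample count falls out of the variance of the data, not out of a power of ten.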

    How are you performing the samplings?

    I think this is vastly less important than how you are dealing with the data, but it is also a good point. For example, his complaint is that an average isn't enough; with detailed enough logging, he could easily go back into my data and figure out min, max, standard deviation, histograms...

    How do you know that 1000 is enough to get the process into a steady state after the ramp-up period?

    Not a huge deal -- the "steady state" will almost certainly be faster than the "ramp-up" period. Worst case, I'm over-optimizing.

    What will you do if the 1000 tests takes 10 hours?

    Either ctrl+c, or try it 10 times.

    How does 1000 sequential requests help you determine the performance under load?

    Very good point here. It's a useful statistic, but you also need to measure things like 1000 simultaneous requests, not just 1000 in sequence.

    On the other hand, if your performance is acceptable with them all in sequence, you could just run it through something like EventMachine, so it's all sequential in production, too.

    The most troubling problem with these single number “averages” is that there’s two common averages and that without some form of range or variance error they are useless. If you take a look at the previous graphs you can see visually why this is a problem. Two averages can be the same, but hide massive differences in behavior...

    So yes, always make sure you can record enough statistics so that someone else can come along and use your data to give you something meaningful.

    The moral of the story is that if you give an average without standard deviations then you’re totally missing the entire point of even trying to measure something. A major goal of measurement is to develop a succinct and accurate picture of what’s going on...

    It doesn't have to be statistically accurate. It just has to be close enough.
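The article's point about identical averages hiding different behavior is easy to demonstrate. A minimal sketch with made-up response times:

```python
import statistics

# Two invented sets of response times (ms) with exactly the same mean...
steady = [100, 101, 99, 100, 100, 101, 99, 100]
spiky  = [50, 150, 60, 140, 55, 145, 50, 150]

# ...but wildly different spread, which the mean alone never reveals.
for name, data in [("steady", steady), ("spiky", spiky)]:
    print(name,
          "mean:", statistics.mean(data),
          "stdev:", round(statistics.stdev(data), 1),
          "min/max:", (min(data), max(data)))
```

Both report a mean of 100 ms, yet one service is rock-steady and the other swings between 50 and 150 ms, which is exactly why an average without a standard deviation (or min/max) is nearly useless.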

    Ah, confounding. The most difficult thing to explain to a programmer, yet the most elementary part of all scientific experimentation. It’s pretty simple: If you want to measure something, then don’t measure other shit.

    This is both a very good and a very bad idea. It ties into the peeve he had before -- ramp-up time. For example:

    If we want to take one single line of code and test it then we can. If we want to only verify one single query on a database then what’s stopping us?

    What's stopping us is that our applications don't actually work like that.

  • by presidenteloco (659168) on Saturday January 09, 2010 @09:07PM (#30711244)

    is one half mental.

    of course that explains why 90% of all programs written are CRUD.

    -with apologies to Yogi Berra, Theodore Sturgeon, and a 20% apology, as a matter of principle, to a guy called Pareto.

  • It's the Zed Effect (Score:4, Interesting)

    by greg_barton (5551) <<moc.oohay> <ta> <notrab_gerg>> on Saturday January 09, 2010 @09:13PM (#30711292) Homepage Journal

    The Zed Effect: Whether you're right or wrong people will disagree with you just to piss you off.

  • by Evil Shabazz (937088) on Saturday January 09, 2010 @09:53PM (#30711600)
    So I read through his article. Yes, the whole mindless rant. The conclusion that one should REALLY draw from it is: Zed Shaw is a douche with Asperger's who clearly feels like his own personal area of expertise is underappreciated. Hey Zed, get over it.
  • by Selfbain (624722) on Saturday January 09, 2010 @09:55PM (#30711620)
    I like how the first part of his Wikipedia article says "Zed A. Shaw is a troll" with four citations.
  • by frank249 (100528) on Saturday January 09, 2010 @10:05PM (#30711710)

    "I construct two sets of n=100 random samples from the normal distribution. Now, if I just take the average (mean or median) of these two sets they seem almost the same."

    So it's true: the n's justifies the means.

  • by upuv (1201447) on Sunday January 10, 2010 @01:49AM (#30712832) Journal

    I hear you, I do performance engineering of web based systems. The developers, the managers, the testers, the architects all have no clue. You are correct here.

    However, if you cannot present your "theory" of how to do something in a dumbed-down enough format, then who cares? The pretty graph is pointless. It will be misinterpreted, misunderstood, and misused.

    All the stats theory on the planet will not get you past the dumb manager or developer. Don't lose sleep over this. There is no point. Simply find metrics in your analysis procedure that do mean something to these people. They may not be the total picture, but they are something. Build a reputation for being correct by starting with simple things. You are always going to butt heads with a know-it-all developer / architect / manager. Fine, let them go off and waste money and time. They will be found out as morons in time. You do your thing and simply become the guy to ask about performance and how to do this.

    Being understated and consistently showing above-average results for your work is how you will rise up. Being an A-hole about it is not going to help anyone. As a matter of fact, I would can your butt for being a D#ck.

  • by Kludge (13653) on Sunday January 10, 2010 @06:59AM (#30713744)

    I question their metrics and they try to back it up with lame attempts at statistical reasoning. I really can't blame them since they were probably told in college that logic and reason are superior to evidence and observation.

    I work with a number of statisticians and I have the opposite problem. They look at the data, apply mathematical transforms to it, and come to a conclusion, whether that conclusion makes any sense or not. They make little attempt to reason that the data may be flawed (as experiments often are), or may not really represent what we are trying to measure, or that they are using the wrong statistic to summarize the effect. It is very frustrating.

  • by Goldsmith (561202) on Sunday January 10, 2010 @12:14PM (#30714900)

    I'm a physicist, I know plenty of statistics. The kinds of statistics he's talking about are not hard. If you can do algebra, you can do things like calculate the standard deviation and variance of a set of measurements.

    Was this rant really necessary? I run into people in physics who don't take care of these details. I find that a simple "can you put a standard deviation on that number?" or "can you repeat the experiment?" generally gets the job done. If you want to be more scientific, just start with those questions, and see where it takes you... you could even add "please" if you wanted to be nice. I find threatening people with death and belittling their intellect while talking about trivial calculations doesn't generate useful data.

    To be fair, it sounds like Zed has been working as staff at a university. This has nothing to do with statistics, but it's probably the real reason he's in such a bad mood.
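The "just algebra" calculation the parent comment mentions really is short. A sketch of the sample standard deviation from first principles, with invented timing numbers:

```python
def sample_stdev(xs):
    """Sample standard deviation, spelled out with plain arithmetic."""
    n = len(xs)
    mean = sum(xs) / n
    # Divide by n - 1 (Bessel's correction) since this is a sample, not a population.
    variance = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return variance ** 0.5

times = [12.1, 11.8, 12.4, 12.0, 11.7]  # made-up measurements (ms)
print(f"{sum(times)/len(times):.2f} ± {sample_stdev(times):.2f} ms")
```

That is the whole "can you put a standard deviation on that number?" ask: one mean, one sum of squared deviations, one square root.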
