Programming

Can You Measure Software Developer Productivity? (mckinsey.com) 157

Long-time Slashdot reader theodp writes: "Measuring, tracking, and benchmarking developer productivity has long been considered a black box. It doesn't have to be that way." So begins global management consulting firm McKinsey in Yes, You Can Measure Software Developer Productivity... "Compared with other critical business functions such as sales or customer operations, software development is perennially undermeasured. The long-held belief by many in tech is that it's not possible to do it correctly—and that, in any case, only trained engineers are knowledgeable enough to assess the performance of their peers.

"Yet that status quo is no longer sustainable."

"All C-suite leaders who are not engineers or who have been in management for a long time will need a primer on the software development process and how it is evolving," McKinsey advises companies starting on a developer productivity initiative. "Assess your systems. Because developer productivity has not typically been measured at the level needed to identify improvement opportunities, most companies' tech stacks will require potentially extensive reconfiguration. For example, to measure test coverage (the extent to which areas of code have been adequately tested), a development team needs to equip their codebase with a tool that can track code executed during a test run."

Before getting your hopes up too high over McKinsey's 2023 developer productivity silver bullet suggestions, consider that Googling to "find a tool that can track code executed during a test run" will lead you back to COBOL test coverage tools from the 80's that offered this kind of capability and 40+ year-old papers that offered similar advice (1, 2, 3). A cynic might also suggest considering McKinsey's track record, which has had some notable misses.

  • Sounds like (Score:5, Insightful)

    by quonset ( 4839537 ) on Saturday August 19, 2023 @05:40PM (#63781040)

    McKinsey is trying to drum up business. "Here, let our consultants show you how to measure things. Only $700/hour. Should only take a year or two. Maybe three. If you're not satisfied we'll keep working til you are."

    • Re:Sounds like (Score:5, Interesting)

      by rudy_wayne ( 414635 ) on Saturday August 19, 2023 @05:50PM (#63781066)
      Yes, You Can Measure Software Developer Productivity.

      If you pay McKinsey a lot of money.



      ** No refunds. Satisfaction not guaranteed.
      • Re:Sounds like (Score:5, Interesting)

        by Knightman ( 142928 ) on Saturday August 19, 2023 @09:02PM (#63781406)

        It's quite easy to measure developer productivity.

        Are they doing what they are supposed to be doing, and are they doing it within a reasonable timeframe and at a reasonable quality?

        To answer that they need to have a boss who actually understands what being a developer entails - which is seldom the case. Adding more metrics means there will be an increase in micromanagement that leads to developers being less productive, since they are forced to chase lagging metrics.

        As always, YMMV...

        • by AmiMoJo ( 196126 )

          Most software is vertical, i.e. it's only used internally by the company that developed it. All that matters is that it does the required job, and doesn't have any tangible costs that could be easily eliminated.

          It might be horrible to use, it might be inefficient, it might be a nightmare to maintain, but businesses don't care about that. If employees have to suffer to use it, too bad. Fixing it costs money and doesn't increase their profits.

          That's why you used to see so many Internet Explorer-only web apps u

          • by bteeter ( 25807 )
            Yep 100%. Though I'd argue that ActiveX browser components were almost never a good idea in the first place.
        • by bsolar ( 1176767 )

          It's quite easy to measure developer productivity.

          Are they doing what they are supposed to be doing and are they doing it within a reasonable timeframe and quality?

          Fundamentally the issue with productivity metrics is that they are an attempt to deliver a clear and simple solution to a complex problem, and we all know how that tends to work.

          Take the latter evaluations suggested: they are not "quite easy" at all. Even among experienced developers, there can be very different opinions on what constitutes "reasonable timeframe" and "quality" and the answer is going to be very dependent on a myriad of factors on any non-trivial project.

          The problem with most productivity me

        • by kmoser ( 1469707 )
          "Quality" is not so easy to measure. There are many ways to write code, some faster (but perhaps with less efficient runtimes) and some slower (but perhaps with more efficient runtimes). To decide whether a given set of code meets your standards, you have to explicitly define those standards.

          And even then there will be side-effects: even the seemingly "best" code out there may in fact be riddled with inefficiency, if not outright bugs. Or maybe it simply wasn't forward-thinking enough, something you often
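To make the parent's point concrete with a small, invented example: both functions below produce identical results and pass the same tests, yet one is quick to write and the other quick to run, and deciding which counts as "quality" requires a standard defined in advance.

```python
def common_quick_to_write(a, b):
    """Values present in both lists: O(len(a) * len(b)), since each 'in b' is a linear scan."""
    return [x for x in a if x in b]

def common_quick_to_run(a, b):
    """Same result, roughly O(len(a) + len(b)) on average, by building a set once."""
    b_set = set(b)
    return [x for x in a if x in b_set]

if __name__ == "__main__":
    a = list(range(2000))
    b = list(range(1000, 3000))
    # Identical output; very different runtime behaviour as the lists grow.
    assert common_quick_to_write(a, b) == common_quick_to_run(a, b)
```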
      • And you have to pay them again every few years to revise their measurement technique as Goodhart's Law [cna.org] takes effect. Mind you it's a great deal for McKinsey.
      • by dvice ( 6309704 )

        > Yes ... If you pay McKinsey a lot of money.

        I doubt that. They ask you to measure e.g. test coverage. They did that in one project. When I looked at the tests made by the Indian developers, I noticed that there was a catch-block. They had made a test that actually finds a bug that crashes the software, but they just caught it and ignored it to make the test-coverage percentage look good, instead of actually fixing the bug and writing a test that actually tests something.
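For anyone who hasn't seen this trick, a hypothetical sketch of how it works: the gamed test below executes the buggy line (so coverage goes up) but swallows the crash, so it can never fail.

```python
def parse_quantity(text):
    # Buggy on purpose: crashes on inputs like "12 kg" instead of handling the unit.
    return int(text)

def test_parse_quantity_gamed():
    # The buggy line runs, so a coverage tool counts it as covered...
    try:
        parse_quantity("12 kg")
    except Exception:
        pass  # ...but the crash is silently ignored, so this test always passes.

def test_parse_quantity_honest():
    # An honest test pins down the intended behaviour; it keeps failing until the bug is fixed.
    assert parse_quantity("12 kg") == 12
```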

    • by Luthair ( 847766 )
      Yea, in addition to a weird cult-like structure, they have also recommended a lot of shady business practices and been involved in a number of scandals. https://en.wikipedia.org/wiki/... [wikipedia.org]
    • by alw53 ( 702722 )
      They're pretty productive; they got $14 million out of the University of Arizona for useless advice.
    • by sjames ( 1099 )

      Brought to you by the makers of Enron!

    • Easier/cheaper way than throwing away $700/hr. Just feed all PRs, nay, all keystrokes that a programmer makes along with the current time and ask ChatGPT to rank by a score of productivity.

  • by Rujiel ( 1632063 ) on Saturday August 19, 2023 @05:47PM (#63781058)
    about fuckin McKinsey offering businesses new reasons to fire people? And then call it an "improvement" to the employer's "tech stack".
  • by ElitistWhiner ( 79961 ) on Saturday August 19, 2023 @05:49PM (#63781064) Journal

    Human programmer productivity has historically been measured twelve lines/day.

    • by ukoda ( 537183 ) on Saturday August 19, 2023 @06:11PM (#63781124) Homepage
      Yes, a pretty meaningless figure in the big-picture view of things. What makes that worse is it is probably the best quantifiable metric out there. The reality is only a person who has done the job before, and has looked at the work being done and evaluated it in detail, can make a meaningful assessment.

      Sometimes I am at my most productive while blankly staring out the window while running the problem over in my head. Sometimes I am just looking at the view and wondering when I can go home. How do you measure that?
      • The reality is only a person who has done the job before, and has looked at the work being done and evaluated it in detail, can make a meaningful assessment.

        I'm not even sure about this. Take such a person, have them evaluate the performance of another person given that task, and they'll undoubtedly hate it, say it took too long, and the solution was bad. So that person gets fired and this guy takes over. Then you hire another person who has done the job before, and have HIM evaluate the solution, and he'll s

        • by ukoda ( 537183 ) on Sunday August 20, 2023 @04:07AM (#63781814) Homepage
          You have a fairly dark view of things; let's hope I never have you doing a code review of my code. Firstly, the ideal person to evaluate a developer is their immediate boss, so there is no motive to give a bad review just to get rid of them; you are just making work for yourself to find a replacement. While not ideal, a peer could do the review. Again, getting rid of the person will probably just mean more work on their plate. I would be reluctant to bring in an external contractor, partly for the reason you raised, but also how do I measure the contractor's ability to make a proper assessment?

          I have managed teams before and after a while you get a sense of each team members general skills, their strong and weak points. If you create a positive work environment where people feel ok discussing the challenges they are having and asking for guidance you soon get a feel for where their skills are at. I have been lucky to mostly work with skilled people. The few 'difficult' developers, who make expensive mistakes, are typically the ones who never seek help when they are out of their depth or just need some guidance to keep on track.
        • by dvice ( 6309704 )

          First of all, we should make some categories:
          - Superior developers
          - Average developers
          - Under performers

          We know that an average developer can't tell if someone is superior, but we know that an average developer will ask for help from a superior developer or from another average developer.

          We also know that under performers don't get any work done and people don't seek their help. If their work is given to someone else, that person will do it in hours or days usually.

          Based on this, we should be able to spot under per

    • by Okian Warrior ( 537106 ) on Saturday August 19, 2023 @07:33PM (#63781284) Homepage Journal

      Easy way to find the programmer productivity:

      Just multiply the wavelength of the programmer by the voltage drop across, then divide by the speed of light.

      I have a bunch of pat answers to use when some damn-fool of an interviewer (or reporter) asks a nonsensical question. My favorite is "three", as in:

      Her: "How does one make an intelligent program?"
      Me: "There's no simple answer to that question."
      Her: "Can't you just give us a quick overview?"
      Me: "Okay, Three".
      Her: "Um... what do you mean, three?"
      Me: "It's three, the number three. You don't know what the number three is?"
      Her: "I mean, I don't understand how three is the answer".
      Me: "You wanted a quick answer. The subject is so big that any attempt to give a simple answer loses most of its meaning."
      Me: "But the answer you want, is three. Most people know what three is, so it's an answer they'll understand."
      Her: "Um... let's move on to another topic..."

  • Goodhart's law (Score:5, Insightful)

    by Bruce66423 ( 1678196 ) on Saturday August 19, 2023 @05:54PM (#63781076)

    https://en.wikipedia.org/wiki/... [wikipedia.org]

    tells us that 'When a measure becomes a target, it ceases to be a good measure'. In programming that means that if developers are measured on a specific measure, they will address that measure - to the detriment of other aspects of the software development.

    • Re:Goodhart's law (Score:5, Insightful)

      by Echoez ( 562950 ) * on Saturday August 19, 2023 @07:32PM (#63781282)

      Exactly. If you measure me by lines of code? I'll create 100 lines of code to do something simple. Measure me by Jira tickets closed? I'll open more and then do minor things to fix them. Measure things by reliability in production? I'll spend months testing simple changes to ensure nothing goes wrong no matter what. Measure me by thorough code reviews? I'll spend a week on each one.

      • Or when they say I have to do a certain amount of training a year. Oh OK, we'll drop actual program work and do the training.

      • "Lines of code written" has long been recognized as a terrible way to measure software developer productivity.

        If we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent."

        -- Edsger Dijkstra

        See also https://www.folklore.org/Story... [folklore.org]

      • Re:Goodhart's law (Score:4, Interesting)

        by Opportunist ( 166417 ) on Sunday August 20, 2023 @05:11AM (#63781894)

        Anecdote time: They introduced measurement by tickets in our company. We pentesters were measured by the number of tickets we created from our tests; operations was measured by the number of tickets they managed to close successfully.

        It was a wonderful symbiosis between the teams, I can tell you that. It kinda fell apart when we overdid it and spent more time opening and closing tickets than actually doing any work. They started to notice when we opened/resolved a few hundred tickets every day.

        Admittedly, we wanted them to notice that their metric is stupid. And yes, it worked. They noticed. After THREE FUCKING MONTHS!

    • If you manage by numbers, people will make sure you get numbers. If you manage by results, people will deliver results.

      • True but unhelpful for the average coder, for whom there is no measurable link between his coding activities and any results that can be measured without creating distortion. Which is why lots of other measures have been tried - and have failed - to measure the output of coders...

  • I mean, when I write code, you can tell how I'm doing by how many mugs of coffee I've poured down my zombified gullet. :P
    • For productivity, yes.

      For debugging, and how much it pisses me off, the Gin Tonic number is a way more accurate measurement.

  • by david.emery ( 127135 ) on Saturday August 19, 2023 @06:06PM (#63781114)

    They know the cost of everything, and the value of nothing.

    And I remember arguments about "lines of code added" as an argument (in both mathematical and philosophical senses) for productivity. There's the friend who took over a compiler project, refactored it, got rid of 20k SLOC. I told him, "Huh. By the productivity models, you owe your employer a couple years worth of work."

  • Can You Measure Software Developer Productivity?

    Depends on the task: writing new code, editing existing code, debugging, porting, testing, etc ... Some people are better at some things than others, and some are *way* better. I'm also going to add: in what programming language(s) and/or on what OS(es).

    I have experience in several languages on several OSes and have been a sysadmin and software developer, mostly systems-type programming, on all the platforms I've administered and also have ported code (compiled and scripted) between many of those platf

  • Parrots (Score:5, Insightful)

    by Retired Chemist ( 5039029 ) on Saturday August 19, 2023 @06:19PM (#63781144)
    McKinsey comes in, asks the employees what is needed, and then tells management as if it were a revelation. Then they make a bunch of recommendations, which the leadership pretends to follow. Their major function is to provide cover for senior management if something goes wrong. The leadership can say that they were following the advice of the leading management company.
    • Perfectly put. I have seen this exact pattern in several cases. Sometimes it takes years to run its course, but typically as soon as the "consultants" left, it was back to whatever was done before.

      The worst bit was the inevitable employee who sees the whole thing as an opportunity to self-promote by strongly embracing whatever the consultant says, whereupon the consultant tells management that this person is key to their transformation and should be promoted. Anyone who questions the consultant openly is "

    • Norwegian Blue Parrots, no doubt!!

  • I hope not (Score:4, Insightful)

    by Baron_Yam ( 643147 ) on Saturday August 19, 2023 @06:25PM (#63781154)

    Sometimes you're typing a million characters per second, and sometimes you spend an hour or two (or more) thinking about what you should be typing.

    I can see no way to judge productivity other than comparing completion time and bug count against similar past projects.

    Find employees you trust, pay them enough to keep them, and unless they're surfing for porn or something... leave them the hell alone as long as milestones are being hit approximately as quickly as for similar past work.

    Give a manager a metric for judging coder performance, and you will ultimately have low morale, bad code, and high churn.

  • by clawsoon ( 748629 ) on Saturday August 19, 2023 @06:27PM (#63781158)

    This reminds me, by contrast, of Waddell and Bodek's "Rebirth of American Industry", in which they compared the management practices of American and Japanese carmakers in the 1980s. The Americans were focused on breaking down every step of manufacturing in accounting terms so that they could figure out exactly how much profit could be attributed to the assembly of a steering wheel or the installation of a taillight. The Japanese were focused on responding as quickly as possible to worker productivity suggestions so that they could build better cars faster.

    It was an attitude difference between "your employees are cogs who must be measured and controlled" and "your employees are doing the actual work and probably have the best ideas about how to do it better."

    Anyway, hopefully some software developer subject to these McKinsey ideas can write up a program to easily measure the productivity of their management.

  • by tlhIngan ( 30335 ) <slashdot.worf@net> on Saturday August 19, 2023 @06:51PM (#63781228)

    A lot of good software development requires sitting down and, well, thinking. Be it trying to come up with a creative solution to the problem, or just "a solution". I would love to have a way to know if my sitting around and thinking is getting me closer to the solution I seek. I mean, imagine how much better my life would be if I had a progress bar indicating my progress. If I wander down the wrong path, the progress bar wouldn't move, or would move backwards, showing my path is wrong and I should back up and try the other way.

    So much time is wasted going down the wrong way; knowing that earlier would make me much more productive.

    And what about when I need to research the problem space more to understand the problem and potential solutions?

    Then again, why do companies waste money buying these non-solutions when they could spend similar amounts of money improving the lives of their employees? Millions spent on consultants vs. those same dollars spent making people's working lives less miserable? It could be simple things - standing desks are stupidly cheap, as are nicer monitors and keyboards and even things like computers. Or even better chairs. Or better lighting. Or walls, or better ventilation, or remote work?

    I mean, I was tasked to do a relatively simple thing, but the most straightforward solution to the problem didn't work (it ended up in a dependency loop in the build system). So I tried a less straightforward solution, which ended badly before I came up with a third solution that turned out to be the most elegant one given the restrictions I discovered during the first two attempts. I think over the 3 weeks I worked on it, I might have written maybe 2000 lines of code, but when I finally committed my solution to the repository, I really added maybe 100 lines total. Negative lines, if you consider that I got rid of some code.

    It took me 3 weeks, but I avoided adding a ton of technical debt and came up with something small and elegant: things worked the way everyone expected, the changes were self-contained and small rather than exploding across the entire repository, and I removed a bunch of #ifdefs as the conditional code no longer applied, streamlining the source tree. No more special-case code.

    How do you measure that? I spent nearly a month on a stupid problem that was an iceberg in disguise and ended up writing a tiny amount of code that cleaned up a lot of the code base.

    • A lot of software development is basically just like this - you're conceptually rewiring a network of dependencies. And those dependencies go right down to the level of individual lines of code. Every system is completely different, so every rewiring is new, unexplored, unpredictable territory.

      If the problem was predictable, that means it was repeatable, which means it should have been abstracted away and packaged into some component, and the issue (bug/feature) won't arise again. The next issue will be a new issue, not
    • by jeti ( 105266 )
      My proudest moments are those when I was able to remove three or five thousand lines of code while keeping all the features.
    • by Stalyn ( 662 )

      Kolmogorov complexity is not computable. Essentially it's impossible to know how _hard it is to write_ a particular algorithm if given the desired output. We have heuristic measures but that's all they are, heuristics. What can be measured is how an algorithm performs and that should be the measure of a programmer. Does the algorithm produce the desired output and does it perform optimally? But I feel like that's not what a programmer is truly measured on. They are more measured on how well they bend to the
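For reference, the standard statement behind that claim (textbook material, not specific to this thread):

```latex
% Kolmogorov complexity of a string x, relative to a fixed universal machine U:
\[
  K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
\]
% i.e. the length of a shortest program that outputs x. For any two universal
% machines U and V there is a constant c with |K_U(x) - K_V(x)| <= c for all x,
% so K(x) is well defined up to an additive constant. The classic result is that
% no total computable function equals K: if one did, a short program could search
% for the first string x_n with K(x_n) >= n and print it, thereby describing x_n
% in only O(log n) bits, a contradiction (the Berry-paradox argument).
```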

    • If we can't measure "thinking productivity" or "thinking effort" how can we reward for it fairly?

      That is the basic idea behind "free markets", right? That information will allow us to fairly reward people, and the rewards will motivate us toward higher productivity.

      Heck, SHOULD we reward fairly like people claim we've been trying to? I'll bet a lot of people would go hungry if we truly did, if people were paid for what they actually produced and docked for wasting others' productivity.

  • If you could not measure the productivity of something, that would mean you could put a monkey to do that job. As in a literal fucking ape. And not one of the great ones. One of the stupid little ones would do fine.

    The question here isn't if you can measure software developer productivity. You absolutely can, because we can measure that said small monkey cannot do the job as well as a typical software developer. And no you racist twits, there's a reason why American Indians outearn you by such a massive margin

  • But you can tell on the timescale of projects, typically a couple of months at least.
  • by RightwingNutjob ( 1302813 ) on Saturday August 19, 2023 @07:10PM (#63781258)

    You can tell they were productive if you're still using their code two, five, or even ten years later.

    You can tell they were unproductive if you're not, or if you had to fire them, or if they got frustrated for whatever reason and quit.

    • Some unproductive people produce shitty code that gets used many years later. They get their promotion while others are stuck maintaining it.
    • by SendBot ( 29932 )

      This sounds like the kind of toxic optimism Pixar warned kids about in the movie Inside Out (the Joy character)

      You can tell they were productive if you're still using their code two, five, or even ten years later.

      The dangerously faulty code had been in place so long, no one could determine its origins. "Probably some jr dev", they excuse. "This is definitely Bad Practice. Even if we work around it, this could cause problems for someone else in the future. It could be causing problems now that we're unaware of."

      So why wasn't it fixed? Fear. Fear of their own chaos, for it controlled them now. The managers co

  • Measure my productivity by lines of code. I can pump shit out at a phenomenal rate. But it's going to be exactly that: shit.

    I'm paid to solve problems. Sometimes a problem requires me to change a single variable. But finding that single variable could take hours, sometimes days. Debugging is often a time consuming process.

  • got to fill out the TPS reports

  • Can you measure productivity? Yeah, kind of. It's not an exact science. Whatever you measure, people will optimize for it [joelonsoftware.com], especially software engineers.

    Another thing that's maybe as or more important than productivity is trustworthiness, which is not usually measured, but I'd much rather work with a trustworthy but mediocre engineer than a brilliant and highly productive but unreliable one.

    This sounds like they're fishing for business.

  • Leadership Program or whatever. Louis Rossmann covered this.

  • You can measure anything. The numbers will just be bullshit and useless. You can't make reliable predictions from them.

    Software is not an assembly line. There isn't some blue-print design that gets made repeatedly. If something in software development repeats, it gets reused instead.

    This means everything in software is always new, untested output. Especially at shitty web development jobs where people continually reinvent new Javascript frameworks. That means there is no applicable historical data that can
    • by gweihir ( 88907 )

      That means there is no applicable historical data that can be meaningfully applied to a new project that would predict how long it would take.

      Indeed. And even more critically, you cannot predict whether a software project will succeed or fail, whether the code will be maintainable to a reasonable degree, whether it will have potentially catastrophic security problems, whether usability will be good, etc.

      Face it, writing software is custom engineering. In any other engineering discipline, that is reserved for experienced, highly capable engineers. In software it is often done by people who do not really even understand the basics. That cannot and

      • I've always found a good way of framing it is to consider software engineering as collective opera writing. The final product includes sheet music for every instrument and vocalist, costume design, blocking, lighting, all the set construction, and just to put a cherry on top, the negotiated contracts of every employee.

        • by gweihir ( 88907 )

          Interesting. Also should be stated that the music needs to harmonize in the end and be a valid opera that people actually want to listen to. You probably have some musical background (I do not), so that would be obvious to you.

          Not quite sure whether I would use this comparison, but it is definitely an interesting one.

  • What happens when you figure out that 10% of programmers are 10x more efficient than 90% of the programmers?

    Are you going to pay your high performers 10x more?

    Are you going to pay your low performers 10x less?

    We don't have nearly the range of compensation to deal with the consequences of truthfully measuring developer productivity.

    • by m00sh ( 2538182 )

      You help the other 90% get 10x more productive. You hired those people, you know they have the capability, just figure out what's causing them to not be productive.

      Or, you really look at your hiring process.

      • If that was possible, it would have happened already.

        Sadly, not everyone can be a x10 programmer, no matter how hard they try.

    • A friend of mine got a full time contract and works about 20 hours a week.

      His boss, when asked by someone else how this is fair, said "I pay him to produce results. Not to keep a chair from flying off into space. You produce what he produces in half the time, you only have to work 10 hours a week for full pay".

      Then again, he has a boss who is now 50 and was a developer in the same line of work for 30 years. He knows what assignments should need what time for an average programmer. And if he's done with it i

      • Yeah, that sounds about as good as it gets - maybe you comp the x4 programmer by only making them work 10 hours a week.

        And I guess the x10 coder just shows up for one morning a week for 4 hours :)

        Not sure how you institute that across a large bureaucratic organization though - some bean counter is not going to like that arrangement.

        • Well, being twice as productive as the average programmer is already pretty impressive, I doubt anyone will make 10 of them obsolete. :)

          My understanding is that the bigwigs in the company know about the arrangement and also know that losing this guy would be a loss, and that he can and does get up and leave if he isn't happy with the arrangements.

          If you have a very special skill set and are really good at what you do, it's usually not hard to get your demands met.

          • I have regularly noticed programmers that are literally orders of magnitude better than others. In some cases, you could put 1000 regular programmers on the job for any amount of time, and they'd never succeed where the one ace programmer can.

            I've certainly seen some people spend weeks working on a problem that I've been able to solve in an hour. Those people might be useful doing standard changes that don't require strong problem solving skills, but they simply aren't as fast (i.e., productive) as people

  • by gweihir ( 88907 )

    Or rather not meaningfully. You need to look at quality as well, at the level of insight in the solutions, at maintainability and, more important than ever, at security. You can measure some of these in isolation but the result will be meaningless.

  • by kschendel ( 644489 ) on Saturday August 19, 2023 @09:42PM (#63781444) Homepage

    Ah, yes. McKinsey, where management advice goes to die (after being billed).

    I've been involved in I think 4 McKinsey "interventions" in my career. 2 were outright harmful, and the other 2 were merely a massive waste of money and time.

    Can you measure software productivity? Well, maybe, depending on how you *define* productivity. For sure, you can't apply any naive metric; the real wizards in any given organization are the ones who might spend a day making the proverbial chalk mark where the part isn't working, where nobody else even knows where to look.

    There's a wonderful book, which is alas in a storage unit at the moment so I can't find the name, about measurement and organizational dysfunction. The thrust is that if what one measures isn't aligned with the organization's goals (and those goals are very often misunderstood), one will lose track of them and favor the organizationally irrelevant but measurable metrics. In simple words, the drunk-under-the-lamp-post syndrome: "I dropped them over there, but the light is better here."

    I suspect that too many productivity measurers imagine (or hope!) that software is linear, akin to piecework like making skirts or hats. Alas for them, it's not. Software is everywhere discontinuous and so is software development - especially when it's bug fixing.

    • Regarding measuring the wrong thing:

      Military bases frequently have a "base exchange". In a nutshell, they're department stores where military members can purchase goods. Quite useful since many military bases are fairly isolated and junior enlisted frequently don't have cars to travel to nearby cities or towns. In any case, these stores need managers and these managers need to be evaluated. At one point some higher-up decided that a good criterion would be to see how well the shelves were stocked. After a

  • I'm sure AI will be used to measure it in the future. However, developers will also write code using AI.

  • Linked not once, but twice:

    https://dl.acm.org/action/cookieAbsent

    /sarcasm

  • by djb ( 19374 ) on Sunday August 20, 2023 @04:08AM (#63781816) Homepage

    The key to good productivity comes down to good leadership.

    Find good people
    Remove bad people
    Give clear requirements
    Create a great environment
    Allow the time needed for people to do their best work

    Then stand back and stay out of the way as much as possible.

  • Unfortunately only after the project is finished. You measure the number of bug report tickets.

  • McKinsey - creating a problem that only McKinsey can solve.

    Just like every Tom, Dick and Harry who invents a new Pattern/Practice that solves a problem that rarely exists, making development/debugging more complex - just for the sake of earning money - rather than getting stuff done and meeting expectations - on time and on budget.

  • It's hard to measure individual performance. Engineering teams have deliverables on a schedule; making those schedules is a measurement of productivity. And effort can be estimated by developers and tracked to evaluate project managers. Tasks can be tracked and prioritized in something as simple as a bug database. From there management can collect metrics such as the time to close tickets or the rate at which tickets are closed over a long period.

    Since engineering consists of professionals, making them define s
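As a rough sketch of the kind of metric collection described above, assuming a hypothetical ticket export with created/closed timestamps (all field names and values are invented for illustration):

```python
from datetime import datetime
from statistics import mean

# Hypothetical export from a ticket tracker; field names are illustrative only.
tickets = [
    {"id": 101, "created": "2023-07-03T09:00", "closed": "2023-07-05T17:30"},
    {"id": 102, "created": "2023-07-04T10:15", "closed": "2023-07-04T16:00"},
    {"id": 103, "created": "2023-07-10T08:45", "closed": None},  # still open
]

def hours_to_close(ticket):
    """Return the time to close in hours, or None for tickets that are still open."""
    if ticket["closed"] is None:
        return None
    created = datetime.fromisoformat(ticket["created"])
    closed = datetime.fromisoformat(ticket["closed"])
    return (closed - created).total_seconds() / 3600

durations = [h for t in tickets if (h := hours_to_close(t)) is not None]
print(f"closed: {len(durations)}/{len(tickets)} tickets")
print(f"mean time to close: {mean(durations):.1f} h")
```

Of course, per Goodhart's law elsewhere in this thread, the moment these numbers become targets they stop telling you much.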

  • This is the kind of question and problem you have when you decide not to promote from within. If you hire programmers on one side and managers on the other, the managers have no idea what they are doing.
  • Pay them per K-LOC.

    In the PBS documentary Triumph of the Nerds, Microsoft executive Steve Ballmer criticized the use of counting lines of code:

    "In IBM there's a religion in software that says you have to count K-LOCs, and a K-LOC is a thousand lines of code. How big a project is it? Oh, it's sort of a 10K-LOC project. This is a 20K-LOCer. And this is 50K-LOCs. And IBM wanted to sort of make it the religion about how we got paid. How much money we made off OS/2, how much they did. How many K-LO
  • How do you measure "deciding what to make"? Or how much effort it took to learn a new concept or tool?

    Is an artist "failing" to be productive if they don't love their own result? Many technology products need tons of revisions, or even replacement eventually. Are those "negative productivity"? Or was the previous claimed productivity false?

    Is putting paint on canvas being "productive" no matter how it was done, or the effect it has? And by that thinking submitting changes is productivity?

    We measure wha

  • "Developer productivity" is a really challenging problem because in many cases more-correct solutions are simpler than incorrect solutions - and, anyhow, it is often challenging to even measure "solution" in this space.

    But regardless of whether experienced developers can measure productivity in anything remotely resembling an actionable way, I'm 100% certain that some random third-party consultant can't do so. I'd say that for any situation where McKinsey CAN successfully measure developer productivity, the

  • Take the amount of code a developer wrote in order to solve a problem, add the dependencies and subtract comments and documentation. Perhaps if the language has some form of nested structure, multiply every token you count by the level of nesting before you sum them up. (Obviously one needs to work out the details)

    If one produces a lot of code, not only will they spend a lot of time and resources writing it, but everyone coming after them will also need to spend time and effort to read and understand it. Co
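A rough sketch of the nesting-weighted count proposed above, for Python source only; the particular weighting (depth + 1 per token, comments skipped, dependencies and docstrings ignored) is just one way of working out the details left open.

```python
import io
import tokenize

# Tokens that carry no "code" content and therefore are not counted.
_SKIP = {
    tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
    tokenize.INDENT, tokenize.DEDENT,
    tokenize.ENCODING, tokenize.ENDMARKER,
}

def weighted_token_count(source: str) -> int:
    """Sum the tokens in Python source, each weighted by its indentation depth + 1.

    Comments are skipped entirely; deeper nesting makes code "cost" more,
    roughly in the spirit of the metric proposed in the comment above.
    """
    depth = 0
    total = 0
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.INDENT:
            depth += 1
        elif tok.type == tokenize.DEDENT:
            depth -= 1
        if tok.type in _SKIP:
            continue
        total += depth + 1
    return total

if __name__ == "__main__":
    flat = "x = 1\ny = 2\n"
    nested = "def f():\n    if True:\n        x = 1  # deeply nested\n"
    # The nested snippet scores higher per token because of the depth weighting.
    print(weighted_token_count(flat), weighted_token_count(nested))
```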
