Perl Programming

CPAN: $677 Million of Perl

Adam K writes "It had to happen eventually. CPAN has finally gotten the sloccount treatment, and the results are interesting. At 15.4 million lines of code, CPAN is starting to approach the size of the entire Red Hat 6.2 distribution mentioned in David Wheeler's original paper. Could this help explain Perl's relatively low position in the SourceForge.net language numbers?"
This discussion has been archived. No new comments can be posted.

  • Wow, using sloccount on the full POPFile [sf.net] source shows that developing it would have cost around $500K in a regular software company. That seems about right given the length of time we've been working on it and the number of people involved. Cool tool.

    Now if only I could push the donations up above $5,000 :-)

    John.
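
    For anyone who wants to try the same thing on their own project: sloccount is a command-line tool, so a thin wrapper is enough to pull out the headline figures. A rough sketch (it assumes sloccount is on your PATH, and the wording of its summary lines may vary between versions, hence the loose regexes):

    #!/usr/bin/perl
    # Run sloccount on a checkout and print its headline figures.
    use strict;
    use warnings;

    my $dir = shift @ARGV or die "usage: $0 <source-dir>\n";
    open my $fh, '-|', 'sloccount', $dir or die "can't run sloccount: $!\n";
    while (my $line = <$fh>) {
        print $line if $line =~ /Total Physical Source Lines of Code/i
                    or $line =~ /Total Estimated Cost to Develop/i;
    }
    close $fh;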
  • If you take out the punctuation, though, it's down to twelve lines of code.
  • Huh? (Score:4, Insightful)

    by Billobob ( 532161 ) <billobob@gmail. c o m> on Friday July 30, 2004 @09:22AM (#9843449) Homepage Journal
    Low position? For a language that's not supposed to be a full-blown low-level language like C/C++, Perl is pretty damn well represented - over 1/3 the number of projects compared to C isn't that bad. If you have just one file, something like SourceForge usually isn't needed.
    • Re:Huh? (Score:2, Insightful)

      by sdcharle ( 631718 )
      A lot of projects on SourceForge are like bands conceived by teenagers that never get past the 'designing the first album cover' stage.

      SourceForge is a great tool with meaningful projects there, but you kind of have to take the info you get from looking at overall numbers there with a grain of salt.

  • Bahhh! (Score:5, Funny)

    by justanyone ( 308934 ) on Friday July 30, 2004 @09:24AM (#9843467) Homepage Journal
    Bahhh, I know people richer than that!

    Now compute the economic gain of using Perl vs. any other language:
    Perl vs. Nothing : $677M
    Perl vs. C : $1.25B
    Perl vs. C# : $2.77B
    Perl vs. Hand Optimized Assembly on Honeywell DPS-3E running GCOS operating system: Priceless
  • Mining CPan (Score:2, Interesting)

    by Numen ( 244707 )
    Whatever one's favourite language might be, a project to mine CPAN and port useful modules to Python, Java or C# would be interesting.... Perl syntax reads as a little terse to many non-Perl devs.
    • by Dr. Zowie ( 109983 ) <slashdot AT deforest DOT org> on Friday July 30, 2004 @09:27AM (#9843494)
      >"C#"
      You misspelled "INTERCAL".
    • Well, what I'd like to see first would be a Python equivalent to CPAN existing in the first place. (Name suggestion, since CPAN is taken: CYAN, the Comprehensive pYthon Archive Network.) Languages such as PHP with PEAR, and R with CRAN, have done very well following the same model, even if these archives aren't nearly as big or comprehensive as CPAN. Give it time, give it time ...

      And for those who say "do it yourself" -- I don't have the resources. However, I will say, that if CYAN existed, I would ha
      • Re:Mining CPan (Score:3, Informative)

        by Waffle Iron ( 339739 )
        Well, what I'd like to see first would be a Python equivalent to CPAN existing in the first place.

        While it's not nearly as big as CPAN, I often find Python code I need in the Vaults of Parnassus [vex.net]

        • Oh yeah, that's a good site, and I've found some useful stuff there. It just seems kind of ... unstructured compared to CPAN. The bigger flaw is that I can't type "vex install PACKAGE" at the command line and have it happen automagically. CPAN takes a lot of flak, but it's damn easy to use.
        • Re:Mining CPan (Score:4, Informative)

          by ajs ( 35943 ) <ajs.ajs@com> on Friday July 30, 2004 @12:30PM (#9845667) Homepage Journal
          I checked out that site.

          I only looked at a handful of the links. It's sort of a Yahoo! (the original indexer, not today's search engine-cum-kitchen sink) for Python code, which is ok, but check out how one uses CPAN in the real world:
          # perl -MCPAN -e shell
          cpan> i /SpamAssassin/
          Distribution F/FE/FELICITY/Mail-SpamAssassin-2.63.tar.gz
          Module Mail::SpamAssassin (F/FE/FELICITY/Mail-SpamAssassin-2.63.tar.gz)
          cpan> install Mail::SpamAssassin
          ---- Unsatisfied dependencies detected during [F/FE/FELICITY/Mail-SpamAssassin-2.63.tar.gz] -----
          Filter::Simple
          Shall I follow them and prepend them to the queue
          of modules we are processing right now? [yes]
          I'm sure you can see how this makes CPAN far more useful for building a large repository of useful Perl modules. How, in Python, can you build several layers of libraries that depend on each other without this kind of repository of dependency information? How does a user "come into the know" about these factors?

          Of course, that ignores the fact that CPAN modules all come with regression testing and online documentation (installed in the system "man" tree) as well.
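
          If you haven't poked at that side of a dist, the regression tests are just ordinary scripts under t/ that "make test" runs during installation. Something like this hypothetical t/basic.t for an imaginary Foo::Bar module (Test::More itself is real and ships with Perl):

          # t/basic.t -- hypothetical test for an imaginary Foo::Bar module
          use strict;
          use warnings;
          use Test::More tests => 3;

          use_ok('Foo::Bar');               # the module loads
          my $obj = Foo::Bar->new;
          isa_ok($obj, 'Foo::Bar');         # constructor returns the right class
          is($obj->frobnicate(2), 4, 'frobnicate doubles its argument');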

      • Re:Mining CPan (Score:3, Interesting)

        by imroy ( 755 )
        I think the Python work would be interesting. I'm a long-time Perl coder and Python looks interesting. But IMHO, PHP would be a waste of time. Part of the reason CPAN is so huge is that perl5 is coming up to its 10th anniversary. The perl5 language has remained very stable over that time. But PHP5 has just been released and from what I've heard it's another major change to the language. But if it's got namespaces and/or sane package management like everyone's been begging for, then PEAR might start to reall
  • Golf? (Score:2, Funny)

    by ellem ( 147712 ) *
    Pfft 15.4 Million lines?

    I could write CPAN in a one liner!

    #! /usr/bin/perl
    use warnings ;
    use strict ;

    print "CPAN: ;
  • Perl isn't Linux (Score:3, Insightful)

    by gorim ( 700913 ) on Friday July 30, 2004 @09:25AM (#9843478)
    Perl is a cross-platform tool that existed long before Linux did. Why do such things get posted under Linux? May as well post it under BSD; it would be doing the same thing. This happened with the recent Bash 3.0 topic as well. Why do people associate things with Linux just because it is open source? (Unless it is BSD open source).
  • by webword ( 82711 ) on Friday July 30, 2004 @09:25AM (#9843480) Homepage
    What is more important, lines of code or lines of quality code? People are always so impressed with sheer numbers. Quality is important.

    A similar issue is format and structure. You might do something almost right, but it could be better. For example, you might include dates on your web pages but is the format good for users [oristus.com]? It can probably be better!

    Numbers are only impressive when they are placed in context of their overall utility. Of course, regarding code, measuring "overall utility" is no joke. Can you really tell that the code from Programmer A is better than Programmer B's?

    In any event, keep your eyes open. Don't let "15.4 million lines of code" amaze you just because the number is big. Let it amaze you because of what it means, and what those lines of code do for users.
    • Ummm that date article makes a few decent points but is mostly a single person's rant.

      While true when dealing with CPAN, I think we know RH's utility and the main issue is that it is free.
    • by Geoff-with-a-G ( 762688 ) on Friday July 30, 2004 @09:45AM (#9843639)
      What is more important, lines of code or lines of quality code? People are always so impressed with sheer numbers. Quality is important.

      Seriously.
      And it's Perl.
      I thought the whole point was that you could write a massive Perl program in a single line.
      15.4 million just tells me that CPAN is getting sloppy. Let's knock that down to say, 17 HUGE lines, okay?
    • by ajs ( 35943 ) <ajs.ajs@com> on Friday July 30, 2004 @10:22AM (#9844008) Homepage Journal
      LOC isn't a great measure, but when talking about CPAN there are several things to keep in mind that modify the premise of measuring LOC:
      • Perl modules on CPAN include their own, customized installation and testing harness. This renders them far more valuable than a simple dumping ground of LOC.
      • CPAN presents a searchable, globally mirrored database of this code, which again increases its value.
      • Perl itself has an extremely powerful syntax. Many of Perl's detractors, in fact, will claim that this is far too much power to have in a syntax (vs. grammar and/or semantics and/or external libraries), so comparing 1000 LOC in Perl to 1000 LOC in, say, Java or C# or other "mid-level languages" (my phrase) can be quite favorable to Perl. Even comparing to other high-level languages can be, depending on the application (of course, each high level language has its own strengths, and for example, Python's thread handling is much simpler than Perl's, and both Ruby and Python make OO much easier).
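
      To put something concrete behind the syntax point (my own toy example, not a claim about any particular module): a word-frequency counter that would take a page of ceremony in Java is a handful of lines of Perl:

      #!/usr/bin/perl
      # Count word frequencies on stdin, most common first.
      use strict;
      use warnings;

      my %count;
      while (my $line = <STDIN>) {
          $count{lc $1}++ while $line =~ /(\w+)/g;
      }
      for my $word (sort { $count{$b} <=> $count{$a} } keys %count) {
          printf "%6d %s\n", $count{$word}, $word;
      }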

      That said, I think that comparing LOC in, say, a Red Hat distribution to LOC in CPAN is valuable, regardless of the fact that structure and format are also concerns. They are equally concerns in both environments, and both environments have roughly equal pressures to improve both incrementally over time (e.g. bad code gets migrated away from the core and good code gets migrated in).

      ALL OF THAT aside, Perl's CPAN is most valuable not because of its size or the quality of the code, but because it is a repository where thousands of people with highly specialized needs share code with each other. Perl is unique in having created such a space that is widely used outside of core advocates of the language. I don't know why that's the case, but as long as it is, it's a very good thing.

      Getting code noticed by your niche's peers and making it available for everyone to use is key to Perl's success as a language.
  • by stinkyfingers ( 588428 ) on Friday July 30, 2004 @09:26AM (#9843491)
    It's relatively low because that list is in alphabetical order!
    • Yeah, no kidding. Just think about how dejected the Zope guys must feel with their nice modern system when both Ada and APL are blasting from the past ahead of all other languages. Maybe they'll try to get ahead by renaming themselves to "Aaaaaaa Zope".

      :-)

  • Gilb's Law (Score:5, Interesting)

    by YetAnotherName ( 168064 ) on Friday July 30, 2004 @09:28AM (#9843500) Homepage
    For anyone who says that lines of code isn't a useful measure, just remember "Gilb's Law":
    Two years ago at a conference in London, I spent an afternoon with Tom Gilb, the author of
    Software Metrics ... I found that an easy way to get him heated up was to suggest that something you need to know is "unmeasurable." The man was offended by the very idea. He favored me that day with a description of what he considered a fundamental truth about measurability. The idea seemed at once so wise and so encouraging that I copied it verbatim into my journal under the heading of Gilb's Law:

    Anything you need to quantify can be measured in some way that is superior to not measuring it at all.

    Gilb's Law doesn't promise you that measurement will be free or even cheap, and it may not be perfect---just better than nothing.
    --Tom DeMarco and Timothy Lister, Peopleware 2/E, Dorset House Publishing, New York, 1999.
    • Nonsense. (Score:3, Insightful)

      by dpbsmith ( 263124 )
      Patently, bad measurements are worse than no measurements.

      "Measurement drives performance." If you are measuring the wrong thing or using misleading measurements, you will do the wrong thing.

      Anyone who thinks they can devise a meaningful measurement of the quality of Beethoven's Fifth Symphony versus Brahms' First... or which tastes better, vanilla ice cream or fresh pineapple... or who is a better ballplayer, Willie Mays or Sammy Sosa... needs to have their head measured, preferably with a standardized test
      • Re:Nonsense. (Score:5, Insightful)

        by Merk ( 25521 ) on Friday July 30, 2004 @10:27AM (#9844067) Homepage

        Read the quote carefully: "Anything you need to quantify can be measured in some way that is superior to not measuring it at all."

        He's not saying that *any* measurement is better than no measurement. He's saying that there exists a measurement that is better than no measurement.

        Which tastes better, ice cream or fresh pineapple? I don't know, but rather than say "It's impossible to say! Any measurement will be flawed," you could do a survey and see what most people think tastes better. That may not be the measurement that is better than no measurement, but for certain purposes it may be.

        In the end, it depends on what your reason for doing the measurement is. If you're going to be marketing a new bubble gum flavour, then this survey is better than no information at all.

        • Re:Nonsense. (Score:3, Insightful)

          "He's not saying that *any* measurement is better than no measurement. He's saying that there exists a measurement that is better than no measurement."

          Then he's really making a philosophical statement that probably has little value in a practical sense. Even if it were true, a measurement that you can't identify is exactly the same as no measurement at all.

          So we just have to go back to basics and say that any proposed measurement should be supported by evidence and we should reject those that aren't.
      • You're taking only part of the original quote though:

        Anything you need to quantify can be measured in some way that is superior to not measuring it at all.

        You seem to be ignoring the first part of the statement -- "anything you need to quantify" -- by pointing out that it's meaningless to compare the quantitative difference between symphonies or flavors, but these things are essentially subjective, so there's little merit in "measuring" their differences. There is no need to quantify here.

        I mean, obv

      • Okay, but as long as your measurement has some significant correlation with what you are trying to predict, it brings you information and it is useful. Not to say that there can't be a better measurement, but it could be too costly or you just haven't discovered it yet.

        The reason you can't measure the quality of Beethoven's Fifth is that it is so hard to define the concept of musical quality. You could always rate music according to a definition that you made. Consider something like: "I make a random group
    • Re:Gilb's Law (Score:3, Insightful)

      by Minwee ( 522556 )
      I'm not seeing any connection there.

      Gilb's Law only states that there exists _some_ measure with a value greater than that of not measuring. It doesn't say that every measure, no matter how bizarre, is better than nothing. Gilb's Law tells us nothing about the value of lines of code.

      If measurement for measurement's sake was always a good thing then I could take an eight bit CRC of the source code or the ratio of "e"s to "i"s and use those as metrics for quality.
    • Anything you need to quantify can be measured in some way that is superior to not measuring it at all.

      But, how do you know that the way you're measuring it is better than not measuring at all? There are lots of ways to measure things that are worse than no measurement at all, because they reward the wrong activity.

      The canonical examples here are paying programmers per bug fixed, or paying testers per bug detected. Either one of these alone is bad - together they allow programmers and testers to print

    • Gilb's law doesn't appear to be based on logic. It suggests a relationship between your needs and useful measures (if you don't have a need to quantify something, does that mean it can't be measured?).

      In addition, even if it were a fact that something "can be measured", that doesn't imply that any particular individual or institution is capable of performing that measurement.

      I think Gilb's law is primarily based on denial. Humans want to believe they are in charge of the universe and many can't accept the
    • Anything you need to quantify can be measured in some way that is superior to not measuring it at all.

      Gilb's Law does nothing to address the problem that just because you've found a way to measure something, that particular way isn't necessarily superior to not measuring it at all.

    • found that an easy way to get him heated up was to suggest that something you need to know is "unmeasurable." The man was offended by the very idea.

      Better not let him near any pure mathematics then. Mathematicians have determined that there are, in fact, non-measurable sets [wikipedia.org] - that is, sets that you can't actually measure. Sure, they're also non-computable and highly pathological, but their existence can be proved.

      Jedidiah.
      • Re:Gilb's Law (Score:3, Interesting)

        by T-Ranger ( 10520 )
        You have a problem with different fields using "measurable" differently. In the case of non-measurable sets, most non-set-theorists would call that non-countable (which may or may not be a different theory. I remember "non-countable" from my discrete math days, but never non-measurable. Beh). Many people would say that infinity is a sufficient measurement (which is why I prefer countability to measurability).

        So far as Gilb is concerned, "very big" is a measurement. If you have 3 sets of infinite size, saying that they hav

    • Anything you need to quantify can be measured in some way that is superior to not measuring it at all.

      Correct, but the problem is in the assumption that the measurement means more than it does.
      But this is also a problem with any measurement with any degree of imprecision.
      I see these references to programming languages that assume that a language is somehow representative of the level of the stuff written in it.

      You measure A. You measure B.
      Is A bigger than B?
      The numeric values give a precise answer but
  • by Bingo Foo ( 179380 ) on Friday July 30, 2004 @09:29AM (#9843510)
    ... I thought it said "C-SPAN."

    Although their "Book TV" show is usually as dense as Perl, and often profiles books that are write-only.

  • Low position? (Score:5, Interesting)

    by fanatic ( 86657 ) on Friday July 30, 2004 @09:35AM (#9843566)
    Copying and pasting the linked SourceForge page into a file, then sorting, yields the following highest project numbers:

    Perl 5254 projects
    PHP 9010 projects
    Java 12210 projects
    C 13069 projects
    C++ 13255 projects

    So Perl is behind only 4 others. Given that much Perl project work probably ends up on CPAN instead of SourceForge, this is actually pretty high. Did the poster mean he'd expect it to be higher without CPAN?
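
    For what it's worth, the sort itself is a two-minute Perl job, assuming the paste ends up as lines shaped like "Perl 5254 projects" (a guess about the page's formatting):

    #!/usr/bin/perl
    # Sort a pasted "<Language> <count> projects" list by project count.
    use strict;
    use warnings;

    my @rows;
    while (<>) {
        push @rows, [$1, $2] if /^\s*(\S.*?)\s+(\d+)\s+projects?\b/;
    }
    printf "%-12s %6d\n", @$_ for sort { $b->[1] <=> $a->[1] } @rows;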

  • by fanatic ( 86657 ) on Friday July 30, 2004 @09:49AM (#9843667)
    One line of Perl typically does a lot more than one line of C code (even without absurd "golf" tricks). The same is true of other high-level languages. So even leaving out issues of programmer quality, what does this really mean?

    Also, from the linked article:

    Reasons why these results are meaningless:
    • Most importantly, I've told SLOCCount all of CPAN is one project, which is probably inflating the numbers significantly. When I get more time, I may run SLOCCount per-distribution, then sum the totals. However, SLOCCount appears to have bugs handling this many sub-projects, so I will need to run them separately and manually sum the results.
    • mini-cpan.pl doesn't actually find only the latest versions of everything, some dists are duplicated and some may be ignored.
    • There's probably plenty of generated code not being identified correctly.
    • There's probably plenty of code downloadable from CPAN that wasn't written for CPAN, and so probably shouldn't be counted.
    • All the usual reasons why code metrics based on numbers of lines of source code are meaningless.
    And here's another: CPAN includes perl itself - which is probably a *lot* of lines of C code.
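
    The per-distribution run mentioned in the first bullet isn't hard to sketch, either. Something like this, assuming you've already unpacked each dist into its own subdirectory and that sloccount prints a "Total Physical Source Lines of Code (SLOC) = N" line; treat it as a starting point, not the article's method:

    #!/usr/bin/perl
    # Run sloccount once per unpacked distribution and sum the totals.
    use strict;
    use warnings;

    my $mirror = shift @ARGV or die "usage: $0 <dir-of-unpacked-dists>\n";
    my $total  = 0;

    opendir my $dh, $mirror or die "can't open $mirror: $!\n";
    for my $dist (grep { !/^\./ && -d "$mirror/$_" } readdir $dh) {
        my ($line) = grep { /Total Physical Source Lines of Code/ }
                     `sloccount "$mirror/$dist" 2>/dev/null`;
        next unless defined $line && $line =~ /=\s*([\d,]+)/;
        (my $sloc = $1) =~ tr/,//d;
        $total += $sloc;
    }
    closedir $dh;
    print "Summed SLOC across distributions: $total\n";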
    • I've told SLOCCount all of CPAN is one project,
      isn't the sum equal to all parts? I know it is more difficult to do big projects. (all those middle-managers)
      generated code
      Is code, generated more efficiently.
      code downloadable from CPAN that wasn't written for CPAN,
      there is probably code in Red Hat that was never written for Red Hat. That is the trouble with open source.
      numbers of lines of source code are meaningless.
      No no, they give you serious bragging rights!

      So things are not that bad. Just the dup
    • One line of Perl typically does a lot more than one line of C code (even without absurd "golf" tricks). The same is true of other high-level languages. So even leaving out issues of programmer quality, what does this really mean?

      The standard argument is that, regardless of which language you're using, for a sufficiently large project, the number of lines of code a programmer can produce is relatively constant. Some languages may require fewer lines to get the same amount of work done, but the work to pro
  • PERL is nice in that it has a lot of prepackaged modules that provide a lot of functionality. But when you distribute code that uses these modules, the end user must install them. This is a big pain in the rear for the average user, which is why I believe that PERL is a bad choice for programs intended for the end user.
    • This is a pain, and it has certainly hit us in the PDL [perl.org] world. We're only now starting to get appropriate automated packages built.


      On the other hand, if all you need is perl modules (no external libraries) then you can use the CPAN module itself to reach out to CPAN, get the perl code, and test it right there on your system. Nearly all the time, it just plain works. That is amazing (to me, anyway).

    • You know if you took a second to search CPAN, you'd find that your assertion is not at all true [cpan.org]

      Also, it's "Perl" as the name of the language and "perl" as the name of the interpreter. They aren't acronyms; PERL doesn't exist.
    • when you distribute code that uses these modules, the end user must install them

      When you distribute code that requires Gnome, the user must install Gnome.

      When you distribute code that requires glibc2.2 the user must install glibc2.2.

      When you distribute code... is this getting through?

      If your program is distributed to Linux systems, you can easily build an apt tree that includes the modules on which you depend as RPMs (or DPKGs for Debian) and then the user just adds you to their sources.list. Alternati
    • In addition to the already posted reasons why you're wrong, there's nothing to stop you from including the necessary libraries with your Perl software in some lib directory, and making your software adjust the @INC variable to look there first.
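
      Concretely, that's just a "use lib" pointing at a directory you ship next to the script. FindBin is a core module; the bundled module name below is made up:

      #!/usr/bin/perl
      # Look in ./lib (shipped with the script) before the system @INC paths.
      use strict;
      use warnings;
      use FindBin qw($Bin);        # core module: directory containing this script
      use lib "$Bin/lib";          # prepend <script-dir>/lib to @INC

      use Some::Bundled::Module;   # hypothetical module shipped in ./lib
      Some::Bundled::Module->new->run;   # hypothetical entry point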
  • I read the whole thing including the comments as CSPAN and was like wtf?
    • SLOCCount [dwheeler.com] measures "physical SLOC", and thus ignores blank lines and comment-only lines (including Perl PODs). It's not the same as "wc -l". Go read its documentation if you want to understand exactly what it does; it has a lengthy description of exactly what it measures, and why, along with references to the (substantial) research literature behind such tools.
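
      If you want a feel for what "physical SLOC" means, a crude Perl-only approximation looks like the following. It is only an illustration of the idea; SLOCCount's real counters also handle here-docs, __END__ sections and dozens of other languages:

      #!/usr/bin/perl
      # Crude "physical SLOC" for Perl files: skip blank lines, comment-only
      # lines and POD.  Illustration only, not SLOCCount's actual algorithm.
      use strict;
      use warnings;

      my ($sloc, $in_pod) = (0, 0);
      while (my $line = <>) {
          $in_pod = 1 if $line =~ /^=\w+/;        # a POD directive opens a block
          if ($in_pod) {
              $in_pod = 0 if $line =~ /^=cut\b/;  # ...and =cut closes it
              next;
          }
          next if $line =~ /^\s*$/;               # blank
          next if $line =~ /^\s*#/;               # comment-only
          $sloc++;
      }
      print "$sloc\n";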
  • by 14erCleaner ( 745600 ) <FourteenerCleaner@yahoo.com> on Friday July 30, 2004 @10:15AM (#9843931) Homepage Journal
    a) I thought there was only one line of Perl that did everything.

    b) Maybe the sloc counter didn't recognize Perl comments, so it overcounted lines. Wait, Perl programs never have comments.

    c) Does this make it "a Perl of great price"?

  • Remember (Score:3, Informative)

    by ajs318 ( 655362 ) <sd_resp2@earthsh ... minus herbivore> on Friday July 30, 2004 @10:18AM (#9843955)
    I think one of the reasons why many of the things people do in Perl don't end up becoming SourceForge projects is that they're specific to a particular environment -- my company does pretty much everything {that others might do on Windows desktops} using in-house-written Perl scripts accessed through a web browser; but they really aren't general-purpose enough to warrant releasing to the world at large. For instance, we need to store the Ordnance Survey grid references of our customers -- but not everyone will need that functionality. Perl itself provides a kind of "generality-of-purpose abstraction layer"; there's not much sense in writing a program that can handle fifty squillion different data formats if you're only ever going to use one, especially given that processor power and disk space are so cheap nowadays. I also use Perl for jobs that could be done using bash or awk or sed, but Perl is just so handy; and if I need to add one more feature, I know I can. I'll also use perl -e 'print "something\n"' in an Xterm as a calculator {one day I'll even define a key map that puts the sequence on a function key}.

    Alternatively, Perl -- thanks to all those wonderful library bindings -- might well be used for an initial "feasibility study", say to develop and test the most important function(s) that will end up forming the core of a project; and, once the proof-of-concept is there, the whole thing is then rewritten "from the ground up" in something like C or C++ {which has bindings for the dead same libraries anyway, but feels more "proper" because it's compiled rather than interpreted}.
    • the proof-of-concept is there, the whole thing is then rewritten "from the ground up" in something like C or C++

      Ha ha, that never happens. Show someone a semi-working prototype and they'll think it's 90% done and want a ship date from you there and then, preferably some time in the next week, just long enough to tweak the interface. Never write a prototype you wouldn't be willing to support in production!
  • Sure, Slashdotters hate Flash, but why aren't there any ActionScript projects on SourceForge, while there are 1822 JavaScript projects?
  • by Offwhite98 ( 101400 ) on Friday July 30, 2004 @10:37AM (#9844179) Homepage
    In my experience with CPAN I have found it follows the Larry Wall concept that there are many ways to do the same thing. For starters, there are several modules which can communicate with a POP3 server. There are many XML parsers and many means of talking to a MySQL database. Unfortunately I would not say each solution is feature complete or even good quality. It is great that modules have built-in POD documentation, but the fact remains that it can be quite difficult to get some things done.

    I was able to whip together a webmail client which fetches mail from a POP3 server and parses the MIME types to display content, using several Perl modules - a pretty amazing feat given the little amount of code I wrote. But as I wrote it I had to come up with many workarounds for incomplete features in the CPAN modules. I also found that some modules were object oriented and some were not.
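
    The fetch half of that really is only a few lines with Net::POP3, which ships with recent perls. Host and credentials below are placeholders, and the MIME decoding that made up most of the real work is left out:

    #!/usr/bin/perl
    # Minimal POP3 fetch with Net::POP3.  Placeholder host and credentials;
    # MIME parsing (e.g. via MIME::Parser) is omitted.
    use strict;
    use warnings;
    use Net::POP3;

    my $pop = Net::POP3->new('pop.example.com') or die "can't reach server\n";
    defined $pop->login('username', 'password') or die "login failed\n";

    my $msgs = $pop->list || {};                # msgnum => size
    for my $num (sort { $a <=> $b } keys %$msgs) {
        my $lines = $pop->get($num);            # arrayref of raw message lines
        print "message $num: ", scalar(@$lines), " lines\n";
    }
    $pop->quit;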

    So in the end I am finding things like the Java Foundation Classes or the .NET 1.1 profile implemented by Mono to be much more appealing. While there may be fewer means of connecting to a POP3 server, there is a good chance the one that is there will work well enough.

    But I am still curious how the Ruby folks are doing. They have been committed to object-oriented programming and may be able to produce higher quality solutions. Anyone doing Ruby here?

  • ... that a lot more folks write Perl?

    Could it mean that folks who write Perl are more likely to submit their work to CPAN?

    How does the "instant gratification" of using an interpreted language factor into all this? I know one of the attractions of Perl for me is that I don't have to compile it to see if it works. I just run it.

  • ...how much of that is devoted to MP3 taggers and MVC frameworks... :-)
  • by dwheeler ( 321049 ) on Friday July 30, 2004 @12:03PM (#9845292) Homepage Journal
    If you find this interesting, you might also want to take a look at my updated paper More than a Gigabuck: Estimating GNU/Linux's Size [dwheeler.com], which examines Red Hat Linux 7.1. The "Gigabuck" paper shows that:
    1. It would cost over $1 billion (a Gigabuck) to develop this Linux distribution by conventional proprietary means in the U.S. (in year 2000 U.S. dollars).
    2. It includes over 30 million physical source lines of code (SLOC).
    3. It would have required about 8,000 person-years of development time, as determined using the widely-used basic COCOMO model.
    4. Red Hat Linux 7.1 represents over a 60% increase in size, effort, and traditional development costs over Red Hat Linux 6.2 (which was released about one year earlier).

    Another related paper (that I didn't write) is Counting Potatoes: The size of Debian 2.2 [debian.org]. They found that Debian 2.2 includes more than 55 million physical SLOC, and would have cost nearly $1.9 billion USD using over 14,000 person-years to develop using traditional proprietary techniques.

    So what's the purpose of all these studies? Insight. There are all sorts of limitations in any measure, including any source lines of code (SLOC) measure. But, in spite of those limitations, there are things you can learn. Using tools (like SLOC counting tools) to measure software can help you understand things about the software, as long as you understand the limitations of the measure.

    In particular, many studies have shown that SLOC is very strongly related to effort (so much so that you can even use equations to predict it). If you want to determine effort in CPAN, you can't just go ask people; few open source software / Free Software (OSS/FS) [dwheeler.com] developers record exactly how much effort they invested. So, these kinds of measures are really helpful for estimating how much effort went into developing the software. Obviously, not all effort is equal (a genius can turn a hard problem into an easy one). And not all code is good, or even useful. But if you want to understand and measure effort, then these measures do have a value. In particular, these results have shown that OSS/FS can scale up to large projects requiring large amounts of effort.
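
    If you want to reproduce the headline dollar figure yourself, the basic (organic-mode) COCOMO arithmetic is short. The salary and overhead constants below are the defaults as I recall them from the paper (roughly $56,286/year and 2.4); check sloccount's own documentation before quoting them:

    #!/usr/bin/perl
    # Basic COCOMO, organic mode:
    #   effort (person-months) = 2.4 * KSLOC ** 1.05
    #   cost = person-years * salary * overhead
    use strict;
    use warnings;

    my $sloc     = shift @ARGV || 15_400_000;   # e.g. the CPAN figure
    my $salary   = 56_286;                      # average annual salary, USD (~2000)
    my $overhead = 2.4;                         # benefits, equipment, management...

    my $effort_pm    = 2.4 * (($sloc / 1000) ** 1.05);
    my $person_years = $effort_pm / 12;
    my $cost         = $person_years * $salary * $overhead;

    printf "SLOC:         %12d\n",   $sloc;
    printf "Person-years: %12.0f\n", $person_years;
    printf "Cost:         \$%.0f\n", $cost;

    Plugging in the 15.4 million SLOC figure lands within a few percent of the $677M headline, which is reassuring.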

  • "At 15.4 million lines of code,"

    But how many libraries of congress is CPAN?
  • Code should not be multiplied beyond necessity.

    -- Bowery's Razor; a corollary (applicable to programmers and other state crafters) of Occam's Razor [wikipedia.org].
