Programming

Visualizing the .NET Framework

Posted by kdawson
from the drowning-in-code dept.
eldavojohn writes "If you're a Web developer, you should check out a quick post about the number of types, methods, & fields in the .NET framework. This was done using NDepend. The numbers are quite large — e.g. 39,509 types. The blogger went on to generate tree maps and a dependency matrix."
  • by NewbieProgrammerMan (558327) on Tuesday March 18, 2008 @09:05PM (#22790614)
    Seeing that I have no personal experience with .Net, and seeing that this is Slashdot, I feel totally qualified to poke fun at its stupendous complexity with a quote:

    Any third-rate engineer or researcher can increase complexity; but it takes a certain flair of real insight to make things simple. -E. F. Schumacher
    • Any third-rate engineer or researcher can increase complexity; but it takes a certain flair of real insight to make things simple. -E. F. Schumacher

      May I just say... AMEN! :-)
      • Re: (Score:2, Funny)

        by IdeaMan (216340)
        "Any idiot can make something complicated. It takes a genius to make something simple" - F.O. Stanley

        Say It!
    • Any third-rate engineer or researcher can increase complexity; but it takes a certain flair of real insight to make things simple. -E. F. Schumacher

And a stripped-down non-existent API is a way to make things simple? Pretty much all modern languages have a very detailed and complete API/framework, and all for the same reason: Why should a programmer have to re-write common routines and data structures for every program? Why bother using a big external library (which just becomes another dependency) when it can be built into the runtime?

      Horse and buggy carriages were much simpler than complex modern cars. We should probably go back to those.
      • by CastrTroy (595695) on Tuesday March 18, 2008 @09:25PM (#22790774) Homepage
        Yeah, let's compare .Net to PHP. .Net has a very extensive API. PHP also has a very extensive framework. The .Net framework was very well thought out and is very well organized. The PHP framework is cobbled together piece-meal, with everything in the same namespace (Yes, I'm aware that it still doesn't have namespaces, and won't until PHP6 is out, but that's yet another disadvantage of PHP), and separate functions for each database they support, where all are very similarly named, but not exactly the same.
And a stripped-down non-existent API is a way to make things simple? Pretty much all modern languages have a very detailed and complete API/framework, and all for the same reason: Why should a programmer have to re-write common routines and data structures for every program? Why bother using a big external library (which just becomes another dependency) when it can be built into the runtime?

        Horse and buggy carriages were much simpler than complex modern cars. We should probably go back to those.

        Actually, I was just trying to be funny; apparently I failed. I know there are applications that need all that complexity sometimes. But since you brought it up, when I was growing up, my family owned a horse. Maintaining a horse (without a buggy) is *far* more complex and inconvenient than my car. My car doesn't die if I fail to feed it regularly. I don't have to clean out my parking spot every few weeks. I can let my car sit in my driveway for weeks with no attention and it still works when I need it.

        • by peektwice (726616)

          My car doesn't die if I fail to feed it regularly. I don't have to clean out my parking spot every few weeks.
          My old unsightly Pontiac would die quickly if not fed gasoline, and routinely shit in the driveway, as does .Net. It dies if not fed huge amounts of memory, and pukes regularly.
      • And a stripped-down non-existent API is a way to make things simple?

        Nope. Einstein said it best, I think:

        "Everything should be made as simple as possible, but not simpler."

It's a given that new features will increase complexity. (Which, surprisingly, is not always true. But stick with me for a moment.) The key is to increase the complexity only as much as necessary. If you increase the complexity any further, you have failed.

        I can't count how many times I have seen code that is overly complex. e.g. Someone piled up layer upon redundant layer of code, hoping to get simplicity out of each one. Instead, they create more maintenance, more points of failure, and more bugs. Probably one of the most egregious examples was a company that wanted me to use a go-between piece of software on another server to make a real-time request for XML. They had a lot of weak excuses for this decision, not one of which held water. After peeling back the layers of nonsense, I found out that the reason why they wanted it used is because they had already sunk $10,000 into it.

        While not all examples I've seen are motivated by money or CYA, I have certainly seen a lot of examples that were motivated by blind adherence to company standards or existing technologies. Never mind that this particular module doesn't need any of the features of Struts, the company policy says use it, so we use it. Never mind that ADO isn't a good API. We have it, so we should wrap it. Never mind that we could just plug in a lib to do the secure transfer directly, we need to Rube Goldberg it through 5 different machines, protocols, and scripts written in 7 different languages!

        There's a reason why engineers harp on the KISS principle. KISS is all about engineering a solution that will last for a long time to come. A solution that can be understood by others, maintained, is reliable, and exceeds the specs wherever reasonable. As Einstein said, make it as simple as possible and no simpler.
        • The purpose of this complexity is to ensure the tool is obsolete before it is mastered.

          Since .NET platforms have an average lifespan of what, 18 months? You could spend that much time in a bootcamp drilling namespaces and methods all day and not get there before it's time to enroll in the next one. 384,000 methods? 12,324 public classes? How many of those are deprecated? How many soon will be? And of course if you use this junk to develop for windows, try to remember not to get uppity and make a market

          • I realize that you're one of those "Anything but Microsoft at any cost" people, but I have some news for you.

1) You don't have to memorize what is in every namespace in *any* language. You just have to have a good enough overview of them to know where to look effectively for something you need. Spending all of your time trying to memorize everything in libraries is a waste of time and usually only done by undergrads who think that they know it all.

            2) These are just toys to keep you occupied
I hate to break it to you, but there is a great deal of business that is done on Microsoft software. There are a number of reasons for that (admittedly, not all of them are good reasons). One of them is that MS has done something Linux and the others haven't - they make software that lets most businesses and people do 80% of what they need easily. In addition, needs which are not met by MS itself, and a number of needs which are, are covered by various 3rd parties.

            As for "toys to keep you occupied", I'm one heck of a lot more productive using C# in Visual Studio for most of the things that I have to do than I was using C or C++ in xemacs. C and C++ are fine tools for doing things closer to the metal. Most applications don't need that.

            In addition, dealing with deprecation in .NET isn't really any worse than when I was dealing with it in C, C++, or any other language that I've used.

You need to learn to use the right tools for the job. Instead, you want to bash tools on an ideological basis. It's a sign that you really need to grow up.
            • FWIW, he does have something of a point. Microsoft cycles its platforms at an incredible rate that just is not natural for the industry. As soon as one gets used to the existing API, Microsoft deprecates it and creates a new one. What's the advantage of the new one over the old one? In many cases, no one knows.

              Why did Microsoft rearrange the VBA API from Access 97 to Access 2000? Heck if I know. Why did Microsoft make IIS extensions written in .NET 1.x incompatible with 2.0? No idea. All I know is that Microsoft does these things. Evidence suggests that Microsoft does it intentionally to lock out competitors. (source: Barbarians Led by Bill Gates) If that's true, then it certainly doesn't cast Microsoft in a good light.

              That being said, you are also correct in saying that C# is a superior desktop development platform. If you're developing for Windows, I don't see any real reason not to use it. It's a fairly decent platform with tons of modern features. The only time it's really inappropriate is when your program needs to be cross platform. In which case Java might be the best choice despite the inherent difficulty in developing a good GUI in Swing. (Or SWT if you prefer. Don't even think about using Mono. Trust me, it's bad juju. Even the Mono devs will tell you that compatibility with .NET is NOT their primary goal.)
          • Re: (Score:3, Interesting)

            No, no, you're right. Linux tools and whatnot have lifespans significantly longer than those short-lived .NET things.

            Maybe I'm just bitter because I tried to get mySQL++ working with MSVS only to have it update after a week and work better... (less than a year)

Or the time that I pulled off an offline installation of Fedora Core 3 with all drivers and library dependencies resolved (hey, this was my first linux attempt!), only to have Fedora Core 4 come out THE NEXT WEEK.

            Microsoft at least has the decency to
    • by PPH (736903) on Tuesday March 18, 2008 @09:16PM (#22790710)

      Any third-rate engineer or researcher can increase complexity; but it takes a certain flair of real insight to make things simple.
      -E. F. Schumacher

      Rube Goldberg is alive and working for Microsoft.

    • by MrSteveSD (801820) on Tuesday March 18, 2008 @09:52PM (#22790978)
      I'm not saying that .NET isn't too complex, but having a large number of types does not necessarily increase complexity. In fact having less types often leads to more complexity.

      Creating a new type abstracts away complexity and makes code easier to read. For example, you will often find that business software does a lot of comparing of dates. e.g. Checking whether a date is within a given range. More often than not you will find that programmers have just written things like...

      If (EnteredDate >= StartDate) and (EnteredDate <= EndDate)
      {
      //Do stuff
      }
      else
      {
      //Tell the user they have entered an invalid date.
      }


      The logic of checking that dates are in a range is repeated all over the place and the more you have to type, the more likely it is that you will make mistakes. This is where adding a new type, a "Range" type will help. With a Range type you can just say something like...

      If (AllowedRange.Contains(EnteredDate))
      //blah blah


      So adding a Range class actually reduces complexity rather than increasing it. There are plenty of examples of this sort of thing. Imagine writing a car ordering system without having a Car class to abstract away details about cars. You could do it, but the code would probably be a lot more sprawling and complex.
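A minimal sketch of the Range idea described above, written here in Java for concreteness (the `DateRange` class and its names are illustrative, not from any framework):

```java
import java.time.LocalDate;

// Illustrative only: a hand-rolled range type, not a framework class.
final class DateRange {
    private final LocalDate start;
    private final LocalDate end;

    DateRange(LocalDate start, LocalDate end) {
        if (start.isAfter(end)) {
            throw new IllegalArgumentException("start must not be after end");
        }
        this.start = start;
        this.end = end;
    }

    // Inclusive on both ends, matching the >= / <= comparison in the inline version.
    boolean contains(LocalDate d) {
        return !d.isBefore(start) && !d.isAfter(end);
    }
}

public class RangeDemo {
    public static void main(String[] args) {
        DateRange allowed = new DateRange(LocalDate.of(2008, 1, 1), LocalDate.of(2008, 12, 31));
        System.out.println(allowed.contains(LocalDate.of(2008, 3, 18))); // true
        System.out.println(allowed.contains(LocalDate.of(2009, 1, 1)));  // false
    }
}
```

The inclusive bounds make the two versions behaviorally equivalent; the difference is that the rule now lives in one place.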
      • by daveime (1253762) on Tuesday March 18, 2008 @10:30PM (#22791226)
        It is this level of function (mis)use that makes me cringe.

        So instead of being able to see both the variable AND the range it is being tested against IN THE SAME LINE, I now have to go trawling back through the code looking for the place where you created the Range object to find the low and high boundaries of it.

        So yet more jumping all over the place hunting for stuff, when the original version was completely fit for purpose, clear, and most importantly, IN ONE BLOODY PLACE.

Of course, it gets even "better" ... not all range checking will use the same ranges ... so then some bright spark will create Range2, Range3, Range4 objects with different ranges in each one. You see how this function does nothing for either readability or speed of debugging, but simply hides information that a programmer NEEDS to know in the context of the line he is looking at?

        You can keep your Range object thanks.
        • While I think his example was overly simplistic and that there are good uses for more types, I do agree that over abstraction can be a hindrance to code maintainability and efficiency. However, the "abstraction sweet spot," so to speak, differs for each project, so I don't see why having more types for those that want it is a bad thing. You can always just not use them and compare to the date directly if you want.
          • Re: (Score:2, Insightful)

            by daveime (1253762)
            Yes, I mean don't get me wrong, there are a lot of useful types out there.

            The danger is "type overload", the obsession with using types for every sundry purpose, i.e. the same logic the parent displayed when automatically thinking that using the Range type was helping him or anyone else.

            Anything that abstracts out the conditional tests you are doing, so that one or even both halves of the tests have to be looked up elsewhere, is a bad thing. But you give someone too many types to play with, and he'll feel o
          • by MrSteveSD (801820)

            I do agree that over abstraction can be a hindrance to code maintainability and efficiency


            I'm not sure I've seen many cases of "over abstraction". Can you think of a good example you have seen? Normally what I see is under-abstraction. People repeating the same logic again and again all over the place and failing to see that it can be abstracted away.
        • Re: (Score:2, Insightful)

          by Gutboy (587531)
          Yeah, because right clicking on the code and selecting "Go to definition" from the pop-up menu is so difficult and takes so long to do.

          You don't need to know the ranges to know if your code will work. By creating a range tester, you can test that code once, and use it everywhere. You seem to be advocating recreating a range test everywhere you need it, adding to the complexity of your code, making it harder to read and maintain.
        • by MrSteveSD (801820) on Tuesday March 18, 2008 @10:56PM (#22791398)

          So instead of being able to see both the variable AND the range it is being tested against IN THE SAME LINE, I now have to go trawling back through the code looking for the place where you created the Range object to find the low and high boundaries of it.


          You seem to be assuming that there would be a hard coded range. The allowable range may be defined in a database or elsewhere. Checking against the same range may occur in many different places, so you certainly would not want to have the range hard-coded in every routine you need to do such checking.

          Imagine that the Date range object was intended to check the date of birth of new employees (e.g. You want prevent mistakes like they were born 200 years ago or 50 years in the future). If you are smart you will have created some kind of Employee class, and this Date Range checking object could just be a static variable of the class itself. It would be pretty easy to see where it was set.

          So yet more jumping all over the place hunting for stuff, when the original version was completely fit for purpose, clear, and most importantly, IN ONE BLOODY PLACE.


          The whole point is to reduce the unnecessary repetition of logic. Imagine if you wanted to do something more complex like check if one date range was contained within another. Suddenly you start repeating quite a lot of logic without a Range object.

Of course, it gets even "better" ... not all range checking will use the same ranges ... so then some bright spark will create Range2, Range3, Range4 objects with different ranges in each one.


          Of course there will be different ranges. What does that have to do with anything? If anyone names variables "Range1, Range2" etc, they need some quick re-education.

You see how this function does nothing for either readability or speed of debugging, but simply hides information that a programmer NEEDS to know in the context of the line he is looking at?


It's interesting that you think information is being hidden. This would only be the case if you compare it to the situation of hard coding things everywhere, which is generally a very bad practice. The information in a Range object is no more hidden than the case where your limits are two separate variables called "StartDate" and "EndDate" (variables which might be initialized when the application first starts). What is really being hidden is logic. That's what object oriented programming is really all about, i.e. trying to abstract away complexity into new types.

          You can keep your Range object thanks.


          It's not really mine. Martin Fowler wrote about it in Analysis Patterns, although I'm quite sure it was being used long before it occurred to him.
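The range-within-range check mentioned above is where a dedicated type starts to pay off; a hypothetical sketch in Java:

```java
import java.time.LocalDate;

// Illustrative hand-rolled range type, not a framework class.
final class DateRange {
    final LocalDate start, end;

    DateRange(LocalDate start, LocalDate end) {
        this.start = start;
        this.end = end;
    }

    // Inclusive on both ends.
    boolean contains(LocalDate d) {
        return !d.isBefore(start) && !d.isAfter(end);
    }

    // One range contains another when both of its endpoints fall inside.
    boolean contains(DateRange other) {
        return contains(other.start) && contains(other.end);
    }
}

public class ContainsDemo {
    public static void main(String[] args) {
        DateRange year  = new DateRange(LocalDate.of(2008, 1, 1), LocalDate.of(2008, 12, 31));
        DateRange march = new DateRange(LocalDate.of(2008, 3, 1), LocalDate.of(2008, 3, 31));
        System.out.println(year.contains(march));  // true
        System.out.println(march.contains(year));  // false
    }
}
```

Without the type, the four endpoint comparisons would be repeated inline at every call site.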
        • by Siberwulf (921893) on Tuesday March 18, 2008 @11:45PM (#22791700)
          My two bits:

With .NET 3.0/3.5, you can create something called an extension method.  This would probably be better than creating a range class, or even the inline code the grandparent post used... Let's look!

          public static class DateExtensions
          {
               public static bool IsBetween(this DateTime tested, DateTime start, DateTime end)
               {
                    return (tested >= start && tested <= end);
               }
          }

          That leads to a great implementation that doesn't hide any variables or values and still allows for easy readability

          DateTime Signup = (Fetch some value here);

if (Signup.IsBetween(DateTime.Now.AddDays(-1), DateTime.Now))
          .......

You get the gist... I know that's picky, but you don't have to give up anything these days, if you try hard enough.
      • by mikael (484)
        From what I have seen of .NET, most of it is wrappers around the existing Win32 widgets. On the blog, there is a comment about cyclic dependencies between different class libraries. This doesn't seem too good - surely it would be possible to split those modules up until the dependencies are removed?
  • by Anonymous Coward on Tuesday March 18, 2008 @09:09PM (#22790646)
    .NET and Java are both prime examples of object-oriented programming gone stupid. Their class libraries have become so utterly huge that it becomes damn near impossible for an individual developer to suitably grasp anything more than a small portion of them.

    Although they supposedly give more flexibility, something as essential as reading from and writing to a file becomes a hassle with .NET or Java. It's easy to get lost in whether we need a FileInputStream, or whether we should wrap a FileInputReader in a TextInputBuffer, and so forth. Give me fopen() any day.

    OO was supposed to solve the problems of writing applications in languages like C, Pascal and Fortran. All it has done is brought in a new level of complexity that results in monstrosities like the Java and .NET standard class libraries. Meanwhile, the POSIX API offers just as much flexibility, but is far easier to work with. Not to mention that programs using it are far more efficient.

    • by NewbieProgrammerMan (558327) on Tuesday March 18, 2008 @09:23PM (#22790750)

      Although they supposedly give more flexibility, something as essential as reading from and writing to a file becomes a hassle with .NET or Java. It's easy to get lost in whether we need a FileInputStream, or whether we should wrap a FileInputReader in a TextInputBuffer, and so forth. Give me fopen() any day.
The first time I saw what you (supposedly) have to do to read from a file in Java, it pegged my OMGWTF meter. I'm sure there are totally valid reasons for making such simple (and common) tasks so complicated, but apparently I'm not smart enough to understand them. IMHO it's one thing to have the complexity available if it's needed, but it's another to make me endure all that complexity if I don't need it.
      • Re: (Score:2, Interesting)

        by grahamsz (150076)
        There are a number of advantages to a comprehensive type library, it does do a decent job of defining interfaces that external libraries can use.

        One example I came across recently was that I was able to couple one visualization library (that rendered to an on screen canvas) to a pdf library (that implemented the standard Graphics2D interface). With little more than a few lines of code, the full vector-based visualization appeared in a pdf file. Granted, I'm not a C++ programmer, but I doubt you could have g
The .Net framework does seem worse than the Java one, if only because its documentation seems poorer than Sun's.

          I go back and forth on this.

          On one hand, Sun's javadocs for the base Java API are generally superior to MS's for the base .NET API.

          On the other hand, a lot of things effectively fall into the .NET framework, that the closest Java equivalent isn't a core Sun creation or generally just isn't documented as well. Something like a Struts or the Struts-successor of your choice, for example. In an awfu
      • by LaskoVortex (1153471) on Tuesday March 18, 2008 @09:35PM (#22790872)

        The first time I saw what you (supposedly) have to do to read from a file in Java, it pegged my OMGWTF meter.

        The idea is that you could encapsulate all that complexity inside a method inside a class--instantiate that class inside a class that has a "main()" and then put the whole thing in a module. You call all of that method with the correct parameters in an instance of another class created and instantiated the same way. You then jar it up as bytecode and then run it on the JVM--making sure your users are running the right versions of the JVM.

        On second thought, OMGWTF?
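For comparison, here is roughly the wrapper-on-wrapper Java file read being mocked above, next to the convenience method that later versions of the platform added (a sketch; the file contents are made up, and a temp file keeps it self-contained):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ReadDemo {
    public static void main(String[] args) throws IOException {
        // Scratch file so the example runs anywhere.
        Path path = Files.createTempFile("demo", ".txt");
        Files.write(path, List.of("first line", "second line"));

        // The layered style: a buffering reader wrapped around a raw reader.
        try (BufferedReader reader = new BufferedReader(new FileReader(path.toFile()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }

        // The one-liner (java.nio.file, Java 7+), comparable to .NET's File.ReadAllLines.
        List<String> lines = Files.readAllLines(path);
        System.out.println(lines.size()); // 2
    }
}
```

The convenience method came years after this thread, which is rather the commenter's point.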

    • by free space (13714) on Tuesday March 18, 2008 @09:33PM (#22790838)
      I kinda disagree.
      To talk about your example: fopen( ) might be nice and simple, but the capabilities provided by .net and Java are much bigger in scope.

      You can use them to read and write files with different encodings, you can treat a lot of other things as files, and combined with formatters you could serialize your data to binary files or XML almost without writing code.

      Even more, the different classes are orthogonal, so you can mix and match different encodings, formattings, and file operations without the combinatoric explosion of having a separate function for every possible operation. It's an elegant design in my opinion.

      Furthermore, the libraries of Java and .net provide standard interfaces and hooks to link your own code. Want arrays of your new data type to have automatic sorting capabilities? just implement IComparable. A little bit of work would let your new collection class bind automatically to Winforms' data grid control. And many more examples.

      If you remove .net's huge libraries, you get a situation like C++ where there are half a dozen pseudo-standard libraries for encryption, networking, GUI and stuff. You have projects with incompatible dependencies and a lot of wasted effort writing, debugging and maintaining all those libraries. Microsoft may have a lot of problems with their products but .net is one of the most well designed things they've produced.

      To be fair, .net inherited a lot of this from Java, but they improved on it. Java, in turn, adapted/improved the Smalltalk libraries that have helped pave the way for the "language with everything included" paradigm.
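The "just implement IComparable" hook described above can be sketched in Java, where the equivalent interface is Comparable (the `Part` class here is invented for illustration):

```java
import java.util.Arrays;

// A made-up domain type; implementing Comparable is the only hook
// Arrays.sort needs in order to sort it.
final class Part implements Comparable<Part> {
    final String name;
    final int stock;

    Part(String name, int stock) {
        this.name = name;
        this.stock = stock;
    }

    @Override
    public int compareTo(Part other) {
        return Integer.compare(this.stock, other.stock); // order by stock level
    }
}

public class SortDemo {
    public static void main(String[] args) {
        Part[] parts = { new Part("bolt", 40), new Part("nut", 5), new Part("gear", 12) };
        Arrays.sort(parts); // works because Part is Comparable
        System.out.println(parts[0].name); // nut
    }
}
```

This is the "standard interfaces and hooks" point in miniature: the library's sort knows nothing about parts, only about the interface.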
      • Re: (Score:3, Informative)

        by Watson Ladd (955755)
        I just use parametric types. To make a sortable data type I define the type and a comparison, and my sort function uses any comparator passed to it. If I want to sort backwards one time, no problem. I just use lambda to make a quick reversed comparison operator. Want to access a different file encoding? No problem, just supply the functions as an argument. Threadsafe? I have immutable variables, so nothing bad can happen. What language do I use? Standard ML.
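The comparator-as-argument style described here isn't unique to Standard ML; the same shape can be sketched with Java's Comparator and a lambda for the one-off reversal:

```java
import java.util.Arrays;
import java.util.Comparator;

public class ComparatorDemo {
    public static void main(String[] args) {
        Integer[] xs = { 3, 1, 2 };

        // The sort uses whatever comparison you hand it.
        Arrays.sort(xs, (a, b) -> Integer.compare(a, b));
        System.out.println(Arrays.toString(xs)); // [1, 2, 3]

        // Sorting backwards one time is just a different comparator, not a new type.
        Arrays.sort(xs, Comparator.<Integer>naturalOrder().reversed());
        System.out.println(Arrays.toString(xs)); // [3, 2, 1]
    }
}
```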
        • by free space (13714)
Yeah. ML and its descendants are awesome.
Incidentally, I thought you were talking about C# 3.0 when I was reading the beginning of your post. In my opinion this is one of the reasons Microsoft has succeeded: They shamelessly steal the best features of other good products.

While Java fans were complaining that .net "stole" from Java like this was a bad thing, MS was busy ripping off features from functional languages and improving their platform.
      • by Jester998 (156179)
        To talk about your example: fopen( ) might be nice and simple, but the capabilities provided by .net and Java are much bigger in scope.

        Yup, I agree -- sometimes encapsulating and abstracting the problem is the right approach. Key word there? Sometimes.

        If you have a simple, well defined problem (e.g. 'write these 8 bytes to a file'), a 'nice and simple' solution often does the trick. Less code means less complexity and more clarity. With C++, for example, you could use a simple fopen() -- or you could wr
        • by free space (13714)
Sometimes a balance has to be struck between "make everything simple" and "application complexity keeps growing".

          Many languages start with the goal of making everything simple and keep bolting on features gradually until the language becomes a mess and everyone complains about the inconsistencies and (ironically) excessive complexity. PHP is such an example.

          Sometimes it pays to design for complexity up-front. I don't mean anticipating every possible feature and bundling it in (like, say J2EE) but to be rea
      • by ignavus (213578)
        "You can use them ..."

        No, you *have* to use them, and that is the problem.

        "Keep the simple things simple, and make the complex things achievable" is the goal of a good programming language.

        Not: "whatever your transport needs, a Boeing 747 is always the answer." Sometimes a pair of roller skates would be more practical.

    • by MrSteveSD (801820)

      Their class libraries have become so utterly huge that it becomes damn near impossible for an individual developer to suitably grasp anything more than a small portion of them.

      The funny thing is, they (and unfortunately some employers) think it is great to try to learn it all rather than look it up when needed. They call it Microsoft Certification. You can never learn it all in any sufficient depth and even if you could, by the time you did, they would have changed everything anyway.

      I think the reason it's so big is because they have tried to address so much. They haven't necessarily done it stupidly, but they have addressed a huge problem domain so there are huge numbers

      • The funny thing is, they (and unfortunately some employers) think it is great to try to learn it all rather than look it up when needed. They call it Microsoft Certification. You can never learn it all in any sufficient depth and even if you could, by the time you did, they would have changed everything anyway.

        Honestly, I think 90% of the actual value of a .NET cert is that you're at least exposed to all the major features of the framework. Honestly, you'd never need more than half of them on almost any pr
        • by MrSteveSD (801820)

          Honestly, I think 90% of the actual value of a .NET cert is that you're at least exposed to all the major features of the framework.

          I've never really been a big fan of trying to learn absolutely everything, especially when the things you are trying to learn are as transient as those of a particular Microsoft technology. If Microsoft kept completely changing physics every few years, I really wouldn't have bothered doing a physics degree. These Certification courses tend to be quite expensive and I know that the companies that do the training promise students (who fork over thousands) the Earth.

          • These Certification courses tend to be quite expensive and I know that the companies that do the training promise students (who fork over thousands) the Earth.

            Well, that's true. Someone who's working with .NET day to day doesn't really need a course like that, though, in my experience and the experience of developers that I've worked with. A couple $50 books is plenty, and I know people that have done 100% of their studying for cert tests through free online sources.
    • by n dot l (1099033) on Tuesday March 18, 2008 @10:11PM (#22791098)
      My experiences with Java were painful, but they are out of date, so I'm only going to talk about .NET, which I actually use in my day-to-day work because it's actually the best tool for many of the jobs I do.

      .NET and Java are both prime examples of object-oriented programming gone stupid. Their class libraries have become so utterly huge that it becomes damn near impossible for an individual developer to suitably grasp anything more than a small portion of them.
      Interestingly enough, an individual developer does not need to grasp anything more than a small portion of them. An individual developer needs to know the basics of the core class library and whatever else he needs to get his job done. The vastness of the ASP.NET (or whatever) libraries is not an impediment to one who does not use them.

      Also, there is documentation, and Intellisense (freely available, now), and a naming convention that actually makes sense after a while. F1 isn't that hard to press.

      Although they supposedly give more flexibility, something as essential as reading from and writing to a file becomes a hassle with .NET or Java. It's easy to get lost in whether we need a FileInputStream, or whether we should wrap a FileInputReader in a TextInputBuffer, and so forth. Give me fopen() any day
      Seriously? You actually found string[] lines = File.ReadAllLines( string path ); to be difficult? Or are you just talking out your ass?

      Of course, there are more complicated examples, but that's usually because they're either years out of date (.NET 1.0), or just plain doing more.

      OO was supposed to solve the problems of writing applications in languages like C, Pascal and Fortran. All it has done is brought in a new level of complexity that results in monstrosities like the Java and .NET standard class libraries. Meanwhile, the POSIX API offers just as much flexibility, but is far easier to work with. Not to mention that programs using it are far more efficient.
Yeah? I find typing File[dot] and hunting through the fairly short list of methods easier than remembering what the valid values for fopen's mode parameter are, and whether there was a platform-specific one I should be using to get the file-locking behavior I want. And file.Read( ... ) is a lot neater than fread( ..., file ). You can go on all you want about how I'm being lazy, but I have more important things to commit to memory than API entry points and the quirks of their parameters (like, say, the overall structure of my app, or the problem it's being written to solve). YMMV, of course.

      As for plain C applications being more efficient, well, what exactly does that have to do with what methods are named and what namespaces they do (or do not) reside in? Second, that's not the point. Getting a quick GUI app up and running in a hurry is more what you'd use .NET for, something you can't even begin to do in C until you've sat around for a while thinking about fun things like memory management for shared resources.

      Yes, C is valuable and it's still pretty much the best choice for writing tight, high-performance loops that do lots of pointer-manipulating, bit-twiddling evil - that's what I and every other sane programmer I know uses it for. But it's also a damn waste of my time to be using it to write Win32 GUIs for art tools. My time is more valuable than a few CPU cycles.
      • So the takeaway from your post is that files are easy to read in .net, as long as you read the entire thing into memory at once? I guess you only ever deal with text files that are a few KB or something.
        • So the takeaway from your post is that files are easy to read in .net, as long as you read the entire thing into memory at once? I guess you only ever deal with text files that are a few KB or something.

          It's slightly less straightforward in .NET for massive files, but still generally easier and produces much more readable code (to me) than doing it in C/C++/Java.

          I worked on a .NET project a little over a year ago that had to routinely process flat files 3-4 gig in size, so I have some idea of what I'm talki
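          For the record, chewing through a file like that line-by-line is about this much code (a sketch -- the path and the per-record work are placeholders):

          ```csharp
          // Sketch: process a multi-gigabyte text file without loading it
          // all into memory. StreamReader buffers internally; only one
          // line is held at a time. The file name is made up.
          using System.IO;

          class BigFileDemo
          {
              static void Main()
              {
                  long count = 0;
                  using (StreamReader reader = new StreamReader("huge-flat-file.txt"))
                  {
                      string line;
                      while ((line = reader.ReadLine()) != null)
                      {
                          count++; // replace with real per-record processing
                      }
                  }
                  System.Console.WriteLine(count);
              }
          }
          ```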
    • Re: (Score:3, Insightful)

      by Dilly Bar (23168)
      You may be right, but try another example for .NET. Here are some .NET APIs you can use to read a file:

      byte[] File.ReadAllBytes(string path)
      string[] File.ReadAllLines(string path)
      string File.ReadAllText(string path)

      and write to one:

      File.WriteAllBytes(string path, byte[] bytes)
      File.WriteAllLines(string path, string[] contents)
      File.WriteAllText(string path, string contents)

      Oh, and you can still compose types for more complex scenarios.
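      For example (a sketch -- the gzipped log file is made up, but FileStream, GZipStream and StreamReader are real types, and Compression shipped with .NET 2.0):

      ```csharp
      // Sketch of the "compose types" escape hatch: stack a StreamReader
      // on a GZipStream on a FileStream to read gzipped text.
      using System.IO;
      using System.IO.Compression;

      class ComposeDemo
      {
          static void Main()
          {
              using (FileStream file = File.OpenRead("log.txt.gz"))
              using (GZipStream gzip = new GZipStream(file, CompressionMode.Decompress))
              using (StreamReader reader = new StreamReader(gzip))
              {
                  System.Console.WriteLine(reader.ReadLine()); // first decompressed line
              }
          }
      }
      ```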

      Full disclosure - I work for MS on Visual Studio
      • One of the primary properties of a file is that you can gradually stream things to/from it. If files become "one shot serialize/deserialize a blob to disk," they quickly become useless in many scenarios.
        • Re: (Score:3, Informative)

          by Dilly Bar (23168)
          You can stream files as well in .NET. I wanted to point out that .NET typically has a level of API that makes simple scenarios easy and more complex scenarios possible (as easy as possible :-)) as in this case.
    • by batkiwi (137781) on Tuesday March 18, 2008 @10:18PM (#22791154)
      In .NET:

      byte[] fileContents = System.IO.File.ReadAllBytes("myBinary.blah");
      string fileText = System.IO.File.ReadAllText("myText.txt");


      That's if you want to read it all in as quickly as possible (no buffering). What's tough about that?

      Obviously if you need buffering you have to do some REALLY complex work:

      while (s.Position < s.Length)
      {
          // process your stream... read one byte at a time out of it
          int oneLittleByte = s.ReadByte();
          // or chunk 50 bytes out of it (don't hardcode the size like this, kids!)
          byte[] someBytes = new byte[50];
          // the offset argument is the position in the buffer, not the stream
          int bytesRead = s.Read(someBytes, 0, someBytes.Length);
      }



      That is both tough and complex. I don't know how I can cope.
    • Re: (Score:3, Informative)

      by jesterzog (189797)

      Although they supposedly give more flexibility, something as essential as reading from and writing to a file becomes a hassle with .NET or Java. It's easy to get lost in whether we need a FileInputStream, or whether we should wrap a FileInputReader in a TextInputBuffer, and so forth. Give me fopen() any day.

      I've been writing code in a Windows shop using .NET for a couple of years now. I like coding in C at times and still do for some things, but when I'm writing .NET apps I don't really have much of an i

    • by Oddster (628633)
      It is not just the class libraries which are a problem. The idea of using a garbage collected language in a production environment for a large project is just plain silly, for the simple fact that garbage collection does not scale well and it introduces nondeterminism in your code - two things which are fundamentally opposed to the proper operation of a large software system. Not to mention all the bad programming practices it encourages. It's great if you're trying to teach some concepts to students or
      • Re: (Score:3, Insightful)

        The idea of using a garbage collected language in a production environment for a large project is just plain silly, for the simple fact that garbage collection does not scale well and it introduces nondeterminism in your code - two things which are fundamentally opposed to the proper operation of a large software system.

        Define "does not scale well". I worked on an enterprise .NET project a year or two ago that managed a variety of transactions amounting to millions of dollars worth of business every day for
    • Re: (Score:3, Insightful)

      by giminy (94188)
      I started as a unix developer and switched recently to programming in windows, and use .NET quite a bit (it was the job that was available at the time ;-)). .NET and Java are both prime examples of object-oriented programming gone stupid. Their class libraries have become so utterly huge that it becomes damn near impossible for an individual developer to suitably grasp anything more than a small portion of them.

      Give me fopen() any day.

      There are a lot of things that I don't like about .NET (and particularly
  • .NET (Score:3, Insightful)

    by The Aethereal (1160051) on Tuesday March 18, 2008 @09:10PM (#22790656)
    .NET really is an amazing framework on which to build software. It just needs more OS support and I would use it for programming other than what I do for a living. All those types are there, but they will not be loaded in to memory unless your software needs them.
  • goatse (Score:4, Funny)

    by Nimey (114278) on Tuesday March 18, 2008 @09:13PM (#22790686) Homepage Journal
    That's what I first thought of for visualizing .NET.

    The goggles, etc.
  • I want to see a comparison to PHP with all extensions/modules enabled... I have a feeling PHP will have many, many more.

    Sadly, that is not a good thing. I love PHP, but it's a mess and desperately needs an overhaul.

    (please note: This is my opinion... not out to start a pissing war.)
    p.s. - who the hell uses .NET anyway? This is Slashdot. ;-)
    • lol

      Your fanboyism of PHP is much appreciated (as I am myself a PHP developer) but PHP is actually smaller. Don't be deceived. This is a GOOD thing. Less bloat = more performance. PHP + *SQL + Memcache will always pack more punch in a smaller package than the .NET Framework. Wikipedia is my primary example. :-)

      Also I do have to point out that comparing PHP to .NET is really PHP vs ASP.NET, and ASP.NET is actually a framework around the suite of .NET languages (ASP, JSP, C#, C++, and VB). So the comparison is a
  • by Anonymous Coward on Tuesday March 18, 2008 @09:31PM (#22790822)
    Before you impugn Microsoft with your short-sighted emotional appeals against the sheer number of classes, use a hint of logic and realize that since MS copied Java shamelessly and ruthlessly (improving on some debacles in the Java classes, such as the crap IO classes that had to be redone from scratch), you'd be blasting Java, KDE, Python, and most any other class library as well.

    Look, in a class library that purports to help most everyone, there's going to be an awful lot of code. Class library implies that classes are used to organize the abstractions provided by the library. Proper OO design favors designing more types with smaller number of features rather than God-objects that do many things. Fine-grained objects are simpler to unit test and are much easier to reuse. The downside is the propagation of types and the verbosity level of the code generally goes up. But that is a fair trade-off in my opinion, since the most important work on the code happens in the maintenance phase, when someone else can come along and at least get a vague idea of what is going on.

    I've used the class libraries in Java, KDE, MFC, and Python, and the .NET class library beats them all easily. It is obvious that the designers did their homework and stole from each library what worked well, while dropping what didn't. If they were smart they'd take Java's excellent concurrency constructs such as the BlockingQueue and put them in (they may already have, I don't program much in .NET lately). Most of my beefs with the class library are the fact that it is huge (footprint size), and I don't agree with some of the modeling. But that is minor.

    There's a reason Miguel wanted to make this happen on Linux. It is close to making programming fun again, instead of squinting at hyper-abbreviated function names like sprintf and mucking around with idiotic string representations such as C's.
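    (And if the BCL never does grow a BlockingQueue, it's a short write. A hypothetical minimal sketch over Monitor -- not any framework class, just the construct:)

    ```csharp
    // Hypothetical sketch of a Java-style blocking queue in C# 2.0,
    // built on Monitor.Wait/Pulse. Not a BCL type.
    using System.Collections.Generic;
    using System.Threading;

    class BlockingQueue<T>
    {
        private readonly Queue<T> items = new Queue<T>();
        private readonly object sync = new object();

        public void Enqueue(T item)
        {
            lock (sync)
            {
                items.Enqueue(item);
                Monitor.Pulse(sync); // wake one waiting consumer
            }
        }

        public T Dequeue()
        {
            lock (sync)
            {
                while (items.Count == 0)
                    Monitor.Wait(sync); // releases the lock while blocked
                return items.Dequeue();
            }
        }
    }
    ```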
  • Correction (Score:5, Funny)

    by TheSpoom (715771) * <slashdot@Nospam.uberm00.net> on Tuesday March 18, 2008 @09:34PM (#22790852) Homepage Journal
    If you're a Web developer, you should ignore .NET and use something much less bloated.

    There, fixed that for you.
  • I remember JLG (the fearless leader of BeOS fame) once saying something to the effect that Windows has five billion lines of code in it and that he "loves every single one of those lines." I also seem to recall Bill Gates once saying that IBM liked software to be measured in k-locs, while he debated that it should be measured by what it does. He said something that I don't quite remember, but the gist was, "why would we do something in thousands of lines of code if it could be done in just a few?" How ironi
  • by wbean (222522) on Tuesday March 18, 2008 @09:45PM (#22790942)
    The comparison to the human genome is interesting. The genome contains about 3 billion base pairs and 30,000 coding genes. As best I can see, .NET is quite a bit bigger: The closest thing to a gene is a method (an object that can be used, or not used, and which does something). The genome has 30,000 and .NET has 384,000. So even if it takes 10 methods to do what one gene does, they are equivalent.

    It takes 3 base pairs to code for a single amino acid, perhaps the closest we can come to an instruction. Each gene has an average of 3,000 base pairs, equivalent to 1,000 instructions. So we are looking at 30,000 genes x 1,000 instructions/gene or about 30,000,000 instructions in the genome. .NET has 8,000,000 instructions. Given the roughness of the comparison, this is pretty close.

    The point here is that we are creating programs that are roughly equal in complexity to the human genome. If we were better programmers, then perhaps we'd have come up with intelligent design.

    Finally, it's worth noting that the functions are unknown for over 50% of discovered genes. It may be about the same for .NET :))
    • by ars (79600)
      3 billion vs. 30,000 - actually the function of the 2.99997 billion remaining genes is not known, but it is believed to have some function. So we are not even close to having enough complexity to compare to a human.
      • by wbean (222522)
        3 billion is the figure for base pairs, it takes three of them to code for a single amino acid. On average there are 3,000 base pairs per gene; so, even if the entire 3 billion were encoding something, there would still only be about 1 million genes. A huge part of the genome does not code for proteins. There are long sections of repeated blocks of G and C bases. These are believed to have a function relating to gene expression but not coding. There are other large areas of repetitive sequences of "junk"
    • by n dot l (1099033)
      There's a big difference between a gene and the following (courtesy of Lutz Roeder's .NET Reflector):

      //in System.IO.File
      public static string[] ReadAllLines(string path)
      {
              return ReadAllLines(path, Encoding.UTF8);
      }
    • by Ambiguous Coward (205751) on Tuesday March 18, 2008 @11:40PM (#22791674) Homepage
      The difference between genetic code and any human-invented language is the following: have you ever tried to debug a genome? It's friggin' ridiculous. No documentation at all.

      -G
      • Re: (Score:3, Funny)

        by megaditto (982598)
        Debugging an existing genome is hard. Forking it is easy (and fun, esp. with multiple copies).
  • by spaceyhackerlady (462530) on Tuesday March 18, 2008 @10:07PM (#22791078)

    This reminds me of an old joke:

    Q: A Marketing executive and an RIAA lawyer jump off the top of a 50 story building. Who hits the ground first?

    A: Who the hell cares?

    laura, not a fan of .NET

  • by Paiev (1233954)
    GNU really has their work cut out for them here with their DotGNU project to provide a free implementation. Good luck rms and co; if you try hard enough, you might be able to implement it all before Duke Nukem Forever is released.
  • there's a lot of code. what else does this show / 'prove' ?
  • So, wait a minute (Score:3, Insightful)

    by glwtta (532858) on Wednesday March 19, 2008 @02:22AM (#22792558) Homepage
    Having a large class library is a bad thing now? You don't like it when other people do work for you?

    It's all properly namespaced (unlike some languages I could name); having a bunch of classes you don't need to use does not add to your mental footprint.
  • by 192939495969798999 (58312) <info.devinmoore@com> on Wednesday March 19, 2008 @06:45AM (#22793554) Homepage Journal
    The most common .Net developer mistake now is that people don't bother finding the function they need -- instead they just reinvent it and waste everyone's time when maintaining that code later. The problem with 30,000+ items is that there's no good way to teach people where to look for something that's already in there. If it were organized in such a way that one could easily not reinvent the wheel, then it would be an awesome language. Without that, though, it becomes yet another way for people to create crappy date parsers.
