Java Programming

What's wrong with HelloWorld.Java

Posted by timothy
from the skipping-around dept.
prostoalex writes: "Daniel H. Steinberg posted an article on O'Reilly's OnJava.com discussing the difficulties and current problems with introductory Java in the classroom. The textbooks used in colleges are mostly rewrites of C/C++ textbooks and thus start with a HelloWorld program without really dwelling on the object-oriented nature of Java and why it is important. In a nutshell, OOP is even nowadays treated as a somewhat innovative concept in the classroom, mainly because of educators who were taught C. Hence the links to and description of the Rethinking CS101 Project."
This discussion has been archived. No new comments can be posted.

  • by BalkanBoy (201243) on Thursday August 22, 2002 @09:46PM (#4123932)
    People can never get this through their heads. OOP is _not_ about what language or tool you use that will more or less facilitate OO programming. OOA&D (i.e., object-oriented analysis and design) is not about mastering Java or C++; it is about mastering a new style, a new paradigm of thinking. This is precisely why, when Java or C++ is taught by "old skool" K&R C people who hate the thought of anything resembling OO (and I won't mention how many of those are out there... too many, rest assured), it looks like Java or C++ is C wrapped in objects. The usefulness of the paradigm is reduced and de-emphasized if the proper train of thought is not employed when analyzing solutions in an object-oriented fashion.

    One has to be able to perceive problems in terms of objects. This may seem easy at a glance - our world is composed of objects - but when you start getting into more abstract concepts, e.g. trying to write OSes in a fully OO manner (akin to what BeOS was), or other more complex applications like the entire JFC (for instance), then OOA&D does not seem so easy!

    Designing, or better yet, THINKING in OO terms is not something that happens overnight. This is also precisely why 90% of large, pure OO projects either fail or degenerate into something that needs revamping every so often: the programmers who built the application did not take the time to properly analyze the problem and come up with the most natural solution possible. A natural solution is possible, but only at the hands of professionals who understand what OO is all about (and it is least of all about WHAT LANGUAGE you use), who have experience in 'seeing' the world, or higher concepts, through OO eyes, who are able to delimit with crisp boundaries every concept/object available to them or stated in the specifications by the customer, and MOST importantly, who establish the PROPER relationships between those objects!

    Design patterns and such go a LONG way toward achieving this objective, but one cannot fathom using or applying design patterns without understanding what OO design and analysis mean, and without a shitload of experience to use toward this goal. True OO thinking is almost a litmus test of how good a programmer - or better said, how good an ANALYST, how good your ANALYTICAL skills - you are... In OO, 80% of the time or thereabouts is spent on analysis and design, 20% on the mechanics of writing the code. Then, and only then, will you be able to pull OO projects through to completion successfully.

    And no, I'm not talking about your school/academic projects. I'm talking about large-scale projects with possibly millions of lines of code, where understanding the ESSENCE of the OO paradigm will either make or break the project: make it usable and extendable for a long time, or make it a piece of crap that will never see the light of day...

    Most people shy away from OO or misunderstand it because they've never even read a book about it, such as the OO 'bible' by Rumbaugh/Premerlani, "Object-Oriented Modeling and Design" (using OMT), or some of Fowler's books on analysis and patterns, or Gamma's book on design patterns...

    Someone once said - pimpin' ain't E-Z! Well, neither is OO!
    • The generation of code is the small (critically important, but small) part of development. The "game" is in the head, or more precisely the mind, of the developer. Teaching someone to write code effectively is not a terribly daunting prospect. However, teaching someone to think is much, much more complicated. Adding to this difficulty, the education system here in the USA is not geared towards excellence; it is geared towards the average, the everyday.

      Just an opinion, feel free to flame away, but the so-termed fuzzy subjects such as art and music teach students to see not what is there, but what isn't, and more importantly what could be. There are a great many technicians out there who generate code, but virtuosity in any endeavor is art as much as science or technology; it is seeing, feeling. I had the privilege of speaking at my daughter's school on career day, and when asked what I did, my response was: I build models. I build models of business processes or, at the moment, engineering processes. In short, what we do as developers is model behavior.

      I have maintained for years that the only way to get good at development is not education, it's scars - scars earned in the trenches getting beaten on by cranky code, twitchy servers, and managers who haven't got a clue. The same is true for OO: get into it up to your neck, get it into your pores, think it, breathe it, read it, discuss it, beat it to death. Then you can become a prophet in the wilderness of software development and have managers look at you like you're something to the left of a cultist. BalkanBoy is right, it's not easy, but "it's the hard that makes it great. If it was easy, anyone could do it." (paraphrased from "A League of Their Own")
      • 'is not geared towards excellence, it is geared towards the average'

        Well, in my experience the average is deemed to be excellence; if you are in any way different, off the wall, or excel in a way that doesn't fit the average, it is not usually considered to be 'constructive' in the education system.

        Just a bit on OOP so as not to be off topic,

        You can write C in an object-oriented way even though there is no real language support for objects in C.

        The old JPEG library was written like this, and I believe GTK is written this way (why they don't use C++ I'll never know).

        One way to teach OOP is to take some spaghetti code and get the 'class' to refactor it (aided and abetted by the teacher!). Not only does this teach OOP, but it teaches the reasons why OOP is good.
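A minimal sketch of the kind of before-and-after such a refactoring exercise might produce; the Account example and its rules are invented for illustration, not taken from the comment:

```java
// Before (sketched as comments): related state lives in loose variables,
// mutated from anywhere, with the validity rules scattered or missing:
//   double balance = 0;
//   ...
//   balance = balance - amount;   // no check anywhere that this is legal
//
// After: the state and the rules that guard it live together in one class.
class Account {
    private double balance;  // invariant: never negative

    void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("bad amount");
        balance += amount;
    }

    void withdraw(double amount) {
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("bad amount");
        balance -= amount;
    }

    double balance() { return balance; }
}
```

The teaching point is that the refactor makes the invariant enforceable in exactly one place, which is hard to show with spaghetti alone.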

        If i wanted a spelling critique I would have posted this comment on /.
    • by Tablizer (95088) on Friday August 23, 2002 @01:34AM (#4124764) Homepage Journal
      Designing, or better yet, THINKING in OO terms is not something that happens overnight. This is also precisely why 90% of large, pure OO projects either fail or degenerate into something that needs revamping every so often: the programmers who built the application did not take the time to properly analyze the problem and come up with the most natural solution possible. A natural solution is possible, but only at the hands of professionals who understand what OO is all about

      A fitting excerpt from my anti-OO webpage:

      OOP technology has generated more confusion than almost any other computer technology. For example, many OOP "experts" claim that most companies are either not using OOP properly, or are not taking advantage of OOP. Experts also say that there is a long learning curve before many people grasp the power of OOP; that its benefits can't really be taught, or at least not understood, from a book. It has almost become like particle physics, in which only a small elite group appears to understand it properly, and everybody else needs years of meditation and practice...

      Ironically, OOP is sometimes billed as "better fitting the way people think". Years of meditation and study to learn how to "think naturally"? I am thinking of setting up a $60-per-hour consultancy to teach sports fans to drink beer and belch in order to "optimize their recreational satisfaction".

      ....Further, there is a lack of consistency in modeling techniques by OOP celebrities. Methodology-of-the-week is commonplace. The lack of consistency makes it tough to make any generalizations about how OOP goes about modeling useful business applications. An OOP consultant may have to be well-versed in dozens of OO methodologies to be able to walk into a shop and perform any useful work any time soon.

      (oop.ismad.com)

      • Smalltalk (Score:3, Interesting)

        by booch (4157)
        Thinking that OO is hard is just plain wrong. The main problem with the way OOP is taught is that the commonly used languages mix both OOP and non-OOP procedural elements. Constantly switching between the two doesn't let the student "get" the OOP part very easily.

        The answer is to use something like Smalltalk, where everything is OO. In early testing, the Smalltalk developers found that it was *easier* to teach Smalltalk to beginners than procedural languages, because people are already familiar with doing things to objects in the real world. Whereas it takes a certain way of thinking to come up with step-by-step manipulations of abstract data structures.
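The "mixing" complaint can be sketched in Java itself: in the same first lesson a beginner meets free-standing static functions on primitives (procedural style) and methods invoked on objects (OO style). Both classes below are hypothetical illustrations, not from the thread:

```java
// Procedural element: a free-standing function operating on a primitive.
// There is no object here at all, yet this is idiomatic beginner Java.
class Mixed {
    static int doubleIt(int n) { return n * 2; }
}

// Object element: the same kind of behaviour, but expressed as a message
// sent to an object that owns its own state -- closer to Smalltalk's
// "everything is an object" model.
class Counter {
    private int value;
    void increment() { value++; }
    int value() { return value; }
}
```

A Smalltalk course never has to draw this distinction, which is the parent's point about teaching with fewer paradigm switches.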
        • (* The answer is to use something like Smalltalk, where everything is OO. In early testing, the Smalltalk developers found that it was *easier* to teach Smalltalk to beginners than procedural languages, because people are already familiar with doing things to objects in the real world. *)

          I have heard this claim from Smalltalk fans before, but the "experiment" has never been repeated in a proper research setting. Thus, it is an ancient legend that just keeps getting propagated over and over.

          I would note that people think *differently* than each other. Just because thinking X way is natural for person A does *not* necessarily mean X is natural for person B.

          Don't paint with too wide a brush.

          If OO and Smalltalk model *your* head well, that is fine. Just don't extrapolate that all over the planet without real research first.

          Personally, I think it is more important to focus on making software change-friendly rather than making it easy to learn to program. Although, both are important factors.
            Well, perhaps the "anecdotal evidence" exists because that's the experience of a lot of teachers at universities? Many of them used to teach Smalltalk at one point, and they can compare to how it is now when they use Java or (heaven forbid) C++.

            It's not like you're going to put 10 people in one room and teach them Java/C++/whatnot and 10 in a different room and teach them Smalltalk, and then see which group is best able to solve some random problem. There's just not much of a point.

            And while people think differently, I think that's beside the point. The issue was to compare languages that teach OO. Whether a person is more "apt" at procedural or functional programming is beside the point. The hypothesis "if you want to teach something, try to teach it with as few distractions as possible" would seem to be valid. And that would be the point of teaching Smalltalk.

            (Note: I haven't learned Smalltalk and have only studied it a little "for historical reasons".)
      • by scrytch (9198) <chuck@myrealbox.com> on Friday August 23, 2002 @01:04PM (#4127406)
        Go see that page, oop.ismad.com and you'll mod the parent up to +5 funny. Just ignore the gross misunderstanding of OO, the selective process of argument where he flips between implementation and concept, the plug for some vague "table driven programming" thing (that basically is OOP without inheritance), and the entire fallacy of division (google for it) that is promulgated throughout... probably a good fifth of the material is dedicated to red baiting, to the point of displaying a hammer and sickle flag. My congratulations on a masterful troll. It had me going for a bit. Love the "beat up spock" visual analogy for "abuse of logic" too.
        • (* Go see that page, oop.ismad.com and you'll mod the parent up to +5 funny. Just ignore the gross misunderstanding of OO *)

          Let's cut to the chase.

          Where is this grand evidence that OOP is objectively better?

          The evidence on my webpage is as strong as ANYTHING used to justify OOP.

          Where is your evidence, Mr. GlassHouse?

          Ignore the fact that you don't like me and think I am a troll. Just produce the evidence for the world to see. Good evidence is orthogonal to any alleged troll.

          (I was often told that good evidence existed in Meyer's famous work. So, I purchased a copy. However, I found tons of reasoning holes in it. A review of it can be found on my website.)
          • Where is this grand evidence that OOP is objectively better?

            As I've pointed out before, it's in the collective experience of legions of software developers. If they didn't -- at least subjectively -- feel that the OO approach suited them better, they wouldn't use/advocate it. And what feels better to you is often the best approach for you to take. There are an awful lot of people putting their money where their mouth is on this one, and they're still doing it decades later. It's hard to believe that they're all wrong on everything after all this time.

            You may not personally have seen any benefits from OO, and you may personally have seen benefits from a relational approach. As I've also pointed out before, and by your own admission, your experience comes from a very narrow field of programming, to which one approach seems much better suited. It's not surprising that you find that approach superior. OTOH, you are yourself falling for the "too wide a brush" problem of which you accuse others elsewhere in this thread. Those of us who work in diverse areas of programming have often found OO to be at least as natural as, or more natural than, a purely procedural approach. We also acknowledge that it has its flaws -- and there are plenty -- but many of these can be avoided if you use a tool that doesn't insist on a purely OO approach (and frequently one that ignores half of OO as well, such as certain popular mainstream programming languages today).

            • by Tablizer (95088)
              As I've pointed out before, it's in the collective experience of legions of software developers..... It's hard to believe that they're all wrong on everything after all this time.

              1. Collective experience used to be that the world is flat.

              2. It could be subjective (the "mindfit" argument). That is fine, but 99% of the stuff on the shelves implies that OOP is objectively better. I don't see disclaimers that the benefit list may be subjective.

              3. The "popularity metric" is that Windows is "better". Do you really want to back that?

              4. I have never seen a good survey that said most developers prefer OOP.

              and by your own admission, your experience comes from a very narrow field of programming, to which one approach seems much better suited. It's not surprising that you find that approach superior.

              Narrow, but large, I might point out. Not a single OO book ever limited its bragging to specific domains; instead they strongly imply a "step up in evolution" over procedural.

              Those of us who work in diverse areas of programming have often found OO to be at least as natural as, or more natural than, a purely procedural approach.

              Unless you can define/measure "natural", that appears to be a rather subjective thing.

              Plus, some OO fans here have said that OOP is *not* "natural" nor should that necessarily be the goal of it.

              I believe in the scientific process where you have to openly justify things based on open evidence, and not personal opinion and "feelings". Your "evidence" fails miserably here.

              BTW, who gave that ahole a "4"? It contains almost nothing but personal digs. Damned moderators!
  • OOD101 or CS101? (Score:5, Insightful)

    by one9nine (526521) on Thursday August 22, 2002 @09:55PM (#4123964) Journal
    Are we talking about a beginning OOD class or a beginning CS/Programming class? When you first teach someone how to program, the last thing you want to do is start with OOD. One must learn about variables, arrays, assignment vs. comparison, loops and conditional statements. Then one must learn about functions and how to separate code into them. Simple algorithms need to be introduced as well. Also, how to break down a problem into several steps and then code it. Finally you can start to teach about classes as well as one of my personal favorites, data structures.

    Just because Java is focused on objects doesn't mean you have to teach OOD right off the bat. You have to start with the basics. True, you're going to have kids ask "What does static mean?" You just tell them to ignore it for now. Why is that looked upon as a bad thing? The same thing happens when you teach C++. You tell your beginners to ignore stdio. Later, when it's time, you can teach about includes and classes.
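For reference, this is the canonical first program the whole thread is arguing about; nearly every word on the main signature line ("public", "static", "void", "String[]", even "class") is something the beginner is told to ignore for now:

```java
// The traditional first Java program. The boilerplate around the single
// println call is exactly what an introductory course must either explain
// up front or hand-wave past.
class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}
```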

    This is why I didn't learn jack shit in college. Everything is focused on OOD. Object this and class that. I am not saying there's anything wrong with OOD, but colleges don't focus enough on the fundamentals. That's why there are so many people who overengineer everything and who can't even tell you the difference between a Mergesort and a QuickSort or even know what a Red-Black tree is!
    • I agree that a distinction has to be made between OOD and algorithms and basic programming fundamentals. I would say that a good way to learn software development would be as follows:

      • First, learn basics like arrays, data types, operators, functions, pointers, structures. Learn one or two languages.
      • Second, learn about algorithms and data structures. Learn about sorting and merging lists, learn about heaps, stacks, trees, etc. Learn about algorithm complexity. Make sure to emphasize modularization; that is, if you are learning about trees, make sure the code you write to manipulate the tree is cohesive.
      • Learn about objects and OOA/OOD. Learn how data structures lead to classes and objects. Learn about data hiding, inheritance and polymorphism.
      • Learn design patterns. Show how solutions to certain families of problems can be re-used. Show how algorithms can be made more generic by using polymorphism.


      • Somewhere along the line you should learn more about algorithm complexity, various programming paradigms (like functional programming), low-level languages like assembly, operating system and networking concepts, and advanced topics like databases, distributed programming, and real-time programming. But these are all extras. I still think that a programmer needs to learn what a loop is before he should be concerned about what an object is.
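As a rough illustration of the second and third steps above, here is how a raw data structure might later become a class once its manipulation code is made cohesive and its internals hidden. The IntStack name and its growth policy are arbitrary choices for the sketch:

```java
// Step 2 teaches the raw structure: an int array plus an index, manipulated
// by loose code. Step 3 makes that code cohesive and hides the internals.
class IntStack {
    private int[] items = new int[8];
    private int top;  // index of the next free slot

    void push(int v) {
        if (top == items.length) {                 // grow when full
            int[] bigger = new int[items.length * 2];
            System.arraycopy(items, 0, bigger, 0, items.length);
            items = bigger;
        }
        items[top++] = v;
    }

    int pop() {
        if (top == 0) throw new IllegalStateException("empty");
        return items[--top];
    }

    boolean isEmpty() { return top == 0; }
}
```

The array and index still exist, but callers can no longer corrupt them, which is the bridge from "data structure" to "class".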
    • One must learn about variables, arrays, assignment vs. comparison, loops and conditional statements. Then one must learn about functions and how to separate code into them. Simple algorithms need to be introduced as well. Also, how to break down a problem into several steps and then code it. Finally you can start to teach about classes as well as one of my personal favorites, data structures.

      And don't forget relational databases. I think relational concepts are some of the greatest ideas of computer science. You can reduce complex GOF-like "patterns" into skinny little formulas, for example. GOF looks like the old-fashioned, hard-wired "noun-structure in the code" way of "doing patterns", IMO. Relational transcends most of GOF.
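The claim is hard to demonstrate without an actual database, but the table-driven flavour of the idea can be sketched in plain Java: a lookup table of data standing in for a hierarchy of hard-wired Strategy subclasses. The customer types and discount rates below are invented purely for illustration:

```java
import java.util.Map;

// Table-driven sketch: instead of one Strategy subclass per customer type
// (the hard-wired "noun structure in the code"), the variation lives in a
// data table -- which could just as well be a row in a relational database.
class Discounts {
    // customerType -> discount rate; hypothetical values for the sketch.
    static final Map<String, Double> RATES = Map.of(
        "regular", 0.00,
        "member",  0.05,
        "vip",     0.10);

    static double priceFor(String customerType, double basePrice) {
        double rate = RATES.getOrDefault(customerType, 0.0);
        return basePrice * (1.0 - rate);
    }
}
```

Adding a new customer type is a data change (a new row), not a code change (a new subclass), which is roughly what "reducing a pattern to a formula" means here.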

      I don't know why database vendors don't spend more effort pointing this out. I suppose it is because in OO projects you often end up noun-modeling twice anyhow: once in code and once in the database. Thus, it has not cost them sales. If dumb developers want roughly duplicate structures, why should they care?

      (Note that the current vendor offerings of RDBMS are not the ideal, IMO, but good enough for now.)

      oop.ismad.com

      • And don't forget relational databases. I think relational concepts are some of the greatest ideas of computer science. You can reduce complex GOF-like "patterns" into skinny little formulas, for example. GOF looks like the old-fashioned, hard-wired "noun-structure in the code" way of "doing patterns", IMO. Relational transcends most of GOF.


        You are far off topic.

        a) a relational database is not a programming language

        b) the relational paradigm has nothing in common with the OO paradigm or the procedural paradigm

        c) in a relational database you store DATA, not code (except for stored procedures)

        d) GOF is about structure and behaviour; further, you can't express anything you can express with GOF design patterns in relational terms. You are plain wrong.

        e) in another post you criticize the need to meditate to think right: and? Is it not necessary to meditate and think right to apply relational paradigms correctly? I assume you learned all the kinds of joins in a day? You also learned all the ways of normalizing databases in a day?

        The thread was about the question of how to teach a language. Furthermore, it was about how to teach an OO language, and how to teach Java.

        It's definitely not about Tablizer's fight against OO paradigms... you should definitely start to understand your enemy (OO) more in depth before ranting constantly about your superior procedural and relational approaches.

        In the world I live in, procedural is dead... and in the future I'm moving into, OO is already left behind us, as there are far more efficient ways: aspect-oriented and subject-oriented programming, for instance.

        Regards,
        angel'o'sphere
        • (* a relational data base is not a programming language *)

          It does not matter. If it replaces GOF, it replaces GOF, whether it's a gerbil or a language.

          (* in a relational data base you store DATA, not code (except for stored procedures) *)

          Yes it can and I have done it before. However, it is not necessary to compete with most of GOF.

          (* GOF is about structure and behaviour; further, you can't express anything you can express with GOF design patterns in relational terms. You are plain wrong. *)

          The relational part replaces *most* of it. It does *not* have to replace *all* to be an effective alternative.

          (* in another post you criticize the need to meditate to think right *)

          No! I pointed out a contradiction of claims. I don't dispute that relational takes training.

          (* The thread was about the question how to teach a language. *)

          Yes, but "why" and "when" are prerequisites to "how".

          (* you should defintly start to understand your enemy (oo) more in depth before ranting *)

          Red herring insult. I personally think you don't understand how to effectively use relational technology.

          (* In the world I live, procedural is dead *)

          In practice it is very much alive, even in OOP languages (it is just more bloated in them).
        • In my reply I forgot to mention:

          and in the future I'm moving into, OO is already left behind us, as there are far more efficient ways: aspect-oriented and subject-oriented programming, for instance.


          These technologies are currently only at the "lab" stage and are yet more convoluted patches on top of already convoluted OO to "fix" the sins of OO.

          They are at least a realization that OOP cannot handle relativism very well. Even IBM more or less agrees that OO has relativism problems in its introduction to such technologies.

          Are you gonna call IBM a "troll" also?
          • Some of the major alternative paradigms are well out of the lab stage. Popular functional programming languages have been used for real world projects for years, and some of those languages have well researched and documented advantages over current mainstream approaches (much faster development, formal proofs of correctness, much more concise code to solve the same problems, etc). Why doesn't the programming world move to them en masse? The same reason so many ex-procedural types don't "get" OO: momentum. It's really as simple as that.

            BTW, what do you mean by "OOP cannot handle relativism very well"?

            • (* Some of the major alternative paradigms are well out of the lab stage. *)

              I never suggested otherwise. I never said all alternatives are in the lab. I am not sure how you interpreted what I said. Aspect-Oriented Programming is still in the "research stage". How that allegedly relates to functional programming, I don't know.

              (* Why doesn't the programming world move to them en masse? The same reason so many ex-procedural types don't "get" OO: momentum. It's really as simple as that. *)

              I am not sure what you mean here. Could you please elaborate?

              BTW, IMO many OO fans don't "get" databases. They often see them as a mere persistence tool.

              (* BTW, what do you mean by "OOP cannot handle relativism very well"? *)

              Well, in a nutshell, OOP is optimized for IS-A relationships. IOW, a *single* primary "view" of something.

              Now, it can indeed handle HAS-A kind of relationships via cross-referencing other classes and multiple inheritance. However, managing all those cross-references and sets is a pain in the ass using programming code.
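The IS-A/HAS-A distinction the post is drawing might be sketched like this in Java (all class names are hypothetical); the HAS-A side is exactly the cross-reference bookkeeping being called painful at scale:

```java
import java.util.ArrayList;
import java.util.List;

// IS-A: the single primary "view", expressed as inheritance.
class Employee {
    final String name;
    Employee(String name) { this.name = name; }
}

class Manager extends Employee {           // a Manager IS-A(n) Employee
    Manager(String name) { super(name); }
}

// HAS-A: relationships expressed as cross-references between objects.
// With two classes this is trivial; with hundreds of linked classes,
// keeping these reference sets consistent by hand is the claimed pain point.
class Project {
    final List<Employee> staff = new ArrayList<>();
    void assign(Employee e) { staff.add(e); }
}
```

A relational schema would express the Project-to-Employee link as one join table and let queries manage it, which is the "vote for B" position.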

              What is the best way to manage several hundreds of cross-references?

              A. Programming Code
              B. Database

              I vote for B. Fancy IDEs can help with A, but often they are simply reinventing a database without knowing/admitting it. Plus, they are usually language-specific and an additional cost.

              Further, if similar classes become so numerous that you want to turn them into data instead of code (sometimes called "parameterization"), you have a lot of rework to do. If you start out with all that stuff as data (a DB), then you don't have the translation step to worry about.

              Given a choice, I would put GUI models in databases instead of code, for example. One advantage of this is that just about any language can read and modify the GUI items instead of the one that the GUI attributes were defined/written in.

              Plus, one could browse around in all that info to get all kinds of views and searches that are tough if you put boat-loads of attributes in code.

              The ad-hoc influence of relational queries applies also to getting new views that one did not anticipate up-front. You don't have to build new physical models to get new unanticipated views, you just apply a relational equation and the database builds the new view for you.
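One possible reading of "GUI models in a database instead of code", sketched in Java: widgets become rows of attributes that any tool or language could query, rather than hard-coded widget objects. The field names and the sample query are invented for illustration:

```java
import java.util.List;
import java.util.Map;

// GUI-as-data sketch: each widget is a row of attributes. In practice this
// would be a database table; any language that can read the table can
// render, modify, or query the GUI definition.
class GuiTable {
    static final List<Map<String, String>> WIDGETS = List.of(
        Map.of("name", "okButton",  "type", "button", "label", "OK"),
        Map.of("name", "nameField", "type", "text",   "label", "Name"));

    // An ad-hoc "query" over the model: count every widget of a given type.
    // This is the kind of unanticipated view that is cheap over data but
    // awkward when the attributes are buried in code.
    static long countOfType(String type) {
        return WIDGETS.stream()
                      .filter(w -> type.equals(w.get("type")))
                      .count();
    }
}
```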
              • Why doesn't the programming world move to them en masse? The same reason so many ex-procedural types don't "get" OO: momentum. It's really as simple as that.
                I am not sure what you mean here. Could you please elaborate?

                My point is simply that whenever you have a large body of skilled people, and they are choosing the tools or techniques to work their craft, there will be an inherent bias towards those they already know.

                In the programming world, many people learned procedural first, whether it was C, FORTRAN, assembler, or whatever. Consider that it took the mainstream decades just to move beyond random-seeming gotos to structured programming (and more systematic gotos such as exceptions and labelled loops). Progress in the industry at large is years (decades?) behind what's known to be possible from an academic perspective.

                Many OO programmers today started out in procedural code, and unfortunately one of the biggest paths into OO was from C (purely procedural) to C++ (very much not purely OO). Now, if you take advantage of C++'s multi-paradigm support, you can do some very clever things, but unfortunately, those who make the jump without reading around their subject tend not to "get" OO first. As a result, you don't get a cohesive design with influences from both procedural and OO complementing each other, you get a C-like procedural design with some OO tools forced on top and looking out of place.

                It's notable, BTW, that programmers with backgrounds in established purely-OO languages such as Eiffel or Smalltalk tend to "get it" much more than those who program C++, or bastardised offshoots like Java. (I don't have a problem with Java; this comment refers only to the model underlying the language and the way it is presented). It's just unfortunate that such people represent only a small proportion of those using "OO" as a whole, and that such a pure OO approach does have some significant flaws that could be overcome using a mixed paradigm approach.

                I'm sure you see the same thing in your efforts to demonstrate the advantages of a relational approach; you just wrote that you thought many OO fans don't "get" databases. It's the same with functional programming as well. The few who have so far made the effort to learn something genuinely different have seen some advantages, and many have liked the alternative approach, but the vast majority don't make the effort to learn. Most programmers are not of that calibre by default, and most teachers, managers, and other guiding influences aren't sufficiently experienced with different approaches themselves to provide an informed and complete picture.

                • (* It's notable, BTW, that programmers with backgrounds in established purely-OO languages such as Eiffel or Smalltalk tend to "get it" much more than those who program C++, or bastardised offshoots [of it] *)

                  Well, "getting it" is hard to define/measure, other than maybe "using more OO", ignoring the quality for now. Anyhow, those who gravitated toward and/or stuck with Smalltalk and Eiffel probably have an affinity for OOP. Thus, there is perhaps a filtering mechanism.

                  IOW, it is a chicken-or-egg (cause or effect) type of question. Did the background make the programmer, or did certain kinds of programmers gravitate toward certain stuff?

                  (* and that such a pure OO approach does have some significant flaws that could be overcome using a mixed paradigm approach.*)

                  As I mentioned elsewhere, the problem is that there seems to be no consensus as when to use what. There are tons of "how" material for OO, but very very little "why".

                  Besides, some people, including me, are of the opinion that mixing increases complexity without adding much benefit unless one paradigm is really crippled in some area. I tend to combine procedural and relational because they complement each other, IMO. But relational and OO tend to fight over territory and duplicate each other's work.

                  It may be better to focus on becoming the *best* at a given methodology/paradigm rather than mediocre at *multiple* methodologies/paradigms. Very few programmers have the gift to master them all.

                  (* Most programmers are not of that calibre by default, and most teachers, managers, and other guiding influences aren't sufficiently experienced with different approaches themselves to provide an informed and complete picture. *)

                  Nobody in the entire fricken world seems to have an "informed and complete picture". If they do, they have not documented it. Instead, the books all use the same (misleading) cliches and cliche examples over and over. I guess that is "reuse" for ya :-)
                  • Well, "getting it" is hard to define/measure, other than maybe "using more OO", ignoring the quality for now.

                    No, it's really not. I could talk to a programmer for five minutes and tell you if he understands OO or not. So could anyone else who's been using OO for any significant length of time. It's really not hard.

                    As I mentioned elsewhere, the problem is that there seems to be no consensus as when to use what. There are tons of "how" material for OO, but very very little "why".

                    The same is true of most science, of course. Or perhaps all mathematicians should be able to give proofs by following some sort of cookbook? Why did Fermat's Last take so long to prove? Because giving concrete, solid proofs is hard. So is knowing when to use which programming technique for best effect, and in each case, there are always many possible options that would give acceptable results. Programming is a skilled task, and knowing what to use when and how to approach a problem comes only with skill and experience. There is no quick fix cookbook, yet you seem to require one before you will give any credit to a method.

                    It may be better to focus on becoming the *best* at a given methodology/paradigm rather than mediocre at *multiple* methodologies/paradigms. Very few programmers have the gift to master them all.

                    Very few programmers have the gift to master any of them. That's why the best programmers can produce ten times as much code or more over a given period of time as a mediocre code monkey who just got hired and is learning on the job, and do much more than ten times as much useful work with it.

                    However, there is much merit to the argument that the best is the enemy of the good. If your only tool is a hammer, everything looks like a nail. If your only tool is procedural/relational programming, or OO, or functional programming, or whatever, then all programming problems must be solved in that framework. Clearly some paradigms are vastly more efficient for solving certain types of problems than others. It's similar to the argument for optimisation: it's much more effective to choose a sound algorithm in the first place than to pick a worse-performing one and try to make up for it with low-level optimisations.

                    Nobody in the entire fricken world seems to have an "informed and complete picture". If they do, they have not documented it.

                    On the contrary; there are a few people around who do have a broad understanding of the field. I don't know of any books ever written by any of them; they are generally pretty busy running development teams, or in a more senior role. Sadly, the vast majority of academic tuition and low level management decisions are provided by people who are not in this group.

                    • (* The same is true of most science, of course. Or perhaps all mathematicians should be able to give proofs by following some sort of cookbook? Why did Fermat's Last take so long to prove? *)

                      "Prove" is probably too strong a word. "Evidence" is better for engineering stuff. Comparing bridge designs, rocket brands, or basketball stats is probably a better analogy than math.

                      At this point I would like to see *any* evidence applicable to the biz domain. Your best evidence so far.

                      (* There is no quick fix cookbook, yet you seem to require one before you will give any credit to a method. *)

                      Why should OOP be *exempt* from providing evidence of betterment beyond "I am an expert and I say so"?

                      (* Clearly some paradigms are vastly more efficient for solving certain types of problems than others. *)

                      Well, I have several times asked for areas or demos of where OO shines compared to p/r and where it doesn't, but usually get INconsistent answers from OO practitioners.

                      (* It's similar to the argument for optimisation *)

                      No, optimization is relatively easy to measure.

                      (* On the contrary; there are a few people around who do have a broad understanding of the field. *)

                      Often this ends up being "people who think like me". Most die-hard OOP fans think that a very narrow group of people properly "get" OOP, but everyone's group is different.

                      I think people mistake subjectivity for objectivity too often in this field. Without decent metrics, this is what happens.

                      Many programmers *insist* that semicolons are "clearly superior" for example, yet I don't like them despite lots of PASCAL use, and never will. They are militant about anyone who says semicolons don't work for them. They think that because semi's work fine in their head and fingers, that everybody else is or should be the same way.
                    • Comparing bridge designs, rocket brands, or basketball stats is probably a better analogy than math.

                      I disagree here. Computer science and software development work far closer to the way maths works than to most engineering disciplines.

                      At this point I would like to see *any* evidence applicable to the biz domain. Your best evidence so far.

                      No, what you would like to see is some magic metric that supports a claim that neither I, nor any other OO supporter I have seen, has ever made. Further, you would like to see it in a published and authoritative reference, and will ignore the legion of anecdotal evidence that is the basis for many of our decisions, because you personally haven't experienced it, and so don't buy it. That's your choice, of course, but the vast amounts of personal and anecdotal evidence that OO can make designs easier than a purely procedural approach is what convinces all those developers around the world (many of whom have moved to OO from a strong procedural background and therefore have plenty of personal experience on which to base their judgement).

                      Well, I have several times asked for areas or demos of where OO shines compared to p/r and where it doesn't, ...

                      Please show me one single claim by any OO fan that OO shines compared to procedural with relational approaches. Relational is a much higher-level paradigm than purely procedural programming, and has not featured in any comparison I have ever seen made, except for ones you've asked for.

                      No, optimization is relatively easy to measure.

                      Guess you haven't developed high-performance maths software that has to work on 15 different platforms lately, huh? Optimisation can be measured after you've done it, but predicting in advance what the effects of your "optimisations" will be, and indeed whether they will actually improve performance rather than reduce it, is almost impossible. We use profilers and usually leave micro-optimisations until late for a reason. Unfortunately, when you're talking about the overall design of your system, you obviously don't have that luxury (unless you're from the slanted side of the XP world, where not only does the design phase not happen formally, you claim it doesn't happen at all).

                      Often this ends up being "people who think like me". Most die-hard OOP fans think that a very narrow group of people properly "get" OOP, but everyone's group is different.

                      Well, of course it's somewhat about how you think; OO is just a more formal way of expressing techniques and ideas that many good procedural programmers had been using for years beforehand. But contrary to what you often claim, my experience is that OO developers often have very much the same view of things: they may not put the same responsibilities and relationships in the same places every time -- there are many ways to design and implement a solution to the same problem -- but they all understand the concept of responsibility and the different relationships that classes can have. Your claim that all these OO developers constantly disagree on even basic points just doesn't ring true.

                    • (* I disagree here. Computer science and software development work far closer to the way maths works than to most engineering disciplines. *)

                      Well, I disagree. I see software like a bunch of little virtual machines. The fact that they happen to be represented with symbols instead of transistors or gears is relatively minor. Cross-references (composition) are a lot like wires. And ER diagrams resemble chips wired up to each other with labeled inputs and outputs.

                      (* No, what you would like to see is some magic metric that supports a claim that neither I, nor any other OO supporter I have seen, has ever made. *)

                      Are you saying that p/r is *equal* to OOP in terms of productivity and change-friendliness, etc or NOT?

                      All the bragging in your message implies that you think OOP is superior and/or some "higher level abstraction" stuff.

                      (* Further, you would like to see it in a published and authoritative reference, and will ignore the legion of anecdotal evidence that is the basis for many of our decisions *)

                      Anecdotal evidence SUCKS! Anecdotal evidence says that Elvis performs anal probes on farmers in green saucers at 2:00am.

                      Plus, it points both ways. The failure rate of actual OOP projects appears to be at least as high as non-OO ones in Ed Yourdon's surveys.

                      (* That's your choice, of course, but the vast amounts of personal and anecdotal evidence that OO can make designs easier than a purely procedural approach is what convinces all those developers around the world.... *)

                      That is bullsh*t! It is "in style" so people just go with the flow and say what people want to hear. People are like that.

                      Again, Yourdon's surveys show *no* clearly higher ratings of OO projects by managers.

                      How is that for "anecdotal"?

                      (* all those developers around the world (many of whom have moved to OO from a strong procedural background *)

                      And I have seen some of their sh*tty procedural code. They obviously have had no training or paid very little analysis attention to making it more change-friendly in a good many cases.

                      I will agree that OOP has brought some attention to software-engineering issues that are lacking in p/r training materials, but this is a human/fad issue and NOT the fault of the paradigm. IOW, "training traditions". It is tradition to bundle OO training with software-engineering issues, while it was not with procedural. But that is not the fault of the paradigm.

                      (* Relational is a much higher-level paradigm than purely procedural programming, *)

                      Exactly, that is why they make a good couple (P and R). They are Yin and Yang. If you use R with OOP, you get *two* Yangs more or less.

                      Relational is a *superior* noun-modeling tool to OOP in my opinion. I can navigate, filter, change, search, manipulate, and viewify tables and databases much more smoothly than OOP code. That is my own personal experience and you cannot take that away from me.

                      Code sucks. It is tooooo static and hard to navigate IMO. The less of the model you put in code and more of it into the database, the better the result.

                      You cannot put the noun model in BOTH the database and code (classes). That is too much overlap and re-translation. It is unnecessary duplication and unnecessary translation back and forth. Cut out the middle man and pick one or the other.

                      (* Well, of course it's somewhat about how you think; OO is just a more formal way of expressing techniques and ideas that many good procedural programmers had been using for years beforehand. *)

                      Formal? How are you discerning "formal"?

                      (* Your claim that all these OO developers constantly disagree on even basic points just doesn't ring true. *)

                      All the OO fans will agree that "classes are good". Beyond that, the agreement is tossed out the window. I have witnessed many intra-OO fights on comp.object, and they agree on VERY LITTLE. Hell, they cannot even agree on a definition of OOP and "type". They go bonkers.

                      Thank you for your feedback.
                    • There must be a pattern to what goes wrong with the non-OO equivalent. [...] Is that so fricken hard to articulate?

                      Not at all. There are some clear areas where OO designs tend to do better than procedural. Most of them, in fairness, are due to the fact that an average programmer is not an expert and OO makes it easier for him to avoid problems that an expert would avoid using either style. However, the programming population is dominated by those in the middle of the ability bell curve, so I think this is a reasonable case.

                      The most obvious flaw with much procedural code that I have seen is that as projects grow, the design often becomes incoherent. In particular, special cases start appearing because it is easy to "just add another function" or "code the problem out with an if statement". These cases result in vast numbers of bugs, not because there is any inherent problem with using if per se, but because the approach does not scale. The number of possible results is exponential in the number of special cases, and sooner or later, special cases start conflicting and nobody notices. Even automated tests start suffering, because it becomes practically impossible to exercise all of the code paths and special cases systematically. (NB: I am not talking about cases where you have several genuinely different actions to take because your data is substantively different, which obviously happens frequently in programming whatever style you are using. I'm talking specifically about the little hacks that are put in rather than adjust a design to account for special cases cleanly throughout.) The greater range of design tools available to an OO programmer mitigates this effect somewhat, and the encapsulation and data hiding encourages the maintenance of a clean design without the proliferation of special cases.
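The failure mode described above can be sketched in a few lines of Java. This is a hypothetical pricing example, not code from the article or thread: each special case added to the one procedural function multiplies the paths through it, while the polymorphic version keeps each case in its own class.

```java
// Procedural style: every special case is another branch, and every
// new flag multiplies the paths through the single function.
class ProceduralPricing {
    static double discount(String type, double amount, boolean promo) {
        if (type.equals("gold") && promo) return amount * 0.25;
        if (type.equals("gold")) return amount * 0.20;
        if (type.equals("silver") && promo) return amount * 0.15;
        if (type.equals("silver")) return amount * 0.10;
        return promo ? amount * 0.05 : 0.0;
    }
}

// OO style: each case owns its rule. Adding a customer type means
// adding a class, not threading another flag through every branch.
interface Customer {
    double discount(double amount);
}

class GoldCustomer implements Customer {
    public double discount(double amount) { return amount * 0.20; }
}

class SilverCustomer implements Customer {
    public double discount(double amount) { return amount * 0.10; }
}

public class PricingDemo {
    public static void main(String[] args) {
        Customer c = new GoldCustomer();
        System.out.println(c.discount(100.0)); // prints 20.0
    }
}
```

Neither version is "better" in five lines; the point is what happens when the tenth special case arrives.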

                      I also question whether typical procedural designs do adapt to change as well as typical OO designs. Again, lacking the emphasis on encapsulation and data hiding that OO advocates, there is a tendency for responsibility to become spread out. Instead of particular data being owned and used by a clearly defined subsystem (whether that be a class or set of classes in OO, or a particular table or set of related tables in relational), in procedural code, it is easy and common for "convenience" structures to appear and be passed around all over the place. This, again, is a common source of bugs. Data spreads, but the controlling logic does not, resulting in accidental changes that violate invariants, mutually dependent items of data being used independently, clashes over resource usage, etc.

                      To me, the proliferation of special cases and the arbitrary spread of responsibility are the most common failings of moderate to large procedural designs. I'm sure a group of expert programmers would avoid many of them, but it is clear that typical, most-of-your-team coders do not. OO's approach helps to alleviate these problems with its emphasis on encapsulation and data hiding.

                      Note also that none of this has anything to do with inheritance and polymorphism. IMHO, these are also valuable tools, but much of their value comes from the way they allow you to extend an OO design without breaking that focus on encapsulation, which is the source of the big benefits in most of the projects I'd consider "good OO". Of course, there are other advantages for things like type safety, but these, to me, are secondary.

                      How does OOP allegedly fix [the special case problem]? Inheritance? I find that the granularity is often too large. You can't override 1/3 of a method for example if only 1/3 is different.

                      It is true that you cannot do this in most OO languages, and I freely concede that on occasion this is annoying. However, in fairness, if your methods are reasonably well thought out and self-contained as they should be, this problem rarely arises in practice. When it does, it's usually because a new and significantly different set of requirements have been added to a previous design, and that is normally grounds to adjust that design, possibly by reorganising the methods available. Wanting to override 1/3 of a method is usually symptomatic of either an unfortunate original choice of responsibilities, or of such a change in requirements, and in either case the answer is usually the same.
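The "reorganising the methods" answer above is conventionally called the Template Method pattern. A minimal sketch with hypothetical names: the stable two-thirds of the behaviour stays in the base class, and the varying third is factored into a hook that subclasses override on its own.

```java
// Template Method sketch: the fixed flow lives in the base class,
// and only the step that varies is overridden.
abstract class ReportWriter {
    // The stable skeleton: subclasses cannot rearrange it.
    public final String write(String data) {
        return header() + format(data) + footer();
    }

    protected String header() { return "== report ==\n"; }
    protected String footer() { return "== end ==\n"; }

    // The one step that varies: override just this.
    protected abstract String format(String data);
}

class PlainReport extends ReportWriter {
    protected String format(String data) { return data + "\n"; }
}

class ShoutingReport extends ReportWriter {
    protected String format(String data) { return data.toUpperCase() + "\n"; }
}

public class ReportDemo {
    public static void main(String[] args) {
        System.out.print(new ShoutingReport().write("quarterly numbers"));
    }
}
```

When you find yourself wanting to override a fraction of a method, the usual refactoring is exactly this: promote that fraction to a method of its own.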

                      The second rewrite is often better simply because you have *hindsight*. IOW, learn from mistakes of the past. Thus, "rewrites" don't make very good comparisons.

                      Unfortunately, most large-scale projects don't get written simultaneously by two identical programming teams in the different styles just to provide an objective comparison. Such porting exercises are the closest you can realistically get to a fair comparison. I agree that the second project will most likely have an advantage due to hindsight, but it's also often hampered by the lack of expert domain knowledge that was available when the original was written. Even neglecting that drawback, I question whether that advantage could have made a three-fold difference to development speed (which was sustained even in new development) and account for a 90% reduction in bug rate (which was also sustained). If you truly believe that this could be the case, then I have no further information I can provide about this case to convince you of OO's benefits.

                      Unless you perhaps feel that the procedural version was the *best possible* that procedural can get. Do you feel that?

                      No, I'm quite sure it wasn't. However, it was real procedural code that was actually produced by a real development team, not a theoretical, artificially perfect implementation produced by a team of 100% effective experts. The OO version to which I'm comparing it was also real world code produced by a real world team. This is what actually happened, which IMHO is far more important for most purposes than what might have happened in an ideal world. Ideal worlds don't pay the rent. :-)

                      Perhaps, but one has to refactor *less* in p/r because most of the "noun model" is in the database. You just change relational formulas to get different "views" instead of physical code structure. It is like commanding an army instead of moving each and every soldier by yourself.

                      That's hardly fair. Refactoring a class design, even right down the hierarchy, is no more of an endeavour than a similarly scaled adjustment of a database schema to allow the new relations you want to describe. More modest changes to a design don't require this scale of change in either approach.

                      Well, there you go. Those are some actual, concrete, clearly defined situations where I feel that OO can convey significant advantages in measurable things like bug count and development rate, and an actual, concrete example where it worked in a real project. I can't do much better than that. :-)

        • I wrote the following about GoF patterns in some Slashdot thread a few months ago, but it is more relevant to this thread so I'm going to be a lameass and cut/paste it. (In fact, your handle rings a bell. Maybe you were even in that thread and have read this before. I can't remember.)

          Of course those GoF patterns can make life hell for the maintenance developer or app framework user, when people turn it into a contest to see how many design patterns they can fit into a single project. The overall "Design Patterns" philosophy is really "how can I defer as many decisions as possible from compile time to run time?" This makes the code very flexible, but the flexibility is wasted when a consultant writes code using lots of patterns to puff up his ego and then leaves without leaving adequate comments or documentation. Without insight into how the system works, the configurability and flexibility that these patterns offer is lost. The system hardens into an opaque black box.
          Deferring decisions to runtime makes code hard to read. Inheritance trees can get fairly deep, work is delegated off in clever but unintuitive ways to weird generic objects, and finding the code you're looking for is impossible, because when you're looking for the place where stuff actually happens, you eventually come across a polymorphic wonder like

          object.work();

          and the trail ends there. Simply reading the code doesn't tell you what it does; the subtype of object isn't determined until runtime. You basically need a debugger.
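A miniature of that dead end, with hypothetical names: the concrete type is chosen at runtime from data, so reading the call site alone cannot tell you which body runs.

```java
// The static type is Worker; the dynamic type is decided at runtime.
interface Worker {
    void work();
}

class Printer implements Worker {
    public void work() { System.out.println("printing"); }
}

class Mailer implements Worker {
    public void work() { System.out.println("mailing"); }
}

public class DispatchDemo {
    static Worker pick(String config) {
        // In a real system this choice might come from a config file,
        // a registry, or a factory three layers away.
        return config.equals("mail") ? new Mailer() : new Printer();
    }

    public static void main(String[] args) {
        Worker object = pick(args.length > 0 ? args[0] : "print");
        object.work(); // which implementation? the source alone won't say
    }
}
```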

          You can take a really simple program and screw it up with aggressive elegance like this. Here is Hello World in Java:


          public class HelloWorld {
              public static void main(String[] args) {
                  System.out.println("Hello, world!");
              }
          }


          But this isn't elegant enough. What if we want to print some other string? Or what if we want to do something else with the string, like draw "Hello World" on a canvas in Times Roman? We'd have to recompile. By fanatically applying patterns, we can defer to runtime all the decisions that we don't want to make at compile time, and impress later consultants with all the patterns we managed to cram into our code:


          public interface MessageStrategy {
              public void sendMessage();
          }

          public abstract class AbstractStrategyFactory {
              public abstract MessageStrategy createStrategy(MessageBody mb);
          }

          public class MessageBody {
              Object payload;

              public Object getPayload() {
                  return payload;
              }

              public void configure(Object obj) {
                  payload = obj;
              }

              public void send(MessageStrategy ms) {
                  ms.sendMessage();
              }
          }

          public class DefaultFactory extends AbstractStrategyFactory {
              private DefaultFactory() {}

              static DefaultFactory instance;

              public static AbstractStrategyFactory getInstance() {
                  if (instance == null) instance = new DefaultFactory();
                  return instance;
              }

              public MessageStrategy createStrategy(final MessageBody mb) {
                  return new MessageStrategy() {
                      MessageBody body = mb;

                      public void sendMessage() {
                          Object obj = body.getPayload();
                          System.out.println((String) obj);
                      }
                  };
              }
          }

          public class HelloWorld {
              public static void main(String[] args) {
                  MessageBody mb = new MessageBody();
                  mb.configure("Hello World!");
                  AbstractStrategyFactory asf = DefaultFactory.getInstance();
                  MessageStrategy strategy = asf.createStrategy(mb);
                  mb.send(strategy);
              }
          }


          Look at the clean separation of data and logic. By overapplying patterns, I can build my reputation as a fiendishly clever coder, and force clients to hire me back since nobody else knows what all this elegant crap does. Of course, if the specifications were to change, the HelloWorld class itself would require recompilation. But not if we are even more clever and use XML to get our data and to encode the actual implementation of what is to be done with it. XML may not always be a good idea for every project, but everyone agrees that it's definitely cool and should be used wherever possible to create elegant configuration nightmares.
          • I thought I worked on that project, but then I realised mine was better - we used proxies as well. Ner nerny ner ner.

            You can have all my Karma, I will never see a post more deserving of up-modding on /., and therefore have no need to mod again.

            THL.
            • Oh you think you're so smart, do you...

              I'm going to improve my Hello World and make it even better than yours. Right now mine only takes advantage of Singleton, Factory and Strategy. I'm going to add even more patterns to it: Composite, Proxy, Bridge, Prototype, Adapter, Decorator, and Builder. Maybe Flyweight, if I can figure out a use for it.

              It will be the most flexible, configurable Hello World anyone ever wrote. Nobody will ever need to write another one. That would be "reinventing the wheel".
    • Indeed. You also don't teach tensor algebra before people have learned how to add two variables.

      Tensor algebra & mathematics is actually a very nice analogy to OOP & programming.

      One professor I had said when he introduced us to tensors that you don't understand them, you get used to them.

      And that is exactly what my experience is with OOP. At first it "feels" strange and new, and you have a problem wrapping your mind around it. But the more you try, the more natural it feels.

      Another good example is languages. You can learn the rules and vocabulary of a foreign language as long as you want, if you don't speak and write ("get used to it") and with it learn to "think" the language, you'll never really be able to use it as a tool.

    • You're right. That point, in fact, negates the entire article. He skips back and forth between intro classes and more advanced classes as his arguments require.

      And I certainly hope this was just a thoughtless mistake:
      "If you were highly charitable, you might give HelloWorld OO points because the println() method of the out class in the System package is being invoked."

      System is a class (in the java.lang package, not a package itself)
      out is a static field of System (a PrintStream instance), and println() is a method of PrintStream
  • by jefflinwood (20955) on Thursday August 22, 2002 @09:57PM (#4123969) Homepage
    In my intro to CS class, we used a test harness to determine whether or not our code worked correctly. This was a C++ class on the Mac, though.

    JUnit could be used to create a test harness that "plugs" into the code the students write. The professor or TA could define an interface that the students have to implement.
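That idea can be sketched in plain Java (hypothetical interface and grading check; a real course harness would more likely use JUnit's @Test methods and assertions):

```java
// Instructor-defined interface that every submission must implement.
interface IntStack {
    void push(int x);
    int pop();
    boolean isEmpty();
}

// One student's submission (hypothetical).
class StudentStack implements IntStack {
    private int[] data = new int[16];
    private int top = 0;

    public void push(int x) {
        if (top == data.length) data = java.util.Arrays.copyOf(data, top * 2);
        data[top++] = x;
    }

    public int pop() { return data[--top]; }
    public boolean isEmpty() { return top == 0; }
}

// The harness grades any implementation through the interface alone,
// never looking at the student's class directly.
public class Harness {
    static boolean grade(IntStack s) {
        s.push(1);
        s.push(2);
        return s.pop() == 2 && s.pop() == 1 && s.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(grade(new StudentStack()) ? "PASS" : "FAIL");
    }
}
```

The interface is the contract; the TA's tests never need to change as submissions vary.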

    I think beginning computer science for majors is backwards, anyway. Intro to engineering classes at CMU for freshmen were all taught as practical, hands-on, applied courses that focused on real problems. My civil engineering class built bridges, visited dams, and visited construction sites. My chemical engineering class analyzed actual plant failures (things that go boom!) to determine what mistakes the engineers made. My intro to cs class was all theory, with one interesting project where we added AI into a 2D simulation. There wasn't a lot of practical information to take away from the class at the end of the year beyond a "Learning C++" book.
    • Computer Science is not about finding solutions to real world problems. Well, at least not like engineering is.

      It is a science which studies the possibilities of computing--not a field of engineering. (Though strangely at Marquette, almost all the computer engineering classes are taken from the comp sci dept.)

      The idea of comp sci 101 is to give you the building blocks on which to build theory. This usually involves basic computer architecture and programming in whatever language is currently seen as standard or best (or paid to be taught).
      • Computer Science is not about finding solutions to real world problems. Well, at least not like engineering is.

        I agree with you. I think undergraduate degrees in software engineering should be more readily available (and accredited by ABET [abet.org]). Sort of like the difference between chemistry and chemical engineering. Degrees in IS/MIS are available, but those are really focused on becoming a systems analyst or a corporate IT programmer, and not very heavy on actual programming or design.

    • JUnit could be used to create a test harn...

      I'm currently at Uni and we've had several large projects with automated tests as part of the assessment (some using JUnit).

      Last time I checked no-one writes completely bug-free code; we had problems with bugs in the tests. I believe this will happen to some extent with any automated tests being used to mark an assignment.
      Anyway, to use something like JUnit to define tests you also need to define all the classes and public methods for the students. This may work fine for comsci101, but at any higher level assignments need to have some design flexibility.

      Orthanc

      • Last time I checked no-one writes completly bug free code, we had problems with bugs in the tests.

        Same thing at Rose-Hulman, for Dr. Anderson's classes (UNIX system programming, and programming language concepts). The students discussed the assigned problems in the course's newsgroup, and often, students would find bugs in the public test suites. That's how it is in any decent-size software engineering endeavour: a cat and mouse game between coders and testers.

  • by PD (9577) <slashdotlinux@pdrap.org> on Thursday August 22, 2002 @10:24PM (#4124055) Homepage Journal
    There really is no good OO way to print in Java. How are you going to make a hello world program print? System.out.println ("foo") isn't any better than the old BASIC

    10 PRINT "FOO"

    It does little good to make a version of hello world that has some objects in it when in the end there will be a System.out.println call.

    I think you're really arguing for a language that will let you write hello world like this:

    "hello, world".print
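Java won't let you add methods to String, but a thin wrapper gets close to that reading. A hypothetical sketch, not a claim about how Java should work:

```java
// A message object that knows how to print itself -- about as close as
// Java gets to "hello, world".print without changing the language.
class Message {
    private final String text;

    Message(String text) { this.text = text; }

    String render() { return text; }

    void print() { System.out.println(render()); }
}

public class Hello {
    public static void main(String[] args) {
        new Message("hello, world").print(); // prints: hello, world
    }
}
```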

    • I've always liked the C++ iostreams way of printing. It's not pure OO, but it's intuitive. It just needs another set of operators.

      I think where Java gets it wrong, and why "System.out.println()" looks so silly to you, is that Java students are taught that everything is an object. But not everything is an object, especially when you're printing.
      • Everything IS an object, but most computer people don't think about things that way, because they learned to code procedurally. To many programmers, objects are auxiliary to functions, used only when it is necessary to organize.

        If you teach a student to think in an object oriented way from day one, they will think of everything as objects, just like most coders think in procedures now.

        But that's just my two cents.
        • (* If you teach a student to think in an object oriented way from day one, they will think of everything as objects, just like most coders think in procedures now. *)

          First, it should be demonstrated that OOP is objectively better *before* making students think in such ways without giving them many alternatives.

          I will agree that OOP seems effective in physical modeling, where it was born (Simula 67), but IMO the benefits there do not extrapolate to modern business systems.

          The main reason is that modern systems need "relativistic abstraction", which OOP does not provide without making tangled messes. OOP is optimized for hierarchical IS-A abstraction, which is the antithesis of relativism, where sets and "view formulas" do better IMO.

          • First, it should be demonstrated that OOP is objectively better *before* making students think in such ways without giving them many alternatives


            this has already been shown ...
            as well as it has been shown that functional programming languages are better than procedural ones ...
            as well as relational languages have been shown to be better than procedural ones ...
            as well as it has been shown that logic languages are better than procedural ones ...

            But: procedural languages are (arguably) the easiest ones, and that's why they have survived until now.

            I only know one language which is still procedural only: Fortran. All other languages have made a hybrid OO evolution.

            OOP is optimized for hierarchical IS-A abstraction

            How do you come to that opinion?

            The main reason is that modern systems need "relativistic abstraction"

            I don't think so! Today's systems need to interact. Interact with DBs, business logic, and millions of concurrent users. They need to be maintainable, evolvable, and reusable. They need to scale, and you want to abstract away technical concerns as often as possible.

            The DB you use below such a system is just a replaceable technical concern.

            In 90% of the cases a standard relational DB is not the best choice. It's only the cheapest in terms of available support and existing infrastructure. OO databases are in general far faster than relational ones in typical usage scenarios.

            angel'o'sphere
            • (* this is already shown ... *)

              Bull! Where is it shown?

              (* I only know one language which is still procedural only: Fortran. All other languages have made a hybrid oo evolution. *)

              I thought they were adding OO extensions to Fortran. (Not that it proves anything except that OOP is in style right now.)

              (* The DB you use below such a system, is just a replaceable technical concern. *)

              Not any more than OOP is.

              (* OO databases are in general far faster than relational ones in typical usage scenarios. *)

              Bull!

              It is moot anyhow because OO DB's have been selling like Edsels.

              I will agree that in *some* domains OODBMS perform better, such as CAD perhaps.
        • by Arandir (19206) on Friday August 23, 2002 @02:42AM (#4124924) Homepage Journal
          Sometimes you have to live in the real world, and you discover that not everything follows a single paradigm. Languages that follow a single paradigm have serious drawbacks. Java is one. There's a reason why Java isn't used for systems programming. There's a reason why Corel has yet to finish its Java office suite.

          An int is four bytes on my CPU. Why should I have the overhead of an object wrapped around it? Why do I need runtime polymorphism on ints? For OO educational purposes, it makes sense to teach that an int is an object. But often in the real world it's far better to make an int simply four bytes in memory.

          Rule of thumb: if polymorphism doesn't make sense for an object, maybe it shouldn't be an object. What can you possibly derive from a bool that wouldn't still be a primitive bool?
          • Well, in Java, an int is 4 bytes. An Integer is an object wrapped around an int, as you put it. This object is useful for many things. Math isn't really one of them. For plain math, use plain ints.

            C#, however, has automatic wrapping of primitive types with objects. This is supposedly done on an as-needed basis. I've never tried it, but I'd assume that the wrapping happens only when it's required, otherwise the VM will preserve the basic types for performance reasons.

            As to reasons for why there is a Boolean object, it's really just a question of convenience. The Boolean class contains methods for manipulating booleans, like making Strings out of them, or making new booleans from strings. What's the harm in extending this helper class to also represent a boolean value? It's still an object. Maybe you never need to subclass it? That doesn't mean it shouldn't be an object.
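            The wrapper-plus-helpers pattern being described can be sketched like this (Python, with made-up names; this is not Java's actual Boolean API):

```python
# A rough analogue of a Boolean wrapper class: it boxes a primitive
# truth value and also carries the string-conversion helpers the
# poster mentions. BooleanBox and parse are illustrative names.

class BooleanBox:
    def __init__(self, value):
        self.value = bool(value)

    @staticmethod
    def parse(s):
        # helper: make a boolean from a string
        return BooleanBox(s.strip().lower() == "true")

    def __str__(self):
        # helper: make a string from the boolean
        return "true" if self.value else "false"

b = BooleanBox.parse(" True ")
```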
            • For plain math, use plain ints.

              But then it's not an object! I thought everything was supposed to be an object.

              The Boolean class contains methods for manipulating booleans

              Sounds like an adaptor class to me. The bool itself is still a non-object. If you make a string of bools, that string is not a bool, it's a string of bools. Adaptor classes are handy for such cases, but don't confuse the wrapper with the contents.

              • For plain math, use plain ints.

                But then it's not an object! I thought everything was supposed to be an object.

                I think he was talking about "the real world". The one with a screen and a keyboard and a mouse and a computer.

                In the "real world" mapping things to objects is often easy. It is also often easy to see trivial ways to interact with said object.

                Whether or not you're dealing with an instantiated object (a Java object, that is) when you do an addition is both irrelevant and uninteresting. (Unless you happen to be a computer scientist focusing on virtual machines or compilers.)
    • > "hello, world".print

      If you want to get finicky, that's still not great OO design. Unless you're designing a class hierarchy where every object has a print method, chances are you want to tell some output stream to print something, at which point the output stream requests some format it prefers from the object being printed. With a .print method on objects, you have to have some print object in scope somewhere: on the object, the class, or globally. Should each possible scope really be deciding on its own what the default output stream is?

      Thus

      stdout.print(hellomsg)

      Or in more familiar syntax ...

      cout << hellomsg;

      (Note that I have issues with C++ iostreams, but they did get this part right).

      In a language that supports multiple dispatch the issue is a bit moot, but what you put on the left side of the dot (or arrow, or whatever) in most OO languages can make a big difference in design down the road.
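      The design being argued for (tell a stream to print, and let the stream ask the object for its preferred representation) might look like this in Python, with all names hypothetical:

```python
# The stream decides where output goes; the object being printed
# only decides what its printed form is. Stream/to_text are made up.

class Stream:
    def __init__(self):
        self.written = []        # stand-in for a real output sink

    def print(self, obj):
        # ask the object for its format if it offers one
        text = obj.to_text() if hasattr(obj, "to_text") else str(obj)
        self.written.append(text)

class Greeting:
    def to_text(self):
        return "hello, world"

out = Stream()
out.print(Greeting())
out.print(42)
```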

  • by Anonymous Coward
    A lot of texts on OOP say that people new to programming learn OOP very quickly and naturally, and that teaching them OOP first is the way to go.

    This may not be true.

    I have recently taught programming to a few people. They were new to programming, and were honestly interested in it.

    I have tried the approach of teaching OOP first. They didn't get it. Then I tried to avoid the OO part, and teach them some programming, but using objects. This also didn't work very well.

    After this, I switched from Java to a simpler, structured language: PHP. Things worked a lot better, they seemed to understand the procedural paradigm naturally and very quickly.

    After a few months of teaching PHP, I tried to teach Java again. This also worked a lot better than my first attempt, as they grokked objects more easily.

    After this experience, I believe that "teach OOP first" is not the way to go.

    I think the proper way to teach programming is:

    - Teach them a structured/procedural language. Drill into them the loops, if, switch, functions, parameter passing, etc. Teach very basic algorithms.

    - Make them do some real work using the new language.

    - Teach them OOP, using an OO language.

    If the first thing you teach is OOP programming, people won't understand the need for classes and objects. They will seem like useless abstractions.

    Also, people who are not accustomed to the way computers work don't understand a lot of things in OOP, as they miss a lot of context.

    If you teach them the much simpler structured programming, they will grok OOP easily.

    There is a third path: teach structured programming first, but in an OO language. I believe this can be done, but not in Java. In Java, everything in the library is an object, so you can't avoid lots of objects and object syntax.

    Another issue is that it is important (IMHO) to teach people a productive programming language, so they can see real, useful results quickly. PHP is good for this purpose.
    • I like Eiffel for this purpose. Clean syntax, and straightforward, relatively simple rules.

      Most importantly though, nothing takes place outside of a class. Consistency is good, as people tend to get confused when explaining exceptions to the rules.

      If you're going to teach OOP, in my humble opinion, you need to stress thinking about problems in terms of classes and objects from the very first day.

      The other approach I've given serious thought to is using a language like Perl to start out by showing how things can be done in a quick and dirty way, but then expand the "hello, world"(output) script to saying "hello" to a person(input), and so on and so on, and show how modules and classes can make expanding a small program much easier. At the same time, as you construct a class, you can demonstrate arrays, associative arrays, looping, conditionals, etc.

      I'm still debating which is the better approach.
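      The quick-and-dirty-then-expand progression described above could run roughly like this (sketched in Python rather than Perl; names are illustrative):

```python
# Step 1: the quick and dirty script -- just output.
def hello():
    return "hello, world"

# Step 2: say hello to a person -- input becomes a parameter.
def hello_to(name):
    return "hello, " + name

# Step 3: the same behaviour folded into a small class, the point
# where objects start paying for themselves.
class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return "hello, " + self.name
```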
  • by Anonymous Coward
    To really understand a language, one should learn the standard libraries first. The book Accelerated C++ takes this approach. Stroustrup advocates this method of teaching a language. The wrong way to teach a language is to exhaustively teach syntax.

    Of course syntax is important, but one should not be forced to become a language lawyer before useful tasks can be accomplished. By emphasizing a language's standard libraries, you learn the "philosophy" of the language as well as its syntax. And in the end you can do useful things with the language, and do them correctly within the philosophical context of the language. You avoid such common problems as using a C++ compiler to write what in reality amounts to C programs.

  • A quote from the article:


    As we learned more and more about programming in Java, we found that C was not the right way to approach Java.


    To learn C you need to know assembler (it was invented to be a portable assembler).

    To learn C++ you need to know C (otherwise you'd better skip directly to Java, OO Pascal, or well, Smalltalk ... if not Eiffel).

    Unfortunately you cannot teach a CS starter assembler. Hm ... why not? Sure you could! I learned assembler when I was 16 ....

    Unfortunately CS emphasizes learning a beginner's language instead of teaching higher-level concepts. OTOH, that's what the students want and expect ...

    And if a course is directly put into touch with higher-level concepts, you can bet it's not only functional like Miranda or ML; no, you have Lisp .... arguably the ugliest language in existence besides Fortran and JCL.

    I for my part only teach UML .... and wait for CASE systems which skip from diagramming directly to code.

    angel'o'sphere
    • Unfortunately CS emphasizes learning a beginner's language instead of teaching higher-level concepts.
      At Cal the first class you have is SICP. It is nothing but high-level languages and concepts.
    • > To learn C++ you need to know C

      A certain Mr. Stroustrup disagrees with you. In fact, C will teach you all kinds of things you need to unlearn in C++, such as pointer usage, arrays, and imperative design, which can be superseded with references, containers, and predicates, all to be found in the C++ standard library. To say nothing of generic programming with templates (you can actually write entire programs in nothing but templates; they're Turing-complete).
  • by Dr. Bent (533421) <ben@nOSpAM.int.com> on Friday August 23, 2002 @12:34AM (#4124553) Homepage
    The real problem here is that software development has moved beyond what a scientific discipline can handle. Much like modern electrical engineering evolved from the findings of early 20th century experiments with electricity, the need for real software engineering is starting to become apparent.

    But, as always, academia is behind the curve. Not that they should be on the bleeding edge, but now it's time to catch up. Computer Science programs across the country have started to straddle the fence when it comes to coursework. Do we teach theoretical science, or applied science? This is a mistake; nothing done half-assed is ever worthwhile. Do not make Computer Science more like an engineering discipline. Instead, make Software Engineering an undergrad degree unto itself.

    You should be able to teach CS101 in any language. If you can't, then you're trying to teach engineering in a science class. A stack is a stack regardless of what language it's written in. Don't pollute computer science by trying to make it something it isn't. Instead, make a new Class (pun!)...Software Engineering 101. There you can teach design methodologies (like OOP), proper use of the latest tools, automated testing methods, and other applied theory that has no business in a computer science class.

    This is not to say that there wouldn't be a great deal of overlap between a C.S. and an S.E. degree. After all, you have to learn physics before you can be a Civil Engineer. But it's just not possible to teach you everything there is to know in 4 years. I've learned so many formalisms and techniques since I received my B.S. in C.S. that I wondered why I hadn't heard anything about them while I was in school. The answer, I realized, is that the days of the computer Renaissance man are ending. Developing an algorithm and developing a software system are two completely different tasks. Just as a physicist can't build a bridge and a Civil Engineer didn't invent superstring theory, you can't ask a computer scientist to build a software system or ask a software engineer to develop a new compression algorithm...it's just the wrong skillset for the job.

    • Do we teach theoretical science, or applied science?

      You teach by example, and do both. Andrew Koenig and Barbara Moo, two of the prime movers behind C++, wrote a book called Accelerated C++: Practical Programming by Example [att.com], as a new approach to teaching C++.

      It absolutely kicks ass. Somebody else on this page commented that you need to learn C before learning C++. Most C++ people disagree; this book proves them correct. It starts with

      #include <iostream>

      // something went here

      int main()
      {
          std::cout << "Hello, World!" << std::endl;
      }
      and the first lesson was, "the most important line in this program is the second one," i.e., the comment. How refreshing is that? It does not then follow up by diving into the guts of the IOstream library; they simply say, "when you want to send stuff to the screen, use this; when you want to get stuff from the keyboard, use this," and leave the details for later. Even though the IOstream library involves OOP, they don't shove the user's nose in it.

      The people I know who have started using this book, and the approach that they advocate, to teach beginning programmers, have all found their students not only picking up the language faster, but being less frustrated with programming in general (admit it, we've all been there), and having a better understanding of what's happening in their code.

      (Pointers aren't even introduced until chapter 9 or 10, which means anything that visibly uses pointers isn't needed until then, either. Very nice.)

  • If you want to teach students how to program in an OO way in Java, you can use this program:

    BlueJ [bluej.org]

    Teachers can start teaching objects and classes from the beginning. They don't have to tell students:

    "Just write down: public static void main (String args[]) { } And don't ask me about it until later".

    It wouldn't run some of my home-made classes, but then I didn't read the manual :P

  • Python (Score:2, Insightful)

    by tdelaney (458893)
    print 'Hello, world!'

    It does exactly what it needs to, without anything extra. Each piece can be discussed separately, and picked apart or expanded as desired.
  • by angel'o'sphere (80593) on Friday August 23, 2002 @11:09AM (#4126423) Homepage Journal
    I think one problem is the structure of a language.

    I mean: what is a first class citizen? In C everything can be degenerated down to a pointer, except a preprocessor macro.

    So the only true first-class citizen is a pointer, or in other words a memory address. Structs and functions seem to be something utterly different, even besides the fact that you can take the address of both.

    In C++ suddenly we have similarities: structs are similar to classes and similar to unions. With operator overloading you can manage to get a class behaving like a function, a functor.

    But: wouldn't it make more sense to say we only have *one* thing? And wouldn't it make sense to make far more stuff optional? Like return types, access modifiers, linkage modifiers ... void as a return type, how silly. I have to write "HELLO HERE IS NOTHING" instead of writing nothing.

    {
        int i = 1;
    }

    What's that? Data? A thing with a 1 inside, stored in a thing with name i? Or is it a function with no name and a local variable i with value 1?

    let's give it a name:

    thing {
        int i = 1;
    }

    Why can't a language creator understand that the OO and functional paradigms are just two sides of the same coin? The thing above serves pretty well as both function and class.

    thing a = new thing;

    Create an instance of thing ... if (a.i == 1) is true!

    if (thing().i == 1) is true also; call thing like a function.

    There is no need for functions and structs to be different kinds of language constructs, and thus it makes no sense that a modern language forces one to distinguish them.

    In short: System Architects get a language which allows them to express the world they want to model in terms of Objects/things and assign behaviour/functions to objects. Unfortunately the language designers are mostly BAD OO designers and are not able to apply the first principle of OO correctly to the languages they invent: everything is an object.

    Even a for(;;) statement is not a statement. It's an object. It's an instance of the class for; the constructor accepts 3 arguments of type Expression (you could say Expression(.boolean.) for the second one). Well, for the compiler it DEFINITELY is only an object: java.AST.statement.ForStatement ... or something. Why the heck can't it be a class available to the ordinary programmer? At least for the teacher and the student it should be viewable as a for object and not a for statement.

    Sample:

    for (Expression init; Expression(.boolean.) test; Expression reinit) { Block block }

    Hm? A function, or a class with name for.
    Two parameter sections, one in () parentheses and one in {} braces.

    What you pass in () is stored in init, test and reinit. What you pass in {} is stored in block.

    The compiler crafter puts a for class into the library:

    class for (Expression init; Expression(.boolean.) test; Expression reinit) { Block block } {
        init();
        loop {
            test() ? block() : return;
            reinit();
        }
    }

    Wow, suddenly everything is a class. Hm, a meta class in the case above, probably. A language would be easy to use if I told my student:

    Ok, let's store an addressbook! What would you like to be in an addressbook? Name, first name, birthdate, phone number? Ok, then you do something like this:

    { Name, FirstName, Birthdate, PhoneNumber }

    We group it. That thing has an anonymous type.

    How to create objects?

    new { Name="Cool", FirstName="John", Birthdate="12/27/66", PhoneNumber="080012345" }

    Wow ... and now we need two of them, so lets give them a name:

    cool = new { ... }
    bad = new { ... }

    And we need to compare them and search them and suddenly we need to put "methods" aka "behavioural" objects into them. Oh, yes and the anonymous thing above needs a name, so it becomes a class.

    What I describe above is Lisp, but with a more C/Java/C++ like syntax.

    And a radically reduced language design. The best would be to put the language design into the runtime libraries.

    Yes: every typed language should be able to work typeless as long as you are in a "sketching" phase.

    Regards,
    angel'o'sphere

    Note, for template arguments I used (. and .) instead of what you expect ... /. eats the less and greater signs.
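    The "only one kind of thing" idea can be approximated today in any language with callable objects; a Python sketch (Thing is an illustrative name):

```python
# One construct serving as both class and function: instances carry
# data (a.i) and are also callable, like the 'thing' example above.

class Thing:
    def __init__(self):
        self.i = 1

    def __call__(self):
        # calling the instance behaves like calling a function
        return self.i

a = Thing()
```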
    • > What I describe above is Lisp, but with a more C/Java/C++ like syntax.

      Actually it reminded me of nothing so much as the ML line of languages, which includes SML/NJ, OCaml, and Haskell. All of those give you "anonymous types" like that, with named fields. They even infer types for you, so you can pass an anonymously constructed struct to a function that expected an AddressBookEntry, for example, and so long as it had all the same fields, it would accept it. In fact, you don't typically tell functions what type to expect; you just write the code, and the compiler will infer it all for you (sometimes it needs help, so they support type constraints, but those are still inferred; you don't need to declare your anonymous struct as such a type).

      I strongly suggest you check out OCaml.
  • by p3d0 (42270) on Friday August 23, 2002 @03:30PM (#4128882)
    I don't like the idea they present of computation as interaction, rather than computation as calculation. Computation-as-calculation views a program as having a specific, well-defined job to do, with a beginning and an end. This makes it much easier to reason about what the program does, and whether it does it properly: you can inspect the outputs for a given set of inputs and make sure the calculation produced the right result.

    Clearly not everything can be done this way, but I think the idea to throw in the towel and model everything as interacting processes is a huge mistake. This is especially true of concurrency, which is thrown into programs in a haphazard way these days with no particular benefit.

  • OOD Key Concepts (Score:3, Interesting)

    by hackus (159037) on Saturday August 24, 2002 @05:26AM (#4132470) Homepage
    This is my rant on the subject. You don't have to agree, but it comes from a guy writing software since he was 10 years old, who is now very old and crusty by comparison. This is what experience has taught me, perhaps your mileage will vary.

    OOD, without risking sounding like those "experts", is no silver bullet for software design. But it is a sound evolutionary advance in software engineering techniques. Yes, I do agree that the OLDER generation is more inclined towards Structured Design before implementing OOD/OOP techniques. However, I disagree that it is because that is the only thing we have been taught or have been teaching. That's complete bunk.

    Everyone in this field advances with the times. I would suggest, if it seems that way, that the older generation simply realizes what OOD/OOP is and what it ISN'T, and uses OOD/OOP where appropriate in building software.

    First of all, OOD/OOP builds heavily on Structured Design techniques (i.e. building software using ADT definitions and the foundation constructs of computer science: the statement sequence, do-while, while-do, if-then-else, and the case or selector statement). That is, a properly built OOD will embody, in every one of its object interfaces, methods which are built using sound Structured Design methods. So it is a myth that OOD/OOP gets rid of Structured Design techniques. In FACT, those who write POOR OOD/OOPs are those who have not mastered these basic constructs of computer science and the ADT that goes along with Structured Design.

    OOD does not attempt to do away with Structured Design; it complements it by organizing Data AND Code in a way that further increases the resulting code's abstract properties. (i.e. it allows the resulting algorithms to be expressed in a way that makes said code even more reusable, through inheritance for example. OOD is therefore impossible to implement without Structured Design.)

    The resulting code is far more abstract, and therefore generalized to be more reusable, and therefore, theoretically, more reliable. (i.e. Code that is used over and over again becomes more reliable over time, and is an extensible property of the life cycle of software. Although structured design allows you to reuse code through simple function calls, OOD/OOP takes it one step further and allows function calls and data representation to be generalized as a functional unit.)

    It has been pointed out, with good reason, that Java is a language which can help enforce good OO programming. However, it is not required and for example through the use of static methods, one can build Java code without using OOD/OOP techniques of any kind if one decides to do so.

    This is important: OOD, because of its abstract properties (primarily the use of inheritance), can be used to create software patterns that lend themselves to creating certain types of software.

    Certain types of software benefit greatly from OOD/OOP implementations; user interfaces, for example. Why? It is obvious. User interfaces are built using repeatable patterns themselves, application after application (File, Edit, View, Window, etc.), at their most basic level.

    When an implementation in and of itself, such as the building of a GUI, has a clear pattern, OOD/OOP methods can get a great deal more mileage out of simplifying and building code. This creates a better implementation of a GUI than a Structured Design approach alone can provide.

    With that said. You are probably thinking, what sorts of things is OOD/OOP NOT good at, and in fact SHOULD NOT be used. This is the part that gets controversial and you will decide, without knowing it, which camp you fall into by reading the next paragraph. :-)

    Well, abstraction through inheritance in OOD, while it provides excellent reusability in the context of building software, does not always result in the most effective implementation. By an effective implementation, I mean the most efficient. :-)

    So what am I saying? Well, I am saying that you sacrifice some efficiency to gain the increased code reliability that inheritance provides in OOD by compartmentalizing code AND data within an object, vs. Structured Design, which cannot do this through the use of simply an ADT and function/procedure calls.

    (i.e. You can never directly modify data in the context of a classic OOD/OOP; you have to build a middle man, as it were, to modify any data you declare private, through the use of accessor methods.)

    Although this enforces and corrects some deficiencies in Structured Design, it makes the program arguably slower to execute.
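    The accessor "middle man" described above, in a minimal Python sketch (illustrative names):

```python
# Direct access vs. the accessor middle man: callers reach the field
# only through methods, never by touching it directly.

class Account:
    def __init__(self, balance):
        self._balance = balance   # "private" by convention

    def balance(self):            # accessor (read)
        return self._balance

    def deposit(self, amount):    # mutator (write)
        self._balance += amount

acct = Account(100)
acct.deposit(50)
```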

    In the context of building, say, an operating system, OOD/OOP is not the way to go if you want a highly speedy and effective OS implementation.

    If you want such speed you invariably have to give up inheritance, and the benefits it provides, and resort to Structured Design principles only to build your OS. (i.e. ALL functions and procedures DIRECTLY access the data structures of the OS through passed parameters, thereby eliminating the middle man as above.)

    Which is not so bad, really. OSes, and components of OSes such as kernels, are designed to be speedy, as they should be.

    So, my view on the topic is that OOD/OOP is best suited on top of the OS, vs IN the OS design.

    Not everyone agrees with that, and that is fine.

    Why? Well, because many argue that the sacrifice in speed is justified by the complexity of building an OS kernel, and that the reliability gained through the extensive use of OOD/OOP techniques in building the OS kernel, for example, yields a better OS.

    Which is not something to be taken lightly if your OS is charged with the responsibility of keeping systems software on the space shuttle, for example, working with the fewest number of defects, and human lives riding on what the OS may or may not do next.

    On the other side, like I said, you have me and others who believe that an OS should be very small and very fast, that OOD/OOP shouldn't be used, and that the reliability sacrificed is acceptable.

    So, that is just one aspect of when and where and why OOD/OOP should and should not be used. But as you can see, it is far from cut and dried, and is primarily based on IMPLEMENTATION and engineering REQUIREMENTS, not on methodology.

    Which is how the real world works.

    For the most part, what drives 90% of the disagreements is the fact that many people see OOD/OOP as a generalized approach to solving ALL problems, and not as a specialized addition to Structured Design techniques, suitable for SOME problems, not ALL problems.

    I personally, obviously, feel that OOD/OOP is NOT a generalized programming methodology for ALL cases.

    However, some of my friends feel very differently, and we have a good discussion on the topic wherever we go when we start discussing OOD/OOP.

    Things can get pretty heated, and most patrons at the local 3am diner wonder what all the screaming is about, particularly the buzz words. :-)

    Hack

    • There are a lot of brochure-like claims here, such as "OOP is a higher level of abstraction". Do you have some specific code to demonstrate this?

      I find procedural/relational techniques to be more "abstract" because much of GOF-like patterns can be reduced to mere relational formulas, as described previously. A formula is more abstract than a pattern, for the most part.

      And procedural ADTs only really differ from OOP ADTs when "subtyping" is used. However, subtyping is too gross (large-chunk) a granularity of difference IMO. Even Stepanov, the STL guy, realizes this. (And he has enough respect to not get called a "troll", unlike me.)
      • OOD/OOP is fragile.

        It can get ugly quite quickly because of so much middleware in between, which I pointed out.

        Calling constructors, for example. Constructors and accessor methods don't exist in Structured Design. Only procedural or functional abstractions which directly initialize your ADT for use.

        I could provide two types of examples, but the Slashdot interface for dumping all that in would be painful for me to organize and type, so I won't support the above argument with a direct example.

        Hack
          (* Calling constructors for example. Constructors and accessor methods don't exist in Structured Design. *)

          Generally I use a data structure interface[1], usually a database table, for such. IOW "new" is a new record instead of a language-specific RAM thingy. The "instances" are simply another record or node in the collection/database.

          [1] Note I said "interface", since direct access limits implementation swappability.

          OO philosophically believes that it is good to hook behavior to the entities of the structures. In practice, I don't find this very useful for business modeling. Sometimes there is a permanent fairly tight relationship, but *most* of the time the nouns that participate are multiple and variant WRT importance. There is no "King Noun" that is appropriate and invariant.

          "Every operation should belong to one and only one noun" is an *arbitrary* restriction in my mind. If you can explain what is so magical about that OO rule is biz apps, I would be greatful, because I don't see the appeal. I just don't.

          (* Only procedural or functional abstractions which directly initialize your ADT for use. *)

          I am not sure what you mean here. Note that ADTs are generally stuck with the "one noun" rule described above. This limits them for biz apps IMO.

          (* I could provide two types of examples, but the Slashdot interface for dumping all that in would be painful for me to organize and type, so I won't support the above argument with a direct example. *)

          Slashdot is shitty for nitty gritty software development discussions. You have hit-and-run superficial moderators, and it does not like programming code characters.

          Perhaps go to deja.com and post some sample code on comp.object. BTW, rough pseudocode is fine as long as you are willing to answer questions about it.

          Thanks for your feedback
  • by Tablizer (95088) on Saturday August 24, 2002 @04:43PM (#4134056) Homepage Journal

    I get different answers when I ask OOP fans what specifically the (alleged) benefits of OOP are. Most can be divided into one of these two:

    1. Easier to "grok". Enables one to get their mind around complex projects and models.

    2. Makes software more "change-friendly" - fewer code points have to be searched and/or changed when a new change request comes along.

    I did not include "reuse" because it seems to be falling out of favor as a selling point of OOP.

    If I can narrow it down, then perhaps I can figure out why OO fans seem to like what seems to be a messy, convoluted, bloated, and fragile paradigm to me.

    I would appreciate your vote. Thanks.

    P.S. Please state your primary domain (business, systems software, factory automation, embedded, etc.) if possible.
