What's wrong with HelloWorld.Java
prostoalex writes: "Daniel H. Steinberg posted an article on O'Reilly's OnJava.com discussing the difficulties and current problems with introductory Java in the classroom. The textbooks used in colleges are mostly rewrites of C/C++ textbooks and thus start with the HelloWorld program without really dwelling on the object-oriented nature of Java and why it is important. In a nutshell, OOP even nowadays is treated as a somewhat innovative concept in the classroom, mainly because of educators who were taught C. Hence the links and description of the Rethinking CS101 Project."
Java != OOP, C++ != OOP (Score:5, Insightful)
One has to be able to perceive problems in terms of objects. This may at a glance seem easy - our world is composed of objects - but when you start getting into more abstract concepts, e.g. trying to write OSes in a fully OO manner (akin to what BeOS was), or other more complex applications like the entire JFC (for instance), then OOA&D does not seem so easy!
Designing, or better yet, THINKING in OO terms is not something that happens overnight. This is also precisely why 90% of large, pure OO projects either fail or degenerate into something that needs revamping every so often, only because the programmers who built the application did not take the time to properly analyze the problem and come up with the most natural solution possible. A natural solution is possible, but only at the hands of professionals who understand what OO is all about (and it is least about WHAT LANGUAGE you use), who have experience in 'seeing' the world, or higher concepts, through OO eyes, who are able to delimit, with crisp boundaries, every concept/object available to them or stated in the specifications by the customer, and who, MOST importantly, establish the PROPER relationships between those objects!
Design patterns and such go a LONG way toward achieving this objective, but one cannot fathom using or applying design patterns if he doesn't understand what OO design and analysis mean, and without a shitload of experience to put toward this goal. True OO thinking is almost a litmus test of how good a programmer - or better said, an ANALYST, an ANALYTICAL person - you are, of how good your ANALYTICAL skills are... In OO, 80% of the time or thereabouts is spent on analysis and design, and 20% on the mechanics of writing the code. Then, and only then, will you be able to pull OO projects successfully through completion.
And no, I'm not talking about your school/academic projects, I'm talking about large scale projects with possibly millions of lines of code where understanding the ESSENCE of the OO paradigm will either make or break a project and make it usable and extendable for a long time or make it a piece of crap that will never see the light of day...
Most people shy away from OO or misunderstand it because they've never even read a book about it, such as the OO 'bible' by Rumbaugh/Premerlani, "OO modeling and design using OMT", or some of Fowler's books on analysis and patterns, or Gamma's book on design patterns...
Someone once said - pimpin' ain't E-Z! Well, neither is OO!
Re:Java != OOP, C++ != OOP (Score:2, Insightful)
Re:Java != OOP, C++ != OOP (Score:2)
Well, in my experience the average is deemed to be excellence; if you are in any way different, off the wall, or excel but not in a way that fits the average, it is not usually considered 'constructive' in the education system.
Just a bit on OOP so as not to be off topic:
You can write C in an object-oriented way even though there is no real language support for objects in C.
The old jpeg library was written like this, and I believe GTK is written this way (why they don't use C++ I'll never know).
One way to teach OOP is to get some spaghetti code and get the 'class' to refactor it (aided and abetted by the teacher!) - not only does this teach OOP, but it teaches the reasons why OOP is good.
If I wanted a spelling critique I would have posted this comment on
Re:Java != OOP, C++ != OOP (Score:5, Interesting)
A fitting excerpt from my anti-OO webpage:
OOP technology has generated more confusion than almost any other computer technology. For example, many OOP "experts" claim that most companies are either not using OOP properly, or are not taking advantage of OOP. Experts also say that there is a long learning curve before many people grasp the power of OOP; that its benefits can't really be taught, at least not understood, from a book. It has almost become like Particle Physics, in which only a small elite group appears to understand it properly, and everybody else needs years of meditation and practice.....
Ironically, OOP is sometimes billed as "better fitting the way people think". Years of meditation and study to learn how to "think naturally"? I am thinking of setting up a $60-per-hour consultancy to teach sports fans to drink beer and belch in order to "optimize their recreational satisfaction".
(oop.ismad.com)
Smalltalk (Score:3, Interesting)
The answer is to use something like Smalltalk, where everything is OO. In early testing, the Smalltalk developers found that it was *easier* to teach Smalltalk to beginners than procedural languages, because people are already familiar with doing things to objects in the real world. Whereas it takes a certain way of thinking to come up with step-by-step manipulations of abstract data structures.
Re:Smalltalk (Score:2)
I have heard this claim from Smalltalk fans before, but the "experiment" has never been repeated in a proper research setting. Thus, it is an ancient legend that just keeps getting propagated over and over.
I would note that people think *differently* than each other. Just because thinking X way is natural for person A does *not* necessarily mean X is natural for person B.
Don't paint with too wide a brush.
If OO and Smalltalk model *your* head well, that is fine. Just don't extrapolate that all over the planet without real research first.
Personally, I think it is more important to focus on making software change-friendly rather than making it easy to learn to program. Although, both are important factors.
Re:Smalltalk (Score:2)
It's not like you're going to put 10 people in one room and teach them Java/C++/whatnot and 10 in a different room and teach them Smalltalk and then see which group are best able to solve some random experiment. There's just not much of a point.
And while people think differently, I think that's beside the point. The issue was to compare languages which teach OO. Whether a person is more "apt" at procedural or functional programming is beside the point. The hypothesis that "if you want to teach something, try to teach it with as few distractions as possible" would seem to be valid. And that would be the point of teaching Smalltalk.
(Note: I haven't learned Smalltalk and have only studied it a little "for historical reasons".)
Re:Java != OOP, C++ != OOP (Score:4, Insightful)
Re:Java != OOP, C++ != OOP (Score:2)
Let's cut to the chase.
Where is this grand evidence that OOP is objectively better?
The evidence on my webpage is as strong as ANYTHING used to justify OOP.
Where is your evidence, Mr. GlassHouse?
Ignore the fact that you don't like me and think I am a troll. Just produce the evidence for the world to see. Good evidence is orthogonal to any alleged troll.
(I was often told that good evidence existed in Meyer's famous work. So, I purchased a copy. However, I found tons of reasoning holes in it. A review of it can be found on my website.)
Re:Java != OOP, C++ != OOP (Score:3, Interesting)
As I've pointed out before, it's in the collective experience of legions of software developers. If they didn't -- at least subjectively -- feel that the OO approach suited them better, they wouldn't use/advocate it. And what feels better to you is often the best approach for you to take. There are an awful lot of people putting their money where their mouth is on this one, and they're still doing it decades later. It's hard to believe that they're all wrong on everything after all this time.
You may not personally have seen any benefits from OO, and you may personally have seen benefits from a relational approach. As I've also pointed out before, and by your own admission, your experience comes from a very narrow field of programming, to which one approach seems much better suited. It's not surprising that you find that approach superior. OTOH, you are yourself falling for the "too wide a brush" problem of which you accuse others elsewhere in this thread. Those of us who work in diverse areas of programming have often found OO to be at least as natural as, or more natural than, a purely procedural approach. We also acknowledge that it has its flaws -- and there are plenty -- but many of these can be avoided if you use a tool that doesn't insist on a purely OO approach (and frequently one that ignores half of OO as well, such as certain popular mainstream programming languages today).
Voting the World Flat (Score:3, Insightful)
1. Collective experience used to be that the world is flat.
2. It could be subjective (the "mindfit" argument). That is fine, but 99% of the stuff on the shelves implies that OOP is objectively better. I don't see disclaimers that the benefit list may be subjective.
3. The "popularity metric" is that Windows is "better". Do you really want to back that?
4. I have never seen a good survey that said most developers prefer OOP.
and by your own admission, your experience comes from a very narrow field of programming, to which one approach seems much better suited. It's not surprising that you find that approach superior.
Narrow, but large, I might point out. Not a single OO book ever limited its bragging to specific domains, instead strongly implying a "step up in evolution" over procedural.
Those of us who work in diverse areas of programming have often found OO to be at least as natural as, or more natural than, a purely procedural approach.
Unless you can define/measure "natural", that appears to be a rather subjective thing.
Plus, some OO fans here have said that OOP is *not* "natural" nor should that necessarily be the goal of it.
I believe in the scientific process where you have to openly justify things based on open evidence, and not personal opinion and "feelings". Your "evidence" fails miserably here.
BTW, who gave that ahole a "4"? It contains almost nothing but personal digs. Damned moderators!
Re:Java != OOP, C++ != OOP (Score:2)
I doubt this. OO is as mainstream as structured programming became when it was the rage. It is in fact so entrenched now that it's a bit of a *problem* to do even minor tweaks to enhance OOP itself, such as method(object,arg,arg) instead of object.method(arg,arg) in order to better support multiple dispatch, since it looks too much like non-OO code, regardless of the scoping of the actual implementations of method.
In fact, it's hard to convince people to write things in functional styles when OO is always the rage. Probably because functional's disdain for variables has made it hard to know where to put state, but there's a good amount of herd mentality too.
It needs a better fence or else more such "trolls" are eventually going to kill its acceptance with superficial, but catchy attacks.
Actually, I take the opposite view: it's kooks like Tablizer that make honest criticisms of OOP look bad.
Re:Java != OOP, C++ != OOP (Score:2)
I realize that some of you personally don't like me, but I am not getting specifics here about technical issues.
Vaguery is for PHB's. Be a *real* geek and give specifics, people. Example template: "Tablizer says on page X that 2 + 2 is 5. Clearly that is wrong based on the following reasoning....."
Instead you just call me names. What is up with that?
It is in fact so entrenched now that it's a bit of a *problem* to do even minor tweaks to enhance OOP itself, such as method(object,arg,arg) instead of object.method(arg,arg) in order to better support multiple dispatch, since it looks too much like non-OO code
OOP makes the faulty assumption that each verb has only one primary noun. English is actually the opposite in that there is only one primary verb (or verb clause) but up to many nouns. I find the "only one primary noun" aspect of OOP "unnatural". In the real world, multiple "players" (nouns) participate in something and their "ranking" may change over time. It is hard to model that in OOP without funky gyrations.
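To illustrate (a rough Java sketch, all names made up): a funds transfer involves two accounts plus an amount, yet OOP makes me crown one of them as "the" receiver:

// OOP style: one noun gets promoted to "the" receiver and the rest get demoted to arguments
class Account {
    long cents;
    void transferTo(Account other, long amount) {  // why is the source account the "primary" noun?
        this.cents -= amount;
        other.cents += amount;
    }
}

// Plain procedure: both accounts are equal-ranking parameters. Picking behavior based on the
// runtime types of *both* accounts (multiple dispatch) takes double-dispatch gymnastics or
// instanceof chains in the OOP version above.
class Transfers {
    static void transfer(Account from, Account to, long amount) {
        from.cents -= amount;
        to.cents += amount;
    }
}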
Re:Java != OOP, C++ != OOP (Score:2)
Re:Java != OOP, C++ != OOP (Score:2)
Any OOP "multiple dispatch" I have seen was a fricken tangled mess. Also, some OO languages that do support it built-in are not considered "OOP" by some.
Anyhow, if you have a specific example you would like to explore, please present it. Preferably a custom biz example instead of say device drivers. I will admit that device driver examples make a fairly strong case for OOP [1], but I don't write device drivers for a living, and neither do most programmers.
[1] An OO fan who actually did write device drivers once said that such examples are often too simplistic. However, without both debate parties being experts at DD's, it is hard to evaluate and scrutinize their real effectiveness. The examples I propose on my webpage tend to be things that any programmer can relate to (student enrollment and class scheduling, for example).
Re:Java != OOP, C++ != OOP (Score:2)
In that case *anything* can be "natural" if you simply do it first and early.
That is not the issue. The issue is what to force students into and why.
(* God I hate idiots. *)
Me too. They often skip a step or two in science: "Evidence? We don't need no stinkin' evidence because we voted ourselves 'experts'."
Calling for Balance (Score:2)
I agree that much of it is subjective.
However, the "trend" seems to be to rank OOP as more sophisticated or "better" than other methodologies.
And, all the research and tools flow in the direction of OOP. How many NON-OO pattern books/articles do you know about? Why so few? How many non-OO software engineering books do you know about? (Besides those written in the 70's before relational databases were readily available.)
OOP is stealing far more spotlight than its evidence (zilch) justifies.
Until one is proven objectively better, teach *all* paradigms equally: Procedural, Relational, Functional, OOP, etc........
Balance
Re:Calling for Balance (Score:2)
The right tool for the job!
For most jobs OOP is suitable. Naturally you'll have some procedural stuff thrown in as well, but that goes with the territory.
First off, you have to start somewhere. It seems like OOP would be a good place to start, since you'll get a feel for procedural languages as well, and it's usable. If you want to learn other types of languages later, that's possible at most larger schools. But you can't introduce all types at first. And some, like functional programming, are pretty hard to wrap your brain around - quite a lot harder than OOP, I'd say.
Well, an idea would be to consider why books are printed and articles written. Generally to get sold, or in the case of articles, to sell a magazine. You can't walk into a normal bookstore and expect to find books on exotic computer topics. I bet it's hard to find books on Verilog and VHDL as well; that doesn't mean that the languages are insignificant, just that the books don't sell through those channels. Likewise with articles in magazines. Most computer magazines are aimed at gamers or at people who can't find the on switch. Don't expect to find cool stuff here.
If you want the fun stuff look at academic papers. They tend to do stuff "because they can" and often you'll find new ideas years before even advanced magazines. For functional programming eg look at AI research, they use Lisp all the time.
OOP might not be the best tool for a lot of stuff that it's used for. (Although I think it is in most cases.) But it is a good "common denominator" as it can be successfully applied to most problems.
And if you want to get all Zen about it you just need to realize that in the end it all comes down to pretty much the same thing. Mapping your thoughts to some bits. How you do it is irrelevant, but you have to do it in some way.
Re:Calling for Balance (Score:2)
I am sure everybody will put their personal favorite at the top of the list.
(* And some like functional programming is pretty hard to wrap your brain around, quite a lot harder than OOP I'd say. *)
Some people found it a "natural fit". It seems to greatly depend on the person. What fscks-up person A may be a joy to person B.
(* Well an idea would be to consider why books are printed and articles written. Generally to get sold, or in the case of articles to sell a magazine. *)
Are you agreeing that OOP is a "fad"? Or at least "in style" right now, and that is why it gets most of the SE attention?
(* But it is a good "common denominator" as it can be successfully applied to most problems. *)
Turing equivalency pretty much guarantees that with all the common paradigms (except relational by itself, perhaps).
Thus, I question that claim. My fav paradigms can be used for just about anything also. The only exception I know of may be situations where microsecond timing is critical and can't be "farmed off" to custom drivers or hardware via API calls. Anything with auto garbage collection or auto-buffering is probably gonna have such problems anyhow.
OOD101 or CS101? (Score:5, Insightful)
Just because Java is focused on objects doesn't mean you have to teach OOD right off the bat. You have to start with the basics. True, you're going to have kids ask "What does static mean?". You just tell them to ignore it for now. Why is that looked upon as a bad thing? The same thing happens when you teach C++. You tell your beginners to ignore stdio. Later, when it's time, you can teach about includes and classes.
This is why I didn't learn jack shit in college. Everything is focused on OOD. Object this and class that. I am not saying there's anything wrong with OOD, but colleges don't focus enough on the fundamentals. That's why there are so many people who overengineer everything and who can't even tell you the difference between a Mergesort and a QuickSort or even know what a Red-Black tree is!
Re:OOD101 or CS101? (Score:2, Insightful)
Somewhere along the line you should learn more about algorithm complexity, various programming paradigms (like functional programming), low-level languages like assembly, operating system and networking concepts, and advanced topics like databases, distributed programming, and real-time programming. But these are all extras. I still think that a programmer needs to learn what a loop is before he should be concerned about what an object is.
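For instance, even a first-week counting loop in Java drags the object/class machinery along for the ride (a throwaway sketch); the loop is the lesson, the rest is "ignore this for now":

public class CountToTen {
    public static void main(String[] args) {  // "just ignore public, static, and String[] for now"
        for (int i = 1; i <= 10; i++) {
            System.out.println(i);
        }
    }
}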
Re:OOD101 or CS101? (Score:2)
And don't forget relational databases. I think relational concepts are some of the greatest ideas of computer science. You can reduce complex GOF-like "patterns" into skinny little formulas, for example. GOF looks like the old-fashioned hard-wired "noun-structure in the code" way of "doing patterns" IMO. Relational transcends most of GOF.
I don't know why database vendors don't spend more effort to point this out. I suppose because in OO projects you often end up noun-modeling twice anyhow: once in code and once in the database. Thus, it has not taken away their sales. If dumb developers want to have roughly duplicate structures, why should they care?
(Note that the current vendor offerings of RDBMS are not the ideal, IMO, but good enough for now.)
oop.ismad.com
Reread GOF then. Re:OOD101 or CS101? (Score:2)
And don't forget relational databases. I think relational concepts are some of the greatest ideas of computer science. You can reduce complex GOF-like "patterns" into skinny little formulas, for example. GOF looks like the old-fashioned hard-wired "noun-structure in the code" way of "doing patterns" IMO. Relational transcends most of GOF.
You are far off topic.
a) a relational data base is not a programming language
b) the relational paradigm has nothing in common with the oo paradigm or the procedural paradigm
c) in a relational data base you store DATA, not code (except for stored procedures)
d) GOF is about structure and behaviour; further, you can't express anything you can express with GOF design patterns in relational terms. You are plain wrong.
e) in another post you criticize the need to meditate in order to think right: and? Is it not necessary to meditate and think right to apply relational paradigms correctly? I assume you learned all the ways of joining in a day? You also learned all the ways of normalizing databases in a day?
The thread was about the question of how to teach a language. Furthermore, it was about the question of how to teach an OO language and how to teach Java.
It's definitely not about Tablizer's fight against OO paradigms.
In the world I live, procedural is dead
Regards,
angel'o'sphere
Re:Reread GOF then. Re:OOD101 or CS101? (Score:2)
It does not matter. If it replaces GOF, it replaces GOF, whether it's a gerbil or a language.
(* in a relational data base you store DATA, not code (except for stored procedures) *)
Yes it can and I have done it before. However, it is not necessary to compete with most of GOF.
(* GOF is about structure and behaviour, further: you can't express anything you can express with GOF design patterns in relatinal terms, you are plain wrong. *)
The relational part replaces *most* of it. It does *not* have to replace *all* to be an effective alternative.
(* in another post you criticize the need to meditate in order to think right *)
No! I pointed out a contradiction of claims. I don't dispute that relational takes training.
(* The thread was about the question how to teach a language. *)
Yes, but "why" and "when" is a prerequisite to "how".
(* you should definitely start to understand your enemy (oo) more in depth before ranting *)
Red herring insult. I personally think you don't understand how to effectively use relational technology.
(* In the world I live, procedural is dead *)
In practice it is very much alive, even in OOP languages (it is just more bloated in them).
Re:Reread GOF then. Re:OOD101 or CS101? (Score:2)
These technologies are currently only at the "lab" stage and are yet more convoluted patches on top of already convoluted OO to "fix" the sins of OO.
They are at least a realization that OOP cannot handle relativism very well. Even IBM more or less agrees that OO has relativism problems in its introduction to such technologies.
Are you gonna call IBM a "troll" also?
Alternatives and labs (Score:2)
Some of the major alternative paradigms are well out of the lab stage. Popular functional programming languages have been used for real world projects for years, and some of those languages have well researched and documented advantages over current mainstream approaches (much faster development, formal proofs of correctness, much more concise code to solve the same problems, etc). Why doesn't the programming world move to them en masse? The same reason so many ex-procedural types don't "get" OO: momentum. It's really as simple as that.
BTW, what do you mean by "OOP cannot handle relativism very well"?
Re:Alternatives and labs (Score:2)
I never suggested otherwise. I never said all alternatives are in the lab. I am not sure how you interpreted what I said. Aspect Oriented Programming is still in the "research stage". How that allegedly relates to functional programming, I don't know what you mean.
(* Why doesn't the programming world move to them en masse? The same reason so many ex-procedural types don't "get" OO: momentum. It's really as simple as that. *)
I am not sure what you mean here. Could you please elaborate?
BTW, IMO many OO fans don't "get" databases. They often see them as a mere persistence tool.
(* BTW, what do you mean by "OOP cannot handle relativism very well"? *)
Well, in a nutshell, OOP is optimized for IS-A relationships. IOW, a *single* primary "view" of something.
Now, it can indeed handle HAS-A kind of relationships via cross-referencing other classes and multiple inheritance. However, managing all those cross-references and sets is a pain in the ass using programming code.
What is the best way to manage several hundreds of cross-references?
A. Programming Code
B. Database
I vote for B. Fancy IDE's can help with A, but often they are simply reinventing a database without knowing/admitting it. Plus, they are usually language-specific and additional cost.
Further, if similar classes become so numerous that you want to turn them into data instead of code (sometimes called "parameterization"), you have a lot of rework to do. If you start out with all that stuff as data (a DB), then you don't have the translation step to worry about.
Given a choice, I would put GUI models in databases instead of code, for example. One advantage of this is that just about any language can read and modify the GUI items instead of the one that the GUI attributes were defined/written in.
Plus, one could browse around in all that info to get all kinds of views and searches that are tough if you put boat-loads of attributes in code.
The ad-hoc influence of relational queries applies also to getting new views that one did not anticipate up-front. You don't have to build new physical models to get new unanticipated views, you just apply a relational equation and the database builds the new view for you.
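For instance (a rough sketch; the table names, columns, and connection string are all made up), an unanticipated view is just another query rather than another class hierarchy:

import java.sql.*;

public class AdHocView {
    public static void main(String[] args) throws Exception {
        // hypothetical schema: widgets(id, color, weight) and orders(widget_id, qty)
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");  // legacy JDBC-ODBC bridge; any JDBC driver will do
        Connection con = DriverManager.getConnection("jdbc:odbc:inventory");
        Statement st = con.createStatement();
        // a "view" nobody anticipated up front: heavy widgets ranked by units on order
        ResultSet rs = st.executeQuery(
            "SELECT w.color, SUM(o.qty) AS units " +
            "FROM widgets w JOIN orders o ON o.widget_id = w.id " +
            "WHERE w.weight > 100 " +
            "GROUP BY w.color ORDER BY units DESC");
        while (rs.next()) {
            System.out.println(rs.getString("color") + ": " + rs.getInt("units"));
        }
        con.close();
    }
}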
Re:Alternatives and labs (Score:2)
My point is simply that whenever you have a large body of skilled people, and they are choosing the tools or techniques to work their craft, there will be an inherent bias towards those they already know.
In the programming world, many people learned procedural first, whether it was C, FORTRAN, assembler, or whatever. Consider that it took the mainstream decades just to move beyond random-seeming gotos to structured programming (and more systematic gotos such as exceptions and labelled loops). Progress in the industry at large is years (decades?) behind what's known to be possible from an academic perspective.
Many OO programmers today started out in procedural code, and unfortunately one of the biggest paths into OO was from C (purely procedural) to C++ (very much not purely OO). Now, if you take advantage of C++'s multi-paradigm support, you can do some very clever things, but unfortunately, those who make the jump without reading around their subject tend not to "get" OO first. As a result, you don't get a cohesive design with influences from both procedural and OO complementing each other, you get a C-like procedural design with some OO tools forced on top and looking out of place.
It's notable, BTW, that programmers with backgrounds in established purely-OO languages such as Eiffel or Smalltalk tend to "get it" much more than those who program C++, or bastardised offshoots like Java. (I don't have a problem with Java; this comment refers only to the model underlying the language and the way it is presented). It's just unfortunate that such people represent only a small proportion of those using "OO" as a whole, and that such a pure OO approach does have some significant flaws that could be overcome using a mixed paradigm approach.
I'm sure you see the same thing in your efforts to demonstrate the advantages of a relational approach; you just wrote that you thought many OO fans don't "get" databases. It's the same with functional programming as well. The few who have so far made the effort to learn something genuinely different have seen some advantages, and many have liked the alternative approach, but the vast majority don't make the effort to learn. Most programmers are not of that calibre by default, and most teachers, managers, and other guiding influences aren't sufficiently experienced with different approaches themselves to provide an informed and complete picture.
Re:Alternatives and labs (Score:2)
Well, "getting it" is hard to define/measure, other than maybe "using more OO", ignoring the quality for now. Anyhow, those who gravitated towared and/or stuck with Smalltalk and Eiffel probably have an affinity for OOP. Thus, there is a filtering mechanism perhaps.
IOW, it is a check-or-egg (cause or effect) type of question. Did the background make the programmer, or did certain kinds of programmers gravitate toward certain stuff?
(* and that such a pure OO approach does have some significant flaws that could be overcome using a mixed paradigm approach.*)
As I mentioned elsewhere, the problem is that there seems to be no consensus as when to use what. There are tons of "how" material for OO, but very very little "why".
Besides, some people, including me, have the opinion that mixing increases the complexity without adding much benefit unless one paradigm is really crippled in one area. I tend to combine procedural and relational because they complement each other IMO. But relational and OO tend to fight over territory and duplicate stuff between each other.
It may be better to focus on becoming the *best* at a given methodology/paradigm rather than mediocre at *multiple* methodologies/paradigms. Very few programmers have the gift to master them all.
(* Most programmers are not of that calibre by default, and most teachers, managers, and other guiding influences aren't sufficiently experienced with different approaches themselves to provide an informed and complete picture. *)
Nobody in the entire fricken world seems to have an "informed and complete picture". If they do, they have not documented it. Instead, the books all use the same (misleading) cliches and cliche examples over and over. I guess that is "reuse" for ya
Re:Alternatives and labs (Score:2)
No, it's really not. I could talk to a programmer for five minutes and tell you if he understands OO or not. So could anyone else who's been using OO for any significant length of time. It's really not hard.
The same is true of most science, of course. Or perhaps all mathematicians should be able to give proofs by following some sort of cookbook? Why did Fermat's Last take so long to prove? Because giving concrete, solid proofs is hard. So is knowing when to use which programming technique for best effect, and in each case, there are always many possible options that would give acceptable results. Programming is a skilled task, and knowing what to use when and how to approach a problem comes only with skill and experience. There is no quick fix cookbook, yet you seem to require one before you will give any credit to a method.
Very few programmers have the gift to master any of them. That's why the best programmers can produce ten times as much code or more over a given period of time than a mediocre code monkey who just got hired and is learning on the job, and do much more than ten times as much useful work with it.
However, there is much merit to the argument that the best is the enemy of the good. If your only tool is a hammer, everything looks like a nail. If your only tool is procedural/relational programming, or OO, or functional programming, or whatever, then all programming problems must be solved in that framework. Clearly some paradigms are vastly more efficient for solving certain types of problems than others. It's similar to the argument for optimisation: it's much more effective to choose a sound algorithm in the first place than to pick a worse-performing algorithm and try to make up for it with low-level optimisations.
On the contrary; there are a few people around who do have a broad understanding of the field. I don't know of any books ever written by any of them; they are generally pretty busy running development teams, or in a more senior role. Sadly, the vast majority of academic tuition and low level management decisions are provided by people who are not in this group.
Re:Alternatives and labs (Score:2)
"Prove" is probably too strong a word. "Evidence" is better for engineering stuff. Comparing bridge designs, rocket brands, or basketball stats is probably a better analogy than math.
At this point I would like to see *any* evidence applicable to the biz domain. Your best evidence so far.
(* There is no quick fix cookbook, yet you seem to require one before you will give any credit to a method. *)
Why should OOP be *exempt* from providing evidence of betterment beyond "I am an expert and I say so".
(* Clearly some paradigms are vastly more efficient for solving certain types of problems than others. *)
Well, I have several times asked for areas or demos of where OO shines compared to p/r and where it doesn't, but usually get INconsistent answers from OO practitioners.
(* It's similar to the argument for optimisation *)
No, optimization is relatively easy to measure.
(* On the contrary; there are a few people around who do have a broad understanding of the field. *)
Often this ends up being "people who think like me". Most die-hard OOP fans think that a very narrow group of people properly "get" OOP, but everyone's group is different.
I think people mistake subjectivity for objectivity too often in this field. Without decent metrics, this is what happens.
Many programmers *insist* that semicolons are "clearly superior" for example, yet I don't like them despite lots of PASCAL use, and never will. They are militant about anyone who says semicolons don't work for them. They think that because semi's work fine in their head and fingers, that everybody else is or should be the same way.
Re:Alternatives and labs (Score:2)
I disagree here. Computer science and software development work far closer to the way maths works than to most engineering disciplines.
No, what you would like to see is some magic metric that supports a claim that neither I, nor any other OO supporter I have seen, has ever made. Further, you would like to see it in a published and authoritative reference, and will ignore the legion of anecdotal evidence that is the basis for many of our decisions, because you personally haven't experienced it, and so don't buy it. That's your choice, of course, but the vast amounts of personal and anecdotal evidence that OO can make designs easier than a purely procedural approach is what convinces all those developers around the world (many of whom have moved to OO from a strong procedural background and therefore have plenty of personal experience on which to base their judgement).
Please show me one single claim by any OO fan that OO shines compared to procedural with relational approaches. Relational is a much higher-level paradigm than purely procedural programming, and has not featured in any comparison I have ever seen made, except for ones you've asked for.
Guess you haven't developed high-performance maths software that has to work on 15 different platforms lately, huh? Optimisation can be measured after you've done it, but predicting in advance what the effects will be of your "optimisations", and indeed whether they will actually improve performance and not actually reduce it, is almost impossible. We use profilers and usually leave micro-optimisations until late for a reason. Unfortunately, when you're talking about the overall design of your system, you obviously don't have that luxury (unless you're from the slanted side of the XP world, when the design phase not only doesn't happen formally, you claim it doesn't happen at all).
Well, of course it's somewhat about how you think; OO is just a more formal way of expressing techniques and ideas that many good procedural programmers had been using for years beforehand. But contrary to what you often claim, my experience is that OO developers often have very much the same view of things: they may not put the same responsibilities and relationships in the same places every time -- there are many ways to design and implement a solution to the same problem -- but they all understand the concept of responsibility and the different relationships that classes can have. Your claim that all these OO developers constantly disagree on even basic points just doesn't ring true.
Re:Alternatives and labs (Score:2)
Well, I disagree. I see software like a bunch of little virtual machines. The fact that they happen to be represented with symbols instead of transistors or gears is relatively minor. Cross-references (composition) are a lot like wires. And ER diagrams resemble chips wired up to each other with labeled inputs and outputs.
(* No, what you would like to see is some magic metric that supports a claim that neither I, nor any other OO supporter I have seen, has ever made. *)
Are you saying that p/r is *equal* to OOP in terms of productivity and change-friendliness, etc or NOT?
All the bragging in your message implies that you think OOP is superior and/or some "higher level abstraction" stuff.
(* Further, you would like to see it in a published and authoritative reference, and will ignore the legion of anecdotal evidence that is the basis for many of our decisions *)
Anecdotal evidence SUCKS! Anecdotal evidence says that Elvis performs anal probes on farmers in green saucers at 2:00am.
Plus, it points both ways. The failure rate of actual OOP projects appears to be at least as high as non-OO ones in Ed Yourdon's surveys.
(* That's your choice, of course, but the vast amounts of personal and anecdotal evidence that OO can make designs easier than a purely procedural approach is what convinces all those developers around the world.... *)
That is bullsh*t! It is "in style" so people just go with the flow and say what people want to hear. People are like that.
Again, Yourdon's surveys show *no* clearly higher ratings of OO projects by managers.
How is that for "anecdotal"?
(* all those developers around the world (many of whom have moved to OO from a strong procedural background *)
And I have seen some of their sh*tty procedural code. They obviously have had no training or paid very little analysis attention to making it more change-friendly in a good many cases.
I will agree that OOP has brought some attention to software-engineering issues that are lacking in p/r training materials, but this is a human/fad issue and NOT the fault of the paradigm. IOW, "training traditions". It is tradition to bundle OO training with software engineering issues, while it was not with procedural. But that is not the fault of the paradigm.
(* Relational is a much higher-level paradigm than purely procedural programming, *)
Exactly, that is why they make a good couple (P and R). They are Yin and Yang. If you use R with OOP, you get *two* Yangs more or less.
Relational is a *superior* noun modeling tool than OOP in my opinion. I can navigate, filter, change, search, manipulate, and viewify tables and databases much more smoothly than OOP code. That is my own personal experience and you cannot take that away from me.
Code sucks. It is tooooo static and hard to navigate IMO. The less of the model you put in code and more of it into the database, the better the result.
You cannot put the noun model in BOTH the database and code (classes). That is too much overlap and re-translation. It is unnecessary duplication and unnecessary translation back and forth. Cut out the middle man and pick one or the other.
(* Well, of course it's somewhat about how you think; OO is just a more formal way of expressing techniques and ideas that many good procedural programmers had been using for years beforehand. *)
Formal? How are you discerning "formal"?
(* Your claim that all these OO developers constantly disagree on even basic points just doesn't ring true. *)
All the OO fans will agree that "classes are good". Beyond that, the agreement is tossed out the window. I have witnessed many intra-OO fights on comp.object, and they agree on VERY LITTLE. Hell, they cannot even agree on a definition of OOP and "type". They go bonkers.
Thank you for your feedback.
Clear answers: why OO can be helpful (Score:3, Insightful)
Not at all. There are some clear areas where OO designs tend to do better than procedural. Most of them, in fairness, are due to the fact that an average programmer is not an expert and OO makes it easier for him to avoid problems that an expert would avoid using either style. However, the programming population is dominated by those in the middle of the ability bell curve, so I think this is a reasonable case.
The most obvious flaw with much procedural code that I have seen is that as projects grow, the design often becomes incoherent. In particular, special cases start appearing because it is easy to "just add another function" or "code the problem out with an if statement". These cases result in vast numbers of bugs, not because there is any inherent problem with using if per se, but because the approach does not scale. The number of possible results is exponential in the number of special cases, and sooner or later, special cases start conflicting and nobody notices. Even automated tests start suffering, because it becomes practically impossible to exercise all of the code paths and special cases systematically. (NB: I am not talking about cases where you have several genuinely different actions to take because your data is substantively different, which obviously happens frequently in programming whatever style you are using. I'm talking specifically about the little hacks that are put in rather than adjust a design to account for special cases cleanly throughout.) The greater range of design tools available to an OO programmer mitigates this effect somewhat, and the encapsulation and data hiding encourages the maintenance of a clean design without the proliferation of special cases.
I also question whether typical procedural designs do adapt to change as well as typical OO designs. Again, lacking the emphasis on encapsulation and data hiding that OO advocates, there is a tendency for responsibility to become spread out. Instead of particular data being owned and used by a clearly defined subsystem (whether that be a class or set of classes in OO, or a particular table or set of related tables in relational), in procedural code, it is easy and common for "convenience" structures to appear and be passed around all over the place. This, again, is a common source of bugs. Data spreads, but the controlling logic does not, resulting in accidental changes that violate invariants, mutually dependent items of data being used independently, clashes over resource usage, etc.
To me, the proliferation of special cases and the arbitrary spread of responsibility are the most common failings of moderate to large procedural designs. I'm sure a group of expert programmers would avoid many of them, but it is clear that typical, most-of-your-team coders do not. OO's approach helps to alleviate these problems with its emphasis on encapsulation and data hiding.
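To make that concrete, here is a trivial Java sketch (the names are invented purely for illustration): the invariant is stated once, where the data lives, instead of being re-checked, or forgotten, at every call site.

// The "convenience structure" way: every caller must remember the rule, and sooner or later one won't.
class OrderRecord {
    public int quantity;        // anything, anywhere, can set this to -5
    public int unitPriceCents;
}

// The encapsulated way: the rule lives with the data it protects.
class Order {
    private int quantity;
    private int unitPriceCents;

    Order(int quantity, int unitPriceCents) {
        setQuantity(quantity);
        this.unitPriceCents = unitPriceCents;
    }

    void setQuantity(int quantity) {
        if (quantity < 0) {
            throw new IllegalArgumentException("quantity must be non-negative");
        }
        this.quantity = quantity;
    }

    int totalCents() {
        return quantity * unitPriceCents;
    }
}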
Note also that none of this has anything to do with inheritance and polymorphism. IMHO, these are also valuable tools, but much of their value comes from the way they allow you to extend an OO design without breaking that focus on encapsulation, which is the source of the big benefits in most of the projects I'd consider "good OO". Of course, there are other advantages for things like type safety, but these, to me, are secondary.
It is true that you cannot do this in most OO languages, and I freely concede that on occasion this is annoying. However, in fairness, if your methods are reasonably well thought out and self-contained as they should be, this problem rarely arises in practice. When it does, it's usually because a new and significantly different set of requirements have been added to a previous design, and that is normally grounds to adjust that design, possibly by reorganising the methods available. Wanting to override 1/3 of a method is usually symptomatic of either an unfortunate original choice of responsibilities, or of such a change in requirements, and in either case the answer is usually the same.
Unfortunately, most large-scale projects don't get written simultaneously by two identical programming teams in the different styles just to provide an objective comparison. Such porting exercises are the closest you can realistically get to a fair comparison. I agree that the second project will most likely have an advantage due to hindsight, but it's also often hampered by the lack of expert domain knowledge that was available when the original was written. Even neglecting that drawback, I question whether that advantage could have made a three-fold difference to development speed (which was sustained even in new development) and account for a 90% reduction in bug rate (which was also sustained). If you truly believe that this could be the case, then I have no further information I can provide about this case to convince you of OO's benefits.
No, I'm quite sure it wasn't. However, it was real procedural code that was actually produced by a real development team, not a theoretical, artificially perfect implementation produced by a team of 100% effective experts. The OO version to which I'm comparing it was also real world code produced by a real world team. This is what actually happened, which IMHO is far more important for most purposes than what might have happened in an ideal world. Ideal worlds don't pay the rent. :-)
That's hardly fair. Refactoring a class design, even right down the hierarchy, is no more of an endeavour than a similarly scaled adjustment of a database schema to allow the new relations you want to describe. More modest changes to a design don't require this scale of change in either approach.
Well, there you go. Those are some actual, concrete, clearly defined situations where I feel that OO can convey significant advantages in measurable things like bug count and development rate, and an actual, concrete example where it worked in a real project. I can't do much better than that. :-)
GoF patterns (Score:2)
Of course those GoF patterns can make life hell for the maintenance developer or app framework user, when people turn it into a contest to see how many design patterns they can fit into a single project. The overall "Design Patterns" philosophy is really "how can I defer as many decisions as possible from compile time to run time?" This makes the code very flexible, but the flexibility is wasted when a consultant writes code using lots of patterns to puff up his ego and then leaves without leaving adequate comments or documentation. Without insight into how the system works, the configurability and flexibility that these patterns offer is lost. The system hardens into an opaque black box.
Deferring decisions to runtime makes code hard to read. Inheritance trees can get fairly deep, work is delegated off in clever but unintuitive ways to weird generic objects, and finding the code you're looking for is impossible, because when you're looking for the place where stuff actually happens, you eventually come across a polymorphic wonder like
object.work();
and the trail ends there. Simply reading the code doesn't tell you what it does; the subtype of object isn't determined until runtime. You basically need a debugger.
You can take a really simple program and screw it up with aggressive elegance like this. Here is Hello World in Java:
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, world!");
}
}
But this isn't elegant enough. What if we want to print some other string? Or what if we want to do something else with the string, like draw "Hello World" on a canvas in Times Roman? We'd have to recompile. By fanatically applying patterns, we can defer to runtime all the decisions that we don't want to make at compile time, and impress later consultants with all the patterns we managed to cram into our code:
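// Strategy: every way of "sending" the message hides behind this interface.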
public interface MessageStrategy {
public void sendMessage();
}
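// Abstract Factory: produces MessageStrategy objects without naming a concrete factory class.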
public abstract class AbstractStrategyFactory {
public abstract MessageStrategy createStrategy(MessageBody mb);
}
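// The payload holder; note that it delegates the actual sending to whatever Strategy it is handed.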
public class MessageBody {
Object payload;
public Object getPayload() {
return payload;
}
public void configure(Object obj) {
payload = obj;
}
public void send(MessageStrategy ms) {
ms.sendMessage();
}
}
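// Singleton (getInstance) plus Factory Method (createStrategy builds an anonymous Strategy).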
public class DefaultFactory extends AbstractStrategyFactory {
private DefaultFactory() {;}
static DefaultFactory instance;
public static AbstractStrategyFactory getInstance() {
if (instance==null) instance = new DefaultFactory();
return instance;
}
public MessageStrategy createStrategy(final MessageBody mb) {
return new MessageStrategy() {
MessageBody body = mb;
public void sendMessage() {
Object obj = body.getPayload();
System.out.println((String)obj);
}
};
}
}
public class HelloWorld {
public static void main(String[] args) {
MessageBody mb = new MessageBody();
mb.configure("Hello World!");
AbstractStrategyFactory asf = DefaultFactory.getInstance();
MessageStrategy strategy = asf.createStrategy(mb);
mb.send(strategy);
}
}
Look at the clean separation of data and logic. By overapplying patterns, I can build my reputation as a fiendishly clever coder, and force clients to hire me back since nobody else knows what all this elegant crap does. Of course, if the specifications were to change, the HelloWorld class itself would require recompilation. But not if we are even more clever and use XML to get our data and to encode the actual implementation of what is to be done with it. XML may not always be a good idea for every project, but everyone agrees that it's definitely cool and should be used wherever possible to create elegant configuration nightmares.
Re:GoF patterns (Score:2)
You can have all my Karma, I will never see a post more deserving of up-modding on THL.
Re:GoF patterns (Score:2)
I'm going to improve my Hello World and make it even better than yours. Right now mine only takes advantage of Singleton, Factory and Strategy. I'm going to add even more patterns to it: Composite, Proxy, Bridge, Prototype, Adapter, Decorator, and Builder. Maybe Flyweight, if I can figure out a use for it.
It will be the most flexible, configurable Hello World anyone ever wrote. Nobody will ever need to write another one. That would be "reinventing the wheel".
Re:OOD101 or CS101? (Score:2)
Tensor algebra & mathematics is actually a very nice analogy to OOP&programming.
One professor I had said when he introduced us to tensors that you don't understand them, you get used to them.
And that is exactly what my experience is with OOP. At first it "feels" strange and new, and you have a problem wrapping your mind around it. But the more you try, the more natural it feels.
Another good example is languages. You can learn the rules and vocabulary of a foreign language as long as you want, if you don't speak and write ("get used to it") and with it learn to "think" the language, you'll never really be able to use it as a tool.
Re:OOD101 or CS101? (Score:2)
And I certainly hope this was just a thoughtless mistake:
"If you were highly charitable, you might give HelloWorld OO points because the println() method of the out class in the System package is being invoked."
System is a class
out is a field (a PrintStream instance)
Re:MOD PARENT UP (Score:2)
I have learned to stop declaring "how people think". There is too much variation from person to person. Thus, basing any assumption about IT on "how people think" is a can of worms. I can tell you how I think, but not much about others.
I'm not sure I ever want to go back to C. But having learned C first really gives me an appreciation for C++ and for the wide variety of ways to get things done in each programming paradigm.
I am bothered by comparisons between C and C++ as a microcosm for paradigm comparisons. I don't like OOP, but I also don't like C.
C and Pascal are *not* the pinnacle of procedural. It would be like using a Model T to compare cars to trains.
(Some people swear by C, but it is just not for me. Nothing personal to C fans.)
Re:MOD PARENT UP (Score:2)
Modelling a problem is often easier in OO than procedural. The actual flow of information crosscuts the object design, which has spawned ideas such as aspect oriented programming.
So yes, things happen procedurally. ("You open the door to the car.") But the design "You" and "car" are object oriented.
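Something like this toy Java sketch (classes invented purely for illustration):

class Door {
    private boolean open;
    void open() { open = true; }
}

class Car {
    private Door driverDoor = new Door();
    Door getDriverDoor() { return driverDoor; }
}

class You {
    void openDoorOf(Car car) {
        car.getDriverDoor().open();  // the flow of the action reads procedurally...
    }
}
// ...but the design -- which object owns which door, who is allowed to open it -- is object oriented.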
Re:MOD PARENT UP (Score:2)
Re: Scaling Large (Score:2)
OO fans seem to have divergent opinions on this. It seems split about 50/50 when I ask comp.object. IOW, half say it only really shines on large projects, and the other half say it shines on medium stuff also.
But, I find that procedural/relational scales better because each task is relatively independent from each other. You don't really have to care about the other 2000 tasks. You simply try to use the relational tables to communicate among tasks instead of huge parameter lists (a common design mistake in bad p/r).
True, the schema (ER) becomes more complicated as the project scales up, but a relational schema is 100 times simpler to grok and manage to me than a tangled, interwoven mess of 2000 OOP classes. There is too much "protocol coupling" in many OOP designs, such that to grok A you have to grok B, and to grok class/protocol B you have to grok C, etc. Making communication "data centric", you can stop these "grok chains".
Introduce JUnit as a means of grading (Score:4, Interesting)
JUnit could be used to create a test harness that "plugs" into the code the students write. The professor or TA could define an interface that the students have to implement.
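Something along these lines (a minimal sketch; the interface, class, and test names are all invented, JUnit 3.x style):

// Stack.java -- the interface the professor or TA hands out
public interface Stack {
    void push(int value);
    int pop();
    boolean isEmpty();
}

// StudentStack.java -- one possible student submission
public class StudentStack implements Stack {
    private int[] data = new int[100];
    private int size;
    public void push(int value) { data[size++] = value; }
    public int pop() { return data[--size]; }
    public boolean isEmpty() { return size == 0; }
}

// StackTest.java -- the harness every student submission must pass
import junit.framework.TestCase;

public class StackTest extends TestCase {
    private Stack stack;

    protected void setUp() {
        stack = new StudentStack();  // swap in whichever class the student hands in
    }

    public void testNewStackIsEmpty() {
        assertTrue(stack.isEmpty());
    }

    public void testPushThenPopReturnsSameValue() {
        stack.push(42);
        assertEquals(42, stack.pop());
    }
}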
I think beginning computer science for majors is backwards, anyway. Intro to engineering classes at CMU for freshmen were all taught as practical, hands-on, applied courses that focused on real problems. My civil engineering class built bridges, visited dams, and visited construction sites. My chemical engineering class analyzed actual plant failures (things that go boom!) to determine what mistakes the engineers made. My intro to cs class was all theory, with one interesting project where we added AI into a 2D simulation. There wasn't a lot of practical information to take away from the class at the end of the year beyond a "Learning C++" book.
Re:Introduce JUnit as a means of grading (Score:2, Interesting)
It is a science which studies the possibilities of computing--not a field of engineering. (Though strangely at Marquette, almost all the computer engineering classes are taken from the comp sci dept.)
The idea of comp sci 101 is to give you the building blocks on which to build theory. This usually involves basic computer architecture and programming in whatever language is currently seen as standard or best (or paid to be taught).
Re:Introduce JUnit as a means of grading (Score:2)
I agree with you. I think undergraduate degrees in software engineering should be more readily available (and accredited by ABET [abet.org]). Sort of like the difference between chemistry and chemical engineering. Degrees in IS/MIS are available, but those are really focused on becoming a systems analyst or a corporate IT programmer, and not very heavy on actual programming or design.
Re:Introduce JUnit as a means of grading (Score:2, Informative)
I'm currently at Uni and we've had several large projects with automated tests as part of the assessment (some using JUnit).
Last time I checked no-one writes completely bug-free code, and we had problems with bugs in the tests. I believe this will happen to some extent with any automated tests being used to mark an assignment.
Anyway, to use something like JUnit to define tests you also need to define all the classes and public methods for the students. This may work fine for CompSci 101, but at any higher level assignments need to have some design flexibility.
Orthanc
Squeak, Meow, Shriek (Score:2)
Last time I checked no-one writes completely bug-free code, and we had problems with bugs in the tests.
Same thing at Rose-Hulman, for Dr. Anderson's classes (UNIX system programming, and programming language concepts). The students discussed the assigned problems in the course's newsgroup, and often, students would find bugs in the public test suites. That's how it is in any decent-size software engineering endeavour: a cat and mouse game between coders and testers.
Hello World not OO? Hello MCFLY! (Score:5, Insightful)
10 PRINT "FOO"
It does little good to make a version of hello world that has some objects in it when in the end there will be a System.out.println call.
I think you're really arguing for a language that will let you write hello world like this:
"hello, world".print
Re:Hello World not OO? Hello MCFLY! (Score:2)
I think where Java gets it wrong, and why "System.out.println()" looks so silly to you, is that Java students are taught that everything is an object. But not everything is an object, especially when you're printing.
Re:Hello World not OO? Hello MCFLY! (Score:2, Insightful)
If you teach a student to think in an object oriented way from day one, they will think of everything as objects, just like most coders think in procedures now.
But that's just my two cents.
Re:Hello World not OO? Hello MCFLY! (Score:2)
First, it should be demonstrated that OOP is objectively better *before* making students think in such ways without giving them any real alternatives.
I will agree that OOP seems effective in physical modeling, where it was born (Simula 67), but IMO the benefits there do not extrapolate to modern business systems.
The main reason is that modern systems need "relativistic abstraction", which OOP does not provide without making tangled messes. OOP is optimized for hierarchical IS-A abstraction, which is the antithesis of relativism, where sets and "view formulas" do better IMO.
Re:Hello World not OO? Hello MCFLY! (Score:2)
First, it should be demonstrated that OOP is objectively better *before* making students think in such ways without giving them any real alternatives
This has already been shown,
just as it has been shown that functional programming languages are better than procedural ones,
and that relational languages are better than procedural ones,
and that logic languages are better than procedural ones.
But procedural languages are (arguably) the easiest ones, and that's why they have survived until now.
I only know one language which is still procedural only: Fortran. All other languages have made a hybrid OO evolution.
OOP is optimized for hierarchical IS-A abstraction
How do you come to that opinion?
The main reason is that modern systems need "relativistic abstraction"
I don't think so! Today's systems need to interact. Interact with DBs, business logic, and millions of concurrent users. They need to be maintainable, evolvable, and reusable. They need to scale, and you want to abstract away technical concerns as often as possible.
The DB you use below such a system is just a replaceable technical concern.
In 90% of the cases a standard relational DB is not the best choice. It's only the cheapest in terms of available support and existing infrastructure. OO databases are in general far faster than relational ones in typical usage scenarios.
angel'o'sphere
Re:Hello World not OO? Hello MCFLY! (Score:2)
Bull! Where is it shown?
(* I only know one language which is still procedural only: Fortran. All other languages have made a hybrid OO evolution. *)
I thought they were adding OO extensions to Fortran. (Not that it proves anything except that OOP is in style right now.)
(* The DB you use below such a system, is just a replaceable technical concern. *)
Not any more than OOP is.
(* OO databases are in general far faster than relational ones in typical usage scenarios. *)
Bull!
It is moot anyhow because OO DB's have been selling like Edsels.
I will agree that in *some* domains OODBMS perform better, such as CAD perhaps.
Re:Hello World not OO? Hello MCFLY! (Score:4, Insightful)
An int is four bytes on my CPU. Why should I have the overhead of an object wrapped around it? Why do I need runtime polymorphism on ints? For OO educational purposes, it makes sense to teach that an int is an object. But often in the real world it's far better to make an int simply four bytes in memory.
Rule of thumb: if polymorphism doesn't make sense for an object, maybe it shouldn't be an object. What can you possibly derive from a bool that wouldn't still be a primitive bool?
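To make the overhead concrete, here is the difference in plain Java; the exact byte counts depend on the JVM, so take the comments as rough guides rather than gospel:

public class PrimitiveVsWrapper {
    public static void main(String[] args) {
        int rawValue = 42;                    // just the value itself, no object header
        Integer boxedValue = new Integer(42); // a heap object: header plus field, reached via a reference

        // The wrapper buys nothing here except indirection:
        System.out.println(rawValue + boxedValue.intValue());

        // Integer is also final, so there is nothing to derive from it anyway,
        // which is the parent's point about polymorphism on primitives.
    }
}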
Re:Hello World not OO? Hello MCFLY! (Score:2, Insightful)
C#, however, has automatic wrapping of primitive types with objects. This is supposedly done on an as-needed basis. I've never tried it, but I'd assume that the wrapping happens only when it's required, otherwise the VM will preserve the basic types for performance reasons.
As to reasons for why there is a Boolean object, it's really just a question of convenience. The Boolean class contains methods for manipulating booleans, like making Strings out of them, or making new booleans from strings. What's the harm in extending this helper class to also represent a boolean value? It's still an object. Maybe you never need to subclass it? That doesn't mean it shouldn't be an object.
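For reference, the convenience methods in question look roughly like this in practice; the calls below are ordinary java.lang API:

public class BooleanHelpers {
    public static void main(String[] args) {
        Boolean fromText = Boolean.valueOf("true");  // wrapper object built from a String
        boolean primitive = fromText.booleanValue(); // back to the bare primitive

        String asText = String.valueOf(primitive);   // and back to text again
        System.out.println(asText);                  // prints "true"
    }
}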
Re:Hello World not OO? Hello MCFLY! (Score:2)
But then it's not an object! I thought everything was supposed to be an object.
The Boolean class contains methods for manipulating booleans
Sounds like an adaptor class to me. The bool itself is still a non-object. If you make a string of bools, that string is not a bool, it's a string of bools. Adaptor classes are handy for such cases, but don't confuse the wrapper with the contents.
Re:Hello World not OO? Hello MCFLY! (Score:2)
I think he was talking about "the real world". The one with a screen and a keyboard and a mouse and a computer.
In the "real world" mapping things to objects is often easy. It is also often easy to see trivial ways to interact with said object.
Whether or not you're dealing with an instantiated object (a Java object, that is) when you do an addition is both irrelevant and uninteresting. (Unless you happen to be a computer scientist focusing on virtual machines or compilers.)
Re:Hello World not OO? Hello MCFLY! (Score:2)
If you want to get finicky, that's still not great OO design. Unless you're designing a class hierarchy where every object has a print method, chances are you want to tell some output stream to print something, at which point the output stream requests whatever format it prefers from the object being printed.
Thus:
stdout.print(hellomsg)
Or in more familiar syntax
cout << hellomsg;
(Note that I have issues with C++ iostreams, but they did get this part right).
In a language that supports multiple dispatch the issue is a bit moot, but what you put on the left side of the dot (or arrow, or whatever) in most OO languages can make a big difference in design down the road.
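For what it's worth, plain java.io already follows the shape described above: you address the stream, and the stream asks the object for a representation via toString(). A small sketch (the Invoice class is invented here):

class Invoice {
    private final double total;
    Invoice(double total) { this.total = total; }
    public String toString() { return "Invoice, total = " + total; }
}

public class StreamPrintsObject {
    public static void main(String[] args) {
        Invoice invoice = new Invoice(99.50);
        System.out.println(invoice); // the stream is told to print; it calls invoice.toString()
    }
}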
Re:Hello World not OO? Hello MCFLY! (Score:2)
The string properly knows how to print itself. Am I null terminated or not? Am I a date string?
The system will know how to print a string, but it can't be expected to know how to print an inventory, a window, or a report.
You break the abstraction going the other way... (Score:2)
This is a good example of one of the many lose-lose situations you have in OO design.
teaching OOP first may not be the way to go (Score:2, Interesting)
This may not be true.
I have recently taught programming to a few people. They were new to programming, and were honestly interested in it.
I have tried the approach of teaching OOP first. They didn't get it. Then I tried to avoid the OO part, and teach them some programming, but using objects. This also didn't work very well.
After this, I switched from Java to a simpler, structured language: PHP. Things worked a lot better, they seemed to understand the procedural paradigm naturally and very quickly.
After a few months of teaching PHP, I tried to teach Java again. This also worked a lot better than my first attempt, as they grokked objects more easily.
After this experience, I believe that "teach OOP first" is not the way to go.
I think the proper way to teach programming is:
- Teach them a structured/procedural language. Drill into them the loops, if, switch, functions, parameter passing, etc. Teach very basic algorithms.
- Make them do some real work using the new language.
- Teach them OOP, using an OO language.
If the first thing you teach is OOP programming, people won't understand the need for classes and objects. They will seem like useless abstractions.
Also, people who are not accustomed to the way computers work don't understand a lot of things in OOP, as they miss a lot of context.
If you teach them the much simpler structured programming, they will grok OOP easily.
There is a third path: teach structured programming first, but in an OO language. I believe this can be done, but not in Java. In Java, everything in the library is an object, so you can't avoid lots of objects and object syntax.
Another issue is that it is important (IMHO) to teach people a productive programming language, so they can see real, useful results quickly. PHP is good for this purpose.
Re:teaching OOP first may not be the way to go (Score:2)
Most importantly though, nothing takes place outside of a class. Consistency is good, as people tend to get confused when explaining exceptions to the rules.
If you're going to teach OOP, in my humble opinion, you need to stress thinking about problems in terms of classes and objects from the very first day.
The other approach I've given serious thought to is using a language like Perl to start out by showing how things can be done in a quick and dirty way, but then expand the "hello, world" (output) script into saying "hello" to a person (input), and so on, and show how modules and classes can make expanding a small program much easier. At the same time, as you construct a class, you can demonstrate arrays, associative arrays, looping, conditionals, etc.
I'm still debating which is the better approach.
Goto's reborn as try/catch? (Score:2)
The C "break" statement is unnecessary if you have languages that use "sets" for their case lists instead. Look at VB's case statement, for example. (I am not promoting VB here, but it's case statement is far better than C's.)
(* or even 'throw', which is the worst example of unstructured programming I can think of. *)
I don't like try/catches either. They make the exception to the rule take a bigger influence over the code than the regular logic, bloating up the code and the nesting.
But, this is another one of those "language fights" that never ends once it breaks out.
Teach libraries first (Score:2, Interesting)
Of course syntax is important, but one should not be forced to become a language lawyer before useful tasks can be accomplished. By emphasizing a language's standard libraries, you learn the "philosophy" of the language as well as its syntax. And in the end you can do useful things with the language, and do them correctly within the philosophical context of the language. You avoid such common problems as using a C++ compiler to write what are in reality C programs.
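In Java terms, the same point might look like this: lean on java.util from day one rather than hand-rolling array bookkeeping. A small sketch:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LibraryFirst {
    public static void main(String[] args) {
        List names = new ArrayList();   // library-first: the collection does the bookkeeping
        names.add("Ada");
        names.add("Grace");
        names.add("Alan");
        Collections.sort(names);
        System.out.println(names);      // prints [Ada, Alan, Grace]

        // The alternative (a fixed-size array, a manual counter, a hand-written sort loop)
        // is the Java equivalent of writing a C program with a C++ compiler.
    }
}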
How do you learn C, how do you learn Java? (Score:2)
As we learned more and more about programming in Java, we found that C was not the right way to approach Java.
To learn C you need to know assembler (it was invented to be a portable assembler).
To learn C++ you need to know C (otherwise you'd better skip directly to Java, OO Pascal, or, well, Smalltalk).
Unfortunately you cannot teach a CS beginner assembler, hm.
Unfortunately CS emphasizes learning a beginner's language instead of teaching higher-level concepts. OTOH, that's what the students want and expect.
And if a course does put students directly in touch with higher-level concepts, you can bet it's not merely functional like Miranda or ML; no, you get Lisp.
I for my part only teach UML.
angel'o'sphere
huh? not higher level concepts? (Score:2)
Re:How do you learn C, how do you learn Java? (Score:2)
A certain Mr. Stroustrup disagrees with you. In fact, C will teach you all kinds of things you need to unlearn in C++, such as pointer usage, arrays, and imperative design, that can be superseded with references, containers, and predicates, all to be found in the C++ standard library. To say nothing of generic programming with templates (you can actually write entire programs in nothing but templates; they're Turing complete).
Java is just the tip of the iceberg (Score:5, Insightful)
But, as always, academia is behind the curve. Not that they should be on the bleeding edge, but now it's time to catch up. Computer Science programs across the country have started to straddle the fence when it comes to coursework. Do we teach theoretical science, or applied science? This is a mistake; nothing done half-assed is ever worthwhile. Do not make Computer Science more like an engineering discipline. Instead, make Software Engineering an undergrad degree unto itself.
You should be able to teach CS101 in any language. If you can't, then you're trying to teach engineering in a science class. A stack is a stack regardless of what language it's written in. Don't pollute computer science by trying to make it something it isn't. Instead, make a new Class (pun!)...Software Engineering 101. There you can teach design methodologies (like OOP), proper use of the latest tools, automated testing methods, and other applied theory that has no business in a computer science class.
This is not to say that there wouldn't be a great deal of overlap between a C.S. and S.E. degree. After all, you have to learn physics before you can be a Civil Engineer. But it's just not possible to teach you everything there is to know in 4 years. I've learned so many formalisms and techniques since I received my B.S. in C.S. that I wondered why I hadn't heard anything about them while I was in school. The answer, I realized, is that the days of the computer Renaissance man are ending. Developing an algorithm and developing a software system are two completely different tasks. Just as a physicist can't build a bridge and a Civil Engineer didn't invent superstring theory, you can't ask a computer scientist to build a software system or ask a software engineer to develop a new compression algorithm...it's just the wrong skillset for the job.
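To the point above that a stack is a stack in any language, the whole concept fits in a few lines; here it is in Java, but the same shape falls out in C, Scheme, or anything else:

import java.util.ArrayList;

public class SimpleStack {
    private final ArrayList items = new ArrayList();

    public void push(Object item) { items.add(item); }   // add to one end

    public Object pop() {                                 // remove from the same end
        if (items.isEmpty()) throw new IllegalStateException("stack is empty");
        return items.remove(items.size() - 1);
    }

    public boolean isEmpty() { return items.isEmpty(); }
}

The language only changes the syntax around the idea, not the idea itself.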
C++ inventors on how to teach C++ (Score:2)
You teach by example, and do both. Andrew Koenig and Barbara Moo, two of the prime movers behind C++, wrote a book called Accelerated C++: Practical Programming by Example [att.com], as a new approach to teaching C++.
It absolutely kicks ass. Somebody else on this page commented that you need to learn C before learning C++. Most C++ people disagree; this book proves them correct. It starts with a small, complete program,
and the first lesson was, "the most important line in this program is the second one," i.e., the comment. How refreshing is that? It does not then follow up by diving into the guts of the iostream library; they simply say, "when you want to send stuff to the screen, use this; when you want to get stuff from the keyboard, use this," and leave the details for later. Even though the iostream library involves OOP, they don't shove the user's nose in it. The people I know who have started using this book, and the approach that they advocate, to teach beginning programmers, have all found their students not only picking up the language faster, but being less frustrated with programming in general (admit it, we've all been there), and having a better understanding of what's happening in their code.
(Pointers aren't even introduced until chapter 9 or 10, which means anything that visibly uses pointers isn't needed until then, either. Very nice.)
Re:Java is just the tip of the iceberg (Score:2)
About half of it belongs in the same category as "engineering" and the other half in psychology. The psychology part is key to software engineering (SE). SE is mostly about humans communicating with *other* humans (programmers). The computer is secondary.
Many early SE experts tried to apply math, but it did not work very well. It comes down to the human brain, which still ranks among the biggest mysteries of science.
Thus, anything in SE that deals with "is X better than Y" is going to have to get neck-deep in psychology.
The only known semi-objective metrics outside of psychology that have any merit are code size (quantity of lines of code or tokens) and scenario-based change-impact-analysis.
Re:Java is just the tip of the iceberg (Score:2)
SE is another thing. Often a CS department teaches SE as well as CS, but that's just administration.
I wouldn't use LOC (lines of code) as a metric. It doesn't work when you compare across architectures or even languages. How well it can be changed (which I assume is what "scenario-based change-impact-analysis" means) is also hard. Generally, if you can foresee the change then it's probably part of the original problem domain.
Besides, psychology only enters into the first part of the design: the specification. After that you know what you are trying to do, and the engineering can start. (Naturally there may be some misunderstandings regarding what the specification means, but that's not part of the engineering problem per se.)
But I'll agree with you that SE is far from being a solid engineering branch yet. We just haven't been building systems long enough. And for a lot of that time it wasn't even acknowledged that it should be engineered to begin with.
Re:Java is just the tip of the iceberg (Score:2)
In reality, you can't except for hindsight in case studies. However, you can guess at typical changes in many cases.
(* The specification. After that you know what you are trying to do, and the engineering can start. *)
Time and time again, being "change-friendly" has been shown to be more critical than initial design. Fitting a clear spec is a cinch in comparison.
Re:Java is just the tip of the iceberg (Score:2)
Refactoring is still a bit away from mainstream perhaps, but it's coming. Along with a bunch of other new ideas waiting to get accepted. Such as XP, which also supports the "short release cycle", which is really the core of "change-friendly". All of these are becoming integrated into SE in order to make it a more solid field.
We're still at the early stages of SE after all, so bumps in the road are expected. Don't expect CS to come to the rescue though; that's not its job. It's like blaming the people dealing with mathematics when your engine doesn't work. Sure the fields are related, but they are not the same.
Re:Java is just the tip of the iceberg (Score:2)
That is because OOP has failed to be "change-friendly" as advertised. Rather than admit this, the industry has created the euphemism "refactor" rather than say, "it is not standing up to change, so we have to rework it and make a career out of reworking it".
Refactoring is a symptom, not a solution.
(* Along with a bunch of other new ideas waiting to get accepted. Such as XP, which also support the "short release cycle" which is really the core of "change-friendly". All of these are becoming integrated into SE in order to make it a more solid field. *)
XP is controversial, and generally orthogonal to paradigm issues. XP seems to be a response to high project failure rates. If I am allowed to use sound p/r techniques, my projects *don't* have a high failure rate. Thus, I feel no need to fix something with extreme approaches that is not broken to begin with. Perhaps reserve XP for managers/people who have a high failure rate rather than everybody.
Re:Java is just the tip of the iceberg (Score:2)
Yes of course, OOP is not as good at making reusable code and things like that. It's generally good for making reusable designs though. So if you have to add new features you can often use the old design as is. There is nothing magical about OOP which makes everything reusable. This wasn't the issue at hand though.
Now you're really starting to sound like an old geezer refusing to learn new ideas. Sure, there's a lot in XP that is not for everyone. There are also a lot of things which successful people have been doing for a long time. Just as OOP and design patterns were made to help everyone and not just those who discovered them, XP is a bundle of good ideas and methods. (Not all may apply to your projects.)
FYI, the university I go to has now picked up XP and uses it in its teaching. It used to follow the stricter RUP (Rational Unified Process) before. So while it's still pretty "radical", it's becoming more mainstream.
software ideas versus medical trials (Score:2)
I don't think I have ever heard that claim before. It is a little vague though. What constitutes the "design" exactly? I have reused procedural and relational "designs" also (with modifications).
(* Now you're really starting to sound like an old gheezer refusing to learn new ideas. *)
Perhaps I have seen too many "magic bullets" that turned out to be made out of melting ice.
I think it is a better use of resources if *volunteers* test "new ideas" FIRST. When and if it turns out to be better, then I will be more open to it.
In short, there are too many things for one person to test all by themselves. Let willing volunteers test it. When good evidence shows that it is better, then I'll switch.
If that is not Vulcan-wise reasonable use of resources, then shoot me.
The medical institutions don't test every new cancer drug on EVERY cancer patient. They ask for volunteers, or at least a smaller group first. If it shows success, they then test it on increasingly larger populations. If it fails, then it fades in oblivion without dragging all patients with it.
Why should software fads be treated any differently?
Because dead software is less risky than dead humans? Well, maybe. But dead software is still dead software.
Use this program in classes (Score:2, Interesting)
BlueJ [bluej.org]
Teachers can start teaching objects and classes from the beginning. They don't have to tell students:
"Just write down: public static void main (String args[]) { } And don't ask me about it until later".
it wouldn't run some of my home-made classes, but then I didn't read the manual :P
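For reference, here is that line in context, with the explanation students are usually told to wait for; this is standard Java, nothing invented:

public class Hello {                           // every Java program lives inside a class
    public static void main(String[] args) {   // public: callable from outside the class
                                                // static: no object has to be created first
                                                // void: returns nothing
                                                // String[] args: the command-line arguments
        System.out.println("Hello");
    }
}

BlueJ's trick is simply letting the teacher postpone that paragraph of explanation, not removing the need for it.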
Python (Score:2, Insightful)
It does exactly what it needs to, without anything extra. Each piece can be discussed separately, and picked apart or expanded as desired.
Re:Python (Score:3, Funny)
print 'hello world'
or:
print "hello world"
or even:
exec(__import__('zlib').decompress(__import__(...)))
Python also has the added benefit of being an all-around much simpler language to learn than Java, as the last example demonstrates.
What is an object? What is function? (Score:4, Insightful)
I mean: what is a first class citizen? In C everything can be degenerated down to a pointer, except a preprocessor macro.
So the only true first-class citizen is a pointer, or in other words a memory address. Structs and functions seem to be something utterly different, even besides the fact that you can take the address of both.
In C++ suddenly we have similarities: structs are similar to classes and similar to unions. With operator overloading you can manage to get a class behaving like a function, a functor.
But wouldn't it make more sense to say we have only *one* thing? And wouldn't it make sense to make far more stuff optional, like return types, access modifiers, linkage modifiers...
{
int i =1;
}
What's that? Data? A thing with a 1 inside, stored in a thing with name i? Or is it a function with no name and a local variable i with value 1?
Let's give it a name:
thing {
int i = 1;
}
Why can't a language creator understand that the OO and functional paradigms are just two sides of the same coin? The thing above serves pretty well as a function and as a class.
thing a = new thing;
creates an instance of thing.
if (thing().i == 1) is true as well; call thing like a function.
There is no need for functions and structs to be different kinds of language constructs, and thus it makes no sense that a modern-day language forces one to distinguish them.
In short: system architects get a language which allows them to express the world they want to model in terms of objects/things and to assign behaviour/functions to objects. Unfortunately the language designers are mostly BAD OO designers and are not able to apply the first principle of OO correctly to the languages they invent: everything is an object.
Even a for(;;) statement is not a statement. It's an object. It's an instance of the class for; the constructor accepts 3 arguments of type Expression (you could say Expression(.boolean.) for the second one). Well, for the compiler it DEFINITELY is only an object: java.AST.statement.ForStatement
Sample:
for (Expression init; Expression(.boolean.) test; Expression reinit) { Block block }
Hm? A function, or a class named for?
Two parameter sections, one in () parenthesis and one in {} braces.
What you pass in () is stored in init, test and reinit. What you pass in {} is stored in block.
The compiler crafter puts a for class into the library:
class for (Expression init; Expression(.boolean.) test; Expression reinit) { Block block } {
init();
loop {
test() ? block() : return;
reinit();
}
}
Wow, suddenly everything is a class. Hm, a metaclass in the case above, probably. A language would be easy to use if I told my student:
OK, let's store an address book! What would you like to have in an address book? Name, first name, birthdate, phone number? OK, then you do something like this:
{ Name, FirstName, Birthdate, PhoneNumber }
We group it. That thing has an anonymous type.
How to create objects?
new { Name="Cool", FirstName="John", Birthdate="12/27/66", PhoneNumber="080012345" }
Wow
cool = new { ... }
bad = new { ... }
And we need to compare them and search them and suddenly we need to put "methods" aka "behavioural" objects into them. Oh, yes and the anonymous thing above needs a name, so it becomes a class.
What I describe above is Lisp, but with a more C/Java/C++ like syntax.
And a radically reduced language design. The best would be to put the language design into the runtime libraries.
Yes: every typed language should be able to work typeless as long as you are in a "sketching" phase.
Regards,
angel'o'sphere
Note: for template arguments I used (. and .) instead of angle brackets.
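The "for is just a class" idea above can be roughly approximated even in today's Java with interfaces and anonymous classes. The Expression and Block interfaces below are invented for this sketch; real Java obviously keeps for as a statement:

interface Expression { boolean eval(); }
interface Block { void run(); }

class For {
    For(Block init, Expression test, Block reinit, Block body) {
        init.run();
        while (test.eval()) {
            body.run();
            reinit.run();
        }
    }
}

public class ForAsObject {
    static int i; // shared with the anonymous classes below

    public static void main(String[] args) {
        new For(new Block()      { public void run() { i = 0; } },
                new Expression() { public boolean eval() { return i < 3; } },
                new Block()      { public void run() { i++; } },
                new Block()      { public void run() { System.out.println("i = " + i); } });
    }
}

The verbosity of the anonymous classes is itself an argument for why a language that wants to treat control flow as objects needs lighter syntax than Java's.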
Re:What is an object? What is function? (Score:2)
Actually it reminded me of nothing so much as the ML line of languages, which includes SML/NJ, Ocaml, and Haskell. All of those give you "anonymous types" like that, with named fields. They even infer types for you, so you can pass an anonymously constructed struct into a field that expected an AddressBookEntry for example, and so long as it had all the same fields, it would accept it. In fact, you don't typically tell functions what type to expect, you just write the code, and the compiler will infer it all for you (sometimes it needs help, so they support type constraints, but those are still inferred, you don't need to declare your anonymous struct as such a type).
I strongly suggest you check out OCaml.
Computation-as-interaction is a bad idea (Score:3, Interesting)
Clearly not everything can be done this way, but I think the idea to throw in the towel and model everything as interacting processes is a huge mistake. This is especially true of concurrency, which is thrown into programs in a haphazard way these days with no particular benefit.
OOD Key Concepts (Score:3, Interesting)
OOD, without risking sounding like those "experts", is no silver bullet for software design. But it is a sound evolutionary advance in Software Engineering techniques. Yes, I do agree that the OLDER generation is more inclined towards Structured Design before implementing OOD/OOP techniques. However, I disagree that it is because that is the only thing we have been taught or have been teaching. That's complete bunk.
Everyone in this field advances with the times. I would suggest, if it seems that way, that the older generation simply realizes what OOD/OOP is and what it ISN'T, and uses OOD/OOP where appropriate in building software.
First of all, OOD/OOP builds heavily on Structured Design techniques (i.e. building software using ADT definitions and the four foundational constructs of computer science: statement sequence, do-while and while-do loops, if-then-else, and the case or selector statement). That is, a properly built OOD will embody, in every one of its object interfaces, methods which are built using sound Structured Design methods. So it is a myth that OOD/OOP gets rid of Structured Design techniques. In FACT, those who write POOR OOD/OOPs are those who have not mastered these foundational constructs and the ADT that goes along with Structured Design.
OOD does not attempt to do away with Structured Design; it complements it by organizing data AND code in such a way as to further increase the resulting code's abstract properties. (i.e. it allows the resulting algorithms to be expressed in a way that makes said code even more reusable, through inheritance for example. OOD is therefore impossible to implement without Structured Design.)
The resulting code is far more abstract, and therefore generalized to be more reusable, and therefore, theoretically, more reliable. (i.e. Code that is used over and over again becomes more reliable over time, and is an extensible property of the life cycle of software. Although structured design allows you to reuse code through simple function calls, OOD/OOP takes it one step further and allows function calls and data representation to be generalized as a functional unit.)
It has been pointed out, with good reason, that Java is a language which can help enforce good OO programming. However, it is not required and for example through the use of static methods, one can build Java code without using OOD/OOP techniques of any kind if one decides to do so.
This is important: OOD because of its abstract properties, (primarily the use of inheritance) can be used to create software patterns that lend themselves to creating certain types of software.
Certain types of software benefit greatly from OOD/OOP implementations: for example, user interfaces. Why? It is obvious. User interfaces are, at their most basic level, built using repeatable patterns themselves, application after application (File, Edit, View, Window, etc.).
When an implementation in and of itself, such as the building of a GUI, has a clear pattern, OOD/OOP methods can get a great deal more mileage out of simplifying and building code. This creates a better implementation of a GUI than a Structured Design approach alone can provide.
With that said. You are probably thinking, what sorts of things is OOD/OOP NOT good at, and in fact SHOULD NOT be used. This is the part that gets controversial and you will decide, without knowing it, which camp you fall into by reading the next paragraph.
Well, abstraction through inheritance in OOD, while it provides excellent reusability in the context of building software, does not always result in the most effective implementation. By an effective implementation, I mean the most efficient one.
So what am I saying? Well, I am saying that you sacrifice some efficiency to gain the increased code reliability that inheritance provides in OOD, by compartmentalizing code AND data within an object, versus Structured Design, which cannot do this through the use of simply an ADT and function/procedure calls.
(i.e. You can never directly modify data in the context of a classic OOD/OOP; you have to build a middle man, as it were, to modify any data you declare private, through the use of accessor methods.)
Although this enforces and corrects some deficiencies in Structured Design, it makes the program arguably slower to execute.
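A side-by-side sketch of the "middle man" being described; the class names here are invented for illustration:

// Structured-design style: a bare record, and anything may poke at it directly.
class AccountRecord {
    double balance;
}

// OOD style: the same data behind accessors, i.e. the middle man.
class Account {
    private double balance;

    public double getBalance() { return balance; }

    public void deposit(double amount) {
        if (amount < 0) throw new IllegalArgumentException("negative deposit");
        balance += amount;   // the validity check is what the extra call buys you
    }
}

The extra method call is the efficiency cost being weighed here against the safety and encapsulation it provides.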
In the context of building, say an Operating System, for example, OOD/OOP is not the way to go if you want a highly speedy and effective OS implementation.
If you want such speed you invariably have to give up inheritance, and the benefits it provides, and resort to Structured Design principles only to build your OS. (i.e. ALL functions and procedures DIRECTLY access the data structures of the OS through passed parameters, thereby eliminating the middle man as above.)
Which, is not so bad, really. OS's and components of OS's such as kernels, etc...are designed to be speedy, as they should be.
So, my view on the topic is that OOD/OOP is best suited on top of the OS, vs IN the OS design.
Not everyone agrees with that, and that is fine.
Why? Well, because many argue that the sacrifice in speed is justified in the complexity of building a OS kernel, and that the reliability gained through the extensive use of OOD/OOP techniques in building the OS kernel for example, yields a better OS.
Which is not something to be taken lightly if your OS is charged with the responsibility of keeping systems software on the space shuttle, for example, working with the fewest number of defects, with human lives riding on what the OS may or may not do next.
On the other side, like I said, you have me and others who believe that an OS should be very small and very fast, that OOD/OOP shouldn't be used, and that the reliability sacrificed is acceptable.
So, that is just one aspect of when and where and why OOD/OOP should and should not be used. But as you can see, it is far from cut and dried, and the decision is primarily based on IMPLEMENTATION and engineering REQUIREMENTS, not on methodology.
Which is how the real world works.
For the most part, what drives 90% of the disagreements is the fact that many people see OOD/OOP as a generalized approach to solving ALL problems, and not as a specialized addition to Structured Design techniques, suitable for SOME problems, not ALL.
I personally, obviously, feel that OOD/OOP is NOT a generalized programming methodology for ALL cases.
However, some of my friends feel very differently, and we have a good discussion on the topic wherever we go when we start discussing OOD/OOP.
Things can get pretty heated, and most patrons at the local 3am diner wonder what all the screaming is about, particularly the buzz words.
Hack
Re:OOD Key Concepts (Score:2)
I find procedural/relational techniques to be more "abstract" because much of GOF-like patterns can be reduced to mere relational formulas, as described previously. A formula is more abstract than a pattern, for the most part.
And procedural ADT's only really differ from OOP ADT's when "subtyping" is used. However, subtyping is too gross (large chunk) a granularity of differences IMO. Even Stepanov, the STL guy, realizes this. (And he has enough respect to not get called a "troll", unlike me.)
Re:OOD Key Concepts (Score:2)
It can get ugly quite quickly because of so much middleware in between, which I pointed out.
Calling constructors, for example. Constructors and accessor methods don't exist in Structured Design; only procedural or functional abstractions which directly initialize your ADT for use.
I could provide two types of examples, but the Slashdot interface for dumping all that in would be painful for me to organize and type, so I won't support the above argument with a direct example.
Hack
Re:OOD Key Concepts (Score:2)
Generally I use a data structure interface[1], usually a database table, for such. IOW "new" is a new record instead of a language-specific RAM thingy. The "instances" are simply another record or node in the collection/database.
[1] Note I said "interface", since direct access limits implementation swappability.
OO philosophically believes that it is good to hook behavior to the entities of the structures. In practice, I don't find this very useful for business modeling. Sometimes there is a permanent fairly tight relationship, but *most* of the time the nouns that participate are multiple and variant WRT importance. There is no "King Noun" that is appropriate and invariant.
"Every operation should belong to one and only one noun" is an *arbitrary* restriction in my mind. If you can explain what is so magical about that OO rule in biz apps, I would be grateful, because I don't see the appeal. I just don't.
(* Only procedural or functiona abstractions which directly initialize your ADT for use. *)
I am not sure what you mean here. Note that ADT's generally are stuck to the "one noun" rule described above. This limits them for biz apps IMO.
(* I could provide two types of examples, but the Slashdot interface for dumping all that in would be painful for me to organize and type, so I won't support the above argument with a direct example. *)
Slashdot is shitty for nitty gritty software development discussions. You have hit-and-run superficial moderators, and it does not like programming code characters.
Perhaps go to deja.com and post on comp.object to post some sample code. BTW, rough pseudocode is fine as long as you are willing to answer questions about it.
Thanks for your feedback
Survey question for OOP fans (Score:3, Interesting)
I get different answers when I ask OOP fans what specifically are the (alleged) benefits of OOP. Most can be divided into one of these two:
1. Easier to "grok". Enables one to get their mind around complex projects and models.
2. Makes software more "change-friendly" - fewer code points have to be searched and/or changed when a new change request comes along.
I did not include "reuse" because it seems to be falling out of favor as a selling point of OOP.
If I can narrow it down, then perhaps I can figure out why OO fans seem to like what seems to be a messy, convoluted, bloated, and fragile paradigm to me.
I would appreciate your vote. Thanks.
P.S. Please state your primary domain (business, systems software, factory automation, embedded, etc.) if possible.
Re:OO is for wankers (Score:2)
Because even Assembly is an abstraction, and once you start down that road, you might as well go all the way.
The third line pretends that there are strict lines drawn between the procedural, functional, and OO programming paradigms.
And how are methods anything other than functions or procedures which operate on an encapsulated set of variables?
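In Java the two spellings sit side by side, and the only real difference is which argument is written before the dot; a small sketch:

public class Counter {
    private int count;

    void increment() { count++; }          // "method" style: c.increment()

    static void increment(Counter c) {     // "procedure" style: increment(c)
        c.count++;
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();                     // the object receives the message...
        increment(c);                      // ...or the procedure receives the object
        System.out.println(c.count);       // prints 2 either way
    }
}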
Re:OO is for wankers (Score:2)
I have tried to get useful descriptions of when and when not to use OOP by asking OO fans.
But, I get a different answer from every OO fan I ask.
OOP has shot consistency between the eyes.
I find procedural/relational design more consistent: tasks go into code structures, and noun modeling goes into the database and relational algebra (as opposed to also going into code structures such as classes).
Sure, there are differences from one p/r programmer to another, but not NEARLY as much as from one OO practitioner to another. They will bash each other's OO designs, but cannot articulate exactly why one is bad. It is usually a doctrine fight. "But you are violating the Wesly-Demeter Principle, blah blah."
I usually justify my designs in terms of how they will handle the most likely changes and change patterns. Using this metric, I can match or beat most OO designs. (Although the doctrine sometimes blinds OO fans to actual change patterns by highlighting only certain change patterns. It is hard to get beyond such points because it is one's world view in a given domain that you are up against. "Brainwashing" is what comes to mind. Indoctrinate not only the solution, but the problem as well, by focusing only on a narrow subset of reality.)
oop.ismad.com
Re:OO is for wankers (Score:2)
How does one judge "good developers"? If you judge it by articulation skills, then most OO fans I know really suck. "It is good because I have experience and I just say so" is commonplace. Are science and western-style reductionism really dead in software engineering?
(* Many OOP based programs were generated without accounting for all of the requirements, either known or unknown, of the problem. *)
This is a weakness of OOP IMO. The "noun modeling" is in code instead of via relational formulas (as described elsewhere). Formulas are less disruptive to change than code structure (named units, etc.). GOF is the old-fashioned way.
(* Structured programming is a methodology. OOP is a methodology. *)
Structured programming + Databases is also a methodology, although it is not documented very well. Databases are what gives procedural programming its real power, not mere functional decomposition by itself. It makes structures/patterns virtual and formula-based instead of something you build by hand.
Would you rather order a bunch of bricks into place, or lay them yourself brick by brick?
Re:OO is for wankers (Score:2)
GOF is the old-fashioned way.
Please stop ranting against GOF: the Gang of Four, Design Patterns: Elements of Reusable Object-Oriented Software.
In every sentence where you write GOF == "bad term" you only show your bloody ignorance.
Justifying that OO is 'bad' by claiming all your friends who are OO fans are bad programmers...
BTW: can someone please explain to me how to put someone on the ignore list?
tabelzier, stick to your tables
angel'o'sphere
Re:OO is for wankers (Score:2)
OOP opinions are currently clouded by an Emperor's New Clothes syndrome. Anybody who speaks out is called a Luddite or the like.
(* Believe it or not, OOP programmers use Databases! *)
But OO and databases tend to fight over the same territory. If you want to properly factor responsibilities and roles of the various technologies to reduce duplication and/or overlap, then one or the other must go.
Even Bertrand Meyer questions the wisdom of databases with OOP.
(* So you saw some bad OOP designs, probably created by people who didn't understand either OOP *)
Perhaps. I have asked for some good designs that show clear benefits and never receive any that stand up to scrutiny.
Besides, if only 2 percent know how to do OOP "properly", then there may be a huge problem with OOP. Ed Yourdon's surveys find no higher manager-level satisfaction with OOP projects, I would note. So either OO sucks or its too hard to "do right".
(* If you have to solve a problem, the solution takes about the same amount of work to develop, no matter what system you use. *)
I disagree. I found formula-based structuring/patterning to be quicker, more change-friendly, and less verbose.
(* What on earth are you talking about? If the code has to perform certain operations, they need to be coded somewhere, don't they? *)
I see a lot of OOP code or API's that *reinvents* database-like operations (find, join, get, set, insert, delete, save, filter, etc.)
I would rather *use* a database than make one.
(* Somewhere, either in your "formula", or in code, they need to be implemented. *)
I find the formula approach more compact, less intrusive to the global code structure, and more change-friendly.
big fOOt (Score:2)
Then how do you explain most of the GOF patterns, which can be tablized? Perhaps they are targeting systems software instead of business applications, but shouldn't that be stated somewhere? (I think someday relational technology will be used in S.S. also.)
(* You are saying many things about OOP that I feel are incorrect or undeserved, *)
The lack of consistency in the OO community makes it such that *no matter* what I say/show about OO, at least one OO fan will object to my characterization of OO. IOW, it is probably impossible to please them all.
(* You aren't going to convince me that it is bad, because I have seen otherwise. *)
Well, either the benefits are subjective (OO better fits your mind), or are like Bigfoot: every OO fan has seen them, but never captures them on film.
When I see a side-by-side comparison of good OOP shining compared to good procedural/relational, then I will believe you.
Until then, I only have bigfoot stories from OO fans.
Equal or unknown until proven otherwise.
re: Reuse (Score:2)
Even many "high-level" OO fans believe that "OOP is not about reuse". Here are some excerpts:
http://www.geocities.com/tablizer/reustalk.htm
If you disagree with these OO fans, go argue with them, not me. "Reuse" has been falling out of favor as an important "attribute" of OOP over time.
Show me an actual biz app example of OOP being better reuse, not just talk talk talk.
re: OOP and consistency (Score:2)
Shouldn't they solve the consistency problems *before* taking it mainstream?
It may turn out that consistency is a fault of the paradigm and not just the learning curve.
Besides, well-known OO practitioners who have been doing OOP for 15+ years show few signs of converging with each other.
The Wankers Strike Back! (Score:2)
Sure, there's a lot of crap C++ and Java code that's written by people who don't know what they're doing. That's true for any language. And a powerful language is easier to screw up in, just as it's easier to kill yourself in a Maserati than in a Model T. But that's a problem with the programmer/driver, not the language/car.
Re:OO overrated - Lisp beats Java any day, too. (Score:2)
Because the outside world often does not share the same ideologies as universities, and graduates need jobs so that they can pay off their student loans.
I am just the messenger.
Here's why (Score:2)
It's harder for a poorly trained lecturer who doesn't really know his own subject to make a fool of himself within five minutes in Java than it is with C or C++.
I'm sorry, but in my experience, a lot of the time it really is that simple. Java has no inherent CS-based merit over many other languages that have gone before or since, but a lot of people teaching CS don't really know their subject, and it's easier to cover that up without the risk of some smart alec noting that you dereferenced a null pointer in your lecture on linked lists.
I hasten to add that I don't think all CS lecturers are like this -- just the vast majority, based on my own experiences and what I've read of others'.