CMU Eliminates Object-Oriented Programming For Freshmen

fatherjoecode writes "According to this blog post from Professor Robert Harper, the Carnegie Mellon University Computer Science department is removing the required study of O-O from the freshman curriculum: 'Object-oriented programming is eliminated entirely from the introductory curriculum, because it is both anti-modular and anti-parallel by its very nature, and hence unsuitable for a modern CS curriculum.' It goes on to say that 'a proposed new course on object-oriented design methodology will be offered at the sophomore level for those students who wish to study this topic.'"
  • Re:Hmmm ... (Score:5, Insightful)

    by mellon ( 7048 ) on Saturday March 26, 2011 @08:20AM (#35621162) Homepage

    I don't know about *starting* in assembler, but a programmer who isn't somewhat proficient in assembler is going to have a very weird mental model of how programs work. OOP has the same problem--it's not that OOP is bad; it's that if you start out with an OOP language, you don't learn a lot of things that are important to understand. Once you know how the machine works, then you can start studying abstraction. Treating OOP as the only way, or even the best way, to solve any computing problem tends to produce programmers who think everything is a nail. That doesn't mean there are no nails, nor that students shouldn't learn to swing a hammer.

  • Why remove it? (Score:4, Insightful)

    by Haedrian ( 1676506 ) on Saturday March 26, 2011 @08:25AM (#35621182)

    I don't see why it should be removed; it should be 'complemented' instead with other programming methodologies so that students can compare and contrast. But most CS work will end up being done with OOP in the end, so there's no reason not to start with it from the beginning - at least that's my personal experience.

    In my first year we had a mixture of different programming types, including functional programming. I never really used any of those; I'm sure there are certain places where Prolog or Haskell is used, but it's not as common as OOP.

  • by shoppa ( 464619 ) on Saturday March 26, 2011 @08:26AM (#35621188)
    Focusing on the basics, and not on the tools of the trade, is very important at an institution that is not a "trade school", and CMU's computer science department certainly lives above the trade school level. (Just to contrast: when I was a freshman, the "trade school" argument was whether new students should be taught Fortran or Pascal! Thank heaven I didn't devote my career to being a programmer.)

    It seems to me that CMU has made the very obvious decision that today, OO is a tool for craftsmen, not for freshman computer scientists. And they probably are right. It's important not to confuse the tools of the trade with the basics of the science, and this is especially true at the freshman level. For a good while (going back decades) OO was close enough to the leading edge that its very existence was an academic and research subject, but that hardly seems necessary today.

    In the electrical engineering realm, the analogy is deciding that they're gonna teach electronics to freshmen, and not teach them whatever the latest-and-greatest VLSI design tool is. And that's a fine decision too. I saw a lot of formerly good EE programs in the '80s and '90s become totally dominated and trashed by whatever the latest VLSI toolset was.
  • A larger problem (Score:5, Insightful)

    by wootest ( 694923 ) on Saturday March 26, 2011 @08:26AM (#35621190)

    The big problem with OO courses has always been that there's far more focus on the mechanism (and, lately, on design patterns) than on actually structuring your program to take advantage of objects. The word "polymorphism" isn't as important as getting across the concept that you can coordinate a few things as if they were the same thing, and have them act in different ways. Knowing a bunch of design patterns by name isn't as important as figuring out once and for all that you can structure your program in such a way that you can create interfaces (not necessarily actual interfaces/protocols) between different parts of your program and essentially replace them, or at least rework them entirely, without the whole house of cards collapsing.

    There's no focus on making it click. There's no course that teaches you explicitly just about formulating problems in code, and that makes me sad.
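
    To make that concrete - a minimal C++ sketch, with invented names, of the kind of "interface seam" the parent describes: two backends satisfy one interface, so the call site never has to change.

        #include <iostream>
        #include <string>

        // The "seam" between parts of the program.
        struct Store {
            virtual void put(const std::string& s) = 0;
            virtual ~Store() = default;
        };

        // Two interchangeable implementations.
        struct MemoryStore : Store {
            void put(const std::string& s) override { std::cout << "mem: " << s << "\n"; }
        };
        struct DiskStore : Store {
            void put(const std::string& s) override { std::cout << "disk: " << s << "\n"; }
        };

        // Coordinates both "as if they were the same thing".
        void record(Store& st) { st.put("hello"); }

        int main() {
            MemoryStore m;
            DiskStore d;
            record(m);   // prints "mem: hello"
            record(d);   // prints "disk: hello" - same call site, different behavior
        }

    Swap DiskStore in for MemoryStore and nothing upstream needs reworking - that's the house of cards not collapsing.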

  • by WrongSizeGlass ( 838941 ) on Saturday March 26, 2011 @08:46AM (#35621302)

    Why are computer scientists even learning programming? When did this happen? Programming sounds like one of those get-your-hands-dirty jobs in flyover territory, where you would show a lot of ass crack on the job and live in a trailer park. Educated people don't do that.

    They need to be able to program for the same reasons management and engineers need to spend some time on the assembly line: so they can learn how things actually work. There's often a wide chasm between "on paper" and "in practice" and ideas need to be able to traverse it.

  • Re:What? (Score:3, Insightful)

    by Anonymous Coward on Saturday March 26, 2011 @08:47AM (#35621308)

    The point of a university Computer Science course is not to train students in iPhone development.

  • Re:What? (Score:3, Insightful)

    by osu-neko ( 2604 ) on Saturday March 26, 2011 @08:49AM (#35621332)
    Perhaps by not quitting after their freshman year, and learning some OOP?
  • Re:Hmmm ... (Score:5, Insightful)

    by osu-neko ( 2604 ) on Saturday March 26, 2011 @08:53AM (#35621364)

    Agreed ... but aren't most modern OSes OO-based? In most cases students need OO programming in order to become employable. OO certainly isn't the holy grail of computing, but it is entrenched in business and needs to be taught, just like COBOL was all those years ago (when I had to learn it even though it was like writing a book every time I wanted to write a small program).

    And how is this an argument for including it in the introductory freshman curriculum? I put forward the possibility that some topics are more appropriately taught only after students have learned the basics.

  • Re:Hmmm ... (Score:4, Insightful)

    by CptPicard ( 680154 ) on Saturday March 26, 2011 @09:39AM (#35621624)

    I actually have never believed that assembly is in any sense informative or educational because "it is how the computer works". It's actually a remarkably hard-to-decipher representation of a program, and what the program is and does is the important part.

    I consider myself quite a strong programmer, and I never think in terms of assembly when writing programs. Higher-level languages, in particular functional programming languages, are far closer to my mental model of things. Why not work in a language that represents or helps formulate the problem-space abstractions better?

  • by hey! ( 33014 ) on Saturday March 26, 2011 @09:44AM (#35621652) Homepage Journal

    Well, I'd agree with you if this were a class in an O-O language, but it's a class in a major programming *paradigm*. And even theoreticians have to express themselves in code. If the summary is to be believed, the department is taking a stand against O-O programming based on what it sees as the paradigm's shortcomings. Let me tell you, thirty years ago you'd hear the same kinds of arguments in a computer science department about how bad virtual machines were. Virtual machines were an early, failed attempt at getting around the limitations of early compiler technology. C showed you could have a language which was nearly as fast as assembler (faster for most programmers) and easily portable across architectures. There was no reason to study virtual machines; they had no practical application. Well, it probably would have been a good idea to have them in the curriculum somewhere to study on a theoretical basis, because ideas in computer science have a way of re-emerging.

    Computer science as a discipline embraces topics other than the theory of computability and complexity theory, which might as well be taught in the mathematics department. For example, there are computer language design, hardware architecture, data communication theory, database theory, AI, etc. Some of these could be put into other departments, I suppose, but an understanding of the core intellectual discipline *is* widely applicable to all the topics that commonly fall under the "computer science" rubric.

    I think an empirical case for separating computer science from software engineering might have sounded convincing fifteen years ago, but Google, and the Internet in general, are strong disproofs of this. Google isn't a search company. It's an algorithm company. It's about doing things on a scale so massive it requires serious computer science ability, well above the "tradesman" level of expertise. Which is not to say that what it does is as "pure" as the work of an academic interested in computability per se and in defining new kinds of complexity classes; but that kind of work, isolated from its applications, is of such narrow interest it hardly justifies the existence of a department separate from the Mathematics department.

    I do see the need for separate concentration areas or departments for computer science per se, software engineering, and information science, although having employed a number of people with master's degrees in information science I have grave doubts about the academic quality of degree programs in that field. An academic degree should give you the intellectual tools to think about a problem, not necessarily the practical tools to work on it, which can be picked up other ways. The O-O paradigm fits into this framework. The requirement to learn a specific O-O language like C++ or Java is a different matter; somebody with a degree in C.S. should be able to pick up any such language because he should understand the *design* of the language.

    I suspect we're seeing departmental politics here.

  • Re:Hmmm ... (Score:5, Insightful)

    by Dails ( 1798748 ) on Saturday March 26, 2011 @10:30AM (#35621938)

    I outright laugh at people in an Interview when they ask me if I'll write OO code.

    Good advice for those seeking work.

    If you can't read straight up C code and understand what the fuck is going on, stop calling yourself a programmer.

    They used to say that about assembly. Try writing a modern game or GUI in assembly. You might well be able to do it, but by the time you're done I'll have finished several projects and be moving on to the next one.

    If you can't code directly for the hardware you're interfacing with, stop calling yourself a programmer.

    Right. Abstraction has no place in computer science education or in programming. As a matter of fact, if you're using anything other than a magnetized needle to write to your hard drive, quit calling yourself a computer user.

    If you depend on .NET, any library, framework, or something written by someone else, you're not a programmer. You're a script kiddy.

    Right. Code reuse is so lame. Anything that reduces development time and improves efficiency has no place in computer science. Everything should be written directly in binary on a computer you, the programmer, built from latches, switches, and flip-flops. Otherwise, you're just a bum.

    OO should never be taught lest we end up with a generation of useless tools who think they're "programmers" that can't actually accomplish fuck all

    Right. After all, every program written using OO is useless and nonfunctional. Except for the hundreds of thousands of programs that work just fine.

    Go back to your garage with your unflinching dedication to The Old Ways and leave the rest of us to be productive.

  • Re:Hmmm ... (Score:5, Insightful)

    by chrysrobyn ( 106763 ) on Saturday March 26, 2011 @11:18AM (#35622254)

    I consider myself quite a strong programmer, and I never think in terms of assembly when writing programs. Higher-level languages, in particular functional programming languages, are far closer to my mental model of things. Why not work in a language that represents or helps formulate the problem-space abstractions better?

    In a word, performance. In your particular field of expertise, performance and memory footprint may not matter (99% of desktop applications, I suspect), but a strong programmer is fully aware of what goes on under the covers when [his|her] program is compiled, to the degree that it impacts the product. Until recently, I never thought there was a functional difference between a lone line containing "i++;" and one containing "++i;". Of course, for variable assignment it matters, but what's going on under the covers? If you stop and think about it, i++ actually has to return the old value. ++i can destroy that old value and never needs to worry about returning it (you can avoid an extra copy). If you're in a high-performance loop, either with not much body or with a lot of increments, such things matter. Understanding the impact of caching, the scarcity of registers, the high cost of flushing the cache - all of these can matter, and it's hard to teach these concepts without using at least a pseudocode that looks suspiciously like assembly. If you're paying for compute cycles, or if you're selling something in a market where performance matters, you're going to see a strong advantage in a computer scientist who understands assembly and compiling.
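
    As an illustration (a toy C++ class, not anything from the parent post), the copy shows up as soon as the two operators are spelled out:

        // Hypothetical counter type: prefix increments in place,
        // postfix has to preserve and return the old value.
        struct Counter {
            int v = 0;
            Counter& operator++() {      // ++i: bump in place, no copy
                ++v;
                return *this;
            }
            Counter operator++(int) {    // i++: must hand back the old value...
                Counter old = *this;     // ...which costs a copy
                ++v;
                return old;
            }
        };

        int main() {
            Counter i;
            ++i;    // no temporary created
            i++;    // a Counter is copied just to be discarded
        }

    (For a plain int the optimizer will usually erase the difference when the result is unused; for heavyweight types like iterators it may not.)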

    Now, if your main concern is code readability, maintenance, and/or moving on to the next product as soon as this one compiles with no errors, higher-level languages are undoubtedly far more appropriate.

  • by digitig ( 1056110 ) on Saturday March 26, 2011 @11:20AM (#35622268)

    Sure your classes have methods. But good OO design says that your classes correspond to groups of real-world objects: bank accounts, people, widgets, whatever. Why? Because that is the data that needs to be processed.

    That seems to be a fundamental misunderstanding of OO. The focus on real-world objects is because they have self-contained properties and behaviour. "Because that is the data that needs to be processed" is no more (and no less) the case than "because that is the behaviour that needs to be produced."

    The methods are tied to the classes, meaning that their very definitions depend on the data modelling you have done.

    Or (and just as valid), the members are tied to the classes, meaning that their very definitions depend on the use cases (i.e., actions) you have identified.

    Working this way is suitable when you are dealing with complex data and comparatively simple algorithms; get the data model right, and your problem is half solved.

    You do realise that data model != object model, don't you?

    The other situation is where you have complex algorithms, but simple data. Signal processing, control software, scientific programming, etc.. Take a radar as an example: the data is just a continuous stream of numbers coming in off the radar dish; but the algorithms that extract meaning from those numbers are very sophisticated. Using an OO language for problems like this is like trying to loosen a screw using a wrench - maybe you can make it work, but it's basically a really bad fit for the problem.

    Radar data was a bad example. A modern radar processing system takes data from many sources such as primary radar heads, secondary radar heads, ADS-B, ADS-C, WAM, neighbouring radar processing systems and so on. These behave the same in most ways but differently in some, and OO turns out to be an excellent way of dealing with that.
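
    A minimal sketch of the shape of that (C++, with all class names and numbers invented - real radar processing is vastly more involved):

        #include <iostream>
        #include <memory>
        #include <vector>

        struct Report { double range_nm, bearing_deg; };

        // Common behavior shared by every feed...
        struct Source {
            virtual Report next() = 0;
            virtual ~Source() = default;
        };

        // ...with per-source differences isolated in subclasses.
        struct PrimaryRadar : Source {
            Report next() override { return {10.0, 45.0}; }   // placeholder data
        };
        struct AdsB : Source {
            Report next() override { return {12.5, 44.8}; }   // placeholder data
        };

        int main() {
            std::vector<std::unique_ptr<Source>> feeds;
            feeds.push_back(std::make_unique<PrimaryRadar>());
            feeds.push_back(std::make_unique<AdsB>());
            for (auto& f : feeds) {        // one fusion loop handles every source type
                Report r = f->next();
                std::cout << r.range_nm << " nm @ " << r.bearing_deg << " deg\n";
            }
        }

    Adding a new feed type means adding a subclass; the fusion loop doesn't change.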

    Yes, in some applications OO doesn't offer you anything significant. They are typically the cases in which you end up effectively with one class because the data and the processing are so simple. But in this context "simple" processing doesn't mean mathematically or computationally simple, it means that you only do one thing with the data (in the case of the radar front-end, for instance, the one thing will be extracting distinct decluttered returns into distinct objects for passing to the next processing stage). OO design still doesn't get in the way, though: if that's the only thing going on in that process you simply end up with a design with only one class having only one significant method (so if your language supports it you might just as well write the lot as a single routine with no class packaging).

    For what it's worth, I have no problem with OO not being taught from the outset on a computer science course. It's a decompositional/compositional approach and is not a lot of use until you understand what you are decomposing and composing.

  • Re:Hmmm ... (Score:4, Insightful)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Saturday March 26, 2011 @11:28AM (#35622326) Homepage Journal

    but only a little bit better in that they do 'projects together' which in my experience means that the alpha nerd does 90% of the work and the other 4 team members offer worship and keep the ramen coming.

    That's called "managing expectations", and is a wonderful introduction to their future careers for a lot of CS grads. Now go fetch me some coffee while this build finishes, would you? Thanks.

  • Re:Hmmm ... (Score:4, Insightful)

    by siride ( 974284 ) on Saturday March 26, 2011 @11:47AM (#35622474)

    You are still probably doing OO on some level even in C. OO is not a whole different paradigm, but is rather an institutionalization, so to speak, of a variety of patterns that are already used in procedural programming, with its own additional "enhancements" (for better or for worse).
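
    For instance (an invented, C-flavored sketch - this is the hand-rolled pattern that classes and virtual methods institutionalize):

        #include <cstdio>

        // "OO in C" by hand: a struct of data plus a function pointer
        // plays the role of a class with one virtual method.
        struct Shape {
            double (*area)(const Shape* self);   // the hand-rolled "vtable" entry
        };

        struct Circle {
            Shape base;                          // "inheritance" by embedding
            double r;
        };

        double circle_area(const Shape* self) {
            const Circle* c = (const Circle*)self;   // downcast, as C code would
            return 3.141592653589793 * c->r * c->r;
        }

        int main() {
            Circle c{{circle_area}, 2.0};
            std::printf("%f\n", c.base.area(&c.base));  // "method call" via the pointer
        }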

    I don't think the difference in job markets between C and Java really has a lot to do with OO as a methodology. It probably has more to do with what's available on what platform and what companies are using those platforms (more importantly, what *kind* of companies are using those platforms).

  • by JonathanAldrich ( 878621 ) <jonathan,aldrich&cs,cmu,edu> on Saturday March 26, 2011 @06:01PM (#35625044) Homepage
    As some readers may have guessed, "anti-modular," "anti-parallel," and "unsuitable for a modern CS curriculum" are one person's opinions, and do not represent the majority view of the CMU faculty. The introductory curriculum was changed away from Java for different reasons: primarily to focus on a language (Python) with simpler syntax and dynamic types, and to supplement with material on C that is closer to the machine. For more details, see a report by the SCS Deans:

    http://reports-archive.adm.cs.cmu.edu/anon/2010/CMU-CS-10-140.pdf [cmu.edu]

    Whatever you may think about delaying OO--and opinions are mixed at CMU as everywhere--one advantage of the new curriculum is that the sophomore-level course can do OO design more justice than we were ever able to do in the prior intro sequence, since the students already know how to program. Modularity and parallelism are in fact major emphases of that course, which I and other CMU faculty are currently developing.
