Intuitive Bug-less Software? 558

Starlover writes "In the latest java.sun.com feature at Sun's Java site, Victoria Livschitz takes on some of Jaron Lanier's ideas on how to make software less buggy. She makes a couple of interesting points. First, making software more 'intuitive' for developers will reduce bugs. Second, software should more closely simulate the real world, so we should be expanding the pure object-oriented paradigm to allow for a richer set of basic abstractions -- like processes and conditions. The simple division of structures into hierarchies and collections in software is too simple for our needs, according to Livschitz. She offers a set of ideas explaining how to get 'there' from here. Comments?"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by MysteriousMystery ( 708469 ) on Friday February 13, 2004 @03:22PM (#8272387)
    I found it to be fairly interesting but is it just me or were there a few too many shameless plugs for Java in her "interview"?
  • by Anonymous Coward on Friday February 13, 2004 @03:22PM (#8272390)
    Any other good ideas?
  • Objects (Score:5, Insightful)

    by tsanth ( 619234 ) on Friday February 13, 2004 @03:25PM (#8272431)
    I would love to use a C/C++/Java-like language that utilizes pure objects, versus the mish-mashy hybrid typing that exists in most languages that I've used. To me, Livschitz's observation about how programmers work in metaphors, while mathematicians work in pure syntax, is very true: I breeze through all my programming and software engineering classes, but struggle mightily with math courses (save boolean algebra, but I digress).

    I, for one, would like software writing to resemble (really resemble) building structures with Legos.
  • I'm sure... (Score:5, Insightful)

    by lukewarmfusion ( 726141 ) on Friday February 13, 2004 @03:26PM (#8272433) Homepage Journal
    "software should more closely simulate the real world"

    Because the real world doesn't have bugs, right? Our company doesn't have project management software yet - but we're working on it. Personally, I don't think it's worth it until we fix the real world project management issues that this software is supposed to help with. Maybe that's not quite the point, but it raised my eyebrows. (Which I'm thinking about shaving off.)
  • Jaron Lanier? (Score:3, Insightful)

    by jjohnson ( 62583 ) on Friday February 13, 2004 @03:26PM (#8272437) Homepage
    Someone please explain to me why anyone listens to this guy. I've read his essays; they're pedantic and hand-wavey. The term "Virtual Reality pioneer" should be enough to disqualify him from serious discourse.

    Somebody, please point me to something significant he's done so I'll know whether or not I should pay attention to him because, from everything I've seen so far, I shouldn't.
  • by PD ( 9577 ) * <slashdotlinux@pdrap.org> on Friday February 13, 2004 @03:26PM (#8272448) Homepage Journal
    Better yet, let's move to a system of logic where there is one state for each possible answer. This woman is asking if we're fundamentally making improper assumptions about programming, and that's why we're making crap programs. My proposed machine would fix that, because it has an output state for every possible answer. All that would be required is to select the desired output answer, then map it back to the input state. Bingo! We've now got the right question to ask.

    (Or in other non-silly words, adding more states to a computer logical system doesn't make it more useful).
  • by Telastyn ( 206146 ) on Friday February 13, 2004 @03:28PM (#8272464)
    It might be me, but most of the bugs I've seen were created because of assumptions made about abstractions, or because someone was used to a pre-made abstraction and never learned how things actually worked.

    Want to make better software? How about actually scheduling enough QA time to test it? When development time runs over schedule, push the damned ship date back!

  • Test? (Score:5, Insightful)

    by JohnGrahamCumming ( 684871 ) * <slashdot@jgc.oERDOSrg minus math_god> on Friday February 13, 2004 @03:30PM (#8272501) Homepage Journal
    I find it telling that this article does not use the word "test" even once. Rather than spending a lot of time hoping that the purest use of OO technology or some other fancy boondoggle is going to make software better, actually writing tests that describe the expected behaviour of the program is a damn fine way to make sure that it actually works.

    Picking just one program from my experience, POPFile: initially we had no test suite. It quickly became apparent that the entire project was unmanageable without one, so I stopped all development to write a test suite from scratch for 100% of the code (it currently stands at around 98% code coverage). When you don't have all day to spend fixing bugs because the project is "in your spare time," it becomes particularly apparent that fully automatic testing is vital. You simply don't have time to waste fixing bugs (of course, if you are being paid for it, then you do :-)

    If you want to be really extreme then write the tests first and then write the program that stops the tests from breaking.

    John.
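
    A minimal sketch of the test-first approach described above, in plain Java with no test framework (all class and method names here are invented for illustration): the test is written first, then the code that stops it from breaking.

        // WordCountTest.java -- written before WordCount exists.
        // The program's job is to stop these tests from breaking.
        public class WordCountTest {
            static void check(String input, int expected) {
                int actual = WordCount.count(input);
                if (actual != expected) {
                    throw new AssertionError("count(\"" + input + "\") returned "
                            + actual + ", expected " + expected);
                }
            }

            public static void main(String[] args) {
                check("", 0);                    // empty input must not blow up
                check("hello", 1);
                check("hello world", 2);
                check("  padded   spaces  ", 2); // runs of whitespace are not words
                System.out.println("all tests pass");
            }
        }

        // Written second, to satisfy the tests above.
        class WordCount {
            static int count(String s) {
                String trimmed = s.trim();
                if (trimmed.isEmpty()) return 0;
                return trimmed.split("\\s+").length;
            }
        }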
  • by cubic6 ( 650758 ) <tom AT losthalo DOT org> on Friday February 13, 2004 @03:34PM (#8272551) Homepage
    Well, the trick to "anticipating everything a person will do that will inadvertently blow up your application" is to keep it as simple as possible, specifically by restricting how the user interacts with the app. If the user can only press one of 3 buttons or put a fixed number of characters into a text box, it's not impossible to code for every possibility. In theory, you could build a complex application from lots of very simple (and easy to test and write) parts interacting in a well-defined manner.

    In practice, this almost never happens. Most developers are willing to trade perfect code that'll take four months for mostly-perfect code that will be ready for the deadline.

    To sum it all up, a properly designed and written program should never choke on user input. If it does, that means you cut corners somewhere. Don't blame it on the user.
  • Re:Well... (Score:5, Insightful)

    by RocketScientist ( 15198 ) * on Friday February 13, 2004 @03:38PM (#8272600)
    Man I hate this.

    How many times do we have to have fundamental truths reiterated?

    "Premature optimization is the root of all evil"

    I'd submit that nearly every bit of non-intuitive code is written because it "should be faster" than the intuitive equivalent. Just stop. Write the code the way it needs to be written. Decide if it's fast enough (not "as fast as it could be" but "fast enough") and then optimize if necessary.
  • by Fizzog ( 600837 ) on Friday February 13, 2004 @03:41PM (#8272638)
    The problem is that people see some kind of pattern they recognise in the initial requirement and immediately decide there is an abstraction to be made.

    An abstraction is a pattern you find in something that exists, not a pattern you decide on for something you are going to do.

    Write it to do what it is supposed to do, and then look for abstractions.
  • by kvn ( 64836 ) on Friday February 13, 2004 @03:44PM (#8272678)
    I agree completely. Whether a developer uses a functional language or an object oriented language doesn't matter. What does matter MORE THAN ANYTHING is understanding the process that the software is supposed to support. If it's hospital management software, you have to know how hospitals are managed. If it's banking software, you have to understand banking.

    And testing, testing, testing. Because people aren't perfect. Nor would we want them to be... Too much money to be made in support contracts. :)
  • Re:Test? (Score:4, Insightful)

    by ziggy_travesty ( 611150 ) on Friday February 13, 2004 @03:45PM (#8272687)
    I agree. From the disposition of her interview, it seems like testing is beneath her; programs should will themselves to work flawlessly. Just like NASA's reflector tests on the Hubble... right. This is definitely hand-waving. She whines about how modern OO languages aren't intuitive for certain relationships and offers no concrete (or abstract) solution for these shortcomings. The bottom line is: software has bugs because it is complex. Deal with it. It's very hard to write large, quality applications. We need more skilled and better-educated engineers, not more language constructs. Launching a space shuttle or writing a weapons targeting system will never be an intuitive process. Also, intuition and simplicity will never be a substitute for testing. What malarkey. -AZ
    Strengths: whining and moaning
    Weaknesses: independent thought
  • I agree, somewhat (Score:5, Insightful)

    by Captain Rotundo ( 165816 ) on Friday February 13, 2004 @03:48PM (#8272722) Homepage
    A lot of the article is common sense. But I have been perturbed by the ease with which a lot of people seem to claim that OO is the be-all and end-all of everything.

    Even on simple projects I have sometimes found myself designing to fit into Java's OO and not to fit the problem. It's really a language issue when it comes down to it. I am most comfortable in C, so I start writing a Java app and can feel myself being pulled into squeezing round objects into square holes. You have to then step back and realize what's happening before you go too far. I think this is the main source of "design bugs" for me: either ignoring the strengths of a system (not taking advantage of Java's OO) or trying to squeeze a design that is comfortable without billions of objects into some vast OO system, in effect falling into the weakest parts of a language.

    It's probably very similar to the way people screw up a second spoken language, misconjugating verbs and whatnot, using the style they are already most familiar with.

    So with that, it's such ridiculously common sense to say we need an all-encompassing uber-language that is completely intuitive; I just would like to see someone do it rather than go on about it.

    Why not experiment with adding every feature to Java that you feel it lacks, to see if you can achieve that? Because then you end up with Perl :)

    Seriously, programming languages are the way they are, by and large, because they have to be designed to fight whatever problems they were created to take care of. It's a bit foolish to say we need a language that is perfect for everything; instead, you look at what your problems are and develop a language to fight those. Invariably you end up with failings in other areas, and the incremental process continues.
  • by FreshFunk510 ( 526493 ) on Friday February 13, 2004 @03:48PM (#8272724)
    I like to compare it to civil engineering.

    Civil engineering is superior for two reasons: 1) time spent on QA, and 2) dependability of materials.

    In short, look at the time it takes to QA a bridge that is built. Not only is QA done from design to finish, but real load testing is done. Although software does have serious QA, the ratio of QA time to actual building time is far higher in civil engineering.

    Dependability. The thing with building things is that you can always take it for granted that the nuts, bolts and wires you use have a certain amount of pressure and force they can handle. Why? Because of the distinct same-ness behind every nut, bolt and wire ever built. One nut is the same as the other nut. All nuts are the same.

    In software, not all "nuts" are the same. One person's implementation of a string search can vary widely. Yes, we do have libraries that handle this issue, but there is a higher chance of error in software construction because so many of the third-party libraries used are not as robust.

    Lastly, one reason why software hasn't been addressed with the same urgency is the consequences (or lack thereof). When a bridge is poorly built, people die. Laws go into effect, companies go out of business, and many people pay the price. When software starts failing, a patch is applied until the next piece of it starts failing, when another patch is applied. In the end the software becomes a big patched-up piece of crap.

    One advantage, though, of software is that a new version can be released with all the patches in mind and redesigned. :) This has certainly been proved by products like Mozilla, which was probably crap when first released but has definitely matured into a solid product (imho).
  • Re:I'm sure... (Score:3, Insightful)

    by Carnildo ( 712617 ) on Friday February 13, 2004 @03:49PM (#8272726) Homepage Journal
    Because the real world doesn't have bugs, right?

    The first rule of programming is to realize that all input is intended to crash your program, so code accordingly.
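
    A tiny sketch of what coding to that rule can look like (hypothetical Java; the names are invented for illustration): validate everything, and treat anything unexpected as hostile.

        // Hypothetical example: parse a TCP port from untrusted input,
        // assuming the input is trying to crash the program.
        public class SafePort {
            static int parsePort(String raw) {
                if (raw == null) throw new IllegalArgumentException("no input");
                String s = raw.trim();
                final int port;
                try {
                    port = Integer.parseInt(s); // rejects "", "8O", "80; rm -rf /"
                } catch (NumberFormatException e) {
                    throw new IllegalArgumentException("not a number: " + s);
                }
                if (port < 1 || port > 65535) { // rejects 0, -1, 99999
                    throw new IllegalArgumentException("port out of range: " + port);
                }
                return port;
            }

            public static void main(String[] args) {
                System.out.println(parsePort("8080"));    // prints 8080
                System.out.println(parsePort(" bogus ")); // throws, by design
            }
        }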
  • by robbkidd ( 154298 ) on Friday February 13, 2004 @03:49PM (#8272733)

    Want to make better software? How about actually scheduling enough QA time to test it? When development time runs over schedule, push the damned ship date back!

    And mandate unit testing be integrated with coding.

  • fluff (Score:5, Insightful)

    by plopez ( 54068 ) on Friday February 13, 2004 @03:50PM (#8272743) Journal
    Well... some interesting ideas in there, but mainly flawed ones.

    1) The concept that software should 'feel' right to the developer. First of all, this cannot be formalized in any sense of the word. Secondly, even if it could be, it is focused on the wrong target: it should feel right to the end user/problem domain experts. More about this in point 2.

    2) Software tools should model the real world. Well... duh. Any time you build software you are modeling a small part of the real world. The next question is: what part of the real world? The reason that OOP has not progressed farther is that the real world is so complex that you can only build some generic, general-purpose tools and then have a programmer use those tools to solve a particular subset. So the programmer must first know what the problem domain is and what the tool set is capable of.

    3) Programmers should be average. Absolutely not. In order to model the real world, a good programmer must be able to retrain in an entirely new problem domain in a few months. This is what is missing in many cases; most people do not have that level of flexibility, motivation or intelligence, and it is difficult to measure or train this skill.

    4) Programmers shouldn't have to know math. Wrong again. Programming IS math. And without a basic understanding of math, a programmer really does not understand what is going on. This is like saying engineers shouldn't need to know physics.

    5) The term 'bug' is used very loosely. There are at least 3 levels of bugs out there:
    a) Requirements/conceptual bugs. If the requirements are wrong, based on misunderstanding, you can write great software that is still crap because it does not solve the correct problem. This can only be solved by being a problem domain expert, or relying heavily on experts (a good programmer is humble and realizes that this reliance must exist).

    b) Design flaws, such as using the wrong search, a bad interface, or poor security models. This is where education and experience come in.

    c) Implementation bugs, such as fence-post errors and referencing null pointers. This can be largely automated; Java, Perl and .NET are eliminating many of those issues.

    In short, a bad, simplistic article which will probably cause more harm than good.
  • by Anonymous Coward on Friday February 13, 2004 @03:51PM (#8272763)
    expanding the pure object-oriented paradigm

    1. WTF does that mean? It's all just buzzwords. Woohoo. Another buzzword engineer. Just what the world needs.

    2. Making programmers program in an OO paradigm doesn't stop bugs. So why should "expanding the pure object-oriented paradigm" do anything productive?

  • by G4from128k ( 686170 ) on Friday February 13, 2004 @03:56PM (#8272806)
    Having watched many people struggle with physics, chemistry, and biology courses, I'm not sure that the real world is all that intuitive. Even in the non-scientific informal world, many people have incorrect intuitive models for how things work. For example, many people think that increasing the setting on the thermostat will make the room warm up faster (vs. warming at a constant rate, but reaching a higher temperature eventually). And my wife still thinks that turning off the TV will disrupt the functioning of the VCR.

    Another problem is that the real world is both analog and approximate, while the digital world calls for hard-edged distinctions. In the real world, close-enough is good enough for many physical activities (driving inside the white lines, parking near a destination, cooking food long enough). In contrast, if I am moving or removing files from a file system, I need an algorithm that clearly distinguishes between those in the selection set and those outside it.

    I like the idea of intuitive programming, but suspect that computers are grounded in logic and that logic is not an intuitive concept.
  • Re:Test? (Score:2, Insightful)

    by tyroney ( 645227 ) on Friday February 13, 2004 @03:58PM (#8272831) Homepage
    • If you want to be really extreme then write the tests first and then write the program that stops the tests from breaking.

    That, to me, is a very good way of thinking. In the same vein is paranoid programming. Remember Murphy's law. You get the idea.

    Unfortunately, such things take far more time than making a few assumptions and hoping for the best.

  • by Tablizer ( 95088 ) on Friday February 13, 2004 @03:59PM (#8272832) Journal
    After many debates and fights over paradigms and languages, it appears that everyone simply thinks differently. The variety is endless. There is no universal model that fits everyone's mind well.

    As far as "modeling the real world", my domain tends to deal with intellectual property and "concepts" rather than physical things. Thus, there often is no real world to directly emulate. Thus, the Simula-67 approach, which gave birth to OOP, does not extrapolate very well.

    Plus, the real world is often limiting. Many of the existing manual processes have stuff in place to work around real-world limitations that a computer version would not need. It is sometimes said that if automation always tried to model the real world, airplanes would have wings that flap instead of propellers and jets. (Do jets model farting birds?)

    For one, it is now possible to search, sort, filter, and group things easily by multiple criteria with computers. Real-world things tend to lack this ability because they can only be in one place at a time (at least above the atomic level). Physical models tend to try to find the One Right Way to group or position rather than take full advantage of virtual, ad-hoc abstractions and grouping offered by databases and indexing systems.
  • Re:Objects (Score:5, Insightful)

    by weston ( 16146 ) <westonsd@@@canncentral...org> on Friday February 13, 2004 @04:00PM (#8272845) Homepage
    Livschitz's observation about how programmers work in metaphors, while mathematicians work in pure syntax

    It's an interesting thought, but it's not necessarily true at all. Mathematics is metaphors, even though they're often very abstract. But it's more like working with somebody else's codebase, most of the time. Unless you're striking out and creating your own formal system, you are working with metaphors that someone else has come up with (and rather abstract ones at that).

    The good news is that most mathematicians have an aesthetic where they try to make things... as clean and orthogonal as possible.

    The bad news is that terseness is also one of the aesthetics. :)
  • by Dirtside ( 91468 ) on Friday February 13, 2004 @04:00PM (#8272846) Journal
    - I accept that humans are fallible, and as long as software is produced by humans, or by anything humans create to produce software for them, the software will have bugs.

    - I accept that there is no magic bullet to programming, no simple, easy way to create bug-free software.

    - I will not add unrelated features to programs that do something else. A program should concentrate on one thing and one thing only. If I want a program to do something unrelated, I will write a different program.

    - I will design the structure of the program, and freeze its feature set, before I begin coding. Once coding has begun, new features will not be added to the design. Only when the program is finished will I think about adding new features to the next version. Anyone who demands new features be added after coding has begun will be savagely beaten.

    - A program is only finished when the time and effort it would take to squash the remaining obscure bugs exceeds the value of adding new features... by a factor of at least two.

    - If I find that the design of my program creates significant problems down the line, I will not kludge something into place. I will redesign the program.

    - I will document everything thoroughly, including the function and intent of all data structures.

    - I will wish for a pony, as that will be about as useful as wishing that people would follow the above rules. :)
  • Tools (Score:2, Insightful)

    by tsanth ( 619234 ) on Friday February 13, 2004 @04:01PM (#8272860)
    That brings up a very good point: why limit yourself to using one tool for every kind of software development? Just as assembly, C, Perl, and even VB have their uses in programming, industry, and science, there exist programming environments where it'd be useful to deal only with abstractions.

    To wit: many of my peers in CS came into CS because they wanted to program--boy, were they in for a rude awakening! It's true: there will always be a need for Lego-builders, but I think that it's useful to have languages specifically targeted toward "Lego builders," and others specifically targeted at "builders who use Legos."

    Smalltalk and Ruby have been recommended to me already; perhaps one of those is more along the lines of a language that's more targeted at builders-who-use-Legos?
  • by e-Motion ( 126926 ) on Friday February 13, 2004 @04:08PM (#8272963)
    To produce bugless software we need to start with software designs that are provably correct and then produce code that is provably in line with the design. Using more objects that more closely model the "real world" is an invitation to produce a larger number of bugs as the ambiguity of the real world infects the design and implementation of the program.

    You're absolutely right. Some people think that turning to the "real world" for guidance is a good idea, but I've found that it only confuses things. Nobody knows how to model real-world objects and relationships inside a computer in a way that suits all potential uses of that model. I've found that most discussions about software models for the real world in OO projects tend to degrade into analyzing the structure of various English sentences and considering the plethora of ways that a person _could_ understand a relationship. If there are tons of ways to represent the real world, how is the real world supposed to help produce bug-free software? Why should I believe that the answer lies in the real world instead of the software development process itself?
  • by scorp1us ( 235526 ) on Friday February 13, 2004 @04:12PM (#8273021) Journal
    I was thinking about the same exact thing the other day. It's 2004; where are our common primitives?

    glibc is 'it' but it still gets updates, bug fixes, etc. It is not used on every platform. Yet it gets recreated over and over again.

    Then I thought about .NET. Finally, any language has an interface to any other language's compiled objects. So we're getting closer.

    But I think the biggest problem is the lack of software engineering from flow-charting. As mentioned, flowcharts allow us to map out or learn the most complicated software.

    I think we can accomplish all she describes inside an OOP language, be it Java or C++ or Python. The master-slave relationship is easily done. The cooler thing that I would like to see more of is the state.

    Rather than a process starting off in main(), with init code run in constructors, each process and object needs to have a state associated with it. This state is actually a stack, not a variable.

    my_process {

    register state handler 'begin' as 'init'
    register state handler 'end' as 'exit'
    state change to begin
    }

    init() {
    do_something();
    register handler condition (x=1, y=1, z=1) as 'all_are_one'
    }

    all_are_one() { // special state
    state change to 'in_special_case'
    do_something_else();

    pop state
    if (exit_condition) exit()
    }

    exit(){
    while (state_stack.length) pop state
    }

    What I'm trying to do is model the logical process with the execution of code, but in an asynchronous manner. Sort of like a message pump, but it's been extended to take process stages and custom events.
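
    One way to read that pseudocode as running code (a sketch only, in Java; the state stack and handler registry below are my guess at the intent, not an existing API):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.Map;

        // Sketch of the state-stack idea above: states are pushed rather than
        // assigned, and entering a state fires whatever handler was registered.
        public class StatefulProcess {
            private final Deque<String> stateStack = new ArrayDeque<>();
            private final Map<String, Runnable> handlers = new HashMap<>();

            void register(String state, Runnable handler) { // "register state handler"
                handlers.put(state, handler);
            }

            void changeTo(String state) {                   // "state change to ..."
                stateStack.push(state);
                Runnable h = handlers.get(state);
                if (h != null) h.run();
            }

            void popState() {                               // "pop state"
                stateStack.pop();
            }

            public static void main(String[] args) {
                StatefulProcess p = new StatefulProcess();
                p.register("begin", () -> System.out.println("init: do_something()"));
                p.register("in_special_case",
                           () -> System.out.println("all_are_one: do_something_else()"));
                p.changeTo("begin");            // fires the 'init' handler
                p.changeTo("in_special_case");  // fires the condition handler
                p.popState();
            }
        }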
  • by boomgopher ( 627124 ) on Friday February 13, 2004 @04:14PM (#8273049) Journal
    True, but good abstraction skills are really important.

    Some of the guys I work with think that "tons of classes == object-oriented", and their code designs are f-cking unreadable and opaque. Whereas a few, thoughtfully designed classes that best model the problem would be magnitudes better.

  • by *weasel ( 174362 ) on Friday February 13, 2004 @04:17PM (#8273090)
    Exactly. Unit testing is simply treating the symptoms instead of treating the disease. It's at the wrong level of their debate.

    They want software to be intuitive, with levels of fault, and 'programming' to be done nearly entirely at the design level. When building a house, design decisions are generally the biggest concern. With programming, it's more often the actual construction.

    If your roofers botch the shingling you might get a leak, but the result is not catastrophic the way an improperly designed roof would be. In software, any construction bug is catastrophic, while any design bug results in more graceful shortcomings.

    Now, what these paid-for-pundits are suggesting might not be possible, but that's the level they're holding their discussion on.

    They want coding to be done entirely on the design level. Rough carpentry is not done by architects. (Though to extend the analogy, 'materials' coders would certainly still exist to create new building-blocks for designers to use).

    Ideally this is what object-oriented and 'visual' programming were both supposed to do for the industry. It seems to me that they're primarily lamenting that even with all the effort invested, no existing paradigm has actually delivered on the root promise.

    Of course, this entire discussion is largely moot for actual coders. While they're pontificating from ivory towers, we've got deadlines - and they aren't giving us anything we can use, they're just restating the problem.
  • by Greyfox ( 87712 ) on Friday February 13, 2004 @04:19PM (#8273113) Homepage Journal
    All the process and buzzwords in the world will not help you if your programmers don't understand your needs.

    Want to make better software? Make sure your programmers understand what you're trying to do and make sure that enough people have "the big picture" of how all the system components interact that your vision can be driven to completion. It also helps if you have enough people on hand that losing one or two won't result in terminal brain drain.

    Recently management seems to be of the opinion that people are pluggable resources who can be interchangeably swapped in and out of projects. Try explaining to management that they can't get rid of the contracting company that wrote a single vital component of your system because no one else really understands how it works. They won't get it, and you'll end up with a black box that no one really knows how to fix if it ever breaks.

    I'm sorry, but I've heard this argument from software design "purists" one too many times. The term "provably correct" as applied to software *design* is laughable. For software development we have unit tests, which take some effort but are well worth the overhead. The only way to prove that a software design matches real-world requirements is to implement the design, and completely test its interaction within the domain.

    Simply put, the proof is in the pudding. You can cleanly define (and test) the interactions between separate software components, but outside the software, there're just too many variables. The real world presents an infinite number of ways to befuddle your code. It is infinitely detailed, and thus so is the problem domain.
  • by Anonymous Coward on Friday February 13, 2004 @04:24PM (#8273188)
    I have tried to learn Perl, and I simply do not like it as a programmer. (I do not like it, which is why I choose not to use it, but that doesn't mean that it doesn't work for others... if you like it, then great. I don't want to start a flame war.)

    As a programmer, I prefer C/C++ because things are pretty explicit, ie. you need to define your variables explicitly before you use them, and there is no guessing involved.

    However, with Perl, there are so many things that if they aren't present, they are assumed. It is very "hacky" and makes it very hard to read. When things are assumed, to me as a programmer, it just means it creates uncertainty, and this inevitably leads to bugs.

    The same goes with most scripting languages, like PHP. I use PHP because it is very easy to use, but it also suffers from similar bugs (ie. being able to use variables before explicitly declaring them, etc).

    Like I said, if you love Perl, that's great, and a good Perl programmer will know all this, and will probably make very few bugs, just like a good C programmer will make very few bugs in their code. My point is that for the lesser Perl programmers, it is very easy to write code that is simply horrible.
  • by Slak ( 40625 ) on Friday February 13, 2004 @04:26PM (#8273212)
    The zero-th reason software is buggy is the state of requirements. I've seen so many requirements documents that lack any form of internal consistency.

    These issues don't seem to be addressed until the rubber hits the road - when code starts compiling and demos are given. The pressure to get to market builds as these issues are being resolved. Unfortunately, that's when The Law of Unintended Consequences strikes, scrapping much of the existing codebase.

    How can a programmer make "their own code solid" when the work it is supposed to perform is not clearly defined?

    Cheers,
    Slak
  • by richieb ( 3277 ) <richieb@gmai l . com> on Friday February 13, 2004 @04:26PM (#8273217) Homepage Journal
    She says:
    It is widely known that few significant development projects, if any, finish successfully, on time, and within budget.

    What bothers me about statements like this is that no one is suggesting that perhaps our estimation and budgeting methods are off.

    What if someone scheduled one week and allocated $100 for the design and construction of a skyscraper? When the engineers failed to deliver, who should be blamed? The engineers?!

  • by hikerhat ( 678157 ) on Friday February 13, 2004 @04:35PM (#8273324)
    Not sure how you got modded up to +5, Insightful, given that there were no shameless Java plugs at all. I went to the article and searched for the word Java.

    The first two hits in the article point out that java was architected with security in mind. This is simply true, and hardly a shameless plug.

    The next hit is in the question "How well do you think modern programming languages, particularly the Java language, have been able to help developers hide complexity?"

    The answer starts with the word "Unfortunately" and goes on to explain that not even OO languages reduce complexity enough when an app gets big enough. The word "Java" isn't used once in the answer. That certainly isn't a plug.

    The final hit is in the question "Do you have any concrete advice for Java developers? And are you optimistic about the direction software is headed?"

    Note some good general purpose advice is given in the answer, and the term "Java" isn't used once in the answer.

  • by Tablizer ( 95088 ) on Friday February 13, 2004 @04:46PM (#8273479) Journal
    There are some controversial claims being made in that article:

    The preventive measures attempt to ensure that bugs are not possible in the first place. A lot of progress has been made in the last twenty years along these lines. Such programming practices as strong typing that allows compile-time assignment safety checking, garbage collectors that automatically manage memory, and exception mechanisms that trap and propagate errors in a traceable and recoverable manner do make programming safer.

    Fans of "dynamic" languages will surely balk. There is no evidence that strong/static typing makes programmers more productive or reduces total errors. Static/strong typing is not a free lunch: it adds more formality to the code, making it larger, and more code often means more bugs, partly due to slowing the reading of code. ("Static" typing and "strong" typing are generally different concepts, but tend to go hand-in-hand in practice.) There are also nifty things that can be done with dynamic evaluation that are hard or code-intensive to emulate with compiled languages. Often the same Java program rewritten in Python is 1/3 the size.

    However, it is sometimes said that the best programmers do best with dynamic languages and mediocre ones do best with strong/static typing.

    Further, some feel Java's error handling mechanisms are unnecessarily complex and "glorified goto's" or "glorified IF statements".

    In regard to recovery, I can't think of a recent technological breakthrough. Polymorphism and inheritance help developers write new classes without affecting the rest of the program.

    It appears to be a trade-off. Polymorphism and inheritance assume a certain "shape" to future changes. If the future fits those change patterns, then change effort and scope are smaller. However, many feel that such change patterns are artificial or limited to certain domains. For example, adding new operations to multiple "subtypes" often requires more "visit points" under polymorphism. It would be a single new function in a procedural version, and other code need not be touched. It is the classic "verb changes" versus "noun changes" fight that always breaks out when OO and procedural fans meet and fight over code being more "change friendly".
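
    A toy version of that trade-off (hypothetical Java; the shapes and the describe operation are invented for illustration). In the OO style, a new operation means touching every subclass; in the procedural style it is one new function, but a new shape then means touching every function:

        // OO style: each subclass is a "visit point". A new operation
        // (say, describe()) would have to be added to every subclass.
        abstract class Shape {
            abstract double area();
        }

        class Circle extends Shape {
            double r = 1.0;
            double area() { return Math.PI * r * r; }
        }

        class Square extends Shape {
            double side = 2.0;
            double area() { return side * side; }
        }

        // Procedural style: the new operation is one function and no other
        // code is touched -- but a new Shape subtype would mean editing
        // every function like this one.
        class ShapeOps {
            static String describe(Shape s) {
                if (s instanceof Circle) return "circle, area " + s.area();
                if (s instanceof Square) return "square, area " + s.area();
                return "unknown shape";
            }

            public static void main(String[] args) {
                System.out.println(describe(new Circle()));
                System.out.println(describe(new Square()));
            }
        }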

    Object-oriented programming allowed developers to create industrial software that is far more complex than what functional programming allowed.

    I think she means "procedural", not "functional". But there is no evidence that either is the case. There is no evidence that well-done procedural (usually with a RDBMS) systems are more buggy or costly than OO. Management satisfaction surveys by Ed Yourdon put them pretty much even.
  • by Paradox ( 13555 ) on Friday February 13, 2004 @04:47PM (#8273487) Homepage Journal
    It was probably modded funny because it's such blatant Perl-first-ism. Perl doesn't hold a monopoly on these things, perl does have a problem with readability, and perl does have many surprising flaws. It's really not the point here anyways. Perl doesn't really satisfy any of the (reasonably good) points made in the article, at least not in an acceptable fashion.

    I thought it was pretty funny anyways.
  • If I follow your train of thought to its natural conclusions, I should arrive at the idea that when building a bridge, it is not necessary to prove that the finished construction will be able to withstand the load that it bears. Would you agree with that assessment?

    No. But you can only use the science you know to help you prove that the thing will stand up. Furthermore, specs for bridges tend to be pretty clear.

    In any case, if you look into the history of bridge building you'll find that for the longest time the formula for the strength of a cantilever beam was wrong (this guy Galileo got it wrong). So when the engineers building bridges reduced the safety factor (to speed up construction and reduce cost), bridges started falling down.

    In 19th-century England a lot of iron bridges collapsed, despite the fact that it was "proved" that they were strong enough. Metal fatigue was not understood then.

    Lastly, your request for a proof exemplifies my point. You cannot offer such a proof and that is why Apache has to be patched.

    Why not? At least you have a spec for the HTTP protocol, which is pretty precise as such specs go. If you cannot offer a proof in the case where a precise spec exists, what hope is there for other software?

    Finally, at most you can prove that a program works according to its specification. But what about the correctness of the specification itself?

    Unfortunately, computer science is still in its relative infancy

    Exactly. But what we are talking about is software engineering. Engineers are paid to build things that work. They are free to use whatever helps them in their tasks; if there is good science, they use it. If there is no science, they have to hack.

    Try telling your next customer that implementing his system will take 50 years, because the science of translating his imprecise requirements into software hasn't been invented yet.

  • by Anonymous Coward on Friday February 13, 2004 @05:01PM (#8273712)
    I felt the same way when I first started learning Perl. After I got to know it, man oh man, everything snapped into place. Very fast development.

    Not much different than the first time you saw any significantly different programming language.

    Although C and C++ are certainly fine languages, to truly become a programming master you should at least understand the benefits of other systems. Learn a functional language like Erlang or Haskell. Learn Lisp. Learn a very high-level scripting language like Perl and/or Python. Learn a 100% pure OO language like Smalltalk.

    They all have their own advantages.

    Too bad no one has combined the good features into a master programming language. All of the things mentioned in the article are available in certain languages but no one has put them together into something useful.
  • by Tablizer ( 95088 ) on Friday February 13, 2004 @05:02PM (#8273733) Journal
    This is life in a global economy. Live with it. If you want to love what you do for a living, be willing to accept less money for it

    They usually don't give you that option. How many offshored developers were asked, "You can go, or get 25K a year". Nor are there many programmer job postings for 25K.

    A truly global economy would let people shift around as easily as the jobs do. Instead we are limiting the variety of jobs available in the US, which is a security concern if you ask me. If we are cut off in a disaster or attack, all the marketers and managers here are not going to know how to do anything real. Other countries do all the real work now.
  • by Anonymous Coward on Friday February 13, 2004 @05:10PM (#8273863)
    I've been thinking about this complexity-related issue, and one direction in which I think we are heading (and we are already doing this) is writing domain-specific languages.

    Thus, the building blocks could be written efficiently, and under the control of professional programmers, while the actual application environment is controlled by a cleaner language syntax suitable for anyone to use.

    This is why I think Java tries to be in both camps and is not succeeding. Same with C#. Ditto for Python and Perl.

    Javascript is somewhat closer, but more domain-specific languages could make sense long term. Examples:

    game development (Jamatic), end user UI development (Hypercard), business (Excel macros), 3d graphics (various modelling languages, VRML)...

    --Kent
  • by BrittPark ( 639617 ) on Friday February 13, 2004 @05:35PM (#8274186) Homepage Journal
    Because of human nature and because of the extreme complexity of the ideas we attempt to encapsulate in non-trivial software, buglessness is not an achievable goal, regardless of the methodology of the day. The interviewee seems to think that there is some magic bullet waiting (in new tools or methodologies I guess). This shows a fundamental rift between her and reality, and makes her opinions fundamentally suspect.

    The goal in any real software project is to meet customer's (and I use that in the broadest sense) expectations adequately. What is adequate? That depends on the software. A user of a word processor for instance is likely to not mind a handful of UI bugs or an occasional crash. A sales organization is going to expect 24/7 performance from their Sales Automation Software.

    The canny programmer (or programming group) should aim to produce software that is "good enough" for the target audience, with, perhaps, a little extra for safety's sake (and programmer pride).

    Of course there are real differences among the tools and methodologies used in getting the most "enough" per programmer hour. Among the ones I've come to believe in are:

    1. Use the most obvious implementation of any module unless performance requirements prohibit.

    2. Have regular code-reviews, preferably before every check-in. I've been amazed at how this simple policy reduces the initial bug load of code. Having to explain one's code to another programmer has a very salutary effect on code quality.

    3. Hire a small number of first class programmers rather than a larger number of lesser programmers. In my experience 10% of the programmers tend to do 90% of the useful work in large software projects.

    4. Try to get the technical staff doing as much programming as possible. Don't bog them down with micromanagement, frequent meetings, complex coding conventions, arbitrary documentation rules, and anything else that slows them down.

    5. Test, test, test!
  • Re:Well... (Score:2, Insightful)

    by AeroIllini ( 726211 ) <aeroillini@NOSpam.gmail.com> on Friday February 13, 2004 @05:37PM (#8274222)
    I never did get around to asking him how he knew that, or if it was kind of a gut feeling he had.

    It must have been intuition.

    But in all seriousness, "intuitive" is a synonym for "personal preference" when it comes to abstract concepts like computing. After watching highly successful, highly intelligent professors (with PhDs, mind you) struggle with the basic concepts of computers, such as installing software, creating shortcuts, transferring files between two computers, etc., it became abundantly clear to me that "intuitive" is only what the creator of the "intuitive" system preferred. In fact, I know several professors who still write their highly complex numerical simulation code in FORTRAN, because "it's more like English than other languages." FORTRAN is highly counter-intuitive, but it's a damn fast number-crunching language (rivaled only by C, if I remember my benchmarks correctly).

    Computers are obtuse, and rightfully so. Turning millions of micropulses of electricity flowing through tiny bits of various metals into a 2-dimensional dancing paperclip boggles the mind. In order to write complex applications with millions of lines of code, and have that code run at a reasonable speed and/or efficiency, the programmer is going to need some knowledge of how the computer works. That will require that s/he has access to things like memory locations as s/he codes, and also be fairly intelligent. If all you need to write is a little macro for inserting customer records into Excel, then efficiency is not really a factor and "dumbed-down" development environments like Visual Basic are just fine for the purpose.

    I'm all for making computers easy to use, but "intuitive development environments" are an Eldorado. Let's focus instead on creating programming languages that are not only stable, but secure, cross-platform, robust, *consistent*, and above all, efficient. Even if they are a little obtuse: steep learning curves are not the problem.
  • by ojQj ( 657924 ) on Friday February 13, 2004 @05:38PM (#8274232)
    Exactly.

    There are too many Marketing droids talking about the solution they visualize without ever clearly formulating the use case.

    A lot of people can make feature suggestions. Without a clear picture of what the user actually wants to accomplish, the feature suggestions can't be evaluated for usefulness, nor can better suggestions be made to solve the user's problem. Varying solutions also can't be compared on their merits. And so it gets down to an "I have more contact with the customer, so I must have some magical ability to divine what the best solution is for his as-yet-unspecified task" pissing contest.

    If marketing departments would start doing their jobs -- gathering use cases to make requirements out of -- then software development groups would be able to make more high quality feature suggestions and would have more fun implementing them.

  • by BinxBolling ( 121740 ) on Friday February 13, 2004 @05:40PM (#8274254)
    When development time runs over schedule, push the damned ship date back!

    Not gonna happen. The vast majority of the time, the economic penalty associated with being late is much greater than the economic penalty associated with being buggy.

  • A good idea (Score:2, Insightful)

    by namidim ( 607227 ) on Friday February 13, 2004 @05:40PM (#8274255)
    I think that the author makes some very good points about the high-level problems of OOP, and I think the same also applies to AOP (aspect-oriented programming). Both approaches force you to take a design that consists of many different, well-understood paradigms and then force it into a language that supports only one or two of them. AOP gets people excited because it means you have two paradigms to work with instead of one. This is great, but it misses the point. We should be putting a whole set of different design paradigms into the next generation of languages, not just (last language)+1 of them.
    I think part of the reason that this broader problem is not fully realized is that you don't run into it until you have to update existing code with essentially new features, and even then hindsight is 20/20. It shows up there because you have to take the constrained language, look at it in terms of the many design paradigms you started with, add a new one, and then squish it all back in again. The result of this process may be very close to what you had before (this is when it's easy) or may be very different (this is often when we see a group scrap it all). Unfortunately, it's all too easy to just say "well, if I'd implemented it THIS way then the change would have been small," which ends up being almost as helpful as realizing you should have played the OTHER lottery number.
    The other reason I think we don't see any movement in this direction is that, much as with functional programming before it, once you have spent the last 5, 10, or 50 years thinking of everything in terms of OOP it is very hard to see where it's letting you down.
  • Re:Objects (Score:4, Insightful)

    by Coryoth ( 254751 ) on Friday February 13, 2004 @05:43PM (#8274299) Homepage Journal
    There is a distinction between metaphor and definition. If I say a function is a machine for turning elements of the input set into elements of the output set, then this is a metaphor in which I have used the word machine to describe a function. I suspect that what you are arguing is that the word function itself is a metaphor. This is incorrect. The word function has a definition. A function f:X->Y is a subset of XxY such that for every x in X there is a unique y in Y such that (x,y) is in the subset. This is not a metaphor.
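
    In symbols, that definition reads (a direct LaTeX transcription of the sentence above, nothing new):

        f \subseteq X \times Y \quad \text{with} \quad \forall x \in X \;\; \exists!\, y \in Y \;:\; (x, y) \in f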

    Which doesn't mean mathematicians don't use metaphor a lot. The catch is that they like to have solid, rigorous definitions to tie things back to. Often when doing mathematics you will think in terms of the metaphors to perceive a way to proceed, and then try to explain the process you just considered in terms of definitions.

    If you want a metaphor for that: published mathematics tends to be like assembly code: low-level, with strong static typing, and everything very explicit. That doesn't mean that mathematicians don't write it in their heads in Python or Java and compile it. Think of research mathematicians as powerful metaphor compilers as well as programmers.

    Jedidiah.
  • by AeroIllini ( 726211 ) <aeroillini@NOSpam.gmail.com> on Friday February 13, 2004 @05:49PM (#8274389)
    Want to make better software? Make sure your programmers understand what you're trying to do and make sure that enough people have "the big picture" of how all the system components interact that your vision can be driven to completion.

    Well said. It's an extension of the "cubicle nature" of the working world. Here's your cubicle: you are part of the whole, but you're by yourself, and can't really see what's going on around you. Here's your project: it's a component of a larger piece of software, but you don't really need to know how the other pieces work. Just code your part and be happy.

    I'm not saying we should ditch the cubicle (where else would we hang our Dilbert clippings?) but we should certainly ditch the cubicle atmosphere surrounding technical projects. Let everyone in. Big Picture, people.
  • by dutky ( 20510 ) on Friday February 13, 2004 @06:02PM (#8274571) Homepage Journal
    richieb [slashdot.org] wrote
    She says:

    It is widely known that few significant development projects, if any, finish successfully, on time, and within budget.


    What bothers me about statements like this is that no one is suggesting that perhaps our estimation and budgeting methods are off.


    What if someone scheduled one week and allocated $100 for the design and construction of a skyscraper? When the engineers failed to deliver, who should be blamed? The engineers?!


    First, there are lots of folks who have been saying, for a long time, that our estimation and budgeting methods are inadequate: Fred Brooks [amazon.com] and Tom DeMarco [amazon.com] are just two of the best known advocates of this position. It seems, unfortunately, that it is not a message that many folk like to hear. It is, I guess, easier (and more expedient) to blame the tools or the craftspeople than to figure out what really went wrong.

    Second, your example would be more apt if the building materials (steel and concrete) or the blueprints and construction tools were being blamed for cost overruns and schedule slips. No one would suggest that building skyscrapers would be easier and more reliable if the bricks and jackhammers were more intuitive.

    What she is saying smacks of silver bullets (see Fred Brooks' Mythical Man-Month, chapter 16: No Silver Bullet - Essence and Accidents of Software Engineering [virtualschool.edu] (and succeeding chapters in the 20th Anniversary Edition)) and just can't be taken seriously. To summarize Brooks:

    There is simply no way to take the programming and software engineering tasks and make them easy: they are difficult by their very essence, not by the accident of what tools we use.

    While we may be able to devise languages and environments that make the creation of quality software by talented experts easier, we will never be able to make the creation of quality software easy and certain when undertaken by talentless hacks, amateurs and dilettantes. Unfortunately, the latter is what is desired most by managers, because it would mean that the cost of labor could be greatly reduced (by hiring cheaper or fewer warm bodies). It also happens to be the largest market, at least in the past two decades, for new development tools: think of the target markets for VisualBASIC, dBASE IV, Hypercard and most spreadsheets.
  • Re:Objects (Score:2, Insightful)

    by Suidae ( 162977 ) on Friday February 13, 2004 @06:05PM (#8274610)
    why the heck do you have to create and instantiate an object just to write a simple procedure with no inputs and no outputs?

    Well, what else are you going to do with a 5GHz 64bit processor?

    Besides using it as a space heater that is.
  • by ajs ( 35943 ) <{ajs} {at} {ajs.com}> on Friday February 13, 2004 @06:05PM (#8274611) Homepage Journal
    Believe me, I understand you completely (other than the pejorative use of the ill-defined term "scripting").

    I used to feel the same way after having programmed in C for many years. Some yahoo made me work with Perl, so I treated it like any other language that I had to pick up... and I hated it. It was full of little special cases and everything broke the rules in at least 3 ways. Most languages strove to remain as context-free as possible, but Perl was awash in as much context-sensitivity as Larry Wall could manage to make his C-compiler-stress-test of a tokenizer handle!

    So, why am I a staunch Perl advocate many years later?

    1. Because I can think in Perl better than any other language
    2. Because Perl favors human beings who have to program, not compilers and interpreters that have to parse the code
    3. Because I got orders of magnitude more work done in Perl than C, C++, awk, Java, LISP, or any other language I could find.

    "However, with Perl, there are so many things that if they aren't present, they are assumed. It is very "hacky" and makes it very hard to read. When things are assumed, to me as a programmer, it just means it creates uncertainty, and this inevitably leads to bugs."

    That's the theory... and that's what I was taught in school... It seems to make sense.

    And yet, there is this massive body of good code written in Perl. There is also a ton of BAD code written in Perl. Just check out bugzilla if you want to see the worst case scenario.

    But then ask yourself... is that Perl's fault, any more than bad C++ code (and man, I've seen some amazingly bad, impossible-to-debug C++) is C++'s fault? I judge a programming language on the basis of what good programmers can do with it. If you want bondage languages that force bad programs to be minimally debuggable, use Python, but don't expect to be as productive in a language that forces you to think in some particular way about your problem.
  • by vt0asta ( 16536 ) on Friday February 13, 2004 @06:22PM (#8274799)
    I fear your comment is going to get lost in the crowd of people who don't understand Perl. The people who don't see that Perl maps exceptionally well to many problem spaces.

    Grokking Perl seems to be like grokking pointers in C. Some people seem to be simply born without the part of the brain that understands them.

    Perl is context-aware/intuitive. It understands the need to be able to easily take data from any source, chop it up, mangle it, and then easily spit it back out. There isn't much to learning Perl syntax, but it will insist that you memorize some traditional things, like operator precedence, syntax, and the basic Perl functions. Not hard at all when you get down to it.

    Perl is inclusive. There is definitely more than one way to do it. This is a "good thing", because the one way that works might not be the best way. Similar problems sometimes require slightly different solutions. Perl has online documentation out the wazoo. perldoc rocks, and you have a list of up-to-date books that rival O'Reilly's (many times by the same authors). Perl modules have built-in unit testing. Perl is a language and a culture that values and facilitates "testing".

    Pattern recognition is something Perl excels at, especially the type of pattern recognition and logic handling that is required for most applications. Need something fancier? Like fuzzy? Neural net? Look to CPAN. Using regular expressions in Perl takes one line of code; no need to worry about making a regex struct or object, then compiling the syntax, then running the match, and then deallocating the regex struct.

    You're right: the same people who pan Perl for being opaque are typically the same ones who use method overloading, polymorphism, and other abstraction and obfuscation techniques and then claim their code is more readable and easier to understand. They also tend to be the same people who believe Perl is only good for one-off scripts and hacks. To which I say: that is only the beginning of what Perl is great at.
  • by hotpotato ( 569630 ) <guygurari@@@gmail...com> on Friday February 13, 2004 @06:23PM (#8274811)
    Perl is very good at certain things, and terrible at others. CPAN is an excellent example of what Perl is incredibly good at: A huge collection of relatively small components that perform very specific tasks.

    Perl fails, however, when it comes to scalability. Its lack of compile-time type safety and encapsulation, together with a very exposed, ad hoc OO model, makes creating large, complex structures exceedingly difficult. Not to mention the nightmare of trying to maintain an existing piece of code that runs longer than a couple of tens of thousands of lines.
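
    To make the type-safety point concrete, here is a sketch (in current Java, with invented names) of the kind of error the compiler rejects outright, where Perl happily defers the problem to runtime:

        import java.util.ArrayList;
        import java.util.List;

        public class TypeSafetyDemo {
            public static void main(String[] args) {
                List<String> names = new ArrayList<>();
                names.add("Livschitz");
                // names.add(42);        // rejected at compile time: an int is not a String

                // In Perl, a typo'd hash key just yields undef at runtime:
                //   my %user = (name => "Livschitz");
                //   print $user{nmae};  # no compile error, silently prints nothing

                System.out.println(names.get(0));
            }
        }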

    Don't get me wrong: I love Perl. It is my first choice when writing tools that aid me in development or tie up some loose ends from several systems. But it pales when compared with Java for creating large systems.

  • by NoOneInParticular ( 221808 ) on Friday February 13, 2004 @06:49PM (#8275081)
    As someone who's programmed OO-style for a long time now, I can't agree more. The problems with a pure OO style usually become apparent when you take over a program. OO has really managed to replace the spaghetti code of the old days with a macaroni of little bits of code that work together in a completely opaque way.

    Boy, sometimes I do long for managing a C-based project.

  • by Rick and Roll ( 672077 ) on Friday February 13, 2004 @07:06PM (#8275218)
    Master Programming Language??? There are conflicting ideas that prevent this from happening. How are you going to get UTF-8 that is fast and light on storage when your strings are lists of integers? How are you going to combine assigning types to variables with having every value be a pointer to an object of any type? It doesn't work.
  • by Paradox ( 13555 ) on Friday February 13, 2004 @07:12PM (#8275269) Homepage Journal
    People still use Smalltalk. I wouldn't say that in the current world Pascal is much more popular than Smalltalk, although you could argue more lines of it are in deployment.

    When you get down to it, the concepts pioneered by Smalltalk are still ahead of their time. How can I say such a thing? Well, watch our modern languages evolve. People keep taking stuff from Smalltalk, and languages are slowly glomming on more and more parts of it. Of course, the same could be said of CLOS. This is the path to language elitism, which no language currently deserves.

    A safer statement to make is that the concepts these languages used were advanced, although the particular instance might not have been.

    People don't keep digging up these languages out of pure stubbornness. They keep turning back to them because they were good ideas, and they worked well.
  • by swframe ( 646356 ) on Friday February 13, 2004 @08:17PM (#8275772)
    If you can't determine whether an arbitrary program halts on arbitrary input, then you can't tell whether that program has a bug.
    If you want a language without bugs, you have to use a language that cannot express interesting problems.
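
    For readers who haven't seen the argument, the classic diagonalization fits in a few lines. A sketch in Java that compiles but exists only to make the contradiction concrete; halts() is hypothetical, and that's the point -- no real implementation of it can exist:

        public class Halting {
            // Hypothetical oracle: true iff running 'program' on 'input' terminates.
            // No such method can actually be written; this stub only marks the hole.
            static boolean halts(String program, String input) {
                throw new UnsupportedOperationException("cannot exist");
            }

            // Feed a program its own source. If the oracle says it halts, loop
            // forever; if it says it loops, return. Either answer is wrong,
            // so the oracle is impossible -- and "does this code have a bug?"
            // is, in general, just as undecidable.
            static void paradox(String selfSource) {
                if (halts(selfSource, selfSource)) {
                    while (true) { /* spin forever */ }
                }
            }
        }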
  • by whittrash ( 693570 ) on Friday February 13, 2004 @09:09PM (#8276156) Journal
    The syntax of all mainstream programming languages is rather esoteric. Mathematicians, who feel comfortable with purely abstract syntax, spend years of intense study mastering certain skills. But unlike mathematicians, programmers are taught to think not in terms of absolute proof, but in terms of working metaphors. To understand how a system works, a programmer doesn't build a system of mathematical equations, but comes up with a real-life metaphor whose correctness she or he can "feel" as a human being. Programmers are "average" folks; they have to be, since programming is a profession of millions of people, many without college degrees. Esoteric software doesn't scale to millions, not in people, and not in lines of code.

    In the article, her solution to error is to increase the tolerance for error, making direct mistakes unlikely or impossible because there is plenty of 'slop' in the system and you can't get a wrong answer. Theoretically, this lowers precision and increases the overhead of the system. Her solution to the difficulty of understanding programming is to make it so any idiot can understand it.

    To make an analogy, a programmer is like a bucket. Her solution to filling the bucket (writing code) is to submerge it in a larger pool. In that situation any old bucket will do; the bucket will always be full when placed in a pool, but you will then have to carry the entire pool if you want to move it. The question then becomes how much you can carry, not the performance of the bucket.

    She may well be right that intuitive programming would be easier, and that making programming more like regular language, with intuitive syntax, could be beneficial (more like programming a Star Trek AI computer than what we have now). But this would also shift the nature of the problem from design and architecture to performance and underlying stability. Any fool could write code without knowing how it worked. Some shortcuts may be appropriate in certain cases, but relying on these kinds of methodologies in critical situations could lead to disaster; they have a built-in unreliability factor. If some company thinks it can buy this system and then expect bulletproof security, reliability, and high performance, it is probably in for a rude awakening. It should expect 'good enough' performance, which is what it is getting already.

    The only way to do exceptionally good work in a complex situation is to have the knowledge and experience for what you are doing at all levels and the ability to execute. Allowing programmers to be ignorant of how a computer works doesn't seem like a solution to me. The real problem with crappy software is companies that don't care and consumers who don't know any better.

  • Anyone can call themselves a programmer, or even a software engineer. Someone who graduates from a BCS program is required[1] to do zero practical work in the field before they get their degree - which is the height of their qualifications.

    Engineers may graduate, but they require at least a few years of work before they can be licensed. Lawyers have to pass tests beyond those based in the fantasy world of academia. Medical doctors require years of on-the-job training under close supervision before they are turned loose. All of these professions are self-governed and discipline their members if - nay, when - one of them screws up. Potentially they can lose their license.

    The IT world has no such professional designation, and no such self-governing body. Companies and individuals in the IT world can consistently produce almost criminally negligent code and, provided they bid low, will survive.

    MDs, PEngs (and even lawyers) can always refuse to do something if it is clearly dangerous, unsafe, or illegal. Their clients can't really go anywhere else to get that task done, as all professionals are bound by the same rules.

    Even trades - plumbers, carpenters, electricians, pipe fitters... - have some non-academic certification process. Most, beyond a (say) two-year school program, have to do years of apprentice work before they can be qualified. They are required by law to build things to some safety standard: building code, electrical code, fire code...

    Anyone can call themselves a programmer.

    The closest thing the IT world has is various certs from for-profit companies. But they are generally for variations on systems administration rather than programming. And while, so far as I know, they can't be revoked for cause, they do all expire after some finite time.

    What the IT world needs is the equivalent of a PEng professional 'grade' designation for, e.g., people with 4-year BCS-level schooling, and also a trade-grade designation for 2-year community college types. From there you implicitly get a higher quality product, because the people designing the product (PEng-grade types) and the people implementing it (trade-grade types) have obligations beyond just the customer. They would have professional responsibilities, violation of which could cause them to lose their respective licenses. This would solve most of the bugs caused by cutting corners to save on cost, releasing before it's done, etc. By no means all, but a lot.

    [1] Yes, some schools have Co-Op programs. But I know of none where they are required.
  • by eraserewind ( 446891 ) on Friday February 13, 2004 @10:06PM (#8276539)
    I judge a programming language on the basis of what good programmers can do with it.
    Unfortunately that is the exact opposite of what Software Engineering is all about. Any manager or architect worth his salt will judge a language by what bad programmers can do with it. Relying on having good programmers available is a crazy risk to take :-)
  • by waveman ( 66141 ) on Friday February 13, 2004 @10:12PM (#8276575) Homepage
    OO has one paradigm: make everything an object. This is absurd.

    If you look through the Java libraries there are numerous examples where things are stretched to make them objects. You see the same thing in Java applications.

    Example: colors. Is a color an object, or an attribute of a point on a surface? Apart from anything else, Java having colors as objects means you have to create millions of objects, or build a color-object cache, if you deal with large numbers of colors.
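
    For what it's worth, the standard workaround is exactly the cache mentioned above: a flyweight, so repeated RGB values share one immutable Color. A minimal sketch (the class and method names are mine, not from any standard API):

        import java.awt.Color;
        import java.util.HashMap;
        import java.util.Map;

        // Flyweight cache: one shared Color per distinct RGB value,
        // instead of millions of identical objects.
        final class ColorCache {
            private static final Map<Integer, Color> CACHE = new HashMap<>();

            static synchronized Color get(int rgb) {
                Color c = CACHE.get(rgb);
                if (c == null) {
                    c = new Color(rgb);   // java.awt.Color(int) unpacks 0xRRGGBB
                    CACHE.put(rgb, c);
                }
                return c;
            }
        }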

    Example: bignum. A number is a stage in a computation, not an object. Again, tons of garbage and impossibly slow code.
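
    A hedged illustration of the garbage problem with java.math.BigInteger: every arithmetic step returns a brand-new immutable object, so a tight loop churns out temporaries for the collector (the loop bound is arbitrary):

        import java.math.BigInteger;

        public class BignumChurn {
            public static void main(String[] args) {
                BigInteger sum = BigInteger.ZERO;
                for (int i = 1; i <= 1_000_000; i++) {
                    // add() cannot mutate in place; each call allocates a fresh
                    // BigInteger and abandons the old one to the garbage collector.
                    sum = sum.add(BigInteger.valueOf(i));
                }
                System.out.println(sum);   // 500000500000
            }
        }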

    I agree with the lady that we need extra concepts, but I disagree that we need a fixed set of concepts hard-wired into the language. We need a language powerful enough that you can add the concepts you need.

    Just as OO was added to Lisp without changing the base language, so can other concepts. Try that in Java.

    Java is better than the original 3GLs like Fortran 58 and COBOL 60 but not by an order of magnitude.

  • Re:Test? (Score:3, Insightful)

    by rwa2 ( 4391 ) * on Friday February 13, 2004 @10:16PM (#8276594) Homepage Journal
    There are two ways to assure quality work, both in manufacturing & software.

    One is to have inspectors look at everything and make sure it's right: QA, or "testing".

    The other is to actually fix the broken machines / processes that are stamping out broken widgets / buggy software in the first place. I think she's after this path.

    Of course, you still need both.
  • by E_elven ( 600520 ) on Friday February 13, 2004 @10:57PM (#8276812) Journal
    I must disagree on one part: CS has suffered from left-brainers for too long. The right-brainers are a useful bunch, and the best results come from mixing the two.

    The perceived problems are actually caused by the no-brainers.
  • by richieb ( 3277 ) <richieb@gmai l . com> on Saturday February 14, 2004 @12:31AM (#8277295) Homepage Journal
    Yes, it is a lot of paperwork, but then another reason bridges fell down in the 19th century was that the materials being used were not being monitored correctly.

    But there was a good reason: at the time we didn't know what to monitor for, never mind that the tools to find the problems (e.g. X-rays) did not exist.

    But the bottom line is that bridges had to be built - without the knowledge of materials we have today.

  • Re:Well... (Score:3, Insightful)

    by mattgreen ( 701203 ) on Saturday February 14, 2004 @02:17AM (#8277815)
    Wrong. If you are optimizing without the guidance of a profiler, you are wasting your time. Unless you knowingly put a bottleneck in, you are merely shooting in the dark. And don't confuse choosing proper algorithms and data structures with optimization, either.
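
    To put a number on "shooting in the dark": before touching anything, measure. A crude sketch (suspectedHotspot() is a made-up stand-in; a real profiler such as the hprof agent that ships with the JDK samples the whole program, which is what you actually want):

        public class MeasureFirst {
            // Stand-in for the code you were about to optimize on a hunch.
            static void suspectedHotspot() {
                double x = 0;
                for (int i = 0; i < 10_000_000; i++) x += Math.sqrt(i);
                if (x < 0) System.out.println(x);   // keep the loop from being elided
            }

            public static void main(String[] args) {
                long t0 = System.nanoTime();
                suspectedHotspot();
                long ms = (System.nanoTime() - t0) / 1_000_000;
                // If this number is a rounding error next to total runtime,
                // "optimizing" it is wasted effort.
                System.out.println("suspected hotspot: " + ms + " ms");
            }
        }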
  • by daniel_yokomiso ( 641714 ) on Saturday February 14, 2004 @08:24AM (#8278917) Journal
    Especially when you can get a team of second rate VB coders for the price of one haskell coder (if you can find one)
    This is an exaggeration. Let's say you pay US$15.00 per hour for a "second rate VB coder" and US$60.00 per hour for a "Haskell coder". Is it better to have 4 lousy coders who will miss all of your deadlines and deliver low-quality, unmaintainable code, or one good programmer who will ship the product earlier? I don't think the comparison holds.

    But really, do you want working code now? Or perfect code in 10 years? That's where the problem is. Time.
    Hmm, IME functional programming languages use fewer lines of code, are faster for delivering bug-free code, and are easier to maintain. So with FPLs you'll have near-perfect code now, versus crappy code after several missed deadlines.
  • Re:fluff (Score:1, Insightful)

    by Anonymous Coward on Saturday February 14, 2004 @10:29AM (#8279288)
    > Well.. some interesting ideas in there mainly flawed.

    > 1) The concept that software should 'feel' right to the developer. First of all this cannot be formalized in any sense of the word. Secondly even if it could be it is focused on the wrong target, it should feel right to the end user/problem domain experts. More about this in point 2.

    Before making a fool of yourself, did you realize that she was a champion chess player? I bet she knows a lot more than you do about the power of unconscious 'feel' applied to very complex problems.

    And, yes, code should feel 'right' to the developer, like music feels right to the composer.

    It is obvious that the tools you use are obsolete when the software you write doesn't look 'right' - when it looks like a mess of spaghetti just because the problem is supposed to be complex.

    It should not.

    We should have tools and concepts that enable the next generation of coders to write code that 'feels' right.

    Even if you disagree.
  • by Anonymous Brave Guy ( 457657 ) on Saturday February 14, 2004 @03:39PM (#8281115)

    As with many (all?) other skills, I think two things probably dominate developer ability:

    • Developer potential follows a curve, starting with many people having little or no aptitude for programming, and tailing off with few programmers being able to be Really, Really Good. Most people at the bottom end don't work as developers, or don't get hired much.
    • How close any given developer gets to his maximum potential is a combination of attitude and exposure to opportunities to learn.

    Please note the key distinction there: one of these factors relates to a developer's potential, the other to what he can actually achieve in reality.

    To determine a good strategy for building a team of developers, you then have to consider the relative work rates of developers of different abilities, and the nature of the work. For example, most code is developed from relatively straightforward design and programming tasks, but often you have small areas that require much more skill to design and implement effectively. These areas require a more able developer/team, but OTOH we also know that such people can be anything up to 10x as productive as a "typical" developer on the more mundane work. Of course, employing such people also costs rather more.

    So what does this suggest about our choice of programming language? Well, if your development task is going to require any complex design or implementation work, you're going to need a sufficient number of top end people to do it, and you're going to need suitably powerful and flexible tools to help them.

    For the remainder of the work, highly skilled developers will still be happy using those powerful, flexible tools, but they may be in short supply, and chances are most of your team will be more average in ability, and thus more average in their ability to avoid mistakes. Thus you may need a tool that reduces the possibility or impact of those mistakes, even at the expense of some power and flexibility (which those developers will rarely if ever use anyway).

    Strangely enough, this has always been one of the reasons I've liked C++ as a practical, real-world language. While it has plenty of theoretical flaws, it does combine both raw power and flexibility with a decent set of abstraction tools to keep routine development away from the most dangerous areas. You can have your top developers write subsystems using all the cunning tricks they need, but keep everyone else using only clearly defined interfaces. Given a little basic training (sadly a lacking commodity in the C++ programming world, but not beyond any competent manager to arrange -- this is the second factor above) the vast majority of "typical" developers can avoid the really dangerous programming practices, and take advantage of the neat stuff the top guys made for them. When those top guys have finished developing really neat stuff, they can just become super-efficient people doing the mundane stuff using the same tool.
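
    The parent is describing C++, but the division of labor is easy to sketch in Java terms as well (all names here are invented for illustration): the cunning subsystem lives behind a small interface, and Joe Developer never sees the dangerous parts.

        // The whole team programs against this; it is all they need to know.
        interface Spellings {
            boolean isKnownWord(String word);
        }

        // The top developers' subsystem: the cunning tricks stay behind the
        // interface, so callers can't misuse the internals. (The body here is
        // a trivial stand-in for whatever clever representation they chose.)
        final class CompactSpellings implements Spellings {
            private final java.util.Set<String> words = new java.util.HashSet<>();

            CompactSpellings(String... dictionary) {
                for (String w : dictionary) words.add(w.toLowerCase());
            }

            @Override
            public boolean isKnownWord(String word) {
                return words.contains(word.toLowerCase());
            }
        }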

    Bottom line: for most real world projects, you need to judge a language by both what it's capable of when used by a really good guy and how well it looks after Joe Developer. If one language isn't enough to do both and your project needs them, maybe you need more than one language and some good glue, but that's a whole different topic. :-)

  • You are wrong! (Score:3, Insightful)

    by sorbits ( 516598 ) on Saturday February 14, 2004 @04:15PM (#8281332) Homepage

    Bad code arises when the requirements change and the code needs to be updated to match them.

    Bad code arises when the beautiful algorithm needs to deal with real-world constraints.

    Bad code arises when the program grows over a certain size and too many modules depend on each other (this is often not avoidable).

    Bad code arises for many reasons -- premature optimization is not a problem I face often (in my 15 years of programming), and I have worked with a lot of bad code, much of it my own, which did not start out to suck; most successful projects grow to a complexity that affects the code badly.

    Try working on some "hard" problems and I bet your code will not look intuitive: an arithmetic encoder, a C++ parser, a GIF decoder, or an HTML parser that is compatible with the HTML actually found on the net (as the users expect it to be!).

  • by eraserewind ( 446891 ) on Saturday February 14, 2004 @11:05PM (#8283611)
    Did I say I recommended hiring bad programmers? Did all those companies that have ever hired bad programmers deliberately do it?

    Sorry, but unless you plan on not growing at all and having no staff turnover, the profile of your employees will tend towards average over time: a mix of a few very good, a few more very bad, and a rump of fairly mediocre programmers.

    That's why startups are so attractive to many people: there is a somewhat better chance of having a high percentage of great programmers and doing some innovative work in powerful languages. All other companies have to deal with the unfortunate reality that most of their programmers do not fall into the excellent category, and have to plan accordingly.

    Or do you have some incredible HR process, not thought of by any other company in existence, that ensures everyone you hire will be excellent?
  • by Anonymous Brave Guy ( 457657 ) on Monday February 16, 2004 @08:47AM (#8292775)

    I agree with you completely that bad code often arises through innocent intentions. I disagree, however, that it's often not avoidable. It's almost always avoidable: if you wrote the same code from scratch, knowing what you know after writing it the first time, you would produce a much better result.

    The problem is just that complete rewrites are very expensive, and most development teams/managers are too stubborn to do minor rewrites when they should, preferring to add hacks or workarounds instead of maintaining a relatively clean design. Sooner or later, that usually results in the need for a much more expensive major rewrite instead.

  • by c0d3h4x0r ( 604141 ) on Monday February 16, 2004 @09:15PM (#8300065) Homepage Journal

    Nearly all software and engineering problems I see are due to unnecessary complexity. The more complex the system, the more prone it is to error, and the more difficult it is to fix. Since we cannot easily *prove* large programs correct, the best we can do is try to intuit our way through the program's structure and flow to spot problems and try to ensure correctness. So making code as understandable and simple as possible is the best way to reduce flaws, since the human brain is the last line of defense/validation as to the program's correctness.

    I've been a developer on a major Microsoft product for 5 years. The source for this application is very large and it requires a team of roughly 30 developers to work on it for each release. Even when you become an expert on one area, you still know nothing about other areas, because there's just so much code.

    There are areas of our code that are considered very touchy. They are expensive areas to work in because they are difficult to understand, are architected poorly, and break in unexpected ways nearly every time someone makes a change. This is directly due to the code not being as intuitive or as simple as it should be.

    There are other areas of our code that are very readable, smartly commented, and intuitively architected. These areas are pretty inexpensive to work in, and they tend not to have many bugs. These areas do some very complex work, and are sometimes optimized in ways that don't make a lot of sense at first glance, but because the code is commented and architected cleanly, it is still quite understandable even to a newcomer.

    When people start building houses of cards just because they can, instead of it actually being necessary to get the job done, that's when you end up with a mess on your hands. I've seen plenty of code (not at my job, but on my own time) written by "kiddies" who had just discovered recursive functions, and they do their damnedest to tackle everything using recursion. I've seen the same thing with people who have just discovered C++ classes -- everything is over-encapsulated as a class, just because they think classes are cool, rather than using classes to organize things in any sane fashion.
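
    A made-up example of the "just discovered recursion" pattern: both methods below compute the same sum, but the recursive one burns a stack frame per element and will fall over on a large array -- complexity with no payoff.

        public class SumDemo {
            // The "house of cards": recursion where nothing calls for it.
            // Roughly one stack frame per element; a large enough array
            // ends in StackOverflowError.
            static long sumRecursive(int[] a, int i) {
                return i == a.length ? 0 : a[i] + sumRecursive(a, i + 1);
            }

            // The boring version: same answer, no stack risk, obvious at a glance.
            static long sumLoop(int[] a) {
                long total = 0;
                for (int x : a) total += x;
                return total;
            }

            public static void main(String[] args) {
                int[] data = {3, 1, 4, 1, 5, 9};
                System.out.println(sumRecursive(data, 0));  // 23
                System.out.println(sumLoop(data));          // 23
            }
        }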
