Intuitive Bug-less Software?
Starlover writes "In the latest java.sun.com feature at Sun's Java site, Victoria Livschitz takes on some ideas of Jaron Lanier on how to make software less buggy. She makes a couple of interesting points. First, making software more 'intuitive' for developers will reduce bugs. Second, software should more closely simulate the real world, so we should be expanding the pure object-oriented paradigm to allow for a richer set of basic abstractions -- like processes and conditions. The simple division of structures into hierarchies and collections in software is too simple for our needs, according to Livschitz. She offers a set of ideas explaining how to get 'there' from here. Comments?"
Well... (Score:5, Funny)
Feels right.
Re:Well... (Score:5, Insightful)
How many times do we have to have fundamental truths reiterated?
"Premature optimization is the root of all evil"
I'd submit that nearly every bit of non-intuitive code is written because it "should be faster" than the intuitive equivalent. Just stop. Write the code the way it needs to be written. Decide if it's fast enough (not "as fast as it could be" but "fast enough") and then optimize if necessary.
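Something like this, in Java (an invented example, just to make the point concrete): the intuitive version ships first, and the "clever" version waits until a profiler proves it's needed.

import java.util.List;

public class Invoice {
    // The intuitive version: says exactly what it does.
    // Only replace this with something "faster" after measuring
    // shows it is actually a bottleneck.
    static double total(List<Double> lineItems) {
        double sum = 0.0;
        for (double item : lineItems) {
            sum += item;
        }
        return sum;
    }
}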
You are wrong! (Score:3, Insightful)
Bad code arises when the requirements change and the code needs to be updated to match.
Bad code arises when the beautiful algorithm needs to deal with real-world constraints.
Bad code arises when the program grows over a certain size and too many modules depend on each other (this is often not avoidable).
Bad code arises for many reasons -- premature optimization is not a problem I face often (in my 15 years of programming), and I have worked with a lot of bad code, much of it my own, which did not start out that way.
Avoiding "bad code" (Score:3, Insightful)
I agree with you completely that bad code often arises through innocent intentions. I disagree, however, that it's often not avoidable. It's almost always avoidable: if you wrote the same code from scratch, knowing what you know after writing it the first time, you would produce a much better result.
The problem is just that complete rewrites are very expensive, and most development teams/managers are too stubborn to do minor rewrites when they should, preferring to add hacks or workarounds instead of maintaining the design.
Re:Well... (Score:3, Insightful)
Re:Well... (Score:5, Funny)
I never did get around to asking him how he knew that, or if it was kind of a gut feeling he had.
Re:Well... (Score:5, Funny)
Objects (Score:5, Insightful)
I, for one, would like software writing to resemble (really resemble) building structures with Legos.
Re:Objects (Score:5, Insightful)
It's an interesting thought, but it's not necessarily true at all. Mathematics is built on metaphors, even though they're often very abstract. But it's more like working with somebody else's codebase, most of the time. Unless you're striking out and creating your own formal system, you are working with metaphors that someone else has come up with (and rather abstract ones at that).
The good news is that most mathematicians have an aesthetic where they try to make things... as clean and orthogonal as possible.
The bad news is that terseness is also one of the aesthetics.
Re:Objects (Score:4, Insightful)
Which doesn't mean mathematicians don't use metaphor a lot. The catch is that they like to have solid, rigorous definitions to tie things back to. Often when doing mathematics you will think in terms of the metaphors to perceive a way to proceed, and then try to explain the process you just considered in terms of definitions.
If you want a metaphor for that: published mathematics tends to be assembly code: low level, with strong static typing, and everything very explicit. That doesn't mean that mathematicians don't write it in their heads in Python or Java and compile it. Think of research mathematicians as powerful metaphor compilers as well as programmers.
Jedidiah.
Re:Objects (Score:3, Informative)
Try Objective-C maybe? (Score:4, Informative)
As languages go, it's pretty awesome. It was well ahead of its time, anyways. Ruby (as another poster mentioned) also does some of this.
Smalltalk and Ruby are great if you're just working with components and assembling them lego style, sure. But what'd be really nice is to use a language that can do both high level coding and systems programming. Someone else thought of it. Brad Cox came up with Objective-C, which NeXT later expanded upon.
Apple has been using Objective-C with the old OpenStep library as its primary development environment for a while now. It's very nice and full-featured, with explicit memory management that is very flexible but also circumventable and tunable (it uses reference counting, but people have made mark-and-sweep extensions; neither is implicit like Java's).
Objective-C supports late binding, weak typing, strong typing, static typing and dynamic typing, all in the same program. It can directly use C, so if you know C you're already 3/4 of the way there. The message syntax is slightly odd, but works out. Unfortunately, Objective-C doesn't have closures. David Stes developed a meta-compiler that turns Objective-C with closures into regular C (called the Portable Object Compiler) which might get you some distance if your work demands them.
ObjC can use C-style functions, Smalltalk-style message passing, or a hybrid of both. It's a very interesting language. Apple added C++ extensions, so now in most cases you can even use C++ code (however, C++ classes are not quite ObjC classes, and there are some caveats).
If you're looking for a language that splits the difference between Ruby/Python and C/C++, Objective-C might be your best bet. It's pretty hard to find an easy-to-use language that also provides a lot of performance.
Re:Try Objective-C maybe? (Score:3, Insightful)
When you get down to it, the concepts pioneered by Smalltalk are still ahead of their time. How can I say such a thing? Well, watch our modern languages evolve. People keep taking stuff from Smalltalk, and languages are slowly glomming onto more and more parts of it. Of course, the same could be said of CLOS. This is the path to language elitism.
I'm sure... (Score:5, Insightful)
Because the real world doesn't have bugs, right? Our company doesn't have project management software yet - but we're working on it. Personally, I don't think it's worth it until we fix the real world project management issues that this software is supposed to help with. Maybe that's not quite the point, but it raised my eyebrows. (Which I'm thinking about shaving off.)
Re:I'm sure... (Score:5, Funny)
You been experiencing a few too many glitches in the Matrix lately, or something?
Re:I'm sure... (Score:3, Insightful)
The first rule of programming is to realize that all input is intended to crash your program, so code accordingly.
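In that spirit, a small Java sketch (names invented): assume the input is out to get you, and reject it deliberately instead of letting it crash you later.

public class Ports {
    // Treat every input as hostile: check for null, non-numbers,
    // and out-of-range values before anything downstream sees it.
    static int parsePort(String input) {
        if (input == null) {
            throw new IllegalArgumentException("no input");
        }
        final int port;
        try {
            port = Integer.parseInt(input.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("not a number: " + input);
        }
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("out of range: " + port);
        }
        return port;
    }
}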
Jaron Lanier? (Score:3, Insightful)
Somebody, please point me to something significant he's done so I'll know whether or not I should pay attention to him because, from everything I've seen so far, I shouldn't.
Not a good idea... (Score:3, Funny)
Writing bugless code would throw the universe upside down and could possibly mean the end of the world!
Moderation Guideline: +3 Funny.
Agreed, but for this reason... (Score:3, Funny)
Comments? (Score:5, Funny)
Not my problem anymore. (Score:5, Funny)
Ugh, more abstraction. (Score:5, Insightful)
Want to make better software? How about actually scheduling enough QA time to test it? When development time runs over schedule, push the damned ship date back!
Re:Ugh, more abstraction. (Score:5, Insightful)
And testing, testing, testing. Because people aren't perfect. Nor would we want them to be... Too much money to be made in support contracts.
Re:Ugh, more abstraction. (Score:4, Insightful)
Some of the guys I work with think that "tons of classes == object-oriented", and their code designs are f-cking unreadable and opaque, whereas a few thoughtfully designed classes that best model the problem would be orders of magnitude better.
Re:Ugh, more abstraction. (Score:5, Insightful)
Want to make better software? Make sure your programmers understand what you're trying to do and make sure that enough people have "the big picture" of how all the system components interact that your vision can be driven to completion. It also helps if you have enough people on hand that losing one or two won't result in terminal brain drain.
Recently management seems to be of the opinion that people are pluggable resources who can be interchangeably swapped in and out of projects. Try explaining to management that they can't get rid of the contracting company that wrote a single vital component of your system because no one else really understands how it works. They won't get it, and you'll end up with a black box that no one really knows how to fix if it ever breaks.
Re:Ugh, more abstraction. (Score:3, Insightful)
Not gonna happen. The vast majority of the time, the economic penalty associated with being late is much greater than the economic penalty associated with being buggy.
Functional Programming et al. (Score:5, Interesting)
Speaking of mistakes... (Score:5, Funny)
Re:Functional Programming et al. (Score:5, Interesting)
A lot of problems are solved with functional languages. Functional advocates claim to have the answer to software correctness, and they decry the present state of imperative programming. What I think they fail to realize is that functional programming is already ubiquitous, solving problems on a scale that contemporary imperative tools will never approach.
Microsoft Excel is, in essence, a functional programming language. It is utilized by non-"programmers" planet-wide every day to quickly, accurately and cheaply "solve" millions of problems. It has, effectively, no learning curve relative to typical coding. I have found it to be an invaluable software development tool. I take it a bit further than the typical spreadsheet task by using it to model software systems.
It is especially helpful with business logic problems. I recently implemented a relatively complex web-based product configurator. I know that if I can model the complete problem in a stateless manner using a spreadsheet, writing bug-free, efficient client and server side imperative code becomes a simple matter of translation. For any given state of a collection of inputs there is exactly one atomic result. In this case the result is a (possibly lengthy) structured document computed dynamically from a collection of input forms, both on the client (because page refreshes suck) and on the server (because validation must not depend on an honest client.) Both independent implementations (in different languages) are "obviously" correct in the sense that they are derived from a clear, functional model, built in a spreadsheet.
You may substitute any contemporary spreadsheet product in place of Excel; I have no love of Excel specifically. It's just what I've happened to have handy in all cases. The fact is that modeling most software problems requires very little of what any reasonably competent spreadsheet can accommodate. Feel free to lecture me on precisely why it is blasphemous to suggest that a spreadsheet qualifies for the designation "functional programming." I know the difference because I've studied LISP and used Scheme. The subset of true functional programming that provides the most value is clearly represented by the common spreadsheet.
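For illustration, a hedged Java sketch of that translation step (the pricing rules here are invented): each "cell" becomes a pure function of its inputs, so for any given state of the inputs there is exactly one result, just as in the spreadsheet.

public class ConfiguratorModel {
    // Stateless, spreadsheet-style logic: no hidden state, one atomic result.
    static double totalPrice(double basePrice, int quantity, boolean rushOrder) {
        double subtotal = basePrice * quantity;               // like =B1*B2
        double surcharge = rushOrder ? 0.15 * subtotal : 0.0; // like =IF(B3,0.15*B4,0)
        return subtotal + surcharge;                          // like =B4+B5
    }
}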
Re:Functional Programming et al. (Score:3, Interesting)
The big advantage of FP is its clarity and rigidity. To an experienced functional programmer, it's exactly clear what a piece of Haskell code means, since the code is half general functions that are easy to understand (map, zip, fold et al.) and half problem-specific functions that are about as easy. The solution is built from simple bricks everywhere, unlike in imperative languages.
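The same bricks exist outside Haskell; here's a rough Java analogue (modern Java streams, invented example), composing the general functions with one problem-specific bit each:

import java.util.List;

public class Bricks {
    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4);
        int sumOfSquaresOfEvens = xs.stream()
                .filter(x -> x % 2 == 0)  // keep the evens
                .map(x -> x * x)          // square them
                .reduce(0, Integer::sum); // fold into a sum
        System.out.println(sumOfSquaresOfEvens); // prints 20
    }
}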
Test? (Score:5, Insightful)
Picking just one program from my experience, POPFile: initially we had no test suite. It quickly became apparent that the entire project was unmanageable without one, and I stopped all development to write from scratch a test suite for 100% of the code (it currently stands around 98% code coverage). It's particularly apparent when you don't have all day to spend fixing bugs, because the project is "in your spare time", that fully automatic testing is vital. You simply don't have time to waste fixing bugs (of course, if you are being paid for it, then you do...)
If you want to be really extreme then write the tests first and then write the program that stops the tests from breaking.
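A minimal sketch of that workflow in Java with JUnit (invented example): the test is written first, and fails (won't even compile) until the code below it exists.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SlugTest {
    // Step 1: write the test first.
    @Test
    public void lowercasesAndHyphenates() {
        assertEquals("hello-world", Slug.slugify("Hello World"));
    }
}

class Slug {
    // Step 2: write just enough code to stop the test from breaking.
    static String slugify(String s) {
        return s.trim().toLowerCase().replaceAll("\\s+", "-");
    }
}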
John.
Re:Test? (Score:4, Insightful)
Re:Test? (Score:3, Insightful)
One is to have inspectors look at everything and make sure they're right. QA or "testing"
The other is to actually fix the broken machines / processes that are stamping out broken widgets / buggy software in the first place. I think she's after this path.
Of course, you still need both.
Re:Test? (Score:4, Interesting)
I did a PhD at Oxford in the Programming Research Group and studied Z, CSP and all that stuff. My thesis even includes a program written in Occam proven via an algebra to meet a security specification.
Believe me, I'm aware of what the world could be like, but it is not practical to write real software this way yet. Hence we still need to test, and not enough people write tests today. Unit and system testing are best practices for the industry today; sure, there's a better theoretical way to do things, but I need to code in 2004, not 2054.
John.
The two big reasons software is buggy! (Score:5, Interesting)
The first is the intense pressure to get the product to market. This is especially true for custom code, written specifically for one client. They want it fast and cheap and in order to satisfy this desire, code invariably gets released/installed before it's ready. Then the "month of hell" starts as the client starts complaining about bugs, "bugs" and other problems and we bend over backwards to get it right.
As an ISV, we have no choice but to do it this way. If we don't quote the project with this in mind, the client will hire somebody else with a better "can-do attitude".
The second big reason software is buggy is because all the underlying tools (e.g. code bases, code objects, .dlls, etc.) are buggy as hell. I spend more time working around inherent bugs than I do debugging my own code.
Most programmers are perfectly capable of making their own code solid, given enough time.
Re:The two big reasons software is buggy! (Score:5, Insightful)
Civil engineering is superior for 2 reasons. 1) Time of QA and 2) Dependability of materials.
In short, look at the time it takes to QA a bridge that is built. Not only is QA done from design to finish, but real load testing is done. Although software does have serious QA, the time spent QAing civil engineering products is far greater as a ratio of time spent actually building.
Dependability. The thing with building things is that you can always take it for granted that the nuts, bolts and wires you use can handle a certain amount of pressure and force. Why? Because of the distinct sameness of every nut, bolt and wire ever built. One nut is the same as the other nut. All nuts are the same.
In software, not all "nuts" are the same. One person's implementation of a string search can vary widely. Yes, we do have libraries that handle this issue, but there is a higher chance of error in software construction because so many of the (third-party) libraries used are not as robust.
Lastly, one reason why software hasn't been addressed with the same urgency is the consequences (or lack thereof). When a bridge is poorly built, people die. Laws go into effect, companies go out of business, and many people pay the price. When software starts failing, a patch is applied; then, when the next piece of it starts failing, another patch is applied. In the end the software becomes one big patched-up piece of crap.
One advantage, though, of software is that a new version can be released with all the patches in mind and redesigned.
Re:The two big reasons software is buggy! (Score:3, Interesting)
Bridges are built to be extremely fault tolerant. MechEs and CivEs use safety factors - big ones. Multiple bolts must fail before the structure becomes critical. Adding safety factors in mechanical structures is relatively cheap and easy.
In most software, nearly everything is critical in some way due to the logical step-by-step nature of code execution. It's possible to write good fault-tolerant code, but safety factors in software are nowhere near as cheap and easy.
You're describing why I don't want to be a programmer (Score:4, Insightful)
Anyone can call themselves a programmer, or even a software engineer. Someone who graduates from a BCS program is required[1] to do zero practical work in the field before they get their degree - which is the height of their qualifications.
Engineers may graduate, but they require at least a few years of work before they can be licensed. Lawyers have to pass tests beyond those based in the fantasy world of academia. Medical doctors require years of on-the-job training under close supervision before they are turned loose. All of these professions are self-governed and discipline their members if - nay, when - one of them screws up. Potentially they can lose their license.
The IT world has no such professional designation. The IT world has no such self-governing body. Companies and individuals in the IT world can consistently produce almost criminally negligent code and, provided they bid low, will survive.
MDs, PEngs (and even lawyers) can always refuse to do something if it is clearly dangerous, unsafe, or illegal. Their clients can't really go anywhere else to get that task done, as all professionals are bound by the same rules.
Even trades: plumbers, carpenters, electricians, pipe fitters... all have some non-academic certification process. Most, beyond a (say) two-year school program, have to do years of apprentice work before they are qualified. They are required by law to build things to some safety standard, building code, electrical code, fire code....
Anyone can call themselves a programmer.
The closest thing the IT world has is various certs from for-profit companies. But they are generally for variations on systems administration rather than programming. And while, so far as I know, they can't be revoked for cause, they do all expire after some finite time.
What the IT world needs is the equivalent of a PEng professional 'grade' designation for, e.g., people with four-year BCS-level schooling, and also a trades-grade designation for two-year community college types. Implicitly from there you get a higher quality product, because the people designing the product (PEng-grade types) and the people implementing it (trade-grade types) have higher obligations than just to the customer. They would have professional responsibilities, violation of which could cause them to lose their respective licenses. This would solve most of the bugs caused by cutting corners to save on cost, releasing before it's done, etc. By no means all, but a lot.
[1] yes, some schools have Co-Op programs. But I know of none where they are required.
Re:The two big reasons software is buggy! (Score:5, Insightful)
These issues don't seem to be addressed until the rubber hits the road - when code starts compiling and demos are given. The pressure to market builds, as these issues are being resolved. Unfortunately, that's when The Law of Unintended Consequences strikes, scrapping much of the existing codebase.
How can a programmer make "their own code solid" when the work it is supposed to perform is not clearly defined?
Cheers,
Slak
Re:The two big reasons software is buggy! (Score:3, Insightful)
There are too many Marketing droids talking about the solution they visualize without ever clearly formulating the use case.
A lot of people can make feature suggestions. Without a clear picture of what the user actually wants to accomplish, the feature suggestions can't be evaluated for usefulness, nor can better suggestions be made to solve the user's problem. Varying solutions also can't be compared on their merits. And so it gets down to an "I have more contact with the customer so I must be right" argument.
That is exactly the wrong approach (Score:5, Interesting)
Re:That is exactly the wrong approach (Score:3, Insightful)
You're absolutely right. Some people think that turning to the "real world" for guidance is a good idea, but I've found that it only confuses things. Nobo...
Re:That is exactly the wrong approach (Score:3, Interesting)
Good ontology modelling software would check assumptions about objects such as "if you remove a man's arm, he is still considered the same man" (in a business context, yes) and "a company is the same as the people who work in it" (it's not). Basic stuff; people tend to know it intuitively, but that intuition tends not to make it into the model.
Re:That is exactly the wrong approach (Score:3, Insightful)
Simply put, the proof is in the pudding. You can cleanly define (and test) the interfaces...
Re:Is software engineering a form of engineering? (Score:4, Insightful)
No. But you can only use the science you know to help you prove that the thing will stand up. Furthermore, specs for bridges tend to be pretty clear.
In any case, if you look into the history of bridge building you'll find that for the longest time the formula for the strength of a cantilever beam was wrong (this guy Galileo got it wrong). So when the engineers building bridges reduced the safety factor (to speed up construction and reduce cost), bridges started falling down.
In 19th-century England a lot of iron bridges collapsed, despite the fact that it was "proved" that they were strong enough. Metal fatigue was not understood then.
Lastly, your request for a proof exemplifies my point. You cannot offer such a proof and that is why Apache has to be patched.
Why not? At least you have a spec for the HTTP protocol, which is pretty precise as such specs go. If you cannot offer a proof in the case where a precise spec exists, what hope is there for other software?
Finally, at most you can prove that a program works according to its specification. But what about the correctness of the specification itself?
Unfortunately, computer science is still in its relative infancy
Exactly. But what we are talking about is software engineering. Engineers are paid to build things that work. They are free to use whatever helps them in their tasks, if there is good science they use it. If there is no science they have to hack.
Try telling your next customer that implementing his system will take 50 years, because the science of translating his imprecise requirements into software hasn't been invented yet.
Re:Is software engineering a form of engineering? (Score:3, Insightful)
But there was a good reason. At that time we didn't know what to monitor for, never mind that the tools to find the problems (e.g., X-rays) did not exist.
But the bottom line is that bridges had to be built -- without the knowledge of materials we have today.
The real world is intuitive? (Score:5, Funny)
Perhaps she should make up her mind.
sounds like the perfect politician (Score:5, Interesting)
this is an exercise in wish-fulfillment, in suspending disbelief
writing software with fewer bugs by making things more intuitive and less hierarchical?
i mean, that's funny!
we're talking about telling machines what to do, that is what software writing is
writing software is an extremely hierarchical exercise; the art is giving people what they want
Orthogonal... (Score:3, Funny)
"....especially because I've always thought that the principles of fuzzy logic should be exploited far more widely in software engineering. Still, my quest for the answer to Jaron's question seems to yield ideas orthogonal to his own. "
I fear people that talk like this. It makes me wonder if they go home at night and plug themselves into something.....
Being sandbagged by java? (Score:4, Funny)
I think she means sandbox architecture [javaworld.com]
...and the wheel turns (Score:5, Funny)
Jaron Who? (Score:3, Interesting)
No really... does anyone care about Jaron Lanier?
I'd put his contributions to technology right up there with Esther Dyson's.
He's another person who calls himself a "visionary" because the specifics of technological development are far beyond his capacity.
He is, always was, and always will be, a non-player.
Workflow/StateMachine (Score:3, Informative)
I agree, somewhat (Score:5, Insightful)
Even on simple projects I have sometimes found myself designing to fit into Java's OO and not to fit the problem. It's really a language issue when it comes down to it. I am most comfortable in C, so I start writing a Java app and can feel myself being pulled into squeezing round objects into square holes. You have to step back and realize what's happening before you go too far. I think this is the main source of "design bugs" for me: either ignoring the strengths of a system (not taking advantage of Java's OO) or trying to squeeze a design that is comfortable without billions of objects into some vast OO system, in effect falling into the weakest parts of a language.
It's probably very similar to the way people screw up a second spoken language, mis-conjugating verbs and whatnot -- using the style they are already most familiar with.
So while it's ridiculously common sense to say we need an all-encompassing uber-language that is completely intuitive, I just would like to see someone do it rather than go on about it.
Why not experiment with adding every feature to Java that you feel it lacks, to see if you can achieve that? Because then you end up with Perl.
Seriously, programming languages aren't that way by and large because they have to be designed to fight whatever problems they were created to take care of. It's a bit foolish to say we need a language that is perfect for everything; instead you look at what your problems are and develop a language to fight those. Invariably you end up with failings in other areas, and the incremental process continues.
Re:I agree, somewhat (Score:3, Insightful)
Boy, sometimes I do long for managing a C-based project.
It's in the implementation (Score:3, Interesting)
Re:It's in the implementation (Score:4, Interesting)
The reason is that most developers INSIST that class structure should model the application domain. Even if it doesn't make the slightest lick of sense.
Reason? Because of how OO was taught. Concrete to abstract, keeping in line with a problem domain.
(coloured rectangle->rectangle->shape). This certainly makes teaching easier, but doesn't make for sensible class hierarchies.
OO is separate from a class hierarchy. The only reason we HAVE a hierarchy is to allow code to be reused. Therefore, the proper hierarchy is not a taxonomy, it is the one that leverages the code maximally.
As an example - Where to put a Date class?
Smalltalk classifies a Date as a Magnitude -- things that can be compared -- so comparisons can be leveraged (e.g., =). If it were NOT there, all comparisons would need re-implementation.
Character should be a Magnitude as well.
Maybe String too, but that's a bit shaky (mixins would help; it's comparable, but it's a collection of Characters).
Where to put a class in the hierarchy should be driven by the principle of minimizing code. *NOT* modelling the real world. If you model the "real world" you are probably in a serious "world of hurt". Also, in this case, the OO "paradigm" isn't going to save you much in the way of coding (will save you debugging, hopefully).
Avoidance of bugs...
Stay away from stupid languages. Insist that optimization is the compiler/computer's job. The Rosetta Stone is to ask for a factorial function, *without* specifying any details. Code it in the *most* natural way, and then test it with 10,000!
Now, determine how much breakage has occurred (if any).
The answer to LARGE projects is to write code ONCE, and be able to reuse it in any context that needs the same processing. I don't want to have to code the factorial algorithm for small integers, large integers, and really big integers.
I want the code to accommodate whatever data type is needed. If I sort, and use "<" ordering, I want that to work across any data type.
If I have to re-implement, I lose on the previous work.
Class hierarchies can help structure (look at Smalltalk), but are not often used in this way.
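A rough Java analogue of the Magnitude idea (a sketch; the max() helper is invented): put the comparison protocol in one place, and every conforming type -- dates and characters included -- gets the ordering code for free.

import java.util.Collections;
import java.util.List;
import java.time.LocalDate;

public class Leverage {
    // Written once; reused by anything that knows how to compare itself.
    static <T extends Comparable<? super T>> T max(List<T> items) {
        return Collections.max(items);
    }

    public static void main(String[] args) {
        System.out.println(max(List.of(LocalDate.of(2004, 1, 9),
                                       LocalDate.of(1983, 6, 1)))); // 2004-01-09
        System.out.println(max(List.of('a', 'z', 'm')));            // z
    }
}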
Ratboy.
fluff (Score:5, Insightful)
1) The concept that software should 'feel' right to the developer. First of all, this cannot be formalized in any sense of the word. Secondly, even if it could be, it is focused on the wrong target: it should feel right to the end user/problem-domain experts. More about this in point 2.
2) Software tools should model the real world. Well... duh. Any time you build software you are modeling a small part of the real world. The next question is: what part of the real world? The reason that OOP has not progressed farther is that the real world is so complex that you can only build some generic general-purpose tools and then have a programmer use those tools to solve a particular subset. So the programmer must first know what the problem domain is and what the tool set is capable of.
3) Programmers should be average. Absolutely not. In order to model the real world a good programmer must be able to retrain in an entirely new problem domain in a few months. This is what is missing in many cases; most people do not have that level of flexibility, motivation or intelligence, and it is difficult to measure or train this skill.
4) Programmers shouldn't have to know math. Wrong again. Programming IS math. And without a basic understanding of math a programmer really does not understand what is going on. This is like saying engineers shouldn't need to know physics.
5) The term 'bug' is used very loosely. There are at least 3 levels of bugs out there:
a) Requirements/conceptual bugs. If the requirements are wrong based on misunderstanding, you can write great software that is still crap because it does not solve the correct problem. This can only be solved by being a problem-domain expert, or by relying heavily on experts (a good programmer is humble and realizes that this reliance must exist).
b) Design flaws, such as using the wrong search, a bad interface, or poor security models. This is where education and experience come in.
c) Implementation bugs, such as fence-post errors and referencing null pointers. This can be largely automated away; Java, Perl and the like already catch whole classes of these.
In short, a bad, simplistic article which will probably cause more harm than good.
Would much stronger data types help? (Score:4, Interesting)
I ask because I'm currently looking into dependent type systems, which aren't currently practical. However, their claim to fame is that the type system is much more expressive; it is possible to define types like "date" or "mp3" in them, and to ensure that wrong data cannot be supplied to functions. As I play, though, I get the feeling that if the type system is too powerful, people will just create bugs in their types, and we won't improve by as much as we could.
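Not dependent types, but a cheap approximation of the "wrong data cannot be supplied" property, sketched in Java (the Email type is invented): validation happens once, in the constructor, so any function taking an Email can trust it.

public final class Email {
    private final String value;

    public Email(String value) {
        // The only way to obtain an Email is through this check.
        if (value == null || !value.matches("[^@\\s]+@[^@\\s]+")) {
            throw new IllegalArgumentException("not an e-mail address: " + value);
        }
        this.value = value;
    }

    @Override
    public String toString() {
        return value;
    }
}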
expanding the pure object-oriented paradigm (Score:3, Insightful)
1. WTF does that mean? It's all just buzzwords. Woohoo. Another buzzword engineer. Just what the world needs.
2. Making programmers program in an OO paradigm doesn't stop bugs. So why should "expanding the pure object-oriented paradigm" do anything productive?
But is the real world intuitive? (Score:5, Insightful)
Another problem is that the real world is both analog and approximate, while the digital world calls for hard-edged distinctions. In the real world, close-enough is good enough for many physical activities (driving inside the white lines, parking near a destination, cooking food long enough). In contrast, if I am moving or removing files from a file system, I need an algorithm that clearly distinguishes between those in the selection set and those outside it.
I like the idea of intuitive programming, but suspect that computers are grounded in logic and that logic is not an intuitive concept.
Intuition or Cumulative Knowledge databases? (Score:3, Interesting)
Slightly OT: ...or rather, just ACCEPTING unknowns and their repercussions in logic. Say, if I withhold a fact in an argument but claim to be "right," he will say there is...
This is indeed hitting the nail on the head. My father and I have lots of disagreements on the issue of "common sense." I am very smart and so is he, but he tends to fall behind when it comes to explaining...
"Intuitive" is subjective (Score:3, Insightful)
As far as "modeling the real world", my domain tends to deal with intellectual property and "concepts" rather than physical things. Thus, there often is no real world to directly emulate. Thus, the Simula-67 approach, which gave birth to OOP, does not extrapolate very well.
Plus, the real world is often limiting. Many of the existing manual processes have stuff in place to work around real-world limitations that a computer version would not need. It is sometimes said that if automation always tried to model the real world, airplanes would have wings that flap instead of propellers and jets. (Do jets model farting birds?)
For one, it is now possible to search, sort, filter, and group things easily by multiple criteria with computers. Real-world things tend to lack this ability because they can only be in one place at a time (at least above the atomic level). Physical models tend to try to find the One Right Way to group or position rather than take full advantage of virtual, ad-hoc abstractions and grouping offered by databases and indexing systems.
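A sketch of that ad-hoc grouping in modern Java (the record and its fields are invented): the same data, grouped by any criterion on demand, with no One Right Way baked in.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

record Track(String artist, int year) {}

public class Regroup {
    public static void main(String[] args) {
        List<Track> tracks = List.of(new Track("Eno", 1975),
                                     new Track("Eno", 1983),
                                     new Track("Fripp", 1975));
        // The same physical records, in two groupings at once.
        Map<String, List<Track>> byArtist =
                tracks.stream().collect(Collectors.groupingBy(Track::artist));
        Map<Integer, List<Track>> byYear =
                tracks.stream().collect(Collectors.groupingBy(Track::year));
        System.out.println(byArtist);
        System.out.println(byYear);
    }
}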
A Programmer's Credo (Score:5, Insightful)
- I accept that there is no magic bullet to programming, no simple, easy way to create bug-free software.
- I will not add unrelated features to programs that do something else. A program should concentrate on one thing and one thing only. If I want a program to do something unrelated, I will write a different program.
- I will design the structure of the program, and freeze its feature set, before I begin coding. Once coding has begun, new features will not be added to the design. Only when the program is finished will I think about adding new features to the next version. Anyone who demands new features be added after coding has begun will be savagely beaten.
- A program is only finished when the time and effort it would take to squash the remaining obscure bugs exceeds the value of adding new features... by a factor of at least two.
- If I find that the design of my program creates significant problems down the line, I will not kludge something into place. I will redesign the program.
- I will document everything thoroughly, including the function and intent of all data structures.
- I will wish for a pony, as that will be about as useful as wishing that people would follow the above rules.
It's 2004, and we've still not cracked this nut (Score:3, Insightful)
glibc is 'it' but it still gets updates, bug fixes, etc. It is not used on every platform. Yet it gets recreated over and over again.
Then I thought about...
But I think the biggest problem is the absence of flow-charting from software engineering. As mentioned, flowcharts allow us to map out or learn even the most complicated software.
I think we can accomplish all she describes inside an OOP language, be it Java or C++ or Python. The master-slave relationship is easily done. The cooler thing that I would like to see more of is state.
Rather than a process starting off in main(), with init code run in constructors, each process and object needs to have a state associated with it. This state is actually a stack, not a variable.
my_process {
    register state handler 'begin' as 'init'
    register state handler 'end' as 'exit'
    state change to 'begin'
}

init() {
    do_something();
    register handler condition (x=1, y=1, z=1) as 'all_are_one'
}

all_are_one() {
    state change to 'in_special_case'
    do_something_else();
    pop state
    if (exit_condition) exit()
}

exit() {
    while (state_stack.length) pop state
}
What I'm trying to do is model the logical process with the execution of code, but in an asynchronous manner. Sort of like a message pump, but extended to take process stages and custom events.
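A minimal runnable sketch of that model in Java (all names invented): handlers registered by state name, and a stack of states rather than a single state variable.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class StatefulProcess {
    private final Deque<String> stateStack = new ArrayDeque<>();
    private final Map<String, Runnable> handlers = new HashMap<>();

    void registerStateHandler(String state, Runnable handler) {
        handlers.put(state, handler);
    }

    void stateChangeTo(String state) {
        stateStack.push(state); // the state is a stack, not a variable
        Runnable h = handlers.get(state);
        if (h != null) h.run();
    }

    void popState() {
        stateStack.pop();
    }

    public static void main(String[] args) {
        StatefulProcess p = new StatefulProcess();
        p.registerStateHandler("begin", () -> System.out.println("init: do_something()"));
        p.registerStateHandler("in_special_case", () -> System.out.println("do_something_else()"));
        p.stateChangeTo("begin");           // like "state change to 'begin'"
        p.stateChangeTo("in_special_case"); // fired when the (x=1, y=1, z=1) condition is seen
        p.popState();
    }
}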
Why's it so difficult? Duh... (Score:5, Interesting)
Maybe because the programs contain 20 to 30 million lines of code.
Look, I understand that a lot of people are yearning for the good old days when software was less buggy. You know what? I suppose that if your entire application consists of something like 4000 assembly code instructions, you might just be able to make the program bug-free.
But it's not 1983 anymore and programs are on the order of millions of lines of code. Of course it's not feasible to go over the entire program manually and root out every single bug. The stuff I work with every day is considered extremely small and yet it depends on all sorts of external libraries, each of which may have dependencies, etc. It all adds up to amazingly large amounts of code. But, it requires large amounts of code to do extremely complicated things. Is this a surprise to her or something? I don't think there's any "paradigm shift" in the field of programming that's going to change the fact that:
* Doing complicated things requires lots of code.
* The more code you write, the higher the chance of bugs.
I reiterate: duh...
Budgets and schedules (Score:5, Insightful)
What bothers me about statements like this is that no one is suggesting that perhaps our estimation and budgeting methods are off.
Suppose someone scheduled one week and allocated $100 for the design and construction of a skyscraper. When the engineers failed to deliver, who should be blamed? The engineers?!
Re:Budgets and schedules (Score:5, Insightful)
First, there are lots of folks who have been saying, for a long time, that our estimation and budgeting methods are inadequate: Fred Brooks [amazon.com] and Tom DeMarco [amazon.com] are just two of the best known advocates of this position. It seems, unfortunately, that it is not a message that many folk like to hear. It is, I guess, easier (and more expedient) to blame the tools or the craftspeople than to figure out what really went wrong.
Second, your example would be more apt if the building materials (steel and concrete) or the blueprints and construction tools were being blamed for cost overruns and schedule slips. No one would suggest that building skyscrapers would be easier and more reliable if the bricks and jackhammers were more intuitive.
What she is saying smacks of silver bullets (see Fred Brooks' Mythical Man-Month, chapter 16: No Silver Bullet - Essence and Accident in Software Engineering [virtualschool.edu], and succeeding chapters in the 20th Anniversary Edition) and just can't be taken seriously. To summarize Brooks:
While we may be able to devise languages and environments that make the creation of quality software by talented experts easier, we will never be able to make the creation of quality software easy and certain when undertaken by talentless hacks, amateurs and dilettantes. Unfortunately, the latter is what most managers desire, because it would mean that the cost of labor could be greatly reduced (by hiring cheaper or fewer warm bodies). It also happens to be the largest market, at least in the past two decades, for new development tools: think of the target markets for VisualBASIC, dBASE IV, Hypercard and most spreadsheets.
Let's not Ignore the Livschitz. (Score:4, Funny)
She's the real story here. I think I'm in love.
The goal is not bugless, but good enough, software (Score:4, Insightful)
The goal in any real software project is to meet the customer's (and I use that in the broadest sense) expectations adequately. What is adequate? That depends on the software. A user of a word processor, for instance, is likely not to mind a handful of UI bugs or an occasional crash. A sales organization is going to expect 24/7 performance from their sales automation software.
The canny programmer (or programming group) should aim to produce software that is "good enough" for the target audience, with, perhaps, a little extra for safety's sake (and programmer pride).
Of course there are real differences among the tools and methodologies used in getting the most "enough" per programmer-hour. Among the ones I've come to believe in are:
1. Use the most obvious implementation of any module unless performance requirements prohibit.
2. Have regular code-reviews, preferably before every check-in. I've been amazed at how this simple policy reduces the initial bug load of code. Having to explain one's code to another programmer has a very salutary effect on code quality.
3. Hire a small number of first class programmers rather than a larger number of lesser programmers. In my experience 10% of the programmers tend to do 90% of the useful work in large software projects.
4. Try to get the technical staff doing as much programming as possible. Don't bog them down with micromanagement, frequent meetings, complex coding conventions, arbitrary documentation rules, and anything else that slows them down.
5. Test, test, test!
Old ideas.... (Score:3, Interesting)
I couldn't stop thinking of existing theories and/or implementations of her ideas...
Modeling processes outside the OO paradigm (opposite to what design patterns started to sacralize, for example) is precisely the subject of so-called business rules. But BR people are close to the relational model of data, which is too quickly conflated with SQL DBMSs(*), so OO-oriented people don't buy it (see the almighty impedance mismatch).
Data structures other than trees and collections are already generically implementable in any modern OO language. See Eiffel, for example, which has been able to do that perfectly for 15 years (parametric classes, with full type safety). Maybe Java generics will help to build highly reusable data structures... I doubt it; anchored types are missing (i.e., the ability to declare the type of a variable as equal to another type, nearly mandatory when dealing with inheritance of generics).
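For what it's worth, a small sketch of the parametric-class style in Java generics (the Interval class is invented); it shows the reuse, though nothing here supplies the anchored types missed above.

public class Interval<T extends Comparable<? super T>> {
    private final T low, high;

    public Interval(T low, T high) {
        if (low.compareTo(high) > 0) {
            throw new IllegalArgumentException("low > high");
        }
        this.low = low;
        this.high = high;
    }

    // One implementation, type-safe for dates, numbers, strings...
    public boolean contains(T value) {
        return low.compareTo(value) <= 0 && value.compareTo(high) <= 0;
    }
}

So new Interval<>(1, 10).contains(5) type-checks, while new Interval<>(1, 10).contains("five") is rejected at compile time.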
Tom.
(*) I warmly recommend the writings of Chris Date and Fabian Pascal to really see how the relational model of data is different from SQL databases... see DBDebunk [dbdebunk.com] for references.
The lost art of modelling. (Score:3, Interesting)
"software should more closely simulate the real world"
From the article: "It's not the prevention of bugs but the recovery -- the ability to gracefully exterminate them -- that counts."
While the need to gracefully recover your design from bugs (bugs come from design, or the lack of it, not code) is laudable, the proper technique is to design without bugs in the first place. Assuming that you're actually meeting the business requirements or functional specifications, there is a straightforward method for flattening bugs before they become fruitfully overripe and multiply.
Once you have obtained the proper [amazon.com] requirements [amazon.com] (your goals), and after you've properly atomized them into their smallest component parts, you need to model those parts. Once you've modeled those parts, you need to test the model. This works in single-process design, but it really shines in concurrency, where anyone can truly screw up.
Get a good [amazon.com] book [amazon.com] on design [amazon.com]. Then get a good [amazon.com] book [ic.ac.uk] on modelling, mechanically analyzing, and testing those designed processes before committing to code.
= 9J =
Types of bugs and how to prevent them (Score:4, Interesting)
1. Algorithmic bugs - you have a function with well-defined input and output, and it does the wrong thing (may include giving the wrong answer, looping forever, leaking memory, or taking too long to return). Can be avoided with a combination of code review, unit tests, and correctness proofs when possible.
2. Interface bugs - this includes validating input, both from the user and over the network or other ways in which your program gets input data. These bugs include buffer overruns, GUI bugs caused by an unanticipated sequence of clicks, etc. These bugs are mostly found by testing, but sometimes also with automatic code checkers or memory debuggers that highlight potential problems.
3. Bugs in the operating system or in sublibraries - any large project depends on large volumes of operating system code and usually lots of other libraries. These systems almost certainly have bugs or at the very least undocumented or inconsistent behavior. The only way to avoid this is to validate all OS responses and do lots of testing.
4. Cross-platform bugs - a program could work perfectly on one system, but not on another. Best way to address this is to abstract all of the parts of your program that are specific to the environment, but mostly this just requires lots of testing and porting.
5. Complexity bugs - bugs that start to appear when a program or part of a program gets too complicated, such that changing any one piece causes so many unintended side-effects that it becomes impossible to keep track of them. This is one of the few areas where good object-oriented design will probably help.
6. Poor specifications - these are not even necessarily bugs, just cases where a program doesn't behave as expected because the specifications were wrong or ambiguous. The way to avoid this is to make sure that the specifications are always clear. Resolve any potential ambiguities in the specs before finishing the code.
My overall feeling is that there are so many different types of bugs in a real-world programming project, and any one technique (like object-oriented design) only helps address one type of bug.
Great slashdot discussion (Score:3, Interesting)
Let me add my 2 cents: the problem is that computer programs represent the 'how' instead of the 'what'. In other words, a program is a series of commands that describes how things are done, instead of describing what is to be done. A direct consequence of this is that programs are allowed to manipulate state in a manner that makes complexity skyrocket.
What did object orientation really do for humans? It forced them to reduce management of state... it minimized the domain that a certain set of operations can address. Before OO, there was structured programming: people were not disciplined (or good) enough to define domains that are 100% independent of each other... there was a high degree of interdependency between the various parts of a program. As applications got larger, the complexity reached a point where it was unmanageable.
All of today's tools are about reducing state management. For example, garbage collection is there to reduce memory management: memory is in a specific state at each given point in time, and in manual memory management systems the handling of this state is left to the programmer.
But there is no real progress in the computing field! OO, garbage collection, aspect-oriented programming and all the other conventional programming techniques are about reducing management of state; in other words, they help answer the 'how' question, but not the 'what' question.
Here is an example of 'how' vs 'what': a form that takes input and stores it in a database. The 'what' is 'we need to input the person's first name, last name, e-mail address, user name and password'. The 'how' is 'we need a textbox there, a label there, a combobox there, a database connection, etc.' If we simply answered the 'what' instead of the 'how', there would be no bugs!!! Instead, by directly manipulating the state, we introduce state dependencies that are the cause of bugs.
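A hedged Java sketch of that contrast (all names invented): state the 'what' once as data, and derive the 'how' generically instead of hand-wiring every textbox.

import java.util.List;

record Field(String name, boolean required) {}

public class PersonForm {
    // The 'what': the five facts we need from the person.
    static final List<Field> SPEC = List.of(
            new Field("first name", true),
            new Field("last name", true),
            new Field("e-mail address", true),
            new Field("user name", true),
            new Field("password", true));

    public static void main(String[] args) {
        // One generic 'how', derived from the declaration above.
        for (Field f : SPEC) {
            System.out.println("render textbox + label for: " + f.name());
        }
    }
}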
Functional programming languages claim to have solved this problem, but they are not widely used, because their syntax is weird to the average programmer (their designers want to keep it as close to mathematics as possible), they are interpreted, etc. The average person who wants to program does not have a clue, and society drifts away from mathematics day by day, so functional languages go unused.
The posted article of course sidesteps all these issues and keeps mumbling about 'intuitive programming' without actually giving any hint towards a solution.
Re:Three-choice system of logic (Score:5, Informative)
There is a ton of information out there on this, and this is in no way a new idea. (Google it, lotsa reading for ya)
Currently, the only way to utilize this is to process ternary logic in software, as at this point there is no ternary circuitry in general use.
For this to actually be useful we would need a platform that can execute ternary code natively.
Lots of work has been done in this area too (not only with ternary, but with multi-state transistors with more than 3 states as well)
For those of us not at the bleeding edge of research in these areas though, we'll just have to wait until there is hardware to support this kind of thing, and then likely some tools to start with.
Re:Three-choice system of logic (Score:5, Informative)
I know of no class of problems in computer science that can be better addressed by ternary computing than by binary computing. There may be some of them out there. But in general ternary computing doesn't change enough to have an impact.
Re:Three-choice system of logic (Score:3, Interesting)
Actually, the very fact that we live in a binary world kinda makes your post redundant; we ended up here for a reason. However, does that mean we are not allowed to think outside of this paradigm? Can we not discuss things like this? Or shall we stay confined to our little box at all times?
Re:Three-choice system of logic (Score:3, Interesting)
I find True, False or Maybe to be fairly astonishing. Does maybe mean quit? Does it mean give me more options? Does it mean the same as True or False (it would if my wife coded it, oh!)?
All the third option gives us is more questions. Which defeats Eckel's sixth principle: simplicity before generality.
Re:Three-choice system of logic (Score:5, Funny)
Ternary Logic is Used in GIS (Score:3, Informative)
Re:Bug-less Software? (Score:5, Insightful)
In practice, this almost never happens. Most developers are willing to trade perfect code that'll take four months for mostly-perfect code that will be ready for the deadline.
To sum it all up, a properly designed and written program should never choke on user input. If it does, that means you cut corners somewhere. Don't blame it on the user.
Re:I'd rather have... (Score:5, Funny)
Right now I am in a Computer Science program. I have had the pleasure to see:
I don't think the whole proper education thing is going to happen any time soon.
Re:I'd rather have... (Score:5, Funny)
Re:I'd rather have... (Score:5, Interesting)
Dijkstra has a few things to say on the topic as well:
On Education, Specifically:
And then on Computer Science in general:
Re:I'd rather have... (Score:3, Insightful)
The perceived problems are actually caused by the no-brainers.
Re:I'd rather have... (Score:3, Interesting)
I'm not suggesting that we do away with basic arithmetic or variable assignment. You can't do that and still have a programming language. The very idea of writing a program of any complexity that doesn't incorporate basic arithmetic or variable assignment is just plain silly.
Rather, I'm saying that so long as programmers can use such essential and basic functionality, the bad ones will find ways of producing buggy code. Inaccurate formulae. Hard-to-maintain code. In...
Re:I found it to be interesting (Score:5, Funny)
* More intuitive
* More inclusive
* Pattern recognition vs. "yes/no" type logic
Ah... ok, let's turn those 90 degrees:
* More context-aware
* There's more than one way to do it
* Logic using higher order comparison such as regular expressions and grammars
Hmmm... Perl anyone?
Perl is universally panned for being "opaque" by people who don't use it, and yet that opacity is the result of all three of the above, and CPAN is a monumental testament to the value of those features in terms of large-scale software engineering.
If your opinion of dollar-signs is so valuable to you that you can't see the value in 4GB of source code sitting at your fingertips, then I direct you to the nearest Java tutorial....
Sorry, but I hate Perl (Score:4, Insightful)
As a programmer, I prefer C/C++ because things are pretty explicit, i.e. you need to define your variables explicitly before you use them, and there is no guessing involved.
However, with Perl, there are so many things that, if they aren't present, are assumed. It is very "hacky" and makes code very hard to read. When things are assumed, to me as a programmer, it just creates uncertainty, and this inevitably leads to bugs.
The same goes for most scripting languages, like PHP. I use PHP because it is very easy to use, but it suffers from similar bugs (i.e., being able to use variables before explicitly declaring them, etc.).
Like I said, if you love Perl, that's great, and a good Perl programmer will know all this, and will probably make very few bugs, just like a good C programmer will make very few bugs in their code. My point is that for the lesser Perl programmers, it is very easy to write code that is simply horrible.
Re:Sorry, but I hate Perl (Score:5, Insightful)
I used to feel the same way after having programmed in C for many years. Some yahoo made me work with Perl, so I treated it like any other language that I had to pick up... and I hated it. It was full of little special cases, and everything broke the rules in at least 3 ways. Most languages strove to remain as context-free as possible, but Perl was awash in as much context-sensitivity as Larry Wall could manage to make his C-compiler-stress-test of a tokenizer handle!
So, why am I a staunch Perl advocate many years later?
1. Because I can think in Perl better than any other language
2. Because Perl favors human beings who have to program, not compilers and interpreters that have to parse the code
3. Because I got orders of magnitude more work done in Perl than C, C++, awk, Java, LISP, or any other language I could find.
"However, with Perl, there are so many things that if they aren't present, they are assumed. It is very "hacky" and makes it very hard to read. When things are assumed, to me as a programmer, it just means it creates uncertainty, and this inevitably leads to bugs."
That's the theory... and that's what I was taught in school... It seems to make sense.
And yet, there is this massive body of good code written in Perl. There is also a ton of BAD code written in Perl. Just check out bugzilla if you want to see the worst case scenario.
But then ask yourself... is that Perl's fault any more than bad C++ code (and man, I've seen some amazingly bad, impossible-to-debug C++) is C++'s fault? I judge a programming language on the basis of what good programmers can do with it. If you want bondage languages that force bad programs to be minimally debuggable, use Python, but don't expect to be as productive in a language that forces you to think in some particular way about your problem.
Re:Sorry, but I hate Perl (Score:3, Insightful)
"Good" and "bad" programmers (Score:5, Insightful)
As with many (all?) other skills, I think two things probably dominate developer ability: innate aptitude, and the training and experience a developer actually receives.
Please note the key distinction there: one of these factors relates to a developer's potential, the other to what he can actually achieve in reality.
To determine a good strategy for building a team of developers, you then have to consider the relative work rates of developers of different abilities, and the nature of the work. For example, most code is developed from relatively straightforward design and programming tasks, but often you have small areas that require much more skill to design and implement effectively. These areas require a more able developer/team, but OTOH we also know that such people can be anything up to 10x as productive as a "typical" developer on the more mundane work. Of course, employing such people also costs rather more.
So what does this suggest about our choice of programming language? Well, if your development task is going to require any complex design or implementation work, you're going to need a sufficient number of top end people to do it, and you're going to need suitably powerful and flexible tools to help them.
For the remainder of the work, highly skilled developers will still be happy using those powerful, flexible tools, but they may be in short supply, and chances are most of your team will be more average in ability, and thus more average in their ability to avoid mistakes. Thus you may need a tool that reduces the possibility or impact of those mistakes, even at the expense of some power and flexibility (which those developers will rarely if ever use anyway).
Strangely enough, this has always been one of the reasons I've liked C++ as a practical, real-world language. While it has plenty of theoretical flaws, it combines raw power and flexibility with a decent set of abstraction tools to keep routine development away from the most dangerous areas. You can have your top developers write subsystems using all the cunning tricks they need, but keep everyone else using only clearly defined interfaces. Given a little basic training (sadly a scarce commodity in the C++ programming world, but not beyond any competent manager to arrange; this is the second factor above), the vast majority of "typical" developers can avoid the really dangerous programming practices and take advantage of the neat stuff the top guys made for them. And when those top guys have finished developing the really neat stuff, they can become super-efficient at the mundane work using the same tool.
Bottom line: for most real world projects, you need to judge a language by both what it's capable of when used by a really good guy and how well it looks after Joe Developer. If one language isn't enough to do both and your project needs them, maybe you need more than one language and some good glue, but that's a whole different topic. :-)
Re:Sorry, but I hate Perl (Score:4, Insightful)
Sorry, but unless you plan on never growing and never having any staff turnover, the profile of your employees will tend towards average over time: a mix of a few very good, a few more very bad, and a rump of fairly mediocre programmers.
That's why startups are so attractive for many people. There is a somewhat better chance of having a high percentage of great programmers and doing some innovative work in powerful languages. Every other company has to deal with the unfortunate reality that most of its programmers do not fall into the excellent category, and has to plan accordingly.
Or do you have some incredible HR process, not thought of by any other company in existence, that ensures everyone you hire will be excellent?
Re:Sorry, but I hate Perl (Score:3, Informative)
use strict;
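For anyone who hasn't seen it in action, a quick sketch of what that one pragma buys you (the misspelled variable is deliberate):

    use strict;

    my $total = 42;
    print $tota1, "\n";   # typo: digit 1 instead of letter l

Without strict, the typo silently prints nothing; with it, the script refuses to compile: Global symbol "$tota1" requires explicit package name.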
Re:I found it to be interesting (Score:5, Insightful)
Grokking Perl seems to be like grokking pointers in C. Some people seem to be simply born without the part of the brain that understands them.
Perl is context-aware and intuitive. It understands the need to easily take data from any source, chop it up, mangle it, and then easily spit it back out. There isn't much to learning Perl, but it will insist that you memorize some traditional things, like operator precedence, syntax, and the basic Perl functions. Not hard at all when you get down to it.
Perl is inclusive. There is definitely more than one way to do it. This is a good thing, because the one way that works might not be the best way; similar problems sometimes require slightly different solutions. Perl has online documentation out the wazoo. perldoc rocks, and there is a list of up-to-date books to rival O'Reilly's (many of them by the same authors). Perl modules have built-in unit testing. Perl is a language and a culture that values and facilitates testing.
Pattern recognition is something Perl excels at, especially the kind of pattern matching and logic handling that most applications require. Need something fancier, like fuzzy matching or a neural net? Look to CPAN. Using regular expressions in Perl takes one line of code: no need to allocate a regex struct or object, compile the pattern, run the match, and then deallocate the struct afterwards (see the sketch below).
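To make that concrete, a minimal sketch (the address is invented):

    use strict;
    my $email = 'jdoe@example.com';

    # Compile, match, and capture in a single expression.
    my ($user, $host) = $email =~ /^([^@]+)@(.+)$/;

    # The C equivalent needs a regex_t, regcomp(), regexec(), offset
    # arithmetic to copy the captures out, and a final regfree().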
You're right: the people who pan Perl for being opaque are typically the same people who use method overloading, polymorphism, and other abstraction and obfuscation techniques and then claim their code is more readable and easier to understand. They also tend to be the same people who believe Perl is only good for one-off scripts and hacks. To which I say: that is only the beginning of what Perl is great at.
Re:I found it to be interesting (Score:5, Insightful)
In the article, her solution to error is to increase the tolerance for error, making direct mistakes unlikely or impossible because there is plenty of 'slop' in the system and you can't get a wrong answer. Theoretically, this lowers precision and increases overhead of the system. Her solution to the difficulty in understanding programming is making it so any idiot can understand it.
To make an analogy, a programmer is like a bucket. Her solution to filling a bucket (writing code) is to submerge it inside a larger pool. In that situation, any old bucket will do, the bucket will always be full when placed in a pool; but you will then have to carry the entire pool if you want it to move. The question then becomes how much you can carry, not the performance of the bucket.
She may well be right that intuitive programming is easier, and that making programming more like natural language, with intuitive syntax, could be beneficial (more like programming a Star Trek AI computer than what we have now). But this would also shift the nature of the problem from design and architecture to performance and underlying stability. Any fool could write code without knowing how it worked. Some shortcuts may be appropriate in certain cases, but relying on these kinds of methodologies in critical situations could lead to disaster; there is a built-in unreliability factor. If some company thinks it can buy this system and then expect bulletproof security, reliability, and high performance, it is probably in for a rude awakening. It should expect 'good enough' performance, which is what it is getting already.
The only way to do exceptionally good work in a complex situation is to have the knowledge and experience for what you are doing at all levels and the ability to execute. Allowing programmers to be ignorant of how a computer works doesn't seem like a solution to me. The real problem with crappy software is companies that don't care and consumers who don't know any better.
Re:I found it to be interesting (Score:5, Interesting)
That's not to say it's bad, but it simply isn't one. Perl provides you with all of the tools you need to build a GREAT OO system, but a box of tools is not an OO system.
This is one of those things Larry does that is just unfathomable to the rest of the world. He didn't really grok OO ten years ago when Perl 5 was in the works. He understood it well enough to program in the large in C++, but he didn't quite have his head fully around the implications, so when he added OO to Perl 5, he did so in such a way that all of the various ways of approaching code from an OO standpoint could be accommodated.
This means that writing OO code in Perl kind of sucks, but if you want to design an OO model for a programming language, no tool (other than a parser generator) will be more powerful.
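A rough sketch of that raw material, with an invented Point class; the whole mechanism is bless() plus packages, and everything beyond that is convention you choose yourself:

    package Point;
    use strict;

    # A constructor is just a function that blesses a reference into a
    # package; using a hash for the fields is convention, not a rule.
    sub new {
        my ($class, %args) = @_;
        my $self = { x => $args{x} || 0, y => $args{y} || 0 };
        return bless $self, $class;
    }

    # A method is just a function whose first argument is the invocant.
    sub x { my $self = shift; return $self->{x}; }

    package main;
    my $p = Point->new(x => 3, y => 4);
    print $p->x, "\n";    # prints 3

Because nothing here is mandatory, you can just as easily build array-based objects, closures-as-objects, or prototype-style dispatch on top of the same primitives.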
Come Perl 6, Larry finally feels that he gets it enough to tell all of the people who are going to have to use his language how to do it. He doesn't take that responsibility lightly, and the fact that SO MANY other language designers do should worry you.
That said, if you want to write medium-sized programs that are heavily OO-dependent, I suggest Ruby or Python or even Java. If you are writing small tools, OO vs non-OO won't matter that much.
If you are building huge systems, then you don't care because the amount of work required to lay out how you will use OO in Perl is insignificant next to the architecture that you have to lay out for the rest of your app. It's just noise in your timeline, and you can fully re-use that policy in every other project that your company tackles.
What's amazing is how Perl lets all of these OO models interact. I'm always stunned by this, and frankly it's a tribute to the language and its designer.
As for your comment about typing less... I don't think that languages with the level of abstraction of Ruby or Perl really need to have line-count contests. Dynamic typing, run-time data structure definition and garbage collection make programming SO much easier that Perl and Ruby are in the same order of magnitude, and I don't see a reason to quibble over the details.
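As one small illustration of the run-time data structure point (the keys are invented), Perl's autovivification builds nested structures the first time you touch them, and garbage collection reclaims them when they go out of scope:

    use strict;
    my %log;

    # The hash-of-hash-of-array below springs into existence on first
    # assignment; no declarations, no malloc, no free.
    push @{ $log{'2003-11-07'}{errors} }, 'disk full';

    print scalar @{ $log{'2003-11-07'}{errors} }, "\n";   # prints 1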
Re:I found it to be interesting (Score:4, Insightful)
The first two hits in the article point out that Java was architected with security in mind. This is simply true, and hardly a shameless plug.
The next hit is in the question "How well do you think modern programming languages, particularly the Java language, have been able to help developers hide complexity?"
The answer starts with the word "Unfortunately" and goes on to explain that not even OO languages reduce complexity enough when an app gets big enough. The word "Java" isn't used once in the answer. That certainly isn't a plug.
The final hit is in the question "Do you have any concrete advice for Java developers? And are you optimistic about the direction software is headed?"
Note that some good general-purpose advice is given, and the term "Java" isn't used once in the answer.
Re:Mod parent up! (Score:3, Interesting)
IDEs hide what's really going on in the development process. Once you learn your environment, the command line is still the fastest way to work and the easiest place to isolate build problems.
The problem, as I see it, is that a lot of programmers don't want to have to understand how to program, or the problem they're coding against, because that's hard. Understand what's going on and you'll write higher-quality code. That's all there is to it.
Now if we were talking about Joe Sixpack ba
Re:Test for NULL pointers (Score:3, Informative)
Since strings are immutable, it's actually creating a new string based on the content of strings a and b. So

    String c = a + b;

is actually

    String c = new StringBuffer().append(a).append(b).toString();
Rich Waters did something like this in the 80s. (Score:3, Interesting)