Intuitive Bug-less Software?
Starlover writes "In the latest java.sun.com feature at Sun's Java site, Victoria Livschitz takes on some ideas of Jaron Lanier on how to make software less buggy. She makes a couple of interesting points. First, making software more 'intuitive' for developers will reduce bugs. Second, software should more closely simulate the real world, so we should be expanding the pure object-oriented paradigm to allow for a richer set of basic abstractions -- like processes and conditions. The simple division of structures into hierarchies and collections in software is too simple for our needs, according to Livschitz. She offers a set of ideas explaining how to get 'there' from here. Comments?"
I'd rather have... (Score:2, Interesting)
So long as you allow developers to do such things as basic arithmetic and variable assignment, you're gonna have to deal with buggy code written by self-recursive sphincter-spelunkers.
Functional Programming et al. (Score:5, Interesting)
The two big reasons software is buggy! (Score:5, Interesting)
The first is the intense pressure to get the product to market. This is especially true for custom code, written specifically for one client. They want it fast and cheap and in order to satisfy this desire, code invariably gets released/installed before it's ready. Then the "month of hell" starts as the client starts complaining about bugs, "bugs" and other problems and we bend over backwards to get it right.
As an ISV, we have no choice but to do it this way. If we don't quote the project with this in mind, the client will hire somebody else with a better "can-do attitude".
The second big reason software is buggy is because all the underlying tools (e.g. code bases, code objects, .dlls, etc.) are buggy as hell. I spend more time working around inherent bugs than I do debugging my own code.
Most programmers are perfectly capable of making their own code solid, given enough time.
That is exactly the wrong approach (Score:5, Interesting)
sounds like the perfect politician (Score:5, Interesting)
this is an exercise in wish-fulfillment, in suspending disbelief
writing software with less bugs by making things more intuitive and less hierarchical?
i mean, that's funny!
we're talking about telling machines what to do, that is what software writing is
writing software is an extremely hierarchical exercise, the art is giving people what they want
Jaron Who? (Score:3, Interesting)
No really... does anyone care about Jaron Lanier?
I'd put his contributions to technology right up there with Esther Dyson's.
He's another person who calls himself a "visionary" because the specifics of technological development are far beyond his capacity.
He is, always was, and always will be, a non-player.
It's in the implementation (Score:3, Interesting)
Would much stronger data types help? (Score:4, Interesting)
I ask because I'm currently looking into dependent type systems, which aren't currently practical. However, their claim to fame is that the type system is much more expressive; it is possible to define types like "date" or "mp3" in them, and ensure that wrong data cannot be supplied to functions. As I play, though, I get the feeling that if the type system is too powerful, people will just create bugs in types, and we won't improve as much as we could.
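To make the idea concrete, here is a rough Python sketch (the `Date` class and its fields are made up for illustration; Python has nothing like real dependent types, but a validating constructor approximates "wrong data cannot be supplied to functions"):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Date:
    """A hypothetical 'date' type whose constructor refuses invalid data,
    approximating what a richer type system would enforce statically.
    (Leap years are ignored in this sketch.)"""
    year: int
    month: int
    day: int

    def __post_init__(self):
        days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        if not 1 <= self.month <= 12:
            raise ValueError("month out of range")
        if not 1 <= self.day <= days_in_month[self.month - 1]:
            raise ValueError("day out of range")

def schedule(meeting_date: Date) -> str:
    # Any Date that reaches this function is already known to be valid,
    # so the function body needs no defensive checks.
    return f"scheduled for {meeting_date.year}-{meeting_date.month:02d}-{meeting_date.day:02d}"
```

The catch the parent describes still applies: the validation logic itself can be buggy, so the bugs move into the types rather than disappearing.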
Re:That is exactly the wrong approach (Score:3, Interesting)
Good ontology modelling software would check assumptions about objects such as "if you remove a man's arm, he is still considered the same man" (in business context, yes) and "a company is the same as the people who work in it" (it's not). Basic stuff; people tend to know it intuitively, but that intuition tends not to make it into software, which causes breakage.
Re:I'd rather have... (Score:5, Interesting)
Dijkstra has a few things to say on the topic as well:
On Education, Specifically:
And then on Computer Science in general:
Why's it so difficult? Duh... (Score:5, Interesting)
Maybe because the programs contain 20 to 30 million lines of code.
Look, I understand that a lot of people are yearning for the good old days when software was less buggy. You know what? I suppose that if your entire application consists of something like 4000 assembly code instructions, you might just be able to make the program bug-free.
But it's not 1983 anymore and programs are on the order of millions of lines of code. Of course it's not feasible to go over the entire program manually and root out every single bug. The stuff I work with every day is considered extremely small and yet it depends on all sorts of external libraries, each of which may have dependencies, etc. It all adds up to amazingly large amounts of code. But, it requires large amounts of code to do extremely complicated things. Is this a surprise to her or something? I don't think there's any "paradigm shift" in the field of programming that's going to change the fact that:
* Doing complicated things requires lots of code.
* The more code you write, the higher the chance of bugs.
I reiterate: duh...
Is software engineering a form of engineering? (Score:2, Interesting)
As for cost, given the high rate of failure in the current system and the astronomical costs of bugs in current software products, I don't think that cost considerations support your argument.
Further, virtually every software text in existence states that more time spent in design reduces defects, and yet very few projects spend sufficient time in the design stage. Proper design is currently the path not chosen, so its costs are unknown. One cannot reasonably argue that an unknown cost is higher than a known cost prior to the unknown cost being known.
Lastly, your request for a proof exemplifies my point. You cannot offer such a proof and that is why Apache has to be patched. If Apache had been properly designed and constructed from the beginning, the only updates to Apache would be for new features. The cost of all the bugfixing that has gone into Apache over the years was unnecessary.
Unfortunately, computer science is still in its relative infancy. It is currently more akin to a skilled trade than a science. Also, our system of education (at least in the US) is geared toward producing artisans of computers rather than computer scientists. One hopes that this will continue to change over time.
Re:I'd rather have... (Score:3, Interesting)
I'm not suggesting that we do away with basic arithmetic or variable assignment. You can't do that and still have a programming language. The very idea of writing a program of any complexity that doesn't incorporate basic arithmetic or variable assignment is just plain silly.
Rather, I'm saying that so long as programmers can use such essential and basic functionality, the bad ones will find ways of producing buggy code. Inaccurate formulae. Hard-to-maintain code. Inefficient design. Poorly formed logic. Bad algorithm selection. You just can't 'fix' bad programmers with better languages.
There's just no way to teach a compiler to recognize bad code design, and there's no way to tell a programming language, "do as I mean you to do, not as I say you to do." Yes, things like garbage collection and bounds-checking help prevent some bugs, but the really nasty ones--the ones that take ages to fix--are the result of good ol'-fashioned bad design and programming.
As for the troll remark, go ahead and dig through my user info page. I may be snide at times, but I'm no troll.
Re:Three-choice system of logic (Score:3, Interesting)
Actually, the very fact that we live in a binary world kinda makes your post redundant; we ended up here for a reason. However, does that mean we are not allowed to think outside of this paradigm? Can we not discuss things like this? Or shall we stay confined to our little box at all times?
Re:The two big reasons software is buggy! (Score:3, Interesting)
Bridges are built to be extremely fault tolerant. MechEs and CivEs use safety factors - big ones. Multiple bolts must fail before the structure becomes critical. Adding safety factors in mechanical structures is relatively cheap and easy.
In most software, nearly everything is critical in some way due to the logical step-by-step nature of code execution. It's possible to write good fault tolerant software (i.e. w/ exception handlers) but that's one of the first things to suffer under the deadline as it's very expensive. I've never done a study but I would guess that at least 70% of good code writing is designing for all the non-standard cases!
I think software is a bit closer to a chain than it is to a bridge.
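A small Python sketch of the "safety factor" idea in code (the function and file format here are invented for illustration): handle the non-standard cases explicitly so one broken link doesn't snap the whole chain.

```python
def read_config_value(path, key, default):
    """Fault-tolerant lookup: every failure mode (missing file, unreadable
    file, missing key, malformed line) degrades to a default instead of
    taking the program down -- the software equivalent of a safety factor."""
    try:
        with open(path) as f:
            for line in f:
                name, _, value = line.partition("=")
                if name.strip() == key:
                    return value.strip()
    except OSError:
        pass  # missing or unreadable file: fall through to the default
    return default
```

Note how much of the function is the non-standard cases; the "happy path" is two lines. That ratio is what makes this kind of code expensive under a deadline.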
Some ideas (Score:1, Interesting)
- A lot of time is spent trying to get the syntax right. By moving software development away from pure-text writing and towards a more humane form (like dragging widgets together), the head could be freed up so that we don't have to think about adding a ; at every line end and can focus more on the problem.
- The form of source code (text) today doesn't reflect the complexity of the problem behind it. Surely there are tools that overcome some of these problems, but since they still work on the basis of a text document, they are error-prone and won't free our heads. A good example of this is CVS: it compares *lines* when it should compare the syntax, so it sees differences where there are none.
I feel that programming is held back in the era of the "command line" while all other fields have long moved to some sort of GUI.
I've got a list of dozens of ways to improve programming and making it humane without changing the syntax/language... does anybody know if there is research / finished products in this field? What have others already tried out?
Thank you for any suggestions!
What's intuitive?? (Score:2, Interesting)
Now I really struggle to see how one can dress up programming to be like any of these, though I do admit getting a horn when I see a really good linked list.
Software is inherently chaotic and complex. I think any attempts to say otherwise are just a front for pushing some new case tool or whatever. What sets geeks apart is that they are wired in a non-intuitive way: hence the ability to program and the problem coping with sex.
Test-Driven Development (Score:2, Interesting)
I've only come across one way to write code that is close to bug-free: Test-Driven Development (TDD.)
In TDD, you never write a line of code unless there is a unit test in place to check the results.
I have seen very complex systems get built from scratch with virtually no bugs when TDD is followed.
There are lots of online resources about TDD. It is one of the foundations of XP (Extreme Programming.)
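A minimal sketch of the TDD rhythm in Python (the `slugify` function is invented for illustration): test first, then just enough code to pass.

```python
# Step 1: write the test first. At this point it fails,
# because slugify doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trimmed  ") == "trimmed"

# Step 2: write just enough code to make the test pass.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

# Step 3: run the test, watch it pass, then refactor with confidence.
test_slugify()
```

Real projects would use a test framework (unittest, or JUnit in the XP literature), but the discipline is the same: no production line without a failing test that demands it.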
Mod parent up! (Score:2, Interesting)
Re:It's in the implementation (Score:4, Interesting)
The reason is that most developers INSIST that class structure should model the application domain. Even if it doesn't make the slightest lick of sense.
Reason? Because of how OO was taught. Concrete to abstract, keeping in line with a problem domain.
(coloured rectangle->rectangle->shape). This certainly makes teaching easier, but doesn't make for sensible class hierarchies.
OO is separate from a class hierarchy. The only reason we HAVE a hierarchy is to allow code to be reused. Therefore, the proper hierarchy is not a taxonomy, it is the one that leverages the code maximally.
As an example - Where to put a Date class?
Smalltalk classifies a Date as a Magnitude -- things that can be compared. So comparisons can be leveraged (eg. =). If it were NOT there, all comparisons would need re-implementation.
Also Character should be a Magnitude as well.
Maybe String, but that's a bit shaky (mixins help, it's comparable, but is a collection of Character).
Where to put a class in the hierarchy should be driven by the principle of minimizing code. *NOT* modelling the real world. If you model the "real world" you are probably in a serious "world of hurt". Also, in this case, the OO "paradigm" isn't going to save you much in the way of coding (will save you debugging, hopefully).
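The Magnitude idea translates directly to Python, as a sketch: place `Date` under a "comparable" abstraction, define the minimum, and all other comparisons come for free -- placement driven by code reuse, not by real-world taxonomy.

```python
from functools import total_ordering

@total_ordering
class Date:
    """Define only __eq__ and __lt__; the total_ordering decorator
    derives <=, >, and >= -- no re-implementation of comparisons."""
    def __init__(self, year, month, day):
        self.key = (year, month, day)

    def __eq__(self, other):
        return self.key == other.key

    def __lt__(self, other):
        return self.key < other.key

# Derived comparisons work without being written:
assert Date(2004, 1, 2) < Date(2004, 2, 1)
assert Date(2004, 3, 1) >= Date(2004, 1, 2)
```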
Avoidance of bugs...
Stay away from stupid languages. Insist that optimization is the compiler/computer's job. The Rosetta Stone is to ask for a factorial function, *without* specifying any details. Code it in the *most* natural way, and then test it with 10,000!
Now, determine how much breakage has occurred (if any).
The answer to LARGE projects is to write code ONCE, and be able to reuse it in any context that needs the same processing. I don't want to have to code the factorial algorithm for small integers, large integers, and really big integers.
I want the code to accommodate the data-type that is needed. If I sort, and use "" ordering, I want that to work across any datatype.
If I have to re-implement, I lose on the previous work.
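Python happens to pass this Rosetta Stone test: the most natural coding of factorial, with no datatype details specified, survives 10,000! because the runtime switches to arbitrary-precision integers on its own.

```python
def factorial(n):
    # The most natural way to code it -- no representation specified.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# No breakage at large sizes: Python ints grow as needed,
# so the same code serves small, large, and really big integers.
big = factorial(10_000)
```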
Class hierarchies can help structure (look at Smalltalk), but are not often used in this way.
Ratboy.
Re:Functional Programming et al. (Score:5, Interesting)
A lot of problems are solved with functional languages. Functional advocates claim to have the answer to software correctness and they decry the present state of imperative logic programming. What I think they fail to realize is that functional programming is ubiquitous, solving problems on a scale that contemporary imperative tools will never approach.
Microsoft Excel is, in essence, a functional programming language. It is utilized by non-"programmers" planet-wide every day to quickly, accurately and cheaply "solve" millions of problems. It has, effectively, no learning curve relative to typical coding. I have found it to be an invaluable software development tool. I take it a bit further than the typical spreadsheet task by using it to model software systems.
It is especially helpful with business logic problems. I recently implemented a relatively complex web-based product configurator. I know that if I can model the complete problem in a stateless manner using a spreadsheet, writing bug-free, efficient client and server side imperative code becomes a simple matter of translation. For any given state of a collection of inputs there is exactly one atomic result. In this case the result is a (possibly lengthy) structured document computed dynamically from a collection of input forms, both on the client (because page refreshes suck) and on the server (because validation must not depend on an honest client.) Both independent implementations (in different languages) are "obviously" correct in the sense that they are derived from a clear, functional model, built in a spreadsheet.
You may substitute any contemporary spreadsheet product in place of Excel; I have no love of Excel specifically. It's just what I've happened to have handy in all cases. The fact is that modeling most software problems requires very little of what any reasonably competent spreadsheet can accommodate. Feel free to lecture me on precisely why it is blasphemous to suggest that a spreadsheet qualifies for the designation "functional programming." I know the difference because I've studied LISP and used Scheme. The subset of true functional programming that provides the most value is clearly represented by the common spreadsheet.
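The spreadsheet model the parent describes is easy to sketch in Python: each "cell" is a pure function of the inputs, so for any given state of the inputs there is exactly one atomic result (the pricing rules below are invented for illustration).

```python
# Each 'cell' is a pure function of the inputs -- no state anywhere,
# so translating the model to client- or server-side code is mechanical.
def subtotal(qty, unit_price):
    return qty * unit_price

def discount(subtotal_value, is_member):
    # Members get 10% off; everyone else pays full price.
    return subtotal_value * (0.10 if is_member else 0.0)

def total(qty, unit_price, is_member):
    s = subtotal(qty, unit_price)
    return s - discount(s, is_member)
```

Because nothing here reads or writes state, two independent implementations (say, JavaScript on the client and Java on the server) derived from the same model should be "obviously" equivalent in the sense the parent means.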
Re:Mod parent up! (Score:3, Interesting)
IDEs hide what's really going on in the development process. Once you learn your environment, the command line is still the fastest and the easiest to isolate build problems in.
The problem, as I see it, is that a lot of programmers don't want to have to understand how to program, or the problem that they're coding to, because that's hard. Understand what's going on, and you write higher quality code. That's all there is to it.
Now if we were talking about Joe Sixpack banging together an application to track his baseball cards and not some professional IT developer, I could see a less complex environment being useful to him. I was banging out dBASE III procedures for my dad's office back when I was a boy, and it didn't have to be the most complex environment in the world for them to get their job done. They still had a solid idea of what they wanted the machine to do, though. They just gave me the requirements, I implemented them, presented them, and let them come back with changes until we had a report they liked.
objects, conditions, and processes (Score:2, Interesting)
Re:I found it to be interesting (Score:2, Interesting)
Perl has its place. I've tried to learn Perl; no big deal until you start to see "perlisms" AKA hacks, and then it becomes unreadable. I prefer Perl's prettier and younger sis, Ruby: OOP done, if not right, at least a lot better. Mmmh, how to avoid bugs? Typing less helps.
And POOP stands for Pure Object Oriented Programming.
The secret to bug free code... (Score:1, Interesting)
Re:Three-choice system of logic (Score:3, Interesting)
I find True, False or Maybe to be fairly astonishing. Does maybe mean quit? Does it mean give me more options? Does it mean the same as True or False (it would if my wife coded it, oh!)?
All the third option gives us is more questions. Which defeats Echel's sixth principle: Simplicity before generality.
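For what it's worth, three-valued logic does have well-defined semantics -- Kleene's, as used for SQL NULLs -- but a sketch of it shows the parent's point: many expressions that used to simplify no longer do.

```python
# Kleene three-valued logic, with None standing in for "maybe"/unknown.
def and3(a, b):
    if a is False or b is False:
        return False          # False dominates AND regardless of unknowns
    if a is None or b is None:
        return None           # otherwise an unknown stays unknown
    return True

def or3(a, b):
    if a is True or b is True:
        return True           # True dominates OR regardless of unknowns
    if a is None or b is None:
        return None
    return False

# The cost: 'x or not-x' is no longer always True once x can be maybe.
```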
Re:I found it to be interesting (Score:5, Interesting)
That's not to say it's bad, but it simply isn't. Perl provides you with all of the tools you need to build a GREAT OO system, but that's not an OO system.
This is one of those things that Larry does that's just unfathomable to the rest of the world. He didn't really grok OO 10 years ago when P5 was in the works. He understood it well enough to program in the large in C++, but he didn't quite have his head fully around the implications, so when he added OO to Perl 5, he did so in such a way that all of the various ways of approaching code from an OO standpoint could be accommodated.
This means that writing OO code in Perl kind of sucks, but if you want to design an OO model for a programming language, no tool (other than a parser generator) will be more powerful.
Come Perl 6, Larry finally feels that he gets it enough to tell all of the people who are going to have to use his language how to do it. He doesn't take that responsibility lightly, and the fact that SO MANY other language designers do should worry you.
That said, if you want to write medium-sized programs that are heavily OO-dependent, I suggest Ruby or Python or even Java. If you are writing small tools, OO vs non-OO won't matter that much.
If you are building huge systems, then you don't care because the amount of work required to lay out how you will use OO in Perl is insignificant next to the architecture that you have to lay out for the rest of your app. It's just noise in your timeline, and you can fully re-use that policy in every other project that your company tackles.
What's amazing is how Perl lets all of these OO models interact. I'm always stunned by this, and frankly it's a tribute to the language and its designer.
As for your comment about typing less... I don't think that languages with the level of abstraction of Ruby or Perl really need to have line-count contests. Dynamic typing, run-time data structure definition and garbage collection make programming SO much easier that Perl and Ruby are in the same order of magnitude, and I don't see a reason to quibble over the details.
Re:Test? (Score:4, Interesting)
I did a PhD at Oxford in the Programming Research Group and studied Z, CSP and all that stuff. My thesis even includes a program written in Occam proven via an algebra to meet a security specification.
Believe me, I'm aware of what the world could be like, but it is not practical to write real software this way yet. Hence we still need to test, and not enough people write tests today. Unit and system testing are best practices for the industry today, sure, there's a better theoretical way to do things, but I need to code in 2004 not 2054.
John.
Old ideas.... (Score:3, Interesting)
I couldn't stop thinking of existing theories and/or implementations of her ideas...
Modeling processes out of the OO paradigm (opposite to what design patterns started to sacralize, for example) is precisely the subject of so-called business rules. But BR people are close to the relational model of data, which is too quickly conflated with SQL DBMSs(*), so OO-oriented people don't buy it (see the almighty impedance mismatch).
Data structures other than trees and collections are already generically implementable in any modern OO language. See Eiffel, for example, which has been able to do that perfectly for 15 years (parametric classes, with full type safety). Maybe Java generics will help to build highly reusable data structures... I doubt it; anchored types are missing (i.e. the possibility to declare the type of a variable as equal to another type, nearly mandatory when dealing with inheritance of generics).
Tom.
(*) I warmly recommend the writings of Chris Date and Fabian Pascal to really see how the relational model of data is different from SQL databases... see DBDebunk [dbdebunk.com] for references.
Re:The two big reasons software is buggy! (Score:2, Interesting)
Even when the latter happens, in a lot of cases the overall soundness of the implementation is compromised by the perceived need to maintain backwards compatibility (e.g. why does the Borland Builder compiler use the 80386 instruction set by default, when almost any executable you are likely to build with it will be larger than the 386 can support?)
Bugless code!?!? (Score:2, Interesting)
Even if, by some miracle, all programs were immune to user errors, there are infinitely many hardware/OS/software combinations in which some conflict with the code will eventually surface. IMO, this ideal of bugless code is just that, an ideal.
The lost art of modelling. (Score:3, Interesting)
"software should more closely simulate the real world"
From the article: "It's not the prevention of bugs but the recovery -- the ability to gracefully exterminate them -- that counts."
While the need to gracefully recover your design from bugs (bugs come from design, or lack of it, not code) is laudable, the proper technique is to design without bugs in the first place. Assuming that you're actually meeting the business requirements or functional specifications, there is a straightforward method for flattening bugs before they become fruitfully overripe and multiply.
Once you have obtained the proper [amazon.com] requirements [amazon.com] (your goals), and after you've properly atomized them to their smallest component parts, you need to model those parts. Once you've modeled those parts, you need to test the model. This works in single process design, but it really shines in concurrency where anyone can truly screw up.
Get a good [amazon.com] book [amazon.com] on design [amazon.com]. Then get a good [amazon.com] book [ic.ac.uk] on modelling, mechanically analyzing, and testing those designed processes before committing to code.
= 9J =
Excellent! (Score:1, Interesting)
I'm a CS professor and I completely agree. I constantly challenge my tenured senior colleagues (I'm untenured) about their love affairs with UML, Java, and other tools. I teach undergraduate data structures and algorithms and never write one line of source code on the board -- it is all pseudocode. I don't ask students to use a particular language for their programming assignments, either. As a result, I don't lecture from a book either. I lecture about topics and employ several books.
The problem as I see it is that it is human nature to get in a routine and not stretch your boundaries. Many profs take the easy way by letting the tools hide their complacency. I don't think they are stupid -- they knew things at one time. But they get lazy and "eat their brain". I hope I never eat mine because it is a great disservice to students and taxpayers.
Types of bugs and how to prevent them (Score:4, Interesting)
1. Algorithmic bugs - you have a function with well-defined input and output, and it does the wrong thing (may include giving the wrong answer, looping forever, leaking memory, or taking too long to return). Can be avoided with a combination of code review, unit tests, and correctness proofs when possible.
2. Interface bugs - this includes validating input, both from the user and over the network or other ways in which your program gets input data. These bugs include buffer overruns, GUI bugs caused by an unanticipated sequence of clicks, etc. These bugs are mostly found by testing, but sometimes also with automatic code checkers or memory debuggers that highlight potential problems.
3. Bugs in the operating system or in sublibraries - any large project depends on large volumes of operating system code and usually lots of other libraries. These systems almost certainly have bugs or at the very least undocumented or inconsistent behavior. The only way to avoid this is to validate all OS responses and do lots of testing.
4. Cross-platform bugs - a program could work perfectly on one system, but not on another. Best way to address this is to abstract all of the parts of your program that are specific to the environment, but mostly this just requires lots of testing and porting.
5. Complexity bugs - bugs that start to appear when a program or part of a program gets too complicated, such that changing any one piece causes so many unintended side-effects that it becomes impossible to keep track of them. This is one of the few areas where good object-oriented design will probably help.
6. Poor specifications - these are not even necessarily bugs, just cases where a program doesn't behave as expected because the specifications were wrong or ambiguous. The way to avoid this is to make sure that the specifications are always clear. Resolve any potential ambiguities in the specs before finishing the code.
My overall feeling is that there are so many different types of bugs in a real-world programming project that any one technique (like object-oriented design) only helps address one type of bug.
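Type 2 above (interface bugs) is the one most cheaply addressed in code. A sketch of the standard defense, with an invented `parse_age` example: validate untrusted input at the boundary so bad data never reaches the program's logic.

```python
def parse_age(raw: str) -> int:
    """Validate untrusted input (user, network, file) at the interface.
    Rejecting bad data here is what prevents the unanticipated-input
    class of bugs from surfacing deeper in the program."""
    raw = raw.strip()
    if not raw.isdigit():
        raise ValueError(f"not a non-negative number: {raw!r}")
    age = int(raw)
    if not 0 <= age <= 150:
        raise ValueError(f"age out of plausible range: {age}")
    return age
```

The other types on the list (OS bugs, cross-platform bugs, complexity bugs, bad specs) have no such local fix, which is the point of the comment.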
Re:Functional Programming et al. (Score:3, Interesting)
The big advantage of FP is its clarity and rigor. To an experienced functional programmer, it's exactly clear what a piece of Haskell code means, since the code is half general functions that are easy to understand (map, zip, fold et al.) and half problem-specific functions that are about as easy. The solution is built from simple bricks everywhere, unlike in imperative programming.
Another thing is, we're talking about functions, aren't we? And shouldn't the first-class citizens of our language be functions, then?
Besides, functional programs are very much easier to prove, and that's a thing that will be very important some time in the future (or so I pray... or everyone in the IT business will go nuts over the giant pile of bugs that accumulates). For a little introduction to the theory behind this, check this document about the Curry-Howard isomorphism [www.diku.dk] (at the very bottom of the page). Besides, this is a fundamental link between programming and philosophy (intuitionistic reasoning).
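The "simple bricks" style carries over even to Python, as a sketch (the order data is invented): the general functions map, zip, and fold (reduce, in Python) do the plumbing, and only the tiny problem-specific lambdas are new.

```python
from functools import reduce

# Problem: total up an order. The general-function bricks do the work.
prices = [3.0, 4.5, 2.25]
quantities = [2, 1, 4]

# map + zip: combine the two lists into per-line totals.
line_totals = list(map(lambda pq: pq[0] * pq[1], zip(prices, quantities)))

# fold (reduce): collapse the per-line totals into one number.
order_total = reduce(lambda acc, x: acc + x, line_totals, 0.0)
```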
"Excel" type spreadsheets are very useful too, maybe not for everyday programming but for short and programs intended for quick use by non-programming coworkers. In fact Excel is the one Program that tells me that not everyone in Microsoft can be a moron.
Re:I found it to be interesting (Score:2, Interesting)
I don't really need someone telling me OO has limitations; I already know, and so should you. A programmer should be limited only by his/her ability to use the tools, not by knowing only a single language or a single programming paradigm.
What I want to point out is that:
Her proposition of "a notch more than OO" looks quite nebulous and ungraspable; I think she should research more languages.
Computer Aided Programming (Score:2, Interesting)
Re:Objects (Score:2, Interesting)
If it had a good object system, wouldn't that make it totally unlike C, C++, or Java? At least, semantically it would. I suppose the syntax could be C-like. Inform, for example, has C-like syntax, but the object system is so much more advanced than the one used by C++ that there is no comparison.
(Inform, however, is not a general-purpose language. It doesn't have the I/O capabilities to be general-purpose. This is mostly due to the virtual machine it's designed to compile to, which for portability reasons doesn't support such things as a filesystem per se. But it's a great language for its intended problem space.)
Intuition or Cumulative Knowledge databases? (Score:3, Interesting)
Slightly OT: ... or rather, just ACCEPTING unknowns and their repercussions in logic. Say I withhold a fact in an argument but claim to be "right"; he will say there is just NO way of winning the argument -- and then I produce the "new evidence." He sometimes recoils, thinking his logic models cannot be blown away by my (normal logic + hidden evidence).
This is indeed hitting the nail on the head. My father and I have lots of disagreements on the issue of "common sense." I am very smart, and he is too, but he tends to fall behind when it comes to explaining
Whenever he says that I should know something because it's intuitive, I bring up example after example of why he's mistaken to expect all logical conclusions to be == to his. We saw a lady in a TV contest who had to see words hidden behind her husband and make him guess them through signs and gestures. She stumbled upon "otorrino" (this TV show is in Spanish), which is short for otolaryngologist, and said that she didn't know the word. Well, I won't get into more complex translation details. Suffice to say that she didn't know what it was and had to skip to the next thing. My father was outraged:
I asked my dad why he rationalizes which concepts she SHOULD know rather than why she just DIDN'T know what he's 100% sure she already grasps. Moreover, he's always too shocked to see through his own failure at accepting that common sense doesn't exist, instead trying to verbally fix something that has proven false before his own eyes. But some people think they already know what is and isn't IMPOSSIBLE.
I can list three other languages besides English and Spanish that I can understand, so I speak 5 languages, right? No. This is an example of misinformation and generalization: if she says she speaks 7, it doesn't mean her job has made her command them all -- even if you 'knew' as many languages as the Pope and had to work in New York City's linguistic melting pot, you will never exert more than 3 language roles in your official capacity, and your "other 4 languages" will be pet languages, especially if you're only 35. But sometimes dad waves off my facts as crazy talk from today's young and naive offspring. Too bad. Sometimes he's surprised at how lucky I am when my "wrong logic" gets him so many nice surprises when his ways should be the only solution to my "poor common sense." You can tell I deal with control-freak parents, eh?
Great slashdot discussion (Score:3, Interesting)
Let me add my 2 cents: the problem is that computer programs represent the 'how' instead of the 'what'. In other words, a program is a series of commands that describes how things are done, instead of describing what is to be done. A direct consequence of this is that programs are allowed to manipulate state in a manner that makes complexity skyrocket.
What did object orientation really do for humans? It forced them to reduce management of state... it minimized the domain that a certain set of operations can address. Before OO, there was structured programming: people were not disciplined (or good) enough to define domains that are 100% independent of each other... there was a high degree of interdependency between the various parts of a program. As an application grew larger, its complexity reached a state where it was unmanageable.
All of today's tools are about reducing state management. For example, garbage collection is there to reduce memory management: memory is in a specific state at each given point in time, and in manual memory management systems, the handling of this state is left to the programmer.
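A tiny illustration of the point, using Python's collector (this assumes CPython's cycle detector; the `Node` class is just a stand-in):

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

# Create a reference cycle, then drop every external reference to it.
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b

# The cycle is now unreachable. The collector reclaims it without the
# programmer ever tracking the allocation state by hand -- that whole
# category of state management is taken off the programmer's plate.
unreachable = gc.collect()
```

In a manual-memory language the programmer would have had to remember that both objects, and the cycle between them, exist; here that bookkeeping simply isn't the programmer's problem.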
But there is no real progress in the computing field! OO, garbage collection, aspect-oriented programming and all the other conventional programming techniques are about reducing the management of state; in other words, they help answer the 'how' question, but not the 'what' question.
Here is an example of 'how' vs 'what': a form that takes input and stores it in a database. The 'what' is 'we need to input the person's first name, last name, e-mail address, user name and password'. The 'how' is 'we need a textbox there, a label there, a combobox there, a database connection, etc.'. If we simply answered 'what' instead of 'how', there would be no bugs! Instead, by directly manipulating the state, we introduce state dependencies that are the cause of bugs.
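To make the 'what' concrete, here's a minimal sketch in Python. The field names come from the comment above; the spec format and the `validate` helper are hypothetical, just enough to show a declarative description driving generic machinery:

```python
# The 'what': a declarative description of the data we need.
FORM_SPEC = [
    {"name": "first_name", "required": True},
    {"name": "last_name",  "required": True},
    {"name": "email",      "required": True},
    {"name": "username",   "required": True},
    {"name": "password",   "required": True},
]

def validate(spec, submission):
    """Generic 'how', written (and debugged) once for every form."""
    errors = []
    for field in spec:
        if field["required"] and submission.get(field["name"]) is None:
            errors.append("missing " + field["name"])
    return errors

errors = validate(FORM_SPEC, {"first_name": "Ada", "last_name": "Lovelace"})
```

The per-form code never touches widgets or connections; all the stateful 'how' lives in one shared place, so a bug in it is fixed once rather than once per form.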
Functional programming languages claim to have solved this problem, but they are not widely used, because their syntax is weird for the average programmer (their designers want to keep it as close to mathematics as possible), they are interpreted, etc. The average person who wants to program does not have a clue, and society drifts further away from mathematics day by day, so functional languages go unused.
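For what it's worth, the stateless style doesn't require an exotic language; here is the same computation written both ways in Python, purely as an illustration of the contrast:

```python
from functools import reduce

numbers = [1, 2, 3, 4]

# Imperative 'how': step-by-step mutation of an accumulator variable.
total = 0
for x in numbers:
    total += x

# Functional 'what': the result described as a fold, no mutable state.
total_fp = reduce(lambda acc, x: acc + x, numbers, 0)
```

Both produce the same value, but the second form has no intermediate state for a bug to hide in.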
The posted article of course sidesteps all these issues and keeps mumbling about 'intuitive programming' without actually giving any hint of a solution.
Re: Spreadsheets lose appeal with age and increase (Score:1, Interesting)
For bug-free code, look at home building (Score:2, Interesting)
First of all, homes are built from standard components. Lots of standard components. In software development, if you want a linked list, you have, at most, a handful of choices. When building a home, if you want a nail, you have dozens or hundreds of choices: roofing nails, framing nails, finish nails, decking nails, galvanized nails, nails with ribs, nails with spiral grooves, all available in a variety of sizes.
The point is, if you need a nail, you can walk into Home Depot and get *exactly* the nail that you want. In software, you have to make do with what's available.
You could argue that one can create whatever custom list behavior is needed from the building blocks available. That's true. You can also make whatever nail you want from steel wire. But we don't do that, because it's error-prone and you might not get exactly the right kind of nail for the job. It's better to get one off the shelf, because you can be pretty sure someone has thought through all the issues related to the problem it's designed to solve and taken care of them.
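To stick with the analogy in code: reaching for a standard component such as Python's `collections.deque` instead of hand-rolling a linked list is the software equivalent of buying the nail off the shelf (a minimal sketch):

```python
from collections import deque

# Off-the-shelf component: a double-ended queue that already handles
# the edge cases a hand-rolled linked list would force us to debug.
dq = deque([1, 2, 3])
dq.appendleft(0)   # O(1) insert at the front
dq.append(4)       # O(1) insert at the back
```

Every boundary condition in that structure has been exercised by thousands of users; a homemade list built from "steel wire" would start that process from scratch.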
Next, look at who builds a home and compare it to who builds software. If we built houses the way we build software, you'd bring 150 people to the job site and say "you there, do the framing; you over there, you're on plumbing; you two in the back have electrical work." After all, builders are builders, right? Of course not. There are specialized skills involved at each step and you need the expertise to do them. You wouldn't want your plumber doing the roof work any more than you'd want an excavator operator doing the plumbing. To build a complex system, you need lots of people with very specific skills.
Standards and interfaces. So how is it that you can walk into Sears and buy a blender that interfaces so perfectly with the electricity in your house? Because there are standards, of course. The standards specify the voltage and frequency on the line, the exact shape of the plug that you use, how long the cord should be, what sort of insulation it has and probably a lot of other stuff. Software systems need the same sort of standardized interfaces for their basic properties. We've started this with things like constructors, destructors and I/O operators.
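As a sketch of what such a standardized interface looks like in code: any object that follows Python's iterator protocol plugs straight into a `for` loop, the way any appliance with a standard plug fits the wall socket (the `Countdown` class here is just an illustration):

```python
class Countdown:
    """Any object implementing __iter__/__next__ works with for-loops,
    list(), and everything else that speaks the standard protocol."""
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration  # the standard "no more items" signal
        value = self.current
        self.current -= 1
        return value
```

Neither the `for` loop nor `Countdown` knows anything about the other; the shared protocol is the wall-socket standard that lets them interoperate.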
Basically, I see software development as a very young science. As the standards evolve, the quality and reliability will grow also.
Rich Waters did something like this in the 80s. (Score:3, Interesting)
I think in the end the lower-level techniques of such a system prove more useful to a programmer. At the very least, they have had a higher rate of successful adoption. Most modern programming languages use Lisp's ideas of uniform object referencing and automatic memory management, and things like quasiquoting pre-processors have started popping up for other languages in the past couple of years. Concepts a little higher-level, like AOP and refactoring, go a very long way toward the intelligent programmer's apprentice. Of course, if nothing else, all this stuff makes building programming apprentices much easier - just try to imagine building one to deal with C's pointer referencing!