Programming Bug IT Technology

Intuitive Bug-less Software? 558

Starlover writes "In the latest java.sun.com feature at Sun's Java site, Victoria Livschitz takes on some ideas of Jaron Lanier on how to make software less buggy. She makes a couple of interesting points. First, making software more 'intuitive' for developers will reduce bugs. Second, software should more closely simulate the real world, so we should be expanding the pure object-oriented paradigm to allow for a richer set of basic abstractions -- like processes and conditions. The simple division of structures into hierarchies and collections in software is too simple for our needs, according to Livschitz. She offers a set of ideas explaining how to get 'there' from here. Comments?"
This discussion has been archived. No new comments can be posted.

Intuitive Bug-less Software?

Comments Filter:
  • by GeckoX ( 259575 ) on Friday February 13, 2004 @03:30PM (#8272500)
    It is called ternary computing, as opposed to binary computing.

    There is a ton of information out there on this, and this is in no way a new idea. (Google it, lotsa reading for ya)

    Currently, the only way to utilize this is to process ternary logic in software, as at this point there is no ternary circuitry in general use.
    For this to actually be useful we would need a platform that can execute ternary code natively.
    Lots of work has been done in this area too (not only with ternary, but also with multi-state transistors having more than 3 states).

    For those of us not at the bleeding edge of research in these areas though, we'll just have to wait until there is hardware to support this kind of thing, and then likely some tools to start with.
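
    Roughly, "processing ternary logic in software" looks something like the minimal Java sketch below -- Kleene-style three-valued logic with TRUE/FALSE/UNKNOWN. The Trit type and its methods are purely illustrative, not taken from any real ternary toolkit:

    // Three-valued (Kleene) logic simulated on ordinary binary hardware.
    enum Trit {
        FALSE, UNKNOWN, TRUE;

        Trit and(Trit other) {
            if (this == FALSE || other == FALSE) return FALSE;
            if (this == UNKNOWN || other == UNKNOWN) return UNKNOWN;
            return TRUE;
        }

        Trit or(Trit other) {
            if (this == TRUE || other == TRUE) return TRUE;
            if (this == UNKNOWN || other == UNKNOWN) return UNKNOWN;
            return FALSE;
        }

        Trit not() {
            if (this == TRUE) return FALSE;
            if (this == FALSE) return TRUE;
            return UNKNOWN;
        }
    }

    Until native ternary hardware exists, every "trit" here is of course still stored and manipulated in binary underneath.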
  • by Sique ( 173459 ) on Friday February 13, 2004 @03:38PM (#8272599) Homepage
    The main problem with ternary computing is that it can be directly mapped onto binary computing. So it doesn't change the set of problems we can attack with computing; it just changes the way. But the conversion between binary and ternary logic can be done automatically.

    I know of no class of problems in computer science that can be better addressed by ternary computing than by binary computing. There may be some of them out there. But in general ternary computing doesn't change enough to have an impact.
  • by bigattichouse ( 527527 ) on Friday February 13, 2004 @03:46PM (#8272702) Homepage
    Most of my apps have been moving to a "State Machine" based workflow... Each item of work or task sits in front of you, and only gives you the choices needed to move it along... Once the engine is in place, the rest is simple "OK, let's build the code to show them enough info to make a choice" work, and the idea translates well to automated processes: just pop the next item from the queue and work on it.
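
    A rough Java sketch of that kind of engine, assuming a simple enum of states and an explicit transition table (the state names and class names here are made up for illustration):

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;
    import java.util.Set;

    // Each task is only ever offered the transitions legal from its current state.
    enum TaskState { NEW, IN_REVIEW, APPROVED, REJECTED, DONE }

    class WorkflowEngine {
        private static final Map<TaskState, Set<TaskState>> ALLOWED =
            new EnumMap<TaskState, Set<TaskState>>(TaskState.class);
        static {
            ALLOWED.put(TaskState.NEW,       EnumSet.of(TaskState.IN_REVIEW));
            ALLOWED.put(TaskState.IN_REVIEW, EnumSet.of(TaskState.APPROVED, TaskState.REJECTED));
            ALLOWED.put(TaskState.APPROVED,  EnumSet.of(TaskState.DONE));
            ALLOWED.put(TaskState.REJECTED,  EnumSet.of(TaskState.IN_REVIEW));
            ALLOWED.put(TaskState.DONE,      EnumSet.noneOf(TaskState.class));
        }

        // The UI shows exactly these choices for the item in front of you.
        Set<TaskState> choicesFor(TaskState current) {
            return ALLOWED.get(current);
        }

        // Automated processes call the same check before moving an item along.
        TaskState move(TaskState current, TaskState next) {
            if (!ALLOWED.get(current).contains(next)) {
                throw new IllegalStateException(current + " -> " + next + " is not a legal transition");
            }
            return next;
        }
    }

    The point is that all the workflow knowledge lives in one table, so the UI and the batch processes can't disagree about which moves are legal.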
  • by subrosas ( 752277 ) on Friday February 13, 2004 @04:03PM (#8272890)
    A form of ternary logic is used to establish whether two lines/arcs intersect in some GIS implementations. This was introduced several decades ago. Usually it's called Fuzzy Tolerance. So actually, ternary logic is useful and in use.
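
    In spirit it's a three-way answer instead of a boolean one. A small, purely illustrative Java sketch (the tolerance test and the names are not from any particular GIS package):

    // Do two points coincide? Yes, no, or "close enough to snap".
    enum Coincidence { YES, NO, WITHIN_TOLERANCE }

    class FuzzyCompare {
        static Coincidence compare(double x1, double y1,
                                   double x2, double y2,
                                   double tolerance) {
            double d = Math.hypot(x2 - x1, y2 - y1);
            if (d == 0.0)       return Coincidence.YES;
            if (d <= tolerance) return Coincidence.WITHIN_TOLERANCE; // the "maybe" case
            return Coincidence.NO;
        }
    }

    Whether you snap, flag, or reject the WITHIN_TOLERANCE case is then a policy decision, which is where the third value earns its keep.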
  • by MCRocker ( 461060 ) on Friday February 13, 2004 @04:11PM (#8273004) Homepage
    Many of the problems with currently popular software design antipatterns like MVC [c2.com] are that they throw out all of the advantages of true Object Orientation in their zeal to re-invent 1970's-style data processing in OO languages. The NakedObjects [nakedobjects.com] folks have an approach [nakedobjects.org] that makes it much easier to code in a truly object-oriented fashion. This results in more natural, behaviourally complete objects that are easier to understand, test and refactor. Although this doesn't solve all of the problems that Livschitz mentions, it definitely mitigates the common problems that developers who use OO languages experience, and it reduces bugs for some of the same reasons Livschitz cites, because it addresses the same underlying problems.
  • by MrBlint ( 607257 ) on Friday February 13, 2004 @04:21PM (#8273148)
    And only declare a symbol once! A number of so-called 'engineers' where I work put external function declarations in C source files because they've C&P'd some code and can't be arsed to find the appropriate header file (or don't know how to change the makefile to include it). Worse still are people who call a function without declaring it at all - because "It only takes int params so there's no need". Idiots!

    I recently got management to launch a campaign to reduce the number of warnings caused by this sort of idiocy (we get about 5000 from a full rebuild). What do I find now? "#pragma nowarn", etc... give me strength!

  • Re:Objects (Score:3, Informative)

    by be-fan ( 61476 ) on Friday February 13, 2004 @04:43PM (#8273438)
    See: Lisp, Dylan, Scheme, Ruby, Python, Smalltalk, among others. These languages are all "objects all the way down," though Dylan, Ruby, and Python are more so than Lisp and Scheme.

  • by Paradox ( 13555 ) on Friday February 13, 2004 @05:04PM (#8273760) Homepage Journal
    There is a language like that. In fact, both C++ and Java borrowed several ideas from it. It's called Smalltalk. :) In Smalltalk, everything is an object. Objects talk to each other via methods. Smalltalk has a limited form of closures, can handle exceptions, and has double-dispatch.

    As languages go, it's pretty awesome. It was well ahead of its time, anyways. Ruby (as another poster mentioned) also does some of this.

    Smalltalk and Ruby are great if you're just working with components and assembling them lego style, sure. But what'd be really nice is to use a language that can do both high level coding and systems programming. Someone else thought of it. Brad Cox came up with Objective-C, which NeXT later expanded upon.

    Apple has been using Objective-C with the old OpenStep library as its primary development environment for a while now. It's very nice, supports a lot of useful features, and has explicit memory management that is very flexible, circumventable and tunable (it uses reference counting, though people have made mark-and-sweep extensions; neither is implicit the way Java's garbage collection is).

    Objective-C supports late binding, weak typing, strong typing, static typing and dynamic typing, all in the same program. It can directly use C, so if you know C you're already 3/4 of the way there. The message syntax is slightly odd, but works out. Unfortunately, Objective-C doesn't have closures. David Stes developed a meta-compiler that turns Objective-C with closures into regular C (called the Portable Object Compiler) which might get you some distance if your work demands them.

    ObjC can use C-style functions, Smalltalk-style message passing, or a hybrid of both. It's a very interesting language. Apple added C++ extensions, so now in most cases you can even use C++ code (however, C++ classes are not quite ObjC classes, and there are some caveats).

    If you're looking for a language that splits the difference between Ruby/Python and C/C++, Objective-C might be your best bet. It's pretty hard to find an easy-to-use language that also provides a lot of performance.
  • Re:Objects (Score:1, Informative)

    by Anonymous Coward on Friday February 13, 2004 @05:05PM (#8273774)
    There is a distinction between metaphor and definition. If I say a function is a machine for turning elements of the input set into elements of the output set, then this is a metaphor in which I have used the word machine to describe a function. I suspect that what you are arguing is that the word function itself is a metaphor. This is incorrect. The word function has a definition. A function f:X->Y is a subset of XxY such that for every x in X there is a unique y in Y such that (x,y) is in the subset. This is not a metaphor.
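
    In standard notation, that definition is simply (a restatement of the sentence above, nothing added):

    f \subseteq X \times Y \quad \text{such that} \quad \forall x \in X \;\exists!\, y \in Y : (x, y) \in f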
  • by dasmegabyte ( 267018 ) <das@OHNOWHATSTHISdasmegabyte.org> on Friday February 13, 2004 @05:57PM (#8274493) Homepage Journal
    Actually, from what I remember from my Java larnin' days, there was a big fight at Sun over the + operator and strings. It was left in for convenience. Personally, I never use it, not even in .NET, because it implies an activity that isn't going on; that is to say, it implies you're APPENDING string A to string B.

    Since strings are immutable, it's actually creating a new string based on the content of strings a and b. So
    String newString = "this" + " language " + "sucks";
    is actually
    StringBuffer sb = new StringBuffer("this".length() + " language ".length() + "sucks".length());
    sb.append("this");
    sb.append(" language ");
    sb.append("sucks");
    String newString = sb.toString();
    sb = null;

    A lot of bloat just for a plus sign, eh? The first thing you learn when tuning Java is to avoid using that + sign at all costs. It's only there for learners. .NET has even cooler functionality in its Formatter classes...
    String.Format("{0} {1} {2}", "this", "language", "rocks");
    ...cooler because it's fairly efficient and can format commonly displayed datatypes (dates, numbers, etc.). Sort of like iostreams, only at a higher level of programming.
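
    For completeness, the Java side of both points, as a small self-contained sketch (StringBuffer and String.format are real Java APIs; String.format only arrived with J2SE 5.0):

    // Sketch: the loop case where repeated '+' really hurts, and the
    // explicit-buffer idiom being described above.
    public class ConcatDemo {
        public static void main(String[] args) {
            String[] words = { "this", "language", "rocks" };

            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < words.length; i++) {
                if (i > 0) sb.append(' ');
                sb.append(words[i]);
            }
            String joined = sb.toString();

            // Java's counterpart to the .NET Format call above (J2SE 5.0 and later):
            String formatted = String.format("%s %s %s", "this", "language", "rocks");

            System.out.println(joined);
            System.out.println(formatted);
        }
    }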
  • by undef24 ( 159451 ) on Friday February 13, 2004 @08:11PM (#8275729)
    #!/usr/bin/perl -w
    use strict; ..... problem solved.
  • by shammahau ( 752404 ) on Friday February 13, 2004 @10:35PM (#8276685)
    While you are correct about the power of concurrent dataflow programming (i.e. Excel ;), you are mistaken to suggest that this is not recognised by the FP community. A quick search for 'spreadsheet programming' on citeseer uncovered these:
    Domain-Specific Languages: An Annotated Bibliography http://citeseer.nj.nec.com/396896.html
    Haxcel: A Spreadsheet Interface to Haskell http://citeseer.nj.nec.com/lisper02haxcel.html
    Uncovering Effects of Programming Paradigms: Errors in Two Spreadsheet Systems http://citeseer.nj.nec.com/tukiainen00uncovering.html
    Spreadsheet Model for Programming http://citeseer.nj.nec.com/ambravaneswaran00spreadsheet.html
    Unfortunately, spreadsheets as they currently exist exhibit a striking level of bugs as they attempt to scale to larger systems [tukiainen00]; the limitations are well known. Fortunately there is active research into addressing these issues (I've lost the citation, but I remember finding the papers on citeseer). Yes, spreadsheets are a powerful FP mechanism; but there is still a lot of work left to do before they could be considered a candidate silver bullet.
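
    For readers unfamiliar with the idea: a spreadsheet cell is essentially a node in a dataflow graph that recomputes when its inputs change. A minimal, purely illustrative Java sketch of that mechanism (the Cell class and its behaviour are made up for the example; it needs Java 8+ for the lambdas, and it ignores cycles):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Supplier;

    // A dataflow "cell": recomputes from its formula whenever an input changes.
    class Cell {
        private final List<Cell> dependents = new ArrayList<Cell>();
        private Supplier<Double> formula;
        private double value;

        Cell(double constant) { this.value = constant; }

        Cell(Supplier<Double> formula, Cell... inputs) {
            this.formula = formula;
            for (Cell input : inputs) input.dependents.add(this);
            recompute();
        }

        double get() { return value; }

        void set(double v) {                         // edit a constant cell...
            this.value = v;
            for (Cell d : dependents) d.recompute(); // ...and the change ripples downstream
        }

        private void recompute() {
            if (formula != null) value = formula.get();
            for (Cell d : dependents) d.recompute();
        }

        public static void main(String[] args) {
            Cell a = new Cell(1), b = new Cell(2);
            Cell sum = new Cell(() -> a.get() + b.get(), a, b); // C1 = A1 + B1
            Cell twice = new Cell(() -> 2 * sum.get(), sum);    // D1 = 2 * C1
            a.set(10);
            System.out.println(sum.get() + " " + twice.get());  // prints 12.0 24.0
        }
    }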
  • by jonadab ( 583620 ) on Saturday February 14, 2004 @12:16AM (#8277237) Homepage Journal
    > I prefer C/C++ because things are pretty explicit, ie. you need to define
    > your variables explicitly before you use them

    You can fix this in Perl. Just put the following line at the top of each file:
    use strict;

    > However, with Perl, there are so many things that if they aren't present,
    > they are assumed. It is very "hacky" and makes it very hard to read.

    Only when you're very new to the language. Once you've learned it, the
    terseness makes it possible to see whole functions -- indeed, whole entire
    algorithms -- on the screen at one time in a clear, easy-to-follow layout.
    (Pay no attention to my signature; that's that way on purpose, and if you
    think C is immune to obfuscation, google for IOCCC sometime.)

    Having all that superfluous redundant stuff written out just turns simple
    functions that ought to be ten lines (half of that comments) into multi-page
    monstrosities that require several minutes to read and understand. Bleah.

    If you don't like Perl, try Python. It's more strict about a lot of stuff,
    so you might like it better, if you're into that sort of thing. However, it
    shares with Perl certain very critical features that every language ought to
    have, such as built-in memory management. (No more buffer overflows EVER :-)

    Personally, I tried Python, didn't care for it, and went back to Perl.
