
Interview with Jaron Lanier on "Phenotropic" Development

Sky Lemon writes "An interview with Jaron Lanier on Sun's Java site discusses 'phenotropic' development versus our existing set of software paradigms. According to Jaron, the 'real difference between the current idea of software, which is protocol adherence, and the idea [he is] discussing, pattern recognition, has to do with the kinds of errors we're creating,' and that if 'we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become.'"

  • Full of it. (Score:5, Insightful)

    by cmason ( 53054 ) on Saturday January 25, 2003 @12:37PM (#5157065) Homepage
    If you think about it, if you make a small change to a program, it can result in an enormous change in what the program does. If nature worked that way, the universe would crash all the time. Certainly there wouldn't be any evolution or life.

    <cough>Bullshit.</cough>

    This guy obviously knows nothing about biology. A single base change in DNA can result in mutations that cause death or spontaneous abortion. As little as a change in a single 'character' can be lethal. That's a pretty "small change" that results in a pretty big "crash."

    I'm not sure if this invalidates his argument, but it certainly doesn't do much for his credibility.

  • by aminorex ( 141494 ) on Saturday January 25, 2003 @12:41PM (#5157089) Homepage Journal
    And when you link your 10 million line program with my 10 million line program, we've got a 20 million line program. This idea of an inherent limit to the complexity of programs using current methods is pure larksvomit, and if Jaron Lanier sells it, he's a snake oil hawker.

    This is Jack's total lack of surprise -> :|
  • Big Programs (Score:5, Insightful)

    by Afty0r ( 263037 ) on Saturday January 25, 2003 @12:42PM (#5157093) Homepage
    "if 'we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become."

    Fantastic! We'll all get down and program small, specific routines for processing data, each one doing its own job and doing it well. Those nasty, horrid standard protocols he refers to will allow all these small components to easily talk to each other - across architectures, networks, etc.

    Oh wait, this is the way it already works. Is this guy, then, proposing that we learn a new way to program because our systems aren't monolithic enough? *sigh*
  • by QEDog ( 610238 ) on Saturday January 25, 2003 @12:44PM (#5157103)
    That is only if you consider one living being on its own. I think he means that it is robust in the sense of an ecological balance. If a small change happens in one animal's DNA, that animal dies and is unable to reproduce. So the 'error' was confined and dealt with; it didn't explode into a blue screen. Evolution is a phenomenon of many living beings, not one. Even if a big change happens in a species, most of the time the system is robust enough to absorb it and settle into something that works. And, because of the evolutionary mechanism, only the good mutations, by definition, spread. Imagine a computer program where only the useful threads got resources allocated...
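
    If you wanted to sketch that last idea in code, it might look something like the toy scheduler below. The two strategies and the fitness scoring are invented purely for illustration; the point is just that useful work earns a bigger share of the next round's budget while useless work starves.

    import random

    # Toy "evolutionary" scheduler: strategies whose results are useful get a
    # bigger share of the next round's budget; useless ones starve. The
    # strategies and the fitness scoring are invented for illustration.

    def doubles_input(x):  return x * 2      # pretend this is usually useful
    def does_nothing(x):   return None       # pretend this one never helps

    fitness = {doubles_input: 1.0, does_nothing: 1.0}

    for generation in range(4):
        total = sum(fitness.values())
        for strat in list(fitness):
            budget = int(20 * fitness[strat] / total)          # ticks this round
            useful = sum(strat(random.randint(1, 9)) is not None
                         for _ in range(budget))
            fitness[strat] = 0.5 * fitness[strat] + useful     # reward + decay
        print({s.__name__: round(f, 1) for s, f in fitness.items()})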
  • by chuck ( 477 ) on Saturday January 25, 2003 @12:44PM (#5157111) Homepage
    Seriously, has there ever been a need to write a program of 10 million lines? I rather believe that creating a number of small components that work well, and combining them in some intelligent way, is the way that you build large systems.

    Now, the extent to which the pieces that you're building are called "programs," or whether the whole system is called "a program" is questionable.

    I mean, I've worked on programs of 10 million bytes, and they've seemed to work okay. It would surprise me if 10 million lines is out of my reach using the methods that I'm familiar with.

    -Chuck

  • Unfortunately... (Score:2, Insightful)

    by holygoat ( 564732 ) on Saturday January 25, 2003 @12:48PM (#5157129)
    ... this guy doesn't seem to really know what he's talking about.

    As someone else has mentioned, life in many respects isn't robust - and where it is, it's not relevant.

    For instance, genetic code is mutable - but try doing that with machine instructions. That's why Tierra's creatures are written in its own pseudo-machine language.

    If there's one thing that I don't want happening in code, it's tiny errors causing almost-reasonable behaviour. Brittle code is code that's easy to debug.

    What he really wants is lots of small, well-defined objects or procedures doing their one task well. If you decompose a design well enough, there's nothing to limit scalability.
    Good design is the answer.
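
    For what it's worth, the way Tierra-style systems get away with mutating code is an instruction set that is closed under mutation: every possible byte decodes to some legal instruction, so a bit flip changes behaviour instead of hitting an illegal opcode. A toy illustration of that property follows - this is not Tierra's real instruction set, just a made-up four-instruction machine.

    import random

    # Toy VM where any byte is a valid instruction (opcode = byte % 4),
    # so random mutation can never produce an "illegal opcode" crash.
    # An illustration of the Tierra idea, not Tierra's actual ISA.

    def run(program, steps=20):
        acc, pc = 0, 0
        for _ in range(steps):
            op = program[pc % len(program)] % 4
            if op == 0:   acc += 1          # INC
            elif op == 1: acc -= 1          # DEC
            elif op == 2: acc *= 2          # DBL
            else:         acc = 0           # CLR
            pc += 1
        return acc

    genome = bytearray(random.randrange(256) for _ in range(16))
    print("original:", run(genome))

    mutant = bytearray(genome)
    mutant[random.randrange(len(mutant))] ^= 1 << random.randrange(8)  # flip one bit
    print("mutant  :", run(mutant))        # different behaviour, but never a crash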
  • and another thing (Score:4, Insightful)

    by Anonymous Hack ( 637833 ) on Saturday January 25, 2003 @12:55PM (#5157164)

    His comments don't seem to make any sense with regard to the way we, as humans, actually view The Real World either:

    So, now, when you learn about computer science, you learn about the file as if it were an element of nature, like a photon. That's a dangerous mentality. Even if you really can't do anything about it, and you really can't practically write software without files right now, it's still important not to let your brain be bamboozled. You have to remember what's a human invention and what isn't.

    Of course a file is a human invention, but it's also a concept without which NOTHING would work - not just computers. A "file" is just an abstraction of a blob, and I mean that both in the sense of a "blob" as in "a thing" and as in "Binary Large OBject". It's a piece of data that represents something. That's exactly the same thing as looking at a house and saying "that's a house" or looking at a car and saying "that's a car". It's just a way to categorize a bunch of photons/atoms/whatever into something we can readily communicate and understand. This is HUMAN, this is how we reason. If we "saw" the universe as a bazillion photons, we'd never get anything done, because we couldn't say "here is a car"; we'd be describing each photon individually, which would take up millions of human lifetimes. It's a human limitation, and I don't see any way that we could just up and ignore it.

    Don't get me wrong, I think what this guy is talking about is fascinating, but I also think it's got more of a place in some theoretical branch of math than in real-life development.

  • My favorite quotes (Score:2, Insightful)

    by ckedge ( 192996 ) on Saturday January 25, 2003 @12:56PM (#5157168) Journal
    Virtual reality-based applications will be needed in order to manage giant databases

    "Phenotropic" is the catchword I'm proposing for this new kind of software.

    Oh, those are good signs.

    And we're at the point where computers can recognize similarities instead of perfect identities, which is essentially what pattern recognition is about. If we can move from perfection to similarity, then we can start to reexamine the way we build software. So instead of requiring protocol adherence in which each component has to be perfectly matched to other components down to the bit, we can begin to have similarity. Then a form of very graceful error tolerance, with a predictable overhead, becomes possible.

    Phht, I want my computer to be more predictable, not less.

    we need to create a new kind of software.

    No, what we need is an economic model that doesn't include a bunch of pointy-haired bosses forcing tons of idiot (and even good) developers to spew crap.

    And we need consumers to up their standards, so that crap programs can't become popular because they look shiny or promise 100,000 features that people don't need. And we need to get rid of pointy-haired bosses that choose software because of all the wrong reasons.

    In phenotropic computing, components of software would connect to each other through a gracefully error-tolerant means that's statistical and soft and fuzzy and based on pattern recognition in the way I've described.

    Sounds like AI and another great method of using 10,000 GHz CPUs to let people do simple tasks with software written by morons, instead of simply writing better code and firing and putting out of business the morons.
  • Re:Full of it. (Score:4, Insightful)

    by Sri Lumpa ( 147664 ) on Saturday January 25, 2003 @12:58PM (#5157174) Homepage
    "This guy obviously knows nothing about biology. A single base change in DNA can result in mutations that cause death or spontaneous abortion. As little as a change in a single 'character' can be lethal. That's a pretty "small change" that results in a pretty big "crash.""

    This means that nature has an excellent error-catching and correction system: rather than letting buggy code run and produce flawed results, it catches the worst cases and prevents them from running (spontaneous abortion), while code with fewer bugs (say, a congenital disease) has less chance to run (early death, lesser sexual attractiveness to mates...).

    It is only with the advent of modern society and modern medicine that the evolutionary pressure has dropped enough to make it less relevant to humans. Maybe in the future, with genetic engineering, we will be able to correct congenital diseases in the womb.

    Even beyond DNA I am convinced that nature has a good recovery system, and that if humans were to disappear tomorrow most of the damage we did to Earth would eventually heal (but how long before we reach the point of no return?).

    Now if software had a similar damage-control mechanism and could still function while invalidating the buggy code, that would be something.
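
    The closest everyday software analogue to that kind of error catching is plain redundancy plus voting. A minimal sketch (triple redundancy with a majority vote - nothing DNA-specific about it, just the same "spend extra copies to absorb damage" trade-off):

    # Minimal sketch of "damage control" via redundancy: keep three copies
    # and take a majority vote, so a single corrupted copy is outvoted.

    def store(value):
        return [value, value, value]              # three redundant copies

    def read(copies):
        return max(set(copies), key=copies.count) # majority vote

    copies = store(42)
    copies[1] = 9999                              # one copy gets corrupted
    print(read(copies))                           # still 42 -- the error is absorbed

    Nature's version is vastly more sophisticated, but the principle is the same: pay for redundancy up front so that small errors never reach the outside.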
  • by Anonymous Coward on Saturday January 25, 2003 @01:02PM (#5157187)
    He thinks pattern recognition-based methods like neural networks and genetic optimization are the solution to the complexity of traditional software.

    So do lots of naive people. HE has a fancy new word for it.

    Yes, fuzzy computing has its place -- there are certain applications for which it's much better than traditional programming -- but it took so many millions of years for our brains to evolve to the point where we can use logic and language to solve and express problems. It's ridiculous to think we should throw that all away.
  • Why do we need programs with more than 10 million lines of code?

    Has anyone ever noticed that every time Jaron writes an essay or does an interview he tries to coin at least one new word? Dude's better suited to the philosophy department.

    "It's like... chaos, man. And some funky patterns and shit. Dude, it's all PHENOTROPIC. Yeah..."
  • by rmdyer ( 267137 ) on Saturday January 25, 2003 @01:13PM (#5157241)
    Obviously you "can" write mammoth programs with 1 billion lines of code without crashing. It's the kind of program you are writing that becomes critical.

    Generally, serial programs are the simplest programs to write. Most serial programs control machines that do very repetitive work. It's possible to write serial programs very modularly so that each module has been checked to be bug free. Today's processors execute code so that the result is "serially" computed. Yes, the instructions are pipelined, but the result is that of a serial process.

    Where we go wrong is when we start writing code that becomes non-serial: threads that execute at the same time, serial processes that look ahead or behind. Most OOP languages tend to obfuscate the complexities behind the code. Huge class libraries that depend on large numbers of hidden connections between other classes make programming a real pain.

    Mr. Lanier might be right, but I doubt it. Seems to me that a line of code could be as simple as that of a machine language command, in which case we are already using high level compilers to spit out huge numbers of serial instructions. Does that count? I think it does. Scaling code comes only at the expense of time. Most people simply don't think about the future long enough.
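
    You can see that expansion without even dropping to machine code. A rough illustration in Python - the exact instruction count varies by interpreter version, and the point is only the ratio of source lines to emitted instructions:

    import dis

    # One source line already expands to many lower-level instructions.
    # Exact counts depend on the Python version; the point is the ratio.
    src = "total = sum(x * x for x in range(10) if x % 2 == 0)"
    code = compile(src, "<example>", "exec")
    print(len(list(dis.get_instructions(code))), "bytecode instructions for 1 source line")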

    My 2 cents.
  • by Anonymous Coward on Saturday January 25, 2003 @01:22PM (#5157287)
    Exactly.

    I don't want my computer to be fuzzy and robust. I want it to be precise and brittle. I don't want computers to become "life forms". The whole point of the computer is to solve problems, not to be a little organism that "mostly works". Life forms already exist.

    That's what I hear all these "visionaries" talking about: they want to make computers operate like the human mind, but they miss the point that we ALREADY HAVE the human mind! We know how it can solve amazing problems quickly, but can also fail miserably. Why do we focus on this, and not on making computers simpler and more effective tools?

    It's good to always question the design of our computers. The stored program concept, files, all that stuff is arbitrary. But let's not miss the point that computers are tools, assistance, calculators, etc... they aren't brains and shouldn't be.
  • not that bad... (Score:4, Insightful)

    by nuffle ( 540687 ) on Saturday January 25, 2003 @01:34PM (#5157345)
    Give him a break.. He's got some points. And at least he's thinking about real and genuine problems. I see a lot of people commenting with the 'who needs 10 million lines of code?' shtick. Sounds familiar [uct.ac.za].

    We're going to need to do things in a decade or two that would require 10 million lines of code (measured by current languages), just as the things we do now would require 10 million lines of code in 1960's languages.

    The new languages and techniques that we have now provide ways for us to reduce the apparent complexity of programs. This guy is just looking for a way to do that again. Certainly there is room to disagree with his techniques for accomplishing this, but it is shortsighted to deny the problem.
  • by sql*kitten ( 1359 ) on Saturday January 25, 2003 @01:39PM (#5157361)
    Thank God.

    You're modded as funny, but what you said is insightful. The whole point of moving to ever higher levels of abstraction - from ASM to C to C++ (or CXX as we called it on VMS) to Java to <whatever comes next> is that you can do more work with fewer lines of code. The fact that programs aren't getting any longer is utterly irrelevant to any experienced software engineer.

    I don't think programs will get longer, since why would anyone adopt a language that makes their job harder? I bitch about Java's shortcomings constantly, but given the choice between Xt and Swing, I know where my bread's buttered. Or sockets programming in C versus the java.net classes. I'll even take JDBC over old-skool CTLib.

    We have plenty of apps these days that in total are well over 10M lines, but you never have to worry about that because they're layered on top of each other. Someone else worries about the code of the OS, the code of the RDBMS engine, the code of GUI toolkit and so on.

    In short, pay close attention when someone from Sun tries to tell you anything about software development - he's got some hardware to sell you, and you'll need it if you follow his advice!
  • by AxelTorvalds ( 544851 ) on Saturday January 25, 2003 @01:41PM (#5157368)
    Yes, there has been the need. Windows 2000 is well over 10 million lines. Now is it a single program or a system of programs or what? Arguably there is a large amount of that code whose removal would make the system stop being Windows 2000; GDI, for example. LOC is a terrible metric.

    There are other very large systems out there. LOC never factors in expressiveness, though. I know of multimillion-line 370 systems that were done in 370. I believe that they could be much, much shorter if they were done in PLx or COBOL or Java or something else.

  • by Anonymous Coward on Saturday January 25, 2003 @01:51PM (#5157410)
    IMO, if you liken DNA to a computer program, an individual is one instance of that code, or one process. That process can be killed without the entire system going kaput, which is what makes biological systems robust.

    However, even though I think Lanier's observations are valid, they're not particularly groundbreaking. His "wire-bound" vs. "interface" argument is basically a minor revision of the old procedural vs. OO debate. The problems with coding in terms of objects and their interactions continue to be the same: it's never going to be the most efficient (in terms of information content) possible description of a problem, and it's hard work to write extra code for a level of robustness in the future when most developers are paid for performance in the present. I strongly believe that the roadblocks in the development of more robust software are not technical, but are mostly economic.
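
    A rough sketch of that "one instance dies, the system keeps going" property at the level of ordinary code - the worker function here is invented purely for illustration:

    from concurrent.futures import ThreadPoolExecutor

    # Each "individual" runs in isolation; one of them failing does not
    # take the whole population down. The task is a made-up example.

    def individual(n):
        if n == 3:
            raise RuntimeError("lethal mutation")   # this instance dies
        return n * n

    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(individual, n) for n in range(6)]
        for f in futures:
            try:
                print("survived:", f.result())
            except RuntimeError as err:
                print("one instance died:", err)    # contained; the rest carry on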
  • by ites ( 600337 ) on Saturday January 25, 2003 @01:52PM (#5157414) Journal
    There seem to be only two ways to break the need to write code line by line. Either it evolves (and it is possible that evolved software will work exactly as this article suggests, living off fuzzy patterns rather than black/white choices). Or it is built, by humans, using deliberate construction techniques. You can't really mix these any more than you can grow a house or build a tree. We already use evolution, chaos theory, and natural selection to construct objects (animals, plants), so doing this for software is nothing radical.
    But how about the other route? The answer lies in abstracting useful models, not just in repacking code into objects. The entire Internet is built around one kind of abstract model - protocols - but there are many others to play with. Take a set of problems, abstract them into models that work at a human level, and make software that implements the models, if necessary by generating your million-line programs. It is no big deal - I routinely go this way, turning 50-line XML models into what must be at least 10m-line assembler programs.
    Abstract complexity, generate code, and the only limits are those in your head.
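
    For the curious, a stripped-down sketch of that model-to-code route. The model format and the emitted classes are invented for the example, and obviously nothing like a real 10m-line target; it only shows a few declarative lines expanding into comparatively many generated ones.

    import xml.etree.ElementTree as ET

    # Tiny model-driven generator: a few lines of declarative XML expand
    # into many more lines of code. Model and output are invented.
    model = """
    <records>
      <record name="Point"><field name="x"/><field name="y"/></record>
      <record name="Size"><field name="w"/><field name="h"/></record>
    </records>
    """

    lines = []
    for rec in ET.fromstring(model).findall("record"):
        fields = [f.get("name") for f in rec.findall("field")]
        lines.append(f"class {rec.get('name')}:")
        lines.append(f"    def __init__(self, {', '.join(fields)}):")
        lines.extend(f"        self.{f} = {f}" for f in fields)
        lines.append("")

    print("\n".join(lines))       # generated source, ready to exec or write out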
  • by goombah99 ( 560566 ) on Saturday January 25, 2003 @01:52PM (#5157417)
    When I first started reading it I thought, well, this is cute but impractical. But I had a change of heart. What first bothered me was the idea that if a function is called with a list of args, the function should not just process the args but in fact should look at the args as a pattern that it needs to respond to. First, this would imply that every function has been 'trained', or has enough history of previous calls under its belt, that it's smart enough to figure out what you are asking for even if you ask for it a little wrong. Second, the amount of computational power needed to process every single function call as a pattern rather than as a simple function call is staggering.

    Or is it? How does 'nature' do it? Well, the answer in nature is that everything is done in parallel at the finest level of detail. When a rock sits on a surface, every point on the rock is using its F=ma plus some electromagnetics to interact with the surface. Each point is not supervised, but the whole process is a parallel computation.

    So although his ideas are of no use to a conventional system, maybe they will be of use 100 years from now when we have millions of parallel processors cheaply available (maybe not silicon). So one can't say this is just stupid on that basis.

    Indeed, the opposite is true. If we are ever going to have mega-processor interaction, these interactions are going to have to be self-negotiating. It is quite likely that the requirements for self-negotiation will far outstrip implementing the most efficient way of doing something as a coded algorithm. Spending 99% of your effort on pattern recognition of inputs and 1% of your processor capability fulfilling the requested calculation may make total sense in a mega-scale processing environment. It might run 100x slower than straight code would, but it will actually work in a mega-scale system.

    The next step is how to make the processor have a history so that it can actually recognize what to do. That's where the idea of recognizing protocols comes in. At first the system can be trained on specific protocols, which can then be generalized by the processor. Supervised learning versus unsupervised.

    Cellular systems in multi-cellular organisms mostly function analogously. They spend 99% of their effort just staying alive. Huge amounts of energy are expended trying to interpret patterns on their receptors. Some energy is spent responding to those patterns. Signals are sent to other cells (chemically), but the signals don't tell the cell what to do exactly. Instead they just trigger pattern recognition on the receptors.

    Thus it is not absurd to propose that 'functions' spend enormous effort on pattern recognition before giving some simple processing result. But for this to make sense you have to contextualize it in a mega-processor environment.
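
    To make that less hand-wavy, here is a toy version of a call that gets resolved by similarity to known request patterns instead of by exact signature. Everything in it - the handler table, the patterns, the 0.4 threshold - is made up for illustration.

    from difflib import SequenceMatcher

    # Toy "phenotropic" dispatch: pick the handler whose expected request
    # pattern looks most like what the caller actually sent, instead of
    # demanding an exact match. Handlers and patterns are invented.

    handlers = {
        "resize image width height": lambda req: "resizing...",
        "fetch url and cache":       lambda req: "fetching...",
        "sum list of numbers":       lambda req: "summing...",
    }

    def call(request):
        score = lambda pattern: SequenceMatcher(None, request, pattern).ratio()
        best = max(handlers, key=score)
        return handlers[best](request) if score(best) > 0.4 else "no idea"

    print(call("reszie imge width=300 height=200"))   # sloppy request, still resized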

  • by markk ( 35828 ) on Saturday January 25, 2003 @02:03PM (#5157466)
    Looking at this I don't see anything revolutionary in what is proposed, just something hard to do. Of course we work hard at signal processing, pattern recognition and clustering algorithms. They are used for everything from medical imaging, radar, and oil exploration to trying to automatically call balls and strikes. What I see being proposed here would be to look at interfaces between, hmm, modules (for lack of a better term) in a similar way. If you like, it is a far-out extension of the idea of Object Request Brokers.

    For example, a very large system would have a goal-seeking component creating a plan; it would inquire as to what modules are available, look at the interfaces around and classify them (here is where the clustering and pattern recognition might help), and then choose the ones which fit its plan. It would then check results to see if this worked closely enough to move it along its plan.

    This implies a database of types of modules and effects, a lower-level standard protocol for inquiring and responding, and another means of checking results and recognizing similarity to a wanted state - a second place where the recognition and clustering algorithms would be useful. This is obviously not easy to do...
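
    In miniature, the module-selection step might look something like this. The registry, the capability tags, and the similarity measure are all invented for the example; a real system would cluster much richer interface descriptions than flat tag sets.

    # Miniature version of "inquire what modules exist, pick the closest fit":
    # modules advertise capability tags, the planner picks the best overlap.

    registry = {
        "ImageMagickish": {"image", "resize", "convert"},
        "FFmpegish":      {"video", "transcode", "audio"},
        "PILish":         {"image", "resize", "filter", "thumbnail"},
    }

    def pick(wanted):
        def jaccard(tags):
            return len(wanted & tags) / len(wanted | tags)
        best = max(registry, key=lambda name: jaccard(registry[name]))
        return best, round(jaccard(registry[best]), 2)

    print(pick({"image", "thumbnail"}))   # ('PILish', 0.5) -- closest fit, not exact

    The second pattern-recognition step described above would then check whether the chosen module's result actually moved the plan forward.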

    The novel "Night Sky Mine" by Melissa Scott comes to mind as an example of this taken way out. There is an ecology of programs that is viewed by "programmers" through a tool that re-inforces the metaphor of programs being living organisms "trained" or used by the programmers to get what they want. I cannot see this being generically useful - many times we really do want a "brittle" system. It is certainly a way to get to very complex systems for simulation or study or games!

  • by Anonymous Hack ( 637833 ) on Saturday January 25, 2003 @02:14PM (#5157524)

    But if his theory is, in fact, what you are describing... Why would we ever do it on the program level? As a programmer, it's actually easier to debug an application if you know exactly how each function is going to treat your arguments. Let's try to think it back to today's technology for a second:

    void *myFunction(void *b)
    {
        if (mostProbable(b, TYPE_INT))
            printf("%d", *((int *) b));
        else if (mostProbable(b, TYPE_CHAR))
            printf("%c", *((char *) b));
        return (mostProbableReturn());
    }

    And in turn our calling function will have to sit and think about the return and what it probably is, etc, etc. What benefit can be gotten from programming like this? Yes, it means we could randomly fire something into a function that wasn't intended for it... for example (in Java): SomeUnknownObject.toString() and instead of getting "Object$$1234FD" we get a true string representation of whatever it is... but we programmed it in the first place! We programmed it, so we know precisely what the object is supposed to be, and precisely how to display it. Why have a computer guess at it when we KNOW?

    "Ah", i see you saying, "but won't it cut down on LOC if a user gives unknown input and the app can figure it out?" True indeed, but then why doesn't he talk about making these abstractions at the GUI-level? It is far, far more practical to keep the fuzzy logic on the first layer - the input layer. And in fact this is already done to some extent. Love him or hate him, Mr PaperClip Guy pops up and guesses when you're writing a letter and tries to help. Love it or hate it, most every text widget in Windows has an auto-complete or auto-spell-fix function. Hell, even zsh and i believe bash do it. This is where fuzzy logic is useful - making sense of input that is either not what you expected or something you could "lend a hand" with. But in the core API? In the libraries, in the kernel, in the opcodes of the CPU? I don't think so. It's in those places where 100% reliability/predictability are vital, otherwise it defeats the point of using a computer in the first place. You don't want your enterprise DB suddenly "losing a few zeros" because your server farm "thought it made more sense that way".

  • by whereiswaldo ( 459052 ) on Saturday January 25, 2003 @02:43PM (#5157662) Journal
    Such layers are stacked on each others (like microcode->assembly->C->SQL, or kernel->userland->libraries->apps).

    I think you just proved that programs of greater than 10 million lines of code have already happened.

    How many lines of code is a 10 million line C program in assembly? 50 million? How many lines of code can a 10 line 4GL statement amount to in C or assembly?

    What I think we really need to advance is another way to express logic in a computer system. New languages seem to be getting more focus these days with the popularity of open source, and I think that's a great step in the right direction since more people will get to try out new ideas.

  • by Fermata ( 644706 ) on Saturday January 25, 2003 @02:44PM (#5157666)
    Jaron's concepts are vaguely stated but still interesting. If you imagine a computer having a central intelligence that is modeled after human intelligence with a certain amount of pattern recognition and iterative learning behavior, there are some potential immediate applications of this.

    A simple example would be the computer's parsing of HTML (or any other grammar/vocabulary-based file format) as compared to a human's parsing of the written word. The human mind has a certain amount of fault-tolerance for ambiguities and grammar-deviations which allows it to make some sense of all but the most mutated input. An example of this would be your own ability to grok most Slashdot posts, however rife with gramer, spelin, and logik errors they may be.

    The computer could also potentially do this to less than perfect input - smooth over the "errors" and try to extract as much meaning as possible using its own knowledge of patterns. It could make corrections to input such as transforming a STRUNG tag to STRONG, since this is probably what was intended and is the closest match to existing grammar.
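
    A rough sketch of that tag-correction idea, with difflib standing in for "knowledge of patterns" - the known-tag list is abbreviated and the cutoff is arbitrary:

    from difflib import get_close_matches

    # Sketch of "smooth over the error": map an unknown tag to the closest
    # known one instead of rejecting the document. Tag list abbreviated.
    KNOWN_TAGS = ["strong", "em", "span", "table", "title", "style"]

    def correct_tag(tag):
        guess = get_close_matches(tag.lower(), KNOWN_TAGS, n=1, cutoff=0.6)
        return guess[0] if guess else tag     # no close match: leave it alone

    print(correct_tag("STRUNG"))   # -> strong
    print(correct_tag("blink"))    # no close match, left as-is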

    Obviously this is a very simple example, but I think this kind of approach could lead to new ways of solving problems and increasing the overall reliability of computer systems.
  • by TheLink ( 130905 ) on Saturday January 25, 2003 @02:53PM (#5157713) Journal
    Asimov style humanoid? We don't even understand how existing humanoids work.

    If people are going to build something like that without being able to understand it, they might as well not - there are 6 billion existing humanoids already. Why rewrite existing code? Why reinvent the wheel?

    Without understanding, those AI folk might as well give up and switch from CS to GM and breeding animals.

    Which is what some of them are already doing albeit virtually.
  • You're forgetting (Score:2, Insightful)

    by voodoo1man ( 594237 ) on Saturday January 25, 2003 @05:57PM (#5158587)
    that most of this code will be in rule-sets, which really don't qualify as part of the program but as a part of the program's data. Of course, as another poster pointed out, today it is pretty obvious that knowledge and rule-based style AI is only good for expert-systems style intelligence, largely due to the limitations of first order logic. Self-organizing agent based systems and neural networks seem to be the next approach to making a more useful, general-purpose AI, but those largely consist of dynamically created entities, and their code size can sometimes be surprisingly small. Of course, there is a limitation to those two, so if I can pull some speculation out of my ass (this seems to be a pretty safe thing to do when talking about AI), the two approaches of knowledge-based and agent-based systems will have to be combined to make something truly useful. Now, as to how (if) that's going to be achieved reliably and usefully is a whole other story.
  • by rollingcalf ( 605357 ) on Saturday January 25, 2003 @05:59PM (#5158590)
    The functioning of a multi-celled organism such as a vertebrate has incredibly high fault tolerance and would be a better analogy.

    For example, if you stick a pin or needle in your finger, all that happens is you have a moment of pain and lose a drop or few of blood. There is zero impact to your life, health or functional abilities (except in the 0.0001% of cases where a serious infection occurs). The equivalent damage to a software system might be something like changing one bit in a 100-megabyte program. Such a tiny amount of damage can easily bring the whole system crashing down or have it spitting out garbage information.

    Unlike software systems, animals (and plants, for that matter) can have multiple major components of their body damaged by disease or injury, yet not only do they survive, but they can recover well enough that their functional abilities and lifespan aren't damaged. You can lose your spleen, a kidney, or a good chunk of your liver, and eventually enjoy the same quality and quantity of life as those who have undamaged organs.

    For mere survival, the tolerance is far higher. People have lost multiple limbs, taken a bullet in the head or abdomen, had a lung collapse, or broken several bones and recovered to live to their eighties and beyond.

    It is very difficult to inflict a tiny damage that kills a large organism; the damage would have to be precisely directed at one of its weakest and most vital spots. But it is very easy to essentially destroy a large program by making a small random change.
  • by terrab0t ( 559047 ) on Saturday January 25, 2003 @06:37PM (#5158748)
    I think the problems with our current model of programming that Jaron is describing can be seen mainly in the idea of Leaky Abstractions [slashdot.org]. With software abstractions, we try to do exactly what Jaron is talking about; we try to simply make things interface with one another fluidly. What he's pointing out, and what leaky abstractions prove, is that our programming languages just don't work this way. Everything assumes the pieces it interacts with will interact in a specified way. The system depends on every piece to follow its assumptions and often falls apart completely if one doesn't.

    There are questions to be raised about a flexible system like this:

    What about misinterpretation? Would software now behave like a human and "misread" another component's piece of information? (as people misread each other's handwriting)

    Would "fuzzy" interpretations lead to databases full of occasional false information? Could the same system still operate effectively with these kinds of errors? (a very tricky question)

    Could we still make secure systems with this kind of software interaction? Would secure systems still require the strict standards our current systems have? (ie. your password must still be entered with the correct capitalization)

    Obviously, information passing wouldn't work in this model. Think of the party game where you sit in a circle and whisper a message in each other's ears to see how garbled it gets. We would just have to avoid that type of system.
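
    A quick toy simulation of that party-game effect, just to make the failure mode concrete (the per-character noise model is obviously made up):

    import random

    # "Whisper down the lane": each fuzzy hop slightly corrupts the message,
    # and the errors compound. The noise model here is invented.
    random.seed(1)

    def fuzzy_relay(msg, error_rate=0.05):
        out = []
        for ch in msg:
            if ch.isalpha() and random.random() < error_rate:
                ch = random.choice("abcdefghijklmnopqrstuvwxyz")
            out.append(ch)
        return "".join(out)

    msg = "transfer 1000 dollars to account 42"
    for hop in range(10):
        msg = fuzzy_relay(msg)
    print(msg)   # after ten fuzzy hops, probably not what you meant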

    These (and the others I've read) are the kinds of immediate questions that one will ask of this concept. I guess Jaron is proposing that these are things that can be worked around conceptually; they're implementation details.

    Personally, I think he's brilliant. I think he has stumbled onto what will be the foundation of the future of computing. Here is the big bold statement he is putting on the record for us:

    "So instead of requiring protocol adherence in which each component has to be perfectly matched to other components down to the bit, we can begin to have similarity. Then a form of very graceful error tolerance, with a predictable overhead, becomes possible. The big bet I want to make as a computer scientist is that that's the secret missing ingredient that we need to create a new kind of software."

    My question is this: Would any of you openly challenge this statement? If he were to more rigorously define this bet and enter it here [longbets.org], would any of you put yourselves on record saying it was bunk?

    I know that's a lot harder than simply not challenging him, but think of the ultimate outcome of his work. Do you truly think computing systems will still be cooperating the way they do now 100 years from now? If the answer is no, but you still don't think he's on to something, then what will change things? Genetically altered super-humans whose brains operate as our computers? The "Mentats" of the Dune novels?

    If I had $2000.00 to spare, I'd bet it on his idea.

    Feel free to quote me on that 100 years from now.
  • Re:Full of it. (Score:3, Insightful)

    by littleRedFriend ( 456491 ) on Saturday January 25, 2003 @06:42PM (#5158777)
    Indeed, sometimes a single basepair change is incompatible with life. However, biological systems have many safety features. Most of the time, the system can handle errors. Sometimes people who cannot see will develop better hearing, or blocking part of a metabolic pathway will activate or increase the activity of another part to compensate. In biology there are many feedback loops, and everything is regulated at every step along the way. The result is a complex system that is quite stable. Of course, if you hit the right switch the system will go offline.

    However, I don't see how this insight will lead to a better way of programming - unless, maybe, through sophisticated evolutionary/genetic programming techniques. I see many problems with the rational design of stable complex systems like life or 10 million+ lines of code. Sorry Dave, you know I can't do that.
  • by Anonymous Coward on Sunday January 26, 2003 @01:39AM (#5160322)
    My opinion of Slashdot readers plummeted after reading these comments. The tone and content of the criticisms range from childish (look at the funny hair) to stupid name-calling (it's just a lot of BS that anyone can make up) to mindless bragging (I work on XXX million lines of code all the time). Very few of the negative comments were well reasoned or based on reference to any solid information. I think you are all scared shitless by the mere suggestion that your programming skills might not be the final word in technology. Jaron may be right or wrong, but he is taking a radical look at how well we do our jobs. Only a self-satisfied, smug idiot would accept the current state of the art for software development. Personally, I cringe whenever I am called a "Software Engineer." Engineers build reliable systems, based on well-understood principles, on time and on budget. Software development does none of these things, and we are lying when we call ourselves engineers. I think that the Slashdot readers who mindlessly flamed this posting are a big part of the problem and, to make matters worse, are happy with the current sad situation.
  • by mamba-mamba ( 445365 ) on Monday January 27, 2003 @02:13AM (#5165658)
    Don't get me wrong, I think what this guy is talking about is fascinating, but I also think it's got more of a place in some theoretical branch of math than in real-life development.
    Actually, I think HE sees it as something that has limited utility in real-life development, too. I think he's just trying to emphasize that things such as disk files are computer science constructs, and there MAY be Another Way. From the interview, I get the impression that his underlying fear, though, is that a dogmatic CS curriculum with incurious students will fail to discover the Next Great Idea in computer science.

    On the specific case of files, one thing he may be discounting, however, is that we ended up with disk files by choice. I'm not sure what the criteria were for making the decision, but other ways of arranging data on a magnetic disk (and tape) were used in the old days, and somehow, files and file systems ended up ruling the day. It may just be a waste of time to try the old ideas again. I mean, you wouldn't build a square wheel just because you think that we may have settled too quickly on round ones...

    Anyway, this thread is dead, so I'm probably wasting my keystrokes.

    MM