Interview with Jaron Lanier on "Phenotropic" Development

Sky Lemon writes "An interview with Jaron Lanier on Sun's Java site discusses 'phenotropic' development versus our existing set of software paradigms. According to Jaron, the 'real difference between the current idea of software, which is protocol adherence, and the idea [he is] discussing, pattern recognition, has to do with the kinds of errors we're creating' and if 'we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become.'"
  • by chuck ( 477 ) on Saturday January 25, 2003 @11:32AM (#5157040) Homepage
    we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become.
    Thank God.

    -Chuck

    • we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become.

      You can only drink 30 or 40 glasses of beer a day, no matter how rich you are. -- Colonel Adolphus Busch

      There really is a hard limit to just about everything...
    • by sql*kitten ( 1359 ) on Saturday January 25, 2003 @12:39PM (#5157361)
      Thank God.

      You're modded as funny, but what you said is insightful. The whole point of moving to ever higher levels of abstraction - from ASM to C to C++ (or CXX as we called it on VMS) to Java to <whatever comes next> is that you can do more work with fewer lines of code. The fact that programs aren't getting any longer is utterly irrelevant to any experienced software engineer.

      I don't think programs will get longer, since why would anyone adopt a language that makes their job harder? I bitch about Java's shortcomings constantly, but given the choice between Xt and Swing, I know where my bread's buttered. Or sockets programming in C versus the java.net classes. I'll even take JDBC over old-skool CTLib.

      We have plenty of apps these days that in total are well over 10M lines, but you never have to worry about that because they're layered on top of each other. Someone else worries about the code of the OS, the code of the RDBMS engine, the code of GUI toolkit and so on.

      In short, pay close attention when someone from Sun tries to tell you anything about software development - he's got some hardware to sell you, and you'll need it if you follow his advice!
  • Just a thought (Score:2, Informative)

    by Neophytus ( 642863 )
    The day a compiler makes programs based on what it thinks we want a program to do is the day that conventional computing goes out the window.
  • Full of it. (Score:5, Insightful)

    by cmason ( 53054 ) on Saturday January 25, 2003 @11:37AM (#5157065) Homepage
    If you think about it, if you make a small change to a program, it can result in an enormous change in what the program does. If nature worked that way, the universe would crash all the time. Certainly there wouldn't be any evolution or life.

    <cough>Bullshit.</cough>

    This guy obviously knows nothing about biology. A single base change in DNA can result in mutations that cause death or spontaneous abortion. As little as a change in a single 'character' can be lethal. That's a pretty "small change" that results in a pretty big "crash."

    I'm not sure if this invalidates his argument, but it certainly doesn't do much for his credibility.

    • by QEDog ( 610238 ) on Saturday January 25, 2003 @11:44AM (#5157103)
      That is only if you consider one living being in isolation. I think he means that it is robust in the way an ecological balance is. If a small change happens in the DNA of one animal, it dies and is unable to reproduce. So the 'error' was confined and dealt with. It didn't explode into a blue screen. Evolution is a phenomenon of many living beings, not one. Even if a big change happens in a species, most of the time the system is robust enough to absorb it and settle into one that works. And, because of the evolutionary mechanism, only the good mutations, by definition, spread. Imagine a computer program where only the useful threads got resources allocated...
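
      A minimal sketch of that "only the useful threads get resources" idea, in C. The workers and their usefulness scores here are invented for illustration; a real system would measure usefulness from results rather than hard-coding it:

      #include <stdio.h>

      #define N_WORKERS 3

      /* Hypothetical usefulness score per worker, normally measured at runtime. */
      static double usefulness[N_WORKERS] = { 0.7, 0.1, 0.2 };

      static void run_worker(int id, int budget)
      {
          printf("worker %d gets %d time slices\n", id, budget);
      }

      int main(void)
      {
          int total_budget = 100;
          double sum = 0.0;
          int i;

          for (i = 0; i < N_WORKERS; i++)
              sum += usefulness[i];

          /* Hand out the shared budget in proportion to usefulness: the
             'fitter' workers get more resources, the rest quietly starve. */
          for (i = 0; i < N_WORKERS; i++)
              run_worker(i, (int)(total_budget * usefulness[i] / sum));
          return 0;
      }
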
      • I can see this point. I think the analogy is faulty. If you liken DNA to a computer program, then consider a single organism to be a run of that program. A single organism can crash just like a run of a program can.

        Now there are certainly programming methodologies modeled on evolution. But that's not what he's talking about. What he's talking about is using pattern recognition to reduce errors in computer programs, I assume, although he doesn't say this, by making them more tolerant of a range of inputs. Evolution has nothing to do with pattern recognition, other than that both are stochastic processes. Evolution is tolerant of environmental pressure by being massively parallel (to borrow another CS metaphor). And even then it's sometimes overwhelmed (think ice age). His programs would be more tolerant of errors because they used better algorithms (namely pattern recognition).

        I think it's a bullshit analogy. As I said before, I'm not sure if this analogy is key to his argument, but I don't give him a lot of cred for it.

        • by haystor ( 102186 ) on Saturday January 25, 2003 @12:12PM (#5157232)
          I think it's a pretty good analogy, but comparing it to biology leaves it a bit ambiguous as to what the metaphor is.

          If you compare it to something like building a house or office building, the analogy works. If you misplace one 2x4, it's very unlikely that anything will ever happen. Even with something as serious as doors, if you place one 6 inches to the left or right of where it's supposed to be, it usually works out OK. It always amazed me once I started working with construction how un-scientific it was. I remember being told that the contractors don't need to know that a space is 9 feet 10 1/2 inches. Just tell them it's 10 feet and they'll cut it to fit.

          One of the amazing things about AutoCad versus the typical inexpensive CAD program is that it deals with imperfections. You can build with things that have a +/- to them and it will take that into account.

          Overall, he definitely seems to be on the right track from what I've seen. On most of the projects I've been working on (J2EE stuff), it seems to be taken as fact that it's possible to get all the requirements and implement them exactly. As if all of business could be boiled down to a simple set of rules.
        • by Anonymous Coward
          IMO, if you liken DNA to a computer program, an individual is one instance of that code, or one process. That process can be killed without the entire system going kaput, which is what makes biological systems robust.

          However, even though I think Lanier's observations are valid, they're not particularly groundbreaking. His "wire-bound" vs. "interface" argument is basically a minor revision of the old procedural vs. OO debate. The problems with coding in terms of objects and their interactions continues to be the same: It's never going to be the most efficient (in terms of information content) possible description of a problem, and it's hard work to write extra code for a level of robustness in the future, when most developers are paid for performance in the present. I strongly believe that the roadblocks in development of more robust software are not technical, but are mostly economic.
        • The program crashes... but not the OS.
    • I think his point is more of, what if that one small dna change caused the entire world to explode. One small creature dying isn't a "big crash".
    • This guy seems to have turned into a bullshit artist as of late... I respected his early VR work, but VR is the tech that never was, and I guess he's just killing time now?
    • Re:Full of it. (Score:4, Insightful)

      by Sri Lumpa ( 147664 ) on Saturday January 25, 2003 @11:58AM (#5157174) Homepage
      "This guy obviously knows nothing about biology. A single base change in DNA can result in mutations that cause death or spontaneous abortion. As little as a change in a single 'character' can be lethal. That's a pretty "small change" that results in a pretty big "crash.""

      This means that nature has an excellent error-catching and correction system: rather than letting buggy code run and produce flawed results, it catches the worst cases and prevents them from running (spontaneous abortion), while code with fewer bugs (say, a congenital disease) has less chance to run (early death, lesser sexual attractiveness to mates...).

      It is only with the advent of modern society and modern medicine that the evolutionary pressure has dropped enough to make it less relevant to humans. Maybe in the future, with genetic engineering, we will be able to correct congenital diseases in the womb.

      Even beyond DNA, I am convinced that nature has a good recovery system, and that if humans were to disappear tomorrow most of the damage we did to Earth would eventually heal (but how long before we reach the point of no return?).

      Now if software could have similar damage-control mechanisms, and if it could still function while invalidating the buggy code, that would be something.
    • Thank You! (Score:2, Interesting)

      by ike42 ( 596470 )
      Small perturbations often have disproportionately large consequences, as your DNA example illustrates. Paradoxically, as Lanier suggests, complex systems can also be amazingly fault tolerant. This is in fact the nature of complex, or chaotic, systems and, some say, of life. However, we cannot, in general, predict which sort of behavior a complex system is likely to exhibit. Lanier seems to miss this entirely. So while his ideas are intriguing, I don't think he has a good grasp of the real issues in designing "complex" software systems.
    • by Pentagram ( 40862 ) on Saturday January 25, 2003 @12:16PM (#5157258) Homepage
      Who modded this up? A single base change in DNA is almost never fatal. For a start, considerably more than 90% of the human genome is junk that has no expressive effect anyway (according to some theories it helps protect the rest of the genome.) Even point mutations in coding sections of the DNA often do not significantly alter the shape of the protein it codes for, and many proteins are coded for in several locations in the genome.

      True, single base changes can have dramatic effects, but this is rare. As an example, the human genetic equipment is so fault-tolerant that humans can even be born with 3 copies of a chromosome and still survive (Down's Syndrome).
      • ...more than 90% of the human genome is junk that has no expressive effect anyway (according to some theories it helps protect the rest of the genome.) Even point mutations in coding sections of the DNA often do not significantly alter the shape of the protein it codes for, and many proteins are coded for in several locations in the genome.

        Do you think the DNA started off that way? Or is it possible it's more like RAID for organisms, that it evolved that way simply because it is more fault resistant that way?

        --
        Daniel
    • And that's one of the advantages diploid organisms have: an organism heterozygous for even a deleterious mutation can still live normally.

      So in a way this guy's right about nature's built in error tolerance methods. But he still throws around words like "chaos" and "unpredictability" without really saying anything remarkable.

      This is only part 1 of what will supposedly be a series, but this looks to me like nothing more than an artist's view of a computational problem. From a pragmatic perspective, it makes obvious observations and weak statements that merely cite impressive concepts (out of dynamical systems theory, computer science and biology), and proposes no answers.

      To wrap it up, he suggests the reader should question where Turing, Shannon and von Neumann were wrong. Well, guess what: these were all mathematicians and even though one may question why they studied particular topics, their mathematics isn't and never will be wrong because it's logically sound.

      I'm not impressed.
    • Re:Full of it. (Score:2, Interesting)

      by raduf ( 307723 )
      Well, actually, changes in DNA often don't do anything bad, much less anything fatal. That's how evolution takes place, right? See how long/well a random DNA change can survive. Anyway, DNA and biological stuff in general is much more resistant (resilient?) to change than software. Much more. DNA has been around for a very long time and it hasn't crashed yet.

      The point this guy makes, and I totally agree with, is that programming can't stay the same forever. I mean come on, we're practically programming assembly. High level, IDE'd and coloured and stuff, but not a bit different fundamentally.
      Functional programming for example, that's different. It probably sucks (I don't really know it) but it's different. It's been around for about 30 years too.

      There has to be something that would let me create software without writing
      for i=1 to n
      for every piece of code I make. It's just... primitive.

      And this guy is right about something else too. If nobody's looking for it, it's gonna take a lot longer to find it.
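
      A tiny sketch of the "write the loop once" idea above: a hand-rolled map over an array using a C function pointer. Functional languages give you this out of the box; the function names here are invented for illustration:

      #include <stdio.h>
      #include <stddef.h>

      /* Apply fn to every element: the loop is written once, here, instead
         of being repeated in every piece of code that transforms an array. */
      static void map_int(int *arr, size_t n, int (*fn)(int))
      {
          size_t i;
          for (i = 0; i < n; i++)
              arr[i] = fn(arr[i]);
      }

      static int square(int x) { return x * x; }

      int main(void)
      {
          int data[] = { 1, 2, 3, 4 };
          size_t i;

          map_int(data, 4, square);
          for (i = 0; i < 4; i++)
              printf("%d ", data[i]);
          printf("\n");
          return 0;
      }
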
    • This guy obviously knows nothing about biology. A single base change in DNA can result in mutations that cause death or spontaneous abortion. As little as a change in a single 'character' can be lethal. That's a pretty "small change" that results in a pretty big "crash."

      I think most of the data in DNA requires multiple base pair changes to have a major impact -- I'm not a biologist, though. Otherwise, radiation from the Sun would mutate the bejeezus out of everyone all the time.
    • Re:Full of it. (Score:3, Informative)

      by Trinition ( 114758 )
      This guy obviously knows nothing about biology

      Neither do you. The base pairs in DNA work in groups of 3. There are 4^3 possible combinations in one group... 64. However, there are only about 20 different results (the amino acids, plus stop signals). It has been shown that the way the combinations map onto the same result is nearly optimal. That is, a single base being miscopied as another is the kind of error least likely to have an effect on the result of that group.
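
      A toy demonstration of that redundancy, using a small real subset of the genetic code in which the third base is ignored entirely (GGx is always glycine, CCx always proline, GCx always alanine), so a miscopied third base in these groups changes nothing:

      #include <stdio.h>
      #include <string.h>

      /* Translate a 3-base codon for a few "four-fold degenerate" families
         of the real genetic code: only the first two bases matter. */
      static const char *amino_acid(const char *codon)
      {
          if (strncmp(codon, "GG", 2) == 0) return "Gly";
          if (strncmp(codon, "CC", 2) == 0) return "Pro";
          if (strncmp(codon, "GC", 2) == 0) return "Ala";
          return "?";
      }

      int main(void)
      {
          const char bases[] = "UCAG";
          char codon[4] = "GG_";
          int i;

          /* Miscopy the third base every possible way: the result of the
             group never changes. */
          for (i = 0; i < 4; i++) {
              codon[2] = bases[i];
              printf("%s -> %s\n", codon, amino_acid(codon));
          }
          return 0;
      }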

    • by rollingcalf ( 605357 ) on Saturday January 25, 2003 @04:59PM (#5158590)
      The functioning of a multi-celled organism such as a vertebrate has incredibly high fault tolerance and would be a better analogy.

      For example, if you stick a pin or needle in your finger, all that happens is you have a moment of pain and lose a drop or two of blood. There is zero impact to your life, health or functional abilities (except in the 0.0001% of cases where a serious infection occurs). The equivalent damage to a software system might be something like changing one bit in a 100-megabyte program. Such a tiny amount of damage can easily bring the whole system crashing down or spitting out garbage information.

      Unlike software systems, animals (and plants, for that matter) can have multiple major components of their body damaged by disease or injury, yet not only do they survive, but they can recover well enough that their functional abilities and lifespan aren't damaged. You can lose your spleen, a kidney, or a good chunk of your liver, and eventually enjoy the same quality and quantity of life as those who have undamaged organs.

      For mere survival, the tolerance is far higher. People have lost multiple limbs, taken a bullet in the head or abdomen, had a lung collapse, or broken several bones and recovered to live to their eighties and beyond.

      It is very difficult to inflict a tiny damage that kills a large organism; the damage would have to be precisely directed at one of its weakest and most vital spots. But it is very easy to essentially destroy a large program by making a small random change.
    • Indeed, sometimes a single base-pair change is incompatible with life. However, biological systems have many safety features. Most of the time, the system can handle errors. Sometimes people who cannot see will develop better hearing, or blocking part of a metabolic pathway will activate or increase the activity of another part to compensate. In biology there are many feedback loops, and everything is regulated at every step along the way. The result is a complex system that is quite stable. Of course, if you hit the right switch the system will go offline.

      However, I don't see how this insight will lead to a better way of programming. Unless, maybe through sophisticated evolutionary / genetic programming techniques. I see many problems with rational design of stable complex systems like life or 10 million+ lines of code. Sorry Dave, you know I can't do that.
  • by aminorex ( 141494 ) on Saturday January 25, 2003 @11:41AM (#5157089) Homepage Journal
    And when you link your 10 million line program with my 10 million line program, we've got a 20 million line program. This idea of an inherent limit to the complexity of programs using current methods is pure lark's vomit, and if Jaron Lanier sells it, he's a snake oil hawker.

    This is Jack's total lack of surprise -> :|
    • And when you link your 10 million line program with my 10 million line program, we've got a 20 million line program. This idea of an inherent limit to the complexity of programs using current methods is pure lark's vomit, and if Jaron Lanier sells it, he's a snake oil hawker.

      This is Jack's total lack of surprise -> :|


      If this isn't a troll, my name isn't Elwood P. Dowd. And it isn't.
    • This idea of an inherent limit to the complexity of programs using current methods is pure lark's vomit, and if Jaron Lanier sells it, he's a snake oil hawker.
      I'd like to know how he manages to produce snake oil from lark's vomit!
  • Big Programs (Score:5, Insightful)

    by Afty0r ( 263037 ) on Saturday January 25, 2003 @11:42AM (#5157093) Homepage
    "if 'we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become."

    Fantastic! We'll all get down and program small, specific routines for processing data, each one doing its own job and doing it well. Those nasty, horrid standard protocols he refers to will allow all these small components to easily talk to each other - across architectures, networks, etc.

    Oh wait, this is the way it already works. Is this guy then, proposing that we learn a new way to program because our systems aren't monolithic enough? *sigh*
    • Is this guy then, proposing that we learn a new way to program because our systems aren't monolithic enough

      Well, duh. How else are Sun going to sell all that 64-bit gear they make? Bigger and more monolithic the better!

      Oh, wait, the network is the computer. Maybe the best solution is to have smaller programs and standard protocols after all.
  • by Anonymous Hack ( 637833 ) on Saturday January 25, 2003 @11:44AM (#5157108)

    ...but I don't see how it's physically possible. It sounds like he's proposing that we re-structure programming languages, or at least the fundamentals of programming in the languages we do know (which might as well mean creating a new language). This isn't a bad thing per se, but one example he talks about is this:

    For example, if you want to describe the connection between a rock and the ground that the rock is resting on, as if it were information being sent on a wire, it's possible to do that, but it's not the best way. It's not an elegant way of doing it. If you look at nature at large, probably a better way to describe how things connect together is that there's a surface between any two things that displays patterns. At any given instant, it might be possible to recognize those patterns.

    Am I stupid or something? He seems to be drawing two completely unrelated things together. Our computers, our CPUs, our ICs, at the end of the day they're just a bundle of very, very tiny on/off switches - pure binary logic. When we develop code for this environment, we have to develop according to those binary rules. We can't say "here's a rock", but we can say "turn on these switches and those switches such that it indicates that we are pointing to a location in memory that represents a rock".

    Maybe I'm missing his point, but I just don't understand how you can redefine programming, which is by definition a means of communicating with a predictable binary system (as opposed to a "probability-based system" or whatever quantum physicists like to call reality), to mean inputting some kind of "digitized" real-world pattern. It's bizarre.

    • Well, you can describe most occurrences in nature with an extremely deterministic set of rules, the basic laws of physics everyone learns in school. It's not as if a random, "probability-based system" were necessary to understand, simulate or imitate the behaviour he described. Also note that the digital representations in a computer also physically exist in the real world (in whatever way you choose to store them) and are thus influenced by the same "random" effects - although I wouldn't blame any software bugs on quantum physics.

      That said, I also have a hard time creating a connection between what he suggests and the example he gives. Maybe you need better drugs to grok it. ;)

      Disclaimer: I'm not a physicist, and that likely shows. Heck, I'm not even a qualified computer scientist, yet.
    • by goombah99 ( 560566 ) on Saturday January 25, 2003 @12:52PM (#5157417)
      When I first started reading it I thought, well, this is cute but impractical. But I had a change of heart. What first bothered me was the idea that if a function is called with a list of args, the function should not just process the args but in fact should look at the args as a pattern that it needs to respond to. First, this would imply that every function has been 'trained', or has enough history of previous calls under its belt, that it's smart enough to figure out what you are asking for even if you ask for it a little wrong. Second, the amount of computational power needed to process every single function call as a pattern rather than as a simple function call is staggering.

      Or is it? How does 'nature' do it? Well, the answer in nature is that everything is done in parallel at the finest level of detail. When a rock sits on a surface, every point on the rock is using its F=ma plus some electromagnetics to interact with the surface. Each point is not supervised, but the whole process is a parallel computation.

      So although his ideas are of no use to a conventional system, maybe they will be of use 100 years from now when we have millions of parallel processors cheaply available (maybe not silicon). So one can't say this is just stupid on that basis.

      Indeed the opposite is true. If we are ever going to have mega-porcessor interaction, these interactions are going to have to be self-negotiating. It is quite likely that the requirements for self-negotiation will far outstrip trying to implement doing something the most efficient way possible as a coded algorithm would. Spending 99% of your effort on pattern recognition on inputs and 1% of your processor capability fulfilling the requested calculation may make total sense in a mega-scale processing environment. It might run 100x slower than straight code would, but it will actually work in a mega-scale system.

      The next step is how to make the processor have a history so that it can actually recognize what to do. That's where the idea of recognizing protocols comes in. At first the system can be trained on specific protocols, which can then be generalized by the processor. Supervised learning versus unsupervised.

      Cellular systems in multi-cellular organisms mostly function analogously. They spend 99% of their effort just staying alive. Huge amounts of energy are expended trying to interpret patterns on their receptors; some energy is spent responding to those patterns. Signals are sent to other cells (chemically), but the signals don't tell the cell what to do exactly. Instead they just trigger pattern recognition on the receptors.

      Thus it is not absurd to propose that 'functions' spend enormous effort on pattern recognition before giving some simple processing result. But for this to make sense you have to contextualize it in a mega-processor environment.

      • by Anonymous Hack ( 637833 ) on Saturday January 25, 2003 @01:14PM (#5157524)

        But if his theory is, in fact, what you are describing... Why would we ever do it on the program level? As a programmer, it's actually easier to debug an application if you know exactly how each function is going to treat your arguments. Let's try to think it back to today's technology for a second:

        /* mostProbable() and mostProbableReturn() are hypothetical "fuzzy"
           primitives that guess at what the caller most likely meant. */
        void *myFunction(void *b)
        {
            if (mostProbable(b, TYPE_INT))
                printf("%d", *((int *) b));
            else if (mostProbable(b, TYPE_CHAR))
                printf("%c", *((char *) b));
            return mostProbableReturn();
        }

        And in turn our calling function will have to sit and think about the return and what it probably is, etc, etc. What benefit can be gotten from programming like this? Yes, it means we could randomly fire something into a function that wasn't intended for it... for example (in Java): SomeUnknownObject.toString() and instead of getting "Object$$1234FD" we get a true string representation of whatever it is... but we programmed it in the first place! We programmed it, so we know precisely what the object is supposed to be, and precisely how to display it. Why have a computer guess at it when we KNOW?

        "Ah", i see you saying, "but won't it cut down on LOC if a user gives unknown input and the app can figure it out?" True indeed, but then why doesn't he talk about making these abstractions at the GUI-level? It is far, far more practical to keep the fuzzy logic on the first layer - the input layer. And in fact this is already done to some extent. Love him or hate him, Mr PaperClip Guy pops up and guesses when you're writing a letter and tries to help. Love it or hate it, most every text widget in Windows has an auto-complete or auto-spell-fix function. Hell, even zsh and i believe bash do it. This is where fuzzy logic is useful - making sense of input that is either not what you expected or something you could "lend a hand" with. But in the core API? In the libraries, in the kernel, in the opcodes of the CPU? I don't think so. It's in those places where 100% reliability/predictability are vital, otherwise it defeats the point of using a computer in the first place. You don't want your enterprise DB suddenly "losing a few zeros" because your server farm "thought it made more sense that way".

      • How about a neural network? When I first took an artificial neural network class in college, I was blown away. So simple. So elegant. And so powerful!

        Like you describe, they require training, but they are MADE for pattern recognition. Artificial neural networks are already used in certain niches of programming. But maybe someone could make them more general purpose for general programming.

        I remember one example we saw in class was building an XOR neural network. It was incredibly complicated compared to a typical digital XOR gate. But the neural network used to remove unknown noise from a signal was surprisingly simple.
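
        For the curious, here is a hand-wired 2-2-1 network that computes XOR. The weights are chosen by hand rather than trained, just to show how little machinery is involved:

        #include <stdio.h>

        /* Step activation: fires when the weighted sum is positive. */
        static int step(double x) { return x > 0.0 ? 1 : 0; }

        /* XOR as a tiny feed-forward net: hidden unit h1 acts as OR,
           h2 acts as NAND, and the output unit ANDs them together. */
        static int xor_net(int a, int b)
        {
            int h1 = step(a + b - 0.5);   /* OR   */
            int h2 = step(1.5 - a - b);   /* NAND */
            return step(h1 + h2 - 1.5);   /* AND  */
        }

        int main(void)
        {
            int a, b;
            for (a = 0; a <= 1; a++)
                for (b = 0; b <= 1; b++)
                    printf("%d XOR %d = %d\n", a, b, xor_net(a, b));
            return 0;
        }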

      • Spending 99% of your effort on pattern recognition on inputs and 1% of your processor capability fulfilling the requested calculation may make total sense in a mega-scale processing environment. It might run 100x slower than straight code would, but it will actually work in a mega-scale system.

        Good point -- like a human society.
      • if we are ever going to have mega-porcessor interaction

        My friend! Prepare to contain your joy 'cos mega-porcessor technology is already here - see press release below.

        Porcine Technologies
        Press Release
        April 1, 2002


        Porcine Technologies Inc. is pleased to announce, after 14 years of continuous research and development, the first public offering of its mega-porcessor computing technology.

        Mega-porcessors are a completely new approach to computing, directly incorporating the exciting new worlds of genetic engineering and pattern recognition or phenotropic computing.

        Professor Clive Bacon, Porcine Technologies Chief Research Officer explains the technology:

        "Basically, the mega-porcessor is a genetically re-engineered large porcine scrofulus. The gut of the porcessor has been genetically engineered to be extremely sophisticated in pattern-recognition porcessiong. Problems are solved by coding the data set into an ingestible porcine feedmeal. This is then fed to the mega-porcessor whereupon the porcessor falls into a torpid state as it's gut quickly assimilates the data-set. We've done a lot of work to accelerate this process and really make these pigs fly.


        Once the dataset feedmeal has been fully porcessed in the gut and its inherent patterns recognised and clarified, the solution set to the problem is laid down as subcutaneous sebaceous material around the porcessor's central porcessing unit (its belly). This information transfer technique is referred to as Supine Porcessor Ameliorated Messaging (SPAM).

        Finally the solution data set needs to be communicated to the human operators. This is effected by terminating the porcessor (usually by slitting its throat) and then serving up the solution-doped meat of the porcessor to the porcessor operators. Ingestion of this material communicates an intuitive answer to the problem dataset directly to the operator's stomach, giving them a 'gut feeling' about the initial problem and its inherent solution."


        Paul Jamon, CEO of Porcine Technologies, in launching the first three models (already dubbed the three little pigs) of this exciting and promising technology, also announced that development was nearing completion of its second generation of the technology (BIGSOW) and that several large global enterprises were already testing the unique power of this larger body mass porcessor:

        "We already have several large global enterprises excited about and actively using this second generation of our revolutionary technology. Today, people talk about using 'big iron' to run mission-critical information analysis. We envision a future, where in as short a time as five years from now, companies and their computing operators will increasingly be turning, napkin in hand, to 'big porc'."
      • After I wrote the parent post, I recalled something Douglas Adams wrote about. He described a big desk that was actually a computer. You sat at the desk and tried to solve your problem. The computer watched you and after a while figured out what the problem you wanted to solve was, then it solved it. Nearly all the computer power was spent watching you and inferring your problem. Not quite the same as what I was saying, but a remarkable instance of it.
    • I agree with your skepticism; Lanier is spouting vague principles with little basis in real systems. He ought not to say "we should go there" until "there" is somewhat mapped out and not a big spot on the map labeled "here there be dragons". However, I do have some things to say about your comments.

      Our computers, our CPUs, our ICs, at the end of the day they're just a bundle of very, very tiny on/off switches - pure binary logic.

      Our DNA, the genetic code that is used to make the building blocks that make us us, and make chimpanzees chimpanzees, is essentially a number in base 4, manipulated by molecules that are either completely hardcoded or defined within that base 4 number, and influenced by chaotic forces in the world at large.

      Mathematically and logically speaking, there is no difference between a base 4 number and a base 2 number. Nature uses base 4 because she had 4 nucleotides to play with, we use base 2 because it's cheaper to design and build; they are the same numbers.
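
      To make the "same numbers, different base" point concrete, here is a sketch that packs the four nucleotides into two bits each, so a strand of base-4 symbols is literally just a base-2 number (the encoding A=00, C=01, G=10, T=11 is an arbitrary choice):

      #include <stdio.h>
      #include <string.h>

      /* Two bits per base: A=00, C=01, G=10, T=11. */
      static unsigned long encode(char base)
      {
          switch (base) {
          case 'A': return 0;
          case 'C': return 1;
          case 'G': return 2;
          default:  return 3;  /* 'T' */
          }
      }

      int main(void)
      {
          const char *strand = "GATTACA";
          unsigned long packed = 0;
          size_t i;

          for (i = 0; i < strlen(strand); i++)
              packed = (packed << 2) | encode(strand[i]);

          /* Seven base-4 digits have become one 14-bit base-2 number. */
          printf("%s -> 0x%lx\n", strand, packed);
          return 0;
      }
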

      When we develop code for this environment, we have to develop according to those binary rules.

      Perhaps, but there are some things that need to be kept in mind here. As Lanier points out, much of what we consider "this environment" is the byproduct of business decisions, not the essential nature of binary computing; for example, processor registers, one-dimensional memory, the four-ring security model, interrupts, files - these can all be done differently.

      Also, as has been demonstrated in numerous ways, just because you are using a binary device doesn't mean that you must develop based on binary logic; people aren't toggling the boot loader via the switches on the front of the computer anymore. In binary, someone can develop an environment that is much, much richer than binary. Then, separately, anyone can develop for that environment without having to worry about the binary details.

      We even have the technology to, given sufficient computing power, completely model any non-chaotic analog environment and have it work right (just keep making the bit lengths longer until you are safely under whatever error tolerance you have). Chaotic analog environments are harder, but people are working on it; we've got the technology to make something that looks like the chaotic environment, but is missing much of the richness.

      We can't say "here's a rock", but we can say "turn on these switches and those switches such so that it indicates that we are pointing to a location in memory that represents a rock".

      But we can. When you write a paragraph of text in HTML, you don't say "turn on these switches and those switches such that it indicates that we are pointing to a location in memory that represents a paragraph", you say "here is a paragraph, here's the text in the paragraph". You can make an environment where you can say "here is a rock" (but until we get better at chaos, it will look and act at best almost, but not quite, like a rock).
      • But we can. When you write a paragraph of text in HTML, you don't say "turn on these switches and those switches such that it indicates that we are pointing to a location in memory that represents a paragraph", you say "here is a paragraph, here's the text in the paragraph". You can make an environment where you can say "here is a rock" (but until we get better at chaos, it will look and act at best almost, but not quite, like a rock).

        I think what's significant here is how something gets further and further removed from the "programming" he's talking about. Sure, in HTML you can do that. In Microsoft Word it's even easier - open up your clip art directory, insert rock.bmp, fin. But how is that programming? That's using the user interface that the programmer has developed. He could be talking about some kind of AI where, if the user wants a rock, he says "I want a rock", and then the AI asks him more and more specific questions about the rock so it can adjust its properties to make it look more or less how the user intended. But how is this "phenotropic programming"? This is artificial intelligence, an idea that's been around for as long as we've been using machines.

        And then on top of that, he wasn't talking about a rock per se, but about protocols. Let's look at the HTTP protocol as an example. Now instead of doing socket() open() fprintf("GET / HTTP/1.0") we just have a bajillion bytes of data. No concept of one computer being separate from another. This is almost impossible to conceptualize anyway, because we're so stuck with the idea that we need TCP/IP, we need lower-level modem/NIC/communications protocols, etc. But just imagine for a second we have the whole "internet" as one big bunch of bytes and somehow our new version of DNS has found the bundle of bytes that represent the website we want. Now instead of getting the data in a stream we can understand, it needs to know the user wants the "index" page in the data blob. So what does it do? Randomly leafs through the data till it finds something that resembles an index page, and just does a memcpy()? This kind of thing is way, way beyond the technology we have today... And I can already see a number of big security holes. If all the data is accessible to everyone for the purpose of the client deciding what each piece of data is supposed to represent, someone could get access to data illegally very easily. That's just the beginning.

    • This is an interesting concept... but I don't see how it's physically possible.

      Actually, it's already been done. The programming language Linda [unibe.ch] by David Gelernter uses pattern matching.

      Everything exists in a large tuple-space and objects can be "written" into the space. They are "read" by pattern matching. Objects can be passive data or active processes.

      It's a very simple and elegant idea. The JINI and JavaSpaces projects use these concepts, which is probably why Lanier's article is on the Java site.
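
      A very rough sketch of the tuple-space idea (not real Linda syntax, just a toy in-memory space where out() writes tuples and in() retrieves one by matching on a tag, filling in the value as a wildcard):

      #include <stdio.h>
      #include <string.h>

      /* A toy tuple: a string tag plus an integer payload. */
      struct tuple { const char *tag; int value; };

      static struct tuple space[16];  /* the shared "tuple space" */
      static int count = 0;

      /* out(): write a tuple into the space. */
      static void out(const char *tag, int value)
      {
          space[count].tag = tag;
          space[count].value = value;
          count++;
      }

      /* in(): remove the first tuple whose tag matches; the value slot
         acts as a wildcard and is filled in by the match. */
      static int in(const char *tag, int *value)
      {
          int i;
          for (i = 0; i < count; i++)
              if (strcmp(space[i].tag, tag) == 0) {
                  *value = space[i].value;
                  space[i] = space[--count];
                  return 1;
              }
          return 0;
      }

      int main(void)
      {
          int v;
          out("temperature", 21);
          out("pressure", 1013);
          if (in("pressure", &v))  /* matched by pattern, not by address */
              printf("pressure = %d\n", v);
          return 0;
      }
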
    • ...but I don't see how it's physically possible. It sounds like he's proposing that we re-structure programming languages, or at least the fundamentals of programming in the languages we do know (which might as well mean creating a new language).

      Hmmm. That's kind of like asking how it's possible for two three-dimensional objects to occupy the same place in space. The answer, of course, is to displace those objects along the time vector. Similarly, I think that the author is trying to urge coding paradigms onto a new and different vector basis. This, of course, happens all the time, and people are always indignant when their domain of study's basis of authority is undermined by someone else's work.

      Am I stupid or something? He seems to be drawing two completely unrelated things together. Our computers, our CPUs, our ICs, at the end of the day they're just a bundle of very, very tiny on/off switches - pure binary logic. When we develop code for this environment, we have to develop according to those binary rules.

      No, not stupid. Caught up in the paradigm of binary opposition, perhaps. Personal computers produced for mass consumption are bundles of very, very tiny on/off switches. Research computers often utilize quaternary switches (biocomputing) and n-switches (optical and quantum computing). A biocomputer, for example, may run well over a billion solutions to a problem simultaneously, utilizing A, C, G, T switches; the trade-off for breaking the on/off paradigm, however, is that you can only run this particular biocomputer once, and then it's no longer a biocomputer.

      Maybe I'm missing his point, but I just don't understand how you can redefine programming, which is by definition a means of communicating with a predictable binary system, to mean inputting some kind of "digitized" real-world pattern.

      The process works like this: You (PersonA) can redefine programming or whatever else you want (religion, science, government, etc.) by gathering at least one other person (PersonB) to you, and declaring between the two of you, 'We're going to redefine this term F(x) to now mean F(y).' Alternatively, you can say, 'We're going to redefine this term F(x) to now mean G(x).' Between PersonA and PersonB, this term is now redefined.

      After that, it's all a matter of gathering other people into your circle or domain of practice, and getting other people to believe in your ideas. If you, as PersonA, never get a PersonB, then you are a lone crackpot without any supporters. If you, as PersonA, gather a million people around you and your beliefs, you are either L Ron Hubbard or Bill Gates.

      And lastly, programming for biocomputers often involves communication with a predictable quaternary (i.e., genetic) system. It just goes to show that the term 'programming' is pigeon-holed by computer scientists to mean a particular thing in their field of study.
    • here's one way to understand the gist of the argument: consider programming to be the application of a specification to a protocol.

      • in the old old days, the protocol was "bang bits" and the specification was "register transfer level" (RTL) instructions.
      • in the old days, the protocol was "drive widgets w/ signals" and the specification was "connect this to that" instructions. a lot of gui programming is still done this way (lamentably).
      • in the less recent past, the drudgery of wiring things from the ground up spawned observation that it is possible to regularize some of the wiring; the protocol was still the same but the specification started looking like "connect pre-packaged-this to pre-packaged-that".
      • in the more recent past, the protocol expanded to "connect your-this to your-that" and the specification evolved to be able to generate more fitly that which was formerly pre-packaged en masse, to "declare my-this and my-that".
      • in the present day, the protocol is "your-connect your-this to your-that" and the specification is "declare my-connect, my-this and my-that".
      • the last step (towards adaptive programming by machines) is to hook up an inference engine that specializes on situation, in order to generate the proper my-* bits (all of them). then the protocol is "teach the engine" and the specification is "recognize situation-A and try my-bits-A".

      one can up-scope and note the (still somewhat imperfect) congruence of the last step w/ the original RTL... in any case (heh), the world is a better place if more users understand the programming mindset if not the act of programming, per se. what is a programmer but a cultivator of the machine? what is a good person but a self-programming philanthropist? what is a great hacker but a good person w/ skillz?

  • we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become.

    Oh I am sure a group of say about 15 Irish kids could do it in a year.
  • Unfortunately... (Score:2, Insightful)

    by holygoat ( 564732 )
    ... this guy doesn't seem to really know what he's talking about.

    As someone else has mentioned, life in many respects isn't robust - and where it is, it's not relevant.

    For instance, genetic code is mutable - but try doing that with machine instructions. That's why Tierra creatures are written in its own pseudo-machine language.

    If there's one thing that I don't want happening in code, it's tiny errors causing almost-reasonable behaviour. Brittle code is code that's easy to debug.

    What he really wants is lots of small, well-defined objects or procedures doing their one task well. If you decompose a design well enough, there's nothing to limit scalability.
    Good design is the answer.
    • by pla ( 258480 )
      Very good point.

      To convert it to the software-world equivalent - With enough knowledge of the specific hardware platform it will run on, a good programmer can write a 100% bug-free, "perfectly" robust "hello world" program.

      (Anyone who thinks "void main(void) {printf("hello world\n");}" counts as a perfectly bug-free program has clearly never coded on anything but well-behaved single-processor PC running a Microsoft OS with a well-behaved compiler).

      However, extending your idea, how do you get 100 "hello world" programs to work together to, say, play an MP3?

      Yeah, it sounds absurd, but it seems like exactly what the parent article suggests. Trained on enough "patterns" of input, even a set of "hello world" programs should manage to learn to work together to play MP3s.

      That *might* work if we started writing programs more as trainable functional approximation models (such as neural nets, to use the best currently known version of this). But, as much as it seems nice to have such techniques around to help learn tasks a person can't find a deterministic algorithm for, they *SUCK*, both in training time *and* run time, for anything a human *can* write straightforward code to do. And, on the issue of training... This can present more difficulties than just struggling through a "hard" task, particularly if we want unsupervised training.

      I really do believe that, some day, someone will come up with the "killer" software paradigm, that will make everything done up to that point meaningless. But, including this current idea, it hasn't happened yet.

      But to end on a more Zen note... Phenotropic development already exists, in the perfected form. When a rock touches the ground, all the atoms involved just intuitively "know" where the balance of forces lies. They don't need to "negotiate", they just act in accord with their true nature. ;-)
      • Anyone who thinks "void main(void) {printf("hello world\n");}" counts as a perfectly bug-free program has clearly never coded on anything but well-behaved single-processor PC running a Microsoft OS with a well-behaved compiler

        Or maybe they have, but are complete idiots when it comes to C. Your program has three large errors.

        1. there is no prototype for printf. Thus printf will default to taking an int argument. The above code would only work if sizeof(int) == sizeof(char *)
        2. main should return an int, not void. Otherwise, it would not be possible to return an error status to the operating system. It also violates the standard.
        3. since main returns an int, you also need a return statement at the end of the function.
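
        For reference, a version that addresses all three points:

        #include <stdio.h>   /* supplies the prototype for printf */

        int main(void)       /* main returns int, as the standard requires */
        {
            printf("hello world\n");
            return 0;        /* report success back to the operating system */
        }
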
        • The above code would only work if sizeof(int) == sizeof(char *)

          ...Which it does, on the x86 line (post-286). I will grant, however, that most compilers would give a warning on a questionable type conversion.


          main should return an int, not void

          I could count, on one hand (with three fingers chopped off), the number of compilers I've used that force that. Most compilers just assume a return of zero (or true, or posixly-true, or whatever convention of the week seems popular). Again, a warning, but it will still go on to generate a functional executable.


          However, I think you have, to a large degree, basically agreed with me. What I wrote, for most C compilers running under DOS/Windows on an x86 box, would compile and run, and produce the correct output, despite having quite a few fundamental flaws in it.

          Of course, perhaps I should not have used that example, since I didn't mean that as key to my actual point... More like one absurd example out of a large number of possible choices that would prove equally useless for any task other than the one intended.
  • Most of what was stated is "pie in the sky" idealism. Get real, it will take a long time for programming and software development to get to the point where it works elegantly the way he describes it. I have no problems with reminding people "hey, lets try to improve how software is developed." Like those of us in the trenches don't realize how much of a mess it is most of the time. We can't get from point A to point M without going through all the painful intermediate steps.

    I seriously doubt nature came to the elegant design of 4 base pairs overnight, so let's work hard at making it better without throwing a pile of dung in people's faces. After all, they are the ones who have to build the pieces to get to that point.

  • by Viking Coder ( 102287 ) on Saturday January 25, 2003 @11:55AM (#5157161)
    I used to like this guy.

    The problem with the kind of system he's talking about is that the more robust you make it, the harder it is to change its behavior.

    Take the cockroach, for instance. It is damned hard to train 100 of them to work together to open a pickle jar.

    That's because a cockroach is extremely robust at being a cockroach, which has nothing to do with teaming up with 99 other cockroaches to open a pickle jar.

    I don't believe nature had a design for each individual life form, other than to be robust. That doesn't give us any particular insight into how to design something that is both robust and meets a specific goal, which is the point of almost all software.

    Once you get to the point where the specifications of each component are as exact as they need to be to meet a specific goal, you're lacking exactly the kind of robustness that he's describing.

    What he's really saying is that entropy is easy to defeat. It's not. Perhaps there will be easier ways to communicate our goals to a computer in the future, but the individual components will still need to be extremely well thought-out. I think it's the difficulty of the language that makes symbol exchange between a human and a computer difficult - the fact that the human needs an exact understanding of the problem before they can codify it isn't going to change.
    • by Anonymous Coward
      Exactly.

      I don't want my computer to be fuzzy and robust. I want it to be precise and brittle. I don't want computers to become "life forms". The whole point of the computer is to solve problems, not to be a little organism that "mostly works". Life forms already exist.

      That's what I hear all these "visionaries" talking about: they want to make computers operate like the human mind, but they miss the point that we ALREADY HAVE the human mind! We know how it can solve amazing problems quickly, but can also fail miserably. Why do we focus on this, and not on making computers simpler and more effective tools!

      It's good to always question the design of our computers. The stored program concept, files, all that stuff is arbitrary. But let's not miss the point that computers are tools, assistance, calculators, etc... they aren't brains and shouldn't be.
    • You're certainly right that it's hard to change, but I don't even think we need to go that far. It's hard to *make* a system like this. Nature used brute force, a massive computer, and bazillions of years to do it, and didn't get to specify much about what came out the other end.
  • and another thing (Score:4, Insightful)

    by Anonymous Hack ( 637833 ) on Saturday January 25, 2003 @11:55AM (#5157164)

    His comments don't seem to make any sense with regard to the way we, as humans, actually view The Real World either:

    So, now, when you learn about computer science, you learn about the file as if it were an element of nature, like a photon. That's a dangerous mentality. Even if you really can't do anything about it, and you really can't practically write software without files right now, it's still important not to let your brain be bamboozled. You have to remember what's a human invention and what isn't.

    Of course a file is a human invention, but it's also a concept without which NOTHING would work - not just computers. A "file" is just an abstraction of a blob, and I mean that both in the sense of a "blob" as in "a thing" and as in "Binary Large OBject". It's a piece of data that represents something. That's exactly the same thing as looking at a house and saying "that's a house" or looking at a car and saying "that's a car". It's just a way to categorize a bunch of photons/atoms/whatever into something we can readily communicate and understand. This is HUMAN, this is how we reason. If we "saw" the universe as a bazillion photons, we'd never get anything done, because we couldn't say "here is a car"; we'd be describing each photon individually, which would take up millions of human lifetimes. It's a human limitation, and I don't see any way that we could just up and ignore it.

    Don't get me wrong, I think what this guy is talking about is fascinating, but I also think it's got more of a place in some theoretical branch of math than in real-life development.

    • There are lots of places where we use files but don't have to.

      Take libraries for example - why are they in files? Why not put all the functions in a database? Fully indexed, and cross referenced. When you need new functions, just download them.

      Same with programs. Why not just make every program a function? That would make it a lot easier to manipulate the output and input (This is actually close to a project I've been working on for some time.)
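
      As a rough illustration of looking a function up by name at run time instead of linking in a file of them ahead of time, here is a sketch using the POSIX dlopen/dlsym calls (link with -ldl on Linux; the library name is platform-specific, and one could imagine the lookup going to an indexed, networked store instead):

      #include <stdio.h>
      #include <dlfcn.h>  /* POSIX dynamic loading: dlopen, dlsym, dlclose */

      int main(void)
      {
          double (*fn)(double);

          /* Open the math library and look up "cos" by name, much as one
             might query a function out of an indexed store. */
          void *lib = dlopen("libm.so.6", RTLD_LAZY);
          if (!lib) {
              fprintf(stderr, "dlopen failed: %s\n", dlerror());
              return 1;
          }
          fn = (double (*)(double))dlsym(lib, "cos");
          if (fn)
              printf("cos(0) = %f\n", fn(0.0));
          dlclose(lib);
          return 0;
      }
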
      • Take libraries for example - why are they in files? Why not put all the functions in a database? Fully indexed, and cross referenced. When you need new functions, just download them.

        That's a red herring. How do we store the database? As a file. What is a file? An indexed, named set of blocks on a storage medium. But that wasn't my point. My point wasn't that we couldn't use a DB or some other way of accessing our data; my point was that the concept of a "file" is just a way of categorizing data. It's semantics - you could call a DB table a "file", you could call a single field in the DB a "file", and it wouldn't make any difference. You still use the data in a "blob" format that arbitrarily represents something useful. I think what he was trying to say was that data shouldn't be stored in any format at all - that it should just exist randomly and in and of itself, and our program should at run time determine what the random data is supposed to be and somehow use it in some way. "Just like nature". Except in real life we isolate connected atoms (in the sense of "small things", not physics/chemistry atoms) into arbitrary groupings as well. It's a conceptualization we are required to make in order to create, reason, and communicate.

    • Don't get me wrong, I think what this guy is talking about is fascinating, but I also think it's got more of a place in some theoretical branch of math than in real-life development.
      Actually, I think HE sees it as something that has limited utility in real-life development, too. I think he's just trying to emphasize that things such as disk files are computer science constructs, and there MAY be Another Way. From the interview, I get the impression that his underlying fear, though, is that a dogmatic CS curriculum with incurious students will fail to discover the Next Great Idea in computer science.

      On the specific case of files, one thing he may be discounting, however, is that we ended up with disk files by choice. I'm not sure what the criteria were for making the decision, but other ways of arranging data on a magnetic disk (and tape) were used in the old days, and somehow, files and file systems ended up ruling the day. It may just be a waste of time to try the old ideas again. I mean, you wouldn't build a square wheel just because you think that we may have settled too quickly on round ones...

      Anyway, this thread is dead, so I'm probably wasting my keystrokes.

      MM
      --

  • My favorite quotes (Score:2, Insightful)

    by ckedge ( 192996 )
    Virtual reality-based applications will be needed in order to manage giant databases

    "Phenotropic" is the catchword I'm proposing for this new kind of software.

    Oh, those are good signs.

    And we're at the point where computers can recognize similarities instead of perfect identities, which is essentially what pattern recognition is about. If we can move from perfection to similarity, then we can start to reexamine the way we build software. So instead of requiring protocol adherence in which each component has to be perfectly matched to other components down to the bit, we can begin to have similarity. Then a form of very graceful error tolerance, with a predictable overhead, becomes possible.

    Phht, I want my computer to be more predictable, not less.

    we need to create a new kind of software.

    No, what we need is an economic model that doesn't include a bunch of pointy haired bosses forcing tons of idiot (and even good) developers to spew crap.

    And we need consumers to up their standards, so that crap programs can't become popular because they look shiny or promise 100,000 features that people don't need. And we need to get rid of pointy-haired bosses that choose software because of all the wrong reasons.

    In phenotropic computing, components of software would connect to each other through a gracefully error-tolerant means that's statistical and soft and fuzzy and based on pattern recognition in the way I've described.

    Sounds like AI, and another great method of using 10,000 GHz CPUs to let people do simple tasks with software written by morons, instead of simply writing better code and firing the morons and putting them out of business.
    • "No, what we need is an economic model that doesn't include a bunch of pointy haired bosses forcing tons of idiot (and even good) developers to spew crap. And we need consumers to up their standards, so that crap programs can't become popular because they look shiny or promise 100,000 features that people don't need. And we need to get rid of pointy-haired bosses that choose software because of all the wrong reasons."

      Indeed. Why do so many software projects fail? Rarely because of bugs (at least from where I am standing). Rarely because it is hard to interface different complex components together (like system integration). Sure it's often hard, but not impossible. Software projects fail because of unreasonable or poorly managed expectations, failing to define the functionality of the software properly, or unrealistic timelines.

      "Sounds like AI and another great method of using 10,000 GHZ CPUs to let people do simple tasks with software written by morons, instead of simply writing better code and firing and putting out of business the morons."

      That is what it sounded like to me. And you know what? Making two bits of software exchange information using a loose protocol, and letting some AI sort out the details, isn't all that hard to realise. This guy might well make that happen. But for this to be of any use, the systems will have to understand the data as well as receive it properly. The system will still have to know what to do with, for instance, a record of an on-line order transaction. It needs to be told by us, through programming, to do the right thing. And that is the hard part of system integration: not the exchange of information, but dealing with the often slightly different meaning the various systems give to a similar piece of data. I don't see AI or genetic algorithms solving that problem anytime soon, and when at last they do... the focus will shift from programming to training and debugging the initial mistakes the AI will make, and that process may prove just as hard as programming in the knowledge ourselves.
  • Why do we need programs with more than 10 million lines of code?

    Has anyone ever noticed that every time Jaron writes an essay or does an interview he tries to coin at least one new word? Dude's better suited to the philosophy department.

    "It's like... chaos, man. And some funky patterns and shit. Dude, it's all PHENOTROPIC. Yeah..."
  • by rmdyer ( 267137 )
    Obviously you "can" write mammoth programs with 1 billion lines of code without crashing. It's the kind of program you are writing that becomes critical.

    Generally, serial programs are the simplest programs to write. Most serial programs control machines that do very repetitive work. It's possible to write serial programs very modularly so that each module has been checked to be bug free. Today's processors execute code so that the result is "serially" computed. Yes, the instructions are pipelined, but the result is that of a serial process.

    Where we go wrong is when we start writing code that becomes non-serial: threads that execute at the same time, serial processes that look ahead or behind. Most OOP languages tend to obfuscate the complexities behind the code. Huge class libraries that depend on large numbers of hidden connections between other classes make programming a real pain.

    Mr. Lanier might be right, but I doubt it. Seems to me that a line of code could be as simple as that of a machine language command, in which case we are already using high level compilers to spit out huge numbers of serial instructions. Does that count? I think it does. Scaling code comes only at the expense of time. Most people simply don't think about the future long enough.

    My 2 cents.
  • I wrote the following on Dec. 20, 2002 about phenotropics. Jaron Lanier is mostly known for being the guy behind the expression "virtual reality." For its special issue "Big [and Not So Big] Ideas For 2003 [cio.com]," CIO Magazine talked with him about a new concept -- at least for me -- phenotropics. "The thing I'm interested in now is a high-risk, speculative, fundamental new approach to computer science. I call it phenotropics," says the 42-year-old Lanier. By pheno, he means the physical appearance of something, and by tropics, he means interaction. Lanier's idea is to create a new way to tie two pieces of software together. He theorizes that two software objects should contact each other "like two objects in nature," instead of through specific modules or predetermined points of contact. Jaron Lanier also talks about software diversity to enhance security. Check this column [weblogs.com] for a summary or the original article [cio.com] for more details.
  • Isn't that him in that Nissan car commercial (I think it's Nissan)? It shows a quick shot of him riding in the car with some mild electronic music playing over the top, but I'm pretty sure it's him. Can anyone back me up on this?
  • by jdkane ( 588293 ) on Saturday January 25, 2003 @12:22PM (#5157289)
    If we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code, no matter how fast our processors become.

    I think if more people get turned onto pure component-based development, then the current object-oriented paradigm could carry us much further.

    You have chaotic errors where all you can say is, "Boy, this was really screwed up, and I guess I need to go in and go through the whole thing and fix it." You don't have errors that are proportionate to the source of the error.

    In a way you do. Right now it's known as try/catch/throw, or a COM interface's error codes, or whatever other mechanism your language provides for introducing error handling at the component level - or at the exact source of the error - and handling it gracefully. (A minimal Java sketch follows at the end of this comment.)

    "protocol adherence" is replaced by "pattern recognition" as a way of connecting components of software systems.

    That would mean a whole new generation of viruses that thrive on the new software model. That would certainly stir things up a bit. Of course any pioneering methodology is subject to many problems.

    But I'm not putting down his theory, just commenting on it. The guy's obviously a deep thinker. We need people to push the envelope and challenge our current knowledge like that. Overall the theory is extremely interesting, although the practicality of it will have to be proven, as with all other new ideas.
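
    To make the error-handling point above concrete, here is a minimal Java sketch of containing an error at the component that caused it; the PriceService/Checkout names are invented purely for illustration, not anything from the interview:

        import java.util.Optional;

        // Hypothetical component boundary: a failure inside lookup() is caught
        // at the call site and degraded gracefully, instead of taking the
        // whole system down. The error stays proportionate to its source.
        class PriceService {
            double lookup(String sku) {
                if (sku == null || sku.isEmpty()) {
                    throw new IllegalArgumentException("bad SKU");
                }
                return 9.99; // stand-in for a real lookup
            }
        }

        class Checkout {
            private final PriceService prices = new PriceService();

            Optional<Double> priceOf(String sku) {
                try {
                    return Optional.of(prices.lookup(sku));
                } catch (IllegalArgumentException e) {
                    // Contain the error at the component that caused it.
                    return Optional.empty();
                }
            }
        }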

  • and the full text of the interview. I'm starting to think he's onto something, given such newer areas of research as chaos theory [around.com] and complexity [santafe.edu]. For the uninformed, these are the folks who bring you such things as fractal generation and the "butterfly effect". (I purchased hardcopy books a few years ago.) I hope he will correct me if I'm wrong, but I think Jaron's real question is: "At which point can we not use complex computations/computers to model the "real" world (FSVO $REAL)? If our computational mechanisms and models approach the complexity of the "real", how can we validate our results against a third party?" Just an idea.
  • not that bad... (Score:4, Insightful)

    by nuffle ( 540687 ) on Saturday January 25, 2003 @12:34PM (#5157345)
    Give him a break.. He's got some points. And at least he's thinking about real and genuine problems. I see a lot of people commenting with the 'who needs 10 million lines of code?' shtick. Sounds familiar [uct.ac.za].

    We're going to need to do things in a decade or two that would require 10 million lines of code (measured by current languages), just as the things we do now would require 10 million lines of code in 1960's languages.

    The new languages and techniques we have now provide ways to reduce the apparent complexity of programs. This guy is just looking for a way to do that again. Certainly there is room to disagree with his techniques for accomplishing this, but it is shortsighted to deny the problem.
    • We're going to need to do things in a decade or two that would require 10 million lines of code (measured by current languages), just as the things we do now would require 10 million lines of code in 1960's languages.


      Exactly. Just as libraries were created so that we can turn memory allocation into a single call, and just as some newer languages or libraries turn an HTTP transaction into a single call (see the sketch below), we will be able to encapsulate more functionality as needed to reduce the LOC count to something reasonable. And we can do this without relying on Jaron's magic "chaos" or "complexity" or "pattern recognition" 90's buzzwords.

      Jaron is correct in that, yes, we will reduce program complexity and LOC by describing what we want more broadly. He is incorrect in believing that this requires any machine intelligence "magic".
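
      For example, a hedged sketch of that "HTTP transaction as a couple of calls" point, using the standard java.net.http client (Java 11+, long after this thread); the URL is just a placeholder:

          import java.net.URI;
          import java.net.http.HttpClient;
          import java.net.http.HttpRequest;
          import java.net.http.HttpResponse;

          public class OneCallHttp {
              public static void main(String[] args) throws Exception {
                  // What once took pages of socket code is now a few library calls.
                  HttpClient client = HttpClient.newHttpClient();
                  HttpRequest request = HttpRequest.newBuilder(
                          URI.create("https://example.com/")).build();
                  HttpResponse<String> response =
                          client.send(request, HttpResponse.BodyHandlers.ofString());
                  System.out.println(response.statusCode());
              }
          }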
  • Summary: In order to duplicate the features of a biological organism Jaron Lanier finds desirable, you'd end up with a programming language maybe something like Lisp, but a lot like Malbolge. [mines.edu] Malbolge, the programming language of the future, is a free download!

    There's something about the way complexity builds up in nature so that if you have a small change, it results in sufficiently small results; it's possible to have incremental evolution.

    Firstly, that simply isn't true at all; as someone who understands both computer programming and genetics (my degrees are in Biochemistry and Computer Science) I can say with confidence that this is all hogwash.

    The same is true of most of what supposedly imports biological concepts into computing. Neural nets and genetic algorithms are very useful tools, and they have been inspired by what we see in nature, but in terms of how they really function, under what circumstances they function, what sorts of problems they are suited for solving - a neural net is nothing like a real nervous system.

    As a biologist I put it this way - a neural net is a very poor model of a nervous system. Genetic algorithms are utterly dreadful models for natural selection.

    So, in this (utterly stupid) comparison between computer source code and living genomes, Jaron Lanier asserts that a living organism is somehow fault tolerant while a program is not. Let me disassemble this assertion.

    Firstly, a living organism is far larger than any single computer program, even Windows. Living organism == computer is far more apropos. The genome (analogous to source code) of a living organism runs up to the billions of bits; the proteome (the concentrations and structures of the proteins that do the actual work of the living organism) would map, even in a single-celled organism, to some vastly larger and more complex structure, terabytes of data at LEAST. You can say, "that's his point!" But this level of complexity is CONSTRUCTED FROM SMALLER PIECES; individual genes. We can duplicate the complexity of a living organism in a computer without duplicating the complexity of a living organism within a single program. If each program can be as complex as an individual gene (thousands of bytes? Easy!) and produce executable code as complex as an individual protein (this is actually harder, but I believe it is possible) then your program construct can mimic the level of complexity of a biological organism.

    So, how IS it that all of this complexity (a human organism) is bug free, while a computer program is not?

    Firstly, the human organism is NOT "bug free." There are all sorts of inputs (chemicals) that cause aberrant behavior of every sort. Bugs happen with some random frequency, anyway. Over time, even if nothing else did, the accumulated errors in your biological processes would kill you.

    Secondly, to the extent that the human organism is, in some abstract sense, more fault tolerant than a computer program, recall that the human organism is NOT designed (warning: science in progress. All creationists are asked to leave the room.) BILLIONS OF YEARS of trial and error have gone into the selection of the protein sequences that give us such problem free use (or not!) every day of our lives. With a development cycle that long, even Windows could be bugfree.

    Thirdly, there is another consequence to our having evolved (rather than having been designed) - inefficient use of memory. Most of the "junk DNA" probably serves some purpose, but brevity is barely a consideration at all (in a larger organism, such as you or I; in fast-replicating organisms, such as bacteria or yeast, there is far less genetic packaging). We are extremely tolerant to mutations in these regions of junk DNA - there are analogous regions in computer memory where substitutions are tolerated; bit flips in the picture of autumn leaves displayed as my desktop would not crash my machine - in fact, this image is a bitmap, I wouldn't even NOTICE the changes. If we applied natural selection to our computer programs, some regions of high-fault-tolerance code might eventually evolve into something functional; my desktop picture might evolve into awk (okay, now I'm being silly).

    In something which has been DESIGNED, you short-circuit all of that. The code of your computer program is not filler, pictures or stuffing; it doesn't, it CAN'T share the properties of these dispensable DNA sequences - it isn't dispensable! There are a number of single-nucleotide substitutions (equivalent to flipping a single bit) that will kill you stone dead! Your computer program is not less fault tolerant than the core sequences of the ribosome, the structure which you use to convert nucleic acid sequences (your genome) into protein sequences (proteome).

    Now, it is true, there are other places in your DNA where a bitflip will alter some chemical constant in a non-fatal (possibly beneficial) fashion. Might we not duplicate this property in a programming language? A computer language with this property would have certain desirable properties, if you wanted your computer program to evolve toward a certain function through a series of bitflips. Indeed, there are computer languages which have this property, to some degree or another. LISP does. Do you know what programming language really EXEMPLIFIES this property? Malbolge. [mines.edu]

    Who wants to program in Malbolge? Raise your hands, kids! A protein does one job, instead of another, because it has affinity for one substrate/chemical, instead of another. In a computer, you'd duplicate this sort of thing by fiddling with constants, and not by changing the source code at all. Small, low order changes in these constants would have incremental effects on what your program actually did.

    Malbolge duplicates this property very nicely.

    To me, this complacency about bugs is a dark cloud over all programming work.

    Personally, I believe that this problem is fundamentally solved, and has been for some time. Heap on more degrees of abstraction. If I wanted to write a program that would take 1 billion lines of C-Code, I'd write a higher level language, and write in that, instead.
  • The analogy to the universe is important. The whole approach of the physical sciences has been to simplify the rules (analogous to the lines of code) into equations as simple as possible. So the universe can be described with just a few equations, with the rest just being data.

    I think Jaron is onto something, but his conclusions are off. I think current methodologies are just where we need to go. Keep the code as simple (small) as possible and put the data in some database or something.

    Just think of Microsoft... our favorite whipping Mega Corp... when separate groups work on millions of lines of code and try to piece it all together, the result is not always ideal. But when the goal is simple code and simple protocols, we get better software.

  • by ites ( 600337 ) on Saturday January 25, 2003 @12:52PM (#5157414) Journal
    There seem to be only two ways to break the need to write code line by line. Either it evolves (and it is possible that evolved software will work exactly as this article suggests, living off fuzzy patterns rather than black/white choices). Or it is built, by humans, using deliberate construction techniques. You can't really mix these any more than you can grow a house or build a tree. We already use evolution, chaos theory, and natural selection to construct objects (animals, plants), so doing this for software is nothing radical.
    But how about the other route? The answer lies in abstracting useful models, not just in repackaging code into objects. The entire Internet is built around one kind of abstract model - protocols - but there are many others to play with. Take a set of problems, abstract them into models that work at a human level, and make software that implements the models, if necessary by generating your million-line programs (a toy sketch follows). It is no big deal - I routinely go this way, turning 50-line XML models into what must be at least 10m-line assembler programs.
    Abstract complexity, generate code, and the only limits are those in your head.
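
    A toy sketch of that model-to-code route, assuming an invented <entity>/<field> XML format; real generators are obviously far richer, this only illustrates the shape of the idea:

        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;
        import java.io.ByteArrayInputStream;

        // Toy "abstract model in, generated code out" generator.
        public class TinyGenerator {
            public static void main(String[] args) throws Exception {
                String model =
                    "<model><entity name='Customer'>" +
                    "<field name='id' type='long'/>" +
                    "<field name='email' type='String'/>" +
                    "</entity></model>";
                Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(model.getBytes("UTF-8")));
                NodeList entities = doc.getElementsByTagName("entity");
                for (int i = 0; i < entities.getLength(); i++) {
                    Element e = (Element) entities.item(i);
                    System.out.println("class " + e.getAttribute("name") + " {");
                    NodeList fields = e.getElementsByTagName("field");
                    for (int j = 0; j < fields.getLength(); j++) {
                        Element f = (Element) fields.item(j);
                        System.out.println("    " + f.getAttribute("type")
                            + " " + f.getAttribute("name") + ";");
                    }
                    System.out.println("}");
                }
            }
        }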
  • A friend recently got a $90 plane fare to France. Business class. And got bumped up to First because it was oversold. There was a computer error, but the airline had to honor the offer. Much of that flight was sold at the $90 price.

    Right now, if that error's there, and the sale goes through, you can in principle track it back to whose error it is. If it was in an agent's system rather than the airline's, for instance, the airline could recover from the agent. So in Jaron's tomorrow, when things are matched by patterns instead of precisely, and an error happens, is it the case that no one exactly is responsible? Would you want to do business with someone whose systems just approximately, sort of matched up with yours? If yours are running on rough approximation rather than exactitude too, can a determination ever be made of ownership of a particular error?

    Maybe it will all balance out. Maybe large errors will become less frequent as Jaron claims. But small errors, in being tolerated, will become an overhead of corruption. Perhaps a predictable overhead, as he claims, but what's to keep it from inflating over time, as programming practices become laxer because nobody can really get blamed for the errors any more, because they'll be at best even more difficult to localize than now, and at worst totally impossible to pin down?
  • Looking at this I don't see anything revolutionary in what is proposed, just something hard to do. Of course we work hard at signal processing, pattern recognition and clustering algorithms. They are used for everything from Medical Imaging, Radar, Oil Exploration to trying to automatically call balls and strikes. What I see being proposed here would be to look at interfaces between hmm... modules for lack of a better term in a similar way. If you want it is a far out extension of the idea of Object Request Brokers.

    For example, a very large system would have a goal seeking component creating a plan, it would inquire as to what modules are available, look at the interfaces around and classify them (here is where the clustering and pattern recognition might help) and then choose ones which fit its plan. It would then check results to see if this worked closely enough to move it along its plan.

    This implies a database of types of modules and effects, a lower-level standard protocol for inquiring and responding, and another means of checking results and recognizing similarity to a wanted state - a second place where the recognition and clustering algorithms would be useful. This is obviously not easy to do... (a toy similarity sketch follows this comment).

    The novel "Night Sky Mine" by Melissa Scott comes to mind as an example of this taken way out. There is an ecology of programs that is viewed by "programmers" through a tool that re-inforces the metaphor of programs being living organisms "trained" or used by the programmers to get what they want. I cannot see this being generically useful - many times we really do want a "brittle" system. It is certainly a way to get to very complex systems for simulation or study or games!

  • I think comparing computer science to nature is a pretty unfair comparison. Nature is analog, computers are digital. You can make computers seem like they are fuzzy recognizers, but underneath it all it is a very strict set of rules. Also, fault tolerance for an entire ecosystem is very high, but for the individual is very, very low. So if there is a small defect in my heart, the human race will continue without a hitch, but I won't and neither will my family... So putting fault tolerance into software might make an entire system stay up and running, but it might behave slightly differently every time it adjusts for an error. After a while it might behave completely differently than what it was designed to. And IMHO, that's bad.

    The reason computers are so powerful and useful is their strict adherence to the rules, and the fact that they should be able to reproduce exactly repeatable results, no matter how often they are run.

    Nature and computers play two completely different roles in our society, and trying to make one be like the other seems both illogical and detrimental.
  • There's something about the way complexity builds up in nature so that if you have a small change, it results in sufficiently small results; it's possible to have incremental evolution.


    This completely ignores the kind of non-linearity we mean when we talk about chaos theory. Small changes in initial conditions can create huge variations in output, which is why it is incredibly hard to predict the weather, the random motions of a billiard break, or even the exact positions of the planets thousands of years from now.
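
    A worked toy example of that sensitivity, using the logistic map in its chaotic regime; the starting values are arbitrary, the point is only that a 1e-9 difference in input becomes a completely different output:

        // Logistic map x' = r*x*(1-x) with r = 4.0 (chaotic regime).
        // Two trajectories starting 1e-9 apart end up nowhere near each other.
        public class Chaos {
            public static void main(String[] args) {
                double r = 4.0;
                double a = 0.3, b = 0.3 + 1e-9;
                for (int i = 0; i < 50; i++) {
                    a = r * a * (1 - a);
                    b = r * b * (1 - b);
                }
                System.out.printf("a=%.6f b=%.6f diff=%.6f%n", a, b, Math.abs(a - b));
            }
        }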

  • Yes, of course, we want our software to become adaptive, to be based on pattern recognition, to be able to make intelligent inferences and decisions by itself. And that's what a large number of people in computer science and related disciplines are working on: pattern recognition, rule based systems, logic programming, Bayesian networks, etc. And when people figure out how to apply the techniques we already have in order to simplify software, they do apply them.

    I somehow fail to see what Lanier is contributing to any of this, other than picking up some buzzwords and trying to make a name for himself.

  • I suppose the real difference between a software project and nature is that in nature nothing is preconceived and there are no deadlines. Nature is a chaotic system where there is no plan; there is no one (except for the environment of the system) that dictates what the business requirements are.

    The problem with software design can be traced to the problem of a business design that we are trying to describe in our software. And business is a result of evolution (business evolution) so it is just as complicated as some biological systems.

    The problem with software implementation is that all business rules need to be converted into machine language with full understanding of the effect of every line of code upon every other line of code (good coding practices and different paradigms help with that); of course we also have deadlines that nature seems to know nothing about. I suppose that to get one feature right, nature can spend thousands or millions of years, and even then that feature still was not designed, it evolved. With software we do not have that kind of luxury.

    We cannot yet compare software design to evolution; we can only compare software design to hardware design, which is also done by humans. And hardware is complex and software is complex, only for software there are more expectations - it always solves a new problem, whereas for hardware the problems are limited - increase speed, increase memory size. Software seems to be the most complicated of all artificial systems ever built by humans.


    Will we ever simplify the way software is written? Will the simplification be based on pattern recognition? Will software have properties of nature - will it be satisfied with similar results as opposed to precise results? I have problems with that - software has to describe a business system, and if it does not describe a business system precisely then we are changing business rules. I don't believe that business software will be written to accept results that are just close enough unless the business specifies that as a requirement. Normally (from what I've experienced in 5.5 years of professional programming, and 14 years of programming in total) business requirements are quite precise, and if they are not, they are incomplete.

    Creating another paradigm or another language for software development only seems to delegate the problem and shift it from one layer to another, but at some point the code has to be written (some software 'architects' like this delegation thing, but they forget that it has to be written somewhere; you cannot delegate infinitely).


    I believe in componentized software systems with well designed APIs between them, nothing more nothing less. Good luck.

  • Bullshit, More Shit, Piled higher and Deeper.

    Doh, I can spout BS too.

    If he's talking about following nature then he should know that nature tends to reuse old code a lot. So what if grocery carts/filesystems are dumb ideas? Evolution won't throw them away if they aren't totally terrible. So you don't write 10 million lines from scratch. You write a little every now and then.

    The main problem with scaling is copyright and patent laws. Such artificial monopolies (especially with those 120+ year terms) mean you often have to reimplement stuff from scratch even if a decent solution already exists.

    Why is the article associating lines of code with complexity? Complexity has little to do with lines of code. And why is he talking about complexity and big programs almost as if they are good things? And why does all this seem so apt coming from a Sun Java site? ;).

    As for one bug killing an entire complex system: no, that doesn't happen on my system, or I'm sure on those of many Slashdot readers.

    If he really believes what he's spouting, then he really needs to learn how to design systems.

    Has he _really_ any idea how complex existing real world systems are? And why they still mostly work? If my web application has a bug, that doesn't kill my webserver, RDBMS, etc. Neither does it kill my microprocessor, caches, RAM, northbridge, southbridge, HDD, vidcard etc. If one custom server goes, my web app can still work without it, albeit it could use more CPU and response isn't as good.

    He says: "You don't have errors that are proportionate to the source of the error. And that means you can never have any sense of gradual evolution or approximate systems"

    Much software (including Linux) has progressed through evolution (with some leaps here and there).

    I sure hope what he's spouting won't encourage all those idiots to reimplement browsers, clients, servers, RDBMSs, and legacy systems as one huge "distributed enterprise application".

    Maybe I disagree because my degree is in Engineering not CS.

    But it's like talking about a single blueprint detailing 10 million _different_ moving parts. Sure maybe NASA/JPL can implement those (doubt they would tho), but the rest of us will just have to resort to designs that can be implemented in the "real" world.

    If you want something gracefully error-tolerant, soft and fuzzy and able to do pattern recognition, why don't you just get a dog?
  • Jaron's concepts are vaguely stated but still interesting. If you imagine a computer having a central intelligence that is modeled after human intelligence with a certain amount of pattern recognition and iterative learning behavior, there are some potential immediate applications of this.

    A simple example would be the computer's parsing of HTML (or any other grammar/vocabulary-based file format) as compared to a human's parsing of the written word. The human mind has a certain amount of fault-tolerance for ambiguities and grammar-deviations which allows it to make some sense of all but the most mutated input. An example of this would be your own ability to grok most Slashdot posts, however rife with gramer, spelin, and logik errors they may be.

    The computer could also potentially do this to less-than-perfect input - smooth over the "errors" and try to extract as much meaning as possible using its own knowledge of patterns. It could make corrections to input such as transforming a STRUNG tag to STRONG, since this is probably what was intended and is the closest match to the existing grammar (a sketch of this kind of closest-match correction follows this comment).

    Obviously this is a very simple example, but I think this kind of approach could lead to new ways of solving problems and increasing the overall reliability of computer systems.
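
    A hedged sketch of that STRUNG-to-STRONG idea: pick the known tag with the smallest edit distance, and only "correct" the input when the match is close enough. The tag list and the cutoff of 2 are invented for the example:

        import java.util.Arrays;
        import java.util.List;

        // Closest-match tag correction via Levenshtein edit distance.
        public class FuzzyTags {
            static int editDistance(String a, String b) {
                int[][] d = new int[a.length() + 1][b.length() + 1];
                for (int i = 0; i <= a.length(); i++) d[i][0] = i;
                for (int j = 0; j <= b.length(); j++) d[0][j] = j;
                for (int i = 1; i <= a.length(); i++) {
                    for (int j = 1; j <= b.length(); j++) {
                        int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                        d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                           d[i - 1][j - 1] + cost);
                    }
                }
                return d[a.length()][b.length()];
            }

            public static void main(String[] args) {
                List<String> known = Arrays.asList("STRONG", "STRIKE", "STYLE", "SPAN");
                String seen = "STRUNG";
                String best = known.get(0);
                for (String tag : known) {
                    if (editDistance(seen, tag) < editDistance(seen, best)) best = tag;
                }
                // Only "correct" the input if the best match is close enough.
                System.out.println(editDistance(seen, best) <= 2 ? best : seen);
            }
        }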
  • by Bowie J. Poag ( 16898 ) on Saturday January 25, 2003 @01:48PM (#5157683) Homepage
    Real Lessons:

    1) Jaron Lanier is a talented, but vastly overhyped, individual. Think of the geekiest person you know. Chances are, that person is better qualified to render a decision than this guy.

    2) No one will ever write more than 10M lines of code.. Yeah, OK. After all, nobody will ever need more than 640K either, right? Come on. Every decade, someone comes up with one of these nearsighted and baseless claims. It's a common trap. Just a million lines of code in a single program would have been inconceivable to someone just 10-15 years ago. Nowadays, it's common. Every generation thinks they're the top of the heap, when in reality, history proves them wrong every single time. It's fact. More importantly, the fact that Lanier doesn't notice this pattern sort of underscores point #1.

    Cheers,
  • Why are we saddled with low-level languages? 10 million lines should almost never be necessary.

    Even the highest-level language used in industry, Java, is pathetic: test a value, if it is equal to something then skip forward a few lines, if it isn't then go to the next line, multiply another value and assign it, increment the first value, skip back a couple of lines, ....

    That is horrible! Even you geeks' beloved Perl (including the upcoming Perl 6) falls into the same traps. You have to deal with explicit control flow; you have to manually program things as simple as finding all the unique elements in a list or grouping them.

    Why is it that everybody is content to use these pieces of crap when it isn't necessary? Developers cling to their beloved C++, writing hundreds of times more code than they need. Of course hitting 10 million lines is going to be difficult, but why should you ever need to write that much? The fastest relational database that I know of is rumored to be written in just thousands of lines, instead of millions. Why? Because it is written in something that allows abstraction - and not of the Object Oriented kind! Think about your tools. Find better ones that push your limits and the limits of expressibility. Don't be happy to sit and pound out 2000 lines of Java one day when you could have done it in 4 lines of something else. (For contrast, see the sketch after the examples below.)


    ten_rows_of_Pascals_triangle: 10{+':0,x,0}\1
    transitive_closure_of_map_at_point : {?,/&:'m@x}/p
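
    For contrast, a hedged sketch of the two tasks mentioned above (unique elements, grouping) as single library calls in modern Java streams; the data is made up, and this is only meant to illustrate the "abstraction removes explicit control flow" point, not to defend Java:

        import java.util.Arrays;
        import java.util.List;
        import java.util.Map;
        import java.util.stream.Collectors;

        public class NoExplicitLoops {
            public static void main(String[] args) {
                List<String> words = Arrays.asList("ape", "bee", "ape", "cow", "bee");

                // Unique elements: one expression, no hand-written loop.
                List<String> unique = words.stream().distinct().collect(Collectors.toList());

                // Grouping by first letter: also one expression.
                Map<Character, List<String>> byFirst = words.stream()
                    .collect(Collectors.groupingBy(w -> w.charAt(0)));

                System.out.println(unique);
                System.out.println(byFirst);
            }
        }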

  • I think the problems with our current model of programming that Jaron is describing can be seen mainly in the idea of Leaky Abstractions [slashdot.org]. With software abstractions, we try to do exactly what Jaron is talking about; we try to simply make things interface with one another fluidly. What he's pointing out, and what leaky abstractions prove, is that our programming languages just don't work this way. Everything assumes the pieces it interacts with will interact in a specified way. The system depends on every piece to follow its assumptions and often falls apart completely if one doesn't.

    There are questions to be raised about a flexible system like this:

    What about misinterpretation? Would software now behave like a human and "misread" another component's piece of information? (as people misread each other's handwriting)

    Would "fuzzy" interpretations lead to databases full of occasional false information? Could the same system still operate effectively with these kinds of errors? (a very tricky question)

    Could we still make secure systems with this kind of software interaction? Would secure systems still require the strict standards our current systems have? (ie. your password must still be entered with the correct capitalization)

    Obviously, information passing wouldn't work in this model. Think of the party game where you sit in a circle and whisper a message in each other's ears to see how garbled it gets. We would just have to avoid that type of system.

    These (and the others I've read) are the kinds of immediate questions that one will make of this concept. I guess Jaron is proposing that these are things that can be worked around conceptually; they're implementation details.

    Personally, I think he's brilliant. I think he has stumbled onto what will be the foundation of the future of computing. Here is the big bold statement he is putting on the record for us:

    "So instead of requiring protocol adherence in which each component has to be perfectly matched to other components down to the bit, we can begin to have similarity. Then a form of very graceful error tolerance, with a predictable overhead, becomes possible. The big bet I want to make as a computer scientist is that that's the secret missing ingredient that we need to create a new kind of software."

    My question is this: Would any of you openly challenge this statement? If he were to more rigorously define this bet and enter it here [longbets.org], would any of you put yourselves on record saying it was bunk?

    I know that's a lot harder than simply not challenging him, but think of the ultimate outcome of his work. Do you truly think computing systems will still be cooperating the way they do now 100 years from now? If the answer is no, but you still don't think he's on to something, then what will change things? Genetically altered super-humans whose brains operate as our computers? The "Mentats" of the Dune novels?

    If I had $2000.00 to spare, I'd bet it on his idea.

    Feel free to quote me on that 100 years from now.
  • The important thing is for our programming languages not to adhere to some gimmick paradigm; instead the language should be WYSIWYG on a syntactical level. This is obtained by having many of the same properties found in languages such as the various lambda calculi:

    1. Referential Transparency: equivalent pieces of code can be swapped by simple cut-n-pasting the code

    2. Church-Rosser Property: evaluation is confluent - a program produces the same result no matter which parts are evaluated first

    3. Curry-Howard Isomorphism: types correspond to logical propositions and well-typed programs to proofs, so a statically typed program has a logical reading

    Note that I am not advocating functional programming, as other programming paradigms can still use these language properties.

    Surprisingly, these 3 properties are NOT found in most programming languages. This is why small changes in code can cause things to fall apart.
    Without these properties, small changes can have a domino effect and react badly with other far off pieces of code.

    For example, without the first property, referential transparency, two pieces of code that do the same thing can't necessarily be interchanged by simple cut-n-paste. With modern languages such as Java, this isn't as much of an issue, but you can still have problems, especially with regard to multiple threads. In languages such as C, however, unless the code is extremely well written, you can't swap out one piece of code for the other: global variables, goto statements, etc... Referentially transparent languages would always 100% guarantee that equivalent pieces of code could be cut/paste swapped. (A small illustration follows at the end of this comment.)

    Without property 2, programs tend to be extremely nondeterministic! C and C++ are two languages that are extremely nondeterministic. For example, not initializing variables, multiple threads, referencing deleted objects, etc... Java has similar problems, especially with regards to multithreading, garbage collection, etc...

    The lack of a logical interpretation (property 3) means you can compile code that just flat out doesn't make sense in terms of describing an algorithm. C, C++, and Java allow the use of generic pointer variables, so you can accidentally pass meaningless things to functions. In C and C++ it is a void*, and in Java it is an Object reference.

    It is still possible to keep nondeterministic behavior, concurrency, and mutation in a language with the above 3 properties (relaxing property 3 though). The above 3 properties also don't necessitate a functional paradigm.

    Anyway, silly paradigms that sound all nerdy might be fun, but they aren't useful. Instead, we should work to make our programming languages well behaved! Subexpressions in the language should be "what you see is what you get"!
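
    A small Java illustration of the referential-transparency point above: the first method can be swapped for any expression that computes the same value, the second cannot, because it leans on hidden mutable state (the names are invented for the example):

        public class Transparency {
            // Referentially transparent: same inputs, same output, no hidden state.
            static int area(int w, int h) {
                return w * h;
            }

            // Not referentially transparent: the result depends on, and mutates, a global.
            static int counter = 0;
            static int areaAndCount(int w, int h) {
                counter++;
                return w * h + counter;
            }

            public static void main(String[] args) {
                System.out.println(area(3, 4) + area(3, 4));               // always 24
                System.out.println(areaAndCount(3, 4) + areaAndCount(3, 4)); // 13 + 14 = 27; depends on prior calls
            }
        }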
  • The Natural laws of the Physical Phenomenon of our creation and use of abstractions are what he is trying to nail down.

    But it has already been identified and is being developed as a general user-based automation tool, useful in many ways, including in the development and use of an autocoding environment.

    See Virtual Interaction Configuration [threeseas.net] and even some of my journal listed /. posts for more information.

    It's nice to know..... exactly where many seem to be headed. Unfortunately for them, they seem to be, in one way or another, off course; even a little, over a long way, makes a big difference.
  • found in his manifesto
    ...the Great Shame of computer science, which is that we don't seem to be able to write software much better as computers get much faster. Computer software continues to disappoint. How I hated UNIX back in the seventies - that devilish accumulator of data trash, obscurer of function, enemy of the user! If anyone had told me back then that getting back to embarrassingly primitive UNIX would be the great hope and investment obsession of the year 2000, merely because its name was changed to LINUX and its source code was opened up again, I never would have had the stomach or the heart to continue in computer science.
