Interview with Jaron Lanier on "Phenotropic" Development 264

Posted by CowboyNeal
from the stuff-to-read dept.
Sky Lemon writes "An interview with Jaron Lanier on Sun's Java site discusses 'phenotropic' development versus our existing set of software paradigms. According to Jaron, the 'real difference between the current idea of software, which is protocol adherence, and the idea [he is] discussing, pattern recognition, has to do with the kinds of errors we're creating' and if 'we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become.'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Hack (637833) on Saturday January 25, 2003 @11:44AM (#5157108)

    ...but I don't see how it's physically possible. It sounds like he's proposing that we re-structure programming languages, or at least the fundamentals of programming in the languages we do know (which might as well mean creating a new language). This isn't a bad thing per se, but one example he talks about is this:

    For example, if you want to describe the connection between a rock and the ground that the rock is resting on, as if it were information being sent on a wire, it's possible to do that, but it's not the best way. It's not an elegant way of doing it. If you look at nature at large, probably a better way to describe how things connect together is that there's a surface between any two things that displays patterns. At any given instant, it might be possible to recognize those patterns.

    Am I stupid or something? He seems to be drawing two completely unrelated things together. Our computers, our CPUs, our ICs: at the end of the day they're just a bundle of very, very tiny on/off switches, pure binary logic. When we develop code for this environment, we have to develop according to those binary rules. We can't say "here's a rock", but we can say "turn on these switches and those switches such that it indicates that we are pointing to a location in memory that represents a rock".

    Maybe I'm missing his point, but I just don't understand how you can redefine programming, which is by definition a means of communicating with a predictable binary system (as opposed to a "probability-based system" or whatever quantum physicists like to call reality), to mean inputting some kind of "digitized" real-world pattern. It's bizarre.

  • by f00zbll (526151) on Saturday January 25, 2003 @11:48AM (#5157130)
    Most of what was stated is "pie in the sky" idealism. Get real: it will take a long time for programming and software development to get to the point where it works as elegantly as he describes. I have no problem with reminding people, "hey, let's try to improve how software is developed." It's not as if those of us in the trenches don't realize how much of a mess it is most of the time. We can't get from point A to point M without going through all the painful intermediate steps.

    I seriously doubt nature came to the elegant design of 4 base pairs overnight, so let's work hard at making software better without throwing a pile of dung in people's faces. After all, they are the ones who have to build the pieces to get to that point.

  • by Viking Coder (102287) on Saturday January 25, 2003 @11:55AM (#5157161)
    I used to like this guy.

    The problem with the kind of system he's talking about is that the more robust you make it, the harder it is to change its behavior.

    Take the cockroach, for instance. It is damned hard to train 100 of them to work together to open a pickle jar.

    That's because a cockroach is extremely robust at being a cockroach, which has nothing to do with teaming up with 99 other cockroaches to open a pickle jar.

    I don't believe nature had a design for each individual life form, other than to be robust. That doesn't give us any particular insight into how to design something that is both robust and meets a specific goal, which is the point of almost all software.

    Once you get to the point where the specifications of each component are as exact as they need to be to meet a specific goal, you're lacking exactly the kind of robustness that he's describing.

    What he's really saying is that entropy is easy to defeat. It's not. Perhaps there will be easier ways to communicate our goals to a computer in the future, but the individual components will still need to be extremely well thought-out. I think it's the difficulty of the language that makes symbol exchange between a human and a computer difficult - the fact that the human needs an exact understanding of the problem before they can codify it isn't going to change.
  • by cmason (53054) on Saturday January 25, 2003 @11:59AM (#5157180) Homepage
    I can see this point. I think the analogy is faulty. If you liken DNA to a computer program, then consider a single organism to be a run of that program. A single organism can crash just like a run of a program can.

    Now there are certainly programming methodologies modeled on evolution. But that's not what he's talking about. What he's talking about is using pattern recognition to reduce errors in computer programs, I assume, although he doesn't say this, by making them more tolerant of a range of inputs. Evolution has nothing to do with pattern recognition, other than that both are stochastic processes. Evolution is tolerant of environmental pressure by being massively parallel (to borrow another CS metaphor). And even then it's sometimes overwhelmed (think ice age). His programs would be more tolerant of errors because they used better algorithms (namely pattern recognition).

    I think it's a bullshit analogy. As I said before, I'm not sure if this analogy is key to his argument, but I don't give him a lot of cred for it.
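    The "tolerant of a range of inputs" reading above can be sketched in a few lines of Java. This is purely illustrative (the class and command names are invented here, not Lanier's actual proposal): instead of rejecting an input that fails exact protocol adherence, pick the known command it most closely resembles, by edit distance.

```java
import java.util.List;

// Toy sketch of pattern recognition in place of protocol adherence:
// rather than failing on a malformed input, dispatch to the known
// command whose spelling is closest (Levenshtein edit distance).
public class FuzzyDispatch {
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        return d[a.length()][b.length()];
    }

    // Return the known command most similar to the (possibly garbled) input.
    static String recognize(String input, List<String> known) {
        String best = known.get(0);
        for (String k : known)
            if (editDistance(input, k) < editDistance(input, best)) best = k;
        return best;
    }
}
```

    Whether this kind of nearest-match dispatch reduces errors or merely hides them is exactly the debate in this thread.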

  • Thank You! (Score:2, Interesting)

    by ike42 (596470) on Saturday January 25, 2003 @12:06PM (#5157206)
    Small perturbations often have disproportionately large consequences, as your DNA example illustrates. Paradoxically, as Lanier suggests, complex systems can also be amazingly fault-tolerant. This is in fact the nature of complex, or chaotic, systems, and some say of life. However, we cannot, in general, predict which sort of behavior a complex system is likely to exhibit. Lanier seems to miss this entirely. So while his ideas are intriguing, I don't think he has a good grasp of the real issues in designing "complex" software systems.
  • by haystor (102186) on Saturday January 25, 2003 @12:12PM (#5157232)
    I think it's a pretty good analogy, but comparing it to biology leaves it a bit ambiguous as to what the metaphor is.

    If you compare it to something like building a house or an office building, the analogy works. If you misplace one 2x4, it's very unlikely that anything will ever happen. Even with something as serious as doors, if you place one 6 inches to the left or right of where it's supposed to be, it usually works out OK. It always amazed me, once I started working in construction, how un-scientific it was. I remember being told that the contractors don't need to know that a space is 9 feet 10 1/2 inches. Just tell them it's 10 feet and they'll cut it to fit.

    One of the amazing things about AutoCAD versus the typical inexpensive CAD program is that it deals with imperfections. You can build with things that have a +/- to them, and it will take that into account.
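    That +/- idea can be made concrete with a small, hypothetical Java sketch (the class and method names are invented): every dimension carries a tolerance, two parts "fit" when their tolerance bands overlap, and stacking parts accumulates tolerance, just as it does on a job site.

```java
// Minimal sketch of tolerance-aware measurement, in the spirit of CAD
// tools that carry a +/- with every dimension. Illustrative only.
public class Toleranced {
    final double value, tol;   // nominal size and +/- tolerance

    Toleranced(double value, double tol) { this.value = value; this.tol = tol; }

    // Two dimensions "fit" if their tolerance bands overlap.
    boolean fits(Toleranced other) {
        return Math.abs(value - other.value) <= tol + other.tol;
    }

    // Stacking two parts adds both the nominal sizes and the tolerances.
    Toleranced plus(Toleranced other) {
        return new Toleranced(value + other.value, tol + other.tol);
    }
}
```

    This is the "tell them it's 10 feet and they'll cut it to fit" mindset expressed as arithmetic: fit is a range check, not an equality check.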

    Overall, he definitely seems to be on the right track from what I've seen. On most of the projects I've been working on (J2EE stuff), it seems to be taken as fact that it's possible to get all the requirements and implement them exactly. As if all of business could be boiled down to a simple set of rules.
  • by jdkane (588293) on Saturday January 25, 2003 @12:22PM (#5157289)
    If we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code, no matter how fast our processors become.

    I think if more people get turned onto pure component-based development, then the current object-oriented paradigm could carry us much further.

    You have chaotic errors where all you can say is, "Boy, this was really screwed up, and I guess I need to go in and go through the whole thing and fix it." You don't have errors that are proportionate to the source of the error.

    In a way you do. Right now it's known as try {...} catch (...) {...} and throw, or a COM interface, or whatever other mechanism your language has for introducing error handling at the component or exact source-area level, and for handling errors gracefully.
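    As a minimal sketch of that point (the names here are invented, not from the interview): a component throws, and its caller contains the failure at the boundary, so the error stays proportionate to its source instead of corrupting the whole system.

```java
// Sketch of "errors proportionate to their source": a component failure
// is caught and handled at the component boundary.
public class ComponentBoundary {
    static String fetchPrice(String symbol) {
        if (symbol == null || symbol.isEmpty())
            throw new IllegalArgumentException("no symbol given");
        return symbol + ": 42.00"; // stand-in for a real lookup
    }

    // The caller degrades gracefully instead of crashing outright;
    // the error never escapes this component.
    static String fetchPriceOrDefault(String symbol) {
        try {
            return fetchPrice(symbol);
        } catch (IllegalArgumentException e) {
            return "unavailable";
        }
    }
}
```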

    "protocol adherence" is replaced by "pattern recognition" as a way of connecting components of software systems.

    That would mean a whole new generation of viruses that thrive on the new software model. That would certainly stir things up a bit. Of course, any newly pioneered methodology is subject to many problems.

    But I'm not putting down his theory, just commenting on it. The guy's obviously a deep thinker. We need people to push the envelope and challenge our current knowledge like that. Overall the theory is extremely interesting, although the practicality of it will have to be proven, as with all other new ideas.

  • Reason for Bugs (Score:1, Interesting)

    by Anonymous Coward on Saturday January 25, 2003 @12:44PM (#5157383)
    Would you buy a CPU which is full of bugs?
    No? Why? Because it wouldn't be very usable?
    Would you use software which contains bugs?
    Yes? Why? Because it remains usable...

    It is possible to write bug-free software,
    but there's no need to for the average-Joe market.
  • Re:Full of it. (Score:2, Interesting)

    by raduf (307723) on Saturday January 25, 2003 @12:45PM (#5157387)
    Well, actually, changes in DNA often don't do anything bad, much less fatal. That's how evolution takes place, right? See how long/well a random DNA change can survive. Anyway, DNA and biological stuff in general is much more resistant (resilient?) to change than software. Much more. DNA has been around for a very long time and it hasn't crashed yet.

    The point this guy makes, and I totally agree with, is that programming can't stay the same forever. I mean, come on, we're practically programming assembly. High-level, IDE'd and coloured and stuff, but not a bit different fundamentally.
    Functional programming, for example: that's different. It probably sucks (I don't really know it) but it's different. It's been around for about 30 years too.

    There has to be something that would let me create software without writing
    for i=1 to n
    for every piece of code I make. It's just... primitive.
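    To be fair, mainstream languages have been slowly absorbing that boilerplate. As one hedged illustration: with Java streams (a feature that arrived long after this discussion), the explicit counting loop disappears from the program text, even if it still runs underneath.

```java
import java.util.stream.IntStream;

// What used to be written as "for i = 1 to n { total += i * i }"
// becomes a declarative one-liner; the loop machinery is hidden.
public class NoExplicitLoop {
    static int sumOfSquares(int n) {
        return IntStream.rangeClosed(1, n).map(i -> i * i).sum();
    }
}
```

    It is still far from "create software without writing loops at all", but the direction is the one the poster is asking for.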

    And this guy is right about something else too. If nobody's looking for it, it's gonna take a lot longer to find it.
  • by Gleef (86) on Saturday January 25, 2003 @01:11PM (#5157504) Homepage
    I agree with your skepticism; Lanier is spouting vague principles with little basis in real systems. He ought not to say "we should go there" until "there" is somewhat mapped out, and not a big spot on the map labeled "here there be dragons". However, I do have some things to say about your comments.

    Our computers, our CPUs, our ICs, at the end of the day they're just a bundle of very, very tiny on/off switches - pure binary logic.

    Our DNA, the genetic code that is used to make the building blocks that make us us, and make chimpanzees chimpanzees, is essentially a number in base 4, manipulated by molecules that are either completely hardcoded or defined within that base 4 number, and influenced by chaotic forces in the world at large.

    Mathematically and logically speaking, there is no difference between a base 4 number and a base 2 number. Nature uses base 4 because she had 4 nucleotides to play with, we use base 2 because it's cheaper to design and build; they are the same numbers.

    When we develop code for this environment, we have to develop according to those binary rules.

    Perhaps, but there are some things that need to be kept in mind here. As Lanier points out, much of what we consider "this environment" is the byproduct of business decisions, not the essential nature of binary computing. For example, processor registers, one-dimensional memory, the four-ring security model, interrupts, and files can all be done differently.

    Also, as has been demonstrated in numerous ways, just because you are using a binary device doesn't mean that you must develop based on binary logic; people aren't toggling the boot loader via the switches on the front of the computer anymore. In binary, someone can develop an environment that is much, much richer than binary. Then, separately, anyone can develop for that environment without having to worry about the binary details.

    We even have the technology, given sufficient computing power, to completely model any non-chaotic analog environment and have it work right (just keep making the bit lengths longer until you are safely under whatever error tolerance you have). Chaotic analog environments are harder, but people are working on it; we've got the technology to make something that looks like the chaotic environment, but misses out on much of its richness.

    We can't say "here's a rock", but we can say "turn on these switches and those switches such so that it indicates that we are pointing to a location in memory that represents a rock".

    But we can. When you write a paragraph of text in HTML, you don't say "turn on these switches and those switches such that it indicates that we are pointing to a location in memory that represents a paragraph", you say "here is a paragraph, here's the text in the paragraph". You can make an environment where you can say "here is a rock" (but until we get better at chaos, it will look and act at best almost, but not quite, like a rock).
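    A tiny sketch of that last point, with invented names: the caller declares "here is a rock" and lets a runtime own the representation, exactly as HTML lets you declare a paragraph without ever touching memory cells.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative declarative environment: the caller states what exists;
// the runtime, not the caller, decides how it is represented in memory.
public class Scene {
    final List<String> things = new ArrayList<>();

    // "here is a rock" -- no switches, no pointers, just a declaration.
    Scene add(String thing) { things.add(thing); return this; }

    String describe() { return String.join(", ", things); }
}
```

    The binary details still exist, of course; they have simply been pushed below the level at which the programmer works.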
  • Re:10 million lines (Score:2, Interesting)

    by the-matt-mobile (621817) on Saturday January 25, 2003 @01:18PM (#5157541)
    I disagree. 10 million lines of code is not nearly enough for AI programming. If we ever get to a point where we're building Asimov-style humanoids, there's no way 10 million lines is enough (even with shortcut languages like Perl and Python :-)

    I'm not saying this guy is right, but I will agree that we're not ready to maintain a codebase massive enough to do all the things we'll ever try to do with only our current conventions to get us there.
  • by pla (258480) on Saturday January 25, 2003 @02:31PM (#5157858) Journal
    Very good point.

    To convert it to the software-world equivalent - With enough knowledge of the specific hardware platform it will run on, a good programmer can write a 100% bug-free, "perfectly" robust "hello world" program.

    (Anyone who thinks "void main(void) {printf("hello world\n");}" counts as a perfectly bug-free program has clearly never coded on anything but a well-behaved single-processor PC running a Microsoft OS with a well-behaved compiler.)

    However, extending your idea, how do you get 100 "hello world" programs to work together to, say, play an MP3?

    Yeah, it sounds absurd, but it seems like exactly what the parent article suggests. Trained on enough "patterns" of input, even a set of "hello world" programs should manage to learn to work together to play MP3s.

    That *might* work if we started writing programs more as trainable functional approximation models (such as neural nets, to use the best currently known version of this). But, as much as it seems nice to have such techniques around to help learn tasks a person can't find a deterministic algorithm for, they *SUCK*, both in training time *and* run time, for anything a human *can* write straightforward code to do. And, on the issue of training... This can present more difficulties than just struggling through a "hard" task, particularly if we want unsupervised training.
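    For concreteness, here is about the smallest possible "trainable functional approximation model": a single perceptron learning the AND function from examples. It illustrates both halves of the complaint above: the behavior comes from training on patterns rather than from explicit code, and even this trivial task takes many weight updates compared to simply writing a && b.

```java
// A single perceptron trained by the classic error-correction rule.
// Purely a toy: real neural nets are layered and far more expensive.
public class Perceptron {
    double w0 = 0, w1 = 0, bias = 0;

    int predict(int a, int b) { return w0 * a + w1 * b + bias > 0 ? 1 : 0; }

    void train(int[][] inputs, int[] targets, int epochs, double rate) {
        for (int e = 0; e < epochs; e++)
            for (int i = 0; i < inputs.length; i++) {
                int err = targets[i] - predict(inputs[i][0], inputs[i][1]);
                w0 += rate * err * inputs[i][0];
                w1 += rate * err * inputs[i][1];
                bias += rate * err;
            }
    }
}
```

    Training converges here because AND is linearly separable; for tasks where a human can write the deterministic code directly, this detour through training buys nothing, which is the poster's point.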

    I really do believe that, some day, someone will come up with the "killer" software paradigm, that will make everything done up to that point meaningless. But, including this current idea, it hasn't happened yet.

    But to end on a more Zen note... Phenotropic development already exists, in the perfected form. When a rock touches the ground, all the atoms involved just intuitively "know" where the balance of forces lies. They don't need to "negotiate", they just act in accord with their true nature. ;-)
  • Re:not that bad... (Score:3, Interesting)

    by SmokeSerpent (106200) <benjamin@psFREEBSDnw.com minus bsd> on Saturday January 25, 2003 @02:48PM (#5157936) Homepage
    We're going to need to do things in a decade or two that would require 10 million lines of code (measured by current languages), just as the things we do now would require 10 million lines of code in 1960's languages.


    Exactly. Just as libraries were created so that we can turn memory allocation into a single call, and just as some newer languages or libraries turn an HTTP transaction into a single call, we will be able to encapsulate more functionality as needed to reduce the LOC count to something reasonable. And we can do this without relying on Jaron's magic "chaos" or "complexity" or "pattern recognition" '90s buzzwords.
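    For example, in modern Java (java.net.http, added in Java 11, well after this discussion), an HTTP transaction really is close to a single call. The URL below is a placeholder, and the fetch method is shown as a sketch rather than exercised against a live server.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// What once took pages of socket code is now one library call.
public class OneCallHttp {
    static String fetch(String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).build();
        return HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString())
                .body();
    }
}
```

    This is exactly the encapsulation trajectory the comment describes, and it required no machine intelligence, just libraries.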

    Jaron is correct in that, yes, we will reduce program complexity and LOC by describing what we want more broadly. He is incorrect in believing that this requires any machine intelligence "magic".
  • by VoidEngineer (633446) on Saturday January 25, 2003 @03:19PM (#5158086)
    ...but i don't see how it's physically possible. It sounds like he's proposing that we re-structure programming languages or at least the fundamentals of programming in the languages we do know (which might as well mean creating a new language).

    Hmmm. That's kind of like asking how it's possible for two three-dimensional objects to occupy the same place in space. The answer, of course, is to displace those objects along the time vector. Similarly, I think that the author is trying to urge coding paradigms onto a new and different vector basis. This, of course, happens all the time, and people are always indignant when their domain of study's basis of authority is undermined by someone else's work.

    Am i stupid or something? He seems to be drawing two, completely unrelated things together. Our computers, our CPUs, our ICs, at the end of the day they're just a bundle of very, very tiny on/off switches - pure binary logic. When we develop code for this environment, we have to develop according to those binary rules.

    No, not stupid. Caught up in the paradigm of binary opposition, perhaps. Personal computers produced for mass consumption are bundles of very, very tiny on/off switches. Research computers often utilize quaternary switches (biocomputing) and n-state switches (optical and quantum computing). A biocomputer, for example, may run well over a billion solutions to a problem simultaneously, utilizing A, C, G, T switches; the trade-off for breaking the on/off paradigm, however, is that you can only run this particular biocomputer once, and then it's no longer a biocomputer.

    Maybe i'm missing his point, but i just don't understand how you can redefine programming, which is by definition a means of communication with a predictable binary system to mean inputting some kind of "digitized" real-world pattern.

    The process works like this: You (PersonA) can redefine programming or whatever else you want (religion, science, government, etc.) by gathering at least one other person (PersonB) to you, and declaring between the two of you, 'We're going to redefine this term F(x) to now mean F(y).' Alternatively, you can say, 'We're going to redefine this term F(x) to now mean G(x).' Between PersonA and PersonB, this term is now redefined.

    After that, it's all a matter of gathering other people into your circle or domain of practice, and getting other people to believe in your ideas. If you, as PersonA, never get a PersonB, then you are a lone crackpot without any supporters. If you, as PersonA, gather a million people around you and your beliefs, you are either L. Ron Hubbard or Bill Gates.

    And lastly, programming for biocomputers often involves communication with a predictable quaternary (i.e. genetic) system. It just goes to show that the term 'programming' has been pigeon-holed by computer scientists to mean a particular thing in their field of study.
  • by 10am-bedtime (11106) on Saturday January 25, 2003 @05:22PM (#5158682)
    here's one way to understand the gist of the argument: consider programming to be the application of a specification to a protocol.

    • in the old old days, the protocol was "bang bits" and the specification was "register transfer level" (RTL) instructions.
    • in the old days, the protocol was "drive widgets w/ signals" and the specification was "connect this to that" instructions. a lot of gui programming is still done this way (lamentably).
    • in the less recent past, the drudgery of wiring things from the ground up spawned observation that it is possible to regularize some of the wiring; the protocol was still the same but the specification started looking like "connect pre-packaged-this to pre-packaged-that".
    • in the more recent past, the protocol expanded to "connect your-this to your-that" and the specification evolved to be able to generate more fitly that which was formerly pre-packaged en masse, to "declare my-this and my-that".
    • in the present day, the protocol is "your-connect your-this to your-that" and the specification is "declare my-connect, my-this and my-that".
    • the last step (towards adaptive programming by machines) is to hook up an inference engine that specializes on situation, in order to generate the proper my-* bits (all of them). then the protocol is "teach the engine" and the specification is "recognize situation-A and try my-bits-A".

    one can up-scope and note the (still somewhat imperfect) congruence of the last step w/ the original RTL... in any case (heh), the world is a better place if more users understand the programming mindset if not the act of programming, per se. what is a programmer but a cultivator of the machine? what is a good person but a self-programming philanthropist? what is a great hacker but a good person w/ skillz?
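    the last step in the list above ("teach the engine", then "recognize situation-A and try my-bits-A") can be caricatured in a few lines of Java. the class and method names are invented, and a real inference engine would generalize across situations rather than look them up verbatim; this shows only the protocol/specification split.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Caricature of the "teach the engine" protocol: lessons map a
// recognized situation to the my-bits to try for it.
public class TaughtEngine {
    final Map<String, String> lessons = new LinkedHashMap<>();

    // protocol: "teach the engine"
    void teach(String situation, String myBits) { lessons.put(situation, myBits); }

    // specification: "recognize situation-A and try my-bits-A"
    String recognize(String situation) {
        return lessons.getOrDefault(situation, "no lesson learned");
    }
}
```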

  • Amorphous Computing (Score:1, Interesting)

    by buddyjones (627719) on Saturday January 25, 2003 @08:56PM (#5159623)
    Gerry Sussman (one of the authors of the famous 'Wizard' book [mit.edu] taught in beginning computer science classes) has been working on biology-inspired programming paradigms [mit.edu] over at MIT. The correspondence between the structure of living systems and computing systems was pointed out by John von Neumann at the dawn of both fields, and these notions seem to be alive and well today. In this view, the genome is like the assembly code of a program which, when run, is capable of replicating itself, developing from a single cell, and maintaining and healing itself. Wouldn't it be great if we could write computer programs that had these same characteristics? It's an inspiring conception of biological systems and an incredible vision for the future of programming.

    Jaron is a bit of a galactic gas-bag [salon.com], having stated publicly that 'nothing good at all will come from biotechnology' but that information technology is 'almost all good' (interview on NBC, as I remember), but in this interview, I think he's on the mark.
  • by ballzhey (321167) on Saturday January 25, 2003 @10:46PM (#5160018) Homepage
    Found in his manifesto:
    ...the Great Shame of computer science, which is that we don't seem to be able to write software much better as computers get much faster. Computer software continues to disappoint. How I hated UNIX back in the seventies - that devilish accumulator of data trash, obscurer of function, enemy of the user! If anyone had told me back then that getting back to embarrassingly primitive UNIX would be the great hope and investment obsession of the year 2000, merely because its name was changed to LINUX and its source code was opened up again, I never would have had the stomach or the heart to continue in computer science.
  • Re:10 million lines (Score:2, Interesting)

    by twentycavities (556077) <twentycavities@hotmai l . c om> on Sunday January 26, 2003 @04:45AM (#5160920)
    I don't think programs will get longer, since why would anyone adopt a language that makes their job harder?

    So they can brag about it all the time like Steve Gibson [grc.com]. Every time I go to his site I feel like such a pansy just 'cause I don't know ASM.
  • by Anonymous Coward on Monday January 27, 2003 @05:58AM (#5166390)
    The REAL idea he is getting at isn't physical lines of code, although that is a good rough measure, but how complex the programs are: what we are modelling, how many interfaces, relationships, and ideas we express in the code, and how we can reduce the burden of managing bugs.

    The idea of creating fuzzy relationships (erk, interfaces) between components is fascinating. Although it seems that, by definition, for one component to be tolerant of another, it must have a preconception of the inputs to expect, therefore it might be like

    if (reasonableResponse)
        return reasonableResponse;
    else if (someNewIdea)
        return myFuzzyWarmIdeaOfAReasonableResponse;
    else
        throw new HeySomethingHappenedDoSomethingAboutItGracefullyException();

    I do however like people's conceptions of the 10 million limit, and code reuse. After all, most systems in the world are collections of small systems, reused and repeated. The most complex behaviours and unpredictable systems can be broken down into a handful of simple rules or sequences, which is what programming is, or what we think it is.

    Who would want to model one single behaviour in so much code! Let's work at Java-esque ideas of modelling and code reuse, which have really matured recently.

    However, the aim isn't to give us the chance to make really big programs of billions of lines, but to give us the chance to model far more complex systems RELIABLY, including all the little connections between the rocks and grounds of our program.

    Yes, Windows 2000 is a biiiiig application/system, but hands up: who would say it is reliable? Who wants to trawl through it bug-finding? That is why they have to throw it away and start again (heck, they don't; they just leave the shelled-out carcass of the code to bloat the system, and further down the line programmers will reuse methods which look the same as well-tested two-year-old code, except this is two-year-old code that wasn't tested, and they will call it Windows 2004, and we will all enjoy the lovely exploits which ensue).

    Can you imagine if the code base, or complexity of this beast doubled? And it will, they are trying to squeeze all manner of bits we don't need in there, to sell it as new.

    He also mentions how difficult it is for someone to write an operating system. We can imagine what it does, but it is quite an achievement. It seems that here we are talking about not MORE lines of code, but LESS. If 10 million were some cognitive limit (let's just say, for example), we would need people to be able to model a complex application quickly and reliably with ideas, not huge masses of code. Yes, reusing many components, but think outside the rect here: how can these bits of logic, these processes, interact in an error-tolerant way?

    He wants people to compete more easily and quickly, new software ideas to bring power to those who have the real ideas. Brain grease over elbow grease.

    We all know the power of an interface, or contract. We abstract the idea of an 'application' and it runs inside the 'operating system' and we have a contract of resources and interaction.

    But if one element goes fizz, the whole house of cards goes down, especially if we are talking win32.

    Now you could argue, whatever happens, if a component goes fizz, for some reason, we NEED the application to understand this, rather than continue, as processing the data further with this inconsistency could be more damaging. (missiles landing in your back yard)

    So either we conceive some perfect system for creating programs (arguably impossible, since we are only human) or just bring in ideas to MODEL and ABSTRACT the code process through strong contracts, identify patterns (not the same patterns he talks about) in the code, and give us, as humans, the chance to drag little safety nets around each area and say: if this part fails, I want it to decide to go here.
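    That "drag a safety net around each area" idea is expressible today as a small combinator. This is a sketch with invented names, not anyone's production pattern: run the risky part, and if it fails, route control to wherever you decided it should go.

```java
import java.util.function.Supplier;

// A safety net around a component: on failure, control is routed to a
// designated fallback instead of taking down the house of cards.
public class SafetyNet {
    static <T> T attempt(Supplier<T> risky, Supplier<T> fallback) {
        try {
            return risky.get();          // normal path
        } catch (RuntimeException e) {
            return fallback.get();       // "if this part fails, go here"
        }
    }
}
```

    Nets like this compose: each area of the program gets its own fallback decision, made by a human in advance, which is exactly the discipline the comment describes.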

    I have been developing this way for well over 3 years. It wasn't natural; it was a developed skill. Of course I THINK like this naturally, but to bring it into the code was a process I had to enforce. (Please, tell me you know all this already, and show me all your 100% bug-proof applications over 300,000 lines of code.)

    Back to this impossible model, which doesn't allow for errors: a strict rule-based language which predefines all acceptable states as it is coded, and can see where errors can occur. These chunks can then be fitted together like Lego (tm); if they don't fit, they don't fit, and if they do, then they work together, and the behaviour is something only we can decide upon, i.e. through interpretation. Our interpretation would be the failing factor in any system no matter how strict: the legal behaviour of that system might not match what we wanted to model, and that is as much of a bug as one that goes fizz.

    Now Java is getting there. I should say the Java ethos, and best practices, and eXtreme Programming, and team-based development practices: there are so many built-in ideas, and certain powerful tips known to developers, that give us systems which 'cannot' fail. Of course, this is only based upon what we could identify at the time as failing criteria.
