Interview with Jaron Lanier on "Phenotropic" Development
Sky Lemon writes "An interview with Jaron Lanier on Sun's Java site discusses 'phenotropic'
development versus our existing set of software paradigms. According to Jaron, the 'real difference between the current idea of software, which is protocol adherence, and the idea [he is] discussing, pattern recognition, has to do with the kinds of errors we're creating' and if 'we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become.'"
This is an interesting concept... (Score:5, Interesting)
...but I don't see how it's physically possible. It sounds like he's proposing that we restructure programming languages, or at least the fundamentals of programming in the languages we do know (which might as well mean creating a new language). This isn't a bad thing per se, but one example he talks about is this:
Am I stupid or something? He seems to be drawing two completely unrelated things together. Our computers, our CPUs, our ICs, at the end of the day they're just a bundle of very, very tiny on/off switches - pure binary logic. When we develop code for this environment, we have to develop according to those binary rules. We can't say "here's a rock", but we can say "turn on these switches and those switches such that it indicates that we are pointing to a location in memory that represents a rock".
Maybe I'm missing his point, but I just don't understand how you can redefine programming, which is by definition a means of communication with a predictable binary system (as opposed to a "probability-based system" or whatever quantum physicists like to call reality), to mean inputting some kind of "digitized" real-world pattern. It's bizarre.
Read it couple days ago (Score:2, Interesting)
I seriously doubt nature came to the elegant design of 4 base pairs overnight, so let's work hard at making it better without throwing a pile of dung in people's faces. After all, they are the ones who have to build the pieces to get to that point.
"Robust" versus "goal-oriented" (Score:5, Interesting)
The problem with the kind of system he's talking about is that the more robust you make it, the harder it is to change its behavior.
Take the cockroach, for instance. It is damned hard to train 100 of them to work together to open a pickle jar.
That's because a cockroach is extremely robust at being a cockroach, which has nothing to do with teaming up with 99 other cockroaches to open a pickle jar.
I don't believe nature had a design for each individual life form, other than to be robust. That doesn't give us any particular insight into how to design something that is both robust and meets a specific goal, which is the point of almost all software.
Once you get to the point where the specifications of each component are as exact as they need to be to meet a specific goal, you're lacking exactly the kind of robustness that he's describing.
What he's really saying is that entropy is easy to defeat. It's not. Perhaps there will be easier ways to communicate our goals to a computer in the future, but the individual components will still need to be extremely well thought-out. I think it's the difficulty of the language that makes symbol exchange between a human and a computer difficult - the fact that the human needs an exact understanding of the problem before they can codify it isn't going to change.
Re:Evolution and Core Dump (Score:3, Interesting)
Now there are certainly programming methodologies modeled on evolution. But that's not what he's talking about. What he's talking about is using pattern recognition to reduce errors in computer programs - I assume (although he doesn't say this) by making them more tolerant of a range of inputs. Evolution has nothing to do with pattern recognition, other than that both are stochastic processes. Evolution is tolerant of environmental pressure by being massively parallel (to borrow another CS metaphor). And even then it's sometimes overwhelmed (think ice age). His programs would be more tolerant of errors because they used better algorithms (namely pattern recognition).
I think it's a bullshit analogy. As I said before, I'm not sure if this analogy is key to his argument, but I don't give him a lot of cred for it.
Thank You! (Score:2, Interesting)
Re:Evolution and Core Dump (Score:5, Interesting)
If you compare it to something like building a house or an office building, the analogy works. If you misplace one 2x4, it's very unlikely that anything will ever happen. Even with something as serious as doors, if you place one 6 inches to the left or right of where it's supposed to be, it usually works out OK. It always amazed me once I started working in construction how un-scientific it was. I remember being told that the contractors don't need to know that a space is 9 feet 10 1/2 inches. Just tell them it's 10 feet and they'll cut it to fit.
One of the amazing things about AutoCAD versus the typical inexpensive CAD program is that it deals with imperfections. You can build with things that have a +/- tolerance to them, and it will take that into account.
Overall, he definitely seems to be on the right track from what I've seen. On most of the projects I've been working on (J2EE stuff), it seems to be taken as fact that it's possible to get all the requirements and implement them exactly - as if all of business could be boiled down to a simple set of rules.
theory is very interesting (Score:4, Interesting)
I think if more people get turned onto pure component-based development, then the current object-oriented paradigm could carry us much further.
You have chaotic errors where all you can say is, "Boy, this was really screwed up, and I guess I need to go in and go through the whole thing and fix it." You don't have errors that are proportionate to the source of the error.
In a way you do. Right now it's known as try { ... } catch ( ... ) { ... } and throw - or COM interfaces - or whatever other language you might work with that has a way to introduce error handling at the component or exact source-area level, and to handle errors gracefully.
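To make that concrete, here is a minimal sketch (Java, with made-up names) of a component absorbing an error at its own boundary instead of letting it cascade through the whole system:

```java
public class RockComponent {
    // Hypothetical component: parse a weight in grams, and degrade
    // gracefully at the exact source of the error instead of crashing.
    static int parseWeight(String input) {
        try {
            return Integer.parseInt(input.trim());
        } catch (NumberFormatException e) {
            return 0; // assumed default meaning "unknown weight"
        }
    }

    public static void main(String[] args) {
        System.out.println(parseWeight("42"));      // well-formed input
        System.out.println(parseWeight("granite")); // garbage absorbed locally
    }
}
```

The point is only where the handling lives: right at the component interface, so the caller never sees a disproportionate failure.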
"protocol adherence" is replaced by "pattern recognition" as a way of connecting components of software systems.
That would mean a whole new generation of viruses that thrive on the new software model. That would certainly stir things up a bit. Of course, any newly pioneered methodology is subject to many problems.
But I'm not putting down his theory, just commenting on it. The guy's obviously a deep thinker. We need people to push the envelope and challenge our current knowledge like that. Overall the theory is extremely interesting, although the practicality of it will have to be proven, as with all other new ideas.
Reason for Bugs (Score:1, Interesting)
Would you use software which contains bugs?
No? Why? Because it wouldn't be very usable?
Yes? Why? Because it remains usable...
It is possible to write bug-free software, but there's no need to for the average-joe market.
Re:Full of it. (Score:2, Interesting)
The point this guy makes, and I totally agree with, is that programming can't stay the same forever. I mean, come on, we're practically programming assembly. High-level, IDE'd and coloured and stuff, but not a bit different fundamentally.
Functional programming for example, that's different. It probably sucks (I don't really know it) but it's different. It's been around for about 30 years too.
There has to be something that would let me create software without writing
for i=1 to n
for every piece of code I make. It's just... primitive.
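To be fair, some of this already exists in small doses. A sketch in Java (illustrative only) of stating the intent - "sum of squares" - without spelling out the for i=1 to n loop yourself:

```java
import java.util.List;

public class NoExplicitLoop {
    // Describe *what* you want; the stream machinery does the iterating.
    static int sumOfSquares(List<Integer> xs) {
        return xs.stream().mapToInt(x -> x * x).sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(List.of(1, 2, 3, 4, 5))); // 55
    }
}
```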
And this guy is right about something else too. If nobody's looking for it, it's gonna take a lot longer to find it.
Re:This is an interesting concept... (Score:3, Interesting)
Our computers, our CPUs, our ICs, at the end of the day they're just a bundle of very, very tiny on/off switches - pure binary logic.
Our DNA, the genetic code that is used to make the building blocks that make us us, and make chimpanzees chimpanzees, is essentially a number in base 4, manipulated by molecules that are either completely hardcoded, or defined within that base 4 number, and influenced by chaotic forces in the world at large.
Mathematically and logically speaking, there is no difference between a base 4 number and a base 2 number. Nature uses base 4 because she had 4 nucleotides to play with, we use base 2 because it's cheaper to design and build; they are the same numbers.
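A quick sketch in Java (names mine) making the point literal: each nucleotide is one base-4 digit, which is exactly two bits, so a DNA string and a bit string denote the same numbers:

```java
public class Base4 {
    // One nucleotide = one base-4 digit = exactly two bits.
    static int digit(char base) {
        switch (base) {
            case 'A': return 0;
            case 'C': return 1;
            case 'G': return 2;
            case 'T': return 3;
            default: throw new IllegalArgumentException("not a nucleotide: " + base);
        }
    }

    // Interpret a DNA string as a base-4 number, built two bits at a time.
    static long toNumber(String dna) {
        long n = 0;
        for (char c : dna.toCharArray()) {
            n = (n << 2) | digit(c); // shift by one base-4 digit = two bits
        }
        return n;
    }

    public static void main(String[] args) {
        // "GATT" = 2*64 + 0*16 + 3*4 + 3 = 143, the same number in either base.
        System.out.println(toNumber("GATT"));
    }
}
```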
When we develop code for this environment, we have to develop according to those binary rules.
Perhaps, but there are some things that need to be kept in mind here. As Lanier points out, much of what we consider "this environment" is the byproduct of business decisions, not the essential nature of binary computing: processor registers, one-dimensional memory, the four-ring security model, interrupts, files - these can all be done differently.
Also, as has been demonstrated in numerous ways, just because you are using a binary device doesn't mean that you must develop based on binary logic - people aren't toggling the boot loader via the switches on the front of the computer anymore. In binary, someone can develop an environment that is much, much richer than binary. Then, separately, anyone can develop for that environment without having to worry about the binary details.
We even have the technology to, given sufficient computing power, completely model any non-chaotic analog environment and have it work right (just keep making the bit lengths longer until you are safely under whatever error tolerance you have). Chaotic analog environments are harder, but people are working on it; we've got the technology to make something that looks like a chaotic environment, but it misses out on much of the richness.
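The "keep making the bit lengths longer" trick can be sketched with arbitrary-precision arithmetic - here approximating sqrt(2) by Newton's method in Java (a toy example under my own assumptions, not anyone's production technique):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class Precision {
    // Newton's method for sqrt(2): ask for more digits and the answer
    // lands safely under any fixed error tolerance.
    static BigDecimal sqrt2(int digits) {
        MathContext mc = new MathContext(digits + 5); // working precision
        BigDecimal two = BigDecimal.valueOf(2);
        BigDecimal x = BigDecimal.ONE;
        for (int i = 0; i < 60; i++) {
            x = x.add(two.divide(x, mc)).divide(two, mc); // x = (x + 2/x) / 2
        }
        return x.round(new MathContext(digits));
    }

    public static void main(String[] args) {
        System.out.println(sqrt2(10)); // 10 significant digits of sqrt(2)
    }
}
```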
We can't say "here's a rock", but we can say "turn on these switches and those switches such that it indicates that we are pointing to a location in memory that represents a rock".
But we can. When you write a paragraph of text in HTML, you don't say "turn on these switches and those switches such that it indicates that we are pointing to a location in memory that represents a paragraph", you say "here is a paragraph, here's the text in the paragraph". You can make an environment where you can say "here is a rock" (but until we get better at chaos, it will look and act at best almost, but not quite, like a rock).
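A toy Java sketch of that same layering (all names invented): declare the rock at the rich level, while the substrate underneath is still nothing but bit patterns:

```java
public class Declarative {
    // At the "rich environment" level you just declare the thing...
    static class Rock {
        final double massKg;
        final String mineral;
        Rock(double massKg, String mineral) {
            this.massKg = massKg;
            this.mineral = mineral;
        }
    }

    // ...while underneath, the runtime is still flipping binary switches:
    // the "mass" field is really just a 64-bit IEEE-754 pattern.
    static long massAsBits(Rock r) {
        return Double.doubleToLongBits(r.massKg);
    }

    public static void main(String[] args) {
        Rock rock = new Rock(1.5, "granite"); // "here is a rock"
        System.out.println(Long.toBinaryString(massAsBits(rock)));
    }
}
```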
Comment removed (Score:2, Interesting)
Hello World, for example (Score:3, Interesting)
To convert it to the software-world equivalent - With enough knowledge of the specific hardware platform it will run on, a good programmer can write a 100% bug-free, "perfectly" robust "hello world" program.
(Anyone who thinks "void main(void) {printf("hello world\n");}" counts as a perfectly bug-free program has clearly never coded on anything but a well-behaved single-processor PC running a Microsoft OS with a well-behaved compiler.)
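For flavor, here is a slightly less naive "hello world" in Java; the only extra robustness is noticing whether the write to stdout actually failed (PrintStream.checkError() is a real API; the structure around it is just a sketch):

```java
public class RobustHello {
    static String greeting() {
        return "hello world";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
        // Unlike the classic one-liner, at least detect a failed write
        // (closed pipe, full disk, ...). checkError() flushes the stream
        // and reports whether it has ever seen an IOException.
        if (System.out.checkError()) {
            System.exit(1); // can't report on a dead stdout; exit nonzero
        }
    }
}
```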
However, extending your idea, how do you get 100 "hello world" programs to work together to, say, play an MP3?
Yeah, it sounds absurd, but it seems like exactly what the parent article suggests. Trained on enough "patterns" of input, even a set of "hello world" programs should manage to learn to work together to play MP3s.
That *might* work if we started writing programs more as trainable functional approximation models (such as neural nets, to use the best currently known version of this). But, as much as it seems nice to have such techniques around to help learn tasks a person can't find a deterministic algorithm for, they *SUCK*, both in training time *and* run time, for anything a human *can* write straightforward code to do. And, on the issue of training... This can present more difficulties than just struggling through a "hard" task, particularly if we want unsupervised training.
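For the curious, the simplest possible instance of such a trainable functional approximation: a single perceptron learning the OR pattern, sketched in Java (a toy of my own, not anything from the interview):

```java
import java.util.Arrays;

public class TinyPerceptron {
    // Threshold unit: fire if bias + weighted inputs exceed zero.
    static int predict(double[] w, int[] x) {
        return (w[0] + w[1] * x[0] + w[2] * x[1]) > 0 ? 1 : 0;
    }

    // Perceptron learning rule, trained on the four OR examples.
    static double[] train() {
        int[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        int[] targets  = {0, 1, 1, 1};
        double[] w = {0, 0, 0}; // bias, w1, w2
        for (int epoch = 0; epoch < 20; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                int err = targets[i] - predict(w, inputs[i]);
                w[0] += 0.1 * err;                // nudge the bias...
                w[1] += 0.1 * err * inputs[i][0]; // ...and each weight
                w[2] += 0.1 * err * inputs[i][1]; // toward the target
            }
        }
        return w;
    }

    public static void main(String[] args) {
        double[] w = train();
        for (int[] x : new int[][]{{0, 0}, {0, 1}, {1, 0}, {1, 1}}) {
            System.out.println(Arrays.toString(x) + " -> " + predict(w, x));
        }
    }
}
```

Even here the parent's complaint holds: twenty epochs of nudging to learn something a one-line boolean expression does exactly.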
I really do believe that, some day, someone will come up with the "killer" software paradigm, that will make everything done up to that point meaningless. But, including this current idea, it hasn't happened yet.
But to end on a more Zen note... Phenotropic development already exists, in the perfected form. When a rock touches the ground, all the atoms involved just intuitively "know" where the balance of forces lies. They don't need to "negotiate", they just act in accord with their true nature.
Re:not that bad... (Score:3, Interesting)
Exactly. Just as libraries were created so that we can turn memory allocation into a single call, and just as some newer languages or libraries turn an HTTP transaction into a single call, we will be able to encapsulate more functionality as needed to reduce the LOC count to something reasonable. And we can do this without relying on Jaron's magic "chaos" or "complexity" or "pattern recognition" 90's buzzwords.
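A small Java illustration of exactly that kind of encapsulation (file I/O rather than HTTP, to keep it self-contained): what used to be an open/loop/buffer/close dance is now one standard-library call each way:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OneCall {
    // Whole-file write and read as single library calls (Java 11+).
    static String roundTrip(String text) {
        try {
            Path p = Files.createTempFile("demo", ".txt");
            Files.writeString(p, text);            // one call to write it all
            String contents = Files.readString(p); // one call to read it back
            Files.delete(p);
            return contents;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.print(roundTrip("line 1\nline 2\n"));
    }
}
```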
Jaron is correct in that, yes, we will reduce program complexity and LOC by describing what we want more broadly. He is incorrect in believing that this requires any machine intelligence "magic".
Re:This is an interesting concept... (Score:3, Interesting)
Hmmm. That's kind of like asking how it's possible for two three-dimensional objects to occupy the same place in space. The answer, of course, is to displace those objects along the time vector. Similarly, I think that the author is trying to urge coding paradigms onto a new and different vector basis. This, of course, happens all the time, and people are always indignant when their domain of study's basis of authority is undermined by someone else's work.
Am I stupid or something? He seems to be drawing two completely unrelated things together. Our computers, our CPUs, our ICs, at the end of the day they're just a bundle of very, very tiny on/off switches - pure binary logic. When we develop code for this environment, we have to develop according to those binary rules.
No, not stupid. Caught up in the paradigm of binary opposition, perhaps. Personal computers produced for mass consumption are bundles of very, very tiny on/off switches. Research computers often utilize quaternary switches (biocomputing) and n-switches (optical and quantum computing). A biocomputer, for example, may run well over a billion solutions to a problem, simultaneously, utilizing A, C, G, T switches; the trade-off for breaking the on/off paradigm, however, is that you can only run this particular biocomputer once, and then it's no longer a biocomputer.
Maybe i'm missing his point, but i just don't understand how you can redefine programming, which is by definition a means of communication with a predictable binary system to mean inputting some kind of "digitized" real-world pattern.
The process works like this: you (PersonA) can redefine programming, or whatever else you want (religion, science, government, etc.), by gathering at least one other person (PersonB) to you, and declaring between the two of you, 'We're going to redefine this term F(x) to now mean F(y).' Alternatively, you can say, 'We're going to redefine this term F(x) to now mean G(x).' Between PersonA and PersonB, this term is now redefined.
After that, it's all a matter of gathering other people into your circle or domain of practice, and getting other people to believe in your ideas. If you, as PersonA, never get a PersonB, then you're a lone crackpot without any supporters. If you, as PersonA, gather a million people around you and your beliefs, you are either L. Ron Hubbard or Bill Gates.
And lastly, programming for biocomputers often involves communication with a predictable quaternary (i.e. genetic) system. It just goes to show that the term 'programming' has been pigeon-holed by computer scientists to mean a particular thing in their field of study.
Re:This is an interesting concept... (Score:2, Interesting)
one can up-scope and note the (still somewhat imperfect) congruence of the last step w/ the original RTL... in any case (heh), the world is a better place if more users understand the programming mindset if not the act of programming, per se. what is a programmer but a cultivator of the machine? what is a good person but a self-programming philanthropist? what is a great hacker but a good person w/ skillz?
Amorphous Computing (Score:1, Interesting)
Jaron is a bit of a galactic gas-bag [salon.com], having stated publicly that 'nothing good at all will come from biotechnology' but that information technology is 'almost all good' (interview on NBC, as I remember). In this interview, though, I think he's on the mark.
liked the linux comments (Score:2, Interesting)
Re:10 million lines (Score:2, Interesting)
So they can brag about it all the time like Steve Gibson [grc.com]. Every time I go to his site I feel like such a pansy just 'cause I don't know ASM.
The REAL idea of complexity (Score:1, Interesting)
The idea of creating fuzzy relationships (erk, interfaces) between components is fascinating. Although it seems that, by definition, for one component to be tolerant of another, it must have a preconception of the inputs to expect, therefore it might be like
if (reasonableResponse)
    return reasonableResponse;
else if (someNewIdea)
    return myFuzzyWarmIdeaOfAReasonableResponse;
else
    throw new HeySomethingHappenedDoSomethingAboutItGracefullyException();
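A compilable Java version of that sketch (all names invented) might look like a component that takes exact input, then fuzzy input, and only then gives up:

```java
public class FuzzyInterface {
    // Tolerant component: exact match first, then a fuzzy match,
    // then fail loudly. Everything here is illustrative.
    static String respond(String request) {
        if (request.equals("PLAY")) {
            return "playing";               // the reasonable response
        } else if (request.trim().equalsIgnoreCase("play")) {
            return "playing (fuzzy match)"; // warm fuzzy idea of reasonable
        } else {
            throw new IllegalArgumentException("cannot recover: " + request);
        }
    }

    public static void main(String[] args) {
        System.out.println(respond("PLAY"));
        System.out.println(respond("  play "));
    }
}
```

Note the preconception problem is still there: the "fuzzy" branch only tolerates variations its author anticipated.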
I do, however, like people's conceptions of the 10 million limit, and code reuse. After all, most systems in the world are collections of small systems, reused and repeated. The most complex behaviours and unpredictable systems can be broken down into a handful of simple rules or sequences, which is what programming is, or what we think it is.
Who would want to model one single behaviour in so much code! Let's work at Javaesque ideas of modelling and code reuse, which have really matured recently.
However, the aim isn't to give us the chance to make really big programs of billions of lines, but to give us the chance to model far more complex systems RELIABLY, including all the little connections between the rocks and grounds of our program.
Yes, Windows 2000 is a biiiiig application/system, but hands up, who would say it is reliable? Who wants to trawl through it bug-finding? That is why they have to throw it away and start again (heck, they don't - they just leave the shelled-out carcass of the code to bloat the system, and further down the line programmers will reuse methods which look the same as well-tested 2-year-old code, but this is 2-year-old code that wasn't tested; they will call it Windows 2004, and we will all enjoy the lovely exploits which ensue).
Can you imagine if the code base, or complexity of this beast doubled? And it will, they are trying to squeeze all manner of bits we don't need in there, to sell it as new.
He also mentions how difficult it is for someone to write an operating system. We can imagine what it does, but it is quite an achievement. It seems that here we are talking about not MORE lines of code, but LESS. If 10M were some cognitive limit (let's just say, for example), we would need people to be able to model a complex application quickly and reliably with ideas, not huge masses of code. Yes, reusing many components, but think outside the rect here: how can these bits of logic, these processes, interact in an error-tolerant way?
He wants people to compete more easily and quickly, new software ideas to bring power to those who have the real ideas. Brain grease over elbow grease.
We all know the power of an interface, or contract. We abstract the idea of an 'application' and it runs inside the 'operating system' and we have a contract of resources and interaction.
But if one element goes fizz, the whole house of cards goes down, especially if we are talking win32.
Now you could argue, whatever happens, if a component goes fizz, for some reason, we NEED the application to understand this, rather than continue, as processing the data further with this inconsistency could be more damaging. (missiles landing in your back yard)
So either we conceive some perfect system for creating programs (arguably impossible, since we are only human) or just bring in ideas to MODEL and ABSTRACT the code process through strong contracts, and identify patterns (not the same patterns he talks about) in the code, and give us, as humans, the chance to drag little safety nets around each area, and say: if this part fails, I want it to decide to go here.
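The "safety net around each area" idea can be sketched generically in Java (illustrative only, not a full framework): wrap a risky area and route to a fallback when it goes fizz:

```java
import java.util.function.Supplier;

public class SafetyNet {
    // Drag a safety net around an area of code: if this part fails,
    // decide where to go next instead of letting the house of cards fall.
    static <T> T withFallback(Supplier<T> risky, Supplier<T> fallback) {
        try {
            return risky.get();
        } catch (RuntimeException e) {
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        String result = withFallback(
            () -> { throw new IllegalStateException("component went fizz"); },
            () -> "fallback path");
        System.out.println(result);
    }
}
```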
I have been developing this way for well over 3 years. It wasn't natural; it was a developed skill. Of course I THINK like this naturally, but bringing it into the code was a process I had to enforce. (Please, tell me you know all this already, and show me all your 100% bug-proof applications over 300,000 lines of code.)
Back to this impossible model - which doesn't allow for errors: a strict rule-based language which predefines all acceptable states as it is coded, and can see where errors can occur. These chunks can then be fitted together like Lego (tm); if they don't fit, they don't fit, and if they do, they work together, and the behaviour is something only we can decide upon, i.e. through interpretation. Our interpretation would be the failing factor in any system no matter how strict: the legal behaviour of the system in itself might not match what we wanted to model, and that is as much of a bug as one that goes fizz.
Now, Java is getting there - I should say the Java ethos, and best practices, and eXtreme Programming, and team-based development practices. There are so many built-in ideas, and certain powerful tips known to developers, that give us systems which 'cannot' fail. Of course, this is only based upon what we could identify at the time as failing criteria.