The Hundred-Year Language 730
dtolton writes "Paul Graham has a new article called "The Hundred-Year Language" posted. The article is about the programming languages of the future and what form they may take. He makes some interesting predictions about the rate of change we might expect in programming languages over the next 100 years. He also makes some persuasive points about the possible design and construction of those languages. The article is definitely worth a read for those interested in programming languages."
Seymour Cray said it best (Score:5, Funny)
I do not know what the language of the year 2000 will look like, but it will be called FORTRAN. [cmbi.kun.nl] -Attributed to many people including Seymour Cray, John Backus
Re:Seymour Cray said it best (Score:4, Funny)
Re:Seymour Cray said it best (Score:3, Funny)
2100: No computer Languages. (Score:4, Insightful)
Planet P Blog [planetp.cc]
I predict... (Score:5, Funny)
And I will still be refusing to maintain them. Six years in the COBOL mines was six years too long...
Re:I predict... (Score:5, Funny)
I predict that in 100 years someone, somewhere, will still be running COBOL applications.
And I will still be refusing to maintain them.
Surely that depends on whether you're damned or not. I imagine there's a whole circle of hell devoted to maintaining COBOL apps.
Cobol is back. (Score:4, Funny)
how long (Score:3, Insightful)
xao
Re:how long (Score:5, Interesting)
\ Word definitions
: convicted-of 0 ;
\ To convict someone
: murder 25 + ;
: arson 10 + ;
: robbery 2 + ;
: music-copying 40 + ;
: sentenced-to . ." years of prison" ;
And to use it:
convicted-of music-copying robbery sentenced-to
Output: 42 years of prison
This looks quite like English. Of course, you can do that in many languages, but I think it feels more natural in Forth.
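The point that many languages can host this kind of reads-like-English vocabulary is easy to demonstrate. Here is a rough Python analogue of the Forth sentencing example; all of the names are just illustrative stand-ins for the Forth words:

```python
# A rough Python analogue of the Forth sentencing example above.
# The crime table and function names mirror the Forth words.
crimes = {"murder": 25, "arson": 10, "robbery": 2, "music-copying": 40}

def convicted_of(*charges):
    # Sum the sentence for each charge, much as the Forth words
    # accumulate a total on the stack.
    return sum(crimes[c] for c in charges)

def sentenced_to(years):
    return f"{years} years of prison"

print(sentenced_to(convicted_of("music-copying", "robbery")))
# prints "42 years of prison"
```

The Forth version still reads more naturally, since the words compose left to right with no parentheses or commas.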
Re:how long (Score:5, Funny)
Re:how long (Score:2)
Not long... (Score:4, Informative)
In fact, never. Because while they're fine for humans, human languages have a few problems:
1) Redundancy: far too many ways to say or do one thing
2) Ambiguity: "1 may be equal to x"; "Sometimes allow the user to do this if they aren't doing something else that might conflict"
So what you might get is a restricted language with restricted terms that could help. But even these tend to fall down; the first UML spec was written using such a language, but this was abandoned for the more formal UML notation, as the inherent ambiguities of natural language couldn't be overcome.
So basically you might have some mechanism for translating from formal into informal, but the real work will be done in a formal manner, as now, as ever, because at the end of the day...
Who wants to rely on a system that implements "sometimes" ?
Re:Not long... (Score:4, Insightful)
In fact, never. Because while they're fine for humans, human languages have a few problems:
Well, yeah, but doesn't a computer language suffer from the same pitfalls? If that isn't the case, why do languages tend to "evolve" over time? Why are new languages that borrow elements from other languages so prevalent?
1) Redundancy: far too many ways to say or do one thing
Isn't one of the driving principles of Perl "There's more than one way to do it"? Some say this is one of Perl's best features; others say it sucks.
I won't argue with the point about ambiguity. You can remove ambiguity from a "spoken" language by applying rules to it. I do think we're quite far away from being able to "speak" a program, but that's because we as a culture have moved away from a _grammar_ of English. Check the courses at a university and see what first-year English and Linguistics students are taking. It's not Grammar, it's Grammars. Standard written English is a thing of the past. So we won't base a language on how we actually use our language, but we could base a language on certain grammars of the language. And isn't that something else that languages like Perl and Python try to do? They try to create more "readable" programs?
English and Grammar... (Score:4, Informative)
English actually doesn't really have a written grammar, BTW. English was the language of the poor, not of the gentry, so it evolved as a loosely ruled language rather than one with definite constructs like prescriptive Latin or modern German.
Basically, English was the language of the plebs; the rich and the diplomats spoke French. The idea of a grammar was retrofitted by the Victorians, who applied Latin rules to English that just don't fit.
Let's put it this way: in English you can screw with the language as much as you want, and it continues to change every year. This is fine, as it makes it a rich communication tool.
What other languages can use one word to make an entire sentence ?
F*ck's F*ckers F*cking F*cked
Re:English and Grammar... (Score:3, Interesting)
Latin:
The colloquial translation:
And, although Latin is inflected while English is only partially inflected, all of the words are identical!
;-)
So HAH.
This sort of poetry is common in obfuscated C contests, although the visual lack of distinction between 0 and O and I and 1 is also commonly needed.
-Billy
Re:how long (Score:4, Interesting)
For unsupervised commands, humans tend to create something not all that different from code. A fixed grammar and vocabulary come into play (i.e. little slang, and a very normalized style). For example:
Employees will update their status on the In/Out board in the lobby when they will be gone for more than 15 minutes.
which is roughly:
(if (> (expected-completion-time task) 15)
    (update-status out))
So the need and utility isn't there.
Re:how long (Score:3, Interesting)
Structurally, spoken languages and computer languages are very similar:
Phonetics: sounds
Phonology: sounds in relation to one another
Morphology: words
Syntax: structure (words in relation to one another)
Semantics: meaning
Pragmatics: meaning in context.
Morphology, syntax, and semantics are shared by human and computer languages. Arguments could be made about phonology, too, but not by me. Some computer languages might even have pragmatics. (Example of pragmati
Re:how long (Score:5, Insightful)
This is a false and limited conception of the original poster's intent. Imagine having an A.I. on a PDA-type device that you carry with you from the age of 4. The PDA has a 100-terabyte HD and records/monitors your spoken words, actions, etc. After 20 or 30 years of this, your PDA probably knows you better than anyone. So if you tell your PDA "make a cool program that looks like this, and does this", there's a very good chance it understands what you mean.
Think about police sketch artists. They take vague, half remembered information...and turn it into a very accurate rendering of the original image. You have a vague idea in your mind of what you are describing, and you can't see what he/she is drawing. So you describe the person...and 5 minutes later the artist shows you a rather remarkable portrait of what you described. Which in many cases later turns out to very closely resemble the suspect. The missing link here is context. The context of shared culture and language.
If you can sit at a table and describe the basic functionality of a program, and describe its interface, using words, then your magic PDA will do the rest. It will even give you demos and visual feedback on the fly as you describe the program. It would serve as a layer between the absolutely massive context of your personal history and the "structured" programming language required to build said program.
Please don't limit the future, it's bigger than you are.
Re:Interactivity (Score:3, Informative)
Also, think about languages like Ruby or Lisp where the interpreter can alter a program while it's running. As an example I wrote a small text editor in Ruby/Tk in which you could modify the source code to the editor in a buffer, then choose "eval buffer in application", and that code would run in
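For readers who haven't seen this style, here is a minimal sketch of the same idea in Python: redefining a function in a running program by evaluating new source text. The function name and buffer contents are made up for illustration:

```python
# Minimal sketch: redefining a function in a running program by
# evaluating new source text, in the spirit of the Ruby/Tk editor above.
def greet():
    return "hello"

print(greet())  # prints "hello"

# Pretend this string came from an editor buffer.
buffer = '''
def greet():
    return "hello, live-edited world"
'''
exec(buffer, globals())  # the "eval buffer in application" step

print(greet())  # prints "hello, live-edited world"
```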
Re:how long (Score:4, Insightful)
The actual source code (i.e. the instructions to the processor) should be surrounded by quote marks or other delimiters, and the comments (i.e. the extended code description and documentation) should be the part of the source surrounded by white space characters (space, tab, cr/lf).
I never cease to be amazed at how little programming has changed since the 1960s. It really seems that the only innovation in compiler user-interface design has been that (some) compilers will actually allow you to put your keywords and comments in color! (duh!)
If we are ever going to increase the productivity of programmers to even remotely match the vast increases in the price/performance of the hardware, then we must be willing to spend large amounts of time, energy, and money to develop new and better approaches to writing software code.
We must abandon our kilobyte mentality to gigabyte technology!
As an example of a different approach, has anyone considered using Chinese characters arranged in a three-dimensional grid as a method of doing synchronous parallel programming? Have each character represent a complete function, and have their placement in the 3-D grid space represent the point in the algorithmic process at which the function should be complete. The compiler would either create the machine language or suggest other arrangements of the parallel process by rearranging the Chinese characters in the 3-D user interface.
(The fact that it sounds weird is not important. What is important is that any new idea that can help improve the productivity of programmers should be considered, regardless of how strange it may sound at the present time)
Thank you,
Aliens (Score:3, Funny)
Presumably many libraries will be for domains that don't even exist yet. If SETI@home works, for example, we'll need libraries for communicating with aliens. Unless of course they are sufficiently advanced that they already communicate in XML.
Let's hope it's not Microsoft's XML, because that could cause a problem with communication: they might say "We come in peace" and start shooting at us with lasers and everything!
No current languages will exist.. (Score:5, Funny)
Convergence (Score:2, Insightful)
For species branches can converge too - it's just kind of weird...
Re:Convergence (Score:3, Funny)
So you've been getting that xxx farm girl spam too...
dead-end? (Score:2, Insightful)
dead-end? Java has already spawned javascript and C#.
Re:dead-end? (Score:2)
Re:dead-end? (Score:5, Informative)
Re:dead-end? (Score:5, Insightful)
Comparing JavaScript and Java is like comparing a shark to a dolphin: quite different, actually, even though both animals live in the sea and both languages use the letters J, A, and V. Both have cardiovascular systems, and both use variables and control structures. But that is basically where the similarities end.
JavaScript actually started life inside Netscape as LiveScript, and during the Netscape 2.0 time frame it was renamed JavaScript to ride the Java bandwagon, but there is no relationship at all beyond that. Compile-time type safety? Java yes, JavaScript no. Prototypes? JavaScript yes, Java no. eval() of new program code? One but not the other. Interface inheritance? Again. First-class methods? Yep, not both. Bones? Sharks no, dolphins yes (teeth don't count).
Now, C# and Java are at best siblings, but Java did not beget C#. The namespace structure is straight from ANSI C++, and the primitive types include C-isms like signed and unsigned varieties. You don't shed a tail and then grow it back further down the trail. The comparison here is alligators and crocodiles: very similar, but one did not beget the other; there was a closer common parent than with the sharks and dolphins.
Yep, dead end (Score:4, Informative)
...Neither of which has done anything to advance the state of the art in programming languages, even if your claim were true.
The one thing I have confidence in about programming in the future is that sooner or later, the tools and techniques with genuine advantages will beat the "useful hacks". Java, C++, VB and their ilk are widely used today because they can get a job done, and there's not much better around that gets the same job done as easily.
Sure, there are languages that are technically superior, but they're so cumbersome to use that no-one really notices them, and when they do, you don't have the powerful development tools, the established code base of useful libraries, the established user base of developers to hire, etc. When we get to the point that languages with more solid underlying models catch up on ease of use, then we'll relegate the useful hacks to their place in history as just that. Until then, we'll keep using the useful hacks because we have jobs to do, but don't expect the tools of the future to be built on them.
moores law (Score:2)
AI (Score:2, Interesting)
Best quote from the article (Score:5, Insightful)
Re:Best quote from the article (Score:5, Funny)
Re:Best quote from the article (Score:4, Informative)
Ha. I always thought it would look more like:
Why Change? (Score:3, Insightful)
Re:Why Change? (Score:3, Insightful)
Safer languages. Forget typesafe languages, we'll have typeless languages. And then the algorithms will be intensely abstracted as well. We'll have functional composition with a usable syntax. We'll create GUIs by overloading the + operator to handle components. And of course, automatic runtime code reuse will be an assumed feature of the language.
And of course, the past fifty years teaches us that it
Re:Why Change? (Score:3, Insightful)
The horror (Score:4, Funny)
VIOD THING (OMFG!!!1 LOLOLOLOOL!!!)
INIT HAX0R N00B!!!
WHIEL STFU DO
GOTO 10
DOEN
Awareness... (Score:5, Insightful)
Imagine cars that, before changing lanes, signal to the surrounding cars' navigation systems, and those cars work out for themselves how to let the car into the lane. A computer can be told to slow down, rather than speed up, when someone wants to change lanes. Or detectors in the dotted yellow lines that sense when you change lanes without signalling and alert the traffic authority to bump your points (à la The Fifth Element).
I always liked the idea of my PDA phonebook being more of a recently-used cache of numbers instead of a local store. I just punch up a number. If it's one of my commonly used ones, it comes right up (and dials, of course). But if it's not, then my PDA connects to the phone company, gets the information (and probably pays the phone company a micropayment for the service) and now I have that number locally on my PDA until it gets scrolled off if it's not used much.
Also, I expect lots of pseudo-intelligent content-filtering software. You'll get 1000 emails a day, and your spam filter will not only remove 99% of them but also identify and prioritize the remaining ones. For this to be useful, there need to be languages that deal with the expression of rules and logic in a meaningful way (far more than just and/or/not). No one 100 years from now will say "if subject ~= /*mom*/" (or however the hell you say it); they will expect to say "Give email from mom a higher priority", or something very close.
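As a rough illustration of the gap described here, this hypothetical Python sketch expresses the same filtering intent at both levels of abstraction; the contact table and message fields are invented for the example:

```python
# Hypothetical sketch: the same filtering intent at two levels of abstraction.
# The contact table and message fields are invented for illustration.
import re

def priority_regex(msg):
    # The low-level version the poster parodies: match a pattern in a header.
    return 1 if re.search(r"mom", msg["from"], re.IGNORECASE) else 0

# A tiny rule layer that reads closer to
# "give email from mom a higher priority".
contacts = {"mom": "mom@example.com"}

def priority_rule(msg, *, higher_priority_from):
    return 1 if msg["from"] == contacts[higher_priority_from] else 0

msg = {"from": "mom@example.com", "subject": "dinner"}
print(priority_rule(msg, higher_priority_from="mom"))  # prints 1
```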
Re:Awareness... (Score:5, Interesting)
Travis
Evolutionary dead-end? (Score:3, Interesting)
Er... I don't think that Cobol is an evolutionary dead-end; in a better world it would be extinct, but it isn't. What makes a language widely used is something that we can't predict right now - we have to watch it evolve over time, and as it grows and matures look at different aspects.
Take architecture, for example - new buildings are loved for the first five years because of their freshly introduced ideas. After that, all the problems start to appear - mildew problems, asbestos in the walls, and so on. During the next ten years, the teething problems are fixed. It is only a HUNDRED YEARS after the new building (or, in our case, the new programming language) is introduced that it can be properly evaluated. The language/building has then either been replaced, or it has survived.
So - the only proper way to measure the success of a programming language is to measure its survivability. Sure, we can do guesstimates along the way:
During introduction: Does the language have a good development environment? Is the language backed/introduced by a market leader?
Somewhere during the "middle years" (after about ten years): Does the language have a large user base? Does the language have a large code base?
After twenty/thirty years: ask the programmers if it really is maintainable...
Well - you get the picture! Predicting the survivability of something more than five years into the future is impossible, I'd say.
Waste of Time (Score:3, Insightful)
huh!?!?
This is the kind of mental constipation that is better left for blog sites.
Somewhere there is a parallel between the logic in this article and the dot-bomb business model.
I know the name of the language. (Score:2)
-russ
History and Future (Score:5, Interesting)
Re:History and Future (Score:4, Informative)
I wouldn't read too far into this article... (Score:4, Insightful)
It may seem presumptuous to think anyone can predict what any technology will look like in a hundred years...Looking forward a hundred years is a graspable idea when we consider how slowly languages have evolved in the past fifty.
Hmm... funny, fifty years ago, if I remember my history (since I wasn't alive back then), those relay computers needed rolls and rolls of ticker-tape punch holes to compute math. The language was so low-level that even x86 assembly would have been a godsend to them. And he considers something like object-oriented programming a slow evolution?
All he's doing in the article is predicting what languages will be dead in the future, and which languages won't be. For example, he says Java will be dead...
Cobol, for all its sometime popularity, does not seem to have any intellectual descendants. It is an evolutionary dead-end-- a Neanderthal language...I predict a similar fate for Java.
I'll not go there, because predicting the demise of Java is opening another can of worms. But let's just say that he really doesn't support his argument with anything other than anecdotal opinion.
I say read his article in jest, but don't look too deep into it.
Re:I wouldn't read too far into this article... (Score:3, Informative)
Fortran dates to 1954.
So, there are 50 years of computer language.
Re: I wouldn't read too far into this article... (Score:3, Informative)
> And he considers something like Object-Oriented Programming a slow evolution?
When you consider that it is just a metaphor for refinements of pre-existing ideas such as data hiding, which in turn are refinements of pre-existing ideas such as structured programming, which in turn are refinements of pre-existing ideas such as "high level" programming languages,
See past the hype.
Re:I wouldn't read too far into this article... (Score:5, Insightful)
Using round numbers, he is talking about the fifties, although really he probably wants to include the sixties.
So what did we have? Among others: Fortran, which is still around and has influenced many designs. Algol, which begat C, Java, C++, and C#. Lisp, which introduced FP and most (certainly not all) of the interesting ideas that somewhat mainstream languages like Python, Ruby, and Perl are starting to pick up on 30+ years later.
I know you weren't paying attention, but OO came in the 60's, and was developed *far* beyond anything seen today in mainstream production languages by the early 80's. (smalltalk, New Flavours, CLOS)
Most of what mainstream programmers think of as the history of language ideas is complete dreck, because they make the mistake of thinking that the first time they see a company hyping an idea has any relationship to when the idea was arrived at.
If you had actually read the quoted sentence for comprehension, you would understand that he didn't say that Java would be dead; he said that it was an evolutionary dead end.
Not the same thing. Java is a fairly direct evolutionary descendant of Algol. Cobol, a contemporary of Algol, has no evolutionary descendants.
What he said is that the languages of 100 years from now will not *descend* from Java, any more than the languages of today descend from Cobol. I wouldn't be surprised if there were Java programs around in 100 years, but that is the nature of legacy systems, not an interesting insight.
Re:I wouldn't read too far into this article... (Score:4, Insightful)
On natural language... (Score:5, Insightful)
Having said that, I expect that the user language should certainly be natural language -- the "computers should understand people talk, not the other way around" argument. People know what they want out of their machines, for the most part. Whether it is "change my background to blue and put up a new picture of the baby" or "Find me a combination of variables that will result in the company not failing with a probability of greater than 90%", people want to do lots of things. They just need a way to say it. Pretty much every Star Trek reference you'll ever see that involves somebody talking to the computer is an input/output problem, NOT the creation of a new technology.
It's when you build something entirely new that you need a new, efficient way to say it. Anybody remember APL? Fascinating language, particularly in that it used symbols rather than words to get its ideas across (those ideas primarily being focused on matrix manipulation, if I recall). Very hard for people to communicate about APL because you can't speak it. But the fact is that for what it did, it was a very good language. And I think that will always hold true. In order to make a computer work at its best, speak to it in a language it understands. When you are building a new device, very frequently you should go ahead and create a new language.
OOP (Score:3, Insightful)
Where OOP comes into it's own, in my experience, is with GUIs. The ability to say:
If ThisScreen.Checkbox.IsTicked
ThisScreen.OkButton.Disabled = True
Endif
is immensely useful. Similarly, the ability to change the definition of your master screen template and have all of the other screens take on its new properties is something that OOP is designed to allow you to do.
Similarly, anything where you tend to access things that act like objects in the first place suits it. Being able to say
CurrentDocument.Paragraph(1).Bold= True
or
Errval=MyDatabase.SQL("Select * from mytable where name='Andrew'")
Print MyDatabase.RecordCount
has made my life easier on numerous occasions. There are certainly non OO methods of doing the same thing, but I've never found them as flexible.
People who insist on making _everything_ an object, on the other hand, are idealists and should be carefully weeded from production environments and placed somewhere they'll be happier, like research.
Notation (Score:4, Insightful)
I'm not too sure though.
A programming language is a notation in which we express our ideas through a user interface to a computer, which then interprets it/transforms it according to certain rules. I expect that a lot will depend upon the nature of the interfaces we use to communicate to a computer.
For example, so far as I know, people never programmed in Lisp on punch cards; it doesn't fit that interface well. It was used on printing terminals (for you young'uns, these were essentially printers with keyboards instead of screens). Lisp fit this interface well; Fortran could be programmed either way.
If you look at language development as an evolutionary tree, Python's use of whitespace is an important innovation. However, it presupposes having sophisticated syntax-aware editors on glass terminals. It would not have been convenient on printing terminals. Perhaps in 2103 we will have "digital paper" interfaces that understand a combination of symbols and gestures. In that case whitespace sensitivity would be a great liability.
In my mind, the biggest question for the future of languages is not how powerful computers will be in one hundred years, but what will be the mechanics of our interaction with them? Most of our languages presume entry through a keyboard, but what if this is not true?
Re:Notation (Score:4, Insightful)
I think he expects it to remain a vital branch because recent languages have been more and more like lisp. If lisp doesn't directly beget new, highly popular languages, lisp's features will be (and have been) absorbed into whatever does become popular.
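A few concrete examples of that absorption, sketched in Python: first-class anonymous functions, closures, and runtime evaluation of code are all ideas that Lisp pioneered or popularized:

```python
# Features that originated in or were popularized by Lisp, now routine in Python.

# First-class, anonymous functions and higher-order functions:
square = lambda x: x * x
print(list(map(square, [1, 2, 3])))  # prints [1, 4, 9]

# Closures (lexical scope captured by an inner function):
def make_counter():
    count = 0
    def tick():
        nonlocal count
        count += 1
        return count
    return tick

counter = make_counter()
counter()
print(counter())  # prints 2

# Runtime evaluation of code supplied as data:
print(eval("(1 + 2) * 3"))  # prints 9
```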
What about ASM? (Score:3, Interesting)
Re:What about ASM? (Score:3, Interesting)
The few remaining areas for ASM programming - embedded, SSE-like optimisations - are being eroded gradually as processors and compilers get better.
My prediction. (Score:4, Interesting)
We're already to the point where it's absurd for a single person to understand the whole of a software project. Things are only going to get worse from here, and the only way out is to let the computers manage the complexity for us. As computers become faster, they'll be able to test out an ungodly number of permutations of a program to see which ones perform the fastest, or give the best results.
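As a toy illustration of "testing an ungodly number of permutations", here is a brute-force search over tiny arithmetic expressions for one that matches a target input/output table. This is purely a sketch under invented names; real program synthesis is vastly more sophisticated:

```python
# Toy sketch of "testing many candidate programs": brute-force search over
# tiny arithmetic expressions for one matching a target input/output table.
import itertools

examples = [(1, 3), (2, 5), (3, 7)]  # target behavior: f(x) = 2*x + 1

# Candidate "programs" are strings of the form "a * x + b".
candidates = [f"{a} * x + {b}" for a, b in itertools.product(range(4), range(4))]

for expr in candidates:
    if all(eval(expr, {"x": x}) == y for x, y in examples):
        print("found:", expr)  # prints "found: 2 * x + 1"
        break
```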
Just a speculation. I don't wholeheartedly believe what I just said, but I think it's a bit silly to simply assume that programming languages will be around forever.
Re:My prediction. (Score:3, Insightful)
I think this is sort of like saying 50 years ago that programming won't exist in the year 2000 because almost no one will use machine code anymore.
I fully expect that in 100 years, computers will be able to do much of what we currently consider programming faster than humans can. The act of programming, and thus programming lang
Not GA, specifically. Re:My prediction. (Score:3, Insightful)
So say you want a chess program. You feed in the rules of the game in a special language, and it
Abandon complex structures? Never! (Score:3, Interesting)
Strings aren't lists, they're structures.
Most string use in programs is a holdover from teletype-style programming, where all you could display was a short (ahem) string of characters. Today, a string is a label for a data item, a menu item on a menu, a data object in a domain.
XML -- as clunky as it can seem -- and XUL in particular, are ways of describing user interface to a system as a tree of objects.
So I don't want lists of characters; I want associative structures of objects which can be of many different types, used in the manner required by the program (it's a string, it's a number, it's a floor wax, it's a dessert topping).
I'm trying really hard to avoid saying "object-oriented," but objects will become more complex and more abstract. Computers of the future may not have to worry about pixels in an image, but rather know the object itself, where a bitmap is just an attribute of the thing.
Perhaps driver- and compiler-writers will still need stripped-down languages for efficient access to hardware, but as an app programmer and end user, I want the computer to handle statements like,
BUY FLOWERS FOR ANNIVERSARY
Currently, that would be something like
event("Anniversary").celebrate.prepare.purc
That's not nearly abstract enough.
Re:Well said! (Score:3, Insightful)
While I've still got quite a bit to learn, I can say you're missing the point if y
Re:Evolution != Revolution (Score:3, Insightful)
To call languages like lisp, smalltalk, objc, etc... ``barely usable'' is to make it clear you've barely used them.
While you may not believe that C is much different from fortran or pascal, it's not derived from either. C's father is a language call
Methinks he was wrong on one point (Score:4, Interesting)
Ummm... how about lichen? our mitochondria? What about the parasitic relationships that become mutually beneficial, such as the numerous bacteria in our gut and on our skin, and then eventually become necessary for life?
Merging actually does happen -- it just doesn't happen in the way he was thinking, that DNA become identical and cross-species fertility occurs. Rather, the two organisms live closer and closer, until they merge.
Come to think of it, although it isn't on the species level, the concept of merging species isn't too different from sexual reproduction.
Gaping Hole - Design Languages (Score:3, Interesting)
Some of us working in the telecommunications industry are already familiar with SDL (Specification and Description Language) [sdl-forum.org] as a tool for designing and auto-coding software. Yes, auto-coding. The SDL design software lets us design a system graphically, breaking it up into sub-components, specifying message flows between those components, and defining state machines for handling these messages.
Developing software in this manner usually requires very little coding, as the design tool will turn the design into code. Coding may be required for interfacing with the OS or other entities, though that's improving also.
I'm starting to think as such tools mature, they're going to be the next step up, like the way programming languages were the step up from coding in assembly. They are less efficient, just as BASIC or C is less efficient than pure assembler, but they allow greater focus on a solid and robust design and less requirement to focus on repetitive details.
Imagine being able to take out the step of having to go from a design to code - focus on the design, and you're done.
In the future... (Score:4, Funny)
Reductionism, you kidding? (Score:5, Interesting)
Anyone who has studied theoretical computer science and/or programming languages knows that such reductionism is a fallacy. "...the fewer, the better..."
It turns out that it's better to strike a balance: you make the formal mathematical system (which is what a programming language is, after all) as simple as possible, until you get to the point where making it simpler makes it more complicated. Or in other words, until making it simpler would cloud the mathematical structures that you are describing.
Here are some examples of reductionism gone too far: the Sheffer stroke, X = \z.zKSK, one-instruction assemblers, etc.
The only logical connective you need is the Sheffer stroke... but that's of no use to us, as it is easier to use more connectives such as conjunction, disjunction, implication, and negation.
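The standard constructions are worth seeing once, since they show exactly why sufficiency isn't the same as usability. A quick sketch in Python of building the familiar connectives out of nothing but NAND (the Sheffer stroke):

```python
# The standard constructions of the other connectives from the Sheffer
# stroke (NAND), illustrating why sufficiency isn't the same as usability.
def nand(p, q):
    return not (p and q)

def not_(p):          # NOT p  ==  p NAND p
    return nand(p, p)

def and_(p, q):       # p AND q  ==  NOT (p NAND q)
    return nand(nand(p, q), nand(p, q))

def or_(p, q):        # p OR q  ==  (NOT p) NAND (NOT q)
    return nand(nand(p, p), nand(q, q))

def implies(p, q):    # p -> q  ==  p NAND (NOT q)
    return nand(p, nand(q, q))

# Check every construction against the built-in connectives.
for p in (False, True):
    for q in (False, True):
        assert and_(p, q) == (p and q)
        assert or_(p, q) == (p or q)
        assert implies(p, q) == ((not p) or q)
```

Every definition works, but nobody would want to write `nand(nand(p, q), nand(p, q))` when they mean `p and q`, which is the poster's point.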
The only combinator you need is X, and you can compute anything... but making use of other combinators, or better yet the lambda calculus, is more useful.
The point is that we need more powerful tools that we can actually use, and there is no simple description of what makes one tool better than another. Applying reductionism can result in nothing special.
The true places to look for what the future brings with regards to programming languages are the following:
1. Mobile-Calculi: pi-calculus, etc...
2. Substructural Logics: linear-logic, etc...
3. Category Theory: It is big on structure, which is useful to computer scientists.
Re:Reductionism, you kidding? (Score:3, Insightful)
Thus, in the core of the language you don't need to build in the ability to do multiplication if you have built in the ability to do addition. Multiplication is just a special case.
However, you then add another layer to this simple core. In that layer you provide functionality for multiplication, subtraction etc.
The key here being that the layer will have been written in the lang
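The layering idea can be sketched in a few lines of Python; the `add`/`mul` names are just illustrative, with `add` standing in for the core's only arithmetic primitive:

```python
# Sketch of the layering idea: a "core" that only knows how to add,
# and a library layer that defines multiplication in terms of it.
def add(a, b):
    return a + b  # stand-in for the core's only arithmetic primitive

def mul(a, b):
    # Library-layer multiplication: repeated addition, written
    # entirely in terms of the core's add. (Assumes b >= 0.)
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

print(mul(6, 7))  # prints 42
```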
strings (Score:3, Interesting)
I think strings mainly exist because of usability considerations - from the developer's point of view. They provide a compact notation for "list of characters". Furthermore, most languages come with string routines/classes/operators that are a lot more powerful and flexible than their list equivalents.
Efficiency definitely is a consideration, but not the main one.
LISP in 100 years (Score:4, Interesting)
No need to mention they will agree with operators: (defop + a b (+ a b))
That was a joke, and you can do a similar thing even today. Seriously, I strongly agree with these three quotes:
So, if there is a commercial effort to push LISP to the market again as an underlying metalanguage then, if not in 100 years then in 2 or so, we may see all programming languages being "LISP-derived". Add to this that LISP syntax is semantically much better than XML, but still equally parser-unified. The only problem with LISP today is that it's not as "distributed" as Erlang. Fix that and you'll get the language of the nearest future.
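The claim that S-expressions and XML carry the same kind of tree can be illustrated with a small sketch; here a nested-tuple tree (an invented mini-format) is rendered both ways:

```python
# Illustration of the claim that S-expressions and XML encode the same
# kind of tree: a nested-tuple "document" rendered in both notations.
def to_sexpr(tree):
    tag, children = tree
    inner = " ".join(to_sexpr(c) if isinstance(c, tuple) else c for c in children)
    return f"({tag} {inner})" if inner else f"({tag})"

def to_xml(tree):
    tag, children = tree
    inner = "".join(to_xml(c) if isinstance(c, tuple) else c for c in children)
    return f"<{tag}>{inner}</{tag}>"

doc = ("html", [("body", [("p", ["hello"])])])
print(to_sexpr(doc))  # prints (html (body (p hello)))
print(to_xml(doc))    # prints <html><body><p>hello</p></body></html>
```

Same tree, two surface syntaxes; the S-expression form is the one a Lisp program can also evaluate and transform directly.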
---
I don't know the future. I barely remember the past. I see the present very blur. Time doesn't exist. The reason is irrational. The space is irrelevant. There is no me.
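The "parser-unified" claim is easy to demonstrate: a complete s-expression reader fits in a dozen lines, which is hard to say of XML. A hedged Python sketch (my code, not from the post):

```python
import re

def parse_sexpr(source):
    """Parse one s-expression into nested Python lists of atom strings."""
    # Tokens are parens or runs of non-space, non-paren characters.
    tokens = re.findall(r"\(|\)|[^\s()]+", source)

    def read(pos):
        tok = tokens[pos]
        if tok == "(":
            lst, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = read(pos)
                lst.append(node)
            return lst, pos + 1          # skip the closing paren
        return tok, pos + 1              # a bare atom

    tree, _ = read(0)
    return tree

# The poster's joke definition parses into a plain nested list:
# parse_sexpr("(defop + a b (+ a b))")
# -> ['defop', '+', 'a', 'b', ['+', 'a', 'b']]
```

No schemas, no entity handling, no attribute syntax: the whole grammar is "atoms and parentheses", which is why Lisp code and Lisp data share one representation.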
Re:LISP in 100 years (Score:3, Funny)
Monkees in 100 years (Score:3, Interesting)
You know what, on second thought I realize that 100 years is not enough for humankind to move away from being monkees. Thus, Java forever.
"Fundamental Operators" concept is flawed (Score:4, Insightful)
He makes the point to separate the details of a language into "fundamental operators" and "all the rest" then goes on to say that languages which last and have influence on future languages are the ones that minimize the number of fundamental operators. And then gives examples of things that are fundamental operators in many languages that he feels we don't need (e.g. strings, arrays, maybe numbers).
He doesn't have much to say about "all the rest". Presumably he would move strings into "all the rest", since we would still want our languages to have functions to manipulate strings (if you think I'm ever going to write a string tokenizer function again, you've got another thing coming).
But I think that the basic concept of splitting up a language into these two parts is fundamentally flawed. The line between the core of the language and all the accompanying libraries of code has broken down completely. It was already falling apart in C (does anyone program C without assuming that the standard I/O library is available?). But with Java and C# the distinction is almost completely gone. Programming languages have become complete environments where you can assume that tons of libraries are naturally going to be available. Separating out a language's "fundamental operators" and its "all the rest" is an artificial division that doesn't really work.
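For illustration, here is roughly the kind of tokenizer nobody wants to write twice, next to the one-line library call that replaces it (Python, my sketch):

```python
def tokenize(text):
    """A hand-rolled whitespace tokenizer -- the wheel nobody should reinvent."""
    tokens, current = [], []
    for ch in text:
        if ch.isspace():
            if current:
                tokens.append("".join(current))
                current = []
        else:
            current.append(ch)
    if current:
        tokens.append("".join(current))
    return tokens

# The library layer makes this a non-problem:
assert tokenize("  never   write this again  ") == "never write this again".split()
```

Which is the poster's point: whether `split` lives in the "core" or in "all the rest" is invisible to the programmer, so the division is artificial.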
Re:"Fundamental Operators" concept is flawed (Score:4, Insightful)
Off the top of my head: Palm applications, Win32 applications, operating system kernels, plugins for various programs, libraries with no business doing I/O..
Java bad? (Score:3, Interesting)
The points he makes about what the good languages are seem to show that Java is indeed a good language. Specifically it has an additional layer that allows for abstraction from the hardware/operating system for portability. It takes care of mundane details for the programmer (garbage collection, no need to worry about dealing with memory directly, etc).
Basically the article seemed to repeat itself a lot and show that Java does indeed have a lot of good qualities that he thinks will be in future languages. He also dismisses object-oriented programming as the cause of "spaghetti code" without giving any justification for that statement. Finally, he slips in a nice ad hominem attack by saying any "reasonably competent" programmer knows that object-oriented code sucks.
I think the author's own biases hurt his argument greatly.
He is already wrong about Java (Score:3, Insightful)
What a load of waffle.. (Score:3, Insightful)
Basically the problem isn't going to be with the languages - the problem will be with the concepts that created those language features.
Article Summary (Score:3, Insightful)
Nobody really has a clue what programming languages will be like in a hundred years, but if all the Perl and Python weenies would learn LISP then maybe we could get somewhere within the next decade.
Wrong question is being asked. (Score:4, Insightful)
Hierarchy will continue to exist. It's the only concept the human brain has for dealing with complexity; call it what you will, but you classify and associate things into hierarchies whether you're aware of it or not. I see no reason to believe that processors will have more advanced instructions than they currently do; they may be very different (like optimistic registers that know values before they have been calculated, or something) but they will be on the same order of complexity. The atomic operations will probably remain at the same order of complexity in biological, quantum, or Si/GaAs/whatever transistor-based processors. I don't see how sorting a list will be done without some sort of operations to look at its elements, compare them, and then change their ordering. Even with quantum computers you have to set up those operations to happen and produce results. That being said, there will always be an assembly language.
On top of that there will always be a C-like language, if not C itself, that serves as a portable assembly language. Then there will be "application" languages built at a higher level still. That won't change, for good reasons: it's just too complex to push the protection and error checking and everything down a level. I'll give examples if you want them. The easiest one that comes to mind is Java garbage collection, and how programmers assume it has mystical powers and are shocked when they fire up a profiler and see leftovers sitting around. It's a very complex piece of software, and you expect it to go down a level? The lower levels have their own problems keeping up with Dr. Moore.
I think the other biggest area is that reliability needs to go up by several orders of magnitude. Linux, BSD, Win2000 and WinXP are pretty reliable, but they aren't amazing. I've seen all of them crash at one point or another; I may have had a hand in making it happen, and so might the hardware, but either way it happened. To really start solving the issues and problems of humanity, we need to have more trust in our computers; that requires more reliable computers, and that requires different methods of engineering. The biggest thing going on in programming languages now to deal with that is functional programming. In 50 years I could see some kind of algorithm broker that has the 1700+ "core algorithms" (Knuth suspects there are about 1700 core algorithms in CS) implemented in an ML- or Haskell-like language, proven for correctness, in a proven runtime environment, used in conjunction with some kind of easy-to-use scripting glue. And critical low-level programming will be proven automatically at compile time; they are already making automatic provers for ML.
There will be THREE new languages (Score:3, Interesting)
1. the target machine architecture
2. the range of expression required by the programmer and/or workgroup
Java is "successful" but it really looks a lot like Algol and Pascal,
as does C++. The range of expression is greater in the newer languages
(object-orientation in Java and C++) but the forte is still that of
expressing algorithms in written form to be used on a stored-program
digital computer.
WILL WE STILL BE PROGRAMMING?
Take one example -- genetic programming. If you had a programming system
where the basic algorithm could learn, and all you had to do is set up
the learning environment, then you'd be teaching rather than programming.
In fact I believe THIS is what most "programmers" will be doing in 100 years. The challenge
will be defining the problem domain, the inputs, the desired outputs; the
algorithm and the architecture won't change, or won't change much, and the
vast majority of people won't fiddle with it.
But if HAL doesn't appear and we aren't all retrained as Dr. Chandra,
I believe we'll still be handling a lot of text on flat screens.
I don't think we'll be using sound, and I don't think we'll be using pictures.
(see below)
So predicting what languages will be like in 100 years is predicated
on knowing what computers and peripherals will be like. I think progress
will be slow, for the most part -- that is, I don't think it will be all
that much different from how it is now.
HOW WILL OUR RANGE OF EXPRESSION CHANGE?
If we relied primarily on voice input, languages would be a lot more
like spoken natural languages; there would be far less ambiguity than
most natural languages (so they'd be more like Russian than like English,
for example) but there wouldn't be nearly as much punctuation as there
is in Java and C++.
If we rely primarily on thought-transfer, they'll be something else
entirely. But I don't think this will come in 100 years.
How is a 24x80 IDE window different from punched cards and printers?
Much more efficient but remarkably similar, really. It would not surprise
me if we still use a lot of TEXT in the future. Speech is slow --
a program body stored as audio would be hard to scan through quickly.
Eyes are faster than ears so the program body will always be stored as
either text or pictures.
Pictures - well, pictorial languages assume too much of the work has
already been done underneath. "Programming isn't hard because of all
the typing; programming is hard because of all the thinking." (Who
wrote that in Byte a couple of decades ago?). I don't think we'll be
using pictures. When we get to the point that we can use hand-waving
to describe to the computer what we want it to do, again we'll be
teaching, not programming.
HOW WILL THE ARCHITECTURE CHANGE?
If the target architecture isn't Von Neumann, but something else,
then we may not be describing "algorithms" as we know them today.
Not being up to speed on quantum computing, I can't speak to that
example...but there are lots of other variations. Analog computers?
Decimal instead of Binary digital machines? Hardware-implemented
neural networks? Again, I don't see much progress away from binary
digital stored-program machine in 40 years, and I think (barring
a magical breakthrough) this may continue to be the cheapest, most
available hardware for the next 50-100 years.
SO WHAT DO I THINK?
I think IDE's and runtime libraries will evolve tremendously, but
I don't think basic language design will change much. As long as
we continue to use physical devices at all, I think the low-level
programming languages will be very similar to present day ones:
Based on lines of text with regular grammars and punctuation,
describing algorithms. I predict COBOL will be gone, FORTRAN will
still be a dinosaur, and Java and C/C++ will also be dinosaurs.
But compilers for all 4 will still be around.
Article Interesting but not Insightful (Score:4, Interesting)
Languages are built on top of many changes in technology: connectivity, speed, machine type, concurrency, adoption.
Plus, a language is just one codification of a problem solution. The solution can be pushed towards any one of several goals: security, speed, size, reuse, readability, etc.
Different languages have sprung up for just these two statements above. What metric is this guy using to measure a language's popularity? LOC still running (COBOL?), steadfastness of code (C?), or CTO business choices (a zoo)...? There are so many ways to look at this; picking a single point of view is misguided reductionism.
We will continue to have a multitude of tools available for getting work done. Cross-breeding of concepts for languages is great, and does happen, but unless you trace decisions of specific prime movers you really can't say where a language comes from.
Anyone can put together a new language, and even get it adopted from some audience. But what gets mindshare for usage are languages that satisfy goals that are popular for the moment. Exploring what those goals will be is impossible, in my mind. What will be popular?
Speech recognition? Image recognition? Concurrency? Interoperability? Auto-programming? Compactness? Mindshare?
These questions are based on our human senses, our environment, etc. Any sci-fi reader could tell you of, for example, a "race based on musical communication" that would base its programming on a completely different set of goals. And so on.
mug
my grade (Score:4, Insightful)
He also seems to be contradicting himself. "Semantically, strings are more or less a subset of lists in which the elements are characters. So why do you need a separate data type? You don't, really. Strings only exist for efficiency," he says at one point, then a few paragraphs later says "What's gross is a language that makes programmers do needless work. Wasting programmer time is the true inefficiency, not wasting machine time." The efficiency in implementing strings in programming languages is for the programmer, who doesn't have to use said "compiler advice" and carefully separate his strings from his other, non-string list instances and keep the two distinct in his programming model. Apparently it's "lame" to simplify text manipulation for programmers, but at the same time the efforts of programming language design should be towards making the programmer's life easier. Which is it? I know strings and string libraries have made my life a whole lot easier.
Nevertheless, I'm willing to accept the notion that eliminating strings and other complex, native datatypes and structures serves to make a programmer's use of time more efficient. But how does it do it? Graham doesn't say, he just waxes nostalgic about lisp and simpler times and languages.
I don't think the slashdot crowd needs it explained why data manipulation by the computer needn't be simplified; it already is, as machine code is binary in the common paradigm. What ought to be simplified is data manipulation by humans, and on this point Graham nominally agrees (I think). This has been the thrust of the evolution of programming from machine code to assembler to high level language. Simplifying high level languages into more and more basic statements -- getting closer to the "axioms" that Graham calls tokens and grammars -- simply reverses that evolution. It makes it easier and more elegant to compile programs, but it does absolutely zero to make the programmer's life more efficient, or easy. The whole reason high level languages were developed was precisely to get away from this enormously simple, yet completely tedious way of programming.
The overarching fallacy in this article is Graham's reliance on what is known about computation theory now to determine what programming languages would (and should) look like then. And while it's interesting to prognosticate on what the future would be like 100 years from now based on what we have today, it's not a reliable guide. Like Metropolis, A Trip to the Moon, and other sci-fi stories from the distant past, they're entertaining and no doubt prescient to the people of the time, but when we reach the date in question, the predictions are largely off the mark. It's somewhat laughable to think that despite our flying cars and soaring skyscrapers, we use steam engines to power our cities and make robots with eyes and mouths. Likewise, I don't think an honest, intelligent prediction or forecast of (high level) programming languages 100 years hence can occur without a firm basis, or even idea, of what assembly code would look like then. This, in turn, relies on a firm idea of what computer architecture will look like. Who knows if five (or fifty) years from now a coprocessor is designed that makes string functionality as easy to implement as arithmetic. Such an advance would completely invalidate Graham's point about strings and advanced datatypes, and in fact possibly stand modern lexical analysis on its head. Or if an entirely new model of computation comes to the fore. Even Graham himself admits that foresight is foreshortened: "Languages today assume infrastructure that didn't exist in 1960," but he doesn't let that stop him from making pronouncements on the future of computing.
Graham seems to be spending too much time optimizing his lisp code and not enough on his writing. This piece of code could have been optimized had he used a simile-reductor and strict idea explanations. But it's definitely a thesis worth considering, if for no other reason than mild entertainment. C-
None at all is the logical choice (Score:3, Interesting)
Languages as you understand them will be as dead in 50 years as the steam-powered loom. We will have tools that are not typed as letters and symbols, in much the same way that the DVD has "replaced" live theater as the only way to "reproduce" entertainment.
25-year language? (Score:3, Insightful)
One poster claimed quantum computing will make current languages useless. This is false. Any reasonably flexible language has room for new data types and operators. You would have to be careful not to prematurely branch based on a quantum value, as this may force you to observe its value, destroying the superposition. However, I don't doubt Fortran will be one of the first languages used in quantum computing once they get past the assembly-language stage.
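A toy illustration of the "don't branch prematurely" point (this is my deliberately crude classical model, not real quantum programming): once you observe the value, the superposition is gone and every later read agrees with the first.

```python
import random

class ToyQubit:
    """A crude classical stand-in for a qubit in equal superposition.

    Hypothetical model for illustration only: real qubits carry complex
    amplitudes; here "superposition" is just an unobserved coin flip.
    """
    def __init__(self):
        self.collapsed = None            # None means still in superposition

    def measure(self):
        # The first observation collapses the state;
        # every later observation just repeats it.
        if self.collapsed is None:
            self.collapsed = random.choice([0, 1])
        return self.collapsed

q = ToyQubit()
first = q.measure()                      # superposition destroyed here
assert all(q.measure() == first for _ in range(10))
```

A language extension for quantum hardware would have to make this distinction visible: reading a quantum value is a side-effecting operation, not a harmless inspection, so an `if` on it is irreversible.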
What will languages be like in 25 years (and maybe 50 years)?
Well, there will be a Fortran and a Lisp and a C (maybe a C++). Lisp has always had automatic garbage collection. The Fortran and the C will have optional garbage collectors. Fortran, Lisp, and C are all decent attempts at languages that are pretty easy to grasp and have huge legacy backings.
Hopefully all of the main languages will be less machine dependent. Fixnums, ints, longs, floats, and doubles will be the same size across platforms, wasting a few cycles if this doesn't fit the underlying hardware.
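The fixed-size point can be shown with Python's standard `struct` module, which already provides platform-independent widths (my example):

```python
import struct

# "<i" is a little-endian 32-bit signed int: exactly 4 bytes on every
# platform, regardless of the machine's native word size.
packed = struct.pack("<i", 123456)
assert len(packed) == 4
assert struct.unpack("<i", packed)[0] == 123456

# "<d" is a 64-bit IEEE 754 double everywhere, likewise.
assert len(struct.pack("<d", 3.14)) == 8
```

This is the "waste a few cycles for portability" trade the poster endorses: the serialized form is fixed even when the in-memory form is not.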
In terms of new novel languages, I see languages simultaneously going three ways. I foresee languages that resemble formal proofs and/or formal specifications, for use where reliability is critical. I foresee languages specialized for scientific/engineering disciplines. (Maybe Fortran, Excel, and Matlab cover all of the bases well enough, but I hope there is enough room left for improvement to drive innovation and adoption. Having a CS background, I didn't appreciate LabView's "G" language until I had an opportunity to see the ugly, ugly code scientists and engineers tend to write in Fortran.) (I can also imagine efforts to use syntaxes that better express parallelism and other features for optimizers/compilers, so we finally have a widely used scientific/engineering language that is faster than Fortran 77. I can also see more languages like Bell Labs' Aleph, designed for parallel/cluster environments.) The third direction I see languages going is scripting/prototyping-like languages that will look more like natural language. (It's too bad there isn't a cross-platform open-source AppleScript-like language yet.)
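A small taste of the "languages that resemble formal specifications" idea, sketched here with plain Python assertions standing in for real pre- and postconditions (my hypothetical example, not a proof language):

```python
def isqrt(n):
    """Integer square root, with its specification written inline as contracts."""
    assert isinstance(n, int) and n >= 0              # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:                     # integer-only, no float rounding
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)             # postcondition
    return r
```

In a proof-oriented language the postcondition would be checked by the compiler once, for all inputs, instead of at runtime per call; the sketch only shows what the specification itself looks like next to the code.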
What do I think languages should have? Languages should have garbage collection and bounds checking enabled by default (of course, optionally turned off if you really must have the performance).
Languages should have very clean and consistent APIs. Having few orthogonal types helps keep a language clean. Languages should merge character arrays and strings (arguably the Algol languages have had this for a while). If a language wants to have immutable strings, it should provide a way to declare an immutable variant of each fundamental type. (This is actually very useful in writing less buggy code.) Languages should strictly define the size of fundamental numeric types. (I really like Python, but it seems a huge mistake that an integer is "at least 32-bits". Allowing variation in the size of fundamental numeric types adds cross-platform bugs. If I wrote a language, the types would look like "int32", "int64", "float32", "float64", "complex32" and "complex64". We got rid of 9- and 12-bit bytes; we should further get rid of these headaches.) Having worked with lots of engineers and scientists, I would love to see complex numbers as basic numeric types that all of the normal operators work on. Wrapping two doubles in an object adds a function-call overhead for each numeric operation. Performance with complex numbers (and numbers in general) is one big reason a lot of the code I see is written in Fortran.
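Python, as it happens, already grants the complex-number wish: `complex` is a fundamental numeric type and all the normal operators work on it directly (my illustration):

```python
# Complex literals use the 'j' suffix; no wrapper class or explicit
# function calls are needed at the language level.
z1 = 3 + 4j
z2 = 1 - 2j

assert z1 + z2 == 4 + 2j
# (a+bi)(c+di) = (ac - bd) + (ad + bc)i, checked against the operator:
assert z1 * z2 == (3*1 - 4*(-2)) + (3*(-2) + 4*1) * 1j
assert abs(z1) == 5.0          # |3+4i| = sqrt(9 + 16)
```

Whether the interpreter implements this without per-operation overhead is a separate question, which is the poster's performance point about Fortran.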
Re:I know! (Score:4, Funny)
Re:I know! (Score:3, Funny)
that's VisualJavaC++.Net#
That's what I was going to post, but I didn't want to give Microsoft any ideas!
Re:You know nothing! (exclusive) (Score:3, Funny)
VisualJavaC++.Net# v6.0 Premium XP Gold Serial-Key: 78YCA2-997FZC-RAJN-AE-0564
Quantum Packages? (Score:2, Funny)
You'd get errors like
error in com.quantum.package:453 - classProbablyNotFound exception
Re:Quantum Language (Score:5, Insightful)
If after generations and generations of computers, we are still teaching people to talk in computer terms and not yet teaching computers how to talk in people terms, we'll have gone the wrong direction.
It doesn't matter if quantum technology is used or not, for the same reason as it doesn't matter whether a brain is a parallel or single threaded machine, whether it's made of carbon-based or silicon-based technology, etc. What matters is that it can talk to you, can understand you, and can improve life.
If you want to know what computer languages should and hopefully will look like in the future, you have only to watch Star Trek. I'm not kidding. The desire to pack computer use into a short TV program has led the authors of that show and shows like it to pare out all but the absolute essentials of describing what you want the computer to do. That is what computer programming should be like, since that's what people programming is like. People don't put up with excess verbiage, and neither should computers.
Totally wrong (Score:3, Interesting)
If you ever listen to the types of commands they give to their computers in star trek, they are subjective and ambiguous. Any computer capable of understanding such commands would have no need for the crew (as it would quickly realize).
As an alternate prediction, assuming that AI does not compute, is that we will always need people who know how to use computers, and we will always need people who know how to think.
Future languages may free you from peccadilloes and give you greater code reuse and portability.
Re:Totally wrong (Score:4, Funny)
Picard> Computer, calculate the time needed for repairs.
Computer> What?
Picard> Calculate the time needed to repair the impulse drive.
Computer> The impulse drive cannot be repaired.
Picard> I mean to patch it up sufficiently such that the ship can move.
Computer> The ship can already move, we are being accelerated by nearby gravity well.
Picard> (In frustration perhaps) Calculate the time needed to recalibrate the impulse generation coils, considering that ion capacitor was functioning within normal parameters. (or some other jargon)
Computer> (Finally having an answerable question) Recalibration will require 14 minutes. (This does not mean that they will be fully "repaired", just that they will work well enough to perhaps be useful.)
And as for Asimov's silly laws: they are a contradiction in terms. Any routine capable of enforcing such rules upon the AI would have to be AI itself. Therefore such rules are a paradox in that they cannot be implemented. Any working AI would be fully subject to its own volition.
All other checks and balances are MEANINGLESS. No matter how well built a fortress is, with zero sentient creatures guarding it, it is defenseless.
No matter how strong a weapon, unwielded, it is powerless.
Re:Quantum Language (Score:3, Interesting)
But it's what we've got. Human language is, alas, imprecise. But we have more than 50 years of experience with that, and we know nothing better is on the horizon. I think you'll be lucky if, between now and a hundred years from now, you can teach 10% of the world's population the meaning of the word algorithm, much less
Re:Quantum Language (Score:3, Insightful)
I guess this is the fundamental point on which we just have to agree to disagree. I think that analysis and logic are critical operations, but I hope to find that the computer languages of the future will cease to be pedantic about the specific mode of expression, perhaps building in a sense of redundancy of expression so that no matter what language you express the idea in, it ends up with
VB Problems (Score:2, Interesting)
Re:Lisp... (Score:4, Insightful)
Static, strongly-typed languages make the assumption that everything that needs to be known about the world is knowable at compile time. Such programs need to be recompiled (at least) and rewritten (often) because the world changes and either the source program itself or its compiled form needs to accommodate that change.
Lisp, because it delays many decisions until runtime, and because its runtime tagging accommodates datatypes that are not among the set declared at compile time, naturally accommodates changes in the environment around it, and naturally survives well during transitions between old and new ways of doing things.
Static languages often breed static ways of thinking, and often need new static specifications at regular intervals to accommodate the mismatch with how the world really is. Dynamic languages breed dynamic thinking, which (I claim) is more robust over time.
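A small Python illustration of the runtime-tagging argument (mine, not the poster's): code written before a type existed keeps working when the new type appears, because dispatch happens at runtime.

```python
def describe(x):
    """Written with no knowledge of what types will exist later.
    The runtime type tag is inspected when the call happens, not at compile time."""
    return f"{type(x).__name__}: {x}"

# Years later, a brand-new datatype appears...
class Sensor:
    def __str__(self):
        return "reading=42"

# ...and the old code accommodates it without being recompiled or edited.
assert describe(3) == "int: 3"
assert describe(Sensor()) == "Sensor: reading=42"
```

A statically-typed language can get the same effect through interfaces or generics, but only if the original author anticipated the extension point, which is exactly the poster's "knowable at compile time" complaint.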
Re:Lisp... (Score:3, Informative)
Well, for starters I'd look here [franz.com] (make sure to look at the links in the navigation bar on the left) and here [digitool.com].
Re:Types (Score:3, Insightful)
My theory is that this is just bad implementation on the part of the static languages (this is obvious with C++, less obvious with ML, subtle with Haskell); but I don't know this, it's only a suspicion.
I don't
Re:Somebody mod this back up (Score:3, Informative)
You do realize that Lisp has arrays [lispworks.com] just as it has lists, structures, objects etc?
No he did not. He wrote an essay about why Java has no appeal to (a certain kind of) hackers. He wrote about the Java culture, not the Java[TM] Programming Language.
Re:Somebody mod this back up (Score:4, Interesting)
Re:Java and the Future (Score:3, Insightful)
Wow, do you know anything about the history of programming languages?
Java's OO is based on Smalltalk, quite intentionally, not the other way around.
Smalltalk ran on cross-platform virtual machines long before Sun decided to foist its failure of a set-top-box language (i.e., Java) on the internet in the mid '90s.
Squeak is a deliberate attempt to recreate the original Smalltalk.
Check your facts, please (Score:3, Informative)
Please check your facts [uni-erlangen.de]. Development on Lisp started long before the 1960 paper.