Kent M. Pitman Answers On Lisp And Much More
1) (just one thing (I) want to (know))?
by An Anonymous Coward
((
What (
(is)
with (all)
) of (the)
()s?
)
Hmmm?
)
Kent M. Pitman: This question actually got scored down to -1 and marked as a troll question, but I fished it out of the barrel and restored it because everyone asks and I might as well confront the issue head-on.
Ironically it's non-Lisp languages that allow and encourage you to put ()'s in any place you want, as if there were no meaning to the introduction of gratuitous paren groups.
3+(2*5)+7 means the same thing in an algebraic language as does 3+2*5+7. In Lisp, we write:
(+ 3 (* 2 5) 7)
This shows you the structure and means you never have to learn obscure precedence rules that make expressions like -3! confusing in algebraic languages, where you must learn whether it means (-3)! or -(3!). In Lisp, the parens would show you immediately that (factorial -3) or (- (factorial 3)) was intended.
The thing I personally like about (+ (* 2 y) x) rather than 2*y+x is that it simplifies my editing. I'm a touch-typist and I use the emacs commands to go forward and backward over expressions, to swap expressions, and to delete expressions very heavily. And I don't have to reach for the mouse to manipulate large, complex expressions because they are paren-bounded. If I put the cursor at the head of 2*y+x and say "go forward an expression", ought this go forward over 2, 2*y, or 2*y+x? Having different editor commands to move across a sum, a product, etc. would be unwieldy. Yet without that, I don't see how the editor would know. In Lisp, there can't be any ambiguity because every sub-expression has its own start character, so a single notion of "the expression in front of the cursor" or "the expression after the cursor" suffices.
This, by the way, also answers the question of why we don't write foo(x) and instead write (foo x). In Lisp notation, foo is an expression. In the expression (foo x), it's a subexpression, so it's enclosed within the parentheses. Were it outside, a text editor would not be sure if foo(x) were one expression (a function call) or two expressions (the symbol foo followed by the list (x)). That would make going forward over 'one expression' ambiguous when at the start of foo(x). Should the cursor end up after the foo or after the (x)? In other words, the natural purpose of parentheses is to enclose things, so that's what Lisp uses them for. Avoiding ambiguity is critical to the writing of correct "keyboard macros" in Emacs, where I might interactively write a program to do a lot of code transformations quickly. In an algebraic language, such keyboard macros can be much harder to write robustly.
2) It's not just me is it?
by demo9orgon
After trying to "self-learn" lisp in the 80's I get this physical reaction to the word "lambda"...a cold sweat combined with the involuntary retraction of my testicles to a protected location in my abdomen (damn unpleasant shit)...I usually avoid that second one by mentally going through the mechanics of "hello world" in C, or any half-a-dozen other programming languages.
Lisp is one of those meta-languages you either learn or avoid. I write practical stuff all the time, daily in fact, and I've never had something that required the arcane stuff in LISP.
KMP: Actually, "hello world" in Lisp looks like this:
(defun hello-world ()
  (write-line "Hello, World!"))
I don't know about you, but I find that pretty soothing.
And as to LAMBDA, one only needs to use it when one finds it useful. For example, after a while, one sometimes gets tired of writing a separate function where that function will only be used once, as in:
(defun sort-by-name (list)
  (sort list #'name<))

(defun name< (name1 name2)
  (or (string< (last-name name1) (last-name name2))
      (and (string= (last-name name1) (last-name name2))
           (string< (first-name name1) (first-name name2)))))
so Lisp allows one to instead say:
(defun sort-by-name (list)
  (sort list #'(lambda (name1 name2)
                 (or (string< (last-name name1) (last-name name2))
                     (and (string= (last-name name1) (last-name name2))
                          (string< (first-name name1) (first-name name2)))))))
Whether one actually does this is purely a personal preference. Some people like having separate named functions, some don't. Sometimes the separately named function might have a nonsensical name, though, and it's nicer not to have to invent a stupid name for a one-shot use.
Now, as to why it's called LAMBDA and not FUNCTION, that's just a piece of history. You get used to it. Toward that end, I'll offer a story that will perhaps help you put it in perspective:
Early in my not-yet career as a computer scientist, which is to say, while I was in high school, I lived in the Panama Canal Zone. Computers were not at all common there at the time. In fact, the place being entirely run by the US Government, there was some weird edict that said no one was allowed to own one so that they would all be centralized in the Comptroller's Office and not wasted in individual offices around the Zone. Our school had to bend the rules in order to get us a computer to study. So one thing I did while trying to learn about computers was to go downtown (out of the Canal Zone into Panama City, in the Republic of Panama) and visit a company there who did computer work. Of course, people there spoke Spanish, but fortunately I did, too. They showed me some of their code, and I was immediately struck by the fact that all the language keywords were in English.
"Doesn't that bother you?" I asked. But the person I was talking to was quite a thoughtful person and he immediately responded this way: "Do you know how to read music?" "A little," I said. "Have you seen the notations on music like forte, sotto voce, and so on?" I nodded. "Does it bother you that they are in Italian?" "No," I had to admit. His point was to make me see that it could be viewed as part of the charm and history of the notation. He was, perhaps, unusually forgiving. But this was in the late 1970s, when everyone who had access to computers was far too excited about just plain having them to care about subtle issues of whose culture got too much say in the design of a world-wide phenomenon.
So when today I look at the very few mysterious-looking terms like LAMBDA, CAR, and CDR that still linger untouched in modern Lisp's design, I think of them as I do those musical notations, conceptual links to a little piece of history that I'm just as happy not to see crushed by an overeager rush to regularize and homogenize the world--something the computer culture has done altogether too much of.
3) Interactively programmable applications
by divbyzero (divbyzero@hotmail.com)
One of the primary reasons why Scheme and Lisp interest me is that they are well suited for making applications interactively programmable at runtime (Scheme especially, due to its small size). This is far more flexible and useful than applications which are only extensible through heavyweight, precompiled plugins. Since the Slashdot readership tends to be made up of people who are comfortable with programmatic interfaces (unlike the general computer-using public), why do we not see more such applications?
KMP: I think it's just an issue of education, formal and otherwise. Without being explicitly guided, some people will try out all kinds of ways to do things, or invent them where they're not present. But many others will simply do what they have been taught to do without exploring the alternatives.
In the past, everything was about speed. Every instruction was precious. The focus was entirely on "micro" efficiency. People would examine the cost of being able to redefine something (which sometimes involves as little as following a pointer indirection), and if there was a cycle lost, the game was over. Today, hardware cache and prefetch architectures can often hide such costs anyway, but even if they couldn't, processors run so fast that one has time to worry not only about micro efficiency but also macro efficiency--that is, "running smart", not just "running fast", as a way of assuring total efficiency.
A lot of people identify Lisp as a language that is "just good for Artificial Intelligence (AI)". Certainly Lisp is good for AI. But saying it is just good for AI misses the point. Lisp doesn't do AI. Lisp is a programming language. AI researchers program AI, and often their language of choice has been and continues to be Lisp. But the important thing is that AI researchers have been banging on the door of Lisp implementors for years, demanding the introduction and tuning of the features and constructs they need in order to get their work done. Lisp hasn't become a mere AI toolbox as a result of that. Rather, it has become a robust tool for addressing the world's most complex and vexing problems. The Lisp community has a long experience with supporting "intelligent programming", and with doing so efficiently.
Lisp's biggest problem in the past is probably that it hit its commercial peak too early, in the mid 1980s, before most computational problems the world was confronting were big enough to need the power Lisp had to offer. Those were the days of MacWrite and MacPaint and Lotus 1-2-3, and it just didn't make any difference whether one used Lisp or C for those. But for better or worse, the world has grown up around us, and the important problems of the day are a lot more complex. I think Lisp has a lot more to offer to the world of today than it ever did in the past.
4) The standard process
by VP
As a participant in the standardization process for Lisp, what are your thoughts on standards for programming languages? What would you like to see different in this process? And speaking of standards, what do you think about the RAND licensing issue and the W3C?
KMP: I think standards have served their time to provide a stable base for people to build on, but for the modern environment, they move way too slowly to keep pace with the speed of change in business. It took a long time to put the Common Lisp standard together. We began in 1986, finished work in 1994, and got the actual document to press just before the end of 1995. Getting community consensus on something that big really does take that long, and I think it was an exercise worth doing to create the stable base that we created, but for future evolution of the language, I think there needs to be another way with far less overhead.
I see standards as having two components: The first is to simply cast a name into concrete so that reference to that name will always have a clear meaning. The definition of ANSI Common Lisp, at least for 1994, is now permanently registered. Anyone who wants to can now conform to that definition and others will know exactly what they mean by that. The second component is to assert an informal consensus in the community that there is a single right way of doing things. This latter component may be useful for the foundation (to define the initial market space), but I'm not sure it's appropriate for the library level of the language.
For the base language, if 60% of the community wanted to do things one way and 40% another way, the 60% got to roll over the 40%, and 100% of the community was expected to do things in the way that won. But at the library level, if 60% want one library and 40% want another, I'd rather 100% of the community get what they want by having some people just do it one way and the rest of the people do it the other way. The Lisp community has not traditionally done things that way; they've sought consensus. The Scheme community has been even more conservative about this than the Common Lisp community, and as a result has even fewer standardized facilities than the Common Lisp community.
To break the design deadlock brought on by the core language committee's consensus process, the Scheme community has moved to a more loose-knit approach: the Scheme Requests for Implementation (SRFI) process. The Common Lisp community hasn't got anything quite so organized yet, but I suspect it will eventually evolve something similar.
As to the question of the W3C, I'm not a huge fan at the moment. At a prior employer, we had the opportunity to join, but the contract we'd have had to sign made it clear that votes among members were advisory only, and W3C itself could decide to override what people voted on. This, to me, is not a consensus body. Furthermore, although I think standards bodies like ANSI move in near glacial time, I don't think you can fix things by just shortening the times. True national and global consensus just takes time, and shortening timelines doesn't just make things move faster, it also disenfranchises people. While I use the existing HTML, CSS, XML, XSL, and other W3C guidelines, I don't feel they were created in a manner that I respect as proper consensus process. I think the process was insular and rushed.
Neither am I happy with the notion of processes involving Reasonable and Non-Discriminatory (RAND) fees being part of a standard; I think consensus standards should only involve royalty-free (RF) technologies. I think adherence to standards should not induce a baseline cost beyond the cost of creating the code so that the cost of compliance with standards can closely approach zero. If there is a profit to be made on the implementation of a standard, it should go to the implementor, not to a patent holder. Then again, while I'm a strong proponent of software copyright, I'm not at all a fan of software patents. Rather than seeing independent creation as infringement, I think independent creation should be contributory proof that an idea was more obvious than perhaps the patent office thought. I don't mind copyright because there are ways that one can demonstrate that one did not merely copy another's work, and independent creation is a defense.
5) Advice to Aspirants
by An Anonymous Coward
Kent, I am one of the lucky ones who programs professionally in Common Lisp. I certainly appreciate your hard work and the hard work of everyone else who helped to bring us the ANSI standard - which serves to reify much of the esoteric knowledge the Lisp community has developed in the many years since the language was born.
While I do not need to be sold on Lisp, I know many people who do not fully appreciate the power of the language. To a large degree, this is due to misconceptions about the language. Specifically, there seem to be a number of what I would call 'cultural misconceptions'. Because many people have never worked in a tightly interactive development environment with incremental compilation, language-level introspection, and real code/data equivalence (not to mention the differences between CLOS and what the rest of the world seems to have come to believe is the God-given definition of 'object-oriented' programming) - they don't really 'get' what makes Lisp so special and so powerful. More to the point, because the logistics of developing and deploying applications in Lisp is different than what the typical c/c++/perl/java developer knows, the hurdle to even investigating or considering Lisp as a real possibility seems unnecessarily high.
Could you talk a bit about how those who have a feeling that Lisp might help them with their hard problems could go about bootstrapping their way into finding out? How would you suggest getting started? What is a reasonable set of tools for experimentation, and where should a beginner start with the language? (The standard is a big document!) Also, could you give an example of the type of problem space and style of application delivery that demonstrates that Lisp is more practical than many seem to believe?
KMP: Well, one thing to note is that there's very little overhead to just downloading an implementation and diving in. Not only do the major commercial vendors like Xanalys and Franz offer high quality, no-cost trial versions of their proprietary software, but there are quite a number of free (non-proprietary) versions of Lisp as well. Information about these, as well as much other useful information about Lisp, can be found at the Association of Lisp Users (ALU) web site. I've also recently purchased common-lisp.info, which I plan to maintain as a repository for information about Common Lisp; the site doesn't have a large base of information yet, but it does have a list of the problem spaces in which you might consider using Lisp.
The ANSI Common Lisp standard, effectively available in webbed form as the Common Lisp HyperSpec, is indeed a big document (about 16MB, with roughly 108,000 hyperlinks, available for download). I think it's fairly readable as standards go. But you're right that it takes some work to get through, and it wasn't really intended as a tutorial.
The ALU web site will also have pointers to books and online tutorials about Lisp. Books by Paul Graham and Peter Norvig on the subject are very highly regarded. I think there is always room for more, and I'm working on several, at least one of which I hope to complete in the not too distant future; feedback from you and others is useful to me in understanding what areas most urgently require treatment.
One resource that some people might find useful is an article I wrote called Accelerating Hindsight: Lisp as a Vehicle for Rapid Prototyping. This article is intended primarily for a Lisp programmer audience, to help them articulate some of the ideas you've asked about to others. It was not intended to be read by the audience you'd like to convince mainly because it appeals periodically to Lispy notation that might not be familiar to them, but it may still be of interest to the adventurous non-Lisp reader.
As your project becomes more sophisticated, and evolves from a personal toy to a real commercial product, it also doesn't hurt to ask an expert for help. My company offers consulting services that include helping companies manage the transition into Lisp. One of my major clients, The Software Smith, approached me on just such a basis, and the result has been very exciting both for me (getting to help them improve their system) and, I think, for them (getting to see more of how Lisp is supposed to be used). I don't want to turn this interview into a huge advertisement, but people can contact me for more information. If I'm either not competent to help you or am too busy to help you, there's a very good chance I can refer you to someone else who can.
6) Language feature trickle-down
by WillWare
I was a big Scheme/Lisp fan five or six years ago, but now I see most of my favorite Lisp-like language features available in Python, which is getting a huge amount of high-quality development mindshare these days. Some of the Lisp-ish features in Python that spring right to mind are functions as objects, closures, garbage collection, dynamic-yet-strong typing, and convenient rapid-app development.
One needn't look far to find arguments that there is still something unique to Lisp that differentiates it even from very recent languages which have had ample opportunity to borrow from Lisp. But one rarely finds a really clear articulation of that uniqueness. Do you concur with the view that Lisp is still unique, and if so, do you think that Lisp's putative advantage really is ineffable?
If there is an advantage but it's ineffable and therefore opaque to managers with purchasing power, that would explain why Franz, Harlequin, et al have had such a rocky road. Does the Lisp/Scheme community regard this as a worrisome issue? (Some folks on c.l.lisp clearly don't think so, but I don't know if they are just a noisy minority.)
KMP: I guess I think Lisp is unique, but whether it is or not doesn't affect its usefulness as a tool. I'll enumerate some things I like about Lisp, but Slashdot readers shouldn't assume that I'm asserting for each of these features that Lisp has a lock on these. Various other languages surely have some of these. But I am often heard to say: languages are ecologies. Language features are not a priori good or bad. Rather, language features are good or bad in context, based on how well they interact with other language features. Some of what makes Lisp what it is has to do with the features it offers, but some of what makes Lisp what it is has to do with how the features work together to make a coherent whole. Lifting some of these features out of context might sometimes work, but in other cases, it might not. To get a real feel for Lisp, or any language, I think you have to really use it.
Also, in my 1994 article Lambda, the Ultimate Political Party, I advance the hypothesis that languages are defined as much by their community as by their semantics. That is, languages are forever in flux, and the semantics you read about in a language spec is a point in a multi-dimensional space telling you the current location, but it does not tell you the velocity vector in that space. For that, you must look to the community. Even if two languages happened to occupy precisely the same point in design space, that is, if they had the same semantics, would they continue to over time? I think not.
For what it's worth, here are just some of the things I personally like about ANSI Common Lisp:
- Lisp is dynamic. The world is ever changing and it's useful to allow programs to change dynamically with it. I can load new or changed functions, classes, and method definitions into a running image that I'm debugging, or even into a deployed production application. When I do, the code that was running will immediately start using the new definitions. Classes can be redefined even if the new class has different slots, and, if I care to, I can control how the update is done from old to new slot arrangements for already-created instances. This kind of thing supports programs that must be continually running yet must be responsive to changes or even just bug fixes. (A minimal sketch of class redefinition appears after this list.)
- Lisp is introspective. Not only can functions, packages, classes, and methods be dynamically added, redefined, or removed, but programs can also inquire about whether aspects of the programming environment (functions, packages, classes, and so on) are defined, can manipulate those objects as data, can save them away, can transform or encapsulate them, etc. Also, the Lisp compiler is a standard part of the language and can be invoked even at runtime by applications that need to augment themselves. New programs can be created on the fly, then compiled, loaded, and executed in the same running image they were created in, without ever exiting (and even without doing file I/O). This facilitates automatic programming and the development of layered languages. (A runtime-compilation sketch appears after this list.)
- Lisp's syntax is malleable. There's nothing worse than being stuck in a syntax that you don't like in a language you're going to use for a long time. Lisp allows programmers to reconfigure the syntax rules for parsing characters into data and programs, as well as allowing macro technology that transforms one parsed program expression into another. And it allows control of how data is displayed during program execution and debugging. Moreover, this can generally be done in such a way that one programmer's customizations don't adversely impact another's. This makes interactions with Lisp more pleasant and debugging sessions more productive.
- Lisp doesn't force users to use variable type declarations in order to just get a program to run. The initial focus in Lisp is on getting programs working. You can add type declarations when you're done if you want to, in order to enable additional compiler optimizations. This facilitates rapid prototyping by first getting an application running quickly with low overhead, and then allowing an application to be tuned as a second pass operation.
- Lisp has a powerful class system, and a flexible meta-class system. The class system allows powerful slot and method definition, method combination, and a great many other detailed features. The meta-class system allows users to treat the object system as data that can be programmed, creating new kinds of classes.
- Lisp gives the user powerful tools for both signaling and handling errors. This means that when an error occurs, there are often a variety of ways to continue programs other than simply aborting or dumping core. Moreover, object-oriented error handling allows programs to represent errant situations, evaluate the options for how to proceed, and select an appropriate option under program control. (A small condition-handling sketch appears after this list.)
- Lisp uses automatic memory management. This means that when a programmer is done with an object, they just let go of it and the garbage collector reliably frees its storage. This means Lisp programs do not suffer from the memory leaks that commonly plague programmers in many other languages.
by kfogel
For myself and a number of friends, Lisp/Scheme programming has for too long been a kind of mystical Eden, fading in our memories, from which we have been mostly banished in our professional lives. But we can still recall how it felt to work in a language able to shape itself to any pattern our minds might ask: coding was more interesting and more expressive, and the rate of increasing returns over time was tremendous, because fine-grained -- almost continuous -- abstraction was in the nature of the language. Life was just more fun, frankly.
Alas! In our jobs and even in our personal projects, we are often forced to use C, C++, Java, Perl, or Python -- not because we prefer to write in those languages, but for two much less satisfying reasons: first, everyone else knows those languages, so we'll get more developers with them. And second, you can't count on users and testers having the right environment to run programs written in Lisp/Scheme, so right away you take a portability hit if you choose to develop in them.
Do you think there is a chance of Lisp/Scheme becoming "mainstream" again? That is, when someone contemplates starting a project, it would be as realistic for them to consider Lisp or Scheme as, say, Perl, without worrying about losing developers or initial testers? What will it take?
KMP: First, let me say that I really appreciate the poetic description you offer in the first paragraph above. I very much think that captures how I and others think about the experience of using Lisp.
And as to the future of Lisp, I think the situation for Lisp is looking pretty upbeat these days. Enough so that my own infant business is building its tools in Lisp, both for sale and for our own internal use on products we produce.
There are a lot of implementations, both commercially maintained and "free", with a wide range of delivery options, from conventional executables to "remote" solutions: Some implementations support CORBA and/or COM interfaces, for example. Also, most implement some kind of sockets interface, and there are several Lisp-based web servers available that build on this. Lisp programs can dynamically load DLLs, or can be delivered as DLLs themselves. They can do "foreign function call" to functions in other languages. They can also communicate with databases, and so with other programs via databases.
As the world moves increasingly to high-bandwidth global connectivity, I think the issue of the delivery environment will become less important. People have been waiting for an e-Service based society to take off, and it hasn't quite done that yet, but I think it's coming. I can't see how it won't. The overall savings in quality assurance and support of not having to re-deploy an application in a hostile customer-premise environment will be a lot, just as your question implies. One will just bring an application up on the right kind of hardware, connect it to the net, and then forget about where the program is actually being used. That may be an oversimplification today, but I wouldn't waste my money betting against it for tomorrow.
8) Questions I've Come Across Learning Lisp
by Jon Howard
I was recently (April) hired-on as webmaster at Franz [franz.com], a commercial lisp company (we make Allegro Common Lisp [franz.com]) which has introduced me to lisp in a very loud way. Since joining these guys (and gals), I've been thoroughly indoctrinated - with my full consent - because of my belief that as computing hardware progresses programming in more abstract languages will allow for more creative and effective use of the platform. Sure, coding assembler on a new super-duper petaflop chip will still be possible and less wasteful, but who would want to code a million lines of asm to save a few (or even a few thousand) operations out of a few billion, or trillion when it will only net a difference of nanoseconds in the end? I'm less interested in making super-fast programs than I am in making artistic and super-functional programs.
I'm not expressing the views of Franz; every member of the company has their own beliefs on what makes for great programming - which is one of the major reasons I find this place so fulfilling: everyone has complex reasons for their design considerations, and everyone communicates them (something I've grown to appreciate from working in too many places where this was definitely not the case), and consequently I've been exposed to quite a few different techniques of Lisp coding since my introduction half a year ago. I'm constantly amazed that so many different styles of programming can be expressed in the same language; it's capable of accommodating any logical thought process that can be converted to code - and I doubt many of you often use recursion in a logical way on a daily basis, but even that can be done efficiently in lisp.
I'm still very new to lisp, and I was never a serious programmer in the past, but I've always been accustomed to asking questions, and here are a few that I'd like some input on:
- If you learned any other programming language, did you initially find the formalities of its structure to be a significant stumbling block to understanding the language as a whole? Was the same true of learning lisp?
- How much time do you spend debugging non-lisp code? How much on lisp?
- What language took you the most time to learn - was it your first?
- What feature do you consider to be the most important for an abstract language to support efficiently - and which features have you found to be most poorly implemented in lisp distributions?
I'd love to hear about what people think sucks about lisp and needs improvement - or can't be improved. So far I haven't found anything that I could complain about; the most difficult thing for me has been managing all the documentation on a half-century-old language in the process of learning it. I've begun to love working in lisp, but I suppose being surrounded by a group so full of passion for it has helped contribute to my bias - if I'm wrong, help snap me out of it with a good argument against using lisp. ;)
KMP: I knew FORTRAN and Basic before I learned Lisp. And I've dealt with numerous languages of all kinds since learning Lisp. With most, the syntax itself is generally not a burden. Some languages have more pleasant syntaxes than others, but the human brain has an amazing ability to cope. Of all the many languages and syntaxes I've seen, about the only thing I've never been able to cope with is the "*" used to notate indirection in C. I understand thoroughly the notion of pointer indirection, and the difference between "pointer to array" and "array of pointers", but I find it forever hard to read and write that particular awful notation for some reason. Give me Teco or Perl any day.
Mostly, though, I think the issue of how hard a syntax makes it to learn a language is overblown. Humans have brains that are adapted to processing myriad special cases and can mostly cope with obscure syntaxes. The real issue is how hard it is for humans to pass on their knowledge to programs. People are good at judgment, and programs are good at repetition. Over time, though, judgment tasks become repetitive and it's time for programs to take them over. I like to write macros to package up things I do a lot, and the key to that is having a reliable mapping between program syntax and program structure. The last thing one wants is a macro language based on character syntax, since such syntax is too unpredictable. Lisp offers macros based on program structure, and that greatly reduces the number of programmer errors one makes in macro writing.
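As an illustration of that point, a Lisp macro receives its arguments as already-parsed structure (lists and symbols) rather than text, so the transformation never has to worry about character-level syntax; unless* below is an invented name, a re-creation of the standard unless for demonstration only:

(defmacro unless* (test &body body)
  "Hypothetical re-creation of UNLESS, working on program structure."
  `(if ,test nil (progn ,@body)))

(macroexpand-1 '(unless* (zerop n) (error "n is zero")))
;; => (IF (ZEROP N) NIL (PROGN (ERROR "n is zero")))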
As to debugging, I try to use non-Lisp code as little as possible because of how hard it is to debug. Most other languages don't have good visual representations of their data, so when I get in the debugger, the manner in which I am presented with errant data is usually low-level and hard to read. A great deal of my valuable time is spent painstakingly piecing structure back together. But in Lisp, data objects have familiar visual representations, and I find it's usually easier to see what has gone wrong.
What language took me the most time to learn? Probably Teco. There was a lot of trivia to learn there. What language took the least time? Probably FORTRAN, BASIC, Lisp, HyperTalk, and MOO. Fortran just because it was small. The others because they are highly interactive, which is a huge boon to learning.
Actually, I learned PostScript very fast, too. There are some excellent cookbooks on this. But I never learned to debug PostScript. When my programs erred, I mostly just wrote them anew and hoped they'd work then because debugging was too painful.
What do I consider it most important for an abstract language to support efficiently? My time. Time is the only true, non-renewable commodity. I eschew languages like C because they often waste enormous amounts of my time trying to develop and debug programs, and justify it on the basis of micro-differences in speed that have just never ended up mattering to me. I regard C as appropriate for use as an assembly language, but it doesn't provide enough high-level services for me. When I'm old and grey and look back on my life, I want to have done a lot of interesting things, not just have done a few interesting things but "boy were they fast".
I think it's important to pick a language not on the basis of how fast its implementations are today, but on the basis of how much they do what you want. Lisp has an undeserved reputation for being slow, which I think results from deciding to make it do things that there are not always known optimizations for at the outset. Like garbage collection. But as Lisp is used, people complain about the things that are slow, and fixes get found. So Lisp moves ahead. If Lisp had started instead only with the things it knew how to implement efficiently, it would be holding things back. I want my ideas to lead my technology and my tools, not to have my technology and tools leading my ideas.
9) Basis set for programming languages?
by PseudonymousCoward
As a Scheme and Common Lisp programmer, I got excited when I heard that the Java Virtual Machine would have automatic memory allocation and garbage collection. I thought it would be possible to build Lispish languages to run on the JVM. The rate at which Kawa has been developed, to implement a near-Scheme on the JVM has been frustrating to me. I attribute this at least in part to the absence in the JVM of a construct equivalent to Scheme's continuations. Do you think it is feasible to establish a "basis set" of programming language concepts on which all programming languages could be built, so that the distinctions between C, Scheme, etc would be "merely" syntactic? If yes, please enumerate your candidate set.
KMP: Well, continuations are just functions. What's really lacking to make this easier is good tail call support so that continuations can be called correctly without pushing stack.
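A minimal sketch of that "continuations are just functions" idea, written in Common Lisp (which, like the JVM, does not itself guarantee tail-call elimination, though many implementations provide it): the continuation k and the failure continuation are ordinary function arguments, and invoking either one is a tail call.

;; Continuation-passing style: instead of returning, pass the result on.
(defun div/k (numerator denominator k fail)
  (if (zerop denominator)
      (funcall fail)                          ; tail call
      (funcall k (/ numerator denominator)))) ; tail call

(div/k 10 2
       (lambda (quotient) (list :ok quotient))
       (lambda () :division-by-zero))
;; => (:OK 5)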
I don't really have personal experience with using the JVM directly, but my experience with the MOO programming language led me to believe that there might be a problem with integrating tail calling and security, since sometimes security is implemented by asking "who called me?" and tail calls can mean that the apparent caller is not the real caller. So I asked my spies at Sun about this.
I'm told that the original security model for Java worked the way I expected (by examining the call chain), and that concern over consequent security matters contributed to the absence of tail calling support in early releases. But apparently it was conceded a long time ago that such support should be added some day, and that day simply hasn't come yet. So perhaps there is hope.
Even so, I'm not so sure no matter how hard you try that you can just paper over the many differences between languages and say that the only remaining issues are ones of syntax. I do think you can probably get to a point where all languages can compile to this machine, but that may not always mean that programs in one language are as efficient as those in another, or that data structures in one language are as naturally represented as those in another. For example, both Lisp and Scheme assume that small integers (that would fit in a machine number) are still integers; they don't have the int/Integer disjointness that Java has. A Lisp-to-JVM compiler could presumably hide this distinction, but it would be wrong to say that the only difference between Java and Lisp was syntax--there are really some material philosophical disagreements between the two languages.
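For example (sticking to Common Lisp; the Scheme story is analogous), there is no visible boundary between machine-sized and arbitrary-precision integers:

(typep 5 'integer)              ; => T
(typep (expt 2 100) 'integer)   ; => T  (a bignum is still just an integer)
(+ most-positive-fixnum 1)      ; quietly becomes a bignum; no int/Integer wrapper in sight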
10) Scheme as an XML Translation Language
by Evangelion
I've become fairly interested lately in using Scheme (probably mzscheme) and the SXML package as a way to do arbitrary XML translations in my free time (if I had any).
From the looks of it, the ability to create a reflexive mapping between an arbitrary XML document and an interpretable programming language is too powerful to be ignored.
Do you think that in the future one of the primary roles of Scheme/Lisp is going to be in manipulation of XML documents, or is this going to be relegated as an academic curiosity while the world struggles through parsing XML in Java?
KMP: Are those my only two choices? The second one sounds awfully bleak. I'd better choose the former.
I don't know whether you'll see XML as a formal part of either Lisp or Scheme any time in the near future, but a lot of that is because the standards bodies administering these are not extraordinarily active at this time. That doesn't mean the languages are dead, just stable. Ongoing work is mostly happening at the level of libraries, and such libraries can generally be written by anyone using existing primitives, without modifications to the core language.
Lisp manipulation of XML and HTML is something people have been working on for a long time. For example, the Document Style Semantics and Specification Language (DSSSL) was a purely functional, side-effect free variant of Scheme. Even XSL, the apparent replacement to DSSSL, offers the same kind of functionality. It just uses a more CSS-like page model and XML syntax. But, conceptually, it's Scheme inside.
In my recent professional life, I have personally written several XML parsers, all in Lisp, for various employers and most recently for myself and my fledgling company. My company's implementation is not available on the market yet, but when it is, I'm quite sure the chief competition will not be over mere "availability". Already there are a variety of libraries related to XML, XSL, and SAX floating around. And I'm quite sure there will be more to come. Competition will be over things like efficiency, robustness, representation, and optional additional features.
11) Lisp vs. the world
by hjs
What do you see as the unique strengths and weaknesses of Lisp?
What strengths does it specifically have over other functional languages (such as ML), over structured languages (such as C, Algol, etc), over object oriented languages (such as C++, smalltalk, simula, etc), and over scripting languages (such as TCL, perl, etc)? Can these other languages or classes of languages be enhanced to include these strengths? If so, how, and if not, why?
What about weaknesses? What do you see as the weaknesses of Lisp, both in general and in comparison to the above classes of languages? Can these weaknesses be eliminated? If so, how and if not, why?
I mean strengths and weaknesses not only in the formal sense of the language itself, but also in terms of its usability in today's world. For example, difficulty in delivering binaries or lack of accessibility of system libraries from within common implementations of a language would be considered weaknesses.
KMP: There are so many things I like about Lisp, but most of them come under the heading of "doing things in the right order."
For example, type declarations in many languages are required but in Lisp they're optional. I prefer to first get my program working, and only then to tune it to be more efficient by adding type declarations. What's the point of doing a lot of make-work declarations if you're not even sure you're going to keep the result? I do a lot of exploratory programming just to answer "what if" questions. I also write lots of little throwaway programs just to compute a simple result. I don't need such programs to run in 5 microseconds instead of 10.
I also view the process of programming as a series of "times" at which decisions can be made: "coding time," "parsing time" (Lisp calls this "read time"), "macro expansion time," "compilation time," "load time," and "execution time." Lisp gives me a great deal more control for each piece of code as to when it runs, so that it can run at the appropriate time when the data it depends on is known. Other languages, especially statically typed ones, often make me specify information too soon, before it is really known, which usually means "making up" answers instead of really knowing the answers. Sometimes that makes programs run faster. Sometimes it just makes them run wrong.
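As a rough sketch of what choosing the "time" looks like in Common Lisp, eval-when lets a piece of code name the times at which it should run; the *field-names* example below is invented for illustration:

;; Make this definition available at compile time too, because the macro
;; below needs it while it is being expanded.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defvar *field-names* '(id name email)))

;; Macro-expansion time: generate one accessor per field.
(defmacro define-record-accessors ()
  `(progn
     ,@(loop for field in *field-names*
             for i from 0
             collect `(defun ,(intern (format nil "RECORD-~A" field)) (record)
                        (nth ,i record)))))

(define-record-accessors)                      ; expands using *field-names*
(record-email '(7 "Kent" "kmp@example.com"))   ; => "kmp@example.com"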
And I like Lisp's willingness to represent itself. People often explain this as its ability to represent itself, but I think that's wrong. Most languages are capable of representing themselves, but they simply don't have the will to. Lisp programs are represented by lists and programmers are aware of that. It wouldn't matter if it had been arrays. It does matter that it's program structure that is represented, and not character syntax, but beyond that the choice is pretty arbitrary. It's not important that the representation be the Right® choice. It's just important that it be a common, agreed-upon choice so that there can be a rich community of program-manipulating programs that "do trade" in this common representation.
I write a lot of macros because there are a lot of interesting things one can do with macros in Lisp. In other languages, macro-writing is a process of manipulating strings containing input syntax. That feels very unreliable and I've never liked that. Lisp's willingness to represent its code in known data structures makes macro writing feel a lot more reliable. And the presence of macros in Lisp generally means that the boring parts of coding get removed, because repetitive patterns usually get captured by a macro and hidden away, keeping the developer's attention on the "interesting parts", and making the activity of programming itself both more fun and more efficient.
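As an invented illustration of capturing a repetitive pattern (with-timing is not a standard macro, just a sketch):

(defmacro with-timing ((label) &body body)
  "Run BODY, report how long it took, and return BODY's values."
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (multiple-value-prog1 (progn ,@body)
         (format *trace-output* "~&~A took ~D ms~%" ,label
                 (round (* 1000 (- (get-internal-real-time) ,start))
                        internal-time-units-per-second))))))

(with-timing ("sorting")
  (sort (list 3 1 2) #'<))
;; prints something like: sorting took 0 ms
;; => (1 2 3)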
Could other languages borrow some of Lisp's strengths? Sure. And they do. Java, Dylan, and I suspect even C++ have all borrowed ideas from Lisp. But that's ok. We'll make more. And anyway, it's not a zero sum game. Everyone benefits when there's this kind of cross-pollination, whether it's Lisp influencing other languages or vice versa.
Weaknesses of the language? Well, that's harder to say. I think the basic design is quite strong. Sometimes you see an implementation that has put more energy into some parts of the language than others, but usually that has created a market opportunity for another, so overall we have our bases covered.
For example, you might find some implementations that have big "hello world" footprint sizes compared to "hello world" in other languages. Some in the Lisp community don't think this matters much, because disk and RAM are getting ever cheaper. "Real" applications (i.e., not "hello world," but something meaty) of 5-10 megabytes are pretty commonplace these days. Years ago, Lisp used to be seen as large, but due to such criticism, Lisp has held its size constant in the last decade while other languages and systems have bloated rapidly. So nowadays, Lisp is comparatively quite small. And even still, if you don't like the size you get from one vendor, it seems there's always another trying to squeeze into the niche of addressing your need. Corman Common Lisp (an up-and-coming commercial implementation) and CLISP (a GPL-style "free" implementation) have given special attention to this issue. So there's a vendor for everyone on the size issue. And, though I deal more often in Common Lisp in my day-to-day work these days, I would be remiss if I didn't mention that image size is also a key concern of the Scheme language community, so that's yet another way the size issue is addressed for those who see it as critical.
Some might have heard that Lisp, being dynamic, doesn't make use of static type information. This isn't quite right. In fact, the language doesn't require static type analysis, it merely permits it. This gives a lot of leeway to each implementation to address the specific needs of its own customer base. The CMU Common Lisp implementation has, for example, addressed the issue of type analysis in great detail and offered a clear demonstration that there are many exciting things that implementations of Common Lisp can do with type declarations if they choose to.
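A sketch of the kind of declaration such an implementation can exploit; the dot function and the array types are invented for illustration, and the declarations are permission to optimize, not a requirement to run:

(defun dot (a b)
  "Dot product of two double-float vectors."
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3) (safety 1)))
  (loop for x across a
        for y across b
        sum (* x y) of-type double-float))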
Why don't all implementations optimize all of these aspects--footprint size, static type analysis, etc.? The Common Lisp language is admittedly conceptually large, and correct, efficient compilation requires considerable time and cleverness to implement. "Why not make the language smaller so it requires less work to implement?" is a query you hear a lot from the outside, and even from members of the Scheme community. The answer from the Common Lisp community amounts to this: Programs are written all the time, but implementations are written much more rarely. What the implementation does not do is left for the user. The more hard work the language does, the less hard work programs do. In effect, the thesis of Common Lisp is that bigger languages make for smaller sentences in the language. (To see that there is at least some intuitive basis for this, think about how long a novel like Gone With the Wind is in English, then try to imagine whether the same novel re-expressed in Esperanto would be longer or shorter.)
If a language offers only what a programmer could implement overnight, it gives its programmers not much of a leg up on their final application. Many members of the Scheme community boast that they have written a Scheme implementation, while many Common Lisp programmers have not. Common Lisp is surely harder to implement, but the Common Lisp community does not see as its primary purpose to put out legions of implementors, each with their own easily-created implementation. The Common Lisp community has chosen to be about commercial applications, and its designers have provided a "meaty chunk" of useful power for programmers to use, with the promise that if programmers write their programs to that standard, not only will those programs work well today, but as implementations get better, those same programs will work even better tomorrow.
[to be continued...]
Good to see Lisp is still around. (Score:4, Insightful)
I have never recovered from learning Smalltalk as a postgraduate, and then being forced to take a job programming in C++ because corporations are so far behind the times.
I look forward to the day when programmers in large corporations are able to use high level languages such as Lisp, Scheme, and Smalltalk instead of the current crop of low level languages like C++, Java, and Perl.
developers need to see the light, not suits (Score:2, Insightful)
currently high-level languages like Lisp are good for early prototyping and development stages, but lack the library hooks and other trappings needed for real, industrial strength application development. what i'd like to see Lisp and Smalltalk and Eiffel develop is a good compiler and a good interface to the system and GUI code.
(no, Java does not cut it.)
when i can write an app in Lisp and still use GTK, Athena widgets, etc, then we might see corps moving from C/C++ to languages where memory allocation, etc become fond memories and real high-level thinking may take place.
Re:developers need to see the light, not suits (Score:2)
have you looked at python?
Re:developers need to see the light, not suits (Score:2)
Like what? Common Lisp is as capable of writing 'industrial strength' applications as any other language. And its dynamic properties mean that you can debug them on-line, without taking systems down for massive recompiles.
what i'd like to see Lisp and Smalltalk and Eiffel develop is a good compiler and a good interface to the system and GUI code.
(no, Java does not cut it.)
Commercial Common Lisp compilers are good enough. What do they lack that their C/C++, Java, and other counterparts have? (Specific technical features please.)
As far as GUI coding goes, ALL languages but Java seem in the same leaky boat when it comes to developing cross-platform applications. (Java is in its own boat, one with different problems than the rest.)
Re:developers need to see the light, not suits (Score:3, Informative)
This just isn't true. All major Common Lisp environments, Commercial and Free, offer a painless way to call C functions. No recompilation of any kind needed, you just declare the C function, load the library, and call it.
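For example, a minimal sketch using the portable, third-party CFFI library (vendor-specific interfaces look much the same; the library name below assumes a Linux system):

;; Declare a C function and call it, without leaving the running Lisp.
(cffi:load-foreign-library "libm.so.6")

(cffi:defcfun ("cos" c-cos) :double
  (x :double))

(c-cos 0.0d0)   ; => 1.0d0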
As for GUIs, there's something called CLIM, the Common Lisp Interface Manager. It's a standardized Common Lisp GUI. There are at least 4 major implementations. Using it, you can write a GUI application that is source-code compatible across anything Common Lisp/CLIM runs on, which includes pretty much any PC operating system as well as a plethora of Unix workstations.
In fact, Lisp is ideal for "industrial strength" application development. It's portable. It's (nearly always) compiled to native code. It offers superb exception handling. It has a package system. You can update running, deployed programs without stopping them. There are many other good reasons.
As for GTK specifically, there are no fewer than two Common Lisp GTK bindings out there.
Re:developers need to see the light, not suits (Score:4, Informative)
When you grow tired of sticking your head in the sand, have a look at ITA's [itasoftware.com] website. They make the software that powers Orbitz's [orbitz.com] web site. If this is not an impressive testament to Common Lisp's ability to do industrial strength parallel programming, I'm not sure what would satisfy you.
You are correct that Common Lisp lacks a standard definition of parallel programming constructs, and thus parallel programs must use vendor-specific extensions. (There are some efforts to abstract over the differences.) In this regard, Common Lisp seems in the same boat as most other languages.
Re:Good to see Lisp is still around. (Score:3, Insightful)
Productivity is also a huge judgement call. One judges productivity by familiarity; I don't think there's a programmer at my office that knows Lisp (too old), Smalltalk (most are self-taught or consultants), or any of these other obscure languages. Businesses don't choose languages for the 'fun factor'.
Re:Good to see Lisp is still around. (Score:2)
C++, Java, and Perl are low-level like my dog is a cat. C is low-level (or with better libraries, mid-level). Anything that supports objects or runs in a virtual machine cannot by definition be low-level because there's too much abstraction of system functionality between the source and object code. Just because C++ still lets you twiddle with bits, doesn't mean you HAVE to.
What! For one, C++ is almost 100% compatible with C89 (just some type stuff, C99 also has some other stuff like restrict and _Imaginary and stuff). C++ is horribly low level when compared to LISP. I tried to implement an "abstract" plugin model for a media player (shut up! Everyone thinks the world needs a 10,001st console media player) in C++. I failed. Why? The details. The damn little details. To achieve the abstractness that C++ programs are supposed to have, I had to bend over backwards. Now, after learning Scheme (and GOOPS) and some clisp, I realized how easily the same thing could be done with GOOPS or CLOS. Sure, I would have to write some wrappers (oh wait, SWIG can do this for me) to access available libraries, but the time that would be saved in the long run would offset that. Now, if only I could find something that wouldn't piss people off when I released it (not even I need another media player!). Now, Java and perl might be a bit higher level, but in C++ you almost always have to deal with low level things. Now, there are some libraries (e.g., the STL or Qt) that can save you from some of the low level stuff, but I doubt that you will go through an entire program without dealing with something like, say, memory management (which you can ignore in most LISP dialects).
More Lisp (Score:4, Insightful)
Re:More Lisp (Score:4, Insightful)
There has been a significant amount of pressure (from both non-CS administration and some CS-because-it-pays-well,-not-because-I-want-to-learn students) to change to Java or some other "real world" language, but thankfully, the instructors haven't given in.
The differentiation between good programmers and bad isn't in the number of languages they "know." Programming is a methodology, and Lisp/Scheme is a great tool to teach it.
Re:More Lisp (Score:2)
My friend who graduated from Harvard said that he wished he had learned Java instead of Lisp so he could have learned something he would use in the world.
I know that you learn and take more than details of a language, but his point was not unfounded. Depends on what you go to school to learn and do. Maybe you should choose your school based on your goals in education. His happened to be one that didn't see a lot of value in learning lisp.
Oh well.
Re:More Lisp (Score:2)
Learning Common Lisp as an undergraduate at CMU in the late 1980s changed my life for the better. I guess to each his own.
Then again, isn't one of the goals of a CS degree to expose you to a variety of computational paradigms? Lisp is quite unlike C, C++, Objective C, and the like. Certainly worth a semester or two of exposure.
Perhaps it is your friend that has the problem if all he could handle in college was one freakin' language.
Re:More Lisp (Score:3, Insightful)
I'd like to see, for instance, a word processor written in Lisp. Seriously -- something compatible with Microsoft Word. An action game with sophisticated graphics, including transparency and particle effects? How about an MPEG decoder, or an MP3 player? What about 3D shooters? I'm certainly not saying they aren't possible, and for all I know they might even exist, but I know I can find all of those for Java inside of two minutes.
Lisp doesn't have a toolkit equivalent to Java 2D + Swing, and I seriously doubt it has anything equivalent to Java 3D, the Java Advanced Imaging API, or the Java Media Framework.
I'm not insulting Lisp the language, but the fact is that its libraries are woefully inadequate compared to Java's. Lisp programmers have had twenty-odd years to come up with decent libraries -- so where are they?
Re:More Lisp (Score:2)
If you seek in-principle proofs, all of this and more existed for Lisp Machines of the 1980s. They still exist, in fact. Nichimen sells a graphics/game engine for Nintendo written in Lisp. There's a commercial game for PC available right now that is partially written in Lisp. I don't have the time, but I'm sure a quick Google search would reveal positive answers to your questions if you care to look.
You do bring up a good point. While this stuff is possible in Lisp and has been done, it's not done now. Bad PR associated with AI Winter has left Lisp reeling in this (i.e., graphics and GUIs) regard.
(I really gotta go!) Have a look at CLIM for a cross-platform graphics library for Common Lisp -- Windows, Unices, the Mac. But it's probably fair to say that Lisp is for other parts of your application at the moment, just as a game is part assembly, part C, and part other languages. Write the parts that can be elegantly written in Lisp in Lisp.
Re:More Lisp (Score:2, Interesting)
Lisp actually led the industry in high-quality 3D graphics long before Java was born. Around the time of the Gulf War, CNN and other major TV networks were doing their graphics rendering in Lisp. Some bad business management led to the demise of the company behind that tool, but there was nothing wrong with the technical tools themselves. (sigh)
Re: I'd like to see, for instance, a word processor written in Lisp.
MIT Lisp Machines were the first to run an Emacs-style editor based on Lisp, I think. Then MIT's Multics, where it was conventional to use PL/1, deviated and preferred Lisp as the implementation language for its Emacs. Then Stallman rewrote the Teco-based Emacs in Lisp. It's different from MS Word out of preference and tradition, not because something in Lisp keeps it from doing what Word does. But it is a word processor.
Also, Lisps can call out to other programs by native function call, by RPC, by CORBA, by COM, by sockets, and probably (though I haven't recently checked because I don't do Java) by RMI. So we're not lacking for ways to integrate others' tools if we need them.
Re: I'm not insulting Lisp the language, but the fact is that its libraries are woefully inadequate compared to Java's.
The Lisp community was in a boom at the time AI became unpopular. Many seized upon the opportunity to blame Lisp for AI not meeting everyone's expectations (as if, had the work been done in C++, AI would be blossoming today). It's easy for something to become a scapegoat, just as the term "dot com" has become a scapegoat for many companies doing bad financial planning recently. Since that time, Lisp has labored under a bad reputation that I think was unfairly attached as part of a convenient blame game. Yet even though many have tried to kill Lisp, it won't go away, mostly because it still has ideas to contribute and commercial problems it can solve that other languages can't. It is gradually building back up to the image it had before, I think, by embracing rather than fighting other technologies. I dare say that if we had as much money to throw around in our community as Sun had to throw around getting Java advertised and populated with libraries, our image would be as good. We're just working on a smaller budget, and so the process of rebuilding trust can't happen overnight by sheer force of dollars.
Re:More Lisp (Score:3, Insightful)
Any Java "word processor" is going to have a hard time beating MS Word at its own game. Emacs, however, thrives by playing another game, the "text editor" game. And it wins.
Lisp has toolkits like Common Lisp Interface Manager (CLIM) [uni-hamburg.de] which are much more than simple layers to create windows with widgets. I can't even come up with a concise description of how different the Lisp idea is. (Some of this is my lack of practical experience with CLIM. I don't program GUI applications.)
Java is fine for relatively simple, solved problems (like GUI word processors). Lisp is for horrendously complex, hard-to-solve problems like managing logistics for airlines [franz.com]. Or bioinformatics [franz.com]
Or managing information in complex investigations and audits. [xanalys.com]
You'll notice that all of these are created to solve real-world problems, where it might not be obvious how a computer could help you. Word processing and 3D-shooters are all very securely "inside the box" of what computers are known to do, and have been done many times. Lisp is for taking on the world, forging into new territory, and kicking the world's ass. If you want to stay safe and do "yet another" of the same old thing, maybe Java is all you need.
Re:More Lisp (Score:2)
Second, I am aware that Lisp has some graphical toolkits available. However, nothing that I have seen comes remotely close to Swing's power and expressiveness.
Third, anyone who calls a word processor a "relatively simple, solved problem" has clearly never written one. There's a reason why there are only a few competitive ones, and that's because they're unbelievably nasty programs to write -- far more so than they look on first blush.
Further, you'll have to do better than that to convince me that Lisp is somehow the uberlanguage. The fact that Lisp has been used to, for instance, handle airline logistics does not in any way imply that it was the best choice for that problem. Note that I'm not saying it isn't -- I'm just saying that the fact it was used in, say, a bioinformatics program doesn't say crap about whether e.g. Java couldn't have solved the problem equally easily. Assembly language was used to solve some tough problems which hadn't been solved before, too -- and that doesn't mean a damned thing as to whether or not it was a good choice.
I hardly consider airplane logistics "forging into new territory and kicking the world's ass", by the way. Yes, it's a terribly involved problem, but it's also a boring one, and I seriously doubt that it would be significantly tougher in Java.
"I kicked the world's ass today, Mom!"
"Oh, what did you do?"
"I wrote an *AIRPLANE LOGISTICS PROGRAM*!!! WHOOOOO!! I ROCK!!!"
See? Maybe I'm just getting cynical in my old age, but that just doesn't sound like bragging rights to me.
(Please keep in mind that I'm not insulting Lisp. I haven't used Lisp enough to have a real opinion either way. But the fact does remain that Java's libraries are much more powerful, and it is much more popular. I'm reasonably certain that those two facts are related.)
Re:More Lisp (Score:2)
Right around the time when you show me the Mona Lisa done in fingerpaint.
Considering your preferred example was a 3D-shooter game, I'm relatively skeptical about your opinion on world-class application development.
If you think anything in Java touches what CLIM has achieved, you obviously haven't even followed the link I gave and read what is there. You also ignored all the positive qualities of Emacs (ever try to program Word?) that I cited. I fully conceded at the outset that Emacs and MS Word are competing in different arenas. Lisp has a history of serious publishing tools (also not word processing). These solved, for example, the serious problems involved in generating large sets of documentation using hypertext [win.tue.nl] back in 1985.
Again, I concede that MS probably didn't use Lisp to develop Word. The fact that there isn't a Lisp-based competitor probably is due to the fact that smart people don't try to compete in the same niche as MS, and Lisp is used by smart people.
Re:More Lisp (Score:2)
"Point me to a documented case where Java handles these kinds of intense logistics problems, and I'll begin to give you some credibility."
So a language is only interesting if it's used for "intense logistics problems"? I guess word processing, hospital record keeping, or web page design aren't good enough. Well, I have absolutely no idea about Java logistics programs, because I am not involved in logistics. Doesn't mean it isn't used, doesn't mean it is.
"Considering your preferred example was a 3D-shooter game, I'm relatively skeptical about your opinion on world-class application development."
I listed a 3D shooter as one example out of many, and nowhere did I say it was a "world-class" application. Please stop implying I've said things that I didn't say.
"Ever try to program Word?"
Yes, actually. Not the most fun I've ever had, but Visual Basic for Applications isn't all that bad for a macro language.
"smart people don't try to compete in the same niche as MS, and Lisp is used by smart people"
So Linux, which is a direct competitor to Windows NT, is written by a bunch of idiots?
One's choice of language has nothing to do with one's intelligence. Lisp does indeed seem to have more than its fair share of elitists, however.
Re:More Lisp (Score:2)
kilohyperlinks (Score:2)
Re:kilohyperlinks (Score:2)
You should check out Kent's HyperSpec. It's an amazing (and amazingly useful) document. Just project the obvious thoughtfulness of his replies over a huge document and imagine the result.
C takes too long to write? (Score:2, Interesting)
IMHO, he's just biased toward Lisp, and I'm just biased toward C. But outright saying that C programmers are a bunch of speed-freak-holier-than-thou losers was going a little far.
Re:C takes too long to write? (Score:2)
I predict the new language I am designing, ^H, will be very successful.
Re:C takes too long to write? (Score:4, Insightful)
Yes, C supports the fundamental unit of program abstraction known as the function. But that does not make up for the various other drawbacks.
C takes longer to write because the programmer must deal with nearly every detail of the computation, apart from a few machine-specific details such as allocating values to machine registers, explicitly managing the passing of parameters, or manually scaling pointer displacements based on types.
Firstly, there is the memory management. Every significantly large C program which uses dynamic memory, unless it is correctly written by some miraculous fluke, will suffer from failures due to premature deallocation of memory and from memory leaks. It's not possible to create a significant data abstraction in C without encumbering it with a memory management burden. A typical abstract data type (ADT) module will have create and destroy functions which call malloc and free. The user of these functions inherits all the responsibility that comes with malloc and free: avoiding premature deallocation and worrying about leaks. There are many things you can't do effectively without a garbage collector: the entire technique of functional programming, for one, becomes impractically difficult. When you don't have to worry about memory allocation, you gain productivity.
Secondly, there is the strict, static type system. Static type systems get in the way of certain kinds of programming. Here is an example: it's not unusual for parsers written in C to use a union type to represent the items stored in a parse tree, and to use integer tags to identify what is present. What is that, if not an emulation of dynamic typing?
This brings us to my last point. Compiler writing is incorporated into the Lisp programming style, and dynamic typing supports it directly. In Lisp, in addition to the use of functions as an abstraction mechanism, you have macros. These are not like C macros, which work with tokens of the program text; they are operators that work on data structures: structures which typically represent some programming construct and are translated into some other data structure, representing the Lisp code that will be substituted for the macro and evaluated in its place.
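To make this concrete, here is a minimal sketch (the name unless* is made up, to avoid clashing with the built-in UNLESS): the macro receives the source form as a list and returns another list, which the compiler then compiles in its place.
(defmacro unless* (test &body body)
  ;; The macro body is ordinary Lisp: it picks the source form apart
  ;; and builds a new list representing the replacement code.
  `(if (not ,test) (progn ,@body)))
(macroexpand-1 '(unless* done (print "keep going")))
;; => (IF (NOT DONE) (PROGN (PRINT "keep going")))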
The techniques for abstraction provided by Lisp macros are squarely out of reach of the programmer working in C, who is stuck with a fixed set of language features and a lame preprocessor that can perform only some very simplistic emulations of new language features.
What if you want to embed a whole new language in C? You have to write an external processor that works with the raw text of your source code. This is exemplified by ``embedded SQL''.
Continued... (Score:2)
Suppose you have 10 C programmers working on a project, and each of them wants to invent a few language features. So you end up with 10 translators, which require each source file to be passed through a 10 stage filter. Can you imagine that? What if there are subtle dependencies on the order of the filters, or if the output of one breaks the other? Or does not transparently pass through the language features understood by the next one? The scenario is almost unimaginable; quite likely, the very idea will be rejected as braindamaged early in the project.
In Lisp, the 10 programmers can easily have their own macro libraries contained in their modules of the program. They can use each other's macros easily, even nest them in the same code. There is nothing special to do, no code preprocessing to set up; it's not any different from using a library of functions.
Re:C takes too long to write? (Score:2)
Have you spent any time at all thinking about whether a numeric argument should be "int" or "long" or "float" or "double"? You're falling behind Lisp. Did you remember to worry about integer overflow? (Lisp integers only overflow when you run out of memory. C programmers think that wrap-around is a problem that won't happen soon enough to care about -- 2038 A.D.?) Yet more time wasted, or bugs introduced.
Did you need to define a data structure? In Lisp, you can just use lists for quick-and-dirty work. If you want to be cleaner, use structures. You don't have to define the type of every slot unless you want to.
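For instance, a minimal sketch (the structure name is made up):
(defstruct point x y)               ; one line: a structure with two untyped slots
(point-x (make-point :x 3 :y 4))    ; => 3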
Did you need serious, robust string handling? Watch out for buffer overflow exploits, extraneous NULL characters, and so on. Did you do any memory allocation? Be sure to think very carefully about "who owns what" or you've introduced memory leaks. Well, even "very careful" probably wasn't enough, so you still have memory leaks, but you can just restart every so often, right?
You did remember to check the return values of all your system calls to make sure they worked, right? Lisp file I/O throws exceptions, so you can handle errors in structured ways.
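A minimal sketch of what that structured handling looks like (the file name is just an example):
(handler-case
    (with-open-file (in "/no/such/file.txt")
      (read-line in))
  (file-error (condition)
    (format t "Couldn't read the file: ~a~%" condition)))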
Lisp has arrays that can automatically expand as you add data to them (retrieval is fast; storage is fast, and still very fast even when the size needs to grow; and there are no arbitrary array bounds to cause buffer overflows later).
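For example, a minimal sketch of such a growable array:
(let ((v (make-array 0 :adjustable t :fill-pointer 0)))
  (dotimes (i 5)
    (vector-push-extend i v))   ; grows the array as needed
  v)                            ; => #(0 1 2 3 4)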
Sure, you can write C programs fast. If you have time to put in extraneous detail. And can ignore all the pitfalls in the resulting code. Something like doing surgery fast with a chainsaw. Sure, it might be fast, but do you really like the results?
Re:C takes too long to write? (Score:2)
C's ability to declare functions is a good tool for abstraction, but doesn't come close to the power of abstraction you can get in Scheme/Lisp.
For example, there is a standard library function in Scheme (and every other halfway-functional language in existence) called map. map takes a function and any number of lists, and returns a new list such that element X of the returned list is the result of applying the given function to element X of each of the input lists. So, for instance,
(map + (list 1 2 3) (list 4 5 6)) is
(list 5 7 9)
In Scheme, the function map is extremely easy to write -- it's certainly appropriate for a student in a beginning programming course taught in Scheme after about five or so weeks. In C, even if you substitute arrays for lists, map as described would be a real pain to implement -- I challenge you, in fact, to write a version that's as general as the Scheme version. And, if you're really up for a challenge, find someone who's had as much experience in Scheme as you have in C, and race. :)
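For the curious, here is roughly what such a definition looks like. This sketch is in Common Lisp (which already provides MAPCAR for the single-list case), with a made-up name so it doesn't shadow anything:
(defun my-map (fn &rest lists)
  ;; Apply FN to the first elements of all the lists, then recurse on the rests.
  (if (some #'null lists)
      '()
      (cons (apply fn (mapcar #'first lists))
            (apply #'my-map fn (mapcar #'rest lists)))))
;; (my-map #'+ '(1 2 3) '(4 5 6)) => (5 7 9)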
Of course, nobody misses map in C (well, nobody who isn't indoctrinated in functional programming, anyway). C programmers just idiomatically write for or while loops instead. And, surprise surprise, C programmers are always making silly little mistakes that cost time -- "Oops! I got that termination condition wrong!" "Oops! I incremented the wrong counter!" "Oops! I forgot to initialize that variable to zero!" "Oops! I forgot to allocate memory for the return array!" Sure, they're mistakes that C programmers learn to catch quickly because they happen so frequently. But my 5-week-educated Scheme programmer will never have any of those bugs, and won't ever have to spend any time finding or fixing them.
Re:C takes too long to write? (Score:2, Interesting)
I recently wrote a macro for use in a VM I was playing with, that is used for defining opcodes,
and is used something like this:
(definstruction 32 addi ((n1 integer) (n2 integer) -- (result integer))
  (setq result (+ n1 n2)))

(definstruction 10 swap (a b -- b a)
  )
The definstruction macro takes the opcode number, opcode name, a stack picture, and the body of the function. It then expands into code for a function that implements the opcode. In the case of addi, this function verifies that there are at least two items on the stack, and that they are both integers, throwing appropriate exceptions for each error case. It then generates code to pop these items off the stack into local variables and also declares the output variable, making sure not to double-declare if some of the same variables are used on both the input and output. It then generates the body of the function wrapped up in an exception handler that restores the stack to its initial state before calling the exception hook if it's a restartable exception, or just calls the exception hook directly if it's a resumable exception (yes, Lisp has resumable exceptions so you can clean up and keep going). The macro then generates code to push the result back on the stack, checking for overflow if needed, and (optionally, for debugging purposes) verifying that the result was indeed of the correct type.
Finally, it generates code to add the opcode to the jump-table that the vm interpreter uses, and also to the tables that the disassembler uses.
For something like addi maybe 30 lines of code are generated. For something like swap, it only generates about 4, because the macro is intelligent enough to realize that with no body and no typechecking it doesn't need the exception handlers etc.
This one macro is under 100 lines of Lisp code, and will save 5000 to 10000 lines of boring, repetitious, bug-prone code!
Learning Lisp/Scheme? (Score:2)
Doesn't Lisp have a foundation in AI? Or is that Prolog? Doesn't Lisp somehow have a relationship to prolog?
Re:Learning Lisp/Scheme? (Score:3, Informative)
Re:Learning Lisp/Scheme? (Score:3, Interesting)
Re:Learning Lisp/Scheme? (Score:2)
Emacs Lisp is an old dialect. It differs fundamentally from Common Lisp and Scheme in lacking lexical variable bindings. Avoid it in favor of free versions of the newer dialects if your purpose is to understand Lisp as it has been for the past twenty years. Especially explore Common Lisp, which includes so much cool functionality as part of the standard. For example, I believe that Common Lisp was the first object-oriented programming language to become an ANSI standard.
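For example, lexical binding is what makes ordinary closures work; a minimal Common Lisp sketch:
(defun make-adder (x)
  (lambda (y) (+ x y)))        ; the lambda closes over X lexically
(funcall (make-adder 3) 4)     ; => 7
Under purely dynamic binding, the returned function would no longer see X once make-adder had returned.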
Re:Learning Lisp/Scheme? (Score:2)
/Brian
Re:Learning Lisp/Scheme? (Score:2)
From a naive perspective, Lisp is just a programming language: you tell it what to do and it does it. Prolog doesn't seem to actually do anything--you just tell it information and it somehow knows the answer.
Re:Learning Lisp/Scheme? (Score:2)
Check out the Association of Lisp Users [lisp.org] web site for references. I believe Dave Touretzky has made his very good introductory book available online and Dave Lamkins has also written a long web-based tutorial.
Also, the introductory course at MIT uses the fabled 'Structure and Interpretation of Computer Programs' which I believe is also available online. It covers Scheme, not Common Lisp, though.
Re:Learning Lisp/Scheme? (Score:3, Informative)
Here, have a link:
Touretzky, D.S. Common Lisp: A Gentle Introduction to Symbolic Computation [cmu.edu], Benjamin/Cummings, 1990. ISBN 0-8053-0492-4. (For Lisp novices.)
Re:Learning Lisp/Scheme? (Score:2)
(1) Peter Norvig's "Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp"
(2) Sonya Keene's "Object-Oriented Programming in Common Lisp: A Programmer's Guide to CLOS"
(3) Paul Graham's "ANSI Common Lisp"
(4) Paul Graham's "On Lisp: Advanced Techniques for Common Lisp"
(5) Friedman, Wand, and Haynes's "Essentials of Programming Languages"
Java's solution to the gc problem (Score:2)
LISP, just like all the languages that have copied its gc system, neglects to understand that there are many other resources I need to manage other than memory (db handles, shared memory regions, mutexes, files, etc.).
The Java language provides a solution that resembles C++'s destructors. Before the Java runtime's garbage collector disposes of objects, it calls their finalizers.
Lisp Not Hard (Score:2, Informative)
Just My CS 2 cents.
Re:Lisp Not Hard (Score:2)
If you think that vim is nice for lisp editing, try emacs.
Being able to send the current expression to the Lisp interpreter with a keystroke, to see if it evaluates correctly, is awesome once you get used to it.
Emacs Lisp idiom (Score:2)
Shouldn't you have made it a lambda function passed to global-set-key?
The Emacs idiom for defining new features is to define the function (giving it a long name suitable for an M-x call) and then use global-set-key to add a shortcut. For example, first make M-x tetris-on-drugs [rose-hulman.edu], and then map Ctrl+Super+T to it. (The super key on a PC bears a Windows logo.)
MOO! (Score:2)
Anyway, implementing tail calling in MOO isn't as bad as it sounds, and for a trivial case of it, I implemented it. You do lose most of callers(), and thus have less meaningful tracebacks, but I just keep the last frame for caller and caller_perms(). The only thing that permanently breaks is callers() based security like @gag and @refuse, but I implemented a "taint" mechanism (in-db, could easily have been done in-server) that just held the set of perms used. gag_p was then a simple matter of $set_utils:intersection(this.gaglist, permset)
Sometimes wish I still had my old MOO code, but I gave up on MOO long ago after seeing that it just wasn't going to get anywhere. Shame I don't see any real languages anymore with integrated security like MOO had.
Comparison with ML/OCaml (Score:2, Interesting)
These languages give up the s-expression syntax, and thus the powerful Lisp macro facility which people like Paul Graham believe to be critical to high-end Lisp programming.
What they offer in return is static type checking, which has saved me countless hours of bug hunting, and some wonderful mechanisms for abstraction and code clarification: sum types, modules, functors, and exceptions.
I used to do all my work in Lisp/Scheme. And occasionally I miss the simple clarity of the s-expression syntax and the macros. But these days I do everything in OCaml and have been amazed at the ease with which conceptual structures become code.
Re:Comparison with ML/OCaml (Score:3, Informative)
Sorry the interview got split. There was apparently a length limit, perhaps caused by a pre-allocated fix-length buffer due to failing to use a language with dynamic memory allocation.
I agree! (Score:2)
I had a +5 question about this in the asking phase, and I was disappointed to not see it answered.
I too first learned lisp, but (after the initial shock) found SML to be much, much easier to use.
In my mind, we win on:
- static typing: makes debugging so much easier, module interfaces more expressive, and programs faster. Note that ML and OCaml both have type inference, so this business about writing down types all the time doesn't apply!
- datatypes and pattern matching: A very natural way to represent recursive data structures. Often saves you lots of typing over objects and subtyping, while being more "safe" in the sense of avoiding casts.
- Syntax. This is subjective, I guess, but I don't need practically any punctuation to program in SML, which makes it easier to read (for me) than lisp or C.
From his response, it seems we're missing:
- "dynamic stuff". It's not clear to me that this is actually useful for working on large problems, but I suppose this is not reconcilable with ML's static type and scope rules.
- self-interpretation. This sounds fun, and I had hoped to hear about how this is useful. Any suggestions, anyone? It would seem that this could be added to ML, though at a cost of efficiency.
Re:I agree! (Score:2)
static typing
Yes, it spots some errors, but at a great cost: greater difficulty in modifying code. Most Lisp programmers don't hit that many type errors, and when they do, they have the option of fixing the error or extending the interface to accept the new type.
It's not like strongly typed FP systems have complete type systems anyway. How do I specify a type "must be prime" for use in constructing hash tables, for instance? In Lisp, you just declare a variable as being of type (satisfies primep) and let the function primep do the checking. You can't do that in a statically typed language.
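To illustrate, here is a minimal sketch; the PRIMEP below is a naive predicate written just for the example:
(defun primep (n)
  (and (integerp n) (> n 1)
       (loop for d from 2 to (isqrt n)
             never (zerop (mod n d)))))
(deftype prime ()
  '(and integer (satisfies primep)))
(typep 7 'prime)   ; => T
(typep 8 'prime)   ; => NIL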
In summary, most Lispers see static typing as a very rigid answer to a non-problem.
datatypes and pattern matching
Datatypes I covered above. I'll only add that both structures and objects are available. As for pattern matching on said items, Lisp has that in CLOS. You can specialize a method on type or value. Here's an example:
(defmethod factorial ((i (eql 0))) 1)
(defmethod factorial ((i integer)) (* i (factorial (1- i))))
Syntax
Definitely subjective. But having written code that builds code in languages with more complex syntax, I know which one I'd pick...
dynamic stuff
I can't say if it is valuable for "big things" either, but it is very valuable for "important things" - things that can't go down.
self-interpretation
You can't even define a static type (as defined by most strong typing systems today) for this. I'd find it very hard to fit consistently into a statically typed language.
syntax nothing to be proud of (Score:2, Interesting)
The thing I personally like about (+ (* 2 y) x) rather than 2*y+x is that it simplifies my editing.
And then bragging about how easy it is to write editor macros to manipulate expressions? Some of the examples ended in lines like:
(string < (first-name name1) (first-name name2)))))))
Crapping closing parens like that makes the language difficult to read without a text editor to do the matching. And it hurts my eyes.
Closing parenthesis placement vs. readability. (Score:4, Insightful)
If a C program is properly formatted using one of the popular styles, then you can remove all of the braces and its meaning is still obvious; you can put the braces back automatically, or nearly so.
I've experimented with a C formatting style in which closing braces are stacked like in Lisp, and it didn't make one bit of difference to the readability of the code.
In a way, the closing brackets, braces or parentheses are for the compiler, not for the reader; the programmer simply has to ensure that they balance.
In some implementations of Lisp long ago, there was a feature known as the superbrace. If you wrote a right square bracket, it would close all outstanding open parentheses. I think there was one additional rule: the superbrace would stop closing upon encountering an outstanding open square bracket.
So you could write something like:
(defun hello() [let (foo bar) (a (b] ((c d(e)]
The first ] will properly close the outstanding parentheses all the way back to the [let. The second one will balance back to the (defun.
The superbrace did not catch on; it's not part of modern Lisp. That could be because programmers simply didn't perceive enough of a benefit from the feature.
Re:syntax nothing to be proud of (Score:2)
Just think of it as trading all of your semicolons, colons, parentheses, braces, brackets and weirdo digraphs and trigraphs (e.g., ->) -- all that line noise -- for two characters, opening and closing parentheses. Once you do, you stop seeing them.
(Trust me. If you don't, note that I made a similar comment in the past and a C++ type did some actual comparisons that backed me up.)
The gains are metalinguistic: Your programs are now infinitely malleable by other programs, you can customize the lexical and syntactic structure of the language, you can effortlessly write new languages on top of Lisp (ever check out the internal representations of GCC?), etc.
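A tiny illustration of "programs are data" (the string is arbitrary):
(with-input-from-string (s "(+ (* 2 y) x)")
  (read s))
;; => the list (+ (* 2 Y) X), which any Lisp program can walk, analyze, or rewrite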
pre-runtime error detection (Score:2)
hehe.
I think this sentiment is great. Essentially you are saying that lisp's syntax (along with the help of an editor) is conducive to syntax-error-free programming, since it has checks (in a way) that occur while you program, not while you run.
The reason why I chuckle is that I often use this same argument to explain why I find ML's static type system so useful. Rather than catch syntax errors, it catches semantic errors in your program before it is run. I find this to be an incredible boon; I very infrequently have to debug a program that's type correct. Yet the "dynamic" camp doesn't buy into this -- they say that their dynamic typing lets through *more* programs, and therefore is somehow more powerful.
What's the deal here, do you believe the first and not the second, or am I merely making an unfair stereotype?
If Lisp is so great, why isn't it more popular (Score:3, Interesting)
While weak typing and dynamic scoping are great for some things, they really trip up a lot of beginner programmers. An alternative Lisp that requires declarations might be very helpful for beginners. For strongly typed languages, compilers are a major help in debugging.
I would agree that other languages have become huge; I think the problem is that Lisp is a big and idiosyncratic language. Some things are in Lisp because of tradition. Some more things are in Lisp because they were grafted on top of the tradition. Then you have exceptions such as macros that violate the usual rules. It is true that Java is also huge, but each object in the API follows a very restricted syntax.
CLOS has all sorts of interesting things in it such as multiple inheritance and methods for combinations of objects. These are very nice once you have learned to use them, but there are lots of pitfalls, too.
I guess this means Lisp is a power tool for those who have learned how to use it. But it is difficult to learn, and unfortunately, a widely-used and widely-understood (more or less) language needs to appeal more to the lowest common denominator rather than only to those that get it.
dynamic scoping, not! (Score:2)
(let (*dynamically-scoped-var*)
  (declare (special *dynamically-scoped-var*))
  ...)
So now with this declaration you have effectively created a local ``override'' for a global variable. If some form you evaluate from within this context refers to the identifier *dynamically-scoped-var* it will resolve to the one declared here, not the global one (unless there is a local declaration there which shadows that reference, of course).
Once upon a time, in dialects of Lisp preceding ANSI Common Lisp, this dynamic behavior was the norm. Dynamic scoping is tremendously useful, so it is retained in the form of special variables.
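A minimal sketch of a special variable in action (the names are made up):
(defvar *log-stream* *standard-output*)   ; DEFVAR proclaims the variable special
(defun log-line (text)
  (write-line text *log-stream*))         ; uses whatever binding is current
(let ((*log-stream* (make-string-output-stream)))
  (log-line "goes to the string stream, not to standard output"))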
Some ``old timer'' Lisp users should take a second look at the current state of Lisp, because important details change.
Re:If Lisp is so great, why isn't it more popular (Score:2)
Well, one thing's for sure: you're either a Windows user or a hypocrite.
Re:If Lisp is so great, why isn't it more popular (Score:2)
Unless you go out of your way to declare a variable, a variable can be assigned any type of value. Variables are weakly typed. Values are strongly typed, but not variables (without additional effort). Also, lists do not have subtypes as far as I know; a list could be a list of numbers, a list of strings, or a mixture of types.
How do macros violate the rules? Well, the syntax in the macro does not need to be prefix notation, so all a beginner needs to learn is how to parse the individual syntax of each macro.
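The standard LOOP macro is the classic example; its body is a keyword-driven mini-language rather than ordinary prefix forms:
(loop for i from 1 to 5
      when (oddp i)
        collect (* i i))
;; => (1 9 25)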
No, I am not a Windows user, but Linux is harder to learn and use.
And if you think my reasons are lame, you should at least try to come up with better ones. The conspiracy theories are not very good.
the first time I saw lisp I said... (Score:2)
I get from this article the following:
1)you should use lisp because it's emacs friendly.
2)we use notation that's different from what mathematicians use, but is better
3)Lisp programmers feel that speed is unimportant because there is enough processor speed. (that little comment made me want to put my fist through my monitor) clearly not made by an engineer.
4)bracket-intense spaghetti code is a good thing.
5)old guys can be even more fanatical about an obscure language than 'lee7 w4rez d00dz'
6)it's bad that the most powerful, rich, and technological nation on the planet had a cultural influence on the computer industry.
7)LAMBDA is just a way to maintain lisp leetness
Re:the first time I saw lisp I said... (Score:2, Insightful)
What the heck? I remember making exactly that comment when choosing Python to implement a Point of Sale system for a local business. The machines are powerful enough to run an interpreted language quickly, and it's a heck of a lot faster to develop under Python than under C++ or Java. If anything, that mentality *increases* my interest in the language...
Perhaps you are a hardware engineer or perhaps you work exclusively on embedded systems. Either way, get a clue and recognize the usefulness of such a claim...
Re:the first time I saw lisp I said... (Score:2)
2) see answer to (1). Uniform notation means programs can understand (and manipulate) programs.
3) Lisp lets you add declarations to speed up a program that you have written, without changing its behavior. But only if you want to. Premature optimization is the root of all evil. Lisp lets you paint with a broad brush, so you can cover a large area fast. When you need to get the little tiny details painted (the routines that run many times and must be made faster), you switch to a smaller brush, instead of having to paint the whole damn wall with the small brush. Lisp applications have been compiled which run at speeds competitive with Fortran applications.
4) brackets are the glue that holds the uniform notation together. With auto-paren matching and auto-indenting in your editor, you learn to see past the parens. See answer to (1) again.
5) What are you fanatical about? C? Obviously something is making you fanatically opposed to Lisp.
6) Huh?
7) lambda means that functions are just as much data in Lisp as numbers are in Fortran. When you write programs that crunch functions in the same way that Fortran crunches numbers, you can kick some serious ass.
Re:the first time I saw lisp I said... (Score:3, Insightful)
1) you should use lisp because it's emacs friendly.
Among other, much more important things.
2)we use notation that's different from what mathematicians use, but is better
Better for a programming language, yes. If you're really in love with having to know what's left versus right associative, try ML. Not that you'd like it, judging from your comments.
3)Lisp programmers feel that speed is unimportant because there is enough processor speed. (that little comment made me want to put my fist through my monitor) clearly not made by an engineer.
Of course not. That would be stupid. Lisp programmers (or more likely Lisp implementors) feel that adding a constant slowdown is acceptable when it leads to markedly sped-up program development. All engineers do this: you don't design your CPU chips by thinking about NMOS and PMOS, and you generally don't even think about NAND and NOR gates either (one level of abstraction up). You could probably make a faster chip if you hand-optimized all that logic, but it would take 10 years to make the damn thing and you'd never be sure there wasn't some crazy obscure bug lurking in it.
4)bracket-intense spaghetti code is a good thing.
Of course not. Lisp programs have a beautiful organization when programmed in the proper style. You're just not familiar with it.
5)old guys can be even more fanatical about an obscure language than 'lee7 w4rez d00dz'
No argument there. But fanatical in a good way.
6)it's bad that the most powerful, rich, and technological nation on the planet had a cultural influence on the computer industry.
Reading comprehension was never your strong suit in high school, was it?
7)LAMBDA is just a way to maintain lisp leetness
Or, there's no good reason to change it, so why break all the existing code and bother all the existing programmers? Besides, if you really care about it, FUNCTION instead of LAMBDA is a macro-definition away.
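(For the record, "a macro-definition away" means something like the following sketch, using a made-up shorter name so as not to touch any standard symbol:)
(defmacro fn (args &body body)
  `(lambda ,args ,@body))
(funcall (fn (x) (* x x)) 5)   ; => 25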
Try spending more of your effort trying to figure out what it is about Lisp and Scheme that make people like it rather than wasting it on knee-jerk vitriol.
Re:the first time I saw lisp I said... (Score:2, Informative)
One of the things I love about Lisp is that you don't start by optimizing your solution before you know it. You start by solving the problem (correctly!), then you start optimizing the solution. This can either be done by recoding (considering the first code as a prototype) or evolving the code (typically choosing more optimal constructs and adding type annotations).
I have personally gotten a Lisp implementation to perform within 10-15 percent of C code on computationally intensive code. I could perhaps have gotten even closer if it had been important to me. Or I could have chosen to use C functions for the "inner loops" (which is in fact simpler than doing the same from Python). I sometimes do the latter when I need to squeeze the last few bits out (or when I already have optimized C solutions for known algorithms available).
The whole point is not to spend time on optimizing something which is already fast enough. When you do need to optimize the code, you profile it and start optimizing the bits that eat up your time instead of optimizing parts which are not contributing much to your run time.
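For the record, those type annotations look something like this minimal sketch (the function, argument types, and optimization settings are just illustrative):
(defun dot-product (a b)
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3) (safety 1)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length a) sum)
      (incf sum (* (aref a i) (aref b i))))))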
Real reason for all the ()s (Score:4, Insightful)
Most production-level Lisp programs may well never encounter a single list during their execution. Hashtables, structures and arrays are all primitive types in Common Lisp, and CLOS lets you build multiply-inheriting object classes to your heart's desire. But the programs themselves will all still be lists. This means that you can use Lisp's list-processing tools to write and rewrite them.
Which is where macros come in. Unlike other languages, Lisp macros are not just a simple preprocessor. They put the entire language at your disposal in constructing the expressions you want. Hence you can add new control constructs to the language - with 5 lines of code you can add a for, as in (for (x 1 10) (print x)). You can introduce new tools for updating generalized variables - (setf (aref a n) x) and (setf (property object) v) are equivalent to a[n]=x and object.property=v, but what about a user-defined (setf (min l) n) that changes all values in l smaller than n to n, thereby enforcing the identity? You can even embed entire languages on top of Lisp, and write your programs in that. And because this is all handled at compile time, not only will you not incur the cost associated with using high-level interfaces, you could also use this opportunity to perform extra computation while compiling, based on values potentially already known.
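Here is roughly what those 5 lines look like; this is just one sketch of such a FOR macro, and there are many ways to write it:
(defmacro for ((var start stop) &body body)
  (let ((limit (gensym)))              ; gensym avoids capturing a user variable
    `(do ((,var ,start (1+ ,var))
          (,limit ,stop))
         ((> ,var ,limit))
       ,@body)))
;; (for (x 1 10) (print x)) prints the integers 1 through 10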
The most obvious analogy is to XML. It too obeys the "program is data" paradigm and has delimiters everywhere (though its delimiters are more verbose). This means you can rewrite your XML content using XSLT stylesheets, which are themselves XML documents and hence can be rewritten too. The main difference here is that XSLT is nowhere near as well equipped to deal with language rewriting as Lisp is (ever tried even a simple recursion across more than one axis?).
Why Lisp? .... CLOS (Score:2)
Scheme on JVM (Kawa) (Score:2, Informative)
I think you're asking a lot. Kawa has over 80k lines of code (as measured by wc), almost all of it written by one person (me) not working on it full-time, and much of it re-written as I improve things. That is a lot of code. Kawa includes a full compiler and a big runtime with lots of features. I get lots of compliments.
I attribute this at least in part to the absence in the JVM of a construct equivalent to Scheme's continuations.
In principle it is not that difficult to implement continuations on the JVM. Once you have full tail-call support (which Kawa does), you can implement continuations by a source-level transformation. What has mainly held me back is the desire to do it right, by which I mainly mean efficiently, building on top of the right calling conventions, and with interoperability with "direct" code. The other thing is that I have never seen continuations as all that important. It's a check-off item if you want to claim to implement Scheme, but I suspect very few people would actually be able to use them. Still, for those who do, I will get around to it as soon as I can.
ANSI CL and the Lisp machine killed Lisp (Score:4, Insightful)
Lisp machines were expensive workstations that cost tens of thousands of dollars to deliver performance that, even at the time, could easily be had for a few thousand dollars. Contrary to popular claims, the programming environment had some serious limitations, including a lack of source-level debugging (it eventually got added, but only after the system had already fallen from grace). Those systems simply were not competitive and gave Lisp a reputation for requiring gold-plated hardware, which carried over into hugely expensive development and runtime licenses. And the use of those machines also kept Lisp from ever integrating well into mainstream environments.
An even bigger problem with Lisp was ANSI CommonLisp. ANSI CommonLisp failed to standardize some really important functionality, like threads, reflection, and networking. What it did specify it specified poorly: the meaning of type declarations and conditions (exceptions) is still vague. The upshot is that CommonLisp programs are a pain to port and require careful hand-tuning for each implementation. A program that runs fast on one implementation runs like molasses on another.
The most frequently named "issues" with Lisp never were issues as far as I can tell. People who put up with Perl syntax should have no problem putting up with Lisp syntax. And performance and resource requirements of Lisp implementations are small compared to Java or even modern scripting languages.
So, where is Lisp going? CMU CommonLisp is trying valiantly to maintain CommonLisp functionality and enhance it, but it is hampered by being based on a poorly written Lisp standard. Python actually gives you most of the power and convenience of Lisp but integrates much more nicely with its environment; Python's big drawback is the lack of good native code compilation. Java includes many Lisp features (Java was designed by people with a lot of Lisp experience) and it specifies reflection, runtime code generation, and exception handling much better than CommonLisp (too bad about the syntax, though). Scheme is probably the best variety of Lisp these days, and there are some really good implementations out there (Bigloo and PLT Scheme being some of them; Bigloo and Cygnus's Scheme compilers even compile to the JVM if you like). And the ML series of languages (SML, OCAML) give you most of the convenience of Lisp with full type checking and no type declarations.
Lisp continues to live in many forms, despite the Lisp machine and despite ANSI CommonLisp.
Re:ANSI CL and the Lisp machine killed Lisp (Score:2)
Also, you fail to prove the case that ANSI CL killed Lisp (even if I somehow grant that Lisp is dead). What is your realistic alternative? A Lisp that somehow got all the features and market acceptance of Java? Would this have come about in the absence of a standardization effort like ANSI CL? Without the resources of Sun?
Standardizing threads, etc., is quite difficult in a language that is meant to run on a wide range of platforms and OS's. Commercial CL implementations manage to include interfaces to all of this functionality anyway. What in this "poorly written standard" prevents different implementations from choosing a standard networking interface, if the market really demanded it?
Lisp machines sure didn't help Lisp in the long term, but all independent workstation vendors have essentially lost to Intel. Their problems weren't unique to Lisp; they were the problems of independent microprocessor development.
Re:ANSI CL and the Lisp machine killed Lisp (Score:2)
That's a valid question. My response is that I don't view Scheme and CommonLisp as covering the same ground. CommonLisp really tried to claim being an industrial-strength application development language, and I think it failed badly at that. Scheme, at least to me, is a variety of things that CommonLisp never really was: an extension language, a scripting language, a prototyping language, and a teaching language. At its chosen problem domains, Scheme is quite good.
Also, porting the code without worrying about efficiency is usually trivial, even between different implementations on different platforms (assuming CLIM or some other standardized interface was used). This means that you instantly get working code, even though it may not be as fast as it was on the original implementation.
I have to disagree with this. It is easy to port code that was intended to be portable, but once you put in all the hooks to make it efficient, it fails. Some CL implementations, for example, tolerate incorrect type declarations, while they lead to crashes or runtime errors on other systems. And what is "incorrect" may itself depend on the implementation.
Compare this with C, C++, etc. where porting between different machines, even with the same compiler vendor, is often quite tricky.
Yes, it is very easy to write C/C++ code that is completely unportable. Even worse, the source code contains no indication of the use of unportable constructs. This is a major problem with C/C++. However, unlike CommonLisp, it is also very easy to write C/C++ code that is both efficient and completely portable. I think that has contributed to making C/C++ so popular.
I also would like to know what you consider vague about the condition system. I've always considered it one of the strong points of the language and easily graspable.
The condition system itself is OK, but the standard leaves a lot of leeway to implementations of what exceptional conditions are actually detected and what their consequences are. And the CL standard still contains many undefined effects. The hope was that this would allow implementations to be more efficient, but that doesn't seem to have panned out.
I think Java is a lot better in this regard, specifying precisely what exceptions get thrown by what primitives, and the runtime cost is small. (Java got a little overly zealous with numerical exceptions, to the point where it unnecessarily affected efficiency, but that's being fixed.)
I also disagree with your statement that the ANSI Common Lisp standard is poorly written. I consider it to be a superb example of technical writing. I use it day-to-day as a reference when writing ANSI Common Lisp programs.
I wasn't referring to style (the prose is nice), but to content.
Altogether, I think CommonLisp has the right idea in terms of overall language functionality for an advanced language for rapid development and prototyping: Lisp syntax, macros, reflection, a powerful object system, interactive development, dynamic modification of code, etc. But the ANSI CL standard just got too many practical issues wrong as far as I'm concerned. The right thing to have done would have been to start over and not accommodate the half-dozen or so vendors with their oddball interests. In fact, some such efforts were around in the early 1990s, but the existence of the ANSI CL standard never let them get off the ground, and I think ANSI CL gave Lisp such a bad name that eventually people just gave up on it. In fact, in many ways, systems like Java and Python are Lisp systems disguised carefully enough so as not to offend a Lisp-weary public.
Re:ANSI CL and the Lisp machine killed Lisp (Score:2)
from the Hyperspec:
In April 1981, after a DARPA-sponsored meeting concerning the splintered Lisp community, Symbolics, the SPICE project, the NIL project, and the S-1 Lisp project joined together to define Common Lisp. Initially spearheaded by White and Gabriel, the driving force behind this grassroots effort was provided by Fahlman, Daniel Weinreb, David Moon, Steele, and Gabriel. Common Lisp was designed as a description of a family of languages. The primary influences on Common Lisp were Lisp Machine Lisp, MacLisp, NIL, S-1 Lisp, Spice Lisp, and Scheme. Common Lisp: The Language is a description of that design. Its semantics were intentionally underspecified in places where it was felt that a tight specification would overly constrain Common Lisp [r]esearch and use.
and in
1.1.1 Scope and Purpose
The specification set forth in this document is designed to promote the portability of Common Lisp programs among a variety of data processing systems. It is a language specification aimed at an audience of implementors and knowledgeable programmers. It is neither a tutorial nor an implementation guide.
Personally, I think it is a lot more important to have a portable object model than to have a portable networking interface. When porting from implementation to implementation, the networking performance of different platforms is certain to be different, and require re-tuning anyway.
As for the issues you mention on type-checking being non-portable, optimization and portability are nearly opposite directions. Different implementations (and different (optimize ...) settings) will make different trade-offs. If you're spending the time to seriously optimize a program, you probably have gone past the point of worrying about your choice of platform.
Re:ANSI CL and the Lisp machine killed Lisp (Score:2)
I wasn't talking about what CL "claimed" to be but what it was actually being used for: large, real-world artificial intelligence systems. And it clearly wasn't all that good at the other areas: too messy for teaching and not well-enough integrated with UNIX for scripting.
Personally, I think it is a lot more important to have a portable object model than to have a portable networking interface.
That's kind of like saying that having a brain is a lot more important than a heart. Any modern programming system that wants to succeed needs to have both, as well as a lot of other things. ANSI CL failed to standardize those and many other important pieces of functionality.
As for the issues you mention on type-checking being non-portable, optimization and portability are nearly opposite directions. Different implementations (and different (optimize ...) settings) will make different trade-offs. If you're spending the time to seriously optimize a program, you probably have gone past the point of worrying about your choice of platform.
Well, you may not think it mattered, but the simple fact is that lots of CommonLisp users have voted with their feet. I'm just telling you why I and most other people I know stopped using CL for anything big and important. I expect a good language standard to allow me to write programs that are both portable and predictably efficient, and a language standard that fails to do that is of no interest to me.
Re:ANSI CL and the Lisp machine killed Lisp (Score:2)
Don't be silly. If this were "my sole opinion", lots of people would still be using CommonLisp and I'd be the lonely holdout not using it. Reality is that most experienced CommonLisp users have switched to Java or other languages by now, and they didn't do so out of ignorance.
Re:Ah, LISP fanaticism (Score:2, Insightful)
To me, Lisp is wonderful for three reasons:
(1) It's highly abstract. It makes working with fuzzy concepts (such as natural language) so much easier; you very rarely (if ever) have to get into the nitty-gritty of worrying about how your code actually interfaces with the hardware it's running on. Lisp is one of the highest-level languages I've ever used. Compared to Lisp, C++ is just horribly ugly.
(2) Lisp code can be self-modifying -- it's easy to write a program which puts together a new function and adds it to itself. This makes it great for artificial intelligence, in which self-modifying programs are good things. Compared with this, Perl's 'eval' function is woefully inadequate.
(3) One of the most important things to learn about Lisp is that good code flows from the fingertips, while bad code snarls up and is hard to write. If you're having a hard time getting a piece of Lisp code to work, then you're probably going about solving a problem the wrong way. When thoughts and code are in harmony, it's a very Zen form of beauty.
Lisp code is just *fun* to write.
Lisp commenting. (Score:4, Informative)
;;;
;;; This function computes the factorial of its
;;; argument x. The argument must be a
;;; non-negative integer. If the argument is 0
;;; or 1, the result is 1. Otherwise the result
;;; is the product (x)(x-1)(x-2)...(1).
;;;
(defun fact (x)
  "Computes the factorial function"
  (case x
    ((0 1) 1)
    (otherwise (* x (fact (1- x))))))
The comp.lang.lisp FAQ has a few pointers on style, including use of whitespace, comment placement, how many semicolons to use for what comments and the like.
Re:Lisp commenting. (Score:3, Insightful)
There is a technique one can use for modifying the reader to find comments and attach them to the parent object using hash tables, repatriating them later. I've seen this done for some source-to-source translation facilities, such as one provided with the Symbolics system for upgrading user code between releases. It is more work, but given how often the issue comes up, the extra cost is probably reasonable in exchange for how much easier it is to transform code without worrying that comments are intervening.
The alternative is to do like Interlisp did and make comments be structures, but then you can only put comments in certain places. It's plainly more flexible to put them anywhere you want and we pay the cost for that in the READ function.
Re:Lisp commenting. (Score:2)
That's not entirely true. While, strictly, comments are disregarded by the interpreter, many Lisp dialects have the concept of docstrings, where the first element of a function or structure body is a string describing the object's purpose. For an example, let me steal some elisp code from ILISP:
(defun bridge-call-handler (handler proc string)
  "Funcall HANDLER on PROC, STRING carefully. Error is caught if happens,
and user is signaled. State is put in bridge-last-failure. Returns t if
handler executed without error."
  (let ((inhibit-quit nil)
        (failed nil))
    (condition-case err
        (funcall handler proc string)
      (error
       (ding)
       (setq failed t)
       (message "bridge-handler \"%s\" failed %s (see bridge-last-failure)"
                handler err)
       (setq bridge-last-failure
             (` ((funcall '(, handler) '(, proc) (, string))
                 "Caused: "
                 (, err))))))
    (not failed)))
--Phil (Gee, I wish Slashdot would let me close the <TT> tag!)
Re:Lisp commenting. (Score:2)
Erm, oops. So I had two <tt> tags at the top of my post and only one at the bottom. Preview was showing the monospaced font extending past my </tt>, galeon wasn't showing me the page source properly (it reloads the page to see the source--bad when on dynamically-generated pages), and </tt> isn't in the list of allowed HTML at the bottom. I jumped to a conclusion.
--Phil (Appropriately chagrined.)
Re:Lisp commenting. (Score:2)
class Square(Shape):
    """An abstract representation of a box"""
    def size(self):
        """Return the size of the box"""
        return self.x * self.y
The docstrings are queryable (object.__doc__), and there exist excellent tools to give a module/class/method's documentation: pydoc. pydoc is such a nice wonder.
IIRC, Python 2.2 has docstrings applicable to object attributes, too.
Re:Lisp commenting. (Score:2)
(defun square (x)
  "Return the square of x"
  (* x x))

(defclass square (shape)
  ((side-length))
  (:documentation "A representation of a box"))
Then
(documentation 'square 'function)
returns "Return the square of x"
(documentation 'square 'type)
returns "A representation of a box"
Re:Lisp commenting. (Score:3, Informative)
Common Lisp has had this capability for 15 years. The following is from Macintosh Common Lisp [digitool.com]:
Re:Ah, LISP fanaticism (Score:2)
No place to hang the comments?! (Score:2)
; - inside code (at end of line)
Or have you never heard of the ANSI standard generic function (documentation object doc-type) that lets you access documentation strings, which can be hung just about anywhere? I even have a little package I picked up in 1997 that trolls Lisp source code, extracts the documentation strings, and emits javadoc-like documents, in HTML or TeX or RTF.
"no place to hang the comments", indeed.
You might, just possibly, be thinking of Interlisp, which did have a comment form that could be placed inside your function code and was edited with the structure editor like everything else. In ancient times you had to be a little careful as to where you put comments, but that problem went away with Xerox Common Lisp, around 1986 or so.
Re:Ah, LISP fanaticism (Score:2)
Actually, quite the opposite. Because Lisp programs have a nested tree structure and there are universal indentation conventions, any single expression in a Lisp program can be isolated on its own line in a way that's natural to the reader, and then commented. Plus, Common Lisp (like all decent languages) permits inline comments: #| comment |#.
Poorly written code in ANY language displays the qualities you attribute uniquely to Lisp.
syntactic stupidity (Score:2)
(Incidentally, 10000 lines in any programming language isn't a lot of code, so by your own admission, you don't really know Lisp very well.)
Re:AHH!!!!! (Score:2, Informative)
Seriously, it becomes natural, just like using RPN HP calculators (which are kind of similar).
Re:AHH!!!!! (Score:2)
You should see how Lisp folks retch when they see the horrors the **ML crowd has foisted on the world.
Aren't you tired of serving Lord infix and the dark side, Darth?
Solution: you can write a little infix translator! (Score:2)
(infix x / sqrt ( x * x + y * y))
whose expansion might be
(/ x (sqrt (+ (* x x) (* y y))))
Really, if some way of programming bothers you, you can write a little compiler which translates from a notation that you find more suitable.
That's a fundamental concept in Lisp; that it's easy for programmers to write little compilers for new language features; compilers which are incorporated right into programs, and whose target language is Lisp.
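As a rough illustration only (this is a toy, not the translator sketched above: it handles just fully parenthesized two-operand forms, with no precedence rules or function calls), such a macro might start out like this:
(defmacro infix (form)
  ;; Translate a fully parenthesized (LEFT OP RIGHT) tree into prefix form.
  (labels ((walk (f)
             (if (and (consp f) (= (length f) 3))
                 (destructuring-bind (lhs op rhs) f
                   (list op (walk lhs) (walk rhs)))
                 f)))
    (walk form)))

;; (infix ((x * x) + (y * y)))  macroexpands to  (+ (* x x) (* y y))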
Re:Scheme isn't dead? Then it should. (Score:5, Informative)
But anyway I think it's an archaic language (it was invented in the 50's if I recall) and, like anything invented nearly 50 years ago in the computer world, something better has either evolved from it or was created from scratch in a better way.
Lisp has evolved. ANSI Common Lisp is a very different language than what was used in the 50's. It has been continuously evolving for nearly half a century.
I hate the gazillions of parentheses and especially the poor interface given to me by DrScheme (of course again there might be something better but it's our teacher's restriction).
Nearly all serious Lisp development is done in an Emacs-like editor that has built-in support for writing Lisp programs. This support includes facilities that make keeping parentheses balanced trivial.
I also don't think the language would have survived if it was not supported by universities morons who just don't want it to die. Leave it be! Its time is over!
I'd highly recommend you go to www.lisp.org and look at the hundreds of huge commercial applications that have been written in Common Lisp. It is not an obscure research language: if Common Lisp were only used by ``universities morons,'' would two major vendors (www.franz.com, www.xanalys.com) be able to make money off it? Incidentally, both of the preceding sites--especially www.franz.com--list commercial customers who have been satisfied with developing software in Common Lisp.
Anyway... speaking about speed. We had a small project of doing fractals and compared it to a c++ program, and the scheme program took nearly 20 times as long as the c++ to do the same recursion level.
It's very important to note that Scheme is not Common Lisp. Scheme is a very different language: Scheme is about having an elegant tool to solve problems; Lisp is about having a tool to elegantly solve problems. In particular, Common Lisp is typically compiled to native code and allows the programmer to include type declarations. These two features alone can improve speed by a couple of orders of magnitude. To be fair, some Schemes support these features too, but, AFAIK, they're not standardized.
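For the curious, the kind of declarations meant here look roughly like this (the function name and the choice of types are made up purely for illustration):
(defun sum-of-squares (x y)
  ;; Tell the compiler the argument types and ask it to favor speed.
  (declare (type double-float x y)
           (optimize (speed 3) (safety 0)))
  (+ (* x x) (* y y)))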
Re:Scheme isn't dead? Then it should. (Score:2)
Oh please, have you ever tried to write C++ in Notepad?
if you're doing any programming in any language, you should have an editor suited to that task.
Re:Scheme isn't dead? Then it should. (Score:2)
Re:My god.... (Score:3, Funny)
You prefer the sarcastic barbs of linguist/banana republic dictator Larry Wall?!? (Incidentally, Kent's undergraduate degree is in linguistics.)
Or the detached arrogance of Bjarne Stroustrup?
Assume the position and be Wayne and Garth to Kent's Steven Tyler.
Re:python v lisp (Score:2)
How about (write-line "hello, world")?
Lispers feel the same way about having Lots of InSpired Parentheses, if this makes any sense.
Python's missing Lisp's Macros; Java's missing Ace (Score:3, Insightful)
String based macros like the C and C++ preprocessors are woefully inadequate. Representing code as data and transforming code with macros is essential to Lisp. Take macros and s-expressions away from Lisp, and you have Python. Take macros and gratuitous syntax away from C++, and you have Java.
Because of the syntax of languages like C, C++ and Java, there's no good way to design a macro language as powerful and easy to use as Lisp macros. The pointlessly ridiculous syntax of Perl makes it impossible to implement Lisp-like macros for Perl in a meaningful way. The Byzantine parse-tree data structures required to represent the syntax of a Perl program are much too complex for macros to easily understand and transform.
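To make that a little more concrete, here is a small, hypothetical example of code-as-data in action: the macro receives the code it wraps as ordinary list structure and rewrites it before compilation (WITH-TIMING is not from any particular library, just a sketch):
(defmacro with-timing (&body body)
  ;; Wrap BODY so it reports how long it took, returning BODY's value.
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~&That took ~D internal time units.~%"
                 (- (get-internal-real-time) ,start))))))

;; (with-timing (sleep 1)) runs the body, then prints the elapsed time.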
Before he designed Java, James Gosling took a crack at the problem by designing and implementing a C macro language called "Ace".
Ace was a high level C parse tree macro transformation language, used to generate the low level raster-op code for the X11/NeWS server. The previous version of NeWS totally abused the C preprocessor in ways it wasn't designed to be used, in order to implement the low level high performance graphics code (a dark, unpleasant practice known as "macrology").
Gosling designed Ace in response to the problems and limitations of the C preprocessor, just as he later designed Java in response to C++. You could give Ace several ways to implement loops like two-dimensional raster operations, and it could plug different code into the middle of loops with different performance characteristics.
Commonly used rasterops could be expanded to different degrees, with many different cases separately coded (pulling the if statements to the outside and generating big fast code). Seldom used rasterops could be collapsed so the code was correct but compact (pushing the if statements inside of the loop and generating small slow code). Ace operated on the level of C parse trees, not text like C preprocessor macros. Ace would actually estimate the space/time tradeoffs, and decide how to expand macros according to the hints you gave it.
But Ace transformations were quite complex, special purpose and difficult to program. So James Gosling decided not to put macros into Java at all.
The success of Java proves that C and C++ preprocessor macros are not essential to those kinds of languages. But Lisp macros are vastly more powerful, and absolutely essential to Lisp.
Ace was an ambitious tour de force for a C macro language, but it was much too complex and unwieldy for Gosling to design into Java. But that kind of macro programming is actually quite commonplace and straightforward with Lisp.
-Don
Re:Help me start learning (Score:3, Interesting)
I recommend using ILISP [sourceforge.net] with Emacs. It integrates with most of the Lisp environments out there and provides some neat features such as sending the new version of your defun to Lisp, and a slightly buggy buffer-package-matching thingy. Here's the Common Lisp devel stuff in my ~/.emacs:
Lameness filter encountered. Post aborted! Reason: Please use fewer 'junk' characters.
[Sorry about this, but it looks as if I'll have to change the character ratios a bit...]
Re:The Largest Disservice to LISP (Score:2, Insightful)
Sorry about the arrogance you have perceived in some. We're a diverse community. We're neither all arrogant, nor I suppose all not. You just have to pick and choose, like most other things in life, even Slashdot. I hope you'll use this incident as a reason to give the language a second look. Thanks for the useful feedback in any case.
Re:The Largest Disservice to LISP (Score:2)
You're missing out on much more than that: LISP is the most powerful, elegant programming language in the world.
LISP offers advantages, like complete self-introspection, closures, and the ability to scratch together deeply nested data structures on the fly, that the Algol camp is still struggling with. And LISP has had these things for years. These things are not necessary for every task (I certainly wouldn't write an operating system kernel in LISP (though it is possible) or a really high-speed graphics routine), but they sure come in handy for many tasks at a high level of abstraction, so LISP is a good thing to have in your toolkit.
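As a tiny, made-up illustration of the closure point (MAKE-COUNTER is not from any library): each call returns a fresh function that remembers its own private count between invocations.
(defun make-counter (&optional (count 0))
  ;; The returned closure captures COUNT and keeps its own copy.
  (lambda () (incf count)))

;; (defparameter *tick* (make-counter))
;; (funcall *tick*) => 1
;; (funcall *tick*) => 2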
Oh, and by the way, Emacs is probably the world's most powerful text editor. Which is a bit like saying the Space Shuttle is the world's most powerful airplane. It has its downside in that it's quite heavyweight if your intent is merely to edit text but it is infinitely customizable to nearly any task within that domain and can really make life easier on a programmer. And code bloat being what it is, the Microsoft Word of today is easily many times the size of Emacs. The Emacs model of customization and code reuse also beats COM hands-down.
Don't get me wrong. I know of what you speak. I've had to deal with arrogant Mac weenies for, like, a decade now, and there are a great deal more of those than there are rabid LISP fanatics. Doesn't mean I want to dissociate myself from the Macintosh: quite the opposite. My sister has started drooling when a Windows XP commercial comes on and I want to show her there's a better way.
Re:system interface (Score:2)
So why didn't you just use a Lisp with threads, e.g., almost any (or perhaps even every) major Common Lisp?
Re:Correction (Score:2)
Re:What about currying? (Score:2)
CL doesn't support currying (or continuations, or monads, or a bunch of other functional stuff) because Common Lisp isn't a purist language. It is rather a pragmatic language. Purists are welcome to look elsewhere while Lisp programmers still manage to write world-beating applications like Orbitz.
That is all.
Wrong! (Score:2)
So when you have, say
f() + (g() + h())
the functions can be called in any of the six possible orders; which order is chosen is unspecified. But there is no question that the addition on the right must be done first, in the abstract semantics.
It so happens that in C, the expression
a + b + c
has exactly the same meaning as
(a + b) + c
This is due to the left associativity of the + operator.
The compiler is not free to add b + c first if doing so could make some kind of difference, like a change in the result, or an exception that would otherwise not happen. The result has to be ``as if'' the abstract semantics were followed.
So for instance if a, b and c are integers, and overflow is reversible (as it is with most two's complement machines) then the addition can be done in either order. If they are floating point types, then it cannot be reordered, because floating point addition is not associative: a + (b + c) is semantically not equal to (a + b) + c.
If your compiler treats a + b + c as a + (b + c) and produces the wrong result, it's a broken (nonconforming) C or C++ implementation, as the case may be.