Common Lisp: Inside Sabre 227
bugbear writes "I just got permission from the author (Carl de Marcken of ITA Software) to publish this
email, which describes the inner workings of
Sabre, the flight search software that the airlines and travel agencies use. It
is a case study in cheap Linux/Intel, NT/Intel and HP-UX boxes replacing mainframes, and also the use of Lisp and other languages in a server-based app.

Update: 01/16 13:45 GMT by H: RawDigits writes "Common Lisp: Inside Sabre - correction. The Lisp engine is used by Orbitz, and not Sabre. Sabre still maintains mainframe systems for their booking. I should know, I am sitting in the Orbitz NOC right now ;)"
I hadn't realized... (Score:4, Interesting)
Re:I hadn't realized... (Score:5, Informative)
This isn't limited to the commercial ones: CMUCL [cons.org] and SBCL [sourceforge.net] also compile to native code. The compilers are optimizing (you can choose among varying degrees of speed, safety, debuggability and compile speed) and you can even enter assembler code or disassemble single expressions.
Re:I hadn't realized... (Score:1)
If it lags behind C, it lags behind Pascal and Fortran too.
Re:I hadn't realized... (Score:1)
Re:I hadn't realized... (Score:2)
Re:I hadn't realized... (Score:2)
Re:I hadn't realized... (Score:2)
See Richard Fateman's home page [berkeley.edu], where you'll find, among other essays:
(1) Fast Floating-Point Processing in Common Lisp. [berkeley.edu]
(2) Software Fault Prevention by Language Choice: Why C is Not my Favorite Language. [berkeley.edu]
Or see Gerald Sussman's home page for [mit.edu] a paper on the use of Lisp in performing astronomical calculations [mit.edu].
Language chauvinism (Score:4, Insightful)
The vast majority of language preferences reduce to religion, and of those the vast majority are dilettantes who know a dozen languages or fewer.
In reality, LISP isn't slow, and it really never was, compared to the alternatives. About 20 years ago, some deluded souls converted some heavily numerical FORTRAN code to LISP and found out it ran about 20% faster in LISP. I call them deluded because they were under the false impression that any of the detractors of LISP would care. They don't.
People prefer languages primarily for cultural reasons. LISP fits certain cultures and doesn't fit others. You can say the same about FORTRAN, C++, Java, Visual Basic, or any other language you like. Pseudo-quantitative arguments come later as justifications to be used in arguments.
Beyond that, different languages are good for different things, but it's almost impossible to have a discussion about it. If we were to have such a discussion, I would say that LISP is a great choice for anything involving graph theory, which a reservation system obviously does.
Me, I'm in the middle of writing a game engine and editor that's built in C and C++, does the plotting in an embedded Scheme, uses a Postscript-like code for run-time, interfaced with the Scheme, and under OSX uses Objective C for the user interface layer. So, what do I know anyway?
Re:I hadn't realized... (Score:2)
My point is that a language like LISP seems designed to handle data in the way that modern computing architectures process it best. Binary trees, lists and stacks are handled easily by LISP, as are nearly all the recursive and relational structures in widespread use today. While C allows immense flexibility, I believe its strength is its ease of use; it's easier to program functionally or iteratively than recursively. However, optimizing C code probably requires much more knowledge of the specific architecture one is working on. LISP, by its very structure, is optimized for any Turing-complete system.
Bear in mind, IANALP = (IANA = (I Am Not A) Lisp Programmer)
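To make the parent's point about recursive structures concrete, here is a small sketch (the function names are invented for illustration) of a binary tree built from plain lists and traversed recursively:

```lisp
;; A binary tree as a nested list: (value left right).
;; MAKE-NODE and TREE-SUM are invented names for this example.
(defun make-node (value left right)
  (list value left right))

(defun tree-sum (node)
  "Recursively sum every value in a list-based binary tree."
  (if (null node)
      0
      (+ (first node)
         (tree-sum (second node))
         (tree-sum (third node)))))

;; (tree-sum (make-node 1 (make-node 2 nil nil) (make-node 3 nil nil)))
;; => 6
```

No structure declarations, no pointer juggling: the recursive shape of the data and the recursive shape of the code match one-for-one.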
Look at the Lisp newsgroups for more information.. (Score:2, Informative)
Go forth ye google hunters! I'm going to bed.
More like "Look how clever we are!" (Score:1, Troll)
Well done on getting some free advertising.
Yahoo Stores uses Lisp (Score:4, Interesting)
It would seem to me that if it can power 14,000 e-commerce sites for the largest web network, it must be pretty scalable.
Lisp, due to its recursive nature, is often used in AI because it can perform such operations with lower overhead.
--Jon
Re:Yahoo Stores uses Lisp (Score:2)
Best quote (Score:3, Interesting)
"Of course, with things like STL and Java, I think programmers of other languages are also becoming pretty ignorant."
Re:Hey don't knock the STL :-) (Score:1)
Re:Hey don't knock the STL :-) (Score:2)
This is nitpicking, but have you tried comparing something written in fortran on an old PDP-something to something using a less efficient algorithm on an Athlon box in assembly?
Point made, though.
Re:Hey don't knock the STL :-) (Score:2)
For FORTRAN on an old PDP to assembly on an Athlon, a few million record sort should do the trick.
Re:Hey don't knock the STL :-) (Score:2)
And Common Lisp delivers the best of both worlds: dynamic binding during rapid prototyping and development. As hot-spots creep up, simply add optional type declarations that inform the compiler of the types of critical variables, and you get the speed benefits of knowing variable types at compile time.
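A hedged sketch of what those declarations look like in practice (the function and its types are invented for illustration; the declaration forms themselves are standard Common Lisp):

```lisp
;; Hypothetical hot-spot: the declarations tell the compiler the
;; exact array and element types, so it can emit specialized
;; float arithmetic instead of generic dispatch.
(defun dot-product (a b)
  (declare (optimize (speed 3) (safety 0))
           (type (simple-array double-float (*)) a b))
  (let ((sum 0.0d0))
    (declare (type double-float sum))
    (dotimes (i (length a) sum)
      (incf sum (* (aref a i) (aref b i))))))
```

Remove the declarations and the same code still runs, just more slowly, which is what makes this style pleasant for prototyping.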
Re:Best quote (Score:2)
Why do people get in such a huff about learning new programming models? Neither Lisp, nor C, nor Java requires you to learn a new programming model to start writing programs.
None of the above is the optimal programming model for each of the languages, and would not normally be used for a large scale or high performance application. However, large scale and high performance applications are not tasks for beginning programmers, and only a beginner would shy away from an opportunity to learn a new, potentially useful programming model.
Re:Best quote (Score:2)
Agreed.
The author's comment strikes me as just another case of blaming the tool when the problem is the user.
Perhaps what he's saying is that generic programming is simplicity itself in a dynamic programming language such as Common Lisp, and a syntactic nightmare in C++, where it runs head-first into that language's static bias?
Lisp without GC! (Score:2, Insightful)
TWW
Re:Lisp without GC! (Score:4, Informative)
It is not that garbage collection is too slow in Lisp; the reason he gave was that the amount of data it had to traverse was very large. The point of the system was to be as speedy as possible, and garbage collection would slow that down no matter how much or how little data you gave it to process. If you look at real-time processing projects, none of them (to my knowledge) employs a garbage collector, because that would take up valuable resources.
They made a wise decision to keep the garbage collection to a minimum so that the actual searching process would be all that was running on the boxes.
Re:Lisp without GC! (Score:1)
TWW
Re:Lisp without GC! (Score:2)
Re:Dual processors and GC? (Score:2, Informative)
IIRC the following paper details it being done on DEC Firefly processors:
Andrew W. Appel, John R. Ellis, and Kai Li. Real-time concurrent garbage collection on stock multiprocessors. In Proceedings of the 1988 SIGPLAN Conference on Programming Language Design and Implementation, pages 11--20, Atlanta, Georgia, June 1988. ACM Press.
Re:Dual processors and GC? (Score:2, Informative)
Re:Lisp without GC! (Score:3, Informative)
A real-time GC is not in the least what we need. A real-time GC makes the pauses go away. We don't care about the pauses, we're not interactive. But the real-time GC creates an even bigger overall CPU- and run-time overhead. While the visible pauses go away, the application is slower.
That is the basic problem about many of these discussions: there is no magic bullet. There are GC schemes more or less suitable for some tasks, but not for others. It is tradeoffs, guys.
BTW, at my former employer I had a system written in C with parts using the Boehm GC. It was faster than the malloc/free variant (the application's part could switch at compile time). The Boehm GC would screw ITA's system royally, though.
We could very well write a GC specially coded for our application, but that is a lot of work. And since we can do without system-visible memory allocation when answering search requests, we prefer to do that, and we hide the bulk of the data from the GC when cleaning up the system between activities.
GC is a hard problem, just as manual memory management is. For none of these you can buy or download a perfect solution.
Re:Lisp without GC! (Score:2)
Re:Lisp without GC! (Score:2)
Re:Lisp without GC! (Score:2)
Re:Lisp without GC! (Score:2)
Re:Lisp without GC! (Score:3, Interesting)
In this case they are getting the best of both worlds by using GC where appropriate, and then using special knowledge about the application to optimise away the overhead.
Paul.
Re:Lisp without GC! (Score:2, Insightful)
This is actually not right. GC does not have to (as opposed to _can_) scan everything. Some (e.g. copying) GCs scan only the objects that are referenced (live objects). And in generational GCs the set of objects scanned is usually a fraction of live objects (recently created ones).
Re:Lisp without GC! (Score:2)
A copying garbage collector is actually guaranteed to allocate at least twice as much memory from the operating system as is currently occupied by live objects. Otherwise, it couldn't copy them. Now, imagine what that does for locality of memory and cache efficiency. (This is why Appel's paper comparing GC and stack allocation [nec.com] hasn't made most people abandon the stack just yet.)
On the other hand, a mark-sweep garbage collector never moves objects, resulting in external fragmentation (also true of malloc/free), which is also bad for locality of memory and cache efficiency.
You can also use a compacting garbage collector, which works mostly like a mark-sweep garbage collector, but occasionally compacts fragmented memory as well. The downside is complexity (and thus also performance of the compacting phase).
In practice, the problem of external fragmentation isn't really much of a problem with a good allocator using the buddy-system. At least, almost any non-contrived application or benchmark should have less than 10% fragmentation using a good allocator. But that still leaves the problem of having related objects occupy nearby memory locations open (which is needed for good cache efficiency).
Copying garbage collection can be used for that purpose. But if you are using the simple two-space algorithm, you'll find that you are doing the exact opposite: the two-space (Cheney) algorithm tends to put related objects far away from each other, due to the breadth-first traversal it uses to find live objects.
So we need another more complex copying collector... But that will again slow down the performance of the collector.
One possible performance bottleneck of a mark-sweep collector is that it needs two passes: one to mark live memory, and one to collect garbage. But the copying collector is no better: first you need to traverse live objects, then you need to copy them somewhere else.
While I still haven't touched on the subject of generational and incremental garbage collection, I think you will already see that writing a good garbage collector is far from trivial. There are no simple cookbook approaches for achieving optimal performance. The only reasonable approach is experimentation, testing, and fine-tuning of parameters. Most real-life garbage collectors use a hybrid of different techniques for different types of objects (distinguished by such attributes as age, size, mutability, etc...)
LISP, language for "perfect code" (Score:5, Interesting)
I once worked on a project where we used LISP to process elements of radar data. Our reason for choosing LISP was twofold: first, we were doing list transformation, mapping and comparison. Second, and most importantly though....
we knew that when it worked, that was it, and we didn't want people buggering with it if they didn't understand it. LISP makes sure that the people writing it are going to have a better grasp on computing than the average C/C++/Java person.
Of course, the comment at the top of "If you come here thinking you've found a bug, you are wrong, look elsewhere. If you are 100% certain then remember this.... everyone relies on this; if you bugger it up that's a lot of angry people" also probably helped. But using LISP enabled us to write a small piece of very tight code that made understanding the task simple.
You can also write the most evil code in the world in LISP; variables that become functions... occasionally, excellent stuff >:-)
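A taste of the "variables that become functions" mischief: Common Lisp keeps separate namespaces for variables and functions (a so-called Lisp-2), so the same symbol can name both at once. A sketch; note that binding symbols from the COMMON-LISP package is technically reserved by the standard, though implementations generally accept the lexical version:

```lisp
;; LIST bound lexically as a variable, while still naming a function:
(let ((list '(1 2 3)))
  (list list list))
;; => ((1 2 3) (1 2 3))

;; And a variable's value can itself be a function object:
(let ((f #'1+))
  (funcall f 41))
;; => 42
```

Evil when abused, but the separate namespaces are also why Lispers can freely use names like LIST for their own data without clobbering the function.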
Re:LISP, language for "perfect code" (Score:2)
Damn right! I'm disgusted with the common argument that, "in three years, when you're gone, the 'Learn VB in 24 hours' intern is going to have to debug/extend it". That is not a rational approach to code maintenance. You wouldn't have the intern write the code; why would you have him maintain it?
Keeping a codebase clean and correct long-term is not simple. In some ways, it is fundamentally more difficult than writing new code: You don't get to pick your concepts from scratch, instead you have to discover sensible ways to alter and reuse existing ones. If you just do the first thing that works, you will turn a beautiful design into an abject mess alarmingly fast.
Of course, your application had special demands, but I believe most organizations would benefit from a similar attitude. Using "obscure" languages to discourage unconsidered changes may seem elitist, but I agree that it is probably effective. (You might compare Linus's argument about debuggers.) I like your warning, as well. :-)
Re:LISP, language for "perfect code" (Score:3, Insightful)
You make an interesting point, but the 'obscure languages' angle cannot work in a typical corporate IT environment. In this world you need to deal with legions of meatball programmers peppered with a few alpha-geeks to set direction. The realities here are:
The only way to deal with all this is to ensure that a hive mentality is enforced where coding standards and methodology are King. This ensures that, from a purely technical perspective, all code looks (somewhat) familiar to all programmers and new programmers can be put to good use in fairly short order.
Sounds boring (and maybe evil) - sure. But this has been my experience and observations at more than a few large development sites.
Re:Surely APL (Score:2)
Disclaimer: I am in no way anything like an expert in Lisp, but using things like ATOM, LIST, CAR, CDR as variable or function names, and getting away with it, is child's play.
Rule 1 of Efficient Lisp: Lisp is not functional (Score:5, Interesting)
The characteristic that really gives you benefits in Lisp is the way you can have Lisp write itself, creating little programming languages which fit each problem. They don't have the visual appeal of a specialized language written with full freedom to define the syntax, but their form still reflects the programmer's understanding of the problem, rather than the details of the solution.
People shouldn't talk about Lisp as "functional" versus "imperative" languages like C++, they should talk about Lisp as "flexible" as opposed to the inflexibility of C, which forces the programmer to do tedious, repetitive work.
Everything about Lisp facilitates this flexibility, from its simple, regular syntax to its implicit type handling.
Turing said: "This process of constructing instruction tables [i.e., programming] should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself." And Lisp is certainly well-made for this method of avoiding drudgery.
The real beauty of it comes when you have to optimize your code: rather than fiddling with the part that defines the problem, you change the bit that transforms it from a problem definition to a solution. This ability to separate the two leaves you free to optimize one problem area however you wish, without having to go around and fix the code in a thousand other places your modification breaks.
Getting back on topic, Lisp certainly allows functional programming, but sit down with Common Lisp and try to translate a C program into it line by line; you'll have very little trouble: it contains all the imperative stuff you need. For that matter, you can program C in a very functional style, using the ternary ?: operator and recursion, if you like. In either language, though, sticking to functional style as strictly as possible will hurt your performance.
Just as the standard teaching examples of C, full of gets(), sprintf(), and the like, are terrible for C code stability, the standard teaching examples of Lisp, which emphasize its functional nature, are terrible for code efficiency.
Some tasks are naturally functional, some are inherently imperative, and any large project (even most small projects!) will include both. A good language for large projects provides support for both, as it is foolish to fight the nature of the problem.
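A minimal sketch of the "Lisp writing itself" idea from the post above (WITH-TIMING and its output format are invented for illustration): a macro lets the caller state the *what* while the macro supplies the *how*, and the expansion can later be re-tuned in one place without touching any call site.

```lisp
;; A tiny problem-fitting construct: wrap any body in timing
;; instrumentation. GENSYM keeps the expansion hygienic.
(defmacro with-timing (label &body body)
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (multiple-value-prog1
           (progn ,@body)
         (format t "~&~A took ~D ticks~%" ,label
                 (- (get-internal-real-time) ,start))))))

;; Usage: the caller's code reflects the problem, not the plumbing.
;; (with-timing "search"
;;   (expensive-search request))   ; EXPENSIVE-SEARCH is hypothetical
```

To change how timing works everywhere, you edit the macro once; the thousand call sites never know.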
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:2)
Some tasks are naturally functional, some are inherently imperative, and any large project (even most small projects!) will include both. A good language for large projects provides support for both, as it is foolish to fight the nature of the problem.
I'd like to see somone post a couple of brief examples of things that were well-suited to Lisp (and would be much more difficult in C) - anyone have anything handy?
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:2)
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:3, Interesting)
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:5, Informative)
If you're interested in LISP, you should take a look at Paul Graham's excellent ANSI Common LISP [amazon.com], a wonderfully written introduction to LISP which is nonetheless a decent reference that can almost replace the much heftier Steele [amazon.com]. If you're not sure you want to spend the cash, the first couple [paulgraham.com] chapters [paulgraham.com] are online.
In this very small, very chatty book for beginners with not too much code, Graham nonetheless manages to include examples such as a ray tracer (90 lines of code); a program to dynamically generate HTML pages (119 lines of code; this program (very much expanded, but without a single rewrite) now powers Yahoo! Stores); and a complete, separate object-oriented language with multiple inheritance (89 lines; but a much more powerful OO language, CLOS, is already included with Common LISP). The last two in particular would be impossible to do as quickly or easily in C.
A much bigger LISP book I happen to have at the moment is Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp [amazon.com], which includes a whole lot of impressive and/or historically interesting examples, including ELIZA, STUDENT (solves algebraic word problems), MACSYMA (symbolic integration a la Mathematica), a Prolog interpreter and compiler, a Scheme interpreter, an optimizing LISP compiler, a natural language grammar parser, and a couple other things. I just finished (well, turned in...) a project which extended Norvig's Othello code, also from this book, to use trained neural nets (which unfortunately didn't train all that well). The coding part of this was made darn easy by the fact that Norvig's Othello function takes as inputs two functions which provide the move-selection strategies for black and white respectively - something that can't be done in a language without functional closures.
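A hedged sketch of that strategies-as-arguments idea (the names here are invented; Norvig's actual Othello code differs): because functions are first-class and close over their environment, a move-selection strategy can simply be a closure built around whatever evaluation function you like.

```lisp
;; Build a strategy as a closure over a scoring function.
;; MAKE-GREEDY-STRATEGY is an invented name for illustration.
(defun make-greedy-strategy (score-fn)
  "Return a closure that picks the move maximizing SCORE-FN."
  (lambda (moves)
    (reduce (lambda (a b)
              (if (> (funcall score-fn a) (funcall score-fn b))
                  a
                  b))
            moves)))

;; Plug in any scorer -- a heuristic, or a trained net's output:
;; (funcall (make-greedy-strategy #'abs) '(-5 2 3)) => -5
```

The game loop never needs to know whether the strategy is a table lookup, a heuristic, or a neural net; it just funcalls whatever it was handed.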
I certainly wouldn't want to do any of these in C; although all of them could be done, it would only be at the cost of a good deal of length, functionality and elegance.
In general, LISP is great for anything involving GOFAI (good old fashioned AI, i.e. non-stochastic), anything that needs to generate hierarchically nested text (e.g. HTML, XML, or LISP programs), anything that needs to be written quickly (or LISP can be used as a rapid-prototyping language), any sort of interpreter, or for any time you wished you could modify the available programming languages to build one that really suits your problem. LISP is also great for extending existing programs, which is why almost every user-extensible application uses a dialect of LISP to do the job. (e.g. emacs, AutoCAD, etc. No, VB macros for Word don't count, although it is noteworthy that LISP is useful over such a wide range of programming tasks as to be a replacement for VB and C.)
What is LISP bad at? Well, its libraries can be rather weak and nonstandard (although ANSI Common LISP itself comes with a large array of useful functions); GUI stuff, multithreading, and networking all fit in this category and are often implementation specific. (Of course, this is nothing to do with the language itself but just with what tools are available.) Its use for really low level bit-twiddling stuff is somewhat awkward. Iteration in LISP suffers somewhat from being only a little bit more powerful than iteration in C; the upside is you can still combine it with all the other great stuff in LISP, but the downside is that the parenthesis-style syntax, which is so much better for writing macros and functional code, only clutters up iterative code.
And, certain of the most powerful features of LISP, like macros and closures-as-first-level-objects, take a bit of experience to wrap your mind around, as does the functional programming paradigm. (LISP does not in any way require functional programming; it's just that while there are other languages as good as LISP at iterative code and arguably as good as LISP at OO code, there is nothing as good for functional code.) This is usually taken to mean that LISP is only suitable for CS students and AI researchers, because ordinary programmers are too dumb to get this stuff. I'm just a CS student, and I haven't had much experience with how dumb ordinary programmers are or aren't, but intuitively I think this argument is bunk.
Personally I think these techniques are just new things to learn; subtle and powerful, sure, but so is simple recursion the first time you learn it, and every programmer knows how to use that. Indeed, once you understand recursion well, functional programming and function closures are not very large conceptual leaps at all. Sometimes the mechanics of lambda closures can be slightly tricky, but no more so than referencing and dereferencing pointers in C, and with a lot greater payoff. Hell, the most complicated uses of functions as objects in LISP are a lot easier to get right IMO than even simple uses of templates in C++, and "templates" (i.e. generic functions) come for free in LISP, due to runtime type checking. (Of course, this is why no one uses C++ templates, but whatever.)
Macros are difficult to write. But then again, they are incredibly powerful, and not "necessary" very often. And it's usually *extremely* easy to understand someone else's macro code, which is all a novice would have to do anyways.
Plus there are lots of features of LISP which make it incredibly easy for beginners. Debugging in LISP is ridiculously easy, at least for programs which don't use too many functional closures or complex objects. Instead of the C paradigm where you only have one big executable main(), LISP programs are made up of lots of little functions, all of which are callable (and thus extraordinarily easily debuggable) from the top-level evaluator. There's no write-save-compile-test-debug loop; it's all together, and all very fast. Immediate feedback means more willingness to take chances, try out things, and make mistakes.
Plus, because there's no main(), your programs are always extensible. If you want to, once you're done with a function it's trivial to make a larger function which calls the other function, takes it as an input, etc.
There is no need to manage memory, no need to futz around with pointers, and no way to cause a segfault until you start optimizing. Buffer overflows are impossible. You can start with a skeleton of a program, gradually add functionality, and only add optimizations at the end when you have tested your code; and you can test every new function or optimization, so you know exactly what goes wrong when something does.
And it's fast: once you put in proper optimizations, compiled LISP is nearly as fast as C. Of course this wasn't always the case, and it's not the case for LISP before you put in type declarations. And a compiled LISP file will probably be bigger than compiled C code, especially when you add the LISP top-level eval to it. On the other hand, C is usually not as fast or small as well optimized assembly code, but there is a good reason very few people program in that anymore: because programming in C makes your code less buggy and much faster to develop. Similarly, programming LISP will almost always make your code less buggy and much faster to develop than using C. Now that compiler technology and computer hardware have made those differences almost moot, it probably makes much more sense to use LISP than C.
Of course, the result of this change has not been to drive more people to LISP, but instead to drive LISP's features into other languages. Thus we have C++ with attempts at generic functions; Java with decent OO and automatic garbage collection; Python showing the usefulness of an interactive top-level. Nowadays Perl and Python are getting functional closures and the list data structure, although their functions are not quite first-level objects and so not quite as powerful. Plus it will probably take another prefix-syntax language for macros to be copied properly.
Whether the world will realize that LISP already exists (and indeed has since the late 50's) or continue to reinvent it, I dunno. Probably the latter so long as LISP remains short of libraries that tie it down to modern computers. (Again, GUIs, multithreading, networking.) Still, it's probably worth learning LISP just so that when the same ideas come out in more "mainstream" languages years from now you'll already know and understand them.
Re:bit-twiddling in Lisp (Score:2)
Granted, this depends on what you mean by "bit-twiddling"--smashing memory based on a pointer given to you by the OS is going to be implementation-dependent.
However, for actually pulling bits out of data and re-arranging them, I find the Common Lisp routines to be great. You can assign/extract integer values to/from any consecutive block of bits, without having to compute the masks and shifts yourself. I've found this quite nice in creating code that converts to/from native IEEE floats from/to other floating-point formats.
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:2, Informative)
What is LISP bad at? Well, its libraries can be rather weak and nonstandard (although ANSI Common LISP itself comes with a large array of useful functions); GUI stuff, multithreading, and networking all fit in this category and are often implementation specific. (Of course, this is nothing to do with the language itself but just with what tools are available.) Its use for really low level bit-twiddling stuff is somewhat awkward. Iteration in LISP suffers somewhat from being only a little bit more powerful than iteration in C; the upside is you can still combine it with all the other great stuff in LISP, but the downside is that the parenthesis-style syntax, which is so much better for writing macros and functional code, only clutters up iterative code.
Multithreading is found in the commercial Common Lisp environments and in the CMUCL/x86 port. CLOCC [sf.net] maintains several libraries for cross-implementation use of non-standard features such as networking. CLIM solves the GUI problem; the problem is that there was no free CLIM implementation for a long time, due to legal issues. Finally, a free CLIM is being developed: McCLIM [mikemac.com], and I'm sure they can use help. As for iteration, perhaps your mind has been clouded by Paul Graham, who has an irrational fear of the LOOP macro. The LOOP macro, however, provides one of the most powerful iteration constructs I've seen, and it's not parenthesized like the DO macro is. Example:
(loop for x from 1 to 10 summing x do (format t "~&~A" x))
Prints out a list of numbers from 1 to 10 and the sum of them all at the end.
The equivalent DO:

(let ((sum 0))
  (do ((x 1 (1+ x)))
      ((> x 10) sum)
    (incf sum x)
    (format t "~&~A" x)))
Also the LOOP macro provides yet more keywords for all sorts of handy features which aren't so easy to do with DO; collecting, appending, finally, initially, if/else, etc... Please read the section in the HyperSpec about LOOP, Section 6.1 [xanalys.com]
I even once wrote a finite-state-machine entirely within a single LOOP macro that processed the Unix mbox format. It's quite nearly a language in itself (speaking of which, FORMAT is in a similar category, except for formatted output instead).
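A sketch of several of those LOOP keywords working together in a single form:

```lisp
;; WHEN/ELSE routing into separate collections, a running sum,
;; and FINALLY to return everything as multiple values.
(loop for x from 1 to 10
      when (evenp x) collect x into evens
      else collect x into odds
      summing x into total
      finally (return (values evens odds total)))
;; => (2 4 6 8 10), (1 3 5 7 9), 55
```

Writing the same thing with DO would mean hand-managing three accumulator variables and the final VALUES form yourself.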
I would argue that CL is better at bit-twiddling than C is. Take a look at the CLHS Section 12.1.1.3.2 [xanalys.com] and the functions BYTE, LDB, and DPB. It's a different perspective than the C view, but more interesting since you can extract and replace any number of bits that you want. Also it's not dependent on 8 bits per byte.
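For instance (the values shown are easy to check by hand):

```lisp
;; Extract the 8-bit field starting at bit 4 of #xABCD:
(ldb (byte 8 4) #xABCD)      ; => 188 (#xBC)

;; Deposit #xF into the 4-bit field starting at bit 8 of #xA0C:
(dpb #xF (byte 4 8) #xA0C)   ; => 2812 (#xAFC)
```

BYTE just describes a field by width and position; LDB and DPB do the masking and shifting for you, for fields of any width at any offset.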
Still, there are many areas where CL just doesn't have the sheer effort put into the libraries, likely due to the lack of manpower. Particularly in the Free-software category; Lisp has a tradition extending long before the current wave of Free-software and while many commercial vendors will provide good support and lots of libraries, the Free implementations often lack this. Many Lisp programmers use the commercial Lisps and have the features they want; if not they ask/pay the vendor to implement them.

Another issue is that Common Lisp is not Unix-centric, unlike *ahem* most popular languages today. CL was designed to be workable in any environment, so the designers could not take shortcuts with things like pathnames, executable formats, system libraries, or other system-dependent issues. After all, Common Lisp was conceived in the era of the Lisp Machine; Unix was just another OS in the vast array.

Finally, it is unfair to compare the Common Lisp standard against a single-implementation language such as Perl. Standards cost $$$$$ and require a great deal of effort and responsibility. If a Common Lisp implementation does not comply with the standard then it is at fault. But with Perl, whatever Larry Wall does goes. Even if it breaks all your code; too bad.
Some interesting sites with regard to libraries:
(Back to the OP's topic) Franz's Success Stories [franz.com] has plenty of examples of Lisp applications. Franz develops Allegro Common Lisp, a popular commercial CL.
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:2)
Read two numbers.
Add them together.
Print the result.
(No restriction on size other than assuming that you have enough memory)
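In Common Lisp that spec is essentially the whole program, since arbitrary-precision integers are built in (READ, +, and FORMAT are standard; the function name is invented):

```lisp
;; Read two numbers, add them, print the result.
;; Bignums are native, so there is no size limit beyond memory.
(defun add-two ()
  (let ((a (read))
        (b (read)))
    (format t "~&~D~%" (+ a b))))
```

In C, satisfying the "no restriction on size" clause means pulling in or writing an arbitrary-precision arithmetic library before you can even start.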
Re:Math (Score:2)
Re:Math (Score:2)
Re:Math (Score:2)
Multiple value returns in Lisp are more flexible than structs because the caller does not have to accept all of the values that are being returned. If you just want the first value returned, simply call the function. If you want to capture multiple values, do so. Furthermore, multiple values propagate painlessly through the function call chain.
With structs, you have only a single return value, the type of which has to be recognized by every function in the call chain in order for the values to be captured.
Primary example: getting a value from a hash table. There are really two things you want back: the value corresponding to your key, and whether the key actually had an associated value.
Lisp: "return two values: the value, and whether the key was present in the table" No call by reference needed. No pointers.
Other useful examples: the Lisp floor and ceiling functions return two values: the integer part and the fractional part that was truncated. Usually you ignore the fractional part, but sometimes it is convenient to have.
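A short sketch of the behavior described above, using only standard Common Lisp (the hash-table contents here are made up):

```lisp
;; GETHASH returns two values: the value, and whether the key was found.
(defparameter *table* (make-hash-table :test #'equal))
(setf (gethash "answer" *table*) 42)

;; Capture both values when you want them...
(multiple-value-bind (value present-p)
    (gethash "answer" *table*)
  (list value present-p))          ; => (42 T)

;; ...or just take the first by calling the function normally:
(gethash "missing" *table*)        ; => NIL (second value silently dropped)

;; FLOOR likewise returns two values, quotient and remainder:
(multiple-value-bind (q r) (floor 7 2)
  (list q r))                      ; => (3 1)
```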
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:3, Informative)
For an example of a (very basic, done by all LISP implementations, and even partly by some C compilers, e.g. gcc) optimization, see "tail recursion optimization" in your nearest LISP implementation's documentation.
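A hedged illustration of what tail recursion optimization buys you. SUM-TO is an invented example; note that the Common Lisp standard does not actually require TCO, but implementations such as CMUCL and SBCL perform it at suitable optimization settings:

```lisp
;; The recursive call is in tail position (nothing happens after it
;; returns), so a tail-call-optimizing compiler turns the recursion
;; into a jump and uses constant stack space.
(defun sum-to (n &optional (acc 0))
  (if (zerop n)
      acc
      (sum-to (1- n) (+ acc n))))   ; tail call: the last thing done

;; With TCO this runs without growing the stack:
(sum-to 1000000)                    ; => 500000500000
```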
As you state, some problems are "inherently" imperative, and some are functional, by their nature, but that has more to do with how easily their solution is formalised in either formalism than with how well that solution is then executed.
But I think one should not emphasise that property as much as the fact that LISP has a garbage collector, which C does not. It does not allow you to have memory leaks, or crashes due to multiple free()s of the same location. And it doesn't have any sprintf() that can produce buffer overruns. But OK, then you could use Java...
Also, as you state, lambdas and higher-order-functions are central to what's good about LISP.
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:2, Interesting)
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:2)
> with the extended syntax are quite powerful. I
> would say that Common Lisp is better at iteration than
> most other languages.
I totally agree. The LOOP macro isn't just a way to do for, while, and foreach like we find in most primitive languages- LOOP is an entire language in and of itself. Immensely powerful.
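For instance, a single LOOP form can iterate, filter, and accumulate into several variables at once (this snippet is just an illustration, not from the original post):

```lisp
;; Iteration, filtering, and two accumulations in one form --
;; well beyond a plain for/while/foreach:
(loop for i from 1 to 10
      when (evenp i)
        collect i into evens
      sum i into total
      finally (return (values evens total)))
;; => (2 4 6 8 10), 55
```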
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:2)
Okay, LISP doesn't lack objects, but it does lack (automatic) function dispatching depending on the type of an object. But this could easily be hacked into it, if someone wanted to.
I have read a bit about the MOP, and am really impressed. That is probably one of LISP's biggest strengths - introspection. All aspects of a program, and the language, can be determined, and controlled, by the program itself.
I have read some of the CL HyperSpec. But as I said before, I'm blinded by Scheme after reading too much of R5RS and the SRFIs.
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:2)
Okay, LISP doesn't lack objects, but it does lack (automatic) function dispatching depending on the type of an object. But this could easily be hacked into it, if someone wanted to :)
I'm not sure what you mean by this? What do you think having various methods for a generic function does if not dispatch based on the types of the arguments? Perhaps you ought to make section 7.6 one of the sections of the CLHS that you read. :-)
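A minimal sketch of that dispatch (the class and function names are invented for illustration):

```lisp
;; Methods on a generic function are selected by the class of the
;; argument -- exactly the automatic type-based dispatch at issue.
(defgeneric area (shape))

(defclass circle () ((radius :initarg :radius :reader radius)))
(defclass rect   () ((w :initarg :w :reader w)
                     (h :initarg :h :reader h)))

(defmethod area ((s circle)) (* pi (radius s) (radius s)))
(defmethod area ((s rect))   (* (w s) (h s)))

(area (make-instance 'rect :w 3 :h 4))   ; => 12, dispatched on RECT
```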
Re:Rule 1 of Efficient Lisp: Lisp is not functiona (Score:2)
you can even do stuff like:
do(something(else(again(and(again())))));
but the reason you don't is because people realized that it isn't maintainable, and you'll probably lose count of the parentheses. Of course a LISP guru would probably complain about the semicolon. Before they get too excited about their own superiority and job security, they should realize most VB types will pick up LISP very quickly, thanks to their experience with *gasp* Excel.
Excel IS strictly Functional? (Score:2)
But... a curious observation: this last semester I was encouraged for a "Teaching with Technology" class to develop an application using Excel with a limited number of Visual Basic macros thrown in. As I spent time working with the system, it occurred to me that I was actually doing functional programming. Each cell was a function call, which could only be fed the results of other cells/function calls... (any comments on whether this is actually true?)
It turned out to be a bit confining; iteration was a bit painful, and I ended up with a big state table that I was proud to simply have come up with, messy as it was. But it was also interesting to see what you can/have to come up with when confined. In the end, the application worked.
This is talking about Orbitz, not SABRE itself (Score:5, Informative)
Think of the various systems, SABRE, etc. as just different systems that are using more or less the same amalgamation of airline-provided data.
SABRE, and the other CRSes themselves are still running the big iron mainframe stuff, not LISP or Linux, and will likely remain so for a long time.
Re:This is talking about Orbitz, not SABRE itself (Score:2)
SABRE is a real-time database system with insane storage, speed and reliability requirements. It runs on the biggest, baddest mainframes money can buy, with an enormous disk storage farm. It doesn't run on a normal mainframe operating system like MVS.
Re:This is talking about Orbitz, not SABRE itself (Score:2, Informative)
Supplement would probably be a better word than replacement. The Compaq release [compaq.com] talks about the fare shopping feature. It is actually a small part of the whole Sabre system, and one that they have never liked having on the same hardware that runs flight management, crew scheduling, pricing, yield management, booking, ticketing, etc.
Re:This is talking about Orbitz, not SABRE itself (Score:2, Informative)
And I thought I'd never... (Score:2, Insightful)
Check out there Careers/Programming page (Score:2, Interesting)
ODBMS (Score:1)
Impressed, but... (Score:3, Interesting)
However it usually does do a reasonable job; the savings I can get by extra typing are minimal. My biggest gripe against such systems is that they haven't got a clue about what it is travellers actually want to do. I hardly ever want to travel 'from LHR to CDG' (i.e. specific airports). I'm usually able to get to any airport within 50 miles using public transport. So in the UK I would usually like to be able to propose Glasgow and Edinburgh as alternates, or Manchester and Leeds/Bradford, etc. But what I really want is not these simple choices but: I'll tell you where I am, and where I want to get to; now tell me about through fares on buses, trains and planes from here to there.
Given the description they give of the problem it sounds way too hard. But....
They describe what they're doing as basically enumeration of the graph of all routes between destinations. Once again, this does not mimic what a traveller does when figuring out how to get from place to place. We construct routes using waypoints - going from one regional airport to another usually involves connections via hub airports; travelling by train from, e.g., Auchtermuchty to Reading means going via Edinburgh and London. By thinking this way we reduce the number of routes under consideration to a manageable size. (This is also how game AI works. I would include a link to an article on this at http://www.gamasutra.com/ but all their articles are now members-only.)
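A toy sketch of the waypoint idea - restrict enumeration to routes that pass through a hub, shrinking the search space. All the data and function names here are invented for illustration:

```lisp
;; Tiny made-up network: hubs plus a list of (from to) direct flights.
(defparameter *hubs* '(lhr ord atl))
(defparameter *flights*
  '((gla lhr) (lhr cdg) (man lhr) (lhr jfk) (jfk bos)))

(defun direct-flights (from)
  "All (FROM dest) pairs departing FROM."
  (remove-if-not (lambda (f) (eq (first f) from)) *flights*))

(defun routes-via-hub (from to)
  "Enumerate only FROM -> hub -> TO itineraries, ignoring all other paths."
  (loop for (nil hub) in (direct-flights from)
        when (member hub *hubs*)
          append (loop for (nil dest) in (direct-flights hub)
                       when (eq dest to)
                         collect (list from hub to))))

(routes-via-hub 'gla 'cdg)   ; => ((GLA LHR CDG))
```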
Hell I'm sure they know what they're doing - sound like smart guys...
Re:Impressed, but... (Score:2)
So for instance I got a ticket from a travel agent the other day, for a train from Manchester to Cambridge. Total cost...over 200 pounds! Madness. Obviously I sent it back. The insanity is such that even the travel agents get confused.
Consumer choice sounds like a good thing, and it is in some ways. But choosing can be very difficult. And when it's NP-complete, choosing with knowledge can be nearly impossible.
Phil
Re:Impressed, but... (Score:2)
You can say LON (London Any) which covers Heathrow, Gatwick, City and Luton, IIRC. There are similar codes for New York, Tokyo etc. Obviously, BOS is only Logan and AMS is only Schiphol.
CDG is fairly crap as airports go; I'd rather go via Orly, or take the Eurostar to Gare du Nord.
Re:Impressed, but... (Score:3, Informative)
Metropolitan Areas with Multiple Airports
These codes don't specify single airports but whole areas where more than one airport is situated.
BER Berlin, Germany
BGT Baghdad, Iraq
BHZ Belo Horizonte, MG, Brazil
BUE Buenos Aires, Argentina
BUH Bucharest, Romania
CHI Chicago, IL, USA
DTT Detroit, MI, USA
JKT Jakarta, Indonesia
KCK Kansas City, KS
LON London, United Kingdom
MFW Miami, Ft. Lauderdale and West Palm Beach, FL, USA
MIL Milano, Italy
MOW Moskva, Russia
NRW airports in Nordrhein-Westfalen, Germany
NYC New York, NY, USA
PAR Paris, France
OSA Osaka, Japan
OSL Oslo, Norway
QDV Denver, CO, USA
QLA Los Angeles, CA, USA
QSF San Francisco, CA, USA
RIO Rio de Janeiro, RJ, Brazil
ROM Roma, Italy
SAO Sao Paulo, SP, Brazil
STO Stockholm, Sweden
TYO Tokyo, Japan
WAS Washington, DC, USA
YEA Edmonton, AB, Canada
YMQ Montreal, QC, Canada
YTO Toronto, ON, Canada
Re:Impressed, but... (Score:3, Interesting)
The airlines try to segregate customers into different categories by their ability and willingness to pay. They want to make sure that the business traveler who "must fly now" pays more than the vacation traveler. They regulate the inventory to sell enough cheap seats to ensure the inventory doesn't spoil (airline lingo for a flight with empty seats), yet keep enough for travelers with the need to fly and the ability to pay more. The dream is to make sure that every passenger on board a full flight paid as much as their situation supports.
it's probably a toss-up (Score:3, Interesting)
Most people who have tried developing these kinds of systems seem to move away from them over time and end up developing a single-language solution--it's simpler to maintain and debug in the long run.
Re:it's probably a toss-up (Score:1)
It is (afaik) also not clear what the Lisp code competes against on a software level.
A large part of the article is about the cluster/mainframe difference; the rest just says things like Lisp code "WITH BETTER ALGORITHMS" vs. old mainframe assembler.
It seems to get the job done, but doesn't reveal if that is because someone sat down and did his math homework properly, or because LISP is great. (And I think it is the math thing.)
Is there some better statement of the problem? (Score:2)
It's the flexibility... (Score:2)
Not really. The objects searched over (phonemes) come from a relatively unchanging set having relatively unchanging attributes (frequency, duration, level, etc.) and are mapped to a relatively unchanging set of objects (vocabulary doesn't change that fast). New service levels, attribute changes, and itineraries need to be taken into account in the airline booking issue.
In the end, it's the flexibility in adding new models and code to handle new models in a seamless manner that makes Lisp (and other dynamic languages - e.g., Smalltalk for finance, Erlang for multiprocessing phone switches) so appealing for this type of problem.
Re:It's the flexibility... (Score:2)
confused? (Score:2)
I have known about this for a bit (Score:1, Interesting)
Best pronunciation of "Lisp" (Score:1)
Yeah Yeah Yeah I know it's poor too
Good lessons (Score:3, Insightful)
Regardless of the language, I found the post insightful and informative. All of the techniques and decisions described in the post can be applied to most projects. Sure, most websites don't need to support a massive number of searches like Sabre, but programmers can apply those principles. Keeping a balanced footing isn't easy and there's always politics added to the brew, but with perseverance, programming can be something that provides a great service and tremendous personal pride.
Sabre is its own interesting case study (Score:2)
The Sabre system is maybe the more interesting case study - how engineers kept an ancient and likely antiquated system up and running in the face of massive industry/technological change.
Re:Sabre is its own interesting case study (Score:3, Funny)
Re:Sabre is its own interesting case study (Score:2)
Point was that the hardwired dumb terminals themselves went the way of the dodo about 8 years ago. There are still a few locations with them, but it's mostly a PC world now. They just came up with a nifty bit of programming to make the PC act and talk like one of those terminals within the confines of the program. Amazingly enough, it still gets the same 4-second request-to-response time anywhere on the network under normal conditions. Considering how many requests go over that system on an hourly basis, that's pretty darn impressive.
200 boxes at $3000 is not cheaper than big iron (Score:2, Interesting)
Also, ignoring whether Lisp may or may not be better suited for this problem, the algorithms described can be implemented in many languages. Indeed, many programs use all those tricks.
My experience with Common Lisp (Score:4, Interesting)
Since the day I joined Franz Inc. [franz.com] as the new Webmaster, I have been writing more code than at any previous point in my career. I have become immersed in Lisp programming, specifically AllegroCL [franz.com], which I found to be a stimulating challenge to learn. I discovered that writing Lisp is sheer joy to anyone who has ever been frustrated out of programming by the tedium of obligatory declaration of data types, allocation and de-allocation of memory and the like, or simply by the time they take to learn. To finalize my education in AllegroCL, I was tasked with replacing the Franz webserver with AllegroServe. Though I am not a slow student, I made many mistakes and found that the simplified testing of code via the AllegroCL debugger and the ability to modify a program while it is running were indispensable tools both in my education and software troubleshooting. Making use of these features, I have found that adding new code to a program is remarkably easy to do, even when that new code requires making significant structural changes. In the end, I'm always left with a program which runs as quickly as any others I use and exhibits enhanced stability and security features while maintaining a reasonable memory footprint.
Among my first tasks at Franz was familiarizing myself with Allegro Common Lisp. My interest in Lisp's long, rich and diverse history was one of the chief reasons I applied for the job, so I was happy to oblige. I've always found the history of computing to be of great interest, and Lisp has been there throughout most of the last 50 years (of currently-used languages, only Fortran predates its nativity), so I find its endurance of especial interest. Lisp has undergone a process of evolution during its lifetime spawning several dialects, one of which is Common Lisp; AllegroCL is an implementation of Common Lisp.
The aspects which I find most satisfying in AllegroCL include automatic memory management and dynamic typing of data. Both of these features eliminate a tremendous amount of tedium from coding and allow me to get more work done in less time. I was never a serious programmer before I was introduced to Lisp, but now I've found a passion which outweighs my penchant for computer gaming. In the past, I would frequently spend much of my free time mastering the newest reason to own a 3d-accelerated video card, but recently I've found that I have more to show afterwards if I write code for fun, as evidenced by the chatroom software I wrote as an educational exercise which can be seen in production on my server at home, here [antidistortion.net] (running on AllegroServe). It took a little longer to write the chat software than it usually takes me to master a new game, but at a total of 16 hours, it was less than half the time that most games take to complete. I began working more and producing a tremendously increased level of output, all without the slightest increase in my stress level.
After spending a couple of months with Franz, familiarizing myself with my responsibilities as Webmaster while learning Allegro Common Lisp, I was tasked with converting the Franz website from the Apache [apache.org] webserver to an AllegroServe [franz.com]-based solution, which entailed writing a webserver which used AllegroServe at its core and provided all of the features which I found in Apache, while adding a few site-specific features. AllegroServe's chief developer, John Foderaro, and I were able to complete this task in time for the recent release of AllegroCL 6.1. The speed of development under AllegroCL was due in no small part to the ACL debugger, of which I made prodigious use early on. The ability to inspect running code and make modifications at the point of failure not only made it a simple matter to identify and fix bugs, but it was also an invaluable educational tool. Initially, I wrote bad code - lots of bad code - but every mistake I made was immediately exposed and resolved through liberal application of this handy tool. The ability to directly interact with data in a running program provided education that extended beyond the scope of any single programming language; my ability to visualize software structure and the flow of data was greatly enhanced.
After a few weeks of use, I began to realize that I wasn't having more than one bug in my code every few days - needless to say, I was elated. Until this point, I was working on relatively simple aspects of the webserver, such as the Franz menu generation, customer survey, and trial download sections. This accelerated rate of learning gave me enough positive feedback that I felt comfortable taking on more ambitious segments of the project. After I progressed through the header, menu, and footer-wrapping code which provides the interface to my earlier menu generator's output on Franz' "lhtml" pages, I came to the logging facility. By far, writing the code to manage the log handling was the most challenging aspect of the webserver's design so far. It was at this point that John and I came to realize that we would need to significantly enhance the virtual-host capabilities of AllegroServe to provide such services as separate access and error log streams for each individual virtual host. Despite the challenge, John managed to implement these changes in less time than it took me to write the code to handle formatting the logfiles in a manner compatible with Apache's output, which Franz especially required to enable the continued use of certain website log analysis tools. The two of us had completely changed the manner in which AllegroServe handled logging in a mere two days. John also eventually added excellent support for running standard CGI programs which would have their own log streams, and I made use of the added functionality to support a "cgiroot" which allows the Apache-like feature of being able to specify a path in which cgi programs will reside while sending any cgi log output to the vhost error log. I would encourage any current Apache users who wish to try out AllegroServe to make use of this feature when configuring a server; it makes CGI installation and use a snap.
After I'd written the bulk of my contribution to our system, I hit upon another necessary feature: the ability to include in-tree access control files akin to ".htaccess" files under Apache. This was a significantly more complex challenge than the logging and virtual host modifications John and I had previously added, due to the depth of the AllegroServe feature-set we would have to make available for modification within these files, and the associated security concerns. This obstacle took a fair amount of time to surmount: John made significant changes throughout AllegroServe, and we went through a great deal of testing to ensure that no security risks had been created. In the end, we were satisfied that we had made a very worthwhile addition to the webserver.
I continued writing interface and configuration code and enlisted John's expert help whenever I would find a feature AllegroServe lacked, and we concluded the conversion with a version of the Franz webserver that has only required minor modifications since. When I had ironed out any remaining bugs, of which there were fortunately very few, John assisted me in profiling our code to assess its speed bottlenecks. After heavily load-testing the server, we discovered that the slowest part of the code was that used to check the timestamps on files for the purposes of updating our cache. This was greatly satisfying, because even this slowest code was so fast that we could not consider it a problem. We also discovered excessive memory waste within a few seemingly clean segments of code: we were using a dynamically-sized string creation function which relies upon multiple different data types for the sake of convenience. We converted this to make use of a large fixed-size array which would contain the string, even if it grew as long as it possibly could, and halved the server's memory usage. Bandwidth load testing showed that we had an extremely fast server - we were able to utilize around 850-900KB/sec. across a 10 megabit network when running the system on an Intel Celeron 533. Additionally, thanks to our prior memory-usage enhancement which came up during profiling, we were only using a total of 30MB of RAM for the webserver, cache and all.
I am very satisfied to have had a hand in such a successful project, especially successful considering that I was a rank novice programmer when I began work on it. The speed with which I learned to program in AllegroCL was an entirely new phenomenon to me, one which has enriched my computer usage and allowed me to express my ideas for software in code, something I never had the capability of doing in the past due to my unwillingness to suffer through the tedium programming had historically presented me with. When I found myself attaining a satisfactory level of programming ability, I was struck by the ease of writing clean and modular code on the first attempt. Augmenting that ability, the ease of adding and restructuring AllegroCL code to a running or non-running program, especially with the aid of the ACL debugger, greatly decreased both my development time and my frustration while further enhancing my level of programming skill. I have learned a great deal about Lisp, AllegroCL, and programming in general over the course of this project, and without it I would not have had the chance to make such a satisfying acquaintance with Allegro Common Lisp, which has become my programming language of choice.
Some "semi-official" comments (Score:5, Informative)
I am working for ITA and would like to comment on some issues brought up here:
1) As said, we talk Orbitz here, and not SABRE. Currently, Orbitz
uses our software for domestic US flights, not for international.
2) Our engine does not use a functional programming style, rather the opposite. Still, we found that Lisp is a great advantage. While each hacker here has his or her own preferences for why he/she likes Lisp, the key elements (as I see them) are:
2a) macros, especially macros that allow us to define new iteration constructs. C programmers can think of being able to write their own for/while/if as seems appropriate for the task at hand. Especially with-[whatever] constructs, but also nice tricks with destructuring-bind.
2b) scope: working anonymous functions with static scope. Kind of like Java's inner classes, but in 1/10 of the code lines.
2c) the aforementioned destructuring-bind, which frees you from a lot of boring and error-prone tree-parsing tasks in a snap.
2d) compile-time computing, a key element in making our software fast without cluttering it up by expanding manually written source code by a factor of 100, or by inventing ad-hoc code generators which need to be debugged after they have broken your system for weeks. Macros can use the full language at compile time, and macros can "walk" their arguments at compile time to find interesting things to do with them. Also see define-compiler-macro to get an idea of what makes Lisp code fast while maintaining elegance (use with care, though).
2e) safety. A language without optional overflow checking of integers
is a toy at best and dangerous at worst.
2f) debugging and testing with the read-eval-print loop (REPL). Like the gdb prompt for evaluating code, but you use the native language and you have the full language. Or better, like a shell where things aren't echoed in ASCII and re-parsed; you get the real objects, which you can play with (send messages as defined in your system). The debuggers in Allegro and CMUCL are rather crappy, IMHO, but the REPL and ultra-fast re-compilation and loading of single functions (a standard feature of every Lisp) - used for adding debugging print statements - more than make up for that.
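Hedged sketches of points 2a and 2c (the DO-RANGE macro and its name are invented here for illustration, not taken from ITA's code):

```lisp
;; 2a: a user-defined iteration construct -- the "write your own for"
;; that the post mentions.
(defmacro do-range ((var from to) &body body)
  "Iterate VAR from FROM up to (but not including) TO."
  `(do ((,var ,from (1+ ,var)))
       ((>= ,var ,to))
     ,@body))

(let ((acc '()))
  (do-range (i 0 5) (push i acc))
  (nreverse acc))            ; => (0 1 2 3 4)

;; 2c: DESTRUCTURING-BIND takes a tree apart in one step:
(destructuring-bind (op (a b) &rest more) '(+ (1 2) 3 4)
  (list op a b more))        ; => (+ 1 2 (3 4))
```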
Keep in mind that every one of our Lisp hackers could contribute a list of similar length; this is just what *I* like.
For the record, I like C++, but I couldn't absorb all the application-specific knowledge I need while spending my day figuring out C++ specialities and keeping them swapped in. C++ is for full-time C++ coders only.
Re:Some "semi-official" comments (Score:2)
Obviously, for errors that are foreseen, you can set up proper error-handlers in a robust, flexible way, as usual for Common Lisp. The problem is unforeseen errors, such as stupid typos in code that were never exercised, and cause run-time type errors. The default action in Lisp is to present the debugger prompt.
A "core dump" is an OS-dependent concept--a C program does not know what to do when it dumps core, the OS does. However, you can typically "save" an executable image of a Lisp session.
One can provide a value for *debugger-hook*, a special variable that contains a function which will be called before the debugger is entered. In that handler function, one could include a "save image" command, then a jump to some "reset" point in the main program loop, to start from scratch.
Then, one can probably debug the saved image at leisure. I have never tried this, so I don't know if most vendors' Common Lisp implementations can get you to the debugger in the error condition, preserving the call stack so that you know where the error occurred.
Alternatively, you could call a system command to dump your core for you. What your debugger does with such a core file is a good question.
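A sketch of the *debugger-hook* approach described above. SERVE-ONE-REQUEST is a hypothetical application function, and the image-saving call is left as a comment because it is implementation-specific:

```lisp
;; Catch otherwise-unhandled errors: report them, optionally save an
;; image, and jump back to a reset point instead of landing in the
;; interactive debugger.
(defun crash-handler (condition hook)
  (declare (ignore hook))
  (format *error-output* "~&Unhandled error: ~A~%" condition)
  ;; an implementation-specific "save image" call would go here
  (throw 'main-loop-reset nil))     ; non-local exit past the debugger

(setf *debugger-hook* #'crash-handler)

;; The main program loop establishes the reset point:
(loop
  (catch 'main-loop-reset
    (serve-one-request)))           ; hypothetical application function
```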
Re:Some "semi-official" comments (Score:2)
I wonder if any systems have enough introspection that you can capture the call stack in a variable before dumping, for later inspection in the image?
The problem of the OS core dump that I was thinking about is somewhat more subtle: when you dump core, the PC is at the point where you are making the system call. When you bring the core image up in the debugger, can you continue executing machine code where that system call *would have* returned? Because then, in your code, you could have something like
;;; this is my runtime error handler
(progn (dump-core) (invoke-debugger))
of course, dump-core doesn't return in the normal execution flow, but later, in the core debugger, can you simply continue execution into the Lisp debugger, and then get full access to the backtrace inspection, etc.?
This problem isn't as hard as they make it sound (Score:2)
LAX -> JFK
JFK -> SFO
SFO -> BOS
It shouldn't take a smart computer to rule those out. Also, the possibility of 5000 (going) x 5000 (returning) x 100000 (fares) is ludicrous. Plus, airlines use hub airports, so there are only a limited number of logical flights.
And to top it all off, you don't have to consider every combination. You can start with a list of (an arbitrary number) 50 likely flights by price, time, speed, etc. and try their connections. You just do a simple query, pull up the 5000, and before even eliminating illogical and conflicting ones, pull a few off the top of the list.
Re:This problem isn't as hard as they make it soun (Score:2)
The real reason why people haven't done this "simple" task is that it ISN'T as simple as it sounds and that "simple" solutions turn out to be woefully inadequate when confronted with "real-world" data and problems.
Re:This problem isn't as hard as they make it soun (Score:2)
Bozo and I can't get access to it, no matter how much we pay, on account of the airlines (and travel services) don't want me to know what possible prices are available. Their business depends on it. If everyone knew there were a dozen open seats in first class and 50 in coach on the flight I wanted to take from LA to Boston, they couldn't get away with their pricing scheme.
Re:This problem isn't as hard as they make it soun (Score:2, Insightful)
Unfortunately you're just talking flights and not fares. [Hint: if you think you pay a price for a flight you board like you do on a bus, you're wrong.]
Another large Lisp project (Score:3, Interesting)
We travel folks pick the strangest languages... ;) (Score:3, Interesting)
Other guy: So, what's your system coded in?
Travel guy: Well, there's a little C for API glue, but about 99% of it is in (LISP, Tcl, etc).
The reactions are lots of fun, from confusion to disbelief to horror.
Re:Crazy stuff. (Score:2)
Also what's the differences in Power utilisation?
Look at the big boys. (Score:3, Informative)
For something a little more practical and realistic, the extremely-fast yet value priced Compaq AlphaServer [compaq.com] rings in at 47 GFLOPS.
Granted, FLOPS aren't a very good judge of speed for this application, but they are easy stats to find. If you really want a standardized test, take a look at the TPC-C stats [tpc.org] for the fastest cluster machines in the world. These more accurately reflect the kind of performance stats you're looking for in relation to this article.
Re:Crazy stuff. (Score:1)
This is a bit like comparing a fleet of Ford Escorts with a Peterbilt.
You can buy a lot of low end fords for the same price as a 50 ton truck, but all the family saloons put together will not transport 50 tons.
The current mainframe technology, IBM's zSeries, is extremely powerful and so highly configurable that each "mainframe" is effectively a one-off with different numbers of CPUs, memory size, channels (I/O buses) and interconnects; it is very difficult to compare with other architectures.
Some years ago IBM posted the highest TPC-A benchmark ever using a cluster of mainframes and the venerable IMS database software; this configuration was some 5 times faster than the previous fastest benchmark, which used Oracle and a DEC Alpha cluster.
Given that the Alpha chip hasn't moved on much since then, and that a 1GHz+ Pentium is roughly equivalent to a Compaq Alpha chip, this differential still holds.
On the other hand you don't really get much of a zSeries box for $1,000,000. But the power consumption and aircon required for 400 Intel processors has got to eat up some of the differential.
Re:Crazy stuff. (Score:1)
The IBM mainframe I used at a very large life insurance company had 11 400 MHz CPUs that compared roughly to Intel CPUs, so I assume that they were PowerPC-like chips. 5 were dedicated to development and testing. One nice feature was that the L2 caches were shared across all the CPUs, unlike the Intel model, so process affinity wasn't a very big deal.
CPU resources are very expensive on a mainframe, but you do have the option of features like hot swappable CPUs and main memory. Mainframe uptimes can be 25 years; you can't do that with most computers, because you need to shut them down to upgrade CPUs and memory.
Supercomputers are another story. You can buy SGI hardware that is built around rendering images and pumping them through some very high-bandwidth channels for display walls and the like. But I don't know anything about supercomputers.
Joe
Re:Power (Score:3, Informative)
Re:Crazy stuff. (Score:3, Funny)
I wish I had 200 of those babies. My 3D Studio rendering would fly like nobody's business.
Would it be fair to say that you are imagining a beowulf cluster of these?
Ahem . . . . Bad joke, I know.
Original Title: Inside Orbitz (Score:2, Informative)
--pg
Re:ITA is Hiring! (Score:2)
The goal is to get candidates to submit code samples of their work that actually address the kinds of problems that we work on, so if you can do the puzzles, send us your code and a resume, and there's a great chance you'll be hauled in for an interview at one of the best tech companies in the country!
Keep in mind that we're not just a LISP shop. I'm in the operations group, and I do a lot of Perl and C++ work. Submit code in any language you like!
Re:ITA Software (Score:2)