Common Lisp: Inside Sabre

bugbear writes "I just got permission from the author (Carl de Marcken of ITA Software) to publish this email, which describes the inner workings of Sabre, the flight search software that the airlines and travel agencies use. It is a case study in cheap Linux/Intel, NT/Intel and HP-UX boxes replacing mainframes, and also the use of Lisp and other languages in a server-based app." Update: 01/16 13:45 GMT by H: RawDigits writes "Common Lisp: Inside Sabre - correction. The Lisp engine is used by Orbitz, and not Sabre. Sabre still maintains mainframe systems for their booking. I should know, I am sitting in the Orbitz NOC right now ;)"
  • I hadn't realized... (Score:4, Interesting)

    by boopus ( 100890 ) on Wednesday January 16, 2002 @04:24AM (#2847281) Journal
    This is the first use of Lisp I've seen in an environment where performance was the main goal. This seems like what I've always been told any "reasonable person" would use C for. Is it common for Lisp to be used in mission-critical, high-volume computing? Or was the point of using Lisp the fare-searching algorithm? My Lisp knowledge is limited to one semester of Scheme, so I'm pretty ignorant.
    • by entrox ( 266621 ) <slashdot&entrox,org> on Wednesday January 16, 2002 @05:57AM (#2847395) Homepage
      Lisp doesn't need to be slow at all. You're thinking of the old 70's Lisp, which was usually interpreted and ran slowly. Today's Lisp implementations can also be compiled in addition to being interpreted, which results in a big performance boost (lagging only slightly behind C, but faster than Java). Commercial Lisps capable of compiling include, for example, Allegro CL [franz.com] and LispWorks [xanalys.com].
      This isn't limited to the commercial ones: CMUCL [cons.org] and SBCL [sourceforge.net] also compile to native code. The compilers are optimizing (you can choose between varying degrees of speed, safety, debuggability and compile speed) and you can even enter assembler code or disassemble single expressions.
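
      For illustration, here's a small example of those optimization qualities in standard Common Lisp (the type declarations are what let the compiler generate tight code; the exact output of DISASSEMBLE is implementation specific):

      (defun dot-product (a b)
        ;; Trade safety and debuggability for speed, as discussed above.
        (declare (optimize (speed 3) (safety 0) (debug 0))
                 (type (simple-array double-float (*)) a b))
        (loop for i of-type fixnum below (length a)
              sum (* (aref a i) (aref b i)) of-type double-float))

      ;; (disassemble 'dot-product) then shows the native code emitted.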
    • Contrary to what most C people think, performance is very dependent on what you're doing and how it was designed. I've seen some benchmarks where a well designed compiled Common Lisp program beat out a well designed FORTRAN program for heavy duty number munching. I wish I could find that bleeding link again!
    • by epepke ( 462220 ) on Wednesday January 16, 2002 @02:51PM (#2849907)

      The vast majority of language preferences reduce to religion, and most of the people holding them are dilettantes who know a dozen languages or fewer.

      In reality, LISP isn't slow, and it really never was, compared to the alternatives. About 20 years ago, some deluded souls converted some heavily numerical FORTRAN code to LISP and found out it ran about 20% faster on LISP. I call them deluded because they were under the false impression that any of the detractors of LISP would care. They don't.

      People prefer languages primarily for cultural reasons. LISP fits certain cultures and doesn't fit others. You can say the same about FORTRAN, C++, Java, Visual Basic, or any other language you like. Pseudo-quantitative arguments come later as justifications to be used in arguments.

      Beyond that, different languages are good for different things, but it's almost impossible to have a discussion about it. If we were to have such a discussion, I would say that LISP is a great choice for anything involving graph theory, which obviously includes a reservation system.

      Me, I'm in the middle of writing a game engine and editor that's built in C and C++, does the plotting in an embedded Scheme, uses a Postscript-like code for run-time, interfaced with the Scheme, and under OSX uses Objective C for the user interface layer. So, what do I know anyway?

  • When the last big Lisp story came up on /. I spent a fair amount of time following the Lisp newsgroups. The airline system came up a couple times and a few very interesting posts were made. I haven't found anything yet with groups.google.com but I remember reading quite a bit more than what was in the email linked to in the main story....


    Go forth ye google hunters! I'm going to bed.

  • by Anonymous Coward
    I expected to see a report on how Sabre works... not a long diatribe - 'oooh, look at us - we do it faster!' - about the replacement software these guys have cooked up.

    Well done on getting some free advertising.
  • by niola ( 74324 ) <jon@niola.net> on Wednesday January 16, 2002 @04:33AM (#2847292) Homepage
    I did some research a few months ago on Lisp since I am not very familiar with it and I discovered that Yahoo stores uses [paulgraham.com] Lisp.

    It would seem to me that if it can power 14,000 e-commerce sites for the largest web network, it must be pretty scalable.

    Lisp, due to its recursive nature, is often used in AI because it can perform operations with lower overhead.

    --Jon
    • His logic seems to be related to the reason NeXTStep/OpenStep/MacOSX use Objective-C instead of the more common C++. If it's best for you, use it; it'll either fail miserably or become the secret weapon that empowers you ahead of your competitors.
  • Best quote (Score:3, Interesting)

    by Dr. Tom ( 23206 ) <tomh@nih.gov> on Wednesday January 16, 2002 @04:38AM (#2847298) Homepage
    They use Lisp (with a little bit of C++), and they find that some of their customers can't program very well in Lisp. Typical Lisp education teaches inefficient techniques. Furthermore:

    "Of course, with things like STL and Java, I think programmers of other languages are also becoming pretty ignorant."

  • Lisp without GC! (Score:2, Insightful)

    by nagora ( 177841 )
    How very odd... I'd have liked a bit more on why they used Lisp, since the reason many people normally give is the garbage collection, which isn't being used here. Why is GC too slow in Lisp when there are years of experience behind it?

    TWW

    • Re:Lisp without GC! (Score:4, Informative)

      by sarcast ( 515179 ) on Wednesday January 16, 2002 @04:57AM (#2847321)
      Why is GC too slow in Lisp when there are years of experience behind it?

      It is not that the garbage collection is too slow in Lisp; the reason he gave is that the amount of data it had to go through was very large. The point of the system was to be as speedy as possible, and garbage collection would slow that down no matter how much or how little data you gave it to process. If you look at real-time processing projects, none of them (to my knowledge) employ a garbage collector, because that would take up valuable resources.
      They made a wise decision to keep the garbage collection to a minimum so that the actual searching process would be all that was running on the boxes.
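
      As a hedged sketch of that style (the names are invented; ITA's actual code is not public): all working storage is preallocated once, so the steady-state search loop conses nothing and the collector has nothing to do.

      (defconstant +max-results+ 10000)

      (defstruct search-scratch
        ;; Fixed-size, reusable result buffers instead of fresh lists.
        (fares   (make-array +max-results+ :element-type 'fixnum))
        (flights (make-array +max-results+ :element-type 'fixnum))
        (count   0 :type fixnum))

      (defvar *scratch* (make-search-scratch))   ; reused across queries

      (defun record-result (fare flight)
        "Store one result in preallocated storage rather than consing."
        (let ((i (search-scratch-count *scratch*)))
          (when (< i +max-results+)
            (setf (aref (search-scratch-fares *scratch*) i) fare
                  (aref (search-scratch-flights *scratch*) i) flight
                  (search-scratch-count *scratch*) (1+ i)))))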

      • Yes, you're right. It hadn't sunk in that they were not using any GC at all.

        TWW

      • Actually, I'm not totally sure that this was the right decision. Some of the generational garbage collector schemes could have given much the same behaviour: they don't touch objects that don't change, and their basic data set doesn't change.
    • The C/C++ memory allocator is about as slow as a good Lisp or Java garbage collector. If you write performance critical code, the first thing to do is to look at your memory allocation and usage patterns--in any language.
        • No, look at what you're passing around. In C/C++, probably the biggest resource hog in untuned code is passing around big ugly structures (esp. strings) instead of a pointer, reference, or a state flag.
        • Whether copying or sharing is faster really depends a lot on the program and the hardware. "Everything is pointers" can be just as devastating to performance as "nothing is pointers". That's probably why C/C++ are so popular, since they give the programmer control over this important aspect of computation. Languages like Lisp don't, and their optimizers cannot infer the necessary optimizations automatically.
    • A garbage collector has to scan all the data in memory to decide which bits are needed and which bits are garbage. If you have gigabytes of data, or you allocate lots of data and therefore need to run the GC a lot to reclaim it, then the GC is going to take a correspondingly long time.

      In this case they are getting the best of both worlds by using GC where appropriate, and then using special knowledge about the application to optimise away the overhead.

      Paul.

      • by jonis ( 27823 )
        > A garbage collector has to scan all the data in memory to decide which bits are needed and which bits are garbage.

        This is actually not right. GC does not have to (as opposed to _can_) scan everything. Some (e.g. copying) GCs scan only the objects that are referenced (live objects). And in generational GCs the set of objects scanned is usually a fraction of live objects (recently created ones).
        • While technically true, the difference is really not that important in practice, as there are a lot of other problems involved in building a good garbage collector (and for any practical program and garbage collector, the size of the live objects would be of the same order as the size of memory requested from the operating system).

          A copying garbage collector is actually guaranteed to allocate at least twice as much memory from the operating system than is currently occupied by live objects. Otherwise, it couldn't copy them. Now, imagine what that does for locality of memory, and cache efficiency (this is why Appel's paper comparing gc and stack allocation [nec.com] hasn't made most people abandon the stack just yet).

          On the other hand, a mark-sweep garbage collector never moves objects, resulting in external fragmentation (also true of malloc/free), which is also bad for locality of memory and cache efficiency.

          You can also use a compacting garbage collector, which works mostly like a mark-sweep garbage collector, but occasionally compacts fragmented memory as well. The downside is complexity (and thus also performance of the compacting phase).

          In practice, the problem of external fragmentation isn't really much of a problem with a good allocator using the buddy-system. At least, almost any non-contrived application or benchmark should have less than 10% fragmentation using a good allocator. But that still leaves the problem of having related objects occupy nearby memory locations open (which is needed for good cache efficiency).

          Copying garbage collection can be used for that purpose. But if you are using the simple two-space algorithm, you'll find that you are doing the exact opposite of that: the two-space algorithm tends to put related objects far away from each other, due to the breadth-first search it uses to find live objects.

          So we need another more complex copying collector... But that will again slow down the performance of the collector.

          One possible performance bottleneck of a mark-sweep collector is that it needs two passes: one to mark live memory, and one to collect garbage. But the copying collector is no better: first you need to traverse live objects, then you need to copy them somewhere else.

          While I still haven't touched on the subject of generational and incremental garbage collection, I think you will already see that writing a good garbage collector is far from trivial. There are no simple cookbook approaches for achieving optimal performance. The only reasonable approach is experimentation, testing, and fine-tuning of parameters. Most real-life garbage collectors use a hybrid of different techniques for different types of objects (distinguished by such attributes as age, size, mutability, etc...)
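
          Since we've gone this deep, here is a toy Cheney-style two-space copying collector, purely as a sketch: the heap layout and names are invented (a real collector works on raw memory, not Lisp vectors), but it shows the forwarding-pointer trick and why only live objects are ever touched.

          (defparameter *semi-size* 64)
          (defparameter *from* (make-array *semi-size* :initial-element nil))
          (defparameter *to*   (make-array *semi-size* :initial-element nil))
          (defparameter *free* 0)  ; allocation pointer in the current space

          (defun forward (ptr)
            "Copy the two-cell object PTR references into to-space, once;
          a forwarding marker left in from-space makes repeats cheap."
            (if (and (consp ptr) (eq (first ptr) :ref))
                (let ((addr (second ptr)))
                  (if (eq (aref *from* addr) :forwarded)
                      (list :ref (aref *from* (1+ addr)))
                      (let ((new *free*))
                        (setf (aref *to* new)         (aref *from* addr)
                              (aref *to* (1+ new))    (aref *from* (1+ addr))
                              (aref *from* addr)      :forwarded
                              (aref *from* (1+ addr)) new)
                        (incf *free* 2)
                        (list :ref new))))
                ptr))  ; immediates are not heap objects, nothing to do

          (defun collect (roots)
            "Cheney scan: copy the roots, then sweep to-space left to
          right, copying whatever the copied objects reference."
            (setf *free* 0)
            (let ((new-roots (mapcar #'forward roots)))
              (loop with scan = 0
                    while (< scan *free*)
                    do (setf (aref *to* scan) (forward (aref *to* scan)))
                       (incf scan))
              (rotatef *from* *to*)  ; flip the semi-spaces
              new-roots))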

  • by MosesJones ( 55544 ) on Wednesday January 16, 2002 @05:05AM (#2847335) Homepage

    I once worked on a project where we used LISP to process elements of radar data. Our reason for choosing LISP was twofold: firstly, we were doing list transformation, mapping and comparison. Secondly, and most importantly though....

    we knew that when it worked, that was it, and we didn't want people buggering with it if they didn't understand it. LISP makes sure that the people writing it are going to have a better grasp on computing than the average C/C++/Java person.

    Of course, the comment at the top of "If you come here thinking you've found a bug, you are wrong, look elsewhere. If you are 100% certain then remember this.... everyone relies on this; if you bugger it up that's a lot of angry people" also probably helped. But using LISP enabled us to write a small piece of very tight code that made understanding the task simple.

    You can also write the most evil code in the world in LISP, variables that become functions... occasionally, excellent stuff >:-)
    • we didn't want people buggering with it if they didn't understand it.

      Damn right! I'm disgusted with the common argument that, "in three years, when you're gone, the 'Learn VB in 24 hours' intern is going to have to debug/extend it". That is not a rational approach to code maintenance. You wouldn't have the intern write the code; why would you have him maintain it?

      Keeping a codebase clean and correct long-term is not simple. In some ways, it is fundamentally more difficult than writing new code: You don't get to pick your concepts from scratch, instead you have to discover sensible ways to alter and reuse existing ones. If you just do the first thing that works, you will turn a beautiful design into an abject mess alarmingly fast.

      Of course, your application had special demands, but I believe most organizations would benefit from a similar attitude. Using "obscure" languages to discourage unconsidered changes may seem elitist, but I agree that it is probably effective. (You might compare Linus's argument about debuggers.) I like your warning, as well. :-)

      • You make an interesting point, but the 'obscure languages' angle cannot work in a typical corporate IT environment. In this world you need to deal with legions of meatball programmers peppered with a few alpha-geeks to set direction. The realities here are:

        • software change is constant
        • organizational change is constant
        • code development is a business process

        The only way to deal with all this is to ensure that a hive mentality is enforced where coding standards and methodology are King. This ensures that, from a purely technical perspective, all code looks (somewhat) familiar to all programmers and new programmers can be put to good use in fairly short order.

        Sounds boring (and maybe evil) - sure. But this has been my experience and observations at more than a few large development sites.

  • by Nindalf ( 526257 ) on Wednesday January 16, 2002 @05:41AM (#2847372)
    The system has state, which you don't feed entirely into your top-level query; rather, you examine the state, and sometimes change it, from wherever in the program flow you need the data.

    The characteristic that really gives you benefits in Lisp is the way you can have Lisp write itself, creating little programming languages which fit each problem. They don't have the visual appeal of a specialized language written with full freedom to define the syntax, but their form still reflects the programmer's understanding of the problem, rather than the details of the solution.
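
    For instance, here's a quick sketch of such a little language (the rule language and names are invented for illustration): a macro that turns a declarative fare rule into an ordinary function.

    (defmacro define-fare-rule (name (&rest params) &body conditions)
      "Expand a declarative rule into a plain predicate function."
      `(defun ,name ,params
         (and ,@conditions)))

    (define-fare-rule saturday-stay-discount (depart-day return-day)
      (member depart-day '(:thu :fri))
      (eq return-day :sun))

    ;; (saturday-stay-discount :fri :sun) => T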

    People shouldn't talk about Lisp as "functional" versus "imperative" languages like C++; they should talk about Lisp as "flexible", as opposed to the inflexibility of C, which forces the programmer to do tedious, repetitive work.

    Everything about Lisp facilitates this flexibility, from its simple, regular syntax to its implicit type handling.

    Turing said: "This process of constructing instruction tables [i.e., programming] should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself." And Lisp is certainly well-made for this method of avoiding drudgery.

    The real beauty of it comes when you have to optimize your code: rather than fiddling with the part that defines the problem, you change the bit that transforms it from a problem definition to a solution. This ability to separate the two leaves you free to optimize one problem area however you wish, without having to go around and fix the code in a thousand other places your modification breaks.

    Getting back on topic, Lisp certainly allows functional programming, but sit down with Common Lisp and try to translate a C program into it line by line; you'll have very little trouble: it contains all the imperative stuff you need. For that matter, you can program C in a very functional style, using the ternary ?: operator and recursion, if you like. In either language, though, sticking to functional style as strictly as possible will hurt your performance.

    Just as the standard teaching examples of C, full of gets(), sprintf(), and the like, are terrible for C code stability, the standard teaching examples of Lisp, which emphasize its functional nature, are terrible for code efficiency.

    Some tasks are naturally functional, some are inherently imperative, and any large project (even most small projects!) will include both. A good language for large projects provides support for both, as it is foolish to fight the nature of the problem.
    • Some tasks are naturally functional, some are inherently imperative, and any large project (even most small projects!) will include both. A good language for large projects provides support for both, as it is foolish to fight the nature of the problem.

      I'd like to see someone post a couple of brief examples of things that were well-suited to Lisp (and would be much more difficult in C) - anyone have anything handy?

      • An RPN calculator? Back when I used to play with Scheme in school (Scheme is the toy version of Lisp), I noticed that anything with stack-based logic was natural in Scheme. The C implementation was considerably less elegant in the end.
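
        For instance, a minimal RPN evaluator in Common Lisp (my own sketch, not from that class): operands are pushed, operators pop two and push the result.

        (defun rpn-eval (tokens)
          (let ((stack '()))
            (dolist (tok tokens (first stack))
              (if (numberp tok)
                  (push tok stack)
                  ;; An operator: apply it to the top two operands.
                  (let ((b (pop stack))
                        (a (pop stack)))
                    (push (funcall tok a b) stack))))))

        ;; (rpn-eval (list 3 4 #'+ 5 #'*)) => 35, i.e. (3 + 4) * 5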
        • Uh... Scheme isn't a toy. It doesn't have the super-expansive libraries of Common Lisp, but it's used in real world applications all the time. Hell, a month or so back, Transmeta posted to comp.lang.lisp, looking for someone to continue working on some huge Scheme app that they used for design.
      • by ToLu the Happy Furby ( 63586 ) on Wednesday January 16, 2002 @01:10PM (#2849225)
        I'd like to see someone post a couple of brief examples of things that were well-suited to Lisp (and would be much more difficult in C) - anyone have anything handy?

        If you're interested in LISP, you should take a look at Paul Graham's excellent ANSI Common LISP [amazon.com], a wonderfully written introduction to LISP which is nonetheless a decent resource that can almost replace the much heftier Steele [amazon.com]. If you're not sure you want to spend the cash, the first couple [paulgraham.com] chapters [paulgraham.com] are online.

        In this very small, very chatty book for beginners with not too much code, Graham nonetheless manages to include examples such as a ray tracer (90 lines of code); a program to dynamically generate HTML pages (119 lines of code; this program (very much expanded, but without a single rewrite) now powers Yahoo! Stores); and a complete, separate object-oriented language with multiple inheritance (89 lines; but a much more powerful OO language, CLOS, is already included with Common LISP). The last two in particular would be impossible to do as quickly or easily in C.

        A much bigger LISP book I happen to have at the moment is Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp [amazon.com], which includes a whole lot of impressive and/or historically interesting examples, including ELIZA, STUDENT (solves algebraic word problems), MACSYMA (symbolic integration a la Mathematica), a Prolog interpreter and compiler, a Scheme interpreter, an optimizing LISP compiler, a natural language grammar parser, and a couple other things. I just finished (well, turned in...) a project which extended Norvig's code to play the game Othello, also from this book, to use trained neural nets (which unfortunately didn't train all that well). The coding part of this was made darn easy by the fact that Norvig's Othello function takes as inputs two functions which provide the move-selection strategies for black and white respectively--something that can't be done in a language without functional closures.
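
        To sketch that pattern (invented names and a toy numeric "game"; Norvig's actual signatures differ): strategies are just functions mapping a state to a new state, and a closure works as well as a named function.

        (defun play (strategy-a strategy-b state turns)
          "Alternate turns; each strategy maps a state to a new state."
          (dotimes (i turns state)
            (setf state (funcall strategy-a state))
            (setf state (funcall strategy-b state))))

        (defun greedy (state) (+ state 2))   ; a plain function strategy

        (defun make-weighted (w)             ; a strategy built by a closure
          (lambda (state) (+ state w)))

        ;; (play #'greedy (make-weighted 1) 0 10) => 30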

        I certainly wouldn't want to do any of these in C; although all of them could be so done, it would only be at the cost of a good deal of length, functionality and elegance.

        In general, LISP is great for anything involving GOFAI (good old fashioned AI, i.e. non-stochastic), anything that needs to generate hierarchically nested text (e.g. HTML, XML, or LISP programs), anything that needs to be written quickly (LISP also makes a fine rapid-prototyping language), any sort of interpreter, or for any time you wished you could modify the available programming languages to build one that really suits your problem. LISP is also great for extending existing programs, which is why almost every user-extensible application uses a dialect of LISP to do the job. (e.g. emacs, AutoCAD, etc. No, VB macros for Word don't count, although it is noteworthy that LISP is useful over such a wide range of programming tasks as to be a replacement for VB and C.)

        What is LISP bad at? Well, its libraries can be rather weak and nonstandard (although ANSI Common LISP itself comes with a large array of useful functions); GUI stuff, multithreading, and networking all fit in this category and are often implementation specific. (Of course, this is nothing to do with the language itself but just with what tools are available.) Its use for really low level bit-twiddling stuff is somewhat awkward. Iteration in LISP suffers somewhat from being only a little bit more powerful than iteration in C; the upside is you can still combine it with all the other great stuff in LISP, but the downside is that the parenthesis-style syntax, which is so much better for writing macros and functional code, only clutters up iterative code.

        And, certain of the most powerful features of LISP, like macros and closures-as-first-level-objects, take a bit of experience to wrap your mind around, as does the functional programming paradigm. (LISP does not in any way require functional programming; it's just that while there are other languages as good as LISP at iterative code and arguably as good as LISP at OO code, there is nothing as good for functional code.) This is usually taken to mean that LISP is only suitable for CS students and AI researchers, because ordinary programmers are too dumb to get this stuff. I'm just a CS student, and I haven't had much experience with how dumb ordinary programmers are or aren't, but intuitively I think this argument is bunk.

        Personally I think these techniques are just new things to learn; subtle and powerful, sure, but so is simple recursion the first time you learn it, and every programmer knows how to use that. Indeed, once you understand recursion well, functional programming and function closures are not very large conceptual leaps at all. Sometimes the mechanics of lambda closures can be slightly tricky, but no more so than referencing and dereferencing pointers in C, and with a lot greater payoff. Hell, the most complicated uses of functions as objects in LISP are a lot easier to get right IMO than even simple uses of templates in C++, and "templates" (i.e. generic functions) come for free in LISP, due to runtime type checking. (Of course, this is why no one uses C++ templates, but whatever.)

        Macros are difficult to write. But then again, they are incredibly powerful, and not "necessary" very often. And it's usually *extremely* easy to understand someone else's macro code, which is all a novice would have to do anyways.

        Plus there are lots of features of LISP which make it incredibly easy for beginners. Debugging in LISP is ridiculously easy, at least for programs which don't use too many functional closures or complex objects. Instead of the C paradigm where you only have one big executable main(), LISP programs are made up of lots of little functions, all of which are callable (and thus extraordinarily easily debuggable) from the top-level evaluator. There's no write-save-compile-test-debug loop; it's all together, and all very fast. Immediate feedback means more willingness to take chances, try out things, and make mistakes.

        Plus, because there's no main(), your programs are always extensible. If you want to, once you're done with a function it's trivial to make a larger function which calls the other function, takes it as an input, etc.

        There is no need to manage memory, no need to futz around with pointers, and no way to cause a segfault until you start optimizing. Buffer overflows are impossible. You can start with a skeleton of a program, gradually add functionality, and only add optimizations at the end when you have tested your code; and you can test every new function or optimization, so you know exactly what goes wrong when something does.

        And it's fast: once you put in proper optimizations, compiled LISP is nearly as fast as C. Of course this wasn't always the case, and it's not the case for LISP before you put in type declarations. And a compiled LISP file will probably be bigger than compiled C code, especially when you add the LISP top-level eval to it. On the other hand, C is usually not as fast or small as well optimized assembly code, but there is a good reason very few people program in that anymore: because programming in C makes your code less buggy and much faster to develop. Similarly, programming LISP will almost always make your code less buggy and much faster to develop than using C. Now that compiler technology and computer hardware have made those differences almost moot, it probably makes much more sense to use LISP than C.

        Of course, the result of this change has not been to drive more people to LISP, but instead to drive LISP's features into other languages. Thus we have C++ with attempts at generic functions; Java with decent OO and automatic garbage collection; Python showing the usefulness of an interactive top-level. Nowadays Perl and Python are getting functional closures and the list data structure, although their functions are not quite first-level objects and so not quite as powerful. Plus it will probably take another prefix-syntax language for macros to be copied properly.

        Whether the world will realize that LISP already exists (and indeed has since the late 50's) or continue to reinvent it, I dunno. Probably the latter so long as LISP remains short of libraries that tie it down to modern computers. (Again, GUIs, multithreading, networking.) Still, it's probably worth learning LISP just so that when the same ideas come out in more "mainstream" languages years from now you'll already know and understand them.
        • I find the bit-twiddling abilities of Common Lisp to be pretty advanced compared to C.

          Granted, this depends on what you mean by "bit-twiddling"--smashing memory based on a pointer given to you by the OS is going to be implementation-dependent.

          However, for actually pulling bits out of data and re-arranging them, I find the Common Lisp routines to be great. You can assign/extract integer values to/from any consecutive block of bits, without having to compute the masks and shifts yourself. I've found this quite nice in creating code that converts to/from native IEEE floats from/to other floating-point formats.
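
          Concretely (standard CL functions; the field positions below are just the IEEE single-float layout):

          (let ((bits #x40490FDB))   ; 3.14159... as raw single-float bits
            (list :sign     (ldb (byte 1 31) bits)
                  :exponent (ldb (byte 8 23) bits)
                  :mantissa (ldb (byte 23 0) bits)))
          ;; => (:SIGN 0 :EXPONENT 128 :MANTISSA 4788187)

          ;; DPB goes the other way, depositing a field without any
          ;; hand-computed masks: (dpb 1 (byte 1 31) bits) sets the sign bit.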
        • What is LISP bad at? Well, its libraries can be rather weak and nonstandard (although ANSI Common LISP itself comes with a large array of useful functions); GUI stuff, multithreading, and networking all fit in this category and are often implementation specific. (Of course, this is nothing to do with the language itself but just with what tools are available.) Its use for really low level bit-twiddling stuff is somewhat awkward. Iteration in LISP suffers somewhat from being only a little bit more powerful than iteration in C; the upside is you can still combine it with all the other great stuff in LISP, but the downside is that the parenthesis-style syntax, which is so much better for writing macros and functional code, only clutters up iterative code.

          Multithreading is found in the commercial Common Lisp environments and in the CMUCL/x86 port. CLOCC [sf.net] maintains several libraries for cross-implementation usage of non-standard features such as networking. CLIM solves the GUI problem; the problem is that there was no free CLIM implementation for a long time due to legal issues. Finally, a free CLIM is being developed: McCLIM [mikemac.com], and I'm sure they can use help. As for iteration, perhaps your mind has been clouded by Paul Graham, who has an irrational fear of the LOOP macro. The LOOP macro, however, provides one of the most powerful iteration constructs I've seen; and it's not parenthesized like the DO macro is. Example:

          (loop for x from 1 to 10 summing x do (format t "~&~A" x))

          Prints out a list of numbers from 1 to 10 and the sum of them all at the end.
          The equivalent DO:

          (let ((sum 0))
            (do ((x 1 (1+ x)))
                ((> x 10) sum)
              (incf sum x)
              (format t "~&~A" x)))

          Also the LOOP macro provides yet more keywords for all sorts of handy features which aren't so easy to do with DO; collecting, appending, finally, initially, if/else, etc... Please read the section in the HyperSpec about LOOP, Section 6.1 [xanalys.com]
          I even once wrote a finite-state-machine entirely within a single LOOP macro that processed the Unix mbox format. It's quite nearly a language in itself (speaking of which, FORMAT is in a similar category, except for formatted output instead).
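
          To give the flavor of that (a toy invented example, nowhere near a real mbox parser): a one-LOOP state machine that counts characters inside double quotes.

          (defun count-quoted-chars (string)
            (loop with state = :outside
                  for ch across string
                  when (char= ch #\")
                    do (setf state (if (eq state :outside) :inside :outside))
                  else
                    count (eq state :inside)))

          ;; (count-quoted-chars "ab\"cd\"e") => 2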

          I would argue that CL is better at bit-twiddling than C is. Take a look at the CLHS Section 12.1.1.3.2 [xanalys.com] and the functions BYTE, LDB, and DPB. It's a different perspective than the C view, but more interesting since you can extract and replace any number of bits that you want. Also it's not dependent on 8 bits per byte.

          Still, there are many areas where CL just doesn't have the sheer effort put into the libraries, likely due to the lack of manpower. Particularly in the Free-software category; Lisp has a tradition extending long before the current wave of Free-software and while many commercial vendors will provide good support and lots of libraries, the Free implementations often lack this. Many Lisp programmers use the commercial Lisps and have the features they want; if not they ask/pay the vendor to implement them. Another issue is that Common Lisp is not Unix-centric, unlike *ahem* most popular languages today. CL was designed to be workable in any environment, so the designers could not take shortcuts with things like pathnames, executable formats, system libraries, or other system-dependent issues. After all; Common Lisp was conceived in the era of the Lisp Machine. Unix was just another OS in the vast array. Finally, it is unfair to compare the Common Lisp standard against a single-implementation language such as Perl. Standards cost $$$$$ and require a great deal of effort and responsibility. If a Common Lisp implementation does not comply with the standard then it is at fault. But with Perl, whatever Larry Wall does goes. Even if it breaks all your code; too bad.

          Some interesting sites with regard to libraries:

          (Back to the OP's topic) Franz's Success Stories [franz.com] has plenty of examples of Lisp applications. Franz develops Allegro Common Lisp, a popular commercial CL.

      • Here's one.

        Read two numbers.
        Add them together.
        Print the result.

        (No restriction on size other than assuming that you have enough memory)
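
        In Common Lisp that is the whole program, because integers are arbitrary-precision by default:

        (format t "~A~%" (+ (read) (read)))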
    • Functional programming vs. imperative programming has nothing to do with efficiency. At least not run-time efficiency.

      For an example of a (very basic, and done by all LISP implementations, and even partly by some C compilers, e.g. gcc) optimization, see "tail recursion optimization" in your nearest LISP-implementation documentation.
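
      The classic illustration (my own sketch; note that a reply below points out ANSI Common Lisp does not actually require this optimization):

      (defun sum-to (n)                          ; NOT a tail call: the +
        (if (zerop n) 0 (+ n (sum-to (1- n)))))  ; happens after the return

      (defun sum-to-acc (n &optional (acc 0))    ; tail-recursive version;
        (if (zerop n)                            ; implementations that
            acc                                  ; optimize tail calls turn
            (sum-to-acc (1- n) (+ n acc))))      ; this into a plain loop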

      As you state, some problems are "inherently" imperative, and some are functional, by their nature, but that has more to do with how easily their solution is formalised in either formalism, not how well that solution is then executed.

      But I think one should not emphasise that property as much as the fact that LISP has a garbage collector, which C does not. It does not _allow_ you to have memory leaks, or crashes due to multiple free()'s of the same location. And it doesn't have any sprintf() that can produce buffer overruns. But OK, then you could use Java :) Java is just LISP with objects and fancier syntax...

      Also, as you state, lambdas and higher-order-functions are central to what's good about LISP.
      • A few notes:
        • Common Lisp does not guarantee tail call optimization, but most decent implementations do it.
        • Common Lisp does not guarantee a garbage collector either, but again, most implementations gotta do it ;)
        • Tail-recursion is nice, but macros like LOOP with the extended syntax are quite powerful. I would say that Common Lisp is better at iteration than most other languages.
        • Your implication that Lisp lacks objects is false. In fact you are wrong: not every type in Java is an object (int, char, etc..). In Common Lisp, every type is an object that has its own identity. You should read Kent Pitman's article [std.com] on the misappropriation of the term "Object- oriented"
        • Also you should read the section of the CLHS that discusses CLOS: the Common Lisp Object System, which is a far more powerful object system than the pathetic Java/C++ one: with multimethod dispatch, method combination, and multiple inheritance (a small multimethod sketch follows this list). If you feel adventurous, read the Art of the Meta Object Protocol (AMOP), which you can buy for ~ $40, which discusses how to implement CLOS while exposing the internals in a controlled fashion; which allows you to modify the behavior of CLOS easily.
        • Yes, Common Lisp is not all there is to Lisp but it is the most widely used one today. Consult the Common Lisp HyperSpec for more information: CLHS [xanalys.com]
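
        As promised, a minimal multimethod sketch (the classes are invented examples): dispatch happens on both arguments, something single-dispatch virtual functions in Java/C++ cannot express directly.

        (defclass cash  () ())
        (defclass check () ())
        (defclass retail-account    () ())
        (defclass corporate-account () ())

        (defgeneric settle (payment account))

        (defmethod settle ((p cash) (a retail-account))
          :over-the-counter)

        (defmethod settle ((p check) (a corporate-account))
          :via-clearing-house)

        ;; (settle (make-instance 'cash) (make-instance 'retail-account))
        ;; => :OVER-THE-COUNTER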
        • > Tail-recursion is nice, but macros like LOOP
          > with the extended syntax are quite powerful. I
          > would say that Common Lisp is better at iteration than
          > most other languages.

          I totally agree. The LOOP macro isn't just a way to do for, while, and foreach like we find in most primitive languages- LOOP is an entire language in and of itself. Immensely powerful.
        • I must admit it was some time since I hacked CL, and that I am biased towards Scheme. Scheme does require a GC, as well as certain constructs of the language to be tail-recursion-optimized.

          Okay, LISP doesn't lack objects, but it does lack (automatic) function dispatching depending on the type of an object. But this could easily be hacked into it, if someone wanted to :)

          I have read some about MOP, and am really impressed. That is probably one of LISP's biggest strengths - introspection. All aspects of a program, and the language, can be determined, and controlled, by the program itself.

          I have read some of the CL HyperSpec. But as I said before, I'm blinded by Scheme after reading too much of r5rs and the SRFIs :)
          • Okay, LISP doesn't lack objects, but it does lack (automatic) function dispatching depending on the type of an object. But this could easily be hacked into it, if someone wanted to :)

            I'm not sure what you mean by this? What do you think having various methods for a generic function does if not dispatch based on the types of the arguments? Perhaps you ought to make section 7.6 one of the sections of the CLHS that you read. :-)

    • you can write functions in C!

      you can even do stuff like:

      do(something(else(again(and(again()))))));

      but the reason you don't is because people realized that it isn't maintainable, and you'll probably lose count of the parentheses. Of course a LISP guru would probably complain about the semicolon. Before they get too excited about their own superiority and job security, they should realize most VB types will pick up LISP very quickly, thanks to their experience with *gasp* Excel.
    • Disclaimer: the closest thing I've done to any real functional programming is a weird mix of logical and imperative programming in Prolog, so I may have a warped idea of what functional programming is.

      But... a curious observation.... this last semester I was encouraged for a "Teaching with Technology" class to develop an application using Excel with a limited number of Visual Basic macros thrown in. As I spent time working with the system, it occurred to me I was actually doing functional programming. Each cell was a function call, which could only be fed results of other cells/function calls.... (any comments on whether this is actually true?)

      It turned out to be a bit confining; iteration was a bit painful, and I ended up with a big state table that I was proud to simply have come up with, messy as it was. But it was also interesting to see what you can/have to come up with when confined. In the end, the application worked.
  • by Ryu2 ( 89645 ) on Wednesday January 16, 2002 @05:54AM (#2847391) Homepage Journal
    SABRE != Orbitz. The author's company ITA software writes software that the ORBITZ site uses to answer queries against the same flight/fare dataset that SABRE and the other CRSes use, provided by the airlines.

    Think of the various systems, SABRE, etc. as just different systems that are using more or less the same amalgamation of airline-provided data.

    SABRE, and the other CRSes themselves are still running the big iron mainframe stuff, not LISP or Linux, and will likely remain so for a long time.
  • I just finished CS161 (Artificial Intelligence) at UCLA last quarter.... nice to see some applications for the stuff they teach us here, as most of it seems like garbage. -Berj
  • by Anonymous Coward
    Located here [itasoftware.com].
  • I was also under the impression that Sabre make a lot of use of the Versant ODBMS [versant.com]. Pretty advanced stuff.
  • Impressed, but... (Score:3, Interesting)

    by Bazzargh ( 39195 ) on Wednesday January 16, 2002 @06:28AM (#2847439)
    The description of the inner workings of SABRE impresses the hell out of me. However, I've used Sabre a lot for getting international flights in the past, and I can only recall _one_ occasion when I've not been able to find better/cheaper fares by messing with the search process - kidding the system on that I was going on a single hop flight on each leg that I knew was sensible, for example.

    However, it usually does do a reasonable job; the savings I can get by extra typing are minimal. My biggest gripe against such systems is that they haven't got a clue about what it is travellers actually want to do. I hardly ever want to travel 'from LHR to CDG' (ie specific airports). I'm usually able to get to any airport within 50 miles using public transport. So in the UK I would usually like to be able to propose Glasgow and Edinburgh as alternates, or Manchester and Leeds/Bradford, etc. But what I really want is not these simple choices but... I'll tell you where I am, and where I want to get to, now tell me about through fares on buses, trains and planes from here to there.

    Given the description they give of the problem it sounds way too hard. But....

    They describe what they're doing as basically enumeration of the graph of all routes between destinations. Once again, this does not mimic what a traveller will do when figuring out how to get from place to place. We construct routes using waypoints - going from one regional airport to another usually involves connections via hub airports; travelling by train from eg Auchtermuchty to Reading means going via Edinburgh and London. By thinking this way we reduce the number of routes under consideration to a manageable size. (This is also how game AI works. I would include a link to an article on this at http://www.gamasutra.com/ but all their articles are now members only.)
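
    A hedged sketch of the waypoint idea (the airports and the connectivity predicate are invented): rather than enumerating every path, only consider regional -> hub -> hub -> regional routes, which shrinks the search space enormously.

    (defparameter *hubs* '(:lhr :cdg :ams :fra))

    (defun hub-routes (from to direct-p)
      "Candidate routes from FROM to TO via at most two distinct hubs.
    DIRECT-P says whether a nonstop leg exists between two airports."
      (let ((routes '()))
        (dolist (h1 *hubs* routes)
          (dolist (h2 (remove h1 *hubs*))
            (when (and (funcall direct-p from h1)
                       (funcall direct-p h1 h2)
                       (funcall direct-p h2 to))
              (push (list from h1 h2 to) routes))))))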

    Hell I'm sure they know what they're doing - sound like smart guys...
    • I think the fundamental problem here is that the travel companies make things so complex, in terms of fares and the various restrictions they place on you. If you combine several forms of transport, and worse, international travel, things multiply towards madness rapidly.

      So for instance I got a ticket from a travel agent the other day, for a train from Manchester to Cambridge. Total cost...over 200 pounds! Madness. Obviously I sent it back. The insanity is such that even the travel agents get confused.

      Consumer choice sounds like a good thing, and it is in some ways. But choosing can be very difficult. And when its NP-complete choosing with knowledge can be nearly impossible.

      Phil
    • I hardly ever want to travel 'from LHR to CDG' (ie specific airports). I'm usually able to get to any airport with in 50 miles using public transport.

      You can say LON (London Any) which covers Heathrow, Gatwick, City and Luton, IIRC. There are similar codes for New York, Tokyo etc. Obviously, BOS is only Logan and AMS is only Schiphol.

      CDG is fairly crap as airports go, I'd rather go via Orly or Eurostar to Gare du Nord :0)
    • Re:Impressed, but... (Score:3, Informative)

      by jea6 ( 117959 )
      From the FAA (http://www.faa.gov/aircodeinfo.htm#multiple):

      Metropolitan Areas with Multiple Airports

      These codes don't specify single airports but whole areas where more than one airport is situated.

      BER Berlin, Germany
      BGT Baghdad, Iraq
      BHZ Belo Horizonte, MG, Brazil
      BUE Buenos Aires, Argentina
      BUH Bucharest, Romania
      CHI Chicago, IL, USA
      DTT Detroit, MI, USA
      JKT Jakarta, Indonesia
      KCK Kansas City, KS
      LON London, United Kingdom
      MFW Miami, Ft. Lauderdale and West Palm Beach, FL, USA
      MIL Milano, Italy
      MOW Moskva, Russia
      NRW airports in Nordrhein-Westfalen, Germany
      NYC New York, NY, USA
      PAR Paris, France
      OSA Osaka, Japan
      OSL Oslo, Norway
      QDV Denver, CO, USA
      QLA Los Angeles, CA, USA
      QSF San Francisco, CA, USA
      RIO Rio de Janeiro, RJ, Brazil
      ROM Roma, Italy
      SAO Sao Paulo, SP, Brazil
      STO Stockholm, Sweden
      TYO Tokyo, Japan
      WAS Washington, DC, USA
      YEA Edmonton, AB, Canada
      YMQ Montreal, QC, Canada
      YTO Toronto, ON, Canada
    • Re:Impressed, but... (Score:3, Interesting)

      by Marillion ( 33728 )
      I highly recommend a book by Robert Cross. ISBN 0-7679-0033-2 [amazon.com]. He used to work for Delta Airlines [delta.com] and developed some of the industry's pricing practices.

      The airlines try to segregate customers into different categories by their ability and willingness to pay. They want to make sure that the business traveler who "must fly now" pays more than the vacation traveler. They regulate the inventory to sell enough cheap seats to ensure the inventory doesn't spoil (airline lingo for a flight with empty seats), yet keep enough for travelers with the need to fly and the ability to pay more. The dream is to make sure that every passenger on board a full flight paid as much as their situation supports.

  • by markj02 ( 544487 ) on Wednesday January 16, 2002 @06:40AM (#2847460)
    I think it's pretty much a toss-up whether using Lisp in this kind of system helps. While using Lisp makes initial development and testing much nicer, once you start mixing Lisp and C, debugging gets much harder and you may get very subtle and time consuming bugs in the Lisp/C interface. Note also that Carl also says that, for performance reasons, they have really limited the Lisp features they use.

    Most people who have tried developing these kinds of systems seem to move away from them over time and end up developing a single-language solution--it's simpler to maintain and debug in the long run.


    • It is (afaik) also not clear what the Lisp code competes against on a software level.

      A large part of the article is about the cluster/mainframe difference; the rest just says things like Lisp code "WITH BETTER ALGORITHMS" vs old mainframe assembler.

      It seems to get the job done, but doesn't reveal if that is because someone sat down and did his math homework properly, or because LISP is great.

      (And I think it is the math thing :-)
  • The airline trip planning problem sounds very similar to constrained graph search problems in speech recognition (which are now routinely carried out in real time on graphs with millions of nodes). It would be interesting to see a more detailed statement of what the problem actually is.
    • ...very similar to constrained graph search problems in speech recognition...

      Not really. The objects searched over (phonemes) come from a relatively unchanging set having relatively unchanging attributes (frequency, duration, level, etc.) and are mapped to a relatively unchanging set of objects (vocabulary doesn't change that fast). New service levels, attribute changes, and itineraries need to be taken into account in the airline booking issue.

      In the end, it's the flexibility in adding new models and code to handle new models in a seamless manner that makes Lisp (and other dynamic languages - e.g., Smalltalk for finance, Erlang for multiprocessing phone switches) so appealing for this type of problem.

      • Your idea of speech recognition is about 10 years out of date. Language models these days are huge and take into account syntax and semantics (not just vocabulary). Systems adapt to individual speakers, whose pronunciation differs systematically, and their language models adjust dynamically to the conversation. You update these systems not by hacking around with the code, but by adding new models and rules.
  • If some of the terminology in this article gets too daunting, check out this online Dictionary of Algorithms, Data Structures, and Problems [nist.gov]
  • by Anonymous Coward
    As Paul Graham said at Beating The Averages [paulgraham.com], anyone who wants to run a business can say whatever they want in publicity, but they have to tell you their technology in their job ads [itasoftware.com].
  • Surely it's "Lithp"?

    Yeah Yeah Yeah I know it's poor too ;-)
  • Good lessons (Score:3, Insightful)

    by f00zbll ( 526151 ) on Wednesday January 16, 2002 @08:43AM (#2847692)
    The post had some valuable lessons:

    1. use the right tool. As the post stated, they chose to use C when LISP would incur a performance hit. Loading all the data into memory statically makes a lot of sense, considering how many searches it has to support.
    2. advanced topics like algorithms are good tools for solving challenging problems
    3. try to maximize your knowledge of the system. This is probably one of the hardest, since most projects are under "I want it yesterday" schedules. When it was necessary, they looked at how LISP compiles the code and weighed the benefit of optimization or using C. Realistically, most projects don't have the time for this step, but it is still worthwhile to attempt it.

    Regardless of the language, I found the post insightful and informative. All of the techniques and decisions described in the post can be applied to most projects. Sure, most websites don't need to support a massive number of searches like Sabre, but programmers can apply those principles. Keeping a balanced footing isn't easy and there's always politics added to the brew, but with perseverance, programming can be something that provides a great service and tremendous personal pride.

  • If memory serves me correct, the application (which is quite old) sits directly on hardware, no OS in between. Correct me if I am wrong on this, as this may be true for an older version of Sabre.

    The Sabre system is maybe the more interesting case study - how engineers kept an ancient and likely antiquated system up and running in the face of massive industry/technological change.

    • It used to. I still remember the old nasty green screen dumb terminals that every airport and agency had. Nowadays, they use an emulator on a PC to get the same functionality.
  • Just to point out that this is another case of the mainframe big iron being more cost effective. Take 200 boxes, add networking, admin cost, and the mainframe looks pretty cheap.

    Also, ignoring whether Lisp may or may not be better suited for this problem, the algorithms described can be implemented in many languages. Indeed, many programs use all those tricks.
  • by Jon Howard ( 247978 ) on Wednesday January 16, 2002 @12:00PM (#2848714) Journal

    Since the day I joined Franz Inc. [franz.com] as the new Webmaster, I have been writing more code than at any previous point in my career. I have become immersed in Lisp programming, specifically AllegroCL [franz.com], which I found to be a stimulating challenge to learn. I discovered that writing Lisp is sheer joy to anyone who has ever been frustrated out of programming by the tedium of obligatory declaration of data types, allocation and de-allocation of memory and the like, or simply by the time they take to learn. To finalize my education in AllegroCL, I was tasked with replacing the Franz webserver with AllegroServe. Though I am not a slow student, I made many mistakes and found that the simplified testing of code via the AllegroCL debugger and the ability to modify a program while it is running were indispensable tools both in my education and software troubleshooting. Making use of these features, I have found that adding new code to a program is remarkably easy to do, even when that new code requires making significant structural changes. In the end, I'm always left with a program which runs as quickly as any others I use and exhibits enhanced stability and security features while maintaining a reasonable memory footprint.

    Among my first tasks at Franz was familiarizing myself with Allegro Common Lisp. My interest in Lisp's long, rich and diverse history was one of the chief reasons I applied for the job, so I was happy to oblige. I've always found the history of computing to be of great interest, and Lisp has been there throughout most of the last 50 years (of currently-used languages, only Fortran predates its nativity), so I find its endurance of especial interest. Lisp has undergone a process of evolution during its lifetime spawning several dialects, one of which is Common Lisp; AllegroCL is an implementation of Common Lisp.

    The aspects which I find most satisfying in AllegroCL include automatic memory management and dynamic typing of data. Both of these features eliminate a tremendous amount of tedium from coding and allow me to get more work done in less time. I was never a serious programmer before I was introduced to Lisp, but now I've found a passion which outweighs my penchant for computer gaming. In the past, I would frequently spend much of my free time mastering the newest reason to own a 3d-accelerated video card, but recently I've found that I have more to show afterwards if I write code for fun, as evidenced by the chatroom software I wrote as an educational exercise which can be seen in production on my server at home, here [antidistortion.net] (running on AllegroServe). It took a little longer to write the chat software than it usually takes me to master a new game, but at a total of 16 hours, it was less than half the time that most games take to complete. I began working more and producing a tremendously increased level of output, all without the slightest increase in my stress level.

    After spending a couple of months with Franz, familiarizing myself with my responsibilities as Webmaster while learning Allegro Common Lisp, I was tasked with converting the Franz website from Apache [apache.org] webserver to an AllegroServe [franz.com]-based solution, which entailed writing a webserver which used AllegroServe at its core and provided all of the features which I found in Apache, while adding a few site-specific features. AllegroServe's chief developer, John Foderaro, and I were able to complete this task in time for the recent release of AllegroCL 6.1. The speed of development under AllegroCL was due in no small part to the ACL debugger, of which I made prodigious use early on. The ability to inspect running code and make modifications at the point of failure not only made it a simple matter to identify and fix bugs, but it was also an invaluable educational tool. Initially, I wrote bad code - lots of bad code - but every mistake I made was immediately made obvious and resolved through liberal application of this handy tool. The ability to directly interact with data in a running program provided education that extended beyond the scope of any single programming language; my ability to visualize software structure and the flow of data was greatly enhanced.

    After a few weeks of use, I began to realize that I wasn't having more than one bug in my code every few days - needless to say, I was elated. Until this point, I was working on relatively simple aspects of the webserver, such as the Franz menu generation, customer survey, and trial download sections. This accelerated rate of learning gave me enough positive feedback that I felt comfortable taking on more ambitious segments of the project. After I progressed through the header, menu, and footer-wrapping code which provides the interface to my earlier menu generator's output on Franz' "lhtml" pages, I came to the logging facility.

    By far, writing the code to manage the log handling was the most challenging aspect of the webserver's design so far. It was at this point that John and I came to realize that we would need to significantly enhance the virtual-host capabilities of AllegroServe to provide such services as separate access and error log streams for each individual virtual host. Despite the challenge, John managed to implement these changes in less time than it took me to write the code to handle formatting the logfiles in a manner compatible with Apache's output, which Franz especially required to enable the continued use of certain website log analysis tools. The two of us had completely changed the manner in which AllegroServe handled logging in a mere two days. John also eventually added excellent support for running standard CGI programs which would have their own log streams, and I made use of the added functionality to support a "cgiroot" which allows the Apache-like feature of being able to specify a path in which cgi programs will reside while sending any cgi log output to the vhost error log. I would encourage any current Apache users who wish to try out AllegroServe to make use of this feature when configuring a server; it makes CGI installation and use a snap.

    After I'd written the bulk of my contribution to our system, I hit upon another necessary feature, the ability to include in-tree access control files akin to ".htaccess" files under Apache. This was a significantly more complex challenge than the logging and virtual host modifications John and I had previously added, due to the depth of the AllegroServe feature-set we would have to make available for modification within these files, and the associated security concerns. This obstacle took a fair amount of time to surmount, John made significant changes throughout AllegroServe, and we went through a great deal of testing to ensure that no security risks had been created. In the end, we were satisfied that we had made a very worthwhile addition to the webserver.

    I continued writing interface and configuration code, enlisting John's expert help whenever I found a feature AllegroServe lacked, and we concluded the conversion with a version of the Franz webserver that has required only minor modifications since. Once I had ironed out the remaining bugs, of which there were fortunately very few, John assisted me in profiling our code to find its speed bottlenecks. After heavily load-testing the server, we discovered that the slowest part of the code was the routine that checks the timestamps on files to keep our cache current. This was greatly satisfying, because even this, our slowest code, was fast enough that we could not consider it a problem. We also discovered excessive memory waste in a few seemingly clean segments of code: for convenience, we had been using a dynamically-sized string creation function that shuffled data among several different types. We converted this to use a single large fixed-size array that could hold the string even at its maximum possible length, and halved the server's memory usage. Bandwidth load testing showed that we had an extremely fast server - we were able to sustain around 850-900KB/sec. across a 10 megabit network when running the system on an Intel Celeron 533. Additionally, thanks to the memory-usage improvement that came up during profiling, we were using a total of only 30MB of RAM for the webserver, cache and all.
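
    For the curious, the string-buffer change was of roughly this shape (a hypothetical sketch with invented names, not Franz's actual code): allocate one fixed-size buffer at the maximum possible length and reuse it, instead of growing a fresh adjustable string on every call.

        ;; One fixed-size character buffer with a fill pointer,
        ;; allocated once rather than grown per request.
        (defconstant +max-line-length+ 8192)

        (defvar *line-buffer*
          (make-array +max-line-length+
                      :element-type 'character
                      :fill-pointer 0))

        (defun build-log-line (host request status bytes)
          "Format an Apache-style log line into the shared buffer,
        then return a fresh string of exactly the right length."
          (setf (fill-pointer *line-buffer*) 0)
          (with-output-to-string (s *line-buffer*)
            (format s "~a - - \"~a\" ~a ~a" host request status bytes))
          (copy-seq *line-buffer*))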

    I am very satisfied to have had a hand in such a successful project, especially considering that I was a rank novice programmer when I began work on it. The speed with which I learned to program in AllegroCL was an entirely new phenomenon to me, one which has enriched my computer usage and allowed me to express my ideas for software in code, something I had never been able to do before because of my unwillingness to suffer through the tedium programming had historically presented me with. When I found myself attaining a satisfactory level of programming ability, I was struck by the ease of writing clean and modular code on the first attempt. Augmenting that ability, the ease of adding and restructuring AllegroCL code in a running or non-running program, especially with the aid of the ACL debugger, greatly decreased both my development time and my frustration while further enhancing my programming skill. I have learned a great deal about Lisp, AllegroCL, and programming in general over the course of this project, and without it I would not have had the chance to make such a satisfying acquaintance with Allegro Common Lisp, which has become my programming language of choice.

  • by cracauer ( 6353 ) on Wednesday January 16, 2002 @12:36PM (#2849002) Homepage
    Hi,

    I work for ITA and would like to comment on some of the issues brought up here:

    1) As noted, we're talking about Orbitz here, not SABRE. Currently,
    Orbitz uses our software for domestic US flights, not for
    international ones.

    2) Our engine does not use a functional programming style, rather the
    opposite. Still, we found that Lisp is a great advantage. While each
    hacker here has his or her own reasons for liking Lisp, the key
    elements (as I see them) are:

    2a) macros, especially macros that allow us to define new iteration
    constructs. C programmers can think of it as being able to write your
    own for/while/if as seems appropriate for the task at hand. Especially
    with-[whatever] constructs, but also nice tricks with
    destructuring-bind.
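
    A minimal sketch of both kinds (the fare-database functions are
    hypothetical, invented for illustration):

        ;; A with-[whatever] construct: open a resource, bind it, and
        ;; guarantee cleanup. OPEN-FARE-DB and CLOSE-FARE-DB are made up.
        (defmacro with-fare-db ((var path) &body body)
          `(let ((,var (open-fare-db ,path)))
             (unwind-protect (progn ,@body)
               (close-fare-db ,var))))

        ;; A home-grown iteration construct: step over consecutive pairs
        ;; of a list, the sort of loop a connection-checker might want.
        (defmacro do-adjacent ((a b list) &body body)
          `(loop for (,a ,b) on ,list
                 while ,b
                 do (progn ,@body)))

        ;; (do-adjacent (x y '("BOS" "ORD" "LAX"))
        ;;   (format t "~a -> ~a~%" x y))
        ;; prints BOS -> ORD, then ORD -> LAX.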

    2b) scope: working anonymous functions with static (lexical) scope.
    Kind of like Java's inner classes, but in a tenth of the lines of
    code.
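
    For instance (a throwaway sketch):

        ;; An anonymous function capturing lexical state - the Lisp
        ;; equivalent of a tiny Java inner class.
        (defun make-counter ()
          (let ((count 0))
            (lambda () (incf count))))

        ;; (defparameter *c* (make-counter))
        ;; (funcall *c*) => 1
        ;; (funcall *c*) => 2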

    2c) the aforementioned destructuring-bind, which frees you from a lot
    of the boring and error-prone work of tree parsing in a snap.
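
    A quick illustration (the flight-leg shape is invented):

        ;; DESTRUCTURING-BIND takes a nested list apart in one
        ;; declaration instead of a pile of CAR/CADR calls.
        (destructuring-bind (carrier (from to) (depart arrive))
            '("UA" ("BOS" "LAX") (800 1130))
          (format t "~a: ~a -> ~a, ~4,'0d-~4,'0d~%"
                  carrier from to depart arrive))
        ;; prints UA: BOS -> LAX, 0800-1130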

    2d) compile-time computing, a key element in making our software fast
    without cluttering it up by expanding manually written source code by
    a factor of 100, or by inventing ad-hoc code generators which need to
    be debugged after they've broken your system for weeks. Macros can
    use the full language at compile time, and macros can "walk" their
    arguments at compile time to find interesting things to do with them.
    Also see define-compiler-macro to get an idea of what makes Lisp code
    fast while maintaining elegance (use it with care, though).
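
    A hedged sketch of what define-compiler-macro buys (MILES->KM is a
    made-up function, not from our engine):

        ;; Ordinary function, used when the argument is only known at
        ;; run time.
        (defun miles->km (miles)
          (* miles 1.609344d0))

        ;; Compiler macro: when the compiler sees a constant argument,
        ;; fold the whole call to a literal at compile time; otherwise
        ;; leave the original call form alone.
        (define-compiler-macro miles->km (&whole form miles)
          (if (constantp miles)
              (* (eval miles) 1.609344d0)
              form))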

    2e) safety. A language without optional overflow checking of integers
    is a toy at best and dangerous at worst.
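
    In Common Lisp terms (a sketch of behavior under full safety in
    implementations such as CMUCL):

        ;; Undeclared integers promote to bignums, so they never
        ;; silently wrap. Even with a FIXNUM declaration, high SAFETY
        ;; keeps the type checks in, so overflow signals an error
        ;; instead of wrapping the way C's int does.
        (defun add-fixnums (a b)
          (declare (type fixnum a b)
                   (optimize (safety 3)))
          (the fixnum (+ a b)))

        ;; (add-fixnums most-positive-fixnum 1) => TYPE-ERROR under
        ;; full safety, not a wrapped-around negative number.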

    2f) debugging and testing with the read-eval-print loop (REPL). Like
    the gdb prompt for evaluating code, but you use the native language
    and you have the full language. Or better, like a shell where things
    aren't echoed in ASCII and re-parsed, but where you get the real
    objects and can play with them (sending messages as defined in your
    system). The debuggers in Allegro and CMUCL are rather crappy, IMHO,
    but the REPL and ultra-fast recompilation and loading of single
    functions (a standard feature of every Lisp), used for adding
    debugging print statements, more than make up for that.
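
    A made-up transcript to give the flavor:

        ;; Redefine and recompile one function in the running image; no
        ;; separate edit/compile/link/restart cycle.
        CL-USER> (defun leg-fare (leg) (getf leg :fare))
        LEG-FARE
        CL-USER> (leg-fare '(:fare 129 :from "BOS" :to "LAX"))
        129
        CL-USER> (defun leg-fare (leg)        ; add a debugging print
                   (format t "leg: ~s~%" leg)
                   (getf leg :fare))
        LEG-FARE
        CL-USER> (compile 'leg-fare)
        LEG-FARE
        NIL
        NIL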

    Keep in mind that every one of our Lisp hackers could contribute a
    list of similar length; this is just what *I* like.

    For the record, I like C++, but I couldn't absorb all the application-specific knowledge I need while spending my day figuring out C++ specialties and keeping them swapped in. C++ is for full-time C++ coders only.
  • While it may be possible to create 5000 connecting flights between Boston & LA with a maximum of 3 hops in a 24-hour period, you'll have to include flights like:

    LAX -> JFK
    JFK -> SFO
    SFO -> BOS

    It shouldn't take a smart computer to rule those out. Also, the possibility of 5000 (going) x 5000 (returning) x 100000 (fares) is ludicrous. Plus, airlines use hub airports, so there are only a limited number of logical flights.

    And to top it all off, you don't have to consider every combination. You can start with a list of (an arbitrary number) 50 likely flights ranked by price, time, speed, etc. and try their connections. You just do a simple query, pull up the 5000, and before even eliminating illogical and conflicting ones, pull a few off the top of the list.
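
    For what it's worth, the "rule those out" step really is cheap if you have distances handy; a toy sketch (airports, mileages, and representation all invented for illustration):

        ;; A leg is a (FROM TO) list. Prune any routing where a leg
        ;; fails to make progress toward the destination - this rejects
        ;; LAX -> JFK -> SFO -> BOS because the JFK -> SFO leg moves
        ;; away from Boston.
        (defparameter *miles-to-bos*
          '(("LAX" . 2611) ("JFK" . 187) ("SFO" . 2704) ("BOS" . 0)))

        (defun dist-to-bos (airport)
          (cdr (assoc airport *miles-to-bos* :test #'string=)))

        (defun progressing-p (legs dist-fn)
          "True when every leg strictly decreases distance to the goal."
          (loop for (from to) in legs
                always (< (funcall dist-fn to) (funcall dist-fn from))))

        ;; (progressing-p '(("LAX" "JFK") ("JFK" "SFO") ("SFO" "BOS"))
        ;;                #'dist-to-bos)
        ;; => NIL, so that backtracking routing is discarded.
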
    • No offense, but if any bozo with a P3 and a crappy SQL/Perl system can do it, why haven't they? Why doesn't someone start an open-source travel planner? I'm sure the travel agencies and booking agencies would be happy to use it. And the airlines would provide flight data, as long as they get their cut.

      The real reason why people haven't done this "simple" task is that it ISN'T as simple as it sounds and that "simple" solutions turn out to be woefully inadequate when confronted with "real-world" data and problems.

      • Proprietary data.

        Bozo and I can't get access to it, no matter how much we pay, because the airlines (and travel services) don't want me to know what possible prices are available. Their business depends on it. If everyone knew there were a dozen open seats in first class and 50 in coach on the flight I wanted to take from LA to Boston, they couldn't get away with their pricing scheme.
    • You are absolutely right, it's easy.

      Unfortunately you're just talking about flights, not fares. [Hint: if you think you pay a single set price for a flight you board, the way you do on a bus, you're wrong.]
  • by Syre ( 234917 ) on Wednesday January 16, 2002 @04:24PM (#2850623)
    Yahoo Shopping was mainly written in Lisp too, as this article [paulgraham.com] by one of the original authors of Viaweb (which is now Yahoo Shopping) details.
  • by dbirchall ( 191839 ) on Wednesday January 16, 2002 @05:08PM (#2850968) Journal
    Although SABRE (as others have pointed out) doesn't build its stuff in Common Lisp like ITA, it's not exactly using the fashionable language of the week for everything, either. Its web-based stuff (Travelocity, and transactional sites it has built for other people) has at least at times used the Vignette framework, which is heavy on... you got it, Tcl! And Vignette also shows up on various other travel sites. So when a developer in one of the few dot-com segments that's actually widely viewed as sustainable runs into a developer from a more "normal" place, the inevitable discussion goes something like this:

    Other guy: So, what's your system coded in?
    Travel guy: Well, there's a little C for API glue, but about 99% of it is in (LISP, Tcl, etc).

    The reactions are lots of fun, from confusion to disbelief to horror.
