Linux Software

Debugging Configure

An anonymous reader writes "All too often, checking the README of a package yields only the none-too-specific "Build Instructions: Run configure, then run make." But what about when that doesn't work? In this article, the author discusses what to do when an automatic configuration script doesn't work -- and what you can do as a developer to keep failures to a minimum. After all, if your build process doesn't work, users are just as badly off as if your program doesn't work once it's built."
  • by mopslik ( 688435 ) on Friday December 05, 2003 @10:07AM (#7638062)

    ...what to do when an automatic configuration script doesn't work.

    I've written a little program that debugs configure automatically. To compile my program, simply run configure, then run ... oh, wait. Never mind.

    • by jamsho ( 721796 )
      If configure fails, the first thing to do is rm -R <the sources dir>, unpack the sources, and try again. This can clean up some bugs involving the temporary files and the cache configure creates when it runs, especially if you've run configure inside that source tree before.

      In fact, I NEVER run configure in the same source tree twice (still smarting after a bitter two-day debugging experience that got resolved by doing exactly the above... :)
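      A minimal sketch of that clean-slate routine, assuming a tarball named pkg-1.0.tar.gz (the name is only a placeholder):

        # start over from a pristine tree whenever configure misbehaves
        rm -rf pkg-1.0                  # discard the possibly-polluted source dir
        tar xzf pkg-1.0.tar.gz          # unpack a fresh copy
        cd pkg-1.0
        ./configure 2>&1 | tee configure.out   # keep a transcript of the run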
      • by ffsnjb ( 238634 )
        'make clean' had better trash the configure cache files, or someone didn't write their makefiles correctly. I've been using the same source trees for apache, postfix, ssh.com's ssh2, and bind for years, along with FreeBSD stable sources. Never once have I had a configure problem that 'make clean' didn't resolve.
        • Now, that may be a bit tricky, seeing as the overwhelming majority of software that I've installed using the good old method of "./configure && make && make install" didn't even _have_ a Makefile until configure was done configuring; writing the Makefile was the last step performed.

          And when there's no Makefile, there's no "make clean" either :-)
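          When configure has run but never got as far as writing a Makefile, a sketch of the manual cleanup (these are the standard files autoconf leaves behind; config.cache only exists if caching was enabled):

            # no Makefile yet, so no 'make clean' -- remove configure's own leftovers
            rm -f config.cache config.log config.status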
      • 1 unpack
        2 configure
        3 error
        4 rm -Rf
        5 goto 1

        D'oh!
  • by samjam ( 256347 ) on Friday December 05, 2003 @10:25AM (#7638171) Homepage Journal
    You send config.log and the failure message to the developers, who hopefully understand autoconf a bit better and can work out why it failed.

    For some projects you can't find the developers; then it's time to learn how autoconf works, read the autoconf mailing lists, and work out what test failed and why.

    Most tests fail because
    * some required feature is not supported on your platform,
    * or because the configure script is old and autoconf has been updated

    The latter is sadly quite common in badly maintained projects as autoconf has undergone revisions and badly written autoconf definitions have started failing.

    If your platform lacks a feature, you write around it for your platform with "ifdefs" which get activated according to the test results (a sketch of the pattern follows below). I don't know how to do this; fortunately for me, all my projects have been working under people who understand autoconf quite well.

    Sam
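    A sketch of both steps the parent describes: finding the failing test in config.log, and the ifdef pattern that works around a missing feature (HAVE_FOO is an invented example, not from any particular package):

      # config.log records each test program and the compiler's complaints;
      # search it for the first failure
      grep -n -i 'error\|failed' config.log | head

      # the workaround pattern: configure puts '#define HAVE_FOO 1' in config.h
      # when the test passes, and the C source branches on it:
      #     #ifdef HAVE_FOO
      #         ... call the native foo() ...
      #     #else
      #         ... fall back to a bundled replacement ...
      #     #endif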
    • Well, the danger with that is the authors being deluged by people who just don't have the right dependencies installed.

      Of course, it could be considered a bug if the configure script fails silently or ambiguously just because a dependency was missing, but I see it a lot.

      It'd really help if the authors would put in very verbose missing-dependency messages, even going as far as including a URL for the dependencies, if they aren't common.

      One build that comes to mind that was total hell was Flightgear. That thing ha
    • First, I always put the last line or two of the error messages into Google. Eight times out of ten, I find someone asking the same question, and usually someone has already posted an answer.

      Actually, before even that I check with Linux From Scratch [linuxfromscratch.org] (and Beyond LFS). But there are a whole lot of packages they don't cover.

    • Most tests fail because

      * some required feature is not supported on your platform,


      In which case, the configure script should print a message telling you about it (and ideally, what configure option (e.g. --without-foo) to try to get around it, or what other package to install).

      Preferably in noticeable enough text to stand out from all the normal messages configure spits out. And maybe prefixed by "Don't Panic!" in large friendly letters....

      (I'm in the midst of tweaking my configure scripts for a rather co
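      A sketch of the loud failure being asked for, as a configure.ac fragment (libfoo, foo_init, and the URL are invented):

        dnl fail with a pointer to the missing dependency, not a silent shrug
        AC_CHECK_LIB([foo], [foo_init], [],
          [AC_MSG_ERROR([libfoo not found.
        Install it from http://example.org/libfoo/ or re-run with --without-foo.])])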
  • Would it be possible to have a program that read configure.in files directly, worked out what they were testing for etc, and just returned the right values?
    Basically caching the results, but also moving the code for checking the conditions to the user's machine, rather than pregenerating it on the developer's machine.

    When I run configure it often says stuff like "checking... (cached)", but configure still runs slow as molasses, so I'm not sure what exactly it is doing...
    • Would it be possible to have a program that read configure.in files directly, worked out what they were testing for etc, and just returned the right values?

      Yeah, and where's the C compiler that will fix bugs in my code and just do what I MEAN??

    • by Anonymous Coward
      Would it be possible to have a program that read configure.in files directly, worked out what they were testing for etc, and just returned the right values?

      This is what the pre-make-kit project (http://pmk.sf.net) is trying to do. The project aims to provide an alternative to autoconf. You should have a look.
    • tips and tricks (Score:5, Interesting)

      by devphil ( 51341 ) on Friday December 05, 2003 @04:01PM (#7641330) Homepage


      The "code for checking" is all just a bunch of macros. Believe me, the slowdown you see isn't the shell reading a bunch of lines of text.

      Some points that might speed things up:

      • Some shells suck. The generated configure is designed to not require any modern shell features (in case you run it on some ancient piece of crap), so you can use whatever stripped-down streamlined implementation of Bourne shell you want. (Assuming the developers of whatever package you're installing haven't used such features themselves, and most don't, for the same reasons that autoconf recommends.)
      • Some shells really suck. Under Solaris and AIX, for example, /bin/sh is such a flaming piece of shit that people running non-trivial configures are advised to run configure with a different shell [gnu.org].
      • If you're getting the same results for commonly-used tests, strip them out of the generated cache, keep the cache around, and when you go to run a new package, preload the cache. (Can be tricky, but is usually safe. Just cache the safe stuff, like "checking if CPU is on fire... no".) See the sketch after this list.
      • On recent Linux systems, a big slowdown is the part at the very end, when files like Makefile and config.h are being created. These are basically huge sed operations. Recent versions of sed and glibc have really taken a performance hit in this area. (Depends on your distro. Some sed's are compiled with their own regex engine, some use glibc's. There're more details, but I'm pressed for time.) You might try timing the last part (running config.status over and over to get an average time), then putting a different sed in your PATH and trying again. I found old versions of "ssed" to really kick ass.
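      A sketch of the cache-preloading and shell-swapping tricks from the list above; CONFIG_SITE, --cache-file, and CONFIG_SHELL are standard autoconf hooks, and the file names are placeholders:

        # preload answers that never change on this machine
        ./configure --cache-file="$HOME/.config.cache"
        # or keep them in a site file that every configure consults:
        CONFIG_SITE="$HOME/config.site" ./configure

        # sidestep a terrible /bin/sh (Solaris, AIX) by naming a better shell
        CONFIG_SHELL=/bin/bash /bin/bash ./configure

        # time the file-generation step that the last point blames on sed
        time ./config.status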
      • a big slowdown is the part at the very end, when files like Makefile and config.h are being created.

        'sed' aside, I wonder how much of that has to do with the ridiculous Makefiles (ok, Makefile.ins) that get generated if you use automake. I recently tried automake on a project, and the Makefiles were fscking huge! 600- and 800-line makefiles! I blew them away and made Makefile.ins out of the original development Makefiles I had. Sheesh.
        • I wonder how much of that has to do with the ridiculous Makefiles (ok, Makefile.ins) that get generated if you use automake.

          Nothing whatsoever. Those are created long before the user (not the developer) runs 'configure'.

          As to the size... well, yeah, you need that in there if you're going to be portable, and safe, and not use any weird extensions for weird versions of make, and still support all the makefile targets that a mature project would require. (Packaging, cleaning, dependencies, relocatable

          • Well, yeah, the Makefile.in files get created before the user runs 'configure', but configure has to translate those to Makefiles. The bigger they are, the longer that takes. And the longer the make itself takes to run (with all the elaborate dependency checking).

            Shrug. I may try again on my next project. I just tried the default on my current project and didn't like the results, so I went back to handmade Makefile.ins.

            (BTW, I see you're one of the GNU libstdc++ maintainers -- any particular reason w
            • The Makefile.in -> Makefile translation is purely a sed operation. It doesn't even involve any regexes, just 's/@fixed_text@/other_fixed_text/g', which is entirely bounded by how good the sed implementation is. (Only a DFA engine is required, so if sed simply assumes that something tricky is going to happen and uses an NFA, or doesn't even have a DFA engine, then it's going to be needlessly slow. I've seen the time go from 20+ seconds to less than one second simply by replacing sed.)

              BTW, I see you'
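              A simplified sketch of that substitution pass (the variable names and values are illustrative):

                # roughly what config.status does to each template: one fixed-string
                # substitution per @variable@, no regex machinery required
                sed -e 's|@CC@|gcc|g' -e 's|@prefix@|/usr/local|g' \
                    < Makefile.in > Makefile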

  • by neglige ( 641101 ) on Friday December 05, 2003 @10:37AM (#7638260)
    Personally, I like configure. Editing Makefiles (or Imakefiles, if you are lucky) is often like "reading core dumps", as someone put it. Maybe it depends on which system you run (Linux on x86 is probably mainstream and well tested). The automated configure process has only failed me once, where the generated script was testing for one feature but failed on an unrelated issue. Reading the log (as suggested by the author) solved the case. Plus, most problems arise after a system update, when some files/dev-packages/stuff are still missing. Once you get an application successfully through configure, similar apps should work as well.

    It's still better to see something like "Testing for SSL... failed" than go "Can't locate libmcop_mt.so -- Huh? What child is this? Let's google..."

    And if all else fails, remove the problematic code in configure, make, make install && pray ;)
    • by cperciva ( 102828 ) on Friday December 05, 2003 @10:52AM (#7638408) Homepage
      Editing Makefiles (or Imakefiles if you are lucky) is often like "reading core dumps", as someone put it.

      Makefiles are often poorly written. In particular, people very often fail to use .include directives properly.

      If you want to see well-written Makefiles, look in the BSD source tree. Taking one at random, here's FreeBSD's src/usr.sbin/edquota/Makefile:

      # @(#)Makefile 8.1 (Berkeley) 6/6/93
      # $FreeBSD: /repoman/r/ncvs/src/usr.sbin/edquota/Makefile,v 1.8 2003/04/04 17:49:13 obrien Exp $

      PROG= edquota
      MAN= edquota.8

      WARNS?= 4

      .include <bsd.prog.mk>

      • That may be simple and well written, but it only needs to compile on FreeBSD.

        What if you have a package that has to work on FreeBSD AND OpenBSD AND Linux AND a bazillion other Unixes AND win32/cygwin AND win32/mingw AND... a gazillion other os/library/whatever combinations.

        It won't be that simple any more, and at that point it's probably impossible to "write well" as well.
      • Makefiles are often poorly written. In particular, people very often fail to use .include directives properly.

        Make itself is badly underspecified. POSIX make simply doesn't scale well enough to support a medium-sized or larger project; it doesn't standardize anything like dependency tracking or including other makefile fragments.

        As a result, every implementation of make grew its own features to handle the deficiencies. Including files, to take your example, can be done with every make program out t
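        For example, the same include, spelled three mutually incompatible ways (a sketch of the divergence being described):

          include common.mk         # GNU make
          .include "common.mk"      # BSD make
          !include common.mk        # nmake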

    • by Anonymous Coward
      As a developer, I use CMake [cmake.org] instead. Much easier to use and more cross-platform.

      As a user, I give up when dealing with unpleasant configure errors.

    • Reading makefiles is easy if you understand make. Well, most makefiles, that is. If the makefile was created via autoconf, then I personally find reading coredumps with a hex editor easier...
  • Ports (Score:5, Insightful)

    by cperciva ( 102828 ) on Friday December 05, 2003 @10:43AM (#7638311) Homepage
    Rather than attempting to include support for every architecture via autoconf, I think the BSD ports approach is far superior: Write code once, and have people put together their own sets of patches, makefile wrappers, et cetera to fit their own architecture.

    For example, I wrote my binary patching tool on FreeBSD, but I don't recommend that people (even on FreeBSD) build it directly from the source tarball; instead, I advise people to use the ports tree, since that puts BSDiff into FreeBSD's packaging system. If someone running Gentoo wants to use BSDiff, they can install it from portage, which adds workarounds for gmake and linux breakage.

    Most developers only have access to a couple platforms for testing their code. Rather than doing a poor job of supporting every platform, it makes much more sense to support one platform directly, and allow other people to step in and provide the necessary patches and packaging to support other platforms.
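    A sketch of what the ports-side wrapper looks like: a minimal FreeBSD port Makefile (the values are placeholders, not the real bsdiff port):

      PORTNAME=     bsdiff
      PORTVERSION=  4.0
      CATEGORIES=   misc
      MASTER_SITES= http://example.org/dist/
      MAINTAINER=   someone@example.org
      COMMENT=      Binary diff/patch utility

      .include <bsd.port.mk>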
    • Re:Ports (Score:3, Informative)

      by bogado ( 25959 )
      I think this is the easy way out; this way the author of the original code can simply ignore the other platforms. I think the author should be conscious that other platforms do exist, and he should actively try to separate platform-dependent code into separate files or libs. And learning autoconf does help you learn where the pitfalls are.

      That said, I do agree that a good autoconf configuration is very hard to accomplish, mainly when you're doing it for the first time.
      • Autoheadache (Score:3, Insightful)

        by dmelomed ( 148666 )
        "aid that, I do agree that a good autoconf configuration is very hard to acomplish, mainly when you doing it for the first time."

        If autoconf is so problematic and such a PITA, why use it in the first place?
        autoconf/make is more trouble than it's worth. Portable makefiles and small portability test programs are the right way to do it.

        Some people just don't value simplicity enough. These tools are needlessly complex. This page has more info: http://www.ohse.de/uwe/articles/aal.html
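        A sketch of the "small portability test programs" approach being advocated (the snprintf check and file names are invented for illustration):

          # hand-rolled feature test, no autoconf required
          cat > conftest.c <<'EOF'
          #include <stdio.h>
          int main(void) { char b[4]; return snprintf(b, sizeof b, "%d", 42) < 0; }
          EOF
          if ${CC:-cc} -o conftest conftest.c >/dev/null 2>&1 && ./conftest; then
              echo '#define HAVE_SNPRINTF 1' >> config.h
          fi
          rm -f conftest conftest.c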
        • Amen to that.

          The number of layers of indirection used by automatic configuration systems is absurd. The number of otherwise useless skills which must be mastered in order to debug a code generator, and the complexity of that task even in the presence of mastery of those skills, represent a barrier to portability which is much more substantial than the relatively trivial task of constructing code which is portable in the first place.
        • Here's an article on Freshmeat expressing similar sentiments [freshmeat.net].
        • I'll agree that automake is more trouble than it's worth -- it really needs a complete rewrite. OTOH, I've never met a make-make program (to automatically generate a makefile from the source tree) that did an adequate job. They either miss too much or include almost everything in the system as a dependency (which may be technically correct, but is usually unnecessary).

          Autoconf, though, is pretty handy for dealing with things that are fairly commonly in different places (or even named differently) on diff
        • Huh? (Score:3, Insightful)

          by devphil ( 51341 )

          "aid that, I do agree that a good autoconf configuration is very hard to acomplish, mainly when you doing it for the first time."

          I don't know of any aspect of computer programming that's both non-trivial and easy to accomplish when you're doing it for the first time.

          If autoconf is so problematic and such a PITA, why use it in the first place? autoconf/make is more trouble than it's worth.

          You know, I've never had that much trouble with it. I've never had more trouble than doing it by hand would have g

    • Rather than attempting to include support for every architecture via autoconf, I think the BSD ports approach is far superior

      The problem is, how do you install the ports system? I'm currently trying to install Gentoo [gentoo.org] on my Linux box, but the installation is failing.

      • Where is it failing? What processor are you using? Have you done a Google search for the error messages?
        • make[2]: *** [/var/tmp/portage/glibc-2.3.2-r3/work/glibc-2.3.2/buildhere/sunrpc/xbootparam_prot.stmp] Illegal instruction

          I just started the install last night. When I woke up this morning, I got that error. Assuming (incorrectly) that the make would restart from where it left off, I started it up again, and thereby lost the error message. But now it's many hours later, and I've reached the error message once again, so I will be doing a search on Google. Maybe I screwed up with my -march setting.


          • by the way,
            CFLAGS="-march=k6 -O3 -mcpu=i686 -fomit-frame-pointer -funroll-loops -pipe"
            • trying it again, with -mcpu=i586...
              • by O ( 90420 )
                Yeah, if you get an illegal instruction, you're likely having a problem with optimizing for a CPU class you don't have.

                I had the same problem years back when trying to build LFS for a WinChip-based iOpener. I was compiling everything on a P3 and making sure to optimize for i586 (I think), but the glibc had to be done differently, and consequently it kept optimizing for i686 and practically every binary would terminate with an illegal instruction.
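                A quick sanity check before picking -march (a sketch; the exact flag names vary with kernel version):

                  # list what the kernel actually detected on this CPU
                  grep -E '^(model name|flags)' /proc/cpuinfo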
          • Is that K6, or K6-3?

            According to the bug reports, I don't know. :^) Seriously, I think it is a K6-3, but it is so poorly documented by AMD. You have to look in /proc/cpuinfo [or whatever it is called] and compare the flags to the bug reports. Even then, it still may not work. I know that for a fact because I tried all of the K6 alternatives and it would fail during the gcc compile as well. I had to end up using i586. It was kind of sad, but at least I finished installing it.

            It just happens that I fin

            • Well, I'm in the process of recompiling everything with all the default optimizations. For now, that'll be good enough (hopefully it'll work), and I'll play with the optimizations once I've grown more comfortable with gentoo. Thanks for your post, it's nice to at least know that I'm not the only one having this trouble.
    • Re:Ports (Score:2, Interesting)

      by Anonymous Coward
      Ah, an interesting example. You see, I recently tried to evaluate your BSDiff program. Not only could I not build it, I couldn't even figure out how to modify it to make it build - because you haven't even tried to write portable code.

      For example, you rely on BSD make. That makes some sense for a BSD project, of course. But BSD make is not available on Cygwin. If you don't want to use the more widely available GNU make and its extensions, you could at least restrict your makefile to the subset of cons
      • Ah, an interesting example. You see, I recently tried to evaluate your BSDiff program. Not only could I not build it, I couldn't even figure out how to modify it to make it build - because you haven't even tried to write portable code.

        In that particular case, I wrote the program for a specific purpose -- FreeBSD Update -- and was surprised by how many people wanted to use it for other purposes. So no, I wasn't really trying to write portable code. (On the other hand, GNU make is five times as large as B
        • Re:Ports (Score:3, Informative)

          by aminorex ( 141494 )
          Supporting GNU make is laudable because it is by far the most portable and ubiquitous build system in the world.

          Portability is laudable because it allows people to use your code.

          If using GNU make could be a barrier to the adoption of your code, that might be a reason not to use it, but the license of GNU make doesn't have any obvious bearing on the usability of your application which is built by its mechanisms.
      • But BSD make is not available on Cygwin. If you don't want to use the more widely available GNU make and its extensions...

        A long time ago in a galaxy far far away, there were two makes. One was BSD make. The other was SysV make. They were not compatible with each other. Then along came GNU and instead of choosing one or the other, decided to meld them together in an unholy marriage whose offspring wasn't compatible with either. As if that weren't enough, it added in half a million extensions of its own.

        T
    • Backwards (Score:5, Informative)

      by devphil ( 51341 ) on Friday December 05, 2003 @12:06PM (#7639106) Homepage
      Rather than attempting to include support for every architecture via autoconf,
      [...]
      Rather than doing a poor job of supporting every platform, it makes much more sense to support one platform directly,

      That's exactly how autoconf doesn't work. "Including support for every architecture" is how previous build systems worked, like imake. Those were an unmitigated disaster.

      Autoconf's goal is to allow the build system to adapt itself to whatever system happens to be available. It does not look at the OS and say: hey, I'm on Solaris, bring out these hardcoded settings. Instead it performs tests for each feature of interest -- where is the compiler? where are the SSL libraries? what's the exact function signature for this not-quite-standard-C subroutine? If your system has been customized from the factory default, so to speak, then hardcoded answers will be wrong, no matter how diligent those porting people were in originally deriving them.

      The idea is to have configure discover whatever the correct answers are, and set variables appropriately so that you don't have to care about the differences -- you can write the code once, as you say, and let the automatics do the necessary adjustments, rather than other people. Assume you're running on a POSIX system, for example, and let configure #define things as necessary to make up for the non-POSIX systems.
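      A sketch of that feature-test style as a configure.ac fragment (the particular checks are examples, not from any specific project):

        dnl test for features, never for OS names
        AC_PROG_CC                          dnl find whatever compiler is present
        AC_CHECK_HEADERS([openssl/ssl.h])   dnl defines HAVE_OPENSSL_SSL_H
        AC_CHECK_FUNCS([strlcpy snprintf])  dnl defines HAVE_STRLCPY, HAVE_SNPRINTF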

      • Re:Backwards (Score:3, Insightful)

        by aminorex ( 141494 )
        And you're saying that automake is NOT an unmitigated disaster?

        In my view any project that needs to resort to automake in order to configure the build environment has already failed. It has failed to deliver a portable Makefile, and failed to deliver portable application source code, and tried to work around that failure after the fact by patching it with automake.
        • Re:Backwards (Score:4, Insightful)

          by devphil ( 51341 ) on Friday December 05, 2003 @02:07PM (#7640266) Homepage
          And you're saying that automake is NOT an unmitigated disaster?

          Huh? When did this switch from autoconf to automake? Okay, sure, I'll play. I think automake is a fine tool.

          In my view any project that needs to resort to automake in order to configure the build environment

          Your view is wrong, since that's not what automake is for. That's autoconf's job.

          It has failed to deliver a portable Makefile,

          Here you're clearly talking out of your ass. Go look at a generated Makefile.in. Its whole purpose in life is to be more portable than any hand-written makefile could ever be. Go ahead, try to implement the same complicated, real-world-necessary rule patterns yourself, without resorting to a nonportable feature of GNU make, or BSD make, or Sun make, or...

          The point is to save typing during the input (Makefile.am) and yet keep the output (Makefile.in/Makefile) utterly portable. And it succeeds admirably. What, you'd rather force everyone to maintain hundreds of lines of makefile by hand?
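          The asymmetry being described, sketched as a complete Makefile.am (the program and file names are invented; EXTRA_LIBS is a hypothetical configure substitution):

            bin_PROGRAMS = frob
            frob_SOURCES = frob.c util.c util.h
            frob_LDADD   = @EXTRA_LIBS@

          Automake expands those three lines into the several hundred portable Makefile.in lines complained about elsewhere in this thread, install/clean/dist targets included.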

          and failed to deliver portable application source code,

          What the fuck does that have to do with the makefile? Or the build system? This is a red herring, and a specious argument.

          and tried to work around that failure after the fact by patching it with automake.

          ...a tool which, in fact, does not actually patch any files. Brilliant argument there, buddy. Perhaps you should pay attention to which tool does what.

          Feh. Go back to your hand-written tedious makefiles. I'll stick with tiny free-form automake files. And add you to my /. killfile.

          • Re:Backwards (Score:2, Insightful)

            by bazongis ( 654674 )
            I so totally agree with the parent post like I've never agreed with a slashdot post before ;)

            Sure, for small and simple programs you can resort to hand-crafted Makefiles and just skip all that autofoo business, no question.

            But once you bring stuff like multiple libraries (even if it's just no-install static libraries) or conditional compilation (enable/disable features that rely on certain libraries for example) into the equation, you'll have an extremely hard time getting this right on most platforms wit
            • Also, let's not forget that most developers do not have 15 boxes with different architectures/OSes to play around with at home.

              Maybe most don't. Some of us do.

              In my office alone I've got 11 boxes with 3 architectures (x86, PPC, 68K) and 6 OSs (Linux, Solaris, FreeBSD, MacOS, Windows - oh, okay, 5), including a small Beowulf cluster (only 5 nodes of P-166s, but hey..). Then there's the four machines in the basement, one of which has yet another architecture (Sparc). And if I get really wild and crazy,
        • I don't think there is any such thing as a portable Makefile for large projects. There are just too many strange bugs or oddities in shells and make in different Unix variants. There is very little standardization in the more advanced compiler or linker flags, for example.

          Similarly, portable application source code is rather difficult in many cases. You can write to a standard like ISO C or ISO C++, but then what do you do about the systems that don't quite conform to that standard? Being portable to m
      • Re:Backwards (Score:3, Insightful)

        by Brandybuck ( 704397 )
        You're both right and wrong.

        You're right in that imake and similar systems didn't work well. You're right in that an automatic configuration system should query for capabilities.

        But you're wrong in that autoconf/automake is the answer. I tried to build some software yesterday and I got the error that I wasn't using the correct version of automake. WTF! When the specific versions of automake/autoconf are themselves configuration variables, something's seriously wrong.
        • unless the configure script is senile and bootstraps itself from YOUR local autoconf (which is just programmer laziness).

          Tell the maintainer of your troubles... it needs to be fixed.

          autoconf/automake are self-contained, and should only need to be used for building your own configure scripts, or for setting up packages from CVS.
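            If you do have to bootstrap (building from CVS, say), the regeneration sequence looks roughly like this; the exact flags vary by project:

              # regenerate the build system with your locally installed autotools
              aclocal && autoconf && automake --add-missing
              # or, with reasonably recent autotools, the one-shot wrapper:
              autoreconf --install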
          • autoconf/automake are self-contained, and should only need to be used for ... setting up packages from CVS.

            And that's exactly what I'm doing. automake/autoconf need a simple way to switch between versions. There isn't one that I can find. If you know of a way, please tell me. Rid me of this aggravation.

            If these incompatibilities were between major versions of the software, I could understand. But GNU has an annoying propensity to break compatibility for every minor release with most of their software.
            • It's not so much the minor incompatibilities, in my experience; it's the overeagerness of developers to keep bumping the "minimum required version" needlessly, just to use the latest.

              Autoconf made a huge leap, with many changes, and some projects still use the last of the old versions (2.13). Automake had a series of broken minor versions, so there's a jump from 1.4 to 1.7 or so.

              Both automake and autoconf can have multiple versions sit side-by-side, so that's what some distros do; and Debian, at least,

              • *grumble* Yes, they quite botched that 2.13 -> 2.5x thing. It could have been handled much better. We Cygwin folk require libtool 1.5 to make shared libs, which means that you also require automake > 1.5 and autoconf > 2.50. However, due to how they introduced the new 2.5 series ("Here ya go! It's new! It's different! It breaks everything! No conversion is possible! It's very difficult to run both at the same time! Suck it!") many projects still use 2.13 because those developers don't want
            • Check out the 'modules' program at modules.sourceforge.net [sourceforge.net]. It makes it fairly easy to switch between versions of any program you like (and choose to set up with modules).
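      Once set up, switching is per-shell and looks like this (the module names are whatever you configured; these are hypothetical):

        module load autoconf/2.13       # for old-style projects
        module unload autoconf/2.13
        module load autoconf/2.59       # for projects using the new series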
    • configure can give you a lot more than just portable builds - it also allows you to select optional features for compilation. Check out the monster list of options for configuring gcc [gnu.org] or just run
      configure --help
      for your favourite package (gdb is another good example).
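      For example (--prefix is universal; the feature flags shown are common but package-specific):

        ./configure --help | less
        ./configure --prefix=$HOME/opt --with-ssl --disable-nls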
    • Maybe you're right - in this day and age when almost everybody uses GNU, BSD or other sane and non-crusty Unix variant, it's not worth trying to cope with all of them. In the earlier days it was necessary to have GNU packages build on all sorts of proprietary Unix variants, hence autoconf.

      However, even if you do let other people provide the necessary patches for other platforms, there's something to be said for having a central repository of these patches. If I happen to use a weird system where the argu
      • Also, if the changes to program source code are fairly small, it can make more sense to incorporate them in the main source tree so that they can more easily stay up to date - rather than having a dozen patch sets maintained by disparate people.

        Sure; there's nothing to say that you can't look at the patches people are distributing. But there's a balance -- it doesn't make sense to include lots of #ifdef AIX code when 99.99% of users don't run AIX.

        if you want your app to run on AIX 4.x do you really exp
        • I think a reasonable rule of thumb is that you make your app work on platform X if and when someone sends you patches to support it. Then you don't divert too much of your own effort towards platforms that nobody wants, but if someone was concerned enough to make patches for a platform then either (a) the patches must be trivial enough to be worth including or (b) the platform must be important enough to at least some people. Of course you can reject a patch that gunges up the whole source tree with '#ifd
  • by devphil ( 51341 ) on Friday December 05, 2003 @11:56AM (#7639028) Homepage


    The GNU Autotools have their own published book, the electronic edition of which is online [redhat.com]. This doesn't seem to be listed in the resources at the end of the article.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...