
File Packaging Formats - What To Do?

Jeve Stobs writes: "It seems that nowadays there are three ways of distributing a program: in a tarball (be it a .gz or a .bz2), in a Debian package, or in an RPM. These are all fine methods of packaging a piece of software, but they each have their places, and they aren't as comprehensive as I would like. I really think that, as we move toward a broader user base with the variety we have so far (not to mention the variety we are likely to have in the near future), a new method of software distribution is needed. osOpinion has an excellent editorial piece which details some solutions to this growing problem."
  • Soon enough a .zip type standard will gain dominance and such fussiness can end.. lzh, arj, and those other ones from DOS days all eventually (should have?) died out and left .zip as the standard.. hell, why don't you Linux guys just use that? pkunzip /d source target..
  • Why 1500 packages? Just get some packages from the A, AP, D, K and N disks. Slackware [slackware.com]
  • What about something like updaterpmdb, like updatedb for the locate command ???
  • You can search for the package that owns a given file with dpkg -S
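    For example (the owning package names here are just what you'd typically see circa now, and will vary by system):

    dpkg -S /bin/ls     # Debian: prints something like "fileutils: /bin/ls"
    rpm -qf /bin/ls     # RPM systems: prints the owning package, e.g. fileutils-4.0-...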
    ---
  • Cool, will it have sound support? I'm imagining a heated discussion between the system and conflicting packets...

    And why does this remind me of a certain scene in "Dark Star" [imdb.com]?

    Doolittle: Hello, Bomb? Are you with me?
    Bomb #20: Of course.
    Doolittle: Are you willing to entertain a few concepts?
    Bomb #20: I am always receptive to suggestions.
    etc... [uiuc.edu]

  • The goals of newbie users and the goals of sophisticated power admins are very different. RPM probably is a fine choice for newbies and others who may be more experience but don't care to worry about the administration of what is installed, especially on a home or office desktop.

    OTOH, for someone like me that cares about what is installed, how it is installed, where it is installed, and the overall security of my server, I have found that RPM just gets in the way and increases the frustration. I'm personally dealing with details that RPM is supposed to help me avoid dealing with, because I need to deal with them, not avoid them. RPM is definitely NOT easy for everyone.

    The correct answer is, there is no one size fits all. Standardizing on a single packaging format is the wrong thing to do, unless we want to standardize on one type of user (which the other user types are not going to be happy about, at all).

  • I use Info-Zip [info-zip.org], the third most portable program in the world. Info-Zip compiles on every known variety of Unix, and binaries are available for the most popular platforms.
  • I've been thinking about this for a while. The problem is you have several classes of files:
    Binaries - exported over a network read-only. Maybe arch-independent.
    Data files - read-only or read-write; may be on a network, machine-local, or per-user.
    Configuration - again, network, local, or per-user.
    The current fs layout is organised like this, so you can mount /usr read-only, /var read-write, and mount /usr/share over a network.
    If you have /package// then you lose this.
  • A ports tree would be ridiculous for Debian.

    There's already APT, which is awesome - 'apt-get update; apt-get dist-upgrade' goes and updates your ENTIRE INSTALLED SYSTEM (except for kernel and non-Debian packages).

    Installing individual packages is trivial - 'apt-get install PACKAGENAME'. And if you want the source, just do 'apt-get source PACKAGENAME'.

    There are a number of frontends for apt, such as Gnome-Apt (not sure whether it's still under development, or orphaned), Aptitude, Console-Apt, StormPkg, and Corel's get_it.
  • Ah, yes. I've been thinking about this for some time. I would love to see something like this. Nice and clean. Delete that file and you "uninstall" the application. Simple, clean.

    Just to toss out some ideas:

    Imagine if the entire OS was one file. So you might have a base Linux, BSD, or whatever system that you would install just by copying a file over. This would require some type of very basic system for managing the "filesystem" (or database, whatever it would be).

    MVS uses a very flat filesystem model. And you can drill into the datasets as if there was a directory structure (which I guess there is, sorta, from the layout of the dataset). I'm not saying I like the MVS filesystem. It sucks because it's so cryptic, but there is no reason a modern filesystem patterned after it would have to be that way.

    In order to do this right, I think it would require some major architectural changes to any OS to facilitate this one-file == one-application system.

    Simplicity, I like it.
  • A source install package manager will also need to properly integrate local hacks/patches to the package. If it doesn't do that, then I won't be able to use it for those source packages I do have my own code hacks to (several of them, such as apache, bind, ssh, qmail). There will also need to be a standardized place for it to find any "local modifications to source" for a given package so it can automatically include them.

  • I propose that we follow Apple's lead in this area and move to Self Extracting Executables, ELF binaries that you run and will extract the software and install it for you, without the hassle of remembering arcane flags and what program you're supposed to use.

    You need to read up on Apple's OS X packaging model. It's not nearly as stupid. "Extracting the program and installing it for you" is moot because the program and all the things it depends on are self-contained--just download and unpack using your favorite decompression utility, and there you are.

    If you care to look at the source, you can look at it--and compiling the source produces the package.

    I would not EVER run a system such as you describe.

  • from the jar tool : [sun.com]

    ...jar is a general-purpose archiving and compression tool, based on ZIP and the ZLIB compression format...

    You can even use winzip (ug) to crack jars open.

    --d
  • It still lacks some functionality though, like being able to unpackage from other places than the CD-rom, but it's improving.
    If you doubleclick an .rpm in mc (gnome midnight commander, the file browser), gnorpm will prompt you for the root passwd and install/upgrade the .rpm from any location (if you have read access).
    -><-
    Grand Reverence Zan Zu, AB, DD, KSC
  • by dabadab ( 126782 )
    I am somewhat perplexed. OK, our current package system (I'm using Debian) has its own problems if you install stuff (especially libs) from tarballs. That's clear. But how on earth could it be solved using self-extracting archives? And why on earth would you want a self-extracting archive anyway? It is not secure, you need some installer anyway to remove the packages, and it just does not make sense at all.
    What we need is an easy way to register manually installed software. Something like stow, but much more powerful. Yeah, that's what we need, not self-executing archives.
  • It was said before. If your package needs a mail agent to be installed, how are you going to make sure it works when I have qmail installed instead of sendmail?

  • Damn. Sorry about that last post... I forgot a closing tag symbol. I know... use the preview.

    from the jar tool documentation [sun.com] :

    ...jar is a general-purpose archiving and compression tool, based on ZIP and the ZLIB compression format...

    You can even use winzip (ug) to crack jars open.
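    For instance, you can list a jar with either tool (the file name is just an example):

    jar tf myapp.jar       # list the contents with the JDK's jar tool
    unzip -l myapp.jar     # or with any zip-aware tool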

    --d
  • Three points.
    1. Static linking is an option, but not the only one. You can link dynamically, distribute all needed libraries, but install only those not already installed (check with a secure hash). If a library is already there, make a symlink. (A rough sketch follows after this list.)
    2. Linking is not the only problem. Many programs communicate with scripts or run other programs. You have to make sure they are ok, or install your own versions.
    3. DLL Hell is there. Linux is no different from Windows in this regard (it can and should be better of course).
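    Something like this for point 1, as a very rough sketch (the directory names and the choice of checksum tool are just illustrative):

    mkdir -p ./lib
    for f in bundled-libs/*.so.*; do
        name=`basename $f`
        if [ -f /usr/lib/$name ] && [ "`md5sum < $f`" = "`md5sum < /usr/lib/$name`" ]; then
            ln -sf /usr/lib/$name ./lib/$name   # identical copy already installed: just link to it
        else
            cp $f ./lib/$name                   # otherwise ship our own copy alongside the app
        fi
    done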

    --
  • Windows and Mac OS are PC operating systems, used by a single user. Unix is not.

    Regardless of its capabilities, Unix is used by many people as a single user workstation, in a role identical to those other OSes. What do you think the target market for Loki's game ports is? Who uses KDE and Gnome?

    Obviously, with a multiuser server where a lot of people depend on it, you need a good admin. I am not suggesting that anything be done to take away that admin's ability to get his job done. But Joe Schmoe, who wants to play a 3D game on his home PC, should certainly be allowed to play the game without having to know where Mesa's libraries go.


    ---
  • by stevenj ( 9583 ) <stevenj&alum,mit,edu> on Sunday August 06, 2000 @10:06PM (#874685) Homepage
    ...is the lack of a central packaging organization, I think. This results in (at least) two problems.
    • First, if you go to a site like rpmfind.net [rpmfind.net], you'll find a zillion different versions of a given package, all with different numbering schemes. Besides the obvious duplication of effort, it is often not clear which one to use, especially if you want the "latest" RPM (or SRPM so you can build on an atypical architecture).
    • Second, suppose you find a program that is not packaged already, and the authors don't want to package it themselves, so you want to package it as a service to yourself and the Linux community. There is no guarantee that 10 other people aren't doing the same thing; you don't have any chance to become the "official" RPM maintainer for the program. Moreover, users will have no way of knowing whether you can be trusted. Finally, there is no definitive place to post the finished package (see above).

    One of the strongest points about Debian [debian.org], I think, more than any minor technical distinction between .deb and .rpm, is their strong centralized packaging organization that solves the above problems. (Based on this, they can also make nicer tools e.g. for automated network-based updates.)

    Now, if only Debian stabilizes their new release so I can install it on my PPC...

  • Then don't do the "install everything" install. I use dselect to get exactly what I need. For a server that means less than 70megs installed, for a gnome desktop there is more, but if you take a little time to pick and choose, it is still quite reasonable.

    Although I won't discourage you from using tar and compiling from source - it's a good skill, and if you've got the time for it, then I say good luck to you. I just don't like admining boxen where I have little idea what is installed, and precious few tools to find out beyond cd and ls.

    The old libraries are often from non-free binary-only progs that can't be recompiled anyway; no one's got the source or is allowed to re-distribute it, etc.

    I don't quite get what you mean by "depend conflicts", unless you're trying to get 2.2 installed in under 50 megs, which is a bit hard, because Debian wants to install a semi-full system: internationalization support, ncurses, a large terminal db, dpkg, plus things like debconf, which is debatable. In that case there is at least one Debian derivative meant for the embedded market.
  • Not necessarily. Let's say you have a 2-way index. Then you can search for all B's where A is true, or all A's where B is true.

    Now, let's say that one attribute in the database is the directory, and another is the file. (This is not unlike how MUDs and MUSHes represent their objects.)

    When you go into a directory, you see all files for which (object.directory == user.directory).

    Now, when objects move around, the database and filesystem remain in sync, as they have become a unified concept.

    When used the reverse way, it works just as well. I want to run the XYZ web browser, go grab the nth version found in the database. This is perfect for a GUI interface, where paths should be transparent or non-existent, and where objects just -are-.

  • I like the present situation. I use an RPM (or try to find one) for software that I won't tinker with much. For others that I might want to customize, I prefer a tarball. Too many choices, of course, would become a headache.

    Founder's Camp [founderscamp.com]

  • ZZiiiiiiiiiiiiiiip it.
    Ziiiiiiiiip it.

    Actually, my favorite compression format was
    a trash compactor.

    It did wonders for my mac.
  • That's an MUA, not an MTA. The generic mail interface is the "sendmail" command. MTAs other than sendmail provide their own version of this command, taking most of the same options as sendmail (and possibly ignoring some of them).
  • Fine, but don't put cocksucker-marketing dorks looking to fill this market in charge of my linux. That will be the death of it. Debian GNU/Linux, in particular is quite fine as it is. Let corel or someone else do the luser-related tweaking, and simplification stuff, just make that a schmeer on top, one that I can always wipe off when it gets in my way. What I want is a real server OS (that is not BSD licensed, sorry no flames meant, its just the truth), that you can make into a desktop, if you want.
  • Agreed. Once the job of the LSB is done and all major distros agree on it, we can proceed with either creating new package manager, or refining the existing ones.

    RPM *and* .deb are both great, but both have some serious shortcomings. What we need is a flexible, intelligent package manager that would work with source as well as binary packages (i.e. if you pick up a source package, it compiles it on your machine and then installs it transparently).

    I use Red Hat and hence deal with RPM extensively, and frankly I think it's great. The only problem is that for performance reasons I like to have some things compiled on my machine (e.g. kernel, compiler, httpd...). That involves picking up a .src.rpm and, while it's no biggy for me (you sometimes have to adjust some parameters, make a few changes to the spec file), I understand it can be a daunting task to those who're not used to it.

    -----
  • Yeah, like me and all my cool linux friends are totally like laughing at you and what not! ;)

    What you need is a recent cheapbytes Debian CD and some cool t-shirts, a six-pack of Jolt + a bottle of vodka, then you can join in the fun! Driving around, looking for BSDers and yelling crap at them, "you call that license free!!!" "microsoft could take yer code, and sell it!!" "you smell funny!" "Cut your hair, the early 90's sixties revival ended last century!!!" etc. and then burning tread, peeling away as the cops whip it around with the lights flashing... ah to be linux and young..........
  • Well you're right, but I think no one will be able to declare standards on software (non)installation, because everybody has different views.

    The best would be if Linus or RMS defined standards, but let's wait for GNU/Linux to mature a bit more.

  • The last thing in the world that Linux needs is an easy and consistent way of installing and configuring software.

    99% of Linux's much-ballyhooed ability to run for weeks, months, years on end is down to the fact that users are NOT installing new software or new versions of software every other day.

    Every installation is a chance to screw something new up, and because of the potential for interactions among all the software installed, the likelihood of problems increases exponentially with each package installed.

  • Ha! and RPMs aren't fragile? How many times have you installed an RPM that has broken dependencies? The argument you make here applies equally to RPMs.
  • "install" can just put a symbolic link into some directory pointing at the actual executable to run. Deleting the package will still successfully remove all the files, though it will leave a broken link. Still much preferable to the existing shotgun "installation".

    But rather than trying to figure out how to break this excellent idea so it works with old tools, why not fix the old tools? What we want is for the mere existence of the file to "install" it.

    For the shell: modify it (or the exec system call) so you can execute a "package" and run the application. Tell everybody to put ~/bin in their path. And then tell everybody that sticking the package in ~/bin will allow them to run it from the shell.

    For things like the Gnome start menu: fix gnome to look for packages in ~/bin and extract the necessary information (help, icon, etc) from it. Then again, putting the package there will suddenly make them appear on the menus.

    To "install" a package so all users can see it, become superuser, and move (or link) the package to /System/bin.

    To "uninstall" a package, just delete the file (modify rm so it does rm -R on them automatically).

    A package that has to change the system somehow (like install a service) will pop up a question box when run saying "do you want to install this". If the user says yes it then runs some kind of "install shell" program. This is a setuid program that asks for the root password in a pop-up window and, if the user types in the correct thing, it then runs the script. Cheap solution: the user must put the "file" (jar/.app/whatever) into ~/bin to see it without installation (and their path must contain ~/bin). Making shells see the command is the same problem as making the Gnome menu or the Windows start menu see the command: something needs to be done other than simply putting the file on the file system.
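    To make the idea concrete, a minimal sketch of the "install is just a link" scheme (the bundle name and the /System/bin path are hypothetical):

    mkdir -p ~/bin
    ln -s ~/downloads/SomeApp.pkg ~/bin/someapp    # per-user "install": it now shows up in the shell and the menus
    su -c "ln -s $HOME/downloads/SomeApp.pkg /System/bin/someapp"    # system-wide "install"
    rm ~/bin/someapp    # per-user "uninstall"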

  • This was one of the key reasons I switched from Linux to FreeBSD. After trying to get things working on Debian, Redhat, and SuSE I tried FreeBSD and haven't looked back.

    I remember seeing something about an organization to standardize all the Linux file system layouts etc., but nothing seems to have come of it. Redhat was not a part of it and Slackware flat out refused to abide by any decisions that they made.

    It seems to me that Debian has its act most together, but has problems because there are no standards.

    Contrast this with a *BSD OS (and perhaps others), which have a centralized system of source and/or binary (ports/package) installation using trusted files. Download, make, grab and install all dependencies, install, done. And everything is maintained by a group of people who make the latest greatest software buildable and runnable on *BSD. This is, of course, precisely because there is little fracturing in the organizational structure of the overall OS. A weak point is trying to update a program to a newer version (I've seen ways to do this but it's not exactly simple).

    It won't happen until the Linux *distro leaders* decide to wake up and compromise on this key area of standardization. I know Linux users love to use the 'multiple choices' argument but at some point you have to start catering to the majority rather than the minority... (yeah yeah, sounds more and more like a commercial OS)

    Anyways, I don't mean to slam Linux but it's a key area of weakness. It's ironic that a great strong suit of Linux is also a major headache.
  • By opening a socket to port 25, perhaps? Or by providing a /usr/sbin/sendmail that's option-compatible (as, I believe, most of the MTAs do)?
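    For instance, any MTA that ships an option-compatible /usr/sbin/sendmail can be driven the same way (the address here is made up):

    printf 'To: user@example.com\nSubject: test\n\nhello\n' | /usr/sbin/sendmail -t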
  • One of the things that annoys me greatly is packages that install stuff into my system configuration. I want total control over my system, including what services are started at boot time, and in what order.

    OTOH, I can understand someone else not understanding at all how the system starts up, and wanting the package to take care of that (if it's a service to be run). Part of the difficulty of all this is that we have not standardized on one kind of user.

  • by Hard_Code ( 49548 ) on Monday August 07, 2000 @06:06AM (#874705)
    I am reposting this from a previous article because it makes more sense under this one.

    Ok, as one only partially initiated, I must ask: in spite of the simplicity of the philosophy of Unix, why oh why are there so many damn interdependencies in applications? Example: I install RedHat (yeah, shut up, it was the only thing that would install over DHCP on this old ThinkPad, after trying FreeBSD, Slackware, NetBSD, and TurboLinux), choose the most minimal of configurations, and also choose some small tools like cvs, etc. Well all of a sudden it is prompting me for all sorts of other dependent packages. I could not believe it when it told me I needed the entirety of KDE, and *then* also GNOME to satisfy dependencies! That is bullshit. Tk, Tcl, Python, Perl, Expect (!)...how the hell many things do I need to install? Am I the only one who thinks that backending GUI or administrative applications with Perl is just a god-awful abuse?

    Sure this is just one experience, but I've found the same general thing when installing other distributions. Is this just a commercial flaw? Or do other "non-commercial" distributions like Slackware and Debian not require this? I just boggle at the horrendous amount of crap that even the most trivial of applications is dependent upon.

    Ok, I'm putting on my asbestos trousers...
  • What I see as most important in the short term is the complete lack of a standard installer (the likes of InstallShield) and a standard GUI for it in X and curses.

    1. It should be a binary installed on the local system and so separated from the packages it will be installing. (for security reasons as stated elsewhere in this thread)

    2. All packages should be digitally signed and this signature has to be validated before installation can commence. (Be wary of M$ signatures ;-) Distribution maintainers should include trusted signers' keys in the distribution for convenience. (A minimal sketch of this step follows after the list.)

    3. Within the installation GUI it should be possible to get a clear view of what-goes-where in terms of files contained in the package

    3.1 For this sort of a major installation system overhaul to become reality there must be very good tools to create these new packages, and utilities for converting existing .rpm/.deb/.tgz packages to whatever this new system will be.

    4. Also installation paths should be configurable and there should be a button for checking and updating the system path setup if needed -automatically. Also post copying/installation configuration should be available through the same GUI, so that one is able to install and configure software easily through a "wizard" in the beginning and then get down to 'vi' if tweaking is needed at a later time.

    5. The installer should have a decent help on LSB directory meanings and a FAQ of known best practices of software installation. (I use /opt for just about everything)

    6. The packaging format should be something like the .tgz package format of the *BSDs' ports collection, with automake/autoconf Makefiles for dependency checking and a way of automatically fulfilling these dependencies if needed à la the ports collection (It really works wonders - it is something we should adopt from our *BSD kin)

    6.1 The packaging format should also support source compilation i.e. instead of being an "only binary" package the SW could be distributed in source form and the installation system will give the local machine optimisation flags to the Makefiles of the package to be compiled. Flag override should be also available in the package if the SW is known to malfunction with some optimisation flags i.e. it's not SMP safe or breaks with high optimisation parameters.

    6.2 There should also be proper uninstall scripting in the installer, and another handy feature would be the possibility of making an orphan check in the installer, i.e. when a piece of software is uninstalled no shared libraries should be removed (this semantics will leave orphaned libs hanging around, but retain other SW operational if it hasn't been registered with the package system as using this library). Now with orphan detection root has direct control over what libs are to be removed as surplus, and if something then breaks he will be able to reinstall whatever he just removed, instead of being left guessing what essential part was removed with software package ZYX.

    7. A new binary file format like .jar would be helpful if one strives to minimize the directory count of the box, but at the same time it compromises control of individual files in a given piece of software (think icons in KDE as part of a kde2.package - how can those be updated easily and conveniently?) Perhaps some kind of a hybrid system with a hierarchy of

    say /opt/kde2/kde2.package
    and /opt/kde2/conf/(config files)
    and /opt/kde2/icons/(icon files)

    8. At the same time we should try to rationalise the current confusing directory hierarchy system of /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /usr/local/sbin, /usr/share/bin, etc. as this becomes entirely uncontrollable after a while of mixing packages and/or source compilations from various sources on any given system.

    Nowadays it is complicated to get a grasp of what's installed on a system just after installation if you're using something like RH/Mandrake, which stick every piece of SW beneath the sun and their friends into a default installation (even with the minimum/expert installation =()
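    As a minimal sketch of point 2 with today's tools (the file names are only illustrative):

    gpg --detach-sign foo-1.0.pkg               # the packager signs the package, producing foo-1.0.pkg.sig
    gpg --verify foo-1.0.pkg.sig foo-1.0.pkg    # the installer refuses to proceed unless this check succeeds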

    Hopefully someone with enough programming expertise to actually do something about these suggestions sees this post and finds some of it rational enough to try to implement a system like this on Linux/BSD.

    I'm looking forward to seeing the day it works,
    ++ Raymond

  • You write up specs for a "meta-package" format, and EPM will build various packages for you. It supports RPM, .deb, Solaris packages, IRIX packages, and HP-UX packages, as well as a "tarball" format that installs with a setup program (user-supplied, or you can use theirs).

    It's GPL, and it's packaged for Debian (for woody, not for potato, unfortunately, although the woody package works on potato).
  • Your program outputs nothing. But you can demonstrate what I mean in Perl:

    sub make_package {
        my $library = "hello, world";                  # lexical, captured by the closure
        my $program = sub { print $library, "\n"; };
        return $program;
    }
    # create the program/package
    $package = make_package();
    # change the environment incompatibly ($library here is a different, global variable)
    $library = "oops, wrong environment";
    # run the program
    $package->();

    This outputs "hello, world" in Perl. In the real world, it will output the wrong thing.

  • Admittedly this applies only to tar'd source packages, but the autoconf system already allows you to structure your system that way: for a package foo, just configure it with

    ./configure --prefix=/usr/local/foo

    and then after the "make install" go to /usr/local/foo/bin and run

    for FILE in *; do ln -s `pwd`/$FILE /usr/local/bin/$FILE; done

    If the package provides dynamic libraries then you also need to change /etc/ld.so.conf (and re-run ldconfig), and if it provides man pages then ditto for /etc/man.config, but it's no great hardship.
  • by Skapare ( 16644 ) on Monday August 07, 2000 @06:19AM (#874713) Homepage

    Time and time again we hear the call for standardizing something. The common argument tossed out is that such and such standard is needed if Linux is to succeed. I say baloney! to that. Linux will survive, grow, and prosper, on the basis of what has made it do so, so far: its diversity and opportunity to try things in a different way.

    If we are going to standardize something, then maybe what we should standardize on first is a single type of user. After all, if we only have one type of user to deal with, then we don't have to deal with a diversity of needs and we can focus on exactly what this one type of user does need. Then we can have just one distribution, one filesystem layout, one package format, one window manager. Hell, we might even be able to narrow it down to just running a single application. Then Linux will RULE!

    ... in your dreams!

  • Yes, if bundles are just big executable archive files or directory trees with a known structure, you don't need a database--you can just search through them in a more traditional UNIX shell style. A database approach would be another way, though, of letting both users and packages find the executables they really want (the "owned by..." was just another example--for example, for security reasons, I might not want to execute files owned by anybody else right now).
  • slow news day eh?

    personally, i don't understand why cardboard boxes have been around for so long. I figure, if you order a washer/dryer combo from a store, the cardboard you purchase it in should be able to unpack the washer/dryer combo for you, set it all up, and do a complimentary load of laundry.

    my not-so-serious point is that tarballs and zips are the cardboard boxes of the computer world. They're meant to do one specific task, and they do it well. No one can say the tar command isn't pretty robust. But basically, do you care about the packaging something comes in? I sure as shit don't. I care what's in it. *nix, generally being more hacker-oriented, has always had a do-it-yourself attitude. Personally, I don't want to see a single file that extracts, runs, and does your dishes all at the same time...that's just not what I switched up for.


    FluX
    After 16 years, MTV has finally completed its deevolution into the shiny things network
  • by schon ( 31600 ) on Sunday August 06, 2000 @09:37PM (#874719)
    A new package format that would combine the strengths of the current various formats is a good idea, but (and it's a big but) I hope the author isn't the guy to do it (he doesn't have a good grasp of security..)

    He mentions making the package self-extracting - "just do a chmod +x on it, and away you go"

    Sorry, but that's why "security" in Windows is so abysmal..

    Think about it - to install, you're running this file as ROOT - would you run a binary from an unknown source as root? (I don't even untar source as root!) - this is inherently a bad idea.

    Just my .02...
  • by ssd ( 219700 ) on Sunday August 06, 2000 @10:13PM (#874720)
    Packages are nice, but really, if we want to see Linux make it in the Real World (TM) we need to advance to the 21st century and recognize that we must have a simple and easy means of installing new software. I propose that we follow Apple's lead in this area and move to Self Extracting Executables, ELF binaries that you run and will extract the software and install it for you, without the hassle of remembering arcane flags and what program you're supposed to use.

    Just look at it! The author of the article makes a very good point. How many of you really look over a binary before you install it? Do you just rpm a package and then run it? What do you have to lose from running this random binary? No more than if you downloaded the rpm and ran the binaries contained therein.

    What we as the Open Source community need to do is create a standard install tool, like what Windows programmers have with InstallShield. The software could be customised, and could be written to recognize the directory structure and install the files in the correct locations, maybe through the use of certain files that would be kept in a standard location, like /opt/local/lib/install/.

    So, whaddya think? Think we can get something whipped together? Even just a proof of concept done in a combination of perl and python? We certainly don't want to make the mistake OS/2 made and not at least keep up with what windows has had for years.

    -ssd
  • by lordsutch ( 14777 ) <chris@lordsutch.com> on Sunday August 06, 2000 @10:15PM (#874721) Homepage
    A less flip response is that Debian has a standard menu system [debian.org] that is compatible with any window manager; you simply stick a file in /usr/lib/menu/[package], and update-menus in your post-install script does the rest. The user can also override these menu entries if she wants. I believe Debian has recommended menu entries for all interactive programs (i.e. ones that don't need command line options or a shell to do useful work) for a while (see policy section 3.6 [debian.org]); packages that don't have menu entries usually get nasty bug reports filed against them ;-).

    GNOME and KDE also have a similar concept.

  • I've always despised the idea of hard-coded paths. What we need is symbolic path references. (Not to be confused with symbolic links, but I don't feel like flipping through my thesaurus.) Instead of putting a file in "/etc" or "/usr/local/etc" (not that packages should be allowed to touch /usr/local), you copy it to, say, "$PKG_DIR_CONFIG", which can be whatever you want. Or better yet, taking a cue from autoconf, "$PKG_PREFIX/$PKG_DIR_CONFIG". And similar variables for binaries, libraries, data, and documentation. Then not only could a single package be made for almost every distribution, but the installation method could be controlled by the user if the usual install paths aren't desired.
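    A crude sketch of what an install step could look like with such variables (every name below is invented; the defaults are there only so the snippet stands alone):

    : ${PKG_PREFIX:=/usr/local}
    : ${PKG_DIR_CONFIG:=etc}
    : ${PKG_DIR_BIN:=bin}

    install -d $PKG_PREFIX/$PKG_DIR_CONFIG $PKG_PREFIX/$PKG_DIR_BIN
    install -m 644 foo.conf $PKG_PREFIX/$PKG_DIR_CONFIG/
    install -m 755 foo $PKG_PREFIX/$PKG_DIR_BIN/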
  • I agree 100%.

    There is no reason for Linux to duplicate DLL hell from Windows.

    And everybody asks "what kind of interface is friendly for the user" and talks about installation wizards and other crap. What is "friendly" is when there is a little box in their file browser that is the "program" and they click on it to "run" it and they NEVER have to "install" it.

    Static linking will 100% allow this for programs that don't have to mess with system resources like drivers or continuously-running services. For programs that do have to mess with the system, I recommend that the program have compiled into it the ability to execute a setuid utility (that asks the user for the root password) that can run a script to "install" the program.

    And he did not mean static linking libc or Xlib or other things that are obviously part of the system, he was talking about static linking those hundreds of Gnome libraries that are making it almost impossible to get Gnome to work!

  • So why not just keep statically-linked binaries in /boot/bin, /boot/lib, etc.?
  • Look at the gnu stow package --- It facilitates this.
  • Perhaps I don't have my full dosage of clue, but, while the LSB is nice and all, I still would like to see broad standards to which all distributions conform. Linux, the net, and open source software in general is largely based on open specifications, standards, so that things can interoperate. I'd like a few things cleared up:

    /usr/*, /usr/local/* ?
    /var usage
    /opt usage
    /users, /usr/home, /home ?
    init scripts: System V, BSD ?
    name and format of configuration files (*.conf, *.rc, .*rc)

    Also I think there should be a unified package manager system. It seems to me package management needs to migrate to a standardized repository or database (dare I say "registry"?) of application information, indicating interdependencies and what is installed. Somebody mentioned an OS X-like application package system. I don't think that is entirely feasible, because many packages consist of different components: applications, documentation, system components - these all need to be put in their correct place. We hate the Windows registry because it is in some proprietary binary format that you need a tool to use, but what about a simple XML file, or micro-database, where information is stored centrally?
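    Just to sketch the idea, a registry entry might look something like this (the element and attribute names are pulled out of thin air):

    <package name="foo" version="1.2-3">
      <depends package="libbar" version=">= 0.9"/>
      <file path="/usr/bin/foo"/>
      <file path="/etc/foo.conf" type="config"/>
    </package>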

    You might even migrate various application configuration files into this, but that's pushing it, and for the most part, application-specific configuration files do the job just fine (I believe there is at least one effort to standardize application configuration files).

    Again if things are already being done, then fine, I'm not aware of them. Ideally one should be able to install any distribution, and immediately know where things are and how things work, having used any other distribution, and be able to install, configure and remove packages in a standardized way.
  • Maybe you should read past the second paragraph?
  • Incompatible in what way? glibc is glibc, xlib is xlib, and sendmail is sendmail no matter what distribution you use. I can't think of any other possible incompatibility other than file location and possibly library version conflicts, which can be handled by dependencies. It's perfectly possible to install a Redhat package on a Mandrake system, for example. With alien, you could install .debs. :)
  • So... you're worried that your installation script is going to do something nasty, but you're quite happy to implicitly trust whatever program it's installing?

    The only possible courses of action are:
    * Carefully check the source of the software you're installing and the installation procedures

    * Only get trusted software from a trusted source

    * Sod it, and hope for the best
  • by stikves ( 127823 ) on Sunday August 06, 2000 @10:23PM (#874749) Homepage
    Ok, this comment is an enhanced version of my reply to another comment, so I am not sure how this will affect my Karma. Anyway:

    First of all, packaging standards alone are not enough for Linux. There are many other areas that need to be standardized.

    1. DOS/Windows compatibility layer. Wine and SAMBA both use a registry but it is not shared. Also Wine and DOS use DOS drive aliases. We may include MARS_NWE and SAMBA shares here too.

      We may not be able to force people to use one central configuration file, but there may be a "global DOS config file" which is parsed after application-specific files, or the teams may agree on a specific registry format.

      I think I do not have to mention that DOS is not the only case in this.

    2. Multimedia. Has anyone looked at the Microsoft DirectShow SDK? One can take the input from a TV card, then tee it and send one end to the screen, the other to the Indeo codec and then to an AVI mux with sound input from the mic, and store it all in a file. This is an area that needs standards too.
    3. Browser plugins. But Netscape plugins seem to be standardized, because Konqueror and Mozilla both use them.
    4. Drag&Drop, VFS and many other Desktop issues
    5. DDE Like system...
    6. Common object model between Desktops
    7. Something like Novell Directory Services
    And many other things...

    But I think these will not be possible until RMS or Linus does something about it. Who knows?

  • I thought it might be useful to present another way of looking at the application packaging problem. Rather than as "object oriented encapsulation" and "communication through well-defined interfaces", we can also look at it as a kind of lexical closure. In programming languages, lexical closures ensure that a function that was written in one kind of environment behaves predictably even when called in a different environment.

    Any program is written and tested in a particular environment (libraries, writable directories, etc.) and makes references to those objects. To function correctly, it should really be shipped with the complete environment it depends on. That's no different from a nested function in Scheme or some other fully lexically scoped programming language (sorry, C/C++ need not apply).

    Of course, requirements on applications are a bit more complicated than functions. When shipping the "closure" of a program over its environment in that way, as an optimization, we may not want to ship every single library with it; many of them may be present in the target execution environment (shipping a secure checksum may be sufficient). Furthermore, for some values in the closure, we may want to allow "substitutions". For example, for some libraries, we may want to allow newer versions to be substituted. And some references, e.g., to a "document directory" are dependent in more complex ways on the environment. Such deviations should probably be identified and declared explicitly as exceptions.

    Lexical closures get their power from two properties. First, in modern languages, the "free" references to dependencies in an environment are captured automatically and reliably. Second, once closed over an environment, a function can't go out and start accessing other parts of the environment (at least not without being very explicit about it in most languages).

    The alternatives (manual declaration of environmental dependencies, unrestricted access) were tried with programming languages, and they led to unreliable programs, and languages got rid of those approaches pretty quickly. We should probably follow a similar path for packaging applications.

  • by leftyunix ( 219703 ) on Sunday August 06, 2000 @10:33PM (#874753) Homepage
    Is everyone stuck in Linux? It seems like nobody has used /usr/ports on a BSD machine lately. Personally, I think it's got what everyone needs, being a package system that's got the features talked about here, and it even builds from source. Now, if I'm seeing this correctly, the reason we _have_ source distributions of things is a) for the customizability of it all , b) to have the source at our disposal to do with as we please. /usr/ports allows one to do just that. Coupled with ease of use ('cd /usr/ports/category/myprogram/' 'make install'), I think we've got a winner. Too bad the linux world doesn't have that... Until then, FreeBSD and OpenBSD get my vote... c
  • It seems to me that the author simply lacks experience with RPM. RPM in its more recent incarnations has a Prefix option so that you can define relocations from /usr to /usr/local or from /usr/doc to /usr/share/doc or whatever. You simply specify directories that can be relocated without breaking the package.

    That solves the problem of different directories for different distributions. If you are not using the same paths as the packager, you can give options to RPM to move the files to where you want them. Obviously it's not perfect because some programs hard code path information, in which case the directory cannot be relocated and should not be specified in a Prefix line.
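    For example (the package name is just an illustration):

    rpm -qpl foo-1.0-1.i386.rpm                      # see where the files would go by default
    rpm -i --prefix /usr/local foo-1.0-1.i386.rpm    # relocate a relocatable package at install time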

    The author also says that a standard .tar.gz should be used. I totally disagree with this. CPIO is very similar to tar, but has some distinct advantages over it. One such advantage is the ability to read filenames to include from stdin. Of course there is the tar -T option, which may be able to take "-" as an argument, or possibly /dev/stdin, to read files from input. Another advantage to cpio is that it deals with device special files and pipes/sockets better than tar. Of course all of these are moot points: RPM is already using cpio, DEBs use tar. Either way it's about the same thing.

    Another concern was the dependency problems. Sometimes package maintainers will list other packages on the requires line. This is a really bad thing to do. RPM will automatically find library dependencies, so there is no need to do this. Packagers who do this are creating the problem, not RPM itself. Do note that RedHat has a habit of doing this, so they are somewhat guilty.

    One issue I would like to bring up is that RPM works best on a system when EVERYTHING is an RPM and you do not install any shared libraries from source. I am thinking of making a tool that will generate a small .spec file which, given a directory where a program has already been built, will make install to a buildroot and then package the files into an RPM. That would be useful for keeping track of programs that are constantly changing, such as those you are updating from CVS and recompiling only the necessary parts.

    Finally, there is a need for a better version of alien. Right now alien is severely broken in that you must run it as root to get good results. Alien should instead read the permissions and any device special information from the source file and use that information when packaging, instead of requiring root access.

    Perhaps another way to fix the packaging problem is a tool sort of like alien that would convert an RPM from one distro to an RPM for another. Something that could relocate files, fix dependency info, and then create an output RPM with the changed information. Of course I still say that if the package is made right in the first place, this is unnecessary.

    Well, that's a lot to think about, hopefully I got some other people thinking too.

    -Dave

  • by goingware ( 85213 ) on Sunday August 06, 2000 @10:36PM (#874755) Homepage
    Learn from other platforms.

    And I mean this not just in regard to installers and packages, but everything.

    And no, I'm not proposing that what we need to do is make Linux look more like Windows [geometricvisions.com] or the MacOS.

    But there are problems that others have solved and we can draw on their solutions, even if we can't use their source code.

    (Even when I was working at Apple [apple.com] I would tell people about stuff from SunOS or Linux that I thought would go good in the Mac - they wouldn't hear of it).

    I think an indicator of the problem we face in trying to bring Linux to the desktop was when I was corresponding with RMS about things I thought would be helpful to the users and I suggested an installer. He replied "What's an installer?"

    The best installer I've ever come across on any platform, both to create packages with and for the user to install products with is Mindvision Vise [mindvision.com].

    It would be worthwhile to find a friend with a Mac and download it, and make a little toy installer that installs SimpleText and a readme file to try it out (you can download it for free - the installers created with it complain that you've lifted it until you get a valid serial number. It is possible to get a serial for free for installers for freeware).

    It beats the living hell out of anything I've seen for Linux.

    BTW - if you want to see an installer that really blows, check out PackageBuilder/Software Valet for the BeOS. [be.com] The thing drove me to distraction. It wasn't just the way it would corrupt the data in my archives or crash while users were installing my software with it.

    What really drove me nuts is that it had no concept of updating an installer when I had built new software to go in it.

    With vise you just drop your new files in the folder next to your installer project and tell it to update. It gives you a list of files that have changed and you can approve or disapprove of updating them (or deleting the ones that are now missing).

    PackageBuilder requires you to delete the old file from the installer project, which loses its settings; then you have to go and add your file back in and reset your settings. This is probably the number one reason I've been reluctant every time I've had to release a new version of my software on the BeOS - I enjoy programming it but I hate the damn installer.

  • by Sloppy ( 14984 ) on Monday August 07, 2000 @07:00AM (#874759) Homepage Journal

    I don't think you guys should be allowed to run a *nix of any description if you are going to start installing stuff left, right and centre without knowing what files are going where.

    Why do Unix users have to be Sysadmins, but Windoze and Mac users don't? Why are the rules different?

    This is not Windows. You are trying to make Linux aspire to be Windows.

    It doesn't look that way to me. It looks like they just want a system that works well. Lowering the sights to Windows would be pretty unambitious.

    It's not that you should make the interface neccessarily easier, but that you should attempt to make the user more clued up.

    Obviously it would be nice for the user to be more clued up. But there are a lot of people who use computers these days, and making cluefulness a requirement just isn't realistic. If it continues to be a requirement for Unix, then people who don't want to learn sysadmin stuff are just going to have to scratch Unix off their list of options. Do you want them to do that? If so, why?


    ---
  • I'm wondering if all the people posting all the merits of the Red Hat package manager have ever had any experience with debs. A lot of the flexibility that the author wants, i.e., all the power of a configure script, can be easily achieved by the package maintainer simply setting up the preinst script for a Debian package. I've never seen a system SO flexible. Debian packages have the ability to suggest, recommend, and require each other. They know exactly what each other provide, and they describe dependencies in a much more intelligent manner than RPM does. Dselect allows you to easily (if you know what you're doing) resolve package conflicts and work out exactly how you want everything installed. Apt will actually fetch and install all packages necessary to install the requested program (with your permission of course).
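    You can see those relationships on any installed package; for example (mutt is just an arbitrary choice):

    dpkg -s mutt | egrep '^(Depends|Recommends|Suggests|Provides):'
    apt-get install mutt    # apt pulls in whatever Depends requires, asking first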

    The author admits that he hasn't ever used Debian packages, so why bring them into the article?! He talks about how he would like a package manager that will recognize files for what they are and not based on the packages that they belong to (i.e., libraries), but personally, I don't see the need for this type of functionality when your distro already has packages for virtually everything you can imagine. Sure, many people would like to install the latest bleeding-edge libs, but that's what Woody is for ;-). One of the main purposes of packages is to allow for easy upgrades between releases. If you're going to be installing lots of libs from tarballs, it kind of defeats the purpose of using packages since you can no longer apt-get dist-upgrade and have everything brought up to date. If there's some obscure library/program out there that there isn't a package for, then great, why not become a package maintainer and fill the gap? It seems like the author of this article knows rpm pretty well, but he shouldn't reference things that he doesn't have experience with... This is not flamebait!

  • You are thinking of GNU Stow [gnu.org], which does exactly this. While I wouldn't recommend this type of file layout for /usr, /usr/local and /opt are another matter entirely and I would recommend Stow-type packaging there.

  • by jd ( 1658 ) <(imipak) (at) (yahoo.com)> on Monday August 07, 2000 @02:15AM (#874767) Homepage Journal
    I'd like to suggest a variant of my entry to the Software Carpentry contest.

    Instead of having large numbers of disparate databases, all holding essentially the same information, most of which is never kept in sync with the filesystem (and therefore becomes stale very easily), I'd like to propose that Linux filesystems become package managers in their own right.

    And not just package managers. Most of the information that such tools as 'configure' generate is already known -to- the filesystem, albeit in a form that's not readily accessible (hence the need =FOR= 'configure'). But if the filesystem keeps an actual database of all installed packages and files (along with where they currently reside), then 'configure' scripts would become largely simple database searches, which would have increased speed and increased reliability. (No more having to tell 'configure' where non-standard installations are. It'll still find them.)

    Also, finding files would be quicker. A linear search is horribly slow, when all you really need is a sorted index that you can do an n-ary search on.

    Does this sound a bit like Windoze? Nope! NTFS and FAT32 keep very primitive caches, I believe, but not a comprehensive database. That's why you need installers with Windoze. The filesystem itself is not capable of acting as an "installer" in its own right. The system I'm proposing would be. A simple copy would be as "uninstallable" as an RPM, DEB or SRP. A tarball would have equal functionality with any package manager.

    In a day & age where computers can perform feats so astonishing that nobody, even 10 years ago, could have imagined them, and where integration & interoperability have long been established, we are STILL using totally incompatible methods of doing identical operations on identical spaces. This is stupid.

  • Some of these problems could be solved by having a lot more default groups as well as a more capabilities-based security model, instead of the "root uber alles" security model. Different subsystems should have different group ownerships and you should be able to pick who has control of what subsystems at install time. Also a sane sudo config at install time would be a plus. That would allow Joe User to change resolv.conf and Jane User the hosts file, but not destroy the sendmail settings or install a trojan accidentally. I think we could use finer-grained access controls than root and user.

    I like the idea of installing more stuff in the individual's home directory, but that won't work right if you do have multiple people using a household computer.

    I do see some problem with having per-user system settings, in that it could be a big pain for a neophyte to troubleshoot. It would seem to add unnecessary complexity.

  • Dude. Do you really want to have 50 copies of libc hanging around? Or what if each (Windoze) program came with its own copy of directx?

    There's a reason for dynamic linking. It's because some of these libraries are huge! System libraries that many applications are based upon should always be dynamically linked. Not everyone has an 80-gig hard drive, you know.

    --
  • Applications and tasks have changed over the years; maybe we should work on how to make the fs layout accommodate that.

    Hear hear. Applications are more self-contained than they used to be, and a global hierarchy like /bin, /usr/bin, /usr/lib, etc. isn't really needed any more.

    What about something like this:


    / - Root
    /bin - SYMLINKS to individual user-executable binaries
    /include - SYMLINKS to include files elsewhere
    /lib - SYMLINKS to dynamic run-time libraries
    /devel - Development
    /devel/include - SYMLINKS to include files
    /devel/lib - Static and import libraries
    /dev - Devices, same as today (or with devfs)
    /doc - SYMLINKS to application-provided documentation
    /doc/man
    /doc/info
    etc.
    /prog - Contains program packages
    /prog/XEmacs (for example)
    /prog/XEmacs/config - System configuration files
    /prog/XEmacs/doc - Program documentation
    /prog/XEmacs/etc. etc. etc.
    /home/user/ - Mirrors the above hierarchy, with anything in the user's copy overriding the global
  • Maybe, maybe not. Concatenated mounts or similar mechanisms in Plan 9 are one way around the PATH variable. Another is using a file system that is a database; in that case, your "PATH" (i.e., the set of programs you might run by typing their name) would turn into a query "find a file with NAME that is executable, is owned by root or by me, and lives under /usr/executables or /home/myname/executables".

    But, yes, fundamentally I agree: doing packaging "right" (in my sense) requires breaking significantly with the way things are done in Linux/UNIX right now. On the other hand, it may be possible to get there without breaking existing functionality. Plan 9 actually has most of the necessary bits and pieces, and it has a usable POSIX system side-by-side with its advanced naming facilities.

    For example, one way of getting there might be to start by adopting some kind of "bundle" format for GUI applications (together with a small modification to the kernel to make them executable) and to add an enhanced version of "chroot" (with more control over the environment). I think both of those would be natural, the first for better usability, the second for better security in some applications. Over time, other pieces could fall into place.

  • All communication protocols nowadays talk to each other before starting any kind of transfer. Dialing in with your modem also requires a negotiation about rates, noise, et cetera.

    So, why do packages only communicate one way before installing? I mean, tarball installs poke around on your system. RPMs and Debs use a predefined database-like structure to poke around; anything outside these structures cannot be found.

    A bit simplified, but I think you will get the point.

    System: So, and you are?
    Package: foo
    System: A foo version has been found, 87229. What version are you?
    Package: 239
    System: I don't understand the version scheme. Could you explain?
    Package: The lower the number, the higher the version.
    System: OK. Is an upgrade from 87229 to 239 seamless?
    Package: No, some changes in the configuration files are needed
    System: Do you supply an algorithm to cleanly change an older configuration file?
    Package: Yes.
    System: Do you require other software to be installed?
    Package: Yes, I need library glibc 2.0 and libfoo 8
    System: Well, we have glibc 2.1.3 installed. Hang on, I will make a connection between you and glibc to sort things out.
    Package: Glibc issues are sorted out.
    System: Libfoo is not present on this system. Any hints?
    Package: Yes, it can be on a FOO CD or downloaded from http://.....
    System: Libfoo installed.
    System: The library path is: /usr/lib

    Etc., etc...
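    A rough sketch of what the system's side of that conversation could look like with today's tools: the --query-*, --convert-config and --install flags below are invented for illustration (no real package speaks them), while the rpm queries are real commands.

        negotiate() {
            pkg=$1
            name=$("$pkg" --query-name)                       # "So, and you are?"
            newver=$("$pkg" --query-version)
            oldver=$(rpm -q --qf '%{VERSION}' "$name" 2>/dev/null)
            for dep in $("$pkg" --query-depends); do          # "Do you require other software?"
                rpm -q --whatprovides "$dep" >/dev/null 2>&1 ||
                    echo "missing: $dep (hint: $("$pkg" --query-hint "$dep"))"
            done
            if [ -n "$oldver" ]; then                         # negotiate the upgrade path
                echo "upgrading $name from $oldver to $newver"
                "$pkg" --convert-config "$oldver" || echo "manual config changes needed"
            fi
            "$pkg" --install --libdir=/usr/lib                # "The library path is: /usr/lib"
        }

    The point is simply that the package answers questions instead of shipping a static metadata blob.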
  • That's the big problem with RPM: it can do almost everything that Deb can, but there isn't the iron-fisted set of standards that Debian has for its packages. Therefore dependencies break when you try to install packages from one RPM-based distro on another RPM-based distro. For example, one distro might have "Provides: MTA" and another could have "Provides: email-server" for the same package, or one might install in /opt and another in /usr/local.
  • Oh, I forgot: Debian packages also generally have much nicer pre- and post-install scripts. You can do that very easily with RPM as well, but in my experience it breaks many RPM frontends, which don't watch STDIN/STDOUT for RPMs requiring input and simply fail. Packages really should ask you for the defaults you want your system to have if they can't easily pick sane defaults by themselves.
  • Actually, on non-braindead systems this is not the case. Most UNIX systems do not allow you to give away files. If you can give away files to root and other users, you can easily defeat the quota system. You can also create some subtle attacks on various things like global .forward directories.
    --
    Mike Mangino
    Sr. Software Engineer, SubmitOrder.com
  • It seems to me that the author simply lacks experience with RPM. RPM in its more recent incarnations has a Prefix option so that you can define relocations from /usr to /usr/local or from /usr/doc to /usr/share/doc or whatever. You simply specify directories that can be relocated without breaking the package.

    It's nice that recent versions of RPM support this. However, it's not quite so nice when somebody builds a package with a brand-spanking-new version of RPM and I try to install it with an older one. 'Cause it breaks.
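    For the record, the relocation the parent describes is driven from the install command line; a quick sketch, assuming foo.rpm really was built relocatable (i.e. its spec file carries a Prefix: tag; non-relocatable packages will refuse this):

        rpm -ivh --prefix=/usr/local foo.rpm    # install under /usr/local instead of the built-in /usr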

    RPM has some serious issues. Among them (some detailed earlier):

    • No central package authority. Red Hat sort of counts, but they don't package nearly enough stuff. Forget even trying to use contrib.redhat.com, unless you want to use the SRPMs. And if you're going that route, you might as well just download the source tarball yourself.
    • Package dependencies. Yes, this is a good thing, but sometimes the packagers get a little too zealous in what they depend on. Sometimes I don't want version 4.5.6.73445 of the whizzy installer that requires GNOME 1.4435.8beta10. There needs to be an easy way to specify such things (perhaps there is and I haven't found it yet).
    • Version incompatibilities. We all know about version incompatibilities between programs (can you say glibc?).

    I have RPM on my box, but that's only because I run Red Hat -- I'm familiar with it. Nearly everything else (including new versions of included packages) gets installed in /usr/local.

    Every time I install Red Hat I tell myself I'm going to walk the Path of RPM. Every time it's too much of a pain and I go back to .tar.gz.

    -Nathan

  • by Glenn R-P ( 83561 ) <randeg@alum.rpi.edu> on Monday August 07, 2000 @03:32AM (#874804) Journal
    How many of you really look over a binary before you install it? Do you just rpm a package and then run it? What do you have to lose from running this random binary?

    1) IMO, it's not a good point at all. It's correct that only a minority of people care, but that's no reason to remove the choice. It is very important to some people. If there is to be one single standard (god forbid) then it should cater for everyone's needs, not just most people's.


    Also, people who don't examine the source themselves can take comfort from the fact that they are installing something whose source the public can examine for security problems.

  • The major advantage that RPM has over tar/make is the ease of removal.

    Call me a wuss, but I like to know that I'm not going to have to spend an hour tracking down files all over the disk when something I installed sucks and I want rid of it.
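    That ease of removal boils down to two commands; with a bare "make install" you're left grepping the Makefile or hoping the author wrote a "make uninstall" target:

        rpm -ql foo    # list every file the "foo" package owns
        rpm -e foo     # remove the package, and with it all of those files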
  • Ugh. Packages. Hell.

    Like some other poster mentioned, ideally packages/programs should be self-contained - check out Mac OS X's solution for things like shared libraries and the like (I think there's an Ars Technica article about it that is pretty good).

    But I'm afraid _that_ will never happen...

    The second best thing would be a system that:
    - has a list of dependencies (like virtual packages in Debian) for every distro, with the corresponding packages for that distro.
    - describes the distro fs differences in a flexible way (do I dare mention XML?) - the Red Hat, SuSE, Debian, whacked-out-beyond-recognition structures...

    if we go even more crazy:
    - that includes the source, a standard way of making it, and repackaging it immediately to the native package format of your choice (distro)
    - that can include source patches, that can be optionally applied (by user choice/distro description whatever)
    - (really crazy) can even (if the appropriate build tools are installed, and the source supports it of course) automatically build packages for other platforms (MacOS, BeOS, Win32)

    Can you imagine, one day saying "and now for something completely different..." - and just making a new "distro description", downloading a bunch of next-gen source packages, and rebuilding a fresh new distro overnight -> blammo

    It would definitely make life easier for the poor slobs that have to maintain binary packages for different platforms/distros.

    I'm positive I'm oversimplifying things, but universal source packages would be VERY helpful, especially for the open source community. Universal binary packages would be a good start too, though.

    What _isn't_ gonna happen is that distros converge to a specific package manager... But we could try to put a layer in between.

    Oh well... in the meantime I'll just pretend my good ole Debian setup is the only distro... They're actually doing good stuff, with apt and debconf, and a bunch of tools to make it easier for Debian maintainers. Too bad it's of little use with respect to interoperability with other distributions.
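    Regarding that "layer in between": the closest existing thing is probably alien, which already converts between the binary formats (losing some distro-specific metadata on the way), though it doesn't touch the harder per-distro source-description problem. For example:

        alien --to-deb foo-1.0-1.i386.rpm    # turn an RPM into a .deb
        alien --to-rpm bar_2.1-1_i386.deb    # and the other way around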

  • by Tough Love ( 215404 ) on Sunday August 06, 2000 @11:24PM (#874811)
    A new package format that would combine the strengths of the current various formats is a good idea, but (and it's a big but) I hope the author isn't the guy to do it (he doesn't have a good grasp of security). He mentions making the package self-extracting - "just do a chmod +x on it, and away you go". Sorry, but that's why "security" in Windows is so abysmal. Think about it - to install, you're running this file as ROOT. Would you run a binary from an unknown source as root? (I don't even untar source as root!) This is inherently a bad idea.

    The vast majority of RPM packages don't have any valid reason for being installed as root. The main reason you have to have root right now is to be able to write the RPM database. Fix that. For example, give me a local branch of the RPM database and give root the ability to merge that with the main, shared RPM database.

    Miscellaneous other reasons for needing to install as root:
    • Making the package available system-wide. Why can't I install a package in my home directory, then ask via a root-privileged command to link it from /usr/bin? Or, more permanently, to move it there. Such a privileged command would still require security checks of course, but these checks can be very well-defined, and it would be easy to ensure that no suid programs were being installed. With a little more work, it could satisfy itself that no common user commands were being overridden (a technique favored by script-kiddies for hiding their root-kits).
    • Installing device drivers. Solution: user-space device drivers. We're some distance away from being able to do that just now. For now, elevate privilege to root just long enough to install the device driver, and require root's password to install such a package.
    • Installing daemons and privileged programs. Obviously, only root should be installing them. In many cases a daemon doesn't really need root privilege, and in that case it should run in user space and be installable by a normal user for their own use, or be shareable by the mechanism mentioned above.
    • Changing system config files. The author should try to write the program so it does this only as a last resort. We should have a way of overriding certain system config files locally, for example, resolv.conf, so that we have a per-user view of the system configuration. As a last resort, the installer can invoke a root-privileged utility just to change the config files, and require root's password.
    • (Other reasons, please enumerate.)
    This is all by way of saying that we can have mindless (read: stressless; user-friendly) auto-installing packages just like Windows, without breaking security.
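    For what it's worth, the local branch of the database asked for above can be approximated with rpm's existing flags; merging it back into the system database, or linking results into /usr/bin, would still need the root-privileged helper described in the list. A sketch, assuming foo.rpm was built relocatable:

        mkdir -p "$HOME/.rpmdb" "$HOME/pkg"
        rpm --dbpath "$HOME/.rpmdb" --initdb
        rpm --dbpath "$HOME/.rpmdb" -ivh --prefix="$HOME/pkg" foo.rpm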
    --
  • by Xzzy ( 111297 ) <setherNO@SPAMtru7h.org> on Sunday August 06, 2000 @11:24PM (#874812) Homepage
    I don't think it's quite so much an issue with the existing package formats as it is an issue with most of the Linux developers out there not really knowing what they want to do. It seems to me Linux is in this phase where a few leaders have been established, they're going their own way, and are becoming increasingly divergent.

    It's basically a standards issue. I've used Slackware for a long time... like it. Tar and gzip is all I need to install a package. But once stuff started creeping in that only came out as RPMs, I was forced to coax RPM into working on my Slackware installation, or go Red Hat. Maybe that's great for the l33t Linux hackers out there, as it gives them options, but for someone who's more concerned with getting software running on their machine, it's a hassle.

    IMO, someone the entire Linux community trusts needs to answer some of these questions:

    "Where do we install stuff?"
    "What format should the distributed files be in?"
    "How do we handle upgrades or patches to our software?"

    Every answer to these questions is subjective. This is why Debian and RedHat differ. I think that someone just needs to tell us how it's going to be, and let us start doing it. If it doesn't happen, in a few years one really WILL be able to safely refer to Linux as "RedHat" or "Debian."

    And I know some people out there are trying to answer these questions. But are they making ANY impact, whatsoever? The only reason the linux directory structure isn't complete spaghetti is because people kind of do what most other people are doing. That works great today.. but I think it's going to implode on us sometime in the future.

    Now maybe good 'ol Torvalds doesn't want to take a dictatorship stance like that. I can understand that. But these questions of "standardized distribution schemes" will continue to rise as long as no one in authority stands up and makes a decision. Linux may truly only be the kernel.. but that's not how the world is viewing it.

    I guess I'm okay with the issue remaining unsolved, but if that's going to be the case.. I think the community needs to start spreading that news. "No package standard is forthcoming, choose your favorite, live with it, and stop asking for something unified."

    But of course, there's additional problems that appear with a statement like that..
  • > How many of you really look over a binary before you install it? Do you just rpm a package and then run it? What do you have to lose from running this random binary?

    1) IMO, it's not a good point at all. It's correct that only a minority of people care, but that's no reason to remove the choice. It is very important to some people. If there is to be one single standard (god forbid) then it should cater for everyone's needs, not just most people's.

    2) Binaries are architecture specific. Linux is not.
  • I disagree with this editorial. The rpm approach is the right one. rpm attaches a symbolic name to a particular capability to avoid every other package in the universe re-inventing the wheel (i.e. having to figure out whether that package exists).

    So for example, rpm is supposed to tell you for sure if you have glibc version x, without you having to write some stupid complicated package script to go and figure out if glibc version x is hiding around somewhere.

    Now if there is some issue with compatibility between suse and redhat, obviously something should be done. Ensuring compatibility between different distros in this area was never going to be an easy one, but throwing the baby out with the bathwater is not the right thing.

    It sounds like standardised package names are the answer, although that is never going to be a 100% full solution. (If it were, then that would mean all distros are exactly the same, so why have more than one distro? (Why indeed?)) But having thousands of packages out there trying to figure all these things out themselves isn't going to help either.
  • There is no need for multiple, mutually incompatible systems for doing the same thing. Diversity is one thing, but code reuse and compatibility are more important here. The tired old example of "What if everybody used a different, incompatible but similar network protocol stack?" applies here.
  • When you do an 'rpm -Uvh my.rpm' or 'rpm -i my.rpm' what do you think is happening? Two scripts are being run as root, one before extractions and one after.

    This is a good point, but there are two issues with it:

    First, a script is slightly different than a binary executable - I can look at the source of a script to see what it does before I run it.

    Second, _I_ never run either of the commands you specified, without first examining the source of the scripts; I use a utility to unpack the RPM, then look at what I'm installing.

    OK, so I may be a _bit_ paranoid :o) ... but then, when everybody's out to get you, paranoid is just good thinking.
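    Concretely, the inspection step looks something like this (rpm2cpio ships with rpm itself):

        rpm -qp --scripts my.rpm          # show the pre/post install scripts
        rpm -qpl my.rpm                   # list the files it wants to drop on the system
        mkdir inspect && cd inspect
        rpm2cpio ../my.rpm | cpio -idmv   # unpack the payload without installing anything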
  • by enneff ( 135842 ) on Sunday August 06, 2000 @09:39PM (#874825) Homepage
    I think before we start getting all twisted about file distribution standards, perhaps we should look to the directory structure.

    There is an alarming lack of education about what parts of the linux file system are for. I'm sure regulars at #linux can attest to the massive volumes of new users asking questions such as "what's /proc?" or "should I install (app) into /usr, /usr/local, or /usr/local/mysql ?".

    It would also appear that many application authors don't know where their apps should live. For example, mysql by default (in the INSTALL) wants to go to /usr/local/mysql, but other applications want to sit in the /usr/local/[bin/etc/lib] hierarchy.

    We need to get someone to provide a definitive explanation of what each part of the file system is for, and how they should be used, so that we'll be able to say "RTFM", and have a sounder understanding of our own operating system.


    nf

  • by Animats ( 122034 ) on Monday August 07, 2000 @08:23AM (#874826) Homepage
    Instead of having large numbers of disparate databases, all holding essentially the same information, most of which is never kept in sync with the filesystem (and therefore becomes stale very easily), I'd like to propose that Linux filesystems become package managers in their own right.

    That's worth thinking about. The MacOS has had something like this for years; programs are registered in a database when an executable is placed in a folder. Programs are found based on what they are, rather than where they are. Configuration files go in a standard place (the Preferences folder) and, by convention, can be deleted without major trauma. This simplifies the usual case for installation enormously.

    So suppose we have a filesystem that holds "packages", rather than files. A "package" is a directory tree, and is normally installed intact and unmodified. Where the package is isn't relevant; there's a database of packages and information about them maintained by the file system. This database replaces "PATH"-type variables; everything that looks at a "PATH"-type variable needs to become a query to this database.

    Unlike the Windows registry, the package database needs to be locked to the file system. It's a cache for finding stuff, not something you edit. There should be something you can put in a package which causes a cache of (package-name package-version attribute value) items to be updated.

    You also need a place to store preferences-type data. This should not be mixed in with the installed data; for one thing, much preferences-type data needs to be per-user. This looks like a database of (user-id package-name package-version attribute value)
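    As a toy illustration only: the (package-name package-version attribute value) cache could be a flat table the filesystem maintains, with today's PATH walk becoming a lookup. Nothing like this exists; the file names below are invented.

        # /var/lib/pkgcache holds lines of: package version attribute value
        awk '$1 == "XEmacs" && $3 == "binary" { print $4 }' /var/lib/pkgcache

        # per-user preferences in a parallel (user package attribute value) table
        awk -v u="$USER" '$1 == u && $2 == "XEmacs"' "$HOME/.pkgprefs"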

    It's too advanced an idea for Linux, though. The UNIX world is too heavily tied to explicit pathnames. BeOS, maybe.

  • by carlfish ( 7229 ) <cmiller@pastiche.org> on Sunday August 06, 2000 @09:45PM (#874838) Homepage Journal
    I've only used Debian indirectly - it's never run on any of my personal boxes (because I'm too lazy) - but I really appreciated the idea of having dependencies based on roles, rather than on a particular version of a particular program. For example, you can specify that your software "requires a mail transport agent", rather than specifying "requires sendmail".

    Debian also got a lot of things right by specifying up-front a standard for package naming. I'm sick of all of my dependencies breaking because I dared to install a non-RedHat RPM.
    --
  • /bin - SYMLINKS to individual user-executable binaries.
    [...]
    /lib - SYMLINKS to dynamic run-time libraries


    BAD!

    Most people have a separate /usr filesystem. Let's say you're upgrading your hard drive, resizing your partition, or otherwise need to unmount /usr and mount a different one. Where do all these symlinks lead? You umount /usr, then try to mount another, and one of two things happens.

    1. You have no more mount binary, unless you expect to keep it in /sbin.
    2. You have no dynamic C library - it's in /usr, remember? You're fucked.

    Not to mention, how are you going to mount anything at boot time? The kernel mounts / itself, but then your startup scripts call mount -a - oops, can't do it, no C library, your system is hosed. Nice filesystem setup there, big guy.

    -- iCEBaLM
  • by DrWiggy ( 143807 ) on Sunday August 06, 2000 @11:44PM (#874848)
  • Now, this might get moderated down as flamebait or as troll, but I have to say this: I don't think you guys should be allowed to run a *nix of any description if you are going to start installing stuff left, right and centre without knowing what files are going where. You're all going to think I'm crazy, but I really think the 5 or 10 minutes or so it takes to read and understand what happens when you type 'make install' is worth it. This is not Windows. You are trying to make Linux aspire to be Windows. This is not a Good Thing. It's not that you should necessarily make the interface easier, but that you should attempt to make the user more clued up. I'd much rather see comprehensive documentation along the lines of "Makefiles for Dummies" being shipped about for the newbies than some new package format being created.

    But seriously, to just randomly pull down a package from a site you don't know from Adam, and then as root say "Oh, go on then, do whatever you want" is plain madness, but then, you guys are going to flame me to death anyway, so perhaps I should just go and be quiet somewhere. :-)
  • > For example, you can specify that your software
    > "requires a mail transport agent", rather than
    > specifying "requires sendmail".

    Actually, I believe this is possible in RPM as well using virtual packages. There's a section in either "Maximum RPM" or the RPM HOWTO that uses almost your exact words as an example.

    As I recall, anything that needs a mail transport agent says
    Requires: MTA
    in its spec file, and then any package that provides mail transport should contain a
    Provides: MTA
    line. Not sure how this gets co-ordinated in practice, but there are provisions for doing it, theoretically, anyway.
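    The co-ordination can at least be inspected from the command line; for example:

        rpm -q --whatprovides MTA     # which installed package claims the MTA capability?
        rpm -q --provides sendmail    # what capabilities does the sendmail package declare?
        rpm -qp --requires foo.rpm    # what does a package file require before it will install?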
  • Sandbox the installer properly, and whoops! -- your troll (it is a troll, right?) becomes just another perfectly valid opinion. For, you see, "list of places to install stuff" is just a very limited form of interpreted sandboxed code. The funny part is, it doesn't have to be limited or interpreted. Those are just the easiest means to have it sandboxed.

    In the end the installer just has to put stuff where it belongs, and probably modify some kind of registry (or setup files or whatever). If it needs to execute some code in order to determine what exactly to do, then so be it. Just don't let the executed code do any damage. This way we can have a more robust, but still perfectly safe, installer.
    --

  • good, lose some of that weight chubby! BTW, .deb's are far better! Get a grip, get a life, get a deb.
  • by ruud ( 7631 ) on Sunday August 06, 2000 @09:51PM (#874860) Homepage

    We need to get someone to provide a definitive explanation of what each part of the file system is for, and how they should be used, so that we'll be able to say "RTFM", and have a sounder understanding of our own operating system.

    See www.pathname.com [pathname.com].


    --
  • There's already a pretty definitive Linux filesystem standard [pathname.com] (FHS, the standard formerly known as FSSTND)
    --
  • windows install

    put your application in a subdir of c:\program files\

    put your dlls in c:\windows\system\

    write some crap to the registry

    maybe add some stuff to the start menu

    done.

    granted, 99% of windows installs don't worry about where libraries are, and what compilers are needed, etc., as unices do, but they're easy and simple.

    My FreeBSD box has more bin and lib directories than I'm probably aware of, and it seems silly. Perhaps it's time to invent a whole new layout for the unix fs organization. Applications and tasks have changed over the years; maybe we should work on how to make the fs layout accommodate that.

    These have been the ramblings of a very sleep-deprived person ... oh well.
  • In The Reality That Is The Desktop, the most important thing for ordinary (aka non-geek) users is the ability to install/deinstall absolutely any software without penalty or pain. This concern overrides security, it overrides performance, it overrides more efficient usage of resources, and it overrides just about anything else. Period. In the discipline of engineering, it is understood that you trade off one thing for another to achieve the necessary goals. In The Reality That Is The Desktop, you trade off the more efficient usage of resources that dynamic linking provides for the robustness of installation that static linking gives and end users require.

    Right now, there is such intense fear among Windows users that anything they do, any upgrade they make, any new software they install will, in their words, "screw up my computer". So your average joe computer user will not think very well of Linux when, on Christmas morning, RPM reports that Barbie Magic Funhouse refuses to install because it conflicts with Evolution. Once again, this is The Reality That Is The Desktop. In order to succeed in The Reality That Is The Desktop, one must statically link, as it is the only thing that ensures the software will actually install without destroying existing software or precluding the installation of any future software. Dynamic linking simply does not fit in with a healthy model of average-joe, consumer desktop computing.

    Ask 1000 people outside a CompUSA whether, given a choice, they would rather have a computer that uses fewer resources (RAM, disk space, etc.) or a computer that will let them add or delete absolutely any piece of software without destroying previous software or precluding the installation of any future software. I guarantee you that at least 95% of the people will choose the second option. They have all had their share of Windows doing unacceptable things when adding or removing software, and they will not accept the unacceptable from an operating system touted as superior to Windows. Anyone who argues otherwise does not understand the desktop market and probably has no business whatsoever trying to bring Linux to the desktop.

    Static linking not only provides robustness of installation, it also adds an element of ease of use. To deinstall a statically linked program, a user only has to click on the program folder (which can be placed in any damn directory the user chooses) and drag it to the trash. Empty trash, and you've deinstalled the program. Ridiculously simple, and it keeps consistency with the UI concept of getting rid of something by dragging it to the trash. To install a program, the user copies the program folder from CD-ROM to wherever they want it. That's it. To install from the internet, the user only has to download and unzip the program folder, then place it where they want it. Simple and quite effective. These are the steps that must be taken for Linux to succeed in The Reality That Is The Desktop.

    Oh, and whether you agree or disagree with me doesn't matter. 'Cause I've got the code.
  • Packages are nice, but really, if we want to see Linux make it in the Real World (TM) we need to advance to the 21st century and recognize that we must have a simple and easy means of installing new software

    We do. We have several - they're called rpm, dpkg and the various front ends. If the user interfaces of these packages are too arcane, then *they* need work, rather than what you propose . . .

    I propose that we follow Apple's lead in this area and move to Self Extracting Executables, ELF binaries that you run and will extract the software and install it for you, without the hassle of remembering arcane flags and what program you're supposed to use.

    This is such a boneheaded idea that even Microsoft has largely given it up - IIRC, while each program contains its own installer, the actual work is now managed by the OS.

    I'm not denying that there are problems here, but your solution has been demonstrated to be a Bad Thing. Making friendly interfaces to dpkg and rpm, and setting them up to make package installs off a CD easier, might help.

  • Ever heard of the MentalUNIX Packaging System (mpkg)? It takes a radically new approach to packaging. It's truly amazing what it does. The site is at

    http://mentalunix.sourceforge.net

    They currently have a 4 or 5 page "mentalunix paper" in the entire distribution, with a small part on mpkg. From what I've read (in the paper and on the mailing list), mpkg is going to be amazing.

    No more binary packages! Everything distributed as source code. The nice thing is that it can be interactive. Yes, interactive. If you load up the packager with the normal options, it will load up a simple console GUI (they are supposedly going to make a Gtk+ based front-end too), and you can configure the program. It stores everything as XML. The configuration is handled in a config.xml file, the interface in interface.xml, the dependencies in dependencies.xml. The interfaces will be built using XML, TCL, PHP, and JavaScript (well, XML and any of those languages added). Imagine -- the kernel package will have xconfig load when you open it!

    mpkg will also feature a daemon that monitors when new programs are installed in the standard directories (everything but /home). It then checks the binaries against a built-in database of known software, and can add the program to the database. So you can have rpm, deb, and plain ol' source code distributions that CO-EXIST! I suggest that you join the mailing list and read the site; it has lots of information. According to what somebody said on the mailing list, sometime this week a detailed mpkg spec is going to be released.
    -------------
  • When you do an 'rpm -Uvh my.rpm' or 'rpm -i my.rpm' what do you think is happening? Two scripts are being run as root, one before extractions and one after.

    I felt he had some good points. A dependency system that doesn't depend on itself being present in its dependencies *would* be cool.

  • by jetson123 ( 13128 ) on Sunday August 06, 2000 @10:00PM (#874889)
    I think the whole notion of exploding a piece of software and spewing the bits and pieces all over the file system is wrong. Applications should be self-contained collections of resources (somewhat like .jar files or the Apple format); let's call them "application bundles" like Apple/NeXT does.

    For stuff that isn't packaged with the applications, the package format should contain version information and checksums for any other files it depends on. It should declare those dependencies. The operating system should then be able to identify what is needed, possibly fetch it over the Internet, and give the package access to those files under a symbolic path name.

    Such application bundles with dependencies should not (normally) be allowed to access the raw file system for their installation at all. In fact, even most applications should probably not be allowed to reference absolute pathnames. All references should be through symbolic paths. That way, one actually has a prayer of figuring out what the thing depends on, and it would also help with security.

    Furthermore, the notion of "install scripts" is broken, because it is difficult for anything to figure out what went wrong in the bowels of some script, and because scripts may do things to the system that are difficult to undo. Information about application bundles and their needs should be entirely declarative.

    Another way of looking at that is that applications and application installation needs to work similar to objects and software components. In the past, programs consisted of functions that looked for, and modified, global variables all over the place. These days, good software components have well defined interfaces, get access to their environment only through those interfaces, and rarely use global variables. That's the kind of change that also has to happen at the level of applications.

    The system that is closest to that in many ways is Sun Java: with jar files and the JavaBeans standards, there are well-defined ways for software to get installed and interact with the rest of the system, yet the environment can limit what new software components can do. We need something similar for Linux packages. That would also require additional naming and name translation support for Linux, similar to what Plan 9 offers (however, Plan 9 never adopted a good way of packaging applications based on their naming model--a shame, they were pretty close).
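    A declarative dependency list could be as dumb as a file of path/checksum pairs shipped inside the bundle, checked before the bundle is granted access to anything; the "depends" file name and its format here are made up for illustration:

        # each line of MyApp.bundle/depends: <path> <md5 checksum>
        while read path sum; do
            echo "$sum  $path" | md5sum -c - >/dev/null ||
                echo "dependency check failed: $path"
        done < ./MyApp.bundle/depends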

  • by Silver A ( 13776 ) on Sunday August 06, 2000 @10:02PM (#874891)
    Linux Standard Base [linuxbase.org]

    RPMs, and other packaging formats, should have install scripts, especially if they're not part of a particular distro. Most source tarballs have reasonably good configure scripts; why don't RPMs? I'm not a programmer either, but I'd think that RPM is capable of doing everything that the author wants it to do, if it's used right.
