


File Packaging Formats - What To Do?
Jeve Stobs writes: "It seems that nowadays, there are three ways of distributing a program: in a tarball (be it a .gz or .bz2), in a Debian package, or in an RPM. These are all fine methods of packaging a piece of software, but they each have their places, and they aren't as comprehensive as I would like. I really think that, as we move into a broader user base with the variety we have so far (not to mention the variety we are likely to have in the near future), a new method of software distribution is needed. osOpinion has an excellent editorial piece which details some solutions to this growing problem."
Re:Some sort of communication protocol? (Score:1)
And why does this remind me of a certain scene in "Dark Star" [imdb.com]?
Doolittle: Hello, Bomb? Are you with me?
Bomb #20: Of course.
Doolittle: Are you willing to entertain a few concepts?
Bomb #20: I am always receptive to suggestions.
etc... [uiuc.edu]
Re:RPM is best (Score:2)
The goals of newbie users and the goals of sophisticated power admins are very different. RPM probably is a fine choice for newbies and others who may be more experienced but don't care to worry about the administration of what is installed, especially on a home or office desktop.
OTOH, for someone like me that cares about what is installed, how it is installed, where it is installed, and the overall security of my server, I have found that RPM just gets in the way and increases the frustration. I'm personally dealing with details that RPM is supposed to help me avoid dealing with, because I need to deal with them, not avoid them. RPM is definitely NOT easy for everyone.
The correct answer is, there is no one size fits all. Standardizing on a single packaging format is the wrong thing to do, unless we want to standardize on one type of user (which the other user types are not going to be happy about, at all).
Re:don't install at all (Score:1)
The problem is you have several classes of files:
- Binaries: exported over a network read-only. Maybe arch independent.
- Data files: read-only or read-write; may be on a network, machine-local, or per user.
- Configuration: again, network, or local, or per user.
The current fs layout is organised like this, so you can mount [...] if you have [...]
Re:We already have a de-facto one. (Score:1)
There's already APT, which is awesome - 'apt-get update; apt-get dist-upgrade' goes and updates your ENTIRE INSTALLED SYSTEM (except for kernel and non-Debian packages).
Installing individual packages is trivial - 'apt-get install PACKAGENAME'. And if you want the source, just do 'apt-get source PACKAGENAME'.
There are a number of frontends for apt, such as Gnome-Apt (not sure whether it's still under development, or orphaned), Aptitude, Console-Apt, StormPkg, and Corel's get_it.
Re:don't install at all (Score:1)
Just to toss out some ideas:
Imagine if the entire OS was one file. So you might have a base Linux, BSD, or whatever system that you would install just by copying a file over. This would require some type of very basic system for managing the "filesystem" (or database, whatever it would be).
MVS uses a very flat filesystem model. And you can drill into the datasets as if there was a directory structure (which I guess there is, sorta, from the layout of the dataset). I'm not saying I like the MVS filesystem. It sucks because it's so cryptic, but there is no reason a modern filesystem patterned after it would have to be that way.
In order to do this right, I think it would require some major architecture changes to any OS, to facilitate this one file == one application system.
Simplicity, I like it.
Re:Directory Structure First (Score:2)
A source install package manager will also need to properly integrate local hacks/patches into the package. If it doesn't do that, then I won't be able to use it for those source packages I have my own code hacks in (several of them, such as apache, bind, ssh, qmail). There will also need to be a standardized place for it to find any "local modifications to source" for a given package, so it can automatically include them.
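For illustration, a minimal sketch of what that integration might look like (the /usr/local/patches location and layout are invented here, not an existing standard):

# apply any local patches registered for this package before building
for p in /usr/local/patches/apache/*.patch; do
    patch -p1 < "$p" || exit 1
done
./configure --prefix=/usr/local/apache && make && make install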
Re:Packages are nice, but..... (Score:1)
You need to read up on Apple's OS X packaging model. It's not nearly as stupid. "Extracting the program and installing it for you" is moot, because the program and all the things it depends on are self-contained--just download and unpack using your favorite decompression utility, and there you are.
If you care to look at the source, you can look at it--and compiling the source produces the package.
I would not EVER run a system such as you describe.
??? (Score:1)
What we need is an easy way to register manually installed software. Something like stow, but much more powerful. Yeah, that's what we need, not self-executing archives.
Re:Standardise on deb! (Score:2)
It was said before: if your package needs a mail agent to be installed, how are you going to make sure it works when I have qmail installed instead of sendmail?
Re:Use a jar! (Score:1)
from the jar tool documentation [sun.com]
You can even use winzip (ug) to crack jars open.
--d
Re:I'm astounded... (Score:1)
Regardless of its capabilities, Unix is used by many people as a single user workstation, in a role identical to those other OSes. What do you think the target market for Loki's game ports is? Who uses KDE and Gnome?
Obviously, with a multiuser server where a lot of people depend on it, you need a good admin. I am not suggesting that anything be done to take away that admin's ability to get his job done. But for Joe Schmoe who wants to play a 3D game on his home PC, he should certainly be allowed to play the game without having to know where Mesa's libraries go.
---
the biggest problem with RPM... (Score:4)
One of the strongest points about Debian [debian.org], I think, more than any minor technical distinction between .deb and .rpm, is their strong centralized packaging organization that solves the above problems. (Based on this, they can also make nicer tools e.g. for automated network-based updates.)
Now, if only Debian stabilizes their new release so I can install it on my PPC...
Re:Debian Problems (Score:1)
Although I won't discourage you from using tar and compiling from source - it's a good skill, and if you've got the time for it then I say good luck to you. I just don't like admining boxen where I have little idea what is installed, and precious few tools to find out, mainly cd and ls.
The old libraries are often from non-free binary-only progs that can't be recompiled anyway; no one's got the source or is allowed to redistribute it, etc.
I don't quite get what you mean by "depend conflicts", unless you're trying to get 2.2 installed in under 50 megs, which is a bit hard, because Debian wants to install a semi-full system: internationalization support, ncurses, a large terminal db, dpkg, plus things like debconf, which is debatable. In that case there is at least one Debian derivative meant for the embedded market.
Re:File systems as package managers (Score:2)
Now, let's say that one attribute in the database is the directory, and another is the file. (This is not unlike how MUDs and MUSHes represent their objects.)
When you go into a directory, you see all files for which (object.directory == user.directory).
Now, when objects move around, the database and filesystem remain in sync, as they have become a unified concept.
When used the reverse way, it works just as well. I want to run the XYZ web browser: go grab the nth version found in the database. This is perfect for a GUI interface, where paths should be transparent or non-existent, and where objects just -are-.
Ziiiiiiiiiip (Score:1)
Ziiiiiiiiip it.
Actually, my favorite compression format was
a trash compactor.
It did wonders for my mac.
Re:Directory Structure First (Score:2)
RPM *and*...
I use Red Hat and hence deal with RPM extensively, and frankly I think it's great. The only problem is that for performance reasons I like to have some things compiled on my machine (e.g. kernel, compiler, httpd...). That involves picking up a
-----
I like 24 e's so bite me (Score:1)
What you need is a recent cheapbytes Debian CD and some cool t-shirts, a six-pack of Jolt + a bottle of vodka, then you can join in the fun! Driving around, looking for BSDers and yelling crap at them, "you call that license free!!!" "microsoft could take yer code, and sell it!!" "you smell funny!" "Cut your hair, the early 90's sixties revival ended last century!!!" etc and then burning tread, peeling away as the cops whip it around with the lights flashing... ah to be linux and young..........
Re:don't install at all (Score:1)
The best would be for Linus or RMS to define standards, but let's wait for GNU/Linux to mature a bit more.
Don't make installation easy!!! (Score:1)
99% of Linux's much-ballyhooed ability to run for weeks, months, years on end comes down to the fact that users are NOT installing new software or new versions of software every other day.
Every installation is a chance to screw something new up, and because of the potential for interactions among all the software installed, the likelihood of problems increases exponentially with each package installed.
Re:don't install at all (Score:2)
But rather than trying to figure out how to break this excellent idea so it works with old tools, why not fix the old tools? What we want is for the mere existence of a file to "install" it.
For the shell: modify it (or the exec system call) so you can execute a "package" and run the application. Tell everybody to put ~/bin in their path. And then tell everybody that sticking the package in ~/bin will allow them to run it from the shell.
For things like the Gnome start menu: fix gnome to look for packages in ~/bin and extract the necessary information (help, icon, etc) from it. Then again, putting the package there will suddenly make them appear on the menus.
To "install" a package so all users can see it, become superuser, and move (or link) the package to /System/bin.
To "uninstall" a package, just delete the file (modify rm so it does rm -R on them automatically).
A package that has to change the system somehow (like installing a service) will pop up a question box when run, saying "do you want to install this?". If the user says yes, it then runs some kind of "install shell" program: a setuid program that asks for the root password in a pop-up window and, if the user types in the correct thing, runs the script. Cheap solution: the user must put the "file" (jar/.app/whatever) into ~/bin to see it without installation (and their path must contain ~/bin). Making shells see the command is the same problem as making the Gnome menu see the command or the Windows start menu see the command. Something needs to be done beyond simply putting the file on the file system.
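A rough sketch of what "running" such a package file might amount to (the .pkg layout and the run.sh entry point are invented for illustration):

# unpack the bundle to a scratch directory and hand off to its entry point
PKG="$HOME/bin/myapp.pkg"
TMP=`mktemp -d` || exit 1
tar xzf "$PKG" -C "$TMP"
exec "$TMP/run.sh"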
Re:The root of the problem. (Score:2)
I remember seeing something about an organization to standardize all the Linux file system layouts etc., but nothing seems to have come of it. Redhat was not a part of it, and Slackware flat out refused to abide by any decisions that they made.
It seems to me that Debian has its act most together, but still has problems because there are no standards.
Contrast this with a *BSD OS (and perhaps others), which have a centralized system of source and/or binary (ports/package) installation using trusted files. Download, make, grab and install all dependencies, install, done. And everything is maintained by a group of people who make the latest greatest software buildable and runnable on *BSD. This is, of course, precisely because there is little fracturing in the organizational structure of the overall OS. A weak point is trying to update a program to a newer version (I've seen ways to do this but it's not exactly simple).
It won't happen until the Linux *distro leaders* decide to wake up and compromise on this key area of standardization. I know Linux users love to use the "multiple choices" argument, but at some point you have to start catering to the majority rather than the minority... (yeah yeah, sounds more and more like a commercial OS)
Anyways, I don't mean to slam Linux but it's a key area of weakness. It's ironic that a great strong suit of Linux is also a major headache.
Re:Possible Security issues... (Score:2)
One of the things that annoys me greatly is packages that install stuff into my system configuration. I want total control over my system, including what services are started at boot time, and in what order.
OTOH, I can understand someone else not understanding at all how the system starts up, and wanting the package to take care of that (if it's a service to be run). Part of the difficulty of all this is that we have not standardized on one kind of user.
Interdependencies - warning, may be flame bait (Score:3)
Ok, as someone only partially initiated, I must ask: in spite of the simplicity of the philosophy of Unix, why oh why are there so many damn interdependencies in applications? Example: I install RedHat (yeah, shut up, it was the only thing that would install over DHCP on this old ThinkPad, after trying FreeBSD, Slackware, NetBSD, and TurboLinux), choose the most minimal of configurations, and also choose some small tools like cvs, etc. Well, all of a sudden it is prompting me for all sorts of other dependent packages. I could not believe it when it told me I needed the entirety of KDE, and *then* also GNOME, to satisfy dependencies! That is bullshit. Tk, Tcl, Python, Perl, Expect (!)... how the hell many things do I need to install? Am I the only one who thinks that backending GUI or administrative applications with Perl is just a god-awful abuse?
Sure this is just one experience, but I've found the same general thing when installing other distributions. Is this just a commercial flaw? Or do other "non-commercial" distributions like Slackware and Debian not require this? I just boggle at the horrendous amount of crap that even the most trivial of applications is dependent upon.
Ok, I'm putting on my asbestos trousers...
My 2c on installing software on Linux/*BSD (Score:2)
What I see as most important in the short term is the complete lack of a standard installer (the likes of InstallShield) and a standard GUI for it in X and curses.
1. It should be a binary installed on the local system, and so separate from the packages it will be installing (for security reasons, as stated elsewhere in this thread).
2. All packages should be digitally signed, and the signature has to be validated before installation can commence (be wary of M$ signatures ;-). Distribution maintainers should include trusted signers' keys in the distribution for convenience. (A minimal sketch of this check appears at the end of this list.)
3. Within the installation GUI it should be possible to get a clear view of what-goes-where in terms of the files contained in the package.
3.1 For this sort of major installation system overhaul to become reality, there must be very good tools for creating these new packages, and utilities for converting existing .rpm/.deb/.tgz packages to whatever the new system will be.
4. Installation paths should be configurable, and there should be a button for checking and updating the system path setup automatically if needed. Post-installation configuration should also be available through the same GUI, so that one is able to install and configure software easily through a "wizard" in the beginning, and then get down to 'vi' if tweaking is needed at a later time.
5. The installer should have decent help on LSB directory meanings and a FAQ of known best practices for software installation. (I use /opt for just about everything.)
6. The packaging format should be something like the .tgz package format of the *BSD ports collection, with automake/autoconf Makefiles for dependency checking and a way of automatically fulfilling those dependencies if needed, à la the ports collection. (It really works wonders; it is something we should adopt from our *BSD kin.)
6.1 The packaging format should also support source compilation, i.e. instead of being a binary-only package, the SW could be distributed in source form and the installation system would pass the local machine's optimisation flags to the Makefiles of the package to be compiled. A flag override should also be available in the package if the SW is known to malfunction with some optimisation flags, i.e. it's not SMP safe or breaks with high optimisation parameters.
6.2 There should also be proper uninstall scripting in the installer. Another handy feature would be an orphan check: when a piece of software is uninstalled, no shared libraries should be removed (this will leave orphaned libs hanging around, but keeps other SW operational if it hasn't been registered with the package system as using a given library). With orphan detection, root has direct control over which libs are to be removed as surplus, and if something then breaks he will be able to reinstall whatever he just removed, instead of being left guessing what essential part was removed with software package ZYX.
7. A new binary file format like .jar would be helpful if one strives to minimize the directory count of the box, but at the same time it compromises control of individual files in a given piece of software (think icons in KDE as part of a kde2.package: how can those be updated easily and conveniently?). Perhaps some kind of hybrid system, with a hierarchy of, say:
/opt/kde2/kde2.package
/opt/kde2/conf/(config files)
/opt/kde2/icons/(icon files)
8. At the same time we should try to rationalise the current confusing directory hierarchy of /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /usr/local/sbin, /usr/share/bin, etc., as this becomes entirely uncontrollable after a while of mixing packages and/or source compilations from various sources on any given system.
Nowadays it is complicated to get a grasp of what's installed on a system even just after installation, if using something like RH/Mandrake that sticks every piece of SW beneath the sun (and their friends) in a default installation, even with the minimum/expert installation. =(
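A minimal sketch of the signature check from point 2, assuming GnuPG with detached signatures (file names here are hypothetical):

# refuse to install unless the detached signature verifies
gpg --verify foo-1.0.pkg.sig foo-1.0.pkg || {
    echo "signature check failed, not installing" >&2
    exit 1
}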
Hopefully someone with enough programming expertise to actually do something about these suggestions sees this post and finds some of it rational enough to try to implement a system like this on Linux/BSD.
I'm looking forward to seeing the day it works,
++ Raymond
EPM, at www.easysw.com, does this (Score:2)
It's GPL, and it's packaged for Debian (for woody, not for potato, unfortunately, although the woody package works on potato).
Re:packaging applications == lexical closures (Score:2)
use strict;
use warnings;

sub make_package {
    my $library = "hello, world";
    # the closure captures the $library from the scope it was created in
    my $program = sub { print $library, "\n"; };
    return $program;
}
# create the program/package
my $package = make_package();
# change the environment incompatibly
my $library = "oops, wrong environment";
# run the program
$package->();
This outputs "hello, world" in Perl, because the closure captured the environment it was created in. In the real world, where a program doesn't carry its environment with it, it will output the wrong thing.
./configure --prefix=/usr/local/foo (Score:2)
./configure --prefix=/usr/local/foo

and then after the "make install", go to /usr/local/foo/bin and symlink everything onto the system path:

cd /usr/local/foo/bin
for FILE in *; do ln -s `pwd`/$FILE /usr/local/bin; done

If the package provides dynamic libraries then you also need to add the library directory to the loader's search path (ld.so.conf or LD_LIBRARY_PATH).
The standard Linux USER (Score:3)
Time and time again we hear the call for standardizing something. The common argument tossed out is that such and such standard is needed if Linux is to succeed. I say baloney! to that. Linux will survive, grow, and prosper, on the basis of what has made it do so, so far: its diversity and opportunity to try things in a different way.
If we are going to standardize something, then maybe what we should standardize on first is a single type of user. After all, if we only have one type of user to deal with, then we don't have to deal with a diversity of needs, and we can focus on exactly what this one type of user does need. Then we can have just one distribution, one filesystem layout, one package format, one window manager. Hell, we might even be able to narrow it down to just running a single application. Then Linux will RULE!
... in your dreams!
pffff... (Score:2)
personally, i don't understand why cardboard boxes have been around for so long. I figure, if you order a washer/dryer combo from a store, the cardboard you purchase it in should be able to unpack the washer/dryer combo for you, set it all up, and do a complimentary load of laundry.
my not-so-serious point is that tarballs and zips are the cardboard boxes of the computer world. They're meant to do one specific task, and they do it well. No one can say the tar command isn't pretty robust. But basically, do you care about the packaging something comes in? I sure as shit don't. I care what's in it. - *nix, generally being more hacker oriented has always had a do it yourself attitude. Personally, i don't want to see a single file that extracts, runs, and does your dishes all at the same time...that's just not what i switched up for.
FluX
After 16 years, MTV has finally completed its deevolution into the shiny things network
Some interesting ideas, but... (Score:3)
He mentions making the package self-extracting - "just do a chmod +x on it, and away you go"
Sorry, but that's why "security" in Windows is so abysmal..
Think about it - to install, you're running this file as ROOT - would you run a binary from an unknown source as root? (I don't even untar source as root!) - this is inherently a bad idea.
Just my 2c.
Packages are nice, but..... (Score:4)
Just look at it! The author of the article makes a very good point. How many of you really look over a binary before you install it? Do you just rpm a package and then run it? What do you have to lose from running this random binary? No more than if you downloaded the rpm and ran the binaries contained therein.
What we as the Open Source community need to do is create a standard install tool, like what Windows programmers have with the InstallShield software. The software could be customised, and could be written to recognize the directory structure and install the files in the correct locations, maybe through the use of certain files that would be kept in a standard location, like
So, whaddya think? Think we can get something whipped together? Even just a proof of concept done in a combination of perl and python? We certainly don't want to make the mistake OS/2 made and not at least keep up with what windows has had for years.
-ssd
Re:Post-Install (Score:3)
GNOME and KDE also have a similar concept.
Re:Static linking necessary for linux on desktop (Score:2)
There is no reason for Linux to duplicate DLL hell from Windows.
And everybody asks "what kind of interface is friendly for the user" and talks about installation wizards and other crap. What is "friendly" is when there is a little box in their file browser that is the "program" and they click on it to "run" it and they NEVER have to "install" it.
Static linking will 100% allow this for programs that don't have to mess with system resources like drivers or continuously-running services. For programs that do have to mess with the system, I recommend that the program have compiled into it the ability to execute a setuid utility (that asks the user for the root password) that can run a script to "install" the program.
And he did not mean static linking libc or Xlib or other things that are obviously part of the system, he was talking about static linking those hundreds of Gnome libraries that are making it almost impossible to get Gnome to work!
Standards (Score:2)
- /usr/* layout
- /var usage
- /opt usage
- /users
- init scripts: System V or BSD?
- name and format of configuration files (*.conf, *.rc, ...)
Also I think there should be a unified package manager system. It seems to me package management needs to migrate to a standardized repository or database (dare I say "registry"?) of application information, indicating interdependencies and what is installed. Somebody mentioned an OS X-like application package system. I don't think that is entirely feasible, because many packages consist of different components: applications, documentation, system components; these all need to be put in their correct place. We hate the Windows registry because it is in some proprietary binary format that you need a tool to use, but what about a simple XML file, or micro-database, where information is stored centrally?
You might even migrate various application configuration files into this, but that's pushing it, and for the most part, application-specific configuration files do the job just fine (I believe there is at least one effort to standardize application configuration files).
Again, if things like this are already being done, then fine; I'm just not aware of them. Ideally one should be able to install any distribution and immediately know where things are and how things work, having used any other distribution, and be able to install, configure and remove packages in a standardized way.
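For illustration only, an entry in such a central registry might look something like this (every element and attribute name here is invented, not an existing proposal):

<package name="apache" version="1.3.12">
  <depends name="glibc" minversion="2.1"/>
  <file path="/usr/sbin/httpd" type="binary"/>
  <file path="/etc/httpd/httpd.conf" type="config"/>
</package>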
Re:Possible Security issues... (Score:2)
The only possible courses of action are:
* Carefully check the source of the software you're installing and the installation procedures
* Only get trusted software from a trusted source
* Sod it, and hope for the best
packaging applications == lexical closures (Score:2)
Any program is written and tested in a particular environment (libraries, writable directories, etc.) and makes references to those objects. To function correctly, it should really be shipped with the complete environment it depends on. That's no different from a nested function in Scheme or some other fully lexically scoped programming language (sorry, C/C++ doesn't need to apply).
Of course, requirements on applications are a bit more complicated than functions. When shipping the "closure" of a program over its environment in that way, as an optimization, we may not want to ship every single library with it; many of them may be present in the target execution environment (shipping a secure checksum may be sufficient). Furthermore, for some values in the closure, we may want to allow "substitutions". For example, for some libraries, we may want to allow newer versions to be substituted. And some references, e.g., to a "document directory" are dependent in more complex ways on the environment. Such deviations should probably be identified and declared explicitly as exceptions.
Lexical closures get their power from two properties. First, in modern languages, the "free" references to dependencies in an environment are captured automatically and reliably. Second, once closed over an environment, a function can't go out and start accessing other parts of the environment (at least not without being very explicit about it in most languages).
The alternatives (manual declaration of environmental dependencies, unrestricted access) were tried with programming languages, and they led to unreliable programs, and languages got rid of those approaches pretty quickly. We should probably follow a similar path for packaging applications.
RPM already does most of this (Score:3)
It seems to me that the author simply lacks experience with RPM. RPM in its more recent incarnations has a Prefix option so that you can define relocations from /usr to /usr/local or from /usr/doc to /usr/share/doc or whatever. You simply specify directories that can be relocated without breaking the package.
That solves the problem of different directories for different distributions. If you are not using the same paths as the packager, you can give options to RPM to move the files to where you want them. Obviously it's not perfect because some programs hard code path information, in which case the directory cannot be relocated and should not be specified in a Prefix line.
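For example, assuming the package was built as relocatable, both of these are stock rpm invocations:

# install under /usr/local instead of the packager's /usr
rpm -ivh --prefix /usr/local foo-1.0.i386.rpm
# or move only the documentation tree
rpm -ivh --relocate /usr/doc=/usr/share/doc foo-1.0.i386.rpm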
The author also says that a standard .tar.gz should be used. I totally disagree with this. CPIO is very similar to tar, but has some distinct advantages over it. One such advantage is the ability to read filenames to include from stdin. Of course there is the tar -T option, which may be able to take "-" as an argument, or possibly /dev/stdin, to read files from input. Another advantage to cpio is that it deals with device special files and pipes/sockets better than tar. Of course all of these are moot points: RPM is already using cpio, DEBs use tar. Either way it's about the same thing.
Another concern was the dependency problems. Sometimes package maintainers will list other packages on the requires line. This is a really bad thing to do. RPM will automatically find library dependencies, so there is no need to do this. Packagers who do this are creating the problem, not RPM itself. Do note that RedHat has a habit of doing this, so they are somewhat guilty.
One issue I would like to bring up is that RPM works best on a system when EVERYTHING is an RPM and you do not install any shared libraries from source. I am thinking of making a tool that will generate a small .spec file which, given a directory where a program has already been built, will make install to a buildroot and then package the files into an RPM. That would be useful for keeping track of programs that are constantly changing, such as those you are updating from CVS and recompiling only the necessary parts.
Finally, there is a need for a better version of alien. Right now alien is severely broken in that you must run it as root to get good results. Alien should instead read the permissions and any device special information from the source file and use that information when packaging, instead of requiring root access.
Perhaps another way to fix the packaging problem is a tool, sort of like alien, that would convert an RPM from one distro to an RPM for another. Something that could relocate files, fix dependency info, and then create an output RPM with the changed information. Of course I still say that if the package is made right in the first place, this is unnecessary.
Well, that's a lot to think about, hopefully I got some other people thinking too.
-Dave
Learn from Other Platforms (Score:4)
And I mean this not just in regard to installers and packages, but everything.
And no, I'm not proposing that what we need to do is make Linux look more like Windows [geometricvisions.com] or the MacOS.
But there are problems that others have solved and we can draw on their solutions, even if we can't use their source code.
(Even when I was working at Apple [apple.com] I would tell people about stuff from SunOS or Linux that I thought would go well in the Mac - they wouldn't hear of it).
I think an indicator of the problem we face in trying to bring Linux to the desktop was when I was corresponding with RMS about things I thought would be helpful to the users and I suggested an installer. He replied "What's an installer?"
The best installer I've ever come across on any platform, both to create packages with and for the user to install products with is Mindvision Vise [mindvision.com].
It would be worthwhile to find a friend with a Mac and download it, and make a little toy installer that installs SimpleText and a readme file to try it out (you can download it for free - the installers created with it complain that you've lifted it until you get a valid serial number. It is possible to get a serial for free for installers for freeware).
It beats the living hell out of anything I've seen for Linux.
BTW - if you want to see an installer that really blows, check out PackageBuilder/Software Valet for the BeOS. [be.com] The thing drove me to distraction. It wasn't just the way it would corrupt the data in my archives or crash while users were installing my software with it.
What really drove me nuts is that it had no concept of updating an installer when I had built new software to go in it.
With vise you just drop your new files in the folder next to your installer project and tell it to update. It gives you a list of files that have changed and you can approve or disapprove of updating them (or deleting the ones that are now missing).
PackageBuilder requires you to delete the old file from the installer project, which loses its settings, then you have to go and add your file back in and reset your settings. This is probably the number one reason for every time I've been reluctant to release a new version of my software on the BeOS - I enjoy programming it but I hate the damn installer.
Re:I'm astounded... (Score:3)
Why do Unix users have to be Sysadmins, but Windoze and Mac users don't? Why are the rules different?
It doesn't look that way to me. It looks like they just want a system that works well. Lowering the sights to Windows would be pretty unambitious.
Obviously it would be nice for the user to be more clued up. But there are a lot of people who use computers these days, and making cluefulness a requirement just isn't realistic. If it continues to be a requirement for Unix, then people who don't want to learn sysadmin stuff are just going to have to scratch Unix off their list of options. Do you want them to do that? If so, why?
---
RPM is NOT best... Debs are the way to go. (Score:2)
I'm wondering if all the people posting all the merits of the redhat package manager have ever had any experience with debs.
A lot of the flexibility that the author wants, i.e., all the power of a configure script, can be easily achieved by the package maintainer simply setting up the preinst script for a Debian package. I've never seen a system SO flexible. Debian packages have the ability to suggest, recommend, and require each other. They know exactly what each other provide, and they describe dependencies in a much more intelligent manner than RPM does. Dselect allows you to easily (if you know what you're doing) resolve package conflicts and work out exactly how you want everything installed. Apt will actually fetch and install all packages necessary to install the requested program (with your permission, of course).
The author admits that he hasn't ever used Debian packages, so why bring them into the article?! He talks about how he would like a package manager that will recognize files for what they are and not based on the packages that they belong to (i.e., libraries), but personally, I don't see the need for this type of functionality when your distro already has packages for virtually everything you can imagine. Sure, many people would like to install the latest bleeding edge libs, but that's what Woody is for ;-). One of the main purposes of packages is to allow for easy upgrades between releases. If you're going to be installing lots of libs from tarballs, it kind of defeats the purpose of using packages, since you can no longer apt-get dist-upgrade and have everything brought up to date. If there's some obscure library/program out there that there isn't a package for, then great, why not become a package maintainer and fill the gap?
It seems like the author of this article knows rpm pretty well, but he shouldn't reference things that he doesn't have experience with...
This is not flamebait!
Re:./configure --prefix=/usr/local/foo (Score:2)
You are thinking of GNU Stow [gnu.org], which does exactly this. While I wouldn't recommend this type of file layout for /usr, /usr/local and /opt are another matter entirely, and I would recommend Stow-type packaging there.
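For anyone who hasn't seen it, the usual Stow pattern looks like this (the package name and version are just an example):

# install the package into its own tree, then let Stow manage the links
./configure --prefix=/usr/local/stow/foo-1.0
make && make install
cd /usr/local/stow
stow foo-1.0      # symlinks the tree into /usr/local
stow -D foo-1.0   # and this removes the links again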
My 2 cents worth (Score:3)
Instead of having large numbers of disparate databases, all holding essentially the same information, most of which is never kept in sync with the filesystem (and therefore becomes stale very easily), I'd like to propose that Linux filesystems become package managers in their own right.
And not just package managers. Most of the information that such tools as 'configure' generate is already known -to- the filesystem, albeit in a form that's not readily accessible (hence the need =FOR= 'configure'). But if the filesystem keeps an actual database of all installed packages and files (along with where they currently reside), then 'configure' scripts would become largely simple database searches, which would have increased speed and increased reliability. (No more having to tell 'configure' where non-standard installations are. It'll still find them.)
Also, finding files would be quicker. A linear search is horribly slow, when all you really need is a sorted index that you can do an n-ary search on.
Does this sound a bit like Windoze? Nope! NTFS and FAT32 keep very primitive caches, I believe, but not a comprehensive database. That's why you need installers with Windoze. The filesystem itself is not capable of acting as an "installer" in its own right. The system I'm proposing would be. A simple copy would be as "uninstallable" as an RPM, DEB or SRP. A tarball would have the same functionality as any package manager.
In a day & age where computers can perform feats so astonishing that nobody, even 10 years ago, could have imagined them, and where integration & interoperability have long been established, we are STILL using totally incompatible methods of doing identical operations on identical spaces. This is stupid.
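To make the idea concrete, a configure-style check might collapse into a single query against the filesystem's index. The fsquery command below is pure invention, sketched only to show the shape of it:

# hypothetical: ask the filesystem's package index instead of probing paths
fsquery --package glibc --print version
fsquery --library libjpeg.so --print location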
Re:Don't make me install as root. (Score:2)
Some of these problems could be solved by having a lot more default groups, as well as a more capabilities-based security model instead of the "root uber alles" security model. Different subsystems should have different group ownerships, and you should be able to pick who has control of what subsystems at install time. Also, a sane sudo config at install time would be a plus. That would allow Joe User to change resolv.conf and Jane User the hosts file, but not destroy the sendmail settings or install a trojan accidentally. I think we could use finer-grained access controls than root and user.
I like the idea of installing more stuff in the individual's home directory, but that won't work right if you do have multiple people using a household computer.
I do see some problem with having per-user system settings, in that it could be a big pain for a neophyte to troubleshoot. It would seem to add unnecessary complexity.
Re:Static linking necessary for linux on desktop (Score:2)
There's a reason for dynamic linking. It's because some of these libraries are huge! System libraries that many applications are based upon should always be dynamically linked. Not everyone has an 80-gig hard drive, you know.
--
Re:Directory Structure First (Score:2)
Hear hear. Applications are more self-contained than they used to be, and a global hierarchy like /usr doesn't reflect that.
What about something like this:
/ - Root
/bin - SYMLINKS to individual user-executable binaries
/include - SYMLINKS to include files elsewhere
/lib - SYMLINKS to dynamic run-time libraries
/devel - Development
/devel/include - SYMLINKS to include files
/devel/lib - Static and import libraries
/dev - Devices, same as today (or with devfs)
/doc - SYMLINKS to application-provided documentation
/doc/man
/doc/info
etc.
/prog - Contains program packages
/prog/XEmacs (For example)
/prog/XEmacs/config - System configuration files
/prog/XEmacs/doc - Program documentation
/prog/XEmacs/etc. etc. etc.
/home/user/ - Mirrors the above hierarchy, overriding anything in the global tree with anything in the user's
Re:don't install at all (Score:2)
But, yes, fundamentally I agree: doing packaging "right" (in my sense) requires breaking significantly with the way things are done in Linux/UNIX right now. On the other hand, it may be possible to get there without breaking existing functionality. Plan 9 actually has most of the necessary bits and pieces, and it has a usable POSIX system side-by-side with its advanced naming facilities.
For example, one way of getting there might be to start by adopting some kind of "bundle" format for GUI applications (together with a small modification to the kernel to make them executable) and to add an enhanced version of "chroot" (with more control over the environment). I think both of those would be natural, the first for better usability, the second for better security in some applications. Over time, other pieces could fall into place.
Some sort of communication protocol? (Score:2)
So why do packages only communicate one way before installing? I mean, tarball installs poke around on your system; RPMs and Debs use a predefined database-like structure to poke around, and anything outside these structures cannot be found.
A bit simplified, but I think you will get the point.
System: So, and you are?
Package: foo
System: A foo version has been found, 87229. What version are you?
Package: 239
System: I don't understand the version scheme. Could you explain?
Package: The lower the number, the higher the version.
System: OK. Is an upgrade from 87229 to 239 seamless?
Package: No, some changes in the configuration files are needed.
System: Do you supply an algorithm to cleanly change an older configuration file?
Package: Yes.
System: Do you require other software to be installed?
Package: Yes, I need library glibc 2.0 and libfoo 8
System: Well, we have glibc 2.1.3 installed. Hang on, I will make a connection between you and glibc to sort things out.
Package: Glibc issues are sorted out.
System: Libfoo is not present on this system. Any hints?
Package: Yes, it can be on a FOO CD or downloaded from http://.....
System: Libfoo installed.
System: The library path is:
Etc., etc...
Re:RPM already does most of this (Score:2)
> It seems to me that the author simply lacks experience with RPM. RPM in its more recent incarnations has a Prefix option so that you can define relocations from /usr to /usr/local or from /usr/doc to /usr/share/doc or whatever. You simply specify directories that can be relocated without breaking the package.
It's nice that recent versions of RPM use this. However, it's not quite so nice when somebody decides to make a package with a brand-spanking-new version of RPM which I try to download. 'Cause it breaks.
RPM has some serious issues. Among them (some detailed earlier):
I have RPM on my box, but that's only because I run Red Hat -- I'm familiar with it. Nearly everything else (including new versions of included packages) gets installed in /usr/local.
Every time I install Red Hat I tell myself I'm going to walk the Path of RPM. Every time it's too much of a pain and I go back to .tar.gz.
-Nathan
Re:Packages are nice, but..... (Score:3)
1) IMO, it's not a good point at all. It's correct that only a minority of people care, but that's no reason to remove the choice. It is very important to some people. If there is to be one single standard (god forbid), then it should cater for everyone's needs, not just most people's.
Also, people who don't examine the source can take comfort from the fact that they are installing from a source that the public can examine for security problems.
It's not the installation, but the removal. (Score:3)
Call me a wuss, but I like to know that I'm not going to have to spend an hour tracking down files all over the disk when something I installed sucks and I want rid of it.
some thoughts about the -Grand Unifying Packager- (Score:2)
Like some other poster mentioned, ideally packages/programs should be self-contained - check out MacOS X's solution for things like shared libraries and the like (I think there's an Ars Technica article about it that is pretty good).
But I'm afraid _that_ will never happen...
The second best thing would be a system that:
- has a list of dependencies (like virtual packages in Debian) for every distro, with the corresponding packages for that distro.
- describes the distro fs differences in a flexible way (do i dare mention xml?) - the Redhat, Suse, Debian, whacked-out-beyond-recognition structure...
if we go even more crazy:
- that includes the source, a standard way of making it, and repackaging it immediately to the native package format of your choice (distro)
- that can include source patches, that can be optionally applied (by user choice/distro description whatever)
- (really crazy) can even (if the appropriate build tools are installed, and the source supports it of course) automatically build packages for other platforms (MacOS, BeOS, Win32)
Can you imagine, one day saying "and now something completely different...",- and just making a new "distro description", downloading a bunch of next-gen source packages, and just rebuilding a fresh new distro overnight -> blammo
It would definitely make life easier for the poor slobs that have to maintain binary packages for different platforms/distros.
I'm positive i'm oversimplifying things, but universal source packages would be VERY helpful, esp for the open source community. Universal binary packages would be a good start too though.
What _isn't_ gonna happen is that distros converge to a specific package manager... But we could try to put a layer in between.
Oh well... in the meantime i'll just pretend my good ole debian setup is the only distro... They're actually doing good stuff, with apt and debconf, and a bunch of tools to make it easier for debian-maintainers. Too bad it's of little use with respect to interoperability with other distributions.
Don't make me install as root. (Score:5)
The vast majority of RPM packages don't have any valid reason for being installed as root. The main reason you have to have root right now is to be able to write the RPM database. Fix that. For example, give me a local branch of the RPM database and give root the ability to merge that with the main, shared RPM database.
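RPM's existing --dbpath option already gets part of the way there; a sketch of a per-user install, assuming the package itself is relocatable (the merge-back into the shared database is the piece that's missing):

# create and use a private database under the user's home directory
rpm --initdb --dbpath $HOME/.rpmdb
rpm -ivh --dbpath $HOME/.rpmdb --prefix $HOME/local foo-1.0.i386.rpm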
Miscellaneous other reasons for needing to install as root:
--
The root of the problem. (Score:3)
It's basically a standards issue. I've used Slackware for a long time... I like it. Tar and gzip is all I need to install a package. But once stuff started creeping in that only came out in rpm, I was forced to coax RPM to work on my Slackware installation, or go RedHat. Maybe that's great for the l33t Linux hackers out there, as it gives them options, but for someone who's more concerned with getting software running on their machine, it's a hassle.
IMO, someone the entire Linux community trusts needs to answer some of these questions:
"Where do we install stuff?"
"What format should the distributed files be in?"
"How do we handle upgrades or patches to our software?"
Every answer to these questions is subjective. This is why Debian and RedHat differ. I think that someone just needs to tell us how it's going to be, and let us start doing it. If it doesn't happen, in a few years one really WILL be able to safely refer to Linux as "RedHat" or "Debian."
And I know some people out there are trying to answer these questions. But are they making ANY impact, whatsoever? The only reason the linux directory structure isn't complete spaghetti is because people kind of do what most other people are doing. That works great today.. but I think it's going to implode on us sometime in the future.
Now maybe good ol' Torvalds doesn't want to take a dictatorship stance like that. I can understand that. But these questions of "standardized distribution schemes" will continue to arise as long as no one in authority stands up and makes a decision. Linux may truly only be the kernel... but that's not how the world is viewing it.
I guess I'm okay with the issue remaining unsolved, but if that's going to be the case.. I think the community needs to start spreading that news. "No package standard is forthcoming, choose your favorite, live with it, and stop asking for something unified."
But of course, there's additional problems that appear with a statement like that..
Re:Packages are nice, but..... (Score:2)
1) IMO, it's not a good point at all. It's correct that only a minority of people care, but that's no reason to remove the choice. It is very important to some people. If there is to be one single standard (god forbid), then it should cater for everyone's needs, not just most people's.
2) Binaries are architecture specific. Linux is not.
rpm vs ??? (Score:2)
So for example, rpm is supposed to tell you for sure if you have glibc version x, without you having to write some stupid complicated package script to go and figure out if glibc version x is hiding around somewhere.
Now if there is some issue with compatibility between suse and redhat, obviously something should be done. Ensuring compatibility between different distros in this area was never going to be an easy one, but throwing the baby out with the bathwater is not the right thing.
It sounds like standardised package names is the answer, although that is never going to be a 100% full solution. (If it was, then that would mean all distros are exactly the same, so why have more than one distro? (why indeed?)). But having thousands of packages out there trying to figure all these things out themselves isn't going to help either.
Re:Some interesting ideas, but... (Score:2)
This is a good point, but there are two issues with it...
First, a script is slightly different than a binary executable - I can look at the source of a script to see what it does before I run it.
Second, _I_ never run either of the commands you specified, without first examining the source of the scripts; I use a utility to unpack the RPM, then look at what I'm installing.
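For reference, stock RPM can do that unpacking without any third-party utility:

# extract the payload into the current directory without installing anything
rpm2cpio my.rpm | cpio -idmv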
OK, so I may be a _bit_ paranoid
Directory Structure First (Score:3)
There is an alarming lack of education about what parts of the linux file system are for. I'm sure regulars at #linux can attest to the massive volumes of new users asking questions such as "what's /proc?" or "should I install (app) into /usr, /usr/local, or /usr/local/mysql ?".
It would also appear that many application authors don't know where their apps should live. For example, mysql by default (in the INSTALL) wants to go to /usr/local/mysql, but other applications want to sit in the /usr/local/[bin/etc/lib] hierarchy.
We need to get someone to provide a definitive explanation of what each part of the file system is for, and how they should be used, so that we'll be able to say "RTFM", and have a sounder understanding of our own operating system.
nf
File systems as package managers (Score:3)
That's worth thinking about. The MacOS has had something like this for years; programs are registered in a database when an executable is placed in a folder. Programs are found based on what they are, rather than where they are. Configuration files go in a standard place (the Preferences folder) and, by convention, can be deleted without major trauma. This simplifies the usual case for installation enormously.
So suppose we have a filesystem that holds "packages", rather than files. A "package" is a directory tree, and is normally installed intact and unmodified. Where the package is isn't relevant; there's a database of packages and information about them maintained by the file system. This database replaces "PATH"-type variables; everything that looks at a "PATH"-type variable needs to become a query to this database.
Unlike the Windows registry, the package database needs to be locked to the file system. It's a cache for finding stuff, not something you edit. There should be something you can put in a package which causes a cache of (package-name package-version attribute value) items to be updated.
You also need a place to store preferences-type data. This should not be mixed in with the installed data; for one thing, much preferences-type data needs to be per-user. This looks like a database of (user-id package-name package-version attribute value).
It's too advanced an idea for Linux, though. The UNIX world is too heavily tied to explicit pathnames. BeOS, maybe.
Standardise on deb! (Score:3)
Debian also got a lot of things right by specifying up-front a standard for package naming. I'm sick of all of my dependencies breaking because I dared to install a non-RedHat RPM.
--
Re:Directory Structure First (Score:2)
[...]
/lib - SYMLINKS to dynamic run-time libraries
BAD!
Most people have a separate partition holding the bulk of the system, mounted over a small root filesystem. If /bin and /lib are just symlinks into it, then before it gets mounted:
1. You have no mount binary, unless you expect to keep the real one in the root partition.
2. You have no dynamic C library; it's on the not-yet-mounted partition.
Not to mention, how are you going to mount anything at boot time? The kernel self-mounts / and nothing else.
-- iCEBaLM
I'm astounded... (Score:3)
But seriously, to just randomly pull down a package from a site you don't know from Adam, and then as root say "Oh, go on then, do whatever you want" is plain madness, but then, you guys are going to flame me to death anyway, so perhaps I should just go and be quiet somewhere.
Re:Standardise on deb! (Score:2)
> "requires a mail transport agent", rather than
> specifying "requires sendmail".
Actually, I believe this is possible in RPM as well using virtual packages. There's a section in either "Maximum RPM" or the RPM HOWTO that uses almost your exact words as an example.
As I recall, anything that needs a mail transport agent says
Requires: MTA
in its spec file, and then any package that provides mail transport should contain a
Provides: MTA
line. Not sure how this gets co-ordinated in practice, but there are provisions for doing it, theoretically, anyway.
Re:Packages are nice, but..... (Score:2)
In the end the installer just has to put stuff where it belongs, and probably modify some kind of registry (or setup files or whatever). If it needs to execute some code in order to determine what exactly to do, then so be it. Just don't let the executed code do any damage. This way we can have a more robust, but still perfectly safe, installer.
--
Re:Directory Structure First (Score:4)
> We need to get someone to provide a definitive explanation of what each part of the file system is for, and how they should be used, so that we'll be able to say "RTFM", and have a sounder understanding of our own operating system.
See www.pathname.com [pathname.com].
--
Re:Directory Structure First (Score:2)
put your application in a subdir of c:\program files\
put your dlls in c:\windows\system\
write some crap to the registry
maybe add some stuff to the start menu
done.
granted, 99% of windows installs don't worry about where libraries are, and what compilers are needed, etc. as unices do, but they're easy and simple.
My FreeBSD box has more bin and lib directories than I'm probably aware of, and it seems silly. Perhaps it's time to invent a whole new layout for the unix fs organization. Applications and tasks have changed over the years; maybe we should work on how to make the fs layout accommodate that.
These have been the ramblings of a very sleep-deprived person
Re:Packages are nice, but..... (Score:2)
We do. We have several - they're called rpm, dpkg and the various front ends. If the user interfaces of these packages are too arcane, then *they* need work, rather than what you propose . . .
This is such a boneheaded idea that even Microsoft has largely given it up - IIRC, while each program contains its own installer, the actual work is now managed by the OS.
I'm not denying that there are problems here, but your solution has been demonstrated to be a Bad Thing. Making friendlier interfaces to dpkg and rpm, and setting them up to make package installs off a CD easier, might help.
The MentalUNIX Packaging System fixes all of this! (Score:3)
http://mentalunix.sourceforge.net
They currently have a 4 or 5 page "mentalunix paper" in the entire distribution, with a small part on mpkg. From what I've read (in the paper and on the mailing list), mpkg is going to be amazing.
No more binary packages! Everything distributed as source code. The nice thing is that it can be interactive. Yes, interactive. If you load up the packager with the normal options, it will load up a simple console GUI (they are supposedly going to make a Gtk+ based front-end too), and you can configure the program. It stores everything as XML. The configuration is handled in a config.xml file, the interface in interface.xml, the dependencies in dependencies.xml. The interfaces will be built using XML, TCL, PHP, and JavaScript (well, XML and any of those languages added). Imagine -- the kernel package will have xconfig load when you open it! mpkg will also feature a daemon that monitors when new programs are installed in the standard directories (everything but
-------------
Re:Some interesting ideas, but... (Score:2)
When you do an 'rpm -Uvh my.rpm' or 'rpm -i my.rpm', what do you think is happening? Two scripts are being run as root, one before extraction and one after.
I felt he had some good points. A dependency system that doesn't depend on itself being present in its dependencies *would* be cool.
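Which is exactly why it's worth reading them first; stock rpm will show you the embedded scripts and the payload before you commit:

# display the pre/post install scripts that would run as root
rpm -qp --scripts my.rpm
# and list the files the package would install
rpm -qpl my.rpm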
don't install at all (Score:5)
For stuff that isn't packaged with the applications, the package format should contain version information and checksums for any other files it depends on. It should declare those dependencies. The operating system should then be able to identify what is needed, possibly fetch it over the Internet, and give the package access to those files under a symbolic path name.
Such application bundles with dependencies should not (normally) be allowed to access the raw file system for their installation at all. In fact, even most applications should probably not be allowed to reference absolute pathnames. All references should be through symbolic paths. That way, one actually has a prayer of figuring out what the thing depends on, and it would also help with security.
Furthermore, the notion of "install scripts" is broken, because it is difficult for anything to figure out what went wrong in the bowels of some script, and because scripts may do things to the system that are difficult to undo. Information about application bundles and their needs should be entirely declarative.
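A declarative description might look something like this (the format below is invented purely to illustrate the idea; nothing like it is standardized):

# hypothetical manifest shipped inside an application bundle
Provides:      myapp 1.2
Depends:       glibc >= 2.1
Depends:       libfoo = 8 checksum=sha1:...
Symbolic-Path: DOCS -> per-user document directory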
Another way of looking at that is that applications and application installation needs to work similar to objects and software components. In the past, programs consisted of functions that looked for, and modified, global variables all over the place. These days, good software components have well defined interfaces, get access to their environment only through those interfaces, and rarely use global variables. That's the kind of change that also has to happen at the level of applications.
The system that is closest to that in many ways is Sun Java: with jar files and the JavaBeans standards, there are well-defined ways for software to get installed and interact with the rest of the system, yet the environment can limit what new software components can do. We need something similar for Linux packages. That would also require additional naming and name translation support for Linux, similar to what Plan 9 offers (however, Plan 9 never adopted a good way of packaging applications based on their naming model--a shame, they were pretty close).
Re:Directory Structure First (Score:3)
RPMs, and other packaging formats, should have install scripts, especially if they're not part of a particular distro. Most source tarballs have reasonably good configure scripts, why don't RPMs? I'm not a programmer, either, but I'd think that RPM is capable of doing everything that the author wants it to do, if it's used right.