
RPM Dependency Graph

Lomby writes "Following the spirit of the kernel schematics poster, I wrote a script that generates a diagram that depicts the rpm packages installed in your system, along with their dependencies. You can find more details and a download link at freshmeat."
  • which vendor (Score:1, Insightful)

    by Anonymous Coward
    The problem with RPMs is when there are different dependencies for the same package depending on who it was packaged for.

    So how do you handle that, a separate graph for each vendor?
    • Just because rpm packages have the same name doesn't mean they're the "same package". Note that rpm is only a way to deploy software; think "tar with metadata" (or cpio, in this case). Its purpose is to offer a mechanism; the packaging politics are entirely up to the distribution.

      It's a common mistake to imagine rpm packages as a big, uniform package base serving all distributions that use the format. It isn't. Each vendor can package software in a different way, built with different options and with different dependencies. Some distributions based on rpm are even migrating to a packaging layout more similar to Debian's than Red Hat's (e.g. libfoobar2 instead of foobar-libs).

      So the answer is yes, you must have a different graph for each vendor.

      As a side note, I've done that before using Gustavo Niemeyer's depmanager [sf.net]. If you don't work with a very restricted set of packages, the graph becomes very, very dense and confusing. But it's good for finding dependency errors.

  • Recursive loops (Score:5, Interesting)

    by flonker ( 526111 ) on Monday July 29, 2002 @02:36AM (#3970310)
    Ah, but does it handle recursive loops? i.e. Package A v1.2 requires Package B, but Package B requires Package A v0.9?

    I've encountered that kind of thing way too frequently building stuff on Cygwin. Admittedly, RPMs are not the same as building from source.
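    For what it's worth, rpm can usually cope with a genuine A-needs-B, B-needs-A loop as long as both packages are handed to it in the same transaction, since dependencies are checked across the whole set at once. A minimal sketch (package names and versions are hypothetical):

    $ rpm -Uvh packageA-1.2-1.i386.rpm packageB-2.0-1.i386.rpm   # one transaction, loop satisfied together

    If B really insists on A = 0.9 exactly, of course, no install order will help; that's a version conflict rather than just a loop.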
      With all the huge companies and brilliant minds behind Linux, I can't believe there is such a huge lack of alternatives for distributing and installing software. Such an alternative would both make things much easier on the end user and free up developer time. IMHO, this is currently the #1 item on the whine list.

      PS. Interesting RPM rant here [distrowatch.com]

  • by Other ( 29546 ) on Monday July 29, 2002 @02:39AM (#3970315)
    I wonder how hard this would be to modify to use debian packages and dpkg instead of rpm. Anyone taken a look at the source?
  • by linuxbaby ( 124641 ) on Monday July 29, 2002 @02:39AM (#3970317)
    I would just put a title at the top of it saying:

    "Why to use apt-get:"
    • or use up2date or autoupdate

    • "Why to use apt-get:"

      This is very funny, but it's not fair. :)

      Any package system would look horrible if you connected all of its packages with their dependencies.

      Please refer to my previous post [slashdot.org] and create a similar dependencies graph in Debian and you'll see. :)
    • Unfortunately, apt-get goes barmy as soon as you install an rpm it doesn't know, and the documentation does not seem to show any way of fixing it. Plus it doesn't (and can't) solve the problem of false dependencies. I use my own spooling software [freshmeat.net] and apt-get kept telling me to install LPRng before it would work, so I had to ditch it. The packages that claim to need LPRng actually only need a program called `lpr' which sends jobs to the printer.

      Before apt-get is usable for me, I need some way of telling it that its idea of what's installed, or needs to be installed, is sometimes wrong.

      TWW

        • There is a package called equivs; have you tried that? Something like: apt-get install equivs; cd /tmp (or somewhere easy); equivs-control; vi; put in the dependencies and values you need (read up on this); equivs-build; dpkg -i. Hope that helps.
        • There is a wonderful package called equivs have you tried that?

          Something like:
          apt-get install equivs
          cd /tmp (or somewhere easy)
          equivs-control [nameofpackage]
          vi [nameofpackage]
          Put in the dependencies and values you need (read up on this).
          equivs-build [nameofpackage]
          dpkg -i [dpkgs it spits out]

          Hope that helps.
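          For illustration, a control file for the lpr case mentioned a few posts up might look roughly like the sketch below. This is only a sketch: the package name is hypothetical, and the fields follow the template that equivs-control writes out.

          Section: misc
          Priority: optional
          Standards-Version: 3.5.2
          Package: lpr-dummy
          Provides: lpr
          Description: dummy package that claims to provide lpr
           Satisfies dependencies on lpr for software that only needs an lpr command.

          Feed that file to equivs-build, install the resulting .deb with dpkg -i, and apt should stop insisting on LPRng.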
    • Here's my RPM dependency graph, along with additional text to get past the lameness filter.
      _
      / \
      \_/
    • > I would just put a title at the top of it saying:
      >
      >"Why to use apt-get:"

      The dependency mess is one of the reasons we had to add rpm support to apt. But if your package dependencies are really bad, just throwing apt in won't help much. In fact, you must build your packages based on a consistent policy in order to make apt work properly. Debian relies strongly on its policy because by doing that you ensure that apt will work correctly later; it's not apt that magically fixes a bad packaging layout.

      I maintain an RPM-based distribution and I can say that it took a long time to fix our package base in such a way that apt can work smoothly. And the real problem caused by a bad dependency layout is not on package installation, but in package upgrades. (Imagine two different hairy graphs and you must convert from one graph to the other without breaking anything.)

      I would call it "why you need a good packaging policy". Once you implement it, apt will work as a consequence.

  • Clutter everywhere.... Gotta have Gtk+, QT, imlib, libgraphics, libjpeg, libgif, libHBig, Libtiff, libJohn, Libfruit, Libmonkie, libsuit. I wonder when somebody will set up a cvs for a linux distro, kinda like FreeBSD does..
  • I made a program that could draw random lines and dots one time too. I never thought to submit it to Slashdot.

    Actually, on a more serious note, a quick look for a screenshot brought up an image that was a bunch of lines and dots all looking pretty and stuff, and I'm sure it represented an RPM graph, but absolutely none of it was labeled. So there doesn't appear to be any practical use for this at all.

    And if you want something for the 'Oh, that looks neat and it's meaningful too', I think you should stick to the Linux Kernel. It seems deeper than an RPM to me for some reason.
  • RPM is nice, because almost everyone uses it, and because it is based on Redhat, which - unlike Debian - devotes enough effort to the initial installation process that it comes close to being a viable Windows alternative.

    I love debian - in theory - but in practice, it can be a bitch to get working. Even experienced Debian users who repeatedly try to persuade me to abandon RedHat are forced to admit that they never did get USB working, and after a while you realize that they are more in love with the theory of debian than the reality.

    So what are the problems with Linux?

    Firstly, multiple incompatible packaging systems. There is no good reason why we need both debs and rpms other than petty politics.

    Secondly, no elegant way to integrate software that hasn't committed to one of the packaging systems into an architecture. RedHat and Debian both work great when you stick to rpms and debs, but just try installing the latest version of a piece of software that doesn't have an rpm or deb yet, and you run into a world of pain.

    It is time for a new approach, hopefully one that is backward compatible with previous packaging systems, but which provides a unified distribution mechanism for binaries, while allowing different distributions to do things in their own way.

    None of this is brain surgery, people!

    • There is a reason. (Score:1, Insightful)

      by Anonymous Coward
      And the most important one of all.

      Choice.

      You don't like rpm? Use deb. You don't like deb? Use rpm. You don't like either? Create your own or compile from source.

      As for the FUD about '..piece of software that doesn't have an rpm or deb..', well, that's the sacrifice you make when you choose to use a distribution as opposed to rolling your own system.

      All that aside, the most blatant flaw here is the phrase 'Windows alternative'. Linux is Linux. It's not an alternative to anything. Don't mind the zealots. If you ignore them, they go away. ;)
    • by dmiller ( 581 ) <djm.mindrot@org> on Monday July 29, 2002 @02:58AM (#3970356) Homepage

      Secondly, no elegant way to integrate software that hasn't committed to one of the packaging systems into an architecture.

      One does not have to "commit to one of the packaging systems". Adding a single .spec file does not make adding Debian support any more difficult. Your paragraph implies some sort of conflict between the two systems, where there is none.

      RedHat and Debian both work great when you stick to rpms and debs, but just try installing the latest version of a piece of software that doesn't have an rpm or deb yet, and you run into a world of pain.

      What is so difficult about installing unpackaged software? Redhat & Debian go out of their way to ensure that /usr/local is free for such things. If you mean that it is difficult for end-users to install such software, perhaps you should try getting them to compile and install unpackaged Windows software for a comparison.

      That being said, it is very easy to turn most random tarballs off the net into RPMs, so long as they don't deviate too far from standard build/install procedures. Your typical ./configure && make && make install package can usually be turned into an RPM in about 5 minutes, without the need for patching.
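      For the curious, a minimal spec file for such a package might look roughly like the sketch below. Everything here (name, version, file list) is made up, and it assumes the tarball's "make install" honours DESTDIR; older rpm builds it with "rpm -ba hello.spec", newer releases use "rpmbuild -ba hello.spec".

      Summary: Example built from a vanilla ./configure && make tarball
      Name: hello
      Version: 1.0
      Release: 1
      License: GPL
      Group: Applications/System
      Source: hello-1.0.tar.gz
      BuildRoot: %{_tmppath}/%{name}-%{version}-root

      %description
      Hypothetical example package.

      %prep
      %setup -q

      %build
      ./configure --prefix=/usr
      make

      %install
      make install DESTDIR=$RPM_BUILD_ROOT

      %files
      /usr/bin/hello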

      • by Anonymous Coward
        Or in 5 seconds, using checkinstall:

        http://freshmeat.net/projects/checkinstall
        What's so difficult is the incompatibilities between where RedHat (et al.) decide a package should sit, and where the author believes it should sit. The worst example I can think of is perl and mod_perl. The RPM versions of perl insist on cluttering up /usr (along with every other thing in existence), but as soon as you go to CPAN and update something, voila... it wants a newer version of perl (sorry, I don't WANT to wait 6 months for RH to get around to releasing new rpms for bugfix versions of perl). Now, if you put it in /usr/local where it likes to be, suddenly you have two separate module trees... perl will use /usr/local, but mod_perl will still use /usr. UGH! If you try putting it in /usr, you'll stomp ALL OVER the redhat dependencies, so when they DO release "updates", you'll backpedal versions.

        I'm a big fan of the BSD ports system. If it was installed as part of the OS, it goes in /usr; if not, it gets its own subdirectory in /usr/local so it's contained. If you want to minimize your path to /usr/local/bin, just add symlinks.
        • You are abusing the system and then complaining when it breaks. Use cpanflute2 (in the rpm-build package) to make building rpms of CPAN modules very, very easy.
    • by Anonymous Coward
      It is pretty easy to make an RPM from almost any set of files that you can think of. All you really need to do is create an RPM "spec" file with a text editor. It is a short simple script that you can create which guides RPM in the packaging and installation of a piece of software. It is cookbook stuff, a no-brainer.

      Take the time to learn RPM. It is an awesome sys admin tool. It is not just for installing software. It is a complete configuration management system for software. You can verify checksums, check for missing files, find out which file belongs to what software package, verify your entire system (a few examples follow below). Too bad most people haven't taken the time to learn a little more about RPM. It is a real time saver.

      I'm interested in hearing valid criticism of RPM from individuals who have worked with it and know its ins and outs. But really, unless you have that level of experience with RPM, all I can say is that you don't know what you are talking about.
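      For reference, the sort of commands being described above (the package name is just an example):

      $ rpm -Va                 # verify every installed package: checksums, sizes, permissions
      $ rpm -V openssh          # verify a single package
      $ rpm -qf /usr/bin/ssh    # which package owns this file?
      $ rpm -ql openssh         # list every file a package owns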

      • I'm interested in hearing valid criticism of RPM from individuals who have worked with it and know its ins and outs.

        I turned to Debian from RedHat and Mandrake because RPM's build process lacks granularity. RPM itself is a strange mix of C, perl, and shell that can't be disentangled to a user's working tastes. It was designed with the end user in mind and does that job well, but if you want to build from source, debian's system makes more sense.

        Suppose you want to tweak XFree86. Once you've made your alterations, RPM requires a complete rebuild of the package (this was a deliberate design decision made for sound reasons). Debian's build system allows one to rebuild only what needs to be rebuilt, just as with a vanilla tarball. The process can be "rewrapped" with dh_foo commands at any point, and different stages of the build can be accessed with "make -f debian/rules target" from the root of the source tree (a concrete sketch follows at the end of this comment). Building a deb is just like building a tarball. (In fact, building a deb is building a tarball!)

        Debian lets you keep your sources cleanly organized too. With RPM, should you be working on a few other packages in addition to XFree86, you need to manually take note of which patch applies to which package, as every srpm gets unpacked into the same directory. There is a macro file you can alter somewhat to taste (if you prefer "srpm" instead of "SRPM", for example), but in practical terms the entire process is locked up from the beginning. For a time I had the "rpm --rebuild foo" dance wrapped in a script that tried to keep a sane tree full of sources, but there is only so much one can do without resorting to ridiculous symlink farms that can't be sanely pruned. The binary half of RPM leaves certain macro variables unset until later in the build process, so if you've tried a little reorganization you'll find that RPM's macro-file half barfs on unknown quantities.

        Once I switched to Debian I found it much easier to integrate new and interesting software into my system without creating too much of an unmanaged Frankenstein in /usr/local. This is entirely because of the deb package format and its associated toolchain. Whenever the deb vs rpm topic comes up on Slashdot, many chime in with praise for apt. Yes, apt is a wonderful tool, but if I were forced to choose between debs without apt and rpms with apt, I would still choose debs.

        RPM does have the virtues you extol. But it has vices too that came about for a few different reasons. IIRC, RedHat wanted a package format that would make it easy for third parties to distribute commercial software for RedHat Linux. It's also, well... just plain RedHat's, and they can do what they want with it since it really only needs to work for distributing easily installed binary packages. Hell, that's what makes RedHat worth buying.

        Debra and Ian had other things to keep in mind though. Their distribution is, um... distributed in its development, so the deb package format must cater to 1,000 or so individual maintainers and developers who pool their efforts voluntarily. The modularity, simplicity, and utility of the deb format reflect this.
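        As promised above, the standard debian/rules targets look like this, run from the root of an unpacked source tree (Debian policy requires at least clean, build and binary):

        $ make -f debian/rules clean            # return the tree to its pristine state
        $ make -f debian/rules build            # compile only
        $ fakeroot make -f debian/rules binary  # assemble the .deb(s)
        $ dpkg-buildpackage -rfakeroot          # or drive the whole cycle with one wrapper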

    • "There is no good reason why we need both debs and rpms other than petty politics."

      Space-saving, number of fields in each... Hmmm. More to the point though, choice. No reason to abandon something that's existed quite happily for a while; why don't you concentrate on writing a wrapper around either package?

      " no elegant way to integrate software that hasn't committed to one of the packaging systems into an architecture."

      You obviously haven't run debhelper any time recently, nor have you played with _stow_.
    • Hello!

      I have a Mac OS X machine on my desk, and there is Fink, which allows me to use the Debian package management system too. But I don't use it.

      Why not steal a little bit from the Mac OS X ideas?

      There is a library folder, with all libraries in it. On Linux, this could look like this:
      /Library/readline-1.2.0
      /Library/readline-1.2.3
      /Library/readline-1.3.1
      /Library/perl-10.2.3
      etc.
      In order to use that stuff, there are only links inside the "normal" places. So there could be a link from /usr/bin/perl -> /Library/perl-10.2.3/perl/perl
      The exception is the bare-bones stuff inside /bin, certainly.

      Application stuff that's not started from the command line, like KDE or Gnome, should throw away its starter stuff. This ugly stuff is borrowed from Microsoft and it's evil. On Mac OS X, there is an "Application" folder and every folder inside this has an .App suffix. If you click on this folder, the Finder starts the application with this path: /Application/_app-name-folder_.app/MacOsX/_the-binary_
      All other stuff for the application like icons, translation files and that stuff is inside the _app-name-folder_.app.

      I think it's pretty cool.
    • by hysterion ( 231229 ) on Monday July 29, 2002 @04:16AM (#3970470) Homepage
      Secondly, no elegant way to integrate software that hasn't committed to one of the packaging systems into an architecture. RedHat and Debian both work great when you stick to rpms and debs, but just try installing the latest version of a piece of software that doesn't have an rpm or deb yet, and you run into a world of pain.
      Checkinstall [asic-linux.com.mx] makes this easy as pie.
      $ ./configure
      $ make
      # checkinstall (*)
      • builds your choice of a .deb or .rpm or Slackware package,
      • installs it,
      • saves it in (e.g.) /usr/src/packages/RPMS/<arch>,
      • saves a .tgz of the sources in (e.g.) /usr/src/packages/SOURCES/.
      It has served me quite well -- except the version I'm using (1.5.1) makes empty .tgzs. Not a big deal, and hopefully fixed by now.

      (*) or else 'checkinstall your-install-script'

    • Take a look at the Portage system on gentoo, this may solve some of your problems.

      "Unlike other distros, Gentoo Linux has an advanced package management system called Portage. Portage is a true ports system in the tradition of BSD ports, but is Python-based and sports a number of advanced features including:
      dependencies,
      fine-grained package management,
      "fake" (OpenBSD-style) installs,
      path sandboxing,
      safe unmerging,
      system profiles,
      virtual packages,
      config file management,
      and more. "

      My main problems with package systems are:

      They're not granular enough; you get everything or nothing.

      Dependencies are often completely mad and overly strict.

      There's no centrally integrated package list (except rpmfind, I suppose).

      And distributions package things up in all kinds of weird ways. If they did things according to the LSB and decided on a name/location for each package, then you could use a SuSE package on Mandrake without any major grief.
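      On the granularity point, Portage at least lets you preview the full dependency set before committing to anything; a quick sketch (the package name is just an example):

      $ emerge --pretend mozilla   # list everything that would be built and installed, without doing it
      $ emerge mozilla             # actually build and install it, dependencies included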
    • " love debian - in theory - but in practice, it can be a bitch to get working. Even experienced Debian users who repeatedly try to persuade me to abandon RedHat are forced to admit that they never did get USB working, and after a while you realize that they are more in-love with the theory of debian than the reality. "

      Well, I can't speak for any of your friends, but my Debian install works absolutely perfectly with USB. I use a USB SanDisk SmartMedia reader, a wireless USB mouse, a Logitech QuickCam, a USB hub, an Epson 880, a serial->USB converter for my Palm, and a crappy Canon USB scanner that I can use through VMWare. (Canon will not support Linux in any capacity, their scanners are junk. Barely works for me in Windows.)

      Oh, and the handy acpid package that is part of Debian really helps my ACPI-only laptop have decent battery life.

      Do I like the theory of Debian? Yes, and for me, the reality is just as good. Oh, in addition to installing flawlessly on my tricky laptop, Debian worked equally well for me on an old Alpha and a NetWinder. RH/MDK/FBSD all refused to install on that particular Alpha, and no other distro that I know of supports the NetWinder.

      Apt-get is great, but it is only one part of what makes Debian the respected quality distribution that it is. I've had no more issues setting up USB (or anything else really) with Debian than I have with any other OS I've used recently.

      "It is time for a new approach, hopefully one that is backward compatable with previous packaging systems, but which provides a unified distribution mechanism for binaries, while allowing different distributions to do things in their own way."

      Hmm... isn't that what the LSB is all about? Giving a known base that you can build on?

    • Sadly, like almost every post here suggesting ways to make Linux more usable and popular, this post is drawing a sprinkling of ill-founded defensive replies from folks who seem to see increased ease of use and increased popularity as a threat. No one is suggesting turning Linux into another Windows. But, some focus on ease of use and a standard approach to installing and removing software is overdue if Linux is going to make serious inroads into a customer base beyond the very small segment of the population that actually enjoys writing code and running networks.

      Over the last several years, I've tried Red Hat, SuSe, Mandrake, Slackware, Debian, FreeBSD, Gentoo, and probably some others I can't remember. Geez. I even had Minix installed on a B&W ThinkPad 500 way back when. All packaging systems seem to break at some point as you introduce "foreign" software. In my experience, this usually happens when you need to remove or upgrade some piece of code in order to keep your New Favorite Toy happy, but, guess what, your packaging system thinks every other package on your system is dependent on that code.

      "Choice" turns out to be equivalent to being held hostage to a single vendor.

    • None of this is brain surgery, people!

      It isn't brain surgery, but software dependencies are a very complex problem. On a graph of all software packages, a subset of the packages are always moving forward in versions while others lag behind. Packages could depend on any range of versions of other packages, and sometimes those versions are not compatible, for any number of good or bad reasons. So, if you want to create a distribution that seems to require versions 1.3, 1.7, and 2.3 of package X, but the version 2 series is a severe change relative to the version 1 series, what do you do?

      If a new package system comes about, new filesystem hierarchies should be devised to allow seamless installation of many versions of software with some sort of advanced linker that can deal with them all. This solution could be as complex as the problem!
    • Even experienced Debian users who repeatedly try to persuade me to abandon RedHat are forced to admit that they never did get USB working, and after a while you realize that they are more in love with the theory of debian than the reality.

      I run only Debian and I found it to be a piece of cake to get USB working.

      The problem was that once my camera was recognized, the Linux kernel didn't know what to do with it. Does that make me more in love with the theory of the Linux kernel than the reality?

    • All this insistence upon binary packaging...have you tried Gentoo? Why bother with a binary package when you can just as easily do it with a source package that compiles itself? I'll grant Gentoo is a bit green for the moment, but it's making rapid progress and personally, I think that once it's been around a little longer, Gentoo and Gentoo-style source and local compilation systems will get more reliable interop than RPM files compiled by 8 million different users...
    • Here's one tiny solution that will go a long way. I've never understood why all the distros don't use it:

      No dependency should be a package! If kdelibs-3.0.3 requires qt-3 or greater, then the dependency should be "libqt.so.3", and not qt-3.0.3-17.i386.rpm. (of course, even that is oversimplifying, as many distros will break Qt up into five different packages).

      The purpose of packages is to make the user's life easier, not to lock them into a particular lifestyle.
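      In spec-file terms, the distinction being argued for looks roughly like this (version numbers borrowed from the example above; for linked libraries, rpm's automatic dependency generation already produces the soname form):

      Requires: libqt.so.3     # depend on the library interface the binary actually needs
      Requires: qt >= 3.0.3    # versus pinning one distribution's package name and version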
  • Finally, someone made a script that draws the hell.
  • by Erpo ( 237853 ) on Monday July 29, 2002 @03:18AM (#3970384)
    This is a really neat project. I'm definitely going to download the code and generate that map, just to see how massively hairy it is. However, the fact that a project like this is newsworthy (i.e. produces such interest-generating and complicated output) seems to suggest that perhaps package management on linux (rpm, deb, whatever...) has just served to cover up a much larger problem.

    Package management makes it possible and (depending on your point of view) easy to update an entire system using apt-get or up2date (or whatever). It also allows users to install and uninstall additional programs with a minimum of fuss. I think it's safe to say that without package management, system administration would be much harder. However, what's been created to support this system is a visually attractive yet tangled web of dependencies and interrelations between software packages that makes maintaining multiple versions of shared libraries (for legacy as well as bleeding-edge applications), creating backwards- and forwards-compatible software packages, and installing software that isn't aware of the package system in use on the machine a real pain and sometimes (for non-ultragurus) impossible.

    In my opinion, what we really need is a single, standard package system for all linux-based distros. Chuck rpm, chuck deb, chuck them both and create a new one incorporating the best features of both, I don't care, but I think it really needs to be done. Also, I think a change in what we think of as a 'package' is in order, a minimum functionality so to speak. If a user cannot make the statement: "If I install (package X) then I can do (process Y)," then package X is not significant enough by itself and should be incorporated into another package that requires/uses it. Examples:

    If I install the "linux base" package, I can boot an absolutely bare-bones system.

    If I install the "textual system" package, I will have access to a complete textual system, including a text-mode console login, text-mode editing tools (vi for example), a textual system configuration manager, etc...

    If I install the "graphical system" package, I can boot into a system that is completely graphical all the way, including a graphical login, desktop environment, a graphical system configuration tool, a web browser, etc...

    If I install the "office suite" package, I will have access to a word processing program, a spreadsheet editing program, and a presentation creation program.

    Individual options (e.g. vi or emacs) within each package should be just that - options, not separate packages. Sure, a user may install more than he or she needs if packages are this "collective", but in my opinion, users would be much happier having an office suite installed when they only really need document editing capabilities than with a default OS install that takes up more than 1GB because it comes with everything preinstalled so regular users won't have to puzzle out the overcomplicated package management system in order to install something else.

    </rant>

    Sound good?
    • Debian provides something much like this by having meta-packages called "tasks". You might have an office-suite package that contains no files of its own, but depends on (say) kword, gnumeric, etc.
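      As a rough sketch of what such a metapackage's control stanza looks like (package and dependency names here are illustrative, not an actual Debian task):

      Package: office-suite
      Architecture: all
      Depends: kword, gnumeric, kpresenter
      Description: metapackage that pulls in a complete office suite
       Contains no files of its own; it exists only to drag in its dependencies.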
      • That sounds really cool -- I'll have to check it out the next time I try a deb-based distro. However, it still doesn't solve the problem of overcomplicated webs of dependencies and the lack of a single standard package system.
    • Ummm, maybe you have missed the point that *all* current OSes have this same problem. Look at DLLs under Windows. If you wish to run multimedia/game apps you need DirectX - DLLs and COM objects. But the way this is normally handled is that you buy a CD, which automagically checks for dependencies and offers to install them (most games offer to install DirectX - sometimes even when you have a newer version). On Linux the model is a little different. You download packages, you don't buy them. If it was on a CD, a package could include all its dependency installations in a single monolithic install script. But when you download it, it will be kept minimal. And even in Windows you occasionally need something completely new and not just a quick fix.

      Maybe a good solution would be install scripts that, after checking dependencies, offer to go and download and run the install scripts for their dependencies. I can see this becoming recursive.
      • I completely agree with you - windows installers can (and often do) spray files all over the place. Sure, windows has an add/remove control panel, but installers aren't required to place an entry there or on the start menu to make uninstallation easy. An installer could drop files all over the place and offer no way to clean them up. However, the windows model has two big advantages over linux in this respect:

        1. There is a standard, accepted way to install and uninstall programs. Installer wizards for installation, the add/remove control panel to remove them. Installers can choose not to do this, but programs that don't make it easy to uninstall themselves don't endear themselves to the user, and offering an uninstall option is really easy to do.

        2. When installing DirectX, you run one program and it installs everything. It doesn't offer an uninstall feature and new versions are often bug-ridden, a terrible combination, but you don't have to go out and install the pieces one-by-one yourself, nor do you have to worry about backwards compatibility.

        (btw, yes I'm aware that you can download dx uninstall utilities off the net, but they don't come bundled. I'm also aware of how much code bloat its backwards compatibility creates.)

        On the other hand, there are some big disadvantages that come with the windows model:

        1. You must execute freshly downloaded code to install an app, except possibly in the case of an MSI. This isn't increasing the security risk, as you have to execute freshly downloaded code every time you run a freshly downloaded program anyway, but it does increase the chance that a bug in the latest version of the installer would prevent you from installing or would cause problems with the system.

        2. Programs aren't required to register for uninstallation. One very good example is software that installs spyware, but conveniently forgets to uninstall it when you remove the program. I know we should all run ad-aware regularly anyway, but it's still not a good idea to leave the responsibility of uninstalling a program with the program itself.

        What I'd like to see is a combination of the techniques commonly used on windows with the package management systems used on linux to create a better solution. The idea would be to have a system:

        -That has a standard way of installing and uninstalling programs.
        -Whose packages are made up of large groups of files providing a specific capability (functional applications) rather than a set of a few shared libraries or an executable.
        -With a single package management application that is in total control of the install process, rather than leaving this important task to the individual applications.
        -That makes it easy and convenient to uninstall _any_ installed application.
    • What is the problem you're trying to solve? Who cares about complicated dependencies, so long as they are minimal and acyclic?
    • Sound good?

      No. You're an idiot. Say the word "modular". Repeat it a couple times. Contemplate it. Then go beat your head against a wall.

    • In my opinion, what we really need is a single, standard package system for all linux-based distros. Chuck rpm, chuck deb, chuck them both and create a new one incorporating the best features of both, I don't care, but I think it really needs to be done.

      I've said this before and I apologize for repeating myself, but I must insist on the fact that most people don't seem to understand the difference between mechanism and policy in package management. The package manager offers a mechanism; the distribution enforces policy.

      That said, having a common package tool for all distributions wouldn't help. You can say most distros today have standardized on rpm, but the packages are largely incompatible because there's no common policy between them.

      One of the policy rules could be, for example: "all runtime libraries must be packaged separately, and named differently according to binary compatibility". It makes sense, it works, Debian does that, Conectiva (which uses apt-get) does that, I think PLD and Mandrake are doing that. But for other distros it would mean a massive package layout change, and I doubt they would like to do that. (The reason for that rule is: if you upgrade a binary that needs a new version of a library that breaks binary compatibility with the previous versions, other binaries can still use the old library.) Before you say anything: rpm's ability to keep multiple versions of a package installed doesn't help here.

    • This is really a mechanism vs. policy issue. The "options" you describe aren't much different from having RPMs like

      • office-base
      • office-wordprocess (depends on office-base)
      • office-spreadsheet (depends on office-base)
      • et cetera ad nauseam.

      Options are simply packages that depend on the mother package (and possibly some other options). Or can you prove me wrong?


    • Individual options (e.g. vi or emacs) within each package should be just that - options, not separate packages. Sure, a user may install more than he or she needs if packages are this "collective", but in my opinion, users would be much happier having an office suite installed when they only really need document editing capabilities than with a default OS install that takes up more than 1GB because it comes with everything preinstalled so regular users won't have to puzzle out the overcomplicated package management system in order to install something else.
      And then what happens when there is a new version of one "option" in your "old" package that needs one new lib from the "base" package?
      Your idea doesn't scale AT ALL, nor does it make any sense.
      Sound good?
      No, it sounds like you have no idea what you are talking about and you've also never used debian which accomplishes this in a sane way via pseudo-packages and tasks.
      -davidu
    • No. You cannot just chuck existing package formats. You need to be able to import either format to your new scheme or your new format will never take off.
    • yeah, sounds great; but only as long as our beloved "users" use it, and I never have to see this horrendous mess of "generic application" package installations without knowing what applications I install.

      I personally think Gentoo's portage/emerge is perfect and I love it to bits. I would not in a million years recommend that our coveted "home users" use it. Which is why talk of "single" and "standard" always undermines one of linux's (GNU/Linux's, whatever) strongest points. If you like standardization above all, use Windows - they seem to be pretty good at it.

      While we are falling all over ourselves trying to come up with things "users" will like, let's not forget what we like. (BTW, I consistently put "users" in quotes because I feel the title would be more applicable to people like me, seeing how we actually use the damn thing.)

  • No output? (Score:2, Funny)

    by OSSturi ( 577033 )
    Strange, I get a correct-looking rpmgraph.dot file when running the program, but neato refuses to make a ps file: "warning, language ps2 not recognized". If I change ps2 to ps in the Makefile, all I get is a zero-byte ps file. Well, there's probably something missing on my system, I should have a look at the rpmgraph. Oh, wait...
  • ...is cooler.

    Ever wondered what a plot of a portion of the PGP web of trust [rubin.ch] would look like? Here it is. [chaosreigns.com]

    sig2dot generates plotting data from the signatures in your GPG keyring; this data can be rendered by springgraph or graphviz. Many pretty sample plots on the page.



  • With so many VB "certified engineers" out there, someone ought to do something to depict how VB code functions.

    Or better yet, how about something for MS's "Visual Suite" ?

    That ought to make Billy the boy a very happy man.

  • by Walles ( 99143 ) <johan.wallesNO@SPAMgmail.com> on Monday July 29, 2002 @04:20AM (#3970475)
    Shameless plug:

    I have written a small tcl script (called pkgusage) that lists all your installed packages (RPMs or DEBs) together with the number of days ago you last accessed any of the files in each package. Thus, if you do "pkgusage.tcl | sort -n", packages which you seldom / never use will be at the end of the list.

    It also checks dependencies between packages, so it won't tell you to uninstall a package that something else depends on.

    If you are interested, get it here [nada.kth.se].

    • This sounds like a really good thing, so I wanted to try it. I downloaded the package and ran it according to the instructions. It gets to the "Resolving inter-package dependencies..." phase in a few seconds, but then it just stays there, using 100% cpu. Is it really supposed to take that long? I have waited for around 45 minutes before killing it, should I wait longer? This is a 1.8 GHz, so it is fairly fast.
      • On my system (a 400MHz Pentium II), the first phase ("Calculating ages of NN packages") takes quite a while (15 minutes?), with a percentage counter ticking up every couple of seconds. This part actually does stat() on all files included in any package, so this part should definitely take more than "a few seconds" (while doing mucho disk access). If this part actually takes only a few secs, the problem might be somewhere in this phase. Could you e-mail me the complete output of running the program without passing the result through sort?

        The second (inter-dependency resolving) phase performs one packaging system (rpm or dpkg) call per package, so (especially on a Debian system), this may take a while with lots of CPU usage but not so much disk access. But as I stated above, do e-mail me your output (without sort) and we'll see if we can resolve your problem.

        Cheers //Johan

    • I wrote another script along the same lines. Yours is probably better, as I presume it checks the last-used timestamp on each file in the package. Mine just checked what was currently running (based on /proc), including libraries. It then checked the dependencies and gave a list of RPMs that you are running, RPMs that aren't running but are required, and RPMs that are candidates to uninstall.

      The reports of the tcl script running a long time aren't surprising. Mine is a csh script, and it, too, will sit there for a long time before giving a result, and I don't think mine is doing as much work.
    • I used this on my workstation at home. It works quite well, thank you!

      -molo
  • by BJH ( 11355 )
    The example file [inf.ethz.ch] he provides is quite interesting - there seem to be three major dependency points; fileutils (which you'd expect), perl (ditto), and python (huh?).

    I guess the python dependency comes from some of the configuration tools that Red Hat includes - can anyone confirm that?
    • rpm -e python
      error: removing these packages would break dependencies:
      python is needed by modemtool-1.22-3
      python is needed by 4Suite-0.11-2
      python is needed by dateconfig-0.7.4-6
      python is needed by redhat-config-users-0.9.2-6
      python >= 1.5.2-27 is needed by python-xmlrpc-1.5.1-7.x.3
      python >= 1.5.0 is needed by eroaster-2.0.11-0.6
      python >= 1.5.2 is needed by hwbrowser-0.3.5-2
      python = 1.5.2 is needed by python-devel-1.5.2-35
      python is needed by redhat-config-network-0.9.10-2
      python is needed by pythonlib-1.28-1
      python is needed by PyXML-0.6.5-4
      python >= 1.5.2 is needed by apacheconf-0.8.1-1
      python >= 1.5.2 is needed by pygtk-0.6.8-3
      python = 1.5.2 is needed by tkinter-1.5.2-35
      python >= 1.5 is needed by rpm-python-4.0.4-7x
      python >= 1.5.2 is needed by rhn_register-2.7.9-7.x.2
      python is needed by python-popt-0.8.8-7.x.2
      python >= 1.5.2-27 is needed by up2date-2.7.61-7.x.2
      python is needed by fetchmailconf-5.9.0-11
      python is needed by printconf-0.3.61-4.1
      /usr/bin/python is needed by 4Suite-0.11-2
      /usr/bin/python is needed by redhat-config-users-0.9.2-6
      /usr/bin/python is needed by gettext-0.10.38-7
      /usr/bin/python is needed by eroaster-2.0.11-0.6
      /usr/bin/python is needed by kdelibs-2.2.2-2
      /usr/bin/python is needed by redhat-config-network-0.9.10-2
      /usr/bin/python is needed by PyXML-0.6.5-4
      /usr/bin/python is needed by anaconda-7.2-7
      /usr/bin/python is needed by alchemist-1.0.18-1
      /usr/bin/python is needed by gnome-core-1.4.0.4-38
      /usr/bin/python is needed by rhn_register-2.7.9-7.x.2
      /usr/bin/python is needed by rhn_register-gnome-2.7.9-7.x.2
      /usr/bin/python is needed by up2date-2.7.61-7.x.2
      /usr/bin/python is needed by up2date-gnome-2.7.61-7.x.2
      /usr/bin/python is needed by printconf-0.3.61-4.1
      /usr/bin/python is needed by printconf-gui-0.3.61-4.1
      /usr/bin/python1.5 is needed by up2date-2.7.61-7.x.2
  • Does anyone know where I can find a contrib rpm for the app? The source doesn't compile on this machine...
    • Well, there's nothing to compile, it's only a straightforward script. Read the README file for info on what you need. Oh well, here is the list from the README:
      - An rpm-based Linux system
      - Graphviz (http://www.research.att.com/sw/tools/graphviz/)
      - psutils
      - Ghostscript (http://gnu-gs.sourceforge.net/)
      - gawk
      - Python
  • /usr/ports/sysutils/pkg_tree

    Manpage [mavetju.org]

    Of course, it's not graphical, but it's the same sort of thing.
  • Why all that complaining about the lack of a single universal package system when Debian has a tool to convert .rpm to .deb? That's almost as stupid as complaining that there's not a single universal graphics format. As long as there are tools to convert from one to another, what's the problem?
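    Presumably the tool meant here is alien, which converts in both directions (filenames below are hypothetical):

    $ alien --to-deb foo-1.0-1.i386.rpm    # produce a .deb from an rpm
    $ alien --to-rpm bar_2.1-1_i386.deb    # or go the other way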
  • Damn, I would have truly thought it funny if rpmgraph required some other small library called rpmgraph-core to run.

    Then, when I try to get rpmgraph-core installed, it requires rpmgraph to be installed already.

    No wonder only nerds use Linux: the average layman does not WANT to know what the word Recursive means.

  • Isn't it ironic (or something) that this program
    doesn't itself come in an RPM?
  • For some reason I quickly read this as 'RMS dependency graph'..

    That'd be interesting to see.. ;)
  • Could someone please do something like this for gentoo, so I can tell what I'm going to be breaking when I remove a package?
  • by Anonymous Coward
    No, you don't need special tools. It's all there, baby.

    $ su root
    # apt-get install graphviz
    $ apt-cache dotty > package-graph.dot
    $ dot -Tps -o package-graph.ps package-graph.dot

    - chad

  • Well, for those who, like me, have no ps viewer installed but are even lazier than I am, here's a png version [pandemonium.de]. It's really tiny, but you get the basic idea from it.
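    If your Graphviz build was compiled with PNG support, you can also skip PostScript entirely and render the graph straight to a bitmap, e.g.:

    $ dot -Tpng -o rpmgraph.png rpmgraph.dot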
  • I tried to map those dependencies using Prolog in 1995 in my postgrad work. I tried to find dependencies between programs from the "SEE ALSO" sections of their help files, and then
    paint the graph according to the distribution of each program's dependencies, related to their value/security implications.
  • Many people are complaining about RPM and its weaknesses/strengths.

    This isn't about which package manager/build from source/auto conflict resolver is better. This is a graphical representation of the dependencies in RPM.

    If it provides anything beyond artwork, I'd imagine that it would be a handy chart to work out dependencies before you start, and a visual reference for programmers that care about how difficult/easy it is to install their package.

    Heaven help us, it might even become a tool to get the large distros to coordinate their packaging schemes.
  • by bryanbrunton ( 262081 ) on Monday July 29, 2002 @09:09AM (#3971293)

    Someone should build a web interface that graphically depicts the dependency tree.

    It would then be installed at rpmfind.net and other RPM repositories.
  • Readers may be interested in the InDependence [ogi.edu] project, which built tools to help make the dependency info in RPMs more accurate and complete.

    Crispin
    ----
    Crispin Cowan, Ph.D.
    Chief Scientist, WireX Communications, Inc. [wirex.com]
    Immunix: [immunix.org] Security Hardened Linux Distribution
    Available for purchase [wirex.com]

  • Try this on a debian (potato) box:

    apt-get install jserv

    Look in absolute horror as it trawls the kitchen sink down, including xfree.

    This isn't Debian's fault, exactly - the package is fully featured, but it's useless for people who just want the core functionality.

    The only place I've seen this done right, so far, is the FreeBSD ports system - mod_php being a good example: it asks you what support you want before checking dependencies.

    I'd imagine the same goes for gentoo, which I will try one day - but I'm currently using SuSE because I've been through the whole slackware/roll your own/freebsd/redhat/etc mill so many times that I'm now happy to just use one that works, but isn't necessarily bang up to date with package versions.
    • The problem is in packages that are "fully featured". If a package provides several programs, some of which need X and some of which don't, it should be split into two packages, to avoid problems such as the one you describe.

  • also graph the rise in utter annoyance you feel as you fight your way through the rpm dependency 5th plane of Hell????

    --Curse you Debian users...

"If value corrupts then absolute value corrupts absolutely."

Working...