
Tom Lord's Decentralized Revision Control System 300

Bruce Perens writes: "He'll have to change its name, but Tom Lord's arch revision control system is revolutionary. Where CVS is a cathedral, 'arch' is a bazaar, with the ability for branches to live on separate servers from the main trunk of the project's development. Thus, you can create a branch without the authority, or even the cooperation, of the managers of the main tree. A global name-space makes all revision archives worldwide appear as if they are the same repository. Using this system, most of what we do using 'patch' today would go away -- we'd just choose, or merge, branches. Much of the synchronization problem we have with patches is handled by tools that eliminate and/or manage conflicts -- they solve some of the thorny graph topology issues around patch management. Arch also poses its own answer to the 'Linus Doesn't Scale' problem. This is well worth checking out." If you're asking "What about subversion?", well, so is Tom.
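The patch-based workflow that arch would subsume can be sketched in a few commands (a hedged illustration; filenames are made up, and this is plain diff/patch, not arch itself):

```shell
# What maintainers do today without arch: exchange and apply
# unified diffs by hand, then verify the trees converged.
printf 'one\ntwo\n' > main.c                    # maintainer's tree
printf 'one\ntwo\nthree\n' > main-branch.c      # contributor's branch
diff -u main.c main-branch.c > feature.patch || true   # contributor makes a patch
patch -s main.c feature.patch                   # maintainer applies it
cmp -s main.c main-branch.c && echo "trees merged"
```

With arch's model, the claim is that this exchange becomes a branch merge rather than a hand-carried patch file.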
  • POSIX! (Score:3, Funny)

    by ekrout ( 139379 ) on Tuesday February 05, 2002 @06:20PM (#2958452) Journal
    In his FAQ he states it works on any system that's POSIX compliant.

    /me high-fives Tom
  • why FTP? (Score:5, Insightful)

    by devphil ( 51341 ) on Tuesday February 05, 2002 @06:22PM (#2958465) Homepage

    I guess I'm wondering why arch uses FTP as its network protocol. The FAQ says that it should be workable behind firewalls since the data is all transferred in passive mode, but this still seems like a huge step backwards.

    So, what am I missing? I only got to read a little bit of the site before it got DDOS'd by slashdot.

    • Re:why FTP? (Score:4, Insightful)

      by Anonymous Coward on Tuesday February 05, 2002 @06:30PM (#2958518)
      I guess I'm wondering why arch uses FTP as its network protocol.
      It's because this "Decentralized Revision Control System" is just a guise for p2p filesharing. It's really cool: you check in all your files and they automatically get replicated, becoming part of the "master tree". No one can shut down the master tree. No one can tell you not to put your files there. (Hey, it's part of my project!)
    • Re:why FTP? (Score:2, Insightful)

      I guess I'm wondering why arch uses FTP as its network protocol.

      Well, the article mentioned that arch consisted of a bunch of shell scripts and some C code, so it looks like ftp was just an "off the shelf" component that the author could make good use of.
      • Re:why FTP? (Score:5, Insightful)

        by curunir ( 98273 ) on Tuesday February 05, 2002 @06:59PM (#2958692) Homepage Journal

        Wouldn't rsync over ssh have been a much better choice for an "off the shelf" component? Most ftp servers tend to have a few (read: waaaaay tooooo maaaany) security concerns for my taste.
        • Re:why FTP? (Score:3, Insightful)

          by GigsVT ( 208848 )
          or even scp for that matter...
          • Rsync works (nowadays, I believe, by default) over an SSH connection, but unlike FTP or scp, it doesn't have to transmit the whole file... only the parts that change. So it could be part of an effective version control system.
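The size win from delta transfer is easy to demonstrate locally (a sketch of the idea only; rsync's actual rolling-checksum algorithm works per block and doesn't need the old file on the sending side):

```shell
# For a small edit, the delta is far smaller than the whole file,
# so transferring only changes saves most of the bandwidth.
seq 1 1000 > old.txt
sed 's/^500$/five hundred/' old.txt > new.txt
full=$(wc -c < new.txt)
delta=$(diff old.txt new.txt | wc -c)
echo "whole file: $full bytes; delta: $delta bytes"
```

The same idea is what makes a repeated `rsync -az -e ssh src/ host:dest/` cheap after the first run.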
      • by devphil ( 51341 )

        Well, flowerpot, now I'm wondering whether arch uses the ftp programs, or just the ftp protocol. That is, do you need an ftp client or server installed for arch to work? From what I've seen it wouldn't be too hard to do the protocol yourself.

        I still can't get to the site, so oh well.

        • by slamb ( 119285 )
          Well, flowerpot, now I'm wondering whether arch uses the ftp programs, or just the ftp protocol. That is, do you need an ftp client or server installed for arch to work? From what I've seen it wouldn't be too hard to do the protocol yourself.

          Whether or not you use a standard client or server, the protocol itself is flawed. It sends passwords in plaintext.

          True, implementing an extremely simple FTP server might avoid the buffer overflows in standard stuff, but it couldn't solve that problem.

        • I've written an FTP server, and it actually wasn't that hard. Granted, it was for a specific purpose so I left some of it out...

          My two main objections to FTP itself: 1) plaintext passwords; and 2) a separate data connection, whether it's passive or active.

          Passive data transfers work well if the client is behind a firewall. If the server is behind one (like in the DMZ, DNAT'ed or otherwise), active is better. Passive transfers will just hang unless the firewall is smart enough to snoop the control connection.

          I'd go for either rsync over ssh (as has been suggested) or even HTTP before FTP.
    • FTP is there. It's there on all sorts of systems. It was sufficient to get it working.

      I'm sure that down the road it would be a very slick thing to use the rsync protocol for data transfer between sites, as implemented in rsync and Unison. That would provide all sorts of ooey-gooey-encrypted, compressed goodness to help network connections be used more efficiently.

      The file transfer protocol isn't nearly as important as how it deals with versioning, logging, and the like, to be sure...

  • And others (Score:4, Insightful)

    by Ed Avis ( 5917 ) on Tuesday February 05, 2002 @06:23PM (#2958470) Homepage
    Not only 'what about Subversion' but also 'what about CVS, what about Aegis'. If you include non-free systems then what about Perforce or Bitkeeper.

    This is getting worse than journalling filesystems :-(.
    • Re:And others (Score:2, Insightful)

      If you include non-free systems...

      Unfortunately, for some people/projects that's not an option.

      This is getting worse than journalling filesystems :-(

      I can see how you would feel this way, but keep in mind, a healthy number of different implementation ideas and design philosophies can only hasten the development of open source tools.
    • Re:And others (Score:3, Informative)

      by Polo ( 30659 )
    • Aegis doesn't just handle source code management; it can also be used to enforce a development process that includes steps like testing and peer review.

      You can implement the same thing using arch or CVS, of course, but Aegis offers much more structure in this regard.
  • er... (Score:3, Funny)

    by Anonymous Coward on Tuesday February 05, 2002 @06:23PM (#2958472)
    A global name-space makes all revision archives worldwide appear as if they are the same repository
    I don't know whether to laugh or cry...
    • Re:er... (Score:2, Interesting)

      by coyul ( 119455 )

      I don't see this as a big deal (or a big problem) at all. If you simply prefaced the name of each module with the fully qualified domain name of the server it happens to be sitting on, you'd accomplish this painlessly. This is already recommended practice for third-party Java developers (a package called 'widget' developed at, say,, would be called com.acme.widget)

      • Re:er... (Score:2, Insightful)

        So, if I develop something useful, it would be called edu.cornell.resnet.jmk63.widget? A little unwieldy, methinks. I suppose it works as a unique identifier, but what if I graduate and I no longer have control of that location? Back to the arch example, if merging code trees is done without a full copy, anyone who is patching against my code tree (or even anyone patching against them) is out of luck.
  • by eric_aka_scooter ( 556513 ) on Tuesday February 05, 2002 @06:25PM (#2958476) Homepage
    I used to work for a company (let's call them ACME, because I don't want to be sued) whose HQ was on the other side of the country, and with programming groups all around the world. We used VSS, with the server at HQ, and it literally took 10 seconds or more to change directories, and much longer to retrieve or update! This hobbled our office's ability to work (HQ didn't care, they just made us work weekends to make up for the loss of efficiency).

    A more distributed source control system could obviously circumvent problems like these, but with this caveat: the code that different groups work on would need to be sufficiently black boxed that most changes wouldn't require changes in other projects. It's just good programming style, but I know that this wasn't the case at ACME, and given my experiences with Corporate America I doubt it's true in most places. Maybe I'm just being pessimistic...

    Anyway, it sounds like a good idea if it's used right.

    • We tried - briefly - VSS in a project involving approximately 15 developers in the same building. It was slow and awful.

      CVS may not integrate so prettily into VC++, but it does work! We found switching over to CVS to be relatively painless: the only problem was that sometimes a file that shouldn't have been was edited using Notepad or something, which introduced ^M characters that confused CVS.

      Extrapolating from our experiences, the reason why VSS worked so poorly for your company might be more due to the quality of VSS rather than the degree of distribution of your developers.
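The ^M cleanup mentioned above is a one-liner with standard tools (a sketch; filenames are illustrative):

```shell
# Simulate a file touched by Notepad (CRLF line endings), then strip
# the carriage returns so CVS sees plain Unix line endings.
printf 'int main() { return 0; }\r\n' > notepad.c
tr -d '\r' < notepad.c > clean.c
od -c clean.c | grep -q '\\r' || echo "no CRs left"
```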

      • CVS DOES integrate into Visual Studio. Look for the IGLOO plugin. Works like a charm.

        I also agree VSS sucks and CVS works better.

        What annoys me about all the new Version Control systems like the one being proposed is that they are ignorant of NON POSIX / UNIX systems. I have to work on both LINUX and Windows. And as such need version control on both.
    • (let's call them ACME, because I don't want to be sued)

      Have I got news for you! []

      (But it seems to be a personal company, so maybe he won't sue :)
  • by Sludge ( 1234 ) <slashdot.tossed@org> on Tuesday February 05, 2002 @06:26PM (#2958486) Homepage
    That sounds like hype. In the real world, selecting the aspects of software we want to compile from on remote sites would have serious implications. The first being security. The second being quality. Linus may not scale, but he has good judgement. That's the fundamental problem.
    • by patnotz ( 112592 ) on Tuesday February 05, 2002 @06:39PM (#2958562)
      Whether Alan Cox (or whomever) uses patches or some other source control (like arch) (a) you still have to download the software from a remote site (i.e., the Net) and (b) Alan still has control over what makes it into his repository.

      The point is that it allows separate developers (AC, AA, LT, etc. in the kernel case) all to maintain their OWN trees while enjoying the powers of source control software. The added benefit of arch is that their separate trees are all connected without having to give write-permission to each other.
    • If so, you've noticed that when you choose to merge data from branch (A) into branch (B) [no, it *doesn't* happen automatically unless you want it to!], then you have *control* over what parts of A go into B. You may have noticed that you can ask for the differences between A and B, and go through them by hand, and accept only specific parts -- just as someone doing patching does.

      No revision control system tries to replace good maintainership -- rather, their job is to make it easier.
  • by ethereal ( 13958 ) on Tuesday February 05, 2002 @06:26PM (#2958489) Journal

    The ability to do distributed development, manage multiple (possibly hostile or private) branches at once, good merge and diff tools, etc. sounds sort of like ClearCASE. Except of course that ClearCASE costs money, and doesn't have the global namespace thing going on. Rational had better be careful or their customers are going to move over to arch (especially since their Unix GUIs have sucked more and more with each successive release).

    Bravo to the author on this tool - it sounds like a great advance of the state of the art if it works like he says.

    • by Anonymous Coward
      Except of course that ClearCASE costs money
      Actually, I wrote an open-source implementation here [] (with a few additions: mounting the repository as a filesystem, and a couple of other things as I note them). Actually, I didn't really "write" it, just cleaned it up a little (besides these additions). The original "implementation" in open source is just the output of a program that turns my reverse-engineered bytecode into pretty object code. Then I gave it names and stuff.
      NOTE: You can only do this with COPYRIGHTED but UNPATENTED software. You can't circumvent a patent by reimplementing it with different control structures and variable names. You CAN do so with a copyright. If the binary is totally different (based on objectification), then so is the content. (This is the "clean room" reimplementation you sometimes hear about.)
  • Question (Score:2, Offtopic)

    by Taco Cowboy ( 5327 )

    Other than CVS and arch, are there any other (GPL) software revision control systems available, and how would you rate them?
    • Re:Question (Score:2, Funny)

      by rbgaynor ( 537968 )
      Do yellow Post-It notes stuck to the bezel of my monitor count?
  • Call me a dummy, but I assumed he meant the possibility of corrupting a distributed global namespace. I presume this features some form of strong authentication system (I couldn't reach the site), but it could be pretty hairy if you were doing a make world out of this using any "unofficial" patch sources. But we all audit all the code we run, don't we!
  • From his faq (Score:3, Interesting)

    by Anonymous Coward on Tuesday February 05, 2002 @06:44PM (#2958596)
    On subversion and arch...

    Both systems provide repository transactions with ACID

    ACID (Atomicity, Consistency, Isolation and Durability) is only something that has been implemented and tested well on high-end RDBMSes such as Oracle.

    When you think about that, why is it that no one is using a DB backend for source control? Wouldn't that just get rid of so many ambiguities? For one, we wouldn't have to deal with all the nonsense and reinvent a million wheels, when a nice pair of Rolls-Royces resides within a good RDBMS.

    People need to think outside their brains, and in regard to source control, I feel we need to make more packages that interface well with a good RDBMS rather than create our own RD functionality in 40ks. What's the use?

    Anyone know a good system for incorporating source control with a database? Oracle or Postgres would do.
    • Re:From his faq (Score:2, Informative)

      by The Man ( 684 )
      Anyone know a good system for incorporating source control with a database? Oracle or Postgres would do.

      Well, it's certainly not a GOOD source control system, but I know for a fact that starteam [] uses a database backend. I'm pretty sure Rational ClearCase does also, and I'm told it sucks a good deal less. Anyway, there are a lot of problems with starteam; one of them being its strong preference for running on microshaft platforms, another its lack of database support (access, sql server, and oracle only - gimme a break!) and its outrageous cost (10s of $k for a small team plus massive server hardware). So, yeah, it's been done, but I'd much rather use even CVS than starteam. ClearCase, well, I'd love the chance to see it, but I never will at this cheapass company.

    • by jonabbey ( 2498 )

      ACID (Atomicity, Consistency, Isolation and Durability) is only something that has been implemented and tested well on high-end RDBMSes such as Oracle.

      Oh, come on. ACID isn't that hard to do. Lots of systems implement ACID. Why do you imagine that only Oracle, etc., can do it?

    • FreeVCS []

      Free as in beer, source available but I am not sure if the license is compatible with OSS, works with interbase, windows only, integrates with VC++, Delphi etc.
    • You'd think that using an RDBMS would give you lots of control over your source tree, but think again. Any decent RCS works incrementally, i.e. you are storing deltas, not whole copies of the code.

      The indices (stuff this "indexes" crap) would be really bad and slow on all your tables.

      Also, RDBMSs suck at representing hierarchies, which source trees naturally are. In fact, I dare say the only reason RDBMSs are so widespread and accepted today is that originally it was much faster to do things this way rather than use an OO, hierarchical approach.

      The way you store things has to be written specifically so that it fits in with the way projects work and evolve.

      Forgive the lameness. Haven't had my 2nd coffee of the day yet...
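The delta-storage point above can be illustrated with plain diff and patch (a sketch of what RCS-style backends do internally with reverse deltas; this is not arch's or any real tool's actual on-disk format, and the filenames are made up):

```shell
# Keep the newest revision in full plus a reverse delta, then show
# that the older revision can be reconstructed from the pair.
printf 'alpha\nbeta\n' > r1.txt
printf 'alpha\nbeta\ngamma\n' > r2.txt            # head revision, stored whole
diff -u r2.txt r1.txt > r2-to-r1.delta || true    # reverse delta, stored small
cp r2.txt restored.txt
patch -s restored.txt r2-to-r1.delta              # walk back one revision
cmp -s restored.txt r1.txt && echo "r1 reconstructed from r2 + delta"
```

Mapping that onto relational tables is what makes the "just use an RDBMS" argument harder than it first sounds.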

    • Re:From his faq (Score:2, Informative)

      by owenomalley ( 103963 )
      Actually, I'm team lead on a CM system where all of the metadata is in Sybase. We use Sybase replication to keep multiple servers at different sites in sync with each other. (Sybase has a nice replication model that will store changes in a stable queue until the remote server is available again.) Anyways, using a real database means that our tool scales to insane levels (we see peaks on one project of 20,000 file versions/day). We also get the ability to do live backups, etc. It is also very nice being able to write ad hoc queries against the database in SQL. (i.e., in the last month, show me how many file versions were generated at each site on each day.)

      While we keep all of the metadata in Sybase, we store the actual bits in the filesystem.
  • This looks really cool, if only for the fact that it finally has a sane way to rename files. It's annoying renaming, deleting, removing, and adding with CVS.

  • I've been struggling with CVS for a while now, and while it does the job I've always been thinking "There's got to be something out there with recursive add built in."

    Now here comes slashdot with an actual useful story about source control and some of the options and development outside of CVS.

    The only thing to find out now is if the discussion will be of any use, obviously I'm not helping...
  • by mikemulvaney ( 24879 ) on Tuesday February 05, 2002 @06:51PM (#2958629)
    It sounds like it has a lot of nice features, but then you realize the whole thing is written in sh? One of the nice things about CVS is that the client-server nature allows someone to use pretty much any operating system as a client. Subversion takes this to the next step, by making all connections use the client-server model.

    Forcing everyone to use sh is a major hassle. I know that it would work with any "reasonably POSIX" OS, but then developers can't get arch accessibility built into their favorite tools, like NetBeans or whatever.

    Creating local branches is pretty cool, though.

    • I think a major test of this or any other successor to CVS should be how amenable the design is to alternative implementations that integrate seamlessly with the reference implementation.

      I think the fact that the "arch" solution is designed to be so simple and clean that it can be implemented with a few shell scripts bodes well for it.

      I would expect it to be pretty easy to integrate the "arch" solution into lots of other tools by writing a little code which manipulates the files the same way the "arch" shell scripts do.

    • Well, at the least it seems less space efficient than CVS. The arch repositories seem rather frighteningly brittle as well, given that anyone could use file access tools to subtly corrupt the repositories.

      Will be interesting to see what sort of response all this sudden hype for Arch provokes from Larry McVoy and his Bitkeeper project. arch seems unique enough to be worthy of comment from the revision control big boys. ;-)

      • Well, at the least it seems less space efficient than CVS.

        Well yeah, but with the cost of disks these days, I don't think this is a very big problem. I had a hard drive crash on my server last night (this is at home, so it's not professionally backed up or anything), and I was glad to have a real copy of all my source checked out on two different PCs at the time.

        This is an interesting subject, though. At work, we are in the middle of an agonizingly slow migration from BCS/RCS (I'd love to give a link to BCS, if there was one :) to CVS. BCS uses an intricate system of softlinks to provide a CVS-like working area, so switching to CVS is likely to chew up a lot more disk. But really, who cares? For a couple hundred bucks you could set up a RAID array that would hold every line of source that you ever wrote in your life, 1000 times over. Plus, you really only have to check out the stuff you are working on, and if you are currently working on gigabytes worth of source code, then whoa. I know you're talented Jon, but jeez...

        The arch repositories seem rather frighteningly brittle as well, given that anyone could use file access tools to subtly corrupt the repositories.

        Yes, this worries me too. For the projects I work on, I *want* to limit access to the repository to the tool itself, for data integrity and logging purposes, if nothing else. That's one of the problems with CVS: you can still go into those ,v files and wreak havoc manually. (And the bigger problem is that CVS forces you to do so on occasion, like when moving a file to a new directory.)

        I think it's just different tools for different purposes, though: if you really want that bazaar style, this could be useful. But for most projects, you want tighter control, so CVS or subversion or any client-server based system would be better.


        • But really, who cares? For a couple hundred bucks you could set up a RAID array that would hold every line of source that you ever wrote in your life, 1000 times over. Plus, you really only have to check out the stuff you are working on, and if you are currently working on gigabytes worth of source code, then whoa. I know you're talented Jon, but jeez...

          Heh, not even close. The last 6 years of work by everyone who has done anything for Ganymede amounts to 19 megs in cvsroot plus change.

          However, source code isn't the only thing you'd like to be able to manage with a CVS-like system. We've spent the last few months building a web management/authorization tool that is based on mod_python, CVS, and MySQL. People put their content on our staging server, then use our web tool to browse the staging server and sign files to approve the exporting of those files to our new external web server. When the periodic sync happens, a daemon does a CVS export to a working directory, tars it up, and ssh's it out to the heavily secured external web server.

          Works great, except we have a person who has loaded 500 megabytes of data onto the staging server for a project of his. I don't imagine the content will be changing very rapidly, but I'd hate to have several 500 megabyte copies of his content hanging around in our staging server's CVS repository, cheap disks or no cheap disks.

          Really, revision control management systems are getting specialized enough that one size tool simply won't fit all. Bitkeeper, arch, and the other 'new wave' of SCM systems are explicitly designed to manage distributed source code development, which is not really what we are using our signing/authorizing tool for.

          • We've spent the last few months building a web management/authorization tool that is based on mod_python, CVS, and MySQL. People put their content on our staging server, then use our web tool to browse the staging server and sign files to approve the exporting of those files to our new external web server. When the periodic sync happens, a daemon does a CVS export to a working directory, tars it up, and ssh's it out to the heavily secured external web server.

            Wow, that sounds really cool. Are you going to GPL this stuff too? I would love to get a hold of it.


            • Wow, that sounds really cool. Are you going to GPL this stuff too? I would love to get a hold of it.

              Yup, that's the intent, once things settle down a bit more. The thing started out using PHP, Python, and MySQL, then we added in a CVS layer and the ability to view a copy of our external web server's configuration at any given time. Then the PHP layer got ripped out in favor of mod_python, and now we're looking at adding in a WebWare [] layer to handle session management, caching, etc., rather than simple http basic authentication.

              Give us another month or two to get some user testing and documentation done and we should be making an announcement and putting it out under the GPL.

          • It may be interesting to note that you can do an "svn commit" to check in a change to a .html file and have it immediately appear on your web site. In fact, SVN uses a URL to specify the repository to check out. That URL can be your website. For example:

            $ svn checkout -d site
            $ jed site/index.html
            $ svn commit -m "more tweaks" site

            Your tweaks are immediately published.

            (of course, it sounds like you want a staging server in there, and some kind of workflow, but that can be done and is an exercise for the reader... :-)
            • What's SVN? Does that have to do with Subversion, or is it a WebDAV thing?

              Our tool is designed to allow arbitrary people with UNIX file access privileges (or Samba, or FTP) to manage the content on the internal staging server, then have a defined set of users with review and approval authority who can sign files through the GUI, which tracks md5 signatures for the files to be able to determine whether a given file has changed since it was last signed or not. If a file has changed, the GUI can present a nice graphical context diff between the last signed version and the version on the staging server. If an approved user signs the file, that file gets checked into CVS, and so will be part of the next web server synchronization.

              So yes, workflow, auditing, and a staging server separated from the (very locked down, firewalled) external server.
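The md5-based signing described above can be sketched with md5sum (filenames illustrative; the real tool presumably stores digests in its database rather than in sidecar files):

```shell
# "Sign" a file by recording its digest at approval time, then
# compare later to detect whether it changed since signing.
echo '<h1>welcome</h1>' > page.html
md5sum page.html > page.sig                           # approval: record digest
md5sum -c --status page.sig && echo "still as signed"
echo '<p>edited</p>' >> page.html
md5sum -c --status page.sig || echo "changed since signing"
```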

        • A RAID array, eh? Let's see, that would be a Redundant Array of Independent Disks array. Wow! You have a Redundant Matrix of Independent Disks!

          How does that RMID work for you?
      • The arch repositories seem rather frighteningly brittle as well, given that anyone could use file access tools to subtly corrupt the repositories.

        Huh? The same applies to CVS - any monkey with a text editor could go in and edit the ,v files in the repository. Or you could just remove the repository entirely, or rot13 encode half of the files, or whatever. If a version control system uses a Berkeley DB database, anyone could use rm(1) to 'silently corrupt' that. This is a property of any program which stores data on disk. So what?

        You could argue that the problem is too much privilege - one shouldn't need direct write access to the repository in order to check in code. However I don't think this is a big deal in practice; if you trust someone to make direct commits to your tree, you probably trust them not to do idiotic things generally.

    • Well, it's not *entirely* in sh:

      Totals grouped by language (dominant language first):
      ansic: 61064 (66.48%)
      sh: 27853 (30.32%)
      lisp: 1868 (2.03%)
      awk: 1044 (1.14%)
      sed: 24 (0.03%)

      (If you want more detail, run sloccount over it yourself)

      Anyway, it could be worse; it could be written in Perl ;)

      • Hmm, quite a lot of different languages...

        OK, I'm curious: why lisp?

        Mmm, you have to be very careful when you code in shell, otherwise you end up with a portability mess, and I don't find shell scripts very readable once they get large...
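One concrete example of the portability care the parent means (a generic sh trap, not something taken from arch's code):

```shell
# '==' inside [ ] is a bashism; POSIX test(1) only guarantees '='.
# Likewise, arrays and 'local' are not part of POSIX sh.
x=hello
if [ "$x" = "hello" ]; then          # portable string comparison
    echo "matched with POSIX ="
fi
# case gives portable pattern matching without bash's [[ ]]:
case $x in
    h*) echo "matched with case glob" ;;
esac
```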
  • He'll have to change its name, but Tom Lord's arch revision control system is revolutionary.

    How about polyfork? Sounds like a great way to give equal weighting to every trivial disagreement over design.

    • How about polyfork?

      Silverware would be a better name ... as one can spoon changes back into whichever tree one is following, knife out other changes, and fork the system themselves if they wish.

      Seriously, this wouldn't give equal weighting to every trivial disagreement any more than free source code does anyway. Whether the control system is subversion, cvs, arch, or plain ole text files, we as individuals choose which fork we want to follow. Indeed, currently the mechanism in use is ftp (or alternatively http/rsync), i.e. do you ftp linux-2.4.17.tar.gz, linux-2.4.17-ac3.tar.gz, or linux-2.4.17-myfork.tar.gz? Your decision is based on your trust of Linus, Alan Cox, or myself (probably nil). Using arch wouldn't change this; it would merely give you more flexibility in choosing bits of the Linus kernel, bits of the AC kernel, etc. in creating your own, personal fork that reflects your values and interests, and if others like your choices, they can benefit as well. If they ignore your choices, then who cares? You still benefit in having been able to make and prosper from your choices yourself.

      How on earth could that be a bad thing?

      That having been said, my wishlist would be support for gnupg signatures and authentication, and scp instead of ftp. As to it being written in a shell scripting language, so what? If you really want to run a client or (god forbid) a server under Windows, there is nothing preventing you from writing a compatible client or server in the programming language of your choice (although the mockery one would receive for having used Visual Basic would probably detract some from the feeling of accomplishment, but I digress).
      • (although the mockery one would receive for having used Visual Basic would probably detract some from the feeling of accomplishment, but I digress).

        Hmm. That was meant to be a tongue-in-cheek jab at Microsoft, but in rereading my post it sounds like a jab at you. Apologies, as that was not the intent. You might as easily use Java, C, C++, or C# if you're feeling particularly masochistic. The point is that you are given choice, which is always a net positive.
  • I smell trouble (Score:4, Insightful)

    by heretic108 ( 454817 ) on Tuesday February 05, 2002 @07:04PM (#2958732)
    From the article, it looks good.
    But let me say that I've sometimes been in the position of having to merge branches. In my first hacking job, I had to take code that had been written by 2 crazy Polish programmers, and merge 37 non-working branches into one branch that worked. It was *not* fun, and I enjoyed a well-deserved beer when it was done.
    IMO, a distributed system of archive management that doesn't make ongoing reference to a central tree is a sure recipe for chaos, and poses the risk of making software harder to install/use for the non-skilled, and creating a lot of work in merging disparate branches for the skilled.

    You want package xxyzz? OK - go to Jim's store in San Diego. It's easy to set up. Oh, I forgot to tell you, you've gotta get some bits from Lucy's store in Manchester, and Frieda's fixed a few bugs too - get her fixes from Bonn. And don't forget Peter's enhancements - his store is at the Adelaide University site. What? it doesn't compile? What kind of idiot are you? Just hack it till it does compile, then put it together in your own tree!
  • by e40 ( 448424 ) on Tuesday February 05, 2002 @07:06PM (#2958747) Journal
    It is an important feature of subversion that it will be CVS compatible. I manage a 10+ year old/1+GB CVS repository. CVS has a lot of faults, but I can't throw that version history away. It's too valuable. subversion gives me hope that I'll get something more usable than CVS (we'll see, won't we!) without much pain.

    I'm really hoping the subversion developers succeed.

    Having said that, I'm all for arch succeeding too. Perhaps it will be better for new projects. Who knows.
  • by markj02 ( 544487 ) on Tuesday February 05, 2002 @07:20PM (#2958854)
    The feature list sounds nice, and using the file system in the way it does is also pretty nice. But I just can't deal with 40kloc of shell script for a version control system. How am I supposed to run that sort of system on a non-UNIX system? What kinds of oddball dependencies is it going to have on the shell, path, and environment?

    This seems like it's worse than CVS. Functionally, I'm quite happy with CVS. The main complaint I have about it is that it isn't self-contained but invokes rcs and other shell commands in mysterious ways. "arch" seems to make things worse, not better in that regard. What I would like to see is something mostly like CVS, but something that is implemented as a clean, self-contained library with a single command line executable (with subcommands) and a built-in HTTP-based server. Until that comes along, I think I'll just stick with CVS.

    • CVS hasn't invoked rcs or diff or anything for ages.
    • arch isn't designed for someone who wants a clean CVS replacement. It's a completely different system, with all its own powers and drawbacks.

      Subversion is a CVS replacement.

    • From the cvs info pages:
      CVS started out as a bunch of shell scripts written by Dick Grune, posted to the newsgroup `comp.sources.unix' in the volume 6 release of December, 1986. While no actual code from these shell scripts is present in the current version of CVS, much of the CVS conflict resolution algorithms come from them.
      A "mess of shell scripts" can be very useful for a proof-of-concept.
      • > A "mess of shell scripts" can be very useful for a proof-of-concept.

        Indeed. The language doesn't make a whole lot of difference, and well written /bin/sh code is going to work pretty much anywhere. Hell, cvsup is written in Modula 3 and people use that :P

        Personally I wouldn't mind a Ruby version.. :)

        (having said that, C isn't all that wonderful a language; its low-level power can just as easily be used to blow your head off as to make for a super-fast program).
    • I guess what you're suggesting stems from a different, monolithic philosophy (Windows/classic Mac OS) than that of UNIX: writing tools that do one thing and do it well, while leveraging other tools on the system that do what they do well.

      I really don't care if this system is written using shell scripts, Java, or plain old C. Well, I do prefer just C, but that's my personal preference. I don't want the author to implement his own version of diff, check in, check out, etc. These subsystems are already available. Why reinvent the wheel again? If there are existing source repositories, it would be a pain to convert all the trees into a new proprietary format. RCS has worked well for so long, it would be a shame to throw all the histories away and start anew.

      Written properly, the shell, path, and environment dependencies shouldn't be a big problem, although I have run into annoyances with environment space limitations under different UNIX OSes. But this particular environment space difference is taken care of by xargs(1).

      My largest concern is performance, but since most of the work is done with compiled code, it shouldn't be too bad, however I haven't looked at the source.
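The xargs(1) point above can be made concrete with a small sketch (file names are made up): a plain command substitution can overflow the kernel's argument-length limit on a big tree, while xargs feeds the list to the command in safely sized batches.

```shell
# Sketch: operate on many files without hitting ARG_MAX limits.
rm -rf /tmp/argdemo && mkdir -p /tmp/argdemo && cd /tmp/argdemo
for i in $(seq 1 500); do touch "file$i.c"; done
# Something like `ls -l $(find . -name '*.c')` expands the whole list
# into one command line and can fail with "Argument list too long";
# xargs instead splits the names into batches under the limit:
count=$(find . -name '*.c' -print | xargs ls | wc -l | tr -d ' ')
echo "$count"     # → 500
```

Each batch is a separate invocation of the command, which is why notification hooks driven this way can fire many times for one logical check-in.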
  • by kfogel ( 1041 ) on Tuesday February 05, 2002 @07:31PM (#2958923) Homepage
    I hope both systems (Arch and Subversion) get some widespread use. Like a lot of Subversion developers, I'm genuinely curious to see a) how well the Arch model works in practice, and b) how well Arch's implementation of that model works out. If it turns out to be winning, then that'll be a big step forward for collaborative projects & free software. Arch sounds a lot like Bitkeeper [] only without the license problems, and I've talked to some happy Bitkeeper users before (a small sample, so it's hard to know whether we're dealing with a Shift To Better Paradigm or just good software).

    Subversion [] was deliberately designed to address CVS's shortcomings, not to break new ground. Our philosophy was essentially conservative: CVS basically works, but has some bugs and maintainability problems. Let's keep the model and fix the problems. Result: Subversion.

    The ideal situation is a world where both models have good, free implementations. Then we'll all very quickly find out which model works better. :-)


    • I've been keeping an eye on subversion, as the goals are noteworthy. Fundamentally, the Bitkeeper and now 'arch' model is very powerful. I used Sun's Teamware (Bitkeeper is an enhanced Teamware) in organizations with over 100 developers and remote development, and it required almost zero administrative overhead. The core of Sun's Teamware, Bitkeeper, HP's old KCS, Sun's Smerge/Smoosh, and 'arch' is simply the branch/merge capabilities. Once this problem is solved, the rest of the services can be built around it. This is where most SCM systems fall flat on their face... They lock you into a centralized server model, a user interface that is clumsy, terminology that is cumbersome, policies that don't meet the consumers' needs, etc...

      I view 'arch' as having a great model with a very simple implementation. Because of the simplicity, 'arch' developers will be able to respond very quickly with bug fixes and new functionality, and others can build around 'arch' to support their own policies and process flows.

  • Check out Meta-CVS. (Score:4, Informative)

    by Kaz Kylheku ( 1484 ) on Tuesday February 05, 2002 @07:33PM (#2958938) Homepage
    Adds renaming on top of CVS, and some other niceties. Can be used to create patches that contain versioning changes. With Meta-CVS, people can restructure directories in conflicting ways, and then resolve the conflicts when they merge the structure.

    This doesn't add anything else; no atomic commits or distributed operation over multiple repositories, etc.

    Of course, you can use branches to track foreign code streams, as you can with CVS. The nice thing is that you can rename things on your own branch and keep up with an unrenamed source of patches. Or if the other people are using Meta-CVS, they can give you patches that include restructuring.

    Meta-CVS is currently about 1600 physical lines of Common Lisp (with some CLISP extensions and bindings to glibc2) scattered in twenty or so files. A lot is done with little!
  • Dialup? (Score:2, Insightful)

    by gouldtj ( 21635 )
    Maybe I don't quite get it...

    Let's say that I don't have write access to the Linux kernel tree. So I go grab a copy, make a branch on my machine, and fix it. Then I post to the kernel mailing list saying that I've fixed this bug. Linus gets all excited and wants to merge my branch in, but he can't because I am offline. So he forgets, and nothing happens.

    Now you could say that I could upload it to the central server, but I don't have write access to that. I wouldn't imagine that they would give me (a non-kernel developer, trust me, I'd break something) access to the tree.

    I guess I just don't get how useful this will be.

    • So you'd make a patch set with the provided tools, tar it up, and upload it to any server you have write access to, or attach it to your e-mail.
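A minimal sketch of that workflow (directory and file names are made up): capture the change with diff, bundle it for upload or mail, and confirm it re-applies cleanly with patch.

```shell
# Sketch: turn a fixed tree into a portable patch set.
rm -rf /tmp/patchdemo && mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
mkdir linux.orig linux.fixed
echo 'int bug = 1;' > linux.orig/sched.c
echo 'int bug = 0;' > linux.fixed/sched.c
# -r recurses, -u makes unified diffs, -N includes added files.
# diff exits 1 when the trees differ, so don't treat that as failure:
diff -ruN linux.orig linux.fixed > my-fix.patch || true
tar czf my-fix.tar.gz my-fix.patch     # ready to upload or mail
# Anyone can re-apply it to a pristine tree (-p1 strips the leading
# directory component from the paths in the diff headers):
cp -r linux.orig linux.test
( cd linux.test && patch -s -p1 < ../my-fix.patch )
cat linux.test/sched.c     # → int bug = 0;
```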
  • by wls ( 95790 ) on Tuesday February 05, 2002 @07:52PM (#2959042) Homepage
    I've done SCM for a number of years, professionally evaluated version control products, and helped edit an Anti-Pattern book on the subject. It seems, at least to me, that the majority of version control systems out there have the basics covered when it comes to check-in, check-out, branching, and labeling. The standard features, if you will.

    However, most of the reasons that I've seen companies change version control systems is because of completely different reasons. Here are a few that come to mind:

    - A version control system must be fast. I worked at one company where we tried to use Visual SourceSafe over a WAN; it took HOURS to share code. A good VCS should transmit the minimal amount of data.

    - A version control system must provide security. All too often management uses the SCM repository as kind of a shared directory (BAD, BAD, BAD) -- and people who have no need to see or modify the code, do... implicitly.

    - A version control system should provide extensive auditing and notification capabilities that can be selectively turned on and off: allow logging the positive and the negative, and let people know when particular operations happen to a set of files. In one case we attempted to get PVCS to run a script on each change to send mail to the PM. Checking in a directory flooded inboxes, since auditing couldn't be applied to collections of code as a unit.

    - There MUST be a recovery mechanism. Ever try to recover a lost SourceSafe password? Yikes. (Gaining re-entry is possible: back stuff up, change your password, do a diff, copy the pattern into the admin record with a hex editor, log in as admin with the new password, then change the admin password. ...this worked at least twice for me.)

    - Again, there MUST be a recovery mechanism. I love RCS, SCCS, and PVCS for their file-related mechanisms. Why? I've had SCM systems go down hard when the database got munged. Yes, you can recover from a backup, but a lot of work gets lost. With an open file format, you can at least hand fix localized problems.

    - That said, good version control systems should allow you to check in collections of files as atomic units, move files and directories, and operate on projects as a whole. Anytime I have to twiddle with a repository, thereby breaking past history, something is seriously wrong with the VCS system model.

    - Good systems must have an IMPORT / EXPORT capability that PRESERVES HISTORY. The less I feel locked into a solution, the more likely I'll be to try it out. Porting between systems is usually painful.

    - SCM systems must conform to how the CM manager wants to run things, not the other way around. Let's face it, users can and will make mistakes, and that's okay. Mistakes should be fixable. I'll never use StarTeam because it was too easy for users to accidentally check in branches that couldn't be removed. Tech support argued that version control should reflect the history of the product, where I maintain (and still do) that it should reflect the intended history. If I want to include user errors, that should be my policy, not the tool's. My users should be able to reflect upon the project history and know why things changed. Period. You don't use a hack to undo a mistake.

    - Branching notation should be clear and to the point. CVS has its magic numbers, StarTeam has god-awful views. Let me choose the numbering scheme; don't play games with odd/even numbering. Version numbers should not be overloaded by the product to carry additional meta-information.

    - A good SCM tool should remember tag history. Suppose I accidentally move or delete a tag, and now I want to put it back. Suppose I want to see where it's been. This case is rare, but anyone who's had a user twiddle with the wrong tags feels this pain sharply and deeply.

    - More ADMINISTRATIVE control. My big beef with CVS is when I have to twiddle with the repository structures and permissions directly to accomplish what I want done. No. No. No. There should be a tool (that audits changes) for standard operations.

    - An admin should have the ability to define, enforce, and audit user permissions that should be applied cross dimensionally against repository, commands, and elements within the repository.

    - Data should be stored in a manner that can be parsed by custom tools. It allows me to write extensions and automation.

    - Nothing should be possible in a GUI that is not possible from the command line. The inverse holds true as well. Everything should be automation-friendly. Early versions of PVCS pissed me off for this reason. As an SCM manager, I've used both, and I'll take a command line over a GUI any day. My novice users want a GUI; my advanced ones usually revert to command lines (and integrate them with their editors).

    - There must be readable 2-way and 3-way diffs.

    - A good SCM tool will be able to produce reports, or at least make it possible to export information that can produce reports.

    - A good SCM tool should know how to handle binary files efficiently, rather than just storing the whole copy.

    - A good SCM system should not put a limitation on comments.

    - A good version control system should not try to "do it all" (CCC/Harvest) and do none of it well. When GUIs pop up off screen, or you have to artificially create packages for simple files, something's wrong. Which leads into...

    SCM systems should operate the way the users of that system do.

    There is a BIG difference between how commercial houses run things versus OpenSource projects.

    Commercial groups usually have a smaller set of developers, who are known in advance, and commonly use the locking model. OpenSource projects tend to use concurrency a lot more, and operate by applying diffs. (Yes, I know, exceptions are out there.)

    Thus, some tools that feel more natural in some environments get quickly rejected in others. I've yet to see someone produce a readable guide about version control abstracted at a high level bringing all the terminology together. (Incidentally, I'm about to release one; email me for a draft.)

    The overall problem tends to be that people look on the side of the box for features, rather than asking if the features are even applicable for what they're doing.

    Worse yet, proper SCM often gets sidestepped in the commercial world. Ask: Do you want branching? You get: is it a feature? ...yes! Now ask: Do you know when it's appropriate to branch, how to do the branch efficiently, how to graft branches back to the root, or how to physically do it... and you find out this is where a lot of bad CM happens. It isn't fun to inherit a screwed-up repository.

    The most common downfall of SCM, as I've seen it in the commercial world, is a failure of those running it (quite often over-tasked infrastructure people) to understand the product being built with the tool, a failure by team leads to communicate repository structure, a failure by management who use the SCM tool as a substitute for communication, and a failure by developers who don't know how to use the tool or when to use the appropriate features.
    • Branching notation should be clear and to the point. CVS has it's magic numbers, StarTeam has god awful views. Let me choose the numbering scheme, don't play games with odd/even numbering. Version numbers should not be overloaded to carry additional meta-information by the product.

      This is incorrect. The CVS numbers are internal. If you care about them at all, you are doing something wrong. Your baselines and branches are identified by tags. If you understand how the CVS numbers work, they are actually quite logical; there are reasons why they work the way they do. It's not playing ``games''.

      Version numbers *are* meta-information, so it's meaningless to talk about them being overloaded with meta-information. They are not intended to correspond to your product release numbers, which are usually the fabrications of a marketing department anyway, e.g. Solaris 7 being the follow-up to 2.6. Do you think the Sun guys bumped up their version control system to use the number 7? ;)

      • Excellent point; poorly worded on my part. In general, your statement ought to be true about all version number schemes inside a repository. (Cederqvist, section 4.3)

        Labels are our friends. Though, I've actually heard people using phrases like "We're modifying the branch today." That doesn't convey a lot of meaning.

        However, based on real world practices, people tend to use revision numbers as version numbers. They shouldn't. And there is a difference between the two. Your point illustrates that well; thank you for raising it. I'd like to think Sun didn't tweak their internal revision numbers to mirror product version numbers.

        Where I was going was: if numbers are going to be used to convey repository structure, it should be hack-free. If revision numbers are going to be used to convey information, the user should have control over what gets used. The reserved use (which personally I like) in CVS's case came from [Cederqvist section 13]. PVCS is pretty darn good about giving the right level of control to those who want to twiddle numbers directly.

        I'm simply saying it's up to the person running the repository to decide -- ideally they should have a clue of what works well.
    • There is one thing that you did not mention and that is important for many OpenSource projects: weakly connected clients.

      Many contributors to OpenSource projects are doing their development at home. Although some of them have a cable or xDSL connection, many others are still using a slow modem and an expensive telephone line (especially outside the US). A good SCM system for OpenSource projects should therefore support these weakly connected users as well as possible.

      CVS is far from ideal, but not too bad from that point of view because you can work for a while in your local tree and then update/commit without transmitting too much data on the network. Systems like Rational ClearCase require a specific setup and different clients (Attaché) for that.

      With CVS and some other systems, it is even possible for someone who has no direct connection to the Internet to get/checkout only a small part of the tree (or individual files) on some computer that is connected to the Internet (at work or at school) and then take the files at home, modify them and upload the changes. This is possible because you can easily remember or write down the names (and optionally the versions) of the files that you want to fetch with CVS. I think that the proposed "arch" system is worse than CVS in this case.

  • RCS to CVS to arch, same story, a decade later. However, arch is far more competitively priced. ;-)
  • Uggghhh.... [OT] (Score:4, Insightful)

    by ryanvm ( 247662 ) on Tuesday February 05, 2002 @08:40PM (#2959294)
    I am getting soooo tired of this notion:
    Arch also poses its own answer to the 'Linus Doesn't Scale' problem.

    Look people, the "Linus doesn't scale" issue is NOT something that can be solved by replacing the use of 'patch'. Putting the Linux kernel on CVS (or Arch or whatever) would just allow people to commit stupid changes.

    The reason Linus doesn't scale is not because he doesn't have enough time to run 'patch'. It's because changes to the kernel MUST be approved.
    • Re:Uggghhh.... [OT] (Score:2, Informative)

      by cduffy ( 652 )
      What it would do is force the downstream forks to stay sync'd with Linus's version, and thus make merging between them easier. Yes, the code still needs to be reviewed -- but that's not the only task involved in maintaining a tree.
    • > Putting the Linux kernel on CVS (or Arch or whatever) would just allow people to commit stupid changes.

      Not true. Putting the Linux kernel on publicly writable CVS would allow that, but no one is seriously suggesting that.
  • If it's using FTP as its transport, how does it handle simultaneous writes to common files?

    I thought you needed some sort of atomic test/exchange method to ensure consistency in such situations?
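FTP has no test-and-set, but it does have operations that most servers execute atomically: directory creation and rename. A common idiom (this is the generic pattern, not necessarily arch's exact scheme; the `++lock` name is made up) is to take a lock with an exclusive create and publish files only by rename. Sketched here with local commands standing in for FTP's MKD and RNFR/RNTO:

```shell
# Sketch of the lock-and-rename idiom over a plain filesystem.
rm -rf /tmp/lockdemo && mkdir -p /tmp/lockdemo/archive && cd /tmp/lockdemo
# mkdir is atomic: of two writers racing for the lock, exactly one wins.
if mkdir archive/++lock 2>/dev/null; then
    echo 'revision data' > archive/patch-1.tmp   # upload under a temp name
    mv archive/patch-1.tmp archive/patch-1       # publish in one atomic step
    rmdir archive/++lock                         # release the lock
fi
ls archive     # → patch-1
```

Because the final name only ever appears via rename, a reader can never see a half-written file, even without any server-side coordination beyond the rename itself.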

  • by srussell ( 39342 ) on Tuesday February 05, 2002 @11:39PM (#2959920) Homepage Journal
    I'm not addressing Subversion vs. Arch, but rather Tom's evaluation of Subversion, which isn't entirely accurate.

    I'd also like to say, up front, to the Anonymous poster who asked:

    Anyone know a good system for incorporating source control with a database? Oracle and Postgres would do.

    Subversion does. The backend it currently uses is Berkeley DB, but the backend is pluggable. After version 1.0 comes out, expect to see a backend for one of the SQL databases pop up.

    Now, on to Tom's comparison to Subversion. Caveat: I am not a Subversion guru. I lurk in the developer mailing list, and I use Subversion myself. Therefore, I may make mistakes about details, but I'm fairly certain I won't provide completely bogus information. I got some reviews on this post from the Subversion dev list, including some comments from Tom, but any mistakes in here are my own, and they're copyrighted mistakes, dammit.

    I'm not going to quote whole sections; just enough for context.
    1. Smart Servers vs. Smart Clients. Subversion clients are also smart, although perhaps not as smart as Arch. Diffs travel in both directions, so a minimum of network traffic is used. Many Subversion operations (status, diffs against the last revision, etc.) are purely client-side operations.
    2. Trees in a Database vs. Trees in a File System This is misleading. You *can* get stuff out of the Subversion database with the standard BDB tools, so Subversion isn't required. Also, because Subversion is based on WebDAV, access to the database through a web server is a freebie; also, Subversion is very Windows-friendly, from many points of view, which should help its adoption in a corporate setting. Subversion only stores the differences between two versions of a file or directory, which is space-efficient. The advantage to being able to access a filesystem-based repository of diffs is arguable.
    3. Centralized Control vs. Open Source Best Practices In practical application, there is no advantage to the ARCH system over Subversion. Subversion allows per-file/directory sourcing, so you could create a project that includes sources from any number of different repositories. (This code is not currently working in Subversion.)
    These are simple mistakes. There is also one statement that is wrong: "arch is better able to recover from server disasters". The argument was that, because arch is a dumb FS, it is easily mirrored. The implication is that databases aren't easily mirrored. BDB is just as easily mirrored, and most other databases are easily replicated.

    Other comments pointed out were:

    • Subversion does not require Apache. It works over a local filesystem just fine. If you want network access, you need Apache.
    • Subversion has all of the strengths of Apache. You therefore get Apache access control (well defined and understood), SSL, client and server certificates, and interoperability with other WebDAV clients, among other things.
    • With Subversion, you have both client side and server side hooks, as well as smart diffs.
    • Arch has both revision libraries and repositories. The comparison document doesn't differentiate between them. In some cases, the comparisons made aren't meaningful. Revision libraries, for example "... also have to be created and maintained by the user. So comparing them to accessing past revisions through normal means in subversion is not a fair, or even really meaningful, comparison." (Daniel Berlin).
    • When comparing Arch's repositories to Subversion's there is no speed advantage. Arch's storage is either diffy (storing only differences), in which case it is not easily browsed and is no faster (at best) than Subversion; or the storage isn't diffy, in which case it isn't efficiently stored (imagine multiple copies of each file for each revision).
    • Subversion's choice of BDB as a backend was not accidental. Some of the things Subversion got from using BDB are: hot backup and replication, all kinds of existing tools that know about BDB databases (e.g. Python or Perl bindings), a body of "community" knowledge, etc. (Greg Stein)
    I've left out vaporware features, such as the future SQL backend of Subversion 2.0.
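The "diffy" storage debated above can be sketched in a few lines (revision and file names are made up): keep the first revision in full, store later revisions only as deltas, and reconstruct any revision by replaying the chain with patch. This also makes concrete why diffy storage trades browsability for space.

```shell
# Sketch: delta ("diffy") storage and reconstruction with diff/patch.
rm -rf /tmp/diffydemo && mkdir -p /tmp/diffydemo/store && cd /tmp/diffydemo
printf 'alpha\nbeta\n' > store/rev1            # revision 1, stored in full
printf 'alpha\nbeta\ngamma\n' > work           # the file after an edit
diff -u store/rev1 work > store/rev2.diff || true   # store only the delta
rm work
# To browse revision 2, the delta chain must be replayed; nothing in
# the store holds the full text of any revision but the first.
cp store/rev1 rebuilt
patch -s rebuilt < store/rev2.diff
cat rebuilt     # now contains alpha, beta, gamma
```

A real system would also need to cope with long chains (e.g. by taking periodic full snapshots), which is one reason both arch's revision libraries and Subversion's database exist.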
  • Stop complaining (Score:3, Insightful)

    by chewy ( 38468 ) on Wednesday February 06, 2002 @03:00AM (#2960408) Homepage
    Hello there. I'm reading these Slashdot comments and slowly realizing that people are hopelessly missing the point of our software community. People find stuff to complain about, even the sh dependency! Well, AFAIK, the reason I love GPL software is because, if there is something bothering me about it, I can CHANGE it. That's right, boys and girls, you can take those sh scripts and write some proper C code from them. From the original developer's point of view, he just wanted a system up *that worked* ASAP, using whichever tools he could to get it that way. Now that it's in the wild and known, it can be refined and perfected and fixed and whatever else, and we can have a beautiful piece of software like CVS or Linux or Apache in the end (not that most of them will ever meet their end :)

    NOW is the time to stop complaining, get those hands dirty, take those things that bother you about the very first implementation, and go make some code. I see those sh scripts as nothing but prototyping code, and turning prototype code into C code is one of the easiest tasks a programmer can ever get to do (since the THINKING has already been done for you).

    So please everybody, take this brilliant idea and let's make ourselves another open-source success.

  • I didn't see anyone else mention this... in all the discussion of whether rsync/FTP/whatever is the best protocol for versioning software, I didn't see anyone mention that WebDAV (distributed authoring and versioning; HTTP extensions managed by the W3C) is a protocol intended for this use (and it can run over SSL).

    The DeltaV proposal (the versioning bits of WebDAV, which were chopped from the original WebDAV working group's charter in order to get a document out the door) became a proposed standard in October 2001. I have seen people semi-seriously suggest that WebDAV clients and servers could replace CVS. If you're at work right now and have Office 2000, you have a (limited) WebDAV client right there.

    Going back to arch, it scares the willies out of me that we might entrust all of our trees to FTP... but OTOH it is clear that arch can be extended to support other protocols, like the one above. Anything that can be made to appear as a r/w POSIX filesystem can be used.

    I wish this was usable on windows though...vss sucks soooooooooo badly...
  • It has failed to be secure enough over the years, so it is never run on any system I own. I don't even install the client on my machines anymore, in order to discourage its use.

    If this system used rsync or at the very least scp then I would be much more willing to look at it. I hope someone modifies it to be secure at some point.

    Until then, I'll keep on using CVS over ssh.

To do two things at once is to do neither. -- Publilius Syrus