Subversion Hits Alpha

C. Michael Pilato writes: "This was overheard while eavesdropping on announce@subversion.tigris.org: Gentle coders, The ever-growing cadre of Subversion developers is proud to announce the release of Subversion 'Alpha' (0.14.0). Since we became self-hosting eleven months ago, we've gone through ten milestones. This milestone, however, is the one we've always been working towards; it's a freeze on major features for our 1.0 release. From here out, it's mostly bug-fixing. We hope this announcement will lead to more widespread testing; we welcome people to try Subversion and report their experiences on our development list and issue tracker." Subversion, a source control system akin to CVS, has been in the works for a couple of years now.
  • Why? (Score:4, Funny)

    by Anonymous Coward on Wednesday July 24, 2002 @09:27AM (#3943816)
    Just use some of the perfectly good source control programs out there. Visual Basic Source-Safe comes to mind.
    • Re:Why? (Score:2, Interesting)

      by Peyna ( 14792 )
      Isn't Source-Safe expensive?
      • Isn't Source-Safe expensive?

        Yes. Insanely expensive. It is pretty good but not really close to being worth $500 per seat (when purchased separately from Visual Studio).

        • If by 'pretty good' you mean 'total piece of shit', I agree. Most of the Visual Studio suite is pretty decent, but SourceSafe comes off as exactly what it is: a package MS bought for the express purpose of saying Visual Studio has SCM, and then did next to nothing with to make it work better. As an admin in a shop of people begging to leave SourceSafe for something like Rational, or even CVS, I know how it is... Subversion may offer the core needed to build a replacement that keeps both the developers and the businessmen happy...
          • For small (less than 200,000 lines of code) projects it's pretty good. You should know the limits - e.g. the database shouldn't exceed 1GB - but overall the tool works seamlessly. Here we have over 20 projects in several databases and haven't found any problem with it since we started using it back in 1999. (Yes, we check for errors ;) ). For its small price-tag it has a lot of features and a nice GUI, which supports visual conflict resolving, drag/drop sharing/branching, etc.

            You shouldn't use it for large projects; when people use it for large projects anyway, it becomes cumbersome and slow.

            So your 'it's a total piece of shit' is way off base - or you're one of those people who cram 1.5-million-line projects into SourceSafe and then start complaining.
        • Re:Why? (Score:2, Interesting)

          Only $500/seat? Good lord how I wish our versioning software was that cheap. Right now I'm writing a Purchase Request for 4 more Rational ClearCase licenses...

          When the license + 1 year of support for each is all added up, we'll be cutting a check to the tune of about $16,000.

          I've asked the CM people before why we're not using something cheap (free) like CVS (as the sysadmin, I don't get to make the decisions, I just get to make it work once the stuff is purchased), and they casually said "Hell, I don't know, that's just what we're stuck with."

          I guess "Hell, I don't know, that's just what we're stuck with" is an appropriate attitude when you've got oodles and oodles of taxpayer dollars to spend...

          I know, I know. I'm going to hell.
    • I agree that there are plenty of good source control programs. SourceSafe is NOT one of them; I have to use it every now and then, so I know. The idea is that you control the system, not the other way around. Besides, it isn't free, and there are people who would like a free source control system - imagine that.
      Good source control systems include BitKeeper and ClearCase. Neither of those is free/open source.
    • Here's a list ...

      • DNS: Only a handful of internal Win2K DNS/AD servers, 0 external, numerous Active Directory issues (let alone DNS changes done "on-the-fly" that take them down weekly)
      • Firewall/Proxy: Virtually 0 ISA Servers
      • SCM: No team larger than 50 using Visual Source Safe
      • Not quite:

        There are lots and lots of DNS/AD servers at MS, although not as many running W2k anymore (they're running W2k + 1 mostly)

        And, unfortunately, we have LOTS of boxes running ISA Server. On numerous occasions I've emailed the relevant admins saying "please let me set up 1 squid box for you so I don't have to put up with this crap anymore". It's gotten better, but man, dogfooding is painful sometimes.

        I can think of 1 team larger than 50 using VSS. There's an internal-only project spread over several teams that has used the same source base for 4+ years and is using VSS. There are easily over 50 people who've made checkins. That's probably different from 50 active developers.

        You're generally right though about VSS - it's not being used internally anymore for large projects. It's an adequate SCM for small projects or groups of small projects. The project I mentioned has about 5GB under VSS control and it works reasonably well, but I probably wouldn't start with VSS if I were starting from scratch.

    • I suspect that the anonymous coward is joking, but please, if you care about your source code at all, do not use Visual SourceSafe [highprogrammer.com]. Visual SourceSafe is awful software that plagued my existence for five years. If you are using Visual SourceSafe, or are considering it, please see this page on Visual SourceSafe's faults [highprogrammer.com].
  • In 20 words or less. (Score:2, Interesting)

    by Anonymous Coward
    Tell me: what has Subversion got that CVS hasn't? (No, this is not a flame; I'd like to know.)

    • Four words (Score:1, Funny)

      by Anonymous Coward
      A way cooler name.
    • by Westley ( 99238 ) on Wednesday July 24, 2002 @09:44AM (#3943927) Homepage
      Surely the easiest way of finding out is to visit the website. From the front page:

      • Directories, renames and file meta-data are versioned.
      • Commits are truly atomic.
      • Apache as network server, WebDAV/DeltaV for protocol.
      • Branching and tagging are cheap (constant time) operations.
      • Natively client/server, layered library design.
      • Client/server protocol sends diffs in both directions.
      • Costs are proportional to change size, not data size.
      • Efficient handling of binary files.
      • Parseable output.

      For more details, see the website.

      Jon
    • Versioning of filenames (and other metadata) so you can svn mv something, cheap branching/tagging, binary diffs, and atomic commits.
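
      As a rough sketch of what a couple of those buy you (file names hypothetical; alpha-era syntax may differ slightly):

          svn mv foo.c bar.c                  # the rename itself is versioned
          svn commit -m "rename foo to bar"   # one atomic revision: all or nothing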
    • by ebuck ( 585470 )
      It's been awhile since I looked at it, but as I recall:

      In 20 words or less:

      Subversion is CVS on steroids without being tied down to the history of CVS.

      And some of the reasons why are:

      Subversion handles directories.
      CVS does not.
      Subversion handles file permissions.
      CVS does not.
      Subversion makes atomic commits (and rolls back prior to the commit if necessary).
      CVS cannot; it will stop at the last file processed (and you have to clean it up by hand).
      Subversion uses HTTP/WebDAV (both reliable protocols).
      CVS uses its own protocols, which might be less reliable.
      Subversion performs more operations in constant time.
      CVS takes longer for the same operations, though it needn't.
      Subversion is natively client-server.
      CVS had client-server support added on after the core code was developed, creating some odd aspects of operation.
      Subversion transmits deltas, so costs are proportional to change size.
      CVS costs are (I believe, but don't know) proportional to project or file size.

      --- Fabrication is the stuff filling the holes of memory.
      • Subversion uses HTTP/WebDAV (both reliable protocols). CVS uses its own protocols which might be less reliable.

        CVS uses ssh which is much more reliable and secure than yet-another-protocol-over-HTTP.

        • by slamb ( 119285 )
          CVS uses ssh which is much more reliable and secure than yet-another-protocol-over-HTTP.

          CVS uses [kgnp]server (Kerberos, GSSAPI, NTLM, Password) as its communication protocol. It's not even encrypted.

          The cvs-over-[rs]sh thing is a kludge, an extension of the local repository access. It requires each person have a Unix shell account with write access to the repository. You can't do much security-wise with that. Since CVS stores each file independently, you can at least say they don't have access to a module but you can't say they don't have access to a certain branch. And you certainly can't say something like "they can't delete/modify existing revisions".

          HTTP/WebDAV/DeltaV is nice for a few reasons:

          • the protocols were already made. HTTP, TLS, WebDAV, and DeltaV all existed beforehand. Authentication and the like were already settled. It saves work designing protocols.
          • Support from existing software. You can mount a Subversion repository read-only with no special software in most operating systems ("Web folders" under Win2K, for example. Try it: http://svn.collab.net/svn/repos/trunk - and see the sketch after this list). Eventually, even write access with automatic versioning. (Which means the log messages will be pretty worthless, but it has some of the advantages of revision control and is completely transparent.) DeltaV-supporting software will probably start to come out pretty soon as well.
          • Existing code. Apache has a pretty solid server architecture. It divvies up the requests, handles TLS, does authentication, logging, etc. mod_dav was already written as well. mod_dav_svn is a pretty small part of the whole.
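
          A plain client speaks the same protocol; as a minimal sketch using the public repository URL above (alpha-era client syntax may differ slightly), a network checkout is just an HTTP URL:

              svn checkout http://svn.collab.net/svn/repos/trunk svn-trunk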
          • by ftobin ( 48814 )

            CVS uses [kgnp]server (Kerberos, GSSAPI, NTLM, Password) as its communication protocol. It's not even encrypted.

            No one in their right mind uses this.

            The cvs-over-[rs]sh thing is a kludge, an extension of the local repository access.

            It's a 'kludge' that works extremely well, and fits well into the unix philosophy.

            It requires each person have a Unix shell account with write access to the repository. You can't do much security-wise with that.

            False. It requires that they have an account on the system, but not necessarily one that allows them to execute a shell (just like SourceForge has it set up).

            Since CVS stores each file independently, you can at least say they don't have access to a module but you can't say they don't have access to a certain branch. And you certainly can't say something like "they can't delete/modify existing revisions".

            True. But this has little to do with the transport protocol.

            • > > CVS uses [kgnp]server (Kerberos, GSSAPI, NTLM, Password) as its communication protocol. It's not even encrypted.

              > Noone in their right minds uses this.

              Right, no one uses its authentication for anything important. CVS doesn't have a decent protocol. For extra annoyance, it does get used for anonymous access, since it is not good to have a Unix account for anonymous people. So you need two different ways of accessing CVS.

              > > The cvs-over-[rs]sh thing is a kludge, an extension of the local repository access.

              > It's a 'kludge' that works extremely well, and fits well into the unix philosophy.

              No, it does not work well. There's not a lot of Java code available to talk ssh, for example. It's not good for cross-platform stuff.

              Also, ssh handshakes are time-consuming. This is important because cvs reconnects for each operation. In contrast, HTTP has well-defined and well-known standards for keepalive and pipelining.

              > > It requires each person have a Unix shell account with write access to the repository. You can't do much security-wise with that.

              > False. It requires that they have an account on the system, but not necessarily one that allows them to execute a shell (just like SourceForge has it set up).

              I'm afraid you have me at a disadvantage - I've not seen SourceForge's setup. I'm not a committer on any projects there. However, ssh requires a shell account - it might be a restricted shell of some sort, but they need a shell.

              Also, the manual certainly describes no better way. If you are able to do better, please patch it. I quote:

              It is possible to grant read-only repository access to people using the password-authenticated server (*note Password authenticated::). (The other access methods do not have explicit support for read-only users because those methods all assume login access to the repository machine anyway, and therefore the user can do whatever local file permissions allow her to do.)

              There are a lot of things not possible to do with Unix file permissions: saying things can be added but not modified (you can have setgid directories, but not setuid ones); one group that can read/write, another that can only read, and everyone else nothing; permissions within files (short of splitting them into more files, which makes Subversion's ACID semantics difficult). All of these things are possible with Subversion - you just write a Perl script that inspects the transaction and allows or denies it. Please take a look at commit-access-control.pl [collab.net] for an example.
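
              Such a hook is just a program run before the commit is finalized. A minimal sketch in shell (the policy, branch path, and user name are hypothetical, and the svnlook invocation follows later Subversion releases - the alpha-era interface may differ):

                  #!/bin/sh
                  # pre-commit hook sketch: Subversion passes the repository path and a
                  # transaction name; exiting non-zero rejects the commit.
                  REPOS="$1"
                  TXN="$2"
                  AUTHOR=`svnlook author -t "$TXN" "$REPOS"`
                  # hypothetical policy: only 'releasemgr' may touch branches/release/
                  if svnlook changed -t "$TXN" "$REPOS" | grep 'branches/release/' >/dev/null \
                     && [ "$AUTHOR" != "releasemgr" ]; then
                      echo "Only releasemgr may modify branches/release" >&2
                      exit 1
                  fi
                  exit 0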

              > True. But this has little to do with the transport protocol.

              You need a smart server to accomplish this. CVS's :ext: method of remote access (rerunning the command on the other machine through [rs]sh) doesn't qualify. Arch's modify-via-ftp doesn't either. Those can't ever do anything but the Unix file permission way, but with a server in between, it can decide what is allowed or denied.

              You notably didn't quote/comment on my points about why HTTP/WebDAV/DeltaV was a good choice. They clearly needed a protocol of some sort. I think using the existing standards was a good choice. Why would something else be better? Why would you not use HTTP? You said:

              > CVS uses ssh which is much more reliable and secure than yet-another-protocol-over-HTTP.

              Do you have anything to back that up? How is HTTP/TLS/WebDAV/DeltaV unreliable or insecure?

              If you're that dead set against that protocol, write a new one. It already has the abstraction - both ra_local and ra_dav are supported. Write a new ra_XXX if you so desire. And a new server to replace mod_dav_svn. Of course, no one will use it - the DAV stuff works well. But maybe you'd feel better.

              • There are a lot of things not possible to do with Unix file permissions.

                You assume a basic unix filesystem, not something like AFS, which has rich, powerful (though not sub-file) ACL support.

                However, ssh requires a shell account - it might be a restricted shell of some sort, but they need a shell.

                SourceForge only lets you execute cvs when you log in (that is, you cannot execute any other program, including any shell). Furthermore, you really don't need a line in /etc/passwd, if that is your concept of a 'shell account'. SourceForge uses an LDAP server, I think, for accounts. So, given the absence of a line in /etc/passwd, and only the ability to execute 'cvs', I don't quite see how this qualifies as a 'shell account'.

                You notably didn't quote/comment on my points about why HTTP/WebDAV/DeltaV was a good choice.

                I did not comment because I had nothing to argue against in what you said; they were all quite true statements. But the benefits you stated have no value to me.

                How is HTTP/TLS/WebDAV/DeltaV unreliable or insecure?

                The cryptography for ssh is much more secure than the examples you've given. The authentication mechanisms are more powerful, and there is agent forwarding; both are extremely important.

                And a new server to replace mod_dav_svn.

                I highly dislike systems that write their own servers where none is needed (a la CVS with [rs]sh; ssh handles the network).

          • CVS uses [kgnp]server (Kerberos, GSSAPI, NTLM, Password) as its communication protocol. It's not even encrypted.
            Oh, yes, very "informative".

            From cvs.info (Direct connection with GSSAPI):

            The data transmitted is _not_ encrypted by default. Encryption support must be compiled into both the client and the server; use the `--enable-encrypt' configure option to turn it on. You must then use the `-x' global option to request encryption.
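
            In other words (a sketch; the host and module names are hypothetical), encryption is opt-in at both build time and run time:

                # server and client each built with encryption support:
                ./configure --enable-encrypt && make && make install
                # then every client has to request encryption explicitly:
                cvs -x -d :gserver:cvs.example.com:/cvsroot checkout mymodule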

  • Does anyone have a test set to try subversion (or cvs) out with, lying around at the back of a directory somewhere?

    Seriously, though, how, other than using it for real, might one test subversion? And how would you recover from the bugs that will be in there without devoting your life to it for a few weeks?

    Just wondering.
    Graham
    • by Anonymous Coward
      CVS repositories are RCS-style text and diff files, so you can make a weekly backup of them (or more often - you do make backups, don't you?), and if something gets funky, you can extract the info from the ,v files.
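
      For instance (a sketch; the file name and revision are hypothetical), the stock RCS tools can pull any stored revision straight out of a backed-up ,v file:

          co -p1.4 foo.c,v > foo.c-1.4   # print revision 1.4 to a file
          rlog foo.c,v                   # list the revision history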
      • Yes, of course I make backups, but when I'm testing something that's in alpha with real work, I either

        a) hope nothing goes wrong (ROFL)
        b) backup my personal source tree immediately before each subversion commit step

        Because otherwise you can guarantee that when I want to roll back 2 or 3 steps to dig through an issue with my code, I'll hit a bug in subversion.

        Now do you get what I was asking? And why?
        Maybe next time I'll use a few more words...
        • Something that might give you more confidence is that as subversion is self-hosting, much of what you would test on "real code" is being done every day by the subversion team, and has been for months. Branching, merging, rollbacks, etc would have to be pretty rock solid by now, otherwise the SVN team wouldn't be able to self-host effectively.

          But extreme pessimism for the first couple of "checkout-edit-compile-test-release-commit" cycles wouldn't hurt either - just don't be shocked if you hit issues.

          I think this alpha stage is more about getting a wider audience using SVN and getting feedback on usability, rather than stability and correctness. Things like how noisy it is, how informative it is, whether an oft-repeated three-step process can be reduced to two, or one (or none!), with a little thought for SVN's activities. Stuff that comes up when code is released into the wild.
    • by nthomas ( 10354 ) on Wednesday July 24, 2002 @11:25AM (#3944575)

      Seriously, though, how, other than using it for real, might one test subversion? And how would you recover from the bugs that will be in there without devoting your life to it for a few weeks?

      You raise some serious concerns, let me try and alleviate those fears.

      I've been using Subversion for a few months now (since revision 1210 or so), and let me tell you, there is nothing the dev team values more than the integrity of your data. Nothing. This means that once something has been committed, it will never be lost.

      Does this mean your data is guaranteed with an alpha-quality system? No. But let me tell you, in 6 months I've not seen it happen once. Oh sure, there have been a few times when the DB schema changed, and the format of the dumpfile (more on that in a bit) changed on you, but these things were discussed well in advance on the dev list and not only did you have plenty of opportunity to prep your data for the change, there were ways for you to convert your data after the fact.

      If you are the sort of person who likes to tweak around with the data in your repository (if you come from a CVS background -- you have to be) and gets the heebie-jeebies from having your data stored in a non-accessible format, let me ask you this: do you worry about the fact that you have data stored in Oracle/Postgres/Sybase/MySQL? No? Then why worry about the Subversion repository at all?

      Of course, the dev team has provided you with some nice backup tools; for example, the normal Unix cp command can be used to make hot backups of your repositories, a very cool trick. In addition, there is an svnadmin command with a "dump" feature that allows you to store your repository in a text file; if you worry about Subversion trashing your data, keep regular dumps of your repository.

      Of course, all is not rosy. I would like to see a patchsets feature, and I really miss "cvs annotate" (but "svn blame" is scheduled to be added after the 1.0 release), and of course the db has a tendency to lock up every once in a while (you can fix it easily with db_recover), but by and large these are things I can live with.
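
      The commands behind all of that are pleasantly short - a sketch with hypothetical paths (alpha-era options may shift):

          cp -r /var/svn/repos /backups/repos-copy            # hot backup via plain cp
          svnadmin dump /var/svn/repos > /backups/repos.dump  # portable text dumpfile
          db_recover -h /var/svn/repos/db                     # clear a wedged Berkeley DB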

      After using this system for a while, I've come to one conclusion: it works. And it works better than CVS. Forget the years of bad habits you learned on CVS, once you start using Subversion, you will start to think about SCM systems in a whole new way. Try it out.

      • Of course, the dev team has provided you with some nice backup tools; for example, the normal Unix cp command can be used to make hot backups of your repositories, a very cool trick.

        Please check out hot-backup.py [collab.net]. It doesn't do much more than cp, but it doesn't just do cp repository backupdir. It copies the logfiles last. That's important.
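
        Roughly, the idea is to copy the data files first and the Berkeley DB logfiles last, so the logs are at least as new as the data they describe. A sketch with hypothetical paths:

            mkdir -p /backups/repos/db
            cd /var/svn/repos/db
            for f in `ls | grep -v '^log\.'`; do cp "$f" /backups/repos/db/; done
            cp log.* /backups/repos/db/   # the logfiles go last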

  • by Emrys ( 7536 ) on Wednesday July 24, 2002 @09:40AM (#3943902)
    It would be more accurate to say subversion:CVS::mozilla:netscape4. Subversion is intended to replace CVS, and its core team is made up of many of the people who currently maintain CVS. CVS has really reached the end of its life cycle; it's really showing its age, and it just doesn't make sense to extend it anymore. No, this is not a "CVS is dying" post, but anyone who has adminned it has been frustrated with it from time to time, and Subversion aims to remedy that. They're keeping what's good about CVS and replacing the bad with better things, based on decades of experience with CVS and improvements in the SCM field in general.

    This is intended to be a replacement for CVS. No less, and no more (for the "more", see some of the more experimental SCM stuff like Tom Lord's arch).
  • Currently working on a project that relies completely on ClearCase(tm) configuration management, I am glad if anyone in the world hacks even a single line of code to improve the way we work with source control.
    • Currently working on a project completely relying on ClearCase(tm) configuration management...

      I'd be interested to hear what problems you're having. I used ClearCase in one of my jobs, and thought it was really rather good.

      Cheers,
      Ian

      • ClearCase(tm) is a pretty neat tool, no doubt.
        Working on a medium to large project with it is a good choice.
        But if you're out to get a huge project going, it gets complicated. I'm talking about multi-site development. It demands so much attention that it eats away a pretty large amount of time.
        Not to mention the terror an inexperienced or badly trained engineer can cause.
        • I'm talking about multi-site development.

          Ah yes, I remember now. We too had to use it for multi-site development, and the speed was awful. So we got into some sort of remote syncing, and that turned into a nightmare too...

          Yes. Now I recognise what you mean.

          Cheers,
          Ian

          • I haven't used ClearCase, but I'm wondering if something like arch or BitKeeper with native support for distributed trees wouldn't help some of these central-server merging problems?

            curious,
            -l
        • My little group (4 programmers) had been using CVS for years, and another group (10 programmers) installed ClearCase, and management decided our CVS group should convert. There were two CC admins; one wrote a piss poor install script. When it started deleting files it had no business even looking at, I aborted it, cleaned things up (*I* kept backups :-), told my boss, and he backed me up -- we stayed on CVS. The other CC admin was a joke, and twice (!) deleted the CC repository by mistake. Other times I don't know what he did, or if it was just CC taking a dive, but they were down all day getting it straightened out. Most of that group were envious that my group stayed on CVS.

          I have never worked on huge projects, never more than a dozen programmers at most, and CVS has always been good enough. I will certainly switch to subversion, or maybe one of the others, because I like a lot of the improvements, but CC has always seemed like bloated overkill.
    • What's so bad about ClearCase? Well, yeah, the learning curve is steep, and licenses are expensive, and it is not free, but I've always found it to be a powerful tool (albeit easily abused) for advanced source code control, particularly when dealing with multiple branches/forks of a common code base.
      • That's obvious!! (Score:5, Informative)

        by Outland Traveller ( 12138 ) on Wednesday July 24, 2002 @10:15AM (#3944112)
        What is so bad about clearcase? From my point of view what *isn't* bad about clearcase is an easier question. Here's my hot list:

        1. Needs kernel modifications in order to work. PROFOUNDLY STUPID. It's always an adventure trying to get clearcase to work with any recent linux kernel, and forget trying to keep current with kernel security patches.

        2. "Filesystem" style sharing does not scale well outside of a high speed, local network. If your developers are distributed around the internet you need to use clearcase's horrible hack "snapshot" views, or shell out ridiculous amounts of money and complexity to implement multisite. It's very difficult from a performance and a security standpoint to use clearcase over a low-speed VPN.

        3. Good GUI administration tools are Windows-only. While Rational could have created cross-platform admin tools when they ported their product to Windows, they didn't. Instead they rewrote their admin tools to be Windows-only, added many new features, and now the Windows tools are 200X more usable than their unix equivalents. When I pressed irRational about when the unix tools would be similarly improved, they gave the patronizing answer that unix customers don't want good admin tools. Sounds like a self-fulfilling prophecy to me. The unix GUI tools are so awful that it is easier to use the command line! Thus, irRational ensures that unix shops with clearcase will always face a brick-wall learning curve.

        4. Vobs don't scale well, especially when you version large binary files, like media. You have to manually tune how many vobs to use and how large to make them.

        5. Relies on automounting and persistent filesystem connections for day-to-day work. This design is inferior to a more traditional client-server TCP/IP app in terms of both performance and robustness.

        6. Lack of commitment to the unix platform. irRational has stopped future development on their unix bug tracking software (DDTS) in favor of an MS Access-backed solution. A large majority of new clearcase features are windows-only. You would think that Rational would be a cross-platform company, but they are not. They make platform-specific solutions for multiple platforms, most of them purchased from some other company and poorly maintained.

        7. Extremely high maintenance costs, not just in the licensing but in the dedicated personnel needed to throw their careers away doing nothing but babysitting the vobs and views.

        If you're buying a proprietary SCM, the last thing you should consider is irRational clearcase. Try BitKeeper or Perforce and you'll be much happier.
        • Re:That's obvious!! (Score:2, Interesting)

          by Feign Ram ( 114284 )
          Kee-rect! The VFS used by ClearCase, while providing a lot of its cool features, is also responsible for many of its drawbacks, including some of the ones you mention.

          Scalability is the biggest downstream issue any manager has to consider before choosing ClearCase. It is extremely resource-hungry; I used to work for a small company that deployed a Sun Enterprise server to support ClearCase for just 10-15 developers. Get ready with barrels of memory sticks and drives.

          The steep learning curve is not something that you can wish away in a production environment.

          In spite of all this, I remain fond of ClearCase - it was the first version control/configuration system I used seriously, and I haven't found anything even remotely similar in terms of functionality. I felt like vomiting when I first used CVS after 4 years of ClearCase. Another nice feature is that it integrates nicely with other Rational products like ClearDDTS, the bug tracking system - against a specific ticket you can check the list of related checkins/checkouts.

          It was originally developed by a company called Atria and was later taken over by Rational.

          And don't forget MultiSite. A pig it is - but it provided a lot of value for money, especially to companies that could afford it.
          • "I felt like vomiting when I first used CVS after 4 years of [Clearcase]."

            That's funny ... because I felt like vomiting when I first used Clearcase after 4 years of CVS. Even the vague memories I have of Clearcase make me queasy just to think about ...

            At my current job we use Perforce, which, although it has its own problems, is quite a lot better than either CVS or ClearCase.

            But subversion looks really good ... can't wait to play with it ... (I can't believe I'm excited about version control, if that's not the definition of a geek I don't know what is!) ...
  • Way to go, subversion guys!
    Faster, better overall design, extensible, seamless integration with Apache... still gotta give it a few months to get finalized, but it's looking really good.
  • While we're considering throwing away CVS, let's also throw out make. Check out Scons [scons.org], a replacement for make. I have been using it for a few months on small projects and it's shaping up to be a really great tool.

    Burn your Makefiles!

    • by Anonymous Coward
      Also, check out jam [perforce.com]. It's by the same people that brought you perforce.
    • by smagoun ( 546733 ) on Wednesday July 24, 2002 @10:10AM (#3944079) Homepage
      ...and there's always ant [apache.org], from the folks over at jakarta.apache.org. It's aimed at java development, but can be used with other languages as well.

      Ant has some pretty cool features (and a few misfeatures, sadly), but it's really caught on in Java-land.

      • NAnt [sourceforge.net] has also caught on in the C# world. It's basically a .NET port of ant.

        We are moving our Visual Studio build to NAnt because it works with our NUnit [sourceforge.net] tests and can do everything we need automatically. We have used both nmake (make) and VS for builds in the past, but have had to overcome a few difficulties in large builds that hopefully this will solve. We'll see, but having used ant for my Java projects in the past, I am hopeful. Using nmake was simply too error-prone on large builds - we don't like finding that our build failed because of whitespace problems - and VS was simply too unreliable for us (your own results may vary). We still use vs.net to build our custom installer, but we call it from the command line using NAnt.

        Tying the whole process into a source control app like subversion or cvs (which we currently use) would benefit us. Hopefully someone will write a CruiseControl or its equal for the .NET platform. That would allow the build process to get the latest files from our repository for the build.

      • I never figured out a way to get ant to reliably compile a Java project correctly. It uses the javac dependency engine, which is specified in a broken way (the rules in the specification don't actually guarantee that recompiling the classes it finds as needing recompilation gives the same result as compiling from scratch).

        After using ant for a while at my work, we decided that it was the most common cause of people checking in broken code (which hadn't caused a problem for the author) and incorrect builds, and we switched to make (with a python script to find Java dependencies correctly).

        The other problem with ant was (at the time, at least) that there was no way to avoid running a program when it could be determined to be unnecessary. This made using EJB with a container that required an EJB compiler practically impossible, because we had a 20-minute build cycle even when the ejbc step wasn't necessary.
    • Make is actually quite nice if you use a little trick: have only one Makefile. Have that Makefile include a file from each directory that contains variable definitions. That way, you separate the code from the data, meaning that you don't need to automatically generate the Makefiles (since you don't change them for each project and directory), so the Makefiles can be readable.

      You can also do some really interesting things with conditionals and what amounts to iterative includes. I have a set of Makefiles totalling 315 lines which will accurately do exactly the steps needed to rebuild a program if any source file changes, regardless of which directory the file is in, and can be run from any directory in the tree. If nothing has changed, make says nothing except "'target' is up to date". It wasn't terribly hard to do.
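
      A minimal sketch of the one-Makefile layout described above (directory and file names hypothetical; note that the $(CC) line in a real Makefile must begin with a tab):

          # Makefile (top level)
          MODULES := src src/util
          SRC :=
          include $(patsubst %,%/files.mk,$(MODULES))
          OBJ := $(SRC:.c=.o)
          app: $(OBJ)
          	$(CC) -o $@ $(OBJ)

          # src/files.mk - each directory's fragment only declares data
          SRC += src/main.c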
  • For a moment there - seeing just the headline, I thought that the scientists were planning a revolt on the ISS.

    Thank goodness; the last thing we need is some subverted scientists doing whatever in LEO!

    Gil, just being a peanut gallery member

  • Are there any good resources out there that compare available source control systems? My group is currently stuck on Visual Source Safe, but open to the idea of switching. I tried the trial version of bitkeeper, which looked pretty good though with a somewhat steep learning curve. The license was somewhat confusing as well. Basically anything that gives you visual merges/compares, lets multiple developers work on a project easily, and doesn't require you to run the "Analyse and Fix" tool weekly would be good...
  • arch vs Subversion (Score:4, Informative)

    by Luyseyal ( 3154 ) <swaters@NoSpAM.luy.info> on Wednesday July 24, 2002 @10:25AM (#3944173) Homepage
    Here is a short comparison of why you might want to use arch over Subversion, depending on your project's needs:

    http://regexps.com/src/src/arch/=FAQS/subversion

    -l
    • It's hard to read a document which starts out so fundamentally /wrong/. It claims that using "ordinary files" makes its format somehow more manageable -- what baloney. By using "ordinary files" it's actually choosing to implement its own proprietary database. If you want to manage it, you have to learn its format.

      Subversion's not ahead here; but by using a standard database, at least you can use standard database tools to manage it. You still have to learn, of course.

      I like arch. It's a cool system. But nonsense like that...

      -Billy
      • by millette ( 56354 )
        Seriously, if we had a good enough filesystem, there wouldn't be a need for any db. It's only a question of point of view. You mention using standard database tools to manage subversion. What's so wrong about standard filesystem tools to manage arch then? You know, like cat and grep, and ls even. (Please, don't point out that grep isn't a filesystem tool, please).
        • You mention using standard database tools to manage subversion. What's so wrong about standard filesystem tools to manage arch then?

          Nothing. Nothing's wrong with using standard filesystem tools to manage arch. Nothing's wrong with arch -- or at least I have nothing to criticise.

          What's wrong is arch's idiotic propaganda stating that Subversion is magically inferior because it uses a database rather than a filesystem.

          The one weakness in arch is that it manages the existing filesystem as a database but accepts the use of non-database tools to alter it. You can use grep and so on to maintain it, but you'll certainly destroy it if you don't know exactly what you're doing, since filesystem tools can't possibly know how to maintain a database, while database tools must.

          But this isn't a big deal to me -- after all, you can have a perfectly good database which isn't a version control system, so your database management tools can cause a lot of problems as well when used by an idiot. So again, I have no complaints with arch's approach. Only its marketing.

          -Billy
        • Either make the filesystem DB-oriented, or let people use DBs. The flexibility of a DB on top of a filesystem is that you can keep the OS simple and extend it using whatever DB you want to use, or multiple DBs, or no DB at all.

          The benefits of a DB-based filesystem (the direction of ReiserFS 4) are also great. It may be the case that you'll not need a separate DB (but maybe you will, if you need other features!).

          MS going to a DB filesystem will make our lives more difficult for sure. We just need to wait and see, but that's my guess. And they can make it work, because they can force a single DB filesystem you cannot avoid.

          On the other hand, Linux will probably have options for DB filesystems, but as they will not be widespread (and there may be a lot of incompatible DB filesystems) for a long time, you can't successfully base an app on a specific DB fs being at the core.

          This is my semi-uneducated opinion, of course.
      • The point is that having a local archive of all the versions allows you, for example, to grep through the source for that old snippet of code you need but aren't sure which version it's in. Sure, you can check out all the Subversion versions locally, but arch does it implicitly. I think that's pretty sweet.

        dig around in here to get an idea of what the file tree looks like:

        http://regexps.com/src/src/arch/%7barch%7d/

        -l

        • Why are you replying to me with this? It doesn't seem to fit any of my posts. Perhaps you meant to reply to someone else?

          -Billy
          • I was simply pointing out why arch's "ordinary files" are more manageable, from a code perspective, than databases.

            -l
            • Okay. Why did you compare local version storage versus remote version storage, then? This isn't a database versus files issue; it's a local versus remote issue.

              Anyhow, I certainly agree that arch has this, and other, advantages. (Many others.) None of them make arch "more manageable" or "more usable" or anything else than subversion; they simply give the two different characteristics. Remote files have HUGE advantages as well, in the right environment.

              -Billy
    • by Jerf ( 17166 ) on Wednesday July 24, 2002 @03:02PM (#3946383) Journal
      To whoever wrote that document: speaking as a disinterested third party with some experience, the document does not look like a "short comparison"; it looks like a Subversion bash-fest written by somebody with an axe to grind.

      As a simple example, consider
      In Subversion, a lot of revision control "smarts" are built into the server. In arch, the smarts reside entirely within the clients. Therefore....
      1. arch is very fast
      2. arch is scalable
      3. arch servers are easy to administer
      4. arch is resilient when servers fail
      5. arch is better able to recover from server disasters
      (numbered for my convenience)

      However, this characterization is horribly, obviously lopsided in favor of arch. Putting the smarts on the server is a good thing, because it avoids replicating logic the clients should not need to deal with - and with it the divergence, and therefore bugs, on the client side. The client-side smarts make it harder to write an arch client correctly (witness the profusion of cvs clients).

      1 does not follow; a server can often do things faster than a client, because the client may be slow while the server is an 8GHz quad-Sexium with 8 gigs of RAM.

      2 does not follow as an advantage; there's nothing that says a server-based solution can't scale, they do all the time.

      3 is true, but you're trading off with an entire system (server + clients) that's harder to program correctly because of rampant logic duplications in the clients. It's not an unmitigated advantage in favor of arch, and in fact I read it as an advantage to Subversion.

      4 is a non sequitur; it may be true, but it does not follow from being non-centralized. Same for 5. Again, there's no law that servers must be difficult to recover from failure.

      This is just one example of an attitude that pervades the linked document. In fact, the article pointed to does more to turn me off arch than anything else. If the author is a developer for arch, I'd be concerned at the lack of design experience (it is almost never the case that one solution is better than another in every way) and the inability to fairly evaluate two products (why not show what both are good for?) on display here.
      • The point of distributed archives is replication. It's fundamental to the design. It assumes you live in an often disconnected world and you can sit there with your laptop in the middle of nowhere doing merges and whatnot independent of some remote archive.

        Anyway, that's the point of it. If it doesn't fit your environment, you shouldn't use it. :)

        using your enumeration:

        [1]. The problem is he doesn't define "fast". When I think "slow", I'm thinking of being on the slow end of a pserver/webdav connection in a large project with a lot of concurrent branches in need of merging. Still, I agree, if you can afford the server and all the clients have a decent connection.

        [2]. Well, server-based solutions are expensive on the Linus Torvalds level. Have you seen the merges that guy does? Scary.

        [3]. You're right, except that there are no arch servers (by server, I'm guessing the guy means "main ftp archive" or something). But sometimes it's better to have a smart client. I definitely don't want Apache trying to render web pages for me... sending an image to a dumb browser, no matter how annoying IE vs Mozilla incompatibility gets! :) But yeah, in a highly centralized, probably corporate-style, environment, arch is probably not as good a fit as Subversion.

        [4] & [5] follow from being decentralized because the distributed trees all maintain the history. It obviates the need to specifically keep mirrors of the main archive around since each local archive is already a mirror. At least, that's my understanding of it.

        As far as the author is concerned, I'm guessing he's just trying to advertise his wares. He may have failed due to his poor writing, but I'm guessing his goal was marketing. You might contact him about it... you know: download arch, get a copy of his tree, write up a patch, and publish the archive. hehehehe

        -l

        • I wasn't really interested in giving exhaustively correct answers myself, as I've used neither.

          He may have failed due to his poor writing, but I'm guessing his goal was marketing. You might contact him about it... you know: download arch, get a copy of his tree, write up a patch, and publish the archive. hehehehe

          "Marketing" open source is an interesting issue. I think the issues involved in attracting developers and users to your project are not well explored by the community. There should be a 'definitive essay' on the topic, as in 'Homesteading the Noosphere". (I intend to write one in a few years, if my projects are released and do well. Failing that, I'm not qualified, so don't ask me to do it. ;-) )

          See, the author here has already turned me off and lost. Marketing Open Source should be more honest... 'here's what it's good for, here's what it isn't, here's what needs more work'.
  • by awb131 ( 159522 )

    Does anyone know how subversion compares with Slide [apache.org] from the Jakarta Project? Slide is also a WebDAV/DeltaV client and server. In the past, I've been more interested in Slide because it has a more "pluggable" back end (Slide is in Java, and I am a pretty good Java programmer, not so much with the C.) Easier to embed/extend for my own uses.

    For example, are the two interoperable in any way? Can you use one's client to talk to the other?

    • Since Slide is not specifically intended for source code management, I would imagine subversion has many features in this area that Slide does not. However, I haven't used either, only read about them.

      If you're looking for something to embed/extend and you know Java, then Slide would make sense, especially if you're planning to use it for something other than source code management. However, you might still want to use subversion for the source of that project... Most people don't need to extend their source code manager much, except perhaps with a few scripts.

  • We use CVS here, and like everyone else I'm fed up with the lack of rename support and branching. But looking at the install requirements of Subversion is very intimidating! It requires:
    - Berkeley DB, a particular version (this makes sense)
    - Apache 2.x
    - WebDAV
    - Neon
    and a bunch of other stuff, IIRC. (Their site is /.ed, so I can't check, sorry)
    All we need at my company is a server to run on one Linux machine and clients for all the others (MacOSX/WinXP/Linux/IRIX), all within our firewall.

    Doesn't all the above stuff, especially the Apache/WebDAV/Neon stuff, seem like overkill just to implement a network protocol for a version control system? Setting up a CVS server is certainly not this complicated, and it seems like with a little more effort on the developers' part, much end-user time and pain could be saved. Does Apache/WebDAV/Neon really buy enough so it's worth the install&admin overhead?

    I'm not trying to rag on the Subversion developers; it looks like a really cool system, once you get it up & running. It also looks like they've really done a great job of meeting their goals. I'm definitely looking forward to checking it out -- as soon as I have enough time.
    • Actually, the install's not that bad. You download and compile Berkeley DB 4.0.14. Get Apache from CVS and compile that. Download the subversion tarball, and compile that. Use that version to check out a new version from the repository, wget neon and untar it, and compile it. Then, just edit the Apache config file. The nice thing is that while the install document is long, it is *extremely* precise; it basically tells you exactly what commands to type in. Editing the Apache config file, for example, had the potential to be disastrous for someone like me who had never set up Apache before. Yet I just copied what it said into the config file, and it worked! The configure script is also very good. It properly checks all the dependencies, and installing a missing one is a matter of reading the very helpful error message and typing in [urpmi/emerge/apt-get install] "offending-library."
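
      For the curious, the config edit amounts to a short stanza like this (a sketch with hypothetical paths; the DAV and SVNPath directives come from mod_dav_svn):

          # appended to httpd.conf
          LoadModule dav_svn_module modules/mod_dav_svn.so
          <Location /svn/repos>
            DAV svn
            SVNPath /usr/local/svn/repos
          </Location>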
      • Several folks pointed out that WebDAV is a protocol whose support comes with Apache. Sorry, my mistake -- I didn't read the docs carefully enough. Well, I'll put my money where my mouth is: I'm trying it right now. It's taken two hours so far.

        APR was dead simple. (No RPM available, but not needed.)

        Autoconf: I had a version of autoconf in /usr/bin, and the version subversion needs autoconfig'ed into /usr/local (of course). I fiddled with that, OK. (RPM available.)

        libtool 1.4: no problem. (RPM available.)

        I downloaded neon, and subversion built it automatically. (RPM available, but I didn't use it.)

        Berkeley DB was pretty simple, except that the whole subdir of docs/ that explains the build process was missing in my download -- I found the instructions on the web. This also created errors when doing the 'make install', had to use make -k to work around them. (No RPM for this version available.)

        However, when I went to configure subversion, it didn't recognize my Berkeley DB install (in the default place, /usr/local/BerkeleyDB.4.0), so I just copied it into the subversion build tree and it built it OK.

        As for apache, the first thing is subversion requires 2.0.40, but the latest I can find is 2.0.39. OK, so I figured out that means I need the latest CVS version. Also, I already have an older 1.3.xx version running because RT requires it, so now I have two versions running simultaneously! A bit of an admin hassle, but again, not a showstopper. (No RPM available for this version.)

        (I also installed python 2.0, which required a bunch of other stuff, but I gather that was optional.)

        I guess, from the comments I've seen, that I'm the only one who thinks it's weird to require people to install a web server just to do source control (with more than one machine). HOWEVER: now that I've started to try it out, I have to say it's really a pretty cool idea. It might even be useful to my company!

        I expect all this will get much simpler, but for now it's not for the faint of heart. Still, from what I can tell I think subversion will be the best post-CVS CVS!
        • Keep in mind: you only need to build Apache 2.0 if you want to create a Subversion *server*, i.e. network your repository.

          But you can just as easily build a Subversion client that has BerkeleyDB linked into it; you'd still be able to create and access a repository on local disk.

          In other words, it's relatively simple to build and use Subversion for "personal" use -- say, on a box where you don't have installation privs. Just use it in your home directory.
    • Well the nice thing about it is that it uses tools that already exist, so they get client/server essentially for free. WebDAV, for example, is an existing protocol for document versioning over networks. Makes it a perfect fit for this. Also, because it uses Apache, it can take advantage of the proven security and stability of the server. What would be bad is if Subversion rolled its own server and protocol.
    • You are overstating the requirements a bit. WebDAV is part of Apache 2, and Apache 2 is only required for remote access to the repository. The only real dependencies are Neon and Berkeley DB 4, which, at least for Linux, will likely be included in the next generation of distributions as Berkeley DB 3 and 2 are now. And Neon is just a small shared library, not that big of a deal.

      Look at it this way, by the time Subversion is released the packages it depends on will be standard parts of most Linux distributions and will be staples in the *BSD ports system if they aren't already. Subversion will just snap right in.

      And I have to disagree about administrative overhead. By integrating with Apache, it's one less network service to configure, plus you get to take advantage of Apache's authentication modules, and you get web repository access with no extra setup.

    • Listing 'WebDAV' as a separate requirement makes very little sense.

      'WebDAV' is a protocol. 'Neon' is a client library we (Subversion) use to speak that protocol. 'Apache' is a server that provides an implementation of the protocol that we use in our server.

      So yes, we require Neon for building the client, Berkeley DB if you want to access a repository directly (either for a local repository or if you're building a server), and Apache if you want to run a server. These requirements don't seem too crazy to me, and if you don't want to mess with them yourself, download a package. There are RPMs and a FreeBSD port (I think both still need to be updated for alpha, but I'll be doing that for the FreeBSD port tonight, and the RPMs are always updated pretty quickly).

      -garrett
    • But looking at the install requirements of Subversion is very intimidating!

      I did:

      sudo apt-get install subversion

      Voila! Installed, configured. It's just a little older than today's alpha (0.13), but I don't mind. It will automatically update to the latest version in a few days.
  • by Ludwig668 ( 469536 ) on Wednesday July 24, 2002 @11:41AM (#3944724)
    ... and have been really happy with it. Setting it up is a thesis project (the most common problem with free software), but once that was done, it worked beautifully.

    SCC works well for several purposes:
    • Backup--I save everything in a personal 'svndocs' directory, including stuff like Quicken databases, Word documents, all that stuff. Just 'svn commit' (or, in my previous life, 'cvs commit') and you have your backup stored on another computer (see the sketch after this list). I had a laptop die at a customer's site, and it took downloading the client and 20 minutes to resume development on another computer. My brownie point score soared.
    • Share files with customers who are far away. SCC acts like a low-bandwidth file server. There are suddenly no hassles putting together installers and such, so the rate at which you can deploy updates greatly increases. CVS really sucks when it comes to directory versioning; that's why I switched to SVN. I can now configure the whole deployment tree on my side, and don't have to start e-mails with 'well, because CVS can't do this, you need to delete the whole project and check it out over again.' Monkeying around with directories is much more important considering the way ant relies on java files being in directories that correspond to their package names.
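
    If you want to try the backup trick above, the whole workflow is only a few commands - a sketch with hypothetical paths (the alpha client's argument order may differ):

        svnadmin create /home/me/repo                        # one-time repository setup
        svn import ~/docs file:///home/me/repo/svndocs -m "initial import"
        svn checkout file:///home/me/repo/svndocs ~/svndocs  # working copy
        cd ~/svndocs && svn commit -m "backup"               # after each change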
  • oh no! (Score:3, Funny)

    by r00tarded ( 553054 ) on Wednesday July 24, 2002 @01:15PM (#3945545)
    from the bang-on-it-if-that's-your-thing dept.
    How did you know what I was doing? Did someone stick an X10 in my bedroom?
  • It would be very useful if they had tools for making a subversion repository from a CVS repository, keeping all the history, because people who are now using CVS won't want to lose their historical info. Since the features seem to be a superset of CVS's features, the only problem would be that the pre-subversion history would look odd where people did things to work around missing features.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...