Performance Tuning Subversion
BlueVoodoo writes "Subversion is one of the few version control systems that can store binary files using a delta algorithm. In this article, senior developer David Bell explains why Subversion's performance suffers when handling binaries and suggests several ways to work around the problem."
Why binaries? (Score:2, Interesting)
SVN will not replace CVS (IMO) (Score:1, Interesting)
CVS http://en.wikipedia.org/wiki/Concurrent_Versions_
You have code that many projects share, like multi-platform compatibility layers? Just use symbolic links and CVS will follow them.
In SVN you have to create a repository for these shared source files and write configuration by hand to make it include these files in your repository.
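(The configuration in question is presumably the svn:externals property; a minimal sketch, with a made-up URL and directory name:

svn propset svn:externals "compat http://svn.example.com/shared/compat-layer" .
svn commit -m "pull in shared compatibility layer via svn:externals"
svn update

After the update the shared code appears under ./compat and is fetched on every checkout -- still more ceremony than a CVS symlink.)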
I hardly see SVN reaching the level of flexibility CVS has. They support Windows (which doesn't have symbolic links) and give up usability in the process.
Apart from this difference, SVN and CVS are the same. There are marginal differences in features, but these affect no real-world use. So if you want a version control system where you don't need to write config files by hand, you choose CVS. If you want the latest hype, you choose SVN.
There wasn't really a need for SVN.
performance not the biggest problem (Score:3, Interesting)
it's more that you lose changes without any warning whatsoever during merging
(don't get me wrong, I love Subversion)
What about git? (Score:1, Interesting)
Long version: The fractional speedup described in the article is blown away by the several orders of magnitude you get by using git. Then there are all the other goodies, like real branches and merges, git-bisect, and visualization with gitk. Subversion is just for people who are forced to use it, or those not exploring all their options these days.
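For anyone who hasn't tried it, git-bisect in a nutshell (the tag name here is made up):

git bisect start
git bisect bad                # current HEAD is broken
git bisect good v1.2          # last revision known to be good
# git now checks out revisions for you to test; mark each one
# good or bad until it pinpoints the commit that broke things
git bisect reset

It binary-searches your history, so even a long run of commits only takes a handful of test cycles.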
why import/export (Score:1, Interesting)
It may have performance problems, but... (Score:5, Interesting)
Re:SVN will not replace CVS (IMO) (Score:4, Interesting)
Store them differently (Score:4, Interesting)
What I don't like about this article is that it implies I should have to restructure my development environment to deal with a flaw in my version control. The binary issue is huge with Subversion, but most of the people working on Subversion don't use binary storage as much as game projects do. Subversion should have an option to store the head as a full file, not a delta, and this problem would be solved. True, it would slow down commit time, but commits happen a lot less often than updates (at least for us). Also, the re-delta-ing of the head-1 revision could happen on the server in the background, keeping commits fast.
Re:SVN will not replace CVS (IMO) (Score:1, Interesting)
HINT: When you do it the way CVS provides, you will lose all of your revision history.
SVN does not have this fatal flaw.
Yeah, that is a problem with CVS. Your revision history is there, you just can't trace it, since a move is a delete and re-create. So if your move/rename commit comment says where you are moving it from, you can manually trace it (though this is a huge pain).
We have moved all our CVS repositories to SVN at work. As much as I like the revision history problem being gone, I would've pushed harder to stick with CVS (I didn't think SVN was ready at the time, and I still don't). CVS is way more stable, solid, and trouble-free, and clients for it are also very stable. SVN has numerous issues that keep popping up, mostly in the clients (the working copy metadata gets corrupted all the time), but some that might even be server-side corruption (didn't quite figure out why, but everyone's working copy got corrupted in the same place).
Are there any SVN-to-CVS conversion utilities out there for those of us who want to go back to CVS?
What's wrong with version control? (Score:5, Interesting)
I mean, is it just me, or is revision control software incredibly difficult to use? To put this into context, I've developed software that builds websites with an integrated shopping cart, dozens of business features, email integration, domain name integration, over 100,000 sites built with it (blah blah blah), but I find revision control HARD.
It feels to me like there is a fundamentally easier way to do revision control. But I haven't found it yet, and I don't know if it exists.
I guess for people coming from CVS, Subversion is easier. But with Subversion, I just found it disgusting (and hard to manage) how it left all these invisible files all over my system, and if I copied a directory, for example, there would be two copies linked to the same place in the repository. Also, some actions that I do directly to the files are very difficult to reconcile with the repository.
Since then, I've switched our development team to Perforce (which I like much better), but we still spend too much time on version control issues. With the number and speed of our rollouts and the need for easy access to certain types of rollbacks (but not others), we are unusual. In fact, we ended up using a layout that hasn't been documented before but works well for us. That said, I still find version control hard.
Am I alone? Are there better solutions (open source or paid?) that you've found? I'd like to hear.
Re:What about git? (Score:3, Interesting)
Re:Why binaries? (Score:3, Interesting)
Re:What's wrong with version control? (Score:3, Interesting)
I think the only way to make it work really well is to have an administrator whose job it is to be a VC expert, rather than a programming expert. You need someone with some serious scripting skills and a deep understanding of the structure of the VC filesystem. With the proper scripts in place, you can really streamline the process for your specific project and enforce your coding practices, but maintaining the system is a separate skill from programming. Also, when performing non-standard merges or whatever, you would probably need a coder to work with the admin to make sure you don't do it in a way that will hamstring you later. Of course, most projects can't afford that, and many programmers don't want to leave their code in the hands of some script monkey, or won't believe that someone else can do something as "trivial" as VC better than they can.
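To make the "proper scripts" part concrete, this is the sort of thing such an admin drops into a repository's hooks/pre-commit. A minimal sketch (it enforces nothing more exotic than non-empty log messages; adapt to taste):

#!/bin/sh
# pre-commit hook: Subversion passes the repository path and the
# transaction name; reject the commit if the log message is empty
REPOS="$1"
TXN="$2"
LOGMSG=$(svnlook log -t "$TXN" "$REPOS")
if [ -z "$LOGMSG" ]; then
  echo "Commit rejected: please write a log message." >&2
  exit 1
fi
exit 0

The same mechanism can run style checkers, block commits to frozen branches, and so on, which is exactly the plumbing a dedicated VC admin ends up owning.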
Re:It may have performance problems, but... (Score:4, Interesting)
I suppose one has to be conservative with deployment of this stuff; you don't want to have code locked away in unmaintained software, or erased by immaturity bugs, but it's still an interesting field.
Re:Why binaries? (Score:5, Interesting)
We give our outside designers access to their own SVN repository. When we contract out a design (for a brochure, for instance), I give them the SVN checkout path for the project, along with a user name and password. They don't get paid until they commit the final version along with a matching PDF proof.
This solves several issues:
(a) The tendency for design studios to withhold original artwork. Most of them do this to ensure you have to keep coming back to them like lost puppies needing the next bowl of food. It also eliminates the "I e-mailed it to you already!" argument, removes insecure FTP transfers, and can automatically notify interested parties upon checkin. No checkin? No pay. Period.
(b) Printers have to check out the file themselves using svn. They have no excuse to print a wrong file, and you can have a complete log to cross-check their work. They said it's printing? Look at the checkout/export log and see if they actually downloaded the artwork and how long ago.
(c) The lack of accountability via e-mail and phone. We use Trac in the same setup, so all artwork change requests MUST go through Trac. No detailed ticket? No change.
(d) Keeps all files under one system that is easy to back up.
You may have a little difficulty finding someone at both the design and print companies who can do this, but a one-page Word document seems to do the trick just fine.
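For anyone who wants to replicate this, handing a designer their own repository with a user name and password is mostly a matter of svnserve's built-in auth; a rough sketch, with invented names and paths:

svnadmin create /srv/svn/brochure-2007
svnserve -d -r /srv/svn

# /srv/svn/brochure-2007/conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd
authz-db = authz

# /srv/svn/brochure-2007/conf/passwd
[users]
designstudio = somepassword

# /srv/svn/brochure-2007/conf/authz
[brochure-2007:/]
designstudio = rw

The checkout path you then hand out is something like svn://your.host/brochure-2007.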
Re:Why binaries? (Score:3, Interesting)
Re:Why binaries? (Score:4, Interesting)
And yes, for a 250 MB audio file, it is VERY slow.
Re:What's wrong with version control? (Score:3, Interesting)
1) You want to make a copy of trunk to send to somebody:
tar cvf project.tar .
With svn you have to go through a bunch of magic to do this, or you end up giving them either a pristine copy when you may have local changes (you tweaked some config option or whatever), or your username, the svn repo address and structure, etc. If you do svn export, it makes a copy of what is in HEAD, not what is in your folder, so there is no way to do this without going back and weeding out this junk.
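Two partial workarounds, for the record (GNU tar's --exclude, and the fact that svn export pointed at a working-copy path keeps your local modifications; the destination path here is just an example):

tar cvf project.tar --exclude='.svn' .       # skip the metadata directories
svn export . /tmp/project-copy               # local mods kept, unversioned files dropped

Neither is pretty, which rather proves the point.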
2) You want to export something
# svn export svn:something
svn: '/tmp' already exists
Really, you think?
3) You make a copy of a file and then decide to rename it (or other cases).
# svn cp
# svn mv file.c newname.c
svn: Use --force to override
svn: Move will not be attempted unless forced
# svn --force mv file.c newname.c
svn: Cannot copy: it is not in repo yet; try committing first
Svn says you *must* do a bogus commit because you wanted to rename a file, or alternatively you can revert the new file and lose it? wtf? dumb.
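The least-bogus workaround I know of is to revert the scheduled copy and redo it under the name you actually wanted (the source file name here is a guess, since the svn cp above is elided):

svn revert file.c              # undo the pending copy-with-history
svn cp original.c newname.c    # copy straight to the intended name

Still a silly dance for what should be a one-step rename of an uncommitted copy.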
4) You want to do the same thing on lots of files
# svn mkdir newdir
# svn cp *.c newdir
svn: Client error in parsing arguments
That's right, you have to break out your bash/perl scripting skills to do this. Lame.
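The usual shell-loop workaround, for completeness (newer svn releases, 1.5 or so if memory serves, also accept multiple sources when the last argument is a directory):

for f in *.c; do svn cp "$f" newdir/; done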
There's a *lot* to dislike about svn. It's basically just 'icky' all throughout. The checkouts are huge and ugly, many operations are slow (compared to monotone), and it's really annoying to have a private repo that you sync occasionally, so you end up with zillions of tiny commits or losing work because you didn't commit enough. And the repo itself is very large -- I converted a 2 GB repo from svn to monotone preserving revisions, and even with straight add/del instead of renames/moves the monotone database was a small fraction of the size, about 1/6th. Incidentally, the monotone version was much faster in pretty much every way.
Monotone is technically much better than Subversion, except for one problem: you can't check out only a subset of a repo. Maybe they have fixed that by now, and if so it would be crazy to use svn instead of it, IMO. I'm sure there are also many others out there better than svn.
Vesta is better (Score:3, Interesting)
The first time I used Vesta, it was a life-changing experience. It's nice to see something that isn't a rehash of the 1960s.
Notice.. (Score:3, Interesting)
Questions that remain:
1. Does the algorithm simply "plainly store" previously-compressed files, and is this the reason why that is the most time-efficient?
2. What exactly was the data for the *actual check-in* times? (What took 28m? What took 13m?)
3. Given that speedier/more efficient check-in requires a large tarball format, how are artists supposed to incorporate this into their standard workflow? (Sure, there's a script for check-in, but the article lacks any details about actually using or checking out the files thus stored, except to say that browsing files stored this way is an unresolved problem.)
The amount of CPU required for binary diff calculation is pretty significant. For an artistic team that generates large volumes of binary data (much of it in the form of mpeg streams, large lossy-compressed jpeg files, and so forth) it would be interesting to find out what kind of gains a binary diff would provide, if any.
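One crude way to answer that yourself is to diff two revisions of such a file with a standalone delta tool and compare sizes and timings; a sketch using xdelta3, with placeholder file names:

time xdelta3 -e -s scene_v1.mpg scene_v2.mpg scene.vcdiff   # encode a binary delta
ls -l scene_v1.mpg scene_v2.mpg scene.vcdiff                # compare delta size to the originals

If the delta comes out nearly as big as the new file (likely for already-compressed media), the CPU spent computing it buys you almost nothing.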
Document storage would also be an interesting and fairer test.
Re:Store them differently (Score:3, Interesting)
It's like when I added 2,457 files to a VLC playlist. It took 55 minutes to complete the operation. I immediately downloaded the VLC code and went looking through it...
It loops, while(1), through a piece of code that is commented "/* Change this, it is extremely slow */", or some such. The moment I have a C/C++ Linux development environment functioning, I am going to fix that, if it hasn't been already, as well as look into the SVN problem.