
Tips on Managing Concurrent Development?

An Anonymous Coward queries: "I work on a fairly large-sized project with at least a dozen developers. Advanced tools like CVS and ClearCase allow concurrent development, and provide tools to merge different changes to the same file. This can be a significant productivity gain, particularly with files that are unavoidably common to several developers (C header files, most notoriously). During crunch times, such as before delivery deadlines, we often find that we are checking in changes to the same file several times a day, often hourly. The problem does not seem to be with conflicting changes to the same lines of code, but rather with developers knowing the sequence in which concurrent changes will be checked in. It is not possible to always be aware of who is checking in what and when, so programmers submitting patches to the baseline often have to redo those patches multiple times in a day in order to have them applied. Have other programming projects developed solutions for dealing with this problem?" The submitter proposes another solution below; how well would it work?

"Take, for example, the extreme case of something like Linux (not only concurrent development, but geographically distributed development), how is this managed? One solution we were contemplating was to try to do an 'air traffic control' type of sequencing and conflict resolution. As early as possible in the development stage, we try to identify what will be finished when, and assign a one-up sequence number to each patch. Developers then know that they will be patching against the baseline that was patched by the patch with the previous sequence number. It is hoped that this prevents a lot of rework of patches. A potential problem with this approach is the need for a responsive central authority to assign sequence numbers. Also, such sequence numbers may have to be rearranged in the face of last minute advances and setbacks in developer progress. Despite careful scheduling and detailed design, it may be impossible to know the exact check-in sequence of patches more than a week or two in advance.

Will such an idea be successful, or is it fatally flawed? Are there better solutions to the problem with less effort? Are we treating symptoms and not the disease (i.e., should we be planning better so that we know patch sequences and dependencies early on)? Management likes to keep staff productively occupied and working up until deadlines, so this usually means a lot of checkins within a short period of time, rather than staged checkins. Can checkins be spread out over time while keeping developers productively occupied?"

  • by ramdac ( 302865 ) <ramdac [at] ramdac.org> on Friday March 15, 2002 @05:17PM (#3170620) Homepage Journal
    Maybe we're just stupid. We don't use CVS or any other versioning software to keep track of changes. We don't check in our code or anything.

    Luckily our development is done on the web; we just create folders and move them up when we're done.
  • by Anonymous Coward on Friday March 15, 2002 @05:18PM (#3170626)
    (disclaimer: I am a VA Software employee)

    I know this sounds corny, being said on a VA property such as Slashdot, but SourceForge 3.0 [vasoftware.com] is easily the best concurrent development environment I have ever used. It was my love for Sourceforge which made me pursue a job at VA Software in the first place! The fully web-based administration hides all the niggling details of command-line cvs tools and makes managing huge projects a piece of cake.

    In short, if you haven't been to VA Software's site [vasoftware.com], you don't know what you're missing.

    • Problem with SF EE:

      Minimum user license is 30 users, and that is roughly $30,000!!!

      I run a small development firm and I wanted to use the enterprise edition. I'll pay a few thousand for something, but $30k for 30 people? I think not.
      • I run a small development firm and I wanted to use the enterprise edition. I'll pay a few thousand for something, but $30k for 30 people? I think not.

        While I can't say that SourceForge is decent (imho it looks like a hodgepodge of thrown-together junk), $30k for 30 people is not really a lot. Say salary, benefits, overhead, etc., cost you $1k per week per employee (which is likely very modest). Is something like this going to save each developer a week of time? If there's a tool that good, then it's well worth the money. Sure, it's a big one-time cost, but even over the course of a year, it really does pay for itself.
      • 1) Install debian
        2) apt-get install sourceforge.
        3) Do a bit of fiddling (well, maybe more than a bit).
        4) save 30K.

    • I thought SF was open source? When did it become proprietary? Oh yes, when VA locked it up.
  • by Anomolous Cow Herd ( 457746 ) on Friday March 15, 2002 @05:19PM (#3170630) Journal
    Even though it is backed by CVS (and you could possibly get away with using just that), SourceForge OnSite (c) (sold by VA Software at a reasonable price) makes managing CVS and concurrent code development a snap! Just plug it in and code away with the knowledge that you are paying for the support services of one of the leading vendors of enterprise-grade Linux solutions.

    I wouldn't, however, recommend working with anything from Microsoft. Benchmarks and real-life statistics have shown that their source control solutions are not only slower, but are also less stable and more likely to corrupt your source tree. I hope you have backups!

    • Everything turns into 'the linux solution' vs. 'the Microsoft solution' even if one or more of those is inapplicable.
      I wouldn't, however, recommend working with anything from Microsoft.

      I don't think Microsoft has any offerings in the serious source control space. They do have a toy called "Visual SourceSafe (?)" but I don't think even they take it seriously. The leader in source control is ClearCase. The ClearCase VOB server is normally run on SPARC/Solaris. The client runs on everything common, including of course NT, Linux and Solaris.

      So I'd leave Microsoft out of this discussion and ask whether Sourceforge.* can compete with ClearCase. Or what Sourceforge.* offers beyond ordinary CVS.
    • Please, stop it already!!!!

      I realize that 12 programmers writing code 80 hours a week on one product is a nifty challenge for a tool as robust as OnSite, but honestly (and I'm not saying this in some sort of 'my crank is bigger than yours' type of way, as I'm sure to you your project is very intense) it's the same as some rich executive who hasn't used a computer since the '70s saying 512K of RAM is all anyone will ever need.

      If you plan on having really well-done configuration management, you need a configuration management tool. ClearCase, Continuus/CM Synergy, and PVCS (to some extent) are what you are looking for. I'm not advocating any of these, and I have a cheaper alternative I'll talk about later. If you're wondering, I am a CM administrator (thus my viewpoint may be a little more relevant than that of a manager who doesn't know how the tools work). The original question was about concurrent development, and as such, deals less with versioning than with lifecycle. Anyone who has ever stepped outside of a VCS environment and into an actual CM environment can tell you how a well-planned life cycle can save your developers hours and hours of headaches just like the ones that you are dealing with now.

      This is one field of software where, once you're into it ($500k-$750k when you figure in man-hours, software, and hardware), it's really hard to get out and put yourself into another system. We just moved from Continuus, and the cost to get into that CM tool was (after the accounting geek figured it) almost 4 million. You have to think about things like rewriting code and restructuring directories, and while your developers are doing that, they are not developing new code, and that costs the company money too. I say that to tell you that this was a massive project. We moved 14 products, 12 development teams, and 1.24 million files (and that was without previous versions for history), all from a home-brew version control system on top of rcs. We were as happy as penguins on a screensaver, until our company made us move to a CM(ish) tool of its own again (ooohhhh, the joys of being acquired). The new relocation cost? 4 million.

      If you want a really nice tool, you have to pay for it. There are no freebies out there. Save one. Perforce [slashdot.org] has a tool that is almost as robust as the big guys without the fuss of four-digit per-user licensing fees. It's not free to use commercially, but they offer great deals for small shops and OSD. And even the commercial licenses are not blown out of proportion that badly.

      If you really want to do concurrent development (or parallel development, PD, as it's called by professional-grade tools) you need to invest in one of the big three, or the little CM tool that could. If you have a good life cycle and a tool that was built to handle PD, you will not have to post your CM questions to slashdot any more.

      but for god's sake.... Please stop ranting about OnSite...
  • by bstrahm ( 241685 ) on Friday March 15, 2002 @05:19PM (#3170633) Homepage
    I don't think so, but then to many people this might be large...

    Of course some of these problems sound like lack of planning early in the game...

    For example, changing headers that two developers need... The only headers that two groups need should be interface headers; these should be set early and should not need a lot of change, with any change requiring both developers to update their code internally...

    Another note: I get really worried when people say that process problems only show up at the end, at crunch time. If it is crunch time, it is time to use all of your processes, because the processes should be designed to produce the best bug-free code the quickest... otherwise it shouldn't be in the process...

    That is just my 2c worth however
  • BitKeeper (Score:2, Interesting)

    http://bitkeeper.com/Products.BitKeeper.html [bitkeeper.com]

    If it's good enough for Linus & friends, it's good enough for me ;-)

    MONOLINUX.com :: All Linux. No ads. [monolinux.com]
  • by noahbagels ( 177540 ) on Friday March 15, 2002 @05:21PM (#3170647)
    I currently work in an organization with 75 Engineers (50 USA, 20 India, 5 Asia) - and we use CVS. It's free, easy to use, and has a simple feature set so that more than one person has enough knowledge to do things like branching and merging branches.

    We nearly never have merge problems. It is standard procedure that people keep their tree up-to-date with the cvs tree, and thus conflicts rarely arise. Even at crunch time, I probably have one merge conflict every 2 weeks, and with CVS you are notified of the conflict and it is wrapped in CVS conflict markers.

    To put this in perspective - while at Oracle with 1000s of engineers working on the same tree, we used ClearCase and it was awesome. The difference here is that there was a much steeper learning curve, and no normal engineers could actually do complex tasks - i.e., create branches, etc. We had a complete group dedicated to ClearCase.


    Conclusion:
    Educate your engineers - and politely have the senior engineers tell them when they mess up - enforce a policy that people must update the source every day that they plan on checking-in files.

    Also - I don't know what CVS version you are using, but the latest free WinCVS client will not allow you to check in a file with a conflict! It will force you to update/merge/resolve the conflict before updating the tree. I highly recommend CVS and WinCVS due to the ease of use and cost.

      • To put this in perspective - while at Oracle with 1000s of engineers working on the same tree, we used ClearCase and it was awesome. The difference here is that there was a much steeper learning curve, and no normal engineers could actually do complex tasks - i.e., create branches, etc. We had a complete group dedicated to ClearCase.

      To put this in perspective, I currently handle the Clearcase side of a transatlantic development effort, with maybe 200 developers. The other side uses Continuus (office politics, don't ask). They have a complete config/build group. They even have a tools group that does nothing but evaluate, purchase and support tools for the config/build group. Until very recently, I handled the Clearcase side on my own. Part time (I'm a developer). It got to the stage where I would actually take the source from Continuus, import it to Clearcase, produce reports, perform a build and test it before the Continuus team could do it, and my builds got used in preference to theirs.

      Just goes to show, there's always a worse system, or other alternatives to explore. The developers who're used to using Continuus are all in love with Clearcase, and rebellion is brewing. One guy said that he'd learned to do in Clearcase in two weeks what it had taken him two years to learn in Continuus. And yet I agree with you: CVS is even easier than Clearcase, and does everything you'd need to do on a typical project!

      • Yeah, I've been doing what most people consider a LARGE development for several years. (i.e. more than 500 software engineers working on several million lines of code... multiple continents); we're currently using Clearcase. Compared to any other tool I've used, Clearcase has a few minor issues, but on the whole it's pretty *damn* excellent, for nearly anything you could want from a source control system.

        The few problems we've had are that its multisite support is a shade awkward, that if you can't fit all the source into one database the 'intervob' stuff is a bit weird, and that it doesn't group changes to multiple files very well. Other than that it's really, really, really excellent.

        Still, CVS or even RCS is good too in my experience; but not nearly as polished.

    • We nearly never have merge problems. It is standard procedure that people keep their tree up-to-date with the cvs tree, and thus conflicts rarely arise

      I'll second this: At many companies I have heard over and over that CVS sucks because of the conflicts. When I inquire further, I find out that people never or rarely use the cvs update command to synchronize. cvs update should be executed almost like a nervous habit, or at least a couple times a day. More if code is being checked in frequently, or if you are working on the same code as someone else. Use the watch commands in cvs to be notified when others edit your files.

      It's a shame, really, because programmers seem to be afraid of cvs, preferring a more primitive tool such as rcs or pvcs. cvs lets programmers work in the style that is most natural in an open-source arrangement, and in my opinion can be a far more productive environment than a system that locks the rest of the team out of critical files.
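      A sketch of that habit in commands (the path here is made up; update, watch, edit and editors are all standard CVS):

        cvs -q update -dP                 # sync: -d picks up new directories, -P prunes empty ones
        cvs watch add src/common/defs.h   # be notified when others start editing a hot file
        cvs edit src/common/defs.h        # announce that you are editing it yourself
        cvs editors src/common/defs.h     # see who else has announced an edit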

  • Subversion! (Score:3, Interesting)

    by Pointer80 ( 38430 ) on Friday March 15, 2002 @05:21PM (#3170650)
    Check out subversion [tigris.org].
    It's CVS, but better and based on WebDAV for RPC and BerkeleyDB for storage.

    Cheers,

    pointer
    • Re:Subversion! (Score:3, Informative)

      As early as possible in the development stage, we try to identify what will be finished when, and assign a one-up sequence number to each patch. Developers then know that they will be patching against the baseline that was patched by the patch with the previous sequence number. It is hoped that this prevents a lot of rework of patches. A potential problem with this approach is the need for a responsive central authority to assign sequence numbers. Also, such sequence numbers may have to be rearranged in the face of last minute advances and setbacks in developer progress. Despite careful scheduling and detailed design, it may be impossible to know the exact check-in sequence of patches more than a week or two in advance.

      When Subversion is ready, you might check it out. It keeps track not of specific versions for files, but of revisions/patches for the entire tree. This way you can tell exactly the state of all the code at, say, revision 2735. No manual tagging needed. This would take over a lot of the work of your sequence numbers, without the severe administrative overhead. You could even try to assign a range of actual revisions for which a specific feature is targeted.

      I'm already using Subversion for the early stages of one of my projects. It seems to be very stable currently, and of course I still make backups of the DB in case there remain bugs that would corrupt it. I figure I won't need to make any branches or merges in this project until well after the time that Subversion can support it (due in April).
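      To give a feel for tree-wide revisions, here is roughly what they look like from the svn client (the revision numbers are arbitrary):

        svn update -r 2735      # reproduce the entire tree exactly as of revision 2735
        svn log -r 2700:2735    # list every tree-wide change in that range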
  • Modular Isolation (Score:5, Insightful)

    by pyrrho ( 167252 ) on Friday March 15, 2002 @05:21PM (#3170651) Journal
    The only thing I know that really works (i.e. "is simple") is to lessen the conflicts through design... that is, two people shouldn't HAVE to edit the same modules. Or at least not the same lines of the same modules (those are the only merges that are really painful). Similarly, if you have well-understood specifications for modules then there should not be a problem when the lines edited don't overlap, because the functions and modules will continue to behave to the spec, which is all the other code can expect.

    I know this isn't really easy to do (it can't be done retroactively), and doesn't really fit all cases (such as near a release when there is a lot of chaos), but it's the only elegant solution I know of; all the rest are more brute force.
    • Yes, I agree with this one: the best solution is to design your project so that you don't need to have two people working on the same piece of code at once (and if by some chance you do, then one should wait for the other to finish before starting).
      • In those cases where you do have to have two people work on the same code at the same time, seat them elbow-by-elbow. If a bunch of people need to work on the same code, move their workstations into a conference room.

        When the crunch is over, go back to your normal work areas and vow to plan better on the next project.

        -Ed
          What I wish for is a tool that constantly downloads changes from all of the other developers' trees and highlights the lines that have changed in whatever file you're working on (but does not merge them), and optionally lets you view the changes. So you can immediately see if someone else has modified code that you want to mess with, and you can call up that person and negotiate a compromise or proper fix. It would be a total bandwidth hog, but think of the time saved when you don't have to suddenly discover that someone else broke all of your changes.

          Anyone up for the task?
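          A crude approximation can be scripted today with a dry-run update in a loop; this is only a sketch of the monitoring half, not the in-editor highlighting:

            # Poll the server every five minutes. -n means "show what
            # would happen but change nothing"; -q quiets the output.
            while true; do
                cvs -nq update 2>/dev/null | awk '
                    /^U / { print "changed upstream: " $2 }
                    /^M / { print "modified locally: " $2 }
                    /^C / { print "conflict brewing: " $2 }'
                sleep 300
            done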
    • I second this. However, it flies in the face of "load balancing".

      You know, 12 UI changes to be made divided by 6 developers = 2 UI changes per developer. Nice "perfect" load balancing. Never mind that 5 of those 6 now have to learn the structure of the UI system developed by the 6th and not yet documented for public consumption and that the 6th could have implemented all 12 changes himself in the same time as the entire team.

      Oh yes... such "load balancing" does expose merge "hot spots" pretty quick.

      I've seen two conflicting approaches: check in to a common branch and trust the source code control system to keep things sane (Clearcase can do this; CVS sticky tags can as well). The trouble here is that people wait for exclusive checkouts, or run the risks associated with non-exclusive checkouts. The other approach is to maintain a branch for every developer and merge. This resolves the exclusive checkout problem, but requires discipline, avoidance of hot spots, and architecturally based task assignment.

      I worked in a shop that started with the branch-per-developer approach (not a problem with Clearcase, but from what little I know about CVS, this may not be practical), and quickly abandoned it because too many people played with the same code, causing merge nightmares (exacerbated by management's load balancing based on estimated lines of code added/changed without regard to architecture or skill).

      I now work in a shop that sticks to CVS, and people generally know who's playing where. Sticky tag conflicts occur rarely.

      Personally, I like the idea of modular isolation, because it lets you use the one-branch-per-developer approach (well, with Clearcase, at least) if you want, letting people test changes to code they're not supposed to be "officially" changing (it sometimes happens that you find a bug in code you shouldn't "officially" change).

      • You know, 12 UI changes to be made divided by 6 developers = 2 UI changes per developer. Nice "perfect" load balancing. Never mind that 5 of those 6 now have to learn the structure of the UI system developed by the 6th and not yet documented for public consumption and that the 6th could have implemented all 12 changes himself in the same time as the entire team.

        Rock on, dude! Prima donnas may suck, but no matter what, 90% of the real coding is done by 10% of the coders. If I were a manager, I would rather "balance" the load by assigning one or two developers to each piece of the system, and have them become experts at it. Then, I'd give all 12 fixes to the UI expert(s), and if he couldn't handle it, let him ask one of the less-busy experts in another area to help out on the changes for which the learning curve isn't too high.

        Almost all software projects or mostly-self-contained components start out with one or maybe two people doing all of the initial coding. It takes a lot of work to get it to a point where more people can even start to work on it concurrently.

        So if you are starting a project, plan modularly so there are no conflicts between the different "experts". Separate concerns. Also, lessons learned here will help when your project becomes so large or old that no single person knows the entire system.
  • by rimsky ( 106475 ) on Friday March 15, 2002 @05:25PM (#3170674)
    One solution to the patching problem is to use continuous integration [martinfowler.com]. It's an integration technique that builds your source multiple times a day, getting all the latest source code from the CVS tree and building from that code. If anything fails, the offending developer gets warned. Mozilla uses the same thing, calling it TinderBox [mozilla.org]. It's one of the principles of Extreme Programming, and it seems to work quite well at our company.
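    A bare-bones version of the idea, assuming a makefile with build and test targets and a hypothetical mail alias (real setups use Tinderbox or similar):

      # Rebuild from the latest tree every hour; mail the log on failure.
      while true; do
          cvs -q update -dP
          if ! make all test > build.log 2>&1; then
              mail -s "build broken" dev-team@example.com < build.log
          fi
          sleep 3600
      done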
  • Well, I've been on projects in which people say "oh, you rearrange that code first, then I'll do my thing" (more informally than your "air traffic controller" example, but kind of the same thing). It can sometimes be helpful, but all too often it gives an excuse for inaction ("well, I couldn't do xxx because yyy had to happen first"). One thing which can help a bit is continuous integration - run "cvs update" often and checkin as often as you can (say, whenever your automated tests pass). Oh, and of course trying to not do everything right before the deadline (although it can be hard to change a culture and I'm not sure it is worth it). Other than that, I don't know of any magic bullets - just to say that dealing with the pain of continuous (or at least frequent) integration is better than the alternatives (such as having everyone do their patch against a build from last week and then having an overworked release manager try to assemble them into a working program).

  • Write tests (Score:2, Informative)

    One of the strategies we use at my company (we also use plain ol' CVS, but we don't have many branches on our development tree) is to team-code and to write tests for every script, so that you can tell if/where a problem has been created by someone else editing a header file. A test should simulate a user using the file/program/script/etc. and should double-check the values entered against any values that are stored. By running the testing suite after a file changes, you might still have a merge conflict, but after it's resolved you won't have a bad piece of code.
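    For scripts, even a trivial golden-file test catches that kind of breakage; a minimal sketch with hypothetical paths:

      # Run the script on a known input and compare against the stored
      # expected output; any surprising change fails loudly.
      ./report.sh < fixtures/input.txt > /tmp/got.txt
      diff -u fixtures/expected.txt /tmp/got.txt || { echo "FAIL: report.sh" >&2; exit 1; }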
  • Extreme Programming (Score:5, Interesting)

    by Frank Sullivan ( 2391 ) on Friday March 15, 2002 @05:28PM (#3170698) Homepage
    Check out the development techniques of Extreme Programming (just search Google, silly, and buy a book or three). They have a real solid handle on concurrent rapid development.

    The real heart of Extreme Programming is "test-first" programming. The entire development process revolves around unit and integration tests, for extremely fine-grained control over code quality. Any changes that might impact other code should break a test. You fix the stuff that breaks, check in your changes, and move on.

    Multiple programmers touching the same C files many times a day sounds like you have either design issues, structural issues, or both. That just should not happen, crunch time or not. Heck, crunch shouldn't happen if you're managing your development correctly.

    If you're using cvs, conflicts with source checkins should be very easy to resolve. Even if two programmers touch the same file, they shouldn't be in the same function. If they are, you're back to management and architecture problems, and you need to fix those NOW before work grinds to a complete halt.
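    For reference, the whole resolution cycle in CVS is only a few commands (the file name is hypothetical):

      cvs update                    # reports "C parser.c" when there is a conflict
      grep -n '^<<<<<<<' parser.c   # locate the markers CVS left in the file
      vi parser.c                   # keep the right lines, delete the markers
      cvs commit -m "merge my parser change with HEAD" parser.c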
    • Extreme Programming doesn't solve this problem; it depends on a solution existing.

      (The XP crowd came from a Smalltalk background, and mostly relied on Envy Developer, of which they had many good things to say.)
    • Heck, crunch shouldn't happen if you're managing your development correctly.

      If you ever finish a project without crunch time, the marketing guys will find out and have your schedules shortened appropriately.
    • I understand that Extreme Programming comprises many sound principles... I just wish the name didn't force images of surfers and that Kool-Aid man into my mind.

      Anyway, I agree that source files being touched by multiple people at one time can indicate design problems. One thing I've learned is that properly designed software naturally becomes modular enough that several engineers can work together yet not step on each other's code. This also tends to limit the number of engineers that can be applied to a project, but sometimes more engineers just crowd things rather than help. (I thank "The Mythical Man Month" by Fred Brooks for this essential wisdom)
    • Test first....

      But tests are written by the developers, so good developers = good tests, bad developers = bad tests.

      Don't do XP; anything in engineering that says requirements and design don't come before implementation loses the right to be considered engineering.
  • by Anonymous Coward on Friday March 15, 2002 @05:29PM (#3170702)
    Why in the world should developers be applying the same patch multiple times? You've just said that the problem is not with developers needing to touch the same lines of code -- so, once a patch is in, shouldn't the next person be merging their code with what's already there?
    If your problem is with people overwriting the changes that previous submitters made, then you've got a very different kind of mgt problem -- one that can be solved by getting people to use the tools they already have. CVS, for example, lets you merge the current branch head with your working copy, incorporating any changes that may have been made since you checked things out.
    Submitters should always diff their current code with the head before committing a check-in, to see if they are breaking previous changes. This kind of practice is more important when schedules are tight, and you shouldn't let people off the hook because they were in a rush or had some other lame excuse.
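    That practice boils down to a short pre-commit routine; a sketch, assuming you have some kind of test target:

      cvs -q update                  # fold the current head into your sandbox first
      cvs -q diff -u | less          # review exactly what you are about to change
      make test                      # make sure nothing obvious broke
      cvs commit -m "describe the change"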

    --tsw
    • The way I understand the text of this article is that the patches have to be reworked several times. This is integration work, not submission work. The developer makes a change which depends on some other change, but rather than waiting for that other change, just goes ahead and does it. Then when the dependent change comes in, the patch is wrong and has to be reworked. Am I close?
  • by spullara ( 119312 ) on Friday March 15, 2002 @05:32PM (#3170714) Homepage
    These are the ingredients to make large projects successful from a technical point of view. At the company I work for, we have literally hundreds of people working in the same source tree using P4 [perforce.com]. It manages merges and versioning, and works flawlessly over the internet (well, VPN anyway). It is also much, much faster at syncing to the depot than CVS, because the server keeps track of the files that you are editing and does not need to do diffs against the local filesystem. This is very helpful during crunch time, when you might want to sync several times a day (and you have about 10000 files in the system). Also, until your locally edited files are resolved with changes in the depot you cannot submit them, so you don't have the problem of ordering patches properly.
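    In commands, that flow looks roughly like this (the depot path is hypothetical):

      p4 edit //depot/main/foo.c   # open the file; the server now knows you are editing it
      p4 sync                      # pull the latest revisions from the depot
      p4 resolve                   # merge any newer depot changes into your open files
      p4 submit                    # refused until every open file is resolved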

    For the second part, I highly recommend that you have automated builds and tests that run after changes have been submitted. You can see how this is done en masse on the mozilla.org [mozilla.org] site. Also, developers should have access to the same build and test infrastructure on their machines so they can do the build and test before they check in their code.

    Finally, you need a good bug tracking system. You might try Bugzilla [bugzilla.org].

    Good luck,
    Sam
    • Finally, you need a good bug tracking system. You might try Bugzilla.

      You might try it, but you'd probably find it lacking. Bugzilla is far from a good bug tracking system. Actually, let me clarify that -- it's a great bug tracking system, if you're tracking bugs in Mozilla. It's horrible for anything else. Data is hardcoded in source files, and if you want to configure it for non-mozilla bug tracking, then you have to edit the source directly. Red Hat and GNOME have obviously put the time in to do so, and have got good results, but for a small business like ours, we couldn't justify the manpower needed to get the system up and running, so we were forced to go with an alternate solution (one I hacked together in PHP one evening -- it may not be pretty, but it works, and gives us 90% of what we need).

      • I recommend checking out Mantis [sourceforge.net] if you need a good, simple bug tracker that is easy to use and configure. You can have it installed and working in minutes, and it only requires PHP and MySQL. The source is very clean, so it's easy to make changes to it. And the developers are very good about accepting bug reports and feature requests. For a large project like Mozilla, you might need something more large-scale, but for small or medium-size projects with 10-20 people, it works very well.
    • Disclaimer: I am not an employee of Perforce. I used to be a ClearCase weenie, but now that I've been using Perforce for about a year and a half, I think it's better for several reasons:
      • Smaller. You only need one executable on your client. And one more for your server. No kernel patches, no drivers, no installation, just the binaries. Ubercool.
      • Multi-platform. Perforce has binaries [perforce.com] for practically every platform in use out there. Find me another Version Control System with BeOS, QNX, AIX, SCO, MacOS9 and MacOSX support.
      • Fast, fast, fast. Because of the low communication overhead, it works extremely well across slow/high-latency links.
      • Ease of use. It's really easy to configure and setup.
      • Great support. We've had to go to perforce support twice and both times they've been awesome, with quick responses and knowledgeable people.
      • Price. The single server, two client setup is FREE! And per seat licensing + support is very very reasonable. I use the free one on my laptop to version files.
      • Plug-ins. Perforce publishes their API, and they have perl, python, ruby and tcl utilities [perforce.com] galore.
      Now my process recommendation. If you have paid for ClearCase, then pay for some more education. ClearCase/Perforce can do lots of what you need automatically. You should practically NEVER have to repeat the same change on a file. You may want to look at some branching/merging techniques which can eliminate the need for colliding checkins also. Rational has a bunch of whitepapers [rational.com] up on their site, as does Perforce [perforce.com].

      Most of all, I would advise you to educate yourself on the options/methods of version control. 12 isn't that big. Wait 'til you get to 1200.

  • This is a common problem most easily managed by a human, not automated tools like CVS. On a per-tree, per-branch or even per-module basis, each chunk has a person responsible for managing changes to the tree, which I traditionally call the "Patch Master". This alleviates the common problem of multiple patches wiping each other out, as described.

    Patches are either sent directly to the patch master, diff'd against a base or branch, or are committed on a per-developer branch, after which the patch master is notified either by built-in CVS mechanism or email. In both cases, it is the Patch Master's responsibility to merge changes from diff or from branches. Merging is a tedious process, but this alleviates the productivity problems affecting everyone on the devteam, limiting it to just one person and allowing everyone else to progress with further development.

    Some people complain that having one person manage patches does not scale (i.e. "Linus does not scale"), but what I'm suggesting is a more collaborative, distributed, team-oriented approach -- perhaps you have a team of 10 developers with 5 "modules" in active development; each module is assigned a "team lead" as patch master and they are responsible for managing commits.

    --jordan
  • by McMuffin Man ( 21896 ) on Friday March 15, 2002 @05:38PM (#3170743)
    I manage a somewhat larger project (35 developers at two sites), and we have very few issues of the type you describe, despite aggressive development schedules. The key is in architecture and planning. If developers are really performing tasks so different from each other that they aren't already in close communication with one another, you need to ask yourself how they happen to be working in the same source file. With proper architecture and modularization, discrete tasks will naturally land in different parts of your source tree.

    There will always be some cases where there's a little overlap, but if your architecture makes these overlaps rare, it becomes relatively easy to see them coming when you lay out your schedule and plan around them.
  • by dant ( 25668 ) on Friday March 15, 2002 @05:41PM (#3170760) Journal
    Yikes! You have the process problems of a group five times your size.

    If you have several people changing the same file in a given day, then one of two things (probably both) is wrong:

    • Coordination between features/projects. Somebody should be keeping an eye on the list of fixes/enhancements that are coming down the line and making sure you don't get too many in the same neck of the woods. This person doesn't necessarily have to be a developer (but should be able to speak developerese), and their whole job is to tell people, 'No, I'm sorry, but there's no room in the schedule for feature X that you want. Can I interest you in feature Y, which is in a different part of the code?'

      Also, if your code is fairly big (more than a few hundred thousand lines), you need to break it into logical chunks and assign somebody to watch every checkin to each chunk. That person is a developer and responsible for making sure new code gets reviewed and unexpected changes aren't being made. If your code is smaller, one person can probably do that.

    • Code Architecture. If several functionally-unrelated features end up needing to change the same file, then something is wrong with that file. There's too much going on in it--you've got to be diligent about keeping your components small and keeping each component in a separate file. See the excellent Lakos book [barnesandnoble.com] for tips on how and why to do that.
    Most likely, your organization went way too fast at some point in the course of setting up the core code architecture and the processes by which you decide what does and does not go into a release. You need to get started fixing both--or this problem will keep getting worse and worse until you're unable to move forward through your own inertia.
  • .2 cents (Score:2, Insightful)

    by ZeroConcept ( 196261 )
    My experiences on an 80-person project:

    1) Minimize dependencies through refactoring.
    2) Try to avoid branching as much as possible.
    3) If you must branch, minimize the branch's lifetime.
    4) Before merging a branch back into the main, merge the main into your branch, recompile, test, and then merge back (see the sketch below).

    Just some thoughts...
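    Step 4 with CVS branches might look like this (the branch name is hypothetical):

      cvs update -r my-feature          # work on the branch
      cvs update -j HEAD                # merge the main trunk into the branch
      make && make test                 # recompile and test the merged result
      cvs commit -m "sync branch with trunk"
      cvs update -A                     # switch the sandbox back to the trunk
      cvs update -j my-feature          # merge the branch back into the main
      cvs commit -m "merge my-feature"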
  • Sorry, but that's a TERRIBLE idea. Things will always happen out of order. The best baseline is a fresh cvs checkout. Programmers are responsible for not checking things in until they know it doesn't break anything. How do they know it doesn't break anything? All the tests pass. Where do the tests come from? (Wahwahwah, we're too busy coding to write tests!) You write tests BEFORE you write code. When you write the test, it's broken. Then you fix it. It works.

    Another useful Extreme Programming idea is "stories". Programmers work on problems small enough that they can fit on a notecard. If it won't fit on a notecard, break it into smaller problems that will. Pretty soon, you have a huge stack of "stories". Sit down with your customer and triage them. Release early, release often, and the code is ALWAYS correct. Or at least all the tests run, which is more than can be said for 99% of all the code out there.
  • We use a couple of methodologies in my work place. Granted we are a Java shop so some of this stuff doesn't apply across the board, but the concepts work for almost any language/development platform.

    First, XP-style testing. Test first, and test often. Write a test case for every class you make; test everything: unit test, regression test, integration test, you name it, just TEST it.

    Second, simplify your development process. There really should not be the need for multiple people to be working in the same file/class/header, etc... Assign pieces of the project to different developers and model it out, and have them work in their respective pieces; if you MUST assign multiple people to the same header, that's okay, but make sure they work together closely so as not to step on each other's toes. This is really a planning issue.

    Third, I assume you are following a build process (nightly, weekly, etc.); we use Ant [apache.org] to help with this. Granted, it doesn't help with the problem of developers stepping on each other during the day, but it forces everybody to check in their code and make sure it works, every day (we use nightly builds).

    Okay, with all of this stuff, we rarely EVER have problems. Our code is usually close to bulletproof (the constant testing), each developer really knows the portions of the code they worked on and can quickly make fixes if needed (the simplification of the development process), and we are constantly aware of our timeline and progress (nightly / weekly builds).

    Anyways, that's just how we do it ;)

    -ryan
  • It would seem to me that you would need a patch code monkey: someone to review the patches before they are applied to make sure that two people do not patch over each other.

    When I worked with CVS I always had problems with people overwriting my changes or making incompatible changes.

    In the Linux world there are usually module maintainers. Often only one maintainer is responsible for the ci/co of the source tree, and more often people pick a branch to work on, then port their stuff to the branch it gets checked in to.

    The Linux kernel does not use CVS; right now they are moving to (or have moved to) BitKeeper.

    Where I work we use a custom program that does locking so that only one person can work on a file at a time, and this eliminates all conflicts. PVCS also does this, and runs under both Windows and various UNIX flavors. Locking is not as bad as it may sound, and in fact it is good in the case where you have several files that are frequently accessed.

    CVS is good when you have fewer people bumping heads. Of course, this has been my experience; others may have other opinions.

    One thing you could do is build a small program layer on top of cvs, or maybe some scripts to do some locking so that people are less likely to bump heads.
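    As a sketch of such a layer (the shared lock directory is an assumption; mkdir is atomic, which is what makes the lock safe):

      #!/bin/sh
      # lock-edit.sh FILE -- claim FILE before editing it.
      LOCKS=/net/shared/cvs-locks                # hypothetical shared path
      lock="$LOCKS/$(echo "$1" | tr / _).lock"
      if mkdir "$lock" 2>/dev/null; then
          echo "$USER" > "$lock/owner"
          echo "you now hold the lock on $1"
      else
          echo "$1 is locked by $(cat "$lock/owner")" >&2
          exit 1
      fi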

  • by Kaz Kylheku ( 1484 ) on Friday March 15, 2002 @05:51PM (#3170805) Homepage
    The reason it isn't concurrent is because you have people separately working on patches, but those patches are dependent on one another, so their ordering matters. When ordering matters, the situation is sequential, not parallel. The whole bit about air traffic control and sequence numbers is about serializing the parallel development.

    The trick is to decompose the development task into chunks that are in fact parallelizable. In turn, those chunks may have sub-chunks which are not parallelizable; those should perhaps be done by one developer, in the correct order.

    No developer should wait around for another's patch, and nobody should develop anything that he or she knows will soon be invalidated by a forthcoming patch so badly that it will have to be substantially reworked. If a unit of work depends on some forthcoming patch so badly, a developer should find something else to work on until that patch arrives. How you know that the patch has arrived is by monitoring your e-mail, or scanning the version control system for changes. The other developers should know that someone is waiting for their patch in order to do the next, dependent part of the change, and broadcast it to the team when they are done.
  • I work with a number of other software developers on a fairly large project where things are constantly changing. We currently use Clearcase. To avoid multiple submissions in order to have changes accepted once, each developer has their own branch, which is a copy of the main one. Developers work in their branch, changing files until they feel they're ready to submit changes to the baseline. They merge the current baseline into their branch, grabbing all changes so far, and then merge back into the main branch. This effectively checks out the files, so only one person can make and submit changes at a time.
    • by conradp ( 154683 )
      As someone who has used both CVS and ClearCase extensively, I'd say that this points out one of the major problems with ClearCase: checked-in files are seen immediately by all developers. (This is sometimes touted as an advantage, or even a "feature".)

      CVS lets developers update to see other people's changes at their own convenience. But that also means developers need to exercise some discipline to update frequently enough that their code does not remain too far out of sync with the baseline. This, combined with a "checkin early and checkin often" approach, should really minimize the number of conflicts, even for fairly large projects.

      I can't imagine the problems that the original poster described ever happening with proper use of CVS, but perhaps there's something in that "developing patch sets" phrase that he hasn't fully explained to us.

      Just a couple of other thoughts:

      Distributed ClearCase works reasonably (though I wouldn't say well) for projects that have a few interconnected sites, but is not well-suited at all for a project involving many different developers each in a different location. CVS is ideally suited for that type of environment.

      CVS really needs a way to move or rename files, and a way to do atomic checkins of multiple files. When will this happen? I know, "sooner, if I help."

      No version control system should prevent people from fixing code just because the code "belongs" to someone else, or is "being modified" by someone else. This sort of "coordination" and "planning" obstructs progress more than it helps.

      Although it's possible in theory for an automatic merge to succeed while being semantically incorrect (with either CVS or ClearCase), I've never once seen it happen. If your code is well-written, the dependencies on certain assumptions should be fairly collocated, not spread all over the code where they could get out of sync.

      In a large, well-segmented project where the "frequent checkin" policy is used, it is rare indeed that two people even modify the same file at the same time, let alone modify the same lines.

  • by PureFiction ( 10256 ) on Friday March 15, 2002 @05:52PM (#3170812)
    I worked on a large project (20+ developers) where this situation occurred from time to time. There were specific interdependencies between some source files for various parts of the development effort, and it was easy to step on other people's feet unless specific steps were taken to prevent this.

    Before I describe how we handled this situation, I want to stress the fact that if someone intelligently divides the labor according to how the changes will affect other parts of the code, the need for developers to squabble over specific changes in specific files should be eliminated.

    Labor should be divided at well-defined "interface points" where the additions/changes to the interface can be done quickly, satisfying the needs of other developers requiring those interfaces to build against; completion of the underlying code can then be done with little interference or effect on others.

    In short, divide work along interface boundaries, and stub out interfaces with enough code to allow compiling, while the developer(s) continue to actually implement the code behind the scenes. Thus, swapping in the actual code has no effect on anyone else's code, except that the stubs are now full implementations and work correctly.

    Ok, so what happens if the division isn't clean and you have two people working on the same file?

    NOTE: When I am talking about file granularity, or developers "owning" specific files, you can also substitute "subsystem", etc. Sometimes developers are working in entirely different areas of the source tree for the most part, and it makes sense to assign an entire subtree to a specific developer. This is the division by interface, which is the usual case.

    What we did was assign specific files to specific developers--whoever had the most work to add/modify in the file. When other developers require changes to a file "owned" by another, they perform the merge, which is verified by both sides to work correctly, and then it is checked in by the "owner".

    This was all accomplished using locks (file checked out, ClearCase) and multiple views. The locking of files was a benefit, as it prevented accidental overwrites of other people's code. Once you check out a file and lock it, no one else (short of the administrator) can check in a modified version and clobber your changes.

    A short scenario:

    Alice: owns file/tree "something.cpp"
    Bob: owns file/tree "modified.cpp"

    Tree is something like this:

    RootBranch
      |
      +-- Devel
            |
            +-- AliceView -- [code]
            |
            +-- BobView  -- [code]


    Alice and Bob are both working in a development branch. Alice has the files she is modifying and "owns" checked out and locked. Same for Bob.

    Bob realizes he needs to make changes to "something.cpp" to support some changes he is making in "modified.cpp".

    He checks out a temporary version, unlocked, of "something.cpp" and makes the required changes.

    He then notifies Alice of the changes, and using the automated merge features she adds his changes, manually resolving conflicts if necessary.

    Bob changes his view to use Alice's version of the file with the rest of the code from his view. He builds and verifies that everything is working correctly. Once this is verified, Alice can check in the changes, and Bob can now use the most recently committed version and continue on his merry way.

    What this boils down to is basically enforcing ownership through locks to prevent accidental overwrites of others' code, and defining clear lines of ownership so that a change is only accepted and merged when the person responsible for that code has tested it (in addition to the developer desiring his minor modifications be included).

  • On CVS and Clearcase (Score:5, Informative)

    by ajs ( 35943 ) <{moc.sja} {ta} {sja}> on Friday March 15, 2002 @05:53PM (#3170817) Homepage Journal
    I've admined both extensively, and I can make a couple of comments here. First, Clearcase is licensed software. Understand that when you get locked out because all of the licenses are in use, you cannot touch your source-code (though someone with a license can copy it into a sandbox for you). Also, Clearcase is a resource pig. It wants a pretty beefy central machine to run on, and if lots of people compile at the same time, the virtual filesystem is not very efficient.

    Now on to CVS. CVS is most everything you want from revision control. Its biggest shortcomings are in branch management and the ease with which changes can be made incorrectly. Its ability to interface with well-known and standard protocols like rsh, ssh and gzip (which is a format more than a protocol) makes it painful to move to anything that's overly proprietary. Its use of your local diff is wonderful ("cvs diff -u" was a revelation for me).

    Clearcase manages branches better and can handle non-realtime latency in updates (e.g. you can have two Clearcase repositories at different sites and you can connect them by mailing tapes around or by dialing up once a day). This can be invaluable when you're working in high-security environments, but is otherwise mostly a moot point.

    Clearcase has improved in the last few years. They've added some local-checkout features where you don't have to work off of the virtual filesystem, and that helps.

    Overall, I'd say CVS is the better system, but Clearcase will sometimes get jammed down your throat, and there are definitely worse fates than to have to get your project working under it.
    • by curunir ( 98273 )
      CVS is most everything you want from revision control

      What about file locking, code promotion, build labels or grouped check-ins? As far as I know, CVS has none of these. These are big issues.

      File locking removes the need for constant branching. Granted CVS's automatic merging capabilities are more advanced than most of its competitors, but branching is the enemy. It should be avoided unless it is absolutely necessary. You lose the ability to have two people work on the same file at once but, from my experience, saving yourself the hassle of losing changes is a big plus.

      Code promotion (as I understand it, I haven't worked too extensively with it) is nice because it allows developers to continue development while their code moves through the QA process and have their bug fixes easily merged back into the source tree.

      Build labels are great because they allow you to group file versions into a logical release (rather than just the current version at a specific date).

      Grouped check-ins are probably the feature that is most lacking in CVS. It amazes me how many people won't call MySQL a real database because of its lack of atomic transactions but are still willing to call CVS a version control system. If all application code were contained in one file, this wouldn't be necessary. However, it is often necessary to make a change to one file that requires a change to another file. If these files are checked in individually (as CVS does it), it is possible to get version conflicts with these files. To make matters worse, if the change needs to be rolled back, you have to remember to roll back both files. The situation gets exponentially worse the more mutually-dependent files you check in.

      The only real advantages of CVS over most commercial versioning software are
      a) it's free... important for open source projects without funding;
      b) it's readily available for making your source tree available to people outside your development team... also important for open source projects;
      and c) the large selection of front ends (gui, text, web and otherwise) that have been written for it.

      However none of these features qualify it as being an "advanced" (as the original post called it) version control solution.
  • If you have multiple developers creating conflicting patches, they are working on the same part of the code and they need to coordinate and communicate. There is nothing that CVS or any other version control system can do about that.

    How they communicate is a separate issue. IRC works for some projects; IM is another choice. CVS also offers file-level locking and that can be used to communicate that someone is preparing a patch for a file, but it requires planning ahead and splitting up files.

  • Oh, did I mention design?

    Most companies find (the ones that actually DO it rather than pay lip service to it) that designing a project PREVENTS the "patches on patches".

    Another thing that many people say they do (but rarely do) is actually hold meetings that accomplish REAL goals rather than perceived ones. I have been in "design" meetings that were merely CYA meetings - nothing was designed and it was all a waste of time. On the other hand, I have been in meetings that I was invited to (but really had no business being in) that actually SOLVED problems a) BEFORE they happened or b) reworked the nature of the beast so that the design was not nearly so intractable.

    Communication. Not just CYA, but actually TALKING and LISTENING (you wouldn't believe the number of software engineers that will just talk and talk and never hear a damn thing).

    Making it a death penalty to break someone's locks helps too...


  • Meaning let the user see realtime results instead of through the web.

    An IDE interface which is sorta like IRC or something
  • CVS is more than adequate for development on small to mid-scale projects. There are several things that you need to keep in mind, however.

    • Make it a policy that developers must do a 'cvs update' before they commit any changes.
    • CVS is not a backup system for your work. Consequently, have developers only commit changes that are complete; do not check in changes that will break existing code.
    • If a developer is making wide, sweeping changes and/or rewriting a major portion of the program/project, do it on a branch.
    • Have someone become the dedicated CVS administrator for a project. This person is the only one allowed to tag releases and manage the integration of developer branches with the main development trunk.
    • Enforce a policy that code committed to CVS must build (see the hook sketch after the side note below). I once worked for a company that enforced this rule by setting a policy that the first person to commit broken code each week had to buy the rest of us a case of beer on Friday afternoons.

    As a side note, problems with header file commits late in the development stage of a project are more likely due to poor planning and design than to the configuration management tools you use. C header files are used to specify interfaces to the various modules of a program and should not change late in development; if they do, that indicates either a deficient or non-existent design.
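    One way to mechanize the "must build" rule above: CVS runs any program listed in CVSROOT/commitinfo before accepting a commit, and a non-zero exit aborts it. A minimal sketch (the script path is hypothetical, and a real check would build and test rather than just grep):

      # In CVSROOT/commitinfo:
      #   ALL  /usr/local/bin/precommit-check.sh

      #!/bin/sh
      # precommit-check.sh -- CVS passes the repository directory first,
      # then the names of the files being committed.
      shift                             # drop the repository argument
      for f in "$@"; do
          grep -q '^<<<<<<<' "$f" && { echo "conflict markers in $f" >&2; exit 1; }
      done
      exit 0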

  • by dgerman ( 78602 ) on Friday March 15, 2002 @05:59PM (#3170855) Homepage
    You need to:

    1. Make your files as granular as possible. If a C file is often modified by more than one person, then split it into two files.

    2. Use a transaction-oriented versioning system (such as IBM's CMVC). CVS is not up to the task for larger organizations with plenty of concurrent development in the same files. You need to be able to roll back an entire transaction, not only a file change! CMVC provides the notion of a transaction: either the patch happens, or it does not happen. It also allows you to lock the files when you are ready to commit.

    3. Hire a person to take care of the build. It is cheaper to hire somebody than to have your developers wasting time doing it. The build manager will be responsible for serialization and rejection of atomic patches. Somebody can still break the build, but your developers don't have to worry about it (only the breaker, who will have the builder's finger pointing at him and get the shame!)

    4. Accept that manual merging is a fact of life. Merging also offers the chance to do code inspection, which is a good thing.
  • We've heard in the past few weeks about how VA's on-site SourceForge is not making the $$$ it needs to, and then we see a "story" on /. about development woes, as well as a few replies from VA guys who are recommending SF.

    Coincidence? That's your call.

  • At work we use Perforce [perforce.com] as our revision control system for verilog ASIC designs. It requires you to explicitly "check out" a file to edit it, and if you attempt to check out a file while someone else has it out, it warns you. It's got a great setup system that allows you to add certain directories to your "view" (like, I'm only concerned with the verilog from the ASIC(s) I work with, and not any of the others) without separating everything out into different projects. It's got auto-merge abilities, as well as warnings at check-in if there are conflicts with lines. You're not allowed to check in a file that hasn't been properly merged, either manually or automatically.

    Perforce has a daemon that is run, and it's got clients for Windows and all the Unix clones, free and non-free.

    Now, perforce itself isn't free, but some things are just worth the money.

  • Everyone needs to stay up to date all the time.

    If you have a regular (hourly/daily) build that smoke-tests and reports the results to everyone, people will be more willing to sync to the latest and trust that they won't lose lots of time with problems. Embarrass anyone that breaks the build, make sure that everyone understands it gets fixed ASAP when it breaks, and that checking in broken code is NOT OK; then people can sync every few hours or every day, and the problem simply goes away.
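
    One hypothetical shape for such an hourly smoke build, run from cron; the module name, make targets, and mailing address are invented for illustration:

        #!/bin/sh
        # Check out a fresh tree, build it, smoke-test it, mail the result.
        WORK=/tmp/smokebuild.$$
        mkdir -p "$WORK" && cd "$WORK" || exit 1
        cvs -q -d /path/to/cvsroot checkout myproj > checkout.log 2>&1
        cd myproj || exit 1
        if make all smoketest > ../build.log 2>&1; then
            STATUS=PASS
        else
            STATUS=FAIL
        fi
        mail -s "Smoke build: $STATUS `date`" dev-team@example.com < ../build.log
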
  • (My screen got jumbled... i hope this isn't a duplicate post.)

    My job has been to create a version control system that solves exactly this problem and automates others. Right now, it uses CVS, but is based off of a system that used PVCS and ClearCase.

    Our revision system starts with release labels. Each directory gets tagged to a particular release label. Baselines, then, are groups of release labels.

    When a developer releases, their changes are branched off of the release label the directory started at. Then, the latest release is merged into their changes (if a release later than the one the branch is based on has occurred). If everything still checks out, the branch is merged to the main development tree.

    We have two scripts that make this process very simple: a commit and a release. The commit will take the current directory and create a users branch. This is for moving between trees - the developer now has a sandbox label they can update to. The release, which calls commit to make the user branch, handles both passes of merging.
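
    With stock CVS, the guts of such scripts might look roughly like this (branch and tag names are invented; the real scripts do more bookkeeping):

        # "commit": park the developer's work on its own branch
        cvs tag -b user_jdoe_task42        # branch rooted at the sandbox revisions
        cvs update -r user_jdoe_task42     # move the sandbox onto the branch
        cvs commit -m "work in progress for task 42"

        # "release": fold the branch back into the trunk
        cvs update -A                      # return the sandbox to the trunk
        cvs update -j user_jdoe_task42     # merge the branch changes in
        cvs commit -m "merge task 42 to trunk"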

    I've noticed that CVS is not the greatest system for handling branches. Its ability to detect common ancestors is, well, flaky. I'm eagerly awaiting the release of Subversion, since my company really doesn't have the oodles of $$ for ClearCase.

    Also, note that when a developer releases, we collect information which can link the release to a bug tracking slip. We also collate the release notes for releases. Ergo, if you want to make a patch for your product, it's easy to collect all of the notes of changes and then create a single ChangeLog (which you usually want to edit a bit).

    The system actually has been extended to handle multiple streams of development. This allows us to bugfix released versions of the product while making later changes, and smoothing the bugfix integration between releases. Adding this feature is both an enormously complex task and a huge timesaver for a large development team. You'd definitely push the bounds of what CVS can handle at this point.

    Just some thoughts. This isn't a simple issue, but can have a solution that is simple to use.

    -T
  • by MidKnight ( 19766 ) on Friday March 15, 2002 @06:34PM (#3171003)
    I'm not going to suggest which source code tool to use, or how to organize your projects. But here's the short version of a few general suggestions that might help:

    • Do the planning. This is the most difficult stage because (as was mathematically proven last year... where is that link?) it is impossible to estimate the complexity of a computer project. So, your plan should be reasonably flexible, but still stick to a concrete form. Oh, and don't call it a schedule! Most managers don't have the self-control (or technical expertise) necessary to grasp the fact that computer project "schedules" are at best a shot in the dark.

    • Set written-in-stone dates to re-evaluate the validity of your plan. It isn't an admission of failure if, half-way through the project, you realize your original plans were ass-backwards :) If it were easy, everyone would do it.

    • Whatever source control package you use, try to avoid the "One Parent Tree, 200 children" syndrome. Any project will have reasonably logical groupings of tasks. Each sub-group of developers working on those tasks should first consolidate their changes into their own sub-tree, and then consolidate with the main tree. 3 or 4 levels of parent-child trees can help break down the pain of file merging.

    • Most development groups have some type of a code review process before the code goes back to a shared source tree. Improving on that, I've had some success at requiring two code reviews: one for code correctness, and one for its impact on the build process. These two reviews should never be done by the same person (or group).

    Those are a few points that I've found to be helpful in my professional work. Your mileage may vary.

    Good luck,

    --Mid

  • The largest project I worked on was when I was leader of a team of 6 people, which was in a team of 20 people, in a sub-project of 100 people, in a project of 1,000 people. And that was just development.

    Tools are great, but for large projects the hammer rules.... don't know the hammer? The hammer says "Fuck up the build, I nail your testicles to the floor; fuck up the release and I nail your eyelids to your testicles first."
  • Sounds like a more coordinated build process would help out a lot. Martin Fowler's article Continuous Integration [martinfowler.com] talks about this problem:
    The fundamental benefit of continuous integration is that it removes sessions where people spend time hunting bugs where one person's work has stepped on someone else's work without either person realizing what happened. These bugs are hard to find because the problem isn't in one person's area, it is in the interaction between two pieces of work. This problem is exacerbated by time. Often integration bugs can be inserted weeks or months before they first manifest themselves. As a result they take a lot of finding.
    If you're not already familiar with the Mozilla Tinderbox, you should examine carefully how they coordinate ten simultaneous and continuous builds across 3 different operating systems with dozens of developers scattered across timezones.
  • If you want to improve parallelism when people work on different pieces of the same program, try to stabilize the interfaces early. Then two or more developers can walk away and do their thing independently of the others: they are all dependent on the interfaces which change very little, and not on each other.

    This is not quite the same thing as merely decomposing a program into modules. It's decomposing a program into modules which have well-defined interfaces that have an identity of their own. But there are other things that are interfaces.

    An interface is a convention by which two parts of an application relate to each other: a set of functions with given arguments and semantics; messages exchanged over a network; file formats or database schemas; directory structure layouts; names of files; language syntaxes; and so on. Essentially, anything which has structure and which is formed in one place and interpreted in another. All such structures create dependencies, and the dependencies must be identified and managed if you want to know how to parallelize development.

  • by Tim Ward ( 514198 )
    There's a well known answer to this (and many other problems that amateurs run across time and time again) - see subject.
  • The problems mount as the source files get longer. On the project I'm currently working on, we have an average of 80-100 lines per file. Are some files abnormally huge? Yeah, but for the most part the files are small. Sometimes a file has to be a little bigger than that to perform its function. Of course, with only a few lines of code per file, it will be very rare that more than one developer will have to touch it at a time, and furthermore, there's only room for so many bugs in 80-100 lines before it becomes more worthwhile to rewrite it. If the code is really that buggy, then the small source files will make it easier to gradually reimplement sections and make the task seem less daunting to the developers. Also, if the interfaces are intelligently designed and well documented in advance, then you have the ability to reimplement files without causing troubles.

    When I open up a piece of someone else's code and find a 7000-line source file (I've seen 20000-line source files in some projects... one piece of control software had an entire user interface implemented in xlib and a network communication layer in one 20,000-line file), it becomes a formidable task to understand it well enough to fix or improve it. If the source files are small and their purposes well defined, then everything gets along better.

    Brian
  • Checkin Token (Score:2, Interesting)

    by Technomancer ( 51963 )
    In one company I worked for, we used Perforce and a check-in token file. Only the person who had the token was allowed to check in; he also had to smoke-test the project before releasing the token. We also had a tradition of adding haikus to the check-in token file. You can read them in one of the easter eggs :)
  • The last two companies I worked for used Perforce, and it solves this quite nicely. It will tell you if a file you want to check in has been changed by someone else in the meantime, and will help you merge in that person's changes before you make your checkin. Perforce is extremely powerful and easy. The only reason *not* to use Perforce is if you're hung up on using an open source product for your source-repository needs. If you just want to get the job done, you use Perforce. For home use, they even have a limited (2-user) version. For open-source projects, they offer free licenses as well.

  • by SuperKendall ( 25149 ) on Friday March 15, 2002 @07:23PM (#3171206)
    I have used CVS, ClearCase, MKS, straight RCS, and a bit of Visual SourceSafe (Ha!).

    What I have found makes the most difference in reducing merge issues is (1) having developers merge on checkin (more on that in a moment), and (2) developing a branching strategy that reduces the need to merge.

    On (1). When using ClearCase, have people set checkouts to be "unreserved". When they check out a file, it won't be locked, so everyone else can use it, and when they check something in, ClearCase will force them to perform a merge if it has been updated since they checked it out. Make everyone learn how to use the heinous merge tool. Also (and I'm not sure if this is fixed now or if I never figured out how to configure this right) I have had major issues with EOL markers in files and ClearCase merge - if a developer's editor changes the EOL character(s), ClearCase merge will claim the whole file differs.
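
    If I remember the cleartool incantations right (the file name is made up, and this is a sketch, not gospel), the unreserved flow is something like:

        cleartool checkout -unreserved -nc foo.c    # no lock taken
        # ... edit foo.c ...
        cleartool checkin -nc foo.c                 # refused if someone checked in ahead of you
        cleartool findmerge foo.c -fversion /main/LATEST -merge
        cleartool checkin -nc foo.c                 # now succeeds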

    For CVS, it pretty much works naturally the way a system should - the default is to check out something "unreserved" (or at least that's how I remember it!). You can either update a file while you're making changes, to keep up to date and make merges smaller, or just wait until you're done - before you can check it in, CVS will make you update the file and merge your changes.

    In both systems, an approach of having the developer merge the file means the person who really knows what's going on can resolve any conflicts with the merge. Most merges are automatically handled and so often no work is required. I'm not sure how much this really addresses your issue, but it can't hurt to rely more on the source control system helping you manage merges.

    For option (2), think carefully about how you want to structure releases. One approach I've used before is having different "levels" of development - you have a production branch, fix branch, development 1, dev 2, etc. You start out code at some given level (say development 2) and as it is completed and tested it moves up through the ranks until it reaches the production level. That worked pretty well and meant a fixed number of branches.
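
    A rough sketch of one promotion step with plain CVS merges (branch names invented): merge what's on the dev-2 branch up into a dev-1 sandbox.

        cvs checkout -r DEV1_BRANCH myproj      # sandbox on the dev-1 level
        cd myproj
        cvs update -j DEV2_BRANCH               # fold in the dev-2 work
        cvs commit -m "promote completed dev-2 work to dev 1"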

    In more recent projects I've been working with a monthly release structure - each month's release requires a new branch. This was not my idea, believe me... it might seem nice in theory, but in practice you might have three or more months of development going on at the same time. This leads to something of a merging cross-nightmare. Some sort of branching structure might well help solve your problem, if only in making you think about how you can more clearly separate changes to a file or set of files.

  • The CVS Way (Score:4, Insightful)

    by SJS ( 1851 ) on Friday March 15, 2002 @07:33PM (#3171253) Homepage Journal

    [I am not a CVS guru, I just use it.]

    If you have to apply patches multiple times, then you're probably patching branches, and developing in the branches. The "CVS Way" seems to be (corrections welcome) to develop in the default branch, and to tag the tree at drop-points -- when you ship the code. If you need to support an old code-drop, you turn the tag into a branch, and then patch the branch.
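
    In stock CVS, that flow is just a couple of commands (tag, branch, and module names are made up):

        cvs tag REL_2_0                             # at ship time, label the tree
        # later, when REL_2_0 needs a fix, grow a branch from the tag:
        cvs rtag -b -r REL_2_0 REL_2_0_patches mymodule
        cvs checkout -r REL_2_0_patches mymodule    # and patch on the branch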

    If you have too many delivered branches being supported at the same time, perhaps you should upgrade those customers to a newer version of the software. They'll appreciate it, and it will simplify your situation.

    The develop-in-a-branch-and-ship-the-default approach is appealing, but troublesome.

    Otherwise, it sounds like your developers aren't playing nice... developer A patches the tree, and developer B goes to commit his changes, but gets told that there are conflicts and that he needs to update. Not wanting to deal with the conflicts, he copies his important files to a safe spot, updates, copies his "important" files back over the top of the conflicted files, and then commits the whole thing, effectively "rolling back" the patch.

    If this is what is going on, you need to educate your developers. With a stick.

    Over the years, I've discovered that a significant amount of heartburn I may have with CVS comes not from any deficiency of CVS, but from the fact that I frequently fail to use CVS "properly" -- meaning, of course, "as intended".

  • by Anonymous Coward


    We all know that BitKeeper isn't free software, and I don't really know why it was chosen by Linus to manage Linux development since we have a much better solution in arch.


    I think it is great. It follows the *nix spirit, splitting a problem into lots of little parts and addressing each part with one pre-existing tool or some home-made bash scripts. For diff/patch lovers (like myself), it is great to find other uses for these tools.


    I used to use CVS... but it doesn't scale so well. When Linus moved to BitKeeper, I gave it a try... but it was so disappointing. So I decided to give arch a try... and I've been using it ever since.


    You can check it here [regexps.com].

  • So this is the only sort of development that we do. A team with 20 engineers is actually pretty small here.

    The first thing to do is to divide the functionality among engineers and have the engineers work in their own modules, talking with other engineers to find out what their needs are. That way, one person is responsible for changes to any given file. In addition, make sure that files are *locked* while somebody has them checked out -- ie you can't change it while I do.

    Next, people should know about what they're going to do to a module before they check it out -- if it's checked out for more than 1-2 days, it's too long.

    And, finally, there needs to be a mechanism to group changes together. So, if you're making a change in the code that requires changes in modules A, B and C, you check out A, B and C, and then you check them back in. That way, later on when somebody wants to compile the entire thing, they know they're getting your full change and not just your changes to A & B.
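
    A tool with real changesets does this natively; with plain CVS you can only approximate the grouping, e.g. by committing the files of one logical change together and tagging them immediately (file and tag names hypothetical):

        cvs commit -m "feature: async I/O" src/a.c src/b.c include/ab.h
        cvs tag change_async_io src/a.c src/b.c include/ab.h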

    In any sufficiently-large software project, 10% of the code will receive 90% of the changes. The key is to find a way to serialize changes to that 10% -- the other 90% of the code will probably never be an issue.

  • by tlambert ( 566799 ) on Friday March 15, 2002 @08:26PM (#3171386)
    As the author of the original 386BSD 0.1 PatchKit software, I have to say that your "air traffic control" approach will not work.

    The 386BSD 0.1 patchkit used a serialization of patch numbers, with central assignment. The reason for this was that the patch dependency management was done by manually applying patches posted to Usenet, and then diffing the modified version of the code against a version with the previous N-1 applied.

    Effectively, it was a "human CVS repository" system.

    It was necessary, because the latency in the Usenet system meant that you couldn't "lock down" a file or set of files for some major change: you had to do what you wanted to do against what you had, which was almost never "the most current consensual version" of the code, and then hope someone else didn't win the race to "the repository" (at the time, terry@cs.weber.edu's incoming email, and then, later, Rod Grimes', Nate Williams', and then Jordan Hubbard's... no one wanted it for very long).

    This led to all sorts of problems; the major one was that the patch kit format was "reverse engineered" (not hard; the patch tools, except the creation software itself, were widely distributed), and a group started releasing patches in the "1000+" ID range, under the incorrect impression that the concern was over the patch namespace collision, not topological application problems. This eventually led to a big argument, and other people going off to play in their own sandbox.

    You've probably heard of "NetBSD". A couple of its founders (not all, of course) were motivated by community rejection of the 1000+ numbered patches, which, while they were not colliding in serial number space, seriously blew out the topological dependency space for modified files.

    In any case, that's exactly what you are doing with your code, when you plan on assigning patch numbers based on expectation of completion.

    With the number of people you have, the comments about contested interfaces being agreed to beforehand, and the comments about you having no real problem here in the first place are probably accurate.

    You can basically take a couple of approaches.

    The first is: don't accumulate patches, just check the code in. This resolves the problem of stale patches by not permitting them to become stale in the first place.

    The second is: "cvs tag" before any major commits, so that there is a baseline from which to work to resolve conflicts.
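
    For example (the tag name is invented):

        cvs tag pre_bigmerge_2002_03_15    # baseline before the big commit
        # ...later, see exactly what changed relative to that baseline:
        cvs diff -r pre_bigmerge_2002_03_15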

    Really, you should not be accumulating large patch sets, with as few people as are involved.

    If you have a huge offline latency from a developer or group of developers (e.g. you send a CDROM to Antarctica, and two months later they send back a CDROM with their patches on it), or if you have a huge number of developers, you should reconsider your choice of tools.

    The 386BSD patchkit serialization of patch sequence numbers through a couple of human beings was a serious mistake. It had the emergent property of creating a tiered set of privilege. I'm convinced that this is what resulted in the current "core team/committer/less-than-dirt" striation in the BSD camps today.

    I mention this, because CVS has a similar, though somewhat less profound, emergent property of "The One True HEAD Branch". By its nature, it encourages a single direction for all experimentation and all forward looking thought, denying nourishment to any contradictory lines of inquiry, by chopping off the roots. CVS is, in a nutshell, anti-research. It prevents people from going off 90 degrees from where everyone else is headed, and discovering new territory.

    Perhaps you've heard of OpenBSD. It emerged because there was "One True HEAD Branch" in NetBSD (an early adopter of CVS, in Open Source-land), and several people felt strongly enough that the focus of the project should be secure systems research, that the resulting code directions were incompatible.

    Tools issues are at the base of nearly any strong divide you can name in an Open Source community.

    Linux currently has this issue; Linus is investigating the use of Larry McVoy's BitKeeper (Larry was smart, in that early on, he recognized the emergent properties tools choices force onto projects, and tried to design around the problem). It turns out that a single human CVS repository doesn't scale infinitely.

    FreeBSD is in the throes of a "To use Perforce or not to use Perforce" decision. Perforce supports separate lines of concurrent development.

    It fosters, as my former boss' boss, Ray Noorda, used to say, "coopetition": help each other make the best implementation according to their design, and then may the best design win.

    Perforce lets this happen, but it also tends to balkanize development, if not everyone is using the tool. There are complaints in FreeBSD that significant work is taking place in Perforce branches that aren't visible to normal CVS users. The Perforce users complain back that there would be no need for Perforce if the development were permitted in the main CVS tree -- along with the breakage that would entail. Both arguments have merit. Right now, there is a truce... more of an agreement to disagree, and not force the issue today, but a promise that the battle will be fought to the death at some later date.

    For your project, a tool which supports multiple concurrent "One True HEAD Branches" seems like it fits the bill (though as I wrote that, I still asked myself why, with so few people, it was an issue for you in the first place).

    Whether the tool you pick is Perforce, BitKeeper, or some other tool that can support that development model is irrelevant.

    What is relevant is that you understand that our tools shape the way we think about solving problems, and if you have already arrived at an approach that doesn't -- or *can't* -- fit into the shape dictated by CVS, then it's probably time to look at another tool.

    No matter what you do, I can guarantee you that layering another, less adequate, tool on top of an already inadequate tool will not fix your problem.

    I can also guarantee you that if you can't change your model to fit an existing tool, you're going to find yourself in the source code control tools business, instead of the business you intended to be in.

    Probably, you should rethink whatever premise it is that's resulting in large, infrequently integrated patch sets. If it's just your release engineering department not wanting to do their work on a branch, well, that's tough. Branch tag for releases as a matter of policy, and move on. If on the other hand, it's something more profound, perhaps you need to rethink your assumptions in favor of what the tools can do, vs. what you would like them to be able to do.

    Alternately, welcome to the source code control tool business.

    -- Terry
  • by HalB ( 127906 )

    Okay, I think 90% of the people responding are missing the heart of the question. The original question was about parallel development, not just working on small changes in the same source tree and then synchronizing in the changes.

    The best way to solve this problem is by better design of the software. If your software is well-designed, the only time you really need to do parallel development is when you're maintaining multiple versions of the software (i.e. service packs 1 and 2 for Windows 95, while you work on Windows 98 elsewhere).

    The way we solve the problems above is by using a task-based change management solution. We use a commercial product, Continuus/CM (now Telelogic/CM), to handle both the parallel release maintenance problem and the management-didn't-enforce-good-design problem. This problem can be exacerbated by having almost random changes in the requirements combined with very tight deadlines. Fortunately, having a task-based CM tool lets you roll with the punches.

    In task based change management, groups of file changes are grouped into a task. This task is one unit of work, something like "change product name string from a character string to a unicode string" which may involve touching several files. These file changes are brought in (or excluded) as a whole - the whole being a "task". Integration test approves "tasks", and if a developer wants a task before it has been integration tested, s/he can bring it in manually, and get all the updates.

    This allows a group to work in parallel with the main effort, by including groups of tasks themselves that haven't gone through integration test (perhaps because they don't work yet, other developers' tasks are needed, or it's just a large change requiring more people to work on it before it can be tested).

    This way, merging is done only when needed. One thing you can do in this program is "show conflicts", which shows you what merges need to be done - on your parallel development effort, not the rest of the tree.

    The merges never really get confused if you use a decent merge tool (we use tkdiff). The only time you would have problems is if everyone is rewriting the file to be merged from scratch every time... And in that case, the software design problems are so bad you really can't do anything about it.

    The Continuus/CM software, however, is very costly. I think BitKeeper does some of the same things, but unless your company is one which doesn't mind its company secrets being posted to the web, you will have to pay for that too.

    aegis is another free tool you might look into. It doesn't have a GUI, though.

  • Actually Use CVS (Score:2, Informative)

    by sdowney ( 447548 )
    The poster seems to be contemplating a system such as Linus used to use to manage the Linux kernel. People would submit patches to Linus, and he would attempt to integrate them into his source tree.

    This is a dumb way to manage a project. It barely works for Linus. And he's a genius.

    The right thing to do is to give your 12 developers access to the CVS repository. If they are geographically separated, use a VPN or ssh to connect to the repository. When they finish their work, they first update their local workspace, compile and test, and if it passes, commit their changes back to the repository. Other developers get the changes as soon as they are available.
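
    For instance, remote access over ssh is built in (host and repository path invented):

        CVS_RSH=ssh; export CVS_RSH
        cvs -d :ext:you@cvs.example.com:/home/cvsroot checkout myproject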

    Twelve developers is the leading edge of a small project. You can have a single team, and everyone can be aware of what everyone else is doing. The best process is the one with the minimum of overhead that suffices for the project at hand. Don't add process for the sake of process.

    On the other hand, don't sacrifice the basics.

    Version control is not 'Advanced', it's fundamental. You might as well say you're using advanced visual editors, such as vi. SCCS is from '75. RCS is from '82. CVS is from '86. This isn't new stuff.

  • I've used clearcase (great tool for many things, but slow and expensive), cvs (cheap, functional, but frustrating at times), and perforce. For those that don't know, perforce is commercial but not that expensive.

    perforce makes use of sequence numbers for just about everything. It uses RCS as the backend, and Berkeley hash tables for a database. Unlike CVS, all files are initially read-only, and you have to "edit" a file (register an edit with the server). This, along with the database, provides extremely fast revision-control operations (diffs, checkins, updates, submits, etc).

    perforce has a really sophisticated branching system (I found a few nifty advanced uses for it, though I wouldn't recommend getting too creative like I did). There is a free version of perforce that only allows a single user. It doesn't allow multiple branches or anything either... But if you're a business, then I'd definitely suggest that it's worth it. It's all the advantages of RCS/CVS with only the problem of cost (I think it's $200/head/year).

    In general, however, you really should establish a formal commit process... We typically have multiple branches dedicated to different aspects of development.

    Each developer gets his/her own branch. Then there's an integration branch. A single human being has access to the integration branch. It is his/her responsibility to take in the changes (from a posted commit number) from each developer's branch and bring them into the integration branch. That branch should then run through sanity testing (to verify more than freedom from conflicts). Finally, when everybody is happy, the integration branch gets merged into the main branch (as a form of release). At each stage, commit labels should be applied (for further backtracking).
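
    In Perforce terms, the integrator's pass is roughly (depot paths invented):

        p4 integrate //depot/dev/jdoe/... //depot/integ/...
        p4 resolve -am        # auto-merge what it can
        p4 resolve            # hand-merge the leftovers
        p4 submit -d "integrate jdoe's branch into integ"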

    -Michael
  • rational (Score:2, Informative)

    by zoftie ( 195518 )
    It seems XP is taking over the masses of programmers who love to code. Rational is for pseudo-programmers and managerial types who have weak analytical thinking and often like to fall back onto diagrams. UML is no slouch, but it is often used to conceal ignorance of, and laziness to understand, the technical underpinnings of a system.
    UML is good, but like many things it is often misused.

    Rational software is not one I would use, but it should work, provided all developers are on the same page. Tools should be used to gain an understanding of the system being built, not to build the system like Lego toys, slap a schedule on the implementation, and sit back and relax.
    Revisions to diagrams are often painful, because you often have to explain basic things, and gut feelings, to managers who do not care and can't understand what you really mean. Often more energy is spent changing diagrams around and talking to pseudo-programmers than implementing quality code.
    Any methodology can be great, but only if management believes in it, keeps everyone on the same page, and makes the process a development and understanding tool rather than a programmer-control tool - which is what often happens.

    I suggest using BitKeeper or CVS in combination with automatic documentation generators (Doxygen?).
  • Boy this opens a real can of worms for me. Caveat Emptor: I used to be an SCM consultant under the employ of a company formerly known as Continuus software. It is now a part of Telelogic AB. FYI: I am Yet Another Laid Off Person (YALOP).

    First. If there's only a dozen of you, and you have a branch, perhaps you should talk. Now if you were like some of my former client base who does follow-the-sun development around the world, this gets a little hairy. By that I mean a company like a TI or BT or Philips is passing code around the world and flexibly allocating resources from multiple sites to a project. Serious commercial SCM systems like Continuus/CM, err.. Telelogic CM/Synergy with DCM and CC with Multisite can be used to coordinate. This is one place that open source has failed to penetrate due to the small size of the potential market and the general antipathy that the average developer has to the CM people. Until, that is, they get religion by losing something very important at a crucial moment ;-) Even then most shops can get by with VSS or PVCS.

    OBTW: I'm an ex-developer, system admin, desktop video weenie, and computer consultant that backed into SCM. I got religion back around the late 80's, but it was pre-emptive. Anyway, concurrent development actually comes in three flavors: concurrent by collision, concurrent by release (patches versus next generation), and finally concurrent by platform (Windows vs Unix, etc). Each is handled differently and under different timeframes (including permanently in the latter case).

    Your post alludes to the first case: a "popular" file, like a header file, that needs checking out a lot. In a shop your size, this should be handled by parallel checkout notification (a common feature in commercial high-end systems) and hopefully a bit of coordination by phone or shouting over the pod wall. If you are in severe RAD mode, I suggest using a shared work area to enforce sequential checkout. Your SCM tool does build a work area, doesn't it? If not, hit yourself on the head and ponder why people like me are now on the dole.

    Okay, so you decide to branch anyway, since you or the other developer are really going to break things with a rewrite. Hey, wait a minute! Shouldn't a big feature change (or a small one) be considered a single unit of work, where all of the files changed by the feature get moved through the mill at once? You have just re-invented the wheel known as Change Packages, Task Based CM, ChangeSets, and whatever else the marketing wonks are using nowadays. Once again, the high-end tools have this (or should). Continuus had it back around 1997. Rat got it into CC much more recently, but there it is.

    If you have a task based system, it makes life much easier to do merges and to put things in or take things out of the build. Being ex-Continuus, I could pound the pulpit about how build managers can check the configuration and its set of included tasks for consistency, but I won't. Instead I'll just mention that graphic history views and a decent merge tool that takes into account ancestor versions should take care of any merging issues.

    One more thing. The sooner you catch the branch and do the merge, the better. Don't punt it to CM unless they are ex-developers who know the product as well as or better than you. You may also come to find that an integrated Change Request Management system, like ChangeSynergy for Continuus (err, Telelogic) or ClearQuest for CC, becomes very important in your life.

  • I used to be a sysadmin. The deal with CVS is that, as many other people have noted, merges are not so much an issue if you do your updates regularly in your workspace. Unless the project is really small (2-5 programmers or so), most people know that they may be working on files in the same module as someone else. When a programmer is ready to unit test, it is vitally important that they update. Regular check-ins along the development path are also very important.

    Remember also that using tags to mark the progress of files from code-ready to test-ready to integration-ready status is very important. As for remembering cryptic commands, that is why you need a technical person, not a corporate type, as your CM: all it takes is a few shell scripts, Perl scripts, and web interfaces to simplify tagging files and looking over the CVS structure. Hack CVSweb; that is why it's there.

  • As pointed out by others, your problem is clearly that your developers are not doing a 'cvs update' before they 'cvs ci'. The tutorials and HOWTOs recommend this consistently. Otherwise, you end up patching *out* what another developer patched *in*.

    Have your developers read the CVS Book: http://cvsbook.red-bean.com/. If it helps, consider buying a copy or several. It helps the authors, and makes a good reference for your developers' library.
  • by joto ( 134244 ) on Saturday March 16, 2002 @09:06AM (#3172873)
    ...that the best solution to this problem is that your fellow developers are in an office across the hall, and that you can walk into their office and talk to them about the changes.

    But that doesn't scale... So you've got to modularize the project so that each team works against their own baseline. Any changes that you make to another team's part must be given to them, for them to check out and integrate themselves. The important thing is communication; it often happens that they are working on fixing the same problem, but in another and better way, and can give you back their fix instead.

    So when you're having problems with your revision control system, what you are really experiencing most of the time, is problems with communication.

    Ideally, all developers should be in the same building, they should work at (mostly) the same time (normal office hours), and they should all be friends, keep a list of each other's telephone numbers, and eat lunch at the same time. Development should be split into projects (having 10-50 people each) that are mostly independent. All the people on the project should meet weekly and discuss their project, important changes, etc... On each project there can be one or more teams, each consisting of four to eight people, who should have offices really close to each other. On each team, development should be split into sub-teams, consisting of 1-3 persons (depending on difficulty and experience), who should share an office, and thus communicate even further. And just as important: this should not be formalized (at least not too much). People should rotate around somewhat, not being stuck with the same people all the time, to get to know other parts of the project, and other people to communicate with.

    The important lesson I am trying to give is that the most beneficial communication, is often the informal. While having a tool help you with managing deltas is surely helpful, it can't solve every problem in the world. You need to work together, but you also need to modularize, and most importantly, you all need to be friends...
