Stop Breaking the Build

Cap'n Grumpy writes "You know the score - you've just finished some coding, do a final cvs update before committing, and all of a sudden all hell breaks loose. Your code now refuses to compile, or xunit starts flashing up red - test failures! One of the other members of your team has checked in something which breaks the build, and they just went out for lunch ... Argh! Did you know there is a solution to this problem? It is a system which makes it impossible for people to check in code which does not compile or test successfully. It allows coders to review other coders' efforts before the code goes into the baseline, rather than after. It organises your checkins into logical change sets. It enforces continuous integration. It is Linux-based and GPL'd. It's called Aegis."
This discussion has been archived. No new comments can be posted.


  • by Pentagon13 ( 166309 ) on Wednesday February 19, 2003 @08:29PM (#5340306)
    Some more ways to improve the checkin process:

    run the code through a lameness filter before allowing the check-in

    enforce a 20 second delay from the time you want to check-in to the time you are actually allowed to check-in

    make sure a different developer checks-in the exact same code a day or two later

    • by Anonymous Coward

      Please try to keep checkins on topic.

      Try to update other people's checkins instead of creating new files.

      Read other people's checkin notices before checking in your own to avoid simply duplicating what has already been said.

      Use a clear subject that describes what your checkin is about.

      Offtopic, Inflammatory, Inappropriate, Illegal, or Offensive checkins might be purged. (You can maintain everything, even purged checkins, by adjusting your threshold on the User Preferences Page)

    • Not only that, but have a really nice build script. If something in the code fails to compile, check out the previous version of that part of the code and try to compile again. You'll be alerted to what's not working, but you are also guaranteed the most up-to-date buildable version of your code, without losing anything.
    • by devphil ( 51341 ) on Thursday February 20, 2003 @08:43AM (#5342913) Homepage
      run the code through a lameness filter before allowing the check-in

      Okay, I agree with you that the /. editors were on crack when they made that call. But the idea of "filter before checkin" has been in CVS for a long, long time.

      Basically, there's a file in CVSROOT that can call an arbitrary program. If that program succeeds, checkin proceeds. If it doesn't, it doesn't. :-)

      I was very surprised to see this article treat such an idea as something new.
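For the curious, a pre-commit filter of the kind a CVSROOT/commitinfo entry can invoke might look roughly like the sketch below. The commitinfo mechanism itself is real (CVS runs the configured program and aborts the commit on a non-zero exit), but the "XXX FIXME" policy and the file names are invented for illustration; a real filter would more likely run a compile or test step.

```shell
# Hypothetical pre-commit filter of the shape a CVSROOT/commitinfo hook
# invokes. CVS calls the configured program with the repository directory
# followed by the file names and aborts the commit on a non-zero exit;
# a real wrapper script would drop that first argument before calling this.
check_files() {
    status=0
    for f in "$@"; do
        # Illustrative policy only: reject files still carrying a marker.
        if grep -q 'XXX FIXME' "$f"; then
            echo "commit rejected: $f still contains an XXX FIXME marker" >&2
            status=1
        fi
    done
    return $status
}
```

Wired up via a commitinfo line such as `ALL /usr/local/bin/checkfiles.sh` (path hypothetical), every commit in the repository would then pass through the filter.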

    • Make sure that the proposed checkin builds correctly. Make sure that there are tests written for this exact change. Make sure that the source is up to date with the baseline. Then make sure that somebody else reviews the code. Then try integrating it. Rebuild those parts of the program that were touched and run the tests again. Make sure it is still not out of date with the baseline. If that all works out, then you can put your change in the baseline.
      Sound complicated? It is if you don't use a tool for it. Or you could just use Aegis to help you write better code.

  • Ads as articles (Score:5, Interesting)

    by Cyclone66 ( 217347 ) on Wednesday February 19, 2003 @08:30PM (#5340307) Homepage Journal
    Is it just me or does this sound like an advertisement?
  • If there is an Eclipse plug-in for any of these CVS alternatives, I'd give 'em a try.

    But what is the point if your development environment doesn't support these versioning systems?

    • by aminorex ( 141494 )
      Does your development environment include a command-line?

      If so, then it's trivial to use another version control system.

      Assuming that eclipse CVS support invokes the cvs command for its operations, it would be trivial to make a cvs-compatible command line for aegis.
    • CVS is a history tool, it stores older versions and branches.
      Aegis is a Process tool, it helps you adhere to a particular model for developing software.

  • by ClosedSource ( 238333 ) on Wednesday February 19, 2003 @08:43PM (#5340360)
    Allowing multiple developers to edit the same file at the same time is inherently more dangerous than a more conservative approach. Open Source has its own special needs, but for closed source development you should rarely need to edit the same file unless your team is poorly organized or your system poorly designed.
    • by etcshadow ( 579275 ) on Wednesday February 19, 2003 @09:27PM (#5340589)
      "for closed source development you should rarely need to edit the same file unless your team is poorly organized or system poorly designed"

      Oh, come on. That's such a load. I'll agree that CVS is a big part of the problem, but not because you shouldn't let more than one developer edit the same file at the same time. Rather, simply because CVS sucks.

      I hate to admit it, because I love the open source movement, and I know what an important role CVS plays in it, but CVS really does bite compared to some of the commercial alternatives. I mean, no trackable atomic changes? No means for integration with job tracking? A shitty-beyond-belief branch methodology? Poor tracking of and integration of changes across branches? Crappy permissioning structure?

      Where I work we use Perforce, which I absolutely adore. Does not have any of the issues I listed above, works in unix and windows, is command line easy and has a pretty damn good GUI (compare to WinCVS, ack!), is wonderfully scriptable, and is really not that expensive (although I can definitely understand the desire to spend $0).

      Anyway, you can't really expect to have a large group of developers both iterating and maintaining a fairly large codebase without ever needing to edit overlapping files... not unless you keep each function/subroutine/method in its own file. Even then, I imagine you'd still run into occasional change resolution issues. The better way to deal with the problem is not to close your eyes and put your hands over your ears; it's to outfit yourself with decent tools capable of dealing with real life. I know that with perforce, and I would imagine with most other half-decent source management systems, simultaneous edits are really not that big of a deal. Unless you've actually edited the same lines in the same file, the user doesn't typically have to do a damn thing. And regardless, it is still the fault of the first user to check in a busted file. Again, atomic changes mean that if it compiled for you, and you check it in, then it compiles for everyone else.
      • I'm not a CVS expert - I use it at home a bit. At work we use Clearcase. This allows us to put a nice logical structure on our development tree, so we know where everything comes from and is going. Moreover, certain people are tasked with the responsibility for particular branches. It is their job to ensure that particular branches build, and to resolve any conflicts.

        For example. We have a main branch (current release), with integration branches for the various subsystems off this. Off these branches our team has a branch, and off this individual developers work on individual features/bugs etc. Because we have clearly defined responsibilities (e.g. X is responsible for merges into the team branch, Y is responsible for this integration branch, Z looks after the main branch), we always know who is responsible for sorting the issues out.

        Because everyone is working on different branches, and the only collisions are at merge time, we usually don't have a problem. And when we do, the responsibilities are clearly defined.

        Best regards, treefrog
      • While I agree on most things, WinCVS is certainly NOT the GUI you should be comparing to - TortoiseCVS is.

        I have used SourceSafe, Perforce (very lightly), WinCVS, and Tortoise. Tortoise is above and beyond the rest. I still use the command line sometimes for intricate log grepping, but for everyday usage, Tortoise is simply amazing.

        TortoiseCVS [tortoisecvs.org]
      • by renehollan ( 138013 ) <<ten.eriwraelc> <ta> <nallohr>> on Thursday February 20, 2003 @10:11AM (#5343559) Homepage Journal
        Lesse...

        I have used, in my time, Clearcase (which I rather liked despite the high price tag and apparent inefficiency repository-side), CVS, and most recently Perforce. For all the complaining about CVS lacking sophistication, it does get 95% of the job of source code control done. But, none of these address the root problem, and it's unfair to pick on one for not addressing it, implying that the others do a better job.

        Inconsistent checkins of code, even code in different files, can break builds, unit, and integration tests. Consider the development process:

        You check out a read-only version of the top of the development tree, that builds clean, and passes all the tests. Great. You get a write checkout on the stuff you want to change. Even if others don't touch those files (which can be difficult to enforce for some kinds of files, like headers of unique enumerations that everyone updates), you can still break things.

        When you test build, you test build with your stuff changed, and everything else frozen. There is no guarantee that when you check in your changes, the build won't break because something else on which your code depends got changed. Like a header file. I suppose you could get a write lock on all the files on which your code depends, but that still isn't good enough.

        Consider a processing sequence where two functions in two code files get called. One of them is supposed to increment some global "the sequence was run" counter. Both do. Oops. It builds fine, but fails regression testing.

        Short of locking the entire source tree when one developer changes something, you can't avoid this in general. Oh sure, you can resync all the files you didn't change, and run a build and regression test before checking your change in, but, lo, you'd have to lock the entire source tree during that time (or at least see if it changed again after your build and test, and repeat the process if it did, possibly indefinitely). Serializing an otherwise parallel development process that way is murder on productivity: even if you run all the sanity builds at night, you now have a one day turnaround just to test build all changes, and hope they make it into the source repository. Kinda sucks, when you just changed a few lines.
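That resync-build-test cycle can be sketched as a retry loop. Everything here is a stand-in: update_tree, build_tree, run_tests, and tree_changed are placeholders for whatever your shop actually uses (say, a cvs update, a make, a test suite, and a timestamp comparison), and the retry cap is arbitrary.

```shell
# Sketch of the "resync, rebuild, retest, repeat if the tree moved" cycle.
# The four helper commands are placeholders for your real tooling.
commit_when_stable() {
    attempts=0
    while [ "$attempts" -lt 5 ]; do          # arbitrary cap: give up eventually
        update_tree || return 1              # resync everything you didn't change
        build_tree  || return 1              # your change broke the merged tree
        run_tests   || return 1              # regression failure: don't commit
        if tree_changed; then                # someone committed meanwhile...
            attempts=$((attempts + 1))       # ...so the whole cycle repeats
            continue
        fi
        return 0                             # tree quiescent and green: commit now
    done
    return 2                                 # tree never settled; serialize by hand
}
```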

        Any automated solution will have to rely on serialization of source tree access, at some level. If the project can be broken down into independent components, serialization within a component can be relatively painless, with somewhat less frequent serialization across components (so-called "integration", which the nightly build strives to avoid because it happens all the time). Experience shows that, unless you want to plan "integration" phases, this defeats a large benefit of the nightly build process, though it is not unreasonable for very large projects, with clearly independent parts.

        So, what's the solution?

        The same, it's always been: divide and conquer.

        While "integration" phases are to be shunned, and "repository serialization" kills productivity, one can take a statistical approach: instead of verifying everything before a checkin to a frozen repository, you design your project to try to isolate the effects of implementation changes from one another. Early on, this may be difficult as interfaces are still being tweaked, but you have less code then, and builds happen quickly. This is a large part of what a project architect is supposed to do.

        If done correctly, a local delta built with a recent snapshot of the source repository is not likely to break the build and unit test of a more recent snapshot if it builds and passes regression tests against the somewhat older snapshot. This isn't foolproof, of course, but a proper design will have the necessary isolation: an implementation change in one area, or the addition of a new interface should be oblivious to your code.

        Now, if interfaces need to change, or new features are to be added that have to be coordinated among developers (i.e. someone adds a feature and someone else writes code to use it), then you need greater coordination among all those that might be affected. Such work can take place on a side branch, and merged to the main development tree only when it is internally consistent. Again, this is generally the responsibility of the common lead programmer of those implementing the changes that need to be coordinated (perhaps the project architect, but in large projects, that kind of detail can be delegated).

        Where trouble brews is when such a synchronized change has to take place across development teams: it usually is more effective if one person handles the integration of the new feature and code that uses it. Because the feature user has the need, it may be him or her. However, review of code to be added to a base for which another team is responsible should, of course, rest with that team: "Your code didn't have this which I needed, so here's the patch, wanna check it out?" Yes, this can cause political friction, but in a mature development team, that won't happen. When communication between teams is minimal, hostile, or non-existent, and such functionality is to be added, is when builds break, and regression tests fail -- and the finger pointing begins.

        • by etcshadow ( 579275 ) on Thursday February 20, 2003 @12:02PM (#5344633)
          Well, obviously no source code control software is gonna compensate for developers who can't write good code, use common sense, and follow *simple* process. Granted, requiring complex process of your developers is asking for trouble, but you can't live without simple rules. Some simple rules really require being backed up by the capabilities of the software, though. An example that comes to mind: consistent atomic checkins.

          It goes something like this: say you change a function signature. It is your responsibility to grep for all the uses of that function and change them. It is also your responsibility to check in all of those changes atomically. That is: an all-or-none checkin of a group of files all at once. That group is also bound together into the future (the relationship is not severed after checkin).
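The "grep for all the uses" step might be wrapped as a small helper like the one below; the function name frobnicate in the usage note is made up. Being purely textual, it can over-match (comments, similarly named functions), so it's a starting point rather than a guarantee.

```shell
# List every file under a directory that appears to call a given function --
# the caller-hunting step before changing a signature.
find_callers() {
    grep -rl "$1(" "$2"
}
```

For example, `find_callers frobnicate src/` prints each file under src/ containing a call-looking occurrence of frobnicate.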

          Another simple point of process that saves your ass is JOB TRACKING. If your source control repository doesn't link into a job tracking system, then I pity you. I've been there, and it sucks. It took a while for us to work out exactly what was missing and how to get it... but now that we have implemented it, it makes life livable again. The idea with a job tracking system is: associate all of your changes with a job. If you want a bleeding edge revision, then sync to the head revision and don't be surprised when stuff breaks. If you want a rough-around-the-edges version to test against, then sync to the highest revision that is entered in the job tracker. Use the various life-cycle statuses in the job tracker to sync to various points. On the whole: Get all files in QA status, or all files in QATESTED status, or CODECHECK status (or however you choose to name these things in your job tracker), or whatever status you want.

          In that way, you don't easily break the build, because before you even try to build off of something, you've tracked its code review and its unit testing. Of course, there are always the possibilities for unit testing problems, but they are usually going to end up being the fault of a developer not following simple processes. In the example I used above where we changed the signature of a function, and updated all of the calls to that function in the same atomic change... you could have had another developer creating a *new* call to that function in their own working copy of the code. You would, of course, never catch that, and they might not either... but hopefully whoever is doing code reviews has an eye open to things like function signature changes, and can catch it at that point.

          Make it clear to your developers that changes not associated with a job will never see the light of day. Every night, review any untracked changes and email the developer, asking them what the hell.

          It's true, there is no way to make everything 100% bullet-proof against checking in bad code. Of course, that's why we do things like freeze integration and test before a release.
    • Having multiple developers tinkering with the same part of the same file is a project management problem- not a tool problem.
    • If you don't have multiple developers working on the same files, your project is poorly organized. Code-ownership is bad^3 news. CVS makes merges 90% trivial. There is nothing fundamentally different about open vs. closed source development. Open development just increases the probability of receiving unsolicited patch sets.

      • What I meant about open vs. closed is that you can't really manage them the same way. In closed source development I've found that having someone assigned as the primary owner of a file has worked fairly well. In open source you can't handle it the same way because you're usually dealing with volunteers and you can't really tell them what to do (I'm speculating here since I haven't done any open source development).
    • by Anonymous Coward
      Editing the same source files is common when you have a shared resource DLL for multiple programs. Think about it...everyone has to add strings, icons, dialogs, etc to the same .rc file. When we did it at my company, we just used resource IDs in different ranges, so merging was never a problem. Multiple people working on the same file is common!
    • by Twylite ( 234238 ) <[twylite] [at] [crypt.co.za]> on Thursday February 20, 2003 @01:48AM (#5341780) Homepage
      but for closed source development you should rarely need to edit the same file unless your team is poorly organized or system poorly designed

      ...or if you happen to have a high level of code reuse; or if you are doing firmware, software and driver development in parallel; or if you have a small but busy team; or if you have a large but busy team; or ... or ... or.

      This is a ridiculous statement. There are any number of reasons that multiple developers will work on a single file at once, especially in a well-structured organisation or development. Development, code inspection, fixes in response to testing and maintenance fixes (bringing a patch from a release into current) can ALL happen simultaneously in a development tree, and can ALL happen simultaneously in one file. They just shouldn't often happen simultaneously in one method/function.

      • "...or if you happen to have a high level of code reuse;"

        One would hope that reused code would be fully developed and debugged before it was shared. Sure, new bugs will turn up, but the thought of people tweaking reused code at will sounds pretty dangerous to me. I've seen plenty of cases where unilateral changes to reused code by one group have brought another to a grinding halt. Shared code should be carefully managed and should not be altered without prior coordination with all its users.
        • Shared code should be carefully mangaged and should not be altered without prior coordination with all its users

          This is true, and usually means use of procedures external to source control. Of course there are some revision control and/or configuration management products that have workflow support, to gain prior approval for a change, or to perform a "trial" checkin so that the change can be approved before actually becoming part of the development branch.

          Quite often though you will find that shared code is in development by multiple teams simultaneously, and there is no other option. In my experience this is far more common than sharing of fully developed code.

          The reason is that it is usually prudent and beneficial to reuse fully developed code as a library, whereas closely linked products (such as firmware and associated software) need to share definitions and often have the same or mirror-effect routines.

          In an iterative/incremental process, there is no chance for one group to fully develop a particular shared file. Obviously the groups need to work closely and communicate regularly, but the problems of sharing can be managed if they are identified up front and procedures put in place.

    • CVS ensures mutual exclusion over an entire tree when you commit, and performs an up-to-date-check which will fail your commit if *any* files, including ones you did not touch, have new revisions that you have not incorporated into your working copy.

      Taking advantage of the ``up-to-date check failed'' feature requires that you do a whole-tree commit at some level of the tree structure that you care about---perhaps the project root---rather than doing a selective commit on individual files. Change to an appropriate directory level and execute a ``cvs ci'' without any filename arguments.

      The other part of the equation is doing a proper regression test before committing. It would be nice to automate this, but in the real world, that is a pipe dream.

      What if your tests have to be executed on something other than your development platform, such as a PDA or embedded system? What if you are targeting several flavors of Unix-like operating systems and Windows?

      Your regression test run on Linux does not assure you that you did not break the Win32 version of the program.

      That's not to say that there isn't value in running at least *some* automated tests, like for instance exercising the functions of some portable library.
  • by crow ( 16139 )
    At work, we use CVS, and our build tools guru has it set up so that checkins fail unless we've built without errors first. The testing isn't integrated (it can't be, as we're cross-compiling for an embedded system), and you can break the build by doing a partial check-in or with bad interaction between different checkins, but for the most part it works really well.

    It seems like common sense for most projects to refuse to allow checkins without building first, and that's the sort of policy that can have a fairly effective mechanism for enforcement.
    • Yeah, but what if your build takes over an hour? (Like where I work) Say you need something checked in, like now. You're sure it works, but you have to hang out for an hour to make sure it went in? There is a certain level of trust you need in a development environment. Waiting for it to completely rebuild simply isn't always an option.

      Sure, you can generally do partial builds. But what happens when you just had to change that include file that somehow affects nearly everything else? You're stuck for that hour.
      • Yes, you're blocked from checking stuff in until the test build finishes, but even though you are SURE it works it's better to have a sanity check just in case.

        Have some coffee, fill out the TPS report, look at pending bugs. There's a few things you could be doing during that hour.
        • The bigger problem might be locking the tree while waiting for the test build to complete. If you have a 12-hour work day, a one-hour build, and 30 developers, less than half your team will be able to commit code on any given day, or you'd have a hard time committing because you wouldn't know that somebody already has a pending commit that's waiting on the build.

          This is a bad thing to programmatically enforce during the commit on large projects using CVS.
      • Yeah, but what if your build takes over an hour? (Like where I work) Say you need something checked in, like now. You're sure it works, but you have to hang out for an hour to make sure it went in?

        In my experience, if the modification affects enough code to make it an hour rebuild, that's when you have to be extra careful. An hour-long build is generally something people don't sit and watch...they run it over lunch or something. Anyone who grabs the broken update will come back after an hour, find it didn't work, have to revert to a previous version, spend another hour compiling, and then have to do it all again once it's fixed.

        If several people have to do this, that's a lot of wasted engineering time. If it's such a simple change that it should be just fine, then an alternative might be to call in a couple reasonably paranoid engineers to have a look. If everyone says 'no problem', then maybe skip the test-compile. Better 3 people waste 5 minutes than everyone waste an hour. If more than a couple lines are changed, though, I'd go for the test compile, regardless.

      • I haven't RTFA yet, but I have glanced at the page.

        I think that if you work through this you could come up with a workable solution. Perhaps this would work, though not the most efficient, but should be stable.

        When you check something in, it is checked into an unstable tree. You set the build process running then. If it builds and the validation tests pass, it checks the changes into the nightly tree. If the unstable build/tests fail, it emails the developer about the issue. The nightly tree is built after all the unstable checkins from the day are either rejected or submitted. Then the nightly build is compiled and tested; if this fails then the project manager is notified, who then tries to figure out why and bust some nuts, er, has the developer(s) responsible for the failure fix it. If there were no issues, the nightly build is put into a stable tree.

        In theory this should yield a "last stable build" on a regular basis unless there is a major overhaul/development currently in process, in which case the build probably isn't a good idea to ship to a customer anyway.
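The staged promotion sketched above (unstable tree, then nightly, then stable) might be skeletonized like this. build_and_test and notify are placeholders for a real build driver and a mail step, and modelling "promotion" as a directory copy is a deliberate simplification.

```shell
# Sketch of staged tree promotion: a change area is built and tested, and
# only on success are its contents copied forward into the next tree.
# build_and_test and notify stand in for real tooling.
promote() {
    src=$1
    dst=$2
    if build_and_test "$src"; then
        mkdir -p "$dst"
        cp -R "$src/." "$dst/"       # promotion = copying the tree forward
        return 0
    fi
    notify "$src"                    # e.g. mail the responsible developer
    return 1
}
```

Running `promote unstable nightly` and later `promote nightly stable` gives the two gates the comment describes.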
    • If the cvs server has not checked your files in yet, how can it use them to test the build? That is not cvs behaviour but some wrappers around cvs. I am sure that if you are persistent you can bypass them easily.
  • Request Tracker (Score:5, Interesting)

    by babbage ( 61057 ) <cdeversNO@SPAMcis.usouthal.edu> on Wednesday February 19, 2003 @08:58PM (#5340441) Homepage Journal
    This is what Jesse Vincent has been using for RT: Request Tracker [bestpractical.com] development for several months now, rather than CVS [bestpractical.com]. Apparently it's much nicer than CVS, but it's exotic and not many people know about it or how to submit patches with it, so RT3 from what I can tell is kind of a one man project at the moment. In any case though, I've heard nothing but good things about Aegis, and it seems like a tool worth checking out if you have a software project to manage.

    (And for that matter, if you need to track software bugs & other issues, RT rocks. Don't bother with Bugzilla, it's not half as good as RT is for most of the same tasks. And no, no one is paying me to endorse RT or anything, it's just great software and, in reference to Aegis, I respect the judgement of the guy developing it...)

    • Re:Request Tracker (Score:3, Informative)

      by phamlen ( 304054 )
      At the risk of being modded Off-Topic, I will also state that RequestTracker (a system that's currently being developed using Aegis) is a great tool for tracking software bugs, issue tracking, trouble-ticketing, and general to-do list issues.

      It rocks, it's free, and it does virtually everything that you need it to do without complexity!!!

  • Pair check-in (Score:2, Interesting)

    by UberChuckie ( 529086 )
    There is a rule for check-ins when my team is trying to stabilize code: have another developer go through your changes in case you did something silly. It also serves as a mini code review. This includes making sure the code builds. :)
    • At my work, that's the rule for *all* checkins. Whenever you want to check something in, it must be reviewed by someone else. Doesn't matter if it's a one-line change or the addition of a whole new subsystem. Each checkin also has to have a description with it outlining the change, its tracking number, which files, and who it's reviewed by.

      Things that break builds or functionality can still get through, but having an established process (yes, I know how much fear that word strikes in the hearts of some developers) can make life a lot easier.
  • I have been backward and forward over Peter's web site, and I haven't been able to find any suggestion that it has a network port and client.

    On a network, does it only work on a shared filesystem like NFS?

    • From the Aegis HOWTO manual (howto.pdf):

      6.1.3 Multi User, Multi Machine

      Aegis assumes that when working on a LAN, you will use a networked file system of some sort. NFS is adequate for this task, and commonly available. By using NFS, there is very little difference between the single-machine case and the multi-machine-case.

      This is wishful thinking. To be reliable, version control must be based on some sane client-server protocol.

      Visual SourceSafe's reliability problems are caused in large part by the use of file sharing rather than a proper protocol. CVS is also known to be unreliable when a repository is accessed via NFS rather than in client-server mode. This practice is loudly discouraged by experts on the CVS mailing list.

      NFS implementations differ from one another. As a rule of thumb, NFS works best when you have a matching client and server implementation from the same vendor.

      • are you saying this:
        • Aegis should use a client server model because I say that is more reliable.
        • visual source safe is bad because it uses something similar to NFS
        • CVS over NFS is bad, so therefore anything over NFS is bad
        • NFS is bad

        If you don't like NFS, you could use any other networking filesystem, maybe Samba? Aegis doesn't care.
  • by Anonymous Coward on Wednesday February 19, 2003 @09:43PM (#5340665)
    1) Post advertisements as news articles
    2) ???????????????????
    3) Profit!!!!!!!!!!!!!!!!!
  • Solution (Score:5, Funny)

    by jsse ( 254124 ) on Wednesday February 19, 2003 @09:59PM (#5340757) Homepage Journal
    ...and they just went out for lunch ... Argh! Did you know there is a solution to this problem?

    Go to lunch.
  • Cookies (Score:5, Insightful)

    by IanBevan ( 213109 ) on Wednesday February 19, 2003 @11:34PM (#5341134) Homepage
    Well, as odd as this might sound, when anybody checks anything in to our source control that breaks the build, they have to go out and buy the entire development team a coffee and/or chocolate cookies. This has worked like a charm. It not only raises awareness and makes people more careful, it has actually increased morale in the team, as we look forward to the weekly build to see who has cocked it up :-)
    • This would become very expensive very quickly where I work ;)

      I have the advantage of mostly working in a pair on my project, so there is no real chance of fuxoring ;)
    • At a development office I once contracted at, the person who broke the build had to wear one of those Cheesehead hats from Wisconsin.

      Being from Wisconsin, I thought they had done something particularly cool until they told me.
  • Krispy Kremes (Score:4, Interesting)

    by mpechner ( 637217 ) on Thursday February 20, 2003 @03:33AM (#5342098) Homepage
    2-3 times a month we get Krispy Kremes. That is the penance for breaking a nightly build. Engineer or build meister. Screw it up and bring in the donuts.
  • by droyad ( 412569 ) on Thursday February 20, 2003 @03:41AM (#5342122)
    The baseball bat. At my work, after we beat the first few developers to a pulp for checking in broken files, we found that the rate of broken files decreased dramatically.
    • We had the same policy at my last workplace. Did we work together? Mad or cruel as it seems, it worked.

      I wish they'd implement a scheme like that where I'm working now. They haven't quite got the hang of source control. Most of the time, the version under source control doesn't even compile, let alone run. No-one, it seems, bothers with a local merge and test before checking in. Also, everyone works using data off a shared network drive, so when any of that changes, you're screwed.

      I need a new bat. With rusty nails.
    • I think the baseball bat is too cruel. Where I worked, a rubber chicken lart worked best, without physically harming the developers (much). Also, keeping a score of how many times each developer was larted every month helped a lot. The one with the highest lart-score was to be called "crap-coder" for that whole month.

      It's social systems like these that work better than _every_ software-based solution :)

  • A lot of times, I just send a copy of the build failures to the person who broke the code. But the real problem is that we use VSS. With all the GUI tools, you'd think it'd be a no-brainer to do a "project difference" before going to lunch, but 1 time out of 10 that step gets skipped.

    But the best way is to just have a neutral machine that no one develops on. It's just for getting updates and doing test builds.
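
    A neutral build machine like that is easy to script. The sketch below shows one possible shape for the cron job: a clean checkout, a build-and-test run, and a complaint when something breaks. The directory, checkout command, and build command here are hypothetical placeholders, not from any particular shop.

```shell
#!/bin/sh
# Hypothetical nightly-build helper for a neutral machine that nobody
# develops on. The checkout and build commands are illustrative
# placeholders; substitute your real project's commands.
nightly_build() {
    workdir=$1      # scratch directory, wiped clean on every run
    checkout_cmd=$2 # e.g. "cvs -q checkout myproject"
    build_cmd=$3    # e.g. "make all test"

    # Always start from an empty tree so stale files can't mask breakage.
    rm -rf "$workdir" && mkdir -p "$workdir" && cd "$workdir" || return 2

    if $checkout_cmd && $build_cmd; then
        echo "BUILD OK"
    else
        echo "BUILD BROKEN"  # this is where you would mail the team
        return 1
    fi
}
```

    Run it from cron, e.g. `nightly_build /var/tmp/nightly "cvs -q checkout myproject" "make all test"`. Because nothing else touches the machine, a red result can only mean someone checked in something broken.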

    Aegis calls itself a project change supervisor. It's nice that there's a framework, especially for large projects. For small ones, throwing wads of paper or other methods of embarrassment works fine.
  • I started exploring Aegis for managing a Linux kernel tree a few weeks ago, and I've found that it has a number of cases where the program deliberately aborts if you want to adopt a system administration policy different from the author's. Grep through the source for error messages like "too privileged", "AEGIS_MIN_UID", "AEGIS_MIN_GID" for examples (there is even code to bomb out if you modify AEGIS_MIN_[U,G]ID in ways that Aegis doesn't like).

    Whether Peter Miller's favorite policies are optimal is not the issue to me. For me, an important aspect of free software is that the owner of a computer is given maximum control. If the group maintaining a piece of software is basically trying to wrest that control from the computer owner, then my distrust of that group is enough so that the amount of work I would have to do in studying every line of their code and undoing their logic bombs exceeds the productivity benefits that I would expect from using the software. For me this attitude problem is the critical issue. If this problem were fixed, I would probably deploy and contribute to Aegis.

    For completeness, I'll mention a couple of other issues that other people considering using Aegis might find helpful to know about, although fixing these other issues probably would not tip the balance for me as to whether or not I'd choose Aegis.

    Firstly, Aegis has a bit of an unnecessary learning curve because Aegis does not use commands compatible with cvs, sccs or rcs when this is possible. If you invest time in learning Aegis commands, you will not get a return on that investment elsewhere. In comparison, BitKeeper uses rcs and sccs command names and options when possible. I consider Aegis's freeness to be more important than BitKeeper's rcs/sccs command line familiarity, and would also consider incompatible command names and options to be worthwhile if the new syntax were enough of a user interface improvement, but that doesn't seem to be the case here.

    Secondly, I've become skeptical that Aegis really needs to be a single software package (yes, I know that "cook", Aegis's recommended replacement for "make", is distributed separately from Aegis). I'd like a cvs variant (incompatible if need be) that would support atomic operations, symlinks, inode information, renaming files, and maybe some distributed development features. I think Aegis's repository control based on regression testing is a great idea, but I also think that it could be implemented more simply as a separate package that used some kind of cvs successor (or, ideally, was retargetable to a number of software repository types).

    I think Aegis has some great ideas, but I currently think it will be less work, especially in the long run, to find or make something that I like better.

    • Aegis is designed to eliminate several common classes of bugs. Already mentioned are who-broke-the-build, no-tests and no-code-review. Another source of disasters is do-everything-as-root.

      There is no need to perform the entire Linux kernel build as root - only the final install requires root privilege.

      I have quite successfully done Linux kernel module development with Aegis, performing most operations as a simple user, and using sudo for the few commands which require elevated privileges.

      • There is no need to perform the entire Linux kernel build as root - only the final install requires root privilege.

        Users should be able to make that decision themselves. (It's also worth noting that Aegis, as shipped, also refused to run from my personal account because my user-id belongs to group 0, "system".) Imagine if other development tool authors also inserted logic bombs to enforce their varying and probably conflicting system administration philosophies. I repeat:

        Whether Peter Miller's favorite policies are optimal is not the issue to me. For me, an important aspect of free software is that the owner of a computer is given maximum control. If the group maintaining a piece of software is basically trying to wrest that control from the computer owner, then my distrust of that group is enough so that the amount of work I would have to do in studying every line of their code and undoing their logic bombs exceeds the productivity benefits that I would expect from using the software.

        Thanks for responding anyhow.

    • Aegis has a bit of an unnecessary learning curve because Aegis does not use commands compatible with cvs, sccs or rcs when this is possible.

      Aegis is a process wrapper around any of those source code management systems, so it's doing a great deal more. I view the fact that Peter Miller chose not to just pass-through the (often clunky) underlying syntax of the source-code management commands to be a great strength of Aegis, not a weakness. Why? I can develop on any Aegis project and not have to know or care what underlying source code management system is being used. I get to focus on developing and testing code, not the specifics of wrestling with RCS or SCCS or CVS or fhist.

      In other words, it's an abstraction layer, and as in any abstraction, you lose a little fine-grained control. But in my experience this is more than offset by the benefits. I personally use Aegis on multiple projects; some older ones still use RCS, while I've started using CVS on some newer ones as my tastes have changed over time. Guess what? I've been able to do this without having to re-learn my development methodology, because I've been using Aegis to wrap the underlying source-code management details.

      I think Aegis's repository control based on regression testing is a great idea...

      Hear, hear. This is very well-thought-out in Aegis, and a huge win.

      ...but I also think that it could be implemented more simply as a separate package that used some kind of cvs successor (or, ideally, was retargetable to a number of software repository types).

      Ummm... it already is a separate package. You can use Aegis in conjunction with RCS, SCCS, CVS, fhist, or (probably just about) any other underlying source code management system by configuring the appropriate commands into the project config file. What makes you think Aegis isn't already separate enough from the underlying source code management?
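
      For the curious, that pluggability looks roughly like the fragment below: a sketch of an Aegis project "config" file using RCS as the underlying history tool. The field names follow the aepconf manual page as best I recall, and the command strings are illustrative, not copied from a working project.

```
/* Sketch of an Aegis project config fragment (field names per the
   aepconf manual page; command strings illustrative only). */
build_command = "make";

/* Swap these commands to retarget the underlying history tool
   (RCS shown here; SCCS, CVS, fhist, etc. work the same way). */
history_create_command = "ci -u -t/dev/null ${input} ${history},v";
history_get_command    = "co -p -r${edit} ${history},v > ${output}";
history_put_command    = "ci -u ${input} ${history},v";
history_query_command  = "rlog -h ${history},v";

diff_command  = "diff ${original} ${input} > ${output}";
merge_command = "merge -p ${original} ${MostRecent} ${input} > ${output}";
```

      A developer on the project never sees these commands; they just run the usual ae* workflow, which is the abstraction the parent post is describing.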

      • I view the fact that Peter Miller chose not to just pass-through the (often clunky) underlying syntax of the source-code management commands to be a great strength of Aegis, not a weakness.

        I never said that Aegis should "just pass-through" commands. If Aegis were to use, say, rcs-compatible commands where possible, I would still want it to use those commands even if it were using SCCS underneath (like BitKeeper). I also said "[I] would also consider incompatible command names and options to be worthwhile if the new syntax were enough of a user interface improvement, but that doesn't seem to be the case here." Aegis installs 17 new programs (at least they begin with "ae") plus an "aegis" program that is designed to implement yet another command syntax in an attempt to ameliorate the complexity of the ae* programs.

        Perhaps you feel that the argument syntax of these 17 new programs is a great improvement over cvs (but readers may notice that you don't explain why), in which case I'll leave it to interested readers to check out some of the aegis manual pages and form their own conclusions.

        For me, I probably would be willing to climb the Aegis learning curve if the logic bomb issue were resolved, but I mention the unnecessary command line incompatibility issue because other people may find this information useful when surveying source code control systems.

        • Aegis installs 17 new programs (at least they begin with "ae") plus an "aegis" program that is designed to implement yet another command syntax in an attempt to ameliorate the complexity of the ae* programs

          This is a misrepresentation. The aegis program itself is not to "ameliorate the complexity" of other programs, it is the workhorse that implements the process and is the only executable the vast majority of users will end up running. The other ae* programs are for ancillary functions like generating reports, distributing change sets, command completion, etc. They need only be learned as needed, or by an administrator, in much the same way that not every developer who uses CVS has to learn the branching mechanisms.

          Perhaps you feel that the argument syntax of these 17 new programs is a great improvement over cvs (but readers may notice that you don't explain why)...

          No, I think I stated why I consider Aegis a huge step forward from underlying, raw source-code management commands, and it is for far more important reasons than command-line syntax. Among other reasons, Aegis is worth the learning curve because, once configured, you sidestep having to be preoccupied with the particular syntax or specific model of an underlying source-code management system. You get to focus not on managing your source-code tree, but on developing and testing your software. That's supposed to be the whole point of good tools, yes?

          (If we're going to point out parts of the argument that are being ignored, though, I didn't see anything explaining why you think Aegis isn't separable from the underlying source-code management after I pointed out that it can be configured to use any package. :-) This makes me realize that I'm confused about one seeming inconsistency: you complain on the one hand that Aegis isn't separate enough from the source-code package, and at the same time say that it isn't enough like CVS or RCS or whatever you'd like it to look more like. Perhaps I'm missing your underlying point...?)

          ...in which case I'll leave it to interested readers to check out some of the aegis manual pages and form their own conclusions.

          Agreed; one size doesn't fit all, so people should figure out if the Aegis model (or any other process/tool set) is appropriate for their needs. Forewarned is forearmed, as they say...

          • The aegis program itself is not to "ameliorate the complexity" of other programs, it is the workhorse that implements the process and is the only executable the vast majority of users will end up running. The other ae* programs are for ancillary functions like generating reports, distributing change sets, command completion, etc. They need only be learned as needed, or by an administrator, in much the same way that not every developer who uses CVS has to learn the branching mechanisms.

            I have to admit, I was looking at the many commands in the howto and didn't notice that very few of them are in /usr/bin (where did they go?). Even so, aediff, aepatch, aeimport, and aecomp look rather fundamental. The "common commands" section of the howto lists 11 new commands, for example. I think it's fair to say that, given that the "aegis" program and the howto define two different command syntaxes for the same thing, somebody else must also have felt that there was something wrong with at least one of these syntaxes (both of which share the disadvantage of not being compatible with some other program like rcs or sccs in cases where that is possible).

            It is irrelevant that you consider Aegis to be "a huge step forward from underlying, raw source-code management commands, and it is for far more important reasons than command-line syntax", because all I am saying is that making these commands incompatible in cases where the incompatibility is unnecessary is an unnecessary disadvantage.

            (If we're going to point out parts of the argument that are being ignored, though, I didn't see anything explaining why you think Aegis isn't separable from the underlying source-code management after I pointed out that it can be configured to use any package. :-)

            What you call "underlying source-code management" is not what I was referring to. I said, "I'd like a cvs variant (incompatible if need be) that would support atomic operations, symlinks, inode information, renaming files, and maybe some distributed development features." Aegis apparently has a layer for representing these types of transactions (that operates on top of an existing file-based revision control system such as sccs or rcs). That layer is what I think should be separate, as I think it has much broader applicability.

            • Even so, aediff, aepatch, aeimport, and aecomp look rather fundamental.

              Umm... They're not fundamental. At least, I can personally vouch that I never use any of them, and I've been using Aegis heavily for five years. (No, wait, I can remember using aepatch once--it generates an incremental change set in a cpio-based Aegis format for distribution to other sites or other branches.)

              So you obviously didn't RTFM...

              ...making these commands incompatible in cases where the incompatibility is unnecessary is an unnecessary disadvantage.

              ...but you somehow know enough about how Aegis is designed to have already determined, after a quick sniff, that the design decisions were "unnecessary."

              I see.

              Hey, guess what? I can shout while quoting your original post, too... :-) :

              I also think that it could be implemented more simply as a separate package that used some kind of cvs successor (or, ideally, was retargetable to a number of software repository types).

              And, as I've pointed out, Aegis is retargetable to any number of source-code management models and repository types. So when your uber-CVS gets invented, my assertion is that the Aegis model will handle it just fine, thank you. And I'm confident of that because I've actually been using Aegis and experiencing its flexibility first-hand.

              aegis apparently has a layer for representing these types of transactions (that operates on top of an existing file-based revision control system such as sccs or rcs). That layer is what I think should be separate, as I think it has much broader applicability.

              And based on my experience with it, Aegis is a separate layer that leaves the details of source code management and building software to the underlying utilities, which sounds to me exactly like the kind of separability you seem to think is a good idea... So are we just violently agreeing? :-)

              More specifically, it seems like you're noticing that Aegis accommodates file-based source-code management systems like SCCS and RCS, and leaping to the conclusion that it doesn't do other things that you'd like. But Aegis does operate on atomic change sets, handles symlinks and file renames just fine, and has a proven distributed development model. The only capability you list that I'm not sure about is inode information, but that's simply because I don't know how you'd want that tied in to your process layer. If the underlying (source-code and build) tools handle inode information just fine, I'd be highly surprised if Aegis would even blink.

              Look, I'm not saying you, or anyone else, have to like Aegis. Its command syntax evidently rubbed you the wrong way, and that's fine. But out of that initial bad taste, you're making assertions about what Aegis' design can handle that are demonstrably not true. I'm frankly at a loss as to why you don't just admit the fact that, based on an initial cursory glance, you had some misimpressions about what Aegis could do, and no real harm done.

              • You haven't shown any benefit to the different names for commands and options used in Aegis in the cases where those differences are unnecessary (see BitKeeper as an example of using compatible names and options when possible).

                You also seem to be very confused about the issue of an "uber-cvs" (which you seem to be confusing with file-based revision systems like SCCS and RCS--the "uber-cvs" part is not currently distributed separately from the rest of Aegis). I realize that Aegis does contain an "uber-cvs", to use your term. My point is that that functionality would have broader applicability as a separate software package.

                Since you say "So you obviously didn't RTFM" and then completely ignore my point about all of the additional commands in the HOWTO, I don't think that you're reading my responses carefully enough for it to be a good use of anyone's time for me to respond to you further. You may have the last word now if you like.

    • It has been pretty quiet lately on the Aegis mailing list; I don't recall seeing your name there. There are other people who could have helped you if you had asked.
      Maybe you should have paid more attention to the manual than to dissecting the source.
      • It has been pretty quiet lately on the Aegis mailing list; I don't recall seeing your name there. There are other people who could have helped you if you had asked.

        I don't recall saying that I had found a case of Aegis not running as intended. I said I do not believe it would be a net savings of my time to adopt software that is intended to run that way (with logic bombs, unnecessary new syntax that is not a substantial improvement, etc.).

        Maybe you should have paid more attention to the manual than to dissecting the source.

        I don't know what you mean by this. I read much, but not all, of the documentation in aegis/lib/en. Perhaps your meaning would be clearer if you could quote a passage of the documentation that solves, to my satisfaction, one of the problems that I referred to.
