Stop Breaking the Build
Cap'n Grumpy writes "You know the score - you've just finished some coding, do a final cvs update before committing, and all of a sudden all hell breaks loose. Your code now refuses to compile, or xunit starts flashing up red - test failures! One of the other members of your team has checked in something which breaks the build, and they just went out for lunch ... Argh! Did you know there is a solution to this problem? It is a system which makes it impossible for people to check in code which does not compile or test successfully. It allows coders to review others' code before it goes into the baseline, rather than after. It organises your checkins into logical change sets. It enforces continuous integration. It is Linux-based, and GPL'd. It's called Aegis."
Re:Doesn't work on Windows (Score:1)
Well, you sound quite uninformed (or are acting uninformed). All I'll say is that the server you are using RIGHT now is written for a Unix system - just one simple example for you.
There's lots and lots more: without Unix systems, computing wouldn't be as good as it is now. Unix is a very important family of operating systems!
Re:Doesn't work on Windows (Score:2, Insightful)
The server side of computing is not really as narrow as you have explained it.
Another thing: no, not everybody uses Windows. I use it myself, but since we are here on Slashdot, I'm telling you that most of the users around you use, and love, Linux and Apple systems, and use them as workstations as well (I have a Linux workstation, although it's not my main PC).
The idea that developing programs for Windows is more important is wrong; I really don't want to spend my time developing a program that helps Microsoft gain more power in its monopoly.
I am not much into software development for the time being, but if I get into it I'll always make sure to build programs for Linux BEFORE Windows (or make portable solutions).
It's just that Unix is a nice OS for MANY things, not just some things!
Re:Doesn't work on Windows (Score:3, Informative)
In fact, there are more installed Unix servers in large-scale operations than there are Windows servers (I work for a company that sells hundred-terabyte disk storage systems to exactly those operations, and more than 60% of our non-mainframe customers are running Solaris, AIX, and HP-UX, with Windows rolling in at about 35%).
On the whole, Windows is completely unsuited to enterprise-level programs and projects. It has a laughably low limit on the number of attached disk devices, as well as ludicrous limits on how big those disks can be. Sharing disks between clusters of Windows servers is tenuous at best, and not recommended for high-risk environments.
Unixes, on the other hand, grew up in the enterprise, and are quite well suited to that environment. Just as an example, I am aware of at least ten multinational banks that operate only Unix in their transaction processing centers - one of the most demanding enterprise environments there is. Only Unix. No Windows in any of their datacenters.
The fact that there may, or may not, be a system like Aegis for Windows is irrelevant to your original message, which presented as truth things that are wholly untrue.
Re:Doesn't work on Windows (Score:1)
Yes, Unix is good for some things. It's very reliable. It's got lots of things going for it. It's used as the backbone of a lot of important businesses. I said all this in my post.
None of that changes the fact that most software today is written for Windows and that developers of Windows applications would benefit from an Aegis system.
It is you who is the troll here, not me. Shame on me for responding.
Are you *sure*? (Score:2)
I'm not so sure about this. A significant number of Unix users are constantly writing new Perl (say) programs. The total amount of code being generated there may be substantial.
I'll agree that Windows is certainly dominant in the closed-source, horizontal market area -- but while horizontal markets produce a lot of sales, they don't necessarily mean that all that much code gets written.
Re:Doesn't work on Windows (Score:2)
Re:Doesn't work on Windows (Score:2)
Maybe medium scale, but certainly NOT large scale. Large-scale programs - weather simulators, say, or 3D rendering applications or genetic sequencers - pretty much always run on Unix.
Re:Doesn't work on Windows (Score:1)
The middle ground which makes up the bulk of programming (tossing out one-offs, of course) is dominated by Windows programming.
Which brings us back to the original question, is there a system like Aegis for Windows?
Re:Doesn't work on Windows (Score:1)
No.
Aegis is apparently used with Windows (Score:2)
Yes IT DOES. (Score:1)
The reason you cannot use Aegis in a multi-user environment on Cygwin is that the setuid mechanism on Unix is still too different from its Windows counterpart. If you want the details of that restriction, consult the manual or the website...
Other ideas... (Score:3, Funny)
run the code through a lameness filter before allowing the check-in
enforce a 20 second delay from the time you want to check-in to the time you are actually allowed to check-in
make sure a different developer checks-in the exact same code a day or two later
Re:Other ideas... (Score:2, Funny)
Please try to keep checkins on topic.
Try to update other people's checkins instead of creating new files.
Read other people's checkin notices before checking in your own to avoid simply duplicating what has already been said.
Use a clear subject that describes what your checkin is about.
Offtopic, Inflammatory, Inappropriate, Illegal, or Offensive checkins might be purged. (You can maintain everything, even purged checkins, by adjusting your threshold on the User Preferences Page)
Re:Other ideas... (Score:2)
Actually, that's not far off what CVS does (Score:4, Informative)
Okay, I agree with you that the /. editors were on crack when they made that call. But the idea of "filter before checkin" has been in CVS for a long, long time.
Basically, there's a file in CVSROOT (commitinfo) that can call an arbitrary program before each checkin. If that program succeeds, the checkin proceeds. If it doesn't, it doesn't. :-)
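A minimal sketch of the idea (the hook logic and file names below are invented for illustration; only the "call a program, non-zero exit aborts the commit" mechanism is actual CVS behaviour - and a syntax-only shell check stands in here for a real compile):

```shell
# Sketch of a pre-commit filter in the spirit of CVSROOT/commitinfo.
# CVS invokes the configured program with the repository directory
# followed by the filenames being committed; a non-zero exit aborts
# the commit.  The check here (sh -n, a parse-only pass over shell
# scripts) is just an illustrative stand-in for "does it compile".
check_files() {
    status=0
    for f in "$@"; do
        case "$f" in
            *.sh)
                if ! sh -n "$f" 2>/dev/null; then
                    echo "precommit: $f fails syntax check" >&2
                    status=1
                fi
                ;;
        esac
    done
    return $status
}

# Demo: one well-formed script, one broken one.
printf 'echo hello\n' > /tmp/good.sh
printf 'if true; then\n' > /tmp/bad.sh   # unterminated "if"
check_files /tmp/good.sh && echo "good.sh would be committed"
check_files /tmp/bad.sh || echo "bad.sh would be rejected"
```

In a real repository you'd wire such a script in with a line in CVSROOT/commitinfo along the lines of `ALL /usr/local/bin/precommit-check` (the path is hypothetical).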
I was very surprised to see this article treat such an idea as something new.
more other ideas... (Score:1)
Sounds complicated? It is, if you don't use a tool for it. Or you could just use Aegis to help you write better code.
Ads as articles (Score:5, Interesting)
It's just you (Score:1)
Well it certainly ain't "Breaking News" (Score:2)
Check this out from their website:
Aegis is mature software - it was first released in 1991.
GMD
Re:Ads as articles (Score:1, Offtopic)
"In a land before time..."
"One man..."
"You know the score..."
Re:Ads as articles (Score:1, Offtopic)
Re:Ads as articles (Score:2)
--Anti: Of your three choices, I choose tomorrow at 1:00PM sharp in your timezone. You will receive e-mail from me shortly before that time with further instructions.
Eclipse Plugin? (Score:2)
But what is the point if your development environment doesn't support these versioning systems?
Re:Eclipse Plugin? (Score:3, Flamebait)
If so, then it's trivial to use another version control system.
Assuming that Eclipse's CVS support invokes the cvs command for its operations, it would be trivial to make a cvs-compatible command line for Aegis.
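The dispatch half of such a shim really is only a few lines of shell; what follows is just the shape of it, with placeholder actions, since the actual mapping onto Aegis commands (aeb, aed, aede, and friends) would have to be checked against the Aegis manual pages:

```shell
# Skeleton of a cvs-compatible front end for Aegis.  Only the
# dispatch structure is shown; the echoed right-hand sides are
# placeholders for whatever Aegis invocations actually fit.
cvs_shim() {
    case "$1" in
        update) echo "placeholder: bring the change up to date" ;;
        diff)   echo "placeholder: aegis difference step" ;;
        commit) echo "placeholder: aegis develop-end step" ;;
        *)      echo "cvs-shim: unsupported subcommand '$1'" >&2
                return 1 ;;
    esac
}

# The IDE would call this script thinking it's cvs:
cvs_shim update
cvs_shim commit
cvs_shim checkout 2>/dev/null || echo "checkout not mapped yet"
```

The hard part is not the dispatch but getting the semantics right, since Aegis's change-set model doesn't line up one-to-one with cvs's per-file operations.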
Re:Eclipse Plugin? (Score:1)
Aegis is not a CVS alternative (Score:1)
Aegis is a process tool: it helps you adhere to a particular model for developing software.
Part of the problem is CVS (Score:3, Informative)
Re:Part of the problem is CVS (Score:4, Insightful)
Oh, come on. That's such a load. I'll agree that CVS is a big part of the problem, but not because you shouldn't let more than one developer edit the same file at the same time. Rather, simply because CVS sucks.
I hate to admit it, because I love the open source movement, and I know what an important role CVS plays in it, but CVS really does bite compared to some of the commercial alternatives. I mean, no trackable atomic changes? No means of integration with job tracking? A shitty-beyond-belief branch methodology? Poor tracking and integration of changes across branches? A crappy permissioning structure?
Where I work we use Perforce, which I absolutely adore. It doesn't have any of the issues I listed above, works on Unix and Windows, is command-line easy, has a pretty damn good GUI (compare to WinCVS, ack!), is wonderfully scriptable, and is really not that expensive (although I can definitely understand the desire to spend $0).
Anyway, you can't really expect to have a large group of developers both iterating on and maintaining a fairly large codebase without ever needing to edit overlapping files... not unless you keep each function/subroutine/method in its own file. Even then, I imagine you'd still run into occasional change resolution issues. The better way to deal with the problem is not to close your eyes and put your hands over your ears; it's to outfit yourself with decent tools capable of dealing with real life. I know that with Perforce, and I would imagine with most other half-decent source management systems, simultaneous edits are really not that big of a deal. Unless you've actually edited the same lines in the same file, the user doesn't typically have to do a damn thing. And regardless, it is still the fault of the first user to check in a busted file. Again, atomic changes mean that if it compiled for you, and you check it in, then it compiles for everyone else.
Re:Part of the problem is CVS (Score:1)
For example. We have a main branch (current release), with integration branches for the various subsystems off this. Off these branches our team has a branch, and off this individual developers work on individual features/bugs etc. Because we have clearly defined responsibilities (e.g. X is responsible for merges into the team branch, Y is responsible for this integration branch, Z looks after the main branch), we always know who is responsible for sorting the issues out.
Because everyone is working on different branches, and the only collisions are at merge time, we usually don't have a problem. And when we do, the responsibilities are clearly defined.
Best regards, treefrog
Re:Part of the problem is CVS (Score:3, Insightful)
I have used SourceSafe, Perforce (very lightly), WinCVS, and Tortoise. Tortoise is above and beyond the rest. I still use the command line sometimes for intricate log grepping, but for everyday usage, Tortoise is simply amazing.
TortoiseCVS [tortoisecvs.org]
Re:Part of the problem is CVS (Score:5, Interesting)
I have used, in my time, Clearcase (which I rather liked despite the high price tag and apparent inefficiency repository-side), CVS, and most recently Perforce. For all the complaining about CVS lacking sophistication, it does get 95% of the job of source code control done. But, none of these address the root problem, and it's unfair to pick on one for not addressing it, implying that the others do a better job.
Inconsistent checkins of code, even code in different files, can break builds, unit tests, and integration tests. Consider the development process:
You check out a read-only version of the top of the development tree, that builds clean, and passes all the tests. Great. You get a write checkout on the stuff you want to change. Even if others don't touch those files (which can be difficult to enforce for some kinds of files, like headers of unique enumerations that everyone updates), you can still break things.
When you test build, you test build with your stuff changed, and everything else frozen. There is no guarantee that when you check in your changes, the build won't break, because something else on which your code depends got changed. Like a header file. I suppose you could get a write lock on all the files on which your code depends, but that still isn't good enough.
Consider a processing sequence where two functions in two code files get called. One of them is supposed to increment some global "the sequence was run" counter. Both do. Oops. Builds fine, fails regression testing.
Short of locking the entire source tree when one developer changes something, you can't avoid this in general. Oh sure, you can resync all the files you didn't change, and run a build and regression test before checking your change in, but, lo, you'd have to lock the entire source tree during that time (or at least see if it changed again after your build and test, and repeat the process if it did, possibly indefinitely). Serializing an otherwise parallel development process that way is murder on productivity: even if you run all the sanity builds at night, you now have a one-day turnaround just to test build all changes, and hope they make it into the source repository. Kinda sucks, when you just changed a few lines.
Any automated solution will have to rely on serialization of source tree access, at some level. If the project can be broken down into independent components, serialization within a component can be relatively painless, with somewhat less frequent serialization across components (so called "integration" which the nightly build strives to avoid because it happens all the time). Experience shows that, unless you want to plan "integration" phases, this defeats a large benefit of the nightly build process, though, it is not unreasonable for very large projects, with clearly independent parts.
So, what's the solution?
The same, it's always been: divide and conquer.
While "integration" phases are to be shunned, and "repository serialization" kills productivity, one can take a statistical approach: instead of verifying everything before a checkin to a frozen repository, you design your project to try to isolate the effects of implementation changes from one another. Early on, this may be difficult as interfaces are still being tweaked, but you have less code then, and builds happen quickly. This is a large part of what a project architect is supposed to do.
If done correctly, a local delta built with a recent snapshot of the source repository is not likely to break the build and unit tests of a more recent snapshot if it builds and passes regression tests against the somewhat older snapshot. This isn't foolproof, of course, but a proper design will have the necessary isolation: an implementation change in one area, or the addition of a new interface, should be invisible to your code.
Now, if interfaces need to change, or new features are to be added that have to be coordinated among developers (i.e. someone adds a feature and someone else writes code to use it), then you need greater coordination among all those that might be affected. Such work can take place on a side branch, and merged to the main development tree only when it is internally consistent. Again, this is generally the responsibility of the common lead programmer of those implementing the changes that need to be coordinated (perhaps the project architect, but in large projects, that kind of detail can be delegated).
Where trouble brews is when such a synchronized change has to take place across development teams: it is usually more effective if one person handles the integration of the new feature and the code that uses it. Because the feature user has the need, it may be him or her. However, review of code to be added to a base for which another team is responsible should, of course, rest with that team: "Your code didn't have this, which I needed, so here's the patch - wanna check it out?" Yes, this can cause political friction, but in a mature development team, that won't happen. It is when communication between teams is minimal, hostile, or non-existent, and such functionality is to be added, that builds break and regression tests fail - and the finger pointing begins.
Re:Part of the problem is CVS (Score:4, Interesting)
It goes something like this: say you change a function signature. It is your responsibility to grep for all the uses of that function and change them. It is also your responsibility to check in all of those changes atomically. That is: an all-or-none checkin of a group of files all at once. That group is also bound together into the future (the relationship is not severed after checkin).
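As a minimal sketch of that grep step (the function name old_fn and the /tmp paths are made up for the demo), enumerating the files that have to travel together in the atomic checkin can be as simple as:

```shell
# Before checking in a signature change, list every file that
# mentions the function, so they all go in as one atomic group.
# old_fn and these paths are purely illustrative.
demo=/tmp/sigchange-demo
mkdir -p "$demo"
printf 'int old_fn(int a, int b);\n' > "$demo/api.h"
printf '#include "api.h"\nint use(void){ return old_fn(1, 2); }\n' > "$demo/caller.c"
printf 'int unrelated(void){ return 0; }\n' > "$demo/other.c"

# These files form the change set; other.c stays out of it.
grep -l 'old_fn' "$demo"/*.c "$demo"/*.h
```

Of course this only catches textual uses; as the parent notes, a caller added in someone else's working copy after your grep is exactly the case that slips through.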
Another simple point of process that saves your ass is JOB TRACKING. If your source control repository doesn't link into a job tracking system, then I pity you. I've been there, and it sucks. It took a while for us to work out exactly what was missing and how to get it... but now that we have implemented it, it makes life livable again. The idea with a job tracking system is: associate all of your changes with a job. If you want a bleeding-edge revision, then sync to the head revision and don't be surprised when stuff breaks. If you want a rough-around-the-edges version to test against, then sync to the highest revision that is entered in the job tracker. Use the various life-cycle statuses in the job tracker to sync to various points. On the whole: get all files in QA status, or all files in QATESTED status, or CODECHECK status (or however you choose to name these things in your job tracker), or whatever status you want.
In that way, you don't easily break the build, because before you even try to build off of something, you've tracked its code review and its unit testing. Of course, there are always possibilities for unit testing problems, but they are usually going to end up being the fault of a developer not following simple processes. In the example I used above, where we changed the signature of a function and updated all of the calls to that function in the same atomic change... you could have had another developer creating a *new* call to that function in their own working copy of the code. You would, of course, never catch that, and they might not either... but hopefully whoever is doing code reviews has an eye open to things like function signature changes, and can catch it at that point.
Make it clear to your developers that changes not associated with a job will never see the light of day. Every night, review any untracked changes and email the developer, asking them what the hell.
It's true, there is no way to make everything 100% bullet-proof against checking in bad code. Of course, that's why we do things like freeze integration and test before a release.
Re:Part of the problem is CVS (Score:3, Insightful)
Re:Part of the problem is CVS (Score:2)
Re:Part of the problem is CVS (Score:2)
same files, your project is poorly organized. Code ownership is bad^3 news. CVS makes merges 90% trivial. There is nothing fundamentally different about open vs. closed source development. Open development just increases the probability of receiving unsolicited patch sets.
Re:Part of the problem is CVS (Score:2)
Re:Part of the problem is CVS (Score:1, Interesting)
Re:Part of the problem is CVS (Score:5, Insightful)
...or if you happen to have a high level of code reuse; or if you are doing firmware, software, and driver development in parallel; or if you have a small but busy team; or if you have a large but busy team; or ... or ... or.
This is a ridiculous statement. There are any number of reasons that multiple developers will work on a single file at once, especially in a well-structured organisation or development effort. Development, code inspection, fixes in response to testing, and maintenance fixes (bringing a patch from a release into current) can ALL happen simultaneously in a development tree, and can ALL happen simultaneously in one file. They just shouldn't often happen simultaneously in one method/function.
Re:Part of the problem is CVS (Score:2)
One would hope that reused code would be fully developed and debugged before it was shared. Sure, new bugs will turn up, but the thought of people tweaking reused code at will sounds pretty dangerous to me. I've seen plenty of cases where unilateral changes to reused code by one group have brought another to a grinding halt. Shared code should be carefully managed and should not be altered without prior coordination with all its users.
Re:Part of the problem is CVS (Score:2)
This is true, and usually means use of procedures external to source control. Of course there are some revision control and/or configuration management products that have workflow support, to gain prior approval for a change, or to perform a "trial" checkin so that the change can be approved before actually becoming part of the development branch.
Quite often though you will find that shared code is in development by multiple teams simultaneously, and there is no other option. In my experience this is far more common than sharing of fully developed code.
The reason is that it is usually prudent and beneficial to reuse fully developed code as a library, whereas closely linked products (such as firmware and associated software) need to share definitions and often have the same or mirror-effect routines.
In an iterative/incremental process, there is no chance for one group to fully develop a particular shared file. Obviously the groups need to work closely and communicate regularly, but the problems of sharing can be managed if they are identified up front and procedures put in place.
CVS can help if you use it right! (Score:2)
Taking advantage of the ``up-to-date check failed'' feature requires that you do a whole-tree commit at some level of the tree structure that you care about---perhaps the project root---rather than doing a selective commit on individual files. Change to an appropriate directory level and execute a ``cvs ci'' without any filename arguments.
The other part of the equation is doing a proper regression test before committing. It would be nice to automate this, but in the real world, that is a pipe dream.
What if your tests have to be executed on something other than your development platform, such as a PDA or embedded system? What if you are targeting several flavors of Unix-like operating systems and Windows?
Your regression test run on Linux does not assure you that you did not break the Win32 version of the program.
That's not to say that there isn't value in running at least *some* automated tests, like for instance exercising the functions of some portable library.
Used something like this (Score:1, Interesting)
It seems like common sense for most projects to refuse to allow checkins without building first, and that's the sort of policy that can have a fairly effective mechanism for enforcement.
Re:Used something like this (Score:2, Insightful)
Sure, you can generally do partial builds. But what happens when you just had to change that include file that somehow affects nearly everything else? You're stuck for that hour.
Re:Used something like this (Score:3, Insightful)
Have some coffee, fill out the TPS report, look at pending bugs. There's a few things you could be doing during that hour.
Re:Used something like this (Score:2)
This is a bad thing to programatically enforce during the commit on large projects using CVS.
Re:Used something like this (Score:1)
In my experience, if the modification affects enough code to make it an hour rebuild, that's when you have to be extra careful. An hour-long build is generally something people don't sit and watch...they run it over lunch or something. Anyone who grabs the broken update will come back after an hour, find it didn't work, have to revert to a previous version, spend another hour compiling, and then have to do it all again once it's fixed.
If several people have to do this, that's a lot of wasted engineering time. If it's such a simple change that it should be just fine, then an alternative might be to call in a couple reasonably paranoid engineers to have a look. If everyone says 'no problem', then maybe skip the test-compile. Better 3 people waste 5 minutes than everyone waste an hour. If more than a couple lines are changed, though, I'd go for the test compile, regardless.
Re:Used something like this (Score:1)
I think that if you work through this you could come up with a workable solution. Perhaps this would work; it's not the most efficient, but it should be stable.
When you check something in, it is checked into an unstable tree. You set the build process running then. If it builds and the validation tests pass, the changes are checked into the nightly tree. If the unstable build/tests fail, it emails the developer about the issue. The nightly tree is built after all the unstable checkins from the day are either rejected or submitted. Then the nightly build is compiled and tested; if this fails, the project manager is notified, who then tries to figure out why and bust some nuts, er, has the developer(s) responsible for the failure fix it. If there were no issues, the nightly build is put into a stable tree.
In theory this should yield a "last stable build" on a regular basis, unless there is a major overhaul/development currently in progress, in which case the build probably isn't a good idea to ship to a customer anyway.
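A toy sketch of the promotion gate in that scheme, with a flag file standing in for a real build-and-validation run (all names and paths here are invented):

```shell
# Sketch of the unstable -> nightly promotion gate described above.
# A BUILD_OK flag file stands in for an actual "build it and run
# the validation tests" step; everything here is illustrative.
promote() {
    change="$1"
    if [ -f "$change/BUILD_OK" ]; then
        echo "promoted to nightly: $change"
        return 0
    fi
    echo "rejected (would email developer): $change" >&2
    return 1
}

mkdir -p /tmp/ci-demo/change-good /tmp/ci-demo/change-bad
touch /tmp/ci-demo/change-good/BUILD_OK   # pretend this one built

promote /tmp/ci-demo/change-good
promote /tmp/ci-demo/change-bad || true
```

The real work is in what sets the flag; the gate itself, as the parent suggests, is the easy part.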
How can you build something that is not checked in (Score:1)
Request Tracker (Score:5, Interesting)
(And for that matter, if you need to track software bugs & other issues, RT rocks. Don't bother with Bugzilla, it's not half as good as RT is for most of the same tasks. And no, no one is paying me to endorse RT or anything, it's just great software and, in reference to Aegis, I respect the judgement of the guy developing it...)
Re:Request Tracker (Score:3, Informative)
It rocks, it's free, and it does virtually everything that you need it to do without complexity!!!
Pair check-in (Score:2, Interesting)
Re:Pair check-in (Score:2)
Things that break builds or functionality can still get through, but having an established process (yes, I know how much fear that word strikes in the hearts of some developers) can make life a lot easier.
Does it have a server port? (Score:2)
On a network, does it only work on a shared filesystem like NFS?
Re:Does it have a server port? (Score:2)
6.1.3 Multi User, Multi Machine
Aegis assumes that when working on a LAN, you will use a networked file system of some sort. NFS is adequate for this task, and commonly available. By using NFS, there is very little difference between the single-machine case and the multi-machine-case.
This is wishful thinking. To be reliable, version control must be based on some sane client-server protocol.
Visual SourceSafe's reliability problems are caused in large part by the use of file sharing rather than a proper protocol. CVS is also known to be unreliable when a repository is accessed via NFS rather than in client-server mode. This practice is loudly discouraged by the experts on the CVS mailing list.
NFS implementations differ from one another. As a rule of thumb, NFS works best when you have a matching client and server implementation from the same vendor.
What is the problem exactly? (Score:1)
If you don't like NFS, you could use any other network filesystem - maybe Samba? Aegis doesn't care.
slashdot business model (Score:4, Funny)
2) ???????????????????
3) Profit!!!!!!!!!!!!!!!!!
Solution (Score:5, Funny)
Go to lunch.
Re:Solution (Score:2)
Or do what I do.
Surf the web until they get back, inform them that they broke the build, and then go to lunch.
Cookies (Score:5, Insightful)
Re:Cookies (Score:1)
I have the advantage of mostly working in a pair on my project, so there is no real chance of fuxoring
Re:Cookies (Score:1)
Being from Wisconsin, I thought they had done something particularly cool until they told me.
Krispy Kremes (Score:4, Interesting)
Another one.. (Score:3, Funny)
Bats == effective (Score:1)
I wish they'd implement a scheme like that where I'm working now. They haven't quite got the hang of source control. Most of the time, the version under source control doesn't even compile, let alone run. No-one, it seems, bothers with a local merge and test before checking in. Also, everyone works using data off a shared network drive, so when any of that changes, you're screwed.
I need a new bat. With rusty nails.
Re:Another one.. (Score:2)
It's social systems like these that work better than _every_ software based solution :)
ain't broke (Score:1)
But the best way is to just have a neutral machine that no one develops on. It's just for getting updates and doing test builds.
Aegis calls itself a project change supervisor. It's nice that there's a framework, especially for large projects. For small ones, throwing wads of paper or other methods of embarrassment work fine.
Why I've tentatively abandoned Aegis (Score:2)
Whether Peter Miller's favorite policies are optimal is not the issue to me. For me, an important aspect of free software is that the owner of a computer is given maximum control. If the group maintaining a piece of software is basically trying to wrest that control from the computer owner, then my distrust of that group is enough that the amount of work I would have to do in studying every line of their code and undoing their logic bombs exceeds the productivity benefits that I would expect from using the software. For me this attitude problem is the critical issue. If this problem were fixed, I would probably deploy and contribute to Aegis.
For completeness, I'll mention a couple of other issues that other people considering Aegis might find helpful to know about, although fixing these other issues probably would not tip the balance for me as to whether or not I'd choose Aegis.
Firstly, Aegis has a bit of an unnecessary learning curve, because Aegis does not use commands compatible with cvs, sccs, or rcs where this is possible. If you invest time in learning Aegis commands, you will not get a return on that investment elsewhere. In comparison, BitKeeper uses rcs and sccs command names and options when possible. I consider Aegis's freeness more important than BitKeeper's rcs/sccs command line familiarity, and would also consider incompatible command names and options worthwhile if the new syntax were enough of a user interface improvement, but that doesn't seem to be the case here.
Secondly, I've become skeptical that Aegis really needs to be a single software package (yes, I know that "cook", Aegis's recommended replacement for "make", is distributed separately from Aegis). I'd like a cvs variant (incompatible if need be) that would support atomic operations, symlinks, inode information, renaming files, and maybe some distributed development features. I think Aegis's repository control based on regression testing is a great idea, but I also think it could be implemented more simply as a separate package that used some kind of cvs successor (or, ideally, was retargetable to a number of software repository types).
I think Aegis has some great ideas, but I currently think it will be less work, especially in the long run, to find or make something that I like better.
Re:Why I've tentatively abandoned Aegis (Score:1)
There is no need to perform the entire Linux kernel build as root - only the final install requires root privilege.
I have quite successfully done Linux kernel module development with Aegis, performing most operations as a normal user and using sudo for the few commands which require elevated privileges.
Re:Why I've tentatively abandoned Aegis (Score:2)
Users should be able to make that decision themselves. (It's also worth noting that Aegis, as shipped, also refused to run from my personal account because my user-id belongs to group 0, "system".) Imagine if other development tool authors also inserted logic bombs to enforce their varying and probably conflicting system administration philosophies. I repeat:
Thanks for responding anyhow.
Re:Why I've tentatively abandoned Aegis (Score:1)
Aegis is a process wrapper around any of those source code management systems, so it's doing a great deal more. I view the fact that Peter Miller chose not to just pass-through the (often clunky) underlying syntax of the source-code management commands to be a great strength of Aegis, not a weakness. Why? I can develop on any Aegis project and not have to know or care what underlying source code management system is being used. I get to focus on developing and testing code, not the specifics of wrestling with RCS or SCCS or CVS or fhist.
In other words, it's an abstraction layer, and as in any abstraction, you lose a little fine-grained control. But in my experience this is more than offset by the benefits. I personally use Aegis on multiple projects; some older ones still use RCS, while I've started using CVS on some newer ones as my tastes have changed over time. Guess what? I've been able to do this without having to re-learn my development methodology, because I've been using Aegis to wrap the underlying source-code management details.
I think Aegis's repository control based on regression testing is a great idea...
Hear, hear. This is very well-thought-out in Aegis, and a huge win.
Ummm... it already is a separate package. You can use Aegis in conjunction with RCS, SCCS, CVS, fhist, or probably just about any other underlying source code management system by configuring the appropriate commands into the project config file. What makes you think Aegis isn't already separate enough from the underlying source code management?
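To make the "configuring the appropriate commands" point concrete, here is a sketch of the history hooks in an Aegis project config file pointed at RCS. The field names follow the aegis.conf documentation, but the exact RCS option strings and substitutions below are illustrative from memory, not lifted from a tested project:

```
/* Sketch of an Aegis project "config" file's history section, using RCS.
   Option strings are illustrative; consult aegis.conf(5) for a working set. */
history_create_command =
        "ci -u -t/dev/null -m'initial' ${input} ${history},v";
history_get_command =
        "co -r'${edit}' -p ${history},v > ${output}";
history_put_command =
        "ci -u -m'update' ${input} ${history},v";

/* Swapping in SCCS, CVS, fhist, etc. means changing only these strings;
   developers keep running the same ae* commands as before. */
```

The point being made in the thread is that this indirection is the whole abstraction: the rest of the Aegis process never sees which history tool sits underneath.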
Re:Why I've tentatively abandoned Aegis (Score:2)
I never said that Aegis should "just pass-through" commands. If Aegis were to use, say, rcs-compatible commands where possible, I would still want it to use those commands even if it were using SCCS underneath (like BitKeeper). I also said "[I] would also consider incompatible command names and options to be worthwhile if the new syntax were enough of a user interface improvement, but that doesn't seem to be the case here." Aegis installs 17 new programs (at least their names begin with "ae") plus an "aegis" program that is designed to implement yet another command syntax in an attempt to ameliorate the complexity of the ae* programs.
Perhaps you feel that the argument syntax of these 17 new programs is a great improvement over cvs (but readers may notice that you don't explain why), in which case I'll leave it to interested readers to check out some of the aegis manual pages and form their own conclusions.
For me, I probably would be willing to climb the Aegis learning curve if the logic bomb issue were resolved, but I mention the unnecessary command-line incompatibility issue because other people may find this information useful when surveying source code control systems.
Re:Why I've tentatively abandoned Aegis (Score:1)
Aegis installs 17 new programs (at least they begin with "ae") plus an "aegis" program that is designed to implement yet another command syntax in an attempt to ameliorate the complexity of the ae* programs
This is a misrepresentation. The aegis program does not exist to "ameliorate the complexity" of other programs; it is the workhorse that implements the process, and it is the only executable the vast majority of users will ever run. The other ae* programs are for ancillary functions like generating reports, distributing change sets, command completion, etc. They need only be learned as needed, or by an administrator, in much the same way that not every developer who uses CVS has to learn the branching mechanisms.
Perhaps you feel that the argument syntax of these 17 new programs is a great improvement over cvs (but readers may notice that you don't explain why)...
No, I think I stated why I consider Aegis a huge step forward from underlying, raw source-code management commands, and it is for far more important reasons than command-line syntax. Among other reasons, Aegis is worth the learning curve because, once configured, you sidestep having to be preoccupied with the particular syntax or specific model of an underlying source-code management system. You get to focus not on managing your source-code tree, but on developing and testing your software. That's supposed to be the whole point of good tools, yes?
(If we're going to point out parts of the argument that are being ignored, though, I didn't see anything explaining why you think Aegis isn't separable from the underlying source-code management after I pointed out that it can be configured to use any package. :-)
This makes me realize that I'm confused about one seeming inconsistency: you complain on the one hand that Aegis isn't separate enough from the source-code package, and at the same time say that it isn't enough like CVS or RCS or whatever else you'd like it to look more like. Perhaps I'm missing your underlying point...?)
Agreed; one size doesn't fit all, so people should figure out if the Aegis model (or any other process/tool set) is appropriate for their needs. Forewarned is forearmed, as they say...
Re:Why I've tentatively abandoned Aegis (Score:2)
I have to admit, I was looking at the many commands in the howto and didn't notice that very few of them are in /usr/bin (where did they go?). Even so, aediff, aepatch, aeimport, and aecomp look rather fundamental. The "common commands" section of the howto lists 11 new commands, for example. I think it's fair to say that, given that the "aegis" program and the howto define two different command syntaxes for the same thing, somebody else must also have felt that there was something wrong with at least one of these syntaxes (both of which share the disadvantage of being incompatible with existing programs like rcs or sccs in cases where compatibility is possible).
It is irrelevant that you consider Aegis to be "a huge step forward from underlying, raw source-code management commands, and it is for far more important reasons than command-line syntax", because all I am saying is that making these commands incompatible in cases where the incompatibility is unnecessary is a needless disadvantage.
(If we're going to point out parts of the argument that are being ignored, though, I didn't see anything explaining why you think Aegis isn't separable from the underlying source-code management after I pointed out that it can be configured to use any package. :-)
What you call "underlying source-code management" is not what I was referring to. I said, "I'd like a cvs variant (incompatible if need be) that would suport atomic operations, symlinks, inode information, renaming files, and maybe some distributed development features." aegis apparently has a layer for representing these types of transactions (that operates on top of an existing file-based revision control system such as sccs or rcs). That layer is what I think should be separate, as I think it has much broader applicability.
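The "separate change-set layer" being argued about here can be sketched in miniature. Everything below is hypothetical illustration, not Aegis code: a plain dict stands in for an RCS/SCCS-style per-file store, and logical operations (edits, renames) are journaled and applied all-or-nothing, which is the atomicity property the poster wants from a cvs successor.

```python
# Minimal sketch of an atomic change-set layer over a per-file store.
# Names (Changeset, edit, rename, apply) are invented for illustration.

class Changeset:
    def __init__(self):
        self.ops = []                     # journal of logical operations

    def edit(self, path, content):
        self.ops.append(("edit", path, content))

    def rename(self, old, new):
        self.ops.append(("rename", old, new))

    def apply(self, store):
        """Replay the journal against a copy; commit all-or-nothing."""
        staged = dict(store)              # the real store is untouched so far
        for op in self.ops:
            if op[0] == "edit":
                _, path, content = op
                staged[path] = content
            elif op[0] == "rename":
                _, old, new = op
                if new in staged:
                    raise ValueError("rename target exists: " + new)
                staged[new] = staged.pop(old)   # KeyError if old is missing
        store.clear()
        store.update(staged)              # only now does the store change

repo = {"main.c": "v1"}
cs = Changeset()
cs.edit("main.c", "v2")
cs.rename("main.c", "prog.c")
cs.apply(repo)
print(repo)    # {'prog.c': 'v2'}
```

If any operation fails mid-replay, the exception propagates before `store` is touched, so a half-applied change set can never be observed, which is exactly the guarantee cvs lacks.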
Re:Why I've tentatively abandoned Aegis (Score:1)
Even so, aediff, aepatch, aeimport, and aecomp look rather fundamental.
Umm... They're not fundamental. At least, I can personally vouch that I never use any of them, and I've been using Aegis heavily for five years. (No, wait, I can remember using aepatch once; it generates an incremental change set in a cpio-based Aegis format for distribution to other sites or other branches.)
So you obviously didn't RTFM...
I see.
Hey, guess what? I can shout while quoting your original post, too... :-) :
I also think that it could be implemented more simply as a separate package that used some kind of cvs successor (or, ideally, was retargetable to a number of software repository types).
And, as I've pointed out, Aegis is retargetable to any number of source-code management models and repository types. So when your uber-CVS gets invented, my assertion is that the Aegis model will handle it just fine, thank you. And I'm confident of that because I've actually been using Aegis and experiencing its flexibility first-hand.
aegis apparently has a layer for representing these types of transactions (that operates on top of an existing file-based revision control system such as sccs or rcs). That layer is what I think should be separate, as I think it has much broader applicability.
And based on my experience with it, Aegis is a separate layer that leaves the details of source code management and building software to the underlying utilities, which sounds to me exactly like the kind of separability you seem to think is a good idea... So are we just violently agreeing? :-)
More specifically, it seems like you're noticing that Aegis accommodates file-based source-code management systems like SCCS and RCS, and leaping to the conclusion that it doesn't do other things that you'd like. But Aegis does operate on atomic change sets, handles symlinks and file renames just fine, and has a proven distributed development model. The only capability you list that I'm not sure about is inode information, but that's simply because I don't know how you'd want that tied in to your process layer. If the underlying (source-code and build) tools handle inode information just fine, I'd be highly surprised if Aegis would even blink.
Look, I'm not saying you, or anyone else, have to like Aegis. Its command syntax evidently rubbed you the wrong way, and that's fine. But out of that initial bad taste, you're making assertions about what Aegis' design can handle that are demonstrably not true. I'm frankly at a loss as to why you don't just admit the fact that, based on an initial cursory glance, you had some misimpressions about what Aegis could do, and no real harm done.
Re:Why I've tentatively abandoned Aegis (Score:2)
You also seem to be confused about the issue of an "uber-cvs", which you appear to be conflating with file-based revision systems like SCCS and RCS (the "uber-cvs" part is not currently distributed separately from the rest of Aegis). I realize that Aegis does contain an "uber-cvs", to use your term. My point is that that functionality would have broader applicability as a separate software package.
Since you say "So you obviously didn't RTFM" and then completely ignore my point about all of the additional commands in the HOWTO, I don't think you're reading my responses carefully enough for it to be a good use of anyone's time for me to respond further. You may have the last word now if you like.
Re:Why I've tentatively abandoned Aegis (Score:1)
Maybe you should have paid more attention to the manual than to dissecting the source.
Re:Why I've tentatively abandoned Aegis (Score:2)
I don't recall saying that I had found a case of Aegis not running as intended. I said I do not believe it would be a net savings of my time to adopt software that is intended to run that way (with logic bombs, unnecessary new syntax that is not a substantial improvement, etc.).
Maybe you should have paid more attention to the manual than to dissecting the source.
I don't know what you mean by this. I read much, but not all, of the documentation in aegis/lib/en. Perhaps your meaning would be clearer if you could quote a passage of the documentation that solves, to my satisfaction, one of the problems I referred to.