When Making a Comprehensive Retrofit of your Code...
chizor asks: "My programming team is considering making some sweeping changes to our code base (150+ perl CGIs, over a meg of code) in the interest of consistency and reducing redundancy. We're going to have to make some hard decisions about code style. What suggestions might readers have about tackling a large-scale retrofit?" Once the decision has been made for a sweeping rewrite of a project, what can you do to make sure things go smoothly and you don't run into any development snags, especially as things progress in the development cycle?
object orientation (Score:3, Informative)
Re:object orientation (Score:3, Interesting)
Re:object orientation (Score:5, Informative)
Anyway, if I'd spent a little more time thinking about the advice side of it, one piece I'd advocate is taking a look at appropriate programming methodologies (like the Extreme Programming advocated in another thread here). Given the size of the code (1 MB = about 20-30,000 lines?) there's no need for major heavy-weight processes here. More important, I'd say, is sitting down and figuring out, at the appropriate level of detail, what exactly your system is doing right now. You can do this using UML [omg.org] diagrams, which seem to be becoming a standard, though the main use we've found is to get an overall view of things, which we then throw out when we get into the details again.
The other thing to do along these lines is look for your use of standard patterns within your code - the Design Patterns [amazon.com] book is extremely helpful if you're moving to an object-oriented framework at all; following well-known patterns and indicating clearly what you are doing can make your code much easier for others to follow.
Re:object orientation (Score:2, Insightful)
Just as a point of reference: I've noticed that the term "refactor" has gained popularity with the whole XP push. Of course "refactor" is just a cooler way of saying "rewrite". i.e. It's not "We should rewrite that module", but rather it's now cooler to say "We should refactor that module". Blah.
Re:object orientation (Score:5, Insightful)
In his book, Refactoring [fatbrain.com], Martin Fowler talks about how code "smells". He identifies a whole slew of reasons why code can "smell" bad, such as having a method that's too long, having a method on the wrong object, incorrect use of polymorphism, etc. He then outlines "refactoring patterns" which can be applied to certain "smells" to make the code more manageable.
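One of the simplest of those refactoring patterns, "Extract Method", can be sketched in a few lines of Perl (the subroutine names and the discount rule are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A "long method" smell: invoice_total() used to compute the discount
# inline. Extracting the rule into its own well-named sub makes both
# pieces shorter and easier to test on their own.

sub discount_for {
    my ($subtotal) = @_;
    return $subtotal >= 100 ? 0.10 : 0;    # invented business rule
}

sub invoice_total {
    my (@prices) = @_;
    my $subtotal = 0;
    $subtotal += $_ for @prices;
    return $subtotal * (1 - discount_for($subtotal));
}

print invoice_total(60, 50), "\n";    # 110 qualifies for 10% off: prints 99
```

The mechanical change is tiny; the payoff is that the business rule now has a name and a single home.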
The last project I managed was a 3 month re-write of over 75,000 lines of code for an auction component that had gotten way out of control. The original component integrated with a very very poorly written (yet still very expensive) auction engine, and our task was to take it out.
Because there was no time allotted to proper requirements gathering, the powers that be decided that we could use the existing system as a requirements base, and just make the new system "act" like the old one.
So I was thrown two things: A staff that was inexperienced in design, and a lack of requirements. We were re-writing from scratch -- throwing all of the code away and starting again. I'm one of the only ones on my team that actually knows what refactoring means, let alone how to apply it.
So we built the new system based on a solid design. We even did the design in UML. But that only goes so far. Given a properly designed system, there is still another lower step -- how to actually implement the code. Invariably, the design doesn't take into account helper methods that are necessary, other objects, etc. Because we were on a tight schedule, I left it up to each developer to design these new tidbits.
If they had known what they were doing, everything would have been fine, but there are several places where the code is absolutely abysmal. It's not that it will perform poorly or is totally unmanageable, but in some places it's close.
Granted, the business rules for the system are some of the most complex I've ever seen, but if the developers had known more about refactoring, they might have made better choices.
There's a whole notion of Quality here that should be discussed as well. Most code is of good enough quality that, even though it looks ugly, I wouldn't throw it away. There's an interesting interview [softwarema...lution.com] with Joel (a former Microsoft programmer) which tackles these problems of Quality head on. I also recommend reading Zen and the Art of Motorcycle Maintenance: An Inquiry into Values [fatbrain.com] for a pretty good discussion of the whole notion of quality.
I'd say just keep the codebase unless you are totally switching languages (or in our case taking out a 3rd party integration). Otherwise if you re-write you'll wish you hadn't. The new codebase will end up being in the same state as the old codebase, it'll just take a little bit more time to get there.
I'd also recommend just doing a performance test of the site to find out which modules really need to be re-written. The 80/20 rule will invariably apply -- 80% of the execution time will be spent in 20% of the code -- so at a minimum just re-write the 20% of the code that's executed on the most frequent paths. Leave the rest alone. Re-writing it will just be a waste of time, and you'll have to re-fix all the old bugs that get re-introduced during the re-write.
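Perl's core Benchmark module is one cheap way to check whether a suspected hot spot is actually worth rewriting before committing to it (the two candidate subs here are invented stand-ins):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Two candidate implementations of the same (invented) hot-path routine.
sub concat_loop { my $s = ''; $s .= 'x' for 1 .. 1_000; return $s }
sub join_once   { return join '', ('x') x 1_000 }

# Run each sub 10,000 times and print a comparison table; only bother
# rewriting code that a profiler or a benchmark proves is actually hot.
cmpthese(10_000, {
    concat_loop => \&concat_loop,
    join_once   => \&join_once,
});
```

Measuring first keeps the rewrite focused on the 20% that matters.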
Re:object orientation (Score:3, Interesting)
Actually we didn't use it formally. But we used some of the best ideas. Iterative development, frequent code reviews, programming in pairs etc. We had a pretty small team so XP might have helped a little bit more, but it's a fundamental paradigm shift that most people I work with aren't comfortable making. My team would have been less effective had I said, "You *must* use XP."
I have been on projects in the past that have embraced more of the XP model, but something like Rational's Unified Process (RUP) kind of fits in more with what we do.
For traditional software shops (e.g. those that make products) a methodology can be set in place by the management, because the company is in essence its own client. For other types of software shops -- such as consulting firms, or web design shops -- where one-off software is commonplace, the type of methodology that's used is usually determined by the client.
For example, if I'm working with a client that isn't very savvy, I probably don't want to do all of my requirements in formal use-cases, followed up by UML design documents. They won't add any value to the client at all, yet we still have to make sure we understand what it is we're building.
It's an interesting challenge, and it's usually best overcome by having solid people on the team who can speak to both sides: a solid requirements team which can talk to the client (be it internal or external) and truly understand the business problems, all the while clearly articulating those to the technical team. Sometimes double documentation is necessary -- one set for the client and one set for the developers or development lead.
There is just no easy way to do software development correctly. It's amazing that in nearly 60 years, no one has come up with a "right" way. There's only so much a methodology can bring to the table -- after that it's about finding and keeping a competent staff who understand the value of what they're doing.
The methodology is just a roadmap. How the car is driven is completely up to the development team. Even a good technical leader can become a backseat driver at times, as there are just not enough hours in the day to provide oversight to every single member of a team.
I think the more traditional software shops should look at the console gaming industry for tips. They have a difficult challenge -- it's not possible for them to release a fix or update for their game once it's completed. So how do companies like Activision, Square, Nintendo and Sega do their software development, and is there anything that the development teams at large can learn? Rigorous testing is one of the most obvious things that comes to mind for them, but how do they know what it is they're testing when a game's vision usually only exists in the mind of the creator?
.anacron
finding and keeping a competent staff (Score:3, Insightful)
You are so utterly right. The key idea is "people over process". I'd rather have 5 great people with any old process than 25 mediocre people with the perfect process.
[look at the console gaming industry for tips]
I have always been impressed with the gaming industry for this reason. Sometimes people erroneously think of game programming as being somehow less "heavy duty" than business apps, etc. But games are usually under incredible schedule pressure (for vital marketing reasons) yet also need to ship with very few bugs, because buyers are prone to return a game that doesn't work rather than become a source of upgrade revenue.
Paradigm Caution (Score:2, Insightful)
IOW, don't switch paradigms just because something is in fashion right now.
Look carefully before leaping into something that may just turn out to be a fad or only shines in specific domains.
Use a pretty printer (Score:2, Insightful)
refactor as a result of learning from your mistakes and redundancies;
and try to minimize the busy parts (where all developers have a hand) when things change (like lists of unique symbols, numbers, etc.)
Sleeping dogs (Score:5, Insightful)
Your tangled mass of spaghetti code paths is probably full of almost incomprehensible little design decisions and seemingly out-of-place declarations and functions, but most of those were probably added as specific fixes for bugs encountered under real-world use.
Most companies that decide to massively re-engineer their code (do a big rewrite) usually end up regretting it because it forces them to re-fix the problems that caused the original strange looking code in the first place.
Does your CGI nest work? If so, maybe you should leave it alone. If you are fixing specific problems, then go ahead, but if this is a generalized attempt to fix the 'not invented here' syndrome that plagues engineers (who will almost universally agree that it is easier to write code than it is to read it), perhaps you should reconsider.
Re:Sleeping dogs (Score:3, Informative)
This is a lesson to be learned. Engineer your code from the beginning. Use easy-to-understand comments and structured code. Although it takes some discipline, you will almost never have to consider "re-writing from scratch".
Re:Sleeping dogs (Score:3, Insightful)
Re:Sleeping dogs (Score:3, Insightful)
Thinking that writing one-off code is giving you flexibility is a grave mistake.
-sam
Re:Sleeping dogs (Score:2)
++ to that comment.
A general refactoring, without the intent of doing it to add new functionality / fix bugs, isn't worth it. What business value do you gain from it?
However, there are times that it is worth undertaking substantial refactoring to add what may seem like relatively small pieces of functionality. I think it was in his _Refactoring_ book that Martin Fowler likened this to reaching a local peak in a mountain range, where you have to go down into a valley before you can climb the next peak.
ObOfftopic: This is my first post in a while that hasn't been a troll or flamebait. It just isn't as fun being informative.
Re:Sleeping dogs (Score:5, Insightful)
Related note: the original poster doesn't say "refactoring" and he does say "Perl." Informative statements relating to these two facts:
Re:Sleeping dogs (Score:5, Funny)
I second the concern about Perl. And I offer my advice as someone who has virtually no qualifications to talk about large systems of code. I just like Python [python.org] better than Perl because it doesn't hurt my eyeballs the way Perl does.
Yeah, I guess this is a troll. But it's honest. I use Python like most people use toilet paper: several times a day, and for more things than it was originally intended.
Re:Sleeping dogs (Score:3, Insightful)
Unless you're writing an operating system, why use C? What does it buy you?
IIRC, the avionics system of the F-15 is done in Ada, because it's "safer" than C. I'd say that's "mission critical".
I too am wary of Perl, but like everything else, it's the right tool for _some_ jobs, arguably some pretty important jobs.
Perlmonks.org (Score:2, Informative)
What a monstrous Christmas troll you are. What qualifies you to make this judgement? Perl, like any other mature language, has people who write kludges with it and people who write clean, elegant code with it. Your lousy Perl code is not indicative of a language problem.
That said, you're probably stuck with it, and AFAIK, you may be forging new paths in programming for reusability by applying the above concepts to Perl.
And this shows how much you know, since the Perl community is full of activity around design patterns, refactoring tools, unit-testing, and other practices which are in favor among experienced people trying to write solid, maintainable code.
My suggestion for those who are looking for actual useful advice rather than this kind of "throw away all your work and learn Java" crap, would be to head straight for http://perlmonks.org/ [perlmonks.org] and read up. There's tons of advice there for serious Perl coders. You would also do well to start reading the mod_perl mailing list, which often has informative discussions about these issues.
Re:Perlmonks.org (Score:2)
wouldn't suggest using it for a large project.
Re:Sleeping dogs (Score:3, Interesting)
I agree 100%. My company started to bring a commercial application to market a little less than a year ago. I prototyped the code in Perl, and the prototype was sufficiently okay that the decision was made to evolve the prototype into the release code.
This was A Mistake. It was A Dumb Idea. It was also My Decision. I have taken Much Shit for this from my coworkers. But you live and you learn.
I have since (over the past four months or so) rewritten the entire application-- every line, every file-- in C++. The source tree is 3.8 MB, and it compiles to about 100 MB of object code. (The actual executables, of course, are much smaller than that.) It was a pretty big job.
Not only is my code tighter and cleaner than the original Perl stuff (which was actually pretty okay code) but it's between 2 and 10 times faster.
I love Perl, absolutely adore programming in it, but there are some things that are easier to do with C, or C++, or (presumably) Java. When you split a project up among a number of people, for example, using the Bridge design pattern and distributing read-only interface header files makes modular integration so very much easier. That's just one example.
We would not have been able to get our app to market without the Perl prototype. And I don't think it would have been worth a damn if we hadn't rewritten it in C++.
see joel on software (Score:4, Informative)
Re:see joel on software (Score:3, Interesting)
Unexploded bombs, not sleeping dogs (Score:5, Insightful)
Your tangled mass of spaghetti code paths is probably full of almost incomprehensible little design decisions and seemingly out-of-place declarations and functions, but most of those were probably added as specific fixes for bugs encountered under real-world use.
Yes, and if they're cryptic and uncommented, they are worthless. Eventually one of these incomprehensible, magical fixes will stop working. Perhaps the bug it works around is fixed. Perhaps the way the function is being used changes to previously unexpected behavior. Some poor engineer will look at the little bit of magic, scratch his head, and be forced to make a blind decision about how to fix it. Perhaps he can change the code while leaving the bit of magic working, but he can't be certain, since he doesn't understand it. If the collection of cryptic tweaks becomes dense enough, any attempt to fix a bug or add a feature becomes highly risky.
On a related note, don't let this happen to you. If you add one of these strange little fixes, then for the sake of the programmer who follows you, document it. Just a little "Need to toggle the Foo because Qux 1.4 incorrectly fails to do so" will bring tears of joy to the eyes of future programmers.
Re:Unexploded bombs, not sleeping dogs (Score:3, Insightful)
Instead, refactor and unit test. Unlike comments inserted into the middle of the code, unit tests will fail and point you to the reason for the failure. In the above example, when we upgrade to Qux 1.5, the unit test which asserts that the Foo is untoggled will fail, and will point right to the function which made the assumption. Bingo -- a quick fix.
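Using the (hypothetical) Foo and Qux from the parent post, such a test might look like this with Perl's core Test::More module:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;

my $foo_toggled = 0;

sub buggy_qux { }    # stand-in for Qux 1.4, which fails to toggle the Foo

# The workaround: call Qux, then toggle the Foo ourselves.
sub qux_with_workaround {
    buggy_qux();
    $foo_toggled = 1;
}

qux_with_workaround();
is( $foo_toggled, 1, 'Foo ends up toggled despite the Qux 1.4 bug' );
```

When a future Qux starts doing the toggle itself, this test is the thing that will break first and point at the stale workaround.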
-Billy
Re:Unexploded bombs, not sleeping dogs (Score:5, Insightful)
A better solution would be to extract the thing you're commenting into a function, and name the function so that the comment is unneeded. Don't forget to write a test which checks the purpose of the function. Then (and only then) delete the comment.
MakeNotAccidentallyClickable(widget);
See what I mean? Now it's impossible for your comments to become out of sync with your code -- your code IS your comments.
-Billy
Re:Sleeping dogs (Score:2)
... which is why whenever you add one of these "little design decisions" whose purpose isn't blatantly obvious, it's important to put in a comment saying what it does and why it is there. Otherwise someone might come through later on, think it's an error, and remove it.
A Code Nazi Disagrees (Score:2)
If you need to make big changes to the code, then you are already hosed. If you have no record of what bugs you fixed, and no way of testing if those bug fixes are still working after you make code changes, then keeping all of your spaghetti in place is no guarantee that you won't re-introduce bugs later on.
The Code Nazi approach is to write a unit test for each bug you fix. And you have an easy way to re-run all of your unit tests. After you make a big change, you can test if all of your bug fixes are still working. And now you have the flexibility to refactor your code whenever you want. Which means you can keep your code base clean and elegant, and you'll never reach the crisis point that this group has reached. This is the approach advocated by Extreme Programming, as well as by other software disciplines.
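A minimal per-bug regression test in that style, using Perl's core Test::More (the bug number and function here are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# Invented example: bug #142 was "parse_price() broke on padded input".
sub parse_price {
    my ($s) = @_;
    $s =~ s/^\s+|\s+$//g;    # the fix for #142: trim before converting
    return 0 + $s;
}

is( parse_price('3.50'),   3.5, 'normal input still parses' );
is( parse_price(' 3.50 '), 3.5, 'regression test for bug #142' );
```

Re-running the whole suite after any big change then tells you in seconds whether an old fix has come undone.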
Doug Moen
Joel Spolsky's Take... (Score:2)
Joel Spolsky has an article [joelonsoftware.com] up on his blog site that speaks to this point.
He uses Netscape's decision to rewrite Netscape 6 from scratch as an example, and expands upon many of the points mentioned above.
Re:Sleeping dogs (Score:2)
When you investigate those twisty little lines of code, see if a business rule can be simplified. Try really hard. If it can't be, then put it into code. Don't make the opposite mistake of making the rules TOO simple.
It's called depreciation (Score:4, Interesting)
There's a concept in the world of stuff you can touch called 'depreciation'. Why the software world hasn't caught onto this is beyond me, but from my experience it seems to apply reasonably well.
The value of your code goes down over time. Now, don't confuse that with the value of your design - your design could be grand, but the code is part of your equipment, like your hardware. Depreciate it over time to reflect the increasing cost of maintenance and integration in a migrating business.
Reworking your code allows you to make adjustments to your design to reflect a new environment, or to move away from languages/APIs/toolkits that might be hard to maintain.
I depreciate my code over about 3 years. It's all modular, and I replace code about as frequently as I add it. A number of years ago, I had tons of bandwidth but not much CPU power, so I tended to push data rather than compute. The reverse is now true, so I made some design changes as part of a standard rewrite - no need to wait until it broke. Overall, most of my code hasn't radically changed in its design, but it has been rewritten several times. I've had code cut back to 10% of its original size by adopting a new toolkit, etc. I've made it more robust, faster, cleaner, better documented. I can't think of a case where it's gotten worse, and I can't think of a case where the rewrite took much longer than the original code - and more often than not it took much less time. It's worked well enough that I've added a considerable amount of functionality, but I spend no more time reworking code, because it becomes increasingly efficient and is never too far removed from future additions.
Many people suggest that code rewrites are a waste of time, but it's a maintenance function. People that only budget time to write new code often find those extra work hours devoted to maintenance. Budget it in - and the best way is through rewrites.
Extreme Programming - Refactor (Score:3, Interesting)
Re:Extreme Programming - Refactor (Score:2, Insightful)
Interpreted by management?
Not that I'm bitter, cause I'm not... but XP takes a handful of generally good practices and assumes perfect management buy-in and team communication. With any management support and teamwork, almost any method works.
Re:Extreme Programming - Refactor (Score:2, Funny)
First time I heard the term, I asked the guy "Is that where you jump out of an airplane with a parachute and a laptop?"
Modules (Score:2)
CYA (Score:5, Insightful)
Don't stop maintaining the old code until the new code is on solid ground. No matter how sure you are that you can do it, the new code might never come through.
The Gardener
Don't do it (Score:3, Informative)
Basically, Joel's take on a similar problem is: don't do it.
Unless you have a _really_ good reason to make a huge change to a big codebase, don't bother; do something more productive instead.
Code style and large rewrites. (Score:2, Interesting)
That being said, my last company sat around and bickered about code style for nearly 4 months and produced no code that wasn't rewritten later. If you are going to concern yourself about style, settle that well in advance and make sure it's logical and consistent.
It's also been my experience that conformant code style is highly overrated. Once the Best Practices document extends beyond language constructs and caveats -- into brace styles, spacing, tab size (yes, there was a 3-space tab stop standard at my last job -- retch), and even the naming of locals, parameters, members, constants, enumerations, etc. -- it gets to be a thick-ass bible of stuff that only a few people will digest or attempt to adhere to. The point I'm trying to make is: choose your battles. The hope is that your developers will make sane choices independently, and use standards to help integrate different people's work together. Anything beyond that and it's pissing in the wind.
My $2e-2 or less.
Horror Story (Score:3, Informative)
Anyway, it took 7 of us over 2 months to get even halfway done. The pressure the boss was putting on us was awful, and she didn't really even understand what we were doing, even though she was the one demanding it. I think she read about it in a trade mag somewhere. God, I'd do a lot more work if she didn't read that shit.
Anyway, about halfway through the "Great Leap Forward" (as we [appropriately] named it), the boss quit and was replaced by a new boss, who so far has been fairly clueful. He didn't think the whole deal was needed, but he was pressured by the former boss's husband (the CTO) to get it done. Seriously.
Hope yours goes better than ours. From what we did, here are some tips I can give you.
1. Be consistent through the whole thing.
2. Make sure everything is planned before you start. This was the one part we got right.
3. The team you have should have worked together before, because this sort of task requires previous knowledge of each other.
Other than that, my condolences. Or maybe it will work better for you.
Good luck!
Rule #1 (Score:2, Informative)
Transitioning in new managers or having the current manager only look in on the project once in a while is as sure a path to madness and doom as no management at all.
Our due date was mid-August, we'll be lucky to get it through testing and into production by January 31st. All the while with the logjam we're having to put pieces of it into production and cross our fingers that the new changes don't break anything.
Love to talk more about it, but need another gallon of coffee.
That there is no such thing... (Score:4, Insightful)
A large scale retrofit is really an oxymoron.
IMHO, but with 15 years experience.
Listen To The Experts (Score:3, Funny)
Document (Score:5, Insightful)
1) Know what you are doing. 2) Know what the code does.
Both are expedited by good documentation. This is so important, it deserves to be written thusly:
DOCUMENTATION!
Write everything down. If the code is not commented, figure out what it does and write it down. When you add a line or a module, write down what it is supposed to do. Declarations? Write those down too. Document everything so that you can figure everything out, both now and down the road when you decide to fix something else.
This is the voice of experience. I have had to reverse-engineer my own code 6 months after I wrote it because I failed to document anything. Learn from my mistake.
Minimise Untested Documentation! (Score:3, Interesting)
Use as little documentation as possible, BUT NO LESS. (In other words, for heaven's sake don't ever try to get away without any documentation.) Documentation should state fundamental premises -- things like "The customer wants X." and "This code checks that I'm fulfilling the customer's requirement of X." Documentation should not state intrinsic properties -- the statement "this code does X" should be made as a test, not a document.
-Billy
Re:Minimise Untested Documentation! (Score:3, Informative)
Document every function, listing the purpose of every parameter and the meaning of the return value. Document why you are doing something if there is more than one way to do it. If a section of code fixes a bug, document what it does and do not just document a bug number. Use self-documenting code whenever possible (i.e., name your variables and functions meaningfully). Use a document generation tool if possible (javadoc, doxygen, etc). Write the user docs at the same time you're writing the code. Incorporate the user docs into the code if at all possible.
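In Perl, the usual generated-documentation format is POD, which perldoc and other tools extract straight from the source; a small sketch (the function is invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

=head2 monthly_rate

  my $rate = monthly_rate($annual_rate);

Converts an annual interest rate (e.g. 0.12 for 12%) to the
equivalent simple monthly rate. Returns a number.

=cut

sub monthly_rate {
    my ($annual_rate) = @_;
    return $annual_rate / 12;
}

print monthly_rate(0.12), "\n";    # prints 0.01
```

Because the POD sits next to the sub it describes, the parameter and return-value documentation travels with the code and stays visible in every review.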
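An invented Perl pair illustrating the difference between a bad comment and a good one:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @rows = ('name,price', 'widget,3.50');

# Bad comment -- it merely restates the code:
#   my $start = 1;    # set start to 1

# Good comment -- it explains what the code cannot:
my $start = 1;    # row 0 is the header, which has no price field

print "$_\n" for @rows[$start .. $#rows];    # prints only the data rows
```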
The most important part of commenting is realizing who is going to read it. It may be you. But in all likelihood it will be someone you never met long after you have left the project or even the company. It may be code or design reviewers who don't know programming but do know how to block projects they can't understand. If it's Open Source, it may be some brilliant programmer wanting to fix a bug but without the time to puzzle over your constructs.
In every code review I have ever been in, someone has made some silly assumption about the code, with the final recommendation that that section of code be commented better to avoid future silly assumptions.
Think again (Score:2, Insightful)
From what you say, you are planning on making these changes to clean up the code and make it prettier.
I would strongly urge you to reconsider this, as the probability that you will end up breaking parts of the code while "cleaning" it up is quite high, especially since you seem to have a fairly large code base (~1 MB).
Ugly code containing redundant stuff is still better than beautiful code that doesn't work.
Regression tests (Score:2, Offtopic)
See no evil, hear no evil, ...? (Score:2, Flamebait)
It really is valid (and in my opinion, correct) to say that if you _are_ going to do this you should look at other technologies and languages. Perl is for system administrators and system administrators-cum-developers, not real software development. Look at java. Look at PHP. Look at commercial and non-commerical web application systems, like Zope. Or don't rewrite it at all if it works. But for God's Sake, don't rewrite it in Perl - it's pointless.
Perl is a *tool* (Score:3, Insightful)
Baloney.
Contrary to what lots of people around here seem to think, especially those who like to make wide-sweeping declarations about what things are and what they are not, Perl is a tool. Nothing less, nothing more. Like all tools, it can be used to create well-engineered systems, and it can be used to create crap.
The community that grew up around Perl is all-welcoming and generally free of elitism. That's why a lot of newbie programmers and "system administrators-cum-developers" use Perl -- because they can without getting crapped on by others who think they know better. As a result, there is a lot of amateurish-looking Perl code out there, but that's not a result of the language, that's a result of the all-inclusive set of people who use Perl.
Let's be clear: If you write code in any language and the code sucks, it's your fault, not the language's -- the language is just a tool.
Don't blame the problems of programmers on their tools.
Re:See no evil, hear no evil, ...? (Score:2)
The team is obviously already familiar with perl, and some code could be reused after a bit of cleaning. Perl is perfect for applications that require a lot of string manipulation and as a front end for a database. This is what most web programming involves.
The only drawback that I can see in using perl is if your team is messy. Perl will allow you to be messy, but your code doesn't have to be. Perl can look just as clean as any other language. I go back to apps I wrote 3 years ago in perl and can follow every bit of it. It's all commented and has plenty of white space to make it easy on the eyes.
I've written several large perl applications for the web and yes, I do know other languages, but perl is better suited for this type of work. It's fairly quick to put together and performs well. Now I wouldn't try to write an OS or a first-person shoot-'em-up game in perl, but for this it's just as viable a choice as anything else.
I'm also not saying that PHP or java would be bad if you have a good grasp on it and are starting a new project. Remember, TMTOWTDI is good!
Just don't use ASP!
Re:See no evil, hear no evil, ...? (Score:3, Insightful)
Mmmm. I write large-scale web applications for a living, and I do it in Perl. By large-scale, I mean sites that are expected to support hundreds of thousands of page-views per day, serve hundreds of thousands of distinct users per month, and collate hundreds of thousands of distinct chunks of content into dynamic pages.
My company is a high-end development shop. We generally bid on projects that will take six to nine months to complete, and we only do jobs for clients who understand how we work and why. Part of our approach is to use very small teams of extremely experienced web developers. We usually deploy four programmers on a project. Other companies that bid against us sometimes use several times that many people on a single development team. Another part of our approach is to build everything on top of our open-source development framework [xymbollab.com]. That sometimes used to be a tough sell ("what, give software away for free, heaven forfend") but these days, most customers are pretty receptive to the twin "more eyeballs means better code" and "you're not locked into our closed, proprietary product" arguments.
We also generally build small clusters of dual-processor Athlon or PIII machines, whereas our Java- (Oracle-, IBM-, BEA-, etc.) wielding colleagues often specify absurdly expensive hardware.
Perl is flexible, complete, performs relatively well and has an extraordinary base of skilled developers and re-usable components (CPAN). We couldn't work as quickly or write code that is as concise and maintainable using any other currently-available language/approach. Most of the lines of code that we write actually go into Perl modules, with HTML::Mason templates handling the dynamic web-page generation. We can push Mason out of the way and use straight mod_perl for small, defined tasks. And we can easily integrate C code into our Perl frameworks in places where performance is really, really critical (though those places are rare, as when push comes to shove, one is almost always waiting on the database).
There are lots of things that aren't "right" about web development. The package that results from gluing HTML and program logic together in a stateless execution environment is sometimes a little lumpy, and unavoidably so. There's no magic bullet toolkit, and (as with other specialized programming arenas, like graphics or embedded systems) a lot of hard-won, domain-specific knowledge goes into the development of a fast, reliable, maintainable web app.
The Perl/Apache/Mason combination that we use is far from perfect. But it's better -- for us -- than any of the alternatives.
I really like Java, and have written big systems in that language, too. If for some reason I had to manage a very large team of programmers, or had to manage a team with a large percentage of less-experienced programmers, I would use a Java-based solution. Java is a more rigid language than Perl, and the structure that the language provides would be a useful management tool in those contexts. But for my small teams of skilled hackers, Perl is more productive. (We have an extensive, evolved, self-imposed "structure," so we don't need the language to impose one on us -- in fact, it gets in the way.)
I would never use PHP for the kind of work we do. PHP just isn't the kind of powerful, flexible, complete environment that Perl is.
Zope and Python are really neat. I'm a fan of the work that folks on that side of the fence are doing. But Python+Zope don't offer us anything new that Perl+Mason+Comma don't. I also like Perl more than Python (which is a subjective preference), and think that the Perl development environment is more mature (which is a subjective judgment).
So don't listen to the folks who tell you to dump Perl. You should certainly consider all of your options and make an informed decision about core tools, but anyone who thinks that Perl is just a "scripting" language, or that it doesn't scale, hasn't been paying attention.
To finish this up with a little more specific advice to the original poster: You mentioned "150+ perl CGIs" in your question. You should consider moving away from the CGI model, if possible. Take a look at HTML::Mason [masonhq.com], which is a very good embedded-perl environment. You can build solid, consistent application layers using Mason as a base. Also, I couldn't agree more with the folks recommending writing perl modules and requiring complete regression tests for each module. There are lots of ways to write tests, but in perl-land one of the easiest is to simply make a t/ directory down your module tree, write a bunch of scripts in that directory named <some-test>.t that print out a series of "ok <n>\n" lines, and use make test or Test::Harness::runtests() to invoke them.
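To make that concrete, here's a minimal sketch of one such t/ script in the "ok <n>" style. The module function, format_phone, is an invented stand-in, inlined here so the example is self-contained; in real use it would come from one of your modules.

```perl
#!/usr/bin/perl
# t/format.t -- a minimal regression test in the classic "ok <n>" style.
# format_phone() is a hypothetical routine, defined below for illustration.
use strict;
use warnings;

print "1..2\n";    # declare how many tests will run

# Test 1: the function returns something defined for valid input.
my $result = format_phone('5551234567');
print defined $result ? "ok 1\n" : "not ok 1\n";

# Test 2: the output matches the expected format.
print $result eq '(555) 123-4567' ? "ok 2\n" : "not ok 2\n";

# Stand-in for the real module function, so this sketch is self-contained.
sub format_phone {
    my ($digits) = @_;
    $digits =~ /^(\d{3})(\d{3})(\d{4})$/ or return undef;
    return "($1) $2-$3";
}
```

Test::Harness::runtests() (or `make test` from a MakeMaker-generated Makefile) will then run every t/*.t file and summarize the ok/not ok lines.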
Re:Perl vs. PHP ? (Score:3, Informative)
For example, most enterprise applications do something important -- trade shares, manage accounts, track patient records, etc. How it's done is governed by business logic -- the company rules, policies, regulations, procedures, and so on. Now, you can spread this logic across your web site (as one might do using PHP, which is tied to the web site). Or you can bundle it up into an independent application, keeping all of the business logic in sensible, cohesive compartments that run on an application server (e.g., using one of the existing Perl app servers [google.com] or one you've rolled yourself via, say, POE [perl.org]). This not only makes the business logic easier to understand and manage but also makes the logic independent of and accessible from any number of front ends that you might need. Simultaneous web, client-server, and even command-line interfaces become possible, and for enterprise projects, multiple simultaneous interfaces is often a requirement for backward compatibility with older interfaces.
In summary, for shallow web-only stuff, PHP is a reasonable tool. For stuff beyond web work, PHP is out of its design envelope. However, Perl works here just as well as it does for web work.
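As a rough sketch of that compartmentalization (the package name and the business rule here are invented for illustration), the logic lives in a plain Perl module with no knowledge of any front end:

```perl
# Hypothetical business-logic module: no CGI, no HTML, just the rule.
package Accounts::Rules;
use strict;
use warnings;

# Company policy: withdrawals may not overdraw the account.
sub can_withdraw {
    my ($balance, $amount) = @_;
    return $amount > 0 && $amount <= $balance;
}

1;

package main;

# The same rule is now callable from a web front end, a cron job,
# or a command-line tool, without duplicating the policy anywhere.
print Accounts::Rules::can_withdraw(100, 50) ? "allowed\n" : "denied\n";
```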
Architect it to death (Score:2, Insightful)
Also, since you have decided to pour resources into this thing, my opinion would be to make as much of your code as generic as possible so that you don't have to make code changes later. It doesn't matter if there is an initial performance hit, because in the short term you can convince your boss to get a new leet server, and in the future the hardware needed to run your apps will be trivially cheap anyway.
If you are going to cut a new release, try to avoid going back and taking snippets of code from your old system. It makes people slide back into the old paradigm and can have detrimental effects on your new system. This is new, so the less exposure to the old one, the better.
Use a framework, separate code from display... and... (Score:3, Interesting)
Resin is a JSP, Servlet, XML, XSLT application server that supports all the latest and greatest EJB components and managed persistence on the database, and it makes a great framework to build from.
I have PostgreSQL for the database, and use beans to run queries and output via XML; XSLT then transforms the XML data into the wonderful HTML code.
The beauty is, my code is in beans, servlets and JSPs. My HTML is in XSLT stylesheets.
This just means I can produce output easily for WAP, Palm, CE, normal web browsing, email, and whatnot without modifying the backend. Just create XSLT using a session identifier to bring up the corresponding stylesheets for whatever device is accessing the page.
Enough about Java, but something similar, even if developed in house, would be your best bet.
I also get away with using a Swing application to manage the database and users and run reports; it provides an easy-to-navigate GUI which just interprets the same XML data that would be retrieved by the HTML client. Not a single change to the backend, since I'm using the beauty of SOAP, JSPs, servlets and Java.
Virtualize your interfaces, standardize your backend and use reusable components. Perl is similar in many ways: you can load libraries and abstract your code from the display, which will save you tons of hours of hassle in any future upgrades compared to the bit of change you would do now.
Don't Listen (Score:5, Insightful)
- Do not undervalue the investment you have in your existing coding language. New challenges await you if you jump to a new language like Java; make the jump only if you are excited about learning the new language.
- If you use your existing coding language you will literally fly through the retrofit. Do it piece by piece: make all those changes first, then test the app, then make the next set of changes and test again. The simple fact is, most wasted time is spent on bugs, not on performance work, and you've already knocked down a lot of bugs; don't let them pop back up by blowing everything up. There are books on this.
- Sometimes blowing everything up is worth it. Do it right this time. Realize it won't be as perfect as you might think it will be.
- Remember there are countless open source and shareware products that tried to create TNG with a total rewrite, got nowhere, and ended up improving their existing product. Remember the lesson: bite off only what you can chew.
- Spend a week poking around researching possibilities. I do this all the time, bookmarking things I think are important. Then for the next project you've got all the little things you might forget at your fingertips: optimizations, tools, paradigms. Think you know it all? You'd be surprised at what is out there, what you missed, and what you spent a month re-inventing in house. This one's important.
- Use open source software. Nothing beats free. Nothing is more fun. Java's ugly standardization history makes me puke... the BS Sun has pulled with Java is staggering. That the Java Lobby swallows it and loves it even more so. This is irrelevant to your question, and not fair to the Lobby, but I like to give them a hard time.
- Corollary to Java: you need less abstract design than you think. Endless object hierarchies will weigh you and your app down. There are books on this too.
- You need more documentation than you think. Ever found code someone ELSE wrote too EASY to follow? I don't think so. Especially if you are using Perl and someone is enjoying the line-noise capabilities Perl allows. Perl has 20 ways to do EVERYTHING; you may not know the latest or twistiest. Document as you WRITE the code. Do not leave at the end of the day without catching up the docs. A week of documenting is the worst form of hell, avoided with a minute's worth of clarification each time you write a function/class.
- Hardware is cheap.
Anyways, have fun... and good luck. Be interested to read what others have to say.
Re:Don't Listen (Score:2, Informative)
This is good advice. To be more specific:
1. START with your regression test suite
2. Then add self-documentation features like standard naming conventions. It seems dull, bureaucratic and pointless, but it really truly saves maintenance time.
3. Have a standard comment header for each function. The standard should answer questions like "Can that argument be NULL?" and "What do the error returns mean?"
4. If you're going through every line already, do a security audit.
There's good advice in the refactoring books, for example http://www1.fatbrain.com/asp/bookinfo/bookinfo.as
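A standard header along the lines of point 3 might look like this in practice; the function and its contents are invented, but the header answers exactly those questions:

```perl
use strict;
use warnings;

# lookup_user
#
# Arguments:
#   $login  - login name to search for; must be defined
# Returns:
#   hashref of user fields on success
#   undef if no such user exists
# Errors:
#   dies if the login argument is missing entirely
sub lookup_user {
    my ($login) = @_;
    die "lookup_user: login argument required" unless defined $login;

    # Stand-in for a real data source, so this sketch runs on its own.
    my %fake_db = ( alice => { login => 'alice', role => 'admin' } );
    return $fake_db{$login};    # undef when the user is unknown
}

my $user = lookup_user('alice');
print "$user->{login} is $user->{role}\n";
```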
Re:Don't Listen (Score:2)
Re:Don't Listen (Score:3, Interesting)
All too many times I see sites begging for money for hosting/bandwidth. Take a look at their HTML/CSS and see it is HUGELY bloated (no linked CSS; all caching prevented, including images, by default cache-buster installs) and not gzipped, and I wonder: if what I can see is so bad, behind the scenes it is probably even worse (i.e. dynamic page generation where none is needed). Wish I had included this in my list and left off the hardware point, which I agree is the wrong message.
But damn, if you do code right, hardware is so cheap I can't believe it. I'm convinced some 10-machine projects with bad coding could be supported on a single machine now.
Hardware isn't cheap (Score:2)
Yeah, adding a gig of RAM or bumping the CPU from 800MHz to 1.8GHz isn't expensive. But if you go beyond what a single-processor machine can handle, you run into another host of problems.
Adding a second CPU means *MUCH* higher chances of race conditions and other threading bugs. If you know you're coding for a single processor, you can often use a single-threaded model which makes life so much easier.
Adding clustering brings a whole host of data synchronization problems. It's *ALWAYS* easier to code for a single machine than for a cluster. There are tools you can use to make shared memory easier, but those often flood the network.
Re:Don't Listen (Score:2)
The "hardware is cheap" mantra is exactly why we have crappy code and bloatware. "I don't have to optimize, hardware is cheap." "Why make it efficient? Hardware's cheap!" "I don't care that it is slow, hardware is so cheap these days."
Please, everyone, carry stones in your pockets and throw them really hard at the next programmer who makes that statement.
Re:Don't Listen (Score:2)
"Sure, the dual tbird appro should handle 10000 users"
omitting, of course, the qualifier "if it was all written in clean, tight C instead of Java."
KISS (Score:2, Insightful)
I'm a strong believer in managed code (whether it's C# or Java), except managed extensions to C++, which are damn pig ugly.
Make sure there is GOOD in-code documentation (and out-of-code documentation) to explain the intention of the code. (Intentional programming, btw, is a research area: programming one's intentions.)
You know something is bad when you have to maintain that code later on, 6 months down the line or whenever. That's when it bites you in the buttie.
Refactoring your project (Score:3, Insightful)
1. Write a test for a specific block of code.
2. Apply the refactoring [refactoring.org]
You are going to want a good testing framework.
To expand on the modules post: Do a dependency analysis. If you are writing DB based code, look at what tables can be logically grouped together.
We did something like this at my company not too long ago. The base-level package we had was the security package, which identifies users and roles; most other packages depended on this. All contact-management stuff went into a package called Directory. All stuff for the people our system was managing went into Participant, etc.
For each of the packages, split the code out into a set of interfaces, and a set of implementing code for business logic, and the UI required to drive that business logic. This is the standard breakdown for code. You may want to further pull out code into helper packages. Avoid the urge to call these helper or util, and instead try to name them based on what they contain: we have one called format for stuff like phone numbers and social security numbers.
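A sketch of one such helper package, named for what it contains rather than "util" (the package name and routine are invented here):

```perl
# Hypothetical helper package, named after its contents.
package MyApp::Format;
use strict;
use warnings;

# Format a 9-digit string as a social security number.
sub ssn {
    my ($digits) = @_;
    $digits =~ /^(\d{3})(\d{2})(\d{4})$/ or return undef;
    return "$1-$2-$3";
}

1;

package main;
print MyApp::Format::ssn('123456789'), "\n";   # 123-45-6789
```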
Don't forget the make scripts. Whatever build system you use, it should let you specify which modules you want to compile/deploy.
I recommend a little UML modeling session for the end package structure.
Go in little pieces. After each refactoring, make sure stuff still runs.
Good Luck
Playing with fire... (Score:2, Insightful)
Most knowledge is in their heads, not on paper, unfortunately. Their experience in that area can never be written down.
From experience: (Score:2)
Don't build it. Instead, evaluate the code you have now and plot a course towards the idealized system. Approach the actual work of the "retrofit" incrementally. Count on having multiple customer-facing revisions of the software tagged and QA'd before the system you're delivering looks anything like the planned rewrite.
Taking baby steps towards a new design is probably the only way you'll ever migrate your project to that design. With the knowledge you've accrued working on the old system, it probably seems straightforward to start from scratch. Even if this isn't wishful thinking, though, it's a waste of time. Part of the discipline of design is an understanding of where the "hot spots" are that can't tolerate inferior implementation, and how to tell those hots spots from the spongy mass of integration, reporting, tracing, and sanitizing that is neither performance sensitive nor mutable enough to justify engineering effort.
You can take early baby steps that make it easier to make holistic changes down the road. Refactor relentlessly. Migrate code recklessly out of subsystems and into common repositories and libraries. I've found it handy to distinguish between "proper" shared library and "dumps" of utility code that don't need scrupulously conceived interfaces.
Most importantly, design for testability. In this respect, the biggest asset you have is the steaming lump of old Perl code you're facing; use it to figure out the expected behavior of subsystems. Write replacements, in modules with clean interfaces, and unit test them. A unit test probes code (functions, statements, internal states) --- NOT entire programs. You'll work ten times faster when you can move forward as a team knowing what components you can trust and what components you need to worry about. You'll work ten times slower if you haven't clarified your outputs, side effects, and return values enough to know whether your replacement parts are valid!
We've seen articles on Slashdot before about this and I agree with the prevailing opinion: rewrites are often seductive traps and time sinks that don't offer value to customers. A better mentality that will eventually get you where you (think you) want to be is to adopt a strategy of constant measurement (testing, profiling, debugging) and improvement.
Parallel development (Score:2)
It might seem like an obvious step, but don't throw away the old system until you're sure that the new one works! Keep somebody minding the existing, working system so that if/when your attempt to completely rework it fails, you won't be stuck. Once you have rewritten it, try setting it up on a trial basis in parallel to the working system so you can find the crippling bugs before they take down your system.
While it's not a perfect example, Slashdot is actually a decent example with the switch to their new system. They kept the old, crufty version as the primary and set up a beta site with the new software. They knew that there would be problems, got some of their more loyal users to test the new system, and only switched over after they were pretty confident that they had gotten the worst problems out of the way.
You can afford to take a few more risks as long as you keep a known working system around as a fallback.
Small steps (Score:5, Insightful)
Large overhauls are usually mistakes. Details in the previous code are lost. If the overhaul takes non-trivial time, people become frustrated that two weeks ago they had a working (if problematic) system and today most of the system doesn't work.
Instead, make small incremental changes. Pick something lots of code is replicating and attempt to unify it into a shared code base. Spend some time documenting key parts of the code. Pick a particularly hairy class or function and untangle some of the worst bits. These sorts of changes can reveal minor bugs, build up to significant improvements, and leave you satisfied at the end of the day that you improved things.
If a significant overhaul is necessary, try to overhaul portions while maintaining the existing bits.
Compartmentalize & Destroy (Score:5, Informative)
2) Encapsulate in libraries
3) Be sure to extract enough generality that you don't have special case functions
4) Don't extract so much generality that functional interfaces become unwieldy.
5) Write everything in the same language.
6) Find any complex pieces or algorithms. If they can be simplified or re-written, do it. If not, save it so you don't need to debug it again.
7) Throw everything else away.
Regression test suite (Score:5, Insightful)
For those not familiar with the term 'regression test':
Program a set of so-called "test drivers": programs (routines/scripts) that call your code.
Define test data, either in a DB or in flat files, used by those driver programs.
The test programs and test data need to work with the old code, of course.
As the new code should behave similarly, you only need to adjust PATH or script names to let the test programs work with the new code.
Plan your project by defining which test cases (test program plus test data) should work with the changed code at each planned milestone.
After making changes rerun all tests.
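A bare-bones sketch of such a test driver might look like this; the old and new implementations and the test data are all faked inline for illustration (real data would come from flat files or a DB, as above):

```perl
#!/usr/bin/perl
# Hypothetical regression driver: feed identical input to the old and new
# versions of a routine and insist on identical answers.
use strict;
use warnings;

# Stand-ins for the old and new implementations under test.
sub old_tax { my ($amount) = @_; return sprintf '%.2f', $amount * 0.07 }
sub new_tax { my ($amount) = @_; return sprintf '%.2f', $amount * 0.07 }

# Test data would normally live in flat files or a DB; inlined here.
my @cases = (100, 19.99, 0);

my $failures = 0;
for my $amount (@cases) {
    my ($old, $new) = (old_tax($amount), new_tax($amount));
    if ($old ne $new) {
        warn "MISMATCH for $amount: old=$old new=$new\n";
        $failures++;
    }
}
print $failures ? "FAIL ($failures)\n" : "PASS\n";
```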
Well, there is a lot more you could do, but the above is the minimum (basic software engineering; sorry, no art involved here).
Regards,
angel'o'sphere
Little by little (Score:3, Interesting)
Instead, leave most of the manpower working on the old tree as always. Take a small team of your best people and have them gather input from the others, while the others keep working on the old tree. Then have the small team outline the changes that need to be made.
Work on the changes while simultaneously working on the usual stuff. Say, 90% of your manpower should do what they do now, and 10% should work on restructuring things. One mouthful at a time.
When you have a mouthful that you think is ready, branch the old tree. Merge your diffs into the branch, TEST IT, and if things seem to work, land the change onto the old tree.
Re:Little by little (Score:2)
Don't play with code trees. Use only one "tree": the current, live one. Don't let one code base rot unmaintained while the other one is hacked to bits, untested.
Do take one mouthful at a time. Add unit tests, make sure they work. Now add a unit test for something you WANT to work, but which doesn't. Make that unit test work by implementing the feature. Release the result. Repeat.
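In miniature, that test-first loop might look like this (the feature, trimming whitespace, is an invented example):

```perl
use strict;
use warnings;

# The feature we WANTED to work: trimming leading/trailing whitespace.
# Running the unit test below before trim() existed would have failed;
# implementing the feature makes it pass.
sub trim {
    my ($s) = @_;
    $s =~ s/^\s+|\s+$//g;
    return $s;
}

# The unit test itself, kept around forever so the behavior
# can never silently regress.
die "trim failed" unless trim('  hello  ') eq 'hello';
print "ok\n";
```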
-Billy
It is called Refactoring. (Score:5, Insightful)
I would HIGHLY recommend the book "Refactoring" by Martin Fowler.
There is a number of things you should do here.
1. Document your plan and come up with an official PROPOSAL document. Allow others to comment on this document, and incorporate fixes for all relevant issues.
I started using this under the Apache Jetspeed project and now a lot of other Apache projects are accepting this practice.
It really allows the community to become involved in your changes and encourages constructive feedback and involvement.
2. Break this into phases. You should NOT attempt to do this all at once. Each phase should be isolated and should consist of one unit of work.
Each phase should be branched off of CVS, worked on, stabilized, brought back into HEAD and tagged. You should then RUN this code in a semi-deployment role for a period of time to correct all issues which WILL arise with the updated code.
After this you can then start your next phase.
3. UNIT TESTS! If management (assuming you have management) has approved the time for this type of refactor then you need to take the time and write Unit Tests for each major component.
Note that Unit Testing can sometimes be just as hard, if not harder, than the actual development itself.
In some situations you can avoid Unit Testing; some here are going to call me crazy for saying this, but it is true. In a lot of high-level applications, which are NOT used as libraries by other applications, you can bypass Unit Testing in order to reduce development time. This is a dangerous practice, but it is often outweighed by the extra functionality you will end up with in your product.
Anyway. Good luck!
Kevin
Re:It is called Refactoring. (Score:2)
[unwillingness to sacrifice key features in order to make time to build unit tests for every radio button]
Personally, I do test-intensive coding because it helps me go faster (build more functionality per unit time). If it didn't, I wouldn't. I also don't see the point in unit testing a radio button.
Re:It is called Refactoring. (Score:2)
[asynchronous input from five different sources, where that input consists of five different complex packages]
I've found that it makes a lot more sense to focus on testing the hard stuff, like this, than simple things like the ones you mentioned. By the way, it's also quite useful (totally apart from XP) to decompose such a beast into pieces you can test effectively.
[so don't sacrifice your project and your customer on the altar of XP]
I've never heard of this happening. Can you point to any project which did XP (most or all of it, not just talked about it) and resulted in an unhappy customer for that reason?
I've mostly seen people do little bits of XP (ignoring most of it), fail, and blame XP. A typical story is "we did XP, but we didn't do much testing, we didn't pair program, we didn't have a customer on site, we didn't integrate very often, we didn't track our progress, and our project went badly, therefore XP is bad". Weird.
Complete rewrite necessary or a waste of resources (Score:4, Insightful)
His comments about code reworking and rewriting have a lot of insight in them.
Here are some quotes from his article:
The point is this: spending time rewriting code completely may be a waste of the company's resources, and this is for you to determine.
His interview is here:
http://www.softwaremarketsolution.com/index_2.h
and his site has more information about the concepts here:
http://joel.editthispage.com/articles/fog000000
Microsoft, paragon of quality code (Score:2)
If you enforce a good software process for every line of code that is written, then you have more flexibility. For example, suppose that you use Extreme Programming. Then you will have a unit test for every bug that was fixed. Each time you re-run the unit tests, you find out if any of your previous bug fixes have become undone. This gives you the freedom to refactor your code whenever it needs to be refactored.
Doug Moen.
Re:Microsoft, paragon of quality code (Score:2)
In many of their products, the design decisions are documented, and documented well.
As with all documentation at that level of detail, good luck finding the right page...
Other than that, you're right.
-Billy
Re:Microsoft, paragon of quality code (Score:2)
Doug Moen.
Re:Microsoft, paragon of quality code (Score:2)
I personally have a bias against strong processes. I do admit that they can work, and work well; but they also cost a LOT, in every way; it's very hard to tell whether you're applying enough of the process to make it work. I'm definitely an XP fan; there, your process is light (although it IS hard), and feedback is immediate and automatic.
-Billy
modularity/incremental rollout/unit tests/iterate (Score:5, Informative)
Modularity is probably what you're already thinking about. Go over the old code base, in a code review, and find where the same thing is done over and over either with copy-and-paste code -- the bane of crap engineers -- or with different code that serves the same ends. Look for repeated sequences in particular. Create a new library that encapsulates those pieces of code.
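As an illustration of that encapsulation step (module and routine names invented), a date-formatting snippet pasted into dozens of scripts collapses into one shared module:

```perl
# Before: every CGI carried its own copy of this localtime boilerplate.
# After: one shared module, used everywhere.
package Shared::Dates;
use strict;
use warnings;

# Render an epoch time as YYYY-MM-DD.
sub ymd {
    my ($epoch) = @_;
    my (undef, undef, undef, $mday, $mon, $year) = localtime($epoch);
    return sprintf '%04d-%02d-%02d', $year + 1900, $mon + 1, $mday;
}

1;

package main;
print Shared::Dates::ymd(time), "\n";   # today's date, local time zone
```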
Incremental rollout is vital. Only replace small parts of your system at a time, doing complete retests frequently. Don't write a new encapsulated routine and then roll it out to each of the three dozen places in which it appears in the whole code base. Write the whole function library, with unit tests, and then start applying it to separable modules one by one, retesting as you go. Otherwise I guarantee the whole thing will fall apart and you won't be able to tell why. Ideally, you might set a threshold on the rate of replacement of old modules and work primarily on creating new modules with the abstracted logic.
Unit tests are crucial because, as noted, the messiness of your old code probably conceals a lot of necessary logic. We had this great phenomenon on Apple's Copland where people who had never used the old OS managers were rewriting them in C or C++ from the assembly source. When they saw something in the assembly they didn't understand, they just ignored it. Guess what -- the new managers didn't have any backwards compatibility. The only answer to this is to have a thorough unit test for any module that you replace, against which you can test the new version. This also confers other quality benefits, but during a rewrite it's critical.
Finally, once you have replaced a significant number of your modules, you will find that new levels of abstraction appear. The average size of each function or method will have shrunk considerably, and now it becomes possible to see new repeated code sequences that were not visible due to the old cruft. Move these into your new library modules and start using them in continuing replacement work. In addition, start going back -- slowly and incrementally -- through the already converted modules and replacing the repeated sequences with calls to the new abstractions.
Finally, figure out how you got into this mess in the first place. The worst programmer habit I know of is copy-and-paste coding instead of using subroutines. You can tell people not to do it, but some always will. Those people should be bid farewell -- you can't afford their overhead. Other common problems include lack of planning and review, a code first and think later mentality. Start moving your organization up the levels of the CMM and you may find that you wind up with fewer modules that need replacement.
Hope this helps.
Tim
I'd like some info on this as well... (Score:2)
Go slowly young Jedi (Score:2)
Don't set out to refactor your codebase as one big project. Try to split up the code by functional areas and take them on one at a time. Now, this doesn't always work well; you're going to run into way too many places where things are interdependent, but try anyway.
If you can, resist changes to your database schema while you are refactoring code. Having both of these thing happen at the same time is pretty scary.
You say CGI, so I assume you are using Perl; look into OO Perl, it's worth it. Even if you don't want to go OO all the way, the two big things that can make your life easier are packages and layering.
Using packages, especially for things like DB access, can save you tons of time and headaches. You have one place where you run a query and build a hash, and all of your code calls it when it needs the data. HUGE advantages here.
Layering your design is helpful as well. I've found that you can do a lot of good if you design the Data Access, Logic and Presentation layers separately. All each of these layers needs to do is take the hashref passed by the other layer and do X with it. You can rebuild each layer at will as long as the data structures passed between them don't change.
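A toy sketch of those three layers passing hashrefs (everything here is invented; a real data-access layer would run a DBI query and return `fetchrow_hashref`-style records):

```perl
use strict;
use warnings;

# Data-access layer: fetch a record and return it as a hashref.
# (A real version would run a DBI query; faked here for illustration.)
sub fetch_user {
    my ($id) = @_;
    return { id => $id, name => 'Ada', visits => 41 };
}

# Logic layer: apply business rules to the hashref, pass it on.
sub apply_rules {
    my ($user) = @_;
    $user->{visits}++;                  # record this visit
    $user->{vip} = $user->{visits} > 10 ? 1 : 0;
    return $user;
}

# Presentation layer: render the hashref; knows nothing about the DB.
sub render {
    my ($user) = @_;
    return "$user->{name}: $user->{visits} visits"
         . ($user->{vip} ? ' (VIP)' : '');
}

print render(apply_rules(fetch_user(7))), "\n";   # Ada: 42 visits (VIP)
```

Any one layer can be rebuilt, say, swapping the fake fetch for real DBI calls, without touching the other two, as long as the hashref's keys stay stable.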
Solicited suggestion (Score:2)
chizor wrote:
My programming team is considering making some sweeping changes to our code base (150+ perl CGIs, over a meg of code)... What suggestions might readers have about tackling a large-scale retrofit?
My advice for successfully accomplishing the changes:
I have led the development and migration of some very large mission-critical systems in my career. Too many programmers making decisions on the fly, totally centralized management, or a "leave the technical folks alone until they're done" attitude are sure recipes for disaster.
Good luck with the changes.
Merry Christmas, and God bless us everyone!
E
Re:Solicited suggestion (Score:2, Funny)
Indeed. I'd like to extend this advice:
DON'T DO IT! (Score:3, Interesting)
I currently maintain a code base of around 120,000 lines of php and html (written by myself in a long, hard year) and have had to "retrofit" it a few times.
I find that when it's time to do an "over-haul" it's generally best to:
1) Pretend I know nothing - redesign from scratch. Write out a spec with flow charts, DB table definitions, etc. - make it VERY DETAILED. Spend lots of time at it. More time spent here saves even more time later.
2) Ignore your spec. (See step three)
3) When a bug comes up, or new functionality needs to be added to the codebase, refer to the spec built in step 1, build to it, and then put in compatibility wrappers to work with the existing codebase.
Make these compatibility wrappers log their calls in some way, based on a global variable. This lets you see when they're no longer needed, simply by defining a variable in a config file and waiting a while.
4) You'll be slowly bringing the application up to the new spec. Eventually you'll reach a point where it's easier just to bring the remaining pieces up to snuff than to build more abstraction wrappers. When you get to that point, you'll find most of the work is already done; just finish it to the spec and remove the compatibility wrappers.
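One possible shape for such a logging compatibility wrapper (all names and the config flag are invented for illustration):

```perl
use strict;
use warnings;

# Config flag: flip this on (e.g. from a config file) to find out
# whether anything still calls the old interfaces.
our $LOG_COMPAT_CALLS = 1;

# New-spec implementation: returns a hashref.
sub user_record { my ($id) = @_; return { id => $id } }

# Compatibility wrapper keeping the old calling convention alive,
# logging its own use so we know when it can finally be deleted.
sub get_user_by_id {    # old name, old flat-list return style
    warn "compat: get_user_by_id called\n" if $LOG_COMPAT_CALLS;
    my $rec = user_record(@_);
    return %$rec;       # old callers expected a flat list
}

my %u = get_user_by_id(7);
print "id=$u{id}\n";   # id=7
```

Once the log stays quiet for long enough, the wrapper and its old callers are demonstrably gone and it can be removed.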
This can still be a painful process, but at least it isn't a "gun to your head"! This allows you to regression test your work as it's done, resulting in a more stable deliverable, and you can still meet clients' needs in the meantime without making them wait 6 months while you re-write all your stuff.
Hope this helps...
Step #1 (Score:2)
[-1 Flamebait]
You have a Big Ball of Mud (Score:2, Interesting)
First, read this essay: The Big Ball of Mud [laputan.org]. It is an interesting look at why, when we all know that spaghetti, gnarly, twisted code is bad, that it happens anyway (hint: it may mirror your understanding of the problem).
Ignore the "don't touch it" naysayers. Even before it's done, it'll be much nicer code to deal with. You can make decisions with less nagging doubts. You'll code onward with gusto. You'll be able to accurately predict the names of methods without looking them up.
Test the current state of things at all points through the process. I'm hoping that you have lots of automated tests you can run everytime code is checked in; if not, make them FIRST, before the overhaul. You are majorly diverting the intent of the code at hundreds of points; you can run astray in so many places that the above naysayers would be correct. Constantly assure yourseleves that the code is working. Go out of your way to ensure that the code is buildable and runnable, even to the point of writing scaffolding you know will be soon thrown away.
Burning a little incense every day in obeisance to the Gods can't hurt, and will make the room smell nice.
mahlen
Shantytowns are usually built from common, inexpensive materials and simple tools. Shantytowns can be built using relatively unskilled labor. Even though the labor force is "unskilled" in the customary sense, the construction and maintenance of this sort of housing can be quite labor intensive. There is little specialization. Each housing unit is constructed and maintained primarily by its inhabitants, and each inhabitant must be a jack of all the necessary trades. There is little concern for infrastructure, since infrastructure requires coordination and capital, and specialized resources, equipment, and skills. There is little overall planning or regulation of growth. Shantytowns emerge where there is a need for housing, a surplus of unskilled labor, and a dearth of capital investment. Shantytowns fulfill an immediate, local need for housing by bringing available resources to bear on the problem. Loftier architectural goals are a luxury that has to wait. -- from "The Big Ball of Mud"
Quick suggestions...not so quick solution (Score:4, Insightful)
Now, on coding standards and how to incorporate them into a legacy project. Your concern is NOT format. The format of your code, such as indentation, spacing, etc., should be the least of your concerns. Everyone has their own style, but there are wonderful tools you can use to force everyone to a single style; indent and astyle, just to name a couple. Use a wrapper script around CVS (or a similar system) on checkins: force the code through an automated cleanup, check the code back out, and make sure it compiles/runs as expected.
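A minimal sketch of the automated-cleanup pass (Python for illustration; a real checkin wrapper would shell out to indent or astyle, but the shape is the same: every file goes through one canonical normalization before it lands in the repository):

```python
def normalize_source(text, tab_width=4):
    """Toy stand-in for a real formatter such as indent or astyle:
    expands tabs and strips trailing whitespace, so every checkin
    arrives in a single canonical style regardless of author."""
    lines = []
    for line in text.splitlines():
        lines.append(line.expandtabs(tab_width).rstrip())
    return "\n".join(lines) + "\n"

# A commit wrapper would run this over each changed file, write the
# result back, and only then hand the files off to cvs commit.
```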
What you should worry about is how much your design team has embraced the "black box" design principle. Parameters go in, results come out, with no "side effects" that impact the remainder of the code. Make your code re-entrant, i.e. stay away from globally scoped variables as much as possible.
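A tiny illustration of the difference (Python; the names are invented for the example):

```python
# Side-effecting version: depends on hidden global state, so two
# callers that expect different rates can silently interfere.
tax_rate = 0.08

def total_with_tax(amount):
    return amount * (1 + tax_rate)

# Black-box version: everything it needs goes in as a parameter,
# the result comes out, and nothing outside the call is touched.
def total_with_tax_pure(amount, rate):
    return amount * (1 + rate)
```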
Someone's going to give you the whole OO-design sales pitch. Yes, it's nice on paper, but don't sell out because something looks nice on paper. I learned this the hard way. I have a tendency to overdesign things. With OO, this gets really scary: I waste my time writing object classes for "everything" instead of simply designing the software to its functional spec. Make things more "object oriented", "functional", or "black-boxed" when you find yourself repeating code elsewhere in the application.
Don't spend a lot of time on naming standards such as Hungarian, Modified Hungarian, etc. Find a style that you and your team are comfortable with at the interface API level. Below the interface API, be more lenient; that portion of the code is likely to undergo many changes anyway.
And most importantly, document! This is the single most important issue of any coding project. Either force your developers to write docs as they go, use embedded documentation solutions, or hire a tech writer to follow you and your team around for a few months. A documented API is the quickest way to start someone off in the project, and a great way to keep track of the flow of the program.
Does it work? (Score:2, Insightful)
My suggestions (Score:5, Informative)
First of all, I think it's important to realize that you have a medium-sized website and not a big software project. Therefore, some of the above comments recommending refactoring, UML, and eXtreme programming may be a bit of overkill.
Web programming != software development! It's usually done at a much faster pace. Even if an object-oriented approach is taken, you are still probably talking about simple function libraries rather than complex C++ or Java classes. Again, overkill.
150 files is still a small enough project to be managed by one or two decent coders. Actually, I just looked at the amount of stuff I've written over the years for my online bookstore [page1book.com], and it's more like 500 files and over 4 megs of code. I don't feel it's too much of a job to manage this codebase by myself.
So, here are my recommendations.
You probably have gotten better at programming since the time you started your project. Take a few of the most recent CGIs you have written and compare them to the first ones you wrote. You just might notice a glaring difference in the quality. Also, the first pages you wrote are likely to be among the most important in your project, yet they are also likely the worst quality-wise.
Regardless of what language you program in, I think it's important that you can tell what's going on in the program by reading the comments. If a manager can understand what a program does by reading the English bits, there's a good chance other programmers will be able to jump in and help as well. One specific rule I also follow: if you use regexes, say IN ENGLISH what those regexes do. I say this because regexes are among the hardest things to read.
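One way to keep the English right next to the regex itself, sketched in Python (Perl's /x modifier gives you the same commented-pattern style):

```python
import re

# The same pattern both ways. Compact and cryptic:
#     \d{4}-\d{2}-\d{2}
# With re.VERBOSE, the English lives beside each piece:
DATE = re.compile(r"""
    (?P<year>\d{4})    # four-digit year
    -                  # literal separator
    (?P<month>\d{2})   # two-digit month
    -                  # literal separator
    (?P<day>\d{2})     # two-digit day
""", re.VERBOSE)
```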
Look for any code that can be "factored out" of your scripts, put it into function libraries, and include those in your programs. The only problem arises when you have huge function libraries that slow down your scripts on every include; in that case, logically separate your functions into different files. I keep very common functions in separate include files, so the code that actually gets compiled or interpreted stays as small as possible.
Consider using a flowcharting tool as an aid to programming and/or documenting your code.
Standardize how you name variables and functions, how you write comments, and your indentation and spacing.
Be sure to include the date you wrote each script in its comments, in case the filesystem wipes this information out.
I'm sure there are other things I've left out, but following the above guidelines has helped me do exactly what you are trying to do: manage a growing codebase. But don't forget, this is web programming, not rocket science, and some of the above suggestions may be more trouble than they are worth. Keep it simple.
IBM's (re-) write of OS/400 (Score:3, Interesting)
One of the success factors they found was documenting the interfaces for each and every call between modules. The documentation turned out to be excruciatingly precise - but this led to zero ambiguity (and thus 100% interoperability). It also required meetings (sometimes arguments) between programmers to hash out what was actually going to happen. Another factor was that they decided to allow zero 'overloading' of functions by different modules. A programmer was not allowed to duplicate someone else's work, nor create a second, incompatible version of a function provided in a different module. If the function was provided by someone else's module, the programmer had to call it (properly). The result was that they reaped the benefit of object oriented programming - reuse and refinement of modular libraries.
It would be better if you could get the real scoop from the real programmers - but this might give you something to think about.
Learn from those who have succeeded at this... (Score:4, Interesting)
But you've decided to rework rather than rewrite, you say, so I have no doubt you'll ignore the naysayers here. So what CAN you do? After all, as you recognise, reworking is dangerous!
The following rules have worked for me; I've refined my own experience with advice from Fowler's Refactoring, a book as useful as Design Patterns, and with study of Extreme Programming, a design methodology forged in the traditions of Smalltalk and in the knowledge that maintenance, the most important and expensive part of software engineering, is also the least studied.
First, do the simplest thing that could possibly work. Don't EVER take your program out of commission for more than a day; make sure it runs at the end of each day. If you're doing something and at the end of the day your code base is broken, STRONGLY consider throwing away your changes and going back to the design stages.
Second, rely on unit tests extensively. Start every change by writing as extensive of a unit test as possible. Unit test every function you touch, BEFORE you touch it, and after. Unit test every change you make, and run the unit test BEFORE you make the change to ensure that it fails (i.e. it detects the change). Write your unit tests BEFORE you write code, whenever possible; you'll objectively know your code is done when your unit tests pass.
Third, don't design too far ahead; you don't know what tricks the old code is going to throw at you. Implement one feature at a time, bringing the code into compliance. Once everything has a unit test (thanks to your following the above principles), THEN you can safely embark on larger design changes -- and in the meantime, you have working code with new features, a win even if your customer/boss/manager decides not to continue.
Fourth, don't be afraid to redesign your own code. The stuff you wrote has more tests, so it's safer to change, but it's more likely than the old code to lack some critical understanding only age can give.
Fifth, use the principles of refactoring. Whenever possible split each code change into two parts: first, a part which changes the structure of the code without changing its function (and which therefore allows you to run the same unit tests); and second, a part which uses the new structure to perform a new function (thereby requiring new unit tests).
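The two-part discipline might look like this in Python (the functions are invented for illustration):

```python
# Part 1: structural change only. Extract a helper without changing
# behaviour; the existing unit tests must still pass untouched.
def _parse_price(raw):
    """Parse a price string like ' $2.50 ' into a float."""
    return float(raw.strip().lstrip("$"))

def order_total(raw_prices):
    return sum(_parse_price(p) for p in raw_prices)

# Part 2: new function built on the new structure. This is the part
# that performs a new function and therefore needs new tests.
def order_total_with_discount(raw_prices, discount):
    return order_total(raw_prices) * (1 - discount)
```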
Good luck. If you want more advice, read up on Extreme Programming [extremeprogramming.org].
-Billy
Re:Learn from those who have succeeded at this... (Score:2)
Beyond that, it's tough to stay motivated about writing tests; people want to write new code, not test the old. So mix it in-- we've decided to require a test to be checked in with every bug fix, figuring that if it broke once, it'll break again.
While I'm at it, I totally disagree that a total rewrite is always the wrong choice-- you just have to keep significant resources going ahead on the old stuff. If your old stuff really was that bad, the new will catch up with it without too much trouble. If it doesn't, well, you didn't really need the rewrite.
Don't rewrite, and don't refactor (Score:2, Insightful)
Don't spend a big chunk of time refactoring it either. Waste of time too.
Instead, make slight refactorings as you go. But make sure you are doing what you are really being paid for: implementing business value.
And you'll find that you'll have much more courage to refactor if you have a full set of automated tests, so maybe you should work on tests first.
Having done complete redesign of dynamic sites (Score:2, Informative)
In this particular case, it was necessary because the site was right at the max. If the traffic increased, it would kill the site. Since it was an E-Commerce site, rewriting it was fairly straight forward. The old code kept running, until we were able to finish the new system and make sure it was stable and ready.
As a consultant, one of the most important aspects is detailed documentation that explains both the high and low level details. Often I will include very specific details about why a design was chosen and what limitations it has. When applicable, I will also describe how to extend, or modify the code to support additional features. This means you spend a lot of time doing documentation, but it forces you to think about a design more thoroughly and will expose weaknesses. Always keep an open mind and never fall in love with your design. There is no right way to build something, only right for the situation you are given.
Don't break the #1 Cardinal Rule (Score:2)
What is the #1 thing he says causes software companies to fail? Rewriting from scratch!!!
I won't deny that there are times it has to be done. Joel points out some of those times (and yours isn't one of them, even from the little you've written). Ours happened to fit everything he said, and we did rewrite. Not only was it the right thing to do, it was the only option for us, but the company just barely survived the process.
Don't take that lightly. I speak with experience and Joel is right. You don't rewrite from scratch unless there is no conceivable alternative. Joel describes it well, so I won't "rewrite" it. You can just click the link and read it yourself.
php project mngt app (Score:2, Informative)
i discovered a new tool on sourceforge [sourceforge.net] which is an open project written in php.
i'm impressed with it. the code is also well documented.
the homepage can be found here [tutos.org].
i recommend checking out the screenshots as well.
99% Planning, 1% Coding (Score:2, Informative)
A lot of time is involved in cycling between "ok, I know what to do" and "huh, maybe not". I've found this crucial, especially in team work, in order to gain a good conception of the scope of the task. Also, many external issues, e.g. how the module interacts with the system, efficiency, etc., that aren't purely functional issues, are first grappled with here.
Refactoring is different from this, in that you're probably very comfortable with the "state of mind" of the code. Instead of creating, you'll be clarifying. So, most of the refactoring is in your head (99%). All the external issues have been addressed before (or else this probably isn't really refactoring), so just work at a whiteboard with your team until writing the code is basically transcription (1%).
I've found this to yield the best code.
I have to agree with the consensus... (Score:3, Insightful)
If you feel you have to fix it, then prioritise the most problematic parts and fix them according to a set plan/policy. Use a naming and calling convention. Break functions that do more than one thing into component functions that can be tested, verified, and reused by other parts of your program. Fix it incrementally, not all at once. Try using an interface contract when you make objects; that way, your new functions can call new methods and the old code can depend on old methods being there. Deprecate the old methods when no code depends on them. Don't forget to comment - comment the code, then come back the next day and read your own comments, and change them so they make sense today.
Blah... blah... blah... Etc... etc... etc...
None of us know your situation (Score:4, Insightful)
Next, no matter what you decide, from your current position, there's substantial risk involved. If you don't have a good way of estimating the costs and benefits of your alternatives, then you will wind up shooting in the dark no matter what you choose. This isn't really unusual, but it can be stressful! Somebody in management has to decide if they want to play high stakes poker with this project. This will establish how many nice risk mitigating activities you can budget (like unit tests, code reviews, documentation, etc). One thing a lot of technology people have difficulty grasping (this isn't directed at anybody in particular), is that if your management decides on a high risk course of action, it's not WRONG because it's technically inferior. Now, if management doesn't understand that it's a high risk proposition, then there's a communication failure somewhere that will screw you no matter what gets chosen.
Ok, now that you've decided where you're going, I've got 3 pieces of useless advice (all advice is useless advice, btw...)
1) Know your priorities. This is the most important thing to getting ANY solution.
2) Understand your requirements (I don't care how you do this). This is the most important thing to getting (close to) a CORRECT solution.
3) Remember it's only a job. This can be very important for retaining sanity.
My analogous situation (Score:3, Interesting)
However, I've long known that there are two things my real users (ideal users? the serious mastering engineers) want: a familiar interface and realtime processing. I can't deliver realtime processing without literally doing the whole thing over again in a language I don't know (it's done in REALbasic, which is little better than a scripting language for speed, though it's got really, really nice prototyping abilities and GUI support).
However, the time's come to completely overhaul the interface, partly because I have some ideas for mid/side processing and don't have any room in the current layout to fit them! The ideas have to do with rectifying the side channel and using it to either enhance or remove signals that aren't in both channels equally: where regular mono is a 'node' that totally eliminates out-of-phase content, it's also possible to completely eliminate R-only and L-only content and keep only R+L plus the out-of-phase content.
That part's the easy part- it's simply signal processing (and will be duly released under the GPL as soon as it's done, as always). However, the interface is asking for a total, complete overhaul in several ways, and that's what's taking all my effort currently. Here's the situation and how I'm handling it...
Layering. It's no longer possible to fit the whole app in a 640x480 area, even with small print. There are several possible answers to this. One would be separate windows; that would adapt better to larger screens, but it's untidy and there are issues with closing windows and still referring to the controls they contain - so that's out. What's looking more reasonable is tabbed panels - RB implements a nice little drag-and-droppable tabbed panel control that appears quite easy to use (though I had some trouble attempting nested tabbed panels; after one experience of having all the nested panels, at identical coordinates, switch to the top panel of their parent panel, I quickly gave up on that concept). Instead I'm using more panel real estate and trying to divide the controls into logical categories. That is, of course, a real headache - doing interface properly is hard! (says 'interface is hard Barbie') It's only somewhat easier with the additional space. Complicating matters further is the expectation of the intended audience here: it has to be organized and also look and feel like a mixing board, amplifier, or rackmount box of some sort.
Solution: implement controls like knobs and meters. It's actually quite fun to code a knob appearance out of graphics primitives - and surprisingly hard to get mouse gestures to work on the damn thing, using a two-arg arctangent routine in RB that I don't fully understand. I also have a single meter control already implemented for azimuth display; I think it might be best to tear that apart and re-implement it in a more general form.
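For what it's worth, the two-arg arctangent (atan2) maps a mouse position to a knob value roughly like this (a Python sketch of one common convention; the 270-degree sweep and the clamping in the dead zone are design choices, not the only way to do it):

```python
import math

def knob_value(mx, my, cx, cy):
    """Map a mouse position to a 0.0-1.0 knob value.

    (cx, cy) is the knob centre. Value 0.0 sits at the lower-left
    stop, 1.0 at the lower-right, sweeping 270 degrees clockwise
    through the top. Positions in the dead zone between the stops
    simply clamp to 1.0 in this sketch."""
    # atan2 gives the angle of the mouse relative to the centre;
    # negate dy because screen y grows downward, unlike math convention.
    angle = math.degrees(math.atan2(-(my - cy), mx - cx))
    # Measure the clockwise sweep from the lower-left stop (225 deg)...
    sweep = (225 - angle) % 360
    # ...and clamp to the knob's 270 degrees of travel.
    return min(1.0, sweep / 270.0)
```

Mouse straight above the centre lands at half travel; due right of the centre is five-sixths of the way around.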
That's because the Knob class turned out to be the right thing to do- it's implemented almost totally separate from the main body of the program, as if it were a RB control or something. Knowing I was going to be using it in different sizes, possibly different colors etc, I wrote the knob code as completely scalable- from maybe 20 pixels to over 100. It reads its size from the control width and height, runs itself as far as handling mouse input and storing a control value, and does not inherit comparable interfaces to the controls it will be replacing- so when it's time to plug 'em in I can run the program and see what routines crash and burn (and need to be rewritten for the new control interface). I'm thinking meters need to also be handled in the same way- somehow- not clear on the form yet.
So: dunno what else to tell you, but it seems like the things I'm doing that are helpful are: compartmentalize new interfaces, make them adaptable and get them working independently of the existing code, while leaving the existing code in working condition. Then when the new stuff is brought online do it in such a way that you could do it one control at a time or in small batches.
Dunno if that's relevant to where you're coming from- but it's what I've found necessary when facing a major re-implementation.