Test-Driven Development by Example
author | Kent Beck |
pages | 220 |
publisher | Addison Wesley |
rating | Superb |
reviewer | PinglePongle |
ISBN | 0321146530 |
summary | Kent Beck -- author of the original Extreme Programming book -- explains in detail how to turn your development world upside down by starting with the test, then writing the code. |
What it's all about:
Test-driven development is about being able to take tiny little steps forward, test that the step took you in the right direction, and repeat. The "TDD Mantra" is red/green/refactor:
- Red: write a test which will exercise a feature, but which will fail (because you haven't yet written the code)
- Green: make the test succeed, doing whatever you need to do to get to "green" as quickly as possible -- don't worry about prettiness
- Refactor: now that you have code which passes the test, eliminate all the duplication
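One turn of that crank can be sketched in Python (the language of the book's second example). The `Fraction` class here is a made-up illustration, not code from the book:

```python
# Red: write the test first. Running it fails, because Fraction doesn't exist yet.
def test_addition():
    assert Fraction(1, 2).plus(Fraction(1, 2)) == Fraction(1, 1)

# Green: write just enough code to pass -- Beck would even let you hard-code
# the answer at first. Refactor: the hard-coding has been replaced here by
# the general arithmetic, eliminating the duplication between test and code.
class Fraction:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def plus(self, other):
        return Fraction(self.num * other.den + other.num * self.den,
                        self.den * other.den)

    def __eq__(self, other):
        # Compare cross-products so 2/4 == 1/2 without reducing.
        return self.num * other.den == other.num * self.den

test_addition()  # green: the test now passes
```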
The book then shows two fairly detailed examples of a development project (or a snippet of one) which progress using this style of coding. The first example deals with the creation of multi-currency capabilities for an existing project. In the space of 17 chapters, the author walks you through the creation of six classes (one test class, five functional classes), complete with the thought processes behind them. The code is written in Java, and is trivially easy to follow, because it gets introduced in tiny little chunks; most chapters are less than six pages in length.
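The book's code for this example is Java, but its flavor can be sketched in a few lines of Python. This is a loose paraphrase of the general idea, not the book's actual classes:

```python
# A Money is an amount in a currency; arithmetic returns new Money objects.
class Money:
    def __init__(self, amount, currency):
        self.amount, self.currency = amount, currency

    def times(self, multiplier):
        return Money(self.amount * multiplier, self.currency)

    def __eq__(self, other):
        # Equal only when both amount and currency match.
        return (self.amount, self.currency) == (other.amount, other.currency)

# The kind of tests the chapters grow one assertion at a time:
five = Money(5, "USD")
assert five.times(2) == Money(10, "USD")
assert Money(5, "USD") != Money(5, "CHF")
```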
The second example is the creation of a unit testing framework in Python. It is significantly more complex and real-world than the first example, but again proceeds in very small steps, and in small chapters.
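A drastically condensed sketch of where that second example ends up, using roughly the names the book itself uses (`TestCase`, `TestResult`, `WasRun`) but reconstructed from memory rather than copied. The book grows this over many small chapters, testing the framework with itself:

```python
# A TestResult counts how many tests ran and how many failed.
class TestResult:
    def __init__(self):
        self.run_count = 0
        self.failure_count = 0

    def summary(self):
        return f"{self.run_count} run, {self.failure_count} failed"

# A TestCase runs the method whose name it was constructed with.
class TestCase:
    def __init__(self, name):
        self.name = name

    def run(self, result):
        result.run_count += 1
        try:
            getattr(self, self.name)()
        except Exception:
            result.failure_count += 1

# A sample test case used to exercise the framework itself.
class WasRun(TestCase):
    def test_method(self):
        pass

    def test_broken(self):
        raise Exception("boom")

result = TestResult()
WasRun("test_method").run(result)
WasRun("test_broken").run(result)
assert result.summary() == "2 run, 1 failed"
```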
The final part of the book contains patterns for test-driven development -- practical real-world advice on how to do this stuff for real. Nearly all the "patterns" are phrased as question/answer pairs, and they range from deeply technical design patterns to advice on the best way to arrange the furniture.
What's good about the book?
Kent Beck is a very good writer: his writing is clear, and he is not afraid to leave out stuff he assumes you can guess for yourself. When he does go into detail, you feel it is necessary to get the big picture, rather than mere geek bravado. Even if you don't adopt Test-Driven Development, many of the ideas are well worth considering for your day-to-day coding situations.
What could have been better?
The book stresses the importance of taking 'little steps,' and sometimes you feel impatient to move to more challenging tests before properly finishing the current chapter. I was also hoping for more of a discussion on the practicalities of unit testing database-driven systems, where you frequently have to test business entities which are closely coupled to the database.
Summary
If you code for a living, or manage people who do, you should read this book -- it's a quick enough read -- and consider some of the assertions it makes. If you feel you're introducing more bugs than you expected, or you feel uneasy about how closely your work matches the requirements, this book gives you some powerful ideas.
You can purchase Test-Driven Development by Example from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
I always test drive my apps (Score:5, Funny)
Re:I always test drive my apps (Score:2)
hopefully... (Score:2, Redundant)
For the sarcasm-impaired: this is sarcasm.
sounds great but... (Score:1, Insightful)
Re:sounds great but... (Score:3, Interesting)
I think it's great that there are new ideas for ways to develop software (although XP is actually fairly 'old' in computer science terms). There is no magic formula yet for producing quality software in a reasonable amount of time, but XP is another step in the right direction.
Note: While XP's entire methodology wasn't written down until about 1996, many of the ideas that it uses had been in circulation since the 70s.
Re:sounds great but... (Score:1, Funny)
Re:sounds great but... (Score:5, Insightful)
I once worked at a start-up where someone started on Monday, and never came back after Wednesday night, leaving a voice mail message that said, "You never told me I was going to have to work with other people!"
You're going to have to work with other people. The better you work with them, the better you work, the better everyone works. (Hugs not required.)
Re:sounds great but... (Score:4, Interesting)
I couldn't agree more.
I can't think of many jobs -- at least for college/university-educated people -- that do not require soft skills like the ability to work with your coworkers and to communicate (meetings, presentations, acting as a tour guide for VIPs, etc.).
I used to hate having to deal with other people in any way. In fact, that was one of the reasons why I decided to embark on a research scientist's (physics) career. As a scientist I wouldn't have to deal with people, give talks, socialize or -- most important of all -- end up in any kind of manager/boss line of work. I would just sit, think and write papers. That's what I thought, and boy, how wrong I was.
Most science, and experimental physics in particular, is done in groups. There's no way around it. You can't run a lab alone, so you need to have people around you. Even as a postdoc you have to be able to hire good PhD candidates and supervise them. You have to be able to interview them, understand what makes each of them tick in the workplace, and know how to manage them to get the best out of them.
Then, unless your lab is established and extremely well funded, staffed and equipped, you often need collaboration from other groups. Making contacts with other scientists and establishing mutually beneficial collaboration requires publicity (talks!), diplomacy (socializing!) and patience.
A person who cannot work with other people simply does not fit into this environment. No matter how brilliant a scientist he is, without the social skills he is very likely to amount to nothing.
I still get slightly nervous when I have to give a talk. I still don't like meetings. However, I have grown fond of managing people, because, as difficult as it sometimes can be, it's a wonderful feeling to see your highly motivated and skilled people working with you in harmony. The older I get, the more I appreciate social skills over raw intelligence or mathematical/logical ability. If it all comes in one package, jolly good. However, if I had to choose between a budding physics genius with a highly abrasive personality and a merely solid performer with good social skills, I'd choose the latter. No question about it.
Re:sounds great but... (Score:2, Interesting)
On one project, we had a bug that would always cause a segfault at shutdown. No one had ever been able to track it down -- it wasn't hurting anything as far as we could tell, but it was highly annoying. This was a very small shop and we were just starting XP (or any determinable development process, for that matter), so we hadn't been doing much in the way of pair programming anyway.
Myself and another developer tried pairing to find the bug. We talked it out briefly, and decided to do a binary search of the code with print statements (as the debugger couldn't find it). We'd start the program then exit it, trying to find where in the shutdown process the bug was.
We went over the logs together, each of us trying to be the first to narrow down the next potential location. I'd suggest a spot and add a print statement there, then he'd suggest one. We'd then run the program again, collect our logs, and start over. If I got stuck, he'd have an idea, and vice versa.
It took us about ten minutes to track it down and figure out exactly what was going on. I'd spent a couple of hours on it alone without making any progress before, and I think he'd done the same. If we'd paired on that before, we'd have saved a bit of time and money.
I'm not sure where you get the idea that pair programming is having some slob sit behind you and breathe over your shoulder. In my experience, it's like having a conversation about code that produces working code. If you can give up the idea that you have to be right and that your way is the best way and, for goodness' sake, hand over the keyboard about half the time, you'll probably find it's a lot more productive and even more fun than working alone.
Re:sounds great but... (Score:2, Insightful)
This is exactly the point the XP people are missing. All the practices they prescribe require a very sound organizational culture. Unusually sound. If you already have that in place, then XP might enhance your organization's performance. But if you already have that in place, you've already solved a long list of much harder problems.
And don't get me started on the requirements gathering end of the process: in full XP, you have to collaborate tightly with your business stakeholder folks. You have a problem with the other developers in your own department? Wait till you have to deal face to face with the idiot from marketing or the arrogant ass from finance. A lot.
Good (Score:3, Insightful)
I think we must get away from wanting to design and plan every little thing up front, only to find out that stuff doesn't work, run out of time, and end up with no one willing to pay for all the useless effort.
Although many people don't believe in XP, it is a way to accomplish development such that you do get deliverables. Maybe it does not improve speed, but it does improve quality and reduce risk.
So any book which is able to explain the pros of XP and open the eyes of non-believers is good.
XM (Score:1, Insightful)
You're feeling ill, so you pop along to the doctor.
'Hey Doc, I have a problem, can you fix it?'
The Doc has an idea what's wrong -- could be complicated -- but:
throws a load of drugs at you
does some tests; you're a bit better, but you have some nasty side effects.
Changes some of the drugs, and gives you a few more...
In the end, you're sort of all right, but a drug addict, and occasionally have to go back and get a different fix from the Doc.
Not-so eXtreme Medicine.
You're feeling ill, so you pop along to the doctor.
'Hey Doc, I have a problem, can you fix it?'
The Doc has an idea what's wrong -- could be complicated.
'I'll put you on some light medication and get you an appointment with a consultant.'
The consultant comes along, has a word, is still not sure, advises the Doc on some better medication, and gets in a specialist for one of your conditions.
The Doc treats most of your conditions, and some have already cleared up.
The specialist takes a look at you and gives the Doc some more advice and training.
After a few months you're in perfect health.
Re:XM (Score:3, Insightful)
eXtreme Medicine -> not so eXtreme Programming:
(in general use a lot of blahblah and don't show what you're doing)
make a big picture of everything that's wrong
try to fix everything at once (dripping nose, broken legs, breast enlargements, etc.)
determining whether or not things have gone well is the patient's problem (hey, my...?? oh, cute, well let's make 'em even bigger)
leave your patient with a lot of (new) problems
not so eXtreme Medicine -> eXtreme Programming:
(in general use a little blahblah and show what you're doing)
focus on the most important issues (broken legs)
make sure you can see if it's fixed
fix the most important issue, and continue with the next important thing (dripping nose)
discourage breast enlargement
Yes, it means that you need very good developers for XP. These people must be able to do good design, keeping the big picture in mind, and they must be able to judge quality.
Do they exist? Yes...
Many? Enough, but there is an awful lot of very crappy developers out there.
What the XP folks have right (and wrong) (Score:5, Insightful)
There are many labs that don't test at all, and a vast majority test poorly. I've worked in some Fortune 500 labs that didn't test at all. Scary stuff. Nothing life-threatening, but all of a sudden I wasn't so convinced that the reason my account was misconfigured was because *I* gave wrong data. Simply bug-riddled. Those that do test often do so manually. Forgetting for a moment that humans are likely to take shortcuts and not bother to execute tests they perceive to be out of the scope of their recent change, they are fallible. Of course they are; that's how the bugs got there in the first place.
So, the XP folks have the testing thing down. They test before the code is written, and their tests are automated.
Then they take leave of their senses. The claim that because they've successfully turned one idea on its head (i.e. testing *first*) they can turn others is ludicrous. Design first is still valuable, guys. I've eliminated thousands of bugs simply by having the right design to begin with. Waiting until you've cobbled something together that passes the test and then hoping that your boss will allow you to refactor is a loser. If it weren't, Scott Adams wouldn't be a millionaire.
So, write your tests first. But do your design before you code, not after you've put together a thousand lines of crap.
What's worse (Score:2)
Seriously, the percentage of projects I've seen with features that don't work because that's what the client asked for, or that are poorly put together, is amazing.
You can't test if you don't have a design to test against. You can't design if you haven't consulted the requirements.
Re:What the XP folks have right (and wrong) (Score:5, Insightful)
XP breaks the design/coding/testing cycle into very small iterations, each one as big as one automated unit test case. It's a very exploratory style of software development. XP doesn't mandate any high-level design artifacts (though it doesn't forbid them either).
What none of the XP books say is that developing unit tests is a design activity, and the unit tests are design artifacts! Unit tests outline the responsibilities of classes, in the original responsibility-based style of object-oriented design.
XP programmers do design on whiteboards, and in their heads. Some of these artifacts are lost. Some would have become obsolete in a hurry. (The unit tests are guaranteed not to be obsolete, at least as long as they're passing.)
I'll take that, any day, over a hundred pages of out-of-date UML diagrams.
Re:What the XP folks have right (and wrong) (Score:2)
You missed one thing though: Test-first is design. The tests don't just ensure that the thing works as written, they define how the thing is intended to work, and in order to write a test that exercises that aspect, you have to think about what problem the code is supposed to solve. This results in the test being the thing that validates the features of the code under test, and by definition, the features of the code are its design.
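A concrete illustration of that point, with hypothetical names (none of this is from the book): the test below is a design statement -- "a Stack is LIFO and knows when it's empty" -- written before any Stack exists, and the implementation then has no choice but to match it.

```python
# The test *is* the design: it states the intended behavior of Stack
# before the class exists. (Stack here is a made-up example.)
def test_stack_is_lifo():
    s = Stack()
    assert s.is_empty()
    s.push(1)
    s.push(2)
    assert s.pop() == 2   # last in, first out
    assert s.pop() == 1
    assert s.is_empty()

# The implementation written to satisfy that design statement.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

test_stack_is_lifo()
```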
Re:What the XP folks have right (and wrong) (Score:2)
The name of the practice is either Test Driven Development or (more commonly) Test Driven Design. I think it's a bad name, because it leads to the sort of confusion you have.
The tests they're talking about ARE design, so by writing them before you write code, you are, in fact, designing before coding. The difference is twofold: first, instead of stopping the design at an arbitrary level of detail and going to some other part of the program, you continue the detailed design until it produces code for that part of the program; second, instead of writing your design in an ignorable form, you write it in a simple, immediately testable form.
Because of the first difference, you get to implement your design while it's still fresh in your head; because of the second, your implementation will be forced to match your design.
I've worked with TDD for about a year now. It's a real winner -- mainly because it makes testing a creative, constructive activity (part of design) instead of destructive gruntwork (intended to tear down parts of your program).
Waiting until you've cobbled something together that passes the test
Waiting? That's exactly the point -- you DON'T wait. You implement the solution needed to pass the test (i.e. the design) immediately, as soon as you've managed to express your design in a testable form. You get to see immediately whether or not your design is workable, before you build some other aspect of your design on it.
I've worked TDD both with and without refactoring. It works either way -- without refactoring I have to take a few more risks, code a little slower, and accept slightly lower-quality code overall (i.e. it doesn't fit together quite as well); refactoring doesn't slow me down at all. The nice thing is, though, that it's not under the control of my boss, any more than which shift key I use is under his control. I don't send a unit in to Configuration Management until it's ready to roll; so within that unit, I refactor at will. (We're a CMM level 3 organization.)
Amusingly enough, the only criticism I get from my boss is that I _may_ be testing too much. Literally. He's not sure, but he's a bit nervous that I might be testing outside of the needed range -- for example, testing for negative numbers when the only allowable input is positive (he hasn't read my tests, and I don't think he can even imagine tests being something he'd want to read). You should see his tests -- but that's a different story.
-Billy
Re:What the XP folks have right (and wrong) (Score:2)
In one example I saw, the person wrote the test and the framework for the function, but didn't write the skeletal framework of the class which contained the function used in the test. They also didn't put in any of the includes/imports that were needed. Instead, they compiled it and added each piece after the compiler told them it didn't exist -- kind of stupid when you knew it was needed before you compiled.
To me that's too small a step. I can see the benefit in taking small steps, but not in taking microsteps. Also, I can see using the compiler errors as a sort of to-do list, but I can't see waiting to put something in, like a framework and/or includes, when you know it is required. That seems excessive and a waste of time.
In some ways this is just a rehash of the methodology of structured programming. The way I was taught to break things down was to write the higher-level code first, and this meant writing empty functions (or functions that return a constant). After that was done, you'd go back and fill in the code in increments. This just seems to be a rehash of that idea -- which seemed to get lost when the OO craze started.
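That structured-programming move looks something like this (a hypothetical sketch; all the function names are invented): write the high-level code first, against stubs that return constants, then fill the stubs in over later increments.

```python
# Increment 1: the high-level code, written first against stubs.
def generate_report(data):
    cleaned = clean(data)
    totals = summarize(cleaned)
    return format_report(totals)

# Stubs: empty functions or functions returning a constant,
# filled in with real logic in later increments.
def clean(data):
    return data          # increment 2: real validation goes here

def summarize(data):
    return sum(data)     # increment 3: real aggregation goes here

def format_report(total):
    return f"total: {total}"

# The high-level flow can be exercised end to end before the stubs are real.
assert generate_report([1, 2, 3]) == "total: 6"
```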
Re:What the XP folks have right (and wrong) (Score:2)
Yes, that's an example of an example trying to make a point: it's very easy to run the tests, so you'll often run the tests before the code's ready to compile. In real life, this happens a lot, no matter how silly it looks in a book: it's way more expensive to put in too much functionality than it is to put in too little, since if you put in too much your tests won't complain.
BUT, you don't have to do that. The #2 rule in the Pragmatic Programmers book is "THINK!" Good advice -- don't turn your brain off.
In some ways this is just a rehash of the methodology of structured programming. The way I was taught to break things down was to write the higher-level code first, and this meant writing empty functions (or functions that return a constant). After that was done, you'd go back and fill in the code in increments. This just seems to be a rehash of that idea -- which seemed to get lost when the OO craze started.
I see what you mean, but although this provides some wonderful support for top-down programming, it's not what it is or requires. It's just as easy to start in the middle and design outwards.
-Billy
Re:What the XP folks have right (and wrong) (Score:3, Interesting)
If you've written 1000 lines of code before you refactor, you're waiting way too long. I do little refactorings every few lines, and bigger ones whenever things don't look right.
Look at it this way: you can do your design before you code, while you're coding, or afterwards. The waterfall crowd advocates roughly an 80/20/0 split. But that assumes you learn nothing and think of nothing while coding. That's stupid; instead, what people do is grumble and imagine what they'll do in the next version.
Now that I'm doing test-driven development, my ratio is about 20/30/50. I still do some design up front, and that's very valuable. But instead of having those raging arguments about what design might work best, we start building and find out, refactoring towards the best design as we go.
Waiting until you've cobbled something together that passes the test and then hoping that your boss will allow you to refactor is a loser.
If a boss is such a micromanager that he's telling you which lines of code to work on every 15 minutes, then it's time for a new job. The people I work for recognize that I'm a professional and trust me to tend to my business, especially given how willing I am to explain it to them.
When bosses ask about refactoring, I explain that we do it so we can go faster. If they think it's a net loss, then I am always glad to prove them wrong: if they want to measure productivity each month, we can see which way works better.
Think of it this way: not refactoring is like not exercising: you get slower and slower, everything gets creaky and unreliable, and you die sooner because your arteries are clogged. A little regular exercise keeps you healthy, limber, and active.
Re:What the XP folks have right (and wrong) (Score:2)
Refactoring is improving the design of existing code. Be it 10^1 or 10^6 lines, if you're changing the design without changing the functionality, it's refactoring.
In any case, terminology aside, my point is that design is a higher-level thing that helps 100K lines look, feel and behave similarly. XP doesn't address this well.
Design is design. A program can have good or bad design at many levels of abstraction. You're right that it's important to look at the high-level ones, but many of the important clues about high-level design lie in how things are working on smaller scales.
As far as I can tell, any refactoring of big issues can be broken down into a series of tiny refactorings. One's notions about the big picture are played out in one's small actions. As an analogy, think of a cross-continental road trip. The fact that I'm going from San Francisco to New York has at least a little influence on every move of the wheel, and a lot of influence on some of them.
See above, I wasn't referring to the small work units. I agree 100% that the XP ideas work really well on the time scale of a day or a week, but they don't scale to a project or a large team.
A bold assertion. Do you mean to say that you've been unable to do it? That it's utterly impossible for anybody to do it based on some mathematical proof you have? Or just that you can't currently see how it would work?
Re:What the XP folks have right (and wrong) (Score:2)
Yes, and I'm saying that they are the same thing. The techniques may be different, but the principles are the same. Someone who can't write good code will never make a great "architect", and somebody who has no clue about the bigger picture is a dangerous coder.
The high level view is important and, I think, mostly ignored in XP.
The fact that nobody explicitly says, "Hey, don't forget the big picture," doesn't mean that XP practitioners don't pay attention to the big picture.
On the XP projects I've been on, we talk about the big picture with frequency. Little issues with object lifecycles lead us to examine and perhaps change our persistence layer. The recognition that some packages are annoying for us to work on because they have too many classes sometimes leads to a major code reorganization.
Note that some of the people involved in XP are people who are very interested in design on the larger scale. Martin Fowler, for example, has written entire books about UML and Enterprise Architecture. Many XP practitioners are also very involved in the Patterns movement.
You are right about one thing though, I shouldn't say "they don't scale". What I should say is that they don't scale as well as a more comprehensive approach.
Unless you want to come up with some statistics, I'm going to translate this as, "I think they don't scale [...]" or maybe, "I don't know how to make them scale".
People are using XP (and other Agile methods) on projects with large code bases. They seem to think it works well. Perhaps you should drop by a mailing list [yahoo.com] and find out how they're doing it. Arguing is always more fun once you have some facts.
Re:What the XP folks have right (and wrong) (Score:2)
Architecture is part of the process. Beck and others mention it; the teams I'm familiar with have no problem with it. There is no explicit step because they see it as all part of "design" and "refactoring", which are explicit steps already.
XP attempts to be scale-free. If you want to impose scale distinctions, fine, but recognize that it's not part of XP.
Each iteration included 'tests' (in the form of human walk throughs) to ensure adherence to design and refactoring of the design itself. Just like the code, we iterated over the design.
Why wasn't this happening naturally?
For adherence to the architecture, that's everybody's job. And if they forget, they have a pair to keep them in line. Sure, some people are more architecture-savvy than others, but since pairs should change a lot, those people should be able to influence the design as needed. Why wasn't that happening in your team?
As for refactoring of the architecture, it's not clear to me how a part of the architecture could be bad enough to need refactoring without anybody noticing that it was a pain. When they feel the pain, they should raise their voice a little and say, "Has anybody thought about X?"
Even if they don't raise it right then, they should be raising things at the next daily stand-up, yes? Were they doing that in your team?
Also, generally at release and iteration planning meetings we'll talk over architectural issues when new stories or tasks inspire their discussion. Didn't this happen for y'all?
We added the rituals.
Did you do this in response to an actual problem? E.g., that you got two months in and discovered that y'all were letting the architecture get crufty? Or was it imposed from the beginning based on theoretical concerns?
If the former, I'd agree that one way to solve that is adding an additional ritual every iteration. And bravo for fixing the problem you had! But there are other ways to solve the problem, too.
Saying, "My team had a problem adopting XP and we did this to fix it," is a pretty different thing than saying, "XP has a fundamental flaw."
Re:What the XP folks have right (and wrong) (Score:1)
> I've eliminated thousands of bugs simply by having the right design to begin with.
Have you? How do you know that?
Yours,
Tom
http://pmd.sf.net/
The important bit: zero tolerance for bad code (Score:2)
The really important thing in XP is that there is zero tolerance for bad code and incomprehensible design. Once it is realized that something could be made better another way, it is changed to be that way. The automated tests greatly facilitate this.
Re:What the XP folks have right (and wrong) (Score:2)
That much I think is not contentious. Very few people (even those with experience) can pick the ideal design ahead of doing any implementation or predict what the changes in requirements will be.
More controversial, perhaps, is the XP idea that your initial design should not be any more general than it needs to be to implement the functional requirements of your first code drop. There is some merit in this, since the requirements _will_ change and unused generality is often a waste of coding effort (not to mention creating extra complexity which may not be tested enough), but still I feel you have to use common sense and often design for extensibility at the start, even if you are not 100% certain the extra flexibility will be needed. You might be 50% certain and that is often enough.
But I do feel that the XP approach fits my personality. If there is no bus approaching the bus stop, I would rather walk to the next stop than wait for a bus to come along, because at least I am making some small progress, and this journey strategy minimizes risk, even if the mean journey time would be shorter by waiting around at the bus stop.
too many developers (Score:2, Insightful)
Re:too many developers (Score:2)
Haven't we... (Score:5, Funny)
Dibert and eXtreme Programing (Score:4, Funny)
What would Dilbert do?
Here [dilbert.com] and here [dilbert.com]!
I love buzzwords with X anyway...Buy PEAA, too (Score:3, Informative)
Is XP good? (Score:5, Interesting)
Does the reliance on incremental development and refactoring, rather than an intricate, up-front design, really work, or does it result in a big wad of band-aids?
Is pair programming OK, or do you sometimes get stuck with the nitpicker from hell who has to have every detail his own way?
Is close involvement with the customer good, or does it just give them daily opportunities for endless bright ideas that prevent convergence?
Just wondering...
Success Stories? (Score:4, Informative)
Last I heard, the Chrysler Compensation System was never finished; it was scrapped prior to going into production. What are the more recent projects that demonstrate how well XP works?
Re:Success Stories? (Score:2)
You've only told half the story
Re:Success Stories? (Score:5, Informative)
Hear it from the players themselves [c2.com].
One of the benefits of XP is that it can tell you much earlier about whether or not to terminate a project in the first place. From Extreme Programming Explained [amazon.com]:
-- Kent Beck
Re:Success Stories? (Score:2)
A brief excerpt:
Ack. Whatever its merits as a development technique, XP has clearly succeeded in generating a remarkably high concentration of silly buzzwords.
Re:Is XP good? (Score:3, Insightful)
I work in a game studio where our last project had six months of pre-production time. We generated reams of technical design documents. The intention was good, but they were never maintained, or even referenced, after their initial creation. We just said "documentation is necessary" and it needed to be done. In production, the team wasn't on the same page. Every programmer had a task list they just milled through. The assumption was that the initial requirements wouldn't change. The result was ugly. The product was subpar and a couple of months late. Everyone was miserable. It sucked.
I'm currently leading a new project here. We're six months into production, and every milestone has been delivered ON TIME and accepted by our customer. The team is focused on the current milestone, and there isn't a lot of process to get in the way. The best part is, writing code is fun again. We don't have goals we can't accomplish. And we fully expect the product requirements to change during production.
I could get into specifics about our process. But I don't think it would be that helpful. I think specific methodologies like XP are guidelines to get you started. From there, you really should re-evaluate your process frequently (a fundamental exercise in being "Agile") and make changes as necessary. Kind of like optimizing your code.
The following links gave me all the information I needed to devise an initial process plan (which included TDD). But once it was put into practice, it naturally evolved into the process we have now (which doesn't include TDD)...
The New Methodology by Martin Fowler [martinfowler.com]
The Agile Manifesto [agilemanifesto.org]
Agile Software Development Ecosystems by Jim Highsmith [amazon.com]
I also suggest reading the chapters on "thematic milestones" in Writing Solid Code.
Jim Highsmith? (Score:2)
Re:Is XP good? (Score:2, Insightful)
does anybody have some war stories?
It's interesting. The "war" has been with management. We showed our Project Manager a couple of the books (he actually READ them) and we were allowed to use XP. After our first project was successful, we have been trying to get official approval to develop using the methodology and it has been tough.
Does the reliance on incremental development and refactoring rather than an intricate, up-front design really work, or result in a big wad of band-aids?
As with any approach the team must be disciplined to
1. Test First
2. Talk to each other
3. Ask for help when needed
4. Refactor mercilessly
5. Code the simplest thing that will work
You will probably say "of course!" to all of the above, but if you don't stay disciplined within the team, you will get into trouble.
Is pair programming OK, or do you sometimes get stuck with the nitpicker from hell who has to have every detail his own way?
Pairing is great. If your goal is not well written simple software, you are not part of the team. You are the cowboy who is the reason the team is all working on Saturday to fix the sh*t you went off and wrote by yourself! (Ok, Ok, breathe....)
Is close involvement with the customer good, or does it just give them daily opportunities for endless bright ideas that prevent convergence?
The customer wants an application as quickly as possible. They have a business need and don't want to have to wait 4 more months for their great ideas to be implemented. If they continue to think off the top of their heads, they probably didn't know what they wanted in the first place and it will take that extra 4 months to get it out of them and get it into the application anyway.
XP works. Our teams are 4 or 6 programmers with a tester. When we test first and give the tester something to test that is not fragile, we get much farther much faster than "code it and throw it at the tester" and hoping it works.
Re:Is XP good? (Score:2, Interesting)
Then, after several months (almost a year) of developing the XP way, the company had to have a demonstrable product within the next two weeks. In the end, nothing mattered except the ol' bottom line. We had to go to market to beat a competitor. It did not matter that we still had nine weeks of stories to finish the project. It did not matter that tests were failing. It did not matter that the kernel and OS version had changed twice in the year, each time causing the developers and testers to re-visit, re-test, and re-code a large part of the existing code.
A nice shiny package was not ready exactly when the suits wanted it, and the company shut down the division, moved everything to another state and other developers. The product still has not gone to market, but the company got rid of the developers and testers that didn't deliver.
Lest you think this is sour grapes, I actually think XP practices can work for what they were intended: small-project development.
Ours was a major undertaking with an entirely new product. There was education and training necessary that was never taken into consideration as far as the project timeline was concerned. We also had to learn new hardware technology, operating systems, development and testing languages at the same time we were learning XP methodology.
It was not fun!!
Re:Is XP good? (Score:2)
Re:Is XP good? (Score:5, Interesting)
I'm immensely happy with it. It's not a magic bullet, but gives me the tools to solve a lot of common problems.
Does the reliance on incremental development and refactoring rather than an intricate, up-front design really work, or result in a big wad of band-aids?
Yes, it works. It takes a bit to learn how to do it well, but it works.
Indeed, in some ways, it works better. When I did most of my designing up front, I had to guess about a lot of things. Now I'm a pretty good guesser, but I'm not perfect. And since I knew I was making designs that I had to live with for years, I tended to play it safe, putting in things that I would probably need someday. I hoped.
Now, I don't put anything in until I know I need it. This keeps the design cleaner, and saves me all of the time I would have spent on bad guesses and things obsoleted by changing requirements.
Is pair programming OK, or do you sometimes get stuck with the nitpicker from hell who has to have every detail his own way?
I like pair programming a lot, and more people are good at it than you would expect. But some people are indeed pains in the ass; just last week we had to sit one guy down and have The Talk with him. If he doesn't shape up, he can go be a nitpicking cowboy on somebody else's project.
Right now I'm also doing some solo work, and it really suffers from the lack of a pair.
Is close involvement with the customer good, or does it just give them daily opportunities for endless bright ideas that prevent convergence?
It can work really, really well. The trick is that you must let them steer the project, so that they can see that asking for all sorts of random stuff means that their precious project will be screwed.
For example, I and another fellow were recently called in to work on a project that was six weeks from delivery and running late. The product manager handed us a one-page spec, spent 30 minutes showing us the existing code and said, "So can you do that in time?"
Now the only honest answer was, "I dunno, but probably not with all that you've asked for." But we walked him through the XP practice known as the Planning Game, where we broke his requirements down into bite-size chunks, wrote them on index cards, and then he ordered them by importance. There was far more work than could be done in time.
So then we asked him to draw a line at the point which, if we didn't reach it, he would slip the date, and if we did, he would ship. It caused him great pain, but he did it. Then we agreed that we would start developing and see how things were progressing.
As time went on, he, like any client, had all sorts of great ideas, and so did we. Every one got written on a card, and we asked him to place them wherever he wanted in the stack. This forced him to make tradeoffs; the more stuff he put in before the release, the farther away the release would be.
When the six weeks were up, we shipped a product with less than half of what he originally asked for. But instead of being pissed, he was happy! He got the half that was most important to him, and he was the one who made every choice about scope, so it wasn't our fault; that was just how things were.
And best of all, because we'd stuck to the XP practices and done extensive testing, there have been no bugs reported against our code. With code that solid, we're glad to start extending it, instead of the usual pattern where a big deadline push means the code is crap.
So yes, I like XP a lot. I get to write good code, polish the design, and have good relationships with the business folk. It rocks!
Re:Is XP good? (Score:2, Insightful)
The team should create a common guidelines specification, which must then be agreed to by everyone in the project. After that, nitpicking to make stuff conform to the guideline is a good practice, since sticking to the agreed style is an improvement.
if you're organized (Score:2, Insightful)
Write code to determine if your code is flawed... (Score:2, Insightful)
Re:Write code to determine if your code is flawed. (Score:1)
Better design is preferred, but it is not always a reality in the workplace.
Re:Write code to determine if your code is flawed. (Score:2)
In practice, this isn't really a problem. As long as you write your tests first (rather than hacking them together afterwards) then they're a pretty good expression of what the code should do. Then the code itself is an expression of how to accomplish that.
Of course, it's possible that one has the wrong idea and makes the same mistake twice, but you use other practices to guard against that.
Seems like an extra layer of complexity rather than a useful method.
Maybe. My score for the last year or so is less than one reported bug per man-month of coding. And it's lower than that for my code written while doing pair programming.
I spend almost zero time hunting bugs.
Re:Write code to determine if your code is flawed. (Score:2)
Actually, test-first development tackles this case specifically. You write the test first - which has to fail since you haven't written the code yet. (Technically, it even fails to compile. You then create stubs for the methods you call so that it compiles.) This is the state known as "Red". If the test passes when you first create it, you know that you have a bad test.
Once the test fails, you then write the code so that the test passes. This is the state known as "Green".
In practice, I do find that I occasionally write insufficient tests - which means that bugs get into the final product. But once a bug report comes back in and we find the problem, the first thing we do is write a test that fails (i.e., back to the red state). Then we fix the bug so that we move to the green state. And voila, we have a regression test for that particular bug.
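The red/green/regression loop described above can be sketched with Python's unittest module. The Money class here is a made-up stand-in, not the book's actual multi-currency code:

```python
import unittest

# Hypothetical class under test -- just enough code to show the cycle.
class Money:
    def __init__(self, amount):
        self.amount = amount

    def times(self, multiplier):
        return Money(self.amount * multiplier)

class MoneyTest(unittest.TestCase):
    # Written first: this test fails ("red") until Money.times exists,
    # then passes ("green") once the code above is written.
    def test_multiplication(self):
        five = Money(5)
        self.assertEqual(10, five.times(2).amount)

    # Added after a (hypothetical) bug report: write a failing test that
    # reproduces the bug, fix it, and the test stays on as a regression test.
    def test_multiplication_by_zero(self):
        self.assertEqual(0, Money(5).times(0).amount)
```

The key discipline is seeing each test fail before making it pass; a test that has never been red hasn't proven it tests anything.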
flaws in tests not a problem (Score:2)
Suppose 10% of the code you write contains errors (at random), and you write tests covering 50% of the code. Then you find 90% of the errors in 50% of the code (assuming your error rate in test code is the same as the error rate in 'real' code). Your error rate for the overall program should now be around 5.5%.
In other words, it doesn't matter if your tests can also be flawed; you'll still improve your code by doing testing. On the other hand, planning will make absolutely
It then boils down to whether test-driven or test-last is the most economic policy, and how much testing to do.
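The arithmetic above checks out; here it is as a quick sketch with the assumed rates as variables:

```python
# Reproducing the parent's numbers: 10% of the code has errors,
# tests cover 50% of the code, and testing catches 90% of the
# errors in the covered half.
error_rate = 0.10
coverage = 0.50
catch_rate = 0.90

# The uncovered half keeps its full error rate; the covered half
# keeps only the errors the tests missed.
remaining = (1 - coverage) * error_rate + coverage * error_rate * (1 - catch_rate)
# remaining is about 0.055, i.e. the overall error rate drops to ~5.5%
```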
Anyway - since this economic argument (not flawed tests) is the crux of the matter, you should take a read of Beck's thinking on the issue, in May 1999 issue of C++ Report [adtmag.com], particularly the last paragraph.
Cheers,
Baz
The Unix Philosophy (Score:2, Informative)
[camalott.com]
In Other News... (Score:2, Funny)
Moving beyond XP: XXP (Score:5, Funny)
That's why I've moved on to XXP, which focuses first on correctness of tests. First, I write a test that tests a test. Then I write the test. I test the test until the test tests ok. Then I write a test for another test, and so on.
My pair programming partner is currently working on an idea he calls "XXXP". I'll post our results if we ever finish a project without getting lost in infinite recursion.
Re:Moving beyond XP: XXP (Score:2)
Test the test? Sure it's possible
Tools like Jester [sourceforge.net] let you evaluate the quality of your JUnit test cases. So yes, you can revise your test cases in between refactoring your code.
Hopefully the developers of Jester didn't evaluate their test code with Jester (XXP with Jester-for-Jester), the mere thought has me spiralling into a daze of infinite recursion.
Sloppy programming (Score:1, Troll)
Re:Sloppy programming (Score:1)
Admittedly, I tend to just sketch out a basic overview of any project I set out on, then dive straight into the code, but my projects are small-scale, so I can get away with it
Re:Sloppy programming (Score:1)
Yeah, aren't you glad we live in a world with disciplined designers who don't test their code and never work to customer specifications and don't integrate continually and never run acceptance tests? I'd hate to think about the quality of software we'd have if people did THOSE things!
Re:Sloppy programming (Score:1)
Are you saying THOSE things were invented for XP?
Re:Sloppy programming (Score:1)
By no means! I am merely saying that they're fundamental to XP. If you're attempting to do XP without those, you will need a combination of very lucky and very, very brilliant developers to have a chance of delivering good software on time.
(If you're not doing XP but you are doing those practices, you have a pretty good chance of delivering good software on time.)
Re:Sloppy programming (Score:1, Interesting)
Actually, if you look at the credentials of the original "inventors" of XP, and some of its biggest champions (Robert Martin, Martin Fowler), you'll see that they are big champions of proper software design as well. They just realized that "big design up front" -- which they themselves had been preaching -- wasn't working out, and that most implementations of more iterative approaches weren't working out either. Hence a "reaction" in the form of XP.
As for discipline, proper implementation of the XP practices requires far more discipline than most highly structured approaches. What it minimizes is the creation of documentation for documentation's sake. What has happened is that people who hate design/discipline have decided that by saying "we're doing extreme programming", they are off the hook. But don't blame the inventors.
Re:Sloppy programming (Score:1)
I guess you didn't actually read my post. When I said I worked with some guys who used XP, what do you think I was saying? And I find your comment about academia humorous. It supposes that nobody would want to actually *design* something before building it except for academics. Wrong, sparky. I work for a company that takes pride in actual engineering.
Is there anything here for the GUI developer? (Score:4, Interesting)
I'd be the first to admit that XP offers a lot of risk-reduction -- for teams that are working on things that are easy to unit-test.
With a class that is supposed to take in a bond and output the yield curve, it's easy to write a unit test. But what about the next class, the one that renders the yield curve on the screen? What about the complex, distributed system of Excel objects and forms, things that draw a network, and things that flash green and go 'ping' to indicate a change, all of which are equally necessary but generally much harder to write and much more likely to go wrong?
Has anyone tried to apply test-first programming to complex guis? I can't say that any obvious way to do it has ever occurred to me. Worse yet, when I ask I generally find that people either
a) Are in the same position as me, or
b) Believe that a GUI is a little thing you spend a couple of days on after you finish the application
So, for now XP is something I read about rather than something I actually do.
Re:Is there anything here for the GUI developer? (Score:2)
Other things such as matching pieces of the picture and such would also be nice.
It'll never be easy to test a GUI, but right now GUI widgets are almost entirely focused on giving commands and limit the information you can extract back from them to screen position, size, and little else. I want to be able to assert that my textbox can handle text of the size 50Ms with all of it being visible, which is exactly the sort of thing a user requirement might be. (Indeed, I think of this example because of a text dialog I just dealt with for entering a Wireless access key that was two chars too short; if you're that close, why not go for the gusto and let me see the whole thing for visual verification of correctness?) In fact you can do some of this in some widget sets; Tk will let you do that exact test, for instance. But it's haphazard from a testing point of view, because it wasn't implemented for testing.
The other thing we need is a way of inserting all user-sourced events cleanly, and in a well-documented and supported manner, directly into the event loop as if they came from the user, indistinguishable from user input to the rest of the application. Again, haphazard, poorly-supported and poorly-documented abilities to do this exist in some toolkits, but since it's not meant for testing it's often incomplete or completely undocumented.
Re:Is there anything here for the GUI developer? (Score:3, Interesting)
However
Pros:
You can have many different GUIs - personally this is how I like cross-platform GUIs done - one tailored for each platform.
If it is a separate program, then you can script it, right from the start too.
You can test the code more easily, since the GUI is separate.
Cons:
For some reason, having them separate seems to make them less robust in the examples I've seen.
So er yeah anyway. One solution is, as above, to separate your GUI and code. Another solution, perhaps, is to plan all your DCOP calls (or whatever your favourite scripting thing is).
This ties in nicely with your use case diagrams, since you can have calls for your "stories".
I'll shut up now.
Re:Is there anything here for the GUI developer? (Score:2)
On smaller projects the controller and the view often get integrated into the same place, but once you start talking multiple windows etc then you need to split the Controller out.
I believe that Java's GUI stuff is done like this, and Cocoa on Mac OS X definitely uses this (the framework is built on this foundation).
Re:Is there anything here for the GUI developer? (Score:2)
Re:Is there anything here for the GUI developer? (Score:1)
A couple of days? If a variable dump's good enough for me it's good enough for the proles
MarathonMan (Score:2)
Re:Is there anything here for the GUI developer? (Score:2)
Rendering stuff, for example, can be tested by comparing output bitmaps. For example, I'm doing some photo stuff, and all of the methods that output images I just get working manually and then once their output looks right, I save that and make the check part of the test suite. Anything that outputs numbers (e.g., the calculation of the size of the new image for a scaling operation) is just tested the usual way.
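A "save the known-good output, then check against it" test like the parent describes can be sketched as follows; the renderer here is an invented stand-in, not real imaging code:

```python
import hashlib

# Hypothetical renderer -- stands in for any method that produces
# image bytes deterministically from its inputs.
def render_gradient(width):
    return bytes(range(width)) * 2  # fake deterministic "pixel" data

# First run: eyeball the actual output manually, then record its
# checksum as the golden value (in real use, saved to a file).
GOLDEN = hashlib.sha256(render_gradient(8)).hexdigest()

def test_render_unchanged():
    # On later runs, any change to the rendering code changes the
    # digest and the test goes red -- a cheap visual-regression guard.
    assert hashlib.sha256(render_gradient(8)).hexdigest() == GOLDEN
```

The tradeoff is that intentional rendering changes require re-blessing the golden output by hand, which is why this works best for stable, finished-looking output.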
Other GUI stuff you can test by generating fake events and setting up mock collaborator classes to make sure things do what they should when they should.
And you can also take a just-the-essentials approach. For my web work, I test the underlying components extensively. I also have a web robot that hits a test copy of the site and makes sure that it all works well enough. I don't test every bit of HTML; I just make sure that important strings appear in the right place, that clicking through gets to where I think it should, and that feeding the right data to the right forms gets about the right responses.
Still, it's more of a pain than it should be. I look forward to the day that somebody develops a GUI framework test-first; those components will be much easier to work with.
Re:Is there anything here for the GUI developer? (Score:1)
That is, I try to do a strict MVC (Model-View-Controller) class separation, with the View as lightweight as possible: make the UI as dumb as possible, so that if there is any room for errors, they show up in the simplest UI walk-through.
This also means that you can do rapid development (usability testing, anyone?) on the nitpicky presentation details. Automated GUI testing tends to be screen-reader testing, which means the test needs to change when minor GUI components change-- which is unreasonably nasty.
As a specific example, JSP tag classes have a doStartTag() method which typically writes to the web page stream. It is annoying to unit test because it needs web page context and must handle lots of data gathering and exceptions.
I write my doStartTag()s to gather data, handle exceptions, and write to the web page -- and everything else (the real meat of the method) is done by a helper method that takes data and returns a String. The helper method is written to be easy to unit test: it throws all the interesting exceptions and can fail in all the interesting ways. The doStartTag() is simply the JSP wrapper.
Of course, a pure MVC approach has my helper method in a separate class-- which is actually my most common approach-- but in many cases that would be overkill.
This approach can be applied to any kind of GUI: treat GUI code as a minimal wrapper around easy to test code.
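The wrapper-around-a-helper pattern above can be sketched in Python; all the names here are hypothetical, invented to illustrate the shape, not a real JSP or GUI API:

```python
# The easy-to-test "meat": pure data in, string out. No GUI or web
# context needed, and it fails in all the interesting ways.
def format_greeting(name, visits):
    if not name:
        raise ValueError("name is required")
    return f"Hello {name}, visit #{visits}"

# The thin wrapper: gathers data from the UI layer, handles errors,
# and writes to the page/widget. Kept too dumb to hide bugs.
def do_start_tag(page, request):
    try:
        page.write(format_greeting(request.get("name"), request.get("visits", 1)))
    except ValueError:
        page.write("Hello, stranger")
```

Unit tests then hit format_greeting directly, and the wrapper gets only a quick walk-through (or a test with a fake page object).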
Re:Is there anything here for the GUI developer? (Score:2)
Having had to test GUI apps before, I think a useful approach is to separate the GUI ("View") and the data ("Model"). Automating GUI tests is a bitch and the tests are very fragile if anything in the GUI changes. If possible, design a command-line version of the app that includes everything but the GUI. The command-line version can be scripted to perform automated input/output tests. If your real app is a COM object, your command-line version could even be a simple app that loads the app COM object.
This application design is not a "natural design" a developer would choose on their own, but I think a test-driven process can really improve the app's modularity.
My view on XTREMEprogramming (Score:1)
I don't know whether it is truly more practical than two separate individuals programming, but it does feel more productive and more enjoyable, it's like you hardly ever get stuck on anything.
The thing is, like I said I've only done it with my brothers or friends, people I generally get along with. I could easily imagine my great experience with it turning into the programming project from hell if you don't like the person you're with.
Re:My view on XTREMEprogramming (Score:1)
Do we need a book for every rule of XP? (Score:3, Interesting)
Testing is important, but the XP testing philosophy is a catch-all for actually thinking about your product and the purpose of your product. XP is about making hack programmers look legit. XP has some good points, such as an emphasis on simplicity, testing, and customer satisfaction, but mostly it's about making bad habits look good, like no design and iterative feature hacking with ignorance of the bigger picture of the app.
Some design up front is important. Documentation is important. Code ownership is important to an extent. In a medium to large system, having everyone able to change any line of code is just stupid. People change shit and don't have a clue why the code looks the way it does. One of the arguments for no code ownership is that a lead architect can't keep it all in his head. Well, what about a team that consists of many folks who aren't as capable as that lead architect? According to XP, they are able to comprehend the whole system. And they are allowed to change whatever they want, when they want. So get a couple of average programmers with large egos, and you have a lot of problems.
XP is great for people who are happy doing bug fixes all day instead of avoiding the bugs to begin with. The assertion that XP results in fewer bugs is pure speculation and, from my experience, a very misleading claim. Just because your test succeeds doesn't mean that your program is correct. And if the test is the only glue validating the success of your final solution, you're screwed.
just another way of designing (Score:3, Insightful)
Re:just another way of designing (Score:2)
Should? What, because it was written on the back of the stone tablets that Moses brought down?
Hi, I've tried it both ways. Test-driven design and refactoring work. It's possible to build good software both ways. In my opinion, TDD and refactoring are more productive and more fun. But hey, you're welcome to stick with your stone tablets if you want.
Tests are only as good as your requirements…. (Score:4, Insightful)
Take a trivial example -- an entry form for a phone number. What is a valid phone number? Add in real-world things like extensions, folks using alphanumeric substitution (1-800-DISCOVER), and internationalization, and it gets interesting. Now a test driver is not that big of a deal if you know what to put in it. From a design standpoint, it would really be nice to have solid requirements and test scripts that provide concrete examples of what the business was asking for. Real world? I could only dream of mediocre requirements that might resemble not only what they asked for, but what they want... At least enough to try and read their minds.
Re:Tests are only as good as your requirements…. (Score:2)
XP also calls for "customer on site"; the theory is, it's far quicker to get answers in real time rather than waiting for someone to write a 600 page document.
THIS DOES NOT SCALE UP. You know it, I know it, Kent Beck probably knows it. Some of the other agile development methods [agilemanifesto.org] try to address this.
Re:Tests are only as good as your requirements…. (Score:3, Informative)
Test Driven Development is a great way to deal with changing requirements. For a phone number validator, you would write up tests for all of the initial requirements:
testLocalNumber()
testLongDistanceNumber()
testTenDigitLocalNumber()
testAlphaNumericNumber()
Then when you deliver the application and find out that you need to deal with international numbers, you write:
testInternationalNumber()
You get a red bar, because you can't handle i18n numbers yet. So get that working, and when you are on a green bar, you know that it still works for all those US numbers and it works for the new foreign numbers.
Then you extend as you get new requirements:
testEnglishNumber()
testFrenchNumber()
testItalianNumber()
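A sketch of how such a validator might grow test-by-test; the patterns and helper below are invented for illustration, not a real library:

```python
import re
import unittest

# Hypothetical validator grown one failing test at a time: each new
# requirement added a pattern without breaking the earlier ones.
PATTERNS = [
    r"^\d{3}-\d{4}$",             # local: 555-1234
    r"^\d{3}-\d{3}-\d{4}$",       # ten-digit: 800-555-1234
    r"^1-\d{3}-[0-9A-Z]{4,8}$",   # alphanumeric: 1-800-DISCOVER
    r"^\+\d{1,3}(-\d+)+$",        # international: +44-20-7946-0958
]

def is_valid_number(number):
    return any(re.match(p, number) for p in PATTERNS)

class PhoneNumberTest(unittest.TestCase):
    def test_local_number(self):
        self.assertTrue(is_valid_number("555-1234"))

    def test_alphanumeric_number(self):
        self.assertTrue(is_valid_number("1-800-DISCOVER"))

    def test_international_number(self):
        # This one was red until the "+" pattern above was added.
        self.assertTrue(is_valid_number("+44-20-7946-0958"))

    def test_garbage_rejected(self):
        self.assertFalse(is_valid_number("not a number"))
```

When the requirement changes yet again, the old green tests are what tell you the new pattern didn't break last month's numbers.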
What is the alternative to this? You are still lacking requirements, even if you aren't doing TDD. But you wouldn't have any tests, and you wouldn't know exactly what your class can do for you.
Test Driven Development is not all about leaving test artifacts. The tests are constantly changing during development as your requirements change. The main idea behind TDD is to program from the client side of things first. This is similar to the idea of writing documentation first, with the added benefit that as you finish the tests, you prove that the class does what you want it to do.
-Mike
Re:Tests are only as good as your requirements…. (Score:2)
I'm not sure I have. First off, I'll qualify this - I've done some XP development and I am a bit cynical at this point in time. I'm not expecting a 200-page specification document, use cases, etc. Normally I'd settle for a return phone call, email, or some sort of feedback/response. The business users tended to be vague or rotate the requirements faster than a Borg's shields, if they responded at all. That was a pretty big if, btw... Poor communication is a major hurdle for any methodology, and XP suffers 'when business behaves badly'.
In the above example, what the business really wanted was a unique identifier - they just did not know it at the time. The phone number changed to an email address, to a SSN, and a few other things. The XP process puts too much emphasis on testing at the wrong time, IMHO. The forest was lost in the trees.
Don't think I'm against test scripts... I create drivers in my own code, and also expect someone to test the veracity as well. To be honest, I think it is better if I don't create the final test script as users tend to have an unlimited imagination when it comes to using things improperly. The lack of faith in XP does not translate into dumping QA at the development, staging, or production level.
I've got opinions on pair programming, and a few other aspects of XP too. The short of it is, if you have strong communication, teamwork, and realistic expectations, almost any methodology will deliver. Success when things are going to hell in a handbasket? XP is not a silver bullet. I've seen management cut the number of workstations in half, but when was the last time you saw a 40-hour work week?
Anyhow, I concur about test based development. Same idea, different denomination... (grin)
Re:Tests are only as good as your requirements…. (Score:1)
--Steven
Re:Tests are only as good as your requirements…. (Score:3, Insightful)
Spoken like a guy who has never done XP.
One of the core requirements of XP is known as the On-site Customer. This means that there should be somebody at all the meetings and sitting within shouting distance of the programmers, somebody who can answer questions like that or find out the answers. (In some companies this person is called a Product Manager; in others a Business Analyst).
I could only dream for mediocre requirements
Yes, requirements documents are generally silly. That's why XP doesn't use them. Instead of waving around some phone-book sized pile of garbage, XP practitioners use high-bandwidth, low-latency communications techniques: they talk. And they try out new versions of the software every week.
At least enough to try and read their minds.
Yep! And get yelled at when it turns out you don't have psychic powers. 'Cause that's what it takes to make standard development practices work. Isn't that a sign we should try something different?
Integration tests on entertainment software? (Score:1)
OK, so developing automated unit tests, possibly even before writing the code that will be tested, is usually straightforward and almost always a good idea.
But how can anybody design automated integration tests for applications that are intended and designed to have pseudorandom behavior, such as interactive entertainment software?
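One common answer (not from the post above) is to make the randomness injectable and seeded, so that "pseudorandom" becomes repeatable in a test. A minimal sketch, with invented game-style names:

```python
import random

# Hypothetical game routine: damage depends on a die roll. The RNG is
# passed in rather than hidden inside, so tests can control it.
def attack_damage(strength, rng):
    return strength + rng.randint(1, 6)

def test_attack_is_deterministic_with_fixed_seed():
    # Same seed -> same sequence -> the "random" behavior repeats
    # exactly, which makes an automated integration test possible.
    a = attack_damage(10, random.Random(42))
    b = attack_damage(10, random.Random(42))
    assert a == b
    assert 11 <= a <= 16  # the bounds hold for any seed
```

The same trick scales up: record the seed of a whole game session, and a failing run can be replayed bit-for-bit.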
Book Draft is available online (Score:3, Informative)
My (mixed) experience with test-driven development (Score:4, Insightful)
Test-driven development seems to call for a series of baby steps, each corresponding to a unit test case. Unfortunately, I wasn't always able to identify "the next baby step"; even if I could pick what I thought was the next unit test, I sometimes found myself spending far too much time, and writing far too much code, for just that next test.
I also sometimes found that "the next test case" already passed. I don't know if I wrote more than I needed to early on, or I picked the wrong next case, or if there's more to all this than I've picked up.
When I was in good TDD mode, I was flying; test, red, code, green, refactor, green, next! It's a very rapid, and very intense, experience. There's a reason XP usually calls for a 40 hour week; by the time you're done with a few hours of this, you are tired! (But you've gotten a lot done.)
Re:My (mixed) experience with test-driven developm (Score:2)
Generally my unit test code writing takes about 100% - 150% as long as my production code writing. However, I spend almost no time on debugging. I find 95% of the bugs that would otherwise make it to the QA department (or worse yet, production) while the problem is still fresh in my mind. Usually it's caught and cleaned within a few minutes of when I write it.
Try not writing any unit tests on a fairly isolated chunk of code sometime. Then keep track of your debug time on that class for the following six months.
and writing far too much code
The more you write unit tests, the quicker you will become at laying out test data. You may also want to look through the constructors of your production code - sometimes this can be an indicator that it would be worthwhile to simplify the construction process. If it's taking you a bunch of lines of code to get a test-worthy object when testing, it probably takes a bunch of lines of code to get a usable object in production.
That said, there are some objects that are just a hassle to create a sample of. For those, I grit my teeth and get through it by reminding myself how much worse it would be to have to debug it after it's in production, with the customers at a standstill and my boss pacing a groove in the carpet.
I also sometimes found that "the next test case" already passed. I don't know if I wrote more than I needed to early on
My first test for a given class usually fails with a "cannot resolve symbol" because the class doesn't exist yet. I also occasionally go back and intentionally break a class to get the test to fail, because I want to verify that the test itself is actually testing something (prevents you from having to write a test class for your test class: Monkey, MonkeyTestCase, MonkeyTestCaseTestCase, MonkeyTestCaseInfinity, MonkeyTestCaseInfinityPlusOne...)
eXtreem buzzWording (Score:1)
Really! Who cares today about all those boring things like FSM analysis and formal task description! Who needs a deep understanding of what his/her software does, when one can just test a small part of it!
I just want to emphasize the following simple thought: one may invent a new perpetuum mobile without the slightest understanding of physics. And such inventions nowadays aren't even taken seriously anymore.
Computer science is mature enough to be utilized in everyday work. One has no need to study another perpetuum-mobile invention guide. Just remember your classes.
I hope that in my lifetime analytical computer science will take its place in program development instead of witchcraft spells.
I just want to ask those adepts of XP: how will you ensure that a small change in the code won't affect quite distant code parts?
XP == Xtremely Pstupid (Score:2, Informative)
http://www.softwarereality.com/lifecycle/xp/case_against_xp.jsp
In two minutes I was able to save both myself and my company many man-years and headaches.
Re:XP == Xtremely Pstupid (Score:2, Informative)
http://www.softwarereality.com/lifecycle/xp/case_
How XP of me.
If you meet the XP in the road, kill the XP (Score:2)
I think the "Case Against XP" makes a lot of valid points. Who can afford to continuously refactor 100KLOCs of C code for example? It's hard enough keeping up with the integration issues with a stable code base. You want code rage? Try breaking a few dozen test cases by making "improvements" to your design that ripple in all directions.
One important point about "Case Against XP" is that it may be criticizing radical forms of XP that might not exist in practice. A little slack might be warranted if XP projects in practice do not actually churn the code base as much as the religion might expect. And the author makes fun of Beck for excessive optimism, even as he extols the effortlessness and elegance of "getting it right the first time"... yeah, right.
So yes I'm a critic of XP, but I'm also a critic of ~XP.
If you meet the XP {booster,critic} in the road, listen to him but don't follow him... chaos and madness may await you as surely as dereferencing NULL. You must find your own way on the road, grasshopper.
Great without XP (Score:1, Interesting)
I'd already written an engineering requirements document which we reviewed with management and our partner company on the project. I've already put together an architecture document showing all the major components and who talks to whom (reviewed with the dev and QA teams). My manager even insisted that I write a design document on the specific module I was about to write, which included a schedule and unit tests.
After two months of documentation I was desperate for a chance to code. TDD? Why not. I gave it a try -- not in nearly the detail the book mentions, but I have a dozen tests each hitting a key requirement of the design. When I recompiled the code for the ARM hardware, it was gratifying to see that everything still worked on the target platform. Every time we make a change, we have the initial tests I wrote to make sure nothing is broken. It's nice.
I'd like to make the junior guys do this -- at least then I can tell how far along they really are. "Oh, it's 80% there!!" Sure, show me the test cases that pass and how these relate to the requirements and design.
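One way to make "how done are we?" concrete is to carry the requirement ID in each test's name, so progress is just the list of passing tests. A minimal sketch (the `parse_record` function and the REQ numbers are invented for illustration):

```python
import unittest

# Hypothetical module under test: a parser required to accept both
# comma- and semicolon-delimited records.
def parse_record(line):
    sep = ";" if ";" in line else ","
    return [field.strip() for field in line.split(sep)]

class ParserRequirements(unittest.TestCase):
    """Each test name carries the requirement it demonstrates, so
    'it's 80% there' becomes 'these specific requirements pass'."""

    def test_REQ_101_comma_delimited(self):
        self.assertEqual(parse_record("a, b, c"), ["a", "b", "c"])

    def test_REQ_102_semicolon_delimited(self):
        self.assertEqual(parse_record("a; b; c"), ["a", "b", "c"])
```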
Don't ask me to pair program. I'd rather write docs and review them with the team and do code reviews.
BTW, I didn't like the book that much. But the method is good.
scrum methodology plus xp == happy developers (Score:2, Informative)
I work for a largish (the largest?) media organisation in the world cutting code as part of a team of 4 developers plus a senior developer.
We've been using Scrum and Test Driven Development for about six months now, and there is NO WAY I'm ever going back to writing code without writing tests first.
Scrum (see http://www.controlchaos.com) is a "lightweight" development methodology. It's developer driven (no more Gantt charts !!!!), and it's agile, meaning that it embraces change throughout the project lifecycle. I can highly recommend it. But I digress...
Test Driven Development is something that every halfway serious programmer should be doing IMHO. It doesn't replace the initial back-of-a-fag-packet design stage, nor does it stop you designing elegant and effective architectures. What it does do is:
So I can recommend TDD. Check it out. By the way, we're coding mainly in python with some java thrown in too.
I might be missing the point... (Score:1)
So many projects don't even get completed, and this is mainly because management will not accept realistic estimates of how long they will take. The timescales get shrunk and, funnily enough, the project runs over. Once the time pressure is on, quality goes out of the window, because it is more important to deliver.
Cynical??? - Maybe, but I have seen this happen so many times and yet no-one ever faces up to the fact that poor estimates are the root cause of most failures. If it can't be done in time, DON'T DO IT. If it must be done, allow sufficient time to do it properly.
Still, I'm only a software engineer. What do I know about anything.
harrumpf.
Kent Beck is a cult leader (Score:2, Troll)
The first XP book written by Kent Beck reads like a self-help book. If you're going to write a book whose principle is "feel good about yourself," and you're trying to fill 200 pages, then you can't just cut to the chase. You have to ramble on for a few chapters about what you're going to say, and slowly let out bits of information here and there, then there are chapters that reiterate what you've already said. Beck's books--and all of the books in the XP line that I've seen--read the same way. You could explain XP clearly and concisely in a few pages, but the XP books go around and around in circles, and after a while you're not sure if you're getting new information or not. And, miraculously, the XP line has been extended to six or more books, each of which goes over the same small bit of information in another verbose and rambling way. There's even a book about XP criticisms, which is an officially sanctioned book in the XP series, which exists simply to reinforce the basic principles of XP.
The whole thing smacks of books like Dianetics or various lightweight volumes from self-help gurus. If there was any meaning to XP, it has been lost in endless self-justification. Imagine an entire series of books that did nothing but tell you how cool Linux was. What's the point?
And now for something completely different... (Score:1)
Here's an idea: design projects so that they're testable! Yeah, that's right: design components and subcomponents so that they can be tested! This is the biggest problem I have found when tackling any project. Developers are always thinking in terms of quick and dirty solutions, delving straight into implementation details without paying any attention to architecture.
The fundamental problem is that most folks aren't thinking in terms of testing and architecture. I'm not convinced that the XP method works. Actually, after having read the case against it, it seems counter-intuitive and plain annoying. I'll say this though, XP is onto something by placing such a strong emphasis on testing. The first question on every developer's mind should not be "How can I solve this problem?" but "How can I solve this problem with a testable solution?" The project architecture should be subsetable and the only way to achieve that is by designing each piece of the total solution to be testable.
A project I'm currently working on has a portion of the team building a GUI enabled, unified test suite for testing the many components being developed. The developers in turn are making sure that each component has an interface available so that they can plug their components into the tester and test them. Not only can the developers test their own code with confidence but so can their peers without having to look at code written by someone else.
So far it seems as if the effort being invested into the dedicated testing environment is paying off...
I think you're on to something... (Score:2)
I have some misgivings about XP, having had a very, very bad experience with it in the past. But I do think there is something there... however, perhaps a lot of what makes XP work is that simply by writing tests first, they cannot help but write implementations that lend themselves to testing as you have noted! The resulting code is thus more modular and less prone to error.
I still think that XP presents a lot of overhead that might not be necessary given the right people - like instead of having a thousand unit tests that you have to maintain with the code (one of the nightmares encountered), you have well-written modular code with just a few tests.
Extreme Programming -- why not extreme hiring? (Score:1)
Pair programming starts with the HR department when they hire programmers for a project: one should hire 75% experienced in solving that exact or a similar problem, and 25% inexperienced in that area for that specific project.
But most companies, as you may know, would rather save short-term costs and vastly increase future costs than do it right the first time.
Contrast this with open-source OS efforts, where testing is a standard process simply because it guarantees the code will compile right on the very lean, sub-standard machines that this group tends to have access to for their work.
The reason why HP and Microsoft and Cisco went to India to recruit and staff coding centers is because they understand testing just as open source does...
How to change this in the US? Volunteer to be an informed software engineering speaker at your local university; speak to and instruct the new CS students on testing...
But I thought.... (Score:2)
I prefer bug-driven development... (Score:2)
I'm only partly joking. When the requirements are vague, this actually works.
Design & requirements (Score:1, Interesting)