
Interview With Martin Fowler

Arjen writes "Artima has had a conversation with Martin Fowler, one of the gurus of software development today. It consists of six parts. Parts one, two, three, and four are available now; the rest will appear over the next few weeks."
  • Telling Point! (Score:4, Insightful)

    by Cap'n Canuck ( 622106 ) on Thursday November 28, 2002 @05:57PM (#4776976)
    I read the first part of the interview, and there was a point that Bill made that struck me as very profound. He said that refactoring can cause severe differences between a stream that has been refactored and one that has not. I think that there has to be a limit on refactoring, especially once a codebase gets beyond a certain number of iterations (releases). For a Configuration Management person, or for CM software, refactoring can quickly turn into a nightmare.

    Just my $.02
  • by Textbook Error ( 590676 ) on Thursday November 28, 2002 @06:02PM (#4776987)
    many middle-tier and upper-level managers think that such concepts are useless and timewasting.

    Their problem is, it's very difficult for them to tell if refactoring was actually worth it - their developers start with a program that does A, they stop working on new features for the next release for 3 months, and they come back with a program that still does A ("but better!").

    As a developer you can explain how cleaning up code is preventative maintenance to make development easier in the future, but if you're not a developer that's a very hard thing to measure.

    Even if you trust your engineers implicitly, any good manager has to realise that developers love intellectual problems over more mundane tasks - and refactoring's biggest problem is that by design you end up looking like nothing happened.
  • by ssclift ( 97988 ) on Thursday November 28, 2002 @06:04PM (#4776989)

    What bothers me is that the process being plugged here doesn't address methods for keeping all of the expressions of the program consistent. Tests are a (limited) alternate expression of the program's functionality that is trivial to compare to the original. Yes, they need to be good. What about documentation (which I define separately from annotated indexing, such as doxygen or Javadoc produce), which is another, separate expression (usually not in an executable language) of the program?

    Until these authors address how to keep the entire structure, from docs down to code, consistent during a refactoring, I think they are missing an important point. I've made this point before on Slashdot, but the only good way I've found for ensuring that non-executable forms of the program are consistent with executable forms is formal Software Inspection (see Gilb's stuff at Result Planning [result-planning.com]). I've found that the more versions of the code there are (requirements, user docs, code itself, tests, etc.) that are kept truly consistent, the more likely it is you will not make mistakes in the final expression (the code you ship).

    The refactoring process can be even less "like walking a tightrope without a net"; your net is a web built out of the relationships between all of these documents, not just the code and tests!

  • by Ars-Fartsica ( 166957 ) on Thursday November 28, 2002 @06:21PM (#4777044)
    Most refactoring advocates seem to make two key assumptions:

    Your code will be needed for a long time

    Your requirements will not change drastically

    Even if they don't state these explicitly, they are assumed in the premise of refactoring. If code did not have a long productive life ahead of it, you wouldn't waste your time continually replumbing it. If requirements did in fact change rapidly, there would not be time to revisit completed working code.

    In my experience most code lives for about four years and then dies. During that time the requirements often shift by at least 25%. Given these observations, refactoring appears to be a waste of time.

  • Automated Testing (Score:4, Insightful)

    by DoctorPhish ( 626559 ) on Thursday November 28, 2002 @06:38PM (#4777105) Homepage
    I agree with the author on the subject of writing a comprehensive test suite as you code, but I've found that in applications that need to process a significant variety of real-world data in large volumes, your mock data will be far from sufficient.
    Often, the only real way to get good data for your tests is to have the software used in the field, and then use some of the more complex cases to test against. Corner cases are also a problem, especially if you are relying on your tests to be comprehensive and to verify the correctness of your code. Tests are certainly valuable, but are by no stretch infallible. I've found that you don't get any really useful data until around the second revision of the code (assuming fairly wide use and access to end-user data). Sure, running tests against some custom widget might be pretty reliable, but once you run up against stuff that is inherently hairy, you need data that is representative of real-world usage patterns before you can be sure that changes you make won't break it out in the field.
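
    The pattern that works for me: when a record breaks in the field, it gets captured verbatim as a regression test. This is only a hypothetical sketch in JUnit 3 style; AddressParser, Address, and the sample record are all invented names, not any real library.

        import junit.framework.TestCase;

        // Hypothetical regression test built from a captured production
        // record rather than hand-made mock data.
        public class AddressParserRegressionTest extends TestCase {

            // Copied verbatim from a production log; our mock data never
            // included a unit number embedded in the street field.
            private static final String FIELD_CASE =
                "120-B Main St,,Springfield  62704";

            public void testParsesMessyRealWorldRecord() {
                Address a = AddressParser.parse(FIELD_CASE);
                assertEquals("62704", a.getZip());
                assertEquals("Main St", a.getStreet());
            }
        }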
  • by Dog and Pony ( 521538 ) on Thursday November 28, 2002 @06:57PM (#4777168)
    That is why refactoring by itself is no silver bullet, and no one is saying that it is. At least, no one that has any insight. It is, however, a great tool for meeting changed requirements.

    You have to Embrace Change [c2.com], and in that refactoring will really, really help you a lot.

    If requirements change so much it is not the same program anymore, well, then I'd not say that requirements have changed in the project. It is a whole new project, right? And then, of course, no rewrite will help you.

    But if they change a lot within the same functionality, you use refactoring to get to the new goal without breaking anything, and because you have been refactoring out good interfaces as you went, the changes are easier to implement. You do not code for two months, then refactor for two, then code again, and so on. You do both all the time.

    I think you are missing the point here.
  • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday November 28, 2002 @07:22PM (#4777280) Homepage
    I think the point of refactoring is that it happens exactly when you add a new requirement. You have some code that does X. Now you have a requirement that it do Y as well. There is a clean way to add feature Y, but it would require restructuring the existing code first. So as a first step you refactor the code so it still does X - 'but better!' - and then you can add code to do Y more easily. Doing the two steps separately - the XP rule "Do not refactor and add features at the same time" - is probably less risky than doing both at once.
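
    A minimal sketch of that two-step rule, with invented names (Order, RateScheme, and the rates are all hypothetical):

        // Hypothetical example. Before: feature X (flat-rate shipping)
        // is hard-coded into one method.
        double shippingCost(Order order) {
            return order.weightKg() * 4.50;
        }

        // Step 1, refactor only: hide the rate behind an interface.
        // The program still does exactly X - 'but better!'
        interface RateScheme {
            double costFor(Order order);
        }

        class FlatRate implements RateScheme {
            public double costFor(Order order) {
                return order.weightKg() * 4.50;
            }
        }

        // Step 2, add feature Y as a second implementation; no
        // existing code had to change while the feature went in.
        class ExpressRate implements RateScheme {
            public double costFor(Order order) {
                return 12.00 + order.weightKg() * 6.25;
            }
        }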
  • by vbweenie ( 587927 ) <dominic,fox1&ntlworld,com> on Thursday November 28, 2002 @07:44PM (#4777366) Homepage

    Refactoring is also not just something you do (or not) after you've finished (i.e. shipped / deployed). It's something you do as you go along, during the process of building the thing. If you need to refactor substantially after you've finished, then it may be that you didn't refactor enough as you went along.

    Refactoring is a fancy word for what many good programmers who are not geniuses do already, a few good programmers who are geniuses don't need to do, and a lot of bad programmers who may or may not be geniuses don't do enough. Speaking as an OK programmer who is not a genius, I feel I need all the help I can get, be it from test-infection or from taking a considered approach to cleaning up after myself.

    Periodic refactoring helps me keep abreast of shifting requirements; it isn't about prettying-up something that's about to become obsolete anyhow, but about keeping a check on creeping duplication and tacit dependencies so that the code can absorb new requirements instead of being broken by them.

  • by _wintermute ( 90672 ) on Thursday November 28, 2002 @10:16PM (#4777837) Homepage
    ~sigh~

    Some of you people simply have NO idea how code works in the real world, I am sure of it. Hacking Perl scripts is quite unlike developing the large OO software that drives most information systems.

    One of the fundamental issues with software architecture is that, more often than not, architecture is emergent. 'Build one to throw away' is an old, old adage (I believe it was Brooks who originally declared this) and it neatly summarises the key problem with developing software architecture.

    "Even when one has the time and inclination to take architectural concerns into account, one's experience, or lack thereof, with the domain can limit the degree of architectural sophistication that can be brought to a system, particularly early in its evolution." From the Big Ball of Mud (link below).

    We design, we develop, we learn, and then we design again ... the secondary design phase can be called refactoring. There are a number of refactoring patterns (I recommend the 'Refactoring' book) and some of the coolest Java IDEs support refactoring (check IDEA and Eclipse) - you can do things like move methods up the object hierarchy into base/parent/super classes, extract methods from bodies of code, change names, merge classes, etc. These features let the savvy developer leverage the emergent aspects of design. Driven by time/cost/deadlines, we often do the thing that works rather than the thing that works elegantly. Refactoring lets us recapture some of the elegance of our code. Coupled with test-first methods, we have an incredibly powerful system.
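
    For instance, Extract Method - the bread-and-butter refactoring those IDEs automate - might look like this (a hypothetical sketch in pre-generics Java; Invoice and LineItem are invented names, and java.util.Iterator is assumed imported):

        // Hypothetical 'before': the totalling loop is buried inside
        // the printing method.
        void printInvoice(Invoice inv) {
            double total = 0;
            for (Iterator it = inv.getLines().iterator(); it.hasNext();) {
                total += ((LineItem) it.next()).amount();
            }
            System.out.println("Invoice " + inv.getNumber());
            System.out.println("Total: " + total);
        }

        // After Extract Method: the loop gets a name of its own, a
        // change the IDE can make mechanically without altering
        // behaviour.
        void printInvoice(Invoice inv) {
            System.out.println("Invoice " + inv.getNumber());
            System.out.println("Total: " + totalOf(inv));
        }

        double totalOf(Invoice inv) {
            double total = 0;
            for (Iterator it = inv.getLines().iterator(); it.hasNext();) {
                total += ((LineItem) it.next()).amount();
            }
            return total;
        }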

    Pretty much ALL modern software lifecycle models are iterative, simply because up front design does not work. The waterfall model is a naive concept of a bygone era.

    Refactoring is therefore a crucial aspect of an efficient design process. Typically, I would suggest that refactoring occurs at the end or beginning of each iteration ... our refactoring is informed by the evolution of the software - we don't refactor for fun, we clean up code SPECIFICALLY to target aspects of the product we know will change.

    To see refactoring in action, join an Open Source project. Most OS teams that I have witnessed in action will employ refactoring of some description, even if they don't call it that. It makes a great deal of sense in OS because we have large, distributed teams working on software; refactoring helps consolidate changes from disparate team members.

    further reading: http://www.laputan.org/mud/
  • by Anonymous Brave Guy ( 457657 ) on Thursday November 28, 2002 @11:14PM (#4778002)
    I've found that the more versions of the code there are (requirements, user docs, code itself, tests etc.) that are kept truly consistent, the more likely it is you will not make mistakes in the final expression (the code you ship).

    I've found that the fewer versions of the code there are at all, the fewer cock-ups happen.

    Most documentation that gets printed out is a waste of time. Make the damn code its own documentation, at least as far as implementation details go. Keep the printed stuff to higher-level overviews of the design -- the big picture -- and to describing the key algorithms and concepts. This way, others on the team or new joiners have somewhere to start and a frame of reference when they check the details (which are described by comments in the code, or indeed the code itself), but you don't waste massive amounts of time writing a silly document that just describes what every method of every class does. We have tools to do that for us; they're called comments. :-)
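
    A hypothetical sketch of what I mean (Money, Customer, and Discount are invented types): the comment records intent and points at the big-picture doc, while the code itself carries the details.

        /**
         * Applies the customer's best single discount to the list price.
         * Discounts do not stack; the business rule is in the pricing
         * overview document.
         */
        // Hypothetical example; Money, Customer, and Discount are
        // invented types, and java.util.Iterator is assumed imported.
        Money bestDiscountedPrice(Customer c, Money listPrice) {
            Money best = listPrice;
            for (Iterator it = c.activeDiscounts().iterator(); it.hasNext();) {
                Money candidate = ((Discount) it.next()).applyTo(listPrice);
                if (candidate.lessThan(best)) {
                    best = candidate;
                }
            }
            return best;
        }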

    (By the way, notice that simple refactoring rarely changes any of these higher-level overviews. If you're doing something significant enough that it does, it probably justifies a quick review of the overall design and a suitable update to the document anyway.)

    Aside from higher level design docs and feature specs, about the only non-code documents that are often justified are requirements specs and acceptance test specs, which should usually be written in the language of your application domain and not in software terms anyway, and which should be completely independent of the code. On some projects, other forms of testing might usefully be documented as a kind of checklist, but often automated unit tests remove most of the need for this.

    So, there you go: I have exactly one version of the code, and it's the code. There might be high-level summaries of features, or some details of the algorithms the code implements, documented as well, but they are somewhat independent. Everything else is for the customer, and totally independent of the code. If you've never tried working this way, and come from a "heavyweight process" environment, try it some time. I'd bet good money you'd prefer it. :-)

  • by miu ( 626917 ) on Friday November 29, 2002 @03:56AM (#4778833) Homepage Journal
    Even if you trust your engineers implicitly, any good manager has to realise that developers love intellectual problems over more mundane tasks - and refactoring's biggest problem is that by design you end up looking like nothing happened.

    Sometimes you can manage management expectations. The next time they complain about the padding you put on your time estimate, explain that the reason it will take so long to add a feature is that the code is brittle. That makes the results of cleanup visible and makes it clear that you aren't just doing science fair stuff, but solving an actual problem.

  • by UncleFluffy ( 164860 ) on Friday November 29, 2002 @05:21AM (#4778961)
    You should always plan for two rounds on any software that you plan to support for any length of time.

    I always plan for three: (1) do it wrong, but understand the problem at a deep level in the process, (2) do it right, (3) do it right, fast, and clean. (1)->(2) is a throw-away and rewrite, (2)->(3) is mostly refactoring-type work, but the term wasn't in use when I started coding so I didn't know what it was until today ;-)

  • by Big Sean O ( 317186 ) on Friday November 29, 2002 @11:05AM (#4779761)
    Martin Fowler is quite pragmatic when he's talking about refactoring. First of all, he doesn't think you should refactor for refactoring's sake.

    The best piece of advice I've read of his is "Three strikes and you refactor". In other words, if you duplicate some code, you hold your nose and live with it. But if you're putting in similar code three times, then you're better off refactoring out the duplication.
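
    A hypothetical sketch of the third strike (the email check and method names are invented):

        // Hypothetical example. Strikes one and two: the same check
        // pasted into two places - hold your nose and live with it.
        void createAccount(String email) {
            if (email == null || email.indexOf('@') < 0)
                throw new IllegalArgumentException("bad email");
            // ...
        }

        void updateAccount(String email) {
            if (email == null || email.indexOf('@') < 0)
                throw new IllegalArgumentException("bad email");
            // ...
        }

        // Strike three: a third caller needs the same check, so the
        // duplication gets refactored out before it spreads further.
        void requireValidEmail(String email) {
            if (email == null || email.indexOf('@') < 0)
                throw new IllegalArgumentException("bad email");
        }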

    The other piece of advice I keep with me when I code is (I'm paraphrasing here) "If you need to add feature A, and your code makes it difficult to add feature A, refactor the code until feature A is trivial to add". A well-factored program makes it easier to add new requirements and often prevents a total rewrite.

    I don't think much of "code for a month, refactor for a month", and I don't think Fowler does either. Most everything I've read is "code for five minutes, refactor for five minutes".
  • by alangmead ( 109702 ) on Friday November 29, 2002 @12:27PM (#4780033)

    You seem to have a somewhat different definition of refactoring than the one Fowler uses in his book on refactoring, in his other writings, and in the interview referenced above.

    First of all, adding OLE or CORBA support would not be refactoring. Fowler described it like this: Refactoring is making changes to a body of code in order to improve its internal structure, without changing its external behavior. [artima.com]

    Secondly, Fowler's book doesn't recommend refactoring for no reason. He lists specific design problems that a developer might see in a body of code they are working on (a method too long, two classes too tightly intertwined, etc.). In his book, he describes refactoring as the flip side of design patterns: design patterns can be used during the design phase to create a good design, while refactoring can be used during the construction phase to arrive at a good design.

    Thirdly, the developer who didn't create a good design initially can use refactoring and come up with something better, because there are catalogs of effective refactorings. [refactoring.com] The recipes that define these refactorings describe how to make the changes efficiently and safely, without disturbing any more of the code than necessary.

    These aspects work together like this: a developer, while coding, finds that some problem is impeding his progress. For example, he discovers that every time he makes a change to one class, he needs to make a corresponding change to another class. He then decides that this fits the description of "Feature Envy", and performs the Move Method refactoring. [refactoring.com]
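
    A sketch of what that might look like (hypothetical names, loosely in the spirit of the catalog's Move Method example):

        // Hypothetical 'before': the method lives in Statement but only
        // touches Account's data - classic Feature Envy.
        class Statement {
            double overdraftCharge(Account acct) {
                if (acct.daysOverdrawn() > 7)
                    return acct.daysOverdrawn() * 1.75;
                return 0.0;
            }
        }

        // After Move Method, the behaviour sits with the data it envied,
        // so a change to Account no longer drags Statement along with it.
        class Account {
            private int daysOverdrawn;

            int daysOverdrawn() { return daysOverdrawn; }

            double overdraftCharge() {
                if (daysOverdrawn > 7)
                    return daysOverdrawn * 1.75;
                return 0.0;
            }
        }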

    Basically, I see refactoring as a software developer's equivalent of building codes. Building contractors don't need to know, or at least calculate out every time, the physics involved in making a structure solid enough to support itself and its contents. Building codes are distilled instructions capturing what the physics calculations would indicate as appropriate action (with a bit of a margin for error). Performing well-known, tested refactorings means using the design tips of people who are much better software designers than you are.
