When Do You Kiss Backwards Compatibility Goodbye?

Arandir asks: "Backwards compatibility is great for users. But it sucks for developers. After a while your normally sensible and readable code becomes a nightmare spaghetti tangle of conditions, macros and multiple reinventions of the wheel. Eventually you have to kiss off backwards compatibility as more trouble than it's worth. The question is, when? Should my code conform to POSIX.1 or Single UNIX 2? Should I expect the user to have an ISO Standard C++ compiler? What about those users with SunOS-4.1.4, Slackware-3.2, and FreeBSD-2.2?" This question is really kind of difficult to answer in the general sense. The best advice one can give, of course, is "when you can get away with it". Not much help, that, but the loss of backwards compatibility, like most complex decisions, depends on a lot of factors. The key factor in most developers' eyes, of course, is the bottom line. Have many of you been faced with this decision? What logic did you use to come to your decision, and what suggestions do you have for others who might find themselves in this predicament?
  • by Axe ( 11122 ) on Sunday September 09, 2001 @01:43AM (#2269980)
    Ironically, I am doing it right now. The good part is that it's Saturday and the other developers don't know. Or they would lynch me..
  • by el_mex ( 175423 ) on Sunday September 09, 2001 @01:48AM (#2269988)
    You should know best who your users are.


    You should know who they are, what equipment they have, who is doing whom a favor (i.e., who has to adapt to whom), and especially you should know what they want (such as how much backwards compatibility).

  • At least if you are trying to support an Open Source solution, you have a chance of going back and fixing the old application to be conformant with new APIs, etc. If you're running an old binary, you have no choice but to provide the old APIs.

    Hail Open Source, Death to binary only!
    • Did that make sense to you before you pressed the submit button? It sure seems pretty silly now. You're suggesting updating an old version to new standards. That is what the entire concept of versioning is based around! You cap old versions and start anew for the entire purpose of keeping your applications up to date.
  • OO design (Score:1, Insightful)

    A properly designed OO system should alleviate all those backwards compatibility issues. And yes, in spite of all the /bots who complain about it, Java sure solves a lot of those OS/hardware compatibility problems...
    • by hedley ( 8715 )
      I am sure all of these Linux JDKs will work fine. Can't be any compatibility issues here, right? Besides, anyone got a good VHDL sim in Java? Also, a good finite element analysis program would be great.

      JDK-1.0.2/ Wed May 6 00:00:00 1998 Directory
      JDK-1.1.1/ Wed May 6 00:00:00 1998 Directory
      JDK-1.1.2/ Tue May 5 00:00:00 1998 Directory
      JDK-1.1.3/ Wed May 6 00:00:00 1998 Directory
      JDK-1.1.5/ Tue May 5 00:00:00 1998 Directory
      JDK-1.1.6/ Thu Oct 8 00:00:00 1998 Directory
      JDK-1.1.7/ Sun Jun 13 00:00:00 1999 Directory
      JDK-1.1.8/ Tue Aug 8 00:00:00 2000 Directory
      JDK-1.2.2/ Thu Oct 19 00:00:00 2000 Directory
      JDK-1.2/ Wed Aug 11 00:00:00 1999 Directory
      JDK-1.3.0/ Wed Jun 13 18:05:00 2001 Directory
      JDK-1.3.1/ Thu Jul 5 18:05:00 2001 Directory

      Hedley
    • Re:OO design (Score:1, Insightful)

      by Anonymous Coward
      "A properly designed OO system should alleviate all those backward's compatibility issues."

      Hardly, since these issues have *nothing* to do with methodology as such. A well-designed structured system will be more amenable to being backwards compatible than a poor OO design.

      If the design of the external interfaces (file formats, network protocols, etc) allows for future features, specifies 0/blanks for reserved fields, etc, then it's much easier to be backwards-compatible. This has nothing to do with OO.
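
      For a concrete (if contrived) illustration, here's a minimal sketch of what "leave room for future features" can look like in a file header; the struct and field names are invented for this example, not taken from any real format:

      #include <cstdint>
      #include <cstring>

      // On-disk header with an explicit version and reserved space.
      // Old readers skip fields they don't know about; new readers treat
      // zeroed reserved bytes as "feature not present".
      struct FileHeader {
          char     magic[4];      // identifies the format
          uint16_t version;       // bump when the semantics change
          uint16_t header_size;   // lets old readers skip an unknown tail
          uint32_t flags;         // a clear bit means "not used yet"
          uint8_t  reserved[16];  // must be written as zeros today
      };

      // Writer zeroes everything first so the reserved fields stay blank.
      FileHeader make_header() {
          FileHeader h;
          std::memset(&h, 0, sizeof h);
          std::memcpy(h.magic, "MYFT", 4);
          h.version = 1;
          h.header_size = sizeof h;
          return h;
      }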

      And if you're talking Java or C++, each of those languages has ancestral versions which make more recent versions have interoperability fits (old vs new GUI event model, older name mangling/namespaces).
    • Re:OO design (Score:4, Insightful)

      by PrismaticBooger ( 103265 ) on Sunday September 09, 2001 @02:20AM (#2270062) Homepage

      It takes a lot more than OO design to solve this problem. The fact is that initial designs are almost always naive, and lack the specific flexibilities (and inflexibilities) the Ideal Solution would require. So, very often, are second iterations. But the third time's usually the charm.

      The need to depart from backward compatibility is the result of correcting design flaws. And design flaws happen, regardless of your programming methodology. Design flaws are less a product of programming methodology, and more a product of not completely knowing the problem.

      Java accommodates design flaws a little bit better not by being object-oriented, but by relying so heavily on dynamic binding. While this can make recovering from Bad Design simpler, it only lets you create bandaids that do nothing to fix the original design problems. And all that dynamic binding costs Java in performance across the board.

      • Re:OO design (Score:3, Insightful)

        by robbyjo ( 315601 )

        Java accommodates design flaws a little bit better not by being object-oriented, but by relying so heavily on dynamic binding.

        FYI, dynamic binding is a crucial element of OO programming. Programmers wouldn't be able to downcast without it. It does indeed create a lot of slowdown, but researchers have done flow analysis work to eliminate much of it.

        Concerning the design: Yes, you're right. Nobody starts with a "perfect" design. We extend it, twist it and twirl it until it becomes such a huge mess that we have to scrap the whole thing and rethink it.

        But design flaws can be minimized. That depends less on programming methodology than on design methodology -- which is pretty hotly debated by now. Using models (e.g. UML) to depict a huge project may be worthwhile because we get a quick overview of what we are up to. Most of the time we can locate the design mistakes pretty quickly.

        Design flaws can also be minimized by documenting the specification (which is pretty much a "meta-programming" approach). Sadly, nobody wants to do this. If you can get it done, there are a lot of tools (albeit still in research) that can automatically check whether your program conforms to the spec.

    • Aaaaaahhhhbulshitchoooo!

      I like OO as much as the next guy (more, actually), but this is untrue. Since you bring it up, let's use Java as an example: do you use Swing or any of the Java 1.2 APIs? If so, you're not compatible with 1.1, so you have to make sure all your customers upgrade to the right version.

      For projects where you have 100 users that's not a big deal, but once you have millions it becomes a huge deal. It becomes a bigger deal when you're thinking about using system APIs not available on older versions of the OS.

      Usually the solution is to dynamically use the library and only offer the feature on newer platforms (LoadLibrary/GetProcAddress on Win32), but after a certain point that leads to the mess the author was talking about.
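
      A minimal sketch of that pattern (the DLL name and exported function here are placeholders for illustration, not a real API):

      #include <windows.h>

      typedef BOOL (WINAPI *NewFeatureFn)(int);

      void maybe_use_new_feature(int arg) {
          // Load the library at run time instead of linking against it,
          // so the program still starts on older Windows versions.
          HMODULE lib = LoadLibraryA("newapi.dll");              // placeholder DLL
          if (!lib) return;                                      // old platform: skip the feature
          NewFeatureFn fn =
              (NewFeatureFn)GetProcAddress(lib, "NewFeature");   // placeholder export
          if (fn) fn(arg);                                       // only call it if it exists
          FreeLibrary(lib);
      }
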
    • Re:OO design (Score:3, Informative)

      by jon_c ( 100593 )
      OO design is a pretty vague term, as it means different things to different people. However one concept that is proven to work for backwards compatibility is component design as found in COM and other component based frameworks.

      For those who are not familiar with the concept: you don't deal with objects per se; instead you deal with interfaces to implementations of objects.

      For example

      ICat *pCat = NULL;
      pCat = MakeCat(IID_ICAT);
      pCat->Meow();
      delete pCat;

      COM does it a little differently but the basics are there. You request an implementation to an interface, not the object itself. The way this works for different versions is that instead of IID_ICAT you can have IID_ICAT2 and ICat2 interfaces without having to break your old ICat clients. The implementation could even share much of the old code.

      For example:

      ICat2 *pCat = NULL;
      pCat = MakeCat(IID_ICAT2);
      pCat->MeowLoudly();
      delete pCat;

      Admittedly it's not the most elegant design, but it works in the sense that you're not breaking old clients and still have room to support new interfaces.

      -Jon
      • COM does it a little differently but the basics are there. You request an implementation to an interface, not the object itself. The way this works for different versions is that instead of IID_ICAT you can have IID_ICAT2 and ICat2 interfaces without having to break your old ICat clients. The implementation could even share much of the old code [...] Admittedly it's not the most elegant design, but it works in the sense that you're not breaking old clients and still have room to support new interfaces.

        But there's a much better way to do that: versioned libraries as in Unix. With library versions you don't have to insert all that compatibility guck inline in your code, and the library functions don't need the overhead of having to decode it. It's all done at link time. It doesn't make a bit of sense to pass the version information with every call, it should be constant across your whole application. Hmm, you could conclude that Microsoft designers knew what they wanted to accomplish but didn't know how to accomplish it, oh well.
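
        As a rough sketch of how that works on an ELF system (the library name is invented): the version lives in the soname recorded into the binary at link time, not in any per-call version argument.

        // libcat.cpp -- built as, for example:
        //   g++ -shared -fPIC -Wl,-soname,libcat.so.1 -o libcat.so.1.0 libcat.cpp
        // An incompatible rewrite later ships with soname libcat.so.2; programs
        // linked against the old one keep loading libcat.so.1, with no version
        // checks anywhere in the application code.
        extern "C" int cat_meow(void) { return 0; }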

        And yes, you can get problems with versioned libraries too, mostly because people may not have all the libraries they need. But that problem is solved definitively by apt-get, and soon apt-get will be standard for rpm's as well. As soon as I started using apt-get, all my library problems just vanished.

        No doubt Microsoft would like to pull off something like that, but they never will, because the whole apt-get packaging model depends on continuous feedback from informed users with source code on hand -- and by that I don't mean going through two months of negotiation to see the piece of source code you need, then starting the whole process over again an hour later when you hit the next problem.

    • A properly designed OO system should alleviate all those backward's compatibility issues.

      A properly designed and maintained OO system will alleviate some of the backward compatibility issues. The thing is, most OO systems are not well maintained, even if they were reasonably well designed originally. Most developers, in my experience, are far too reluctant to refactor properly.

      This generally boils down to one repeated mistake. When changed requirements suggest a change in the responsibilities of your classes and/or the relationships between them, too many developers try to change the implementation or interface of classes in isolation instead.

      Unfortunately, in the long run, no amount of interface hackery can make up for an inappropriate distribution of responsibilities. It's just fudging the design, and the one thing you don't want to do with an OO design is mess it up with fudges.

      The reason you get to this "it's got no chance, let's start from scratch" syndrome is that too many fudges were allowed to accumulate. Instead of refactoring as they go along, keeping a design with clear responsibilities and relationships, programmers try to shove things in where they don't really belong. Once or twice, you can get away with it, but it doesn't scale, and ultimately leads to an unmaintainable mess.

      Of course, the problem is that adopting the "I'll just fix it here" mentality is, in the immediate timeframe, quick and cheap. In the long run, it's much more expensive -- a rewrite from the top will do horrible things to your budget -- but people rarely consider that far ahead.

      Such is the curse of the OO paradigm, where the benefits are only to be had by those who think: most people don't.

  • Even if you are writing a minuscule app for your own use, that could not conceivably have any use for anyone else, you should always adhere to the following rules:

    • Use autoconf and automake. I don't care if there's nothing to configure. Do it.
    • Try and place as much code into libraries as possible. Modularization is good. Use libtool religiously
    • Try and test your code on as many Unixes as possible. For this reason, I have every BSD and Linux kernel major version on my machine. You can't assume that someone who wants to use your program on a platform you didn't test for will be willing to fix your code and send you the patches. Open Source just doesn't work that way
    • Forget the above rules and use java.
    • by Anonymous Coward
      "... Forget the above rules and use java."

      1.0, 1.1 or 1.2? IBM or Sun or Blackdown? AWT or Swing?
    • So where does one get a box full of all these unixes? I do agree with you on that part about broad testing. But I can't afford to buy all those boxes (especially not an IBM zSeries). And finding shell accounts (especially root ones) for various testing seems to not go beyond Linux, FreeBSD, and a handful of OpenBSD and Solaris. Even the IBM mainframe accounts are available to only a few people, and then for a limited time (have you ever known an open source project to take 3 months and stop development then because it's "done"?). I do have Linux, FreeBSD, and OpenBSD on Intel, and Linux, OpenBSD, and Solaris on Sparc. What else would you suggest?

      As for Java ... I'm waiting until environments are built that can do what I do now in C. I'm hoping gcj [gnu.org] will let me do at least some of these things in Java. But there are some things I doubt it can ever do. You can prove me wrong by rewriting LILO and init into Java.

      • Well, you do have a problem there. IMO, the best thing you can hope for is that people like me will find your program useful at work, in which case I'll take the code and do whatever porting is necessary to get it running on HP-UX, AIX and perhaps Tru64, and submit it back to you.

        Ask the people behind the software porting archives for the various platforms, they might be able to hook you up with either hardware or people. Or ask for porting/testing help on newsgroups or mailing lists. There are a lot of people out there with loads of junk hardware. You could try calling the companies in question, and maybe they can put you in touch with someone inside who might be able to arrange either access or testing, but finding the right person would be difficult. You'd probably have better luck finding someone who works for the companies by going over opensource development mailing lists. Oh, and VA/Sourceforge has a compilefarm with several platforms too.

        Of course, those options are only valid if you're doing opensource work and your application is both interesting and reasonably well written. Do use autoconf and automake, it makes porting a bit less painful.

        If you do proprietary software... well, good luck. There's a reason that software is more or less supported on the various platforms...
      • I think your problems partly answer the question. You need to make sure that your software runs on your users platforms. If you can't find shell accounts on a given OS, it's probably because your users aren't using it. If your users are using a given OS, but can't/won't give you a shell account, let them test/port to that OS.


        I'd think that if you've got big endian/little endian platforms, and SysV/BSD, that's a pretty good start.


        Did I mention, use autoconf, and it will make porting to anything else much easier ...

      • > You can prove me wrong by rewriting LILO and init into Java.

        The microcode in your CPU isn't written in C, I guess C isn't good for anything either. Anyway, my bootloader's written mostly in forth, which does specify a VM.
    • Try and place as much code into libraries as possible. Modularization is good. Use libtool religiously


      Personally, I *hate* libraries. Then again, I'm running RedHat on x86 using "standard" apps.

      My problem is that I *never* have whatever library the program I'm trying to install requires, and all too often, I have no idea where I might *find* the library. And I've had libraries that can't be installed due to other library dependencies... *grrr*

      I guess my problem isn't with the actual use of libraries, but with their implementations. Why don't developers offer a version of their packages *with* every library you may need? Personally, I'd be soooo much happier if I could just run a script to install all the libraries.

      Of course, then you have to worry about how well that script works, and if it'll overwrite my libraries with older / non-working libraries. But you get the idea... :)

      • My problem is that I *never* have whatever library the program I'm trying to install requires, and all too often, I have no idea where I might *find* the library.


        www.rpmfind.net. You'll also find that a lot of the problems are caused by binary incompatibility. Binary compatibility problems don't have any easy solution.


        Why don't developers offer a version of their packages *with* every library you may need?


        Because such a package would be enormous.
        A better approach would be to just link to the required libraries.




        Of course, then you have to worry about how well that script works, and if it'll overwrite my libraries with older / non-working libraries


        Again, binary compatibility. If this is a big problem, just compile from the source. Or get versions of the library that were compiled against your distribution.

      • Personally, I *hate* libraries. Then again, I'm running RedHat on x86 using "standard" apps.

        My problem is that I *never* have whatever library the program I'm trying to install requires, and all too often, I have no idea where I might *find* the library. And I've had libraries that can't be installed due to other library dependencies... *grrr*

        I guess my problem isn't with the actual use of libraries, but with their implementations. Why don't developers offer a version of their packages *with* every library you may need? Personally, I'd be soooo much happier if I could just run a script to install all the libraries.

        Of course, then you have to worry about how well that script works, and if it'll overwrite my libraries with older / non-working libraries. But you get the idea... :)

        Here you're getting into the sort of territory that is usually inhabited by mostly proprietary, paid-for software. With very few exceptions I've found that paid-license software will have the extra touches like libraries included with the install media (and usually an installer clever enough to only install a library if you need it, and which won't break your system by overwriting a later version of the same library), or at least the URLs of where to get the libraries or patches to bring your system to the required level to run the software. I rarely see a free/open source package that has that. You do get what you pay for, usually.

        My interpretation of the post was that the coder should put as much functionality as possible into libraries or functions rather than the main executable(s), to aid modularisation and (probably) improve code sharing amongst modules of the app, rather than relying on external libraries over which they have no control. Those libraries I would expect to definitely be in the install media.

        My thoughts on backwards compatibility are to try to avoid changes to things like file formats and APIs unless absolutely necessary. With file formats, try to support as many older versions as you can, ideally for both save and load, but as a compromise you could make it load all previous versions but only save the current and last two versions, or something like that. With APIs, if the way a call is made changes, where possible provide a wrapper/interpreter for the old API call that translates it to and from your new functionality. Anything that changes file formats or API calls in a non-transparent way increments the major version number. Provide a migration path and utilities to aid migration.
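
        A rough sketch of the wrapper idea (all the names here are made up for illustration): the old entry point stays, but it just translates into the new call.

        // New API: takes an options struct so it can grow again without breaking.
        struct SaveOptions { int format_version; bool compress; };

        int save_document_v2(const char *path, const SaveOptions &opts) {
            // ... writes the new format; stubbed out for the sketch ...
            (void)path; (void)opts;
            return 0;
        }

        // Old API kept as a thin wrapper so existing callers still compile and
        // link, while new code moves over to save_document_v2().
        int save_document(const char *path) {
            SaveOptions opts;
            opts.format_version = 1;   // behave exactly like the old call did
            opts.compress = false;
            return save_document_v2(path, opts);
        }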

        Also document, document, document. I'm deathly serious. Document everything. When you make a change to the source code, put in a comment saying what you did, why you did it and how to undo it (e.g. if you make a change to a line, copy the line, comment out the original and change the copy -- although this is really only advisable for a few single-line changes; anything bigger and you'd probably want to use something like RCS or SCCS to maintain your change history, which is another advantage of modularising your code, as the number of changes per file will be small and you only have to roll back and reimplement a small part of your code to reverse a change). It makes it a heck of a lot easier to maintain your code later. Document all changes in the Release Notes, Migration Guide, Installation documentation and the User Guide; odds are that users will read at least one of these, and if they don't you can at least tell them to RTFM with some moral justification.

  • Bah, make the users conform to the developers, not the other way around!
  • Abandoning backwards compatibility is often a controversial action that needs to be carefully considered beforehand. Often, the best course of action is to consult with the Licensing Dept. They've been making great progress in reducing the time it takes to reword licensing so that it is illegal for end users to attempt to use older hardware and/or software; often they can provide a solution in less than 3 months. This provides the Legal Dept. with a steady stream of secondary revenue when they audit, and sometimes sue, those users who call support hotlines. A win-win situation for all of us!
  • by moebius_4d ( 26199 ) on Sunday September 09, 2001 @01:55AM (#2270013) Journal
    The two things I would say are, when you really reach the point where all the old crap is really clogging up the veins, fix it all at once. Make a clean break. Then people can at least keep in mind what is happening, what works with 2.x and what is still only for 1.x.

    The other thing is, try to design to keep this from happening. Expose APIs that don't need to change much instead of the actual functions or objects that you use. One more level of indirection won't kill your performance in almost every case, but it will give you a whole lot more room to re-engineer when you decide you have to.
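
    One hedged sketch of that kind of indirection (the names are invented): expose an opaque handle plus a few functions, keep the real object private, and the internals can be re-engineered later without touching callers.

    #include <string>

    // Public header: nothing about the implementation leaks out.
    struct Session;                              // opaque to users
    Session *session_open(const char *host);
    int      session_send(Session *s, const char *msg);
    void     session_close(Session *s);

    // Private implementation: free to change completely between versions.
    struct Session {
        std::string host;
        // ... sockets, buffers, whatever the current version needs ...
    };
    Session *session_open(const char *host) {
        Session *s = new Session;
        s->host = host;
        return s;
    }
    int session_send(Session *s, const char *msg) {
        (void)s; (void)msg;                      // stubbed out for the sketch
        return 0;
    }
    void session_close(Session *s) { delete s; }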

    All that applies to the case where you control the interface and you need to change it. When you're publishing source code and want to decide what tools you can expect the user to have to make use of it, that's a marketing decision and not a technical one. You're talking about how many people will be excluded from your audience if you use GTK or assume a conformant C++ compiler. Technically the newer tools and libs are generally better, that's pretty clear. I think it's going to be a judgement call on the part of the developers as to how much they care about a lot of people being able to use their code. If they are willing to wait for the world to catch up before being able to use their program, then they can use the latest and greatest. If not, then they have to aim at a realistic profile.
    • The two things I would say are, when you really reach the point where all the old crap is really clogging up the veins, fix it all at once. Make a clean break. Then people can at least keep in mind what is happening, what works with 2.x and what is still only for 1.x.

      So what you're saying is, Microsoft should've given up on backward compatibility a long time ago? :-P

  • Apple's plan (Score:3, Interesting)

    by mr100percent ( 57156 ) on Sunday September 09, 2001 @01:57AM (#2270017) Homepage Journal
    I think I like Apple's approach to backwards compatibility, but it's not about making one app backwards compatible.

    For instance, when they migrated to the PPC architecture, they made apps that ran on both platforms and older OSes, then capped development and froze the older version at whatever version number it had reached, and developed only for the newer machines from then on. The rest of the apps followed: AOL 2.7 was AOL's last 68k release, and they have developed only for PPC since then, although apps like Opera still make a 68k version alongside a PPC version.

    Of course, you need to know your users. Will they be satisfied with a frozen, version-capped release, provided there are no bugs?

    • Re:Apple's plan (Score:5, Insightful)

      by darkonc ( 47285 ) <stephen_samuel@b ... m ['n.c' in gap]> on Sunday September 09, 2001 @02:34AM (#2270084) Homepage Journal
      Apple was very conscious of the idea of backwards compatibility. More important than being able to run new apps on old boxes was the fact that most of the code that ran on a 128K, 8MHz Mac can still run on a 2GB, 800MHz PowerPC. Your investment in older software is not generally obsoleted by the new OS.

      Often, backwards compatibility problems can be avoided by careful design. Leave room for improvements. Designate certain structures as ignorable. Presume that the current incarnation of the code is not the final version.

      Design for elegance. If the current code is relatively clean, then chances are it will be easier to tack on an addition later on. Include stubs for improvements that you can foresee adding later on -- even if you can't perceive the exact form of the improvement at the time. When you tack on the addition, try to do that elegantly too.

      With languages, you can sometimes avoid backwards compatibility problems by not using the latest and greatest features just because they're there. (It also allows you to avoid creeping-featurism growing pains.)

      If using a new feature makes a big difference in the implementation of a solution, then use it, but at least document it. It keeps you more conscious of the break, and makes life easier on the people who have to rip out your code and re-implement it on the older system that you thought nobody was using.

      Anecdote: A friend of mine recently found out that the security system where he has a storage locker is run by an Apple IIc. The box is doing a fine job of what it was designed to do 20 years ago. Just because it's old doesn't mean it won't work.

  • That's a rough rule at best - and one I often see flexed, and often flex myself - but two versions of any critical dependency is often the accepted standard I see. In general it seems that for mainstream applications it is reasonable to expect the user to be within two versions of the current latest version of whatever you're working on. It is reasonable to expect someone to be running at least - say - RedHat 5.x, or Netscape 3 (IE 4 if you have the choice). Much beyond that and it becomes an unreasonable expectation that the latest version of the software will work on their system. A hardware analogy is that at some point you just have to say, "I'm sorry 486SX_User_01, but this just isn't going to run on your system (at any decent speed)"

    For dependencies like the compiler listed in the question, if it was available in said two-previous-versions of the distribution, it is a reasonable expectation.

    Just my 46 Lira.
  • Possible Compromise (Score:5, Informative)

    by robbyjo ( 315601 ) on Sunday September 09, 2001 @02:03AM (#2270026) Homepage

    I think there are several approaches you can choose from.

    If you think your program/library has been widely adopted, it will be very, very hard to scrap the old one and start anew. You will provoke the wrath of the other developers who use your program/library. If this is the case, then the possible compromises are:

    1. Overhaul the inner workings of your API, but retain the signatures (and the semantics), so developers can treat it just like the old one. This only allows a limited reform, though.
    2. The second choice is to extend the API. There are a number of options here. If your extension amounts to adding several parameters to a method/function, you'd probably want to add a macro to deal with the compatibility issue (see the sketch after this list). Another way is to leave the old API as it is and make a new function (BTW, Windows does this), and then tell your users to use the new API instead.
    3. If your API is not used by a lot of people, you'd probably want to revamp the whole thing. You'd have to tell people why, and how to convert their programs to your new convention. Note that in doing this, you should carefully design your API so that you won't need another revamp (users will hate you if you do it again).
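
    As a rough illustration of option 2 (all names invented), the macro trick keeps old call sites compiling while the real function gains a parameter:

    // The function that actually does the work now takes a 'flags' argument.
    int frob_widget_ex(int id, unsigned flags) {
        (void)flags;               // stubbed out for the sketch
        return id;
    }

    // Old one-argument name kept as a macro, so existing calls still compile:
    #define frob_widget(id) frob_widget_ex((id), 0)

    // ...or, in C++, a default argument on the new function does the same job:
    //   int frob_widget(int id, unsigned flags = 0);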

    To do revamping, you'd probably want to look at how Windows COM approaches it. I hate to say it, but this approach is generally good, and I don't know whether I could come up with a better solution.

    Or alternatively take an OOP approach. OOP can help modularize your code if it is done properly. (That's why KDE rocks.)

    That's my 2c, though.

  • Small app story (Score:3, Interesting)

    by IceFox ( 18179 ) on Sunday September 09, 2001 @02:04AM (#2270028) Homepage
    This is the story of my app, Kinkatta ( http://kinaktta.sourceforge.net/ ). It originally was a Qt-only app and only recently did it move over to utilizing KDE. But that in itself isn't exactly what this is about, so I will talk about the more specific case. From 0.25 to 0.91 today we have gone through, I believe, four different formats in which we save our settings, buddy lists and so forth. In each of the changes we had to add some code to convert it over. The best solution we found is to know our users and make sure they know about us. What I mean by that is that we only support the previous settings format and make sure everyone uses it. Kinkatta's users are kept informed when a new release is out through a number of ways. We then keep that format for as long as it takes for us to be sure that 99% of the users are using it. One of the nice things incorporated into Kinkatta is the auto-check feature. On login it will go to the webpage, see if there is a new version, and if there is it will tell the user. This prompts them to stay up with new releases much more than if the feature was not there (and yes, you can disable it). Due to the GPL/LGPL nature of the app, people will upgrade more often and are unlikely to stay with version 0.64.1. This is a true plus point for open source. Because of it we haven't had to worry about users who don't want to pay the $29.95 for the new version.
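
    A hedged sketch of that settings-conversion idea (not Kinkatta's actual code; the names and format details are invented): tag the file with a version and migrate one step at a time, so each release only needs to understand the format directly before it.

    #include <string>

    struct Settings { int version; std::string data; };

    // Each function only knows how to go up one step.
    void migrate_1_to_2(Settings &s) { /* rewrite fields */ s.version = 2; }
    void migrate_2_to_3(Settings &s) { /* rewrite fields */ s.version = 3; }

    // The loader walks the chain until the settings are current.
    void upgrade_settings(Settings &s) {
        if (s.version == 1) migrate_1_to_2(s);
        if (s.version == 2) migrate_2_to_3(s);
        // version 3 is the current format
    }
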
  • MS went overboard (Score:3, Insightful)

    by MSBob ( 307239 ) on Sunday September 09, 2001 @02:05AM (#2270030)
    Microsoft is the most notable supporter of backwards compatibility. That's a large part of the reason why Win32 is such a bad API. There is just too much junk there that accumulated over the years and seemingly won't ever go away. The other problem backwards compatibility creates is code bloat. Because old interfaces, and often implementations, have to be kept in the binary, the libraries grow bigger and bigger purely to keep the old junk around. Microsoft COM is the most guilty party in that. The rule that interfaces can never, ever change results in nonsense like DirectX 8 still supporting all the interfaces of DirectX 2. Fortunately this trend seems to have reversed and everyone is now keen on starting afresh. MS is rolling out their .NET framework. Apple released OS X, which has new native APIs. It's probably for the best for the consumer in the long run as well.

    Luckily open source doesn't have to suffer from the issue as much, since source availability ensures that old software can often be tweaked, or sometimes just recompiled, to make it work with new versions of dependent libraries.

    How long to maintain backwards compatibility is really a question of your business domain. An in-house app can probably be changed significantly without impacting many people, while a widget library (like Qt, for example) must maintain backwards compatibility for at least a couple of minor versions. The ability to simply recompile old code after a major change in the library is a welcome feature too.

    • Re:MS went overboard (Score:3, Interesting)

      by Trepidity ( 597 )
      So you're claiming Microsoft is making a mistake by providing backwards compatibility for DOS, but "open source," whose primary operating systems are Linux and FreeBSD, both designed to be backwards-compatible with 1970s UNIX systems, are somehow not?
      • by SurfsUp ( 11523 )
        So you're claiming Microsoft is making a mistake by providing backwards compatibility for DOS, but "open source," whose primary operating systems are Linux and FreeBSD, both designed to be backwards-compatible with 1970s UNIX systems, are somehow not?

        Well, speaking as an experienced Dos programmer, the Dos interface is really a piece of crap. Nothing is defined with any kind of generality or foresight. Look at the "critical error handler" interface for example, it's unbelievably awkward and unworkable. Look at what happens if you shell to Dos, start a tsr, then exit the shell. Boom. This is broken by design. Just try to write a program that recovers control when a child aborts, that's when you find out how messed up the resource handling is, and the exit handling. The child tries to exit back to Dos instead of its parent unless you hack into all sorts of undocumented places, which everybody eventually has to do for a big system. All that undocumented-but-essential internal interface dung has to be supported, right?

        That said, no I don't think Microsoft made a mistake in supporting old Dos code, it's all part of maintaining the barrier to entry, the more messed up it is, the more expensive to replicate. Never mind that the whole sorry tale isn't good for anyone but Microsoft.

        Well, now that we have several real operating systems as alternatives, there's no longer any advantage to Microsoft in keeping Dos on life support. When Dos gets painful, people just walk now. But Microsoft still has to support it, or everybody will start publishing articles about how Windows doesn't work, and they have to keep all those "enterprise" solutions that consist of Dos programs and BAT files running. Now Dos support is just an expensive liability Microsoft can't escape. Heh, but the rest of us can, and do.

        The original Unix interface on the other hand, was quite well designed. 30 years later, for the most part the whole thing is intact and still in use, functioning reliably in enterprise-level multiprogramming environments. So, no, maintaining compatibility with that old-but-good interface is unquestionably not a mistake.

        The short story is: the whole Dos interface was a mistake, and Microsoft now has to live with it. The unix interface was not a mistake and we're happy to carry it forward.

    • by Anonymous Coward
      Microsoft is the most notable supporter of backwards compatibility.


      Not really. IBM on the mainframes and Intel on the 80x86 are the biggest backwards compatibility stories around (over 20 years for Intel, over 30 for IBM, I would guess).
    • Microsoft is the most notable supporter of backwards compatibility

      Well, if you ignore Intel.

      Intel have kept binary compatibility for decades now; even the latest Pentium III will run code written for the 8086 - maybe with different timing/cycle counts - but certainly the opcodes still do what they used to.

      After playing with reverse engineering and disassembly for the past few years, I can think of at least several 80[01]86 instructions I've never seen in a single binary - yet they're still there, due to backwards compatibility..

  • by Anonymous Coward on Sunday September 09, 2001 @02:06AM (#2270035)
    God created the universe in 6 days because He didn't have to worry about an installed base.
  • along the same lines, when do you give up support for browsers that can't render DHTML or CSS?

    After a while your normally sensible and readable code becomes a nightmare spaghetti tangle of conditions, macros and multiple re-inventions of the wheel

    this describes some of my javascript to handle multiple browsers to a tee
  • Let's step into this very medium of communication--the world wide web. All those webpages scattered about the web, code folded over code to assure that users of all variants are able to view a certain page. Page designers have to peruse the web in search of web browser documentation for not only different browsers, but different versions of different browsers. We have to worry about the peons still using version 4 of Internet Explorer and Netscape Communicator! Browser checking is a pain in the neck, and aside from this, IE and NS play the browser version hiding game, so checking their versions isn't a simple one, two, three.

    I wish I knew the answer to when enough was enough, but I am at a loss. I will say, however, that I make my decisions of how far to go back in terms of versions by how much flexibility and usability the older versions offer. Anything less than version 4 of either Netscape or Internet Explorer offers very little in terms of flexible design.

    But honestly, where do we set that limit? I don't think it's the best thing to do to say, "Conform or die!" If that's how things were, then the world would be swarming with Hitler's Nazis.

    "You there! Cake or death!?" Uhh... death, no, no, I mean cake! "Ha, ha, you said death first!" I meant cake! "Oh, alright.. here's your cake. Thank you for playing Church of England, Cake or Death?"
  • rules of thumb (Score:5, Insightful)

    by deranged unix nut ( 20524 ) on Sunday September 09, 2001 @02:15AM (#2270056) Homepage
    Two rules of thumb:
    1) Support whatever 90% of your users are using
    2) Support the prior two versions

    If you can't do the above, make a clean break and give it a new name or change the major version number and list the changes in the release notes.

    If you have to make a clean break, if possible:
    1) Provide a migration path
    2) Provide an interop interface

    And above all, listen to your users.

    • 1) Support whatever 90% of your users are using
      2) Support the prior two versions


      Latest stats are showing IE5 to have 82% of the market, IE4 with 6% and IE6 with 1.5% - pretty much 90%

      Another 2% is spiders.

      So should we only support IE? Or should we support the standard, as it is guaranteed to work in all future browsers as well?
      • If your software is targeted at Linux users, then ask yourself what the market share of IE is on the Linux platform. Or are you targeting specifically Windows or Macintosh users (in which case IE is the predominant browser)?

        If you are targeting the world in general and want to make money, or want the widest possible dissemination of your code, then I'd reluctantly say "Yes", give IE preferential consideration over the more obscure browsers. Or stay conformant to the HTML/XML specs and support everybody that's conformant, but be willing to accept the price of losing specific capabilities.

        If you have code that works with Netscape and not IE, or vice versa, then why not contract out (or develop yourself) a plugin or ActiveX control that will provide that capability? Or bundle an existing control with your code?

        But that is getting away from the "backwards compatibility" issue. The previous poster mentioned the rules of thumb, with which I wholeheartedly agree. If your code has to change so much that you break backwards compatibility or produce difficult-to-maintain code, then release it as a new product or a new major version number. And support the last two versions of your product. Makes good sense to me.
      • 1) Support whatever 90% of your users are using
        Latest stats are showing IE5 to have ... pretty much 90% ... So Should we only support IE?

        That's 90% of YOUR users, not 90% of the market you're trying to penetrate.

        If you wrote WhizzyWord, and 90% of your users have migrated up to v1.2, you can dump any v1.1-only "features" and clean it up. This has nothing to do with Microsoft Word, KWord, AbiWord, WordPerfect or any other word processor app on the market.


        • MSIE 5..............63.61%
        • MSIE 4..............18.27%
        • Netscape 4...........7.77%
        • MSIE 3...............4.73%
        • MSIE 6...............3.38%
        • Konqueror............1.73%
        • Gecko................0.52%


        Not sure where you got the 82% figure, but if you design your site based on the most popular browser according to market research rather than the audience most likely to visit your site you may run into trouble.

        Consider a "Best Viewed with Internet Explorer" logo on a Linux Howto site. Probably won't get much traffic...
          • I was talking about developing general sites. royalmail.co.uk used to be a great site that just worked. They've changed it now (for no reason), and you get cookies, javascript, etc. all over the place.

          I'm surprised, and pleased, that your stats show Konqueror to have such a (relatively) high market penetration. I use it almost exclusively (occasionally lynx or sometimes Mozilla get in the way), but I had no idea so many people did.

          I'm afraid on some sites I have to pretend I'm an IE user, though - as otherwise I don't get in (even though they work fine in Konqueror and Mozilla)

          My figures were from PC Pro, a UK magazine, October 2001 edition. I hadn't read it for a long time, and I'm unimpressed by the MS ass-kissing they have in there. I only bought it because it had a Tux on the front; pretty much the entire Linux content of the magazine was that one cover picture, though :(
  • by deranged unix nut ( 20524 ) on Sunday September 09, 2001 @02:22AM (#2270067) Homepage
    Release a set of updates, only change the minor version number, break one critical function in each update, fix it and break a different critical function in the next. Repeat until users no longer depend on the functionality that you want to change, then introduce the new functionality.

    But first, go read "How to write bad code," and start following those suggestions too. ;)
  • by Anonymous Coward on Sunday September 09, 2001 @02:53AM (#2270108)
    I like Microsoft's solution, which I'm sure was done somewhere else first. At any rate, here's the situation...

    Microsoft face the issue that they want to maintain backwards compatibility with everything, so they can leverage off the existing popularity of their platform. This means they can't let new libraries break old applications.

    They had a problem once with MFC, where they updated its memory allocation scheme to make it more optimal. Problem was, a whole lot of old apps happened to work because they accessed memory incorrectly, but the old memory allocation scheme didn't reveal their bugs (I think they were doing buffer overruns, but the old scheme allocated a bit of extra room). Anyway, Microsoft released an updated MFC DLL, and suddenly old applications started breaking. It wasn't really MS' fault, but it was a big event, and I think it was the last time they touched old code like that!

    Anyway, this is where they have their "DLL hell" problem too. Different apps are written against different versions of a DLL, many times with workarounds for known bugs in those DLLs. A new DLL comes along (installed by another application) and suddenly working applications start breaking.

    So here comes COM. I've encountered it with DirectX, and it works like this. When you request something like a DirectDraw object or a DirectDrawSurface object, you also tell the class factory what *version* of the interface you want. Then you're provided with that actual implementation of the library. If you write your game against DirectX 2, but the user has DirectX 5, well your request to the DirectX DLL will actually give you the version 2 implementation! Which is cool, because if you worked around old bugs, those bugs are still there; they're not fixed for you! :)
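
    A schematic sketch of that factory idea (this is not actual DirectX or COM code; the interfaces and IDs are invented): each interface version keeps its own frozen implementation, and the caller says which one it was written against.

    // Two frozen interface versions, each with its own implementation.
    struct IDraw2 { virtual int Blit() = 0; virtual ~IDraw2() {} };
    struct IDraw5 { virtual int Blit() = 0; virtual int BlitAlpha() = 0; virtual ~IDraw5() {} };

    class Draw2Impl : public IDraw2 {
    public:
        int Blit() { return 2; }       // old behaviour, bugs and all
    };
    class Draw5Impl : public IDraw5 {
    public:
        int Blit() { return 5; }
        int BlitAlpha() { return 5; }
    };

    // The "class factory": hands back whichever implementation matches the
    // version the caller asks for.
    void *CreateDrawObject(int iid) {
        if (iid == 2) return static_cast<IDraw2 *>(new Draw2Impl);
        if (iid == 5) return static_cast<IDraw5 *>(new Draw5Impl);
        return 0;
    }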

    As far as I know, you're not getting an "emulation" of the old implementation either; I'm pretty sure they just include binaries for each version of the interface. They could easily dynamically load whichever one you request.

    Of course it means bloat, but there's no other solution. If you want backwards compatibility, you can't fake it, because it's all the really subtle things (like bugs that people worked around) that will bite you in the butt. It's been Microsoft's nemesis but its strength through the years, just as it has been Intel's too. Both companies need to remain backwards compatible with a *LOT* of stuff out there, and so they're forced to have all this legacy architecture. It would be nice to start from scratch, but then they'd be levelling the playing field, and that's no fun :)

    Of course, this backwards compatibility drama affects everyone, not just Windows people. Just the other day someone installed a new PAM RPM on my Mandrake Linux. It installed fine, being a newer version, but some subtle change in behaviour meant I could no longer log into my own box. That's no fun either :)

    - Brendan
    • by billh ( 85947 ) on Sunday September 09, 2001 @04:25AM (#2270200)
      Oddly enough, when Altavista first came online, I used it frequently for trying to resolve DLL conflicts. Probably more than I used it for anything else.

      Sad to think that I can now find just about anything on the internet, and it all started because I had to support Windows 3.x.

      But it is nice to think that I was running Linux as my primary box in my office in 1995, using lynx to browse as I debugged my new Windows 95 installation on my other box.

      Just food for thought for the younger admins out there: DLL hell was the largest part of my job at many sites. Believe it or not, it has gotten much better in recent years. Think of video drivers messing up fax software, network drivers killing the accounting package, and video conferencing software killing the spreadsheet program. As sick as it sounds, Win 9x is a dream to support by comparison.

      4:30 in the morning, I must be getting delusional. :)
      • Funny...I remember doing tech support for Windows 95 at a major OEM in the late 90s and having DLL problems that most have never dreamed of. Imagine installing the OS, installing an upgrade to the OS (USB support), installing a motherboard driver (AGP, I think), installing a video card driver, and installing a hard disk controller driver. Imagine having a broken system if you installed them in the wrong order. THAT is what I think of when I think of Microsoft. The fact that they permitted mere applications to overwrite crucial OS code was unforgivable.
    • by doug363 ( 256267 ) on Sunday September 09, 2001 @06:32AM (#2270349)
      Many of Microsoft's problems don't just stem from multiple versions and fixing bugs (although I agree, those are significant), but from badly thought out design in the first place.

      Also, when there is an opportunity to make a clean break (i.e. the Win32 API), they only make half the changes they should or could. Have you seen the number of functions which end in "Ex" in the Win32 API? Ever noticed how the Win9x series is full of 16-bit user interface code? Or how only half of the NT kernel objects are supported under Win9x? In DirectX, it took between 4 and 8 major versions of DirectX to create something which was really worthwhile (depending on your opinion), and major changes weren't implemented when they should have been.

      In regards to using COM, this helps versioning, but does not help the bad design problem. Even with full versioning, COM interfaces still have reserved fields in them, which is unnecessary if you're bundling all the previous versions.

      Besides, not all compatibility problems can be solved by COM etc. Microsoft are getting better at providing slicker interfaces, but I don't feel that the underlying design is improving as it should be. For example, using Automation objects (which is a disgusting kludge for VB "programmers", to put it nicely) in Office apps is still a pain. (In particular, Outlook 2000's object model and MAPI implementation is inconsistent and buggy as hell.)

      So yes, versioning does help alleviate backwards compatibility problems where they can't be avoided, but nothing is a substitute for good design.

      • "Ever noticed how the Win9x series is full of 16-bit user interface code"

        The Windows 95 design team had clear objectives:
        1. We need OS with protected memory etc ...
        2. It MUST run 99% apps from Win 3.11

        The only way to accomplish that was to retain tons of old 16-bit code and wrap it with 32-bit stuff. Windows 95 is actually one of the most amazing pieces of software ever written.
        Consider the fact that the average RedHat release has problems running most apps written a mere year before that particular version was released.

        "Or how only half of the NT kernel objects are supported under Win9x"

        That is genuinely MS's fault. They had communication problems between the NT and Win 95 teams.
    • So here comes COM. I've encountered it with DirectX, and it works like this. When you request something like a DirectDraw object or a DirectDrawSurface object, you also tell the class factory what *version* of the interface you want. Then you're provided with that actual implementation of the library. If you write your game against DirectX 2, but the user has DirectX 5, well your request to the DirectX DLL will actually give you the version 2 implementation! Which is cool, because if you worked around old bugs, those bugs are still there; they're not fixed for you! :)

      And neither are the exploits.

    • This sounds exactly like the library version numbers that have been on every shared library implementation ever made for Unix. I don't think this quite solves things for a few reasons from what I know has broken on Linux/Unix:

      1. The developer may think their change is not significant enough to warrant a new version. When they are wrong, you have the same problem as before. Trying to avoid this would result in thousands of versions, and is better solved by statically linking.

      2. The different versions still need to talk to underlying layers. Unless something like VMware is used to run several copies of Windows at the same time, there is a layer where backwards compatibility needs to be worked on. This could be solved by making a simple, well-defined lowest layer, but it is obvious that Windows (and anything in Unix designed after 1980) does not follow such design principles.

      One thing Microsoft may be addressing is the obscurity of Linux library versioning. I have been writing software for it for 8 years and I still have no idea what figures out which version of a library to use, or how it works. It would seem that the filenames are all that are needed to control it, but apparently that is not how it works. Nor do I know how to find out what version of a library a program wants (ldd prints the version it will load, which is not exactly the same).

  • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Sunday September 09, 2001 @03:02AM (#2270118)
    Commit yourself to a strict policy: nothing in a minor version will break anything since the last major version. If your code is at 1.9.99, it should be backwards-compatible with 1.0.0. If your code is at 1.1.0 and backwards compatibility breaks, move it to a 2.0 release.

    Typically, users expect breakage--or, at the very least, problems related to upgrades--with major versions. With minor versions, they don't expect breakage.

    Follow the Law of Least Surprise. If you break backwards compatibility, up the major version by one.

    Insofar as when to break backwards compatibility, that's a much harder question. The obvious answer is a little philosophic: not all engineering problems can be solved by saying ``screw backwards compatibility'', and some engineering problems cannot be solved without saying it.

    The trick is learning which is which.
  • by Skapare ( 16644 ) on Sunday September 09, 2001 @03:19AM (#2270138) Homepage

    A lot of the discussion seems to be related to issues of things like programming languages and operating systems (which are important). But what about keeping up with old formats and protocols? I think the issue is more one of what your project works with, than it is what language you choose (including the OS as part of the former).

    • Should a new web server support older versions of HTTP to the extent that they are not proper subsets of new versions?
    • Should my mail server support POP3, or can I make it do IMAP4 only?
    • Should my web site require every browser to have cookies, Javascript, and Java enabled, or should it at least function with browsers that don't have these, don't have them enabled, or have them filtered at the firewall?
    • Should I support ISO-8859 character sets, or can I just stick to Unicode (UTF-8 and maybe also UTF-16) to be universal?

    I'm not so much looking for specific answers to the above questions, but rather, a general idea of how you think one should go about deciding those issues to come up with the best answers in some given situation.

    • Answers. (Score:3, Insightful)

      by mindstrm ( 20013 )
      Your web server should support all versions of HTTP. (You meant HTML?)

      POP3 and IMAP4 are not 'new versions' of each other... neither is outdated. one is not a replacement for the other. Needs dictated solely by users.

      Your web site should require no more functionality than needed to operate the way you want it to. That's just good programming. Don't use cookies or javascript or java if you don't need to.

      You can stick to Unicode, because ISO-8859 maps into it properly.

    • I think you can drop support for all ISO character sets other than ISO-8859-1. This will eliminate a lot of problems because when only one set is supported, all code that has to decide which set to use can be deleted!

      UTF-8 and ISO-8859-1 can be supported at the same time for all real documents. This is done by treating all illegal UTF-8 sequences as individual 8-bit characters, rather than an "error". This also makes programming easier because there are no "errors" to worry about. For an ISO-8859-1 document to be mis-interpreted under this scheme you would need an 8-bit punctuation mark followed by two accented characters, which is not likely in any real European document.

      It is also necessary to treat UTF-8 sequences that code a character in more bytes than necessary as illegal, this is vital for security reasons.
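
      A hedged sketch of that decoding rule (illustration only, not from any particular library): decode UTF-8, reject overlong forms, and fall back to treating a bad byte as a single ISO-8859-1 character. Surrogate and range checks are omitted for brevity.

      #include <string>
      #include <vector>

      // Returns code points. Invalid or overlong UTF-8 sequences are not
      // errors: the offending byte is taken as one 8-bit (Latin-1) character.
      std::vector<unsigned> decode_lenient(const std::string &in) {
          std::vector<unsigned> out;
          std::string::size_type i = 0;
          while (i < in.size()) {
              unsigned char b = in[i];
              if (b < 0x80) { out.push_back(b); ++i; continue; }   // plain ASCII

              int len; unsigned cp, min;
              if      ((b & 0xE0) == 0xC0) { len = 2; cp = b & 0x1F; min = 0x80; }
              else if ((b & 0xF0) == 0xE0) { len = 3; cp = b & 0x0F; min = 0x800; }
              else if ((b & 0xF8) == 0xF0) { len = 4; cp = b & 0x07; min = 0x10000; }
              else { out.push_back(b); ++i; continue; }            // stray byte -> Latin-1

              bool ok = (i + len <= in.size());
              for (int k = 1; ok && k < len; ++k) {
                  unsigned char c = in[i + k];
                  if ((c & 0xC0) != 0x80) ok = false;
                  else cp = (cp << 6) | (c & 0x3F);
              }
              if (ok && cp >= min) { out.push_back(cp); i += len; } // valid, not overlong
              else                 { out.push_back(b);  ++i;      } // fall back to one byte
          }
          return out;
      }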

      I am also in favor of adding Microsoft's assignments for 0x80-0x9F to standard Unicode. These are pretty much standard in the real world now anyway. This will make some more sequences (one Microsoft symbol followed by one accented character) mismatch under the above scheme, but you are likely to get such documents anyway whether you interpret the Microsoft characters or not.

      • > UTF-8 and ISO-8859-1 can be supported at the same time for all real documents.

        And we can forget about the rest of the world that doesn't use ISO-8859-1 or UTF-8? This will hopelessly munge any documents in a different charset, whereas a UTF-8 interpreter could realize it wasn't a UTF-8 document (and probably load it up in a locale charset.)

        > This also makes programming easier because there are no "errors" to worry about.

        But you've added another case to everything that has to handle UTF-8, and ignored real errors.

        >It is also necessary to treat UTF-8 sequences that code a character in more bytes than necessary as illegal, this is vital for security reasons.

        Why? You've just introduced alternate sequences for characters (the whole security problem) in another form.

        >I am also in favor of adding MicroSoft's assignments to 0x80-0x9F to the standard Unicode.

        Huh? All the characters are in Unicode. Messing with preexisting characters (that date back to Unicode 1.0) to add yet more redundant characters isn't going to happen, and shouldn't; you shouldn't break backward compatibility unless there's at least a benefit.
        • And we can forget about the rest of the world that doesn't use ISO-8859-1 or UTF-8?

          Yes we can forget about them. This is because it will eliminate the need to transmit "which character set this is in". This will be an enormous win for software reliability!

          But you've added another case to everything that has to handle UTF-8, and ignored real errors.

          No, my whole point is that there is only *ONE* case, which is "handle UTF-8, but if the characters look illegal, display them as single bytes". At no point does a program have to think about whether a document is ISO-8859-1 or UTF-8; it just assumes UTF-8 at all times.

          You've just introduced alternate sequences for characters (the whole security problem) in another form.

          Only for characters with the high bit set. This is not a security problem. The problem is bad UTF-8 implementations that allow things like '\n' and '/' to be passed through filters by encoding them in larger UTF-8 sequences.

          Huh? All the characters are in Unicode

          Yes, but not at the MicroSoft code points. The problem is that if this is not a standard, programs are forced to examine the text before displaying it (because there are a huge number of documents out there with these characters, and they are not going to go away!). I also think the standards organizations were stupid for saying these are control characters; if they had assigned them, we would not have MicroSoft grabbing them.

          • > This is because it will eliminate the need to transmit "which character set this is in".

            On one hand, UTF-8 alone can do that, without this Latin-1 hack. On the other hand, this in no way solves that, as you still have to deal with ISO-8859-(2-16), SJIS and a dozen other character sets that aren't just going to go away.

            > At no point does a program have to think about whether a document is ISO-8859-1 or UTF-8, it just assummes UTF-8 at all times.
            What makes ISO-8859-1 special? ISO-8859-1 only may as well be ASCII only for most of the world.

            > Only for characters with the high bit set. This is not a security problem.

            Yes, it is. It's a little more elaborate, but you can still get into cases where one program checks the data, say forbidding access to directory e', but misses the access to e' because it's encoded differently.

            > programs are forced to examine the text before displaying it
            I prefer my programs to get it right, rather than do it quick.

            > because there are a huge number of documents out there with these characters

            Plain text documents? Not so much. The number of such plain text documents is probably much smaller than the number of KOI8-R plain text documents.

            We have a solution - it's called iconv, and a short example of it follows at the end of this comment. iconv will handle all sorts of character sets, not just ISO-8859-1 and CP1252. (Actually, you don't completely handle Latin-1 either, since there are valid Latin-1 files that are valid UTF-8. But we can just handwave that away until someone gets bit by it.)

            > the standards organizations were stupid for saying these are control characters, if they had assigned them

            They did assign them. Single shift 1, single shift 2, etc. If the characters were needed, that's what they should have been used for, Microsoft be damned.
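
            For illustration, here is a bare-bones use of iconv in C++ (POSIX; it is built into glibc, while other systems may need -liconv). The function name and error handling are mine, and the buffer sizing is deliberately generous rather than clever:

                #include <iconv.h>
                #include <stdexcept>
                #include <string>
                #include <vector>

                // Convert 'input' from the named charset (e.g. "KOI8-R",
                // "ISO-8859-2", "CP1252") to UTF-8.
                std::string to_utf8(const std::string &input, const char *from)
                {
                    if (input.empty()) return input;
                    iconv_t cd = iconv_open("UTF-8", from);
                    if (cd == (iconv_t)-1)
                        throw std::runtime_error("charset not supported");

                    std::vector<char> inbuf(input.begin(), input.end());
                    std::vector<char> outbuf(input.size() * 4);  // ample room
                    char *inp = &inbuf[0], *outp = &outbuf[0];
                    size_t inleft = inbuf.size(), outleft = outbuf.size();

                    // Some platforms declare the second parameter as
                    // const char **; a cast may be needed there.  This
                    // follows the glibc/POSIX prototype.
                    size_t rc = iconv(cd, &inp, &inleft, &outp, &outleft);
                    iconv_close(cd);
                    if (rc == (size_t)-1)
                        throw std::runtime_error("conversion failed");
                    return std::string(&outbuf[0], outbuf.size() - outleft);
                }

            Typical use would be something like to_utf8(text, "KOI8-R").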

            • I think ISO-8859-1 is special because it is used much much more than 2-16. For the normal world it *is* ASCII. Therefore I think the best bet is to have programs guess that text that appears to not be UTF-8 is ISO-8859-1.

              Good point about the security problem, though. I have described a way in which there are two encodings for the same character. I was mostly concerned about avoiding the possibility of punctuation marks and control characters, but foreign letters are a possibility.

              Maybe it would be best to make the API pure UTF-8 (illegal sequences turn into the Unicode error character). My idea would be best done by applications upon receipt of external text. They could also make a guess about the ISO-8859 question by checking the spelling of words, etc.

  • A few thoughts (Score:5, Informative)

    by Telek ( 410366 ) on Sunday September 09, 2001 @04:09AM (#2270184) Homepage
    I know that we had to make the decision to redesign our flagship product, and it was a tough one (since this product was the core of about 5 other products in house as well). The problem was:

    Code was ugly and hideous

    Someone forked the tree a while back, and now we had to support two separate source trees. (This one was really annoying: if you fixed a bug and changed some behaviour on one side, then, since both sides had to be able to talk to each other, you'd have to introduce a corresponding "fix" on the other side.)

    The core architecture was woefully outdated and inefficient

    Speed was an issue, the current architecture was limiting, and the code was already optimized about as well as it could be (this was also part of the ugliness problem)

    We were spending about 70% of our time fixing bugs in the old code. It took that much time because the bugs were little and stupid, and with the code in the state it was in, it took forever to trace things down.

    Well, we had been wanting (desperately) to redesign from the ground up for a while, but the powers-that-be wouldn't give us the time, until one customer asked for a feature, marketing promised that they'd get it, and we said "Know what? We can't do that. Not with the current infrastructure." So the powers-that-be said "do what needs to be done!" and we said "yippee!".

    Moral?

    How much extra time is spent during debugging code that is due to the current state of the code?

    How does the core architecture compare to what you will need in the future? Will it support everything?

    How efficient is the old codebase, and how efficient does it need to be?

    Can you get the time required to do so? This is a one-step-backwards before two-steps-ahead thing.

    Do you trust the people that you are working with enough to be able to competently and efficiently create the new architecture? This is a serious question, because I have worked with some people that are good if you tell them exactly what to do, but I wouldn't trust them to recreate everything.

    Will you be required to keep the old codebase going while you are in the process of converting? If old customers need bug fixes, you might be forced to keep the old sourcebase around for a while.

    Can you make the new design backwards compatible? If not, can you provide a wrapper of sorts, or some easy way to convert your customers from your old version to the new one?

    If you are going to be redesigning the user interface or the API at all, then you must also think about the impact on your customers.

    Just some food for thought.

  • by Skapare ( 16644 ) on Sunday September 09, 2001 @04:09AM (#2270185) Homepage

    A feature that exists in the major UNIX systems, but is not part of the standards, will majorly improve the performance of my project, and make it a lot easier to code up. Should I use it or not?

    Of course the question is vague. I didn't state which systems, and for a reason: I don't want to focus on the specifics (although I do have a specific case in mind), but rather on the general principle. Just how far should I go to make sure my program works on every damned UNIX out there? How much of that portability really matters? (A sketch of the kind of trade-off I mean follows below.)
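
    To make the trade-off concrete, here is a sketch with sendfile(2) standing in for the unnamed feature (purely an example, not the actual case). It exists on the major Unixes but is not in POSIX.1, and its signature even differs between Linux, FreeBSD and Solaris, so the fast path gets guarded and a portable fallback stays around:

        #include <sys/types.h>
        #include <unistd.h>
        #if defined(__linux__)
        #  include <sys/sendfile.h>   // Linux-specific header and prototype
        #endif

        // Send 'count' bytes from file_fd out over the socket sock_fd.
        ssize_t send_file(int sock_fd, int file_fd, size_t count)
        {
        #if defined(__linux__)
            // Fast path: zero-copy, but non-standard.  FreeBSD and Solaris
            // have sendfile() too, with different signatures (not shown).
            off_t offset = 0;
            return sendfile(sock_fd, file_fd, &offset, count);
        #else
            // Portable POSIX fallback: a plain read/write loop.
            char buf[8192];
            size_t done = 0;
            while (done < count) {
                size_t want = count - done;
                if (want > sizeof buf) want = sizeof buf;
                ssize_t n = read(file_fd, buf, want);
                if (n <= 0) break;
                if (write(sock_fd, buf, n) != n)   // no partial-write retry
                    return -1;                     // in this sketch
                done += (size_t)n;
            }
            return (ssize_t)done;
        #endif
        }

    The question then becomes: is the fallback branch good enough that the guarded fast path is worth the extra maintenance?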

    • Who is your target audience/market? That's where you get your answer. Nobody can tell you what to write, especially if we don't know what it is.

    • My problem is the opposite. I want to use the existing standards as much as possible. But I am aware that not every system has them implemented. And even if new systems fully implement them, there's a lot of older versions installed out there that don't.

      I don't have the resources to support every Unix system out there. No way, no how. So I want to use the standards. But it will mean saying "no" to a lot of people.
  • Migration Tools (Score:2, Informative)

    by ImaLamer ( 260199 )
    Let's say you are developing an app that works with a certain set of data, and it is stored one way.

    With your new release (2.0, or whatever) that is a whole new work, you bundle tools to convert the old data.

    MS does this with Office, and people sometimes think it's a pain, but I do not. If I'm getting a new version that (sometimes) works better, I don't mind running my data through a filter - but make sure you don't make the mistake MS has made. Make sure this filter works.

    If you're talking about a new OS or other such projects, create data (read: settings) BACKUP tools. If you want people to all migrate to this new OS or large system application, give them a tool to back up those old settings, and put the code right into your new version to grab those old settings from the backup. (A sketch of what I mean follows below.)

    That's all we want!
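
    A sketch of that kind of converter, with a completely made-up settings format: the 1.x file is plain key=value lines, and the 2.0 writer adds an explicit version header so the next migration has something to key on. The key rename shown is hypothetical:

        #include <fstream>
        #include <map>
        #include <string>

        // Read a 1.x-style settings backup: one "key=value" per line.
        std::map<std::string, std::string> read_v1_settings(const std::string &path)
        {
            std::map<std::string, std::string> settings;
            std::ifstream in(path.c_str());
            std::string line;
            while (std::getline(in, line)) {
                std::string::size_type eq = line.find('=');
                if (eq != std::string::npos)
                    settings[line.substr(0, eq)] = line.substr(eq + 1);
            }
            return settings;
        }

        // Write the 2.0 format, renaming any keys that changed meaning.
        void write_v2_settings(const std::string &path,
                               const std::map<std::string, std::string> &settings)
        {
            std::ofstream out(path.c_str());
            out << "# settings-version: 2\n";   // explicit marker for next time
            std::map<std::string, std::string>::const_iterator it;
            for (it = settings.begin(); it != settings.end(); ++it) {
                // Hypothetical rename between versions:
                std::string key = (it->first == "mailhost") ? "smtp_server"
                                                            : it->first;
                out << key << "=" << it->second << "\n";
            }
        }

    The new version runs read_v1_settings() once at first start-up and writes the result back out with write_v2_settings(); after that it only ever reads its own format.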
  • Analogy (Score:4, Funny)

    by noz ( 253073 ) on Sunday September 09, 2001 @05:40AM (#2270297)
    "Programming is like sex. Make one mistake and support it for the rest of your life."
  • The problem with M$ is not that they always break something. The problem is that they break something and don't give you a chance to fix it.

    That's what open source exists for. If things get broken, there is a very good chance of fixing them if one has access to the code and gets an idea of the new features, bugs and innovations. If a developer thinks he is right to move away from standards and practices, then let it be. He is the author, and he knows best what he may want or not. However, he should make sure that the people who depend on his creation (aka users) will have a chance to readapt. With open source, a developer may not have to worry too much about this, because if his product is popular and he is a good guy, there will be other developers who may help convert data or code to the new conditions.

    With closed source, the developer himself has to take care of this conversion. However, as we see, this is quite expensive in terms of resources and leads to the main bulk of the project going bloated and buggy. So, if you care about innovation and stability, do open source and keep the people coming to you.
  • by Anonymous Brave Guy ( 457657 ) on Sunday September 09, 2001 @07:54AM (#2270417)

    This is a confused discussion. A lot of people are mixing up "backward compatibility for users" with "not making significant changes to the code base". The two are largely unrelated, except that screwing up the latter will also mess up the former.

    What your user sees is, and always must be, decided by your requirements spec, not programmer whim. The only people who can get away with doing otherwise are those developing for their own interest (hobbyists, people involved in open source development, etc).

    To put it bluntly, blanket statements like "meet 90% of your users' BC needs" are garbage. In many markets (notably the bespoke application development market) if you drop 10% of your users in the brown stuff, your contract is over, and your reputation may be damaged beyond repair. Look at MS; years later and in the face of much better libraries, MFC still survives, because people are still using it (including MS) and they daren't break it.

    This is far removed from rewriting things significantly as far as the code goes, which is where things like the standards mentioned come into things. I'm sorry, Cliff, but you don't do this "when you can get away with it" if you're any good.

    Every time you rewrite any major piece of code, "just to tidy things up", you run the risk of introducing bugs. You need to be pretty sure that your rewrite is

    • necessary to meet your requirements, or
    • fixing more than it breaks and not breaking anything unacceptable
    or, preferably, both.

    If the rewrite is justified using these objective criteria, then you do it. When you do, you try to minimise the number of changes you make, and to keep the overall design clean. You retest everything that might conceivably have been broken, and you look very carefully at anything that didn't work -- it's quite possible that the people who originally wrote this code months or years ago made assumptions they forgot to document and you've broken them. Finally, if and only if your rewrite is performing acceptably and all the tests are done, you decide to keep it. If not, you throw it away and start rewriting again.

    And for the record, yes, I spent most of last week rewriting a major section of our application, as a result of a code review with another team member. We kept the overall design, tweaked a few things within it, and rewrote most of the implementation. Now we need to retest it all, update all the docs, etc. This little exercise has cost our company thousands of pounds, but in this particular case it's justified by a needed performance increase and the significant reduction in bug count. But you can bet we thought very hard about it before we touched the keyboard.

  • I work for a company where we have a multi-platform calendaring solution. Groupware type of thing. (And yes, we support Linux as well.)

    While the Windows team was preparing drafts for an upcoming version, in the Mac team we were readying a Carbon-compliant version of our product for Mac OS X.

    The previous version was 5.1. This new, carbonized version didn't have any new features per se, just a couple of bug fixes and a new core API (Carbon). It was then numbered 5.2. Basically, it's just a port of the 5.1 release, plus extra things to make it behave better under Mac OS X (AQUA stuff, and direct BSD networking instead of Open Transport).

    The new 5.2 release supports Mac OS 8.6 and beyond, including Mac OS X. Prior to that, the 5.1 release supported System 7.1.2 on up.

    The 5.2 release being a Mac OS X port (although it works under Mac OS 8.6 and up) meant that we could break away from System 7 without pissing off too many users, since it didn't bring anything new in terms of functionality over the 5.1 release.

    The 5.2 release, therefore, acts as a cushion between the 5.1 release and the upcoming 6.0, which will most likely drop Mac OS 8.x while remaining usable with Mac OS 9.x (and Mac OS X).

    So far, this has worked OK for us.
  • by crath ( 80215 ) on Sunday September 09, 2001 @10:18AM (#2270589) Homepage
    A key to providing backward compatibility is "design intent"; i.e., closely examine the backwards compatibility issue when you are first thinking about creating a piece of software. Internal data structures, external file formats, APIs, etc. are all influenced by the design constraints placed upon a project. If one of those constraints is backward compatibility then these structures will all be built differently than in the case where no backward compatibility is ever required.

    FrameMaker is a great example of an application that appears to have been architected from day one to provide backward compatibility: every version of FrameMaker imports and exports Maker Interchange Files (.MIF files), so it is trivial to move files between releases of the application. While I'm sure this causes the developers some headaches from time to time, I know from personal experience that a constant anchor point like .MIF files influences coding decisions.

    Having done work on an ASCII interchange mechanism for a multiplatform application, I can be fairly certain that the FrameMaker decision isn't very difficult to implement: each release of the application has a pair of small functions, one to walk the internal data structure and emit the ASCII interchange format, and another that parses the ASCII interchange file and produces an internal data structure. (A sketch of such a pair follows at the end of this comment.)

    When we designed our application, the ASCII interchange functionality was deemed important; this influenced the internal data structures, which in turn influenced the binary data files. If we had tried to bolt backward compatibility on at a later date (i.e., in version 2.0) it could have been a lot of work; whereas, building it in from day one didn't cause any extra work.

    Conscious design intent is the key to making backward compatibility a non-issue.
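
    As a sketch of how small that exporter/importer pair can be, here is a toy version with a made-up document structure and interchange format (not FrameMaker's actual MIF):

        #include <iostream>
        #include <string>
        #include <vector>

        struct Paragraph { std::string style; std::string text; };
        struct Document  { std::vector<Paragraph> paras; };

        // Walk the internal structure and emit the ASCII interchange form.
        void export_interchange(const Document &doc, std::ostream &out)
        {
            out << "<DocVersion 1>\n";
            for (size_t i = 0; i < doc.paras.size(); ++i)
                out << "<Para " << doc.paras[i].style << "> "
                    << doc.paras[i].text << "\n";
        }

        // Parse the interchange form back into the internal structure.
        Document import_interchange(std::istream &in)
        {
            Document doc;
            std::string line;
            while (std::getline(in, line)) {
                if (line.compare(0, 6, "<Para ") != 0)
                    continue;                       // skip headers, unknowns
                std::string::size_type close = line.find('>');
                if (close == std::string::npos)
                    continue;
                Paragraph p;
                p.style = line.substr(6, close - 6);
                if (close + 2 < line.size())
                    p.text = line.substr(close + 2);
                doc.paras.push_back(p);
            }
            return doc;
        }

    Each release only has to keep these two functions in step with its own internal structures; old files keep working because the interchange format, not the binary layout, is the anchor point.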
  • Finally.. something somewhat worthy of discussion.

    Along the same lines... what *really* irks me is when I go to compile some utility I used a couple of years ago... and it wants a *whole bunch* of libraries it didn't need before, usually related to display only.

    Too many good command-line apps get turned into huge, bloated GNOME apps for *no reason* (or just so the developer could play with gnome).

    You shouldn't let backwards compatibility hold you back, that's for sure. If you wanna bring out version 2.0 as a rewrite, why not? Keep the old one available to people, though.

  • Java (Score:2, Interesting)

    by gUmbi ( 95629 )
    First, I'm not sure if we're talking about code compatibility or user compatibility but I'll talk about code compatibility (who cares about users anyhow? :).

    I've found that Java is a great language for making backwards compatibility easier. The main reason is that dynamic linking doesn't use ordinals or offsets. This means that I can fundamentally reprogram an entire class (even without using an interface) and still have a dependent program work fine without recompiling it. (Note: the only gotcha is finals, which are copied at compilation into the dependent program - bah!)

    Along with a couple of other things, such as using lots of interfaces and factories throughout the library, backward compatibility is hardly an issue at the code level.

    Jason.
  • Asking for an ISO C++ compiler is asking for trouble, because such a thing does not exist. For example, how many compilers support export? However, it's important to draw the line somewhere on features. Where you draw the line depends on how important it is to support broken compilers.

    There is a secondary issue, regarding work-arounds for small annoyances. Autoconf does a good job of taking care of this. For example, an autoconf macro in configure.in can be used to tell me whether I need to install an sstream header for old gcc versions, or whether one is already installed. (Sorry, I had to snip the macro itself because of the lame "lameness" filter; a sketch of the C++ side of such a check follows at the end of this comment.)

    One can check other conditions and define macros, or automatically edit Makefiles based on outcomes of this and similar tests. GNOME and KDE packages ship with a good collection of autoconf macros. I have found these very useful.

    I test regularly on g++ versions 2.95 and up. Less frequently, I try to build against g++ 2.91.xx. The compiler is portable, and using the same compiler makes life simpler. Even within this narrow framework, there are portability issues. For example, earlier gcc versions do not ship with the sstream header, and the streams library is broken in early versions: for example, int is used instead of the appropriate types. The best way to manage these subtle annoyances is to use autoconf. Libtool is also essential, for a different reason: the commands used to link vary wildly from platform to platform.
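
    To show the C++ side of such a check (assuming the snipped configure.in test defines HAVE_SSTREAM in config.h when the header is found, which is what a stock header check does), one can hide the difference behind a typedef:

        #include "config.h"
        #include <iostream>   // for std::ends in the fallback branch
        #include <string>

        #ifdef HAVE_SSTREAM
        #  include <sstream>
           typedef std::ostringstream out_string_stream;
        #else
           // Pre-standard fallback for old g++.  Very old libstdc++ releases
           // may spell this <strstream.h> with the class in the global
           // namespace; adjust to taste.
        #  include <strstream>
           typedef std::ostrstream out_string_stream;
        #endif

        std::string int_to_string(int value)
        {
            out_string_stream os;
            os << value;
        #ifdef HAVE_SSTREAM
            return os.str();
        #else
            os << std::ends;               // ostrstream needs an explicit NUL
            std::string result(os.str());  // str() freezes the buffer...
            os.freeze(false);              // ...so unfreeze it to avoid a leak
            return result;
        #endif
        }

    This keeps the #ifdefs confined to one corner instead of spreading sstream-versus-strstream knowledge through the whole code base.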

  • Arrogance (Score:3, Redundant)

    by Veteran ( 203989 ) on Sunday September 09, 2001 @12:59PM (#2270909)
    Much of 'throw out the old cruft' is simply arrogance on the part of new developers: "These guys didn't know what they were doing" implying of course that "we are much better than they were".

    Old code is hard to read - even if it is your own code - because you lack the overall 'grasp' of what the code is doing, which the developer had when it was written. Old code becomes just lines of code instead of part of an intelligent structure; you can see the 'trees' but the 'forest' has been lost.

    This means that the problem with old code is that YOU don't know what the developers were doing; not that THE DEVELOPERS didn't know what they were doing!

    It is a very easy mistake to confuse those two types of ignorance. Add in a little 'it can't be me that is wrong' attitude and the name of the game becomes a contemptuous 'out with the old in with the new.'

    The truth is that there are differences in skill levels of programmers - old code written by good programmers is a lot better than new code written by poorer-quality programmers. If you didn't know that programmers differ in skill level, that is conclusive proof that you are not a good programmer; to a bad programmer all code looks the same - that is why a bad programmer is a bad programmer.

    • This means that the problem with old code is that YOU don't know what the developers were doing; not that THE DEVELOPERS didn't know what they were doing!

      Of course, if the original developers document properly, this doesn't happen much. That will soon tell you whether the guys who did it the first time around were good or not.

  • Sometimes the answer is not to target the newest implementation but the one that is available on all the platforms in use. For Java this sometimes means going back to the 1.0 API! You lose a lot of cool functions, but it is the only way. Then just make sure it works best on the newest platforms or browsers, depending on whether the program runs on an OS or in a web browser. This isn't always an easy thing to find out. We made a web application using Java 1.1, and it turns out that Netscape on the Mac supported Java without any of the changes from 1.0 to 1.1, so it didn't work there. But recently I've heard that Netscape may finally be supporting 1.1 for real on the Mac, and 1.3 with OS X. Cool!!! If only we had written the program in Java 1.0 there would have been no problems (except that it would probably be really clunky).
  • UNISYS overdoes it (Score:4, Interesting)

    by Animats ( 122034 ) on Sunday September 09, 2001 @10:47PM (#2272025) Homepage
    After being away from UNIVAC mainframes since 1978, I recently came across online manuals for the latest version of the operating system for the UNISYS ClearPath Server. That machine is running a version of OS2200, which is an updated version of OS1100, which is an updated version of Exec 8, which was first demoed in 1967.

    The API has barely changed in the last 25 years. A friend of mine has an application that's been running unchanged since the 1970s. [fourmilab.ch] It has continued to work across generations of hardware. And it's in assembler.

    They had the advantage that their OS was decades ahead of its time. UNIVAC had symmetrical multiprocessing with threads in a protected-mode environment thirty years ago. And threads were designed in, not bolted on as in UNIX.

  • 1. You have to distinguish backward compatibility in the program or in the interface presented to the users. Sometimes, the latter can be preserved for the sake of the clients while the former can be invisibly improved to adhere to current standards.

    2. Cross-platform portability is a good thing in its own right. Not just because you can increase the market by a few percent, but because, in my experience, the more a code base has been required to run through the gauntlet of different compilers on different platforms, the less likely the code is to break, both in general and when porting to platform n+1.

    Maintainability and extensibility of software are improved markedly if you make an effort to be cross-platform portable.

    That's not to say that you need to port back to the same least common denominator as Ghostscript has been known to do. While impressive, I consider that level of effort to be more than I would consider for a software project. But a lot of that is legacy anyway.

    I work on a large C++ Solaris project that has code from the mid-1990s, when compilers didn't implement as much of the standard as they do now. We're slowly making an effort to move in the direction of the standards in this regard, because it will decrease long-term maintenance costs and make the code base easier to read and, therefore, to extend.
