When Do You Kiss Backwards Compatibility Goodbye?
Arandir asks: "Backwards compatibility is great for users. But it sucks for developers. After a while your normally sensible and readable code becomes a nightmare spaghetti tangle of conditions, macros and multiple reinventions of the wheel. Eventually you have to kiss off backwards compatibility as more trouble than it's worth. The question is, when? Should my code conform to POSIX.1 or Single UNIX 2? Should I expect the user to have an ISO Standard C++ compiler? What about those users with SunOS-4.1.4, Slackware-3.2, and FreeBSD-2.2?"
This question is really kind of difficult to answer in the general sense. The best advice one can give, of course, is "when you can get away with it". Not much help, that, but the loss of backwards compatibility, like most complex decisions, depends on a lot of factors. The key factor in most developers' eyes, of course, is the bottom line. Have many of you been faced with this decision? What logic did you use to come to your decision, and what suggestions do you have for others who might find themselves in this predicament?
Ironically.. (Score:5, Funny)
.. (Score:2, Funny)
I thought it was a sign. ;-)
I just realised.. (Score:2, Interesting)
Re:Ironically.. (Score:2, Funny)
Re:Ironically.. (Score:3, Insightful)
and when Slashdot-posters abuse the term "nazi"...
Re:Ironically.. (Score:2, Insightful)
Re:Alanis (Score:2)
What are you asking us for? (Score:3, Insightful)
You should know who they are, what equipment they have, who is doing whom a favor (i.e., who has to adapt to whom), and especially you should know what they want (such as how much backward compatibility).
Re:What are you asking us for? (Score:2, Interesting)
That is exactly my point: With Linux you know there are a great many users (with all sorts of different hardware, purchasing capacity, applications for your system, etc...) and so you would probably do best with having as much backward compatibility as possible.
With a proprietary corporate system, such as the ones I write, you can pretty much dictate when people will upgrade, and if there are old and incapable systems, you just buy new machines. Much different from the open source/commercial software philosophy.
There's no way that you can satisfy all of your users
Well, system requirements are much broader than just "how much backward compatibility to have."
I would probably do what Linus does: Just focus on the very core and let someone else try to make everyone happy!
Open Source to the rescue! (Score:1, Informative)
Hail Open Source, Death to binary only!
Re:Open Source to the rescue! (Score:3, Insightful)
OO design (Score:1, Insightful)
Re:OO design (Score:2, Funny)
JDK-1.0.2/ Wed May 6 00:00:00 1998 Directory
JDK-1.1.1/ Wed May 6 00:00:00 1998 Directory
JDK-1.1.2/ Tue May 5 00:00:00 1998 Directory
JDK-1.1.3/ Wed May 6 00:00:00 1998 Directory
JDK-1.1.5/ Tue May 5 00:00:00 1998 Directory
JDK-1.1.6/ Thu Oct 8 00:00:00 1998 Directory
JDK-1.1.7/ Sun Jun 13 00:00:00 1999 Directory
JDK-1.1.8/ Tue Aug 8 00:00:00 2000 Directory
JDK-1.2.2/ Thu Oct 19 00:00:00 2000 Directory
JDK-1.2/ Wed Aug 11 00:00:00 1999 Directory
JDK-1.3.0/ Wed Jun 13 18:05:00 2001 Directory
JDK-1.3.1/ Thu Jul 5 18:05:00 2001 Directory
Hedley
Re:OO design (Score:1, Insightful)
Hardly, since these issues have *nothing* to do with methodology as such. A well-designed structured system will be more amenable to being backwards compatible than a poor OO design.
If the design of the external interfaces (file formats, network protocols, etc) allows for future features, specifies 0/blanks for reserved fields, etc, then it's much easier to be backwards-compatible. This has nothing to do with OO.
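For illustration, a rough sketch of that kind of forward-looking file header (the field names and "MYAP" magic are invented for this example, not taken from any real format):
#include <cstdint>
#include <cstring>
struct FileHeader {
    char     magic[4];      // identifies the format, e.g. "MYAP"
    uint16_t version;       // bumped only when the layout really changes
    uint16_t flags;         // old readers are told to ignore bits they don't know
    uint32_t data_offset;   // old readers seek straight here, skipping any new header fields
    uint8_t  reserved[16];  // written as zeros today, available tomorrow
};
FileHeader make_header(uint32_t data_offset) {
    FileHeader h;
    std::memset(&h, 0, sizeof h);        // reserved space really does go out as zero
    std::memcpy(h.magic, "MYAP", 4);
    h.version = 1;
    h.data_offset = data_offset;
    return h;
}
An old reader that honours data_offset and ignores unknown flag bits keeps working when a newer writer starts using the reserved space.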
And if you're talking Java or C++, each of those languages has ancestral versions which make more recent versions have interoperability fits (old vs new GUI event model, older name mangling/namespaces).
Re:OO design (Score:4, Insightful)
It takes a lot more than OO design to solve this problem. The fact is that initial designs are almost always naive, and lack the specific flexibilities (and inflexibilities) the Ideal Solution would require. So, very often, are second iterations. But the third time's usually the charm.
The need to depart from backward compatibility is the result of correcting design flaws. And design flaws happen, regardless of your programming methodology. Design flaws are less a product of programming methodology, and more a product of not completely knowing the problem.
Java accommodates design flaws a little bit better not by being object-oriented, but by relying so heavily on dynamic binding. While this can make recovering from Bad Design simpler, it only lets you create bandaids that do nothing to fix the original design problems. And all that dynamic binding costs Java in performance across the board.
Re:OO design (Score:3, Insightful)
Java accommodates design flaws a little bit better not by being object-oriented, but by relying so heavily on dynamic binding.
FYI, dynamic binding is a crucial element of OO programming; programmers wouldn't be able to downcast without it. It does indeed create a lot of slowdown, but researchers have done flow-analysis work to eliminate much of it.
Concerning the design: Yes, you're right. Nobody starts with a "perfect" design. We extend it, twist it and twirl it until it becomes a huge mess and we have to scrap the whole thing and rethink it.
But design flaws can be minimized. That doesn't depend on programming methodologies, but rather on design methodologies -- which are still hotly debated. Using models (e.g. UML models) to depict a huge project may be worthwhile, because we can get a quick overview of what we are up to. Most of the time we can locate the design mistakes pretty quickly.
Design flaws can also be minimized by documenting specifications (which is pretty much a "meta-programming" approach). Sadly, nobody wants to do this. If you can get it done, there are a lot of tools (albeit still in research) that can automatically check whether your program conforms to the spec or not.
Re:OO design (Score:1)
I like OO as much as the next guy (more, actually), but this is untrue. Since you bring it up, let's use Java as an example: do you use Swing or any of the Java 1.2 APIs? If so, you're not compatible with 1.1, so you have to make sure all your customers upgrade to the right version.
For projects where you have 100 users that's not a big deal, but once you have millions it becomes a huge deal. It becomes a bigger deal when you're thinking about using system APIs not available on older versions of the OS.
Usually the solution is to dynamically use the library and only offer the feature on newer platforms (LoadLibrary/GetProcAddress on Win32), but after a certain point that leads to the mess the author was talking about.
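For the curious, a minimal sketch of that Win32 pattern. The particular call picked here (GetNativeSystemInfo) is just an arbitrary example of an API that only exists on newer systems; error handling is trimmed:
#include <windows.h>
typedef void (WINAPI *GetNativeSystemInfoFn)(LPSYSTEM_INFO);
void query_system_info(SYSTEM_INFO *si) {
    HMODULE kernel = LoadLibraryA("kernel32.dll");
    GetNativeSystemInfoFn fn = kernel
        ? (GetNativeSystemInfoFn)GetProcAddress(kernel, "GetNativeSystemInfo")
        : NULL;
    if (fn)
        fn(si);              // newer platform: use the richer call
    else
        GetSystemInfo(si);   // older platform: fall back to the original API
    if (kernel)
        FreeLibrary(kernel);
}
The program links only against APIs every supported Windows version has, and the newer feature is probed for at run time.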
Re:OO design (Score:3, Informative)
For those who are not familiar with the concept: you don't deal with objects per se; instead you deal with interfaces to implementations of objects.
For example
ICat *pCat = NULL;
pCat = MakeCat(IID_ICAT);
pCat->Meow();
delete pCat;
COM does it a little differently but the basics are there. You request an implementation to an interface, not the object itself. The way this works for different versions is that instead of IID_ICAT you can have IID_ICAT2 and ICat2 interfaces without having to break your old ICat clients. The implementation could even share much of the old code.
For example:
ICat2 *pCat = NULL;
pCat = MakeCat(IID_ICAT2);
pCat->MeowLoudly();
delete pCat;
Admittedly it's not the most elegant design, but it works in the sense that you're not breaking old clients and still have room to support new interfaces.
-Jon
Re:OO design (Score:2)
But there's a much better way to do that: versioned libraries as in Unix. With library versions you don't have to insert all that compatibility guck inline in your code, and the library functions don't need the overhead of having to decode it. It's all done at link time. It doesn't make a bit of sense to pass the version information with every call; it should be constant across your whole application. Hmm, you could conclude that Microsoft designers knew what they wanted to accomplish but didn't know how to accomplish it, oh well.
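If it helps, a rough sketch of the Unix scheme (the library name and build lines below are made up for illustration):
// meow_client.cpp -- client of a hypothetical versioned library.  The version
// is fixed once, at link time, via the library's soname; no call passes any
// version information.
extern void meow();     // implemented in libcat
int main() {
    meow();             // resolved against whichever libcat.so.N we linked with
    return 0;
}
// Build sketch:
//   g++ -fPIC -shared -Wl,-soname,libcat.so.1 -o libcat.so.1.0 cat_v1.cpp
//   g++ -fPIC -shared -Wl,-soname,libcat.so.2 -o libcat.so.2.0 cat_v2.cpp
//   ln -sf libcat.so.2.0 libcat.so            # dev link used by -lcat below
//   g++ meow_client.cpp -L. -lcat -o meow     # records libcat.so.2 in the binary
// Old binaries that recorded libcat.so.1 keep loading the old implementation;
// newly built ones pick up libcat.so.2.  ldconfig maintains the .so.N links.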
And yes, you can get problems with versioned libraries too, mostly because people may not have all the libraries they need. But that problem is solved definitively by apt-get, and soon apt-get will be standard for rpm's as well. As soon as I started using apt-get, all my library problems just vanished.
No doubt Microsoft would like to pull off something like that, but they never will, because the whole apt-get packaging approach depends on continuous feedback from informed users with source code on hand -- and by that I don't mean going through two months of negotiation to see the piece of source code you need, then starting the whole process over again an hour later when you hit the next problem.
Re:OO design (Score:2)
Or, to put it another way, users are unnecessarily burdened with issues completely irrelevant to the task at hand.
Sure, if you consider having a smoothly functioning distribution/update system irrelevant. Did I mention that said informed users seem more than happy to shoulder this burden so that the other 99% don't have to? No? Think about it.
Re:OO design (Score:2)
So, necessarily.
Re:OO design (Score:3, Insightful)
A properly designed and maintained OO system will alleviate some of the backward compatibility issues. The thing is, most OO systems are not well maintained, even if they were reasonably well designed originally. Most developers, in my experience, are far too reluctant to refactor properly.
This generally boils down to one repeated mistake. When changed requirements suggest a change in the responsibilities of your classes and/or the relationships between them, too many developers try to change the implementation or interface of classes in isolation instead.
Unfortunately, in the long run, no amount of interface hackery can make up for an inappropriate distribution of responsibilities. It's just fudging the design, and the one thing you don't want to do with an OO design is mess it up with fudges.
The reason you get to this "it's got no chance, let's start from scratch" syndrome is that too many fudges were allowed to accumulate. Instead of refactoring as they go along, keeping a design with clear responsibilities and relationships, programmers try to shove things in where they don't really belong. Once or twice, you can get away with it, but it doesn't scale, and ultimately leads to an unmaintainable mess.
Of course, the problem is that adopting the "I'll just fix it here" mentality is, in the immediate timeframe, quick and cheap. In the long run, it's much more expensive -- a rewrite from the top will do horrible things to your budget -- but people rarely consider that far ahead.
Such is the curse of the OO paradigm, where the benefits are only to be had by those who think: most people don't.
Compatibility is crucial (Score:2, Insightful)
Even if you are writing a minuscule app for your own use, that could not conceivably have any use for anyone else, you should always adhere to the following rules:
Re:Compatibility is crucial (Score:3, Insightful)
1.0, 1.1 or 1.2? IBM or Sun or Blackdown? AWT or Swing?
Re:Compatibility is crucial (Score:2)
Re:Compatibility is crucial (Score:3, Insightful)
So where does one get a box full of all these unixes? I do agree with you on that part about broad testing. But I can't afford to buy all those boxes (especially not an IBM zSeries). And finding shell accounts (especially root ones) for various testing seems to not go beyond Linux, FreeBSD, and a handful of OpenBSD and Solaris. Even the IBM mainframe accounts are available to only a few people, and then for a limited time (have you ever known an open source project to take 3 months and stop development then because it's "done"?). I do have Linux, FreeBSD, and OpenBSD on Intel, and Linux, OpenBSD, and Solaris on Sparc. What else would you suggest?
As for Java ... I'm waiting until environments are built that can do what I do now in C. I'm hoping gcj [gnu.org] will let me do at least some of these things in Java. But there are some things I doubt it can ever do. You can prove me wrong by rewriting LILO and init into Java.
Re:Compatibility is crucial (Score:2)
Ask the people behind the software porting archives for the various platforms, they might be able to hook you up with either hardware or people. Or ask for porting/testing help on newsgroups or mailing lists. There are a lot of people out there with loads of junk hardware. You could try calling the companies in question, and maybe they can put you in touch with someone inside who might be able to arrange either access or testing, but finding the right person would be difficult. You'd probably have better luck finding someone who works for the companies by going over opensource development mailing lists. Oh, and VA/Sourceforge has a compilefarm with several platforms too.
Of course, those options are only valid if you're doing opensource work and your application is both interesting and reasonably well written. Do use autoconf and automake, it makes porting a bit less painful.
If you do proprietary software... well, good luck. There's a reason that software is more or less supported on the various platforms...
Re:Compatibility is crucial (Score:2)
I'd think that if you've got big endian/little endian platforms, and SysV/BSD, that's a pretty good start.
Did I mention, use autoconf, and it will make porting to anything else much easier
Re:Compatibility is crucial (Score:2)
The microcode in your CPU isn't written in C, I guess C isn't good for anything either. Anyway, my bootloader's written mostly in forth, which does specify a VM.
Re:Compatibility is crucial (Score:2)
Personally, I *hate* libraries. Then again, I'm running RedHat on x86 using "standard" apps.
My problem is that I *never* have whatever library the program I'm trying to install requires, and all too often, I have no idea where I might *find* the library. And I've had libraries that can't be installed due to other library dependencies... *grrr*
I guess my problem isn't with the actual use of libraries, but with their implementations. Why don't developers offer a version of their packages *with* every library you may need? Personally, I'd be soooo much happier if I could just run a script to install all the libraries.
Of course, then you have to worry about how well that script works, and if it'll overwrite my libraries with older / non-working libraries. But you get the idea... :)
Re:Compatibility is crucial (Score:2)
www.rpmfind.net. You'll also find that a lot of the problems are caused by binary incompatibility. Binary compatibility problems don't have any easy solution.
Why don't developers offer a version of their packages *with* every library you may need?
Because such a package would be enormous.
A better approach would be to just link to the required libraries.
Of course, then you have to worry about how well that script works, and if it'll overwrite my libraries with older / non-working libraries
Again, binary compatibility. If this is a big problem, just compile from the source. Or get versions of the library that were compiled against your distribution.
Re:Compatibility is crucial (Score:2)
Here you're getting into the sort of territory that is usually inhabited by mostly proprietary, paid-for software. With very few exceptions I've found that paid-license software will have the extra touches, like libraries included with the install media (and usually an installer clever enough to only install a library if you need it, so it won't break your system by overwriting a later version of the same library), or at least the URLs of where to get the libraries or patches to bring your system to the required level to run the software. I rarely see a free/open source package that has that. You do get what you pay for, usually.
My interpretation of the post was that the coder should put as much functionality as possible into libraries or functions rather than the main executable(s), to aid modularisation and (probably) improve code sharing amongst modules of the app, rather than relying on external libraries over which they have no control. Those libraries I would expect to definitely be in the install media.
My thoughts on backwards compatibility are to try to avoid changes to things like file formats and APIs unless absolutely necessary. With file formats, try to support as many older versions as you can, ideally for both save and load; as a compromise you could make it such that it will load all previous versions but only save the current and last two versions, or something like that. With APIs, if the way a call is made changes, where possible provide a wrapper/interpreter for the old API call that translates it to and from your new functionality. Anything that changes file formats or API calls in a non-transparent way increments the major version number. Provide a migration path and utilities to aid migration.
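As a purely illustrative sketch of that load-everything / save-only-recent policy (the version numbers, Document structure and per-version stubs are invented):
#include <stdexcept>
#include <string>
struct Document { std::string body; };
// One loader per historical format, one saver per still-written format.
// Real parsing/writing omitted -- these are illustrative stubs.
Document load_v1(const std::string &path) { return Document(); }
Document load_v2(const std::string &path) { return Document(); }
Document load_v3(const std::string &path) { return Document(); }
void     save_v2(const Document &doc, const std::string &path) {}
void     save_v3(const Document &doc, const std::string &path) {}
Document load(const std::string &path, int file_version) {
    switch (file_version) {
    case 1: return load_v1(path);      // every older format stays loadable
    case 2: return load_v2(path);
    case 3: return load_v3(path);      // current format
    default: throw std::runtime_error("file is newer than this release");
    }
}
void save(const Document &doc, const std::string &path, int version) {
    switch (version) {                 // only the current format and the one before it
    case 2: save_v2(doc, path); break;
    case 3: save_v3(doc, path); break;
    default: throw std::runtime_error("format too old to save; use v2 or v3");
    }
}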
Also document, document, document. I'm deathly serious. Document everything. When you make a change to the source code, put in a comment saying what you did, why you did it and how to undo it (e.g. if you make a change to a line, copy the line, comment out the original and change the copy -- although this is really only advisable for a few single-line changes; anything bigger and you'd probably want to use something like RCS or SCCS to maintain your change history, which is another advantage of modularising your code, as that way the number of changes per file will be small, so you only have to roll back and reimplement a small part of your code to reverse a change). It makes it a heck of a lot easier to maintain your code later. Document all changes in the Release Notes, Migration Guide, Installation documentation and the User Guide; odds are that users will read at least one of these, and if they don't, you can at least tell them to RTFM with some moral justification.
Who Needs Backwards Compatibility (Score:2, Funny)
From the M$ Software Development Policy Manual... (Score:3, Funny)
Break it all at once (Score:5, Insightful)
The other thing is, try to design to keep this from happening. Expose APIs that don't need to change much instead of the actual functions or objects that you use. One more level of indirection won't kill your performance in almost every case, but it will give you a whole lot more room to re-engineer when you decide you have to.
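A tiny example of that extra level of indirection (the names here are invented): callers only ever see a small, stable interface, and the engine behind the factory can be re-engineered at will.
#include <set>
#include <string>
// The published, stable surface -- the only thing callers depend on.
class SpellChecker {
public:
    virtual ~SpellChecker() {}
    virtual bool check(const std::string &word) const = 0;
};
// Today's engine; a smarter one can replace it later behind the same factory.
class WordListChecker : public SpellChecker {
    std::set<std::string> words_;
public:
    WordListChecker() { words_.insert("cat"); words_.insert("meow"); }
    bool check(const std::string &w) const { return words_.count(w) != 0; }
};
SpellChecker *make_spell_checker() { return new WordListChecker(); }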
All that applies to the case where you control the interface and you need to change it. When you're publishing source code and want to decide what tools you can expect the user to have to make use of it, that's a marketing decision and not a technical one. You're talking about how many people will be excluded from your audience if you use GTK or assume a conformant C++ compiler. Technically the newer tools and libs are generally better, that's pretty clear. I think it's going to be a judgement call on the part of the developers as to how much they care about a lot of people being able to use their code. If they are willing to wait for the world to catch up before being able to use their program, then they can use the latest and greatest. If not, then they have to aim at a realistic profile.
Re:Break it all at once (Score:2)
So what you're saying is, Microsoft should've given up on backward compatibility a long time ago?
Apple's plan (Score:3, Interesting)
For instance, when they migrated to the PPC architecture, they made apps that ran on both platforms and older OSes, then capped development and froze the older version at whatever version number, then developed for newer machines. The rest of the apps followed; for example, AOL 2.7 was AOL's last 68k release, and they have developed only for PPC since then, although apps like Opera still make a 68k version alongside a PPC version.
Of course, you need to know your users. Will they be satisfied with a frozen and version-capped release, provided there are no bugs?
Re:Apple's plan (Score:5, Insightful)
Often, backwards compatibility problems can be avoided by careful design. Leave room for improvements. Designate certain structures as ignorable. Presume that the current incarnation of the code is not the final version.
Design for elegance. If the current code is relatively clean, then chances are that it will be easier to tack on an addition later on. Include stubs for improvements that you can foresee adding later on -- even if you can't perceive the exact form of the improvement at the time. When you tack on the addition, try and do that elegantly too.
With languages, you can sometimes avoid backwards compatibility problems by not using the latest and greatest features just because they're there. (It also allows you to avoid creeping-featurism growing pains.)
If using a new feature makes a big difference in the implementation of a solution, then use it, but at least document it. It keeps you more conscious of the break, and makes life easier on the people who have to rip out your code and re-implement it on the older system that you thought nobody was using.
Anecdote: A friend of mine recently found out that the security system where he had a storage locker was run by an Apple IIc. The box was doing a fine job of what it was designed to do 20 years ago. Just because it's old doesn't mean it won't work.
2 Versions (Score:1)
For dependencies like the compiler mentioned in the question, if it was available in said two-previous-versions of the distribution, it is a reasonable expectation.
Just my 46 Lira.
Possible Compromise (Score:5, Informative)
I think there are several approaches you may choose from.
If you think your program/library has been widely adopted by many people, it would be very very hard to scrap the old one and start anew. You will provoke the wrath of other developers that use your program/library. If this is the case, then the possible compromise can be:
To do revamping, you'd probably want to look at how the Windows COM approach works. I hate to say this, but that approach is generally good, and I don't know whether I can come up with a better solution.
Or alternatively take an OOP approach. OOP can help modularize your code if it is done properly. (That's why KDE rocks.)
That's my 2c, though.
Small app story (Score:3, Interesting)
Re:Small app story (Score:2)
Re:Small app story (Score:2)
That's what I did in my application (which kaim/kinkatta borrowed from, serendipitously enough). When I first wrote it, it was at a furious pace and I put in a quick-n-dirty file format. Then when Qt got XML support, it only made sense to use that instead. So I changed the file format over, but kept the old way around. It now converts automatically from the old to the new (with an info message to the user that it is doing so). It has caused a bit of code bloat, but by the time I reach version 1.0, it will have been long enough to be safe to drop the old way.
That episode was what got me started in the first place pondering backwards compatibility, and it eventually led me to post this topic's question.
Or you could use a data format that's a bit more extensible; this seems to be the real problem.
By using XML, I expect full extensibility. But I can foresee one problem kinkatta may run into (if they haven't already). Using XML is great, awesome, and the Right Thing(tm). But KDE still uses a group/name/value ASCII format for its settings files. Is it better to go against the KDE flow by using XML, or use the existing KDE API for settings? Hopefully KDE will switch to XML for its settings in 3.0.
MS went overboard (Score:3, Insightful)
Luckily open source doesn't have to suffer from the issue as much, since source availability ensures that old software can often be tweaked or sometimes just recompiled to make it work with new versions of dependent libraries.
How long to maintain backwards compatibility is really the question of your business domain. An in house app can probably be changed significantly without impacting many people while a widget library (like QT for example) must maintain backwards compatibility for at least a couple of minor versions. The ability to simply recompile old code after a major change in the library is a welcome feature too.
Re:MS went overboard (Score:3, Interesting)
Re:MS went overboard (Score:3, Insightful)
Well, speaking as an experienced Dos programmer, the Dos interface is really a piece of crap. Nothing is defined with any kind of generality or foresight. Look at the "critical error handler" interface for example, it's unbelievably awkward and unworkable. Look at what happens if you shell to Dos, start a tsr, then exit the shell. Boom. This is broken by design. Just try to write a program that recovers control when a child aborts, that's when you find out how messed up the resource handling is, and the exit handling. The child tries to exit back to Dos instead of its parent unless you hack into all sorts of undocumented places, which everybody eventually has to do for a big system. All that undocumented-but-essential internal interface dung has to be supported, right?
That said, no I don't think Microsoft made a mistake in supporting old Dos code, it's all part of maintaining the barrier to entry, the more messed up it is, the more expensive to replicate. Never mind that the whole sorry tale isn't good for anyone but Microsoft.
Well, now that we have several real operating systems as alternatives, there's no longer any advantage to Microsoft in keeping Dos on life support. When Dos gets painful, people just walk now. But Microsoft still has to support it or everybody will start publishing articles about how Windows doesn't work, and they have to keep all those "enterprise" solutions that consist of Dos programs and BAT files running. Now Dos support is just an expensive liability Microsoft can't escape. Heh, but the rest of us can, and do.
The original Unix interface on the other hand, was quite well designed. 30 years later, for the most part the whole thing is intact and still in use, functioning reliably in enterprise-level multiprogramming environments. So, no, maintaining compatibility with that old-but-good interface is unquestionably not a mistake.
The short story is: the whole Dos interface was a mistake, and Microsoft now has to live with it. The unix interface was not a mistake and we're happy to carry it forward.
There are better examples (Score:2, Insightful)
Not really; IBM on the mainframes and Intel on the 80x86 are the biggest backward compatibility stories around (over 20 years for Intel, over 30 for IBM, I would guess).
Re:MS went overboard (Score:2)
Microsoft is the most notable supporter of backwards compatibility
Well, if you ignore Intel.
Intel have kept binary compatibility for decades now; even the latest Pentium III will run code written for the 8086 -- maybe with different timing/cycle counts -- but certainly the opcodes still do what they used to.
After playing with reverse engineering and disassembly for the past few years, I can think of at least several 80[01]86 instructions I've never seen in a single binary -- yet they're still there, due to backwards compatibility.
Reminds me of an old joke... (Score:4, Funny)
Re:Reminds me of an old joke... (Score:3, Funny)
And that reminds me of a quote I heard some months back.
"God created the world in 6 days. Perl gods are not impressed."
browsers? (Score:1)
After a while your normally sensible and readable code becomes a nightmare spaghetti tangle of conditions, macros and multiple re-inventions of the wheel
This describes some of my JavaScript for handling multiple browsers to a tee.
Our very means of communication... (Score:1)
I wish I knew the answer to when enough was enough, but I am at a loss. I will say, however, that I make my decisions of how far to go back in terms of versions by how much flexibility and usability the older versions offer. Anything less than version 4 of either Netscape or Internet Explorer offers very little in terms of flexible design.
But honestly, where do we set that limit? I don't think it's the best thing to do to say, "Conform or die!" If that's how things were, then the world would be swarming with Hitler's Nazis.
"You there! Cake or death!?" Uhh... death, no, no, I mean cake! "Ha, ha, you said death first!" I meant cake! "Oh, alright.. here's your cake. Thank you for playing Church of England, Cake or Death?"
rules of thumb (Score:5, Insightful)
1) Support whatever 90% of your users are using
2) Support the prior two versions
If you can't do the above, make a clean break and give it a new name or change the major version number and list the changes in the release notes.
If you have to make a clean break, if possible:
1) Provide a migration path
2) Provide an interop interface
And above all, listen to your users.
Re:rules of thumb (Score:2)
1) Support whatever 90% of your users are using
2) Support the prior two versions
Latest stats are showing IE5 to have 82% of the market, IE4 with 6% and IE6 with 1.5% - pretty much 90%
Another 2% is spiders.
So should we only support IE? Or should we support the standard, as it is guaranteed to work in all future browsers as well?
Re:rules of thumb (Score:3, Insightful)
If you are targeting the world in general and want to make money or want the widest possible dissemination of your code, then I'd reluctantly say "Yes", give IE preferential consideration over the more obscure browsers. Or, stay conformant to the HTML/XML specs and support everybody that's conformant, but be willing to accept the price of losing specific capabilities.
If you have code that works with Netscape and not IE or vice versa, then why not contract (or develop yourself) for a plugin or activeX control that will provide that capability? Or, bundle an existing control with your code?
But that is getting away from the "backwards compatibility" issue. The previous poster mentioned the rules of thumb, with which I wholeheartedly agree. If your code has to change so much that you break backwards compatibility or produce difficult-to-maintain code, then release it as a new product or new major version number. And support the last two versions of your product. Makes good sense to me.
Re:rules of thumb (Score:2)
1) Support whatever 90% of your users are using ... Latest stats are showing IE5 to have ... pretty much 90% ... So should we only support IE?
That's 90% of YOUR users, not 90% of the market you're trying to penetrate.
If you wrote WhizzyWord, and 90% of your users have migrated up to v1.2, you can dump any v1.1-only "features" and clean it up. This has nothing to do with Microsoft Word, KWord, AbiWord, WordPerfect or any other word processor app on the market.
Re:rules of thumb (Score:2)
Not sure where you got the 82% figure, but if you design your site based on the most popular browser according to market research rather than the audience most likely to visit your site you may run into trouble.
Consider a "Best Viewed with Internet Explorer" logo on a Linux Howto site. Probably won't get much traffic...
Re:rules of thumb (Score:2)
I'm surprised, and pleased, that your stats show konq to have such a (relatively) high market penetration. I use it almost exclusively (occasionally lynx or sometimes mozilla get in the way), but I had no idea so many people did.
I'm afraid on some sites I have to pretend I'm an IE user, though -- as otherwise I don't get in (even though they work fine in konq and moz).
My figures were from PC Pro, a UK magazine, October 2001 edition. I haven't read it for a long time, and I'm unimpressed by the MS ass-kissing they have in there. I only bought it because it had a tux on the front; pretty much the entire Linux content of the magazine was that one cover picture, though.
Or, if you really want to annoy people... (Score:4, Funny)
But first, go read "How to write bad code," and start following those suggestions too.
Microsoft's solution... (Score:5, Informative)
Microsoft face the issue that they want to maintain backwards compatibility with everything, so they can leverage off the existing popularity of their platform. This means they can't let new libraries break old applications.
They had a problem once with MFC, where they updated its memory allocation scheme to make it more optimal. Problem was, a whole lot of old apps happened to work because they accessed memory incorrectly, but the old memory allocation scheme didn't reveal their bugs (I think they were doing buffer overruns, but the old scheme allocated a bit of extra room). Anyway, Microsoft released an updated MFC DLL, and suddenly old applications started breaking. It wasn't really MS' fault, but it was a big event, and I think it was the last time they touched old code like that!
Anyway, this is where they have their "DLL hell" problem too. Different apps are written against different versions of a DLL, many times with workarounds for known bugs in those DLLs. A new DLL comes along (installed by another application) and suddenly working applications start breaking.
So here comes COM. I've encountered it with DirectX, and it works like this. When you request something like a DirectDraw object or a DirectDrawSurface object, you also tell the class factory what *version* of the interface you want. Then you're provided with that actual implementation of the library. If you write your game against DirectX 2, but the user has DirectX 5, well your request to the DirectX DLL will actually give you the version 2 implementation! Which is cool, because if you worked around old bugs, those bugs are still there; they're not fixed for you!
As far as I know, you're not getting an "emulation" of the old implementation either; I'm pretty sure they just include binaries for each version of the interface. They could easily dynamically load whichever one you request.
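Roughly, from the client's side it looks like the sketch below (error handling trimmed, and the calls are from memory, so check the DirectX headers before copying anything):
#include <windows.h>
#include <ddraw.h>
void init_old_style(HWND hwnd) {
    IDirectDraw  *dd1 = NULL;
    IDirectDraw2 *dd2 = NULL;
    if (FAILED(DirectDrawCreate(NULL, &dd1, NULL)))
        return;
    // The client asks for the interface version it was written against; a
    // DirectX 2 era game requests IID_IDirectDraw2 and keeps the version 2
    // behaviour even when a much newer runtime is installed.
    if (SUCCEEDED(dd1->QueryInterface(IID_IDirectDraw2, (void **)&dd2))) {
        dd2->SetCooperativeLevel(hwnd, DDSCL_NORMAL);
        dd2->Release();
    }
    dd1->Release();
}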
Of course it means bloat, but there's no other solution. If you want backwards compatibility, you can't fake it, because it's all the really subtle things (like bugs that people worked around) that will bite you in the butt. It's been Microsoft's nemesis but its strength through the years, just as it has been Intel's too. Both companies need to remain backwards compatible with a *LOT* of stuff out there, and so they're forced to have all this legacy architecture. It would be nice to start from scratch, but then they'd be levelling the playing field, and that's no fun
Of course, this backwards compatibility drama affects everyone, not just Windows people. Just the other day someone installed a new PAM RPM on my Mandrake Linux. It installed fine, being a newer version, but some subtle change in behaviour meant I could no longer log into my own box. That's no fun either
- Brendan
Re:Microsoft's solution... (Score:4, Interesting)
Sad to think that I can now find just about anything on the internet, and it all started because I had to support Windows 3.x.
But it is nice to think that I was running Linux as my primary box in my office in 1995, using lynx to browse as I debugged my new Windows 95 installation on my other box.
Just food for thought for the younger admins out there: DLL hell was the largest part of my job at many sites. Believe it or not, it has gotten much better in recent years. Think of video drivers messing up fax software, network drivers killing the accounting package, and video conferencing software killing the spreadsheet program. As sick as it sounds, Win 9x is a dream to support by comparison.
4:30 in the morning, I must be getting delusional.
Re:Microsoft's solution... (Score:2)
Re:Microsoft's solution... (Score:5, Informative)
Also, when there is an opportunity to make a clean break (i.e. the Win32 API), they only make half the changes they should or could. Have you seen the number of functions which end in "Ex" in the Win32 API? Ever noticed how the Win9x series is full of 16-bit user interface code? Or how only half of the NT kernel objects are supported under Win9x? In DirectX, it took between 4 and 8 major versions of DirectX to create something which was really worthwhile (depending on your opinion), and major changes weren't implemented when they should have been.
In regards to using COM, this helps versioning, but does not help the bad design problem. Even with full versioning, COM interfaces still have reserved fields in them, which is unnecessary if you're bundling all the previous versions.
Besides, not all compatibility problems can be solved by COM etc. Microsoft are getting better at providing slicker interfaces, but I don't feel that the underlying design is improving as it should be. For example, using Automation objects (which is a disgusting kludge for VB "programmers", to put it nicely) in Office apps is still a pain. (In particular, Outlook 2000's object model and MAPI implementation is inconsistent and buggy as hell.)
So yes, versioning does help alleviate backwards compatibility problems where they can't be avoided, but nothing is a substitute for good design.
Re:Microsoft's solution... (Score:2, Interesting)
The Windows 95 design team had clear objectives:
1. We need an OS with protected memory, etc.
2. It MUST run 99% of apps from Win 3.11.
The only way to accomplish that was to retain tons of old 16-bit code and wrap it in 32-bit stuff. Windows 95 is actually one of the most amazing pieces of software ever written.
Consider the fact that the average RH release has problems running most apps written a mere year before that particular version was released.
"Or how only half of the NT kernel objects are supported under Win9x"
That is genuinely MS's fault. They had communication problems between the NT and Win95 teams.
Re:And Linux got it right the first time? (Score:2, Insightful)
But all that aside, yes, Linux did get it right the first time, in that they went with POSIX. UNIX has been around long enough that the general APIs are pretty stable. There have been some bumps -- glibc comes to mind, since libc5 didn't support internationalization to the extent required, and the a.out to ELF conversion -- but these things happen to all software. The vast majority of software needs little to no change between Linux versions.
Sun's done a real good job with this from what I can tell, since the same app running on solaris 2.3 on a sun4m machine can be run on an E10000 with solaris 8 without a recompile (although changing it would certainly help performance in many cases). They've had to deal with a lot of things, like the conversion from 32bit to 64bit, and have handled it pretty well.
Now as for software written for libraries like GTK and whatnot, well, you knew what you were getting into when you wrote it. If you want something whose API doesn't change, program for xlib or motif (thanks to lesstif and the free motif clause, just about everyone can run motif apps).
Many of us who were Linux users from way back don't necessarily want a nice, stable platform that never changes, never pushes new ground. If that takes a bit of compatibility breaking, then so be it -- the old libraries are still there if you want to use them. I remember having both libc5 and glibc on the same system, and I remember when you could only get certain key apps for libc5 even after most people changed to glibc (Netscape, for instance). We try to limit such changes to the major version numbers, and the standard UNIX way of handling this makes it no problem to have multiple major versions of the same library on the system. It's our answer to the DLL hell problem, and it works rather well.
Re:Microsoft's solution... (Score:2)
And neither are the exploits.
Sounds like library versions (Score:2)
1. The developer may think their change is not significant enough to warrant a new version. When they are wrong, you have the same problem as before. Trying to avoid this would result in thousands of versions, and is better solved by statically linking.
2. The different versions still need to talk to underlying layers. Unless something like VMWare is used to run several copies of Windows at the same time, there is a layer where backward compatibility needs to be worked on. This could be solved by making a simple, well-defined lowest layer, but it is obvious that Windows (and anything in Unix designed after 1980) does not follow such design principles.
One thing MicroSoft may be addressing is the obscurity of Linux library versioning. I have been writing software for this for 8 years and I still have no idea what figures out which version of a library to use, or how it works. It would seem that the filenames are all that are needed to control it, but apparently that is not how it works. Nor do I know how to find out what version of a library a program wants (ldd prints the version it will load, which is not exactly the same).
Major version breakage (Score:4, Insightful)
Typically, users expect breakage--or, at the very least, problems related to upgrades--with major versions. With minor versions, they don't expect breakage.
Follow the Law of Least Surprise. If you break backwards compatibility, up the major by one.
As for when to break backwards compatibility, that's a much harder question. The obvious answer is a little philosophic: not all engineering problems can be solved by saying ``screw backwards compatibility'', and some engineering problems cannot be solved without saying it.
The trick is learning which is which.
Compatibility in formats and protocols (Score:4, Insightful)
A lot of the discussion seems to be related to issues of things like programming languages and operating systems (which are important). But what about keeping up with old formats and protocols? I think the issue is more one of what your project works with, than it is what language you choose (including the OS as part of the former).
I'm not so much looking for specific answers to the above questions, but rather, a general idea of how you think one should go about deciding those issues to come up with the best answers in some given situation.
Answers. (Score:3, Insightful)
POP3 and IMAP4 are not 'new versions' of each other... neither is outdated. One is not a replacement for the other. Needs are dictated solely by users.
Your web site should require no more functionality than needed to operate the way you want it to. That's just good programming. Don't use cookies or JavaScript or Java if you don't need to.
You can stick to Unicode, because ISO-8859 maps into it properly.
Re:Compatibility in formats and protocols (Score:2)
UTF-8 and ISO-8859-1 can be supported at the same time for all real documents. This is done by treating all illegal UTF-8 sequences as individual 8-bit characters, rather than an "error". This also makes programming easier because there are no "errors" to worry about. For an ISO-8859-1 document to be mis-interpreted under this scheme you would need an 8-bit punctuation mark followed by two accented characters, which is not likely in any real European document.
It is also necessary to treat UTF-8 sequences that code a character in more bytes than necessary as illegal, this is vital for security reasons.
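For the curious, a rough sketch of such a decoder (BMP-only, surrogates ignored -- my simplifications, not the parent's): anything that isn't a legal, shortest-form UTF-8 sequence falls through as a single 8-bit character.
#include <string>
#include <vector>
std::vector<unsigned> decode(const std::string &in) {
    std::vector<unsigned> out;
    size_t i = 0;
    while (i < in.size()) {
        unsigned char c0 = in[i];
        if (c0 < 0x80) {                                  // plain ASCII
            out.push_back(c0); i += 1;
        } else if ((c0 & 0xE0) == 0xC0 && i + 1 < in.size() &&
                   ((unsigned char)in[i+1] & 0xC0) == 0x80) {
            unsigned v = ((c0 & 0x1F) << 6) | ((unsigned char)in[i+1] & 0x3F);
            if (v >= 0x80) { out.push_back(v); i += 2; }  // legal 2-byte sequence
            else           { out.push_back(c0); i += 1; } // overlong: treat lead byte as 8859-1
        } else if ((c0 & 0xF0) == 0xE0 && i + 2 < in.size() &&
                   ((unsigned char)in[i+1] & 0xC0) == 0x80 &&
                   ((unsigned char)in[i+2] & 0xC0) == 0x80) {
            unsigned v = ((c0 & 0x0F) << 12) |
                         (((unsigned char)in[i+1] & 0x3F) << 6) |
                         ((unsigned char)in[i+2] & 0x3F);
            if (v >= 0x800) { out.push_back(v); i += 3; } // legal 3-byte sequence
            else            { out.push_back(c0); i += 1; }
        } else {
            out.push_back(c0); i += 1;                    // stray byte: pass through as-is
        }
    }
    return out;
}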
I am also in favor of adding MicroSoft's assignments to 0x80-0x9F to the standard Unicode. These are pretty much standard in the real world now anyways. This will make some more sequences (one MicroSoft symbol followed by one accented character) mismatch under the above scheme, but you are likely to get such documents anyway whether you interpret the MicroSoft characters or not.
Re:Compatibility in formats and protocols (Score:2)
And we can forget about the rest of the world that doesn't use ISO-8859-1 or UTF-8? This will hopelessly munge any documents in a different charset; whereas a UTF-8 interpreter could realize it wasn't a UTF-8 document (and probably load it up in a locale charset).
> This also makes programming easier because there are no "errors" to worry about.
But you've added another case to everything that has to handle UTF-8, and ignored real errors.
>It is also necessary to treat UTF-8 sequences that code a character in more bytes than necessary as illegal, this is vital for security reasons.
Why? You've just introduced alternate sequences for characters (the whole security problem) in another form.
>I am also in favor of adding MicroSoft's assignments to 0x80-0x9F to the standard Unicode.
Huh? All the characters are in Unicode. To mess with preexisting characters (that date back to Unicode 1.0) to add yet more redundant characters isn't going to happen, and shouldn't; you shouldn't break backward compatibilty unless there's at least a benefit.
Re:Compatibility in formats and protocols (Score:2)
Yes we can forget about them. This is because it will eliminate the need to transmit "which character set this is in". This will be an enormous win for software reliability!
But you've added another case to everything that has to handle UTF-8, and ignored real errors.
No, my whole point is that there is only *ONE* case, which is "handle UTF-8, but if the characters look illegal, display them as single bytes". At no point does a program have to think about whether a document is ISO-8859-1 or UTF-8; it just assumes UTF-8 at all times.
You've just introduced alternate sequences for characters (the whole security problem) in another form.
Only for characters with the high bit set. This is not a security problem. The problem is bad UTF-8 implementations that allow things like '\n' and '/' to be passed through filters by encoding them in larger UTF-8 sequences.
Huh? All the characters are in Unicode
Yes, but not at the MicroSoft code points. The problem is that if this is not a standard, programs are forced to examine the text before displaying it (because there are a huge number of documents out there with these characters, and they are not going to go away!). I also think the standards organizations were stupid for saying these are control characters; if they had assigned them, we would not have MicroSoft grabbing them.
Re:Compatibility in formats and protocols (Score:2)
On one hand, UTF-8 alone can do that, without this Latin-1 hack. On the other hand, this in no way solves that, as you still have to deal with ISO-8859-(2-16), SJIS and a dozen other character sets that aren't just going to go away.
> At no point does a program have to think about whether a document is ISO-8859-1 or UTF-8, it just assummes UTF-8 at all times.
What makes ISO-8859-1 special? ISO-8859-1-only may as well be ASCII-only for most of the world.
> Only for characters with the high bit set. This is not a security problem.
Yes, it is. It's a little more elaborate, but you can still get into cases where one program checks the data, say forbidding access to directory e', but misses the access to e' because it's encoded differently.
> programs are forced to examine the text before displaying it
I prefer my programs to get it right, rather than do it quick.
> because there are a huge number of documents out there with these characters
Plain text documents? Not so much. The number of such plain text documents is probably much smaller than the number of KOI8-R plain text documents.
We have a solution - it's called iconv. iconv will handle all sorts of character sets, not just ISO-8859-1 and CP1252. (Actually, you don't completely handle Latin-1 either, since there are valid Latin-1 files that are valid UTF-8. But we can just handwave that away until someone gets bit by it.)
> the standards organizations were stupid for saying these are control characters, if they had assigned them
They did assign them. Single shift 1, single shift 2, etc. If the characters were needed, that's what they should have been used for, Microsoft be damned.
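Since iconv keeps coming up: a bare-bones sketch of using it to normalize incoming text (error handling and the char**/const char** portability wart are glossed over, so treat this as a starting point only):
#include <iconv.h>
#include <string>
std::string to_utf8(const std::string &in, const char *from_charset) {
    iconv_t cd = iconv_open("UTF-8", from_charset);
    if (cd == (iconv_t)-1)
        return in;                                 // unknown charset: punt
    std::string out(in.size() * 4, '\0');          // worst-case growth
    char *src = const_cast<char *>(in.data());
    char *dst = &out[0];
    size_t srcleft = in.size(), dstleft = out.size();
    iconv(cd, &src, &srcleft, &dst, &dstleft);     // conversion errors ignored in this sketch
    iconv_close(cd);
    out.resize(out.size() - dstleft);
    return out;
}
Something like to_utf8(buffer, "KOI8-R") at the point where external text arrives, and the rest of the program only ever sees one encoding.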
Re:Compatibility in formats and protocols (Score:2)
Good point about the security problem, though. I have described a way in which there are two encodings for the same character. I was mostly concerned about avoiding the possibility of punctuation marks and control characters, but foreign letters are a possibility.
Maybe it would be best to make the API pure UTF-8 (illegal sequences turn into the Unicode error character). My idea would be best done by applications upon receipt of external text. They could also guess at the ISO-8859-1 state by checking the spelling of words, etc.
A few thoughts (Score:5, Informative)
Code was ugly and hideous
Someone forked the tree a while back, and now we had to support two separate source trees (this one was really annoying, because if you fixed a bug and changed some behaviour on one side, since both sides had to be able to talk to each other, you'd have to introduce a corresponding "fix" on the other side).
The core architecture was woefully outdated and inefficient.
Speed was an issue, the current architecture was limiting, and the code was optimized about as well as it could be (this was also part of the ugliness problem).
We were spending about 70% of our time fixing bugs in the old code; it took this much time because they were little and stupid, and with the code in the state that it was in, it took forever to trace things down.
Well, we had been wanting (desperately) to redesign from the ground up for a while, but the powers-that-be wouldn't give us the time, until one customer asked for a feature, marketing promised that they'd get it, and we said "Know what? We can't do that. Not with the current infrastructure." So the powers-that-be said "do what needs to be done!" and we said "yipee!".
Moral?
How much extra time is spent during debugging code that is due to the current state of the code?
How does the core architecture compare to what you will need in the future? Will it support everything?
How efficient is the old codebase, and how efficient does it need to be?
Can you get the time required to do so? This is a one-step-backwards before two-steps-ahead thing.
Do you trust the people that you are working with enough to be able to competently and efficiently create the new architecture? This is a serious question, because I have worked with some people who are good if you tell them exactly what to do, but I wouldn't trust them to recreate everything.
Will you be required to keep the old codebase going while you are in the process of converting? If old customers need bug fixes, you might be forced to keep the old sourcebase around for a while.
Can you make the new design backwards compatible? If not, can you provide a wrapper of sorts, or some easy way to convert your customers from your old version to the new one?
If you are going to be redesigning the user interface or the API at all, then you must also think about the impact on your customers.
Just some food for thought.
Which UNIX to support? (Score:3)
A feature that exists in the major UNIX systems, but is not part of the standards, will majorly improve the performance of my project, and make it a lot easier to code up. Should I use it or not?
Of course the question is vague. I didn't state which systems and for a reason: I don't want to focus on the specifics (although I do have a specific case in mind), but rather, I want to focus on the general principle with this issue. Just how far should I go to make sure my program works on every damned UNIX out there? How much is important?
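The usual compromise looks something like the sketch below: a configure-time feature test plus a plain POSIX fallback. (HAVE_SENDFILE and sendfile are just stand-ins here, since the actual feature isn't named above.)
#include <unistd.h>
#ifdef HAVE_SENDFILE
#include <sys/sendfile.h>       /* Linux flavour; other systems spell it differently */
#endif
ssize_t copy_to_socket(int sock, int file, off_t *off, size_t len) {
#ifdef HAVE_SENDFILE
    return sendfile(sock, file, off, len);          /* fast, non-standard path */
#else
    char buf[8192];                                 /* plain POSIX fallback */
    size_t chunk = len < sizeof buf ? len : sizeof buf;
    ssize_t n = pread(file, buf, chunk, *off);
    if (n > 0) {
        n = write(sock, buf, (size_t)n);
        if (n > 0)
            *off += n;
    }
    return n;
#endif
}
An autoconf test defines HAVE_SENDFILE only where the call exists, so the portable path is always there for the systems the standards cover.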
Isn't it obvious? (Score:2)
Re:Which UNIX to support? (Score:2)
I don't have the resources to support every Unix system out there. No way, no how. So I want to use the standards. But it will mean saying "no" to a lot of people.
Migration Tools (Score:2, Informative)
With your new release (2.0, or whatever) that is a whole new work, you bundle tools to convert the old data.
MS does this with Office, and people sometimes think it's a pain, but I do not. If I'm getting a new version that (sometimes) works better, I don't mind running it through a filter -- but make sure you don't make the mistake MS has made. Make sure this filter works.
If you're talking about a new OS or other such projects -- create data (read: settings) BACKUP tools. If you want people to all migrate to this new OS, or large system application, give them a tool to back up those old settings and put the code right into your new version to grab those old settings from the backup.
That's all we want!
Analogy (Score:4, Funny)
That's why open source exists for... (Score:2)
That's what open source exists for. If things get broken, then there is a very high chance of fixing things up if one has access to the code and gets an idea of the new features, bugs and innovations. If a developer thinks he is doing right in moving away from standards and practices, then let it be. He is the author and he knows best what he may want or not. However, he should care that the people dependent on his creation (aka users) will have a chance to readapt. With open source, a developer may not have to worry too much about this, because, if his product is popular and he is a good guy, there will be other developers who may help to convert data or code to the new conditions.
With closed source, the developer himself has to take care of this conversion. However, as we see, this is quite expensive in terms of resources and leads to the main bulk of the project going bloated and buggy. So, if you do care about innovation and stability, do open source and keep the people coming to you.
Backward compatibility with what? (Score:4, Insightful)
This is a confused discussion. A lot of people are mixing up "backward compatibility for users" with "not making significant changes to the code base". The two are largely unrelated, except that screwing up the latter will also mess up the former.
What your user sees is, and always must be, decided by your requirements spec, not programmer whim. The only people who can get away with doing otherwise are those developing for their own interest (hobbyists, people involved in open source development, etc).
To put it bluntly, blanket statements like "meet 90% of your users' BC needs" are garbage. In many markets (notably the bespoke application development market) if you drop 10% of your users in the brown stuff, your contract is over, and your reputation may be damaged beyond repair. Look at MS; years later and in the face of much better libraries, MFC still survives, because people are still using it (including MS) and they daren't break it.
This is far removed from rewriting things significantly as far as the code goes, which is where things like the standards mentioned come into things. I'm sorry, Cliff, but you don't do this "when you can get away with it" if you're any good.
Every time you rewrite any major piece of code, "just to tidy things up", you run the risk of introducing bugs. You need to be pretty sure that your rewrite is
If the rewrite is justified using these objective criteria, then you do it. When you do, you try to minimise the number of changes you make, and to keep the overall design clean. You retest everything that might conceivably have been broken, and you look very carefully at anything that didn't work -- it's quite possible that the people who originally wrote this code months or years ago made assumptions they forgot to document and you've broken them. Finally, if and only if your rewrite is performing acceptably and all the tests are done, you decide to keep it. If not, you throw it away and start rewriting again.
And for the record, yes, I spent most of last week rewriting a major section of our application, as a result of a code review with another team member. We kept the overall design, tweaked a few things within it, and rewrote most of the implementation. Now we need to retest it all, update all the docs, etc. This little exercise has cost our company thousands of pounds, but in this particular case it's justified by a needed performance increase and the significant reduction in bug count. But you can bet we thought very hard about it before we touched the keyboard.
Think of an intermediate release (Score:2)
While the Windows team was preparing drafts for an upcomming version, in the Mac team, we were readying a Carbon-compliant version of our product for Mac OS X.
The previous version was 5.1. This new, carbonized version didn't have any new features per say. A couple of bug fixes, and a new core API (Carbon). It was then numerated as 5.2. Basically, it's just a port of the 5.1 release, plus extra things to make it behave better under Mac OS X (AQUA stuff, and direct BSD networking instead of Open Transport).
The new 5.2 release supports Mac OS 8.6 and beyong, including Mac OS X. Prior to that, the 5.1 release supported System 7.1.2 on up.
The 5.2 release being a Mac OS X port (although it works under Mac OS 8.6 and up) meant that we could break away with System 7 without pissing too many users, since it didn't bring anything new in terms of functionality, since the 5.1 release.
The 5.2 release, therefore, acts as a cushion between the 5.1 release and the upcoming 6.0, which will most likely drop Mac OS 8.x while remaining usable with Mac OS 9.x (and Mac OS X).
So far, this has worked OK for us.
Re:So, uh... (Score:2)
I'm not sure I understand your first question. If you're talking about IMIP/ICAP support, we already support vCalendar and iCalendar, but lack automatic response-and-reply of attendance changes from/to foreign calendaring systems in the current 5.x versions of the server. Deduce what you want about the upcoming servers.
Concerning your second question, though, I must take the company attitude, which is not to comment on unreleased products, whether they exist or not.
You can, though, help answer that yourself with these simple facts:
CTOC took nearly two years of development to get it where it's at.
The Mac version of MS Outlook uses a totally new/fresh/independent code base from the original Windows versions.
MS Outlook for Mac just came out (barely).
MS Outlook for Mac is not carbonized yet, and we're concentrating Mac development efforts on the carbonized version of our CTime client and code base.
At first sight (not that we have or have not looked into it), the Mac version doesn't seem to split the communication layers out like the Windows version does. Every component, like the calendaring one, is a single (huge) blob of code in its own shared lib which doesn't seem to link with anything external.
Given the above, you could deduce one of three possible scenarios: a) we're looking into it, b) work has begun, or c) we've looked into it and work has not begun because it doesn't look feasible for the moment.
Hope this helps. If you want to discuss this further, maybe we should do it via e-mail, since this is getting out of context and I don't want to lose any karma points over it (it took me a while to get back up to 40 after a few bad postings of mine...).
Design intent is a cornerstone (Score:4, Insightful)
FrameMaker is a great example of an application that appears to have been architected from day one to provide backward compatibility: every version of FrameMaker imports and exports Maker Interchange Format (.MIF) files, and so it is trivial to move files between releases of the application. While I'm sure this causes the developers some headaches from time to time, I know from personal experience that a constant anchor point like that makes moving documents between releases painless for users.
Having done work on an ASCII interchange mechanism for a multiplatform application, I can be fairly certain that the FrameMaker decision isn't very difficult to implement: each release of the application has a pair of small functions, one to walk the internal data structure and emit the ASCII interchange format, and another that parses the ASCII interchange file and produces an internal data structure.
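To make the shape of those two functions concrete, here is a minimal sketch under invented assumptions: a toy document model that is just a list of paragraphs, with hypothetical function names and a made-up "MyDoc-Interchange" header line. A real interchange format obviously carries far more record types, but the walk-and-emit / parse-and-rebuild structure is the same.

    #include <istream>
    #include <ostream>
    #include <string>
    #include <vector>

    // Hypothetical internal representation: a document is just a list of paragraphs.
    struct Document {
        std::vector<std::string> paragraphs;
    };

    // Walk the internal structure and emit a line-oriented ASCII interchange file.
    void exportInterchange(const Document& doc, std::ostream& out) {
        out << "MyDoc-Interchange 1.0\n";                 // format identifier + version
        for (const std::string& p : doc.paragraphs)
            out << "Para: " << p << "\n";
    }

    // Parse an interchange file back into the internal structure,
    // skipping any record types this release does not understand.
    Document importInterchange(std::istream& in) {
        Document doc;
        std::string line;
        std::getline(in, line);                           // header line; version check omitted
        while (std::getline(in, line)) {
            if (line.compare(0, 6, "Para: ") == 0)
                doc.paragraphs.push_back(line.substr(6));
            // unknown lines are ignored, so files from newer releases still load
        }
        return doc;
    }

The value is less in the parsing than in the discipline: as long as every release can read and write this one format, the native binary format is free to change from version to version.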
When we designed our application, the ASCII interchange functionality was deemed important; this influenced the internal data structures, which in turn influenced the binary data files. If we had tried to bolt backward compatibility on at a later date (i.e., in version 2.0) it could have been a lot of work; whereas, building it in from day one didn't cause any extra work.
Conscious design intent is the key to making backward compatibility a non-issue.
Good article. (Score:2)
Along the same lines... what *really* irks me is when I go to compile some utility I used a couple of years ago... and it wants a *whole bunch* of libraries it didn't need before, usually related to display only.
Too many good command-line apps get turned into huge, bloated GNOME apps for *no reason* (or just so the developer could play with gnome).
You shouldn't let backwards compatibility hold you back, that's for sure. If you wanna bring out version 2.0 as a rewrite, why not? Keep the old one available to people, though.
Java (Score:2, Interesting)
I've found that Java is a great language for making backwards compatibility easier. The main reason is that dynamic linking doesn't use ordinals or offsets. This means that I can fundamentally reprogram an entire class (even without using an interface) and still have a dependent program work fine without recompiling it. (Note: the only gotcha is finals, which are copied at compilation into the dependent program - bah!)
Along with a couple of other things, such as using lots of interfaces and factories throughout the library, backward compatibility is hardly an issue at the code level.
Jason.
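For what it's worth, the same interface-and-factory discipline translates to C++, though only as source-level insulation. A minimal sketch with hypothetical names (a made-up Calendar library); clients see only the abstract class and the factory function, so the concrete implementation behind them can be rewritten freely:

    #include <memory>
    #include <string>
    #include <vector>

    // Public header: this is all that client code ever includes.
    class Calendar {
    public:
        virtual ~Calendar() {}
        virtual void addEvent(const std::string& title) = 0;
    };

    std::unique_ptr<Calendar> makeCalendar();   // factory function

    // Private implementation file: free to change from release to release.
    class CalendarImpl : public Calendar {
    public:
        void addEvent(const std::string& title) override {
            // storage scheme, data layout, etc. can change without touching clients
            events_.push_back(title);
        }
    private:
        std::vector<std::string> events_;
    };

    std::unique_ptr<Calendar> makeCalendar() {
        return std::unique_ptr<Calendar>(new CalendarImpl());
    }

One caveat: unlike the Java case described above, adding a new virtual function to the abstract class still breaks binary compatibility in C++, so the insulation here is at the source level rather than at the link level.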
C++ and porting (Score:2)
There is a secondary issue, regarding work-arounds for small annoyances. Autoconf does a good job of taking care of this. For example, an autoconf macro in configure.in can be used to tell me whether I need to install an sstream header for old gcc versions, or whether one is already installed. (Sorry, I had to snip the macro itself because of the lame "lameness" filter.)
One can check other conditions and define macros, or automatically edit Makefiles based on outcomes of this and similar tests. GNOME and KDE packages ship with a good collection of autoconf macros. I have found these very useful.
I test regularly on g++ versions 2.95 and up. Less frequently, I try to build against g++ 2.91xx. The compiler is portable, and using the same compiler makes life simpler. Even within this narrow framework, there are portability issues. For example, earlier gcc versions do not ship with the sstream header, and the streams library is broken in early versions (int is used where more appropriate types belong). The best way to manage these subtle annoyances is to use autoconf. Libtool is also essential, for a different reason: the commands used to link vary wildly from platform to platform.
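On the C++ side, the result of such a configure test usually shows up as a preprocessor symbol. A minimal sketch, assuming the configure script has been set up (e.g. with AC_CHECK_HEADERS) to define HAVE_SSTREAM in a generated config.h when the header is found; the fallback uses the pre-standard strstream classes that old g++ releases shipped:

    // config.h is generated by configure; HAVE_SSTREAM is defined there
    // when the test for the <sstream> header succeeds.
    #include "config.h"

    #ifdef HAVE_SSTREAM
    #include <sstream>
    typedef std::ostringstream OutputStringStream;
    #else
    #include <strstream>                        // pre-standard fallback on old g++
    typedef std::ostrstream OutputStringStream;
    #endif

The two classes are not drop-in identical (ostrstream needs an explicit terminator and freeze(), and on the very oldest compilers the header may be <strstream.h> outside namespace std), so in practice the differences get hidden behind a small wrapper like this rather than scattered through the code.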
Arrogance (Score:3, Redundant)
Old code is hard to read - even if it is your code - because you lack the overall 'grasp' of what the code is doing, which the developer had when it was written. Old code becomes just lines of code instead of part of an intelligent structure; you can see the 'trees' but the 'forest' has been lost.
This means that the problem with old code is that YOU don't know what the developers were doing; not that THE DEVELOPERS didn't know what they were doing!
It is a very easy mistake to confuse those two types of ignorance. Add in a little 'it can't be me that is wrong' attitude and the name of the game becomes a contemptuous 'out with the old in with the new.'
The truth is that there are differences in skill levels of programmers - old code written by good programmers is a lot better than new code written by poorer-quality programmers. If you didn't know that programmers differ in skill level, it is conclusive proof that you are not a good programmer; to a bad programmer all code looks the same - that is why a bad programmer is a bad programmer.
Re:Arrogance (Score:2)
Of course, if the original developers document properly, this doesn't happen much. That will soon tell you whether the guys who did it first time around were good or not.
Sometimes stay off the cutting edge (Score:2, Informative)
Sometimes you have to write to the older API and give up a lot of cool functions, but it is the only way. Then just make sure it works best on the newest platforms or browsers, depending on whether the program runs on an OS or in a web browser. This isn't always an easy thing to find out. We made a web application using Java 1.1, and it turns out that Netscape on the Mac supported Java without any of the changes from 1.0 to 1.1, so it didn't work there. But recently I've heard that Netscape may finally be supporting 1.1 for real on the Mac, and 1.3 with OS X. Cool!!! If only we had written the program in Java 1.0 there would be no problems (except that it would probably be really clunky).
UNISYS overdoes it (Score:4, Interesting)
The API has barely changed in the last 25 years. A friend of mine has an application that's been running unchanged since the 1970s. [fourmilab.ch] It has continued to work across generations of hardware. And it's in assembler.
They had the advantage that their OS was decades ahead of its time. UNIVAC had symmetric multiprocessing with threads in a protected-mode environment thirty years ago. And threads were designed in, not bolted on as in UNIX.
2 Thoughts (Score:2)
1. You have to distinguish between backward compatibility in the program and in the interface presented to the users. Sometimes the latter can be preserved for the sake of the clients while the former is invisibly improved to adhere to current standards (see the sketch after this post).
2. Cross-platform portability is a good thing in its own right. Not just because you can increase the market by a few percent, but because, in my experience, the more a code base has been required to run the gauntlet of different compilers on different platforms, the less likely the code is to break. That means break, period, as well as break when porting to platform n+1.
Maintainability and extensibility of software improve markedly if you make an effort to be cross-platform portable.
That's not to say that you need to port back to the same least common denominator as Ghostscript has been known to do. While impressive, that level of effort is more than I would put into a software project. But a lot of that is legacy anyway.
I work on a large C++ Solaris project that has code from the mid-1990s, when compilers didn't implement as much of the standard as they do now. We're slowly making an effort to move in the direction of the standards because it will decrease long-term maintenance costs and make the code base easier to read and, therefore, to extend.
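A minimal sketch of that first point, with entirely hypothetical names: the externally visible entry point keeps its ancient signature and error contract, while everything behind it is rewritten against the current standard library.

    #include <algorithm>
    #include <cstddef>
    #include <ctime>
    #include <string>

    namespace internal {
        // New implementation, written against the current standard library.
        std::string formatDate(long timestamp) {
            std::time_t t = static_cast<std::time_t>(timestamp);
            char tmp[32];
            std::strftime(tmp, sizeof tmp, "%Y-%m-%d", std::gmtime(&t));
            return std::string(tmp);
        }
    }

    // Old public entry point: the signature is unchanged, so existing
    // client code keeps compiling and linking against it.
    extern "C" int legacy_format_date(char* buf, int len, long timestamp) {
        if (buf == 0 || len <= 0) return -1;              // old error contract preserved
        const std::string s = internal::formatDate(timestamp);
        const std::size_t n =
            std::min<std::size_t>(s.size(), static_cast<std::size_t>(len - 1));
        s.copy(buf, n);
        buf[n] = '\0';                                    // always NUL-terminated, as before
        return 0;
    }

Clients never see that the internals changed; they only notice if the wrapper stops honouring the old contract, which is why the retesting discussed elsewhere in this thread matters so much.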
Re:The best time... (Score:1)
FYI - Microsoft typically supports interop and upgrade from at least the two prior versions, possibly more depending on what the users are running. For example, even though XP is based on the NT codebase, MS supports upgrade from (and I believe interop with) Win98/Me.
Re:The best time... (Score:2, Informative)
[this really should be off-topic, but this is the type of user complaint that you will get if you don't carefully coordinate with your users before breaking old functionality]
1) Software developers writing software for XP can write it so that it is compatible with 9x/me or even Win3.1 if they want.
2) The topic question was backwards compatibility. Very few applications (if any) are 100% forward compatible. If you save your Word 97/2k/xp doc in a Word 6.0 compatible format, Word 6.0 can read it just fine. Where is the logic in expecting Word 6 to read the native file format that contains new features in Word 2k?
3) How does it become Microsoft's problem when 3rd party application programmers don't design their software to run on multiple versions of the OS? (It is possible, in fact a large percentage of the win9x software runs on w2k without being *designed* for w2k.)
Think about it for a second, how do you make the application that you released last year so that it will be 100% compatible with the file format and the new features that you will dream up next year??? Even if the file format was compatible (which would probably make it a kludge), there is no way that the functionality would be compatible.
And, umm, you can always save in the previous version's file format.
How is this any different from the new perl script to configure IPchains that won't work on kernels prior to IPchains? Or any other added feature for that matter...this is a flaw in your expectation, not evil marketing.
Do you expect your Netscape 1.0 browser to be able to handle CSS??
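To make the "save in the previous version's file format" point concrete, here is a minimal sketch with invented names and a toy key-value format: the writer tags every file with a format version, the reader accepts anything up to the current version (defaulting fields that old files don't have), and a separate export path produces the older layout for users who still need it.

    #include <istream>
    #include <ostream>
    #include <stdexcept>
    #include <string>

    // Hypothetical application state; fontSize was added in format version 2.
    struct Settings {
        std::string userName;
        int fontSize;
    };

    const int kCurrentVersion = 2;

    // Normal save: always writes the current format, tagged with its version.
    void save(const Settings& s, std::ostream& out) {
        out << "version " << kCurrentVersion << "\n"
            << "name " << s.userName << "\n"
            << "fontsize " << s.fontSize << "\n";
    }

    // Explicit "Save As version 1": drops the newer field so an old release can read it.
    void saveAsVersion1(const Settings& s, std::ostream& out) {
        out << "version 1\n"
            << "name " << s.userName << "\n";
    }

    // Load accepts any version up to the current one; missing fields get defaults.
    Settings load(std::istream& in) {
        Settings s;
        s.fontSize = 12;                                   // default for pre-version-2 files
        std::string key;
        int version = 0;
        in >> key >> version;
        if (key != "version" || version > kCurrentVersion)
            throw std::runtime_error("file written by a newer release");
        while (in >> key) {
            if (key == "name")          in >> s.userName;
            else if (key == "fontsize") in >> s.fontSize;
            else { std::string ignored; in >> ignored; }   // skip unknown keys
        }
        return s;
    }

Reading old files from new code falls out almost for free; reading new files from old code only works to the extent the old reader tolerates unknown keys, which is exactly why expecting Word 6 to understand a Word 2000 document's new features is unreasonable.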
Re:The best time... (Score:2)
The 7 stages of coding (Score:2, Funny)
Enthusiasm - I can make this baby fly!
Doubt - Something's not working here
Denial - Stating that legacy code should stay in place until the end of time.
:try_again
Anger - Showing signs of anger towards said legacy code.
Self-bargaining - Bargaining with one's self about what legacy code should stay and what should not.
Depression - Usually accompanied by guilt - When you realize you removed the wrong bit of code and checked it back in without reviewing the changes.
Acceptance - Re-writing most of the code since it's all buggered-up, anyhow.
Goto :try_again
Re:This really isn't on topic but... (Score:2)
Re:Make incentives to upgrade... (Score:2)
Sure, most sites require 4.x+ browsers, but they don't have to. Coding should be done on the server, not the client. CSS should be inlined rather than included, and IE3 works just fine.
Making incentives to upgrade is not a design decision; it's a capitalist decision.