Porting Open Source to Minor Platforms is Harmful
Tamerlan writes "Ulrich Drepper posted a blog entry titled "Dictatorship of Minorities". He argues that open source projects' attempts to support non-mainstream (read "non-Linux") operating systems slow down development and testing. While Ulrich may be biased (he is a Red Hat employee), he has a point: if you have ever read the mailing list of any large open source project, you know that a significant share of the traffic is about platform-specific bugs or a new release breaking on some exotic platform."
Adverse Affect For Me (Score:4, Interesting)
But it was my understanding that the idea behind open source was to roll your own if your machine was not covered. If I want to keep the Sparc chugging along, I guess I'd better learn to port it myself.
Re:Adverse Affect For Me (Score:3, Interesting)
For the same reason I tried Linux in 1996: interest in alternatives.
Re:Adverse Affect For Me (Score:3)
Re:Adverse Affect For Me (Score:5, Insightful)
For the same reason Ulrich Drepper is wrong. An Ultra 1 is super cheap now, and it gives a programmer the chance to test code on a big-endian 64-bit architecture. Are there lines of code with endian dependencies? Are there lines of code that assume 32-bit CPUs?
The same goes for testing on Linux, as well as NetBSD, Solaris, etc. Does the code really use POSIX intelligently? Is the program abstracted well from the kernel services?
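To make that concrete, here is a minimal, hypothetical sketch (invented for illustration, not from any real project) of the kind of endian dependency that testing on an Ultra flushes out immediately. Two little-endian machines will happily interoperate with the broken version; the first big-endian peer reads garbage.

/* Hypothetical sketch of an endian dependency: writing an integer to
 * the wire as raw memory. Two x86 boxes agree with each other; a SPARC
 * peer reads garbage, because the bytes go out in the host's order,
 * not a defined one. */
#include <stdint.h>
#include <string.h>

void broken_put_len(unsigned char *buf, uint32_t len) {
    memcpy(buf, &len, sizeof len);      /* host byte order leaks out */
}

/* Portable: define the wire format (big-endian) and assemble it. */
void portable_put_len(unsigned char *buf, uint32_t len) {
    buf[0] = (unsigned char)(len >> 24);
    buf[1] = (unsigned char)(len >> 16);
    buf[2] = (unsigned char)(len >> 8);
    buf[3] = (unsigned char)(len);
}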
This whole article is just a Red Hat employee tooting his company's horn. His advice is inappropriate in that it promotes bad programming, just as Windows fanboys who write straight to Win32 so often shoot themselves in the foot.
Re:Adverse Affect For Me (Score:5, Insightful)
- Not everything requires the latest hardware. Keeping machines running that are still fully functional (component-wise) keeps them out of landfills.
- Sentimentality, I like the Sun SPARC hardware and want to keep using it. Even for trivial tasks like shell accounts for mates and some types of testing.
- Diversity is a useful thing in itself; an x86 monoculture can't be good for us long term.
Re:Adverse Affect For Me (Score:3, Insightful)
Re:Adverse Affect For Me (Score:3, Interesting)
Was it ever? At least the 6809 and m68k were understandable by mere mortals and had a certain elegance. x86 (hell, even the 4004-8080) has always seemed kludgy, inelegant and incomprehensible.
Re:Adverse Affect For Me (Score:3, Informative)
Re:Adverse Affect For Me (Score:3, Insightful)
Re:Adverse Affect For Me (Score:3, Informative)
Of course (Score:2, Insightful)
Re:Of course (Score:5, Insightful)
So, tell me... In our minority, why on earth would we take that attitude?
Ulrich is off his rocker.
Agree, and... (Score:5, Insightful)
To be sure, robustness may be as optional for you as it was for Microsoft (and would still be, absent competition from Linux), but in the long run it seems to pay off.
Most of us Linux users would not regard, for example, The GIMP as particularly robust, but compared to the typical Win32 app it's a paladin of reliability. My sister-in-law [goldenlight.bur.st] routinely leaves it open (and unsaved) for weeks on end, confident that it will still be there when she gets back (though a recent hard-disk failure of mine seems to have put the fear of God into her WRT reliability). She also happily browses everywhere fearlessly, knowing that she can't damage anything on her own machine, and nobody that either of us knows has ever been burnt by malware while browsing in Linux.
MS users just don't do that - not more than once or twice, anyway.
No, Ulrich has a point (Score:4, Interesting)
People seem to be repeating a lot of "folk wisdom" about portability. Oh it's just bugs. Make your damn software clean you damn coder. Etc.
I can guarantee you that anybody who says this has never actually read the sources to glibc, binutils or gcc. Hell, they probably never even read the mailing lists! I have, and when Ulrich says an enormous amount of effort goes into supporting minority platforms he is totally correct. Hello, binutils isn't the GIMP, people! Other platforms have totally different architectures and often need huge amounts of platform-specific code from these projects. This isn't a case of sloppy coding; it's a case of massive amounts of work being done to support edge cases. Go read the sources to bfd sometime. Adding support for one platform that uses different assumptions about basic things like memory layout can require huge reworkings of the code.
Essentially there are a lot of people spouting off here based on their experiences of compiling FooApp on FreeBSD or whatever. Have you written a C library? A threading library? No ... then you are probably not really qualified to judge how much work this generates for Ulrich.
Oh, and for the guy later on in this thread who says "AIX is not a minority platform, WTF?" - I say to him WTF. AIX most definitely is a minority platform. Maybe not in the world he lives in, but in the real world Windows is dominant, sometimes people think about Linux/Mac or even FreeBSD and everything else barely registers at all unless you administer high end servers or work on embedded software (most people do not).
I've been bitten by this mentality before. Back when exec-shield was first developed, it broke Wine (which I work on). So I set out to develop a fix. Eventually, I wrote a GNU linker script that arranged virtual memory such that things would work correctly even when exec-shield was active. But it didn't work, because of a simple bug in the kernel. No problem, right? Just fix the bug, right? Well, actually, somebody did. A patch was written, submitted, got into Andrew Morton's tree ... and it didn't compile on Itanium. The original author didn't have access to such a machine, neither did I, and the person who reported the failure (who worked for Intel, IIRC) was overloaded with other work and couldn't fix it. So the patch was dropped.
In the end, a few months later, we had a different solution that was about a million times more complicated. Largely because a simple bugfix patch didn't compile for unspecified reasons on a platform nobody uses and this was grounds for it to be dropped. That mentality of "all computers are born equal" is why Debian has become a laughing stock and it cuts both ways.
Straw man (Score:5, Insightful)
What he said was *not* that Glibc, or Gcc, and whatnot shouldn't be ported to AIX, m68k, and whatever. What he said was that he does not care to *maintain* those ports, and should not be expected to. IBM (or IBM's customers) can certainly afford to maintain a port for AIX. Let them. Likewise, all those embedded-system houses dependent on m68k targets are welcome to step up and supply their own patches to keep their ports working.
If a patch to mainline breaks the AIX port, it's the job of the AIX maintainers to figure out how to fix the patch, not him, and not whoever contributed the patch (but has no access to AIX targets).
He's not even saying he would reject patches needed to support minority targets. Whoever's maintaining the m68k port doesn't need to maintain a fork. They are entirely welcome to send along whatever patches they need installed. They need only be sure their patches don't break any supported targets. This certainly makes more work for users of less popular targets, but it spreads the work around, instead of piling it up on those doing mainline development. The mainline maintainers have plenty else to worry about.
Re:Straw man (Score:5, Interesting)
With a project as big and important as GCC, you'd think they'd have a server for each platform set up for all their developers to play with. Gentoo has Sparc, MIPS, PPC, etc. boxes for their developers to use for porting software.
It seems to me that a smart idea would be to have some kind of system where a developer could submit a patch, which would then be sent out to a server farm, where each server would try to compile GCC with the patch, then run a test suite. Doesn't Mozilla do something like this nightly?
"I don't have access to a [foo] box" should never be a valid excuse with larger projects.
Re:Straw man (Score:3, Insightful)
So, everybody who fixes something that (incidentally) affects emission of debug annotations in Gcc has to learn all the idiot formats used in AIX, Solaris, Tru64, PE, and what-have-you just because FSF happens to have those machines? Yes, somebody on the project might know intimate object-file format details about any of those. It's unreasonable to expect everybody
Re: (Score:3, Insightful)
Question (Score:3, Insightful)
Furthermore, at least on OS X, the Fink project makes many programs buildable on OS X, but puts the maintenance onus mostly on the Fink people, not the original authors. Of course this can have its own problems.
Java? (Score:2, Funny)
Re:Java? (Score:5, Funny)
Yeah - Sorry about that, our bad.
-- Sun Microsystems
Re:Java? (Score:5, Funny)
Some people, when confronted with a problem, think "I know, I'll use Java." Now they have two problems.
Re:Java? (Score:3, Insightful)
Re:Java? (Score:3, Interesting)
I suspect that we have not yet seen the VM architecture in its full maturity.
Re:Java? (Score:3, Informative)
Is that enough platforms for you? No, it doesn't cover everything, but there certainly are a lot there.
Debian (Score:3, Insightful)
I'm a fan of Debian, but I think that Debian's effort to support the myriad architectures out there is hurting it.
It does a great service to the rest of the Linux community though, because it helps keep things portable.
But having a requirement that something work on a large number of platforms slows down the release cycle.
Debian's more about leadership attitudes, I think (Score:4, Informative)
I'm not sure that I agree. To me, the Debian delays seem to be at least as much a political and leadership thing. In particular, consider how quickly Debian went into a freeze for a new release after a change in leadership. Granted, it's a week behind schedule, but the green line is now going down rapidly [debian.org], and we're expecting a new release within a matter of days. If it were so difficult to support and release on multiple architectures, this likely wouldn't have been able to happen.
I'm not trying to imply that the old Debian leadership was necessarily bad or that the new leadership is particularly good. But a change in attitudes very quickly resulted in a new release. This suggests that support for lots of architectures has little to do with it, whereas leadership attitude has a lot to do with it.
We'll have to wait and see just how reliable Debian Sarge turns out to be when it's promoted to stable, of course. (Disclaimer: I run Debian Sarge on my home workstation and laptop.)
Re:Debian's more about leadership attitudes, I thi (Score:3, Informative)
Every time somebody likes to say that about Debian, I like to remind them that the NetBSD folks support an ... impressive array of platforms, and at the same time hack userland, kernel and protocols, while Debian developers mostly package upstream stuff.
I would have to say that Debian developers, for the most part, are also involved in the userland, kernel, and protocols. Take a look at what developers such as Colin Watson, Joey Hess, Branden Robinson, Ben Collins, and others are doing in and around the Free Softw
The OpenBSD project doesn't seem to agree. (Score:5, Insightful)
Re:The OpenBSD project doesn't seem to agree. (Score:5, Insightful)
Porting helps rid software of bugs. (Score:5, Insightful)
Porting assures portability, clean code, future (Score:4, Insightful)
Surely porting to platforms such as the Alpha and UltraSparc was a very good basis for porting to platforms such as AMD-64. This is a crucial advantage for free software, where we can be sure that we will be able to support new platforms and make interesting platforms mainstream.
On the other hand, the premise that the main maintainers cannot be responsible for all the porting efforts is reasonable. Debian is thinking along the same lines, and for good reasons.
I think it is wrong and bad to assume porting is a bad thing and avoid it. Even apparently futile projects such as porting free software to closed commercial platforms give a large amount of flexibility in design and portability, and help projects such as embedded graphical environments.
Portability is just one facet of the advantages of free software, and as such is a precious thing that we have to cultivate. But it should be just another part of the free and open collaborative development process, not an obligation for the main developers.
Just my 2 cents.
Re:Porting assures portability, clean code, future (Score:3, Interesting)
I can easily port between Linux and *BSD, but the problem comes when users ask me to port my code to non-POSIX OSes, or ones that claim to be POSIX but really aren't (Win32). Or my latest peeve: sort-of POSIX but non-ELF, non-GNU (OS X).
Re:Porting helps rid software of bugs. (Score:3, Insightful)
Contrast this with the paragraph below:
Re:The OpenBSD project doesn't seem to agree. (Score:4, Insightful)
Logic Circuits Concur with OpenBSD's Findings (Score:4, Informative)
(The only exceptions would be things that use specific Linuxisms, such as some of the Netfilter calls, anything to do with a filesystem that is only available on Linux - XFS, JFS, Lustre for example. The network structures use different variable names, but that's not an excuse as you should be using an abstraction library in those cases.)
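To sketch what such an abstraction looks like in practice (my example, nothing project-specific assumed; getaddrinfo() is plain POSIX): the resolver fills in the socket address structures itself, so the differently named fields never appear in your code, and IPv6 support comes along for free.

/* Sketch: connect to host:port without ever touching the fields of
 * sockaddr_in directly; getaddrinfo() fills them in portably. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>

int tcp_connect(const char *host, const char *port) {
    struct addrinfo hints = {0}, *res, *rp;
    int fd = -1;
    hints.ai_family = AF_UNSPEC;       /* IPv4 or IPv6, whichever works */
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    for (rp = res; rp != NULL; rp = rp->ai_next) {
        fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
            break;                     /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}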
Re:Logic Circuits Concur with OpenBSD's Findings (Score:3, Informative)
You do realize that XFS is a port from SGI's UNIX variant (IRIX, I believe), and that JFS was ported to Linux from OS/2 (and was originally written for AIX and the s390 mainframe, if I recall the details correctly).
Lustre might be Linux-only. I believe it started out life as a Linux-only project. I believe that ReiserFS is fairly Linux-only (I thought they had ports to other OS's, but I can't find anything o
The title of this article is not correct. (Score:5, Informative)
Porting Open Source to Minor Platforms is Harmful
What the fuck? Where'd you get that from? Read Ulrich's post. How about:
Delaying the development of features because of problems with minority platforms that can't be fixed by the bug reporters is Harmful
You may disagree, but unlike the title of this article, it does actually cover what Ulrich is talking about.
Re:The title of this article is not correct. (Score:3, Insightful)
Aside from that, support for non-mainstream platforms is a major strength of open source. Many of these non-mainstream platforms are, and always have been, much better than the mainstream ones, but they're dying out because of the prevalence of proprietary software which can't be ported to them.
Re:The OpenBSD project doesn't seem to agree. (Score:5, Insightful)
Indeed.
Now, to be fair, I have to concede the man does have a point. Supporting several configuration options and several platforms increases complexity, if your goal is simply to get the thing running and marketable.
At the same time, though, dealing with that increased complexity can give a project the impetus it needs to clean up spaghetti code that 'just runs' and replace it with more auditable and correct code, which is really a gain in the long run. Assuming, of course, your goal is to write good software - not just to write 'good enough' software and get it out the door in line with a deadline from marketing.
OpenOffice is a Gateway Drug... (Score:5, Insightful)
I like to think of OSS/GPL stuff as a 'gateway drug' - to use an analogy. Using it may not automatically make people go to Linux, but it certainly makes it an increasing possibility.
Re:OpenOffice is a Gateway Drug... (Score:4, Insightful)
Both sides are right, I think. (Score:3, Insightful)
I think this is incorrect.
First off, Open Source, despite its close engagement with Freedom, ought to also stand for what is best in the Software Engineering world. This means clean, lightweight, portable code. For better or worse, there is a standard: POSIX. Linux, to the extent that it uses the GNU system, is basically POSIX-compliant. Open Source projects ought to target POSIX and keep themselves free of proprietary entanglements.
This can be achieved by focusing efforts on programming for Linux, the premier Open Source operating system. Only by keeping the code clean can a project be easily ported, but a project that isn't even near completion ought not be ported at all. Such non-mainline work results in incompatibilities and divergences from the main trunk of code that cannot be easily fixed down the road.
A very good example is the Symbian/Nokia gcc compiler, which has many special extensions and cannot be used to compile for any other targets or operating systems. Well, they are doing away with their special version of the compiler and finally going back to the mainline gcc tree. Unfortunately, all that work to specialize gcc for their platform is tossed out the window now. Work to no avail, essentially.
The key here is not to focus on Linux specifically. Rather, it is to focus on a standard and program to that. Since Linux is one of the best of the standard-bearers, it makes sense to complete programming there first rather than start porting to esoteric platforms right away.
Re:Both sides are right, I think. (Score:3, Interesting)
If you slow down development to cover a broader set of platforms then you will be late with the latest new buzzword features (hyperthreaded keyboard support
NetBSD? (Score:5, Interesting)
Try a VM (Score:2, Insightful)
There are 1000s of Java projects on SourceForge that will never have this problem.
Re:Try a VM (Score:5, Insightful)
Re:Try a VM (Score:4, Insightful)
This isn't an issue limited to Mac OS, by the way -- Java programmers ought to take the differences between GTK and Windows into account, too.
Re:Try a VM (Score:3, Insightful)
Re:Try a VM (Score:3, Informative)
A VM is just another architecture. In theory we could just write everything for x86 and then run emulators on every other platform, and it would be about the same thing.
The problem is that Java works great as long as you only run it on an x86, or maybe a sparc or a mac. And java apps have the
Re:Try a VM (Score:4, Informative)
Re:Sure. (Score:4, Interesting)
primetest.cpp [dyndns.org]
primetest.java [dyndns.org]
primetest.pl [dyndns.org] (for fun, because I like perl)
Using Java Blackdown-1.4.2-01, gcc 3.3.5-20050130 and perl 5.8.5 on a 1.3 GHz Pentium-M, I get these results:
neil@t40-n Documents $ javac primetest.java
neil@t40-n Documents $ time java primetest
real 0m7.809s
user 0m7.700s
sys 0m0.033s
neil@t40-n Documents $ g++ primetest.cpp -o primetest
neil@t40-n Documents $ time ./primetest
real 0m2.699s
user 0m2.676s
sys 0m0.007s
neil@t40-n Documents $ time ./primetest.pl
real 0m47.928s
user 0m46.207s
sys 0m0.138s
neil@t40-n Documents $
How about you? Sure it's a trivial benchmark, but it definitely shows Java way behind (by a factor of three!) C++ for number crunching. Of course we see Perl well behind both, but it's definitely not meant for number crunching, so no surprise really.
Re:Sure. (Score:4, Informative)
neil@t40-n Documents $ time java primetest
real 2m41.851s
user 2m41.439s
sys 0m0.144s
neil@t40-n Documents $ time ./primetest
real 0m59.801s
user 0m59.618s
sys 0m0.056s
Using -O3 for gcc with 10,000,000 instead of 1,000,000:
neil@t40-n Documents $ time ./primetest
real 0m54.883s
user 0m54.755s
sys 0m0.047s
Using int instead of long with Java and 10,000,000:
neil@t40-n Documents $ time java primetest
real 1m6.386s
user 1m5.930s
sys 0m0.128s
Ah hah, well that would explain it. I guess you do learn something new every day. Certainly a far cry from the ~3x difference initially observed.
I didn't bother repeating the perl test with 10,000,000 however...
Re:Sure. (Score:3, Informative)
Re:32 Bit vs. 64 Bit (Score:3, Informative)
Java long => 64 bit; C/C++ long on 32-bit x86 => 32 bit.
So you are comparing apples with melons.
Using my mighty +1 Karma Bonus Power...
Can somebody please mod the parent up?? The grandparent poster is apparently too clueless to create Java vs. C++ benchmarks.
Java primitives:
http://java.sun.com/docs/books/tutorial/java/nutsandbolts/datatypes.html [sun.com]
C primitives:
http://www.phim.unibe.ch/comp_doc/c_manual/C/CONCEPT/data_types.html [unibe.ch]
Let's see the benchmark with either int vs. long or long vs., er, long long
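Until someone reruns it properly, a trivial check (my own sketch, nothing to do with the original benchmark code) shows the mismatch directly. On 32-bit x86 Linux it prints 4, 4, 8, while Java's long is 64-bit everywhere:

/* On IA32 Linux, C's long is the same size as int; only long long
 * matches Java's always-64-bit long. */
#include <stdio.h>

int main(void) {
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));
    return 0;
}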
I call bad c code (Score:5, Insightful)
Re:I call bad c code (Score:5, Interesting)
It's sad that whenever you build some GNU sources or the latest Linux app du jour you get tons of warnings. Some projects even compile their files with 2>&1 >/dev/null. How sad is that?
It's not just the Linux folks. OpenSSL is actually worrying me:
etc. etc.
And this in a project that's driving quite a number of sites, authentication mechanisms and whatnot?
Most if not all of the sources that are native to NetBSD (i.e. not imported, like OpenSSL and GCC) compile without any warnings. You automatically get a good feeling about using it.
Something needs to change in coding land.
</rant>
Minorities make life so ... complicated ... (Score:5, Funny)
Re:Apple (Score:3, Interesting)
it has a lot of ugly nextstep-isms though, including the heavy leanings toward objc and bundles.
and for a long time osx was missing even basic freebsd mechanisms like dlopen() (yes, i know someone wrote a wrapper, but it took a long time for apple to appropriate it as an official api).
Re:Apple (Score:3, Insightful)
Assuming we're talking about the same thing, bundles are a compromise between the need for easy installation of applications and "flat" file systems. What would you have preferred under those requirements?
Also, do you have a more objective criticism than "ugly"? I'm not saying you don't, but many times it's just another word for "unfamiliar to me".
But the point is ... (Score:2)
platform-specific bugs? Doubtful (Score:5, Insightful)
The bugs due to platform bugs -- well, knowing about them helps improve the platform.
If you think fixing these bugs is a pain in the neck, fine. If you think it's a waste of time, however, think again.
Re:platform-specific bugs? Doubtful (Score:5, Insightful)
1) identifying subtle bugs
2) preparing software for future platforms
- Subtle Bugs -
As stated in the parent post, porting software to various platforms helps uncover bugs that may not surface during routine testing in a monoculture.
- Future Platforms -
Making software portable prepares software for the future. As computer technology advances, software that has been developed to be portable will be the first code running on the new hardware. I could go on and on about this, but I think the recent articles regarding Intel's Itanium already make this point loud and clear.
Re:If they only affect exotic platforms it is a wa (Score:3, Insightful)
Users don't care about 90% of the users. They care about themselves. Why should they spend money to buy a new PC just because other people can't take the time to think ahead in their designs?
Also, I hope you don't think 90% is a good cut-off. If you're happy with assumptions that only hold 90% of the time, I sure hope none of your software is running on my system.
Besides, if you're talking about assumptions that cover "90% of the users", you certain
Fine until some future bug bites you in the ass... (Score:5, Insightful)
Think, for example, about 64-bit cleanliness. A piece of software which supported Alpha, UltraSPARC64, SGI's MIPS64, and so on would have been fairly trivial to port to IA64, AMD64, and PPC64 when they started to become significant. OTOH, code which assumed it was running on a 386 would have been a pain in the ass to port to even just AMD64.
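A made-up but representative fragment of what "386 assumptions" means in practice: stuffing a pointer into an int works fine in a 32-bit world and silently truncates on LP64 platforms like Alpha or AMD64. The C99 intptr_t type is the portable spelling:

#include <stdint.h>

/* Hypothetical: stashing a pointer in an integer "handle". */
void broken(void *p) {
    int handle = (int)p;        /* fine on 386, truncates on LP64 */
    void *q = (void *)handle;   /* q may no longer equal p */
    (void)q;
}

void portable(void *p) {
    intptr_t handle = (intptr_t)p;  /* wide enough for a pointer, by definition */
    void *q = (void *)handle;       /* round-trips on every platform */
    (void)q;
}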
Also, by supporting a broad spectrum of compilers, you will probably be able to understand what is going wrong when your compiler of choice changes. Witness the code breakage on gcc 3. Devs who had already ported their software to a variety of compilers were better able to respond to any issues and fix their code.
Many monoculturalists make stupid endian-ness assumptions. Now, Mac OS X is becoming a significant market. If you have stupid endian-ness assumptions, then you may wind up having to basically rewrite in order to gain access to those millions of potential customers/users.
Imagine if OpenGL only supported SGI and 386. Or libtiff only worked on i386. People just wouldn't use them. Things like that get used because they are ubiquitous, and you can build them anywhere.
Portable code is robust code (Score:4, Insightful)
Of course, you can overdo it. Take a look at InfoZip for example. No, seriously, take a look at it. It works on every platform you can think of, but the price is that the code is almost unreadable. The biggest problem is all the cruft needed to maintain 16-bit compatibility. It desperately needs updating to handle non-ASCII filenames intelligently, but the last thing that code needs is another layer of #ifdef's.
There comes a time when you just have to say "fuck the Amiga".
Re:Portable code is robust code (Score:4, Funny)
Re:Portable code is robust code (Score:3, Interesting)
I have seen this phenomenon but it certainly doesn't have to be that way. One of the best kept secrets of having both cleanliness and portability is to eliminate #ifdefs from sources as far as possible. The obvious ideal is to have identical source and abstract platform differences by other means (e.g. macros instead of #ifdefs, and using identical APIs on different platforms, supplying only platform dependent implementations).
Keeping it clean probably al
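A sketch of that structure, with invented names (this is the pattern, not any particular project's code): one header declares the platform-neutral API, each platform supplies its own implementation file, and the build system picks exactly one, so callers never see an #ifdef.

/* plat_mutex.h -- the only file callers include; no #ifdefs anywhere. */
typedef struct plat_mutex plat_mutex;
plat_mutex *plat_mutex_create(void);
void plat_mutex_lock(plat_mutex *m);
void plat_mutex_unlock(plat_mutex *m);
void plat_mutex_destroy(plat_mutex *m);

/* plat_mutex_posix.c -- compiled only on POSIX targets; a Win32
 * version would wrap CRITICAL_SECTION behind the same signatures,
 * and the makefile picks one file per target. */
#include <stdlib.h>
#include <pthread.h>
#include "plat_mutex.h"

struct plat_mutex { pthread_mutex_t m; };

plat_mutex *plat_mutex_create(void) {
    plat_mutex *p = malloc(sizeof *p);
    if (p) pthread_mutex_init(&p->m, NULL);
    return p;
}
void plat_mutex_lock(plat_mutex *m)    { pthread_mutex_lock(&m->m); }
void plat_mutex_unlock(plat_mutex *m)  { pthread_mutex_unlock(&m->m); }
void plat_mutex_destroy(plat_mutex *m) { pthread_mutex_destroy(&m->m); free(m); }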
Re:Portable code is robust code (Score:3, Insightful)
writing 5 different copies of the same function is often more counterproductive than one with 5 little #ifdefs in it.
moving code out into separate modules can introduce other problems, such as the increased effort to keep all the individual architectures coherent -- which increases the risk of bugs creeping in. this appr
Ummmm . . . (Score:5, Insightful)
Unfortunately, I don't think that Ulrich is doing that.
AIX is not a minority platform. What The Fuck. Okay, so the AIX guys are asshats in the way they treat GCC, fine. But GCC's claim to fame is that it is the cross-compiling, multiplatform compiler du jour. I think Ulrich loses a lot of credibility by saying that GCC should not support AIX because it's a minority platform.
*nix applications which run primarily in userspace should port to the various BSD's and Linux easily, and if they don't then 99% of the time it's a bug. And in many cases, it's a bug that will affect the working platforms eventually (relying on nonstandard behavior of system calls, linker oddities, assumptions about file placement, etc). And if a closed Unix platform has paid developers to assist in the porting, then it should run on that platform too. And if the paid devs are dickbrains, then a good project leader should say so. Behave, or fork and get your whining ass out of my tree.
These AIX GCC guys shouldn't be saying "This patch breaks AIX, kill it", they should be saying "This patch fixes *blank* on AIX", at least most of the time.
Re:Ummmm . . . (Score:3, Interesting)
I've watched the GCC list; David Edelsohn has consistently worked with the other people on the list. He takes the time to work out a solution to the AIX problems. OTOH, when Ulrich Drepper thinks something is broken on GNU/Linux, he will fight tooth and nail for the solution he thinks is right, as long as it doesn't mean that he has to clearly explain why his solution is right.
Big big BIG ditto (Score:3, Informative)
All of what dvdeug says here is true. I've done more than watch the list; I'm a maintainer, and David Edelsohn (an IBM employee) has always been willing to work with the GCC community. He even pushes the IBM developers and management to make each release of AIX slightly less bizarre than the previous release. Ulrich simply insults you if you disagree with him. He does not participate on the GCC lists; when he does send a message, it's a flame. I've never seen an explanatory email from him, on any topi
He's crazy. (Score:3, Insightful)
This line of thinking is a lot like how I presume Microsoft thinks of things: if we just port to this one API, it doesn't matter how bletcherous it is. But as Microsoft has discovered, this kind of thinking actually turns into a straitjacket, which prevents them from being responsive when they need to be.
get some perspective (Score:5, Insightful)
GCC is the de facto standard because it runs on more platforms than anybody else.
If it ceases to run on all these platforms, one of three things will happen:
a) the project will fork into one that does support them;
b) another compiler will take its place as the de facto standard; or
c) people will be forced to use whatever the default cc is on their OS.
In any of these cases, the portability concerns will get an order of magnitude worse.
I categorically disagree. (Score:5, Insightful)
also, if it takes you a lot of effort to keep architecture-nimble, there is something fundamentally wrong with your design. this in itself should be a warning.
But there is no benefit at all in supporting something like PA Risc, m68k, cris, etc as configurations of the generic code.
ulrich obviously has no clue whatsoever about embedded systems, and should therefore stfu on this point. one of the most popular embedded platforms is a 68k variant (coldfire) -- it's probably second behind ARM. by dumping support of 68k you castrate linux in the embedded marketplace. there's much more to 68k linux than sun3 and atari/amiga.
his rant against mingw as "undeserving" is stupid. mingw is an enabler -- it means people can develop for win32 without having to pay microsoft $$$$ for the privilege of doing so.
his 'dictatorship of the minorities' argument is actually self-defeating on this point because microsoft users are in the majority. by his own arguments, we should be concentrating on supporting win32 as the primary target for gcc and primary architecture for linux.
utterly ridiculous.
bug-eyed rants like his just serve to reinforce the stereotype that all open source advocates are completely unhinged. it is not helpful in the least.
Re:The platform comes... (Score:4, Interesting)
I've actually done some fairly hefty optimizations (real-time array processing on an original Sparcstation, for a nuclear research facility) and know the sort of problem you are talking about. The best thing to do, once you have a working, solid, implementation is profile it and find the areas that - if accelerated - would offer the best improvement in performance.
The problem with having so little time - in your case a month - is that you can't optimize cleanly. You've got to hit the code in the best places first and dig your way through as far as you can in the time.
Typically, slow areas will involve moving lots of data, number-crunching and I/O. You can't do much with graphics I/O, unless you've some good specs on the hardware and know assembly. (That's one reason I learned assembly.) Number-crunching is tough, too - just do as little as possible, do what you can when the system is idling, try to merge operations, and stick to integer arithmetic as it is so much faster. Moving data is easy - don't, unless it'll absolutely kill you. Do everything in-situ, if you can. (I wrote a Mandelbrot generator that did ALL arithmetic inside the 80x87 coprocessor - nothing loaded, nothing saved, which meant no slow transfers. It was still slower than fractint, but it was a lot faster than any other floating-point fractal generator out there.)
I don't agree either (Score:3, Insightful)
There are bugs that just don't get flushed out until you port to: non-x86; 64-bit; big-endian; Win32; OS X; etc, etc, etc. Drepper should know better: All the world's not a VAX, etc. (though a VAX port is a fine start :-)
Also, every port makes the process of porting itself easier. It's no coincidence that the most reliable and defect-free software is typically the most-ported software. This has always been true: TeX and METAFONT (where the monetary bug bounty [tug.org] doubled for every bug report, so assured was Knuth of its quality); Apache; Linux itself; NetBSD; GCC and friends; etc.
Hypocritical (Score:5, Interesting)
And, of course, you write your code to be portable because you make sure it runs on the big three: Windows, Mac OSX, and Linux.
Right?
Actually, I think a much larger problem is just that: Many OSS developers don't even try to support Windows. Yes, I know you hate the OS and don't want to support Microsoft, etc., etc.. But, how can you complain about major software not supporting Linux when you're writing your own software that doesn't support Windows? Isn't that entirely hypocritical?
My take: Port your software to every platform you can, especially Windows. This gives freedom of OS to your users. And if you're a Linux user yourself, you should understand just how valuable and important this freedom is.
You say that like its a bad thing... (Score:3, Insightful)
The bugs one finds on "minor" platforms usually end up being bugs on the "major" platforms you just haven't found before. Of course, for those of you still intent on/forced to write code in C/C++, you're likely getting your just deserts.
He is wrong on all counts. (Score:5, Insightful)
It is rare that I can say someone is wrong on all counts, but I have not found one defensible statement in there. (Though I guess one could be hidden and I missed it)
His first mistake is thinking GNU is everything. Maybe for him it is, but most of us use what works. When the boss sits me down at an AIX machine, I want it to work - I'm not allowed to install Linux (though I'd install *BSD if I could wipe the OS); I'm supposed to get work done.
Minorities are useful despite the cost of working with them. Bugs that are one in a million may happen every time on AIX. One-in-a-million bugs are very hard to find. I've spent days looking at a partial crash trace wondering why it broke, and whether it will happen again. With no known way to duplicate the bug it is really hard to fix, and hard to be sure the 'fix' works. When it fails every time, the bug is easy to fix.
Good programmers should have no problem writing cross platform code. When your code breaks on AIX, it is a sign of bad code - even if the breakage is because AIX doesn't have a function you expect.
Cross-platform compilers (gcc) are much easier for me to work with. Because gcc is cross-platform I can compile my stuff at home and debug it, then bring it to work, compile it, and assume it works. Particularly with gcc 2.95, the support for C++ was so bad that you could not count on code written for it to work on a better compiler.
Speaking of gcc 2.95, other vendors have had better compilers for years, while gcc is only now arriving. Even today, gcc isn't a great C++ compiler (though 4.x is much better). There is no point in throwing stones at other vendors - their compilers may have been expensive, but they at least worked close to right.
The upper/lower case differences with Windows are a non-factor. You should never have any word that differs by case only - it leads to many bugs if you do.
The API differences on Windows are mostly handled by Cygwin and MinGW. Those areas that are different are places where you should have your code modular anyway. Mostly we are talking about device and networking code. IPv6 is on the way (has been for 10 years now...), so you need some different code to support that. There is no standard for device code - what works on OpenBSD won't work on Linux, or FreeBSD.
True, almost nobody cares about VAX - but it is interesting anyway. If your code is modular like it should be, then supporting those weird things isn't a big deal: you write your code, and let those who care about a platform do the testing.
A short summary of his position: there should be only one OS that anyone runs: Red Hat Enterprise Linux on x86 (not x86-64). Not Fedora Core, much less Gentoo or those other non-Red Hat distributions. You FreeBSD people can go to hell.
He wants to take his ball and go home. I don't care; we are better off without people like him in the open source world.
Re:He is wrong on all counts. (Score:4, Informative)
cygwin does deal with api differences however. it's a completely different beast.
it's rare I think someone is *right* on all counts (Score:3, Insightful)
It makes you read the docs. It forces you to use a standard API when byte order is important. It keeps you from hardcoding values (e.g. sizeof(void*)). It keeps you from making platform-dependent optimizations that might not even be supported by the next version of the platform you're on - or, if you do make them, it forces you to make them modular.
It forces you to figure out what behavior you can rely on. Bug compatibility with older versions relies on the magnanimity of the m
Re:He is wrong on all counts. (Score:3, Interesting)
I completely disagree (Score:5, Insightful)
Second, platforms are not stagnant. Code that only works on 386 Linux may some day have to deal with an x64-only world. Who knows what may happen in the future. Making decisions because you reject portability means you reject the future for your code as well.
Third, different compilers are very useful for finding less obvious bugs. Ideally this means having a choice beyond gcc, if one is talking about C/C++, for example :). Using a single compiler means bugs your compiler doesn't itself know will likely be retained. Even using different versions of gcc can help. Different compilers often are good at finding completely different sets of bugs in source.
Finally, pointer/integer size and endianness prejudices are evil in C/C++ code. You will find these things very quickly if you spend your whole life exclusively on i386 and one day try to port to PPC.
Axe to grind (Score:5, Informative)
Judge for yourself. Go read the gcc list. Count the number of patches backed out in the past year because they broke AIX vs because they broke some other platform.
It sounds like an unnecessary personal snipe, which, for people who know Uli, well, i won't bother finishing that.
So if this is the most "notorious case" Ulrich's got, then he's wrong.
Particularly the "GCC would be developed much faster".
That is in fact, the funniest thing i've heard all day.
GCC would be developed faster if there were less sniping, fewer fiefdoms, and more collaboration. Which, except for a few people, is what has generally been happening. Our development process is accelerating, not slowing down.
And it certainly isn't slowed down because people need to bootstrap on AIX, which they don't.
Nobody has ever required that patches be bootstrapped on AIX unless they are very likely to have some material effect on that platform.
This is just the same requirement we pose for any wide ranging change: Test it by compiling it for the architectures it is likely to break on.
Note i didn't say running. We don't require anyone have AIX boxen around. Cross compiles work fine.
Though if you break some architecture, you are expected to at least try to help the maintainer of that arch fix it.
Not true for some projects (Score:4, Interesting)
However, there is one non-minor and weird platform which actually does generate a lot of traffic on the list, and is strange to most developers: anyone checking The GIMP Bugzilla will find a lot of open bugs for the Microsoft Windows platform. That, however, doesn't slow the project down either. It simply goes on, and the developers who work on compiling and making the Windows installer do what they can for the workarounds.
Doesn't work that way (Score:3, Insightful)
If there's a problem with developers being bossed around into doing niche work with no compensation, and they don't like it, they need to stand up for themselves. For example, if IBM wants gcc to work well on AIX, they should either make it happen themselves or pay the gcc developers to better look out for their interests. If, on the other hand, the gcc developers are well compensated for fixing AIX problems (I don't know what the situation is), then there's no problem, except in the eyes of bystanders who don't understand the situation.
WIndows, Symbian and other minor platforms (Score:3, Informative)
And I can't switch to Linux: all my projects are for Windows and Symbian (and the Nokia SDK is Windows-only; the homegrown Windows port requires Wine anyway). And all the time I'm telling my clients and coworkers: look how much more convenient OO is than Word, how Firebird is safer, what nice features the Gimp has, and as for Axiom and Maxima - well, you don't have to pay several thousand $/year for Mathematica. Dropping support for "minor" platforms would be a huge discouragement for people to use OSS. Don't forget that some OSS projects are designed mostly for non-Linux platforms. The Vincent OpenGL ES implementation is oriented toward PPC/Symbian and doesn't make much sense for desktop Linux.
Define Minor (Score:3, Insightful)
Open Source is always about developer headcount, not user headcount.
Half the Linux distros have fewer developers than the average BSD. Let's kill them off.
Primary and secondary platforms (Score:3, Insightful)
For secondary platforms, patches that make the application work on those should be accepted and encouraged, but releases won't get delayed, and new features can be accepted even if a solution for the secondary platform has not been found. In general, users of a secondary platform should not rely on the official releases, but get their code directly from the maintainer of the secondary platform (or from a CVS branch).
Which platforms are primary and which are secondary should depend on the application. For easy-to-use end-user stuff like Firefox, IA32 GNU/Linux, IA32 MS Windows and Mac OS X would be a good set of primary platforms. For GCC/binutils/gdb the set needs to be much more varied, and include popular embedded platforms. The strength of GCC has always been portability and cross-compilation. It has only rarely been the best compiler for native compilation on popular platforms.
/. subtitle not well chosen (Score:4, Insightful)
Seriously, isn't this the same Ulrich Drepper who can't even be bothered to get glibc right? glibc incompatibilities -- even in patch versions -- are a major headache on Linux. Compare that to those ``obsolete'' platforms like AIX and Solaris, where I can still run binaries that I compiled in the early 90s or even the 80s. glibc is one of the main reasons why Linux application deployment sucks in major (read: heterogeneous) installations. Kernel differences are actually not as problematic, but glibc bites us every day.
He has already shown that he won't bother with people who run computing centers. Here he is, spouting more hobbyist opinions. Nothing new; move along.
Re:/. subtitle not well chosen (Score:3, Informative)
Re:/. subtitle not well chosen (Score:5, Informative)
This is what Marc Espie, an OpenBSD developer, said about Ulrich on O'Reilly's OnLamp [onlamp.com] (commenting on the proactive measures OpenBSD takes in C programming vs. Ulrich's "Linux programmers are geniuses" view):
"We have had a lot of success explaining the issues and getting a lot of people to switch from strcpy/strcat to strlcpy/strlcat.
Weirdly enough, the Linux people are about the only major group of people that has constantly stayed deaf to these arguments. The chief opponent to strlcpy in glibc is most certainly Ulrich Drepper, who argues that good programmers don't need strlcpy, since they don't make mistakes while copying strings. This is a very mystifying point of view, since bugtraq daily proves that a lot of Linux and free software programmers are not that bright, and need all the help they can get.
(Considering the shining, flaming personality of Drepper, and the fact that he is always Right, this is not that surprising, though)."
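For anyone who hasn't met the interface in question: strlcpy takes the full size of the destination buffer, always NUL-terminates, and returns the length it tried to copy, so truncation is detectable. A minimal sketch (note that on glibc you would have to ship your own strlcpy implementation, which is rather the point):

#include <string.h>  /* declares strlcpy on the BSDs; not in glibc */
#include <stdio.h>

void demo(const char *untrusted) {
    char buf[16];

    /* strcpy(buf, untrusted);  -- overflows buf if untrusted is long */

    /* strlcpy never writes past sizeof(buf) and always terminates;
     * a return value >= the buffer size means the input was cut. */
    if (strlcpy(buf, untrusted, sizeof buf) >= sizeof buf)
        fprintf(stderr, "input truncated\n");
}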
Implications of dropping support for non-free oses (Score:3, Insightful)
Even more important, this type of attitude (( flame me, but I'd call it bunker mentality )) harms collaboration between open source projects, and also between commercial software vendors and open source projects. (( if you don't know what I'm talking about, take a look at the copyright notices for g++'s STL headers ))
To make the point clearer, let's take Ulrich's ideas a bit further. From a BSD purist's point of view, GPL licensed software does qualify as "non-free".
What if e.g. the OpenSSH guys decided to drop support for non-free operating systems such as Linux, particularly commercial distributions like Redhat that include proprietary code?
"Of course, you Linux guys may always maintain a separate tree that includes supports for those exotic systems."
So we'd have X people who could be working on something way more useful, trying to keep a forked tree in sync with the original project. Great.
What was actually said, Minorities MUST help (Score:3, Insightful)
make no sense; to paraphrase: mainstream developers, using common architectures (which will change over time), should not hold themselves hostage to proprietary, minority or legacy platforms ... makes this impractical in any event.
This makes complete sense. If, as is actually the case, HP, IBM & Sun have, by incompetence or greed, placed themselves in a position where their platform _depends_ on GNU tools, then they need to spend some support revenue on the tool-chain, and provide gratis platform access. This is how it used to be before Red Hat bought Cygnus.
Finally, no one is going to deprive legacy platforms; they have to do the work, pay, or resign themselves to a feature freeze.
A bug is a bug. (Score:3, Insightful)
Re:Let's extrapolate, shall we? (Score:2)
Re:Sounds like... (Score:5, Insightful)
Hypocrite... (software should be free! So I'm holding mine hostage until you release yours!)
And let's imagine what would happen if the vendors of those proprietary OS's decided to play the same way...
Say goodbye to NFS, NIS, Rendezvous, Mono. Perhaps Sun would close Java (which would make the conspiracy theorists happy, but nobody else.) Goodbye to the largess of many large vendors (where do you think organizations like FSF get their funding?).
And that crack about "ideally to one". Linux is a nice OS and all, but believing that it is the one for all purposes from PDAs to supercomputer clusters, data warehousing to real-time control is silly.
Re:MS Windows support must be dropped (Score:3, Insightful)
Do Gimp and Gaim work on Solaris, or the BSDs? Linux does not own these apps, or Gtk+.
Linux is a kernel.
And neither you nor anybody else is in a position to say "must be dropped".
Re:MS Windows support must be dropped (Score:3, Insightful)
It's true to say that porting OSS apps to Windows improves the Windows experience by providing Windows users with some good-quality software for nothing, but this IMHO is a good thing. For me, OSS software is about improving software for everybody, and that includes Windows users.