Why Open Source Doesn't Interoperate 212

bergie writes "There is an interesting article on Advogato on why it is so difficult for Open Source projects to interoperate or support common standards. Often cultural differences between projects, egos, and many other issues stand in the way. The article outlines some practical ways of improving the situation, based on experiences from OSCOM efforts to get support for WebDAV, SlideML and other standards into Open Source CMSs. Examples of successful interop projects include freedesktop.org, the cooperative effort between GNOME and KDE."
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Wednesday May 07, 2003 @07:39AM (#5899855)
    RMS is the reason! I GNU it all along!
  • by Anonymous Coward on Wednesday May 07, 2003 @07:40AM (#5899859)
    To make software interoperate, developers need to create interfaces before writing their software. Often no planning is done and developers start writing their code without a clear vision of what they want to write.

    There is a certain overhead in creating interfaces. They take time to develop, and if they aren't any good, you'll be stuck with them for years. Even Linus is against the creation of standard interfaces internal to the Linux kernel. That decision inhibits the creation of a truly modular system.
    • by akadruid ( 606405 ) * <slashdot.thedruid@co@uk> on Wednesday May 07, 2003 @07:49AM (#5899887) Homepage
      Unfortunately a lot of code writing is done with inadequate planning, but this is an inherent problem. Without a crystal ball you cannot predict every twist and turn you will face.
      This is not just a problem with free, Free or open source software, but also with planned, structured development of commercial model software such as Windows.
      After all, if we could predict the future, why would we ever need new versions? :)
      However I don't think that a neatly integrated environment is impossible, just difficult.
      Besides, getting around integration issues is part of the fun!
      • by redragon ( 161901 ) <codonnell@NOSpAM.mac.com> on Wednesday May 07, 2003 @08:11AM (#5899970) Homepage
        This is not just a problem with free, Free or open source software, but also with planned, structured development of commercial model software such as Windows.

        However, I would venture that there are more commercial products with some sort of functional and/or design specification.

        Not to say that these specifications mean anything when they're done by incompetent people.

        I guess my only point is that it certainly wouldn't hurt to have not only more communication among OS development teams, but a little more planning about their interfaces. Why not solicit some software engineers who have a background in this sort of thing, and not just coders?

        And lastly... Everyone seems to be overlooking the fact that so many people doing OS projects do have the "not invented here" mentality. However, using middleware takes time to research and test, not to mention that you've got to have a better idea of how things interop before that's helpful. So, it does come back to the issue of planning.

        Just my thoughts...

    • by CommandNotFound ( 571326 ) on Wednesday May 07, 2003 @07:56AM (#5899913)
      Often no planning is done and developers start writing their code without a clear vision of what they want to write.
      ...and this is different from commercial development, how? Or rather, is your company taking resumes?
    • by You're All Wrong ( 573825 ) on Wednesday May 07, 2003 @08:15AM (#5899984)
      Your subject line says it all. Well, nearly...

      I'd say the two biggest sins of the open-sourcers are
      a) over-generalisation (it'll be able to do everything)
      and
      b) over-specialisation (it does one task, but can't do similar ones)

      I'm finding it hard to think of examples, but I guess GNU grep's an OK example of something that's just about right.
      Expanded to do enough things like context greps (e.g. give me 4 lines before the line containing "Name:" and 1 line after), and a few other features (e.g. '-c' so that you don't need to '| wc -l') that add to its functionality, so it isn't over-specialised. Likewise, it's not sed, awk or perl; they realised that just keeping it simple and lightweight was the way to maximise its usefulness.

      YAW.
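      The context-grep behaviour described above (grep's -B/-A options and -c) can be sketched in a few lines of Python. This is only an illustration of the idea, not how GNU grep is implemented; the pattern and window sizes are the comment's own example.

```python
import re

def context_grep(lines, pattern, before=4, after=1):
    """Yield each match as a list of lines: `before` lines of leading
    context, the matching line, and `after` lines of trailing context,
    like grep -B4 -A1."""
    regex = re.compile(pattern)
    for i, line in enumerate(lines):
        if regex.search(line):
            yield lines[max(0, i - before):i + after + 1]

def count_matches(lines, pattern):
    """Like grep -c: count matching lines without piping to wc -l."""
    regex = re.compile(pattern)
    return sum(1 for line in lines if regex.search(line))
```

      Built-in counting is exactly the kind of small, non-over-specialised feature the comment praises: it saves a pipeline stage without turning grep into awk.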
    • To make software interoperate, developers need to create interfaces before writing their software.

      Interoperability relies on standards that come from standards bodies. As long as the software conforms to those standards and works correctly, it interoperates. Interoperability with standards requires less planning on the part of the programmer because the standards have already been written by other people.

      Often no planning is done and developers start writing their code without a clear vision of what t

    • by truthsearch ( 249536 ) on Wednesday May 07, 2003 @08:47AM (#5900137) Homepage Journal
      From an interview with Rob Short [mattschwartz.net], the vice-president of Windows Core Technology, regarding Microsoft® Windows(TM) Server 2003:

      Why is there no command line only version?

      We're looking longer term to see what can be done, looking at the layers and what's available at each layer and how do we make it much closer to the thing the Linux guys have -- having only the pieces you want running. That's something Linux has that's ahead of us, but we're looking at it. We will have a command line-only version, but whether it'll have all the features in is another matter... It's a very tangled subsystem.

      <plug style="shameless">
      As I also explain at MS Versus [mattschwartz.net], Bill Gates has testified in federal court that Microsoft® can't modularize their operating system or document all of its APIs because it's written by groups of developers haphazardly binding software together without any clear overall design.
      </plug>
    • by jc42 ( 318812 ) on Wednesday May 07, 2003 @09:22AM (#5900306) Homepage Journal
      Well, I dunno about that. I've worked on any number of "planned" projects, and in every one, the same thing has happened: The management comes along and changes the requirements. So you have to kludge the plan to handle the new requirements. Then they do it again, and again ...

      I don't think I've ever seen a planned, commercial project that interoperated with anything else. Usually you even have to write new editors because all the file formats are proprietary. Just looking at the data in a file often requires a new program, because it's a new, proprietary format, and not even your vendor's software can read it.

      OTOH, I have some "C/unix" programs that I wrote 20 years ago. I've reused them on dozens of different kinds of computers, and usually about the only problems are that the #include lines need to be rewritten. Sometimes I have to change a few variables to a different size int. But there is rarely any problem with my programs interoperating with other programs. This is shown by the fact that I always have a lot of scripts that combine my programs with other programs.

      The project I'm working on now consists essentially of "raiding" the data at a big corporation, and extracting as much info as I can from their old, poorly-documented files that are in formats that can't be read on any new computer. My code converts them to plain text, mostly flat files, a few HTML and XML files. They can be read on any computer that we copy them to. They can be dropped into a web directory and read from any browser on any computer.

      I'm doing all my work on linux boxes. I keep pointing out to them that my converted files not only work ("interoperate") with most any linux software, but will work on nearly any computer out there, and will continue to work for years. They believe me, because when they copy the files to their other random computers, their programmers don't seem to have problems with the file formats.

      Of course, one of the things they really want is to take the data and stuff it into a commercial database. The result will be data that can only be used by a tiny amount of software from that database supplier. Well, that and the perl DB package (which we will note is Open Source). But note that they are doing what corporate management always wants to do: Put the data into a format that can't be easily read by other programs.

      In summary, I'd say that the Open Source environment, and more generally the non-proprietary parts of commercial unix systems, are by a large margin the best place to find interoperability. Closed, commercial systems can generally be summarized as "Your data won't work anywhere else, and won't even work on the same vendor's systems 5 years from now."

      At least that's the history that I've seen so far.
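      The conversion approach described above (legacy records out, plain text and XML in) can be sketched with nothing but the standard library. This is a hypothetical illustration of the idea, not the commenter's actual code; the record fields are invented.

```python
import csv
import io
import xml.etree.ElementTree as ET

def records_to_flat_text(records, fieldnames):
    """Write records as tab-separated plain text: readable on any
    machine, greppable, and diffable for decades."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, delimiter="\t")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

def records_to_xml(records, root_tag="records", record_tag="record"):
    """Write the same records as a small self-describing XML document."""
    root = ET.Element(root_tag)
    for rec in records:
        elem = ET.SubElement(root, record_tag)
        for key, value in rec.items():
            ET.SubElement(elem, key).text = str(value)
    return ET.tostring(root, encoding="unicode")
```

      Either output can be dropped into a web directory and read from any browser, which is the interoperability property the comment is arguing for.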

    • by kasperd ( 592156 ) on Wednesday May 07, 2003 @09:44AM (#5900479) Homepage Journal
      Even Linus is against the creation of standard interfaces internal to the Linux kernel.

      First of all it is important to keep in mind, that there are two different kinds of interfaces:
      1. Internal interfaces between parts of the kernel and kernel modules.
      2. External interfaces used by user mode applications.
      The latter aim at compliance with UNIX and POSIX standards. It is only the internal interfaces that can be changed without notice. But note that as long as the interfaces are changed throughout the entire kernel, users are not affected. It does mean you cannot mix code from different kernel versions, but end users are not supposed to do that.

      Of course the changing interfaces are a problem for anybody who wants to develop closed source driver modules. But Linus and other kernel developers do not like closed source modules, since such modules impose a problem on anybody who wants to debug kernel problems. So Linus certainly doesn't want to stop kernel development in some areas just to help developers of such closed source modules.

      Interfaces are not changed without a reason. Some developers of closed source modules might think interfaces are changed to annoy them, but of course that is not the case. However, they do have to understand that support from the Linux community cannot be expected before they open their own source. Hardware vendors must understand that writing a closed source module does not make their hardware supported by Linux; an open source driver is needed before full support exists.
  • by Anonymous Coward on Wednesday May 07, 2003 @07:43AM (#5899868)
    They say open source is good... They say open source is bad... They say open source is in-expensive when... They say open source is expensive when... They say open source is good when it comes to... They say open source is bad when it comes to...

    I wonder who of them is actually using open source for anything other than writing redundant articles about it... I can pull as many pros and cons out of my ass when it comes to Windows. But nobody would care. Nobody cares because everything has already been said. I don't care. I don't read articles before I comment.
  • by Anonymous Coward on Wednesday May 07, 2003 @07:43AM (#5899871)
    ...that open source authors prefer solutions they like over "standard" solutions.

    Industry standards, particularly those created by committees, are often abominations that people only use if they have to. In my experience, the extent to which people like things like CORBA and XML often seems to be inversely proportional to their level of technical sophistication.

    RFCs have much more respect in open source circles than committee-created standards.
    • ...that open source authors prefer solutions they like over "standard" solutions.

      To add to the reply on this, I think that this particular "problem" is in fact quite an advantage to OS development. Fabricated standards are often not as good as they can be, and are only revised after a long, long time. As far as I know, W3C [w3c.org] isn't that quick on updating their standards, for example. An OS developer that implements a solution "he likes" does this for a reason, and can show that there's a problem with an old
    • by gl4ss ( 559668 ) on Wednesday May 07, 2003 @08:14AM (#5899983) Homepage Journal
      heheh..

      this reminds me of a very nice company that has had a habit of making closed-source, lock-in products that mostly don't give a piss about standards; they work just well enough to allow using other products, but are then so broken/intentionally closed that those other products can't be used well with them..

      open source can be just as broken as closed source, but you sure as f* are more likely to get your own product to work with an open source solution than a closed source one.
      • Programmers work in the real world, standards committees work from inside the time-capsules where they have been carefully protected from all outside influences these past 7 thousand years.

        Programmers improve upon standards.
        Open Source guys no less than the EULA-writers. The big difference is:
        When an open-source guy extends a standard, the changes he makes becomes available for all to copy and interact with.
        Even if he never writes so much as a man page for the change, you always have the source. Besides mo
    • RFCs have much more respect in open source circles than committee-created standards.


      I think that is probably it, other things may influence it, but that one takes the cake. Large committees are often a sign of the second-system effect on something. I think many open source authors try to avoid the second-system effect as much as possible and thus tend to avoid committee driven standards.
    • RFCs have much more respect in open source circles than committee-created standards.

      highly agreed. this is our experience too, do interop in small steps, use what works. we called this the "do a prototype in a day" approach in the article :)

      -gregor
    • Hehe, "their level of technical sophistication", very nice! It's syntactically and semantically ambiguous whether "their" means the standards or the people, yet both would be correct ;)
    • Open data formats is where it's at. Interoperable software interfaces are far less important, at least now that the Internet standards (tcp/ip, http) have established connectivity. Drag and drop is relatively unimportant. OLE is comparatively pointless.

      The single most basic, important standard should be for business documents. let's talk about how many commercial products support .doc! Then let's talk about how many different compilers are available for C# and VB. And how many different commercia

  • It's the ego. (Score:4, Insightful)

    by Martigan80 ( 305400 ) on Wednesday May 07, 2003 @07:50AM (#5899891) Journal
    The hardest thing to break is the ego barrier. And it is very evident in many aspects of open source. Take for example the kernel source and different programs branching off. It's great, and even arguably beneficial, except for the fact that different versions of the same program are competing against each other where the forces should be united. Again this is the beauty of Open Source, where you don't have to rely on a leader/Dictator, however you see it. The fact of this multiplied by the egos in different countries just makes this a big number. What also does not help is the fact that many programmers are not being paid such good money for the programs either. I will still use open source software at home and love it. If people want to compare M$ "standards" to Linux just remember one thing, M$ has about a ten year head start. ;-)
  • Hrm (Score:4, Funny)

    by The-Bus ( 138060 ) on Wednesday May 07, 2003 @07:51AM (#5899895)
    You'd think if geeks couldn't "interoperate" with girls, at least they would be able to interoperate with other geeks on their projects.

    *ducks*
  • by tjansen ( 2845 ) on Wednesday May 07, 2003 @07:53AM (#5899901) Homepage
    One of the reasons is that by releasing your source code you can easily create de-facto standards. VNC is a good example. It is not standardized at all, but the original implementation came with source code under GPL. Dozens of VNC projects have been created since, all inter-operable, and most of them are based on the original source code.
    • Yes, but VNC is a much more niche market. And it is a much simpler standard.
      All of the other projects are based directly on a historical product - VNC - whereas these projects can be developed alongside or before other projects, and have much wider scope.

      And there's a good reason why you can't change your posts - can you imagine the havoc when someone decides to change their +5 Interesting post to read 'First Post!'.
    • by Rich ( 9681 ) on Wednesday May 07, 2003 @08:23AM (#5900022) Homepage
      Actually, there's a pretty good specification document for VNC. I was able to write a working implementation of the protocol (for keystone) that was not derived from the ORL sources. The documentation can be found here:
      http://www.uk.research.att.com/vnc/protocol.html
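      The protocol document linked above makes clean-room implementations like the one described quite feasible. For instance, the RFB connection opens with a 12-byte ASCII ProtocolVersion banner of the form "RFB xxx.yyy\n"; a minimal parser for that handshake, sketched here as an assumption-laden illustration rather than code from any real client, might look like:

```python
def parse_rfb_version(banner: bytes):
    """Parse the 12-byte RFB ProtocolVersion handshake message,
    e.g. b"RFB 003.003\n", returning (major, minor).
    Raises ValueError on a malformed banner."""
    if len(banner) != 12 or not banner.startswith(b"RFB ") or banner[-1:] != b"\n":
        raise ValueError("not an RFB version banner")
    major, minor = banner[4:11].split(b".")
    return int(major), int(minor)
```

      A published wire format this small is a big part of why independent, interoperable VNC implementations multiplied.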
  • by nuggz ( 69912 ) on Wednesday May 07, 2003 @07:53AM (#5899903) Homepage
    The real reason stuff doesn't interoperate is that the people who want it to didn't make it happen.

    If I want my project to interoperate, I will incorporate that feature. If I don't care, I won't.

    If someone else wants it to do something else THEY have to add that feature.
    Why should I spend my time working on features for you?

    This is free software, the interoperability people could make it work, but they don't put the effort behind it, so it doesn't happen. If it isn't worth their time, money and effort, why should it be worth mine?
    • that's the classic argument "do it yourself".

      those "interoperability people" do not exist as a group. it is a mindset more developers should endorse imo. beats NIH (not invented here) and hacking grandstanding ("you do it yourself, im too leet to care") by far.

      but being interoperable (and thus replaceable) as a project may be undesirable for the egos involved.. reputation lock-in?

      -gregor
      • I don't think that's what he's saying at all.

        What he's saying is that the developer made the program to solve a problem that he encountered, or to fill a need that he had. This developer has no need to put a large amount of extra work into this project to add in interoperability features that neither he nor most of his users would actually use.

        People complain, expecting that since this guy put this program together in the first place, he should fix it so it works the way they want it to. But that's not ho
  • Various comments (Score:4, Informative)

    by srichter ( 120728 ) on Wednesday May 07, 2003 @07:59AM (#5899919) Homepage
    I have been involved a little bit with the OSCOM efforts and I am impressed again and again by how they all work together. The board members of this organization are leaders from various OS Web Application servers, all having different interests, and yet they can work together.

    I only know Paul Everitt (one of the authors) personally, who is co-founder and used to be the CEO of Digital Creations (today Zope Corp.) - therefore one of the inventors of Zope (www.zope.org). He started the Twingle project a while ago, trying to generalize the Zope effort to create a content management Mozilla-GUI for Zope 3 so that it covers all Open Source CMS solutions.

    As the article states this effort is quite ambitious, but it also shows the power OS can have. When Paul and I started working on the original code, we relied heavily on XML-RPC (it is just the easiest to use for getting anything done), but Paul has since pushed towards HTTP standards, such as WebDAV. While this is much harder (i.e. I am writing a WebDAV library in JS for Mozilla) than the original approach, it allows a lot of integration possibilities later. For example, in the future we imagine that we will be able to drag and drop objects between Bitflux and Zope and vice versa. Also, a unified GUI will allow Content Designers to gain a skill that is much less platform-specific (in the meaning of App Server and Operating System), which makes this skill much more marketable.

    BTW, OSCOM 3 will be held at Harvard University on May 15, 2003 if I remember right. So everyone interested in Web-related technologies living in Boston should drop by and check it out.
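    The "easiest to use" point about XML-RPC is easy to see from Python's standard library, where marshalling a call to and from the wire format is two function calls. The method name `cms.publishDocument` and its arguments below are purely hypothetical, for illustration; they are not part of any Zope or Twingle API.

```python
import xmlrpc.client

# Marshal a hypothetical CMS call into an XML-RPC request body.
request = xmlrpc.client.dumps(("front-page", "<h1>Hello</h1>"),
                              methodname="cms.publishDocument")

# The receiving end unmarshals it back into native values.
params, method = xmlrpc.client.loads(request)
```

    Because both sides agree on the same tiny spec, any XML-RPC client can talk to any XML-RPC server, which is exactly the interop property the heavier WebDAV route also aims for.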
  • by Anonymous Coward on Wednesday May 07, 2003 @08:00AM (#5899920)
    To be honest, the headline is sensational and the document itself has limited content from which to draw conclusions. Most critically of all, however, is the assertion that Open Source projects refuse to interoperate, based upon the experiences of a monolithic organisation trying to get OSS CMSs to implement WebDAV.

    To be honest, I've written several CMSs, contributed to others, and done 90% code re-writes on others to suit my needs. All OSS.

    The thing is, you'll find that many Open Source CMSs don't always support LDAP or a host of other standards. Why? Because they don't need to. PHP Nuke, for example, is a fine project for producing small-to-medium community/corporate content-driven web sites. It isn't perfect, but it is modular, and a bit of work will allow you to produce some very nice, functional projects. It doesn't need to support yet another protocol, WebDAV throughout and then SlideML on top.

    PHP Nuke is, in fact, a combination of other projects brought together and welded into a final package. That's interoperation for you...

    What about the OASIS initiative, where open source projects are trying to produce XML-based standards for office documents to ensure long-term data access and inter-operability? What about X? VPNs, secure communications, file sharing, standard web protocols and a hundred other examples of OSS collaboration, where proprietary companies are digging their heels in to try and jealously guard their marketshare?

    If you want to know why OSS CMSs don't have WebDAV support, it's because they don't need it, plain and simple. If it was important and really would make an incredible difference, they would all be supporting it. As it is, from what I can see, what is on offer is something that their code already does for the most part. They don't see the point of it, neither do their users. Oh, and they probably don't like you writing articles, saying that they don't play nicely when you arbitrarily come up with things and tell them they should implement them. :)

    To everyone else, sorry for the rant, but this article really is nonsense and insulting to everyone who works hard in the open source community on almost any project.
  • by heretic108 ( 454817 ) on Wednesday May 07, 2003 @08:04AM (#5899939)
    ...the fact that doco is often nonexistent or poor, code is idiosyncratically designed/written and poorly commented (if commented at all).

    Result is that quite often, it takes less time to implement something oneself than to understand and integrate with a 3rd party piece of software providing the same functionality.

    Too many developers think it's beneath them to write good doco, example progs, tutorials, clear easily-learned APIs and clear meaningful comments in the code.

    It's a kind of elitist 'techno macho' attitude - 'if it was hard to write, it should be hard to understand'. Too often my questions to authors of unfamiliar software are met with a terse 'RTFS!' (read the fucking source).

    This syndrome creates a fragmentation, which destroys opportunity for leverage from well-collated and well-catalogued sharable components. Which in turn makes developers' time more scarce, and further discourages the efforts to make code approachable.
    • I think you can either write a library or an application, but it is hard to do both at the same time. The goals and attitudes required to do each well are not compatible.
    • Too many developers think it's beneath them to write good doco, example progs, tutorials, clear easily-learned APIs and clear meaningful comments in the code.

      Then again, many FS/OSS developers do comment and document their code. Take a look, say, at Emacs.

      Besides, your observation applies equally to both Open Source and proprietary projects. Programmer's skill is orthogonal to his area of activity. But of course, you are less likely to spot bad code in proprietary software, for the lack of sources.
  • Blindered developers (Score:5, Interesting)

    by wowbagger ( 69688 ) * on Wednesday May 07, 2003 @08:07AM (#5899946) Homepage Journal
    One of the biggest problems I see with Free Software development is the problem of the blindered developer.

    This is the guy who doesn't bother to raise his head from the computer to look at how his project works in any environment other than *his* system. You know, the guy who requires you to have libfoo.so.5.1.2.pl6-thursday-0741am-fred-mutant1 installed just to compile his code, and by $deity no other version of the library will work.

    A concrete example: The developer of the GATOS project [sourceforge.net] (a driver for the TV tuner/video capture (but not video out) functions of ATI All-in-Wonder cards) requires you to use HIS kernel module and HIS radeon driver. As a result, you may EITHER use his code XOR the DRI accelerated 3d code, but not both.

    True, he does (to an extent) track the DRI development, but rather than working with DRI and XFree and coming up with a way his drivers can play nice with the standard builds (e.g. having hooks in the standard driver and having the standard driver load his modules if present) he is off on his own little branch.

    He also uses libraries and packages that are not part of the standard installs of common distros - as a result just getting his code working is a real slog. So many people don't do it, and his project does not get as much support as it might.

    Now, I am not picking on him - developing stuff like that is hard, since it is very poorly documented. And with DRI making changes, XFree making changes, and him making changes, you WILL have times when things don't play well together. But rather than that being a transient state of affairs, it is the normal state of the GATOS project w.r.t. DRI.

    Unfortunately, it takes time and work to stop, get a fresh install of RedHat/SUSE/Gentoo/... and see what it takes to get your code to build and install. It takes work to make sure that you really NEED the latest version of libfoo, rather than just any version. Especially when your code interoperates tightly with other people's projects it is difficult to plan interfaces that won't change frequently. If you can accept help from others this isn't so bad, but many project "leaders" have the attitude of "HOW DARE YOU IMPUGN MY PROJECT! IT IS PERFECT UNTO ITSELF! I CANNOT HELP IT IF YOU ARE NOT 31337 ENOUGH TO HAVE THE LATEST STUFF! L@M3Rz! IT IS UNDER DEVELOPMENT!"

    But that is the difference between a hack and a software engineer - just "getting something to work" and "getting something to work well, under as many circumstances as possible, as smoothly as possible."

    • by aaribaud ( 585182 ) on Wednesday May 07, 2003 @08:16AM (#5899990)
      One of the biggest problems I see with Free Software development is the problem of the blindered developer.

      ... but then one of the solutions I see with Free Software development is the solution of the enlightened (no pun intended) software engineer. For you can take this hacker's code and make it work in your environment, or backport it to DRI and/or XFree, and he'll probably be ok with it, as long as he doesn't have to support that
      • For you can take this hacker's code and make it work in your environment, or backport it to DRI and/or XFree, and he'll probably be ok with it, as long as he doesn't have to support that

        Why should a project require code to be written and THEN REWRITTEN before it meets the requirements of a broad userbase? It's wasted time and effort.

        Granted, the original coder doesn't have any obligation to write code that's useful to anyone but himself, but if he's not concerned about anyone else finding it useful, why
    • This isn't really a problem with Free software development as such. Most free software developers' time is extremely limited, so they spend it where they think it will do most good i.e. implementing new features.

      Personally I think what is needed is a kind of free software meta-project that goes round looking at other projects and providing help and suggestions on fixing the sort of interoperability issues you describe. The distro developers do this to some extent, but a distro-neutral group could be more e
  • Am I the only one around here that really doesn't interoperate that much? Aside from pasting the goatsex man picture in my emails, I really don't interoperate that much.

    I have been known from time to time to use two different text editors on the same file! Egads. ...
    But I never do something like... write a SQL query and want it to be executed from a Word document each time the user opens the file.

    In fact, I rarely, if ever, insert a Spreadsheet into a Word document. I know I can. I just don't.

    We're
    • Over on GForge [gforge.org] we're starting to "interoperate" a fair bit - we've got a Java SOAP client up and running (screenshot [infoether.com]) and it's been quite helpful in cleaning up some of the PHP code. Fun stuff!

      Yours,

      Tom

  • Because it's hard? (Score:5, Interesting)

    by IamTheRealMike ( 537420 ) on Wednesday May 07, 2003 @08:11AM (#5899971)
    I'm trying to create a packaging metadata spec at the moment to allow standardised LSB RPMs to be installed on all compliant distros using native dep resolvers etc - in short, making a standard that the relevant people can work to and are interested in is fantastically hard work. It's not just a case of writing something down and saying "Agree to it!"; everything has to be discussed and specified, and of course there is always a slight worry that you might be heading in the wrong direction etc.

    Lack of apparent interest from vendors is also somewhat discouraging. There are quite a few specs up on freedesktop.org that are only implemented by GNOME, with KDE "pending". Then for instance Mosfet comes along and claims the thumbnail spec is stupid for reasons x, y and z and proceeds to invent his own (the so-called "professional" thumbnail spec) and ask for it to replace the existing one! Not exactly encouraging.

    • very good thinking. yes, reaching consensus / interop is very hard, both for technical, and especially social reasons.

      i wonder if there are ways to overcome the ego barrier. the greater good may be a tired concept, but it applies here too..

      -gregor
    • by GGarand ( 577082 )

      If you really believe in what you wrote:
      > It's not just a case of writing something down and say "Agree to it!",
      > everything has to be discussed

      then your harsh criticism of Mosfet seems very rude and misplaced to me.

      He didn't ignore what you call a "spec" - which is in fact tagged as a "new draft" and only a
      "proposition" on the freedesktop.org page.
      On the contrary, he carefully looked at it, and issued very well-founded
      arguments ( here [kde.org]) as to why he thought it was wrong.

      Last time I heard,
    • But if Mosfet's "professional" thumbnail spec just happens to be more efficient, faster or better in some way than the "standard" I think it should be adopted. That is the nature of open source software. People don't have to listen to an "authority" because nobody is really the authority. We're all working together cooperatively, but it is a give and take kind of relationship. We don't always have to get along, do we?
  • by budGibson ( 18631 ) on Wednesday May 07, 2003 @08:15AM (#5899985)
    Interoperation means passing data between two different programs over some common bridge, which typically means writing some sort of connector code. In the best case, someone is able to bundle that connector code into a library.

    Consider coding to a web service interface such as SOAP vs. just keeping your application within one programming language and using its built-in constructs for passing data. With the web service, you either have to marshal into and out of SOAP envelopes on your own or use a library. However, not all libraries themselves interoperate. Hence, you have to test for compatibility by running against a suite of other implementations, all of which are also supposedly standard. It's the browser wars all over again. If you don't bother with interconnectivity, the job is over more quickly.

    To get an interoperability standard that you can just code to seems to take the developer community years of effort. Yes, there is value to interoperability, and that is why people do actually undertake things like web services projects and spend years trying to develop standards. But for a first project, or even a mature, successfully functioning product, interoperability is not likely to be a first instinct.
    • I'm definitely hearing you on Web Services.

      It's a great idea, that may get over-used IMHO.

      We're starting to do it to tie existing applications together to be used in-house. Why? They're all J2EE based, or moving in that direction. In fact, a while ago, the push was to get other languages to be able to talk directly to EJB's. Since then, we've added yet another layer, hence, I'm still employed.
  • by pchown ( 90777 ) on Wednesday May 07, 2003 @08:16AM (#5899989)
    I agree that it's difficult getting open source projects to interoperate. I think the problem is that interoperation is hard, often harder than developing a program that works in isolation. Writing a simple mail server would be easy; you could build it on top of something like Distributed Ruby [rubycentral.com] and have it working in a day. Writing a mail server that interoperates properly with everything else that's out there is a totally different proposition.

    Whatever the situation with open source, it's far worse with proprietary software. No open source project that I'm aware of has anything as difficult to interoperate with as .DOC.
  • Re: (Score:2, Interesting)

    Comment removed based on user account deletion
  • The great thing about standards is that there are so many of them.
  • by kahei ( 466208 ) on Wednesday May 07, 2003 @08:28AM (#5900052) Homepage
    I'm not at all sure that OSS does interoperate poorly. I would rather ask:

    Why, when there's a need to interoperate, does OSS invariably fall back on the 'chain of programs communicating via a pipe of characters' model from the 1970s, even though mechanisms for defining rich, concurrent interfaces have been in common use for ages everywhere else?

    I know there are many good reasons why pipe'o'ASCII software projects do what they do. I also know that projects like KDE have made considerable steps in what I think of as the right direction. But the lack of componentization and well-defined interfaces in Linux-style software is one good reason why I'm glad Microsoft (and even -- yechh -- Sun) still have a strong role in keeping things moving.

    (loud crashing sound as post is modded down for not being unix-centric enough)
    • > Why, when there's a need to interoperate, does OSS
      > invariably fall back on the 'chain of programs
      > communicating via a pipe of characters' model from
      > the 1970s, even though mechanisms for defining
      > rich, concurrent interfaces have been in common
      > use for ages everywhere else?

      There is another thing, called 'messaging'. It is part of the OS, at least. I guess messaging is used on many levels... I think there are some standard high-level messaging APIs as well, apart from the low-level O
    • Why, when there's a need to interoperate, does OSS invariably fall back on the 'chain of programs communicating via a pipe of characters' model from the 1970s, even though mechanisms for defining rich, concurrent interfaces have been in common use for ages everywhere else?

      One huge reason is bloat. Some applications have very robust messaging libraries/interfaces for third-party interoperability, but if you have to include tons of libraries and such... it makes you think twice. And that goes for any ty

    • mechanisms for defining rich, concurrent interfaces have been in common use for ages everywhere else?

      You must be talking about OLE. No, wait, DDE. No, now it's COM. Oops, I blinked and everyone's using .NET now.

      (disclaimer: I am not an MS developer, thank god.)

  • by jalilv ( 450956 ) on Wednesday May 07, 2003 @08:39AM (#5900105) Homepage
    A lot of Open Source projects are done because the primary developer(s) want something that is either not readily available in existing software (the original mantra of OSS) or they want certain things "their" way. Some developers are not even aware that a standard exists for what they are trying to do and will do it the way they think is best, while other developers couldn't care less about the standards. It is important to create awareness of standards and their importance (believe me, a lot of developers don't understand the importance of standards and think of them as an unnecessary burden). When a project idea comes to the mind of a developer, a lot of the time the last thing that developer will think about is the existence of any standards. Like the article described, ego and the NIH syndrome are also factors. The mentality is also that "if they developed standards, let them develop the product too. I will do it MY way." Again, this boils down to understanding the importance of standards and implementing them in your projects if one exists.

    - Jalil Vaidya
  • And I suppose Microsoft "interoperates" better?

    The premise is wrong. Interoperating is hard, often not worth it, and sometimes downright bad. But to the degree that anything interoperates, open source probably generally does a whole lot better than proprietary software.

  • Integrates fine (Score:3, Insightful)

    by salesgeek ( 263995 ) on Wednesday May 07, 2003 @08:52AM (#5900163) Homepage
    I find open source tools like perl, bash, grep, emacs and so on integrate well. I can process any text file I want!
  • What the heck? (Score:3, Informative)

    by foxtrot ( 14140 ) on Wednesday May 07, 2003 @09:28AM (#5900356)
    Open Source doesn't conform to standards?

    DNS?

    Sendmail?

    These aren't standards compliant?

    And now you're going to tell me WINS and Exchange ARE?!

    Perhaps the problem is not that "open source software doesn't conform to standards." Perhaps the problem is that "modern software considers itself too good for standards," which is entirely a different problem and isn't open-source specific.

    -JDF
  • by Overt Coward ( 19347 ) on Wednesday May 07, 2003 @09:29AM (#5900362) Homepage
    I've seen some posts that tap-dance around the real reason that OSS in particular doesn't interoperate well, but they all seem to miss the mark slightly. It's not really about ego, or the NIH syndrome, or laziness, or poor design (if any).

    The real reason, I believe, has to do with the fundamental drive behind an open-source project -- find an itch, then scratch it. OSS projects (in general) start because someone sees a specific need or want for software that performs a specific purpose. By its very nature, that project is looking at the world in a bottom-up fashion -- and that inevitably pushes interoperability off until "later".

    Integration or interoperability is typically an "add-on" to an already successful project -- no one really starts thinking about "well, I'd love to be able to do X from program Y" until both projects X and Y have developed strong user bases. It's sort of a natural selection in software -- the "best" projects survive and eventually breed (interoperate) with other projects to evolve higher-order software.

    Of course, that's just my opinion. I've been known to be wrong... though of course, those who discover that have been known to disappear...

  • I thought that was the reason Perl, Python, and sockets programming are so popular.

    This is a non-starter for me; I've never been hampered by interop problems with open source software.

    Authors of open source start out to solve their problems, not mine. I can solve my own problems with readily available tools.
  • by Fefe ( 6964 ) on Wednesday May 07, 2003 @09:45AM (#5900485) Homepage
    The main reasons standards are not supported, or are only poorly supported, are:

    1. the standard is not freely available (MPEG, all IEEE and ISO standards)
    2. the standard encompasses every other standard under the sun, and nobody can possibly support it all without licensing all the crap from everyone (this is actually quite typical these days. 100 companies come together and define all their combined "intellectual property" as an "industry standard" and thus force everyone else to license their crap from them. This is what happened to MPEG-4, and to an even greater extent, MHP (set top box standard, includes DVD, MPEG-2, Java, and much more))
    3. the standard sucks because it leaves ambiguities or simply assumes a lot of stuff that is not stated clearly. This is most often a problem of incompetence, because almost nobody is good at writing standards. You have either technical people who know what they are talking about but assume too much to write a good standard, or technical writers who don't assume anything but don't understand what they are writing and thus produce absolutely unusable documents. This is an increasingly bad trend in RFCs, unfortunately.
    4. The standard is bad because it was written by an utter moron. This happens when marketing companies try to participate in a standards process because they think it will give them more credibility. The results are disastrous.


    Almost nobody is trying to implement standards badly. But some standards are so bad that you can't implement them without getting a brain tumor. Just have a look at tmpnam() in POSIX, for example, whose semantics make it impossible to use without creating a race condition. Or look at DVD, which references a hidden trade secret called CSS, which is not part of the standard, but you can't be standard compliant without CSS, which forces you to sign sinister contracts with the content industry cabal.
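    To make the race-condition point concrete, here is a minimal shell sketch (the /tmp/myapp.* names are made up for illustration): a predictable temp-file name can be hijacked by another process between the moment you choose it and the moment you write to it, whereas mktemp(1) creates the file atomically.

    ```shell
    # Racy: $$ is just the shell's PID, so the name is predictable.
    # An attacker can pre-create or symlink /tmp/myapp.<pid> before we write.
    unsafe=/tmp/myapp.$$

    # Safer: mktemp atomically creates a file with an unpredictable
    # name and mode 0600, then prints that name.
    safe=$(mktemp /tmp/myapp.XXXXXX)
    echo "scratch data" > "$safe"
    cat "$safe"
    rm -f "$safe"
    ```

    The same distinction exists in the C library: the race lives in any interface that hands you a name instead of an open file descriptor.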
    • ...for a standard body that is so obviously full of shit, you can hardly bear the pain when reading their web page: www.omg.org/mda/ [omg.org]. Please read this and then try to state coherently what these people are doing.

      Here are my favourite quotes:

      The MDA defines an approach to IT system specification that separates
      the specification of system functionality from the specification of the
      implementation of that functionality on a specific technology platform.

      WTF?! It's like Homer Simpson on acid! ;) Here's an

  • by nomadicGeek ( 453231 ) on Wednesday May 07, 2003 @09:52AM (#5900547)
    I have never participated in an open source project so I can't speak from experience, but I have worked on projects for pay.

    Supporting standards is a lot of work and most of it is not particularly fun or intellectually rewarding. It is just a pain in the butt. Who wants to do that for free?

    Once the code is "complete" (code is never complete), it must be tested to ensure that your implementation of the standard will work with other implementations of the standard. This testing is tedious, time-consuming, and diverts resources from other parts of the project. You usually also have to have the programmers who worked on each implementation available so that they can work out any inconsistencies. I've done this before for pay and I still didn't want to do it.

    Another thing that always pops up is: whose implementation is more standard? When an inconsistency arises, one side has to make a change. I've gotten in the middle of pissing contests with programmers who each insist that the other is wrong and that they aren't going to change their code to work around the other's bug. What fun!

    I know that there are a lot of OSS developers out there who take their work very seriously and put out the highest quality product but they also have day jobs, lives (I hope), etc. that compete for their time. If I was doing the work on my time, I would probably tend to do the stuff that I found more rewarding. Things like rigorously supporting standards and all of the sh*t work that goes along with that would probably take a back seat to working on a new feature or something else that I considered challenging and exciting.

  • Unix Pipelines (Score:3, Insightful)

    by ReadParse ( 38517 ) <john@IIIfunnycow.com minus threevowels> on Wednesday May 07, 2003 @10:00AM (#5900605) Homepage
    Almost all of my open source software interoperates. It's the interoperation we call "unix", which is so graceful and transparent that we don't even realize they're different programs using the same rules to pass information around.
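    As a minimal sketch of that model (the input line is made up), four unrelated programs cooperate with no shared library at all, agreeing only on "lines of text" as the interchange format:

    ```shell
    # Count word frequencies in a stream; each stage is a separate program.
    printf 'to be or not to be\n' \
      | tr ' ' '\n' \
      | sort \
      | uniq -c \
      | sort -rn
    ```

    Any of the four stages can be swapped for another implementation, from another author, in another language, and the pipeline keeps working.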

  • by HiThere ( 15173 ) * <charleshixsn@@@earthlink...net> on Wednesday May 07, 2003 @10:40AM (#5900928)
    When standards exist, Open Source projects are frequently better about following them than closed source projects. But not necessarily. If the developers consider that the standard is really broken, then they'll ignore it, or parts of it. The only difference from closed source projects is that they won't break the standard in order to keep you from working with something else.

    When standards don't exist, nobody complies with them. If something is patented, then it obviously can't be a standard in any meaningful sense, unless the patent is freely usable for all standards-conforming uses. Some "standards" bodies don't seem to understand this.

    And when personalities come into conflict, all conformance can go by the wayside.

    If I look over this list, the Open Source software generally comes out at least as good at standard conformance as the proprietary software. If for no other reason than this: if it's easy to put standard conformance into a project, it's reasonably easy to retrofit it in. And features tend not to get removed (though they can be turned into compile-time optional features).

    Over cycles of iteration, Open Source software either becomes standard conforming, establishes a standard, or becomes irrelevant. (I.e., evolution in action.)

    E.g., would you rather try to read MSWord documents with OpenOffice, or read OpenOffice documents with MSWord? MS gets a positive benefit from breaking MSWord's adherence to the file format used by the previous version: it causes people to buy the new one. OpenOffice has no such benefit to the producer, and this is a benefit to the user.
  • Someone explain bash (Score:3, Interesting)

    by swordgeek ( 112599 ) on Wednesday May 07, 2003 @10:43AM (#5900949) Journal
    I've read a lot of comments here, with some interesting points, excuses, or disagreements on the premise that OSS doesn't support standards.

    Bash supposedly conforms to POSIX standards if you invoke it with the --posix flag. (Why it doesn't default to POSIX-compliant, I don't know.) However, bash is not compatible with /bin/sh. WHY???!!!!

    Will someone tell me why bash is the ONLY /bin/sh-like shell that I can't `echo "hi!"` in????

    In case anyone thinks I'm just ranting (I probably am ranting, but that's not all there is here :-), consider my question closely. sh, ksh, zsh, ash, and every other shell that uses sh syntax (i.e. not the csh variants) deals with the above statement in the same way. Bash doesn't.

    Why would OSS deliberately develop a shell (the default universal Linux shell no less) that breaks such a fundamental and long-standing de facto standard?
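    For what it's worth, the usual culprit is bash's csh-style history expansion, which in interactive shells keeps `!` special even inside double quotes; the other sh-syntax shells don't have it. A hedged sketch of the workarounds (assuming bash; scripts and non-interactive shells are unaffected):

    ```shell
    # In an interactive bash session, this can fail with "event not found",
    # because '!' triggers history expansion even inside double quotes:
    #   $ echo "hi!"
    set +H        # workaround 1: turn history expansion off
    echo "hi!"
    echo 'hi!'    # workaround 2: single quotes always suppress expansion
    echo "hi"!    # workaround 3: keep the '!' outside the double quotes
    ```

    So bash isn't ignoring the de facto standard so much as layering an interactive csh feature on top of it, with surprising quoting consequences.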
  • "cooperative effort between GNOME and KDE"...
    one word for you: RMS
  • by Kaali ( 671607 ) on Wednesday May 07, 2003 @11:01AM (#5901092)
    We have the Linux Standard Base project; maybe we should also have a Linux Developers Standards Base project which defines how you should use/develop libraries and plugins in your projects so you can interoperate with multiple standards.

    So if you develop a Mail User Agent, it could read all of the mailbox standards by using the default libraries for these. (technically possible? definitely. hard? harder..)

    Of course, it should define other standards too: configuration files, command-line parameter styles... everything that is common across software. But there should be some freedoms due to the nature of OSS development, like the freedom to use any GUI library etc.

    There would be a lot of fights over the standards, yes. But it might be worth it in some cases. I can see a few problem cases with this, but it could be useful in many common situations.

    The twisted part:

    OSS developers usually want to code everything by themselves; even if there is a library that does something for you, they still tend to code their own because the library misses one feature or something. Just add the feature to the library, don't create your own! (From this we come to the problem of the library developer not accepting your addition to the library... try to discuss it, why it is a useful addition, etc.)

    OSS users usually want to use software that doesn't pull in a lot of libraries. Have you ever begun to install some software and noticed that it is going to install 20 libraries in addition? Scary, huh? We should change our mindsets; repeat with me: "Libraries are good, libraries will liberate us."
  • Remarkably, the /. fortune at the bottom of the page had the most succinct description of the problem:

    I must Create a System, or be enslav'd by another Man's; I will not Reason and Compare; my business is to Create. -- William Blake, "Jerusalem"
  • I don't know what barbarian tools these guys use, but the software I use follows standards well enough.

    Mozilla
    Sendmail
    Gaim
    Gimp
    Gcc
    LibXML
    Apache

    I'm sure the list could grow and grow.
  • by apakuni ( 671604 ) on Wednesday May 07, 2003 @12:12PM (#5901745) Homepage

    Disclaimer/Invitation to Flame: I am not a coder by trade. Rather, I am a writer/UI planner/designer. I can hack PHP, PERL and SQL/MySQL when called on, but it is not my natural skillset.

    OK, with that out of the way, here are a few thoughts.

    1. Open Source Software (OSS) is growing and gaining market share every day. So, the objective of this discussion should not be whether OSS is good or bad, but how we remove roadblocks to its inevitable success while enabling long-term sustainability and avoiding the bloat that so cripples many commercial platforms (ex. Windows being the obvious target).

    2. To paraphrase the Bard ... Discipline must be used. (Fluellen, Henry V).

    As several folks have noted, standards need to be aggressively developed and enforced. I do not hold the opinion that OSS dev is complete anarchy. Far from it. Some of the most elegant software I use springs from the OSS community. And, not coincidentally, most of these bits of software use standards (XHTML 1.0, CSS 2.0, the Blogger API, etc.) in addition to those already mentioned (Sendmail, etc.).

    At the same time, I would have to agree with the position taken by Gregor and Paul that perhaps some of the tribal culture of OSS dev teams creates unnecessary isolation and, as a result, creates many missed opportunities for a bit of the "blue sky" thinking that enabled disparate groups of people to build on one another's work to create the "next big thing". For a great example of what I am talking about, read 20 Years of Berkeley Unix: From AT&T-Owned to Freely Redistributable [oreillynet.com] by Marshall Kirk McKusick. The link provides an excerpt and the entire essay can be found here ... Open Sources: Voices from the Open Source Revolution [amazon.com].

    But, I digress.

    Because we have so many bright, dedicated folks working in relative isolation, we spend lots of cycles on redundant work. So, I think we need some of those bright minds working on, for lack of a better term, "OSS Interop Standards". Said standards could and likely should be subdivided by use case (web/desktop/server) and perhaps technology/language. Though, the risk there is that too much granularity will kill interop, which is the point in the first place.

    Certainly, the big boogeymen of developing such standards are stagnation and the death of creativity. And, those are certainly legitimate concerns. However, they are only concerns if the standards are not viewed as malleable, living documents. The idea is to steer development towards greater, platform/language-independent interoperability.

    The blogging world already sees the light (the Blogger API and the emerging MetaWeblog API). By enabling these standards, Movable Type can talk to Blogger and, in the near future, Xaraya. Some will argue that programming this way makes it too simple for the end-user to jump ship from one piece of software to a competitor (ex. from Greymatter to Movable Type). While that may be true, this completely misses the point and the opportunity. Conversion scripts/apps will exist no matter what. That is a fruitless fight.

    By using Interop Standards to enable seamless data exchange and even easy conversion from one OSS tool to another, developers ensure growth and distribution of their products, particularly within the less technically savvy design and end-user markets of the web. For someone trained as a coder, it may be relatively simple to jump from Perl to PHP to .NET.

    But for the folks who are ultimately going to use free products (Movable Type) or resell pro products (Movable Type Pro), the jump is not so simple. They are far more likely to latch on to one language (in my case PHP and a shade of Perl).

  • by DrCode ( 95839 ) on Wednesday May 07, 2003 @01:07PM (#5902350)
    What I think would help a lot is if there were more standard conventions for selections. This would make it possible to enable more drag-and-drop features across programs. And using drag-and-drop in, say, GTK, is really quite easy (easier, I believe, than in Windows).

    For example, one piece of code I'm working on uses images and palettes; and it would be really nice if I could drag one of my palettes into the Gimp, or one of the Gimp's onto the palette window in my app. Same thing for individual colors and images.

    Is there a group or project trying to set standards for selection at the application level? Perhaps this should be an extension to the X drag-n-drop spec.
    • The framework is there, but not being used. X selections already carry a "data type".

      The problem is that there is no standard for any types of data other than text. Microsoft, back in Windows 3, introduced several types, most importantly one for .bmp file data. They did fail to make a "filename" type, which would have been a big win and probably given them embedded objects much better and more easily than COM and so on.

      And even text is confusing, I have found the following types used to mark text: "XA_

      • Exactly. What's needed is for someone with clout to define more types. Since Gimp is pretty much the standard open-source image app, they could take the lead and publish a set of data types and formats that they'll handle (and not change). Even better if the KDE and Gnome projects get in on it.

        Then you could, say, drag an image from the Gimp onto your desktop and have the option of saving it as a file or setting it as your background.
  • No evidence (Score:3, Insightful)

    by Jeffrey Baker ( 6191 ) on Wednesday May 07, 2003 @01:10PM (#5902395)
    This article doesn't prove its own premise. Where is the evidence that open software is less likely to implement a standard than closed software? I can think of so many counterexamples, because OSS tends to actually define the standard. See: every major internet protocol.
