Microsoft Releases Office Binary Formats

Microsoft has released documentation on their Office binary formats. Before jumping up and down gleefully, those working on related open source efforts, such as OpenOffice, might want to take a very close look at Microsoft's Open Specification Promise to see if it seems to cover those working on GPL software; some believe it doesn't. stm2 points us to some good advice from Joel Spolsky to programmers tempted to dig into the spec and create, over a weekend, an Excel competitor that reads and writes these formats: find an easier way. Joel provides some workarounds that render it possible to make use of these binary files. "[A] normal programmer would conclude that Office's binary file formats: are deliberately obfuscated; are the product of a demented Borg mind; were created by insanely bad programmers; and are impossible to read or create correctly. You'd be wrong on all four counts."
  • Joel (Score:2, Insightful)

    by Mario21 ( 310404 )
    Joel's articles are a joy to read. No matter when the email announcing a new article by Joel arrives, I read it on the spot.
    • Re: (Score:2, Insightful)

      by zootm ( 850416 )
      I agree to some degree, but as a slight contrary point I find his silly insistence that Hungarian is a "good thing" [joelonsoftware.com] and his constant pimping of FogBugz (especially the "this is usually a bad idea, but it's alright when we do it!" attitude of some of the posts) to be a little annoying. He's definitely smart and makes a lot of sense though.
      • Re:Joel (Score:5, Insightful)

        by AKAImBatman ( 238306 ) <akaimbatman AT gmail DOT com> on Wednesday February 20, 2008 @09:30AM (#22487000) Homepage Journal
        If you actually read the article, he's right. His point is that the use of Hungarian notation has been bastardized beyond belief. Programmers didn't understand why Hungarian originally used his famous notation, and thus tend to make an error every time they attempt to replicate his work. That's why we have tons of Java programs that look like crap due to some foolish programmer mindlessly following Hungarian Notation.

        On the subject of the Office Document format, I believe that everything he says is also true; but with a few caveats. The first is the subject of Microsoft intentionally making Office Documents complicated. I fully accept (and have accepted for a long time) that Office docs were not intentionally obfuscated. However, I also accept that Microsoft was 100% willing to use the formats' inherent complexity to their advantage to maintain lock-in. The unnecessary complexity of OOXML proves this.

        The other caveat is that I disagree with his workarounds. He suggests that you should use Office to generate Office files, or simply avoid the issue by generating a simpler file. There's no need to do this, as it's perfectly possible to use a subset of Office features when producing a file programmatically. Libraries like POI can produce semantically correct files, even if they aren't the most feature-rich.
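
        For instance, generating a small .xls this way is only a few lines with POI. A minimal sketch (assuming a recent Apache POI release, where the HSSF classes handle the old binary format):

        import java.io.FileOutputStream;
        import org.apache.poi.hssf.usermodel.HSSFWorkbook;
        import org.apache.poi.ss.usermodel.Row;
        import org.apache.poi.ss.usermodel.Sheet;
        import org.apache.poi.ss.usermodel.Workbook;

        public class PoiSketch {
            public static void main(String[] args) throws Exception {
                // Build a one-row spreadsheet using only a tiny,
                // well-understood subset of the binary .xls format.
                try (Workbook wb = new HSSFWorkbook();
                     FileOutputStream out = new FileOutputStream("report.xls")) {
                    Sheet sheet = wb.createSheet("Report");
                    Row row = sheet.createRow(0);
                    row.createCell(0).setCellValue("Total");
                    row.createCell(1).setCellValue(1234.56);
                    wb.write(out);
                }
            }
        }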
        • Re:Joel (Score:5, Informative)

          by zootm ( 850416 ) on Wednesday February 20, 2008 @09:49AM (#22487164)

          I'm not going to say anything against the Microsoft doc; he's pretty much absolutely right and it's a great introduction to why older formats are how they are in general to boot.

          The Hungarian thing – no, I still don't see it. Hungarian should not be used in any language which has a reasonable typing system; it's essentially adding unverifiable documentation to variable names in a way that is unnecessary, in a language which can verify type assertions perfectly well. The examples in the article are just ones where good variable naming would have been more than sufficient. It's not good enough.

          Oh god I've started another hungarian argument.

          • Re: (Score:3, Interesting)

            by Anonymous Coward
            Hungarian should not be used in any language which has a reasonable typing system;

            That's "Systems Hungarian" in the original article, and you are correct.

            "Apps Hungarian", which adds semantic meaning (dx = width, rwAcross = across coord relative to window, usFoo = unsafe foo, etc) to the variable, not typing, is what is good and what he is advocating. It is exactly "good variable naming". You can see that you shouldn't be assigning rwAcross = bcText, because why would you turn assign a byte count to a coord
            • L&O: sFoo (Score:5, Insightful)

              by poot_rootbeer ( 188613 ) on Wednesday February 20, 2008 @11:30AM (#22488416)
              "Apps Hungarian", which adds semantic meaning (dx = width, rwAcross = across coord relative to window, usFoo = unsafe foo, etc) to the variable, not typing, is what is good and what he is advocating.

              What is the justification for putting that semantic meaning into a variable name, instead of incorporating it into class definitions?

              For example, if a string can be "safe" or "unsafe", why not have "SafeString" and "UnsafeString" classes that extend String, and use instances of those, instead of having instances of the base String class named 'sFoo' and 'usFoo'?
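
              (In Java, String is final and can't be extended, but the same idea works with wrappers. A sketch with hypothetical class names:)

              // Hypothetical wrappers that move the safe/unsafe distinction
              // from a naming convention into the type system.
              final class UnsafeString {
                  final String raw;
                  UnsafeString(String raw) { this.raw = raw; }
              }

              final class SafeString {
                  final String escaped;
                  private SafeString(String escaped) { this.escaped = escaped; }

                  // The only way to get a SafeString is through the encoder, so
                  // a safe/unsafe mix-up is a compile error, not a code smell.
                  static SafeString htmlEncode(UnsafeString s) {
                      return new SafeString(s.raw.replace("&", "&amp;")
                                                 .replace("<", "&lt;")
                                                 .replace(">", "&gt;"));
                  }
              }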
              • Actually, when possible, you should do both. Hungarian notation is a grammar. In the same way that English has rules for writing which include capitalizing the first letter of a sentence, proper names, and so on, Hungarian notation provides visual cues to programmers that make certain types of semantic errors "sTanD oUt." There's nothing particularly unusual about the text "sTanD oUt," and its meaning does not change by writing it that way, but it violates the English grammar and your brain's pattern recog
              • Re: (Score:3, Insightful)

                by edwdig ( 47888 )
                For example, if a string can be "safe" or "unsafe", why not have "SafeString" and "UnsafeString" classes that extend String, and use instances of those, instead of having instances of the base String class named 'sFoo' and 'usFoo'?

                For strings it's a little more straightforward, but it gets messy quickly with numeric values. You have to overload every operator you might possibly use, including every variant where it might make sense to operate on another type. The amount of support code needed builds up fast.

                An
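
                To sketch that support-code burden (a hypothetical Pixels wrapper; Java has no operator overloading, so every operation becomes an explicit method, and in C++ each would be an overload):

                final class Pixels {
                    final int value;
                    Pixels(int value) { this.value = value; }

                    // Every operation the code used to do on a raw int has to be
                    // restated here, for every combination of operand types.
                    Pixels plus(Pixels other)  { return new Pixels(value + other.value); }
                    Pixels minus(Pixels other) { return new Pixels(value - other.value); }
                    Pixels times(int scalar)   { return new Pixels(value * scalar); }
                    boolean lessThan(Pixels other) { return value < other.value; }
                    // ...plus division, comparisons, formatting, serialization, etc.
                }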
          • by cp.tar ( 871488 )

            I'm not going to say anything against the Microsoft doc; he's pretty much absolutely right and it's a great introduction to why older formats are how they are in general to boot.

            The Hungarian thing – no, I still don't see it. Hungarian should not be used in any language which has a reasonable typing system; it's essentially adding unverifiable documentation to variable names in a way that is unnecessary, in a language which can verify type assertions perfectly well. The examples in the article are just ones where good variable naming would have been more than sufficient. It's not good enough.

            Oh god I've started another hungarian argument.

            Hungarian notation has nothing to do with typing systems.
            Hell, I'm barely a novice programmer, but even I can see that.

            Hungarian notation is a good variable naming practice — as long as you use it to mirror internal program semantics, not create redundant typing information.

            So far, I have tried to implement something similar to Hungarian notation in most of my programs; this article taught me a thing or two more, though some aspects touch on things way beyond my level.

            Anyway, his article on Hung

          • The Hungarian thing - no, I still don't see it. Hungarian should not be used in any language which has a reasonable typing system;

            A "typing" system doesn't help you read and understand the code. It doesn't give you any clues to the types of data being acted upon in a section of code. While I never bought in to the whole hungarian notation thing, at the time it was an "ism" that people went nuts about, it did address a specific problem with code readability. The concepts addressed by hungarian notation are
            • by zootm ( 850416 )

              Ok, I was going to respond to this but I will not get dragged into another one of these discussions. It's worse than tabs vs. spaces, I tells ya.

              Since you're talking about C/C++ code though, I'm going to assert that that doesn't fall into the class of language I was talking about anyway. You're playing with essentially-untyped data there a lot more.

              • Re: (Score:3, Informative)

                by mlwmohawk ( 801821 )
                Ok, I was going to respond to this but I will not get dragged into another one of these discussions. It's worse than tabs vs. spaces, I tells ya.

                I have to disagree, tabs and spaces are easily handled with an "indent" program.

                On VERY LARGE projects where there are hundreds of include files and hundreds of source files, it is not convenient or even possible in all cases to find the definition of an object that may be in use.

                Context and type information in the name makes it easier to quickly read a section of
                • by zootm ( 850416 )

                  It's funny, I've argued in the past that Java's very verbose typing has advantages in exactly the way you list in your post. In the case of Java, in fact, you wouldn't need the type warts since the types would be readily available.

          • Re: (Score:3, Insightful)

            There are two kinds of Hungarian notation. One is the type that adds type info. For example, prefixing longs with l. As you note, that is pointless. In fact, it is worse than pointless: if the type of the variable is changed, it might be too much of a hassle to change the name everywhere, and you end up with the notation actively misleading.

            The second type doesn't add type information. It adds meaning information. For example, an index to a table row might be rowIndex. An index to a column might

        • Re:Joel (Score:5, Informative)

          by mhall119 ( 1035984 ) on Wednesday February 20, 2008 @10:03AM (#22487294) Homepage Journal

          Programmers didn't understand why Hungarian originally used his famous notation
          It wasn't created by some guy named "Hungarian", it was created by Charles Simonyi.

          http://en.wikipedia.org/wiki/Hungarian_notation [wikipedia.org]
        • Re:Joel (Score:4, Informative)

          by encoderer ( 1060616 ) on Wednesday February 20, 2008 @10:49AM (#22487834)
          "Programmers didn't understand why Hungarian originally used his famous notation"

          Uhh.. There was never a "Mr. Hungarian" ....

          It was invented by Charles Simonyi, and the name was both a play on "Polish Notation" and a nod to Simonyi's native Hungary, where the family name precedes the given name.

          • Re: (Score:3, Funny)

            by F452 ( 97091 )
            Close, but not quite.

            It was actually all started by cHarles Hungar, and thus the "Hungarian" label.
      • by richlv ( 778496 )
        the constant bragging and plugging of his own product makes me want to stay away from it as much as possible.
    • Ack, no. They always appear to be really nifty on the surface, but they always go wrong on the details.

      Take this article, for instance - sure, he's right that trying to implement support for these specs is futile. It's the same reason why Office's OOXML "standard" is a joke. But he didn't really need to spend 6 pages saying so. And sure, the workarounds are fine if you're a Windows shop, but workarounds #2 and #3 are not a simple "half day of work" if you have no experience with Microsoft technologies - it's
  • by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Wednesday February 20, 2008 @09:17AM (#22486898) Homepage Journal

    Microsoft irrevocably promises not to assert any Microsoft Necessary Claims against you for making, using, selling, offering for sale, importing or distributing any implementation to the extent it conforms to a Covered Specification ("Covered Implementation"), subject to[...]
    If your implementation is buggy, does that mean you're not covered?

    To clarify, "Microsoft Necessary Claims" are those claims of Microsoft-owned or Microsoft-controlled patents that are necessary to implement only the required portions of the Covered Specification that are described in detail and not merely referenced in such Specification.
    This sounds like:
    • If there are any optional parts of the spec, those parts aren't covered.
    • If the spec refers to another spec to define some part of the format, that part isn't covered.
    • Re: (Score:2, Insightful)

      by zebslash ( 1107957 )
      Yes, you know, they are afraid that buggy implementations show their format in a bad light. For instance, that would be like writing your own buggy implementation of Java and then distributing it in order to contaminate the market with a flawed version, just to show it in a bad light. Oh wait...
    • by Ed Avis ( 5917 ) <ed@membled.com> on Wednesday February 20, 2008 @09:36AM (#22487064) Homepage
      Basically, Microsoft reserves the right to sue you for software patent infringements. So do thousands of other big software companies and patent troll outfits. The new thing now is that Microsoft likes to generate FUD by producing partial waivers and promises that apply to some people in limited circumstances (Novell customers, people 'implementing a Covered Specification', and so on). The inadequacy of this promise draws attention to the implicit threat to tie you up in swpat lawsuits, which was always there - but until this masterstroke of PR the threat wasn't commented on much.

      Ignore the vague language and develop software as you always have.
    • by ContractualObligatio ( 850987 ) on Wednesday February 20, 2008 @09:43AM (#22487132)

      If there are any optional parts of the spec, those parts aren't covered.

      RTFA. That's in the FAQ. Yes they are.

      If the spec refers to another spec to define some part of the format, that part isn't covered.

      In other words - if you do something related to a spec that isn't covered, it isn't covered. How could it be any different?!

      I'm not saying that there aren't any flaws, but this kind of ill-informed, badly-thought-out comment (a.k.a. "+5 Insightful", of course) has little value.

      • In other words - if you do something related to a spec that isn't covered, it isn't covered. How could it be any different?!
        I think the concern is that the "something related to the spec" is actually something vitally important to the spec.
    • by julesh ( 229690 ) on Wednesday February 20, 2008 @09:51AM (#22487192)
      If your implementation is buggy, does that mean you're not covered?

      That is my primary concern with the entire promise. None of this bullshit not-tested-in-court crap that came up the other day: it doesn't cover implementations with slight variations in functionality.

      This, it seems, is intentional. MS don't want to allow others to embrace & extend their standards.
  • Obfuscation (Score:2, Insightful)

    by Anonymous Coward
    Except... we don't all have this OLE thing on our computers, nor do we all find it easier to work with than the languages we deal with now.

    But let's say you do. Now you have to find an API to do it for you. As an everyday guy, I can write my own HTTP parser, IP connection manager and so forth, w/o requiring a special API to do it. As a smarter guy, I'd look for the libraries that can do some of the heavy lifting for me. It's flexibility. The document structure is going to affect how I write code to work with it.

    W/
    • Re: (Score:2, Insightful)

      by wlandman ( 964814 )
      What Joel is trying to say is that at the time Excel and the other Office products were made, it was not possible to store their data in XML. Joel also reminds us that as Microsoft released new versions of the software, they had to keep compatibility with the older versions.

      I think Joel makes a lot of good points and gives great insight into thinking at Microsoft.
    • Were the format to be simple, be it "sanely" constructed CSV, XML, RTF, etc, I have more choices.

      Word and Excel have supported CSV and RTF back into the DOS days, back into the 5.25" floppy days.

      And are you honestly saying that an 8088 with 640K RAM could handle XML? Assuming that the concept of interchangeable markup languages even EXISTED back then?

      Jesus, it's like complaining that a 30 year old television doesn't support HDMI, therefore it's poorly designed.

  • by Stan Vassilev ( 939229 ) on Wednesday February 20, 2008 @09:20AM (#22486932)
    One may wonder, why release the documentation now?

    If you read Joel's blog you'll see the formats are very old, and consist primarily of C structs dumped into OLE containers and written directly to disk as the .xls, .doc, and other files we see.

    There's almost no parsing/validation at load time.

    Having this laid out in clear documentation may reveal quite a lot of security issues in the old binary formats, which could lead to a wave of exploits. Exploits that won't work on Microsoft's new XML Office formats.

    So while I'm not a conspiracy nut, I do believe one of Microsoft's goals here is to assist the process of those binary formats becoming obsolete, to drive Office 2007/2008 adoption.
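
    To make the "C structs dumped to disk" point concrete: loading such a file is little more than reading fields at fixed offsets, which is exactly why it is fast and exactly why unvalidated fields are dangerous. A sketch (the field layout here is hypothetical, not the real BIFF/DOC layout):

    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.channels.FileChannel;

    public class StructDumpSketch {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile f = new RandomAccessFile("legacy.bin", "r");
                 FileChannel ch = f.getChannel()) {
                ByteBuffer buf = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
                ch.read(buf);
                buf.flip();
                int recordType = buf.getShort() & 0xFFFF; // hypothetical layout
                int length     = buf.getShort() & 0xFFFF;
                int dataOffset = buf.getInt();
                // No schema and no validation: if length or dataOffset lies,
                // a naive loader walks straight off the end of its buffer.
                System.out.printf("type=0x%04X len=%d offset=%d%n",
                        recordType, length, dataOffset);
            }
        }
    }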
    • by Chief Camel Breeder ( 1015017 ) on Wednesday February 20, 2008 @09:55AM (#22487232)
      Actually, I think they're releasing it now because they were ordered to in a (European?) court settlement, not because they want to.
    • Re: (Score:2, Insightful)

      by friedman101 ( 618627 )
      Come on. You really think Microsoft wants to increase the vulnerability of old versions of Office (which are still the vast majority in corporate America)? This not only makes their software look bad, it increases the amount of work they have to do to support the older versions (yes, they still support Office 2003). You don't sell new cars by convincing people the last model was rubbish. I think your tin-foil hat fits a little too tight.
      • by Stan Vassilev ( 939229 ) on Wednesday February 20, 2008 @10:32AM (#22487604)
        Come on. You really think Microsoft wants to increase the vulnerability of old versions of Office (which are still the vast majority in corporate America)? This not only makes their software look bad, it increases the amount of work they have to do to support the older versions (yes, they still support Office 2003). You don't sell new cars by convincing people the last model was rubbish. I think your tin-foil hat fits a little too tight.

        Let me break your statement into pieces:

        - that would increase the vulnerability of old Office
        - the majority of corporate America is stuck on old Office
        - you don't sell new cars by convincing people the old ones are rubbish

        You know, have you seen those white papers by Microsoft comparing XP and Vista and trying to put XP's reliability and security in a bad light?

        Or have you seen those ads where Microsoft depicted people using old versions of Office as... suits in dinosaur masks?

        If the majority of corporate America uses the old Office, then the only way for Microsoft to turn a profit would be to somehow convince them it is no longer good for them, and to upgrade. You're just going against yourself there.
      • Comment removed based on user account deletion
      • by Comboman ( 895500 ) on Wednesday February 20, 2008 @10:58AM (#22487936)
        You don't sell new cars by convincing people the last model was rubbish.

        You're kidding right? That's been exactly Microsoft's marketing strategy for the last ten years. Remember the Win9X BSOD ads for Windows XP? Microsoft is in the difficult position where their only real competition is their own previous products.

    • by Jugalator ( 259273 ) on Wednesday February 20, 2008 @11:11AM (#22488102) Journal

      So while I'm not a conspiracy nut, I do believe one of Microsoft's goals here is to assist the process of those binary formats becoming obsolete, to drive Office 2007/2008 adoption.
      Not a chance. Microsoft is bound to release Office 2003 security updates until January 14, 2014 [microsoft.com].
    • Re: (Score:2, Interesting)

      by orra ( 1039354 )

      One may wonder, why release the documentation now?

      I would say it's because they get good PR for pretending to be transparent/friendly, whilst not actually giving away any new information.

      Look at page 129 of the PDF specifying the .doc format [microsoft.com]. (The page is actually labelled 128 in the corner, but it's page 129 of the PDF.) You will see there's a bit field. One of the many flags that can be set in this bit field: "fUseAutospaceForFullWidthAlpha".

      The description?:

      Compatibility option: when set t
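
      (For what it's worth, reading such a flag is just bit masking; the mask below is hypothetical, since the real bit position comes from the spec's table:)

      public class BitFieldSketch {
          // Hypothetical mask; the spec's bit-field table defines the real position.
          static final int F_USE_AUTOSPACE_FOR_FULL_WIDTH_ALPHA = 0x0001;

          static boolean fUseAutospaceForFullWidthAlpha(int flags) {
              return (flags & F_USE_AUTOSPACE_FOR_FULL_WIDTH_ALPHA) != 0;
          }
      }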

  • by VosotrosForm ( 1242886 ) on Wednesday February 20, 2008 @09:25AM (#22486956)
    I would like to point out another good option Joel doesn't have on his list. It's a product called OfficeWriter, from a company named SoftArtisans in Boston. When I last checked/worked there, it was capable of generating Excel and Word docs on the server, and I believe PowerPoint support was probably coming relatively soon. Creating a product that can write Office documents isn't quite as impossible in terms of labor as Joel is saying... but it's still way beyond any hobby project. Plus, he is suggesting that you use Excel automation or the like through scripts to create documents on the server, which is a decent suggestion if you want Excel or Word to constantly crash and lock up your server, and you enjoy rebooting them every day. If you want to do large-scale document generation on a server you are going to need something like OfficeWriter. -Vosotros/Matt
  • by G0rAk ( 809217 ) <.jamie. .at. .practicaluseful.com.> on Wednesday February 20, 2008 @09:26AM (#22486966) Homepage
    As PJ pointed out over on Groklaw [groklaw.net], MS are giving a "Promise" not to sue but this is very very far from a license. Careful analysis suggests that any GPL'd software using these binaries could easily fall foul of the fury of MS lawyers.
    • by morgan_greywolf ( 835522 ) on Wednesday February 20, 2008 @09:38AM (#22487082) Homepage Journal

      As PJ pointed out over on Groklaw, MS are giving a "Promise" not to sue but this is very very far from a license. Careful analysis suggests that any GPL'd software using these binaries could easily fall foul of the fury of MS lawyers.
      Correct.

      Here's my suggestion: someone should use these specs to create a BSD-licensed implementation as a library. Then, of course, (L)GPL programs would be free to use the implementation. Nobody gets sued, everybody is happy.
    • And it is just a promise, so even if your code is not GPLed, you'll live with the question "will Microsoft break the promise tomorrow when I wake up?"
    • Re: (Score:3, Informative)

      by Pofy ( 471469 )
      >As PJ pointed out over on Groklaw, MS are giving a "Promise"
      >not to sue but this is very very far from a license.

      Some (hypothetical?) questions:

      What would happen if those patents were in some way transferred to someone else?

      Despite the promise, are you still actually infringing the patent? Just with an assurance from the current patent holder that he won't do anything?

      If so, what would happen if it becomes criminal to break a patent (it was quite close to being part of an EU directive not so long ago)? Toge
    • Re: (Score:3, Insightful)

      by Abcd1234 ( 188840 )
      Except anyone being sued by MS can use promissory estoppel [wikipedia.org] as a defense. 'course, you have to be able to afford to defend yourself, but I guess that's where the EFF comes in.
  • Why not ODF or OOo? (Score:2, Interesting)

    by jfbilodeau ( 931293 )
    Why does the author avoid any mention of ODF or OpenOffice as alternatives to work with MS Office docs? He seems stuck on 'old' formats like WKS or RTF.

    I know OOo is not a perfect Word/Excel converter, but it has served me marvelously since the StarOffice days. I wish there were a simple command-line tool that could convert .doc or .xls files to ODS or PDF using the OOo code. Anyone know of such a tool?
  • Retaliation? (Score:3, Interesting)

    by ilovegeorgebush ( 923173 ) on Wednesday February 20, 2008 @09:33AM (#22487040) Homepage
    Is this retaliation for the impending doom of OOXML's bid for ISO standard status? Is MS's thinking: "Right, ISO has failed us, so we'll release the binaries so everyone keeps using the Office formats anyway"?
  • Just as OOXML files and WMF make references to Windows or Office programming APIs, I think it would come as no surprise to anyone that Office binary formats would also make similar references. The strategy behind it would be obvious -- to tie the data to the OS and to the software as closely as possible.
    • Re: (Score:3, Informative)

      by leuk_he ( 194174 )
      Did you read the article? Nah, why would you do so for some MS bashing.

      If you read the article you would notice that the binary format of Winword 97 (and in fact it is compatible with its predecessors) was a good solution in 1992, when Word for Windows 2.0 was created. Machines had less memory and processing power than your phone, and still had to be able to open a document fast.

      my conclusion is that the open office devs are crazy that they ever supported the Word .doc format, and did a surprise
      • Yes, I read the article and I don't buy into it.

        The fact is, Word in its early versions was NOT significantly faster than its competitors, and neither was Excel. WordPerfect and Lotus 1-2-3 did everything people needed, and they did it within the resource constraints of the day.

        The article is leading in how it addresses the "limited resources" of the day, because most of us find it amazingly difficult to imagine operating in a 1MB environment. The article also fails to identify the ac
  • by radarsat1 ( 786772 ) on Wednesday February 20, 2008 @09:42AM (#22487122) Homepage

    You see, Excel 97-2003 files are OLE compound documents, which are, essentially, file systems inside a single file.

    I don't see why just because something is organized filesystem-like (not such an awful idea) means it has to be hard to understand. Filesystems, while they can certainly get complicated, are fairly simple in concept. "My file is here. It is *this* long. Another part of it is over here..."

    They were not designed with interoperability in mind.

    Wait, I thought you were trying to convince us that this doesn't reflect bad programming...

    That checkbox in Word's paragraph menu called "Keep With Next" that causes a paragraph to be moved to the next page if necessary so that it's on the same page as the paragraph after it? That has to be in the file format.

    Ah, I see, you're trying to imply that it's the very design of the Word-style of word processor that is inherently flawed. Finally we're in agreement.

    Anyways, it's no surprise that it's all the OLE, spreadsheet-object-inside-a-document stuff that would make it difficult to design a Word killer. (How often do people actually use that anyway?) It would basically mean reimplementing OLE, and a good chunk of Windows itself (libraries for all the references to parts of the operating system, metafiles, etc), for your application. However, it certainly can be done. I'm not sure it's worth it, and it can't be done overnight, but it's possible. However, you'll have a hard time convincing me that Microsoft's mid-90's idea of tying everything in an application to inextricable parts of the OS doesn't reflect bad programming. Like, what if we need to *change* the operating system? At the very least, it reflects bad foresight, seeing as they tied themselves to continually porting forward all sorts of crud from previous versions of their OS just to support these application monstrosities. This is a direct consequence of not designing the file format properly in the first place, and just using a binary structure dump.

    It reminds me of a recovery effort I tried last year, trying to recover some interesting data from some files generated on a NeXT cube from years ago. I realized the documents were just dumps of the Objective-C objects themselves. In some ways this made the file parseable, which is good, but in other ways it meant that, even though I had the source code of the application, many of the objects dumped into the file were related to the operating system itself instead of the application code, which I did _not_ have the source code to, making the effort far more difficult. (I didn't quite succeed in the end, or at least I ran out of time and had to take another approach on that project.)

    In their (MS's) defense, I used to do that kind of thing back then too, (dumping memory structures straight to files instead of using extensible, documented formats), but then again I was 15 years old (in 1995) and still learning C.
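
    Incidentally, you can see the "filesystem inside a file" for yourself with Apache POI's POIFS layer. A sketch (assuming a recent POI release; a .doc typically shows streams like "WordDocument" and "1Table"):

    import java.io.File;
    import org.apache.poi.poifs.filesystem.DirectoryEntry;
    import org.apache.poi.poifs.filesystem.Entry;
    import org.apache.poi.poifs.filesystem.POIFSFileSystem;

    public class OleListing {
        public static void main(String[] args) throws Exception {
            // Walk the root directory of an OLE compound document.
            try (POIFSFileSystem fs = new POIFSFileSystem(new File("example.doc"))) {
                for (Entry entry : fs.getRoot()) {
                    System.out.println(entry.getName()
                            + (entry instanceof DirectoryEntry ? " (dir)" : ""));
                }
            }
        }
    }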
    • by ContractualObligatio ( 850987 ) on Wednesday February 20, 2008 @10:20AM (#22487452)
      It's interesting you give a nicely egotistical critique of a well-regarded expert's article, but don't suggest a single alternative to how M$ could have met their design goals, nor explain why the no-interoperability assumption was unreasonable at the time. If you can't appreciate the design goals, nor suggest a way to meet them, what's the point of the rest of your post?
      • It's interesting you give a nicely egotistical critique of a well-regarded expert's article, but don't suggest a single alternative to how M$ could have met their design goals, nor explain why the no-interoperability assumption was unreasonable at the time. If you can't appreciate the design goals, nor suggest a way to meet them, what's the point of the rest of your post?

        I think the design goals were flawed. That's my point. Their design goals should have included, how can we ensure that our customer's dat

        • by ContractualObligatio ( 850987 ) on Wednesday February 20, 2008 @03:39PM (#22492438)

          I think the design goals were flawed. That's my point.

          And I think your ability to assess another's work is flawed, courtesy of an oversized ego. That was my point.

          You have yet to provide an alternative solution to the problem. Given that one constraint is memory, your inability to be concise suggests you're not capable of coming up with one either. Certainly your "squeeze out a few extra microseconds" comment suggests you have absolutely no clue what you are talking about. Yet you persist in calling it bad design. You are strangely smug about what was quite possibly an implicit assumption forced by tough constraints, with no actual interoperability requirements, at a time when they were rarely offered let alone expected. I would stop using "IMHO" - clearly there is nothing humble about your opinion.

          Why the bit about metadata, out of interest? It's as if you think the more irrelevant things you can fit into the post, the more we're supposed to be impressed.

    • by woolio ( 927141 )
      In their (MS's) defense, I used to do that kind of thing back then too, (dumping memory structures straight to files instead of using extensible, documented formats), but then again I was 15 years old (in 1995) and still learning C.

      Except for the "1995" part, wasn't that pretty much how Microsoft got started?

      They haven't advanced from that point by much....
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      I don't see why just because something is organized filesystem-like (not such an awful idea) means it has to be hard to understand. Filesystems, while they can certainly get complicated, are fairly simple in concept. "My file is here. It is *this* long. Another part of it is over here..."

      He didn't say file systems were complex, he said OLE compound documents were complex. Look it up on MSDN. It's a tad painful to work with.

      "They were not designed with interoperability in mind."

      Wait, I thought you were trying to convince us that this doesn't reflect bad programming...

      Wholly out of context, Batman! They made a design decision to ignore interoperability and optimized towards small memory space. What part of that is hard to understand? You think everything should be designed up front for interoperability, regardless of context? In the mid to late 80s, there just wasn't a huge desire for this feature, as Joel states.

      but then again I was 15 years old (in 1995) and still learning C.

      Ah, now your post m

      • He didn't say file systems were complex, he said OLE compound documents were complex. Look it up on MSDN. It's a tad painful to work with.

        I didn't say this. I said I don't see why the fact that OLE documents are like file systems (according to TFA) means they must necessarily be complex. I.e., I'm saying file systems aren't necessarily complex concepts, and therefore it's not an excuse for a convoluted file format. Anyways, maybe it's straining his analogy further than he intended, so I'll give y

    • by Thundersnatch ( 671481 ) on Wednesday February 20, 2008 @10:59AM (#22487956) Journal

      Anyways, it's no surprise that it's all the OLE, spreadsheet-object-inside-a-document stuff that would make it difficult to design a Word killer. (How often do people actually use that anyway?)

      At my company, our users do that every day. Excel spreadsheets embedded in Word or PowerPoint, Microsoft Office Chart objects embedded in everything. It's what made the Word/Excel/PowerPoint "Office Suite" a killer app for businesses. MS Office integration beat the pants off the once best-of-breed and dominant Lotus 1-2-3 and WordPerfect. When you embed documents in Office, instead of a static image, the embedded doc is editable in the same UI, and can be linked to another document maintained by somebody else and updated automatically. It saves tremendous amounts of staff time.

    • It reminds me of a recovery effort I tried last year, trying to recover some interesting data from some files generated on a NeXT cube from years ago. I realized the documents were just dumps of the Objective C objects themselves.
      IMO the powerful serialisation formats of modern languages are even worse than just dumping out C structs. If an app just dumps out C structs then you can probably figure out the binary format pretty quickly with just the source for the app and a pageful or so of information on
    • by sohp ( 22984 )
      This really is the key bit in Joel's article:

      Every checkbox, every formatting option, and every feature in Microsoft Office has to be represented in file formats somewhere. That checkbox in Word's paragraph menu called "Keep With Next" that causes a paragraph to be moved to the next page if necessary so that it's on the same page as the paragraph after it? That has to be in the file format. And that means if you want to implement a perfect Word clone that can correctly read Word documents, you have to imple

  • Still missing is the binary format for Access; still, never mind, it's not that hard to work out [sourceforge.net]
    • by Hulver ( 5850 )
      Ah, a typical SourceForge project. "We're almost ready for a beta release!" (dated 2002), and the latest software release (version 0.0.4) is also dated 2002.

      Oh right, it was so easy they got it right first time and never had to update it since?
  • by organgtool ( 966989 ) on Wednesday February 20, 2008 @09:52AM (#22487206)
    FTA:

    There are two major alternatives you should seriously consider: letting Office do the work, or using file formats that are easier to write.
    His first workaround is to use Microsoft Office to open the document and then save it in a non-binary format. Well, that assumes I already have Microsoft Windows, Microsoft Word, Microsoft Excel, Microsoft PowerPoint, etc. Do you see the problem here?

    The second "workaround" is the same as the first, only a little more proactive. Instead of saving my documents as binary files and then converting them to another format, I should save them as a non-binary format from the start! Mission accomplished! Oh wait - how do I get the rest of the world to do the same? That could be a problem.

    I fail to see the problem with using the specification Microsoft released to write a program that can read and write this binary format. If Microsoft didn't want it to be used, they would not have released it. Even if Microsoft tried to take action against open source software for using the specs that they opened, how could Microsoft prove that the open source software used those specs as opposed to reverse engineering the binary format on their own? I think this is a non-issue.
    • by ContractualObligatio ( 850987 ) on Wednesday February 20, 2008 @10:42AM (#22487740)

      I fail to see the problem with using the specification Microsoft released to write a program that can read and write this binary format

      That is almost the stupidest thing I've read today (RTFA with respect to development costs to figure out why), except for this:

      If Microsoft didn't want it to be used, they would not have released it.

      We can ignore the shockingly poor logic inherent to this statement and just take it at face value: doing something just because M$ wants you to would easily make the Top 10 Stupid Things To Do In IT list. It's particularly bizarre to hear it on Slashdot.

  • Joel is being awfully apologetic. I understand why they are bad formats, but it doesn't change the fact they are bad.
  • by Doc Ruby ( 173196 ) on Wednesday February 20, 2008 @10:00AM (#22487278) Homepage Journal
    Spolsky's advice explains that the format code is extremely bad code from the POV of a programmer picking it up to use starting now, because it grew like a coral reef: it started so long ago that interoperability with anything but the app's own codebase was not in the design, and every new feature was thrown in as a special case rather than through any general-purpose facility for kinds of features or future expansion. That is the Microsoft legacy that leverages every year's market position into expansion the next year.

    But we're not Microsoft, and we don't have the requirements MS had when making these formats. So we should by no means perpetuate them. We should do now what MS never had reason to do: upgrade the code and drop the legacy stuff that makes most of the code such a burden, but doesn't do anything for the vast majority of users today (and tomorrow).

    That's OK, because Microsoft has done that, too, already. The MS idea of "legacy to preserve" is based on MS marketing goals, which are not the same as actual user requirements. So that legacy preservation doesn't mean that, say, Office 2008 can read and write Word for Windows for Workgroups for Pen Computing files 100%. MS has dropped plenty of backwards compatibility for its own reasons. New people opening the format for modern (and future) use can do the same, but based on user requirements, not on emphasizing product lines when that's not a real requirement.

    So what's needed is just converters that use this code to convert to real open formats that can be maintained into the future, not moving this code itself into apps for the rest of all time. Today we have a transition point before us which lets us finally turn our back on the old, closed formats with all their code complexity. We can write converters that get rid of those formats that benefited Microsoft more than anyone else. Convert them into XML. Then, after a while, instead of opening any Word or Excel formats, we'll be exchanging just XML, and occasionally reaching for the converter when an old file has to be used. MS will go with that flow, because that's what customers will pay for. Soon enough these old formats will be rare, and the converters will be rare, too.

    Just don't perpetuate them, and Microsoft's selfish interests, by just embedding them into apps as "native" formats. Make them import by calling a module that can also just batch convert old files. We don't need this creepy old man following us around anymore.
    • by mxs ( 42717 )

      Just don't perpetuate them, and Microsoft's selfish interests, by just embedding them into apps as "native" formats. Make them import by calling a module that can also just batch convert old files. We don't need this creepy old man following us around anymore.

      Be very careful down that road. Particularly, don't confuse "I can import it and save it in MY format" with "this document is now accessible". The application doing the import might die off just the same in 10 or 15 years; and XML is not a wonderpill that makes a document format interchangeable. If you want to do the user a favour, don't just support full import of Office documents, but full export into a standardized format as well (and not just lip-service export).

      Interoperability goes both ways; this is

      • No, XML is indeed that wonderpill. Not because it's some magic format, but because it's open and human-readable, not some obfuscated binary format like .DOC. The apps doing the import will be open as well. And if they "die off" later, it's because no one is using them, so who cares? The rare need in the distant future to read whatever gets left behind in those formats will be served, for whichever archivist needs it, by the more recent open converter apps, which should still be archived somewhere, too.
  • by carou ( 88501 ) on Wednesday February 20, 2008 @10:03AM (#22487298) Homepage Journal
    From Joel's FA:

    There are two kinds of Excel worksheets: those where the epoch for dates is 1/1/1900 (with a leap-year bug deliberately created for 1-2-3 compatibility that is too boring to describe here), and those where the epoch for dates is 1/1/1904. Excel supports both because the first version of Excel, for the Mac, just used that operating system's epoch because that was easy, but Excel for Windows had to be able to import 1-2-3 files, which used 1/1/1900 for the epoch. It's enough to bring you to tears. At no point in history did a programmer ever not do the right thing, but there you have it.
    Nonsense.

    When Excel started importing 1-2-3 documents, the right way to do it would have been to create an importer into your own native format, not to munge a new, slightly different format into your existing structures. Yes, you'd have had to convert some dates between 1900 and 1904 formats (and maybe detect cases where the old 1-2-3 bug could have affected the result), but at least you wouldn't be trying to maintain two formats for the rest of time.

    If this is an example of programmers throughout history always doing exactly the right thing, I'd hate to see an example of code where the original author regretted some mistakes that had been made.
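
    For reference, the two epochs and the phantom leap day look roughly like this (a sketch; how to map the nonexistent serial 60 is a judgment call):

    import java.time.LocalDate;

    public class ExcelDates {
        // The 1900 system deliberately repeats Lotus 1-2-3's bug of treating
        // 1900 as a leap year, so serial 60 is the nonexistent 1900-02-29.
        static LocalDate fromSerial(int serial, boolean epoch1904) {
            if (epoch1904) {
                return LocalDate.of(1904, 1, 1).plusDays(serial);
            }
            int adjusted = (serial >= 60) ? serial - 1 : serial; // skip phantom day
            return LocalDate.of(1899, 12, 31).plusDays(adjusted);
        }

        public static void main(String[] args) {
            System.out.println(fromSerial(59, false)); // 1900-02-28
            System.out.println(fromSerial(61, false)); // 1900-03-01
            System.out.println(fromSerial(0, true));   // 1904-01-01
        }
    }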

    • by Schnapple ( 262314 ) <tomkiddNO@SPAMgmail.com> on Wednesday February 20, 2008 @12:04PM (#22488942) Homepage
      When Excel started importing 1-2-3 documents, the right way to do it would have been to create an importer into your own native format, not to munge a new, slightly different format into your existing structures.
      Well, ignoring the fact that the article elaborates on why they made some of the technical decisions early on, Joel, who was at one point a program manager for Microsoft Excel, actually has an article on this very thing [joelonsoftware.com]. Basically, this is exactly what they did - Excel initially opened 1-2-3 documents, but it could not write to them. You could open up your Lotus 1-2-3 document but you'd have to save it in Excel format. Excel 4.0 introduced the ability to write to Lotus 1-2-3 documents, and Excel 4.0 was the version that served as the "tipping point" - it was the version that businesses started buying in mass numbers and it was the version that signaled the end for Lotus 1-2-3.

      Why? Because, as the article states, Excel 4.0 was the first version that would let you go back. You could just try out Excel, and if it didn't work, no big deal, just go back to Lotus 1-2-3. It seems completely counter-intuitive, and it apparently wasn't the easiest thing to convince Microsoft management to do, but it worked, and now everyone uses Excel and Lotus 1-2-3 is ancient history.

      The programmers did both the right thing and the thing which would be successful. With all due respect to the OpenOffice folks, they're not in the business of selling software. If people don't move to OpenOffice in mass numbers it doesn't spell doom for the company, because there is no company. Doing what you suggest might be the right thing from a programmer's perspective (and I agree), but it's not compatible with a company that is trying to make a product that takes over the market. This is why Microsoft is so successful - they're staffed by a large number of people (like Joel) who get this.
    • by NullProg ( 70833 )

      When Excel started importing 1-2-3 documents, the right way to do it would have been to create an importer into your own native format, not to munge a new, slightly different format into your existing structures.


      Remember, these were the XT/AT/x386 days. It was easier to munge than waste CPU cycles and memory doing conversions.

      Enjoy,
  • by flanders123 ( 871781 ) on Wednesday February 20, 2008 @10:20AM (#22487468)
    ...to take this spec and create an identical .doc format, circumventing Word's bullet AI.
    • it
      • never
        • ever
    • ever
        • works
  • by amazeofdeath ( 1102843 ) on Wednesday February 20, 2008 @10:23AM (#22487508)
    Stephane Rodriguez comments:

    "I first gave a cursory look at BIFF. 1) Missing records: examples are 0x00EF and 0x01BA, just off the top of my head. 2) No specification: example is the OBJ record for a Forms Combobox," Rodriguez wrote. "Then I gave a cursory look at the Office Drawing specs. And, again, just a cursory look at it showed unspecified records."
    http://www.zdnet.com.au/news/software/soa/Microsoft-publishes-incomplete-OOXML-specs/0,130061733,339286057,00.htm [zdnet.com.au]
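
    For context on those record IDs: a BIFF stream is just a sequence of [2-byte id, 2-byte length, payload] records, so a scanner that inventories which IDs actually occur is short. A sketch (it assumes the raw workbook stream has already been extracted from the OLE container):

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;

    public class BiffScan {
        public static void main(String[] args) throws Exception {
            try (DataInputStream in = new DataInputStream(new FileInputStream("workbook.biff"))) {
                byte[] header = new byte[4];
                while (true) {
                    try {
                        in.readFully(header); // 2-byte id + 2-byte length, little-endian
                    } catch (EOFException eof) {
                        break; // clean end of stream
                    }
                    int id  = (header[0] & 0xFF) | ((header[1] & 0xFF) << 8);
                    int len = (header[2] & 0xFF) | ((header[3] & 0xFF) << 8);
                    System.out.printf("record 0x%04X, %d bytes%n", id, len);
                    in.skipBytes(len); // payload not interpreted here
                }
            }
        }
    }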
  • Then what's with that 2GB limit? Or what's with the decision to use such formats for mail storage and databases?


    • Were they using 32-bit machines? Seems to me that 32-bit machines can only address 4GB of memory total. Allowing for the OS and other apps running in memory, you can't use all of that address space anyway. (I.e., the OSes and machines of the day maxed out at 4GB of RAM. You could make the whole range addressable, but it was not needed.)

      2GB is the limit on a lot of OSes. Right now, I can think of several filesystems that limit file sizes to 2GB (FAT16, AIX's JFS). The first of those listed filesy
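
      (The 2GB figure is exactly what a signed 32-bit file offset gives you:)

      public class OffsetLimit {
          public static void main(String[] args) {
              // A signed 32-bit offset tops out at 2^31 - 1 bytes, just under 2 GiB.
              System.out.println(Integer.MAX_VALUE); // 2147483647
          }
      }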
  • by wrook ( 134116 ) on Wednesday February 20, 2008 @10:28AM (#22487552) Homepage
    I've worked on some of these file formats quite a bit (I was the text conversion guy when WP went to Corel -- don't blame me, it was legacy code! ;-) ). Anyway, while the formats are quite strange in places, they aren't really that difficult to parse. I would be willing to speculate that parsing was never really much of a problem in writing filters for apps (or at least shouldn't have been).

    No, the difficulty with writing a filter for these file formats is that you have no freaking clue what the *formatter* does with the data once it gets it. I'm pretty sure even Microsoft doesn't have an exact picture of that. Hell, I barely ever understood what the WP formatter was doing half the time (and I had source code). File formats are only a small part of the battle. You have all this text that's tagged up, but no idea what the application is *actually* doing with it. There are so many caveats and strange conditions that you just can't possibly write something to read the file and get it right every time.

    In all honesty I have at least a little bit of sympathy for MS WRT OOXML. Their formatter (well, every formatter for every word processor I've ever seen) is so weird and flaky that they probably *can't* simply convert over to ODF and have the files work in a backwards-compatible way. And let's face it, they've done the non-compatible thing before and they got flamed to hell for it. I honestly believe that (at some point) OOXML was intended to be an honest accounting of what they wanted to have happen when you read in the file. That's why it's so crazy. You'd have to basically rewrite the Word formatter to read the file in properly. If I had to guess, I'd say that snowballs in hell have a better chance...

    I *never* had specs for the Word file format (actually, I did, but I didn't look at them because they contained a clause saying that if I looked at them I had to agree not to write a file conversion tool). I had some notes that my predecessor wrote down and a bit of a guided tour of how it worked overall. The rest was just trial and error. Believe it or not, occasionally MS would send us bug reports if we broke our export filter (it was important to them for WP to export Word because most of the legal world uses WP). But it really wasn't difficult to figure out the format. Trying to understand how to get the WP formatter (also flaky and weird) to do the same things the Word formatter was doing... Mostly impossible.

    And that's the thing. You really need a language that describes how to take semantic tags and translate them to a visual representation. And you need to be able to interact with that visual representation and refer it back to the semantic tags. A file format isn't enough. I need the glue in between -- and in most (all?) word processors that's the formatter. And formatters are generally written in a completely ad hoc way. Write a standard for the *formatter* (or better yet a formatting language) and I can translate your document for you.

    The trick is to do it in both directions too. Things like Postscript and PDF are great. They are *easy* to write formatters for. But it's impossible (in the general case) to take the document and put it back into the word processor (i.e. the semantic tags that generated the page layout need to be preserved in the layout description). That also has to be described.

    Ah... I'm rambling. But maybe someone will see this and finally write something that will work properly. At Corel, my friend was put on the project to do just that 5 times... got cancelled each time ;-) But that was a long time ago...
  • In many situations, you are better off reusing the code inside Office rather than trying to reimplement it. Here are a few examples.
    1. You have a web-based application that needs to output existing Word files in PDF format. Here's how I would implement that: a few lines of Word VBA code loads a file and saves it as a PDF using the built-in PDF exporter in Word 2007. You can call this code directly, even from ASP or ASP.NET code running under IIS. It'll work. The first time you launch Word it'll take a few seconds. The second time, Word will be kept in memory by the COM subsystem for a few minutes in case you need it again. It's fast enough for a reasonable web-based application.
    2. Same as above, but your web hosting environment is Linux. Buy one Windows 2003 server, install a fully licensed copy of Word on it, and build a little web service that does the work. Half a day of work with C# and ASP.NET.

    So if you are on a Linux system, you are screwed. I think this article was written by some M$ fanboy. Nothing wrong here. But saying that Linux users should just dump their software and go for Microsoft stuff, just because

    It's very helpful of Microsoft to release the file formats for Microsoft Office, but it's not really going to make it any easier to import or save to the Office file formats.

    I think it's wrong wrong wrong.

  • Chunky File Format (Score:5, Interesting)

    by mlwmohawk ( 801821 ) on Wednesday February 20, 2008 @10:55AM (#22487892)
    While I was a contractor for a now-defunct contracting company, we did a contract for Microsoft. This was pre-Windows 3.1. We did some innovations which I think became the basis for some of the OLE stuff, but I digress; Microsoft had a spec for its "Chunky File Format."

    The Office format based on the chunky file format does not have a fixed layout, per se. It is more similar to the old TIFF format. You can put almost anything in it, and the "things" that you put in it pretty much define how they are stored. So, for each object type that is saved in the file, there is a callout that says what it is, and a DLL is used to actually read it.

    It is possible for multiple groups within Microsoft to store data elements in the format without knowledge of how it is stored ever crossing groups or being "documented" outside the comments and structures in the source code that reads it.

    This is not an "interchange" format like ODF, it is a binary application working format that happens to get saved and enough people use it that it has become a standard. (With all blame resting squarely on M$ shoulders.)

    It is a great file format for a lot of things and does the job intended. Unfortunately it isn't intended to be fully documented. It is like a filesystem format such as EXT2 or JFS. Sure, you can define precisely how data is stored in the filesystem, but it is virtually impossible to document all the data types that can be stored in it.
  • Outlook (Score:2, Interesting)

    by c00rdb ( 945666 )
    Why is Outlook missing from the released formats? I've spent some time reverse engineering meeting requests myself and I'd love to see the complete .msg file specification. You could find some useful information on MSDN already, but it was nowhere near as complete as these releases appear to be.
  • by sohp ( 22984 ) <.moc.oi. .ta. .notwens.> on Wednesday February 20, 2008 @01:22PM (#22490156) Homepage
    ...is total BS.

    A lot of the complexities in these file formats reflect features that are old, complicated, unloved, and rarely used. They're still in the file format for backwards compatibility, and because it doesn't cost anything for Microsoft to leave the code around.


    You better believe it costs Microsoft quite a bit to keep it around. At the lowest level, having a codebase that big means the tools and practices needed to manage it have to be equal to the task. Here's a hint: MS does not use SourceSafe for the Office codebase. (They use the Team tools in Visual Studio, so they do eat their own dogfood, but not the lite food.)

    Far more insidious is the technical debt incurred by carrying around that backwards compatibility with Version-1-which-supported-123-bugs-and-all. Interdependencies mean a bug either can't be fixed without introducing regressions, or can only be fixed by dint of a complex scheme involving things like the 1900 vs. 1904 epoch split that Joel discusses.

    Oh yes, it costs a small fortune to carry around that baggage, and only a company as big as Microsoft with Microsoft's revenues can afford it. The price might seem like 'nothing' in the billions of dollars that flow in and out of Microsoft, but ignoring the elephant in the room doesn't make the elephant go away.
