What Do You Know About Databases And XML?

Dare Obasanjo writes: "XML has become a pervasive part of significant segments of software development in a relatively short time. From file formats to network protocols to programming languages, the influence of XML has been felt. I have written an overview of XML schemas, XML querying languages, XML-enabled databases and native XML databases. Below is a shortened version of the article." Obasanjo's original OODBMS article has been updated to reflect more of the disadvantages of picking an OODBMS over an RDBMS.
  • by Ars-Fartsica ( 166957 ) on Monday October 29, 2001 @12:33PM (#2493278)
    XML solves the interchange problem.

    By this, it is meant that XML allows two systems that do not share a predetermined data exchange protocol to share data.

    That's it.

    Where two systems share a common predetermined protocol, it is almost always more efficient than XML.

    Applications of XML to programming lang design (XSL) and other domains are largely a waste of time and won't last.

    • So if someone designs a new (not like XML) format for exchanging data, and manages to get it standardized, then won't this also allow two systems that do not share a predetermined data exchange protocol to share data? One could also be careful in this design and make sure it is more efficient than XML, not only in space and bandwidth, but also in CPU time and programming time. Now, does such a format need to be text based as XML is?

      • XML has been said to reinvent ASN.1 notation, but in a more readable format. ASN.1 is said to be a lot more compact than XML (I only have used it in SNMP MIB definitions, so I can't comment on size issues). XML is definitely a lot more human readable - great for the debugging stage of writing XML apps and DTDs...

        But a lot of effort has gone into XML, and we can afford the extra overhead now, and it is standard and widely available for most languages and platforms. It isn't time to throw that away. I would use XML for all new application development; however, the benefits of migrating old applications and their datatypes to XML are marginal - why fix something that isn't broken?

        • At least XML is more open than ASN.1 is. Not that that means a lot. You can debug ASN.1 with something a little more sophisticated than the "cat" or "more" command.

          I recently embarked on writing an XML parser because existing APIs IMHO sucked. But digging into the XML documentation, which was huge, also revealed a "minefield" of bizarre syntax and ambiguities. Meant for human consumption? Certainly machines could have some trouble with it. I went back to using expat, and of course found it to be buggy. But it is huge code and not easy to debug, so I just have to live with it for now.

          The first attempt at this that I saw, back in the early 1970s, appeared to have originated with early PL/1 or Algol work: something IBM called HDF. Too bad they let it drop, even though I see it all over the place today; it's just not recognized.

          XML was intended for documents, and for giving those documents certain useful properties. The text string "John Smith" might not obviously be a name, but "<name>John Smith</name>" is. Then if someone wants their browser to make all names be hyperlinks to look them up in the staff directory, that works. But what is good as a document format just doesn't seem to be all that great for bulk data.

          So we have all this storage capacity and bandwidth, so let's waste it? Let XML turn a terabyte of database into two terabytes of text transfer format. That's the ticket! I think I'd rather go with ASN.1 and BER even if documents for them are not so readily available. But if those don't get opened up, I'm sure something new can be built to replace them as well.

    • Wow, this guy posts early but it looks good so he gets modded up. What a crock.

      XML solves the interchange problem. By this, it is meant that XML allows two systems that do not share a predetermined data exchange protocol to share data. That's it.

      That is a VAST oversimplification. What if instead you had said "computers allow us to carry out a repeated set of instructions. That's it." Doesn't quite tell the whole story, does it? Nor does your kindergarten-level definition of XML tell the whole story.

      Applications of XML to programming lang design (XSL) and other domains are largely a waste of time and won't last.

      Hmmm.... And will the stock market rebound in the next six months? Will Jesus FINALLY return and lift the Believers up into heaven? Will it rain next Friday?

      You can speak out of your ass all you like. Doesn't mean it's gonna happen. XSL/XSLT has been around for a while now, and its user base has only been expanding.

      He's a moron, obviously has done nothing more than skimmed a few chapters in some cheap-ass Wrox text, and he gets modded up to 5. There is no justice, I tell you!

      • Yet it's the working definition you'll find in many articles at XML.com. Why would you presume XML serves some larger purpose?

        XSL/XSLT has been around for a while now, and its user base has only been expanding.

        Once again, XML.com has some very informed articles trashing XSL, and they aren't naive posts by someone who just read the WROX book. Stop by and read them.

    • by Rogerborg ( 306625 ) on Monday October 29, 2001 @12:49PM (#2493360) Homepage
      • Where two systems share a common predetermined protocol, it is almost always more efficient than XML

      I hear you. The product that I'm working on right now is XML heavy. It's using entirely proprietary data formats, and the XML processing is taking up 80% of the query time. After achieving full buzzword compliance, we decided that the system is way too slow, and now have to strip the whole bloody lot back out again.

      Note that there was no reason to use XML in the first place, other than some designers wanted to put it on their resumes. I kid you not.

      • If the XML processing is in a separate layer of the application from the "real work", you could have multiple implementations of the data representation mechanism... an XML implementation for full buzzword compliance, and a more native implementation for maximum performance.

        On the other hand, if you've let the XML-aware code permeate all parts of the system, it's going to be a lot of work to strip it out.
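
        A minimal sketch of that layering idea in Java; the Result type and codec names here are hypothetical, not anything from the product under discussion:

        // Hypothetical result type; illustrative only.
        final class Result {
            final String payload;
            Result(String payload) { this.payload = payload; }
        }

        // The rest of the system talks to this interface and never sees the wire format.
        interface ResultCodec {
            String encode(Result r);
            Result decode(String data);
        }

        // Buzzword-compliant implementation: wrap the payload in markup.
        class XmlResultCodec implements ResultCodec {
            public String encode(Result r) { return "<result>" + r.payload + "</result>"; }
            public Result decode(String data) { return new Result(data.replaceAll("</?result>", "")); }
        }

        // Fast path: ship the payload untouched.
        class RawResultCodec implements ResultCodec {
            public String encode(Result r) { return r.payload; }
            public Result decode(String data) { return new Result(data); }
        }

        Swapping implementations is then a one-line change wherever the codec is constructed, instead of a rewrite of everything that touches the data.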

    • XML solves the interchange problem.

      If you can agree on the schemas, of course. Given the large amount of wrangling / discussion / argument over which schemas to adopt (e.g. ebXML), XML is getting bogged down. For those of us who would like to actually create something better than EDI, something that can be the basis for lowering transaction costs and making the world a saner place for doing business, it is a pain to have to deal with all this. I'm wondering if a better way of doing things would be to use RDF, since the semantics are embedded with the data and the syntax is easy to read for both machines and humans.
    • Well, it really depends on what you are doing and how you are trying to do things. A perfect example is internal documentation. It isn't in a 2D format (usually).


      Lord knows how annoying it is to write a document generic enough that translating it to other forms is possible. XML is the perfect format, since there is always some middleware that can turn XML into, say, PDFs or HTML. For HTML you have XSLT; it's a no-brainer. But for, say, a PDF, you can use another scripting language to process the XML and write out the PDF binary. Now we can create a handbook and have some cool stuff on the web without destroying the site.


      XML can also have internal uses for, say, templating. Using XSLT, you can build a template that would do cool stuff like

      [html][body]Hi [username/][/body][/html]

      which would be translated into something like

      [html]
      [body]
      [script language="php"]
      getUsername();
      [/script]
      [/body]
      [/html]

      VERY nice stuff for designers to use.

      Yes, I know my php tags and html open/close entities are arcane/wrong... but this is to make it easier to type on my part :)
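
      A minimal sketch of driving that kind of transformation from Java with the standard JAXP API (javax.xml.transform); the stylesheet and file names are made up for illustration:

      import javax.xml.transform.Transformer;
      import javax.xml.transform.TransformerFactory;
      import javax.xml.transform.stream.StreamResult;
      import javax.xml.transform.stream.StreamSource;

      public class HandbookToHtml {
          public static void main(String[] args) throws Exception {
              // Compile the stylesheet once, then run the source document through it.
              Transformer t = TransformerFactory.newInstance()
                      .newTransformer(new StreamSource("handbook-to-html.xsl"));
              t.transform(new StreamSource("handbook.xml"),
                          new StreamResult("handbook.html"));
          }
      }

      The same pattern covers the PDF route: transform to XSL-FO first, then hand the result to a formatter such as the Apache one mentioned below.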

      • by Merk ( 25521 )

        Converting to PDF is easy too. Just use XSL-FO. Apache has an implementation [apache.org] of this. Currently they only create PDFs, but it could easily be sent to a printer directly. You can also convert directly to TeX [ibm.com].

    • [Start Flamebait] Please would the moderators have some knowledge of the topic BEFORE flagging things as "Interesting" or "Informative"[End flamebait]

      XML solves the data format problem, and nothing more. It does not solve the interchange problem because apps still need to know where to locate relevant information in an XML doc, and how to interpret it. i.e. they have to have knowledge of the DTD and translate from the XML (structured according to the DTD) into their own internal format.

      So instead of needing to create a reader for a binary EDI format, you plug in a bog standard parser and get named values. So it makes interchange EASIER for the programmer. Especially those with languages that don't do binary data very well.

      God only knows what XSL has to do with "programming language design". XSL has two explicit goals: 1. (XSLT) a generic translation from one XML format to another. Why? Because everyone wants to use their OWN XML DTD, so to interact with umpteen other products you need to understand the DTDs of each... or you write an XSL for each to change it into your format. 2. (XSL-FO) display primitives to allow an XML document to be transformed into a display language, so we can see the damn thing.

      • XML solves the data format problem, and nothing more.

        Actually, less. It solves the metaformat problem.

        It does not solve the interchange problem because apps still need to know where to locate relevant information in an XML doc

        A simple policy is to reject any file without a valid DOCTYPE.

        As for what meaning you infer from tagged data, no standard is ever going to tell you that.

    • "....XML allows two systems that do not share a predetermined data exchange protocol to share data..."

      Isn't that statement an oxymoron? If both systems understand XML then they DO 'share a predetermined data exchange protocol.'

      --jeff
    • by SuperKendall ( 25149 ) on Monday October 29, 2001 @03:29PM (#2494221)
      I can't believe no-one has posted my standard response to someone who thinks XML is just for "interchange".

      The interesting thing about XML to me is NOT that it solves the interchange problem (though it helps with that). The great thing is that it solves the PARSING problem. No longer do I have to write a parser every time I have some simple task of reading in something externally.

      What XML does is define for you a standard means of parsing, and by defining the API for parsing and the structure of the documents, it lets you think about how you want to structure external information, not how you're going to read it in.

      Also, because the API for parsing is now hiding the engine details below, parsers can be specialized depending on what kind of task you have. Parsing thousands of 1k XML documents would seem to demand a different processor altogether from a few multi-GB documents, but you only have to know one parser (OK, really two - SAX and the DOM interface). You could even have specialized XML processors that write the stream out in a weird custom binary format for compactness and read it back in with the normal DOM API, so clients wouldn't have to adjust. I'll grant you that there don't seem to be many specialized XML processors - yet.
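
      For instance, here is a minimal SAX sketch in Java, assuming a hypothetical orders.xml full of <order> elements; the point is that the handler sees a stream of events rather than a whole in-memory tree:

      import javax.xml.parsers.SAXParserFactory;
      import org.xml.sax.Attributes;
      import org.xml.sax.helpers.DefaultHandler;

      public class CountOrders {
          public static void main(String[] args) throws Exception {
              final int[] count = {0};
              SAXParserFactory.newInstance().newSAXParser().parse("orders.xml",
                      new DefaultHandler() {
                          @Override
                          public void startElement(String uri, String localName,
                                                   String qName, Attributes attrs) {
                              // Elements stream past one at a time; nothing here
                              // forces the whole document into memory.
                              if ("order".equals(qName)) count[0]++;
                          }
                      });
              System.out.println(count[0] + " orders");
          }
      }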

      I also like the robustness of XML exchanges (here I'm getting more into your main point). If you add or drop attributes from an XML document, clients that read that document are less likely to break (unless of course they relied entirely on the node(s) you have removed!). That is especially true of XSL, where missing nodes of a document simply correspond to missing parts of output (which can also be a useful effect).

      You might think of XSL as a useless language, but I'll be happy to make a counter-prediction that it will grow and thrive. It's simply too useful a transformation tool to do anything else. I know the syntax seems overbearing, but for the kinds of short transformational work it's normally put to that's not much of an issue and you get used to it quickly.
  • More on OODBMS (Score:2, Informative)

    by andres32a ( 448314 )
    You can find more on OODBMS and their benefits here. [odbmsfacts.com]
  • by blamario ( 227479 ) on Monday October 29, 2001 @12:37PM (#2493296)
    And they have some intelligent discussion over there too. Please leave it that way.
  • by Anonymous Coward on Monday October 29, 2001 @12:40PM (#2493311)

    There was a good discussion on XML databases on the XML-Dev mailing list, which is summarized pretty well by Leigh Dodds' XML and Databases? Follow Your Nose [xml.com].

  • by TechnoVooDooDaddy ( 470187 ) on Monday October 29, 2001 @12:40PM (#2493318) Homepage
    Databases are for storing data. End of Story.
    Oracle is taking some BIGTIME performance hits for stacking all that OO crap in there, and MS SQL Server is seeing the same thing now that they've got the XML in theirs. Don't believe me?
    Why is NASA switching to MySQL from Oracle [fcw.com] and noticing speed increases?

    Don't get me wrong, I'm a big fan of XML.. as a data interchange format.. but when i want tight storage and quick retrieval, give me a normalized RDBMS any day of the week. Because that's what it's for.

    • by sphealey ( 2855 ) on Monday October 29, 2001 @12:49PM (#2493358)
      Why is NASA switching to MySQL from Oracle [fcw.com] and noticing speed increases?
      I will defer to you on the advantages/disadvantages of using databases to store OO data.

      However, citing NASA as a source for technology or trends is a bit silly, for a number of reasons. The primary one is this: NASA is so large, and so diverse, that at one of their sites/on one of their projects they use one of just about every technology product you can name.

      I was once running two back-to-back software evaluations for products in the $20-million range. For both applications, the top ten vendors all claimed that their system was "used by NASA for the Space Shuttle". We checked up and guess what - they were all telling the truth.

      So you need a better example.

      sPh

    • Don't get me wrong, I'm a big fan of XML.. as a data interchange format.. but when i want tight storage and quick retrieval, give me a normalized RDBMS any day of the week. Because that's what it's for.

      But what if your data representation is already an XML schema? And a pretty complicated one at that? For example, look at METS [loc.gov] : The METS schema is a standard for encoding descriptive, administrative, and structural metadata regarding objects within a digital library, expressed using the XML schema language of the World Wide Web Consortium. The standard is maintained in the Network Development and MARC Standards Office of the Library of Congress, and is being developed as an initiative of the Digital Library Federation.

      Have a look at that schema [loc.gov] and tell me how you'd store that in a traditional RDBMS (I'd be interested if you could, because I know SQL; I don't know OODBMS or XML repositories - this is painful for me). Databases have been for storing data, but when your data is already a complex XML representation of an object, there's little use in saying don't use OODBMS.

    • So what do you think of using XML for system configurations? On UNIX systems that tends to be a lot of separate files, traditionally edited with vi, although today the tools are getting more and more dummy-friendly and have a smaller space of possibilities.

      • Linux From Scratch has a project that aims to automate the process of building your own Linux setup by storing configuration files in XML. Read the intro page [linuxfromscratch.org]: they propose that you could go to a website and fill out a survey-type form to define your system, which would create a configuration file that could build everything correctly. It sounds to me like a huge undertaking, but if distros chimed in on this and contributed the tools and expertise they have in how to install a Linux system automagically, Automated Linux From Scratch could become a standard tool used by anyone wanting to set up Linux on anything. To go one step further and convert my /etc directory to MPXML (My Penguin XML... I made that up), well, I don't know if this would be a good thing.
      • So what do you think of using XML for system configurations?

        XML tends to be good for hierarchical, widely parseable data. In this sense, XML is good for configuration files, because many of the more advanced ones need some type of hierarchy to be sane. Also, it makes it easy to have one editing mode for many different configuration files, and configurations can be displayed/queried in a more universal manner.

          • I find the tags are a major hindrance to proper editing technique. If the tool is vi, I have to deal with the tags manually. If the tool hides the tags, then it has to be interpreting them and presenting some logical construct. But I've yet to see any tool that can let me do all I want with config files. How would /etc/rc look in XML?

    • by ergo98 ( 9391 ) on Monday October 29, 2001 @01:04PM (#2493441) Homepage Journal

      xml is an interchange format, not a storage format

      Absolutely, positively agree. Not only is XML only an interchange format, but it only makes sense in some situations. For instance, if we have an embedded piece of hardware that we have to communicate with, and we're communicating with it from a Windows box, and there is no shared common data encapsulation format, I'd prefer XML (with XSD) vastly over Jimmy the Programmer making up his own data encapsulation format/documentation method/extraction system. But if I have two Windows machines running SQL Server and they're in a common security context and they'll never change, I'd use DTS or replication, not XML.
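
      A minimal sketch of that XSD check in Java, using the standard javax.xml.validation API (a later addition to the platform than this discussion, so treat it purely as an illustration); the schema and document names are hypothetical:

      import javax.xml.XMLConstants;
      import javax.xml.transform.stream.StreamSource;
      import javax.xml.validation.Schema;
      import javax.xml.validation.SchemaFactory;
      import javax.xml.validation.Validator;

      public class ValidateOrder {
          public static void main(String[] args) throws Exception {
              Schema schema = SchemaFactory
                      .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                      .newSchema(new StreamSource("orders.xsd"));
              Validator v = schema.newValidator();
              // Throws SAXException if the incoming document does not conform.
              v.validate(new StreamSource("incoming-order.xml"));
              System.out.println("document is valid");
          }
      }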

      and MS SQL Server is seeing the same thing now that they've got the XML in theirs

      The XML "in" SQL Server is surface fluff (I love SQL Server and I'm saying this as a good thing, not a bad thing). i.e. Some modules that'll convert an XML query to an underlying DB query, and the results back to XML, and some basic XML importing and exporting routines. This hasn't affected the underlying operations of SQL Server whatsoever.

      • by drxyzzy ( 149370 )
        You keep referring to "SQL Server". Which one? PostgreSQL? MySQL? Sybase? There were several last time I checked, even for MS.
    • Comparing Oracle and MySQL performance in the context of XML is silly. It is a well-known fact that MySQL is significantly faster than Oracle, but not because of XML, Java, or other "OO crap". It is simply because MySQL doesn't have transactional support, and probably a host of other non-OO high end RDBMS features.

      I wouldn't be surprised if "OO crap" does indeed slow down Oracle, but I know the JVM for Oracle is completely optional. I can't speak to any XML features in Oracle, I'm not familiar with them.
    • Databases are for storing data. End of Story.

      Exactly, and XML is a format for encoding structured data. There are many kinds of documents that live their entire lives as XML, from XHTML documents to configuration files to the myriad kinds of XML documents that exist today [xml.org].

      Why is NASA switching to MySQL from Oracle [fcw.com] and noticing speed increases?

      If all you want is speed then MySQL is all you need. Similarly I can quote how much faster TUX is than Apache but that means nothing if I have dynamic database driven content that I want to use JSP or Perl to access.

      There is more to picking a database than how quickly it performs some SQL queries.

      Don't get me wrong, I'm a big fan of XML.. as a data interchange format.. but when i want tight storage and quick retrieval, give me a normalized RDBMS any day of the week. Because that's what it's for.

      This means you're suggesting that people shred XML documents into relational data to store them in the DB and then reassemble them whenever they retrieve them. This is massive overhead and error prone, since you're depending on your developers to come up with custom ways of doing this for each application. It is also typically very difficult to ensure that the XML that was stored in the DB can be accurately reconstructed (what happens to comments, processing instructions, entities, etc.).

  • The new tools like XPath and XQuery are pretty useful, but do you know of any tool that reads in XML and then allows you to access it via standard SQL? I know it would be a bit of a stretch to make it fit the SQL model, but I think it would be very useful, as lots of people out there are used to using SQL. Anybody doing this?
  • by gillbates ( 106458 ) on Monday October 29, 2001 @12:53PM (#2493381) Homepage Journal
    The problem with XML is that it incurs quite a bit of processing overhead. Not only this, but in order for a validating parser to parse XML, it must read the entire document. This is simply not practical for even modestly sized databases, as most current XML parsers will attempt to read the entire file into memory.

    Granted, XML has some advantages. Data interchange among dissimilar clients, for one. But storing XML in a database is a gross waste of space and processing power, and is realistically impossible for all but the smallest of databases.

    • We use XML as the format that is returned by the database.

      Requesting a list of clients and their sales will return an XML file that describes this list and sublists.

      We definitely would not use XML as the actual storage format.
    • Yeah, right, all XML and SGML parsers have to read the entire document before anything can be done. All of those SAX parsers are a figment of my fevered imagination. *rolleyes*

      Now, if you're pointing out that XML provides no mechanism for indexing so you'll have to scan the file *until* you reach the record you're interested in, I agree. But as others have pointed out, nobody uses XML as the storage format for anything but the smallest databases. (E.g., configuration files.) But the translation to/from XML format for queries no more breaks its 'purity' than converting SQL "insert" clauses into binary data stored in B-tree or ISAM tables breaks its relational purity.
      • No, he said _validating_ XML parsers. If you want to validate that a document in fact matches a DTD or a Schema, you do have to read the entire document. You can have a validating SAX parser, by the way (though it doesn't really make sense to me to do so), though the simplest SAX parsers don't validate the document structure; they just fire events into the event handler. Clearly, validation is a more logical concept with DOM-style parsing, as you are building an in-memory tree representation of the document.
  • A couple of months ago, my employer got bit by the "OO bug" and decided to move several of our internal systems to Java-based solutions. Naturally, they hired several Java zealots who insisted that our DBMS would need to be converted to an OODBMS in order for their programs to work correctly (read: they were too lazy to implement a conversion layer). Although they were able to move things off our old HP 9000 servers and onto cheap PCs running Win2k, the JRE was rather unstable and slow compared to the old system (which, by the way, worked just fine).

    After several weeks of dealing with growing pains and general brokenness, my manager wisely decided to transition our systems back to a UNIX environment. I worked in the group that was responsible for this, and after obtaining source code to several of our accounting and inventory applications, we moved the operation over to a Linux 2.2 (Debian potato) system. Things have worked flawlessly since then, and the OODBMS and Java developers are long gone. The promise of an OO architecture was great, but it just didn't work out in the real world - Linux was the solution for us.

    -CT

    • Uh, what exactly does converting to an OODBMS and "Java-based solutions" have to do with converting from (or to) a "UNIX environment"? What does moving "the operation over to a Linux system" mean in the context of abandoning Java and/or OODBMS? How does an "OO architecture" preclude or relate to using or not using Linux? Moderators, in case you don't understand, the answer is that running a Linux or Unix system has nothing to do with whether or not you use either OODBMSes OR Java - both run on Linux/Unix, so the original post makes no sense at all.

      Methinks from the above (and your handle) that your post is a joke aimed at highlighting the moderators' ineptness. Shame on the moderator for letting it through - evidence enough that all you need to do to be moderated up is throw in enough buzzwords to confuse the moderators (who don't really know much about the issues anyway).

      Now watch me get modded down.
    • I'm assuming from your ID that you know perfectly well how silly your comment was, but since the moderators fell for it:

      JAVA RUNS ON UNIX. He just tossed out the Linux reference to get you guys to mod it up: "Ooh, Windows and Java failed! Linux worked! +1, informative!"

      The decision of whether or not you use Linux has absolutely nothing whatsoever to do with the decision to use Java and/or OO techniques. Further, I've never seen an unstable JRE in my life -- the JRE is the single most stable Windows app I have ever used (although the instability of Windows itself still leaves it undesirable). The last time I saw a JRE crash (even once) was, I believe, three years ago using JDK 1.2 beta 4. I program Java seven days a week, and it simply does not crash.

      And I'm also pretty impressed that you could hire new people, redesign a complex system, reimplement the new design in a completely different language/platform/database, realize it wasn't working, fire the new people, assign new people to the project, and transition over to yet *another* new platform in the space of a few short months. That's the quickest turnaround I've ever heard of.

      (Translation: this guy's a troll. Please stop handing the frickin' trolls karma points.)
      • Perhaps I was not clear in my post. The problems we had with the OO system and original system were:

        • The OO system was designed around Windows 2000 and used the proprietary COM/Java interfaces. Porting it to any sort of UNIX system would be nearly impossible. And stability was a huge issue and we had reasons to believe it was the JRE and/or OS, not the software.
        • The old system was running on aging hardware, which was expensive to maintain and support. But since we obtained the source code, we were able to easily recompile it to run on Linux.

        We really didn't have a choice. Porting the original system to Linux was the most cost-effective option available.

        And yes, we did accomplish everything within a few months. Our developers spend significant amounts of time doing actual work (it's part of the corporate culture) and very little time playing your alleged "troll busting" game on Slashdot. That goes a long way toward explaining our unusually high productivity.

        -CT

        • Proprietary COM/Java interfaces? As soon as COM gets involved, you're no longer programming Java -- you're using J++ or some other bastardized hybrid. Sounds like you hired idiots and got what you paid for.

          "...we had reasons to believe it was the JRE and/or OS..."

          I obviously have no retort to this other than to stand by my assertion that Sun's JRE is rock-solid. I have already stated that I would never use Windows in a production environment, but that's Windows' problem, not Java's. A real Java program could have been moved to any of the discussed platforms in a few minutes. I actually develop my server software on Windows and then deploy to Solaris, and in three years of doing this I've never had an issue.

          "Our developers spend significant amounts of time doing actual work (it's part of the corporate culture) and very little time playing your alleged 'troll busting' game on Slashdot"

          Yet, here you are posting on Slashdot, same as me. And you're implying that you guys are more efficient because ... why, again? Because 'troll busting' somehow takes more time than posting 'legitimate' messages?

          "That goes a long way toward explaining our unusually high productivity."

          It actually wasn't your high productivity I was commenting on. After all, the net result was that you spent a few months and (presumably) tens of thousands of dollars, and in the end all you accomplished was porting from HP-UX to Linux. That's a remarkably slow and expensive porting job. The bit I was commenting on was how quickly the plans were abandoned and the guys were fired -- you said "a few months", and presumably most of that time was doing the port. How long did you give them to try to fix it? It just sounded like the new plan wasn't given a serious chance for survival, but then I wasn't there so I don't know how long they dicked around with it.

          Everybody hires idiots now and then, and kudos to you guys for getting rid of them so quickly, assuming they really didn't know what they were doing. But these problems were not caused by Java, Windows, or an OODBS -- they were caused by incompetence, plain and simple.
  • Can we please, please, please append the definition of XML to allow "</>" to close whatever the last tag was?

    That simple change would probably cut the size of the average XML file in half.

    • Can we please, please, please append the definition of XML to allow "</>" to close whatever the last tag was?

      That simple change would probably cut the size of the average XML file in half.

      (corrected post, please moderate my other one down. I have plenty of Karma...)

    • by ez76 ( 322080 )
      Can we please, please, please append the definition of XML to allow </> to close whatever the last tag was?

      This would take away from the self-documenting nature of XML, I think.

      Inevitably, authors would begin terminating their deeply nested documents with tags like:
      </></></></></>
      which is a lot less informative/helpful/debuggable than:
      </address></contact></vendor></prospect>
      Know what I mean?
      • You would still be able to do it that way, but I don't see the advantage of requiring the trailing tags. If you're creating files that are only ever going to be read by machines, it makes no sense to waste the space. Heck, if you're debugging something that always used the "shortcut", it would be trivial to make a little filter to fill in the trailing tags with the full names.

        The biggest problem with XML is the incredibly wasteful and verbose nature.

  • by jelwell ( 2152 ) on Monday October 29, 2001 @01:12PM (#2493479)
    I actually wrote a client side database recently, where all the processing is done on the client. I use Javascript, XML and XSL(T).

    It requires Netscape 6. (not out yet), IE 6, or Mozilla 0.9.5+ because of its use of XSL Transform functions.

    You can view the page here. [singleclick.com]

    Joseph Elwell.

  • by Sean Starkey ( 4594 ) on Monday October 29, 2001 @01:16PM (#2493498) Homepage
    It makes me sad to see all of these closed minded people when it comes to XML. They just haven't seen what XML can do and have been turned away from previous work in XML. XML can be used for data storage, and has many advantages.

    XML allows data to be stored with context. For example, if you have the data element "CmdrTaco", that doesn't mean much. But with XML, you can store this bit of information with context:

    <Slashdot>
    <Editor>
    <Name>CmdrTaco</Name>
    </Editor>
    </Slashdot>

    Isn't that more informative?

    It is surprising to me that people who like OO don't like XML. OO allows you to have functionality attached to your data. XML allows you to put context (and even functionality) around your data.

    Another big advantage of XML databases is the lack of a schema. If you want to have a dynamic database in the relational world, you are looking at a large schema migration. An XML database allows you to just add the information with no migration at all.

    Advanced storage techniques allow queries of the XML database to be just as fast as a relational database. How can that be? The XML is stored in a specialized indexed form that allows for fast retrieval.

    Sure, there are applications where it doesn't make sense to use an XML database. Using an XML database to store relational data doesn't make sense; that's what relational databases are for. But if you can think outside the mold, and store your data in a new way, XML databases are for you.

    I might be a little biased in this area, since I work for an XML database company (http://www.neocore.com). I have seen XML in action, and it is more than just a data transport. I hope that I can convince at least one person to look at this advanced technology.

    • > But with xml, you can store this bit of information with context:

      <Slashdot>
      <Editor>
      <Name>CmdrTaco</Name>
      </Editor>
      </Slashdot>

      Isn't that more informative?


      Yes, and I can do the same thing in Scheme with about half as many characters, and with the added advantage of being able to treat parts of my data and stylesheet as executable code if I wish.

      Nor do I have to reformat it with a bunch of ampersands to post it to Slashdot, by the way.

      (SlashDot
      (Editor
      (Name CmdrTaco)
      )
      )

      Even more readable, IMO.

      Oh, and people had been doing this for years before XML was ever misbegotten.

      XML: More snake oil to the rescue.
      • by DGolden ( 17848 )
        Yes, couldn't agree more. XML is just a particularly annoying way of writing S-expressions.

        I really don't get people who complain about Lisp syntax and then tell me how wonderful XML is - XML is 10x more annoying than Lisp!

        Also, if you want to deal with XML in a semi-sane way, may I recommend just transforming it into Scheme, processing it with the normal LISPy tricks, then pretty-printing it back out... See here [sourceforge.net] for the best way to deal with XML weenies.
      • I've noticed XML is basically Scheme too. One question though, how do you do XML attributes in Scheme?

        <Slashdot>
        <Editor Type="Full-Time">
        <Name>CmdrTaco</Name>
        </Editor>
        </Slashdot>

        • How about this?

          (Slashdot
          ((Editor
          (type Full-Time)
          (worthless-stock-options yes))
          (name CmdrTaco)))

          --jeff
          • Well that breaks stuff, right?

            If elements with attributes start with two parentheses that makes them different from elements without attributes. There's gotta be a way since the SSAX [sourceforge.net] project has to handle it somehow.


            • > Well that breaks stuff, right? If elements with attributes start with two parentheses that makes them different from elements without attributes.

              I wouldn't use the double parens. Something like -

              (Editor
              (type Full-Time)
              (worthless-stock-options yes)
              (name CmdrTaco)
              )

              would work. In fact that's what I would probably do (depending on exactly what I needed to represent).

              Notice that if you have already found the Editor structure, you can take the cdr to get a list of key-value pairs, and use assoc to find the key-value pair that you want.

              This can be abstracted pretty easily into a hierarchy of lookup tables, and you can write really simple functions to extract the parts you want.
    • Of course most of us understand the idea of structured data. The point is there are already better ways of storing and retrieving structured data on a server, and very little compelling reason to send content-oriented data through to the web client, where presentation rules and all attempts at disintermediating presentation have fallen flat.
    • XML allows data to be stored with context. For example if you have the data element "CmdrTaco", that doesn't mean much. But with xml, you can store this bit of information with context:

      [snip]

      Isn't that more informative?



      When was the last time you looked at the data files of your database system? I don't think I've looked at the actual on-disk data of MSSQL or MySQL in quite some years. Who gives a rat's arse whether that data is readable? The output from the database perhaps, but that's a formatting issue, not an architectural one.

  • RFC (corrected) (Score:5, Interesting)

    by Reality Master 101 ( 179095 ) <RealityMaster101 @ g m ail.com> on Monday October 29, 2001 @01:18PM (#2493505) Homepage Journal

    Can we please, please, please append the definition of XML to allow "</>" to close whatever the last tag was?

    That simple change would probably cut the size of the average XML file in half.

    (corrected post, please moderate my other one down. I have plenty of Karma to spare...)

    • ... because if they did, people might realize that:

      <foo> <bar>baz</bar> <mumble>grumble</mumble></foo>

      is equivalent to

      <foo> <bar>baz</> <mumble>grumble</></>

      which is semantically equivalent to

      (foo (bar "baz") (mumble "grumble"))

      And if they did that, they might have to admit that XML is semantically equivalent to Lisp S-expressions, and not a major advance in computer science after all.

      And they'd never do that.
      • This comes from the same group of thinkers that think:

        (car (cdr (car (cdr (cdr (car "x y m q")))))))))))

        Is cool - and thus, we have editors that automagically balance parentheses. But don't get me wrong, I have a real appreciation for people that can do "real programming" (like video codecs) in Lisp.
      • Of course, one of the "ideas" of XML is that you can just strip out all of the tags and have a document you can sort of read. That would be anathema to a Lisp person, and for good reason. Lisp is all about simple, minimalistic expression and manipulation of hierarchical data. XML is about an underspecified hodgepodge of structure and free form data.

        Which is not to say that it's not useful, regardless.

      • Nobody ever claimed that XML is a major innovation in lexical encodings. Neither, for that matter, is LISP, which is no more than lambda calculus.

        The reason why XML uses the notation it does is that it is somewhat more robust. The problem with S-Expressions is that one misplaced parenthesis can cause the entire semantics of the expression to be changed, or as we computer scientists say 'be fucked beyond recognition'.

        Most major advances in computing are not major advances in computer science. There was absolutely nothing original in C; it was merely a version of B with a few additional features added back from BCPL, which was itself merely a subset of CPL, which was merely a revision of ALGOL, and so on.

        Packaging counts for an awful lot.

  • There's this German textile management system called Koppermann that I was curious about - it's really flexible as far as I could tell. Well, I fired up the ODBC browser to take a look in its MS-SQL tables, and I kid you not: THEY IMPLEMENTED AN OO DATABASE IN A FLAT TABLE DATABASE. They had a giant user interface table with records like this:

    ControllID
    ParentControllID
    DataType
    FormLocationX
    FormLocationY

    Then they had a giant data table like this

    DataID
    ParentDataID
    ControllID
    Data

    Argh! The madness of it all. Everything of substance was in these two tables. I'll admit that it's a nice hack, and they can tell all their clients that their data is 'easily exported into a CSV file', but good grief! It reminds me of those people who made so many #define macros in C as to make it look like Pascal.

  • Performance (Score:2, Informative)

    OO databases mixed with XML == Very bad performance

    This may be great for academia, or perhaps small projects, but in "The Real World"(tm) this won't fly. As a performance guy working on a big system, I can tell you that using OO databases and/or XML queries/storage will butcher performance.

    For most of our clients, performance is the #1 concern, as that is what dictates hardware. Buying one 32-way p680 for a typical RDBMS solution vs. two for a fancy OO/XML solution isn't much of a choice.
  • by aspillai ( 86002 ) on Monday October 29, 2001 @02:03PM (#2493728) Homepage
    I've used XML extensively and in some ways agree with people saying XML isn't a storage format. But right now there are lots of applications where XML is the perfect storage format. Example: consider an order-processing company that brokers orders between companies. One option would be to define a monolithic DB schema to take care of what each company would like in their order. Another would be to define a really abstract schema to facilitate handling generic order forms. The problem with the first is that each time XYZ wants something added to an order form, you need to change the schema. With the second, it'll work, but you'll need exceptionally disciplined and smart programmers to deal with the abstract layer. This doesn't even deal with migration issues.

    The solution is XML. You create an XML Schema and start storing stuff. Some company wants more parameters - no problem, extend the schema. If you need to migrate previous XML docs to adhere to the current schema, use XSLT. Or you can add these as optional parameters and every document that exists already will conform to the schema.

    Speed in XML is an issue. But people who think you need to read the entire XML document to process it don't know what they're talking about. You can do modular processing. Also, you can do smart indexing to increase speed. And in a production environment, you turn Schema checking off unless you're getting documents from untrusted sources. Will XML ever be as fast as an RDBMS? Probably not. But XML doesn't store relational data. And with current research in XML query languages, I'm sure XML's speed will be good enough for most future applications that deal with fuzzy schemas. (If you need a high-performance DB, then you have to bite the bullet and use an RDBMS.)

    My two cents.
  • by jlowery ( 47102 ) on Monday October 29, 2001 @02:04PM (#2493731)
    Of course, this is not an easy question to answer, but the right answer involves knowing three things:

    1) Can certain records be considered 'atomic'?
    This is similar to the RDBMS question of whether or not it makes sense to construct a view. View definitions represent a common query. If you consider a query as a means of tying together disparate data from many tables into a single, denormalized set of records, the record could just as easily be expressed in some XML format.

    Now, if that record represents some physical or conceptual entity in the data model, it is in fact a set of properties about an object. This is what XML is good at representing. Decomposing that set of object data (record) into normalized relations may not make sense if such 'objects' are frequently requested; but there are other considerations...

    2) Ad hoc queries are difficult when data is stored internally in XML, because each XML blob has to be parsed and checked for the query values. If you don't know in advance if the XML structure even has the fields you're looking for, then you must do an exhaustive search. Some have used indexed XPath information to work around this issue. Since we're mentioning indexes...

    3) How do you find the XML blobs you're looking for? We've used an ORDBMS for our XML data, and indexed on the ID or key values (as defined in an XML Schema) for each element stored in the database. This makes looking up element instances easier. It makes relating them easier, too, if you use IDREF or keyrefs as your foreign keys.

    Now every XML document has a single root element. If you're storing that document in a database, you could choose to store just that one root element instance. More likely, you'll want to decompose the root so that accessing subelements by ID or key in the database will be easier.

    Got to run off now,

    Jeff Lowery
  • Most of what XML is used for can be solved with text files, at far less expense.


    I'm not just talking out of my ass either. I've worked with EDI systems (data in binary format means you need proprietary software on both ends), XML, and plain old text files. I've used all 3 in the context of transferring data between businesses, which is what XML aims to solve. My feeling is that plain old text files, along with a descriptive file of how the text file is laid out, are overall the best solution for most data interchanges between businesses.


    One really good example of this is using diff. Suppose your supplier maintains a database of products you can order, and this data changes daily. Using text files, you can easily diff today's file with the one you retrieved the day before and get a much smaller file to use to update your internal database. I can't imagine a more elegant solution using XML.


    I have found one good solution that uses XML - outputting XML on the fly over the net in response to a query. If you have customers that query your data regularly over the web, any change to the HTML will throw their queries off if they are "screen scraping" to get at your data. XML solves this problem nicely, even if new fields are added or if the XML page layout changes in some way. I don't see the logic of actually storing XML in the database though.


    My experience of being in a business where data interchanges with other businesses take place on a regular basis is that formatted text files are still the best way overall. They are easier to deal with and faster than XML ever will be.

    • I think you're missing the point. XML is a formatted text file, and namespaces are the description of how it is laid out.


      The difference being that for an XML file, the code for loading and parsing the data into an object model, manipulating and querying it is the same for every XML document. Whereas, for plain text files and human-readable descriptions you need a programmer to write and test code for each type of file. For XML this code has already been written and tested.


      I don't agree with the 'diff' example either, for example the diff between two text files tells you nothing about the context of the diff, ie what the meaning of the change is (and no, just knowing the few lines above/below doesn't necessarily tell you anything, either). You have to manually refer to the original document and the description of the file-format in order to work out what has changed: just knowing that a particular line changed doesn't tell you what that change actually means. With an XML document it's easy to automatically derive the context of the diff, and there are already many programs which will do this.

    • First one has to think about what's XML.

      XML is not a language, notwithstanding its own name. It's a metacodification, used to create codifications such as XQL, HTML, DocBook and so on.

      OO people are usually programmers with very little CS fundamentals, so they don't even get this right: when they are talking about XML in database contexts, they should at least specify the codification they want to use, and whether it is to be used for storage encoding, for data communications, or both.
      Thus one cannot say that XML was created for data interchange -- it was created for metacodification. One can create a data interchange codification based on XML -- but that's kind of stupid, since XML codifications usually impose big overheads. We've been doing data interchange with text files with few problems for years. The issue of agreeing on a data model and codification between applications does not go away just because you agreed on using some codification with a big overhead.

      But I still haven't touched on the worst part of using XML codifications in database contexts -- it is that both XML and OO are hierarchical, thus a regression to thirty years ago, when there were navigational databases, no data independence, and hierarchical and network systems... We are throwing away thirty years of relational research without ever having implemented it right.

      But that's the way of an uneducated world... just as people adopting proprietary technologies have thrown away open systems ideals without ever having got it right.

      --
  • I've been using Oracle at work, and a small open-source project called DBXML [dbxml.org] for my personal projects. All of this using Java front ends.

    For my small projects, using DBXML has been a joy. There are certain things for which using XML makes a lot more sense. Some data models just fit more naturally into hierarchical structures, for example users and groups. If you have unique usernames, you can pull data on a user, then pull their group quite easily without the need for a reference table, simply by pulling the user's parent.

    This isn't to say I think XML databases are the answer to everything. One of the largest problems I find so far is that queries that are relatively easy in SQL can get a bit tedious in XPath. Also, as of yet there doesn't seem to be any truly standard query language. This is understandable, given how new the designs are, but it is a bit difficult to decide how to do things sometimes. Do you check in a document, or XUpdate it? Play with DBXML and you'll see what I'm talking about.

    For those of you complaining about XML not being an efficient way of data storage because of the high memory cost of keeping documents in memory, bear in mind that there are more parsers out there than just DOM and its relatives. SAX is quite efficient, and even if you're using DOM it is entirely possible to pull fragments out of the document as you see fit; in fact XPath makes this quite easy.
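
    A minimal sketch of pulling just a fragment with XPath from Java (javax.xml.xpath), reusing the Slashdot/Editor example from earlier in the thread; the file name is hypothetical:

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class FindEditors {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse("editors.xml");
            XPath xpath = XPathFactory.newInstance().newXPath();
            // Grab only the fragment we care about instead of walking the whole tree.
            NodeList names = (NodeList) xpath.evaluate(
                    "/Slashdot/Editor[@Type='Full-Time']/Name",
                    doc, XPathConstants.NODESET);
            for (int i = 0; i < names.getLength(); i++) {
                System.out.println(names.item(i).getTextContent());
            }
        }
    }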

    I may be crazy, but I eventually see XML databases providing solid competition to standard RDBMS systems. I've seen complaints about performance -- I think much of this is rooted in the fact that a lot of these systems are not native XML databases -- they are RDBMSs with XML capabilities thrown on top. One way or another, it should be interesting to see how things pan out.

    End rant.

  • I've been properly brainwashed in the Open Source way, and I use XML all the time as an interchange mechanism, but you'll have to pry Crystal Reports from my cold, dead fingers.

    I have spent a lot of time training non-technical users to get their own damn reports from databases. It's hard to imagine putting data--any data--into a system where the tools to get it out haven't been written yet.
  • Triple stores (Score:4, Interesting)

    by macpeep ( 36699 ) on Monday October 29, 2001 @03:31PM (#2494229)
    At my previous job, I implemented an experimental app that was inspired by RDF (Resource Description Framework) and triple stores.

    In a triple store, you have objects that are defined by a set of properties. The word "triple" comes from the fact that you have triples of objects, properties and property values. For example, you could have a person, John Q, who has an age of 37, a phone number 1234 and an employer Foo Ltd. Foo Ltd. in turn has a phone number 5678 and any number of other properties. This forms the following triples: John Q --age--> 37, John Q --phone number--> 1234, John Q --employer--> Foo Ltd., Foo Ltd. --phone number--> 5678.

    When you look at these, you can see that Foo Ltd. is both the employer of John Q (a property value) and an object in itself that is described by a set of properties. In RDF, the triples form a graph that describes your data. The graph is typically serialized as XML.

    At first, it would seem that this lends itself very well to relational databases. A row in a table would be the object to be described and the columns are the properties. The intersection is the value. However, the problem - and the strength of RDF - is that you can have any number of properties for an object. Basically, you could have any number of columns, and sometimes the property value is not just a value - it can be a database row in itself or even a set of rows... or a set of values.
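
    A minimal sketch of the naive triple-table mapping in Java over plain JDBC; the in-memory H2 URL is just a placeholder, not the poster's actual setup:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TripleStoreSketch {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection("jdbc:h2:mem:triples")) {
                try (Statement s = c.createStatement()) {
                    s.execute("CREATE TABLE triple (subject VARCHAR(255), " +
                              "predicate VARCHAR(255), object VARCHAR(255))");
                }
                String[][] rows = {
                    {"John Q", "age", "37"},
                    {"John Q", "phone number", "1234"},
                    {"John Q", "employer", "Foo Ltd."},
                    {"Foo Ltd.", "phone number", "5678"},
                };
                try (PreparedStatement ins =
                         c.prepareStatement("INSERT INTO triple VALUES (?, ?, ?)")) {
                    for (String[] r : rows) {
                        ins.setString(1, r[0]);
                        ins.setString(2, r[1]);
                        ins.setString(3, r[2]);
                        ins.executeUpdate();
                    }
                }
                // "Who employs John Q, and what is that employer's phone number?"
                // Even this two-hop question needs a self-join on the triple table,
                // which is where the complex SQL mentioned below comes from.
                try (PreparedStatement q = c.prepareStatement(
                        "SELECT e.object, p.object FROM triple e JOIN triple p " +
                        "ON p.subject = e.object AND p.predicate = 'phone number' " +
                        "WHERE e.subject = ? AND e.predicate = 'employer'")) {
                    q.setString(1, "John Q");
                    try (ResultSet rs = q.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1) + " / " + rs.getString(2));
                        }
                    }
                }
            }
        }
    }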

    The app I wrote mapped arbitrary RDF files to relational databases and back as well as provided an API to perform queries on the data. The result of the queries were RDF graphs in themselves.

    While this was quite cool, it turned out to be quite difficult to turn the query result graphs into meaningful stuff in a user interface. Also, queries on the RDF graphs could turn out to be extremely complex SQL queries... Most of these problems were eventually solved but the code wasn't used directly for any real world app, except heavily modified as a metadata database for a web publishing system.
  • by rodentia ( 102779 ) on Monday October 29, 2001 @06:20PM (#2495024)
    A large number of otherwise intelligent posters would seem to have been hit by the runaway XML hype train. Examples culled from various posts:

    ...[not a] major advance in computer science.

    ...[bogus] contribution to programming language design (re: XSL)

    ...[transfer data between businesses,] which is the problem XML aims to solve.

    But these are critiques directed at the hype machine, not the specification. This is really distressing me. The machine is so efficient that there are APIs for XML (which shall remain nameless) being written and optimized for message passing that cannot handle mixed content as a matter of design. As though XML were somehow so useful in this area that a section of the spec should be tossed to make it efficient. As though there weren't already gallons of ink being spilled on EDI, etc.

    XML was not designed to replace S-expressions, to facilitate cross-platform communications, revolutionize EDI or DBMs, to theorize about language design, yada, yada. XML is just that, an Extensible bloody Markup Language, a document tagging scheme. In this regard it is a tremendous advance. It is 80% less suck, by volume, than what went before. If you think your XML parser is bloated, have a look at any SGML parser. Part of what gets stripped out is tag minimization, the absence of which another poster complained about.

    Hey, it's text and not binary because I need to write it and read it. Yes, Virginia, I've got 400 users tagging XML in flat-file editors. They complained about the loss of tag minimization, too. But my svelte little Xerces needs a hand to stay so lean.

    The goal is to get structural and semantic information into my documents. (Yes, it's data, but a special kind of data called a document. You can call the message you're passing a document, and use XML to format it, but there is some overhead the hype machine may not have emphasised in their rush to market.) I also strive to eliminate formatting or presentation instructions from the document (or hide them in PIs) to facilitate multi-target outputs. This lets my typesetters typeset and my data-entry people enter data.

    XML is designed to bring something of this model to the web. HTML is too presentation oriented. SGML is too bulky. That's what it do, babe. I take a single source file from somewhere on the filesystem, incorporate pieces from elsewhere (entity resolution, DB queries, etc.), and turn it into one of five possible outputs. I use two different pagination engines with different proprietary formatting macros, XSL(T|FO), or a trap door on the bottom to dump pretty-printed ASCII. It's a publishing tool.
    • Finally, a rational discussion of the merits of XML. Hate to use the buzzword, but it's all about repurposing, baby! XML facilitates creating docs that you can then convert to a variety of output formats. This is pretty much 50% of my job, so that's why I think it's so cool. And after years of poring over binary dumps of other people's data (well, and my data too), it's very nice to use a human-readable, self-documenting format. I think a lot of the XML posts you see are from people who don't do this sort of stuff for a living.

      XSL is also cool, once you climb the steep learning curve and bend your mind around its declarative style.

      As for native xml db's - that is probably mostly hype.

  • performance (Score:3, Funny)

    by csbruce ( 39509 ) on Monday October 29, 2001 @08:14PM (#2495412)
    1-GHz Pentium-III + Java + XSLT == 1-MHz 6502.
