DTD vs. XML Schema

AShocka writes "The W3C XML Schema Working Group has released the first public Working Draft of Requirements for XML Schema 1.1. Schemas are a technology for specifying and constraining the structure of XML documents. The draft adds functionality and clarifies the XML Schema Recommendation Part 1 and Part 2. The XML Schema Valid FAQ highlights development issues and resources using XML Schema. This article at webmasterbase.com addresses the XML DTDs vs. XML Schema issue. Also see the W3C Conversion Tool from DTD to XML Schema and other XML Schema/DTD editors."
This discussion has been archived. No new comments can be posted.

  • Power (Score:5, Insightful)

    by sethadam1 ( 530629 ) <adam.firsttube@com> on Thursday January 23, 2003 @07:04PM (#5146778) Homepage
    There's no "vs."

    XML Schemas are much more flexible and powerful.

    There're also about 100 times more difficult and confusing.
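To make the trade-off concrete, here is a toy constraint (my own example, not from the article or the poster) expressed both ways:

```xml
<!-- DTD: one terse line per element. -->
<!ELEMENT note (to, body)>

<!-- W3C XML Schema: a fragment of a schema document declaring the
     equivalent, with explicit datatypes (which a DTD cannot express). -->
<xs:element name="note" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="to"   type="xs:string"/>
      <xs:element name="body" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
```

The Schema buys you datatypes and namespace awareness at the cost of considerably more ceremony.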
    • Re:Power (Score:4, Insightful)

      by goatasaur ( 604450 ) on Thursday January 23, 2003 @07:22PM (#5146902) Journal
      "There're also about 100 times more difficult and confusing."

      The phrase "difficult and confusing" goes hand-in-hand with any flexible or powerful computer utilities.

      Full utilization of XML (and myriad programming languages) takes time.

      They call them "languages" for a reason. You can't write a sonnet in French if you have only studied it for a year; and you can't write a full-featured browser suite if you started coding a month ago. :)

      • Re:Power (Score:3, Insightful)

        by iapetus ( 24050 )
        The thing is, one of the major selling points of XML has been exactly the fact that it is easier to use and less complex than its predecessors (SGML, anyone?).
        • by rodentia ( 102779 ) on Thursday January 23, 2003 @11:56PM (#5148490)
          It bears noting that the same applications of XML that drive the keening about bloat and hype seen in these comments are precisely those which are driving the specs to the wrong side of the 80/20 split for XML/XSL's original goals: bringing the semantic power of SGML and DSSSL to the Web. Goals for which its purist cousins RELAX NG, REST, et al. remain admirably suited.

          The back-end curmudgeons are right, XML stinks for a universal wire format. But for loosely-coupled, message-based, semantically-rich systems it is hard to beat. And document-oriented systems which don't use XML barely deserve notice any longer.

          I gently refer s-expression trolls to paul [prescod.net] and oleg [okmij.org].
    • Re:Power (Score:5, Informative)

      by Lulu of the Lotus-Ea ( 3441 ) <mertz@gnosis.cx> on Thursday January 23, 2003 @09:50PM (#5147812) Homepage
      There certainly is a "vs." involved. There are many good reasons to choose DTDs for a given validation requirement rather than W3C XML Schemas. I address some of those in an IBM developerWorks article:

      Comparing W3C XML Schemas and Document Type Definitions (DTDs) [ibm.com]

      This is a bit old, but still correct. Not a lot has changed in either spec.

      I am currently working on a series of articles on RELAX NG. In most ways, I think RELAX NG really is the best of all worlds. It is more powerful than W3C XML Schemas, while being a natural extension of the semantics of DTDs. Moreover, if you choose to use the compact syntax (non-XML), you get something very easy to read and edit by hand.

      David...
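For a taste of what that compact syntax looks like, here is a small grammar of my own devising (not taken from the articles) in RELAX NG compact form:

```rnc
# RELAX NG compact syntax: reads almost like a DTD content model,
# but with real datatypes (xsd: is predeclared to XML Schema datatypes).
element addressBook {
  element card {
    element name { text },
    element email { text },
    element age { xsd:integer }?
  }*
}
```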
      • I am currently working on a series of articles on RELAX NG. In most ways, I think RELAX NG really is the best of all worlds. It is more powerful than W3C XML Schemas, while being a natural extension of the semantics of DTDs. Moreover, if you choose to use the compact syntax (non-XML), you get something very easy to read and edit by hand.

        I am old, and I am wary of the ways of hype. But after reading this and other comments on this thread, I had a look at the RELAX NG tutorial. [oasis-open.org] All I can say is: wow. Given that this stuff is already known to be formally correct, I am finding it very hard to believe that the W3C should not just punt on XML Schema and just adopt RELAX NG instead. It seems to have every advantage: You can understand it, it is powerful, James Clark endorses it, the tutorial is helpful...what's not to like?

      • Re:Power (Score:3, Interesting)

        by MarkWatson ( 189759 )
        Hello David,

        I also just read through the RELAX NG tutorial and I am now looking at Bali (for generating Java RELAX NG validators).

        Good stuff! I agree with the other poster that W3C should punt on XML Schemas.

        That said, I think that for the foreseeable future, simply using DTDs works well because all the hooks are already in place for the popular XML parsers.

        I suppose the next step would be to get Xerces and other XML parsers to natively support RELAX NG (I have to check whether Clark has such a parser already :-)

        - Mark Watson
        - Free web books: www.markwatson.com
  • by Anonymous Coward
    PXML [pault.com] is a subset of XML - an alternative to the bloated XML language.

    believe me, you won't use XML anymore if you once tried PXML [pault.com]
    • by Anonymous Coward on Thursday January 23, 2003 @07:23PM (#5146908)
      Better yet, use S-Expressions.
      There are tons of parsers available.
      markup is simple:
      (this_is_the_tag
      this is all data
      (except_this_is_a_nested_tag with still more data))

      Even better still, there are customizable parsers available that can treat these S-Expressions as data OR interpret them as programs OR a combination of both. One such parser is called "Lisp". Once again, several implementations are available.
      Note that things like S-Expressions and Lisp have only been around for 40 years so you might want to give these technologies some time to mature.
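For what it's worth, a complete S-expression reader really is tiny. Here is a minimal sketch in Python (the function names and structure are my own; it handles only atoms and parentheses, with no strings or quoting):

```python
def tokenize(text):
    """Split an S-expression string into '(', ')' and atom tokens."""
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Build a nested list from a token stream; mutates the list."""
    token = tokens.pop(0)
    if token == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # discard the closing ')'
        return node
    return token  # a bare atom

tree = parse(tokenize(
    "(this_is_the_tag this is all data"
    " (except_this_is_a_nested_tag with still more data))"
))
# tree[0] is the "tag"; a nested list is a nested "element".
```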
      • I tend to agree. XML is an incredibly clunky way to ship trees around, let alone make subroutine calls.

        If it was simple and standard, it would be useful, even though it's slow, but if it's complicated and incompatible, it's part of the problem, not part of the solution.

      • by 21mhz ( 443080 ) on Friday January 24, 2003 @04:33AM (#5149541) Journal
        Better yet, use S-Expressions.
        There are tons of parsers available.


        How does one specify the character set in some, imagined or real, S-Expression markup? Do these "tons of parsers" support Unicode at least? Where to put processing instructions? Character entities? External entities? "Raw data" sections with markup suppressed? How does one specify the document type identifier? Namespaces? All these things fulfill important tasks for XML to be a universal, yet concise, markup language, and all this can make your dreamt-up S-Expression language as contrived as XML is sometimes perceived to be.
        (this_is_the_tag
        this is all data
        (except_this_is_a_nested_tag with still more data))
        Attributes, I presume, are out of our concern? You note that the means for syntactic description of data trees have been around for 40 years. Yet there was a yearning for something more... handy. Doesn't that give you a hint?
    • And how is XML bloated? Sounds like an ad post. :P

      You need Unicode for internationalization, you want namespaces for differentiation of data, you want comments to make... comments. Troll elsewhere.
    • believe me, you won't use XML anymore if you once tried PXML
      OK
    • The hell I won't (Score:5, Insightful)

      by ttfkam ( 37064 ) on Thursday January 23, 2003 @07:57PM (#5147091) Homepage Journal
      Trimming bloat like namespaces and comments? Are you nuts?

      How do you embed MathML in another document (like XHTML)? Currently it's with namespaces. How do you propose to do that without namespaces? Just the prefixes? What happens when two different markups use the same prefix? Wups! You're screwed!
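A quick sketch (my own, using Python's standard xml.etree) of what namespaces actually buy you here: the parser expands every prefix to its full URI, so MathML elements can never collide with XHTML ones, whatever prefixes the author happened to pick:

```python
import xml.etree.ElementTree as ET

# A toy XHTML document with an embedded MathML island.
doc = """
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:m="http://www.w3.org/1998/Math/MathML">
  <body>
    <p>An identifier: <m:math><m:mi>x</m:mi></m:math></p>
  </body>
</html>
"""

root = ET.fromstring(doc)
# ElementTree reports every tag in {namespace-URI}localname form,
# so lookups are by URI, not by whatever prefix the document used.
math = root.find(".//{http://www.w3.org/1998/Math/MathML}math")
```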

      No comments? This is supposed to be a better alternative to XML? It won't help readability, and comments certainly aren't a major bottleneck during parsing.

      Don't want the "bloat" of namespaces and comments? Wait for it... Wait for it... Don't use namespaces and comments in your documents! Wow! What a concept!

      Maybe no Unicode in PXML, hunh? So much for interoperability for any kind of data. You don't ever want your pet project used in East Asia (or Russia or Greece or most other places in the world), do you? Unicode too bloated? Why not just use ISO-8859-15 (basically ASCII with a Euro sign -- which ASCII itself doesn't have)? Oh wait! That's right. You don't want to allow the XML declaration or processing instructions, which are what tell a parser which encoding is used.

      What happens if you want to change some of the basic syntax of PXML? Because you've nuked the XML declaration and processing instructions, you can't specify a markup version like you can in XML.
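This is easy to demonstrate with any conforming parser. A sketch using Python's standard library (my own example; the byte 0xA4 is the Euro sign in ISO-8859-15):

```python
import xml.etree.ElementTree as ET

# The encoding named in the XML declaration controls how the parser
# decodes the raw bytes that follow it.
raw = b'<?xml version="1.0" encoding="iso-8859-15"?><price>\xa4 10</price>'
root = ET.fromstring(raw)
# root.text is now the decoded Unicode string containing a real Euro sign.
```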

      Yes, yes. We've all seen your little pet project. I hope it was just a class assignment.
      • Agreed. However, it is true that most people would consider some feature or another of XML unnecessary and bloated. I would consider various entities (everything but character entities) to be such a feature, for example. I'm not missing C-style #includes in Java, and the same goes for XML. If inclusions are needed, another XML extension should be used (isn't there something like XInclude?).

        The second feature I'd consider removing would be CDATA sections. They're nifty when manually modifying XML, but otherwise just a pain (not a huge one for an XML parser, but additional bloat).

        For other people the list would look different, I'm sure. :-)

        • But I think you are totally off base with regard to CDATA sections. If anything, they make life easier for the parser, not harder -- at least when I was writing a parser, CDATA made things faster and easier. In cases where you are including a great deal of symbolic data -- for example, when you want to include a source code segment or ASCII art -- it is easier to read, faster to parse, and *less* bloated.

          '<' takes up less space than '&lt;'. Assuming you have more than three or four of these in your text node, a CDATA section reduces the size of your document. For the parser, after the CDATA section is begun, only the character sequence ']]>' can end it. This means the parser only has to check for ']]>' and not '<', '&', '<?', '<!', etc.
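A small demonstration (mine, using Python's standard ElementTree) that the escaped and CDATA spellings decode to the identical text node; CDATA just lets the author skip the escaping:

```python
import xml.etree.ElementTree as ET

# Same payload written two ways: entity-escaped, and as a CDATA section.
escaped = ET.fromstring("<code>if (a &lt; b &amp;&amp; c) go();</code>")
cdata   = ET.fromstring("<code><![CDATA[if (a < b && c) go();]]></code>")
# Both parse to the same .text; the CDATA source is shorter and readable.
```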

          And yes, there is such a beast called XInclude, but it's currently only a Candidate Recommendation. It's used like this:

          <foo>
          <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="bar.xml" parse="xml">
          <xi:fallback>
          <para>This text goes in if bar.xml cannot be found or has an error</para>
          </xi:fallback>
          </xi:include>
          </foo>

          Hopefully most entities can go the way of the dodo.
          • I wish they'd advance XInclude along. It fills a real need in the XML world. I'd like to see XML editors and such start honoring the XInclude specification in its current state so that I could edit meta-documents which span many files.
          • but I think you are totally off base with regard to CDATA sections. If anything, they make life easier for the parser, not harder -- at least when I was writing a parser, CDATA made things faster and easier.

            Well, CDATA is nothing like entities, and parsing it is not THAT much of a problem. But there are some subtleties when programmatically accessing CDATA blocks (when manually editing there are no problems; it's just automatic processing that's trickier). In any case, CDATA is an extra feature that's not really "needed"; normal quoting can be used instead... it's a convenience feature.

            However, doing quote/unquote when parsing/outputting is a breeze as well, whereas outputting CDATA sections automatically and reliably is tricky -- at least if you want it 100% foolproof (granted, the need to embed a ]]> token is rare, but it's still a possibility): you need to check for ]]> and split the contents in two. When parsing, the (minor) problem is that whereas text segments are usually normalized (i.e. when reading a new document without modifications, you never get two text segments in a row), you can get a combination of CDATA blocks and text segments; there's no way for the parser to combine them on the fly (well, DOM parsers do have the normalize method that may do the combination? Or perhaps that's not allowed by the specs?).

            But just like entities, CDATA is meant for manual quick-quoting of blocks, and it makes it easier for humans to quickly understand the contents. For programs it doesn't make a big difference.

            ... oh yeah, I probably spent too much time writing my own streamable XML parser; when not doing a full read into memory, things get more involved. :-)
            (Streamability, i.e. reading only what is currently needed while still maintaining some structure, unlike SAX, was needed to handle > 100 MB XML export files... I implemented both read-only and write-only versions for internal use.)
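The ]]>-splitting trick described above fits in a few lines. A toy implementation of my own (not from any particular parser), with a round-trip check showing that a parser merges the adjacent sections back together:

```python
import xml.etree.ElementTree as ET

def cdata_wrap(text):
    """Wrap arbitrary text in CDATA, splitting any embedded ']]>'.

    The only sequence that can end a CDATA section is ']]>', so we close
    the section after ']]' and reopen a new one before '>' wherever the
    sequence occurs in the payload.
    """
    return "<![CDATA[" + text.replace("]]>", "]]]]><![CDATA[>") + "]]>"

# Round-trip: even a payload containing ']]>' survives.
tricky = "data containing ]]> inside"
xml_doc = "<blob>" + cdata_wrap(tricky) + "</blob>"
```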

    • by tomzyk ( 158497 ) on Friday January 24, 2003 @03:00AM (#5149313) Journal
      HTML [w3.org] is a subset of XML - an alternative to the bloated XML language.

      believe me, you won't use XML (and those pesky XSLTs) anymore once you've tried HTML [w3.org]

      AND (most importantly) in virtually every single web browser that you can find, support for viewing this format over the internet is available and built into the browser itself!
  • by r4lv3k ( 638084 ) on Thursday January 23, 2003 @07:09PM (#5146806)
    1. DTD 2. XML Schema 3. CowboyNeal validation (via SOAP over SMTP)
  • by Ars-Fartsica ( 166957 ) on Thursday January 23, 2003 @07:13PM (#5146836)
    DTDs are being deprecated one way or another.

    While the W3C continues to push Schema, they are also forming working groups for RELAX NG after pressure from XML luminaries such as James Clark.

  • WTF!!!! (Score:3, Funny)

    by LinuxPunk ( 641305 ) on Thursday January 23, 2003 @07:19PM (#5146883) Journal
    dammit, right after I buy a book to finally learn XML in detail, they change the standards. :P
    • Actually, XML Schemas have been around for a while now.

      I learned XML a while back, and we learned Schemas and DTDs. While I can write a DTD in 10 seconds, it takes literally hours for me to write a useful XML Schema that is dynamically populated. But it's been around.
  • by MimsyBoro ( 613203 ) on Thursday January 23, 2003 @07:19PM (#5146888) Journal
    I am a programmer for a commercial company (yes, I like to make money, and I program on WinTel). A year ago, during the XML craze, we converted all our internal protocols to XML. I discovered that XML was just a lot of hype about nothing. There is nothing self-describing about it. Or maybe there is, just like the section names in an INI file describe the keys in them...
    On the other hand, the one thing that I did find XML useful for is easy parsing. If you use XML to develop a lower-level protocol you end up with bloated 10k messages. But for high-level protocols or for configuration files it's great for one reason: there are lots of ready-made tools. If you want to parse XML in Windows, just load the IXMLDocument interface and it works at lightning speed. If you want to parse messages in a web browser, throw together a quick DOM parser or even use the built-in one! If you want to parse XML in Perl or C/C++ there are great libs. The only reason XML is good is because all the hype got people developing very neat tools. In one of my latest projects, which needed to pass information between two programs written in different languages, I used a home-made SOAP and designed a base class that persists using XML. I developed it in both languages in under an hour!
    So although it wastes bandwidth and there really isn't anything neat about it, it is comfortable; I'll give it that.
    • by sporty ( 27564 ) on Thursday January 23, 2003 @07:35PM (#5146990) Homepage
      A year ago, during the XML craze, we converted all our internal protocols to XML. I discovered that XML was just a lot of hype about nothing. There is nothing self-describing about it. Or maybe there is, just like the section names in an INI file describe the keys in them...


      Great thing about XML, is if you need to convert your communications, you can write XSLT against it to convert it while you convert your XML source.. easily. For instance, one vendor I worked with decided that the old protocol didn't work well anymore, and a new one would be better. Forget the reasons for the change, good or bad.

      I plopped an XSLT processor in front of it. Took minutes to implement. In the mean time, I was able to properly rewrite the XML producing code. So I had some flexibility in terms of patching the protocol quickly, while taking the weeks I needed to fix things right.

      As for self-describing, what is more self-describing than HTML? You see bold and italics tags around an element, you can easily figure out what style the text will be in. Yes, I know about CSS, but the point is, XML IS descriptive, so long as you use good names. Naming elements a, b and c is just the developer's fault.

      If you use XML to develop a lower level protocol you end up with bloated 10k messages.


      If in today's age of gigabit ethernet and cheap parts, you really really need to squeeze that extra bit through, compress the line. Seriously. Simplest case, is using ssh. Hell, it auth's AND encrypts. If you are worried about anonymous access, there are other tools.
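To put a rough number on "compress the line": tag-heavy XML is so redundant that generic compression recovers most of the bloat. A sketch with Python's standard gzip module (the message shape is invented for illustration):

```python
import gzip

# A verbose, repetitive XML message: 200 order items, ~40 bytes each.
items = "".join(
    "<item><sku>%04d</sku><qty>1</qty></item>" % n for n in range(200)
)
message = ("<order>" + items + "</order>").encode("utf-8")

# Generic DEFLATE-based compression exploits the repeated tag names.
packed = gzip.compress(message)
```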
      • by axxackall ( 579006 ) on Thursday January 23, 2003 @11:33PM (#5148400) Homepage Journal
        Great thing about XML, is if you need to convert your communications, you can write XSLT against it to convert it while you convert your XML source.. easily.

        Great thing about Lisp, is if you need to convert your communications, you can write Lisp against it to convert it while you convert your Lisp source.. easily.

        I plopped an XSLT processor in front of it. Took minutes to implement. In the mean time, I was able to properly rewrite the XML producing code. So I had some flexibility in terms of patching the protocol quickly, while taking the weeks I needed to fix things right.

        I plopped a Lisp processor in front of it. Took minutes to implement. In the mean time, I was able to properly rewrite the Lisp producing code. So I had some flexibility in terms of patching the protocol quickly, while taking the weeks I needed to fix things right.

        the point is, XML IS descriptive, so long as you use good names.

        the point is, Lisp IS descriptive, so long as you use good names.

        If you use XML to develop a lower level protocol you end up with bloated 10k messages.

        If you use Lisp S-expressions to develop a lower level protocol you don't end up with bloated 10k messages.

        Besides, in Common Lisp [elwoodcorp.com] you'll really appreciate MOP [mini.net] - Meta-Object Protocol. Much better than SOAP.

        Trust me, I know well, actively use and actually love both Lisp *AND* XML.

      • XML IS descriptive, so long as you use good names. Naming elements a, b and c is just the developer's fault.

        It is not just a matter of using good, descriptive names. Whatever code is reading the XML is going to have to know what the names mean. A program reading XML couldn't care less whether the name is "a" or "AVeryMeaningfulName".
    • by plierhead ( 570797 ) on Thursday January 23, 2003 @07:39PM (#5147014) Journal
      I discovered that XML was just a lot of hype about nothing.

      Amen to that. Sad to say, but certain parts of the IT industry (in particular, anything to do with one computer (or piece of code) magically talking to another one owned by a different organization) are constantly buying into the bogus claims of snake-oil salesmen with silver-bullet technologies. XML is merely the latest in a long line.

      The only new thing about XML, IMHO, is that it has spawned more sub-specifications than any previous pretender to the crown.

      Anyone remember CORBA? Or any of the other zillions of RPC-type mechanisms that people have jumped on the bandwagon of?

      I'm not blaming the people who push these agendas. I too would love to spend my weekends sunning on the beaches of exotic European tourist destinations and chugging beers on my expense account. Sitting through a few stiflingly boring and pointless standards meetings seems a small price to pay. All large IT companies employ 2 or 3 people whose job it is to front up to these meetings. Typically these people are articulate and highly versed ex-programmers, but architecturally challenged and with little understanding of the real nature of building complex IT systems.

      Ultimately, these RPC mechanisms all end up as nothing - or rather, as only perhaps 1% of the eventual solution.

      All that XML is, is an easy-to-parse, text-based data transfer mechanism. And as the parent posting says, there are some nice tools around for it. Big deal. Probably you'd be silly to use anything else when designing a data transfer. But is it ever going to change the world? Or even rock it a little? No.

      • by ttfkam ( 37064 ) on Thursday January 23, 2003 @08:11PM (#5147184) Homepage Journal
        Yup, I remember CORBA. It's still used. In fact, it can be used with XML. CORBA provides the interfaces for programmatic logic. There is nothing really required to know about how the ORBs communicate with one another. If an ORB decides that its transport mechanism is XML, so be it.

        As for SOAP and XML-RPC, what's so hard about compressing it before sending the message? The whole point about XML is that you don't need to write a new parser. You don't need to write a new broadcaster. Your project is about getting a task done, not micromanaging implementation details.

        If (and only if) your higher level API/transport is insufficient for the task do you roll your sleeves up and dig in. Do you write everything in assembly? Why not? It would be faster than whatever language you are using now. The reason you don't is that you have better things to do with your time. The goal is important, not the tool. Everyone has standardized on and is optimizing this one particular tool and it works well. So many people have done work so that you don't have to.

        Will it change the world? Of course not. It's just a markup language. Will any other computing tool change the world? Of course not. The end users have never cared how you got to the solution. They cared only if you got to the solution faster than the other guy.
        • Not to fault CORBA, although it is a rather cumbersome outgrowth on top of a sometimes overbearing paradigm (OO), or to debunk XML, which is a very powerful and complex language to replace that other, even more powerful and complex language, but...

          As for SOAP and XML-RPC, what's so hard about compressing it before sending the message?

          Well, that it is hard. Try forking a few thousand gzip processes and you'll see what I mean.

          Your project is about getting a task done, not micromanaging implementation details.

          Um, you're the one suggesting we should use compression to manage the SOAP and XML-RPC overhead. That sure sounds like micromanagement of implementation details to me.

          Do you write everything in assembly?

          Well, in fact, for years, I didn't, but I recently picked it up again, and the speed gains are tremendous, in just a few dozens lines of code.

          The end users have never cared how you got to the solution. They cared only if you got to the solution faster than the other guy.

          So then how does that explain why developers all over the world are suffering through hundreds (thousands) of pages of documentation just to send a message across the room? Standards are good and XML is progress, but

        • I thought CORBA was like teenage sex... everyone's talking about it but nobody's doing it. And those that are doing it are doing it wrong.

          But in truth I've done CORBA and XML-RPC. I far prefer the simplicity of XML-RPC over the complexity of the CORBA specification. I found XML-RPC to be more reliable as well, but I'm probably making the mistake of judging the technologies of CORBA and XML-RPC relative to the abilities of the implementations I had on hand.
      • Anyone remember CORBA ? Or any of the other zillions of RPC-type mechanisms that people have jumped on the bandwagon of ?


        CORBA is more than just a data format. It's an architecture. XML is not an architecture; it's just a data format.

        The price of sitting through a ferw stiflingly boring and pointless standards meetings seems a small price to pay. All large IT companies employ 2 or 3 people whose job it is to front up to these meetings. Typically these people are articulate and highly versed ex-programmers but architecurally challenged and with little understanding of the real nature of building complex IT systems.


        You give these people zero credit. Really. They probably have real jobs doing real things, while helping to create these standards for their company's benefit.

        Ultimately, these RPC mechanisms all end up as nothing - or rather, as only perhaps 1% of the eventual solution.

        All that XML is, is an easy-to-parse, text-based data transfer mechanism. And as the parent posting says, there are some nice tools around for it. Big deal. Probably you'd be silly to use anything else when designing a data transfer. But is it ever going to change the world? Or even rock it a little? No.

        Disclaimer:Not reviewed for relevance or accuracy - results may vary.
      • The only new thing about XML, IMHO, is that it has spawned more sub-specifications than any previous pretender to the crown.

        Sub-specifications?

        You mean like MathML, SMIL, SVG, XHTML, et al.?

        These are all modular languages that use XML.

        The XML client application uses one or more DTDs or schemas to determine how to interpret the various elements in the XML file, and you can intermingle e.g. MathML and XHTML and so forth all in the same XML file.

        Unless I'm grossly misinterpreting your comment (in which case I apologize), I can safely say that you didn't understand the article, since these "sub-specifications" you mentioned are exactly what DTDs/Schema are for, and exactly what makes XML a Good Thing.

        They didn't call it "Extensible" just so they could put a nice pretty "X" in "XML". (Though in all fairness, I must wonder if anyone could take something called "EML" seriously... ;)

    • Say what you will, but for me, XML delivers on every one of its promises. I think that it is easy to dismiss as 'hype' because it is so deceptively simple. After all, XML is just a special format for writing ASCII documents, right?

      Well, that statement is on par with saying that ASCII is just an OVERHYPED binary format for storing text. It's not, and neither is XML, for the same reasons.

      XML allows me to stamp out robust document schemas in minutes or hours, instead of the months or even years it would take working from scratch. Because of the rich set of tools you mention, I don't have to write a metric ass-load of documentation on my formats, either. XML spec + my extensions == all the client needs. Because XML is a stable standard, things like MathML, ChemML, DocBook, DOM, etc. can exist, and proceed to maturity faster than they otherwise would.

      Yes, there are some who want to XMLify everything, but that's no more an intrinsic fault of XML than it would be ASCII's fault if some dumb programmer wanted to redesign the Linux kernel to use ASCII-based API calls...

    • by valmont ( 3573 ) on Thursday January 23, 2003 @08:28PM (#5147278) Homepage Journal
      I notice that this topic is generating many comments from hard-core backend programmers who mainly focus on inter-application messaging and various equivalents of remote procedure calls.

      In my experience, many benefits of XML come when dealing with the presentation layers of application architectures, with the ability to repurpose syndicated data at will. Here are a few examples:

      • RSS [userland.com], which defines an easy standard for any site to provide "news" in a well-defined XML format. This allows developers to write software that aggregates news from different sites into one convenient interface, and sites to exchange news headlines with each other.
      • Google Web APIs [google.com], which allow developers to create their own custom Google-powered search site with their own look and feel, by simply proxying a user's search query to the Google server, which returns search results as XML data that can subsequently be transformed into HTML before being sent back to the user, via processes such as an XSLT transformation [w3.org].
      • Amazon Web API [amazon.com], similar in principle to the Google API above, allows developers to enhance their sites by letting their users search for Amazon products without having to go to the Amazon site itself. One interesting side effect of such an API is that an Amazon competitor, say Barnes and Noble [bn.com], could offer a similar API to their own site. I could then allow my users to use my service to search for books and offer them results and price comparisons from both Amazon and Barnes and Noble.

      Effective use of XML and XSLT allows you to easily aggregate informational data from one or multiple sources and "repurpose" it for an infinite variety of business and technological goals.

      One of the main benefits of XML is that it offers an effective, textual representation of "structured data" that can be conveniently accessed and manipulated via a slew of surrounding standards such as XPath [w3.org], DOM [w3.org], XSLT [w3.org], and namespaces [w3.org].
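As a small illustration of that repurposing (my own sketch; the feed shape is invented, loosely following RSS 2.0), pulling headlines out of syndicated XML is a one-liner once a DOM/XPath-ish toolkit is in play:

```python
import xml.etree.ElementTree as ET

# A toy RSS-style feed, as a site might syndicate it.
feed = """
<rss><channel>
  <item><title>First headline</title><link>http://a.example/1</link></item>
  <item><title>Second headline</title><link>http://a.example/2</link></item>
</channel></rss>
"""

# Aggregate just the headlines, ignoring everything else in the feed.
titles = [item.findtext("title") for item in ET.fromstring(feed).iter("item")]
```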

      • OK -- what if Google, Amazon, etc. were to do the same thing, but transfer binary data, without tunneling over port 80 (which is bad, evil, and vile -- just ask any sysadmin), and provide a library that parses the binary data for you?

        It would be the exact same thing -- except it would be faster, use less bandwidth, be more secure, and have session-level security (which HTTP lacks). But it wouldn't be buzzword-compliant.
        • by sporty ( 27564 ) on Thursday January 23, 2003 @10:39PM (#5148071) Homepage
          Wow, I'm just runnin into you all over the place, aren't I.

          OK -- what if Google, Amazon, etc. were to do the same thing, but transfer binary data, without tunneling over port 80 (which is bad, evil, and vile -- just ask any sysadmin), and provide a library that parses the binary data for you?


          Well, that's why you'd use HTTPS with certificates, no? And nothing is wrong with the port. If you meant HTTP, then yeah, it's plaintext.

          Mind you, I don't have a choice of OSes at work. We use Solaris and Linux. Now Amazon, being a Windows shop (I'm guessing), only gives out DLLs. Great, now I'm not supported. So fine, we use Java. Did you know Java class files (binaries) are versioned? I'm stuck with 1.3.1 at the moment, and a 1.4 JDK is in the works. Problem is, some JDKs use one version of the binary format while another uses... another. I always hoped it was a universal format. Sadly let down.


          It would be the exact same thing- except it would be faster, use less bandwidth, be more secure, have session level security (which HTTP lacks). But it wouldn't be buzzword compliant.


          That's why technologies like JAXB and translets are popping up. With JAXB, you can bind particular classes to particular schemas/DTDs. It speeds up processing. Translets are just compiled XSLT. Really fast, since your XSLT can be compiled/interpreted once, run anywhere. Kind of a chain of technologies: translet->XSLT->Java->machine language.

          And mind you, nothing is more secure about a binary format. It's just obfuscated. Hell, I hacked the Renegade BBS's user database format so I could write a user deletion tool. Were they going for security? Probably not. Point is, binary is just obfuscated.

          As for your session level security, that's not the job of your data format. Your data format and transport layer should be independent. It's why you can do SOAP over HTTP, SMTP/mail and possibly anything else that has a function()-like request->response format. It's probably why ssh is so great. All it is, is a way of authentication, communication and encryption. You can create ssh tunnels for HTTP as a proxy.
        • There's nothing that says it *has* to be on port 80, but providing XML rather than binary data reduces both the initial development *and* the maintenance time required to release the data to the public. Also, session level security is unnecessary in a public publishing environment. Finally, with modern compression techniques, bandwidth isn't wasted.
        • I'm sure their library would give you the data in exactly the format that you need it in, be available for the language that you want it in, the platform that you need it on, and they will continue to update and support every single variety. You would also, therefore, completely trust this closed, 3rd party code that you've now integrated into your product, to not have any bugs or security holes.

          Open data formats are a good thing.
    • Thank you. Someone else who has sanity in this world.

      Another point- XML is not more open. It's only as open as the developer wants it to be. He can use a weird XML schema made to obfuscate (or use an XML schema, ignore the parser, and put the real fields in the data instead of the tags. Ooh, that'd be evil. Watch MS do it with Office) and it becomes as bad as binary. Meanwhile, if a binary format has its format published, it becomes as open as XML claims to be.

      So where exactly is the gain? I'm missing it. Oh, wait- XML is buzzword compliant. And it has an X, and X is cool, look at all the xtreme sports. Bleh.
    • There really needs to be a standard for compiled XML that uses a DTD to replace tags with binary references.

      We would have a standard binary format for information exchange that is small and much easier to create and parse (from a performance standpoint). You could still edit the XML by hand with a decompiler, which would be a VERY trivial editor. Hell... even verification of the data would be trivial. Someone will make one to improve the performance of XML-RPC some day by setting up proxies, and you will be able to achieve DSL results on a modem.
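      A toy sketch of what such a "compiler" might do (my own illustration in Python, not any real standard): take the tag vocabulary a DTD would declare, assign each tag a one-byte code, and substitute it for the textual tags:

```python
import re

# Tag vocabulary, as a DTD would declare it; the byte codes are arbitrary
TAGS = {"person": 0x01, "name": 0x02, "first": 0x03, "last": 0x04}

def compile_xml(doc: str) -> bytes:
    """Replace <tag> with one byte and </tag> with that byte | 0x80."""
    out = bytearray()
    for token in re.split(r"(</?\w+>)", doc):
        m = re.fullmatch(r"<(/?)(\w+)>", token)
        if m:
            code = TAGS[m.group(2)]
            out.append(code | 0x80 if m.group(1) else code)
        else:
            out.extend(token.encode())
    return bytes(out)

doc = "<person><name><first>BOB</first><last>MARTIN</last></name></person>"
binary = compile_xml(doc)
print(len(doc), len(binary))  # the binary form is a fraction of the size
```

Decompiling is just the reverse table lookup, which is why the hand-editing tool would be trivial.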
    • People seem to be saying that XML is a silly bloated INI/config file replacement. XML can be that, but it can also be used as a 'database'.

      Now, I know a lot of people are going to moan about how slow XML is compared to any DB, and they'd be right, at the moment. But there is one thing that XML has that DBs don't, and that's flexibility. You can add new elements to XML as you go. You can't do that with a DB (well, you can, but any DB admin/designer would shoot you for it, and it would be hard); DBs are designed to be strict and uniform.
      Some data types need flexibility, and this is where XML benefits.

      You said that the tools for XML are great, don't you think that could be because of the way XML was designed?

    • Have you ever used Castor [exolab.org]? Its Marshalling Framework [exolab.org] allows you to easily convert between Java classes and XML documents. This means that you can generate Java source code from an XML Schema (but not DTD, I think). Very useful: simply define your object model using XML Schema, and use Castor's Sourcecode Generator to spit out your Java source.
    • I discovered that XML was just a lot of hype about nothing. There is nothing self-describing about it.

      That's a matter of opinion. XML on its own isn't too impressive. It's the other technologies such as XSLT, Schema, XInclude, XPath, SOAP, RelaxNG, XML-RPC and SVG which accompany XML that really make XML a big deal.

      If


      <PERSON>
        <NAME>
          <FIRST>BOB</FIRST>
          <LAST>MARTIN</LAST>
        </NAME>
      </PERSON>


      isn't descriptive, I don't know what is.
  • RelaxNG (Score:2, Interesting)

    by ine8181 ( 618938 )
    RelaxNG vs. W3C Schema makes for a much more interesting discussion. DTD is obsolete in many ways... and most XML parsers support schema now.
  • by mesocyclone ( 80188 ) on Thursday January 23, 2003 @07:20PM (#5146895) Homepage Journal
    XML is a very powerful tool.

    One very important use is in creating interfaces between heterogeneous systems. A readable character set and meaningful tags are very handy for developers. The hierarchical structure is extremely powerful. And, of course, the fact that it is a standard with common tools is invaluable.

    However, one useful principle of such interfaces is "if you don't understand it, ignore it." In other words, when you get a message, look for what you want in it and use it. Ignore anything that isn't what you want. XML is ideally suited for this approach - especially if you use path based access rather than DOM tree traversal.

    This approach to interfaces allows systems to interchange messages without exact version consistency, and without requiring a tight congruence of the applications. It allows a system to "tell what it knows" and another system to "read what it needs" without further ado.
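    A sketch of that "read what it needs" style using Python's standard xml.etree (the message format here is invented): the consumer pulls the two fields it understands by path and silently ignores everything else, so a newer sender doesn't break it:

```python
import xml.etree.ElementTree as ET

# A message from a newer peer, carrying fields this consumer has never seen
message = """<order>
  <id>42</id>
  <total>19.99</total>
  <loyalty-points>7</loyalty-points>
  <gift-wrap>true</gift-wrap>
</order>"""

root = ET.fromstring(message)
# Path-based access: take what we need, ignore the rest without further ado
order_id = root.findtext("id")
total = float(root.findtext("total"))
print(order_id, total)  # 42 19.99
```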

    Unfortunately, the use of schemas goes against this idea. It is IMHO a more old fashioned approach of rigidly constraining the messages to an exact specification. This can make interfaces far less robust and flexible, and increase the amount of work.

    Schema processing may also be promoted to "verify" message integrity before processing. However, it only does so in the most primitive ways. Real world messages, especially in the business world, tend to have integrity rules that go far beyond what can be expressed in anything short of a complex computer program or equivalent declarations.

    I am sure there are plenty of places where schemas make sense, but in the areas of commercial message interchange, they take a powerful and flexible construct and hobble it.
    • Not really (Score:5, Insightful)

      by Codex The Sloth ( 93427 ) on Thursday January 23, 2003 @07:47PM (#5147052)
      This approach to interfaces allows systems to interchange messages without exact version consistency, and without requiring a tight congruence of the applications. It allows a system to "tell what it knows" and another system to "read what it needs" without further ado.

      Unfortunately, the use of schemas goes against this idea. It is IMHO a more old fashioned approach of rigidly constraining the messages to an exact specification. This can make interfaces far less robust and flexible, and increase the amount of work.


      If you're talking about using XML for data messaging, not using schemas is just lazy. XML Schema allows optional elements and attributes and/or default values. So if it isn't required, then just make it optional. If you want multiversion interfaces, you have a different XML Schema for each version. Then each side knows explicitly what the messaging protocol is.
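      For instance, a hand-written fragment (element names invented) marking an element optional with minOccurs="0" and supplying a default attribute value:

```xml
<xs:element name="order">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="id" type="xs:integer"/>
      <!-- optional: older senders may simply omit it -->
      <xs:element name="note" type="xs:string" minOccurs="0"/>
    </xs:sequence>
    <!-- default applied when the sender leaves it out -->
    <xs:attribute name="currency" type="xs:string" default="USD"/>
  </xs:complexType>
</xs:element>
```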

      While it's probably true that things mostly kinda work if the versions don't match, you shouldn't be relying on this. There's lots of software out there that does this but that doesn't mean it's the ideal.

      If you're using XML for markup of documents, schemas are somewhat less useful, since the underlying semantics of the tags is usually more important.
      • Re:Not really (Score:3, Insightful)

        by mesocyclone ( 80188 )
        If you're talking about using XML for data messaging, not using schemas is just lazy. XML Schema allows optional elements and attributes and/or default values. So if it isn't required, then just make it optional. If you want multiversion interfaces, you have a different XML Schema for each version. Then each side knows explicitly what the messaging protocol is.


        Lazy in this circumstance is often good. What you just described is a bunch of work, which translates into *money*. The important question to ask is what is the utility of creating this schema, vs what is the cost of doing so. The answer varies from case to case.

        After all, do I really care that much that a message passes a schema validation? It doesn't tell me that it is valid, since most of the validity is determined by far more complex criteria than can be expressed in a schema. IOW, what you assert about underlying semantics of documents is even more true with business transactions. A schema doesn't *document* those details of the "protocol".

        Furthermore, XML messages (with the exception of configuration files where schema may actually be quite useful) are normally generated by computers, not people. The rules to generate those messages are then embedded in code (or tables, which is code by another name). Once it works, it will usually continue to work. So again, the schema has offered no advantage, while adding bureaucracy.

        As an analogy, consider a schema to be like a syntax checker. It can tell you if the niggling details are right, but it can't tell you whether the program will work. Since in many cases of message exchange the niggling details are not even important, this is often a waste of time!
        • Lazy in this circumstance is often good. What you just described is a bunch of work, which translates into *money*. The important question to ask is what is the utility of creating this schema, vs what is the cost of doing so. The answer varies from case to case.

          Work does translate into *money*, not doing work doesn't translate into *saving money* except maybe in the extremely short term.

          Furthermore, XML messages (with the exception of configuration files where schema may actually be quite useful) are normally generated by computers, not people. The rules to generate those messages are then embedded in code (or tables, which is code by another name). Once it works, it will usually continue to work. So again, the schema has offered no advantage, while adding bureaucracy.

          It's true that XML Schema provides syntactic rather than semantic constraints. But that's *really* useful information. For example, XML Schema allows type checking. Sure, you can just treat everything as a string and ignore the problem. You can also use it to constrain the valid values for something with regular expressions. This allows you to do assertions at the protocol level. Again, I can get away with not using them, but in the long term that's just stupid.
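          For example, a pattern facet (a hand-rolled fragment; the type name and format are invented) that rejects malformed part numbers at the protocol level:

```xml
<!-- accepts values like ABC-1234, rejects everything else -->
<xs:simpleType name="PartNumber">
  <xs:restriction base="xs:string">
    <xs:pattern value="[A-Z]{3}-[0-9]{4}"/>
  </xs:restriction>
</xs:simpleType>
```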

          And if your schema is generated by computer, doesn't that make it more useful, not less? It's like saying that COM/CORBA interfaces are nice but IDL is just pointless niggling...

          As an analogy, consider a schema to be like a syntax checker. It can tell you if the niggling details are right, but it can't tell you whether the program will work. Since in many cases of message exchange the niggling details are not even important, this is often a waste of time!

          Yes, you could consider an XML Schema as a kind of type checking and syntax checking for your XML. It's been my experience that most real problems are niggling details (unless you're doing demoware). Given the broad spectrum of programming tasks out there using XML these days, it would be careless to say that they *all* need Schema (and/or schema validation), which I didn't. But saying that Schemas are always (or for that matter often) a waste of time is IMO a lazy attitude.
    • No they don't (Score:3, Insightful)

      by autopr0n ( 534291 )
      When you're talking about standards you need to have things specified exactly, and schemas give you a standard way to do that. They also allow you to do things like automatically generate code blocks to represent your data in memory, saving developers of data-processing apps a lot of time. And not only that, they create a simple way to communicate between organizations. What would you have people do, look at the XML themselves and guess its structure (which would work about 95% of the time, but that 5% will bite you in the ass when you get something unexpected)?

      And finally, Schemas don't force any of that on you. If you don't need schema support, then don't turn it on in your parser. You can still grab what you need out of the tree. Although you might not be able to throw just anything into it, that's probably a good thing. The last thing the world needs is thousands of tiny, ill-conceived exotic extensions to various Datatypes. It would make achieving universal compatibility a nightmare.

      If your app doesn't need schemas, don't use 'em. If you don't need to validate, don't check 'em. If you need to put more data into your tree, maybe you should rethink what you're doing, or maybe rewrite your schema.
  • If or When? (Score:2, Funny)

    by Anonymous Coward
    When you bend over to take the latest XML Schema, don't forget your SOAP.
  • by UpLateDrinkingCoffee ( 605179 ) on Thursday January 23, 2003 @07:26PM (#5146935)
    I'm wondering, who actually validates their XML at runtime using XML schemas? We do, but most of our XML is used for configuration files where the overhead doesn't affect overall app performance too much (read once at the beginning). One issue we run into is the validation chain: the XML document refers to its schema (accessible via URL on the LAN, hopefully) and those schemas refer to the "master" XSD schema. This is where we have had trouble, because we usually point it to the W3C master... if the internet is down, so is our app!

    It's occurred to me maybe we are being too diligent in actually validating the schema itself, but I'm wondering what others think?

    • It's occurred to me maybe we are being too diligent in actually validating the schema itself, but I'm wondering what others think?

      Maybe. See, at our shop we're a bit lazy and often times our apps don't check validity at all. I think none of our apps really goes beyond the local realm of the validation chain which has its advantages.

      Besides, you should keep a cached copy of the w3c master docs around. They are not changed very often, so you could as well keep them locally forever without having to have internet connection (which also slows everything down).
    • Architecture 101:

      Validate at system boundaries. Once in the system, you no longer need to validate, as it's already been done.

    • by VP ( 32928 ) on Thursday January 23, 2003 @08:01PM (#5147112)
      This is a misunderstanding of the way schema validation is supposed to work. Schemas have what are called "location hints", which should be used in case you have never before encountered a particular namespace. The key word, however, is "hints" - i.e. you should never have to remotely obtain a schema if you don't need to.

      In most cases, if you are doing schema validation, you already know what schema you can expect, so they should be not only locally available, but also cached in memory...

      As for the ..."master" XSD schema... you never ever have to get it remotely - the parser should be implementing it already...
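      For reference, the hint lives in the instance document itself (the namespace and filename here are invented), and a conforming processor is free to ignore it in favor of a local, cached copy:

```xml
<order xmlns="http://example.com/orders"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://example.com/orders orders.xsd">
  <!-- ... -->
</order>
```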
    • I don't use XML much (a bit overhyped, IMHO). However, I do use it in two areas: SOAP (Web Services) and config files. Most web services stuff is coded by the IDE, so I don't deal with that. However, with custom config files (for a web site's menu, a software install program, etc.) I always reference an XSD, though I generally do not validate at runtime (VS.NET warns me if an XML file in my project doesn't validate against the intended XSD). It seems that the only reason (in my experience) to validate against the XSD at runtime is when the XML may be coming from a 3rd party, as opposed to a relatively static document that's part of your project.
  • by M.C. Hampster ( 541262 ) <M,C,TheHampster&gmail,com> on Thursday January 23, 2003 @07:45PM (#5147045) Journal

    One of the greatest things about XML schemas is that they themselves are well-formed XML documents. This makes it a breeze to parse and create XML Schemas. I've just started using XML Schemas in development for the past few months, and they are fantastic. A huge improvement over both DTD and XDR (Microsoft's temporary schema format until XML Schemas came out).

  • Rather than directly having DOM and some xml files etc, what do people think of having applications talk via SDAI?
  • Some people need to do better jobs thinking up domain names.
  • by Osty ( 16825 ) on Thursday January 23, 2003 @07:59PM (#5147102)

    I can't believe nobody's mentioned this yet. Microsoft has a tool [microsoft.com] that will do several things:

    • Generate an XML Schema from an XDR or XML file.
    • Generate source code (C#, VB, or JScript) from an XSD file (XML Schema file).
    • Generate XML Schemas for one or more types in a given assembly file.

    This makes writing your XSD almost trivial. The code-generation capabilities are very powerful, as well, as you can generate runtime classes for serialization/deserialization or classes derived from DataSet so you can treat XML files like any other database, etc. It's very useful if you're doing any .NET framework programming.

    I'd be very surprised if there weren't other tools out there doing similar things. I simply mentioned xsd.exe because that's what I'm familiar with.

    • It's also great for documentation. I use XML for a lot of config files (as opposed to "ini", "conf" or the dreaded registry). Many times I forget all of the attributes or tags for my configs, but with an XSD I can not only look them up, but VS.NET uses IntelliSense to help me quickly code my XML.
  • by Charles Dodgeson ( 248492 ) <jeffrey@goldmark.org> on Thursday January 23, 2003 @08:14PM (#5147196) Homepage Journal
    <Cranky Pedantic Geezer Mode>
    When I was in school, the plural of "schema" was "schemata".
    </>

    I've already selected "No Karma Bonus". Beyond that I can't mod myself downward.

  • by Animats ( 122034 ) on Thursday January 23, 2003 @09:04PM (#5147476) Homepage
    It's actually possible to parse even SGML without a DTD, most of the time. I do this routinely in the SEC filing parser behind Downside [downside.com]. SEC filings come as a horrible mix of SGML and HTML, with occasional uuencoded PDFs and images. The SEC's validation is very light, and isn't based on a DTD. What comes through is a mess.

    The key to robust parsing is deferring the decision as to whether a tag has a closing tag until you've seen enough input to know. You have to read the whole document in, build a tree, then work on the tree, but for anything serious you want to do that anyway.

    This parser is in Perl. If anyone would like to take it over and put it on CPAN, let me know.

  • only partially agree (Score:3, Interesting)

    by u19925 ( 613350 ) on Thursday January 23, 2003 @09:27PM (#5147677)
    The thing that I don't like about DTDs as well as schemas is that they flag documents as invalid if they contain extra stuff. I guess there should be a validation mode which flags a document as valid as long as it contains at least the stuff that I need (and ignores additional stuff). E.g. a document may contain book name, author and price. I may be interested only in name and price. Why should I consider such a document to be invalid? Also, why should I validate whether the author name is in the correct format? Can I just apply a partial DTD (which contains only name and price) and ask the parser to validate the doc? Not at the moment.

    I don't agree with you that schema validation is useless. In many cases the documents are fully processed for business rules much later, but you want acknowledgement that your document has arrived correctly and passes at least the most basic validation (e.g. DTD or schema validation). XML Schemas do a wonderful job at that. In our case, we always keep schema validation on for new doc types until the system is stable and bug free, and then remove validation for efficiency (for internal docs). We have discovered many subtle bugs in the system which would have been extremely hard to track down by looking at application errors but were easier to find by looking at parser errors.

    • The thing that I don't like about DTDs as well as schemas is that they flag documents as invalid if they contain extra stuff.

      One of the benefits of schema validation is that it is not a "yes/no" result like DTD validation is. When properly using the PSVI (Post Schema Validation Infoset) you can achieve exactly the results that you want - you will know if the parts of the XML instance that you are interested in are there, constrained by the partial schema that you provide...
  • by Euphonious Coward ( 189818 ) on Thursday January 23, 2003 @09:31PM (#5147702)
    I gather that Relax-NG is on track to become an ISO Standard. Regardless of what happens with W3C, the ISO's XML schema based on Relax-NG won't go away. Given its natural advantages -- including the enormously greater ease of implementing it -- we might expect to see many more tools built around it.

    It would be somewhat unfortunate if both end up popular, because it will be more work to maintain both sets of tools than either one alone. That's probably what will happen, though, at least in the short term.

    • I was going to mention RELAX, but since your post is already here I'll just add a few links:

      Official?? site of RELAX [xml.gr.jp] (RELAX earthling! we come in peace!)

      OASIS on Relax-ng [oasis-open.org] (much more dry).

      I'm not sure it would be so bad if both standards came to be popular. A few years ago at an XML conference one of the speakers described the XML world being split into three camps - data modelers (who would be backing XML-Schema), Document-centric folks (who would back RELAX), and one other group (whose leanings I forget but I guess they don't care about typed XML documents!!). Having a data-centric and document-centric approach to XML might not be so bad, each having good uses in different scenarios.
  • XML and Schemas (Score:2, Interesting)

    by Zebra_X ( 13249 )
    The value of XML is not the structure of the data. The tags, nodes, elements and attributes are just another format for parsing data. The power comes with the ability to VALIDATE the format. No other data exchange format has such an integrated approach to assuring the validity AND structure of data. Also, the hierarchical nature of XML makes it ideally suited to most information sets. It also takes the organization of relational data to another level, because node groupings inherently define a relationship between the information that is contained in the document. XML as just XML isn't that special, but the ability to nest information and validate the structure makes XML a more *reliable* data format. In the world of CSV files, ini files, Excel spreadsheets and the like, it is a welcome change. As the tools evolve to take the complexity out of creating things such as schemas, XML's potential as an interchange format will be fully realized. As for its verbosity, it is needed. The less structure, the more the format is left open to interpretation.
  • by sbwoodside ( 134679 ) <sbwoodside@yahoo.com> on Friday January 24, 2003 @04:30AM (#5149538) Homepage
    I recently decided to go with RNG for my schemas after reading up on W3C XML Schema (WXS) and Relax NG (RNG). RNG is just so much easier to read and understand. The real clincher for me was the inability in WXS 1.0 to describe non-deterministic structures. I mean, give me a break. I can't allow people to put the elements in a different order? That's just lame.
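    For a taste of the difference, the same sort of constraint in RNG's compact syntax reads almost like a grammar (element names invented):

```
element person {
  element name {
    element first { text },
    element last { text }
  },
  element email { text }?
}
```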

    What's more there's a fantastic tool dtdinst [thaiopensource.com] that converts DTDs into Relax NG. There's also tools to convert back and forth between WXS and RNG. So if I ever need to provide someone with a WXS schema I can just run it off automatically.

    Now I'm working on a system using AxKit [axkit.org] to parse out the RNG schema, generate HTML forms for completion, roundtrip the data back to the server, assemble an instance document using DOM and display it using XSLT and CSS. But that's another story. People who don't "get" XML should really check out AxKit.

    simon
