Tim Bray On The Origin Of XML

gManZboy writes "Queue just posted an interview with XML co-inventor Tim Bray (currently at Sun Microsystems). Interestingly enough, the interviewer is none other than database pioneer Jim Gray (currently at Microsoft). Among other things, in their discussion Tim reveals where the idea for XML actually came from: Tim's work on the OED at Waterloo."
  • by Anonymous Coward on Friday March 18, 2005 @10:38PM (#11981911)
We all know Microsoft invented XML; how else could they have filed a patent for it? :)
    • by Anonymous Coward
      I thought it was Al Gore who invented XML.
    • by tabkey12 ( 851759 ) on Friday March 18, 2005 @10:59PM (#11982028) Homepage
      JG I assume that the burning issue was keeping it simple.

      TB And we missed. XML is a lot more complex than it really needs to be. It's just unkludgy enough to make it over the goal line. The burning issues? People were already starting to talk about using the Web for various kinds of machine-to-machine transactions and for doing a lot of automated processing of the things that were going through the pipes.

      Amazingly, for such a popular method of 'communication' between and within applications, XML is admitted by most to be rather flawed and bulky...

      • by Camel Pilot ( 78781 ) on Friday March 18, 2005 @11:18PM (#11982116) Homepage Journal
I'm currently working on a project that is doing machine-to-machine transactions. We started off using XML to bundle and unbundle the data. However, as the data rates went up, performance went south.

        Some bright bunny came up with the idea of using perl stringified data structures instead using Data::Dumper.

On the receiving end the data structure is Safe-eval'ed and voilà, there is the data - orders of magnitude faster, and you still have the ability to read or edit the data in a text editor.

XML is just a representation of hierarchical data via named parameters and lists. Perl (or Python if you want) is very adept at parsing code strings.

        Also with code structures you can add dynamic functionality like

'rsv_time' => localtime(time)

        which you can't with XML...
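For the curious, a minimal sketch of the round trip described above (Data::Dumper and Safe are the real stock modules; the message contents are invented):

use Data::Dumper;
use Safe;

# Sender: serialize a nested Perl structure to Perl source text
my $msg = {
    rsv_time => scalar localtime(),
    samples  => [ 1.5, 2.5, 3.5 ],
};
my $wire = Dumper($msg);          # yields the text "$VAR1 = { ... };"

# Receiver: evaluate the text inside an opcode-restricted compartment
my $safe = Safe->new;
my $data = $safe->reval($wire);   # returns the reconstructed hashref
die "bad payload: $@" if $@;
print $data->{samples}[1], "\n";  # prints 2.5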
        • Some bright bunny came up with the idea of using perl stringified data structures instead using Data::Dumper.

          Uhh.. that's one of the things that Data::Dumper was designed to do.
          • I think you may have misread. He said "blah blah blah instead using Data::Dumper", not "blah blah blah instead of using Data::Dumper".

            If you haven't misread, your post was a little unclear, but I thought I'd respond by posting instead of with a nondescript "Overrated" mod.
People should use Common Lisp S-expressions instead of XML. S-expressions have the advantage that they have basic datatypes built into the format (strings, lists, ints, floats, symbols), and the namespace model is much more straightforward.
Why bother with XML when it's just NIH syndrome as applied to S-expressions?
If you really are passing around so much data that it's straining the network, you might consider compressing the data and having the program uncompress it before processing. Using Perl stringified data structures may work for you when interfacing with your own systems, but if you have to interface your systems with other people's systems, there are two standards: XML and CSV.
XML is just a representation of hierarchical data via named parameters and lists.

          It is far more than that.

          It conforms to a standard. It allows its format to be extended in standard ways without breaking the original meaning. It has rules for allowing internationalisation. Also, there are a large number of efficient parsers and processors already written for it in almost every language.

          Also with code structures you can add dynamic functionality like

'rsv_time' => localtime(time)


          The XML dialect known as
      • YAML!

        A little fodder for the lameness filter
        • Last time I tried the Perl YAML module I could generate a pathological perl data structure (strings designed to look suspiciously like bits of YAML) and corrupt the output sufficiently that it didn't parse back into the same data structure.

          This was a bit over a year ago.

          I'm sorry, but I'm just not interested in using a format where I can't rely on it being clean enough to even pass printable text cleanly through a conversion and back again. Get back to me when you've got a format which isn't a crock of s
      • Amazingly, for such a popular method of 'communication' between and within applications, XML is admitted by most to be rather flawed and bulky...

Yep. That didn't stop Microsoft from adding even more weight to it by creating SOAP, though. Now there's a bulky format. It's like shipping a shirt button in a container on an oil tanker.

That depends on what you're transacting. Plus, there's a forest-for-the-trees issue here. We're already using a close relative of XML for most HTTP transactions - that is, HTML. A move to XML standards simply opens up a huge array of opportunities for robotic transactions, as well as leaving the field relatively wide open for web developers of traditional varieties. It's a positive good; RSS is an obvious example of why.
    • Re:OH come on.. (Score:3, Informative)

      by Mistlefoot ( 636417 )
Microsoft is not applying for a patent on XML itself but rather a patent

      " that cover word processing documents stored in the XML (Extensible Markup Language) format. The proposed patent would cover methods for an application other than the original word processor to access data in the document."

<URL:http://news.com.com/2100-1013_3-5146581.html>
  • by peculiarmethod ( 301094 ) on Friday March 18, 2005 @10:41PM (#11981927) Journal
    < td padding="5px" > I'm < td >
    • by Segway Ninja ( 777415 ) on Friday March 18, 2005 @10:59PM (#11982029)
      You should be in a padded cell, but someone forgot to close it.
    • by Anonymous Coward on Friday March 18, 2005 @11:15PM (#11982102)
More correctly, that should read as follows in, say, riddle.html (notice the closing </td>):

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
      <head>
      <title>Riddle</title>
      <link rel="stylesheet" href="/design/default.css" type="text/css" title="Default Stylesheet" />
      </head>
      <body>
      <table>
      <tr>
      <td class="example">I'm</td>
      </tr>
      </table>
      <p class="W3C">
      <a class="debug external" href="http://validator.w3.org/check?uri=referer">< img class="debug" src="http://www.w3.org/Icons/valid-xhtml11" alt="Valid XHTML 1.1!" /></a>
      <a class="debug external" href="http://jigsaw.w3.org/css-validator/check/ref erer"><img class="debug" src="http://jigsaw.w3.org/css-validator/images/vcs s" alt="Valid CSS!" /></a>
      </p>
      </body>
      </html>

      With a corresponding /design/default.css like:
      td.example { padding: 5px; }
      p.W3C { display: none; }

Additionally you should take care that your .htaccess includes the following (so that application/xhtml+xml is served only to browsers that accept it, and IE & Co. get plain text/html):
      RewriteEngine on
      RewriteBase /
      RewriteCond %{HTTP_ACCEPT} application/xhtml\+xml
      RewriteCond %{HTTP_ACCEPT} !application/xhtml\+xml\s*;\s*q=0
      RewriteCond %{REQUEST_FILENAME} \.html$
      RewriteCond %{THE_REQUEST} HTTP/1\.1
      RewriteRule .* - [T=application/xhtml+xml]

Of course there's a serious lack of meta-data here. The padding should be given in cm (or any other absolute measure) or em, and it's not fulfilling the W3C Accessibility Guidelines... :-P

And now I need to overcome the Lameness filter, oh dear... I assume it's the whitespace which I used for indentation. *shrugs* It doesn't help so far; sometimes I wonder how I'm supposed to write real comments including code examples here. Slashdot sure seems stupid sometimes.
      • You know, it's not Slashdot's fault you can't read. Whack "reply", and you'll see:

        Allowed HTML <B> <I> <P> <A> <LI> <OL> <UL> <EM> <BR> <TT> <STRONG> <BLOCKQUOTE> <DIV> <ECODE> <DL> <DT> <DD> <CITE> (Use "ECODE" instead of "PRE" or "CODE".)

        And note that I used ECODE to show that.

        Ecode isn't perfect,

        but
        it
        does
        preserve
        indentation without
        histrionics.

        And it preserves brackets:

        <

  • SGML (Score:3, Interesting)

    by Anonymous Coward on Friday March 18, 2005 @10:50PM (#11981980)
    I think it's very funny that XML looks like it is based on SGML.

    But according to the interview, it seems that the similarities are merely coincidental.

I do believe that XML is a "simplified" subset of SGML; same with HTML.
    • Re:SGML (Score:3, Informative)

      by smallpaul ( 65919 )

      XML is defined as a subset of SGML. From the specification [w3.org]:

      "The Extensible Markup Language (XML) is a subset of SGML that is completely described in this document."

  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Friday March 18, 2005 @10:50PM (#11981982) Journal
    How's that old saying go?

    Those that do not understand Lisp are doomed to reinvent it, badly.

    Why can't someone reinvent C so that it sucks less?
    • by r2q2 ( 50527 )
I believe you are referring to Greenspun's Tenth Rule: http://c2.com/cgi/wiki?GreenspunsTenthRuleOfProgramming
    • Please explain (Score:3, Insightful)

      by johannesg ( 664142 )
      I've heard this quote in relation to XML before, and I don't get it. LISP is a programming language. XML is a method for storing data. About the only relation between the two that I can find is that both use nesting. So, why does this get brought up whenever XML is being discussed?
      • Re:Please explain (Score:5, Informative)

        by Anonymous Coward on Saturday March 19, 2005 @06:23AM (#11983433)
        johannesg writes: "I've heard this quote in relation to XML before, and I don't get it. LISP is a programming language. XML is a method for storing data. About the only relation between the two that I can find is that both use nesting. So, why does this get brought up whenever XML is being discussed?"

        Lisp source code is first parsed into S-expressions before being compiled. The programmer can manipulate these S-expressions to generate new programming constructs.

        S-expressions are nested lists of dynamically typed data. The compiler turns these nested lists into bytecode or assembly code. But before this happens you're able to manipulate a well defined, concise and platform independent data format. The format is so useful that it is also used to store and transport non-code.

        Here's a Lisp function call nested within another function call:

        (/ (+ 1 2 3) 6)

        [i.e. add 1, 2, and 3 together and then divide by 6] Let's first give different names to the function operators:

        (divide (plus 1 2 3) 6)

        Now introduce redundancy by duplicating the opening function names:

        (divide (plus 1 2 3 /plus) 6 /divide)

Translate the dynamically typed integers to explicit type identifiers:

        (divide (plus (integer 1 /integer) (integer 2 /integer) (integer 3 /integer)) (integer 6 /integer) /divide)

        Now convert the parentheses and spaces to angle brackets to generate XML:

        <divide>
        <plus>
        <integer>1</integer>
        <integer>2</integer>
        <integer>3</integer>
        </plus>
        <integer>6</integer>
        </divide>

        Lisp S-expressions are a method for storing/expressing data AND code. They have less overhead than XML, solve more problems than XML (comfortably human readable programming languages can also be written in S-expressions, e.g. Scheme and Common Lisp) and they were invented decades earlier.

        Regards,
        Adam Warner
  • From the "Jim Gray" link:

    Jim Gray is a "Distinguished Engineer" in Microsoft's Scaleable Servers Research Group and manager of Microsoft's Bay Area Research Center (BARC).

OK, Xerox has their famous Palo Alto Research Center (PARC), so Microsoft just has to have its own similarly named center in the same general vicinity. Sheesh!

  • Oh boy... (Score:2, Insightful)

    So this guy Tim Bray is one of the people we have to thank for replacing compact, binary config files with 'human-readible', resource-intensive XML, that needs specialized libraries to make sense of it?

    Thanks Tim, the world owes you one!

    But okay you're right, you gotta use those CPU cycles for something...

    --Don't give the world what it asks for, but what it needs.

    • Re:Oh boy... (Score:5, Insightful)

      by MrLint ( 519792 ) on Friday March 18, 2005 @11:25PM (#11982136) Journal
Umm, doesn't any kind of config file require specialized code to read it?

As you either need metadata to interpret the binary data or need to know the predetermined data layout, that sounds kinda specialized to me.

The other option is plain text with encoded binary data. This isn't bad - it's human readable, kinda - but it doesn't explain the encoded binary data; metadata is also needed. I can think of xinitrc files and old .ini files from Win16. It has to be parsed as plain text, with no guarantee of best practice or anything.

XML: well, human readable, some meta info, still encoded binary data. The bonus here is that the layout has at least some kind of standard to adhere to, and it's possible in theory for one XML parser to read any arbitrary XML file.

So in any case you make a deal with Faust: either not human readable, or something that needs to be parsed.
    • Re:Oh boy... (Score:5, Insightful)

      by Alomex ( 148003 ) on Friday March 18, 2005 @11:26PM (#11982148) Homepage
      Try making sense of your "compact binary config files" when something goes wrong, or when you want to port the config to a different application.

Yes, CPU cycles are cheap. CPUs sit idle over 90% of the time, even when there is a user in front of them. Spending the extra power processing 10K properly tagged files that are compatible across platforms, rather than incompatible binary files, is one of the best uses of raw CPU power we have.

      • Re:Oh boy... (Score:2, Interesting)

        Idle 90% of the time, but swamped for the 10% of the time you're waiting on results.

We need to shift applications from an event-compute-display model to a predict-compute-event-display model.

        Caching data and intermediate data structures helps. Possibly even pre-computing them, when available memory permits.

        For example, let's say you've just entered a formula into a spreadsheet. The spreadsheet app can prepare the results of what would happen if you, for example, filled a row or column of cells with the
When I work with XML in Java, I generally just pass the XML through a GZIP stream. Need to see the file contents? zcat. XML compresses well since it's repetitive text. Lately I've been doing a lot of XUL code with PHP/Smarty as the back end, and again, I transparently gzip this...

So, this solves the problem of the size of the XML to be stored on disk or transmitted over the network... The only difference is parsing. Again, when I'm in Java, I use Piccolo to parse the XML -- it uses a lexical analyze
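The same transparent-compression trick is a couple of lines in most languages. A rough Perl equivalent, as a sketch (IO::Compress::Gzip and IO::Uncompress::Gunzip are the stock modules; the document string is invented):

use IO::Compress::Gzip qw(gzip $GzipError);
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

# Repetitive markup compresses very well
my $xml = '<doc>' . ('<item>some repetitive text</item>' x 50) . '</doc>';

gzip \$xml => \my $packed or die "gzip failed: $GzipError";
gunzip \$packed => \my $roundtrip or die "gunzip failed: $GunzipError";

printf "%d bytes raw, %d bytes gzipped\n", length $xml, length $packed;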
      • Re:Oh boy... (Score:3, Insightful)

        There may be a lot of spare compute cycles about, but what is critical is the ability to process XML in a timely manner on the CPU power that happens to be available at that precise instant in time at the appropriate location. Looking at the average CPU cycles used is like sitting in a traffic jam at 8am and noting that, on average, the road you are on is only used at 10% capacity. It being free at 4am is not much good if you are trying to get to work for 9am.
    • Re:Oh boy... (Score:5, Insightful)

      by Laxitive ( 10360 ) on Friday March 18, 2005 @11:32PM (#11982166) Journal
      Uhm, sorry, do you even know what the hell you're talking about?

      Let's dissect this piece by piece.

      >> "So this guy Tim Bray is one of the people we have to thank for replacing compact, binary config files"

      Who the hell said anything about config files?

      And we have tools to make things "compact" for us. It's called "compression".

      >> "with 'human-readible', resource-intensive XML, that needs specialized libraries to make sense of it? "

      Yes. Human readable. I'm a human. I can read it. Thus: Human readable. I don't understand what the quotes were for. Or your misspelling of "readable".

      And "specialized libraries"? Oh, right.. I forgot. Binary formats don't NEED libraries to parse. Yep. Dunno why libjpeg62 even exists, when it's patently obvious you can just dump jpeg data straight to video memory. Oh yeah, who needs Microsoft Word. I just "cat resume.doc >/dev/lp" to print my documents. Cause it's binary you see. I don't need a library to parse it.

      >> "Thanks Tim, the world owes you one!

      But okay you're right, you gotta use those CPU cycles for something... "

      No shit sherlock. Using CPU cycles to strictly check the type-validity of self-describing documents seems pretty worthwhile to me.

      -Laxitive
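The type-validity point above is easy to demonstrate. A small sketch using the stock XML::LibXML module (the config document and its inline DTD are invented):

use XML::LibXML;

my $conf = <<'XML';
<?xml version="1.0"?>
<!DOCTYPE config [
  <!ELEMENT config (port)>
  <!ELEMENT port (#PCDATA)>
]>
<config><port>8080</port></config>
XML

my $doc = XML::LibXML->load_xml(string => $conf);
# is_valid() checks the document against its own DTD
print $doc->is_valid ? "valid\n" : "invalid\n";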
    • Re:Oh boy... (Score:4, Interesting)

      by Evil Grinn ( 223934 ) on Friday March 18, 2005 @11:46PM (#11982227)
      replacing compact, binary config files with 'human-readible', resource-intensive XML

Like what, the Windows registry? Don't say shit like that or ESR will shoot you with one of those guns he collects.

http://www.faqs.org/docs/artu/ch03s01.html#id2888298
    • So this guy Tim Bray is one of the people we have to thank for replacing compact, binary config files with 'human-readible', resource-intensive XML, that needs specialized libraries to make sense of it?

      No.

      Unless you're the kind of guy who likes to blame Henry Ford for the drive-by shooting.
  • well... (Score:2, Interesting)

    by rune2 ( 547599 )
    I was damned by [GNU Project founder] Richard Stallman in egregiously profane language for working on it.

    Why do I not find this hard to believe...
  • "database pioneer ... (currently at Microsoft)" translated for slashdot readers: "sellout"
  • by wolfgang_spangler ( 40539 ) on Friday March 18, 2005 @11:02PM (#11982048)
    Gray interviews Bray, should have done it in May. Over by the bay.

Is my karma burning? Oh, what the hay.

  • by TimeTraveler1884 ( 832874 ) on Friday March 18, 2005 @11:04PM (#11982056)
That's hogwash. Everyone knows that the idea for XML came from the tablets of stone that Moses brought down from Mount Sinai. In these tablets were the beginnings of self-describing data. That alone was where the commandments of the W3C were originally sent out to the world.

    But only in the last decade have scholars used transformation style sheets and super-computers to find more declarative complex types, hidden in the original Hebrew CDATA. It is thought there are tens if not hundreds of specifications in these texts that may never have a finalized draft.

Progress has been slow; while the discovery of SOAP in the 1800s has made the hygiene of data possible, there is much that has yet to be standardized. Considering the aging DTD schemas left from the era of King James, it will be crucial to the data-exchange of humanity to uncover more secrets of XML.

  • by Anonymous Coward on Friday March 18, 2005 @11:18PM (#11982112)

    I work with XML every day. And every day I wonder the same thing: why the hell does the end tag name have to be repeated? Why can't it just be optional? In other words, why can't it just be abbreviated as: <tagname>data</> ?

    Oh MAN I wish they could have done just that one little thing for us. It would cut our datagram size down by at least 30%, maybe more.

    • I hadn't thought about that. Very insightful.

      There has got to be a reason though. Maybe that validation wouldn't be as good or something like that?

      That's the only thing I can think of. With the notation you can tell that something is wrong, but not necessarily where.
      • Not Very insightful! (Score:2, Informative)

        by stevens ( 84346 )

        I hadn't thought about that. Very insightful.

        Lots of people have thought about it. Not Very Insightful.

        The reason is that if the parser encounters unbalanced end-tags, and they're all just </>, the parser will go farther and get very confused before it dies.

        It will be very difficult to pinpoint *which* tag isn't closed, like C's optional {} after an if(), or SGML's optional closing tags.

        It's much easier to correct if your parser can say "You forgot to close <account> on line 115" rather

    • by Alomex ( 148003 ) on Saturday March 19, 2005 @12:02AM (#11982292) Homepage
      why the hell does the end tag name have to be repeated?

      Because that is the single biggest source of headaches in parsing SGML, the precursor of XML, in which such a construct is allowed.

      It also makes error recovery very difficult, something that we know is quite important from all that malformed HTML code out there. The XML creators knew that too.
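The payoff is easy to see in practice: with named end tags, a parser can report exactly which element is unbalanced. A quick sketch with the stock XML::LibXML module (the sample document is made up):

use XML::LibXML;

# <account> is opened but never closed
my $broken = '<accounts><account><id>7</id></accounts>';

eval { XML::LibXML->load_xml(string => $broken) };
print $@;   # names the mismatched pair, something like:
            # "Opening and ending tag mismatch: account line 1 and accounts"

With anonymous </> end tags, the best a parser could do is report that something, somewhere, is unbalanced.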

    • I work with XML every day. And every day I wonder the same thing: why the hell does the end tag name have to be repeated? Why can't it just be optional? In other words, why can't it just be abbreviated as: <tagname>data</> ?
      Same thing for me, although I'd rather have C-like blocks e.g. {data} so it's easier to jump from one side to the other (as any good editor will allow you to do). And quoting could be made easier, too (Come on, &lt;? What were they thinkin?!). The only advantage of not u
    • If more than 60% of your datagram size is element names, your element names are too long. Or you're using nested elements when you should be using attributes.
    • Yeah that'd work great if you knew 100% of the time that you'd never get bad data. If you've got a multi-nested element hierarchy however and you lose one or two of your </>, how do you know where to put them back in? It's very easy to look for an opening <element> tag followed by a closing tag of the same name, especially when building a parser that error-checks.

      You know what would cut down the datagram size more? Smaller tag names. Tag names don't have to be readable so much as uniquely ident
      • You know what would cut down the datagram size more? Smaller tag names.

        Not really, because any protocols that exchange large amounts of XML data should be compressing the data anyway, right?
    • Explicitness (Score:3, Insightful)

      by samael ( 12612 )
      Because it would make spotting your bug harder. Did you _mean_ to close that tag, or did you think you were closing a different tag? If all closing tags look the same it would make tracing certain bugs harder.
    • Or even (:tagname data)?

      Get this man a copy of Practical Common Lisp [gigamonkeys.com]!

  • by Saeed al-Sahaf ( 665390 ) on Friday March 18, 2005 @11:29PM (#11982155) Homepage
    Jim Gray interviews Tim Bray Right, sure.

    Have you ever seen these guys in the same room at the same time? No? I thought as much.

  • by Anonymous Coward on Friday March 18, 2005 @11:36PM (#11982183)
    You know, the people who invented XML were a bunch of publishing technology geeks, and we really thought we were doing the smart document format for the future. Little did we know that it was going to be used for syndicated news feeds and purchase orders.

The most amazing thing is that back then in 1995-1996 at Open Text we were already using SGML as a data exchange protocol. All of us there (including Tim) ought to have known that XML would also have a life as a computer-to-computer communication protocol. The problem was that at the time so much of the SGML discourse was wrapped around the content-versus-format debate that we missed the obvious: the main use of XML was not as a replacement for HTML as a text format for the web, but as a kind of uber-ASCII to allow the ready exchange of data between dissimilar applications (just as ASCII in its time had eased the transfer of data between dissimilar hardware and/or software platforms).
  • by Alomex ( 148003 ) on Friday March 18, 2005 @11:45PM (#11982223) Homepage
    TB: I spent two years sitting on the Web consortium's technical architecture group, on the phone every week and face-to-face several times a year with Tim Berners-Lee. To this day, I remain fairly unconvinced of the core Semantic Web proposition.

    Everyone who has actually done work on knowledge representation in the real world knows that this is a huge, difficult problem, unlikely to be solved anytime soon, as Tim Bray claims.

    The only people who claim otherwise are either frauds or ignorant. The Semantic Web initiative has both: Tim Berners-Lee is very smart, but not a computer scientist, so he's not aware of the size of the challenge, plus he's a genuinely nice person, so he tends to trust others too much.

He has surrounded himself with the snake-oil AI salesmen from the early 1980s who had promised us impending ubiquitous intelligent computers. Those fraudsters got found out back then, and spent the next fifteen years in academic limbo, only to be rescued by Tim Berners-Lee's naivete.

I work with Tim Bray, but I seriously disagree with this position of his. If you had gone back to the days before XML was invented you could have made exactly the same argument against XML: "SGML was not a success, therefore XML can't be". I have blogged [bblfish.net] about this fallacious argument at length. You can work with the Semantic Web without having to take on the most difficult problems of AI. You can use it to work on some really simple problems very effectively. Speaking of "frauds", "ignorants" and "snake
    • The Semantic Web, Syllogism, and Worldview [shirky.com]

      "metadata is just data with unstandard interfaces"; read, write, and hierarchical file namespaces rule [bell-labs.com]
    • If we're going to categorise the web then a fuzzy definition set with multiple overlapping definitions is going to be necessary. I suspect that del.icio.us is going to be the first step in this direction - link it into google and you've got a good stab at understanding what concepts web pages are actually connected to.
    • by Jagasian ( 129329 ) on Saturday March 19, 2005 @02:15PM (#11985470)
If your post could be modded above a "5", I would mod your post as "insightful". I guess people have no memory, and that is why these Semantic Web frauds get grants, venture capital, etc. They have these big promises of seamlessly integrating web services... AUTOMATICALLY?!?!

      The easiest way to disprove their crap is this. Even in RDF or OWL, it is possible to have "semantic aliasing", i.e. multiple ways of representing the same concept. This is exactly the core problem that they claim they address and that they claim that XML does not address. Think about it, how can automated inferences be made, if two concepts have distinct _semantic_ (not just syntactic) representations? Furthermore, it can be shown that in general these different representations cannot be automatically determined to represent the same thing.

So their entire project is a farce! It is a bunch of people that are ignorant of pertinent theoretical mathematical results on computability and completeness - and, hell, of the fact that even in axiomatic set theory there are multiple ways to represent... say... the real numbers - and they are also ignorant of practical computer/software engineering and sociological limitations.

They have stop-gaps: ontologies. Oh, if only people could agree on one common unified ontology, the entire semantic aliasing problem would be solved... or so they seem to think. But just because people agree on a common vocabulary, the way it is used can still give rise to the semantic aliasing problem. So not only is agreeing on some complete or near-complete ontology going to be IMPOSSIBLE, but even if it were done, it still wouldn't fix the deep underlying problems with the Semantic Web - problems that have been struggled with for over 100 years in the field of formal mathematics.
It drives me up the wall that my employer is using XML to let parts of their own application communicate with other parts. DTDs are not used, and all parts still need to be modified/recompiled whenever one of them changes. The same people maintain both ends of the communication.

Theirs is, in reality, a proprietary format, but to stay buzzword-compliant they use XML, which hurts performance -- sometimes dearly...

For example, to pass a couple of thousand floating-point numbers from the front end to a computation engine, each is converted to a text string with something like <Parameter> around it. The giant strings (memory is cheap, right?) are kept in memory until the whole collection is ready to be sent out... The engine then parses the arriving XML and fills out the array of doubles for processing.

    It really is disgusting, especially since freely available alternatives exist... For instance, PVM solved the problem of efficiently passing datasets between computers a decade ago, but nooo, we only studied XML in college -- and it is, like, really cool, dude...
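For a sense of scale, a back-of-envelope Perl comparison (the <Parameter> tag name is borrowed from the post above; the values are random):

# 2000 doubles: native binary via pack() versus tagged text
my @vals = map { rand() * 1e6 } 1 .. 2000;

my $binary = pack 'd*', @vals;    # 8 bytes per double, no precision loss
my $text   = join "\n", map { "<Parameter>$_</Parameter>" } @vals;

printf "binary: %d bytes, tagged text: %d bytes\n",
    length $binary, length $text;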

It drives me up the wall that my employer is using XML to let parts of their own application communicate with other parts. DTDs are not used, and all parts still need to be modified/recompiled whenever one of them changes.

Then you are not using XML right. For one, the format shouldn't be changing much; if it is, clearly you guys are spending too much time coding and not enough thinking. Second, any application that does not use the new attribute should be able to ignore it without any compilation change. Th
      • Then you are not using XML right.

Does anybody? I guess not...

        clearly you guys are spending too much time coding and not enough thinking

        No disagreement here -- that was my point, in fact.

        two thousand floating points ain't a giant string, unless you are programming an 8086 in Elbonia.

I just tested simply sprintf-ing the same double 2000 times into the same text buffer on a PII Xeon @ 450MHz with 2MB of L2 cache; the whole program and the puny buffer are entirely in cache (which is not the case in real life). 5-16 milliseconds (of user time, ignoring the sys time)... The PII is not much slower than the Sparcs we are using. Even if the latest and greatest CPUs are 10 times faster (which they aren't), why waste their power on chewing XML tags?

        Converting two thousand numbers to text should take 50 microseconds at the most.

Now add the time to parse it on the other end, and consider that the whole point of passing it is to have some computations happen. And the computations themselves take about 200 milliseconds...

Now realize that the size of the XML file is 3-4 times bigger than it needs to be -- the network packets are still 1500 bytes, and with XML we need 5 or 6 of them (at best) instead of 2. Bandwidth is cheap, but latency is not...

        Now throw in the loss of precision from the double-text-double conversion(s) and climb up the wall next to me...

Using XML in such scenarios is like overnighting papers from one end of the office floor to the other. Defending this practice is like saying that FedEx is really fast and efficient everywhere except in Elbonia...

  • by Anonymous Coward on Saturday March 19, 2005 @12:20AM (#11982372)

    I think XML should have looked more like this:

    (html
    (head
    (title "This is an example"))
    (body
    (h1 "A first level header")
    (p "There's no reason for all the extra characters.")
    (p "Although this looks like LISPy HTML it could have all the features of XML")))
    • Yes, I think this definitely looks more sensible. It would have reduced the size of documents considerably and it does look cleaner.

Consider an XML snippet:

      <sampletag name="this" type="that">
      Some value
      </sampletag>

      This could be translated into
(sampletag [name="this"] [type="that"] Some value)

      which is much smaller.

      I wonder if someone will consider this for real
    • Sounds great... but then this happens

      (html
      (head
      (title "This is an example")
      (body
      (h1 "A first level header")
      (p "There's no reason for all the extra characters.")
      (p "Although this looks like LISPy HTML it could have all the features of XML")))

      Now your entire webpage is blank. What happened?
    • Okey dokey.

First, let's add matching close parens to make error detection easier. (It might be handy to have a way (such as the "/") to indicate an end tag, but we're going for brevity here.)
      (html ... html)

Now let's add attributes. It is probably most convenient to put these in a list right after the element name. Obviously, if there are no attributes, we need to put in an empty list so the parsing won't be ambiguous:
      (html '() (head '() (style '( type."text/css" ...) style) head) html)

      (Of cou

  • by CondeZer0 ( 158969 ) on Saturday March 19, 2005 @08:58AM (#11983734) Homepage
    "The essence of XML is this: the problem it solves is not hard, and it does not solve the problem well." -- Phil Wadler

  • [OT] bad summary (Score:4, Insightful)

    by hankaholic ( 32239 ) on Saturday March 19, 2005 @10:53AM (#11984266)
    Tim reveals where the idea for XML actually came from: Tim's work on the OED at Waterloo.
    If you believe that "OED" will be misunderstood by enough people to justify enclosing it with a link to a definition, why not just spell out "Oxford English Dictionary"?

    "Hmmm, OED might be unclear to tons of people reading this, I'll make them have to click on a link to know what I'm talking about."

    Obligatory relation to discussion content:

    Providing a link instead of writing a clear summary is choosing the wrong tool for the task at hand. Authors of some other comments in this thread have shown that XML also is the wrong tool for many of the tasks to which it is applied. Whether it's passing data internally within an application or summarizing an article for the homepage, choosing the right alternative can make a difference between efficient clarity and an inelegant kludge.

    Applying the right algorithmic tool to the right problem is actually a focus of CS. This is why sorting routines are often studied -- for instance, a routine which is more efficient at sorting millions of unordered pieces of data may be very wasteful when dealing with nearly presorted data.

The distinction is not often understood and has more of an impact than the observer might think. For instance, when writing an application for a handheld in which data is kept sorted and is usually viewed between insertions, it makes sense to sort after every data element is added to the database. However, this means adding a single item to a mostly-ordered set. Understanding that quicksort is a poor choice for this application means a difference in battery life.
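For instance, a sketch of the right tool for that handheld case (a hypothetical Perl helper, not from any real codebase): binary-search the slot and splice the item in, instead of re-sorting the whole set.

# Insert one item into an already-sorted array: O(log n) comparisons
# to find the slot, instead of re-running a full sort over n items.
sub insert_sorted {
    my ($arr, $x) = @_;
    my ($lo, $hi) = (0, scalar @$arr);
    while ($lo < $hi) {
        my $mid = int(($lo + $hi) / 2);
        if ($arr->[$mid] < $x) { $lo = $mid + 1 } else { $hi = $mid }
    }
    splice @$arr, $lo, 0, $x;    # @$arr stays sorted
}

my @db = (1, 3, 5, 9);
insert_sorted(\@db, 4);    # @db is now (1, 3, 4, 5, 9)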
