Tim Bray On The Origin Of XML
gManZboy writes "Queue just posted an interview with XML co-inventor Tim Bray (currently at Sun Microsystems). Interestingly enough, the interviewer is none other than database pioneer Jim Gray (currently at Microsoft). Among other things, in their discussion Tim reveals where the idea for XML actually came from: Tim's work on the OED at Waterloo."
OH come on.. (Score:3, Funny)
Re:OH come on.. (Score:2, Funny)
Re:OH come on.. (Score:4, Funny)
I never had a day of training in my life!
OHH a banana!
This article is amazingly honest (Score:5, Interesting)
TB: And we missed. XML is a lot more complex than it really needs to be. It's just unkludgy enough to make it over the goal line. The burning issues? People were already starting to talk about using the Web for various kinds of machine-to-machine transactions and for doing a lot of automated processing of the things that were going through the pipes.
Amazingly, for such a popular method of 'communication' between and within applications, XML is admitted by most to be rather flawed and bulky...
Re:This article is amazingly honest (Score:4, Interesting)
Some bright bunny came up with the idea of using Perl stringified data structures instead, via Data::Dumper.
On the receiving end the data structure is Safe-eval'ed and voilà, there is the data: orders of magnitude faster, and you can still read or edit the data in a text editor.
XML is just a representation of hierarchical data via named parameters and lists. Perl (or Python, if you want) is very adept at parsing code strings.
Also, with code structures you can add dynamic functionality like
'rsv_time' => localtime(time)
which you can't with XML...
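A minimal sketch of that round trip (Data::Dumper and Safe are the real modules; the $order structure and its field names are invented for illustration):

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use Safe;

my $order = {
    id       => 42,
    items    => [ 'widget', 'sprocket' ],
    rsv_time => scalar localtime(time),    # dynamic value, computed at dump time
};

# Sender: stringify the structure. The output looks like "$VAR1 = {...};".
my $wire = Dumper($order);

# Receiver: reval() runs the string inside a restricted Safe compartment,
# so stray code in the payload can't run with full privileges, and returns
# the value of the last statement: the rebuilt hashref.
my $data = Safe->new->reval($wire);
die "failed to thaw: $@" if $@;

print "order $data->{id} reserved at $data->{rsv_time}\n";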
Re:This article is amazingly honest (Score:2, Insightful)
Uhh.. that's one of the things that Data::Dumper was designed to do.
Re:This article is amazingly honest (Score:2, Informative)
Apologies if I've misread; your post was a little unclear, but I thought I'd respond by posting instead of with a nondescript "Overrated" mod.
Re:This article is amazingly honest (Score:3, Interesting)
Re:This article is amazingly honest (Score:2)
Re:This article is amazingly honest (Score:2)
Re:This article is amazingly honest (Score:3, Informative)
It is far more than that.
It conforms to a standard. It allows its format to be extended in standard ways without breaking the original meaning. It has rules for allowing internationalisation. Also, there are a large number of efficient parsers and processors already written for it in almost every language.
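As a sketch of that last point, here is the kind of thing an off-the-shelf parser buys you. This uses XML::LibXML from CPAN; the <order> document is invented for illustration:

#!/usr/bin/perl
use strict;
use warnings;
use XML::LibXML;

my $doc = XML::LibXML->load_xml( string => <<'XML' );
<order id="42">
  <item sku="widget" qty="2"/>
  <item sku="sprocket" qty="1"/>
</order>
XML

# XPath gives structured access without writing any parsing code yourself.
print "order ", $doc->findvalue('/order/@id'), "\n";
for my $item ( $doc->findnodes('/order/item') ) {
    printf "  %s x%d\n", $item->getAttribute('sku'), $item->getAttribute('qty');
}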
Also, with code structures you can add dynamic functionality like
'rsv_time' => localtime(time)
The XML dialect known as
Re:This article is amazingly honest (Score:2)
Re:This article is amazingly honest (Score:2)
A little fodder for the lameness filter
YAML (Score:2)
This was a bit over a year ago.
I'm sorry, but I'm just not interested in using a format where I can't rely on it being clean enough to even pass printable text cleanly through a conversion and back again. Get back to me when you've got a format which isn't a crock of s
Re:This article is amazingly honest (Score:2, Insightful)
Amazingly, for such a popular method of 'communication' between and within applications, XML is admitted by most to be rather flawed and bulky...
Yep. That didn't stop Microsoft from adding even more weight to it by creating SOAP, though. Now there's a bulky format. It's like shipping a shirt button in a container on an oil tanker.
Re:This article is amazingly honest (Score:3, Insightful)
Re:This article is amazingly honest (Score:2)
HTML is a version of SGML that uses a fixed set of tags.
XML is a simplified version of SGML.
Re:This article is amazingly honest (Score:2)
Re:The almighty Q (Score:2)
That is a very good resource for the beginner to intermediate XML user.
Re:OH come on.. (Score:3, Informative)
" that cover word processing documents stored in the XML (Extensible Markup Language) format. The proposed patent would cover methods for an application other than the original word processor to access data in the document."
<URL:http://news.com.com/2100-1013_3-5146581.ht
here's my question.. can you decrypt this? (Score:3, Funny)
Re:here's my question.. can you decrypt this? (Score:3, Funny)
Re:here's my question.. can you decrypt this? (Score:5, Funny)
Re:here's my question.. can you decrypt this? (Score:5, Funny)
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Riddle</title>
<link rel="stylesheet" href="/design/default.css" type="text/css" title="Default Stylesheet" />
</head>
<body>
<table>
<tr>
<td class="example">I'm</td>
</tr>
</table>
<p class="W3C">
<a class="debug external" href="http://validator.w3.org/check?uri=referer">
<a class="debug external" href="http://jigsaw.w3.org/css-validator/check/re
</p>
</body>
</html>
With a corresponding stylesheet:
td.example { padding: 5px; }
p.W3C { display: none; }
Additionally you should take care that your server negotiates the proper application/xhtml+xml content type:
RewriteEngine on
RewriteBase /
RewriteCond %{HTTP_ACCEPT} application/xhtml\+xml
RewriteCond %{HTTP_ACCEPT} !application/xhtml\+xml\s*;\s*q=0
RewriteCond %{REQUEST_FILENAME} \.html$
RewriteCond %{THE_REQUEST} HTTP/1\.1
RewriteRule .* - [T=application/xhtml+xml]
Of course there's a serious lack of meta-data here. The padding should be given in cm (or any other absolute measure) or em, and it's not fulfilling the W3C Accessibility Guidelines...
And now I need to overcome the lameness filter, oh dear... I assume it's the whitespace which I used for indentation. *shrugs* It doesn't help so far; sometimes I wonder how I'm supposed to write real comments including code examples here. Slashdot sure seems stupid sometimes.
Re:here's my question.. can you decrypt this? (Score:2)
And note that I used ECODE to show that.
Ecode isn't perfect,
SGML (Score:3, Interesting)
But according to the interview, it seems that the similarities are merely coincidental.
Re:SGML (Score:2)
Re:SGML (Score:3, Informative)
XML is defined as a subset of SGML. From the specification [w3.org]:
"The Extensible Markup Language (XML) is a subset of SGML that is completely described in this document."
Re:SGML (Score:3, Informative)
GML even had tags for doing Gantt charts, and I would dearly love to find a publishing system that could do printouts from such tags. ... Here it is 10 years later, and we still haven't gotten back to the level of ease of use and flexibility that GML had in the '80s
You're looking for Gary Richtmeyer's B2H [ibm.com] program, available from IBM's z/VM download site [ibm.com]. It's written in Rexx and runs on every system you're likely to be using, comes in source form, and can process just about everything the BookMast
Lisp strikes again (Score:5, Funny)
Those that do not understand Lisp are doomed to reinvent it, badly.
Why can't someone reinvent C so that it sucks less?
Re:Lisp strikes again (Score:2, Informative)
Please explain (Score:3, Insightful)
Re:Please explain (Score:5, Informative)
Lisp source code is first parsed into S-expressions before being compiled. The programmer can manipulate these S-expressions to generate new programming constructs.
S-expressions are nested lists of dynamically typed data. The compiler turns these nested lists into bytecode or assembly code. But before this happens you're able to manipulate a well-defined, concise, and platform-independent data format. The format is so useful that it is also used to store and transport non-code.
Here's a Lisp function call nested within another function call:
(/ (+ 1 2 3) 6)
[i.e. add 1, 2, and 3 together and then divide by 6] Let's first give different names to the function operators:
(divide (plus 1 2 3) 6)
Now introduce redundancy by duplicating the opening function names:
(divide (plus 1 2 3 plus) 6 divide)
Translate the dynamically typed integers to explicit type identifiers:
(divide (plus (integer 1 integer) (integer 2 integer) (integer 3 integer) plus) (integer 6 integer) divide)
Now convert the parentheses and spaces to angle brackets to generate XML:
<divide>
<plus>
<integer>1</integer>
<integer>2</integer>
<integer>3</integer>
</plus>
<integer>6</integer>
</divide>
Lisp S-expressions are a method for storing/expressing data AND code. They have less overhead than XML, solve more problems than XML (comfortably human readable programming languages can also be written in S-expressions, e.g. Scheme and Common Lisp) and they were invented decades earlier.
Regards,
Adam Warner
Re:Please explain-Chinese firewall. (Score:3, Insightful)
S-expressions are in prefix notation. Infix describes expressions such as "1+2". Lots of parentheses are hard to read, but twice that number of angle brackets is certainly not easier.
Blurring the line between data and code is a useful technique...
This only matters if you use the data in Lisp without being careful. Any non-interpreted language could use it just as safely as XML.
P.S. I don't even like Lisp, being a person who likes type checking before I actually execut
Can't Microsoft do *anything* original? (Score:2, Interesting)
From the "Jim Gray" link:
Jim Gray is a "Distinguished Engineer" in Microsoft's Scaleable Servers Research Group and manager of Microsoft's Bay Area Research Center (BARC).
OK, Xerox has their famous Palo Alto Research Center (PARC), so Microsoft just has to have its own similarly named center in the same general vicinity. Sheesh!
Re:Can't Microsoft do *anything* original? (Score:2)
Re:Can't Microsoft do *anything* original? (Score:3)
Woof!
Re:Can't Microsoft do *anything* original? (Score:2)
Originally: Bay Area Research Facility (Score:3, Funny)
No doubt, in order to instill a greater sense among MSFT employees there that they actually might (someday) have a workable product. Hence, BARC.

XML is more complicated than it should be, but it is NOT a MSFT "invention", and has no business being patented by MSFT. Let alone encumbered with their viral, restrictive, and expensive licensing scheme. What it IS is yet another example of the slimy "embrace/extend/extinguish" monopolistic business practices of MSFT. If
Oh boy... (Score:2, Insightful)
Thanks Tim, the world owes you one!
But okay you're right, you gotta use those CPU cycles for something...
--Don't give the world what it asks for, but what it needs.
Re:Oh boy... (Score:5, Insightful)
As you either need metadata to interpret the binary data, or need to know the predetermined data layout to read it, that sounds kinda specialized to me.
The other option is plain text with encoded binary data. This isn't bad; it's human readable, kinda, but it doesn't explain the encoded binary data, so metadata is also needed. I can think of xinitrc files and old .ini files from Win16. It has to be parsed as plain text, with no guarantee of best practice or anything.
XML is, well, human readable, with some meta info, but still encoded binary data. The bonus here is that the layout has at least some kind of standard to adhere to, and it's possible in theory for one XML parser to read any arbitrary XML file.
So in any case you get a deal with Faust: either not human readable, or something that needs to be parsed.
Re:Oh boy... (Score:5, Funny)
Re:Oh boy... (Score:5, Insightful)
Yes, CPU cycles are cheap. CPUs sit idle over 90% of the time, even when there's a user in front of them. Spending the extra power processing 10K properly tagged files that are compatible across platforms, rather than incompatible binary files, is one of the best uses of raw CPU power we have.
Re:Oh boy... (Score:2, Interesting)
We need to shift applications from an event-compute-display model to a predict-compute-event-display model.
Caching data and intermediate data structures helps. Possibly even pre-computing them, when available memory permits.
For example, let's say you've just entered a formula into a spreadsheet. The spreadsheet app can prepare the results of what would happen if you, for example, filled a row or column of cells with the
I COULD NOT AGREE MORE. gzip is our friend! (Score:3, Informative)
So, this solves the problem of the size of the XML to be stored on disk or transmitted over a network... The only difference is parsing. Again, when I'm in Java, I use PICCOLO to parse the XML -- it uses a lexical analyze
Re:I COULD NOT AGREE MORE. gzip is our friend! (Score:2)
Like I said in my post, gzip works pretty damn well for networks too, as it supports streams. If you're running a web server, use something like mod_gzip -- if you're writing a network application with a custom-made XML-based protocol, you can simply wrap a gzip compressor/decompressor around the socket stream.
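To make that concrete, here's a sketch with Perl's core IO::Compress modules. In-memory scalars stand in for the socket stream here; the same modules can wrap a real filehandle:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Compress::Gzip     qw(gzip   $GzipError);
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

# Tag-heavy XML is extremely repetitive, so it compresses very well.
my $xml = '<doc>' . ( '<row a="1" b="2"/>' x 1000 ) . '</doc>';

my ( $compressed, $restored );
gzip   \$xml        => \$compressed or die "gzip failed: $GzipError";
gunzip \$compressed => \$restored   or die "gunzip failed: $GunzipError";

die "round trip corrupted the data" unless $restored eq $xml;
printf "%d bytes -> %d bytes (%.0f%% saved)\n",
    length $xml, length $compressed,
    100 * ( 1 - length($compressed) / length($xml) );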
Binary XML is intended to make things parse faster, but as others have said, it's worth the extra CPU power to pr
Re:Oh boy... (Score:3, Insightful)
+1 Insightful (Score:2)
Re:Oh boy... (Score:5, Insightful)
Let's dissect this piece by piece.
>> "So this guy Tim Bray is one of the people we have to thank for replacing compact, binary config files"
Who the hell said anything about config files?
And we have tools to make things "compact" for us. It's called "compression".
>> "with 'human-readible', resource-intensive XML, that needs specialized libraries to make sense of it? "
Yes. Human readable. I'm a human. I can read it. Thus: Human readable. I don't understand what the quotes were for. Or your misspelling of "readable".
And "specialized libraries"? Oh, right.. I forgot. Binary formats don't NEED libraries to parse. Yep. Dunno why libjpeg62 even exists, when it's patently obvious you can just dump jpeg data straight to video memory. Oh yeah, who needs Microsoft Word. I just "cat resume.doc >/dev/lp" to print my documents. Cause it's binary you see. I don't need a library to parse it.
>> "Thanks Tim, the world owes you one!
But okay you're right, you gotta use those CPU cycles for something... "
No shit, Sherlock. Using CPU cycles to strictly check the type-validity of self-describing documents seems pretty worthwhile to me.
-Laxitive
Re:Oh boy... (Score:2)
Re:Oh boy... (Score:3, Interesting)
Except the XML file tells the parser where its own definition is. Each of the XML files inside an OO.o package tells you how to figure out what they are.
It's not quite that simple. XML files have two kinds of definition: the DTD and the schema. The DTD is what's required for validation (i.e., checking a document is valid, as opposed to merely well-formed); the schema describes the layout of the elements (i.e., an integer goes in the foo attribute). Neither is required for an XML document (though you must have a DTD if you want to validate it). Sche
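A sketch of that distinction, using XML::LibXML (the document and DTD here are invented for illustration):

#!/usr/bin/perl
use strict;
use warnings;
use XML::LibXML;

# Well-formedness needs no DTD at all: load_xml() dies on broken nesting.
my $doc = XML::LibXML->load_xml( string => '<foo><bar n="1"/></foo>' );

# Validity is a separate, DTD-driven check.
my $dtd = XML::LibXML::Dtd->parse_string(<<'DTD');
<!ELEMENT foo (bar)>
<!ELEMENT bar EMPTY>
<!ATTLIST bar n CDATA #REQUIRED>
DTD

print $doc->is_valid($dtd) ? "valid\n" : "well-formed but not valid\n";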
Re:Oh boy... (Score:4, Interesting)
Like what, the Windows registry? Don't say shit like that or ESR will shoot you with one of those guns he collects.
http://www.faqs.org/docs/artu/ch03s01.html#id28
Re:Oh boy... (Score:2)
No.
Unless you're the kind of guy who likes to blame Henry Ford for the drive-by shooting.
Re:Oh boy... (Score:2, Interesting)
well... (Score:2, Interesting)
Why do I not find this hard to believe...
pioneer ... currently at Microsoft (Score:3, Funny)
happy gilmore quote (Score:4, Funny)
Is that my karma burning? Oh what the hay.
The Origin of XML (Score:5, Funny)
But only in the last decade have scholars used transformation style sheets and supercomputers to find more declarative complex types, hidden in the original Hebrew CDATA. It is thought there are tens if not hundreds of specifications in these texts that may never have a finalized draft.
Progress has been slow; while the discovery of SOAP in the 1800s has made the hygiene of data possible, there is much that has yet to be standardized. Considering the aging DTD schemas left from the era of King James, it will be crucial to the data-exchange of humanity to uncover more secrets of XML.
Why, oh why, did they have to repeat the tag name? (Score:3, Interesting)
I work with XML every day. And every day I wonder the same thing: why the hell does the end tag name have to be repeated? Why can't it just be optional? In other words, why can't it just be abbreviated as: <tagname>data</> ?
Oh MAN I wish they could have done just that one little thing for us. It would cut our datagram size down by at least 30%, maybe more.
Very insightful! (Score:2)
I hadn't thought about that. Very insightful.
There has got to be a reason, though. Maybe validation wouldn't be as good, or something like that?
That's the only thing I can think of. With the </> notation you can tell that something is wrong, but not necessarily where.
Not Very insightful! (Score:2, Informative)
Lots of people have thought about it. Not Very Insightful.
The reason is that if the parser encounters unbalanced end tags, and they're all just </>, the parser will go much farther and get very confused before it dies.
It would be very difficult to pinpoint *which* tag isn't closed, just like with C's optional {} after an if(), or SGML's optional closing tags.
It's much easier to correct if your parser can say "You forgot to close <account> on line 115" rather
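To illustrate with a real parser (XML::Parser here; the <account> document is invented, and the exact error text may differ):

#!/usr/bin/perl
use strict;
use warnings;
use XML::Parser;

my $xml = <<'XML';
<bank>
  <account>
    <balance>100</balance>
</bank>
XML

# The parser dies at the first mismatch, naming the location.
eval { XML::Parser->new->parse($xml) };
print $@ if $@;
# Prints something like:
#   mismatched tag at line 4, column 2, byte 48
# With anonymous </> end tags, a parser could only report an imbalance
# somewhere, not which element was left open.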
Re:Why, oh why, did they have to repeat the tag na (Score:5, Insightful)
Because that is the single biggest source of headaches in parsing SGML, the precursor of XML, in which such a construct is allowed.
It also makes error recovery very difficult, something that we know is quite important from all that malformed HTML code out there. The XML creators knew that too.
Ah yes, the "error recovery" excuse... (Score:2)
The extra redundancy of closing tag labels makes sense when your documents are generated by humans, like most SGML was.
It makes no sense at all for documents that are generated by programs, especially programs that create documents in some canonical manner like building DOM structures and then serializing them - if you trust the serializer, th
Re:Why, oh why, did they have to repeat the tag na (Score:2)
Re:Why, oh why, did they have to repeat the tag na (Score:2)
See here for enlightenment. [wikipedia.org]
Re:Why, oh why, did they have to repeat the tag na (Score:2)
Same thing for me, although I'd rather have C-like blocks, e.g. {data}, so it's easier to jump from one side to the other (as any good editor will allow you to do). And quoting could be made easier, too (Come on, <? What were they thinking?!). The only advantage of not u
Re:Why, oh why, did they have to repeat the tag na (Score:2)
Re:Why, oh why, did they have to repeat the tag na (Score:3, Insightful)
You know what would cut down the datagram size more? Smaller tag names. Tag names don't have to be readable so much as uniquely ident
Re:Why, oh why, did they have to repeat the tag na (Score:2)
Not really, because any protocols that exchange large amounts of XML data should be compressing the data anyway, right?
Re:Why, oh why, did they have to repeat the tag na (Score:2)
How about when the power goes out? When a hard drive has a bad sector and transfers a malformed file? When your parser misses a closing tag, how does it know which XML element parents the next XML element? Does it guess? How would you recover from such an error?
I've
Re:Why, oh why, did they have to repeat the tag na (Score:2)
Explicitness (Score:3, Insightful)
Re:Why, oh why, did they have to repeat the tag na (Score:2)
Or even (:tagname data)?
Get this man a copy of Practical Common Lisp [gigamonkeys.com]!
Re:Why, oh why, did they have to repeat the tag na (Score:5, Informative)
Which element did I forget to close?
<ele1> <ele2> <ele3> </> </>
Clearer now?
Jim Gray interviews Tim Bray (Score:5, Funny)
Have you ever seen these guys in the same room at the same time? No? I thought as much.
Right in front of you, Tim! (Score:4, Interesting)
The most amazing thing is that back then in 1995-1996 at Open Text we were already using SGML as a data exchange protocol. All of us there (including Tim) ought to have known that XML would also have a life as a computer-to-computer communication protocol. The problem was that at the time so much of the SGML discourse was wrapped around the content-versus-format debate that we missed the obvious: the main use of XML was not as a replacement for HTML as a text format for the web, but as a kind of uber-ASCII to allow the ready exchange of data between dissimilar applications (just as ASCII in its time had eased the transfer of data between dissimilar hardware and/or software platforms).
Semantic web snake oil... (Score:5, Interesting)
Everyone who has actually done work on knowledge representation in the real world knows that this is a huge, difficult problem, unlikely to be solved anytime soon -- just as Tim Bray says in the interview.
The only people who claim otherwise are either frauds or ignorant. The Semantic Web initiative has both: Tim Berners-Lee is very smart, but not a computer scientist, so he's not aware of the size of the challenge, plus he's a genuinely nice person, so he tends to trust others too much.
He has surrounded himself with the snake-oil AI salesmen of the early 1980s who had promised us impending ubiquitous intelligent computers. Those fraudsters got found out back then, and spent the next fifteen years in academic limbo, only to be rescued by Tim Berners-Lee's naiveté.
Re:Semantic web snake oil... (Score:2, Insightful)
Re:Semantic web snake oil... (Score:2, Insightful)
My suggestion to you: don't put too much weight on Tim Bray's bet. If you look carefully at his rdf.net challenge [tbray.org] you will notice that the wording leaves him ample room to maneuver were things to turn out against him:
Re:Semantic web snake oil... (Score:2)
"metadata is just data with unstandard interfaces"; read, write, and hierarchical file namespaces rule [bell-labs.com]
Needs to be bottom up (Score:2)
Re:Semantic web snake oil... (Score:5, Insightful)
The easiest way to disprove their crap is this: even in RDF or OWL, it is possible to have "semantic aliasing", i.e. multiple ways of representing the same concept. This is exactly the core problem that they claim they address and that they claim XML does not address. Think about it: how can automated inferences be made if two concepts have distinct _semantic_ (not just syntactic) representations? Furthermore, it can be shown that in general these different representations cannot be automatically determined to represent the same thing.
So their entire project is a farce! It is a bunch of people who are ignorant of pertinent theoretical mathematical results on computability and completeness (hell, even in axiomatic set theory there are multiple ways to represent, say, the real numbers), and who are also ignorant of practical computer/software engineering and sociological limitations.
They have stop-gaps: ontologies. Oh, if only people could agree on one common unified ontology, the entire semantic aliasing problem would be solved... or so they seem to think. But even if people agree on a common vocabulary, the way it is used can still give rise to the semantic aliasing problem. So even though agreeing on some complete or near-complete ontology is going to be IMPOSSIBLE, even if it were done, it still wouldn't fix the deep underlying problems with the Semantic Web - problems that have been struggled with for over 100 years in the field of formal mathematics.
Intra-vendor XML is (usually) stupid (Score:5, Interesting)
Theirs is, in reality, a proprietary format, but to stay buzzword-compliant they use XML, which hurts performance -- sometimes dearly...
For example, to pass a couple of thousand floating-point numbers from the front end to a computation engine, each is converted to a text string with something like <Parameter> around it. The giant strings (memory is cheap, right?) are kept in memory until the whole collection is ready to be sent out... The engine then parses the arriving XML and fills out the array of doubles for processing.
It really is disgusting, especially since freely available alternatives exist... For instance, PVM solved the problem of efficiently passing datasets between computers a decade ago, but nooo, we only studied XML in college -- and it is, like, really cool, dude...
Re:Intra-vendor XML is (usually) stupid (Score:3, Interesting)
Then you are not using XML right. For one, the format shouldn't be changing much; if it is, clearly you guys are spending too much time coding and not enough thinking. Second, any application that does not use the new attribute should be able to ignore it without any compilation change. Th
Re:Intra-vendor XML is (usually) stupid (Score:4, Insightful)
Does anybody?.. I guess not...
No disagreement here -- that was my point, in fact.
Just tested simply sprintf-ing the same double 2000 times into the same text buffer on a PII Xeon @ 450MHz with 2MB of L2 cache; the whole program and the puny buffer fit entirely in cache (which is not the case in real life). 5-16 milliseconds (of user time, ignoring the sys time)... The PII is not much slower than the Sparcs we are using. Even if the latest and greatest CPUs are 10 times faster (which they aren't), why waste their power on chewing XML tags?
Now add the time to parse it on the other end, and consider that the whole point of passing it is to have some computations happen. And the computations themselves happen in about 200 milliseconds...
Now realize that the size of the XML file is 3-4 times bigger than it needs to be -- but the network packets are still 1500 bytes, and with XML we need 5 or 6 of them (at best) instead of 2. Bandwidth is cheap, but latency is not...
Now throw in the loss of precision from the double-text-double conversion(s) and climb up the wall next to me...
Using XML in such scenarios is like overnighting papers from one end of the office floor to the other. Defending this practice is like saying that FedEx is really fast and efficient everywhere except in Elbonia...
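For anyone who wants to reproduce the flavor of this, a rough benchmark sketch (not the exact test above; names and sizes are illustrative, and it only times the serialization side):

#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @values = map { rand() * 1000 } 1 .. 2000;

cmpthese( -2, {
    # Binary: 8 bytes per double, no text parsing on the other end.
    pack_binary => sub {
        my $buf = pack 'd*', @values;
    },
    # XML-ish: dozens of bytes per double, and the receiver still has
    # to parse the text back into an array of doubles (not timed here).
    xml_text => sub {
        my $buf = join '',
            map { sprintf '<Parameter>%.17g</Parameter>', $_ } @values;
    },
} );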
Re:Intra-vendor XML is (usually) stupid (Score:2)
Yet again you miss the point. Exporting data in XML means that if you ever have a change of format or create a second consumer application, it can readily understand the data. For example, you can pass it to Gecko and display the data. Try that with your proprietary binary format.
Exchanging intra vendor data in XML is no more foolish than exchanging intra vendor data in ASCII.
Re:50 microseconds is not bad - it's terrible (Score:2)
Re:Intra-vendor XML is (usually) stupid (Score:2)
This statement is actually one of the reasons for using XML.
Using a standard data format like XML that is widely understood (but nooo, we only studied XML in college, as you put it) and has a mature set of parsing tools makes handling the data easier when the applications that process the data change, particularly if they change dramatically.
It sounds like your primary objection to XML for your application is that it is not very efficient. There is always a tr
Re:Intra-vendor XML is (usually) stupid (Score:2)
See xmlgrep [ieee.org]. Also xgrep [wohlberg.net] and xml command line utilities [robur.slu.se].
you, like all GNU fools, can't live without verbosity
A strange comment considering Plan 9's Unix origins.
Re:Intra-vendor XML is (usually) stupid (Score:2)
Re:Intra-vendor XML is (usually) stupid (Score:2)
What it should have looked like (Score:5, Insightful)
I think XML should have looked more like this:
Re:What it should have looked like (Score:3, Interesting)
Consider an XML snippet:
<sampletag name="this" type="that">
Some value
</sampletag>
This could be translated into
(sampletag [name="this"] [type="that"]Some value)
which is much smaller.
I wonder if someone will consider this for real
Re:What it should have looked like (Score:3, Insightful)
Now your entire webpage is blank. What happened?
Re:What it should have looked like (Score:2)
Lets see.... (Score:2)
First, let's add matching close parens to make error detection easier. (It might be handy to have a way (such as the "/") to indicate an end tag, but we're going for brevity here.)
(html ... html)
Now let's add attributes. It is probably most convenient to put these in a list right after the element name. Obviously if there are no attributes we need to put in an empty list so the parsing won't be ambiguous:
(html '() (head '() (style '( type."text/css" ...) style) head) html)
(Of cou
The essence of XML... (Score:3, Funny)
[OT] bad summary (Score:4, Insightful)
"Hmmm, OED might be unclear to tons of people reading this, I'll make them have to click on a link to know what I'm talking about."
Obligatory relation to discussion content:
Providing a link instead of writing a clear summary is choosing the wrong tool for the task at hand. Authors of some other comments in this thread have shown that XML also is the wrong tool for many of the tasks to which it is applied. Whether it's passing data internally within an application or summarizing an article for the homepage, choosing the right alternative can make a difference between efficient clarity and an inelegant kludge.
Applying the right algorithmic tool to the right problem is actually a focus of CS. This is why sorting routines are often studied -- for instance, a routine which is more efficient at sorting millions of unordered pieces of data may be very wasteful when dealing with nearly presorted data.
The distinction is not often understood and has more of an impact than the observer might think. For instance, when writing an application for a handheld in which data is kept sorted and is usually viewed between insertions, it makes sense to sort after every data element is added to the database. However, this means adding a single item to a mostly-ordered set. Understanding that quicksort is a poor choice for this application makes a difference in battery life.
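A sketch of the better alternative (a binary-search insert into the already-sorted list; the names here are illustrative):

#!/usr/bin/perl
use strict;
use warnings;

# Insert one item into an already-sorted array: O(log n) comparisons
# plus one splice, instead of re-sorting the whole set every time.
sub insert_sorted {
    my ( $list, $item ) = @_;
    my ( $lo, $hi ) = ( 0, scalar @$list );
    while ( $lo < $hi ) {
        my $mid = int( ( $lo + $hi ) / 2 );
        if ( $list->[$mid] < $item ) { $lo = $mid + 1 }
        else                         { $hi = $mid }
    }
    splice @$list, $lo, 0, $item;
}

my @db = ( 1, 3, 5, 9 );
insert_sorted( \@db, 4 );
print "@db\n";    # 1 3 4 5 9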
Re:XML blows (Score:2)
Re:XML blows (Score:2)