XML Co-Creator Says XML Is Too Hard For Programmers
orangerobot writes "Tim Bray, one of the co-authors of the original XML 1.0 specification, has a new entry on his website explaining why he's been feeling unsatisfied lately with XML; he says his last experience writing code for handling XML was 'irritating, time-consuming, and error-prone.' XML has always drawn a divided response from the technical community. The anti-XML community has several sites stating their positions."
Too hard? (Score:5, Funny)
Re:Too hard? (Score:2, Informative)
Seriously, don't knock VB until you need to code a quick DB-access (or other simple) app in a couple of days for internal use. Easy languages have their places!
Re:Too hard? (Score:3, Interesting)
Re:Too hard? (Score:5, Insightful)
Oh, and if you're making web-based apps, wtf are you using C for?
Re:Too hard? (Score:3, Flamebait)
#1 I know what I'm doing, and..
#2 It's called libraries....be it STL, MFC, MyStack.h or whatever.
STL and MFC are C++, not C. Presumably you know the difference between C and C++, since you "know what you're doing". I must assume then that you are trying to gloss over the distinction between C and C++ so as not to further confuse the VB programmers among us.
Re:Too hard? (Score:3, Informative)
Of course, while the 6809 was indeed a Motorola chip, the 6502 was made by MOS (a company started by former Motorola employees). The initial 6501 was pin compatible with the 6800, and Motorola sued, resulting in the 6502, which had a different pin layout.
Other than that, I agree with your comments.
Re:Too hard? (Score:3, Insightful)
Re:Too hard? (Score:5, Insightful)
Yeah, the world needs more half-assed barely functioning and noncompliant XML parsers.
Seriously I think it's much more robust to just use a normal XML parser. You get all the character set support. If someone hacked up their own parser at work I would reject it in a code review. There's no sense in maintaining your own XML parser these days; they are a commodity.
-Kevin
Re:Too hard? (Score:3, Insightful)
I do not know any programmer who uses all of the features of ANSI C++. This may have something to do with the fact that no C++ compiler is actually 100% ANSI compliant. There are just so many different kinds of templates that most programmers do not use most of them, because less experienced programmers will not be able to read the code.
I never got into the XML hype. SOAP is cool, but XML otherwise is just an ASCII text file with tags. I have not written a lot of xm
Re:Too hard? (Score:3, Interesting)
This is the lamest story I've ever heard on Slashdot. I almost left for good after reading this. If the next week's worth of news doesn't get any less lame, I probably will.
Slashdot, don't be fucking lame. This is news for *nerds*, not for simps and wannabees. XML too hard? Then you shouldn't be a programmer cause that's about as easy as it gets unless you're just a hobbyist.
Re:Too hard? (Score:5, Insightful)
I think that the *real* programmers that you have talked about all write libraries now. These guys all have jobs at the tool makers like MS, Apple, etc...
Businesses in general don't want (and generally don't need) *real* programmers, they want software engineers. They want someone who can sit down, work out some requirements and provide a timely, cost effective solution. It has taken me some time to fully realize this, but the right technical solution is not always the right business solution. The PHB could really care less if the app is written in VB, C, Java, as long as the application works to within their parameters. It is those parameters that are specified by the people paying for the software that will direct the language/technology you ultimately use.
Hah. (Score:4, Funny)
Re:Hah. (Score:3, Funny)
Son, there are professional Cobol programmers who HAVE NO FINGERS LEFT.
Join the Cure. We're trying to raise $5 billion to cure Cobol Fingers through transplants.
Call 1-800-I-REALLY-REALLY-USED-TO-BE-A-COBOL-PROGRAMM
Today!
Really? (Score:3, Interesting)
Without XML, what would you normally do? Create a flat text file and read it using whatever syntax you like that day. I agree XML is ugly as hell to type in manually, but at least it's a standard, and every programming language in use today can handle it in a standard way - DOM, SAX, whatever.
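The "standard way" the parent mentions really is only a few lines in practice. A minimal sketch in Python's bundled DOM-style API (the document and field names here are invented purely for illustration):

```python
import xml.etree.ElementTree as ET

# A made-up config document, standing in for "whatever syntax you like that day".
doc = """
<config>
  <server host="example.org" port="8080"/>
  <user name="alice"/>
</config>
"""

root = ET.fromstring(doc)
server = root.find("server")
print(server.get("host"), server.get("port"))  # example.org 8080
print(root.find("user").get("name"))           # alice
```

Essentially the same handful of calls exist in every mainstream language's DOM binding, which is the point of having a standard.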
Re:Really? (Score:5, Funny)
XML is like:
* SGML without configurability
* HTML without forgivingness
* LISP without functions
* CSV without flatness
* PDF without Acrobat
* ASN.1 without binary encodings
* EDI without commercial semantics
* RTF without word-processing semantics
* CORBA without tight coupling
* ZIP without compression or packaging
* FLASH without the multimedia
* A database without a DBMS or DDL or DML or SQL or a formal model
* A MIME header which does not evaporate
* Morse code with more characters
* Unicode with more control characters
* A mean spoilsport, depriving programmers the fun of inventing their own syntaxes during work hours
* The first step in Mao's journey of a thousand miles
* The intersection of James Clark and Oracle
* The common ground between Simon St. L and Henry Thompson
* The secret love child of Uche and Elliotte
* Microsoft's secret weapon against Sun's Open Office
* Sun's secret weapon against Microsoft's Office
* The town bicycle
Re:Really? (Score:5, Informative)
It is customary to attribute quotations [xml.org] when you publish them. Otherwise it's called plagiarism. Credit where credit is due and all that.
Unless, of course this particular AC is Rick Jelliffe, in which case I apologize.
Re:Really? (Score:3, Insightful)
Yeah, I am sure that someone can make a compiler that allows you to feed in pseudocode in clear English, written with crayons on the back of a cereal packet, but you are robbing Peter to pay Paul; you will have to take the hit somewhere....
Oh please! (Score:5, Interesting)
The criticism of XML is accurate, correct, valid, if only for the simple reason that the code needed to interface with the libraries is 90% plumbing-work and 10% business-solution. That 90% plumbing-work leaves opportunity for _a lot of bugs_ to be created and for any solution using XML to become a resource-hog.
Having a standard interchange format like XML is a fun-thing, and "good", as it allows standardized processing of these formats. However, the article identifies a clear gap in the tooling and that gap needs to be addressed for XML to become a widespread success, instead of another buzzword hype.
Re:Oh please! (Score:3, Interesting)
I did find the SAX API (in Java) a little tedious to work with for maybe a few days, but after I got used to the idiom it was pretty straightforward. The interfacing with the library was not really a lot of "extra" code. Most of my SAX parsing code spends its time in a content handler firing off events based on the XML it is processing.
I still cleanly separated the XML interfacing from the server. Once the plumbing is set up, my server does
It takes more than a set of tools (Score:3, Insightful)
It takes more than a set of good tools for a technology to become 'a widespread success'. A clear justification why XML is better than existing standard marshalling techniques would be a good starting point. ASN.1 DER, simple container LSB serialization and others.
I'm probably beating the dead horse here but XML has at least two proper
A good point (Score:3, Insightful)
Re:A good point (Score:4, Insightful)
People often fail to see the point of widely adopted standards, but the bottom line is that they make it easier to reuse functionality that conforms to the standard. There are now both SAX and DOM based parsers for most common programming languages. Basically, if you spend some time figuring out how these APIs work, you can work with XML from almost any language.
That is not the problem. What is a problem is that everybody is introducing their own XML-based languages and in many cases forgetting to publish the appropriate XML schema/DTD.
Now the guy who is complaining here is a Perl programmer who has to process data that is passed to him in XML form. His point is that it is easier for him to throw together a bunch of regular expressions to do his thing than it is to use some off-the-shelf validating parser with a generic DOM/SAX based API. Good for him that his job is so simple that a bunch of regular expressions do the trick. I'd hate to maintain his code, though, and I suspect he doesn't have much reuse beyond the odd copy-paste.
Re:A good point (Score:3, Interesting)
However, this is all beside the point since we've now established that there's nothing wrong with XML but that it's just the tools to manipulate it which are still l
Re:A good point (Score:4, Insightful)
Yes, standards suck. But they suck in a way that is consistent and allows sucky things to talk to other sucky things.
I'll bet 802.11b is a really crappy standard. But as long as I can pick up interchangeable devices for $50 at the local computer store, I'll live in ignorant bliss.
It's about tools, libraries (Score:5, Interesting)
Using Perl regexps to parse XML is silly, because there's too much variability (e.g. attributes in any order, elements covering multiple lines) that regexps aren't good at handling. You can do it, of course, but it quickly gets messy.
There's a number of tools and libraries (with Perl or other languages) beyond plain DOM and SAX that use proper XML parsers and are reasonably easy to use. He should use one of those, and stop complaining.
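To make the variability concrete, here is a sketch (Python, with a made-up `<img>` snippet) of exactly what the parent means: a regexp keyed to one attribute order silently misses an equivalent document, while a real parser accepts both forms:

```python
import re
import xml.etree.ElementTree as ET

# Two equivalent documents: attribute order differs and one spans two lines.
doc1 = '<img src="a.jpg" alt="cat"/>'
doc2 = '<img alt="cat"\n     src="a.jpg"/>'

# A naive regexp hard-wired to one attribute order only matches the first.
pat = re.compile(r'<img src="([^"]*)" alt="([^"]*)"\s*/>')
print(bool(pat.search(doc1)))  # True
print(bool(pat.search(doc2)))  # False

# A real parser doesn't care about order or line breaks.
for doc in (doc1, doc2):
    print(ET.fromstring(doc).get("src"))  # a.jpg, both times
```

You can keep patching the regexp, of course, but every patch re-implements a bit of the parser - which is the "it quickly gets messy" part.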
Re:It's about tools, libraries (Score:5, Informative)
Re:It's about tools, libraries (Score:3, Interesting)
Bray's paper appears to express a strong preference for an XML that would work well with standard regex tools. In it he says, "If I use any of the perl+XML machinery, it wants me either to let it read the whole thing and build a structure in memory, or go to a callback interface." And then he adds that the callback interface "is sufficiently non-idiomatic and awkward that I'd rather just live in regexp-land."
This, in turn, seems to be based on
Re:It's about tools, libraries (Score:3, Interesting)
The best method might be a lazy programming language where you can say
tree.a[4].b[6].contents
and only when this expression is evaluated will the necessary bit of the tree be parsed.
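Mainstream XML libraries aren't lazy in that sense, but the effect can be approximated by stream-parsing and stopping as soon as the wanted node appears, so nothing past it is ever built. A sketch (tag names and the helper are invented for illustration):

```python
import io
import xml.etree.ElementTree as ET

def nth_text(stream, tag, n):
    """Return the text of the n-th <tag> element, parsing no further."""
    seen = 0
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == tag:
            if seen == n:
                return elem.text  # stop here; the rest is never parsed
            seen += 1
            elem.clear()  # free elements we've already passed
    return None

doc = io.StringIO("<root><b>first</b><b>second</b><b>third</b></root>")
print(nth_text(doc, "b", 1))  # second
```

A genuinely lazy language would let `tree.a[4].b[6].contents` do this implicitly; the explicit version shows what such an implementation would have to do under the hood.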
Re:It's about tools, libraries (Score:3, Interesting)
And depending on what you want (memory vs speed) your "xml rule" in that regexp can do whatever annotation, datastructure building, etc that you want.
Re:It's about tools, libraries (Score:5, Interesting)
XML is a grammar of Chomsky Type 2 (context free grammar). So you need a stack machine (or equivalent) to parse the whole (left or right) subtree to get your information. This may be fine for small data (like config files), but it takes a huge amount of memory space if you have real world data like the SWIFT file you have to parse for a special transaction. What he is complaining about is exactly this: Lots of parsing to get a simple datum.
With regexps your parsing is much faster, because you can concentrate on substrings, you can parse them without using a stack, and you can use them in a stream context. But regexps are regular expressions (Chomsky Type 3 grammars), so they describe a strict subset of what XML allows and are not able to parse XML completely.
One of the links in the article points to another rant, where the author wants some regulations for a limited XML. Sadly enough, the ideas he is proposing are in fact context sensitive, so they are Chomsky Type 1 (context sensitive grammar) and a superset of XML instead of a simplified subset. Does anyone remember the Earley algorithm, with something that can be described as a multi-dimensional stack?
Generic XML parsers are memory intensive and can't be as fast as regular expressions. That's just computer science. Deal with it.
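For what it's worth, the trade-off isn't all-or-nothing: an event-based (SAX-style) scan uses a real parser yet keeps only an accumulator rather than a tree, so memory stays flat regardless of input size. A sketch using Python's expat binding (the document is invented for illustration):

```python
import xml.parsers.expat

# A "large" document: 1000 transaction records.
doc = "<txns>" + "<txn amount='10'/>" * 1000 + "</txns>"

total = 0

def start(name, attrs):
    """Accumulate as elements stream past; no tree is ever built."""
    global total
    if name == "txn":
        total += int(attrs["amount"])

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start
p.Parse(doc, True)
print(total)  # 10000
```

It is still slower than a raw substring scan, as the parent says, but the memory-hog objection applies to tree-building (DOM) parsers, not to event-based ones.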
Re:It's about tools, libraries (Score:3, Informative)
You're right, but the problem is that "deal with it" may equate to "don't use XML" in a lot of cases, which makes XML less of the universal data representation language than it wants to be.
When the parser uses a lot of memory (like DOM reading the entire input into a tree) it becomes inefficient, sometimes infeasible, to handle large input documents. That's one of the specific
Re:It's about tools, libraries (Score:5, Informative)
Well, I've written my own XML parser, as well as a compiler for a simplified version of C, so I think I'm somewhat qualified to talk on this. A generalized XML parser is not memory intensive, unless you are a very bad programmer. All you need is a depth-first stack, which will be as high as your XML tree is deep. And given that a stack of depth N can handle a tree of size X^N, you are definitely going to run out of disk space before you run out of RAM. In other words, the memory required for parsing an XML tree is trivial.
An XML parser is one of the simplest parsers imaginable. It's a sophomore task to create a state machine to process the generic LL(1) (or is it LL(0)?) XML grammar. And as you should know, a state machine for an LL(1) grammar is as fast as you can get.
Anything you do with regular expressions will be much more complicated. As I'm sure you know, regular expressions are turned into state machines before being used to process the input. And almost all regular expression state machines are much more complicated than the state machine you need for an XML parser. In an XML parser, definite boundaries exist on elements such as:
Regular expressions are not this smart. For example, looking for the substring "abc" in the longer string "abababaaabbbabcabababac" already generates a state machine that is more complicated than the one needed for XML parsers.
Back to the "memory" intensive nature of XML parsers. If you parse your XML tree into a nested hashmap structure, then the memory needed will be proportional to the number of nodes in the XML tree. Maybe this is what you meant by "memory intensive". However, this is totally unnecessary. You can easily construct an XML parser to look for the specific elements you care about. Then you only get those elements, and you only need to allocate the memory for the elements required.
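That last approach can be sketched concretely (Python's expat binding; the document and target path are invented): keep only a path stack, as deep as the tree, and copy out just the elements you care about:

```python
import xml.parsers.expat

doc = "<lib><book><title>A</title></book><book><title>B</title></book></lib>"

stack = []    # current path in the tree; memory is O(depth), not O(size)
titles = []   # only the data we actually asked for
buf = []      # character data for the current element

def start(name, attrs):
    stack.append(name)

def chars(data):
    if stack == ["lib", "book", "title"]:
        buf.append(data)

def end(name):
    if stack == ["lib", "book", "title"]:
        titles.append("".join(buf))
    buf.clear()
    stack.pop()

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start
p.CharacterDataHandler = chars
p.EndElementHandler = end
p.Parse(doc, True)
print(titles)  # ['A', 'B']
```

Memory here is proportional to the results extracted plus the tree depth, which matches the grandparent's claim.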
Re:It's about tools, libraries (Score:3, Insightful)
Re:It's about tools, libraries (Score:4, Interesting)
It starts already if you use escape characters to mark nonterminals and escape those characters with themselves to mark them as terminals. Those markings are still regular, but you already lose some speed-ups. For instance, \\ matches both \\" and \\\", but one means just \ and the end of the string, and the other means \" and the string continues. The only way to stay out of the mess is to make sure you use a strictly left-bound parser: first parse for all escape characters and then for the nonterminals, which already makes your parser a (local) two-pass parser.
Re:It's about tools, libraries (Score:5, Informative)
At work, I brush up against XML occasionally, mostly for documentation or data-resultset purposes. In my own time, I use it in my photo
gallery - result-sets from database queries get converted to XML and then spat out through XSLT in Sablotron, straight to web. For all the hoops it goes through, it's actually still quite nippy.
However, I also dislike it intensely.
I've written a blog-like system-news announcement board using a Ruby CGI against PostgreSQL as a backend. I can pull back a result-set - a
simple table-thing with each row being a text announcement, with half a dozen fields (when posted, by whom, etc). And I wanted to output this in HTML form for the web, in plain text to send to a user who wanted it via email every day, and in s-exp form for my own gratification.
However, the first problem you run into is the formatting. A textarea in an HTML form gives no line-wrapping (wanted for plaintext output,
but only in specific fields) and embeds ^M characters everywhere. When the output is HTML, those ^Ms want to become br tags. When the output
is plaintext or sexp, they want to become \n. Simple, if ONLY there were a way of doing either elementary reformatting or search-n-replace in XSLT. There is, but s/// is about 10 lines' worth, if my googling is to be believed. That makes it non-optimal for one of its primary uses: making transformations on big blocks of text-based data, and it can't even edit within a node correctly? Pathetic.
Why shouldn't I just write 3 output methods in my Ruby CGI script that take the result-set directly to text, HTML or sexp formats, with the power of
ruby to do a #gsub("^M", "\n") on just the fields I want, in a tiny few extra characters of code?
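The per-field transformation described above really is a one-liner per output format in a general-purpose language. A sketch (Python here rather than Ruby; the field names are invented for illustration):

```python
import re

# A stand-in for one row of the result-set described above.
row = {
    "author": "alice",
    "body": "first line\r\nsecond line\r\n",  # textarea-style ^M line endings
}

def as_text(row):
    """Plaintext output: CRLF becomes a bare newline."""
    return re.sub(r"\r\n", "\n", row["body"])

def as_html(row):
    """HTML output: CRLF becomes a <br> tag."""
    return re.sub(r"\r\n", "<br>", row["body"])

print(repr(as_text(row)))
print(repr(as_html(row)))
```

Whether that beats ten lines of XSLT is exactly the judgment call the poster is making.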
Now to tackle what you've said:
"Using Perl regexps to parse XML is silly"
No, it's not. Perl regexps are highly featureful, pre-existing code. I'd be surprised if libxml *didn't* use regexps in its XML parsers, frankly.
"e.g. attributes in any order, elements covering multiple lines) that regexps aren't good at handling."
These things are not a problem. You can easily match an attribute occurring, as it does, within an opening tag, and pull out both the name and the contents. Using that to set a variable of the given name in your program - a highly important part, given that XML is a data-transfer format and it's the internal representation afterwards
that is its whole raison d'être - is trivial. Thus, perl wins.
Multi-line matching is explicitly catered-for in perl, with
"There's a number of tools and libraries "...
Indeed there are. And you know what? When I've got a small paragraph (under 10 lines) of data that I want to transfer from A to B, the last thing I'm going to do is invoke a 600Kb library so I can use a pompous and fashionable set of functions to produce "XML", when perl/ruby/sh have all had
perfectly valid "print" or "echo" commands for the past decade or more. If the output is valid XML, you've no reason to diss the method used to produce it.
As a final example, I've also had a few documents of my own to write at work. I had two options: either sit down and set up emacs to
handle XML sources smoothly so I could open and close tags at the push of a key-chord, the way I *want* to create the stuff, or program a
small sub-language. Lisp, in the form of _librep_, won the day, with a few small functions to produce strings based on the input. And guess what? Because this is a programming language rather than a mere text-transforming language, I made a CGI out of it, and can embed programs within my "data", too, without feeling the urge to write to
the W3C about it.
Editing it is an absolute dream - opening and closing paragraphs of text is a piece of cake and fits the way I want to work. (Maybe you like looking at spiky angle-bracket characters, I dunno.)
In short, "programmed text" won the day for me.
I tend to agree. (Score:3, Funny)
Maybe he should have read Knuth (Score:5, Insightful)
None of this would ever have been needed had CS been taught properly. There are other concepts for describing how files are to be organized. Some of the systems date from the 1950s. BNF (which seems to work very well for programmers to describe file formats to other programmers) dates from the early 1960s. What was needed is a BNF-type grammar that is machine readable.
Would XML have ever taken off if the web had used something sane, and not a hacked version of a nasty text-formatting system from decades ago?
What? (Score:2)
You mean BNF is for humans!?
Re:Maybe he should have read Knuth (Score:5, Informative)
WTF? Perhaps you could explain more about these two cases. As far as I know, general XML parsers such as Expat [jclark.com] do not require unlimited memory to parse any finite input document, nor do they require infinite time.
The Document Type Definition (DTD) system is equivalent to a BNF grammar for XML documents. It's not quite as flexible as full BNF because it enforces that elements are correctly nested, but I don't see this as a bad thing.
And yes, DTDs are machine readable. Other grammars for XML documents such as DSD, XML Schema or Relax-NG are also machine readable.
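To see how DTD productions resemble BNF rules, here is a toy DTD (invented for illustration): each declaration is effectively a production saying what a nonterminal may contain.

```dtd
<!-- A gallery is one or more photos; a photo has an optional caption
     element and a required file attribute. -->
<!ELEMENT gallery (photo+)>
<!ELEMENT photo   (caption?)>
<!ELEMENT caption (#PCDATA)>
<!ATTLIST photo file CDATA #REQUIRED>
```

The `+`, `?`, and `,`/`|` operators in content models map directly onto the repetition, option, and sequence/alternation constructs of (E)BNF.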
Just as with BNF grammars and flex(1), you can take a DTD and generate an efficient parser from it using FleXML [sourceforge.net].
Comparisons with TeX aren't really appropriate because TeX is a Turing-complete language, and so impossible to parse automatically in 100% of cases (unless you want to allow that your program will sometimes fail to terminate, ie hang, on particular input files). I don't know what you mean by your subject line 'Maybe he should have read Knuth'...
Re:Maybe he should have read Knuth (Score:3, Informative)
He probably means "taken to the limit" - a way of characterizing the performance of a system: how does it fail when faced with an overwhelming amount of work? (It's like O-notation, which assumes the problem size is infinite to eliminate lower-order effects from the description.)
An infinite input file could require infinite memory to parse it. So what?
The intention probably was to point out that a program
Re:Maybe he should have read Knuth (Score:4, Insightful)
But, for data storage within an application (or a set of tightly coupled systems that trust each other to function correctly), XML is less advisable. Traditional (SQL) databases, or hand-rolled file formats, may be a better solution when high speed and scalability are needed.
JoelOnSoftware has a long article [joelonsoftware.com] on why XML is suboptimal for the latter use.
Re:Maybe he should have read Knuth (Score:3, Informative)
It's fairly common to comment out markup when hand-editing, since <![IGNORE[...]]> can't be used within a document. Skipping non-markup in the document should be just a matter of matching the Perl regex
If someone else defines a foo element in a different namespace, I don't see how you can do anything other than ignore it--it's almost certainly not what you were looking for, and you have no idea what it might mean.
My last XP with XML (Score:2)
o-xml (Score:3, Interesting)
did he miss the whole libxml thing? (Score:5, Insightful)
Now, don't mistake me for a pro-XML monkey. I think the whole XML revolution is a bunch of hot air, and that people are getting all excited over the ridiculous misconception that tagged text is a new data format. (It has been around at LEAST since the early 80s.) And I absolutely do not want to get started on SOAP. Why anyone wants to lump RPC calls in with HTTP traffic to make it more difficult to firewall is beyond me.
But whining that XML is hard is bullshit. Use a library to do all of the handling for you. That's what they are for.
Re:did he miss the whole libxml thing? (Score:2, Interesting)
When we took on XML, step one was "Write a good parser". Now we have a nice organised way of describing data. Hard to write? Beats the pants off of dealing with all the kludgey ways people used to do things.
Step one: Write a parser
Step two: Add (data) water
Step three: Enjoy
Re:did he miss the whole libxml thing? (Score:5, Insightful)
Anybody can explain me (Score:2, Interesting)
Too many acronyms, too few standard DTDs (Score:5, Insightful)
Beyond the simple tagging there's SAX, DOM, XSLT, DTDs, XML Schema, XForms, OIL, etc.
Seemed like a royal pain.
The goal of having standard document formats is a great one. But where is the web repository of those standards for different types of documents? For example, years ago I saw a simple DTD for drawing shapes/curves. This would be great for drawing programs. The document seems long gone.
What do I do if I have data I want to save in a format that others can read?
Apple is starting to use XML for file formats (Keynote, Apple's PowerPoint-like software, uses XML documents). Hopefully this will start to take off.
Re:Too many acronyms, too few standard DTDs (Score:3, Informative)
you said, "FOr example years ago I saw a simple DTD for drawing shaped/curves."
i say, "svg [w3.org]."
In other news... (Score:5, Funny)
Come on, XML is not THAT complicated. Besides, there's so many different facets to it. I think most people have the most difficult time figuring out what it is (similar in some ways, to the
As with anything worth doing, it takes a little time to learn and familiarize yourself with XML before you can really get into it and make it useful, just like programming itself.
Of course, compared to HTML, which any grade-school kid can write (I don't have any proof of this claim, it's all hearsay), XML's uses go far beyond edit-save-browse-repeat. I think everyone needs to find their own little niche of usefulness in the XML universe before moving on to the other areas.
Not worth it (Score:2)
Note: PHP and Linux look a lot like the things that DIDN'T go on the heap. Simple to understand, easy to use, powerful. If a non-programmer can grasp it easily, it usually doesn't go on the heap.
Short summary (Score:5, Informative)
And he would probably want to be able to parse parts of documents ("XML Fragments"), rather than whole documents.
I agree with his views (not using perl too much, though). But this is *not* the end of XML or anything. Tim just has some thoughts about how the xml api could be better in perl. Not very exciting, perhaps...
Perl suggestion (Score:3, Insightful)
I don't know what's going on in Perl 6, but it seems like Perl needs some kind of built-in way of running through an XML file by tags, similar to the standard line-by-line file-reading operator. Rather than grabbing a single line at a time, or having to slurp in the whole file before whacking it up, you should be able to pass a regex to the input operator so that it will stop when it gets to the end of a chunk of text defined by an end tag.
Obviously, there are ways of getting around this by using
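Something like the proposed operator can be sketched as a generator that buffers input and yields a record whenever the end tag appears (Python rather than Perl; the function and names are invented for illustration):

```python
import io
import re

def records(stream, end_tag, chunk_size=4096):
    """Yield successive records, each ending at end_tag, reading lazily."""
    buf = ""
    pat = re.compile(re.escape(end_tag))
    while True:
        m = pat.search(buf)
        if m:
            yield buf[:m.end()]   # one complete record
            buf = buf[m.end():]
            continue
        data = stream.read(chunk_size)
        if not data:
            break
        buf += data
    if buf.strip():
        yield buf  # trailing partial record, if any

feed = io.StringIO("<item>a</item><item>b</item>")
chunks = list(records(feed, "</item>"))
print(chunks)  # ['<item>a</item>', '<item>b</item>']
```

Note the caveat raised elsewhere in this thread still applies: a raw end-tag scan like this is fooled by comments and CDATA sections containing the tag text.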
He is right, I think. (Score:4, Interesting)
Among other things ...
(1) They need to eliminate the doctype can of worms. Unfortunately, this cries out for an alternative solution for character entities.
(2) Namespaces need to be simplified and better integrated into the core of the language. Expanding on this, there need to be much better mechanisms for modularizing parts of the markup so that it isn't necessary to parse and hold everything in memory to make sense of it.
(3) There needs to be clean-up and standardization of element IDs and references, integrating it with (1) and (2).
Do others have more? Should this be done compatibly with XML?
I believe that we really need a standard for arbitrary abstract data models, with XML as just one syntactic representation, but I would have to go into long details to justify this.
Re:He is right, I think. (Score:5, Insightful)
1. Doctype is necessary. Perhaps you've never tried handling a very complex text (a big DocBook text or a big TEI text). You need to know what kind of text you're dealing with, and there's no way to come up with one universal solution for all kinds of texts. The only character entities needed are the handful of named entities that are part of the standard: &lt;, &gt;, &amp;, etc. The rest can be handled by Unicode (including the PUA) and transcoding (if you are using an ISO 8859 encoding and you need a character outside that encoding, then you need to rethink the encoding you've chosen to use. UTF-8 is your friend). Entities really are good for more complex units (strings, etc.), rather than single characters. What character entities have to do with DOCTYPE is beyond me.
2. True
3. Standardize element IDs? Element IDs are part of the text, not part of the structure. They're simply a way of easing access to random parts of the text.
I believe that we really need a standard for arbitrary abstract data models, with XML as just one syntactic representation, but I would have to go into long details to justify this.
So you're saying we need a meta-meta-language? The *MLs are a standard for arbitrary abstract data (and text) models (because not all texts are hierarchical like DBs).
I think the problem here is that DB programmers (I'm excepting Bray from this) are overusing XML for very simple DB tasks that it wasn't intended for. If you're just doing a 40 field, 30,000 record flat DB, XML is NOT the solution. But it is the best solution for complex non-hierarchical data (i.e., books, etc.).
As for Bray, I don't think he's saying XML itself (the markup standard alone) is too hard, that it should be abandoned. I think he's saying we haven't come up with simple enough ways of accessing XML data through APIs. But of course that wouldn't be a spicy enough meatball for the Taco.
His idiom. (Score:5, Insightful)
[quote]
while () {
next if (XX);
if (X|||X)
{ $divert = 'head'; }
elsif (XX)
{ &proc_jpeg($1); }
# and so on...
}
[/quote]
Repeat after me: I will never leave parsing XML up to a regexp, especially if my XML may contain CDATA and comment sections. I will never...
Unless you are 100% certain the file you are parsing is directly under your control - i.e. no comments, no CDATA sections, params always in the same order, same indentation, same bloody encoding [pardon my French] - you will just have to access the data using some kind of DOM or abstract tree representation.
I don't think he thinks no one uses XML, he seems to deplore the fact that some people don't get it at all and resort to heavy duty tools for trivial tasks [thus justifying his example above].
Basically XML is quite simple, but that's not the matter; the problem is that XML bundles ACTUAL DATA, and it's all about the complexity of that data, not the API used to access it [although writing a DOM implementation is a real pain].
XML is good (Score:5, Interesting)
The documents are generally displayed as HTML on the web, but they're also read by a couple different programs for different purposes. When I first started here, it was mostly a mess of poorly hand-written HTML, but thankfully there were *only* about 20k documents at the time.
I was charged with the task of writing said programs to read these damn files. Unfortunately, they weren't all marked up the same...
Now that we have XML and standard libraries for reading XML, it makes handling these documents a snap. Any program that needs to read them can simply have an XML parser plugged into it. The integrity of the documents themselves is maintained by the fact that they don't work if they're not properly marked up. So all these documents work, 100% all the time, and writing programs to read said documents is very simple and not prone to errors.
Yay for XML! :)
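That integrity property is easy to demonstrate: any conforming parser rejects input that isn't well-formed, so a broken document fails loudly instead of being half-read. A sketch (documents invented for illustration):

```python
import xml.etree.ElementTree as ET

good = "<doc><p>hello</p></doc>"
bad  = "<doc><p>hello</doc>"      # <p> never closed

print(ET.fromstring(good).find("p").text)  # hello

# A non-well-formed document raises instead of half-parsing.
try:
    ET.fromstring(bad)
    rejected = False
except ET.ParseError:
    rejected = True
print(rejected)  # True
```

This is exactly the guarantee hand-written HTML never gave: a document either parses completely or not at all.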
So, to sum up, XML is doing what it was meant to do, no less. Unfortunately, it's also probably doing a bit more as well - XSL, anyone? Yeck, why not just have a standard XML scripting language; why the need for the language to be valid XML itself?
"Load into memory" vs. "Callbacks" (Score:4, Informative)
XML is just one of the tools in our collective toolbox. Use it where it helps you solve a problem. Don't bother if it doesn't.
XML is not a programming language... (Score:3, Insightful)
XML: bad implementation of a good idea (Score:5, Interesting)
Now, I have to say: a universal syntax for tree-structured data is very useful: experience since the 1970s with one such universal syntax, Lisp, has shown that. It is unfortunate that XML is about the worst imaginable implementation of that idea. XML combines being a nuisance to type with having comparatively complex semantics and lots of redundant features.
What is ironic is that the same "real world programmers" who wax ecstatic about XML also condemn Lisp as too complicated and too difficult to read. The universal syntax that XML aspires to, Lisp syntax delivered many decades ago. It's just that prejudice and ignorance caused people to re-invent the wheel (and in square form, too) in the form of XML.
I am pretty torn between whether XML is a blessing or a curse. We really need something like it, but XML is so bad that it may not even live up to the level of "poorly designed industry standard but better than nothing".
I agree, of course... (Score:5, Insightful)
XML got one thing right over unadorned S-expressions - document packaging, specifically versioning and character-set labeling. XML inherited this from SGML, and it's one of the few things it took from there that was actually worth keeping.
For a good laugh, read the Origin and Goals [w3.org] section of the XML spec. Of the ten goals for XML listed there:
XML shall be straightforwardly usable over the Internet.
XML shall support a wide variety of applications.
XML shall be compatible with SGML.
It shall be easy to write programs which process XML documents.
The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
XML documents should be human-legible and reasonably clear.
The XML design should be prepared quickly.
The design of XML shall be formal and concise.
XML documents shall be easy to create.
Terseness in XML markup is of minimal importance.
I'd say two of them were met, but were bad ideas (SGML compatibility, terseness unimportant), and five of them were completely missed (ease of use, human legibility, quickly designed, formal and concise, ease of creation).
Thirty per cent is a failing grade, folks...
Re:I agree, of course... (Score:3, Interesting)
Shameless self-plug, but I have a critique of XML's failure to meet its goals [eastcoast.co.za] on my home page. You may find it interesting.
Re:I agree, of course... (Score:3, Insightful)
That would be a valid argument if XML were designed to be regularly input by humans. But XML is so cumbersome otherwise that almost all of it will be either machine generated or edited in special edi
In related news... (Score:3, Funny)
On the 1st of January, 2003, Bjarne Stroustrup gave an interview to the IEEE's 'Computer' magazine.
Naturally, the editors thought he would be giving a retrospective view of twelve years of object-oriented design, using the language he created.
By the end of the interview, the interviewer got more than he had bargained for and, subsequently, the editor decided to suppress its contents, 'for the good of the industry' but, as with many of these things, there was a leak.
Here is a complete transcript of what was said, unedited, and unrehearsed, so it isn't as neat as planned interviews.
Interviewer: Well, it's been a few years since you changed the world of software design, how does it feel, looking back?
Stroustrup: Actually, I was thinking about those days, just before you arrived. Do you remember? Everyone was writing 'C' and, the trouble was, they were pretty damn good at it. Universities got pretty good at teaching it, too. They were turning out competent - I stress the word 'competent' - graduates at a phenomenal rate. That's what caused the problem.
Interviewer: Problem?
Stroustrup: Yes, problem. Remember when everyone wrote Cobol?
Interviewer: Of course, I did too
Stroustrup: Well, in the beginning, these guys were like demi-gods. Their salaries were high, and they were treated like royalty.
Interviewer: Those were the days, eh?
Stroustrup: Right. So what happened? IBM got sick of it, and invested millions in training programmers, till they were a dime a dozen.
Interviewer: That's why I got out. Salaries dropped within a year, to the point where being a journalist actually paid better.
Stroustrup: Exactly. Well, the same happened with 'C' programmers.
Interviewer: I see, but what's the point?
Stroustrup: Well, one day, when I was sitting in my office, I thought of this little scheme, which would redress the balance a little. I thought 'I wonder what would happen, if there were a language so complicated, so difficult to learn, that nobody would ever be able to swamp the market with programmers?' Actually, I got some of the ideas from X10, you know, X windows. That was such a bitch of a graphics system, that it only just ran on those Sun 3/60 things. They had all the ingredients for what I wanted. A really ridiculously complex syntax, obscure functions, and pseudo-OO structure. Even now, nobody writes raw X-windows code. Motif is the only way to go if you want to retain your sanity.
Interviewer: You're kidding...?
Stroustrup: Not a bit of it. In fact, there was another problem. Unix was written in 'C', which meant that any 'C' programmer could very easily become a systems programmer. Remember what a mainframe systems programmer used to earn?
Interviewer: You bet I do, that's what I used to do.
Stroustrup: OK, so this new language had to divorce itself from Unix, by hiding all the system calls that bound the two together so nicely. This would enable guys who only knew about DOS to earn a decent living too.
Interviewer: I don't believe you said that...
Stroustrup: Well, it's been long enough, now, and I believe most people have figured out for themselves that C++ is a waste of time but, I must say, it's taken them a lot longer than I thought it would.
Interviewer: So how exactly did you do it?
Stroustrup: It was only supposed to be a joke, I never thought people would take the book seriously. Anyone with half a brain can see that object-oriented programming is counter-intuitive, illogical and inefficient.
Interviewer: What?
Stroustrup: And as for 're-useable code' - when did you ever hear of a company re-using its code?
Interviewer: Well, never, actually, but...
Stroustrup: There you are then. Mind you, a few tried, in the early days. There was this Oregon company - Mentor Graphi
Not really a joke anymore (Score:3, Funny)
In a stunning move, C++ creator Stroustrup identifies the fine line between a ridiculous self-parody of over-engineering, and soul-destroying evil, and pole-vaults over it [att.com].
Repeat after me:
You don't overload whitespace.
You don't overload whitespace.
You don't overload whitespace.
You don't overload whitespace.
Re:In related news... (Score:3, Funny)
We stopped when we got a clean compile on the following syntax: At one time, we joked about selling this to the Soviets to set their computer science progress back 20 or more years.
too hard (Score:5, Funny)
Now I'll go read the article.
Hahahah finallly something I know a lot about. (Score:5, Interesting)
I can tell you now that XML is a Bad Thing. It strives to excel at too many things at once, and becomes inefficient and complex as a result.
XML tries to eliminate the step of writing parsers for data, although writing parsers has never been a significant part of application development to begin with. Its rigidity instead forces you to waste time taking the output of the parser (a complex tree) and putting it into meaningful form. XML document tree traversal = 10000x more complex than getting column data out of a ResultSet... Unfortunately it is also a billion times slower to parse XML than it is to perform a medium complexity database query.
The real problem is that XML only partly addresses the problems that relational databases solved years ago (organizing data and making it accessible), but it does it without any of the efficiency benefits of a well designed database server. In my opinion, 90+ percent of the places where XML is being used today would be better served by using columns in a relational database table to store object fields. You get indexing, you get universal, simple and efficient searching, and you get speed.
XML has too many faults to really list in one short post. The truth of the matter is that it tries to do too many things and DOESNT DO ANYTHING WELL. Sort of like if someone tries to be skilled in all musical instruments but ends up being, at best, mediocre in a few of them.
Re:Hahahah finallly something I know a lot about. (Score:5, Insightful)
what the hell are you talkin` about? (Score:3, Insightful)
I agree with this, to an extent. If you don't like/need all the fluff, don't use it. XML is only as complicated and inefficient as you want it to be.
XML tries to eliminate the step of writing parsers for data, although writing parsers has never been a significant part of application development to begin with.
It's not just about writing parsers for a single program. What happens when you have several prog
Re:Hahahah finallly something I know a lot about. (Score:3, Insightful)
This is true if you are parsing your own data, but what about parsing third party data? I did that for years and every day was full of dealing with corruption, misformatted files, or formats that varied from the documentation because some new guy was making them on the other end.
True, these problems can happen with XML but they are much easier t
Re:Hahahah finallly something I know a lot about. (Score:3, Insightful)
Your main problem is that you think a tree should be a table. I think you need to get off of your RDBMS religion and realize that there's a whole world of data out there that doesn't need to be shoved into a table before it can be used.
XML is a MARKUP language (Score:4, Insightful)
For storing arbitrary data, and use as a message format (as in SOAP), it's not so good because it has markup-like features, such as the distinction between attributes and elements and the distinction between text and element nodes. (The latter in particular is a huge pain, I wish people would agree to only use text nodes in leaf elements.)
This is why XML parsers/generators, once they get into entities and DTDs and so on, become really a lot more complicated than they would need to be if XML just stored a tree of elements.
However, it's the standard, so we might as well just shut up and use it.
My opinions have no special importance but it *is* important to remember that XML is a markup format that is being used mostly for things other than markup.
similar problem with MathML (Score:5, Insightful)
The good thing about XML is that whatever emerges in the future, it will always be possible to convert old documents into any new form using simple tools.
The critics do have a point, though: unlike LaTeX or HTML, which can be written easily by hand, XML can become too bloated to be authored directly by humans.
Similar problem with MathML:
Latex: $x^5+3x-9=0$
MathML:
<mrow>
<mrow>
<msup>
<mi>x</mi>
<mn>5</mn>
</msup>
<mo>+</mo>
<mrow>
<mn>3</mn>
<mo>&#x2062;</mo>
<mi>x</mi>
</mrow>
<mo>-</mo>
<mn>9</mn>
</mrow>
<mo>=</mo>
<mn>0</mn>
</mrow>
You can write complicated formulas in LaTeX directly but it is
almost impossible to do so in MathML, where one has to rely
on tools to generate it (i.e. export it with Mathematica or
TeX -> MathML converters). Wouldn't it be nice if browsers
understood a basic version of LaTeX? (That it is possible
has been shown with IBM's techexplorer plugin.)
XML parsing models (Score:4, Informative)
If I understand it correctly, the author is lamenting that neither of the standard ways of parsing XML in a scripting language fits the straightforward model of scanning for something relevant and then acting upon it, where the two models are: 1) read in the whole file and make a tree (takes up too much memory, is slow, etc.); or 2) use a callback interface.
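The callback model can be sketched in Python's standard library (`xml.sax`); the handler class and element names are hypothetical, chosen only to show how state must be threaded through the handler object:

```python
import xml.sax
from io import StringIO

class ItemHandler(xml.sax.ContentHandler):
    """Callback-style (SAX) parsing: the parser drives, your code reacts."""
    def __init__(self):
        super().__init__()
        self.items = []  # state lives on the handler, not in a loop

    def startElement(self, name, attrs):
        # Called by the parser for every opening tag, relevant or not.
        if name == "item":
            self.items.append(attrs.get("id"))

handler = ItemHandler()
xml.sax.parse(StringIO('<list><item id="a"/><item id="b"/></list>'), handler)
print(handler.items)
```

This is exactly the inversion of control being complained about: instead of a loop that scans for what it wants, you get called for everything and must remember where you are.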
The style of perl script he was seeking was a simple loop model:

while (<>) {
    next if /ignorable/;
    if (/thing-one/) {
        ...
    }
    elsif (/thing-two/) {
        ...
    }
}
To me the thing that distinguishes this the most from the provided XML parsing interfaces is that it has a minimal amount of state.
So isn't what is needed a corresponding structure to the while () above that iterates over the tree-nodes of the XML-encoded data structure, in a depth-first preorder traversal (to avoid having to build the whole tree first)? One could imagine a parser object that scans through the XML file returning nodes (and their parent history) while maintaining an absolute minimum of state. If one wanted to build an in-memory representation of a subtree given a node, then one can always do so when one finds the node one wants.
Such an interface wouldn't be good for integrity verification or the like, but for the sort of application the author was talking about, it would seem ideal. Much less flexible than the normal models, sure, but much easier to work with when the problem fits this sort of description. Perhaps I'm underestimating the difficulty of the task, but it doesn't sound too hard to write, given that it is doing so much less than the fully-featured XML parsing interfaces.
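For what it's worth, Python's standard library offers roughly this shape of interface: `iterparse` walks the document as a stream of events with minimal state, and you can discard subtrees as you go. A sketch, with invented element names:

```python
import xml.etree.ElementTree as ET
from io import BytesIO

doc = BytesIO(b'<feed><entry>one</entry><skip/><entry>two</entry></feed>')

found = []
# Pull-style loop: scan for relevant nodes, act on them, move on --
# close in spirit to the perl while(<>) idiom described above.
for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "entry":
        found.append(elem.text)
    elem.clear()  # drop the finished subtree to keep memory minimal
print(found)
```

As the comment predicts, this does much less than a full DOM or SAX interface, and that is precisely why it is pleasant for the scan-and-act class of problem.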
The other problem is the awkwardness of the use of XML in O-O languages such as addressed in the article [fawcette.com] linked-to by Tim Bray in his article. Though I haven't used this particular program, this seems to be the problem that FleXML [sourceforge.net] is trying to address. When you don't need all of the flexibility that XML can provide, but instead have a fixed schema that your XML-representation follows, why not have your parser automatically built to read it? People have used lex/flex for scanning text files for decades --- in these days of XML Schema, it should be even easier. If FleXML lives up to its promise, it will be. Has anyone here used FleXML and are willing to comment on how well it addresses these sorts of problems?
Stay on topic - problem isn't XML standard (Score:5, Interesting)
So to be clear, XML is here to stay. (An example of XML penetration: there is a working schema for using XML in the farming industry [agxml.org]!) Just imagine the chaos that will ensue once MS Office saves all documents in true XML.
My take on the problem Tim's really talking about: inconsistency and the proliferation of people who want to be the next prodigy in their area of expertise. There are so many parsers and interfaces, even within a language domain, because vendors want to put their own spin on everything. The alphabet soup that results confuses the hell out of people. This has even happened in the open source world, where I can do a Google search on "php xml parsing" and read articles on no less than 10 different approaches. For the average guy who has been told by a project manager, "We need to take these XML files from our business partner, extract and store the data in our database," you need a standard approach. Not to stifle thought and innovation, yes, you should take the initiative to understand whether an event-driven approach (SAX parser) or an in-memory object model approach (DOM parser) is right for the job. After all, you do get paid to do this, so earn your keep! But the XML community hasn't done a good job of specifying best practices and leading people by the nose to a solution. Every XML book I've seen furthers the confusion, with each author offering his opinion with a slight variation of how to do things, leading programmers/scripters/whatevers to use the approach they most recently read about, and not necessarily the one that time has proven out to be the most efficient.
Part of this is the divide between the .Net guys, the Java camp, the Perl/PHP folks, etc., but in the spirit of interoperability, maybe the XML promoters just need to dumb things down a bit to get some simple concepts and best practices into the hands of Joe Sixpack Programmer. Maybe a central authority, a la java.sun.com or php.net?
XML is bad like Democracy is bad (Score:5, Insightful)
I had a problem at work when we switched from AutoCAD to Solidworks. Our manufacturing software couldn't read the new BOM files, which were Excel's
Java XML Parsing (Score:3, Interesting)
When XML was first introduced, there were no standard libraries in the JDK to facilitate parsing. What's more, the few projects out there varied wildly in how you actually used their DOM tree or SAX callback mechanism. This isn't necessarily a Bad Thing (tm), it's the same problem every emerging technology faces: immature tools. This is basic biology - lots of competing implementations (life forms), each struggling for community (resources).
So, time goes by, and eventually a handful of implementations emerge dominant. Some dominate due to performance, and some dominate because of ease of use of the API. The victors in this game then sometimes go through a merging process of their own, where the performance victors lend technology to ease of use API victors. After a lot of merging (and flames usually), one or two projects emerge out of the XML kingdom as the dominant players. In my opinion, in the world of Java these are Xalan (Xerces) and Dom4J.
During the maturation process, Sun comes along and looks at the technology and says "Wow this XML stuff is really here to stay. What implementations are out there, and what similarities exist between them? How can we facilitate growth of these projects?" They realize that certain classes (like org.xml.sax.InputSource) are common entities in both projects (even if the class InputSource doesn't exist), and they standardize it. For a reference to all of the XML standards implemented in the JDK, do a search on java.sun.com for JAXP, JAXM, and JAXB (just to name a few).
At this point, the XML projects come back and work in support so that they can be "JAXP compatible" (again this is part of the biological process of evolution). This ensures that the projects work well with whatever Sun ships in the JDK.
In the end (which is really where we are now) you end up with a pluggable architecture, where the JDK provides some common functionality or interfaces that are implemented by open source projects.
Java XML parsing was damn hard back in the day - you had to marry your code to a specific project. But these days with the standardization that has taken place (thanks Sun!), as long as you write code that makes use of the JAXP specification you can plug in any JAXP-compliant parser into your app and things *should* work.
The difficult problem is getting other entities (Application Servers for example) to get up-to-date with the standards. WebLogic 6.1 comes with a non-JAXP compliant parser, and thus doesn't work with the latest JDK, Xalan, etc.
Re:But XML is great for computers... (Score:3)
OK... This is exactly why you SHOULD read the articles... I just posted blatantly off topic due to an annoying quick-read = misread mistake... yay me
Mod me down, I deserve it
Re:But XML is great for computers... (Score:5, Insightful)
You mean like most other non-xml config files in /etc, like say hosts, DNS zone files, named.conf, passwd/shadow, hosts.allow/deny, sendmail.mc or resolv.conf (etc. etc.)? These have standard layouts, text-based, can be edited by hand and can be easily parsed.
My point: XML is over-used for a lot of things. In some places it makes sense, but in many places it doesn't.
Re:But XML is great for computers... (Score:5, Insightful)
You just gave the best argument for adopting XML as widely as possible. Yes, all these can be parsed (with the possible exception of sendmail's config files which may be Turing-complete) but they all require *different* code for each config file. If they were in XML you'd still need different semantic code, of course, but a whole wodge of syntax issues (how do I quote strings, how do I escape newlines, how do I mark nested scopes, what happens when the string delimiter character occurs inside a string, how do I deal with comments, what is the character set, is there a formal grammar for the document, etc etc) would be dealt with. Maybe not in the way that you or I think is perfect - IMHO XML is a little bit verbose compared to say Lisp- or Tcl-style encodings. But they would be dealt with *once*. No need to learn a new or almost-the-same-but-slightly-different set of syntactic conventions for every single config file.
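The "syntax dealt with once" point can be sketched: a hypothetical XML-ified hosts-style config needs no bespoke quoting, comment, or escaping rules, just the one parser. The element and attribute names below are invented:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML rendering of a hosts-style config file: comments,
# quoting, escaping and nesting all follow the single XML grammar
# instead of per-file conventions.
conf = """<hosts>
  <!-- a comment needs no per-format convention -->
  <host ip="127.0.0.1" name="localhost"/>
  <host ip="192.168.0.2" name="files &amp; prints"/>
</hosts>"""

root = ET.fromstring(conf)
table = {h.get("name"): h.get("ip") for h in root.findall("host")}
print(table["localhost"])
```

Note the `&amp;` escape: the delimiter-inside-a-string problem the comment lists is solved once, by the format, rather than once per config file.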
Maybe XML is over-used for a lot of things, but making up your own file format is definitely over-used a lot more. Simple line-oriented files are reasonable to have as plain text, for everything else please avoid the temptation to reinvent the wheel by devising a new syntax and block structure.
Re:But XML is great for computers... (Score:4, Interesting)
How, exactly, has XML simplified *anything*?
Re:But XML is great for computers... (Score:3, Interesting)
I admit that interfaces like DOM are rather clunky. But your regexps would break if a new field were added to
The answer is a better interface for reading XML files, one that knows about the format (which is described in a DTD or other grammar) and can present a neat interface like
passwd.user["abc01"].real_name
(or whate
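One way such a grammar-aware interface could look, as a rough Python sketch over a hypothetical XML passwd file (the format, field names, and wrapper classes here are all invented for illustration):

```python
import xml.etree.ElementTree as ET

class Record:
    """Wrap an element so child-element fields read like attributes."""
    def __init__(self, elem):
        self._elem = elem

    def __getattr__(self, field):
        child = self._elem.find(field)
        if child is None:
            raise AttributeError(field)
        return child.text

class Table:
    """Index records by a key attribute, for dict-style lookup."""
    def __init__(self, root, key):
        self._by_key = {e.get(key): Record(e) for e in root}

    def __getitem__(self, key):
        return self._by_key[key]

doc = ET.fromstring(
    '<passwd><user id="abc01"><real_name>Ada</real_name></user></passwd>')
passwd_user = Table(doc, "id")
print(passwd_user["abc01"].real_name)  # the access style the comment asks for
```

A real version would be generated from the DTD or schema rather than hand-rolled, which is the comment's actual point: the grammar already tells you what accessors to build.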
Re:The API is XPath (Score:3, Interesting)
Also, knowing the grammar
Re:But XML is great for computers... (Score:3, Interesting)
In C# at least:
Really doesn't seem that difficult to me. Bryan
Re:But XML is great for computers... (Score:3, Insightful)
Had you read the article, his point was that you shouldn't have to slorp in the whole file just to read one field. In fact, he's using perl and regexp to avoid having to do things like Doc.Load.
The author claims that existing tools are oriented toward either converting to a big internal data structure, or to processing gradually using callbacks, neither of which is optimal for small fast code or simple programming.
Re:But XML is great for computers... (Score:3, Informative)
all these can be parsed but they all require *different* code for each config file.
Nonsense, if you're smart about your parser, you'll need about 3. If you're not smart about your parser, you'd probably design lousy XML anyway.
how do I quote strings, how do I escape newlines, how do I mark nested scopes, what happens when the string delimiter cha
Re:Meta XML (Score:3, Informative)
This would be better:
<date year="2003" month="3" day="18"/>
I used to think XML was just horribly bloaty and ugly, now I think it's more like VB in that it's easy to make something that's very poorly designed.
Re:xml (Score:5, Informative)
Its biggest use right now is data interchange. Moving bits between one magic widget and another. And for that, HTML sucks. It just can't represent arbitrary data. Programming languages (C++, Java) are for instructions, not data.
XML fits in perfectly where it's at use-wise. Tim Bray is talking about programming for it: The available interfaces are very counter-intuitive, and that's what Bray's getting at.
Re:xml (Score:3, Informative)
Re:xml (Score:2, Informative)
Separating the content from how it is displayed makes it easier to display it in pretty much any format. By taking a single XML document you could create a page that looks great on Mozilla, great on IE, a WAP enabled phone, Opera, Microwave, Fridge - whatever!
XML is NOT a programming language. It is more l
Re:xml (Score:3, Insightful)
XML's not a language -- it's a grammar, a guide of sorts, for hierarchical data storage. You design file formats that conform to XML. The goal is that it's easy to read that file format in any language or platform (given an XML processor/parser for that platform), since your data is stored in plain human-readable UTF-8-encoded text.
Might as well poke fun
WTF? (Score:5, Informative)
HTML is a page description language.
C++ and Java are data processing languages.
XML is a data description language.
You can certainly describe a page using XML, and I see no reason why you couldn't construct a programming language using XML syntax, but how on earth are you going to store data in C++ or Java?
Re:This does not bode well (Score:5, Insightful)
Did you actually read the article?
I can sum it up very easily:
He's looking for a nicer api for processing XML, he's not looking to replace XML entirely.
Re:This does not bode well (Score:2, Insightful)
The thing is, back in the day when people wore onions on their belts, programmers had to be convinced that UNIX's "file is a bag of bytes" form of data access was better than the more direct/powerful/convenient methods they'd been used to, like raw access to the drive. But programmers
Re:This does not bode well (Score:4, Insightful)
XML is not a stream - it has a hierarchical tree structure, and IMHO is not useful for anything that (a) by its very nature is a continuous stream of data (say, a log file), or (b) wants to be processed as a stream (because it's big, and would require too much memory to be handled as a single data structure).
The problem seems to be that XML is good for portability and standardization, and therefore is abused for things it's not well suited for (the well-known 'if all you have is a hammer, every problem looks like a nail' syndrome).
Re:of course there is! (sorry for the prev post) (Score:4, Funny)
<?xml version="1.0" encoding="bork">
<troll>
<sovietrussiathing>In SOVIET RUSSIA, XML standardizes YOU!!</sovietrussiathing>
<offtopic>Let's bomb the french!</offtopic>
<flamebait>Anyway, XML is for loosers!</flamebait>
</troll>
Re:Don't Blink (Score:3, Informative)
If your subdialect keeps changing, that's down to the people defining the syntax, not the language itself.
OOOPS - Re:What's the big deal (Score:3, Interesting)
He wants to write:
while (<STDIN>) {
next if (X<meta>X);
if (X<h1>|<h2>|<h3>|<h4>X) { $divert = 'head'; }
elsif (X<img src="/^(.*\.jpg)$/i>X)
{ &proc_jpeg($1); }
# and so on...
}
Using the callback he repeatedly calls irritating (and repeatedly fails to say why):
my $p1 = new XML::Parser(Handlers => { Start => \&handle_start });
$p1->parsefile($INFILE);
sub handle_start
{
my (