W3C Gets Excessive DTD Traffic 334
eldavojohn writes "It's a common string you see at the start of an HTML document, a URI declaring the document type, but one that is often fetched automatically, causing undue traffic to the W3C's site. There's a somewhat humorous post today from W3.org that reads as a cry for sanity, asking developers to stop building systems that automatically query this information. From their post, 'In particular, software does not usually need to fetch these resources, and certainly does not need to fetch the same one over and over! Yet we receive a surprisingly large number of requests for such resources: up to 130 million requests per day, with periods of sustained bandwidth usage of 350Mbps, for resources that haven't changed in years. The vast majority of these requests are from systems that are processing various types of markup (HTML, XML, XSLT, SVG) and in the process doing something like validating against a DTD or schema. Handling all these requests costs us considerably: servers, bandwidth and human time spent analyzing traffic patterns and devising methods to limit or block excessive new request patterns. We would much rather use these assets elsewhere, for example improving the software and services needed by W3C and the Web Community.' Stop the insanity!"
Wow (Score:2, Funny)
Re:Wow (Score:5, Insightful)
Re:Speaking of caches... (Score:5, Insightful)
Re:Wow (Score:4, Interesting)
I don't claim to know why you have a problem with webmasters (I am not one), but if you're a programmer and perceive them to have less technical ability than yourself, well.. your ilk seem to be the "clowns" this time.
Re:Wow (Score:5, Funny)
Probably for the same reason that many other people hate them. They announce themselves to people as being a "webmaster". It's a really stupid title. They don't perform wizardry. If I can't at least be a "codemaster", and maybe our plumber gets to be called a "pipemaster", then we'll continue to mock anyone who uses the word. Oooh, "plungemaster". I think he'd go for that.
Re: (Score:3, Insightful)
It's fallen into common usage. What else would you suggest? "Web Designer", "Network Architect" and all the other 'bits' of webmastery are already taken. Perhaps "Web Systems Administrator".
Re:Wow (Score:4, Funny)
Re:Wow (Score:5, Insightful)
Why on earth are you blaming webmasters? They are just about the only people who cannot be responsible for this. People who write HTML parsers, HTTP libraries, screen-scrapers, etc, they are the ones causing the problem. Badly-coded client software is to blame, not anything you put on a website.
Gumdrops (Score:5, Insightful)
The only thing I'm unclear on is whether your average browser is contributing to this problem when parsing properly written documents.
Re:I'd write the crap code. (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
Actually, the term is widely used as a synonym for spidering a site. It's rare I see it used in the way you describe. Sorry for the confusion.
Re: (Score:2, Informative)
Re:Wow (Score:5, Insightful)
If you ask me, the W3 asked for this. They didn't consider the consequences, and now that they're under siege, they want to blame everyone else.
Re:Wow (Score:5, Insightful)
The expectation is that software would ship with its own copies of "well-known" DTDs with associated catalog entries; the URL is only there as a fallback. The problem is ignorant and/or lazy software developers not implementing catalogs and simply downloading from the URI each time.
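For reference, the catalog mechanism described here is standardized as OASIS XML Catalogs; a minimal catalog entry might look roughly like this (the local file path is purely illustrative):

```xml
<?xml version="1.0"?>
<!-- Map the well-known public identifier to a local copy of the DTD,
     so the parser never needs to contact www.w3.org for it. -->
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <public publicId="-//W3C//DTD HTML 4.01//EN"
          uri="file:///usr/local/share/dtds/html4-strict.dtd"/>
</catalog>
```

A catalog-aware parser consults this mapping first and only falls back to the system-identifier URL on a miss.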
Re:Wow (Score:5, Informative)
"Webmasters" refers to people who run websites, not the W3C. And this particular feature is an artefact of SGML, which was around for over a decade before the W3C ever existed.
You mean like how RFC 2616 describes the caching mechanism that is being ignored by the problem clients? Or are you referring to the established-for-decades SGML system catalogue that they mention in the HTML 4 specification [w3.org] multiple times [w3.org]?
If people writing client software actually did what they were supposed to, this wouldn't be a problem. This is not a designed-in bug, this is caused by a minority of developers eschewing the specifications and standard practice out of either ignorance or apathy.
Re:Wow (Score:5, Insightful)
Wow, it just struck me... welcome to Microsoft's world.
Their security was so bad for so many years because they worked on the assumption that:
1) Programmers know what they're doing
2) Programmers aren't assholes
Of course, the success of malware vendors (and Real Networks) proved those two assumptions wrong many years ago, and probably 90% of the development work on Vista was adding in safeties to protect against idiot programmers and asshole programmers.
And now the W3C is getting their lesson on a golden platter.
In short, here's the lesson learned:
1) Some proportion of programmers don't know what they're doing and never will
2) Some proportion of programmers are assholes
Re: (Score:3, Interesting)
Failing to be aware of how your users will likely behave is a design bug. If a tiny fraction of your users make a particular error it's probably their fault. If a significant proportion of your users make a particular error, it's your fault.
Re:Wow (Score:5, Insightful)
Yeah, the standard. If your shitty http engine is too shitty to process html without having to look up the DTD on the w3c's website every single page, your shitty http engine shouldn't be allowed out on the internet.
Re:Wow (Score:5, Insightful)
Yeah, the standard. If your shitty http engine is too shitty to process html without having to look up the DTD on the w3c's website every single page, your shitty http engine shouldn't be allowed out on the internet.
Good and jolly bacon bits, please mod parent up. I realize that their comment might come off as harsh, but crap, come on. If one is building an application, would one really want to have to connect to a website to get instructions on how to read a filetype? Especially when all it would take is a single wget and including those instructions with the application to avoid all of this.
Furthermore, it would seem that the process of reading a file would be far faster if the processing instructions were on the local file system rather than on a remote host. If one were really worried about changes to the instructions, one could code a routine to update the DTD whenever the application is updated; if the app isn't one that *would* be updated, one could always have it run a diff against the W3C's DTD every few months - after it's been standardized, it's not like the DTD is going to change on a daily basis. While not a complete cure, it'd still be far more considerate to the W3C's bandwidth than hitting it every request, or even every time a program is started.
Honestly, I wouldn't blame them if they 302'd the file to a page that, upon CAPTCHA'd request, made the file temporarily available for download, so that vendors could fix their broken software. They're obviously far more considerate and forgiving people than I - and, I suspect, many of you fellow Slashdotters - tend to be.
*puts on flame-resistant suit*
Re:Wow (Score:5, Insightful)
It's more like this: your app should *never* query the DTD. If the DTD changes, your app's code probably needs to change and your app should *never* try to parse using a DTD that hasn't been tested by a human being, or at least through your regression tests. Any changes to DTDs should be handled by updating the app itself.
The only exception to this is an app that also happens to be a development tool.
Re:Wow (Score:5, Insightful)
then there's little point in having one at all, is there.
You're quite right though, copy the DTD, develop against it, publish without the DTD being present in your released app. simple. If only the W3C hadn't specified it as being required to be present. If only every sample didn't have it shown in place.
Re: (Score:2)
The Solution (Score:5, Funny)
Re:The Solution (Score:5, Funny)
Re: (Score:2)
Do what.... (Score:5, Funny)
Put links to Goatse in the definitions!
Re:Do what.... (Score:4, Insightful)
Who made the DTD a URL? (Score:2, Interesting)
Re: (Score:3, Insightful)
and use that instead.
Re:Who made the DTD a URL? (Score:5, Insightful)
The URL can be seen as a backup ("in case you don't know the DTD for W3C HTML 4.01, you can create a local copy from this URL" - in the future, when people have forgotten HTML 4.01, that can be useful), or it can be used the same way XML namespaces are used - you don't have to send an HTTP request to http://www.w3.org/1999/xhtml [w3.org] to know that a document using that namespace is an XHTML document - it's just another form of uniform resource identifier (URI), like a URN or a GUID.
What the W3C is having a problem with is applications that decide to fetch the DTD on every single request. That's just crazy. Why do you even need to validate it, unless you're a validator? Just try to parse it - it probably won't validate anyway, and you'll have to either do it in some kind of quirks mode or just break. If you can parse it correctly, does it matter if it validates? If you can't parse it, does it matter if it validates? And if you actually do want to validate it, why make the user wait a few seconds while you fetch the DTD on every page request? The only reasonable way this could happen that I can think of is link crawlers who find the URL - but don't link crawlers usually avoid revisiting pages they just visited?
Re: (Score:2)
Other than that you're spot on.
Re: (Score:2)
The external DTD subset isn't just for error checking. It defines the character entities and the content model for element types. If you don't have access to the DTD (or hard-coded HTML-specific behaviour) you can't parse it fully.
What's their website URN, anyway? (Score:4, Funny)
(ducks and runs)
Re:Who made the DTD a URL? (Score:4, Insightful)
An address is effectively a unique ID.
And the advantage of an address is that it's a logical place to put the DTD if you don't happen to have your own copy. It's a unique ID and a map to where to get it if you don't already have it.
What were they thinking?
They were thinking people wouldn't needlessly continually redownload the same page over and over and over again.
The root DNS servers operate under the same assumption. Do you think they were crazy too? After all, you can force your DNS queries to go through the root servers every time if you really want to. You're not supposed to, and doing so needlessly puts more load on them, but you could.
The problem is with the docs (Score:5, Insightful)
Simple caching on the client side could already improve the situation a whole lot... BUT:
When people implement something for html-ish or svg-ish or xml-ish purposes, they go google for it: "Howto XML blah foo" - result, they're getting basic screw-it-with-a-hammer tutorials that don't point out important design decisions, but instead Just Work - which is what the author wanted to achieve when they started writing the software.
It's a little bit like people still using ifconfig on Linux though it's been deprecated and superseded by iproute2. But since most tutorials and howtos on the net are just dumbed-down copypasta for quick and dirty hacks - and since nobody fucking enforces the standards - nobody does it the Right Way.
So if I start writing some sax-parser, some html-rendering lib, some silly scraper, whatnot... and the first example implementations only deal with basic stuff and show me how to do it so basic functionality can be implemented... and I'm not really interested in that part of the program anyways, because I need it for putting something more fancy on top... once I'm through with the initial testing of this particular subsystem, I won't really care about anything else. It works, it doesn't seem to hit performance too badly, it's according to some random guy's completely irrelevant blog - hey, this guy knows what he's doing. I don't care!
This story hitting
Re:The problem is with the docs (Score:5, Informative)
ip addr add 192.168.1.2/24 dev eth0
ip link set eth0 down
etc. etc.
Leave it to Slashdot... (Score:2, Funny)
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<title>Slashdot: News for nerds, stuff that matters</title>
Re:Leave it to Slashdot... (Score:5, Informative)
Re: (Score:2, Insightful)
Re: (Score:3, Informative)
From the article, it seems like the problem is with software that processes XML, like a web crawler, not a browser.
Browsers are also pretty good about caching stuff.
Re: (Score:3, Informative)
FTA:
I don't claim to fully grasp what software is causing the problem, but it does seem to affect more than just XML.
Re: (Score:2)
Re: (Score:2)
Stuff that deals with generic XML and is being used for xhtml.
Re: (Score:3)
Re: (Score:2)
Layne
Umm, no. (Score:5, Informative)
If you want to complain, it should be the fact that slashdot is serving a strict.dtd when it doesn't validate against it.
Re: (Score:2)
It's the whole design of HTML/XML, that needs to have DTD files in the first place to do the processing, that is all wrong. I warned about this well over 12 years ago. At least what little code I've written to process HTML/XML has always entirely ignored the DTD.
Re:Umm, no. (Score:5, Interesting)
Don't be so sure - even if your own code ignores it. Unless you're dealing with it on a raw character level, with most XML libraries and frameworks it can be quite tricky to prevent DTDs from being resolved behind your back.
I wrote some Java code a while back to parse some XML files that were downloaded from NCBI. Typical for NCBI data, this involved wading through terabytes of crap, and anything based on DOM wasn't going to work - so I used the lower-level event-based SAX library in JAXP. The files did have DTD declarations in them pointing to NCBI, which I wanted to ignore, since this was a one-time data mining operation. I just examined some sample files, figured out pseudo-XPath expressions for what I wanted to pull out, set up a simple state machine to stumble through the SAX events, and, not caring about the DTD, cleared the namespace-aware and validating flags on the SAXParserFactory. So I ended up with this:
File xmlgz = new File("ncbi_diarrhea.xml.gz");
DefaultHandler myHandler = new MyNCBIStateMachineHandler();
GZIPInputStream gzis = new GZIPInputStream(new FileInputStream(xmlgz));
SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setValidating(false);
spf.setNamespaceAware(false);
SAXParser sp = spf.newSAXParser();
InputSource input = new InputSource(gzis);
sp.parse(input, myHandler);
This ran fine, until it mysteriously froze up 18 hours into the run. It turned out to be caused by our switch to a different ISP, during which time the building lost its outside network access. The thread picked up the next file and immediately got blocked in the SAX library, trying to resolve the NCBI DTD.
This is how I fixed it:
spf.setFeature("http://xml.org/sax/features/external-general-entities", false);
spf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
Now I'm sure someone is going to come on here calling me a noob for not knowing to use an XMLReaderFactory (or whatever XML API class isn't obsolete this week) and setting a custom EntityResolver that can provide my local copy of the NCBI DTD when presented with its URI, but why should I even have to bother with that? XML pretends to be simple but it's seriously messed up.
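For anyone curious, the EntityResolver route really isn't much code. A sketch (class and method names are mine, not from any particular API beyond SAX/JAXP themselves) that answers every external-entity request with an empty DTD instead of going to the network:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class NoNetworkParse {

    /** Parses the document, serving an empty DTD for every external
     *  entity instead of fetching it over the network; returns the
     *  system IDs the parser tried to resolve. */
    public static List<String> parse(String doc) throws Exception {
        final List<String> requested = new ArrayList<String>();
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public InputSource resolveEntity(String publicId, String systemId) {
                requested.add(systemId);
                // A real resolver would look the IDs up in a local
                // catalogue; an empty stream keeps a non-validating
                // parse working and off the network.
                return new InputSource(new StringReader(""));
            }
        };
        SAXParserFactory spf = SAXParserFactory.newInstance();
        spf.setValidating(false);
        // SAXParser.parse() installs the DefaultHandler as the
        // EntityResolver as well as the ContentHandler.
        spf.newSAXParser().parse(new InputSource(new StringReader(doc)), handler);
        return requested;
    }

    public static void main(String[] args) throws Exception {
        String doc = "<!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01//EN\" "
                   + "\"http://www.w3.org/TR/html4/strict.dtd\"><html></html>";
        System.out.println("resolved locally: " + parse(doc));
    }
}
```

The catch the parent ran into still applies: this only works for parser objects you create yourself, not ones buried inside third-party libraries.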
Re: (Score:2)
No, Slashdot is not contributing to the problem, that is correct code. Just because a URI is listed, it doesn't mean that software should request it each and every time it sees it. Most code that sees that URI should already have a copy of the DTD in the local catalogue. It's only generic SGML software that cannot be expected to have a copy of the DTD.
Oy Vey... (Score:3, Interesting)
Here's an example of what correct markup should look like:
The documented standard uses a URL that links to the W3C's copy of the DTD only as an EXAMPLE. The standard DOES NOT REQUIRE usage of the URL to W3C's copy of the DTD. Responsible developers use a URL that links to their OWN COPY of the DTD. A
Re:Oy Vey... (Score:4, Interesting)
Delay (Score:5, Interesting)
Re:Delay (Score:4, Informative)
Re:Delay (Score:5, Insightful)
Re:Delay (Score:5, Funny)
Re: (Score:3, Informative)
Better: Had a build system once that looked for a host and had to wait for a TCP timeout before the build could continue. Had to happen several hundred times a build cycle.
The Java libraries do this down in their innards unless you're very careful to avoid it.
MIT needs a CDN! (Score:3, Interesting)
Simple solution (Score:5, Funny)
Continue to host the data referenced on a single T-1 line. That will cut your expenses to the bone since you'll never exceed 1.544 Mbps and that should be quite cheap. And, any dumfuxorz who fubarred their parser to not cache these basically static values will probably figure it out... very quickly.
You don't have to leave it on the T-1, maybe just 1 month out of the year. Every year.
Problem solved!
Starting on the 1st, fool (Score:5, Funny)
had this problem with Hibernate's website... (Score:3, Interesting)
the doctype was being used during an XSL transform in our build process; when the Hibernate site flaked out, the builds would fail intermittently.
solution was to add an xmlcatalog using a local resource.
bet this happens a lot more than most people realize; we'd been doing this for years before we noticed a problem.
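for anyone hitting the same thing, the fix looks roughly like this in an Ant build (the public ID, file names, and paths here are illustrative, not the actual Hibernate ones):

```xml
<!-- Resolve the DTD from a local copy instead of the vendor's
     website during the XSLT step. -->
<xmlcatalog id="localDtds">
  <dtd publicId="-//Hibernate/Hibernate Mapping DTD 3.0//EN"
       location="lib/hibernate-mapping-3.0.dtd"/>
</xmlcatalog>

<xslt in="src/mappings.xml" out="build/mappings.html" style="style.xsl">
  <xmlcatalog refid="localDtds"/>
</xslt>
```

with the DTD file checked into the project so builds work offline.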
'Web Community'? (Score:2)
Simple (Score:2)
Irony (Score:5, Funny)
So, w3c complains about their bandwidth, and the response is: The Slashdot Effect. Doesn't that make the old bandwidth problem seem less of a problem?
I'm just loving the irony in that.
Such an easy solution (Score:5, Funny)
Submitted this to /.? (Score:5, Funny)
Re:Submitted this to /.? (Score:5, Informative)
Re:Submitted this to /.? (Score:4, Informative)
I always thought it was stupid (Score:2)
1. The validation will not work if the remote server is down, or network is down, or your connection to the internet is down, or if the file is not accessible for any other reason.
2. You are at the mercy of some third-party to ensure that the file is correct and that it doesn't change.
3. You are susceptible to man-in-the-middle attacks.
Re: (Score:2)
Re: (Score:3, Interesting)
I wrote my thesis in Docbook and installed the processing toolchain on a laptop. Sometimes the processing would fail and sometimes it worked. After a while I noticed it worked when I was sitting behind my desk and failed when I was sitting on my bed. After some digging, I found out that the catalog configuration was wrong and the XML parser was downloading the DTDs from the web. This was before WiFi, so sitting on the bed meant the laptop did not have internet access.
The core of the problem is that most
Re: (Score:3, Informative)
I think I was using the Java version of Apache Xerces at the time for the Docbook processing. More recently I've used lxml in Python (based on libxml2), which has an option (no_network) to suppress DTD loading from the web, but you have to request that explicitly.
I've never seen a parser that caches DTDs by default, and I'm not sure I've ever seen one that skips downloading them by default, either.
I'm going to say this as clearly as possible. (Score:3, Informative)
There, you can now stop posting your hilarious "jokes".
Surprise (Score:4, Insightful)
I've got to say, this doesn't surprise me at all. In the time I've spent at my job, I've been repeatedly floored by the amazing conduct of other companies' IT departments. We've only encountered two people I can think of who have been hostile. Everyone else has been quite nice. You'd think people would have things set up well, but they don't.
We've seen many custom XML parsers and encoders, all slightly wrong. We've seen people transmitting very sensitive data without using any kind of security until we refused to continue working without SSL being added to the equation. We've seen people who were secure change their certificates to self-signed, and we seem to consistently know when people's certificates expire before they do.
But even without these things, I can't tell you how many people send us bad data and flat out ignore the response. We get all sorts of bad data sent to us all the time. When that happens, we reply with a failure message describing what's wrong. Yet we get bits of stuff all the time that is wrong, in the same way, from the same people. I'm not talking about sending us something that they aren't supposed to (X when we say only Y), I'm saying invalid XML type wrong... such that it can't be parsed.
We have, a few times while I've been there, had people make a change in their software (or something) and bombard us with invalid data until we either block their IP or manage to get into voice contact with their IT department. Sometimes they don't even seem to notice the lockout.
Some places can be amazing. Some software can be poorly designed (or something can cause a strange side effect, see here [thedailywtf.com]). I really like one of the suggestions in the comments on the article... start replying really slow, and often with invalid data. They won't do it. I wouldn't. But I like the idea.
A lesson from network history (Score:2)
which is never ever learned...
A freely accessible [wikipedia.org] network [wikipedia.org] resource [wikipedia.org] is begging to be driven, smoking and shattered, into the ground by the ill-mannered, ill-trained, or ill-intentioned hordes.
Personally, I blame the introduction of AOL to Usenet in September 1993 for this downward spiral. We were doing just fine before all you "me too"s started pouring in.
Get off my lawn, you clueless kids!
Stupid design decisions in standards ... (Score:2)
Perhaps they will stop putting HTTP URLs in standardized tags now... Also, enjoy life as a web content provider who spends many hours per week blocking Referers (nice typo in the original RFC!) and dealing with broken clients, something the W3C never spent much time pondering.
Make it slower, not faster (Score:2, Insightful)
The HTML 5 doctype kind of solves this (Score:2)
Recording UA? (Score:2, Redundant)
heh (Score:2)
Sorry that was me! (Score:2)
(For those without a sense of humour, yes this is a joke)
ISP, where's your DTD server? (Score:2)
People would have another reason to complain about their ISP's quirks.
That's the problem with a URI for an ID (Score:4, Insightful)
If they'd at least made the identifier NOT a URI, something like domain.example.com::[path/]versionstring, or something else that wasn't a URL, so it was clearly an identifier even if it was ultimately convertible to a URI, they would have avoided this kind of problem.
W3C stupidity (Score:2)
Hey, for once... (Score:5, Funny)
WARNING: GNAA (Score:2, Funny)
Caching DTDs locally (Score:2)
Or the routers. Frankly, if the result is known not to change, w3 could probably agree with the network authorities to put copies around the net, treat those heavily used URIs as URNs, and just never go to w3 (or rarely go there) instead.
The notion that URNs have to be known in advance as "the popular thing" rather than being discovered after-the-fact by noticing high-volume URIs is probably the real bug here.
Re: (Score:2)
Yea, so um, routers don't know what XML is, let alone a DTD. Their purpose is to move traffic around really fast, and they manage to do that by having custom hardware do their work instead of generic CPUs/code. Unless you'd like to force everyone to use a proxy server, your idea is not feasible.
Re: (Score:2)
They already do. (Score:5, Informative)
Re:They already do. (Score:4, Insightful)
The problem is that several major XML libraries don't just default to no DTD/schema cache - they don't even implement a cache or local catalog. Implementing such a thing is left to the developers using the library.
For example, the XML libraries that come with Sun's Java rely on java.net.URL for downloading resources. I just checked my 1.6 Java install, and by default, it has no cache. In looking up how the java.net cache works, I discovered it wasn't even added until Java 1.5. So prior to Java 1.5, most Java libraries wouldn't cache responses at all because the included library didn't support caching. 'Course, even in Java 1.6, there's no default implementation, so each Java application would have to implement their own cache[1].
The included Java libraries also offer no internal DTD/schema catalog. You can create one (implement org.xml.sax.EntityResolver[2]) but by default they're off to the Internet to download any DTD they run across.
It's really not hard to see how these libraries could result in millions of hits a day - most people using them probably don't even realize that they're hitting the W3C's servers since it happens transparently. And fixing it is unfortunately not just setting configuration files and saving the DTDs locally: it's implementing a bunch of classes.
[1] And for added fun, the stub that is provided appears to be insufficient to support conditional requests - either the cache says "I have it!" and the cached response is used, or the server has to send a new copy. There's no way to offer up an "If-Modified-Since:" request via the cache class.
[2] Noting that this can't be set for all parsers, it's set on a per-parser object basis. So if you use a third-party library that parses XML after creating its own parser object, you can't make it use your local DTD catalog.
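To make footnote [1] concrete: plugging a cache into java.net means subclassing ResponseCache yourself. A bare-bones in-memory sketch (the class name is mine; a real implementation would persist to disk and honour HTTP expiry headers):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.CacheRequest;
import java.net.CacheResponse;
import java.net.ResponseCache;
import java.net.URI;
import java.net.URLConnection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DtdMemoryCache extends ResponseCache {
    private final Map<URI, byte[]> store = new ConcurrentHashMap<URI, byte[]>();

    @Override
    public CacheResponse get(URI uri, String method, Map<String, List<String>> headers) {
        final byte[] body = store.get(uri);
        if (body == null) return null;  // miss: java.net fetches normally
        return new CacheResponse() {
            @Override public Map<String, List<String>> getHeaders() {
                return Collections.emptyMap();
            }
            @Override public InputStream getBody() {
                return new ByteArrayInputStream(body);
            }
        };
    }

    @Override
    public CacheRequest put(final URI uri, URLConnection conn) {
        final ByteArrayOutputStream buf = new ByteArrayOutputStream();
        return new CacheRequest() {
            @Override public OutputStream getBody() {
                return new FilterOutputStream(buf) {
                    @Override public void close() throws IOException {
                        super.close();
                        store.put(uri, buf.toByteArray());  // commit on close
                    }
                };
            }
            @Override public void abort() { /* discard partial download */ }
        };
    }
}
```

It gets installed process-wide with ResponseCache.setDefault(new DtdMemoryCache()), after which every java.net.URL download, including DTD fetches done deep inside XML libraries, goes through it.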
Re: (Score:3, Insightful)
You're solving that problem at the wrong layer. HTTP already includes caching mechanisms, the W3C already use them, and part of the problem is that buggy software is ignoring them.
Please read the article. This is already supposed to happen. Buggy software fails to do this, which is the problem being talked about.
Re: (Score:2)
Re:That's what you get for making stupid rules. (Score:5, Informative)
It's not a link. It's a reference to an external DTD subset. It's there so that generic SGML software can properly parse the document without any special knowledge of HTML.
No, external DTD subsets are a part of SGML, which is at least a decade older than the W3C.
Re: (Score:2)
From the W3C specifications for XHTML documents [Link [w3.org]]
3.1.1 - Strictly Conforming Documents ...There must be a DOCTYPE declaration in the document prior to the root element. The public identifier included in the DOCTYPE declaration must reference one of the three DTDs found in DTDs using the respective Formal Public Identifier. The system identifier may be changed to reflect local system conventions... An XML declaration is not required in all XML documents; however XHTML document authors are strongly enc
Re: (Score:2)
Did you mean to reply to my comment? I can't see the connection between what I said and what you are saying.
Re:That's what you get for making stupid rules. (Score:4, Insightful)
Re:That's what you get for making stupid rules. (Score:5, Insightful)
Should SGML renderers cache it? Yes. Should the W3C bitch that some SGML renderers are downloading their DTD? No. They should have thought about that before they made HTML an application of SGML. I don't feel sorry for them.
Re: (Score:2)
Sigh.
Try getting a clue.
Re: (Score:2, Insightful)
What's the point of having a DTD if it won't change? Oh yeah, there is none. Conceptually, the DTD is there to define the data, and unless you know what is in the DTD, you cannot use it to validate, which is its purpose. And conceptually, if you assume the data is defined a certain way, you don't need a DTD.