HTML V5 and XHTML V2
An anonymous reader writes "While the intention of both HTML V5 and XHTML V2 is to improve on the existing versions, the approaches chosen by the developers to make those improvements are very different. With differing philosophies come distinct results. For the first time in many years, the direction of upcoming browser versions is uncertain. This article uncovers the bigger picture behind the details of these two standards."
Bet there still isn't a decent "Stop!" button (Score:5, Interesting)
I've been trying to get the standards folks (and browser people) to include a security-oriented tag to disable unwanted features.
Why such tags are needed:
Say you run a site (webmail, MySpace (remember the worm?), a BBS, etc.) that displays content from third parties (adverts, spammers, attackers) to unknown browsers (each with its own parsing bugs and behaviour).
With such tags you can give hints to the browser to disable unwanted features between the tags, so that even if your site's filtering is insufficient (it doesn't account for a problem in a new tag, or the browser interprets things differently or incorrectly), a browser that supports the tag will know that stuff is disabled, and the exploit fails.
I'm suggesting something like:
<restricton lock="Random_hard_to_guess_string" except="java,safe-html">
browser ignores features except for java and safe-html.
unsafe content here, but rendered safely by browser
<restrictoff lock="wrong_string">
more unsafe content here but still rendered safely by browser
<restrictoff lock="Random_hard_to_guess_string">
all features re-enabled
safe-html = a subset of HTML that we can be confident popular browsers can render without being exploited (e.g. <em>, <p>).
It doesn't have to be exactly as I suggest - my main point is HTML needs more "stop/brake" tags, and not just "turn/go faster" tags.
Before anyone brings it up, YES we must still attempt to filter stuff out (use libraries etc), the proposed tags are to be a safety net. Defense in depth.
With this sort of tag a site can allow javascript etc for content directly produced by the site, whilst being more certain of disabling undesirable stuff on 3rd party content that's displayed together (webmail, comments, malware from exploited advert/partner sites).
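To make it concrete, here's a rough sketch in Python of how a server might emit these proposed tags. The restricton/restrictoff names and the lock/except attributes are just the hypothetical proposal above, not anything any browser actually supports:

import secrets

def wrap_untrusted(third_party_html):
    # A fresh unguessable token per response, so injected content can't
    # know the string needed to switch features back on.
    lock = secrets.token_hex(16)
    return ('<restricton lock="%s" except="safe-html">' % lock
            + third_party_html
            + '<restrictoff lock="%s">' % lock)

page = "<p>trusted site chrome</p>" + wrap_untrusted("<script>evil()</script>")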
Re: (Score:3, Insightful)
Content from a 3rd party runs in a more restrictive context than the primary site (this includes frames etc).
You are then not held at the whim of a web admin to ensure these tags are included.
Or you could just use the NoScript add-on right now and choose which sites you trust at your discretion.
Re: (Score:3, Insightful)
Think webmail (yahoo, gmail etc), when you receive spam, your webmail provider is the one sending you the data.
Usually they will try to filter the content to make it safe, BUT as history shows, that filtering isn't always 100%.
The W3C or a browser maker might also introduce a new tag/feature that your filtering libraries aren't aware of (e.g. old sites with guestbooks that might not filter out the "latest and greatest" stuff).
With m
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
</restriction> <!-- closes the existing restriction zone. Might not pass as valid XML, but HTML browsers work with tag soup. -->
Something evil!!!
<restriction lock="I don't really care here" except="everything"> <!-- This bit is purely optional -->
Obviously I need to work on something more destructive than "Something evil!!!" before I attempt to conquer the planet...
Re: (Score:3, Insightful)
My attempts to change the world (albeit by a little bit) aren't going very well either - it's been more than 5 years since I first proposed the tags, but so far the W3C and Mozilla bunch have preferred to make other "more fun" stuff instead...
Maybe Microsoft has subverted the W3C too
Re: (Score:2)
The only real secure way is to isolate the untrusted bits into their own block.... like how you do multipart mime documents in email or something. You'd need a tag to reference the "external" untrusted bits and have the browser render them in a sandbox. Even in this case, you can e
Re: (Score:3, Interesting)
The natural tag for controlling the parsing would be a processing instruction.
<?secure on key:hkwh45kdfhgkjwh45?>
blah blah blah blah
<?secure off key:hkwh45kdfhgkjwh45?>
Good luck getting that into a standard, but heck, you don't really even need the cooperation of the W3C to do this.
Re:Bet there still isn't a decent "Stop!" button (Score:5, Insightful)
Why would your site let through new tags that it doesn't recognise? Use a whitelist.
This only usually occurs if you let through malformed HTML. Use tidy or similar to ensure you only emit valid HTML. Not to mention the fact that the whole problem is caused by lax parsing — something the W3C has been trying to get people to give up on with the parsing requirements for XML.
You could define such a subset using the modularised XHTML 1.1 or your own DTD.
Yes, but it won't actually be used that way. If browsers went to the trouble of implementing this extra layer of redundancy, the people with lax security measures would simply rely on it as an alternative, and the people who take security seriously would use it even though, for them, it isn't necessary. I think the cumulative effect would be to make the web less secure.
Re: (Score:2)
You could define such a subset using the modularised XHTML 1.1 or your own DTD.
Or monkeys could fly out of our asses :-)
The idea of modular XHTML is a nice one, but unless I'm missing something, this new XHTML modular thingy we are talking about would still need to be supported by the browser, right? In other words, it will not be supported and is a waste of time.
Modular XHTML is a nice idea in theory, but honestly... nobody will use a module unless it is implemented by Firefox and IE. Can you name any existing XHTML modules implemented by both browsers?
Er.. atom or rss?
Re: (Score:2)
It's not an idea, it's been a published Recommendation [w3.org] for over six years.
No. If the server validates the untrusted data, what's the point in the browser doing it too? Validation is deterministic, you don't get double the security by doing it twice.
All of them. XHTML 1.1
Re: (Score:2)
Those are not XHTML.
That was what I thought.. just guessing. MathML
Re: (Score:2)
On the contrary, it's very easy. There's plenty of tools out there to do this for you.
What do you mean by "feel right"?
Re:Bet there still isn't a decent "Stop!" button (Score:5, Insightful)
You want easy? SQL injections are easy to handle. Just use a parameterized query so you don't have to mix tainted data with your trusted SQL.
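For anyone following along, a minimal parameterized-query sketch in Python (sqlite3 here, but the placeholder idea is the same in any driver):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

tainted = "Robert'); DROP TABLE users;--"
# The ? placeholder keeps the data out of the SQL text entirely; the
# driver passes it as a value, so there is nothing to escape.
conn.execute("INSERT INTO users (name) VALUES (?)", (tainted,))
print(conn.execute("SELECT name FROM users").fetchone())  # the quote survives intact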
Back in the stone age, before PHP thought parameterized queries were more than enterprise fluffery, you were forced to mix your user data with your SQL. And oh, were the results hilarious! It took three tries (and three fucking functions) for PHP/MySQL to get their escape code right, and I'm sure you can still inject SQL past "mysql_real_escape_string()" in some new, unthought-of way.
There is no "parameterized query" with HTML. You are *forced* to mix hostile user data with your trusted HTML. If it was that hard to sanitize an "easy" language like SQL, how hard is it to sanitize a very expressive language like HTML?
You are telling me all those CPAN modules handle the hundreds of ways you can inject HTML into the dozens of different browsers? How many ways can you make an angle bracket and have it interpreted as a legitimate tag? How many ways can you append something to the end of a URL to close the double quote and inject your JavaScript? How many ways, including Unicode, can you make a double quote? Don't forget, your implementation cannot strip out the Unicode like I've seen some filters do - I need the thing to handle every language! I would guess there are thousands of known ways to inject junk into your trusted HTML.
I promise you that even the best CPAN module is still exploitable in some way not considered by the author. And I'd be insane to roll my own, as I'm not as smart as she is.
Don't kid yourself into thinking filtering user-generated content is easy. It is very, *very* hard.
Re: (Score:2)
Escaping SQL isn't even close to the same problem. In that case, you virtually always want the user-submitted data to be treated as opaque data. The analogous situation with HTML would be escaping all the HTML and displaying it as raw code to the end user. The problem being talked about here is when you do
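A minimal sketch of that treat-it-as-opaque approach, in Python:

from html import escape

comment = '<script>alert("xss")</script>'
# Escaping renders the markup inert: the browser shows the angle
# brackets as text instead of parsing them.
print(escape(comment))
# -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;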
Re: (Score:2)
It's a hell of a lot simpler if you normalize to a valid subset of HTML.
True.dat. But you gotta know how to normalize it down first. Not saying you are wrong, but why are there so many XSS issues if it is easy? Poor education? How do we educate good programmers to do the right thing? I mean that seriously... like, is there a "here is how to let your users make their comment pretty and link to other websites and not get hosed" FAQ? I think I see your take though... it helps if you give the user a WYSIWYG editor that spits out a known set of HTML. Anything outside tha
Re: (Score:3, Insightful)
It ain't easy as you say bro...
Re: (Score:3, Interesting)
A combination of ignorance, apathy, and poor quality learning materials.
Well, the real answer to this is to point them to the sanitising features available for their particular platform/language/framework/etc. Generic advice is low-level by its very nature - for example, the XSS (Cross Site Scripting) Cheat Sheet [ckers.org]
Re: (Score:2)
I don't know why people are stupid about this. It's true that you probably can't do it with a regex. That's why $GOD gave us the DOM.
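A rough sketch of the parse-then-filter idea in Python - the stdlib HTMLParser standing in for a proper DOM; the allowlist is made up, and a real sanitizer would also drop the text inside script elements:

from html import escape
from html.parser import HTMLParser

ALLOWED = {"em", "strong", "p", "a"}  # illustrative allowlist only

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED:
            self.out.append("<%s>" % tag)  # attributes dropped for brevity
    def handle_endtag(self, tag):
        if tag in ALLOWED:
            self.out.append("</%s>" % tag)
    def handle_data(self, data):
        self.out.append(escape(data))

def sanitize(html_in):
    s = Sanitizer()
    s.feed(html_in)
    s.close()
    return "".join(s.out)

print(sanitize('<em onmouseover="evil()">hi</em><script>evil()</script>'))
# -> <em>hi</em>evil()  (the script tag is stripped; its inner text remains as escaped data)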
Re: (Score:2)
Your implication is basically that web-developers are more competent in terms of security than those who design the clients, and thus the client should just swallow the stuff without even bothering. In reality there are MANY people who make web pages who would probably trust the browser developers a lot more than they trust themselves not to make a mistake.
Also, you're not looking at this from the point of view of the user. I might want to tell my
Re: (Score:2)
Not at all. I expect the web developers in both cases to hand off the problem to third-party code. I just think that server-side code that has been maturing for a decade fills the role better than non-existent client-side code.
Re: (Score:3, Interesting)
<object permissions="untrusted" codetype="text/html" codebase="foo.html">
</object>
Re: (Score:3, Insightful)
There is also the minor point that your method
This is silly. (Score:3, Insightful)
Doesn't really matter how "hard to guess" your string is if you're going to transmit it cleartext in the body of your HTML document, does it?
"But wait!" you say, "We can randomize the string every time the document is served, thus defeating anything but an embedded Javascript with access to the DOM." Perhaps so, but now you're talking about server-side behavior — something clearly beyond the purview of the HTML specificat
Re:Bet there still isn't a decent "Stop!" button (Score:4, Insightful)
Wouldn't something like:
<sandbox src="restrictedContent.html" allow="html,css" deny="javascript,cookies"/>
Where are we now? (Score:2)
That's a very good article - as always, IBM gives a well-written introduction to the subject. But exactly what is the state of implementation of these? As far as I can gather, no browser maker has started to implement support for either. Is that correct? It would be useful to have some idea of the time scales we can expect for both. Anyone know more about the state of play?
Re: (Score:2, Informative)
Browser vendors choice (Score:4, Insightful)
As it stands, with both XHTML 5 and XHTML 2 using the same namespace, it is only possible to support one of the two.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Given their past effort in "supporting" previous standards, it's not hard for them to claim "XHTML2" support:
Just enable an additional DOCTYPE to be recognised, and throw the exact same broken "quirks-mode" parser at it as before.
Most of the new XHTML v2 tags that differ from XHTML v1's will fail to be recognized and displayed properly, but that won't be a big change from their traditional support of standards....
{/sarcasm}
More seriously
Re: (Score:2)
As it stands, with both XHTML 5 and XHTML 2 using the same namespace, it is only possible to support one of the two.
Please clarify, because I don't understand this.
Since XHTML will continue to require a specific declaration and doctype, similar to
<!-- always line 1 --> <?xml version="1.0" encoding="UTF-8"?>
<!-- always line 2 --> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd">
will this not be enough for the client (browser) to distinguish any version of XHTML from anything else? Isn't that sufficient?
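In principle, yes - a toy sketch of dispatching on the DOCTYPE public identifier (the identifier strings follow the real W3C DTD naming; the dispatch itself is just an illustration):

import re

DOCTYPE = re.compile(r'<!DOCTYPE\s+html\s+PUBLIC\s+"([^"]*)"', re.IGNORECASE)

def pick_parser(document):
    m = DOCTYPE.search(document[:1024])  # the doctype sits near the top
    if m and "XHTML" in m.group(1):
        return "strict XML parser (%s)" % m.group(1)
    return "forgiving tag-soup HTML parser"

print(pick_parser('<?xml version="1.0" encoding="UTF-8"?>\n'
                  '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" '
                  '"DTD/xhtml1-strict.dtd">'))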
Why not ditch HTML? (Score:4, Interesting)
Re:Why not ditch HTML? (Score:5, Interesting)
For content generated by the site author or a CMS, I would agree. Sending out code that is not XHTML compliant is unprofessional. Even if you don't want to make the additional coding changes to make your site true XHTML rather than XHTML-as-HTML, all of the XHTML strictness rules make your code better, where "better" means easier to maintain, faster, less prone to browser "interpretation", etc. Even just for your own sake you should be writing XHTML-as-HTML at the very least. (True XHTML requires changes to the MIME type and to the way you reference stylesheets, and breaks some JavaScript code like document.write(), all of which are properly left in the dust bin along with the font tag.)
But then along comes Web 2.0 and user-supplied content and all that jazz. If you allow someone to post a comment on a forum, like, say, Slashdot, and allow any HTML code whatsoever, you are guaranteed to have parse errors. Someone, somewhere, is going to (maliciously or not) forget a closing tag, make a typo, forget a quotation mark, overlap a b and an i tag, nest something improperly, forget the / in a self-closing tag like hr or br, etc. According to strict XHTML parsing rules, that is, XML parsing rules, the browser is then supposed to gag and refuse to show the page at all. I don't think Slashdot breaking every time an AC forgets to close his i tag is a good thing.
While one could write a tidy program (and people have) that tries to clean up badly formatted code, such programs are no more perfect than the "guess what you mean" algorithms in the browser itself. It just moves the "guess what the user means" algorithm to the server instead of the browser. That's not much of an improvement.
Until we can get away with checking user-submitted content on submission and rejecting it then, and telling the user "No, you can't post on Slashdot or on the Dell forum unless you validate your code", browsers will still have to have logic to handle user-supplied vomit. (And user, in this case, includes a non-programmer site admin.)
The only alternative I see is nesting "don't expect this to be valid" tags in a page, so the browser knows that the page should validate except for the contents of some specific div. I cannot imagine that making the browser engine any cleaner, though, and would probably make it even nastier. Unless you just used iframes for that, but that has a whole host of other problems such as uneven browser support, inability to size dynamically, a second round-trip to the server, forcing the server/CMS to generate two partial pages according to god knows what logic...
As long as non-programmers are able to write markup, some level of malformed-markup acceptance is necessary. Nowhere near the vomit that IE encourages, to be sure, but "validate or die" just won't cut it for most sites.
Re: (Score:2)
I don't think Slashdot breaking every time an AC forgets to close his i tag is a good thing. :-)
That's one reason I always try to preview before I post - no, actually, I preview so I can edit before posting. However, I still let some mistakes slip by.
While one could write a tidy program (and people have) that tries to clean up badly formatted code, they are no more perfect than the "guess what you mean" algorithms in the browser itself. It just moves the "guess what the user means" algorithm to the server
Re: (Score:3, Interesting)
You can include HTML inside XHTML, by changing the namespace for that content in the container element or using includes. The browser should then parse the contents as HTML, and you can get the best of both standards.
Another option is to make sure comments cannot be submitted until they contain valid XHTML. You could use a WYSIWYG editor, fall back to /. mode when JavaScript is disabled, and help the user along by auto-correcting (when using WYSIWYG editor) or hinting (e.g., in "You need to end the strong
Re: (Score:3, Insightful)
Re:Why not ditch HTML? (Score:4, Interesting)
At first blush, the aims of XHTML 2.0 and HTML 5 ought to be orthogonal. Judging from the article, I'd suspect it is not the aims that are incompatible, but the kinds of people who are behind each effort. You either think that engineering things in the most elegant way will get things off your plate more quickly (sooner or later), or you think that concentrating on the things that are on your plate will lead you to the best engineered solution (eventually).
I'm guessing that the XHTML people might look at the things the HTML 5 folks want to do and figure that they don't really belong in HTML, but possibly in a new, different standard that could be bolted into XHTML using XML mechanics like namespaces and attributes. Maybe the result would look a lot like CSS, which has for the most part proven to be a success. Since this is obviously the most modular, generic and extensible way of getting the stuff the HTML 5 people worry about done, this looks like the perfect solution to somebody who likes XHTML.
However, it would be clear to the HTML 5 people that saying this is the best way to do it doesn't mean anything will ever get done. It takes these things out of an established standard that is universally recognized as critical to support (HTML) and puts them in a newer weaker standard that nobody would feel any pressure to adopt anytime soon. A single vendor with sufficient clout (we name no names) could kill the whole thing by dragging its feet. Everybody would be obliged to continue doing things the old, non-standard way and optionally provide the new, standardized way for no benefit at all. Even if this stuff ideally belongs in a different standard, it might not ever get standardized unless it's in HTML first.
Personally, I think it'd be nice to have both sets of viewpoints on a single road map, instead of in two competing standards. But I'm not holding my breath.
Re: (Score:3, Insightful)
Re: (Score:2)
reboot the web! (Score:5, Insightful)
People/companies are trying to develop rich applications using a decade-old markup language that's improperly supported by different browsers (even Firefox doesn't fully support CSS yet) and is a very ugly mix right now. It's like squeezing a rectangular plasticine object through round, triangular, and star-shaped holes at the same time.
The web needs a reboot.
We need a programming language that:
*works on the server and the client
*makes making UIs as easy as drag and drop
*does not forgive idiot HTML "programmers" who write bad code
*doesn't suffer from XSS
*can be extended easily
*can be "compiled" for faster execution
*is implemented the same way in all browsers (or, even better, doesn't require a browser at all and works on a range of platforms)
Re: (Score:2, Informative)
More thoughts on why Ajax is bad for web applications [zdnet.com]: this is about how Ajax apps are often very fragile and usually don't work as expected.
Ephemeral Web-Based Applications [useit.com]: usability guru Jakob Nielsen writes this great article that goes into depth about how most web apps are complete failures when it comes to usability. Even something as basic as
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:reboot the web! (Score:4, Interesting)
You need to realize that the markup language shouldn't be used for layout. Your comment about "making UIs as easy as drag and drop" can be done with a website development environment like Dreamweaver. You need a base language for that.
Personally, I think that XHTML/CSS is going the right way. It can be extended easily, and it's simple enough that basic sites can be created by new users relatively quickly; complex layouts still require some experience (yeah, it's got a learning curve, but that's what Dreamweaver is for).
The whole point of XHTML/CSS is that it's not designed to be implemented the same way in all browsers. It's designed so that you can take the same "content" and render it for different devices/media (i.e. home PC, cellphone, paper, ebook) simply by supporting a different subset of the styling or different stylesheets altogether.
Have you ever tried to look at a table-based layout on a mobile device? Have you ever tried to look at one on a laptop with a tiny screen or a tiny window (think one monitor, web browser, terminal, and code editor on the same 15" laptop screen)? Table-based layouts are hell in those scenarios. Properly coded XHTML/CSS pages are a godsend, especially when you can disable styles and still get a general feel for the content on the page.
I'm not sure if I 100% agree with this XHTMLv2 thing, but I think XHTMLv1 is doing great. I just really wish someone would make something that was pretty much exactly what CSS is, but make it a little more robust. Not with more types of styles, but with ways of positioning or sizing an element based on its parent element, better support for multiple classes, variables (for globally changing colors), and ways of adjusting colors relative to other colors. I'd love to be able to say "on hover, make the background 20% darker or 20% more red". I'd love to be able to change my color in one place instead of having to change the link color, the background color of my header and the underline of my h elements each time I want to tweak a color.
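Until CSS can do that natively, about the best you can do is generate the stylesheet server-side; a toy Python sketch (the selectors and the colour are invented):

def darken(rgb, amount):
    # "#3366cc", 0.2 -> the same colour, 20% darker
    r, g, b = (int(rgb[i:i + 2], 16) for i in (1, 3, 5))
    return "#%02x%02x%02x" % tuple(int(c * (1 - amount)) for c in (r, g, b))

BRAND = "#3366cc"  # change it once; every rule below follows

print("a { color: %s; }" % BRAND)
print("#header { background: %s; }" % BRAND)
print("a:hover { color: %s; }" % darken(BRAND, 0.2))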
I'd also love it if you could separate form validation from the page. Doing validation with JS works, but it's not optimal. Having a validation language would be pretty awesome, especially if you could implement it server-side. If the client could grab the validation code, validate the form before sending, and handle errors (by displaying them and highlighting fields), and then the server could also run that same code and handle errors (security... it would be easy to modify or disable anything on the client side...), that would be great. All you'd really need is a handful of cookie-cutter directives: validate the length and the format/regex, plus some built-in types like phone numbers and emails.
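Something like this server-side sketch, with the same rule table shipped to the client in whatever format the hypothetical validation language defines (the rule format here is invented):

import re

RULES = {
    "email": {"max_len": 254, "regex": r"[^@\s]+@[^@\s]+\.[^@\s]+"},
    "phone": {"max_len": 20, "regex": r"\+?[0-9 ()-]{7,}"},
}

def validate(field, value):
    rule, errors = RULES[field], []
    if len(value) > rule["max_len"]:
        errors.append("%s: too long" % field)
    if not re.fullmatch(rule["regex"], value):
        errors.append("%s: bad format" % field)
    return errors

print(validate("email", "not-an-email"))       # -> ['email: bad format']
print(validate("phone", "+1 (555) 867-5309"))  # -> []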
I also think that it's about time for JS to get an upgrade. Merge Prototype.js into javascript. Add better support for AJAX and make it easier to create rich, interactive sites.
If we're not careful, Flash is going to become more and more prominent in casual websites. The only advantage the current standards have is that they're free and don't require a commercial solution to produce.
XSS is a side effect of trusting the client too much, and it won't be solved by anything you've suggested.
And why does something need to be "compiled" to be faster? What needs to be faster? Rendering? JavaScript? Or are you talking about server-side? Why don't we start writing all our websites in C? Let's just regress to treating our desktop machines as thin clients. We'll access websites like applications over X11. It'll be great.
Re: (Score:2)
Why did it take until CSS 3.0 to get easy-to-use columns? The New York Times has been using columns for 150+ years; why did the CSS implementers feel they should just dump all that publishing experience in the toilet and do things their own way?
Likewise, CSS which is supposed to free us from table-based la
Re: (Score:2)
I fully sympathize with your desire for a better way, but not at the cost of throwing away the Web and replacing it with the $VENDOR Network, which i
Re: (Score:2)
.NET / Silverlight?
ducks
Different directions -- Need Both (Score:5, Insightful)
So while the HTML 4 renderers floating around wouldn't be trashed, they could be left as-is and ignored while browser makers focus on an HTML 5 renderer. Migrating to XHTML is non-trivial for people with outdated tools and a lack of knowledge. You can't ignore those sites as a browser maker, but HTML 5 might give a reasonable path to modernizing the "non-professional" WWW.
XHTML has some great features: by being well-formed XML, it lets you use XML libraries for parsing pages. This makes it much easier to "scrape" data off pages and handle inter-system communication, which HTML is not equipped for.
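For example, with well-formed XHTML the standard XML tooling just works (Python stdlib here; the document snippet is made up):

import xml.etree.ElementTree as ET

page = ('<html xmlns="http://www.w3.org/1999/xhtml"><body>'
        '<p class="price">42.00</p></body></html>')

root = ET.fromstring(page)  # would raise ParseError on tag soup
ns = {"x": "http://www.w3.org/1999/xhtml"}
for p in root.findall(".//x:p[@class='price']", ns):
    print(p.text)  # -> 42.00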
It's interesting that HTML and XHTML look almost identical (for good reason: XHTML was a port of HTML to XML) but are technically very different, HTML being an SGML language and XHTML an XML language. Both languages have their uses. HTML is "easier" for people to hack together, because if you do it wrong the HTML renderer makes a best guess. XHTML is easier to use professionally, because if there is a problem you can catch it as an invalid XML document. Professionals worry about cross-browser issues; amateurs worry about getting it out there.
XHTML "failed" to replace HTML because it satisfies the needs of professionals to have a standardized approach to minimize cross-browser issues, but lacks the simplicity needed for amateurs and lousy professionals.
Rev'ing both specs would be a forward move that might simplify browser writing in the long term while giving a migration path. XHTML needs a less confusing, forward-looking path, and HTML, after being left for dead, needs to be rev'd to drop the really problematic entries and give people a path forward.
Re: (Score:2)
HTML 5 has two serialisations: a quasi-HTML serialisation and an XML serialisation.
XHTML failed to replace HTML because a browser with a dominating market share doesn't support it and using it in a backwards-compatible way confers very few adva
Re: (Score:2)
XHTML failed to replace HTML because a browser with a dominating market share doesn't support it [...]
Right.
[...] and using it in a backwards-compatible way confers very few advantages over HTML and none whatsoever for typical developers.
Wrong -- or at least it depends on what you mean by "typical." Technologies like SVG and MathML are XML-based, so there is a big advantage to having XHTML support in browsers: it lets you use inline SVG and MathML according to the W3C standards. Because MS doesn't su
Re: (Score:2)
Yes, but the advantage is only there if you give up on Internet Explorer compatibility or put in a lot of extra work by coding an additional Internet Explorer version without SVG and MathML, i.e. the version you are supposedly skipping by using XHTML.
Yes, so you can't really c
Re: (Score:2)
This is exactly what I am arguing.
I'm going to make my own browser standard... (Score:2)
beta vs vhs.... (Score:2)
Here is what I would suggest: 1) a multi-column drop-down with sort capabilities - something that is available in desktop applications; 2) a built-in browser menu; 3) a better scripting modal window, I should ha
No standard without reference implementation (Score:5, Insightful)
It doesn't matter if the reference implementation is slow-as-molasses or requires vast quantities of memory; at least you have proven the standard is actually realistically implementable. On the other hand, if your reference implementation was easy to build and is really good, that will foster code re-use and massively jump-start the availability of standardised implementations from multiple vendors. It might also show that you have a really good standard there.
If you don't do this, you get stuff like SVG - I don't think there is even one single 100% compliant SVG implementation anywhere, and there may never be.
There aren't any fully compliant CSS, or HTML implementations either, to my knowledge.
The same goes for XHTML and HTML 5. If you, as a standards organisation, are not in a position to directly provide, or sponsor the development of, an open reference implementation, then personally I think you should restrict your standard to a smaller chunk of functionality that you actually can do this with.
There is no reason "umbrella" standards can't be specified as composites of smaller, well-defined components, each with its own reference implementation.
Now, I am also aware that building a reference application tends to make the standard as written overly influenced by shortcomings in the reference implementation, but I really can't believe this would be worse than the debacle surrounding WWW standards we've had for the last 10+ years. Without a conformant reference implementation, HTML support in browsers is dictated by the way Internet Explorer and Netscape did things anyway.
I'm also aware that smaller standards tend to promote a rather piecemeal evolution, when what is often desired is an "across the board" update of technology.
But this "let's define monster standards that will not be fully implemented for years, if at all, and hope for the best" approach seems obviously bad. It allows larger vendors first to play a large role in authoring a "standard" that is practically impossible to fully implement, and then to push their own hopelessly deficient versions of these "standards" on the world and sit back and laugh, because there is no way for anyone to "do better" by producing a 100% compliant version.
Re: (Score:2)
Re: (Score:3, Informative)
For a few years now, the W3C publication process has included an additional final step: it is not possible for a specification to reach the final Recommendation stage unless it has two complete, interoperable implementations.
Support for multiple devices... (Score:5, Interesting)
The author apparently has no experience with rendering XHTML on mobile devices. First of all, since the screen is smaller, it's not just about restyling things in a minimalist theme. It's about prioritizing information and removing the unnecessary parts so that the more important information becomes accessible in limited display real estate.
For example, anyone who has accessed the Slashdot homepage on a mobile phone knows the pain of having to scroll down past the left and right columns before reaching the stories. You can simulate this experience by turning off page styles and narrowing your browser window to 480 pixels wide. The story summaries are less accessible because they're further down a very long, narrow page.
Another problem is memory. Even if you style the unnecessary page elements as display: none, they're still downloaded and parsed by the mobile browser as part of the page. Mobile devices have limited memory, and I get "out of memory" errors on some sites. For reading long articles on mobile devices, it is better to break content into more pages than you would on a desktop display, for both presentation and memory-footprint reasons.
For these two reasons, a site designer generally has to design a new layout for each type of device. The dream of "one page (and several style sheets) to rule them all" is a fairytale.
The current situation is awful. (Score:5, Insightful)
The current situation is awful.
Re: (Score:2)
If people know they can be lazy and write crap code that the browser will somehow manage to render anyway, they will since it's easier than writing correct code.
Re:The current situation is awful. (Score:5, Insightful)
As for "defining named things" - the concept of HTML is all about semantic markup. That's why using tables for layout is frowned upon, not because they are bad as such.
Re:The current situation is awful. (Score:4, Insightful)
Drag'n'drop works fine if it is manipulating a proper UI API. OS X's Interface Builder, with its springs and struts system, comes to mind.
Re:The current situation is awful. (Score:4, Insightful)
HTML is supposed to be a document format that can be flexibly rendered. Pretty much the opposite of WYSIWYG actually.
Re: (Score:3, Interesting)
If you want traditional graphic design, make a PDF.
PDF is for printing, dummy :-)
I've got a better idea anyway... How about a way to take our centuries of knowledge about "traditional graphic design" and apply it to a web-based medium? Do we have to chuck out everything we know about good design just because of the silly constraints of HTML/CSS? How about we improve or replace HTML/CSS with something that incorporates all we know about "traditional gr
Re: (Score:3, Insightful)
HTML has its purpose. It's time to stop trying to pervert it to yours. Either invent a fixed document format for the web or use one of the ones that's already widely supported (i.e. PDF). But guess what? There's a REASON people hate web links th
Re: (Score:3, Insightful)
Semantic markup languages like HTML break down because the web isn't print. Semantic markup is the holy grail in the print world because it works so well for linear documents. The web is an interactive, non-linear medium that doesn't get printed.
The web is a two-way, interactive, non-linear medium that is evolvin
Re: (Score:3, Insightful)
but even on the trickiest sites the grids are just a framing device for the stuff to be read
And even then, those are letters of a common alphabet delivered over light that travels inside glass. What is your point? You saying layout isn't important or something?
Layout is just as important to understanding content as the content itself. If you went into a $100-per-dish restaurant dressed in a tuxedo with your hot date and the menu were all in Comic Sans, what would you think about the quality of the food you were about to be served? Those guys who march around downtown areas might have reall
Re: (Score:3, Interesting)
That might be the theory, but it simply is not true in reality. HTML is pretty much a WYSIWYG format with additional support for different font sizes and page widths. The second you add a tag, you are tied to a specific display DPI; the second you add a navigation bar, you no longer have a document that can adjust to different output devices easily. I mean, just look at the web today: nobody is using HTML for writing documents. If people want to write a book, t
Re: (Score:2)
Re: (Score:3, Funny)
I prefer XHTML 2, thanks (Score:5, Insightful)
I thank the HTML 5 guys for their attempts, but I prefer XHTML v2.
From TFA:
XHTML is for intelligent human beings, you know, people who can actually understand what separation of concerns is.
So HTML v5 is for people who don't understand separation of concerns.
Unfortunately, that's 99% of the web kiddies out there.
One standard for smart people who know programming and actually work with an engineering mindset, another for those who see the web as a big graffiti and work with an "anything goes" mindset. No thanks, I prefer ONE standard for smart people, XHTML v2, and just to kick out everyone who isn't qualified.
Re:I prefer XHTML 2, thanks (Score:5, Insightful)
Agreed; this article is HTML5 apologist rhetoric. I thought it was rather well-balanced until the author got to HTML5, where his preference is subtly revealed.
XHTML2's universal src attribute is mentioned (confusingly called a tag), but the universal href attribute, which allows any element to be turned into a link, is not. Nor is the role attribute mentioned, which allows a tag to be assigned a semantic meaning (like menu or header) without expanding the tag set.
TFA even admits in a roundabout way that HTML5 exists because the majority of so-called "web developers" are ignorant of the current standards and incapable of using them effectively. If you need to be "clever" to use XHTML2, then perhaps no one will have to reach for the eye-bleach every time they wander into places like MySpace (where page skins are based on an exploit whereby browsers interpret <style> tags outside the document head, which is illegal).
I tell people "Writing web pages is easy. Writing them well is hard." This is proven by the amount of junk documents on the web that don't validate as anything but pretty, even if beauty is in the eye of the beholder.
The author wisely avoided any discussion of the silly new tags (some of which are presentational, not semantic) HTML5 includes. He does mention XHTML5, which is "optional"... why should we take that step backwards?
The anti-XML-compliance people like to complain that XML is too verbose. If they don't like it, they can use something else, like RTF. Cars have gotten verbose too over the years. Those people can put their money where their mouths are by buying an antique that doesn't have a radio, GPS, seat belts, padded dashboards, windows, crumple zones, suspension, electric engine starters, or any number of improvements that could be argued to be bloat.
XHTML2 is the way we should go.
Re: (Score:3, Informative)
are html 5 and xhtml 2 worked on by W3C? (Score:5, Informative)
Both standards are being worked on by the W3C standards group.
According to the IBM paper, HTML 5 is being done independently of the W3C. "In April 2007, the W3C voted on a proposal to adopt HTML V5 for review" is about as much involvement as the W3C has with HTML 5.
Falcon
Re:are html 5 and xhtml 2 worked on by W3C? (Score:4, Informative)
According to the IBM paper, HTML 5 is being done independently of the W3C. "In April 2007, the W3C voted on a proposal to adopt HTML V5 for review" is about as much involvement as the W3C has with HTML 5.
Falcon
Re: (Score:2)
There is an HTML WG at the W3C chartered [w3.org] to create a new version of HTML. A basis for review means, in W3C language, a starting document that will then be reviewed and changed as needed.
However, HTML 5 was started outside the W3C by an independent group.
Falcon
Re: (Score:2, Insightful)
MS ain't the devil for development; sometimes they drive new features and functionality that would otherwise take forever to be incorporated. Do they always do it in the best of ways? No, but they do bring out good things from time to time...
Re: (Score:2, Insightful)
Ajax-like techniques are possible without XMLHttpRequest and I don't believe Google Maps uses XMLHttpRequest anyway. If any organisation is responsible for the popularity of Ajax, it's Google, as it was when they started using it extensively that it really took off.
Re: (Score:2, Insightful)
Re:Where is Microsoft? (Score:4, Informative)
That's one of them, yes. It really depends on what you want to do; for example you don't need anything other than typical mousedown event handlers for things like Google Maps, and you can use things like dynamically generated image URIs to send data back to the server asynchronously, which is compatible all the way back to Netscape 2. There are lots of options, the value in XMLHttpRequest is more convenience than functionality.
Re: (Score:3, Interesting)
Way ahead of its time though... most JavaScript was either for homework assignments or popup ads. All of it was copy/paste hackjobs that the web author found on super-mega-awesome-javascript.com or something. The result was that "most people" hated JavaScript. You could browse 99% of the interweb
Re:Where is Microsoft? (Score:4, Informative)
Err, yes it does. From the Google Maps API reference [google.com]:
And that's just a recent refinement. Google Maps has used the XMLHttpRequest object for ages. Yes, it's possible to get a similar effect using hidden iframes and such, but doing it that way is really awkward. They'd have to be crazy to pass that amount of data back and forth that way when they've got XMLHttpRequest.
Re: (Score:2)
I've just checked by loading up Firebug to monitor and clicking around on Google Maps for a while, and it didn't use XMLHttpRequest at all. The basic functionality is done by dynamically loading and positioning images. I'm sure there are parts of the API available through XMLHttpRequest, but the major functionality and what it is famous for is not done with XMLHttpRequest as far as I can see.
Re: (Score:2, Troll)
Re:Where is Microsoft? (Score:4, Insightful)
Re: (Score:3, Insightful)
I urge every web developer to stop treating MSIE as a special case, since it does not follow standards
No offense to you, but I love how every single person who smugly suggests this usually has a link to a website that looks like shit when viewed on any browser.
This also seems to be the case whenever somebody bitches about web designers changing fonts, using JavaScript, or doing something to make their page look nice. You visit the websites created by the "changing the font at all, even in the stylesheet, is evil" or the classic "why are you trying to use two columns? two columns are evil" religious zeal
Re: (Score:2, Insightful)
I don't think I've ever seen anybody say this. Example?
In actual fact, their pages don't look boring at all. Your default browser setup looks boring.
Remember, a web design doesn't look like anything until it is realised with the combination of hardware, browser defaults and personal settings. If you think a site that uses your preferences looks boring, th
Re: (Score:2)
I don't think I've ever seen anybody say this. Example?
Exhibit A [slashdot.org]. Kinda Exhibit B [slashdot.org].
Okay... so I overstated myself a bit, sue me; this is Slashdot after all, right? You know what I'm saying though: there are a lot of people, at least in this little corner of our interweb, who seem to think that unless we design explicitly for an 80-column lynx terminal we are going to hell - I'm not talking about degrading nicely for a lynx terminal, I'm talking "designed for lynx(tm)" and forgoing anything more advanced.
I bet you can dig up people in 1997 that were bitching on slashdot a
Re: (Score:3, Insightful)
The Web is not for the developers. It's for the people who want and need the data, the clients who in the end actually pay the bills and view the pages. If it's a games site
Re: (Score:2)
Strawman. Nobody minds a page which uses these things properly (i.e. gracefully falls back when not supported, doesn't rely on them for navigation, etc...). Problem is, some people get it VERY wrong. There's a Swedish news site I like because they have good journalists, but their web developers deserve to be shot. They actually implemented a "marque-li
Re: (Score:2)
Can you make a comment system that is easy to use without JavaScript? Sure, but you can make a much more enjoyable, user-friendly one once you limit your scope to JavaScript only.
Your example of site navigation with javascript? It is poor design not because it is using javascript. It
Re: (Score:3, Insightful)
Please re-read the original comment. It was saying that you can use JavaScript without being backwards-incompatible. You seem to have confused this with avoiding JavaScript altogether. Every single point you make is good against an argument that JavaScript should be avoided, but completely irrelevant to somebody asking for it to degrade gracefully, which is the distinction BlueParrot was trying to explain to you.
Re: (Score:3, Insightful)
There is a very strong business case for good degradation too... Last I checked, Google doesn't interpret your JavaScript. If you want good SEO, you'd better make sure the content flows right in lynx (which is the best way to think about how Google sees the page).
Sadly, screen readers are pretty much like Google too, but I really think we aren't feeding screen readers enough information for them to properly read a page.
Re: (Score:2)
Re: (Score:2)
Re:I bet my ass.. (Score:4, Informative)
Re:Web Applications? (Score:5, Informative)