The Internet

The Web's Future: XHTML 2.0 108

Lee writes "Over the years, HTML has only become bigger, never smaller, because new versions had to maintain backward compatibility. That's about to change. On 5 August 2002, the first working draft of XHTML 2.0 was released and the big news is that backward compatibility has been dropped; the language can finally move on. So, what do you as a developer get in return? How about robust forms and events, a better way to look at frames and even hierarchical menus that don't require massive amounts of JavaScript. This article takes a sneak peek at what's new in XHTML 2.0 and how you might one day put it to use."
This discussion has been archived. No new comments can be posted.

  • wow (Score:3, Funny)

    by tps12 ( 105590 ) on Friday September 20, 2002 @12:10PM (#4297603) Homepage Journal
    That has to be the dorkiest article write-up I've ever seen on Slashdot. It sounds like a story from those fake news shows they show on airplanes.
    • Blame the author of the original article; the posting here is just copied verbatim from that.
  • Hey that's great (Score:4, Insightful)

    by Iamthefallen ( 523816 ) <Gmail name: Iamthefallen> on Friday September 20, 2002 @12:13PM (#4297624) Homepage Journal
    Now just convince the gazillions of bad webdesigners out there to actually use the standard, any standard, please?
    • Alternatively, convince the millions of average web surfers to use a standards-compliant browser [mozilla.org]. Then, when everyone complains that their whizz-bang site doesn't work, the bad designers will be forced to standardize their site.
    • "Now just convince the gazillions of bad webdesigners out there to actually use the standard"

      It's not the web page authors, it's the browser manufacturers. They are the ones that don't listen to the rules. Netscape invented JavaScript, Microsoft makes JScript.
      Sun makes Java, Microsoft makes J++.
      All similar but different.
      Look at the browsers' DOMs. That is the largest issue. Web designers have many platforms to code for:
      1. pc, with ie
      2. mac, with ie
      3. pc, with javascript disabled
      4. mac, with javascript disabled
      5. linux, with javascript disabled
      6. pc, with netscape
      7. mac, with netscape
      8. pc, with opera
      9. pc, with mozilla
      10. mac, with mozilla
      11. linux, with mozilla
      12. linux, with konqueror
      13. linux, with galeon
      14. linux, with links
      15. searchbots that can't render JavaScript

      This is not a complete list; there are too many different situations that an amateur web designer doesn't know about. If they are using a WYSIWYG editor like FrontPage, the site is only going to look right on a PC running IE. It will look like ass on a Mac running Netscape. How's an amateur to know that? They don't own 10 test machines like professional web shops do. Give 'em a break!! And a point for effort.
      • Just because a browser may accept <Blink> or <Font> it doesn't mean you should use it. But, a lot of webdesigners don't bother learning what's good form when it comes to their trade/hobby.

        I have no problem with what people do with their own personal pages, no matter what chaos they create. But, if you put something online for the world to see, expect critique if the world can't see it.

    • Interesting != Insightful
      Interesting != Informative
      Informative != Insightful


      How about "(Interesting) (Informative) (Insightful) are disjoint sets"?
  • by kawika ( 87069 ) on Friday September 20, 2002 @12:23PM (#4297695)
    I'm all for the advancement of standards and the cleanup of bad practices sanctioned by older HTML, but we all know this changes nothing in our immediate future. Most normal (non-Slashdot-reading) users aren't going to download and install the browser of the week, and most web authors aren't going to go back and rework all their web content for new standards.
    • by reaper20 ( 23396 ) on Friday September 20, 2002 @12:41PM (#4297831) Homepage
      It would be nice if Taco and Co. would retool /. to follow some decent standards. I mean, dear God man, they're still using font tags.

      Add up your bandwidth costs using table and font tags, and then add them up using pure CSS layout - a site with the traffic of /. could save a lot of money just by switching to existing standards.

    • This could go a long ways for my job. Our company uses some internal web software for billing and customer support information and it would be a big help to me to be able to rewrite the system interface with a new and more powerful language rather than the HTML and the tons of Javascript the original programmer used. The simple fact of getting rid of the Javascript menus would make me the happiest person alive.

      It would also be simple for us to have the employees use a compatible browser in the office to access and use it.

  • Egads (Score:3, Funny)

    by devphil ( 51341 ) on Friday September 20, 2002 @12:33PM (#4297778) Homepage


    After reading the article (a good one, by the way), I have to wonder whether any of this will ever be used in practice.

    There's got to be more backwards compatibility, or it's just not going to be adopted. I have this horrible vision of every major website replacing their initial homepage with a front door: "For XHTML 2, click here. For everything else, click here." and their entire site duplicated. Yeah, right.

    I really like the idea, though. Mark it up based on content not presentation, so that multiple browsers and other tools can all make sense of the page, and use another tool (here, CSS) to make it look pretty. Hmmmmm...... holy shit! they've invented TeX!

    • Re:Egads (Score:2, Informative)

      by DarkVein ( 5418 )

      As horrible as it sounds, XHTML2 and a very basic XSL could make this nightmare of yours an extremely simple and automated process.

      Luckily, server-side scripting and web servers have advanced since iPlanet. Two lines of PHP could sniff the client's browser and then serve XHTML2, or fall back to XHTML 1.1, without the user ever knowing.

      The point, however, is that it is almost no trouble to do an XSL translation from XHTML2 to XHTML1 or even HTML4. The reverse is not true. Website back-ends can be updated to take advantage of XHTML2's more concise and descriptive format, while XSL produces an antique but perfectly valid HTML 4.01 public face. The results are easier maintenance, modularized structure, and enough context to generate valid markup for any earlier version of HTML.

      Backwards compatibility? XSL with XHTML2 gives you the ultimate in backwards compatibility! It can give you valid markup for EVERY version of HTML, as appropriate for your public site's demographic.
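
      To make that concrete, a minimal downgrade transform might look something like the sketch below. It assumes the kind of nested <section>/<h> structure the article shows and maps it onto numbered HTML headings; the details (no namespace handling, no coverage of other elements) are simplified for illustration, not taken from any spec:

      <?xml version="1.0"?>
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <!-- copy everything we don't explicitly handle -->
        <xsl:template match="@*|node()">
          <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
        </xsl:template>
        <!-- a generic <h> becomes <h1>, <h2>, ... based on <section> nesting depth -->
        <xsl:template match="h">
          <xsl:variable name="depth" select="count(ancestor::section)"/>
          <xsl:element name="h{$depth}">
            <xsl:apply-templates/>
          </xsl:element>
        </xsl:template>
        <!-- drop the <section> wrappers; HTML4 has no equivalent element -->
        <xsl:template match="section">
          <xsl:apply-templates/>
        </xsl:template>
      </xsl:stylesheet>

      Run the same source through different stylesheets and you get HTML 4.01 for the public site and richer markup for whatever else needs it.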

    • With HTML you're stuck with web-only publishing. If you use Docbook or TEI, then you can publish both online and on paper.

      There are already many tools for translating Docbook to HTML or to paper surrogates like PS or PDF. If you consider XSLT, then you can quickly make your own tools.

      Not only that, with Docbook and TEI your markup is based on content, bringing the mythical semantic web one step closer to reality.

  • by RobotWisdom ( 25776 ) on Friday September 20, 2002 @12:37PM (#4297805) Homepage
    I think I was actually the first [google.com] to point out the need for XHTML, but I think the direction it's gone has been a disaster.

    Nicholas Chase seems completely oblivious to the fact that no-one would ever really associate a style with the semantic-category 'holiday'-- styles are just a way of varying emphasis, and almost never reflect the underlying semantics in that fashion. (If you mention three different holidays on the same page, is there any reason to expect they'll all need the same style? Of course not-- semantics doesn't really work that way.) [more] [robotwisdom.com]

    My original proposal was a response to the incompatibility of XML with HTML, but this 2.0 proposal even throws that away. Given that there are several billion HTML docs floating around, how likely is it that anyone is going to use a browser that can't render them? It just ain't gonna happen-- human factors doesn't work that way.

    I've even called for [google.com] a 'W3C Secession' because they seem so out-of-touch with the real world and the real Web.

  • This annoys me for one key reason: I am a programmer, not a designer.

    Now, if I want to make any web-interface code I have to write 500 lbs of html code just to make my HTML pages look decent instead of just being able to use a couple of nice <font> and <b> tags.

    I don't want to have to write a style sheet just to make project pages look good, thank you very much.
    • Which of the following looks more like "500 lbs of HTML":

      <style type="text/css">
      body { font: bold larger "Verdana" }
      </style>
      <body>
      <p>This is my duh page.</p>
      <p>It is a nice page.</p>
      <p>It has three paragraphs. Wow.</p>
      </body>

      or:

      <body>
      <p><font face="Verdana" size="+2"><b>This is my duh page.</b></font></p>
      <p><font face="Verdana" size="+2"><b>It is a nice page.</b></font></p>
      <p><font face="Verdana" size="+2"><b>It has three paragraphs. Wow.</b></font></p>
      </body>
    • Style sheets mean less code, not more. An XHTML/CSS page is cleaner and simpler than older pages - fewer spacing tricks (non-breaking spaces, invisible images, convoluted tables), more consistent code, fewer repeated tags.

      As a programmer myself, I don't see why you are more comfortable with micromanaging <font> tags rather than defining the page properties once in one central place. Hell, if you want, you can just use embedded style rules and put style="font-family: Verdana" right in the tag you would have wrapped in a <font></font> tag.
    • Ehm, as a programmer you must have some aesthetic sense of design, right? Things like, say, avoiding redundancy and abstracting things away?

      Instead of applying a <font> tag to absolutely every element you want to set it, you just do body { font-family: foo, bar, sans-serif; } and be done with it.

      Instead of doing this on *every* page you just <link> to it, and have a single place to change the font you want to use.

      Instead of 5k of tables per page, you use a few <div>'s and position/float them in your one CSS file.

      Your HTML becomes much more lightweight, and you can style it progressively without going back and editing your HTML every time. Do it once, and stick to the same classes and overall structure, and you don't have to do it again next time. You can modularise your stylesheets and swap colour schemes and layouts at will and without rewriting tonnes of HTML every time.
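
      As a rough sketch of that workflow (the file name, IDs and rules here are invented for the example, not taken from anyone's actual site), one shared stylesheet plus a single <link> per page replaces all the per-element <font> and table markup:

      /* site.css - change fonts, colours and layout in one place */
      body  { font-family: Verdana, Arial, sans-serif; }
      #menu { float: left; width: 12em; }
      #main { margin-left: 13em; }
      .note { font-style: italic; color: #666; }

      <!-- in each page's <head>; edit site.css once and every page follows -->
      <link rel="stylesheet" type="text/css" href="site.css" />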
    • You're right. You're a programmer, and not a designer.

      What should happen, in any halfway decent web design department, is that a designer creates the page and the layout, and you do all the server-side grunt work.

      If you're being forced to tool around with crap like XHTML/HTML or CSS at your job, then obviously your skills are being wasted. I can understand Javascript code or something, but as a programmer, you really should never have to muck around with a table tag to get the layout perfect.
      • ...the problem is that many designers don't know enough about programming to muck around with the table tags to get the layout perfect.

        I agree that it's a waste of a programmer to have to muck around with stylesheets...but the programmer should have no problem implementing them. And, many times, programmers understand more about which properties are supported in which browsers...lots of designers just throw together their stylesheets in Dreamweaver without giving much thought to what's going to work and what's not.
        • The nice thing about the new standards (and improving with CSS2/3) is that most of the use for table tags is gone now. I've built my personal homepage (click my URL, although it's down atm) entirely *without* tables. The only thing I use is a hierarchical system of <div> tags using different classes.

          Finally, designers can design pages in a way that makes sense (50px margin, right justify, inline, block, background-repeat, etc.), instead of doing all that table crap. Hopefully the designers that can't program well enough to master table tags can use something that's more compatible with their existing graphics layout experience.
          • I've built my personal homepage (click my URL, although it's down atm) entirely *without* tables. The only thing I use is a hierarchical system of <div> tags using different classes.

            So you've traded tables for a collection of nested DIV elements? I guess the semantic web means nothing to you. Whatever floats your boat. The Web *is* a take-it or leave-it medium.

            [OT: Of course, now I'm wondering why /. permits DIV elements in HTML Formatted posts. I just noticed. How on earth is this useful?]

            • by coaxial ( 28297 )
              So you've traded tables for a collection of nested DIV elements? I guess the semantic web means nothing to you.

              Ah yes. Using "Table Data" to indicate a navigation bar makes MUCH more sense than a simple nondescript "division".

              I mean just look at this post. Should I, and if I should, how do I, mark up "much"? Should it be EM, STRONG, B[old], I[talic], or just capitalize it? Do I mark up the previously quoted text as BLOCKQUOTE, since that's the only tag that's even close, even though it's not actually blockquote material since it's only one line?

              Useful content based markup was pretty much DOA when they created the CODE tag, over say something much more useful like "name".

              • Using "Table Data" to indicate a navigation bar makes MUCH more sense than a simple nondescript "division".

                Never said that it was better. I just don't think the gains are all that great. You basically reiterated why: those DIV elements are nondescript. The CLASS attributes permit separation of style rules, but the underlying containers are the same (DIVs). As I also said, whatever floats yer boat.

                I mean just look at this post. Should I, and if I should, how do I, mark up "much" [etc.]

                As with anything on the Web, author's choice. Good or bad. Being Slashdot, one poster's markup doesn't matter one way or the other. There are no community rules, so even if there were a correct markup style, that one perfectly formatted post would be lost in a sea of posts. And tables. :p I abandoned such concerns on /. long ago. It's a lost cause.

                Useful content based markup was pretty much DOA when they created the CODE tag [...]

                I don't quite understand your argument here.

                • > Useful content based markup was pretty much DOA when they created the CODE tag [...]

                  I don't quite understand your argument here.

                  My point is that even when you do have content based markup (which is undeniably better for searching and whatnot), getting people to use it, let alone use it correctly, is a near impossible task.
            • Tables for layout, yes. My div elements look like this:

              <div class="blog">
              <div class="entry">
              <div class="summary">...</div>
              <div class="date">...</div> ...
              </div> ...
              </div>

              Now just remove the "div class" and you've got semantic, self-describing XML. I'd say that's a *whole* lot better than three levels of nested tables and <font> tags. I'd use real XML along with XSLT and CSS instead of even any div tags, except I have other requirements related to HTML validators and editors.

              Yes, it's still not a standard format that can be understood by, say, a semantic search engine, but that's what RDF is for. No matter how you look at it, true XML with custom tags or just div tags with classes, it kicks tables' asses.
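
              For illustration, the stylesheet side of that stays just as small; these particular rules are invented for the example, but they hang off the same class names as the markup above:

              .blog    { width: 40em; margin: 0 auto; }
              .entry   { border-bottom: 1px solid #ccc; padding: 1em 0; }
              .summary { font-weight: bold; }
              .date    { font-size: smaller; color: #666; }

              Change that file and every entry restyles at once, with no edits to the markup.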
              • Now just remove the "div class" and you've got semantic, self-describing XML.

                Except for one thing (and I pretty much assume you realize this): it's not XML, it's HTML. Live by the sword, die by the sword.

                I'd say that's a *whole* lot better than three levels of nested tables and tags.

                In a number of regards, this is true. I never disputed that. In terms of semantic gain, there isn't a whole lot apart from avoiding abuse of what TABLEs were meant to mean.

                I'd use real XML along with XSLT and CSS instead of even any div tags, except [...]

                And I'm certain you would. You have limiting conditions -- heck, all Web authors do. XHTML is a joke (why not just use HTML 4.01? why hide XHTML behind the text/html content-type?) now, and use of XHTML or XML on the Web is premature. The tools simply are too far behind the standards. Or the standards are too far ahead.

                No matter how you look at it, true XML with custom tags or just div tags with classes, it kicks tables' asses.

                I find it curious that you use the phrase "no matter how you look at it" after stating that repetitive DIV use isn't terribly meaningful from a semantic standpoint. Seems somewhat contradictory to me. :p

                Look, I know authors' backs are up against the wall to move away from TABLEs to lighter markup and better application of existing standards. However, given the flaws of existing user agents and the mess of W3C recommendations, much of the perceived progress is "illusory" in terms of actual semantic gain.

                Now I did say, "whatever floats your boat." The Web is a do-what-works medium, and I recognize that. But I do think the amount of self-congratulation involved in moving to DIV ... DIV ... DIV ... DIV constructions should be a bit muted. It's a markup translation that solves a number of practical issues surrounding use of TABLEs, but it doesn't really solve more significant, core issues.

                But that's not your fault. I'm not certain who got everyone into this mess. I can only hope a path opens to a better way within my lifetime. Besides, I bear bigger grudges against people who replace FONT elements with SPANs. HAND

  • My question is, are they expecting browsers to understand only XHTML 2, or just that the current browsers out there will not be able to read XHTML 2, but future browsers will be able to read both XHTML 2 and the previous versions?

    If the browsers are allowed cross-compatibility, then I say I like what I see. But if HTML et al. are thrown out the window completely, then I don't think we will ever see XHTML 2 put into practice.

    • Browser behaviour is (or should be) determined by the DOCTYPE declaration in an HTML document. What they mean is that if a page has an XHTML/2.0 doctype, it will not support all the cruft that was left in XHTML/1.0 and 1.1 (left in for the purposes of backward compatibility).

      If a page doctype claims the page is HTML/4.0 or HTML/3.2, then none of these new rules should apply.
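
      For reference, these are the kinds of declarations a browser would be switching on. The HTML 4.01 and XHTML 1.0 Strict identifiers below are the real ones; an XHTML 2.0 page would carry its own along the same lines once the draft settles:

      <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">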
    • I would presume that a good browser would check the DOCTYPE or something and adjust accordingly. If the page was XHTML 2.0 then the browser ignores deprecated tags...

      But your question makes me think that we will probably end up with page composers writing a mish-mash of old and new code and the browsers being left to sort it all out.

      Perhaps what we need is something like HTML 4 Transitional which has features of both HTML 4 and XHTML 1?
      • Oh good god no. The last thing we need is another "transitional" doctype.

        Of those who first encounter standards, nine-tenths of them look at 4.01 Transitional, get validation under it, say "Hey, I've got standards-compliance!", and walk away, regardless of how many tables and font tags are in the page.
        • nine-tenths of them look at 4.01 Transitional, get validation under it, say "Hey, I've got standards-compliance!", and walk away

          How does a web developer achieve both Strict standards compliance and Netscape 4.x compliance? Some people don't have the money to buy even the four-year-old computer hardware capable of running Mozilla, so they stick with Netscape 4.x, which runs at an acceptable speed on their hardware.

          • How does a web developer achieve both Strict standards compliance and Netscape 4.x compliance?

            The short answer is "you can't, you don't, and you shouldn't".

            Oh, it's extraordinarily simple to keep content accessible. What's more difficult is maintaining the visual design, given the immense number of bugs in NS4. The former is what is supposed to matter on the Web; if you want a major visual experience, MSIE6 and Netscape 6+/Mozilla and Opera and Konqueror all support stylesheets. That covers just about every base, from Average Shmoe to Hardcore Geek to Not Exactly Bleeding Edge, et cetera.

            The simple truth is that Netscape 4 does not support HTML 4.01. You cannot use HTML 4.01's design tools properly with Netscape 4 because of its bugs. Therefore, Netscape 4 cannot support the design. The same is true for Netscape 3, Netscape 2, MSIE 3, Mosaic, Mnemonic, Voyager, and a host of other such browsers. But if you write strictly to standards and accessibility guidelines, people using such browsers get enough to get by.

            It's not necessary to go out of your way to snub Netscape 4. It's reasonably capable with basic HTML structural elements, so there's no need nor any point in doing so. However, it's extremely damaging to go out of your way to accommodate Netscape 4 as well.
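
            In practice, the usual trick as of this writing is to hide the fancy rules from Netscape 4 rather than fight its CSS bugs: NS4 applies <link>ed stylesheets but ignores @import, so something like the following (file names invented) serves it only the safe, basic rules:

            <link rel="stylesheet" type="text/css" href="basic.css" />
            <style type="text/css">
              /* NS4 never fetches this file, so its buggy CSS engine
                 never sees the layout rules inside it */
              @import url(advanced.css);
            </style>

            CSS-capable browsers get the full design; NS4 users still get readable, accessible content.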
  • I think most of the posts I have read in here are missing the point. This will not happen tomorrow, and once it starts to happen there will be a (long) transitional period. Actually, I am sure that someone will write an XSL to convert any XHTML 2.0 to XHTML 1.x (and probably the other way around too). I am also sure that for quite some time any browser will render both anyway. And for all who are complaining that they just want to write simple webpages and don't care for all of this: use a WYSIWYG editor! But for professionals this is important stuff. To prevent "infolation" by helping search engines get at the content effectively, this is important stuff. But rest assured there will be tools that ensure that Joe Schmo and assembler developer X can publish their stuff on the web.
  • I have to ask why. Given structured markup and stylesheets, what is the reasoning for XHTML2.0? I understand 1.0 as a transition. If XML is what it says it is, what is XHTML?
    • Random XML is not very useful on the web, since the web's supposed to be multi-platform. Just how exactly is a search engine, or a screenreader, or a braille browser, or whatever, supposed to work out how to display your lovely, but rather meaningless collection of tags in your custom XML doctype?

      Are you going to encode *all* that in your CSS? Really? For every XML document you might want to publish? No, of course not.

      Much easier to standardise on one main XML doctype which will always have some basic structure which a UA can apply style to, even if you don't bother.
      • Why do you want to leave this to the UA? Your rationale trivializes the distinction between structure and presentation that markup is intended to preserve. What does the UA know about the semantics of my stuff? Nothing but what I tell it. I want to handle the mapping, thank you, and the UA can map to standard presentations that *do* belong with the UA in some hypothetical, as yet unforeseen W3C CSS Presentation Standard. XForms, XFrames, etc. can all live on their own merits.

        There is no such thing as *random XML*; that's what I mean by why.
        • There are nine separate media types defined in CSS. If you want to step outside the boundaries of HTML, you have to specify them all, or UA's will see your XML as a meaningless collection of tags. You'd get the same effect if you created your website entirely out of <span> tags.

          Even if you *do* provide for those 9 media types, what happens when a new media type arrives? Custom ones are perfectly valid, so you might not even know about them. There also isn't anything for search engines to index properly; they can't give precedence to your headings, or work out what the title is supposed to be, etc.

          UA's do not have any concept of semantics outside what is programmed into them, and although in many cases you can make do without and just force a specific rendering, it's almost always a bad idea.

          Your XHTML document comprised entirely of <span class="foo"> may render using your stylesheet in my browser, but if I decide you have no taste and turn off your CSS (which I do often enough), your document falls to pieces.

          This is why we want a standard document format for the web; you can extend it to meet your needs, but leave enough of the original document for it to be meaningful and useful without any other information about *your* semantics.
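
          To give an idea of the scale, styling even one element of a custom doctype across media means something like the sketch below, repeated for everything in your vocabulary (the element name and rules are invented for the example):

          @media screen { article { display: block; margin: 1em 0; } }
          @media print  { article { display: block; font-size: 10pt; } }
          @media aural  { article { volume: medium; } }
          /* ...plus braille, embossed, handheld, projection, tty and tv */

          And even then a search engine still has no idea which of your elements is a heading.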
          • I toss out CSS as only an alternative location for the *presentation* structure XHTML embodies. Currently CSS will not handle this because the W3C saw fit to create another web presentation DOCTYPE.

            UA's do not have any concept of semantics outside what is programmed into them, and although in many cases you can make do without and just force a specific rendering, it's almost always a bad idea.

            This is precisely my point. Taking a bad idea and installing it as a standard does not make it a good idea. XHTML is just a less violent way of enforcing a default *presentation* semantics and it violates the spirit and intent of markup. My point is that this mapping belongs somewhere else than a presentation DOCTYPE. Whether I enforce XHTML on my authors or shoehorn my data into it via transformations, I am doing violence to my semantics in doing so, and all in the name of satisfying a *presentation semantics*. See the myth of structural markup post elsewhere in this thread. XHTML, by its very nature, does not do what it says it wants to do (separate presentation from structure).

            This is why we want a standard document format for the web; you can extend it to meet your needs, but leave enough of the original document for it to be meaningful and useful without any other information about *your* semantics.

            Whaddya mean we, kemosabe?

            Search engines can look at the meta-data I provide. There must be four specifications for that at this point. My semantics is precisely the information I wish to impart. I am not sure we need a new standard way of concealing it.
            • Search engines can look at the meta-data I provide. There must be four specifications for that at this point.

              Google ignores <meta /> elements because of the rampant spamming associated with unscrupulous use of keywords in <meta /> elements.

              • That's metadata as in RDF, Topic Maps, Dublin Core, etc. I'm talking about anything but XHTML. When the browser becomes the platform, the application becomes the user agent and google becomes an RSS service we can point our agents at. Already happening, albeit slowly. But fast enough to make XHTMLv2.0 DOA.
                • and google becomes an RSS service we can point our agents at.

                  But if the major search engines and other HTML-based services become easily scriptable, then the search engines lose the revenue stream of advertisements. That's why the Semantic Web won't work. Either that or most Semantic Web sites will become pay sites.

                  • Agreed. But they already are getting scriptable and google was first off the block.

                    You can't really syndicate content without a receiving syndicate. In fact, the semantic web dies without a more push oriented model and quality aggregators like google are the only environments in a position to provide it.

                    They are already moving off advertising as a revenue base. Companies are paying them to aggregate and screen content for their intranets and portals. The economies of scale will bring this to your screen much sooner than Mozilla will bring you an XHTMLv2.0 page. You pay google to let you troll their synopsis of the web or you take the time to troll your own. And it'll be free if you're willing to accept corporate sponsorship of your results.
  • Yet another poorly implemented standard that I will have to build/debug for. I wonder what super dooper features Microsoft will add on to give it extra bonus fun? Happy Happy, Joy Joy!
  • The XHTML2 working group could create an XHTML 2.0 site, and create a link that embedded a XHTML2Java app for those people with non-compliant sites. Only, instead of making it a standalone browser, make it work inside existing browsers.

    The Java app could do all of the XHTML2 rendering in clients that today don't support it. The web author can write their site in XHTML2 and provide a javascript that detects older browsers and opens a window with the app to browse the XHTML content. Due to app sizing limitations you would probably need to create a form that chose an appropriate screen size and font size preference, but a cookie could store that.

    In addition, if created by the working group, make it GPL and use it as a reference implementation so that other browsers can reuse what code they want to speed up their development.

    Eventually, all of the browsers catch-up. People still using older browsers don't get limited by this, they just suffer slower load times on XHTML2 sites.

    Since XHTML2 has been cleaned up so drastically, the App would actually be reasonably small compared to an app that would be able to deal with all of XHTML1/HTML4/DOM/CSS.

    Plus, for internal use, people would already have a browser component that could be gracefully loaded over a network in any Java-capable OS that provided a robust and clean document language.

    Oh wait, I forgot, it's not 1996 anymore ... people are too jaded to accept this as a working possibility :)

    • I edited my first paragraph without re-reading:

      The XHTML2 working group could create an XHTML 2.0 site, and create a link that embedded a XHTML2Java app for those people with non-compliant sites. Only, instead of making it a standalone browser, make it work inside existing browsers.

      Should be:

      The XHTML2 working group could create an XHTML 2.0
      Java app. People who want to adopt XHTML 2 could use the app to create a link that embedded a XHTML2Java app for those people with non-compliant browsers. Only, instead of making it a standalone browser (like HotJava was), make it work inside existing browsers.

      ...

      Also ... it might speed developer adoption if a canned style sheet were developed that created a style to mimic HTML4 markup along with a translating cross-reference.

      That way I as a developer could embed the canned style sheet, look at the cross reference and know that if I put something like italicized it would be close to using italicized.

      Just ideas ...

      • Criminy ... I'll learn to use PREVIEW someday ... from my reply to my parent:
        That way I as a developer could embed the canned style sheet, look at the cross reference and know that if I put something like
        italicized it would be close to using italicized.

        Just ideas ...

        SHOULD be ...
        That way I as a developer could embed the canned style sheet, look at the cross reference and know that if I put something like
        <em class="HTML4-italic">italicized</em> it would be close to using <i>italicized</i>.

        Just ideas ...
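
        For what it's worth, the canned style sheet would only need a handful of rules along these lines (the HTML4-italic class name is from my example above; the others are just illustrative):

        em.HTML4-italic    { font-style: italic; font-weight: normal; }
        strong.HTML4-bold  { font-weight: bold; }
        span.HTML4-tt      { font-family: monospace; }

        Embed that once, and the cross-reference table tells you which class stands in for which old tag.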

        I'm done talking to myself ...
  • by perljon ( 530156 ) on Friday September 20, 2002 @01:02PM (#4298017) Homepage
    XHTML wants to take away an author's ability to affect presentation. However, it is clear that authors want complete control over presentation. When HTML first started out, it was static. And it was good. Then, CGI appeared and you could create static pages on the fly, connecting to other systems. This too was good. Then HTML replaced the interface for 75-90% of internal corporate applications. This was kind of good.
    The problem is that HTML is not a very good presentation language, and ever since it first arrived, programmers have wanted to make it a better one. Java, ActiveX, .Net, Macromedia, Netscape plugins, etc. all try to make the browser a better presentation platform for dynamic data in the back. People want to write applications and have them automatically work on all platforms. And not just work, but take advantage of what we know are good interfaces. Good interfaces are not hitting submit and waiting 3 seconds for a response, or even clicking a link and having the whole screen go blank while it downloads and figures out how to display the next page. A good interface to an application responds immediately and looks good.

    Therefore, I think XHTML is doomed because it tries to take out the thing that everyone and their mother wants from a web application: the ability to create interfaces to applications that are always up to date and don't require complicated download and installation processes. A web language that increases a programmer's ability to control the interface while not adding complicated download processes will replace HTML. Nothing short of that.
    • Exactly.
      You should have heard the cheers when CSS first came out and our team saw the possibility of pixel-accurate layout. Then the sobbing and moaning that followed the realization that it just didn't work.

      |- this site best experienced in [browser here ]-|
      |--- please resize your browser to ---|
      DIE DIE DIE DIE DIE DIE DIE DIE DIE DIE!!!!!
    • Of course, perfect control over layout is a pipedream on the web. It was never designed to do this. Even if you had the ability to do so on the server side, the user is still free to remake the "experience" in any way they want: different default fonts; changed default (or allowed) font sizes; your page embedded in some local CSS; very different screen resolutions and browser window sizes, and so on.

      Now, this user control is a Good Thing. It means the web can be used with very different kinds of devices, and it means users with various impairments can access the info. Most vision-impaired users do not use screen readers, for example; for them it is sufficient to be able to set the font and size so they can read it.

      If you want perfect control, make a PDF-document out of it.
    • ``XHTML wants to take away an authors ability to affect presentation.''
      I beg your pardon? Ever heard of CSS? How about XSL? What XHTML does is add XML's extensibility to HTML documents, thereby allowing HTML documents to be extended with non-HTML data. It does NOT, in ANY way, take away your ability to control the appearance of your document; you can still use CSS for that. And because XHTML is XML, you can apply XSL to it to transform it to any other kind of XML, including something compatible with legacy HTML, possibly containing all those nasty tags used to specify the appearance of documents for browsers that don't eat CSS.
  • How do you get the end user to upgrade their browser?

    Granted many people are using Winblows and IEEEEEEE or mozilla / netscape 6+, but there is still that percentage of people that are not, or will stay on Win 98 / NT 4 until they can no longer.

    What about getting all the pages out in cyber space to upgrade to this standard????

    Now the browsers will have to support 2 standards, the new one and the old ones, or have many pages just plain unviewable.

  • A few notes... (Score:4, Insightful)

    by Viqsi ( 534904 ) <jrhunter @ m enagerie.tf> on Friday September 20, 2002 @01:26PM (#4298288)
    For one thing, the hierarchical menus thing is probably referring to the <nl> element, which is really just good semantic markup for lists of links; it's along similar lines to <ul> and <ol>. It's not a replacement for DHTML menus (boo! hiss!) or anything like that; effects like that would still be handled via (ECMA|Java)script or CSS.
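
    For the curious, a navigation list in the first working draft looks roughly like this (sketched from memory of the draft, so take the exact child elements with a grain of salt; note that href is allowed on nearly any element, including <li>):

    <nl>
      <name>Site navigation</name>
      <li href="/">Home</li>
      <li href="/articles/">Articles</li>
      <li>
        <nl>
          <name>About</name>
          <li href="/about/staff">Staff</li>
          <li href="/about/contact">Contact</li>
        </nl>
      </li>
    </nl>

    A UA is free to render that as a plain list, a dropdown or a collapsing menu, which is where the "hierarchical menus without JavaScript" claim comes from.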

    For another, backwards compatibility has not been "dropped" in the sense that it's gone completely, total split with the past, et cetera. It's just no longer a priority. You can likely expect <br> and maybe the <hN> elements to disappear entirely as things evolve (many are in favor of that last; many aren't) in addition to those that have already gone byebye. There's also debate about the semantic value of <strong> and either <abbr> or <acronym> (I can never remember which one folks want to get rid of) and whether or not they should stay.

    There's also quite a bit of talk about how to handle titles for other elements. Some folks question why <name> is being used instead of <h> in the new navigation lists, for example.

    And they're right about XLink, by the way. There's a new recommendation being put together to try to address these issues, called HLink. You can find it at http://www.w3.org/TR/hlink/ [w3.org].

    And just so I can put out this totally unsolicited opinion: XFrames absolutely rocks. Love it. Nurture it. And I've been waiting way too long for <img> to die; now let's just all hope that Microsoft fixes up all of their horrifyingly large bugs with <object> in time for this... :)

    (Ah, one more note. Slashcode doesn't appear to allow the <code> element in comments. Indeed, the only semantic markup allowed in /. comments is <a>, <p>, <blockquote>, <em> and <strong> (and like I said earlier, that last is being challenged). This is, quite frankly, really, REALLY sad. Why hasn't /. gotten rid of all their legacy crap yet?)
  • Looks like... (Score:3, Insightful)

    by siegesama ( 450116 ) on Friday September 20, 2002 @01:50PM (#4298524) Homepage
    Why, that looks a lot like... docbook??

    <section>
    <h>The Web's future: XHTML 2.0</h>
    <p>by Nicholas Chase</p>
    <section>
    <h>Good-bye backward compatibility, hello structure</h>
    <p>Why backward compatibility is over.</p>
    </section>
    </section>


    On an only slightly related note: it is interesting that IBM is pushing this, when IBM internally still requires support for Netscape 4.x users. In other words, it's pretty unlikely that XHTML 2.0 will ever actually grace the IBM intranet (which is sad, because I wouldn't mind converting over).
  • In case you were wondering how long it will take before browsers can handle XHTML 2.0.
    Here are 2 techniques to do this already in recent browsers:
    CSS, works in Opera 6, Mozilla 1.0 and IE 6 [w3future.com]
    XSL, works in Mozilla 1.0 and IE 6 [w3future.com]
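
    The CSS approach boils down to telling today's browsers how to display element names they have never heard of; a minimal sketch (these rules are my own guesses, not copied from the linked demo):

    section { display: block; margin-left: 1em; }
    h       { display: block; font-weight: bold; font-size: 1.4em; }
    nl      { display: block; }
    nl name { display: block; font-weight: bold; }
    nl li   { display: list-item; list-style: square inside; }

    According to the links above, this kind of thing already renders acceptably in Opera 6, Mozilla 1.0 and IE 6.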
  • I only got a few paragraphs in and I'm already disappointed. They still represent CSS in a non-XML form. Why in the world would they not move CSS or whatever style language they want to use into an XML format? Now people who want to parse these things are going to have to write or continue to support their own CSS parsers instead of just using existing XML parsers! I'm not asking them to re-invent CSS or its rules, but please, let's change the syntax so that XHTML and CSS both live in XML documents.
  • Don't like the img removal. The example given uses a fallback from mpg to jpg to text, which makes sense. But most jpg use is not in this sense, just plain pictures, and the alt attribute provides a sufficient fallback mechanism for that. Having to specify the MIME type of every picture you use seems like extra work for no gain as well. What happens if you leave that off? Is there some other reason for this?

    The nested sections makes a lot of sense. You can probably rig this up in XSL right now if you really like it.

    The href attribute on almost anything is the best part. Not having to wrap pictures in <a href=> tags will save quite a few tags and convey the actual intent much better.
    • <img /> has been an oddball in HTML ever since Mosaic introduced it. Note the use of src="" instead of href. There's a fairly unanimous opinion among those I know that <img alt="my dog"/> should always have been <img>my dog</img>.

      Object is a far more useful element, and you can still write it as an empty element (<object data="mydog.jpg"/>) if you choose to leave out alt-text. The nesting fallback mechanism means that you can include a GIF inside a PNG, or a JPEG inside of an SVG, which could do a great deal for the acceptance of new standards. I like having one element for any sort of logical object. I've taken to using CSS' background-image for any fluffy images.
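
      The nesting looks something like this (file names invented): the UA tries the outermost object first and falls back to whatever is inside it, ending with plain text:

      <object data="dog.svg" type="image/svg+xml">
        <object data="dog.png" type="image/png">
          <object data="dog.jpg" type="image/jpeg">
            A photo of my dog.
          </object>
        </object>
      </object>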

    • Having to specify the mime type of every picture you use seems like extra work for no gain as well. What happens if you leave that off?

      Don't worry, the mime-type is not required. The benefit of using it is that the user-agent can look at the mime-type to see if it knows how to support it, before it actually fetches the file. If my browser doesn't support application/x-shockwave, then it will just skip over that object and go straight to the alternate text.

      Anyway, the point is, it's not required. It's just a little extra hint. It will probably never make it into widespread use, except on anal-retentive geek sites (not /.).
      • It's kind of annoying, really. Mozilla ignores inline type="" attributes on everything except <link/> elements, deferring instead to the HTTP server's assessment. IE ignores MIME types in all their forms, instead simply looking at the letters after a period. Isn't there an RFC or something about this?
  • by ttfkam ( 37064 )
    A Slashdot article about XHTML 2.0 and the wave of the future and yet the top of every page has the line:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">

    3.2? Now that's a blast from the past. How much do you want to bet that Slashdot's audience by and large uses browsers that are compatible with XHTML 1.1? This isn't the AOL start page after all...

    Well...at least they have an XML and an RDF feed.
  • No really, it's a tree language. If it was MARKUP - i.e. layers of "virtual highlighter pen", it would allow overlapping tags, and wouldn't shoehorn weakly structured data into rigid trees*. As it is, XML corresponds closely to Lisp [lisp.org] sexps, but reimplemented badly with shitty redundant syntax.

    XHTML is a particularly bad application of XML, because HTML text is intended to be authored by humans, not autogenerated by and for some bloated SAX parser/DOM tacked onto a bloated Java/CLR VM.

    People liked HTML before XHTML because it was forgiving. One could forget a few close tags, one could <b>overlap <i>tag<b> runs <i> and the browser would muddle through.

    There's no particularly good reason to burden people with maintaining rigid tree structure if it doesn't make sense. One of the major problems I have with people using XML is the weeks/months they spend agonising over their Schemas, on the correct way to shoehorn their transient data into pretty trees - for god's sake, people! If you're using tools so inflexible that you can't just change your mind halfway through, maybe it's time you stopped using the buzzword-laden marketware of XML/Java/C++/C# and moved to a more flexible platform, like Perl, let alone Lisp! 90% of the time, the stuff I see could just be an ASCII CSV dump of an array, or just a stream of bytes! At least Lisp sexps don't force you to bother with close tags that redundantly echo the open tags - and they have identical expressivity, since XML is a tree, not a markup, language.

    Bring back real Markup languages! The XMLers have lost their way. They're busily reinventing lisp, badly (yet again) - they've just come from the other side (data-side) to all those scripting languages (code-side) that are slowly mutating into Lisp, where data is code and code is data.

    * (And yes, I know that you can eventually make most everything look like a very broad tree-structure by placing a virtual root before an arbitrary collection - witness the UNIX filesystem! - but I hope the reader can see that that's not really my point)

    • People liked HTML before XHTML because it was forgiving. One could forget a few close tags, one could overlap tag runs and the browser would muddle through.

      Actually, I think that this is why most people hated HTML. The muddling you speak of was different between implementations (browsers) and thus you have a pseudo standard.
      • Dunno about that - only a few people I've met disliked HTML for that reason, and they were all finicky computer-geek types like myself (but less observant of what the "norms" do) - everyone else:

        (a) Just assumes everyone else has the same browser they do.

        (b) For non-trivial documents, just uses a WYSIWYG editor of some description, and just dives into the HTML and tweaks the tags here and there if they feel like it, with the net result that even if the WYSIWYG editor spat out valid XHTML to begin with (unlikely in the first place), it sure isn't when they're through with it.

        (c) For trivial documents, writes fragmentary html, maybe even remembering that head and body both exist.

        HTML brought online document authoring to masses of people who don't really care about computers, but love the fact they can now build an online community of lacrosse players or some such. I believe HTML became popular in part because it was actually simple enough that it was _easier_ to write HTML than learn to use a new type of WYSIWYG editor. These days, new editions of HTML books have big scary warnings about not forgetting to close your tags, to remember to close them in the right order, remember to put in / in singleton tags like br, why you should separate content and presentation, etc. None of which the average joe _wants_ to care about. And a bunch of geeks telling him to will just annoy him.

          HTML brought online document authoring to masses of people who don't really care about computers, but love the fact they can now build an online community of lacrosse players or some such. I believe HTML became popular in part because it was actually simple enough that it was _easier_ to write HTML than learn to use a new type of WYSIWYG editor. These days, new editions of HTML books have big scary warnings about not forgetting to close your tags, to remember to close them in the right order, remember to put in / in singleton tags like br, why you should separate content and presentation, etc. None of which the average joe _wants_ to care about. And a bunch of geeks telling him to will just annoy him.

          Exactly! Not that I'm against XML; I use it every day. And having a standard document format for everything and a standard API for accessing that data is wonderful. But I agree that HTML made page authoring easy enough for non-techies to be able to create sites about their cats, their little clubs, their personal lives and ideas, and that made the web really interesting in its day (something it has lacked in recent years).

          It's great and all that we now have sophisticated document management systems (one of which I'm guilty of having created, see .sig for details :)), but the fact of the matter is that all of this "easy" content management has only really created a new barrier to entry for the average joe wanting to publish his/her thoughts. It also helps further the rift between corporations/multinational conglomerates who are able to publish using these high-end systems and the average joes that made up the original commercial-free population of the net. While standards help the big guys do what big guys do, they also inhibit competition and are exclusive in this fundamental way.

          So while we can have discussions about interoperability between CMS systems and at the same time talk about the refinement of HTML into a "proper British grammar" of itself, it's important that we recognize a few applications that have grown out of the need for "the rest of us" to express ourselves as well (specifically, blogging). And anyways, it should be the responsibility of the software makers to worry about standards compliance. For far too long now software like dreamweaver/frontpage/homesite/et al have been excused for their poor output quality, but it looks like things have improved at least somewhat on that front in recent years.

          Now it's time to kick the leading browser out on its ass and get on with the web as it should be. Free for all, and accessible by all.

          P.S. If you hate XML so much for structuring documents, you may appreciate YAML, SLiP, and the like. Search for them on Google.

      • The muddling you speak of was different between implementations

        That's not really my point:

        I'm actually arguing that the XML and XHTML language syntax and structure itself is poor - I do, however, believe that a common set of HTML tags should be standardised - just not that those tags should _have_ to be arranged in a rigid tree-structured document. e.g. There should be a consensus that p should continue to mean paragraph. I do not believe that I should have to remember to close my p tags.

  • It's not about to change at all, because nobody is going to use XHTML 2. At least, nobody is going to use it until one of two things happens: a) it is supported by Internet Explorer, or b) somebody has the gumption to smash MS's anti-standards monopoly. As MS has loads of money, and you Americans cannot seem to escape from your view that people who have money are obviously good or clever, I suppose it's gonna have to be a).

    By the way, all my web pages are XHTML 1 Transitional - but all my clients' browsers ain't.
  • Proof of concept [w3future.com], anyway. This page doesn't work entirely in IE, because of IE's horrible <object> bugs. Works great in Mozilla, pretty good in Opera.
