
The Anti-Thesaurus: Unwords For Web Searches
Nicholas Carroll writes: "In the continual struggle between search engine administrators, index spammers, and the chaos that underlies knowledge classification, we have endless tools for 'increasing relevance' of search returns, ranging from much ballyhooed and misunderstood 'meta keywords,' to complex algorithms that are still far from perfecting artificial intelligence. Proposal: there should be a metadata standard allowing webmasters to manually decrease the relevance of their pages for specific search terms and phrases."
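In practice the proposal might look something like this (the "unwords" name is hypothetical; no such standard exists):

    <!-- inverse of the keywords meta tag: terms this page should
         NOT be returned for -->
    <meta name="unwords" content="britney, mp3, warez">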
Sounds Good But... (Score:3, Insightful)
Re:Sounds Good But... (Score:2, Interesting)
On the other hand, any potential customer who finds the page as a result of a broader match than the page warrants might also remember the site as one that doesn't have what he needs. I don't claim to understand mainstream consumerism, but in my professional capacity, I tend to avoid companies that try to make a follow-up sale on a completely unrelated issue.
Re:Sounds Good But... (Score:1)
> to manually decrease the relevance of their pages
> for specific search terms and phrases."
Last time I checked, the problem was stopping XXX BRITNEY NIPSLIP from turning up as the result to "+car +transmission +repair".
Re:Sounds Good But... (Score:4, Interesting)
Re:Sounds Good But... (Score:1)
Re:Sounds Good But... (Score:2, Insightful)
Re:Sounds Good But... (Score:1)
We've got some real winners modding around here as of late... *sigh*
Re:Sounds Good But... (Score:1)
The only thing to do about it is to metamoderate and make sure lame behavior like modding your parent post "Flamebait" gets marked "Unfair".
OT: Re:Sounds Good But... (Score:1)
Maybe this is only an Opera issue?
Re:Sounds Good But... (Score:3, Insightful)
Re:Sounds Good But..... Useful? (Score:1)
Oh, you capitalist thinkers. Spare a thought for Geocities/Hypermart users who have to start shelling out money if they cross a certain hit threshold.
How about this? (Score:4, Insightful)
Hell, an engine that did that would almost be useful.
Re:How about this? (Score:3, Funny)
You can guess why: Search engine developers buy copies of the same software, learn how to recognize its output, and then demote your site or block it altogether when they spot that pattern in your pages.
No hard "this site was banned" policy, but it seems there are some that do demote/block if they catch you putting garbage in your keyword list.
PS if any porn site puts 'alan turing' in their keywords I would actually want to go there - shows some imagination to say the least, gotta give them props for that...
Re:How about this? (Score:4, Informative)
Disclaimer: I'm in no way associated with Google.
Re:Google Goggles (Score:1)
Their sites would have low initial ratings. As long as no one links to them from outside, their total rating pool remains low. So, to actually rise high, you have to attract other popular sites. Combined with a shit filter (bad words decrease ratings), this should sort the uglies away fairly well.
You know this is going to happen (Score:4, Funny)
Re:You know this is going to happen (Score:2)
You are right. Any system where the webmasters have an impact on search relevance will be beaten. Hey, they even found a way to beat Google: just create a fake front end that looks serious, with one button labeled "naked pictures". The system he describes works best for AltaVista-like systems (as of 6 months ago).
Even
Re:You know this is going to happen (Score:1)
important distinction.
sorry (Score:1)
Re:You know this is going to happen (Score:1)
I search for 'slash' and 'dot' and end up *here*?! (Score:3, Interesting)
Google seems to do a good enough job of filtering out irrelevant responses as it is.
Re:I search for 'slash' and 'dot' and end up *here (Score:1)
Proposal won't work: No incentive! (Score:1, Redundant)
Okay, pretend I'm a webmaster. What's my incentive to have my page show up LESS in anyone's search results?!
If someone didn't want my site, why do I care if they get it? And if someone wants my site, I don't want to take any chance with an "anti-thesaurus" that might end up excluding my site!
Re:Proposal won't work: No incentive! (Score:5, Interesting)
From: frankie3327@aol.com
To: staff@cs.here.edu
Subject: help!
i have a lexmark 4590 and it wont print in color.
it only makes streaks. also the paper always
jams. how do i fix it? please reply soon!
The senders never had any connection to the college or the department. We'd reply telling them we had no idea what they were talking about, and that they should seek help elsewhere. It was rather annoying.
We eventually figured it out. The department web site maintains a collection of help documents for users of the systems. One of them talked about how to use the department's printers, what to do if you have trouble, etc. At the bottom it listed staff@cs.here.edu as the contact address for the site.
You've probably guessed it by now. That page came up as one of the top few hits when you searched for "printing" on one of the major search engines (I forget which one). Apparently lusers would find this page, notice that it didn't answer their question, but latch on to the staff email address at the bottom, as if we were an organization dedicated to helping people worldwide with their printers. Furrfu!
I think we reworded the page to emphasize that it only applied to the college, and we haven't received any more emails lately. But if we could have kept search engines from returning it, that would have been even better. Since in our case the page was intended for internal use, we don't care whether anyone can find it from the Internet. Our real users know where to look for it.
So in answer to your question: When a search engine returns a page that doesn't answer the user's question, the user will often complain to the webmaster. That's a clear incentive to the webmaster not to have the page show up where it's not relevant. Also, it's not the goal of every site simply to be read by millions of people; some would rather concentrate on those to whom it's useful.
Re:Proposal won't work: No incentive! (Score:4, Informative)
http://www.robotstxt.org/wc/exclusion.html [robotstxt.org]
robots.txt ? (Score:3, Informative)
if more people used robots.txt, a lot of 'only useful to internal users' sites would drop right off the engines, leaving relevant results for the rest of the world...
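For example, a minimal robots.txt at the site root (the paths here are made up) keeps well-behaved crawlers out of the internal-only stuff:

    # robots.txt -- applies to every robot that honors the standard
    User-agent: *
    Disallow: /internal/
    Disallow: /helpdocs/

It's only a request, though, not access control; rude spiders can ignore it.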
just a thought......
Re:robots.txt ? (Score:1, Interesting)
Re:robots.txt ? (Score:1)
Re:robots.txt ? (Score:2)
And, if the admin had a clue, a simple "WTF did you get my addy?" emailed to joe6paq@aol.com would probably have explained everything.
Re:Proposal won't work: No incentive! (Score:1)
Saving bandwidth, perhaps? For a hobbyist's website hosted cheaply (and thus having a low transfer limit), it might be quite desirable not to attract too many visitors who aren't actually interested in the site's contents. Of course, that's not a very common scenario, and good search engines will give such sites a low priority anyway because they're not linked to very often.
Irrelevant visitors are often the best (Score:1, Interesting)
CPM ads pay the same regardless of relevance. CPC ads tend to pay *even more* for visitors who aren't interested in your content, since they're more likely to click on the ad on the way out.
mod_rewrite is your friend (Score:4, Insightful)
Re:mod_rewrite is your friend (Score:2)
Can you point me to a good howto?
Re:mod_rewrite is your friend (Score:1)
Re:mod_rewrite is your friend (Score:1)
Yes, you can somewhat stop people from putting image requests to your server in their pages, but you can't stop people from snarfing your images.
So now they have my images, and put them on their own site, which doesn't cost me any bandwidth. Sounds like a good thing to me.
The reason to stop them linking to images on my server is to save me bandwidth, not to prevent people from stealing my images. (That's what copyright is for. :-)
mod_rewrite reference, examples (Score:3, Informative)
Well some docs are here [apache.org], and the mod_rewrite reference is here [apache.org].
Here is a goofy example that does a redirect back to their Google query, except with the word "porn" appended to it. As an added bonus, it only does it when the clock's seconds are an even number. (Or do the same test on the last digit of their IP address.) Replace the plus sign before "porn" with about 100 plus signs and they won't see the addition, because each plus sign becomes a space. The "%1" refers to their original query.
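A sketch of what that might look like in an .htaccess, assuming Apache with mod_rewrite enabled (untested; %{TIME_SEC} and %{HTTP_REFERER} are standard mod_rewrite variables):

    RewriteEngine On
    # Only when the seconds of the current time are even...
    RewriteCond %{TIME_SEC} [02468]$
    # ...and the visitor arrived from a Google search. This capture
    # becomes %1, since %N refers to the last matched RewriteCond.
    RewriteCond %{HTTP_REFERER} google\.com/search\?.*q=([^&]+) [NC]
    # Bounce them back to Google with "porn" appended to their query.
    RewriteRule .* http://www.google.com/search?q=%1+porn [R,L]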
Here's another one that checks the user-agent for a URL, and then redirects to it. This keeps most spiders and stuff off your pages, since they usually put their URLs in the User-Agent:
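Something like this, under the same assumptions:

    RewriteEngine On
    # Robots usually identify themselves with a URL, e.g.
    # "Googlebot/2.1 (+http://www.googlebot.com/bot.html)".
    # Capture any http:// URL out of the User-Agent...
    RewriteCond %{HTTP_USER_AGENT} "(http://[^ );]+)" [NC]
    # ...and redirect the robot back to its own home page.
    RewriteRule .* %1 [R,L]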
Anything you can think of is possible. I think you can even hook it into external scripts.
Re:mod_rewrite reference, examples (Score:1)
This keeps most spiders and stuff off your pages since they usually put their URLs in the User-Agent:
Why not just use robots.txt? Either way you're relying on the spider operator to write their bots in a particular way.
A bit negative? (Score:2, Interesting)
After all, if you describe your site, a good search engine will use this information well (so you shouldn't get too many erroneous hits). However, if you list your non-words, a bad search engine will just see this list and treat them as keywords!
Turning lemons into lemonade (Score:2, Interesting)
Please forgive me for mentioning capitalism on Slashdot, but a website that receives many misdirected hits is perfect for targeted marketing. Think of the possibilities: if your web site is getting mistaken hits for "victor mousetraps," sell banner ads for "Revenge" brand traps and make a killing on the click-throughs. With a little clever Perl scripting, determine which banner ad to show based on which set of "wrong keywords" shows up in the referer. Companies will pay a lot of money for accurately targeted advertisements. Selling these ads would undoubtedly pay the whole bandwidth bill and probably make a profit to boot.
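A rough sketch of that Perl, as a CGI fragment (the banner names and keyword patterns are invented for illustration):

    #!/usr/bin/perl -w
    use strict;

    # Map patterns of "wrong keywords" in the search-engine referer
    # to the banner most likely to get a click-through on the way out.
    my @ads = (
        [ qr/mousetrap|victor|rodent/i, 'revenge-traps.gif' ],
        [ qr/supra|celica|toyota/i,     'car-parts.gif'     ],
    );

    my $referer = $ENV{HTTP_REFERER} || '';
    my $banner  = 'default.gif';

    for my $ad (@ads) {
        my ($pattern, $img) = @$ad;
        if ($referer =~ $pattern) {
            $banner = $img;
            last;
        }
    }

    print "Content-type: text/html\n\n";
    print qq{<img src="/banners/$banner" alt="sponsored link">\n};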
So no, unwords are not necessary. Unless you're running a website off a freebie
~wally
Re:Turning lemons into lemonade (Score:1)
At first we just allowed our out-of-the-box search engine package to index our catalog, but the problem we kept running into was the relevance of the results (for example, returning VCR stands ahead of an actual VCR when the search was "VCR").
So to solve this, our merchandisers manually added keywords to each group of products, amounting to a thesaurus. We coded the indexing to place a weighted value for these keywords ahead of the title words, and those ahead of body text (roughly as sketched below).
It's actually a bigger problem than most geeks realize (as our CEO pointed out). We were trying to return pages that corresponded not just to the search string, but to the intent of the user. That takes a little more thought on the part of the search engine coders and the implementers.
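In other words, something like this toy sketch (the weights and field names are invented for illustration):

    # Toy relevance score: merchandiser keywords outweigh the title,
    # and the title outweighs body text.
    sub score {
        my ($term, $doc) = @_;
        my $s = 0;
        $s += 10 if $doc->{keywords} =~ /\b\Q$term\E\b/i;
        $s += 5  if $doc->{title}    =~ /\b\Q$term\E\b/i;
        $s += 1  if $doc->{body}     =~ /\b\Q$term\E\b/i;
        return $s;
    }

So a search for "VCR" ranks a product whose merchandiser keywords include "VCR" above one that merely mentions it in body text, like a VCR stand.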
Re:Turning lemons into lemonade (Score:1)
Well, speaking personally, I don't want people arriving at my web site unless they're actually looking for the content that's on it. That's because I pay for bandwidth.
I also know plenty of people who have web sites for their friends, but have ended up being pestered by online perverts after they ended up in search engine listings.
Re:Turning lemons into lemonade (Score:2)
Me, definitely.
I have a section of my site related to Steve Albini's bands, including Big Black [petdance.com]. I get tons of hits looking for things like:
and my favorite...
Re:Turning lemons into lemonade (Score:2)
Re:Turning lemons into lemonade (Score:1)
Why would anyone want to pay for their bandwidth if they could easily get commercial sponsors to pay for it?
~wally
Bad planning (Score:5, Funny)
You're going to have to excuse me... (Score:1, Flamebait)
The people whose web pages are being thrust to the top of the query lists are the people who are polluting the metadata and other tags for the sole purpose of getting their sites higher in the search lists.
So lemmy get this straight: you want all the good and honest people (who aren't causing the problem in the first place) to opt out of common searches (which they'd never want to do), and this will thus remove the legitimate entries from the pool of results, returning an even more polluted list from your search engine.
am I missing something here?
Although there are a few people who would be helped by removing absolutely irrelevant results, the vast majority would actually suffer if they used this.
Re:You're going to have to excuse me... (Score:2)
The US Gov't Won't Like It (Score:1)
Better Metadata (Score:4, Interesting)
Marking up pages with information about the meaning of the terms on them is the main thrust of the work on the Semantic Web - see http://www.daml.org/ [daml.org] (for DAML - the DARPA Agent Markup Language), http://www.semanticweb.org/ [semanticweb.org] (one of the main information sources) and finally the new W3C activity on the subject: http://www.w3.org/2001/sw/ [w3.org].
How far, how fast it will go is another matter but there's certainly a lot of interest in creating a more "machine readable" web.
Re:Better Metadata (Score:1)
It seems to be a chicken-and-egg situation at the moment -- I'm doing quite a lot of work producing Dublin Core [dublincore.org] metadata in XHTML and RDF format for a content management system; however, no search engine yet supports indexing or searching this metadata.
When they do, a proposal like this might make (some) sense.
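For the curious, Dublin Core embedded in XHTML looks roughly like this (the values are made up):

    <head profile="http://dublincore.org/documents/dcq-html/">
      <title>Widget Maintenance Guide</title>
      <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/" />
      <meta name="DC.title"   content="Widget Maintenance Guide" />
      <meta name="DC.creator" content="J. Example" />
      <meta name="DC.subject" content="widgets; maintenance" />
      <meta name="DC.date"    content="2001-11-12" />
    </head>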
Re:Better Metadata (Score:2)
Re:Better Metadata (Score:2)
The real problem is on the other side, when the user fires up Google and enters the standard 2-4 query terms, like "bank australia". There is a lot less information there for a computer to decide that the user is looking for a bank in Australia.
Metadata on the web pages is pretty much useless for understanding what the user wanted.
search issues (Score:2, Interesting)
The main power technique, at least on Google, is using quotes and AND/OR to limit search results. Rather than spewing a line of text, enclosing specific "phrases" often gives more accurate results.
Then again, I have been able to simply cut n' paste error messages into the groups.google.com form and immediately receive accurate, useful hits. I think that though the internet and webpages are generally disorganized and decentralized, an outside entity can impose order given enough bandwidth, time, energy and intelligence. In the future, web services, probably based on CORBA and SOAP, will allow sites to return messages to searchers or indexing services, thus doing away with a lot of the mystery in the current system.
All that said, I have had excellent luck with google finding about 95% of all the information I have searched for in the past couple months, showing that a well-written spider and intelligent classification and rating can circumvent the problem of so much untagged, nebulous information.
The internet is something like the world's largest library, where anyone can insert a book and random organizers may (if they wish!) go through and make lists, hashes and indexes of the information for their own card catalogs. Right now, each search service maintains its own separate list! The crawler is like a super-fast librarian who can peruse the book. The coming paradigm will be fewer, more accurate and useful catalogs, along with books that "insert themselves" into these schemes intelligently and discreetly after a validation of informational content.
Re:search issues (Score:1)
Once you've thrown out the 'click here' and 'this link' junk, this is far more reliable than using meta tags, and often more reliable than looking for keywords within the page itself.
Re:search issues (Score:1)
Search engines should reduce the relevance of pages with huge META sections.
Sounds Good (Score:1)
Re:Sounds Good (Score:1, Funny)
Re:Sounds Good (Score:1)
The Wayback Machine (Score:1)
Isn't that what - is for? (Score:2, Informative)
For example, if I'm looking for info on a Toyota Supra and too many Celica-related pages come up, I'll type:
toyota supra -celica
On a related note, does anyone feel that Google's built-in exclusion list of universal keywords (a, 1, of) is really aggravating when Google excludes those words in phrases?
Re:Isn't that what - is for? (Score:2)
The suggestion was intended to tell the search engines which words on your site aren't relevant for search purposes. So a site primarily about Toyota Celicas, but that mentions Supras in a couple of places, might want to add Supra to its "nonwords" entry, to avoid confusing people looking for info about Supras.
So if the suggestion were in use by most people, you might not have to add "-celica" to your search, as it would be easier for the search engine to exclude pages that contain the word "Supra" but aren't relevant to your search.
It's in no way a perfect idea. But if enough people use it, it may have some value.
That's not going to help bandwidth (Score:3, Funny)
Re:That's not going to help bandwidth (Score:1)
Mommy, what does "view source" mean, and why is the computer swearing at me?
Re:That's not going to help bandwidth (Score:3, Insightful)
Why not use the HTTP referer header? (Score:1)
So I would suggest that he think about checking the referer, as this site [internet.com] shows, and maybe direct all users that come from a search engine to a page where he offers a search engine that is limited to his site. Since the referer also includes the whole search string, he could maybe even use it to fill out his search form.
I would even prefer this method, because it often happens to me that I enter a site via a link from a search engine and then find out that the result page is just a part of a frameset and is missing properties like Javascript variables. If I redirected search engine users to a defined starting point on my site, they would have less trouble. (Don't start a discussion about the sense and use of frames here :-) )
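With Apache's mod_rewrite, the redirect he describes might look like this (untested sketch; /search.cgi stands in for whatever site-search script you have):

    RewriteEngine On
    # Visitors arriving from a Google search get sent to our own site
    # search, seeded with their original query (captured as %1).
    RewriteCond %{HTTP_REFERER} google\.com/search\?.*q=([^&]+) [NC]
    RewriteRule .* /search.cgi?q=%1 [R,L]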
Of course... (Score:1)
Re:Of course... (Score:1)
I thought of a similar idea and worded it as such: (Score:1)
"Technique to negate words in a document for increased searching. For instance, include files that cause a phrase like 'How we converted to XHTML 1.0' to show up on every page. Only the page with actual information, should show up in search, not every page with the include file."
Re:I thought of a similar idea and worded it as su (Score:1)
Filenames as an unname (Score:1)
Why this is redundant, and overly subjective (Score:2)
Another problem with metadata in general, of which spam is but one symptom, is the fact that creators of content often have no idea of how their content appeals, or fails to appeal, to other people. Did Mahir have any idea that his name would become a top-ranked search term? Does anyone have any idea how his content should be ranked for a given search term (besides number one, of course)?
What is the number one piece of metadata found in spam messages? This is not spam.
Domain names (Score:1, Offtopic)
It's funny how most people think that common-word domains are valuable, but forget that a name that jumps out as the only result when typed into a search engine is pretty valuable too. Especially if it sounds like it is spelled.
Maybe not the best example, but since the 4-letter .com domains are practically all gone, I was going to register duxo.com. Unfortunately one of the many domain hogs got it the day I was going for it.
I got another one though, but it's not up yet so I won't tell what it is!
Re:Domain names (Score:1)
Yup... (Score:2)
My suggestion to anyone is that they develop three good domain names that they would be happy with. But for god's sake, do it *offline*! Don't search for them, don't try them in your browser, and don't tell anyone what they are. *Then* just go register one or all of them. Don't wait, don't search, and don't even breathe until they're yours.
Oh, and don't forget to trademark the language in those URLs (it can't be plain English, remember). If someone sees your new URL and likes it, they could register the TM if you don't. Then they can sue you for ownership of the domain, since you're clearly infringing on their TM; and they'll probably get the domain in the end.
Hey, I don't make the rules...
And my favorite word today is don't.
This will never work (Score:1)
For just the same reason the automotive industry has made clean-fuel vehicles standard, and the very way our capitalist world operates: for the time (money) it takes to implement this thing to make the world a better place, the costs can't be justified. Granted, if a lot of sites did this, there would be more time for everyone to spend playing with their dog rather than digging through irrelevant search results. But Joe Webmaster's company is never going to pay him to do it, and he's not going to spend his free time doing it when he could be spending time with his dog.
That's the way the world works right now, and people who want to change the world for the better will probably spend their time doing other things rather than putting unwords in their web documents.
A part they left out of the story; (Score:4, Funny)
Re:A part they left out of the story; (Score:1)
I can see it now... (Score:2, Insightful)
This proposal will not make the indexing of sites more reliable. If anything it will add to the common confusion associated with meta keywords. Yes, it is quite a nice idea in theory, but I can't see anyone wanting to exclude words from being searched. The main point in the proposal was that the author felt guilty about pulling in people who had entered search terms that appeared on his page. One would ask why he is publishing information on the internet if he doesn't want people to look at it. A better solution would be to get people to use search engines properly. As an example I will use the stalking-on-the-internet terms. If people put these words into Google and come up with his page, then perhaps they should have modified their query to something like "stalking on the internet", and they may not have found his page. On the other hand, if his page contains the phrase "stalking on the internet", it might be just what the seeker was looking for.
To this proposal I say nay. Or perhaps oink.
The Semantic Web (Score:5, Interesting)
The problem with content on the web today is that while it is perfectly readable by humans, it is incomprehensible to machines. If Tim and Co. get their way, and I for one would love to see the Semantic Web catch on, then we can get rid of kluges like the Anti-Thesaurus, HTML meta keywords and the like.
Strings convey no meaning out of context (Score:1)
Take for example a search for the string tar, which will yield documents containing:
tar -zxf update.tgz, or cp update.tar update.old, or roofing tar, or jeg tar en øl nu (Norwegian: "I'll have a beer now")
Each instance of tar above has a different meaning, but the same spelling. When you get into misspellings, spelling variations, and conjugation, then the actual concept is even harder to associate with a given range of strings.
Even Google searches are for strings and not concepts, but Google's ranking algorithm [google.com] relies on which pages get the most links from pages that also get the most links. However, you'll still get different results for color vs. colour and tyre vs. tire. Because the algorithm only reflects how people have chosen their links, it does, from time [theregister.co.uk] to time [theregister.co.uk], give unusual associations. ;)
Re:The Semantic Web (Score:3, Insightful)
Indeed, but how close are they to achieving anything of significance? AI has been working on a universal ontology for ages and gotten nowhere.
The fact that Berners-Lee agrees that it would be a "cool thing to have" does not make it any more likely to happen (by the way, TB-L first proposed the semantic web almost five years ago).
Re:The Semantic Web (Score:1)
The problem with the Semantic Web is that humans, in general, write web pages to be readable by humans, not by machines.
This is not likely to change anytime soon.
Re:The Semantic Web (Score:2, Interesting)
Re:The Semantic Web (Score:1)
No, not at all. It's easy to retrofit a web site with RDF metadata about the content of that site, and it requires no human-visible changes to the site. Metadata can be stored in HTML meta tags or, preferably, in separate RDF description files. None of this affects the way people surf the Web, and unless they have a good browser they won't even know the additional metadata exists.
In addition, using SW-friendly content in web pages (like strict XHTML, using CSS for all style, use of other XML dialects like SVG, MathML, CML and so on) only aids machine comprehension while not detracting a single iota from human comprehension.
It's possible to have web content that is both human- and machine-comprehensible, but it unfortunately takes a little more effort than making content that is just human-readable.
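For instance, a separate RDF description file for a page might look like this (the URL and values are invented):

    <?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="http://www.example.org/page.html">
        <dc:title>Widget Maintenance Guide</dc:title>
        <dc:subject>widgets; maintenance</dc:subject>
      </rdf:Description>
    </rdf:RDF>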
Could have done with this years ago (Score:2, Funny)
What about !keyword? (Score:3, Informative)
Presumably the same could be done for <meta name="keywords"> in HTML.
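Under that hypothetical convention, the Celica site from the earlier Supra example might write:

    <meta name="keywords" content="toyota, celica, gt-four, !supra">

An engine that didn't understand the ! prefix might just strip it and index the word anyway, which is the objection raised elsewhere in this discussion.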
I like the idea (Score:2)
However, you could show some information if people visit with a certain Referrer header, directing them to more useful pages. This works in the majority of cases, and it doesn't need much cooperation from the search engines.
Exclude genealogy pages; nonsearch tag (Score:1)
2. Some sites have menus on each page listing every topic on the site. You search on a word and get every page in the site returned, including those that mention the topic only in the menu. A tag such as this <nonsearchable> </nonsearchable> surrounding the menus might aid in solving this problem.
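That is, something like this (the tag is the poster's invention, not real HTML):

    <nonsearchable>
      <!-- site-wide menu: appears on every page, but shouldn't make
           every page a hit for "genealogy" or "recipes" -->
      <a href="/genealogy/">Genealogy</a> | <a href="/recipes/">Recipes</a>
    </nonsearchable>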
The Wrong Tree (Score:2)
Just overdo it! Re:The Wrong Tree (Score:2)
The more traditional search engines (not Google?) have protections against sites that do extreme things to get to #1 in the hit list. They have protections against repeating one word a lot of times (META="sex, sex, sex"). Repeating your "exwords" in the normal meta tag that many times should trigger the search engine's "spam alert" and decrease the search relevance.
Why this won't work (Score:2)
Unmarketing (Score:2)
there should be a metadata standard allowing webmasters to manually decrease the relevance of their pages for specific search terms and phrases."
So, in other words... businesses will want to reduce their exposure on the web? I don't think so.
This is backwards (Score:2, Insightful)
No more meta tags (Score:1)
There were a couple of interesting papers at the ACM's SIGIR [acm.org] this year that use only the anchor text pointing to a webpage to get a description of the pointed-to page, and they could do some cool things like language translation with just that data.
Invisible pages for the pissed-off (Score:1)
I know of at least one web page that has been very carefully constructed so that search engines won't find it, but people who know what they're looking for will find it easily.
With no subject-specific keywords, however, unless you do know what the author is talking about, you won't have any idea what she's so pissed off about.
No, don't ask: I am routinely pissed off for the same reason, and will not post the URL here.
I wouldn't mind if searches for my name brought up my current web page, rather than the one I had in 1995. But that's another matter.
...laura
Re:Invisible pages for the pissed-off (Score:1)
Biblio entries (Score:1)
While I have occasionally found a source I needed from a hit on a bibliographic entry, one of my pet peeves, even on Google, is long lists of nothing but bibliographic entries. Usually it's a pretty clear sign that there isn't much on the topic available on the Internet, but sometimes I just need to change my search terms slightly.
But I think nonword is a bad idea. If the website's editors decide to keep a word, and Google's page-rank technology shows it to me, I'm willing to check it out.
very useful for single site search engines (Score:1)
Searching for BSML: Bull Shit Markup Language (Score:2)
Apparently, they would prefer that people searching for "BSML [google.com]" did not turn up my web page. I wonder if they've tried to get the Boston School for Modern Languages [bsml.com] to change their name, too?
Now isn't the whole point of properly using XML and namespaces to disambiguate coincidental name clashes like this? If LabBook thinks there's a problem with more than one language named BSML, then they obviously have no understanding of XML, and aren't qualified to be using it to define any kind of a standard.
Maybe LabBook should put some meta tags on their web pages to decrease their relevance when people are searching for "Bull Shit" or "Modern Language".
-Don
========
From: "Gene Van Slyke" <gene.vanslyke@labbook.com>
To: <don@toad.com>; <dhopkins@maxis.com>
Sent: Monday, November 12, 2001 10:36 AM
Subject: BSML Trademark
Don,
While reviewing the internet for uses of BSML, we noted your use of BSML on http://catalog.com/hopkins/text/bsml.html [catalog.com].
While we find your use humorous, we have registered the BSML name with the United States Patent and Trademark Office and would appreciate you removing the reference to BSML from your website.
Thanks for your cooperation,
Gene Van Slyke
CFO LabBook
========
Here's the page I published years ago at http://catalog.com/hopkins/text/bsml.html [catalog.com]:
========
BSML: Bull Shit Markup Language
Bull Shit Markup Language is designed to meet the needs of commerce, advertising, and blatant self promotion on the World Wide Web.
New BSML Markup Tags
CRONKITE Extension
This tag marks authoritative text that the reader should believe without question.
SALE Extension
This tag marks advertisements for products that are on sale. The browser will do everything it can to bring this to the attention of the user.
COLORMAP Extension
This tag allows the html writer complete control over the user's colormap. It supports writing RGB values into the system colormap, plus all the usual crowd pleasers like rotating, flashing, fading and degaussing, as well as changing screen depth and resolution.
BLINK Extension
The blinking text tag has been extended to apply to client side image maps, so image regions as well as individual pixels can now be blinked arbitrarily.
The RAINBOW parameter allows you to specify a sequence of up to 48 colors or image texture maps to apply to the blinking text in sequence.
The FREQ and PHASE parameters allow you to precisely control the frequency and phase of blinking text. Browsers using Apple's QuickBlink technology or MicroSoft's TrueFlicker can support up to 65536 independently blinking items per page.
Java applets can be downloaded into the individual blinkers, to blink text and graphics in arbitrarily programmable patterns.
See the Las Vegas and Times Square home pages for some excellent examples.
BSML prior art? (Score:2)
The wheels of government and commerce would grind to a halt were they not well lubricated with Bull Shit. So I created the Bull Shit Markup Language and published the BSML web page [catalog.com] years ago, putting it in the public domain for the good of mankind. Now somebody has finally taken it seriously, and is trying to monopolise BSML!
He who controls BSML controls the Bull Shit... and he who controls the Bull Shit controls the Universe!
http://catalog.com/hopkins/text/bsml.html [catalog.com]
Does anyone know of any prior art pertaining to Bull Shit and Markup Languages? What about VRML -- maybe I could get Mark Pesce to testify on my behalf? c(-;
Here's a list of the huge faceless multinational corporations I'm up against:
http://www.labbook.com [labbook.com]
"IBM, NetGenics, Apocom, Bristol-Myers Squibb, Wiley and other leaders of the life sciences industry support LabBook's BSML as the standard for biological information".
To paraphrase Pastor Martin Niemöller:
First they patented the Anthrax Vaccine
and I did not speak out
because I did not have Anthrax.
Then they patented the AIDS Drugs
and I did not speak out
because I did not have AIDS.
Then they patented Viagra
and I did not speak out
because I already had an erection.
Then they came for the Bull Shitters
and there was no one left
to speak out for me.
-Don