Science Project Quadruples Surfing Speed - Reportedly
johnp. writes "A computer browser that is said to at least quadruple surfing speeds on the Internet has won the top prize at an Irish exhibition for young scientists, it was announced on Saturday. Adnan Osmani, 16, a student at Saint Finian's College in Mullingar, central Ireland, spent 18 months writing 780,000 lines of computer code to develop the browser. Known as "XWEBS", the system works with an ordinary Internet connection using a 56K modem on a normal telephone line.
" A number of people had submitted this over the weekend - there's absolutely no hard data that I can find to go along with this, so if you find anything more on it, plz. post below - somehow 1500 lines of code per day, "every media player" built in doesn't ring true for me.
Basic maths. (Score:4, Interesting)
Hmmmm he even admits it crashes (Score:4, Interesting)
This isn't a microprocessor - the speed it runs at should be completely unrelated to its caching.
I'm very very skeptical that this is anything more than a browser with precaching.
It also makes other ludicrous statements about how blind people can now access the web. I'm not sure how they do it presently, but I know that they do.
Re:Basic maths. (Score:2, Interesting)
Re:Hmmmm he even admits it crashes (Score:3, Interesting)
There is also BLinux [leb.net], a project to make Linux accessible for blind users.
JAWS is pretty cool; I use it at work sometimes to test sites.
Hmm. (Score:5, Interesting)
But what really got me were the two most important features someone could ever want in a Web Browser - it can play DVDs [it incorporates every media player!], and it also has a handy animated assistant called Phoebe.
Now, I am most probably wrong, and will happily eat my hat, but I can't help but feel that this isn't an entirely accurate article.
P.S. Does anyone know if it is standards compliant?
Strong sense of deja vu (Score:3, Interesting)
Why am I thinking this is just another one of those snake-oil web speedups that does lots of caching and pre-emptive downloading of pages on the off chance you are going to view it? I'll be taking this story with a large pinch of salt for now I think.
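A minimal sketch of that kind of speculative pre-fetching, in Python. Everything here is hypothetical (the `fetch` function is a stand-in for a real HTTP GET, and the link pattern is simplified); it just illustrates the "download pages on the off chance you view them" trick:

```python
import concurrent.futures
import re

# Hypothetical in-memory page cache: URL -> HTML body.
cache = {}

def fetch(url):
    # Stand-in for a real HTTP GET; a real browser would hit the network here.
    return "<html><a href='/a'>a</a><a href='/b'>b</a></html>"

def prefetch_links(url):
    """Fetch a page, then speculatively fetch every link it contains.

    When the user later clicks a link, the body is already in `cache`,
    so the page appears to load "four times faster" - no magic involved.
    """
    html = fetch(url)
    cache[url] = html
    links = re.findall(r"href='([^']+)'", html)
    # Fetch the linked pages in the background while the user reads.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        for link, body in zip(links, pool.map(fetch, links)):
            cache[link] = body
    return links
```

The catch, of course, is that every speculatively fetched page the user never clicks is pure wasted bandwidth - which matters rather a lot on a 56K modem.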
Is that so? (Score:4, Interesting)
Sure, you can leave stuff out (images, JavaScript, Flash), but "at least quadruple"? If the page is simple enough then you can't just ditch a chunk of it.
Ooh, AND "[at] least quadruple surfing speeds" and "they found it boosted surfing speeds by between 100 and 500". Even the article isn't making any sense...
Of course, if this turns out to be true then I will be the first to eat my cat (and the first to download it), but I'm sure this isn't even possible, right?
Just my 2 cents (actually, that was more like 5) . . .
Let's not get too excited (Score:2, Interesting)
What a load of crap (Score:5, Interesting)
A kid coding 780,000 lines of code in 18 months. All alone. In that time he would have had to design and implement the whole thing, including "every single media player built in".
It would require some sort of dial-up-server side module to compress and modify the contents of the data and this kind of system would most certainly be a lossy method for transferring data. It won't be possible to transfer binary data with this thing without corrupting the result completely.
And what kind of a piece of software would choke under the load of 7x56k modem ("At seven times it actually crashes so I have limited it to six.")?
This is just a cheap attempt to gather some attention.
Ok, let's think this through.... (Score:5, Interesting)
If it does require a server side piece, it's not a web browser, per se; but as a general question, is it worthwhile to look into "compressed" web pages, e.g., foo.html.zlib? (I tend to doubt the savings are that much for the "average" page, but shoving graphics into an archive might keep down the number of requests needed to fetch a whole page and its graphics.)
If it's not server side compression, the only thing I can think of (and fortunately smarter people than me will think of other things I'm sure) is that he's pre-fetching and caching pages to make the apparent speed faster.
So is the "secret" that he has some heuristic that sensibly guesses what links you'll click next, combined with regularly fetching, oh say, your most requested bookmarks? (In my case it might look like: slashdot -- New York Times -- slashdot -- sourceforge -- slashdot -- freshmeat -- eurekareport -- slashdot.)
In other words, is he mirroring sites locally in the background? And if so, how much bandwidth is wasted just sitting in the cache until it's stale?
(On the other hand, could I point his browser at
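On the "compressed web pages" question above, a quick experiment with Python's standard `zlib` module gives a feel for how compressible typical HTML is (the sample page is made up, and real pages with images won't compress anywhere near this well):

```python
import zlib

# A toy, highly repetitive HTML page - markup-heavy text like this
# is exactly what DEFLATE-style compression eats for breakfast.
html = ("<html><head><title>Example</title></head><body>"
        + "<p>Hello world</p>" * 200
        + "</body></html>")

raw = html.encode("utf-8")
packed = zlib.compress(raw, 9)          # maximum compression level
ratio = len(raw) / len(packed)          # e.g. well over 4x on this sample

# Compression is lossless: the original page round-trips exactly.
restored = zlib.decompress(packed).decode("utf-8")
```

Note this only helps if the *server* sends compressed data - a client-only browser can't compress bytes it hasn't received yet, which is the core problem with the article's claim.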
Re:Hmm. (Score:5, Interesting)
I have a feeling this project is nothing but hot air.
Re:Basic maths. (Score:3, Interesting)
I must be getting old. In my younger days, that was called "libraries", and you only counted each line once, no matter how many times they were reused.
Re:It slices it dices (Score:5, Interesting)
The article mentions that he uses a simple modem, so perhaps that means he just wrote some new method for transmitting more bits/second via more accurate signal detection.
Although this has basically nothing to do with the browser at all, so it doesn't make any sense to me. Sounds like the article mixes apples and oranges, or perhaps the "student" is just laying down a smoke screen so that no one will steal his ideas before he gets the patent.
my 5 cents...
Increases "surfing speeds" (Score:3, Interesting)
Re:have you ever been 16 (Score:2, Interesting)
I've always been proud that when looking back on my own projects I had something like 20-30 lines a day. On really good days you can write hundreds of lines but sometimes you have to throw everything out again because it's crap.
I hope this guy isn't for real, he'll be burnt out by the time he's 30.
Irish Patent Office does not know about this (Score:5, Interesting)
Query
Application Date: 08/01/2003 -> 10/01/2003
Abstract: *internet*
Results: 0
Query
Date Of Grant: 08/01/2003 -> 10/01/2003
Abstract: *internet*
Results: One Result: 2000/0717 82661 Server-based electronic wallet system
That's it, so it doesn't seem he applied for the patent in Ireland then...
P.S. The stars around "internet" are mine; I used them to indicate that I searched all abstracts containing the word "internet".
What methods are possible? Hard disks are cheap? (Score:2, Interesting)
Serverside caching [propel.com] could be used.
TCP/IP non-conformity [slashdot.org] is another option.
Assuming this is true, (ignoring the 1500 lines a day), what else could he be doing?
Judging by hard disk prices, client-side caching algorithms would make sense. Caching many portal and search engine homepages is a powerful start. Combine that with a central server that reviews these popular pages for changes and publishes a simple summary for the browser client to collect and compare with older summaries; the browser then fetches only the updated portal pages for its cache, optimizing portal renders.
Then less common homepages, such as the high school I attended, can be gleaned from the user's typed-in web address history and automatically cached as a cron job.
Creating cached copies of commonly used graphics on portal websites can save a ton of bandwidth. Again, a server-based bot could rate the link count of graphics on portal sites, note whether a graphic has changed, and post this list for browsers to collect for caching. Searching HTML for image files that are already stored in the cache, and modifying the page on the fly to call only the cached image, would save bandwidth - e.g. caching all of Slashdot's article category icons.
Then the tricky part: which linked pages to cache while the user reads a page, so that when a link is clicked, the page renders fast. I would download the HTML from all of them and, while the reader reads, check for already cached images and then start downloading image files.
-Mac Refugee, Paper MCSE, Linux Wanna be!
and first poster of the word "knoppix"
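The "modify the page on the fly to call only the cached image" step from the parent could look something like this sketch (the cache table and URLs are hypothetical, and a real implementation would use an HTML parser rather than a regex):

```python
import re

# Hypothetical map of remote image URLs to already-downloaded local copies.
cached = {
    "http://images.example.com/topics/topicnews.gif": "/cache/topicnews.gif",
}

def use_cached_images(html):
    """Rewrite <img src="..."> attributes that point at images we already
    hold locally, so the browser never re-fetches them over the modem."""
    def swap(match):
        url = match.group(1)
        # Substitute the local path if cached, otherwise leave the URL alone.
        return 'src="%s"' % cached.get(url, url)
    return re.sub(r'src="([^"]+)"', swap, html)
```

Standard browsers already do roughly this via HTTP caching headers, so the real-world win over a stock browser would be modest.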
Re:Accurate and not... (Score:3, Interesting)
You are right on the presentation bit
Re:Basic maths. (Score:5, Interesting)
Indeed. I remember reading that IBM reckons that, including design, coding, testing, debugging and documentation, a programmer is doing well to get 10 lines of code per day, averaged over the life of the project.
It also depends how he's counting lines. In C, because line counts vary so much with individual formatting style, a good rule of thumb is to count semicolons. Even then it won't tell you whether programmer A is writing fast but hard-to-read code while programmer B is checking the return value of every system call (as you're supposed to, but few ever do), adding lines and robustness with no extra actual functionality.
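The semicolon rule of thumb above is easy to sketch (this toy counter deliberately ignores string literals and comments, which a serious counter would have to skip):

```python
def semicolon_loc(c_source):
    """Crude statement count for C: one statement per semicolon.

    Knowingly naive - semicolons inside string literals, character
    constants, and comments are all (wrongly) counted too.
    """
    return c_source.count(";")

sample = """
int main(void) {
    int x = 0;
    for (x = 0; x < 10; x++)
        printf("%d\\n", x);
    return 0;
}
"""
# Declaration (1) + for-header (2) + printf (1) + return (1) = 5 statements.
```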
Re:Accurate and not... (Score:3, Interesting)
Let's say you want to increase compression of some data, e.g. HTML. Could there be a technique to speed things up? Sure there is: get rid of the spaces, remove some tags, etc.
Well, let's say that with each compression technique there are levels of what can be thrown away. And maybe when he tweaks it to level 7 he throws away too much. At that point the app crashes, since he may be throwing away something important.
That was my point about partially lossless...
[Hard/Soft]ware... (Score:2, Interesting)
But you're talking about things that occur in two different places... The modem is hardware, the browser is software. A new browser could not increase the _actual_ speed of the modem. Obviously, he's talking about some algorithm that makes better use of the bandwidth to show a _perceived_ increase in speed, because I seriously doubt that he's come up with a new compression algorithm that compresses 4 times better than existing stuff but doesn't require the server to know about it.
T
Re:Basic maths. (Score:3, Interesting)
From my software engineering course way back in college I think I remember the number being 4 or 5. But that is more like an industry average. One thing about software is that the best programmers are something like two to three orders of magnitude more productive than the average. Between that and the communication costs growing exponentially in a group you find that a few very talented programmers are vastly more productive than a mass of average programmers.
Still, sustaining 1,500 LOC per day for a year and a half ... that's beyond the productivity level of anyone I've ever seen. I personally have managed 4,500 per day for a period of about a week on occasion ... but I wasn't sleeping much during that period.
I am not sure I'd take that number at face value though. If this were real he would almost certainly be using a lot of prewritten code for codecs and the like and that would balloon the LOC for little effort on his part. It's more than a little unlikely that he'd be able to write all his own codecs in the first place.
So, while the LOC sounds specious, it's potentially believable given the probability of code reuse.
The thing that makes this entirely unbelievable is the performance claim. 4x performance of existing browsers over a 56k line? That's simply not possible since the bulk of time spent waiting is data transmission time. That could be improved but only with a server side component and it's doubtful it could be improved substantially without a large loss in quality.
I'm not going to dismiss the claim of a new web browser, but I'd be surprised if any of the size and performance claims hold water.
Re:Basic maths. (Score:4, Interesting)
Most likely he has taken an open source browser and added in his own extensions. This is the type of innovation that making the browser open source is meant to support.
As for speeding up browsing by 100%, that is pretty easy. We did a lot of work on HTTP-NG, and you can speed up downloads a lot just by compressing the headers so that they fit into a single packet. HTML is also very compressible. The biggest mistake we made in the original Web code was not putting a lightweight compression scheme into the code, although it did make it into the specs.
Of course, the reason this did not happen was the LZW patent GIF fiasco and the then state of the GNU compression libraries. Even so, Microsoft has supported compression in IE for some time.
I am rather more skeptical about the 500% claim. I don't think that there is that much redundancy unless you have completely artificial examples.
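The header-compression point in the parent is easy to demonstrate with `zlib` (the request below is a made-up but typical-looking HTTP/1.1 request of the era):

```python
import zlib

# A representative uncompressed HTTP/1.1 request - headers are verbose,
# redundant ASCII, which is exactly what DEFLATE handles well.
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0)\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "Accept-Language: en-us,en;q=0.5\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)

raw = headers.encode("ascii")
packed = zlib.compress(raw, 9)
# Fewer bytes per request means the whole request is likelier to fit
# in a single packet, saving round trips on a slow link.
```

This is the same idea that later standards pursued with dedicated header-compression schemes; plain `zlib` here is just an illustration.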
Yeah, right (Score:3, Interesting)
I call bullshit. That claim doesn't make any sense whatsoever, especially if it's just software.
It seems (to me) like he just threw together a bunch of MS APIs (such as the Microsoft Speech API for 'Phoebe', the Windows Media API for the DVD and video players, and probably even used IE to display pages).
At most he threw in an intelligent caching routine, such as pre-downloading linked pages or something. I also don't think he wrote 780 kloc.
Re:Basic maths. (Score:5, Interesting)
I'd give him first prize... (Score:2, Interesting)
Re:Hmm. (Score:4, Interesting)
The claim that it's 100 to 500% faster is probably accurate in some sense, but compared to what? An old version of Netscape or Explorer? And on what kind of set-up? You can probably see that kind of variation in a single browser installation just by changing to different options and settings or by closing other windows or background applications. Personally, I often find myself switching between browsers depending on what seems to be working better on a particular day or on a particular network or machine.
On the other hand, he does sound like he's a bright kid with a good future, but probably one that just took Mozilla and turned it into snazzy looking bloatware with a bunch of extra features. Or, perhaps an even brighter kid who did the same thing from "scratch" with a lot of cutting and pasting (of his own work and from existing programs and libraries) to end up "writing" so many lines of code.
Re:MIT is better (Score:3, Interesting)