Streaming a Database in Real Time
Roland Piquepaille writes "Michael Stonebraker is well-known in the database business, and for good reasons. He was the computer science professor behind Ingres and Postgres. Eighteen months ago, he started a new company, StreamBase, with another computer science professor, Stan Zdonik, with the goal of speeding access to relational databases. In 'Data On The Fly,' Forbes.com reports that the company's software, also named StreamBase, reads TCP/IP streams and uses asynchronous messaging. Streaming data without storing it on disk gives them a tremendous speed advantage. The company claims it can process 140,000 messages per second on a $1,500 PC, while its competitors can only deal with 900 messages per second. Too good to be true? This overview contains more details and references."
Seriously, Michael (Score:4, Insightful)
It must be a lot, since the pay-for-play is so obvious.
THE TRUTH ABOUT ROLAND PIQUEPAILLE (Score:4, Informative)
I think most of you are aware of the controversy surrounding regular Slashdot article submitter Roland Piquepaille. For those of you who don't know, please allow me to bring forth all the facts. Roland Piquepaille has an online journal (I refuse to use the word "blog") located at http://www.primidi.com/ [primidi.com]. It is titled "Roland Piquepaille's Technology Trends". It consists almost entirely of content, both text and pictures, taken from reputable news websites and online technical journals. He does give credit to the other websites, but it wasn't always so. Only after many complaints were raised by the Slashdot readership did he start giving credit where credit was due. However, this is not what the controversy is about.
Roland Piquepaille's Technology Trends serves online advertisements through a service called Blogads, located at www.blogads.com. Blogads is not your traditional online advertiser; rather than base payments on click-throughs, Blogads pays a flat fee based on the level of traffic your online journal generates. This way Blogads can guarantee that an advertisement on a particular online journal will reach a particular number of users. So advertisements on high traffic online journals are appropriately more expensive to buy, but the advertisement is guaranteed to be seen by a large number of people. This, in turn, encourages people like Roland Piquepaille to try their best to increase traffic to their journals in order to increase the going rates for advertisements on their web pages. But advertisers do have some flexibility. Blogads serves two classes of advertisements. The premium ad space that is seen at the top of the web page by all viewers is reserved for "Special Advertisers"; it holds only one advertisement. The secondary ad space is located near the bottom half of the page, so that the user must scroll down the window to see it. This space can contain up to four advertisements and is reserved for regular advertisers, or just "Advertisers". Visit Roland Piquepaille's Technology Trends (http://www.primidi.com/ [primidi.com]) to see it for yourself.
Before we talk about money, let's talk about the service that Roland Piquepaille provides in his journal. He goes out and looks for interesting articles about new and emerging technologies. He provides a very brief overview of the articles, then copies a few choice paragraphs and the occasional picture from each article and puts them up on his web page. Finally, he adds a minimal amount of original content between the copied-and-pasted text in an effort to make the journal entry coherent and appear to add value to the original articles. Nothing more, nothing less.
Now let's talk about money. Visit http://www.blogads.com/order_html?adstrip_category=tech&politics= [blogads.com] to check the following facts for yourself. As of today, December XX 2004, the going rate for the premium advertisement space on Roland Piquepaille's Technology Trends is $375 for one month. One of the four standard advertisements costs $150 for one month. So, the maximum advertising space brings in $375 x 1 + $150 x 4 = $975 for one month. Obviously not all $975 will go directly to Roland Piquepaille, as Blogads gets a portion of that as a service fee, but he will receive the majority of it. According to the FAQ, Blogads takes 20%. So Roland Piquepaille gets 80% of $975, a maximum of $780 each month. www.primidi.com is hosted by clara.net (look it up at http://www.networksolutions.com/en_US/whois/index.jhtml [networksolutions.com]). Browsing clara.net's hosting solutions, the most expensive hosting service is their Clarahost Advanced (http://www.uk.clara.net/clarahost/advanced.php [clara.net]) priced at £69.99. This is a small fraction of what the advertising brings in.
For the record--Taco's response to this (Score:5, Informative)
I then told him about the controversy over it in posters' minds, and he said it was just a "new successful troll meme." Good luck getting through to Slashdot's editors, because clearly Malda does not consider this anything to take seriously.
Re:THE TRUTH ABOUT ROLAND PIQUEPAILLE (Score:1)
Stop encouraging people to visit (Score:3, Insightful)
Re:THE TRUTH ABOUT ROLAND PIQUEPAILLE (Score:2, Interesting)
Queue dancin' (Score:3, Funny)
Re:Queue dancin' (Score:1)
speed focus (Score:3, Insightful)
Re:speed focus (Score:3, Informative)
Re:speed focus (Score:4, Interesting)
One question might be...why write the data directly to a database initially? Why not utilize a faster format, then write to the DB when things have slowed down (i.e. caching)?
Admittedly I haven't read the article, but I am familiar with 200+ GB databases, and there are ways to deal with performance with current DB tech.
I do welcome any new competition, but there are ways of querying data in memory already. Heck, put the whole thing on a RAM Drive...how much data can there be for stock tickers?
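A minimal sketch of that caching idea in Python (the table name, schema, and flush threshold here are all invented, and any DB-API 2.0 connection would do): absorb writes in memory, then batch them to the database when convenient.

    import sqlite3
    from collections import deque

    class WriteBuffer:
        """Absorb write bursts in memory; flush to the DB in batches later."""
        def __init__(self, conn, flush_size=10000):
            self.conn = conn              # any DB-API 2.0 connection
            self.pending = deque()
            self.flush_size = flush_size

        def write(self, row):
            self.pending.append(row)      # O(1), no disk I/O on the hot path
            if len(self.pending) >= self.flush_size:
                self.flush()

        def flush(self):
            rows = list(self.pending)
            self.pending.clear()
            with self.conn:               # one transaction per batch
                self.conn.executemany("INSERT INTO ticks VALUES (?, ?)", rows)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ticks (symbol TEXT, price REAL)")
    buf = WriteBuffer(conn)
    buf.write(("IBM", 97.5))
    buf.flush()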
Re: (Score:3, Insightful)
Re:speed focus (Score:2)
Re:speed focus (Score:2)
Re:speed focus (Score:2)
Re:speed focus (Score:2)
Any IT shop required to keep their products up will have these things in place.
Re:speed focus (Score:3, Informative)
What distinguishes an RDBMS is the fact that its storage is permanent and engineered to perform crash recovery. This means that even a memory-resident Oracle database will be doing synchronous writes to its transaction logs, which ensures that any transaction can be regenerated after a crash.
Re:speed focus (Score:3, Funny)
Yeah, this will outright kill mysql, I'm swapping tomorrow, got any cash to spare?
Re:speed focus (Score:2)
"Do you have any cache to spare?"
*grynn*
Re:speed focus (Score:5, Informative)
It's called IRC.
Re:speed focus (Score:4, Informative)
Re:speed focus (Score:1)
Re:speed focus (Score:5, Funny)
you start with /join, of course...
Re:speed focus (Score:2, Insightful)
> for web applications that don't need funky features but where concurrency
> and speed are important
As near as I can make out from the (somewhat nontechnical) article, this is not a traditional database in any normal sense; it's more like a query engine for streaming data. It doesn't permanently store all the data in the stream that's passing through it. What it does store, I take it, is query results. So I guess basically you keep the answers and throw the raw stream away.
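A toy illustration of that reading (Python, with made-up tick values): the standing query keeps only its running answers, and every raw message is dropped the moment it's consumed.

    class RunningStats:
        """Standing query: running mean and minimum over the stream."""
        def __init__(self):
            self.count = 0
            self.total = 0.0
            self.minimum = float("inf")

        def consume(self, price):
            self.count += 1
            self.total += price
            self.minimum = min(self.minimum, price)

        @property
        def mean(self):
            return self.total / self.count if self.count else 0.0

    stats = RunningStats()
    for price in (101.5, 99.2, 100.7):     # stand-in for a live TCP feed
        stats.consume(price)               # the tick is gone after this line
    print(stats.mean, stats.minimum)       # only the query results remain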
Practical Considerations (Score:1, Informative)
I used to work with a mySQL variant which facilitated queries by using a RAMDisk and an optimized version of Watcom Pascal to enhance query functionality. We made it open source, but last I heard, the last administrator had converted it into an MP3-labelling shareware package.
Re:Practical Considerations (Score:2)
The dropping cost of memory wipes out your practical concerns. You can have all of the logical correlations that you want in memory. We tend to think that we have to write data to disk to make it organized because today's operating systems and programming languages give us very little direct control of memory, but they give us a great deal of control over what we write to the disk.
If we had more operating systems that gave us direct control of memory, organizing data there would feel as natural as organizing it on disk does today.
Re:Practical Considerations (Score:2)
You're right there. At about $150/Gig, and not using disk space, that $1500 PC system could possibly be an Athlon 64 with 8 Gig of memory.
Re:Practical Considerations (Score:2)
First, you are correct. With operating systems, the actual layout of the disk is a mystery to the programmer. I use fopen(), but really don't know where the file is.
The layout of the file, however, is completely under my control. I often know the complete logical structure of the data in the file.
In early assembler-type programming languages, the programmer would lay out every data structure in memory by hand.
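That kind of control over file layout is easy to demonstrate; here's a small sketch (field layout invented) using Python's struct module, which fixes the byte-for-byte record layout even though the OS still decides where the file actually lives on disk:

    import struct

    # A fixed 16-byte record: 4-byte int id, 8-byte double price,
    # 3-char symbol plus one pad byte; "<" means no hidden alignment.
    RECORD = struct.Struct("<i d 3s x")

    with open("ticks.dat", "wb") as f:
        f.write(RECORD.pack(1, 101.5, b"IBM"))

    with open("ticks.dat", "rb") as f:         # read it back, same layout
        rec_id, price, symbol = RECORD.unpack(f.read(RECORD.size))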
WWGT (Score:2, Funny)
Why don't you ask Google? (Score:2)
You mean Trump.com? (Score:1)
Re:You mean Trump.com? (Score:2, Funny)
Duh (Score:2, Informative)
As long as it's read only, the disk won't be touched.
A writeable database that doesn't need to be written to disk is not a database, it's called a nonpersistent cache.
Re:Duh (Score:4, Insightful)
At no time is the data 'stored' in any way
You cannot query anything that happened in the past, because the program doesn't remember it.
Re:Duh (Score:2)
Re:Duh (Score:2, Insightful)
Re:Duh (Score:3, Interesting)
Any of the enterprise databases will, given gobs of memory, end up caching the entire database in memory.
That's still much slower than in-memory approaches that don't use a database at all. For apps that are amenable to the stick-it-all-in-RAM approach, serializing all your data access is a performance killer.
A writeable database that doesn't need to be written to disk is not a database, it's called a nonpersistent cache.
What does it do? (Score:2, Redundant)
Re:What does it do? (Score:1, Insightful)
That's just a SWAG, but from the article that's what it sounds like to me.
Re:What does it do? (Score:2)
Re:What does it do? (Score:2)
I guess the idea is that you can run SQL queries on those in-memory tables (as opposed to searching memory in some non-standard way).
>This sounds like it needs a constant stream of data.
It doesn't _need_ a constant stream of data - data streams are there anyway.
It replaces disk-based databases, which are apparently useless for real-time decision support systems that must process huge constant streams of data.
Data Space Transfer Protocol [DSTP]? (Score:2)
Scientific programming question: Anybody have any experience with the Data Space Transfer Protocol [dataspaceweb.net]? Also known as the "Data Socket Transfer Protocol"? National Instruments [NI] wrote a DSTP front end into LabVIEW [ni.com], but if any major vendors have a DSTP back end, I haven't discovered it.
Or does anyone have any experience with any other methods of moving large amounts of [strongly-typed] data across the wire so that it comes to rest in a central repository in some sort of a coherent fashion?
Thanks!
Re:Data Space Transfer Protocol [DSTP]? (Score:2)
Semantics. (Score:2)
No, Data Space Transfer Protocol is not "also known as" Data Socket Transfer Protocol.
First of all, Grossman's group at UIC [dataspaceweb.net] tends to call it Data Space Transfer Protocol. On the other hand, the promotional and marketing material at National Instruments tends to call it Data Socket Transfer Protocol.
Second, there seems to be some confusion as to what is meant by a backend. I want some sort of a server [something traditional, like Oracle/DB2/SQLServer, or something a little new-fangled, like Objectivity/DB].
Re:Data Space Transfer Protocol [DSTP]? (Score:2)
I've written my own wire protocol + packer
Ugh. (Score:2)
I've written my own wire protocol + packers and unpackers. I tag every data value with its type (number, time, string, etc.).
Re:Ugh. (Score:2)
Yeah. I suppose I could have used Corba, but now that I have the basic infrastructure in place there isn't really any advantage to doing so, since the effort involved in remote function calls is now as small as it will ever get.
Besides, I can think of at least one major (multi-million euro) software package that is considered almost too slow to be usable precisely because it is attempting to use Corba to shift serious amounts of data.
I wonder how this is different from MySQL (Score:1)
Re:Typical (Score:1)
I call foul (Score:4, Insightful)
So they manage to do their analysis without even touching main memory? Nifty! What do they do, make it all fit in the L1 data cache? OK, maybe the guy was misquoted - I trust reporters about as far as I can throw them - but the whole thing just smells funny to me. I'm betting that the massive speedup they report is only for carefully selected, pre-groomed data sets. I agree that analyzing data as it comes in rather than storing it up to recrunch later is the smart thing to do, but that insight isn't the kind of breakthrough the article spins it as.
Re:I call foul (Score:5, Interesting)
You have a "standing query". So you can ask things, like, what's the rolling average for the last 60 seconds for this ticker name. What's the minimum price for this commodity.
You can ask to correlate things. Store the last 90 minutes worth of transactions on these commodities. Search for these types of patterns.
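That first kind of standing query is cheap to sketch (Python, message shape invented): a 60-second rolling average that evicts old ticks as new ones arrive.

    from collections import deque

    class RollingAverage:
        """Standing query: mean price over the trailing 60 seconds."""
        def __init__(self, window=60.0):
            self.window = window
            self.ticks = deque()          # (timestamp, price) pairs
            self.total = 0.0

        def consume(self, ts, price):
            self.ticks.append((ts, price))
            self.total += price
            # Evict everything older than the window; memory stays bounded.
            while self.ticks and self.ticks[0][0] <= ts - self.window:
                _, old_price = self.ticks.popleft()
                self.total -= old_price
            return self.total / len(self.ticks)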
It sounds like what they have done is build an OLAP cube that builds its dataset on the fly by processing messages coming over a streaming interface.
It's much smarter to do that than to write every last transaction to disk and then query the transactions after the fact. That'd be the natural way to think about it if you used a relational database.
Essentially, it sure sounds like he's written a generalized packet filter, that can compute interesting functions on the data. Think snort, think ethereal, think iptables, think policy routing. Now apply those kinds of technology to "The price of this stock", "the location of that soldier", where those values are embedded in a network packet frame somewhere.
While each single application of this sounds trivial to implement, if he has done it in a generalized way that can keep pace with larger systems, bully for him.
The irony of all this for me is that at a former job, I used to process medical data exactly this way. It sounds like the HL7 interface issues we used to have. You couldn't possibly take a full HL7 stream and process it, so you'd filter it down to just the patients that this department was interested in. Then only process messages about those patients.
Even for those patients, there were rows you weren't interested in that you had to filter out. You spent a bunch of time filtering, and re-filtering.
We wrote the raw messages to disk and spooled them to ensure we didn't miss messages due to database problems (if the database was down, you had to spool until the database came back up; it was unacceptable to miss patient records because of database maintenance).
Kirby
Re:I call foul (Score:2)
Has nothing to do with relational databases (Score:5, Insightful)
Re:Has nothing to do with relational databases (Score:5, Funny)
Uh oh... You dared to slam Rolly. Prepare for the wrath of Mikey and his infinite mod points.
Lube thy anus.
Re:Has nothing to do with relational databases (Score:1)
So, if you need extra info on some result (e.g. look for out-of-place vehicles, followed by: what are the driver's vitals?) you just run another query on the new data stream, and definitely don't look at past data. That is where the "don't store" comes in.
Re:Has nothing to do with relational databases (Score:5, Insightful)
Re:Has nothing to do with relational databases (Score:2)
But he definitely did UTF$
Re:Has nothing to do with relational databases (Score:1)
From a purely theoretical pov, is there any reason not to be able to do this kind of thing relatively easily anyway?
if you're just getting information from streaming data, then surely the analogy would be putting rocks in a river - each rock would represent a set of if/then conditions, and the time the water spent passing the rock would be the time for the system to discretise the data, evaluate the conditions, then let it pass...
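In code the analogy is just a chain of predicates (Python, record fields invented): each record flows past every rock, and only the interesting ones surface.

    # Each "rock" is a predicate the stream flows past.
    rocks = [
        lambda rec: rec["speed"] > 120,              # moving too fast
        lambda rec: rec["zone"] not in ("A", "B"),   # out of place
    ]

    def filter_stream(stream):
        for rec in stream:
            if any(condition(rec) for condition in rocks):
                yield rec                # surfaced; everything else flows on

    hits = list(filter_stream([{"speed": 140, "zone": "A"},
                               {"speed": 90, "zone": "A"}]))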
Read the article before posting (Score:5, Insightful)
Re:Read the article before posting (Score:5, Insightful)
1. if you decide to add a new analytic you have to start with new data - you can't deploy a new analytical component and run it against historical data.
2. if your machine crashes - it takes all your accumulated analytical data along with it. Maintaining a distribution of activity calculated every 5 minutes over 90 days? Great, but after the server comes back up your data starts all over.
3. if your analytical component needs to run against a lot of history each time (e.g. total number of unique telephone numbers accessed per day, or a rolling median), then you'll have to maintain that detail data in memory - see the sketch below. As you can imagine, you can *easily* identify calculations that will exceed your memory. So to tune, you'll be forced to keep your calculations to relatively recent data only.
ken
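Point 3 is easy to see concretely (Python, window size invented): a rolling median, unlike a running total, forces you to keep every value in the window resident in memory.

    from collections import deque
    from statistics import median

    window = deque(maxlen=100_000)       # e.g. a day's worth of call durations

    def consume(call_duration):
        window.append(call_duration)     # all 100,000 values stay in memory
        return median(window)            # no compact running summary exists

    consume(42.0)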
Stonebraker gave a guest lecture to my class. (Score:2, Informative)
If a given provider is consistently slow, it raises a low-level alarm against that provider: don't trust their data, because it's slow. Similarly for various markets, and probably other groupings too. It probably does other processing on the data.
This data is
Re:Read the article before posting (Score:3, Interesting)
Data IS written to disk/backed-up. (Score:4, Informative)
Every time you get something interesting, you save that on disk too - but separately, into a much smaller db. This way state is also saved, and since the state is going to be much smaller than the data, there will be no speed issues.
Now the clever thing to do would be to link this flowing-state dbms (FSDBMS) to a standard rdbms working from the disk. Then you could verify the information from the FSDBMS, and ensure that things aren't screwed up. Also, based on patterns seen by the rdbms with long term data, new queries could be generated on the FSDBMS, allowing it to generate results from the data on the wire.
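A minimal sketch of that split (Python with sqlite3; the schema and the "interesting" test are invented): the raw stream is never persisted, only the alerts are.

    import sqlite3

    conn = sqlite3.connect("alerts.db")   # the small, durable side
    conn.execute(
        "CREATE TABLE IF NOT EXISTS alerts (ts REAL, unit TEXT, note TEXT)")

    def process(message):
        # The full stream never touches disk; only state worth keeping does.
        if message["fuel"] < 0.1:
            with conn:
                conn.execute("INSERT INTO alerts VALUES (?, ?, ?)",
                             (message["ts"], message["unit"], "low fuel"))

    process({"ts": 1102953600.0, "unit": "humvee-7", "fuel": 0.05})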
Sounds like it would have applications primarily where response time is at a premium, and long history is not such a large component of the information.
So in the case of military info, where a HumVee could be in trouble (a situ someone else has mentioned), the FSDBMS would raise the alarm, and some other process would then follow up and ensure that the alarm was taken care of.(The data itself would be backed up for future analysis, such as whether the query was correctly handled).
Dynamic queries in such a situ could be - get the id of the closest Apache reporting in, or closest loaded bomber en-route to some other target. Then the alarm handling program would re-route the bomber/apache to the humvee for support. While querying the disk database may be time intensive, the FSDBMS would have delivered a sub-optimal FAST solution.
So imagine the FSDBMS as a filter, giving different bits of information to different people. With the option that you could change the filter on the fly. And the filter could be complex, based on previous history etc., just like a DB query.
A Better Solution (Score:4, Informative)
Another option is EPL server by ispheres [ispheres.com]. Unlike the product mentioned here, which seems to be just some extra code thrown on top of a database, EPL server is built from the ground up for this sort of application.
For sensor networks (Score:4, Interesting)
Seems similar to his Aurora project... Stonebraker has a history of turning his university research projects into successful startups.
ACID? (Score:3, Insightful)
Re:ACID? (Score:3, Informative)
So, throw out ACID (if the problem domain doesn't require it) and get performance increases, wow! Probably they are now patenting it because no one had thought of that before...
Re:ACID? (Score:2)
Cyberpunk, Anyone? (Score:1)
Seems to me that something like this would be incredibly useful for that: when the data from a couple seconds ago is now obsolete, you definitely need to be able to parse your queue as fast as you can.
Memory faster than disk, film at 11 (Score:2)
> them a tremendous speed advantage.
There's a reason people generally don't do this, and that's because memory is expensive.
> The company claims it can process 140,000
> messages per second on a $1,500 PC, when its
> competitors can only deal with 900 messages per
> second.
But I bet you its competitors can serve huge web-sites at 900 messages per second, whereas StreamBase can serve fits-in-memory-only web-sites at 140,000 messages per second.
Re:Memory faster than disk, film at 11 (Score:1)
The article is not about databases in the conventional sense.
Classifier Systems: the Genetic Algor of streaming (Score:3, Interesting)
With a high enough stream processing speed (using StreamBase's methods), classifier systems might be useful for AI/adaptive learning scenarios.
Re:Classifier Systems: the Genetic Algor of stream (Score:3, Interesting)
The material covered in the book is also still very relevant, and the book's a joy to read.
You should buy it
Seems more like MOM than DB (Score:2, Insightful)
why not just use the echo port (Score:2)
Simple, lots of space, and secure...until a power failure.
Not sure what the selling point is (Score:2)
More Information (Score:2, Informative)
sed via SQL? (Score:2)
Article text minus the spam (Score:2, Informative)
Got the wrong end of the stick (Score:2, Insightful)
This idea really doesn't seem that new, though; it's just real-time DSP on text-based data, with a front-end that pretends to be a database.
The analysis is what will be hard (Score:1)
Seems kinda silly to me. (Score:4, Insightful)
Sooner or later you have to put something somewhere. Let's say you monitor a battalion in battle in realtime. All of these messages are streaming in and being analyzed. Great. But now what? So something triggers an alert, say. Well, what's tracking the status of the alert? Wouldn't you want to track the status of an alert saying "this Humvee is off course"? Wouldn't you want to track whether someone had acknowledged the alert, and what they did about it?
And don't forget there are liability issues, historical issues, and more. You're a stock trader, all of these messages are coming in and being analyzed. You get an alert... one of your triggers tripped. You make a trade as a result, only to find out 30 minutes later that the trigger was WRONG and your trade was WRONG, and you (or your company) are out $10 million. How do you prove that you made the trade based on the trigger, like you were supposed to, and not because you f**ked up? The trigger, and the data that caused it to trip, is long gone. What do you do now?
Eventually something has to be written (stored) somewhere, sometime. I guess I can see the need for summarizing data and only storing what StreamBase says is "important" but how would you know if everything was OK if the actual data driving everything was long gone?
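One cheap answer to the liability problem, sketched in Python (file name and record shape invented): append every trigger firing, together with the data that tripped it, to a durable log before acting on it.

    import json, time

    def fire_trigger(name, triggering_data, log_path="triggers.log"):
        # Append-only audit trail: the alert AND the data that tripped it.
        entry = {"ts": time.time(), "trigger": name, "data": triggering_data}
        with open(log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")  # survives after the stream is gone
        # ...then raise the alert / make the trade

    fire_trigger("price_spike", {"symbol": "IBM", "price": 142.7})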
Re:Seems kinda silly to me. (Score:1)
It depends on your application; unless you're running a black box, the best course of action would be to relay the message to the driver of the Humvee.
It's also real handy when you get asked to produce the data in court.
Re:Seems kinda silly to me. (Score:2)
Here's my logic, if this system can ha
This isn't streaming, this is message queuing... (Score:5, Informative)
I'm sure this is a great product, but both the submitter and the writer of the story seem to not grok what makes it great.
My RTOS will do more than 1400 messages/sec (Score:2, Interesting)
Yet, then what is LabVIEW? We've been processing live real-time data streams for years.
I still don't get the scope of it. It seems on one hand to be a lot of the same. This idea that they need this type of software to process data from remote sensors doesn't click. I process data from remote sensors in real time all the time (no pun intended). There is no need to store it in a DBMS and then query it in order for the data to be useful.
Who cares whether it runs on a $1500 PC ? (Score:3, Interesting)
Are there databases of porn? (Score:1)
Combining this with an RDBMS (Score:2, Informative)
Other posts are correct that what is talked about here is a message queuing mechanism to some degree. What I had designed and built was what we called an event server.
Basically, how it worked was that you sent in whatever SQL statement you wanted registered; you got the initial data set back, and then any changes to that result set were pushed to you as they happened.
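The registration pattern being described might look roughly like this (Python; the class and its API are invented for illustration): subscribers get the initial rows once, then every later change is pushed to them.

    import queue

    class EventServer:
        """Register a query once: initial data set first, then deltas."""
        def __init__(self, initial_rows):
            self.rows = list(initial_rows)
            self.subscribers = []

        def register(self):
            q = queue.Queue()
            for row in self.rows:          # the initial data set
                q.put(("initial", row))
            self.subscribers.append(q)
            return q

        def update(self, row):             # every subsequent change is pushed
            self.rows.append(row)
            for q in self.subscribers:
                q.put(("change", row))

    server = EventServer([("IBM", 97.5)])
    feed = server.register()
    server.update(("IBM", 98.1))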
This is old news (Score:3, Insightful)
The CommandBehavior.SequentialAccess descendant of the SelectCommand Class in C# can be assigned in a way that allows binary objects, or otherwise
I do this today to provide a distribution network for doctors who need access from several places to a pool of active patient data. This is a data volume of several terabytes per location, so I assure you that we are discussing the same scale here as the article.
Consequently, the TPC benchmarks show 3,210,540 tpmC as the current posted record for AIX on a Big Blue machine, so their numbers are skewed if not wrong. Most processes, including those using binaries, can be proceduralized at the back end anyway, making call -> server -> stored_procedure -> return() the flow, with all data living inside of RAM, and sorts happening in "real time", that is, from a pinned table into another location in memory at the server layer, returning into a dataset that is kept in RAM on the client.
I don't really see anything revolutionary about all this; correct me if I'm missing something?
-chitlenz
How do you like them apples? (Score:1)
no storage = no problem (Score:3, Insightful)
This is an old concept. (Score:3, Interesting)
whats the value of speed (Score:1)
Re:A Poem for Michael and Roland (Score:1, Funny)
Then comes marriage,
Then comes a spam-advertisement campaign earning thousands a month at the expense of companies which Roland rips off.
Shit, that didn't rhyme...
Re:coming soon to a terminal near you: steaming vd (Score:1)
Re:coming soon to a terminal near you: steaming vd (Score:1)
Re:Storage (Score:2)
That means that the amount of memory needed is very much dependent on the type of query written. If you are looking for army units that don't have enough gas to complete the objective or are off course, hopefully the number of matching "records" is small.
Speculation but seems likely.
Re:Storage (Score:1)
Re:Storage (Score:2)
It's all about fast pattern matching in expert systems, but it's easily applied to other fields.