A New Approach To Database-Aided Data Processing
An anonymous reader writes "The Parallel Universe blog has a post about parallel data processing. They start off by talking about how Moore's Law still holds, but the shift from clock frequency to multiple cores has stifled the rate at which hardware allows software to scale. (Basically, Amdahl's Law.) The simplest approach to dealing with this is sharding, but that introduces its own difficulties. The more you shard a data set, the more work you need to do to separate out the data elements that can't interact. Optimizing for 2n cores takes more than twice the work of optimizing for n cores. The article says, 'If we want to continue writing compellingly complex applications at an ever-increasing scale we must come to terms with the new Moore's law and build our software on top of solid infrastructure designed specifically for this new reality; sharding just won't cut it.' Their solution is to transfer some of the processing work to the database. 'This is because the database is in a unique position to know which transactions may contend for the same data items, and how to schedule them with respect to one another for the best possible performance. The database can and should be smart.' They demonstrate how SpaceBase does this by simulating a 10,000-spaceship battle on different sets of hardware (code available here). Going from a dual-core system to a quad-core system at the same clock speed actually doubles performance without sharding."
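For the flavor of that claim, here is a toy Java sketch (my own illustration, not SpaceBase's actual scheduler) of contention-aware scheduling: each transaction declares the data items it touches, transactions with disjoint item sets run in parallel, and overlapping ones serialize against each other.

    import java.util.*;
    import java.util.concurrent.*;
    import java.util.concurrent.locks.ReentrantLock;

    // Toy contention-aware scheduler (illustration only, not SpaceBase's).
    public class ContentionScheduler {
        private final ExecutorService pool = Executors.newWorkStealingPool();
        private final ConcurrentMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

        public Future<?> submit(Collection<String> keys, Runnable txn) {
            List<String> sorted = new ArrayList<>(keys);
            Collections.sort(sorted);               // one global lock order prevents deadlock
            return pool.submit(() -> {
                Deque<ReentrantLock> held = new ArrayDeque<>();
                try {
                    for (String k : sorted) {       // take locks only for declared items
                        ReentrantLock l = locks.computeIfAbsent(k, x -> new ReentrantLock());
                        l.lock();
                        held.push(l);
                    }
                    txn.run();                      // conflicting transactions wait above
                } finally {
                    while (!held.isEmpty()) held.pop().unlock();
                }
            });
        }
    }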
Yet another slashvertising ... (Score:3)
This submission is yet another example of an advertisement disguised as a Slashdot article
This "SpaceBase" thing is but a database product
This "parallel processing a-la database" thing is but part of an advertising campaign being pushed by the company "parallel universe" to advertise their "SpaceBase" database package
That's all
Re: (Score:2)
Yeah, what a piece of shard.
Ludicrous (Score:3)
This is ludicrous. Paraphrasing: "We do databases, so we'll say that the solution to scaling parallel software resides in databases".
The applications for parallel processing are many and diverse. Databases are relevant to only some of them.
Shared RAM (Score:5, Interesting)
Re: (Score:2)
Re: (Score:2)
mmm.... Embarrassingly Parallel...
When you come down to it, it's amazing how many of the problems that actually require large amounts of processor time _are_ embarrassingly parallel. Presumably because the more calculations you have to perform, the more likely those calculations are to be partitionable into entirely independent data sets.
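A minimal Java sketch of what that looks like in practice: each element's result depends only on that element, so the runtime can carve the range into independent chunks, one per core, with no coordination at all.

    import java.util.stream.IntStream;

    public class EmbarrassinglyParallel {
        public static void main(String[] args) {
            // No element depends on any other, so the common fork/join
            // pool splits the range across however many cores exist.
            double total = IntStream.range(0, 50_000_000)
                    .parallel()
                    .mapToDouble(Math::sqrt)
                    .sum();
            System.out.println(total);
        }
    }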
Re: (Score:1)
I agree. Databases come with a lot of overhead, but in some (not all) situations, the machinery that overhead buys can speed things up overall. I find it hard to believe that databases would speed up something as simple as what they are suggesting. Something more complex? Perhaps, but I'd have to see what they propose.
On a separate note: Did I miss something? They are talking about coding this up in databases, yet the code they give is in Java. It would be nice to see code in both Java and SQL... and which machines, which
Scale up vs. scale out (Score:2)
By moving processing to the database, you're implicitly changing from scale-out (parallelism) to scale-up, aren't you? Unless you have a cluster with really, really fast row-level lock processing, the solution for a faster DB is usually a faster CPU and more memory for transaction buffers, not more computers. Larger buffers put additional overhead on memory-to-memory transfer (as well as more lock traffic and the delays that introduces) on a scale-out basis for databases, so the tendency is to scale up
Re: (Score:1)
Re: (Score:3)
On a separate note: Did I miss something? They are talking about coding this up in databases, yet the code they give is in Java. It would be nice to see code in both Java and SQL
Their database is a NoSQL database that uses queries implemented as Java objects; it doesn't have a query language other than those objects.
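A hypothetical sketch of that "query as an object" shape in Java; the interface and names here are invented for illustration and are not SpaceBase's real API.

    // Hypothetical "query as object" pattern; invented names, not SpaceBase's API.
    record Ship(long id, double x, double y) {}

    interface Query<T> {
        boolean matches(T item);   // predicate the store evaluates near the data
        void onMatch(T item);      // callback invoked for each hit
    }

    class NearbyShips implements Query<Ship> {
        private final double cx, cy, radius;
        NearbyShips(double cx, double cy, double radius) {
            this.cx = cx; this.cy = cy; this.radius = radius;
        }
        public boolean matches(Ship s) {
            double dx = s.x() - cx, dy = s.y() - cy;
            return dx * dx + dy * dy <= radius * radius;   // inside the circle
        }
        public void onMatch(Ship s) {
            System.out.println("ship " + s.id() + " is in range");
        }
    }

The point of the pattern is that the predicate and the callback ship to where the data lives, instead of the rows shipping to the client.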
Re: (Score:1)
If all you make is a hammer, everything better be a nail.
Re: (Score:2)
The hammer they wrote for that particular nail is fine, but the claims in the story are plain silly. Particularly this bit (my emphasis):
"but the shift from clock frequency to multiple cores has stifled the rate at which hardware allows software to scale. (Basically, Amdahl's Law.) The simplest approach to dealing with this is sharding"
Re: (Score:3)
The problem is that it's hard to optimize parallelization for all useful factors/dimensions. Generally, optimizing data layout for one grouping de-optimizes it for another.
Replication may improve reading by copying the data and re-grouping each copy by a different dimension (often on different servers), but this makes writing more complex and slow, because the replication and reconstitution of the copies for each dimension becomes a bottleneck.
The real problem is that we live in a 3-D universe. If we move to a 12-D universe, t
MS: Bad ideas are GOOD! (Score:2)
"Under Gates, MS used to steal the best ideas; now they steal the bad ones."
Inside Microsoft, that is considered an improvement... in making people suffer. Still, Microsoft is forced to release a bad operating system version only every other release; MS doesn't have complete control yet. In the future, all releases will have new problems.
Can someone explain... (Score:3)
What's the difference between threading an app and sharding it? I'm kind of leaning toward writing this off as a bunch of theoretical BS, and not the kind that makes sense, either. Database servers are the highest-load servers on most networks; distributing data processing to them sounds idiotic at best.
Re: (Score:2)
What's the difference between threading an app and sharding it? I'm kind of leaning toward writing this off as a bunch of theoretical BS, and not the kind that makes sense, either. Database servers are the highest-load servers on most networks; distributing data processing to them sounds idiotic at best.
Different scales. The only way I've heard sharding used is to distribute the load between many servers, not like a cluster but where each server has its own dedicated area. For example, in an MMORPG you can divide the game world and pass people from shard to shard as they cross it. Or you can split a data set by rows depending on which rows "belong" together, a bit like NUMA for data: local data interacts fastest with local data, less quickly with remote data, but it all acts seamlessly as one big
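A bare-bones sketch of that game-world style of sharding, assuming a rectangular world cut into fixed-width vertical slices (all names here are illustrative):

    // Illustrative spatial sharding: route each entity to a shard by region.
    public class WorldSharder {
        private final int shardCount;
        private final double worldWidth;

        public WorldSharder(int shardCount, double worldWidth) {
            this.shardCount = shardCount;
            this.worldWidth = worldWidth;
        }

        // Entities in the same slice land on the same shard, so most
        // nearby interactions stay local; crossing a slice boundary is
        // the expensive hand-off case described above.
        public int shardFor(double x) {
            int shard = (int) (x / (worldWidth / shardCount));
            return Math.min(Math.max(shard, 0), shardCount - 1);
        }
    }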
Re: (Score:2)
That makes sense, so like a thread = entire server kind of scope from my example. Is it really appropriate to call it a database, or is it a smart distributed cache? In addition to local data interacting faster with local data, memory is even faster; I'd imagine that would make it quite a bit more complicated to design/implement, though.
Re: (Score:3)
Sharding
is the secret ingredient in the Web scale sauce!
I have a solution (Score:5, Funny)
Re:I have a solution (Score:5, Interesting)
This is not a case of "let's do processing at the database". This isn't "holy crap SQL has functions!" or "try to use set-based queries that return the data you want rather than getting a dozen record sets and looping through them with 'until (RecordSet.eof())'". This is what you do when you've done all that and you still have performance problems because of data size and complexity of queries.
It's a case of needing to maintain data consistency in processing when you have 10,000 concurrent users all changing data, but you want to process something very complex with your real-time data set. Think of things like geocoding, in real time, all the cell phones attached to your cellular network, then running tower load-balancing applications that can be made aware of the fact that the data has changed as it's changing and take that into account. A tower could see that a high-data user on an adjacent tower is approaching and could begin preparing for that. The tens of thousands of spaceships is just a simple example. Let's say you want to resurface the highway system for Los Angeles, and you want to use real-time data on the number of cars on the road at different times to model how traffic patterns might change when you close lanes, so you can determine how to close the lanes and test the best method for re-routing traffic.
The key idea here is that each spaceship or cell phone user or automobile can interact with each other based on their data (in this case, proximity data). How can we write applications that might need to signal 20,000 other processes that their data just changed? RDBMSs are already incredibly good at dealing with data consistency and concurrency, and for large data sets that can interact arbitrarily with the rest of the data, sharding doesn't work.
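The standard trick for making those proximity interactions tractable is spatial bucketing; here's a toy Java sketch of the general technique (not SpaceBase's implementation), which assumes the query radius never exceeds the cell size:

    import java.util.*;

    // Toy spatial hash grid: "who is near me" touches only the 3x3
    // block of cells around the point, not all N entities.
    public class SpatialGrid {
        private final double cellSize;   // must be >= the largest query radius
        private final Map<Long, List<double[]>> cells = new HashMap<>();

        public SpatialGrid(double cellSize) { this.cellSize = cellSize; }

        private long key(long cx, long cy) { return (cx << 32) ^ (cy & 0xffffffffL); }

        public void insert(double x, double y) {
            long cx = (long) Math.floor(x / cellSize);
            long cy = (long) Math.floor(y / cellSize);
            cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(new double[]{x, y});
        }

        public List<double[]> near(double x, double y, double radius) {
            List<double[]> out = new ArrayList<>();
            long cx = (long) Math.floor(x / cellSize);
            long cy = (long) Math.floor(y / cellSize);
            for (long i = cx - 1; i <= cx + 1; i++)        // scan the 3x3 neighborhood
                for (long j = cy - 1; j <= cy + 1; j++)
                    for (double[] p : cells.getOrDefault(key(i, j), List.of()))
                        if ((p[0] - x) * (p[0] - x) + (p[1] - y) * (p[1] - y) <= radius * radius)
                            out.add(p);
            return out;
        }
    }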
Now let's say you want to do something really difficult, like modelling the human body at the cellular level. Each cell is its own process, but each cell can interact with any number of other cells through signalling mechanisms. This chemical signalling would have to be translated into data signalling to the application processes, and it would all need to be kept consistent to maintain the reality of the simulation. Now give the simulation cancer. Now test an experimental treatment. Now do it 500,000 times each for all 10,000 types of cancer and each of the 1,000s of possible cures, and speed up the timeline to go as quickly as possible. You can have entire planetary populations of simulated humans with every disease ever known, and you can try every possible treatment simultaneously. Trillions of simulated humans dying from failed treatments, advancing your knowledge in the real world by hundreds of thousands of years in a fraction of the time. Now do the same with astronomical bodies, or subatomic particles.
We use simulations now to model things that we understand but can rarely observe, though rarely do we do so as quickly as they occur in the natural world. What will happen when we can model anything and everything... instantly... simultaneously?
Re: (Score:2)
It seems pretty obvious that you should use some type of indexing in the database to select items rather than do some cool O(n^2) operations when you have billions of items.
Also, webscale [mongodb-is-web-scale.com]
Re: (Score:1)
What keeps everything from being in the same place (Score:2)
Space is what keeps everything from being in the same place. If you can partition your problem spatially, it gets easier. You have to be able to handle interaction across boundaries, though. This is OK as long as you don't have interaction across multiple boundaries. Grid systems have trouble at corners where four grid squares meet. (There's an argument for using hexes, because you never have more than three hexes meeting at a point.)
Hard cases include fast moving objects, big objects, and groups of c
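On the hex point, a minimal sketch using axial coordinates, where every hex has exactly six neighbors and no more than three hexes meet at any corner:

    public class HexGrid {
        // Axial-coordinate neighbor offsets: each hex has exactly six
        // neighbors, and no corner is shared by more than three hexes.
        private static final int[][] DIRS = {
            {1, 0}, {1, -1}, {0, -1}, {-1, 0}, {-1, 1}, {0, 1}
        };

        public static int[][] neighbors(int q, int r) {
            int[][] out = new int[6][2];
            for (int i = 0; i < 6; i++) {
                out[i][0] = q + DIRS[i][0];
                out[i][1] = r + DIRS[i][1];
            }
            return out;
        }
    }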
Like Hadoop? (Score:2, Interesting)
This doesn't sound at all ground-breaking. They've basically discovered what Hadoop already does -- if you shard your data, it makes sense to run the processing where the data is, to reduce communication overhead. And Hadoop didn't pioneer the idea, either. It's based on Google's MapReduce, and I'm pretty certain that the ideas go back much further than that.
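The shape of that idea, sketched with plain Java streams rather than actual Hadoop: the "map" work runs independently over each piece of the input, and the "reduce" merges per-key results, so the heavy phase can run wherever the data already sits.

    import java.util.Map;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class WordCount {
        public static void main(String[] args) {
            // Stand-in for partitioned input; in Hadoop each split would
            // be processed on the node that stores it.
            Stream<String> lines = Stream.of("a b a", "b c");
            Map<String, Long> counts = lines.parallel()
                    .flatMap(line -> Stream.of(line.split("\\s+")))                  // "map" phase
                    .collect(Collectors.groupingBy(w -> w, Collectors.counting()));  // "reduce" phase
            System.out.println(counts);   // e.g. {a=2, b=2, c=1}
        }
    }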
What does it matter? (Score:1)
All that counts is explosion.png.
Intelligent design (Score:3)
I'm a fan of databases, DSLs, query languages, and parallelizing compilers. I think there are huge opportunities to punt problems to all manner of optimizers that dynamically figure out which resources should be used to crunch a problem. In my view it is inevitable that this is the future.
The problem is that this only takes you so far. At some level you actually have to design a system that scales, and you still have to get into the weeds to do it unless there is some serious human-level AI involved.
There is a reason people pay big money for large single-system-image machines. Not everyone has the luxury of Google's and Facebook's problems.
What do cores have to do with it? (Score:1)
Most databases, especially "large" ones, wind up being I/O-bound. A large database is defined, somewhat arbitrarily, as one 3x or more larger than the amount of RAM available to cache it.
Sharding is a way of horizontally or vertically partitioning a database. By splitting the database, you can store the pieces on different mass storage systems. In so doing you achieve parallel I/O, which directly addresses the I/O bottleneck.
All this ignores the effects of RAID, drive/controller cache
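A minimal sketch of that kind of partitioning, hashing each key to one of N partitions so each partition's I/O can proceed on its own device (illustrative code, not any particular product):

    public class HashPartitioner {
        private final int partitions;   // e.g. one per independent disk/controller

        public HashPartitioner(int partitions) { this.partitions = partitions; }

        // Spreading rows by key hash lets each partition's reads and
        // writes proceed in parallel on separate storage.
        public int partitionFor(Object key) {
            return Math.floorMod(key.hashCode(), partitions);
        }
    }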
Re: (Score:3)
The other underappreciated benefit of sharding is that it brings more caching RAM to bear on the problem. On traditional hardware, and this is even more true of cloud setups like Amazon's EC2, the maximum amount of memory you can configure in an instance isn't that high. This number isn't going up as fast anymore, either. You can get 256GB of RAM in a machine, but from the perspective of the speed available to any one core it will not be anywhere close to 32x as fast as 8GB.
Adding another shard doubles the amount of RAM fo
Re: (Score:1)
Heresy! (Score:3)