An anonymous reader writes: John Haugeland has a post about parallel data processing. He starts off by talking about how Moore's Law still holds, but the shift from clock frequency to multiple cores has stifled the rate at which hardware allows software to scale. (Basically, Amdahl's Law.) The simplest approach to dealing with this is sharding, but that introduces its own difficulties. The more you shard a data set, the more work you need to do to separate out the data elements that can't interact; optimizing for 2n cores takes more than twice the work of optimizing for n cores. Haugeland says, 'If we want to continue writing compellingly complex applications at an ever-increasing scale we must come to terms with the new Moore’s law and build our software on top of solid infrastructure designed specifically for this new reality; sharding just won’t cut it.' His solution is to transfer some of the processing work to the database. 'This because the database is in a unique position to know which transactions may contend for the same data items, and how to schedule them with respect to one another for the best possible performance. The database can and should be smart.' He demonstrates how SpaceBase does this by simulating a 10,000-spaceship battle on different sets of hardware (code available here). Going from a dual-core system to a quad-core system at the same clock speed actually doubles performance without sharding.
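To make the "smart database" idea concrete, here is a minimal, hypothetical sketch of contention-aware scheduling: each transaction declares the data items it touches, and the scheduler greedily packs non-conflicting transactions into batches that could safely run in parallel. This is an illustration of the general technique, not SpaceBase's actual algorithm; all names here (`schedule_batches`, the `items` field) are invented for the example.

```python
def schedule_batches(transactions):
    """Greedily pack transactions into batches whose item sets are disjoint.

    Each transaction is a dict with an "items" set naming the data items it
    reads or writes. Transactions in the same batch touch no common items,
    so a runtime could execute each batch's members concurrently.
    """
    batches = []  # list of (items touched so far, transactions in batch)
    for txn in transactions:
        items = txn["items"]
        for touched, batch in batches:
            if touched.isdisjoint(items):
                # No contention with anything already in this batch.
                touched |= items
                batch.append(txn)
                break
        else:
            # Conflicts with every existing batch: start a new one.
            batches.append((set(items), [txn]))
    return [batch for _, batch in batches]

if __name__ == "__main__":
    txns = [
        {"id": 1, "items": {"ship_a", "ship_b"}},
        {"id": 2, "items": {"ship_c"}},           # independent of txn 1
        {"id": 3, "items": {"ship_b", "ship_d"}}, # contends with txn 1
    ]
    for i, batch in enumerate(schedule_batches(txns)):
        print(f"batch {i}: {[t['id'] for t in batch]}")
```

The point of pushing this into the database is that only the database sees every transaction's read/write set, so it can build these conflict-free batches globally instead of forcing the application to pre-partition (shard) the data.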