MemSQL Makers Say They've Created the Fastest Database On the Planet
mikejuk writes "Two former Facebook developers have created a new database that they say is the world's fastest, and it is MySQL-compatible. According to Eric Frenkiel and Nikita Shamgunov, MemSQL, the database they have developed over the past year, is thirty times faster than conventional disk-based databases. MemSQL has put together a video showing MySQL versus MemSQL carrying out a sequence of queries, in which MySQL performs at around 3,500 queries per second while MemSQL achieves around 80,000 queries per second. The documentation says that MemSQL writes back to disk/SSD as soon as the transaction is acknowledged in memory, and that a combination of write-ahead logging and snapshotting keeps your data secure. There is a free version, but there's no word yet on what a full version will cost." (See also this article at SlashBI.)
A nice approach perhaps... (Score:5, Interesting)
Re:A nice approach perhaps... (Score:5, Insightful)
Re: (Score:3)
Besides, the "caching" or equivalent work is not the most difficult part of a DBMS, by far. What about the algorithms to "compile" queries in order to use indexes and perform the JOINs opti…
Re: (Score:3)
All of computer science is just an exercise in caching, don't you know.
Re: (Score:3)
You can get more than 10M IOPS on certain RAM-based SSDs; they're just mighty expensive.
Ya Don't Say! (Score:5, Insightful)
Really? Accessing RAM is faster than accessing a disk? What a novel discovery!
It seems to me that MySQL can also be run in memory. Apparently that's how the clustered database works (or used to work). I've never tried it, but let's see some benchmarks between MemSQL and an entirely memory-based MySQL.
Re: (Score:2)
I was going to say: how does this perform on large queries in a large database?
Re: (Score:3)
Didn't you get the memo? There are no large databases anymore, all database servers are supposed to have more RAM than the size of their database.
*** BARF!!! ***
Re: (Score:3)
Given how trivial and relatively cheap it is to put 192GB+ RAM into a server these days, there's a lot of truth in that statement, whether you like it or not.
Re: (Score:3)
I find it funny how easy it is to order an AMD system with 256GB of RAM (or even 512GB, just much more expensive), yet the Intel ones all seem to max out at 192GB, or a really, really expensive 384GB. I know it has to do with the memory controllers, but our loads are very, very memory-dependent.
Re: (Score:3)
I find it funny how easy it is to order an AMD system with 256GB of RAM (or even 512GB, just much more expensive), yet the Intel ones all seem to max out at 192GB, or a really, really expensive 384GB. I know it has to do with the memory controllers, but our loads are very, very memory-dependent.
The Dell PE820 (a 4-socket Intel server) supports up to 1.5TB of RAM. With two CPUs, though, it's only 768GB...
Re: (Score:3)
The IBM x3950 X5 can do 3TB of RAM with Intel processors.
Re:Ya Don't Say! (Score:4, Insightful)
You can already put a TB of RAM into a server if you want. If you really need to have that amount of data with next to zero latency, then the cost (which is still relatively low) is unlikely to be much of a stumbling block.
It clearly will shock you to learn that most databases are well under a couple of hundred GB in size.
Re: (Score:3)
Our database is 300TB. So.... Yeah.
Re: (Score:3)
It seems to me that MySQL can also be run in memory. Apparently that's how the clustered database works (or used to work).
Absolutely correct. NDB Cluster. It's quite fast, even on older hardware, provided you have enough RAM to hold your database.
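For anyone who hasn't seen it, this is what using it looks like; a minimal sketch, assuming a running MySQL Cluster with NDB data nodes (the table and columns are invented):

    CREATE TABLE accounts (
        id      INT NOT NULL PRIMARY KEY,
        balance DECIMAL(12,2)
    ) ENGINE=NDBCLUSTER;  -- rows live in the data nodes' RAM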
Re: (Score:3)
No foreign keys with NDB
Also correct. Your point? Other than that specific tools were made for specific jobs.
Re:Ya Don't Say! (Score:4, Insightful)
That and memcached (I think that's the name).
This comparison is far from fair... Is it ACID? Or does it eventually sync up? How does it compare with other memory-based DBs?
Comparing it with a slow relational DB will not give you any kind of credibility.
Re:Ya Don't Say! (Score:5, Insightful)
TFS states that transactions are written to disk after being "acknowledged" in memory.
I assume that means transactions are written to disk only after the database reports back a successful commit.
So it fails to meet the D of ACID compliance.
Re: (Score:2)
Isn't the implication that, in this case, there's a lot less time between the transaction getting made and that data being committed to non-volatile memory?
Re:Ya Don't Say! (Score:5, Insightful)
I don't think that's something which can be changed, except by changing the hardware. The starting point is this: when a COMMIT is made, all changes have to be written to the write-ahead log before a success response can be returned to the client. The WAL is written sequentially, so if you're using ordinary disks and are sensible you give it its own set of spindles (RAID1, say). That means that between each write you have to wait for one disk rotation: you append to the log, you process the next transaction, then you have to wait for the disk to rotate to just after where you finished writing before you can write the next one. So with a 15k rpm disk you top out at roughly 15,000 transactions per minute (one per rotation), or about 250 per second, with this basic setup.
You can do things to make this faster. You can write several transactions at once, and you can put slight delays into transaction commits to wait for others to bundle them with (PostgreSQL, I believe, will do the first and can be configured to do the second). You can use battery-backed caches in your RAID system, which will have much the same effect (and leave you limited by disk bandwidth and cache size). You can use SSDs that don't need to seek.
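The PostgreSQL knobs I have in mind are these (names per its docs; the values here are invented, and commit_delay can hurt as easily as help):

    -- postgresql.conf (group commit):
    --   commit_delay    = 1000   -- usec to wait so concurrent commits share one log flush
    --   commit_siblings = 5      -- only wait if this many other transactions are open
    SHOW commit_delay;            -- verify the live value from any client session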
I can't see anything in TFA that MemSQL is supposed to be doing differently here, or anything it CAN do differently. From TFA: 'The key ideas are that SQL code is translated into C++, so avoiding the need to use a slow SQL interpreter, and that the data is kept in memory, with disk read/writes taking place in the background.' The first I'm not too sure I understand (presumably they're not turning it into C++ and then passing it through a C++ compiler...), but maybe we can blame the journalist for that. Or maybe they've just reinvented prepared statements. The second is what databases do anyway, except, of course, for the WAL and when you're reading data which isn't in memory. Perhaps what they're doing is flushing the WAL after the commit has returned to the client, which makes the database very much not ACID, and is also something that other databases can be configured to do if you don't care about your data.
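MySQL/InnoDB exposes exactly that trade-off as a setting, for comparison (semantics per the MySQL manual; changing it needs the SUPER privilege):

    SET GLOBAL innodb_flush_log_at_trx_commit = 1;  -- fsync the log at every commit: the ACID setting
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- write at commit, fsync ~once a second
    SET GLOBAL innodb_flush_log_at_trx_commit = 0;  -- write+fsync ~once a second: fastest, can lose ~1s on a crash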
Potentially what they could do, though, is to have designed all of their data structures, algorithms, locking and so on around the assumption that everything is in memory. There are big differences in the best query plan to use when data is in memory vs on disk, and traditional databases don't necessarily make the right choices. They try, but may for instance use table scans for queries which return a large proportion of the rows in a table, because sequential IO is faster, when they should be using indexes if the data is in memory. And B-trees, and the way data everywhere is split into pages, are things traditional DBs do because they work well even when most of your data is on disk. So maybe that's what they've done differently that other DBs haven't already been doing.
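As a concrete example of the planner point, MySQL at least lets you override a table-scan plan by hand when you know the table is resident in RAM (table and index names here are invented):

    SELECT customer_id, order_total
    FROM orders FORCE INDEX (idx_order_total)  -- the planner would likely prefer a full scan here
    WHERE order_total > 100;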
Re:Ya Don't Say! (Score:4, Interesting)
Not just that - you can get a FusionIO ramdisk device for really big databases and get performance that's somewhere between SSD and memory. Those are all battery backed and such, so no monkeying around with whether the ACID was done right or not.
Re:Ya Don't Say! (Score:5, Interesting)
It's a bit more complex. There are four main ways to do MySQL storage in RAM (which I know of because my current work project is a MySQL application).
First, the NDB Cluster system is there, which is what you've mentioned. That's basically just a MySQL frontend to a distributed, memory-based NoSQL database, though. Convenient, but not truly "MySQL".
The second is using the "Memory" storage engine, which stores an ordinary table entirely in RAM (a one-liner; see the sketch after this list). However, this is a surprisingly crappy option, because it uses table-level locks for writing, so parallel write performance is only marginally faster than disk.
The third is to store regular InnoDB tables on a ramdisk. This can be crazy fast, but it also means that if your server crashes or loses power, you're *fucked*
The fourth is to use Memcached, which isn't really a MySQL thing at all. You're basically just caching data in a memory-only NoSQL database, at the application level. This is actually what we ended up doing, because all the others are pretty crappy options - Cluster is the best one, but the hardware requirements are higher than we could justify spending given our performance requirements. Shoving memcached onto the web server (which has RAM to spare) and setting certain queries to cache their results there sped things up significantly, at minimal cost.
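For the record, option two above really is a one-liner; a sketch with an invented table (note the MEMORY engine doesn't support BLOB/TEXT columns):

    CREATE TABLE session_cache (
        session_id CHAR(32) NOT NULL PRIMARY KEY,
        payload    VARBINARY(1024)   -- no BLOB/TEXT in the MEMORY engine
    ) ENGINE=MEMORY;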
As far as I can tell from the summary (I refuse to read the articles for such a blatant slashvertisement), this "MemSQL" doesn't do anything you can't do by configuring MySQL properly, although they likely optimized some rarely-used modules to make them faster.
Re:Ya Don't Say! (Score:5, Informative)
The third is to store regular InnoDB tables on a ramdisk. This can be crazy fast, but it also means that if your server crashes or loses power, you're *fucked*
Not necessarily. There are battery-backed volatile RAM devices that can last for days, and also non-volatile RAM devices like F-RAM and MRAM.
Battery-backed volatile RAM can even be considered "cheap": if the bottleneck is in tables small enough to fit on these, or the amount of overall writes is so high that placing the InnoDB logs there makes sense, it can be cheaper than a RAID 10 or 50 of high-speed SAS drives.
The HyperDrive / ACARD drives, for example, are only $300 plus RAM. And if the worst happens, you can even dump the RAM to a CF card before the battery runs out.
Re: (Score:2)
I was referring to software-based ramdisks, not RAM-based SSDs. Although I suppose there's not much of a performance difference - the only difference is durability.
Re:Ya Don't Say! (Score:5, Interesting)
The biggest issue with RAM drives is their cost.
Yes and no. If you can fit the InnoDB write-ahead logs and a few of the worst bottleneck tables on, say, an 8 GB RAM drive, it's a bargain.
HyperDrive: $300
2 * 4GB 240-Pin DDR2-800 SDRAM ECC: $234
16 GB CF card for backup: $30
Total: $564
That's downright cheap compared to what a RAID 10 or 50 of SSDs or short-stroked 10k/15k rpm drives would cost.
If it solves a bottleneck, it could be a big money saver.
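And if anyone wants to try the "logs on the RAM drive" part, it's two lines of my.cnf (shown as comments; the mount point is made up):

    -- my.cnf, [mysqld] section:
    --   innodb_log_group_home_dir = /mnt/ramdrive  -- redo logs on the battery-backed device
    --   innodb_log_file_size      = 1G             -- sized to fit the device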
Re:Ya Don't Say! (Score:5, Funny)
MySQL is not webscale because it uses joins.
Re:Ya Don't Say! (Score:4, Informative)
For those who don't get the reference:
http://www.xtranormal.com/watch/6995033/mongo-db-is-web-scale
Re:Ya Don't Say! (Score:5, Funny)
I'm gonna call BS on this one. Why would a RAM disk need a fan?
okay...? (Score:5, Funny)
Re:okay...? (Score:5, Insightful)
MySQL is the last thing I think of, personally. It sucks as soon as you make it ACID compliant and start hitting it with thousands of concurrent requests. You're much better off with PostgreSQL.
Re:okay...? (Score:5, Funny)
But with MySQL you can get a wrong answer REAL FAST!!!
Re:okay...? (Score:5, Informative)
*woosh*
Re:okay...? (Score:4, Insightful)
MySQL is actually very fast under light loads and one-off queries, if you choose to leave it at the non-ACID-compliant default settings and similar, e.g. "innodb_flush_log_at_trx_commit".
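For anyone following along, checking what your install is actually doing is quick (the schema name is made up):

    SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
    -- and check your tables aren't silently MyISAM (no transactions at all):
    SELECT table_name, engine
    FROM information_schema.tables
    WHERE table_schema = 'mydb';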
That's probably the only reason why it got popular... There weren't any open source NoSQL DBs at the time, and MySQL seems fast when tested with a basic, shallow benchmark. Of course others like PostgreSQL completely leave it in the dust once there's some real load, or complex queries, or you WANT to be absolutely sure transactions were committed to disk before returning.
As a single point of evidence, I give you Zabbix... It supports the use of all the major databases (PostgreSQL, DB2, Oracle, SQLite, etc.) as backends, yet MySQL is recommended as it performs the fastest.
http://www.zabbix.com/documentation/1.8/manual/performance_tuning [zabbix.com]
Level-2 overflow! Resize analysis! Change the modulo! Ahhhh!
I've done the PICK-OS thing for a few years, and I'm not a big fan. I'm infinitely happier administering PostgreSQL DBs.
Besides, you don't have to go to something as exotic as PICK to get away from SQL. Try ages-old Berkeley DB (db4), or any of the newer NoSQL options.
Re: (Score:2)
That's probably the only reason why it got popular... There weren't any open source NoSQL DBs at the time
Zope? BDB? Both of these were available at the time MySQL became popular.
Re: (Score:3)
As a single point of evidence, I give you Zabbix... It supports the use of all the major databases (PostgreSQL, DB2, Oracle, SQLite, etc.) as backends, yet MySQL is recommended as it performs the fastest. http://www.zabbix.com/documentation/1.8/manual/performance_tuning [zabbix.com]
From the linked document:
rebuild MySQL or PostgreSQL from sources to get maximum performance
2003 just called. They want their Gentoo Ricers [funroll-loops.info] back.
Ahhhh, Pick! (Score:5, Interesting)
The most over-the-top DB God I know started in Pick-land (ca 1972?). Although he does (is forced to?) use SQL nowadays, he thinks in ways that do not come out of any SQL DBA handbook. As a result he gets DBMSs to do things that are ... unnatural.
He is currently doing some data-cubing stuff for us that I didn't think could be done with something less than a DOD budget. He says his touchstone is thinking in Pick and then 'translating' to SQL.
I still think that the 2 missing courses from any CS degree program are 1) how to debug, and 2) history of computing.
Re:Ahhhh, Pick! (Score:5, Insightful)
Practical software engineering is mostly about debugging. An actual course in debugging would imply that the Computer Science curriculum had something to do with practical software engineering, which we're all painfully aware it hasn't in the slightest.
Re: (Score:3)
As someone who still programs in Pick/D3 every day (a skill I picked up working for a company with a legacy product), as well as having worked in pretty much every SQL product that exists, I am both startled and amazed to see it mentioned on Slashdot. I think this is the first time I've ever seen anyone mention it!
And I am in agreement: Pick was something truly different which could have been as big as SQL. Multi-value, "NoSQL"-ish while still having a query engine, fast, little to no maintenance, loo…
Show me vs a real DB engine (Score:5, Interesting)
Show me benchmarks vs Oracle, PostgreSQL or SQLServer. Spare me the comparison with MySQL or some other toy.
vs DB2 (Score:2)
I would like to see the comparison against DB2. Midrange DB2 if you really want like-for-like, mainframe if you have guts :)
Re:Show me vs a real DB engine (Score:5, Informative)
Show me benchmarks vs Oracle, PostgreSQL or SQLServer. Spare me the comparison with MySQL or some other toy.
I think the reason the comparison to MySQL is appropriate is that this database is supposed to be MySQL compatible.
Re:Show me vs a real DB engine (Score:5, Informative)
Re: (Score:2)
but anyone can download and run Oracle & benchmark it and publish it. This is the internet, information wants to be free.
Re: (Score:2)
MySQL is not a toy (anymore). It's been very good for at least half a decade and has been ACID compliant if you have a half-way competent DBA. Also, MySQL is the fastest of the set you just mentioned in the most basic SELECT/INSERT/UPDATE benchmarks (although each of the rest do excel in solving some really specific problems).
Re: (Score:2, Funny)
PostgreSQL is definitely the best database software out there.
Re: (Score:2)
Just wish we could easily re-order columns with FKs. =)
Re: (Score:3)
I haven't yet had a need to re-order columns with FKs, despite having built, maintained and used hundreds of different tables in a variety of database products.
Is there any good reason to do so, besides a desire to make old database tables look slightly prettier?
Re: (Score:3)
MySQL is not a toy. Oracle is a bloated monster that only survives by locking in its customers. I know a lot of high-end customers that would ditch Oracle immediately if that didn't mean rewriting a lot of software.
Re:Show me vs a real DB engine (Score:4, Funny)
Ah, but it's an "enterprise-grade" toy.
Err... what? (Score:4, Interesting)
OK, so both the article and the video are extremely thin on details, the explanation for the massive performance is pretty much gibberish, and their argument for ACID compliance is bullshit.
Just leaves me with the question: what are they trying to get out of this BS?
Re:Err... what? (Score:5, Informative)
Just leaves me with the question: what are they trying to get out of this BS?
Your money; it's not a free piece of software.
Re: (Score:3)
Self-aggrandizement and money. When somebody claims they are better than everybody else, they are usually lying and they know it.
Re: (Score:2)
Quite true. Of course, this is something the "enterprise" DB vendors are desperate to hide, and there are still enough people that do not have enough of a clue about database theory. The problem is not SQL, though, but the relational database model in general.
Meh. (Score:5, Insightful)
Give me fast enough, robust, easy to administer, and standards compliant. Maybe a little less speed means you throw more hardware at a problem, but that hardly matters unless it inflates the overall cost and risk. A platform decision boils down to three things: (1) is it good enough; (2) is it economical; (3) if we decide later this doesn't work for us, are we totally screwed?
In any case, there's no meaningful way you can make a claim that a database management system is the fastest on the planet. All you have is benchmarks, and different benchmarks apply to different use-cases.
Pedant alert! (Score:3)
What you have there is (or may be) the fastest database management system.
I have the worlds fastest database. One table, one record, and one field (NULL).
Facebook engineers? Gah! (Score:4, Funny)
I wouldn't run my toaster on software engineered by someone from Facebook, let alone a database. I'd have to spend ten minutes searching for my toast, and it would show up the following week.
Re: (Score:3)
Oh but come on. Their engineers are super leet! To work at Facebook, you have to win a drunken speed-hacking contest just to be a PHP coder!
But not dislike the toast. (Score:2)
But not dislike the toast.
Faster than MUMPS? (Score:3)
Nothing to see here, move along, folks. (Score:2)
How do they write to disk faster? (Score:3)
They're durable and synchronously log all changes to disk, so what makes them faster? They do say this, from http://developers.memsql.com/docs/1b/durability.html [memsql.com]:
Reconfigure the server to use a faster disk. MemSQL exclusively relies on sequential (not random) disk writes, so using an SSD will dramatically improve durability write performance.
Are SSDs better at sequential writes? I thought their advantage was random reads, and that they weren't any faster at writes than HDDs. Also, the data would become hopelessly out of order by only doing sequential writes, unless they're periodically re-writing all the data in order, which would mean lots more I/O than a typical DB.
Re: (Score:2)
Are SSDs better at sequential writes? I thought their advantage was random reads, and that they weren't any faster at writes than HDDs. Also, the data would become hopelessly out of order by only doing sequential writes, unless they're periodically re-writing all the data in order, which would mean lots more I/O than a typical DB.
They say they rely on snapshots and logging. I'm assuming that it periodically writes a snapshot of RAM to disk, then logs transactions in the log for recovery. Hopefully it snapshots different portions of RAM at different times so there's not one huge snapshot being written to disk every time.
Though if I had a database where I needed 80,000 query/second performance, I'd probably want a cluster of these so if one machine goes down, the other machine can take over so I don't have to wait for the service to…
Re:How do they write to disk faster? (Score:5, Informative)
SSD is significantly faster than HDD at both sequential and random writes. Top 15K SAS drives write ~250MB/s sequential. Top SSD write 550MB/s sequential. Write random and it gets much worse for the SAS drive. Try to even find an enterprise HDD benchmark done in the last year. No one bothers because enterprise buys SSD if they care about performance.
Speed vs. speed (Score:5, Interesting)
Speed's fine, but what kind? Or more specifically, over what timeframe? High transaction rates are fine, but they don't do any good if you can only sustain them for a few seconds or minutes before the whole thing collapses. I want to know the transaction rate the thing can sustain over 24 hours of continuous operation. In the real world you have to be able to keep processing transactions continuously.
That long-time-period test also shows up another potential problem area: the disk bottleneck. In-memory is fine, but few serious databases are small enough to fit completely in memory. And even if yours will fit, you can't lose your database when you shut down to upgrade the software, so eventually the data has to be written to disk. And that becomes a bottleneck. If your system can't flush to disk at least as rapidly as you're handling transactions, your disk writes start to lag behind. Sooner or later that'll cause a collapse, as the buffers needed to hold data waiting to be written to disk compete for memory with the actual data. You can play algorithmic games to minimize the competition, but sooner or later you run up against the hard wall of disk throughput. And the higher your transaction rates are, the harder you're going to hit that wall.
Re:Speed vs. speed (Score:4, Insightful)
I can buy servers with over a terabyte of RAM, multiple power supplies, and 4 x 10G interfaces for FCoE.
What is a disk for again, other than to boot from?
The disk is something to hold your data when a backhoe cuts your datacenter power, and cuts the network connections that you use to replicate data to your remote site.... then your UPS runs out of battery after an hour of transactions have been applied to the database with no replication to the remote site.
Sometimes sh*t happens in ways you haven't planned for... when you have N degrees of redundancy, you'll get bit by the rare N+1 event. It's better to have your data stored somewhere that doesn't disappear after the power goes away (or the machine reboots).
(if you're using your FCoE network to connect to the SAN to store your data, you're still using disks but there's no reason to use a local disk to boot from)
Re: (Score:3)
The disk is something to hold your data when a backhoe cuts your datacenter power, and cuts the network connections that you use to replicate data to your remote site.... then your UPS runs out of battery after an hour of transactions have been applied to the database with no replication to the remote site.
We once had a "backhoe event" that cut the power cables between the point where the grid power and the UPS cables came together (our main UPS at the time was a 10MW diesel generator) and the point where they entered the datacenter building. There was only about 2 feet of cabling where they could have done this, but that's where someone put a jackhammer through. Aside from shutting us down in a great hurry, they also put themselves in the hospital, and at the same time blew the breakers on the grid substatio…
Re:Speed vs. speed (Score:5, Interesting)
A terabyte of RAM costs quite a lot of money, far more than a terabyte of hard drive does. And it's not as big as it sounds; I've dealt with bigger databases. Usually the ones that demand the highest performance are also the ones that eat the most space once you start taking indexes and such into account.
And multiple power supplies? Won't help you when the data center rack loses all power. I recall at least 2, maybe more, reports of total loss at data centers in the last 12 months, so it's not like it's that rare an event. That's not counting partial losses, or cases where someone simply fumble-fingered and powered down or rebooted the wrong server. And it certainly doesn't count maintenance outages when the server or the database software had to be restarted to upgrade software. Redundant power supplies won't help against that, and while it's no big deal normally it's a really big deal when it means losing 100% of the contents of the database when memory gets cleared. Sooner or later you need the data on persistent storage, disk or an equivalent. You can handwave that need over the short term, minutes to maybe hours, but when you start talking about maintaining the database for months to years it's a different story. And if you want to say you don't need that kind of up-time, well, the business people where I work would probably boot you out the door so hard you'd bounce twice for suggesting they could just live with losing all our data a couple of times a year. Having it happen even once would probably be the end of the company.
TimesTen Database (Score:2, Interesting)
So what is the difference between MemSQL and TimesTen [wikipedia.org]?
Other than the 16-year head start TimesTen has, the fact that Oracle now owns TimesTen, that it runs on both 32-bit and 64-bit Linux and Windows, that it can run in front of another database engine to give it a boost, and that it has customer installations up to the terabyte range.
Just another lame attempt to reinvent the wheel.
Filesystem anyone? (Score:2, Informative)
Remember the good old days, when XYZ-db wasn't always available (or even desirable)? We used to use files.
Yea, files. A novel concept; these days, mention ISAM to someone and they don't know what you're talking about!
If you really need speed, maybe a database isn't your best bet. Maybe, just maybe, you should consider structuring the data in a way that makes sense for your application using files.
Re: (Score:3, Interesting)
I work on a system like that right now at a really big company. Let me tell you something: it's shit. If you need concurrent access to the files/directories by several processes, you'll have a heap of issues. Consumers pick up files before they are completely written by the producers (now fixed by file renaming, but it required work). Some directories now hold 300k files, and any file operations are extremely slow; filesystems aren't designed for this (in the process of being fixed by splitting directories squid-s…
Looks like that old Prevayler "database" (Score:2)
"No more porridge". Right.
Is this thing at least ACID?
memSQL fully hubris acid trip compliant (Score:2)
MySQL: the world's most popular open source database
memSQL: the world's fastest database
PostgreSQL: the world's most advanced open source database
SQLite: the most widely deployed SQL database engine in the world
I just wish people would dispense with their childish marketing bullshit already.
The Devil Is In The Detail (Score:4, Informative)
I've had a love-hate relationship with MySQL for over ten years now, and have as much cause to hate it as anyone, but I have to point this out. Read the MemSQL docs carefully, and here's the killer: they only support single-query transactions, and only at isolation level READ COMMITTED.
Until those two facts change, it's hardly a fair comparison.
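Concretely, it means the everyday pattern below (standard MySQL syntax; the tables are invented) is out of scope, if I'm reading the docs right:

    START TRANSACTION;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    COMMIT;  -- two statements that must succeed or fail together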
Qualifications? (Score:3)
Shamgunov has excellent credentials in the database world, in spite of having worked at Microsoft on SQL Server for six years.
FTFY
Re:Looks good for testing (Score:5, Insightful)
As a long time SysAd/webmaster/developer, I'm certainly interested…
At the risk of sounding incredibly condescending....
If you were really a sysadmin who could benefit from that kind of speed improvement, you'd know that it's possible to achieve that level of performance with MySQL already, by either running it from memory or by using a fast hard drive array. The simplest/cheapest option to drastically improve MySQL performance is to throw a large amount of RAM at a system and point MySQL at the memory. MySQL can be configured to keep the database in active memory and sync to the disk on a regular basis, which is almost exactly the kind of behaviour described for MemSQL. For an exceptionally large database that can't be stored in system memory, I imagine the advantage that MemSQL is boasting would evaporate. There are other ways to go about it, such as running a fast disk array or a cluster, in order to get around the limitations of using RAM, but ultimately the prime determining factor for MySQL's speed is speed of access to the database file itself.
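A minimal sketch of that configuration, assuming these go in my.cnf on a reasonably recent MySQL (all sizes invented):

    -- my.cnf, [mysqld] section:
    --   innodb_buffer_pool_size        = 64G       -- keep the whole working set in RAM
    --   innodb_flush_log_at_trx_commit = 2         -- sync the log ~once a second, not per commit
    --   innodb_flush_method            = O_DIRECT  -- skip the OS page cache's double-buffering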
Re: (Score:3)
I'd love to see their tests when this DB needs to go into swap/pagefile. It's a double slowdown: it needs to write into the swap (disk I/O) and then sync the DB (disk I/O again).
I can't, for the life of me, understand where this will be better than the already available options.
Re: (Score:2)
I'd love to see their tests when this DB needs to go into swap/pagefile. It's a double slowdown: it needs to write into the swap (disk I/O) and then sync the DB (disk I/O again).
I can't, for the life of me, understand where this will be better than the already available options.
I think the point of an in-memory database is that you size your machine so it does *not* need to swap in normal use. Otherwise, as you said, you lose all of the speed; worse, because the operating system decides what to swap out, and may not make the most efficient choice. (Though they probably just mlock() the memory buffers into RAM and prevent any of the database RAM from being swapped out at all.)
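On the mlock() point: mysqld actually has a switch for exactly that (my.cnf sketch; needs root or the CAP_IPC_LOCK capability):

    -- my.cnf, [mysqld] section:
    --   memlock   -- lock mysqld's address space into RAM so it can never be swapped out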
But if the architect did expect the machine to swap at times, he probably wouldn't put the swap and…
Re: (Score:3)
I meant two disk accesses, one way or another. From what I read, they would never be simultaneous anyway.
Either way, this would be useful (actually IS; some solutions do this) in the Business Intelligence field. But the whole point of keeping everything in memory is moot when you have petabytes of information that you need to process during your ETL. What matters in this database is how well it behaves in a cluster and how it handles concurrency (ACID? Eventually synchronized?).
I doubt this is all that useful for common DB applications like websites and the like. Relational DBs have proven to be enough for everything purely web-related for a while now (e.g. YouTube uses MySQL shards, or used to); I doubt this is a game-changer at all.
Re: (Score:2)
I meant two disk accesses, one way or another. From what I read, they would never be simultaneous anyway.
Either way, this would be useful (actually IS; some solutions do this) in the Business Intelligence field. But the whole point of keeping everything in memory is moot when you have petabytes of information that you need to process during your ETL. What matters in this database is how well it behaves in a cluster and how it handles concurrency (ACID? Eventually synchronized?).
I doubt this is all that useful for common DB applications like websites and the like. Relational DBs have proven to be enough for everything purely web-related for a while now (e.g. YouTube uses MySQL shards, or used to); I doubt this is a game-changer at all.
Actually, I thought this would be less useful with large databases (like a large data warehouse), and more useful with webservers. If you have a busy website and your core database is measured in gigabytes and not terabytes, it's probably cheaper and easier to run it in-memory than to build out a distributed cluster of SQL nodes to handle the transaction load. $15K buys you a server with 16 cores of CPU and 384GB of RAM.
Re: (Score:2)
I see your point, but I disagree. I consider it better to have the webserver running slowly than to have it crash because it ran out of memory. That might be a deliberate choice, but you could just go here ( http://unixfoo.blogspot.pt/2007/11/linux-performance-tuning.html [blogspot.pt] ).
Re: (Score:3)
Just say no to swap. It's pointless, except as a crutch for broken software. And it's dangerous on a server. If an application wants disk-backed VM, it can use mmap.
Swap isn't just a crutch for broken software (though it can be); sufficient RAM is not always available. In a perfect world, all servers would have more RAM than their applications ever need, more cores than the processes can take advantage of, and all disks would be RAID-10 arrays of SSDs.
But back in the real world where most of us have to live, swap does come into use at times to let a server accommodate loads that it otherwise couldn't handle due to the memory footprint of the software it's running. Swap…
Re: (Score:2, Insightful)
If you were really a sysadmin who could benefit from that kind of speed improvement, you'd know that it's possible to achieve that level of performance with MySQL already, by either running it from memory or by using a fast hard drive array.
The guys that wrote it are former Facebook employees. So I have to assume they know how to get the best performance out of MySQL, and that it doesn't suit their needs for whatever reason.
The article doesn't really go into much detail about why, but my point is really ab…
Re: (Score:2)
Getting a speed boost by setting a time bomb that leaves you screwed isn't really a smart decision.
Re: (Score:3)
History
The ARPANET, the predecessor of the Internet, had no distributed host name database. Each network node maintained its own map of the network nodes as needed and assigned them names that were memorable to the users of the system. There was no method for ensuring that all references to a given node in a network were using the same name, nor was there a way to read the hosts file of another computer to automatically obtain a copy.
The small size of the ARPANET kept the administrative overhead small to maintain an accurate hosts file. Network nodes typically had one address and could have many names. As local area TCP/IP computer networks gained popularity, however, the maintenance of hosts files became a larger burden on system administrators as networks and network nodes were being added to the system with increasing frequency.
http://en.wikipedia.org/wiki/Hosts_(file) [wikipedia.org]
Top coder (Score:5, Interesting)
They did have an ad to lure in "Top Coders" at http://developers.memsql.com/blog/ [memsql.com]
Apart from the ad itself, what they said about Top Coders was interesting, except the part about top coders memorizing whole books full of algorithms. Top coders do not memorize anything; they do not get to be top coders by memorizing.
Instead, top coders have the instinct to _know_ which algorithm to adapt and apply, and top coders know where (and how) to look for the algorithm (either in their own archive, in books, in old magazines, or in some strange corner of the Web).
Re: (Score:2)
Quite true. That is also what competent computer scientists do: Learn the rough border conditions of a problem and its solutions and look up details when needed. Be able to construct something reasonable when no solution can be found in the literature. Committing details to memory is only for those weak of mind. Of which there are many.
Re:Top coder (Score:5, Interesting)
All of the best developers I've met had phenomenal memories. I think both a natural reasoning ability and great memory are assets. If you are missing one, you aren't going to be as strong as someone who has both.
Re:Top coder (Score:5, Interesting)
I have met quite a few people who could fake being good coders using really good memory. They were in fact at best mediocre coders, and sometimes really bad ones. While these people can code solutions to simpler things really fast, they usually do not notice when they are out of their depth and would need to look things up or think about them for a while. Then they screw up royally. That most people mistake them for really good coders (and no, memory does not help reasoning ability, it hinders it) makes things worse. One of the hallmarks of a great coder is a very keen sense for when he/she needs to be careful because something is more difficult than it appears to be. Those with really good memories regularly fail that test. Bad memory is an asset here.
Re: (Score:3)
I definitely disagree. People with great memories can bring context to a problem that a lousy memory just can't. If you can't hold all 20 factors to consider in your mind at once, meandering from one to another will leave you with a solution that effectively considers only a couple.
I have reasonably solid anecdotal evidence on this. I've seen top coders with great memories produce software that dominates their industry in three different industries now, and some of that software is now in the mid third d…
Re:Top coder (Score:5, Insightful)
Juggling 20 factors in your brain (short term memory) is not the same as having a good memory (long term memory).
In fact they literally use different parts of the brain.
Re:Top coder (Score:5, Insightful)
Except that very little of programming these days is about algorithms.
Rather, it is about elegantly solving business problems and knowing one's way around huge frameworks.
Being a "top coder" is in itself a very good thing, of course, but there are very few companies that actually work with technical details like implementing a better hash algorithm and so forth.
Rather, in most developer jobs, it is very valuable to:
* be good at understanding, handling, and especially changing large systems.
* be good at producing solutions that reasonably balance cost and customer demands against simplicity, performance, structure and other technical values.
* be able to foresee the uses of the solutions over different time frames, and through this make systems cheaper and easier to evolve. Sometimes a super-quick and butt-ugly solution is a really good thing to get the customer going while it figures out what it really wants, as long as all parties are aware of the situation and know that a complete rewrite will have to be paid for next.
* not act like a stubborn child who just keeps pushing when one's pet solution or technology gets scrapped or rejected, or when the rest of the company thinks it is too risky to invest time in going down that road.
* be professional and keep working even though the current thing is really boring.