Programming

Cassandra Rewritten In C++, Ten Times Faster

urdak writes: At the Cassandra Summit opening today, Avi Kivity and Dor Laor (who had previously written KVM and OSv) announced ScyllaDB, an open-source C++ rewrite of Cassandra, the popular NoSQL database. ScyllaDB claims to achieve a whopping 10 times more throughput per node than the original Java code, with sub-millisecond 99th-percentile latency. They even measured 1 million transactions per second on a single node. The performance of the new code is attributed to its use of Seastar, a C++ framework for writing complex asynchronous applications with optimal performance on modern hardware.
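
For readers unfamiliar with the futures-and-continuations style Seastar is built around, here is a toy sketch in plain C++. It is not Seastar's actual API (every name below is invented for illustration); it only shows the shape of chaining asynchronous steps without threads or locks.

    // Toy illustration only -- NOT the Seastar API; all names here are invented.
    // A real framework schedules continuations on a per-core reactor; this one resolves inline.
    #include <cstddef>
    #include <iostream>
    #include <string>

    template <typename T>
    struct toy_future {
        T value;  // already-resolved value; a real future would defer this
        template <typename F>
        auto then(F f) -> toy_future<decltype(f(value))> {
            return { f(value) };  // run the continuation immediately
        }
    };

    toy_future<std::string> read_request() { return { std::string("GET key=42") }; }

    int main() {
        read_request()
            .then([](const std::string& req) { return req.size(); })                    // "parse"
            .then([](std::size_t n) { std::cout << n << " bytes handled\n"; return 0; });
    }
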
This discussion has been archived. No new comments can be posted.


  • First post (Score:5, Funny)

    by Anonymous Coward on Tuesday September 22, 2015 @06:07PM (#50578821)

    Because it was written in Seastar

  • by Anonymous Coward on Tuesday September 22, 2015 @06:07PM (#50578825)

    Seriously. WTF?

  • Lies! (Score:5, Funny)

    by Anonymous Coward on Tuesday September 22, 2015 @06:09PM (#50578841)

    That is a lie!

    I think they mean the C++ port is 10X SLOWER than Java.

    Java is faster than C/C++, everyone knows that!

    Maybe if they ran the code on a java interpreter, written in java, running on a java interpreter...

    More recursive use of java == more speed!

    Why slow a system down with all that C++ bloatware?

    • by s.petry ( 762400 ) on Tuesday September 22, 2015 @06:59PM (#50579227)

      Oracle has just launched a new series of patent infringement lawsuits. Oracle's allegations include reverse-engineering Java to improve the speed of applications like Cassandra, and benchmarking Java without permission. They are seeking an immediate cease-and-desist order, in addition to immediate financial relief for sustaining PPS (more commonly known as Poopy Pants Syndrome).

  • by Anonymous Coward on Tuesday September 22, 2015 @06:09PM (#50578845)

    Almost as fast as native! Maybe even faster for some tasks!

    sure

    • Sans sarcasm I would've also accepted: "duh"

    • by fragMasterFlash ( 989911 ) on Tuesday September 22, 2015 @08:32PM (#50579777)
      "C++ is my favorite garbage collected language because it generates so little garbage"

      -Bjarne Stroustrup
    • by Luke Wilson ( 1626541 ) on Wednesday September 23, 2015 @02:33AM (#50580775)

      Most of what they've done seems to be rearchitecting, not getting a simple speed boost from using an unmanaged language. They're bypassing the OS to get more locality and cache retention. Those problems would not be addressed by merely rewriting in C++.

      For one, they've replaced the OS network stack with an in-process one, where each thread gets its own NIC queue so they can have "zero-copy, zero-lock, and zero-context-switch[es]" [seastar-project.org].

      They're also keeping more data in memory rather than relying on the OS file cache. It seems like they're taking every opportunity to use the in-memory representation to avoid going back to the SSTables. They try harder than Cassandra to update instead of invalidate [scylladb.com] that cache on writes.
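
      A rough sketch of that shared-nothing idea in plain C++ (not Seastar code; the shard-per-thread routing below is purely illustrative): each core-pinned thread owns one shard of the data outright, so it never takes a lock, much as each Seastar shard owns its own NIC queue.

        // Plain C++ sketch of the shared-nothing layout described above (illustrative only).
        #include <algorithm>
        #include <functional>
        #include <iostream>
        #include <string>
        #include <thread>
        #include <unordered_map>
        #include <vector>

        int main() {
            const unsigned ncores = std::max(1u, std::thread::hardware_concurrency());
            // One shard per core; a thread only ever touches its own shard, so no locks are needed.
            std::vector<std::unordered_map<std::string, std::string>> shards(ncores);

            // Route each key to exactly one shard, the way a shared-nothing store routes requests.
            auto shard_of = [&](const std::string& key) {
                return std::hash<std::string>{}(key) % ncores;
            };

            std::vector<std::thread> workers;
            for (unsigned core = 0; core < ncores; ++core) {
                workers.emplace_back([&shards, core] {
                    // A real system would poll a per-core NIC queue here; we just write to our own shard.
                    shards[core]["key-" + std::to_string(core)] = "value";
                });
            }
            for (auto& t : workers) t.join();

            std::cout << "'user:42' routes to shard " << shard_of("user:42") << "\n";
        }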

    • by IamTheRealMike ( 537420 ) on Wednesday September 23, 2015 @04:22AM (#50581035)

      The headline is rather misleading. This isn't just a plain port of the code from Java to C++ to get a magical 10x speedup. Amongst other things, they appear to be running an entire TCP stack in userspace and using special kernel drivers to avoid interrupts. This is the same team that produced OSv, an entirely new kernel written in C++ that gets massive speedups over Linux ... partly by doing things like not using memory virtualisation at all. Fast but unsafe. These guys are hard-core in a far more advanced way than just "hey, let's switch languages".

  • by Anonymous Coward

    This is a textbook example of why Java shouldn't be used in performance-sensitive environments in the first place.

    As for whether it would have been any faster if it were written in C or straight ASM: probably not worth chasing down that extra 1%. Generally the justification for straight C or ASM is to remove runtime bloat, and you'd first have to give up using any frameworks to get there.

    Just to remind potential programmers: learn C before you learn any other programming language, otherwise you will not understand why your code's performance is terrible.

    • Re: (Score:2, Funny)

      by Anonymous Coward
      But, but... Java is enterprisey!
    • by Anonymous Coward

      Cassandra is nothing to sneeze at, since it outperforms other DB engines (which are written in C, like MySQL).

      Anyhow, you use the right tool for the job, and the big question is: would ScyllaDB even exist if Cassandra hadn't been written first?

      • by Anonymous Coward on Tuesday September 22, 2015 @06:58PM (#50579211)

        Cassandra is nothing to sneeze at, since it outperforms other DB engines (which are written in C, like MySQL).

        Cassandra and MySQL are very different types of databases designed to handle different tasks. It's like saying a hammer is better than a saw without mentioning the job that needs to be done.

    • by Dutch Gun ( 899105 ) on Tuesday September 22, 2015 @06:58PM (#50579213)

      Just to remind potential programmers: learn C before you learn any other programming language, otherwise you will not understand why your code's performance is terrible.

      It may not be apparent even then. Java looks an awful lot like C++ at the code level. So... what's different? Java (and other managed languages like C#) have a bunch of neat features like reflection and automatic memory management, which inherently come at the cost of runtime efficiency. Simply learning C or C++ won't point out exactly why those languages are so much faster than managed languages. You can write nearly the same code in C++, Java, and C#, and you'll see C++ win performance benchmarks - at least in all but the most contrived examples.

      Among the more significant differences are that C++ compilers are extremely good at optimizing, and C++ code generally compiles down to more cache-friendly structures than code in other languages. The difference is in the language itself, which adheres to a zero-cost principle: you don't pay for features you don't use. A lot of C++ abstractions are eliminated *entirely* from the generated code, and exist only to protect the code's integrity during the compilation phase. We were told for years that native-equivalent performance from managed runtimes was just around the corner or even already here, and it just never really happened outside of small, contrived benchmarks.
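
      A small, hypothetical illustration of that zero-cost point: the wrapper type below adds compile-time safety, and with optimization enabled a modern compiler emits the same code for add_lengths as it would for adding two bare doubles.

        // Hypothetical example of a zero-cost abstraction: the Meters wrapper exists only
        // to catch unit mix-ups at compile time; optimized builds erase it entirely.
        #include <iostream>

        struct Meters {
            double value;
            Meters operator+(Meters other) const { return { value + other.value }; }
        };

        Meters add_lengths(Meters a, Meters b) { return a + b; }  // same machine code as double + double

        int main() {
            Meters total = add_lengths(Meters{1.5}, Meters{2.5});
            std::cout << total.value << "\n";  // prints 4
        }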

      I don't think it's necessary to always learn C or C++ first, although I do think it's worthwhile to learn one of them at some point, simply because there's a lot of that code out there. I'm primarily a C++ programmer myself, but I tend to be a bit more pragmatic about language preference. Use the language that's right for the job. For example, C is a *horrible* choice if you're writing a simple application that needs to do a bunch of string processing. In many cases, high performance isn't even the main consideration; correctness, security, and development speed matter more.

      • I would say that 95% of the people I know in person who learned C first (and not Assembler, Pascal, Smalltalk, or Lisp) are extremely bad at advanced language concepts like functional or OO programming. Most of them shifted to scripting and operating servers and don't "code" any more. A minority does embedded programming in C++ that mainly looks like C.

        The idea that learning C first has any advantage is complete bollocks, a /. myth.

        I started with C in 1987 ... on Sun Solaris (after 6 years of Assembler, Pascal and BASIC) ... in 1989 I switched to C++. I never looked back.

        Only masochists would look back at C of that period.

        ANSI C is much better ... but still: when I see a self-proclaimed C genius with 30 years' experience program Java or C++ ... I shudder.

        • by phantomfive ( 622387 ) on Tuesday September 22, 2015 @09:11PM (#50579949) Journal

          I would say that 95% of the people I know in person who learned C first (and not Assembler, Pascal, Smalltalk, or Lisp) are extremely bad at advanced language concepts like functional or OO programming. Most of them shifted to scripting and operating servers and don't "code" any more. A minority does embedded programming in C++ that mainly looks like C.

          Almost no one learns to program in assembler, Pascal, Smalltalk, or Lisp as their first language these days. It's all Python now, or Java.

        • by dgatwood ( 11270 )

          I would say that 95% of the people I know in person who learned C first (and not Assembler, Pascal, Smalltalk, or Lisp) are extremely bad at advanced language concepts like functional or OO programming.

          Not sure why people learning Pascal, assembler, or Lisp first would be better at OO. There's nothing OO about any of those. I would turn that around and say that 95% of programmers are bad at OO programming, period, regardless of what language they started with. Most folks frequently forget what are, IMO, som

          • I would turn that around and say that 95% of programmers are bad at OO programming, period, regardless of what language they started with.

            That's because 95% of written OO solutions don't fit an OO domain. There is this myth that OO is the best we have, but in reality OO is very counter-intuitive to the human brain. Most OO solutions would be better off written in a structured style; the human brain handles that much better than OO.

      • by fyngyrz ( 762201 ) on Tuesday September 22, 2015 @08:04PM (#50579641) Homepage Journal

        For example, C is a *horrible* choice if you're writing a simple application that needs to do a bunch of string processing. In many cases, high performance isn't even the main consideration; correctness, security, and development speed matter more.

        That is only true if you haven't written a string processing library. Which pretty much anyone who is going to address tasks like this will do, presuming they don't just go out and find one already written. Same thing for lists, dictionaries, trees, GEOdata, IPs, etc. Whatever. There's nothing that says one has to use C's built-in model for strings, either. Make a better one. It was one of the first things I did, and I did it in assembler, as soon as I ran into the convention of an EOT embedded in the actual text being the end marker -- I thought it was stupid then, and I didn't think a zero was any smarter when C first came to my attention lo those many decades ago. It's also a bear trap anyone can throw a bear into with regard to vulnerabilities -- one that can be entirely obviated by a decent string-handling module.
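
        For what it's worth, the core of such a replacement is tiny; here is a hypothetical sketch (in C-flavored C++) of a string that carries an explicit length instead of trusting a terminating zero, which is exactly the bear trap being described:

          // Hypothetical sketch only; a real project would use an existing library
          // (or std::string / string_view in C++). The point is the explicit length field.
          #include <cstddef>
          #include <cstdio>
          #include <cstring>

          struct LenString {
              const char* data;
              std::size_t len;  // length carried alongside the data, no NUL scanning
          };

          // Never reads past either buffer's recorded length.
          bool starts_with(LenString s, LenString prefix) {
              return s.len >= prefix.len && std::memcmp(s.data, prefix.data, prefix.len) == 0;
          }

          int main() {
              LenString hay = { "GET /index.html", 15 };
              LenString pre = { "GET ", 4 };
              std::printf("%d\n", starts_with(hay, pre));  // prints 1
          }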

        C isn't a bad language to do *anything* in. It's just a language that requires you to be competent, or better, and to address it through the lens of that competence in order to get enough out of it to make the result and the effort expended worth the candle. And no, if the programmer doesn't write in such a way as to almost always create generally reusable components, I'd not be willing to apply the appellation "competent" to the programmer.

        C's key inherent characteristics are portability, leanness and close-to-the-metal speed. It doesn't hold your hand. It's a language for experienced, skilled programmers when we're talking about creating actual products that are expected to perform in the wild. Lean code isn't nearly the issue it used to be, but it's still "nice" to have.

        • Lean code is always an issue. If your code incurs a 2x to 10x overhead from the virtual machine, that's 2-10x the hardware you need to spend money on to achieve the same throughput as before, and 2-10x the electric bill for compute-intensive applications. If you're nowhere near the limit of your box, you don't notice. If you've got rooms upon rooms of computers doing the same thing, and you're writing your code in something other than C/C++, then you're wasting money.
          • by bondsbw ( 888959 )

            Lean code may be an issue if performance is critical for your application/system.

            If you are writing an app to manage photos, you had better favor reliability over performance or you might receive death threats from moms and dads whose baby photos have vanished.

          • Barring specialized service providers, compute costs are seldom the biggest item in company expenses.

        • That is only true if you haven't written a string processing library.

          Memory management is still a pain though, because a lot of times you want to create a lot of new strings when you are doing splicing and inserting etc.

        • I'm not trying to slam C. You can do just about anything in C - that's one of its strengths. I'm just pointing out that it's not the *optimal* choice for certain types of tasks, in my opinion. C has advantages in its relative simplicity, portability, and power. Moreover, it works very well as a "least common denominator" language, in that nearly every other language can easily interop with it because of its stable ABI. This is why nearly every OS and many widely-used libraries are written in pure C.

        • Re: (Score:3, Interesting)

          by ZeroConcept ( 196261 )

          1. C is not portable, it's tied to the architectures/OSs/APIs the programmer chose to target at write time.
          2. Leanness and close-to-the-metal speed are irrelevant in most business scenarios (time to market rules, cores and memory are commodities; see ABAP and related monsters successfully running most of the world's transactions regardless of C).
          3. C is not a language meant to implement business solutions, it's a wrapper for ASM for idiots who can't write ASM themselves. (rhetorical)
          4. Writing string processing l

      • > Among the more significant differences are that C++ compilers are extremely good at optimizing,

        LOL. No they aren't [youtube.com]

        Mike Acton gave an excellent talk, "Code Clinic 2015: How to Write Code the Compiler Can Actually Optimize", in which he picked an integer sequence and optimized the run time needed to calculate it. Techniques included memoization and common sub-term recognition. For 20 values, the pre-optimization time was 31 seconds; the post-optimization time was 0.01 seconds.

        Linked above.
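
        (Not the sequence from that talk, but for anyone unfamiliar with the memoization technique mentioned above, here is a minimal stand-in using naive recursive Fibonacci; the cached version is effectively instant where the uncached one blows up exponentially.)

          // Minimal memoization illustration (stand-in example, not taken from Acton's talk).
          #include <cstdint>
          #include <iostream>
          #include <unordered_map>

          std::uint64_t fib_memo(unsigned n, std::unordered_map<unsigned, std::uint64_t>& cache) {
              if (n < 2) return n;
              auto it = cache.find(n);
              if (it != cache.end()) return it->second;  // reuse an already-computed value
              std::uint64_t result = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
              cache[n] = result;
              return result;
          }

          int main() {
              std::unordered_map<unsigned, std::uint64_t> cache;
              std::cout << fib_memo(90, cache) << "\n";  // returns instantly; naive recursion would not
          }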

        • by Entrope ( 68843 )

          C++ compilers are pretty good at optimizing the code you write (subject to aliasing issues inherited from C, and so forth). They're not functional compilers that construct all kinds of state behind your back in hopes that it will be useful. If your complaint is that they don't do things like memoize results for you, somebody probably has a nice C++11 header that will, and most C++ developers will ask why you think a C++ compiler should memoize things without you asking.

          • The points are twofold:

            1. Naive use of algorithms and OOP without understanding the data flow will always be slower than understanding and optimizing for (data) cache usage.

            Pitfalls of Object Oriented Programming [cat-v.org]

            2. C/C++ compilers do a really shitty job of optimizing even trivial code.

            CppCon 2014: Mike Acton "Data-Oriented Design and C++" [youtube.com]

            Mike demonstrates a simple example where a bool member flag is used as a test. MSVC does a horrible job at O2; Clang does a much better job, but still crappy. (Note: U

            • by Entrope ( 68843 )

              Your first link doesn't say anything about compilers -- it's about cache locality and branch prediction, and how the application's architecture can make those more or less of an issue.

              I watched part of the first Mike Acton video you linked, and was reminded why I hate watching videos to try to learn something: People talk too slowly and basically never organize their information for efficient understanding or consumption. I'm not about to sit through another one to find out he is mostly complaining about h

        • You have to consider that compilers are also going to perform a wide variety of micro-optimizations that humans simply couldn't do on a massive scale, over millions of lines of code. No one would argue that a compiler can radically restructure your algorithms during optimization, because it doesn't know which side-effects are acceptable and which are not. So, yes, human programmers still need to be aware of how to structure code for best results on a given platform.

          Of course, you can always find specific

          • You may also be interested in my followup [slashdot.org]

            I linked to a second trivial case, which in practice tends to show up time and time again in typical game code: using member bools, where C++ compilers fall over completely.

            Compilers have a very narrow range where they are very good. Outside that domain, they suck at generating optimal code.

            As I said, there are 3 levels of optimization:

            1. Micro-optimization aka bit twiddling
            2. Algorithm optimization
            3. Data-flow optimization. Data-Oriented Design focusing on the commo

            • Yep, I don't disagree with you. When I talked about optimizations, I was of course only talking about case 1. Anything above that certainly requires human-level work, and typically a substantial effort and deeper knowledge of the compiler and platform.

              I wonder if the compiler does a better job if const is properly used? It's meant as a compiler hint, so that the compiler can be more aggressive because it knows there are supposed to be no side effects in the functions.

              Also, I'd have a serious talk with a

        • The fact that compilers are better at optimizing some types of things than others doesn't make them not extremely good at optimizing. On average they are better at optimizing than a human could ever be.
          • At the micro level, yes, agreed.

            At the macro level. Not even close.

            • As a software engineer, I pretty much expect to be doing macro-level optimizations, and I expect the compiler to be doing the micro-level stuff that I can't be bothered with. I still agree with the claim that C++ compilers (and really it's the language) allow the developer to better convey intent, which in theory (and I am pretty sure in practice) allows the compiler to make better optimizations than it would otherwise.
          • On average they are better at optimizing than a human could ever be.

            What? No way! If you want to do better than the compiler, follow these steps:

            1) Compile your program, get the assembly output from the compiler.
            2) Find improvements (this will not be hard). Profile to determine how much faster your changes are.
            3) After many repetitions of this, you will get good enough to write faster than the compiler without getting the assembly output.

            Compilers do better than an average human writing assembly, but that's only because most humans aren't good at writing assembly. It'
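
            For anyone wanting to try step 1, the compilers make this easy; a typical GCC invocation looks like the following (exact flags vary by toolchain):

              # Emit optimized, annotated assembly you can read and compare against your own:
              g++ -O2 -S -fverbose-asm hot_loop.cpp -o hot_loop.s
              # Or build with debug info and disassemble with source interleaved:
              g++ -O2 -g -c hot_loop.cpp && objdump -d -S hot_loop.o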

      • It may not be apparent even then. Java looks an awful lot like C++ at the code level. So... what's different? Java (and other managed languages like C#) have a bunch of neat features like reflection and automatic memory management, which inherently come at the cost of runtime efficiency. Simply learning C or C++ won't point out exactly why those languages are so much faster than managed languages. You can write nearly the same code in C++, Java, and C#, and you'll see C++ win performance benchmarks - at least in all but the most contrived examples.

        All that stuff comes at a cost, but not a 1000% cost.
        If you're seeing that much of a speedup, then it's likely you were doing silly things in the Java version.

        I'd rather write C than Java, but let's be honest about the performance: Java's not that much worse.

      • C++ code generally compiles down to more cache-friendly structures than code in other languages

        It's possible to do cache-coherent programming in managed runtimes - it's mostly about knowing the rules though, and people who've never used unmanaged languages are less likely to know the rules.

        Everything that's an "object" is a pointer, and they'll be scattered all over the heap. The only way you get cache coherency is by using structs, which not all managed languages have, or arrays, which even managed languages allocate as a contiguous block of memory.

        I've written sorted collection classes in Java that us

    • Just to remind potential programmers: learn C before you learn any other programming language, otherwise you will not understand why your code's performance is terrible.

      C doesn't let you understand why people's code has terrible performance. C has the same problems that make code slow - reference semantics. It's a mistake to conflate low level with "performance". For example, std::sort is faster than qsort, and you can't understand why just by understanding C.
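
      A concrete version of that std::sort vs. qsort point, for illustration: qsort goes through a function pointer on every comparison, while std::sort is a template whose comparator (the lambda here) can be inlined, which is where most of the difference comes from.

        // Illustration of why std::sort can beat qsort: indirect calls vs. an inlinable comparator.
        #include <algorithm>
        #include <cstdlib>
        #include <vector>

        int compare_ints(const void* a, const void* b) {
            const int x = *static_cast<const int*>(a);
            const int y = *static_cast<const int*>(b);
            return (x > y) - (x < y);
        }

        int main() {
            std::vector<int> a = {5, 3, 1, 4, 2};
            std::vector<int> b = a;

            std::qsort(a.data(), a.size(), sizeof(int), compare_ints);          // comparator called indirectly
            std::sort(b.begin(), b.end(), [](int x, int y) { return x < y; });  // comparator inlined
        }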

    • Generally the justification for straight C or ASM is to remove runtime bloat, and you'd first have to give up using any frameworks to get there.

      Another is if you have to security-audit the result and protect it from attack, as in OSes. C++ can generate stuff that isn't obvious from the local source code - thanks to operator overloads, overrides, and the like. (Linus makes this point - it's why the Linux kernel is in C and will stay there for the foreseeable future.)

      But that shouldn't be enough of an issue here

    • I know plenty of people who only learned C, but have no fucking clue how anything works.
    • The real reason is much more nuanced than the language differences between C++ and Java. The Seastar network architecture bypasses the kernel TCP/IP stack entirely and instead implements a user-mode TCP/IP stack using DPDK, which allows user mode to poll the network card's packet buffers directly over memory-mapped I/O. Each user-mode stack instance runs on a single core only, but you can run multiple instances on multiple cores. It can scale linearly because there is very little shared state across cores.

      C++ with custom network s

    • As for whether it would have been any faster if it were written in C or straight ASM: probably not worth chasing down that extra 1%.

      Assembly can give you huge performance gains, much more than 1%. One reason is that it gives you more control over caching. Paul Hsieh has written quite a bit on this topic; it's worth checking out.

  • by lkcl ( 517947 )

    yaaaa... but are they using the Lightning Memory-Mapped Database (LMDB) as the back-end? http://developers.slashdot.org... [slashdot.org] https://en.wikipedia.org/wiki/... [wikipedia.org]

  • Sure, but is it Web scale?

  • by jlowery ( 47102 ) on Tuesday September 22, 2015 @06:55PM (#50579193)

    They also boosted performance by never freeing memory!

  • Databases are usually I/O-bound, and improving the storage structure or network protocol matters more than spot-optimizing the code. A more likely statement is that ScyllaDB performed ten times faster than Cassandra in one particular benchmark that Cassandra has not been specifically optimized for yet, and is ten percent faster in the average case.

    In either case, good luck maintaining speed and stability after 5 releases when you implement every corner case of every feature and have to deal with legac

  • by Anonymous Coward on Tuesday September 22, 2015 @07:21PM (#50579361)

    I find it depressing that so little attention is paid to efficient computing. People now just throw memory and cycles at problems because they can, with passable results. But I wonder how much more we could get out of our machines if software were carefully crafted from bottom to top.

    • You're looking in the wrong place. Try prying open your smoke alarm or CO detector. It's probably got a PIC12F or 10F, or an ATtiny or something. Those things have 1k words of program memory and a few tens of bytes of RAM (typically). People pay a lot of attention to efficient computing. Too big a program means moving up a model, or an extra 5c per unit, which clobbers the profit margin. Too inefficient and the non-replaceable battery lasts 3 years instead of 5.

      Or look at the compute-bound stuff. FFmpeg gets faster year on y

  • Wow, two years ago everyone here told us that NoSQL is evil and tried to convince us that we should stick to MySQL.

    Now everyone tells us Java is evil, because a rewrite in C++ is faster.

    What a surprise.

    If I were to rewrite Cassandra from scratch, in Java, it would also be faster than the current code.

    Why? Because I could reuse and improve on all the learning the original team did over the course of a decade.

    Keep in mind, the rewrite uses a new framework and new concepts for concurrency. Concurrency is one of the core areas where computing will certainly make a lot of progress in the future.

    For my part, I'm waiting for a Lucene rewrite, regardless of language. Probably the worst OSS code I have ever seen ... actually the worst code, regardless of OSS or closed source.

    • Re: (Score:2, Insightful)

      "If I would rewrite Cassandra from scratch, in Java, it also would be faster than the actual code."

      Great (In the unlikely event you could actually single-handedly rewrite it at all.) Now make it 10 times faster. Nice Red Herring post though!

      • I did not say I would make it 10 times faster; I said I would make it faster.

        It is a no-brainer that the speed increase has nothing to do with the language used but with better architecture and better approaches. That could easily be repeated with a Java rewrite.

        Why should it not be possible to rewrite it single-handedly in a reasonable time?

        • "It is a no brainer that the speed increase has nothing to do with the language used but with better architecture and better approaches. That can be easy repeated with a Java rewrite."

          Perhaps that is the conclusion one reaches without a brain, but people with an actual brain and an understanding of the fundamental differences between C++ and Java know that is complete bullshit, and recognize you as the incompetent posturing for recognition with no ability to back your ignorant bullshit up with actual skills

          • Perhaps you shouzld read the plenty full of Posts here in this thread that explain why the new C++ rewrite is faster. Or simply go to the vendors web page. It is very well explained :D

            You Sir, are just an ignorant idiot without a brain. Or you had grasped what I implied with my previous comment :D ... sorry to use your own wording on you, but it seemed fitting.

            • "Perhaps you shouzld read the plenty full of Posts here ..."

              Know doubt your grasp of programming languages is similar to your grasp of the English language. Nobody is arguing that re-architecting isn't a major advantage. The point is that one cannot attribute the entire improvement to it, and claiming that you could use Java and acheive similar results represents your blatant blathering to the world that you lack an understanding of language architectures. Put Bjarne Stroustrup and James Gosling together

    • Wow, two years ago everyone here told us that NoSQL is evil and tried to convince us that we should stick to MySQL.

      I will admit that I don't quite understand the fuss about "NoSQL".

      It's just a two-column table with a primary key and a data blob. Congratulations, I guess. Yes, a specialized piece of software for this might be fast, but it's not anything new or innovative. It's just a two-column table with a bow on top.

      • There are plenty of different variations of NoSQL databases: column stores like Cassandra (with an unlimited number of columns; *cough* *cough*, how useful would a store with only two columns be?), document-based ones like MongoDB, XML-based ones, or simply graph-based ones (e.g. OrientDB and Neo4j).

    • I do agree that a language-to-language rewrite would yield impressive gains... but that's not the whole of it. Cassandra is an edge case ... and yes, the Lucene code could use some love (contribute some patches??)

      C++ isn't necessarily the best choice for everything, just like a McLaren F1 isn't the optimum choice for picking up groceries. But if your requirements dictate that performance is a chief priority, it most certainly is.

      I've written many Java and C++ systems at scale. Java simply does not excel at maxi

    • by Ace17 ( 3804065 )
      The sole action of rewriting a piece of code doesn't magically make it faster. Nor does it make it cleaner, or more stable. You have to put something more into the equation, like a new programming language, or a new team. You might want to have a look at Joel Spolsky's post named "Things You Should Never Do", explaining why a big rewrite is almost always a bad idea.
    • by narcc ( 412956 )

      Fads are fads. Today's 'best practices' are tomorrow's 'horrible mistakes'.

      You're right that rewrites often result in a significantly better product. I suspect that, in addition to your reason, the two are correlated.

    • by robi5 ( 1261542 )

      > Wow, two years ago everyone here told us that NoSQL is evil and tried to convince us that we should stick to MySQL.

      The proper alternative to NoSQL would be an SQL database, rather than the misleadingly named MySQL database.

      • An SQL database is only an alternative to a NoSQL database if choosing the latter was an architectural mistake in the first place.

        I have never seen a use of a NoSQL database where it made sense to switch to a relational (SQL) one.

        If you have examples, I am sure many are eager to hear them.

  • by Anonymous Coward

    I will only use MongoDB because it is web scale.
