Apple Open Sources Grand Central Dispatch

bonch writes "Apple has open sourced libdispatch, also known as Grand Central Dispatch, which is technology in Snow Leopard that makes it easier for developers to take advantage of multi-core parallelism. Kernel support is not required, but performance optimizations Apple made for supporting GCD are visible in xnu. Block support in C is required and is currently available in LLVM (note that Apple has submitted their implementation of C blocks for standardization)." Update: 09/11 15:32 GMT by KD: Drew McCormack has a post up speculating on what Apple's move means to Linux and other communities (but probably not Microsoft): "...this is also very interesting for scientific developers. It may be possible to parallelize code in the not too distant future using Grand Central Dispatch, and run that code not only on Macs, but also on clusters and supercomputers."
  • What? (Score:3, Interesting)

    by julesh ( 229690 ) on Friday September 11, 2009 @09:09AM (#29388375)

    Can somebody explain what these "blocks" are? I mean, C being a block-structured language, I thought it already supported them...

    • Re:What? (Score:5, Informative)

      by Daniel_Staal ( 609844 ) on Friday September 11, 2009 @09:13AM (#29388405)

      Blocks are sections of program that can be passed around between functions as arguments. They basically allow 'functional' programming in C.

    • Re: (Score:2, Informative)

      Blocks are the closest C will probably ever get to first-class functions. They're close enough that you can reap most of the benefits of first class functions.

    • Re:What? (Score:5, Informative)

      by twoshortplanks ( 124523 ) on Friday September 11, 2009 @09:24AM (#29388481) Homepage
      I'd recommend reading the relevant section of the Ars Technica review of Mac OS X 10.6.
    • To me it looked something like a nameless function, which shares same stack with the functions where it is defined.

      Even if it's a dumb nameless function, it could already be very useful. E.g. Perl's 'sort {$a op $b} @list' can be more or less directly translated to C's 'qsort( ..., ^(const void *a, const void *b){ return a op b; } )'. At least I hope so.

    • Re:What? (Score:4, Interesting)

      by LO0G ( 606364 ) on Friday September 11, 2009 @10:28AM (#29389075)

      They're basically lambda functions, which are a part of C++0x.

      • Re:What? (Score:4, Informative)

        by wootest ( 694923 ) on Friday September 11, 2009 @02:12PM (#29391707)

        They're an alternate implementation of lambda-like technology. Apple's Chris Lattner wrote the first public announcement on blocks and noted: "To head off the obvious question: this syntax and implementation has nothing to do with C++ lambdas. Blocks are designed to work well with C and Objective-C, and unfortunately C++ lambdas really require a language with templates to be very useful. The syntax of blocks and C++ lambdas are completely different, so we expect to eventually support both in the same compiler."

        If only due to type inference, I'd prefer the syntax of C++ lambdas, but Blocks aren't half bad at what they do within the frame of the C model.

    • by heffrey ( 229704 )

      Actually, C isn't fully block-structured in that sense: you can't declare functions inside other functions.

    • by Santana ( 103744 ) on Friday September 11, 2009 @02:14PM (#29391735) Homepage

      I'm honestly surprised how ignorant and lazy the regular slashdotter has become over the years.

      Any self-respecting geek should already be keeping up to date with Apple advancements, which are and will be impacting technology in the years to come.

      If you people haven't noticed already, Apple has been consistently releasing libraries and server software as open source projects for the rest to pick up, use, and modify, with liberal licenses.

      A friend of mine used to say (can't remember exactly... paraphrasing):

      * Microsoft wants all software to be theirs
      * GNU wants all software to be free
      * BSD wants all software to be better

      And releasing GCD, gentlemen, is another master stroke by Apple, just like WebKit, Bonjour, and LLVM (the list goes on): sharing knowledge and advancing technology by merit, not by forcing it down your throat thanks to a monopoly you have been handed.

      The term "block" is familiar to Ruby programmers. It's an old concept which Ruby has made easy to use and hence popular and actually useful.

      And here's another lesson which OpenBSD, Apple and Ruby have been putting to work without you noticing, guys: any technology that is difficult to use, no matter how good it is, will not be used if it gets in your way; the technology must be easy to deploy/use and unobtrusive to be actually used and useful.

      Just remember SELinux and how many people simply disable it, no matter how good it is (which I don't think it is, but that's for another rant). Then compare it with the technology that OpenBSD has been implementing for memory protection, which is unobtrusive and ready to use with no extra configuration. Same with Ruby blocks, which more programmers are using and a lot of software is benefiting from now, even though higher-order functions and closures have been around for ages.

      Having Ruby-like blocks in C and Objective-C is so COOL; you must appreciate that if you think you're serious about programming. Apple has already submitted it to become a standard. I believe MacRuby will benefit from this too: it's Ruby written in Objective-C, implementing Ruby classes as Objective-C classes and achieving incredible speed by taking advantage of Objective-C and LLVM technologies.

      Now, I want my late-'90s Slashdot back, please, where you could more easily find insightful and informative comments. There's a lot of garbage and Microsoft apologists nowadays.

  • Awesome! (Score:5, Interesting)

    by gers0667 ( 459800 ) on Friday September 11, 2009 @09:10AM (#29388381) Homepage

    I'm not too well versed in Cocoa development, but I pushed some code that should have been in a separate thread into GCD, which requires you to use a block. All in all, I had to add an include, one line of code, and a closing bracket.

    Apple has made some seriously cool stuff here.

    • Re:Awesome! (Score:4, Insightful)

      by ThePhilips ( 752041 ) on Friday September 11, 2009 @09:50AM (#29388699) Homepage Journal

      Having done some multiprogramming, I have to say that this is really an end-user technology, less a tool for developers.

      One of the biggest problems on multi-CPU/core systems is how to split CPUs appropriately between applications. That requires quite an amount of testing and benchmarking. Then you simply configure the max number of threads each application is allowed to use. Obviously, changing anything later, when a problem is found, requires some effort too.

      With GCD, all that is now much easier. Available CPU resources are presented to applications in the fashion of a real-time batch queue. One doesn't need to configure thread pools per application anymore.

      That was never a problem from the software development point of view: we just provide a 'max threads' parameter. But it was always a problem from the point of view of the user/operator, who has to actually fill in that parameter.

      • Re:Awesome! (Score:5, Informative)

        by Dog-Cow ( 21281 ) on Friday September 11, 2009 @10:30AM (#29389097)

        GCD is entirely a developer technology. It's a library for crying out loud! The end-user never does anything with it.

        The whole point is to make multi-processing easy for the developer.

        I pity the fools who have to use the code you've written.

      • by jedidiah ( 1196 )

        No, it just looks like you are trusting the OS to do that for you and to get it right without any input from you. If the OS can do that for you as a developer then why can't you do that in your own code?

        • Re: (Score:2, Insightful)

          by BasilBrush ( 643681 )

          For the usual reason that OSs handle resources: because the OS is in a position to share out resources between applications in a reasonable way. If each app does its own thing, then it's a free-for-all, and pretty much guaranteed to be inefficient.

          It also, of course, has the other usual benefit of libraries: it means that programmers don't have to reinvent the wheel every time they write an app. GCD makes writing multi-threaded code using thread pools vastly simpler than before.

      The question is... how is this different from Intel's Threading Building Blocks, or OpenMP, both of which are better supported and more widely available to non-Mac developers?

        I guess having their own implementation (for the Mac) makes sense, as they can integrate it throughout the OS. I don't know anything about GCD either, but could it be used in Linux to make that a more parallel-friendly OS, with less developer effort and more standardisation of parallel execution?

        Linux is always playing catch-up with Windows f

        • Re:Awesome! (Score:5, Insightful)

          by ThePhilips ( 752041 ) on Friday September 11, 2009 @11:50AM (#29389999) Homepage Journal

          The question is... how is this different from Intel's Threading Building Blocks, or OpenMP, both of which are better supported and more widely available to non-Mac developers?

          If I'm not mistaken, those technologies work within a single application.

          GCD can coordinate all applications running on the same system.

          I guess having their own implementation (for the Mac) makes sense, as they can integrate it throughout the OS. I don't know anything about GCD either, but could it be used in Linux to make that a more parallel-friendly OS, with less developer effort and more standardisation of parallel execution?

          Theoretically yes.

          Apple is in a unique position here.

          Most software developers care solely about their own application. An old example from the desktop: on Windows I have the 7-Zip archiver installed. I have a dual-core CPU, and 7-Zip is configured to use the 2 cores. From the perspective of software developers, that's all they can do: let users tell them how many cores/CPUs can be used. But I also have a video encoding application installed, also configured to use two cores. If I try to run them both in parallel, that causes an enormous amount of context switching, harming the performance of both tasks. In the worst case it might make my desktop completely unresponsive. As a user, I'm also too lazy to reconfigure applications every time to say how many CPUs they should use.

          Apple itself now produces a number of applications which can utilize multiple CPUs (iTunes audio conversion, iMovie/QuickTime/FC video conversion, etc.), and obviously they ran into the problem that when applications are left on their own to decide how much CPU resource to use, the system gets overloaded, with all the usual effects. Requiring the user to reconfigure all the applications all the time is also kind of stupid.

          Since Apple is in control of both the OS and the applications, and their own software might suffer from the problem, they went out and implemented the solution: a system-wide batch queue with a thread pool. They are still threads, local to the process, but they are scheduled on a system-wide basis. You do not need to configure how many CPUs applications should use, nor do applications have to think about it: they simply put tasks (to become threads) on the GCD queue.

          I'm using the 'batch queue' term because this is the closest thing that exists now, though classical UNIX batch queues are different in nature: those are processes, and they are executed at some unknown point in time. GCD is real-time in nature and its threads run immediately, unlike traditional batch queues, which wait for the system to be idle.

      • by rsax ( 603351 )

        Having done some multiprogramming, I have to say that this is really an end-user technology, less a tool for developers.

        With GCD, all that is now much easier. Available CPU resources are presented to applications in the fashion of a real-time batch queue. One doesn't need to configure thread pools per application anymore.

        I'm so glad GCD is available for general use. I can't count the number of times I've had to help my grandma configure thread pools for each app. She's totally gonna dig 10.6 now that her mul

  • the question is:
    What license? Apache v2.0.
    What the fuck is GCD?

    Grand Central Dispatch (GCD), named for Grand Central Terminal, is used to optimize application support for multicore processors. It is an implementation of task parallelism based on the thread pool pattern.
    GCD works by allowing specific tasks in a program that can be run in parallel to be demarcated as blocks.[2] To this end, it extends the syntax of the C, C++, and Objective-C programming languages.[2] At runtime, the blocks are queued up for execution and, depending on the availability of processing resources, they are scheduled to execute on any of the available processor cores[2] (referred to as "routing" by Apple).[3]
    see also
    # Task Parallel Library - comparable technology in the .NET Framework developed by Microsoft.
    # Java Concurrency - comparable technology in Java (also known as JSR 166).

    • Thanks. Now I imagine this might be more useful in BSD, except I'm not sure BSD can incorporate code under an Apache license. Perhaps someone can enlighten me there.

      Honestly, I'm more interested in whether or not this can benefit Linux, but I'm assuming it would take a major rewrite to fit the Linux kernel.

      • by MrMr ( 219533 )
        Task parallelism libraries are essentially fancy wrappers around fork-and-exec. That is really old tech, and needs no rewrite.
        • by samkass ( 174571 ) on Friday September 11, 2009 @09:44AM (#29388639) Homepage Journal

          pthreads and fork/exec are the equivalent of assembly language for parallelism compared to GCD. The API makes it easy to create anonymous methods that can be parallelized, have dependencies, be put in serial or parallel queues, etc. Then the OS implementation can prioritize at a finely-grained level based on dynamic resource availability, relative process priority, etc., on a system-wide basis. (The OS implementation of GCD was already open-sourced as part of 10.6's Darwin xnu kernel release last week.)

          It's pretty nifty stuff. And it's good to see Apple continue MacOS X's tradition of openness and support of open source.

        • by ThePhilips ( 752041 ) on Friday September 11, 2009 @10:02AM (#29388835) Homepage Journal

          It's old tech, but it's different this time around.

          Old thread pools are per process. This is a thread pool for the whole system. And that's new.

          IOW, with GCD you do not need to configure how many threads every application should start. Applications do not need to bother with it anymore either: they simply queue batch tasks as they arrive, and GCD guarantees that they will be executed, without overloading the system.

          In short, GCD is a system-wide replacement for the old per-application thread pool configuration. It makes applications simpler and also doesn't force the end-user to understand all the oddities of multi-programming to get the most out of their boxes.

  • Blocks and GCD (Score:5, Informative)

    by Anonymous Coward on Friday September 11, 2009 @09:21AM (#29388455)

    In Snow Leopard, Apple has introduced a C language extension called "blocks." Blocks add closures and anonymous functions to C and the C-derived languages C++, Objective-C, and Objective-C++.
    Perhaps the simplest way to explain blocks is that they make functions another form of data. C-derived languages already have function pointers, which can be passed around like data, but these can only point to functions created at compile time. The only way to influence the behavior of such a function is by passing different arguments to it or by setting global variables which are then accessed from within the function. Both of these approaches have big disadvantages.
    Full Read:

    Directly in line with blocks is Grand Central Dispatch (and this is where blocks become really useful):
    GCD is a technology to resolve the concurrency conundrum by giving programmers a very easy way to split tasks into multiple sub-tasks, which can then be loaded onto different threads/CPUs. All this also works with normal threading, but GCD makes the process far easier, with the intention of preparing OS X for future multicore machines:

    It does so by using blocks as separate tasks:

    "When I first heard about Grand Central Dispatch, I was extremely skeptical. The greatest minds in computer science have been working for decades on the problem of how best to extract parallelism from computing workloads. Now here was Apple apparently promising to solve this problem. Ridiculous.

    But Grand Central Dispatch doesn't actually address this issue at all. It offers no help whatsoever in deciding how to split your work up into independently executable tasks; that is, deciding what pieces can or should be executed asynchronously or in parallel. That's still entirely up to the developer (and still a tough problem). What GCD does instead is much more pragmatic. Once a developer has identified something that can be split off into a separate task, GCD makes it as easy and non-invasive as possible to actually do so.

    The use of FIFO queues, and especially the existence of serialized queues, seems counter to the spirit of ubiquitous concurrency. But we've seen where the Platonic ideal of multithreading leads, and it's not a pleasant place for developers.

    One of Apple's slogans for Grand Central Dispatch is "islands of serialization in a sea of concurrency." That does a great job of capturing the practical reality of adding more concurrency to run-of-the-mill desktop applications. Those islands are what isolate developers from the thorny problems of simultaneous data access, deadlock, and other pitfalls of multithreading. Developers are encouraged to identify functions of their applications that would be better executed off the main thread, even if they're made up of several sequential or otherwise partially interdependent tasks. GCD makes it easy to break off the entire unit of work while maintaining the existing order and dependencies between subtasks." (source = above url)

  • GCD -vs- OpenMP (Score:4, Informative)

    by MobyDisk ( 75490 ) on Friday September 11, 2009 @09:54AM (#29388735) Homepage

    It looks like GCD is very similar to OpenMP. I am always biased toward using an open standard when possible. Since many compiler vendors support OpenMP, why didn't Apple just implement that for Objective-C, instead of creating their own threading solution? Judging from the examples, GCD looks much cleaner and simpler. But that often comes with a price.

    • It looks like GCD is very similar to OpenMP.

      How many lines does it take in OpenMP to implement something like "do this task in a background thread at low priority, and when it's done, do that task in the UI thread"? In GCD it is two lines.

    • Re:GCD -vs- OpenMP (Score:5, Informative)

      by jonesy16 ( 595988 ) <jonesy@gmai l . c om> on Friday September 11, 2009 @11:06AM (#29389511)

      GCD and OpenMP have very little in common. OpenMP is a language extension. It requires the programmer to understand what environment their program is going to run in, what variables can be shared and how, etc. GCD merely asks you to identify blocks of code that are independent, and it handles parceling them out to threads, variable replication, etc. It's the difference between providing detailed blueprints of a car (the OpenMP way) and just saying "I want a car" (the GCD way). You can *almost* think of GCD as a user-friendly frontend for OpenMP.

    • Re: (Score:3, Insightful)

      by ceoyoyo ( 59147 )

      "Judging from the examples, GCD looks much cleaner and simpler. But that often comes with a price."

      I think you answered your own question. Apple is kind of fanatical for cleaner and simpler and not just in their user interfaces. It's nice. Apple APIs are some of the best I've ever used.

  • I know the C-like language geniuses won't jump on Erlang immediately, but the multi-core support is awesome. I'm pretty sure there's a port for every platform too.

    • Erlang is pretty cool, but I doubt the Linux kernel or any other massive C codebase is going to be rewritten in it. GCD is an incremental improvement that will be more likely to have a bigger impact.
  • What I really want to see next is libxgrid so that I can use my debian/windows boxes as Xgrid nodes.

    My university had around 20 computer labs; some were packed, but some were completely empty. I understand that it'd use more energy, but if you could turn those into a cheap 'super computer' and loan/rent out time to groups on campus...

  • by wandazulu ( 265281 ) on Friday September 11, 2009 @11:21AM (#29389657)

    I've come to really like GCD. I haven't played with it much in Cocoa (Obj-C), but I've been moving some of the stuff I wrote a long time ago in C to use it, and I think I can say that what it does is *really* *really* awesome. It helps when writing code to be run in parallel; it does not help you in determining *what* should be done in parallel. By putting your work into queues, by way of closures (yeah, blocks, whatever... I'm sticking with the closure name), it's up to the underlying OS to determine what thread gets what work, and on what processor. Having worked with multithreaded stuff on Windows, calling GetThreadAffinityMask or whatever it was and being told that it's just a *hint* to the OS, which is free to ignore you (which it always did), GCD really does spread out the work evenly among my 16-proc MacPro, and then turns around and does it just as well on the dual-core mini.

    I've wanted something like this for years; a really decent OS thread scheduler that divides up the work on the other processors in a sensible fashion. I was even looking into how much effort it would take to write something like this from scratch for Linux, and now I don't even have to. Sweet!

    Caveats: this is in OS X only, so no iPhone GCD (at least not yet; not really necessary until we have multi-core iPhones), and while I've lived with additions to C++ over the years (templates, mostly), the idea of adding, well, anything to C seems strange, let alone something as run-time dependent as closures.

    • Re: (Score:3, Insightful)

      it does not help you in determining *what* should be done in parallel.

      I'd disagree slightly on that point; agreed, the technology doesn't actually make certain code blocks suddenly go "me! me! me!", but it does significantly ease the burden on the developer in making those decisions. I can throw something into the parallelization engine with a couple of lines of code to see what difference it makes.

      The key example would be something like the one here, which Slashdot's filter doesn't like (it says please don't use so many junk characters, bah).

      That's a nice and simple change to

    • 16-proc MacPro

      O RLY? Where can I get me one of them?

      That aside, this is a pretty decent description of what GCD is good for.

  • I don't mind progress, and new standards and all that, and the idea of a "user-friendly scheduler" is really nice, but how hard would it be to make this work with just generic callable objects? It's not that hard to implement a closure in C, and it's been done for years for things like Boost and libsigc++ (any signal/callback system that doesn't have upvalues is useless to me at this point). And it's not like these "blocks" are actually compiled and linked at run time, it's just a pointer to a static functi
