
Harlan: a Language That Simplifies GPU Programming

hypnosec writes "Harlan, a declarative programming language that simplifies the development of applications that run on the GPU, has been released by a researcher at Indiana University. Erik Holk released his work publicly after working on it for two years. Harlan's syntax is based on Scheme, a dialect of the Lisp programming language. The language aims to help developers make productive and efficient use of GPUs by letting them concentrate on their actual work while it takes care of the routine GPU programming tasks. The language has been designed specifically for GPU programming and works much closer to the hardware." Also worth a mention is Haskell's GPipe interface to programmable GPUs.
  • by Anonymous Coward on Friday July 05, 2013 @04:15AM (#44193175)
    I find it hard to believe ANYTHING derived from LISP could simplify anything.
    • by marcello_dl ( 667940 ) on Friday July 05, 2013 @04:36AM (#44193251) Homepage Journal

      Because lisp-style languages are already simplified to the extreme, you mean? Phew, for a moment I thought I spotted a troll.

      • The question is whether simplifying the syntax down to a nubbin really flattens the learning curve or not.
        • Yes, it does.

    • by lxs ( 131946 )

      ((easy)is)lisp

    • by dbIII ( 701233 )
      Wrong (very wrong (indeed it is as wrong as that incident last week (last week being in June)))
    • You likely write code in a syntax that was derived from Lisp every day and don't even realize it.

  • float.kfc [github.com] shows the basic Scheme-style syntax.

    I wonder why it uses .kfc as its extension...

    • by zdzichu ( 100333 ) on Friday July 05, 2013 @04:24AM (#44193209) Homepage Journal

      Holk reveals [theincredibleholk.org] that the name Harlan comes from a mishearing of fried chicken icon Colonel Sanders' first name, Harland, and this association is also why all the file extensions for Harlan programs are .kfc.

  • Indian University? (Score:3, Informative)

    by Anonymous Coward on Friday July 05, 2013 @04:20AM (#44193195)

    I think you mean Indiana University, mods.

  • by Dan East ( 318230 ) on Friday July 05, 2013 @04:24AM (#44193211) Journal

    According to the story it is Indiana University, not Indian University.

    I wonder if Scheme was in some way necessary or conducive to running on the GPU, or if that was an arbitrary choice. I still have nightmares of car and cdr from way back when.

    • by abies ( 607076 ) on Friday July 05, 2013 @05:10AM (#44193329)

      Scheme/Lisp was a bit helpful in that it has a lot of features that simplify code generation. In fact, Lisp is the ultimate example of programmers bending towards making things easiest for compilers. It is a lot easier to transform Lisp-like code into another representation - you don't really need to write the lex/bison-like parser part of the grammar; you can start immediately with the transforms.

      But it doesn't make it simpler for the people using the final language - just for the guy writing the compiler. You have to be a masochist to prefer to write

          (define (point-add x y)
              (match x
                  ((point3 a b c)
                    (match y
                        ((point3 x y z)
                          (point3 (+ a x) (+ b y) (+ c z)))))))

      instead of something like

      define operator+(point3 a, point3 b) = point3(a.x+b.x,a.y+b.y,a.z+b.z)

      Lisp makes writing DSLs easy - but the resulting DSLs are still Lisp. In the era of things like Xtext [eclipse.org], which provides a full IDE with autocompletion, project management, outlines, etc. on top of your DSL, there is no real excuse to make things harder than needed.

      • Re: (Score:2, Flamebait)

        by vikingpower ( 768921 )

        In the era of things like Xtext [eclipse.org], which provides a full IDE with autocompletion, project management, outlines, etc. on top of your DSL, there is no real excuse to make things harder than needed

        Bullshit. Did you ever actually try to design, write, develop and maintain an industrial-strength DSL with Xtext? If yes, then I would be interested to hear about your experience. If not, hear mine: it is hell.

      • by Goaway ( 82658 )

        I always found it deeply ironic that SICP, of all books, starts out with the statement that "Thus, programs must be written for people to read, and only incidentally for machines to execute", and then goes on to use Scheme.

        • by rmstar ( 114746 )

          I always found it deeply ironic that SICP, of all books, starts out with the statement that "Thus, programs must be written for people to read, and only incidentally for machines to execute", and then goes on to use Scheme.

          The thing with lisp syntax is that, at first sight, it looks less intelligible than, say, C++ syntax. But once you get used to the parentheses, it actually is a LOT easier to read and write correctly. All the suggestive syntactic sugars of C/C++/whatever tend to have subtle interference p

      • I dunno, I always liked LISP. There's more typing but it seems logical to me. And the ease of transforming it to something else shouldn't be downplayed. I've got one API that I have to use frequently that uses LISP-like code for sales reports... and often I have to dump that into an Excel spreadsheet for some of my sales people... so I wrote a script that did it for me. It's actually really simple. I couldn't imagine doing that with any other language. I just copy the code, run my script, paste into excel

    • The Scheme syntax lends itself well to parallelization. You can parse the program into a tree, with the command as the parent node and the inputs as its children. If you don't allow any of the commands to have side effects, you can process each of the children's subtrees in parallel. This makes it trivial for the compiler to figure out how to decompose your program into multiple GPU kernel calls and forces the programmer to think in different terms to come up with an algorithm that fits in with Scheme's pro
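      A minimal sketch of that idea in C with OpenMP tasks - an illustration of parallel subtree evaluation, not Harlan's actual compilation strategy, and the expression-tree type here is made up for the example:

          /* Evaluate an expression tree, computing side-effect-free
             subtrees in parallel. Compile with: cc -fopenmp eval.c */
          #include <omp.h>
          #include <stdio.h>

          typedef struct Node {
              char op;                  /* '+' or '*' for interior nodes, 0 for leaves */
              double value;             /* used only when op == 0 */
              struct Node *left, *right;
          } Node;

          double eval(const Node *n) {
              if (n->op == 0) return n->value;       /* leaf */
              double l, r;
              #pragma omp task shared(l)             /* children are independent, */
              l = eval(n->left);                     /* so evaluate both at once  */
              #pragma omp task shared(r)
              r = eval(n->right);
              #pragma omp taskwait
              return n->op == '+' ? l + r : l * r;
          }

          int main(void) {
              Node a = {0, 2.0, NULL, NULL};
              Node b = {0, 3.0, NULL, NULL};
              Node c = {0, 4.0, NULL, NULL};
              Node mul = {'*', 0.0, &b, &c};         /* (* 3 4)       */
              Node add = {'+', 0.0, &a, &mul};       /* (+ 2 (* 3 4)) */
              double result;
              #pragma omp parallel
              #pragma omp single
              result = eval(&add);
              printf("%g\n", result);                /* prints 14 */
              return 0;
          }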
  • Change in thinking (Score:5, Informative)

    by dargaud ( 518470 ) <slashdot2@nOSpaM.gdargaud.net> on Friday July 05, 2013 @04:27AM (#44193217) Homepage
    I just started doing some GPU programming, and the change in thinking that it requires even for very simple things can be hard for programmers. I don't know if this language forces us to think in new terms, but here's a very simple example: we often use arrays of structures. Well, a GPU can optimize computations on arrays but not on structures, so it's better to use structures of arrays, even if that's less natural for the programmer. There are plenty of small examples like that which don't really depend on the language you use.
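    A minimal illustration of that transformation in C (the type and array names are made up for the example):

        #define N 1024

        /* Array of structures (AoS): natural to write, but the x fields
           are strided in memory, which defeats coalesced GPU loads. */
        struct point { float x, y, z; };
        struct point points_aos[N];

        /* Structure of arrays (SoA): each field is contiguous, so
           consecutive GPU threads reading x[i], x[i+1], ... touch
           consecutive addresses. */
        struct point_block {
            float x[N];
            float y[N];
            float z[N];
        };
        struct point_block points_soa;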
    • Well, a GPU can optimize computations on arrays but not on structures, so it's better to use structures of arrays, even if that's less natural for the programmer.

      It is only less natural for you because you've ignored the CPU's SIMD extensions all this time.

      My question is: if in all this time you have avoided the CPU's SIMD extensions, why is it at all important that you find the GPU's version of them less natural?

      (cue the folks who don't realize that SoA SSE code is a lot faster than AoS SSE code, but will now rabidly defend their suboptimal and thus mostly pointless usage of SSE)
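      For what it's worth, a sketch of what SoA buys you with SSE - an illustrative function, not from any particular codebase. With SoA, four consecutive x components load straight into one register; the AoS layout would first need them gathered from strided locations:

          #include <xmmintrin.h>   /* SSE intrinsics */

          /* Add the x components of two SoA point sets; ax and bx point
             at the contiguous x arrays, out receives the sums. */
          void add_x(const float *ax, const float *bx, float *out, int n) {
              int i;
              for (i = 0; i + 4 <= n; i += 4) {
                  __m128 a = _mm_loadu_ps(ax + i);   /* 4 x values at once */
                  __m128 b = _mm_loadu_ps(bx + i);
                  _mm_storeu_ps(out + i, _mm_add_ps(a, b));
              }
              for (; i < n; i++)                     /* scalar remainder */
                  out[i] = ax[i] + bx[i];
          }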

      • Replace "structure" with "object" and you'll see why most programmers think in terms of arrays of structures and not structures of arrays.

        Anyhow, for me that shows an intrinsic limitation of object oriented languages and why C still rules strong. When you run into the limitations of the hardware, you get to a point where object oriented languages are a limiting factor in optimization.

        When I started learning how to program neural networks in the 1980s, I realized that by turning object oriented programs insi

        • You've never used CLOS then, eh? The optimization you described is a classic use of the metaobject protocol to redefine the representation of instances through a metaclass. You get the performance boost without sacrificing the power of abstraction.

          Of course, the SML/Haskell folks would tell you OO is doomed, but only because using a proper type system and algebraic types allows the compiler to do all of the hard stuff for you...

    • I just started doing some GPU programming and the change in thinking that it requires even for very simple things can be hard for programmers.

      Except for Python/NumPy and Matlab programmers (and perhaps Fortran, idk, never used it).

      I was pleasantly surprised when I adapted my Python code (some image processing / neural network stuff) to use OpenCL, and without much effort achieved a 70% reduction in processing time.
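      For comparison, here is roughly what the same thing costs at the C level: a minimal OpenCL vector-add sketch (generic OpenCL boilerplate, not the poster's code; error checking omitted for brevity; link with -lOpenCL):

          #include <CL/cl.h>
          #include <stdio.h>

          /* The kernel itself is the short part... */
          static const char *src =
              "__kernel void add(__global const float *a,\n"
              "                  __global const float *b,\n"
              "                  __global float *c) {\n"
              "    int i = get_global_id(0);\n"
              "    c[i] = a[i] + b[i];\n"
              "}\n";

          int main(void) {
              enum { N = 1024 };
              float a[N], b[N], c[N];
              for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

              /* ...and this is the boilerplate the NumPy-side wrappers hide. */
              cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
              cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
              cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
              cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

              cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
              clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
              cl_kernel k = clCreateKernel(prog, "add", NULL);

              cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                         sizeof a, a, NULL);
              cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                         sizeof b, b, NULL);
              cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

              clSetKernelArg(k, 0, sizeof da, &da);
              clSetKernelArg(k, 1, sizeof db, &db);
              clSetKernelArg(k, 2, sizeof dc, &dc);

              size_t global = N;
              clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
              clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

              printf("c[10] = %g\n", c[10]);   /* 10 + 20 = 30 */
              return 0;
          }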

      • I just started doing some GPU programming and the change in thinking that it requires even for very simple things can be hard for programmers.

        Except for Python/NumPy and Matlab programmers (and perhaps Fortran, idk, never used it).

        Fortran had parallelizing compilers way, way before this "omg, dual core, we need to rethink everything about programming". I believe it was helped by the language's native matrix/array syntax. This is F90 and later, though; the earlier standards were horrible, and it's those that give Fortran a bad name even today.

    • by sconeu ( 64226 )

      Writing in Harlan changes your thinking.

      It turns you into a total douchebag. One who goes around insisting that everyone else stole and destroyed your ideas.

  • by Vincent77 ( 660967 ) on Friday July 05, 2013 @05:19AM (#44193349)

    There are several languages written on top of OpenCL - that is the whole idea of this API. But if you read the article, it seems this guy was the actual inventor of the wheel.

    The same thing happened when some guy made Rootbeer and let some marketeer write a similar article [slashdot.org]. It was suggested that you could just run existing Java code on the GPU, but that was not true at all: you had to rewrite the code against the Rootbeer API. This Harlan project is comparable: just beta software that has not yet run into the real limits of GPU computing, but still making big promises that, unlike its peers, it will actually fix the problem.
    I'm not saying it can't in the future, just that this article is a marketing piece with several promises of future advancements.

    Check out Aparapi [google.com] and VexCL [github.com], to name just two. There are loads [khronos.org] and loads [streamcomputing.eu] of these solutions - many of these wrappers are slowly advancing into higher-level languages, and they have been in the field a lot longer.

  • Ah...Comtrya! Comtrya!
  • Hmm (Score:2, Insightful)

    by DrXym ( 126579 )
    Without good formatting, Lisp code is practically unreadable thanks to all the parentheses, and even with it the language looks totally foreign to people brought up on C or Java. For example, all the computations are completely backasswards thanks to Polish notation. A better language for GPU programming would be one which at least retains the essence of C and provides convention-based techniques and inference to relieve some of the effort of writing out OpenCL kernels.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      And C is virtually unreadable to anyone brought up on Smalltalk and Ada, so what's your fucking point? It takes something like three days maximum to get used to prefix notation, so learn it if you want to use the tool, and get over your irrational and insubstantial syntax preferences.

      • by DrXym ( 126579 )
        My point, Einstein, is that C is the language that CUDA, Cell, OpenCL and OpenGL SL are derived from. So it's a rather useful property of MagicNewLanguage if it is similar to what people are accustomed to already, preferably allowing them to express the same concepts in a terser but similar form. Assuming, that is, that the person who created MagicNewLanguage stands any hope of persuading people to use it.
        • by DrXym ( 126579 )
          Oh and add HLSL and Renderscript for good measure.
        • My point, Einstein, is that C is the language that CUDA, Cell, OpenCL and OpenGL SL are derived from. So it's a rather useful property of MagicNewLanguage if it is similar to what people are accustomed to already, preferably allowing them to express the same concepts in a terser but similar form.

          And how, exactly speaking, would the MagicNewLanguage do that? Because we can probably assume that OpenCL etc. were written by people who knew what they were doing, and are thus already as good, or nearly as good, as a C-

      • I had the misfortune to inherit a series of instrument controllers and data collection routines written in Scheme, with hooks into legacy Fortran. A couple of engineers had kept their love of Scheme since university, and 25 years later elected to implement production code in it. Why? Because of the elegance of the grammar, which simplified their job .... because they were steeped in Scheme. When they left the shop, there was no one among the other 50 experienced engineers who had been anywhere near Lisp/Sch

        • by jbolden ( 176878 )

          What a terrible pity. A bunch of engineers got to work on a design they enjoyed and have enthusiasm for their work and also be more productive. Can't possibly understand why a PM would allow that since whoever had to maintain it would have to learn a few new things.

      • by tyrione ( 134248 )

        And C is virtually unreadable to anyone brought up on Smalltalk and Ada, so what's your fucking point? It takes something like three days maximum to get used to prefix notation, so learn it if you want to use the tool, and get over your irrational and insubstantial syntax preferences.

        No one brought up on Smalltalk hasn't also been brought up on C. Ada, perhaps, but then again anyone taught Ada was exposed to Fortran and most likely moved on to C, which once again predates effing Ada and Smalltalk. You would have been far better off with Pascal as an example.

    • by splutty ( 43475 )

      I suspect one of the main reasons for using Lisp/Scheme-style notation is that almost all GPU programming anyone would want to do is (based on) mathematical equations.

      For mathematicians, a Lisp notation is actually a lot more logical and easier than a C notation. (At least the older ones :)

      The whole concept of iterations in calculations is a bit awkward in C (with all the parentheses, yes...) in comparison to Lisp (where they're fairly well delineated blocks if properly indented)

      Yes, you can mostly do the

      • by DrXym ( 126579 )
        Almost all GPU and GPGPU programming is around a C-like language. There is some Fortran support too for CUDA and a smattering in OpenCL. I expect most developers would be using C though.
    • Lisps, like Scheme and Harlan, can be made readable.

      Look at the Readable Lisp S-expressions Project [sourceforge.net], which extends traditional Lisp notation and adds capabilities such as infix and traditional f(x) style formatting. We provide open source software (MIT license) implementations, including one that can be used as a preprocessor. So, you can have your Harlan and stuff like infix too.

      Without these syntax improvements you're right. Very few developers will be happy writing complicated math expressions wi

    • People always say it's the parens, but it's not. It's the prefix notation. The first thing that your program does is buried way down in the tree. IMHO, that's why most people find it hard to analyze non-trivial Lisp functions.

      (print (eval (read)))

      OK, easy. You gotta read, then evaluate, then print the result. In this regard, Lisp is like Forth--not too hard to understand if you write simple, meaningful functions and combine them. I think more Forth programmers are aware of the problem though. Lisp pr

        • Is that really any different from any other eager language, though? The C equivalent would be print (eval (read (STDIN)));. All you are doing is shifting the location of the parentheses and adding more punctuation.

        • The C equivalent would be print (eval (read (STDIN)));

          Hmmmm... In C, print isn't standard; printf is int printf(const char *fmt, ...). Your eval function would have to return the result as a string of some kind. You could have it be a pointer to a static chunk of memory that holds the string, but then it wouldn't be thread safe (forgetting for a moment that threads aren't spec'd out in C; let's say we're on POSIX for the sake of argument). So if it's thread safe you could return an allocated string, but then how do you free it?

          Because of
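          A sketch of that ownership problem (hypothetical names; C has no standard eval, so the evaluator here is a stub):

              #include <stdio.h>
              #include <stdlib.h>
              #include <string.h>

              /* Returns a heap-allocated string; the caller must free it.
                 strdup is POSIX, which fits the thread's assumption. */
              char *eval(const char *expr) {
                  return strdup(expr);   /* stub standing in for a real evaluator */
              }

              int main(void) {
                  char line[256];
                  if (fgets(line, sizeof line, stdin)) {  /* read  */
                      char *result = eval(line);          /* eval  */
                      printf("%s", result);               /* print */
                      free(result);                       /* the step Lisp hides */
                  }
                  return 0;
              }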

          • Oh, the danger of spitballing code online. About an hour later I'm watching TV and thinking about my last post. Then I recall that the order of evaluation in C is unspecified.

            Thus, even if you did try to write C in prefix style, the order in which things are evaluated is potentially botched. In a pure functional language this doesn't matter. Either that, or Lisp specifies an order of evaluation even if you might have side effects. I think. I'd better shut my mouth before I look any more stupid; but t
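            The unspecified-order point is easy to demonstrate with a contrived example:

                #include <stdio.h>

                static int counter = 0;
                int f(void) { return counter++ * 10; }
                int g(void) { return counter++; }
                int sum(int a, int b) { return a + b; }

                int main(void) {
                    /* C leaves the order in which f() and g() are called
                       unspecified, so this may print 1 or 10 depending on
                       the compiler. */
                    printf("%d\n", sum(f(), g()));
                    return 0;
                }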

      • by jbolden ( 176878 )

        The main thing with good functional code is that you don't think in terms of "the first thing the program does". You shouldn't care about order at all; what's important is the hierarchy. That's why Haskell introduced do notation [wikibooks.org], to allow imperative-style statements where order does matter.

  • Faster always trumps "easier" in the end. Few languages are programmatically easier than C; it remains to be seen whether that is the case here. Often "easier" means "able to do things without an underlying understanding of the architecture", and that's not conducive to Good Eats. (apologies to Alton Brown)

    I had a brief foray into Java, but I am amazed at the mileage I've gotten out of the C programming language and its relatives (C++, ObjC).

    • by HiThere ( 15173 )

      Context is important.

      C is great for small pieces of code. It gets increasingly awkward as the size increases. So you need to modularize, which is what object-oriented languages do. It's also what functional languages do, though they do it differently. I don't think either of those is the best choice for an MPU-heavy environment. To me that sounds like a dataflow language would be best. But I can't think of any extant ones that aren't either moribund or so narrowly specialized that they might as well be. (Few l

      • by jbolden ( 176878 )

        So far, functional is doing amazingly well at parallel. Because the languages are side-effect free, you can embed execution strategies right in the code as a high-level modifier that can be easily profiled / changed.

        • by HiThere ( 15173 )

          I agree, they *ought* to be good at parallel. But often they aren't, even if I don't know why. E.g. Racket Scheme has wonderful parallel constructs, but if you read the documentation carefully you discover that those constructs actually only run in one thread. (I'm particularly thinking about "futures" here.) And if I want to start separate isolated processes...I can do that in Ruby or Python or C or ...well, anything that can handle network connections to the same machine.

          Usually, I'll admit, the docume

  • by fygment ( 444210 ) on Friday July 05, 2013 @08:03AM (#44193837)

    LISP? Really?! Were they _trying_ to make the GPU less accessible?

    • It was released by an academic researcher. It wouldn't be considered valid for career advancement unless he demonstrated his hardon for Lisp like everyone else in the department.

  • ...and I must scream.

  • And people wonder how come getting a Ph.D in CS versus Mechanical Engineering, EE, ChemE, Physics, etc., is a circle jerk waste of time.
