Programming Linux

LLVM 7.0 Released: Better CPU Support, AMDGPU Vega 20; Clang 7.0 Gets FMV and OpenCL C++ (phoronix.com) 76

LLVM release manager Hans Wennborg announced Wednesday the official availability of the LLVM 7.0 compiler stack as well as associated sub-projects including the Clang 7.0 C/C++ compiler front-end, Compiler-RT, libc++, libunwind, LLDB, and others. From a report: There is a lot of LLVM improvements ranging from CPU improvements for many different architectures, Vega 20 support among many other AMDGPU back-end improvements, the new machine code analyzer utility, and more. The notable Clang C/C++ compiler has picked up support for function multi-versioning (FMV), initial OpenCL C++ support, and many other additions. See my LLVM 7.0 / Clang 7.0 feature overview for more details on the changes with this six-month open-source compiler stack update. Wennborg's release statement can be read on the llvm-announce list.
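To make the function multi-versioning (FMV) item concrete, here is a minimal sketch of what it looks like with the GCC-compatible "target" attribute, which Clang 7 accepts on x86/x86-64 ELF targets; the sum() function and the chosen feature flags are made up for illustration, not taken from the release notes:

    // fmv.cpp -- build with: clang++ -std=c++11 fmv.cpp
    // Clang emits one clone of sum() per target attribute and dispatches to the
    // best one at program load time, based on the CPU it is actually running on.
    __attribute__((target("avx2")))
    int sum(const int *v, int n) {        // clone used on AVX2-capable CPUs
        int s = 0;
        for (int i = 0; i < n; ++i) s += v[i];
        return s;
    }

    __attribute__((target("default")))
    int sum(const int *v, int n) {        // required fallback for everything else
        int s = 0;
        for (int i = 0; i < n; ++i) s += v[i];
        return s;
    }

    int main() {
        int v[4] = {1, 2, 3, 4};
        return sum(v, 4);                 // one call site, runtime-selected clone
    }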
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Does LLVM have a code of conduct? If not, it needs one.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Yes, it has one that openly discriminates against white males.

      A principal developer has already walked away from the project because of it.

      • And literacy. "There _is_ a lot of LLVM improvements". Why should I believe it's going to handle a set of very picky language specs if it can't speak English?
    • by Desler ( 1608317 )

      Yes.

  • by jellomizer ( 103300 ) on Wednesday September 19, 2018 @09:54AM (#57341838)

    I finally got myself a computer with an above-average video card (NVIDIA) and have been playing with CUDA.
    It is great, having access to thousands of parallel CPU's can really bring my execution time of code down a Big O level.

    However, what I am doing only seems to work with Nvidia chips, and AMD GPUs will probably need different code as well.
    The main point of C/C++ is write once compile anywhere. However, at this point GPU support is still very shaky, so any program that uses the GPU for computation would need to be coded multiple times for different platforms (or at least with a switch inside the code for the platform-particular issues, as in the sketch below).
    It reminds me a lot of early C, where you needed to switch to assembly language a lot more, because the default core sets wasn't robust enough for many actions.
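    A rough sketch of the kind of "switch inside the code" described above; the macro names and the vector_add() routine are hypothetical, chosen only to illustrate one portable entry point backed by per-platform implementations:

        // gpu_dispatch.h -- hypothetical build-time back-end selection
        #pragma once

        #if defined(MYAPP_USE_CUDA)
        // implemented in a .cu file built with nvcc (or clang's CUDA support)
        void vector_add(const float *a, const float *b, float *out, int n);
        #elif defined(MYAPP_USE_OPENCL)
        // implemented in a host file that compiles and enqueues an OpenCL kernel
        void vector_add(const float *a, const float *b, float *out, int n);
        #else
        // portable CPU fallback, always available
        inline void vector_add(const float *a, const float *b, float *out, int n) {
            for (int i = 0; i < n; ++i)
                out[i] = a[i] + b[i];
        }
        #endif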

    • by Anonymous Coward on Wednesday September 19, 2018 @10:06AM (#57341926)

      It is great, having access to thousands of parallel CPU's can really bring my execution time of code down a Big O level.

      That's not how Big O works...

      • by Anonymous Coward on Wednesday September 19, 2018 @10:08AM (#57341944)
        That's what she said.
      • The top single-CPU sorting algorithm can be performed at a speed of O(n).
        If you can have a processor for nearly every one of your data elements, a parallel algorithm such as a Shear Sort can give you a speed O(log(n)) because in essence the longest part of the sort is the diagonal of the square.

        When you have a lot of cores, like 1024, you can really program your algorithm in a way that will bring down the Big O for normal-sized data sets.

        • by Anonymous Coward

          Yeah, but that's still not how big O works...

        • Re: (Score:2, Informative)

          by Anonymous Coward
          WTF is "a speed O(log(n)) "? Throwing more cores at a problem does not ever reduce the number of operations. Back to the "speed" thing: O(N^100) can be faster than O(1). It's not an issue of speed, but an issue of complexity. All parallel algorithms can be run serially, but not vice versa. And what are you smoking that you think you can reduce the minimum complexity by throwing cores at a problem?
        • by Anonymous Coward

          I understand what you mean, but please don't say it that way. People who understand Big O notation will rightfully complain about abuses like that. What you're saying is technically wrong and not a good way of expressing what you mean. It makes you look like an uneducated hack spewing buzzwords. You're throwing massive amounts of (parallel) processing power at a problem, and that can make higher-complexity algorithms feasible for certain problem sizes, but this effect is precisely what Big O notation is meant to abstract away.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Isn't that what OpenCL is?

    • by UnknownSoldier ( 67820 ) on Wednesday September 19, 2018 @10:38AM (#57342150)

      Uhm, hello, that's what OpenCL provides - a standardized, vendor-neutral way to access the GPU.
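      To give a feel for what that vendor-neutral access looks like, here is a minimal sketch that just enumerates whatever OpenCL platforms and devices are installed (NVIDIA, AMD, Intel, ...), with no vendor-specific calls; it assumes an OpenCL SDK/ICD loader is present, links with -lOpenCL, and omits error handling:

          // cldevices.cpp -- build with: clang++ cldevices.cpp -lOpenCL
          #include <CL/cl.h>
          #include <cstdio>
          #include <vector>

          int main() {
              cl_uint np = 0;
              clGetPlatformIDs(0, nullptr, &np);              // count platforms
              std::vector<cl_platform_id> platforms(np);
              clGetPlatformIDs(np, platforms.data(), nullptr);

              for (cl_platform_id p : platforms) {
                  char pname[256] = {0};
                  clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);
                  std::printf("Platform: %s\n", pname);

                  cl_uint nd = 0;
                  clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &nd);
                  std::vector<cl_device_id> devices(nd);
                  clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, nd, devices.data(), nullptr);

                  for (cl_device_id d : devices) {
                      char dname[256] = {0};
                      clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
                      std::printf("  Device: %s\n", dname);
                  }
              }
              return 0;
          }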

    • "The main point of C/C++ is write once compile anywhere."

      Since when?

      If you have new or different hardware you simply have to deal with it.

      The C++ language Standard doesn't deal with specific hardware abstraction.

      It is the same as C. You have to deal with new/different hardware, and it is not because "the default core sets wasn't robust enough for many actions".

      CUDA is an API written by Nvidia.

      Hardware manufacturers do not have strong incentives to write libraries that will work for their competitors' hardware.
    • by Guybrush_T ( 980074 ) on Wednesday September 19, 2018 @11:30AM (#57342598)

      Well, those were also my thoughts when I wrote my first CUDA program: apart from some new implicit structures (threadIdx / blockIdx / ...), CUDA is just C++.

      However, writing a program that has even decent performance in CUDA is very different from writing CPU code: you have to twist your mind to think wide (lots of slow but very synchronous cores) instead of long (several unsynchronized but fast cores). Usually, it makes the optimized code quite different, and it is hard to abstract (see the sketch after this comment).

      Sure, there are some cases that can easily be described to work well in both models, like "process this 3D matrix." But there are things for which the language cannot do much, because you really need to optimize for a specific architecture that has made important and distinctive design choices in its memory model and compute architecture.

      So in the end, writing CUDA or OpenCL is likely not a 20-year investment. Things are moving fast, and maybe GPUs will converge to a similar model in the future (just as there used to be very different types of CPUs at some point), but for now, writing CUDA or OpenCL code is optimizing for a specific architecture. It is somewhat expensive, but it makes a big difference.
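      To make the "wide vs. long" point above concrete, here is a minimal sketch of the same saxpy-style update in both shapes; the OpenCL C kernel is shown as a plain string (a CUDA kernel would look nearly identical, with threadIdx/blockIdx in place of get_global_id), and the buffer/queue boilerplate needed to actually launch it is omitted:

          // "Long": one or a few fast cores walk the whole array in a loop.
          void saxpy_cpu(float a, const float *x, float *y, int n) {
              for (int i = 0; i < n; ++i)
                  y[i] = a * x[i] + y[i];
          }

          // "Wide": thousands of slow work-items each handle a single element;
          // the loop disappears and becomes the launch geometry instead.
          static const char *saxpy_kernel_src = R"CLC(
          __kernel void saxpy(float a, __global const float *x, __global float *y) {
              int i = get_global_id(0);   // this work-item's element index
              y[i] = a * x[i] + y[i];
          }
          )CLC";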

    • The main point of C/C++ is write once compile anywhere.

      That has never been "the main point of C/C++".

      The main point of C is "let programmers get their work done, and give them access to low-level features when they need to".

      The main point of C++ is OOP with C-like features.

    • by Anonymous Coward

      "bring my execution time of code down a Big O level" That's a nonsensical statement.

    • I used to put my hopes in HSAIL, but now I'm rather in SPIR(-V) territory. If Vulkan becomes widespread, it shouldn't matter whether you like C, C++, or anything else. Of course, unless nVidia finds ways to screw things up for everyone again, as they like to do.
  • by Anonymous Coward

    They kicked out the creator. Remember that.

  • ...Just last week I installed the previous version of LLVM on my laptop, configured Code::Blocks, and compiled all the libraries I use for my development. Now it's back to square one again!
