LLVM 7.0 Released: Better CPU Support, AMDGPU Vega 20; Clang 7.0 Gets FMV and OpenCL C++ (phoronix.com) 76
LLVM release manager Hans Wennborg announced Wednesday the official availability of the LLVM 7.0 compiler stack as well as associated sub-projects including the Clang 7.0 C/C++ compiler front-end, Compiler-RT, libc++, libunwind, LLDB, and others. From a report: There are many LLVM improvements, ranging from CPU support work for many different architectures and Vega 20 support among other AMDGPU back-end improvements to the new machine code analyzer utility, and more. The notable Clang C/C++ compiler has picked up support for function multi-versioning (FMV), initial OpenCL C++ support, and many other additions. See my LLVM 7.0 / Clang 7.0 feature overview for more details on the changes in this six-month open-source compiler stack update. Wennborg's release statement can be read on the llvm-announce list.
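For a sense of what function multi-versioning (FMV) looks like in source, here is a minimal sketch of the GCC-style target-attribute form that Clang is adopting (the function name and bodies are made up for illustration; check the release notes for exactly which spellings Clang 7 accepts). Several definitions share one name, each tagged with a target, and the version that best matches the CPU the program actually runs on is selected automatically:

    #include <x86intrin.h>

    __attribute__((target("default")))
    unsigned popcount32(unsigned x) {        /* portable fallback version */
        unsigned c = 0;
        while (x) { x &= x - 1; ++c; }
        return c;
    }

    __attribute__((target("popcnt")))
    unsigned popcount32(unsigned x) {        /* chosen when the CPU has POPCNT */
        return (unsigned)_mm_popcnt_u32(x);
    }

The same mechanism is what makes the crypto use case mentioned further down the thread attractive: one binary can ship an AES-NI or RDRAND path alongside a generic one without hand-rolled dispatch code.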
But does it have a code of conduct (Score:2, Funny)
if not, it needs one
Re: (Score:2, Informative)
Yes, it has one that openly discriminates against white males.
A principal developer has already walked away from the project because of it.
Re: (Score:1)
Their Code of Conduct says no such thing (Score:1)
Maybe you're such a dense motherfucker that you missed it, so here's what you glossed over, dicknozzle:
[ nonsense redacted ]
I normally wouldn't reply to such an obvious troll, however, to avoid the off-chance that someone actually believes this drivel ...
As a Gentoo user I prefer gcc over llvm for most things (faster code, better support by the distro), so I'm not exactly a fanboi; however, your post appears to be complete fiction (as well as a rather poorly constructed troll). The LLVM code of conduct contains no such language.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Yes.
Should GPU Coding become more standardized? (Score:4, Interesting)
I finally got myself a computer with an above-average video card (NVIDIA) and have been playing with the CUDA core logic.
It is great; having access to thousands of parallel CPUs can really bring my code's execution time down a Big O level.
However, what I am doing only seems to work with Nvidia chips, and AMD GPUs will probably need different coding as well.
The main point of C/C++ is write once, compile anywhere. However, at this point GPU support is still very shaky, so any program that uses the GPU for calculation would need to be coded multiple times for different platforms (or at least with a switch inside the code for the platform-particular issues).
It reminds me a lot of early C, where you needed to switch to assembly language a lot more because the default core sets wasn't robust enough for many actions.
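As a hedged illustration of that "switch inside the code" approach (the USE_CUDA / USE_OPENCL macros and the two launch_* helpers are invented for this sketch, not part of any real API), the per-platform plumbing tends to look something like this:

    void launch_saxpy_cuda(int n, float a, const float *x, float *y);    /* hypothetical helper */
    void launch_saxpy_opencl(int n, float a, const float *x, float *y);  /* hypothetical helper */

    void scale_add(int n, float a, const float *x, float *y) {
    #if defined(USE_CUDA)
        launch_saxpy_cuda(n, a, x, y);        /* NVIDIA-specific path */
    #elif defined(USE_OPENCL)
        launch_saxpy_opencl(n, a, x, y);      /* vendor-neutral path */
    #else
        for (int i = 0; i < n; ++i)           /* plain CPU fallback */
            y[i] = a * x[i] + y[i];
    #endif
    }

Every extra backend multiplies the code that has to be written, tested, and tuned, which is exactly the poster's complaint.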
Re:Should GPU Coding become more standardized? (Score:5, Informative)
It is great; having access to thousands of parallel CPUs can really bring my code's execution time down a Big O level.
That's not how Big O works...
Re:Should GPU Coding become more standardized? (Score:4, Funny)
Re: (Score:2)
The top single-CPU sorting algorithm can be performed at a speed of O(n).
If you can have a processor for nearly every one of your data elements, a parallel algorithm such as a shear sort can give you a speed of O(log(n)), because in essence the longest part of the sort is the diagonal of the square.
When you have a lot of cores, like 1024, you can really program your algorithm in a way that will bring down the Big O for normal-sized data sets.
Re: (Score:1)
Yeah, but that's still not how big O works...
Re: (Score:2, Informative)
Re: (Score:1)
I understand what you mean, but please don't say it that way. People who understand Big O notation will rightfully complain about abuses like that. What you're saying is technically wrong and not a good way of expressing what you mean. It makes you look like an uneducated hack spewing buzzwords. You're throwing massive amounts of (parallel) processing power at a problem, and that can make higher-complexity algorithms feasible for certain problem sizes, but this effect is precisely what Big O notation is meant to abstract away.
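To make that concrete (a worked illustration, not from the original post): sorting n items with merge sort costs on the order of n*log(n) steps; spreading the work over p cores cuts the wall-clock time to roughly (n*log(n))/p plus some merging overhead. For any fixed p, whether 4 or 1024, that is still O(n*log(n)), because Big O discards constant factors by definition. The asymptotic class only changes in models where the number of processors is allowed to grow with the input size n.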
Re: (Score:2, Informative)
Isn't that what OpenCL is?
Re:Should GPU Coding become more standardized? (Score:4, Informative)
https://wiki.tiker.net/CudaVsO... [tiker.net]
https://create.pro/blog/opencl... [create.pro]
Re: (Score:2)
Re:Should GPU Coding become more standardized? (Score:5, Informative)
Uhm, hello, that's what OpenCL provides - a standardized, vendor-neutral way to access the GPU.
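For flavor, here is a hedged sketch of what that looks like in practice (the kernel and variable names are invented for illustration): the device code is written once in OpenCL C, carried around as a string, and compiled at run time for whatever device the host finds, rather than being tied to one vendor's toolchain.

    /* OpenCL C kernel source kept as a C++ raw string; names are illustrative. */
    static const char *kSaxpySource = R"CLC(
    __kernel void saxpy(int n, float a,
                        __global const float *x, __global float *y) {
        int i = (int)get_global_id(0);   /* one work-item per element */
        if (i < n)
            y[i] = a * x[i] + y[i];
    }
    )CLC";
    /* The host hands this to clCreateProgramWithSource() and clBuildProgram(),
       so the same source can run on any conformant OpenCL device. */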
Nothing to do with C/C++ (Score:3)
Since when?
If you have new or different hardware you simply have to deal with it.
The C++ language Standard doesn't deal with specific hardware abstraction.
It is the same as C. You have to deal with new/different hardware, and it is not because "the default core sets wasn't robust enough for many actions".
CUDA is an API written by Nvidia.
Hardware manufacturers do not have strong incentives to write libraries that will work for their competitors' hardware.
Re:Should GPU Coding become more standardized? (Score:4, Interesting)
Well, that was also my thought when I wrote my first CUDA program: apart from some new implicit structures (threadIdx / blockIdx / ...), CUDA is just C++.
However, writing a program that has even just decent performance in CUDA is very different from writing CPU code: you have to twist your mind to think wide (lots of slow but very synchronous cores) instead of long (a few unsynchronized but fast cores). Usually, it makes the optimized code quite different, and it is hard to abstract.
Sure, there are some cases that can easily be described so that they work well in both models, like "process this 3D matrix." But there are things for which the language cannot do much, because you really need to optimize for a specific architecture that has made important and distinctive design choices in its memory model and compute architecture.
So in the end, writing CUDA or OpenCL is likely not a 20-year investment. Things are moving fast, and maybe GPUs will converge to a similar model in the future (just as there used to be very different types of CPUs at some point), but for now, writing CUDA or OpenCL code is optimizing for a specific architecture. It is somewhat expensive but makes a big difference.
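As a small, hedged sketch of that "wide" style (the kernel name and launch parameters are made up for illustration): each thread handles a single array element, and the launch configuration is sized so the whole array is covered in one go.

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   /* one array index per thread */
        if (i < n)                                       /* guard the partial last block */
            y[i] = a * x[i] + y[i];
    }

    /* Launched "wide": many short-lived threads instead of one long loop, e.g.
       saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y); */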
Re: (Score:2)
That has never been "the main point of C/C++".
The main point of C is "let programmers get their work done, and give them access to low-level features when they need to".
The main point of C++ is OOP with C-like features.
Re: (Score:1)
"bring my execution time of code down a Big O level" That's a nonsensical statement.
Re: (Score:2)
Re: (Score:2)
They kicked out the creator. Remember that. (Score:1, Informative)
They kicked out the creator. Remember that.
Re: They kicked out the creator. Remember that. (Score:3)
Re: (Score:2)
... ah, great point! None of these improvements matter; you have identified the one salient issue!
FMV? *looks up FMV* - Oooh, I want that for my crypto libraries (using rdrand, aesni, etc.).
FMV matters.
Re: They kicked out the creator. Remember that. (Score:3)
Re: (Score:2)
I got it. But it was a convenient place to unload the FMV experience I had been through minutes earlier.
Re: They kicked out the creator. Remember that. (Score:2)
Re: (Score:1)
Well, using caps in abbreviations is not shouting; it is actually the correct way of doing it.
Re: (Score:2)
Scrollback (Score:2)
You're just Bashing them now.
Re: (Score:2)
Your line was buffer than mine. I suspect you have a history of this.
Damn.... (Score:2)