Intel Programming

Intel Developing 'Data Parallel C++' As Part of OneAPI Initiative (phoronix.com) 81

Intel's oneAPI project aims "to simplify application development across diverse computing architectures."

Now an anonymous reader quotes Phoronix: Intel announced an interesting development in their oneAPI initiative: they are developing a new programming language/dialect. Intel originally began talking about oneAPI last December for optimizing code across CPUs / GPUs / FPGAs and as part of "no transistor left behind...."
The article then acknowledges "the SYCL single-source C++ programming standard from The Khronos Group we've expected Intel to use as their basis for oneAPI," before noting Intel is going "a bit beyond..."

"Data Parallel C++ (DPC++) is their 'new direct programming language' aiming to be an open, cross-industry standard and based on C++ and incorporating SYCL."

Comments Filter:
  • Not bad (Score:2, Insightful)

    by Anonymous Coward

    Despite various stumbles in recent years, Intel is still playing some of its cards right. In this case, whether it turns out to be useful will depend on whether the other industry players adopt it.

    • They seem to be reinventing OpenCL, which was made for exactly this. Not sure how that's playing their cards right, but here you are saying it...
  • C== would be a more proper name for a parallel language...

    • by Memnos ( 937795 )

      Would you pronounce that, "sequels"?

  • CUDA (Score:2, Interesting)

    There have already been a number of attempts at this. You've got OpenMP, which is pretty spiffy: take a normal C++ program, tack a #pragma omp parallel for onto a for loop, and get parallelism more or less for free, at least of the easy, data-parallel sort (see the sketch after this thread).

    There's also OpenCL, which never really went very far.

    And of course there's the 500 lb gorilla, CUDA. Despite AMD GPUs having piles of RISC cores with wide vector units and documented instruction sets, AMD has gone pretty much nowhere in the GPGPU world compared to NVIDIA.

    • by godrik ( 1287354 )

      I like OpenMP, but it is not good for everything. The loops are pretty low-overhead, but not every parallel program is a loop. The tasking model is nice, but its semantics aren't the best, and it does not compose well with parallel for loops.

      CUDA is nice for programming GPUs, or any kind of highly regular code that maps onto massively parallel vector machines. But try to do any kind of unbalanced parallelism in CUDA and it is a pain: asyncs, futures, or most forms of MIMD processing aren't going to map well to CUDA.


    • by Anonymous Coward

      The problem with CUDA is that nVIDIA, in its godly wisdom, decided to keep it closed, patented, and only supported on nVIDIA GPUs. That means it will never be used for anything else. And while it's quite popular and useful, and relatively easy to use - it is still limited to that niche and cannot expand further.
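    As a reference point for the OpenMP discussion in this thread, here is a minimal sketch of both the "pragma on a loop" style and the tasking model. It assumes a compiler run with OpenMP enabled (for example -fopenmp on GCC/Clang); the function names are made up for the example.

        #include <vector>

        // Data-parallel loop: the pragma splits iterations across threads.
        void scale(std::vector<double> &v, double s) {
            #pragma omp parallel for
            for (long i = 0; i < static_cast<long>(v.size()); ++i)
                v[i] *= s;
        }

        // Irregular work: OpenMP tasks, shown with a naive recursive Fibonacci.
        long fib(long n) {
            if (n < 2) return n;
            long x, y;
            #pragma omp task shared(x)
            x = fib(n - 1);
            #pragma omp task shared(y)
            y = fib(n - 2);
            #pragma omp taskwait   // wait for both child tasks
            return x + y;
        }

        // Tasks need an enclosing parallel region; one thread seeds the work.
        long parallel_fib(long n) {
            long result;
            #pragma omp parallel
            #pragma omp single
            result = fib(n);
            return result;
        }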

  • by Anonymous Coward

    Granted, POSIX could use a little upgrade to suit today's world (and I don't mean any of that i/web cancer), but no need to reinvent the wheel!
    (That would be the WhatWG's job. Literally.)

  • With Ryzen 12-core/24-thread CPUs mere weeks away, all this would do is create code that works better on AMD hardware.

  • by Tough Love ( 215404 ) on Saturday June 22, 2019 @02:35PM (#58805430)

    So this is like C with Goroutines? Help me here.

  • ICC (Intel's compiler) could not even handle basic vector extensions the last time I checked. You still have to use Intel's poorly named intrinsic functions, which aren't the least bit portable to other platforms (perhaps that's the point). On top of that, the intrinsics happily operate on vectors as if they are all the same, but if you properly typedef your vectors, all of a sudden you need to cast them or use a special compiler flag (see the sketch after this thread). Intel is (intentionally?) clueless when it comes to this stuff - perhaps to
    • To be fair, Intel's ICC does auto-vectorization; if you are using the intrinsics at this point, you are probably doing it wrong. Intel's ICC also cripples execution on AMD hardware, though, so there is a good reason to avoid it.
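    For anyone who has not seen the distinction being argued about here, the sketch below contrasts a GCC/Clang vector-extension typedef (plain operators, retargetable by the compiler) with the equivalent Intel AVX intrinsic spelling (tied to x86 AVX). It assumes an AVX-capable target and a flag such as -mavx; the function names are invented for the example.

        #include <immintrin.h>   // Intel intrinsics (AVX here)

        // GCC/Clang vector extension: a typedef'd 8-float vector.
        // Arithmetic uses ordinary operators and ports across ISAs.
        typedef float v8sf __attribute__((vector_size(32)));

        v8sf add_ext(v8sf a, v8sf b) {
            return a + b;                 // compiler picks the instructions
        }

        // The same operation written with the AVX intrinsic type and call;
        // this spelling is specific to the x86 AVX instruction set.
        __m256 add_intrin(__m256 a, __m256 b) {
            return _mm256_add_ps(a, b);
        }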
