.NET Native Compilation Preview Released

atrader42 (687933) writes "Microsoft announced a new .NET compiler that compiles .NET code to native code using the C++ compiler backend. It delivers C++-like performance while still supporting .NET features such as garbage collection, generics, and reflection. Popular apps have been measured to start up to 60% faster and use 15% less memory. The preview currently only supports Windows Store applications, but it is expected to cover more .NET applications in the long term. A preview of the compiler is available for download now. (Caveat: I both work for MS and read Slashdot.)"
  • Huh? (Score:5, Funny)

    by Desler ( 1608317 ) on Thursday April 03, 2014 @04:33PM (#46653163)

    Popular apps have been measured to start up to 60% and use 15% less memory.

    So they no longer fully start up? Why is that a benefit?

  • Open source compiler (Score:5, Interesting)

    by rabtech ( 223758 ) on Thursday April 03, 2014 @04:42PM (#46653325) Homepage

    They also open-sourced their new C# compiler:

    http://roslyn.codeplex.com/ [codeplex.com]

  • by Kenja ( 541830 ) on Thursday April 03, 2014 @04:47PM (#46653397)
    This can only be a good thing, as every game I install these days also installs the redistributable files for .NET.
    • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Thursday April 03, 2014 @05:16PM (#46653855) Homepage
      Yep! From their FAQ:

      apps will get deployed on end-user devices as fully self-contained natively compiled code (when .NET Native enters production), and will not have a dependency on the .NET Framework on the target device/machine.

    • every game I install these days also installs the redistributable files for .NET.

      Do they actually install them, or are they merely included in the installer packaging and installed if and only if the files are missing or outdated?

  • Translator? (Score:3, Interesting)

    by Anonymous Coward on Thursday April 03, 2014 @04:48PM (#46653419)

    compiles .NET code to native code using the C++ compiler backend

    Can it output the generated C++ source?

    • Re:Translator? (Score:4, Informative)

      by Calavar ( 1587721 ) on Thursday April 03, 2014 @05:05PM (#46653683)
      Correct me if I am mistaken, but I'm pretty sure that if they are using the backend they are skipping the lexing and parsing steps and going straight to the generation of the intermediate representation. That would mean that there is no generated C++ code to see.
      • Correct me if I am mistaken, but I'm pretty sure that if they are using the backend they are skipping the lexing and parsing steps and going straight to the generation of the intermediate representation. That would mean that there is no generated C++ code to see.

        That is precisely what they announced. No correction needed. They use that C++ backend to emit code for specific processor architectures (and core counts) and do global optimizations.

  • The raw speed of the code might actually diminish, since the .NET runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc.). On the other hand, startup would benefit: no more need to just-in-time compile, and no memory needed for the compilation either. Then again, the runtime might use some cycles to further optimize code during execution, whereas with this approach the code won't change any further. In any case, great for instant startup.

    • by robmv ( 855035 )

      Well, the ART preview compiler on Android 4.4 runs on the device, so it compiles to native there, but I expect Google will accelerate that step by precompiling on their servers, taking device characteristics into account. Microsoft could do that too if they wanted.

      • Yes, the comments on TFA say that you have to upload MSIL to the app store server, which will compile it to native code. I expect they will compile to all known architectures at that point.

        So the app store will become a compile farm... I like that; I remember when SourceForge did exactly that.

    • The raw speed of the code might actually diminish since the .net runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc).

      MS announced that developers still need to pass hints to the compiler on what architecture, CPU core count, available memory, etc., to compile for. You can (cross-)compile to multiple architectures.

      This technology is already at work when deploying apps for Windows Phone 8: Developers pass IL code to the store, native compilation is performed per device type in the cloud (CPU architecture, OS version, memory, ...) and the binary is passed to the device.

    • The raw speed of the code might actually diminish since the .net runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc).

      I hate to break it to you, but the original Pentium is now obsolete. Compiling for a specific CPU variant doesn't help much these days. I'm also unaware of any JIT compiler that adjusts the code for available RAM. You might have a point about the phase of the moon.

      Basically you're citing the standard tropes used in defense of JIT. Theoretically it can make some difference, but when I ask for benchmarks showing code on a JIT running faster than straight-to-binary, all I hear is crickets.

      • Compiling for a specific CPU variant doesn't help much these days.

        That depends on whether you use SSE, AVX, AES-NI or any of the other significant performance-enhancing additions to recent CPUs.

        Show me some code compiled with AES-NI support that isn't significantly faster than code without hardware accelerated AES instructions.
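
        A rough way to see that gap from C#: the .NET Framework ships a pure-IL AES implementation (AesManaged) alongside an OS-backed one (AesCryptoServiceProvider) that can use AES-NI where the CPU has it. A minimal timing sketch; the class names are real, but the harness is illustrative and the numbers are machine-dependent:

          using System;
          using System.Diagnostics;
          using System.Security.Cryptography;

          class AesNiSketch
          {
              // Encrypt the same buffer repeatedly and report elapsed time.
              static long TimeEncrypt(SymmetricAlgorithm aes, byte[] data, int rounds)
              {
                  aes.GenerateKey();
                  aes.GenerateIV();
                  var sw = Stopwatch.StartNew();
                  for (int i = 0; i < rounds; i++)
                  {
                      using (var enc = aes.CreateEncryptor())
                          enc.TransformFinalBlock(data, 0, data.Length);
                  }
                  return sw.ElapsedMilliseconds;
              }

              static void Main()
              {
                  var data = new byte[1 << 20]; // 1 MiB; contents don't matter for timing

                  // Pure IL: can never use AES-NI.
                  Console.WriteLine("AesManaged:               {0} ms",
                      TimeEncrypt(new AesManaged(), data, 100));

                  // Goes through the OS, which uses AES-NI when the CPU has it.
                  Console.WriteLine("AesCryptoServiceProvider: {0} ms",
                      TimeEncrypt(new AesCryptoServiceProvider(), data, 100));
              }
          }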

      • by Megol ( 3135005 )
    Pentium? Do you really think x86 evolution ended with the Pentium?!? Here's a _subset_ of cases that differ among current designs and can make a performance difference:
        • CMOV (conditional move): can be beneficial for some processors while mostly useless for others.
    • Micro-op fusion. Some processors support some kind of fusion (where a compare-type instruction is fused with a branch dependent on the comparison). Which types of compare can be fused with which types of conditional branch differs between designs.
    • Zeroing constructs: some processors recognize xor reg,reg as a dependency-breaking zeroing idiom, while others treat it like any other instruction.
      • Compiling for a specific CPU variant doesn't help much these days.

        Do you have any numbers? GCC has a bunch of different cost models for scheduling on different CPUs and regularly gets new ones added. That said, I'm not sure I've ever seen a vast amount of difference, and less so recently than in the GCC 3 days, when the PIII and the P4 had very different cost models.

        Also, fun thing: with multiversioning, GCC can now compile multiple function versions using different instruction set subsets and switch at load time.
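
        GCC's multiversioning is a C/C++ feature, but the same load-time dispatch can be hand-rolled in .NET. A sketch, assuming the Vector<T> API from the SIMD preview discussed further down the thread; Summer, SumScalar and SumVector are made-up names for illustration:

          using System;
          using System.Numerics;

          static class Summer
          {
              // Resolved once, when the type initializes: the managed analogue
              // of GCC picking a function version at load time.
              static readonly Func<float[], float> Impl =
                  Vector.IsHardwareAccelerated ? (Func<float[], float>)SumVector
                                               : SumScalar;

              public static float Sum(float[] xs) { return Impl(xs); }

              static float SumScalar(float[] xs)
              {
                  float s = 0f;
                  foreach (float x in xs) s += x;
                  return s;
              }

              static float SumVector(float[] xs)
              {
                  var acc = Vector<float>.Zero;
                  int i = 0;
                  for (; i + Vector<float>.Count <= xs.Length; i += Vector<float>.Count)
                      acc += new Vector<float>(xs, i);          // whole lanes
                  float s = Vector.Dot(acc, Vector<float>.One); // horizontal sum
                  for (; i < xs.Length; i++)
                      s += xs[i];                               // scalar tail
                  return s;
              }
          }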

  • From the article:

    the .NET Native runtime [is] a refactored and optimized CLR

    According to the article, the .NET Native runtime is a (not yet complete) implementation of .NET. This means that Wine + .NET Native = a Microsoft-built .NET runtime on Linux. This is good news because it may be a way to use the .NET technologies missing from Mono, such as WPF, on Linux.

    Another reason this is good news: we're one step closer to being able to develop Windows installers in .NET. Lately I've been using NSIS, and it is the most stupid, idiotic language I've ever had to work with.

  • by Theovon ( 109752 ) on Thursday April 03, 2014 @05:51PM (#46654487)

    I skimmed over the links, but I probably just missed it. So apps take 60% less time to start, and they use 15% less memory. What about run-time performance? How much faster are they when executing?

    • I skimmed over the links, but I probably just missed it. So apps take 60% less time to start, and they use 15% less memory. What about run-time performance? How much faster are they when executing?

      At runtime, a .NET app already runs compiled code; what this saves is the JIT compilation step.

      However, they also announced (in a later session at //build/) that the new compilers (including the JITs) will take advantage of SIMD. For some application types this can allegedly lead to serious (as in 60%) performance gains. Games were mentioned.
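
      For a sense of what that looks like, here is a minimal sketch in the Vector<T> style that the preview exposes (shipped as Microsoft.Bcl.Simd, later System.Numerics.Vectors); the Saxpy name is made up, and Vector<float>.Count is whatever lane width the hardware offers:

        using System.Numerics;

        static class Saxpy
        {
            // y[i] += a * x[i], Vector<float>.Count lanes per iteration.
            public static void Run(float a, float[] x, float[] y)
            {
                var va = new Vector<float>(a);   // broadcast a into every lane
                int i = 0, w = Vector<float>.Count;
                for (; i + w <= x.Length; i += w)
                {
                    var vy = new Vector<float>(y, i) + va * new Vector<float>(x, i);
                    vy.CopyTo(y, i);
                }
                for (; i < x.Length; i++)        // scalar remainder
                    y[i] += a * x[i];
            }
        }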

      • by PRMan ( 959735 )
        This only helps the first load. After that, either one should be the same if the JIT is worth anything.
        • Well, that depends. JIT needs to be *really* fast. That limits the optimization it can do. Pre-compilation to native allows more processing time for optimizations between the CIL and the machine code than a JIT can really afford.

  • by IamTheRealMike ( 537420 ) on Thursday April 03, 2014 @06:02PM (#46654669)

    Many years ago there was an R&D project inside a large tech company. It was exploring many of the hot research topics of the day: mobile code, type-based security, distributed computing, and just-in-time compilation using "virtual machines". This project became Java.

    Were all these ideas actually good? Arguably, no. Mobile code turned out to be harder to do securely than anyone had imagined, to the extent that all attempts to sandbox malicious programs of any complexity have repeatedly failed. Integrating distributed computing into the core of an OO language invariably caused problems due to the super leaky abstraction, for instance, normal languages typically have no way to impose a deadline on a method call written in the standard manner.

    Just in time compilation was perhaps one of the worst ideas of all. Take a complex memory and CPU intensive program, like an optimising compiler, and run it over and over again on cheap consumer hardware? Throw away the results each time the user quits and do it all again when they next start it up? Brilliant, sounds like just the thing we all need!

    But unfortunately the obvious conceptual problems with just-in-time compilers did not kill Java's love for them, because writing them was kind of fun and hey, Sun wasn't going to make any major changes in Java's direction after launch - that might imply it was imperfect, or that they had made a mistake. And Java was successful despite JITC. So when Microsoft decided to clone Java, they wanted to copy a formula that worked, and the JITC concept came along for the ride.

    Now, many years later, people are starting to realise that perhaps this wasn't such a great idea after all. .NET Native sounds like a great thing, except it's also an obvious thing that should have been the way .NET worked right from the start. Android is also moving to a hybrid "compile to native at install time" model with the new ART runtime, but at least Android has the excuse that they wanted to optimise for memory and a slow interpreter seemed like the best way to do that. The .NET and Java guys have no such excuses.

    • That's rather a bizarre claim, considering i-code and p-code have been around for decades, and virtual machines have been around as long. There's nothing particularly unique about Java (or .NET, for that matter).

    • by aberglas ( 991072 ) on Thursday April 03, 2014 @09:03PM (#46656697)
      You miss the point entirely. The vast majority of CPU time in most applications is spent in relatively few leaf subroutines. What the JIT does is compile just those bits that are found to be CPU-intensive.

      In tests I did some time ago with the early compilers, .NET code was actually faster than C implementing the same algorithm. The reason is that it can perform global optimizations, inlining aggressively. Sure, that can be done with C (and you do not even need macros), but it takes extra work, slows down the compiler if too much is put into header files, and programmers usually miss some of the routines that need inlining.
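
      For reference, .NET does let you hint inlining per method. A tiny sketch; MethodImplOptions and its flags are the real API, the method names are illustrative, and whether .NET Native honors the hint is not something the article spells out:

        using System.Runtime.CompilerServices;

        static class HotPath
        {
            // Ask the compiler to inline this small leaf routine even across
            // assembly boundaries; it is a hint, not a guarantee.
            [MethodImpl(MethodImplOptions.AggressiveInlining)]
            public static int Clamp(int v, int lo, int hi)
            {
                return v < lo ? lo : (v > hi ? hi : v);
            }

            // The opposite hint, useful for keeping cold paths out of hot callers.
            [MethodImpl(MethodImplOptions.NoInlining)]
            public static void LogRareError(string msg)
            {
                System.Console.Error.WriteLine(msg);
            }
        }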

      Modern generational garbage collectors are also faster than malloc/free, and do not suffer fragmentation.
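
      The generational part is easy to watch from C#: short-lived objects die in gen 0, where collections are the cheap ones. A small sketch using GC.CollectionCount; the counts you see will vary with machine and runtime:

        using System;

        class Gen0Sketch
        {
            static void Main()
            {
                int g0 = GC.CollectionCount(0);
                int g2 = GC.CollectionCount(2);

                for (int i = 0; i < 10000000; i++)
                {
                    var tmp = new byte[32]; // dies immediately, never leaves gen 0
                }

                // Expect many cheap gen-0 collections and (usually) no gen-2 ones.
                Console.WriteLine("gen0: {0}, gen2: {1}",
                    GC.CollectionCount(0) - g0,
                    GC.CollectionCount(2) - g2);
            }
        }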

      Delaying compilation also makes the distribution architecture-neutral: the same package serves 32-bit, 64-bit, ARM, etc. What is needed is to cache the results of previous compiles, which shrinks the startup penalty to something slight and usually negligible.

      Compiling all the way to machine code at build time is an archaic C-grade idea that became obsolete thirty years ago for most common applications.

      • You make some good points, however:

        The reason is that it can perform global optimizations, in-lining aggressively.

        So can all semi-modern C++ compilers. This is a compiler technology, not a language concern.

        Modern generational garbage collectors are also faster than malloc/free, and do not suffer fragmentation.

        Perhaps true, but this ignores the fact that C++ can effectively bypass heap allocation completely for programmer-defined hot spots. Sure, this pushes the optimisation work on to the programmer rather than the compiler, but it still means a significant performance win. Java can't do this to anything like the same degree.
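
        For comparison, the closest .NET gets is value types: a struct local stays off the GC heap entirely, while a class instance is always a heap allocation. A minimal sketch with made-up type names:

          struct PointStruct { public double X, Y; }  // lives on the stack as a local
          class  PointClass  { public double X, Y; }  // always a gen-0 heap allocation

          static class AllocSketch
          {
              public static double LenStruct()
              {
                  var p = new PointStruct { X = 3, Y = 4 };  // no GC involvement at all
                  return System.Math.Sqrt(p.X * p.X + p.Y * p.Y);
              }

              public static double LenClass()
              {
                  var p = new PointClass { X = 3, Y = 4 };   // one allocation per call
                  return System.Math.Sqrt(p.X * p.X + p.Y * p.Y);
              }
          }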

      • The reason is that it can perform global optimizations, in-lining aggressively. Sure that can be done with C (and you do not even need macros), but it takes extra work, slows down the compiler if too much is put into header files, and programmers usually miss some of the routines that need in-lining.

        Modern static compilers have been advancing too. Automatic inlining is now very well established. With link time optimization, this even happens across translation units.

        Modern generational garbage collectors are also faster than malloc/free, and do not suffer fragmentation.

    • Dynamically compiling code has some advantages unrelated to security or portability. For example, try efficiently implementing generic virtual methods without a JIT.

      (Coincidentally, .NET Native doesn't support this feature of C#)
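
      To make that concrete, here is what a generic virtual method looks like. The catch for ahead-of-time compilation is that Feed cannot know which Visit<T> instantiations it will need until runtime; a JIT just creates them on demand. All names below are illustrative:

        using System;

        interface IVisitor
        {
            void Visit<T>(T item);          // a generic *virtual* method
        }

        class PrintingVisitor : IVisitor
        {
            public void Visit<T>(T item)
            {
                Console.WriteLine("{0}: {1}", typeof(T).Name, item);
            }
        }

        class Demo
        {
            static void Feed(IVisitor v)    // v's concrete type is unknown here
            {
                v.Visit(42);                // needs a Visit<int> body
                v.Visit("hello");           // needs a Visit<string> body
                v.Visit(DateTime.Now);      // needs a Visit<DateTime> body
            }

            static void Main() { Feed(new PrintingVisitor()); }
        }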

  • by BestNicksRTaken ( 582194 ) on Friday April 04, 2014 @07:16AM (#46659513)

    Never seen the point of C#.

"If it ain't broke, don't fix it." - Bert Lantz

Working...