
.NET Native Compilation Preview Released

atrader42 (687933) writes "Microsoft announced a new .NET compiler that compiles .NET code to native code using the C++ compiler back end. It delivers C++-like performance while still supporting .NET features such as garbage collection, generics, and reflection. Popular apps have been measured to start up to 60% faster and use 15% less memory. The preview currently supports only Windows Store applications, but is expected to apply to more .NET applications in the long term. A preview of the compiler is available for download now. (Caveat: I both work for MS and read Slashdot.)"
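[Editor's note: the summary's claim that native compilation still supports reflection is worth making concrete. The sketch below is in Java rather than C# (the comments below compare the two managed ecosystems at length); it shows the kind of runtime type lookup an ahead-of-time compiler must preserve metadata for.]

```java
import java.lang.reflect.Method;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Look up a type and a method by name at runtime. An ahead-of-time
        // compiler must retain enough metadata for this to keep working,
        // even though the names are only known when the program runs.
        Class<?> cls = Class.forName("java.lang.String");
        Method m = cls.getMethod("toUpperCase");
        Object result = m.invoke("native code");
        System.out.println(result); // NATIVE CODE
    }
}
```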
This discussion has been archived. No new comments can be posted.

  • by ackthpt ( 218170 ) on Thursday April 03, 2014 @04:48PM (#46653421) Homepage Journal

    Isn't it time for yet-another-language? How about C$ ?

  • by common-lisp ( 2771805 ) on Thursday April 03, 2014 @04:54PM (#46653497)

    From the article:

    the .NET Native runtime [is] a refactored and optimized CLR

    According to the article, the .NET Native runtime is a (not yet complete) implementation of .NET. This means that Wine + .NET Native = a Microsoft-built .NET runtime on Linux. This is good news because this may be a way to take those .NET technologies missing from Mono, such as WPF, and still use them on Linux.

    Another reason this is good news is, we're one step closer to being able to develop Windows installers in .NET. Lately I've been using NSIS and it is the most stupid, idiotic language I've ever used. It's been described as a mixture of PHP and assembly.

    Another thought: the article doesn't seem to mention it, but judging by the design, the .NET Native compiler may be able to compile any .NET DLLs and EXEs, not just C# ones.

  • Re:Ah... (Score:4, Insightful)

    by i kan reed ( 749298 ) on Thursday April 03, 2014 @04:56PM (#46653547) Homepage Journal

    Yeah, sorry, GUIs won. Like 20 years ago. You can stop pretending that our multicore processors with 64 gigs of ram can't handle them.

  • by common-lisp ( 2771805 ) on Thursday April 03, 2014 @05:35PM (#46654229)

    If you care so much about all that windows crap, why are you running Linux at all?

    Because Linux is much easier to develop with.

    Some Windows technologies (WPF, .NET, C#) are well designed, as are many Linux technologies. Seeing the benefits of one platform isn't mutually exclusive with seeing the benefits of another platform.

    However, what is mutually exclusive is a tribal mindset and the ability to see two sides of the situation.

  • by Anonymous Coward on Thursday April 03, 2014 @05:36PM (#46654253)

    I believe MS Office is built using Visual C++.
    Microsoft were unable to use .NET to build their own applications, presumably because of poor performance.
    Microsoft have failed to eat their own dog food.

    If Microsoft .NET had been successful, then Windows RT on an ARM CPU would not have been a failure.
    If .NET had been implemented properly, every application compiled with Microsoft Visual Studio would have produced exes/dlls with both native x86 and .NET code (fat binaries).
    Then all existing Windows apps would run on the ARM CPU (admittedly a bit slower). .NET has clearly failed.

  • by MightyMartian ( 840721 ) on Thursday April 03, 2014 @05:55PM (#46654571) Journal

    Is C really that hard to develop in? After all, the chief advantage of C# isn't really C# itself, but the .NET libraries. C/C++ with good libraries strikes me as a reasonably good option. If I'm just going to end up compiling it down to machine code anyway, why bother with .NET at all? I get it if you have an existing code base you want to squeeze some more cycles out of, but if I were starting a new project tomorrow, give me one reason why a C# compiler is the way to go as opposed to C++.

  • by IamTheRealMike ( 537420 ) on Thursday April 03, 2014 @06:02PM (#46654669)

    Many years ago there was an R&D project inside a large tech company. It was exploring many of the hot research topics of the day, topics like mobile code, type based security, distributed computing and just in time compilation using "virtual machines". This project became Java.

    Were all these ideas actually good? Arguably, no. Mobile code turned out to be harder to do securely than anyone had imagined, to the extent that all attempts to sandbox malicious programs of any complexity have repeatedly failed. Integrating distributed computing into the core of an OO language invariably caused problems due to the super leaky abstraction, for instance, normal languages typically have no way to impose a deadline on a method call written in the standard manner.
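    [Editor's note: the deadline point is concrete and worth illustrating. In a conventional language you cannot attach a timeout to an ordinary method call written "in the standard manner"; you have to wrap the call in explicit machinery. A minimal Java sketch using the standard JDK ExecutorService/Future APIs (the five-second sleep stands in for a slow remote call):]

```java
import java.util.concurrent.*;

public class DeadlineDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // The call itself has no notion of a deadline; the timeout lives
        // entirely in the wrapper that waits on the Future.
        Future<String> call = pool.submit(() -> {
            Thread.sleep(5_000); // simulate a slow remote call
            return "reply";
        });
        try {
            System.out.println(call.get(100, TimeUnit.MILLISECONDS));
        } catch (TimeoutException e) {
            call.cancel(true); // best effort: interrupts the running task
            System.out.println("deadline exceeded");
        }
        pool.shutdownNow();
    }
}
```

    Note that the deadline only bounds the *wait*, not the work: the task keeps running until it observes the interrupt, which is exactly the leaky abstraction the comment describes.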

    Just in time compilation was perhaps one of the worst ideas of all. Take a complex, memory- and CPU-intensive program, like an optimising compiler, and run it over and over again on cheap consumer hardware? Throw away the results each time the user quits and do it all again when they next start it up? Brilliant, sounds like just the thing we all need!
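    [Editor's note: the warm-up cost being described is easy to observe directly. The first call to a method runs interpreted (or with a quick baseline compile), the optimizing compiler only kicks in after many invocations, and the optimized code is discarded at process exit. A rough timing sketch in Java; absolute numbers will vary by machine and VM, so no specific output is claimed:]

```java
public class WarmupDemo {
    // A small, pure computation for the JIT to optimize.
    static long sumOfSquares(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += (long) i * i;
        return s;
    }

    public static void main(String[] args) {
        // Cold: first call, before the optimizing compiler has run.
        long t0 = System.nanoTime();
        sumOfSquares(100_000);
        long cold = System.nanoTime() - t0;

        // Warm up: repeated calls push the method past the JIT threshold.
        for (int i = 0; i < 5_000; i++) sumOfSquares(100_000);

        // Warm: same call, now running JIT-compiled code. The compiled
        // code is thrown away at exit and rebuilt on the next run.
        long t1 = System.nanoTime();
        sumOfSquares(100_000);
        long warm = System.nanoTime() - t1;

        System.out.println("cold ns: " + cold + ", warm ns: " + warm);
    }
}
```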

    But unfortunately the obvious conceptual problems with just in time compilers did not kill Java's love for it, because writing them was kind of fun and hey, Sun wasn't going to make any major changes in Java's direction after launch - that might imply it was imperfect, or that they made a mistake. And it was successful despite JITC. So when Microsoft decided to clone Java, they wanted to copy a formula that worked, and the JITC concept came along for the ride.

    Now, many years later, people are starting to realise that perhaps this wasn't such a great idea after all. .NET Native sounds like a great thing, except it's also an obvious thing that should have been the way .NET worked right from the start. Android is also moving to a hybrid "compile to native at install time" model with the new ART runtime, but at least Android has the excuse that they wanted to optimise for memory and a slow interpreter seemed like the best way to do that. The .NET and Java guys have no such excuses.

  • by Anonymous Coward on Thursday April 03, 2014 @06:40PM (#46655189)
    Linux isn't dying. Yes, it is a long way behind MS, and MS is still very much dominant, but Linux isn't going anywhere.
  • by MightyMartian ( 840721 ) on Thursday April 03, 2014 @06:59PM (#46655427) Journal

    Yeah, I can't wait for half-gigabyte executables.

  • by ralphbecket ( 225429 ) on Thursday April 03, 2014 @07:10PM (#46655603)

    "After all, the chief advantages of C# isn't really C#, but the .NET libraries."

    You can't be serious! C is *substantially* lower-level than C#; you should only use C as a portable assembly language. I've spent decades writing assembly, C, and higher level languages and I'd pick C# over C in an eyeblink for anything that doesn't require access to the bare metal (well, personally I'd pick a functional language, but these days I work in industry...)

  • by Desler ( 1608317 ) on Thursday April 03, 2014 @08:28PM (#46656421)

    You can't be serious! C is *substantially* lower-level than C#; you should only use C as a portable assembly language.

    Why? C is extremely easy to write and has vast amounts of libraries to use.

  • Re:Ah... (Score:5, Insightful)

    by fyngyrz ( 762201 ) on Thursday April 03, 2014 @10:20PM (#46657307) Homepage Journal

    You could write everything in assembly if you wanted to, and with careful optimization you could probably produce faster code, but it would take several orders of magnitude longer to do.

    No. You can't do that unless the platform is locked down hardware-wise, and that hasn't been the case with the major OSes for quite some time now. The best tool -- to date -- for anything serious aimed at a major OS is C. By far. Not C++, not Objective-C, not C#, not asm ... just C.

    due to NIH syndrome

    No. That's not it at all. I don't care where it was invented; that's a symptom, not the actual problem. The problem is that bringing in other people's code results in a loss of maintainability, quite often a loss of focus on the precise problem one is attempting to address, and a loss of understanding of exactly what is going on, which in turn leads to other bugs and performance shortcomings. OPC comes into play at multiple levels: attempts to manage memory for you; libraries; canned packages of every type; and "handy" language features that hide the details from you. NIH because it wasn't you just *looks* like the problem, but the real problem is what NIH code actually does to the end result, and that's a real thing, not a matter of "I don't like your style" or some personality syndrome.

    If the goal is the highest possible quality, then the job has to be fully understood and carefully crafted from components you can service from start to finish, the only exceptions being where it *must* interface with the host OS. Even then you're likely to get screwed. Need UDP ports to work right? Stay away from OS X. Need file dialogs to handle complex selections? MS's were broken for at least a decade straight. Need font routines that rotate consistently? Windows would give it to you various ways depending on the platform. And so on. Better off to write your own code if you can possibly manage it. You know, so it'll work, and if your customer finds an error, so you can fix it instead of punting it into Apple or MS's lap.

    It boggles the mind that people *still* use the term "bloated" simply because they are utilizing frameworks that might not be limited to just the exact set of things you need

    I use "bloated" when my version of something is 1 mb, and a friend's, with fewer lines of code, is 50 mb and runs the target functionality at a fraction of the speed, not to mention loading differences and startup differences. It's not just about a library routine that isn't called (well, until there are a lot of them, or if they're very large... linkers really ought to toss those out anyway), it's primarily about waste in every function call, clumsy memory management that tries to be everything to everybody and ends up causing hiccups and brain farts at random times, libraries that bring in other libraries that bring in other libraries until you've got a house of a thousand bricks, where you only actually laid a few of them, and you have *no idea* of the integrity of the remaining structure. Code like that is largely out of your control. Bloated. Unmaintainable. Opaque. Unfriendly to concurrently running tasks.

    Look at your average iOS application. 20 megs. 50 megs. Or more. For the most simpleminded shite you ever saw; could have been implemented in 32k of good code and (maybe) a couple megs of image objects. That's what I'm talking about, right there. Bloat. It's that zone where a craft is swamped by clumsy apprentices who think they understand a lot more than they do. Where one fellow creates beautiful, strong, custom furniture, and the other guy buys a $59.95 box from IKEA and turns a few cams. The good news is that there will always be a place for those who can really craft, because there's a never-ending source of challenges where crap just won't do. And despite rumors to the contrary, end users do know the difference -- especially once they've been exposed to both sides of the coin.

"What the scientists have in their briefcases is terrifying." -- Nikita Khrushchev