.NET Native Compilation Preview Released
atrader42 (687933) writes "Microsoft has announced .NET Native, a compiler that compiles .NET code to native code using the C++ compiler backend. It delivers C++-like performance while still supporting .NET features such as garbage collection, generics, and reflection. Popular apps have been measured to start up to 60% faster and use 15% less memory. The preview currently supports only Windows Store applications, but it is expected to cover more .NET applications in the long term. A preview of the compiler is available for download now. (Caveat: I both work for MS and read Slashdot.)"
Huh? (Score:5, Funny)
Popular apps have been measured to start up to 60% and use 15% less memory.
So they no longer fully start up? Why is that a benefit?
Re: (Score:3, Informative)
The word 'faster' was omitted. I've updated to fix.
Open source compiler (Score:5, Interesting)
They also open-sourced their new C# compiler:
http://roslyn.codeplex.com/ [codeplex.com]
Re: (Score:3, Insightful)
Isn't it time for yet-another-language? How about C$ ?
Re:Open source compiler (Score:5, Funny)
I'm not going back to GW-BASIC, thanks.
10 LET M$ = "Microsoft" (Score:2)
I'm not going back to GW-BASIC, thanks.
But at least we know where M$ started out.
Re: (Score:2)
Fine. Shift-5 then.
Re: (Score:2)
Isn't it time for yet-another-language? How about C$ ?
C£, so then it really could be unambiguously C-pound.
So no more .net redistributable? (Score:3)
Re:So no more .net redistributable? (Score:5, Informative)
apps will get deployed on end-user devices as fully self-contained natively compiled code (when .NET Native enters production), and will not have a dependency on the .NET Framework on the target device/machine.
Re: (Score:3)
Do they actually install them, or are they merely included in the installer packaging and installed if and only if the files are missing or outdated?
Re: (Score:3)
Actually, that's not the case. As already mentioned by PhrostyMcByte above, here's the quote from Microsoft's FAQ:
apps will get deployed on end-user devices as fully self-contained natively compiled code (when .NET Native enters production), and will not have a dependency on the .NET Framework on the target device/machine.
Pretty cool in my book.
Re:So no more .net redistributable? (Score:5, Insightful)
Yeah, I can't wait for half-gigabyte executables.
Re: (Score:2)
Yeah, I can't wait for half-gigabyte executables.
You mean you're not working with them already?
Re: (Score:3)
This new compiler actually links the dependent parts of the .NET Framework into your native code, producing a standalone executable.
Re: (Score:2)
This would mean it gets compiled like C or C++. You would still need the .NET redistributable for any libraries you reference, just as you have done with C++ libraries or DLLs in traditional Windows development.
I see your point: libs will still be needed, whether the .NET Framework or individual DLLs. However, from the comments on the article (yes, I RTFM!!!) it seems you will be able to statically link everything together into a single, self-contained executable.
Translator? (Score:3, Interesting)
compiles .NET code to native code using the C++ compiler backend
Can it output the generated C++ source?
Re:Translator? (Score:4, Informative)
Re: (Score:2)
Correct me if I am mistaken, but I'm pretty sure that if they are using the backend they are skipping the lexing and parsing steps and going straight to the generation of the intermediate representation. That would mean that there is no generated C++ code to see.
That is precisely what they announced. No correction needed. They use that C++ backend to emit code for specific processor architectures (and core counts) and do global optimizations.
Only benefits smaller devices (Score:2)
The raw speed of the code might actually diminish, since the .NET runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc.). On the other hand, startup would benefit: no more need to just-in-time compile, and no memory needed to compile it. Then again, the runtime might use some cycles to further optimize code during execution, whereas with this approach the code won't change any further. In any case, great for instant startup.
Re: (Score:3)
Well, the ART preview compiler on Android 4.4 runs on the device, so it compiles to native code there, but I expect Google will accelerate that step by precompiling on their servers, taking device characteristics into account. Microsoft could do that too if they wanted.
Re: (Score:2)
Yes, the comments on TFM say that you have to upload MSIL to the app store servers, which will compile it to native code. I expect they will compile for all known architectures at that point.
So the app store will become a compile farm... I like that. I remember when SourceForge did exactly that.
Re: (Score:3)
The raw speed of the code might actually diminish since the .net runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc).
MS announced that developers still need to pass the compiler hints about what architecture, CPU core count, available memory, etc. to compile for. You can (cross-)compile for multiple architectures.
This technology is already at work when deploying apps for Windows Phone 8: Developers pass IL code to the store, native compilation is performed per device type in the cloud (CPU architecture, OS version, memory, ...) and the binary is passed to the device.
Re: (Score:2)
The raw speed of the code might actually diminish since the .net runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc).
I hate to break it to you, but the original Pentium is now obsolete. Compiling for a specific CPU variant doesn't help much these days. I'm also unaware of any JIT compiler that adjusts the code for available RAM. You might have a point about the phase of the moon.
Basically you're citing the standard tropes used in defense of JIT. Theoretically it can make some difference, but when I ask for benchmarks showing code on a JIT running faster than straight-to-binary, all I hear is crickets.
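For what it's worth, the startup side of that trade-off is easy to see for yourself. Below is a minimal C# sketch (all names illustrative): the first call to a method pays the JIT cost on top of the work, later calls pay only for the work, and an ahead-of-time compiler like .NET Native moves that first-call cost to build time.

    using System;
    using System.Diagnostics;

    class JitCostDemo
    {
        static long SumOfSquares(int n)
        {
            long total = 0;
            for (int i = 0; i < n; i++)
                total += (long)i * i;
            return total;
        }

        static void Main()
        {
            // Cold call: the JIT compiles SumOfSquares to machine code before running it.
            var sw = Stopwatch.StartNew();
            long a = SumOfSquares(1000000);
            sw.Stop();
            Console.WriteLine("Cold call (includes JIT): {0:F3} ms", sw.Elapsed.TotalMilliseconds);

            // Warm call: the method is already native code, so only the work itself is timed.
            sw.Restart();
            long b = SumOfSquares(1000000);
            sw.Stop();
            Console.WriteLine("Warm call: {0:F3} ms (results equal: {1})", sw.Elapsed.TotalMilliseconds, a == b);
        }
    }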
Re: (Score:2)
Compiling for a specific CPU variant doesn't help much these days.
That depends on whether you use SSE, AVX, AES-NI, or any of the other significant performance-enhancing additions to recent CPUs.
Show me some code compiled with AES-NI support that isn't significantly faster than code without hardware accelerated AES instructions.
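A rough way to see that effect from C# (a sketch, not a rigorous benchmark): AesManaged is a pure managed AES implementation, while AesCryptoServiceProvider goes through the Windows crypto provider, which can use hardware AES on CPUs that have it. Whether AES-NI is actually hit depends on the OS and CPU, so treat the comparison as illustrative.

    using System;
    using System.Diagnostics;
    using System.Security.Cryptography;

    class AesTimingDemo
    {
        static double TimeEncrypt(SymmetricAlgorithm aes, byte[] data)
        {
            aes.Key = new byte[16];   // fixed zero key/IV: this is a timing demo, not real crypto
            aes.IV = new byte[16];
            var sw = Stopwatch.StartNew();
            using (var enc = aes.CreateEncryptor())
                enc.TransformFinalBlock(data, 0, data.Length);
            sw.Stop();
            return sw.Elapsed.TotalMilliseconds;
        }

        static void Main()
        {
            var data = new byte[64 * 1024 * 1024];   // 64 MB of zeros
            Console.WriteLine("Managed AES:  {0:F1} ms", TimeEncrypt(new AesManaged(), data));
            Console.WriteLine("Provider AES: {0:F1} ms", TimeEncrypt(new AesCryptoServiceProvider(), data));
        }
    }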
Re: (Score:2)
Compiling for a specific CPU variant doesn't help much these days.
Do you have any numbers? GCC has a bunch of different cost models for scheduling on different CPUs and regularly gets new ones added. That said, I'm not sure I've ever seen a vast amount of difference, and less so recently than in the GCC 3 days, when the PIII and P4 had very different costs.
Also, fun fact: with function multiversioning, GCC can now compile multiple versions of a function using different instruction-set subsets and switch between them at load time.
Run more native Windows apps on Linux (Score:2, Insightful)
From the article:
the .NET Native runtime [is] a refactored and optimized CLR
According to the article, the .NET Native runtime is a (not yet complete) implementation of .NET. This means that Wine + .NET Native = a Microsoft-built .NET runtime on Linux. This is good news because this may be a way to take those .NET technologies missing from Mono, such as WPF, and still use them on Linux.
Another reason this is good news is, we're one step closer to being able to develop Windows installers in .NET. Lately I've been using NSIS and it is the most stupid, idiotic language
Re: (Score:3, Insightful)
If you care so much about all that windows crap, why are you running Linux at all?
Because Linux is much easier to develop with.
Some Windows technologies (WPF, .NET, C#) are well designed, as are many Linux technologies. Seeing the benefits of one platform isn't mutually exclusive with seeing the benefits of another platform.
However, what is mutually exclusive is a tribal mindset and the ability to see two sides of the situation.
Re: (Score:3)
Breaking news! Some people use Linux but have a need for something that's only ostensibly available for Windows!
What about number-crunching performance? (Score:3)
I skimmed over the links, but I probably just missed it. So apps take 60% less time to start, and they use 15% less memory. What about run-time performance? How much faster are they when executing?
Re: (Score:3)
I skimmed over the links, but I probably just missed it. So apps take 60% less time to start, and they use 15% less memory. What about run-time performance? How much faster are they when executing?
At runtime, a .NET app already runs compiled; this just saves the JIT step.
However, they also announced (in a later session at //build/) that the new compilers (including the JITs) will take advantage of SIMD. For some application types this can allegedly lead to serious (on the order of 60%) performance gains. Games were mentioned.
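The SIMD support being referred to surfaces as the System.Numerics vector types (shipped as the System.Numerics.Vectors package for the RyuJIT preview). A minimal sketch of the pattern, with illustrative data:

    using System;
    using System.Numerics;

    class SimdDemo
    {
        // Adds two float arrays a vector at a time; the compiler can map Vector<float>
        // operations to SSE/AVX instructions when Vector.IsHardwareAccelerated is true.
        static void Add(float[] a, float[] b, float[] result)
        {
            int width = Vector<float>.Count;          // e.g. 4 with SSE, 8 with AVX
            int i = 0;
            for (; i <= a.Length - width; i += width)
            {
                var va = new Vector<float>(a, i);
                var vb = new Vector<float>(b, i);
                (va + vb).CopyTo(result, i);
            }
            for (; i < a.Length; i++)                 // scalar tail for the leftovers
                result[i] = a[i] + b[i];
        }

        static void Main()
        {
            Console.WriteLine("Hardware accelerated: " + Vector.IsHardwareAccelerated);
            var a = new float[1000];
            var b = new float[1000];
            var r = new float[1000];
            for (int i = 0; i < a.Length; i++) { a[i] = i; b[i] = 2 * i; }
            Add(a, b, r);
            Console.WriteLine(r[10]);                 // prints 30
        }
    }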
Re: (Score:3)
Well, that depends. JIT needs to be *really* fast. That limits the optimization it can do. Pre-compilation to native allows more processing time for optimizations between the CIL and the machine code than a JIT can really afford.
Re: (Score:2)
.NET JIT always compiles, it doesn't have a bytecode interpreter at all. That's why it has to be faster than Java's, and why it doesn't optimize as well.
Is JITC finally going to die? (Score:4, Insightful)
Many years ago there was an R&D project inside a large tech company. It was exploring many of the hot research topics of the day, topics like mobile code, type based security, distributed computing and just in time compilation using "virtual machines". This project became Java.
Were all these ideas actually good? Arguably, no. Mobile code turned out to be harder to do securely than anyone had imagined, to the extent that all attempts to sandbox malicious programs of any complexity have repeatedly failed. Integrating distributed computing into the core of an OO language invariably caused problems due to the super leaky abstraction, for instance, normal languages typically have no way to impose a deadline on a method call written in the standard manner.
Just in time compilation was perhaps one of the worst ideas of all. Take a complex memory and CPU intensive program, like an optimising compiler, and run it over and over again on cheap consumer hardware? Throw away the results each time the user quits and do it all again when they next start it up? Brilliant, sounds like just the thing we all need!
But unfortunately the obvious conceptual problems with just in time compilers did not kill Java's love for it, because writing them was kind of fun and hey, Sun wasn't going to make any major changes in Java's direction after launch - that might imply it was imperfect, or that they made a mistake. And it was successful despite JITC. So when Microsoft decided to clone Java, they wanted to copy a formula that worked, and the JITC concept came along for the ride.
Now, many years later, people are starting to realise that perhaps this wasn't such a great idea after all. .NET Native sounds like a great thing, except it's also an obvious thing that should have been the way .NET worked right from the start. Android is also moving to a hybrid "compile to native at install time" model with the new ART runtime, but at least Android has the excuse that they wanted to optimise for memory and a slow interpreter seemed like the best way to do that. The .NET and Java guys have no such excuses.
Re: (Score:2)
That's rather a bizarre claim, considering intermediate code and p-code have been around for decades, and virtual machines have been around just as long. There's nothing particularly unique about Java (or .NET for that matter).
Re:Is JITC finally going to die? (Score:5, Informative)
In tests I had done some time ago with the early compilers, .Net code was actually faster than C implementing the same algorithm. The reason is that it can perform global optimizations, in-lining aggressively. Sure that can be done with C (and you do not even need macros), but it takes extra work, slows down the compiler if too much is put into header files, and programmers usually miss some of the routines that need in-lining.
Modern generational garbage collectors are also faster than malloc/free, and do not suffer fragmentation.
Delaying compilation makes it architecture neutral: the same distribution works for 32-bit, 64-bit, ARM, etc. What is needed is to cache the results of previous compiles; what remains is a slight but usually negligible start-up penalty.
Compiling all the way to machine code at build time is an archaic C-grade idea that became obsolete thirty years ago for most common applications.
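As a side note on the inlining point: in .NET the trade-off isn't entirely out of the programmer's hands either. Since .NET 4.5 you can at least hint the compiler; whether the JIT or an AOT compiler honors the hint is still up to it. A small illustrative example (names made up):

    using System.Runtime.CompilerServices;

    static class MathHelpers
    {
        // A hint to the JIT/AOT compiler to inline this method at call sites;
        // without it, inlining is decided purely by the compiler's own heuristics.
        [MethodImpl(MethodImplOptions.AggressiveInlining)]
        public static int Clamp(int value, int min, int max)
        {
            if (value < min) return min;
            if (value > max) return max;
            return value;
        }
    }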
Re: (Score:2)
You make some good points, however:
The reason is that it can perform global optimizations, in-lining aggressively.
So can all semi-modern C++ compilers. This is a compiler technology, not a language concern.
Modern generational garbage collectors are also faster than malloc/free, and do not suffer fragmentation.
Perhaps true, but this ignores the fact that C++ can effectively bypass heap allocation completely for programmer-defined hot spots. Sure, this pushes the optimisation work on to the programmer rather than the compiler, but it still means a significant performance win. Java can't do this to anything like the same degree.
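For what it's worth, C# sits somewhere in between here: value types (structs) keep hot-path data out of the garbage-collected heap, though without the full control C++ gives you. A small sketch of the idea (names illustrative):

    using System;

    // A value type: an array of Point is one contiguous block of doubles,
    // with no per-element heap allocation and nothing extra for the GC to trace.
    struct Point
    {
        public double X, Y;
        public Point(double x, double y) { X = x; Y = y; }
    }

    static class HotLoop
    {
        public static double PathLength(Point[] points)
        {
            double total = 0;
            for (int i = 1; i < points.Length; i++)
            {
                double dx = points[i].X - points[i - 1].X;
                double dy = points[i].Y - points[i - 1].Y;
                total += Math.Sqrt(dx * dx + dy * dy);   // no allocations inside the loop
            }
            return total;
        }
    }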
Re: (Score:2)
The reason is that it can perform global optimizations, in-lining aggressively. Sure that can be done with C (and you do not even need macros), but it takes extra work, slows down the compiler if too much is put into header files, and programmers usually miss some of the routines that need in-lining.
Modern static compilers have been advancing too. Automatic inlining is now very well established. With link time optimization, this even happens across translation units.
Modern generational garbage collectors a
Re: (Score:3)
Dynamically compiling code has some advantages unrelated to security or portability. For example, try efficiently implementing generic virtual methods without a JIT.
(Coincidentally, .NET Native doesn't support this feature of C#)
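For readers unfamiliar with the feature: a generic virtual method is one where both the override that runs and the type argument it runs with are only known at run time, so an AOT compiler must either pre-generate every instantiation it can discover or fall back to a slower shared path. A minimal illustration (names made up):

    using System;

    abstract class Serializer
    {
        // Generic *and* virtual: which override runs, and with which T,
        // is only decided at run time.
        public abstract string Describe<T>(T value);
    }

    class JsonishSerializer : Serializer
    {
        public override string Describe<T>(T value)
        {
            return "{ \"type\": \"" + typeof(T).Name + "\", \"value\": \"" + value + "\" }";
        }
    }

    class Demo
    {
        static void Main()
        {
            Serializer s = new JsonishSerializer();   // static type hides the concrete override
            Console.WriteLine(s.Describe(42));
            Console.WriteLine(s.Describe(DateTime.Now));
        }
    }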
or just use c++ in the first place (Score:3)
never seen the point of c#
Re:I thought April fools was 2 days ago? (Score:4, Interesting)
It actually sounds like gcj.
Re: (Score:2)
1999 called. http://gcc.gnu.org/java/ [gnu.org]
Re: (Score:2)
gcj is largely abandonware since distros adopted openjdk.
(There might still be a case for an AOT compiler to replace hotspot but I'm not sure Excelsior JET has much industry adoption - at least not in the enterprises I worked at.)
Re:Ah... (Score:4, Insightful)
Yeah, sorry, GUIs won. Like 20 years ago. You can stop pretending that our multicore processors with 64 gigs of ram can't handle them.
Re: (Score:2, Informative)
When installing a security update for .NET takes 45 minutes, there is no pretending involved.
Re: (Score:2)
Would you rather they didn't update the assembly cache when they install an update? So your applications have to wait for their dependencies to be updated while they start?
Background assembly cache updates (Score:2)
Would you rather they didn't update the assembly cache when they install an update? So your applications have to wait for their dependencies to be updated while they start?
I'd rather have it update the assembly cache in the background after the interruption of a reboot. This way I can run non-.NET applications on one core while the assembly cache updates on another.
Re: (Score:2)
Sounds like a great user experience right there. Hmm, I just updated, maybe this app will work, maybe it won't?
Re: (Score:3)
Memory and CPU power are there to be used, so why not take advantage of them? And what the hell is a hand-coded app? Or are you referring to programming against a runtime versus programming directly against the OS? And what does eschewing OO approaches mean? Are you talking about an application that encapsulates all its functionality without referencing any external resources or dependencies?
Re: (Score:2)
1) if my app is careful with memory, you have more left to do other things with, and there's less swapping, etc.
2) if my app is 10x faster than yours (and it probably is at least that), I *am* taking advantage of the CPU, in fact, better than you are.
It's an app where you write the code, instead of bringing in a bunch of black boxes. So that A, you can control exactly what is going on, and B, you
Re: (Score:2)
Some of us are capable of writing anything we need, we just have better things to do than re-invent wheels all the time. Grow up.
Battery life (Score:2)
Memory and CPU power are there to be used so why not take advantage of it.
Battery life, for one. The less time a task takes, the less energy your application draws from the battery over the course of its running, and the less energy the screen draws from the battery while the user waits for it to finish.
Re:Ah... (Score:5, Insightful)
No. You can't do that unless the platform is locked down hardware-wise, and that's not been the case with the major OS's for quite some time now. The best tool -- to date -- for anything serious aimed at a major OS is C. By far. Not C++, not Objective-C, not C#, not asm ... just C.
No. That's not it at all. I don't care where it was invented; that's a symptom, not the actual problem. The problem is bringing in other people's code results in a loss of maintainability, quite often a loss of focus on the precise problem one is attempting to address, a loss of understanding of exactly what is going on, which in turn leads to other bugs and performance shortcomings. OPC comes into play at multiple levels: attempts to manage memory for you; libraries; canned packages of every type and "handy" language features that hide the details from you. NIH because it wasn't you just *looks* like the problem, but the problem is what NIH code actually does to the end result, and that's a real thing, not a matter of I don't like your style, or some personality syndrome. If the goal is the highest possible quality, then the job has to be fully understood and carefully crafted from components you can service from start to finish, the only exceptions being where it *must* interface with the host OS. Even then you're likely to get screwed. Need UDP ports to work right? Stay away from OSX. Need file dialogs to handle complex selections? MS's were broken for at least a decade straight. Need font routines that rotate consistently? Windows would give it to you various ways depending on the platform. And so on. Better off to write your own code if you can possibly manage it. You know, so it'll work, and if your customer finds an error, so you can fix it instead of punting it into Apple or MS's lap.
I use "bloated" when my version of something is 1 mb, and a friend's, with fewer lines of code, is 50 mb and runs the target functionality at a fraction of the speed, not to mention loading differences and startup differences. It's not just about a library routine that isn't called (well, until there are a lot of them, or if they're very large... linkers really ought to toss those out anyway), it's primarily about waste in every function call, clumsy memory management that tries to be everything to everybody and ends up causing hiccups and brain farts at random times, libraries that bring in other libraries that bring in other libraries until you've got a house of a thousand bricks, where you only actually laid a few of them, and you have *no idea* of the integrity of the remaining structure. Code like that is largely out of your control. Bloated. Unmaintainable. Opaque. Unfriendly to concurrently running tasks.
Look at your average iOS application. 20 megs. 50 megs. Or more. For the most simpleminded shite you ever saw; could have been implemented in 32k of good code and (maybe) a couple megs of image objects. That's what I'm talking about, right there. Bloat. It's that zone where a craft is swamped by clumsy apprentices who think they understand a lot more than they do. Where one fellow creates beautiful, strong, custom furniture, and the other guy buys a 59.95 box from IKEA and turns a few cams. The good news is that there will always be a place for those who can really craft, because there's a never-ending source of challenges where crap just won't do. And despite rumors to the contrary, end users do know the difference -- especially once they've been exposed to both sides of the coin.
Re: (Score:2)
I'm known to be a low level person and professionally a C programmer. And I agree with you on some stuff.
However...
No, I won't use C to do something in 1k of memory and 3 weeks of coding; I will use Python in 10 MB of memory and 1 day of coding. Simply because my time costs more than 10 MB of memory. So stop demonising higher-level languages and accept that they have their perfectly legit uses as long as their limitations are understood. Keep in mind that if Android used C and not Java, we would have about 5 non cra
Re: (Score:2)
OK. Your time costs more than 10 MB of RAM. But does it cost more than 10 MB of RAM times 100 apps times one million users? In the latter case, the guys who collectively write the 100 apps are costing the users collectively an extremely large amount of resources (think money).
It depends on how the app is going to be used. I program with C, C++, and Python, as appropriate.
Re: (Score:2)
I program on niche embedded devices which have 16 MB of RAM and still cost a lot of money for what they are. C is the only way there. However, through the development cycle we have a lot of bugs which get attributed to improper memory management - null dereferencing, memory leaks and the like. This makes the development cycle longer, which is acceptable.
Now, on the mobile market, it is implied that the consumer prefers a lot of relatively stable applications in a short period of time. The tradeoff for thi
Re: (Score:3)
The best tool -- to date -- for anything serious aimed at a major OS is c. By far. Not C++
Not only that, but it must only be written by a true Scotsman as well.
There are many major, serious projects written in C++.
GCC, LLVM, Firefox, Qt, WebKit, the JVM, LibreOffice/OpenOffice.
In fact GCC recently switched from C to C++. Basically, C++ provides exactly the same machine model as C, except that it gives you a more programmable compiler and richer abstractions. There are very, very few places that it's worth usi
Re: (Score:2)
> You should pretend your work will be so popular if you want it to be so popular.
No you should not. This is amateur/hobbyist talk. When you have a billion customers, you will have plenty of money to optimize... or by then, you will be able to afford to keep plenty of "terabytes" online. Writing apps for billions of users when all you are likely to have is a few hundred/thousand users... is plain delusional and is inviting trouble.
Highly optimized, hyper-scalable code does not come for free. You will si
When it's 30 fps vs. 60 fps (Score:2)
the fact is that in the vast majority of cases saving 5ms while expending 5 times the development effort
For something that takes 20 ms to execute, making it take only 15 ms will let your application update its view every vertical blank instead of every two vertical blanks (at 60 Hz, a vertical blank comes roughly every 16.7 ms, so 15 ms fits in one frame and 20 ms does not).
due to NIH syndrome
One cause of NIH syndrome is disagreement with the licensing terms of the pre-existing libraries. Another is that the pre-existing libraries happen not to be ported to a platform that you plan to support.
Re: (Score:2)
Exactly.
Re: (Score:2)
Not to mention handy language features (call it sugar if you want, it makes the coding faster and easier) like LINQ, lambdas, method overloading, and so on.
Re: (Score:2, Insightful)
Is C really that hard to develop in? After all, the chief advantages of C# aren't really C#, but the .NET libraries. C/C++ with good libraries strikes me as being a reasonably good option. If I'm just going to end up compiling it down to machine code anyway, why bother with .NET at all? I get it if you have an existing code base you want to squeeze some more cycles out of, but if I were starting a new project tomorrow, give me one reason why a C# compiler is the way to go as opposed to C++?
Re:It produces performance like C++ (Score:5, Insightful)
"After all, the chief advantages of C# isn't really C#, but the .NET libraries."
You can't be serious! C is *substantially* lower-level than C#; you should only use C as a portable assembly language. I've spent decades writing assembly, C, and higher level languages and I'd pick C# over C in an eyeblink for anything that doesn't require access to the bare metal (well, personally I'd pick a functional language, but these days I work in industry...)
Re: (Score:2, Insightful)
You can't be serious! C is *substantially* lower-level than C#; you should only use C as a portable assembly language.
Why? C is extremely easy to write and has vast amounts of libraries to use.
Re: (Score:2)
Then again, you could just pick full C++11, which gives you both the higher-level abstractions of something like C# and the low-level capabilities of C.
Anticipating the "but ... but ... it doesn't have garbage collection" - correct; it's not for brain dead idiots who can't program with proper technique and have to have something "managing" their code.
Re:It produces performance like C++ (Score:4, Funny)
I have never seen an unintended buffer overflow problem in C# or Java.
So you've seen intended ones?
Re: (Score:2)
I have never seen an unintended buffer overflow problem in C# or Java.
So you've seen intended ones?
Yup, exploit code is full of em...
Re: (Score:2)
Parent has the right of it. Performance is only part of the equation. There's what I call the X-abilities. Readability, maintainability, scalability, you-name-it-ability.
Often, raw C can fail on those.
Re: (Score:3)
Raw C can be X-able.
It's just a plain PITA to do it.
Otherwise, the performance of raw C is overrated. Or better: the developers who benefit most from C's performance are the ones who can't do algorithms. Also, developing reusable algorithms in C is a major PITA.
Re: (Score:2)
The small amount of your post that even makes sense is absurdly wrong. For fuck's sake, *nix kernels have been implementing complex process and cycle allocation algorithms for four decades now, almost all of it written in C. That's not even talking about various tools in userland that invoke fairly complex logic.
Christ almighty, what fucking planet do some posters live on?
Re: (Score:2)
For fuck's sake, *nix kernels have been implementing complex process and cycle allocation algorithms for four decades now, almost all of it written in C.
LOL. Thanks. As a system developer specializing in Linux, how could I have missed it!? /s
Seriously though, you might also note that it took kernels *decades* to get where they are.
Most algorithms are very, very primitive - because you shouldn't put complex/unpredictable logic into the kernel.
The lion's share of memory allocation is static. There are very few truly dynamic structures, because the kernel must not run out of memory (and kernel address space is often very limited).
Data structures are pri
Re: (Score:2)
for any new development it is literally impossible to recommend C over C++
Your rant makes little sense - almost everyone upthread was talking about C/C++, not just C. They were contrasting C/C++ to things like C# and Java.
Re: (Score:2)
Your rant makes little sense - almost everyone upthread was talking about C/C++, not just C.
Which, in itself, makes little sense. Actually programming in C++ is vastly different from programming in C. In fact, the main criticisms of C (low-level, tedious, error-prone by-hand resource management, unsafe constructs, etc.) can almost entirely be avoided in C++ with no performance penalty - in fact, often with a performance gain.
Re: (Score:2)
for any new development it is literally impossible to recommend C over C++.
Really? Let's find out:
For your next project, I strongly recommend that you implement it in C instead of C++.
Sorry, kid. I've got to side with reality on this one...
Re: (Score:2)
It can't use an `int` as a key or value - it operates on pointers to something abstract. Meaning not only that the something has to be dynamically allocated, but also that if it is small - like `int` or even `long` - the overhead of dynamic memory would (typically) quadruple memory consumption. Which is clearly why things like glib aren't used in kernel space.
If you're coding for a 6502 (Score:2)
Re: (Score:2)
Or what optimizing compiler targeting the 6502 do you recommend?
Not sure about the 6502, but for the Atmel series of 8-bit micros, I recommend GCC, same as always.
Portability of C# (Score:2)
why not do it in C# if it runs fast enough?
Because too many people who code in C# tend to get lazy and not test in Mono.
Re: (Score:2)
This is actually fucking awesome. They've got native compilation of Win32/64 desktop and server apps on the road-map. You're right, nobody cares about the Windows Store, which is why they targeted those apps first (you know, developers, developers, developers and all that shit).
The FAQ [microsoft.com] clearly states that they're planning to propagate this feature to all .Net apps.
Desktop apps are a very important part of our strategy. Initially, we are focusing on Windows Store apps with .NET Native. In the longer term we will continue to improve native compilation for all .NET applications.
I'm guessing that means .NET 4.5+ apps, which in turn means Windows 8+. So here's hoping that Windows 9 is not gonna suck so much donkey ball
Re: (Score:2, Interesting)
That doesn't sound like a proper native compiler:
The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead of using the just-in-time (JIT) compiler to compile the original assembly.
Yes, it does produce native code.
No, it doesn't produce an executable ready for redistribution.
I do not disagree with the approach, but there is still a difference. If done right, it can be a blessing: code is optimized for the local CPU. If done poorly (as MS likes to do sometimes), it can mean irreproducible bugs, performance regressions, or no effect at all if the cache gets corrupted somehow.
Re: (Score:2)
NGEN just caches the JIT output of an assembly; it does not produce a native executable.
Re: (Score:2)
I agree with the essence of the statement, but it's written in terms of childish absolutes such as "nobody" that obviously aren't true. Maybe if he had said "The majority of .NET developers aren't doing Metro. When you expand support for this feature, then it'll be interesting to the rest of us." But some people live in a world that revolves around them and cry when they get left out. I hope this feature comes to the rest of the .NET platform, but I'm not going to cry about it.
Re: (Score:2)
Maybe because the plan is to support all of .net.
is expected to apply to more .NET applications in the long term
I assume it was easier to only support the reduced API of metro apps to start with.
Re: (Score:2)
Desktop apps are a very important part of our strategy. Initially, we are focusing on Windows Store apps with .NET Native. In the longer term we will continue to improve native compilation for all .NET applications.
I'm not your Google bitch, so you can figure out where that quote came from on your own yeah?
Re: (Score:3)
Microsoft were unable to use .NET to build their own applications, presumably because of poor performance.
Unlikely. MSO is very old. Very likely the source code is poorly documented and not completely understood. Porting that to anything is going to be a major and very risky undertaking.
.NET has clearly failed.
Still clearly better than VB.
Re: (Score:2)
No need to use Mono for ARM. .NET has been supported on numerous architectures, including ARM, as part of Windows CE for years. Sure, it was only a subset of the full framework, but not for any technical reason except keeping the footprint small.
WinRT (not to be confused with Windows RT; we're talking about the API set now) does often feel like a waste of effort to me, although there is something to be said for identifying/creating a sandbox-friendly set of APIs to use in creating sandboxed software...
.NET CF can't run IronPython (Score:2)
No need to use Mono for ARM. .NET has been supported on numerous architectures, including ARM, as part of Windows CE for years. Sure, it was only a subset of the full framework
The .NET Compact Framework can't even run IronPython or any other DLR language because it lacks Reflection.Emit.
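For context, this is the sort of run-time code generation the DLR depends on and that Reflection.Emit provides on the full framework: build IL on the fly, then JIT it into a callable delegate. A minimal sketch:

    using System;
    using System.Reflection.Emit;

    class EmitDemo
    {
        static void Main()
        {
            // Build a tiny (int, int) -> int "Add" method at run time.
            var dm = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
            var il = dm.GetILGenerator();
            il.Emit(OpCodes.Ldarg_0);
            il.Emit(OpCodes.Ldarg_1);
            il.Emit(OpCodes.Add);
            il.Emit(OpCodes.Ret);

            var add = (Func<int, int, int>)dm.CreateDelegate(typeof(Func<int, int, int>));
            Console.WriteLine(add(2, 3));   // prints 5
        }
    }

Without a JIT or some other run-time code generator, there is nothing to hand that IL to, which is why DLR languages don't fit on the Compact Framework or on fully ahead-of-time-compiled runtimes.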
Re: (Score:3)
.NET apps compiled for "AnyCPU" will, technically, run just fine on Windows RT on ARM. The reason you can't actually run such desktop apps is that they're blocked by the signature verifier (any desktop app must be signed by MS to run on RT). It's a DRM thing, not a technical limitation.
Oh, and huge parts of Office use .NET these days, alongside the older native code. Ditto for VS, and many other products.
Re: (Score:2)
It's already completely possible to run native or managed desktop apps on RT. You just need to "jailbreak" it first to remove the signature enforcement on user-mode full-trust binaries. RT 8.0 has been jailbroken since like a year ago...
The jailbreak for RT 8.1 is in development. Microsoft put a completely unjustifiable amount of effort (IMO) into making sure RT 8.1 sucks even more than RT 8.0, but nothing that complex is perfect. If you have a gen1 RT device (anything except a Surface 2 or Lumia 2520) you
Re: (Score:2)
It'd be nice if MS would officially support loading standard Win32 applications, compiled for ARM, without jailbreaking, though.
Now, obviously they want to avoid a situation where a consumer tries to install a shrink-wrapped x86 program that won't run on a different architecture - but there are other means, e.g. distributing platform-specific binaries through the Windows Store.