Microsoft's Top Devs Don't Seem To Like Own Tools 496
ericatcw writes "Through tools such as Visual Basic and Visual Studio, Microsoft may have done more than any other vendor to make drag and drop-style programming mainstream. But its superstar developers seem to prefer old-school modes of crafting code. During the panel at the Professional Developers Conference earlier this month, the devs also revealed why they think writing tight, bare-metal code will come back into fashion, and why parallel programming hasn't caught up with the processors yet." These guys are senior enough that they don't seem to need to watch what they say and how it aligns with Microsoft's product roadmap. They are also dead funny. Here's Jeffrey Snover on managed code (being pushed by Microsoft through its Common Language Runtime tech): "Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore." Snover also joked that programming is getting so abstract, developers will soon have to use Natal to "write programs through interpretative dance."
Wow! (Score:5, Informative)
News at 11!
Package Runners vs Programmers (Score:5, Insightful)
Well you are very close to the mark here.
The integrated IDEs arose to let the hobbyist and corporate IT newbie turn out something that has a good chance of being functional, if not maintainable or efficient. These tools also let specialists from other fields (accountants, meteorologists, scientists) actually turn out products.
Efficiency, maintainability and extensibility don't matter for that type of programmer. They just need to put up a few screens to count the widgets or produce the daily report. It doesn't have to be portable or efficient, and chances are it will be tossed as soon as that programmer moves on to another job.
Those programmers do manage to turn out a reasonable number of customized applications, some of which are actually marketable. The vast majority are for in-house use. But some actually work quite well for specialized industry applications like medical billing, where knowledge of the subject matter is more important than the efficiency of the code.
OS-level code has to be much more efficient, and there is no substitute for knowing the programming language, the processor capabilities, and the compiler peculiarities. You cannot leave to some IDE the task of putting code behind a button that will drag in half a ton of MS Foundation Classes or use some particular C/C++ construct that is horribly inefficient. You are basically always dealing with some data stream or message, doing X, Y, and Z with it, and handing it off to the next task.
As such, these guys virtually never see a piece of data all the way through the computer. Their customers are other pieces of software. Their wholesalers are yet more pieces of software. It's data-in, whack it, pound it, and pass it on sort of code, and a lot of it, and a lot of self-plagiarism. The IDEs just get in the way.
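A minimal C sketch of that "data-in, whack it, pass it on" shape, where both the upstream supplier and the downstream customer are other programs (the function names here are invented for illustration):

```c
#include <ctype.h>
#include <stdio.h>

/* Transform a buffer in place -- a stand-in for the real per-message work. */
void whack(char *buf, size_t len) {
    for (size_t i = 0; i < len; i++)
        buf[i] = (char)toupper((unsigned char)buf[i]);
}

/* data-in, whack it, pass it on: the upstream program feeds `in`,
 * and the next task in the pipeline reads whatever we write to `out`. */
void pump(FILE *in, FILE *out) {
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        whack(buf, n);
        fwrite(buf, 1, n, out);
    }
}
```

Wired to stdin/stdout in a `main`, this becomes exactly the kind of filter that never "sees" data end to end: it only knows its own slice of the pipeline.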
Re:Package Runners vs Programmers (Score:5, Insightful)
So the IDEs really have nothing to do with the so-called "designers" you see in Visual Studio. And yes, it's true that no developer who was serious about maintaining a multi-year product could do it via the designers before; you just had no control over WTF was going on. Now with WPF and XAML you finally can use the designers in a maintainable fashion, but it's a bit too little, too late for most developers to care. You can write pure Windows code in C as a makefile project (so you have clear control over the build, even to the point of a different toolchain like GNU) just as well in the Visual Studio 2010 IDE as with Vim and the command-prompt tools. It's just a matter of whether the IDE at that point gets in the way more than it helps.
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Rather smug, I think. (Score:5, Insightful)
The original article and the summary both come off as rather smug to me. The truth of the matter is that you need both low-level nitty-gritty programming and high-level programming. It depends on what you're using it for.
Think of it this way. You have people who make pipes. You know, the kind used in plumbing. Fittings, too. And they're very good at it. If you take your average house builder and try to get him to make a pipe, he'll be hopelessly bad at it. But you know what those guys who build houses are good at? Putting the pipes together in meaningful ways to get what they need (i.e. building a house) done. Take a guy who's brilliant at making pipes and fittings and try to get him to build a house. Yeah, not such a superstar now.
It's the same with programmers. Tell someone who is very good at writing low-level code, "I need a killer efficient compiler." Give them enough time, and they can churn it out and make it wicked optimized. Tell them, "I need a new type of control that works in this specific way and with crucial efficiency," and they're your guys. Tell them, "I need a new application entirely from scratch that can process my specific business logic, it needs to look and feel like a standard Windows application, it needs to be easy for end users to figure out and work with, and we need a working version in a couple of weeks," and they'll probably laugh at you. Yet that's what those people they're looking down on, the people developing with higher-level abstracted languages, are doing every day.
In my experience, competence != usefulness. They're not opposites, mind you, but it takes both types. It takes the people who work with the low-level nitty-gritty stuff, and it takes the people who use what they churn out to actually accomplish real-world productive things. One isn't smarter, one isn't better, neither should be looked down upon. Both are necessary.
Re:Rather smug, I think. (Score:5, Insightful)
Also, experience has taught us that even elite programmers make errors, and the most common categories of errors happen to be of the kind that can be prevented by designing your language or platform properly. This results in enhanced reliability, security and performance, due to the lack of null-pointer dereferences, buffer overflows, memory leaks, and so on and so forth. Now, I'm not saying that you therefore have to use Dotnet; I'm not really a fan of having to install yet another disk-space-munching VM, when I already need Java (among other things) for my studies and to run a few handy free tools that I depend upon to handle certain parts of the life I enjoy efficiently. However, I can say that if Microsoft, and programmers across the globe, would deprecate and replace old APIs that, for example, use uncounted strings and non-garbage-collected memory, the world would be a better place. And you don't need Dotnet or Java; you can do all of these things in plain C++ or macro assembler if you want.
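To make the uncounted-strings point concrete, here is a hedged C sketch (helper names invented for illustration): the first function trusts the caller blindly, which is the classic buffer overflow; the second carries the destination size, so the worst case is truncation rather than memory corruption.

```c
#include <stdio.h>
#include <string.h>

/* Uncounted-string API: the callee has no idea how big dst really is,
 * so a long src silently stomps whatever lives after the buffer. */
void copy_unsafe(char *dst, const char *src) {
    strcpy(dst, src);   /* classic overflow waiting to happen */
}

/* Counted variant: the destination size travels with the pointer.
 * snprintf always NUL-terminates and never writes past dstsize bytes. */
void copy_safe(char *dst, size_t dstsize, const char *src) {
    snprintf(dst, dstsize, "%s", src);
}
```

The same idea scales up to the "counted string" APIs the parent alludes to: carry the length alongside the bytes and the whole class of bug disappears.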
"Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore."
Not particularly funny, and not an argument to go back to the olden days. Quite the opposite.
Mod parent UP! (Score:4, Interesting)
Very insightful reply, and you're 100% correct. That was my other line of thought. This is the company (and probably some of these "superstar" programmers are the very people) who have given us a litany of buffer overruns, security holes, and other low-level programming "features" over the years.
I'm not saying that no one should ever program at a low level, but I am saying that people shouldn't be afraid to take advantage of features of managed code and other conveniences. Don't program at a low level if you don't have to. You're only making your life harder for no reason, and needlessly exposing yourself to risks of fundamental errors that are much worse. Take advantage of all of the hard work that others have already done.
Re:Mod parent UP! (Score:5, Interesting)
Re: (Score:3, Informative)
Now, I'm not saying that you therefore have to use Dotnet; I'm not really a fan of having to install yet another disk space munching VM
That choice has already effectively been made for you: Windows has shipped with some version of .NET since Win2003.
Re:Rather smug, I think. (Score:5, Insightful)
Just because you don't know how to use non-garbage-collected memory does not mean that functions relying on manual memory management should be phased out. Garbage collection trades flexibility for safety, and if you remove the flexibility you will find a whole class of programs that would be a lot less efficient. These days a lot of work is being put into making things like NULL-pointer bugs and memory overflows simply crash instead of opening a backdoor into the system, and for some programmers crashing vs. running slow is a good trade-off.
I need those functions to do my job, since I write network-based apps that require memory reuse and pointers to be able to process data as quickly as possible. And I've seen the results when some of our competition has attempted to do my job with garbage-collected languages (generally 3x the hardware requirements, 5x if they used Java).
Not everything is a desktop application.
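For what it's worth, the kind of memory reuse described above often boils down to a free-list pool, something like this illustrative C sketch (names and the buffer size are made up): the hot path recycles buffers instead of calling malloc per message.

```c
#include <stdlib.h>

#define BUF_SIZE 4096

/* Intrusive free-list node: when a buffer is idle, its own header
 * links it into the pool. */
struct buf {
    struct buf *next;
    char data[BUF_SIZE];
};

static struct buf *free_list = NULL;

/* Fast path: pop a recycled buffer; slow path: malloc only when empty. */
struct buf *buf_get(void) {
    if (free_list) {
        struct buf *b = free_list;
        free_list = b->next;
        return b;
    }
    return malloc(sizeof(struct buf));
}

/* Recycle instead of free(): the next message reuses this block. */
void buf_put(struct buf *b) {
    b->next = free_list;
    free_list = b;
}
```

After warm-up, a steady-state server built this way never touches the allocator at all, which is exactly the behavior that is hard to guarantee under a nondeterministic collector.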
Re:Rather smug, I think. (Score:4, Interesting)
Nice ad hominem. But the issue isn't whether he can use it; the issue is whether everyone whose code is running in the machine can use it, especially the guys who wrote the libraries.
This is a strange statement. Garbage collection in no way prevents memory reuse; it simply and automatically releases blocks of memory when they can no longer be reached by following pointers from the root set. In fact, there are garbage-collection libraries for plain C, some of which don't even require recompilation of applications but simply replace malloc() and free().
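As a rough plain-C illustration of what such a collector does, here is a toy mark-and-sweep over a single object type with one explicit root (nothing like the conservative stack scanning that real libraries such as Boehm's perform; every name here is invented): it frees exactly the blocks that can no longer be reached from the root.

```c
#include <stdlib.h>

/* Toy traced object: one pointer field to follow, plus bookkeeping. */
struct obj {
    struct obj *next_alloc;  /* intrusive list of every allocation */
    struct obj *child;       /* the only pointer field we trace */
    int marked;
};

static struct obj *all_objs = NULL;

struct obj *obj_new(void) {
    struct obj *o = calloc(1, sizeof *o);
    o->next_alloc = all_objs;
    all_objs = o;
    return o;
}

/* Mark phase: follow child pointers from the root. */
static void mark(struct obj *o) {
    while (o && !o->marked) {
        o->marked = 1;
        o = o->child;
    }
}

/* Sweep phase: free every unmarked object; returns how many were freed. */
int collect(struct obj *root) {
    int freed = 0;
    mark(root);
    struct obj **p = &all_objs;
    while (*p) {
        if (!(*p)->marked) {          /* unreachable: sweep it */
            struct obj *dead = *p;
            *p = dead->next_alloc;
            free(dead);
            freed++;
        } else {
            (*p)->marked = 0;         /* reset for the next cycle */
            p = &(*p)->next_alloc;
        }
    }
    return freed;
}
```

The point for the parent's argument: nothing here stops you from reusing memory; the collector only reclaims blocks you provably cannot reach anymore.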
Besides, network applications are precisely those where it might be worthwhile to use 5x the hardware just to get some extra protection.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
In practice, with the advent of ABS, most people drive faster, so general braking distances may well be longer. But from the same speed, there is only one situation where ABS causes a longer braking distance: when the surface is not solid but consists of sand, gravel, or freshly fallen snow. There, locked wheels cause a fair amount of material to pile up in front of the tires, giving better contact with the ground.
Re:Wow! (Score:5, Interesting)
We've got a crew of .NET developers writing us an updated replacement for an existing VB app. I keep calling the new interface Fisher-Price, but actually it's Hasbro. I was mistaken, but it's an easy mistake to make.
Where it should absolutely take two clicks to make something happen, they found a way to make it five. Where you should enter a date, they found a way to not allow special characters, like '/'. Where you should enter an address, well, no spaces allowed. Basic functionality is lacking for several features, but the interface is there.
And no help files yet, despite beta release pending in a few days. In fact, though we have well over 1,000 pages of documentation, there seems to be no functional install that preserves the users' data in case they need to reinstall. I'm told that the next build introduces that.
For all the fancy IDEs, tools, etc, these guys are still not getting it done. I dare not say how far behind schedule this is, nor what the actual platform is, or someone will guess and raise hell over how anyone could be so insensitive as to speak the truth.
Your tools mean crap, if you're incapable. Just as your plumber would probably suck at actually making the pipe, your developers will suck if they don't 'get' what your users actually do.
Of course, it would help if they asked what the users actually do.
But I'm not bitter. I get to support this. Plenty of work.
Re:Wow! (Score:5, Interesting)
I just wanted to let you know I feel your pain. I worked at this place a while back and I really liked my job. It didn't pay that well, but I felt important and had a massive amount of freedom. Then they hired a consultant to come in and take over IT. He knew how to run a business, but next to nothing about IT (though he knew just enough lingo to fool people who did for a few days).

His 'programmer' didn't understand how to navigate file systems on Windows with Perl (and was supposedly a Perl guy). Being a Linux guy myself, I figured maybe he was, too. No, he had never even used Linux. Once I found that out I started to get rather scared and discouraged, because he was reworking a complicated, arcane, mission-critical system. I demanded all passwords be changed and that I not be given any of them, because I didn't want to be blamed when they screwed everything up (plausible deniability). My bosses assured me they wouldn't blame me, and after relenting for another month they finally fired the guys, because they couldn't get anything working. At all. I even told them where to start to get a feel for what they needed to be able to do on it, and they still couldn't do it. They didn't even know enough to mess it up (generally the easiest thing to do).

So management's answer was to just not have any sort of IT department at all. I could do all of the old IT manager's job, plus my old one, for the same pay and no possibility of advancement. So I gave them a month's notice and left. Probably not the smartest thing I've ever done (the economy tanked about 6 months later), but since most everyone else had left or been laid off around the same time, I'm not sure how much difference it would have made to do otherwise.
Re:Wow! (Score:5, Funny)
Alright boys, take the love fest over to thedailywtf.
Re:Wow! (Score:4, Interesting)
Of course, it would help if they asked what the users actually do.
Bingo.
We had the advantage of a small business owner wanting our software developed because he thought "it should work like this." So we made it work like that, and a lot of other small business owners found that it made sense and was relatively easy to use. There were a couple of quirks, but that's not good enough. Not for me.
And this is where so many others fail. After the phase 1 deployment of our product (about 100 installs), I drove/flew around to our customers 6 months later, stopped by in person, and asked as the first question: "What doesn't work?" followed by "How can it work better?"
Writing windows GUIs (Score:3, Insightful)
Re:Wow! (Score:4, Insightful)
Well, I like some of those guys, but as someone who has been closer to the bare metal than they have (I'm a former Atari 2600 programmer), I'd say that the habits of old pros say little about the quality of today's tools.
We used 6502 cross-compilers on a PDPxx and VAX, not because we thought the command-line was better but because that was the best we had at the time.
BTW, I'm not trying to say that I'm better or smarter than those other guys, I just have written very low-level, real-time code.
Re: (Score:3, Funny)
In the Army, this is called "back when it was hard".
In this context:
How many developers does it take to screw in a light bulb?
2: one screws it in, the other one talks about how hard it used to be.
So what? (Score:4, Insightful)
I hate Microsoft more than anyone, but... I really don't see an issue or any hypocrisy here.
Re:So what? (Score:4, Informative)
That's because you weren't reading the ads (direct or indirect) for these MS "dev tools" in magazines, etc.
And you haven't been affected by managers who were reading them.
Re:So what? (Score:5, Insightful)
Yeah, really. Senior Engineers disagree with company marketing strategy ...
Who said that "graphical programming" is company marketing strategy today? Oh, sorry, it's another kdawson story; you expected any facts here? Let me explain then.
Anyone who deals with .NET tools knows that there has been a recent shift back towards code. For example, WinForms development was too tedious without the visual drag & drop form editor, but WPF markup is best hand-coded, just like HTML (VS provides a visual editor too, but hardly anyone uses it for anything except a quick preview). Or take what used to be called "typed datasets": also very designer-centric, but with LINQ2SQL and Entity Framework, again, most people stick to writing code and mappings in XML by hand.
In fact, it's easy to find out that much if you just look up the names mentioned in TFA. For example, who is Don Box? He's working on the Microsoft "Oslo" project [wikipedia.org], a next-gen modeling platform which was hyped [msdn.com] back at PDC2008, and all the Microsoft managers in the division blogged about how it's the next big thing, etc. And the main difference between that platform and the existing "DSL" tools in VS2008? Oslo is centered around text-based DSLs, and comes with an Emacs-like editor [msdn.com] which can handle them.
In short, developers of the new tools for Microsoft development platform - which are fully backed by marketing - criticize some aspects of the previous generation of tools. Surprise, eh?
But go ahead and ask them what they use to edit C# code that they write, and I bet you'll hear "why, VS of course".
Then there's Jeffrey's comment on .NET. I don't see anything fundamentally wrong with it, and if you RTFA, you'll see that he is an architect of PowerShell, a very high-level scripting/shell language built on top of .NET! To interpret his comment as a criticism of managed code, when he is in fact the one "pushing" for it via the tech he works on, is rather disingenuous.
Re:So what? (Score:4, Informative)
"Even $3 embedded devices will have 100s of meg ram to play with."
As a rule, no, they won't. Some embedded appliances marketed to homes and sheltered geeks will, because they have the luxury of being connected to the power grid all the time. Embedded stuff without that luxury will not, for power-saving reasons. That's why there are still embedded devices using 1980s bare-bones processors and less than 100 KiB of RAM.
Re:So what? (Score:5, Funny)
I hate Microsoft more than anyone
See, being subjected to IDEs has lowered your ability to detect faulty code. You're basically saying A > A, since this "anyone" includes you too. ;)
pros and cons (Score:2, Interesting)
The cons: managed code doesn't give nearly as much control because it tries to spoonfeed you. This is basically a catch-all for every con anyone can think of for managed code.
Re:pros and cons (Score:5, Insightful)
There are lots of situations when you don't need to twiddle the bits or delete your own allocated memory. What's wrong with simplifying the language for simplified tasks?
It's not like Microsoft doesn't support low-level languages.
Re:pros and cons (Score:5, Funny)
Re: (Score:2)
Re:pros and cons (Score:4, Funny)
http://www.codeplex.com/singularity [codeplex.com]
Your words, I want to see you eat them.
Re: (Score:3, Insightful)
For example, think of the type of errors leading to security bugs. A lot of them have to do with buffer overflows primarily in the area of string manipulation. These are easy mistakes to make in C or C++. Hell Microsoft and others have tried to modify the C runtime library to have "safer" versions of string manipulation functions because these errors continue to happen. Now consider a managed lang
Re: (Score:3, Interesting)
But the GC does not solve two things:
1) Freeing up resources other than memory (this is only possible with a deterministic GC and RAII/destructors, or with refcounting instead of a GC)
2) Taking up tons of RAM because of unnecessary allocations (I've seen Java code that allocates MBs in tight loops...)
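Point 1 can be made concrete with a small refcounting sketch in C (all names invented, error handling elided): the underlying file handle is closed deterministically when the last reference is dropped, which a tracing GC alone does not promise.

```c
#include <stdio.h>
#include <stdlib.h>

/* Refcounted wrapper around a non-memory resource (a FILE*). */
struct rc_file {
    FILE *fp;
    int refs;
};

struct rc_file *rc_open(const char *path, const char *mode) {
    struct rc_file *f = malloc(sizeof *f);
    f->fp = fopen(path, mode);
    f->refs = 1;
    return f;
}

struct rc_file *rc_retain(struct rc_file *f) {
    f->refs++;
    return f;
}

/* Returns 1 if this call closed the file, 0 otherwise. */
int rc_release(struct rc_file *f) {
    if (--f->refs > 0)
        return 0;
    if (f->fp)
        fclose(f->fp);   /* deterministic cleanup, not "whenever the GC runs" */
    free(f);
    return 1;
}
```

This is the same discipline C++ RAII or .NET's `IDisposable` formalizes: the resource's lifetime follows the references, not the collector's schedule.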
Those onion belts are going bad (Score:5, Interesting)
Yes, in some respects, programming is becoming easier and more unqualified people are able to do it.
But I think that these guys are really missing the boat. The closer the programming environment can come to providing domain-relevant expression tools to the user, the better they will be able to create programs that fit their domain.
In addition, content these days is a form of programming. Whether it is HTML/CSS or word processing or spreadsheets, the distinct line between what is a program and what is pure data is blurred beyond recognition. So a programming language for interpretive dance would probably find the Natal very useful.
Re: (Score:3, Insightful)
Wow. BadAnalagyGuy got an insightful. Someone didn't read the full comment.
Of course, this isn't true. The thing about end users is, they generally don't know what they want. Even if they had a tool that would do whatever they say, it won't solve their problem because they don't know how to formulate it. The tool would need to read their mind, to the point of making something they didn't even realize they really wanted.
Re: (Score:3, Interesting)
There's truth to what you are saying - I'll bet any senior developer can tell war stories for hours on the topic of users who don't know what they want - but BAG's comment was still very insightful.
Despite how readily domain experts (that is, our customers) disappoint us when it comes to grasping the most basic stuff such as C, Java, SQL or even HTML, it is a mistake to think that they are stupid or that they don't know *their* domains very well (the most basic stuff of which we may then find ourselves str
Re: (Score:2, Insightful)
The closer the programming environment can come to providing domain-relevant expression tools to the user, the better they will be able to create programs that fit their domain.
Well said.
Nobody does accounting better than an accountant.
But it's not always the best idea to hand data-system development to an accountant and hope for the best. Someone has to guide that guy doing the data editing, manipulation, and storage, just like that guy has to guide the programmer on how to keep his company books, file taxes, etc.
And this is where the current crop of tools fail. They let you build things that can go horribly wrong, because of simple errors that a professional programmer might have
Re: (Score:3)
The reason why programming became easier is that it was really hard to teach college graduates and other people how to manage their own code, do software maintenance, garbage collection, memory management, error trapping, management of pointers, etc. So the programming languages evolved to support the lowest common denominator programmers that the colleges kept producing.
In the 1980's when I first went to college for Computer Science, we got taught a whole lot of techniques and methods, that they don't teac
On the other hand (Score:2)
You don't have to be a crack programmer or have a team of them to publish great software on a deadline.
Yes, it helps. A lot. And in a serious large scale development effort you want as many as you can get...
But it's good to be able to be useful without having to be elite.
and this is a good thing (Score:2)
VB.NET and Microsoft's other tools make programming possible. People on Slashdot will argue that this leads to bad applications, but the choice is between bad applications and no applications, not bad applications and good applications. Granted, sometimes bad applications are dangerous, but that's not a sufficient rationale to withhold these type
elitist morons (Score:4, Funny)
I don't care what the MS Developers use (Score:2, Insightful)
It does not affect my decisions at all.
Businesses aren't in business to push programming ideology. They are in business to make money. If I need an application, I'm going to get the application that does the job for the least amount of money (all the caveats about it not being poorly written and being moderately open to possible future expansion, etc., apply). If I need bare-metal code, then I'll get a guy to do that. If VB will do the job, then I'll get a guy to do that, and probably a bit cheaper.
modify that analogy (Score:5, Interesting)
"Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore."
It might have been more apt to point out that people in the high-performance arena (NASCAR) don't like antilock brakes because of their limits and the separation they put between you and the task at hand (you lose your "feel for the road").
Though I'm a little strangely biased: I miss the days of assembly, when 10K was a LOT of code to write to solve a problem, and things ran at blindingly fast speed with almost no disk or memory footprint. Nowadays, Hello World is a huge production in itself. 97% of today's coders don't have any idea what they've missed out on and just accept what they've got. Even someone who understands the nerf tools like VB at a lower level can get sooo much more out of them. I recall taking someone's crypto code in VB and producing a several-thousand-fold speed boost because of my understanding of how VB was translating things. They didn't know what to say; they'd just accepted that what they were doing was going to be dog slow. (And unfortunately the users are also falling under the same hypnosis.)
Re:modify that analogy (Score:5, Insightful)
... 97% of today's coders don't have any idea what they've missed out on and just accept what they've got. ...
My apologies for snipping such a large portion of your reply, but that one sentence from your post nicely sums up so many of the problems with new coders it deserves calling out.
Disclaimer: I'm an old fart when it comes to programming. I admit it. I like bare metal programming, high-performance applications with minimal footprint, and elegant solutions to non-trivial problems. I don't avoid kernel-level threads; they're a useful tool.
The company I work for has hired a large number of programmers over the last year in order to replace a number of aging systems. I've interviewed a lot of these people, and I've worked with most of the ones we've hired on various parts of the overall project. The newer programmers know quite a lot about available frameworks and their general capabilities. They've been taught the 80/20 rule early on, and they've embraced it: when faced with a new task, these people find something that already exists and set about modifying it. All of that is fine for applications of a certain size. A size that, apparently, is about the size of a school project, and therefore succeeds admirably when graded.
So what I've seen coming through the door are people who can put Lego blocks together. They're used to that type of problem solving. They've been taught to download 80% of the solution, then "fix it" so it also does the other 20%. This type of problem solving works well when you're building Lego-block-shaped solutions. That fails to happen much of the time, however. Most real-world solutions -- you know, the kind that are complex enough that someone is willing to pay an actual salary to solve -- don't look like a collection of Lego blocks. The amount of custom code grows and grows as more and more Lego blocks are added. Interoperability problems between the Lego blocks start encompassing the majority of coding effort. The overall system gains complexity at an alarming rate. Things start to suck, both from the programmer's perspective as well as from a systems perspective.
The bad part of this, and to bring things back to my original point, is that these newer programmers expect it to be that way. What's worse, at least from my point of view, is that this entire mentality has been around long enough for these programmers to stop coding and start managing other programmers. So now we have people who build things that suck, and managers who expect it to suck. Expectations are lowered and, unfortunately, met.
Google and Apple seem unafraid to break this cycle, albeit in different ways. So hope is not entirely lost. Maybe that's the 3% you alluded to in your original post.
Re:modify that analogy (Score:5, Insightful)
Might have been more appropriate to compare it in that people in the high performance arena (nascar) don't like antilock brakes because of their limits and the separation you get from your task at hand. (you lose your "feel for the road")
I always laugh when I read this sort of thing.
In *real* high-performance racing - Formula 1 - ABS (along with traction control, launch control, active suspension, and a whole bunch of other fancy electronics that basically turned the cars into ludicrously fast go-karts) was used very successfully and then banned because it could do a far, far better job than any human.
Re: (Score:3, Insightful)
Who let a VB coder implement a cryptosystem
Well, in this case all we actually ended up needing was the SHA-1 and MD5 hashing, but he wanted to be complete about things, so we did the full RFC. Still, we had to hash 1 MB blocks, so speed was important: 4 seconds dropped to a very small fraction of a second. But you really got to see the improvements when trying to do actual encryption.
But when coding crypto, the tool isn't the first place you should look for security - pay more attention to the meatbag.
Abstraction vs Managed (Score:2)
This seems a good place to point out one of the chronic errors of people talking about software development...
Abstract does not mean slow, bloated, inefficient, or incomprehensible.
Having the wrong abstraction for the task at hand, however, often does. And blindly questing after "managed" "portable" and "high-level" is a good way to get abstractions which work poorly for *any* task. At best, you get Java/.Net/Javascript... tolerable for many tasks, and completely useless for others.
No, it's just "old dogs - new tricks" (Score:4, Interesting)
I could write a lengthy essay about how old programmers don't like new tools that offer them little, because they already know all the tricks and gadgets of their old, "inferior," more complicated tools, while new tools are perfect for new programmers, who don't have to learn as much to achieve the same results because those tools are easier to use and the learning curve isn't so steep. But I think I can sum it up in a single word:
Emacs.
Re: (Score:3, Insightful)
In my experience, a lot of these "tools that are perfect for new programmers because they don't have to learn so much" really mean that you spend a lot of time learning the tool, and _then_ have to still learn what's really happening if you ever want to make it to the level of the "old programmers", and dog forbid you are ever required to use a different tool.
To add insult to injury, the focus on the tool usually means there is so much boilerplate before you actually get to understanding the programs you wr
Re: (Score:3)
Well, that's what the industry wants.
Look at it sensibly. Yes, RAD tools (and let's face it, that's what VS is under the hood) abstract away a lot of the "inner workings" of programs. Ask 10 RAD programmers the difference between a compiler and a linker and 5 of them will stare at you blankly. Ask the remaining 5 why there are two steps in the first place and 4 more will go "ummmm...".
But at the end of the day, the customer (or boss, if the programmers are employees) does not care. They care that they produce cod
The most important question however is (Score:5, Insightful)
do they use vim or emacs now?
Good debugger (Score:3, Interesting)
Intellisense is pretty slick, but overall, if I'm doing development on Windows, I'd rather use emacs as my text editor.
Of course, Visual C++ makes a fantastic debugger. It's almost good enough to forgive Windows for its lack of valgrind. Almost.
Dear Computerworld... (Score:5, Funny)
XOXOXO
-Steve B.
Leaks like a sieve (Score:4, Interesting)
I'd be frustrated too, trying to write code with tools that generate memory leaks for days and suck at returning freed memory to the system. I remember one version of Word: you could start it up and just let it sit, and within an hour or so Windows would crash. Then there was the version of Excel that shipped with debug code because the stripped version would never pass QA. Ah, fine tools.
Interpretive dance? (Score:5, Funny)
The real reason why they don't use Visual Studio (Score:5, Insightful)
The real reason why they don't use Visual Studio is far more prosaic -- the build environment of most Microsoft products does not support Visual Studio project files. Their products are built using a system called CoreXT -- basically a set of binary tools and scripts cobbled together by build engineers and developers over the past decade or so. CoreXT uses a lot of different crap: make, perl, compilers, etc., etc., and all tools and SDKs are checked in and versioned. The upside is that you can roll back your Source Depot (Microsoft's own flavor of Perforce) enlistment to an earlier date and be sure things will build exactly the same way, and once you enlist, you get a repeatable, isolated build environment where you can guarantee the correct versions of all tools, compilers and libraries (a Java developer's wet dream, even if they don't know it). The downside is that you have to maintain the makefiles by hand, and you can't use Visual Studio, because there are no project files checked in, and even if there are, most people don't use them and they are not updated, so you can count on them being broken.
I did a lot of my coding in either Notepad2, or in a separate project in Visual Studio against a test harness emulating the rest of the project (what Enterprise Java types call a "mock"). Some folks used Ultra Edit or vi, or EMACS. For some just a bare Notepad did the trick. Some stuck with Visual Studio, which in their case was just a glorified Notepad with autoindent since it doesn't support build or Intellisense if you don't have a project file.
Yes, it's an enormous waste of time, and yes, it was painful. But CoreXT is so integrated into the rest of the dev pipeline that replacing it with something else in a large product is a major, destabilizing endeavor that is bound to undo at least some of the work around gated check-in infrastructure, test infrastructure, automated deployment infrastructure and god knows what else, so few teams ever attempt it. Now naturally, DevDiv eats their own dogfood, so they were one of the first teams to switch completely to MSBuild. It took something like a year in their case, they did it gradually, from the leaves down the tree. I'm sure if they had a choice, they would be using CoreXT to this day though, and fighting with incremental build issues. :-)
Recently, a few more teams have adopted MSBuild. They can actually open their entire projects in Visual Studio and rebuild them. If they have test infrastructure deployed on the side, some of them can even test the product without waiting for it to deploy. So I predict that as more and more teams adopt MSBuild (this in itself could take another decade easily), these "senior" folks will come around to appreciate its benefits. It's awfully handy when you can set a conditional breakpoint on your local box and step through things.
Oh please... (Score:5, Insightful)
I can't stand it when Microsoft developers talk about multi-threaded programming when the entire corporation has done the absolute bare minimum to make developers' lives easier. No wonder they don't like using their own tools: their tools are terrible.
Many years ago, a brilliant third-party multi-threaded library [oswego.edu] was released for Java by Doug Lea, a professor at SUNY Oswego. I used it in several large production apps, and it absolutely rocked. You could build up safe, reliable, scalable multi-threaded applications by simply snapping together flexible pieces like Lego. It was so good that it became part of Sun's standard Java library, where it now lives as java.util.concurrent. Compared to having to "hand craft" multi-threaded code in C++, it was wonderful. It's as if the lights had just turned on and everything had become clear to me.
Now that I'm a C# dev, it's been a huge step backwards, doubly so because .NET was developed after the Oswego library was already popular, so Microsoft must have seen it and flat out ignored it. For years afterwards, the entirety of multi-threading in both .NET and C++ was "threads" and "locks". The one nicety they included was an anemic thread pool in .NET which was just usable enough for the most basic tasks, but couldn't handle any real load. Even the locks were heavyweight inter-process kernel locks that are unusably slow for many tasks.
It's only now in .NET 4 (which won't be final until 2010) that they are adding a small set of very basic lock-free containers, light-weight locks, and actual interfaces that one can implement in order to customize behavior. It's all still very basic, and nowhere near as flexible, powerful, or comprehensive as the Java APIs that are years old now.
Microsoft's general attitude to API design is so bad that it can only be described as wilful ignorance. Reading articles evangelizing "modern multithreaded programming to better utilize new multi core processors" somehow feels like a religious zealot harping on about their appreciation of pure rational logic and science.
Re:Doug Lea? (Score:4, Interesting)
Damn. I don't even use Java (I'm an embedded C guy), but if DL [oswego.edu] did it, then it's probably really good. As a C developer I feel the old pthreads style is a throwback to the old multi-process hacks on SysV of 25 years ago.
What dragged you to the dark side anyways? (C#/.NET)
The Oswego library was the bomb. It's basically how APIs should be designed: Very simple looking abstract interfaces* with a bunch of reference implementations, some of which are incredibly advanced. You can then pick and choose what you want, reimplement anything at will, and combine like Lego. That guy saved me a LOT of time and bug hunting. Want a queue? Pick from four different flavors! Want priorities? Done! Want to keep the queue but change the execution style or locking mechanism? No problem!
As to switching, I've always generally been a Windows guy, and C# is currently the single fastest way to develop a GUI and not get "stuck" too much, because it can call C or COM style APIs directly. When it was first released, a good former Java/C++ developer could get started with it quickly, and develop GUIs 2-3x quicker than anything else, which then ran smoothly, and looked native.
I still get frustrated, especially with the lack of decent containers, algorithms, and threading frameworks, but C# is still overall the best. I do a lot of very Windows platform specific stuff like Active Directory manipulation, and it would be very hard to do that quickly (but correctly) with any other language.
Java is great for "server side" development. It has better database binding** libraries, threading, third-party support, containers and frameworks, and a much better community. However, its client-side is just terrible, especially the GUI frameworks. SUN apparently still hasn't learned the key to Microsoft's success story: own the client, and you will own the world.
It's only recently that Java IDEs got decent "drag & drop" forms development, while Microsoft is already a generation ahead with WPF which very cleanly separates code and layout, to the point that artists can do layout almost completely independently of the dev team. Think of what HTML and CSS tried but failed to do, but done properly.
*) Microsoft has an allergy to interfaces. It's like they're trying to tell you that they "own" the API, and you, the developer, should keep your dirty little mitts off it.
**) Microsoft's LINQ to SQL is practically a beta at this time. They don't even support multi-column keys! Its big brother, the "Entity Framework", didn't support foreign keys until .NET 4, which is currently beta, and the GUI editor still fails on all but the simplest models. Something like 60% of the features, if used, disable the GUI editor completely. Microsoft isn't even planning to finish the EF framework GUI, ever. Every couple of years, they come up with a new data binding framework, drop the old ones, never finish it, and then they repeat, not having learned a single lesson. I've lost count... there's been, what: DDE, ODBC, ADO, ADO.NET, LINQ, EF, and now they're up to some garbage called "M" or "Oslo" or whatever. I'm certain it'll be buggy, slow, incomplete, and replaced in short order. Just watch.
Great tools, rough practices... (Score:3, Insightful)
If only Microsoft's architecture and best-practices groups actually worked to leverage the efficiency of the tools, rather than just drain it. The thing about Microsoft is that the tools are good, yes, but then they sell them with these practices and recommendations that drain innovation and drag all developers down to a lowest common denominator.
It's really Taylorism all over again, applied to computer programming. The problem is, Taylorism is what GM and Ford and Chrysler do, and what the unions are locked in with. Just as we have pipefitter and machinist titles of varying kinds on the shop floor of an old-style manufacturing plant, we have guys being pushed into database, UI design, or middle-tier roles, and really, 90% of all projects could be done by one guy putting together a halfway decent screen in a craftsmanlike fashion.
At this point, Microsoft is headed out just like GM - recruiting a lot of the best engineers, then just killing them in red tape, and delivering products that increasingly fail to captivate their market.
Quote's out of context (Score:3, Informative)
I was at the talk, and yes, Don Box said "I will fight you if you try to take away my text editor", but it was after being asked a leading question by Erik Meijer, something along the lines of "will we ever write software entirely without writing text?"
However, what was Don doing for the rest of the PDC? He was hawking Entity Framework and M, both of which allow users to model data access using rich graphical tools!
The talk is here: http://microsoftpdc.com/Sessions/FT52 [microsoftpdc.com].
The big problem is "builds". (Score:3, Interesting)
Ask yourself why we have "builds", where everything gets rebuilt. Do I have to have my ICs re-fabbed when I change the PC board design? No. We're still not doing components right.
Historically, the big problem came from C include files. Everything but the kitchen sink is in there. There's no language-enforced separation between interface (the parts clients of the module see and may have to recompile if changed) and implementation (the part the implementations see). Also, you can include files inside include files, even conditionally. So developing the dependency graph of the program is hard.
C++ made things worse, not better. The private methods of a C++ class have to appear in the header file, which exposes more of the internals than is really necessary. Every time you add a new private method, the clients, who can never see or use that private method, have to be recompiled. This not only produces cascading builds, it discourages programmers from adding new private methods rather than bloating existing ones. That's bad for code readability and reliability.
Ada explicitly dealt with this. Ada has a hard separation between interface and implementation. This was considered a headache when Ada came out, but now that everyone has bigger monitors, it's less of an issue.
Java, despite having interfaces, seems to have build and packaging systems of grossly excessive complexity. I'm not really sure why.
The next problem is the "make" mindset, which is built on timestamps. "make" doesn't check what changed; it checks what was "touched". If "make" decided what had changed based on hashes rather than timestamps, many unnecessary recompiles would be avoided. Something could run "autoconf", produce exactly the same result as last time, and not trigger vast numbers of recompiles.
There's also the tendency to treat "make" as a macro language rather than a dependency graph. This results in makefiles that always recompile, rather than only recompile what's needed.
It would be useful if compilers output, in the object file, a list of every file they read during the compile, with a crypto grade hash (MD5, etc.) of each. A hash of the compile options and the compiler version would also be included. Then you could tell, reliably, if you really needed to rebuild something.
Crikies. (Score:3, Insightful)
People are reading a lot into this that isn't there. These people use Visual Studio, and I don't think they'd claim that using a GUI to design a...GUI is a bad thing. They're referring largely to the new modeling tools MS is pushing with VS2010, and they're saying sometimes it's quicker to just write the code than design the model. And indeed it is.
It's a tradeoff. For example, they already have some modeling tools (web service factory) for developing a web service. You layout interfaces, data contracts, message contracts, etc... and associate them visually. I think this sucks, personally, and I still just do it the old school (and much quicker, more powerful) method by creating an interface and data contracts. But for some scenarios designing the model might pay off in terms of self-documentation and allowing some standards to be followed by multiple developers working on a web service.
Re: (Score:2, Funny)
Wow! I never thought I'd see a "crappy Microsoft software made me disabled!" post on Slashdot. Though I guess it shouldn't com
Re: (Score:2)
Actually the stress was from management and coworkers, which the stress from Dotnet language was nothing compared towards. Everything was fine at my job, until I got mentally and physically sick, and then I was discriminated against until I kept getting sicker and sicker and eventually fired for being too sick.
I remember these words:
"Programmers are a dime a dozen. We get 500+ resumes a week for your position alone. We can easily hire a programmer who won't get sick on the job for a fraction of what we pay
Re:I agree (Score:5, Interesting)
because the modern Microsoft development tools need that infernal Dotnet library to be loaded, and when it gets messed up, any software that depends on it does not work.
Indeed. One of my PCs has a broken '.Net framework' which can't be fixed without a complete reinstall of the operating system: even Microsoft's own 'completely obliterate every last trace the bloody thing' uninstaller isn't enough to remove all the traces which prevent it from reinstalling properly. As a result, a lot of new software simply will not run.
Fortunately I do most of my useful work on Linux or Solaris these days so not being able to run random Windows software is no big deal, but '.Net' is such a monstrosity that it makes 'DLL Hell' look good in comparison; if even Microsoft can't fix it when it breaks, what chance do users have?
Re: (Score:3, Informative)
WTF?
Visual Studio supports plain old C++. In fact, the new MSVS is THE BEST editor for plain old C++, with the best autocomplete and refactoring support for C++ that exists in this Universe. I routinely write kernel-mode code in it, for example.
Some features like online C++ error checking are simply unique.
Re:I agree (Score:5, Informative)
Huh? If you don't want .NET, don't use a Managed C++ project, use a native C++ project. You can control exactly what libraries are included, and no, it doesn't include .NET libraries by default. I'm not sure how you can claim to know anything about Visual Studio, if you don't know this.
Re: (Score:2)
Actually BASIC and Visual BASIC are beginner languages. The B in BASIC stands for beginners. Not a "baby" language. You can still get things done in Visual BASIC but you don't have the control or memory management of C++ or C#.
I never used the Visual C++.Net languages, so I didn't know that. I usually use Turbo C++ for MS-DOS, or Borland C++, or even GNU C++ or MINGW C++ or C++ with Cygwin instead. I don't really see a need for a MS-C++ anymore when there are FOSS alternatives or cheap alternatives as in Tu
Hey remember the 8-bit 1970's and 1980's? (Score:5, Insightful)
When BASIC was bundled with almost every 8-bit computer sold in the 1970s and 1980s? It was the default language on most systems sold, and other languages like FORTRAN, C/C++, Pascal, COBOL, etc. had to be bought separately. So the el cheapo way to program was BASIC. Many computer magazines printed BASIC programs, along with cross-platform modifications for the Apple //, Commodore 64, IBM PC MS-DOS, Atari 800/400, TRS-80 and TRS-80 CoCo, TI 99/4a, and the other machines that had BASIC by default.
Borland turned Turbo Pascal into Borland Delphi, while Microsoft turned GW-BASIC and QuickBASIC into Visual BASIC. Then, when Windows dominated, Visual BASIC took over from C++ and Delphi, as it was easier for managers to understand. I didn't program in Visual BASIC because I liked it; I programmed in it because the job required me to. My managers didn't understand Java, C/C++, Python, Perl, PHP, etc., only BASIC and Visual BASIC, so they didn't trust programmers to write in anything else. Most Microsoft Windows IT/IS departments are run almost the same way, and most of them use Visual BASIC (or in some cases Visual C#, as it is easier than C++) because it is easier for management to understand what their programmers are doing and to review code.
You don't earn money programming with your choice of a programming language, you earn money with a programming language that your employers choose for the jobs that are available in your area. Unless you want to migrate to a Linux or Macintosh IT/IS department, but sometimes you have to settle for a Visual BASIC development job and have no choice in the matter.
Just that now Microsoft programmers are catching on that Dotnet is rotten, and if it gets corrupted, any program written to use it won't work.
Re: (Score:3, Informative)
Did you strip debug symbols from the executable produced by MinGW?
Yes, I gave g++ the flags -s to strip debugging symbols and -Os to optimize for size. The <cstdio> version stripped down to 5,632 bytes, but the <iostream> version stripped down to 266,240. I investigated further, without the -s flag, and it turned out that whenever GNU libstdc++ creates an ostream, it also creates locale objects to represent date, time, and money formats, even if the program never outputs a date, time, or money object. Similar executable sizes were seen with devkitARM, a distri
Re: (Score:3, Interesting)
Interesting. I have been writing Qt applications on Windows using MinGW for a while and just assumed my executables were huge because of Qt.
I just tested what you said: whipped up Hello World using libstdc++ and got nearly the same byte count as you did. It was 474990 bytes with debugging symbols in it!
I recompiled with -Os and stripped the executable and got it down to 265728. Jesus.
Re: (Score:3, Interesting)
Judging by what GP said, it seems to be a library problem, not the compiler problem. I assume the way they wrote formatting code for iostream somehow references all locale facets, triggering their instantiation.
Still, iostreams are fat on any implementation. Just checked on VC++2010: with "optimize for size" it was 95Kb for a HelloWorld.
Re: (Score:3, Informative)
Try it with the project set to static link the C++ runtime. Otherwise a lot of the code is going to be in the msvcrt??.dll instead of the executable, and it's not a valid test.
The test was for a statically linked binary (it's what you get by default when using cl.exe with no other options). Dynamically linked (/MD) one is 7Kb.
Re:I agree (Score:5, Informative)
Many companies aren't big enough (or use Windows extensively enough) to get a volume license. And besides that, the significant cost is not the license, but replacing the hardware, and all the man-hours of work getting all the old apps up and working again.
Windows 9x will remain for many years to come, on business PCs with modest needs. And yes, there periodically need to be new programs written, as well as several old programs maintained.
Re:Programmers I've worked with (Score:5, Insightful)
The biggest posers I worked with used Visual Studio. The best group of programmers I worked with used a text editor. That group could code rings around the VS crowd. The best of the best of them used vi.
This is absurd. Visual Studio, Eclipse, Vim, these are fucking *tools*. People use tools, not because the people are better, but because they find the tools useful.
Me, if I'm writing code for Unix or my DS, yeah, I prefer a maximized xterm, GNU Screen, and Vim. But if I'm writing a .NET application, I'm gonna use Visual Studio, as it's a very powerful development environment (doubly so when coupled with ViEmu).
OTOH, people who judge others based on their choice of IDE? Those people *are* tools...
Re: (Score:3, Interesting)
The article didn't actually have much in it (it was Computerworld, after all), but it makes sense if you read it as being about visual programming tools instead of Visual Studio. The article read, as CW articles tend to, like a good article that was cut to 1/4 its original size just because.
With today's libraries, a half-decent IDE can make a huge difference in productivity. But the comment about zooming in and out makes perfect sense if you think of the "I'll drag an IF block here, then wire it to the blah f
Re: (Score:3, Insightful)
While it is obviously true that suc
Re:Programmers I've worked with (Score:5, Interesting)
But the OP may be right if it is statistically true (I don't know that it is) that there is a high correlation between "good" programmers preferring a text editor and/or "posers" preferring Visual Studio.
I'll buy the former, but absolutely *not* the latter.
Coding with a simple text editor, make/gcc/etc, and gdb implies a fundamental set of skills: familiarity and comfort on the command-line, ability to (presumably) write and invoke Makefiles, ability to use gdb (which, let's face it, ain't pretty for a newcomer), and so forth. So it stands to reason that there's a greater chance such an individual has the skills necessary to write decent code (also known as "trial by fire"), as a poorer developer would likely be scared away.
But I find it very difficult to believe that, given a population of VS users, there's a disproportionate number of crappy developers (i.e., that the ratio of unskilled to skilled developers exceeds the ratio in the software developer population at large). A weak developer will obviously use any tool that makes the job of writing code less daunting, and a good IDE definitely fits in that category. And a strong developer will use whatever tool makes their job easier, and guess what? VS is a *damn* good tool, particularly if you're targeting the .NET stack.
So I would say this: a developer who's happy using a simple text editor/compiler/debugger combo has a greater-than-average chance of being a good developer. But you can assume nothing about a developer who, given the choice, picks an integrated IDE over that environment.
In fact, I would go so far as to say that any developer who actively *shuns* IDEs without good reason (and, BTW, simple familiarity is a good reason... mindless elitism, however, is not) deserves as much skepticism as one who isn't capable of using the editor/compiler/debugger environment, as they're expressing dogmatic tendencies that can be deeply counterproductive in the work environment.
Re: (Score:3, Funny)
Please don't do that. I will hunt you down and deliver a round-house open-handed slap to your ear, rupturing your eardrum asunder. Consider that next year I might have to maintain your 512-column code. There is nothing more nauseating than opening someone's code in a standard xterm and seeing single lines wrap around and around the fucking terminal.
Wouldn't it be easier to turn off line wrap in your editor?
Re: (Score:3, Informative)
you really should try and use 80 columns only
Hi. The Time travel tourist board called, they said your "work in the future" visa was about to expire and could you make your way back to 1978 please. :)
Re:Why I prefer plain old text editors (Score:5, Insightful)
Yeah, because IDEs don't have lists of errors, and double clicking on the error doesn't take you right to the line where the error is. It's so much easier to hunt through random code than to look at the little red squiggle on the line the IDE pointed out to you.
I'd call you an idiot but that would be insulting to idiots.
Re: (Score:3, Interesting)
When I get an error, the compiler also points exactly to the line where the error is. So your insults are completely unwarranted.
Unless I forced to, I would never touch those kits (Score:3, Interesting)
There are times I am forced to: if I'm doing gaming video, I have to do it using the Direct-X toolkit.
I mean, there is no other way around, since some users are using ATI cards and CUDA is useless on ATI GPUs.
But on other projects, I go bare metal, and when I have the chance, I go assembly.
Re:Unless I forced to, I would never touch those k (Score:4, Insightful)
C on an 8-bit microcontroller? (Score:3, Interesting)
I don't know why anyone would still program in assembly for anything other than, say, emulators which need to be built for speed.
How about programs designed to run in the same class of systems that such emulators emulate? There are dirt-cheap mass-produced computer-on-a-chip designs that connect an 8-bit CPU core clocked at under 5 MHz to a TMS9918-like video generator. You're dealing with something comparable to a Sega Master System or NES on a chip. C on a Z80 or 6502 isn't pretty.
Re: (Score:3, Insightful)
Embedded development and bootstrapping is the last bastion of necessity in assembly.
Any other use is likely for obfuscation, academia or pride.
Re:C on an 8-bit microcontroller? (Score:4, Interesting)
"Embedded development and bootstrapping is the last bastion of necessity in assembly.
Any other use is likely for obfuscation, academia or pride."
Or speed where it matters, because nothing beats speaking in one's native tongue for communication.
Which is why a great many OS components are written in x86 assembler.
There's even an OS written entirely in assembler that fits on a 1.4MB floppy and does pretty much everything Windows does, faster, in a smaller memory footprint and in 1/7,000th the space, minus gaming.
IOW assembler is still plenty useful, not just for embedded markets or bootstrapping.
Re: (Score:3)
If speed matters it's cheaper, easier, and faster to buy more processing power.
Re:C on an 8-bit microcontroller? (Score:4, Interesting)
Embedded development and bootstrapping is the last bastion of necessity in assembly.
Any other use is likely for obfuscation, academia or pride.
About this time last year I was told to implement (in a C++ app) something not unlike the IMPORT command in VB. In other words, it allows you to call an arbitrary DLL function from a script language in our application.
Since we do not know at compile time how many parameters are going to be passed, the only way I could think of doing this was to manually construct the stack into the required calling convention using PUSH instructions and do a JMP EAX to call the function.
It took about 10-20 lines of assembler. I then had to do a couple of alternative implementations to support win32, elf32, win64, elf64 and win32 on ARM for the PDA version, but since there wasn't much code involved, the whole exercise took about a week from nothing to fully working.
Maybe there is a way to do this in C++ without resorting to assembler, and without requiring the called module to have a predefined interface, but I'm damned if I can think of one. To be fair, it is the only assembler module in the entire program.
Re: (Score:3, Insightful)
I'd call that "runtime environment glue" and that's an extremely appropriate place for assembly code in modern programming. You're quite literally outside the language at that point, when you have to manually manage the calling convention.
Re: (Score:3, Insightful)
Embedded development and bootstrapping is the last bastion of necessity in assembly.
Any other use is likely for obfuscation, academia or pride.
If my employer is any guide, it's rapidly dying in embedded development. Processors are fast enough, and optimising compilers sophisticated enough, that assembler is simply unnecessary.
I can't see assembler being in heavy use outside of ever more esoteric branches of embedded development over the next few years.
Re: (Score:3, Informative)
Does that work even if you're targeting generic i686, which only goes up to MMX? Does the speed gain outweigh the time overhead to check which instruction sets are available and the space overhead of multiple implementations, one for each instruction set?
When speed matters, runtime processor checking is a triviality. Think about it.
Anyways, runtime cpu checking is normally only done once, often during program initialization.
There is a reason that the fastest implementations of heavy-lifting algorithms (encryption, compression, etc.) are written in assembly, and it's not because the compiler is only slightly sub-optimal. It's because the abstract machine model for the language simply doesn't contain the concepts necessary to describe the (presumed to be..
Re:Unless I forced to, I would never touch those k (Score:4, Insightful)
Not really. Compilers these days do a better job at optimizing most code than most assembly programmers (for well-supported CPUs anyway), mostly because instruction timing and dependency issues in modern complex CPUs are quite complicated, and compilers are able to take a lot more into account (just because the code 'looks' tighter doesn't mean it'll run faster). The only place where it really makes sense to use asm is for tight inner algorithm loops, especially when you can use SIMD instructions.
Re: (Score:3, Interesting)
OpenGL doesn't encapsulate anything other than graphics. DirectX encapsulates input, 3D acceleration, sound, etc.
From a developer standpoint, DirectX is a no-brainer when it's available.
Re: (Score:3, Interesting)
Thank you, but that's not what I was asking, although the comparison is both interesting and informative (to me, at least). What I wanted to know is why Linux devs don't feel the need for a Linux version of Direct-X. However, it occurs to me that you have answered my question, indirectly: OpenGL may not do everything Direct-X does, but it does enough that Linux devs don't feel the need for a more comprehensive solution. Thank y
Re:Unless I forced to, I would never touch those k (Score:4, Informative)
Re:Uh, sure... (Score:5, Insightful)
The point you are missing is that "bare-metal code" is assembler, regardless of how much effort is involved.
I again have to point you to Linux or *BSD; these OSes have real-time drivers in C. I don't recall seeing *any* peripheral driver in Linux that is not C. Practically all assembly code is under arch/, which means bootstrap, memory initialization and main timers. The rest is C.
Go ahead and write a real time driver in C, let me know how it works for you.
I'm doing this right now, and it is a very usual thing for me to do because I work on firmware for slower microcontrollers that run at clock speeds from 1.8 to 16 MHz. I have tons of peripherals in the MCU, and they must be serviced on time. A typical MCU project is a real time design. Sometimes I profile the code by connecting an oscilloscope to some spare pins and check that I have enough time in critical parts of the code. And *all* the code is C, compiled by avr-gcc 4.3.2. I have maybe 0.1% of the code that is in assembler, and that is stock macros that come with avr-gcc.
To illustrate, here [linux.no] you can see the lowest level of avr32 support, and you can observe how many LOCs are in .S and how many are in .c files. If still not convinced, visit mm/ [linux.no] and see what language the do_page_fault() [linux.no] is implemented in; that is one of most performance-critical pieces of code. C today is the "bare-metal" language of choice, and it works well in that role.
Re:Uh, sure... (Score:5, Insightful)
"I work on firmware for slower microcontrollers that run at clock speeds from 1.8 to 16 MHz"
No wonder you can afford to use C! ;)
I can buy a $1.20 microcontroller that's more than an order of magnitude faster than the machine I learned to code on. Still, there are times when a nice bit of assembly makes a big difference. Not much, just as much as you need.
As for the rest of the thread, assembly IS the bare metal. C is the next best thing to bare metal. Not to say that you should code everything in assembly, that would be silly. But being aware that it exists can't hurt.
Re: (Score:3, Informative)
I keep being told by .NET people it's really not that heavy and it's much more productive, etc. etc.
I feel the same way about Python. CPython is a naive interpreter, one of the few still used for serious programming. Even JavaScript has JIT compilers now. Not only that: on a multicore CPU, each CPython thread runs slower, due to badly designed contention over the global interpreter lock.
If CPython didn't suck so bad at performance, Google wouldn't have had to write "Go". Sad.