Microsoft's Top Devs Don't Seem To Like Own Tools

ericatcw writes "Through tools such as Visual Basic and Visual Studio, Microsoft may have done more than any other vendor to make drag-and-drop-style programming mainstream. But its superstar developers seem to prefer old-school modes of crafting code. During a panel at the Professional Developers Conference earlier this month, the devs also revealed why they think writing tight, bare-metal code will come back into fashion, and why parallel programming hasn't caught up with the processors yet." These guys are senior enough that they don't seem to need to watch what they say and how it aligns with Microsoft's product roadmap. They are also dead funny. Here's Jeffrey Snover on managed code (being pushed by Microsoft through its Common Language Runtime tech): "Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore." Snover also joked that programming is getting so abstract, developers will soon have to use Natal to "write programs through interpretative dance."
  • So what? (Score:4, Insightful)

    by Aphoxema ( 1088507 ) * on Saturday November 28, 2009 @10:01PM (#30258234) Journal

    I hate Microsoft more than anyone, but... I really don't see an issue or any hypocrisy here.

  • by denalione ( 133730 ) on Saturday November 28, 2009 @10:12PM (#30258276)

    It does not affect my decisions at all.
    Businesses aren't in business to push programming ideology. They are in business to make money. If I need an application I'm going to get the application that does the job for the least amount of money (all the caveats about it not being poorly written and being moderately open to possible future expansion, etc.. apply). If I need bare-metal code then I'll get a guy to do that. If VB will do the job then I'm going to get a guy to do that and probably a bit cheaper. I don't care what the language is. I care that the problem is solved adequately for the least amount of overhead possible.

  • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Saturday November 28, 2009 @10:20PM (#30258328) Homepage

    Wow. BadAnalogyGuy got an Insightful. Someone didn't read the full comment.

    Of course, this isn't true. The thing about end users is, they generally don't know what they want. Even if they had a tool that would do whatever they say, it won't solve their problem because they don't know how to formulate it. The tool would need to read their mind, to the point of making something they didn't even realize they really wanted.

  • by icebike ( 68054 ) on Saturday November 28, 2009 @10:22PM (#30258332)

    Well you are very close to the mark here.

    The integrated IDEs arose to let the hobbyist and corporate IT newbie turn out something that has a good chance of being functional, if not maintainable or efficient. These tools also let specialists from other fields (accountants, meteorologists, scientists) actually turn out products.

    Efficiency, maintainability and extensibility don't matter for that type of programmer. They just need to put up a few screens to count the widgets or produce the daily report. It doesn't have to be portable or efficient, and chances are it will be tossed as soon as that programmer moves on to another job.

    Those programmers do manage to turn out a reasonable number of customized applications, some of which are actually marketable. The vast majority are for in-house use. But some actually work quite well for specialized industry applications like medical billing, where knowledge of the subject matter is more important than the efficiency of the code.

    OS-level code has to be much more efficient, and there is no substitute for knowing the programming language, the processor capabilities, and the compiler peculiarities. You cannot leave to some IDE the task of putting code behind a button when it will drag in half a ton of MS Foundation Classes or use some particular C/C++ construct that is horribly inefficient. You are basically always dealing with some data stream or message, doing X, Y, and Z with it, and handing it off to the next task.

    As such, these guys virtually never see a piece of data all the way through the computer. Their customers are other pieces of software. Their wholesalers are yet more pieces of software. It's data-in, whack it, pound it, and pass it on sort of code, and a lot of it, and a lot of self-plagiarism. The IDEs just get in the way.

  • Re:So what? (Score:5, Insightful)

    by Shakrai ( 717556 ) on Saturday November 28, 2009 @10:25PM (#30258366) Journal

    I really don't see an issue or any hypocrisy here.

    Yeah, really. Senior Engineers disagree with company marketing strategy and prefer to keep things simple. That isn't newsworthy -- that's a Dilbert strip ;)

  • Re:pros and cons (Score:5, Insightful)

    by sys.stdout.write ( 1551563 ) on Saturday November 28, 2009 @10:28PM (#30258378)
    C'mon, this is unfair. By your logic we shouldn't have Perl or Python or any other scripting language because they "[don't] give nearly as much control because it tries to spoonfeed you."

    There are lots of situations when you don't need to twiddle the bits or delete your own allocated memory. What's wrong with simplifying the language for simplified tasks?

    It's not like Microsoft doesn't support low-level languages.
  • by icebike ( 68054 ) on Saturday November 28, 2009 @10:32PM (#30258402)

    The closer the programming environment can come to providing domain-relevant expression tools to the user, the better they will be able to create programs that fit their domain.

    Well said.

    Nobody does accounting better than an accountant.

    But it's not always the best idea to hand data system development to an accountant and hope for the best. Someone has to guide that guy doing the data editing, manipulation, and storage, just like that guy has to guide the programmer on how to keep his company books, file taxes, etc.

    And this is where the current crop of tools fail. They let you build things that can go horribly wrong, because of simple errors that a professional programmer might have caught. They are like a bag of wrenches being a substitute for a good auto mechanic. (Obligatory car analogy).

    We probably need better design tools that can capture what it is the accountant wants in terms of inputs, outputs, retention, and quality assurance. Then the code cutting can be done by specialists, or generated code, as the situation dictates.

  • BASIC was bundled with almost every 8-bit computer sold in the 1970s and 1980s. It was the default language on most of the systems sold, and other languages like FORTRAN, C/C++, Pascal, COBOL, etc. had to be bought separately. So the el cheapo way to program was via BASIC. Many computer magazines published BASIC programs, along with cross-platform modifications for the Apple //, Commodore 64, IBM PC (MS-DOS), Atari 800/400, TRS-80 and TRS-80 CoCo, TI-99/4A, and other machines that had BASIC by default.

    Borland turned Turbo Pascal into Borland Delphi, and Microsoft turned GW-BASIC and QuickBASIC into Visual BASIC. Then when Windows dominated, Visual BASIC took over from C++ and Delphi, as it was easier for managers to understand. I didn't program in Visual BASIC because I liked it; I programmed in it because the job required me to do so. My managers didn't understand Java, C/C++, Python, Perl, PHP, etc., only BASIC and Visual BASIC, so they didn't trust programmers to write with anything else. Most Microsoft Windows IT/IS departments are run almost the same way, and most of them use Visual BASIC (or in some cases Visual C#, as it is easier than C++) because it is easier for management to understand what their programmers are doing and review code.

    You don't earn money programming in the language of your choice; you earn money in the language your employers choose for the jobs available in your area. You could migrate to a Linux or Macintosh IT/IS department, but sometimes you have to settle for a Visual BASIC development job and have no choice in the matter.

    Just that now Microsoft programmers are catching on that Dotnet is rotten, and that if it becomes corrupt, any program written to use it won't work.

  • by ChienAndalu ( 1293930 ) on Saturday November 28, 2009 @10:37PM (#30258436)

    do they use vim or emacs now?

  • by KibibyteBrain ( 1455987 ) on Saturday November 28, 2009 @10:39PM (#30258446)
    Only one of these guys said anything about not liking to use an IDE. I use IDEs to write assembly for microcontrollers at work every day. Sure, I could do it in an editor as well, but I much prefer the graphical debugger and simulator of my IDEs: being able to see all the dozens of control registers' and fuses' bits graphically during the execution of each instruction is easier for my mind to wrap itself around than a screen littered with hex or ones and zeros, at least sometimes. That said, my assembler's emitted machine code is no different than if I wrote the code in Vim and then ran the command-line build tools, which are the same thing my GUI runs when I press the associated F-key.

    So the IDEs really have nothing to do with the so-called "designers" you see in Visual Studio. And yes, it's true that no developer who was serious about maintaining a multi-year product could do it via the designers before; you just had no control over WTF was going on. Now with WPF and XAML you finally can use the designers in a maintainable fashion, but it's a bit too little, too late for most developers to care. You can write pure Windows code in C as a makefile project (so you have clear control over the build, even to the point of a different toolchain like GNU) just as well in the Visual Studio 2010 IDE as with Vim and the command-prompt tools. It's just a matter of whether the IDE at that point gets in the way more than it helps.
  • by Abcd1234 ( 188840 ) on Saturday November 28, 2009 @10:42PM (#30258464) Homepage

    The biggest posers I worked with used Visual Studio. The best group of programmers I worked with used a text editor. That group could code rings around VS. The best of the best of them used vi.

    This is absurd. Visual Studio, Eclipse, Vim, these are fucking *tools*. People use tools, not because the people are better, but because they find the tools useful.

    Me, if I'm writing code for Unix or my DS, yeah, I prefer a maximized xterm, GNU Screen, and Vim. But if I'm writing a .NET application, I'm gonna use Visual Studio, as it's a very powerful development environment (doubly so when coupled with ViEmu).

    OTOH, people who judge others based on their choice of IDE? Those people *are* tools...

  • by KingSkippus ( 799657 ) on Saturday November 28, 2009 @10:47PM (#30258500) Homepage Journal

    The original article and the summary both come off as rather smug to me. The truth of the matter is that you need both low-level nitty-gritty programming and high-level programming. It depends on what you're using it for.

    Think of it this way. You have people who make pipes. You know, the kind used in plumbing. Fittings, too. And they're very good at it. If you take your average house builder and try to get him to make a pipe, he'll be hopelessly bad at it. But you know what those guys who build houses are good at? Putting the pipes together in meaningful ways to get what they need (i.e. building a house) done. Take a guy who's brilliant at making pipes and fittings and try to get him to build a house. Yeah, not such a superstar now.

    It's the same with programmers. Tell someone who is very good at writing low-level code, "I need a killer efficient compiler." Give them enough time, and they can churn it out and make it wicked optimized. Tell them, "I need a new type of control that works in this specific way and with crucial efficiency," and they're your guys. Tell them, "I need a new application entirely from scratch that can process my specific business logic, it needs to look and feel like a standard Windows application, it needs to be easy for end users to figure out and work with, and we need a working version in a couple of weeks," and they'll probably laugh at you. Yet that's what those people they're looking down on, the people developing with higher-level abstracted languages, are doing every day.

    In my experience, competence != usefulness. They're not opposites, mind you, but it takes both types. It takes the people who work with the low-level nitty-gritty stuff, and it takes the people who use what they churn out to actually accomplish real-world productive things. One isn't smarter, one isn't better, neither should be looked down upon. Both are necessary.

  • by Anonymous Coward on Saturday November 28, 2009 @10:47PM (#30258504)

    Yeah, because IDEs don't have lists of errors, and double clicking on the error doesn't take you right to the line where the error is. It's so much easier to hunt through random code than to look at the little red squiggle on the line the IDE pointed out to you.

    I'd call you an idiot but that would be insulting to idiots.

  • by NoYob ( 1630681 ) on Saturday November 28, 2009 @11:03PM (#30258618)
    When I have to write a Windows GUI app, C# rocks! I can design the UI, whip off the code, and be done with it. It's better than MFC, and after writing countless message loops in Win32 (and OS/2, for that matter), I don't think writing more GUI boilerplate code, using the APIs to match resource IDs, and all that mindless coding will do any good, especially when I need the time to figure out an algorithm or other problems. And I can still do low-level stuff (really low level) with P/Invokes, so the only thing I'm really missing out on is busy work. And if there's a time when I DO need the control and abilities of unmanaged code, well, there's the C++ stuff.
  • Re:Wow! (Score:4, Insightful)

    by ClosedSource ( 238333 ) on Saturday November 28, 2009 @11:12PM (#30258674)

    Well, I like some of those guys, but as someone who has been closer to the bare metal than they have (I'm a former Atari 2600 programmer), I'd say that the habits of old pros say little about the quality of today's tools.

    We used 6502 cross-compilers on a PDPxx and VAX, not because we thought the command-line was better but because that was the best we had at the time.

    BTW, I'm not trying to say that I'm better or smarter than those other guys, I just have written very low-level, real-time code.

  • Re:Uh, sure... (Score:5, Insightful)

    by tftp ( 111690 ) on Saturday November 28, 2009 @11:29PM (#30258766) Homepage

    The point you are missing is that "bare-metal code" is assembler, regardless of how much effort is involved.

    I again have to point you to Linux or *BSD; these OSes have real-time drivers in C. I don't recall seeing *any* peripheral driver in Linux that is not C. Practically all assembly code is under arch/, which means bootstrap, memory initialization, and main timers. The rest is C.

    Go ahead and write a real time driver in C, let me know how it works for you.

    I'm doing this right now, and it is a very usual thing for me to do because I work on firmware for slower microcontrollers that run at clock speeds from 1.8 to 16 MHz. I have tons of peripherals in the MCU, and they must be serviced on time. A typical MCU project is a real-time design. Sometimes I profile the code by connecting an oscilloscope to some spare pins and checking that I have enough time in critical parts of the code. And *all* the code is C, compiled by avr-gcc 4.3.2. Maybe 0.1% of the code is in assembler, and that is stock macros that come with avr-gcc.

    To illustrate, here [linux.no] you can see the lowest level of avr32 support, and you can observe how many LOCs are in .S files and how many are in .c files. If still not convinced, visit mm/ [linux.no] and see what language do_page_fault() [linux.no] is implemented in; that is one of the most performance-critical pieces of code. C today is the "bare-metal" language of choice, and it works well in that role.

  • by v1 ( 525388 ) on Saturday November 28, 2009 @11:31PM (#30258776) Homepage Journal

    Who let a VB coder implement a cryptosystem

    Well, in this case all we actually ended up needing was the SHA-1 and MD5 hashing, but he wanted to be complete about things, so we did the full RFC. Still, we had to hash 1 MB blocks, so speed was important: 4 seconds dropped to a very small fraction of a second. But you really got to see the improvements when trying to do actual encryption.

    But when coding crypto, the tool isn't the first place you should look for security - pay more attention to the meatbag.

  • Re:So what? (Score:5, Insightful)

    by shutdown -p now ( 807394 ) on Saturday November 28, 2009 @11:40PM (#30258818) Journal

    Yeah, really. Senior Engineers disagree with company marketing strategy ...

    Who said that "graphical programming" is company marketing strategy today? Oh, sorry, it's another kdawson story; you expected any facts here? Let me explain then.

    Anyone who deals with .NET tools knows that there has been a recent shift back toward code. For example, WinForms development was too tedious without the visual drag-and-drop form editor, but WPF markup is best hand-coded, just like HTML (VS provides a visual editor, too, but hardly anyone uses it for anything except quick preview). Or take what used to be called "typed datasets" - also very designer-centric, but with LINQ2SQL and Entity Framework, again, most people stick to writing code and mappings in XML by hand.

    In fact, it's easy to find out that much if you just look up the names mentioned in TFA. For example, who is Don Box? He's working on the Microsoft "Oslo" project [wikipedia.org], a next-gen modeling platform which was hyped [msdn.com] back at PDC2008, and all Microsoft managers in the division blogged about how it's the next big thing, etc. And the main difference between that platform and the existing "DSL" tools in VS2008? Oslo is centered around text-based DSLs, and comes with an Emacs-like editor [msdn.com] which can handle them.

    In short, developers of the new tools for Microsoft development platform - which are fully backed by marketing - criticize some aspects of the previous generation of tools. Surprise, eh?

    But go ahead and ask them what they use to edit C# code that they write, and I bet you'll hear "why, VS of course".

    Then there's Jeffrey's comment on .NET. I don't see anything fundamentally wrong with it, and if you RTFA, you'll see that he is an architect for PowerShell - a very high-level scripting/shell language built on top of .NET! To interpret his comment as a criticism of managed code, when he is in fact the one "pushing" for it via the tech he works on, is rather disingenuous.

  • by Darkness404 ( 1287218 ) on Sunday November 29, 2009 @12:23AM (#30259048)
    I don't really see the point in using assembly anymore. Yeah, it's fast. But it's also a total pain to port and sometimes to maintain. With predictions saying that ARM will eventually catch up to x86 within the next 5 years, the possibility that ARM netbooks could be coming, etc., I don't know why anyone would still program in assembly for anything other than, say, emulators, which need to be built for speed.
  • by Lord Grey ( 463613 ) * on Sunday November 29, 2009 @12:34AM (#30259092)

    ... 97% of today's coders don't have any idea what they've missed out on and just accept what they've got. ...

    My apologies for snipping such a large portion of your reply, but that one sentence from your post nicely sums up so many of the problems with new coders it deserves calling out.

    Disclaimer: I'm an old fart when it comes to programming. I admit it. I like bare metal programming, high-performance applications with minimal footprint, and elegant solutions to non-trivial problems. I don't avoid kernel-level threads; they're a useful tool.

    The company I work for has hired a large number of programmers over the last year in order to replace a number of aging systems. I've interviewed a lot of these people, and I've worked with most of the ones we've hired on various parts of the overall project. The newer programmers know quite a lot about available frameworks and their general capabilities. They've been taught the 80/20 rule early on, and they embraced it: when faced with a new task, these people find something that already exists and set about modifying it. All that is fine for applications of a certain size. A size that, apparently, is about the size of school projects, and therefore succeeds admirably when graded.

    So what I've seen coming through the door are people who can put Lego blocks together. They're used to that type of problem solving. They've been taught to download 80% of the solution, then "fix it" so it also does the other 20%. This type of problem solving works well when you're building Lego-block-shaped solutions. That fails to happen much of the time, however. Most real-world solutions -- you know, the kind that are complex enough that someone is willing to pay an actual salary to solve -- don't look like a collection of Lego blocks. The amount of custom code grows and grows as more and more Lego blocks are added. Interoperability problems between the Lego blocks start encompassing the majority of coding effort. The overall system gains complexity at an alarming rate. Things start to suck, both from the programmer's perspective as well as from a systems perspective.

    The bad part of this, and to bring things back to my original point, is that these newer programmers expect it to be that way. What's worse, at least from my point of view, is that this entire mentality has been around long enough for these programmers to stop coding and start managing other programmers. So now we have people who build things that suck, and managers who expect it to suck. Expectations are lowered and, unfortunately, met.

    Google and Apple seem unafraid to break this cycle, albeit in different ways. So hope is not entirely lost. Maybe that's the 3% you alluded to in your original post.

  • by Anonymous Coward on Sunday November 29, 2009 @12:36AM (#30259100)

    Also, experience has taught us that even elite programmers make errors, and the most common categories of errors happen to be of the kind that can be prevented by designing your language or platform properly. This results in enhanced reliability, security and performance, due to the lack of null-pointer dereferences, buffer overflows, memory leaks, and so on and so forth. Now, I'm not saying that you therefore have to use Dotnet; I'm not really a fan of having to install yet another disk space munching VM, when I already need Java (among other things) for my studies and to run a few handy free tools that I depend upon to handle certain parts of the life I enjoy efficiently. However, I can say that if Microsoft, and programmers across the globe, would deprecate and replace old APIs that for example use uncounted strings, non-garbage-collected memory, and so on, the world would be a better place. And you don't need Dotnet or Java - you can do all of these things in plain C++ or macroassembler if you want.
    "Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore."
    Not particularly funny, and not an argument to go back to the olden days. Quite the opposite.

  • by melted ( 227442 ) on Sunday November 29, 2009 @01:05AM (#30259224) Homepage

    The real reason why they don't use Visual Studio is far more prosaic -- the build environment of most Microsoft products does not support Visual Studio project files. Their products are built using a system called CoreXT -- basically a set of binary tools and scripts cobbled together by build engineers and developers over the past decade or so. CoreXT uses a lot of different crap (make, Perl, compilers, etc.), and all tools and SDKs are checked in and versioned. The upside is that you can roll back your Source Depot (Microsoft's own flavor of Perforce) enlistment to an earlier date and be sure things will build exactly the same way, and once you enlist, you get a repeatable, isolated build environment where you can guarantee the correctness of versions for all tools, compilers and libraries (a Java developer's wet dream, even though they don't know it). The downside is that you have to maintain the makefiles by hand, and you can't use Visual Studio, because there are no project files checked in; even if there are, most people don't use them and they are not updated, so you can count on them being broken.

    I did a lot of my coding in either Notepad2, or in a separate project in Visual Studio against a test harness emulating the rest of the project (what Enterprise Java types call a "mock"). Some folks used Ultra Edit or vi, or EMACS. For some just a bare Notepad did the trick. Some stuck with Visual Studio, which in their case was just a glorified Notepad with autoindent since it doesn't support build or Intellisense if you don't have a project file.

    Yes, it's an enormous waste of time, and yes, it was painful. But CoreXT is so integrated into the rest of the dev pipeline that replacing it with something else in a large product is a major, destabilizing endeavor that is bound to undo at least some of the work around gated check-in infrastructure, test infrastructure, automated deployment infrastructure and god knows what else, so few teams ever attempt it. Now naturally, DevDiv eats their own dogfood, so they were one of the first teams to switch completely to MSBuild. It took something like a year in their case, they did it gradually, from the leaves down the tree. I'm sure if they had a choice, they would be using CoreXT to this day though, and fighting with incremental build issues. :-)

    Recently, a few more teams have adopted MSBuild. They can actually open their entire projects in Visual Studio and rebuild them. If they have test infrastructure deployed on the side, some of them can even test the product without waiting for it to deploy. So I predict that as more and more teams adopt MSBuild (this in itself could take another decade easily), these "senior" folks will come around to appreciate its benefits. It's awfully handy when you can set a conditional breakpoint on your local box and step through things.

  • by onefriedrice ( 1171917 ) on Sunday November 29, 2009 @01:17AM (#30259270)
    Actually, you're both right. Or I should say, you are right and the OP may be right. You're right because it is obviously true that there exist at least some non-poser programmers who use Visual Studio and at least some "poser" programmers who use a text editor. But the OP may be right if it is statistically true (I don't know that it is) that there is a high correlation between "good" programmers preferring a text editor and/or "posers" preferring Visual Studio.

    While it is obviously true that such correlation coefficients do not equal 1.0 (since supposedly at least some of us subjectively know of some good programmers who prefer Visual Studio or some poser programmers who prefer a text editor), it is my opinion that there probably does exist a pretty strong correlation, on the basis (and assumption) that programmers who are familiar with a text editor are older and therefore more experienced than those whose only real experience is with an IDE. If this is true, then the OP is generally correct (and it is obvious that he was generalizing).
  • Re:Uh, sure... (Score:5, Insightful)

    by ceoyoyo ( 59147 ) on Sunday November 29, 2009 @01:35AM (#30259338)

    "I work on firmware for slower microcontrollers that run at clock speeds from 1.8 to 16 MHz"

    No wonder you can afford to use C! ;)

    I can buy a $1.20 microcontroller that's more than an order of magnitude faster than the machine I learned to code on. Still, there are times when a nice bit of assembly makes a big difference. Not much, just as much as you need.

    As for the rest of the thread, assembly IS the bare metal. C is the next best thing to bare metal. Not to say that you should code everything in assembly; that would be silly. But being aware that it exists can't hurt.

  • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Sunday November 29, 2009 @01:53AM (#30259396)

    Might have been more appropriate to compare it in that people in the high performance arena (nascar) don't like antilock brakes because of their limits and the separation you get from your task at hand. (you lose your "feel for the road")

    I always laugh when I read this sort of thing.

    In *real* high performance racing - Formula 1 - ABS (along with traction control, launch control, active suspension, and a whole bunch of other fancy electronics that basically turned the cars into ludicrously fast go-karts) was used very successfully and then banned, because it could do a far, far better job than any human.

  • by RAMMS+EIN ( 578166 ) on Sunday November 29, 2009 @01:54AM (#30259400) Homepage Journal

    In my experience, a lot of these "tools that are perfect for new programmers because they don't have to learn so much" really mean that you spend a lot of time learning the tool, and _then_ have to still learn what's really happening if you ever want to make it to the level of the "old programmers", and dog forbid you are ever required to use a different tool.

    To add insult to injury, the focus on the tool usually means there is so much boilerplate before you actually get to understanding the programs you write that it gets _harder_ to figure out what's really happening; you end up with "programmers" to whom programming remains some kind of black magic; they can only program things if there is a tutorial that tells them how to do it, a wizard that generates the code for them, or some sample code that they can copy paste. At best, they'll be able to glue together ready made code to make something that more or less works, but you won't find these people coming up with innovative solutions themselves, and you really don't want to let them anywhere _near_ code that requires an understanding of concurrent programming or security.

    Now, I am not saying that people who start by learning a tool will never make great programmers, or that people who start with a text editor and an assembler will always make great programmers. I just think that the idea that the former have an advantage over the latter is mistaken. At its core, programming is actually not that hard. So why not let people start out by giving them a good understanding of the core, and then let them focus their energy on the hard part: translating real-world requirements into units that are trivial to implement and test?

    In my opinion, a good tool is one that helps you do things you already know how to do, while not getting in the way if - for whatever reason - you want to do them yourself. No significant learning curve (you don't _have_ to use the tool's facilities), no black magic, and no breakage if you mix in code that isn't from the tool. A bad tool is one that gets in the way: requires significant effort to learn, does things you don't understand, chokes on code that doesn't fit its model, etc. A good tool doesn't make you a better programmer, just a more productive one. A bad tool doesn't make you a better programmer, either; it just provides a boost to bad programmers and cripples good programmers so both seem about equally bad.

  • by Anonymous Coward on Sunday November 29, 2009 @02:11AM (#30259438)

    If you'd go back and reread the statement, you'll see you argued against a point not made. Here's an example of what the GP means: I'm working on a team developing some computer vision software that will ultimately run under Linux. I develop under Linux, but my teammates do not. OpenCV works under Windows too, but when I get things running on my installation and send them the code, nothing. An error buried somewhere in the Visual Studio configuration means they cannot use the (correct) code without first identifying and changing library and linker options.

    I'd call you an idiot but that would be insulting to idiots.

    And who's the one who is incompetent?

  • by digitalunity ( 19107 ) <digitalunity@yah o o . com> on Sunday November 29, 2009 @02:22AM (#30259482) Homepage

    Embedded development and bootstrapping is the last bastion of necessity in assembly.

    Any other use is likely for obfuscation, academia or pride.

  • Oh please... (Score:5, Insightful)

    by bertok ( 226922 ) on Sunday November 29, 2009 @02:42AM (#30259542)

    I can't stand it when Microsoft developers talk about multi-threaded programming when the entire corporation has done the absolute bare minimum to make developers' lives easier. No wonder they don't like using their own tools: their tools are terrible.

    Many years ago, a brilliant third-party multi-threaded library [oswego.edu] was released for Java by a professor at Oswego university. I used it in several large production apps, and it absolutely rocked. You could build safe, reliable, scalable multi-threaded applications by simply snapping together flexible pieces like Lego. It was so good that it became part of the standard Java library, where it lives on as "java.util.concurrent". Compared to having to "hand craft" multi-threaded code in C++, it was wonderful. It's as if the lights had just turned on and everything had become clear to me.

    Now that I'm a C# dev, it's been a huge step backwards, doubly so because .NET was developed after the Oswego library was already popular, so Microsoft must have seen it and simply ignored it. For years afterwards, the entirety of multi-threading in both .NET and C++ was "threads" and "locks". The one nicety they included was an anemic thread pool in .NET which was just usable enough for the most basic tasks, but couldn't handle any real load. Even the locks were heavyweight inter-process kernel locks that are unusably slow for many tasks.
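    For readers who never used it: the appeal of that style is that you express the *work* and let the runtime manage the threads. Here's a hedged sketch in modern C++ (which only grew comparable facilities years later); the function names are illustrative, not from the Oswego library or any .NET API:

    ```cpp
    #include <cassert>
    #include <future>
    #include <numeric>
    #include <vector>

    // Task-based style: describe two independent chunks of work and let the
    // runtime schedule them. No threads are created by hand, no locks appear.
    long partial_sum(const std::vector<int>& v, std::size_t lo, std::size_t hi) {
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0L);
    }

    long parallel_sum(const std::vector<int>& v) {
        std::size_t mid = v.size() / 2;
        // Each task may run on its own thread; futures hand the results back.
        auto left  = std::async(std::launch::async, partial_sum, std::cref(v),
                                std::size_t{0}, mid);
        auto right = std::async(std::launch::async, partial_sum, std::cref(v),
                                mid, v.size());
        return left.get() + right.get();
    }

    int main() {
        std::vector<int> v(1000);
        for (int i = 0; i < 1000; ++i) v[i] = i + 1;   // 1..1000
        assert(parallel_sum(v) == 500500);             // Gauss agrees
        return 0;
    }
    ```

    Compare that with hand-rolled threads sharing a mutable accumulator behind a mutex: the task version has nothing that can deadlock or race, which is the "snapping together Lego" quality being praised.
    
    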

    It's only now in .NET 4 (which won't be final until 2010) that they are adding a small set of very basic lock-free containers, light-weight locks, and actual interfaces that one can implement in order to customize behavior. It's all still very basic, and nowhere near as flexible, powerful, or comprehensive as the Java APIs that are years old now.

    Microsoft's general attitude to API design is so bad that it can only be described as wilful ignorance. Reading articles evangelizing "modern multithreaded programming to better utilize new multi core processors" somehow feels like a religious zealot harping on about their appreciation of pure rational logic and science.

  • by gmack ( 197796 ) <gmack@noSpAM.innerfire.net> on Sunday November 29, 2009 @02:50AM (#30259574) Homepage Journal

    Just because you don't know how to use non-garbage-collected memory does not mean that functions that don't use it should be phased out. Garbage collection trades flexibility for safety, and if you remove the flexibility you will find a whole class of programs that would be a lot less efficient. These days a lot of work is being put into making things like NULL pointer bugs and memory overflows simply crash instead of allowing a backdoor into the system, and for some programmers crashing vs. running slow is a good trade-off.

    I need those functions to do my job, since I write network-based apps that require memory reuse and pointers to be able to process data as quickly as possible, and I've seen the results when some of our competition has attempted to do my job with garbage-collected languages (generally 3x the hardware requirements, 5x if they used Java).

    Not everything is a desktop application.
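    A minimal sketch of the kind of memory reuse the parent means (illustrative only, not from any real networking stack): recycle fixed-size buffers through a free list so the hot path never touches the allocator, let alone a collector.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Toy buffer pool: buffers are handed out and returned, never freed on the
    // hot path, so allocation cost (and GC pauses) stay out of packet handling.
    class BufferPool {
    public:
        explicit BufferPool(std::size_t buf_size) : buf_size_(buf_size) {}

        std::vector<char>* acquire() {
            if (free_.empty())
                free_.push_back(new std::vector<char>(buf_size_));
            std::vector<char>* b = free_.back();
            free_.pop_back();
            return b;
        }

        void release(std::vector<char>* b) { free_.push_back(b); }

        ~BufferPool() { for (auto* b : free_) delete b; }

    private:
        std::size_t buf_size_;
        std::vector<std::vector<char>*> free_;
    };

    int main() {
        BufferPool pool(1500);                 // MTU-sized buffers
        std::vector<char>* a = pool.acquire();
        pool.release(a);
        std::vector<char>* b = pool.acquire();
        assert(a == b);   // same buffer came back: reuse, no fresh allocation
        return 0;
    }
    ```

    A GC'd language can approximate this with object pools, but the language fights you; here the pattern is idiomatic.
    
    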

  • Re:pros and cons (Score:3, Insightful)

    by wwahammy ( 765566 ) on Sunday November 29, 2009 @02:58AM (#30259590)
    Managed code takes some control away from the developer, but is the developer having that control really for the best?

    For example, think of the types of errors that lead to security bugs. A lot of them are buffer overflows, primarily in string manipulation. These are easy mistakes to make in C or C++. Hell, Microsoft and others have had to add "safer" versions of the string manipulation functions to the C runtime library precisely because these errors keep happening. Now consider a managed language like Java or C#. It's not possible to overflow buffers, provided the buffer management code itself is sound. Instead of every mostly-average programmer managing buffers and reinventing the wheel, a few particularly talented people who specialize in that type of work develop the code once.
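    The three generations of that story fit in a few lines. A hedged C++ sketch (the unsafe call is left commented out for obvious reasons):

    ```cpp
    #include <cassert>
    #include <cstring>
    #include <string>

    int main() {
        const char* input = "a string longer than eight bytes";

        // Generation 1 - unsafe: strcpy trusts the caller to have sized the
        // destination correctly. Uncommenting this is a classic overflow:
        //   char dst[8]; strcpy(dst, input);

        // Generation 2 - the "safer" C runtime pattern: truncates instead of
        // overflowing, but the programmer must still remember both the size
        // argument and the terminator by hand, every time.
        char dst[8];
        strncpy(dst, input, sizeof(dst) - 1);
        dst[sizeof(dst) - 1] = '\0';
        assert(strlen(dst) == 7);

        // Generation 3 - managed-style: the container owns its own bounds, so
        // an overflow would require a bug in the library itself (the parent's
        // point about letting a few specialists write that code once).
        std::string s = input;
        assert(s.size() == strlen(input));
        return 0;
    }
    ```
    
    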

    Also think about memory leaks. Firefox is a prime example of a great open source program developed in an unmanaged language. Probably thousands of people have looked at the code, and still some of the most common complaints are memory leaks. My feeling as to why is that C++ makes it so easy to forget to deallocate memory that, in a million-line program, it's just too difficult to find every possible leak. A managed language never has that problem: a group of particularly talented people develop a garbage collector and memory management system, and the average developer never has to worry about it.
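    Worth noting that garbage collection isn't the only fix: C++'s own answer is to put ownership in the type so the cleanup can't be forgotten. A small sketch contrasting the two styles (function names are made up for illustration):

    ```cpp
    #include <cassert>
    #include <memory>

    // Hand-managed: every 'new' needs a matching 'delete' on every exit path,
    // which is exactly what slips through review in a million-line codebase.
    int leaky_style() {
        int* p = new int(41);
        int result = *p + 1;
        delete p;          // forget this on one early return and you leak
        return result;
    }

    // RAII: ownership lives in the smart pointer, so deallocation is
    // deterministic and automatic on every path out - no collector needed.
    int raii_style() {
        auto p = std::make_unique<int>(41);
        return *p + 1;     // freed when p goes out of scope, always
    }

    int main() {
        assert(leaky_style() == 42);
        assert(raii_style() == 42);
        return 0;
    }
    ```

    (Firefox-era C++ largely predates `std::unique_ptr`, which is part of why those leaks were so common.)
    
    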
  • by PitaBred ( 632671 ) <slashdot&pitabred,dyndns,org> on Sunday November 29, 2009 @03:35AM (#30259730) Homepage
    It's actually a good argument. With antilock brakes, most any moron can maintain control of their vehicle in panic-stop situations. The trade-off is that stopping distance is increased. So, sure, go without ABS if you're skilled enough to do so. You'll get some extra performance when you're racing. But if you're just driving to the grocery store in the winter? I'd hope that most of the soccer-moms on the road with me have ABS.
  • by Sjefsmurf ( 1414991 ) on Sunday November 29, 2009 @04:12AM (#30259784)
    Ah... yes...

    but what happens when the house plumber does not understand the materials and tools he is using? 3 months later... poooooofffff... water all over the place.

    sizzle, spark, kawoff.... the whole house is in flame from the electrical short circuits caused by all the water on the electric wiring done by the unqualified electricians.

    The nasty truth is that you will never design the underlying stuff well without a decent knowledge of how it is used, and you can never use things well without a decent understanding of how they work (with some limitations, of course; I don't expect you to know the assembly code of the Windows bootstrap to be able to use Word well).

  • by DefenceForce ( 594607 ) on Sunday November 29, 2009 @05:54AM (#30260008)
    Well, both of you are correct, IMO. Yes, I prefer double-clicking on the error in the IDE and having the cursor positioned on the correct line in the correct source file... except that sometimes the indicated error has nothing to do with the actual problem. In practice, on non-trivial projects involving many developers, libraries, inherited property sheets and complicated solution files, it is quite common for the error to be caused by something being included before something else, or by somebody changing an inherited parameter somewhere else in the code base. So yes, in the end you have an error somewhere, but the cause is not necessarily located there and may be very difficult to find.
    If only Microsoft's architecture and best-practices groups actually worked to leverage the efficiency of the tools, rather than just drain it. The thing about Microsoft is that the tools are good, yes, but then they sell them with practices and recommendations that drain innovation and drag all developers down to a lowest common denominator.

    It's really Taylorism all over again, applied to computer programming. The problem is, Taylorism is what GM and Ford and Chrysler do, and what the unions are locked into. Just as an old-style manufacturing plant has pipefitter and machinist titles of varying kinds on the shop floor, we have guys being pushed into database, UI design, or middle-tier roles, when really 90% of all projects could be done by one guy putting together a halfway decent screen in a craftsmanlike fashion.

    At this point, Microsoft is headed the same way as GM: recruiting a lot of the best engineers, then killing them in red tape, and delivering products that increasingly fail to captivate their market.

  • by Sique ( 173459 ) on Sunday November 29, 2009 @09:11AM (#30260798) Homepage

    In practice, with the advent of ABS, most people drive faster, so overall braking distances might be longer. But from the same speed, there is only one situation where ABS causes a longer braking distance: when the ground is not solid but consists of sand, gravel or freshly fallen snow. There, locked wheels cause a fair amount of material to pile up in front of them, giving better contact with the ground.

  • by jimicus ( 737525 ) on Sunday November 29, 2009 @12:02PM (#30261762)

    Embedded development and bootstrapping is the last bastion of necessity in assembly.

    Any other use is likely for obfuscation, academia or pride.

    If my employer is any guide, it's rapidly dying in embedded development too. Processors are fast enough - and optimising compilers sophisticated enough - that assembler is simply unnecessary.

    I can't see assembler being in heavy use outside of ever more esoteric branches of embedded development over the next few years.

    Yeah, it's fast.

    Not really. Compilers these days do a better job at optimizing most code than most assembly programmers (for well-supported CPUs anyway), mostly because instruction timing and dependency issues in modern complex CPUs are quite complicated, and compilers are able to take a lot more into account (just because the code 'looks' tighter doesn't mean it'll run faster). The only place where it really makes sense to use asm is for tight inner algorithm loops, especially when you can use SIMD instructions.
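    To make that concrete, here's the kind of tight inner loop where the compiler does the heavy lifting. This is a deliberately plain sketch; the point is what the optimizer does to it, not the code itself:

    ```cpp
    #include <cassert>
    #include <cstddef>

    // A straightforward dot product. Built with `g++ -O3` (plus -ffast-math,
    // since auto-vectorizing a float reduction requires the compiler to be
    // allowed to reassociate additions), a modern compiler will unroll this
    // and emit SSE/AVX instructions on x86 targets - tracking instruction
    // latencies and dependency chains that a human would have to schedule by
    // hand in assembly. The "tighter-looking" hand-written version usually
    // isn't faster.
    float dot(const float* a, const float* b, std::size_t n) {
        float acc = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            acc += a[i] * b[i];
        return acc;
    }

    int main() {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {1, 1, 1, 1, 1, 1, 1, 1};
        assert(dot(a, b, 8) == 36.0f);   // 1+2+...+8
        return 0;
    }
    ```
    
    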

  • Crikies. (Score:3, Insightful)

    by RightSaidFred99 ( 874576 ) on Sunday November 29, 2009 @04:27PM (#30263556)

    People are reading a lot into this that isn't there. These people use Visual Studio, and I don't think they'd claim that using a GUI to design a...GUI is a bad thing. They're referring largely to the new modeling tools MS is pushing with VS2010, and they're saying sometimes it's quicker to just write the code than design the model. And indeed it is.

    It's a tradeoff. For example, they already have some modeling tools (web service factory) for developing a web service. You lay out interfaces, data contracts, message contracts, etc., and associate them visually. I think this sucks, personally, and I still just do it the old-school (and much quicker, more powerful) way by creating an interface and data contracts. But for some scenarios designing the model might pay off in terms of self-documentation and allowing some standards to be followed by multiple developers working on a web service.

  • by Mr Z ( 6791 ) on Sunday November 29, 2009 @08:41PM (#30265112) Homepage Journal

    I'd call that "runtime environment glue" and that's an extremely appropriate place for assembly code in modern programming. You're quite literally outside the language at that point, when you have to manually manage the calling convention.
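    A hedged illustration of that "outside the language" territory (GCC/Clang on x86-64 Linux only; nothing here is portable, which is rather the point): calling straight into the kernel means following a register-level calling convention that no C or C++ construct expresses, so you pin the registers yourself.

    ```cpp
    #include <cassert>
    #include <cstddef>

    // x86-64 Linux syscall convention: syscall number in rax, arguments in
    // rdi/rsi/rdx, return value back in rax; the kernel clobbers rcx and r11.
    // The compiler knows nothing about this contract - this is runtime-
    // environment glue, managed by hand, exactly where assembly still belongs.
    long my_write(int fd, const void* buf, std::size_t len) {
        long ret;
        asm volatile("syscall"
                     : "=a"(ret)
                     : "0"(1L /* SYS_write */), "D"((long)fd), "S"(buf), "d"(len)
                     : "rcx", "r11", "memory");
        return ret;
    }

    int main() {
        const char msg[] = "glue\n";
        long n = my_write(1, msg, sizeof(msg) - 1);
        assert(n == 5);   // bytes written to stdout
        return 0;
    }
    ```

    Coroutine context switches, interrupt stubs, and VM trampolines are the same species: you're implementing the environment the language assumes, so the language can't help you.
    
    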
