Microsoft

Microsoft's Top Devs Don't Seem To Like Own Tools

Posted by kdawson
from the frankly-speaking dept.
ericatcw writes "Through tools such as Visual Basic and Visual Studio, Microsoft may have done more than any other vendor to make drag and drop-style programming mainstream. But its superstar developers seem to prefer old-school modes of crafting code. During the panel at the Professional Developers Conference earlier this month, the devs also revealed why they think writing tight, bare-metal code will come back into fashion, and why parallel programming hasn't caught up with the processors yet." These guys are senior enough that they don't seem to need to watch what they say and how it aligns with Microsoft's product roadmap. They are also dead funny. Here's Jeffrey Snover on managed code (being pushed by Microsoft through its Common Language Runtime tech): "Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore." Snover also joked that programming is getting so abstract, developers will soon have to use Natal to "write programs through interpretative dance."
Comments Filter:
  • pros and cons (Score:2, Interesting)

    by gcnaddict (841664) on Saturday November 28, 2009 @09:01PM (#30258236)
    The only pro: anyone can probably learn to write some sort of simple application through Microsoft's tools via managed code.

    The cons: managed code doesn't give nearly as much control because it tries to spoonfeed you. This is basically a catch-all for every con anyone can think of for managed code.
  • by BadAnalogyGuy (945258) <BadAnalogyGuy@gmail.com> on Saturday November 28, 2009 @09:03PM (#30258248)

    Yes, in some respects, programming is becoming easier and more unqualified people are able to do it.

    But I think that these guys are really missing the boat. The closer the programming environment can come to providing domain-relevant expression tools to the user, the better they will be able to create programs that fit their domain.

    In addition, content these days is a form of programming. Whether it is HTML/CSS or word processing or spreadsheets, the distinct line between what is a program and what is pure data is blurred beyond recognition. So a programming language for interpretive dance would probably find the Natal very useful.

  • modify that analogy (Score:5, Interesting)

    by v1 (525388) on Saturday November 28, 2009 @09:15PM (#30258296) Homepage Journal

    "Managed code is like antilock brakes. You used to have to be a good driver on ice or you would die. Now you don't have to pump your brakes anymore."

    Might have been more appropriate to compare it to how people in the high-performance arena (NASCAR) don't like antilock brakes, because of their limits and the separation they put between you and the task at hand (you lose your "feel for the road").

    Though I'm admittedly biased: I miss the days of assembly, when 10k was a LOT of code to write to solve a problem, and things ran blindingly fast with almost no disk or memory footprint. Nowadays, Hello World is a huge production in itself. 97% of today's coders have no idea what they've missed out on and just accept what they've got. Even someone who understands nerf tools like VB at a lower level can get sooo much more out of them. I recall taking someone's crypto code in VB and producing a several-thousand-fold speed boost because I understood how VB was translating things. They didn't know what to say; they'd just accepted that what they were doing was going to be dog slow. (And unfortunately the users are falling under the same hypnosis.)

  • Re:I agree (Score:5, Interesting)

    by 0123456 (636235) on Saturday November 28, 2009 @09:15PM (#30258302)

    because the modern Microsoft development tools need that infernal Dotnet library to be loaded, and when it gets messed up, any software that depends on it does not work.

    Indeed. One of my PCs has a broken '.Net framework' which can't be fixed without a complete reinstall of the operating system: even Microsoft's own 'completely obliterate every last trace the bloody thing' uninstaller isn't enough to remove all the traces which prevent it from reinstalling properly. As a result, a lot of new software simply will not run.

    Fortunately I do most of my useful work on Linux or Solaris these days so not being able to run random Windows software is no big deal, but '.Net' is such a monstrosity that it makes 'DLL Hell' look good in comparison; if even Microsoft can't fix it when it breaks, what chance do users have?

  • by Opportunist (166417) on Saturday November 28, 2009 @09:23PM (#30258348)

    I could write a lengthy essay about how old programmers don't like to use new tools that offer them little because they already know all the tricks and gadgets for their old, "inferior" and more complicated tools, while new tools are perfect for new programmers because they don't have to learn so much to achieve the same results, because those tools are easier to use and the learning curve isn't so steep until you have a result, but I think I can sum it up in a single word:

    Emacs.

  • Good debugger (Score:3, Interesting)

    by jpmorgan (517966) on Saturday November 28, 2009 @09:39PM (#30258444) Homepage

    Intellisense is pretty slick, but overall, if I'm doing development on Windows, I'd rather use emacs as my text editor.

    Of course, Visual C++ makes a fantastic debugger. It's almost good enough to forgive Windows for its lack of valgrind. Almost.

  • Re:Wow! (Score:5, Interesting)

    by rickb928 (945187) on Saturday November 28, 2009 @10:00PM (#30258596) Homepage Journal

    We've got a crew of .NET developers writing us an updated replacement to an existing VB app. I keep calling the new interface Fisher-Price, but actually it's Hasbro. I was mistaken, but an easy mistake to make.

    Where it should absolutely take two clicks to make something happen, they found a way to make it five. Where you should enter a date, they found a way to not allow special characters, like '/'. Where you should enter an address, well, no spaces allowed. Basic functionality is lacking for several features, but the interface is there.

    And no help files yet, despite beta release pending in a few days. In fact, though we have well over 1,000 pages of documentation, there seems to be no functional install that preserves the users' data in case they need to reinstall. I'm told that the next build introduces that.

    For all the fancy IDEs, tools, etc, these guys are still not getting it done. I dare not say how far behind schedule this is, nor what the actual platform is, or someone will guess and raise hell over how anyone could be so insensitive as to speak the truth.

    Your tools mean crap, if you're incapable. Just as your plumber would probably suck at actually making the pipe, your developers will suck if they don't 'get' what your users actually do.

    Of course, it would help if they asked what the users actually do.

    But I'm not bitter. I get to support this. Plenty of work.

  • Leaks like a sieve (Score:4, Interesting)

    by ToasterTester (95180) on Saturday November 28, 2009 @10:02PM (#30258614)

    I'd be frustrated too, trying to write code with tools that leak memory for days and suck at returning freed memory to the system. I remember one version of Word where you could start it up and just let it sit; within an hour or so Windows would crash. Then there was the version of Excel that shipped with debug code because the stripped version would never pass QA. Ah, fine tools.

  • by Taco Cowboy (5327) on Saturday November 28, 2009 @10:11PM (#30258664) Journal

    There are times I am forced to, like if I'm doing gaming video, I have to do it using the Direct-X toolkit.

    I mean, there is no other way around, since some users are using ATI cards and CUDA is useless on ATI GPUs.

    But on other projects, I go bare metal, and when I have the chance, I go assembly.

  • by MBCook (132727) <foobarsoft@foobarsoft.com> on Saturday November 28, 2009 @10:37PM (#30258802) Homepage

    The article didn't actually have much in it (it was Computerworld, after all), but it makes sense if you read it as being about visual programming tools rather than Visual Studio. As CW articles tend to, it read like a good article that was cut to a quarter of its original size just because.

    With today's libraries, a half-decent IDE can make a huge difference in productivity. But the comment about zooming in and out makes perfect sense if you think of the "I'll drag an IF block here, then wire it to the blah field..." kind of visual programming. The kind that people are always trying to make so that programming will be available to the "everyman". The kind that never works.

    Hypercard is the closest I've ever seen this come to working well. It's so sad Apple never knew what to do with it. It was approachable, and for simple record keeping it was rather easy. HyperScript worked very well and was extremely approachable, especially with its ability to do things like "take word 5 of line 2 of textbox".

  • Re:Wow! (Score:5, Interesting)

    by nextekcarl (1402899) on Saturday November 28, 2009 @11:06PM (#30258944)

    I just wanted to let you know I feel your pain. I worked at this place a while back and I really liked my job. It didn't pay that well, but I felt important and had a massive amount of freedom. Then they hired a consultant to come in and take over IT. He knew how to run a business, but next to nothing about IT (though he knew just enough lingo to fool people who did, for a few days). His 'programmer' didn't understand how to navigate file systems on Windows with Perl (and was supposedly a Perl guy). Being a Linux guy myself, I figured maybe he was, too. No, he had never even used Linux. Once I found that out I started to get rather scared and discouraged, because he was reworking a complicated, arcane, mission-critical system. I demanded all passwords be changed and that I not be given any of them, because I didn't want to be blamed when they screwed everything up (plausible deniability). My bosses assured me I wouldn't be, and after relenting for another month they finally fired the guys, because they couldn't get anything working. At all. I even told them where to start to get a feel for what they needed to be able to do on it, and they still couldn't do it. They didn't even know enough to mess it up (generally the easiest thing to do). So management's answer was to not have any sort of IT department at all: I could do all of the old IT manager's job, plus my old one, for the same pay and no possibility of advancement. So I gave them a month's notice and left. Probably not the smartest thing I've ever done (the economy tanked about 6 months later), but since most everyone else had left or been laid off around the same time, I'm not sure how much of a difference it would have made to do otherwise.

  • by tepples (727027) <tepples@[ ]il.com ['gma' in gap]> on Saturday November 28, 2009 @11:30PM (#30259086) Homepage Journal

    I don't know why anyone would still program in assembly for anything other than, say, emulators which need to be built for speed.

    How about programs designed to run in the same class of systems that such emulators emulate? There are dirt-cheap mass-produced computer-on-a-chip designs that connect an 8-bit CPU core clocked at under 5 MHz to a TMS9918-like video generator. You're dealing with something comparable to a Sega Master System or NES on a chip. C on a Z80 or 6502 isn't pretty.

  • Mod parent UP! (Score:4, Interesting)

    by KingSkippus (799657) on Saturday November 28, 2009 @11:55PM (#30259180) Homepage Journal

    Very insightful reply, and you're 100% correct. That was my other line of thought. This is the company (and probably some of these "superstar" programmers are the very people) that has given us a litany of buffer overruns, security holes, and other low-level programming "features" over the years.

    I'm not saying that no one should ever program at a low level, but I am saying that people shouldn't be afraid to take advantage of features of managed code and other conveniences. Don't program at a low level if you don't have to. You're only making your life harder for no reason, and needlessly exposing yourself to risks of fundamental errors that are much worse. Take advantage of all of the hard work that others have already done.

  • Re:Wow! (Score:4, Interesting)

    by ducomputergeek (595742) on Sunday November 29, 2009 @12:24AM (#30259302)

    Of course, it would help if they asked what the users actually do.

    Bingo.

    We had the advantage of a small business owner wanting our software developed because he thought "It should work like this". So we made it work like that and a lot of other small business owners found it to make sense and relatively easy to use. There were a couple quirks, but that's not good enough. Not for me.

    And this is where so many others fail. After the phase 1 deployment of our product (about 100 installs), I drove/flew around to our customers 6 months later, stopped by in person, and asked as the first question: "What doesn't work?" followed by "How can it work better?"

  • Re:Mod parent UP! (Score:5, Interesting)

    by TapeCutter (624760) * on Sunday November 29, 2009 @12:59AM (#30259412) Journal
    A decade ago, when I was employed at Big Blue, its then-CEO Lou Gerstner said, "All code has been written, it just needs to be managed." We all laughed so hard it hurt; however, the way things are going, Lou may get to have the last laugh after all.
  • Re:pros and cons (Score:2, Interesting)

    by jocabergs (1688456) on Sunday November 29, 2009 @01:20AM (#30259476)
    I think a lot of IDEs actually complicate the language, though. When I do use the features, I end up having to go back in and rewrite 95% of the simple feature I just used, at least 86% of the time. Another fear I have is that letting too many non-schooled programmers into programming can make it really difficult when you have to figure out code written by someone who has no conception of what decent coding practices and naming conventions are. That said, I do think there are some things which can be done more simply; in the interest of some semblance of job security, though, I'd prefer they not be too easy.
  • OpenGL doesn't encapsulate anything other than graphics. DirectX encapsulates input, 3D acceleration, sound, etc.

    From a developer standpoint, DirectX is a no-brainer when it's available.

  • by Abcd1234 (188840) on Sunday November 29, 2009 @01:35AM (#30259520) Homepage

    But the OP may be right if it is statistically true (I don't know that it is) that there exist high correlations between "good" programmers preferring a text editor and/or "posers" preferring Visual Studio.

    I'll buy the former, but absolutely *not* the latter.

    Coding with a simple text editor, make/gcc/etc, and gdb implies a fundamental set of skills: familiarity and comfort on the command-line, ability to (presumably) write and invoke Makefiles, ability to use gdb (which, let's face it, ain't pretty for a newcomer), and so forth. So it stands to reason that there's a greater chance such an individual has the skills necessary to write decent code (also known as "trial by fire"), as a poorer developer would likely be scared away.

    But I find it very difficult to believe that, given a population of VS users, there's a disproportionate number of crappy developers (i.e., that the ratio of unskilled to skilled developers exceeds the ratio in the software developer population at large). A weak developer will obviously use any tools that make the job of writing code less daunting, and a good IDE definitely fits in that category. And a strong developer will use whatever tool makes their job easier, and guess what? VS is a *damn* good tool, particularly if you're targeting the .NET stack.

    So I'd say this: A developer who's happy using a simple text editor/compiler/debugger combo has a greater than average chance of being a good developer. But you can assume nothing about a developer who chooses an integrated IDE over the aforementioned environment if given the choice.

    In fact, I would go so far as to say that any developer who actively *shuns* IDEs without good reason (and, BTW, simple familiarity is a good reason... mindless elitism, however, is not) deserves as much skepticism as one who isn't capable of using the editor/compiler/debugger environment, as they're expressing dogmatic tendencies that can be deeply counterproductive in the work environment.

  • by techno-vampire (666512) on Sunday November 29, 2009 @01:36AM (#30259526) Homepage
    From a developer standpoint, DirectX is a no-brainer when it's available.

    Thank you, but that's not what I was asking, although the comparison is both interesting and informative (to me, at least). What I wanted to know is why Linux devs don't feel the need for a Linux version of Direct-X. However, it occurs to me that you have answered my question, indirectly: OpenGL may not do everything Direct-X does, but it does enough that Linux devs don't feel the need for a more comprehensive solution. Thank you again.

  • by Anonymous Coward on Sunday November 29, 2009 @01:54AM (#30259582)

    Linux video in general is a mess...

    But open source developers have put a ton of effort into cloning DirectX in Wine, which is arguably a better solution given Linux's current marketshare.

  • by digitalunity (19107) <digitalunity&yahoo,com> on Sunday November 29, 2009 @02:10AM (#30259632) Homepage

    Interesting. I have been writing Qt applications on Windows using MinGW for a while and just assumed my executables were huge because of Qt.

    I just tested what you said: whipped up Hello World using libstdc++ and got an identical byte size to yours. It was 474990 bytes with debugging symbols in it!

    I recompiled with -Os and stripped the executable and got it down to 265728. Jesus.
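    For reference, a minimal sketch of the kind of Hello World being measured here. The g++ flags in the comments are the usual size-relevant ones; exact byte counts vary by toolchain, platform, and static vs. dynamic linking, so don't expect these specific numbers.

    ```cpp
    // hello.cpp -- the program whose binary size is being argued about.
    // Typical builds:
    //   g++ hello.cpp -o hello          (debug-friendly, larger binary)
    //   g++ -Os -s hello.cpp -o hello   (optimize for size, strip symbols)
    #include <cstdio>

    const char* greeting() { return "Hello, world"; }

    int main() {
        std::puts(greeting());  // cstdio avoids the heavier iostream machinery
        return 0;
    }
    ```

    Using `<cstdio>` rather than `<iostream>` matters for the comparison: iostreams drag in locale and formatting machinery that inflates even a trivial binary.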

  • by shutdown -p now (807394) on Sunday November 29, 2009 @02:33AM (#30259722) Journal

    Judging by what GP said, it seems to be a library problem, not the compiler problem. I assume the way they wrote formatting code for iostream somehow references all locale facets, triggering their instantiation.

    Still, iostreams are fat on any implementation. Just checked on VC++ 2010: with "optimize for size" it was 95 KB for a HelloWorld.

  • by mhelander (1307061) on Sunday November 29, 2009 @02:47AM (#30259742)

    There's truth to what you are saying - I'll bet any senior developer can tell war stories for hours on the topic of users who don't know what they want - but BAG's comment was still very insightful.

    Despite how readily domain experts (that is, our customers) disappoint us when it comes to grasping the most basic stuff such as C, Java, SQL or even HTML, it is a mistake to think that they are stupid or that they don't know *their* domains very well (the most basic stuff of which we may then find ourselves struggling to come to terms with). Domain experts already express themselves very precisely using their own notations - but they normally lack any decent tool support for this. In practice, they may write formulas in word or excel. Now think about what BAG wrote for a moment:

    "The closer the programming environment can come to providing domain-relevant expression tools to the user, the better they will be able to create programs that fit their domain."

    If, as BAG suggests, they were to have richer tools supporting their domain relevant expressions, perhaps including things we coders take for granted in our IDEs such as type checking, code completion, refactorings, etc - wouldn't the result be that domain experts could express themselves even more efficiently and precisely? Wouldn't the end result tend to be software better suited to address their specific problem domains?

    Are they "writing programs", though? Well, are you writing a program when you're typing in interface declarations, or only when you type in imperative method implementations? If you are ready to agree that declarative code can also be code then you pretty much agree with where BAG is going when he says that "the distinct line between what is a program and what is pure data is blurred beyond recognition". If you can generate a full application from a set of formal, domain specific expressions - are not those domain expressions the source code, and aren't those people editing that source code (even if they happen to be the domain experts) programming?

  • by Khyber (864651) <techkitsune@gmail.com> on Sunday November 29, 2009 @03:16AM (#30259800) Homepage Journal

    "Embedded development and bootstrapping is the last bastion of necessity in assembly.

    Any other use is likely for obfuscation, academia or pride."

    Or speed where it matters, because nothing beats speaking in one's native tongue for communication.

    Which is why a great many OS components are written in plain x86 assembler.

    There's even an OS written entirely in assembler that fits on a 1.4MB floppy and does pretty much everything Windows does, faster, in a smaller memory footprint, and in 1/7,000th the space - minus gaming.

    IOW assembler is still plenty useful, not just for embedded markets or bootstrapping.

  • Re:Doug Lea? (Score:4, Interesting)

    by bertok (226922) on Sunday November 29, 2009 @05:09AM (#30260034)

    Damn. I don't even use Java (I'm an embedded C guy), but if DL [oswego.edu] did it, then it's probably really good. As a C developer, I feel the old pthreads style is a throwback to the multi-process hacks on SysV of 25 years ago.

    What dragged you to the dark side anyways? (C#/.NET)

    The Oswego library was the bomb. It's basically how APIs should be designed: Very simple looking abstract interfaces* with a bunch of reference implementations, some of which are incredibly advanced. You can then pick and choose what you want, reimplement anything at will, and combine like Lego. That guy saved me a LOT of time and bug hunting. Want a queue? Pick from four different flavors! Want priorities? Done! Want to keep the queue but change the execution style or locking mechanism? No problem!

    As to switching, I've always generally been a Windows guy, and C# is currently the single fastest way to develop a GUI and not get "stuck" too much, because it can call C or COM style APIs directly. When it was first released, a good former Java/C++ developer could get started with it quickly, and develop GUIs 2-3x quicker than anything else, which then ran smoothly, and looked native.

    I still get frustrated, especially with the lack of decent containers, algorithms, and threading frameworks, but C# is still overall the best. I do a lot of very Windows platform specific stuff like Active Directory manipulation, and it would be very hard to do that quickly (but correctly) with any other language.

    Java is great for "server side" development. It has better database binding** libraries, threading, third-party support, containers and frameworks, and a much better community. However, its client-side is just terrible, especially the GUI frameworks. SUN apparently still hasn't learned the key to Microsoft's success story: own the client, and you will own the world.

    It's only recently that Java IDEs got decent "drag & drop" forms development, while Microsoft is already a generation ahead with WPF which very cleanly separates code and layout, to the point that artists can do layout almost completely independently of the dev team. Think of what HTML and CSS tried but failed to do, but done properly.

    *) Microsoft has an allergy to interfaces. It's like they're trying to tell you that they "own" the API, and you, the developer, should keep your dirty little mitts off it.

    **) Microsoft's LINQ to SQL is practically a beta at this time. They don't even support multi-column keys! Its big brother, the "Entity Framework", didn't support foreign keys until .NET 4, which is currently beta, and the GUI editor still fails on all but the simplest models. Something like 60% of the features, if used, disable the GUI editor completely. Microsoft isn't even planning to finish the EF framework GUI, ever. Every couple of years, they come up with a new data binding framework, drop the old ones, never finish it, and then they repeat, not having learned a single lesson. I've lost count... there's been, what: DDE, ODBC, ADO, ADO.NET, LINQ, EF, and now they're up to some garbage called "M" or "Oslo" or whatever. I'm certain it'll be buggy, slow, incomplete, and replaced in short order. Just watch.

  • by Anonymous Coward on Sunday November 29, 2009 @06:38AM (#30260354)
    No. Take a look at your local libc sources (if available). I bet you that a lot of the common string operations are written in assembly, hand-optimised for a precise CPU instruction set (e.g. SSE4.2 on the newest CPUs).
  • Re:pros and cons (Score:3, Interesting)

    by ardor (673957) on Sunday November 29, 2009 @07:17AM (#30260540)

    But the GC does not solve two things:
    1) Freeing up resources other than memory (this is only possible with a deterministic GC and RAII/destructors, or with refcounting instead of a GC)
    2) Taking up tons of RAM because of unnecessary allocations (I've seen Java code that allocates MBs in tight loops...)
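    Point 1 is the classic argument for C++-style RAII: release is tied to scope, not to whenever a collector happens to run. A minimal sketch with a file handle as the non-memory resource (the wrapper name is mine, for illustration):

    ```cpp
    #include <cstdio>

    // Toy RAII wrapper: the resource is released the moment the object
    // goes out of scope -- deterministically, unlike a tracing GC.
    struct FileGuard {
        std::FILE* f;
        explicit FileGuard(const char* path) : f(std::fopen(path, "w")) {}
        ~FileGuard() { if (f) std::fclose(f); }  // deterministic cleanup
        FileGuard(const FileGuard&) = delete;    // one owner, one close
        FileGuard& operator=(const FileGuard&) = delete;
    };

    bool write_and_release(const char* path) {
        {
            FileGuard g(path);
            if (!g.f) return false;
            std::fputs("data\n", g.f);
        }  // <- the file handle is guaranteed closed here, not "eventually"
        return true;
    }
    ```

    With a nondeterministic GC, the equivalent handle would stay open until finalization, which is exactly the problem point 1 describes.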

  • by selven (1556643) on Sunday November 29, 2009 @07:33AM (#30260634)

    When I get an error, the compiler also points exactly to the line where the error is. So your insults are completely unwarranted.

  • by Tapewolf (1639955) on Sunday November 29, 2009 @07:46AM (#30260692)

    Embedded development and bootstrapping is the last bastion of necessity in assembly.

    Any other use is likely for obfuscation, academia or pride.

    About this time last year I was told to implement (in a C++ app) something not unlike the IMPORT command in VB. In other words, it allows you to call an arbitrary DLL function from a script language in our application.
    Since we do not know at compile-time how many parameters are going to be passed to it, the only way I could think of doing this was to manually construct the stack into the required calling convention using PUSH instructions and do JMP EAX to call the function.

    It took about 10-20 lines of assembler. I then had to do a couple of alternative implementations to support win32, elf32, win64, elf64 and win32 on ARM for the PDA version, but since there wasn't much code involved, the whole exercise took about a week from nothing to fully working.

    Maybe there is a way to do this in C++ without resorting to assembler, and without requiring the called module to have a predefined interface, but I'm damned if I can think of one. To be fair, it is the only assembler module in the entire program.
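    For what it's worth, one portable route that avoids hand-built stack frames is libffi, which constructs the call frame for you at run time. This is my suggestion, not what the poster used; a sketch for the all-int-arguments case (link with -lffi):

    ```cpp
    // Calling a function whose argument count is only known at run time,
    // via libffi instead of hand-written PUSH/JMP sequences.
    #include <ffi.h>
    #include <vector>

    extern "C" int add3(int a, int b, int c) { return a + b + c; }

    // Invoke fn(args[0], ..., args[nargs-1]), all arguments ints.
    int call_dynamic(void (*fn)(), int* args, unsigned nargs) {
        std::vector<ffi_type*> types(nargs, &ffi_type_sint);
        std::vector<void*> values(nargs);
        for (unsigned i = 0; i < nargs; ++i) values[i] = &args[i];

        ffi_cif cif;  // "call interface": ABI + signature description
        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, nargs,
                         &ffi_type_sint, types.data()) != FFI_OK)
            return -1;

        ffi_arg result = 0;  // wide enough for any integer return value
        ffi_call(&cif, fn, &result, values.data());
        return static_cast<int>(result);
    }
    ```

    Usage: `int args[] = {1, 2, 3}; call_dynamic(FFI_FN(add3), args, 3);`. The trade-off versus the 10-20 lines of assembler is an extra library dependency, but the per-platform reimplementations go away.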

  • by ultranova (717540) on Sunday November 29, 2009 @08:37AM (#30260930)

    Just because you don't know how to use non garbage collected memory does not mean that functions that don't use it should be phased out.

    Nice ad hominem. But the issue isn't whether he can use it; the issue is whether everyone whose code is running on the machine can use it, especially the guys who wrote the libraries.

    I need those functions to do my job, since I write network-based apps that require memory reuse and pointers to be able to process data as quickly as possible, and I've seen the results when some of our competition has attempted to do my job with garbage-collected languages (generally 3x the hardware requirements, 5x if they used Java).

    This is a strange statement. Garbage collection in no way prevents memory reuse; it simply automatically releases blocks of memory when they can't be accessed by following pointers from the root set anymore. In fact there's garbage collection libraries for plain C, some of which don't even require recompilation of applications but simply replace malloc() and free().

    Besides, network applications are precisely those where it might be worthwhile to use 5x the hardware just to get some extra protection.

    Not everything is a desktop application.


    True. Not every application has someone babysitting them all the time and validating all their inputs.
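    The "garbage collection libraries for plain C" mentioned above are collectors like the Boehm-Demers-Weiser GC, which also works from C++. A minimal sketch of what the poster describes (link with -lgc; `demo` is my illustrative name):

    ```cpp
    #include <cstddef>
    #include <gc.h>  // Boehm-Demers-Weiser conservative collector

    int* alloc_ints(std::size_t n) {
        // Collector-managed allocation: there is no matching free() anywhere.
        return static_cast<int*>(GC_MALLOC(n * sizeof(int)));
    }

    int demo() {
        GC_INIT();  // initialize the collector once, at program start
        int* p = alloc_ints(100);
        p[0] = 7;
        // Once p becomes unreachable, the block is reclaimed automatically.
        return p[0];
    }
    ```

    This supports the point that GC and manual-memory languages aren't mutually exclusive: the same C/C++ code keeps its pointers and memory reuse while unreachable blocks are reclaimed for it.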

  • by Animats (122034) on Sunday November 29, 2009 @01:05PM (#30262690) Homepage

    Ask yourself why we have "builds", where everything gets rebuilt. Do I have to have my ICs re-fabbed when I change the PC board design? No. We're still not doing components right.

    Historically, the big problem came from C include files. Everything but the kitchen sink is in there. There's no language-enforced separation between interface (the parts clients of the module see and may have to recompile if changed) and implementation (the part the implementations see). Also, you can include files inside include files, even conditionally. So developing the dependency graph of the program is hard.

    C++ made things worse, not better. The private methods of a C++ class have to appear in the header file, which exposes more of the internals than is really necessary. Every time you add a new private method, the clients, who can never see or use that private method, have to be recompiled. This not only produces cascading builds, it discourages programmers from adding new private methods rather than bloating existing ones. That's bad for code readability and reliability.
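    The standard workaround for this C++ problem is the "pimpl" idiom: hide all private members behind one opaque pointer, so adding a private method no longer touches the header. A sketch (shown as one translation unit, with the header/source split marked in comments):

    ```cpp
    // widget.h -- all clients see is this; it never changes when
    // private details do, so client recompiles are avoided.
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();            // defined out of line, where Impl is complete
        int value() const;
    private:
        struct Impl;          // incomplete here: internals stay hidden
        std::unique_ptr<Impl> impl_;
    };

    // widget.cpp -- new private methods/fields go here, invisible to clients.
    struct Widget::Impl {
        int v = 42;
        int helper() const { return v; }  // adding this touches no header
    };

    Widget::Widget() : impl_(new Impl) {}
    Widget::~Widget() = default;
    int Widget::value() const { return impl_->helper(); }
    ```

    The cost is one extra heap allocation and indirection per object, which is why it tends to be used at module boundaries rather than everywhere.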

    Ada explicitly dealt with this. Ada has a hard separation between interface and implementation. This was considered a headache when Ada came out, but now that everyone has bigger monitors, it's less of an issue.

    Java, despite having interfaces, seems to have build and packaging systems of grossly excessive complexity. I'm not really sure why.

    The next problem is the "make" mindset, which is built on timestamps. "make" doesn't check what changed; it checks what was "touched". If "make" decided what had changed based on hashes rather than timestamps, many unnecessary recompiles would be avoided. Something could run "autoconf", produce exactly the same result as last time, and not trigger vast numbers of recompiles.

    There's also the tendency to treat "make" as a macro language rather than a dependency graph. This results in makefiles that always recompile, rather than only recompile what's needed.

    It would be useful if compilers output, in the object file, a list of every file they read during the compile, with a crypto-grade hash (MD5, etc.) of each. A hash of the compile options and the compiler version would also be included. Then you could tell, reliably, whether you really needed to rebuild something.
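    The content-based change detection described here can be sketched in a few lines. FNV-1a stands in below for the crypto-grade hash the comment suggests (it is not cryptographic, just short enough to show the mechanism):

    ```cpp
    #include <cstdint>
    #include <string>

    // Hash a dependency's *content*, not its timestamp.
    std::uint64_t fnv1a(const std::string& data) {
        std::uint64_t h = 1469598103934665603ULL;  // FNV offset basis
        for (unsigned char c : data) {
            h ^= c;
            h *= 1099511628211ULL;                 // FNV prime
        }
        return h;
    }

    // A target needs rebuilding only when a dependency's content hash
    // differs from the one recorded at the last successful build --
    // a "touched" but unchanged file triggers nothing.
    bool needs_rebuild(const std::string& content, std::uint64_t recorded) {
        return fnv1a(content) != recorded;
    }
    ```

    This is exactly why a regenerated-but-identical autoconf output would cause no recompiles under a hash-based scheme: same bytes, same hash, no work.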
