When Bugs Aren't Allowed

Coryoth writes "When you're writing software for an air traffic control system, military avionics software, or an authentication system for the NSA, the delivered code can't afford to have bugs. Praxis High Integrity Systems, the subject of a recent IEEE article, writes exactly that kind of software. In "Correctness by Construction: A Manifesto for High-Integrity Software," developers from Praxis discuss their development method, explaining how they achieve such a low defect rate and how they still maintain very high developer productivity using a more agile development method than the rigid processes usually associated with high-integrity software development."
  • Still can have bugs (Score:4, Interesting)

    by Billly Gates ( 198444 ) on Thursday January 05, 2006 @09:03PM (#14405829) Journal
    The only method I have seen with almost perfect reliability is where the inputs and outputs are overloaded to handle any datatype and can be proven mathematically not to crash. I guess a CS degree is still useful.

    The problem is that to obtain it you need to write your own libraries and not use ANSI or Microsoft or any other vendor's products, as you cannot see or trust the source code.

    If you can prove through solid design and input and output types that the program won't lose control, then you're set. It's buffer overflows and flawed designs that haven't been tested with every conceivable input/output that cause most serious bugs in medical and aerospace applications.

    However, in practice this is a little impractical when deadlines and interoperability with closed-source software get in the way.
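
    For illustration, a minimal sketch of that idea in Haskell (a hypothetical example, not from TFA): validate at the boundary, and let the types guarantee that downstream code never sees a value it can't handle.

        -- The constructor is not exported, so the only way to obtain an
        -- Altitude is through mkAltitude, which validates its input.
        module Altitude (Altitude, mkAltitude, climb) where

        newtype Altitude = Altitude Int deriving Show

        -- Total: every possible Int input yields a defined result.
        mkAltitude :: Int -> Maybe Altitude
        mkAltitude ft
          | ft >= 0 && ft <= 60000 = Just (Altitude ft)
          | otherwise              = Nothing

        -- Downstream code never needs a runtime range check: the type
        -- guarantees the invariant was established at the boundary.
        climb :: Altitude -> Int -> Maybe Altitude
        climb (Altitude ft) delta = mkAltitude (ft + delta)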
  • by GileadGreene ( 539584 ) on Thursday January 05, 2006 @09:14PM (#14405886) Homepage
    And yet the reports I've seen on Praxis claim costs and schedules the same as, or less than, those for developing software of similar complexity...
  • Hear, hear... (Score:5, Interesting)

    by crimson30 ( 172250 ) on Thursday January 05, 2006 @09:16PM (#14405898) Homepage
    When you're writing software for an air traffic control system, military avionics software, or an authentication system for the NSA, the delivered code can't afford to have bugs

    I've been in this industry for quite some time and let me be the first to say that I wish I could repeat this sentence with a straight face.


    That was my first thought, particularly with military avionics. A few years ago they put out a hardware/software update for the ENS (Enhanced Navigation System) which led to frequent crashing... and it took over a year for them to come out with a message saying that it was a bug and not to waste countless man-hours trying to repair it.

    It's sort of a new concept, though, as I'd never really seen such problems with traditional avionics systems (non-glass-cockpit stuff). I've always attributed it to people being used to the behavior of MS Windows. And I'm not saying that to start a flamewar. I'm serious. Unreliable avionics systems should be unacceptable, but these days that doesn't seem to be the case.
  • by Anonymous Coward on Thursday January 05, 2006 @09:16PM (#14405903)
    Doesn't seem to help Microsoft, despite their bottomless coffers....
  • by david.emery ( 127135 ) on Thursday January 05, 2006 @09:16PM (#14405905)
    The Master Money server done by Praxis was done fixed-price, and with a warranty saying Praxis would fix any bug discovered over the next 10 years -for free-.

    How many of you would be willing to place that kind of warranty on YOUR CODE?

    dave (who's tried SPARK and liked it a lot, although proofs are much harder than they should be...)

  • Secure code (Score:3, Interesting)

    by Muhammar ( 659468 ) on Thursday January 05, 2006 @09:20PM (#14405922)
    The main obstacle to writing decent code is usually the management: their frequent changes of mind (about what they want, which is usually different from what is helpful to the users) and their "good enough" and "legacy first" attitudes. Overreaching ambition is another problem: one needs to limit oneself to fewer things to do them well, and the management pressures usually run in the opposite direction. (Salesmanship bullshit never helps, especially if it starts to influence the direction of your project.)

  • by GileadGreene ( 539584 ) on Thursday January 05, 2006 @09:20PM (#14405923) Homepage
    Did you actually bother to RTFA? Oh yeah, this is /. - stupid question. Allow me to suggest that you make an exception, and actually take the time to read TFA in this case. Praxis does not claim no bugs, they claim significantly lower defect rates than development using other methods. Which in turn means much greater confidence that the system will work as desired.

    No one can ever make something that is completely "bug-free" - even in traditional, non-software disciplines. All you can do is make the probability that the system will work as high as possible. Praxis has some techniques that can help developers create software with a much higher probability of working correctly than it would otherwise have. That's a good thing, even if it doesn't result in perfection.

    It's buffer overflows and flawed designs that haven't been tested with every conceivable input/output that cause most serious bugs in medical and aerospace applications.

    It's the fact that Praxis relies on static checking far beyond anything you've ever seen (using a carefully designed subset of Ada that can be statically checked in all sorts of interesting ways) that helps to ameliorate this problem, since the static check is effectively equivalent to an exhaustive input/output test.
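
    A toy Haskell illustration of the flavor (SPARK's checks go far deeper than this): when the input domain is finite and the compiler enforces exhaustive handling, the static check really does cover every possible input.

        {-# OPTIONS_GHC -Wall -Werror #-}

        -- Hypothetical example: a finite input domain.
        data Mode = Takeoff | Cruise | Landing deriving (Show, Eq)

        -- If a case were missing here, -Wall -Werror would reject the
        -- program at compile time: a static, exhaustive input "test".
        flapSetting :: Mode -> Int
        flapSetting Takeoff = 10
        flapSetting Cruise  = 0
        flapSetting Landing = 30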

  • seems kinda small (Score:3, Interesting)

    by khallow ( 566160 ) on Thursday January 05, 2006 @09:28PM (#14405977)
    I count fewer than 400k source lines of code ("SLOC") among their examples. Collectively, this is at least an order of magnitude (maybe two or more, actually; I don't know) smaller than the really big projects out there. So I guess I have two questions. First, is this rate really good given the size of the projects described? And second, for the huge projects, what sort of bug rates are theoretically achievable?
  • by goombah99 ( 560566 ) on Thursday January 05, 2006 @09:34PM (#14406005)
    Unlimited risk can be an incentive too.

    Professor Middlebrook at Caltech was an innovator in an unusual field: satellite electronics. Since no repairman was coming, they wanted robust electronics. He designed circuits in which any component could fail as an open or a short and the circuit would remain in spec. You know that's a remarkable achievement if you've ever designed a circuit before. Notably, you can't really do this using SPICE. SPICE will tell you what something does, but not how to design it. To do that you need a really good sense of the approximations of the mathematical formula a circuit represents, to see which components are coupled in which terms. And you need one more trick: the ability to put in a new element bridging any two points and quickly see how it affects the circuit in the presence of feedback. To do that he invented the "extra element theorem," which lets you compute this in analytic form from just a couple of simple calculations. They still don't teach this in standard courses. You can find it in Vorperian's textbook, but that's it. If you want to learn it otherwise, you've got to go to the original research articles from the '70s.

  • by data64 ( 300466 ) on Thursday January 05, 2006 @09:47PM (#14406080)
    In school, this article [fastcompany.com] was required reading for our class. It addresses the same topic as TFA.

    The essential point is that there is a trade-off between quality and quantity, and each organization/project needs to decide how far it wants to lean in either direction.
  • by Coryoth ( 254751 ) on Thursday January 05, 2006 @09:50PM (#14406090) Homepage Journal
    If someone sent a copy of this to Micro$oft? Would any of them read or comprehend it? It could make a difference in the version after Vista.

    Oddly enough it would make perfect sense to some people at MS. The Singularity OS project from MS research uses a lot of the same ideas in development methodology and formalism. Whether Singularity will ever make it out of MS research, or simply remain a curious little side project, is of course an interesting and quite open question. Only time will tell.

    For other OSs developed in a similar mold, try Coyotos [coyotos.org] which, while still getting seriously underway, looks quite promising indeed when it comes to secure and robust OSs.

    Jedidiah.
  • by brucehoult ( 148138 ) on Thursday January 05, 2006 @10:03PM (#14406160)
    The site is slashdotted at the moment, so I can't read the article.

    A good example of people writing complex but bug-free software under time pressure is the annual ICFP Programming Contest [icfpcontest.org]. This contest runs over three days, the tasks are complex enough that you usually need to write 2000 - 3000 lines of code to tackle them, and the very first thing the judges do is to throw corner-cases at the programs in an effort to find bugs. Any incorrect result or crash and you're out of the contest instantly. After that, the winner is generally the highest-performing of the correct programs.

    Each year, up to 90% of the entries are eliminated in the first round due to bugs, usually including almost all the programs written in C, C++, and Java. Occasionally a C++ program will get through and may do well -- even win, as in 2003, when you didn't actually submit your program but ran it yourself (so it never saw data you didn't have a chance to fix it for). But most of the prize getters year after year seem to use one of three not-yet-mainstream languages:

    - Dylan [gwydiondylan.org]
    - Haskell [haskell.org]
    - OCaml [inria.fr]

    You can argue about why, and about which of these three is the best, or which of them is more usable by mortals (I pick Dylan), but all of them are very expressive languages with uncluttered code (compared to C++ or Java), completely type-safe, produce fast compiled code, and use garbage collection.
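
    To make that concrete, a toy Haskell fragment (hypothetical, not from any contest entry): the corner case is visible in the type, so forgetting it is a compile-time conversation rather than an instant disqualification.

        -- The empty input cannot be overlooked: the type says "maybe
        -- there is no answer", and the compiler holds you to it.
        safeBest :: [Int] -> Maybe Int
        safeBest [] = Nothing
        safeBest xs = Just (maximum xs)

        report :: [Int] -> String
        report scores = case safeBest scores of
          Nothing -> "no valid moves"
          Just n  -> "best score: " ++ show n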
  • by GileadGreene ( 539584 ) on Thursday January 05, 2006 @10:18PM (#14406239) Homepage
    Let's see them do this after taking over a half-coded project with minimal design requirements, a hard deadline, and a budget that can be cut by governmental forces at will.

    Which is precisely the kind of project they avoid, because it's a disaster waiting to happen. Praxis caters to customers who want quality, and are willing to do what it takes to get it.

  • by thesnarky1 ( 846799 ) on Thursday January 05, 2006 @10:25PM (#14406271) Homepage

    Actually, there was a case (in 2000, I believe) where a destroyer (the ship, not the robot) was running a version of Windows, got a divide-by-zero error, and the ship's systems all crashed. It was dead in the water for a few hours.

    Yes, found a wiki [wikipedia.org]:

    On September 21, 1997, while on maneuvers off the coast of Cape Charles, Virginia, a crew member entered a zero into a database field, causing a divide-by-zero error in the ship's Remote Data Base Manager which brought down all the machines on the network, causing the ship's propulsion system to fail. Anthony DiGiorgio, a civilian contractor with a 26-year history of working on Navy control systems, reported in 1998 that the Yorktown had to be towed back to the Norfolk, Virginia naval base. Ron Redman, a deputy technical director with the Aegis Program Executive Office, backed this claim up, suggesting that such system failures had required Yorktown to be towed back to port several times.

    Note, it was running Windows NT 4.0. So yes, we even use Windows in military warships. I also believe that Britain's newest destroyer runs on a version, but I forget which, as I read it a while back.
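
    As an aside, here is what a type-checked guard against that class of failure looks like in Haskell (an illustrative sketch, obviously nothing to do with the ship's actual code):

        -- The caller cannot extract a result without deciding what a
        -- zero denominator means for the rest of the system.
        safeDiv :: Double -> Double -> Maybe Double
        safeDiv _ 0 = Nothing
        safeDiv x y = Just (x / y)

        fuelRate :: Double -> Double -> Double
        fuelRate burned hours =
          case safeDiv burned hours of
            Nothing   -> 0          -- a defined fallback, not a crash
            Just rate -> rate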

  • by MerlynEmrys67 ( 583469 ) on Thursday January 05, 2006 @10:26PM (#14406275)
    I was working for a small startup with a big-time CEO. One of his interesting points was that the most successful software company in the world has yet to release a product anywhere close to "on time." I can give you a train-wreck litany of software companies that released products "on time" that were so bug-ridden that people swore they would never buy another product from the company and replaced the product with a competitor's.

    The end result: in a year, no one will remember that you were 6 months late; make a buggy release, and in a year EVERYONE will remember the buggy release.

    Why do I always have the time to do it over, and never the time to do it right in the first place?

  • Re:economics (Score:2, Interesting)

    by DeveloperAdvantage ( 923539 ) on Thursday January 05, 2006 @10:41PM (#14406351) Homepage
    One reason the economics are slanted toward buggy software is the legal situation. I have seen very few cases where developers of buggy software are actually held responsible for the mess they have delivered... actually, I can't think of any off the top of my head (although I am sure they are out there).

    In fact, often it is beneficial to the software provider to provide a product with enough defects so the software provider can get follow-up work or support contracts to fix them.

    Consumers of software services, or even shrink wrapped software, need to start demanding higher quality.

    Read through an end user agreement and you will see who bears the cost of defects.
  • by kahanamoku ( 470295 ) on Thursday January 05, 2006 @10:49PM (#14406392)
    I guess you could always substitute Stability/Robustness for Quality.
  • Re:No Bugs for NSA? (Score:5, Interesting)

    by GrEp ( 89884 ) <crb002@gm a i l.com> on Thursday January 05, 2006 @10:54PM (#14406420) Homepage Journal
    Easier actually.

    http://www.rockwellcollins.com/news/page6237.html [rockwellcollins.com]

    Rockwell Collins designed a secure processor for the NSA that PROVABLY can run multiple threads where each thread can get no information about the others. That way they can run processes with different clearance levels on the same CPU.

    Model checking can make you big bucks if you can find the customers to pay for it.
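
    For a taste of what model checking means here, a toy explicit-state checker in Haskell (nothing like Rockwell's actual tooling): exhaustively explore every reachable state of a tiny two-process lock protocol and verify that both processes are never in the critical section at once.

        import qualified Data.Set as Set

        data Proc = Idle | Waiting | Critical deriving (Eq, Ord, Show)
        type State = (Proc, Proc, Bool)   -- (process 1, process 2, lock held?)

        -- All possible successor states: either process may take a step.
        step :: State -> [State]
        step (p1, p2, lock) =
          [ (p1', p2, lock') | (p1', lock') <- move p1 lock ] ++
          [ (p1, p2', lock') | (p2', lock') <- move p2 lock ]
          where
            move Idle     l     = [(Waiting, l)]
            move Waiting  False = [(Critical, True)]
            move Waiting  True  = []                 -- blocked on the lock
            move Critical _     = [(Idle, False)]    -- leave, release lock

        mutex :: State -> Bool
        mutex (p1, p2, _) = not (p1 == Critical && p2 == Critical)

        -- Search the whole (finite) state space; True means the property
        -- holds in every reachable state -- a proof, not a sampled test.
        check :: Bool
        check = go Set.empty [(Idle, Idle, False)]
          where
            go _ [] = True
            go seen (s:rest)
              | Set.member s seen = go seen rest
              | not (mutex s)     = False
              | otherwise         = go (Set.insert s seen) (rest ++ step s)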
  • by SeaFox ( 739806 ) on Thursday January 05, 2006 @11:29PM (#14406563)
    You can't staff a whole large application development project with the best gurus: there aren't enough out there in the world.

    And why aren't there? Geeks lament the fall of IT and Computer Science programs at institutes of higher learning, and wonder why people don't want to go into these fields. If there was demand for programmers of the caliber you mention and companies willing to pay salaries deserving of such abilities there would be more people studying towards such a position.

    But if companies are going to just throw up their hands and say "we can't hire competent people, there aren't enough of them in the world," they only doom themselves to a continued shortfall in talent and an increase in buggy software.

    Nursing is a profession that is always looking for new people and folks who didn't make it through the four-year grind back when they were young are flocking to it because the jobs are waiting. If companies held their coders to a higher standard, they would trim some of the fat in their projects and have jobs open for people willing to do the work. The result would be more productive teams and better applications, a win-win situation.
  • by brucehoult ( 148138 ) on Thursday January 05, 2006 @11:31PM (#14406569)
    I've participated in the ICFP before (one-man team on Java, program died in the first round, so there are my cards on the table),

    My cards on the table are that I've entered each of the last six years with 3 - 5 person teams using Dylan, collecting 2nd place twice and Judge's Prize twice.

    but one of the reasons the International Conference on Functional Programming Contest is consistently won by Functional Programmers is that it appeals heavily towards them both in terms of getting the word out to people and in terms of task selection.

    Getting the word out doesn't seem to be the problem. Last year, for example, there were 161 first-round entries. Only 38 entries -- 24% of the total -- were in one of the languages I mentioned as being consistently successful: 1 in Dylan, 16 in Haskell, and 21 in OCaml.

    I also disagree about task selection. C and C++ and Java are every bit as suited to the sort of tasks in the ICFP contest as they are to the things they are normally used for. What they are not suited to is doing them in a short period of time, in an exploratory programming manner, and without bugs.

    Type safety, fast compiled code, garbage collection -- all of these were all but irrelevant to the last two years' tasks. The main stumbling block both years had been writing parsers.

    Parsers aren't the problem. C has had parser generators for thirty years, and besides, the messages to be parsed were totally trivial. Dylan doesn't yet have any good tools for writing parsers, but it doesn't matter, because in the first eight hours of the contest we were able to hand-write a complete and correctly functioning (but stupid) program using nothing more complex than regular expressions, leaving the remaining 64 hours to think of something clever to do. Anyone using Perl or Python or Ruby probably finished the infrastructure even quicker than we did.
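
    The flavor of that first-eight-hours approach, sketched in Haskell (hypothetical message format, not the actual contest's): a hand-rolled parser for a trivial line-oriented protocol, total by construction.

        data Msg = Move Int Int | Score Int | Quit deriving Show

        -- Every possible input line produces a defined result; malformed
        -- messages become Nothing instead of a crash.
        parseMsg :: String -> Maybe Msg
        parseMsg s = case words s of
          ["MOVE", x, y] -> Move  <$> readInt x <*> readInt y
          ["SCORE", n]   -> Score <$> readInt n
          ["QUIT"]       -> Just Quit
          _              -> Nothing
          where
            readInt t = case reads t of
              [(n, "")] -> Just n
              _         -> Nothing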
  • I second the motion! (Score:5, Interesting)

    by Jetson ( 176002 ) on Thursday January 05, 2006 @11:44PM (#14406627) Homepage
    I've been in this industry for quite some time and let me be the first to say that I wish I could repeat this sentence with a straight face.

    I've been a controller for 13 years and have worked in the automation end of things for almost 4 years now. There is NO SUCH THING as bug-free Air Traffic Control software. The best we can hope for is heterogeneous redundancy and non-simultaneous failures. Some engineers seriously think they could replace all those controllers with an intelligent algorithm. What really scares me is that the more they try, the less engaged the people become, and the harder it is for them to fall back to manual procedures when the worst happens.

    Everyone used to laugh at how Windows NT could only run for 34 days before it needed a reboot. Some of our systems can't run HALF that long without needing a server switch-over or complete cold-start.

  • by Coryoth ( 254751 ) on Thursday January 05, 2006 @11:50PM (#14406662) Homepage Journal
    It's very true that developing quality code requires both parties' commitment. If someone is unwilling to work with the developers to help them clearly define the requirements, then yes, quality code is going to be all but impossible, and in that case it isn't the developer's fault. As others have pointed out, a large part of the reason so much code is so bad is simply that the clients fail to demand a higher standard. As long as customer expectations are low, or customers are unwilling to actually commit to getting good results, there will remain issues.

    Jedidiah.
  • by georgewilliamherbert ( 211790 ) on Friday January 06, 2006 @12:31AM (#14406872)
    While it's hard to argue with the above logic, such changes don't happen overnight just because demand increases. When we're talking about properly educating people (and by properly, I mean both an intensive CS background in theory and practice, and then enough practical coding in real-world environments to become a good coder as well), it will take five or more years for a demand spike to produce a supply rise.

    The projects I am involved with today won't wait five years for programmers. This is business fact; if they aren't started now, and completed in an economical timeframe, then companies will go out of business and not be around to hire those better programmers in 5 years.

    If you shift the question around to "should we set project and programmer standards" and "should standards improve and evolve over time", and the statement to "current standards are inadequate and irresponsible" then sure, no disagreement.

    In many cases, development problems are the result of not even following what industry standards and best practices are in place now.

    And one thing I don't want to see is formal programming CS programs which produce CS professors exclusively... though some of my CS academic acquaintances dislike me saying so, from what I see, few of their graduates go into coding in industry. That's a pretty unfortunate thing for the world. To be ultimately useful, these skills need to become things which take people out of college, into industry, and successfully into challenging industry projects.

    Nontrivial problems to solve. If it was easy, everyone would be doing this stuff right already. That obviously isn't true yet...
  • by Beryllium Sphere(tm) ( 193358 ) on Friday January 06, 2006 @12:59AM (#14406972) Journal
    http://en.wikipedia.org/wiki/IEFBR14 [wikipedia.org]

    IEFBR14 is sort of an executable version of /dev/null. It returns immediately when called. It had four or five bug reports filed against it.

    IBM could of course write a defect-free return statement. All the bugs were requirements drift that Praxis could not have prevented. (The classic example: the original one-instruction version never zeroed register 15, so callers saw a garbage return code.)
  • by NormalVisual ( 565491 ) on Friday January 06, 2006 @01:01AM (#14406978)
    1) Rely on existing libraries and hardware as black boxes

    Don't forget tools - I have to do the majority of my work in C++ with Visual Studio, and there has been more than one occasion where I had to look at the generated assembly code to figure out what was going on when VC++ screwed something up ("WTF - where'd that JMP come from?!?"), and then spend a little more time coming up with an alternate way to express the same logic that the compiler wouldn't fuck up. You can mathematically prove the correctness of your code all you want, but if the compiler is not going to faithfully produce machine code that exactly mimics that behavior, you're still going to have issues to deal with even though your logic is impeccable. Being able to truly prove correctness means being able to account for every single machine instruction that will be executed, and every bit of data those instructions operate on. I can't do that with any system in the commercial marketplace right now, so while I can perhaps guarantee *my* code, I can't guarantee bug-free or uninterrupted operation.

    You can't expect to build something solid if your foundation is made of quicksand.
  • all of them are very expressive languages with uncluttered code (compared to C++ or Java), completely type-safe, produce fast compiled code, and use garbage collection.
    More to the point, they're all basically functional, and Haskell is a pure functional language.

    I think we're going to see functional languages get a lot more popular soon because they're better for concurrent programming, and we're about to see a lot more multi-processor PCs.
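
    A minimal sketch of that point in Haskell (hypothetical example): the worker threads apply a pure function and share no mutable state, so there is nothing to race on; results come back through MVars.

        import Control.Concurrent (forkIO)
        import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

        -- Evaluate f on each input in its own thread. Because f is pure,
        -- no locking discipline is needed around the work itself.
        parMapIO :: (a -> b) -> [a] -> IO [b]
        parMapIO f xs = do
          vars <- mapM (\x -> do
                          v <- newEmptyMVar
                          _ <- forkIO (putMVar v $! f x)  -- force in the worker
                          return v)
                       xs
          mapM takeMVar vars

        main :: IO ()
        main = do
          rs <- parMapIO (\n -> sum [1 .. n :: Integer]) [100000, 200000, 300000]
          print rs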
  • by meburke ( 736645 ) on Friday January 06, 2006 @01:23AM (#14407076)
    This article couldn't have been more timely for my current project: I've been re-reading James Martin's books, "Application Development without Programmers" and "System Design from Provably Correct Constructs", with the goal of selecting a method for programming mechanical devices.

    Martin's thesis, and remember this was back in the 70's and early 80's, was that the program should be generated from a specification of WHAT the program was to do, rather than by trying to translate faulty specifications into code telling the computer HOW to do it. (Trust me, that poor sentence does not come close to the clarity of purpose in Martin's books.) Martin proposes that a specification language can be rigorous enough to generate provably correct programs by combining a few provably correct structures into provably correct libraries from which to derive provably correct systems.

    The methodology of the time, HOS (for Higher-Order Software), was actually used by a company called HOS, Inc. (!), and apparently worked pretty well. Many of the constructive ideas were included in OOP and UML, but ideally, if I understand the concept properly, it would be equivalent to generating software mostly from use-case analysis. There are similar approaches in the MDD and MDA methodologies. I wonder whatever became of HOS, Inc. and the HOS methods? It looks like they had a handle on round-trip software engineering in the 80's.

    OK, why would this be a good thing? Well, for one thing, computational/programmable devices are proliferating at a tremendous rate, and while we can engineer a new device in 3 to 6 months, the programs for the device take 18 months to 3 years (if they are finished at all). Hardware development has greatly outpaced software development, by some estimations a 100x difference in capacity... yet they are built on the same fundamental logic!

    The best argument, IMO, is that since larger systems are exponentially more complex, and since it is impossible to completely test even intermediately complex systems, it will take provably correct modules or objects to produce dependable systems. If the code is generated from provably correct components, then the system only has to be tested for correct outputs.

    Furthermore, code generated from provably correct components can allow machinery and devices to adapt in a provably correct way by rigorously specifying the outputs and changes to the outputs.

    Praxis is on a roll. The methodology employed is probably more important than the genius of the programmers. It should get better, though: the most mediocre engineer today can produce better devices than the brilliant engineers of 30 years ago by using tested off-the-shelf components. IMO, this is the direction programming should be taking.
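
    A minimal Haskell sketch of the correct-by-construction flavor (a toy example, nothing to do with HOS or Praxis): each stage's output type is the next stage's input type, so an ill-formed pipeline simply does not compile.

        newtype Raw       = Raw String
        newtype Validated = Validated String
        newtype Stored    = Stored String

        -- The only way to obtain a Validated value is through this check.
        validate :: Raw -> Maybe Validated
        validate (Raw s)
          | not (null s) && all (/= ';') s = Just (Validated s)
          | otherwise                      = Nothing

        store :: Validated -> Stored
        store (Validated s) = Stored ("db:" ++ s)

        -- You cannot accidentally store a Raw value; the compiler
        -- enforces the composition order.
        pipeline :: Raw -> Maybe Stored
        pipeline = fmap store . validate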
  • by RibRdb ( 243719 ) on Friday January 06, 2006 @03:26AM (#14407467)
    As a computer science student, I would like to know why we aren't taught this in school. My professors have told me that software verification is a nearly impossible task, too complicated for any real system, but this article claims otherwise. Why aren't we being taught that we should be responsible for writing reliable code, and how to do it? Why haven't I heard of Z or other methods for writing provable specifications? My professors can't even write a decent specification. And supposedly this is one of the best computer science programs in the US. If students aren't even taught that it is possible to write near-bug-free software, it's no wonder that software is so unreliable.
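
    One accessible halfway point between no spec and a full Z schema: write the postcondition as an executable predicate, separate from the implementation, and check it mechanically. A sketch in Haskell with QuickCheck (an illustration, not Praxis's method; Praxis proves rather than tests):

        import Test.QuickCheck (quickCheck)
        import Data.List (sort)

        -- Implementation under scrutiny: insertion sort.
        mySort :: [Int] -> [Int]
        mySort = foldr ins []
          where ins x []     = [x]
                ins x (y:ys) | x <= y    = x : y : ys
                             | otherwise = y : ins x ys

        -- Specification, written independently: the output is ordered
        -- and is a permutation of the input.
        spec :: [Int] -> Bool
        spec xs = ordered ys && sort ys == sort xs
          where ys = mySort xs
                ordered zs = and (zipWith (<=) zs (drop 1 zs))

        main :: IO ()
        main = quickCheck spec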
  • by Vintermann ( 400722 ) on Friday January 06, 2006 @04:14AM (#14407607) Homepage
    SPARK Ada, which Praxis uses, is actually not Turing-complete. It's bounded in space: no dynamic data structures, no allocation or deallocation whatsoever - not even implicitly. That's what permits the serious static checking, and makes correctness proofs feasible. Termination proofs can still be a pain, though. AFAIK most SPARK programs are proven to behave correctly only if they do terminate.
    That's not so relevant in real-time systems, though, which is where SPARK really shines. It's quite an achievement by Praxis to construct a language which is amenable to correctness proofs and can still handle multiple threads just fine. It looks like something Dijkstra would have been impressed by.

    To answer one of the follow-ups to this post: yes, NAND gates may be Turing-complete, but only if you have an unbounded supply of them. Otherwise, they cannot be equivalent to a Turing machine (which has an infinite tape).
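
    A toy Haskell illustration of why boundedness helps (not SPARK, just the flavor): over a finite domain you can check a property on every value, a proof by exhaustion rather than a sampled test. SPARK's static analysis gets the analogous guarantee at compile time.

        data Gear = Up | Transit | Down deriving (Eq, Show, Enum, Bounded)

        safeToLand :: Gear -> Bool
        safeToLand Down = True
        safeToLand _    = False

        -- Exhaustive: [minBound .. maxBound] enumerates the whole domain.
        propNoFalsePositives :: Bool
        propNoFalsePositives =
          and [ safeToLand g == (g == Down) | g <- [minBound .. maxBound] ]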
  • Re:Hear, hear... (Score:2, Interesting)

    by gronofer ( 838299 ) on Friday January 06, 2006 @09:07AM (#14408308)
    For the USA, these days, the war is never over. But a new model of jet fighter is hardly likely to make a big impact against drugs, or terror, or whatever it will be next year.
  • by nicolaiplum ( 169077 ) on Friday January 06, 2006 @09:18AM (#14408345)
    At the age of 17 I spent a week in the Praxis offices on "Work Experience" (Americans may think of this as a very short internship), to find out what developing software would be like as a career. This was a major formative event of my life: I thought that developing software sounded good, I really liked using Real Computers (multiuser, multiprocessing systems with powerful operating software, like VMS and SunOS), and the people impressed me greatly. It definitely set me on the path to the career in systems development and administration that I have today.
    The person who made the biggest impression on me was the sysadmin. He got his own office-cube instead of having to share, he wore much more casual clothes and had a lot more hair and beard than most of the staff, he got to have big toys (several workstations, a LaserJet IIIsi big enough for an entire office that seemed to be his alone, etc) and he didn't seem to get much hassle from anyone else. This was obviously the job for me.
    The sysadmin was obviously rather a BOFH. When I was sat at the UNIX workstation for the first time, and had poked around with basic file-handling commands, I asked "What's the text editor on this system?". He answered "emacs - e m a c s - here's a manual" and picked about 300 sheets of paper off the Laserjet and handed it to me.
    I got to play with UNIX (SunOS), VMS, Oracle development environments. I still have the Emacs manual printout somewhere at home - it served me well when I went to University where printing anything out was charged by the sheet!
    I'm very glad they're still around.
  • by qwijibo ( 101731 ) on Friday January 06, 2006 @10:29AM (#14408708)
    In order to get the more talented people, companies need to be willing to pay more. It's a chicken-and-egg problem: they can't get proof that it's money well spent until after they spend the money, so they don't believe it's possible.

    In large organizations, there are already a lot of mediocre people, and policies built to deal with that reality. People who know what they're doing face an uphill battle in getting anything done. There are large bureaucracies in place to tell people what standards they must adhere to, but nobody can explain why. Without knowing why, it's hard for the competent people to come up with solutions that meet the technical needs without violating some weird compliance regulations. Everything comes down to the lowest common denominator in large companies. Small companies can avoid this problem, but only until they reach some critical mass of mediocre people, which is when the downward spiral starts.
  • by ivan256 ( 17499 ) * on Friday January 06, 2006 @06:32PM (#14412678)
    That doesn't mean that CS grads shouldn't be employed in industry at all, just that the assumption that they are generally prepared for product development work isn't a good one.

    I don't want to say that you can't be a Computer Scientist without being a good Software Engineer, but I will say that you can't be a good Computer Scientist without being a good Software Engineer. Every one of the skills a good software engineer needs, taken together, forms a subset of the skills required of a good computer scientist. Though perhaps you're confusing Software Engineers with Computer Programmers?

    all of the examples you've given are embedded software [...] It's also the software most likely to be written by people trained as engineers (electrical or computer), rather than CS grads

    How do you figure? As a developer of commercial embedded software, I can say that hasn't been my experience. Teams usually consist of both, but code written by Electrical Engineers... well, let's just say it's notorious.

    Let's cut the bull though. My problem here, and why I engaged in this discussion is your implication that computer scientists make poor software engineers ("Sadly, many developers are graduates from CS programmes instead of engineering programmes"). Above all else, please save your judgement for the quality of particular education programs instead of blanketing an entire field with your views. Many of us went to engineering schools, and got formal engineering training along with our computer science degrees. The real problem is that it's too easy to get a degree and not actually take any benefit from the experience, not that computer scientists create crappy software. That's as silly as saying "Sadly most modern drugs are developed by molecular biologists instead of genetic engineers".
