
High Integrity Software

Jack Ganssle writes "High Integrity Software: the title alone got me interested in this book by John Barnes. Subtitled "The SPARK Approach to Safety and Security," the book is a description of the SPARK programming language's syntax and rationale. The very first page quotes C.A.R. Hoare's famous and profound statement: 'There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.' This meme has always rung true, and is one measure I use when looking at the quality of code. It's the basis of the SPARK philosophy." Read on for more of Ganssle's review of the book, and some more on the SPARK language.
High Integrity Software
author John Barnes
pages 448
publisher Addison-Wesley
rating 8
reviewer Jack Ganssle
ISBN 0321136160
summary The book describes a language that insures programs are inherently correct.

What is SPARK? It's a language, a subset of Ada that will run on any Ada compiler, with extensions that automated tools can analyze to prove the correctness of programs. As the author says in his Preface, "I would like my programs to work without spending ages debugging the wretched things." SPARK is designed to minimize debugging time, which typically consumes about half of a project's duration.

SPARK relies on Ada's idea of "programming by contract," which separates the ability to describe a software interface (the contract) from its implementation (the code). This permits each to be compiled and analyzed separately.

It specifically attempts to ensure the program is correct as built, in contrast to modern Agile methods, which stress cranking out a lot of code fast and then making it work via testing. Though agility is appealing in some areas, I believe that, especially for safety-critical systems, a focus on careful design and implementation beats a code-centric view hands down.

SPARK mandates adding numerous instrumentation constructs to the code for the sake of analysis. An example from the book:

Procedure Add(X: In Integer);

--#global in out Total;

--#post Total=Total~ + X;

--#pre X > 0;

The procedure definition statement is pure Ada, but the following three statements are SPARK-specific tags. The first tells the analysis tool that the only global used is Total, and that it's both an input and an output variable. The next tag tells the tool how the procedure will use and modify Total. Finally, a precondition is specified for the passed argument X.

Wow! Sounds like a TON of work! Not only do we have to write all of the normal code, we're also constructing an almost parallel pseudo-execution stream for the analysis tool. But isn't this what we do (much more crudely) when building unit tests? In effect we're putting the system specification into the code, in a clear manner that the tool can use to automatically check against the code. What a powerful and interesting idea!
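To make the idea concrete without knowing SPARK, here is a rough analogy in Python. The names (add, total) mirror the book's example; the crucial difference, of course, is that SPARK checks these contracts statically, before the program runs, while Python assertions only fire at runtime.

```python
# Illustrative analogy only, not SPARK: runtime checks standing in for
# the static annotations --# global, --# pre, and --# post.

total = 0  # plays the role of the SPARK global "Total"

def add(x: int) -> None:
    """Analogue of: procedure Add(X: in Integer);
       --# global in out Total;
       --# pre  X > 0;
       --# post Total = Total~ + X;"""
    global total
    assert x > 0, "precondition violated: X > 0"
    old_total = total            # Total~ : the value of Total on entry
    total = total + x
    assert total == old_total + x, "postcondition violated"

add(5)   # satisfies the precondition; postcondition holds afterward
```

Calling add(-1) here trips the precondition assertion; the SPARK tools would instead reject such a call before the program ever executed.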

And it's similar to some approaches we already use, like strong typing and function prototyping (though God knows C mandates nothing and encourages any level of software anarchy).

There's no dynamic memory usage in SPARK -- not that malloc() is inherently evil, but because use of those sorts of constructs can't be automatically analyzed. SPARK's philosophy is one of provable correctness. Again -- WOW!

SPARK isn't perfect, of course. It's possible for a code terrorist to cheat the language, defining, for instance, that all globals are used everywhere as in and out parameters. A good program of code inspections would serve as a valuable deterrent to lazy abuse. And it is very wordy; in some cases the excess of instrumentation seems to make the software less readable. Yet SPARK is still concise compared to, say, the specifications document. Where C allows a starkness that makes code incomprehensible, SPARK lies in a domain between absolute computerese and some level of embedded specification.

The book has some flaws: it assumes the reader knows Ada, or can at least stumble through the language. That's not a valid assumption any more. And I'd like to see real-life examples of SPARK's successes, though there's more info on that at www.sparkada.com.

I found myself making hundreds of comments and annotations in the book, underlining powerful points and turning down corners of pages I wanted to reread and think about more deeply.

A great deal of the book covers SPARK's syntax and the use of the automated analysis tools. If you're not planning to actually use the language, your eyes may glaze over in these chapters. But Part 1 of the tome, the first 80 pages describing the philosophy and fundamentals of the language and the tools, is breathtaking. I'd love to see Mr. Barnes publish just this section as a manifesto of sorts, a document for advocates of great software to rally around. For I fear the real issue facing software development today is a focus on code ueber alles, versus creating provably correct code from the outset.


You can purchase High Integrity Software from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, carefully read the book review guidelines, then visit the submission page.

This discussion has been archived. No new comments can be posted.

High Integrity Software

Comments Filter:
  • question (Score:5, Insightful)

    by Anonymous Coward on Wednesday May 19, 2004 @03:04PM (#9197491)
    Do these authors actually employ the mentioned practices with any success on a daily basis?

    Or do they sit around thinking of methodologies to write books about?

    Those who can, do, those who can't, teach?

  • A _review_? (Score:5, Insightful)

    by Caine ( 784 ) on Wednesday May 19, 2004 @03:06PM (#9197499)
    Seriously, I'm not one to complain, but this isn't a review; it's a guy saying "WOW" repeatedly.
  • But what.. (Score:4, Insightful)

    by Anonymous Coward on Wednesday May 19, 2004 @03:06PM (#9197506)
    ..if the contract itself is wrong?
  • What horseshit (Score:4, Insightful)

    by Anonymous Coward on Wednesday May 19, 2004 @03:09PM (#9197522)
    The book describes a language that insures programs are inherently correct.

    Now, is there a language to ensure that your boss asks you to program the right thing?

  • by tcopeland ( 32225 ) * <tom AT thomasleecopeland DOT com> on Wednesday May 19, 2004 @03:20PM (#9197612) Homepage
    modern Agile methods which stress cranking a lot of code fast and then making it work via testing.
    No, instead, they stress writing a test, then writing code to make the test pass. Big difference there.
  • Re:But what.. (Score:3, Insightful)

    by happyfrogcow ( 708359 ) on Wednesday May 19, 2004 @03:22PM (#9197619)
    that's not the implementor's primary concern, really. but if you have a design to begin with, at least you can backtrack and see why the contract is wrong. change it, and implement the right contract.

  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Wednesday May 19, 2004 @03:28PM (#9197676)
    Comment removed based on user account deletion
  • by RAMMS+EIN ( 578166 ) on Wednesday May 19, 2004 @03:30PM (#9197687) Homepage Journal

    SPARK mandates adding numerous instrumentation constructs to the code for the sake of analysis. An example from the book:

    Procedure Add(X: In Integer);

    --#global in out Total;

    --#post Total=Total~ + X;

    --#pre X > 0;


    This seems rather a waste of time. You either first describe exactly what the code does, then write the code, or you write a simplification of what the code does, then the code.

    In the first case, you write the exact same thing twice, in different languages. That sounds like an immense waste of time to me.

    In the second case, your specification does not cover every aspect, which introduces loopholes, defeating the purpose of the contract.

    In either case, you get in trouble if there are errors in the contract.
  • by Anonymous Coward on Wednesday May 19, 2004 @03:30PM (#9197688)
    It's not so much that they can figure them out as it is they don't care to.
  • Re:But what.. (Score:5, Insightful)

    by AlecC ( 512609 ) <aleccawley@gmail.com> on Wednesday May 19, 2004 @03:33PM (#9197718)
    I agree that this is actually the problem. As I understand it, the high-reliability people have more-or-less solved the problem of ensuring the code follows the specification (though at a cost that would deter use for less critical applications). But all this does is push the problem one level higher.

    In the early days of compilers, one of the claims for compilers was that they would make mistakes impossible. Of course, all they did was make one class of stupid assembler mistakes impossible.

    The reason for the verbosity of COBOL is the idea that it would be so like business English that management could read it, if not write it.

    Each time we get a tool that removes one class of mistakes, all we do is increase the system complexity until the level of mistakes returns to the previous level of near-unacceptability. "Snafu" is the normal state of the programmer's universe - it is only a question of how large a system you build before it all fouls up.

    Having said that, Design By Contract is a good idea. While accepting that it is always going to turn to ratshit, you might as well have it do so at a higher level rather than a lower one. However, it isn't new: look at Eiffel and Aspect Java, for instance.
  • by Animats ( 122034 ) on Wednesday May 19, 2004 @03:35PM (#9197735) Homepage
    We did this twenty years ago, for a dialect of Pascal. See Practical Program Verification [acm.org]. Back then, you could do it, but it was rather slow. Today, with machines thousands of times faster than the VAX 11/780 we used back then, it's much more feasible. But you need a language suitable for verification. C and C++ are hopeless - the semantics of the language are ambiguous. (Casts, pointer arithmetic, and "void *", make the typing system unreliable.) The Pascal/Modula/Ada family are suitable, with modifications and limitations. Eiffel and Sather do well, but few use them. Java, though, is both verifiable and widely used.

    The best available modern system for formal verification is the Extended Static Checking system for Java [hp.com] developed at DEC SRC, before HP shut down that research operation. It's still available as a free download [compaq.com].

    What all this machinery does is put teeth into "design by contract". With systems like this, you can tell if a function implements its contract, and you can tell if a caller complies with the contract of each thing they call. Before running the program.

    Developing in this mode means spending forever getting rid of the static analysis errors. Then, the program usually just runs. That's exactly what you want for embedded systems. But it's painful for low-grade programming like web site development, where "cosmetic errors" are tolerable and time-to-market matters more than correctness.

  • by happyfrogcow ( 708359 ) on Wednesday May 19, 2004 @03:39PM (#9197769)
    Ok, I know nothing about SPARK, so forgive my ignorance.

    me neither, me too...

    my understanding is that the contract has hard requirements on specific input and specific output for results, all of which are defined prior to executing that code. something like "we require an incoming integer with a value that is between zero and fifteen. we guarantee that an integer value will be returned that is either zero or one."

    with a public class interface you can write a piece of code that does this, but it won't guarantee anything. it's up to the developer to exhaustively test all situations and make sure that it happens. in a contract-based language, i would guess that the program either won't compile, won't run, or will fail in obvious ways in the development stage if the requirements are not met. i'm not sure how they handle requirements that aren't met.
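The contract guessed at in this comment (input between zero and fifteen, result either zero or one) could be sketched like this in Python. This is a hypothetical illustration with an arbitrary body; a contract-checking language would verify these conditions statically rather than at runtime.

```python
def classify(n: int) -> int:
    """Hypothetical contract: require 0 <= n <= 15,
       guarantee the result is either 0 or 1."""
    assert 0 <= n <= 15, "caller broke the contract"
    result = 1 if n >= 8 else 0   # arbitrary body, just for the sketch
    assert result in (0, 1), "implementation broke the contract"
    return result
```

A call such as classify(16) violates the precondition and fails immediately, which is the "fail in obvious ways in the development stage" behavior the comment describes.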
  • Re:But what.. (Score:5, Insightful)

    by bcrowell ( 177657 ) on Wednesday May 19, 2004 @03:45PM (#9197807) Homepage
    As I understand it, the high-reliability people have more-or-less solved the problem of ensuring the code follows the specification (though at a cost that would deter use for less critical applications).
    Yeah, the article's statement, "The book describes a language that insures programs are inherently correct," is really misleading. Very few real-world programs have ever been proved correct, and the reasons for that are language-independent. Very few real-world problems even lend themselves to a precise mathematical statement of what "correct" would mean. What would it mean for mozilla to be "correct?" Even if your code is running a nuclear reactor, the code can only be as "correct" as the accuracy of the assumptions and engineering data that went into the spec.

    In many real-life projects, the nastiest bugs come from requirements that change after the first version has already been written. Proving that a program correctly implements a spec doesn't help if the spec changes.

  • by Anonymous Coward on Wednesday May 19, 2004 @03:55PM (#9197894)
    Certification is definitely a must with certain software, and the field of Software Engineering needs some major work, but don't take it too far. If you want a program that you know is correct, you have to prove it's correct. For non-trivial programs, that is simply intractable. What's more, although you can try to create metrics to decide when a program is "good enough", there's also a good chance you'll be wrong. Let's say you somehow managed to cover 50% of the cases (very unrealistic) and decide that works. So you deploy it, people start using it, and they start to hit the remaining 50% of the cases. Unfortunately for you, one of those cases you didn't test causes the system to cease working, or worse, go into an unknown state.

    There's many, many things that can be done to improve software. There's ways of adding redundancy, simplifying the source, properly reusing components, and so on. It's still going to be difficult.

    If you want to do that for mission critical software, I'm all for it. But if you expect that for an application like Word, don't expect to ever see another version of Word again.
  • by NineNine ( 235196 ) on Wednesday May 19, 2004 @04:05PM (#9198022)
    First off, Windows is the ONLY software that's used both in commercial, industrial, specialized applications, AND on Timmy's computer in his bedroom. Of course it's going to be attacked more than, say, a program that runs a CNC machine. I'm betting that many more specialized programs are even MORE vulnerable, but they're not exposed to anywhere near the abuse that Windows is. (ie: I've never heard of anybody hacking, or even attempting to hack, an elevator's software.)

    Secondly, we have the entire legal system behind software. You have a problem with the software that runs your x-ray machine? You can 1. Sue the manufacturer or 2. not buy the damn thing again. It's already in place.
  • Re:Eurofighter (Score:5, Insightful)

    by SheldonYoung ( 25077 ) on Wednesday May 19, 2004 @04:07PM (#9198062)
    SPARK is used heavily in the safety critical software in the Eurofighter amongst other projects. It is a complete pain to type all of the annotation, takes forever to run the tool and it very rarely comes up with any real problems in the code.

    This type of unprofessional crap is the reason people have such low expectations of software. You didn't want to use the tool because it was a "pain to type"?! If the length of time it takes you to type your code is a bottleneck, then you're not doing enough thinking before you type. The extra effort required to type more verbose code is close to zero. You're coming across like an aeronautical engineer would if they tightened a critical bolt to only 90% of the required torque because it was less effort.

    Saying it "very rarely comes up with any real problems" means it found some, and those problems may easily have been overlooked by other types of testing. And what problems wouldn't be "real" in safety-critical code?! Yes, there are other tools besides SPARK that could have been used, but the principles would have been the same.

    Don't ever forget you're talking about a serious piece of hardware, and there's a human being sitting in the pointy end. If I were the pilot of something that had a bug in its safety-critical software because of your lack of pride, I would kick your ass.
  • by killjoe ( 766577 ) on Wednesday May 19, 2004 @04:11PM (#9198101)
    It also sounds exactly like Eiffel.

    This industry's ability to continually re-invent the wheel never ceases to amaze me.

    Let's go back to Lisp and Smalltalk; every frikken language since then is just a rewrite of one or the other anyway.
  • Re:But what.. (Score:3, Insightful)

    by Waffle Iron ( 339739 ) on Wednesday May 19, 2004 @04:27PM (#9198305)
    ..if the contract itself is wrong?

    That's the beauty of this system. You can close out the issue in your bug tracking database anyway:

    "CLOSED: Not a bug; behavior as designed."

  • by Alwin Henseler ( 640539 ) on Wednesday May 19, 2004 @04:28PM (#9198309)
    First thought I had here was another quote, probably saw it before somewhere on /.:
    "The true measure of a good coder is not how complex his code is, but how simple."

    Today's software systems become bigger, bigger, and bigger. Maybe single components are simplified, debugged or optimised, but not the system as a whole. The result we see today: in many systems, a single slip in one place can screw up the entire system.

    IMHO the logical way to combat this, would be to design software using methods that can be formally proven to do what they're supposed to do. Such methods do exist.

    Today's security measures in operating systems (like running apps as non-root user under *nix) are in place at least for one reason: an assumption is made, that as a piece of software grows bigger, it's simply not possible to guarantee that there are no bugs in there, that it will always work as expected.

    That assumption is flawed: it is possible to design software such, that it always works as expected (hardware failures aside, that is). Doing so is just very hard. Not impossible.

  • Re:Ok... (Score:3, Insightful)

    by wintermute42 ( 710554 ) on Wednesday May 19, 2004 @04:47PM (#9198524) Homepage

    And what they did in Ada would have been impossible in C++. It's significant that SPARK code will run EXACTLY the same on all compliant Ada compilers. No platform dependencies, no ambiguities, no undefined behavior... ALWAYS the same. You simply can't possibly define a subset of C++ which would be able to make those promises. I don't know if it would be possible with Java; since there's no formal specification of Java, probably not.


    I'm not sure what you meant by formal specification here. As I recall someone did a huge denotational semantic specification of Ada. In theory this massive specification defined exactly what Ada's semantics are. The only problem is that such a specification is, in practice, useless. For even simple constructs, the denotational semantic description is huge. And as with any formalism, there is always the opportunity for error. Given the complexity of denotational semantics I suspect that few people other than the author ever read the whole thing.

    So the fact that Ada had DoD funding for a denotational semantic specification, and Java does not have such a specification, does not make Ada more portable and reliable (in the sense that a piece of code will produce the same result on different platforms).

    The idea of adding assertions and other formal descriptions of the intended semantics has some merit. But the realization in SPARK is unfortunate.

    As I recall, the gang at Sun who did Java mentioned that they used C/C++ as the basis for Java because they wanted the language to be used. Choosing Ada as the basis for anything guarantees that few people will use it. The module paradigm for languages (used by Ada and Modula-2) is not as powerful as the object model. This is one reason that C++/Java have caught on and Ada/Modula-2 are consigned to history.

    Finally, just because you have a specification and code that exactly implements this specification does not mean that the application will not produce unintended results (e.g., the death of users or others). Frequently the problems encountered with software are a result of conditions that the designers did not foresee -- that is, things that were not defined in the specification.

  • Re:Eurofighter (Score:1, Insightful)

    by Anonymous Coward on Wednesday May 19, 2004 @04:59PM (#9198677)
    That is what is so cool about the defense industry -- projects delivered at 1/100 the defect rate, with 1000* the manpower requirements and 5000 * the cost. Most excellent for those of us in the software business...
  • Re:Ummmmm... (Score:3, Insightful)

    by randombit ( 87792 ) on Wednesday May 19, 2004 @05:12PM (#9198866) Homepage
    Actually, I've seen a bunch of safety-critical systems that were written in Pascal or some godawful assembler

    My Dad hand-patches microcode on 60s-era safety systems in a chemical plant for a living. It's pretty intense. :)

    Perl is pretty far from a B&D language, but I'd sure hate to see an autopilot written in perl, no matter how productive or satisfied it made the coder.

    The happiness of the coder is not really the issue. If we could get safe, secure, reliable software by coding them in restrictive languages (or beating the programmers with sticks, whatever), then by all means we should, if it's worth what it will cost.

    The first problem is that nobody has any really convincing evidence that, all other things being equal (testing, design methods, skill of people involved, time/money available), SPARK or similarly restricted languages actually get you any meaningful improvement in security as compared to, say, Pascal, Ada95, C++, Java, O'Caml, or a similarly "full of pointy bits" language.

    Secondly, the language's restrictions mean the programmers/designers have to spend more time (ie, money) working around the limitations of said language. It's economics. If a particular application (Word or Photoshop or your firewall or what have you) *never* crashed or failed in any way, would you be willing to pay 1000 times as much for it? 100x? 10x? Twice as much, even? For an autopilot, it might make sense. For 99% of the stuff that's out there, it doesn't (read Ross Anderson's papers on the economics of security for some good details on this area), because the cost of a failure (for the people writing/selling the code) is between zero and tiny, and the cost of making it not fail is high. It's cheaper to write crappy software, unless your legal/political liability is so fscking high that you *have* to make it safe.
  • Homeland Security (Score:2, Insightful)

    by yermoungder ( 602567 ) on Wednesday May 19, 2004 @05:32PM (#9199085)
    WRT: SPARK check out this report: http://www.cyberpartnership.org/SDLCFULL.pdf The report's task force was co-chaired by Microsoft and Computer Associates people and is being submitted to the DHS. It explicitly recommends SPARK as one method of increasing reliability/security. It also states "A programming language with significantly fewer possibilities for mistakes than C or C++ should be used where possible."
  • by gewalker ( 57809 ) <Gary.Walker@Astr a D i g i tal.com> on Wednesday May 19, 2004 @05:41PM (#9199194)
    I'm not so sure about Eiffel. In Eiffel, you can declare class invariants, which are contracts that must apply to your class and all inherited classes. Sounds like it includes OOPL contracts to me. Eiffel also has loop invariants.

    Anyway, I see nothing new in DBC in SPARK not already in Eiffel.
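For readers who haven't seen Eiffel, a class invariant is a condition that must hold whenever an object is observable from outside. A rough Python sketch (Eiffel checks invariants automatically on routine entry and exit; here the hypothetical _invariant helper is called by hand):

```python
# Rough sketch of an Eiffel-style class invariant. The class and its
# invariant are invented for illustration; Eiffel's "invariant" clause
# would enforce this without the explicit calls.

class Account:
    def __init__(self, balance: int = 0):
        self.balance = balance
        self._invariant()

    def _invariant(self):
        # Invariant: the balance never goes negative.
        assert self.balance >= 0, "class invariant violated"

    def deposit(self, amount: int):
        assert amount > 0                      # precondition
        self.balance += amount
        self._invariant()                      # re-check on exit

    def withdraw(self, amount: int):
        assert 0 < amount <= self.balance      # precondition
        self.balance -= amount
        self._invariant()                      # re-check on exit
```

Because the invariant is inherited, any subclass of Account is bound by the same guarantee, which is the point the comment makes about contracts applying to "your class and all inherited classes."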
  • by arn@lesto ( 107672 ) on Wednesday May 19, 2004 @07:41PM (#9200250) Homepage
    The problem with contract programming is that most people are incredibly bad at describing preconditions and postconditions. They either over-specify or, worse, leave something out; the result is that the contract is invalid.

    The code gets written and it turns out to have bugs anyway. We go back to examine everything and notice missing details in the contracts. We fix those problems, the code gets re-written, and the cycle continues. In the end we've not achieved much in the way of program correctness, efficiency of code, or reduction of the programmer's time.

    Trivial examples all show the benefits of whatever new language feature has been added. It's really only after significant use and experience that we can judge how good something is in a large-scale application.

    I remember getting excited about the "Z specification" language in 1986, but in the end it was just too damn hard for everyone. The programs were absolutely correct, but if you wanted to change something later, you were back to square one. Forget about anything large involving multiple people.

    Correctness isn't always necessary; sometimes good enough is all that is required. If you want to spend a couple of months proving that your implementation of sqrt() is correct, I hope you have the funding (I see government money... enough said). Most of us can't afford this level of correctness.

    Give me a sandbox, lego bricks, and let me play.. go away I don't like your rules.
  • Agile methods (Score:3, Insightful)

    by JacobO ( 41895 ) on Wednesday May 19, 2004 @07:54PM (#9200322)
    in contrast to modern Agile methods which stress cranking a lot of code fast and then making it work via testing

    I have to disagree here; the agile methodologies I'm aware of stress automated unit testing to ensure the code that follows meets the specifications. They are agile because the "contract" enforced by the unit tests allows you to see easily what you have broken after a change. If your unit tests pass, then you've either not broken anything or your test coverage is insufficient. It seems that these SPARK "tags" have some of the benefits, and all of the problems, that a good suite of automated unit tests provides.

    I do, however, like the idea that your assumptions and dependencies are explicitly mentioned near where they occur. These are things that definitely sting you, especially in code you are new to (written by someone else). All the little interdependencies and unexpected side-effects that make their way into code can really make life difficult sometimes.

    I have a feeling though that this would take discipline, and if all team members were skilled and disciplined then you would likely have many of these things stated anyway.
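The tests-as-contract idea this comment contrasts with SPARK's tags might look like the following Python sketch (function and behavior invented for illustration): the tests pin down the interface's promises, and a change that breaks a promise fails immediately.

```python
# Sketch of "unit tests as contract": the tests below encode the same
# kind of promises a SPARK precondition would, but they only check the
# specific cases someone thought to write down.

def add_positive(total: int, x: int) -> int:
    """Add x to total; the interface promises to reject non-positive x."""
    if x <= 0:
        raise ValueError("x must be positive")
    return total + x

def test_adds():
    assert add_positive(10, 5) == 15

def test_rejects_nonpositive():
    try:
        add_positive(10, 0)
    except ValueError:
        pass  # the promised rejection happened
    else:
        raise AssertionError("contract violated: accepted x <= 0")

test_adds()
test_rejects_nonpositive()
```

The contrast with SPARK is coverage: these tests check two sample points, while a verified precondition covers every possible call site.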
