Programming IT Technology

Are You Using Z-Notation to Validate Your Software?

ghopson asks: "Many years ago, in the early nineties, I became aware of a software definition language called Z, which was based on set theory. The idea was that a software system could be defined in 'Z', and then that system could be mathematically proven to be 100% correct (in other words, zero defects). What is the current state of play with 'zero defect' software development? I know Z still exists, and that there is a software development method (the B-method) associated with it. Have techniques and advances such as MDA advanced zero-defect software development? Or would a set of robust unit tests suffice?"
  • sure.. (Score:5, Funny)

    by Anonymous Coward on Tuesday June 17, 2003 @07:58PM (#6228268)
    but when I transferred my algorithm into the system I made a small error.

    so I designed a system that would convert C code into Z code. unfortunately I never did get around to fully debugging it, so I decided to write a theorem prover that would work on straight C code.

    this one worked great, but one day it didn't catch a bug in some code that I *knew* was buggy.

    so I decided to write another theorem prover to parse the code of the first one. I figured if I fed the code of each one into the other, that would greatly reduce the chances of error? well, the second one crashed at startup and the first one said it was correct.

    turned out to be a bug in the OS. So I fed the OS source code into the first one, and it found some defects. I fixed the defects but then the system wouldn't boot any more. I think it was actually a bug in the C compiler, and I was going to fix it for real this time, but it was getting late.

    so I decided to just try to write clear, simple code and fix bugs as they are reported.
  • As a software developer, I'm not sure what this buys me. I can prove that if my software is a 100% correct implementation of the ideal system described in Z, then it's bug-free.

    In other words, I can prove that if my software is 100% bug-free, it's bug-free. Woohoo. This doesn't address the case where my program doesn't fully conform to the Z specification, does it?
    • by Ivan Raikov ( 521143 ) on Tuesday June 17, 2003 @08:38PM (#6228559) Homepage
      In other words, I can prove that if my software is 100% bug-free, it's bug-free. Woohoo. This doesn't address the case where my program doesn't fully conform to the Z specification, does it?

      In formal verification, the idea is that you would have a way to prove the equivalence between your Z model (or whatever modeling language you use) and your implementation. Of course, it's a lot more difficult to prove equivalence between a formal model and a C implementation, than if you are using a modern high-level language like Standard ML.
    • by cookd ( 72933 ) <douglascook@NOSpaM.juno.com> on Tuesday June 17, 2003 @09:55PM (#6228985) Journal
      Actually, I think the real kicker is the inverse of the problem you describe. It may very well be possible to prove that your C program properly conforms to the Z model. But is your Z model correct?

      For many real-world problems, figuring out *exactly* what you want your program to do in all situations (and figuring out what all the situations are in the first place) is the hard part. You can then go implement that in C almost as easily as you can in Z.

      I'm not saying this (Z and B) is useless. I just want to focus on the real problem. Using a modeling language just introduces one additional layer of abstraction into the process. While this may make the system easier to discuss and easier to "prove correct", it also introduces another layer at which errors may creep into the system. For some cases, the extra layer introduces fewer errors than it prevents. For other cases, the opposite is true.

      I think to some extent it comes down to "how much time are we going to spend making sure this software is darn near perfect." The law of diminishing returns says that the number of flaws fixed per unit of programmer time will go down as more programmer time is applied, providing a "cost to flaw" curve. Customers also have a curve of how much they are willing to pay for additional reliability. The curves meet at some price point/quality level (keeping features constant), and that is where the product will ship. If the customer's need for extra reliability is large enough that the modeling phase can be integrated properly into the product schedule, it becomes worthwhile to spend time with formal models; otherwise, these will only decrease reliability since the modeling phase eats into the time needed for other phases to properly implement the product.

      In other words, I think that a formal modeling phase is overkill for most requirements ("doesn't crash too often") since it has less bang for the buck as far as generating features and fixing glaring bugs. But it continues to have at least some bang for the buck long after other processes drop off in effectiveness. So it is worthwhile to add formal modeling when unit testing isn't going to get you to your desired flaw rate.
      • by JohnFluxx ( 413620 ) on Wednesday June 18, 2003 @03:21AM (#6231107)
        Just to restate some of your points in a different way...

        Z is an abstraction layer in the same way any specification is an abstraction layer.

        You don't really want to ever go straight from what the user wants to C. Instead you have a design and specification. Then you validate that specification against what the user wants, code the C, and validate the C against the specification.
        This is done because it is easier to validate C against a written specification than against some idea in the user's head.

        In the same way, in some cases it is easier to validate the specification against Z, and then validate Z against the C code, than to validate directly.

        I can't help but paraphrase Knuth's most famous phrase: Beware of this code, for I have only proved it to be correct, not tested it.

        or something like that :)

        • "Beware of bugs in the above code; I have only proved it correct, not tried it."
          http://www-cs-faculty.stanford.edu/~knuth/faq.html [stanford.edu]
        • you validate that specification against what the user wants

          That won't always specify the right program for the application. In one project I was assigned to code a user interface from a nicely detailed specification. When the project team then sent me to test it at the customer site there were many complaints from the users. It was obvious to me that the specification did not match how the users actually did their work.

          Apparently some group had designed the user interface to have the computer tell th

    • You can't really think the people working on formal verification are that stupid.
  • Ahem (Score:2, Funny)

    by eggstasy ( 458692 )
    This is the 21st century my friend. Products aren't supposed to have a measurable amount of usefulness or quality. They are merely an excuse for other people to get their hands in your pocket.
    Through comprehensive hyping and brainwashing (err, marketing) you could get a sizable amount of money out of selling a single turd. Ever heard of Merda d' Artista [google.com]? It literally translates to "Artist's shit" and it IS shit. Very old shit but nothing more than human faeces. And yet, we are supposed to believe that it is
  • by Godeke ( 32895 ) * on Tuesday June 17, 2003 @08:04PM (#6228315)
    Ok, this may get me pounded, but I found this line amusing:

    Z is a formal (i.e., mathematical) specification notation used by industry (especially in high-integraity systems) as part of the software (and hardware) development process

    Perhaps they could include spell checking?
    • Z is a formal (i.e., mathematical) specification notation used by industry...

      What industry?

      In practice, just getting the requirements right is more important. The better the problem domain is understood, the fewer errors--real or perceived--will be present in the implemented system.

      Also, I know of no programming language that fully prevents the possibility of side-effects of new code. C allows some pointer arithmetic to totally destroy basically anything in the program. A Java "null pointer exception
      • side-effects

        C is not a language for the timid :) Case in point: a really interesting hole in the C/C++ spec that nails programmers is the interaction between side effects and sequence points. Long story short, it leaves statements such as Fx(y++, z=y) undefined. The compiler only guarantees that side effects will have been completed at the next sequence point, so it could evaluate the arguments in either order and still be in complete compliance.
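        A minimal C sketch of the kind of trap being described (the function fx and the variables here are made up for illustration, not anything from the spec): the commented-out call is the unsequenced version, and the rewrite below it forces one evaluation order explicitly.

        #include <stdio.h>

        static void fx(int first, int second) {
            printf("first=%d second=%d\n", first, second);
        }

        int main(void) {
            int y = 1, z = 0;

            /* fx(y++, z = y);   <- unsequenced modification and read of y:
                                    formally undefined, and in practice the
                                    value passed as 'second' depends on which
                                    argument the compiler evaluates first.   */

            int first = y++;     /* force one evaluation order explicitly */
            int second = (z = y);
            fx(first, second);   /* always prints first=1 second=2 */
            return 0;
        }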

      • After doing a bit of research on the linked sites, one can see that this falls under the "well meaning and very useful in niches" category. If you are building software that *can't die* because it will be used in critical points like aircraft control systems or medical equipment, you can probably afford people smart enough to implement this.

        The "language" itself looks like a cross between boolean algebra, set theory and relational algebras (which came from set theory). Using a truely functional language w
  • by Jerf ( 17166 ) on Tuesday June 17, 2003 @08:04PM (#6228317) Journal
    For the practical world, unit tests seem to be the way to go. A large number of people have been experiencing very real success with them, without having to be absolute geniuses themselves.

    Provable programs, on the other hand, are typically so complicated for a real-world program that your provably bug-free program still has bugs, because it turns out your model is wrong, because it was written by a human, not a god.

    I'm very unimpressed with the whole "provable program" crowd for any but the most trivial proofs; decades of claims, little to show for it, and all based on the rather wrong idea that if we just push the need for perfection back one level, somehow the computer can magically take up the slack and prevent the bugs from ever existing. The unit-testing philosophy--that bugs will happen and it's more important to test and fix them, since complete prevention is impossible anyhow--is much more based in reality, and correspondingly works much better in the real world.

    (Provably correct computing is one of those branches of technology that may produce something useful, but is unlikely to ever achieve its stated "ultimate goals"; I class pure q-bit based quantum computing and pure biologically-grown computing in here as well.)
    • So what happens when someone is programming in some language/dev-system that won't permit non-correct program ta be wrotted? Eh?

      Also, your determination that grown biological computing is unlikely to ever achieve its ultimate goal, yeah, we prove that, don't we...

      : P

      ( of course, just precisely what our ultimate goal IS be open to debate...
      : )

      • by Jerf ( 17166 ) on Tuesday June 17, 2003 @10:55PM (#6229588) Journal
        So what happens when someone is programming in some language/dev-system that won't permit non-correct program ta be wrotted? Eh?

        Ironically, it's provable that no Turing Complete language can be limited to create only correct programs for any non-trivial definition of correctness. Computer science is full of such fun things.

        Proof: In your supposed language, there is a set of "correct" programs for some criterion, and a set of incorrect programs.

        (Sub-proof: The program "return 1" is only correct for the problem consisting of "Always return true" and isomorphic problems, and "return 0" is only correct for the problem consisting of "Always return false" and isomorphic problems. Thus, for any given correct behavior there exists at least one incorrect program in your language, OR your language contains only a trivial number of programs and as such is hardly deserving to be called a "language".)

        If there does not exist a Turing Machine (read: 'program') that can decide in finite time whether a given program meets a correctness criterion, then we can never decide whether a program is correct, because Turing Machines are the best computational tools we have. Therefore, suppose some Turing Machine M exists that accepts as input some suitable correctness criterion, and a program, and outputs whether that program meets the correctness criterion (in finite time).

        By Rice's theorem [wolfram.com], no such TM can exist. (Indeed, it's fairly trivial to show that any non-trivial definition of "correctness" must certainly include asserting that the program executes in finite time, and therefore a program that could check such a correctness criterion must be able to solve the Halting Problem, if such a machine existed.)
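        (For reference, the textbook form of the theorem being invoked, paraphrased rather than quoted: for every non-trivial semantic property P of the partial functions computed by Turing Machines, the set of machine descriptions whose computed function has property P is undecidable. "Non-trivial" means some computable function has P and some does not; "meets a given behavioural correctness criterion" is exactly such a property.)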

        Since any language that only allowed one to express correct concepts is isomorphic to a Turing Machine that correctly validates the existence in that language [netaxs.com], and no such TM can exist, no such language can exist, unless it is trivial and the correctness criterion is trivial. In which case, as I alluded to earlier, it's hardly worth calling a "language" in the computer science sense.

        In fact this sort of proof is one of the reasons I seriously wonder why so many people still seem to be pursuing this. (Fortunately, I see many signs that the field is waking up to this fact, and good, dare I say useful, research has been starting to get done in the past few years; perhaps the empirical successes of unit testing in software engineering have helped prompt this.)

        BTW, no offense, but I've been on Slashdot for a while, and I direct this at any replier; if you don't even know what Rice's Theorem is, please don't try to "debunk" this post. Many exceedingly smart people have dedicated their lives to computer science. I only wish I were smart enough to craft these tools, rather than simply use them. Rice's theorem has withstood attack since 1953. The odds of a Slashdot yahoo "debunking" this theorem are exceedingly low. (On the other hand, if you do understand what I was saying, I of course welcome correction and comment; still, this is a fairly straightforward application of Rice's theorem and I can't see what could possibly go wrong. It's a pretty simple theorem to apply; it's just the proof that's kinda hairy.)

        Also, your determination that grown biological computing is unlikely to ever achieve its ultimate goal, yeah, we prove that, don't we...

        I do not believe biological computing is impossible; we are an obvious counterexample. What I don't think is going to happen is that at any given point, the most powerful (man-manufactured) computing device that exists is biological, because biological systems by definition require a life support system to support them, which must consume space, power, and resources better spent directly on the actual computation by a non-livi
        • If there does not exist a Turing Machine (read: 'program') that can decide in finite time whether a given program meets a correctness criterion, then we can never decide whether a program is correct, because Turing Machines are the best computational tools we have. Therefore, suppose some Turing Machine M exists that accepts as input some suitable correctness criterion, and a program, and outputs whether that program meets the correctness criterion (in finite time).

          I believe that you are misunderstanding

          • A good example of such a program is any type checker. A specification like the following:

            f :: Int -> Char

            states that a function in a program takes an integer and returns a character. The type system should prove this. The thing is, this is in theory impossible by Rice's theorem. However, for a very large (and useful) class of programs it is quite doable. However, take the following pseudoprogram:

            f(x) = if 2 = 3 then return true else return 'a'

            This program would be rejected by any typechecker even though
        • Right.

          ( awesome reply, and thank you! )

          Firstly, I were thinking of Correct Constructs, and probably this is wholely rong, but my assumption was that a Program is made-up of components, each of which can be proved not-incorrect, so the whole thing feels, to me, fractal... ( perhaps there's blurring between the fractal-type reality and the recursive-type reality in my mind... ), so the components can be proven to be correct-constructs, and the whole can be proven too to be a not-incorrect-construct, NOT a

          • You are free to believe that the human mind has more power than a Turing Machine. I do too, in the end. (Many do not.)

            However, computers are definitely Turing Machines, and generally when one discusses "computability", we really are referring to computers, not Mind. Even if in the end the mind really is just a Turing Machine, we still do not have the constructs yet to deal with it directly (hence the whole field of Psychology). That's still the domain of philosophy.

            I'd also point out that the original con
        • In fact this sort of proof is one of the reasons I seriously wonder why so many people still seem to be pursuing this.

          Well, for starters, many systems (I don't know much about Z) don't work this way. They have the programmer write a proof with his code, and that proof system is such that the proofs are easy to validate. What is sacrificed is the ability to write some correct programs in ways that the proof system can't comprehend, but for the most part if you're trying to write clear, maintainable code, y
    • by saden1 ( 581102 ) on Tuesday June 17, 2003 @08:52PM (#6228643)
      Someone also needs to tell this crowd that some bugs are a result of the implementation itself: out-of-bounds errors, looking up non-existent resources, etc. Of course exception handling mechanisms should be used where appropriate, but that itself is prone to programming mistakes.

      Unit Testing is by far the best way to go. You find a problem, you fix it but at the same time make sure you don't introduce new problems. This might seem like a daunting task when dealing with a large system but it sure beats trying to strive for perfection all at once.

      Test, test, test I say.
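      A minimal sketch of that workflow using nothing but assert; the function under test (clamp) and its cases are invented for illustration, not anything from this thread. Each bug you find and fix gets pinned down by another assertion so it can't silently come back.

      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical function under test: clamp a value into [lo, hi]. */
      static int clamp(int v, int lo, int hi) {
          if (v < lo) return lo;
          if (v > hi) return hi;
          return v;
      }

      int main(void) {
          assert(clamp( 5, 0, 10) ==  5);   /* in range       */
          assert(clamp(-3, 0, 10) ==  0);   /* below range    */
          assert(clamp(42, 0, 10) == 10);   /* above range    */
          assert(clamp( 0, 0, 10) ==  0);   /* lower boundary */
          assert(clamp(10, 0, 10) == 10);   /* upper boundary */
          puts("all tests passed");
          return 0;
      }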
      • Formal specification helps you avoid that kind of thing in the first place though... if you do your spec right that is! It's still down to the programmer in the end, even an insanely high-level language like Haskell can't protect you from everything (even out-of-bounds errors, you can still try and request element 50 from a 20-element list if you like, and it's not going to like it).

        So although they can be very useful, you have to do the spec right in the first place. Which can be easier than getting it ri
      • But testing isn't enough, especially in a safety-critical area. How do you know that the tests are adequate? How do you know that the tests test what the customer is after? How do you know that the various unit tests imply that the system integration tests will work?

        Unit testing is good, but insufficient, and writing a test suite that covers everything to a level that ensures that integration goes smoothly and no faults occur in service is in reality *at least as hard* as using a formal development proc
    • In fact unit testing and formal methods could reinforce each other, but IMO the tools for this approach are missing.

      Say the system you want to test has an "input space" (all combinations of possible input parameters) and some output (a program that doesn't do anything, produce something, or have a side effect is useless). Unit testing is all about defining a series of "test cases". In each test case we

      • set up a precondition;
      • deliver an input for the function to perform;
      • then do some checks on the resu
  • About formal methods (Score:5, Informative)

    by sl956 ( 200477 ) * on Tuesday June 17, 2003 @08:10PM (#6228374)

    Your question is about Formal Methods. This field of computer science seems more popular in Europe than in the US these days.
    Two good articles you should read:
    • J. A. Hall, "Seven myths of formal methods", IEEE Software 7,5 (Sept. 1990), 11-19. ( quoted here [win.tue.nl])
    • J. P. Bowen & M. G. Hinchey, "Seven more myths of formal methods", IEEE Software 12,4 (July 1995), 34-41. ( quoted here [win.tue.nl])

    If your question was about their use in practical projects in industry, I've heard recently of teams doing serious work for Airbus (safety-critical embedded software using either B or Esterel) or for Gemplus (entirely proven JVM for javacards using B).
  • I vaguely remember that short of an exhaustive data set test, it's actually IMPOSSIBLE to determine mathematically that an algorithm is correct for any but the simplest (read: shortest) pieces of code.
    • No! (Score:3, Interesting)

      by Tom7 ( 102298 )
      I vaguely remember that short of an exhaustive data set test, it's actually IMPOSSIBLE to determine mathematically that an algorithm is correct for any but the simplest (read: shortest) pieces of code.

      This is total crap. You can, of course, prove algorithms of any size correct. Some really long (textually) algorithms can be easy to prove, and some really short ones, like this:

      /* terminate and return 1 for any positive input */
      int collatz(int i) {
      if (i & 1) return collatz (i * 3 + 1);
      else retur
      • ...hence the vaguely. But I take offense at your "total crap" assertion, especially because after you shot down my statement so categorically, you went ahead to say that actually, it's basically not all that far from the truth...

        I should have probably not used the word "impossible" since theoretically it may well be possible, but outside of academia and supercomputing centers, "takes 1 year at 10 TFlops" can be safely equated to "practically impossible".

        • by Tom7 ( 102298 )
          It's not a supercomputing thing, it's just tedious. Verification of a proof is usually really fast. The thing is, it's not "practically impossible" at all... in fact, it's often very easy. But it is something that has so little benefit over an informal proof that nobody does it for practical software. Sorry if I sounded harsh, but there are a lot of people posting to this story that totally don't understand the mathematics involved or even what they're trying to do, and that drives me nuts. ;)
      • Re:No! (Score:3, Insightful)

        by Jerf ( 17166 )
        No, you're wrong. Please see my post [slashdot.org] on this topic. It is impossible to prove all programs correct for any useful definition of "correct".

        There exist proof systems which you seem to be referring to that function by examining the complete state space of a given algorithm, and validate it against some criteria. This only works on algorithms that have a finite state space. Since Turing Machines are allowed to have arbitrarily long tapes, the state space for some algorithms can also be arbitrarily large, which
        • I'm not wrong. What the original poster said is that it's impossible to prove anything but the simplest algorithms correct. That's not true at all; people publish papers with complicated algorithms and proofs of their correctness every day. Those proofs of correctness can usually be concretized to the source code level, it's just painful. It is not impossible! (There is no limit to finite state spaces. Haven't you ever heard of induction??)

          I'm not talking about a computer program that examines raw source c
          • But there is certainly no theorem telling us that it's hopeless.

            Yes, there is. Rice's theorem proves it impossible in the general case. And in fact, the very fact that you can produce a proof of correctness tends to argue not that your proof is powerful, but that the algorithm, even if it looks fancy on paper, is trivial. The proof system is necessarily less powerful than the computational system; see Gödel's theorem.

            And all the such proofs I've seen, while very useful in the real world (it's vital to pr
            • Yes, there is. Rice's theorem proves it impossible in the general case.

              You are misapplying the theorem, or else don't understand what it says. There are systems where given a program, and a proof, verification of non-trivial properties (like termination) is decidable. Yes, there is no program that will generate that proof for you for arbitrary programs and properties. But that does not mean that a human (or even, sometimes, a computer) can't create a proof for a specific program and then verify that proof
            • Rice's theorem, roughly speaking, shows that you can't write a compiler that gives a compile error if your program computes square roots, but works fine for all other programs - here the criterion for "correctness" is "doesn't compute the square root of the input data".

              However, this doesn't rule out the correctness criterion "contains in its source a predicate calculus proof that it satisfies a property also specified in the source code".

              Why not? From a technical point of view, the reason Rice's theorem d

      • Re:No! (Score:3, Insightful)

        by Anonymous Coward
        uhm, how will that function ever exit?

        If this is true, call our function again.
        If not, call our function again.

        Ah crap, infinite recursion.
      • /* terminate and return 1 for any positive input */
        int collatz (int i) {
        if (i == 1) return 1;
        if (i & 1) return collatz (i * 3 + 1);
        else return collatz(i >> 1);
        }

        ... really, really hard to prove correct.



        it had better be hard to prove correct!

        called with the most negative integer (high bit set, all others zero) it will return 1.

        or did you not mean to specify what it would do with a negative input?
        • by Tom7 ( 102298 )
          (?)

          The specification is right in the comment before the code. It says that it returns 1 for any positive input.
  • ACL2 (Score:3, Interesting)

    by Ivan Raikov ( 521143 ) on Tuesday June 17, 2003 @08:31PM (#6228505) Homepage
    ACL2 [utexas.edu] is a theorem proving and modeling environment, based on Common Lisp, which is used for verification of hardware and software. I believe AMD used it to prove the correctness of the floating-point division algorithm in one of their processors.
  • by blackcoot ( 124938 ) on Tuesday June 17, 2003 @09:26PM (#6228852)
    both robust unit testing and formal methods have their place --- formal methods for proving design and unit testing for proving implementation. however, it seems to me that the question of why formal methods aren't more widespread in the us boils down to a question of perceived cost v. perceived benefit. let's say it takes "n-nines" of reliability to keep enough of your customers satisfied enough to keep coming back to you. let's say you've got two designs for software which accomplishes a certain task, and both models (if implemented correctly) would result in a product that the customer would be sufficiently happy with. which model do you implement? odds are that you'll go with the cheapest one, because that's what maximizes profits. (forget everything i just said if you are an academic or coding for the love of it). to borrow a line from an engineering prof: "perfection is the enemy of good enough, and it's good enough that sells"
    • Good point. I'd like to add one.

      In my experience - I've worked with a lot of people just coding for the love of it - most coders find specification very hard. The term "Formal Specification" suggests it is a solid way to define program semantics unambiguously. Also, since it is (or should be) a high level language, it should be pretty easy to express complex behaviour.

      In reality it is not. Writing a good specification - where good implies not just that it is consistent, but also that it covers the complet

  • Doesn't "proving" the perfection of a body of code mathematically sound like it would basically face the same barriers as encountered in the infamous halting problem? That's not to say it can't be done with sufficient resources for a sufficiently simple body of code, but I would think there would be no general solution that always works in reasonable time for validating real code.
    • Re:Code Proofs? (Score:3, Informative)

      by NTDaley ( 259087 )
      It's not a matter of chucking the specification at a piece of software and saying 'prove it for me'.

      There are points at which you have to help the software along, tell it which strategy to use, or give it a half-way point to prove first.

      If the assumptions being tested are provable, then it is valid to make those assumptions. If you can't prove them, then you don't really know that they are correct, and you shouldn't be making them.

      Some pieces of code it is possible to prove whether it halts or not. I


      • Well, yes but you'll notice the very limited data domains of your examples. Programs designed to output a fixed string "Hello World", or count the static range of 1-10, are pretty easy to solve the halting problem on.

        What about real input and output though? It becomes quite a task when the user can input (through a keyboard, hardware device, a config file, etc....) essentially unlimited amounts of input data. You can't conceive or test all possible input value permutations in reasonable time, and theref
  • by NTDaley ( 259087 ) on Tuesday June 17, 2003 @09:37PM (#6228898) Homepage
    Some people here don't seem to understand the usefulness of Z.

    When developing software the process might look something like this:
    ...
    Specification (created with or by client)
    Design
    Implementation
    Testing
    ...

    Typically a specification could be a great big document written in English. It could have ambiguities, it could have subtle contradictions, it could be missing vital information.

    If the specification is written in Z, then there are tools that let you:
    - Find ambiguities or contradictions.
    - Prove that some fact of the system necessarily follows from some other fact.
    - Prove whether assumptions are correct.
    - Test what results are possible given certain inputs and sequences of operations.

    There are processes for, step by step, turning a specification into something more like real code, while proving that the two forms mean the same thing (apart from whatever limits are introduced). So, for example the transformed specification might change a variable from being an integer to being an integer in the range 0..65535, and you can test what effects this has.
    All of this can be done before writing a single line of actual program.
    Yes mistakes are still possible, but if done right, they are significantly less likely; and unlikely to be due to a misunderstanding about the capabilities of some part of the system.

    Z can also help in the process of testing an implementation. From the specification you can see that, for example, the number 50 is the input value at which the behaviour of a function significantly changes, so you know to produce test sets checking the input of 50 and values on either side. This doesn't _have_ to be done by hand; my thesis supervisor is part of a project to automatically generate test sets from B and Z specifications.
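    As a toy illustration of that boundary-value idea (the fee function and the threshold of 50 below are made up, standing in for whatever the real specification says): the specification tells you exactly where behaviour changes, so the derived tests probe that value and its neighbours.

    #include <assert.h>

    /* Hypothetical implementation of a made-up spec:
       "the fee is 5 below 50, and waived at 50 and above". */
    static int fee(int amount) {
        return amount >= 50 ? 0 : 5;
    }

    int main(void) {
        /* Boundary identified in the specification: behaviour changes at 50. */
        assert(fee(49) == 5);
        assert(fee(50) == 0);
        assert(fee(51) == 0);
        return 0;
    }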

    So creating a Z specification:
    -means more work before coding starts (I think this is what makes most people uncomfortable with it :-)= ),
    -forces you to be very specific about what correct behaviour is,
    -allows you to test properties of the system before it is created,
    -can help to make more useful test sets.

    (Disclaimer: My Master's thesis involves making tools for Z, so I could be biased; but it also means I probably know more than average about Z).

    • So creating a Z specification:
      -forces you to be very specific about what correct behaviour is

      - Does not take into account changing requirements without rework of the initial model
      - Normally cannot factor change into a design in the way that stuff such as design patterns do

      Now yes, these are very different issues, but unless you are working on a system that derives some nontrivial benefit from being 'proven' (ie: nuclear, medical etc) then it only has limited scope.

      Perhaps the realities of most short-ti
    • So creating a Z specification:
      -means more work before coding starts
      -forces you to be very specific about what correct behaviour is


      Not just more work. Imagine trying to get a customer to agree on a Z specification. They'll probably say, "Z??? Isn't that some sort of science fiction mini-series?". Also, for many customers, not being specific is the only specific thing about them.

      This is where the "black art" of programming is most pronounced, because it often has more in common with building architecture
  • I assume this is supposed to be an 'Ask Slashdot' story.
  • obligatory quote (Score:5, Insightful)

    by Felipe Hoffa ( 141801 ) on Tuesday June 17, 2003 @10:26PM (#6229323) Homepage Journal
    ``Beware of bugs in the above code; I have only proved it correct, not tried it.''

    Donald E. Knuth [stanford.edu]

    Felipe Hoffa
  • language called Z, which was based on set theory.

    Relational is also based around set theory. I think relational is cool, especially if done right (the current vendors and languages need help IMO).

    Relational could also be made more friendly by creating operators and syntax that people can better relate with. The operators can be defined based on combinations of Dr Codd's original operators, but more "natural".

    There does not seem to be a lot of pressure to improve upon relational theory beyond the "Orac
  • by Beryllium Sphere(tm) ( 193358 ) on Wednesday June 18, 2003 @02:16AM (#6230863) Journal
    In IBM mainframe land, there's a program called IEFBR14 designed to do nothing (don't ask, you DON'T want to know).

    The usual story is that the first version had a bug report filed because, although it did indeed return, it didn't set the exit code.

    Next, according to the conventional version, the linkage editor gagged because the source ended with an END directive instead of END IEFBR14.

    With that fixed, the next bug report was that it didn't interoperate properly with the dump tools, in that it didn't stick the program name into a predictable area of the code.

    Then came the bug report about chaining register save areas (again, you don't want to know -- it's related to the fact that the machines didn't have a stack).

    Nor was that the end of it: a later bug report was that it wasn't being flagged as reentrant when it got linked.

    That's five bug reports against a null program. All of them were integration problems and/or omissions in the specification.

    There's a class of bugs formal methods can help with, but that class is a minority in most commercial work.
    • Parent should have been modded +5: *Damned* insightful. My friend is working on a tiny little program right now that has been changed completely several separate times within the same week. Formal analysis is *hard*, and takes a long time, even with tools. And once it's complete, a small change to the requirements can break the analysis utterly. With real-world, commercial development, requirements drift isn't just a possibility, it's a near-100% certainty. With the exception of large, safety-related p
  • I attended Aston University in the UK [ www.cs.aston.ac.uk ] and they are still providing a final year module 'Formal Software Development' based around the Z language.

    Perhaps this has something to do with the fact that the final year senior tutor has written a book on it - but that doesn't make it any less (or more) useful.

    see:
    http://www.cs.aston.ac.uk/phpmodules/displaysyllabusinternet.php?inputmod=CS3120

    for the lowdown. So in answer to your question - yes people are still using it, at least acad

    • And that final year course probably being the last time I encountered Z before reading this article today...

      Should also be noted that Aston (at least when I was there) also thought Ada and Modula-2 to be useful teaching languages.. what marvellously transferrable skills those are...

      t o b e

  • by Scorchio ( 177053 ) on Wednesday June 18, 2003 @04:09AM (#6231251)
    I studied for my CS degree at the University of York in the early 90's, who - at the time - were really pushing Z and Ada as the stuff of the future. One thing that stood out from the formal specification course, at least in my mind, was the example Z specification for a function to swap the values of two variables. Basically :

    swap(a,b) :
    tmp = a
    a = b
    b = tmp

    The pseudocode is simple enough, but the Z spec took up a whole page. This put people off - if the smallest function in a minor subsystem of an application takes a whole page and much head-scratching to figure out, how long will the rest of the software take? Surely it's so time consuming as to be infeasible for real world use. Hardly any on the course really "got it", and I was one of the many who didn't. A friend of mine went on to use Z in his final year project, and at some point *ting* - he got it and found that it was good. I've never seen or heard of Z being used anywhere since, but some day I want to have another look at my notes and text books to see if it makes any more sense yet, and if so, whether it can help my work.
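    For anyone who has never seen the notation, here is a rough plain-text impression of what such a schema looks like (illustrative only, not the York course material; real Z is typeset in schema boxes):

    --- Swap ----------------------
    | ΔState
    |------------------------------
    | a' = b
    | b' = a
    -------------------------------

    ΔState declares that the operation may change the enclosing state schema State (which itself declares a and b as integers); primed names are the values after the operation. The two primed equations are the whole "algorithm"; most of the page in a full treatment goes on declaring the state schema, the types, and the framing conditions that say nothing else changes.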

    I don't know if they still advocate Z, but they started teaching Eiffel instead of Ada the year after I left..

  • by Qoumran ( 42177 ) on Wednesday June 18, 2003 @06:28AM (#6231662) Homepage

    Great, I just passed an exam in this, but let's see if I still remember it...

    First we have to get the question right - are you talking about validation or verification of software? They are quite different from each other:

    • Validation: Are we building the right product?
    • Verification: Are we building the product right?

    The above is a quote of Boehm from 1979 I think... can't remember the exact title.

    As you can probably guess there is quite a difference between the two. Unit testing is a method for verification, but from your description Z sounds like validation to me.

  • by metamatic ( 202216 ) on Wednesday June 18, 2003 @09:15AM (#6232967) Homepage Journal
    Unless there's been some radical change in the state of the art that I've missed hearing about in the last few years, there's a big problem with Z notation and other methods of forming provably correct programs.

    Said killer problem is that all the popular programming languages have features which make it impossible to model them using denotational semantics--that is, to produce mathematical functions which represent the program behavior. For instance, no C-based language can be modeled.

    That's not to say that denotational semantics is completely useless. It's used in certain narrow domains, such as hardware--there are numerous chip designs where the circuitry and microcode is proven correct.

    However, there's no way you're going to see --check-correctness flags on gcc or javac any time soon.

    Of course, the bigger problem with mathematical techniques for proving correctness is that there's no measurable market demand for bug-free software (outside of the few narrow fields alluded to above), and hence no money available to spend on paying highly skilled computer scientists to spend a long time developing provably correct software using obscure programming languages. Instead, everyone just hacks together some alpha-quality code using C++ and calls it version 1.0.
    • Good point about market size.

      I actually use formal methods when proving that threads synchronize/don't deadlock: it's a dog to test such things and a bit of mathematics saves a lot of time. The rest of the time: JUnit and CPPUnit tests. The nice thing about tests is that you can automate regression testing - I've never heard of automated regression proofs...
  • The idea was that a software system could be defined in 'Z', and then that system could be mathematically proven to be 100% correct (in other words, zero defects).

    I've heard of a different idea. Once you define the problem in this language called "C", the software is automatically written by this device called a "compiler", and it is already proven to be 100% correct.

  • Bart Massey and Jamey Sharp of Portland State have written XCB, a reimplementation of Xlib with an eye towards improved tunability (and an improved API). They were having problems coming up with an implementation that worked properly; the multi-threaded connection handling code was up to four designs and two implementations, all of them incorrect.

    Massey and Robert Bauer of Rational ended up writing a Z specification for that portion of the code, and proved it correct. The C implementation of the spec is in the XCB distribution right now.

    Massey and Bauer published a paper describing this effort for USENIX 2002. They conclude with the comment "[t]he sort of lightweight modeling described [above], that isolates a troublesome portion of the system for more detailed study, is especially appropriate[...]".

    XCB Wiki: http://xcb.wiki.cs.pdx.edu/
    XCB/Z Paper: http://xcb.wiki.cs.pdx.edu/pub/XCB/WebHome/usenix-zxcb.pdf
  • I finished a class recently that concentrated on using Z for formal modeling. The two professors published a reasonably interesting paper (for Z) on a thread safe X Windows C library binding to the X protocol.

    X meets Z: Verifying correctness in the presence of POSIX Threads [pdx.edu]
  • by Anonymous Coward
    The Z homepage is a frigging disaster.
    What, pray tell, does Z code look like?
  • by Chalst ( 57653 ) on Wednesday June 18, 2003 @03:58PM (#6236874) Homepage Journal
    In the short term I'm not persuaded by the case for widespread application of formal methods, but in the long run I think it will become very important. Two areas I think would benefit most from formal methods are verified implementations of network protocols, and journalling file systems.
  • I don't mean to be glib by making such a statement as "software is still voodoo", but after reading this thread, it is quite obvious I think that clear understanding of our art is achieved by far too few practitioners, and that is the crux of the problem of software quality.

    While provably correct solutions are presently unrealistic, we have myriads of tools to assist in the process of creating high quality software. Specification tools like Z, modelling languages, coverage analysis and profiling software,
  • by Jon Jacky ( 682950 ) on Thursday June 19, 2003 @10:07PM (#6250294)

    I used Z to design the control software for a radiation therapy machine.

    I expressed the safety requirements in Z, and also expressed the detailed design in Z, and argued (I wouldn't say "proved") that the design didn't violate the safety requirements.

    We coded the control program in C from the detailed design in Z. Of course we tested the program thoroughly too - you still have to do all that, of course.

    The machine has been in use treating patients since July 1999. We haven't killed anyone yet (unlike some other products in this category).

    There are some papers and reports about the project at

    http://staff.washington.edu/~jon/cnts/index.html

    There is a short (3-page) report about our use of Z in this project at

    http://staff.washington.edu/~jon/cnts/iccr.html

    There is a longer report about the whole project at

    http://staff.washington.edu/~jon/cnts/cnts-report.html

    The Z for the radiation therapy machine itself is at

    http://staff.washington.edu/~jon/z/machine.html

  • Imagine this situation: You have an algorithm which can, given a program as input, determine whether that program will function properly. The way to break it is simple. Write a program that performs tasks as the following pseudocode dictates:

    Specification: This program will terminate (i.e. finish executing).

    Run my own code through the 'properly functions' algorithm
    Does it say I'll terminate?
    If so, loop forever and never terminate.
    Otherwise (it says I'll never terminate) terminate immediately.

    It is impossibl
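    A minimal C rendering of that pseudocode, with a stub standing in for the impossible "properly functions" oracle (no real implementation of it can exist, which is exactly the point: whatever fixed answer the stub returns, the program below makes that answer wrong).

    #include <stdio.h>

    /* Stand-in for the impossible oracle: "does this very program terminate?" */
    static int says_i_halt(void) {
        return 1;                      /* any fixed answer it gives is wrong */
    }

    int main(void) {
        if (says_i_halt()) {
            puts("oracle says I halt, so I will loop forever");
            for (;;) { }               /* ...contradicting the oracle */
        }
        puts("oracle says I loop, so I halt immediately");  /* also a contradiction */
        return 0;
    }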

    • Good grief, smart people say some stupid things.

      Z doesn't have to prove all programs (including the one you describe) correct in order to be useful. It only has to prove my program correct to be useful to me.

    • The halting problem does not apply in program verification. It's easy enough to produce programs whose correctness is ambiguous, but such programs are broken.

      Incidentally, the halting problem is decidable when memory is finite. Either the program halts or it eventually repeats a previous state. This is not enormously useful, because you can write a program with a huge state space, but it answers the formal objection.
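      A toy sketch of that observation, with a made-up one-byte "machine": only 256 distinct states exist, so simulating 257 steps either sees the halt state or guarantees some state was revisited, after which the deterministic machine can only cycle and never halt.

      #include <stdio.h>

      /* Made-up transition function over a 256-value state; state 0 means "halted". */
      static unsigned char step(unsigned char s) {
          return (unsigned char)(s * 5u + 17u);
      }

      int main(void) {
          unsigned char s = 1;
          /* 257 checks cover more states than exist, so a repeat is forced. */
          for (int i = 0; i <= 256; i++) {
              if (s == 0) { puts("halts"); return 0; }
              s = step(s);
          }
          puts("never halts: a state repeated, so it cycles forever");
          return 0;
      }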

  • Ok, so your software will work to the design, but who's going to design perfect software?
  • I looked at Z for a while, but I had a problem with how closely it resembled mathematics. I thought about the moment when I would present a Z specification to a typical client and could not imagine them using it. I would think that they would always go for the text description.
    After all, it's the end user who is supposed to approve what you are doing and the typical user is probably someone who avoided algebra in high-school at all costs. Their eyes would glaze over at any mathematics at all. Only the pro
  • Zero defect development is tough to introduce to a development environment because it is so different from what we accept as development. I'm going to make a distinction here between Z, a formal specification language, and 'zero defect' development, by which I believe the author means 'clean room development'. Using a formal specification like Z, a specification can be written for a program that can be proven to be correct. It's difficult to define 'correct', but essentially the program can be shown not to
    • It seems the weak link in this zero-defect cause is the specification. Zero-defect software becomes unattainable without precise specifications.

      Zero-defect software should probably be approached from the viewpoint of coarse-grained systems. Break the problem down into smaller problems and attempt to prove each smaller system is correct. As smaller systems are combined to build bigger systems, the zero-defect approach shifts to proving the integration of smaller systems is correct.

      I offer two suggestion
