Are You Using Z-Notation to Validate Your Software?
ghopson asks: "Many years ago, in the early nineties, I became aware of a software definition language called Z, which was based on set theory. The idea was that a software system could be defined in 'Z', and then that system could be mathematically proven to be 100% correct (in other words, zero defects). What is the current state of play with 'zero defect' software development? I know Z still exists, and that there is a software development method (the B-method) associated with it. Have techniques and advances such as MDA advanced zero-defect software development? Or would a set of robust unit tests suffice?"
sure.. (Score:5, Funny)
so I designed a system that would convert C code into Z code. unfortunately I never did get around to fully debugging it, so I decided to write a theorem prover that would work on straight C code.
this one worked great, but one day it didn't catch a bug in some code that I *knew* was buggy.
so I decided to write another theorem prover to parse the code of the first one. I figured if I fed the code of each one into the other, that would greatly reduce the chances of error. Well, the second one crashed at startup and the first one said it was correct.
turned out to be a bug in the OS. So I fed the OS source code into the first one, and it found some defects. I fixed the defects but then the system wouldn't boot any more. I think it was actually a bug in the C compiler, and I was going to fix it for real this time, but it was getting late.
so I decided to just try to write clear, simple code and fix bugs as they are reported.
Re:sure.. (Score:2)
what does this buy me? (Score:2)
In other words, I can prove that if my software is 100% bug-free, it's bug-free. Woohoo. This doesn't address the case where my program doesn't fully conform to the Z specification, does it?
Re:what does this buy me? (Score:4, Interesting)
In formal verification, the idea is that you would have a way to prove the equivalence between your Z model (or whatever modeling language you use) and your implementation. Of course, it's a lot more difficult to prove equivalence between a formal model and a C implementation than between a model and an implementation in a modern high-level language like Standard ML.
Re:what does this buy me? (Score:4, Informative)
For many real-world problems, figuring out *exactly* what you want your program to do in all situations (and figuring out what all the situations are in the first place) is the hard part. You can then go implement that in C almost as easily as you can in Z.
I'm not saying this (Z and B) is useless. I just want to focus on the real problem. Using a modeling language just introduces one additional layer of abstraction into the process. While this may make the system easier to discuss and easier to "prove correct", it also introduces another layer at which errors may creep into the system. For some cases, the extra layer introduces fewer errors than it prevents. For other cases, the opposite is true.
I think to some extent it comes down to "how much time are we going to spend making sure this software is darn near perfect." The law of diminishing returns says that the number of flaws fixed per unit of programmer time will go down as more programmer time is applied, providing a "cost to flaw" curve. Customers also have a curve of how much they are willing to pay for additional reliability. The curves meet at some price point/quality level (keeping features constant), and that is where the product will ship. If the customer's need for extra reliability is large enough that the modeling phase can be integrated properly into the product schedule, it becomes worthwhile to spend time with formal models; otherwise, these will only decrease reliability since the modeling phase eats into the time needed for other phases to properly implement the product.
In other words, I think that a formal modeling phase is overkill for most requirements ("doesn't crash too often") since it has less bang for the buck as far as generating features and fixing glaring bugs. But it continues to have at least some bang for the buck long after other processes drop off in effectiveness. So it is worthwhile to add formal modeling when unit testing isn't going to get you to your desired flaw rate.
Re:what does this buy me? (Score:4, Insightful)
Z is an abstraction layer in the same way any specification is an abstraction layer.
You don't really ever want to go straight from what the user wants to C. Instead you have a design and specification. Then you validate that specification against what the user wants, code the C, and validate the C against the specification.
This is done because it is easier to validate C against a written specification than against some idea in the user's head.
In the same way, in some cases it is easier to validate the specification against Z and then validate the Z against the C code than to validate the specification against the C directly.
I can't help but paraphrase Knuth's most famous phrase: Beware of this code for I have only proved it to be correct, not tested it.
or something like that
Re:what does this buy me? (Score:1)
http://www-cs-faculty.stanford.edu/~knuth/faq.htm
Re:what does this buy me? (Score:1)
That won't always specify the right program for the application. In one project I was assigned to code a user interface from a nicely detailed specification. When the project team then sent me to test it at the customer site there were many complaints from the users. It was obvious to me that the specification did not match how the users actually did their work.
Apparently some group had designed the user interface to have the computer tell th
Re:what does this buy me? (Score:1)
Ahem (Score:2, Funny)
Through comprehensive hyping and brainwashing (err, marketing) you could get a sizable amount of money out of selling a single turd. Ever heard of Merda d'Artista [google.com]? It literally translates to "Artist's shit" and it IS shit. Very old shit but nothing more than human faeces. And yet, we are supposed to believe that it is
we're on slashdot, not k5 (Score:1)
Spell checking? (Score:5, Funny)
Perhaps they could include spell checking?
Re:Spell checking? (Score:2)
What industry?
In practice, just getting the requirements right is more important. The better the problem domain is understood, the fewer errors--real or perceived--will be present in the implemented system.
Also, I know of no programming language that fully prevents the possibility of side-effects of new code. C allows some pointer arithmetic to totally destroy basically anything in the program. A Java "null pointer exception
Re:Spell checking? (Score:3, Interesting)
C is not a language for the timid :) Case in point: a really interesting hole in the C/C++ spec that nails programmers is the interaction between side effects and sequence points. Long story short, it leaves statements such as Fx(y++, z=y) undefined. The compiler only guarantees that side effects will have been completed at the next sequence point, so it could evaluate the arguments in either order and be in complete compliance.
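To make that concrete, here's a small C sketch (my example, not the parent's; fx and the variable names are made up):

#include <stdio.h>

static void fx(int a, int b) { printf("%d %d\n", a, b); }

int main(void) {
    int y = 1, z = 0;
    /* y is modified and read between sequence points with no ordering
       guarantee, so this call is undefined behaviour; depending on which
       argument the compiler evaluates first, z could end up 1 or 2. */
    fx(y++, z = y);
    return 0;
}

Both readings (and anything else) are "in compliance", which is exactly the trap.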
Re:Spell checking? (Score:2)
The "language" itself looks like a cross between boolean algebra, set theory and relational algebras (which came from set theory). Using a truely functional language w
Unit tests seem to be the way to go (Score:5, Insightful)
Provable programs, on the other hand, are typically so complicated for a real-world program that your provably bug-free program still has bugs, because it turns out your model is wrong, because it was written by a human, not a god.
I'm very unimpressed with the whole "provable program" crowd for any but the most trivial proofs; decades of claims, little to show for it, and all based on the rather wrong idea that if we just push the need for perfection back one level, somehow the computer can magically take up the slack and prevent the bugs from ever existing. The unit-testing philosophy - that bugs will happen, and it's more important to test for and fix them, since complete prevention is impossible anyhow - is much more based in reality, and correspondingly works much better in the real world.
(Provably correct computing is one of those branches of technology that may produce something useful, but is unlikely to ever achieve its stated "ultimate goals"; I class pure qubit-based quantum computing and pure biologically-grown computing in here as well.)
Re:Unit tests seem to be the way to go (Score:1)
So what happens when someone is programming in some language/dev-system that won't permit non-correct program ta be wrotted? Eh?
Also, your determination that grown biological computing is unlikely to ever achieve its ultimate goal, yeah, we prove that, don't we...
: P
( of course, just precisely what our ultimate goal IS be open to debate...
: )
Re:Unit tests seem to be the way to go (Score:5, Informative)
Ironically, it's provable that no Turing Complete language can be limited to create only correct programs for any non-trivial definition of correctness. Computer science is full of such fun things.
Proof: In your supposed language, there is a set of "correct" programs for some criterion, and a set of incorrect programs.
(Sub-proof: The program "return 1" is only correct for the problem consisting of "Always return true" and isomorphic problems, and "return 0" is only correct for the problem consisting of "Always return false" and isomorphic problems. Thus, for any given correct behavior there exists at least one incorrect program in your language, OR your language contains only a trivial number of programs and as such is hardly deserving to be called a "language".)
If there does not exist a Turing Machine (read: 'program') that can decide in finite time whether a given program meets a correctness criterion, then we can never decide whether a program is correct, because Turing Machines are the best computational tools we have. Therefore, suppose some Turing Machine M exists that accepts as input some suitable correctness criterion, and a program, and outputs whether that program meets the correctness criterion (in finite time).
By Rice's theorem [wolfram.com], no such machine can exist. (Indeed, it's fairly trivial to show that any non-trivial definition of "correctness" must include asserting that the program executes in finite time, and therefore a program that could check such a correctness criterion would have to solve the Halting Problem.)
Since any language that only allows one to express correct programs is isomorphic to a Turing Machine that correctly decides membership in that language [netaxs.com], and no such TM can exist, no such language can exist, unless both the language and the correctness criterion are trivial. In which case, as I alluded to earlier, it's hardly worth calling a "language" in the computer science sense.
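For anyone who wants the halting-problem step spelled out, here is the usual reduction as a C sketch (my illustration; suspect_program and the "returns 42" property are invented):

/* Suppose a checker could decide, for any program, the correctness
   property "always returns 42". Then for an arbitrary program P we could
   build the wrapper below: it satisfies the property exactly when P halts,
   so the checker would decide the halting problem - contradiction. */

static void suspect_program(void) {
    /* stand-in for an arbitrary program P; may or may not terminate */
}

static int wrapper(void) {
    suspect_program();   /* run P */
    return 42;           /* reached if and only if P halts */
}

int main(void) {
    /* The hypothetical checker is never written here; by the argument
       above, it cannot exist. */
    return wrapper() == 42 ? 0 : 1;
}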
In fact this sort of proof is one of the reasons I seriously wonder why so many people still seem to be pursuing this. (Fortunately, I see many signs that the field is waking up to this fact, and good, dare I say useful, research has been getting done in the past few years; perhaps the empirical successes of unit testing in software engineering have helped prompt this.)
BTW, no offense, but I've been on Slashdot for a while, and I direct this at any replier: if you don't even know what Rice's Theorem is, please don't try to "debunk" this post. Many exceedingly smart people have dedicated their lives to computer science. I only wish I were smart enough to craft these tools, rather than simply use them. Rice's theorem has withstood their attacks since 1953. The odds of a Slashdot yahoo "debunking" this theorem are exceedingly low. (On the other hand, if you do understand what I was saying, I of course welcome correction and comment; still, this is a fairly straightforward application of Rice's theorem and I can't see what could possibly go wrong. It's a pretty simple theorem to apply; it's just the proof that's kinda hairy.)
Also, your determination that grown biological computing is unlikely to ever achieve its ultimate goal, yeah, we prove that, don't we...
I do not believe biological computing is impossible; we are an obvious counterexample. What I don't think is going to happen is that at any given point, the most powerful (man-manufactured) computing device that exists is biological, because biological systems by definition require a life support system to support them, which must consume space, power, and resources better spent directly on the actual computation by a non-livi
Re:Unit tests seem to be the way to go (Score:3, Insightful)
I believe that you are misunderstanding
Re:Unit tests seem to be the way to go (Score:2)
Say the type of f states that a function in a program takes an integer and returns a character. The type system should prove this. The thing is, this is in theory impossible by Rice's theorem. However, for a very large (and useful) class of programs this is quite doable. Now take the following pseudoprogram:
f(x) = if 2 = 3 then return true else return 'a'
This program would be rejected by any typechecker even though
Re:Unit tests seem to be the way to go (Score:2)
Right.
( awesome reply, and thank you! )
Firstly, I were thinking of Correct Constructs, and probably this is wholely rong, but my assumption was that a Program is made-up of components, each of which can be proved not-incorrect, so the whole thing feels, to me, fractal... ( perhaps there's blurring between the fractal-type reality and the recursive-type reality in my mind... ), so the components can be proven to be correct-constructs, and the whole can be proven too to be a not-incorrect-construct, NOT a
Re:Unit tests seem to be the way to go (Score:2)
However, computers are definitely Turing Machines, and generally when one discusses "computability", we really are referring to computers, not Mind. Even if in the end the mind really is just a Turing Machine, we still do not have the constructs yet to deal with it directly (hence the whole field of Psychology). That's still the domain of philosophy.
I'd also point out that the original con
Re:Unit tests seem to be the way to go (Score:3, Insightful)
Well, for starters, many systems (I don't know much about Z) don't work this way. They have the programmer write a proof with his code, and that proof system is such that the proofs are easy to validate. What is sacrificed is the ability to write some correct programs in ways that the proof system can't comprehend, but for the most part if you're trying to write clear, maintainable code, y
Re:Unit tests seem to be the way to go (Score:4, Interesting)
Unit Testing is by far the best way to go. You find a problem, you fix it but at the same time make sure you don't introduce new problems. This might seem like a daunting task when dealing with a large system but it sure beats trying to strive for perfection all at once.
Test, test, test I say.
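In case it helps anyone, a minimal sketch of that fix-and-guard workflow in C (parse_port and its rules are invented for illustration):

#include <assert.h>
#include <stdio.h>

/* Function under test; each fixed bug leaves a test behind so it can't
   quietly come back. */
static int parse_port(const char *s) {
    int value = 0;
    if (!s || !*s) return -1;                 /* the empty-input bug, now fixed */
    for (; *s; ++s) {
        if (*s < '0' || *s > '9') return -1;  /* reject non-digits */
        value = value * 10 + (*s - '0');
        if (value > 65535) return -1;         /* out of range */
    }
    return value;
}

int main(void) {
    assert(parse_port("80") == 80);      /* normal case */
    assert(parse_port("") == -1);        /* regression test for the old bug */
    assert(parse_port("99999") == -1);   /* out of range */
    assert(parse_port("8x0") == -1);     /* junk input */
    puts("all tests passed");
    return 0;
}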
Re:Unit tests seem to be the way to go (Score:1)
So although they can be very useful, you have to do the spec right in the first place. Which can be easier than getting it ri
Re:Unit tests seem to be the way to go (Score:1)
Unit testing is good, but insufficient, and writing a test suite that covers everything to a level that ensures that integration goes smoothly and no faults occur in service is in reality *at least as hard* as using a formal development proc
relation between formal spec and unit testing (Score:1)
Say the system you want to test has an "input space" (all combinations of possible input parameters) and some output (a program that doesn't produce anything or have a side effect is useless). Unit testing is all about defining a series of "test cases". In each test case we
About formal methods (Score:5, Informative)
Your question is about Formal Methods. This field of computer science seems more popular in Europe than in the US these days.
Two good articles you should read:
If your question was about their use in practical projects in industry, I've heard recently of teams doing serious work for Airbus (safety-critical embedded software using either B or Esterel) or for Gemplus (entirely proven JVM for javacards using B).
Re:About formal methods (Score:2)
impossible ? (Score:1)
No! (Score:3, Interesting)
This is total crap. You can, of course, prove algorithms of any size correct. Some really long (textually) algorithms can be easy to prove, and some really short ones, like this:
/* returns 1 for any positive input i */
int collatz(int i) {
    if (i == 1) return 1;
    if (i & 1) return collatz(i * 3 + 1);
    else return collatz(i >> 1);
}
Re:No! (Score:2)
I should have probably not used the word "impossible" since theoretically it may well be possible, but outside of academia and supercomputing centers, "takes 1 year at 10 TFlops" can be safely equated to "practically impossible".
Re:No! (Score:1)
Re:No! (Score:3, Insightful)
There exist proof systems which you seem to be referring to that function by examining the complete state space of a given algorithm, and validate it against some criteria. This only works on algorithms that have a finite state space. Since Turing Machines are allowed to have arbitrarily long tapes, the state space for some algorithms can also be arbitrarily large, which
That's not the kind of proof I'm talking about (Score:2)
I'm not talking about a computer program that examines raw source c
Re:That's not the kind of proof I'm talking about (Score:3, Interesting)
Yes, there is. Rice's theorem proves it impossible in the general case. And in fact, the very ability to produce a proof of correctness tends to argue not that your proof system is powerful, but that the algorithm, even if it looks fancy on paper, is trivial. The proof system is necessarily less powerful than the computational system; see Gödel's theorem.
And all the such proofs I've seen, while very useful in the real world (it's vital to pr
Rice's theorem is not relevant to this (Score:3, Interesting)
You are misapplying the theorem, or else don't understand what it says. There are systems where, given a program and a proof, verification of non-trivial properties (like termination) is decidable. Yes, there is no program that will generate that proof for you for arbitrary programs and properties. But that does not mean that a human (or even, sometimes, a computer) can't create a proof for a specific program and then verify that proof.
Re:That's not the kind of proof I'm talking about (Score:1)
Rice's theorem, roughly speaking, shows that you can't write a compiler that gives a compile error if your program computes square roots, but works fine for all other programs - here the criterion for "correctness" is "doesn't compute the square root of the input data".
However, this doesn't rule out the correctness criterion "contains in its source a predicate calculus proof that it satisfies a property also specified in the source code".
Why not? From a technical point of view, the reason Rice's theorem d
Re:No! (Score:3, Insightful)
If this is true, call our function again.
If not, call our function again.
Ah crap, infinite recursion.
Re:No! (Score:1)
Re:No! (Score:2)
int collatz(int i) {
    if (i == 1) return 1;
    if (i & 1) return collatz(i * 3 + 1);
    else return collatz(i >> 1);
}
it had better be hard to prove correct!
called with the most negative integer (high bit set, all others zero) it will return 1.
or did you not mean to specify what it would do with a negative input?
Re:No! (Score:1)
The specification is right in the comment before the code. It says that it returns 1 for any positive input.
ACL2 (Score:3, Interesting)
perceived cost v. perceived benefit (Score:3, Interesting)
Re:perceived cost v. perceived benefit (Score:2)
In my experience - I've worked with a lot of people just coding for the love of it - most coders find specification very hard. The term "Formal Specification" suggests it is a solid way to define program semantics unambiguously. Also, since it is (or should be) a high level language, it should be pretty easy to express complex behaviour.
In reality it is not. Writing a good specification - where good implies not just that it is consistent, but also that it covers the complet
Code Proofs? (Score:2)
Doesn't "proving" the perfection of a body of code mathematically sound like it would basically face the same barriers as encountered in the infamous halting problem? That's not to say it can't be done with sufficient resources for a sufficiently simple body of code, but I would think there would be no general solution that always works in reasonable time for validating real code.
Re:Code Proofs? (Score:3, Informative)
There are points at which you have to help the software along, tell it which strategy to use, or give it a half-way point to prove first.
If the assumptions being tested are provable, then it is valid to make those assumptions. If you can't prove them, then you don't really know that they are correct, and you shouldn't be making them.
Some pieces of code it is possible to prove whether it halts or not. I
Re:Code Proofs? (Score:2)
Well, yes, but you'll notice the very limited data domains of your examples. Programs designed to output a fixed string like "Hello World", or to count over a static range of 1-10, are pretty easy to solve the halting problem for.
What about real input and output though? It becomes quite a task when the user can input (through a keyboard, hardware device, a config file, etc....) essentially unlimited amounts of input data. You can't conceive or test all possible input value permutations in reasonable time, and theref
Re:Code Proofs? (Score:2)
When you use these "properties", will you still catch all the bugs you would catch by testing all input sequences, though? Perhaps because of a couple of small, subtle flaws in a few independent pieces of your code that are affected by the same input data stream, everything works fine until a config file contains the sequence " ]D[", at which point your code goes into an infinite loop and never comes back. These things do happen (although typically there's something more obvious involved, like terminal bytes an
What is possible with Z (Score:5, Informative)
When developing software the process might look something like this:
Specification (created with or by client)
Design
Implementation
Testing
Typically a specification could be a great big document written in English. It could have ambiguities, it could have subtle contradictions, it could be missing vital information.
If the specification is written in Z, then there are tools that let you:
- Find ambiguities or contradictions.
- Prove that some fact of the system necessarily follows from some other fact.
- Prove whether assumptions are correct.
- Test what results are possible given certain inputs and sequences of operations.
There are processes for, step by step, turning a specification into something more like real code, while proving that the two forms mean the same thing (apart from whatever limits are introduced). So, for example the transformed specification might change a variable from being an integer to being an integer in the range 0..65535, and you can test what effects this has.
All of this can be done before writing a single line of actual program.
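For readers who have never seen Z, here is a tiny invented example (mine, not from the parent post), in the sort of ASCII rendering used later in this thread: a state schema using the refined 0..65535 range mentioned above, plus one operation on it.

Counter
  value : 0 .. 65535

Increment
  delta Counter
--
  value < 65535
  value' = value + 1

A tool can then check facts such as "Increment can never take value outside its declared range" before any implementation code exists.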
Yes mistakes are still possible, but if done right, they are significantly less likely; and unlikely to be due to a misunderstanding about the capabilities of some part of the system.
Z can also help in the process of testing an implementation. From the specification you can see that, for example, the number 50 is the input value at which the behaviour of a function significantly changes, so you know to produce test sets checking the input of 50 and values on either side. This doesn't _have_ to be done by hand; my thesis supervisor is part of a project to automatically generate test sets from B and Z specifications.
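As a sketch of what such generated tests look like in C (the shipping_cost function and the rule at 50 are made up to mirror the example above):

#include <assert.h>

/* Hypothetical function whose specified behaviour changes at input 50. */
static int shipping_cost(int items) {
    return (items >= 50) ? 0 : 5;   /* free shipping from 50 items up */
}

int main(void) {
    /* Boundary-value tests derived from the specification:
       the boundary itself and the values on either side. */
    assert(shipping_cost(49) == 5);
    assert(shipping_cost(50) == 0);
    assert(shipping_cost(51) == 0);
    return 0;
}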
So creating a Z specification:
-mean more work before coding starts (I think this is what makes most people uncomfortable with it),
-forces you to be very specific about what correct behaviour is,
-allows you to test properties of the system before it is created,
-can help to make more useful test sets.
(Disclaimer: My Master's thesis involves making tools for Z, so I could be biased; but it also means I probably know more than average about Z).
Re:What is possible with Z (Score:1)
So creating a Z specification:
-forces you to be very specific about what correct behaviour is
- Does not take into account changing requirements without rework of the initial model
- Normally cannot factor change into a design in the way that stuff such as design patterns do
Now yes, these are very different issues, but unless you are working on a system that derives some nontrivial benefit from being 'proven' (ie: nuclear, medical etc) then it only has limited scope.
Perhaps the realities of most short-ti
Re:What is possible with Z (Score:2)
-mean more work before coding starts
-forces you to be very specific about what correct behaviour is
Not just more work. Imagine trying to get a customer to agree on a Z specification. They'll probably say, "Z??? Isn't that some sort of science fiction mini-series?". Also, for many customers, not being specific is the only specific thing about them.
This is where the "black art" of programming is most pronounced, because it often has more in common with building architecture
Ask Slashdot? (Score:1)
obligatory quote (Score:5, Insightful)
"Beware of bugs in the above code; I have only proved it correct, not tried it."
Donald E. Knuth [stanford.edu]
Felipe Hoffa
set theory (Score:1)
Relational is also based around set theory. I think relational is cool, especially if done right (the current vendors and languages need help IMO).
Relational could also be made more friendly by creating operators and syntax that people can better relate with. The operators can be defined based on combinations of Dr Codd's original operators, but more "natural".
There does not seem to be a lot of pressure to improve upon relational theory beyond the "Orac
Requirements drift will kill you (Score:4, Insightful)
The classic example is IEFBR14, IBM's canonical do-nothing utility program - originally a single instruction, a branch to register 14, i.e. an immediate return. The usual story is that the first version had a bug report filed because, although it did indeed return, it didn't set the exit code.
Next, according to the conventional version, the linkage editor gagged because the source ended with an END directive instead of END IEFBR14.
With that fixed, the next bug report was that it didn't interoperate properly with the dump tools, in that it didn't stick the program name into a predictable area of the code.
Then came the bug report about chaining register save areas (again, you don't want to know -- it's related to the fact that the machines didn't have a stack).
Nor was that the end of it, a later bug report was that it wasn't being flagged as reentrant when it got linked.
That's five bug reports against a null program. All of them were integration problems and/or omissions in the specification.
There's a class of bugs formal methods can help with, but that class is a minority in most commercial work.
Re:Requirements drift will kill you (Score:3, Insightful)
they're teaching it here... (Score:1)
I attended Aston University in the UK [ www.cs.aston.ac.uk ] and they are still providing a final year module 'Formal Software Development' based around the Z language.
Perhaps this has something to do with the fact that the final year senior tutor has written a book on it - but that doesn't make it any less (or more) useful.
see:
http://www.cs.aston.ac.uk/phpmodules/displaysyllabusinternet.php?inputmod=CS3120
for the lowdown. So in answer to your question - yes people are still using it, at least acad
Re:they're teaching it here... (Score:1)
Should also be noted that Aston (at least when I was there) also thought Ada and Modula-2 to be useful teaching languages... what marvellously transferable skills those are...
Z.. haven't heard of that in a looong time (Score:4, Insightful)
swap(a, b)
    tmp = a
    a = b
    b = tmp
The pseudocode is simple enough, but the Z spec took up a whole page. This put people off - if the smallest function in a minor subsystem of an application takes a whole page and much head-scratching to figure out, how long will the rest of the software take? Surely it's so time consuming as to be infeasible for real world use. Hardly any on the course really "got it", and I was one of the many who didn't. A friend of mine went on to use Z in his final year project, and at some point *ting* - he got it and found that it was good. I've never seen or heard of Z being used anywhere since, but some day I want to have another look at my notes and text books to see if it makes any more sense yet, and if so, whether it can help my work.
I don't know if they still advocate Z, but they started teaching Eiffel instead of Ada the year after I left..
Re:Z.. haven't heard of that in a looong time (Score:1)
Re:Z.. haven't heard of that in a looong time (Score:2)
> took up a whole page.
Who wrote the spec?
If a and b are part of your system state-schema then a swap operation is just:
Swap
  delta System
--
  theta System adres {a,b} = theta System' adres {a,b}
  a' = b
  b' = a
Short and not too complex. (The "theta System adres" bit is basically just saying "this operation doesn't change anything other than a and b in the system"; if I knew exactly what your System schema was I wouldn't have even needed to write that. And
Re:Z.. haven't heard of that in a looong time (Score:2)
a^=b;
b^=a;
a^=b;
Swaps scalars without using an extra variable.
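One caveat worth adding (mine, not the parent's): the XOR trick silently fails when the two lvalues alias the same object, which is one reason the boring temporary variable usually wins. A small C sketch:

#include <stdio.h>

static void xor_swap(int *a, int *b) {
    *a ^= *b;
    *b ^= *a;
    *a ^= *b;
}

int main(void) {
    int x = 3, y = 5;
    xor_swap(&x, &y);
    printf("%d %d\n", x, y);   /* 5 3 - works as advertised */

    xor_swap(&x, &x);          /* aliasing: x ^ x == 0, the value is lost */
    printf("%d\n", x);         /* 0, not 5 */
    return 0;
}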
Re:Z.. haven't heard of that in a looong time (Score:2)
Validation vs. verification (Score:3, Insightful)
Great, I just passed an exam in this, but let's see if I still remember it...
First we have to get the question right - are you talking about validation or verification of software? They are quite different from each other:
Verification: "Are we building the product right?"
Validation: "Are we building the right product?"
The above is a quote from Boehm, 1979 I think... can't remember the exact title.
As you can probably guess there is quite a difference between the two. Unit testing is a method for verification, but from your description Z sounds like validation to me.
The killer problem with formal correctness (Score:4, Insightful)
Said killer problem is that all the popular programming languages have features which make it impossible to model them using denotational semantics--that is, to produce mathematical functions which represent the program behavior. For instance, no C-based language can be modeled.
That's not to say that denotational semantics is completely useless. It's used in certain narrow domains, such as hardware--there are numerous chip designs where the circuitry and microcode is proven correct.
However, there's no way you're going to see --check-correctness flags on gcc or javac any time soon.
Of course, the bigger problem with mathematical techniques for proving correctness is that there's no measurable market demand for bug-free software (outside of the few narrow fields alluded to above), and hence no money available to spend on paying highly skilled computer scientists to spend a long time developing provably correct software using obscure programming languages. Instead, everyone just hacks together some alpha-quality code using C++ and calls it version 1.0.
Re:The killer problem with formal correctness (Score:2)
I actually use formal methods when proving that threads synchronize/don't deadlock: it's a dog to test such things, and a bit of mathematics saves a lot of time. The rest of the time: JUnit and CppUnit tests. The nice thing about tests is that you can automate regression testing - I've never heard of automated regression proofs...
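For what it's worth, much of what one proves about lock-based code boils down to an ordering argument. A hedged C sketch of the usual discipline (Account and transfer are invented for illustration):

#include <pthread.h>

/* If every thread acquires locks in the same global order (here, ascending
   account id), no cycle of waiting threads can form, so transfers cannot
   deadlock - the kind of property you prove once rather than hope to hit
   in testing. */
typedef struct { int id; pthread_mutex_t m; long balance; } Account;

static void transfer(Account *from, Account *to, long amount) {
    if (from == to) return;                          /* avoid double-locking */
    Account *first  = (from->id < to->id) ? from : to;
    Account *second = (from->id < to->id) ? to   : from;

    pthread_mutex_lock(&first->m);
    pthread_mutex_lock(&second->m);
    from->balance -= amount;
    to->balance   += amount;
    pthread_mutex_unlock(&second->m);
    pthread_mutex_unlock(&first->m);
}

int main(void) {
    static Account a = { 1, PTHREAD_MUTEX_INITIALIZER, 100 };
    static Account b = { 2, PTHREAD_MUTEX_INITIALIZER, 0 };
    transfer(&a, &b, 25);
    return (b.balance == 25) ? 0 : 1;
}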
how complicated is Z? (Score:2, Insightful)
The idea was that a software system could be defined in 'Z', and then that system could be mathematically proven to be 100% correct (in other words, zero defects).
I've heard of a different idea. Once you define the problem in this language called "C", the software is automatically written by this device called a "compiler", and it is already proven to be 100% correct.
Yes, this is industry best practice (Score:2)
+1, insightful (Score:1)
"Real-world" use of Z for software development (Score:5, Interesting)
Massey and Robert Bauer of Rational ended up writing a Z specification for that portion of the code, and proved it correct. The C implementation of the spec is in the XCB distribution right now.
Massey and Bauer published a paper describing this effort for USENIX 2002. They conclude with the comment "[t]he sort of lightweight modeling described [above], that isolates a troublesome portion of the system for more detailed study, is especially appropriate[...]".
XCB Wiki: http://xcb.wiki.cs.pdx.edu/
XCB/Z Paper: http://xcb.wiki.cs.pdx.edu/pub/XCB/WebHome/usenix
A Z paper (Score:1)
X meets Z: Verifying correctness in the presence of POSIX Threads [pdx.edu]
Sample Z source? (Score:1, Funny)
What, pray tell, does Z code look like?
Re:Sample Z source? (Score:1)
Areas of strength for formal methods (Score:3, Interesting)
Software is still voodoo, that's the problem (Score:2, Insightful)
While provably correct solutions are presently unrealistic, we have myriad tools to assist in the process of creating high-quality software. Specification tools like Z, modelling languages, coverage analysis and profiling software,
Answer to question in article title (Score:1)
I used Z for radiation therapy machine software (Score:4, Informative)
I used Z to design the control software for a radiation therapy machine.
I expressed the safety requirements in Z, and also expressed the detailed design in Z, and argued (I wouldn't say "proved") that the design didn't violate the safety requirements.
We coded the control program in C from the detailed design in Z. Of course we tested the program thoroughly too - you still have to do all that, of course.
The machine has been in use treating patients since July 1999. We haven't killed anyone yet (unlike some other products in this category).
There are some papers and reports about the project at
http://staff.washington.edu/~jon/cnts/index.html
There is a short (3-page) report about our use of Z in this project at
http://staff.washington.edu/~jon/cnts/iccr.html
There is a longer report about the whole project at
http://staff.washington.edu/~jon/cnts/cnts-report.html
The Z for the radiation therapy machine itself is at
http://staff.washington.edu/~jon/z/machine.html
No such beast. Provably. (Score:2)
It is impossibl
Re:No such beast. Provably. (Score:2)
Z doesn't have to prove all programs (including the one you describe) correct in order to be useful. It only has to prove my program correct to be useful to me.
Wrong (Score:2)
Incidentally, the halting problem is decidable when memory is finite. Either the program halts or it eventually repeats a previous state. This is not enormously useful, because you can write a program with a huge state space, but it answers the formal objection.
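A sketch of that argument in C (the toy one-byte machine and its transition function are invented; the point is only that a finite state space makes halting decidable):

#include <stdio.h>

/* Toy machine whose entire state is one unsigned char, so there are at
   most 256 states: either it reaches the halt state (0) or some state
   repeats and it runs forever. Floyd's cycle detection decides which. */
static unsigned char step(unsigned char s) {
    return (unsigned char)(s * 5 + 1);   /* arbitrary transition function */
}

static int halts(unsigned char start) {
    unsigned char slow = start, fast = start;
    for (;;) {
        if (fast == 0) return 1;          /* reached the halt state */
        fast = step(fast);
        if (fast == 0) return 1;
        fast = step(fast);
        slow = step(slow);
        if (slow == fast) return 0;       /* state repeated: never halts */
    }
}

int main(void) {
    printf("halts(7)? %d\n", halts(7));
    return 0;
}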
Software Zero Bugs, design one. (Score:2)
Z and Math-Phobic clients (Score:1)
After all, it's the end user who is supposed to approve what you are doing and the typical user is probably someone who avoided algebra in high-school at all costs. Their eyes would glaze over at any mathematics at all. Only the pro
'zero defect' development (Score:1)
Re:'zero defect' development (Score:1)
Zero-defect software should probably be approached from the viewpoint of coarse-grained systems. Break the problem down into smaller problems and attempt to prove each smaller system is correct. As smaller systems are combined to build bigger systems, the zero-defect approach shifts to proving that the integration of the smaller systems is correct.
I offer two suggestion