
NVIDIA Security Team: 'What if We Just Stopped Using C?' (adacore.com)

This week the AdaCore blog shared a story about the NVIDIA Security Team: Like many other security-oriented teams in our industry today, they were looking for a measurable answer to the increasingly hostile cybersecurity environment and started questioning their software development and verification strategies. "Testing security is pretty much impossible. It's hard to know if you're ever done," said Daniel Rohrer, VP of Software Security at NVIDIA.

In my opinion, this is the most important point of the case study — that test-oriented software verification simply doesn't work for security. Once you come out of the costly process of thoroughly testing your software, you can have a metric on the quality of the features that you provide to the users, but there's not much you can say about security.

Rohrer continues, "We wanted to emphasize provability over testing as a preferred verification method." Fortunately, it is possible to prove mathematically that your code behaves in precise accordance with its specification. This process is known as formal verification, and it is the fundamental paradigm shift that made NVIDIA investigate SPARK, the industry-ready solution for software formal verification.
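
To give a flavor of the idea, here is a minimal sketch in C++ rather than SPARK (the function and its contract are invented for illustration). In SPARK, the pre- and postconditions are declared as contract aspects on the subprogram and discharged statically by the prover for all possible inputs; the runtime asserts below only check whichever paths actually execute:

    #include <cassert>

    // Hypothetical example: a saturating add with an explicit specification.
    // A SPARK-style prover would show the postconditions hold for *every*
    // input satisfying the precondition; these asserts merely spot-check.
    int saturating_add(int a, int b, int cap) {
        assert(a >= 0 && b >= 0 && cap >= 0);          // precondition
        long long sum = static_cast<long long>(a) + b; // widen: no int overflow
        int result = sum > cap ? cap : static_cast<int>(sum);
        assert(result <= cap);                         // postcondition: capped
        assert(result == cap || result == a + b);      // postcondition: exact
        return result;
    }

    int main() {
        assert(saturating_add(2, 3, 10) == 5);
        assert(saturating_add(7, 9, 10) == 10);
    }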

Back in 2018, a Proof-of-Concept (POC) exercise was conducted. Two low-level security-sensitive applications were converted from C to SPARK in only three months. After an evaluation of the return on investment, the team concluded that even with the new technology ramp-up (training, experimentation, discovery of new tools, etc.), gains in application security and verification efficiency offered an attractive trade-off. They realized major improvements in the security robustness of both applications (see NVIDIA's Offensive Security Research DEF CON talk for more information on the results of the evaluation).

As the results of the POC validated the initial strategy, the use of SPARK spread rapidly within NVIDIA. There are now over fifty developers trained and numerous components implemented in SPARK, and many NVIDIA products are now shipping with SPARK components.

  • What if, indeed. (Score:3, Insightful)

    by Narcocide ( 102829 ) on Sunday November 13, 2022 @03:38AM (#63046963) Homepage

    Where expertise is devalued, doom follows. Mark my words.

    • "The determined programmer can write buggy code in any language".

      They should just switch to Haskell. Someone, and that someone would be an academic, once tried to tell me that it's impossible to write incorrect code in Haskell.

      • by thesupraman ( 179040 ) on Sunday November 13, 2022 @05:43AM (#63047149)

        It is also impossible to write useful code in Haskell ;)
        well, practically impossible.

      • The grain of truth in that lie is that given a language with a strict and expressive type system (like Haskell or Scala) and a tradition of using it properly (unlike Java and C#), it's just much harder to write incorrect code. It reduces the set of incorrect programs that would compile by a huge factor, and leaves you with a lot less code to verify.
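
        A toy C++ illustration of that point (my example, invented for this comment): give distinct concepts distinct types, and the "swapped arguments" class of bug stops compiling instead of surfacing at runtime.

            #include <iostream>

            // Hypothetical strong types: a plain (double, double) signature
            // would happily accept the arguments in the wrong order.
            struct Meters  { double value; };
            struct Seconds { double value; };

            double speed(Meters d, Seconds t) { return d.value / t.value; }

            int main() {
                std::cout << speed(Meters{100.0}, Seconds{9.58}) << "\n"; // ok
                // speed(Seconds{9.58}, Meters{100.0});  // does not compile
            }
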
        • Re: (Score:3, Insightful)

          by Artenzio ( 10222659 )
          I estimate that about 90% of the bugs in my code are mistakes that could happen in any language. Stuff like array overruns or stack corruption is relatively rare. As your project gets more and more complicated, mistakes happen at higher and higher levels, and language isn't going to help you.
          • I agree with that. Most bugs hide in the diff between reality and the developer's understanding of it. Still, even with that, it really helps to use a language that lets you encode intent. It makes it more likely that that kind of error is caught in code review.
      • Re: What if, indeed. (Score:4, Interesting)

        by lucasnate1 ( 4682951 ) on Sunday November 13, 2022 @10:12AM (#63047575) Homepage

        As someone who has programmed a lot of C++ and Haskell, the languages are similar in a strange way. The only difference is in what's easy and what's hard. In Haskell, the standard library is full of super-safe and friendly containers. In C++, it is full of pointers and APIs without bounds checking.

        Now comes the interesting part. Theoretically, C++ allows you to write completely safe containers if you always use bounds-checking APIs and rely on ugly templates. Haskell also allows you to directly access memory, do mallocs, and violate memory safety.

        The only difference is in the defaults of both languages and what functionality they choose to hide behind complicated APIs.
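
        To make the point about defaults concrete, a small C++ sketch (invented for illustration): the terse spelling of element access is unchecked, and the checked spelling has to be asked for by name - roughly the reverse of Haskell's standard containers.

            #include <iostream>
            #include <stdexcept>
            #include <vector>

            int main() {
                std::vector<int> v{1, 2, 3};

                // Default spelling: no bounds check; out-of-range reads are
                // undefined behavior.
                // int x = v[10];   // compiles, may silently read garbage

                // Opt-in safety: at() checks the index and throws on violation.
                try {
                    std::cout << v.at(10) << "\n";
                } catch (const std::out_of_range& e) {
                    std::cout << "caught: " << e.what() << "\n";
                }
            }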

        I think both languages have their places. In industry projects I'm maintaining now, some would better remain in C++ while others should have been rewritten in Haskell.

    • Expertise is not being devalued.

      And no, you are not smart enough to write secure code in C.

  • proof (Score:3, Interesting)

    by Artenzio ( 10222659 ) on Sunday November 13, 2022 @03:41AM (#63046969)

    Fortunately, it is possible to prove mathematically that your code behaves in precise accordance with its specification.

    I'll give you a program, and a specification that says "this program halts"

    • Re:proof (Score:5, Informative)

      by ShanghaiBill ( 739463 ) on Sunday November 13, 2022 @04:07AM (#63047005)

      I'll give you a program, and a specification that says "this program halts"

      The halting problem [wikipedia.org] says that you can't prove that any program halts or not.

      But you can prove the behavior of programs that are specifically written to be proven, including whether or not they halt.

      In practice, those programs tend to be trivial, and proofs of correctness are not very useful.

      • by Sique ( 173459 )
        In practice, about 30 years ago, when I was still a student, there were already formally verified compilers and at least one formally verified operating system. That is anything but trivial.
        • about 30 years ago ... there were already ... at least one formally verified operating system.

          So in 1992, there was a proven correct OS with zero bugs that never ever crashed? Citation needed.

            • My random guess is they're thinking of QNX, which was famous at the time for running nuclear plants and other high-reliability systems for years without reboots and for being able to upgrade modules in place. The kernel had very few calls, which would make it easier to vet.

            • Re: proof (Score:4, Informative)

              by ShanghaiBill ( 739463 ) on Sunday November 13, 2022 @07:21AM (#63047283)

              My random guess is they're thinking of QNX

              QNX is a microkernel, so most functionality is implemented outside the kernel. It is easy to do things reliably when you don't do much of anything. But the QNX Wikipedia page [wikipedia.org] says nothing about formal verification, and a Google search brings up nothing.

              The seL4 microkernel [wikipedia.org], whose design was prototyped in Haskell, was formally verified, although that proof was not completed until 2009.

        • The "formally verified operating system" was probably just the microkernel. A huge problem with operating systems is that they have to deal with physical hardware which is typically riddled with hardware bugs and undocumented behavior that has to be worked around. Good luck proving anything.
      • by dfghjk ( 711126 )

        "The halting problem [wikipedia.org] says that you can't prove that any program halts or not."

        No. The program "HALT" provably halts. The program "while (true) {}" provably does not. What you cannot do is decide, for an arbitrary program, whether it "halts or not". "Any" particular program may be provable or may not be.

        "But you can prove the behavior of programs that are specifically written to be proven, including whether or not they halt."

        This is in open contradiction to your previous claim. "Any" program includes programs "specifically written to be proven".

      • In practice, those programs tend to be trivial, and proofs of correctness are not very useful.

        For the vast majority of programs, it's obvious whether they halt or not. Generally programs are set up in a giant loop of some sort, and when you encounter that, you can show that it doesn't halt (or in which cases it halts). This is useful, because in practice you want to know in which cases your program halts.

        Most practical code is not self-referential; it's not even recursive. So the approach is to look for any strange for loops that might be infinite loops. Once you've analyzed them all, you can tell whether the program halts.

      • by ljw1004 ( 764174 )

        you can prove the behavior of programs that are specifically written to be proven, including whether or not they halt. In practice, those programs tend to be trivial, and proofs of correctness are not very useful.

        In practice, I disagree. Whenever I'm writing recursion or a while loop, I always aim for a single-line comment explaining why it terminates, and I ask for similar comments when I'm reviewing code (unless it's trivial, e.g. an integer increment with clear upper bound). I've found quite a few bugs this way in practical systems. And if there's no recursion or while loop, well, it halts!

        Whenever there are two pieces of storage storing related information, I ask for comments explaining the invariants that relate them.
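
        In that spirit, a small sketch of the convention (the function is hypothetical): each loop carries a one-line argument for why it terminates, phrased as a bounded, strictly decreasing quantity - what verification tools call a loop variant.

            #include <cassert>

            // Hypothetical example: integer square root by bisection
            // (assumes n + 1 does not wrap, i.e. n < UINT_MAX).
            unsigned isqrt(unsigned n) {
                unsigned lo = 0, hi = n + 1;
                // Terminates: hi - lo is a positive integer that strictly
                // decreases each iteration (lo < mid < hi), and the loop
                // exits when it reaches 1.
                while (hi - lo > 1) {
                    unsigned mid = lo + (hi - lo) / 2;
                    // mid >= 1 here, so n / mid cannot divide by zero.
                    if (mid <= n / mid) lo = mid; else hi = mid;
                }
                return lo;
            }

            int main() {
                assert(isqrt(0) == 0);
                assert(isqrt(15) == 3);
                assert(isqrt(16) == 4);
            }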

    • by narcc ( 412956 )

      This is a misunderstanding of the halting problem. It doesn't say that you can never determine whether a program will halt or not, only that you can't determine for all programs whether they halt or not.

      Just because we can't solve the problem all of the time, doesn't mean that we can't solve a problem some of the time, or even most of the time.

      You should be able to easily write a program that you can prove will halt. You can even write programs that will examine other programs and determine if they will halt or not. What you can't write is one that decides this for every possible program.

        only that you can't determine for all programs whether they halt or not.
        Nope: only that you can't write an algorithm that proves for every program whether it halts or not.

      And to that point, a universal computer that only allows halting programs is impossible. You should be able to easily write a program that halts, but you cannot have a Turing-complete language that guarantees every program halts. Hence, bugs and security problems will still be possible in programs written in either C or SPARK or whatever system the NVIDIA security team decided to use.

        In your hypothetical, if there is a system that can determine if your specific program will halt or not, it will
        • by godrik ( 1287354 )

          You should be able to easily write a program that halts, but you cannot have a Turing-complete language that guarantees every program halts. Hence, bugs and security problems will still be possible in programs written in either C or SPARK or whatever system the NVIDIA security team decided to use.

          That is not necessarily true.

          I have never used SPARK in particular. But when using code provers, you never just throw your whole code at the prover and ask it to show a property. The properties are expressed from the ground up, from the simpler functions through to the more complex ones.

          When the prover can't complete a proof, you introduce intermediate properties on smaller parts of the code to help it do what you need.
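
          A rough C++ analogy of that workflow (a sketch only; real SPARK contracts look quite different): each small function carries its own contract, and the caller's proof obligations are discharged using only the callee's contract, never its body. Runtime asserts stand in here for statically proven conditions.

              #include <cassert>
              #include <vector>

              // Leaf function with its own contract. A prover verifies the body
              // against it once, then reuses only the contract at call sites.
              int clamp(int x, int lo, int hi) {
                  assert(lo <= hi);                    // precondition
                  int r = x < lo ? lo : (x > hi ? hi : x);
                  assert(lo <= r && r <= hi);          // postcondition
                  return r;
              }

              // Higher-level function: its postcondition follows from clamp's
              // contract alone, which keeps the proof compositional.
              std::vector<int> clamp_all(const std::vector<int>& xs, int lo, int hi) {
                  assert(lo <= hi);                    // precondition, passed down
                  std::vector<int> out;
                  out.reserve(xs.size());
                  for (int x : xs) out.push_back(clamp(x, lo, hi));
                  for (int r : out) assert(lo <= r && r <= hi);  // postcondition
                  return out;
              }

              int main() {
                  auto v = clamp_all({-5, 3, 99}, 0, 10);
                  assert(v[0] == 0 && v[1] == 3 && v[2] == 10);
              }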

    • by kmoser ( 1469707 )
      The problem with proving your code behaves in precise accordance with the specification is that the specification usually omits important things, like "isn't vulnerable to fuzzing, buffer overflows, or race conditions."
  • by franzrogar ( 3986783 ) on Sunday November 13, 2022 @03:42AM (#63046971)

    They *RECODED* (and that's the key) a C program into SPARK, and they want us to believe that the "improvements in the security" (whatever that means, because there is no code to check, nor test suites, nor anything) were not because of "recoding" but because of "the language"...

    • It is difficult to prove the correctness of C programs because of pointer aliasing [wikipedia.org], out of bounds array accesses, and stack munging [wikipedia.org]. Many other languages don't allow these operations.
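
      A classic two-line C++ illustration of the aliasing point (invented example): the verifier must consider the possibility that both pointers refer to the same object, which changes what the function computes.

          #include <cassert>

          // Distinct objects: *a becomes 2*(*b), then gains *b, so *a == 3*(*b).
          // If a and b alias, the first write also changes *b, giving 4x instead.
          void scale(int* a, int* b) {
              *a = 2 * *b;
              *a += *b;
          }

          int main() {
              int x = 1, y = 1;
              scale(&x, &y);  // distinct objects
              assert(x == 3);

              int z = 1;
              scale(&z, &z);  // aliased
              assert(z == 4);
          }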

      • out of bounds array accesses.

        That's why all the smart C programmers moved to std::vector and std::string - they can check that for you.

    • by lsllll ( 830002 )
      These people are dumbasses. In the end, everything ends up as machine instructions. It's just not there until it's there.
    • by DrXym ( 126579 )
      I really don't see your point. It's obvious that they didn't just translate the original code into the new language line by line and be done with it. They would have looked at interface boundaries, calling contracts, parameters, inputs, outputs and produced functionally equivalent code using the safe language. Line by line it would look very different.

      It doesn't mean the existing code was safe, or that it would stay safe after being modified. Nor does it even mean you could apply what you learned from the

  • by Casandro ( 751346 ) on Sunday November 13, 2022 @03:52AM (#63046981)

    Coming from a company that managed to include a node.js web service with their drivers that exposes privileged system calls to the network, I doubt they would be able to actually use languages with automated proofs.

    • Mod Up. The word "metrics" came up. Normally you then know which coders are best and write the most trustworthy code, and which are just interested in a paycheck and in the average range. They have enough money to employ some OpenBSD auditors to sample their "passed" code for more metrics. In the real world this is never done. At university CS, "duty programmers" with nasty red pens are the best. They know how beginners think. These are the people project managers hate, because it is politically incorrect to tell an
      • by tlhIngan ( 30335 ) <slashdot&worf,net> on Sunday November 13, 2022 @02:20PM (#63048143)

        I just had a job interview and I made the coding challenge go sideways - by simply doing a bog-simple check I always do.

        I was given a C string function to write. So the very first thing I write is a check for NULL pointers. I've gotten so used to coding over the years that bad arguments are to be expected, and if you're passed a pointer, you can expect it to be NULL at some point.

        Of course, the specification said nothing about getting NULL as input and it threw my interviewers for a loop - I looked at the written specification they gave me, it said nothing, and I looked at the interviewers because I apparently wandered into an edge case no one considered.

        The proper way is to handle it somehow - if you can, trigger the debugger. If not, trigger an assertion that halts the program or throw a fatal exception, because it's something that should never happen in normal operation and if you get it, you must assume the system is in an abnormal state. At the very least, return an error.

        Of course, triggering a forced breakpoint is probably a nice way to do it: if you're running under a debugger it will drop you into the debugger, and if you're not, you'll typically get an illegal instruction error. (Breakpoints are usually implemented with a breakpoint instruction, which is generally an illegal instruction with a specific pattern, so that when the debugger handles the illegal instruction error it can tell it's a breakpoint and not something else. Illegal instruction errors are also precise errors that always point to the faulting instruction, so you know exactly where it faulted.)

        Trust me, the look on the interviewer's faces when I presented them with "what is the proper way to handle this error" was priceless. I guess I sort of pre-empted their questions on "well, what happens if we give you NULL?"
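
        For what it's worth, a sketch of that pattern (the policy choices here - trap loudly in debug builds, fail safely in release - are one option among several):

            #include <cassert>
            #include <cstddef>

            // Hypothetical interview-style exercise: string length with an
            // explicit policy for the NULL case the spec forgot to mention.
            std::size_t checked_strlen(const char* s) {
                if (s == nullptr) {
                    assert(!"checked_strlen: NULL input"); // halts debug builds
                    return 0;                              // defined fallback in release
                }
                std::size_t n = 0;
                while (s[n] != '\0') ++n;
                return n;
            }

            int main() {
                assert(checked_strlen("hello") == 5);
                assert(checked_strlen("") == 0);
            }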

    • They did that? ... I believe you; it's totally plausible given the l00ny Node craze and the epic prevalence of dimwit deciders, but this is a new level of Monty Python in software projects. Pretty hilarious if you ask me. ROTFL!

  • by mccalli ( 323026 ) on Sunday November 13, 2022 @03:57AM (#63046987) Homepage
    I was taught this too. It's a brilliant way of proving your code is bugged in perfect accordance with the specification.

    Why? Because at that point the spec is code, and code can be bugged. Also, it tells you nothing about implementation - it would tell you your outputs were perfect for all your inputs according to what the specification asked for, but it wouldn't tell you you'd forgotten to free a pointer.

    My lecturer (we're talking early 90s) was adamant that it wasn't just yet another computer language and was different in concept. It was Z I was taught...and lo and behold, a few years after university, a compiler for it appeared...
    • by DrXym ( 126579 )
      Z might have stood a better chance of succeeding if it was actually readable and writable instead of using mathematical notations that couldn't even be properly encoded or rendered by computers at the time. We had a module on it and we had to write our answers with pen and paper.
    • by gweihir ( 88907 )

      Indeed. Back then, the CS prof teaching us formal verification said he just saw it as an alternate implementation that gets compared to the code. It is not nearly as absolute as most people think, and it does absolutely nothing (except giving a false sense of security) when the ones writing the spec just make the same mistakes. That is quite common, because a lot of problems result from people missing more complex cases or unexpected behaviors, and often they miss them in the code and the spec both, ev

  • wow! another sucker company that has fallen for provably secure bullshit.
    • by gweihir ( 88907 )

      Yeah. The limits of that approach are well known.

    • This is your failure in reasoning, not theirs.

      They are not claiming that using a language that eliminates many classes of bugs solves all problems. It's only you and your fellow C supremacists who are basically claiming that the only real alternative to C is complete perfection, and that therefore it doesn't exist.

      Eliminating certain very common bug classes means you no longer need to spend time auditing for them and fixing them.

      You can write secure machine code given enough time and resources. But it's, frankly,

      • I recommend reading "Michael Abrash's Graphics Programming Black Book". My perspective is that the hardware will do whatever it damn well pleases, regardless of your software. Abstracting away software problems doesn't make the problems go away, it merely hides them. Thus C, aka portable assembly, which makes it easier to figure out the difference between what you wrote and what the hardware actually did. Yes, it would be nice to use fancier languages, if only the hardware would play along.

  • How is this different from the CodeSonar static code analysis tool?
    from https://en.wikipedia.org/wiki/... [wikipedia.org]

    SPARK is a formally defined computer programming language based on the Ada programming language,

    https://en.wikipedia.org/wiki/... [wikipedia.org]

    Verification conditions (each illustrated in the sketch below):
    - array index out of range
    - type range violation
    - division by zero
    - numerical overflow

    https://en.wikipedia.org/wiki/... [wikipedia.org]
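
    For illustration, here is a small C++ function (invented, not from the article) marked up with where each of those verification conditions would arise; SPARK's tools demand a proof that none can fail, whereas in C or C++ each one is a latent runtime hazard:

        #include <cassert>
        #include <vector>

        // Each commented line is where a SPARK-style prover would emit a
        // verification condition (VC) and demand a proof it cannot fail.
        int window_mean(const std::vector<int>& v, std::size_t i, std::size_t n) {
            assert(n > 0 && i + n <= v.size()); // what the prover asks of callers
            long long sum = 0;
            for (std::size_t k = 0; k < n; ++k)
                sum += v[i + k];                // VCs: index in range; sum overflow
            long long m = sum / static_cast<long long>(n); // VC: division by zero
            return static_cast<int>(m);         // VC: result within int's type range
        }

        int main() {
            std::vector<int> v{10, 20, 30, 40};
            assert(window_mean(v, 1, 2) == 25);
        }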

    This seems to be new, but then the specification needs to be correct:

    Static Verification ...proving that the software implementation meets a formal specification of the program's required behaviour

    https://www.adacore.com/about-... [adacore.com]

    DEF CON 30 video links:
    https://blog.adacore.com/when-... [adacore.com]

    • by DrXym ( 126579 )
      Static analysis tools for C/C++ are better than nothing but anyone who has ever used them also knows they are expensive, slow and hopelessly broken, producing reams of false positives or getting easily confused.
    • That's kind of ironic, because Ada is famous for a certain disaster [bugsnag.com] due to a numerical overflow resulting from a type range violation.

    • The best you can do is annotate C(++) and then iteratively modify the portions which cannot be automatically proven, to the point where it has SPARK-level (or higher) correctness guarantees, like the L4.verified project did.

      Ignoring that mountainous level of work, SPARK has stronger guarantees than MISRA C(++), which has stronger guarantees than free-form C(++) with analysis tools.

      • by serviscope_minor ( 664417 ) on Sunday November 13, 2022 @09:28AM (#63047487) Journal

        The best you can do is annotate C(++) and then iteratively modify the portions which cannot be automatically proven, to the point where it has SPARK-level (or higher) correctness guarantees, like the L4.verified project did.

        Ignoring that mountainous level of work,

        Exactly. The ClEvEr PrOgRaMmErS wRiTe BuG fReE C crowd are ignoring that while you can, it usually involves external tools and standards and a ton of work. I can never understand why they think it's so terrible to have those integrated into the language itself, to reduce the workload.

  • Had not heard much about that language in years

  • For nVidia hardware, simply unplug it and place it in storage.
  • It's fun to order a programmer

    To program everything in C
    To patch, install, bug fix and more
    And run at the first sign of bankruptcy!

    It can be manly in programming
    They'll cut your salary semi-annually
    It's not tax deductible
    Our binaries are corruptible
    We're programming in the language known as C

  • Yes, testing is limited in what it can do for security. But that primarily applies to logical errors; it does not apply, or applies only in a very limited sense, to the specific vulnerabilities caused by the absence of memory safety. Hence this is _not_ a reason to give up C, and this person has no clue what they are talking about.

    • SPARK exists, and MISRA C and its competitors cannot provide toolchains able to give equivalent guarantees, even if that is theoretically possible. SPARK is cheap, too.

      By the time you are following MISRA-type coding guidelines and adding range annotations, it might as well be a different language anyway; that ain't C to me.

  • This all sounds very similar to Z and VDM, which at the end of the 1980s / early 1990s were clearly going to be the way programming was going. I even put myself on a couple of courses to learn those formal methods - apparently I was guaranteed to get a return on my investment in my upgraded skills (or so I was told). Hohum. The problem was, they took time to learn, and otherwise perfectly decent programmers couldn't necessarily get their heads around the esoteric nature of formal methods (being based on abst
  • by cjonslashdot ( 904508 ) on Sunday November 13, 2022 @06:20AM (#63047207)
    C is _a_ problem, but not the core problem. As I explained in my 2005 textbook High-Assurance Design, the core problem is that programmers, by and large, have little-to-no training in secure coding, and even less interest in it. And that is mostly because employers do not make that kind of training a priority. When was the last job interview in which you were asked about your knowledge of secure coding? And that is mostly because companies that write software have insufficient liability for losses when users of that software experience security breaches.
    • Back in the days when I was developing a small software demonstrator, we avoided C and C++ as much as possible. Our company did not produce software. It was an experimental project, but it had to run fast. It did some FEM-like simulations on ICs. If it was too slow, it was not worth investigating further.
      I had a fierce discussion about this with one of the programmers. He argued that we had to do everything in C++, because that is the only "real" programming language. It was also going to be a lot faster.
      • I agree. Programmers become loyal to the languages they know. That is a dysfunction. Perhaps the reason is that it takes so long to learn a language well. It becomes a constant tool of the trade, rather than a tool from the shelf.
    • Of course, Python never has any security problems. Neither does Javascript.

    • Amongst other things, programmers and software engineers work at:
      1) Functionality (does the software do what it is supposed to, more or less)
      2) Interoperability (using and offering APIs)
      3) Maintainability (readable code, documentation, solid yet flexible design and object models, good data models, etc.)
      4) Testing (allowing for automatic testing)
      5) Security.

      I found that many software developers stop after point 2 or 3. They'll pay lip service to testing and security, instead trusting that the testing
    • have little-to-no training

      Oooof. That's not a good sales pitch for your book; trying to solve human problems with training doesn't work well. The safety industry got away from that mindset in the 80s, and programmers should too.

      The core problem is humans be humans, and we should do our best to mitigate that problem *without* relying on attempting to change the human.

  • 'What if We Just Stopped Using C?'

    What if you just shut the hell up, hmm?

  • Replace all old bugs with new bugs. I'm definitely not against using other languages when possible. The thing is, there are only so many ways to build low-level code.
  • Look, this has become a religious war for many.

    The C camp will continue doing C.

    Their competition will use Rust and similar tools.

    We'll see in a few years who's come out on top.

    I would put my money in a bank coded in Rust. Place your bets.

  • Why SPARK and not Spin [spinroot.com]?

    Spin uses C and not Ada. So it seems like it would be more familiar to NVIDIA's engineering staff. And the step of porting over would be omitted.

    Spin can be used in ways beyond what SPARK can offer. From the website:

    Spin can be used in four main modes:

    as a simulator, allowing for rapid prototyping with random, guided, or interactive simulations
    as an exhaustive verifier, capable of rigorously proving the validity of user-specified correctness requirements (using partial order reduction theory to optimize the search)
    as a proof approximation system that can validate even very large system models with maximal coverage of the state space
    as a driver for swarm verification (a new form of swarm computing that can exploit cloud networks of arbitrary size), which can make optimal use of large numbers of available compute cores to leverage parallelism and search diversification techniques, increasing the chance of locating defects in very large verification models

    Of course, if you do an exhaustive verification of a complex program, it would take more resources to complete than exist. It's not going to solve the halting problem for you. But for narrow cases you can fully verify, and there are examples of real-time schedulers and network

C makes it easy for you to shoot yourself in the foot. C++ makes that harder, but when you do, it blows away your whole leg. -- Bjarne Stroustrup
