So, at least in some meaningful sense, the opposite of proving something is secure is finding code paths for which it isn't. Try Byron Cook's TERMINATOR project, and there are at least two videos on Channel9. Here's one of them. His research is likely to be a good starting point for you to learn about this extremely interesting area of research. Projects such as Spec# and Typed-Assembly-Language are related too. In their quest to move the possibility of safety checks from runtime back to compile-time, they allow the compiler to detect many bad code paths as compilation errors.
Strictly, they don't help your stated intent, but the theory they exploit might be useful to you.
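To make the termination-proving idea concrete: tools like TERMINATOR essentially search for a ranking function, a measure over loop states that is bounded below and strictly decreases on every iteration. This is not Cook's actual algorithm, just a minimal sketch of the idea; the gcd loop and the random spot-check harness below are made up for illustration, and a spot check is evidence rather than a proof:

```python
import random

def euclid_step(state):
    """One iteration of Euclid's gcd loop: (a, b) -> (b, a % b), run while b != 0."""
    a, b = state
    return (b, a % b)

def measure(state):
    """Candidate ranking function: the second component, which should strictly decrease."""
    return state[1]

def spot_check_termination(step, rank, samples=10_000):
    """Randomly sample loop states and check that the ranking function stays
    non-negative and strictly decreases across one step. A real prover would
    establish this for *all* states; here we only test."""
    for _ in range(samples):
        state = (random.randrange(1, 10**6), random.randrange(1, 10**6))
        nxt = step(state)
        if nxt[1] == 0:              # loop exits here, nothing left to check
            continue
        if not (rank(nxt) >= 0 and rank(nxt) < rank(state)):
            return False, state      # counterexample to the candidate measure
    return True, None

print(spot_check_termination(euclid_step, measure))
```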
There is the L4.verified kernel, which is trying to do just that. However, if you look at the history of exploitation, completely new attack patterns keep being found, and then a lot of software written up to that point turns out to be very vulnerable to attack. For instance, format string vulnerabilities weren't discovered until 1999. About a month ago H. D. Moore released DLL hijacking, and literally everything under Windows is vulnerable. I don't think it's possible to prove that a piece of software is secure against an unknown attack. At least not until a theorem prover is able to discover such an attack, and as far as I know this hasn't happened.
By security, I think you mean that a set of invariants hold for a certain piece of code running on known hardware. If that is the case, I believe that it is possible to prove that invariants hold. It just gets harder as the software gets bigger, but from what I've read, we're getting smarter at it. – uosɐſ Sep 9 '10 at 17:35
If that isn't true, I'd be very upset. – uosɐſ Sep 9 '10 at 17:36
@uosɐſ Let me put this a different way: I have written a lot of exploits over the years and I have lost track of the number of CVE numbers issued to me, and I don't think this can be proven. At least not yet. – Rook Sep 9 '10 at 17:40
You sound more qualified than me. Do you believe that it is possible to secure trivial code? I mean, remove the OS, run the most trivial thing you can think of directly on hardware; could such a thing be absolutely secure from a remote attack? If so, that's one extreme. And we know the other extreme (XP?). Somewhere in the middle there is a boundary. We gotta figure out that boundary. – uosɐſ Sep 9 '10 at 17:51
But as you say, "at least not yet". I'm not challenging you, I'm just curious to hear more of your thoughts. – uosɐſ Sep 9 '10 at 17:55
It's not really related to theorem-proving, but fuzz testing is a common technique for finding vulnerabilities in an automated way.
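For a sense of how little machinery basic fuzzing needs, here is a minimal random-mutation sketch. The target `toy_parse` and the mutation strategy are made up for illustration; real fuzzers (AFL, libFuzzer, etc.) add coverage feedback and much smarter mutations:

```python
import random

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Return a copy of `data` with a few randomly chosen bytes replaced."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(parse, seed: bytes, iterations: int = 10_000):
    """Feed randomly mutated inputs to `parse` and collect inputs that crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            parse(sample)
        except Exception as exc:        # any unexpected exception is a potential bug
            crashes.append((sample, exc))
    return crashes

def toy_parse(data: bytes):
    """A toy 'parser' with a planted bug: it chokes on any 0xff byte."""
    if 0xff in data:
        raise ValueError("unhandled byte value")

if __name__ == "__main__":
    for sample, exc in fuzz(toy_parse, seed=b"A" * 64)[:3]:
        print(exc, sample[:16])
```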
I'm currently writing a PDF parser in Coq together with someone else. While the goal in this case is to produce a secure piece of code, doing something like this can definitely help with finding fatal logic bugs. Once you've familiarized yourself with the tool, most proofs become easy. The harder proofs yield interesting test cases that can sometimes trigger bugs in real, existing programs. (And for finding bugs, you can simply assume theorems as axioms once you're sure that there's no bug to find; no serious proving necessary.) About a month ago, we hit a problem parsing PDFs with multiple / older XREF tables.
We could not prove that the parsing terminates. Thinking about this, I constructed a PDF with looping /Prev pointers in the trailer (who'd think of that? :-P), which naturally made some viewers loop forever. (Most notably, pretty much any poppler-based viewer on Ubuntu. Made me laugh and curse Gnome/evince-thumbnailer for eating all my CPU. I think they fixed it now, tho.) Using Coq to find lower-level bugs will be difficult. In order to prove anything, you need a model of the program's behavior.
For stack / heap problems, you'll probably have to model the CPU-level or at least C-level execution. While technically possible, I'd say this is not worth the effort. Using Splint for C or writing a custom checker in your language of choice should be more efficient.
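As a concrete illustration of the /Prev-loop issue above, the kind of custom checker meant here can be as small as a cycle check while walking the XREF chain. A minimal sketch, where `read_trailer` is a hypothetical helper returning the trailer dictionary at a given byte offset, not a call into any real PDF library:

```python
def has_prev_loop(read_trailer, start_offset):
    """Walk the /Prev chain of XREF trailers and report whether it revisits an
    offset, which would make a naive parser loop forever.

    `read_trailer(offset)` is assumed to return a dict-like trailer with an
    optional 'Prev' key giving the byte offset of the previous XREF section.
    """
    seen = set()
    offset = start_offset
    while offset is not None:
        if offset in seen:
            return True              # looping /Prev pointers detected
        seen.add(offset)
        trailer = read_trailer(offset)
        offset = trailer.get("Prev")
    return False

# Usage with a fake in-memory "file" whose trailers form the cycle 100 -> 40 -> 100.
fake_trailers = {100: {"Prev": 40}, 40: {"Prev": 100}}
print(has_prev_loop(lambda off: fake_trailers[off], 100))   # True
```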
Yes, a lot of work has been done in this area. Satisfiability (SAT and SMT) solvers are regularly used to find security vulnerabilities. For example, at Microsoft, a tool called SAGE is used to eradicate buffer overrun bugs from Windows.
SAGE uses the Z3 theorem prover as its satisfiability checker. If you search the internet using keywords such as "smart fuzzing" or "white-box fuzzing", you will find several other projects using satisfiability checkers for finding security vulnerabilities. The high-level idea is the following: collect execution paths in your program (that you didn't manage to exercise, that is, you didn't find an input that made the program execute it), convert these paths into mathematical formulas, and feed these formulas to a satisfiability solver. The idea is to create a formula that is satisfiable/feasible only if there is an input that will make the program execute the given path.
If the produced formula is satisfiable (i.e., feasible), then the satisfiability solver will produce an assignment and the desired input values. White-box fuzzers use different strategies for selecting execution paths.
The main goal is to find an input that will make the program execute a path that leads to a crash.
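To make the "path condition as a formula" idea concrete, here is a minimal sketch using the Z3 Python bindings. This is not SAGE itself; the program fragment, the branch conditions, and the constants are made up, but the workflow (encode the path, ask the solver for a model, use the model as a test input) is the one described above:

```python
# pip install z3-solver
from z3 import BitVec, Solver, UGT, sat

# Imagine symbolic execution collected this path condition for a made-up
# program fragment: the dangerous branch is taken when x > 100 (unsigned)
# and x + y wraps around to 0x41414141, after which x bytes are copied
# into a 16-byte buffer.
x = BitVec("x", 32)
y = BitVec("y", 32)

s = Solver()
s.add(UGT(x, 100))                  # unsigned comparison: x > 100
s.add(x + y == 0x41414141)          # second branch condition (32-bit wraparound arithmetic)

if s.check() == sat:
    m = s.model()
    print("concrete input reaching the dangerous path:")
    print("  x =", hex(m[x].as_long()))
    print("  y =", hex(m[y].as_long()))
else:
    print("path is infeasible; no input drives execution down it")
```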
Disclaimer: I have little to no experience with automated theorem provers. A few observations:
- Things like cryptography are rarely ever "proven", just believed to be secure. If your program uses anything like that, it will only be as strong as the crypto.
- Theorem provers can't analyze everything (or they would be able to solve the halting problem).
- You would have to define very clearly what "insecure" means for the prover. This in itself is a huge challenge.
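To illustrate why "define what insecure means" is the hard part: the property has to be pinned down as a concrete, checkable predicate over program states before any tool (prover, model checker, or fuzzer) can target it, and different reasonable definitions disagree. The state model and predicates below are toy examples invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class State:
    """A deliberately tiny, made-up state model: a fixed-size buffer, a write
    cursor, and a flag recording whether untrusted input chose the cursor."""
    buf_size: int
    write_index: int
    from_untrusted_input: bool

def violates_memory_safety(s: State) -> bool:
    """One possible definition of 'insecure': a write outside the buffer bounds."""
    return not (0 <= s.write_index < s.buf_size)

def violates_taint_policy(s: State) -> bool:
    """A stricter definition: untrusted data controls the write location at all."""
    return s.from_untrusted_input

# The two definitions disagree on this state, which is exactly the point:
s = State(buf_size=16, write_index=4, from_untrusted_input=True)
print(violates_memory_safety(s), violates_taint_policy(s))   # False True
```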
I'm not 100% certain about this, but I'm pretty sure that much of cryptography is proven down to the axioms (especially with the proposed P != NP proof); it is just often misapplied. As you note in your third point, it's important to strictly define what you're counting on the mechanism for; only then can you prove that your use is a correct application. – uosɐſ Sep 9 '10 at 17:28
@uosɐſ Much of cryptanalysis involves reducing the number of rounds needed to brute force a cipher/hash. Also, I'm fairly sure that "factoring is hard" hasn't been proven (unreliable source: en.wikipedia.org/wiki/Integer_factorization). – cobbal Sep 9 '10 at 17:47
Again, I'm not an expert, but for your consideration: integer factorization isn't the only "hard" mechanism available. Also, the recent P != NP proof would (pending review) prove that hard problems exist. And I think it's been proven that, if hard problems exist, then integer factorization or some of its friends are in that category. – uosɐſ Sep 9 '10 at 17:54
Also, the Byron Cook links address the halting problem. Briefly: yes, the halting problem remains, but that is just saying that, as you say, you can't statically analyze every program for termination; it doesn't mean that there aren't a wide range of kinds of programs that are statically analyzable for termination. – uosɐſ Sep 9 '10 at 17:58
Leads me to wonder if there is some intersection of statically analyzable qualities and Turing completeness; probably that's been/being explored. – uosɐſ Sep 9 '10 at 18:00
Yes. Many theorem-proving projects show the quality of their software by demonstrating holes or defects in other software. To make it security-related, just imagine finding a hole in a security protocol. Carlos Olarte's Ph.D. thesis under Ugo Montanari has one such example. The security is in the application; it is not really the theorem prover itself that has anything to do with security or special knowledge thereof.