
6.2 Compare with Other Work

6.2.3 Compare with zzuf

We use zzuf to fuzz the bc program, again with an input buffer size of 6, and vary parameters such as the fuzzing ratio to observe their effect. To make fuzz testing run smoothly, we append "quit\n" to the end of each fuzzed file and exclude that part from fuzzing. The fuzzing ratios are 0.1, 0.2, 0.3, and 0.4, and each ratio is tested with 1000, 2000, 3000, 4000, 5000, and 6000 test inputs.
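The setup above can be sketched in Python as follows. This is an illustrative model of zzuf-style bit flipping at a given ratio, not zzuf itself; the names fuzz_bytes and make_test_input are hypothetical.

```python
import random

def fuzz_bytes(data: bytes, ratio: float, seed: int = 0) -> bytes:
    """Flip roughly `ratio` of the bits in `data`, zzuf-style (illustrative)."""
    rng = random.Random(seed)
    out = bytearray(data)
    for i in range(len(out)):
        for bit in range(8):
            if rng.random() < ratio:
                out[i] ^= 1 << bit
    return bytes(out)

def make_test_input(expr: bytes, ratio: float, seed: int = 0) -> bytes:
    # Fuzz only the expression part; keep the trailing "quit\n"
    # intact so bc always terminates cleanly, as described above.
    return fuzz_bytes(expr, ratio, seed) + b"quit\n"

sample = make_test_input(b"1+2*3\n", 0.2, seed=42)
assert sample.endswith(b"quit\n")
```

Each (ratio, test-count) cell of Table 6-5 then corresponds to running bc on that many such inputs and measuring line coverage.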

The results are shown in Table 6-5 and Figure 6-6. The "all" series combines the fuzzing ratios 0.1, 0.2, 0.3, and 0.4.

Table 6-5: The line coverage of program bc using zzuf

Observing and analysing the figure, we find that:

• If the fuzzing ratio is too small or too large, the resulting coverage is poor.

• Fuzz testing gains high coverage at the beginning, but as more test data are used, the improvement in coverage becomes small.

Figure 6-6: Coverage comparison using zzuf

7 Conclusions

Scalability is an important issue in symbolic file testing. In KLEE, the complexity of the C standard library is a main obstacle to scalability.

To address this problem, we proposed regular expression constraints that limit the execution paths and focus testing on part of the specification. Regular expressions make describing the input format flexible and easy. Finally, we implemented our method on KLEE, reducing memory usage and improving line and path coverage. We hope this work helps programmers test their programs more easily.

8 Future Work

As mentioned in the results of the HAMPI comparison, context-free languages can handle more situations, so we could add context-free grammar support to KLEE, implementing this feature with the Lex and Yacc tools.
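A small example of the extra power a CFG would give: the language of balanced parentheses (which appears in bc expressions) cannot be described exactly by any regular expression, but is trivial for a grammar such as S → (S)S | ε. A minimal, illustrative recognizer:

```python
def balanced(s: str) -> bool:
    """Recognize balanced parentheses, a context-free language
    that no regular expression can describe exactly."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # a ')' closed with no matching '('
                return False
    return depth == 0          # every '(' must be closed

assert balanced("(()())")
assert not balanced("(()")
```

Constraints of this shape are exactly what grammar-based support in KLEE could express but regular expression constraints cannot.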

However, describing a context-free grammar is usually more difficult than describing a regular expression, so we can retain the regular language part for testing small programs. We can improve this part by using PCRE [32], short for Perl-Compatible Regular Expressions, to support our work. PCRE provides several APIs; the two most important functions are pcre_compile() and pcre_exec(). The function pcre_compile() builds a data structure containing the regular expression information, and pcre_exec() matches a given string against a given compiled regular expression. We can call pcre_compile() first, then use pcre_exec() to implement a function that matches constraints against the expression.
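The compile-then-match workflow of pcre_compile()/pcre_exec() follows the same pattern as Python's standard re module, sketched here as an analogy rather than the PCRE C API itself (the pattern and the helper matches_constraint are hypothetical):

```python
import re

# re.compile plays the role of pcre_compile(): it turns the pattern
# text into a reusable compiled object (PCRE returns a pcre* structure).
number = re.compile(r"-?[0-9]+(\.[0-9]+)?")

# fullmatch plays the role of pcre_exec(): it matches a subject
# string against the compiled pattern.
def matches_constraint(compiled, subject: str) -> bool:
    return compiled.fullmatch(subject) is not None

assert matches_constraint(number, "-3.14")
assert not matches_constraint(number, "quit")
```

In the C implementation, the compiled object would be cached once per constraint and pcre_exec() called for each candidate input string.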

References

[1] G. J. Myers. The Art of Software Testing. Wiley, 2004.

[2] Cristian Cadar, Daniel Dunbar, and Dawson Engler. KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In USENIX Symposium on Operating Systems Design and Implementation, 2008.

[3] Chris Lattner and Vikram Adve. LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation. In Proceedings of the 2004 International Symposium on Code Generation and Optimization (CGO’04), Palo Alto, California, Mar 2004.

[4] Vikram Adve, Chris Lattner, Michael Brukman, Anand Shukla, and Brian Gaeke. LLVA: A Low-level Virtual Instruction Set Architecture. In Proceedings of the 36th annual ACM/IEEE international symposium on Microarchitecture (MICRO-36), San Diego, California, Dec 2003.

[5] LLVM, http://llvm.org/.

[6] NCTU Online Judge System, http://progexam.cs.nctu.edu.tw/.

[7] uClibc, http://www.uclibc.org/.

[8] Satisfiability Modulo Theories Competition (SMT-COMP), http://www.smtcomp.org/.

[9] Clark Barrett and Cesare Tinelli. CVC3. In Werner Damm and Holger Hermanns, editors, Proceedings of the 19th International Conference on Computer Aided Verification (CAV ’07), volume 4590 of Lecture Notes in Computer Science, pages 298–302. Springer-Verlag, July 2007. Berlin, Germany.

[10] Clark Barrett and Sergey Berezin. CVC Lite: A New Implementation of the Cooperating Validity Checker. In Rajeev Alur and Doron A. Peled, editors, Proceedings of the 16th International Conference on Computer Aided Verification (CAV '04), volume 3114 of Lecture Notes in Computer Science, pages 515–518. Springer-Verlag, July 2004. Boston, Massachusetts.

[11] Vijay Ganesh and David Dill. A decision procedure for bit-vectors and arrays. In Werner Damm and Holger Hermanns, editors, Computer Aided Verification, volume 4590 of Lecture Notes in Computer Science, pages 519–531. Springer Berlin / Heidelberg, 2007.

[12] The Yices SMT solver, http://yices.csl.sri.com/.

[13] Robert S. Boyer, Bernard Elspas, and Karl N. Levitt. SELECT—a formal system for testing and debugging programs by symbolic execution. In Proceedings of the international conference on Reliable software, pages 234–245, New York, NY, USA, 1975. ACM.

[14] James C. King. Symbolic execution and program testing. Commun. ACM, 19(7):385–394, 1976.

[15] Cristian Cadar, Vijay Ganesh, Peter M. Pawlowski, David L. Dill, and Dawson R. Engler. Exe: Automatically generating inputs of death. ACM Trans. Inf. Syst. Secur., 12(2):1–38, 2008.

[16] Patrice Godefroid, Nils Klarlund, and Koushik Sen. DART: directed automated random testing. In PLDI '05: Proceedings of the 2005 ACM SIGPLAN conference on Programming language design and implementation, pages 213–223, New York, NY, USA, 2005. ACM Press.

[17] Koushik Sen and Gul Agha. CUTE and jCUTE: Concolic unit testing and explicit path model-checking tools. In CAV, pages 419–423. Springer, 2006.

[18] J. Burnim and K. Sen. Heuristics for scalable dynamic test generation. In ASE '08: Proceedings of the 2008 23rd IEEE/ACM International Conference on Automated Software Engineering, pages 443–446, Washington, DC, USA, 2008. IEEE Computer Society.

[19] Peach Fuzzing Platform, http://peachfuzzer.com/.

[20] zzuf, http://caca.zoy.org/zzuf/.

[21] Patrice Godefroid, Michael Y. Levin, and David A. Molnar. Automated whitebox fuzz testing. In Network Distributed Security Symposium (NDSS). Internet Society, 2008.

[22] Rupak Majumdar and Koushik Sen. Hybrid concolic testing. In ICSE '07: Proceedings of the 29th international conference on Software engineering, 2007.

[23] David Molnar and David Wagner. Catchconv: Symbolic execution and run-time type inference for integer conversion errors. Technical report, UC Berkeley EECS, 2007.

[24] Valgrind, http://valgrind.org/.

[25] Li-Wen Hsu. Resolving unspecified software features by directed random testing. Master’s thesis, NCTU, 2007.

[26] C. F. Yang. Resolving constraints from cots/binary components for concolic random testing. Master’s thesis, NCTU, 2007.

[27] George C. Necula, Scott McPeak, Shree Prakash Rahul, and Westley Weimer. CIL: Intermediate Language and Tools for Analysis and Transformation of C Programs. In CC '02: Proceedings of the 11th International Conference on Compiler Construction, pages 213–228, London, UK, 2002. Springer-Verlag.

[28] Yan-Ting Lin. Non-primitive type symbolic input for concolic testing. Master's thesis, NCTU, 2009.

[29] You-Siang Lin. CAST: Automatic and dynamic software verification tool. Master's thesis, NCTU, 2009.

[30] MPlayer, http://www.mplayerhq.hu/.

[31] Adam Kiezun, Vijay Ganesh, Philip J. Guo, Pieter Hooimeijer, and Michael D. Ernst. HAMPI: a solver for string constraints. In ISSTA '09: Proceedings of the eighteenth international symposium on Software testing and analysis, pages 105–116, New York, NY, USA, 2009. ACM.

[32] PCRE, http://www.pcre.org/.
