
7.1 Summary

We define a new graphical structure to model an application. Using sets of normalized terms, we describe each screen and action in the application, and we use the transitions from one screen to another to model the program's behavior. Because the structure is grounded in a human's understanding of the normalized terms and of the application, we call it the "common sense" model.

To elicit a tester's common sense about an application, we developed a GUI tool, SpecElicitor. The tool acts as an intermediary between the tester and the application: by interacting with the tester, SpecElicitor constructs the common sense model step by step, one operation at a time.

In the end, SpecElicitor produces a common sense model describing the behaviors the tester performed.

Under our assumption, we can find the important features of a type of application by overlapping the common sense models of applications of that type. To test these important features more intensively, we design an algorithm that randomly selects actions according to their weights. We also design a test case generator that outputs test cases following this algorithm, and a test case executor that runs these test cases and produces a test report. The report contains not only the results of the test cases but also a simple evaluation of each trace: whether it passed, failed, or indicates that our common sense model is inadequate to model the application.
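The weighted selection step can be sketched as follows. This is a minimal illustration, not the generator's actual implementation: the action names and weights are invented, and Python's `random.choices` stands in for whatever sampling routine is actually used.

```python
import random

# Hypothetical weights: actions covering important (overlapped) features
# receive higher weight, so they are selected more often during generation.
action_weights = {
    "tap_new_note": 5,
    "type_text": 5,
    "tap_save": 4,
    "tap_back": 1,
    "tap_menu": 1,
}

def generate_test_case(length, rng=random):
    """Build one test case by repeatedly sampling actions by weight."""
    actions = list(action_weights)
    weights = [action_weights[a] for a in actions]
    return [rng.choices(actions, weights=weights, k=1)[0]
            for _ in range(length)]

test_case = generate_test_case(10)
```

With these weights, roughly 14 of every 16 selected actions exercise the note-editing features, which is the intended bias toward important features.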

7.2 Limitation

If the common sense models of the same kind of applications are built by different testers, the model obtained by overlapping them may, in the worst case, be meaningless. Moreover, if a tester chooses a wrong term to describe a screen or an action, the same problem arises. Therefore, tester training and fault tolerance become concerns.

Another limitation is the identification of screens and actions. Currently, we use abstraction to filter out redundant information and compare what remains. Nevertheless, if a screen differs only slightly from a known one, in a way that abstraction cannot remove, it is treated as a new, different screen. This requires additional manual work to assign meanings to the screen and its actions.
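One simple form of the abstract-then-compare step might look like the sketch below. The choice of which widget attributes to keep and which to discard is an assumption for illustration, not SpecElicitor's actual rule set.

```python
# Each widget on a screen is reduced to a structural signature; volatile
# attributes such as the displayed text are discarded, so two snapshots
# that differ only in content compare as the same screen.
def abstract_screen(widgets):
    return frozenset(
        (w["class"], w["resource_id"], w["clickable"]) for w in widgets
    )

def same_screen(widgets_a, widgets_b):
    return abstract_screen(widgets_a) == abstract_screen(widgets_b)

# Two snapshots of a hypothetical "Note" screen with different text.
note_v1 = [{"class": "EditText", "resource_id": "note_body",
            "clickable": True, "text": "shopping list"}]
note_v2 = [{"class": "EditText", "resource_id": "note_body",
            "clickable": True, "text": "meeting notes"}]
```

Under this abstraction, `note_v1` and `note_v2` are identified as the same screen; the limitation above arises precisely when a small but meaningful difference survives the abstraction and forces a new screen to be created.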

7.3 Future Work

One direction of future work is to make SpecElicitor more powerful and efficient.

For now, some technical problems remain unsolved. For instance, scrolling on an Android device is one of them. We can perform a user input through ADB [17] by touching down at one position, moving to another position over a predefined duration, and then lifting the finger. However, the scrolling distance in the application depends on the speed of the movement, and it is not easy to choose a suitable duration. Therefore, SpecElicitor does not support the scrolling action. As a result, it may skip some functionalities and build an incomplete common sense model.
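The gesture described above corresponds to ADB's `input swipe` command, which takes start and end coordinates plus an optional duration in milliseconds. The sketch below only assembles the command; the coordinates and duration are arbitrary examples, and choosing that duration is exactly the difficulty discussed.

```python
def build_swipe_command(x1, y1, x2, y2, duration_ms):
    """Assemble the ADB command for a touch-move-lift gesture.

    Because fling distance depends on gesture speed, the same start and
    end points can scroll different amounts for different durations.
    """
    return ["adb", "shell", "input", "swipe",
            str(x1), str(y1), str(x2), str(y2), str(duration_ms)]

# Example: drag from (500, 1500) up to (500, 300) over 200 ms.
cmd = build_swipe_command(500, 1500, 500, 300, 200)
# subprocess.run(cmd) would send the gesture to a connected device.
```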

Letting the user of SpecElicitor alter the normalized terms of an already-labeled screen or action could reduce wasted labor. It is nearly impossible to ensure that two snapshots taken between sequences of user input reflect equal states, because internal changes may not be revealed on the screen; this makes re-doing an action complicated. If a user chooses normalized terms mistakenly and SpecElicitor has no mechanism for modification, the common sense model will contain incorrect concepts. A modification mechanism would help users avoid producing a wrong common sense model and having to re-label the application from scratch.

Another improvable component is the usage of our common sense database. Because we already have the meanings of each screen and action, we might also be able to make a sequence of transitions meaningful, and then use data mining techniques to find valuable transition sequences that correspond to real-life user behavior. For example, a "Login" process consists of many user inputs and different screens; to log in to an account successfully, a specific sequence of actions must be performed. That sequence would be a critical function in any application containing a "Login" feature.
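As a minimal illustration of the mining step (the traces and action names below are invented), one could count frequent fixed-length transition subsequences across recorded traces:

```python
from collections import Counter

def frequent_transitions(traces, n=3):
    """Count length-n action subsequences across recorded traces."""
    counts = Counter()
    for trace in traces:
        for i in range(len(trace) - n + 1):
            counts[tuple(trace[i:i + n])] += 1
    return counts

# Hypothetical traces: two logins and one visit to an "About" screen.
traces = [
    ["open_app", "tap_login", "type_account", "type_password", "tap_submit"],
    ["open_app", "tap_login", "type_account", "type_password", "tap_submit"],
    ["open_app", "tap_menu", "tap_about"],
]
top = frequent_transitions(traces, n=3).most_common(1)
```

The most frequent trigrams here all belong to the login flow, hinting at how recurring subsequences could surface critical functions such as "Login".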

Our ultimate goal for common sense models is the automatic or semi-automatic identification of applications. For example, we know that the type "Notepad" has "Note" and "Note List" screens. Given a new application, we could simply set its type to "Notepad", and an algorithm would automatically identify which screen is "Note" and which is "Note List". We could then test the application based on common sense without any human intervention. Achieving this may require knowledge of machine learning, natural language processing, and computer vision.

REFERENCES

[1] “UI/Application Exerciser Monkey.” [Online]. Available: http://developer.android.com/tools/help/monkey.html.

[2] D. Olenick, “Apple iOS And Google Android Smartphone Market Share Flattening: IDC,” 2015. [Online]. Available: http://www.forbes.com/sites/dougolenick/2015/05/27/apple-ios-and-google-android-smartphone-market-share-flattening-idc/.

[3] A. M. Memon, M. Lou Soffa, and M. E. Pollack, “Coverage criteria for GUI testing,” ACM SIGSOFT Softw. Eng. Notes, vol. 26, no. 5, p. 256, 2001.

[4] B. N. Nguyen, B. Robbins, I. Banerjee, and A. Memon, “GUITAR: An innovative tool for automated testing of GUI-driven software,” Autom. Softw. Eng., vol. 21, no. 1, pp. 65–105, 2014.

[5] D. Amalfitano, A. R. Fasolino, P. Tramontana, B. Ta, and A. Memon, “MobiGUITAR -- A Tool for Automated Model-Based Testing of Mobile Apps,” IEEE Softw., pp. 1–1, 2014.

[6] A. Machiry, R. Tahiliani, and M. Naik, “Dynodroid: an input generation system for Android apps,” Proc. 2013 9th Jt. Meet. Found. Softw. Eng. - ESEC/FSE 2013, p. 224, 2013.

[7] T. Ostrand, A. Anodide, H. Foster, and T. Goradia, “A visual test development environment for GUI systems,” ACM SIGSOFT Softw. Eng. Notes, vol. 23, no. 2, pp. 82–92, 1998.

[8] “Sikuli.” [Online]. Available: http://www.sikuli.org/.

[9] T.-H. Chang, T. Yeh, and R. C. Miller, “GUI testing using computer vision,” Proc. 28th Int. Conf. Hum. Factors Comput. Syst., pp. 1535–1544, 2010.

[10] E. Manavoglu, T. Building, and C. L. Giles, “Probabilistic User Behavior Models,” 2003.

[11] K. Radinsky, K. Svore, S. Dumais, J. Teevan, A. Bocharov, and E. Horvitz, “Modeling and predicting behavioral dynamics on the web,” Proc. 21st Int. Conf. World Wide Web - WWW ’12, p. 599, 2012.

[12] A. Li, Z. Qin, M. Chen, and J. Liu, “ADAutomation: An Activity Diagram Based Automated GUI Testing Framework for Smartphone Applications,” IEEE Int. Conf. Softw. Secur. Reliab., pp. 68–77, 2014.

[13] M. Utting, A. Pretschner, and B. Legeard, “A Taxonomy of Model-Based Testing,” pp. 1–18, 2006.

[14] “TEMA.” [Online]. Available: http://tema.cs.tut.fi/index.html.

[15] T. Takala, M. Katara, and J. Harty, “Experiences of system-level model-based GUI testing of an android application,” Proc. - 4th IEEE Int. Conf. Softw. Testing, Verif. Validation, ICST 2011, pp. 377–386, 2011.

[16] “Semantic Network - Wikipedia.”

[17] “Android Debug Bridge.” [Online]. Available: http://developer.android.com/tools/help/adb.html.

[18] “UIAutomator.” [Online]. Available: https://developer.android.com/tools/testing-support-library/index.html#UIAutomator.

[19] “Omnidroid.” [Online]. Available: https://code.google.com/p/omnidroid/.

[20] “Genymotion.” [Online]. Available: https://www.genymotion.com/.

[21] “HTC Desire 820.” [Online]. Available: http://www.htc.com/tw/smartphones/htc-desire-820-dual-sim/.
