Model-based testing
Model-based testing is an application of model-based design for designing and optionally also executing artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a System Under Test (SUT), or to represent testing strategies and a test environment.
A model describing an SUT is usually an abstract, partial representation of the SUT's desired behavior. Test cases derived from such a model are functional tests on the same level of abstraction as the model. These test cases are collectively known as an abstract test suite. An abstract test suite cannot be directly executed against an SUT because the suite is on the wrong level of abstraction. An executable test suite needs to be derived from a corresponding abstract test suite. The executable test suite can communicate directly with the system under test. This is achieved by mapping the abstract test cases to concrete test cases suitable for execution. In some model-based testing environments, models contain enough information to generate executable test suites directly. In others, elements in the abstract test suite must be mapped to specific statements or method calls in the software to create a concrete test suite. This is called solving the "mapping problem".[1] In the case of online testing (see below), abstract test suites exist only conceptually but not as explicit artifacts.
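As a minimal sketch of the mapping problem, the hypothetical adapter below binds abstract test steps produced from a model to concrete method calls on an assumed SUT interface; the step names and the CalculatorSUT class are illustrative, not part of any particular tool.

```python
# Minimal sketch: mapping abstract test steps to concrete test code.
# The SUT interface (CalculatorSUT) and the step names are hypothetical.

class CalculatorSUT:
    def __init__(self):
        self.value = 0

    def add(self, n):
        self.value += n

    def clear(self):
        self.value = 0


# An abstract test case as produced from a model:
# a sequence of (abstract step, expected observation) pairs.
abstract_test = [("ADD_ONE", 1), ("ADD_ONE", 2), ("RESET", 0)]

# The "mapping": each abstract step is bound to a concrete action on the SUT.
step_mapping = {
    "ADD_ONE": lambda sut: sut.add(1),
    "RESET": lambda sut: sut.clear(),
}

def run_concrete_test(test):
    sut = CalculatorSUT()
    for step, expected in test:
        step_mapping[step](sut)  # execute the concrete action
        assert sut.value == expected, f"step {step}: got {sut.value}, expected {expected}"

run_concrete_test(abstract_test)
print("abstract test executed against the SUT")
```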
Tests can be derived from models in different ways. Because testing is usually experimental and based on heuristics, there is no known single best approach for test derivation. It is common to consolidate all test derivation related parameters into a package that is often known as "test requirements", "test purpose" or even "use case(s)". This package can contain information about those parts of a model that should be focused on, or the conditions for finishing testing (test stopping criteria).
Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing.
Model-based testing for complex software systems is still an evolving field.
Models
Especially in Model Driven Engineering or in the Object Management Group's (OMG's) model-driven architecture, models are built before or in parallel with the corresponding systems. Models can also be constructed from completed systems. Typical modeling languages for test generation include UML, SysML, mainstream programming languages, finite state machine notations, and mathematical formalisms such as Z, B, Event-B, Alloy or Coq.
Deploying model-based testing
There are various known ways to deploy model-based testing, which include online testing, offline generation of executable tests, and offline generation of manually deployable tests.[2]
Online testing means that a model-based testing tool connects directly to an SUT and tests it dynamically.
Offline generation of executable tests means that a model-based testing tool generates test cases as computer-readable assets that can be later run automatically; for example, a collection of Python classes that embodies the generated testing logic.
Offline generation of manually deployable tests means that a model-based testing tool generates test cases as human-readable assets that can later assist in manual testing; for instance, a PDF document describing the generated test steps in a human language.
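To make the offline-generation idea concrete, the sketch below writes an abstract test sequence out as a self-contained Python unittest module that can be run later, independently of the generating tool. The abstract steps, the generated file name, and the assumed lamp_sut adapter are all illustrative assumptions.

```python
# Hypothetical offline generation: emit an abstract test case as an
# executable unittest module that can be run at a later time.

abstract_test = [("press_button", "light_on"), ("press_button", "light_off")]

lines = [
    "import unittest",
    "from lamp_sut import Lamp  # assumed SUT adapter, not a real package",
    "",
    "class GeneratedTest(unittest.TestCase):",
    "    def test_sequence_1(self):",
    "        sut = Lamp()",
]
for action, expected_state in abstract_test:
    lines.append(f"        sut.{action}()")
    lines.append(f"        self.assertEqual(sut.state, {expected_state!r})")
lines += ["", "if __name__ == '__main__':", "    unittest.main()"]

# Write the generated test suite to disk for later execution.
with open("test_generated.py", "w") as f:
    f.write("\n".join(lines))
```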
Deriving tests algorithmically
The effectiveness of model-based testing is primarily due to the potential for automation it offers. If a model is machine-readable and formal to the extent that it has a well-defined behavioral interpretation, test cases can in principle be derived mechanically.
From finite state machines
Often the model is translated to or interpreted as a finite state automaton or a state transition system. This automaton represents the possible configurations of the system under test. To find test cases, the automaton is searched for executable paths. A possible execution path can serve as a test case. This method works if the model is deterministic or can be transformed into a deterministic one. Valuable off-nominal test cases may be obtained by leveraging unspecified transitions in these models.
Depending on the complexity of the system under test and the corresponding model, the number of paths can be very large because of the huge number of possible configurations of the system. To find test cases that can cover an appropriate, but finite, number of paths, test criteria are needed to guide the selection. This technique was first proposed by Offutt and Abdurazik in the paper that started model-based testing.[3] Multiple techniques for test case generation have been developed and are surveyed by Rushby.[4] Test criteria are described in terms of general graphs in the testing textbook.[1]
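For illustration, the sketch below represents a small invented state machine as a transition relation and searches it breadth-first for input sequences, one per transition, a simple transition-coverage criterion. The machine, states, and inputs are made up; a real model would come from a modeling tool.

```python
from collections import deque

# Toy state machine: (state, input) -> next state.
transitions = {
    ("idle", "insert_coin"): "paid",
    ("paid", "push_handle"): "dispensing",
    ("paid", "refund"): "idle",
    ("dispensing", "take_item"): "idle",
}

def shortest_path_ending_with(initial, target):
    """BFS for a shortest input sequence from `initial` that ends by taking `target`."""
    queue = deque([(initial, [])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        for (src, inp), dst in transitions.items():
            if src != state:
                continue
            if (src, inp) == target:
                return path + [inp]
            if dst not in visited:
                visited.add(dst)
                queue.append((dst, path + [inp]))
    return None  # target transition unreachable from the initial state

def transition_coverage_suite(initial="idle"):
    """One test case (input sequence) per transition of the model."""
    return [shortest_path_ending_with(initial, t) for t in transitions]

for test in transition_coverage_suite():
    print(test)
```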
Theorem proving
Theorem proving was originally used for the automated proving of logical formulas. In model-based testing approaches, the system is modeled by a set of logical expressions (predicates) specifying the system's behavior.[5] To select test cases, the model is partitioned into equivalence classes over the valid interpretation of the set of logical expressions describing the system under development. Each class represents a certain system behavior and can therefore serve as a test case. The simplest partitioning uses the disjunctive normal form approach: the logical expressions describing the system's behavior are transformed into disjunctive normal form.
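A small sketch of the disjunctive-normal-form idea, assuming the SymPy library is available and using an invented behavioral predicate: each disjunct of the DNF describes one equivalence class of behavior, from which one test case can be picked.

```python
from sympy import symbols
from sympy.logic.boolalg import to_dnf

# Invented behavioral predicate: the SUT accepts a request when the user is
# authenticated and either is an admin or the resource is public.
authenticated, admin, public = symbols("authenticated admin public")
behavior = authenticated & (admin | public)

# Each disjunct of the DNF is one equivalence class, i.e. one abstract test case.
dnf = to_dnf(behavior, simplify=True)
print(dnf)  # e.g. (admin & authenticated) | (authenticated & public)
for clause in dnf.args:
    print("test class:", clause)
```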
Constraint logic programming and symbolic execution
Constraint programming can be used to select test cases satisfying specific constraints by solving a set of constraints over a set of variables. The system is described by means of constraints.[6] Solving the set of constraints can be done with Boolean solvers (e.g. SAT solvers based on the Boolean satisfiability problem) or with numerical analysis, such as Gaussian elimination. A solution found by solving the set of constraint formulas can serve as a test case for the corresponding system.
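For illustration, the sketch below uses the Z3 solver's Python bindings (a tooling assumption, not prescribed by the technique) on a small invented constraint model of valid inputs; the solver's solution becomes a concrete test case.

```python
from z3 import Int, Solver, sat

# Invented constraint model of valid inputs to a discount function:
# the discount applies only for bulk orders placed by adults.
age, quantity = Int("age"), Int("quantity")

s = Solver()
s.add(age >= 18, age <= 120)            # adult customer
s.add(quantity > 10, quantity <= 1000)  # bulk order

if s.check() == sat:
    m = s.model()
    test_case = {"age": m[age].as_long(), "quantity": m[quantity].as_long()}
    print("generated test case:", test_case)
```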
Constraint programming can be combined with symbolic execution. In this approach, a system model is executed symbolically, i.e. data constraints are collected over the different control paths, and the constraint programming method is then used to solve the constraints and produce test cases.[7]
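Continuing the same assumed Z3 setup, the sketch below hand-collects the path constraints of a tiny invented function (one constraint set per control path) and solves each set, yielding one test input per path.

```python
from z3 import Int, Solver, sat, Not

x = Int("x")

# Tiny SUT under analysis (for reference):
#   def classify(x):
#       if x > 100: return "big"
#       else:       return "small"
#
# Hand-collected path constraints, one list per control path.
paths = {
    "big-branch": [x > 100],
    "small-branch": [Not(x > 100)],
}

for name, constraints in paths.items():
    s = Solver()
    s.add(*constraints)
    if s.check() == sat:
        print(name, "-> test input x =", s.model()[x].as_long())
    else:
        print(name, "-> infeasible path")
```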
Model checking
Model checkers can also be used for test case generation.[8] Originally, model checking was developed as a technique to check whether a property of a specification is valid in a model. When used for testing, a model of the system under test and a property to test are provided to the model checker. In the process of checking whether the property holds in the model, the model checker produces witnesses and counterexamples. A witness is a path on which the property is satisfied, whereas a counterexample is a path in the execution of the model on which the property is violated. These paths can again be used as test cases.
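As a toy illustration of counterexample-driven test generation, the sketch below explores an invented state graph breadth-first and checks the safety property "the error state is never reached"; the returned counterexample path can be replayed against the SUT as a test case.

```python
from collections import deque

# Invented transition system: state -> list of (input, next_state).
model = {
    "init":          [("login", "authenticated"), ("skip", "guest")],
    "guest":         [("checkout", "error")],   # property violation reachable here
    "authenticated": [("checkout", "done")],
    "done":          [],
    "error":         [],
}

def counterexample(initial, bad_state):
    """BFS for a path violating the safety property 'bad_state is unreachable'."""
    queue = deque([(initial, [])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        if state == bad_state:
            return path          # counterexample: a test case exposing the violation
        for inp, nxt in model[state]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [inp]))
    return None                  # property holds: no counterexample found

print(counterexample("init", "error"))   # e.g. ['skip', 'checkout']
```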
Test case generation by using a Markov chain test model
Markov chains are an efficient way to handle model-based testing. Test models realized with Markov chains can be understood as usage models, so the approach is referred to as usage/statistical model-based testing. Usage models, i.e. Markov chains, are mainly constructed from two artifacts: a finite state machine (FSM), which represents all possible usage scenarios of the tested system, and operational profiles (OP), which qualify the FSM to represent how the system is or will be used statistically. The first (FSM) helps to know what can be or has been tested, and the second (OP) helps to derive operational test cases. Usage/statistical model-based testing starts from the facts that it is not possible to exhaustively test a system and that failures can appear with a very low rate.[9] This approach offers a pragmatic way to statistically derive test cases focused on improving the reliability of the system under test. Usage/statistical model-based testing was recently extended to be applicable to embedded software systems.[10][11]
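A minimal sketch of a usage-model walk, assuming an invented Markov chain whose transition probabilities play the role of the operational profile: each random walk from the start state yields one statistically representative test case.

```python
import random

# Invented usage model: state -> list of (input, next_state, probability).
# The probabilities are the operational profile.
usage_model = {
    "start":     [("browse", "browsing", 0.7), ("search", "searching", 0.3)],
    "browsing":  [("add_to_cart", "cart", 0.4), ("quit", "end", 0.6)],
    "searching": [("add_to_cart", "cart", 0.5), ("quit", "end", 0.5)],
    "cart":      [("checkout", "end", 0.9), ("quit", "end", 0.1)],
    "end":       [],
}

def random_walk(model, start="start"):
    """One statistical test case: a random walk weighted by the operational profile."""
    state, trace = start, []
    while model[state]:
        inputs, targets, weights = zip(*model[state])
        i = random.choices(range(len(inputs)), weights=weights)[0]
        trace.append(inputs[i])
        state = targets[i]
    return trace

random.seed(0)
for _ in range(3):
    print(random_walk(usage_model))
```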
Input space modeling
Abstract test cases can be generated automatically from a model of the "input space" of the SUT. The input space is defined by all of the input variables that affect SUT behavior, including not only explicit input parameters but also relevant internal state variables and even the internal state of external systems used by the SUT. For example, SUT behavior may depend on state of a file system or a database. From a model that defines each input variable and its value domain, it is possible to generate abstract test cases that describe various input combinations. Input space modeling is a common element in combinatorial testing techniques. [12] Combinatorial testing provides a useful quantification of test adequacy known as "N-tuple coverage". For example, 2-tuple coverage (all-pairs testing) means that for each pair of input variables, every 2-tuple of value combinations is used in the test suite. Tools that generate test cases from input space models [13] often use a "coverage model" that allows for selective tuning of the desired level of N-tuple coverage.
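As an illustration of 2-tuple (all-pairs) coverage, the sketch below builds a small invented input space model and greedily constructs a suite covering every pair of values. The variable names and domains are hypothetical, and real tools such as the ones cited use more sophisticated algorithms.

```python
import itertools
import random

# Invented input space model: each variable with its domain of values.
input_space = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["linux", "windows", "macos"],
    "locale": ["en", "de"],
}

def pairs_of(test, names):
    """All (variable, value) pairs exercised by one test case."""
    return {((a, test[a]), (b, test[b])) for a, b in itertools.combinations(names, 2)}

def pairwise_suite(space, trials=30, seed=0):
    """Greedy construction of a test suite with 2-tuple (all-pairs) coverage."""
    rng = random.Random(seed)
    names = list(space)
    uncovered = set()
    for a, b in itertools.combinations(names, 2):
        uncovered |= {((a, va), (b, vb)) for va in space[a] for vb in space[b]}
    suite = []
    while uncovered:
        (a, va), (b, vb) = next(iter(uncovered))
        best, best_gain = None, -1
        for _ in range(trials):
            candidate = {n: rng.choice(space[n]) for n in names}
            candidate[a], candidate[b] = va, vb   # guarantee progress on one pair
            gain = len(pairs_of(candidate, names) & uncovered)
            if gain > best_gain:
                best, best_gain = candidate, gain
        suite.append(best)
        uncovered -= pairs_of(best, names)
    return suite

suite = pairwise_suite(input_space)
print(len(suite), "tests instead of",
      len(list(itertools.product(*input_space.values()))), "exhaustive combinations")
```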
Solutions
- CA Test Case Optimizer
- Conformiq Tool Suite
- MaTeLo (Markov Test Logic) - All4tec
- Smartesting CertifyIt
- TPT
See also
- Domain Specific Language (DSL)
- Domain Specific Modeling (DSM)
- Model Driven Architecture (MDA)
- Model Driven Engineering (MDE)
- Object Oriented Analysis and Design (OOAD)
- Time Partition Testing (TPT)
References
- [1] Paul Ammann and Jeff Offutt. Introduction to Software Testing. Cambridge University Press, 2008.
- [2] Mark Utting and Bruno Legeard. Practical Model-Based Testing: A Tools Approach. Morgan Kaufmann, 2007. ISBN 978-0-12-372501-1.
- [3] Jeff Offutt and Aynur Abdurazik. Generating Tests from UML Specifications. Second International Conference on the Unified Modeling Language (UML '99), pages 416–429, Fort Collins, CO, October 1999.
- [4] John Rushby. Automated Test Generation and Verified Software. Verified Software: Theories, Tools, Experiments: First IFIP TC 2/WG 2.3 Conference, VSTTE 2005, Zurich, Switzerland, October 10–13, pages 161–172. Springer-Verlag.
- [5] Achim D. Brucker and Burkhart Wolff (2012). "On Theorem Prover-based Testing". Formal Aspects of Computing. doi:10.1007/s00165-012-0222-y.
- [6] Jefferson Offutt. Constraint-Based Automatic Test Data Generation. IEEE Transactions on Software Engineering, 17:900–910, 1991.
- [7] Antti Huima. Implementing Conformiq Qtronic. Testing of Software and Communicating Systems, Lecture Notes in Computer Science, Volume 4581, pages 1–12, 2007. doi:10.1007/978-3-540-73066-8_1.
- [8] Gordon Fraser, Franz Wotawa, and Paul E. Ammann. Testing with Model Checkers: A Survey. Software Testing, Verification and Reliability, 19(3):215–261, 2009. URL: http://www3.interscience.wiley.com/journal/121560421/abstract
- [9] Helene Le Guen. Validation d'un logiciel par le test statistique d'usage : de la modelisation de la decision à la livraison, 2005. URL: ftp://ftp.irisa.fr/techreports/theses/2005/leguen.pdf
- [10] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5954385&tag=1
- [11] http://www.amazon.de/Model-Based-Statistical-Continuous-Concurrent-Environment/dp/3843903484/ref=sr_1_1?ie=UTF8&qid=1334231267&sr=8-1
- [12] "Combinatorial Methods in Testing", National Institute of Standards and Technology.
- [13] "Tcases: A Model-Driven Test Case Generator", The Cornutum Project.
Further reading
- OMG UML 2 Testing Profile;
- Bringmann, E.; Krämer, A. (2008). "Model-Based Testing of Automotive Systems" (PDF). 2008 International Conference on Software Testing, Verification, and Validation. International Conference on Software Testing, Verification, and Validation (ICST). pp. 485–493. doi:10.1109/ICST.2008.45. ISBN 978-0-7695-3127-4.
- Practical Model-Based Testing: A Tools Approach, Mark Utting and Bruno Legeard, ISBN 978-0-12-372501-1, Morgan-Kaufmann 2007.
- Model-Based Software Testing and Analysis with C#, Jonathan Jacky, Margus Veanes, Colin Campbell, and Wolfram Schulte, ISBN 978-0-521-68761-4, Cambridge University Press 2008.
- Model-Based Testing of Reactive Systems Advanced Lecture Series, LNCS 3472, Springer-Verlag, 2005. ISBN 978-3-540-26278-7.
- Hong Zhu; et al. (2008). AST '08: Proceedings of the 3rd International Workshop on Automation of Software Test. ACM Press. ISBN 978-1-60558-030-2.
- Santos-Neto, P.; Resende, R.; Pádua, C. (2007). "Requirements for information systems model-based testing". Proceedings of the 2007 ACM symposium on Applied computing - SAC '07. Symposium on Applied Computing. pp. 1409–1415. doi:10.1145/1244002.1244306. ISBN 1-59593-480-4.
- Roodenrijs, E. (Spring 2010). "Model-Based Testing Adds Value". Methods & Tools 18 (1): 33–39. ISSN 1661-402X.
- A Systematic Review of Model Based Testing Tool Support, Muhammad Shafique, Yvan Labiche, Carleton University, Technical Report, May 2010.
- Zander, Justyna; Schieferdecker, Ina; Mosterman, Pieter J., eds. (2011). Model-Based Testing for Embedded Systems. Computational Analysis, Synthesis, and Design of Dynamic Systems 13. Boca Raton: CRC Press. ISBN 978-1-4398-1845-9.
- Online Community for Model-based Testing
- 2011 Model-based Testing User Survey: Results and Analysis. Robert V. Binder. System Verification Associates, February 2012
- 2014 Model-based Testing User Survey: Results Robert V. Binder, Anne Kramer, Bruno Legeard, 2014
- Kramer, A., Legeard, B. (2016): "Model-Based Testing Essentials - Guide to the ISTQB(R) Certified Model-Based Tester - Foundation Level". John Wiley & Sons, 2016, ISBN 978-1119130017