Using transient/persistent errors to develop automated test oracles for event-driven software
Title | Using transient/persistent errors to develop automated test oracles for event-driven software |
Publication Type | Conference Papers |
Year of Publication | 2004 |
Authors | Memon AM, Xie Q |
Conference Name | Proceedings of the 19th International Conference on Automated Software Engineering (ASE 2004) |
Date Published | 2004/09 |
Keywords | automated test oracles, automatic testing, event-driven software, graphical user interfaces, persistent errors, program testing, resource allocation, resource utilization, software-intensive systems, test case execution, transient errors |
Abstract | Today's software-intensive systems contain an important class of software, namely event-driven software (EDS). All EDS take events as input, change their state, and (perhaps) output an event sequence. EDS is typically implemented as a collection of event handlers designed to respond to individual events. The nature of EDS creates new challenges for test automation. In this paper, we focus on those relevant to automated test oracles. A test oracle is a mechanism that determines whether the software executed correctly for a test case. A test case for an EDS consists of a sequence of events. The test case is executed on the EDS, one event at a time. Errors in the EDS may "appear" and later "disappear" at several points (e.g., after an event is executed) during test case execution. Because of the behavior of these transient (those that disappear) and persistent (those that don't disappear) errors, EDS require complex and expensive test oracles that compare the expected and actual output multiple times during test case execution. We leverage our previous work to study several applications and observe the occurrence of persistent/transient errors. Our studies show that, in practice, a large number of errors in EDS are transient and that specific classes of events lead to transient errors. We use the results of this study to develop a new test oracle that compares the expected and actual output at strategic points during test case execution. We show that the oracle is effective at detecting errors and efficient in terms of resource utilization. |
DOI | 10.1109/ASE.2004.1342736 |
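
Editor's note: the sketch below is not taken from the paper; it is a minimal illustration, under assumed names, of the distinction the abstract draws between transient and persistent errors, and of why an oracle that compares expected and actual state only at selected points during test case execution may miss a transient error that a per-event comparison would catch. The functions, events, and state snapshots (run_test_case, make_executor, open_dialog, etc.) are all hypothetical.

```python
# Minimal, illustrative sketch (not the authors' implementation): execute an
# EDS test case one event at a time and let a test oracle compare expected
# vs. actual state either after every event or only at chosen points.
from typing import Callable, Dict, List, Sequence

State = Dict[str, object]  # simplified stand-in for a GUI/EDS state snapshot


def run_test_case(
    events: Sequence[str],
    execute_event: Callable[[str], State],  # runs one event, returns the actual state
    expected_states: Sequence[State],       # oracle information, one snapshot per event
    compare_after: Sequence[int],           # event indices at which the oracle compares
) -> List[int]:
    """Execute the event sequence; return indices where expected != actual."""
    mismatches: List[int] = []
    for i, event in enumerate(events):
        actual = execute_event(event)
        if i in compare_after and actual != expected_states[i]:
            mismatches.append(i)
    return mismatches


def make_executor(actual_states: Sequence[State]) -> Callable[[str], State]:
    """Fake event executor that replays a pre-recorded sequence of actual states."""
    it = iter(actual_states)
    return lambda event: next(it)


if __name__ == "__main__":
    events = ["open_dialog", "type_text", "close_dialog"]
    expected = [{"dialog": "open"},
                {"dialog": "open", "text": "ok"},
                {"dialog": "closed"}]
    # Transient error: an intermediate state is wrong, but the final state
    # looks correct again, so a final-state-only oracle misses it.
    actual = [{"dialog": "open"},
              {"dialog": "open", "text": "oops"},
              {"dialog": "closed"}]

    print(run_test_case(events, make_executor(actual), expected, compare_after=[2]))
    # -> []   (oracle checks only the last event: the transient error goes undetected)
    print(run_test_case(events, make_executor(actual), expected, compare_after=[0, 1, 2]))
    # -> [1]  (per-event comparison catches the transient error, at higher cost)
```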