Techniques for Classifying Executions of Deployed Software to Support Software Engineering Tasks
Title | Techniques for Classifying Executions of Deployed Software to Support Software Engineering Tasks |
Publication Type | Journal Articles |
Year of Publication | 2007 |
Authors | Haran M, Karr A, Last M, Orso A, Porter A, Sanil A, Fouché S |
Journal | IEEE Transactions on Software Engineering |
Volume | 33 |
Issue | 5 |
Pagination | 287-304 |
Date Published | 2007 |
ISSN | 0098-5589 |
Keywords | execution classification, remote analysis/measurement |
Abstract | There is an increasing interest in techniques that support analysis and measurement of fielded software systems. These techniques typically deploy numerous instrumented instances of a software system, collect execution data when the instances run in the field, and analyze the remotely collected data to better understand the system's in-the-field behavior. One common need for these techniques is the ability to distinguish execution outcomes (e.g., to collect only data corresponding to some behavior or to determine how often and under which conditions a specific behavior occurs). Most current approaches, however, do not perform any kind of classification of remote executions: they either focus on easily observable behaviors (e.g., crashes) or assume that outcomes' classifications are externally provided (e.g., by the users). To address the limitations of existing approaches, we have developed three techniques for automatically classifying execution data as belonging to one of several classes. In this paper, we introduce our techniques and apply them to the binary classification of passing and failing behaviors. Our three techniques impose different overheads on program instances and, thus, each is appropriate for different application scenarios. We performed several empirical studies to evaluate and refine our techniques and to investigate the trade-offs among them. Our results show that 1) the first technique can build very accurate models, but requires a complete set of execution data; 2) the second technique produces slightly less accurate models, but needs only a small fraction of the total execution data; and 3) the third technique allows for even further cost reductions by building the models incrementally, but requires some sequential ordering of the software instances' instrumentation. |
DOI | 10.1109/TSE.2007.1004 |
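
To make the kind of pass/fail classification the abstract describes concrete, the sketch below trains a tree-ensemble classifier on synthetic per-execution instrumentation counts and then classifies held-out executions. It is an illustration only, not the authors' implementation: the feature layout (counts of instrumented entities per execution), the synthetic failure rule, and the use of scikit-learn's RandomForestClassifier are assumptions made for this example.

```python
# Illustrative sketch of learning-based pass/fail classification of execution data.
# All data and modeling choices here are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "execution data": each row is one execution, each column the number
# of times an instrumented entity (e.g., a branch) was exercised in that run.
n_executions, n_entities = 200, 50
X = rng.poisson(lam=3.0, size=(n_executions, n_entities))

# Synthetic labels: pretend executions that exercise entity 7 heavily are failing.
y = (X[:, 7] > 4).astype(int)  # 1 = failing, 0 = passing

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Build a classification model from labeled executions...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# ...and use it to classify previously unseen ("fielded") executions.
accuracy = model.score(X_test, y_test)
print(f"classification accuracy on held-out executions: {accuracy:.2f}")
```

In practice, the paper's three techniques differ mainly in how much execution data must be collected from each deployed instance before a model of this general kind can be built; the sketch above corresponds to the simplest case in which complete execution data are available.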