Systematic exploration of hypotheses is a major part of any empirical research. In software engineering, we often produce unique tools for experiments and evaluate them independently on different data sets. In this paper, we present KernelHaven as an experimentation workbench supporting a significant number of experiments in the domain of static product line analysis and verification. It addresses the need for extracting information from a variety of artifacts in this domain by means of an open plug-in infrastructure. Available plug-ins encapsulate existing tools, which can now be combined efficiently to yield new analyses. As an experimentation workbench, it provides configuration-based definitions of experiments, their documentation, and technical services, like parallelization and caching. Hence, researchers can abstract from technical details and focus on the algorithmic core of their research problem. KernelHaven supports different types of analyses, like correctness checks, metrics, etc., in its specific domain. The concepts presented in this paper can also be transferred to support researchers of other software engineering domains.

Highly configurable systems allow users to tailor software to specific needs. Valid combinations of configuration options are often restricted by intricate constraints. Describing options and constraints in a variability model allows reasoning about the supported configurations. To automate creating and verifying such models, we need to identify the origin of such constraints. We propose a static analysis approach, based on two rules, to extract configuration constraints from code. We apply it on four highly configurable systems to evaluate the accuracy of our approach and to determine which constraints are recoverable from the code. We find that our approach is highly accurate (93% and 77%, respectively) and that we can recover 28% of existing constraints. We complement our approach with a qualitative study to identify constraint sources, triangulating results from our automatic extraction, manual inspections, and interviews with 27 developers. We find that, apart from low-level implementation dependencies, configuration constraints enforce correct runtime behavior, improve users’ configuration experience, and prevent corner cases. While the majority of constraints are extractable from code, our results indicate that creating a complete model requires further substantial domain knowledge and testing. Our results aim at supporting researchers and practitioners working on variability model engineering, evolution, and verification techniques.
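To give a flavor of what extracting configuration constraints from code can look like, here is a minimal sketch of one common nesting heuristic, not the authors' actual two rules: if code guarded by option B only ever appears inside blocks guarded by option A, the nesting suggests the candidate constraint "B implies A". The function name and the sample options (`CONFIG_NET`, `CONFIG_WLAN`) are hypothetical, chosen only for illustration.

```python
import re

def extract_constraints(source: str):
    """Return a set of (inner, outer) pairs meaning 'inner implies outer'.

    Sketch of a nesting heuristic: every #ifdef opened while other
    #ifdef blocks are still open yields a candidate implication from
    the inner option to each enclosing option.
    """
    stack = []            # options of the currently open #ifdef blocks
    constraints = set()
    for line in source.splitlines():
        line = line.strip()
        m = re.match(r"#ifdef\s+(\w+)", line)
        if m:
            inner = m.group(1)
            for outer in stack:   # inner is nested under every open option
                constraints.add((inner, outer))
            stack.append(inner)
        elif line.startswith("#endif") and stack:
            stack.pop()
    return constraints

# Hypothetical example input: WLAN support nested inside networking support.
code = """
#ifdef CONFIG_NET
#ifdef CONFIG_WLAN
void wlan_init(void) { }
#endif
#endif
"""

print(extract_constraints(code))  # {('CONFIG_WLAN', 'CONFIG_NET')}
```

Real extractors must additionally handle `#if` expressions, `#else`/`#elif` branches, and build-system conditions, which is where the bulk of the engineering effort lies.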