Emergent Interfaces and Data-flow Analysis for Software Product Lines
Intra-Procedural Data-flow Analysis for Software Product Lines
Abstract:
Software product lines (SPLs) are commonly developed using annotative approaches such as conditional compilation that come with an inherent risk of constructing erroneous products. For this reason, it is essential to be able to analyze SPLs. However, as dataflow analysis techniques are not able to deal with SPLs, developers currently have to generate all valid methods by brute force and analyze each of them individually, which is expensive for non-trivial SPLs. In this paper, we demonstrate how to take any standard intraprocedural dataflow analysis and automatically turn it into a feature-sensitive dataflow analysis in three different ways. All are capable of analyzing all valid methods of an SPL without having to generate all of them explicitly. We have implemented all analyses as extensions of SOOT’s intraprocedural dataflow analysis framework and experimentally evaluated their performance and memory characteristics on four qualitatively different SPLs. The results indicate that the feature-sensitive analyses may be up to eight times faster than the brute-force approach and that the analyses have different time and space tradeoffs.
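To give an intuition for the lifting, here is a minimal, SOOT-independent sketch of the simultaneous (A3) idea. All names (LiftedFact, flowThrough, mergeWith) are illustrative, not our actual implementation, which extends SOOT's ForwardFlowAnalysis. A configuration is modeled as the set of enabled features, and a lifted flow fact maps each valid configuration to a fact of the base analysis, so a single fixed-point computation covers all products:

import java.util.*;
import java.util.function.BinaryOperator;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

final class LiftedFact<F> {
    // One base-analysis fact per valid configuration (set of enabled features).
    private final Map<Set<String>, F> perConfig = new HashMap<>();

    LiftedFact(Collection<Set<String>> validConfigs, F initial) {
        for (Set<String> config : validConfigs)
            perConfig.put(config, initial);
    }

    // Lifted transfer function: apply the base transfer function only under
    // configurations satisfying the statement's presence condition; in all
    // other configurations the statement is compiled away, so the fact is kept.
    void flowThrough(Predicate<Set<String>> presenceCondition, UnaryOperator<F> baseTransfer) {
        for (Map.Entry<Set<String>, F> e : perConfig.entrySet())
            if (presenceCondition.test(e.getKey()))
                e.setValue(baseTransfer.apply(e.getValue()));
    }

    // Lifted merge at control-flow joins: apply the base join pointwise,
    // configuration by configuration.
    void mergeWith(LiftedFact<F> other, BinaryOperator<F> baseJoin) {
        for (Map.Entry<Set<String>, F> e : perConfig.entrySet())
            e.setValue(baseJoin.apply(e.getValue(), other.perConfig.get(e.getKey())));
    }
}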
Source Code
Our implementation is available at our Assembla space.
General Instructions
Here you can find instructions on how to install our plug-in, along with an architectural overview and implementation details.
Results
Feature-oblivious (brute-force approach) data
This approach consists of building all valid method variants and analyzing them one by one with a conventional data-flow analysis, as sketched below. The results can be found here.
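A minimal sketch of this brute-force (A1) procedure. All names (isValid, preprocess, runBaseAnalysis) are hypothetical stubs, not our actual code:

import java.util.*;

final class BruteForceSketch {
    interface Method {}

    // Stubs standing in for: querying the feature model, resolving the
    // conditional-compilation annotations, and the conventional analysis.
    static boolean isValid(Set<String> config) { return true; }
    static Method preprocess(Method m, Set<String> config) { return m; }
    static void runBaseAnalysis(Method variant) {}

    // Analyze every valid product of the method, one variant at a time.
    static void analyzeAllVariants(List<String> features, Method m) {
        for (Set<String> config : powerSet(features))
            if (isValid(config))
                runBaseAnalysis(preprocess(m, config));
    }

    // All 2^n subsets of the feature set; this is the exponential blow-up.
    static List<Set<String>> powerSet(List<String> features) {
        List<Set<String>> all = new ArrayList<>();
        all.add(new HashSet<>());
        for (String f : features) {
            int n = all.size();
            for (int i = 0; i < n; i++) {
                Set<String> withF = new HashSet<>(all.get(i));
                withF.add(f);
                all.add(withF);
            }
        }
        return all;
    }
}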
Feature-sensitive (consecutive, simultaneous, and lazy-sharing approaches) data, memory data, and cache-miss data
The collected data are packed into two RAR files. The cache-miss numbers are also packed in two files: normal and full cache. More specifically, we compare the cache misses of two feature-sensitive approaches: consecutive (A2) and simultaneous (A3). To collect the cache-miss data, we used the Overseer tool.
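The contrast behind this comparison can be sketched as follows (helper names are hypothetical stubs, not our actual code): the consecutive approach traverses the method once per configuration with small flow facts, while the simultaneous approach traverses it once in total with one fact per configuration at every program point, which gives the two approaches different memory-locality profiles:

import java.util.*;

final class StrategySketch {
    interface AnnotatedMethod {}

    // Hypothetical stubs for the two lifted analysis runs.
    static void analyzeOnce(AnnotatedMethod m, Set<String> config) {}
    static void analyzeAll(AnnotatedMethod m, List<Set<String>> configs) {}

    // Consecutive (A2): one lifted run per valid configuration. Each run
    // keeps a single base fact per program point, so its working set is
    // small, but the annotated method is traversed once per configuration.
    static void consecutive(AnnotatedMethod m, List<Set<String>> validConfigs) {
        for (Set<String> config : validConfigs)
            analyzeOnce(m, config);
    }

    // Simultaneous (A3): a single fixed-point computation carries one fact
    // per configuration at every program point. The method is traversed
    // once, but flow facts are larger, which changes cache behavior.
    static void simultaneous(AnnotatedMethod m, List<Set<String>> validConfigs) {
        analyzeAll(m, validConfigs);
    }
}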
The equivalence proofs of the feature-sensitive approaches are available here.
Benchmarks
All SPLs we used are available here. Their features are annotated using CIDE; an illustration is given below.
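For illustration, here is a hypothetical MobileMedia-style method with two optional features, written with textual Antenna-style //#ifdef markers for readability; in CIDE the same annotations appear as background colors rather than text:

// Hypothetical MobileMedia-style method with two optional features.
final class PhotoController {
    void handleCommand() {
        loadImage();
        //#ifdef SORTING
        sortByViews();      // included only in products with SORTING
        //#endif
        //#ifdef FAVOURITES
        markFavourites();   // included only in products with FAVOURITES
        //#endif
        show();
    }

    void loadImage() {}
    void sortByViews() {}
    void markFavourites() {}
    void show() {}
}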
Histograms of each benchmark:
Members:
CIn-UFPE
IT University of Copenhagen
Errata
- In Figure 9 (d), BerkeleyDB actually has no methods with 8 features (x-axis) and thus no variants (y-axis).
- The archive file linked above with the sheets for the feature-oblivious experiment does not reflect the removal of the two outliers described in the paper. The data presented in the paper, however, are correct. Here are the updated sheets.
- The feature-sensitive time measurements for MobileMedia08 do not reflect the removal of the outliers described in the paper. Here is the updated sheet. The final results change only slightly, and the overall conclusions remain the same.
-- TarsisToledo - 13 Jul 2012
-- TarsisToledo - 23 Jan 2012
-- MarcioRibeiro - 26 Dec 2011
-- TarsisToledo - 20 Dec 2011
-- MarcioRibeiro - 11 Dec 2011
-- MarcioRibeiro - 09 Dec 2011
-- MarcioRibeiro - 10 Apr 2011
-- MarcioRibeiro - 10 Dec 2010