Mantis: Automatic Performance Prediction for Smartphone Applications
Yongin Kwon, Sangmin Lee, Hayoon Yi, Donghyun Kwon, Seungjun Yang, Byung- Gon Chun,
Ling Huang, Petros Maniatis, Mayur Naik, Yunheung Paek
USENIX ATC’13
Performance Prediction Problem
Predict the execution time of a program on a given input before running it.
Two kinds of Approaches
Most existing techniques can be classified into two broad categories.
◦Domain-specific programs, automatically-extracted features.
◦General-purpose programs, manually-specified features.
Mantis
A new framework to automatically predict the performance of general-purpose byte-code programs on given inputs.
Four components:
◦Feature instrumentor
◦Profiler
◦Performance model generator
◦Predictor code generator
Architecture
Feature Instrumentor
Instruments the program to collect the values of features (f1, …, fM) as per the feature schemes.
Feature scheme
◦Branch counts
◦Loop counts
◦Method-call counts
◦Variable values
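The four feature schemes above can be illustrated with a hypothetical source-level sketch in Python (the real Mantis instrumentor works on Java bytecode; all counter names here are made up for illustration):

```python
# Hypothetical source-level sketch of Mantis-style feature instrumentation.
# The real system instruments Java bytecode; counter names are illustrative.

features = {"branch_then": 0, "loop_iters": 0, "calls_work": 0, "var_n": 0}

def work(x):
    features["calls_work"] += 1           # method-call count
    return x * x

def program(n):
    features["var_n"] = n                 # variable value
    total = 0
    for i in range(n):
        features["loop_iters"] += 1       # loop count
        if i % 2 == 0:
            features["branch_then"] += 1  # branch count
            total += work(i)
    return total

program(10)
# After the run, `features` holds one feature vector v for this input.
```

Each run of the instrumented program thus yields one feature vector alongside the program's normal output.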
Examples
Profiler
Outputs a data set {(vi, ti) | i = 1, …, N}.
◦ti: the ith observation of execution time.
◦vi = [fi1, …, fiM]: the ith observation of the vector of M features.
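The profiling loop can be sketched as follows; `instrumented_run` is a hypothetical stand-in for an instrumented program that returns its feature vector, and the timing is a simplification of the paper's setup:

```python
import random
import time

def instrumented_run(n):
    # Stand-in for an instrumented program: does some work and
    # returns its feature vector v (here M = 2 features).
    iters = 0
    total = 0
    for i in range(n):
        iters += 1
        total += i * i
    return [n, iters]

def profile(inputs):
    # Run the program on each input, recording (v_i, t_i) pairs.
    dataset = []
    for x in inputs:
        start = time.perf_counter()
        v = instrumented_run(x)
        t = time.perf_counter() - start
        dataset.append((v, t))
    return dataset

data = profile([random.randint(1, 1000) for _ in range(10)])
```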
Performance Modeling
Performs a sparse nonlinear regression on the feature values and execution times.
Produces a function f′(v′):
◦f′(v′) is the approximation of the execution time.
◦v′ = [f′1, …, f′K] is a subset of v = [f1, …, fM].
In practice, K << M.
Performance Modeling (Cont.)
However, regression with best subset selection is NP-hard.
◦Find the subset of size K that gives the smallest Residual Sum of Squares (RSS).
◦Discrete optimization problem.
SPORE-FoBa
Sparse POlynomial REgression with Forward-Backward (FoBa) feature selection.
◦Forward step: a feature from the candidate set is added into the model if and only if adding it makes the RSS decrease substantially.
If the drop is greater than ε.
◦Backward step: remove the feature from the active set whose deletion makes the RSS increase the least.
If the increase is smaller than ε'.
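A pure-Python sketch of this forward-backward greedy selection, in the spirit of SPORE-FoBa (the thresholds `eps`/`eps_back`, the normal-equation solver, and the stopping rules are simplifications, not the paper's implementation):

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for the normal equations.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b2 for a, b2 in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rss(X, y, cols):
    # Residual sum of squares of a least-squares fit on feature subset `cols`.
    Z = [[x[j] for j in cols] + [1.0] for x in X]   # include an intercept
    k = len(cols) + 1
    n = len(Z)
    A = [[sum(Z[i][a] * Z[i][b] for i in range(n)) for b in range(k)]
         for a in range(k)]
    bvec = [sum(Z[i][a] * y[i] for i in range(n)) for a in range(k)]
    w = solve(A, bvec)
    return sum((sum(w[a] * Z[i][a] for a in range(k)) - y[i]) ** 2
               for i in range(n))

def foba(X, y, eps=1e-3, eps_back=None):
    # Forward: add the feature whose inclusion drops the RSS by more than eps.
    # Backward: drop any feature whose removal raises the RSS by less than eps_back.
    eps_back = eps / 2 if eps_back is None else eps_back
    M = len(X[0])
    active = []
    cur = rss(X, y, active)
    while True:
        cand = [j for j in range(M) if j not in active]
        if not cand:
            break
        best = min(cand, key=lambda j: rss(X, y, active + [j]))
        if cur - rss(X, y, active + [best]) <= eps:
            break
        active.append(best)
        cur = rss(X, y, active)
        while len(active) > 1:
            worst = min(active,
                        key=lambda j: rss(X, y, [k for k in active if k != j]))
            if rss(X, y, [k for k in active if k != worst]) - cur >= eps_back:
                break
            active.remove(worst)
            cur = rss(X, y, active)
    return active
```

On data where the response depends on only a few of the M features, the returned active set recovers that small subset, which is exactly the K << M sparsity the slides describe.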
Example
Degree-2 polynomial with v = [x1, x2].
◦Expand (1 + x1 + x2)² to get the terms 1, x1, x2, x1², x1x2, x2².
◦Construct the following function for regression:
f(v) = β0 + β1·x1 + β2·x2 + β3·x1² + β4·x1x2 + β5·x2²
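The term expansion in this example can be sketched generically; `poly_terms` is a hypothetical helper that enumerates the monomials up to a given degree, which then serve as the regressors of a model that is linear in its coefficients:

```python
from itertools import combinations_with_replacement

def poly_terms(v, degree=2):
    # Enumerate the monomials of v up to `degree`; for v = [x1, x2] and
    # degree 2 this yields 1, x1, x2, x1^2, x1*x2, x2^2, matching the
    # expansion of (1 + x1 + x2)^2.
    terms = [1.0]                     # constant term
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement(range(len(v)), d):
            prod = 1.0
            for j in combo:
                prod *= v[j]
            terms.append(prod)
    return terms

print(poly_terms([2.0, 3.0]))  # [1.0, 2.0, 3.0, 4.0, 6.0, 9.0]
```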
Predictor code generator
Produces a code snippet, called a slice, for each chosen feature.
◦Slice: an executable sub-program that yields the same value v of a feature at a program point p as the given program, on all inputs.
Automatically evaluates feature values for each input by executing the slices.
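The contrast between a program and the slice for one of its features can be illustrated with a hypothetical Python analog (the real system slices Java bytecode):

```python
def expensive_transform(x):
    # Stand-in for the program's costly real work.
    return sum(i * i for i in range(1000)) + x

def program(items):
    loop_iters = 0                # instrumented feature
    results = []
    for x in items:
        loop_iters += 1
        results.append(expensive_transform(x))   # costly work
    return results, loop_iters

def slice_loop_iters(items):
    # Executable sub-program: yields the same loop_iters value as
    # `program` on all inputs, with the expensive work sliced away.
    loop_iters = 0
    for _ in items:
        loop_iters += 1
    return loop_iters
```

Because the slice skips the costly computation, evaluating the feature on a new input is far cheaper than running the full program.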
Example
Prototype Toolchain
Experiment Setup
A machine running Ubuntu 11.10 64-bit with a 3.1 GHz quad-core CPU and 8 GB of RAM.
A Galaxy Nexus running Android 4.1.2 with a dual-core 1.2 GHz CPU and 1 GB of RAM.
Six CPU-intensive Android applications.
◦Each with 1,000 randomly generated inputs.
◦Train the predictor on 100 inputs.
Experimental Results
Features and Models
Effect of the Number of Training Inputs
Compare with Linear Model
Prediction Time of Mantis and PE
Prediction Error of Mantis and BE
Prediction on Different Hardware Platform
Prediction under Background Load
Offline Stage Processing Time
Conclusion
Mantis is a framework that automatically generates program performance predictors.
◦Combines program slicing and sparse regression in a novel way.
Evaluation shows that the generated predictors estimate execution time accurately and efficiently for smartphone applications.