Mantis: Automatic Performance Prediction for Smartphone Applications

(1)

Mantis: Automatic Performance Prediction for Smartphone Applications

Yongin Kwon, Sangmin Lee, Hayoon Yi, Donghyun Kwon, Seungjun Yang, Byung-Gon Chun,

Ling Huang, Petros Maniatis, Mayur Naik, Yunheung Paek

USENIX ATC’13

(2)

Performance Prediction Problem

Predict the execution time of a program on a given input before running it.

(3)

Two kinds of Approaches

Most existing techniques can be classified into two broad categories:

Domain-specific programs with automatically extracted features.

General-purpose programs with manually specified features.

(4)

Mantis

A new framework to automatically predict the performance of general-purpose byte-code programs on given inputs.

Four components:

Feature instrumentor

Profiler

Performance model generator

Predictor code generator

(5)

Architecture

(6)

Feature Instrumentor

Instruments the program to collect the values of features (f1, …, fM) as per feature schemes.

Feature scheme

Branch counts

Loop counts

Method-call counts

Variable values
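The four feature kinds above can be illustrated with a small sketch. Mantis instruments Java bytecode automatically; this Python analogue (the program, helper, and feature names are all invented for illustration) simply bumps counters as an example program runs:

```python
# Illustrative sketch of the four feature kinds; not Mantis's instrumentor.
features = {"branch_true": 0, "loop_iters": 0, "calls_helper": 0, "var_n": 0}

def helper(x):
    features["calls_helper"] += 1      # method-call count
    return x * 2

def program(n):
    features["var_n"] = n              # variable value at a program point
    total = 0
    for i in range(n):
        features["loop_iters"] += 1    # loop count
        if i % 2 == 0:
            features["branch_true"] += 1   # branch count (taken direction)
            total += helper(i)
    return total

program(10)
print(features)
# {'branch_true': 5, 'loop_iters': 10, 'calls_helper': 5, 'var_n': 10}
```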

(7)

Examples

(8)

Profiler

Outputs a data set

ti: the ith observation of execution time.

vi: the ith observation of the vector of M features.

The data set is {(ti, vi)} for i = 1, …, N, where vi = [fi1, …, fiM].
(9)

Performance Modeling

Performs a sparse nonlinear regression on the feature values and execution time.

Produces a function f(v').

f(v') is the approximation of execution time.

v' = [f1, …, fK] is a subset of v = [f1, …, fM].

In practice, K << M.

(10)

Performance Modeling (Cont.)

However, regression with best subset selection is NP-hard.

Find the subset of size K that gives the smallest Residual Sum of Squares (RSS).

Discrete optimization problem.

(11)

SPORE-FoBa

Sparse POlynomial REgression – FoBa.

A feature from the candidate set is added into the model if and only if adding it decreases the RSS significantly:

If the drop is greater than ε.

A feature is removed from the active set if deleting it increases the RSS the least:

If the increase is smaller than ε'.
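The forward and backward steps above can be sketched as a greedy subset-selection loop. This is a simplified Python rendition of the FoBa idea, not the paper's exact algorithm; the thresholds and the least-squares RSS helper are illustrative:

```python
import numpy as np

def rss(X, y, S):
    # residual sum of squares after a least-squares fit on feature subset S
    if not S:
        return float(np.sum((y - y.mean()) ** 2))
    A = X[:, sorted(S)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

def foba(X, y, eps=1e-3, eps_back=None):
    # forward-backward greedy feature selection (sketch of the FoBa idea)
    eps_back = eps / 2 if eps_back is None else eps_back
    S = set()
    while True:
        # forward: add the feature with the largest RSS drop, if drop > eps
        base = rss(X, y, S)
        gains = {j: base - rss(X, y, S | {j})
                 for j in range(X.shape[1]) if j not in S}
        if not gains:
            break
        j, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= eps:
            break
        S.add(j)
        # backward: drop a feature whose removal raises RSS by < eps_back
        while len(S) > 1:
            cur = rss(X, y, S)
            k, rise = min(((k, rss(X, y, S - {k}) - cur) for k in S),
                          key=lambda kv: kv[1])
            if rise >= eps_back:
                break
            S.remove(k)
    return sorted(S)

# demo: y depends on only 2 of 5 features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] + 2 * X[:, 3]
print(foba(X, y, eps=1.0))   # recovers the two informative features
```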

(12)

Example

Degree-2 polynomial with v = [x1, x2].

Expand (1 + x1 + x2)^2 to get the terms 1, x1, x2, x1², x1x2, x2².

Construct the following function for regression:

f(v) = β0 + β1·x1 + β2·x2 + β3·x1² + β4·x1x2 + β5·x2²
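The degree-2 expansion above can be generated mechanically for any number of features. A small sketch (the function name is illustrative):

```python
from itertools import combinations_with_replacement

def poly_terms(features, degree=2):
    # expand [x1, x2] into all monomials of total degree <= degree:
    # 1, x1, x2, x1^2, x1*x2, x2^2 for degree 2
    terms = []
    for d in range(degree + 1):
        for combo in combinations_with_replacement(range(len(features)), d):
            prod = 1.0
            for idx in combo:
                prod *= features[idx]
            terms.append(prod)
    return terms

print(poly_terms([2.0, 3.0]))  # [1.0, 2.0, 3.0, 4.0, 6.0, 9.0]
```

The regression then fits one coefficient β per term, and the sparse selection keeps only the few terms that explain execution time.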

(13)

Predictor code generator

Produces a code snippet, called a slice, for each chosen feature.

Slice: an executable sub-program that yields the same value v of a feature at a program point p as the given program on all inputs.

Automatically evaluates feature values for each input by executing slices.
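A hand-written illustration of the idea (Mantis computes slices automatically on bytecode; this Python pair is only an analogy): the slice reproduces the loop-count feature while dropping the expensive work, so the feature can be evaluated much faster than running the full program.

```python
# Illustrative original program and a hand-written "slice" for one feature.
def expensive(x):
    # heavy computation the slice will drop
    return sum(i * x for i in range(10000))

def original(items):
    loop_count = 0                 # the chosen feature
    total = 0
    for x in items:
        loop_count += 1
        total += expensive(x)
    return total, loop_count

def slice_loop_count(items):
    # executable sub-program: same loop structure, feature value only
    loop_count = 0
    for _ in items:
        loop_count += 1
    return loop_count
```

On any input, `slice_loop_count` yields the same feature value as `original`, without paying for `expensive`.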

(14)

Example

(15)

Prototype Toolchain

(16)

Experiment Setup

A machine running Ubuntu 11.10 64-bit with a 3.1 GHz quad-core CPU and 8 GB of RAM.

A Galaxy Nexus running Android 4.1.2 with a dual-core 1.2 GHz CPU and 1 GB of RAM.

Six CPU-intensive Android applications.

Each with 1,000 randomly generated inputs.

Train the predictor on 100 inputs.

(17)

Experimental Results

(18)

Features and Models

(19)

Effect of the Number of Training Inputs

(20)

Compare with Linear Model

(21)

Prediction Time of Mantis and PE

(22)

Prediction Error of Mantis and BE

(23)

Prediction on Different Hardware Platforms

(24)

Prediction under Background Load

(25)

Offline Stage Processing Time

(26)

Conclusion

Mantis is a framework that automatically generates program performance predictors.

Combines program slicing and sparse regression in a novel way.

Evaluation shows that the generated predictors estimate execution time accurately and efficiently for smartphone applications.
