{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# PCMark benchmark on Android" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The goal of this experiment is to run benchmarks on a Pixel device running Android with an EAS kernel and collect results. The analysis phase will consist of comparing EAS with other schedulers, that is, comparing the *sched* governor with:\n", "\n", " - interactive\n", " - performance\n", " - powersave\n", " - ondemand\n", " \n", "The benchmark we will be using is ***PCMark*** (https://www.futuremark.com/benchmarks/pcmark-android). You will need to **manually install** the app on the Android device in order to run this Notebook.\n", "\n", "When opening PCMark for the first time, you will need to install the Work benchmark from inside the app." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2016-12-12 13:09:13,035 INFO : root : Using LISA logging configuration:\n", "2016-12-12 13:09:13,035 INFO : root : /home/vagrant/lisa/logging.conf\n" ] } ], "source": [ "import logging\n", "from conf import LisaLogging\n", "LisaLogging.setup()" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Populating the interactive namespace from numpy and matplotlib\n" ] } ], "source": [ "%pylab inline\n", "\n", "import copy\n", "import os\n", "from time import sleep\n", "from subprocess import Popen\n", "import pandas as pd\n", "\n", "# Support to access the remote target\n", "import devlib\n", "from env import TestEnv\n", "\n", "# Support for trace events analysis\n", "from trace import Trace\n", "\n", "# Support for FTrace events parsing and visualization\n", "import trappy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Test environment setup\n", "\n", "For more details on this, please check out 
**examples/utils/testenv_example.ipynb**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If more than one Android device is connected to the host, you must specify the ID of the device you want to target in `my_target_conf`. Run `adb devices` on your host to get the ID. You also have to specify the path to your Android SDK in `ANDROID_HOME`." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Setup a target configuration\n", "my_target_conf = {\n", " \n", " # Target platform and board\n", " \"platform\" : 'android',\n", "\n", " # Add target support\n", " \"board\" : 'pixel',\n", " \n", " # Device ID\n", " \"device\" : \"HT6670300102\",\n", " \n", " \"ANDROID_HOME\" : \"/home/vagrant/lisa/tools/android-sdk-linux/\",\n", " \n", " # Define devlib modules to load\n", " \"modules\" : [\n", " 'cpufreq' # enable CPUFreq support\n", " ],\n", "}" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "scrolled": false }, "outputs": [], "source": [ "my_tests_conf = {\n", "\n", " # Folder where all the results will be collected\n", " \"results_dir\" : \"Android_PCMark\",\n", "\n", " # Platform configurations to test\n", " \"confs\" : [\n", " {\n", " \"tag\" : \"pcmark\",\n", " \"flags\" : \"ftrace\", # Enable FTrace events\n", " \"sched_features\" : \"ENERGY_AWARE\", # enable EAS\n", " },\n", " ],\n", "}" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2016-12-08 17:14:32,454 INFO : TestEnv : Using base path: /home/vagrant/lisa\n", "2016-12-08 17:14:32,455 INFO : TestEnv : Loading custom (inline) target configuration\n", "2016-12-08 17:14:32,456 INFO : TestEnv : Loading custom (inline) test configuration\n", "2016-12-08 17:14:32,457 INFO : TestEnv : External tools using:\n", "2016-12-08 17:14:32,458 INFO : TestEnv : ANDROID_HOME: 
/home/vagrant/lisa/tools/android-sdk-linux/\n", "2016-12-08 17:14:32,458 INFO : TestEnv : CATAPULT_HOME: /home/vagrant/lisa/tools/catapult\n", "2016-12-08 17:14:32,459 INFO : TestEnv : Loading board:\n", "2016-12-08 17:14:32,460 INFO : TestEnv : /home/vagrant/lisa/libs/utils/platforms/pixel.json\n", "2016-12-08 17:14:32,462 INFO : TestEnv : Devlib modules to load: [u'bl', u'cpufreq']\n", "2016-12-08 17:14:32,463 INFO : TestEnv : Connecting Android target [HT6670300102]\n", "2016-12-08 17:14:32,463 INFO : TestEnv : Connection settings:\n", "2016-12-08 17:14:32,464 INFO : TestEnv : {'device': 'HT6670300102'}\n", "2016-12-08 17:14:32,562 INFO : android : ls command is set to ls -1\n", "2016-12-08 17:14:33,287 INFO : TestEnv : Initializing target workdir:\n", "2016-12-08 17:14:33,288 INFO : TestEnv : /data/local/tmp/devlib-target\n", "2016-12-08 17:14:35,211 INFO : TestEnv : Topology:\n", "2016-12-08 17:14:35,213 INFO : TestEnv : [[0, 1], [2, 3]]\n", "2016-12-08 17:14:35,471 INFO : TestEnv : Loading default EM:\n", "2016-12-08 17:14:35,472 INFO : TestEnv : /home/vagrant/lisa/libs/utils/platforms/pixel.json\n", "2016-12-08 17:14:35,475 WARNING : TestEnv : Wipe previous contents of the results folder:\n", "2016-12-08 17:14:35,475 WARNING : TestEnv : /home/vagrant/lisa/results/Android_PCMark\n", "2016-12-08 17:14:35,476 INFO : TestEnv : Set results folder to:\n", "2016-12-08 17:14:35,476 INFO : TestEnv : /home/vagrant/lisa/results/Android_PCMark\n", "2016-12-08 17:14:35,476 INFO : TestEnv : Experiment results available also in:\n", "2016-12-08 17:14:35,477 INFO : TestEnv : /home/vagrant/lisa/results_latest\n" ] } ], "source": [ "# Initialize a test environment using:\n", "# the provided target configuration (my_target_conf)\n", "# the provided test configuration (my_tests_conf)\n", "te = TestEnv(target_conf=my_target_conf, test_conf=my_tests_conf)\n", "target = te.target" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Support Functions" ] }, { "cell_type": 
"markdown", "metadata": {}, "source": [ "This set of support functions will help us run the benchmark using different CPUFreq governors." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def set_performance():\n", " target.cpufreq.set_all_governors('performance')\n", "\n", "def set_powersave():\n", " target.cpufreq.set_all_governors('powersave')\n", "\n", "def set_interactive():\n", " target.cpufreq.set_all_governors('interactive')\n", "\n", "def set_sched():\n", " target.cpufreq.set_all_governors('sched')\n", "\n", "def set_ondemand():\n", " target.cpufreq.set_all_governors('ondemand')\n", " \n", " for cpu in target.list_online_cpus():\n", " tunables = target.cpufreq.get_governor_tunables(cpu)\n", " target.cpufreq.set_governor_tunables(\n", " cpu,\n", " 'ondemand',\n", " **{'sampling_rate' : tunables['sampling_rate_min']}\n", " )" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# CPUFreq configurations to test\n", "confs = {\n", " 'performance' : {\n", " 'label' : 'prf',\n", " 'set' : set_performance,\n", " },\n", " #'powersave' : {\n", " # 'label' : 'pws',\n", " # 'set' : set_powersave,\n", " #},\n", " 'interactive' : {\n", " 'label' : 'int',\n", " 'set' : set_interactive,\n", " },\n", " #'sched' : {\n", " # 'label' : 'sch',\n", " # 'set' : set_sched,\n", " #},\n", " #'ondemand' : {\n", " # 'label' : 'odm',\n", " # 'set' : set_ondemand,\n", " #}\n", "}\n", "\n", "# The set of results for each comparison test\n", "results = {}" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Check if PCMark is available on the device\n", "\n", "def check_packages(pkgname):\n", " try:\n", " output = target.execute('pm list packages -f | grep -i {}'.format(pkgname))\n", " except Exception:\n", " raise RuntimeError('Package: [{}] not available on target'.format(pkgname))\n", "\n", "# 
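\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next cell checks that the PCMark package is available by grepping the output of `pm list packages -f` on the target. The cell below is an offline sketch of that same case-insensitive match in pure Python; the sample listing is invented for illustration, not captured from a real device." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Offline sketch of the package check.\n", "# The sample 'pm list packages -f' listing below is hypothetical.\n", "sample_listing = (\n", "    'package:/data/app/com.futuremark.pcmark.android.benchmark-1/base.apk'\n", "    '=com.futuremark.pcmark.android.benchmark\\n'\n", "    'package:/system/app/Calculator/Calculator.apk=com.android.calculator2'\n", ")\n", "\n", "def package_installed(listing, pkgname):\n", "    # Case-insensitive substring match, mirroring 'grep -i'\n", "    return any(pkgname.lower() in line.lower() for line in listing.splitlines())\n", "\n", "print(package_installed(sample_listing, 'com.futuremark.pcmark.android.benchmark'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# 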
Check for specified PKG name being available on target\n", "check_packages('com.futuremark.pcmark.android.benchmark')" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Function that helps run a PCMark experiment\n", "\n", "def pcmark_run(exp_dir):\n", " # Unlock device screen (assume no password required)\n", " target.execute('input keyevent 82')\n", " # Start PCMark on the target device\n", " target.execute('monkey -p com.futuremark.pcmark.android.benchmark -c android.intent.category.LAUNCHER 1')\n", " # Wait a few seconds to make sure the app is loaded\n", " sleep(5)\n", " \n", " # Flush entire log\n", " target.clear_logcat()\n", " \n", " # Run performance workload (assume screen is vertical)\n", " target.execute('input tap 750 1450')\n", " # Wait for completion (10 minutes in total) and collect log\n", " log_file = os.path.join(exp_dir, 'log.txt')\n", " # Wait 5 minutes\n", " sleep(300)\n", " # Start collecting the log\n", " with open(log_file, 'w') as log:\n", " # Pass a single command string: with shell=True, extra list items\n", " # would be passed to the shell itself and the filters dropped\n", " logcat = Popen('adb logcat com.futuremark.pcmandroid.VirtualMachineState:* *:S',\n", " stdout=log,\n", " shell=True)\n", " # Wait an additional 5 minutes for the benchmark to complete\n", " sleep(300)\n", "\n", " # Terminate logcat\n", " logcat.kill()\n", "\n", " # Get scores from logcat\n", " score_file = os.path.join(exp_dir, 'score.txt')\n", " os.popen('grep -o \"PCMA_.*_SCORE .*\" {} | sed \"s/ = / /g\" | sort -u > {}'.format(log_file, score_file))\n", " \n", " # Close application\n", " target.execute('am force-stop com.futuremark.pcmark.android.benchmark')\n", " \n", " return score_file" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Function that helps run PCMark for different governors\n", "\n", "def experiment(governor, exp_dir):\n", " os.system('mkdir -p {}'.format(exp_dir))\n", "\n", " logging.info('------------------------')\n", " logging.info('Run workload 
using %s governor', governor)\n", " confs[governor]['set']()\n", "\n", " ### Run the benchmark ###\n", " score_file = pcmark_run(exp_dir)\n", " \n", " # Save the scores as a dictionary\n", " scores = dict()\n", " with open(score_file, 'r') as f:\n", " lines = f.readlines()\n", " for l in lines:\n", " info = l.split()\n", " scores.update({info[0] : float(info[1])})\n", " \n", " # Return all the experiment data\n", " return {\n", " 'dir' : exp_dir,\n", " 'scores' : scores,\n", " }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run PCMark and collect scores" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2016-12-08 17:14:43,080 INFO : root : ------------------------\n", "2016-12-08 17:14:43,081 INFO : root : Run workload using performance governor\n", "2016-12-08 17:24:50,386 INFO : root : ------------------------\n", "2016-12-08 17:24:50,387 INFO : root : Run workload using interactive governor\n" ] } ], "source": [ "# Run the benchmark for all the configured governors\n", "for governor in confs:\n", " test_dir = os.path.join(te.res_dir, governor)\n", " res = experiment(governor, test_dir)\n", " results[governor] = copy.deepcopy(res)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "After running the benchmark with the specified governors, we can show and plot the scores:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", " | interactive | \n", "performance | \n", "
---|---|---|
PCMA_DATA_MANIPULATION_SCORE | \n", "4264.355319 | \n", "4260.128135 | \n", "
PCMA_PHOTO_EDITING_V2_SCORE | \n", "16853.979140 | \n", "16422.056987 | \n", "
PCMA_VIDEO_EDITING_SCORE | \n", "6281.320705 | \n", "6314.691918 | \n", "
PCMA_WEB_V2_SCORE | \n", "5513.358130 | \n", "5610.058655 | \n", "
PCMA_WORK_V2_SCORE | \n", "6803.354647 | \n", "6790.043529 | \n", "
PCMA_WRITING_V2_SCORE | \n", "5855.885077 | \n", "5823.619700 | \n", "