To understand the Android Runtime (ART), what it does and why it's important, we first need to go back to 2010 and the introduction of Android 2.2, better known as Froyo (Frozen Yogurt).
Just-in-time (JIT)
You probably know already that Android runs each of its applications in its own little sandbox called a Dalvik virtual machine — it's a cornerstone of Android's security and has been around since before the release of Android 1.0. When you create an Android app using, say, the popular Eclipse IDE and an app-appropriate version of the Android software development kit (SDK), you're turning your raw Java code into a compact form called 'bytecode' that's more space-efficient, portable and easier to run. (Take a basic look at the Android bytecode form.)

Back in the days before Froyo, that bytecode was processed by the Dalvik VM's interpreter — a bit like using GW-BASIC in the old DOS 3.3/4.0 days, or JavaScript as a modern-day example. In other words, it didn't compile the app into a fast, tiny machine-code or native-code program; it simply processed that bytecode as needed. And like any interpreter-only solution, Dalvik wasn't particularly quick — faster than other interpreters of the time, but nowhere near native-code speed.
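To picture what an interpreter-only VM is doing, here's a toy stack-based interpreter in Python. It's a sketch for illustration only: the opcodes are invented, and Dalvik itself is register-based rather than stack-based, but the one-instruction-at-a-time dispatch loop is the same idea.

```python
# Toy bytecode interpreter: walk the instruction list and dispatch
# each opcode in turn, just as an interpreter-only VM would.
def interpret(bytecode):
    """Execute a list of (opcode, operand) pairs; return the result."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":                  # push a constant onto the stack
            stack.append(arg)
        elif op == "ADD":                 # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":                 # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown opcode: " + op)
    return stack[-1]

# (2 + 3) * 4
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]
print(interpret(program))  # 20
```

Every run pays that dispatch cost for every instruction, every time, which is exactly the overhead JIT and AOT compilation set out to remove.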
But along came Froyo and all of a sudden, apps were humming along up to five times faster than they had on Eclair (Android 2.1). The sudden change was made possible by the addition of a Just-In-Time (JIT) compiler. Froyo still ran apps via the Dalvik VM interpreter, but the difference was that frequently executed parts of the bytecode were now compiled into faster machine code on the fly, 'just in time' for execution, in a process also known as 'dynamic compilation'. The initial JIT release used trace-based compilation, identifying hot linear sequences of code (traces) and compiling each trace just before execution. (Here are the original presentation slides on the Android JIT compiler.)
Froyo found its speed through trace-based JIT compilation.
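The hot-path idea behind trace-based JIT compilation can be sketched in a few lines of Python. Everything here is invented purely for illustration: the threshold, the two-opcode instruction set, and the 'compilation' step (translating the op list into a real Python function).

```python
HOT_THRESHOLD = 3  # invented: how many runs before a fragment counts as 'hot'

def interpret(bytecode, x):
    """Slow path: step through every op on every call."""
    for op, arg in bytecode:
        x = x + arg if op == "ADD" else x * arg
    return x

def translate(bytecode):
    """One-off 'compilation': build a real Python function for the fragment."""
    expr = "x"
    for op, arg in bytecode:
        expr = f"({expr} + {arg})" if op == "ADD" else f"({expr} * {arg})"
    return eval(f"lambda x: {expr}")

class ToyJIT:
    def __init__(self):
        self.counts, self.compiled = {}, {}

    def run(self, name, bytecode, x):
        fn = self.compiled.get(name)
        if fn:                                   # hot fragment: skip the interpreter
            return fn(x)
        self.counts[name] = self.counts.get(name, 0) + 1
        if self.counts[name] >= HOT_THRESHOLD:   # just went hot: compile for next time
            self.compiled[name] = translate(bytecode)
        return interpret(bytecode, x)

jit = ToyJIT()
frag = [("ADD", 3), ("MUL", 2)]                      # computes (x + 3) * 2
print([jit.run("frag", frag, i) for i in range(5)])  # [6, 8, 10, 12, 14]
```

The first few calls go through the interpreter; once the fragment is hot it's compiled once and every later call runs the compiled version. That is why a JIT can deliver gains almost immediately without compiling whole apps up front.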
Why JIT?
The JIT compiler has been with us ever since — receiving regular pruning and maintenance in each new Android release, but essentially operating in the same general form. Now you might be thinking: if compiling an app into native code gives better performance, why did Google bother with a JIT compiler instead of simply compiling the Java code straight to native code?
There were a number of reasons. First, when you compile to machine code, you're creating a CPU-specific version of that app — it's why you can't run a Windows desktop app on an ARM-powered device. Drilling down a step further, not all Android devices run the same CPUs, and ARM processors alone span several architecture versions: some devices run ARMv7-A, others ARMv6, and earlier examples again ARMv5TE.
HTC’s Desire gained plenty of speed through Android 2.2’s new JIT compiler.
For Google Play to work, it had to offer a single portable CPU-agnostic app that could run on any Android device, otherwise it'd mean searching for CPU-specific editions, which would've been disastrous for Android's ease of use. (You may have noticed CPU-specific codec packs for MX Player available on Google Play, compiled codec libraries designed to run on particular CPU architectures to maximise performance, but they're exceptions to the rule.) The benefit of bytecode is that it's more efficient than raw Java code but still portable, meaning you can load it onto any Android device and, in theory, it'll run.

The second reason was that fully compiling bytecode into machine code on an early smartphone or tablet CPU would've meant delays while waiting for compilation to complete; it would've also sucked up plenty of RAM — those early phones weren't exactly flush with speed or memory, so JIT compilation was a clever compromise. In fact, Google claimed other JIT implementations available at the time could take 'minutes or even hours' to get up to speed and deliver performance gains. In contrast, the new Dalvik JIT compiler delivered its performance benefits almost immediately. And according to Google, JIT compilation on Froyo added only a 100KB load to device RAM, so it wasn't prohibitive for older-generation devices.

A third reason is battery life — compiling apps on a phone requires considerable CPU horsepower, which would've reduced battery life.
Apple comparisons
One benefit of controlling your own hardware is you know exactly what’s in it. That’s why Apple can distribute pre-compiled app binaries to iPhone and iPad devices rather than just bytecode. It’s also one of the contributing reasons why iOS seems smoother than Android — all of its apps are running full native code.
But Android was always designed to run on a wide range of CPU architectures and on devices beyond phones and tablets. While Apple could get away with compiled native code, Android had to stick with something portable enough to work on everything, but still have a system in place able to speed up code sections without sucking the life out of those earlier devices.
No need to compromise
Bottom line, JIT compilation was the best solution available at the time for early-generation ARM CPUs, where resources were tight and CPU clock cycles at a premium. Today, with CPU cores coming out of our ears and gigabytes of RAM to play with, a just-in-time view of code processing is no longer necessary, so Google has spent the last two years working on ART, the Android Runtime.
The Android Runtime replaces Dalvik's JIT compiler with an ahead-of-time (AOT) compiler. Instead of on-the-fly processing, the whole app is pre-compiled into machine code just once, at installation, rather than parts of it at run-time, and that should bring a number of benefits. First, CPU-bound apps should run faster and time-critical apps more efficiently with JIT compilation removed — apps now exist as native code thanks to compilation on installation. Second, there should be some improvement in battery life, again through removing JIT compilation — less code processing means greater CPU efficiency, which results in better battery life. Remember, with Dalvik, every app triggers JIT compilation every time it runs unless the compiled bytecode still exists in the memory cache — so while it might be efficient from a resources viewpoint, JIT isn't terribly efficient in CPU terms.
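Carrying on the same kind of toy Python sketch, the AOT difference is simply when translation happens: once, for every fragment, at install time, so running the app never touches an interpreter. The opcodes and names here are invented for illustration.

```python
def translate(bytecode):
    """One-off 'compilation' of an op list into a real Python function."""
    expr = "x"
    for op, arg in bytecode:
        expr = f"({expr} + {arg})" if op == "ADD" else f"({expr} * {arg})"
    return eval(f"lambda x: {expr}")

def install(app):
    """Done once, at install time: pre-compile every fragment of the app."""
    return {name: translate(code) for name, code in app.items()}

app = {
    "main":   [("ADD", 3), ("MUL", 2)],   # (x + 3) * 2
    "helper": [("MUL", 10)],              # x * 10
}
native = install(app)        # the pre-compiled 'native' image, built once
print(native["main"](4))     # 14 -- no interpreting, no compiling at run-time
print(native["helper"](7))   # 70
```

The trade-off is visible even in the sketch: install() does more work up front and keeps a compiled copy of everything, which is why ART needs more RAM at install time and more storage afterwards.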
The Android Runtime will have some downsides, but they're relatively minor — because of AOT compilation, apps will need more RAM during installation and more storage space after it. You'll still be downloading bytecode from Google Play (little changes for the developer or the user), but native-code compilation needs more RAM to perform. Replacing the Dalvik interpreter also means more code has to be compiled, ready to run — word is apps will now have roughly a 20% larger footprint on your phone or tablet's storage than before. However, with phone storage near enough to ten times what it was just a few years ago, that's not really much of an issue (unless your phone is clogged with apps and you're on fixed storage).
How it works
Like Linux, Android makes use of shared object (.so) libraries — the Dalvik virtual machine comes via libdvm.so, while the new Android Runtime engine is built into libart.so. Although KitKat is available now through Google's Nexus 5 smartphone, it's also part of the Android Open Source Project (AOSP), which means you'll find it in new open-source KitKat-based ROMs like CyanogenMod and OmniROM.
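As a desktop analogue of how a runtime engine ships as a loadable .so, Python's ctypes can open a shared object and call into it at run-time. This example loads the standard C maths library; the library name is platform-dependent (libm.so.6 is an assumption about a typical Linux box, not Android code).

```python
import ctypes
import ctypes.util

# Locate the C maths library (e.g. libm.so.6 on Linux) and load it.
name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(name)

# Tell ctypes the C signature of sqrt() before calling it.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(16.0))  # 4.0
```

Swapping Dalvik for ART works along the same lines: the system loads libart.so in place of libdvm.so as its runtime engine.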
When you first switch to the Android Runtime, Android runs through your app list, compiling each app into native code — and the process can take upwards of five minutes, depending on your device and the number of apps installed. After that, users see nothing functionally different.
What doesn’t work?
But the Android Runtime (ART) is still considered experimental by Google, so while it's not quite ready for prime time, it's good enough for developers to get a look at. At this early stage, not every Android app works, and there's a growing app compatibility list via the XDA Developers Forum at www.androidruntime.com/list. Obviously it's not complete, but there were around 2,000 apps on the list at the time of writing, with about 20% (397 of 1,980) of those tested so far found not to work under ART.
Performance differences
Rather than just say ART feels faster, we threw a few of the usual APC benchmarks at it, comparing performance between ART and Dalvik using OmniROM on a Samsung Galaxy S3 GT-I9300 smartphone. However, you do have to be careful with benchmarks, to make sure you know what it is you're actually testing.
Basically, any app that makes extensive use of Android's Native Development Kit (NDK) isn't likely to see much of an improvement, since these apps are already running significant chunks of compiled native code. Others that use straight bytecode should see some extra zip.

And that's exactly how it turned out. The Ice Storm test inside 3DMark improved little — in fact, it went slightly backwards under ART; just why isn't clear, but the lack of improvement made sense since it relies heavily on the NDK. The same happened with GFXBench 2.7.2 and Geekbench 3.0, which is compiled using GCC 4.8.
Where things became more interesting was the Linpack and Quadrant Standard tests — Linpack's performance jumped by more than a third in single-threaded testing and a bit more than a fifth in multi-threaded tests; Quadrant results were similar, particularly on the CPU test. Based on this AOSP implementation of ART, it seems NDK-built apps won't see much improvement (at least for the moment), whereas bytecode-based apps are currently gaining as much as a third extra speed. (Some are reporting as much as 100% speed improvements with official Google releases.)
The future for NDK
All this raises the question: if apps compiled with the NDK won't see much improvement and those running bytecode will now get a sizeable rocket under them, is the NDK running out of steam? Google already tries to dissuade developers from using the NDK, pointing out it won't help most apps; however, it does allow C++ developers to code CPU-intensive applications more efficiently. The most common question from developers at the moment seems to be whether NDK-compiled apps will work with ART. If you're using Intel's C++ compiler for Android (ICC), the word is NDK apps should work on ART provided you're on ICC v14 and NDK version 8b.
How to use ART
We’ve included a step-by-step guide on activating ART on your KitKat device, but the question for now is whether it’s worth jumping to for everyday use. Given that about 20% of apps tested so far appear to crash on ART, you’ll have to be prepared for a bit of a bumpy ride if you do. It’s obviously a good idea for developers to test out code, but at this stage, with Google continuing to refine ART (no official timeline has been announced), it’ll be important to keep up with changes and updates simply to maintain app compatibility.
But the great news is with KitKat, you get the choice to try it out on your terms.
How to enable ART on your KitKat phone
Step 1
Launch your KitKat device, select 'Settings > About phone', scroll down to 'Build number' and tap on it repeatedly (seven taps in all). As you do, you should see a prompt counting down how many taps are left before you become a developer (you only need to do this once).
Step 2
Once developer mode has been enabled, back out to the Settings menu, open 'Developer options', scroll down to 'Select runtime' and choose 'Use ART'.
NOTE: Google considers ART experimental and some third-party apps may break. To return your phone to its original mode, follow Step 2 again, but this time select 'Use Dalvik'.
Step 3
Reboot your phone and it'll convert all apps from Dalvik to ART — depending on your app count, this may take some time, but KitKat will show you its progress. After that, you're good to go.