Tuesday, September 2, 2014

Android RunTime (ART): how it works - APC

To understand Android RunTime, what it does and why it’s important, first, we need to go back to 2010 and the introduction of Android 2.2/Frozen Yogurt (Froyo).

Just-in-time (JIT)

You probably know already that Android runs each of its applications in its own little sandbox called a Dalvik Virtual Machine — it’s the cornerstone of Android’s security and has been around since before the release of Android 1.0. When you create an Android app using, say, the popular Eclipse IDE and an app-appropriate version of the Android software development kit (SDK), you’re turning your raw Java code into a compact form called ‘bytecode’ that’s more space-efficient, portable and easier to run. (Take a basic look at the Android bytecode form.)

The Android SDK turns your Java-based apps into device-portable bytecode.
Back in the days before Froyo, that bytecode was processed by the Dalvik VM’s interpreter — a bit like using GW-BASIC in the old DOS 3.3/4.0 days, or JavaScript as a modern-day example. In other words, it didn’t compile the app into a fast, tiny machine-code or native-code program; it simply processed the bytecode as needed. And just like any interpreter-only solution, Dalvik wasn’t particularly quick — it was faster than other interpreters of the time, but nowhere near native-code speed.
But along came Froyo and, all of a sudden, apps were humming along up to five times faster than they had on Eclair (Android 2.1). The sudden change was made possible by the addition of a Just-In-Time (JIT) compiler. Froyo still ran apps via the Dalvik VM interpreter, but the difference was that parts of the bytecode were now compiled into faster machine code on the fly, ‘just in time’ for execution, in a process also known as ‘dynamic compilation’. The initial JIT compiler release used a trace-based process of compilation, looking for frequently executed linear code paths and compiling each of those traces just before execution. (Here are the original presentation slides on the Android JIT compiler.)

 Froyo found its speed through tracing/linear JIT compilation.

Why JIT?

 

The JIT compiler has been with us ever since — receiving regular pruning and maintenance in each new Android release, but essentially operating in the same general form. Now you might be thinking: if compiling an app into native code gives better performance, why did Google bother adding a JIT compiler rather than simply compiling the Java code straight to native code?
There were a number of reasons. First, when you compile into machine code, you’re creating a CPU-specific version of that app — it’s why you can’t run a Windows desktop app on an ARM CPU-powered device. Drilling down a step further, not all Android devices run the same CPUs — ARM processors alone span several architectures. Some run ARMv7-A; others ARMv6; earlier examples again, ARMv5TE.

 
 HTC’s Desire gained plenty of speed through Android 2.2’s new JIT compiler.
For Google Play to work, it had to offer a single portable CPU-agnostic app that could run on any Android device, otherwise it’d mean searching for CPU-specific editions, which would’ve been disastrous for Android’s ease of use. (You may have noticed CPU-specific codec packs for MX Player available on Google Play, compiled codec libraries designed to run on particular CPU architectures to maximise performance, but they are exceptions to the rule.) The benefit of bytecode is that it’s more efficient than raw Java code but still portable, meaning you can load it onto any Android device and in theory, it’ll run.
The second reason was that fully compiling bytecode into machine code on an early smartphone or tablet CPU would’ve meant delays while waiting for the compilation process to complete; it would’ve also sucked up plenty of RAM — those early phones weren’t exactly flush with speed or RAM, so JIT compilation was a clever compromise. In fact, Google claimed other JIT implementations available at the time could take ‘minutes or even hours’ to get up to speed and deliver performance gains. In contrast, the new Dalvik JIT compiler managed to deliver its performance benefits almost immediately. And according to Google, JIT compilation on Froyo only added a 100KB load to device RAM, so it wasn’t prohibitive in that respect for older-generation devices.

Android runs a copy of the Dalvik JIT compiler for each running app.
A third reason is battery life — compiling apps on a phone requires considerable CPU horsepower, which would’ve reduced battery life.

Apple comparisons

 

One benefit of controlling your own hardware is you know exactly what’s in it. That’s why Apple can distribute pre-compiled app binaries to iPhone and iPad devices rather than just bytecode. It’s also one of the contributing reasons why iOS seems smoother than Android — all of its apps are running full native code.
But Android was always designed to run on a wide range of CPU architectures and devices beyond phones and tablets. While Apple could get away with compiled native code, Android had to stick with something portable enough to work on everything, but still have a system in place able to speed up code sections without sucking the life out of those earlier devices.

No need to compromise

 

Bottom line: JIT compilation was the best solution available at the time for early-generation ARM CPUs, where resources were tight and CPU clock cycles at a premium. Today, with CPU cores coming out of our ears and gigabytes of RAM to play with, a just-in-time view of code processing is no longer necessary, so Google has spent the last two years working on Project ART, or Android RunTime.
Android RunTime replaces Dalvik’s JIT compiler with a new AOT (Ahead-Of-Time) version. Instead of on-the-fly processing, the whole app is now pre-compiled into machine code just once, upon installation, rather than piece by piece at run-time, which should bring a number of benefits. First, CPU-bound apps will now run faster and time-bound apps more efficiently by removing JIT compilation — apps now exist as native code thanks to compilation on installation. Second, there should be some improvement to battery life, again through removing JIT compilation — less code processing means greater CPU efficiency, which results in better battery life. Remember, with Dalvik, every app launches JIT compilation every time it runs unless the compiled bytecode still exists in the memory cache — so while it might be efficient from a resources viewpoint, JIT isn’t terribly efficient from a CPU standpoint.
Android RunTime will have some downsides, but they’re relatively minor — because of the AOT compilation, apps will need more RAM during installation and more storage space after it. You’ll still be downloading bytecode from Google Play (little changes for developers and users), but native-code compilation will need more RAM to perform. Replacing the Dalvik interpreter also means more code has to be compiled, ready to run — word is apps will now have roughly a 20% larger footprint on your phone/tablet’s storage than before. However, with phone storage near enough to ten times what it was just a few years ago, that’s not really much of an issue (unless your phone is clogged with apps and you’re on fixed storage).


Google’s new Android 4.4 release includes two runtime engines.

How it works

 

Like Linux, Android makes use of shared object (.so) libraries — the Dalvik virtual machine comes via libdvm.so, while the new Android RunTime engine is built into libart.so. Although KitKat is available now through Google’s Nexus 5 smartphone, it’s also part of the Android Open Source Project (AOSP), which means you’ll find it in new open-source KitKat-based ROMs like CyanogenMod and OmniROM.
When you first switch to using Android RunTime, Android runs through your app list, compiling each app into native code — the process can take upwards of five minutes, depending on your device and the number of apps you have installed. After that, users see no functional difference.

omnirom-screenshot
ART is available in the AOSP release of Android, featured in ROMs like OmniROM.

What doesn’t work?

 

But Android RunTime (ART) is still considered experimental by Google, so while it’s not quite ready for prime time, it’s good enough for developers to get a look at it. At this stage, not every Android app works, and even this early there’s a growing app compatibility list via the XDA Developers Forum at www.androidruntime.com/list. Obviously, it’s not complete, but there were around 2,000 apps on the list at the time of writing, with about 20% of those tested (397 of 1,980) found not to work under ART.

Performance differences

 

Rather than just say ART feels faster, we threw a few of the usual APC benchmarks at it, comparing the performance differences between ART and Dalvik using OmniROM on a Samsung Galaxy S3 GT-I9300 smartphone. However, you do have to be careful with benchmarks to make sure you know what it is that you’re actually testing.
Basically, any app that makes extensive use of Android’s Native Development Kit (NDK) isn’t likely to see much of an improvement, since these apps are already running significant chunks of compiled native code. Others that use straight bytecode should see some extra zip.

lg-nexus-5-300px
LG’s Nexus 5, the first phone to support the new Android RunTime engine.
And that’s exactly how it turned out. The Ice Storm test inside 3DMark improved little — in fact, it went slightly backwards under ART; just why isn’t clear, but the lack of improvement made sense since it relies heavily on the NDK. The same happened with GFXBench 2.7.2 and Geekbench 3.0 — the latter is compiled using GCC 4.8 (the GNU Compiler Collection).
Where things became more interesting was the Linpack and Quadrant Standard tests — Linpack’s performance jumped by more than a third on single-thread testing, a bit more than a fifth on multi-threaded tests; Quadrant results were similar, particularly on the CPU test. Based on this AOSP-implementation of ART, it suggests NDK-built apps won’t see much improvement (at least for the moment) whereas bytecode-based apps are currently gaining as much as a third extra speed. (Some are reporting as much as 100% speed improvements with official Google releases).

The future for NDK

 

All this raises the question: if apps compiled with the NDK won’t see much improvement and those running bytecode will now get a sizeable rocket under them, is the NDK running out of steam? Google seemingly tries to dissuade developers from using the NDK, pointing out it won’t help most apps. However, it does allow C++ developers to code CPU-intensive applications more efficiently. The most common question from developers at the moment seems to be whether NDK-compiled apps will work with ART. If you’re one of those using Intel’s C++ compiler for Android (ICC), the word is NDK apps should work on ART provided you’re using ICC v14 and NDK version 8b.

How to use ART

 

We’ve included a step-by-step guide on activating ART on your KitKat device, but the question for now is whether it’s worth jumping to for everyday use. Given that about 20% of apps tested so far appear to crash on ART, you’ll have to be prepared for a bit of a bumpy ride if you do. It’s obviously a good idea for developers to test out code, but at this stage, with Google continuing to refine ART (no official timeline has been announced), it’ll be important to keep up with changes and updates simply to maintain app compatibility.
But the great news is with KitKat, you get the choice to try it out on your terms.

 

How to enable ART on your KitKat phone

 

Step 1



Launch your KitKat device, select ‘Settings > About phone’, scroll down to the Build number and tap on it five times. As you do, you should see a prompt indicating how many more taps you need before you become a developer (you only need to do this once).


Step 2


Once developer mode has been enabled, back out to the Settings menu, open ‘Developer options’, scroll down to ‘Select runtime’ and choose ‘Use ART’.
NOTE: Google considers ART experimental and some third-party apps may break. To return your phone to its original mode, follow Step 2, but this time select ‘Use Dalvik’.

Step 3


Reboot your phone and it’ll then convert all apps from Dalvik to ART — depending on your app count, this may take some time, but KitKat will show you its progress. After that, you’re good to go.

A journey on the Android Main Thread

What is the main thread in Android?

There is an article on Coding Horror about why we should learn to read the source. One of the great aspects of Android is its open source nature.
When facing bugs that were related to how we interact with the main thread, I decided to get a closer look at what the main thread really is. This article describes the first part of my journey.

PSVM

public class BigBang {
 public static void main(String... args) {
 // The Java universe starts here.
 }
}
 
All Java programs start with a call to a public static void main() method. This is true for Java Desktop programs, JEE servlet containers, and Android applications.
When the Android system boots, it starts a Linux process called ZygoteInit. This process is a Dalvik VM that loads the most common classes of the Android SDK on a thread, and then waits.
When starting a new Android application, the Android system forks the ZygoteInit process. The thread in the child fork stops waiting, and calls ActivityThread.main().

 Loopers

Before going any further, we need to look at the Looper class.
Using a looper is a good way to dedicate one thread to process messages serially.
Each looper has a queue of Message objects (a MessageQueue).
A looper has a loop() method that will process each message in the queue, and block when the queue is empty.
The Looper.loop() method code is similar to this:

void loop() {
 while(true) {
 Message message = queue.next(); // blocks if empty.
 dispatchMessage(message);
 message.recycle();
 }
}
 
Each looper is associated with one thread. To create a new looper and associate it to the current thread, you must call Looper.prepare(). The loopers are stored in a static ThreadLocal in the Looper class. You can retrieve the Looper associated to the current thread by calling Looper.myLooper().
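The looper-per-thread pattern is easy to reproduce in plain Java. The sketch below is a hypothetical MiniLooper class (not part of the Android SDK) that mimics how a looper serializes work posted from any thread onto one dedicated thread, using a blocking queue in place of MessageQueue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical MiniLooper: a plain-Java analogue of Android's Looper/MessageQueue pair.
class MiniLooper {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    // Analogue of Looper.loop(): process messages serially, block when the queue is empty.
    public void loop() throws InterruptedException {
        while (running) {
            Runnable message = queue.take(); // blocks if empty
            message.run();                   // dispatch on this (the looper's) thread
        }
    }

    // Analogue of Handler.post(): may be called from any thread.
    public void post(Runnable message) {
        queue.add(message);
    }

    // Ask the loop to exit once all previously posted messages have run.
    public void quit() {
        post(() -> running = false);
    }
}
```

The key property is the same as Looper's: post() may be called from any thread, but every Runnable runs on the single thread that called loop().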
The HandlerThread class does everything for you:

HandlerThread thread = new HandlerThread("SquareHandlerThread");
thread.start(); // starts the thread.
Looper looper = thread.getLooper(); 
 
Its code is similar to this:

class HandlerThread extends Thread {
 Looper looper;
 public void run() {
 Looper.prepare(); // Create a Looper and store it in a ThreadLocal.
 looper = Looper.myLooper(); // Retrieve the looper instance from the ThreadLocal, for later use.
 Looper.loop(); // Loop forever.
 }
}
 
Handlers

A handler is the natural companion to a looper.
A handler has two purposes:
  • Send messages to a looper message queue from any thread.
  • Handle messages dequeued by a looper on the thread associated to that looper.
// Each handler is associated to one looper.
Handler handler = new Handler(looper) {
 public void handleMessage(Message message) {
 // Handle the message on the thread associated to the given looper.
 if (message.what == DO_SOMETHING) {
 // do something
 }
 }
};
// Create a new message associated to that handler.
Message message = handler.obtainMessage(DO_SOMETHING);
// Add the message to the looper queue.
// Can be called from any thread.
handler.sendMessage(message); 
 
You can associate multiple handlers to one looper. The looper delivers the message to message.target.
A popular and simpler way to use a handler is to post a Runnable:

// Create a message containing a reference to the runnable and add it to the looper queue
handler.post(new Runnable() {
 public void run() {
 // Runs on the thread associated to the looper associated to that handler.
 }
});
A handler can also be created without providing any looper:
// DON'T DO THIS
 Handler handler = new Handler(); 
 
The Handler no-argument constructor calls Looper.myLooper() and retrieves the looper associated with the current thread. This may or may not be the thread you actually want the handler to be associated with.
Most of the time, you just want to create a handler to post on the main thread:

Handler handler = new Handler(Looper.getMainLooper()); 
 
Back to PSVM

Let’s look at ActivityThread.main() again. Here is what it is essentially doing:

public class ActivityThread {
 public static void main(String... args) {
 Looper.prepareMainLooper(); // Like Looper.prepare(), but also stores the looper as the main looper.
 // You can now retrieve the main looper at any time by calling Looper.getMainLooper().
 // Post the first messages to the looper.
 // { ... }
 Looper.loop();
 }
}
Now you know why this thread is called the main thread. :)
Note: As you would expect, one of the first things that the main thread will do is create the Application and call Application.onCreate().
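Because the main looper is globally accessible, a common idiom follows directly from the classes above. This is a sketch against the Android SDK (android.os.Looper and android.os.Handler), so it only runs on a device or emulator:

```java
// Are we currently on the main thread?
boolean onMain = Looper.myLooper() == Looper.getMainLooper();

if (!onMain) {
    // From any background thread, hop back onto the main thread.
    new Handler(Looper.getMainLooper()).post(() -> {
        // Runs on the main thread, once the looper dequeues this message.
    });
}
```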
In the next part, we will look at the relation between the Android lifecycle and the main thread, and how this can lead to subtle bugs.

Saturday, May 17, 2014

Why Overlay in android and how overlay works in android

What is Overlay?

Generally, an overlay can be defined as:
 
"Cover the surface of a thing with a coating." 
 
In Android:

"Overlay is the individual items placed on the map."

There are many ways to get custom graphics (drawables) to appear in a view. You can set a background drawable if that's what you want, or you can use an ImageView, or you can create a custom View subclass and override onDraw(). Or, if you want to draw them over the children in a layout, you can subclass the layout and override dispatchDraw() to draw them after all of the children (after a call to super.dispatchDraw()). But sometimes you just want something simpler: ViewOverlay.

View Overlay
ViewOverlay is a class that has been in Android since version 4.3 (API level 18). It provides a transparent layer on top of a View to which you can add visual content, and which does not affect the layout hierarchy.

How does it work?

You just have to call the getOverlay() method on any View of your app to get its ViewOverlay — or a ViewGroupOverlay if you are calling this method on a ViewGroup object — but both of them use the same concept.

Once you have it, you can add any Drawable that you want to show in this overlay by calling the add(Drawable drawable) method on ViewOverlay, or any View by calling add(View view) on ViewGroupOverlay.

The ViewOverlay API is very simple: aside from add(Drawable drawable), we can also find clear() and remove(Drawable drawable). These are the only methods we need to manage the content we move into our ViewOverlays.
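As a sketch of the API in use (assuming an Activity whose layout contains a root view with id R.id.root and a drawable resource R.drawable.badge — both hypothetical names for this example):

```java
// Inside an Activity, API 18+. R.id.root and R.drawable.badge are placeholders.
View root = findViewById(R.id.root);
Drawable badge = getResources().getDrawable(R.drawable.badge);
badge.setBounds(0, 0, badge.getIntrinsicWidth(), badge.getIntrinsicHeight());

// Draw the badge on top of the whole hierarchy, without touching the layout.
root.getOverlay().add(badge);

// Later, when it is no longer needed (e.g. when an animation ends):
root.getOverlay().remove(badge);
```

Note that the drawable needs its bounds set explicitly, since the overlay does no layout of its own.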

Why should I use ViewOverlay?

Well, for now, everything I could think of doing with this new API can be done using a RelativeLayout and a bit of tricky, ugly code. But ViewOverlay lets us do those things in a friendlier way.

Essentially, this component is visual-only, so views attached to a ViewOverlay will not respond to any touch or tap event. The ViewOverlay mechanism was conceived to be used in combination with things like animations.

"Using ViewOverlays we can animate views through other layouts in view hierarchy, even if they are not any of its parents."
So when one of these animations ends, we should call the clear() or remove(Drawable drawable) methods to take the content out of our ViewOverlay, keeping it clean and avoiding memory leaks.

This is only for API 18+, although we hope it will be backported in a support library in the near future.

Overlay Example