Friday 30 November 2012

Method Profiling in Android

I've recently been using the Android implementation of OpenCV for real-time computer vision on mobile devices. Computer vision is computationally expensive, especially when you're processing a camera stream in real time. While trying to speed up my object tracking algorithm, I used Android's method profiler to analyse the time spent in each function, hoping to identify areas for optimisation. This makes an interesting little case study and an example of how to use Android's profiling tools.

How do I enable profiling?
Traceview is part of the Eclipse ADT. In the DDMS perspective, method profiling is enabled by selecting a debuggable process and clicking the button circled below. To stop profiling, click the button again. Once the profiler has stopped, a Traceview window will appear.
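Profiling can also be started and stopped in code, which is handy when you want to trace one specific region rather than whatever happens to run while you reach for the DDMS button. Below is a minimal sketch using android.os.Debug; the trace file name and the runTrackingWorkload call are illustrative placeholders.

    import android.os.Debug;

    // Begin writing trace data to tracking.trace on external storage
    // (needs WRITE_EXTERNAL_STORAGE on most devices).
    Debug.startMethodTracing("tracking");

    runTrackingWorkload();  // placeholder for the code being profiled

    // Stop tracing; the resulting file opens in Traceview.
    Debug.stopMethodTracing();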


Interpreting Traceview output

The image above shows my first method trace, capturing around seven seconds of execution and thousands of method invocations. Each row in the trace corresponds to a method, ordered by CPU usage by default. Selecting a row expands that method, showing every method invoked from within it, again ordered by CPU usage.

Optimisation using profile data
Using the trace above, we can see that my object tracking algorithm spends most of its time waiting for four methods to return: Imgproc.pyrDown, MainActivity.blobUpdate, Imgproc.cvtColor and VideoCapture.retrieve. The pyrDown method applies a Gaussian blur filter and then downsamples an image matrix. The blobUpdate method is a callback I use to give updates on a tracked object. The cvtColor method converts the values in a matrix to those of another colour space. The retrieve method captures a frame from the device camera.
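To make the trace easier to follow, here is a rough sketch of the per-frame pipeline these four methods belong to. It assumes the OpenCV 2.4 Java bindings; trackBlob and the Blob type stand in for my tracking code and are hypothetical, not the actual implementation.

    import org.opencv.core.Mat;
    import org.opencv.highgui.VideoCapture;
    import org.opencv.imgproc.Imgproc;

    Mat rgb = new Mat(), small = new Mat(), hsv = new Mat();

    void processFrame(VideoCapture camera) {
        camera.retrieve(rgb);                                 // frame from the camera
        Imgproc.pyrDown(rgb, small);                          // blur and downsample
        Imgproc.cvtColor(small, hsv, Imgproc.COLOR_RGB2HSV);  // RGB to HSV
        Blob blob = trackBlob(hsv);                           // hypothetical tracker
        blobUpdate(blob);                                     // report position and size
    }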

The latter two methods are crucial to my object tracking algorithm: I need retrieve to get images from the camera, and cvtColor converts those images from RGB to HSV colour space, in which colour thresholding is more robust to lighting changes. The former two, however, can potentially be optimised.
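For example, once a frame is in HSV, thresholding for a target colour is a single range check per pixel with Core.inRange; the bounds below are illustrative placeholders rather than the values my tracker uses.

    import org.opencv.core.Core;
    import org.opencv.core.Scalar;

    Mat mask = new Mat();
    // Keep only pixels whose hue, saturation and value fall within
    // the target range (placeholder bounds for a blue-ish object).
    Core.inRange(hsv, new Scalar(100, 120, 70), new Scalar(130, 255, 255), mask);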

From this trace I've already identified a redundant yet expensive method call: pyrDown. Around 30% of the time in the processFrame method is spent waiting for pyrDown to return. I was using this function to downsample each camera image to 240x320, as a smaller image can be processed faster. The call can be eliminated entirely by requesting 240x320 images from the camera instead.
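With OpenCV's VideoCapture wrapper this is a one-off property change rather than a per-frame computation. A sketch, assuming the device camera supports the requested size (note that 240x320 in matrix terms, rows by columns, is a width of 320 and a height of 240):

    import org.opencv.highgui.Highgui;
    import org.opencv.highgui.VideoCapture;

    VideoCapture camera = new VideoCapture(0);
    // Request 320x240 frames directly from the camera, making the
    // per-frame pyrDown call unnecessary.
    camera.set(Highgui.CV_CAP_PROP_FRAME_WIDTH, 320);
    camera.set(Highgui.CV_CAP_PROP_FRAME_HEIGHT, 240);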

In the blobUpdate method I send updates about the location of the tracked object and its size. I maintain a short history of these readings and use dynamic time warping to detect gesture input. Expanding the trace for this method shows that my gesture classification function takes the most time to execute. Since dynamic time warping, by design, finds alignments between sequences of different lengths, I can reduce the frequency of checking for gestures. By only checking for gestures on every second call of blobUpdate, I effectively halve the time spent checking for gestures, while maintaining a high recognition rate thanks to dynamic time warping's resilience to differences in sequence length.
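The throttling itself is just a parity check on an update counter. A minimal sketch, where Reading, history and classifyGesture are hypothetical stand-ins for my history buffer and DTW classifier:

    private int updateCount = 0;

    void blobUpdate(Point position, double size) {
        history.add(new Reading(position, size));  // hypothetical history buffer
        // Run the DTW classifier on every second update only; DTW
        // copes with sequences of different lengths, so recognition
        // is largely unaffected.
        if (++updateCount % 2 == 0) {
            classifyGesture(history);
        }
    }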

Conclusion
The case study in this post demonstrates how method profiling can be used to identify areas for optimisation, something that is particularly beneficial in a computationally expensive application. By profiling a few seconds of a computer vision algorithm's execution I captured data about thousands of method invocations. From the trace data I identified a redundant method call which accounted for 30% of my algorithm's execution time, and found an optimisation for the second most expensive call.