Friday, December 24, 2010

JSON-RPC with a client-side library

A while ago I posted about my experiences with JSON-RPC. The conclusions were both positive and negative - JSON-RPC is a very promising protocol, but unfortunately I found that it is not so easy to find good client- and server-side libraries that support it. In particular, the JSON-RPC protocol handler in the Android client had to be implemented from scratch, without any library support.

Then a commenter proposed checking out this JSON-RPC library. Many things have happened since, but eventually I was able to get some first-hand experience with this library.

Click here to download the example program.

First, deploy the server part (in the server/ directory) as described in this post (actually, the server is the same as the previous JSON-RPC example server). Then deploy the client part and you get the same old batch calculator functionality.

Under the hood, however, a lot of things have changed. Instead of fiddling with the JSON tokenizer and the Apache HTTP library, the JSON-RPC invocation is just one call:

double d = client.callDouble(
        entry.opMethod(),
        entry.getV1(),
        entry.getV2() );

This simplicity comes at a price, however. The library only supports JSON-RPC 1.0, which means there are no batch invocations and no standardized exception handling. The client actually pumps the elements of the batch operation one by one to the server, which could create hairy transaction-consistency problems were the application a real data-intensive system. I could have refactored the server side and sent the entire batch in one request (representing each operation as a JSON array sub-list), but then I would have lost JSON-RPC's simplicity on the server side.

The right way is to implement JSON-RPC 2.0 support in the library. Let's see if I or anybody else has some spare cycles in the new year for this.

That's about it for the valuable content, now the advertisements. First, please note the LinkedIn share button on the right panel. Try it, I am really curious what it does. Second, a very competent ex-colleague of mine started a business selling vintage powder compacts. Weird idea, isn't it? If you find it as weird as I did, try out the site, maybe you'll get some gift ideas.

Sunday, December 5, 2010

Expandable list and checkboxes revisited

Once upon a time I wrote a nice little post about checkbox focus problems. The guy who originally asked the question was satisfied and went away. While I was looking the other way, a heated discussion started in the comments because the example program had an annoying property: it did not preserve the state of the check boxes when the groups were opened/collapsed but reordered the check marks randomly. Eventually I got a mail asking whether I could put my example program right.

Click here to download the example program.

So I did. The cause of the trouble was the adapter backing the expandable list view. It was a SimpleExpandableListAdapter which, as its name implies, is simple. In particular, it is not able to feed data to check boxes because they require boolean state while SimpleExpandableListAdapter supports only Strings (here is a post that explains the relationship between views and adapters). The solution was to write a custom ExpandableListAdapter and the random check mark reordering disappeared.
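The essence of such a custom adapter is that it owns the boolean check state and re-reads it every time a child row is drawn, so expanding and collapsing groups cannot scramble the marks. Below is a minimal sketch of the idea, not the adapter of the example program; the layout and view IDs (child_row, childname, check) and the String[]-based data model are assumptions.

import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseExpandableListAdapter;
import android.widget.CheckBox;
import android.widget.TextView;

class CheckStateAdapter extends BaseExpandableListAdapter {
    private final String[] groups;
    private final String[][] children;
    private final boolean[][] checked;          // the boolean state the adapter owns
    private final LayoutInflater inflater;

    CheckStateAdapter(Context ctx, String[] groups, String[][] children) {
        this.groups = groups;
        this.children = children;
        checked = new boolean[children.length][];
        for (int i = 0; i < children.length; ++i)
            checked[i] = new boolean[children[i].length];
        inflater = LayoutInflater.from(ctx);
    }

    public void toggle(int group, int child) {
        checked[group][child] = !checked[group][child];
        notifyDataSetChanged();
    }

    @Override
    public View getChildView(int group, int child, boolean isLast, View convertView, ViewGroup parent) {
        View row = convertView != null ? convertView : inflater.inflate(R.layout.child_row, parent, false);
        ((TextView) row.findViewById(R.id.childname)).setText(children[group][child]);
        // The check mark is always re-read from the adapter state, so it survives expand/collapse.
        ((CheckBox) row.findViewById(R.id.check)).setChecked(checked[group][child]);
        return row;
    }

    @Override
    public View getGroupView(int group, boolean isExpanded, View convertView, ViewGroup parent) {
        View row = convertView != null ? convertView
                : inflater.inflate(android.R.layout.simple_expandable_list_item_1, parent, false);
        ((TextView) row.findViewById(android.R.id.text1)).setText(groups[group]);
        return row;
    }

    @Override
    public int getGroupCount() { return groups.length; }
    @Override
    public int getChildrenCount(int group) { return children[group].length; }
    @Override
    public Object getGroup(int group) { return groups[group]; }
    @Override
    public Object getChild(int group, int child) { return children[group][child]; }
    @Override
    public long getGroupId(int group) { return group; }
    @Override
    public long getChildId(int group, int child) { return child; }
    @Override
    public boolean hasStableIds() { return true; }
    @Override
    public boolean isChildSelectable(int group, int child) { return true; }
}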

Saturday, October 30, 2010

From homegrown JSON protocol to JSON-RPC

As I discussed in the previous post, JSON is now starting to become fashionable, mostly because of its more compact encoding compared to XML. If we are committed to communicating with the server over JSON, the need for the most common client-server communication pattern, Remote Procedure Call (RPC), arises.

JSON has such a solution, called JSON-RPC. JSON-RPC is a lightweight client-server protocol. Besides the usual stuff (remote method identification, parameter and result encoding, exceptions) it has some remarkable properties that make it particularly suitable for mobile communication.

There are two JSON-RPC specifications out there, JSON-RPC 1.0 and 2.0. While 2.0 can in most respects be seen as a natural extension of 1.0, there are differences too. JSON-RPC 1.0 was designed to be peer-to-peer with the main transport protocol being TCP. HTTP was also supported but with limitations. JSON-RPC 2.0 is explicitly client-server, even though the change is only in the terminology used in the specification. The client-server nature of JSON-RPC 2.0 ensures, however, that HTTP as a transport is more naturally supported.

Let's start with the basics.

This is a JSON-RPC 1.0 request:

{"method":"add","params":[3,4],"id":0}

"method" attribute specifies the remote method, "params" attribute has a value of an array with the RPC arguments. The order matters and has to be the same as the arguments on the remote side. JSON-RPC requests and responses are connected with the "id" attribute because requests and responses does not have to be ordered. The response following a request - or over a full duplex bearer like TCP arriving at any time - does not have to be the response to the request just sent. It may be a response to an earlier request and the "id" parameter guarantees that requests can be coupled with their responses.

This is a JSON-RPC 1.0 response:

{"error": null, "result": 5, "id": 0}

"result" is the return value of the RPC call, "id" is the same as the corresponding request and "error" carries the error object if there was an error during the RPC processing. RPC 1.0 does not define the error value beside stating that it is an object.

JSON-RPC 2.0 extends JSON-RPC 1.0 in many important ways.
  • In order to be distinguishable from 1.0 traffic (backward compatibility), JSON-RPC 2.0 requests and responses must all carry a "jsonrpc": "2.0" attribute.
  • The error object is now specified. It has "code", "message" and "data" attributes (more about that in the specification).
  • It specifies one-way messages, which JSON-RPC 2.0 calls notifications. Notifications are requests without an "id" attribute. No response may be sent to a notification.
  • There are batched requests and responses. A batch is formed by having an array as the top-level element and adding JSON-RPC request or response objects to this array. E.g. the following is a batched request:
[{"method":"add","params":[2,3],"id":0,"jsonrpc":"2.0"},
{"method":"mul","params":[4,5],"id":1,"jsonrpc":"2.0"}]


and the following is a batched response:

[{"error": null, "jsonrpc": "2.0", "result": 5, "id": 0},
{"error": null, "jsonrpc": "2.0", "result": 20, "id": 1}]


The following is a JSON-RPC 2.0 error response:

{"jsonrpc": "2.0","error": {"message": "ServiceRequestNotTranslatable", "code": 1000}, "result": null, "id": 12}

That was a lot of explanation, let's see something that works!

Click here to download the example program.

This is the same client-server batch calculator that you have seen in the previous post. Use the deployment instructions of the previous post to try out the example program. If you want to try the example on a real phone, I recommend deploying the server part on the real Google App Engine infrastructure as described in the previous post.

The example program uses JSON-RPC 2.0 as communication protocol between the client and server instead of our homegrown protocol. The biggest change is on the server side, which is implemented as a Python application for Google App Engine. Instead of obscure parsing code, you see remote methods defined like this:

@ServiceMethod
def add(self, v1, v2):
    return v1 + v2

and the rest is done by the server-side Python JSON-RPC 2.0 framework, which I hacked out of this JSON-RPC 1.0 implementation. The client side does not use any framework at all, which is a major problem with this example program. So far I have not been able to find any decent JSON-RPC 2.0 client-side framework suitable for Android. If you have an idea, please comment!

Thursday, October 21, 2010

Client-server communication with JSON

A long time ago, XML was the emerging, fashionable technology. Those times have clearly passed and nowadays XML is just another boring piece of basic technology. If you want to be trendy, you have to go for something else. Luckily, there is a new "in" thing, JavaScript Object Notation, JSON. In this post, I show you a simple client-server application based on Android, JSON and Google App Engine.

JSON is based on JavaScript object literals and is a textual representation of complex data instances. It has 4 basic types: strings, numbers, booleans and null. Objects and arrays can be constructed out of these 4 types; an object is really just JSON slang for a list of key-value pairs. The entire thing is specified in RFC 4627, so I will stick to just some simple examples so that you get that JSON feeling if you don't want to read the RFC.

This is a JSON number: -4.7e12
This is another JSON number: 0
This is NOT a JSON number: 00012 (no leading zeros!)
This is a JSON string: "JSON"
This is another JSON string: "\u005C" (always 4 hex digits after the \u, just one character, the \)
This is not a JSON string: 'notJSON'
These are JSON literals: true, false, null
This is a JSON array: [1,"hello",null]
This is a JSON array having two other JSON arrays as elements: [[1,"hello"],[2,"hallo"]]
This is a JSON object: {"a":1,"b":"hello"}
This is a JSON object having another object associated with "a" key and an array with "b" key:
{"a":{"aa":1,"bb":2},"b":[3,4]}

I think this is enough for a start because it is evident that JSON has quite powerful data structures. Arrays are the equivalent of Lisp lists while objects can be represented as lists of dotted pairs.
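On Android, the org.json package that ships with the platform maps these literals to JSONObject and JSONArray. As a quick, hedged illustration (the class name is made up), this builds the last example above:

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

public class JsonLiteralDemo {
    public static void main(String[] args) throws JSONException {
        JSONObject inner = new JSONObject();   // {"aa":1,"bb":2}
        inner.put("aa", 1);
        inner.put("bb", 2);

        JSONArray list = new JSONArray();      // [3,4]
        list.put(3);
        list.put(4);

        JSONObject outer = new JSONObject();   // {"a":{"aa":1,"bb":2},"b":[3,4]}
        outer.put("a", inner);
        outer.put("b", list);

        System.out.println(outer.toString());
    }
}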

How does JSON compare to the incumbent serialization standard, XML? JSON is fashionable in mobile communication because it is somewhat more bandwidth-efficient than XML. E.g. the last example can be written in (ugly) XML like this:

<l>
  <a>
    <aa>1</aa>
    <bb>2</bb>
  </a>
  <b>
    <k>3</k>
    <k>4</k>
  </b>
</l>

Not counting the white space, this is 58 characters while the JSON representation is just 32 characters, about 45% less. The situation is worse if we consider the many "well-designed" XML documents out there (like e.g. SOAP documents) which are full of namespace declarations, long tag names, etc. JSON, with its simple yet powerful syntax, definitely has a place in mobile application design.

Click here to download the example program.

I will demonstrate the power of JSON with a simple client-server example application. This application is a batch-mode calculator. The user first enters operations on the device's UI. These operations are not calculated immediately. When the user selects the "Process list" menu item, the collected operations are sent to the server in one batch. The server calculates the operations and sends the results back to the client, which displays them. As you might have guessed, the client and the server communicate with JSON data structures.



The client part needs to be deployed on the Android emulator (let's start with the emulator, I will return to this) as usual. The server part, however, requires the Google App Engine SDK for Python. Download the SDK, edit server/ae_jsoncalcserver.sh so that it suits your directory layout, then start the shell script (Windows users will have to turn this one-line shell script into a .bat file). If you access localhost:8080 with your browser, you get a short message that GET is not supported. If you get there, you installed the server part properly.

Now you can start the client on the emulator and you can start to play with it. The most interesting part is the process() method in JsonCalc.java. I structured the program so that the generation of JSON request and the parsing of the response is all there. The org.json package has been part of Android from the beginning and this package is really easy to use. For the server part, I used the simplejson JSON parser for Python.

The Android client logs the JSON requests and responses in the Android log (adb logcat). I represented the batch operation and its result as an array of objects. Request and response objects have different keys.

Request example:
[{"op":2,"v1":2,"id":1,"v2":3.3},{"op":3,"v1":4,"id":2,"v2":6},{"op":4,"v1":6.6,"id":3,"v2":2.2}]

Response example:
[{"id": 1, "result": -1.2999999999999998}, {"id": 2, "result": 24}, {"id": 3, "result": 2.9999999999999996}]


In case you want to try it out on a real phone, you'd better deploy the server part on Google infrastructure. Then edit Config.java in the client source so that APP_BASE_URI points to your server's address and you are ready to go. When you recompile, make sure that you delete the client/bin/classes directory - chances are that javac will not notice the changes you made to Config.java because of its broken dependency resolution.

Monday, October 11, 2010

Push service from Google

Update: the first version of this post contained factual errors. Thanks to Tejas for pointing these out, you can find his experiences with C2DM here. The example program was also updated to fix these errors.

Android 2.2's most important improvement is without doubt the significantly faster virtual machine. The API, however, got some new features too. For me, the most intriguing new feature is the access to Google's experimental push services.

I have already posted about the push feature in relation to the iBus//Mobile asynchronous communication package so push is not new to Android. There are also open source alternatives like MQTT, Deacon or Urban Airship. It has also been present in certain Google applications from the beginning. Whenever the Android Gmail application raises a notification when a new mail arrives on the server, you see push in operation. It is pretty fast according to my experience, even though from time to time it takes longer for the notification to arrive. 2.2 opened that already existing mechanism to general-purpose applications, even though the push service is still in early beta.

In order to do push, either one has a proper push bearer (a network mechanism able to deliver unsolicited messages to the device) or such a mechanism has to be simulated. Currently only two real push bearers are deployed widely on mobile networks, SMS and BlackBerry's proprietary push solution. SMS is costly and the BlackBerry solution is available only for BlackBerry, so the push bearer has to be simulated. The most common simulation method relies on the device opening a TCP connection to the push server. As the TCP stream is full-duplex, the server can send the push message to the device, provided that the stream is still alive. And that's the tricky bit whose complexity should not be underestimated. TCP streams time out if there is no communication on them and the other side does not necessarily notice it. A constant ping-pong traffic needs to be generated to prevent this. The frequency, however, is critical. Too frequent ping-pongs and the data cost associated with ping-ponging becomes unacceptable and the battery is drained quickly. Too rare ping-pongs and the device or the server will notice that the TCP stream was closed only after a long timeout. For the user, this means that he did not get his urgent message immediately, only after, say, an hour.

There is no perfect solution to this problem, therefore it is a good thing that Google did its best to create a simulated push bearer, along with the server infrastructure that is able to keep that many TCP sockets open, and shared that infrastructure with us. The Google push architecture has the following elements.

  • The device, which authenticates and registers with the push server (Cloud to Device Messaging in Google lingo) and provides the resulting registration ID to the 3rd Party Application Server.
  • The 3rd Party Application Server, which uses the registration ID generated for the device to send push requests to the push server.
  • Push server provided by Google that authenticates push requests from 3rd Party Application Servers and delivers the messages to the devices.
Click here to download the example program.

Cloud to Device Messaging (C2DM) has many tricky aspects but I wanted to create a test program that is as simple as possible. To try it out, you need the following:

  • A PC connected to the Internet so that Google C2DM servers can be reached (and C2DM servers can reach us).
  • Android emulator with 2.2+Google API AVD created (API level 8).
  • Google App Engine SDK for Python.
  • A Google account that you register with Google for push. It is better not to use your real e-mail address because you have to insert the password for this account into the server script. Use aexp.push as package name for the application in the registration form.
Do the following.
  • Unpack the download package. Start the Android emulator. Create a Google account if no such account exists (Settings/Accounts & sync). This account does not have to be the same that you registered for push, it can be any Google account that you know the password for.
  • Enter the "client" directory in the download package, update client/src/aexp/push/Config.java with the Google account you registered for push (C2DM_SENDER)
  • Open server/pushserver/pushserver.py and update it with your push Google account (lines 165 and 167). Unfortunately here you have to insert the password for your push Google account into the server script.
  • Enter the "server" directory, customize "ae_pushserver.sh" according to your directory layout and start it. Google App Engine SDK starts.
  • Start the Push application on the emulator, select the account using the Menu and wait for the "Registered" message to appear. Now the server application is ready to deliver push messages.
  • Go to http://localhost:8080 with your browser and type a message. Select the account from the drop-down list that you configured on the device and click the submit button. The message should appear on the emulator screen.


If you want to do the same trick from a real phone, you need a server with a fixed IP address or a fixed DNS name and run the App Engine SDK there. Or better, deploy it on the real thing, on Google infrastructure. In any case, make sure that you update the server address in client/src/aexp/push/Config.java (and delete the client/bin/classes directory as there are occasionally troubles with javac's dependency resolution).

About the application. The application reuses the c2dm utility library from the JumpNote sample application. When the account is selected, the application registers with the push server using the phone user's credentials and the application ID that you registered with Google. Then comes the interesting bit. After the application registers with the push server, it sends the registration ID to the application server. The application server will use the registration ID to talk to the push server when it wants to push. The server uses the Google account that you selected for push for authentication and therefore it needs the password for this account.
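For reference, the registration step boils down to firing a special intent at the C2DM service; the registration ID then arrives in a broadcast. A hedged sketch (the example program goes through the JumpNote c2dm helper instead of doing this directly, and the sender account is whatever you registered for push):

import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;

public class C2dmRegistration {
    // Asks the C2DM service to register this application; the registration ID
    // is delivered later in a com.google.android.c2dm.intent.REGISTRATION broadcast.
    public static void register(Context context, String senderAccount) {
        Intent intent = new Intent("com.google.android.c2dm.intent.REGISTER");
        // "app" identifies the requesting application to the C2DM service.
        intent.putExtra("app", PendingIntent.getBroadcast(context, 0, new Intent(), 0));
        intent.putExtra("sender", senderAccount);   // the Google account registered for push
        context.startService(intent);
    }
}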

The implementation in the example application server program is not efficient because it always requests an authentication token before it sends the push message. In an efficient implementation the token would be requested only once and used to authenticate many messages. Eventually the token expires and only then should a new authentication token be requested.

The client application also sends the client account name to the server, but it is not used for authentication; it is only needed so that you can select the push target by account name in the web application's drop-down menu.

Friday, July 9, 2010

Direct service invocation considered harmful

I have a more than 2 year old - and quite popular - blog post about service binding and I thought I knew enough about Android services. Then I ran into a service binding pattern that I was not aware of before. I would like to clarify before I start that every alternative in this post "works" - meaning it does what it is intended to do on the Android versions released so far. Still, it seems that the hack I am going to present got into tutorials and books. I think there is a risk that a good number of coders think of it as the right thing to do and not as a hack.

Click here to download the example program.

This is a version of the good old DualService presented in this blog entry. The main difference is that the AIDL file has been removed and instead you will find this strange construct in DualService.java:

public class DualServiceBinder extends Binder implements ICounterService {
    public int getCounterValue() {
        return counter;
    }
}

and in onBind:

public IBinder onBind( Intent intent ) {
    return dualServiceBinder;
}

Now this DualServiceBinder "Binder derivative" is a strange thing. Binders are serialization-deserialization constructs that turn RPC invocations into byte streams and back. That byte stream can be sent through the Binder kernel driver and that's how interprocess communication works in Android. Binder derivatives are normally generated by the aidl tool. You can write one by hand but it is not for the faint of heart: basically you have to implement the transact/onTransact methods and serialize/deserialize your data by hand. If you follow my advice, you leave it to the aidl tool.

DualServiceBinder does none of this. It implements a plain Java interface (ICounterService) and exposes a plain Java method. On the other side, in DualServiceClient, the onServiceConnected method just casts the IBinder to ICounterService and invokes the method on it.

As you might have guessed, this is a hack. The author sidestepped entirely the Android IPC mechanism and abused two properties of the Android application model.

  • By default, the Android application manager loads activities/services/providers into the Dalvik VM running in the same Linux process.
  • Classes from the same DEX file (there is one DEX file per APK) are loaded by the same class loader.
These guys here do not use Binder at all. They need an IBinder implementation because of the onBind/onServiceConnected methods' contracts, and Binder comes in handy as it is an implementation of the IBinder interface. Other than that, the DualServiceBinder instance is just passed from the Service to the Activity without any IPC taking place. That is why it can be cast back to a simple Java interface and the method can be invoked without any IPC mechanism. The hack is actually faster than the AIDL-based implementation (as there is no serialization/deserialization, just in-memory parameter passing) but this approach is not supported in any way by the Android API contracts.
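For contrast, the supported shape of the same service looks roughly like this (a hedged sketch; the real DualService in the linked blog entry differs in detail). The ICounterService.Stub class is generated by the aidl tool from an ICounterService.aidl containing a single int getCounterValue() method.

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class CounterService extends Service {
    private int counter;

    // The Stub is the Binder derivative generated by aidl; it does the
    // transact/onTransact marshalling that the hack above skips.
    private final ICounterService.Stub serviceBinder = new ICounterService.Stub() {
        public int getCounterValue() {
            return counter;
        }
    };

    @Override
    public IBinder onBind(Intent intent) {
        return serviceBinder;
    }
}

// On the client side, onServiceConnected() would call
// ICounterService.Stub.asInterface(binder) instead of a plain Java cast.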

Do you think I found the hack in some shady, unsupported freeware? No, I found it in a book (actually, my attention was drawn to this book by a fellow Android programmer). The book proposes another trick: the "service interface" of their Binder descendant exposes just a getService() method that returns a reference to the Service instance, and later the Activity invokes public methods of the service directly, circumventing even their own tricky service interface.

I definitely think it is a bad idea to propagate this hack even though it works on the existing Android implementations and is somewhat faster than the regular, AIDL-based invocation.

Thursday, June 17, 2010

Plugins

I blogged 2 years ago that Android is a surprisingly modern application platform, I even said that it was the most modern application platform. This is partly due to Android's advanced component features. These features are not always evident to the naked eye so I present here a standard exercise to demonstrate them.

Click here to download the example program.

In this exercise we will implement a plugin framework for an example program. Plugins are components that connect to the main application over a uniform interface and are dynamically deployable. The download package contains two Android applications. The first, pluginapp, is the example application that will use the plugins. The second, plugin1, is a plugin package that contains two plugins for pluginapp. The plugin1 package is not visible among the executable applications; it does not expose any activity that may be launched by the end user. If deployed, however, its plugins appear in pluginapp. Plugin deployment/undeployment is dynamic: if you uninstall the plugin1 package while pluginapp is running, pluginapp notices the change and the plugins disappear from the plugin list.




In order to implement these functionalities, the following features of the Android framework were used.


  • Each plugin is a service; plugin1 exposes two of them. In order to identify our plugins, I defined a specific intent action (aexp.intent.action.PICK_PLUGIN) and attached an intent filter listening for this action in plugin1's AndroidManifest.xml. In order to select a particular plugin, I abused the category field again. One plugin can therefore be accessed by binding a service with an intent whose action is aexp.intent.action.PICK_PLUGIN and whose category equals the category listed in the AndroidManifest.xml of the package that exposes the plugin service.
  • Plugins are discovered using the Android PackageManager. Pluginapp asks the PackageManager to return the list of services that declare the aexp.intent.action.PICK_PLUGIN action. Pluginapp then retrieves the category from this list and can produce an intent that is able to bind the selected plugin (see the sketch after this list).
  • Pluginapp updates the plugin list dynamically by listening to ACTION_PACKAGE_ADDED, ACTION_PACKAGE_REMOVED and ACTION_PACKAGE_REPLACED broadcasts. Whenever any of these is detected, it refreshes the plugin list.
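A hedged sketch of the discovery step follows; the action string matches the post, while the class and method names are illustrative.

import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import java.util.Iterator;
import java.util.List;

public class PluginDiscovery {
    static final String PICK_PLUGIN = "aexp.intent.action.PICK_PLUGIN";

    // Lists every service that declared an intent filter for the plugin action and
    // builds the intent that could later be passed to bindService().
    static void listPlugins(PackageManager pm) {
        List<ResolveInfo> services = pm.queryIntentServices(
                new Intent(PICK_PLUGIN), PackageManager.GET_RESOLVED_FILTER);
        for (ResolveInfo ri : services) {
            if (ri.filter == null)
                continue;
            Iterator<String> categories = ri.filter.categoriesIterator();
            if (categories == null)
                continue;
            while (categories.hasNext()) {
                String category = categories.next();
                Intent bindIntent = new Intent(PICK_PLUGIN);
                bindIntent.addCategory(category);   // enough to bind the selected plugin
                System.out.println(ri.serviceInfo.name + " -> " + category);
            }
        }
    }
}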
Even though Android is not as sophisticated as OSGi when it comes to component frameworks, it has every necessary element. It has a searchable service registry, packages can be queried for the services they expose and there are deployment events.

Sunday, June 6, 2010

Shake-walk-run

Well, anybody can draw up frightening equations like I did in the acceleration signal analysis report in my previous post and claim that they work just because of some diagrams made by a mathematical analysis program. You'd better not believe this sort of bullshit before you experience it yourself. To help you along that path, I implemented a simple application that lets you try out how these methods work in real life.

Click here to download the example program.

If you open this simple application, you can enable sampling with a checkbox. Once the sampling is switched on, the service behind the app constantly samples the acceleration sensor and tries to detect motion patterns. It is able to distinguish 3 patterns now: shake, walk and run. Not only does it distinguish these patterns, it also tries to extract the shake, step and run counts. Unfortunately I tested it only on myself so the rule base probably needs some fine tuning, but it works for me. The walk detector uses the w3 wavelet (read the report if you are curious what the w3 wavelet is) and is therefore a bit slow: it takes about 2-3 seconds before it detects walking and starts counting the steps, but that delay is consistent - if you stop walking, it continues counting the steps it has not counted yet.

The moral of this application is that the tough part of detecting motion patterns is the rule base behind the signal processing algorithms. It is fine that wavelets separate frequency bands and that signal power calculation allows producing "gate" signals that enable/disable counting for certain motion patterns. But where are the signal power limits for e.g. shaking and how do you synchronize the "gate" signals with the signals to be counted? This example program is a good exercise as you can observe how it uses delayers and peak counters synchronized with signal power calculators to achieve the desired functionality.

Some notes on the implementation. The machinery behind the background service that sends events to an activity is described in this and this blog post. Also, observe the debug flags in SamplingService; if you set one of these, the service spits out signal data that you can analyse with Sage later. I did not include the keep-awake hack in this implementation because it was not necessary on my Nexus One with the leaked Android 2.2. I put a public domain license on the download page because somebody was interested in the license of the example programs.

I have to think a bit, where I want to go from here. The most attractive target for me is to formalize all these results into a context framework for Android applications but I may fiddle a bit more on signal processing.

Friday, May 14, 2010

Analysing acceleration sensor data with wavelets

Before we get back to Android programming, we need some theoretical background on signal analysis. The document is somewhat heavy on math. To quote Dr. Kaufman's Fortran Coloring Book: if you don't like it, skip it. But if your teacher likes it, you failed.

Click here to read the report.

If you are too impatient to read, here is the essence.

There is no single perfect algorithm when analysing acceleration signals. The analysis framework should provide a toolbox of different algorithms, some working in the time-domain, some operating in the frequency domain. The decision engine that classifies the movements may use a number of algorithms, a characteristic set for each movement type.

It has been concluded in the medical research community that wavelet transformation is the optimal algorithm for frequency-domain analysis of acceleration signals. The report presents concrete cases of how wavelet transformation can be used to classify three common movements: walking, running and shaking. In addition, the wavelet transformation provides data series that can be used to extract other interesting information, e.g. step count.

For those who would like to repeat my experiments, I uploaded the prototype. First you need Sage (I used version 4.3.3). Download and unpack the prototype package and enter the proto directory. Then launch Sage (with the "sage" command) and issue the following commands:

import accel
accel.movements(5)

now you will be able to look at the different waveforms, e.g.

list_plot(accel.shake_w5)

Sage is scriptable in Python. If you know Python, you will find everything familiar, if not - bad luck, you won't get far in Sage.

Wednesday, May 5, 2010

Movement patterns

This blog entry does not have an associated example program; the samples of this post were taken using the sensor monitoring application presented here. This time I will give a taste of what can be achieved by using the acceleration sensor to identify movement patterns. Previously we saw that the acceleration sensor is pretty handy at measuring the orientation of the device relative to the Earth's gravity. In that discussion we were faced with the problem of what happens if the device is subject to acceleration other than the Earth's gravity. In that measurement it caused noise. For example, if you move the device swiftly, you can force a change between landscape and portrait mode even if the device's position relative to the Earth's surface is constant. That's because the acceleration of the device's movement is added to the Earth's gravity acceleration, distorting the acceleration vector.

In this post, we will be less concerned about the device's orientation. We will focus on these added accelerations. Everyday movements have characteristic patterns and by analysing the samples produced by the acceleration sensor, cool context information can be extracted from the acceleration data.

The acceleration sensor measures the acceleration along three axes. This 3D vector is handy when one wants to measure the orientation of e.g. the gravity acceleration vector. When identifying movements, it is more useful to work with the absolute value of the acceleration because the device may change its orientation during the movement. Therefore we calculate

ampl=sqrt( x^2 + y^2 + z^2 )

where x, y, z are the components of the measured acceleration vector along the three axes. Our diagrams will show this ampl value.
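In code this amounts to a few lines in the sensor callback; a hedged sketch (listener registration and the log tag are illustrative):

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.util.Log;

public class AmplitudeListener implements SensorEventListener {
    // Computes the orientation-independent acceleration amplitude for every sample.
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER)
            return;
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];
        double ampl = Math.sqrt(x * x + y * y + z * z);   // ampl = sqrt(x^2 + y^2 + z^2)
        Log.d("AmplitudeListener", "ampl=" + ampl);
    }

    public void onAccuracyChanged(Sensor sensor, int accuracy) {
    }
}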

Let's start with something simple.



This is actually me walking at Waterloo station. The diagram shows the beginning of the walk. It starts with a stationary position, then I start to walk. In order to understand the pattern, we must remember that acceleration is the change of the velocity vector. This means that if we move quickly but our velocity does not change, we have 0 acceleration. On the other hand, if we move with constant speed but our direction changes, that is quite a significant change in the velocity vector. (Centripetal acceleration arises exactly this way: a point on the Earth's surface - and we along with it - moves with constant speed, but as the velocity is tangent to the Earth's surface, its direction constantly changes as the Earth rotates.)

The same goes for walking. Whenever the foot hits the ground, its velocity vector changes (it was moving down, now it starts to move up). Similarly, when the foot reaches the highest point, it was moving up and then it starts to move down. The maxima and the minima of the acceleration amplitude are at these points. As these accelerations are added to the Earth's gravity acceleration (9.81 m/s^2), you will see an offset of about 10. The sine wave-like oscillation of the acceleration vector's absolute value is unmistakable. The signal is not exactly a sine wave but the upper harmonics attenuate quickly. You can see that my walking caused about 1.2 g fluctuation (-0.6 g to 0.6 g).

Running is very similar except that the maximum acceleration is much higher - about 2 g peak-to-peak.



And now, everybody's favourite: shaking. Shaking is very easy to identify. First, the peak acceleration is very high (about 3 g in our case). Also, the peaks are spike-like (very high upper harmonic content).



In the following, I will present some algorithms that produce handy values when identifying these movements.

Thursday, April 22, 2010

Monitoring sensors in the background

Once I got a taste of making all sorts of measurements with sensors, I happily started to collect samples. Then I quickly ran into the background sensor monitoring problem. The use case is very simple: the phone is in idle state, the keyguard is locked, but the application collects and processes sensor data. For example, I wanted to record acceleration sensor data while cycling. I started the sensor application, started the measurement and locked the keyguard. On the Nexus One this means that no further sensor data is delivered to the application until the screen is on again.

The source of the problem is that the sensor is powered down when the screen is off. This is implemented below the Android framework (either in the kernel driver or in hardware - I would be curious if anyone knows the answer) because if you recompile the latest sources from android.git.kernel.org, sensor data is delivered nicely to the application in the emulator even if the keyguard is locked (the stock 2.1 emulator's Goldfish sensor does not emit any events, which is why you have to recompile from source). The fact remains: if you want acceleration sensor data on the Nexus One, the screen must be on. This pretty much kills all the user-friendly applications that want to analyze sensor data in the background while the phone is idle. In the worst case, the screen must be constantly on (e.g. for a pedometer that needs to measure constantly because you never know when the user takes a step), but the situation for a simple context reasoner service is not much better. Such a service (read the vision paper for background, you need free registration to access it) may want to sample the sensors periodically (e.g. collecting 5 sec of samples every minute). In order to perform the sampling, the screen would have to be switched on for that duration, which would result in a very annoying flashing screen and increased power consumption.

You can observe these effects in the improved Sensors program that is able to capture sensor data in the background.

Click here to download the example program.

Start and deploy this program and you will see the familiar sensor list. If you click on any list item, the old interactive display comes up. The improvement here is that the application measures the sensor sampling frequency. The rate can be set in the settings menu, from the main activity. If, however, you long-press on a list item, sampling is started in the background. In this case there is no interactive display of the samples; they are always saved into the /sdcard/capture.csv file. The background sampler does respect the sampling rate setting, however. The background sampler is implemented as a service.

So far there is nothing extraordinary. You may notice the ScreenOffBroadcastReceiver class in SamplingService that intercepts ACTION_SCREEN_OFF broadcasts and does something weird in response. In order to understand its operation, you must be aware that the power manager deactivates all wake locks stronger than PARTIAL_WAKE_LOCK when the keyguard powers the device down. Obtaining for example a SCREEN_BRIGHT_WAKE_LOCK in the broadcast receiver would be of no use because at this point the screen is still on and the wake lock would be deactivated when the screen is turned off. Instead, the broadcast receiver starts a thread, waits for 2 sec (an experimental value) and then activates the wake lock with the wakeup option. Try pushing the power button when the background sampling is on: the screen goes off for 2 seconds but then it is turned on again and you will see the keyguard screen. If you check the log, you will see that the sensor sampling stops for those 2 seconds and then comes back on. Don't worry, when the background sampling is stopped (by long-pressing the sensor's name in the list again) the screen will turn off correctly.
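A hedged sketch of the trick (class names and the wake lock tag are illustrative; the real SamplingService differs in detail and also releases the lock when sampling stops):

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.PowerManager;

public class ScreenOffReceiver extends BroadcastReceiver {
    private final PowerManager.WakeLock wakeLock;

    public ScreenOffReceiver(Context context) {
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        wakeLock = pm.newWakeLock(
                PowerManager.SCREEN_BRIGHT_WAKE_LOCK | PowerManager.ACQUIRE_CAUSES_WAKEUP,
                "SamplingService");
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        if (!Intent.ACTION_SCREEN_OFF.equals(intent.getAction()))
            return;
        // Wait until the power manager has really gone to sleep, then force the screen back on.
        new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(2000);
                } catch (InterruptedException e) {
                }
                wakeLock.acquire();   // turns the screen on again, sensor sampling resumes
            }
        }).start();
    }
}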

Ugly? You bet. Not only is the solution a bad hack but it also prevents the device from switching off the screen, and those AMOLED displays plus the high-powered processors eat battery power like pigs. The decision that the sensors are powered down when the screen is off prevents the implementation of some of the most exciting mobile applications and significantly decreases the value of the platform.

But enough of the grumbling. This is the state of the art in background sensor sampling in Android (as of version 2.1 of the platform); I will continue with signal processing algorithms.

Wednesday, April 14, 2010

Android training in video format

I received an e-mail asking me to advertise another Android training material. And I do it because I have never seen Android training in video format before.

Here it is, lo and behold.

I have to admit, I am a bit skeptical. If there is text material and a video (e.g. on a news site), I always choose the text because I read pretty quickly. But I leave the decision to you. Try the free sample (increase the resolution and make it full screen, then it becomes readable) and if it works for you, consider subscribing to the course for an - IMHO - quite modest fee.

Sunday, April 11, 2010

Widget technologies on different mobile platforms

I wrote this paper for the Sfonge.com knowledge exchange. It tries to put all sorts of "widget" technologies into context, including Android widgets (you need free registration to get this page).

I could not figure out how to place a direct link, just look for widget.pdf on the page. Update: direct link to the document.

Back to the main issue: I found a nice confusion regarding the widget term itself. Basically there are two schools: one interprets widgets from the user's point of view, as mini-apps on a default screen, e.g. the home screen; the other approaches from the technology point of view and considers widgets to be locally installed web applications. There are interesting corner cases. For example, Android widgets are full-featured widgets from the user's point of view but not widgets from the technology point of view because Android widgets are not based on web technologies. Blackberry widgets, on the other hand, are based on a quite sophisticated web technology engine but they are always full-screen. That is not what an ordinary user would call a widget.

Anyway, I am looking forward to your comments either here or on the Sfonge.com website. I will also get the direct link to the document, don't worry.

On a personal note, I am back to Budapest from London. The 2-year UK vacation is over. Gee, I started this blog before I moved to London and now I am back. How time flies - particularly in the Android universe.

Friday, March 12, 2010

Sensors

Ever since I heard that Android devices come with a wide array of sensors, I have been excited about the possibilities. I am a firm believer in the ubiquitous computing vision and all the consequences it brings, including sensors that can be accessed wirelessly. Some parts of the vision (e.g. self-powering microsensors embedded into wallpaint) are still futuristic but mobile phones can be equipped with such sensors easily. Google has a strategy for sensor-equipped mobile devices where the sensor values are processed by powerful data centers. As I did not have an Android device before, I could not play with those sensors. Not anymore! (Again, many thanks to those gentle souls who managed to get this device to me.)

Sensors are an integral part of the Android user experience. The acceleration sensor makes sure that the screen layout changes to landscape when you turn the device (open an application that supports landscape layout, e.g. the calendar, keep the device in portrait mode, move the device swiftly sideways to the right and if you do it quickly enough, you can force the display to change to landscape mode). The proximity sensor blanks the screen and switches on the keylock when the user makes a phone call and puts the device to his or her ear so that the touchscreen is not activated accidentally. In some devices, a temperature sensor monitors the temperature of the battery. The beauty of the Android programming model is that one can use all these sensors in one's own application.

Click here to download the example program.

A very similar application called SensorDump can be found on the Android Market. Our example program is inferior to SensorDump in many respects but has a crucial feature: it can log sensor values into a CSV file that can be analysed later on (use the menu to switch the capture feature on and off). Update: from version 0.2.0 of SensorDump, sensor data logging into a CSV file is available. This does not matter much with e.g. the proximity sensor, which provides binary data, but I don't believe one can understand at a glance what goes on with e.g. the accelerometer during a complex movement just by looking at the constantly changing numbers on the device screen.

I can see the following sensors on my Nexus One.

BMA150 - 3-axis accelerometer
AK8973 - 3-axis Magnetic field sensor
CM3602 - Light sensor

Some sensors are exposed as more than one logical sensor; for example the AK8973 is also presented as an orientation sensor and the CM3602 as the proximity sensor. This is just software, however: these duplicate logical sensors use the same sensor chip but present the sensor data in a different format.
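The list above can be produced programmatically; a hedged sketch of such an enumeration (the log tag is an assumption):

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.util.Log;
import java.util.List;

public class SensorLister {
    // Dumps every physical and logical sensor the device reports.
    public static void dumpSensors(Context context) {
        SensorManager sm = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        List<Sensor> sensors = sm.getSensorList(Sensor.TYPE_ALL);
        for (Sensor s : sensors) {
            Log.d("SensorLister", s.getName() + " (type " + s.getType()
                    + ", vendor " + s.getVendor() + ")");
        }
    }
}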

Let's start with the most popular sensor, the accelerometer. It measures the device's acceleration along the 3 axes. A logical but somewhat unintuitive property of this sensor is that its zero point is free fall - otherwise the Earth's gravity acceleration is always present. If the device is not subject to any other acceleration (the device is stationary or moves with constant speed), the sensor measures the gravity acceleration, which points toward the center of the Earth. This is commonly used to measure the roll and the pitch of the device; try the excellent Labyrinth Lite game on the Android Market if you want a demonstration.

The graph below shows sensor data in two scenarios (note that all the data series can be found in the download bundle under the /measurements directory). The red dots show the value of the accelerometer when the device was turned from horizontal position to its side, right edge pointing to the Earth. The green dots show the sensor values when the device was tilted toward its front edge so that at the end the upper edge pointed toward the Earth.



This is all beautiful but don't forget that the acceleration sensor ultimately measures acceleration. If the device is subject to any acceleration other than the gravity acceleration (remember the experiment with the portrait-landscape mode at the beginning of the post), that acceleration is added to the gravity acceleration and distorts the sensor's data (provided that you want to measure the roll-pitch of the device). The following graph shows the accelerometer values when the device was lying on the table and I flicked it. The device accelerated on the surface of the table and the smaller blue dot shows the value the accelerometer measured when this happened - as if the device had been tilted to the right.


The second sensor is the magnetic field sensor, the compass. As far as I know, this sensor is not used for anything by the base Android applications; it is all the more popular for all sorts of compass applications. The magnetic sensor measures the magnetic field vector of the Earth, represented in the device's coordinate system. In 3D, this vector points toward the magnetic north pole and into the crust of the Earth. The following graph shows the scenario when the device was lying on the table and was rotated in a full circle on the surface of the table.



Even though the magnetic sensor is not subject to unwanted accelerations like the accelerometer, it is subject to the influence of metal objects. The following graph shows the values of the magnetic sensor when the device was lying on the table and after a while I put a small pair of scissors on top of it. You can see that there are two clusters of sensor values: one with the scissors, one without.


The third sensor is the light sensor that doubles as a proximity detector. The light sensor is more evident but the proximity detector deserves some explanation. The proximity detector is really a light sensor with binary output: if blocked, it emits 0.0, otherwise it emits 1.0. The photo below demonstrates the location of the sensor and how to block it.

Friday, March 5, 2010

Expandable list with CWAC

I received a comment on the expandable list adapter post suggesting that I take a look at the CommonsWare Android Components (CWAC) project. CWAC aims to provide off-the-shelf components for Android programmers, which is a very interesting proposition. In particular, the EndlessAdapter does pretty much the same as my example program in that post, except that the CWAC component offers a nice API to programmers while my program is nothing more than an example from which parts can be copied into somebody else's code.

Click here to download the example program.

Still, I think it is interesting to share my experience with the CWAC EndlessAdapter because I reimplemented my example program using it. The largest - and, IMHO, most annoying - difference is that EndlessAdapter does the caching from the slow data source and the updating of the list in two separate stages. This means that if an application decides to cache more than one item, those items will not appear in the list until all the items of the particular batch have been loaded. If you execute the example program, you will see batches of 5 items appearing (because the code preloads 5 items at once). EndlessAdapter really leaves only one other option open: caching just one item. This is less user-friendly, however, because the user has to wait for each item when the list is scrolled down. So I think the component should definitely support updating the list while a larger batch of items is being fetched from the data source. This would require a change to the component API.

The other difference is more a matter of taste. Personally, I found combining the loading widget and the real data widget in the same row layout and fiddling with their visibility hard to maintain. My test program used a separate row layout for the loading widget, which is easier to maintain but less efficient.

Anyway, my goal with this post was to advertise the CWAC project because I believe that the Android component market is an important one. That's true even if Android's component support could be better.

Thursday, March 4, 2010

Two-dimensional lists

I got another question: whether TableLayout can be used like a two-dimensional ListView. My initial reaction was that it cannot be done because TableLayout just arranges items; all the complicated focusing/highlighting logic in ListView is completely missing from TableLayout. I realized later on, however, that TableLayout can contain Buttons and the rest is taken care of by the general focus handling logic in ViewGroup.

Click here to download the example program.

The only exciting part of this rather simple program is the way an array of Buttons is made to look like selectable list items. The key is that the Buttons are styled. For example:

<Button android:id="@+id/button_1"
    style="@style/MyButton"
    android:text="1"/>

The referred style is in res/values/styles.xml. The funny part is this line:

<item name="android:background">@android:drawable/list_selector_background</item>

This line was copied straight from the system style of ListView and applies the ListView selector (that defines the appearance of list elements in ListView in their different states) to the Button. No wonder the Button behaves like a list element.

Tuesday, March 2, 2010

Progressively loading ListViews

Again I received a mail from somebody who asked for lists with a forward/backward paging option. These lists are not trivial to implement but I was more intrigued by why anyone would want such a clumsy UI construct. Then it turned out that the guy wanted to back the list with data from a web service. Obviously, he did not want to wait until the entire list had been loaded over the network but wanted to present something to the user as soon as possible.

Such a list can be found in Android Market when the list of applications is progressively loaded from the network server. The problem is worth a simple test program to analyse because it clearly shows the power of the Android adapter concept.

Click here to download the test program.

Our test application does not actually connect to the network but simulates a slow data source. If you check the Datasource class, you will find that even though this "data source" is a simple array, the accessor method adds a 500 ms delay to every access. This simulates the fact that loading from certain data sources takes time. The trick is in the PageAdapter class. This class adds a View to the end of the list ("Loading ..."). Android Market animates this view but the idea is the same. Whenever this View is drawn, the adapter kicks the loading thread, which progressively fetches new elements from the data source, adds them to the Adapter's data set and notifies the adapter that the data set has changed. There is the usual fiddling with a Handler to make sure that notifyDataSetChanged() is invoked in the context of the UI thread. The Adapter constantly changes the value returned by the getCount() method as the items are loaded from the data source. When the whole data set is loaded, the "Loading ..." widget is not drawn anymore and therefore nothing triggers the loading thread.
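A hedged sketch of the core of such an adapter - the real PageAdapter differs in detail, and the Datasource interface, field names and layouts here are assumptions:

import android.content.Context;
import android.os.Handler;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ArrayAdapter;
import android.widget.TextView;
import java.util.List;

public class PageAdapter extends ArrayAdapter<String> {
    public interface Datasource {            // assumed shape of the slow data source
        boolean hasMore();
        List<String> nextPage();             // slow call, e.g. 500 ms per item
    }

    private final Handler handler = new Handler();   // created on the UI thread
    private final Datasource datasource;
    private boolean loading = false;

    public PageAdapter(Context ctx, List<String> items, Datasource datasource) {
        super(ctx, android.R.layout.simple_list_item_1, items);
        this.datasource = datasource;
    }

    @Override
    public int getCount() {
        // one extra row for the "Loading ..." marker while there is more data to fetch
        return super.getCount() + (datasource.hasMore() ? 1 : 0);
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        if (position < super.getCount())
            return super.getView(position, convertView, parent);
        // The marker row is being drawn: kick the loader thread exactly once.
        if (!loading) {
            loading = true;
            new Thread(new Runnable() {
                public void run() {
                    final List<String> page = datasource.nextPage();   // off the UI thread
                    handler.post(new Runnable() {                      // back to the UI thread
                        public void run() {
                            for (String item : page)
                                add(item);     // also fires notifyDataSetChanged()
                            loading = false;
                        }
                    });
                }
            }).start();
        }
        TextView loadingView = new TextView(getContext());
        loadingView.setText("Loading ...");
        return loadingView;
    }
}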

Here is how it looks.



No, wait a minute, here is how it looks.


Yes! After having blogged about Android for more than 2 years, I finally put my hands on an actual Android phone. The story is complicated but the essence is that a friend thought that I deserved that phone and gave it to me. Thanks to everyone involved!

Saturday, February 13, 2010

Expandable lists and check boxes

I got another trivial question in a comment: how to add checkboxes to an expandable list view like this one? Nothing could be simpler: you just add a CheckBox control to the child view that forms the row and that's it. Except that it does not work without some tweaks.

You can download the example program from here.



The first bizarre thing you notice in res/layout/child_row.xml is that the CheckBox is made non-focusable. Why do that when we want the checkbox to capture "click" events beside "touch" events? There is a complicated answer already presented in this post: the ViewGroup encapsulating the list row (the LinearLayout in res/layout/child_row.xml) must retain the focus, otherwise we don't get onChildClick events (see ElistCBox.java).

This would solve the problem for a Button but not for a CheckBox. Actually, I don't know why the Button behaves correctly and the CheckBox does not. My experience is that even if the CheckBox has the focus, clicking on the row of the CheckBox does not toggle its state. I am curious if anyone can provide an explanation; here I just record the fact. Remove the android:focusable="false" line from the CheckBox element in child_row.xml and observe that you can click on the highlighted row as much as you like but the CheckBox does not toggle. That's why I implemented it by "hand" - I took away the focus from the CheckBox, which makes the child row deliver onChildClick events, and then I toggled the state of the CheckBox programmatically. If anyone has a better solution, I would be deeply interested.
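The programmatic toggle boils down to a few lines in the child click listener; a hedged fragment that would live in the Activity's onCreate() (R.id.elist and R.id.check are assumed IDs, and the real ElistCBox.java may differ):

ExpandableListView list = (ExpandableListView) findViewById(R.id.elist);
list.setOnChildClickListener(new ExpandableListView.OnChildClickListener() {
    public boolean onChildClick(ExpandableListView parent, View v,
                                int groupPosition, int childPosition, long id) {
        // The row delivers the click because the CheckBox itself is non-focusable;
        // flip the CheckBox by hand.
        CheckBox cb = (CheckBox) v.findViewById(R.id.check);
        cb.toggle();
        return true;   // the click has been handled
    }
});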

Update: a discussion started in the comment field regarding the erratic behaviour of check boxes in this example program. See this blog post for further explanation.

Wednesday, January 20, 2010

JARs on the classpath

The directory tree of a typical Android project (at least those created by the "android create project" command) looks like this:



As you may have seen in the directory structure of countless J2SE projects, there is a directory to store the 3rd party class libraries of the project. Under the "lib" directory, you can place your Java class libraries in JAR format and they will be added to the classpath when the Android application is running.

Or will they? If you remember the APK format, there is no such thing as a directory for libs in JAR format, particularly because the Dalvik VM is simply not able to execute ordinary Java class files from those JARs. You can find JARs on the device but these JARs contain class files in DEX format (the Dalvik VM's class file format), so the extension of these files is misleading. Then where are those JARs from the lib directory that we "added to the classpath"?

You might have guessed it: these JARs are unpacked, processed by the dx tool just like the classes under the src directory and placed into the same classes.dex file where the application resides. One application is one DEX file, and JARs containing Java class files are not "added to the classpath"; they are converted into DEX format and added to the DEX file of the application. Unlike in OSGi, an Android installable package cannot share code with other system components, only services.

Saturday, January 16, 2010

Oneway interfaces

In early betas, Android IPC was strictly synchronous. This means that service invocations had to wait for the return value of the remote method to arrive back at the caller. This is generally an advantage because the caller can be sure that the called service received the invocation by the time the remote method returns. In some cases, however, it causes the caller to wait unnecessarily. If synchronicity is not required and the method has no return value, oneway AIDL interfaces may be used.

Oneway methods are specified by adding the oneway keyword to the AIDL interface definition.

oneway interface IncreaseCounter {
    void increaseCounter( int diff );
}

Only the entire interface can be declared oneway and its methods must all have a void return value. The stub compiled from a oneway AIDL interface does not have a return path for remote methods on the service side and does not wait for the method to execute on the client side. The delivery is reliable (no invocations are lost) but the timing is not guaranteed, e.g. two sequential oneway invocations may arrive at the invoked service in a different order. Oneway interfaces are therefore more complicated to use (on the other hand, they are faster and don't block the caller) and are used extensively by the Android framework internally to deliver event notifications.

Click here to download the example program.

Update: In order to address a comment in the comment section, the project was adapted to the Android Studio tool chain. Click here to download the updated sources and read this blog post on how to import the project into Android Studio.


Our primitive example has an Activity and a Service. The service exposes two interfaces: one "normal" (Counter.aidl) and one oneway (IncreaseCounter.aidl). The interesting bit here is how one service can expose two interfaces. The onBind method checks the Intent used to bind the service and returns different binders based on the Intent. The important point here is not to use the Intent extra Bundle to differentiate among the interfaces. I did this and I can confirm that even though the extras arrive at onBind (the API documentation states the contrary), the framework gets completely confused and thinks that the service has already been bound with the same Intent (the framework seemingly cannot figure out that the Intent extras were different). In the example program I therefore abused the category field, and this works nicely.
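A hedged sketch of such an onBind (the category string and the field initialisation are illustrative; the real example returns the stubs generated from Counter.aidl and IncreaseCounter.aidl):

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class DualInterfaceService extends Service {
    private static final String CATEGORY_ONEWAY = "aexp.intent.category.ONEWAY";   // assumed

    private IBinder counterBinder;           // Counter.Stub instance in the real program
    private IBinder increaseCounterBinder;   // IncreaseCounter.Stub instance in the real program

    @Override
    public IBinder onBind(Intent intent) {
        // Dispatch on the category, not on intent extras - extras confuse the framework's
        // "already bound with this intent" bookkeeping.
        if (intent.getCategories() != null
                && intent.getCategories().contains(CATEGORY_ONEWAY)) {
            return increaseCounterBinder;
        }
        return counterBinder;
    }
}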

Sunday, January 10, 2010

RemoteViews as hypertext document

In the previous blog entry I tried to demonstrate how a web programming construct, the feed, can be implemented using LiveFolders. In this entry, I would like to highlight another relatively obscure feature of the Android API, the RemoteViews widget, and argue that RemoteViews is similar in nature to another web programming construct, the hypertext document itself.

RemoteViews is the key mechanism behind the AppWidget feature of the Android platform. There are many good introductions to AppWidgets (like this or this) so I will be brief. An AppWidget is a representation of an Android application on the home screen. The application can decide what representative information it publishes on the home screen. As the screen estate is limited, the representation is obviously condensed; e.g. an application executing a long-running task can publish the progress indicator as a home screen widget. The technical problem is that it is not evident how to embed functionality (even condensed functionality) from another running application into the Launcher, the default home screen provider. If the Launcher delegated the refreshing/redrawing of the views embedded from another application to that application, then a malfunction of that application could disrupt the Launcher itself - something to be definitely avoided, as the home screen is the last refuge of the user if something goes wrong with apps. Different platforms apply a similar strategy: the home screen application does not invoke other executables when it draws the home screen, it rather uses some sort of description of the widget UI in data format and displays that. That is the reason Symbian forces the programmer to implement widgets as HTML/JavaScript applications and that is the reason Android decoupled applications from the Launcher view structure.

The workhorse behind Android AppWidgets is the RemoteViews widget. RemoteViews is a parcelable data structure (so it can be sent through Android IPC) that captures an entire view structure. The creator of the RemoteViews sets up this data structure by specifying the base layout and setting field values. This has no effect on the UI and therefore background components like services, broadcast event handlers, etc. can also do it. In fact, AppWidgets are broadcast receivers: they process broadcasts from the Launcher and respond with RemoteViews. When the RemoteViews data structure is received and deserialized, it can be applied to any ViewGroup of the UI and can therefore appear on the UI of another application. From this point of view, it works just like a hypertext document except that RemoteViews describes Android views and not general web controls.

RemoteViews is a powerful construct that can easily be used outside the context of AppWidgets. I now present a simple Activity and a Service where the Service provides an entire formatted View (and not just the data) that can be displayed easily by the Activity. Even though this is against the Big Book of service-oriented architectures (services should provide raw data and the consumers should care about formatting), there are lots of use cases for this construct. For example, the output data of the service may not be so easy to display (e.g. in the case of a map application) or you would like to tie the output of the service to an advertisement, a logo or a link to your main application (a PendingIntent in Android parlance). In these cases your Service (broadcast receiver, etc.) may just return the View structure you would like the client application to display. In the web world, web widgets are formatted mini webpages to be embedded into other web pages, hence the similarity of this web technology to RemoteViews.

Click here to download the example program.

There are three points to note here.
  • Check ITimeViewService.aidl under the src/aexp/remoteview directory and see how the remote method returns a RemoteViews instance.
  • Check TimeViewService.java and see how the RemoteView is constructed and updated. To make the point of returning formatted views as opposed to raw data, I added two formatted TextViews and a ProgressBar embedded into two LinearLayouts to the view structure returned by the service.
  • Check RemoteView.java and see how the RemoteViews is applied to the view structure of the Activity. Everything under the "Get view data" button comes from the service when the RemoteViews is inflated into the view structure of the Activity (a condensed sketch of both sides follows this list).
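A hedged sketch of the two halves (the layout and view IDs - timeview, time_text, progress - are assumptions; TimeViewService.java and RemoteView.java differ in detail):

import android.content.Context;
import android.view.View;
import android.view.ViewGroup;
import android.widget.RemoteViews;

public class RemoteViewsSketch {
    // Service side: build a serializable description of a small view tree.
    public static RemoteViews buildTimeView(Context serviceContext) {
        RemoteViews rv = new RemoteViews(serviceContext.getPackageName(), R.layout.timeview);
        rv.setTextViewText(R.id.time_text, "The time is " + System.currentTimeMillis());
        rv.setProgressBar(R.id.progress, 100, 42, false);
        return rv;
    }

    // Client side: inflate the received RemoteViews into the Activity's own view hierarchy.
    public static void showTimeView(Context activityContext, ViewGroup container, RemoteViews rv) {
        View inflated = rv.apply(activityContext, container);
        container.removeAllViews();
        container.addView(inflated);
    }
}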

RemoteViews has disadvantages, however. It cannot serialize arbitrary Views; e.g. it cannot serialize an EditText view. I am aware of only this article about the restrictions on the views RemoteViews can serialize/deserialize.