Tuesday, June 19, 2012

Universal orientation, gravity and motion acceleration sensor

We have seen a number of fusion possibilities for two sensors.


The time has come to put all of these together and take a step toward the Holy Grail of sensor processing. We want a sensor that measures gravity, motion acceleration and orientation reliably, no matter what the circumstances are. If we achieve this, then we have a good basis for the much-researched indoor navigation problem, because the linear acceleration may be used for step counting while the orientation can be used to calculate the vector of movement. Errors will accumulate, but if we have occasional reference points like WiFi beacons, reasonably good results can be achieved.


So what prevents us from realizing this vision? We have already obtained a pretty reliable motion acceleration estimation from the accelerometer-gyroscope fusion. The orientation is the thing we don't have. Using the compass-accelerometer fusion we can eliminate the tilt problem. If we throw in the gravity calculation from the accelerometer-gyroscope fusion and use the gravity estimation instead of the raw accelerometer data for the compass tilt compensation, then we have also compensated for the motion acceleration. The only thing we have not dealt with is the effect of external magnetic fields, which may fool the compass and can be pretty nasty indoors in the presence of household appliances, large transformers or other electric machinery. That is what we will compensate with a compass-gyroscope fusion.


The idea of compensating for the external magnetic field using the gyroscope is very similar to the compensation of the motion acceleration. If we assume that no external magnetic field is present (besides the Earth's magnetic field), we just use the compass data (and then post-compensate for tilt using the gravity vector, which requires the accelerometer-gyroscope fusion). If we suspect that an external magnetic field may be present, we use the gyroscope rotation data to rotate a previous, trusted compass vector (one that is supposed to measure only the Earth's magnetic field) and use that as the compass estimation. The difference between this compass estimation and the measured compass value is the estimation of the external magnetic field.
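
A rough sketch of this estimation in Java could look like the code below. The rotation matrix is assumed to come from the integrated gyroscope readings and to map the trusted sample's device frame into the current device frame; the class and method names are illustrative, they are not the ones used in the example program.

// Sketch: estimate the external magnetic field by rotating a previously
// trusted compass vector with the gyroscope-derived rotation and comparing
// it to the current measurement. 'rotation' is assumed to map the trusted
// sample's device frame into the current device frame.
public class ExternalFieldEstimator {
    private double[] trustedCompass;   // compass vector believed to contain only the Earth's field

    public void trust(double[] measuredCompass) {
        trustedCompass = measuredCompass.clone();
    }

    // Rotate the trusted compass vector into the current device frame.
    private double[] predictCompass(double[][] rotation) {
        double[] predicted = new double[3];
        for (int i = 0; i < 3; i++) {
            predicted[i] = rotation[i][0] * trustedCompass[0]
                         + rotation[i][1] * trustedCompass[1]
                         + rotation[i][2] * trustedCompass[2];
        }
        return predicted;
    }

    // The difference between the measurement and the prediction is the
    // estimate of the external (non-Earth) magnetic field.
    public double[] externalField(double[] measuredCompass, double[][] rotation) {
        double[] predicted = predictCompass(rotation);
        return new double[] {
            measuredCompass[0] - predicted[0],
            measuredCompass[1] - predicted[1],
            measuredCompass[2] - predicted[2]
        };
    }
}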


The trick is detecting whether an external magnetic field may be present. In this prototype the same trick was used as with the accelerometer: if the length of the compass vector is too long or too short compared to the reference measurement, then we have an external magnetic field to compensate.
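
The detection itself can be as simple as the sketch below; the reference field length is assumed to come from the stationary calibration and the 10% tolerance is only an illustrative value, not the one used in the example program.

// Sketch: flag a suspected external magnetic field when the length of the
// measured compass vector deviates too much from the reference length
// recorded during stationary calibration. The tolerance is illustrative.
private static final double FIELD_TOLERANCE = 0.1;   // 10%

boolean externalFieldSuspected(double[] compass, double referenceFieldLength) {
    double length = Math.sqrt(compass[0] * compass[0]
                            + compass[1] * compass[1]
                            + compass[2] * compass[2]);
    return Math.abs(length - referenceFieldLength) > FIELD_TOLERANCE * referenceFieldLength;
}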

Here is the summary of the sensor fusions used in this prototype:


  • Accelerometer-gyroscope fusion provides gravity and motion acceleration.
  • Compass-gravity fusion provides compass tilt compensation. The gravity is already a derived measurement coming from the accelerometer-gyroscope fusion.
  • Compass-gyroscope fusion provides orientation and external magnetic field.
Click here to download the example program.

The example program starts with a complex stationary and dynamic calibration process. When stationary, it measures the reference gravity and magnetic field values. In this stage it is very important that the device is not subject to motion acceleration or to an external magnetic field (besides the Earth's magnetic field). In the dynamic calibration phase we calculate the bias of the compass. We do this by instructing the user to rotate the device and measuring the compass vectors in different spatial positions. These are supposed to be points on a sphere. Once we have collected the measurement points, we calculate the center of the sphere by solving a set of linear equations. The Jama package was incorporated into the example program for matrix manipulations.
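
The sphere fit reduces to a linear least-squares problem: for a measured point p on a sphere with center c and radius r, |p - c|^2 = r^2 can be rewritten as 2*p.c + (r^2 - |c|^2) = |p|^2, which is linear in the unknowns cx, cy, cz and d = r^2 - |c|^2. A rough sketch of this fit with Jama is shown below; the method and variable names are mine, not those of the example program.

import Jama.Matrix;

// Sketch: estimate the compass bias (sphere center) from N compass vectors
// collected during dynamic calibration. Each point p satisfies
//   2*px*cx + 2*py*cy + 2*pz*cz + d = px^2 + py^2 + pz^2,  d = r^2 - |c|^2
// which is linear in (cx, cy, cz, d) and can be solved by least squares.
public static double[] fitSphereCenter(double[][] points) {
    int n = points.length;
    double[][] a = new double[n][4];
    double[][] b = new double[n][1];
    for (int i = 0; i < n; i++) {
        double px = points[i][0], py = points[i][1], pz = points[i][2];
        a[i][0] = 2 * px;
        a[i][1] = 2 * py;
        a[i][2] = 2 * pz;
        a[i][3] = 1.0;
        b[i][0] = px * px + py * py + pz * pz;
    }
    // Least-squares solution of A * x = b; x = (cx, cy, cz, d)
    Matrix x = new Matrix(a).solve(new Matrix(b));
    return new double[] { x.get(0, 0), x.get(1, 0), x.get(2, 0) };
}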

Once the calibration is finished, the algorithm outlined above is executed. The measurement data is visualized in a 2D coordinate system - the z coordinate is not drawn. This is just my laziness; I did not make the effort to implement a 3D visualization engine. It is important to note, however, that the motion acceleration (yellow) and the external magnetic field (cyan) vectors are really 3D, they are just not drawn as such. The white vector is the final orientation, compensated for tilt, motion acceleration and external magnetic field.




The prototype does have limitations. While it survives short-term disturbances from external magnetic fields pretty nicely (try it with a small magnet), in the longer term (e.g. after 5-10 minutes) it gets properly lost. For example, when riding the Budapest underground, there are strong and varying magnetic fields generated by the train's traction motors during the entire journey. If the compensation algorithm picks up a wrong reference vector, it may stay lost for the rest of the journey until it is able to pick up the Earth's undisturbed magnetic field.

26 comments:

Anonymous said...

Hi there!

I went through the whole tutorial series - really great job! Especially for a beginner like me: it really let me understand how sensor fusion can work and how to use it.

Just wanted to let you know that seeing code for this last level of data fusion would be like sensor programming heaven ;)

Thank you anyway!

today said...

Great tutorial series! One of the best out there!

I have one small question. You are combining data from three different sensors. Android does not synchronize the data from them, so they have different timestamps. They also have quite different frequencies (the accelerometer is probably the fastest one). Is this not a problem when fusing sensor data? Isn't there a need for some kind of synchronization (like via interpolation or something)?

I am totally new in the Android "sensors world" so maybe this question is stupid - but I was just wondering how you deal with this asynchronous data...

Gabor Paller said...

today: generally speaking it could be a problem. But look at the realities: we are talking about human movements sampled at about 10-50 Hz (depending on the selected sampling frequency). The gyroscope sampling rate can be as high as 600 Hz in some models. Compared to the timescale of the measured signals, the differences among the sensor data timestamps can be neglected.

Gabor Paller said...

Anonymous: "Just wanted to let you know that seeing code for this last one level of data fusion would be like sensor programming heaven;)"

What do you mean? The example program is always available for download.

Anonymous said...

What I meant was (I thought the code was still in development) that - at least here - when I download the .zip file, the archive seems to be empty...

Gabor Paller said...

Anonymous, that's a Windows Explorer thing. I just got a report about another ZIP archive I published. I use 7-Zip on Windows, which opens the archive without problem. Also, there's no problem on Linux or Mac.

The archive was created with Eclipse's Export Archive file feature on Linux; I can't imagine what ZIP wizardry could be at play here. Anyway, I just tried on Windows with 7-Zip and it works.

Anonymous said...

Sorry I bothered you - it did not come to my mind that something like that could be going on. Wizardry indeed! ;)

alexdonnini said...

Hello,
If I am not mistaken, in a previous example application you used wavelet transforms to detect patterns of motion (walk, run, shake) on an Android phone.

It looks like with GCAFusion, you have an application which will detect motion more accurately using input from multiple sensors.

Could you use GCAFusion to detect motion patterns? Would you use the same approach (wavelet transforms) as in the accel application?

I would appreciate it if you could give me some pointers on how to do that to help me get started.

Thanks,

Alex Donnini

alexdonnini said...

Hello,
I think I solved the problem I asked about in the message I posted on January 27.

My next problem is how to make sensor data available to worker threads in my main application. I tried a number of ways. None of them work.

To produce and manage worker threads (I have quite a few), I use completion services, future, and callable facilities.

When I start my worker thread execution, the sensor sampling service seems to stop running.

Do you have any ideas as to how to resolve my problem?

Thanks,

Alex Donnini

Gabor Paller said...

Alex, could you send me your code? gaborpaller at gmail.com

alexdonnini said...

Hi Gabor,
Thanks for the offer to take a look at my code. I think I resolved the problem by having the sensor sampling service run in its own process. Now, I have to make sure the various functions in my application (wifi scan processing, sensor data gathering, and location analysis) don't get in each other's way (using synchronization).

At present, my code is not very readable and is pretty big. I am a little reluctant to send it to you.

At some point, I would love to get your take on what I am doing, especially as it relates to movement detection (especially walking movement).

I'll stay in touch. I hope you won't mind.

Thanks,

Alex

alexdonnini said...

Hello Gabor,

I meant to ask you but forgot until now. Now that Android 4.4 includes a step detector and counter, do you think using it is preferable to using a module like the one you developed? For a number of reasons, I would prefer using the one you developed but...

Thanks,

Alex Donnini

Gabor Paller said...

Alex, if there's access to any context variable that is implemented in an energy-efficient way (i.e. it is not the main processor that calculates it) then it is preferable to use that method and not something that is implemented on the main processor. Those multi-core processors with GHz clock frequency consume a lot of power. Having a dedicated chip (e.g. microcontroller) saves a lot of battery power.

My dream would be a low-power microcontroller that is also programmable e.g. in RenderScript. Then you could offload your custom signal processing into a device that can execute it in energy-efficient way. But that's just a dream. :-)

alexdonnini said...

Hi Gabor,
You may have seen the Google Tango phone announcement. I wonder how much of the sensor data processing in Tango phones is offloaded to dedicated hardware (the approach you suggest).

I understand the benefit of the approach you suggest. However, I would not want to lose control over the processing of sensor data, and the algorithms used to process that data.

At this point, I want to continue to pursue the software based approach you have used in your software.

Two questions I am grappling with are:
1) The reliability of the step tracking function and how to use your GCAFusion module to track walking.

2) The relationship between RSSI variability and the number of steps/distance travelled. There are many (and I mean many) papers written about RSSI signal variability with distance and the inherent unreliability of RSSI readings. However, I think that a) sample size does matter, and b) RSSI is not as variable as one might think when distance changes as it does when a user walks.

Both of these questions are relevant when trying to track user location in real time, both indoors and outdoors, as my software does.

alexdonnini said...

https://dspace.cc.tut.fi/dpub/bitstream/handle/123456789/21071/ganesan.pdf?sequence=3

alexdonnini said...

https://dspace.cc.tut.fi/dpub/bitstream/handle/123456789/21071/ganesan.pdf?sequence=3

alexdonnini said...

Hi Gabor,

You might find this interesting

https://dspace.cc.tut.fi/dpub/bitstream/handle/123456789/21071/ganesan.pdf?sequence=3

alexdonnini said...

sorry about the misfiring on the postings

Gabor Paller said...

Alex, wrt. your comments about main CPU vs. dedicated hardware: of course, when you experiment with an algorithm, it is advisable to implement it on the most flexible platform, which in the case of Android is the generic Android Java application model. Just don't be surprised when that implementation strategy is hard to turn into a consumer product. It is not by chance that in an average smartphone no aspect of the cellular network communication is implemented on the main processor; the whole functionality is offloaded to a dedicated communication chip. I don't know anything about the Tango, but I expect that most of the signal processing is offloaded to dedicated chips or maybe to the GPU.

If you want to do step counting, I wrote a paper about it. I am happy to help if you have any problem with that prototype (the prototype was tested on just one platform).

alexdonnini said...

Hi Gabor,
It's been a long time since we last chatted and, understandably, you probably do not remember me. I developed a technology for indoor/outdoor location tracking. Since it's not my "day job", I spent as much time as I could on it but not full time.
I have assembled a small team to take the next step.
The core indoor/outdoor location tracking function is implemented and running fairly successfully.
One component that I need to add is the detection and recording of a user's movement and stationary periods.
My plan is to use the sampling service you implemented in your shake-walk-run demo application.
When I ran the shake-walk-run demo app a couple of years ago it recorded periods of movement and stasis fairly well.
I recently installed the shake-walk-run app on my Nexus 6 and it does not seem to record periods of movement and stasis as well.
Is it possible that this is due to the hardware on the Nexus 6, and the fact that the walking triggering thresholds in the "processWalking" method are not correct?
I have looked at the w3p, w4p, and w5p values reported when the shake-walk-run demo app runs on my Nexus 6 and I am walking. They rarely meet the test that would set the value of "nowWalking" to "true":

boolean nowWalking = ( w5pw < 0.4 ) && ( ( ( w3pw > 0.2 ) && ( w3pw < 0.8 ) ) || ( ( w4pw > 0.2 ) && ( w4pw < 0.4 ) ) );

I would appreciate your thoughts on this.

Thanks. Happy New Year,

Alex Donnini

Gabor Paller said...

Hi, Alex,

The most obvious source of the problem is the sampling frequency. This is an old presentation, but if you look at slides 12 and 13, I guess you will figure out what the problem is.

Motion recognition with Android devices

That prototype was tuned only to the Nexus 1's sampling frequency. Any different sampling frequency will change the filter bands; therefore the "walk" and "run" frequencies will not match those defined in the app.

The solution is to implement a sampling frequency calibration step but that is completely missing from that prototype application.

alexdonnini said...

Hi Gabor,

Thanks very much for your prompt response.

I looked at the two slides you refer to above. I see what you mean!

I suspected the problem was related to differences in hardware among devices.

In your estimation, roughly speaking, how complicated would it be to develop the sampling frequency calibration module? For our application, it would need to run once (or infrequently) and be as transparent to the user as possible.

In the current prototype application, is the tuning of the sampling frequency for the Nexus 1 reflected in the values of the wavelet filters?

By the way, I thought that as an alternative, I could try to use a step-counting application (e.g. your prototype) or the Android built-in step-related functions. However, for our application fairly accurate tracking of movement is more important than the counting of steps (although knowing the number of steps taken is a nice-to-have).

Thanks for your help,

Alex

Gabor Paller said...

Hi, Alex,

Introducing sampling frequency calibration is not terribly hard; the information needed to implement it is all available in the following blog post:

Analysing acceleration sensor data with wavelets

Two things need to be done. First, you have to measure the sampling frequency (i.e. what FASTEST means on a specific phone). That's not hard: you initiate the sampling and count how many samples are captured in a given time period. Then, based on the sampling frequency, you need to recalculate the wavelet filter parameters. This step essentially expresses the relationship of a "walking" or "running" frequency (e.g. 1 Hz or 3 Hz) to the sampling frequency. The wavelet calculation routines are all available in the example program of the blog post above (implemented in Python).
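
Just to illustrate the first step, here is a rough sketch that counts accelerometer events delivered at SENSOR_DELAY_FASTEST; the class and the calibration window handling are only illustrative, they are not part of any of the example programs.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch: estimate the effective sampling frequency of SENSOR_DELAY_FASTEST
// by counting accelerometer events over a calibration window of a few seconds.
public class SamplingRateCalibrator implements SensorEventListener {
    private long firstTimestampNs = -1;
    private long lastTimestampNs;
    private int sampleCount;

    public void start(SensorManager sensorManager) {
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_FASTEST);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (firstTimestampNs < 0) {
            firstTimestampNs = event.timestamp;   // nanoseconds
        }
        lastTimestampNs = event.timestamp;
        sampleCount++;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    // Call after a few seconds of sampling; returns samples per second.
    public double estimatedFrequencyHz() {
        double elapsedSeconds = (lastTimestampNs - firstTimestampNs) / 1e9;
        return (sampleCount - 1) / elapsedSeconds;
    }
}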

Regarding your alternative idea: generally speaking, using an algorithm (e.g. a step counter) built into the platform is more advantageous than implementing it yourself, except if you are researching or experimenting with new algorithms. The reason is power consumption. In modern mobile devices, the sensor signals are pre-processed by quite sophisticated specialized microcontrollers which operate very power-efficiently. Meanwhile, the main CPU of the device that is available for executing user programs is a battery hog. This means that any sensor processing you implement yourself (particularly if executed in the background) is going to draw on the battery very heavily.

The smartphone is OK for putting together prototypes quickly, but if you think about a product, you probably want to build custom hardware, and that's not terribly hard. These guys went through the same process.

alexdonnini said...

Hi Gabor,

Thanks for the pointers/instructions. I will work on that. I hope you won't mind if I have questions as I work on the implementation.

With regards to the "step counting" alternative, I see your point. It makes good sense for a specialized solution.

For a general purpose solution, one that would work on any smartphone and be used by consumers (and retailers) to get context relevant information based on location (indoor/outdoor), specialized hardware creates another hurdle to acceptance, unfortunately.

So, it's a trade-off between a technically better solution and one that is "universally" available, i.e. part of the OS.

Nearly by definition, the "standard" step counting function built into the OS is less than optimal. In addition, using it makes the application dependent on it.

My application's need is to provide sufficiently reliable information about a user's path, clearly identifying periods of movement and periods of rest.

If a "home grown" iplementation does that, I would prefer it to the OS built-in function that would give me the same information as I would be less dependent on the OS and its developers (Google).

The additional benefit of using a home grown solution is that it does not preclude future development of specialized add-on hardware at some point in the future.

This is why I am experimenting with your algorithms. They are based on some solid research and, although the implementations are prototypes, they seem to work reasonably well.

The key is for me to make sure that they run "equally" well on a wide range of smartphones.

Does this make sense? I would appreciate your feedback.

Thanks,

Alex

Gabor Paller said...

Hi, Alex,

Sure, I am ready to answer your questions. If you send them by e-mail (gaborpaller at gmail.com), that's a safer way to get a timely response.

Wrt. your problem: identifying a user's activity (e.g. rest/walk/run) is something that can be done quite well with a smartphone. The problem is "any" smartphone, because smartphone sensors are notoriously unreliable in terms of calibration, etc. But as the problem is relatively simple, there is hope that eventually a reasonably reliable implementation can be provided. You will have testing problems. :-)

There is one thing that people have attempted wrt. indoor positioning: location tracking with the help of dead reckoning (using the step counter as distance measurement and the compass or gyroscope as direction measurement). That method is so unreliable that it won't work for ordinary users (although there is some success in laboratory setups). For location tracking, you need to use beacons, e.g. BLE beacons.

alexdonnini said...

Hi Gabor,

Great! Thanks for being available to answer questions regarding the implementation of the sampling frequency calibration module. My email address is alexdonnini at ieee.org

I agree that using the dead reckoning approach to determine a user's location (indoor/outdoor) at this point in time does not seem to work (sufficiently reliably and accurately).

Our technology uses RSSI (from WiFi APs, BLE beacons, or cell towers) to arrive at a fairly accurate estimate of a user's location (indoor/outdoor).

With regards to the moving/at-rest tracking function, the goal is to be able to tell reliably when the user is moving and when he/she is not, and for how long. The location tracking function can tell me where the user is both in relative and absolute terms. If I know when the user is moving and when he/she is not, I can begin to derive a fairly good "map" of a user's "interests". Note that my application is user centric not retailer centric.

Given this goal, I think that the reliability and accuracy threshold for the moving/at-rest tracking function is not very high, even taking into account the issues you (very) correctly raise.

I really appreciate your availability to talk about this. I value your opinion. I will keep you updated on our progress.

Thanks,

Alex