
Tuesday, October 18, 2016

Android phone as weather station with improved sensor

The previous two posts introduced a BLE-enabled weather sensor (temperature and humidity) and the Android application that extracts data from this sensor. I hinted that I intended to proceed with a more sophisticated sensor, Bosch Sensortec's BME280, but other projects diverted my attention (shameless self-promotion: read this paper about microcontrollers, image processing and low-power wide area networks if you are curious what kind of projects took my time). But I never forgot the BME280 temperature/humidity/pressure sensors sitting in my drawer, and once I had a bit of time, I resurrected the project.

The idea is the same as with the DHT-22. The nRF51822, a combined ARM Cortex-M0 microcontroller/Bluetooth Low Energy radio unit, makes the sensor data available over Bluetooth Low Energy (BLE). The smartphone reads this data and displays it to the user. Later (not in this post) I intend to upload the data into some web service for analysis. We have already done all this with the DHT-22; now we extend the sensor range with the BME280.

Click here to download the Android application. Click here to download the nRF51822 projects that are running on the sensor hardware.

Let's start with the sensor. Below is the schematic for the hardware (click to enlarge).






The difference between this and the previous circuit is that the DHT-22 was connected to a simple GPIO pin, while the BME280 uses the more sophisticated I2C bus. As the BME280 is quite miniature, I used a breakout board for this sensor too. LED1 and the serial debug port are optional (but quite useful). Debug messages are emitted on the serial port; you need a 3.3V-to-RS232 converter on the TxD pin if you want to observe them.

As described previously, the circuit is realized with a low-cost nRF51822 breakout board and is programmed with the Bus Pirate programmer, adapted to the nRF51822 by Florian Echtler. The only thing I changed in this setup is that this time I moved the projects to the latest version of the SDK, 12.1.0. Also, the soft device (the program module implementing the BLE stack) was bumped from S110 to S130. These decisions caused quite a headache because there are significant differences between the old SDK and 12.1.0. Therefore I decided that in the nRF51822 project file I share not only the sensor project (called adv_bme280) but also two simpler ones (blinky_new and blinky_new_s130) as additions to the instructions on Florian's page. As a recap: the soft device needs to be flashed into the device before any BLE application, and the starting address of the BLE application depends on the size of the soft device. This has changed between S110 and S130, hence the updated projects. In both blinky_new_s130 and adv_bme280 you will find the convert_s130.sh and upload_softdevice.sh scripts that convert the S130 soft device that comes with the Nordic SDK into binary format and flash it.

Once you have uploaded the S130 soft device, compile the project in adv_bme280 and upload it into the nRF51822. The sensor node works the same way as the DHT-22 version. The MCU in the nRF51822 acquires measurements from the BME280 over the I2C bus (called Two-Wire Interface, TWI, in the nRF51822 documentation) once every second. This includes temperature, humidity and pressure. The measured values are then compensated using the read-only calibration data also stored in the BME280, which the MCU reads in the initialization phase. The BME280_compensate_T, BME280_compensate_P and bme280_compensate_H functions come from the BME280 data sheet. The result is the compensated temperature, humidity and pressure values, which the MCU puts into the BLE advertisement data. The advertisement data also contains the nRF51822's unique ID that identifies the sensor. The sensor no longer advertises a name because the measurement+ID payload leaves no room for it; BLE readers now recognize the sensor purely by its unique ID.
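As a side note, the compensation arithmetic itself is plain integer math. Below is a minimal Java port of the data sheet's integer temperature compensation (the firmware uses the C version); it assumes the calibration words dig_T1..dig_T3 have already been read from the sensor during initialization.

// Integer temperature compensation, ported from the BME280 data sheet's
// reference C code. digT1..digT3 are calibration words read from the sensor's
// non-volatile memory during initialization (digT1 is unsigned in the data
// sheet). adcT is the raw 20-bit temperature reading. The result is in
// 0.01 degC, i.e. a return value of 5123 means 51.23 degC.
public final class Bme280Compensation {
    private final int digT1;   // unsigned short in the data sheet
    private final int digT2;   // signed short
    private final int digT3;   // signed short

    public Bme280Compensation(int digT1, int digT2, int digT3) {
        this.digT1 = digT1;
        this.digT2 = digT2;
        this.digT3 = digT3;
    }

    public int compensateTemperature(int adcT) {
        int var1 = (((adcT >> 3) - (digT1 << 1)) * digT2) >> 11;
        int var2 = (((((adcT >> 4) - digT1) * ((adcT >> 4) - digT1)) >> 12) * digT3) >> 14;
        int tFine = var1 + var2;   // also needed by the pressure/humidity formulas
        return (tFine * 5 + 128) >> 8;
    }
}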

Now on to the Android part. The architecture of the application is pretty much the same as described in the previous post. Creating the project works the same way as described there: create an empty Android Studio project, overwrite the app/src/main subtree with the content of the archive that you downloaded from this post, and update the app/build.gradle file with the GraphView dependency.

The BME280 functionality was inserted in three places. First, BME280 data has its own data provider (BME280SensorDataProvider.java); this new provider can handle pressure data, which DHT-22 measurements don't have. Second, BLESensorGWService recognizes BME280 sensor nodes beside the DHT-22 sensor nodes (so both are handled), parses BME280 advertisements and puts the measurement data into the BME280 data provider. Third, MainActivity knows about the BME280 data provider, uses its data to create the sensor list and invokes BME280GraphMeasurementActivity if the sensor in question is a BME280 sensor; this new visualization activity has a pressure graph too.
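To give an idea of where the BLE side hooks in, here is a schematic sketch of a pre-Lollipop scan callback. This is not the code from the download; the isBme280Advertisement() and parseMeasurement() helpers are hypothetical stand-ins for the real advertisement parsing in BLESensorGWService.

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;

// Schematic sketch only -- not the code from the download. The helpers
// isBme280Advertisement() and parseMeasurement() are hypothetical stand-ins
// for the real advertisement parsing done in BLESensorGWService.
public class Bme280ScanSketch {

    private final BluetoothAdapter.LeScanCallback scanCallback =
            new BluetoothAdapter.LeScanCallback() {
                @Override
                public void onLeScan(BluetoothDevice device, int rssi, byte[] scanRecord) {
                    if (isBme280Advertisement(scanRecord)) {
                        double[] thp = parseMeasurement(scanRecord);
                        // Hand temperature, humidity and pressure over to the
                        // BME280 data provider here.
                    }
                    // DHT-22 advertisements are recognized and handled separately.
                }
            };

    private boolean isBme280Advertisement(byte[] scanRecord) {
        // The real service checks the advertised UUID/payload layout here.
        return false;
    }

    private double[] parseMeasurement(byte[] scanRecord) {
        // The real service decodes temperature, humidity, pressure and the
        // sensor's unique ID from the advertisement payload here.
        return new double[] {0.0, 0.0, 0.0};
    }
}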

This is how the sensor list looks with a DHT-22 and a BME280 sensor (the DHT-22 does not have pressure data, the BME280 does).



And this is how the pressure graph looks in the BME280 visualization activity.



And at last, some words about the deployment. I got a question about how the power supply works. After more than half a year of operation, I ended up with a discarded 12V motorcycle battery as the power source. This battery used to have 6Ah capacity; now it has about half of that, which is not enough to start a motorcycle but is quite enough to yield 1-2 mA per sensor node for a long time. Also, this battery is designed to withstand quite severe weather conditions. I can only recommend discarded but still functional motorcycle/car batteries as a power source if the place available for the sensor permits it.

Here is how the BME280 sensor looks in its protective plastic box. The small panel in the foreground is a cheap DC-DC converter (not shown in the schematics) which steps down the 12V to 3.3V.



And here is how the sensor nodes sit in an outdoor box with the battery. The lid of the BME280 node has been removed for demonstration; the other box contains the DHT-22 sensor.





Now the part that is really missing is the data upload/data analysis functionality. 

Wednesday, May 5, 2010

Movement patterns

This blog entry does not have an associated example program; the samples in this post were taken using the sensor monitoring application presented here. This time I will give a taste of what can be achieved by using the acceleration sensor to identify movement patterns. Previously we have seen that the acceleration sensor is pretty handy at measuring the orientation of the device relative to the Earth's gravity. In that discussion we were faced with the problem of what happens if the device is subject to acceleration other than the Earth's gravity. In that measurement, it caused noise: for example, if you move the device swiftly, you can force a change between landscape and portrait mode even if the device's position relative to the Earth's surface is constant. That's because the acceleration of the device's movement is added to the Earth's gravity acceleration, distorting the acceleration vector.

In this post, we will be less concerned about the device's orientation. We will focus on these added accelerations. Everyday movements have characteristic patterns and by analysing the samples produced by the acceleration sensor, cool context information can be extracted from the acceleration data.

The acceleration sensor measures the acceleration along three axes. This 3D vector is handy when one wants to measure orientation, e.g. relative to the gravity acceleration vector. When identifying movements, however, it is more useful to work with the absolute value (magnitude) of the acceleration, because the device may change its orientation during the movement. Therefore we calculate

ampl=sqrt( x^2 + y^2 + z^2 )

where x,y,z are elements of the measured acceleration vector along the three axes. Our diagrams will show this ampl value.
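
In Android code this amounts to just a few lines in a SensorEventListener. Here is a minimal sketch; registration with SensorManager is assumed to happen elsewhere.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Minimal sketch: compute the magnitude of the acceleration vector for each
// sample. Registration with SensorManager is assumed to happen elsewhere.
public class AmplitudeListener implements SensorEventListener {

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        }
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];
        double ampl = Math.sqrt(x * x + y * y + z * z);
        // ampl is what the diagrams below plot; it is around 9.81 m/s^2 when
        // the device is stationary.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not used in this sketch.
    }
}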

Let's start with something simple.



This is actually me walking at Waterloo station. The diagram shows the beginning of the walk: it starts from a stationary position, then I start to walk. In order to understand the pattern, we must remember that acceleration is the change of the velocity vector. This means that if we move quickly but our velocity does not change, the acceleration is zero. On the other hand, if we move with constant speed but our direction changes, that is quite a significant change in the velocity vector: think of circular motion, where a point moving at constant speed is constantly accelerating because the direction of its velocity keeps changing.

The same goes for walking. Whenever the foot hits the ground, its velocity vector changes (it was moving down, now it starts to move up). Similarly, when the foot reaches its highest point, it was moving up and then starts to move down. The maxima and minima of the acceleration amplitude are at these points. As these accelerations are added to the Earth's gravity acceleration (9.81 m/s^2), you will see an offset of about 10 in the graph. The sine wave-like oscillation of the acceleration vector's absolute value is unmistakable. The signal is not exactly a sine wave, but the upper harmonics attenuate quickly. You can see that my walking caused about 1.2 g of fluctuation (-0.6 g to 0.6 g).

Running is very similar except that the maximum acceleration is much higher - about 2 g peak-to-peak.



And now, everybody's favourite: shaking. Shaking is very easy to identify. First, the peak acceleration is very high (about 3 g in our case). Also, the peaks are spike-like (very high upper harmonic content).



In the following posts, I will present some algorithms that produce handy values for identifying these movements.
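
As a tiny appetizer, here is a naive shake-detector sketch based on the observation above that shaking produces amplitude peaks well above the ~1 g baseline. The threshold and the required peak count are illustrative values, not taken from the example programs.

// Naive shake detector sketch: count samples whose magnitude exceeds a
// threshold well above the ~9.81 m/s^2 gravity baseline. The threshold
// (25 m/s^2, roughly 2.5 g) and the required peak count are illustrative
// values only.
public class ShakeDetector {
    private static final double THRESHOLD = 25.0; // m/s^2
    private static final int REQUIRED_PEAKS = 3;

    private int peakCount = 0;

    // Feed the magnitude of each acceleration sample; returns true on a shake.
    public boolean onSample(double ampl) {
        if (ampl > THRESHOLD) {
            peakCount++;
            if (peakCount >= REQUIRED_PEAKS) {
                peakCount = 0;
                return true;
            }
        }
        return false;
    }
}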

Thursday, April 22, 2010

Monitoring sensors in the background

Once I got a taste of making all sorts of measurements with sensors, I happily started to collect samples. Then I quickly ran into the background sensor monitoring problem. The use case is very simple: the phone is in idle state with the keyguard locked, but it still collects and processes sensor data. For example, I wanted to record acceleration sensor data while cycling. I started the sensor application, started the measurement and locked the keyguard. On the Nexus One this means that no further sensor data is delivered to the application until the screen is on again.

The source of the problem is that the sensor is powered down when the screen is off. This is implemented below the Android framework (either in the kernel driver or in hardware, I would be curious if anyone knows the answer), because if you recompile the latest sources from android.git.kernel.org, sensor data is delivered nicely to the application in the emulator even if the keyguard is locked (the stock 2.1 emulator's Goldfish sensor does not emit any events, which is why you have to recompile from source). The fact remains: if you want acceleration sensor data on the Nexus One, the screen must be on. This pretty much kills all the user-friendly applications that want to analyze sensor data in the background while the phone is idle. In the worst case, the screen must be constantly on (e.g. for a pedometer that needs to measure constantly because you never know when the user makes a step), but the situation for a simple context reasoner service is not much better. Such a service (read the vision paper for background, you need free registration to access it) may want to sample the sensors periodically (e.g. collecting 5 seconds of samples every minute). In order to perform the sampling, the screen would have to be switched on for this duration, which would result in a very annoying flashing screen and increased power consumption.

You can observe these effects in the improved Sensors program that is able to capture sensor data in the background.

Click here to download the example program.

Deploy and start this program and you will see the familiar sensor list. If you click on any list item, the old interactive display comes up. The improvement here is that the application measures the sensor sampling frequency. The rate can be set in the settings menu, from the main activity. If, however, you long-press on a list item, the sampling is started in the background. In this case there is no interactive display of the samples; they are always saved into the /sdcard/capture.csv file. The background sampler does respect the sampling rate setting, however. The background sampler is implemented as a service.

So far there is nothing extraordinary. You may observe the ScreenOffBroadcastReceiver class in SamplingService that intercepts ACTION_SCREEN_OFF intent broadcasts and does something weird in response. In order to understand its operation, you must be aware that the power manager deactivates all the wake locks that are stronger than PARTIAL_WAKE_LOCK when the keyguard powers the device down. Obtaining, for example, a SCREEN_BRIGHT_WAKE_LOCK in the broadcast receiver would be of no use, because at this point the screen is still on and the wake lock would be deactivated when the screen is turned off. Instead, the broadcast receiver starts a thread, waits for 2 seconds (an experimental value) and then activates the wake lock with the wakeup option. Try pushing the power button when background sampling is on: the screen will go off for 2 seconds, but then it is turned on again and you will see the keyguard screen. If you check the log, you will see that the sensor sampling stops for those 2 seconds and then comes back on. Don't worry, when the background sampling is stopped (by long-pressing the sensor's name in the list again), the screen will turn off correctly.
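
Here is a stripped-down sketch of that trick; the real implementation lives in SamplingService in the download, the 2-second delay mirrors the experimental value mentioned above, and the wake-lock tag is arbitrary.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.PowerManager;

// Stripped-down sketch of the ACTION_SCREEN_OFF trick described above; the
// real implementation is in SamplingService. The 2-second sleep mirrors the
// experimental value from the post, the wake lock tag is arbitrary.
public class ScreenOffBroadcastReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        if (!Intent.ACTION_SCREEN_OFF.equals(intent.getAction())) {
            return;
        }
        final PowerManager pm =
                (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(2000); // wait until the keyguard has powered the device down
                } catch (InterruptedException e) {
                    return;
                }
                // Acquiring this wake lock turns the screen back on, which also
                // re-enables sensor data delivery on the Nexus One. It is not
                // released here, so the screen stays on while sampling runs.
                PowerManager.WakeLock wl = pm.newWakeLock(
                        PowerManager.SCREEN_BRIGHT_WAKE_LOCK
                                | PowerManager.ACQUIRE_CAUSES_WAKEUP,
                        "SamplingService");
                wl.acquire();
            }
        }).start();
    }
}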

Ugly? You bet. Not only is the solution a bad hack, it also prevents the device from switching off the screen, and those AMOLED displays plus the high-powered processors eat battery power like pigs. The decision that the sensors are powered down when the screen is off prevents the implementation of some of the most exciting mobile applications and significantly decreases the value of the platform.

But enough of the grumbling. This is the state of the art in background sensor sampling in Android (as of version 2.1 of the platform); I will continue with signal processing algorithms.

Friday, March 12, 2010

Sensors

Ever since I heard that Android devices come with a wide array of sensors, I have been excited about the possibilities. I am a firm believer in the ubiquitous computing vision and all the consequences it brings, including sensors that can be accessed wirelessly. Some parts of the vision (e.g. self-powering microsensors embedded into wall paint) are still futuristic, but mobile phones can be equipped with such sensors easily. Google has a strategy for sensor-equipped mobile devices where the sensor values are processed by powerful data centers. As I did not have an Android device before, I could not play with those sensors. Not anymore! (Again, many thanks to those gentle souls who managed to get this device to me.)

Sensors are an integral part of the Android user experience. The acceleration sensor makes sure that the screen layout changes to landscape when you turn the device (open an application that supports landscape layout, e.g. the calendar, keep the device in portrait mode, move the device swiftly sideways to the right and, if you do it quickly enough, you can force the display to change to landscape mode). The proximity sensor blanks the screen and switches on the keylock when the user makes a phone call and puts the device to his or her ear, so that the touchscreen is not activated accidentally. In some devices, a temperature sensor monitors the temperature of the battery. The beauty of the Android programming model is that one can use all these sensors in one's own application.

Click here to download the example program.

A very similar application called SensorDump can be found on Android Market. Our example program is inferior to SensorDump in many respects but has a crucial feature: it can log sensor values into a CSV file that can be analysed later on (use the menu to switch the capture feature on and off). Update: from version 0.2.0 of SensorDump, sensor data logging into a CSV file is available too. This is not so important with, e.g., the proximity sensor, which provides binary data, but I don't believe one can understand at a glance what goes on with, e.g., the acceleration sensor during a complex movement just by looking at the constantly changing numbers on the device screen.
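
For illustration, here is a minimal sketch of how such CSV capture can be done with a SensorEventListener; the file path and column layout are assumptions, not necessarily what the example program writes.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

import java.io.FileWriter;
import java.io.IOException;

// Minimal sketch of CSV capture: append a timestamp and the sensor values to a
// file for later analysis. The path and column layout are illustrative, not
// necessarily identical to what the example program writes.
public class CsvCaptureListener implements SensorEventListener {
    private final FileWriter writer;

    public CsvCaptureListener(String path) throws IOException {
        writer = new FileWriter(path, true); // e.g. "/sdcard/capture.csv"
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        StringBuilder line = new StringBuilder();
        line.append(event.timestamp);
        for (float v : event.values) {
            line.append(',').append(v);
        }
        line.append('\n');
        try {
            writer.write(line.toString());
        } catch (IOException e) {
            // Ignored in this sketch; real code should handle the error.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not used in this sketch.
    }
}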

I can see the following sensors on my Nexus One.

BMA150 - 3-axis accelerometer
AK8973 - 3-axis magnetic field sensor
CM3602 - light sensor

Some physical sensors are exposed as more than one logical sensor: for example, the AK8973 is also presented as an orientation sensor and the CM3602 as the proximity sensor. This is just software, however; these logical sensors use the same sensor chip but present the sensor data in a different format.
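
Enumerating these logical sensors from code is straightforward; a minimal sketch (the log tag is arbitrary):

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.util.Log;

import java.util.List;

// Minimal sketch: list every logical sensor the platform exposes, similar to
// what the example program's sensor list shows.
public class SensorLister {
    public static void logSensors(Context context) {
        SensorManager sm =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        List<Sensor> sensors = sm.getSensorList(Sensor.TYPE_ALL);
        for (Sensor s : sensors) {
            Log.d("Sensors", s.getName() + " (vendor: " + s.getVendor()
                    + ", type: " + s.getType() + ")");
        }
    }
}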

Let's start with the most popular sensor, the accelerometer. This measures the device's acceleration along three axes. A logical but somewhat unintuitive property of this sensor is that its zero point is free fall - otherwise the Earth's gravity acceleration is always present. If the device is not subject to any other acceleration (the device is stationary or moves with constant speed), the sensor measures the gravity acceleration, which points toward the center of the Earth. This is commonly used to measure the roll and the pitch of the device; try the excellent Labyrinth Lite game on Android Market if you want a demonstration.
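
When gravity is the only acceleration acting on the device, a rough pitch/roll estimate can be computed directly from a single accelerometer sample. The sketch below uses one common atan2-based convention; other axis conventions are equally valid.

// Rough pitch/roll estimate from a single accelerometer sample, assuming the
// only acceleration acting on the device is gravity. The angle convention
// (which axis is pitch, which is roll) is one of several possible choices.
public class OrientationSketch {
    public static double pitchDegrees(float x, float y, float z) {
        return Math.toDegrees(Math.atan2(-x, Math.sqrt(y * y + z * z)));
    }

    public static double rollDegrees(float x, float y, float z) {
        return Math.toDegrees(Math.atan2(y, z));
    }
}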

The graph below shows sensor data in two scenarios (note that all the data series can be found in the download bundle under the /measurements directory). The red dots show the value of the accelerometer when the device was turned from horizontal position to its side, right edge pointing to the Earth. The green dots show the sensor values when the device was tilted toward its front edge so that at the end the upper edge pointed toward the Earth.



This is all beautiful, but don't forget that the acceleration sensor ultimately measures acceleration. If the device is subject to any acceleration other than gravity (remember the experiment with the portrait-landscape mode at the beginning of the post), that acceleration is added to the gravity acceleration and distorts the sensor's data (provided that you want to measure the roll and pitch of the device). The following graph shows the accelerometer values when the device was lying on the table but I flicked it. The device accelerated on the surface of the table, and the smaller blue dot shows the value the accelerometer measured when this happened - as if the device had been tilted to the right.


The second sensor is the magnetic field sensor, the compass. As far as I know, this sensor is not used for anything by the base Android applications; it is all the more popular for all sorts of compass applications. The magnetic sensor measures the vector of the Earth's magnetic field, represented in the device's coordinate system. In 3D, this vector points toward the magnetic north pole and down into the crust of the Earth. The following graph shows the scenario when the device was lying on the table and was rotated in a full circle on the surface of the table.
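
To turn the raw magnetic vector into a compass heading, the platform's SensorManager helpers can combine it with a recent accelerometer sample. A minimal sketch; both arrays are assumed to hold recent readings from the respective sensors.

import android.hardware.SensorManager;

// Minimal sketch: derive the compass azimuth from one accelerometer sample and
// one magnetic field sample using the platform helpers. Both arrays are
// assumed to hold recent readings from the respective sensors.
public class CompassSketch {
    public static float azimuthDegrees(float[] accel, float[] magnetic) {
        float[] rotation = new float[9];
        float[] orientation = new float[3];
        if (SensorManager.getRotationMatrix(rotation, null, accel, magnetic)) {
            SensorManager.getOrientation(rotation, orientation);
            return (float) Math.toDegrees(orientation[0]); // azimuth
        }
        return Float.NaN; // returned when the device is in (or close to) free fall
    }
}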



Even though the magnetic sensor is not subject to unwanted accelerations like the accelerometer, it is subject to the influence of metal objects. The following graph shows the values of the magnetic sensor when the device was lying on the table and, after a while, I put a small pair of scissors on top of the device. You can see that there are two clusters of sensor values: one with the scissors, one without.


The third sensor is the light sensor, which doubles as a proximity detector. The light sensor is self-explanatory, but the proximity detector deserves some explanation. The proximity detector is really a light sensor with binary output: if blocked, it emits 0.0, otherwise it emits 1.0. The photo below demonstrates the location of the sensor and how to block it.