Chris Alexander

On Engineering

Mobile sensors and robots

3rd January, 2015

One of my projects for this year (admittedly a bit of a hangover from last year) is to get my third major mobile robot driving around the house. One thing that has interested me for several years in this area is the sheer number of sensors available to use.

It all began back at Uni when I was working with the Kinect on a couple of projects. In terms of bang-for-your-buck, the Kinect was pretty high up there. The only disadvantage was the amount of power (electrical and computational) required to make the data usable and actually work in a real-time mobile context.

For my latest project I have decided not to go with the Kinect, but to see how far I can get with a Raspberry Pi, which is interesting as it is the first time I have built a robot around the compute component rather than the sensor component. Admittedly I got a bit carried away and now have three Raspberry Pis networked together, but at least they are cheap to power.

The sensors on the robot itself are largely simple environment sensors - an ultrasonic range finder, a couple of IR obstacle sensors, and floor sensors - plus the Raspberry Pi camera (I have been around too long to try and do anything other than colour detection or general environment pictures with a low-resolution camera; without significant compute power you're not going to get much further than that realistically). I have elected to enhance the number of available sensors with the addition of an old(ish) mobile phone, a Galaxy Nexus.
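
For a flavour of what talking to those simple sensors looks like, here's a rough sketch of polling an ultrasonic range finder from Python with RPi.GPIO. I'm assuming an HC-SR04-style module, and the BCM pin numbers are placeholders rather than the actual wiring on the robot:

    # Rough sketch: read an HC-SR04-style ultrasonic range finder.
    # The sensor model and the BCM pins (23/24) are assumptions.
    import time
    import RPi.GPIO as GPIO

    TRIG = 23  # trigger pin (assumed wiring)
    ECHO = 24  # echo pin (assumed wiring)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    def read_distance_cm():
        # a 10us pulse on the trigger starts a measurement
        GPIO.output(TRIG, True)
        time.sleep(0.00001)
        GPIO.output(TRIG, False)

        # the echo pin stays high for as long as the sound took to return
        pulse_start = pulse_end = time.time()
        while GPIO.input(ECHO) == 0:
            pulse_start = time.time()
        while GPIO.input(ECHO) == 1:
            pulse_end = time.time()

        # speed of sound ~343 m/s, halved for the round trip
        return (pulse_end - pulse_start) * 17150

    try:
        while True:
            print("distance: %.1f cm" % read_distance_cm())
            time.sleep(0.5)
    finally:
        GPIO.cleanup()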

Modern smartphones are great for robotics - especially for small, mobile robots - because they come with a huge host of sensors, a convenient output mechanism (a big screen) and a built-in battery that can keep them going for hours. When you consider that you get WiFi, GPS, accelerometer / gyro, Bluetooth, microphone, speakers (alright, not a sensor, but they interact with the environment), compass, NFC, and often a hall effect sensor, that's a treasure trove of data about the robot.
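
One simple way to get that data onto the robot would be to have the phone fire each reading over WiFi as a small JSON datagram and have one of the Pis listen for them. The port number and message shape here are made up for the sake of the sketch, and the Android side (a small app reading from SensorManager) isn't shown - this is just the receiving end:

    # Sketch of the Pi side of a phone-to-robot sensor feed. The transport
    # is an assumption: one JSON datagram per reading, e.g.
    # {"sensor": "accel", "values": [0.1, 9.8, 0.2], "ts": 1420300000.0}
    import json
    import socket

    LISTEN_PORT = 5005  # arbitrary port chosen for this sketch

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LISTEN_PORT))

    while True:
        data, addr = sock.recvfrom(4096)
        try:
            reading = json.loads(data.decode("utf-8"))
        except ValueError:
            continue  # ignore anything that isn't well-formed JSON
        # hand the reading off to whatever consumes sensor data on the robot
        print(reading.get("sensor"), reading.get("values"), "from", addr[0])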

I haven’t quite figured out how to resolve all of these inputs into something useful yet (currently I’m torn between an abstract world-view model and a sensor stream processing model), but the sheer fact that they’re available opens up some very interesting opportunities.
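
To give an idea of what the stream processing option could look like, here's a toy version: every sensor pushes timestamped readings onto a single queue, and handlers subscribe to the sources they care about. The names and structure are purely illustrative, not a design I've committed to:

    # Toy sensor stream: sensors publish (timestamp, name, value) events,
    # a dispatcher thread fans them out to subscribed handlers.
    import queue
    import threading
    import time

    events = queue.Queue()
    handlers = {}  # sensor name -> list of callbacks

    def on(sensor, callback):
        handlers.setdefault(sensor, []).append(callback)

    def publish(sensor, value):
        events.put((time.time(), sensor, value))

    def dispatch_loop():
        while True:
            ts, sensor, value = events.get()
            for callback in handlers.get(sensor, []):
                callback(ts, value)

    # example: react if anything gets within 10cm of the range finder
    on("ultrasonic", lambda ts, cm: print("too close!") if cm < 10 else None)

    threading.Thread(target=dispatch_loop, daemon=True).start()
    publish("ultrasonic", 8.0)  # would come from the range finder loop
    time.sleep(0.1)             # give the dispatcher a moment to run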

What people have started noticing is that modern mobile apps really don’t take advantage of all of these sensors they have on tap. Sure, the OS will rotate things based on the gyro, and maybe Fitbit will use GPS and accelerometer data to track exercise, but which apps really leverage NFC, the hall effect sensor (remember Evernote Peek?), or even a combination of three of the available sensors at the same time?

Not that I’m complaining about all the sensors, because they’re great for building robots, but it would be interesting to see whether app developers can actually come up with something interesting in this area as devices continue to improve.