Getting the Webcam Working in Linux

Getting the camera to work in Ubuntu was super easy: it was plug and play.  The camera worked immediately with a webcam application called Camorama.
But I wanted some working C++ code that interacts with the camera.  For this exercise, the goal is simply to take a snapshot from the webcam and save it as a JPEG.
Thanks to Google, I found some sample code here that does just that!
Getting this to compile was a bit tricky; it turns out I was missing the libjpeg library, so I had to install it with:
sudo apt-get install libjpeg62-dev
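(The sample's exact file name will vary; grab.cpp below is just a placeholder. The compile line ends up being something like this, with -ljpeg being the piece I was missing:)

    g++ grab.cpp -o grab -ljpeg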
The code isn’t too bad either: it uses V4L2 (Video4Linux2) to communicate with the camera, which returns images in YUV format.  That image is then converted to RGB and passed along to the JPEG library for compression and saving to a file.
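Just to illustrate the pipeline, here’s a rough sketch of the two steps in C++. This is my own simplified version, not the sample code itself, and it assumes the camera delivers the common packed YUYV 4:2:2 format (actual formats vary by camera):

    #include <stdio.h>
    #include <jpeglib.h>

    static unsigned char clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    // Convert a packed YUYV 4:2:2 frame to RGB24 (two pixels share one U/V pair).
    // Integer approximation of the usual BT.601 coefficients.
    void yuyv_to_rgb(const unsigned char* yuyv, unsigned char* rgb, int width, int height)
    {
        for (int i = 0; i < width * height / 2; ++i) {
            int y0 = yuyv[i * 4 + 0], u = yuyv[i * 4 + 1] - 128;
            int y1 = yuyv[i * 4 + 2], v = yuyv[i * 4 + 3] - 128;
            int ys[2] = { y0, y1 };
            for (int p = 0; p < 2; ++p) {
                *rgb++ = clamp(ys[p] + ((359 * v) >> 8));            // R
                *rgb++ = clamp(ys[p] - ((88 * u + 183 * v) >> 8));   // G
                *rgb++ = clamp(ys[p] + ((454 * u) >> 8));            // B
            }
        }
    }

    // Hand the RGB buffer to libjpeg and write it out as a JPEG file.
    void save_jpeg(const char* path, unsigned char* rgb, int width, int height)
    {
        jpeg_compress_struct cinfo;
        jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_compress(&cinfo);

        FILE* f = fopen(path, "wb");
        if (!f) return;
        jpeg_stdio_dest(&cinfo, f);

        cinfo.image_width = width;
        cinfo.image_height = height;
        cinfo.input_components = 3;      // RGB24
        cinfo.in_color_space = JCS_RGB;
        jpeg_set_defaults(&cinfo);
        jpeg_set_quality(&cinfo, 90, TRUE);

        jpeg_start_compress(&cinfo, TRUE);
        while (cinfo.next_scanline < cinfo.image_height) {
            JSAMPROW row = rgb + cinfo.next_scanline * width * 3;
            jpeg_write_scanlines(&cinfo, &row, 1);
        }
        jpeg_finish_compress(&cinfo);
        jpeg_destroy_compress(&cinfo);
        fclose(f);
    }

The YUV-to-RGB step uses an integer approximation of the standard BT.601 conversion, which is plenty accurate for snapshots.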
Here’s the first image from the camera!
Oh yeah, I also put the robotic platform together last weekend (as you can see from the shot); I’ll discuss that in another post!

Building an Autonomous Robot

Ever since reading about the Raspberry Pi, I’ve been inspired to work on a robotics project I’ve always wanted to do… to build a webcam-based autonomous robot.  I went ahead and pre-ordered one with an 11-week shipping date.  I actually ended up getting a PandaBoard ES instead because it has a much faster processor and built-in WiFi and Bluetooth.

In the meantime, I bought a robot kit based on the Arduino microcontroller: DFRobot’s 4WD Mobile Platform.

The Arduino is a popular open-source microcontroller board that can be programmed to do various things.  It has both digital and analog pins that can easily be controlled in software.  One especially useful feature is that it can communicate with a USB host via its built-in USB-to-serial port.
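Just as a tiny illustration (the pin numbers here are arbitrary), driving a digital pin and a PWM pin is only a few lines of Arduino code:

    // Drive one digital pin and one PWM ("analog") pin; pin numbers are arbitrary.
    void setup() {
      pinMode(13, OUTPUT);     // digital pin 13 (the on-board LED on many boards)
      pinMode(9, OUTPUT);      // a PWM-capable pin
    }

    void loop() {
      digitalWrite(13, HIGH);  // digital on
      analogWrite(9, 128);     // ~50% duty cycle PWM
      delay(500);
      digitalWrite(13, LOW);   // digital off
      delay(500);
    }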

The idea I have is to let the Raspberry Pi handle the higher-level work, such as interfacing with the camera and doing complex image processing, while the Arduino drives the motors on the bot.  The Raspberry Pi would send “driving” commands to the Arduino through the USB connection (which is actually a serial port).  The Arduino would then drive the motors, performing basic operations such as MoveForward(), MoveBackwards(), MoveLeft(), and MoveRight().  The Arduino has its own software development environment, but the language itself is just C with some built-in functions to handle the hardware.
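Here’s a rough sketch of what the Arduino side of that could look like. The single-character command codes are placeholders I made up; the actual protocol is still to be designed, and the motor routines are stubs:

    // Motor routines; stubs here, the real versions would set the motor driver pins.
    void MoveForward()   { /* ... */ }
    void MoveBackwards() { /* ... */ }
    void MoveLeft()      { /* ... */ }
    void MoveRight()     { /* ... */ }

    void setup() {
      Serial.begin(9600);              // the USB link shows up as a serial port
    }

    void loop() {
      if (Serial.available() > 0) {    // a driving command arrived from the Pi
        switch (Serial.read()) {
          case 'F': MoveForward();   break;
          case 'B': MoveBackwards(); break;
          case 'L': MoveLeft();      break;
          case 'R': MoveRight();     break;
        }
      }
    }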

I also bought a used 720p HD webcam that I researched and confirmed works in Linux: ironically, a Microsoft LifeCam Cinema.  I have done some work with the camera and I’ll post the details in another post.

But generally, the Raspberry Pi would fetch an image from the USB webcam, perform image processing (perhaps even using OpenGL to accelerate some of the algorithms), and make movement decisions based on the processed information.  It would then send commands to the Arduino via the serial port.  For the camera interface and image processing code, I expect to write all of it in C/C++ to maximize performance and take advantage of the various libraries available for image processing.  I found an interesting article on image processing with OpenGL (found here), so I will be evaluating that approach as well.
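On the Pi side, sending a command over the serial port from C/C++ is straightforward with termios. Here’s a minimal sketch; the device path /dev/ttyUSB0 and the 9600 baud rate are assumptions, and some boards show up as /dev/ttyACM0 instead:

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    // Open the Arduino's USB-serial device in raw 8N1 mode at 9600 baud.
    // The path and baud rate are assumptions; adjust for the actual setup.
    int open_arduino(const char* path)
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0) return -1;

        termios tty{};
        tcgetattr(fd, &tty);
        cfmakeraw(&tty);                  // raw bytes: no echo, no line buffering
        cfsetispeed(&tty, B9600);
        cfsetospeed(&tty, B9600);
        tty.c_cflag |= (CLOCAL | CREAD);  // enable receiver, ignore modem lines
        tcsetattr(fd, TCSANOW, &tty);
        return fd;
    }

    int main()
    {
        int fd = open_arduino("/dev/ttyUSB0");  // or /dev/ttyACM0 on some boards
        if (fd < 0) return 1;
        write(fd, "F", 1);                // 'F' = move forward, per the sketch above
        close(fd);
        return 0;
    }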

For version one, the bot will autonomously track a red ball and move toward it.  This is a simple task, just to get all the pieces working together (a sketch of the idea is below).  Then I will work on more advanced functionality, such as off-loading image processing to a remote laptop and manual override/remote viewing from an iPad.  I also want to add face recognition, and maybe speakers so the bot can greet people.  Maybe even a microphone for a two-way conversation.  Siri on wheels and with eyes!
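The red-ball logic can start out really dumb: threshold for “red enough” pixels, find their centroid, and steer toward it. A rough sketch, assuming an RGB24 frame is already in memory (the thresholds and the extra 'S' stop code are placeholders of mine):

    // Pick a driving command from an RGB24 frame: find the centroid of the
    // "red enough" pixels and steer toward it. Thresholds are rough guesses.
    char decide_command(const unsigned char* rgb, int width, int height)
    {
        long sum_x = 0, count = 0;
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const unsigned char* p = rgb + (y * width + x) * 3;
                if (p[0] > 150 && p[1] < 80 && p[2] < 80) {  // crude "red" test
                    sum_x += x;
                    ++count;
                }
            }
        }
        if (count < 50) return 'S';             // no ball in sight: stop ('S' is a placeholder)
        long cx = sum_x / count;                // centroid column
        if (cx < width / 3)     return 'L';     // ball is to the left
        if (cx > 2 * width / 3) return 'R';     // ball is to the right
        return 'F';                             // roughly centered: drive forward
    }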

I will continue to post details of my project, so watch for more!