GoF Design Patterns in Go: Strategy

The strategy pattern is a critical pattern for implementing dependency injection.  It allows us to declare the interface of a dependency at design time and make implementations interchangeable at run time.  A dependency can then be mocked, making it possible to truly unit test components that have dependencies.

It can be implemented in Go via interfaces.  For our example, we are going to implement a typical piece of business logic that depends on a DB layer to persist data.  Typically, you’ll have code that may look like this:
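(The PrintItemName method and the struct fields below are just placeholders for illustration.)

package main

import "fmt"

// PostgreSQL is the concrete database layer.
type PostgreSQL struct {
    // connection details omitted
}

func (p PostgreSQL) GetItemNameById(id int) (string, error) {
    // ... query the real database here ...
    return "sample item", nil
}

// BusinessLogic is hard-wired to the concrete PostgreSQL type.
type BusinessLogic struct {
    db PostgreSQL
}

func (bl BusinessLogic) PrintItemName(id int) {
    name, err := bl.db.GetItemNameById(id)
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    fmt.Println(name)
}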

Notice the line “db PostgreSQL”.  The DB dependency is hard-coded into the business logic.  This makes it very hard to test BusinessLogic independently.  The tight coupling also makes it harder to replace PostgreSQL with another DB implementation. Let’s replace it with an interface.
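Assuming the method looks an item name up by its integer ID, the interface (and the updated BusinessLogic) might look like this:

type PersistanceLayer interface {
    GetItemNameById(id int) (string, error)
}

// BusinessLogic now depends only on the interface.
type BusinessLogic struct {
    db PersistanceLayer
}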

Now, instead of a concrete implementation, we define an interface, “PersistanceLayer”, that requires a method, “GetItemNameById”.  BusinessLogic deals only with this interface, instead of a concrete implementation.  A concrete implementation can then be injected with a constructor:
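// NewBusinessLogic accepts any PersistanceLayer implementation
// and injects it into the new BusinessLogic.
func NewBusinessLogic(db PersistanceLayer) *BusinessLogic {
    return &BusinessLogic{db: db}
}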

This constructor creates a new instance of BusinessLogic, and it accepts any concrete implementation of the PersistanceLayer interface.  For example:
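// inject the concrete PostgreSQL implementation
logic := NewBusinessLogic(PostgreSQL{})
logic.PrintItemName(42)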
In the future, we can swap in a different DB implementation without changing a single line of code in BusinessLogic.  Here are some possible implementations of PersistanceLayer:
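For instance, a MySQL version for production and an in-memory mock for unit tests (both of these are just sketches):

// MySQL is another possible production implementation.
type MySQL struct {
    // connection details omitted
}

func (m MySQL) GetItemNameById(id int) (string, error) {
    // ... query MySQL instead of Postgres ...
    return "sample item", nil
}

// MockPersistance is an in-memory fake, handy for unit testing
// BusinessLogic without a real database.
type MockPersistance struct {
    names map[int]string
}

func (m MockPersistance) GetItemNameById(id int) (string, error) {
    name, ok := m.names[id]
    if !ok {
        return "", fmt.Errorf("no item with id %d", id)
    }
    return name, nil
}

In a unit test, BusinessLogic can then be constructed with the mock, e.g. NewBusinessLogic(MockPersistance{names: map[int]string{42: "widget"}}).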

Resurrection and Adopting Golang

It’s been a while since I last posted, but better late than never!  I’ve officially adopted Go as the programming language for all my personal projects.  I started learning the language for work, but fell in love with its simplicity and elegance.

I recently got Go working on a Raspberry Pi 2, and will be trying to get it working with the webcam, then hopefully implementing a simple image recognition algorithm (maybe to detect a red ball).  All of it will be open sourced, so stay tuned…

Getting the Webcam Working in Linux

Getting the camera to work in Ubuntu was super easy; it was plug and play.  The camera worked immediately with a webcam application called Camorama.
But I wanted some working C++ code that interacts with the camera directly.  For this exercise, the goal is simply to take a snapshot from the webcam and save it as a JPEG.
Thanks to Google, I found some sample code here that does just that!
Getting this to compile was a bit tricky; it turns out I was missing the libjpeg library, so I had to install it with:
sudo apt-get install libjpeg62-dev
The code isn’t too bad either.  It just uses V4L2 (Video4Linux 2) to communicate with the camera, which returns images in YUV format.  Each image is then converted to RGB and passed along to the libjpeg library for compression and saving to a file.
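Just to make that conversion step concrete, here’s roughly what the YUV-to-RGB math looks like, sketched in Go (the C++ sample does the equivalent; these are the standard BT.601 formulas, not the sample code itself):

// yuvToRGB converts one YUV pixel to RGB using the standard
// conversion formulas. This is an illustrative sketch only.
func yuvToRGB(y, u, v uint8) (r, g, b uint8) {
    clamp := func(x float64) uint8 {
        if x < 0 {
            return 0
        }
        if x > 255 {
            return 255
        }
        return uint8(x)
    }
    yf := float64(y)
    uf := float64(u) - 128
    vf := float64(v) - 128
    r = clamp(yf + 1.402*vf)
    g = clamp(yf - 0.344*uf - 0.714*vf)
    b = clamp(yf + 1.772*vf)
    return
}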
Here’s the first image from the camera!
Oh yeah, I also put the robotic platform together last weekend (as you can see from the shot), I’ll discuss that in another post!

Building an Autonomous Robot

Ever since reading about the Raspberry Pi, I’ve been inspired to work on a robotics project that I’ve always wanted to do: building a webcam-based autonomous robot.  I went ahead and pre-ordered one with an 11-week shipping date.  I actually ended up getting the PandaBoard ES instead, because it has a much faster processor and built-in WiFi and Bluetooth.

In the meantime, I bought a robot kit based on the Arduino microcontroller: DFRobot’s 4WD Mobile Platform

The Arduino is a popular open-source microcontroller board that can be programmed to do various things.  It has both digital and analog pins that can easily be programmed.  One great feature is that it can communicate with a USB host via its USB-to-serial port.

The idea I have is to have the Raspberry Pi handle the higher-level work, such as interfacing with the camera and doing complex image processing, while the Arduino drives the motors on the bot.  The Raspberry Pi would send “driving” commands to the Arduino through the USB connection (which is actually a serial port).  The Arduino would then drive the motors, performing basic operations such as MoveForward(), MoveBackwards(), MoveLeft(), and MoveRight().  The Arduino has its own software development environment, but the language itself is just C with some built-in functions to handle the hardware.
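To give an idea of how simple the host side of that could be, here’s a minimal Go sketch that writes a one-byte drive command to the Arduino’s serial device.  The device path and the single-character protocol (‘F’ for forward, and so on) are assumptions just for illustration, and I’m assuming the baud rate has already been configured (e.g. with stty):

package main

import (
    "log"
    "os"
)

func main() {
    // /dev/ttyUSB0 is a placeholder; the actual device node depends
    // on the board and how it enumerates.
    port, err := os.OpenFile("/dev/ttyUSB0", os.O_RDWR, 0)
    if err != nil {
        log.Fatal(err)
    }
    defer port.Close()

    // 'F' is a made-up protocol byte telling the Arduino to run
    // its MoveForward() routine.
    if _, err := port.Write([]byte{'F'}); err != nil {
        log.Fatal(err)
    }
}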

I also bought a used 720p HD webcam that I researched and confirmed will work in Linux: ironically, a Microsoft LifeCam Cinema.  I have already done some work with the camera, and I’ll post the details in another post.

But generally, the Raspberry Pi would fetch an image from the USB webcam, perform image processing (perhaps even using OpenGL to accelerate some of the algorithms), and make movement decisions based on the processed information.  It would then send commands to the Arduino via the serial port.  For the camera interface and image processing code, I expect to write all of this in C/C++ to maximize performance and take advantage of the various libraries available for image processing.  I found an interesting article on image processing with OpenGL (found here), so I will be evaluating that approach as well.

For version one, the bot will autonomously track a red ball and move toward it.  This is a simple task, just to get all the pieces working together.  Then I will work on more advanced functionality, such as off-loading image processing to a remote laptop and manual override/remote viewing from an iPad.  I’d also like to implement face recognition, and maybe add speakers to the bot so that it can greet people.  Maybe even a microphone for a two-way conversation.  Siri on wheels and with eyes!

I will continue to post details of my project, so watch for more!