In previous posts I detailed how to:
Each one of those projects is pretty interesting in its own right, but what would be really cool is to combine them all together. So I did. Check it out - I recommend full screen.
The marble pump, motor control and basics of getting OpenCV to run on the Pi are covered in those prior posts. Here I will discuss details particular to this build. Source code for the project is available on GitHub.
Initially, the marble race was built with studs face down. The bottoms of Lego plates aren't perfectly smooth, but they are flat enough to let a marble run down them without having to find flat tile pieces for the whole length of the race. The uneven undersides of the upside-down plates also slow the balls down, giving the Raspberry Pi more time to detect and react to a red coloured ball.
This inverted plate technique worked well on the first marble race, where the balls didn't have to turn corners, but when 90 degree turns were added the balls tended to get stuck as they didn't have enough energy to bounce around and find their way down the next chute.
For the final section before the sweeper arm a (rare!) underside tile was placed in the landing spot to give the marble just enough of a starting boost to make it through the gate and onto the conveyor belt. Without that tile the marbles would land and often stop.
The chimney exit used a pair of inverted 2 x 8 plates. This was a deliberate improvement over a single 4 x 8 plate because the seam between the two plates keeps the marble elevated off the uneven underside and keeps its velocity up for the first turn.
The long run down toward the final two corners was changed to stud-on-top with tiles, as without the speed boost of the flat tiles the marbles weren't making it through the next turn. I could have increased the gradient, but that would have meant a taller chimney and more force required to pump balls up it. The 9V Technic motor driving the pump is from a Mindstorms Robotics Invention set some 10 years old, and the step-down gearbox that drives the axle is 20+ years old, so I didn't want to overstrain it.
The conveyor belt stage is built with the tracks from the old Mindstorms set. The only trick to this section was mounting mini 2x1 wedges at the end of the belt to guide the ball into a narrower range to drop from.
A chain link drives the belt at a 1:1 ratio from the gearbox output. I wouldn't have minded stepping it down for more torque but was a bit short on chain links.
The circuit for controlling the servo is a PCA9685 PWM LED controller connected to the Raspberry Pi via I2C, as detailed in this post. A 12V power pack is fed through a UBEC (a switch mode DC-DC regulator often used in model aircraft) to supply 5V to the servo PWM chip and the DC motor that drives the marble pump and conveyor. The DC motor in this project is not under the control of the Pi and is wired directly to the power supply.
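The servo-driving code isn't reproduced here, but the pulse-width arithmetic a PCA9685 needs is simple to sketch. The 50 Hz frequency and 12-bit (4096-tick) resolution are properties of the chip; the pulse widths below are typical hobby-servo values, not measurements from this build:

```python
def pca9685_ticks(pulse_ms, frequency_hz=50, resolution=4096):
    """Convert a servo pulse width in milliseconds to a PCA9685 tick count.

    The PCA9685 divides each PWM period into 4096 ticks, so a pulse is
    expressed as the number of ticks the output should stay high.
    """
    period_ms = 1000.0 / frequency_hz  # 20 ms at 50 Hz
    return round(pulse_ms / period_ms * resolution)

# Typical hobby-servo pulses: 1.0 ms one extreme, 1.5 ms centre, 2.0 ms the other.
print(pca9685_ticks(1.0), pca9685_ticks(1.5), pca9685_ticks(2.0))  # 205 307 410
```

You would write that tick count into the chip's LED_OFF registers for the servo's channel, with LED_ON left at zero.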
In the picture below, the 16 pin DIP on the right is an unused IC from the previous motor project.
The Ping Pong Thrower used a large servo that was easily capable of snapping a Lego beam in half if it was commanded to move beyond the limits of the Lego it was attached to. This time I downsized to the more nimble HXT900. The ping pong project used some tie wire to bind the servo to the Lego beam; this time I dremelled out an old (set 8035!) Technic plate and trimmed down the HXT900 horn to fit inside it.
The software for the project is written in C# and runs under Mono on the Raspberry Pi. The main controller (view source) is quite simple to understand: it captures frames from the camera, looks for red in the image and, if detected, moves the servo arm to the required position until the red disappears. An interesting thing I had to implement was a debounce: the red ball would appear but then temporarily disappear as it hit the corner of the race and failed to meet the minimum detection area. This caused the sweeper arm to oscillate rapidly as it cycled between detected, not detected and detected again, and some balls were getting batted out of the race. A six-and-out example can be seen at 3:31 of the video.
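The actual controller is C#, but the debounce idea is easy to show in a few lines of Python: only accept a change of state after it has held for several consecutive frames, so a one-frame dropout at a corner doesn't flap the sweeper arm. The threshold value here is illustrative, not the one used in the project:

```python
class Debouncer:
    """Accept a detection state change only after it has held for
    `threshold` consecutive frames, suppressing one-frame flicker."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.count = 0
        self.state = False  # debounced output

    def update(self, raw_detected):
        if raw_detected == self.state:
            self.count = 0  # no change pending; reset the run length
        else:
            self.count += 1
            if self.count >= self.threshold:
                self.state = raw_detected
                self.count = 0
        return self.state

# A flickering sequence: the single-frame dropout in the middle is ignored.
d = Debouncer(threshold=2)
frames = [True, True, False, True, True, False, False, False]
print([d.update(f) for f in frames])
# [False, True, True, True, True, True, False, False]
```

Without the debounce, the `False` in the middle of the run would command the arm back and forth on consecutive frames.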
The keen observer will see I bound the Z, X and C keys to flicking the servo to various positions. It is a fun game to send the undetected green balls down the race and use the keys to try and bat them onto the floor.
The sorter works by detecting the red balls and ignoring the green. In order to program the detector, the WinForms project in the PiCamCV solution is used to view a feed from the camera. The HSV colour sliders are dragged so as to create a tight range where only the ball appears white in the filtered view, indicating that all other colours are being filtered out. This can be seen at 2:07 of the video.
A lot of readers will be familiar with RGB colour. The image captured from OpenCV is in RGB, but the colour detector function works in HSV colour space - Hue, Saturation, Value. Read more about HSV on Wikipedia.
The benefit of using HSV as a colour system is that it is more resilient to changes in ambient lighting than RGB, making it more robust for colour detection in the real world. The images below show this best. The top row is a sequence of photos of the 14mm plastic beads I'm using for marbles under different lighting conditions. The bottom row is the same image blurred in Paint.NET to help average out the values a little so that a random pixel sample should represent a loose average of the colour of the ball.
| Colour Scheme | Daylight | Light bulb | Low light | Very low light |
| --- | --- | --- | --- | --- |
| RGB | 180, 23, 31 | 149, 24, 26 | 60, 11, 14 | 5, 1, 2 |
| HSV | 356, 88, 70 | 358, 86, 58 | 356, 83, 24 | 17, 72, 3 |
The table above shows RGB and HSV values for a pixel sampled from the centre of the ball in each lighting condition. See how in the RGB representation the values for each channel change significantly across the lighting conditions: red samples at 180, 149, 60 and 5. For the same colour in HSV, Hue and Saturation are remarkably stable and only the Value changes.
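You can reproduce the HSV rows of the table with Python's standard `colorsys` module (the results differ from the table by a point here and there due to rounding, but the stable Hue and falling Value are clear):

```python
import colorsys

# RGB samples of the red ball from the table above, brightest to darkest.
samples = [(180, 23, 31), (149, 24, 26), (60, 11, 14)]

for r, g, b in samples:
    # colorsys works on 0-1 floats; scale back to degrees and percent.
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    print(f"H={h * 360:.0f}  S={s * 100:.0f}  V={v * 100:.0f}")
```

The hue stays pinned around 356-359 degrees across three very different exposures, which is exactly why the detector thresholds on HSV.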
The magic of the ball detection is done using OpenCV's InRange function. The core of the C# code is reproduced below from ColorDetector.cs.
With respect to the code below, which is executed for every frame: first the region of interest is specified at line 9. While a tuned region of interest has the advantage of excluding inadvertent red objects that we don't want to trigger on, the main benefit is reduced processing time.
After setting the Region of Interest, we convert our captured frame from RGB to HSV colour space on line 12. Lines 14-18 take those min/max ranges determined previously in the WinForms project and call the InRange function. This creates a new grayscale image that is 0 (black) anywhere that isn't red and 255 (white) anywhere that is - assuming red is what we passed in as the HSV filter range. Lines 24-29 smooth out noise. The rest of the code obtains information on whether anything was detected in the specified HSV range, tests how big it is and then sets the output values for the caller.
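The thresholding step itself is easy to sketch without OpenCV. Here is a plain-Python equivalent of InRange over a list of HSV pixels; the red range below is hypothetical, standing in for the one tuned in the WinForms tool (note OpenCV stores hue as 0-179 in 8-bit images, and a red that straddles hue 0 would need two ranges OR'd together):

```python
def in_range(hsv_pixels, lower, upper):
    """Plain-Python sketch of OpenCV's InRange: output 255 for each pixel
    whose channels all lie within [lower, upper], else 0."""
    lo_h, lo_s, lo_v = lower
    hi_h, hi_s, hi_v = upper
    mask = []
    for h, s, v in hsv_pixels:
        hit = lo_h <= h <= hi_h and lo_s <= s <= hi_s and lo_v <= v <= hi_v
        mask.append(255 if hit else 0)
    return mask

# A hypothetical tuned red range (OpenCV 8-bit hue scale, 0-179).
lower, upper = (170, 150, 40), (180, 255, 255)
pixels = [(176, 220, 180), (60, 200, 200), (178, 160, 60)]  # red, green, dark red
print(in_range(pixels, lower, upper))  # [255, 0, 255]
```

The real InRange does the same comparison per pixel across the whole ROI in optimized native code, which is why it is fast enough even on the Pi.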
Once red has been detected, the rest is easy as Pi: flick the servo to the appropriate position and back again once the red disappears.
Throughout the build, the one thing I wasn't sure I could count on was the performance of the Raspberry Pi. From my first foray into OpenCV, playing with C++ examples on Windows, I knew the project was feasible from a PC. The Raspberry Pi is somewhat slower, however, and it wasn't until late in the project that it became apparent the tiny SoC could pull it off, processing enough frames per second to leave time to switch the sweeper gate. Here are some of the tricks I used to squeeze out some extra frames:
The Raspberry Pi camera module uses the GPU to process images. USB cameras on the Raspberry Pi use the CPU and are much, much slower.
The RaspiCam library capture settings are hard coded. The git source has it capturing at 640x480. I changed this to capture at 320x240, which significantly reduces the number of pixels being buffered from the GPU, captured by OpenCV and then processed by the C# code. The library also had a very high bitrate specified, more suited to HD capture, so I throttled that down. I didn't run any A/B tests to see if the bitrate change had any effect, so that is worthy of further investigation.
The Raspberry Pi can be overclocked via the official raspi-config tool. I tried the Turbo setting straight off but my Pi refused to boot. I backed it down to Medium and earned a few more frames per second over the default clock. The Pi will throttle any overclock when it hits 80°C, so just in case I grabbed some heatsinks off eBay; it has been sitting around the 41-42°C mark.
Specifying the Region of Interest also gives a significant performance boost by reducing the number of pixels that need to be processed for colour. Converting an image from RGB to HSV is relatively expensive, and by setting the ROI only the required pixels are converted.
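To put rough numbers on that, here is the pixel budget at the 320x240 capture size against a made-up ROI (the ROI actually used in the project isn't given here):

```python
# Illustrative pixel-budget arithmetic for the per-frame RGB -> HSV conversion.
full_frame = 320 * 240  # capture resolution after the downscale
roi = 160 * 80          # a hypothetical ROI covering just the chute
print(full_frame // roi)  # 6: six times fewer pixels to convert per frame
```

Every stage downstream of the ROI crop (the HSV conversion, InRange and the smoothing) scales with that pixel count, so the saving compounds.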
The video shows the WinForms project that helps capture the HSV range but that project doesn't run the sorter. Running X Windows wastes valuable CPU! The actual servo sorting app runs from the console.
Well, that about sums up my adventures with HSV colour filtering in OpenCV. OpenCV has a lot more interesting features and I have some ideas I want to implement. Those ideas will require breaking down the current Lego model, which has consumed a mammoth amount of spare time over the last 6 weeks, but that is the nature of Lego!