Object recognition using OpenCV in Python is easier than you think. Thanks to OpenCV’s extensive library, little first-party code is required.
First you apply a Gaussian blur to the image. This means that the subsequent operations will be focused more on overarching figures rather than minute grains. Where frame is the captured image:
blurred = cv2.GaussianBlur(frame, (11, 11), 0)
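In case you’re wondering where frame comes from: I’m grabbing it from a webcam, but any BGR image (e.g. one loaded with cv2.imread) works just as well. A minimal sketch, assuming the default camera lives at index 0:

import cv2

# Open the default camera (index 0 is an assumption; adjust if you have several)
capture = cv2.VideoCapture(0)
ok, frame = capture.read()  # frame is a BGR image as a NumPy array
if not ok:
    raise RuntimeError("Couldn't read a frame from the camera")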
Then you apply a mask over the image for the colour range you’re looking for. The colour range is specified in a weird version of HSV, which is a bit confusing: OpenCV operates on HSV(0–180, 0–255, 0–255) rather than the more typical HSV(0–360, 0–100, 0–100). We can convert the BGR1 image produced by GaussianBlur into HSV with:
hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
And find the mask with:
green_lower = (40, 70, 0)
green_upper = (100, 255, 255)
mask = cv2.inRange(hsv, green_lower, green_upper)
I’ve used green in my example, but it could be any colour you like. I’d recommend sticking to bright, simple colours like blue, yellow and red to get the best results.
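If you’re not sure what range to use for your colour, one trick (mine, not an official OpenCV recipe) is to convert a single sample pixel of the target colour into OpenCV’s HSV and eyeball a range around the result:

import numpy as np

# A 1x1 "image" holding one pure-green BGR pixel (the sample value is just an example)
sample_bgr = np.uint8([[[0, 255, 0]]])
sample_hsv = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2HSV)
print(sample_hsv)  # [[[ 60 255 255]]] -- hue 60 on OpenCV's 0-180 scale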
To smooth the mask outline out a bit, we’ll erode and then dilate it:
mask = cv2.erode(mask, None, iterations=2)
mask = cv2.dilate(mask, None, iterations=2)
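Passing None here uses OpenCV’s default 3×3 rectangular kernel. If you want heavier smoothing, you can pass an explicit structuring element instead; this is an optional tweak of mine, not something the tutorial needs:

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.erode(mask, kernel, iterations=2)
mask = cv2.dilate(mask, kernel, iterations=2)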
We then need to grab the contours (jargon for outlines) of the mask. We can do this in another one-liner (split into multiple lines):
contours, _ = cv2.findContours(
    mask,
    cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
This only works in OpenCV 4 (the current release version) or higher. If you’re using OpenCV 3, you’ll want the following:
_, contours, _ = cv2.findContours(
    mask,
    cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
If you’re using a version earlier than 3.2, cv2.findContours will modify mask. This means that we’ll want to copy it first, passing mask.copy() instead of mask as the first argument. Note that 3.2 is the version installed by running sudo apt install python3-opencv. You can check which version you have by printing cv2.__version__.
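If you’d rather not hard-code one signature, here’s a small sketch of my own that checks the major version at runtime and copes with either return shape (it also passes mask.copy(), so the pre-3.2 caveat above is covered):

major = int(cv2.__version__.split(".")[0])
if major == 3:
    # OpenCV 3.x returns (image, contours, hierarchy)
    _, contours, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
else:
    # OpenCV 2.x and 4.x return (contours, hierarchy)
    contours, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)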
We’ll then want to find the largest contour in the image, if it’s not just an empty list:
if len(contours) > 0:
    c = max(contours, key=cv2.contourArea)
Finally, we need to grab the moments of the contour. Now, an image moment is something involving a bunch of complicated math I don’t understand. At a high level, it is a kind of weighted average of the intensities of the individual pixels of an image. I’m not going to pretend to understand how it works. We can use this intensity information to calculate the centre of the object as follows (still within the if statement):
    M = cv2.moments(c)
    center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
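One gotcha worth flagging (this guard is my addition, not part of the original flow): if the largest contour somehow has zero area, M["m00"] will be 0 and the division will raise a ZeroDivisionError, so you may want to check for that:

    M = cv2.moments(c)
    if M["m00"] > 0:  # guard against a degenerate, zero-area contour
        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))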
And we’ve found the centre of the object! 🥳 If we want an image out of it, we can grab the minimum enclosing circle, overlay it and a dot marking the centre on the original image, and either imwrite or imshow the result:
# Find the smallest circle that fully encloses the contour
((x, y), radius) = cv2.minEnclosingCircle(c)
# Draw the enclosing circle in yellow...
cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
# ...and a filled red dot at the centre
cv2.circle(frame, center, 5, (0, 0, 255), -1)
# Write the image to a file named "object.jpg"
cv2.imwrite("./object.jpg", frame)
# Display it in a window entitled "Coloured Object Recognition"
cv2.imshow("Coloured Object Recognition", frame)
cv2.waitKey(0)  # imshow only draws once the GUI event loop runs
And that’s it!
-
Like RGB but with blue first and red last. 🟦🟩🟥 instead of 🟥🟩🟦. ↩︎