How to Build a Computer Vision Based Color Detection Feature With OpenCV

Updated 22 Feb 2023

OpenCV is an open-source library focused on machine learning and computer vision. As these fields have been progressing incredibly fast over the last few years, it’s good to have such a library at hand. It’s under a BSD license, which means businesses and independent developers can use and modify its code.

Since we already have some background with OpenCV, we’ve decided to play with it a little and show you what it’s capable of in terms of mobile development.

In this tutorial, we’re going to look at how to implement functionality that lets you change the color of objects using a smartphone’s camera.

Final result of this OpenCV tutorial for iOS (what you’re getting at the end)

Getting Started With OpenCV SDK

The first step is to add OpenCV to your Xcode project. There are tons of tutorials like this one on how to do this.

Before we start, there are several things you should know:

  • It’s an image processing library.
  • Mat is the basic image container.
  • When a device captures an image of the real world, it transforms it into digital values, and OpenCV records a numerical value for each image point.
  • So any image is simply a matrix containing the intensity values of all its pixels.

Now a couple of things about the project itself:

  • We’ll use Swift classes to implement the UI and Objective-C classes to implement the detection.
  • You can find out how to use Objective-C classes in Swift right here; in short, the detector’s header is imported via the project’s bridging header, as sketched below.
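
A minimal sketch of that bridging header (the file name below is hypothetical; use whatever Xcode generated for your project):

// MyProject-Bridging-Header.h (hypothetical name)
#import "OpenCVDetector.h"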

Step 1: Setting Everything Up

The first thing we should do is add the OpenCVDetector Objective-C class that’ll be working with the library.
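
The article only shows the implementation, so here’s a minimal sketch of what the matching OpenCVDetector.h might look like (the OpenCVDetectorType cases are an assumption based on how the class is used later, not part of the original):

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

// Assumed camera-selection enum; the Swift code later passes .back
typedef NS_ENUM(NSInteger, OpenCVDetectorType) {
    OpenCVDetectorTypeBack,
    OpenCVDetectorTypeFront
};

@interface OpenCVDetector : NSObject

- (instancetype)initWithCameraView:(UIView *)view
                             scale:(CGFloat)scale
                            preset:(AVCaptureSessionPreset)preset
                              type:(OpenCVDetectorType)type;

- (void)startCapture;
- (void)stopCapture;

@end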

In the class implementation below, we pull in the cv and std namespaces to work with the OpenCV library more smoothly.

using namespace cv;
using namespace std;
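
Those using directives assume the OpenCV headers are already imported at the top of the .mm file; a minimal sketch (header paths can vary slightly between OpenCV versions):

// OpenCV headers come first to avoid macro clashes with Apple headers
#import <opencv2/opencv.hpp>
#import <opencv2/videoio/cap_ios.h>   // CvVideoCamera and CvVideoCameraDelegate
#import "OpenCVDetector.h"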

OpenCV has a CvVideoCamera class, which is basically a wrapper around AVFoundation, so we set some of the AVFoundation camera options through its properties.

// Class extension: the properties used below and the delegate conformance
@interface OpenCVDetector () <CvVideoCameraDelegate>
@property (nonatomic, strong) CvVideoCamera *videoCamera;
@property (nonatomic, assign) CGFloat scale;
@end

@implementation OpenCVDetector

- (instancetype)initWithCameraView:(UIView *)view scale:(CGFloat)scale preset:(AVCaptureSessionPreset)preset type:(OpenCVDetectorType)type {
    self = [super init];
    if (self) {
        // CvVideoCamera renders its frames directly into the passed-in view
        self.videoCamera = [[CvVideoCamera alloc] initWithParentView:view];
        self.videoCamera.defaultAVCaptureSessionPreset = preset;
        self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
        self.videoCamera.defaultFPS = 30;
        self.videoCamera.delegate = self;
        // Assumed mapping of the type parameter to the camera position
        self.videoCamera.defaultAVCaptureDevicePosition = (type == OpenCVDetectorTypeBack) ? AVCaptureDevicePositionBack : AVCaptureDevicePositionFront;

        self.scale = scale;
    }
    return self;
}

OpenCVDetector must implement the CvVideoCameraDelegate protocol and be set as the video camera’s delegate; its processImage: method (sketched after the start/stop methods below) is where every captured frame arrives.

- (void)startCapture {
[self.videoCamera start];
}

- (void)stopCapture {
[self.videoCamera stop];
}
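
The delegate side of that contract is a single processImage: method, which CvVideoCamera calls for every captured frame; a minimal sketch of where the detection code from the next steps would live:

#pragma mark - CvVideoCameraDelegate

- (void)processImage:(cv::Mat &)image {
    // Called for every frame; `image` arrives as a BGRA Mat.
    // The color detection and background replacement from the steps
    // below operate on `image` in place, and whatever is left in it
    // gets rendered back to the camera view.
}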

Then, create a CameraViewController Swift class.

class CameraViewController: UIViewController {

    lazy var detector = OpenCVDetector(cameraView: view, scale: 1, preset: .vga640x480, type: .back)

    override func viewDidLoad() {
        super.viewDidLoad()
        detector.startCapture()
    }
}

This way, we initialize the camera and pass the view controller’s view as the target into which every frame is rendered. (As with any camera use on iOS, the app’s Info.plist also needs an NSCameraUsageDescription entry before capture will work.)

What Are Color Spaces?

As I’ve already mentioned, we’re going to change the color of an object. In my case, it’s a red spinner, which is quite easy to identify.

RGB is the most commonly used color space. Its acronym stands for Red Green Blue. In plain English, these three colors are mixed together in various ways to reproduce any color you may need.

In technical terms, each of the three channels can take a value between 0 and 255, meaning (0, 0, 0) is black and (255, 255, 255) is white.

An important thing to mention here is that OpenCV orders color channels as BGR rather than RGB, and the camera frames we get on iOS arrive in BGRA format, so pure red, for example, shows up as (0, 0, 255, 255).

Step 2: Red Color Detection

RGB values are very sensitive to illumination, so we’ll transform our image from the RGB color space to HSV (Hue, Saturation, Value).

The HSV color space represents colors using three values:

  • Hue. It encodes color information. You can think of it as an angle where 0 degrees corresponds to the red color, 120 degrees corresponds to the green color, and 240 degrees corresponds to the blue color.
  • Saturation. This channel encodes the purity of color. For example, pink is less saturated than red.
  • Value. This channel encodes the brightness of color. Shading and gloss components of an image appear in this channel.

To better understand color spaces in OpenCV, you may read this post.

Let’s get back to our code.

In the code below, we convert the image to the HSV color space and then define specific ranges of H-S-V values to detect the red color.
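
The conversion itself isn’t part of the original snippet; a minimal sketch of how it might look at the top of processImage:, assuming the incoming frame is the BGRA Mat called image:

Mat img, hsv;
cvtColor(image, img, COLOR_BGRA2BGR);   // drop the alpha channel; `img` is what we segment later
cvtColor(img, hsv, COLOR_BGR2HSV);      // BGR -> HSV

Mat mask1, mask2;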

Scalar lower_red, upper_red;

//Mask for lower red
lower_red = Scalar(0, 120, 70);
upper_red = Scalar(10, 255, 255);
inRange(hsv, lower_red, upper_red, mask1);

//Mask for upper red
lower_red = Scalar(170, 120, 70);
upper_red = Scalar(180, 255, 255);
inRange(hsv, lower_red, upper_red, mask2);

// Generating the final mask
mask1 = mask1 + mask2;

The inRange function simply returns a binary mask, where white pixels (255) mark the values that fall between the lower and upper limits and black pixels (0) mark those that don’t.

Hue values are actually distributed over a circle (a range of 0-360 degrees), but to fit into OpenCV’s 8-bit representation the range is scaled down to 0-180. Red sits at both ends of that range, roughly 0-30 and 150-180.

We use the 0-10 and 170-180 ranges to avoid picking up skin tones. For the saturation and value channels I’ve simply used the same limits in both masks; the exact values aren’t critical for our task.

Then we combine the two masks generated for the red color ranges. Pixel-wise, this is an OR operation; it’s a simple example of the overloaded “+” operator on Mat.

Step 3: Segmenting Out the Detected Red-Colored Object

In the previous step, we generated a mask to determine the area in the frame matching the detected color. We refine this mask and then use it for segmenting out the red spinner from the frame.

The code below illustrates how it’s done.

// Smooth the mask and re-binarize it to remove noisy edges
cv::Size blurSize(8, 8);
blur(mask1, mask1, blurSize);
threshold(mask1, mask1, 50, 255, THRESH_BINARY);

// Morphological opening removes small specks of noise; dilation then expands the remaining mask slightly
Mat kernel = Mat::ones(3, 3, CV_32F);
morphologyEx(mask1, mask1, cv::MORPH_OPEN, kernel);
morphologyEx(mask1, mask1, cv::MORPH_DILATE, kernel);

// Creating an inverted mask to segment out the object from the frame
bitwise_not(mask1, mask2);

Mat res1, res2, final_output;

// Segmenting the object out of the frame using bitwise_and with the inverted mask
// (res1 keeps everything except the detected red area)
bitwise_and(img, img, res1, mask2);

Step 4: Create a New Background for the Object

We create a new background from our HSV image with a different H (hue) value, and then boost the saturation and brightness (the S and V values).

// Start from the new hue; the S and V placeholders are filled in below
Mat background = Mat(hsv.rows, hsv.cols, hsv.type(), Scalar(fillingHSVColor[0], 0, 0));

for (int i = 0; i < background.cols; i++) {
    for (int j = 0; j < background.rows; j++) {
        cv::Point point(i, j);
        // Copy the original saturation and brightness, boosted by 50
        background.at<Vec3b>(point).val[1] = MIN(hsv.at<Vec3b>(point).val[1] + 50, 255);
        background.at<Vec3b>(point).val[2] = MIN(hsv.at<Vec3b>(point).val[2] + 50, 255);
    }
}

cvtColor(background, background, COLOR_HSV2BGR);
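
The snippet above assumes a fillingHSVColor vector holding the target hue. One way it might come from the UI is via a UIColor the user picks; a sketch of such a helper (hsvFromUIColor is a hypothetical name, not part of the article’s code):

// Hypothetical helper: turn a picked UIColor into the HSV triple used above.
// OpenCV stores hue as 0-180 for 8-bit images, so the 0-1 hue is scaled accordingly.
static Vec3b hsvFromUIColor(UIColor *color) {
    CGFloat h = 0, s = 0, b = 0, a = 0;
    [color getHue:&h saturation:&s brightness:&b alpha:&a];
    return Vec3b((uchar)(h * 180), (uchar)(s * 255), (uchar)(b * 255));
}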

Step 5: Generating the Final Output

Finally, we can replace the pixel values of the detected red-colored area with matching pixel values of the background to generate an augmented output.

To do so, we use the bitwise_and operation to create an image whose pixels, within the detected area, take the background’s pixel values. Then we add that output to the image (res1) from which we’ve segmented out the red spinner.

// creating image showing static background frame pixels only for the masked area
bitwise_and(background, background, res2, mask1);

// Generating the final augmented output.
cv::add(res1, res2, final_output);
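
Since CvVideoCamera renders whatever ends up in the image argument of processImage:, in practice the last step is to copy the augmented frame back into it; a short sketch, assuming the frame arrived as BGRA:

// Hand the augmented frame back to CvVideoCamera for rendering
cvtColor(final_output, image, COLOR_BGR2BGRA);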

Here’s how the app works in practice

That’s It!

We’ve built an application that can detect the color of an object and then change it in one tap. Go and play with it on your own; it’s open-source! Here’s a link to our GitHub profile with the library.

About author

Evgeniy Altynpara is a CTO and member of the Forbes Councils’ community of tech professionals. He is an expert in software development and technological entrepreneurship and has 10+ years of experience in digital transformation consulting in Healthcare, FinTech, Supply Chain, and Logistics.
