OpenCV Python Tutorials (opencvpython.blogspot.com)

Histograms - 1 : Find, Plot, Analyze !!!


Hi,

This time, we will go through various functions in OpenCV related to histograms.

So what is a histogram? You can consider a histogram as a graph or plot which gives you an overall idea about the intensity distribution of an image. It is a plot with pixel values (ranging from 0 to 255) on the X-axis and the corresponding number of pixels in the image on the Y-axis.

It is just another way of understanding the image. By looking at the histogram of an image, you get an intuition about its contrast, brightness, intensity distribution etc. Almost all image processing tools today provide features based on histograms. Below is an image from the "Cambridge in Color" website, and I recommend you visit the site for more details.

[Image: Image histogram (from Cambridge in Color)]


You can see the image and its histogram. (Remember, this histogram is drawn for a grayscale image, not a color image.) The left region of the histogram shows the amount of darker pixels in the image and the right region shows the amount of brighter pixels. From the histogram, you can see that the dark region is larger than the brighter region, and the amount of midtones (pixel values in the mid-range, say around 127) is very small.

(For more basic details on histograms, visit : http://www.cambridgeincolour.com/tutorials/histograms1.htm)

FIND HISTOGRAM

Now that we have an idea of what a histogram is, we can look into how to find one. OpenCV comes with a built-in function for this, cv2.calcHist(). Before using that function, we need to understand some terminology related to histograms.

BINS :
The above histogram shows the number of pixels for every pixel value, ie from 0 to 255, so you need 256 values to show the above histogram. But what if you don't need the number of pixels for every pixel value separately, but only the number of pixels in an interval of pixel values? Say, for example, you need to find the number of pixels lying between 0 and 15, then 16 to 31, ..., 240 to 255. You will need only 16 values to represent the histogram. And that is what is shown in the example given in the OpenCV Tutorials on histograms.

So what you do is simply split the whole histogram into 16 sub-parts, and the value of each sub-part is the sum of all pixel counts in it. Each such sub-part is called a "BIN". In the first case, the number of bins was 256 (one for each pixel value), while in the second case it is only 16. BINS is represented by the term "histSize" in the OpenCV docs.

DIMS : It is the number of parameters for which we collect the data. In our case, we collect data regarding only one thing, the intensity value. So here it is 1.

RANGE : It is the range of intensity values you want to measure. Normally, it is [0,256], ie all intensity values.

So now we use the cv2.calcHist() function to find the histogram. Let's familiarize ourselves with the function and its parameters :
cv2.calcHist(images, channels, mask, histSize, ranges[, hist[, accumulate]])

1 - images : it is the source image of type uint8 or float32. It should be given in square brackets, ie "[img]".
2 - channels : it is also given in square brackets. It is the index of the channel for which we calculate the histogram. For example, if the input is a grayscale image, its value is [0]. For a color image, you can pass [0], [1] or [2] to calculate the histogram of the blue, green or red channel respectively.
3 - mask : mask image. To find the histogram of the full image, it is given as "None". But if you want to find the histogram of a particular region of the image, you have to create a mask image for that and give it as the mask. (I will show an example later.)
4 - histSize : this represents our BIN count. It needs to be given in square brackets. For full scale, we pass [256].
5 - ranges : this is our RANGE. Normally, it is [0,256].

So let's start with a sample image. Simply load an image in grayscale mode and find its full histogram.

import cv2

img = cv2.imread('home.jpg',0)
hist = cv2.calcHist([img],[0],None,[256],[0,256])

hist is a 256x1 array, where each value corresponds to the number of pixels in the image with the corresponding pixel value. Now we should plot it, but how?

PLOTTING HISTOGRAM

There are two ways, 1) Short Way : use Matplotlib & 2) Long Way : use OpenCV functions

1 - Using Matplotlib:

Matplotlib comes with a histogram plotting function : matplotlib.pyplot.hist()

It directly finds the histogram and plots it. You need not use the calcHist() function to find the histogram. See the code below:

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('home.jpg',0)
plt.hist(img.ravel(),256,[0,256]); plt.show()

You will get a plot as below :

[Image: histogram plotted with matplotlib.pyplot.hist()]


NOTE : Actually, to find the histogram, Numpy also provides a function, np.histogram(). So instead of the calcHist() function, you can try the line below:

hist,bins = np.histogram(img,256,[0,256])

hist is the same as we calculated before. But bins will have 257 elements, because numpy calculates bins as 0-0.99, 1-1.99, 2-2.99 etc. So the final range would be 255-255.99. To represent that, they also add 256 at the end of bins. But we don't need that 256; up to 255 is sufficient.
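
If you go this way, you can plot the np.histogram() result with matplotlib's normal plot by using only the first 256 bin edges as the x-axis. A minimal sketch (assuming the same grayscale 'home.jpg' as above):

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('home.jpg',0)

# np.histogram() gives 256 counts and 257 bin edges
hist,bins = np.histogram(img.ravel(),256,[0,256])

# drop the last edge (256) so x and y have the same length
plt.plot(bins[:-1],hist)
plt.xlim([0,256])
plt.show()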

Or you can use the normal plot of matplotlib, which would be good for a BGR plot. For that, you need to find the histogram data first. Try the code below:

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('home.jpg')
color = ('b','g','r')
for i,col in enumerate(color):
    histr = cv2.calcHist([img],[i],None,[256],[0,256])
    plt.plot(histr,color = col)
    plt.xlim([0,256])
plt.show()

You will get an image as below :

[Image: histogram showing the blue, green and red channels]


You can deduce from the above graph that blue has some high-value areas (obviously due to the sky).

2 - Using OpenCV functions :

Well, here you adjust the values of the histogram along with its bin values to look like x,y coordinates, so that you can draw it using the cv2.line() or cv2.polylines() function to generate the same image as above. This is already available in the OpenCV-Python2 official samples. You can check it here : https://github.com/Itseez/opencv/blob/master/samples/python2/hist.py . I had already mentioned it in one of my very early articles : Drawing Histogram in OpenCV-Python
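
If you just want the rough idea without opening the sample, below is my own simplified sketch of it (not the exact hist.py code): normalize the histogram to the height of a blank image, turn each (bin, value) pair into an (x, y) point, and draw the points with cv2.polylines().

import cv2
import numpy as np

img = cv2.imread('home.jpg',0)
hist = cv2.calcHist([img],[0],None,[256],[0,256])

h = 300                                  # height of the plot image (arbitrary)
canvas = np.zeros((h,256),np.uint8)      # blank image to draw on

# scale the bin values to fit the canvas height
cv2.normalize(hist,hist,0,h,cv2.NORM_MINMAX)
hist = np.int32(np.around(hist))

# x = bin number, y = canvas height minus bin value (so peaks point upwards)
pts = np.int32(np.column_stack((np.arange(256),h - hist.reshape(-1))))
cv2.polylines(canvas,[pts],False,255)

cv2.imshow('histogram',canvas)
cv2.waitKey(0)
cv2.destroyAllWindows()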

APPLICATION OF MASK

So far we used calcHist to find the histogram of the full image. What if you want the histogram of only some region of an image? Just create a mask image with white color on the region where you want the histogram and black otherwise. I have demonstrated it while answering a SOF question, so I would like you to read that answer (http://stackoverflow.com/a/11163952/1134940). Just for a demo, I provide the same images here :
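
If you prefer to see the idea right here, below is a minimal sketch (the rectangular region used for the mask is arbitrary, just for illustration):

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('home.jpg',0)

# mask : white (255) on the region we are interested in, black elsewhere
mask = np.zeros(img.shape[:2],np.uint8)
mask[100:300,100:400] = 255              # arbitrary rectangular region

# histogram of the full image vs histogram of the masked region
hist_full = cv2.calcHist([img],[0],None,[256],[0,256])
hist_mask = cv2.calcHist([img],[0],mask,[256],[0,256])

plt.plot(hist_full)
plt.plot(hist_mask)
plt.xlim([0,256])
plt.show()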

[Image: Application of mask]
Due to resizing, the clarity of the histogram plot is reduced. But I hope you can write your own code and analyze it.

SUMMARY
In short, we have seen what an image histogram is, how to find and interpret histograms, how to plot them etc. That is sufficient for today. We will look into other histogram functions in coming articles.

Hope you enjoyed it !!! Feel free to share !!!

Abid Rahman K.

K-Means Clustering - 3 : Working with OpenCV


Hi,

In the previous articles, K-Means Clustering - 1 : Basic Understanding and K-Means Clustering - 2 : Working with Scipy, we saw what K-Means is and how to use it to cluster data. In this article, we will see how to use the K-Means function in OpenCV for K-Means clustering.

OpenCV documentation for K-Means clustering : cv2.kmeans()

Function parameters :

Input parameters :

1 - samples : It should be of np.float32 data type, and as said in previous article, each feature should be put in a single column.

2 - nclusters (K) : Number of clusters

3 - criteria : It is the algorithm termination criteria. Actually, it should be a tuple of 3 parameters, ( type, max_iter, epsilon ):
    3.a - type of termination criteria : It has 3 flags as below:
        - cv2.TERM_CRITERIA_EPS - stop the algorithm iteration if the specified accuracy, epsilon, is reached.
        - cv2.TERM_CRITERIA_MAX_ITER - stop the algorithm after the specified number of iterations, max_iter.
        - cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER - stop the iteration when either of the above conditions is met.

    3.b - max_iter - An integer specifying the maximum number of iterations.
    3.c - epsilon - Required accuracy

4 - attempts : A flag to specify the number of times the algorithm is executed using different initial labellings. The algorithm returns the labels that yield the best compactness. This compactness is returned as output.

5 - flags : This flag is used to specify how the initial centers are taken. Normally two flags are used for this : cv2.KMEANS_PP_CENTERS and cv2.KMEANS_RANDOM_CENTERS. (I didn't find any difference in their results, so I don't know where each one is suitable. For the time being, I use the second one in my examples.)

Output parameters:

1 - compactness : It is the sum of squared distances from each point to its corresponding center.

2 - labels : This is the label array (same as 'code' in the previous article) where each element is marked '0', '1', ...

3 - centers : This is the array of cluster centers.

Now let's do the same examples we did in last article. Remember, we used random number generator to generate data, so data may be different this time.

1 - Data with Only One Feature:

Below is the code, I have commented on important parts.

import numpy as np
import cv2
from matplotlib import pyplot as plt

x = np.random.randint(25,100,25)
y = np.random.randint(175,255,25)
z = np.hstack((x,y))
z = z.reshape((50,1))

# data should be np.float32 type
z = np.float32(z)

# Define criteria = ( type, max_iter = 10 , epsilon = 1.0 )
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)

# Apply KMeans
ret,labels,centers = cv2.kmeans(z,2,criteria,10,cv2.KMEANS_RANDOM_CENTERS)

# Now split the data depending on their labels
A = z[labels==0]
B = z[labels==1]

# Now plot 'A' in red, 'B' in blue, 'centers' in yellow
plt.hist(A,256,[0,256],color = 'r')
plt.hist(B,256,[0,256],color = 'b')
plt.hist(centers,32,[0,256],color = 'y')
plt.show()

Below is the output we get :

[Image: kmeans() with one feature set]

2 - Data with more than one feature :

Directly moving to the code:

import numpy as np
import cv2
from matplotlib import pyplot as plt

X = np.random.randint(25,50,(25,2))
Y = np.random.randint(60,85,(25,2))
Z = np.vstack((X,Y))

# convert to np.float32
Z = np.float32(Z)

# define criteria and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret,label,center = cv2.kmeans(Z,2,criteria,10,cv2.KMEANS_RANDOM_CENTERS)

# Now separate the data, Note the flatten()
A = Z[label.flatten()==0]
B = Z[label.flatten()==1]

# Plot the data
plt.scatter(A[:,0],A[:,1])
plt.scatter(B[:,0],B[:,1],c = 'r')
plt.scatter(center[:,0],center[:,1],s = 80,c = 'y', marker = 's')
plt.xlabel('Height'),plt.ylabel('Weight')
plt.show()

Note that, while separating the data into A and B, we used label.flatten(). This is because the 'label' returned by OpenCV is a column vector, while we actually need a plain array. In Scipy we get 'label' as a plain array, so we don't need flatten() there. To understand this better, check 'label' in both cases.

Below is the output we get :

[Image: kmeans() with two feature sets]

3 - Color Quantization :

Here we use K-Means to reduce the number of colors in an image: every pixel is treated as a sample with three features (B,G,R), the pixels are clustered into K groups, and each pixel is replaced by its cluster center.

import numpy as np
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('home.jpg')
Z = img.reshape((-1,3))

# convert to np.float32
Z = np.float32(Z)

# define criteria, number of clusters(K) and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 8
ret,label,center = cv2.kmeans(Z,K,criteria,10,cv2.KMEANS_RANDOM_CENTERS)

# Now convert back into uint8, and make original image
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((img.shape))

cv2.imshow('res2',res2)
cv2.waitKey(0)
cv2.destroyAllWindows()

Below is the output we get :

[Image: color quantization with K-Means clustering]

Summary :

So finally, we have seen how to use K-Means clustering with OpenCV. I know I haven't explained much in this article, because it is the same as the previous article; only the function has changed.

So this series on KMeans Clustering algorithm ends here.

Feel free to post your comments, feedback...

Feel free to share it with your friends....

Regards,
Abid Rahman K.

Contours - 5 : Hierarchy

Hi,

In the last few articles on contours, you have worked with several contour-related functions provided by OpenCV. But when we found the contours in an image using the cv2.findContours() function, we passed two arguments in addition to the source image: the Contour Retrieval Mode and the Contour Approximation Method. We usually passed cv2.RETR_LIST or cv2.RETR_TREE for the first argument and cv2.CHAIN_APPROX_SIMPLE for the second argument, and they worked fine. But what do they actually mean ?

Also, in the output, we got two arrays: one is our contours, and one more output which we named 'hierarchy' (please check the code in the previous articles). But we never used this hierarchy anywhere. So what is this hierarchy and what is it for ? What is its relationship with the previously mentioned function arguments ?

That is what we are going to deal in this article.

I don't know how important this topic is. Mainly because I have never worried about hierarchy and the other arguments in any of my projects, and there was no reason to. But I am sure there are people who benefit from these features, otherwise the OpenCV devs wouldn't have spent time introducing them. So whatever its use may be, let's just go through it. :)

So, what is this "hierarchy" ?

Normally we use the findContours() function to detect objects in an image, right ? Sometimes objects are in different locations. But in some cases, some shapes are inside other shapes, just like nested figures. In this case, we call the outer one the parent and the inner one the child. This way, contours in an image have some relationship to each other. And we can specify how one contour is connected to another: is it a child of some other contour, is it a parent, etc. The representation of this relationship is called the hierarchy.

Consider an example image below :

[Image: Hierarchy representation]

In this image, there are a few shapes which I have numbered from 0 to 5. 2 and 2a denote the external and internal contours of the outermost box.

Here, contours 0, 1 and 2 are external or outermost. We can say they are in hierarchy-0, or simply that they are at the same hierarchy level.

Next comes contour 2a. It can be considered a child of contour 2 (or, the other way around, contour 2 is the parent of contour 2a). So let it be in hierarchy-1. Similarly contour 3 is a child of contour 2 and it comes in the next hierarchy. Finally contours 4 and 5 are the children of 3a, and they come in the last hierarchy level. From the way I numbered the boxes, I would say contour 4 is the first child of contour 3a.

I mentioned these things so that you understand terms like "same hierarchy level", "external contour", "child contour", "parent contour", "first child" etc. Now let's get into OpenCV.

Hierarchy Representation in OpenCV :

So each contour has its own information regarding which hierarchy level it is in, who its child is, who its parent is, etc. OpenCV represents it as an array of four values : [Next, Previous, First_Child, Parent]
"Next denotes the next contour at the same hierarchical level."
For example, take contour 0 in our picture. Which is the next contour at its level ? It is contour 1. So simply put Next = 1. Similarly for contour 1, the next is contour 2, so Next = 2.

What about contour 2? There is no next contour in same level. So simply, put it as -1.

What about contour 4? It is at the same level as contour 5. So its next contour is contour 5.
"Previous denotes the previous contour at the same hierarchical level."
It is the same as above. The previous contour of contour 1 is contour 0 at the same level. Similarly for contour 2, it is contour 1. And for contour 0, there is no previous, so put it as -1.
"First_Child denotes its first child contour."
I think there is no need of any explanation. For contour 2, child is contour 2a. So it gets the corresponding index value of contour 2a.

What about contour 3a? It has two children. But we take only first child. And it is contour 4. So First_Child = 4 for contour 3a.
"Parent denotes index of its parent contour"
It is just opposite of First_Child. Both for contour 4 and 5, parent contour is contour 3a. For 3a, it is contour 3 and so on.

If there is no child or parent, that field is taken as -1.
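
To see these values yourself, you can just print the hierarchy array returned by cv2.findContours(). A small sketch (the filename is hypothetical, and it assumes the same two-value findContours() signature used throughout these articles):

import cv2

img = cv2.imread('shapes.png',0)      # hypothetical image like the one above
ret,thresh = cv2.threshold(img,127,255,0)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

# hierarchy has shape (1, number_of_contours, 4)
for i,(nxt,prev,child,parent) in enumerate(hierarchy[0]):
    print("contour %d : Next=%d Previous=%d First_Child=%d Parent=%d" % (i,nxt,prev,child,parent))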

Now that we know the hierarchy style used in OpenCV, we can check the Contour Retrieval Modes in OpenCV with the help of the same image given above. ie, what do flags like cv2.RETR_LIST, cv2.RETR_TREE, cv2.RETR_CCOMP, cv2.RETR_EXTERNAL etc mean?

Contour Retrieval Mode :

This is the second argument of the cv2.findContours() function. Let's understand each flag one by one.

cv2.RETR_LIST :

This is the simplest of the four flags (from an explanation point of view). It simply retrieves all the contours, but doesn't create any parent-child relationship. "Parents and kids are equal under this rule, and they are just contours", ie they all belong to the same hierarchy level.

So here, the 3rd and 4th terms in the hierarchy array are always -1. But obviously, the Next and Previous terms will have their corresponding values. Just check it yourself and verify.

Below is the result I got; each row is the hierarchy detail of the corresponding contour. For example, the first row corresponds to contour 0. The next contour is contour 1, so Next = 1. There is no previous contour, so Previous = -1. And the remaining two, as told before, are -1.

>>> hierarchy
array([[[ 1, -1, -1, -1],
        [ 2,  0, -1, -1],
        [ 3,  1, -1, -1],
        [ 4,  2, -1, -1],
        [ 5,  3, -1, -1],
        [ 6,  4, -1, -1],
        [ 7,  5, -1, -1],
        [-1,  6, -1, -1]]])

This is a good choice to use in your code if you are not using any hierarchy features.

cv2.RETR_EXTERNAL :

If you use this flag, it returns only the extreme outer contours. All child contours are left behind. "We can say, under this law, only the eldest in every family is taken care of. It doesn't care about the other members of the family :)".

So, in our image, how many extreme outer contours are there, ie at hierarchy-0 level? Only 3, ie contours 0, 1 and 2, right? Now try to find the contours using this flag. Here also, the values given to each element are the same as above. Compare it with the result above. Below is what I got :
>>> hierarchy
array([[[ 1, -1, -1, -1],
        [ 2,  0, -1, -1],
        [-1,  1, -1, -1]]])

You can use this flag if you want to extract only the outer contours. It might be useful in some cases.

cv2.RETR_CCOMP :

This flag retrieves all the contours and arranges them into a 2-level hierarchy: the external contours of an object (ie its boundary) are placed in hierarchy-1, and the contours of holes inside the object (if any) are placed in hierarchy-2. If there is any object inside a hole, its contour is placed again in hierarchy-1 only, and its hole in hierarchy-2, and so on.

Just consider the image of a "big white zero" on a black background. Outer circle of zero belongs to first hierarchy, and inner circle of zero belongs to second hierarchy.

We can explain it with a simple image. Here I have labelled the order of the contours in red and the hierarchy they belong to in green (either 1 or 2). The order is the same as the order in which OpenCV detects the contours.

[Image: contours numbered in red and their hierarchy level in green, for cv2.RETR_CCOMP]

So consider the first contour, ie contour-0. It is in hierarchy-1. It has two holes, contours 1 and 2, and they belong to hierarchy-2. So for contour-0, the next contour at the same hierarchy level is contour-3, and there is no previous one. Its first child is contour-1 in hierarchy-2. It has no parent, because it is in hierarchy-1. So its hierarchy array is [3,-1,1,-1].

Now take contour-1. It is in hierarchy-2. The next one at the same hierarchy level (under the parenthood of contour-0) is contour-2. There is no previous one. It has no child, but its parent is contour-0. So the array is [2,-1,-1,0].

Similarly contour-2 : It is in hierarchy-2. There is no next contour at the same hierarchy level under contour-0, so Next is -1. Previous is contour-1. No child, and its parent is contour-0. So the array is [-1,1,-1,0].

Contour - 3 : Next in hierarchy-1 is contour-5. Previous is contour-0. Child is contour-4 and no parent. So array is [5,0,4,-1].

Contour - 4 : It is in hierarchy 2 under contour-3 and it has no sibling. So no next, no previous, no child, parent is contour-3. So array is [-1,-1,-1,3].

The remaining ones you can fill in yourself. This is the final answer I got:

>>> hierarchy
array([[[ 3, -1,  1, -1],
        [ 2, -1, -1,  0],
        [-1,  1, -1,  0],
        [ 5,  0,  4, -1],
        [-1, -1, -1,  3],
        [ 7,  3,  6, -1],
        [-1, -1, -1,  5],
        [ 8,  5, -1, -1],
        [-1,  7, -1, -1]]])

So where can we apply this ? I don't have a good application right now. One application would be in OCR. Those who have checked my article "Simple Digit Recognition OCR in OpenCV-Python" will have noticed that I used area as a constraint to remove the contours of holes inside numbers like 8, 9, 0, 6 etc. I found that area threshold by checking a lot of values. Instead, I should have used this feature to filter out the holes inside the numbers. (To be honest, I had no idea about hierarchy when I wrote that code.)
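
Just as a hedged sketch of that idea (not the code from the OCR article): with cv2.RETR_CCOMP, holes get a Parent index other than -1, so keeping only the contours whose Parent field is -1 leaves just the outer boundaries of the digits.

import cv2

img = cv2.imread('digits.png',0)      # hypothetical image of printed digits
ret,thresh = cv2.threshold(img,127,255,0)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_CCOMP,cv2.CHAIN_APPROX_SIMPLE)

# keep only contours whose Parent field (4th value) is -1, ie the outer boundaries
outer = [cnt for cnt,h in zip(contours,hierarchy[0]) if h[3] == -1]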

UPDATE : You can find a simple demo of practical application of cv2.RETR_CCOMP in this SOF link : http://stackoverflow.com/a/14279746/1134940

cv2.RETR_TREE :

And this is the final guy, Mr. Perfect. It retrieves all the contours and creates a full family hierarchy list. "It even tells who is the grandpa, father, son, grandson and even beyond...".

For example, I take the above image, rewrite the code for cv2.RETR_TREE, reorder the contours as per the result given by OpenCV and analyze it. Again, red letters give the contour number and green letters give the hierarchy order.

[Image: contours numbered in red and their hierarchy order in green, for cv2.RETR_TREE]

Take contour-0 : It is in hierarchy-0. The next contour at the same hierarchy level is contour-7. There are no previous contours. Its child is contour-1, and it has no parent. So the array is [7,-1,1,-1].

Take contour-1 : It is in hierarchy-1. There is no contour at the same level and no previous one. Its child is contour-2 and its parent is contour-0. So the array is [-1,-1,2,0].

And for the remaining ones, try yourself. Below is the full answer:

>>> hierarchy
array([[[ 7, -1,  1, -1],
        [-1, -1,  2,  0],
        [-1, -1,  3,  1],
        [-1, -1,  4,  2],
        [-1, -1,  5,  3],
        [ 6, -1, -1,  4],
        [-1,  5, -1,  4],
        [ 8,  0, -1, -1],
        [-1,  7, -1, -1]]])

I am not sure where you can use it.

So that is what Contour Retrieval Mode is about.

Next we will deal with third argument in cv2.findContours(), ie Contour Approximation method.

Contour Approximation Method

There are 3 flags under this category, but I am discussing only the first two - cv2.CHAIN_APPROX_NONE and cv2.CHAIN_APPROX_SIMPLE.

The first one finds all the points on the contour or boundary. But do we actually need all the points? For example, you found the contour of a straight line. Do you need all the points on the line to represent that line? No, we need just the two end points of that line. This is what the second flag does: it removes all redundant points and compresses the contour.

It can be easily visualized as follows. Take an image with an upright rectangle in it. Find the contours using both flags (take the second argument as cv2.RETR_LIST). First compare the number of points in each contour, as in the sketch below. Then plot each point of both contours on the rectangle and compare the results. See the images after the sketch :
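
A minimal sketch of that experiment (assuming 'rectangle.png' is a white upright rectangle on a black background):

import cv2

img = cv2.imread('rectangle.png',0)   # hypothetical test image
ret,thresh = cv2.threshold(img,127,255,0)

# find the same contour with both approximation methods
contours1,h1 = cv2.findContours(thresh.copy(),cv2.RETR_LIST,cv2.CHAIN_APPROX_NONE)
contours2,h2 = cv2.findContours(thresh.copy(),cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)

print("CHAIN_APPROX_NONE   : %d points" % len(contours1[0]))
print("CHAIN_APPROX_SIMPLE : %d points" % len(contours2[0]))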

[Image: contour points plotted using cv2.CHAIN_APPROX_SIMPLE]

[Image: contour points plotted using cv2.CHAIN_APPROX_NONE]

In the first case, you can see a blue boundary. That is because all the plotted points are touching each other; actually they are distinct points, and there are 734 points in the array. But the second method has only four points, at the four corners. That is really a big difference. The second method is a good improvement, both in memory consumption and performance.

**********************************************************************************

So I think you now have a simple, intuitive understanding of the concept of hierarchy in OpenCV. As I mentioned at the beginning of this article, I don't know how important this topic is, since I have never used it. If I find any application using this hierarchy, I will provide the links here.

So, I hope you enjoyed this article. Don't forget to share it with your friends !!!

Regards,

Abid Rahman K.


Contours - 4 : Ultimate

Hi,

This is the fourth and final article on Contours. It is the continuation of the articles below:

1 - Contours - 1 : Getting Started
2 - Contours - 2 : Brotherhood
3 - Contours - 3 : Extraction

In this article we will deal with PointPolygonTest and Convexity Defects.

1 - PointPolygonTest :

This function finds the shortest distance between a point in the image and a contour. It returns a distance which is negative when the point is outside the contour, positive when the point is inside, and zero if the point is on the contour.

For example, we can check the point (50,50) as follows:

dist = cv2.pointPolygonTest(cnt,(50,50),True)

In the function, the third argument is "measureDist". If it is True, it finds the signed distance. If False, it finds only whether the point is inside, outside or on the contour.

And if you don't need the distance, make sure the third argument is False, because computing the distance is a time consuming process. Making it False gives about a 2-3X performance boost.
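
For instance, the faster inside/outside check looks like this; it returns +1, -1 or 0 instead of an actual distance:

# +1 : inside, -1 : outside, 0 : on the contour
retval = cv2.pointPolygonTest(cnt,(50,50),False)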

I have written another article on how to speed up programs using Numpy techniques where I have taken PointPolygonTest as the test case.

Visit : Fast Array Manipulation in Numpy

2 - Convexity Defects :

I have already explained the convex hull. Any deviation of the object from this hull can be considered a convexity defect. I have explained it with the help of images in the second part of this series (please read it).

OpenCV comes with a ready-made function for this, cv2.convexityDefects(). Let's see how we can use it.

hull = cv2.convexHull(cnt,returnPoints = False)
defects = cv2.convexityDefects(cnt,hull)

Notice that "returnPoints = False" in first line to get indices of the contour points, because input to convexityDefects() should be these indices, not original points.

It returns a defects structure, an array of four values - [ start point, end point, farthest point, approximate distance to farthest point ]

We can visualize it using an image. We draw a line joining start point and end point, then draw a circle at the farthest point.

Now we take each row of the defects, extract the four values from it, draw a line using the first two values, then draw a circle at the point given by the third value. Remember, the first three values returned are indices into cnt, so we have to take the actual points from cnt.

for i in range(defects.shape[0]):
    s,e,f,d = defects[i,0]
    start = tuple(cnt[s][0])      # start point of the defect
    end = tuple(cnt[e][0])        # end point of the defect
    far = tuple(cnt[f][0])        # farthest point from the hull
    cv2.line(img,start,end,[0,255,0],2)
    cv2.circle(img,far,5,[0,0,255],-1)

And below are the various results :

[Images: convexity defects drawn on different shapes - green lines join the hull points and red dots mark the farthest points]

So these are the two functions I wanted to discuss. With this article, the series on Contours is over.

I would like to hear your feedback, comments, suggestions etc.

With Regards,
ARK

Contours - 2 : Brotherhood

Hi,

This article is the direct continuation of this article : Contours - 1: Getting Started

In this article, we will learn the usage of several functions closely related to contours. Once these functions are learnt, we can find almost all features of contours.

1 - Image Moments

Image moments help you to calculate some features like center of mass of the object, area of the object etc. Check out the wikipedia page : http://en.wikipedia.org/wiki/Image_moment

The function cv2.moments() gives a dictionary of moment values calculated. See below :

moments = cv2.moments(cnt)

If you print moments, you get a dictionary:

{'mu02': 10888082.359906793, 'mu03': 0.005234025965704581, 'm11': 368666693.125,
'nu02': 0.10815497152071127, 'm12': 69763579350.98334, 'mu21': 101313.30416250229, 'mu20': 6674463.831166983,
'nu20': 0.06629968636479547, 'm30': 84692116672.95001, 'nu21': 1.0046975468372928e-05, 'mu11': -1980114.5675549507,
'mu12': -33122544.260385513, 'nu11': -0.019669141689288665, 'nu12': -0.0032846761082870463, 'm02': 352044973.5833333,
'm03': 68983799276.15001, 'm00': 10033.5, 'm01': 1850134.5, 'mu30': 8633090.369003296, 'nu30': 0.0008561209988226333,
'm10': 2010061.8333333333, 'm20': 409360323.5833333, 'm21': 74691021944.88333}

Now you can do calculations using these dictionary keys. For example, to find the area of the object:

area = moments['m00']
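
As another example, the center of mass mentioned above comes from the first-order moments (this is the standard formula, nothing specific to this image):

# centroid (center of mass) of the object
cx = int(moments['m10']/moments['m00'])
cy = int(moments['m01']/moments['m00'])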

We will learn more in the next article.

2 - Contour Area:

Area of contour is same as number of pixels inside the contour. It can be found out using cv2.contourArea() function.

area = cv2.contourArea(cnt)

3 - Contour Perimeter:

It is also called arc length. It can be found out using cv2.arcLength() function.

perimeter = cv2.arcLength(cnt,True)

4 - Contour Approximation :

Contour approximation removes small curves, thereby approximating the contour more closely to straight lines. This is done using the cv2.approxPolyDP() function.

[Image: a square with some defects along its boundary]

To understand this, suppose you are trying to find a square in an image, but due to some problems in the image, you got only something like the shape shown above.


So when you try to find the contours, you will get all those curves as well. But with contour approximation, you can avoid all those problems and approximate it to a perfect square.

Check the image below. The red region is the actual contour area, while the green line shows the approximated contour. You can see that the approximated contour is a perfect rectangle.

approx = cv2.approxPolyDP(cnt,0.1*cv2.arcLength(cnt,True),True)

[Image: approximation with epsilon = 10% of arc length]

It also reduces the number of points to operate on. In the original contour there were 210 points, while the approximated contour has only four points, which correspond to the four corners of the rectangle.

Here, the second argument is called epsilon, which is the maximum distance from the contour to the approximated contour. It is an accuracy parameter. In the above case, I have taken it as 10% of the arc length.


[Image: approximation with epsilon = 1% of arc length]




What will happen if you take it as 1% of the arc length? Check the image above. The approximation now detects the defects as well, and the number of points in the approximated contour is 22.

So a wise selection of epsilon is needed and it all depends on your application.





5 - Convex Hull :

[Image: convex hull of the square]

Once the approximation is over, the convex hull is next. This will look similar to contour approximation, but it is not the same. Here, the cv2.convexHull() function checks a curve for convexity defects and corrects it. Generally speaking, convex curves are curves which are always bulged out, or at least flat. If a curve is bulged inwards, that is called a convexity defect. For example, in the above case, we can see there are some inward curves in that square. Those are the convexity defects. If we find the convex hull for this, we get the image shown above.

(Actually this image is the same as the one above, because both results are the same here. But that doesn't mean approximation is the convex hull, although a contour can be approximated to get a convex hull by selecting a suitable epsilon.)

[Image: convex hull and convexity defects of a hand (from the OpenCV documentation)]

Still, for those who didn't understand convex hull, the OpenCV documentation has a nice picture which demonstrates convex hull and convexity defects. As you can see, the black curve (the hand) is the original contour. The red curve surrounding it is the convex hull, and the convexity defects are marked at the gaps between the fingers, which are the local maximum deviations of the hull from the contour.

Syntax :

hull = cv2.convexHull(points[, hull[, clockwise[, returnPoints]]]) 

points : the contour we pass in.
hull : the output; normally we omit it.
clockwise : orientation flag. If it is True, the output convex hull is oriented clockwise; otherwise, it is oriented counter-clockwise. (Actually I haven't used this flag anywhere.)

So to get a convex hull as in the above image, the following is sufficient:

hull = cv2.convexHull(cnt)

If we print hull, we get : [[[234 202]], [[ 51 202]], [[ 51 79]], [[234 79]]], where each value denotes a corner of the rectangle, ie the coordinates of the corners of the rectangle.

To draw the convex hull, you draw it with cv2.drawContours(), just as we drew contours before.

But there is a fourth argument, returnPoints, which is True by default. Then it returns the coordinates. If it is False, it returns the indices of the convex hull points with respect to the contour.

For example, execute the following :

hull = cv2.convexHull(cnt,returnPoints = False)

Now if we print hull, we get : [[129],[ 67],[ 0],[142]]. If you check the corresponding values in cnt, they will be the same as the coordinates we already found. For example, cnt[129] = [[234, 202]], and so on.

But why would we need such a feature ? It is needed when we find the convexity defects: we need to pass these indices to the cv2.convexityDefects() function. We will deal with it in another article, but keep this in mind.

6 - Is contour Convex:

There is a function to check if a curve is convex or not, cv2.isContourConvex(). It just returns True or False. Not a big deal.

k = cv2.isContourConvex(cnt)

7 - Bounding Rectangle :

There are two types of bounding rectangles.

1) Just an upright bounding rectangle which covers the full object. It doesn't consider the rotation of the object.

Let (x,y) be the starting coordinate of rectangle, (w,h) be its width and height.

Then we can find and draw the bounding rect as follows (Green color). See result below:

x,y,w,h = cv2.boundingRect(cnt)
cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2)

2) Rotated rectangle, where the bounding rectangle is drawn with minimum area, so it considers the rotation also. The function used is cv2.minAreaRect(). It returns a Box2D structure - (x,y),(w,h),theta.

rect = cv2.minAreaRect(cnt)
box = cv2.cv.BoxPoints(rect)
box = np.int0(box)
cv2.drawContours(im,[box],0,(0,0,255),2)

(x,y) - center point of the box
(w,h) - width and height of the box
theta - angle of rotation
[Image: bounding rectangles - green is the upright rect, red is the rotated rect]

But to draw the rotated rectangle, we need its corner points. For this the cv2.cv.BoxPoints() function is used.

Both the rectangles are shown in a single image. Green rectangle shows the normal bounding rect. Red rectangle is the rotated rect.

Area of normal bounding rect = 15972

Area of rotated rect = 8853



8 - Minimum Enclosing Circle :

[Image: circumcircle (minimum enclosing circle)]

Next we find the circumcircle of an object using the function cv2.minEnclosingCircle(). It is a circle which completely covers the object with minimum area.

You can see the result in the image above.




(x,y),radius = cv2.minEnclosingCircle(cnt)
center = (int(x),int(y))
radius = int(radius)
cv2.circle(im,center,radius,(0,255,0),2)

9 - Fit Ellipse :

Next one is to fit an ellipse to an object. It returns the rotated rectangle in which the ellipse is inscribed.

ellipse = cv2.fitEllipse(cnt)
cv2.ellipse(im,ellipse,(0,255,0),2)

[Image: fitted ellipse]

------------------------------------------------------------------------------------------------------------

So, these are some major functions related to Contours.

There are some other functions like cv2.pointPolygonTest(), cv2.convexityDefects() etc, which we will deal with in another article.

Hope you like this,

Regards,
ARK

Contours - 1 : Getting Started

Hi, this article is a tutorial which tries to cover all the relevant functions in OpenCV dealing with Structural Analysis and Shape Descriptors, which are mainly related to contours.

[Image: a simple binary image used to find contours]

Contours can be explained simply as a curve joining all the continuous points (along the boundary) having the same color or intensity. For example, consider the image above.

Assuming it is a binary image, we can say its contour is the curve joining all the boundary white points.

So if we find a contour in a binary image, we are finding the boundaries of objects in an image. That is why, OpenCV doc says, "The contours are a useful tool for shape analysis and object detection and recognition".

Finding Contours:

We start with a simple image as above. First we find the contours.

import numpy as np
import cv2

im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

Points to remember :
  1. For better accuracy, use binary images. So before finding contours, apply thresholding or Canny edge detection.
  2. The findContours function modifies the source image, so 'thresh' before and after finding contours are different images. So if you want 'thresh' as such after finding contours, store it in some other variable.
  3. In OpenCV, finding contours is like finding a white object on a black background. So remember, the object to be found should be white and the background should be black.
What is structure of resulting contours?

The result "contours" is a Python list, where it contains all objects boundary points as separate lists. So to find number of objects, find length of list "contours", where in this case, it is one. Only one object. So we take it as "cnt".

>>> len(contours)
1
>>> cnt = contours[0]
>>> len(cnt)
244

Here, the number of points in cnt is 244. What do these points denote? They are the boundary points of the object.

But does it include all of the boundary? Not exactly. The points are selected such that contours can be drawn as straight lines joining these points. So, if the object is a horizontal or vertical line, only the end points are stored, ie the length of cnt is 2. If the object is a rectangle, only the 4 vertices are stored.

[Image: contour points stored for a rectangle]

In our image, there are no direct horizontal or vertical lines, so most of the points will be stored. To visualize it as above, you can draw circles for each value in cnt, as in the sketch below.
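
A quick sketch of that visualization, using the im and cnt from the code above (the radius and color here are arbitrary):

# draw a small circle at every stored contour point
for point in cnt:
    x,y = point[0]
    cv2.circle(im,(int(x),int(y)),3,(0,0,255),-1)

cv2.imshow('contour points',im)
cv2.waitKey(0)
cv2.destroyAllWindows()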

How to draw the contours?

For this, there is a function, cv2.drawContours(). Let's try it:

cv2.drawContours(im,contours,-1,(0,255,0),3)

This draws a 3-pixel wide green outline of the object. If you want to fill the object with a particular color, pass a value of -1 as the line thickness.

cv2.drawContours(im,contours,-1,(0,255,0),-1)

[Image: contours drawn filled]

[Image: contours drawn 3 px wide]

Also, note the third argument of cv2.drawContours(). Suppose you want to draw only the fourth contour (not in this image); then the third argument should be set to 3. If it is -1, all contours are drawn.

Now you want to draw "cnt" only. It can be done as follows:

cv2.drawContours(im,[cnt],0,(255,0,0),-1)

Note the square brackets around "cnt". The third argument is set to 0, which means only that particular contour is drawn.

Now we end with one more important concept, called the mask.

Mask : What and Why?

A mask can be considered a binary image where only our desired area is white and everything else is blacked out. Masks are used to isolate a part of an image and do operations on that part only, without affecting or operating on the other parts of the image. This can also be considered an ROI (Region of Interest), which can have any shape.

[Image: several shapes of different colors]

Consider a scenario where you are asked to find the average color of each shape in the image above. So simply threshold the image to binarize it (please don't ask me if the white ball can be detected using thresholding; it is just an example). Find the contours in the binary image, then for each contour create a mask image of that shape, ie if the first ball is considered, the region of that ball in the mask image will be white, while all other shapes and the background are blacked out. Then you can find the mean color of that shape only. Do the same for every shape.

(OK, just for this case, I will do it in this image, not on our original image at the beginning)

First we find the contours as we did before. (Adjust the threshold value to detect all). Now we will see how to do it:

First create a mask image where all elements are zero (ie just a black image) with size same as source, but single channel (ie grayscale).

Then for each contour, we draw it on the mask image filled with white color. Then we find the mean using the cv2.mean() function, with our mask as the operating mask.

for h,cnt in enumerate(contours):
    mask = np.zeros(imgray.shape,np.uint8)     # black single-channel image
    cv2.drawContours(mask,[cnt],0,255,-1)      # fill the current contour with white
    mean = cv2.mean(im,mask = mask)            # mean color inside the mask

[Image: mask images for each shape]

See the result above. (All the resulting images are animated into a single image.)

I think it is sufficient for now. Keep these three in mind, ie Find Contours, Draw Contours and Mask Image. Now we can find some contour features in next post.

Regards,
ARK

Sudoku Solver - Part 2

Hi,

This is the continuation of the article : Sudoku Solver - Part 1

So we start implementing here.

Load the image :

Below is the image I used to work with.

[Image: original sudoku image]
So, first we import necessary libraries.

import cv2
import numpy as np

Then we load the image, and convert to grayscale.

img =  cv2.imread('sudoku.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

Image Pre-processing :

I have done just noise removal and thresholding. And it is working. So I haven't done anything extra.

gray = cv2.GaussianBlur(gray,(5,5),0)
thresh = cv2.adaptiveThreshold(gray,255,1,1,11,2)

Below is the result :

[Image: result of adaptive thresholding]
Now two questions may arise :

1) What is the need of smoothing here?
2) Why Adaptive Thresholding ? Why not normal Thresholding using cv2.threshold()  ? 

Find the answers here : Some Common Questions

Find Sudoku Square and Corners :

Now we find the sudoku border. For that, we make a practical assumption : the biggest square in the image should be the Sudoku square. In short, the image should be taken close to the Sudoku, as you can see in the input image of the demo.

So a couple of things follow from this : the image should have only one square, the Sudoku square, or if not, the Sudoku square must be the biggest. If this condition is not true, the method fails.

That is because we find the sudoku square by finding the biggest blob (an independent particle) in the image. So if the biggest blob is something other than the Sudoku, that blob is processed instead. So keep an eye on this.

We start by finding contours in the thresholded image:

contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

Now we find the biggest blob, ie blob with max. area.

For this, first we find the area of each blob. Then we filter them by area. We consider a blob for the next processing step only if its area is greater than a particular value (here, 100). If so, we approximate the contour. This removes unwanted coordinate values in the contour and keeps only the corners. So if the number of corners equals four, it is a square (actually, a rectangle). If it has the maximum area among all detected squares, it is our Sudoku square.

biggest = None
max_area = 0
for i in contours:
    area = cv2.contourArea(i)
    if area > 100:
        peri = cv2.arcLength(i,True)
        approx = cv2.approxPolyDP(i,0.02*peri,True)
        if area > max_area and len(approx)==4:
            biggest = approx
            max_area = area

To help you see the difference between the original contour and the approximated contour, I have drawn both on the image (using the cv2.drawContours() function). The red line is the original contour, the green line is the approximated contour, and the corners are marked with blue circles.

[Image: border and corners detected]
Look at the top edge of the sudoku. The original contour (red line) grazes the edge of the square and is curved. The approximated contour (green line) makes it into a straight line.

Now, a simple question may arise: what is the benefit of filtering contours by area? What is the need of removing them? In simple words, it is done to speed up the program. Although it may give you only a small gain (in the range of a few milliseconds), even that is good for those who want to implement it in real time. For more explanation, visit : Some Common Questions

Summary :

So, in this section, we have found the boundary of sudoku. Next part is the image transformation. I will explain it in next post.

Until then, I would like to know your feedback, doubts etc.

With Regards
ARK

Sudoku Solver - Some Common Questions

Hi,

This is a post to answer some common questions that can arise while dealing with the Sudoku Solver.

Question 1 : What is the need of Smoothing?

Answer : You will understand its need if you see the result without applying smoothing. Below is the result of adaptive thresholding without smoothing.

[Image: result of adaptive threshold without smoothing]

And below is the result after applying smoothing:

[Image: result of adaptive threshold after smoothing]
Compare the results. There is a lot of noise in the first case, so we would have to remove it in the next step, which is an extra task.

I just compared number of independent objects found (ie contours ) in both the cases. Below is the result:

First without smoothing:
>>> len(contours)
3109

Next after smoothing:
>>> len(contours)
450

See the difference. Without smoothing, we are dealing with about 7 times the number of objects found after smoothing. So which one is better?

To know different Smoothing Techniques : Smoothing Techniques in OpenCV

Question 2 : Why adaptive thresholding ? Why not normal thresholding ?

Answer : You will understand the reason when we compare their results.

Below is the result, I got using Adaptive Threshold :


[Image: result of adaptive threshold]
Now we apply normal thresholding with a value of 96 (96 is the auto threshold value generated by GIMP):

[Image: normal thresholding with value = 96]
Now see the difference. It is because normal thresholding thresholds the image taken as a whole, while adaptive thresholding thresholds the image using an optimum value for each local neighbourhood.
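
If you want to reproduce the comparison yourself, a minimal sketch is below (the global threshold value 96 is the GIMP value mentioned above; THRESH_BINARY_INV is used so both results have white lines on black, like the adaptive output):

import cv2

gray = cv2.imread('sudoku.jpg',0)
gray = cv2.GaussianBlur(gray,(5,5),0)

# global threshold : one value for the whole image
ret,global_thresh = cv2.threshold(gray,96,255,cv2.THRESH_BINARY_INV)

# adaptive threshold : value computed per local neighbourhood (same call as in Part 2)
adaptive_thresh = cv2.adaptiveThreshold(gray,255,1,1,11,2)

cv2.imshow('global',global_thresh)
cv2.imshow('adaptive',adaptive_thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()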

To know more about thresholding techniques :

Question 3 : What is the benefit of filtering contours with respect to area?

Answer : 1) To avoid small noise blobs which have an area less than the prescribed value and which we are sure can't be the square.

2) It also improves the speed a little bit.

I will show you some performance comparisons below:

A) We have already calculated the number of objects (contours) found, which is 450. Without any area filter, all 450 contours are processed. For that, you can just change the code as below (the area filter removed):

biggest = None
max_area = 0
for i in contours:
    area = cv2.contourArea(i)
    peri = cv2.arcLength(i,True)
    approx = cv2.approxPolyDP(i,0.02*peri,True)
    if area > max_area and len(approx)==4:
        biggest = approx
        max_area = area

It checks all the 450 contours for maximum area and it takes an average of 30 ms.

B) Now we implement a filter for an area of 100, as explained in the original code. Then it checks only about 100 contours and takes an average of only 15 ms. So we get a 2X speedup.

C)  Now change the value from 100 to 1/4 of the image size. Check the code below:

min_size = thresh.size/4
biggest = None
max_area = 0
for i in contours:
    area = cv2.contourArea(i)
    if area > min_size:
        peri = cv2.arcLength(i,True)
        approx = cv2.approxPolyDP(i,0.02*peri,True)
        if area > max_area and len(approx)==4:
            biggest = approx
            max_area = area

Now it checks only one contour, our square, and takes an average of only 3 ms, ie a 10X speedup.

Now, although time difference is only 27 ms, it will be highly useful if we implement it in real time.

So, it all depends on how you use it.

Skeletonization using OpenCV-Python

I see people asking for a skeletonization algorithm very frequently. At first, I had no idea about it. But today I saw a blog which demonstrates a simple method to do this. The code was in C++, so I would like to convert it to Python here.

What is Skeletonization?


[Image: skeletonization example]

The answer is right there in the term. Simply put, it makes a thick blob very thin, maybe one pixel wide. Visit the wikipedia page for more details : Topological Skeleton

Code : 

import cv2
import numpy as np

img = cv2.imread('sofsk.png',0)
size = np.size(img)
skel = np.zeros(img.shape,np.uint8)

ret,img = cv2.threshold(img,127,255,0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3))
done = False

while not done:
    eroded = cv2.erode(img,element)
    temp = cv2.dilate(eroded,element)
    temp = cv2.subtract(img,temp)
    skel = cv2.bitwise_or(skel,temp)
    img = eroded.copy()

    zeros = size - cv2.countNonZero(img)
    if zeros==size:
        done = True

cv2.imshow("skel",skel)
cv2.waitKey(0)
cv2.destroyAllWindows()

Below is the result I got:

[Images: input image and its skeleton]

References : 

1) http://felix.abecassis.me/2011/09/opencv-morphological-skeleton/

Solve this Puzzle in Python

Hi,

I would like to present you a very interesting puzzle. It was asked by my maths teacher in high school (not to be solved in Python, but with pen and paper). I could solve it back then (though my "method" was not good, and the way I found the answer is still a "mystery" to me). Even a few days back, I admit I couldn't find a method to solve this except brute forcing.

So I decided to apply brute force to this puzzle in Python, and I asked this question on stackoverflow.com. But I was really surprised by the answers provided there. I would like to share this with you.

This is the puzzle :


A merchant has a 40 kg weight which he used in his shop. Once, it fell from his hands and was broken into 4 pieces. But surprisingly, now he can weigh any weight between 1 kg to 40 kg with the combination of these 4 pieces.
So question is, what are weights of those 4 pieces?
Try to solve this puzzle in Python. If you can't, find its answer here.
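
If you want to try the brute-force route yourself before looking up the answer, below is a simple sketch. It assumes the pieces have distinct whole-kilogram weights, and that on a balance each piece can sit on the item's pan, on the opposite pan, or stay unused:

from itertools import combinations, product

def can_weigh_all(weights):
    # each weight contributes -w (item's pan), 0 (unused) or +w (opposite pan)
    reachable = set()
    for signs in product((-1,0,1),repeat=len(weights)):
        reachable.add(sum(s*w for s,w in zip(signs,weights)))
    return all(n in reachable for n in range(1,41))

for pieces in combinations(range(1,41),4):
    if sum(pieces) == 40 and can_weigh_all(pieces):
        print(pieces)
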
With Regards,
ARK.