## 13. Color image segmentation

Image segmentation is the process of extracting a *region of interest* (ROI) from an image for further processing. Since segmentation is often not feasible in grayscale, when the ROI does not differ in brightness from its surroundings, one feature this process can exploit instead is the ROI's distinctive true color. Color-based segmentation has been widely used in practical applications such as remote sensing, microscopy, and object recognition.

**Parametric vs. Non-Parametric Probability Distribution Estimation**

Color-based segmentation can be performed by finding the probability that a pixel belongs to a color distribution of interest. *Parametric segmentation* fits a Gaussian probability distribution function (PDF) to the r and g chromaticity values of the ROI and then uses it to score every pixel of the image. *Non-parametric segmentation*, on the other hand, takes the 2D histogram of the ROI's chromaticities and segments the image by histogram backprojection.

*Parametric Segmentation*

Suppose we have an image of a 3D object with a single color, and we crop a monochromatic region of interest from the picture (Figure 1).

Figure 1. A 3D object with a single color. A. a red mug; B. region of interest (ROI).

We transform the RGB values of the image into *normalized chromaticity coordinates* (NCC):

r = R / (R + G + B),  g = G / (R + G + B),  b = B / (R + G + B)

Note that r + g + b = 1, so b is dependent on r and g (b = 1 − r − g) and the chromaticity is fully described by r and g alone. We implement this process in Scilab as follows:

```scilab
R = ROI(:, :, 1);
G = ROI(:, :, 2);
B = ROI(:, :, 3);
I = R + G + B;
I(find(I == 0)) = 1000000;  // avoid division by zero
r = R ./ I;
g = G ./ I;
```
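The same conversion can be sketched in Python/NumPy; the `to_ncc` helper below is an illustrative assumption, not part of the original Scilab code:

```python
import numpy as np

def to_ncc(img):
    """Convert an RGB image (H x W x 3 array) to normalized
    chromaticity coordinates r and g. Hypothetical helper mirroring
    the Scilab snippet above."""
    img = img.astype(float)
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    I = R + G + B
    I[I == 0] = 1e6          # avoid division by zero, as in the Scilab code
    return R / I, G / I

# quick check: a pure-red pixel maps to (r, g) = (1, 0)
r, g = to_ncc(np.array([[[255, 0, 0]]]))
```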

The probability p(r) tells how likely a pixel with chromaticity r belongs to the ROI, modeled as a Gaussian:

p(r) = [1 / (σ_r √(2π))] exp(−(r − μ_r)² / (2σ_r²))

and analogously for p(g). We obtain the required parameters, the mean and standard deviation, from the ROI's r and g values:

```scilab
mu_r = mean(r);
mu_g = mean(g);
sigma_r = stdev(r);
sigma_g = stdev(g);
```

Using these values, we segment the entire image by computing the probabilities of each pixel's r and g values:

```scilab
im = imread('3D_object.png');
R = im(:, :, 1);
G = im(:, :, 2);
B = im(:, :, 3);
I = R + G + B;
I(find(I == 0)) = 1000000;  // avoid division by zero
r = R ./ I;
g = G ./ I;
Pr = (1/(sigma_r*sqrt(2*%pi))) * exp(-((r - mu_r).^2) / (2*sigma_r^2));
Pg = (1/(sigma_g*sqrt(2*%pi))) * exp(-((g - mu_g).^2) / (2*sigma_g^2));
```

The joint probability is taken as the product of *p(r)* and *p(g)*.

```scilab
joint_probability = Pr .* Pg;
imshow(joint_probability, []);
```
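For reference, this scoring step can be sketched in NumPy; the function names and the μ and σ values in the check below are illustrative assumptions, not measured from the actual ROI:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """1-D Gaussian PDF, evaluated elementwise."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def parametric_segment(r, g, mu_r, sigma_r, mu_g, sigma_g):
    """Joint probability p(r) * p(g), treating r and g as independent."""
    return gaussian_pdf(r, mu_r, sigma_r) * gaussian_pdf(g, mu_g, sigma_g)

# a pixel at the ROI mean should score higher than one far from it
score = parametric_segment(np.array([0.6, 0.1]), np.array([0.2, 0.5]),
                           mu_r=0.6, sigma_r=0.05, mu_g=0.2, sigma_g=0.05)
```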

Figure 2 shows the extracted portions from the image via segmentation.

*Non-parametric segmentation*

Here we will use the 2D histogram of the r and g values of the ROI to do the segmentation. After converting all pixels to chromaticity coordinates, we tag their membership by binning the r and g values into a matrix:

```scilab
BINS = 32;
rint = round(r*(BINS-1) + 1);
gint = round(g*(BINS-1) + 1);
colors = gint(:) + (rint(:) - 1)*BINS;
hist = zeros(BINS, BINS);
for row = 1:BINS
    for col = 1:(BINS - row + 1)   // r + g <= 1, so only the triangle is populated
        hist(row, col) = length(find(colors == (col + (row - 1)*BINS)));
    end;
end;
```
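As a cross-check, the same binning can be sketched in NumPy, where `np.histogram2d` replaces the explicit double loop; the helper is a hypothetical equivalent and bins the full unit square rather than only the triangle r + g ≤ 1:

```python
import numpy as np

BINS = 32

def chromaticity_histogram(r, g, bins=BINS):
    """2-D histogram of ROI chromaticities; rows index r, columns index g."""
    hist, _, _ = np.histogram2d(r.ravel(), g.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist

# two pixels with identical chromaticity fall into the same bin
r = np.array([0.5, 0.5])
g = np.array([0.25, 0.25])
h = chromaticity_histogram(r, g)
```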

The resulting matrix, when rotated so that the origin is at the lower-left corner, gives the 2D histogram of ROI membership, which can be read against the *normalized chromaticity space* or NCS (Figure 3).

Figure 3. A. 2D histogram of pixel membership; B. the normalized chromaticity space (NCS).

Using this histogram, we can now segment the whole image by means of backprojection. After computing the r and g values of the image as before:

```scilab
arsize = size(r);
rows = arsize(1);
cols = arsize(2);
backproj = zeros(rows, cols);
for i = 1:rows
    for j = 1:cols
        r_val = r(i, j);
        g_val = g(i, j);
        r_new = round(r_val*(BINS-1) + 1);
        g_new = round(g_val*(BINS-1) + 1);
        backproj(i, j) = hist(r_new, g_new);  // replace pixel with its histogram count
    end;
end;
imshow(backproj, []);
```
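The per-pixel lookup can likewise be sketched in NumPy, where fancy indexing replaces the double loop; the `backproject` helper, its 0-based bin mapping, and the hand-built histogram below are illustrative assumptions:

```python
import numpy as np

def backproject(r, g, hist):
    """Replace each pixel with the histogram count of its (r, g) bin,
    using rint(x * (BINS - 1)) as a 0-based index."""
    bins = hist.shape[0]
    ri = np.rint(r * (bins - 1)).astype(int)
    gi = np.rint(g * (bins - 1)).astype(int)
    return hist[ri, gi]          # fancy indexing does the per-pixel lookup

# pixels whose chromaticity bin is populated get a high score
hist = np.zeros((32, 32))
hist[16, 8] = 100.0
score = backproject(np.array([[16/31, 0.0]]), np.array([[8/31, 0.0]]), hist)
```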

Figure 4 shows the segmented image resulting from the non-parametric method.

Comparing the segmented images obtained from the two algorithms, the non-parametric method extracted a larger part of the region of interest, while the parametric method produced sections that are sharper and smoother with respect to the object's surface, owing to the Gaussian approximation. Though the two techniques give different results, each is well suited to different scenes under different conditions; knowing the pros and cons of each lets us choose the right method for a given segmentation task.

For this section, I would rate myself a 10 for being able to produce the outputs of both segmentation methods.

Credits: Dennis D., Tin C., and Marou R.

