
14. Image compression

September 24, 2010

Image compression reduces the size of a graphics file for more convenient storage.  It is also a useful way of cutting the time needed to send large files over the Web.  Here we explore a method of image compression based on Principal Component Analysis (PCA), which exploits the idea that any image can be represented as a superposition of weighted basis images.

Suppose we have the following image and its grayscale conversion:

Figure 1. A sample image (characteristically sharp) and its grayscale equivalent.  Courtesy of SuperStock.

We divide the image into 10×10 blocks and flatten each one.  These sub-blocks are arranged into an n×p matrix, where n is the number of blocks and p is the number of elements in each block.
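The blocking step can be sketched in NumPy (a hypothetical Python stand-in for the Scilab workflow used in the activity; the image here is random dummy data of the same size as the one in the post):

```python
import numpy as np

def image_to_blocks(img, bs=10):
    """Cut a grayscale image into bs x bs blocks and stack each
    flattened block as one row of an n x p matrix (p = bs * bs)."""
    h, w = img.shape
    h, w = h - h % bs, w - w % bs          # trim edges that do not fill a block
    return (img[:h, :w]
            .reshape(h // bs, bs, w // bs, bs)
            .swapaxes(1, 2)
            .reshape(-1, bs * bs))

# a 280 x 340 image, as in the post, yields 28 * 34 = 952 blocks
img = np.random.default_rng(0).random((280, 340))
X = image_to_blocks(img)
print(X.shape)   # -> (952, 100)
```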

We apply PCA to this matrix with the pca() function in Scilab, which returns a set of eigenvalues, eigenvectors, and principal components.

Figure 2. Plot of eigenvalues (A) and principal components (B) of the image.
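As a hedged illustration of what pca() computes (the details of Scilab's routine may differ), the same three quantities can be obtained in NumPy by eigendecomposing the covariance of the centered block matrix:

```python
import numpy as np

def pca(X):
    """Toy PCA on the n x p block matrix: center the rows, eigendecompose
    the covariance, and return eigenvalues (descending), eigenvectors
    (as columns), and the principal-component scores of each block."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)        # eigh returns ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]  # flip to descending
    scores = Xc @ vecs                      # principal components per block
    return vals, vecs, scores

# random stand-in for the 952 x 100 block matrix
X = np.random.default_rng(1).random((952, 100))
vals, vecs, scores = pca(X)
```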

The eigenvectors, reshaped back into 10×10 blocks, yield eigenimages that serve as the building blocks of the compressed image.

Figure 3. Eigenimages derived from the original image.

Each eigenvalue tells how much its eigenvector contributes to the completeness of the image.  Expressing these values as cumulative percentages, we choose the most important eigenvectors and reconstruct the image from them alone.  Figure 4 shows the resulting images at 86.7%, 93.4%, 95.5%, and 97.5%.

Figure 4. Compressed reconstructions of the image using 1, 3, 5, and 10 eigenvectors, respectively.

We can gauge how much the image has been compressed by counting the eigenvector elements used in the reconstruction and/or comparing file sizes.  Our original grayscale image has dimensions 280×340 and is stored at 75.3 KB.  Compressed with 1, 3, 5, and 10 eigenvectors (figure above), it is reduced to 44.2 KB, 51.3 KB, 53.9 KB, and 60.7 KB, respectively.
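Ignoring file-format overhead (the KB figures above also reflect PNG/JPEG encoding), a raw count of the numbers that must be stored when k eigenvectors are kept — k scores per block, the k eigenvectors, and the mean block — gives a rough lower bound on the compressed size:

```python
# element-count estimate for the 280 x 340 image of the post
n, p = 952, 100                      # 28 x 34 blocks of 10 x 10 pixels
for k in (1, 3, 5, 10):
    stored = n * k + k * p + p       # scores + eigenvectors + mean block
    print(f"k={k}: {stored} values ({stored / (n * p):.1%} of {n * p})")
```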

When circumstances do not require high-definition images, it is often best to compress them to a size at which quality is not noticeably compromised and the essential information is preserved.

For this activity, I would rate myself 10 for the job well done. 🙂

Credits: Jeff A. and Jonathan A.



1. Digital Scanning

June 17, 2010

A graph was reproduced digitally by relating the pixel locations of its data points to their actual physical values.  Here I used a hand-plotted graph from The Journal of Experimental Zoology (1916), which shows the relation of pulsation rate to temperature for the organism Holothuria captiva (Figure 1).

Figure 1. Graph taken from the Journal of Experimental Zoology (1916).

Using a drawing tool (Microsoft Paint) and spreadsheet software (OpenOffice Calc), a ratio-and-proportion approach was applied to replicate the plot.  With the mouse-over location tool of Paint, the pixel coordinates of the tick marks were recorded and related to their axis values.  To allow for the imperfections of the hand-drawn image, the relation was averaged over all the tick marks on each axis to obtain more accurate proportionality constants.  It was found that on the x-axis 1 °C is represented by 11.27 pixels, while on the y-axis 1 second is represented by 6.08 pixels.
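The averaging over tick marks can be illustrated as follows; the tick pixel positions below are hypothetical, since the real ones were read off Paint's coordinate display:

```python
import numpy as np

# Hypothetical tick pixel positions for the temperature axis
x_tick_px  = np.array([52, 108, 165, 221, 277, 333])   # pixel x of each tick
x_tick_val = np.array([10, 15, 20, 25, 30, 35])        # temperature in deg C

# average pixels-per-unit over all consecutive tick pairs to smooth out
# the imperfections of the hand-drawn axes
px_per_degC = (np.diff(x_tick_px) / np.diff(x_tick_val)).mean()
print(round(px_per_degC, 2))   # -> 11.24
```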

After finding the pixel locations of all data points, their x and y coordinates were converted using the ratios obtained.  It is important to note that in MS Paint the origin is located at the upper-left corner of the window, so pixel y-values increase downward.  It is also necessary to account for the axis ranges of the plot, x: (10, 40) and y: (25, 85).  The converted values were therefore inverted and shifted to recover the actual values of the points.  Figure 2 shows the resulting graph produced with the charting function of the spreadsheet.

Figure 2. Reconstruction of the graph using ratio and proportion.
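Assuming a reference pixel for the lower-left corner of the axes (the pixel position used here is hypothetical; the conversion constants are the ones found above), the inversion-and-shift step might look like:

```python
# Calibration constants from the tick-mark averaging
PX_PER_C, PX_PER_S = 11.27, 6.08
X0_PX, Y0_PX = 52, 480            # hypothetical pixel of the (10 C, 25 s) corner
X0_VAL, Y0_VAL = 10, 25           # axis values at that corner

def to_physical(px, py):
    """Map a Paint pixel coordinate to graph units.  Paint puts the
    origin at the top-left, so pixel y grows downward and is inverted."""
    x = X0_VAL + (px - X0_PX) / PX_PER_C
    y = Y0_VAL + (Y0_PX - py) / PX_PER_S   # sign flip handles the inversion
    return x, y

print(to_physical(52, 480))   # the reference corner itself -> (10.0, 25.0)
```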

The digital replica of the graph was compared to the original scanned image by overlaying one on top of the other.  Using the Chart Area tool in Calc, the bitmap image of the original plot was embedded behind the new graph.

A finishing touch is to set the bitmap background to Autofit rather than the default Tile option, which can produce an unsatisfactory embedding depending on the bitmap size.  Figure 3 shows the resemblance of the reproduced graph to the original scanned image.

Figure 3. The digital replica overlaid on the original plot.

It is remarkable that some flaws of the manually drawn graph (i.e., axis tilts and misalignment) were corrected in the reconstructed plot.  However, although the calculated points fall fairly well on their proper locations, the curve produced does not match the original.  This can be attributed to the limitations of the curve-smoothing function for charts in OpenOffice Calc, which forces the smoothed curve to pass through every point.  A workaround would require more sophisticated interpolation or array statistics.

I would like to acknowledge Gladys Regalado for sharing a few software tips, and Dr. Soriano for the helpful assistance.  For this activity, I would give myself a grade of 9 for applying the basic methods and being able to reconstruct the essential parts of the plot.


