I’ve been into geographic information systems (GIS) lately, and I found out that there is an ongoing project by the Philippine government to create a national GIS for the country. The project is called the Philippine Geoportal: One Nation, One Map. It started in 2012 as a collaboration between the National Mapping and Resource Information Authority (NAMRIA) and the DOST Advanced Science and Technology Institute (ASTI). This web-based geospatial platform is in its beta version and is publicly viewable at www.geoportal.gov.ph
This electronic map contains information on infrastructure, roads, lands and bodies of water. It has a search feature that allows you to easily locate the place you are looking for.
Though the buttons and links aren’t fully functional yet, zooming and panning work fairly fast (even on slow connections).
I’ve spoken with a few people from NAMRIA and they said they are working on adding more geospatial data and creating smart functionalities on the geoportal. One is related to traffic, which will be extracted in real time from geo-tagged status updates on Twitter. So next time you tweet about “traffic” on the network, try turning your geo-locator on – you might be able to share this info not just with your followers, but also with the users of the Philippine Geoportal.
As I mentioned in my previous post, we were working on a hospital database for our case study, and that includes storing appointments (that is, a future date + time). I found it odd that Oracle doesn’t display the time by default, and that it doesn’t have a datatype specialized for time. So I found this tweak using SQL, which you might find helpful as well.
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI';
You may need to enter this at the start of every session, because Oracle’s default setting doesn’t include time at all. HH24 means you will be using a 24-hour time format – that’s less troublesome than having to deal with AM/PM text, which you would otherwise need to parse.
INSERT INTO appointment (appt_no, appt_date, confirm_status, cpt_code, patient_no, prov_no, room_no)
VALUES ('APT0011', TO_DATE('05-Feb-2013 14:30', 'DD-MON-YYYY HH24:MI'), 'T', '97810', 'P00019', 'PR0008', '102');
SELECT appt_no, TO_CHAR(appt_date, 'DD-MON-YYYY HH24:MI') appt_date, patient_no FROM appointment;
Notice that I used the TO_DATE() function to store the time data and TO_CHAR() to retrieve and display it.
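For comparison, the same parse-and-format round trip can be sketched with Python’s datetime module (purely an analogy, not Oracle itself; the format string below is my translation of Oracle’s 'DD-MON-YYYY HH24:MI'):

```python
from datetime import datetime

# Rough Python counterpart of Oracle's 'DD-MON-YYYY HH24:MI' format model
FMT = "%d-%b-%Y %H:%M"

# TO_DATE: parse the text into a date + time value for storage
appt = datetime.strptime("05-Feb-2013 14:30", FMT)

# TO_CHAR: format the stored value back into text for display
print(appt.strftime(FMT))  # 05-Feb-2013 14:30
```

As in Oracle, the same format model drives both directions of the conversion.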
NOTE: the time data will be displayed only upon query (via TO_CHAR()), not in the Object Browser. Well, at least we know it’s stored.
Let me know if it worked for you (or not).
Over the past month, I was studying the basics of databases, and among the things I had to learn along the way are entity relationships (ER). Databases are composed of tables (called entities) that are connected based on their dependencies on each other. Our instructor introduced us to Microsoft Visio as a good tool for making entity relationship diagrams (ERDs).
In an ERD, each table has its own columns (called attributes), with at least one of them set as a primary key that uniquely identifies rows in that table. Depending on the relationship between entities, a table can inherit the primary key of another table as a foreign key, and thus become a child of the table (parent) it references. Out-bound parent links can be set as mandatory or optional, and the child link cardinalities can be set such that parent-to-child relationships are:
|1 to 0 or more||0 or 1 to 0 or more|
|1 to 1 or more||0 or 1 to 1 or more|
|1 to 0 or 1||0 or 1 to 0 or 1|
|1 to 1||0 or 1 to 1|
|1 to range [at least N, at most M]||0 or 1 to range [at least N, at most M]|
In visual representation, the entity closest to the crow’s feet is the child.
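To make the mandatory/optional distinction concrete, here is a minimal SQL sketch, run through Python’s sqlite3 for illustration (the ROOM and MEDICAL_SERVICE tables echo our case study, but the column names are my own assumptions): a NOT NULL foreign key is what a mandatory parent link becomes in the schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE room (room_no TEXT PRIMARY KEY)")
# NOT NULL on the foreign key is the "mandatory" parent link:
# every medical_service row must point at an existing room.
con.execute("""
    CREATE TABLE medical_service (
        service_no TEXT PRIMARY KEY,
        room_no    TEXT NOT NULL REFERENCES room(room_no)
    )""")

con.execute("INSERT INTO room VALUES ('102')")
con.execute("INSERT INTO medical_service VALUES ('MS001', '102')")  # accepted

try:
    con.execute("INSERT INTO medical_service VALUES ('MS002', NULL)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True   # the mandatory link refuses a parentless child
print("orphan row rejected:", rejected)
```

Dropping the NOT NULL would make the link optional: a child row could then exist without a parent.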
When we were doing our case study on a hospital database, I had trouble trying to create a mandatory link from the parent entity (which should be displayed as two straight lines, instead of a line and a ring), as shown below.
In this scenario, ROOM is the parent entity and MEDICAL_SERVICE is the child entity, and we need to fix their relationship (highlighted below).
Under Miscellaneous > Child has parent, what we are supposed to do is to uncheck Optional.
The problem (the problem!) is that it’s grayed out (disabled). So …
Here’s the workaround:
1. Under Definition, disconnect the associated relations. Make sure the key you choose is the one being inherited (in this case, Room_No).
2. Go back to Miscellaneous > Relationship type and make sure it’s set to Non-identifying. Under Child has parent, you can now (finally) uncheck Optional.
There’s the mandatory relation right there.
In this post, I’ll be showing an algorithm for generating an ink caricature out of a face photograph. Using a series of basic image processing techniques, we will extract the features of the human face and then reconstruct them into a black-on-white sketch with details similar to brush (or pen) strokes. This is useful for those who wish to avoid the work of manually producing such artworks. In addition, this set of methods can serve as initial groundwork for more intelligent and advanced algorithms for creating digital caricatures.
The overall algorithm can be summarized as follows:
Firstly, we choose a good threshold value to produce a satisfactory binary conversion of the raw image. The parameter value should be sufficient to capture and preserve the important features of the face (i.e. eyes, brows, facial linings, teeth, etc.).
Applying edge detection with a spot filter extracts the important linings of these facial features.
Notice, however, that though we were able to capture the details of the face, we were not able to capture the details of the hair, due to the limitations of the photo (dark hair against a dark background). This is because the dark areas were not picked up by the chosen threshold value. This can be resolved by applying value inversion, which changes the brightness of the pixels but not the color. We use Scilab’s built-in rgb2hsv() to accomplish this and then convert the image back to its RGB representation.
img = rgb2hsv(im);
// img(:,:,1) -> hue, img(:,:,2) -> saturation, img(:,:,3) -> value
img_value = img(:,:,3);
img(:,:,3) = 1 - img_value;  // invert the value channel
im = hsv2rgb(img);
After applying the same methods performed above, the following pictures show the captured details of the photo.
Next, we combine the edges detected for the face and the hair and obtain the overall edges for the face photo.
We remove the pits and holes by applying a simple morphological operation using a structuring element in the shape of a circle. Note that in a 5×5 matrix this appears more like a diamond (like the one below), but it is the best shape to approximate the touch of an ink brush or pen.
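For readers outside Scilab, here is a minimal NumPy sketch of this step, assuming the operation is a closing (dilation followed by erosion, the standard way to fill pits and holes) with the diamond-shaped element; the function names are mine, and the image is taken to be binary (boolean):

```python
import numpy as np

def diamond(radius):
    # "circle" structuring element; on a small grid it renders as a diamond
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return np.abs(x) + np.abs(y) <= radius

def dilate(img, se):
    # OR together copies of the image shifted by every offset in the element
    r = se.shape[0] // 2
    h, w = img.shape
    padded = np.pad(img, r)
    out = np.zeros_like(img, dtype=bool)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if se[dy + r, dx + r]:
                out |= padded[r + dy:r + dy + h, r + dx:r + dx + w]
    return out

def erode(img, se):
    # erosion by duality (the diamond element is symmetric)
    return ~dilate(~img, se)

def closing(img, se):
    # dilation followed by erosion: fills pits and holes
    return erode(dilate(img, se), se)
```

For example, closing a white patch that contains a one-pixel hole with `diamond(2)` fills the hole, while an all-black image stays black.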
After morphology, we invert the processed image into its black-on-white equivalent and finally obtain our ink caricature.
This set of methods also proves effective for various facial expressions.
|Raw images courtesy of DailyMail.|
The limitations, however, are that (1) the methods sometimes need human supervision in choosing threshold values, to make sure all the valuable features of the face are captured (for aesthetic reasons, e.g. eyelashes, dimples, etc.); (2) the effectiveness depends on value contrasts, hence the need for value inversions; and (3) the caricature detail depends on the preferred brush/pen size, which greatly affects its accuracy.
 Soriano, 2010. Binary operations. Applied Physics 186.
 Soriano, 2010. Fourier Transform model of image formation. Applied Physics 186.
 Wikipedia, 2010. Caricature.
 Portrait Workshop, 2010. Types of Caricatures.
I just had my machine reformatted for a new Ubuntu version and I spent hours the other night reorganizing my files. I realized that among the ones taking up most of my disk space are my MP3 files.
MP3 is a lossy audio compression scheme, but you can actually choose to reduce its bitrate (say, for your small-capacity MP3 player) without losing much of the quality. Some music files are at 320 kbps, but the standard rate is 128 kbps. You can tell that a file has a high bitrate if its size is too large for the song length.
You can calculate the size of an MP3 file using this:
z = (x * y) / 8
x = length of song (in seconds)
y = bitrate (in kilobits per second)
z = resulting file size (in kilobytes) – dividing by 8 converts kilobits to kilobytes
So let’s say you have an MP3 of length 3 minutes = 180 seconds. At 128kbps, your file will be as big as:
z = (180*128)/8 = 2880 KB ≈ 2.88 MB
Compared to when you have it at 320kbps:
z = (180*320)/8 = 7200 KB ≈ 7.2 MB
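The arithmetic above is easy to wrap in a few lines of Python (the function name is mine):

```python
def mp3_size_kb(length_s, bitrate_kbps):
    """File size in kilobytes: seconds * kilobits-per-second / 8 bits-per-byte."""
    return length_s * bitrate_kbps / 8

print(mp3_size_kb(180, 128))  # 2880.0 KB (about 2.88 MB)
print(mp3_size_kb(180, 320))  # 7200.0 KB (about 7.2 MB)
```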
Now that’s a significant difference when you have thousands of MP3s on your disk. Most people choose to convert their MP3 files using software (e.g. SoundConverter, another FOSS app). But if you would like to keep things light on your machine, you can instead install the LAME library (a recursive acronym for LAME Ain’t an MP3 Encoder). For more information on this package, run: man lame.
So how does it work? To do the bitrate conversion, run on the terminal (as root):
lame --mp3input -b <bitrate> <file.mp3> <destination.mp3>
where the bitrate can range from 8 to 320 kbps, depending on your need.
If you have more than a handful of MP3 files, you can automate the conversion using the following shell script:
for f in *.mp3 ; do lame --mp3input -b <bitrate> "$f" <path_to_destination>/"$f" ; done
Note that leaving out the destination parameter saves the file as file.mp3.mp3, while setting this parameter to your player’s location writes the file there.
 Wikipedia, 2010. MP3.
 Jhollington, 2004. kbps ?. iLounge Forums.
 Ubuntu Packages, 2010. Package: lame.
 Ubuntu Development Team, 2010. “lame” package. Launchpad.
 Vanadium, 2007. Convert MP3 bitrate. Ubuntu Forums.
Image compression is a means of reducing the size of a graphics file for better storage convenience. It is also a useful way of reducing the time required to send large files over the Web. Here we will explore a method of image compression by principal component analysis (PCA). This technique utilizes the idea that any image can be represented as a superposition of weighted base images.
Suppose we have the following image and its grayscale conversion:
|Figure 1. A sample image (characteristically sharp)and its grayscale equivalent. Courtesy of SuperStock.|
We divide the image into blocks of 10×10 pixels and concatenate them. These sub-blocks are arranged into an n×p matrix, where n is the number of blocks and p is the number of elements in each block.
We apply PCA on the matrix with the pca() function in Scilab, which returns a set of eigenvalues, eigenvectors, and principal components.
This produces eigenimages, which are the essential elements of the compressed image.
Eigenvalues tell how essential a particular set of eigenvectors is to the completeness of the image. Based on these values, expressed as percentages, we choose the most important eigenvectors and reconstruct the image from them. Figure 4 shows the resulting images at 86.7%, 93.4%, 95.5%, and 97.5%.
|Figure 4. Compressed reconstructions of the image at different numbers of eigenvectors: 1, 3, 5, and 10, respectively.|
We can find out how much the image has been compressed by counting how many of the eigenvector elements were used in the reconstruction and/or comparing the file sizes. Our original image has dimensions 280×340 and is stored at 75.3 KB (grayscale). When compressed with 1, 3, 5, and 10 eigenvectors (figure above), it is reduced to 44.2 KB, 51.3 KB, 53.9 KB, and 60.7 KB, respectively.
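The whole block-wise PCA pipeline can be sketched in NumPy, using SVD in place of Scilab’s pca() (the function name, tiling, and defaults here are my own assumptions):

```python
import numpy as np

def pca_compress(gray, block=10, k=5):
    # Split the grayscale image into block x block tiles (an n x p matrix),
    # keep only the k strongest principal components, and reconstruct.
    h, w = gray.shape
    tiles = (gray.reshape(h // block, block, w // block, block)
                 .swapaxes(1, 2).reshape(-1, block * block))
    mean = tiles.mean(axis=0)
    centered = tiles - mean
    # rows of vt are the eigenvectors (eigenimages, when reshaped to a block)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]
    # project onto the top-k eigenimages and reconstruct
    approx = centered @ basis.T @ basis + mean
    return (approx.reshape(h // block, w // block, block, block)
                  .swapaxes(1, 2).reshape(h, w))
```

Keeping more eigenvectors lowers the reconstruction error, at the cost of storing more coefficients per block.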
When circumstances do not really require high-definition images, it is often best to compress the images to a good size such that their quality is not compromised and the information is well-kept.
For this activity, I would rate myself 10 for the job well done. 🙂
Credits: Jeff A. and Jonathan A.
 Soriano, 2010. Image compression. Applied Physics 186.
 Mudrova, Prochazka. , 2005. Principal component analysis in image processing.
 TechTarget, 2010. What is image compression?
Image segmentation is the process whereby a region of interest (ROI) is extracted from an image for further processing. Since segmentation is often not feasible on grayscale images alone, one of the features this process utilizes is the ROI’s unique true color. This method has been widely used in practical applications such as remote sensing, microscopy, and object recognition.
Parametric vs. Non-Parametric Probability Distribution Estimation
Color-based segmentation can be performed by finding the probability wherein a pixel falls within a distribution of interest. Parametric segmentation finds the Gaussian probability distribution function (PDF) in the R and G values of the ROI in order then to segment the whole image. Non-parametric segmentation, on the other hand, finds the 2D histogram of the ROI and uses it to segment the image using histogram backprojection.
For instance, we have an image of a 3D object with a single color and crop a monochromatic region of interest in the picture (Figure 1).
|Figure 1. A 3D object with a single color. A. a red mug, B. region of interest (ROI)|
We transform the RGB values of the image into normalized chromaticity coordinates (NCC), such that r = R/(R+G+B), g = G/(R+G+B), and b = B/(R+G+B).
It is important to note that r + g + b = 1, and b is dependent on r and g since b = 1 – r – g. We implement this process in Scilab as follows:
R = ROI(:, :, 1);
G = ROI(:, :, 2);
B = ROI(:, :, 3);
I = R + G + B;
I(find(I == 0)) = 1000000;  // avoid division by zero
r = R ./ I;
g = G ./ I;
The probability p(r) tells how likely it is that a pixel with chromaticity r belongs to the ROI; we model it as a Gaussian, p(r) = 1/(sigma_r*sqrt(2*pi)) * exp(-(r - mu_r)^2/(2*sigma_r^2)), with the mean and standard deviation taken from the ROI:
mu_r = mean(r); mu_g = mean(g); sigma_r = stdev(r); sigma_g = stdev(g);
Using these values we apply segmentation to the entire image by calculating the probabilities for r and g.
im = imread('3D_object.png');
R = im(:, :, 1);
G = im(:, :, 2);
B = im(:, :, 3);
I = R + G + B;
I(find(I == 0)) = 1000000;  // avoid division by zero
r = R ./ I;
g = G ./ I;
Pr = (1/(sigma_r*sqrt(2*%pi))) * exp(-((r - mu_r).^2)/(2*(sigma_r^2)));
Pg = (1/(sigma_g*sqrt(2*%pi))) * exp(-((g - mu_g).^2)/(2*(sigma_g^2)));
The joint probability is taken as the product of p(r) and p(g).
joint_probability = Pr .* Pg;
imshow(joint_probability);
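The same parametric recipe can be condensed into a NumPy sketch (function names are mine; the logic mirrors the Scilab snippets above):

```python
import numpy as np

def chromaticity(rgb):
    # r, g normalized chromaticity coordinates, as in the Scilab code above
    s = rgb.sum(axis=2).astype(float)
    s[s == 0] = 1e6  # avoid division by zero
    return rgb[:, :, 0] / s, rgb[:, :, 1] / s

def parametric_segment(img, roi):
    # fit Gaussians to the ROI's r and g values, then score every image pixel
    def gauss(x, mu, sigma):
        return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    r_roi, g_roi = chromaticity(roi)
    r, g = chromaticity(img)
    pr = gauss(r, r_roi.mean(), r_roi.std())
    pg = gauss(g, g_roi.mean(), g_roi.std())
    return pr * pg  # joint probability, high inside the ROI's color cloud
```

On a synthetic image with a red half and a blue half, cropping the ROI from the red side gives high joint probability on the red pixels and near-zero on the blue ones.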
Figure 2 shows the extracted portions from the image via segmentation.
Here we will use the 2D histogram of the r and g values of the image to do the segmentation. After normalizing all pixels, we tag their membership by finding the histogram of the r and g values in a matrix.
BINS = 32;
rint = round(r*(BINS - 1) + 1);
gint = round(g*(BINS - 1) + 1);
colors = gint(:) + (rint(:) - 1)*BINS;
hist = zeros(BINS, BINS);
for row = 1:BINS
    for col = 1:(BINS - row + 1)  // r + g <= 1, so only this triangle is occupied
        hist(row, col) = length(find(colors == (col + (row - 1)*BINS)));
    end;
end;
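In NumPy, the same counting can be done with histogram2d (note that the binning here uses histogram edges rather than the Scilab rounding scheme, so counts near bin borders may differ slightly):

```python
import numpy as np

def chromaticity_hist(r, g, bins=32):
    # count how many pixels fall into each (r, g) bin of the
    # normalized chromaticity space; rows index r, columns index g
    hist, _, _ = np.histogram2d(r.ravel(), g.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist
```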
The resulting matrix, when rotated so that the origin is at the lower-left corner, produces the 2D histogram for ROI membership, which can be interpreted against the normalized chromaticity space, or NCS (Figure 3).
|Figure 3. A. 2D histogram for pixel membership; B. the Normalized Chromaticity Space (NCS)|
Using this histogram, we can now apply segmentation to the image by means of backprojection. After finding the r and g values:
arsize = size(r);
rows = arsize(1);
cols = arsize(2);
backproj = zeros(rows, cols);
for i = 1:rows
    for j = 1:cols
        r_val = r(i, j);
        g_val = g(i, j);
        r_new = round(r_val*(BINS - 1) + 1);
        g_new = round(g_val*(BINS - 1) + 1);
        backproj(i, j) = hist(r_new, g_new);  // replace pixel with its bin count
    end;
end;
imshow(backproj);
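The same backprojection can be vectorized in NumPy (0-based indices here instead of Scilab’s 1-based ones; the function name is mine):

```python
import numpy as np

def backproject(r, g, hist, bins=32):
    # replace each pixel with the histogram count of its (r, g) bin
    r_idx = np.rint(r * (bins - 1)).astype(int)
    g_idx = np.rint(g * (bins - 1)).astype(int)
    return hist[r_idx, g_idx]
```

Fancy indexing does the whole image at once, so no explicit double loop is needed.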
Figure 4 shows the segmented image resulting from the non-parametric method.
Comparing the segmented images obtained from the two algorithms, we can observe that the non-parametric method extracted a larger part of the region of interest (ROI). The parametric method, on the other hand, segmented sections that are sharper and smoother with respect to the object’s surface (thanks to the Gaussian approximation). These two techniques, though different in their results, are each useful for different scenes under different conditions. Knowing the pros and cons of each, we can wisely choose the right method for every segmentation need.
For this section, I would rate myself 10 for being able to produce the outputs from both segmentation methods.
Credits: Dennis D., Tin C., and Marou R.
 Soriano, 2010. Color image segmentation. Applied Physics 186.
 The Scilab Reference Manual. 2010.