Comparing The Workflow Of The Face Emotion Recognition System And OpenCV


3.0 Methodology
Figure 3.1 below shows the overall workflow of the face emotion recognition system; OpenCV will be applied throughout the whole process.

Figure 3.1 General flow of the system
This project will mainly focus on face detection and feature extraction. Only one webcam will be used, mounted on a laptop, so that image frames can be extracted from the video. After an image frame is obtained, the next stage, face detection, locates the human face in that frame.
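Below is a minimal sketch of this frame-grabbing step, assuming the Python bindings for OpenCV; the device index and output file name are placeholders, not details taken from the report.

import cv2

# Grab a single image frame from the webcam mounted on the laptop.
cap = cv2.VideoCapture(0)            # device index 0 is assumed to be the webcam
ret, frame = cap.read()              # extract one frame from the video stream
cap.release()

if ret:
    cv2.imwrite("frame.png", frame)  # keep the frame for the face detection stage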
3.1 Pre-processing
Since the face will not appear at the same size in every image frame, skin detection will be applied first in order to reduce the calculation time needed to find the face. Skin colour will serve as a requirement for the candidate face regions.
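The report does not state which colour space is used for skin detection, so the sketch below assumes the commonly used YCrCb thresholds; the exact Cr/Cb ranges are illustrative values only.

import cv2
import numpy as np

def skin_mask(frame_bgr):
    # Convert to YCrCb and keep pixels whose Cr/Cb values fall in a typical skin range.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Face detection is then restricted to regions where mask > 0,
    # which reduces the area that has to be searched.
    return mask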

So in this step, data sets are prepared not only for the eyes, nose and mouth but also for the face itself, in order to increase the accuracy of face detection.
During this stage, the Viola-Jones algorithm will be used in OpenCV to detect the face. This algorithm is based on Haar-like features and the AdaBoost algorithm. In order to detect the face and facial parts, four data sets were created: human faces, eyes, nose and mouth. First, Haar-like features will be computed over those data sets, and the features will then be learned by the AdaBoost algorithm. The resulting classifiers will be applied in a cascade structure to detect the human face and facial parts. The database for the face and each facial part will include positive images and negative images; for the face image database, for example, 500 positive images (with a face) and 500 negative images (without a face) will be prepared, and the same applies to the others. All of the face and facial part detection will be carried out within the skin-pixel region.
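The sketch below shows how such cascades are applied in OpenCV through cv2.CascadeClassifier, assuming the Python bindings. The report trains its own cascades from the 500/500 image sets; here OpenCV's bundled frontal-face cascade is used only as a stand-in for those trained XML files.

import cv2

# Load a cascade classifier (stand-in for the project's own trained cascade).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("frame.png")                  # frame from the webcam stage
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Eye, nose and mouth cascades would be run inside this face region only.
    face_roi = gray[y:y + h, x:x + w]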

Both of the images will be combined, and Figure 3.4(c) shows the output. The two lips can always be obtained for the angry, sadness, surprise, normal and disgust emotions, while the fear and happy emotions can be distinguished because the teeth often appear, so more Canny edge pixels will be detected in the mouth area.
Figure 3.4 Process of detecting lips: a) binary output, b) Canny edge detection output, c) combination of binary and Canny edge
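A small sketch of this lip step is given below (assuming Python/OpenCV); the Otsu threshold and the Canny parameters are illustrative choices, not values stated in the report.

import cv2
import numpy as np

def lip_edges(mouth_gray):
    # Binary image of the mouth region (Otsu threshold assumed for illustration).
    _, binary = cv2.threshold(mouth_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Canny edges of the same region.
    edges = cv2.Canny(mouth_gray, 100, 200)
    # Combine the two outputs, as in Figure 3.4(c).
    combined = cv2.bitwise_or(binary, edges)
    # A larger edge-pixel count hints at visible teeth (fear / happy emotions).
    edge_pixel_count = int(np.count_nonzero(edges))
    return combined, edge_pixel_count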
Besides the mouth, the eye and eyebrow portion will be identified based on requirements such as the eyes being located above 60% of the face height measured from the bottom border of the face. The eye and eyebrow region will then be separated into a left side and a right side. The next step is to separate the eye from the eyebrow. A Prewitt filter will be used to find the global maxima; the output should have two global maxima, one representing the eyebrows and the other representing the eyes. The output is shown in Figure 3.5.
Figure 3.5 The global maxima represent the regions of the eyebrows and eyes
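The following sketch illustrates one way this separation could be done in Python/OpenCV: a horizontal Prewitt kernel is applied, its response is projected onto image rows, and the two strongest rows are taken as the eyebrow and the eye. The kernel orientation and the row-projection approach are assumptions made for illustration.

import cv2
import numpy as np

def eyebrow_eye_rows(eye_region_gray):
    # Horizontal Prewitt kernel (responds to horizontal structures such as
    # the eyebrow and the eyelid line).
    prewitt = np.array([[ 1,  1,  1],
                        [ 0,  0,  0],
                        [-1, -1, -1]], dtype=np.float32)
    response = np.abs(cv2.filter2D(eye_region_gray.astype(np.float32), -1, prewitt))
    # Sum the filter response along each row to get a 1-D profile.
    row_profile = response.sum(axis=1)
    # The two global maxima of the profile mark the eyebrow row and the eye row.
    top_two = np.argsort(row_profile)[-2:]
    eyebrow_row, eye_row = sorted(top_two)   # the eyebrow lies above the eye
    return eyebrow_row, eye_row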
