Sunday, March 19, 2017

Automatic Emotion Detection

https://www.youtube.com/watch?v=45eLpzk6N34

Automatic facial expression recognition (often abbreviated AFER) is a multidisciplinary research field spanning computer vision, machine learning, neuroscience, psychology and cognitive science. The most common approach in AFER research is to classify continuous expressive facial displays according to specific labels, categories or dimensions.
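To make the idea of classifying expressive displays into discrete labels concrete, here is a minimal sketch in pure Python. Everything in it is invented for illustration: a real AFER system would extract features from video frames, whereas this toy uses made-up 2-D feature vectors and a simple nearest-centroid rule over a handful of hypothetical emotion labels.

```python
import math

# Hypothetical per-label feature centroids (values invented for illustration).
CENTROIDS = {
    "happiness": (0.9, 0.1),
    "sadness":   (0.1, 0.8),
    "surprise":  (0.8, 0.9),
    "neutral":   (0.2, 0.2),
}

def classify(features):
    """Return the emotion label whose centroid is closest to the feature vector."""
    def dist(label):
        cx, cy = CENTROIDS[label]
        return math.hypot(features[0] - cx, features[1] - cy)
    return min(CENTROIDS, key=dist)

print(classify((0.85, 0.15)))  # a feature vector near the "happiness" centroid
```

The same structure carries over to dimensional approaches: instead of returning a discrete label, the system would output continuous coordinates (e.g. valence and arousal).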


As we already know, different emotions are better expressed through some modalities than others. The most incremental transition from vision-only AFER systems is to add the audio modality. This is particularly needed to distinguish facial deformations caused by expressions from those caused by speech. Some researchers have started working in this area, but the community effort is still small.


The results of the emotional analyses are presented in a format that makes interpretation easier for the reader. The visualization tools build on tools that research groups have been developing since the early 1990s. Individual documents can be visualized, for example, by marking up the sentiment-bearing words; colors, shapes and other visual cues can distinguish words that express appreciation from those that express social judgment. A writer's emotions towards various topics can also be visualized as a graph.
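The mark-up idea above can be sketched with a few lines of Python: wrap lexicon words in ANSI color codes so that appreciation words and social-judgment words stand out in terminal output. The word lists and color choices here are assumptions made purely for illustration, not any standard sentiment lexicon.

```python
# Tiny invented lexicons for illustration only.
APPRECIATION = {"beautiful", "elegant", "praised"}   # aesthetic appreciation
JUDGMENT = {"brave", "dishonest", "careless"}        # social judgment

GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

def mark_up(text):
    """Color appreciation words green and judgment words red in terminal output."""
    out = []
    for word in text.split():
        key = word.lower().strip(".,!?")
        if key in APPRECIATION:
            out.append(GREEN + word + RESET)
        elif key in JUDGMENT:
            out.append(RED + word + RESET)
        else:
            out.append(word)
    return " ".join(out)

print(mark_up("The brave reviewer praised the elegant design."))
```

A document-level view could then aggregate these per-word hits into counts per topic and plot them as the graph the paragraph describes.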

