Computer vision based real-time micro-expression analysis

ABG-77941 Thesis topic
16/05/2018 – less than €25K gross per year
Université de Bourgogne, Laboratoire LE2I
Dijon - Bourgogne-Franche-Comté - France
  • Computer science
  • Electronics
Computer vision, video analysis, micro-expression classification, machine learning, approximate computing, real-time processing, affective computing, emotion AI.

Description

Scientific Background


In some circumstances, stress can be a positive and motivating force. However, chronic stress or anxiety can cause a variety of symptoms and affect overall well-being and health. Affective computing (also called artificial emotional intelligence, or emotion AI) consists in designing and developing systems that can recognize, interpret, process, and simulate human affects. This modern branch of computer science aims to analyze a person's emotional state and adapt the human-machine interface so that it responds appropriately to those emotions.

Computer vision-based emotion assessment begins by capturing data about the user's physical state or behavior using passive sensors. For example, a video camera can capture facial expressions, body posture, and gestures, while other sensors detect emotional cues by directly measuring physiological data such as skin temperature and galvanic skin resistance. Recognizing emotion first requires extracting meaningful patterns from the gathered data, then processing the different modalities (facial expressions, physiological parameters, etc.) with machine learning techniques to produce an emotion label. Real-time emotion analysis has many potential applications, such as e-learning, e-health, and neuro-marketing. This research topic is relatively recent, and few results have been published in the literature.
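As a purely illustrative sketch of that capture → pattern extraction → classification chain (not the method of this thesis), the Python snippet below wires the steps together. The Haar-cascade face detector, the 48×48 grayscale crop used as a feature vector, the linear SVM, the six-emotion label set and the placeholder training data are all assumptions made for the example.

```python
# Minimal frame-level emotion recognition pipeline (hypothetical choices).
import cv2
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_features(frame):
    """Detect the largest face and return a flattened 48x48 grayscale crop."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest detection
    crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    return (crop.astype(np.float32) / 255.0).ravel()

# Placeholder random data stands in for a labelled facial-expression database.
rng = np.random.default_rng(0)
X_train = rng.random((60, 48 * 48), dtype=np.float32)
y_train = rng.integers(0, len(EMOTIONS), size=60)
clf = SVC(kernel="linear").fit(X_train, y_train)

def classify_emotion(frame):
    """Map one BGR video frame to an emotion label (or report no face)."""
    feats = face_features(frame)
    if feats is None:
        return "no face detected"
    return EMOTIONS[int(clf.predict(feats[None, :])[0])]
```

In a real system the placeholder data would be replaced by a labelled expression database and the hand-crafted features by learned ones; the structure of the pipeline stays the same.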


Description of the project


Based on computer vision techniques, the goal of this PhD thesis is to build a real-time, remote emotion recognition system that satisfies the constraints of general-public applications: robustness and flexibility, low cost, user-friendliness, and embeddability with low energy consumption. Facial expressions of emotion are not culturally determined but universal, and can therefore be categorized into six basic emotions. These expressions are further divided into two classes: macro- and micro-expressions. Micro-expressions occur within 1/25 to 1/4 of a second; they are involuntary and expose a person's true emotions. However, locating and analyzing these events in a video sequence is very difficult because of their short duration and low intensity.

In this PhD project, after a bibliographic study, we will first use a fast camera (200 frames/s) to capture micro-expressions and build a new video database under real-world conditions. Existing macro-expression recognition algorithms will then be tested and improved on optimally extracted images/videos in order to generalize them to micro-expressions. New approaches specific to micro-expression analysis will be studied and proposed using machine learning techniques.
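To make the spotting step concrete, here is a hedged sketch of one possible baseline under the 200 frames/s assumption: candidate micro-expression intervals are those where the mean dense optical-flow magnitude exceeds a statistical threshold for a duration compatible with 1/25 to 1/4 of a second. The Farneback flow, the mean + k·std threshold and the window bounds are illustrative choices, not the project's actual algorithm.

```python
# Spotting candidate micro-expression intervals in a 200 fps grayscale sequence.
import cv2
import numpy as np

FPS = 200
MAX_LEN = FPS // 4    # 1/4 s  -> at most ~50 frames
MIN_LEN = FPS // 25   # 1/25 s -> at least ~8 frames

def motion_signal(gray_frames):
    """Mean optical-flow magnitude between consecutive frames (Farneback)."""
    signal = []
    for prev, curr in zip(gray_frames, gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        signal.append(np.linalg.norm(flow, axis=2).mean())
    return np.array(signal)

def spot_micro_expressions(gray_frames, k=3.0):
    """Return (start, end) frame indices whose motion exceeds mean + k*std
    and whose duration is compatible with a micro-expression."""
    s = motion_signal(gray_frames)
    active = s > s.mean() + k * s.std()
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if MIN_LEN <= i - start <= MAX_LEN:
                events.append((start, i))
            start = None
    return events
```

The duration filter is what distinguishes micro-expression candidates from ordinary macro-expressions or head motion, which typically last longer than 1/4 s.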

Secondly, we will work on approximate computing for a real-time emotion analysis and classification prototype. The recent concept of "approximate computing" studies how computer systems can be made more energy-efficient, faster, and less complex by relaxing the requirement that every computation be exactly correct. Many image processing applications are error-resilient, which allows approximations to be introduced into the calculations. An adequate computing paradigm has therefore emerged in which the accuracy of results can be traded for, e.g., energy savings or improved runtime performance. In this step of the project, the algorithms will be reorganized to expose their parallelism, and the intrinsically parallel computations will be implemented on a GPU (or on Field-Programmable Gate Array (FPGA) circuits, or hybrid architectures) to accelerate processing. Multiple optimizations will be performed at several levels using approximate computing.
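As a toy illustration of the accuracy-for-effort trade underlying approximate computing (independent of any GPU or FPGA target), the sketch below applies loop perforation to a simple motion-energy measure: processing only every fourth pixel cuts the work by roughly 16x at the cost of a small, measurable relative error. The measure and the stride are arbitrary choices made for the example.

```python
# Loop perforation: compute an error-resilient image statistic on a subset
# of pixels, trading accuracy for runtime/energy.
import numpy as np

def motion_energy(prev, curr, stride=1):
    """Mean absolute frame difference, evaluated on every `stride`-th pixel.
    stride=1 is the exact version; stride>1 is the approximate, cheaper one."""
    d = np.abs(curr[::stride, ::stride].astype(np.float32) -
               prev[::stride, ::stride].astype(np.float32))
    return float(d.mean())

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (480, 640), dtype=np.uint8)   # stand-in frames
curr = rng.integers(0, 256, (480, 640), dtype=np.uint8)

exact = motion_energy(prev, curr, stride=1)
approx = motion_energy(prev, curr, stride=4)   # ~16x fewer pixels processed
print(f"exact={exact:.3f}  approx={approx:.3f}  "
      f"relative error={abs(exact - approx) / exact:.3%}")
```

The same principle (reduced precision, skipped iterations, coarser data) is what would later be mapped onto GPU or FPGA implementations to save energy at runtime.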


Significant Publications

[1]  Fan Yang and Michel Paindavoine, "Implementation of a RBF neural network on embedded systems: real time face tracking and identity verification", IEEE Trans. on Neural Networks, Vol.14 (5), pp. 1162-1175, September 2003.

[2]  Nicolas Farrugia, Franck Mamalet, Sebastien Roux, Michel Paindavoine and Fan Yang, "Fast and Robust Face Detection on a Parallel Optimized Architecture implemented on FPGA", IEEE Trans. on Circuits and Systems for Video Technology, 19 (4), pp. 597-602, April 2009.

[3]  Souleymane Balla-Arabé, Xinbo Gao, Bin Wang, Fan Yang and Vincent Brost, "Multi-Kernel Implicit Curve Evolution for Selected Texture Regions Segmentation in VHR Satellite Images", IEEE Trans. on Geoscience and Remote Sensing, Vol.52 (8), pp. 5183-5192, 2014.

[4]  Souleymane Balla-Arabé, Xibao Gao, Dominique Ginhac, and Fan Yang, “Shape-Constrained Level Set Segmentation for Hybrid CPU-GPU Computers”, Neurocomputing, 2016.

[5]  Souleymane Balla-Arabé, Xibao Gao, Dominique Ginhac, Vincent Brost and Fan Yang, “Architecture-Driven Level Set Optimization: From Clustering to Sub-Pixel Image Segmentation”, IEEE Transactions on Cybernetics, 2016.

[6]  C. Li, S. Balla-Arabé, D. Ginhac and F. Yang, "Embedded Implementation of VHR satellite image segmentation", Journal of Sensors - MDPI, 16, 771, 2016.

[7]  Chao Li, Yanjing Bi, Yannick Benezeth, Dominique Ginhac and Fan Yang, "High-level Synthesis for FPGAs: Code optimization strategies for real-time image processing", Journal of Real-Time Image Processing (JRTIP), In press, 2018.

[8]  S. Bobbia, R. Macwan, Y. Benezeth, A. Mansouri and J. Dubois, "Unsupervised skin tissue segmentation for remote photoplethysmography", Pattern Recognition Letters (Elsevier), In press, 2018.

[9]  Anastasia Pampouchidou, Panagiotis Simos, Kostas Marias, Fabrice Meriaudeau, Fan Yang, Matthew Pediaditis, and Manolis Tsiknakis, “Automatic Assessment of Depression Based on Visual Cues: A Systematic Review”, IEEE Trans. on Affective Computing, In press, 2018.

Funding type

Doctoral contract

Funding details

€1,500 net per month

Presentation of the host institution and laboratory

Université de Bourgogne, Laboratoire LE2I

Le2i (Laboratoire Électronique, Informatique et Image) is a French research laboratory located in Dijon, in Burgundy, the home region of Joseph Fourier. Its field of research is information and communication technology. The laboratory counts about 200 people, of whom 84 are faculty members and researchers from the University of Burgundy, Agrosup and ENSAM. It is organized in three departments: Computer Science, Electronics and Computer Vision. The Computer Science department is very active in technology transfer and has strong collaborations with high-profile industrial groups such as Renault, EADS and AREVA. The Electronics department is a national and international leader in its field, with publications in high-level venues (IEEE Journal of Solid-State Circuits, IEEE Transactions on Neural Networks, ...). The Computer Vision department is gaining recognition and has reached an international level, with papers in some of the best venues (IEEE PAMI, IEEE Signal Processing Letters, IEEE Transactions on Instrumentation and Measurement, Optics Express, Pattern Recognition, etc.).

The supervisors of this PhD project, Professors Fan YANG and Dominique GINHAC and Associate Professor Cyrille MIGNIOT, are experts in image processing and computer vision; according to Google Scholar, each of them has been cited more than 1,000 times.

Doctorate title

Doctorate in Instrumentation and Image Processing

Country where the doctorate is awarded

France

Doctoral school

ED - SPIM

Candidate profile

The candidate is expected to have a Master's degree or equivalent qualification in a relevant subject area (Image Processing, Applied Mathematics, Computer Science or Engineering). Strong analytical and mathematical skills are essential. Solid programming experience (C++ and/or MATLAB and/or CUDA) is preferable. The candidate is expected to have good communication skills, especially in written English, and must be committed to delivering excellence in research.

Application deadline

13/07/2018