This project systematically studies homogeneous and heterogeneous 3D face recognition in uncontrolled settings, e.g., faces moving freely while the subject talks, walks, or continuously changes head pose. It covers 4D face data preprocessing (de-noising, face detection, landmarking, pose estimation, reconstruction using low-quality models, etc.), shape-only 4D face recognition, textured 4D face recognition, 3D-2D heterogeneous face recognition, and 4D-3D heterogeneous face recognition. These topics cover the critical problems of 3D face recognition in the real world, and extend traditional static 3D face recognition to 4D homogeneous face recognition and 3D-related heterogeneous face recognition. This research comprehensively advances the theoretical framework of 3D face recognition, which also involves issues in the 2D and 4D domains, and provides fundamental support for applying 3D face recognition under real-world conditions.
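The 4D preprocessing steps listed above (de-noising, landmarking, pose estimation) can be illustrated with a minimal sketch. This is not the project's method; it assumes a toy sequence of frame-to-frame corresponding point clouds, uses temporal averaging for de-noising, takes the maximum-depth point as a crude nose-tip landmark, and estimates coarse pose from the cloud's principal axes. All function names and the synthetic data are hypothetical.

```python
import numpy as np

def temporal_denoise(frames, window=3):
    """De-noise a 4D sequence by a temporal moving average.
    frames: array (T, N, 3) -- T frames of N corresponding 3D points."""
    T = len(frames)
    out = np.empty_like(frames)
    half = window // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        out[t] = frames[lo:hi].mean(axis=0)
    return out

def nose_tip(points):
    """Crude landmark: the point with maximum depth (z) on a frontal face."""
    return points[np.argmax(points[:, 2])]

def head_pose_axes(points):
    """Coarse pose estimate: principal axes of the point cloud via SVD."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt  # rows are the three principal directions

# Synthetic 4D sequence: 5 noisy frames of a paraboloid "face" surface.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
base = np.stack([x.ravel(), y.ravel(), 1 - x.ravel()**2 - y.ravel()**2], axis=1)
frames = base[None] + rng.normal(scale=0.02, size=(5, *base.shape))

smooth = temporal_denoise(frames)
tip = nose_tip(smooth[0])
axes = head_pose_axes(smooth[0])
```

In a real pipeline each stage would be replaced by a robust component (statistical outlier removal, learned landmark detectors, model-based pose fitting), but the data flow from raw 4D frames to per-frame landmarks and pose is the same.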

In social media, the goal of image implicit-semantics understanding is to analyze the views, intents, etc. that users want to express through their posted images. It is extremely important for many real-world applications, such as public opinion monitoring and detection of abnormal user behavior. However, the automatic understanding of image implicit semantics is a research task only just beginning to be explored in the literature. For understanding image implicit semantics in social media, topic detection and tracking, facial attribute analysis, and communicative intent understanding are three key issues, and this project focuses on them. For topic detection and tracking, we jointly analyze multi-modal data to model the semantic structures of topics and generate meaningful event descriptions. For image-based facial attribute analysis, we exploit rich contexts in images for facial expression recognition and age estimation. For image communicative intent understanding, we automatically extract features with deep learning methods and analyze images containing various types of people and events.
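A common pattern behind the attribute-analysis step above is to place a simple classifier on top of automatically extracted deep features. The sketch below is illustrative only: it stands in for CNN features with synthetic clustered vectors and trains a multinomial logistic-regression head for a hypothetical three-class expression task; none of the data or labels come from the project.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, n_classes, lr=0.5, epochs=200):
    """Multinomial logistic regression over precomputed feature vectors."""
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]  # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W + b)
        W -= lr * X.T @ (P - Y) / len(X)
        b -= lr * (P - Y).mean(axis=0)
    return W, b

# Toy stand-in for deep features of faces with 3 expression labels.
rng = np.random.default_rng(1)
centers = rng.normal(size=(3, 8))           # one cluster center per class
y = np.repeat(np.arange(3), 50)
X = centers[y] + rng.normal(scale=0.3, size=(150, 8))

W, b = train_softmax(X, y, n_classes=3)
pred = softmax(X @ W + b).argmax(axis=1)
accuracy = (pred == y).mean()
```

In practice the feature extractor itself would be a pretrained or fine-tuned deep network, and contextual cues (scene, co-occurring objects, text) would be fused into the feature vector before classification.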

The current project investigates how to reconstruct perceived dynamic face images from functional MRI signals, a topic of intense interest in both neuroscience and computer vision. Because face images convey many kinds of attributes, and these attributes are expressed mostly through high-level features, we establish multi-dimensional relationships between face-image visual stimuli and their corresponding fMRI brain responses, and use optimized machine-learning algorithms to achieve precise face image reconstruction. We hope the study supported by this project will provide new ideas and tools for exploring the mechanisms of dynamic face perception, and also provide new neuroscience evidence for developing novel artificial intelligence algorithm frameworks.
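The stimulus-response mapping described above can be sketched at its simplest as linear decoding: regressing stimulus feature vectors from voxel patterns with ridge regression, after which a separate (omitted) synthesis step would render images from the decoded features. Everything below is a simulation under assumed linear voxel responses, not the project's model, and the feature space is hypothetical.

```python
import numpy as np

def ridge_fit(V, S, alpha=1.0):
    """Closed-form ridge regression mapping voxel patterns V (n, d)
    to stimulus features S (n, k): W = (V'V + alpha*I)^-1 V'S."""
    d = V.shape[1]
    return np.linalg.solve(V.T @ V + alpha * np.eye(d), V.T @ S)

# Simulated experiment: each face stimulus has a low-dimensional feature
# vector; voxels respond linearly to those features plus noise.
rng = np.random.default_rng(2)
n_train, n_test, n_vox, n_feat = 200, 20, 50, 5
S_train = rng.normal(size=(n_train, n_feat))
S_test = rng.normal(size=(n_test, n_feat))
G = rng.normal(size=(n_feat, n_vox))                      # voxel encoding weights
V_train = S_train @ G + 0.1 * rng.normal(size=(n_train, n_vox))
V_test = S_test @ G + 0.1 * rng.normal(size=(n_test, n_vox))

W = ridge_fit(V_train, S_train, alpha=1.0)
S_hat = V_test @ W                                        # decoded features
corr = np.corrcoef(S_hat.ravel(), S_test.ravel())[0, 1]   # decoding fidelity
```

Real fMRI decoding additionally contends with hemodynamic lag, voxel selection, and nonlinearity, and dynamic (video) stimuli require modeling temporal structure, which is where the optimized algorithms proposed in the project come in.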