Face understanding based on multimodal data, including classification and synthesis of important attributes such as identity, gender, race, and expression.
Analysis of identity, gender, and age by analyzing gait.
Highlight summarization, statistics collection, and tactical analysis based on tracking athletes, balls, and other targets.
Detection, tracking, and identification of major targets such as people and vehicles, and analysis of their actions, behaviors, and related events.
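A minimal sketch, for illustration only, of the data-association step behind simple tracking-by-detection: detections in consecutive frames are greedily matched by intersection-over-union. The box coordinates and the 0.3 threshold below are hypothetical stand-ins, not the lab's actual method.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match(prev_boxes, curr_boxes, threshold=0.3):
    """Greedily match current detections to previous tracks by highest IoU."""
    matches, used = [], set()
    for i, p in enumerate(prev_boxes):
        best_j, best_iou = -1, threshold
        for j, c in enumerate(curr_boxes):
            if j not in used and iou(p, c) > best_iou:
                best_j, best_iou = j, iou(p, c)
        if best_j >= 0:
            matches.append((i, best_j))
            used.add(best_j)
    return matches

if __name__ == "__main__":
    prev = [(10, 10, 50, 80), (100, 40, 160, 120)]   # boxes in frame t-1
    curr = [(12, 12, 52, 82), (300, 40, 360, 120)]   # boxes in frame t
    print(match(prev, curr))  # [(0, 0)]: first track continues, second is lost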
Merging multi-source remote sensing data to generate fused images.
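As a hedged illustration of pixel-level fusion (not the actual fusion method), two coregistered source images can simply be blended by weighted averaging; the synthetic inputs and weight below are hypothetical, and practical pipelines such as pansharpening are far more involved.

# Illustrative pixel-level fusion of two coregistered images by weighted averaging.
# Synthetic arrays are used so the sketch runs without any input files.
import numpy as np

def fuse_images(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend two coregistered images of identical shape."""
    assert img_a.shape == img_b.shape, "images must be coregistered and equally sized"
    fused = alpha * img_a.astype(np.float32) + (1.0 - alpha) * img_b.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    optical = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in optical band
    sar = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # stand-in coregistered SAR band
    print(fuse_images(optical, sar).shape)  # (64, 64)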
Detecting specific objects, such as buildings, from high-resolution remote sensing images.
Detecting changed regions from multi-temporal high-resolution remote sensing imagery.
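For illustration only, a basic change-detection baseline compares two coregistered acquisitions pixel by pixel and thresholds the absolute difference; the threshold and synthetic inputs below are hypothetical, and practical systems use learned or adaptive decision rules.

# Illustrative change detection between two coregistered grayscale images
# of the same scene acquired at different times.
import numpy as np

def change_mask(img_t1: np.ndarray, img_t2: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Return a binary mask marking pixels whose intensity changed notably."""
    diff = np.abs(img_t1.astype(np.float32) - img_t2.astype(np.float32))
    return (diff > threshold).astype(np.uint8)

if __name__ == "__main__":
    t1 = np.full((8, 8), 100, dtype=np.uint8)  # stand-in acquisition at time 1
    t2 = t1.copy()
    t2[2:4, 2:4] = 200                         # simulate a changed region
    print(change_mask(t1, t2).sum())           # 4 changed pixels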
Automatically detecting and classifying expressions in facial images.
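A minimal sketch, assuming face images have already been converted to fixed-length feature vectors: a linear SVM from scikit-learn can then be trained to classify expressions. The feature dimensionality, label set, and random training data below are hypothetical placeholders.

# Illustrative expression classification on precomputed face features.
import numpy as np
from sklearn.svm import LinearSVC

EXPRESSIONS = ["neutral", "happy", "sad", "angry", "surprised"]  # hypothetical label set

def train_expression_classifier(X_train: np.ndarray, y_train: np.ndarray) -> LinearSVC:
    """Fit a linear SVM on face features (one row per face image)."""
    clf = LinearSVC()
    return clf.fit(X_train, y_train)

if __name__ == "__main__":
    # Random stand-in data purely to make the sketch self-contained.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(100, 128))                  # 128-D face features
    y_train = rng.integers(0, len(EXPRESSIONS), size=100)  # expression class indices
    clf = train_expression_classifier(X_train, y_train)
    print(EXPRESSIONS[clf.predict(X_train[:1])[0]])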
By analyzing signals from multiple modalities, including audio, video, and text, human emotional states can be modeled more accurately and understood more deeply.
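One simple way to combine modalities, shown here purely as a sketch, is late fusion: each modality produces a probability distribution over the same emotion classes, and the distributions are averaged. The class names, weights, and per-modality scores below are hypothetical.

# Illustrative late fusion of per-modality emotion scores.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # hypothetical class set

def fuse_scores(audio: np.ndarray, video: np.ndarray, text: np.ndarray,
                weights=(0.3, 0.4, 0.3)) -> str:
    """Average modality-level class probabilities and return the top emotion."""
    stacked = np.stack([audio, video, text])           # shape: (3, num_classes)
    fused = np.average(stacked, axis=0, weights=weights)
    return EMOTIONS[int(np.argmax(fused))]

if __name__ == "__main__":
    audio = np.array([0.1, 0.2, 0.6, 0.1])  # stand-in audio model output
    video = np.array([0.2, 0.1, 0.5, 0.2])  # stand-in video model output
    text  = np.array([0.3, 0.1, 0.4, 0.2])  # stand-in text model output
    print(fuse_scores(audio, video, text))  # expected: "angry"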