Visual data are abundant in our daily lives, from audiovisual archives and videos available on the Web to surveillance footage. Such data are highly rich in terms of colour, texture, and motion. The hard problems in automatic video analysis stem from the appearance variability of objects and persons (due to changes in position, lighting conditions, or facial expression, and possible occlusions), and from the large volume and heterogeneity of such data. The team develops motion and appearance models for the detection, recognition, and analysis of events, activities, and affective states. Possible applications include:
- human-computer interaction, such as gesture analysis via video cameras for command input, or facial expression analysis to enable affective and empathic computing;
- security, such as crowd motion analysis and person tracking to detect abnormal situations;
- marketing, such as gender (male/female) recognition of people facing an interactive display, for ad personalisation;
- and multimedia indexing, such as automatic content-based structuring of audiovisual archives.