Overview
This web page provides the KAIST Face Multi-Pose Multi-Illumination (MPMI) Dataset. The dataset was first introduced for a facial expression recognition task in [1], and for a face verification task in [2]. This page provides details of the dataset for researchers who would like to reproduce the experiments in [1, 2] or test their own methods on the dataset. Note that the dataset is for research purposes only. If you wish to use the KAIST Face Multi-Pose Multi-Illumination dataset, please cite papers [1, 2].
Face MPMI Datasets
The KAIST Face MPMI dataset is a dynamic facial dataset recorded from 104 subjects. The dataset was recorded in two sessions: one in which the subjects wore eyeglasses and one in which they did not. Each face sequence was recorded simultaneously by 13 web cameras (Logitech Carl Zeiss Tessar HD 1080p), resulting in 13 pose variations. The subjects were then asked to repeat each expression under different illumination conditions. Four illumination conditions were recorded (room illumination, bright illumination, left illumination, and right illumination). The dataset is divided into two subsets: a basic facial expression subset and a face verification subset. For the basic facial expression subset, sequences of seven expressions (neutral, anger, disgust, fear, happiness, sadness, and surprise) were collected. For the verification subset, sequences of subjects performing the happiness expression at the frontal pose (yaw angle: 0°, pitch angle: 0°) under the four illumination conditions were collected; this subset was named the KAIST smile dataset in [2].
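The factor space of the expression subset described above can be enumerated to estimate its size. This is a rough sketch assuming every subject completed both sessions and all pose, illumination, and expression conditions; the identifier names below are illustrative assumptions, not the dataset's actual file naming.

```python
from itertools import product

# Factors of the basic facial expression subset, as described above.
# Names are hypothetical labels for illustration only.
SUBJECTS = range(1, 105)                      # 104 subjects
SESSIONS = ["glasses", "no_glasses"]          # two recording sessions
POSES = range(13)                             # 13 simultaneous camera views
ILLUMINATIONS = ["room", "bright", "left", "right"]
EXPRESSIONS = ["neutral", "anger", "disgust", "fear",
               "happiness", "sadness", "surprise"]

# One face sequence per combination, assuming full completion.
sequences = list(product(SUBJECTS, SESSIONS, POSES, ILLUMINATIONS, EXPRESSIONS))
print(len(sequences))  # 104 * 2 * 13 * 4 * 7 = 75712
```

Since the 13 views are captured simultaneously, each recording trial yields 13 of these sequences at once.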
To request and download the dataset, please contact ivylab@kaist.ac.kr, stating your affiliation and the research objective for which you intend to use the dataset. Please remember that this dataset is provided for academic research only, and any use of the dataset for commercial applications is prohibited.
More details about the MPMI Dataset: [Details]
If you use the dataset, please cite:
[1] Wissam J. Baddar and Yong Man Ro, “Mode Variational LSTM Robust to Unseen Modes of Variation: Application to Facial Expression Recognition”, to appear in AAAI 2019.
[2] Seong Tae Kim and Yong Man Ro, “Attended Relation Feature Representation of Facial Dynamics for Facial Authentication”, IEEE Transactions on Information Forensics and Security, 2018.