
Emotion-detection

Introduction

This project aims to classify the emotion on a person's face into one of seven categories using deep convolutional neural networks. This repository is an implementation of this research paper. The model is trained on the FER-2013 dataset, which was published at the International Conference on Machine Learning (ICML). The dataset consists of 35887 grayscale 48x48-pixel face images labeled with seven emotions: angry, disgusted, fearful, happy, neutral, sad, and surprised.

Dependencies

  • Python 3, OpenCV, TensorFlow
  • To install the required packages, run pip install -r requirements.txt (a hypothetical version of that file is sketched below).
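For reference, a requirements.txt consistent with the versions pinned in the setup steps below might look like the following. The exact pins are an assumption; the file shipped with the repository is authoritative.

    # Hypothetical requirements.txt; the repository's own file is authoritative.
    cmake
    numpy
    opencv-contrib-python==4.1.2.30
    tensorflow==2.1.0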

I did these steps:

  1. Fork the project to my GitHub repository.
  2. Download the dataset and the trained model.h5 and copy them to the 'src' folder.
  3. Clone the repository locally and open it in the Atom editor.
  4. Create a virtual environment in the conda command prompt (I named it ocv5): conda create --name ocv5 python=3.7, then change into the project's src folder: cd 1.study\testfolder\python\e_detect1\emotion-detection\src
  5. In the conda prompt, run: activate ocv5
  6. Install the requirements into the virtual environment:
     pip install cmake
     pip install numpy
     pip install opencv-contrib-python==4.1.2.30
     pip install tensorflow==2.1.0
  7. Run: python emotions.py --mode display
  8. Press 'e' to terminate the video capture (a minimal sketch of this loop follows the list).
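For context, a webcam loop that exits on the 'e' key (as in step 8) typically looks like the sketch below. This is an assumption based on standard OpenCV usage, not the exact code in emotions.py:

    # Minimal sketch of a webcam loop terminated by the 'e' key.
    # Based on standard OpenCV usage; emotions.py may differ in detail.
    import cv2

    cap = cv2.VideoCapture(0)          # open the default webcam
    while True:
        ret, frame = cap.read()        # grab one frame
        if not ret:
            break
        cv2.imshow('Emotion detection', frame)
        # waitKey returns the pressed key's code; mask to 8 bits for portability
        if cv2.waitKey(1) & 0xFF == ord('e'):
            break                      # 'e' terminates the capture, as in step 8
    cap.release()
    cv2.destroyAllWindows()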

Basic Usage

The repository is currently compatible with tensorflow-2.0 and uses the Keras API via the tensorflow.keras module.

  • First, clone the repository and enter the folder:
    git clone https://github.com/atulapra/Emotion-detection.git
    cd Emotion-detection
  • Download the FER-2013 dataset from here and unzip it inside the src folder. This will create the folder data.

  • If you want to train this model, use:
    cd src
    python emotions.py --mode train
  • If you want to view the predictions without training again, you can download the pre-trained model from here and then run:
    cd src
    python emotions.py --mode display
  • The folder structure is of the form:
    src:
    • data (folder)
    • emotions.py (file)
    • haarcascade_frontalface_default.xml (file)
    • model.h5 (file)
  • By default, this implementation detects emotions on all faces in the webcam feed. With a simple 4-layer CNN, the test accuracy reached 63.2% in 50 epochs (a sketch of a comparable network follows).
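For illustration, a 4-conv-layer CNN for 48x48 grayscale input and seven output classes could be built in tf.keras as below. The layer widths and dropout rates are assumptions for the sketch; emotions.py defines the actual architecture:

    # Hypothetical 4-conv-layer CNN for 48x48 grayscale faces, 7 emotion classes.
    # Layer sizes are illustrative; the repository's model may differ.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Dropout,
                                         Flatten, Dense)

    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(48, 48, 1)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Dropout(0.25),
        Conv2D(128, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Conv2D(128, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Dropout(0.25),
        Flatten(),
        Dense(1024, activation='relu'),
        Dropout(0.5),
        Dense(7, activation='softmax'),   # one score per emotion class
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])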

Accuracy plot

Data Preparation (optional)

  • The original FER-2013 dataset on Kaggle is available as a single csv file. I converted it into a dataset of PNG images for training/testing and provided it as the dataset linked in the previous section.

  • In case you are looking to experiment with new datasets, you may have to deal with data in the csv format. The preprocessing code I wrote is provided in the dataset_prepare.py file for reference; a rough sketch of the conversion follows.
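As a rough illustration of that preprocessing: the FER-2013 csv stores each face as a space-separated string of 2304 pixel values, so a conversion along these lines turns each row into a 48x48 PNG. The column names ('emotion', 'pixels') match the Kaggle csv, but the output layout here is an assumption and dataset_prepare.py may differ:

    # Sketch of converting the FER-2013 csv into 48x48 PNG images.
    # Output directory layout is assumed; dataset_prepare.py is the reference.
    import os
    import numpy as np
    import pandas as pd
    from PIL import Image

    df = pd.read_csv('fer2013.csv')
    for i, row in df.iterrows():
        # each row holds 48*48 = 2304 space-separated grayscale values
        pixels = np.array(row['pixels'].split(), dtype=np.uint8)
        img = Image.fromarray(pixels.reshape(48, 48))        # grayscale image
        out_dir = os.path.join('data', str(row['emotion']))  # one folder per label
        os.makedirs(out_dir, exist_ok=True)
        img.save(os.path.join(out_dir, f'{i}.png'))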

Algorithm

  • First, a Haar cascade classifier is used to detect faces in each frame of the webcam feed.

  • The region of the image containing the face is resized to 48x48 and passed as input to the CNN.

  • The network outputs a list of softmax scores for the seven classes of emotions.

  • The emotion with the maximum score is displayed on the screen (see the sketch after this list).
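Put together, one inference step of that pipeline looks roughly like the sketch below. It assumes the bundled haarcascade_frontalface_default.xml, that model.h5 contains a full saved model, and that the class order is alphabetical; names here are illustrative, not the repository's exact code:

    # One inference step of the detection pipeline described above.
    # Assumes model.h5 is a full saved model and alphabetical class order.
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    EMOTIONS = ['angry', 'disgusted', 'fearful', 'happy',
                'neutral', 'sad', 'surprised']
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    model = load_model('model.h5')

    def predict_emotions(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(gray[y:y+h, x:x+w], (48, 48))       # 48x48 crop
            scores = model.predict(face.reshape(1, 48, 48, 1) / 255.0)[0]
            yield (x, y, w, h), EMOTIONS[int(np.argmax(scores))]  # top class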

Example Output

Multiface

References

  • "Challenges in Representation Learning: A report on three machine learning contests." I Goodfellow, D Erhan, PL Carrier, A Courville, M Mirza, B Hamner, W Cukierski, Y Tang, DH Lee, Y Zhou, C Ramaiah, F Feng, R Li,
    X Wang, D Athanasakis, J Shawe-Taylor, M Milakov, J Park, R Ionescu, M Popescu, C Grozea, J Bergstra, J Xie, L Romaszko, B Xu, Z Chuang, and Y. Bengio. arXiv 2013.
