INTERNATIONAL JOURNAL OF NOVEL RESEARCH AND DEVELOPMENT International Peer Reviewed & Refereed Journals, Open Access Journal ISSN Approved Journal No: 2456-4184 | Impact factor: 8.76 | ESTD Year: 2016
Emotions are essential to comprehending human interactions. Efforts are being made to discover techniques that can mimic the human capacity to recognize emotions conveyed through facial expressions, variations in speaking tone, and images of faces. Human Expression Recognition (HER) is one such discipline. This paper reviews machine learning classification and deep learning algorithms for human expression recognition systems using multimodal signals. This work can assist individuals in forming relationships and is applicable in various fields, including Human-Computer Interaction (HCI) and the pharmaceutical industry. Speech and video inputs are selected, and the aim is to develop a model that learns from each respective dataset and predicts the emotion class. The primary purpose of this paper is to enable researchers to assess the feasibility of human-computer interfaces that are sensitive to a person's emotions. The reuse of a previously learned model on a new problem is known as transfer learning, which is now popular in deep learning because it can train deep neural networks with a small amount of data. In this paper, we apply a deep learning model, a convolutional neural network (CNN), and compare it with existing models such as the multilayer perceptron and the decision tree classifier. The study aims to improve continuous human expression recognition via video and audio and to report the most recent developments in this field. To improve the accuracy of the existing speech model, we combined two datasets, RAVDESS and TESS, and achieved 87.08% accuracy using a CNN. For the facial expression model, we used the FER-2013 dataset with a transfer learning algorithm, and the model reached approximately 99% accuracy over seven classes.
Keywords:
Emotion recognition, Speech, Video, Deep Learning.
Cite Article:
"Emotion Classifier Using Deep Learning", International Journal of Novel Research and Development (www.ijnrd.org), ISSN:2456-4184, Vol.8, Issue 5, page no.g820-g827, May-2023, Available :http://www.ijnrd.org/papers/IJNRD2305700.pdf