Affective States Classification Performance of Audio-Visual Stimuli from EEG Signals with Multiple-Instance Learning


Date

2022


Publisher

TÜBİTAK Scientific & Technological Research Council Turkey


Abstract

Emotion recognition remains an essential subject of study across various disciplines. With the advancement of machine learning methods, accurate emotion recognition from different data modalities (facial images, EEG brain signals) has become possible. The success of EEG-based emotion recognition systems depends on efficient feature extraction and pre/postprocessing of signals. The main objective of this study is to analyze the efficacy of multiple-instance learning (MIL) on postprocessed features of EEG signals across three domains (time, frequency, time-frequency) for human emotion classification. Methods and results are presented for single-trial classification of valence (V), arousal (A), and dominance (D) ratings from EEG signals obtained with audio (A), video (V), and audio-video (AV) stimuli, using the alpha, beta, and gamma bands. High accuracy was observed for both binary and multiclass classification with the AV stimulus. The findings suggest that MIL applied to frequency-domain features yields efficient results for EEG emotion recognition.
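The abstract's core idea — treating a trial as a bag of windowed EEG segments, describing each segment by alpha/beta/gamma band power, and classifying at the bag level — can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the window length, band edges, naive DFT band-power estimate, max-pooling aggregation, and the toy linear scorer are all assumptions made for the example.

```python
import math

def band_power(signal, fs, lo, hi):
    # Naive DFT band power: sum the squared magnitudes of the frequency
    # bins whose frequency falls inside [lo, hi) Hz. (A real pipeline
    # would use an FFT-based estimator such as Welch's method.)
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if lo <= f < hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

def make_instances(trial, fs, win):
    # Split one EEG trial into fixed-length windows (the MIL "instances")
    # and describe each window by its alpha/beta/gamma band power.
    bands = [(8, 13), (13, 30), (30, 45)]  # assumed band edges in Hz
    instances = []
    for start in range(0, len(trial) - win + 1, win):
        seg = trial[start:start + win]
        instances.append([band_power(seg, fs, lo, hi) for lo, hi in bands])
    return instances

def bag_score(instances, weights):
    # Standard MIL max-pooling: the bag (trial) takes the score of its
    # highest-scoring instance (window).
    return max(sum(w * x for w, x in zip(weights, inst)) for inst in instances)

if __name__ == "__main__":
    fs, win = 128, 64
    # Synthetic trials: one beta-dominant (20 Hz), one alpha-dominant (10 Hz).
    high = [math.sin(2 * math.pi * 20 * t / fs) for t in range(256)]
    low = [math.sin(2 * math.pi * 10 * t / fs) for t in range(256)]
    w = [-1.0, 1.0, 0.0]  # toy scorer: reward beta power, penalize alpha
    print(bag_score(make_instances(high, fs, win), w) >
          bag_score(make_instances(low, fs, win), w))  # → True
```

In a real MIL setting the scorer's weights would be learned from labeled bags rather than fixed by hand; the point here is only the bag/instance decomposition and the max-pooling aggregation step.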

Description

Dasdemir, Yasar/0000-0002-9141-0229; Ozakar, Rustem/0000-0002-7724-6848

Keywords

Emotion Recognition, EEG, Multiple-Instance Learning, Time Domain, Frequency Domain, Time-Frequency Domain

WoS Q

Q3

Scopus Q

Q2

Source

Turkish Journal of Electrical Engineering and Computer Sciences

Volume

30

Issue

7

Start Page

2707

End Page

2724
