Abstract: Emotion recognition is a challenging problem in Brain-Computer Interaction (BCI). Electroencephalogram (EEG) signals provide unique information about the brain activity evoked by emotional stimuli, which is one of their most substantial advantages over facial expression, tone of voice, or speech for emotion recognition tasks. However, the scarcity of EEG data and the high dimensionality of EEG recordings make it difficult to build effective classifiers with high accuracy. In this study, feature extraction and data augmentation techniques are proposed to address the high dimensionality and the lack of data, respectively. The proposed augmentation strategy is based on deep generative models, namely a Conditional Wasserstein GAN (CWGAN), which is applied to the extracted features to generate additional synthetic EEG features. The DEAP dataset is used to evaluate the effectiveness of the proposed method. Finally, a standard support vector machine and a deep neural network with different hyperparameter settings were implemented to build effective models. Experimental results show that using the additional augmented data enhances the performance of EEG-based emotion recognition models: the mean classification accuracy after data augmentation increases by 6.5% for valence and 3.0% for arousal.
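To make the augmentation strategy concrete, the sketch below shows a minimal conditional Wasserstein GAN over extracted EEG feature vectors, assuming PyTorch. The layer sizes, noise dimension, feature dimension, binary valence/arousal labels, and the use of weight clipping (rather than, e.g., a gradient penalty) are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal CWGAN sketch for augmenting EEG feature vectors (assumed setup,
# not the paper's exact architecture). Requires PyTorch.
import torch
import torch.nn as nn

FEAT_DIM, NOISE_DIM, N_CLASSES = 160, 64, 2  # assumed dimensions

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, FEAT_DIM),
        )

    def forward(self, z, y_onehot):
        # Condition the generator by concatenating noise with the class label.
        return self.net(torch.cat([z, y_onehot], dim=1))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),  # unbounded Wasserstein score, no sigmoid
        )

    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

def train_step(G, D, opt_g, opt_d, real_x, y, n_critic=5, clip=0.01):
    y1h = torch.nn.functional.one_hot(y, N_CLASSES).float()
    # Critic: maximize D(real) - D(fake); weight clipping enforces the
    # Lipschitz constraint (a gradient penalty is a common alternative).
    for _ in range(n_critic):
        z = torch.randn(real_x.size(0), NOISE_DIM)
        fake_x = G(z, y1h).detach()
        loss_d = D(fake_x, y1h).mean() - D(real_x, y1h).mean()
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        for p in D.parameters():
            p.data.clamp_(-clip, clip)
    # Generator: maximize D(fake), i.e. minimize -D(fake).
    z = torch.randn(real_x.size(0), NOISE_DIM)
    loss_g = -D(G(z, y1h), y1h).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    G, D = Generator(), Critic()
    opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
    opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)
    # Placeholder batch standing in for real DEAP feature vectors and labels.
    x = torch.randn(32, FEAT_DIM)
    y = torch.randint(0, N_CLASSES, (32,))
    print(train_step(G, D, opt_g, opt_d, x, y))
```

After training, the generator can be sampled per emotion class to produce synthetic feature vectors, which are appended to the real features before fitting the SVM or deep neural network classifiers described above.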