Attention mechanisms in deep learning models for Twitter sentiment analysis
Document type: Master thesis
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.
Sentiment Analysis on social media platforms such as Twitter can provide valuable information about users' opinions. The singularity of these data lies in their short format and informal language. In recent years, Deep Learning models such as Recurrent Neural Networks and Convolutional Neural Networks have been widely studied for this task, reaching promising results when combined with word embedding mechanisms. In this master thesis, we first review the foundations of Sentiment Analysis and Deep Neural Networks, and then present several Deep Learning models. Seeking to improve the performance of these models, attention mechanisms such as the self-attention of the Transformer encoder are introduced and incorporated into them. All of the presented models are trained on the same dataset in order to evaluate them and to analyze the impact of adding attention mechanisms to Deep Neural Networks in a Sentiment Analysis task.
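As a rough illustration of the self-attention mechanism mentioned in the abstract, the following sketch implements scaled dot-product self-attention over a sequence of word-embedding vectors with NumPy. This is a minimal, generic formulation (as in the Transformer encoder), not the thesis' actual implementation; the matrix shapes and random projections are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence.

    X: (seq_len, d_model) word-embedding vectors.
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    (random here, purely for illustration).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights               # contextualized vectors

# Toy example: 5 tokens, embedding size 8, attention size 4.
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each output row is a weighted mixture of all token representations, letting the model attend to the most sentiment-relevant words in a tweet regardless of their position.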