WIP....
Sentiment analysis is the computational study of opinions, sentiments, and subjectivity embedded in user-generated textual documents. Usually, the text is classified into positive, negative, or neutral sentiment. The most interesting user-generated textual documents are reviews about entities, products, or services. These documents are of interest to the organizations that own the entities or provide the services: supermarkets, movie producers, restaurants, etc. A more fine-grained analysis of a document involves identifying the various aspects of the main entity and classifying the opinion expressed about each identified aspect. This type of analysis is termed aspect-based sentiment analysis (ABSA), and it has gained popularity and importance over the last decade.
Exploiting BERT for Aspect-Based Sentiment Analysis tasks.
todo
ABSA typically involves two sub-tasks (a minimal sketch follows this list):
- Aspect-Based Text Extraction (ABTE): The goal of ABTE is to identify and extract the aspects or attributes mentioned in the text. It is formulated as a sequence labeling task, where each word in the text is assigned a label indicating whether it belongs to a particular aspect or not.
- Aspect-Based Sentiment Analysis (ABSA): The goal of ABSA is to determine the sentiment polarity (positive, negative, or neutral) expressed towards each identified aspect. It is formulated as a classification task, where the sentiment polarity is assigned to each aspect extracted from the text.
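To make the two formulations concrete, the sketch below sets them up with off-the-shelf Hugging Face BERT heads. The checkpoint, the BIO-style label set, and the way the aspect is paired with the sentence are illustrative assumptions, not necessarily the exact setup used in this repository.

```python
# Sketch only: ABTE as token classification, ABSA as sentence-pair classification.
from transformers import (
    BertTokenizerFast,
    BertForTokenClassification,
    BertForSequenceClassification,
)

sentence = "The pasta was great but the service was slow"
# ABTE target: one BIO tag per word, marking the aspect terms "pasta" and "service".
word_tags = ["O", "B-ASP", "O", "O", "O", "O", "B-ASP", "O", "O"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# ABTE: sequence labeling -- the model predicts a label for every (sub)token.
abte_model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=3)  # O, B-ASP, I-ASP
enc = tokenizer(sentence, return_tensors="pt")
tag_logits = abte_model(**enc).logits          # shape: (1, seq_len, 3)

# ABSA: classification -- the sentence and one extracted aspect are fed as a
# sentence pair, and the model predicts one of three polarities.
absa_model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)  # neg / neu / pos
pair = tokenizer(sentence, "service", return_tensors="pt")
polarity_logits = absa_model(**pair).logits    # shape: (1, 3)
```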

Setup:
- Dataset: https://ai.stanford.edu/~amaas/data/sentiment
- Install the requirements: pip install -r requirements.txt
- Change the path of the dataset in consts.py
- Data needs to be formatted as in Data Pre Label
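The exact contents of consts.py are not reproduced here; the step above amounts to editing a dataset-path constant along these lines (the constant name is hypothetical):

```python
# consts.py -- hypothetical example; point this at your local copy of the dataset.
DATA_PATH = "/path/to/dataset"
```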
Run training and prediction for each task:

python -m train_ABSA ABTE
python -m train_ABSA ABSA
python -m pred_ABSA ABTE
python -m pred_ABSA ABSA

Or use the shell scripts:

train.sh ABTE
prediction.sh ABTE
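The module commands accept the flags documented in the help output below, for example:

python -m train_ABSA ABTE --batch=16 --epochs=3 --lr=3e-5 --adapter=True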
NAME
    train_ABSA.py - Train the model.

SYNOPSIS
    train_ABSA.py WORK_TYPE <flags>

DESCRIPTION
    Train the model.

POSITIONAL ARGUMENTS
    WORK_TYPE
        Which task to train for; choices: ['ABTE', 'ABSA']

FLAGS
    -b, --batch=BATCH
        Default: 16
        Batch size for training.
    -e, --epochs=EPOCHS
        Default: 3
        Number of training epochs.
    --lr=LR
        Default: 3e-05
        Learning rate.
    --lr_schedule=LR_SCHEDULE
        Default: False
        Whether to use learning rate scheduling.
    -a, --adapter=ADAPTER
        Default: True
        Whether to use an Adapter.

NOTES
    You can also use flags syntax for POSITIONAL ARGUMENTS
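The help text above follows the format produced by Python Fire. Assuming Fire (or a similar CLI generator) is used, an entrypoint along the lines below would yield it; the function body and internal names are illustrative, not the repository's actual code.

```python
# Hypothetical Fire-style entrypoint matching the help text above;
# the actual data loading and training logic are omitted.
import fire


def train(work_type, batch=16, epochs=3, lr=3e-5, lr_schedule=False, adapter=True):
    """Train the model.

    Args:
        work_type: Which task to train for; choices: ['ABTE', 'ABSA'].
        batch: Batch size for training.
        epochs: Number of training epochs.
        lr: Learning rate.
        lr_schedule: Whether to use learning rate scheduling.
        adapter: Whether to use an Adapter.
    """
    assert work_type in ("ABTE", "ABSA")
    # ... build the dataset, the (adapter-)BERT model, and run the training loop ...


if __name__ == "__main__":
    fire.Fire(train)
```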
NAME
    pred_ABSA.py - Run prediction with the trained model.

SYNOPSIS
    pred_ABSA.py WORK_TYPE <flags>

DESCRIPTION
    Run prediction with the trained model.

POSITIONAL ARGUMENTS
    WORK_TYPE
        Which task to predict for; choices: ['ABTE', 'ABSA']

FLAGS
    -a, --adapter=ADAPTER
        Default: True
        Whether to use an Adapter.
    -l, --lr_schedule=LR_SCHEDULE
        Default: False
        Whether to use learning rate scheduling.

NOTES
    You can also use flags syntax for POSITIONAL ARGUMENTS