We've included a transformer language model base as well as a 4096-d mLSTM language model base. For examples of how to use these models, please see our [finetuning](#classifier-finetuning) and [transfer](#sentiment-transfer) sections. Even though these models were trained with FP16, they can be used in FP32 training/inference.
and classifiers trained on a subset of SemEval emotions corresponding to the 8 Plutchik emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust):
To use classification models that reproduce results from our original large batch language modeling paper, please use the following [commit hash and set of models](https://github.com/NVIDIA/sentiment-discovery/tree/7f5ab28918a6fc29318a30f557b9454f0f5cc26a#pretrained-models).
We did not include pretrained models leveraging ELMo. To reproduce our papers' results with ELMo, please see our [available resources](./analysis/reproduction.md#elmo-comparison).
Each file contains a dictionary with a PyTorch `state_dict` consisting of a language model (`lm_encoder` keys) trained on Amazon reviews and a classifier (`classifier` key), as well as the accompanying `args` needed to run a model with that `state_dict`.
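For a quick sanity check after downloading, a checkpoint can be inspected along these lines. This is a minimal sketch: the file name is a placeholder for whichever release you grabbed, and the `'sd'` key is an assumption about how the `state_dict` may be nested inside the saved dictionary.

```
import torch

# Minimal sketch: inspect a downloaded checkpoint ('mlstm_clf.pt' is a placeholder name).
checkpoint = torch.load('mlstm_clf.pt', map_location='cpu')
print(checkpoint.keys())                               # look for the state_dict and `args`

state_dict = checkpoint.get('sd', checkpoint)          # assumption: state_dict may be stored under 'sd'
lm_keys  = [k for k in state_dict if k.startswith('lm_encoder')]
clf_keys = [k for k in state_dict if k.startswith('classifier')]
print(len(lm_keys), 'language model tensors;', len(clf_keys), 'classifier tensors')
```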
```
bash ./experiments/train_mlstm_singlenode.sh #run our mLSTM training script on 1 DGX-1V
```
Lastly it performs feature selection to try and fit a regression model to the top neurons. By default only one neuron is used for this second regression.
```
python3 transfer.py --load mlstm.pt #performs transfer to SST, saves results to `<model>_transfer/` directory
python3 transfer.py --load mlstm.pt --neurons 5 #use 5 neurons for the second regression
python3 transfer.py --load mlstm.pt --fp16 #run model in fp16 for featurization step
```
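For intuition, here is a rough sketch of the two-stage regression described above (not the repository's `transfer.py`): fit a logistic regression over all language model features, keep the most heavily weighted neurons, and refit on just those. The random features below stand in for real 4096-d mLSTM hidden states.

```
import numpy as np
from sklearn.linear_model import LogisticRegression

def top_neuron_regression(feats, labels, n_neurons=1):
    """Fit on all features, keep the n most heavily weighted neurons, refit on them."""
    full = LogisticRegression(max_iter=1000).fit(feats, labels)
    top = np.argsort(np.abs(full.coef_[0]))[::-1][:n_neurons]   # rank neurons by |weight|
    reduced = LogisticRegression(max_iter=1000).fit(feats[:, top], labels)
    return top, reduced

# Stand-in data: in practice these would be mLSTM features of SST sentences.
feats = np.random.randn(256, 4096).astype(np.float32)
labels = np.random.randint(0, 2, size=256)
neurons, clf = top_neuron_regression(feats, labels, n_neurons=5)
print('selected neurons:', neurons)
```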
Expected test accuracy for transferring fully trained mLSTM models to sentiment classification, for a given mLSTM hidden size:
This script supports building arbitrary multilabel, multilayer, and multihead perceptron classifiers.
Lastly, this script supports automatically selecting classification thresholds from validation performance. To measure validation performance it reports several metrics, including F1 score, Matthews correlation coefficient, Jaccard index, recall, precision, and accuracy.
```
python3 finetune_classifier.py --load mlstm.pt --lr 2e-5 --aux-lm-loss --aux-lm-loss-weight .02 #finetune mLSTM model on sst (default dataset) with auxiliary loss
python3 finetune_classifier.py --load mlstm.pt --automatic-thresholding --threshold-metric f1 #finetune mLSTM model on sst and automatically select classification thresholds based on the validation f1 score
python3 finetune_classifier.py --tokenizer-type SentencePieceTokenizer --vocab-size 32000 \ #finetune transformer with sentencepiece on SST
python3 finetune_classifier.py --automatic-thresholding --non-binary-cols l1 l2 l3 --lr 2e-5\ #finetune multilayer classifier with 3 classes and 4 heads per class on some custom dataset and automatically select classification thresholds
  --classifier-hidden-layers 2048 1024 3 --heads-per-class 4 --aux-head-variance-loss-weight 1. #`aux-head-variance-loss-weight` is an auxiliary loss to increase the variance between each of the 4 heads' weights
```
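As a rough illustration of the automatic thresholding idea (a sketch, not the script's actual implementation): sweep candidate thresholds per label on held-out validation scores and keep the one that maximizes the chosen metric, here F1.

```
import numpy as np
from sklearn.metrics import f1_score

def select_thresholds(val_probs, val_labels, candidates=np.linspace(0.05, 0.95, 19)):
    """Pick, per label, the decision threshold that maximizes validation F1.

    val_probs:  (num_examples, num_labels) predicted probabilities
    val_labels: (num_examples, num_labels) binary ground truth
    """
    thresholds = []
    for j in range(val_probs.shape[1]):
        scores = [f1_score(val_labels[:, j], val_probs[:, j] > t) for t in candidates]
        thresholds.append(candidates[int(np.argmax(scores))])
    return np.array(thresholds)

# Toy usage with random validation scores for 8 labels (e.g. the Plutchik emotions).
probs = np.random.rand(100, 8)
labels = (np.random.rand(100, 8) > 0.5).astype(int)
print(select_thresholds(probs, labels))
```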
analysis/reproduction.md (+16 -3)
Contrary to results in the OpenAI work, the validation reconstruction loss is low.
### mLSTM Training Set Up
It took several cycles of trial and error to come up with a result comparable to the original. Some things were not entirely apparent from the paper; key model details were often hidden in a single line and took several tries to get right. Other minutiae we worked out independently. We've included what we found to work well.
***Model**: 4096-d mLSTM, 64-d embedding, 256-d output. (We also trained a similarly parameterized LSTM.)
***Weight Norm**: Applied only to LSTM parameters (hidden->hidden/gate weights), not embedding or output.
***Optimizer**: Adam
***Learning Rate**: 5e-4 per batch of 128. Linear learning rate decay to 0 over the course of an epoch.
***Gradient Clipping**: We occasionally ran into problems with destabilizing gradient explosions. Therefore, we clipped our gradients to a maximum of `1.` (a minimal optimizer sketch illustrating these settings follows this list).
***Hardware**: 8 Volta-class GPUs
***Learning Rate Scaling**: We took cues from recent work in training ImageNet at scale and leveraged [FAIR's (Goyal et al. 2017)](https://arxiv.org/pdf/1706.02677.pdf) linear scaling rule. However, after sufficient experimentation we found that learning rate scaling did not work well at all batch sizes, so we capped our max learning rate at 3e-3. We also found that using a linear decay over 100k steps for global batch sizes greater than 2048 worked well in our case.
***Training Time**: With FP16 training it takes approximately 17 hours to train.
***Training command**: To run this training experiment, run `./experiments/train_mlstm_singlenode.sh`.
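A minimal PyTorch sketch of the optimizer, schedule, and clipping settings listed above. The tiny character-level LSTM and random batches are stand-ins for the repository's 4096-d mLSTM and the Amazon review pipeline, and `steps_per_epoch` is an assumption; weight norm is omitted for brevity.

```
import torch
import torch.nn as nn

class TinyCharLM(nn.Module):
    """Shrunken stand-in for the 4096-d mLSTM (64-d embedding, byte-level vocab)."""
    def __init__(self, vocab=256, emb=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)
    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)

model = TinyCharLM()
steps_per_epoch = 100                                           # assumption; depends on corpus size
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)       # 5e-4 per batch of 128
scheduler = torch.optim.lr_scheduler.LambdaLR(                  # linear decay to 0 over the epoch
    optimizer, lambda s: max(0.0, 1.0 - s / steps_per_epoch))
criterion = nn.CrossEntropyLoss()

for _ in range(steps_per_epoch):
    tokens = torch.randint(0, 256, (8, 33))                     # fake byte sequences
    logits = model(tokens[:, :-1])
    loss = criterion(logits.reshape(-1, 256), tokens[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), 1.0)           # clip gradients at 1.
    optimizer.step()
    scheduler.step()
```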
### Transformer Training Set Up
The transformer model has demonstrated its capabilities in recent work as a state-of-the-art language model for natural language understanding. We similarly leveraged the transformer in our work on [Practical Text Classification With Large Pre-Trained Language Models](https://arxiv.org/abs/1812.01207). The transformer we used was pretrained as follows.
***Model**: Transformer with 12 layers, 8 attention heads, hidden size of 768, and an embedding size of 3072. Positional embeddings up to length 256 were used.
***Weight Norm**: Applied only to transformer and output head parameters, not embedding parameters.
***Optimizer**: Adam
***Learning Rate**: 1e-4 with a cosine annealing schedule (see the scheduler sketch after this list)
***Data set**: Aggressively Deduplicated Amazon Review dataset with 1000/1/1 train/test/validation shards. Each of the three sets is internally shuffled.
***Hardware**: 1 DGX-1V with 8 V100 GPUs
***Learning Rate Scaling**: In our experience we found that learning rate scaling as a function of available compute did not help train our transformer, and that a learning rate of 1e-4 across all global batch sizes was simple and performed well.
***Training time**: With FP16 training it takes approximately 3 days to train.
***Training command**: To run this training experiment, run `./experiments/train_transformer_singlenode.sh`.
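A minimal sketch of the cosine annealing schedule named in the learning rate entry above; the placeholder parameters and total step count are assumptions, not the repository's configuration.

```
import torch

params = [torch.nn.Parameter(torch.zeros(1))]                  # placeholder parameters
optimizer = torch.optim.Adam(params, lr=1e-4)                  # 1e-4 across all global batch sizes
total_steps = 100_000                                          # assumed number of updates
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps)

for step in range(total_steps):
    optimizer.step()                                           # model update would go here
    scheduler.step()
    if step % 25_000 == 0:
        print(step, scheduler.get_last_lr()[0])                # lr follows a cosine curve toward 0
```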
## FP16 Training
Results should line up approximately with below.
To analyze how our pretraining, transfer, and finetuning methods stack up against other state-of-the-art models and techniques, we utilize the publicly available ELMo language model as a baseline. In order to reproduce our results with ELMo, please switch to the [ELMo branch](https://github.com/NVIDIA/sentiment-discovery/tree/elmo).
To train a text classifier with ELMo, we use ELMo as a language model to encode text, and the resulting features are passed to a classifier. The classifier can either be a simple linear layer or a more complex multilayer perceptron. Training can either be performed end to end on both the classifier and the language model, or in a transfer learning setting where only the classifier is trained via logistic regression or SGD.
The following training scripts are capable of reproducing our results with ELMo on SST and the SemEval benchmark challenge. In order to run these scripts you must follow the installation instructions in AllenNLP's [ELMo repository](https://github.com/allenai/allennlp/blob/master/tutorials/how_to/elmo.md). Note that for finetuning we did not use an auxiliary language modeling loss, as ELMo is bidirectional and cannot normally perform left-to-right language modeling.
```
bash ./run_elmo_sk_sst.sh #trains a logistic regression classifier on SST with ELMo
bash ./run_elmo_se_multihead.sh #end to end finetuning of ELMo and a multihead MLP on 8 SemEval categories
```
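As a rough sketch of the transfer learning setting described above (not the repository's scripts): encode tokenized sentences with AllenNLP's `ElmoEmbedder`, pool the three ELMo layers and the tokens into a fixed-size feature, and fit a logistic regression on top. The pooling choice and toy data are assumptions, and the exact API depends on the AllenNLP version pinned by the ELMo branch.

```
import numpy as np
from allennlp.commands.elmo import ElmoEmbedder
from sklearn.linear_model import LogisticRegression

elmo = ElmoEmbedder()  # downloads the default pretrained ELMo weights on first use

def featurize(sentences):
    """Mean-pool ELMo's three layers and all tokens into one vector per sentence."""
    feats = []
    for tokens in sentences:
        layers = elmo.embed_sentence(tokens)        # shape: (3, num_tokens, 1024)
        feats.append(layers.mean(axis=(0, 1)))      # simple pooling choice (an assumption)
    return np.stack(feats)

# Toy stand-in for SST sentences and sentiment labels.
train_sents = [["a", "great", "movie"], ["worst", "film", "ever"]]
train_labels = [1, 0]
clf = LogisticRegression().fit(featurize(train_sents), train_labels)
print(clf.predict(featurize([["pretty", "good"]])))
```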
------
[<- Why Unsupervised Language Modeling?](./unsupervised.md) | [Data Parallel Scalability ->](./scale.md)