Generation task on squad dataset. #66

Open · wants to merge 1 commit into main
71 changes: 71 additions & 0 deletions examples/squad/README.md
@@ -0,0 +1,71 @@
# GLM SQuAD Generation Dataset

# Task Description

+ **Dataset Name**: squad
+ **Authors**: Yu-Wen Michael Zhang, [email protected], https://github.com/yuwenmichael
+ **Task Description**: Generate an answer given a context and a question.
+ **Running Commands**: Run the `squad.ipynb` notebook on a GPU.
+ **Results**: Training on 1/38 of the training split (2308 instances) and evaluating on 1/38 of the validation split (278 instances) gives the result below. Because the training subset is randomly sampled, your score may differ slightly (but not by much) if you rerun the notebook yourself.
```
{'exact_match': 92.0863309352518, 'f1': 94.69410937415482}
```
+ **Reference**:
The dataset is SQuAD. The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles. The answer to every question is a segment of text (a span) from the corresponding reading passage, or the question may be unanswerable.
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250}
}
```

The evaluation metrics are F1 score and exact match.
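
For reference, here is a minimal sketch of loading the dataset and computing these two metrics with the Hugging Face `datasets` and `evaluate` libraries; it illustrates the metric input format and is not necessarily the exact code used in the notebook:

```python
from datasets import load_dataset
import evaluate

squad = load_dataset("squad")          # splits: train and validation
squad_metric = evaluate.load("squad")  # returns exact_match and f1

# The SQuAD metric expects prediction/reference pairs keyed by question id.
predictions = [{"id": "q1", "prediction_text": "Denver Broncos"}]
references = [{"id": "q1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```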

# Training Stage
## Save model
You must save your model in order to run the evaluation.
The recommended path for the saved model is `examples/squad/model_gen`, laid out as follows. The folder already contains every configuration file you need except `config.json` and `pytorch_model.bin`, which are generated when you run the code:
```
└── examples
└── squad
├── README.md
├── requirements.txt
├── squad.ipynb
└── model_gen
```
Before running the notebook, customise the path where the model is saved.

The variable `best_model_path = '/home/zyw/squad/model_gen'` is defined in the GenerationTrainerClass section.

Alternatively, `config.zip` contains all the configuration files you need. Unzip it, place the files in the same folder as `pytorch_model.bin` and `config.json`, and you are good to go.
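
A minimal sketch of saving the fine-tuned model to that folder, assuming Hugging Face-style `model` and `tokenizer` objects produced by the training cells of the notebook (the notebook's own saving code may differ):

```python
import os

best_model_path = "examples/squad/model_gen"  # adjust to your own path
os.makedirs(best_model_path, exist_ok=True)

# Writes pytorch_model.bin and config.json next to the configuration files
# already present in model_gen. `model` and `tokenizer` are assumed to be
# the fine-tuned objects created earlier in squad.ipynb.
model.save_pretrained(best_model_path)
tokenizer.save_pretrained(best_model_path)
```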

## Hyperparameters
+ train batch size = 4
+ epochs = 1
+ learning rate = 8e-6
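
For illustration only, these values map onto the Hugging Face `TrainingArguments` API roughly as follows; the notebook's trainer class may expose them under different names:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above; adjust the names to
# match whatever trainer the notebook actually uses.
training_args = TrainingArguments(
    output_dir="examples/squad/model_gen",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=8e-6,
)
```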

## Sampling a small portion of the dataset
### Settings
1. `portion` controls how much of the dataset is sampled: with `portion = 1` the whole dataset is used, and with `portion = 38` only 1/38 of it is used.
2. The train/test split used for sampling is done with `random_state = 1`, as in the sketch below.
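
A minimal sketch of this sampling step, assuming scikit-learn's `train_test_split` is applied to the Hugging Face `squad` dataset (the notebook's exact code may differ):

```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split

portion = 38        # keep 1/portion of the split
random_state = 1    # fixed seed so the sample is reproducible

train_df = load_dataset("squad")["train"].to_pandas()

# Keep a 1/portion fraction of the training data; the remainder is discarded.
train_sample, _ = train_test_split(
    train_df, train_size=1 / portion, random_state=random_state, shuffle=True
)
print(len(train_sample))  # about 87,599 / 38 ≈ 2,305 rows
```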


# Evaluation Stage
### Load the model from the path specified by `best_model_path`
In the provided code, the training set contains 1/38 of the full training split (shuffle = True, 2305 samples) and the validation set contains 1/38 of the full validation split (shuffle = False, 278 samples). The model is trained on the former and evaluated on the latter.
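
A minimal sketch of reloading the saved checkpoint for evaluation, assuming the checkpoint follows the Hugging Face layout described above and ships its custom GLM code (hence `trust_remote_code=True`):

```python
from transformers import AutoModel, AutoTokenizer

best_model_path = "/home/zyw/squad/model_gen"  # point this at your own model_gen folder

# trust_remote_code=True lets transformers pick up configuration_glm.py and the
# accompanying modeling code stored inside the checkpoint folder.
tokenizer = AutoTokenizer.from_pretrained(best_model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(best_model_path, trust_remote_code=True)
model.eval()
```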

The final result with the current setting is:

{'exact_match': 92.0863309352518, 'f1': 94.69410937415482}

You can find this result at the end of `squad.ipynb`; I have run the notebook myself.

# Contact
Should you have any problems, feel free to email me at [email protected] or contact me on WeChat (ID: M_Zhang6).
Binary file added examples/squad/config.zip
Binary file not shown.
5 changes: 5 additions & 0 deletions examples/squad/model_gen/added_tokens.json
@@ -0,0 +1,5 @@
{
"<|startofpiece|>": 50265,
"<|endofpiece|>": 50266,
"[MASK]": 50267
}
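
These entries extend a base vocabulary that ends at ID 50264. A hedged sketch of how such special tokens are typically registered on a tokenizer before fine-tuning (not necessarily how the notebook does it; the base checkpoint name is hypothetical):

```python
from transformers import AutoTokenizer

# Hypothetical base checkpoint with a 50,265-token vocabulary; the GLM
# tokenizer used by the notebook may differ.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|startofpiece|>", "<|endofpiece|>", "[MASK]"]}
)
# The model's embedding matrix must then be resized to match:
# model.resize_token_embeddings(len(tokenizer))
```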
136 changes: 136 additions & 0 deletions examples/squad/model_gen/configuration_glm.py
@@ -0,0 +1,136 @@
# coding=utf-8
# Copyright 2022 shunxing1234 and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" GLM model configuration """

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)

GLM_PRETRAINED_CONFIG_ARCHIVE_MAP = {
"shunxing1234/GLM": "https://huggingface.co/shunxing1234/GLM/resolve/main/config.json",
# See all GLM models at https://huggingface.co/models?filter=glm
}


class GLMConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`~GLMModel`].
It is used to instantiate a GLM model according to the specified arguments, defining the model
architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of
the GLM [shunxing1234/GLM-base-cased](https://huggingface.co/shunxing1234/GLM-base-cased) architecture.

Configuration objects inherit from [`PretrainedConfig`] and can be used
to control the model outputs. Read the documentation from [`PretrainedConfig`]
for more information.


Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the GLM model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`~GLMModel`] or
[`~TFGLMModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler.
If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with.
Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`~GLMModel`] or
[`~TFGLMModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
Example:

```python
>>> from transformers import GLMModel, GLMConfig

>>> # Initializing a GLM shunxing1234/GLM-base-cased style configuration
>>> configuration = GLMConfig()

>>> # Initializing a model from the shunxing1234/GLM-base-cased style configuration
>>> model = GLMModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
"""
model_type = "glm"
attribute_map = {
"num_hidden_layers": "num_layers"
}

def __init__(
self,
num_layers=24,
vocab_size=30592,
hidden_size=1024,
num_attention_heads=16,
embedding_dropout_prob=0.1,
attention_dropout_prob=0.1,
output_dropout_prob=0.1,
max_sequence_length=512,
checkpoint_activations=False,
checkpoint_num_layers=1,
parallel_output=True,
relative_encoding=False,
block_position_encoding=True,
output_predict=False,
spell_length=None,
spell_func="lstm",
attention_scale=1.0,
initializer_range=0.02,
pool_token="cls",
**kwargs
):
self.num_layers = num_layers
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.num_attention_heads = num_attention_heads
self.embedding_dropout_prob = embedding_dropout_prob
self.attention_dropout_prob = attention_dropout_prob
self.output_dropout_prob = output_dropout_prob
self.max_sequence_length = max_sequence_length
self.checkpoint_activations = checkpoint_activations
self.checkpoint_num_layers = checkpoint_num_layers
self.parallel_output = parallel_output
self.relative_encoding = relative_encoding
self.block_position_encoding = block_position_encoding
self.output_predict = output_predict
self.spell_length = spell_length
self.spell_func = spell_func
self.attention_scale = attention_scale
self.initializer_range = initializer_range
self.pool_token = pool_token

super().__init__(**kwargs)