
Commit 5eef7e2

Update README.md (bytedance#67)
1 parent: 3a1ff01

File tree

1 file changed: 3 additions, 3 deletions


lightseq/training/README.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -86,7 +86,7 @@ We compute speedup on different batch size using the WPS (real words per second)
 - To install LightSeq training library:
 ```shell
 git clone https://github.com/bytedance/lightseq.git
-cd lightseq/training/
+cd lightseq
 pip install -e .
 ```
 
````

````diff
@@ -114,7 +114,7 @@ You can also use LightSeq operators directly in your codes to build your own mod
 For example, if you want to use the encoder layers, you first need to generate a config containing all the arguments of the models and training. Then you can initialize the LightSeq encoder layer using the config and integrate it into you models.
 
 ```
-from ops.pytorch.transformer_encoder_layer import LSTransformerEncoderLayer
+from lightseq.training.ops.pytorch.transformer_encoder_layer import LSTransformerEncoderLayer
 
 config = LSTransformerEncoderLayer.get_config(
     max_batch_tokens=4096,
````
````diff
@@ -131,7 +131,7 @@ config = LSTransformerEncoderLayer.get_config(
 )
 enc_layer = LSTransformerEncoderLayer(config)
 ```
-Currently, LightSeq supports the separate use of five operations: embedding, encoder layer, decoder layer, criterion and optimizer. You can checkout out the `ops/pytorch` and `ops/tensorflow` directory for detail.
+Currently, LightSeq supports the separate use of five operations: embedding, encoder layer, decoder layer, criterion and optimizer. You can checkout out the `lightseq/training/ops/pytorch` and `lightseq/training/ops/tensorflow` directory for detail.
 
 ## Limitations and Future Plans
 * Training with 8 bit integers.
````
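All three hunks follow from the same layout fact: the repository root contains the `lightseq/` package directory, so `pip install -e .` must be run from the root, and imports must then spell out the full `lightseq.training.ops...` prefix rather than the bare `ops...` prefix that only worked when the working directory happened to be `lightseq/training/`. A minimal, self-contained sketch of that mechanism (it builds a throwaway package tree in a temp directory instead of installing LightSeq itself; the stub class is an assumption, not the real layer):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Recreate the repo layout in miniature: <root>/lightseq/training/ops/pytorch/
root = Path(tempfile.mkdtemp())
pkg = root / "lightseq" / "training" / "ops" / "pytorch"
pkg.mkdir(parents=True)

# Mark every level as a package so the dotted import resolves.
for d in (root / "lightseq",
          root / "lightseq" / "training",
          root / "lightseq" / "training" / "ops",
          pkg):
    (d / "__init__.py").write_text("")

# Stand-in for the real module; the class body here is a placeholder.
(pkg / "transformer_encoder_layer.py").write_text(
    "class LSTransformerEncoderLayer:\n    pass\n"
)

# An editable install run from the repo root effectively puts <root> on
# sys.path, so the import must start at `lightseq`, not at `ops`.
sys.path.insert(0, str(root))
mod = importlib.import_module(
    "lightseq.training.ops.pytorch.transformer_encoder_layer"
)
print(mod.LSTransformerEncoderLayer.__name__)  # LSTransformerEncoderLayer
```

The old `from ops.pytorch...` spelling fails under this layout with a `ModuleNotFoundError`, which is exactly what this commit's path updates avoid.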
