
Commit cc3b659: Update README.md
1 parent: 7d34a34

File tree: 1 file changed (+1, -6 lines)

README.md

Lines changed: 1 addition & 6 deletions
````diff
@@ -2,15 +2,11 @@
 ## About
 
 **SRU** is a recurrent unit that can run over 10 times faster than cuDNN LSTM, without loss of accuracy tested on many tasks.
-
 <p align="center">
 <img width=620 src="imgs/speed.png"><br>
 <i>Average processing time of LSTM, conv2d and SRU, tested on GTX 1070</i><br>
 </p>
-
-<br>
-
-For example, the figures above presents the processing time of a single mini-batch of 32 samples. SRU achieves 10 to 16 times speed-up compared to LSTM, and operates as fast as (or faster than) word-level convolution using conv2d.
+For example, the figure above presents the processing time of a single mini-batch of 32 samples. SRU achieves 10 to 16 times speed-up compared to LSTM, and operates as fast as (or faster than) word-level convolution using conv2d.
 
 <br>
 
@@ -23,7 +19,6 @@ For example, the figures above presents the processing time of a single mini-bat
 CuPy and pynvrtc needed to compile the CUDA code into a callable function at runtime.
 
 
-
 ## Examples
 The usage of SRU is the similar to `nn.LSTM`.
 ```python
````
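The trailing context of the diff states that SRU's usage is similar to `nn.LSTM`. As a minimal sketch of what that interface looks like, the following runs the `nn.LSTM` side on CPU; per the README's claim, an SRU would be constructed with the same `(input_size, hidden_size, num_layers)` arguments and called the same way. The SRU import path is not shown in this diff, so it is left out here, and the tensor sizes are illustrative, not from the source.

```python
import torch
import torch.nn as nn

# Illustrative sizes (not from the commit): a mini-batch of 32 samples,
# matching the batch size mentioned in the README's timing discussion.
seq_len, batch, input_size, hidden_size, num_layers = 20, 32, 128, 128, 2

# nn.LSTM is the interface SRU is said to mirror.
rnn = nn.LSTM(input_size, hidden_size, num_layers)

# Input layout is (seq_len, batch, input_size) by default.
x = torch.randn(seq_len, batch, input_size)

# output: (seq_len, batch, hidden_size); h_n, c_n: (num_layers, batch, hidden_size)
output, (h_n, c_n) = rnn(x)
print(output.shape)  # torch.Size([20, 32, 128])
```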

0 commit comments