@@ -21,7 +21,7 @@ For example, the figure above presents the processing time of a single mini-batc
 <br>
 
 ## Requirements
-- **GPU and CUDA are required**
+- **GPU and CUDA 8 are required**
 - [PyTorch](http://pytorch.org/)
 - [CuPy](https://cupy.chainer.org/)
 - [pynvrtc](https://github.com/NVIDIA/pynvrtc)
@@ -34,10 +34,11 @@ Install requirements via `pip install -r requirements.txt`. CuPy and pynvrtc nee
 The usage of SRU is similar to `nn.LSTM`.
 ```python
 import torch
+from torch.autograd import Variable
 from cuda_functional import SRU, SRUCell
 
 # input has length 20, batch size 32 and dimension 128
-x = torch.FloatTensor(20, 32, 128).cuda()
+x = Variable(torch.FloatTensor(20, 32, 128).cuda())
 
 
 input_size, hidden_size = 128, 128
@@ -48,6 +49,7 @@ rnn = SRU(input_size, hidden_size,
     use_tanh = 1,            # use tanh or identity activation
     bidirectional = False    # bidirectional RNN ?
 )
+rnn.cuda()
 
 
 output, hidden = rnn(x)   # forward pass