README.md (10 additions, 9 deletions)
@@ -20,11 +20,12 @@ We are in an early-release Beta. Expect some adventures and rough edges.
 - [Releases and Contributing](#releases-and-contributing)
 - [The Team](#the-team)
 
-| System |Python|Status|
+| System |2.7|3.5|
 | --- | --- | --- |
-| Linux CPU | 2.7.8, 2.7, 3.5, nightly |[](https://travis-ci.org/pytorch/pytorch)|
-| Linux GPU | 2.7 |[](https://build.pytorch.org/job/pytorch-master-py2)|
-| Linux GPU | 3.5 |[](https://build.pytorch.org/job/pytorch-master-py3)|
+| Linux CPU |[](https://travis-ci.org/pytorch/pytorch)|[](https://travis-ci.org/pytorch/pytorch)|
+| Linux GPU |[](https://build.pytorch.org/job/pytorch-master-py2-linux)|[](https://build.pytorch.org/job/pytorch-master-py3-linux)|
+| macOS CPU |[](https://build.pytorch.org/job/pytorch-master-py2-osx-cpu)|[](https://build.pytorch.org/job/pytorch-master-py3-osx-cpu)|
 
 ## More about PyTorch
@@ -116,9 +117,9 @@ We hope you never spend hours debugging your code because of bad stack traces or
 
 ### Fast and Lean
 
-PyTorch has minimal framework overhead. We integrate acceleration libraries
-such as Intel MKL and NVIDIA (CuDNN, NCCL) to maximize speed.
-At the core, its CPU and GPU Tensor and Neural Network backends
+PyTorch has minimal framework overhead. We integrate acceleration libraries
+such as Intel MKL and NVIDIA (CuDNN, NCCL) to maximize speed.
+At the core, its CPU and GPU Tensor and Neural Network backends
 (TH, THC, THNN, THCUNN) are written as independent libraries with a C99 API.
 They are mature and have been tested for years.
@@ -204,7 +205,7 @@ nvidia-docker run --rm -ti --ipc=host pytorch-cudnnv6
 ```
 Please note that pytorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g.
 for multithreaded data loaders) the default shared memory segment size that container runs with is not enough, and you
-should increase shared memory size either with --ipc=host or --shm-size command line options to nvidia-docker run.
+should increase shared memory size either with --ipc=host or --shm-size command line options to nvidia-docker run.
 
 
## Getting Started
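
The shared-memory note in the hunk above can be sketched as two alternative invocations. This is an illustrative sketch, not part of the PR: the image name `pytorch-cudnnv6` comes from the README's own build step, but the `8g` size is an arbitrary example value you would tune to your data-loader workload.

```shell
# Option 1 (from the README): share the host's IPC namespace, so the
# container sees the host's full shared-memory segment.
nvidia-docker run --rm -ti --ipc=host pytorch-cudnnv6

# Option 2: keep an isolated IPC namespace but enlarge the container's
# /dev/shm. The 8g value is an illustrative assumption, not a recommendation.
nvidia-docker run --rm -ti --shm-size=8g pytorch-cudnnv6
```

`--ipc=host` is the simpler choice on a trusted single-tenant machine; `--shm-size` keeps namespace isolation at the cost of picking a size up front.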
@@ -222,7 +223,7 @@ Three pointers to get you started:
 
 ## Releases and Contributing
 
-PyTorch has a 90 day release cycle (major releases).
+PyTorch has a 90 day release cycle (major releases).
 It's current state is Beta, we expect no obvious bugs. Please let us know if you encounter a bug by [filing an issue](https://github.com/pytorch/pytorch/issues).
 
 We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.