
Commit bd1b6a9

inctrl authored and committed
Autopush
1 parent 4f6e111 commit bd1b6a9

File tree: 1 file changed (+37, -63 lines changed)


README.md

@@ -1,10 +1,32 @@
 # tensorflow-vs-pytorch
 
+A comparative study of TensorFlow vs PyTorch.
+
 This repository aims for comparative analysis of TensorFlow vs PyTorch, for those who want to learn TensorFlow while already familiar with PyTorch or vice versa.
 
-The whole content was written in Ipython Notebook then converted into MarkDown. Ipython Notebooks in main directory cotains the same content.
 
-## TABLE OF CONTENTS
+## Important Updates
+
+**TensorFlow**
+
+[Eager Execution (Oct 17, 2018)](https://www.tensorflow.org/guide/eager)
+TensorFlow has also launched a dynamic graph framework that enables define-by-run execution.
+
+**PyTorch**
+
+[PyTorch 0.4.0 Migration (Apr 22, 2018)](https://pytorch.org/blog/pytorch-0_4_0-migration-guide)
+Variable is merged into Tensor. Since 0.4.0, torch.autograd.Variable returns a torch.Tensor, and a torch.Tensor can do everything the old Variable did.
+
+
+## vs. Table
+
+| | TensorFlow | PyTorch |
+|---------------|------------------------------------------------------|----------------------------------------------------|
+| Numpy to tensor | [**- Numpy to tf.Tensor**](https://github.com/tango4j/tensorflow-vs-pytorch#numpy-to-tftensor) <br/> ```tf.convert_to_tensor(numpy_array, np.float32)``` | [**- Numpy to torch.Tensor**](https://github.com/tango4j/tensorflow-vs-pytorch#numpy-to-torchtensor) <br/> ```torch.from_numpy(numpy_array)``` |
+| Tensor to Numpy | [**- tf.Tensor to Numpy**](https://github.com/tango4j/tensorflow-vs-pytorch#tftensor-to-numpy) <br/> ```tensorflow_tensor.eval()``` <br/> ```tf.convert_to_tensor(numpy_array, np.float32)``` | [**- torch.Tensor to Numpy**](https://github.com/tango4j/tensorflow-vs-pytorch#torchtensor-to-numpy) <br/> ```torch_for_numpy.numpy()``` |
+| Dimension check | [**- .shape variable**](https://github.com/tango4j/tensorflow-vs-pytorch#shape-variable-in-tensorflow) <br/> [**- tf.rank function**](https://github.com/tango4j/tensorflow-vs-pytorch#tfrank-function) <br/> ```my_image.shape``` <br/> ```tf.rank(my_image)``` | [**- Automatically Displayed Dim.**](https://github.com/tango4j/tensorflow-vs-pytorch#automatically-displayed-pytorch-tensor-dimension) <br/> [**- .shape variable in PyTorch**](https://github.com/tango4j/tensorflow-vs-pytorch#shape-variable-in-pytorch) <br/> ```torch_for_numpy.shape``` |
+
+## Table of Contents
 
 [**01. Tensor**](https://github.com/tango4j/tensorflow-vs-pytorch#01-tensor)
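
A minimal sketch of the two updates noted above, assuming TensorFlow 2.x (where eager execution is on by default; on TF 1.x you would call `tf.enable_eager_execution()` first) and PyTorch 0.4.0 or later:

```python
import tensorflow as tf   # assumes TF 2.x: eager execution is the default
import torch              # assumes PyTorch >= 0.4.0

# TensorFlow eager execution: operations run immediately (define-by-run), no Session needed.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = x * 2 + 1
print(y.numpy())          # the result is available right away

# PyTorch >= 0.4.0: Variable is merged into Tensor.
# requires_grad now lives on the Tensor itself; no torch.autograd.Variable wrapper is needed.
a = torch.ones(2, 2, requires_grad=True)
b = (a * 3).sum()
b.backward()
print(a.grad)             # gradients are tracked directly on the Tensor

# torch.autograd.Variable still exists for backward compatibility, but it simply returns a Tensor.
v = torch.autograd.Variable(torch.ones(2, 2))
print(type(v))            # <class 'torch.Tensor'>
```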

@@ -24,13 +46,13 @@ The whole content was written in Ipython Notebook then converted into MarkDown.
 >>[(1) PyTorch Tensor](https://github.com/tango4j/tensorflow-vs-pytorch#1-pytorch-tensor)
 >>[(2) PyTorch's dynamic graph feature](https://github.com/tango4j/tensorflow-vs-pytorch#2-pytorchs-dynamic-graph-feature)
 >>[(3) What does torch.autograd.Variable contain?](https://github.com/tango4j/tensorflow-vs-pytorch#3-what-does-torchautogradvariable-contain)
->>[(4) Backpropagation with dynamic graph](https://github.com/tango4j/tensorflow-vs-pytorch#4-backpropagation-with-dynamic-graph)
+>>[(4) Backpropagation with dynamic graph](https://github.com/tango4j/tensorflow-vs-pytorch#4-backpropagation-with-dynamic-graph)
 
 >[**2. Tensor Numpy Conversion**](https://github.com/tango4j/tensorflow-vs-pytorch#2-tensor-numpy-conversion)
 
 >[[TensorFlow] tf.convert_to_tensor or .eval()](https://github.com/tango4j/tensorflow-vs-pytorch#tensorflow-tfconvert_to_tensor-or-eval)
->>[Numpy to tf.Tensor](https://github.com/tango4j/tensorflow-vs-pytorch#numpy-to-tftensor)
->>[tf.Tensor to Numpy](https://github.com/tango4j/tensorflow-vs-pytorch#tftensor-to-numpy)
+>> [Numpy to tf.Tensor](https://github.com/tango4j/tensorflow-vs-pytorch#numpy-to-tftensor)
+>> [tf.Tensor to Numpy](https://github.com/tango4j/tensorflow-vs-pytorch#tftensor-to-numpy)
 
 >[[PyTorch] .numpy() or torch.from_numpy()](https://github.com/tango4j/tensorflow-vs-pytorch#pytorch-numpy-or-torchfrom_numpy)
 >>[Numpy to torch.Tensor](https://github.com/tango4j/tensorflow-vs-pytorch#numpy-to-torchtensor)
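
A minimal sketch of the conversion functions referenced above (`tf.convert_to_tensor`, `.eval()`, `.numpy()`, `torch.from_numpy()`). It assumes TensorFlow 2.x eager execution, where `tensor.numpy()` plays the role of the session-bound `.eval()`; the variable names are illustrative only:

```python
import numpy as np
import tensorflow as tf   # assumes TF 2.x (eager); under TF 1.x, .eval() inside a Session is used instead
import torch

numpy_array = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)

# NumPy -> tf.Tensor
tf_tensor = tf.convert_to_tensor(numpy_array, np.float32)

# tf.Tensor -> NumPy (eager mode); in TF 1.x graph mode this would be tf_tensor.eval() inside a Session
back_to_numpy = tf_tensor.numpy()

# NumPy -> torch.Tensor (shares memory with the NumPy array)
torch_tensor = torch.from_numpy(numpy_array)

# torch.Tensor -> NumPy
torch_back = torch_tensor.numpy()

# Dimension checks
print(tf_tensor.shape)      # (2, 2)
print(tf.rank(tf_tensor))   # rank 2
print(torch_tensor.shape)   # torch.Size([2, 2])
```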
@@ -58,7 +80,9 @@ The whole content was written in Ipython Notebook then converted into MarkDown.
 >> [Copy the Dimension of other PyTorch Tensor .view_as()](https://github.com/tango4j/tensorflow-vs-pytorch#copy-the-dimension-of-other-pytorch-tensor-view_as)
 
 > [**5. Shaping the Tensor Variables**](https://github.com/tango4j/tensorflow-vs-pytorch#4-shaping-the-tensor-variables)
+
 > [**6. Datatype Conversion**](https://github.com/tango4j/tensorflow-vs-pytorch#5-datatype-conversion)
+
 > [**7. Printing Variables**](https://github.com/tango4j/tensorflow-vs-pytorch#6-printing-variables)
 
 [**02. Variable**](https://github.com/tango4j/tensorflow-vs-pytorch#02-variables-)
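
A small illustrative sketch of the shaping, datatype conversion, and printing topics listed above, assuming TensorFlow 2.x and PyTorch; the tensor names are made up:

```python
import tensorflow as tf
import torch

# Shaping
tf_t = tf.reshape(tf.range(6), [2, 3])      # TensorFlow reshaping
pt_t = torch.arange(6).view(2, 3)           # PyTorch reshaping (.view / .reshape)
pt_like = torch.zeros(3, 2).view_as(pt_t)   # copy another tensor's shape with .view_as()

# Datatype conversion
tf_f = tf.cast(tf_t, tf.float32)
pt_f = pt_t.float()                         # or pt_t.to(torch.float32)

# Printing: eager TensorFlow and PyTorch both print values and shape directly
print(tf_f)
print(pt_f)
```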
@@ -90,52 +114,6 @@ The whole content was written in Ipython Notebook then converted into MarkDown.
 
 - Once define a computational graph and excute the same graph repeatedly.
 
-- Pros:
-
-(1) Optimizes the graph upfront and makes better distributed computation.
-(2) Repeated computation does not cause additional computational cost.
-
-
-- Cons:
-
-(1) Difficult to perform different computation for each data point.
-(2) The structure becomes more complicated and harder to debug than dynamic graph.
-
-
-#**PyTorch:**
-
-- Dynamic graph.
-
-- Does not define a graph in advance. Every forward pass makes a new computational graph.
-
-- Pros:
-
-(1) Debugging is easier than static graph.
-(2) Keep the whole structure concise and intuitive.
-(3) For each data point and time different computation can be performed.
-
-
-- Cons:
-
-(1) Repetitive computation can lead to slower computation speed.
-(2) Difficult to distribute the work load in the beginning of training.
-
-- There are a few distinct differences between Tensorflow and Pytorch when it comes to data compuation.
-
-| | TensorFlow | PyTorch |
-|---------------|---------------------------------- |----------------|
-| Framework | Define-and-run | Define-by-run |
-| Graph | Static | Dynamic |
-| Debug | Non-native debugger (tfdbg) |pdb(ipdb) Python debugger|
-
-**How "Graph" is defined in each framework?**
-
-#**TensorFlow:**
-
-- Static graph.
-
-- Once define a computational graph and excute the same graph repeatedly.
-
 - Pros:
 
 (1) Optimizes the graph upfront and makes better distributed computation.
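
The define-and-run vs. define-by-run contrast kept in this section can be sketched as below; the TensorFlow half assumes the TF 1.x graph/session style (reachable via `tensorflow.compat.v1` on newer installs), which is what the static-graph description refers to:

```python
import tensorflow.compat.v1 as tf1   # assumes TF 1.x-style graph mode
tf1.disable_eager_execution()
import torch

# TensorFlow (define-and-run): build a static graph once, then execute it repeatedly.
x = tf1.placeholder(tf1.float32, shape=[None, 3])
y = tf1.reduce_sum(x * 2)
with tf1.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))   # the graph is fixed; only the fed data changes

# PyTorch (define-by-run): the graph is built on the fly during each forward pass,
# so ordinary Python control flow can change the computation per data point.
a = torch.tensor([[1.0, 2.0, 3.0]], requires_grad=True)
b = (a * 2).sum() if a.sum() > 0 else (a * 3).sum()
b.backward()
print(a.grad)
```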
@@ -172,6 +150,8 @@ The whole content was written in Ipython Notebook then converted into MarkDown.
 (2) Difficult to distribute the work load in the beginning of training.
 
 
+- There are a few distinct differences between TensorFlow and PyTorch when it comes to data computation.
+
 # **01 Tensor**
 
 Both TensorFlow and PyTorch are based on the concept "Tensor".
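
A minimal sketch of that shared "Tensor" concept, creating the same values in both frameworks (assuming both libraries are installed):

```python
import tensorflow as tf
import torch

# The same 2x3 tensor in each framework
tf_tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
pt_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

print(tf_tensor.shape, tf_tensor.dtype)   # TensorShape([2, 3]), int32
print(pt_tensor.shape, pt_tensor.dtype)   # torch.Size([2, 3]), torch.int64
```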
@@ -307,18 +287,12 @@ Let's find out.
 ### Difference Between Special Tensors and tf.Variable (TensorFlow)
 ### (1) tf.Variable:
 
-- tf.Variable is **NOT** actually tensor, but rather it
-should be classified as **Variable** to avoid confusion.
-- tf.Variable is the
-only type that can be modified.
-- tf.Variable is designed for weights and bias(≠
-tf.placeholder). Not for feeding data.
-- tf.Variable is stored separately, and
-may live on a parameter server, **not in the graph**.
-- tf.Variable should
-always be initialized before run.
-- Usually declared by [initial value],
-[dtype], [name]. (There are more arguments...)
+- tf.Variable is the only type that can be modified.
+- tf.Variable is designed for weights and biases (≠ tf.placeholder), not for feeding data.
+- tf.Variable is **NOT** actually a tensor; it should be classified as a **Variable** to avoid confusion.
+- tf.Variable is stored separately and may live on a parameter server, **not in the graph**.
+- tf.Variable should always be initialized before being run.
+- Usually declared with [initial value], [dtype], [name]. (There are more arguments...)
 
 ```python
 mymat = tf.Variable([[7],[11]], tf.int16, name='cat')
