Commit 2d1ccec

update
2 parents b973730 + e55a706

23 files changed: 682 additions & 0 deletions

.gitignore

Lines changed: 3 additions & 0 deletions
@@ -1,8 +1,11 @@
 *pyc
 .DS_Store
+<<<<<<< HEAD
 doctrees/
 .buildinfo
 .remote-sync.json
 *tensorboard*
 .coverage.*
 __pycache__/
+=======
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

.travis.yml

Lines changed: 12 additions & 0 deletions
@@ -9,17 +9,29 @@ install:
 - if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
     pip install --only-binary=numpy,scipy numpy nose scipy pytest sklearn;
     pip install tensorflow;
+<<<<<<< HEAD
     pip install git+https://github.com/hycis/TensorGraph.git@master;
+=======
+    pip install git+https://github.com/hycis/TensorGraphX.git@master;
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
   fi

 - if [[ "$TRAVIS_PYTHON_VERSION" == "3.5" ]]; then
     pip3 install --only-binary=numpy,scipy numpy nose scipy pytest sklearn;
     pip3 install tensorflow;
+<<<<<<< HEAD
     pip3 install git+https://github.com/hycis/TensorGraph.git@master;
   fi

 script:
 - echo "TensorGraph Testing.."
+=======
+    pip3 install git+https://github.com/hycis/TensorGraphX.git@master;
+  fi
+
+script:
+- echo "TensorGraphX Testing.."
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 - if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
     python -m pytest test;
   fi

LICENCE

Lines changed: 8 additions & 0 deletions
@@ -1,4 +1,8 @@
+<<<<<<< HEAD
 Copyright 2015 The TensorGraph Authors. All rights reserved.
+=======
+Copyright 2015 The TensorGraphX Authors. All rights reserved.
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

 Apache License
 Version 2.0, January 2004
@@ -188,7 +192,11 @@ Copyright 2015 The TensorGraph Authors. All rights reserved.
 same "printed page" as the copyright notice for easier
 identification within third-party archives.

+<<<<<<< HEAD
 Copyright 2015, The TensorGraph Authors.
+=======
+Copyright 2015, The TensorGraphX Authors.
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.

MANIFEST.in

Lines changed: 4 additions & 0 deletions
@@ -1,2 +1,6 @@
 include README.md LICENCE
+<<<<<<< HEAD
 recursive-include tensorgraph *.py
+=======
+recursive-include tensorgraphx *.py
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

README.md

Lines changed: 164 additions & 0 deletions
@@ -1,3 +1,4 @@
+<<<<<<< HEAD
 `master` [![Build Status](http://54.222.242.222:1010/buildStatus/icon?job=TensorGraph/master)](http://54.222.242.222:1010/job/TensorGraph/master)
 `develop` [![Build Status](http://54.222.242.222:1010/buildStatus/icon?job=TensorGraph/develop)](http://54.222.242.222:1010/job/TensorGraph/develop)

@@ -8,18 +9,34 @@ TensorGraph is a simple, lean, and clean framework on TensorFlow for building an
 As deep learning becomes more and more common and the architectures becoming more
 and more complicated, it seems that we need some easy to use framework to quickly
 build these models and that's what TensorGraph is designed for. It's a very simple
+=======
+[![Build Status](https://travis-ci.org/hycis/TensorGraphX.svg?branch=master)](https://travis-ci.org/hycis/TensorGraphX)
+
+# TensorGraphX - Simplicity is Beauty
+TensorGraphX is a simple, lean, and clean framework on TensorFlow for building any imaginable models.
+
+As deep learning becomes more and more common and the architectures becoming more
+and more complicated, it seems that we need some easy to use framework to quickly
+build these models and that's what TensorGraphX is designed for. It's a very simple
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 framework that adds a very thin layer above tensorflow. It is for more advanced
 users who want to have more control and flexibility over his model building and
 who wants efficiency at the same time.

 -----
+<<<<<<< HEAD
 ## Target Audience
 TensorGraph is targeted more at intermediate to advance users who feel keras or
+=======
+### Target Audience
+TensorGraphX is targeted more at intermediate to advance users who feel keras or
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 other packages is having too much restrictions and too much black box on model
 building, and someone who don't want to rewrite the standard layers in tensorflow
 constantly. Also for enterprise users who want to share deep learning models
 easily between teams.

+<<<<<<< HEAD
 ## Documentation

 You can check out the documentation [https://skymed.ai/pages/AI-Platform/TensorGraph/](https://skymed.ai/pages/AI-Platform/TensorGraph/)
@@ -39,16 +56,46 @@ git clone https://skymed.ai/AI-Platform/TensorGraph.git
 export PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH
 ```
 in order for the install to persist via export `PYTHONPATH`. Add `PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH` to your `.bashrc` for linux or
+=======
+-----
+### Install
+
+First you need to install [tensorflow](https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html)
+
+To install tensorgraphx simply do via pip
+```bash
+sudo pip install tensorgraphx
+```
+or for bleeding edge version do
+```bash
+sudo pip install --upgrade git+https://github.com/hycis/TensorGraphX.git@master
+```
+or simply clone and add to `PYTHONPATH`.
+```bash
+git clone https://github.com/hycis/TensorGraphX.git
+export PYTHONPATH=/path/to/TensorGraphX:$PYTHONPATH
+```
+in order for the install to persist via export `PYTHONPATH`. Add `PYTHONPATH=/path/to/TensorGraphX:$PYTHONPATH` to your `.bashrc` for linux or
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 `.bash_profile` for mac. While this method works, you will have to ensure that
 all the dependencies in [setup.py](setup.py) are installed.

 -----
+<<<<<<< HEAD
 ## Everything in TensorGraph is about Layers
 Everything in TensorGraph is about layers. A model such as VGG or Resnet can be a layer. An identity block from Resnet or a dense block from Densenet can be a layer as well. Building models in TensorGraph is same as building a toy with lego. For example you can create a new model (layer) by subclass the `BaseModel` layer and use `DenseBlock` layer inside your `ModelA` layer.

 ```python
 from tensorgraph.layers import DenseBlock, BaseModel, Flatten, Linear, Softmax
 import tensorgraph as tg
+=======
+### Everything in TensorGraphX is about Layers
+Everything in TensorGraphX is about layers. A model such as VGG or Resnet can be a layer. An identity block from Resnet or a dense block from Densenet can be a layer as well. Building models in TensorGraphX is same as building a toy with lego. For example you can create a new model (layer) by subclass the `BaseModel` layer and use `DenseBlock` layer inside your `ModelA` layer.
+
+```python
+from tensorgraphx.layers import DenseBlock, BaseModel, Flatten, Linear, Softmax
+import tensorgraphx as tg
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

 class ModelA(BaseModel):
     @BaseModel.init_name_scope
@@ -86,6 +133,7 @@ y_train = modelb.train_fprop(X_ph)
 y_test = modelb.test_fprop(X_ph)
 ```

+<<<<<<< HEAD
 checkout some well known models in TensorGraph
 1. [VGG16 code](tensorgraph/layers/backbones.py#L37) and [VGG19 code](tensorgraph/layers/backbones.py#L125) - [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)
 2. [DenseNet code](tensorgraph/layers/backbones.py#L477) - [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)
@@ -323,26 +371,96 @@ graph are two separate steps. By splitting them into two separate steps, we ensu
 the flexibility of building our computational graph without the worry of accidental
 reinitialization of the `Variables`.
 We defined three types of nodes
+=======
+checkout some well known models in TensorGraphX
+1. [VGG16 code](tensorgraphx/layers/backbones.py#L37) and [VGG19 code](tensorgraphx/layers/backbones.py#L125) - [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)
+2. [DenseNet code](tensorgraphx/layers/backbones.py#L477) - [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)
+3. [ResNet code](tensorgraphx/layers/backbones.py#L225) - [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
+4. [Unet code](tensorgraphx/layers/backbones.py#L531) - [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
+
+-----
+### TensorGraphX on Multiple GPUS
+To use tensorgraphx on multiple gpus, you can easily integrate it with [horovod](https://github.com/uber/horovod).
+
+```python
+import horovod.tensorflow as hvd
+from tensorflow.python.framework import ops
+import tensorflow as tf
+hvd.init()
+
+# tensorgraphx model derived previously
+modelb = ModelB()
+X_ph = tf.placeholder()
+y_ph = tf.placeholder()
+y_train = modelb.train_fprop(X_ph)
+y_test = modelb.test_fprop(X_ph)
+
+train_cost = mse(y_train, y_ph)
+test_cost = mse(y_test, y_ph)
+
+opt = tf.train.RMSPropOptimizer(0.001)
+opt = hvd.DistributedOptimizer(opt)
+
+# required for BatchNormalization layer
+update_ops = ops.get_collection(ops.GraphKeys.UPDATE_OPS)
+with ops.control_dependencies(update_ops):
+    train_op = opt.minimize(train_cost)
+
+init_op = tf.group(tf.global_variables_initializer(),
+                   tf.local_variables_initializer())
+bcast = hvd.broadcast_global_variables(0)
+
+# Pin GPU to be used to process local rank (one GPU per process)
+config = tf.ConfigProto()
+config.gpu_options.allow_growth = True
+config.gpu_options.visible_device_list = str(hvd.local_rank())
+
+with tf.Session(graph=graph, config=config) as sess:
+    sess.run(init_op)
+    bcast.run()
+
+    # training model
+    for epoch in range(100):
+        for X,y in train_data:
+            _, loss_train = sess.run([train_op, train_cost], feed_dict={X_ph:X, y_ph:y})
+```
+
+for a full example on [tensorgraphx on horovod](./examples/multi_gpus_horovod.py)
+
+-----
+### How TensorGraphX Works?
+In TensorGraphX, we defined three types of nodes
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

 1. StartNode : for inputs to the graph
 2. HiddenNode : for putting sequential layers inside
 3. EndNode : for getting outputs from the model

+<<<<<<< HEAD
 We put all the sequential layers into a `HiddenNode`, `HiddenNode` can be connected
 to another `HiddenNode` or `StartNode`, the nodes are connected together to form
 an architecture. The graph always starts with `StartNode` and ends with `EndNode`.
 Once we have defined an architecture, we can use the `Graph` object to connect the
 path we want in the architecture, there can be multiple StartNodes (s1, s2, etc)
 and multiple EndNodes (e1, e2, etc), we can define which path we want in the
 entire architecture, example to link from `s2` to `e1`. The `StartNode` is where you place
+=======
+We put all the sequential layers into a `HiddenNode`, and connect the hidden nodes
+together to build the architecture that you want. The graph always
+starts with `StartNode` and ends with `EndNode`. The `StartNode` is where you place
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 your starting point, it can be a `placeholder`, a symbolic output from another graph,
 or data output from `tfrecords`. `EndNode` is where you want to get an output from
 the graph, where the output can be used to calculate loss or simply just a peek at the
 outputs at that particular layer. Below shows an
 [example](examples/example.py) of building a tensor graph.

 -----
+<<<<<<< HEAD
 ## Graph Example
+=======
+### Graph Example
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

 <img src="draw/graph.png" height="250">

@@ -362,19 +480,29 @@ Then define the `HiddenNode` for putting the sequential layers in each `HiddenNo
 ```python
 h1 = HiddenNode(prev=[s1, s2],
                 input_merge_mode=Concat(),
+<<<<<<< HEAD
                 layers=[Linear(y2_dim), RELU()])
 h2 = HiddenNode(prev=[s2],
                 layers=[Linear(y2_dim), RELU()])
 h3 = HiddenNode(prev=[h1, h2],
                 input_merge_mode=Sum(),
                 layers=[Linear(y1_dim), RELU()])
+=======
+                layers=[Linear(y1_dim+y2_dim, y2_dim), RELU()])
+h2 = HiddenNode(prev=[s2],
+                layers=[Linear(y2_dim, y2_dim), RELU()])
+h3 = HiddenNode(prev=[h1, h2],
+                input_merge_mode=Sum(),
+                layers=[Linear(y2_dim, y1_dim), RELU()])
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 ```
 Then define the `EndNode`. `EndNode` is used to back-trace the graph to connect
 the nodes together.
 ```python
 e1 = EndNode(prev=[h3])
 e2 = EndNode(prev=[h2])
 ```
+<<<<<<< HEAD
 Finally build the graph by putting `StartNodes` and `EndNodes` into `Graph`, we
 can choose to use the entire architecture by using all the `StartNodes` and `EndNodes`
 and run the forward propagation to get symbolic output from train mode. The number
@@ -390,13 +518,26 @@ graph = Graph(start=[s2], end=[e1])
 o1, = graph.train_fprop()
 ```

+=======
+Finally build the graph by putting `StartNodes` and `EndNodes` into `Graph`
+```python
+graph = Graph(start=[s1, s2], end=[e1, e2])
+```
+Run train forward propagation to get symbolic output from train mode. The number
+of outputs from `graph.train_fprop` is the same as the number of `EndNodes` put
+into `Graph`
+```python
+o1, o2 = graph.train_fprop()
+```
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 Finally build an optimizer to optimize the objective function
 ```python
 o1_mse = tf.reduce_mean((y1 - o1)**2)
 o2_mse = tf.reduce_mean((y2 - o2)**2)
 mse = o1_mse + o2_mse
 optimizer = tf.train.AdamOptimizer(learning_rate).minimize(mse)
 ```
+<<<<<<< HEAD

 -----
 ## TensorGraph on Multiple GPUS
@@ -449,6 +590,10 @@ for a full example on [tensorgraph on horovod](./examples/multi_gpus_horovod.py)

 -----
 ## Hierachical Softmax Example
+=======
+-----
+### Hierachical Softmax Example
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 Below is another example for building a more powerful [hierachical softmax](examples/hierachical_softmax.py)
 whereby the lower hierachical softmax layer can be conditioned on all the upper
 hierachical softmax layers.
@@ -472,9 +617,15 @@ y3_ph = tf.placeholder('float32', [None, component_dim])
 # define the graph model structure
 start = StartNode(input_vars=[x_ph])

+<<<<<<< HEAD
 h1 = HiddenNode(prev=[start], layers=[Linear(component_dim), Softmax()])
 h2 = HiddenNode(prev=[h1], layers=[Linear(component_dim), Softmax()])
 h3 = HiddenNode(prev=[h2], layers=[Linear(component_dim), Softmax()])
+=======
+h1 = HiddenNode(prev=[start], layers=[Linear(x_dim, component_dim), Softmax()])
+h2 = HiddenNode(prev=[h1], layers=[Linear(component_dim, component_dim), Softmax()])
+h3 = HiddenNode(prev=[h2], layers=[Linear(component_dim, component_dim), Softmax()])
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5


 e1 = EndNode(prev=[h1], input_merge_mode=Sum())
@@ -493,9 +644,15 @@ optimizer = tf.train.AdamOptimizer(learning_rate).minimize(mse)
 ```

 -----
+<<<<<<< HEAD
 ## Transfer Learning Example
 Below is an example on transfer learning with bi-modality inputs and merge at
 the middle layer with shared representation, in fact, TensorGraph can be used
+=======
+### Transfer Learning Example
+Below is an example on transfer learning with bi-modality inputs and merge at
+the middle layer with shared representation, in fact, TensorGraphX can be used
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5
 to build any number of modalities for transfer learning.

 <img src="draw/transferlearn.png" height="250">
@@ -518,10 +675,17 @@ y_ph = tf.placeholder('float32', [None, y_dim])
 s1 = StartNode(input_vars=[x1_ph])
 s2 = StartNode(input_vars=[x2_ph])

+<<<<<<< HEAD
 h1 = HiddenNode(prev=[s1], layers=[Linear(shared_dim), RELU()])
 h2 = HiddenNode(prev=[s2], layers=[Linear(shared_dim), RELU()])
 h3 = HiddenNode(prev=[h1,h2], input_merge_mode=Sum(),
                 layers=[Linear(y_dim), Softmax()])
+=======
+h1 = HiddenNode(prev=[s1], layers=[Linear(x1_dim, shared_dim), RELU()])
+h2 = HiddenNode(prev=[s2], layers=[Linear(x2_dim, shared_dim), RELU()])
+h3 = HiddenNode(prev=[h1,h2], input_merge_mode=Sum(),
+                layers=[Linear(shared_dim, y_dim), Softmax()])
+>>>>>>> e55a706e1467da7b7c54b6d04055aba847f5a2b5

 e1 = EndNode(prev=[h3])

docs/index.md

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+