`master` [![Build Status](http://54.222.242.222:1010/buildStatus/icon?job=TensorGraph/master)](http://54.222.242.222:1010/job/TensorGraph/master)
`develop` [![Build Status](http://54.222.242.222:1010/buildStatus/icon?job=TensorGraph/develop)](http://54.222.242.222:1010/job/TensorGraph/develop)

# TensorGraph - Simplicity is Beauty
TensorGraph is a simple, lean, and clean framework on TensorFlow for building any imaginable models.

As deep learning becomes more and more common and the architectures become more
and more complicated, we need an easy-to-use framework to quickly
build these models, and that's what TensorGraph is designed for. It's a very simple
framework that adds a very thin layer above TensorFlow. It is for more advanced
users who want more control and flexibility over their model building and
who want efficiency at the same time.

-----
## Target Audience
TensorGraph is targeted more at intermediate to advanced users who feel Keras or
other packages impose too many restrictions and too much black box on model
building, and who don't want to constantly rewrite the standard layers in TensorFlow.
It is also for enterprise users who want to share deep learning models
easily between teams.

## Documentation

You can check out the documentation at [https://skymed.ai/pages/AI-Platform/TensorGraph/](https://skymed.ai/pages/AI-Platform/TensorGraph/)

-----
## Install

Clone the repository and add it to your `PYTHONPATH`.
```bash
git clone https://skymed.ai/AI-Platform/TensorGraph.git
export PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH
```
in order for the install to persist via export `PYTHONPATH`. Add `PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH` to your `.bashrc` for Linux or
`.bash_profile` for Mac. While this method works, you will have to ensure that
all the dependencies in [setup.py](setup.py) are installed.
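
For example, to persist the path and sanity-check the install (assuming the repository was cloned to `/path/to/TensorGraph`):
```bash
# append the export to your shell startup file (use ~/.bash_profile on Mac)
echo 'export PYTHONPATH=/path/to/TensorGraph:$PYTHONPATH' >> ~/.bashrc
source ~/.bashrc

# the import should succeed and resolve to the cloned repository
python -c "import tensorgraph; print(tensorgraph.__file__)"
```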

-----
## Everything in TensorGraph is about Layers
Everything in TensorGraph is about layers. A model such as VGG or ResNet can be a layer. An identity block from ResNet or a dense block from DenseNet can be a layer as well. Building models in TensorGraph is like building a toy with Lego. For example, you can create a new model (layer) by subclassing the `BaseModel` layer and using a `DenseBlock` layer inside your `ModelA` layer.

```python
from tensorgraph.layers import DenseBlock, BaseModel, Flatten, Linear, Softmax
import tensorgraph as tg

class ModelA(BaseModel):
    @BaseModel.init_name_scope
    def __init__(self):
        # ...

y_train = modelb.train_fprop(X_ph)
y_test = modelb.test_fprop(X_ph)
```
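
For illustration, a `ModelA` subclass might look like the following minimal sketch (the `startnode`/`endnode` attributes, the `nclass` argument, and the `DenseBlock()` arguments are assumptions here, not the repository's exact code):
```python
class ModelA(BaseModel):
    @BaseModel.init_name_scope
    def __init__(self, nclass):
        # a StartNode slot for whatever tensor is later passed to fprop
        self.startnode = tg.StartNode(input_vars=[None])
        # hypothetical layer stack: a dense block followed by a classifier head
        hn = tg.HiddenNode(prev=[self.startnode],
                           layers=[DenseBlock(), Flatten(), Linear(nclass), Softmax()])
        self.endnode = tg.EndNode(prev=[hn])
```
A `ModelB` can then nest a `ModelA` instance inside its own `HiddenNode` layer list, which is what makes model composition Lego-like.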

Check out some well-known models in TensorGraph:
1. [VGG16 code](tensorgraph/layers/backbones.py#L37) and [VGG19 code](tensorgraph/layers/backbones.py#L125) - [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)
2. [DenseNet code](tensorgraph/layers/backbones.py#L477) - [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)
3. [ResNet code](tensorgraph/layers/backbones.py#L225) - [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
4. [Unet code](tensorgraph/layers/backbones.py#L531) - [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)

-----
## How TensorGraph Works?
In TensorGraph, defining the nodes and connecting the
graph are two separate steps. By splitting them into two separate steps, we ensure
the flexibility of building our computational graph without the worry of accidental
reinitialization of the `Variables`.
We define three types of nodes:
1. StartNode: for inputs to the graph
2. HiddenNode: for putting sequential layers inside
3. EndNode: for getting outputs from the model

We put all the sequential layers into a `HiddenNode`. A `HiddenNode` can be connected
to another `HiddenNode` or a `StartNode`, and the nodes are connected together to form
an architecture. The graph always starts with `StartNode` and ends with `EndNode`.
Once we have defined an architecture, we can use the `Graph` object to connect the
path we want in the architecture. There can be multiple StartNodes (s1, s2, etc.)
and multiple EndNodes (e1, e2, etc.), and we can define which path we want in the
entire architecture, for example linking from `s2` to `e1`. The `StartNode` is where you place
your starting point; it can be a `placeholder`, a symbolic output from another graph,
or data output from `tfrecords`. `EndNode` is where you get an output from
the graph; the output can be used to calculate loss or simply to peek at the
outputs at that particular layer. Below is an
[example](examples/example.py) of building a tensor graph.

-----
## Graph Example

<img src="draw/graph.png" height="250">

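First define the `StartNode`s for the inputs to the graph. That part of the example is elided here, but a minimal sketch could look like this (the import paths, placeholder names, and dimensions `x1_dim`/`x2_dim` are assumptions):

```python
import tensorflow as tf
from tensorgraph import StartNode, HiddenNode, EndNode, Graph
from tensorgraph.layers import Linear, RELU, Concat, Sum

x1_dim, x2_dim = 32, 32  # hypothetical input dimensions
x1_ph = tf.placeholder('float32', [None, x1_dim])
x2_ph = tf.placeholder('float32', [None, x2_dim])
s1 = StartNode(input_vars=[x1_ph])
s2 = StartNode(input_vars=[x2_ph])
```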
Then define the `HiddenNode`s, putting the sequential layers in each `HiddenNode`.
```python
h1 = HiddenNode(prev=[s1, s2],
                input_merge_mode=Concat(),
                layers=[Linear(y2_dim), RELU()])
h2 = HiddenNode(prev=[s2],
                layers=[Linear(y2_dim), RELU()])
h3 = HiddenNode(prev=[h1, h2],
                input_merge_mode=Sum(),
                layers=[Linear(y1_dim), RELU()])
```
Then define the `EndNode`s. An `EndNode` is used to back-trace the graph to connect
the nodes together.
```python
e1 = EndNode(prev=[h3])
e2 = EndNode(prev=[h2])
```
Finally build the graph by putting `StartNodes` and `EndNodes` into `Graph`. We
can choose to use the entire architecture by using all the `StartNodes` and `EndNodes`,
and run the forward propagation to get symbolic outputs in train mode. The number
of outputs from `graph.train_fprop` is the same as the number of `EndNodes` put
into `Graph`.
```python
graph = Graph(start=[s1, s2], end=[e1, e2])
o1, o2 = graph.train_fprop()
```
Or we can use only part of the architecture, for example linking from `s2` to `e1`, which yields a single output
```python
graph = Graph(start=[s2], end=[e1])
o1, = graph.train_fprop()
```
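
The test-mode counterpart works the same way: `test_fprop` back-traces the same graph but runs each layer in test mode (relevant for layers such as dropout or batch normalization).
```python
# symbolic output in test mode, one output per EndNode in the Graph
t1, = graph.test_fprop()
```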

Finally build an optimizer to optimize the objective function
```python
o1_mse = tf.reduce_mean((y1 - o1)**2)
o2_mse = tf.reduce_mean((y2 - o2)**2)
mse = o1_mse + o2_mse
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(mse)
```
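
A minimal training loop might then look like the following sketch (assuming `y1`/`y2` are target placeholders, and `X1`, `X2`, `Y1`, `Y2` are hypothetical numpy minibatches for the two inputs and two targets):
```python
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(100):
        # feed both StartNode inputs and both targets, then take one optimizer step
        _, loss = sess.run([optimizer, mse],
                           feed_dict={x1_ph: X1, x2_ph: X2, y1: Y1, y2: Y2})
```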

-----
## TensorGraph on Multiple GPUs
To use TensorGraph on multiple GPUs, you can easily integrate it with [horovod](https://github.com/uber/horovod).

```python
import horovod.tensorflow as hvd
from tensorflow.python.framework import ops
import tensorflow as tf
hvd.init()

# tensorgraph model derived previously
modelb = ModelB()
X_ph = tf.placeholder('float32')
y_ph = tf.placeholder('float32')
y_train = modelb.train_fprop(X_ph)
y_test = modelb.test_fprop(X_ph)

def mse(y, t):
    return tf.reduce_mean((y - t)**2)

train_cost = mse(y_train, y_ph)
test_cost = mse(y_test, y_ph)

opt = tf.train.RMSPropOptimizer(0.001)
opt = hvd.DistributedOptimizer(opt)

# required for the BatchNormalization layer
update_ops = ops.get_collection(ops.GraphKeys.UPDATE_OPS)
with ops.control_dependencies(update_ops):
    train_op = opt.minimize(train_cost)

init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())
bcast = hvd.broadcast_global_variables(0)

# pin GPU to be used to process local rank (one GPU per process)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = str(hvd.local_rank())

with tf.Session(config=config) as sess:
    sess.run(init_op)
    bcast.run()

    # training loop: train_data yields (X, y) minibatches
    for epoch in range(100):
        for X, y in train_data:
            _, loss_train = sess.run([train_op, train_cost],
                                     feed_dict={X_ph: X, y_ph: y})
```

For a full example, see [tensorgraph on horovod](./examples/multi_gpus_horovod.py).

-----
## Hierarchical Softmax Example
Below is another example for building a more powerful [hierarchical softmax](examples/hierachical_softmax.py)
whereby the lower hierarchical softmax layer can be conditioned on all the upper
hierarchical softmax layers.

```python
x_ph = tf.placeholder('float32', [None, x_dim])
y1_ph = tf.placeholder('float32', [None, component_dim])
y2_ph = tf.placeholder('float32', [None, component_dim])
y3_ph = tf.placeholder('float32', [None, component_dim])

# define the graph model structure
start = StartNode(input_vars=[x_ph])

h1 = HiddenNode(prev=[start], layers=[Linear(component_dim), Softmax()])
h2 = HiddenNode(prev=[h1], layers=[Linear(component_dim), Softmax()])
h3 = HiddenNode(prev=[h2], layers=[Linear(component_dim), Softmax()])

e1 = EndNode(prev=[h1], input_merge_mode=Sum())
e2 = EndNode(prev=[h1, h2], input_merge_mode=Sum())
e3 = EndNode(prev=[h1, h2, h3], input_merge_mode=Sum())

graph = Graph(start=[start], end=[e1, e2, e3])
o1, o2, o3 = graph.train_fprop()

o1_mse = tf.reduce_mean((y1_ph - o1)**2)
o2_mse = tf.reduce_mean((y2_ph - o2)**2)
o3_mse = tf.reduce_mean((y3_ph - o3)**2)
mse = o1_mse + o2_mse + o3_mse
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(mse)
```

-----
## Transfer Learning Example
Below is an example of transfer learning with bi-modal inputs merged at
the middle layer with a shared representation; in fact, TensorGraph can be used
to build any number of modalities for transfer learning.

<img src="draw/transferlearn.png" height="250">

```python
x1_ph = tf.placeholder('float32', [None, x1_dim])
x2_ph = tf.placeholder('float32', [None, x2_dim])
y_ph = tf.placeholder('float32', [None, y_dim])

s1 = StartNode(input_vars=[x1_ph])
s2 = StartNode(input_vars=[x2_ph])

h1 = HiddenNode(prev=[s1], layers=[Linear(shared_dim), RELU()])
h2 = HiddenNode(prev=[s2], layers=[Linear(shared_dim), RELU()])
h3 = HiddenNode(prev=[h1, h2], input_merge_mode=Sum(),
                layers=[Linear(y_dim), Softmax()])

e1 = EndNode(prev=[h3])
```
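
The rest of the example follows the same pattern as the earlier ones; a minimal assumed completion (with `learning_rate` as a hypothetical hyperparameter):
```python
graph = Graph(start=[s1, s2], end=[e1])
o1, = graph.train_fprop()

mse = tf.reduce_mean((y_ph - o1)**2)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(mse)
```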