TensorFlow Presentation
TensorFlow
By: Jared Ostmeyer
Laboratory of Dr. Lindsay Cowell, UTSW
https://github.com/jostmey/NakedTensor
What is TensorFlow?
y = mx + b
[Plot: a line fit to scattered data points, with the slope m and the y-intercept b labeled]
import tensorflow as tf

xs = [ 0.00, 1.00, 2.00, ...]  # data list truncated on the slide
ys = [-0.82, -0.94, -0.12, ...]  # data list truncated on the slide

m_initial = -0.5  # initial guess for the slope
b_initial = 1.0   # initial guess for the y-intercept
m = tf.Variable(m_initial)
b = tf.Variable(b_initial)

# Accumulate the squared error one data point at a time
error = 0.0
for i in range(len(xs)):
    y_model = m*xs[i] + b
    error += (ys[i] - y_model)**2

operation = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(error)

with tf.Session() as session:
    session.run(tf.initialize_all_variables())
    for iteration in range(10000):
        session.run(operation)
    print('Slope:', m.eval(), 'Intercept:', b.eval())
$ python3 serial.py
Slope: 0.297022 Intercept: -0.860827
$ # Runtime was 11.1 seconds
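What tf.train.GradientDescentOptimizer does for this model can be sketched by hand: differentiate the summed squared error with respect to m and b and step downhill. A minimal NumPy illustration, using only the three data points visible on the slide:

```python
import numpy as np

# The first three data points shown on the slide (illustrative subset).
xs = np.array([0.00, 1.00, 2.00])
ys = np.array([-0.82, -0.94, -0.12])

m, b = -0.5, 1.0          # same starting guesses as the TensorFlow code
learning_rate = 0.001

for _ in range(10000):
    residual = ys - (m * xs + b)
    # Gradients of sum((ys - (m*xs + b))**2) with respect to m and b.
    grad_m = -2.0 * np.sum(residual * xs)
    grad_b = -2.0 * np.sum(residual)
    m -= learning_rate * grad_m
    b -= learning_rate * grad_b

print('Slope:', m, 'Intercept:', b)
```

TensorFlow derives these gradients automatically from the computation graph, which is why the slide code never writes them out.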
Math Functions
tf.exp
tf.tan
tf.pow
tf.sign
Control Flow
tf.cond
tf.while_loop
Tensor Operations
tf.matmul
tf.add
tf.reduce_sum
tf.cumprod
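The tensor operations listed above have direct NumPy analogues; a minimal sketch of what each computes (NumPy is used here only so the snippet stands alone):

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(np.matmul(a, b))           # like tf.matmul: matrix product
print(np.add(a, b))              # like tf.add: elementwise sum
print(np.sum(a))                 # like tf.reduce_sum: sum over all entries -> 10.0
print(np.cumprod([1, 2, 3, 4]))  # like tf.cumprod: running product -> [1, 2, 6, 24]
```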
import tensorflow as tf

xs = [ 0.00, 1.00, 2.00, ...]  # data list truncated on the slide
ys = [-0.82, -0.94, -0.12, ...]  # data list truncated on the slide

m_initial = -0.5
b_initial = 1.0
m = tf.Variable(m_initial)
b = tf.Variable(b_initial)

# Vectorized model: one tensor expression replaces the Python loop
ys_model = m*xs + b
error = tf.reduce_sum((ys - ys_model)**2)

operation = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(error)

with tf.Session() as session:
    session.run(tf.initialize_all_variables())
    for iteration in range(10000):
        session.run(operation)
    print('Slope:', m.eval(), 'Intercept:', b.eval())
$ python3 tensor.py
Slope: 0.297022 Intercept: -0.860827
$ # Runtime was 2.5 seconds
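The two programs compute the same error; the speedup comes purely from replacing the per-point Python loop with one vectorized reduction. A small NumPy check of that equivalence (illustrative, on the three data points visible on the slide):

```python
import numpy as np

xs = np.array([0.00, 1.00, 2.00])   # subset of the slide's data
ys = np.array([-0.82, -0.94, -0.12])
m, b = -0.5, 1.0

# serial.py style: accumulate the squared error one point at a time
error_loop = 0.0
for i in range(len(xs)):
    error_loop += (ys[i] - (m * xs[i] + b)) ** 2

# tensor.py style: one vectorized expression, like tf.reduce_sum
error_vec = np.sum((ys - (m * xs + b)) ** 2)

print(error_loop, error_vec)  # identical values; the vectorized form is one operation
```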
[Diagram: the same model is applied to every input image, predicting labels such as Airplane, Cat, Car, Deer, Dog, Monkey, Ship, and Truck]
[Diagram: a neural network as alternating matrix multiplications and nonlinearities]
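The alternation of matrix multiplications and nonlinearities can be sketched as a forward pass. A minimal NumPy illustration; the layer sizes, random weights, and the choice of ReLU as the nonlinearity are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 4 inputs -> 3 hidden units -> 2 outputs.
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 2))

def relu(z):
    return np.maximum(z, 0.0)   # one common choice of nonlinearity

x = rng.standard_normal(4)
hidden = relu(x @ W1)           # matrix mul. + nonlinearity
output = relu(hidden @ W2)      # matrix mul. + nonlinearity
print(output.shape)             # (2,)
```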
Advice
Learn about gradient optimization
For most models, plain gradient descent is inefficient
Explore different built-in optimizers
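One such built-in alternative in the TF 1.x API, tf.train.MomentumOptimizer, speeds up plain gradient descent by accumulating a velocity term. The update rule can be sketched in NumPy (illustrative only, reusing the three data points visible on the slide):

```python
import numpy as np

xs = np.array([0.00, 1.00, 2.00])   # subset of the slide's data
ys = np.array([-0.82, -0.94, -0.12])

m, b = -0.5, 1.0
v_m, v_b = 0.0, 0.0                 # velocity accumulators
learning_rate, momentum = 0.001, 0.9

for _ in range(2000):
    residual = ys - (m * xs + b)
    grad_m = -2.0 * np.sum(residual * xs)
    grad_b = -2.0 * np.sum(residual)
    # Velocity update: past gradients keep contributing, damped by `momentum`
    v_m = momentum * v_m + learning_rate * grad_m
    v_b = momentum * v_b + learning_rate * grad_b
    m -= v_m
    b -= v_b

print('Slope:', m, 'Intercept:', b)
```

Note that this reaches the same fit in far fewer iterations than the 10000 used by plain gradient descent above.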