
Commit a29a1d4

Some errors I noticed
since we cannot iterate through a float its better we take the floor division of (len(train_x) / batch_size) ==> (len(train_x) // batch_size) can't specify softmax_cross_entropy_with_logits without labels or logits tf.nn.softmax_cross_entropy_with_logits( y, y_) ==> tf.nn.softmax_cross_entropy_with_logits(labels = y_, logits = y)
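The floor-division fix matters because in Python 3 the `/` operator returns a float, which `range()` rejects, while `//` returns an int. A minimal sketch of the notebook's batching loop, using a small stand-in list in place of the real `train_x`:

```python
# Python 3: "/" yields a float, which range() raises a TypeError on;
# "//" yields an int, so the loop runs and drops any partial batch.
train_x = list(range(10))  # stand-in for the training set
batch_size = 3

num_batches = len(train_x) // batch_size  # 10 // 3 == 3 full batches
batches = []
for tt in range(0, num_batches):
    start_batch = batch_size * tt
    end_batch = batch_size * (tt + 1)
    batches.append(train_x[start_batch:end_batch])

print(batches)
```

Note that the trailing partial batch (here, the single element `9`) is silently dropped, which matches the behavior of the original notebook loop.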
1 parent 123312f commit a29a1d4

File tree

1 file changed: +2 −2 lines changed


tutorials_previous/5_tensorflow_traffic_light_classification.ipynb

Lines changed: 2 additions & 2 deletions
@@ -226,7 +226,7 @@
     "source": [
     "# Our loss function and optimizer\n",
     "\n",
-    "loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))\n",
+    "loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y_, logits = y))\n",
     "train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)\n",
     "sess.run(tf.initialize_all_variables())"
     ]
@@ -340,7 +340,7 @@
     "for i in range(0, max_epochs):\n",
     "\n",
     " # Iterate over our training set\n",
-    " for tt in range(0, (len(train_x) / batch_size)):\n",
+    " for tt in range(0, (len(train_x) // batch_size)):\n",
     " start_batch = batch_size * tt\n",
     " end_batch = batch_size * (tt + 1)\n",
     " train_step.run(feed_dict={x: train_x[start_batch:end_batch], y_: train_y[start_batch:end_batch]})\n",
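To see why the argument order in the first hunk matters: the loss is not symmetric in its two inputs, because softmax is applied to `logits` only before the cross-entropy against `labels` is taken. A pure-Python sketch of the same computation for a single example (no TensorFlow dependency; the values are hypothetical):

```python
import math

def softmax_cross_entropy_with_logits(labels, logits):
    """Mimics tf.nn.softmax_cross_entropy_with_logits for one example:
    softmax is applied to the logits only, then cross-entropy is taken
    against the (one-hot) labels."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(l * math.log(p) for l, p in zip(labels, probs))

y_ = [0.0, 1.0, 0.0]   # one-hot label
y = [2.0, 1.0, 0.1]    # raw logits from the network

correct = softmax_cross_entropy_with_logits(labels=y_, logits=y)
swapped = softmax_cross_entropy_with_logits(labels=y, logits=y_)
# Swapping the arguments applies softmax to the label vector instead,
# giving a different (meaningless) number -- hence the keyword form
# labels=y_, logits=y in the fix.
print(correct, swapped)
```

Passing the tensors positionally as `(y, y_)` silently computed the swapped version, which is why the fix names both keywords explicitly.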
