|
13 | 13 | { |
14 | 14 | "cell_type": "code", |
15 | 15 | "execution_count": null, |
16 | | - "metadata": { |
17 | | - "collapsed": false |
18 | | - }, |
| 16 | + "metadata": {}, |
19 | 17 | "outputs": [], |
20 | 18 | "source": [ |
21 | 19 | "%matplotlib inline\n", |
|
38 | 36 | { |
39 | 37 | "cell_type": "code", |
40 | 38 | "execution_count": null, |
41 | | - "metadata": { |
42 | | - "collapsed": false |
43 | | - }, |
| 39 | + "metadata": {}, |
44 | 40 | "outputs": [], |
45 | 41 | "source": [ |
46 | 42 | "data_path = 'Bike-Sharing-Dataset/hour.csv'\n", |
|
51 | 47 | { |
52 | 48 | "cell_type": "code", |
53 | 49 | "execution_count": null, |
54 | | - "metadata": { |
55 | | - "collapsed": false |
56 | | - }, |
| 50 | + "metadata": {}, |
57 | 51 | "outputs": [], |
58 | 52 | "source": [ |
59 | 53 | "rides.head()" |
|
73 | 67 | { |
74 | 68 | "cell_type": "code", |
75 | 69 | "execution_count": null, |
76 | | - "metadata": { |
77 | | - "collapsed": false |
78 | | - }, |
| 70 | + "metadata": {}, |
79 | 71 | "outputs": [], |
80 | 72 | "source": [ |
81 | 73 | "rides[:24*10].plot(x='dteday', y='cnt')" |
|
92 | 84 | { |
93 | 85 | "cell_type": "code", |
94 | 86 | "execution_count": null, |
95 | | - "metadata": { |
96 | | - "collapsed": false |
97 | | - }, |
| 87 | + "metadata": {}, |
98 | 88 | "outputs": [], |
99 | 89 | "source": [ |
100 | 90 | "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\n", |
|
121 | 111 | { |
122 | 112 | "cell_type": "code", |
123 | 113 | "execution_count": null, |
124 | | - "metadata": { |
125 | | - "collapsed": false |
126 | | - }, |
| 114 | + "metadata": {}, |
127 | 115 | "outputs": [], |
128 | 116 | "source": [ |
129 | 117 | "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n", |
|
147 | 135 | { |
148 | 136 | "cell_type": "code", |
149 | 137 | "execution_count": null, |
150 | | - "metadata": { |
151 | | - "collapsed": false |
152 | | - }, |
| 138 | + "metadata": {}, |
153 | 139 | "outputs": [], |
154 | 140 | "source": [ |
155 | 141 | "# Save data for approximately the last 21 days \n", |
|
174 | 160 | { |
175 | 161 | "cell_type": "code", |
176 | 162 | "execution_count": null, |
177 | | - "metadata": { |
178 | | - "collapsed": false |
179 | | - }, |
| 163 | + "metadata": {}, |
180 | 164 | "outputs": [], |
181 | 165 | "source": [ |
182 | 166 | "# Hold out the last 60 days or so of the remaining data as a validation set\n", |
|
336 | 320 | { |
337 | 321 | "cell_type": "code", |
338 | 322 | "execution_count": null, |
339 | | - "metadata": { |
340 | | - "collapsed": false |
341 | | - }, |
| 323 | + "metadata": {}, |
342 | 324 | "outputs": [], |
343 | 325 | "source": [ |
344 | 326 | "import unittest\n", |
|
415 | 397 | "This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\n", |
416 | 398 | "\n", |
417 | 399 | "### Choose the learning rate\n", |
418 | | - "This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\n", |
| 400 | + "This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\n", |
419 | 401 | "\n", |
420 | 402 | "### Choose the number of hidden nodes\n", |
421 | 403 | "The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose." |
|
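To make the three choices above concrete, here is a minimal sketch of the kind of training loop this markdown cell describes. It assumes the `NeuralNetwork` class, `train_features`, `train_targets`, `val_features`, and `val_targets` defined earlier in the notebook; the constructor signature, batch size, and starting values are illustrative assumptions, not the notebook's actual settings.

```python
import numpy as np

# Illustrative starting values -- assumptions, not tuned settings.
iterations = 2000     # batches of training samples to run
learning_rate = 0.1   # try ~1.0 if updates are divided by n_records
hidden_nodes = 10     # balance capacity against too many learning directions

def MSE(y_hat, y):
    return np.mean((y_hat - y) ** 2)

# Assumed constructor: (input_nodes, hidden_nodes, output_nodes, learning_rate)
network = NeuralNetwork(train_features.shape[1], hidden_nodes, 1, learning_rate)

losses = {'train': [], 'validation': []}
for ii in range(iterations):
    # Train on a random batch of 128 records
    batch = np.random.choice(len(train_features), size=128, replace=False)
    network.train(train_features.iloc[batch].values,
                  train_targets.iloc[batch]['cnt'].values)

    # Falling training loss with rising validation loss is the
    # overfitting signal described above.
    losses['train'].append(MSE(network.run(train_features.values).T,
                               train_targets['cnt'].values))
    losses['validation'].append(MSE(network.run(val_features.values).T,
                                    val_targets['cnt'].values))
```

Plotting `losses['train']` against `losses['validation']`, as the plotting cell later in this diff does, is how you spot the point where more iterations stop helping.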
424 | 406 | { |
425 | 407 | "cell_type": "code", |
426 | 408 | "execution_count": null, |
427 | | - "metadata": { |
428 | | - "collapsed": false |
429 | | - }, |
| 409 | + "metadata": {}, |
430 | 410 | "outputs": [], |
431 | 411 | "source": [ |
432 | 412 | "import sys\n", |
|
463 | 443 | { |
464 | 444 | "cell_type": "code", |
465 | 445 | "execution_count": null, |
466 | | - "metadata": { |
467 | | - "collapsed": false |
468 | | - }, |
| 446 | + "metadata": {}, |
469 | 447 | "outputs": [], |
470 | 448 | "source": [ |
471 | 449 | "plt.plot(losses['train'], label='Training loss')\n", |
|
486 | 464 | { |
487 | 465 | "cell_type": "code", |
488 | 466 | "execution_count": null, |
489 | | - "metadata": { |
490 | | - "collapsed": false |
491 | | - }, |
| 467 | + "metadata": {}, |
492 | 468 | "outputs": [], |
493 | 469 | "source": [ |
494 | 470 | "fig, ax = plt.subplots(figsize=(8,4))\n", |
|