"Searching through high dimensional hyperparameter spaces to find the most performant model can get unwieldy very fast. Hyperparameter sweeps provide an organized and efficient way to conduct a battle royale of models and pick the most accurate model. They enable this by automatically searching through combinations of hyperparameter values (e.g. learning rate, batch size, number of hidden layers, optimizer type) to find the most optimal values.\n",
40
+
"\n",
41
+
"In this tutorial we'll see how you can run sophisticated hyperparameter sweeps in 3 easy steps using Weights and Biases.\n",
42
+
"\n",
43
+
"## Sweeps: An Overview\n",
44
+
"\n",
45
+
"Running a hyperparameter sweep with Weights & Biases is very easy. There are just 3 simple steps:\n",
46
+
"\n",
47
+
"1. **Define the sweep:** we do this by creating a dictionary or a [YAML file](https://docs.wandb.com/library/sweeps/configuration) that specifies the parameters to search through, the search strategy, the optimization metric et all.\n",
48
+
"\n",
49
+
"2. **Initialize the sweep:** with one line of code we initialize the sweep and pass in the dictionary of sweep configurations:\n",
50
+
"`sweep_id = wandb.sweep(sweep_config)`\n",
51
+
"\n",
52
+
"3. **Run the sweep agent:** also accomplished with one line of code, we call wandb.agent() and pass the sweep_id to run, along with a function that defines your model architecture and trains it:\n",
53
+
"`wandb.agent(sweep_id, function=train)`\n",
54
+
"\n",
55
+
"And voila! That's all there is to running a hyperparameter sweep! In the notebook below, we'll walk through these 3 steps in more detail.\n",
56
+
"\n",
57
+
"\n",
58
+
"We highly encourage you to fork this notebook, tweak the parameters, or try the model with your own dataset!\n",
"Weights & Biases sweeps give you powerful levers to configure your sweeps exactly how you want them, with just a few lines of code. The sweeps config can be defined as a dictionary or a [YAML file](https://docs.wandb.com/library/sweeps).\n",
155
+
"\n",
156
+
"Let's walk through some of them together:\n",
157
+
"* **Metric** – This is the metric the sweeps are attempting to optimize. Metrics can take a `name` (this metric should be logged by your training script) and a `goal` (maximize or minimize). \n",
158
+
"* **Search Strategy** – Specified using the 'method' variable. We support several different search strategies with sweeps. \n",
159
+
" * **Grid Search** – Iterates over every combination of hyperparameter values.\n",
160
+
" * **Random Search** – Iterates over randomly chosen combinations of hyperparameter values.\n",
161
+
" * **Bayesian Search** – Creates a probabilistic model that maps hyperparameters to probability of a metric score, and chooses parameters with high probability of improving the metric. The objective of Bayesian optimization is to spend more time in picking the hyperparameter values, but in doing so trying out fewer hyperparameter values.\n",
162
+
"* **Stopping Criteria** – The strategy for determining when to kill off poorly peforming runs, and try more combinations faster. We offer several custom scheduling algorithms like [HyperBand](https://arxiv.org/pdf/1603.06560.pdf) and Envelope.\n",
163
+
"* **Parameters** – A dictionary containing the hyperparameter names, and discreet values, max and min values or distributions from which to pull their values to sweep over.\n",
164
+
"\n",
165
+
"You can find a list of all configuration options [here](https://docs.wandb.com/library/sweeps/configuration)."
"# Configure the sweep – specify the parameters to search through, the search strategy, the optimization metric et all.\n",
"Before we can run the sweep, let's define a function that creates and trains our neural network.\n",
251
+
"\n",
252
+
"In the function below, we define a simplified version of a VGG19 model in Keras, and add the following lines of code to log models metrics, visualize performance and output and track our experiments easily:\n",
253
+
"* **wandb.init()** – Initialize a new W&B run. Each run is single execution of the training script.\n",
254
+
"* **wandb.config** – Save all your hyperparameters in a config object. This lets you use our app to sort and compare your runs by hyperparameter values.\n",
255
+
"* **callbacks=[WandbCallback()]** – Fetch all layer dimensions, model parameters and log them automatically to your W&B dashboard.\n",
256
+
"* **wandb.log()** – Logs custom objects – these can be images, videos, audio files, HTML, plots, point clouds etc. Here we use wandb.log to log images of Simpson characters overlaid with actual and predicted labels."
"# The sweep calls this function with each set of hyperparameters\n",
269
+
"def train():\n",
270
+
" # Default values for hyper-parameters we're going to sweep over\n",
271
+
" config_defaults = {\n",
272
+
" 'epochs': 5,\n",
273
+
" 'batch_size': 128,\n",
274
+
" 'weight_decay': 0.0005,\n",
275
+
" 'learning_rate': 1e-3,\n",
276
+
" 'activation': 'relu',\n",
277
+
" 'optimizer': 'nadam',\n",
278
+
" 'hidden_layer_size': 128,\n",
279
+
" 'conv_layer_size': 16,\n",
280
+
" 'dropout': 0.5,\n",
281
+
" 'momentum': 0.9,\n",
282
+
" 'seed': 42\n",
283
+
" }\n",
284
+
"\n",
285
+
" # Initialize a new wandb run\n",
286
+
" wandb.init(config=config_defaults)\n",
287
+
"\n",
288
+
" # Config is a variable that holds and saves hyperparameters and inputs\n",
289
+
" config = wandb.config\n",
290
+
"\n",
291
+
" # Define the model architecture - This is a simplified version of the VGG19 architecture\n",
292
+
" model = Sequential()\n",
293
+
"\n",
294
+
" # Set of Conv2D, Conv2D, MaxPooling2D layers with 32 and 64 filters\n",
"# – sweep_id: the sweep_id to run - this was returned above by wandb.sweep()\n",
350
+
"# – function: function that defines your model architecture and trains it\n",
351
+
"wandb.agent(sweep_id, train)"
"# Visualize Sweeps Results\n",
364
+
"\n",
365
+
"## Parallel coordinates plot\n",
366
+
"This plot maps hyperparameter values to model metrics. It’s useful for honing in on combinations of hyperparameters that led to the best model performance.\n",
"The hyperparameter importance plot surfaces which hyperparameters were the best predictors of, and highly correlated to desirable values for your metrics.\n",
"These visualizations can help you save both time and resources running expensive hyperparameter optimizations by honing in on the parameters (and value ranges) that are the most important, and thereby worthy of further exploration.\n",
# Next step - Get your hands dirty with sweeps

We created a simple training script and [a few flavors of sweep configs](https://github.com/wandb/examples/tree/master/keras-cnn-fashion) for you to play with. We highly encourage you to give these a try. This repo also has examples to help you try more advanced sweep features like [Bayesian Hyperband](https://app.wandb.ai/wandb/examples-keras-cnn-fashion/sweeps/us0ifmrf?workspace=user-lavanyashukla) and [Hyperopt](https://app.wandb.ai/wandb/examples-keras-cnn-fashion/sweeps/xbs2wm5e?workspace=user-lavanyashukla).