Hyper-parameter tuning
Running a hyperparameter tuning trial with ZenML.
Hyper‑parameter tuning is the process of systematically searching for the best set of hyper‑parameters for your model. In ZenML, you can express these experiments declaratively inside a pipeline so that every trial is tracked, reproducible and shareable.
In this tutorial you will:
Build a simple training step that takes a hyper‑parameter as input.
Create a fan‑out / fan‑in pipeline that trains multiple models in parallel – one for each hyper‑parameter value.
Select the best performing model.
Run the pipeline and inspect the results in the ZenML dashboard or programmatically.
To follow along you will need:
ZenML installed and an active stack (the local default stack is fine)
scikit‑learn installed (pip install scikit-learn)
Basic familiarity with ZenML pipelines and steps
Create a training step that accepts the learning‑rate as an input parameter and returns both the trained model and its training accuracy:
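A minimal sketch of such a step is shown below. It assumes the Iris toy dataset and scikit-learn's SGDClassifier, whose eta0 parameter plays the role of the learning rate; both choices are illustrative rather than prescribed by ZenML.

```python
from typing import Tuple

from sklearn.base import ClassifierMixin
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from typing_extensions import Annotated
from zenml import step


@step
def train_step(
    learning_rate: float,
) -> Tuple[Annotated[ClassifierMixin, "model"], Annotated[float, "accuracy"]]:
    """Train a classifier with the given learning rate and report its training accuracy."""
    X, y = load_iris(return_X_y=True)
    model = SGDClassifier(learning_rate="constant", eta0=learning_rate, random_state=42)
    model.fit(X, y)
    accuracy = float(model.score(X, y))
    return model, accuracy
```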
Next, wire several instances of the same train_step into a pipeline, each with a different hyper‑parameter. Afterwards, use a selection step that takes all models as input and decides which one is best.
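One way to sketch this fan‑out / fan‑in pattern is shown below. The invocation arguments id and after, as well as the exact layout of the step outputs fetched through the Client, are assumptions based on recent ZenML releases and may need small adjustments for your installed version.

```python
from zenml import get_step_context, pipeline, step
from zenml.client import Client


@step
def select_best_model(search_steps_prefix: str) -> None:
    """Compare the accuracy of all training steps in this run and log the winner."""
    run_name = get_step_context().pipeline_run.name
    run = Client().get_pipeline_run(run_name)

    best_lr, best_accuracy = None, -1.0
    for step_name, step_info in run.steps.items():
        if not step_name.startswith(search_steps_prefix):
            continue
        # Recent ZenML releases map each output name to a list of artifact
        # versions; older releases return a single artifact version.
        accuracy_artifact = step_info.outputs["accuracy"]
        if isinstance(accuracy_artifact, list):
            accuracy_artifact = accuracy_artifact[0]
        accuracy = accuracy_artifact.load()
        learning_rate = step_info.config.parameters["learning_rate"]
        if accuracy > best_accuracy:
            best_lr, best_accuracy = learning_rate, accuracy

    print(f"Best learning rate: {best_lr} (training accuracy {best_accuracy:.3f})")


@pipeline
def hp_tuning_pipeline():
    """Train one model per candidate learning rate, then pick the best one."""
    after = []
    for i, lr in enumerate([0.001, 0.01, 0.1]):
        train_step(learning_rate=lr, id=f"train_step_{i}")
        after.append(f"train_step_{i}")
    # Run the selection step only after every training step has finished.
    select_best_model(search_steps_prefix="train_step_", after=after)
```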
Currently, ZenML doesn't allow passing a variable number of inputs into a step. The workaround shown above queries the artifacts after the fact via the Client.
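With the steps and pipeline defined, a run can be triggered by calling the pipeline like a regular Python function, for example from a small entry‑point script (sketch):

```python
if __name__ == "__main__":
    hp_tuning_pipeline()
```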
While the pipeline is running you can:
follow the logs in your terminal
open the ZenML dashboard and watch the DAG execute
Once the run is finished you can programmatically analyze which hyper‑parameter performed best or load the chosen model:
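The following sketch fetches the latest run of the tuning pipeline and compares the accuracy of each fan‑out training step; as above, the exact shape of the step outputs may vary slightly between ZenML versions.

```python
from zenml.client import Client

# Fetch the most recent run of the tuning pipeline.
run = Client().get_pipeline("hp_tuning_pipeline").last_run

best_lr, best_accuracy, best_model = None, -1.0, None
for step_name, step_info in run.steps.items():
    if not step_name.startswith("train_step_"):
        continue
    accuracy_artifact = step_info.outputs["accuracy"]
    model_artifact = step_info.outputs["model"]
    # Recent ZenML releases return a list of artifact versions per output name.
    if isinstance(accuracy_artifact, list):
        accuracy_artifact, model_artifact = accuracy_artifact[0], model_artifact[0]
    accuracy = accuracy_artifact.load()
    if accuracy > best_accuracy:
        best_lr = step_info.config.parameters["learning_rate"]
        best_accuracy = accuracy
        best_model = model_artifact.load()  # the winning sklearn model object

print(f"Best learning rate: {best_lr} (training accuracy {best_accuracy:.3f})")
```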
For a deeper exploration of how to query past pipeline runs, see the dedicated tutorial on fetching pipeline runs.
From here you could:
Replace the simple grid‑search with a more sophisticated tuner (e.g. sklearn.model_selection.GridSearchCV or a dedicated hyper‑parameter optimization library).
Serve the winning model right away via a model deployer.
Move the pipeline to a remote orchestrator to scale out the search.