Configure Python environments

Navigating multiple development environments.


ZenML deployments often involve multiple environments. This guide helps you manage dependencies and configurations across these environments.

Here is a visual overview of the different environments: the left box is the client environment, the middle one is the ZenML server environment, and the rightmost contains the build environments.

Client Environment (or the Runner environment)

The client environment (sometimes known as the runner environment) is where ZenML pipelines are compiled, i.e., where you call the pipeline function (typically in a run.py script). There are different types of client environments:

  • A local development environment.

  • A CI runner in production.

  • A ZenML Pro runner.

  • A runner image orchestrated by the ZenML server to start pipelines.

In all of these environments, you should use your preferred package manager (e.g., pip or poetry) to manage dependencies. Make sure to install the ZenML package and any required integrations.

The client environment typically follows these key steps when starting a pipeline:

  1. Compiling an intermediate pipeline representation via the @pipeline function.

  2. Creating or triggering pipeline and step build environments if running remotely.

  3. Triggering a run in the orchestrator.

Please note that the @pipeline function in your code is only ever called in this environment. Therefore, any computational logic inside the pipeline function runs at this so-called compile time, not at execution time, which happens later.
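To see the compile-time distinction in action, here is a minimal, self-contained mimic of the decorator mechanics (it does not use ZenML itself; `my_pipeline`, `load_data`, and `train_model` are hypothetical names):

```python
# Simplified mimic of pipeline compilation: calling the decorated
# pipeline function records a DAG of steps instead of running them.
compiled_steps = []

def step(func):
    def entrypoint(*args, **kwargs):
        # At compile time we only record the invocation.
        compiled_steps.append(func.__name__)
    return entrypoint

def pipeline(func):
    def compile_pipeline():
        func()  # the pipeline function body runs NOW, at compile time
        return list(compiled_steps)
    return compile_pipeline

@step
def load_data():
    ...

@step
def train_model():
    ...

@pipeline
def my_pipeline():
    # This body executes when the pipeline is *compiled*, so any heavy
    # computation placed here would run on the client, not remotely.
    load_data()
    train_model()

print(my_pipeline())  # ['load_data', 'train_model']
```

The step bodies never execute here; only the invocation order is captured, which is why heavy logic belongs inside steps rather than in the pipeline function itself.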

ZenML Server Environment

The ZenML server environment is a FastAPI application that manages pipelines and metadata. It includes the ZenML Dashboard and is accessed when you deploy ZenML. To manage dependencies, install them during ZenML deployment, but only if you have custom integrations, as most integrations are built in.

Execution Environments

When running locally, there is no real concept of an execution environment, as the client, server, and execution environment are all the same. However, when running a pipeline remotely, ZenML needs to transfer your code and environment over to the remote orchestrator. To achieve this, ZenML builds Docker images known as execution environments.

ZenML handles the Docker image configuration, creation, and pushing, starting with a base image containing ZenML and Python, then adding the pipeline dependencies. To manage the Docker image configuration, follow the steps in the "containerize your pipeline" guide, including specifying additional pip dependencies, using a custom parent image, and customizing the build process.

Image Builder Environment

By default, execution environments are created locally in the client environment using the local Docker client. However, this requires a Docker installation and the corresponding permissions. ZenML offers image builders, a special stack component, allowing users to build and push Docker images in a different, specialized image builder environment.

Note that even if you don't configure an image builder in your stack, ZenML still uses the local image builder to retain consistency across all builds. In this case, the image builder environment is the same as the client environment.

Handling dependencies

When using ZenML with other libraries, you may encounter issues with conflicting dependencies. ZenML aims to be stack- and integration-agnostic, allowing you to run your pipelines using the tools that make sense for your problems. With this flexibility comes the possibility of dependency conflicts.

ZenML allows you to install the dependencies required by integrations through the zenml integration install ... command. This is a convenient way to install dependencies for a specific integration, but it can also lead to dependency conflicts if you are using other libraries in your environment. An easy way to see whether the ZenML requirements are still met (after installing any extra dependencies required by your work) is to run zenml integration list and check that your desired integrations still bear the green tick symbol denoting that all their requirements are met.

Suggestions for Resolving Dependency Conflicts

Use a tool like pip-compile for reproducibility

Consider using a tool like pip-compile (available through the pip-tools package) to compile your dependencies into a static requirements.txt file that can be used across environments. (If you are using uv, you might want to use uv pip compile as an alternative.)

For a practical example and explanation of using pip-compile to address exactly this need, see our 'gitflow' repository and workflow to learn more.
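As a sketch of that workflow, a requirements.in file lists only your direct dependencies (the package names below are placeholders), and the compile step pins the full transitive set:

```
# requirements.in — direct dependencies only (placeholder names)
zenml
scikit-learn

# Compile into a fully pinned requirements.txt:
#   pip-compile requirements.in --output-file requirements.txt
# or, if you use uv:
#   uv pip compile requirements.in -o requirements.txt
```

The pinned requirements.txt can then be committed and installed identically (pip install -r requirements.txt) in every environment.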

Use pip check to discover dependency conflicts

Running pip check will verify that your environment's dependencies are compatible with one another. If not, you will see a list of the conflicts. A conflict may or may not block your specific use case, but it is certainly worth knowing whether one exists.
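The same kind of check can be scripted. The sketch below uses only the standard library to flag declared requirements whose distributions are missing; unlike the real pip check, it ignores version specifiers and environment markers:

```python
import re
from importlib import metadata

def missing_requirements(dist_name):
    """Return declared requirements of `dist_name` whose distributions
    are not installed (name-presence check only, no version matching)."""
    missing = []
    for req in metadata.requires(dist_name) or []:
        # Skip extras/marker-conditional requirements for simplicity.
        if ";" in req:
            continue
        # Extract the bare distribution name from the requirement string.
        name = re.split(r"[\s\[<>=!~(]", req.strip(), maxsplit=1)[0]
        try:
            metadata.distribution(name)
        except metadata.PackageNotFoundError:
            missing.append(req)
    return missing

print(missing_requirements("pip"))  # [] — pip declares no unconditional requirements
```

For real conflict detection (including version ranges), prefer pip check itself; this snippet is only useful as a quick programmatic smoke test.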

Well-known dependency resolution issues

Some of ZenML's integrations come with strict dependency and package version requirements. We try to keep these dependency ranges as wide as possible for the integrations developed by ZenML, but it is not always possible to make everything work completely smoothly. Here is one of the known issues:

  • click: ZenML currently requires click~=8.0.3 for its CLI, on account of another ZenML dependency. Using a version of click greater than 8.0.3 in your own project may cause unanticipated behavior.

Manually bypassing ZenML's integration installation

It is possible to skip ZenML's integration installation process and install dependencies manually. This is not recommended, but it is possible and can be done at your own risk.

To do this, you will need to install the dependencies for the integration you want to use manually. You can find the dependencies for the integrations by running the following:

# to have the requirements exported to a file
zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME

# to have the requirements printed to the console
zenml integration export-requirements INTEGRATION_NAME

Note that the zenml integration install ... command runs a pip install ... under the hood as part of its implementation, taking the dependencies listed in the integration object and installing them. For example, zenml integration install gcp will run pip install "kfp==1.8.16" "gcsfs" "google-cloud-secret-manager" ... and so on, since these packages are specified in the integration definition.

You can then amend and tweak those requirements as you see fit. Note that if you are using a remote orchestrator, you would then have to place the updated versions of the dependencies in a DockerSettings object (described in detail in the containerization guide), which will make sure everything works as you need.

