From 4c176207468114a8fa0e5b5f90105ca89c6a9f0a Mon Sep 17 00:00:00 2001
From: Julien Chaumond
Date: Wed, 7 May 2025 11:48:14 +0200
Subject: [PATCH] mention provider="auto" at least here

---
 docs/inference-providers/index.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/inference-providers/index.md b/docs/inference-providers/index.md
index bfb994dfc..f8534b9ca 100644
--- a/docs/inference-providers/index.md
+++ b/docs/inference-providers/index.md
@@ -59,6 +59,10 @@ You can use Inference Providers with your preferred tools, such as Python, JavaS
 
 In this section, we will demonstrate a simple example using [deepseek-ai/DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324), a conversational Large Language Model. For the example, we will use [Novita AI](https://novita.ai/) as Inference Provider.
 
+> [!TIP]
+> You can also automatically select a provider for a model using `provider="auto"` — it will pick the first available provider for your model based on your preferred order set in https://hf.co/settings/inference-providers.
+> This is the default if you don't specify a provider in our Python or JavaScript SDK.
+
 ### Authentication
 
 Inference Providers requires passing a user token in the request headers. You can generate a token by signing up on the Hugging Face website and going to the [settings page](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained). We recommend creating a `fine-grained` token with the scope to `Make calls to Inference Providers`.