Creates an Inference Endpoint from a model hosted by Hugging Face. The target model must be part of the vLLM supported models list found here 🔗.

flexai inference serve <inference_endpoint_name> [ --accels <number_of_accelerators> --accel-sm-slices <number_of_slices> --affinity <key1=value1,key2=value2,...> --api-key-secret <flexai_secret_name> --checkpoint <checkpoint_name_or_uuid> --device-arch <device_architecture> --hf-token-secret <flexai_secret_name> --max-replicas <max_replicas> --min-replicas <min_replicas> --no-queuing --runtime <runtime_name> ] (-- --model=<model_name> [<VLLM_Arguments...>])

<inference_endpoint_name>
The name of the Inference Endpoint to create.
It must be unique within the organization and must follow the Resource Naming Conventions.
Examples:
- dev-llama-endpoint
- prod-mixtral-endpoint
- vision-gemma

<model_name>
The name of the model to use for the Inference Endpoint.
Visit the vLLM supported models list found here 🔗 to see the list of supported models.

<VLLM_Arguments...>
vLLM Engine Arguments 🔗 that can be passed after the end-of-options marker (--).
Note: The --device argument is not supported: FlexAI handles device selection.
Examples:
- --task
- --enable-lora
- --seed
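As a quick sketch of how the pieces above fit together: the endpoint name is taken from the examples above, while facebook/opt-125m is only an illustrative, ungated model from the vLLM supported list, not a required choice. Any vLLM Engine Arguments would follow --model inside the same trailing group.

flexai inference serve dev-llama-endpoint -- --model=facebook/opt-125m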
--accel-sm-slices <number_of_slices>
Number of slices to divide each SM into.

--accels <number_of_accelerators>
Number of accelerators/GPUs to use.
Default: 1
Examples:
- --accels 2
- --accels 5
- --accels 8

--affinity <key1=value1,key2=value2,...>
Hardware affinity settings for the Inference Endpoint.
Default: []
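For instance, a sketch of a two-accelerator deployment. The endpoint name, the model, and the vLLM --tensor-parallel-size value are illustrative assumptions; whether you need tensor parallelism at all depends on the model you actually deploy.

flexai inference serve qwen-14b-endpoint --accels 2 -- --model=Qwen/Qwen2.5-14B-Instruct --tensor-parallel-size=2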
--api-key-secret <flexai_secret_name>
The name of a FlexAI Secret containing the API key you want to set to protect the Inference Endpoint.
If not provided: a FlexAI Secret named <inference_endpoint_name>-api-key containing the auto-generated API key will be created.
Examples:
- --api-key-secret api-key-diego-test
- --api-key-secret prod-llama-endpoint-key
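For example, assuming a FlexAI Secret named prod-llama-endpoint-key already exists in your organization (the endpoint and model names are illustrative):

flexai inference serve prod-llama-endpoint --api-key-secret prod-llama-endpoint-key -- --model=Qwen/Qwen2.5-7B-Instruct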
--checkpoint <checkpoint_name_or_uuid>
A Checkpoint to serve the Inference Endpoint from. It can be either:
- The name of a previously pushed Checkpoint. Use flexai checkpoint list to see available Checkpoints.
  Examples: --checkpoint Mixtral-8x7B-v0_1, --checkpoint gemma-3n-E4B-it
- The UUID of an Inference Ready Checkpoint generated during the execution of a Training or Fine-tuning job. Use flexai training checkpoints to see available Checkpoints.
  Example: --checkpoint 3fa85f64-5717-4562-b3fc-2c963f66afa6
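A possible workflow sketch, reusing the Checkpoint name from the examples above: list your Checkpoints, then serve from one of them. Whether the trailing --model group is still required when --checkpoint is set is not spelled out here; it is shown below under the assumption that it names the Checkpoint's base model.

flexai checkpoint list
flexai inference serve mixtral-endpoint --checkpoint Mixtral-8x7B-v0_1 -- --model=mistralai/Mixtral-8x7B-v0.1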
--device-arch <device_architecture>
The architecture of the device to run the Inference Endpoint on.
One of: nvidia, amd, tt
Default: nvidia
Example: --device-arch nvidia

--help, -h
Displays this help page.
--hf-token-secret <flexai_secret_name>
The name of the FlexAI Secret containing the Hugging Face token that will be used to access the model.
To create a Hugging Face Access Token, follow these steps:
We recommend you keep the "Save your Access Token" prompt open until you've securely stored your token as a FlexAI Secret, as you won't be able to view it again.
Examples:
- --hf-token-secret diego_hf_token
- --hf-token-secret prod-hf-token
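For example, to serve a gated model such as meta-llama/Llama-3.1-8B-Instruct (an illustrative choice), assuming you have already accepted its license on the Hugging Face Hub and stored your token in a FlexAI Secret named prod-hf-token:

flexai inference serve prod-llama-endpoint --hf-token-secret prod-hf-token -- --model=meta-llama/Llama-3.1-8B-Instruct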
--max-replicas <max_replicas>
The maximum number of replicas to use for the Inference Endpoint.
Visit the FlexAI Inference Autoscaling page to learn more about how autoscaling works and how to configure it.
Examples:
- --max-replicas 1
- --max-replicas 10
- --max-replicas 5

--min-replicas <min_replicas>
The minimum number of replicas to use for the Inference Endpoint.
Visit the FlexAI Inference Autoscaling page to learn more about how autoscaling works and how to configure it.
Examples:
- --min-replicas 0
- --min-replicas 5
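As a sketch, the following presumably lets the endpoint scale down to zero replicas when idle and out to five under load; see the FlexAI Inference Autoscaling page for the exact behavior, and treat the endpoint and model names as placeholders.

flexai inference serve qwen-chat-endpoint --min-replicas 0 --max-replicas 5 -- --model=Qwen/Qwen2.5-7B-Instruct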
--no-queuing
Disable queuing for the Inference Endpoint.
This means that if there are not enough resources available in the cluster, the request will be rejected immediately instead of being queued.

--runtime <runtime_name>
The name of the runtime to use for the Inference Endpoint.
If not provided, the default runtime set for the organization will be used.

--verbose
Provides more detailed output when initiating an Inference Serving operation.
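Combining a few of these flags in one sketch: the runtime name my-org-runtime is purely hypothetical, and the endpoint/model pair is illustrative (Gemma models are gated on the Hugging Face Hub, hence the token secret).

flexai inference serve vision-gemma --runtime my-org-runtime --no-queuing --verbose --hf-token-secret prod-hf-token -- --model=google/gemma-3-4b-it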
Keep in mind that some models are “Gated”, meaning that you need to go through a process of agreeing to their license agreement, privacy policy, or similar before you can use them.
You can visit the model’s page on the Hugging Face Hub to see if it is marked as “Gated”; gated models are identified by a “Gated” badge on the model page.
If the model is “Gated”, its page will include the necessary information on how to proceed.
If you have already gone through the process, you will find a badge on the model’s page indicating that you have access to the model.
Learn more about deploying an Inference Endpoint from a Private or Gated model in the Creating an Inference Endpoint: Private Model quickstart guide.