FlexAI Inference - Public Model

We’ll get started by creating an Inference Endpoint for a public model hosted on the Hugging Face Hub that does not require us to provide a Hugging Face Access Token. This will allow us to focus on the deployment process itself for now.

We’ll be using https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0.

  1. Navigate to the Inference section from either the navigation bar or the card on the home page.
  2. Select the "+ New" button to display the "Launch Inference" panel.
  3. Fill out the Launch Inference form according to the instructions below.

The Launch Inference form

The Launch Inference form consists of a set of required and optional fields that you can use to customize your deployment.

Required Fields

  • Name: A unique name for your inference endpoint. This will be used to identify your endpoint in the FlexAI console. It must follow the FlexAI resource naming conventions.
  • Hugging Face Model: The Hugging Face Hub identifier of the model to deploy (e.g., TinyLlama/TinyLlama-1.1B-Chat-v1.0).
  • Cluster: The cluster where the inference workload will run. It can be selected from a dropdown list of available clusters in your FlexAI account.

Form Values

  • Name: quickstart-inference-tinyLlama
  • Hugging Face Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • Cluster: Your organization's designated cluster

Other fields

There are a few optional fields that you can use to customize your deployment:

  • Hugging Face Token: Only required if the model you want to deploy is private or requires authentication.
  • API Key: A secret key that will be used to authenticate requests to your Inference endpoint. If left empty, a random API Key will be generated and displayed to you after you initiate the deployment process. Make sure to copy it and store it in a safe place, as you will not be able to see it again.
  • vLLM Parameters: A set of arguments that will be passed to the vLLM server. You can use these to customize the server's behavior, such as the maximum model context length, the data type, or the GPU memory utilization (see the example below).
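
For illustration, here are a few arguments that vLLM's server accepts; the flags are real vLLM options, but the values are hypothetical examples rather than recommendations:

```
--max-model-len 2048 --dtype auto --gpu-memory-utilization 0.90
```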

Starting the Inference Endpoint

After filling out the form, select the Submit button to start the Inference Endpoint deployment.

You should see a confirmation window displaying the details of your Inference Endpoint, including the API Key that needs to be used to authenticate requests to the endpoint.

After a few minutes, your Inference Endpoint should be up and running.
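
Once it is running, you can send a test request. Here is a minimal sketch, assuming the endpoint exposes vLLM's OpenAI-compatible chat completions route and accepts the API Key as a bearer token; the URL and key below are placeholders, not real values:

```python
import requests

# Placeholders: substitute the endpoint URL from the Details tab and the
# API Key shown in the confirmation window.
BASE_URL = "https://your-endpoint.example.com"
API_KEY = "your-api-key"

response = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```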

Inference Endpoint Details

You can select the gear icon (labeled Configure) in the Actions column of your newly created Endpoint's row in the Inference Endpoint list to open the details panel of the Inference Endpoint deployment.

The Details tab opens by default, providing detailed information about your Inference Endpoint, organized as follows:

The Summary tab

  • ID: The unique identifier of the Inference Endpoint.
  • Name: The name you assigned to the Inference Endpoint.
  • Status: The current status of the Inference Endpoint (e.g., Running, Stopped).
  • URL: The base URL of the Inference Endpoint, which you can use to query the model.
  • Playground URL: The URL of the Inference Playground, a user-friendly interface for interacting with your deployed model.
  • Dashboard URL: The URL of the Inference Endpoint dashboard, where you can monitor the performance and usage of your model.
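
Once you have the base URL, a quick way to confirm the endpoint is serving your model is to list the models it exposes. A minimal sketch, assuming the endpoint serves vLLM's standard OpenAI-compatible routes; the URL and key are placeholders:

```python
import requests

BASE_URL = "https://your-endpoint.example.com"  # placeholder: the URL field above
API_KEY = "your-api-key"                        # placeholder: your API Key

# /v1/models is part of vLLM's OpenAI-compatible API; it returns the model(s)
# the server is hosting, which makes it a convenient readiness check.
response = requests.get(
    f"{BASE_URL}/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print([model["id"] for model in response.json()["data"]])
```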

Configuration

  • Device Architecture: The architecture of the device where the Inference Endpoint is running (e.g., nvidia).
  • Runtime Args: The vLLM runtime arguments that were used to deploy the Inference Endpoint. These can be customized when creating or updating the Inference Endpoint.
  • HF Token Secret Name: The name of the FlexAI Secret that contains the Hugging Face Access Token. This is only shown if the Inference Endpoint requires a Hugging Face Access Token to access the model.
  • API Key Secret Name: The name of the FlexAI Secret that contains the API Key used to authenticate requests to the Inference Endpoint.

The Activity tab

The Activity tab provides you with a timeline of events related to your Inference Endpoint, including deployment status changes, scaling events, and more.


The Logs tab

The Logs tab provides you with real-time logs from your Inference Endpoint, allowing you to monitor its activity and troubleshoot any issues that may arise.

You can use the Search bar to filter the logs by a specific keyword, which is useful for quickly finding relevant information.


Inference Endpoint Actions

The Inference Endpoints table’s Actions column provides a set of actions that you can use to manage your Inference Endpoint:

  • Configure: To access its Details panel.
  • Pause: To temporarily stop the Inference Endpoint without deleting it.
  • Resume: To restart a paused Inference Endpoint.
  • Delete: To permanently remove the Inference Endpoint.