# FlexAI Docs

## Docs

- [Best Practices](https://docs.flex.ai/best-practices.md): Recommended practices for working with the FlexAI platform
- [Logging in with the CLI on a local Virtualized Environment](https://docs.flex.ai/best-practices/alternative-methods/cli-on-a-virtualized-env.md): Configure the FlexAI CLI for use in virtual machines and containers
- [Logging in with the CLI from a non-graphical environment](https://docs.flex.ai/best-practices/alternative-methods/logging-from-a-remote-env.md): Authenticate the FlexAI CLI on headless or remote environments
- [Resource Naming Conventions](https://docs.flex.ai/best-practices/resource-naming-conventions.md): Naming rules and constraints for FlexAI resources
- [Blueprints](https://docs.flex.ai/blueprints.md): A set of end-to-end experiments to explore FlexAI — from your first training job to advanced fine-tuning, inference, and multi-agent applications.
- [Fine-Tune LLMs with Axolotl on Domain-Specific Data](https://docs.flex.ai/blueprints/axolotl.md): Use Axolotl to fine-tune language models on your own data, then deploy as a production inference endpoint on FlexAI. FSDP, custom datasets, one-click deploy.
- [Resume Training from a Checkpoint on FlexAI — Never Lose a Run](https://docs.flex.ai/blueprints/continuing-a-training-job-from-a-checkpoint.md): Learn how to resume a training job from a saved checkpoint on FlexAI. Managed checkpoint storage means preemption never kills your progress.
- [RL Fine-Tuning with EasyR1: GRPO & DAPO for Better Reasoning](https://docs.flex.ai/blueprints/easyR1.md): Fine-tune LLMs with reinforcement learning using EasyR1 on FlexAI. GRPO and DAPO algorithms, better reasoning, distributed training with FSDP and vLLM.
- [Fine-Tune LLMs with Flash Attention on FlexAI — Faster Training](https://docs.flex.ai/blueprints/flash-attention-ft-on-a-language-model.md): Fine-tune a causal language model using Flash Attention on FlexAI. Faster training, lower memory, managed checkpoints, full walkthrough from setup to deploy.
- [Fine-Tune a Text-to-Speech Model on FlexAI](https://docs.flex.ai/blueprints/ft-on-a-tts-model.md): Fine-tune a TTS model on FlexAI with managed checkpoints and GPU access. Full tutorial from dataset prep to deployment, no infra setup.
- [Track Training Experiments with Weights & Biases on FlexAI](https://docs.flex.ai/blueprints/integrating-a-experiment-tracker.md): Integrate Weights & Biases experiment tracking with FlexAI training jobs. Monitor loss curves, compare runs, and share results — zero extra infra.
- [Fine-Tune and Deploy an LLM on FlexAI with LlamaFactory](https://docs.flex.ai/blueprints/llama-factory.md): Fine-tune and deploy any LLM with LlamaFactory on FlexAI. Managed checkpoints, train and serve in one platform, no lost runs from spot preemption.
- [Evaluate Any LLM Across 300+ Benchmarks with lm-evaluation-harness](https://docs.flex.ai/blueprints/lm-evaluation-harness.md): Run comprehensive LLM evaluation using EleutherAI's lm-evaluation-harness on FlexAI. 300+ tasks and benchmarks, reproducible results, no infra setup required.
- [Fine-Tune Stable Diffusion XL with LoRA on FlexAI](https://docs.flex.ai/blueprints/lora-ft-on-a-diffusion-model.md): Step-by-step guide to fine-tuning SDXL with LoRA on FlexAI. Launches in under 60s, managed checkpoints, built-in observability — no infra setup required.
- [Deploy Multi-Agent LangGraph Systems on FlexAI Inference Endpoints](https://docs.flex.ai/blueprints/multi-agent.md): Build and deploy a multi-agent LangGraph system on FlexAI inference endpoints. Full tutorial with working code, auto-scaling, cold start under 60 seconds.
- [Fine-Tune Llama 3.1 with QLoRA (4-bit) on FlexAI](https://docs.flex.ai/blueprints/qlora-ft-on-a-language-model.md): Fine-tune Llama 3.1 using QLoRA 4-bit quantization on FlexAI. Cut memory requirements by 75%, launch in under 60s, managed checkpoints included.
- [Build a RAG App with LangChain on FlexAI Inference Endpoints](https://docs.flex.ai/blueprints/rag-application.md): Build a retrieval-augmented generation app using LangChain and FlexAI inference endpoints. Interactive Q&A from your documents, deploy in minutes.
- [Run Your First Training Job on FlexAI — Quickstart Guide](https://docs.flex.ai/blueprints/running-a-simple-training-job.md): Get your first training job running on FlexAI in minutes. Step-by-step quickstart — no infra setup, managed checkpoints, built-in observability.
- [Run Distributed Data Parallel (DDP) Training on FlexAI](https://docs.flex.ai/blueprints/running-a-simple-training-job-with-ddp.md): Start a distributed DDP training job on FlexAI with just 2 flags. Multi-GPU training made simple — no SLURM, no infra config, launch in under 60 seconds.
- [Deploy Speech-to-Text with FlexAI Inference Endpoints](https://docs.flex.ai/blueprints/speech-to-text-inference.md): Build a speech-to-text transcription app using FlexAI inference endpoints. Record audio and get transcriptions in real time, hardware-agnostic deployment.
- [Stream Large Datasets During Training — No Memory Limits](https://docs.flex.ai/blueprints/streaming-datasets.md): Train on datasets that don't fit in memory using streaming on FlexAI. No preprocessing, no memory limits, works with any dataset size. Step-by-step walkthrough.
- [Generate Audio with Stable Audio Open on FlexAI Endpoints](https://docs.flex.ai/blueprints/text-to-audio-inference.md): Deploy Stable Audio Open 1.0 for audio generation via FlexAI inference endpoints. High-quality output, hardware-agnostic, scales automatically.
- [Text-to-Speech with Qwen3-TTS on FlexAI Inference Endpoints](https://docs.flex.ai/blueprints/text-to-audio-qwen3-tts.md): Deploy Qwen3-TTS for high-quality text-to-speech synthesis via FlexAI inference endpoints. Interactive demo, low-latency, production-ready.
- [Generate Images with Stable Diffusion 3.5 on FlexAI Endpoints](https://docs.flex.ai/blueprints/text-to-image-inference.md): Deploy Stable Diffusion 3.5 Large for AI image generation via FlexAI inference endpoints. High-quality output, auto-scaling, no code changes needed.
- [Text-to-Speech with Kokoro on FlexAI Inference Endpoints](https://docs.flex.ai/blueprints/text-to-speech-inference.md): Deploy the Kokoro TTS model for natural voice synthesis via FlexAI inference endpoints. Low-latency, hardware-agnostic, production-ready deployment.
- [Generate Video with Wan2.2 on FlexAI Inference Endpoints](https://docs.flex.ai/blueprints/text-to-video-inference.md): Deploy Wan2.2-T2V for high-quality AI video generation via FlexAI inference endpoints. Hardware-agnostic, auto-scaling, cold start under 60 seconds.
- [Fine-Tune YOLO11 Object Detection Models on FlexAI](https://docs.flex.ai/blueprints/ultralytics.md): Fine-tune and deploy YOLO11 for object detection, segmentation, and pose estimation on FlexAI using Ultralytics. From training to production endpoint.
- [Fine-Tune Vision-Language Models with GRPO on FlexAI](https://docs.flex.ai/blueprints/vlm-grpo.md): Train vision-language models using GRPO reinforcement learning with TRL, LoRA, vLLM, and DeepSpeed ZeRO-3 on FlexAI. Full tutorial with working code.
- [FlexAI CLI](https://docs.flex.ai/cli.md): Command-line interface for FlexAI - manage your AI workloads from the terminal
- [FlexAI CLI Login](https://docs.flex.ai/cli/authenticate.md): Authenticate the FlexAI CLI using your GitHub account
- [Changelog: 2024-02-21](https://docs.flex.ai/cli/changelog/2024-02-21.md)
- [Changelog: 2024-02-22](https://docs.flex.ai/cli/changelog/2024-02-22.md)
- [Changelog: 2024-03-07](https://docs.flex.ai/cli/changelog/2024-03-07.md)
- [Changelog: 2024-03-08](https://docs.flex.ai/cli/changelog/2024-03-08.md)
- [Changelog: 2024-03-19](https://docs.flex.ai/cli/changelog/2024-03-19.md)
- [Changelog: 2024-03-29](https://docs.flex.ai/cli/changelog/2024-03-29.md)
- [Changelog: 2024-05-07](https://docs.flex.ai/cli/changelog/2024-05-07.md)
- [Changelog: 2024-05-22](https://docs.flex.ai/cli/changelog/2024-05-22.md)
- [Changelog: 2024-06-25](https://docs.flex.ai/cli/changelog/2024-06-25.md)
- [Changelog: 2024-07-08](https://docs.flex.ai/cli/changelog/2024-07-08.md)
- [Changelog: 2024-07-22](https://docs.flex.ai/cli/changelog/2024-07-22.md)
- [Changelog: 2024-07-24](https://docs.flex.ai/cli/changelog/2024-07-24.md)
- [Changelog: 2024-07-29](https://docs.flex.ai/cli/changelog/2024-07-29.md)
- [Changelog: 2024-07-30](https://docs.flex.ai/cli/changelog/2024-07-30.md)
- [Changelog: 2024-08-14](https://docs.flex.ai/cli/changelog/2024-08-14.md)
- [Changelog: 2024-09-03](https://docs.flex.ai/cli/changelog/2024-09-03.md)
- [Changelog: 2024-10-24](https://docs.flex.ai/cli/changelog/2024-10-24.md)
- [Changelog: 2024-11-15](https://docs.flex.ai/cli/changelog/2024-11-15.md)
- [Changelog: 2024-11-27](https://docs.flex.ai/cli/changelog/2024-11-27.md)
- [Changelog: 2024-12-02](https://docs.flex.ai/cli/changelog/2024-12-02.md)
- [Changelog: 2024-12-13](https://docs.flex.ai/cli/changelog/2024-12-13.md)
- [Changelog: 2024-12-20](https://docs.flex.ai/cli/changelog/2024-12-20.md)
- [Changelog: 2025-01-31](https://docs.flex.ai/cli/changelog/2025-01-31.md)
- [Changelog: 2025-02-04](https://docs.flex.ai/cli/changelog/2025-02-04.md)
- [Changelog: 2025-03-10](https://docs.flex.ai/cli/changelog/2025-03-10.md)
- [Changelog: 2025-04-01](https://docs.flex.ai/cli/changelog/2025-04-01.md)
- [Changelog: 2025-05-15](https://docs.flex.ai/cli/changelog/2025-05-15.md)
- [Changelog: 2025-05-19](https://docs.flex.ai/cli/changelog/2025-05-19.md)
- [Changelog: 2025-07-15](https://docs.flex.ai/cli/changelog/2025-07-15.md)
- [Changelog: 2025-07-25](https://docs.flex.ai/cli/changelog/2025-07-25.md)
- [Installing the FlexAI CLI](https://docs.flex.ai/cli/install.md): Install and set up the FlexAI command-line interface
- [Command: checkpoint](https://docs.flex.ai/cli/reference/checkpoint.md): Manage model checkpoints on FlexAI — push, export, inspect, and delete
- [checkpoint delete](https://docs.flex.ai/cli/reference/checkpoint/delete.md): Delete a checkpoint from FlexAI
- [checkpoint export](https://docs.flex.ai/cli/reference/checkpoint/export.md): Export a checkpoint to a remote storage provider
- [checkpoint fetch](https://docs.flex.ai/cli/reference/checkpoint/fetch.md): Download checkpoint files to your local machine
- [checkpoint inspect](https://docs.flex.ai/cli/reference/checkpoint/inspect.md): View detailed information about a specific checkpoint
- [checkpoint list](https://docs.flex.ai/cli/reference/checkpoint/list.md): List all checkpoints and their current status
- [checkpoint push](https://docs.flex.ai/cli/reference/checkpoint/push.md): Upload checkpoint files to FlexAI from local or remote storage
- [Command: code-registry](https://docs.flex.ai/cli/reference/code-registry.md): Manage GitHub repository connections via the FlexAI GitHub App
- [code-registry connect](https://docs.flex.ai/cli/reference/code-registry/connect.md): Connect a GitHub repository to FlexAI
- [code-registry list](https://docs.flex.ai/cli/reference/code-registry/list.md): List all connected GitHub repositories
- [completion](https://docs.flex.ai/cli/reference/completion.md): Generate shell autocompletion scripts for the FlexAI CLI
- [completion bash](https://docs.flex.ai/cli/reference/completion/bash.md): Generate bash autocompletion script for the FlexAI CLI
- [completion fish](https://docs.flex.ai/cli/reference/completion/fish.md): Generate fish autocompletion script for the FlexAI CLI
- [completion powershell](https://docs.flex.ai/cli/reference/completion/powershell.md): Generate PowerShell autocompletion script for the FlexAI CLI
- [completion zsh](https://docs.flex.ai/cli/reference/completion/zsh.md): Generate zsh autocompletion script for the FlexAI CLI
- [Command: dataset](https://docs.flex.ai/cli/reference/dataset.md): Manage datasets on FlexAI — upload, inspect, and delete datasets
- [dataset delete](https://docs.flex.ai/cli/reference/dataset/delete.md): Delete a dataset from FlexAI
- [dataset inspect](https://docs.flex.ai/cli/reference/dataset/inspect.md): View detailed information about a specific dataset
- [dataset list](https://docs.flex.ai/cli/reference/dataset/list.md): List all datasets and their current status
- [dataset push](https://docs.flex.ai/cli/reference/dataset/push.md): Upload dataset files to FlexAI from local or remote storage
- [General Usage Commands](https://docs.flex.ai/cli/reference/general.md): General CLI commands for authentication, diagnostics, and updates
- [general auth](https://docs.flex.ai/cli/reference/general/auth.md): Manage FlexAI CLI authentication
- [general auth login](https://docs.flex.ai/cli/reference/general/auth/login.md): Authenticate the FlexAI CLI with your account
- [general auth logout](https://docs.flex.ai/cli/reference/general/auth/logout.md): Sign out of the FlexAI CLI
- [general doctor](https://docs.flex.ai/cli/reference/general/doctor.md): Run diagnostics to verify your FlexAI CLI installation
- [general update](https://docs.flex.ai/cli/reference/general/update.md): Update the FlexAI CLI to the latest version
- [general user](https://docs.flex.ai/cli/reference/general/user.md): View and manage your FlexAI user account
- [general user info](https://docs.flex.ai/cli/reference/general/user/info.md): Display your FlexAI account information
- [general user orgs](https://docs.flex.ai/cli/reference/general/user/orgs.md): List organizations associated with your FlexAI account
- [general user set-default-org](https://docs.flex.ai/cli/reference/general/user/set-default-org.md): Set the default organization for FlexAI CLI commands
- [general version](https://docs.flex.ai/cli/reference/general/version.md): Display the current FlexAI CLI version
- [Command: inference](https://docs.flex.ai/cli/reference/inference.md): Manage inference endpoints on FlexAI — deploy, scale, and monitor models
- [inference delete](https://docs.flex.ai/cli/reference/inference/delete.md): Delete an inference endpoint
- [inference inspect](https://docs.flex.ai/cli/reference/inference/inspect.md): View detailed information about a specific inference endpoint
- [inference list](https://docs.flex.ai/cli/reference/inference/list.md): List all inference endpoints and their current status
- [inference logs](https://docs.flex.ai/cli/reference/inference/logs.md): Stream or retrieve logs from an inference endpoint
- [inference resume](https://docs.flex.ai/cli/reference/inference/resume.md): Resume a stopped inference endpoint
- [inference runtime](https://docs.flex.ai/cli/reference/inference/runtime.md): Manage inference endpoint runtimes
- [inference runtime list](https://docs.flex.ai/cli/reference/inference/runtime/list.md): List available inference runtimes and their configurations
- [inference scale](https://docs.flex.ai/cli/reference/inference/scale.md): Scale the number of replicas for an inference endpoint
- [inference serve](https://docs.flex.ai/cli/reference/inference/serve.md): Create a new inference endpoint from a Hugging Face model
- [inference stop](https://docs.flex.ai/cli/reference/inference/stop.md): Stop a running inference endpoint
- [Command: secret](https://docs.flex.ai/cli/reference/secret.md): Manage encrypted secrets for training jobs and inference endpoints
- [secret create](https://docs.flex.ai/cli/reference/secret/create.md): Create a new secret in the FlexAI Secret Manager
- [secret delete](https://docs.flex.ai/cli/reference/secret/delete.md): Delete a secret from the FlexAI Secret Manager
- [secret list](https://docs.flex.ai/cli/reference/secret/list.md): List all secrets stored in the FlexAI Secret Manager
- [secret update](https://docs.flex.ai/cli/reference/secret/update.md): Update the value of an existing secret
- [Command: storage](https://docs.flex.ai/cli/reference/storage.md): Manage remote storage connections for artifact transfer
- [storage create](https://docs.flex.ai/cli/reference/storage/create.md): Create a new remote storage provider connection
- [storage inspect](https://docs.flex.ai/cli/reference/storage/inspect.md): View detailed information about a remote storage connection
- [storage list](https://docs.flex.ai/cli/reference/storage/list.md): List all remote storage connections
- [Command: training](https://docs.flex.ai/cli/reference/training.md): Manage training jobs on FlexAI — run, monitor, inspect, and delete jobs
- [training checkpoints](https://docs.flex.ai/cli/reference/training/checkpoints.md): List checkpoints generated by a training job
- [training debug-ssh](https://docs.flex.ai/cli/reference/training/debug-ssh.md): Start an interactive SSH session into a training job's runtime
- [training delete](https://docs.flex.ai/cli/reference/training/delete.md): Delete a training job and its associated resources
- [training fetch](https://docs.flex.ai/cli/reference/training/fetch.md): Download output files from a completed training job
- [training inspect](https://docs.flex.ai/cli/reference/training/inspect.md): View detailed information about a specific training job
- [training list](https://docs.flex.ai/cli/reference/training/list.md): List all training jobs and their current status
- [training logs](https://docs.flex.ai/cli/reference/training/logs.md): Stream or retrieve logs from a running or completed training job
- [training run](https://docs.flex.ai/cli/reference/training/run.md): Start a new training job on FlexAI with custom configuration
- [training runtime](https://docs.flex.ai/cli/reference/training/runtime.md): Manage training job runtimes
- [training runtime list](https://docs.flex.ai/cli/reference/training/runtime/list.md): List available training runtimes and their configurations
- [training stop](https://docs.flex.ai/cli/reference/training/stop.md): Stop a running training job
- [FlexAI Web Console](https://docs.flex.ai/console.md): Manage training, fine-tuning, and inference workloads from the FlexAI Web Console
- [Sign In](https://docs.flex.ai/console/getting-started/signin.md): Sign in to the FlexAI Web Console with email or GitHub
- [Sign Up for FlexAI](https://docs.flex.ai/console/getting-started/signup.md): Create a new FlexAI account to get started
- [Fine-Tuning](https://docs.flex.ai/core-services/fine-tuning.md): Fine-tune AI models with your own data using FlexAI
- [Lifecycle](https://docs.flex.ai/core-services/fine-tuning/lifecycle.md): Understand the status progression of a fine-tuning job from creation to completion
- [Fine-tuning a model with FlexAI](https://docs.flex.ai/core-services/fine-tuning/quickstart.md): Step-by-step guide to fine-tuning a model on FlexAI
- [Checking the Fine-tuning Job's Details](https://docs.flex.ai/core-services/fine-tuning/quickstart/checking-details.md): Inspect the details and status of your fine-tuning job
- [Creating a Fine-tuning Job](https://docs.flex.ai/core-services/fine-tuning/quickstart/creating.md): Create and configure a new fine-tuning job on FlexAI
- [Getting a Fine-tuning Job's Output](https://docs.flex.ai/core-services/fine-tuning/quickstart/getting-output.md): Retrieve output files and results from a completed fine-tuning job
- [Monitoring a Fine-tuning Job's Progress](https://docs.flex.ai/core-services/fine-tuning/quickstart/monitoring-progress.md): Monitor your fine-tuning job's progress with logs and metrics
- [Uploading a Dataset](https://docs.flex.ai/core-services/fine-tuning/quickstart/uploading-a-dataset.md): Upload and prepare a dataset for your fine-tuning job
- [FlexAI Inference Endpoints](https://docs.flex.ai/core-services/inference.md): Deploy and manage AI models for inference with FlexAI
- [FlexAI Inference Autoscaling](https://docs.flex.ai/core-services/inference/autoscaling.md): Set up and manage autoscaling rules for FlexAI Inference Endpoints
- [Quickstart: FlexAI Inference Endpoints](https://docs.flex.ai/core-services/inference/quickstart.md): Step-by-step guide to deploying inference endpoints on FlexAI
- [Creating an Inference Endpoint: Private Model](https://docs.flex.ai/core-services/inference/quickstart/create-private.md): Deploy an inference endpoint from a private or gated Hugging Face model
- [FlexAI Inference - Public Model](https://docs.flex.ai/core-services/inference/quickstart/create-public.md): Deploy an inference endpoint from a public Hugging Face model
- [Querying an Inference Endpoint](https://docs.flex.ai/core-services/inference/quickstart/query.md): Query a deployed inference endpoint using the playground or HTTP requests
- [Training](https://docs.flex.ai/core-services/training.md): Train AI models from scratch or continue training existing models with FlexAI
- [Lifecycle](https://docs.flex.ai/core-services/training/lifecycle.md): Understand the status progression of a training job from creation to completion
- [Training a model with FlexAI](https://docs.flex.ai/core-services/training/quickstart.md): Step-by-step guide to training a model on FlexAI
- [Checking the Training Job's Details](https://docs.flex.ai/core-services/training/quickstart/checking-details.md): Inspect the details and status of your training job
- [Creating a Training Job](https://docs.flex.ai/core-services/training/quickstart/creating.md): Create and configure a new training job on FlexAI
- [Getting a Training Job's Output](https://docs.flex.ai/core-services/training/quickstart/getting-output.md): Retrieve output files and results from a completed training job
- [Monitoring a Training Job's Progress](https://docs.flex.ai/core-services/training/quickstart/monitoring-progress.md): Monitor your training job's progress with logs and metrics
- [Uploading a Dataset](https://docs.flex.ai/core-services/training/quickstart/uploading-a-dataset.md): Upload and prepare a dataset for your training job
- [Frequently Asked Questions: FlexAI](https://docs.flex.ai/faq.md): Answers to common questions about the FlexAI platform
- [Frequently Asked Questions: FlexAI Credits](https://docs.flex.ai/faq/credits.md): Answers to common questions about FlexAI credits and billing
- [Getting Started](https://docs.flex.ai/getting-started.md): Get started with the FlexAI platform
- [Welcome to FlexAI](https://docs.flex.ai/index.md): Get started managing your AI workloads without any hassle.
- [Streaming](https://docs.flex.ai/inference-api/guides/streaming.md): Stream tokens as they are generated and collect the final usage block for accurate spend tracking.
- [Tool Use](https://docs.flex.ai/inference-api/guides/tool-use.md): Let the model call functions you define, then feed their output back in.
- [Vision](https://docs.flex.ai/inference-api/guides/vision.md): Send images to multimodal models using the OpenAI `image_url` content shape.
- [Overview](https://docs.flex.ai/inference-api/overview.md): OpenAI-compatible inference API for text, image, video, and audio models. One API key, every modality, billed per token / image / second.
- [Quickstart](https://docs.flex.ai/inference-api/quickstart.md): Get an API key and make your first inference request in under two minutes.
- [Billing & Quotas](https://docs.flex.ai/inference-api/reference/billing.md): Free credit, how we price requests, rate limits, and what happens when your balance runs out.
- [Errors](https://docs.flex.ai/inference-api/reference/errors.md): Every 4xx and 5xx status we return, with an example body and what to do about it.
- [Model Catalog](https://docs.flex.ai/inference-api/reference/models.md): Every model hosted on the FlexAI Inference API, with context window, pricing, and capabilities.
- [OpenAI Compatibility](https://docs.flex.ai/inference-api/reference/openai-compatibility.md): What is and isn't supported at launch when you point an OpenAI SDK at tokens.flex.ai.
- [Overview](https://docs.flex.ai/interactive-development.md): Develop and debug your AI workloads interactively with VSCode and SSH access
- [Interactive Training Session](https://docs.flex.ai/interactive-development/interactive-training.md): Start an interactive SSH session for real-time training development and debugging
- [Platform services](https://docs.flex.ai/platform.md): FlexAI platform services and tools
- [Checkpoint Manager](https://docs.flex.ai/platform-services/checkpoint-manager.md): Manage, version, and deploy model checkpoints from Training and Fine-tuning jobs
- [FlexAI Checkpoints: In Practice](https://docs.flex.ai/platform-services/checkpoint-manager/in-practice.md): Practical guide to managing checkpoints in training and fine-tuning workflows
- [Inference-ready Checkpoints](https://docs.flex.ai/platform-services/checkpoint-manager/inference-ready-checkpoints.md): Mark and manage checkpoints ready for inference deployment
- [Code Registry Manager](https://docs.flex.ai/platform-services/code-registry-manager.md): Connect and manage GitHub repositories for training and inference workloads
- [Dataset Manager](https://docs.flex.ai/platform-services/dataset-manager.md): Upload, manage, and organize Datasets for Training and Fine-tuning your AI models
- [Uploading Datasets from your local machine](https://docs.flex.ai/platform-services/dataset-manager/from-local.md): Upload dataset files from your local machine to FlexAI
- [Uploading Datasets from Remote Sources](https://docs.flex.ai/platform-services/dataset-manager/from-remote.md): Upload datasets from a remote storage provider to FlexAI
- [Step by step: Uploading a Dataset From a Remote Storage Provider](https://docs.flex.ai/platform-services/dataset-manager/from-remote/steps.md): Step-by-step instructions for uploading datasets from remote storage
- [Dataset Lifecycle](https://docs.flex.ai/platform-services/dataset-manager/lifecycle.md): Understand the status progression of a dataset from upload to availability
- [Overview](https://docs.flex.ai/platform-services/observability.md): Monitor and analyze infrastructure metrics, training performance, and system health
- [Infrastructure Monitor](https://docs.flex.ai/platform-services/observability/infrastructure-monitor.md): Monitor real-time GPU and system metrics for your FlexAI workloads
- [TensorBoard](https://docs.flex.ai/platform-services/observability/tensorboard.md): Visualize training metrics in real time with FlexAI's hosted TensorBoard
- [Overview](https://docs.flex.ai/platform-services/remote-storage-connections-manager.md): Overview of the Remote Storage Connection Manager
- [Create a Remote Storage Connection](https://docs.flex.ai/platform-services/remote-storage-connections-manager/create.md): Set up a new remote storage connection with credentials and provider configuration
- [Secret Manager](https://docs.flex.ai/platform-services/secret-manager.md): Securely store and manage API keys, credentials, and secrets for your FlexAI workloads
- [Sign In](https://docs.flex.ai/signin.md): Sign in to FlexAI with your email or GitHub account
- [Sign Up](https://docs.flex.ai/signup.md): Create a FlexAI account with email or GitHub

## OpenAPI Specs

- [openapi](https://docs.flex.ai/inference-api/openapi.yaml)
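The Inference API entries above note that an OpenAI SDK can be pointed at tokens.flex.ai. As a minimal sketch of what such a request looks like on the wire, using only the Python standard library: the `/v1/chat/completions` path, the model id, and the bearer-token header shape are assumptions here, not confirmed by this index; the Quickstart and Model Catalog pages have the real values.

```python
import json

# Assumed values -- check the Inference API Quickstart for the real
# base path and the Model Catalog for valid model ids.
BASE_URL = "https://tokens.flex.ai/v1/chat/completions"
API_KEY = "YOUR_FLEXAI_API_KEY"

def build_chat_request(prompt: str, model: str = "your-model-id",
                       stream: bool = False) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # set True to receive tokens as they are generated
    }

payload = build_chat_request("Hello, FlexAI!", stream=True)
body = json.dumps(payload).encode()
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
# To actually send it (requires a valid key):
# urllib.request.urlopen(urllib.request.Request(BASE_URL, data=body, headers=headers))
```

Because the API is OpenAI-compatible, the same payload should also work through the official OpenAI SDK by overriding its base URL, as the OpenAI Compatibility page describes.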