Displays a stream of logs from an Inference Endpoint. The logs include information about the deployment’s status, the model being served, and the requests being processed.

Usage

```
flexai inference logs <inference_endpoint_name> [flags]
```

Arguments

| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| `inference_endpoint_name` | string | Yes | The name of the Inference Endpoint to view logs for. |

Flags

| Flag | Short | Type | Description |
| --- | --- | --- | --- |
| `--help` | `-h` | boolean | Displays this help page. |
| `--no-color` | | boolean | Disables color formatting in the log output. |
| `--verbose` | `-v` | boolean | Provides more detailed output when viewing logs. |
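
For example, the command might be invoked as follows. The endpoint name `my-llm-endpoint` is a hypothetical placeholder, not a real deployment:

```shell
# Stream logs from a hypothetical Inference Endpoint named "my-llm-endpoint"
flexai inference logs my-llm-endpoint

# Same, with color formatting disabled (useful when piping to a file or pager)
flexai inference logs my-llm-endpoint --no-color

# Request more detailed log output
flexai inference logs my-llm-endpoint -v
```

Disabling color with `--no-color` avoids ANSI escape sequences ending up in captured output, for example when redirecting logs to a file for later inspection.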