Displays a stream of logs from an Inference Endpoint. The logs include information about the deployment’s status, the model being served, and the requests being processed.
flexai inference logs <inference_endpoint_name> [flags]
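For example, to stream logs from an endpoint named my-endpoint (a hypothetical name standing in for your own endpoint's name):

flexai inference logs my-endpoint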