Run ASGI/WSGI Python apps on Cerebrium
Configure this in `cerebrium.toml` by adding a custom runtime section with the following fields (a configuration sketch follows the list):

- `entrypoint`: The command that starts your server.
- `port`: The port your server listens on.
- `healthcheck_endpoint`: The endpoint used to confirm instance health. If unspecified, it defaults to a TCP ping on the configured port. If the health check returns a non-200 response, the instance is considered unhealthy and is restarted if it does not recover in time.
- `readycheck_endpoint`: The endpoint used to confirm that the instance is ready to receive requests. If unspecified, it defaults to a TCP ping on the configured port. If the ready check returns a non-200 response, the instance is not a viable target for request routing.
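A minimal sketch of such a section is shown below. Only the four field names above come from this page; the `[cerebrium.runtime.custom]` table name and all example values are assumptions, so check your generated `cerebrium.toml` for the exact layout.

```toml
# Sketch of a custom runtime section in cerebrium.toml.
# The table name and example values are illustrative assumptions;
# only the four field names are documented on this page.
[cerebrium.runtime.custom]
entrypoint = ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
port = 8000
healthcheck_endpoint = "/health"
readycheck_endpoint = "/ready"
```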
Remember to include your ASGI/WSGI server (for example, `uvicorn`) in your dependencies. After deployment, your endpoints become available at `https://api.cortex.cerebrium.ai/v4/{project-id}/{app-name}/your/endpoint`.
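For reference, a minimal ASGI app that pairs with the sketch above might look like the following. The file name, routes, and response bodies are assumptions; the only requirement from this page is that your server starts via the configured `entrypoint` and answers the health and ready checks.

```python
# main.py -- a minimal FastAPI (ASGI) app served by uvicorn.
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health():
    # Return 200 while the process is healthy; a non-200 response marks
    # the instance unhealthy, and it may be restarted.
    return {"status": "ok"}

@app.get("/ready")
def ready():
    # Return 200 only once the app can serve traffic; a non-200 response
    # removes the instance from request routing.
    return {"status": "ready"}

@app.get("/hello")
def hello():
    # Reachable at .../v4/{project-id}/{app-name}/hello after deployment.
    return {"message": "hello from Cerebrium"}
```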
Note the `X-Request-Id` header, which is included in all requests to your endpoints. This header is particularly useful for tracking and debugging requests in your custom runtime implementation, as it corresponds to the `run_id` that Cerebrium uses internally.
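As a sketch, the header can be read like any other request header in your framework; the middleware below is just one assumed way to surface it in application logs so they can be correlated with Cerebrium's `run_id`.

```python
# Sketch: log the X-Request-Id header on every request so application
# logs can be matched to the run_id Cerebrium uses internally.
import logging

from fastapi import FastAPI, Request

logger = logging.getLogger("app")
app = FastAPI()

@app.middleware("http")
async def log_request_id(request: Request, call_next):
    # Header lookups are case-insensitive; fall back to "unknown" if absent.
    request_id = request.headers.get("X-Request-Id", "unknown")
    logger.info("%s %s (request id %s)", request.method, request.url.path, request_id)
    return await call_next(request)
```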