Gradio Chat Interface
Using FastAPI, Gradio and Cerebrium to deploy an LLM chat interface
In this tutorial, we’ll create and deploy a Gradio chat interface that connects to a Llama 8B language model using Cerebrium’s custom ASGI runtime. We’ll build a scalable architecture where the frontend runs on CPU instances, while the model runs separately on GPU instances for optimal resource utilization.
You can find the full codebase for deploying your Gradio frontend here.
Architecture Overview
Our application consists of two main components:
- A frontend interface running on CPU instances using FastAPI and Gradio.
- A separate Llama model endpoint running on GPU instances. (While this is beyond the scope of this article, you can find a comprehensive example for deploying Llama 8B with TensorRT here.)
This separation allows us to:
- Keep the frontend always available while minimizing costs (CPU-only).
- Scale our GPU-intensive model independently based on demand.
- Optimize resource allocation for different components.
Prerequisites
Before starting, you’ll need:
- A Cerebrium account (sign up here).
- The Cerebrium CLI installed: `pip install --upgrade cerebrium`
- A Llama model endpoint (or other LLM API endpoint).
Basic Setup
First, create a new directory for your project and initialize it:
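The exact command isn't shown above, so here's a minimal sketch assuming the Cerebrium CLI's `init` command and a placeholder app name of `gradio-chat`:

```bash
# Scaffold a new Cerebrium project (the app name is just an example)
cerebrium init gradio-chat
cd gradio-chat
```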
Next, let's add the following configuration to our `cerebrium.toml` file:
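The original `cerebrium.toml` isn't reproduced here, so the snippet below is an approximate sketch pieced together from the description that follows. The section and key names (for example `disable_auth`, `healthcheck_endpoint`, and `replica_concurrency`) are assumptions; check them against the current Cerebrium configuration reference before deploying.

```toml
# Approximate cerebrium.toml -- section/key names and values are assumptions
[cerebrium.deployment]
name = "gradio-chat"
python_version = "3.11"
disable_auth = true                  # make the Gradio interface publicly accessible

[cerebrium.runtime.custom]
entrypoint = ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
port = 8080
healthcheck_endpoint = "/health"     # served by our FastAPI app

[cerebrium.hardware]
cpu = 2                              # CPU-only instance for the frontend
memory = 8.0

[cerebrium.scaling]
min_replicas = 1
max_replicas = 2
cooldown = 30
replica_concurrency = 10             # 10 requests per replica

[cerebrium.dependencies.pip]
gradio = "latest"
fastapi = "latest"
requests = "latest"
httpx = "latest"
uvicorn = "latest"
starlette = "latest"
```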
This configuration does several things:
- Disables the default JWT authentication that is automatically placed on all Cerebrium endpoints, making your Gradio interface publicly accessible.
- Sets the entrypoint for the ASGI server to run through Uvicorn.
- Sets the default port to 8080 for serving your app.
- Sets the health endpoint to `/health` for checking app availability through our FastAPI application.
- Configures hardware settings for the CPU instance running your app.
- Defines scaling configuration with minimum and maximum replicas, cooldown period, and replica concurrency (set to 10 requests per replica).
- Specifies required dependencies: Gradio, FastAPI, Requests, HTTPX, Uvicorn, and Starlette.
Now, let's set up our main entrypoint file (`main.py`). To start, let's create our FastAPI application:
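The original snippet isn't shown here, so the code below is a minimal sketch of what this step might look like. It assumes the Gradio subprocess listens internally on port 7860 and that `httpx` is used for forwarding; adapt the port and header handling to your setup.

```python
# main.py (sketch) -- FastAPI app that proxies requests to the Gradio subprocess
import httpx
from fastapi import FastAPI, Request, Response

GRADIO_PORT = 7860  # assumed internal port for the Gradio server

app = FastAPI()


@app.get("/health")
async def health():
    # Health endpoint referenced by the healthcheck setting in cerebrium.toml
    return {"status": "healthy"}


@app.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE", "HEAD", "OPTIONS"])
async def proxy(path: str, request: Request):
    # Catch-all proxy: forward every other request (headers included) to Gradio.
    # Note: streaming/websocket routes may need extra handling beyond this sketch.
    headers = {
        k: v for k, v in request.headers.items()
        if k.lower() not in ("host", "content-length")
    }
    async with httpx.AsyncClient(timeout=None) as client:
        upstream = await client.request(
            request.method,
            f"http://127.0.0.1:{GRADIO_PORT}/{path}",
            headers=headers,
            content=await request.body(),
            params=dict(request.query_params),
        )
    response_headers = {
        k: v for k, v in upstream.headers.items()
        if k.lower() not in ("content-encoding", "content-length", "transfer-encoding", "connection")
    }
    return Response(
        content=upstream.content,
        status_code=upstream.status_code,
        headers=response_headers,
    )
```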
The above code:
- Initializes a FastAPI application to forward requests to our Gradio app running as a subprocess on a different port.
- Sets up a health check endpoint at `/health`.
- Creates a catch-all proxy that routes all requests to Gradio, including headers.
Now that we've set up our FastAPI application, let's set up our Gradio application. Staying in our `main.py`, let's add the following code:
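As before, the original code isn't reproduced here; this sketch continues in the same `main.py` and reuses the `app` object defined above. The Llama endpoint URL, auth token, request/response payload shape, and internal port 7860 are placeholders you'll need to adapt to your own model endpoint.

```python
# Continuation of main.py (sketch) -- Gradio server managed by FastAPI lifecycle events
import multiprocessing
import os

import gradio as gr
import requests

# Placeholder endpoint details -- point these at your own Llama deployment
LLAMA_ENDPOINT = os.environ.get("LLAMA_ENDPOINT", "https://your-llama-endpoint/v1/chat/completions")
LLAMA_TOKEN = os.environ.get("LLAMA_TOKEN", "")
GRADIO_PORT = 7860  # must match the port the FastAPI proxy forwards to


class GradioServer:
    """Runs the Gradio chat UI in a separate process and talks to the Llama endpoint."""

    def __init__(self):
        self.process = None

    def chat_with_llama(self, message, history):
        # Send the user's message to the Llama endpoint and return its reply.
        # Assumes an OpenAI-style chat-completions payload; adjust for your API.
        response = requests.post(
            LLAMA_ENDPOINT,
            headers={"Authorization": f"Bearer {LLAMA_TOKEN}"},
            json={"messages": [{"role": "user", "content": message}]},
            timeout=120,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    def run_server(self):
        # Build the Gradio chat interface and serve it on the internal port.
        demo = gr.ChatInterface(fn=self.chat_with_llama, title="Llama Chat")
        demo.launch(server_name="0.0.0.0", server_port=GRADIO_PORT)

    def start(self):
        # Launch Gradio in its own process so the FastAPI proxy stays responsive.
        process = multiprocessing.Process(target=self.run_server, daemon=True)
        process.start()
        self.process = process

    def stop(self):
        if self.process is not None:
            self.process.terminate()
            self.process.join()


gradio_server = GradioServer()


@app.on_event("startup")
async def startup_event():
    gradio_server.start()


@app.on_event("shutdown")
async def shutdown_event():
    gradio_server.stop()
```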
Above, we have:
- A `GradioServer` class that handles communication with the Llama model endpoint.
- A `chat_with_llama` method that sends a message to the Llama model and returns the response.
- A `run_server` method that creates a Gradio chat interface.
- A `start` method that starts the Gradio server in a separate process.
- A `stop` method that stops the Gradio server.
- `on_event` startup and shutdown handlers that start and stop the Gradio server, respectively.
Finally, your complete `main.py` file should combine the FastAPI application and the Gradio server code from the two steps above.
Deploy
Deploy the app using the following command:
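Assuming you're using the Cerebrium CLI, run this from the project directory:

```bash
# Run from the directory containing cerebrium.toml and main.py
cerebrium deploy
```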
Once deployed, open the app's base URL (printed in the deploy output and visible on your Cerebrium dashboard) in your browser.
You should see the Gradio chat interface.
Conclusion
This architecture provides a scalable chat app that efficiently utilizes our new ASGI custom runtime. The separation of frontend and backend services allows for improved performance and cost management while maintaining flexibility for future scaling. We hope you’ve enjoyed this tutorial. Please feel free to share feedback, challenges, or your own Gradio apps in our Discord community.