OpenAI-compatible vLLM endpoint
Create an OpenAI-compatible endpoint using the vLLM framework
In this tutorial, we’ll create an OpenAI-compatible endpoint that works with any open-source model. This lets you use your existing OpenAI code with Cerebrium serverless functions by changing just two lines of code.
You can view the final code implementation here.
Cerebrium setup
If you don’t have a Cerebrium account, you can create one by signing up here and following the documentation here to get set up.
In your IDE, run the following command to create our Cerebrium starter project: `cerebrium init 1-openai-compatible-endpoint`. This creates two files:

- `main.py`: Our entrypoint file where our code lives
- `cerebrium.toml`: A configuration file that contains all our build and environment settings
Add the following pip packages and hardware requirements to your `cerebrium.toml` to create your deployment environment:
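The exact schema depends on your Cerebrium CLI version, but a minimal sketch might look like this. The GPU type, resource sizes, and package versions below are assumptions; adjust them to your model and account limits:

```toml
[cerebrium.deployment]
name = "1-openai-compatible-endpoint"
python_version = "3.11"

[cerebrium.hardware]
# GPU choice is an assumption; Llama 3.1 8B in fp16 needs roughly 16 GB of VRAM
compute = "AMPERE_A10"
cpu = 4
memory = 16.0

[cerebrium.dependencies.pip]
vllm = "latest"
pydantic = "latest"
huggingface_hub = "latest"
```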
Let’s define our imports and initialize our model. We’ll use Meta’s Llama 3.1 model, which requires Hugging Face authorization. Add your HF token to your secrets in the Cerebrium dashboard, then add this code to `main.py`:
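A sketch of what that setup could look like, assuming the secret is named `HF_TOKEN` and using vLLM’s async engine so we can stream tokens later:

```python
import os
import time

from huggingface_hub import login
from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# Authenticate with Hugging Face using the secret set in the Cerebrium
# dashboard (the secret name HF_TOKEN is an assumption; use yours)
login(token=os.environ["HF_TOKEN"])

# Load the model once at startup; the async engine supports streaming generation
engine = AsyncLLMEngine.from_engine_args(AsyncEngineArgs(model=MODEL_ID))
```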
Next, define the required output format for OpenAI endpoints using Pydantic:
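For example, a minimal set of models mirroring OpenAI’s chat-completion response shape might be as follows; field names follow the OpenAI spec, though the final code may include more fields such as `usage`:

```python
from typing import List, Optional

from pydantic import BaseModel


class Message(BaseModel):
    role: str
    content: str


class Choice(BaseModel):
    index: int
    message: Optional[Message] = None  # set on non-streaming responses
    delta: Optional[Message] = None    # set on streaming chunks
    finish_reason: Optional[str] = None


class ChatCompletionResponse(BaseModel):
    id: str
    object: str  # "chat.completion" or "chat.completion.chunk"
    created: int
    model: str
    choices: List[Choice]
```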
The function (sketched below):

- Takes parameters through its signature, with optional and default values available
- Automatically receives a unique `run_id` for each request
- Processes the entire prompt through the model
- Streams results when `stream=True` using async functionality
- Returns the complete result at the end if streaming is disabled
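Here is a minimal sketch of such a function, assuming the engine and Pydantic models defined above. The prompt join is simplified; real code should apply the model’s chat template:

```python
async def run(messages: list, run_id: str, stream: bool = True,
              temperature: float = 0.8, max_tokens: int = 512):
    # Simplified prompt construction; production code should use the
    # tokenizer's chat template for Llama 3.1
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)

    sampling_params = SamplingParams(temperature=temperature, max_tokens=max_tokens)
    results_generator = engine.generate(prompt, sampling_params, run_id)

    full_text = ""
    async for request_output in results_generator:
        text = request_output.outputs[0].text  # cumulative text so far
        delta, full_text = text[len(full_text):], text
        if stream:
            # Emit OpenAI-style chunks as tokens are produced
            yield ChatCompletionResponse(
                id=run_id,
                object="chat.completion.chunk",
                created=int(time.time()),
                model=MODEL_ID,
                choices=[Choice(index=0,
                                delta=Message(role="assistant", content=delta))],
            ).model_dump_json()

    if not stream:
        # Return one complete response once generation has finished
        yield ChatCompletionResponse(
            id=run_id,
            object="chat.completion",
            created=int(time.time()),
            model=MODEL_ID,
            choices=[Choice(index=0,
                            message=Message(role="assistant", content=full_text),
                            finish_reason="stop")],
        ).model_dump_json()
```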
Deploy & Inference
To deploy the model use the following command:
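From the project root, using the standard Cerebrium CLI deploy command:

```bash
cerebrium deploy
```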
After deployment, you’ll see a curl command like this:
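Something along these lines, where the project ID and token are placeholders for your own values and the exact URL shape may differ by Cerebrium API version:

```bash
curl -X POST "https://api.cortex.cerebrium.ai/v4/<PROJECT_ID>/1-openai-compatible-endpoint/run" \
  -H "Authorization: Bearer <YOUR_JWT_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```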
In Cerebrium, each function name becomes an endpoint (ending with `/run`). While OpenAI-compatible endpoints typically end with `/chat/completions`, we’ve made all endpoints OpenAI-compatible. Here’s how to call the endpoint:
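With the official `openai` Python client, only the base URL and API key change from standard OpenAI code. The URL below is a placeholder; use the one printed by your deploy:

```python
from openai import OpenAI

client = OpenAI(
    # Placeholder base URL: use the endpoint printed by `cerebrium deploy`
    base_url="https://api.cortex.cerebrium.ai/v4/<PROJECT_ID>/1-openai-compatible-endpoint/run",
    # Your Cerebrium JWT token, not an OpenAI key
    api_key="<YOUR_JWT_TOKEN>",
)

chat_completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "What is a serverless GPU?"}],
    stream=True,
)

# Print streamed tokens as they arrive
for chunk in chat_completion:
    print(chunk.choices[0].delta.content or "", end="")
```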
Set the base URL to the one from your deploy command (ending in `/run`). Use your JWT token from either the curl command or your Cerebrium dashboard’s API Keys section.
Voilà! You now have an OpenAI-compatible endpoint that you can customize to your needs!