You may also use files as input data for your model. This is particularly useful if your model takes in images, audio files, or other binary data. To do this, you will need to supply at least an initial pre-processing function; you can then use the files in your model as you would any other input data. Note that on the cloud the files are passed to your model as binary data, so when testing locally with conduit.run you will need to open the files and read them in as binary data before passing them in.

def pre_process(data, files):
    import numpy as np
    from PIL import Image
    # Open each file as a PIL Image and pair it with its corresponding data item
    labelled = {d: Image.open(f) for d, f in zip(data, files)}
    return labelled

from cerebrium import Conduit, model_type
conduit = Conduit('<MODEL_NAME>', '<API_KEY>', [('<MODEL_TYPE>', '<MODEL_FILE>', {"pre": pre_process})])
conduit.load('./')
conduit.run(data, [files]) # files should be binary objects - for images, this would be the return of Image.open(f)
conduit.deploy()
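
For example, when testing locally you might open a set of images with PIL and pass the resulting objects to conduit.run alongside your data. This is only a sketch: the image paths and data values below are placeholders, and it assumes conduit.run takes the data list and a list of opened files, as in the call above.

# A minimal local-testing sketch; paths and data values are placeholders.
from PIL import Image

image_paths = ["<IMAGE_PATH_1>", "<IMAGE_PATH_2>"]
files = [Image.open(p) for p in image_paths]  # binary objects, as expected by pre_process
data = ["<DATA_INPUT_1>", "<DATA_INPUT_2>"]

result = conduit.run(data, files)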

Request and Response with Forms

You currently cannot use model_api_request with file input. You will need to use curl instead.

  curl --location --request POST '<ENDPOINT>' \
      --header 'Authorization: <API_KEY>' \
      --header 'Content-Type: multipart/form-data' \
      -F data='[<DATA_INPUT>]' \
      -F files=@<FILES_PATH>

The parameters for a form request are similar to those of a JSON request; however, both the data and the files are passed in as form data.
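
If you prefer to make the request from Python rather than curl, a minimal sketch with the requests library might look like the following. It simply mirrors the curl example above; the endpoint, API key, data payload, and file path are placeholders.

import json
import requests

# Hypothetical multipart form request mirroring the curl example above.
endpoint = "<ENDPOINT>"
api_key = "<API_KEY>"

with open("<FILES_PATH>", "rb") as f:
    response = requests.post(
        endpoint,
        headers={"Authorization": api_key},           # requests sets the multipart Content-Type itself
        data={"data": json.dumps(["<DATA_INPUT>"])},  # same JSON array passed to -F data in the curl example
        files={"files": f},
    )

print(response.json())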

Authorization (required)
string

This is the Cerebrium API key used to authenticate your request. You can get it from your Cerebrium dashboard.

Content-Type (required)
string

The content type of your request. Must be multipart/form-data for forms.

{
  "result": [<MODEL_PREDICTION>],
  "run_id": "<RUN_ID>",
  "run_time_ms": <RUN_TIME_MS>
  "prediction_ids": ["<PREDICTION_ID>"]
}

Response Parameters

result (required)
array

The result of your model prediction.

run_id (required)
string

The run ID associated with your model predictions.

run_time_ms (required)
float

The amount of time it took your model to run, in milliseconds. This is what we charge you based on.

prediction_ids (required)
array

The prediction IDs associated with each of your model predictions. Used to track your model predictions with monitoring tools.
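
As a rough sketch, a caller might pull these fields out of the JSON body returned by the request above. The response variable here is assumed to come from the earlier Python example; only the field names are taken from the response shown above.

# Hypothetical handling of the response body described above.
body = response.json()
predictions = body["result"]
run_id = body["run_id"]
run_time_ms = body["run_time_ms"]
prediction_ids = body["prediction_ids"]

print(f"Run {run_id} finished in {run_time_ms} ms with {len(predictions)} prediction(s)")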