model_metadata

ModelMetadata

ModelMetadata(info=None, description=None, inputs=None, outputs=None, resources=None, timeout=None, features=None)

This class provides an interface for customizing metadata embedded into the model container.

model_name property writable

model_name: str

The human-readable name of the model.

model_version property writable

model_version: str

The semantic-versioning compatible version of the model.

model_author property writable

model_author: str

The name and optional email of the author.

Example

John Smith <john.smith@example.com>

summary property writable

summary: str

A short summary of what the model does and how to use it.

details property writable

details: str

A longer description of the model containing useful information that does not fit in the summary.

technical property writable

technical: str

Technical information about the model such as how it was trained, any known biases, the dataset that was used, etc.

performance property writable

performance: str

Performance information about the model.

required_ram property writable

required_ram: str

The amount of RAM required to run the model. This string can be any value accepted by Docker or Kubernetes.

num_cpus property writable

num_cpus: float

The number of fractional CPU cores required to run the model.

num_gpus property writable

num_gpus: int

The number of GPUs required to run the model.

status_timeout property writable

status_timeout: str

The amount of time after which a model's initialization should be considered to have failed.

run_timeout property writable

run_timeout: str

The amount of time after which an inference should be considered to have failed.

batch_size property writable

batch_size: int

The batch size supported by this model. For models that don't support batch, set this to 1.
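
The following is a minimal, illustrative sketch of setting these properties on a ChassisModel's metadata. The predict function and the specific values are assumptions for demonstration only.

from chassisml import ChassisModel

# `predict` is a hypothetical inference function defined elsewhere.
model = ChassisModel(process_fn=predict)

# Populate the container metadata via the writable properties above.
model.metadata.model_name = "Image Classifier"
model.metadata.model_version = "1.0.0"
model.metadata.model_author = "John Smith <john.smith@example.com>"
model.metadata.summary = "Classifies images into a fixed set of categories."
model.metadata.batch_size = 1          # this model does not support batching
model.metadata.num_cpus = 1.0          # fractional CPU cores
model.metadata.num_gpus = 0
model.metadata.required_ram = "512M"   # any value accepted by Docker or Kubernetes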

has_inputs

has_inputs()

Returns:

bool: True if at least one input has been defined.

add_input

add_input(key, accepted_media_types=None, max_size='1M', description='')

Defines an input to the model. Inputs are identified by a string key that will be used to retrieve them from the dictionary of inputs during inference.

Since all input values are sent as bytes, each input should define one or more MIME types that are suitable for decoding the bytes into a usable object.

Additionally, each input can be set to have a maximum size to easily reject requests with inputs that are too large.

Finally, you can give each input a description which can be used in documentation to explain any further details about the input requirements, such as whether color channels need to be stripped from an image.

Parameters:

key (str, required)
    Key name to represent the input. E.g., "input", "image", etc.

accepted_media_types (Optional[List[str]], default: None)
    Acceptable MIME type(s) for the respective input. For more information on common MIME types, visit https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types

max_size (str, default: '1M')
    Maximum acceptable size of input. This value should include a number followed by a letter indicating the unit of measure (e.g., "3M" = 3 MB, "1.5G" = 1.5 GB, etc.)

description (str, default: '')
    Short description of the input

Example:

from chassisml import ChassisModel
model = ChassisModel(process_fn=predict)
model.metadata.add_input(
    "image",
    ["image/png", "image/jpeg"],
    "10M",
    "Image to be classified by computer vision model"
)

has_outputs

has_outputs()

Returns:

bool: True if at least one output has been defined.
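
As an illustrative sketch, has_inputs and has_outputs can serve as a sanity check that the metadata is complete before packaging a model; the model variable below is assumed to be a ChassisModel configured as in the surrounding examples.

# Assumes `model` is a ChassisModel whose metadata has been configured.
if not model.metadata.has_inputs():
    raise ValueError("Define at least one input with add_input() before building.")
if not model.metadata.has_outputs():
    raise ValueError("Define at least one output with add_output() before building.")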

add_output

add_output(key, media_type='application/octet-stream', max_size='1M', description='')

Defines an output from the model. Outputs are identified by a string key that will be used to retrieve them from the dictionary of outputs received after inference.

Since all output values are sent as bytes, each output should define the MIME type that is suitable for decoding the bytes into a usable object.

Additionally, each output should be set to have a maximum size to prevent results that are too large for practical use.

Finally, you can give each output a description which can be used in documentation to explain any further details about the output.

Parameters:

key (str, required)
    Key name to represent the output. E.g., "results.json", "results", "output", etc.

media_type (str, default: 'application/octet-stream')
    Acceptable MIME type for the respective output. For more information on common MIME types, visit https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types

max_size (str, default: '1M')
    Maximum acceptable size of output. This value should include a number followed by a letter indicating the unit of measure (e.g., "3M" = 3 MB, "1.5G" = 1.5 GB, etc.)

description (str, default: '')
    Short description of the output

Example:

from chassisml import ChassisModel
model = ChassisModel(process_fn=predict)
model.metadata.add_output(
    "results.json",
    "application/json",
    "1M",
    "Classification results of computer vision model with class name and confidence score in JSON format"
)

serialize

serialize()

For internal use only.

This method will take the values of this object and serialize them into the protobuf message that the final container expects to receive.

Returns:

bytes: The serialized protobuf object.

default classmethod

default()

A ModelMetadata object that corresponds to the defaults used by Chassis v1.5+.

The defaults are blank values for all properties. You are responsible for setting any appropriate values for your model.

Note

It is always required to set the model_name and model_version fields and to add at least one input and one output.

Returns:

ModelMetadata: An empty ModelMetadata object.
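
As a minimal sketch, the required fields can be filled in on a default object as follows; the import path and the placeholder values are assumptions for demonstration only.

# The exact import path is an assumption; adjust it to match your installation.
from chassisml import ModelMetadata

metadata = ModelMetadata.default()

# The defaults are blank, so the required fields must be set explicitly.
metadata.model_name = "Sentiment Classifier"
metadata.model_version = "0.1.0"

# At least one input and one output must be defined.
metadata.add_input("text", ["text/plain"], "1M", "UTF-8 text to classify")
metadata.add_output("results.json", "application/json", "1M", "Sentiment scores as JSON")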

legacy classmethod

legacy()

A ModelMetadata object that corresponds to the values used before Chassis v1.5.

Returns:

ModelMetadata: A partially filled ModelMetadata object.