# chassis_model

## DEFAULT_CUDA_VERSION (module attribute)

```python
DEFAULT_CUDA_VERSION = '11.0.3'
```

## ChassisModel

```python
ChassisModel(process_fn, batch_size=1, legacy_predict_fn=False, chassis_client=None)
```

The Chassis model object.

This class inherits from `chassis.builder.Buildable` and is the main object that gets fed into a Chassis builder object (e.g., `chassis.builder.DockerBuilder` or `chassis.builder.RemoteBuilder`).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `process_fn` | `PredictFunction` | Single predict function of type `PredictFunction` that represents a model inference function. | *required* |
| `batch_size` | `int` | Integer representing the batch size your model supports. If your model does not support batching, the default value is 1. | `1` |
| `legacy_predict_fn` | `bool` | For internal backwards-compatibility use only. | `False` |
| `chassis_client` | | For internal backwards-compatibility use only. | `None` |

### Instance attributes

```python
runner = ModelRunner(process_fn, batch_size=batch_size, is_legacy_fn=legacy_predict_fn)
metadata = ModelMetadata.legacy()
chassis_client = chassis_client
packaged = False
requirements: set[str] = set()
apt_packages: set[str] = set()
additional_files: set[str] = set()
python_modules: dict = {}
```

### test

```python
test(test_input)
```

Runs a test inference against the model before it is packaged.

This method supports multiple input types:

- Single input: A map-like object with a string for the key and bytes as the value.
- Batch input: A list of map-like objects with strings for keys and bytes for values.

The following input types are also supported but are considered deprecated and may be removed in a future release:

- File: A `BufferedReader` object. Use of this type assumes that your predict function expects the input key to be `"input"`.
- bytes: Any arbitrary bytes. Use of this type assumes that your predict function expects the input key to be `"input"`.
- str: A string. If the string maps to a filesystem location, the file at that location is read and used as the value; if not, the string itself is used as the value. Use of this type assumes that your predict function expects the input key to be `"input"`.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `test_input` | `Union[str, bytes, _io.BufferedReader, Mapping[str, bytes], Sequence[Mapping[str, bytes]]]` | Sample input data used to test the model. See above for more information. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Sequence[Mapping[str, bytes]]` | Results returned by your model's predict function based on the `test_input` sample data fed to this function. |

Example:

```python
from chassisml import ChassisModel

chassis_model = ChassisModel(process_fn=predict)
results = chassis_model.test(sample_data)
```
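For reference, a minimal sketch of a predict function and the two supported input shapes. The key name `"input"`, the JSON encoding, and the output key `"results.json"` are illustrative assumptions, not requirements of the library:

```python
import json

# Hypothetical predict function: receives a batch (a list of str -> bytes
# mappings) and returns one output mapping per input item.
def predict(inputs):
    outputs = []
    for item in inputs:
        value = json.loads(item["input"].decode())
        outputs.append({"results.json": json.dumps({"doubled": value * 2}).encode()})
    return outputs

single_input = {"input": b"21"}                    # single input: one mapping
batch_input = [{"input": b"1"}, {"input": b"2"}]   # batch input: list of mappings

# chassis_model.test(single_input) or chassis_model.test(batch_input) would
# route these through predict; calling it directly shows the shapes involved:
results = predict(batch_input)
```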

### merge_package

```python
merge_package(package)
```

Merges another `Buildable` object into this one, ensuring that any pip requirements, apt packages, files, or modules are combined.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `package` | `Buildable` | Another `Buildable` object to merge into this one. | *required* |
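As a rough sketch of what the merge entails, the stand-in class below unions each dependency set so declarations from both objects survive (only the documented set-valued attributes are modeled; this is not the real `Buildable`):

```python
# Stand-in for a Buildable: only the documented set-valued attributes.
class FakeBuildable:
    def __init__(self):
        self.requirements: set = set()
        self.apt_packages: set = set()
        self.additional_files: set = set()

    def merge_package(self, other):
        # Union each dependency set so nothing declared on either object is lost.
        self.requirements |= other.requirements
        self.apt_packages |= other.apt_packages
        self.additional_files |= other.additional_files

base = FakeBuildable()
base.requirements.add("numpy")

extra = FakeBuildable()
extra.requirements.add("scikit-learn")
extra.apt_packages.add("libgomp1")

base.merge_package(extra)
```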

### add_requirements

```python
add_requirements(reqs)
```

Declares a pip requirement for your model.

The value of each requirement can be anything supported by a line in a requirements.txt file, including version constraints.

All pip requirements declared via this method are automatically installed when the container is built.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `reqs` | `Union[str, list[str]]` | Single python package (str) or list of python packages that are required dependencies to run the `ChassisModel.process_fn` attribute. These values are the same values that would follow `pip install` or that would be added to a Python dependencies txt file (e.g., requirements.txt). | *required* |
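A sketch of the str-or-list normalization this signature implies — a plausible reading for illustration, not the library's actual implementation:

```python
def add_requirements(requirements: set, reqs):
    # Accept a single requirement string or a list of them.
    if isinstance(reqs, str):
        reqs = [reqs]
    # Each entry is a valid requirements.txt line, version constraints included.
    requirements.update(r.strip() for r in reqs)

requirements = set()
add_requirements(requirements, "scikit-learn")
add_requirements(requirements, ["numpy>=1.21", "opencv-python-headless"])
add_requirements(requirements, "scikit-learn")  # duplicates collapse in the set
```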

### test_batch

```python
test_batch(test_input)
```

DEPRECATED

The `chassisml.ChassisModel.test` method now supports supplying batches of inputs.

### add_apt_packages

```python
add_apt_packages(packages)
```

Adds an OS package that will be installed via apt-get.

If your model requires additional OS packages that are not part of the standard Python container, you can declare them here. Each package declared here is installed via `apt-get install` when the container is built.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `packages` | `Union[str, list]` | Single OS-level package (str) or list of OS-level packages that are required dependencies to run the `ChassisModel.process_fn` attribute. These values are the same values that can be installed via `apt-get install`. | *required* |
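To illustrate where these declarations end up, here is a hedged sketch of how a set of apt packages might be rendered into the generated Dockerfile's install step; the exact `RUN` line Chassis emits may differ:

```python
def render_apt_install(packages: set) -> str:
    # No packages declared -> no install layer needed.
    if not packages:
        return ""
    # Sort for a deterministic Dockerfile (stable layer caching).
    return "RUN apt-get update && apt-get install -y " + " ".join(sorted(packages))

line = render_apt_install({"tesseract-ocr", "libgl1"})
```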

### get_packaged_path

```python
get_packaged_path(path)
```

Convenience method for developers implementing their own subclasses of `Buildable`. Returns the final path, inside the built container, of any additional files.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `path` | `str` | The local path of a file. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `str` | The path the file will have in the final built container. |
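A sketch of the local-path to in-container-path mapping this method performs; the `/app` destination root below is an assumption for illustration, not the path Chassis necessarily uses:

```python
import os

# Hypothetical re-implementation: map a local file path to its final
# location in the container, assuming an "/app" destination root.
def get_packaged_path(path: str, container_root: str = "/app") -> str:
    return os.path.join(container_root, os.path.basename(path))

packaged = get_packaged_path("./weights/model.pt")
```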

### test_env

```python
test_env(test_input_path, conda_env=None, fix_env=True)
```

No Longer Available

Please use `chassis.client.OMIClient.test_container` moving forward.

### save

```python
save(path=None, requirements=None, overwrite=False, fix_env=False, gpu=False, arm64=False, conda_env=None)
```

DEPRECATED

Please use `chassisml.ChassisModel.prepare_context` moving forward.

Saves a copy of the ChassisModel to a local filepath.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `path` | `Optional[str]` | Filepath to save the chassis model. | `None` |
| `requirements` | `Optional[Union[str, List[str]]]` | Additional pip requirements needed by the model. | `None` |
| `conda_env` | `Optional[dict]` | A dictionary with environment requirements. | `None` |
| `overwrite` | `bool` | No longer used. | `False` |
| `fix_env` | `bool` | No longer used. | `False` |
| `gpu` | `bool` | If True and `arm64` is True, modifies dependencies as needed by Chassis for ARM64+GPU support. | `False` |
| `arm64` | `bool` | If True and `gpu` is True, modifies dependencies as needed by Chassis for ARM64+GPU support. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `BuildContext` | The `BuildContext` object that allows for further actions to be taken. |

Example:

```python
chassis_model = ChassisModel(process_fn=process)
context = chassis_model.save("local_model_directory")
```

### verify_prerequisites

```python
verify_prerequisites(options)
```

Raises an exception if the object is not yet ready for building.

Models require a name, a version, and at least one input and one output.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `options` | `BuildOptions` | The `BuildOptions` used for the build. | *required* |
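The documented checks can be sketched as follows — a stand-in for the real method, which reads these values from the model's metadata and the supplied `BuildOptions`:

```python
# Hypothetical stand-in for the documented prerequisite checks: a model
# needs a name, a version, and at least one declared input and output.
def verify_prerequisites(name, version, inputs, outputs):
    if not name:
        raise ValueError("model name is required")
    if not version:
        raise ValueError("model version is required")
    if not inputs or not outputs:
        raise ValueError("at least one input and one output are required")
    return True
```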

### prepare_context

```python
prepare_context(options=DefaultBuildOptions)
```

Constructs the build context that will be used to build the container.

A build context is a directory containing a Dockerfile and any other resources the Dockerfile needs to build the container.

This method is called just before the build is initiated and compiles all the resources necessary to build the container: the Dockerfile, required Chassis library code, the server implementation indicated by the `BuildOptions`, the cloudpickle'd model, the serialized model metadata, copies of any additional files, and a requirements.txt.

Typically you won't call this method directly; it will be called automatically by a Builder. The one case where you might want to use it directly is to inspect the contents of the build context before sending it to a Builder.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `options` | `BuildOptions` | The `BuildOptions` to be used for this build. | `DefaultBuildOptions` |

Returns:

| Type | Description |
| --- | --- |
| `BuildContext` | A `BuildContext` object. |
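Since a build context is just a directory, inspecting one amounts to listing that directory. The snippet below fakes a context with a temporary directory; the file names are assumptions based on the resources listed above, not the exact names Chassis writes:

```python
import os
import tempfile

# Stand-in for the directory a BuildContext would point at.
base_dir = tempfile.mkdtemp()
for name in ("Dockerfile", "requirements.txt", "model.pkl"):
    with open(os.path.join(base_dir, name), "w"):
        pass

# Listing the context shows the Dockerfile and packaged resources.
contents = sorted(os.listdir(base_dir))
```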

### publish

```python
publish(model_name, model_version, registry_user=None, registry_pass=None, requirements=None, fix_env=True, gpu=False, arm64=False, sample_input_path=None, webhook=None, conda_env=None)
```

DEPRECATED

Please use `chassis.builder.RemoteBuilder` moving forward.

Builds the model locally using Docker.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_name` | `str` | Model name that serves as the model's name and Docker registry repository name. | *required* |
| `model_version` | `str` | Version of the model. | *required* |
| `registry_user` | `Optional[str]` | Docker registry username. | `None` |
| `registry_pass` | `Optional[str]` | Docker registry password. | `None` |
| `requirements` | `Optional[Union[str, list[str]]]` | Additional pip requirements needed by the model. | `None` |
| `conda_env` | `Optional[dict]` | A dictionary with environment requirements. | `None` |
| `fix_env` | `bool` | No longer used. | `True` |
| `gpu` | `bool` | If True, builds a container image that runs on GPU hardware. | `False` |
| `arm64` | `bool` | If True, builds a container image that runs on the ARM64 architecture. | `False` |
| `sample_input_path` | `Optional[str]` | No longer used. | `None` |
| `webhook` | `Optional[str]` | No longer used. | `None` |

Returns:

| Type | Description |
| --- | --- |
| | Details about the result of the build. |

Example:

```python
# Create Chassisml model
chassis_model = ChassisModel(process_fn=process)

# Build the model locally using Docker.
response = chassis_model.publish(
    model_name="Chassisml Regression Model",
    model_version="0.0.1",
)
```

### render_dockerfile

```python
render_dockerfile(options)
```

Renders an appropriate Dockerfile for this object with the supplied `BuildOptions`.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `options` | `BuildOptions` | The `BuildOptions` that will be used for this build. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `str` | A string containing the contents for a Dockerfile. |

### parse_conda_env

```python
parse_conda_env(conda_env)
```

Supports legacy Chassis `conda_env` functionality by parsing pip dependencies and inserting them into the `Buildable` object via the `add_requirements` function.

If supplied, the input parameter will look like this:

```python
env = {
    "name": "sklearn-chassis",
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.8.5",
        {
            "pip": [
                "scikit-learn",
                "numpy",
                "chassisml"
            ]
        }
    ]
}
chassis_model = ChassisModel(process_fn=predict)
chassis_model.parse_conda_env(env)
```

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `conda_env` | `Optional[dict]` | A conda environment structure. See above for more details. | |