Package ML models into containerized prediction APIs in just minutes, and run them anywhere: in the cloud, on-prem, or at the edge.
Chassis is an open-source project that automatically turns your ML models into production containers.
With easy integration in mind, Chassis picks up right where your training code leaves off and builds containers for any target architecture. Execute a single Chassis job and run your models in the cloud, on-prem, or on edge devices (Raspberry Pi, NVIDIA Jetson Nano, Intel NUC, and more!).
Benefits:
Turns models into containers, automatically
Creates easy-to-use prediction APIs
Containers can run on Modzy, KServe (v1), and more
Connects to Docker Hub and other registries
Compiles for both x86 and ARM processors
Supports GPU batch processing
No missing dependencies, perfect for edge AI
Getting started with Chassis is as easy as installing a Python package and incorporating a few lines of code into your existing workflow. Follow these short steps to start building your first ML container in just minutes!
Chassis SDK: The Chassis Python package enables you to interact with the Chassis service. Install from PyPI: pip install chassisml
Python model: Bring your model trained with your favorite Python ML framework (scikit-learn, PyTorch, TensorFlow, or any framework you use!)
Registry Credentials: Chassis will build a container image and push it to your preferred registry, so make sure you either have a Docker Hub account or credentials to your private container registry. You can create a free Docker Hub account today if you need one!
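Since the publish step needs registry credentials, it is good practice to keep them out of your source code. Below is a minimal sketch that reads them from environment variables; the variable names DOCKER_USER and DOCKER_PASS are illustrative, not anything Chassis requires:

```python
import os

def registry_credentials(user_var="DOCKER_USER", pass_var="DOCKER_PASS"):
    """Read container-registry credentials from environment variables.

    Fails fast if either variable is missing, so a publish job never
    starts with blank credentials.
    """
    user = os.environ.get(user_var)
    password = os.environ.get(pass_var)
    if not user or not password:
        raise RuntimeError(f"Set {user_var} and {pass_var} before publishing")
    return user, password
```

You could then pass the returned pair to the registry_user and registry_pass arguments shown in the snippet below instead of hardcoding them.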
import chassisml
# NOTE: The below code snippet is pseudo code that
# intends to demonstrate the general workflow when
# using Chassis and will not execute as is. Substitute
# with your own Python framework, any relevant utility
# methods, and syntax specific to your model.
import framework
from utils import preprocess, postprocess  # hypothetical helper module
# load model
model = framework.load("path/to/model/file")
# define process function and create model
def process(input_bytes):
    # preprocess data
    data = preprocess(input_bytes)
    # perform inference
    predictions = model.predict(data)
    # postprocess output
    output = postprocess(predictions)
    return output
# connect to Chassis, create and publish model
chassis_client = chassisml.ChassisClient("https://chassis.app.modzy.com")
chassis_model = chassis_client.create_model(process_fn=process)
# publish model to Docker registry
response = chassis_model.publish(
    model_name="My First Chassis Model!",
    model_version="0.0.1",
    registry_user="insert-dockerhub-username",
    registry_pass="insert-dockerhub-password",
)
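To make the pseudocode above concrete, here is a runnable sketch of a process function built around a small scikit-learn classifier. The model, the JSON-in/JSON-out serialization, and the feature shape are all illustrative choices, not anything Chassis mandates; substitute your own framework and formats:

```python
import json
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small stand-in model; in practice you would load your own
# trained model from disk instead.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def process(input_bytes):
    # preprocess: decode raw request bytes (JSON) into a 2-D feature array
    data = np.array(json.loads(input_bytes), dtype=float)
    if data.ndim == 1:
        data = data.reshape(1, -1)
    # perform inference
    predictions = clf.predict(data)
    # postprocess: serialize predictions back to JSON bytes
    return json.dumps(predictions.tolist()).encode()
```

This function takes raw request bytes in and returns bytes out, which is the general shape the create_model workflow above expects of a process function.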