chassis_client
ChassisClient
DEPRECATED
Please use chassis.builder.RemoteBuilder moving forward.
The Chassis Client object.
This class is used to interact with the Chassis remote build service.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
base_url | str | The base URL of the API. | 'http://localhost:5000' |
auth_header | Optional[str] | Optional authorization header to be included with all requests. | None |
ssl_verification | bool | Verify TLS connections. | True |
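For reference, a client can be constructed directly from these parameters. A minimal sketch; the URL and token below are illustrative placeholders, not real endpoints:
from chassisml_sdk.chassisml import chassisml

# Connect to a remote Chassis build service.
# base_url and auth_header are placeholders for illustration.
chassis_client = chassisml.ChassisClient(
    base_url="https://chassis.example.com",
    auth_header="Bearer <token>",  # optional; omit if the service requires no auth
    ssl_verification=True,
)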
get_job_status
DEPRECATED
Please use chassis.builder.RemoteBuilder.get_build_status moving forward.
Checks the status of a Chassis job.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
job_id | str | Chassis job identifier generated by the ChassisModel.publish method. | required |
Returns:
Type | Description |
---|---|
BuildResponse | Chassis job status. |
Example:
# Create Chassisml model
chassis_model = chassis_client.create_model(process_fn=process)
# Define Dockerhub credentials
dockerhub_user = "user"
dockerhub_pass = "password"
# Publish model to Docker registry
response = chassis_model.publish(
    model_name="Chassisml Regression Model",
    model_version="0.0.1",
    registry_user=dockerhub_user,
    registry_pass=dockerhub_pass,
)
job_id = response.get('job_id')
job_status = chassis_client.get_job_status(job_id)
get_job_logs
DEPRECATED
Please use chassis.builder.RemoteBuilder.get_build_logs moving forward.
Retrieves the logs of a Chassis job.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
job_id | str | Chassis job identifier generated by the ChassisModel.publish method. | required |
Returns:
Type | Description |
---|---|
str | The job logs. |
Example:
# Create Chassisml model
chassis_model = chassis_client.create_model(process_fn=process)
# Define Dockerhub credentials
dockerhub_user = "user"
dockerhub_pass = "password"
# Publish model to Docker registry
response = chassis_model.publish(
    model_name="Chassisml Regression Model",
    model_version="0.0.1",
    registry_user=dockerhub_user,
    registry_pass=dockerhub_pass,
)
job_id = response.get('job_id')
job_logs = chassis_client.get_job_logs(job_id)
block_until_complete
DEPRECATED
Please use chassis.builder.RemoteBuilder.block_until_complete moving forward.
Blocks until the Chassis job completes or the timeout is reached, polling the Chassis job API until the job is marked finished.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
job_id | str | Chassis job identifier generated by the ChassisModel.publish method. | required |
timeout | Optional[int] | Timeout threshold in seconds. | None |
poll_interval | int | Time in seconds to wait between API polls when checking the job status. | 5 |
Returns:
Type | Description |
---|---|
BuildResponse | Final job status. |
Example:
# Create Chassisml model
chassis_model = chassis_client.create_model(process_fn=process)
# Define Dockerhub credentials
dockerhub_user = "user"
dockerhub_pass = "password"
# Publish model to Docker registry
response = chassis_model.publish(
    model_name="Chassisml Regression Model",
    model_version="0.0.1",
    registry_user=dockerhub_user,
    registry_pass=dockerhub_pass,
)
job_id = response.get('job_id')
final_status = chassis_client.block_until_complete(job_id)
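To bound the wait, the timeout and poll_interval parameters documented above can be set explicitly. A minimal sketch; the exact behavior when the timeout elapses (error versus early return) may vary by SDK version:
# Wait at most 10 minutes, polling every 10 seconds
final_status = chassis_client.block_until_complete(
    job_id,
    timeout=600,
    poll_interval=10,
)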
create_model
DEPRECATED
Please use chassisml.ChassisModel moving forward.
Builds a Chassis model locally.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
process_fn | Optional[LegacyNormalPredictFunction] | Python function that must accept a single piece of input data in raw bytes form. This method is responsible for handling all data preprocessing, executing inference, and returning the processed predictions. Defining additional functions is acceptable as long as they are called within the process_fn function. | None |
batch_process_fn | Optional[LegacyBatchPredictFunction] | Python function that must accept a batch of input data in raw bytes form. This method is responsible for handling all data preprocessing, executing inference, and returning the processed predictions. Defining additional functions is acceptable as long as they are called within the batch_process_fn function. | None |
batch_size | Optional[int] | Maximum batch size if batch_process_fn is provided. | None |
Returns:
Type | Description |
---|---|
ChassisModel | Chassis Model object that can be tested locally and published to a Docker registry. |
Examples:
The following snippet is taken from a complete worked example.
# Import and normalize data
X_digits, y_digits = datasets.load_digits(return_X_y=True)
X_digits = X_digits / X_digits.max()
n_samples = len(X_digits)
# Split data into training and test sets
X_train = X_digits[: int(0.9 * n_samples)]
y_train = y_digits[: int(0.9 * n_samples)]
X_test = X_digits[int(0.9 * n_samples) :]
y_test = y_digits[int(0.9 * n_samples) :]
# Train Model
logistic = LogisticRegression(max_iter=1000)
print(
    "LogisticRegression mean accuracy score: %f"
    % logistic.fit(X_train, y_train).score(X_test, y_test)
)
# Save small sample input to use for testing later
sample = X_test[:5].tolist()
with open("digits_sample.json", 'w') as out:
    json.dump(sample, out)
# Define Process function
def process(input_bytes):
    inputs = np.array(json.loads(input_bytes))
    inference_results = logistic.predict(inputs)
    structured_results = []
    for inference_result in inference_results:
        structured_output = {
            "data": {
                "result": {
                    "classPredictions": [
                        {
                            "class": str(inference_result),
                            "score": str(1)
                        }
                    ]
                }
            }
        }
        structured_results.append(structured_output)
    return structured_results
# create Chassis model
chassis_model = chassis_client.create_model(process_fn=process)
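Since the returned object "can be tested locally" (see above), the model is typically exercised against the saved sample before publishing. A minimal sketch, assuming the legacy ChassisModel.test method, which accepted sample input such as the file written earlier:
# Run the process function locally against the saved sample input
# (ChassisModel.test is assumed here from the legacy SDK)
results = chassis_model.test("digits_sample.json")
print(results)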
run_inference
No Longer Available
Please use chassis.client.OMIClient.run moving forward.
Submits data for inference to a container that Chassis has built. It assumes the container has been pulled from Docker Hub and is running somewhere you can reach.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_data | Dict[str, bytes] | dictionary of the form {"input": b"binary representation of your data"}. | required |
container_url | str | URL where container is running. | 'localhost' |
host_port | int | Host port that forwards to the container's gRPC server port. | 45000 |
Returns:
Type | Description |
---|---|
Iterable[OutputItem] | Success -> results from model processing as specified in the process function. |
Iterable[OutputItem] | Failure -> Error codes from processing errors. All errors should contain the word "Error." |
Example:
# assume that the container is running locally, and that it was started with this docker command
# docker run -it -p 5001:45000 <docker_uname>/<container_name>:<tag_id>
from chassisml_sdk.chassisml import chassisml
client = chassisml.ChassisClient("https://chassis.app.modzy.com/")
input_data = {"input": b"[[0.0, 0.0, 0.0, 1.0, 12.0, 6.0, 0.0, 0.0, 0.0, 0.0, 0.0, 11.0, 15.0, 2.0, 0.0, 0.0, 0.0, 0.0, 8.0, 16.0, 6.0, 1.0, 2.0, 0.0, 0.0, 4.0, 16.0, 9.0, 1.0, 15.0, 9.0, 0.0, 0.0, 13.0, 15.0, 6.0, 10.0, 16.0, 6.0, 0.0, 0.0, 12.0, 16.0, 16.0, 16.0, 16.0, 1.0, 0.0, 0.0, 1.0, 7.0, 4.0, 14.0, 13.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 14.0, 9.0, 0.0, 0.0], [0.0, 0.0, 8.0, 16.0, 3.0, 0.0, 1.0, 0.0, 0.0, 0.0, 16.0, 14.0, 5.0, 14.0, 12.0, 0.0, 0.0, 0.0, 8.0, 16.0, 16.0, 9.0, 0.0, 0.0, 0.0, 0.0, 3.0, 16.0, 14.0, 1.0, 0.0, 0.0, 0.0, 0.0, 12.0, 16.0, 16.0, 2.0, 0.0, 0.0, 0.0, 0.0, 16.0, 11.0, 16.0, 4.0, 0.0, 0.0, 0.0, 3.0, 16.0, 16.0, 16.0, 6.0, 0.0, 0.0, 0.0, 0.0, 10.0, 16.0, 10.0, 1.0, 0.0, 0.0], [0.0, 0.0, 5.0, 12.0, 8.0, 0.0, 1.0, 0.0, 0.0, 0.0, 11.0, 16.0, 5.0, 13.0, 6.0, 0.0, 0.0, 0.0, 2.0, 15.0, 16.0, 12.0, 1.0, 0.0, 0.0, 0.0, 0.0, 10.0, 16.0, 6.0, 0.0, 0.0, 0.0, 0.0, 1.0, 15.0, 16.0, 7.0, 0.0, 0.0, 0.0, 0.0, 8.0, 16.0, 16.0, 11.0, 0.0, 0.0, 0.0, 0.0, 11.0, 16.0, 16.0, 9.0, 0.0, 0.0, 0.0, 0.0, 6.0, 12.0, 12.0, 3.0, 0.0, 0.0], [0.0, 0.0, 0.0, 3.0, 15.0, 4.0, 0.0, 0.0, 0.0, 0.0, 4.0, 16.0, 12.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.0, 15.0, 3.0, 4.0, 3.0, 0.0, 0.0, 7.0, 16.0, 5.0, 3.0, 15.0, 8.0, 0.0, 0.0, 13.0, 16.0, 13.0, 15.0, 16.0, 2.0, 0.0, 0.0, 12.0, 16.0, 16.0, 16.0, 13.0, 0.0, 0.0, 0.0, 0.0, 4.0, 5.0, 16.0, 8.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 16.0, 4.0, 0.0, 0.0], [0.0, 0.0, 10.0, 14.0, 8.0, 1.0, 0.0, 0.0, 0.0, 2.0, 16.0, 14.0, 6.0, 1.0, 0.0, 0.0, 0.0, 0.0, 15.0, 15.0, 8.0, 15.0, 0.0, 0.0, 0.0, 0.0, 5.0, 16.0, 16.0, 10.0, 0.0, 0.0, 0.0, 0.0, 12.0, 15.0, 15.0, 12.0, 0.0, 0.0, 0.0, 4.0, 16.0, 6.0, 4.0, 16.0, 6.0, 0.0, 0.0, 8.0, 16.0, 10.0, 8.0, 16.0, 8.0, 0.0, 0.0, 1.0, 8.0, 12.0, 14.0, 12.0, 1.0, 0.0]]"}
input_list = [input_data for _ in range(30)]
print("single input")
print(client.run_inference(input_data, container_url="localhost", host_port=5001))
print("multi inputs")
results = client.run_inference(input_list, container_url="localhost", host_port=5001)
for x in results:
print(x)
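Per the return contract above, any failed item embeds the word "Error" in its output, so callers can partition results with a plain string check. A minimal sketch; the check mirrors the documented convention rather than a formal API:
# Materialize the results, then separate failures using the "Error" convention
results = list(client.run_inference(input_list, container_url="localhost", host_port=5001))
failures = [x for x in results if "Error" in str(x)]
print(f"{len(results) - len(failures)} succeeded, {len(failures)} failed")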
docker_infer
docker_infer(image_id, input_data, container_url='localhost', host_port=5001, container_port=None, timeout=20, clean_up=True, pull_container=False)
No Longer Available
Please use chassis.client.OMIClient.test_container moving forward.
Runs inference on an OMI-compliant container. This method checks whether the container is running and starts it if not, runs inference against input_data with the model in the container, and optionally shuts the container down.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image_id | str | The name of an OMI container image, usually of the form <docker_uname>/<container_name>:<tag_id>. | required |
input_data | Mapping[str, bytes] | Dictionary of the form {"input": b"binary representation of your data"}. | required |
container_url | str | No longer used. | 'localhost' |
host_port | int | Host port that forwards to the container's gRPC server port. | 5001 |
container_port | Optional[str] | No longer used. | None |
timeout | int | Number of seconds to wait for the container's gRPC server to spin up. | 20 |
clean_up | bool | No longer used. | True |
pull_container | bool | If True, pulls a missing container image from the registry. | False |
Returns:
Type | Description |
---|---|
Iterable[OutputItem] | Success -> Model output as defined in the process function. |
Iterable[OutputItem] | Failure -> Error message if any success criterion is not met. |
Example:
host_port = 5002
client = chassisml.ChassisClient()
input_data = {"input": b"[[0.0, 0.0, 0.0, 1.0, 12.0, 6.0, 0.0, 0.0, 0.0, 0.0, 0.0, 11.0, 15.0, 2.0, 0.0, 0.0, 0.0, 0.0, 8.0, 16.0, 6.0, 1.0, 2.0, 0.0, 0.0, 4.0, 16.0, 9.0, 1.0, 15.0, 9.0, 0.0, 0.0, 13.0, 15.0, 6.0, 10.0, 16.0, 6.0, 0.0, 0.0, 12.0, 16.0, 16.0, 16.0, 16.0, 1.0, 0.0, 0.0, 1.0, 7.0, 4.0, 14.0, 13.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 14.0, 9.0, 0.0, 0.0], [0.0, 0.0, 8.0, 16.0, 3.0, 0.0, 1.0, 0.0, 0.0, 0.0, 16.0, 14.0, 5.0, 14.0, 12.0, 0.0, 0.0, 0.0, 8.0, 16.0, 16.0, 9.0, 0.0, 0.0, 0.0, 0.0, 3.0, 16.0, 14.0, 1.0, 0.0, 0.0, 0.0, 0.0, 12.0, 16.0, 16.0, 2.0, 0.0, 0.0, 0.0, 0.0, 16.0, 11.0, 16.0, 4.0, 0.0, 0.0, 0.0, 3.0, 16.0, 16.0, 16.0, 6.0, 0.0, 0.0, 0.0, 0.0, 10.0, 16.0, 10.0, 1.0, 0.0, 0.0], [0.0, 0.0, 5.0, 12.0, 8.0, 0.0, 1.0, 0.0, 0.0, 0.0, 11.0, 16.0, 5.0, 13.0, 6.0, 0.0, 0.0, 0.0, 2.0, 15.0, 16.0, 12.0, 1.0, 0.0, 0.0, 0.0, 0.0, 10.0, 16.0, 6.0, 0.0, 0.0, 0.0, 0.0, 1.0, 15.0, 16.0, 7.0, 0.0, 0.0, 0.0, 0.0, 8.0, 16.0, 16.0, 11.0, 0.0, 0.0, 0.0, 0.0, 11.0, 16.0, 16.0, 9.0, 0.0, 0.0, 0.0, 0.0, 6.0, 12.0, 12.0, 3.0, 0.0, 0.0], [0.0, 0.0, 0.0, 3.0, 15.0, 4.0, 0.0, 0.0, 0.0, 0.0, 4.0, 16.0, 12.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.0, 15.0, 3.0, 4.0, 3.0, 0.0, 0.0, 7.0, 16.0, 5.0, 3.0, 15.0, 8.0, 0.0, 0.0, 13.0, 16.0, 13.0, 15.0, 16.0, 2.0, 0.0, 0.0, 12.0, 16.0, 16.0, 16.0, 13.0, 0.0, 0.0, 0.0, 0.0, 4.0, 5.0, 16.0, 8.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 16.0, 4.0, 0.0, 0.0], [0.0, 0.0, 10.0, 14.0, 8.0, 1.0, 0.0, 0.0, 0.0, 2.0, 16.0, 14.0, 6.0, 1.0, 0.0, 0.0, 0.0, 0.0, 15.0, 15.0, 8.0, 15.0, 0.0, 0.0, 0.0, 0.0, 5.0, 16.0, 16.0, 10.0, 0.0, 0.0, 0.0, 0.0, 12.0, 15.0, 15.0, 12.0, 0.0, 0.0, 0.0, 4.0, 16.0, 6.0, 4.0, 16.0, 6.0, 0.0, 0.0, 8.0, 16.0, 10.0, 8.0, 16.0, 8.0, 0.0, 0.0, 1.0, 8.0, 12.0, 14.0, 12.0, 1.0, 0.0]]"}
input_list = [input_data for _ in range(30)]
print("single input")
print(client.docker_infer(image_id="claytondavisms/sklearn-digits-docker-test:0.0.7", input_data=input_data, container_url="localhost", host_port=host_port, clean_up=False, pull_container=True))
print("multi inputs")
results = client.docker_infer(image_id="claytondavisms/sklearn-digits-docker-test:0.0.7", input_data=input_list, container_url="localhost", host_port=host_port)
for x in results:
print(x)