Package and deploy models outside Azure Machine Learning (preview)
You can deploy models outside of Azure Machine Learning for online serving by creating model packages (preview). Azure Machine Learning allows you to create a model package that collects all the dependencies required for deploying a machine learning model to a serving platform. You can move a model package across workspaces and even outside of Azure Machine Learning. To learn more about model packages, see Model packages for deployment (preview).
Important
This feature is currently in public preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
For more information, see Supplemental Terms of Use for Azure Previews.
In this article, you learn how to package a model and deploy it to Azure App Service.
Prerequisites
Before following the steps in this article, make sure you have the following prerequisites:
An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the free or paid version of Azure Machine Learning.
An Azure Machine Learning workspace. If you don't have one, use the steps in the How to manage workspaces article to create one.
Note
Private link enabled workspaces don't support packaging models for deployment outside of Azure Machine Learning.
Azure role-based access control (Azure RBAC) is used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role. For more information, see Manage access to an Azure Machine Learning workspace.
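If you want to confirm which roles your account holds on the workspace, one option is to list your role assignments at the workspace scope with the Azure CLI. This is a sketch; the placeholder values are yours to fill in:
az role assignment list \
  --assignee "<your-user-principal-name>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>" \
  --output table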
Prepare your system
Follow these steps to prepare your system.
The example in this article is based on code samples contained in the azureml-examples repository. To run the commands locally without having to copy/paste YAML and other files, first clone the repo and then change directories to the folder:
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli
This article uses the example in the folder endpoints/online/deploy-with-packages/mlflow-model.
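If you cloned the repository as shown earlier, you can change into that folder so the relative paths used by the commands resolve; the path below is the one named in this article:
cd endpoints/online/deploy-with-packages/mlflow-model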
Connect to the Azure Machine Learning workspace where you'll do your work.
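One way to connect from the Azure CLI is to sign in and set defaults so you don't have to pass the workspace on every command. This is a sketch; replace the placeholder values with your own subscription, resource group, and workspace names:
az login
az account set --subscription "<subscription-id>"
az configure --defaults group="<resource-group>" workspace="<workspace-name>"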
Packages require the model to be registered in either your workspace or in an Azure Machine Learning registry. In this example, there's a local copy of the model in the repository, so you only need to publish the model to the registry in the workspace. You can skip this step if the model you're trying to deploy is already registered.
#<register_model>
MODEL_NAME='heart-classifier-mlflow'
MODEL_PATH='model'
az ml model create --name $MODEL_NAME --path $MODEL_PATH --type mlflow_model
#</register_model>
#<build_package>
az ml model package -n $MODEL_NAME -l latest --file package.yml
#</build_package>
#<endpoint_name>
ENDPOINT_NAME = "heart-classifier"
#</endpoint_name>
# The following code ensures the created deployment has a unique name
ENDPOINT_SUFIX=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w ${1:-5} | head -n 1)
ENDPOINT_NAME="$ENDPOINT_NAME-$ENDPOINT_SUFIX"
#<create_endpoint>
az ml online-endpoint create -n $ENDPOINT_NAME
#</create_endpoint>
#<create_deployment>
az ml online-deployment create -f deployment.yml -e $ENDPOINT_NAME
#</create_deployment>
#<test_deployment>
az ml online-endpoint invoke -n $ENDPOINT_NAME -d with-package -f sample-request.json
#</test_deployment>
#<create_deployment_inline>
az ml online-deployment create --with-package -f model-deployment.yml -e $ENDPOINT_NAME
#</create_deployment_inline>
#<delete_resources>
az ml online-endpoint delete -n $ENDPOINT_NAME --yes
#</delete_resources>
Tip
When you specify copy for the mode property in the model configuration, you guarantee that all the model artifacts are copied inside the generated Docker image instead of downloaded from the Azure Machine Learning model registry, which allows true portability outside of Azure Machine Learning. For a full specification of all the options available when creating packages, see Create a package specification.
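For reference, a package specification that uses copy mode might look roughly like the following sketch. The actual package-external.yml used in the next step is in the sample folder; the schema reference and the target environment name shown here are assumptions for illustration, and the full set of options is described in Create a package specification.
$schema: http://azureml/sdk-2-0/ModelVersionPackage.json
target_environment: heart-classifier-mlflow-pkg  # name of the environment (and image) the package produces; assumed for this sketch
inferencing_server:
  type: azureml_online  # serve the model with the Azure Machine Learning inferencing server
model_configuration:
  mode: copy  # copy the model artifacts into the image instead of downloading them from the registry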
- Start the package operation.
#<build_package_copy>
az ml model package -n $MODEL_NAME -l latest --file package-external.yml
#</build_package_copy>