Deploy a model to Azure Container Instances
Learn how to use Azure Machine Learning to deploy a model as a web service on Azure Container Instances (ACI). Use Azure Container Instances if you:
- Prefer not to manage your own Kubernetes cluster.
- Are OK with having only a single replica of your service, which may impact uptime.
For information on quota and region availability for ACI, see the Quotas and region availability for Azure Container Instances article.
Important
We highly recommend debugging locally before deploying to a web service. For more information, see Debug locally.
You can also refer to the Azure Machine Learning - Deploy to Local notebook.
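If you want to try the workflow before creating any cloud resources, the following is a minimal sketch of a local deployment using the v1 SDK's `LocalWebservice` class. It assumes Docker is installed locally and that the `ws`, `model`, and `inference_config` variables are set as described in the Prerequisites section below; the port number is an arbitrary example:

```python
from azureml.core.webservice import LocalWebservice
from azureml.core.model import Model

# Local deployments run the scoring container in your local Docker engine,
# which lets you debug the entry script before deploying to ACI.
local_config = LocalWebservice.deploy_configuration(port=8890)  # example port

service = Model.deploy(ws, "localservice", [model], inference_config, local_config)
service.wait_for_deployment(show_output=True)
print(service.state)
```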
Prerequisites
An Azure Machine Learning workspace. For more information, see Create an Azure Machine Learning workspace.
A machine learning model registered in your workspace. If you don't have a registered model, see How and where to deploy models.
The Azure CLI extension (v1) for Machine Learning service, Azure Machine Learning Python SDK, or the Azure Machine Learning Visual Studio Code extension.
The Python code snippets in this article assume that the following variables are set:
- `ws` - Set to your workspace.
- `model` - Set to your registered model.
- `inference_config` - Set to the inference configuration for the model.
For more information on setting these variables, see How and where to deploy models.
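For orientation, here is a minimal sketch of one way to set these variables with the v1 SDK. The model name `mymodel`, the environment file `myenv.yml`, and the entry script `score.py` are placeholders, not values defined by this article:

```python
from azureml.core import Workspace, Environment
from azureml.core.model import InferenceConfig, Model

# Load the workspace from a config.json downloaded from the Azure portal.
ws = Workspace.from_config()

# Reference a model that is already registered in the workspace.
model = Model(ws, name="mymodel")  # placeholder name

# Describe the environment and entry script used to serve the model.
env = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)
```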
The CLI snippets in this article assume that you've created an `inferenceconfig.json` document. For more information on creating this document, see How and where to deploy models.
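As a point of reference, a minimal sketch of what such a document might contain (the entry script and conda file names are placeholders):

```json
{
    "entryScript": "score.py",
    "runtime": "python",
    "condaFile": "myenv.yml"
}
```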
Limitations
- When using Azure Container Instances in a virtual network, the virtual network must be in the same resource group as your Azure Machine Learning workspace.
- When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace cannot also be in the virtual network.
For more information, see How to secure inferencing with virtual networks.
Deploy to ACI
To deploy a model to Azure Container Instances, create a deployment configuration that describes the compute resources needed, such as the number of cores and amount of memory. You also need an inference configuration, which describes the environment needed to host the model and web service. For more information on creating the inference configuration, see How and where to deploy models.
Note
- ACI is suitable only for small models that are under 1 GB in size.
- We recommend using single-node AKS to dev-test larger models.
- The number of models to be deployed is limited to 1,000 models per deployment (per container).
Using the SDK
```python
from azureml.core.webservice import AciWebservice, Webservice
from azureml.core.model import Model

# Configure the compute resources for the ACI container.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# Deploy the registered model as a web service named "aciservice".
service = Model.deploy(ws, "aciservice", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.state)
```
For more information on the classes, methods, and parameters used in this example, see the following reference documents:
- AciWebservice.deploy_configuration
- Model.deploy
- Webservice.wait_for_deployment
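Once the service state shows Healthy, a quick way to smoke-test it is to call it through the SDK. A minimal sketch, assuming the deployment above; the payload shape is hypothetical and must match what your entry script expects:

```python
import json

# The HTTP scoring endpoint exposed by the ACI container.
print(service.scoring_uri)

# Call the service through the SDK. The payload below is a hypothetical
# example; its shape must match what your entry script parses.
sample_input = json.dumps({"data": [[1.0, 2.0, 3.0, 4.0]]})
prediction = service.run(input_data=sample_input)
print(prediction)
```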
Using the Azure CLI
APPLIES TO: Azure CLI ml extension v1
To deploy using the CLI, use the following command. Replace `mymodel:1` with the name and version of the registered model. Replace `myservice` with the name to give this service:

```azurecli
az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json
```
The entries in the `deploymentconfig.json` document map to the parameters for AciWebservice.deploy_configuration. The following table describes the mapping between the entities in the JSON document and the parameters for the method:
| JSON entity | Method parameter | Description |
| --- | --- | --- |
| `computeType` | NA | The compute target. For ACI, the value must be `ACI`. |
| `containerResourceRequirements` | NA | Container for the CPU and memory entities. |
| `cpu` | `cpu_cores` | The number of CPU cores to allocate. Default: `0.1`. |
| `memoryInGB` | `memory_gb` | The amount of memory (in GB) to allocate for this web service. Default: `0.5`. |
| `location` | `location` | The Azure region to deploy this web service to. If not specified, the workspace location is used. For details on available regions, see ACI regions. |
| `authEnabled` | `auth_enabled` | Whether to enable auth for this web service. Default: `False`. |
| `sslEnabled` | `ssl_enabled` | Whether to enable SSL for this web service. Default: `False`. |
| `appInsightsEnabled` | `enable_app_insights` | Whether to enable Application Insights for this web service. Default: `False`. |
| `sslCertificate` | `ssl_cert_pem_file` | The cert file needed if SSL is enabled. |
| `sslKey` | `ssl_key_pem_file` | The key file needed if SSL is enabled. |
| `cname` | `ssl_cname` | The CNAME if SSL is enabled. |
| `dnsNameLabel` | `dns_name_label` | The DNS name label for the scoring endpoint. If not specified, a unique DNS name label is generated for the scoring endpoint. |
The following JSON is an example deployment configuration for use with the CLI:
```json
{
    "computeType": "aci",
    "containerResourceRequirements":
    {
        "cpu": 0.5,
        "memoryInGB": 1.0
    },
    "authEnabled": true,
    "sslEnabled": false,
    "appInsightsEnabled": false
}
```
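Per the mapping in the preceding table, a minimal sketch of the equivalent SDK deployment configuration would be:

```python
from azureml.core.webservice import AciWebservice

# Equivalent of the JSON deployment configuration above:
# cpu -> cpu_cores, memoryInGB -> memory_gb, authEnabled -> auth_enabled, etc.
deployment_config = AciWebservice.deploy_configuration(
    cpu_cores=0.5,
    memory_gb=1.0,
    auth_enabled=True,
    ssl_enabled=False,
    enable_app_insights=False,
)
```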
For more information, see the az ml model deploy reference.
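After the deployment completes, you can inspect the service from the CLI as well. A minimal sketch, assuming the service name `myservice` from the example above:

```azurecli
# Show details for the deployed service, including its state and scoring URI.
az ml service show -n myservice

# Retrieve container logs, which is useful if the deployment is unhealthy.
az ml service get-logs -n myservice
```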
Using VS Code
See how to manage resources in VS Code.
Important
You don't need to create an ACI container to test in advance. ACI containers are created as needed.
Important
We append a hashed workspace ID to all underlying ACI resources that are created, so all ACI names from the same workspace have the same suffix. The Azure Machine Learning service name is still the customer-provided `service_name`, and no user-facing Azure Machine Learning SDK APIs need to change. We provide no guarantees about the names of the underlying resources being created.
Next steps
- How to deploy a model using a custom Docker image
- Deployment troubleshooting
- Update the web service
- Use TLS to secure a web service through Azure Machine Learning
- Consume an ML model deployed as a web service
- Monitor your Azure Machine Learning models with Application Insights
- Collect data for models in production