Model Management Command Line Interface Reference

You can update your Azure Machine Learning command line interface (CLI) installation using pip. To perform the update, you must have sufficient permissions.

Linux: On Linux, you must run the update as root (under sudo):

$ sudo -i

Then issue the following command:

# wget -q https://raw.githubusercontent.com/Azure/Machine-Learning-Operationalization/master/scripts/amlupdate.sh -O - | sudo bash -

Windows: On Windows, you must run the command as administrator.

First, open a command prompt with administrator privileges. Press the Windows key and type cmd. Right-click the Command Prompt icon and select Run as administrator from the context menu.

Then type the following command:

pip install azure-cli-ml

Base CLI concepts:

account : Manage model management accounts.	
env     : Manage compute environments.
image   : Manage operationalization images.
manifest: Manage operationalization manifests.
model   : Manage operationalization models.
service : Manage operationalized services.

Account commands

A model management account is required to use the services, which allow you to deploy and manage models. Use az ml account modelmanagement -h to see the following list.

create: Create a Model Management Account.
delete: Delete a specified Model Management Account.
list  : Get the Model Management Accounts in the current subscription.
set   : Set the active Model Management Account.
show  : Show a Model Management Account.
update: Update an existing Model Management Account.

Create a Model Management Account

Create a model management account using the following command. This account will be used for billing.

az ml account modelmanagement create --location [Azure region e.g. eastus2] --name [new account name] --resource-group [resource group name to store the account in]

Command details:

--location -l      : [Required] Resource location.
--name -n          : [Required] Name of the model management account.
--resource-group -g: [Required] Resource group to create the model management account in.
--description -d   : Description of the model management account.
--sku-instances    : Number of instances of the selected SKU. Must be between 1 and 16 inclusive. Default: 1.
--sku-name         : SKU name. Valid names are S1|S2|S3|DevTest.  Default: S1.
--tags -t          : Tags for the model management account.  Default: {}.
-v                 : Verbosity flag.
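
For example, a hedged sketch of a create call that also sets the optional SKU arguments; all values shown are illustrative placeholders, not values prescribed by this document:

az ml account modelmanagement create -l eastus2 -n [new account name] -g [resource group name] --sku-name S1 --sku-instances 2 -d "[optional description]"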

Set the active Model Management Account

Set the active model management account using the following command.

az ml account modelmanagement set --name [account name] --resource-group [resource group name associated with the account]

Command details:

--name -n          : [Required] Name of model management account to set.
--resource-group -g: [Required] Resource group containing the model management account to set.
-v                 : Verbosity flag.

Environment commands

Use az ml env -h to see the following list.

cluster        : Switch the current execution context to 'cluster'.
delete         : Delete an MLCRP-provisioned resource.
get-credentials: List the keys for an environment.
list           : List all environments in the current subscription.
local          : Switch the current execution context to 'local'.
set            : Set the active MLC environment.
setup          : Sets up an MLC environment.
show           : Show an MLC resource; if resource_group or cluster_name are not provided, shows the active MLC env.

Switch to 'cluster'

Use az ml env cluster to switch the current execution context to 'cluster'.

List environment credentials

Use az ml env get-credentials --cluster-name [compute resource] --resource-group [resource group of the compute resource] to list the credentials for the environment.

Command details:

--cluster-name -n       : [Required] Name of compute resource to retrieve keys for.
--resource-group -g     : [Required] Resource group of the compute resource to retrieve keys for.
--install-kube-config -i: Flag to save Kubernetes configuration to file. If value not provided, will install to ~/.kube/config.
-v                      : Verbosity flag.
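
For example, to list the credentials and also save the Kubernetes configuration to ~/.kube/config (the names are illustrative placeholders):

az ml env get-credentials -n [compute resource] -g [resource group] -i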

Switch to 'local'

Use az ml env local to switch the current execution context to 'local'.

Set the active MLC environment

Use az ml env set --cluster-name [cluster name] --resource-group [resource group] to set the active MLC environment.

Command details:

--cluster-name -n  : [Required] Name of the cluster to set as the active environment.
--resource-group -g: [Required] Resource group of compute resource to set as active resource.

Set up the Deployment Environment

When setting up the deployment environment, there are two options for deployment: local and cluster. Setting the --cluster (or -c) flag enables cluster deployment. The basic setup syntax is as follows:

az ml env setup [-c] --location [location of environment resources, e.g. eastus2] --name [name of environment]

This initializes your Azure Machine Learning environment in your subscription with a storage account, an ACR registry, and an App Insights service; with the cluster flag it also creates an ACS cluster. By default, if no flag is specified, the environment is initialized for local deployments only (no ACS). If you need to scale your service, specify the --cluster (or -c) flag to create an ACS cluster.

Command details:

--location -l                  : [Required] Location for environment resources; an Azure region, e.g. eastus2.
--name -n                      : [Required] Name of environment to provision.
--acr -r                       : ARM ID of ACR to associate with this environment.
--agent-count -z               : Number of agents to provision in the ACS cluster. Default: 3.
--cert-cname                   : CNAME of certificate.
--cert-pem                     : Path to .pem file with certificate bytes.
--cluster -c                   : Flag to provision ACS cluster. Off by default; specify this to force an ACS cluster deployment.
--key-pem                      : Path to .pem file with certificate key.
--master-count -m              : Number of master nodes to provision in the ACS cluster. Acceptable values: 1, 3, 5. Default: 1.
--resource-group -g            : Resource group in which to create the compute resource. Will be created if it does not exist.
                                 If not provided, a resource group is created with 'rg' appended to the environment name.
--service-principal-app-id -a  : App ID of service principal to use for configuring ML compute.
--service-principal-password -p: Password associated with service principal.
--storage -s                   : ARM ID of storage account to associate with this environment.
--yes -y                       : Flag to answer 'yes' to any prompts. Command will fail if user is not logged in.
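
For example, a hedged sketch of a cluster setup that names the resource group explicitly (all values are illustrative placeholders):

az ml env setup -c -l eastus2 -n [name of environment] -g [resource group name] --agent-count 3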

Image commands

create: Create an Operationalization Image. This command has two different sets of required arguments,
        depending on whether you want to use a previously created manifest.
list
show
usage

Create image

Note that the service create command listed below can perform the image creation, so you don't have to create an image separately.

You can create an image from a previously registered manifest, or you can register the model and create the image with a single command. Both options are shown below.

az ml image create -n [image name] --manifest-id [manifest ID]

az ml image create -n [image name] --model-file [model file or folder path] -f [code file, e.g. the score.py file] -r [the runtime, e.g. spark-py which is the Docker container image base]

Command details:

--image-name -n    : [Required] The name of the image being created.
--image-description: Description of the image.
--image-type       : The image type to create. Defaults to "Docker".
-v                 : Verbosity flag.

Registered Manifest Arguments

--manifest-id: [Required] Id of previously registered manifest to use in image creation.

Unregistered Manifest Arguments

--conda-file -c : Path to Conda Environment file.
--dependency -d : Files and directories required by the service. Multiple dependencies can be specified with additional -d arguments.
--model-file -m : [Required] Model file to register.
--schema-file -s: Schema file to add to the manifest.
-f              : [Required] The code file to be deployed.
-p              : A pip requirements.txt file needed by the code file.
-r              : [Required] Runtime of the web service. Valid runtimes are python|spark-py.
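
For example, a hedged sketch that registers a model and creates an image in one step (file names and paths are illustrative placeholders):

az ml image create -n [image name] --model-file [model file or folder path] -f score.py -r python -c [conda environment file] -s [schema file]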

Image usage

Use az ml image usage --image-id [image ID] to show the usage of the specified image.

Command details:

--image-id -i: [Required] ID of image to show.
-v           : Verbosity flag.

Manifest commands

create: Create an Operationalization Manifest. This command has two different sets of required arguments,
        depending on whether you want to use previously registered models.
list
show

Create manifest

Creates a manifest file for the model. Note that you can use the service create command, which will perform the manifest creation without you having to create it separately.

az ml manifest create --manifest-name [your new manifest name] -f [path to code file] -r [runtime for the image, e.g. spark-py]

Command details:

--manifest-name -n    : [Required] Name of the manifest to create.
-f                    : [Required] The code file to be deployed.
-r                    : [Required] Runtime of the web service. Valid runtimes are spark-py|python.
--conda-file -c       : Path to Conda Environment file.
--dependency -d       : Files and directories required by the service. Multiple dependencies can be
                        specified with additional -d arguments.
--manifest-description: Description of the manifest.
--schema-file -s      : Schema file to add to the manifest.
-p                    : A pip requirements.txt file needed by the code file.
-v                    : Verbosity flag.

Registered Model Arguments

--model-id -i: [Required] Id of previously registered model to add to manifest. Multiple models can be specified with additional -i arguments.

Unregistered Model Arguments

--model-file -m: [Required] Model file to register. If used, must be combined with model name.
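
For example, a hedged sketch that builds a manifest from a previously registered model (the values are illustrative placeholders):

az ml manifest create -n [manifest name] -f score.py -r python -i [registered model ID] -s [schema file]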

Model commands

list
register
show

Register a model

Registers the model. Note that you can use the service create command, which performs the model registration without you having to register it separately.

az ml model register --model [path to model file] --name [model name]

Command details:

--model -m      : [Required] Model to register.
--name -n       : [Required] Name of model to register.
--description -d: Description of the model.
--tag -t        : Tags for the model. Multiple tags can be specified with additional -t arguments.
-v              : Verbosity flag.
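
For example, a hedged sketch that registers a model with a description and a tag (the values are illustrative placeholders):

az ml model register -m [path to model file] -n [model name] -d "[model description]" -t [tag]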

Service commands

create
delete
keys
list
logs
run
show
update
usage

Create a service

In the following commands, note that the schema needs to be generated with the generate-schema command available through the Azure ML SDK (see the samples for more info on schema creation).

az ml service create realtime --image-id [image to deploy] -n [service name]

az ml service create realtime --model-file [path to model file(s)] -f [path to model scoring file, e.g. score.py] -n [service name] -r [run time included in the image, e.g. spark-py]

Command details:

-n                                : [Required] Webservice name.
--autoscale-enabled               : Enable automatic scaling of service replicas based on request demand.
                                    Allowed values: true, false. False if omitted.  Default: false.
--autoscale-max-replicas          : If autoscale is enabled - sets the maximum number of replicas.
--autoscale-min-replicas          : If autoscale is enabled - sets the minimum number of replicas.
--autoscale-refresh-period-seconds: If autoscale is enabled - the interval of evaluating scaling demand.
--autoscale-target-utilization    : If autoscale is enabled - target utilization of the replicas over time.
--collect-model-data              : Enable model data collection. Allowed values: true, false. False if omitted.  Default: false.
--cpu                             : Reserved number of CPU cores per service replica (can be a fraction).
--enable-app-insights -l          : Enable app insights. Allowed values: true, false. False if omitted.  Default: false.
--memory                          : Reserved amount of memory per service replica, in M or G. (ex. 1G, 300M).
--replica-max-concurrent-requests : Maximum number of concurrent requests that can be routed to a service replica.
-v                                : Verbosity flag.
-z                                : Number of replicas for a Kubernetes service.  Default: 1.

Registered Image Arguments

--image-id: [Required] Image to deploy to the service.

Unregistered Image Arguments

--conda-file -c: Path to Conda Environment file.
--image-type   : The image type to create. Defaults to "Docker".
--model-file -m: [Required] The model to be deployed.
-d             : Files and directories required by the service. Multiple dependencies can be specified with additional -d arguments.
-f             : [Required] The code file to be deployed.
-p             : A pip requirements.txt file of packages needed by the code file.
-r             : [Required] Runtime of the web service. Valid runtimes are python|spark-py.
-s             : Input and output schema of the web service.

Note on the -d flag for attaching dependencies: If you pass the name of a directory that is not already bundled (zip, tar, etc.), that directory automatically gets tar’ed and is passed along, then automatically unbundled on the other end. If you pass in a directory that is already bundled, we treat it as a file and pass it along as is. It will not be unbundled automatically; you are expected to handle that in your code.
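
For example, a hedged sketch that deploys a previously created image with autoscaling and model data collection enabled (the values are illustrative placeholders):

az ml service create realtime --image-id [image to deploy] -n [service name] --autoscale-enabled true --autoscale-min-replicas 2 --autoscale-max-replicas 5 --collect-model-data true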

Get the keys for a service

az ml service keys realtime --id [service ID]

Command details:

--id -i   : [Required] Service ID.
--regen -r: Flag to regenerate keys for the specified service.
-v        : Verbosity flag.
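
For example, to regenerate the keys for an existing service (the ID is a placeholder):

az ml service keys realtime -i [service ID] -r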

Get service logs

Either a service name or a service ID is required.

az ml service logs realtime --service-id [service ID]
az ml service logs realtime --service-name [service name]

Command details:

--kube-config -k : Kubeconfig of the cluster to get logs from.
--request-id -r  : Request Id to filter the logs by.
--service-id -i  : Service ID.
--service-name -n: Service Name.
-v               : Verbosity flag.
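
For example, to retrieve the logs for a single request (the IDs are illustrative placeholders):

az ml service logs realtime -i [service ID] -r [request ID]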

Run the service

az ml service run realtime -i [service ID]

Command details:

--id -i    : [Required] The service id to score against.
-d         : The data to use for calling the web service.
-v         : Verbosity flag.
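
For example, a hedged sketch of calling a deployed service, assuming for illustration that the generated schema expects a JSON object with an input_df field; the actual payload shape is defined by the schema you created:

az ml service run realtime -i [service ID] -d "{\"input_df\": [{\"feature1\": 1.0, \"feature2\": 2.0}]}"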

Get service details

Get service details including URL and usage (including sample data if a schema was created). Either a service name or a service ID is required.

az ml service show realtime --name [service name]
az ml service show realtime --id [service ID]

Command details:

--id -i  : The service id to show.
--name -n: Webservice name.
-v       : Verbosity flag.

Service usage

az ml service usage realtime --id [service ID]

Command details:

--id -i: [Required] Service ID.
-v     : Verbosity flag.