Generate Responsible AI vision insights with YAML and Python - Azure Machine Learning


APPLIES TO: Azure CLI ml extension v2 (current), Python SDK azure-ai-ml v2 (current)

Understanding and evaluating computer vision models requires a different set of Responsible AI tools than tabular and text scenarios. The Responsible AI dashboard now supports image data, with enhanced debugging capabilities for analyzing and visualizing image data. The Responsible AI dashboard for images brings together several mature Responsible AI tools in the areas of model performance, data exploration, and model interpretability for holistic assessment and debugging of computer vision models. This enables informed mitigation of fairness issues, and the added transparency builds trust among stakeholders. You can use the Responsible AI components to generate a Responsible AI vision dashboard with an Azure Machine Learning pipeline job.

Supported scenarios:

Name | Description | Parameter name in the RAI vision insights component
Image classification (binary and multiclass) | Predict a single class for a given image | task_type="image_classification"
Multilabel image classification | Predict multiple labels for a given image | task_type="multilabel_image_classification"
Object detection | Locate and identify the classes of multiple objects in a given image. Objects are defined with bounding boxes. | task_type="object_detection"

Important

Responsible AI for images is currently in public preview. This preview is provided without a service-level agreement and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

Responsible AI components

The core building block of the Responsible AI image dashboard in Azure Machine Learning is the RAI vision insights component, which differs from the components used to build the Responsible AI dashboard for tabular data.

The following sections contain the RAI vision insights component specification and sample code snippets in YAML and Python. For the full code, see the sample YAML and Python notebooks.

Limitations

  • All models must be registered in Azure Machine Learning in MLflow format with a PyTorch flavor. HuggingFace models are also supported.
  • Dataset inputs must be in mltable format.
  • For performance reasons, the test dataset is limited to 5,000 rows for the visualization UI.
  • Complex objects, such as lists of column names, must be supplied as a single JSON-encoded string before being passed to the RAI vision insights component (see the sketch after this list).
  • Guided_gradcam doesn't work with vision transformer models.
  • SHAP isn't supported for AutoML computer vision models.
  • Hierarchical cohort naming (creating a new cohort from a subset of an existing cohort) and adding images to an existing cohort are unsupported.
  • The intersection-over-union (IoU) threshold value can't be changed (the current default is 50%).
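
For instance, a list of class labels can be encoded with the standard json module before being passed to the component. This is a minimal sketch; the label values are only illustrative:

    import json

    # Complex inputs such as class label lists must be passed to the
    # RAI vision insights component as a single JSON-encoded string.
    classes = ["cat", "dog"]
    classes_json = json.dumps(classes)  # '["cat", "dog"]'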

RAI vision insights

The RAI vision insights component has three major input ports:

  • The machine learning model
  • The training dataset
  • The test dataset

First, register your input model in Azure Machine Learning and reference the same model in the model input port of the RAI vision insights component. To generate model-debugging insights (model performance, data explorer, and model interpretability) and populate the visualizations in your Responsible AI dashboard, use the training and test image datasets that you used to train your model. Both datasets must be in mltable format. The training and test datasets can be the same.
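
For example, an MLflow model produced by a training job can be registered with the Python SDK v2 before you reference it in the pipeline. This is a minimal sketch; the job name and model name are hypothetical placeholders:

    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import Model
    from azure.ai.ml.constants import AssetTypes
    from azure.identity import DefaultAzureCredential

    ml_client = MLClient.from_config(credential=DefaultAzureCredential())

    # Register an MLflow-format model so it can be referenced on the
    # component's model input port (job name and model name are hypothetical).
    model = Model(
        path="azureml://jobs/<training-job-name>/outputs/artifacts/paths/model/",
        type=AssetTypes.MLFLOW_MODEL,
        name="my-vision-model",
    )
    registered_model = ml_client.models.create_or_update(model)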

Dataset schema for the different vision task types:

  • Object detection

    DataFrame({ 'image_path_1' : [[object_1, topX1, topY1, bottomX1, bottomY1, (optional) confidence_score], [object_2, topX2, topY2, bottomX2, bottomY2, (optional) confidence_score], [object_3, topX3, topY3, bottomX3, bottomY3, (optional) confidence_score]], 'image_path_2' : [[object_1, topX4, topY4, bottomX4, bottomY4, (optional) confidence_score], [object_2, topX5, topY5, bottomX5, bottomY5, (optional) confidence_score]] })
  • Image classification

    DataFrame({ 'image_path_1' : 'label_1', 'image_path_2' : 'label_2' ... })
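
As an illustration, an object detection annotation table following the schema above could be built like this. This is a minimal sketch; the image paths, class labels, and box coordinates are hypothetical:

    import pandas as pd

    # Each column is an image path; each cell holds one object as
    # [class_label, topX, topY, bottomX, bottomY, confidence_score].
    test_annotations = pd.DataFrame({
        "image_path_1": [
            ["can", 44, 111, 212, 283, 0.97],
            ["milk_bottle", 167, 101, 268, 279, 0.93],
        ],
        "image_path_2": [
            ["carton", 89, 74, 199, 323, 0.95],
            ["water_bottle", 10, 22, 91, 255, 0.88],
        ],
    })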

The RAI vision insights component also accepts the following parameters:

Parameter name | Description | Type
title | Brief description of the dashboard. | String
task_type | Specifies the vision scenario: image_classification, multilabel_image_classification, or object_detection. | String
maximum_rows_for_test_dataset | The maximum number of rows allowed in the test dataset, for performance reasons. | Integer, defaults to 5,000
classes | The full list of class labels in the training dataset. | Optional list of strings
precompute_explanation | Enable generating an explanation for the model. | Boolean
enable_error_analysis | Enable generating an error analysis for the model. | Boolean
use_model_dependency | The Responsible AI environment doesn't include the model's dependencies; when set to True, the model dependency packages are installed. | Boolean
use_conda | If True, installs the model dependency packages with conda; otherwise uses pip. | Boolean

This component assembles the generated insights into a Responsible AI image dashboard. There are two output ports:

  • The insights_pipeline_job.outputs.dashboard port contains the completed RAIVisionInsights object.
  • The insights_pipeline_job.outputs.ux_json port contains the data required to display a minimal dashboard.

Once you've defined the pipeline and submitted it to Azure Machine Learning for execution, the dashboard should appear in the Azure Machine Learning portal under the Registered Models view.

YAML

    analyse_model:
      type: command
      component: azureml://registries/AzureML-RAI-preview/components/rai_vision_insights/versions/2
      inputs:
        title: From YAML snippet
        task_type: image_classification
        model_input:
          type: mlflow_model
          path: ${{parent.inputs.my_model_id}}
        model_info: ${{parent.inputs.model_info}}
        test_dataset:
          type: mltable
          path: ${{parent.inputs.my_test_data}}
        target_column_name: ${{parent.inputs.target_column_name}}
        maximum_rows_for_test_dataset: 5000
        classes: '["cat", "dog"]'
        precompute_explanation: True
        enable_error_analysis: True
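
Python SDK

An equivalent pipeline with the Python SDK might look like the following. This is a minimal sketch rather than the notebook's exact code: the compute name, registered asset names, and subscription placeholders are hypothetical, and the component inputs mirror the YAML snippet above.

    from azure.ai.ml import MLClient, Input, dsl
    from azure.ai.ml.constants import AssetTypes
    from azure.identity import DefaultAzureCredential

    credential = DefaultAzureCredential()
    ml_client = MLClient.from_config(credential=credential)

    # Fetch the RAI vision insights component from the preview registry
    # (subscription and resource group values are hypothetical placeholders).
    registry_client = MLClient(
        credential=credential,
        subscription_id="<subscription-id>",
        resource_group_name="<resource-group>",
        registry_name="AzureML-RAI-preview",
    )
    rai_vision_insights = registry_client.components.get(
        name="rai_vision_insights", version="2"
    )

    @dsl.pipeline(compute="cpucluster", experiment_name="rai_vision_dashboard")
    def rai_vision_pipeline(model_id, test_data, target_column_name, classes):
        rai_job = rai_vision_insights(
            title="From Python SDK",
            task_type="image_classification",
            model_input=Input(type=AssetTypes.MLFLOW_MODEL, path=model_id),
            model_info=model_id,
            test_dataset=test_data,
            target_column_name=target_column_name,
            classes=classes,
            precompute_explanation=True,
            enable_error_analysis=True,
        )
        # Expose both output ports of the component as pipeline outputs.
        return {
            "dashboard": rai_job.outputs.dashboard,
            "ux_json": rai_job.outputs.ux_json,
        }

    pipeline_job = rai_vision_pipeline(
        model_id="azureml:my-vision-model:1",  # hypothetical registered model
        test_data=Input(type=AssetTypes.MLTABLE, path="azureml:my_test_mltable:1"),
        target_column_name="label",
        classes='["cat", "dog"]',
    )
    submitted_job = ml_client.jobs.create_or_update(pipeline_job)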

Integration with AutoML for images

Automated machine learning (AutoML) in Azure Machine Learning supports model training for computer vision tasks such as image classification and object detection. To debug AutoML vision models and explain model predictions, AutoML models for computer vision are integrated with the Responsible AI dashboard. To generate Responsible AI insights for AutoML computer vision models, register your best AutoML model in the Azure Machine Learning workspace and run it through the Responsible AI vision pipeline. To learn more, see How to set up AutoML to train computer vision models.
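
One way to register the best model from a completed AutoML image job is through MLflow against the workspace tracking store. This is a minimal sketch: the child run ID is a placeholder, and the "outputs/mlflow-model" artifact path is an assumption for illustration:

    import mlflow
    from azure.ai.ml import MLClient
    from azure.identity import DefaultAzureCredential

    ml_client = MLClient.from_config(credential=DefaultAzureCredential())

    # Point MLflow at the workspace tracking store.
    workspace = ml_client.workspaces.get(ml_client.workspace_name)
    mlflow.set_tracking_uri(workspace.mlflow_tracking_uri)

    # Register the best child run's MLflow model (the run ID and the
    # "outputs/mlflow-model" artifact path are assumptions for this sketch).
    best_child_run_id = "<best-automl-child-run-id>"
    mlflow.register_model(
        model_uri=f"runs:/{best_child_run_id}/outputs/mlflow-model",
        name="automl-vision-best-model",
    )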

Notebooks for the computer vision tasks supported by AutoML can be found in the azureml-examples repository.

How to submit the Responsible AI vision insights pipeline

The Responsible AI vision insights pipeline can be submitted in one of the following ways:

  • Python SDK: To learn how to submit the pipeline through Python, see the AutoML image classification scenario with RAI dashboard sample notebook. Section 5.1 of the notebook covers pipeline construction.
  • Azure CLI: To submit the pipeline through the Azure CLI, see the YAML section in part 5.2 of the sample notebook linked above.
  • UI (via Azure Machine Learning studio): From the designer in Azure Machine Learning studio, the RAI vision insights component can be used to create and submit a pipeline.

RAI vision insights component parameters (AutoML-specific)

In addition to the Responsible AI vision parameters listed in the previous section, the following parameters apply specifically to AutoML models.

Note

Some parameters are specific to the selected XAI algorithm, and others are optional.

Parameter name | Description | Type
model_type | The flavor of the model. Select pyfunc for AutoML models. | Enum: pyfunc, fastai
dataset_type | Whether the images in the dataset are read from publicly available URLs or stored in the user's datastore. For AutoML models, images are always read from the user's workspace datastore, so the dataset type is "private". For the private dataset type, the images are downloaded to the compute before explanations are generated. | Enum: public, private
xai_algorithm | The type of XAI algorithm supported for AutoML models. Note: SHAP isn't supported for AutoML models. | Enum: guided_backprop, guided_gradcam, integrated_gradients, xrai
xrai_fast | Whether to use the faster version of XRAI. If True, computing explanations is faster, but the explanations (attributions) are less accurate. | Boolean
approximation_method | This parameter is specific to integrated gradients. The method for approximating the integral. Available approximation methods are riemann_middle and gausslegendre. | Enum: riemann_middle, gausslegendre
n_steps | This parameter is specific to the integrated gradients and XRAI methods. The number of steps used by the approximation method. More steps lead to better approximations of the attributions (explanations). The valid range of n_steps is [2, inf), but attribution quality starts to converge after about 50 steps. | Integer
confidence_score_threshold_multilabel | This parameter is specific to multilabel classification only. It specifies the confidence score threshold above which labels are selected for generating explanations. | Float
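
As an illustration, the AutoML-specific settings could be bundled and passed as extra inputs when invoking the component. This is a minimal sketch; the values are illustrative, and the parameter names follow the table above:

    # Illustrative AutoML-specific XAI settings for the RAI vision
    # insights component; parameter names follow the table above.
    automl_xai_settings = {
        "model_type": "pyfunc",     # AutoML models use the pyfunc flavor
        "dataset_type": "private",  # AutoML images are read from the datastore
        "xai_algorithm": "xrai",    # SHAP isn't supported for AutoML models
        "xrai_fast": True,          # faster, slightly less accurate attributions
        "n_steps": 50,              # attribution quality converges around 50 steps
    }

These keyword arguments would then be forwarded to the component invocation shown in the earlier Python SDK sketch, for example rai_vision_insights(..., **automl_xai_settings).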

Generate model explanations for AutoML models

Once your pipeline completes and the Responsible AI dashboard is generated, you must connect it to a compute instance in order to generate explanations. After connecting to a compute instance, you can select an input image, and the explanation produced by the selected XAI algorithm is displayed in the right pane.

Note

For image classification models, methods such as XRAI and integrated gradients generally provide better visual explanations, but are much more compute-intensive than guided backprop and guided gradCAM.

Learn more about the Responsible AI image dashboard

To learn more about how to use the Responsible AI image dashboard, see Responsible AI image dashboard in Azure Machine Learning studio.

Next steps

  • Learn more about the concepts and techniques behind the Responsible AI dashboard.
  • View sample YAML and Python notebooks to generate Responsible AI dashboards with YAML or Python.
  • Learn more about how to use the Responsible AI image dashboard to debug image data and models and inform better decisions in this tech community blog post.
  • Learn how the Responsible AI dashboard was used by Clearsight in a real-life customer story.