ZEISS Knowledge Base

arivis AI: Machine Learning and Deep Learning

Using StarDist

This article guides you through the implementation of StarDist segmentation.

Introduction

StarDist is a deep learning-based method for 2D and 3D nucleus detection, developed and published by Martin Weigert and Uwe Schmidt: github.com/stardist/. StarDist uses a cell detection method that predicts a shape representation with star-convex polygons, which is well-suited to approximating the typically roundish shapes of cell nuclei in microscopy images. In 3D, the shape of a single object (cell nucleus) is described using a star-convex polyhedron instead of a polygon.

StarDist is reported to run well in multiple open-source environments such as Fiji/ImageJ, QuPath, or Python (using a Python editor such as Jupyter Notebook, PyCharm, or Spyder), but what are the benefits of integrating StarDist directly inside your arivis Vision4D imaging software? In this document, we will highlight the advantages of integrating open-source analysis tools such as StarDist directly in Vision4D.

Preliminary Remarks

Vision4D is able to run deep learning applications such as StarDist using external and arivis-independent Python libraries and tools produced by third parties.

These tools must be installed by the user under their own responsibility, strictly following the instructions in this document. arivis has tested the setup protocol on several computers; however, due to the different and unpredictable hardware and software configurations of any given computer system, the results may vary on a case-by-case basis. Therefore, arivis declines any responsibility concerning the correct tools, installation, and setup on the individual user’s workstation. arivis cannot be made responsible for any malfunctioning or failure of the deep learning environment setup. arivis does not guarantee technical support on the setup task or on any deep learning application. Furthermore, arivis also declines any responsibility regarding the validity of the scientific results gathered from the deep learning application.

How does it work?

The StarDist workflow is based on three main steps:

  1. Object Annotation
  2. Network creation and training
  3. Inference (image analysis)

Object annotation

This task consists of manually drawing the shape of the objects over a set of representative images (2D or 3D). The reference objects should capture all of their possible variations within the reference samples. The annotations are then used to create a binary mask image (the ground truth). Both the annotations and the related binary masks are used afterward by the training task to build the neural network.

The annotation task is a manual activity and therefore takes considerable time to perform. The required number of annotations must be estimated in advance in order to get reliable training results; during training, the number of annotations can be increased if required. The number of annotations starts from a minimum of around 200 samples, spread over 10 - 20 different images, and goes up to a thousand (or even tens of thousands) in more complex cases.

In order to increase the number of samples without the need to acquire new images, it is also possible to re-use the existing images after applying operations such as rotations, random flips, or intensity shifts to the originals.
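
As a minimal sketch of this kind of augmentation, assuming the image and its annotation mask are NumPy arrays (StarDist's training can accept a similar function through its augmenter hook; this one is purely illustrative):

import numpy as np

def augment(img, mask, rng=np.random.default_rng()):
    # Random rotation by 90-degree steps and a random flip; geometric
    # operations are applied to both image and mask so the ground truth
    # stays aligned with the image.
    k = int(rng.integers(4))
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    if rng.random() < 0.5:
        img, mask = np.flipud(img), np.flipud(mask)
    # The intensity shift is applied to the image only (labels are unchanged).
    img = img + rng.normal(0.0, 0.05 * float(img.std()))
    return img, mask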

Creating and training a neural network

This task takes the sample images together with the related mask images to build the neural network. The training is a loop in which, in each cycle, two values are computed: the training loss and the validation loss.
The progress of training can be evaluated by comparing the training loss with the validation loss. During training, both values should decrease before reaching a minimal value, which should not change significantly with further cycles. Comparing the development of the validation loss with the training loss can give insights into the model’s performance. If both the training and validation loss values are still decreasing, this indicates that further training is still beneficial. If the validation loss suddenly increases again while the training loss decreases towards zero, it usually means that the network is overfitting to the training data.
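
As an illustrative sketch of this rule of thumb (not part of StarDist; it simply inspects recorded per-epoch loss values):

import numpy as np

def training_status(train_loss, val_loss, patience=10):
    # train_loss, val_loss: per-epoch loss values, most recent last
    best = int(np.argmin(val_loss))
    if len(val_loss) - 1 - best >= patience and train_loss[-1] <= min(train_loss):
        # validation loss stopped improving while training loss keeps falling
        return "overfitting: stop and keep the weights from the best epoch"
    if best == len(val_loss) - 1:
        # validation loss reached a new minimum this epoch
        return "still improving: continue training"
    return "near the minimum: monitor for a few more epochs"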

The training is fundamentally based on mathematical operations. These operations are repetitive and time-consuming, and can easily be parallelized. Using GPU resources improves training performance by reducing the total time. Working with the CPU only, a complex training can take 7 to 10 days, while using the GPU the total time may be reduced to mere hours (10 to 12).
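
Since StarDist runs on TensorFlow, a quick way to check whether training will use the GPU is to ask TensorFlow which devices it can see (an empty list means CPU-only):

import tensorflow as tf

# Lists CUDA-capable devices visible to TensorFlow; an empty list == CPU-only.
print(tf.config.list_physical_devices("GPU"))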

Image analysis

Once the neural network is trained, it can be used to analyze sample images.

Why use StarDist within Vision4D?

arivis Vision4D (V4D) is a modular software for working with multi-channel 2D, 3D, and 4D images of almost unlimited size, independent of available RAM. Many imaging systems, such as high-speed confocal, light sheet / SPIM, and 2-photon microscope systems can produce vast amounts of multi-channel data, which Vision4D handles without constraints.

V4D allows the user to execute complex analysis tasks in automatic or batch mode. This includes sophisticated pre-processing algorithms, multiple segmentation approaches (including machine learning segmentation tools), and powerful data handling. Datasets from megabytes to terabytes in size can be quantified by Vision4D with a single pipeline on virtually any computer that is capable of holding the data (see system requirements).

StarDist represents an advanced method to detect roundish objects such as cells and nuclei, especially in crowded fields where the objects overlap, but it is limited to these cases. The new frontiers of image analysis in life science require the capability to analyze the complex interactions between biological structures. Vision4D has the tools to satisfy these requirements. StarDist can be integrated into the V4D analysis workflow and directly contribute to better detection of its target structures.

StarDist can currently be executed as a Python script, but in the near future it will be available as a V4D pipeline operator, making its usage even more flexible and powerful.

Application examples

Nuclei Tracking

StarDist is used to segment the nuclei over multiple time points, while further operations take the result of the segmentation to create tracks and measure their properties.

Distribution Analysis

StarDist segments the nuclei; a pipeline operation can then import atlas regions from a labelled image, compartmentalize the nuclei into specific regions, and report numerical density.

Measuring Cell Volumes

StarDist is used to segment the cell nuclei, and a region growing operation can then be used to identify the boundaries of the cells based on a membrane staining.

How to get StarDist working with Vision4D

To get StarDist working with Vision4D, the StarDist Python package must be added to an existing Python 3.x environment. We have tested and strongly recommend the Anaconda Python 3.x distribution for the scope of this application.

Once the StarDist package has been correctly set up, Vision4D must also be configured accordingly by changing the scripting preferences.
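
As a quick sanity check before configuring Vision4D, the following can be run inside the activated environment (a minimal sketch; csbdeep is installed as a StarDist dependency):

# Run inside the activated StarDist environment:
import stardist
from stardist.models import StarDist2D
from csbdeep.utils import normalize

print("StarDist version:", stardist.__version__)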

Setting up Vision4D preferences

With Vision4D open, go to Extras > Preferences.

In the preferences window, select the Scripting tab on the left and select "Anaconda environment".

You will then need to browse your computer for the Anaconda installation folder and select the StarDist environment previously created.

By default, new environments are stored under the \envs folder located in the Anaconda installation folder, e.g. C:\Anaconda3\envs\stardist

Having selected the correct environment folder, we can then run “Install arivis package” and “Test Environment” to check compatibility.

arivis also provides a free Python script to run the StarDist algorithm inside of V4D.

The script allows the user to select the active channel, as well as the time points and Z planes (full-range or sub-selection thereof), from which to segment objects. A new channel will be created to store the labeled objects found by StarDist.

Loading the script

A startup package including the Python script, the technical instructions, and a test image is available on request. Contact your local arivis area sales manager for more information about how to get the Python script mentioned here, or use the contact form on our website.

In the Script Editor, the script can be opened by dragging the .py file into an open window, or by going to File > Open... and navigating to the script file.

Once loaded, some parameters will need to be changed for your specific images.

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ USER SETTINGS @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# -----------------------------------------------------------------------------
# Stardist 2D - Script
#
# INPUT_CHANNEL: channel with the target objects;
# only used to read/write the voxels
# ---> imageset.get_channeldata / imageset.set_channeldata
# -----------------------------------------------------------------------------
INPUT_CHANNEL = 1  # <---- counting starts from 0 (Ch#1 == 0)

# name of the new output channel
OUTPUT_CH_NAME = "Stardist_"

# path and name of the trained model
#MODEL3D_PATH = "D:/Arivis_Dataset/Carlo Stardist/2021-05-10/3D/2021-04-22/models"
MODEL2D_PATH = "D:/Arivis_Dataset/Carlo Stardist/2021-05-10/2D/2021-05-03/models"
MODEL_NAME = "stardist"

# PARAMETERS YOU CAN CHANGE
CURRENT_TIME_POINT = True  # True == current time point only; False == whole series
FIRST_PLANE = -1  # -1 == BOTTOM
LAST_PLANE = -1   # -1 == TOP
# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ END USER SETTINGS @@@@@@@@@@@@@@@@@@@@@@@@@@@@

Of specific importance are:

  • Input Channel number (remember the first channel is channel "0"). This specifies which channel to run the segmentation on.
  • Model path and name. These will depend on your installation and the model you are using.
  • Input range.
  • CURRENT_TIME_POINT: if true, only the current time point is analyzed; if false, the entire time series.
  • FIRST_PLANE: the first plane in the stack to be used for segmentation.
  • LAST_PLANE: the last plane in the stack to be used for segmentation.
  • Note that if both the first and last plane parameters are set to "-1", the script will use the full stack.

Once the parameters have been set, the script can be saved and run.
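
For orientation, the core prediction step inside such a script typically corresponds to the standard StarDist API call sketched below; the imageset.get_channeldata / imageset.set_channeldata calls mentioned in the script comments handle the voxel I/O, and img stands for a 2D array read from the input channel:

from csbdeep.utils import normalize
from stardist.models import StarDist2D

# Load the trained model from the folder configured in the user settings
model = StarDist2D(None, name=MODEL_NAME, basedir=MODEL2D_PATH)

# img: 2D NumPy array read from INPUT_CHANNEL (via imageset.get_channeldata);
# normalize() rescales intensities between the 1st and 99.8th percentiles
labels, details = model.predict_instances(normalize(img, 1, 99.8))

# 'labels' is the label image written to the new output channel
# (via imageset.set_channeldata)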

Download Full "How to run StarDist within arivis Vision4D" PDF

Applying cellpose models (arivis Vision4D 3.4.0 to 4.1.0)

Overview

To use cellpose in Vision4D we need the following:

  1. Install anaconda python
  2. Import the cellpose environment in the Anaconda Navigator
  3. Configure Vision4D to install the arivis libraries into the cellpose environment
  4. Use the Python Segmenter operation to load the cellpose script and define the required parameters

Introduction

Cellpose is a deep-learning (DL) based algorithm for cell and nucleus segmentation. It was created by the Stringer and Pachitariu groups and was originally published as Stringer et al., Nature Methods, 2021.

Cellpose uses a cell detection method that predicts object shape using a flow representation of cell dynamics, which is well-suited to approximating and defining the complex borders of cells in microscopy images. These representations are used for the DL model training and predictions (inference). Full documentation of the method can be found on the cellpose website.

 

Vision4D can be configured to execute cellpose segmentation within analysis pipelines, thereby enabling users to take advantage of both the advanced segmentation enabled by cellpose and the image and segment processing and visualization tools offered by Vision4D. This article explains how to download and install the necessary tools, and how to configure the pipeline Python Segmenter operation to segment objects using cellpose. 

By integrating cellpose into a pipeline, users can take advantage of the full functionality of the Vision4D pipeline concept to:

  • Process large multidimensional images
  • Enable segmentation in an easy-to-use interface
  • Enable the visualization of objects segmented using cellpose in 4D with advanced object display options
  • Facilitate complex further analysis like parent-child relationships and tracking

Preliminary Remarks

Vision4D runs deep learning applications for instance segmentation such as Cellpose and StarDist using external and arivis-independent Python libraries and tools produced by third parties.

These tools must be installed by the user under their own responsibility, strictly following the instructions in this document. arivis has tested the setup protocol on several computers, however, due to the different and unpredictable hardware and software configurations of any given computer system, the results may vary on a case-by-case basis. Therefore, arivis declines any responsibility concerning the correct tools, installation, and setup on the individual user’s workstation. arivis cannot be made responsible for any malfunctioning or failure of the deep learning environment setup. arivis does not guarantee technical support on the setup task or on any deep learning application. Furthermore, arivis also declines any responsibility regarding the validity of the scientific results gathered from the deep learning application.

Installing the prerequisites

To use the cellpose script in Vision4D, we need 3 configuration steps:

  1. Install Anaconda Python
  2. Import the cellpose environment
  3. Configure the Vision4D Machine Learning preferences to use the environment.

Installing Anaconda

Instructions for downloading, installing, and configuring anaconda for Vision4D can be found here

Setting up the cellpose environment

Using cellpose in Vision4D

Setting up cellpose parameters

Once the script has been loaded into the operator we can set up the parameters for the operation.

All the parameters here are defined by the cellpose method. More information about each of those settings can be found on the cellpose website.

As mentioned in the introduction, cellpose uses pre-trained models to segment the cells. Some pre-trained models are included (cyto2/nuclei), others can be downloaded from the cellpose website. For most applications, the cyto2 or nuclei models work fine. To select either of these models, we can simply type the name of the model in the Model_Name field (note that the name is case sensitive).

As hinted at by the name of these models, they are each tuned to detect specific portions of cells. Namely, the cyto2 model detects cytoplasm, while the nuclei model is tuned for nuclei. The Channel field should therefore indicate which channel shows this specific component. Note that the cyto2 model can also produce improved results by adding a secondary channel for the nuclei. In this case, the cytoplasm channel should be selected in the Input_channel field, while the nuclear channel should be used in the Second_channel field. If there is no nucleus channel, or the model used is not able to use a second channel, the Second_channel field should be set to 0 (zero).

Note that the channel numbering is taken from their ordering in the set. This can be clearly seen in the Channel Visibility panel.

In this case, the DAPI nuclear channel is channel #1.

The Diameter_in_um field is used to define how big we expect the objects to be. This does not have to be an exact specific number, but it should be representative of the typical object diameter. Some experimentation with this value is worthwhile to better understand how much of an effect it has. Generally, the best way to set this is to use the Measure tool to measure the diameter of a typical cell.

The Flow_threshold and Cellprob_threshold fields are used to fine-tune the predictions for our specific images. Details of how each of these parameters affects the result can be found here, but as with the diameter, some experimentation is worthwhile. Note that the Cellprob_threshold value must be between -6 and +6.

The Fast_3D parameter is binary and can therefore only be on or off. If it is on, predictions are made plane by plane, without any knowledge of the neighboring planes, and the end result simply collates those predictions in 3D. As the name implies, this is faster but potentially less accurate, as an error in the prediction in one plane is more likely to lead to an object being split along the Z axis. In contrast, if this option is off, the model uses information from neighboring planes to improve the prediction in the current plane. In either case, the end result will be a 3D object (if the set has only one plane, the object is simply one plane thick).
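
For reference, these operator fields map roughly onto the parameters of the underlying cellpose Python API. The sketch below illustrates the correspondence and is not the arivis implementation; note that the raw API expects the diameter in pixels rather than µm, and img stands for an input NumPy array:

from cellpose import models

model = models.Cellpose(model_type="cyto2")       # Model_Name
masks, flows, styles, diams = model.eval(
    img,                         # 2D or 3D NumPy array
    channels=[1, 2],             # [Input_channel, Second_channel]; 0 = none
    diameter=30.0,               # expected object diameter, in pixels here
    flow_threshold=0.4,          # Flow_threshold
    cellprob_threshold=0.0,      # Cellprob_threshold, in the range -6 to +6
    do_3D=True,                  # False gives the faster plane-by-plane mode
)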

The operator can be configured to use custom models. If we want to use a custom model we can simply check the box for Custom_model and enter the path for the model file (note path names must use forward slashes e.g. "C:/CustomPath/Model"). The name in the Model_name field will then be ignored. More information on creating and using custom models can be found on the cellpose website.

Finally, we have two Vision4D-specific parameters. The first is the Tile_Size. This implementation of cellpose has been designed to take advantage of Vision4D's capacity to process very large images. However, the cellpose model itself is not built to be RAM-independent the way that Vision4D is. Therefore, to process very large images, this implementation divides the image into multiple tiles; each tile is processed individually and the results are collated from these predictions. Because these models are RAM-dependent, the tile size should be set according to the available memory. In practice, leaving the tile size at the default value works for most people.
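
The tiling idea is sketched below, purely for illustration; the actual implementation additionally handles tile overlaps, 3D stacks, and objects cut by tile borders:

import numpy as np

def tiled_labels(img, tile, predict):
    # 'predict' stands in for a per-tile cellpose call returning a label image
    out = np.zeros(img.shape, dtype=np.int32)
    next_id = 0
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            labels = predict(img[y:y + tile, x:x + tile])
            mask = labels > 0
            # Offset the tile's label IDs so they stay unique in the output
            out[y:y + tile, x:x + tile][mask] = labels[mask] + next_id
            next_id = int(out.max())
    return out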

The final parameter is common to all segmentation operations: it is the tag used to label the objects created by this operation. It is how other pipeline operations can call on the objects created by this operation to use or refine the results.

Using cellpose in pipelines

Cellpose is a powerful segmentation tool for bioimage analysis, and it is free to use and does not require commercial software like Vision4D. However, there are advantages to using cellpose within Vision4D pipelines.

The first main advantage of using cellpose in Vision4D was alluded to above. This implementation allows users to use the method with images of virtually any size on virtually any computer that runs MS Windows. It can therefore be used to segment very large 2D and 3D datasets like slide scans and Light-Sheet scans.

The second advantage of using cellpose in Vision4D is the ability to use it in conjunction with other pipeline operations to refine the results, extract additional information, and enable easy-to-use visualization tools to both review and present the results of analyses.

By building complex pipelines that use cellpose, users can:

  • Use pre-segmentation enhancements to create more easily segmented images
  • Refine the results of the segmentation to remove segmentation artifacts like small/large objects that shouldn't be segmented
  • Use cellpose segmentation results together with traditional segmentation tools to identify inter-object relationships, like finding child objects inside cells or distances to neighbors

Many of these possibilities are covered in the User Guide that users can access from the help menu, and in other articles on this website. Please use the search tool to find out more about compartments analysis, object coloring options, movie making, etc.

Download Full "How to: install and run predictions with Cellpose" PDF

Docker support for instance segmentation

What is Docker?

Docker is an open-source platform (https://www.docker.com/) that allows you to build, test, and deploy applications quickly and easily. Docker packages software into standardized units called containers, which contain everything the software needs to run. Containers virtualize the operating system of a server, essentially running the package as a Virtual Machine (VM).

Using Docker in arivis Pro

arivis Pro 4.2 introduced the possibility of performing instance segmentation using Deep Learning. This DL segmentation tool relies on various layers of dependent libraries. Rather than forcing configurations that could cause a conflict with other software, arivis Pro uses Docker technology to embed all the necessary dependencies for these DL models into a Docker container. A Docker container has all the information required to execute the segmentation, sandboxed in such a way as to be independent of the system configuration and without risking conflicts with other software installations.

What is the difference between Docker Desktop and Docker Engine?

Docker Desktop

Targeted at workstation use, Docker Desktop is free for personal use, academia, and open-source projects, but requires a paid subscription for commercial use (more than 250 employees OR more than $10 million in annual revenue). Docker Desktop is more user-friendly and is aimed at developers and those simply using Docker. It provides an easy-to-use GUI, includes additional tools like the Docker Dashboard for easier container management, and automatically updates to the latest version.

For detailed information see the Docker Desktop docs: https://docs.docker.com/desktop/

arivis Pro 4.2 onwards and ZEN products can connect with Docker Desktop on the local workstation, see:

Docker Engine

Docker Engine is the main service that runs containers; on non-Linux hosts it runs inside a Linux kernel virtual machine (VM). It is open source and free to use, operates primarily through the command-line interface, and is often used in production environments. It’s highly configurable and can be adjusted to suit a variety of use cases.

The engine is an integral component of any Docker installation, including Docker Desktop. In a server environment, the engine can be installed standalone.

For detailed information see the Docker Engine docs: https://docs.docker.com/engine/

arivis Pro 4.3 onwards can connect with Docker Engine using TCP and Remote Docker support, see:

Docker virtualization requirements

Virtualization features in both the BIOS (on bare-metal installations) and the operating system are required for containers to work. Typically this means Intel Virtualization Technology (VT-x), or SVM Mode (AMD-V) on AMD systems; see https://docs.docker.com/desktop/troubleshoot-and-support/troubleshoot/topics/#virtualization

Docker GPU Support

Only NVIDIA drivers are currently supported for CUDA processing.

For Docker Desktop, WSL2 must be used for Paravirtualization support, see https://docs.docker.com/desktop/features/gpu/

For Docker Engine, installation of the NVIDIA Container Toolkit is required, see https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Installing Docker for AI Instance Segmentation

Guidance on installing Docker Desktop to enable Deep Learning Instance segmentation in arivis Pro.

Overview

  1. Check what Docker license you need and obtain it if necessary
  2. Check the Docker system requirements and ensure that your system meets the specifications
  3. Download and install Docker Desktop
  4. Restart your system
  5. Check the application settings

Introduction

arivis Pro 4.2 introduced the possibility of performing instance segmentation using Deep Learning. This DL segmentation tool relies on various layers of dependent libraries. Rather than forcing configurations that could cause conflicts with other software users might also have on their system, arivis uses Docker technology to embed all the necessary dependencies for these DL models into a Docker container. A Docker container has all the information required to execute the segmentation, sandboxed in such a way as to be independent of the system configuration and without risking conflicts with other software installations. To use Docker containers, Docker Desktop must be installed on the system.

Docker Licenses

The use of Docker, like most software packages, is limited by the terms of the licensing system it uses. Various licensing options are available, at various price points, depending on the type and size of the institution that uses the technology. To install Docker, it is therefore important to know what type of license your institution has access to, and since large organizations may have existing licenses, it is recommended that you consult with your IT team to check what type of license may be required and/or available.

Note that Docker licensing is fully independent of your arivis license, and as a user you are responsible for your usage of the Docker license.

Installing Docker Desktop

Configuring Docker Desktop

Troubleshooting

After starting Docker Desktop you may be prompted to update the WSL Kernel version:

In such a case, a Windows update is required. First, close the Docker Desktop prompt above to ensure the correct installation of the update (otherwise Docker Desktop is still running in the background).

Windows 10

Open the Windows Settings and click on the Update & Security tab...

Then, go to Advanced options...

... and turn on the option to Receive updates for other Microsoft products...

...then go back and check for updates. If an update appears for the Windows Subsystem for Linux, install it, then restart Docker Desktop.

Note that the option to install Microsoft updates may be locked by your organization's IT policy. If so, please contact your IT team to enable it as needed.

Windows 11

On Windows 11 the process is similar. In the Windows Settings, select Windows Updates and then Advanced options.

Then turn on the option to Receive updates for other Microsoft products.

Go back to the main update tab, and if an update appears for the Windows Subsystem for Linux, install it, then restart Docker Desktop.

As with Windows 10, the option to install updates for other Microsoft products may be locked by your organization's IT policy. Please check with your IT team to enable this option as needed.

Installing Docker Engine on AWS

Overview

Guidance on installing a standalone Docker Engine for Instance segmentation in the Amazon Cloud environment.

arivis Pro instance segmentation is typically performed on a local workstation using Docker Desktop. This process is discussed in the KB article here:

Installing Docker for AI Instance Segmentation

Docker Desktop is not licensed or intended for multi-session use. On a server environment, the Docker Engine must be used.

Introduction

The Docker Engine can be installed as a standalone instance, to be shared remotely as a service.

From arivis Cloud, the segmentation container is Linux-based, so for both reduced costs and ease of implementation, we would recommend a Linux-based distribution to host the Docker Engine.

Here is an overview of the required steps when creating the virtual machine on AWS.

Selecting size and image

  1. From the EC2 Dashboard, open the Launch instance dialog.
  2. Label the instance as desired.
  3. Select an OS, for example Ubuntu.
  4. Select an AMI with NVIDIA drivers included. Here the Deep Learning Base image is selected as it contains the NVIDIA drivers and toolkit.
  5. Select an Instance type. Only certain AWS instance types have GPU. A recommended list can be found on the website:
    https://docs.aws.amazon.com/dlami/latest/devguide/gpu.html
    For this example, we will use a g4dn instance type:
  6. Create your Key pair to access the instance.
  7. Download the .pem file in your browser and save it, as you will need it to connect to the instance.
  8. Create your network. You may want to switch SSH traffic from Anywhere to your specific IP. Extra inbound rules can be added to the security group later if needed.
  9. Configure your storage. Be aware that most image models are at least 5GB in size.

    By adding the NVIDIA OSS image, there is an extra volume.
  10. Launch the instance.
  11. When started, use the key pair .pem file and the default user to SSH into the instance, using its public IP address: ssh -i arivisEC2.pem ubuntu@<ip address>

Installing Docker Engine

Install | Docker Docs

Set up Docker's apt repository:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

 

Install the Docker packages:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Enabling Docker Remote access

Configure remote access for Docker daemon | Docker Docs

Please note that opening TCP access to the Docker Engine is a security risk.

Connection to the containers can provide root access. Ensure that the necessary firewall restrictions are in place to allow only expected clients. Within a cloud environment, external access is typically blocked by default, but access from other machines within the virtual network needs to be considered.

Edit the systemctl service override:

sudo systemctl edit docker.service

The Docker instructions specify using 127.0.0.1, which will only bind to the localhost interface. To permit external connections, use 0.0.0.0 to listen on all interfaces; you can modify this to a specific interface IP as required.

Add these lines between the top comments:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375

Save the file (CTRL+X) and reload the systemctl configuration.

sudo systemctl daemon-reload

Restart Docker

sudo systemctl restart docker.service

Within AWS, the VM Security Group settings must receive a new port rule to allow 2375 from any specific clients that need to connect. A port range for the containers is also required (if 10 containers may run in parallel, use 5000-5009).

Configuring arivis Pro to use Remote Docker Engine

  1. Create an access token in arivis Cloud.
  2. Copy the access token and paste it into the Access token field.
  3. Inside the Remote URL field, select your server IP.
  4. Click Apply to complete the configuration (see the connectivity check sketched below).
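
The connection can also be verified independently of arivis Pro, for example with the Docker SDK for Python; the address below is a placeholder for your instance's public IP:

import docker  # pip install docker

# Placeholder address: substitute the EC2 instance's public IP
client = docker.DockerClient(base_url="tcp://203.0.113.10:2375")
print(client.ping())                     # True if the engine is reachable
print(client.version().get("Version"))   # engine version string
client.close()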

Last updated: 2025.02.18

Code snippets were used from the links provided at the time of writing. Check the contained links for updates to any presented commands.

Installing Docker Engine on Azure

Overview

Guidance on installing a standalone Docker Engine for Instance segmentation in the Azure Cloud environment.

arivis Pro instance segmentation is typically performed on a local workstation using Docker Desktop. This process is discussed in the KB article here:

Installing Docker for AI Instance Segmentation

Docker Desktop is not licensed or intended for multi-session use. On a server environment, the Docker Engine must be used.

Introduction

The Docker Engine can be installed as a standalone instance, to be shared remotely as a service.

From arivis Cloud, the segmentation container is Linux-based, so for both reduced costs and ease of implementation, we would recommend a Linux-based distribution to host the Docker Engine.

Here is an overview of the required steps when creating the virtual machine on Azure.

Selecting size and image

  1. To show only GPU-supported image types, add GPU to the Type filter criteria.
  2. Then select the preferred size.
    For this example, we will use an Ubuntu Server LTS image and a smaller GPU-enabled size.

Configuring Disk

The image is transferred to the Docker container prior to processing.

  1. Select a disk. The disk must be large enough to hold all required models and all concurrently processed images. The IOPS of the storage should at least match the network performance of the VM.
  2. You can now start the VM.

Installing the NVIDIA GPU Extension

  1. With the VM running, use the filter type GPU to find the NVIDIA GPU Driver Extension.
  2. Run through the creation/deployment process.
  3. For the deployment error “NVIDIA GPU not found on this VM size”, select a different GPU instance type and re-deploy the extension.
  4. For the deployment error “Code 14” on Linux, try disabling Secure Boot in the VM and re-deploy the extension.

Installing Docker Engine

Install | Docker Docs

Set up Docker's apt repository:

# Add Docker's official GPG key:

sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

 

Install the Docker packages:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Enabling Docker Remote access

Configure remote access for Docker daemon | Docker Docs

Note that opening TCP to the Docker Engine is a security risk.

Connection to the containers can provide root access. Ensure that the necessary firewall restrictions are in place to allow only expected clients. Within a cloud environment, external access is typically blocked by default, but access from other machines within the virtual network needs to be considered.

Edit the systemctl service override:

sudo systemctl edit docker.service

The Docker instructions specify using 127.0.0.1, which will only bind to the localhost interface. To permit external connections, use 0.0.0.0 to listen on all interfaces; you can modify this to a specific interface IP as required.

Add these lines between the top comments:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375

Save the file (CTRL+X) and reload the systemctl configuration.

sudo systemctl daemon-reload

Restart Docker:

sudo systemctl restart docker.service

Within Azure, the VM Network settings must receive a new port rule to allow 2375 from any specific clients that need to connect. A port range for the containers is also required (if 10 containers may run in parallel, use 5000-5009).

Installing the NVIDIA Container Toolkit

Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit 1.17.0 documentation

Configure the production repository:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

Update the packages list from the repository:

sudo apt-get update

Install the NVIDIA Container Toolkit packages:

sudo apt-get install -y nvidia-container-toolkit

Configuring arivis Pro to use Remote Docker Engine

  1. Create an access token in arivis Cloud.
  2. Copy the access token and paste it into the Access token field.
  3. Inside the Remote URL field, select your server IP.
  4. Click Apply to complete the configuration.

Last updated: 2024.11.27

Code snippets were used from the links provided at the time of writing. Check the contained links for updates to any presented commands.

Deep Learning segmentation pipelines

How to create, share and import pipelines that use Deep Learning segmentation.

Overview

Link your arivis Pro installation to your arivis Cloud account

  1. Log in to your arivis Cloud account and create an access token
  2. In arivis Pro, open the arivis Cloud Model Store and enter your access token 

Creating a pipeline that includes ZEISS arivis AI segmentation

  1. Add the DL Segmenter to your pipeline and select your model
  2. Build the rest of your pipeline as needed

Exporting pipelines with DL instance segmentation

  1. Export your pipeline from the arivis Pro analysis panel
  2. Share your ONNX/CZANN file or share your model using your arivis Cloud account

Importing pipelines with DL segmentation

  1. Connect to your arivis Cloud account and accept the shared model invitation; if necessary, create an access token
  2. Import the pipeline in the arivis Pro analysis panel and point the segmentation operation to your downloaded model
  3. Run your pipeline

Introduction

Since arivis Vision4D 3.6, arivis has included Deep Learning inference operations in the analysis panel. As of the release of arivis Pro 4.2, pipelines can run DL instance segmentation. Like all pipelines, those that include DL instance segmentation can be shared for use on other systems and with other images, and can also be run in batch. However, DL pipelines must be linked to the specific model on which they are based, and these models must be exported along with the pipeline and linked back to it before execution.

There are two ways to select a model for use in a pipeline. Whether we are using the Deep Learning Reconstruction or the Deep Learning Segmenter, once it is added to the pipeline we need to select the model to be used:

The model can either be loaded from a file, as either an ONNX or CZANN file, or selected from the arivis Cloud model store.

ONNX and CZANN models typically only allow semantic segmentation. If the model has been created outside of the ZEISS ecosystem, other formats are commonly used, but these can usually be converted to ONNX, and we provide some scripts to do this, including for PyTorch and Cellpose.
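
As an illustration of what such a conversion involves, a minimal PyTorch-to-ONNX export is sketched below; the two-layer network is a hypothetical stand-in for a real trained model, and the provided conversion scripts may differ in detail:

import torch
import torch.nn as nn

# Hypothetical stand-in for a trained segmentation network
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
model.eval()

dummy = torch.randn(1, 1, 256, 256)  # N, C, H, W matching the expected patch size
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)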

Access to models from arivis Cloud in arivis Pro is managed through access tokens. These access tokens give an arivis Pro installation access to every model in an arivis Cloud account. Therefore, rather than sharing an access token, which would give access to every model linked to an account, it is better to share the specific model with collaborators so that they can access the model by creating their own access token.

arivis Cloud models can be trained for either instance or semantic segmentation. Semantic models can be exported as ONNX or CZANN directly from your arivis Cloud account. Instance models are only accessible through access tokens, as mentioned above. Also, please note that DL segmentation with arivis Cloud instance models requires that you have installed and configured Docker on your system.

In either case, it is also a good idea to install and use the GPU acceleration package if you haven't done so already, as this can significantly speed up DL and ML tasks.

Linking arivis Pro to your arivis Cloud account

As mentioned above, arivis Pro can use ONNX or CZANN files with DL operations, and these files are relatively simple to create, share and use. However, arivis Cloud instance segmentation creates models with additional dependencies which would ordinarily require the installation of additional software libraries and could cause compatibility issues. To avoid such issues, arivis uses Docker containers to store both the model and the dependencies. To facilitate management of these containers we access our models through the arivis Cloud Model Store. 

Creating access tokens

Configuring arivis Pro using access tokens

To give arivis Pro access to your models, we need to provide it with the access token generated previously. We find the arivis Cloud AI Model Store under the Analysis menu.

The first time we access the Model Store we'll be prompted to enter an access token and select a destination folder for the downloaded models.

When we click OK, the Model Store will automatically populate with all the models linked to your arivis Cloud account.

Note that some models may be incompatible due to versioning issues. If preferred, we can simply hide all incompatible models.

Also, the list of available models is dynamically updated after each restart of the application, so if a new model is added there is no need for a new access token.

Creating Pipelines with DL Segmentation

In many ways, creating a pipeline that uses DL Instance or Semantic Segmentation is no different to creating any other pipeline. There is nothing special in the way the objects created by DL segmentation are handled compared to any other pipeline-created segments. The Features available are the same, including Custom Features, and they can be used for any downstream segment processing operations, including tracking, parent-child analysis, and segment morphology operations, to mention just a few. This ability to do both DL segmentation and traditional segmentation, and use the resulting segments all in the same pipeline, with the ability to batch process, is one of the key strengths of the arivis approach.

To create an analysis pipeline that uses DL we start the same way we always do to create pipelines, that is to say we open the Analysis panel, either from the Analysis menu or from the Shortcuts toolbar.

Then, in the Analysis panel we can create a new pipeline by using + New Pipeline, or choose an existing pipeline to modify. 

With the pipeline open, we can set up the Input ROI and any other operations as needed, and add the Deep Learning Segmenter to the pipeline using + Add Operation.

Note that there are two ways to use DL in pipelines.

The Deep Learning Reconstruction can use a model to create new images of the probability maps from that model. These probability maps can be used like any channel in the pipeline. This includes filtering (denoising, morphology, image math, etc.) and segmentation. We can, for example, use a Blob Finder on a probability map of a semantic model to obtain an instance segmentation result. Deep Learning Reconstruction only supports ONNX or CZANN models.

However, the majority of cases will call for the Deep Learning Segmenter, which uses the model to generate objects from the image.

Once we've added the Deep Learning Segmenter to our pipeline, all we need to do is select which model we want to use. If we use the ONNX or CZANN file option, we then click the browse button and select our model file.

If we use arivis Cloud models, we can either select from previously downloaded models, or open the Model Store to download models as needed.

Once we've selected the model, the operation works like any other segmentation operation. We can preview the results, choose an output tag and colour, and the segmented objects can be used in downstream pipeline operations like any other pipeline objects.

 

Exporting Pipelines with DL Instance Segmentation

Exporting the pipeline

As with any pipeline created in arivis, we can use the Analysis Panel menu to open the export window...

...choose a save location...

...and send the .pipeline file to our collaborator.

But if the pipeline requires additional files, like DL models, these need to be shared as well.

An ONNX or CZANN file can just be shared together with the pipeline like any other file. The most practical way to do this may be to copy both the exported pipeline and the model file to a new folder, compress this folder into a ZIP file, and share the ZIP using your preferred file sharing method. Email should work fine, though some email clients have file size limits so other cloud-sharing methods might work better (OneDrive, Dropbox, etc.).

Sharing an arivis Cloud model

Importing the pipeline and model

If we receive a pipeline that includes a DL model from a collaborator, first we must save the pipeline to our workstation. 

If the model was shared as an ONNX or CZANN file together with the pipeline, we simply save that file to the workstation along with the pipeline.

If the model was shared using an arivis Cloud account, we can log in to our arivis Cloud account, accept the invitation to the shared model, and if necessary link the arivis installation to our Cloud account as described above. Once the model appears in the Model Store, we can click the Download link on the right:

Importing the analysis pipeline

Once the models have been added to the store, we can import the pipeline into the analysis panel. Then, we simply click the Menu icon at the top of the analysis panel, select the Import option, and select the pipeline file.

Pipelines using CZANN or ONNX models will need to be re-pointed to the new file location. arivis Cloud pipelines should load the model automatically, and we can then run the pipeline like any other.
