How to create, share and import pipelines that use Deep Learning segmentation.
Link your arivis Pro installation to your arivis Cloud account
Creating a pipeline that includes ZEISS arivis AI segmentation
Exporting pipelines with DL instance segmentation
Importing pipelines with DL segmentation
arivis has included Deep Learning inference operations in the Analysis panel since arivis Vision4D 3.6, and as of arivis Pro 4.2 pipelines can also run DL instance segmentation. Like all pipelines, those that include DL instance segmentation can be shared for use on other systems and with other images, and can be run in batch. However, DL pipelines are tied to the specific model on which they are based: the model must be exported alongside the pipeline and linked back to it before execution.
There are two ways to select a model for use in a pipeline. Whether we are using Deep Learning Reconstruction or the Deep Learning Segmenter, once the operation is added to the pipeline we need to select the model to be used:
The model can either be loaded from a file, as an ONNX or CZANN file, or selected from the arivis Cloud model store.
ONNX and CZANN models typically only allow semantic segmentation. Models created outside of the ZEISS ecosystem commonly use other formats, but these can usually be converted to ONNX, and we provide some scripts to do this, including for PyTorch and Cellpose.
Access to arivis Cloud models from arivis Pro is managed through access tokens. An access token gives an arivis Pro installation access to every model in the linked arivis Cloud account. For that reason, rather than sharing an access token, which would expose every model linked to the account, it is better to share the specific model with collaborators so that they can access it by creating their own access token.
arivis Cloud models can be trained for either instance or semantic segmentation. Semantic models can be exported as ONNX or CZANN files directly from your arivis Cloud account. Instance models are only accessible through access tokens, as mentioned above. Also, please note that running arivis Cloud instance models requires Docker to be installed and configured on your system.
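A quick way to confirm the Docker requirement is met is to check that the `docker` executable is on the PATH and that the daemon responds. The sketch below is a generic check, not part of arivis Pro; the status strings are our own.

```python
# Hypothetical sketch: verifying that Docker is installed and the daemon is
# reachable before attempting to run arivis Cloud instance segmentation models.
import shutil
import subprocess

def docker_status() -> str:
    """Return a short description of the local Docker setup."""
    if shutil.which("docker") is None:
        return "Docker is not installed"
    # `docker info` only succeeds when the daemon is running.
    result = subprocess.run(["docker", "info"], capture_output=True, text=True)
    if result.returncode == 0:
        return "Docker daemon is running"
    return "Docker installed but daemon not reachable"

print(docker_status())
```

If the daemon is not reachable, starting Docker Desktop (or the Docker service) before running the pipeline usually resolves it.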
In either case, it is also a good idea to install and use the GPU acceleration package if you haven't done so already, as this can significantly speed up DL and ML tasks.
As mentioned above, arivis Pro can use ONNX or CZANN files with DL operations, and these files are relatively simple to create, share and use. However, arivis Cloud instance segmentation creates models with additional dependencies which would ordinarily require the installation of additional software libraries and could cause compatibility issues. To avoid such issues, arivis uses Docker containers to store both the model and the dependencies. To facilitate management of these containers we access our models through the arivis Cloud Model Store.
To give arivis Pro access to your models, we need to provide it with the access token generated previously. The arivis Cloud AI Model Store can be found under the Analysis menu.
The first time we access the Model Store we'll be prompted to enter an access token and select a destination folder for the downloaded models.
When we click OK, the Model Store will automatically populate with all the models linked to your arivis Cloud account.
Note that some models may be incompatible due to versioning issues. If preferred, we can simply hide all incompatible models.
Also, the list of available models is dynamically updated after each restart of the application, so if a new model is added there is no need to generate a new access token.
In many ways, creating a pipeline that uses DL instance or semantic segmentation is no different from creating any other pipeline. There is nothing special in the way the objects created by DL segmentation are handled compared to any other pipeline-created segments. The features available are the same, including Custom Features, and they can be used for any downstream segment processing operations, including tracking, parent-child analysis, and segment morphology operations, to mention just a few. This ability to combine DL segmentation and traditional segmentation, and to use the resulting segments all in the same pipeline with the ability to batch process, is one of the key strengths of the arivis approach.
To create an analysis pipeline that uses DL we start the same way we always do to create pipelines, that is to say we open the Analysis panel, either from the Analysis menu or from the Shortcuts toolbar.
Then, in the Analysis panel we can create a new pipeline by using + New Pipeline, or choose an existing pipeline to modify.
With the pipeline open, we can set up the Input ROI and any other operations as needed, and add the Deep Learning Segmenter to the pipeline using + Add Operation.
Note that there are two ways to use DL in pipelines.
The Deep Learning Reconstruction can use a model to create new images from that model's probability maps. These probability maps can be used like any other channel in the pipeline. This includes filtering (denoising, morphology, image math, etc.) and segmentation. We can, for example, use a Blob Finder on the probability map of a semantic model to obtain an instance segmentation result. Deep Learning Reconstruction only supports ONNX or CZANN models.
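To make the idea concrete, the sketch below shows, outside of arivis Pro, how a semantic probability map can be turned into instance labels by thresholding and connected-component labelling, which is conceptually what running a Blob Finder on the reconstructed channel achieves. The array values and threshold are invented for the example.

```python
# Illustrative sketch (not arivis code): turning a semantic probability map
# into instance labels via thresholding and connected-component labelling.
import numpy as np
from scipy import ndimage

# Synthetic "probability map" with two bright blobs (values in [0, 1]).
prob = np.zeros((64, 64))
prob[10:20, 10:20] = 0.9
prob[40:50, 40:50] = 0.8

mask = prob > 0.5                        # semantic mask (foreground yes/no)
labels, n_objects = ndimage.label(mask)  # instances via connectivity

print(n_objects)  # -> 2 separate objects
```

Each connected region receives its own integer label, so downstream steps can measure and filter the objects individually, just as pipeline segments can.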
However, the majority of cases will call for the Deep Learning Segmenter, which uses the model to generate objects from the image.
Once we've added the Deep Learning Segmenter to our pipeline, all we need to do is select which model we want to use. If we use the ONNX or CZANN file option, we then click the Browse button and select our model file.
If we use arivis Cloud models, we can either select from previously downloaded models, or open the Model Store to download models as needed.
Once we've selected the model, the operation works like any other segmentation operation. We can preview the results, choose an output tag and colour, and the segmented objects can be used in downstream pipeline operations like any other pipeline objects.
If we receive a pipeline that includes a DL model from a collaborator, first we must save the pipeline to our workstation.
If the model was shared as an ONNX or CZANN file together with the pipeline, we simply save that file to the workstation along with the pipeline.
If the model was shared using an arivis Cloud account, we can log in to our arivis Cloud account, accept the invitation to the shared model, and if necessary link our arivis Pro installation to our Cloud account as described above. Once the model appears in the Model Store, we can click the Download link on the right: