ZEISS Knowledge Base

Application and Workflow Toolkits

Bio Applications

This toolkit offers functionality for setting up image analysis for specific analysis scenarios, for example counting the number of cells in an image. Each individual analysis scenario is a module with its own application.

Running Bio Applications

Analyzing in Batch Mode

You can also set up the analysis of multiple images with Bio Applications and their settings in Batch Mode, see Running Bio Applications in Batch Mode.

When you run a Bio Application, the image analysis defined in the specific setting is applied to the image.

Prerequisites:

  • You have opened the image you want to analyze.
  • You have created a setting for the Bio Application you want to use, see Creating a General Bio Application Setting.

Procedure:

  1. On the Analysis tab, in the Bio Applications tool, click the application.
  2. For Setting, select the setting you have created for this application.
  3. Click Run Analysis.
     The analysis runs with the selected application and a result screen opens, showing the analyzed image, charts, and a table.
  4. Click Finish to close the result screen.

You have successfully run your Bio Application. The analysis results are saved in the image file and can be displayed and exported with the Bio Applications view, see Bio Applications View.

Creating a Setting for Cell Counting

File Types

Bio Applications can only be applied to suitable image files. If you try to use them with an unsuitable image type, a message is displayed. The following types of images are not supported for this application:

  • Z-Stacks
  • Unprocessed Airyscan data
  • Unprocessed Apotome data
  • Multi-phase images
  • Multi-block images
  • PSF images
Prerequisites:

  • You have opened an image which is typical for the analysis scenario.
  • You have set up a general setting for your application, see Creating a General Bio Application Setting.

Procedure:

  1. On the Analysis tab, in the Bio Applications tool, click Cell Counting.
     The parameters are displayed.
  2. Select your created setting as well as your image and click Create Setting.
     The Bio Applications wizard opens.
  3. Enter a Name for the objects you want to segment (e.g. Nuclei).
  4. In the channel control, select the channel which contains the necessary information for the analysis (the channel in which the cell nuclei have been stained). For multi-channel images that contain one of the channels DAPI, Hoechst, To-Pro-3, or HCS Nuclear Mask Deep Red, this channel is preselected automatically.
  5. Select a Color for the resulting masks.
  6. Select Manual if you want to define the objects manually by clicking in the image; otherwise, select Automatic. In Manual mode, you can click the objects you want to segment and the lower and upper threshold values are adapted automatically. Alternatively, you can directly enter a Threshold (lowest and highest value) or use the Histogram to set the pixel values used by the segmentation. For color images, you can set the threshold for each color channel.
  7. If you want to use machine learning for segmentation, click Semantic or Instance for semantic or instance segmentation, respectively. Semantic segmentation requires that you have installed the 3rd party Python Tools during the installation of ZEN; instance segmentation requires that the Docker Desktop software is running.
     A dropdown to select your model is displayed. It contains your own trained and imported networks if they have been trained on one channel. For semantic segmentation, a default neural network for the segmentation of fluorescently labeled nuclei is provided.
  8. In the AI Model dropdown, select the model you want to use for segmentation.
  9. Select the Model Class that should be used for segmentation and set a Min. Confidence. For instance segmentation, you also have to select the AI Model Version.
  10. If you want to perform a rolling ball background subtraction, click On.
  11. Set the lowest and highest value for the Area and Circularity measurements to filter out unwanted objects.
      A result preview is displayed in the Image View. Unwanted objects are displayed in white.
  12. If you want to include certain objects manually, click + next to Pick to Include and then click the object in the image.
      The values used for the Area and Circularity filters are updated to include the selected object and any other objects that fulfill the newly adapted filter criteria.
  13. Click Finish.

You have created a setting for Cell Counting which can now be used to analyze images by clicking Run Analysis, see Running Bio Applications.
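The Area and Circularity filters in the steps above keep only objects whose measurements fall inside the set ranges. ZEN's exact feature definitions are listed under Measurement Features; the sketch below is a minimal illustration of such a range filter, assuming the common circularity definition 4πA/P² (1.0 for an ideal circle) and hypothetical object records:

```python
import math

def circularity(area, perimeter):
    # Common roundness definition: 4*pi*A / P^2 (1.0 for an ideal circle).
    return 4 * math.pi * area / (perimeter ** 2)

def filter_objects(objects, area_range, circ_range):
    """Keep objects whose Area and Circularity fall inside both ranges.

    `objects` is a list of dicts with 'area' and 'perimeter' (pixel units).
    """
    lo_a, hi_a = area_range
    lo_c, hi_c = circ_range
    kept = []
    for obj in objects:
        c = circularity(obj["area"], obj["perimeter"])
        if lo_a <= obj["area"] <= hi_a and lo_c <= c <= hi_c:
            kept.append(obj)
    return kept

# Example: a round nucleus-like object and an elongated debris object.
objects = [
    {"name": "nucleus", "area": 314.0, "perimeter": 63.0},  # near-circular
    {"name": "debris",  "area": 40.0,  "perimeter": 80.0},  # thin/elongated
]
kept = filter_objects(objects, area_range=(100, 1000), circ_range=(0.8, 1.2))
```

Picking an object in the image (Pick to Include) corresponds to widening `area_range` and `circ_range` just enough to cover that object's measured values.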

Creating a Setting for Confluency

File Types

Bio Applications can only be applied to suitable image files. If you try to use them with an unsuitable image type, a message is displayed. The following types of images are not supported for this application:

  • Z-Stacks
  • Unprocessed Airyscan data
  • Unprocessed Apotome data
  • Multi-phase images
  • Multi-block images
  • PSF images
Prerequisites:

  • You have opened an image which is typical for the analysis scenario.
  • You have set up a general setting for your application, see Creating a General Bio Application Setting.

Procedure:

  1. On the Analysis tab, in the Bio Applications tool, click Confluency.
     The parameters are displayed.
  2. Select your created setting and click Create Setting.
     The Bio Applications wizard opens.
  3. Enter a Name for the class you want to segment.
  4. In the channel control, select the channel which contains the necessary information for the analysis. For multi-channel images that contain one of the channels Bright, Oblique, DIC, or PGC, this channel is preselected automatically.
  5. Select a Color for the resulting masks.
  6. By default, the Segmentation Type is set to Manual to use a manual variance-based segmentation.
  7. Select whether you want to segment the Structure in your image or the Background (this inverts the applied threshold values).
  8. Define the Threshold for the variance calculated between one pixel and its neighboring pixels and adjust the Kernel Size for this calculation.
  9. If you want to use machine learning for segmentation, click AI-based (this requires that you have installed the 3rd party Python Tools during the installation of ZEN).
     A dropdown to select your model is displayed. It contains your own trained and imported networks if they have been trained on one channel.
  10. In the AI Model dropdown, select the model you want to use for segmentation.
  11. Select the Model Class that should be used for segmentation and set a Min. Confidence.
  12. Set the Min. Object Size to define the minimum area in pixels that an object must have to be segmented.
  13. Activate Fill all holes if you want to close the holes in the segmented masks; otherwise, leave it deactivated.
  14. Set the Min. Hole Size to define the minimum area in pixels of the holes in the detected objects.
  15. Click Finish.

You have created a setting for Confluency which can now be used to analyze images by clicking Run Analysis, see Running Bio Applications.
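The variance-based Manual segmentation and the confluency readout described above can be sketched as follows: the local variance of each pixel within the kernel is compared against the threshold, and confluency is the segmented fraction of the image. This is a minimal numpy sketch, not ZEN's implementation; the border handling and the exact variance estimator are assumptions:

```python
import numpy as np

def local_variance(img, kernel_size=3):
    """Per-pixel variance over a kernel_size x kernel_size neighborhood
    (border pixels use a reflected edge)."""
    pad = kernel_size // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    # Stack every offset of the kernel window along a new axis.
    windows = np.stack([
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(kernel_size)
        for dx in range(kernel_size)
    ])
    return windows.var(axis=0)

def confluency(img, threshold, kernel_size=3, segment_structure=True):
    """Percentage of the image area segmented as structure.

    segment_structure=True keeps pixels whose local variance is above the
    threshold (textured cells); False keeps the smooth background instead.
    """
    var = local_variance(img, kernel_size)
    mask = var > threshold if segment_structure else var < threshold
    return 100.0 * mask.mean()

# Toy image: left half textured ("cells"), right half flat background.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, :16] = rng.normal(100, 20, size=(32, 16))  # high local variance
pct = confluency(img, threshold=10.0)  # roughly half the image is confluent
```

The Structure/Background switch in the wizard corresponds to `segment_structure` here: it flips which side of the threshold is kept.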

Creating a Setting for Gene- and Protein Expression

File Types

Bio Applications can only be applied to suitable image files. If you try to use them with an unsuitable image type, a message is displayed. The following types of images are not supported for this application:

  • Z-Stacks
  • Unprocessed Airyscan data
  • Unprocessed Apotome data
  • Multi-phase images
  • Multi-block images
  • PSF images
Prerequisites:

  • You have opened an image with at least two channels that is typical for the analysis scenario.
  • You have set up a general setting for your application, see Creating a General Bio Application Setting.

Procedure:

  1. On the Analysis tab, in the Bio Applications tool, click Gene- and Protein Expression.
     The parameters are displayed.
  2. Select your created setting as well as your image and click Create Setting.
     The Bio Applications wizard opens.
  3. Enter a Name for the segmented objects (the nuclei), select the Channel in which the nuclei are stained as well as a Color for the resulting masks.
  4. Select Manual if you want to define the objects manually by clicking in the image; otherwise, select Automatic. In Manual mode, you can click the objects you want to segment and the lower and upper threshold values are adapted automatically. Alternatively, you can directly enter a Threshold (lowest and highest value) or use the Histogram to set the pixel values used by the segmentation. For color images, you can set the threshold for each color channel.
  5. If you want to use machine learning for segmentation, click Semantic or Instance for semantic or instance segmentation, respectively. Semantic segmentation requires that you have installed the 3rd party Python Tools during the installation of ZEN; instance segmentation requires that the Docker Desktop software is running.
     A dropdown to select your model is displayed. It contains your own trained and imported networks if they have been trained on one channel. For semantic segmentation, a default neural network for the segmentation of fluorescently labeled nuclei is provided.
  6. In the AI Model dropdown, select the model you want to use for segmentation.
  7. Select the Model Class that should be used for segmentation and set a Min. Confidence. For instance segmentation, you also have to select the AI Model Version.
  8. If you want to perform a rolling ball background subtraction, click On.
  9. Set the lowest and highest value for the Area and Circularity measurements to filter out unwanted objects.
     A result preview is displayed in the Image View. Unwanted objects are displayed in white.
  10. If you want to include certain objects manually, click + next to Pick to Include and then click the object in the image.
      The values used for the Area and Circularity filters are updated to include the selected object and any other objects that fulfill the newly adapted filter criteria.
  11. Set the distance between the boundary of the masks of the segmented nuclei and the rings where the transfection is measured. A negative value means that the ring begins inside the nucleus; a positive value creates a ring that starts at a distance from the boundary of the nuclei masks.
  12. Set the width of the rings displayed in the image.
  13. Click Next.
      The Gene Expression step opens.
  14. Enter a Name for the transfected cells, select the Channel in which you want to measure the transfection as well as a Color for the resulting masks.
  15. Set the lowest and highest value for the mean intensity of the transfection channel for which cells are counted as "positive".
  16. If you want to include certain intensities manually, click + next to Pick to Include and then click the area in the image.
      The values used for Intensity Mean are updated accordingly.
  17. Click Finish.

You have created a setting for Gene- and Protein Expression which can now be used to analyze images by clicking Run Analysis, see Running Bio Applications. As a result, this setting calculates the transfection efficiency, which can be displayed in the Bio Applications view after the analysis has run.
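As a sketch of the classification above: a cell counts as "positive" when the mean intensity measured in its ring falls inside the set range, and the transfection efficiency is the share of positive cells. The per-cell intensity values below are hypothetical:

```python
def transfection_efficiency(ring_mean_intensities, low, high):
    """Percentage of cells whose ring mean intensity in the transfection
    channel falls inside [low, high] - i.e. the "positive" cells."""
    positive = [m for m in ring_mean_intensities if low <= m <= high]
    return 100.0 * len(positive) / len(ring_mean_intensities)

# Hypothetical per-cell mean intensities measured in the ring regions:
means = [12.0, 85.0, 160.0, 40.0, 210.0]
eff = transfection_efficiency(means, low=50.0, high=255.0)
```

Here three of the five cells (85, 160, 210) fall inside the range, giving an efficiency of 60%.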

Creating a Setting for Translocation

File Types

Bio Applications can only be applied to suitable image files. If you try to use them with an unsuitable image type, a message is displayed. The following types of images are not supported for this application:

  • Z-Stacks
  • Unprocessed Airyscan data
  • Unprocessed Apotome data
  • Multi-phase images
  • Multi-block images
  • PSF images
Prerequisites:

  • You have opened a multichannel image which is typical for the analysis scenario.
  • You have set up a general setting for your application, see Creating a General Bio Application Setting.

Procedure:

  1. On the Analysis tab, in the Bio Applications tool, click Translocation.
     The parameters are displayed.
  2. Select your created setting as well as your image and click Create Setting.
     The Bio Applications wizard opens.
  3. Enter a Name for the segmented objects (the nuclei), select the Channel in which the nuclei are stained as well as a Color for the resulting masks.
  4. Select Manual if you want to define the objects manually by clicking in the image; otherwise, select Automatic. In Manual mode, you can click the objects you want to segment and the lower and upper threshold values are adapted automatically. Alternatively, you can directly enter a Threshold (lowest and highest value) or use the Histogram to set the pixel values used by the segmentation. For color images, you can set the threshold for each color channel.
  5. If you want to use machine learning for segmentation, click Semantic or Instance for semantic or instance segmentation, respectively. Semantic segmentation requires that you have installed the 3rd party Python Tools during the installation of ZEN; instance segmentation requires that the Docker Desktop software is running.
     A dropdown to select your model is displayed. It contains your own trained and imported networks if they have been trained on one channel. For semantic segmentation, a default neural network for the segmentation of fluorescently labeled nuclei is provided.
  6. In the AI Model dropdown, select the model you want to use for segmentation.
  7. Select the Model Class that should be used for segmentation and set a Min. Confidence. For instance segmentation, you also have to select the AI Model Version.
  8. If you want to perform a rolling ball background subtraction, click On.
  9. Set the lowest and highest value for the Area and Circularity measurements to filter out unwanted objects.
     A result preview is displayed in the Image View. Unwanted objects are displayed in white.
  10. If you want to include certain objects manually, click + next to Pick to Include and then click the object in the image.
      The values used for the Area and Circularity filters are updated to include the selected object and any other objects that fulfill the newly adapted filter criteria.
  11. Select the Translocation Channel.
  12. Set the distance between the boundary of the masks of the segmented nuclei and the rings where the translocation is measured. A positive value creates a ring that starts at a distance from the boundary of the nuclei masks.
  13. Set the width of the rings.
  14. Click Finish.

You have created a setting for Translocation which can now be used to analyze images by clicking Run Analysis, see Running Bio Applications.

Exporting Results of the Bio Applications Analysis

Prerequisites:

  • You have analyzed an image with a Bio Application, see Running Bio Applications or Running Bio Applications in Batch Mode.
  • You have opened the results of the analysis in the Bio Applications view.

Procedure:

  1. Set the displayed information (chart, chart axis, table, etc.) in the Bio Applications view according to your needs. The export uses the currently displayed information.
  2. On the Export tab, activate the checkboxes for all the information you want to export and select the corresponding format for each with the dropdown lists.
  3. Click Export.
     A file browser opens.
  4. Name the file for export, navigate to the folder where the results should be exported, and click Save.

Bio Applications View

This view is only available for images which have been analyzed by a Bio Application and if you have the license for the Bio Applications toolkit. It shows the result of the image analysis conducted by the Bio Application, with a table and plot section displaying the result data of the analysis. The information displayed in the plots and table is specific to each application.

1. Image View: Displays the currently selected image of your analyzed image document as well as the masking objects of the analysis (depending on the settings made in the Objects tab).

2. Result Chart: Displays the chart for the analysis results of the Bio Application. The chart type and the measurement features displayed on the axes can be set in the Chart view options tab.

3. Result Table: Displays the table with the results of the analysis.

4. View Options: Displays general view options as well as Bio Application specific options.

Export Tab

This tab enables you to export various results of your Bio Applications analysis.

Parameter

Description

Image

Activated: Selects the current result image for export with the format selected in the dropdown.

Table

Activated: Selects the currently displayed table for export with the format selected in the dropdown.

Chart

Activated: Selects the currently displayed chart for export with the format selected in the dropdown. The resolution for the exported charts is 300 dpi.

Width (pixel)

Sets the width for the exported chart in pixels.

Height (pixel)

Sets the height for the exported chart in pixels.

Maintain aspect ratio

Activated: A change of the width or height automatically results in a change of the other factor to maintain the aspect ratio of the plot currently displayed in the Bio Applications result view.
Deactivated: The aspect ratio is not maintained. Width and height can be set independently.

Processing Info

Activated: Selects the current processing information for export with the format selected in the dropdown.

Export

Opens the file browser to export all the selected result documents.

Bio Applications Wizard

In this wizard you define the settings for your Bio Applications. It shows your image, the settings for the Bio Application, basic view options as in the 2D view, and an additional Legends window. This window displays information about the individual regions in the image and can be toggled on and off with the right-click menu entry Regions Legend.

The options and parameters shown here in the wizard are, for the most part, Bio Application specific. The following parameters are generally available:

Parameter

Description

Name

Sets the name for the class or objects analyzed by this Bio Application.

Channel

Selects the image channel for the analysis.

Color

Selects a color for the resulting masks. Note: Use a different color than the channel color to be able to differentiate between mask and measurement signal.

Finish

Saves the changes and closes the wizard.

Cancel

Closes the wizard without saving the changes.

Confluency Specific Settings

Parameter

Description

Segmentation Type

Selects the type of segmentation.

Manual

Uses a manual threshold by clicking on the regions in the image that you want to segment, or by using the Threshold control displayed below.

Semantic

Uses machine learning to automatically segment the structures. The use of AI-segmentation requires an installation of the 3rd party Python Tools during the installation of ZEN.

Segm. Area

Only visible if Manual is selected.
Selects which area is segmented.

Structure

Segments the structure(s) in the image.

Background

Segments only the background of the image.

Threshold

Only visible if Manual is selected.
Sets the threshold for the variance calculation.
If Structure is selected, it sets the threshold for the minimum variance of the structures in your image, i.e. all areas which have a variance value higher than the specified threshold will be segmented.
If Background is selected, it sets the threshold for the maximum variance of the background, i.e. all areas which have a variance value lower than the specified threshold will be segmented.

Kernel Size

Only visible if Manual is selected.
Sets the kernel size to calculate the variance value of one pixel with the neighboring pixels.

AI Model

Only visible if AI-based is selected.
Selects the model for segmentation from a dropdown which includes your own trained and imported networks.

Model Class

Only visible if AI-based is selected.
Selects the model class used for segmentation.

Min. Confidence

Only visible if AI-based is selected.
Sets the minimum confidence in % for the prediction of every object.

Min. Object Size

Sets the minimum size in pixels that an object must have to be segmented.

Fill all Holes

On

Fills all holes in the segmented objects irrespective of their size.

Off

Fills holes in the segmented objects only if they are smaller than the specified Min. Hole Size.

Min. Hole Size

Sets the minimum area in pixels for the holes in the detected objects. The input is synchronized with Min. Object Size, which cannot be smaller than Min. Hole Size.

Gene- and Protein Expression Specific Settings

For Gene- and Protein Expression you have to specify settings for the nuclei segmentation as well as for quantification of the gene expression in the respective step.

Parameter

Description

Segm. Method

Only visible in the Nuclei step.
Selects the method for segmentation.

Automatic

Uses threshold values that are determined automatically from the histogram based on the Otsu method. For all possible threshold values, the Otsu method calculates the variance of intensities on each side of the respective threshold and selects the threshold that minimizes the weighted sum of the background and foreground variances.
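The Otsu computation described above can be sketched as a histogram scan: for each candidate threshold, compute the weighted between-class variance and keep the maximizer (equivalent to minimizing the summed within-class variances). A minimal sketch, not ZEN's implementation:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold that maximizes the between-class variance
    (equivalently, minimizes the weighted sum of the intensity variances
    on both sides), as in Otsu's method."""
    hist, edges = np.histogram(img, bins=nbins)
    probs = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2

    best_t, best_between = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = probs[:i].sum(), probs[i:].sum()  # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (probs[:i] * centers[:i]).sum() / w0  # class means
        mu1 = (probs[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if between > best_between:
            best_between, best_t = between, centers[i]
    return best_t

# Bimodal toy image: dark background around 20, bright nuclei around 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(20, 5, 500), rng.normal(200, 10, 100)])
t = otsu_threshold(img)  # lands between the two intensity populations
```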

Manual

Sets the threshold manually by clicking on the regions in the image that you want to segment, or by using the Threshold control displayed below.

Semantic

Uses semantic segmentation based on machine learning to automatically segment (fluorescently labeled) cell nuclei. This requires an installation of the 3rd party Python Tools during the installation of ZEN.

Instance

Uses an AI model for instance segmentation to automatically segment (fluorescently labeled) cell nuclei. The AI models for instance segmentation need the software Docker Desktop to run.

AI Model

Only visible if Semantic or Instance is selected.
Selects the model for segmentation from a dropdown which includes your own trained and imported networks as well as a default neural network for the segmentation of (fluorescently labeled) nuclei. Note that you can only use models trained on a single channel.

AI Model Version

Only visible if Instance is selected.
Selects the model version you want to use.

Model Class

Only visible if Semantic or Instance is selected.
Selects the model class used for segmentation.

Min. Confidence

Only visible if Semantic or Instance is selected.
Sets the minimum confidence in % for the prediction of every object.

Threshold

Only visible in the Nuclei step and if Manual is selected.
Sets the threshold for the pixel intensity used by the segmentation. For an RGB image you can set the threshold for each color channel individually.

Low

Defines the lowest pixel intensity considered for the segmentation.

High

Defines the highest pixel intensity considered for segmentation.

Undo

Undoes the last change.

Pick to Segment

Only visible in the Nuclei step and if Manual is selected.

+

Enables you to expand the currently segmented regions by the gray values/colors of the objects subsequently clicked on.

-

Enables you to reduce the currently segmented regions by the gray values/colors of the objects subsequently clicked on.

Histogram

Only visible in the Nuclei step and if Manual is selected.
In the histogram you can change the lower and upper threshold value by dragging the lower or upper adjustment handle or shift the entire highlighted area between the lower and upper threshold value.

BG Subtraction

On

Applies a rolling ball background subtraction to the image.

Off

Applies no background subtraction to the image.
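Rolling ball background subtraction estimates a smooth background by conceptually rolling a ball under the intensity surface and subtracting it. A common flat-kernel approximation is a grayscale opening (minimum filter followed by maximum filter); the sketch below uses that approximation and is not ZEN's actual implementation:

```python
import numpy as np

def _sliding(img, size, reduce_fn):
    # size x size min/max filter with a reflected border.
    pad = size // 2
    p = np.pad(img, pad, mode="reflect")
    stacks = np.stack([
        p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(size)
        for dx in range(size)
    ])
    return reduce_fn(stacks, axis=0)

def rolling_ball_subtract(img, radius=15):
    """Approximate rolling-ball background subtraction: a grayscale
    opening (erosion, then dilation, with a flat structuring element)
    estimates the background, which is then subtracted."""
    size = 2 * radius + 1
    background = _sliding(_sliding(img, size, np.min), size, np.max)
    return img - background

# Flat background of 10 with one small bright "spot" of height 90:
img = np.full((40, 40), 10.0)
img[20, 20] += 90.0
corrected = rolling_ball_subtract(img, radius=5)
```

Structures smaller than the kernel survive the subtraction unchanged, while any smooth offset that the opening captures as background is removed.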

Area

Only available in the Nuclei step.
Sets the lowest and highest value for the area of the objects. For more information, see the list of Measurement Features.

Circularity

Only available in the Nuclei step.
Sets the lowest and highest value for the roundness of the objects. For more information, see the list of Measurement Features.

Pick to Include
+

Enables you to expand the values employed for the region filters (area and circularity) and the mean intensity in the second step by clicking on objects in the image.

Ring Distance

Only available in the Nuclei step.
Sets the distance between the inner border of the ring and the border of the segmented nuclei.

Ring Width

Only available in the Nuclei step.
Sets the width of the ring where the gene expression is measured.

Intensity Mean

Only available in the Gene- and Protein Expression step.
Sets the lowest and highest value for the mean intensity measurement of the selected channel. For more information, see the list of Measurement Features. If the mean intensity of a cell falls into that defined range, it is considered as a positive cell.

Automated Spot Detection Specific Settings

For Automated Spot Detection you have to specify settings for the nuclei segmentation as well as the spot detection itself in the respective step.

Parameter

Description

Segm. Method

Selects the method for segmentation.

Automatic

Uses threshold values that are determined automatically from the histogram based on the Otsu method. For all possible threshold values, the Otsu method calculates the variance of intensities on each side of the respective threshold and selects the threshold that minimizes the weighted sum of the background and foreground variances.

Manual

Sets the threshold manually by clicking on the regions in the image that you want to segment, or by using the Threshold control displayed below.

Semantic

Uses semantic segmentation based on machine learning to automatically segment (fluorescently labeled) cell nuclei. This requires an installation of the 3rd party Python Tools during the installation of ZEN.

Instance

Uses an AI model for instance segmentation to automatically segment (fluorescently labeled) cell nuclei. The AI models for instance segmentation need the software Docker Desktop to run.

AI Model

Only visible if Semantic or Instance is selected.
Selects the model for segmentation from a dropdown which includes your own trained and imported networks as well as a default neural network for the segmentation of fluorescently labeled nuclei. Note that you can only use models trained on a single channel.

AI Model Version

Only visible if Instance is selected.
Selects the model version you want to use.

Model Class

Only visible if Semantic or Instance is selected.
Selects the model class used for segmentation.

Min. Confidence

Only visible if Semantic or Instance is selected.
Sets the minimum confidence in % for the prediction of every object.

Threshold

Only visible if Manual is selected.
Sets the threshold for the pixel intensity used by the segmentation. For an RGB image you can set the threshold for each color channel individually.

Low

Defines the lowest pixel intensity considered for the segmentation.

High

Defines the highest pixel intensity considered for segmentation.

Undo

Undoes the last change.

Pick to Segment

Only visible if Manual is selected.

+

Enables you to expand the currently segmented regions by the gray values/colors of the objects subsequently clicked on.

-

Enables you to reduce the currently segmented regions by the gray values/colors of the objects subsequently clicked on.

Histogram

Only visible if Manual is selected.
In the histogram you can change the lower and upper threshold value by dragging the lower or upper adjustment handle or shift the entire highlighted area between the lower and upper threshold value.

BG Subtraction

On

Applies a rolling ball background subtraction to the image.

Off

Applies no background subtraction to the image.

Area

Sets the lowest and highest value for the area of the objects. For more information, see the list of Measurement Features.

Circularity

Sets the lowest and highest value for the roundness of the objects. For more information, see the list of Measurement Features.

Pick to Include
+

Enables you to expand the values employed for the region filters (area and circularity) by clicking on objects in the image.

Ring Distance

Only available in the Nuclei step.
Sets the distance between the inner border of the ring and the border of the segmented nuclei.

Ring Width

Only available in the Nuclei step.
Sets the width of the ring where the spots are detected.
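As a rough illustration of the detection step this application performs, spots can be found as local intensity maxima above a minimum brightness. This simplified stand-in (strict 3x3 maxima over the whole image) is an assumption for illustration, not the toolkit's actual algorithm:

```python
import numpy as np

def detect_spots(img, min_intensity):
    """Return (row, col) positions of pixels that are strict local maxima
    of their 3x3 neighborhood and at least `min_intensity` bright."""
    # Pad with -inf so border pixels compare only against real neighbors.
    pad = np.pad(img, 1, mode="constant", constant_values=-np.inf)
    spots = []
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            window = pad[r:r + 3, c:c + 3]  # 3x3 neighborhood of (r, c)
            center = img[r, c]
            if (center >= min_intensity
                    and center == window.max()
                    and (window == center).sum() == 1):  # strict maximum
                spots.append((r, c))
    return spots

# Two bright spots on a dim background.
img = np.full((16, 16), 5.0)
img[3, 4] = 120.0
img[10, 12] = 95.0
spots = detect_spots(img, min_intensity=50.0)
```

In the application, such a detector would be evaluated only inside the ring regions defined by Ring Distance and Ring Width.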

Translocation Specific Settings

Parameter

Description

Segm. Method

Selects the method for segmentation.

Automatic

Uses threshold values that are determined automatically from the histogram based on the Otsu method. For all possible threshold values, the Otsu method calculates the variance of intensities on each side of the respective threshold and selects the threshold that minimizes the weighted sum of the background and foreground variances.

Manual

Sets the threshold manually by clicking on the regions in the image that you want to segment, or by using the Threshold control displayed below.

Semantic

Uses semantic segmentation based on machine learning to automatically segment (fluorescently labeled) cell nuclei. This requires an installation of the 3rd party Python Tools during the installation of ZEN.

Instance

Uses an AI model for instance segmentation to automatically segment (fluorescently labeled) cell nuclei. The AI models for instance segmentation need the software Docker Desktop to run.

AI Model

Only visible if Semantic or Instance is selected.
Selects the model for segmentation from a dropdown which includes your own trained and imported networks as well as a default neural network for the segmentation of (fluorescently labeled) nuclei. Note that you can only use models trained on a single channel.

AI Model Version

Only visible if Instance is selected.
Selects the model version you want to use.

Model Class

Only visible if Semantic or Instance is selected.
Selects the model class used for segmentation.

Min. Confidence

Only visible if Semantic or Instance is selected.
Sets the minimum confidence in % for the prediction of every object.

Threshold

Only visible if Manual is selected.
Sets the threshold for the pixel intensity used by the segmentation. For an RGB image you can set the threshold for each color channel individually.

Low

Defines the lowest pixel intensity considered for the segmentation.

High

Defines the highest pixel intensity considered for segmentation.

Undo

Undoes the last change.

Pick to Segment

Only visible if Manual is selected.

+

Enables you to expand the currently segmented regions by the gray values/colors of the objects subsequently clicked on.

-

Enables you to reduce the currently segmented regions by the gray values/colors of the objects subsequently clicked on.

Histogram

Only visible if Manual is selected.
In the histogram you can change the lower and upper threshold value by dragging the lower or upper adjustment handle or shift the entire highlighted area between the lower and upper threshold value.

BG Subtraction

Only visible if Automatic or Manual is selected.

On

Applies a rolling ball background subtraction to the image.

Off

Applies no background subtraction to the image.

Area

Sets the lowest and highest value for the area of the objects. For more information, see the list of Measurement Features.

Circularity

Sets the lowest and highest value for the roundness of the objects. For more information, see the list of Measurement Features.

Pick to Include
+

Enables you to expand the values employed for the region filters (area and circularity) and the mean intensity in the second step by clicking on objects in the image.

Translocation Channel

Selects the channel to quantify translocation.

Ring Distance

Sets the distance between the inner border of the ring and the border of the segmented nuclei.

Ring Width

Sets the width of the ring where the translocation is measured.

Macro Environment

The acronym OAD (Open Application Development) describes both the OAD platform in ZEN and the process of developing applications on it. The platform enables customers to extend the functionality of ZEN in a flexible way: typical microscopy workflows can be integrated into the ZEN software. OAD highlights include the Macro Interface, which gives access to the major functionality of ZEN and its objects, and access to external libraries such as the .NET Framework, which significantly enlarges the field of application. OAD uses IronPython 3.4.

This module offers the following components which we regard as main parts for Open Application Development (OAD):

  • Macro Runtime Environment (integrated)
  • Macro Recorder
  • Macro Editor
  • Macro Debugger
  • Macro Interface (Object Library)
  • ImageJ Extension

Basic functionality

All ZEN products (ZEN lite excluded) come with a basic macro functionality which allows you to run existing macros within the software (using the Macro tool).

Within the software you can only run .czmac macro files, which are recorded or saved in the ZEN macro environment. To run your own macros later on, they must be located in the following folder:
…/User/Documents/Carl Zeiss/ZEN/Documents/Macros.
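
The folder restriction above can be checked programmatically, for example before copying macros to several workstations. The following sketch is purely illustrative (the function name and folder argument are hypothetical, not part of the ZEN API): it lists the .czmac files found in a given Macros folder.

```python
from pathlib import Path

def list_czmac_macros(macros_dir):
    """Return the names of .czmac macro files in the given folder, sorted.

    Only .czmac files are considered, mirroring ZEN's restriction that it
    runs only macros recorded or saved in its own macro environment.
    """
    return sorted(p.name for p in Path(macros_dir).glob("*.czmac"))
```

A file without the .czmac extension would simply not appear in the Macro tool's list.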

Licensed functionality

When you have licensed the Macro Environment functionality, you get the following components:

  • Macro Recorder,
  • Macro Editor and
  • Macro Debugger.

In the Right Tool Area you will find the Macro tool. The Macro Editor dialog allows you to create and work with macros, similar to Excel/Word macros. The macro interface is part of the ZEN software and therefore not a separate product. The ImageJ Extension is the first extension for ZEN and is free of charge.

Running an Existing Macro

  1. You run a licensed version of the ZEN software. Note that the macro environment is not available for the free-of-charge version ZEN lite.
  2. You have a macro file available that you want to run in the software.
  1. Copy your macro file into the following folder:
    .../User/My Documents/Carl Zeiss/ZEN/Documents/Macros.
  2. Start the software.
  3. In the Right Tool Area open the Macro tool.
  4. You see your macro in the list under User Documents.
  5. Select your macro.
  6. Click on the Run button.
  1. Your macro is executed. You have successfully played a macro in ZEN.

Recording a Macro

This guide shows how to record a macro of a simple processing workflow.

  1. You have licensed the Macro Environment module.
  2. You are in the Right Tool Area in the Macro tool.
  1. Click on the Record button.
  2. Load a color image via the menu File > Open.
  3. Go to the Processing tab.
  4. Under Method select Edges > Sobel.
  5. Under Method Parameters > Normalization select the entry Clip.
  6. Under Image Parameters set your color image as Input Image.
  7. At the top of the Processing tab, click on the Apply button.
  8. The Sobel method will be applied to your image. The output image will be generated and opened in a new image container.
  9. In the Macro tool click on the Stop button.
  1. You have successfully recorded a macro for a simple processing workflow. The workflow can now be repeated automatically just by playing the recorded macro file.

EM Processing Toolbox

This module offers functionality for the processing of FIB-SEM stacks. This chapter describes how the different functions of the EM Processing Toolbox can be used to process a FIB-SEM stack acquired with SmartFIB in the ZEN software. Note that parts of this special workflow also require functionalities of the ZEN Connect module. To make yourself familiar with this module, see also the documentation for ZEN Connect.

Workflow Overview

This chapter gives an overview of how you can process and align your FIB-SEM stacks. Consider the following workflow:

  1. Sorting of image files:
    With the function Sort SmartFIB Tiffs you can sort your .tiff image files created by SmartFIB according to channel name, number of pixels, image size, and spacing of the images (corresponding to slice thickness). Note that this function only works if the tiff files have their default names (e.g. channel0_slice_0001.tiff or slice_0001.tiff). Do not rename your files before you use this function!
  2. Image import and conversion:
    With the special import functionality, you can import your FIB stack images and save them as a czi for further processing in ZEN. You can import the stack into ZEN (see Importing SmartFIB Tiffs) or into ZEN Connect (see Importing a SmartFIB Stack into ZEN Connect).
  3. Subset image creation:
    If you want to reduce the imported z-stack to a particular z-range and region before applying further processing steps, you can create a subset image of the imported FIB stack with the image processing function Create Image Subset.
  4. Replacing individual slices in the z-stack:
    If your stack contains image slices of poor quality that prevent further processing or segmentation, you can use the function Slices Replacement to replace those slices with their respective predecessor or successor. For more information, see also Replacing Z-Slices in a Z-Stack.
  5. Coarse alignment of the z-stack:
    To minimize shifts in x and y in the z-stack and correct a potential beam shift, you can use the image processing function Coarse Z-Stack Alignment to roughly align the z-stack. For more information, see also Aligning Z-Planes Manually.
  6. Image processing:
    Use the image processing functions to process your image and reduce artifacts. For general information about image processing, see also the chapter for the Image Processing Workflow.
  7. Automatic alignment of the z-stack:
    Make an automatic fine alignment of the planes in your z-stack with the processing function Z-Stack Alignment with ROI. For more information, see also Aligning Z-Planes Automatically (Based on a ROI).
  8. Equalization:
    Correct for variation of overall image intensity from image to image by equalizing the intensity value throughout the entire z-stack with the function Z-Stack Equalization.
  9. Cropping of a specific volume:
    Identify a particular region of interest and cut it out of your stack with the function Cut Out Regions. For more information, see also Cutting Out a Volume from a Z-Stack.
  10. Adding the processed z-stack to the ZEN Connect project:
    Add your processed z-stack into the correlative workspace of the ZEN Connect module. For this you can use the Add to Correlative Workspace button in the toolbar if you have an open project. In the correlative workspace you can then align several z-stacks and images (e.g. an overview image). For detailed information, see also Adding an Open Image to the ZEN Connect Project and Aligning Image Data.
    In order to import the data into a specific session in the ZEN Connect project, right-click on the respective session and select the czi file. The transformation that was applied to the session is then also applied to the newly imported image.
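
Step 1 of the workflow above relies on the default SmartFIB file names. As a rough illustration of the kind of grouping such a sort performs (a hypothetical sketch, not the actual Sort SmartFIB Tiffs implementation), the default names can be parsed and ordered by channel and slice index:

```python
import re

# Default SmartFIB names: "channel0_slice_0001.tiff" or "slice_0001.tiff".
_NAME_RE = re.compile(r"^(?:channel(\d+)_)?slice_(\d+)\.tiff?$", re.IGNORECASE)

def sort_smartfib_tiffs(filenames):
    """Group default-named SmartFIB tiffs by channel, sorted by slice index.

    Returns a dict mapping channel index (0 if no channel prefix) to the
    filenames of that channel in slice order. Files that do not match the
    default naming scheme are ignored, mirroring the requirement that the
    files must not be renamed before sorting.
    """
    channels = {}
    for name in filenames:
        m = _NAME_RE.match(name)
        if not m:
            continue
        channel = int(m.group(1)) if m.group(1) else 0
        channels.setdefault(channel, []).append((int(m.group(2)), name))
    return {c: [n for _, n in sorted(slices)] for c, slices in channels.items()}
```

This also shows why renamed files break the sort: a file that no longer matches the default pattern cannot be assigned to a channel or slice position.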

Replacing Z-Slices in a Z-Stack

With the processing function Slices Replacement you can replace slices of a z-stack with the previous or next slice in the stack.

  1. On the Processing tab, select the method Slices Replacement.
  2. In the Dimensions tab, use the Z-Position slider or input field to select the slice you want to replace.
  3. Click on Replace with Next to replace the selected slice with the next one, or click on Replace with previous if you want to replace the slice with the previous one.
  4. If you want to replace other slices as well, repeat steps 2 and 3 for each slice.
  5. Each slice is listed in the Replacement Table on the left.
  6. Click on Apply.
  1. You have now replaced the selected slice(s) with the previous and/or next one(s).

Replacing multiple slices

If there are several slices in your z-stack that you want to replace with the following or preceding slice, you can also use the following workflow:

  1. On the Processing tab, select the method Slices Replacement.
  2. Open the Gallery view of your z-stack.
  3. In the Gallery view, hold down the Ctrl key and select all the slices you want to replace. As an example, you select slices 34, 37, and 38.
  4. Click on Replace with Next to replace the selected slices with the next ones, or click on Replace with previous if you want to replace the slices with the previous ones.
  5. Each slice is listed in the Replacement Table on the left. As an example, clicking Replace with previous would replace slice 34 with slice 33, and slices 37 and 38 with slice 36.
  6. Click on Apply.
  1. You have now replaced the selected slices with their previous and/or next ones.
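
The replacement logic in the example above can be sketched as follows (a minimal illustration, not the actual Slices Replacement implementation): each selected slice is mapped to the nearest preceding slice that was not itself selected.

```python
def replace_with_previous(selected_slices):
    """Map each selected slice to the nearest preceding unselected slice.

    Models the 'Replace with previous' behavior for runs of selected
    slices; slice numbers are 1-based, as in the Gallery view.
    """
    selected = set(selected_slices)
    mapping = {}
    for s in sorted(selected):
        source = s - 1
        # Skip over other selected slices so a run of selected slices
        # all fall back to the same unselected predecessor.
        while source in selected:
            source -= 1
        if source < 1:
            raise ValueError("no unselected slice precedes slice %d" % s)
        mapping[s] = source
    return mapping
```

For the example above, `replace_with_previous([34, 37, 38])` yields `{34: 33, 37: 36, 38: 36}`.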

Importing SmartFIB Tiffs

In ZEN you can import SmartFIB stacks from Crossbeam microscopes. The orientation of these stacks differs from standard z-stack acquisition, as the acquired images are tilted by a certain angle compared to a z-stack acquired on a light microscope. The import function calculates this tilt from the metadata of the image. If the import finds no metadata concerning the tilt angle and you do not enter a value for the sample angle, a default angle of 54 degrees is used (the default angle between the FIB and SEM columns on the Crossbeam) and the image is rendered with a 90 degree tilt when displayed in a ZEN Connect project. Alternatively, you can enter the angle of your sample during import, e.g. as set during acquisition of the stack with SmartFIB, and the import then calculates the tilt angle based on this sample angle.
During import, the XY offset metadata of the individual slices is ignored by default and only the offset of the first tiff file is considered. This default avoids the creation of a slanted z-stack; however, in certain cases, such as an on-grid-thinning configuration, the XY offsets of the individual slices need to be taken into account.

  1. On the Processing tab, select the image processing function Import SmartFIB TIFFs.
  2. The function settings are displayed in the Parameters tool.
  3. Click on Select Files.
  4. A file browser opens.
  5. Select the images you want to import as FIB stack.
    Note: Select only images with consistent metadata with respect to number of pixels, image size, and spacing of the images (i.e. use the Sort SmartFIB Tiffs function before importing the data).
    Note: To make sure the stack is composed and ordered correctly, pay attention to how the images are sorted in the file browser and to the order in which you select them.
  6. Enter a File name for the FIB stack.
  7. If you import images without scaling information, deactivate the Auto checkbox for XY-Scaling and manually enter the information.
    Note: ZEN currently cannot determine automatically if scaling information is present.
  8. You can set the slice distance manually by deactivating the Auto checkbox for Z-Spacing. This step is optional and should only be done if you have reason to believe the information calculated from the metadata is incorrect. If you leave the Auto checkbox activated, the slice distance is calculated automatically from the information saved in the metadata of the images.
    Note: When you set the slice distance manually, the information in the metadata is ignored.
  9. If you know the angle of your sample, deactivate the Auto checkbox for Sample Angle and enter it. Otherwise the tilt for the image is calculated from the metadata, or, if no information is available in the metadata, the sample angle is set to the default of 54 degrees (the default angle between the FIB and SEM columns on the Crossbeam) and the image is rendered with a 90 degree tilt.
  10. If you want to take the XY offset metadata of the individual slices into account during the import, activate the Read XY Offsets checkbox. Note that this can lead to a slanted z-stack depending on the sample and the metadata, assuming tilt correction was used during acquisition with SmartFIB (e.g. if the metadata contain incorrect offset information).
  11. Click on Apply.
  1. The FIB stack is now imported into ZEN and a czi file is created.
    Note: When importing larger image files, it may take a while until the entire stack is visible in the viewer.

Microscopy Copilot

Microscopy Copilot may occasionally provide incomplete, wrong, or outdated answers. Use the provided information with caution.

This module offers an advanced AI assistant trained to help you with the software functionalities for confocal microscope systems. You can open the assistant by clicking on the icon in the bottom right corner of ZEN.

Limitations

Currently, using Microscopy Copilot has the following limitations:

  • Active internet connection is required.
  • Available for all LSM devices except the LSM 780 and Celldiscoverer 7.
  • Only available if you have started ZEN system.
  • Not available for systems in China.
  • Only available in English.

Using the Microscopy Copilot

  1. You are connected to the internet.
  2. You have started ZEN system with a connected LSM.
  1. In the bottom right, click the Microscopy Copilot icon.
  2. The window opens.
  3. Click Login. Alternatively click Start a conversation.
  4. The login screen opens.
  5. Enter Email and Password for your ZEISS ID account, then click Sign in. Alternatively, if you do not yet have a ZEISS ID account, click Sign up and follow the respective steps.
  6. If the login is successful, the chat interface opens.
  7. Type your question into the text field at the bottom and click Send.
  8. Your question is sent and an answer is displayed as soon as it is available.
  9. To log out of the chat interface, click Logout.
  10. You are logged out and the start screen opens.

Microscopy Copilot Dialog

This dialog provides the interface to interact with Microscopy Copilot, an advanced AI assistant trained to help you with the software functionalities for confocal microscope systems, see Microscopy Copilot.

Parameter

Description

Login

Loads the ZEISS ID page to sign into the application.

Logout

Logs you out of the Microscopy Copilot and displays the start page.

Start a conversation

Only visible on the start page if you are not yet signed in.
Loads the ZEISS ID page to sign into the application.

Send

Only visible if you are logged in.
Sends the text from the text field on the left to Microscopy Copilot as input.

AI Tissue Detection

This module adds a new tissue detection method to the Axioscan 7 which is capable of detecting faint contrast of fluorescent tissue in fast brightfield (flash) prescans, combining speed and robustness. It includes an AI-based model to detect tissue regions in mIF-stained slides from low magnification prescans. The model is based on more than 3000 annotated images, and it requires the software Docker Desktop to run on your PC.

Tissue Image Alignment

This module adds a new Tissue Image Alignment image processing function for combining multi-channel images of different staining and imaging cycles. It supports cyclic mIF staining and imaging protocols with integrated co-registration software and provides near pixel-perfect alignment results based on a shared marker in all rounds (e.g. DAPI) for up to ten rounds of imaging.

Tissue Image Alignment

This method enables you to combine multi-channel images of different staining and imaging cycles. One channel needs to be identical in each of the images to be able to do this alignment. The alignment is done in a two-step process, a coarse and then a fine alignment. Note that Docker Desktop needs to be running to use this function.

Parameter

Description

Reference Channel

Selects the channel of the reference image that is used for alignment. This channel needs to be present in all the images that should be aligned.

Target Channel

Selects the channel of the target image which corresponds to the reference channel and is used for alignment. Depending on the number of inputs set below, you may have multiple target images, for each of which you have to select the corresponding channel.

Maximum Shift in µm

Defines the maximum allowed translation in µm during the fine alignment. If the maximum is not sufficient, the corresponding tiles are not registered. The default value is 50. Increasing the value leads to longer processing times.

Tile Size Setting

Selects the tiling that is used during processing.

Fast

Uses a bigger tile size for faster alignment.

Optimal

Uses a tile size for optimal balance between speed and accuracy.

Accurate

Uses a smaller tile size for the most accurate alignment.

Number of Inputs

Defines the number of inputs for the method. The minimum number of inputs is two (one reference image and one target image), and you can align up to ten images. For each additional input, a new Target Channel section is displayed above. Additionally, the number of input fields in the Input tool is adapted accordingly.

Compress Output File

Activated: Applies lossless zstd compression to the output file.

Deactivated: Applies no compression to the output file.
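
The effect of the Maximum Shift parameter above can be illustrated with a toy 1D search (purely illustrative; this is not the actual registration algorithm used by Tissue Image Alignment): candidate shifts are tried only within the allowed limit, and if no shift in that range produces a match, the tile is left unregistered.

```python
def best_shift(reference, target, max_shift):
    """Toy illustration of a translation search under a maximum-shift limit.

    Tries integer shifts in [-max_shift, max_shift] and returns the shift
    that maximizes the overlap correlation between two 1D signals, or None
    if no shift within the limit gives a positive correlation (the tile
    would not be registered).
    """
    best = None
    best_score = 0.0
    for shift in range(-max_shift, max_shift + 1):
        # Correlate the reference with the target shifted by `shift`,
        # counting only positions where the two signals overlap.
        score = sum(
            r * target[i + shift]
            for i, r in enumerate(reference)
            if 0 <= i + shift < len(target)
        )
        if score > best_score:
            best_score = score
            best = shift
    return best
```

This also shows the trade-off noted above: a larger `max_shift` widens the search range and therefore takes longer.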

Impressum
Carl-Zeiss-Strasse 22
73447 Oberkochen
Germany
Legal