ZEISS Knowledge Base

Visualization and Analysis Toolkits

Advanced Processing

This module provides the following additional processing functions:

Edges

Laplace

This function performs a Laplace highpass filter on an image.

The calculation is based on a 3 x 3 x 3 Laplace operator in all directions. Because it is a highpass filter, smooth gray value transitions are largely suppressed and are not well represented in the result.

Parameter

Description

Normalization

Defines how out-of-range pixel values are mapped.

The calculated pixel values of the output image may be out-of-range and are mapped into the available range.

-

Clip

Values exceeding the pixel value range are set to the highest available value (white); values falling below the range are set to the lowest available value (black). The effect corresponds to underexposure or overexposure. This means that in some cases information is lost.

-

Automatic

Normalizes the pixel values automatically to the available pixel value range. The highest resulting value is mapped to the maximum pixel value, the lowest resulting value to 0. As a result, the whole range of resulting pixel values is compressed evenly.

-

Wrap

If a resulting value is larger than the maximum pixel value of the image, the difference exceeding the maximum pixel value is added to 0. Similarly, if a resulting value is below 0, the resulting pixel value is the maximum pixel value minus the difference falling below 0.

-

Shift

Normalizes the output to the value "pixel value + maximum pixel value/2". As a result, all resulting values are mapped to the available value range.

The middle value of the pixel value range remains constant. Values to the left and right of the middle value are changed progressively, so that values inside the pixel value range are changed only slightly, while values outside the pixel value range are changed strongly and mapped to the edges of the pixel value range.

-

Absolute

Converts negative pixel values into positive values. Positive pixel values exceeding the maximum pixel value are set to the maximum pixel value.
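
The five normalization modes can be summarized in a short sketch. The following NumPy fragment is a minimal illustration of the mapping rules described above for an 8-bit image, not ZEN's implementation; in particular, Shift is shown as a simple linear shift plus clipping, whereas the description above implies an additional progressive compression of out-of-range values.

```python
import numpy as np

MAX = 255  # maximum pixel value of an 8-bit image

def normalize(values: np.ndarray, mode: str) -> np.ndarray:
    """Map (possibly out-of-range) filter results into 0..MAX."""
    v = values.astype(np.float64)
    if mode == "clip":
        out = np.clip(v, 0, MAX)                    # saturate: may lose information
    elif mode == "automatic":
        lo, hi = v.min(), v.max()
        out = (v - lo) / max(hi - lo, 1e-12) * MAX  # stretch evenly to the full range
    elif mode == "wrap":
        out = np.mod(v, MAX)                        # 256 -> 1, -1 -> 254 (boundary handling simplified)
    elif mode == "shift":
        out = np.clip(v + MAX / 2, 0, MAX)          # center signed filter output at mid-gray
    elif mode == "absolute":
        out = np.clip(np.abs(v), 0, MAX)            # negatives become positive
    else:
        raise ValueError(f"unknown mode: {mode}")
    return out.astype(np.uint8)
```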

Local Variance

This method is an edge filter that calculates the variance of each pixel with respect to its neighboring pixels within the lateral filter size. The variance is calculated in the x and y directions, and each pixel in the output image represents the average of the two variances.

Parameter

Description

Kernel Size

Specifies the number of pixels in each direction taken into account when calculating the variance.

Normalization

Defines how out-of-range pixel values are mapped.

The calculated pixel values of the output image may be out-of-range and are mapped into the available range.

-

Clip

Values exceeding the pixel value range are set to the highest available value (white); values falling below the range are set to the lowest available value (black). The effect corresponds to underexposure or overexposure. This means that in some cases information is lost.

-

Automatic

Normalizes the pixel values automatically to the available pixel value range. The highest resulting value is mapped to the maximum pixel value, the lowest resulting value to 0. As a result, the whole range of resulting pixel values is compressed evenly.
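
A minimal sketch of such a local-variance filter, assuming the variance is computed over a 1-D window of Kernel Size pixels along each of the y and x directions and then averaged (SciPy-based, illustrative only):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def local_variance(img: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    f = img.astype(np.float64)
    out = np.zeros_like(f)
    for axis in (0, 1):                        # y direction, then x direction
        mean = uniform_filter1d(f, kernel_size, axis=axis)
        mean_sq = uniform_filter1d(f * f, kernel_size, axis=axis)
        out += mean_sq - mean * mean           # Var = E[x^2] - E[x]^2
    return out / 2.0                           # average of the two variances
```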

Binary

Apply Mask

This tool enables you to isolate features in an image and to suppress image areas not of interest using a mask image.

Parameter

Description

Input

The input image from which you wish to isolate features or suppress areas not of interest.

Mask

The mask image that is applied to the input image.

The mask is laid on top of the input image. Image regions of the input image in1 where the mask is white remain unchanged, image regions where the mask is black are blacked out and suppressed.

Both images are aligned at the upper left corner. If the mask image is smaller than the input image in1, the mask is applied only to part of the input image, beginning at the upper left corner. The rest of the input image remains unchanged.
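
A minimal sketch of the masking rule described above, assuming 0 encodes black in the mask and both images are aligned at the upper left corner (illustrative, not ZEN code):

```python
import numpy as np

def apply_mask(in1: np.ndarray, mask: np.ndarray) -> np.ndarray:
    out = in1.copy()
    h = min(in1.shape[0], mask.shape[0])       # a smaller mask covers only the
    w = min(in1.shape[1], mask.shape[1])       # upper-left part of the input
    region = out[:h, :w]
    region[mask[:h, :w] == 0] = 0              # suppress where the mask is black
    return out                                 # the uncovered rest stays unchanged
```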

Exoskeleton

This method generates an image with the skeleton of the influence zone of regions. The background in the Input image is analyzed, and the skeleton of the influence zones of the objects is determined. This is then saved as a binary image in the Output image.


Parameter

Description

Count

Specifies the number of times the functionality is applied.

Each time the functionality is applied, more white pixels are added around white structures. The invisible pixels around the image borders are assumed to be white, so that a white rectangular frame grows into the image.

If two white regions would merge into a single structure, a single-pixel black border is maintained between the two regions.

Converge

Activated: The functionality is applied until the image does not change anymore. As a result, the structures are extended to their maximum possible size, limited by single-pixel lines.

AxioVision Compatibility

The algorithm was re-implemented; the results differ from those produced by AxioVision.

Activated: A former version of the algorithm is used to get the same results as produced by AxioVision.

Mark Regions

This function marks binary regions of the input image. For each region in the input image, a check is performed to establish whether a pixel has been set in the marker image.

Parameter

Description

Select Marked

Activated: Copies the marked region into the output image.
Deactivated: Copies the unmarked region into the output image.

Or

This method performs a bit-by-bit OR calculation for the Input1 and Input2 images and can be used to combine binary masks or regions. All pixels that are white in input image 1 OR input image 2 are set to white in the resulting image.
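
The operation is equivalent to a per-pixel maximum of the two masks; a minimal NumPy illustration (not ZEN code):

```python
import numpy as np

mask1 = np.array([[0, 255], [0,   0]], dtype=np.uint8)
mask2 = np.array([[0,   0], [255, 0]], dtype=np.uint8)

combined = np.bitwise_or(mask1, mask2)  # white wherever either mask is white
print(combined)                         # [[  0 255]
                                        #  [255   0]]
```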

Scrap

This method removes structures within a certain size range.

The size is defined as the number of pixels. Structures with a pixel count inside the specified size interval are removed; structures with a pixel count outside the interval boundaries are maintained.

Parameter

Description

Minimum Area

Sets the minimum number of pixels of the foreground structures to be removed.

Maximum Area

Sets the maximum number of pixels of the foreground structures to be removed.

Select in Range

Activated: The effect is inverted. Structures with a size within the interval are maintained, structures with a size outside the interval boundaries are removed.
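
A minimal sketch of the Scrap behavior using connected-component labeling (SciPy-based; illustrative only, with function and parameter names mirroring the table above):

```python
import numpy as np
from scipy.ndimage import label

def scrap(binary: np.ndarray, min_area: int, max_area: int,
          select_in_range: bool = False) -> np.ndarray:
    labels, _ = label(binary > 0)
    counts = np.bincount(labels.ravel())               # pixel count per label
    in_range = (counts >= min_area) & (counts <= max_area)
    # Default: remove structures inside the interval; inverted if select_in_range.
    keep_label = in_range if select_in_range else ~in_range
    keep_label[0] = False                              # label 0 is the background
    return np.where(keep_label[labels], 255, 0).astype(np.uint8)
```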

Separation

Using this function you can attempt to automatically separate touching objects that you have been unable to separate using segmentation.

Parameter

Description

Separation Mode

-

Morphology

This method separates objects by first reducing and then enlarging them, making sure that once objects have been separated they do not merge together again.

-

Watersheds

With this method you can separate objects that are roughly the same shape. This method may, however, split elongated objects.

Count

Enter how often the method is applied successively to the result at the location of the separation, using the slider or input field.

Thinning

This method thins objects to a line of single pixel thickness.

Parameter

Description

Thinning Element

Select the desired thinning method here.

-

Arcelli

Applies thinning in accordance with the Arcelli method.

-

Levialdi

Applies thinning in accordance with the Levialdi method.

Count

Sets the number of repetitions.

This means that the function is applied a number of times in succession to the filtering result, which increases the effect accordingly. The value range is 1 to 256.

Prune

Cuts off the ends of the thinned lines.

Converge

If activated, the function is automatically repeated until all regions would be deleted by the next erosion step.

AxioVision Compatibility

Performs the function exactly as in AxioVision to achieve identical results.

Ultimate Erode

This function works in the same way as normal erosion. Structures in the input image are reduced. Thin connections between regions are separated. The difference between this function and normal erosion is that structures are eroded until they would be deleted by the next erosion step. With erosion, the pixel in question is set to the gray value 0 (black) in the resulting image. For regions (pixels) at the image edge, the assumption is that the pixels outside the image are white.

Parameter

Description

Structure Element

Selects the preferred direction of morphological change (e.g. Cross, Diagonal).

Count

Sets the number of repetitions. This means that the function is applied a number of times in succession to the filtering result. This increases the effect accordingly.

Converge

Activated: The function is automatically repeated until all regions would be deleted by the next erosion step.

Morphology

Morphology functions apply structure elements to images. A structure element is like a stencil with holes. When the stencil is placed on an image, only some pixels are visible through the holes. The gray values of these pixels are collected and their extremal gray value (minimum or maximum) is computed.

This extremal gray value is assigned to the pixel of the resulting image that corresponds to the position of the origin of the stencil on the input image. When the stencil has been placed at all positions of the input image, all pixels of the resulting image have been assigned. When bigger structure elements are required than those provided, they can be achieved by iterating the small elements using the Count parameter.
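
As an illustration of the stencil idea, the following SciPy sketch applies a Cross structure element for grey erosion (pixel-wise minimum through the stencil) and iterates the small element via a count, analogous to the Count parameter. A sketch, not ZEN's implementation:

```python
import numpy as np
from scipy.ndimage import grey_erosion

cross = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)     # "Cross" structure element

def erode(img: np.ndarray, count: int = 1) -> np.ndarray:
    out = img
    for _ in range(count):                    # iterating grows the effective element
        out = grey_erosion(out, footprint=cross)
    return out
```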

The following functions are available:

Function

Description

Erode

Shrinks bright structures on a darker background in the input image. Thin connections between structures and small structures themselves will disappear.

Dilate

Expands bright structures on a darker background in the input image. Small gaps between structures are filled and those structures become connected.

Open

First erodes (Erode class) the bright structures on a darker background in the input image, then it dilates (Dilate class) the result by the same number of steps. Thus it separates bright structures on a darker background, but approximately keeps the size of the structures.

Close

First dilates (Dilate class) the bright structures on a darker background in the input image, then it erodes (Erode class) the result by the same number of steps. Thus it connects bright structures on a darker background, but approximately keeps the size of the structures.

Top Hat (White)

Computes the difference between the original image and the image produced by an open operation (Open class). Bright structures which were flattened by the opening are strengthened in the result. This is like putting a top hat with the size of the open operation upon the structure and keeping only the part inside the hat.

Top Hat (Black)

Computes the difference between the original image and the image produced by a close operation (Close class). Dark structures which were flattened by the closing are strengthened in the result. This is like lifting a top hat with the size of the close operation beneath the structure from the dark side and keeping only the part inside the hat.

Gradient

Computes the difference between the dilated image and the eroded image (Dilate and Erode class). Since a point in the dilated image has the maximum gray value and the corresponding point in the eroded image has the minimum gray value within the structure element, the difference is zero for regions of constant gray values and becomes larger for steeper gray value ramps or edges.

Watersheds

Computes the barriers between catchment basins of local minima in the gray-valued input image. A local minimum is a connected plateau of points from which it is impossible to reach a point of lower gray value without first climbing up to higher gray values. A catchment basin of a local minimum is a connected component which contains that minimum and all points downstream of it. A downstream is a path of points along which gray values are monotonically descending. All catchment basins of local minima are thus expanded until they collide with other catchment basins. At those points, barriers (watersheds) are built up. The output image is binary and contains all watersheds. If the "Basins" flag is set, the output image instead contains the catchment basins themselves as uniquely labeled connected components without any border lines.

Grey Reconstruction

Works mainly as an iterated dilation (Dilate class) of the image, but with a constraint image as a second input. After every dilation step, the pixel-wise minimum of the dilated image and the constraint image is computed and becomes the next image to be dilated. The computation stops automatically when all newly dilated pixel values exceed the corresponding values in the constraint image, i.e. when a further step would no longer change the result.
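
As an illustration of grey reconstruction, scikit-image provides a reconstruction function that implements the same constrained, iterated dilation; the sketch below uses it to extract bright peaks ("h-domes") by seeding with the image minus a constant. Illustrative usage, not ZEN code:

```python
import numpy as np
from skimage.morphology import reconstruction

img = np.random.rand(64, 64)                      # stand-in for a gray-value image
seed = img - 0.3                                  # seed must lie below the constraint
background = reconstruction(seed, img, method='dilation')
h_domes = img - background                        # bright peaks higher than ~0.3 remain
```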

Parameters

Parameter

Description

Structure Element

Here you select the desired structure element. The following elements are available: Horizontal, Diagonal 45°, Vertical, Diagonal 135°, Cross, Square, Octagon.

Count

Here you can adjust the number of repetitions to define the size of the structure element.

Binary

Only available for the Erode, Dilate, Open and Close function.

Activated: Increases calculation speed by creating a binary image before the morphology operation is applied. The resulting image contains black or white pixels instead of gray values.

Deactivated: Applies the morphology operation to a gray scale image. The results are more finely graded than the results of the morphology operation applied to a binary image.

Utilities

Combine HLS

With this method an HLS image can be generated from the single color extractions H, L and S.

Impose Noise

This function imposes a defined noise on an image for testing purposes.

Parameter

Description

Signal to Noise Ratio

Adjusts the signal-to-noise ratio.
Range: 0.10 to 100.00.

Distribution

-

Poisson

Imposes Poisson-distributed noise.

-

Gauss

Imposes Gaussian-distributed noise.
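
A minimal sketch of imposing test noise at a given signal-to-noise ratio. The exact SNR definition used by ZEN is not stated here; this sketch assumes SNR = mean signal / noise standard deviation for the Gaussian case, and scales photon counts so that the SNR at the mean intensity matches for the Poisson case:

```python
import numpy as np

rng = np.random.default_rng(0)

def impose_noise(img: np.ndarray, snr: float, distribution: str = "gauss") -> np.ndarray:
    f = img.astype(np.float64)
    if distribution == "gauss":
        sigma = f.mean() / snr                     # noise std derived from the SNR
        noisy = f + rng.normal(0.0, sigma, f.shape)
    elif distribution == "poisson":
        # For Poisson noise, SNR at the mean is sqrt(counts): scale so counts = snr^2.
        scale = snr ** 2 / max(f.mean(), 1e-12)
        noisy = rng.poisson(f * scale) / scale
    else:
        raise ValueError(distribution)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```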

Split into HLS

This method generates the individual color extractions for an HLS input image. The resulting images for hue, lightness and saturation take the form of gray images.
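
A minimal sketch of such an HLS split using Python's colorsys module, scaling each extraction to an 8-bit gray image (illustrative only):

```python
import colorsys
import numpy as np

def split_hls(rgb: np.ndarray):
    r, g, b = [rgb[..., i].astype(np.float64) / 255.0 for i in range(3)]
    to_hls = np.vectorize(colorsys.rgb_to_hls)   # per-pixel RGB -> (H, L, S) in 0..1
    h, l, s = to_hls(r, g, b)
    return tuple((c * 255).astype(np.uint8) for c in (h, l, s))
```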

3Dxl

This module enables you to visualize 3D or 4D image data. It provides up to three clipping planes and features five different rendering methods, including an improved transparency mode for better visualization of dense structures, such as EM, XRM and dense fluorescent data. Time series (4D) movies, the generation and export of movies, as well as tools for interactive 3D measurements are also included. 3Dxl offers a bridge functionality and sample pipelines to send data to arivis Pro with saved settings for fast and easy 3D analysis. For full functionality, the 3Dxl module requires a current dedicated graphics card with full OpenGL support (NVIDIA recommended, AMD possible).

3D View

This view is only available if:

  • you have loaded or acquired a z-stack image.
  • a suitable NVIDIA or AMD graphics card with full OpenGL 4.3 or higher functionality is present.

3D View employs 3D rendering technology which requires access to advanced OpenGL functionality. For full functionality, a modern dedicated graphics card (NVIDIA or AMD technology) has to be present.

1

Tool Bars
With the toolbars on the left, right and bottom of the image area you can directly control and move the 3D volume, see Tool Bars.

2

3D View
The 3D view displays z-stack images three-dimensionally as a 3D volume and objects resulting from a 3D image analysis (if you have the required license for 3D Image Analysis and toggled their visibility). Selecting an analysis object in the view highlights the respective row in the table. You can also select multiple objects by pressing Ctrl while clicking on the objects.

3

Analysis Object Table
This table is only visible if you have opened an image with objects from the 3D Image Analysis. It displays the objects resulting from the image analysis and allows you to highlight them in the viewer, see Analysis Objects Table.

4

Summary Table
This table is only visible if you have opened an image with objects from the 3D Image Analysis. This table displays a summary of information about the objects of the class selected in the Analysis Object Table. The displayed features are the ones selected for all regions in the Feature step of the analysis wizard.

5

View Options
In this area you have your 3D specific view options with parameters to adjust the appearance and further settings of the 3D volume.

Tool Bars

The tool bars are arranged to the left and right of the image area and underneath it. You can use the tools to control and adjust the display of the 3D volumes in the image area.

Tool Bar (Left)

Parameter

Description


Top Thumb Wheel

Zooms in or out of the 3D image.


Select

Enables you to select end points of measurement tools that have been drawn into the 3D image (Measurement tab). You can then edit the position of the end points.


Rotate

Enables you to rotate the 3D image in any way you wish within the space. This is the default mode when you switch to 3D view for the first time.


Zoom

Enables you to increase or reduce the zoom factor of the image area.


Move

Enables you to move the 3D image.


Fly

Enables the flight mode. This mode allows you to virtually fly through the 3D image. Use the keys from the list below to control your flight.


Bottom Thumb Wheel

Rotates the 3D image around the horizontal (X) axis.

Flight Mode Key Layout/Controls

Key

Function

W

Forward

S

Backward

A

Left

D

Right

Space

Up

C

Down

E

Rotate (clockwise)

Q

Rotate (counter-clockwise)

X

Precision Mode, enables slower movement

Tool Bar (Right)

Parameter

Description


Toggle X/Y Clipping Plane (Blue)

Toggles the visibility of the X/Y clipping plane.


Toggle X/Z Clipping Plane (Green)

Toggles the visibility of the X/Z clipping plane.


Toggle Y/Z Clipping Plane (Red)

Toggles the visibility of the Y/Z clipping plane.


Snap

Creates a 2D image of the current view. The image is a 24 bit color image. All annotations are burned in automatically.


Add

Adds the current view to a position list as a new position.
With the help of position lists you can have your view calculated as a series of individual images. This series can then be exported as a movie, for example.


Play

Only active if a position list containing at least two saved positions exists.
Plays back a preview of the series that is calculated. To stop the preview, click on the button again.

Tool Bar (Bottom)

Control element

Description


Left Thumb Wheel

Rotates the 3D volume around the vertical (Y) axis.


Home View

Switches back to the start view from any view.
A top view of the 3D volume is displayed. Lateral movements and the zoom factor are adjusted so that the 3D volume can be seen at the center of the image area.


Show Measurements

Shows or hides measurements.
If measurements are drawn in, a table of the measurements appears at the right side of the image area.


Show Bounding Box

Shows or hides a bounding box around the 3D volume.


Show Coordinate Axes

Toggles the visibility of the coordinate axes.

  • X axis = red
  • Y axis = green
  • Z axis = blue


Show Scaling

Toggles the visibility of the scaling on each axis.


Show Objects

Only available if you have licensed the 3D Image Analysis functionality.
Toggles the visibility of the analysis objects in the 3D volume.


Spin Mode

Enables the spin mode. This allows you to set the 3D volume in continuous motion. For a short description of how to use the spin mode, see Animating the 3D Volume.


Glass Visualization

Only available if you have licensed the 3D Image Analysis functionality.
Switches to glass visualization of the analysis objects in the 3D volume.


Opaque Visualization

Only available if you have licensed the 3D Image Analysis functionality.
Switches to opaque visualization of the analysis objects in the 3D volume.


Send to arivis Pro

Only active, if arivis Pro is installed and licensed on the system.
Starts arivis Pro and imports the image. The image will be displayed with almost identical rendering settings as in the 3D view of ZEN.
Note: Not all render methods produce absolutely identical settings between the two applications. This is due to additional functionalities available for arivis Pro.


Right Thumb Wheel

Rotates the 3D volume around the (Z) axis perpendicular to the screen plane.

Appearance Tab

Here you can define the appearance of the 3D volume. On the tabs available here, select the setting that you want to change (e.g. Transparency). Depending on which mode you have activated on the 3D tab, different tabs and parameters are available.

Transparency Tab

Parameter

Description


Channel selection

Here you can select the channel of a multichannel image for which you want to set the transparency.

Threshold

Sets the lower threshold value in percent of the gray levels displayed. With this setting you specify the gray value range for the relevant channel that you want to be included in the rendered image.

Ramp

Sets the extent of the transition from completely transparent to completely opaque (0-100 percent).

Maximum

Sets the level of opacity (0-100 percent).

Histogram

Schematically displays the settings that you enter using the sliders. The X axis represents the gray level values and the Y axis the opacity. You can also change the position of the curve using the mouse.

Reset

Resets all parameters to the original values.
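
One plausible reading of how Threshold, Ramp and Maximum combine into an opacity curve over gray values is sketched below (all three parameters in percent). This is an interpretation of the descriptions above, not ZEN's exact formula:

```python
import numpy as np

def opacity(gray: np.ndarray, threshold: float, ramp: float, maximum: float,
            max_gray: float = 255.0) -> np.ndarray:
    start = threshold / 100.0 * max_gray            # fully transparent below this
    width = max(ramp / 100.0 * max_gray, 1e-12)     # width of the transition
    t = np.clip((gray - start) / width, 0.0, 1.0)   # 0 = transparent, 1 = opaque
    return t * (maximum / 100.0)                    # capped at the Maximum opacity
```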

Channels Tab

Only visible if Mixed view mode is activated on the 3D tab.

Here you can specify how Transparency and Surface settings are mixed. In the case of multichannel images you can also configure these settings differently for each channel.

Activate the corresponding checkboxes for Transparency and Surface in the list.

Light Tab

Parameter

Description

Brightness

Sets the brightness of the light source (from 0 - 100 %).

Azimuth

Here you can enter the angle of the light source above the virtual horizon.

Elongation

Here you can enter the light source's horizontal angle of incidence.

Light source

As an alternative to the slider or input field, you can set the Azimuth and Elongation together by using the mouse to move the point within the light source display.

Enable Directional Light

Activated: Enables full lighting for volumetric rendering. A directional light illuminates all structures in a scene with parallel light rays from a specific direction, similar to sun light. The light disregards the distance between the light itself and the structures, so the light does not diminish with distance.

Enable Tone Mapping

Activated: Enables tone mapping during the rendering of the image data. Tone mapping refers to the compression of the dynamic range of high contrast images (HDR). The contrast range is reduced in order to display digital HDR images on output devices with a more limited dynamic range. In most cases, tone mapping increases brightness and contrast of the rendering result and makes colors more vibrant.

Reset

Resets all parameters to the original values.

Projection Tab

Parameter

Description

View angle

Sets the projection angle at which you view the scene, freely adjustable between 0° and 80°. The effect on the perspective display is as if you are viewing the 3D image through a telephoto or wide-angle lens.

Scale Z

Here you can set the scaling of the volume in the Z direction (value range 10% - 600%).

Stereo anaglyph

Activated: Displays the 3D volume as anaglyphs. You can choose between a

  • Red/Green display, or a
  • Red/Cyan display.

Camera separation

Sets the distance between the two virtual cameras (0-20%).

Parallax shift

Sets the degree of movement that is necessary to bring the two camera images back into line (-100 to +100%).

Reset

Resets all parameters to the original values.

Measurements Tab

Only visible if the Show All mode is activated.

Here you can perform interactive measurements in the 3D volume. Note that measurements are not possible in Shadow projection mode. The measurements can be drawn in directly in the 3D volume using different tools. The measurement results are displayed in a list at the right of the image area.

Parameter

Description

Tool bar


Using the tools you can perform interactive measurements in the 3D volume. The following tools are available:

-


Select

Changes the mouse pointer to Selection mode. Use this to select measurements in the 3D volume in order to change them.

-


Line

Use this to measure the length of a line in µm. Click once on the starting point and hold down the mouse button. Then drag the mouse to the end point and release the mouse button again. The measurement is complete. The result of the measurement is displayed in the list to the right of the image area.

-


Angle

Use this to measure the angle between two connected legs. First define the starting point. Then use the mouse to drag the first leg to the desired first end point. Define the second leg by clicking on the second end point. The angle measurement ends with a display of the angle measured (in degrees). The result of the measurement is displayed in the list to the right of the image area.

-


Polygon Curve

Use this to measure along a line with any number of segments. Click from corner point to corner point. Complete the measurement by right-clicking. The result of the measurement is displayed in the list to the right of the image area.

-

Color selection

Here you can select a color for the tool you want to draw in. Simply click on the colored rectangle and choose a color from the list.

-

Keep Tool

Activated: Keeps the selected tool active.

-

Auto Color

Activated: Automatically changes the color of the drawn-in tool.

Parameter

Description

Show Measurements

Activated: Shows the measurements in the 3D volume or in the list of measured values at the right of the image area.

-

On top

Activated: All drawn-in measurement tools appear in the foreground, even if these are in fact obscured by image structures.

Display Values

-

on the objects

Activated: Displays the measured values in the 3D volume.

-

as list

Activated: Displays the measured values in the measurement data table.

Delete Selected

Only active if a measurement tool has been selected in the 3D volume.
Deletes selected measurement tools from the 3D volume.

Delete All

Deletes all measurement tools from the 3D volume.

3Dxl Plus

This module offers functionality to combine 3D and 2D visualization on one screen, enabling the user to render up to three 2D view panes and one 3D view pane together in one new viewer (Tomo3D view). The 3D view features ray-casting-based volume rendering with transparency, volume and maximum intensity modes, as well as flexible channel-wise adjustment of the 3D view, background color and lighting. The positions of the three orthogonal 2D view panes are synchronized with the 3D view, can be set interactively and are indicated by colored cut lines.

Tomo3D View

The Tomo3D view combines a 3D viewer with up to three orthogonal 2D views. The Tomo3D view is only available if you have loaded or acquired a z-stack.

1

Left Toolbar
Toolbar to manipulate the image display. For more information, see Left Toolbar (Tomo3D).

2

Image View
Area where you interact with the image views and set the cut lines with the mouse. You can display up to four different views, including a 3D and different 2D views. The number of image views can be set in the Ortho Display tab, the type can be set by the dropdown in the top left corner of each view. The individual views can be synchronized with the Synchronize View Area of 2D Panes checkbox in the Dimensions tab.

3

Snap button
Creates a 2D image of the currently displayed image views. All annotations are burned in automatically.

4

Bottom Toolbar
Toolbar to manipulate the image display, see Bottom Toolbar (Tomo3D).

5

View Options
Area for general and specific view options.

Ortho Display Tab (Tomo3D)

Parameter

Description

Cut Lines

Sets the positions (pixel values) for the section lines using the X/Y/Z sliders or input fields.
Alternatively you can also adjust the positions directly in the image area. To adjust the positions, move the mouse over a section line in the image. Hold down the left mouse button and move the mouse.

-

Mid

Positions the relevant slider at the center of the view.

Line Width

Only visible if the Show All mode is activated.
Enter the thickness of the section lines in pixels using the sliders or input fields. This results in a maximum intensity projection being displayed over the selected pixel width.
You can also adjust the width directly in the image area. To adjust the width, move the mouse over a section line in the image until a small arrow is displayed. Hold down the left mouse button and move the mouse.

Cut Line Opacity

Only visible if the Show All mode is activated.
Here you can enter the degree of opacity of the section lines from 0% (invisible) to 100% (completely opaque).

Views

Only visible if the Show All mode is activated.
Sets the number and allocation of views in the Image View.

-

Displays four views.

-

Displays one view.

-

Displays two vertically separated views.

-

Displays two horizontally separated views.

-

Displays three vertically separated views.

-

Displays three horizontally separated views.

Create Image Tab (Tomo3D)

This tab enables you to create an image of the current view(s).

Parameter

Description

Resolution

Selects the resolution for the image. You have the following options:

  • Current Resolution
  • 720 x 576 (SD)
    (SD = Standard Definition)
  • 1024 x 768
  • 1920 x 1080 (HD)
    (HD = High Definition)
  • 4096 x 3072 (4K)

Create

Creates an image of the current view and opens it in ZEN.

Intellesis Segmentation

This module enables you to use machine-learning algorithms for segmenting images using pixel classification. It uses different feature extractors to classify pixels inside an image based on the training data and the labeling provided by the user. There are a variety of use cases because the functionality itself is "data-agnostic", meaning it can be used with virtually any kind of image data.

The module has the following main functionality:

  • Any user can intuitively train a machine learning model to perform image segmentation without advanced training by simply labeling what shall be segmented.
  • Import of any image format readable by the software, incl. CZI, OME-TIFF, TIFF, JPG, PNG and TXM (special import required).
  • Creation of pre-defined image analysis settings (*.czias) using machine learning based segmentation that can be used inside the image analysis.
  • Integration of the Intellesis Segmentation processing functionality into the OAD environment.

Application
Example:

XRM (X-Ray Microscopy) image from sandstone showing the main steps when working with the Intellesis Segmentation module.


1 Original Image


2 Labeled Image


3 Overlay of Original Image and Segmentation Result


4 Segmented Image

Application
Example:

Cells image with phase gradient contrast on the Celldiscoverer 7 and segmented using Intellesis Segmentation.


1 Original Image


2 Labeled Image


3 Overlay of Original Image and Segmentation Result


4 Segmented Image

Note:
The training of Intellesis Segmentation models is CPU/GPU specific. A model trained on GPU only runs on a GPU machine. If a model trained on GPU is transferred to a CPU-only machine, the model has to be retrained to run on this machine.

Licensing and Functionalities of Intellesis Segmentation

Some functionality of Intellesis Segmentation is generally available in ZEN, but the full functionality requires the AI Toolkit license.

Basic functionality

The general available functionality includes:

  • Importing and exporting models.
  • Managing the models, including renaming and deleting.
  • Creating an analysis setting from your model.
  • Running a model with the Intellesis Segmentation function on the Processing tab or in OAD (if you have licensed the Developer Toolkit).
  • Running a model as part of the image analysis or Bio Application segmentation step, if you have the license for the 2D Toolkit or Bio Applications Toolkit respectively.

Licensed functionality

If you have licensed this functionality and activated it under Tools > Toolkit Manager, the following additional functionality is available:

  • Creating and training a new model.
  • Retraining an existing model.

FAQ/Terminology

Question/Term

Description

Machine Learning

The Intellesis Segmentation module uses machine learning to automatically identify objects within an image according to a pre-defined set of rules (the model). This enables any microscopy user to perform image segmentation even on complex data sets without programming experience or advanced knowledge on how to set up an image segmentation.

What is a "Model" ?

A model is a collection of rules according to which the software attributes the pixels to a class. Such a class is mutually exclusive for a given pixel, i.e. a pixel can only belong to one class. The model is the result of (repeated) labeling and training a subset of the data. After the model is trained using the labels provided by the user, it can be applied to the full data set in image processing, or it can be used to create an image analysis setting (*.czias) to be used with the 2D Toolkit.

In image processing the trained model can be applied to an image or data set to perform segmentation automatically. As a result you will get two images: the segmented image and a confidence map.

What is a "Class" ?

A class is a group of objects (consisting of individual pixels) with similar features. According to the selected model the pixels of the image will be attributed as belonging to a certain class, e.g. cell nuclei, inclusions in metals, etc.

Every model has two classes built in by default, because at least two classes are needed (e.g. cells and background, or steel and inclusions). More classes can be defined if necessary.

What is "Labeling" ?

Instead of using a series of complex image processing steps in order to extract the features of the image, you can simply label some objects in the image that belong to the same class. Based on this manual labeling the software will attribute the pixels of the image as belonging to a certain class. In order to refine the result, you can re-label wrongly attributed pixels to assign them to another class.

What is "Training" ?

During the training process (in the Intellesis Segmentation training user interface) you can repeatedly label structures as belonging to one class, run the training, check if the result matches your expectation and, if necessary, refine the labeling in order to improve the result. The result is a trained model (a set of rules) which produces the desired result when applied to the training data.

With the labeled pixels and their classes a classifier will be trained. The classifier will then try to automatically assign single pixels to classes.

Training UI
(User Interface)

The user interface for training is the starting point of the automatic image segmentation process. Here you import and label images, and train the model which you can later use for automatic image segmentation. Within this interface you can load the training data, define the classes of objects found in your data and train the classifier to assign the objects to the correct classes.

What is "Segmenting" or "Segmentation"?

Segmentation is the process of partitioning an image into segments, where each segment consists of pixels that share certain features. This involves assigning a class label to each pixel based on its features (such as color, texture, or intensity) and grouping pixels: pixels that are classified into the same class are grouped together to form distinct segments within the image. Before you can perform segmentation, the segmentation model has to be trained. Within the Training UI you train the software by labeling specific objects or structures that belong to different classes. A pseudo-segmentation is performed each time you train the model so that you can see whether the feature extractor works for your image.

One output of the Intellesis Segmentation processing is the fully segmented image using the trained model. The second output is the confidence map, helping you assess the reliability of the segmentation.

Confidence Map

The confidence map is one of two resulting images when you apply a trained model to an image by using the processing function Intellesis Segmentation.

The resulting grayscale image encodes the reliability of the segmentation. Areas which can be assigned to a certain class with high confidence appear bright, whereas areas which have a lower confidence of belonging to a certain class appear dark. The confidence is represented by a percentage value, where 0 means "Not confident at all" (dark) and 100 "Very confident" (bright).
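
A minimal sketch of how a confidence map might be used downstream, masking out segmented pixels below a chosen minimum confidence (hypothetical helper, not a ZEN API):

```python
import numpy as np

def apply_min_confidence(segmented: np.ndarray, confidence: np.ndarray,
                         min_confidence: float = 80.0) -> np.ndarray:
    out = segmented.copy()
    out[confidence < min_confidence] = 0   # treat low-confidence pixels as unclassified
    return out
```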

What is a "Feature"?

A feature is a specific property of a pixel that is calculated using a predefined set of filters and processing functions. This process results in a "Feature Vector" for each pixel, which encapsulates various characteristics of the pixel.

What is a "Feature Extractor"?

A feature extractor is a pre-defined set of processing functions that is used to create the feature vector for every pixel. A specific layer of a pre-trained neural network can be used as a feature extractor as well.

Prediction

When the model that was trained on example data is applied to a new unlabeled data set, the result is called a prediction.

Multi-Channel Images

The Intellesis Segmentation module supports multi-channel data sets. It is important to understand that in the case of multi-channel images every pixel can still only belong to one class, i.e. the classes are mutually exclusive.

The additional information of having more than one intensity value per pixel (e.g. one for every channel) is also used for classification.

Example: If you have overlapping regions A and B in the image you want to classify, consider labeling three independent classes:

  • Class 1: A
  • Class 2: B
  • Class 3: A overlapping with B

Workflow Overview for Intellesis Segmentation

Intellesis Segmentation offers three main workflows. The general workflows and the basic steps involved are shown inside the diagram.

  • Labeling and training your images -> results in a Trained Model.
  • Using the trained model to segment images -> results in Binary Masks.
  • Using the trained model for image analysis -> results in classified pixels for subsequent segmentation and measurements of objects.
Process description of the Intellesis workflow

Training User Interface Intellesis Segmentation

The training user interface is accessed via the Intellesis Segmentation tool on the Analysis tab.

User Interface for Training

1

Labeling and Training Settings
On the left side you find elements for managing the classes. You can add and delete classes and select them for labeling an image.
You can change the label opacity and the segmentation opacity by adjusting the corresponding slider. Opacity determines to what degree the overlay obscures or reveals the labels or segmentations: an opacity of 1% appears nearly transparent, whereas 100% opacity appears completely opaque. Additionally, you can hide all segmented pixels where the confidence value is below a certain threshold set by the Min. Confidence (%) slider.
With the different parameters of the Segmentation options and the Postprocessing options it is possible to further improve the results of the training and the (pseudo-)segmentation. The Train & Segment button starts the automatic training algorithm and then performs the pseudo-segmentation of the defined classes in the image.

2

Image Area
In the image area, the image selected in the left panel is displayed. You can label parts of the image as belonging to the class highlighted in the "classes" box on the left. When the cursor is inside the image, the current brush size for labeling is represented by a square. If the brush size is very small, the square changes into a dotted circle with a small point inside.

3

Image Gallery
On the right side you can import and select the images you want to use for training and segmenting.

4

Labeling Options
Below the center screen area you can adjust the Labeling Mode or Brush Size.

When you use images with large X/Y dimensions, e.g. large tile images, the segmentation will only be performed on a subset of the whole image in order to avoid long waiting periods. The current maximum size of the image subset in X/Y is 5000 pixels, centered on the current view port. Nevertheless, all labels inside the complete image will be used for training, but the segmentation preview (pseudo-segmentation) will only be applied to that subset.

Intellesis Training Options

Selects the set of feature extractors used for training Intellesis segmentation models, see Feature Extractors. Keep in mind that there is no definitive 'correct' selection of parameters. It is advisable to experiment with various parameters for the same image to determine which configuration yields the best results.

Parameter

Description

Basic Features 25

A predefined feature set using 25 features, see Basic Features 25.

Basic Features 33

A predefined feature set using 33 features, see Basic Features 33.

Deep Features 50

Deep Features 64

Deep Features 70

Deep Features 128

Deep Features 256

The complete or reduced feature set from either the 1st, 2nd or 3rd layer of a pre-trained network is used to extract the respective number of features, see Intellesis Deep Features or the respective Zeiss GitHub page.

Postprocessing Options

Parameter

Description

No Postprocessing

This parameter is set by default. No further postprocessing will be applied to the images.

Conditional Random Field (CRF)

If selected, this postprocessing function is applied to the output of the pixel classification. This can improve the segmentation results, depending on your sample. The CRF algorithm tries to create smoother and sharper borders between objects by re-classifying pixels based on confidence levels in their neighborhood.

Note: If CRF is activated, the returned confidence map does not reflect the outcome of the majority votes of all decision trees of a specific class anymore. Therefore, a map containing only ones will be returned when the CRF postprocessing option is activated.

Labeling Options

Parameter

Description

Undo/Redo

When you click on the arrows you can undo/redo the last actions you have performed.

Labeling Mode

Here you can select between labeling and erase mode. To switch the labeling mode, you can also use the shortcut Ctrl + D.

Brush Size

Here you can set the brush size of the labeling/erasing tool.

Note that the brush size can alternatively be changed by pressing the Ctrl key and using the mouse wheel (when the cursor is inside the image area).

All Labels

When you click on Clear, all labels in the active image will be deleted.

Intellesis Segmentation Models

Deleting an Intellesis Segmentation Model

Prerequisite: You have selected an Intellesis segmentation model.

  1. On the Analysis tab, in the Intellesis Segmentation tool, click and select Delete.
     The Deleting Model dialog opens.
  2. Click Yes to confirm that you want to delete the model.

You have deleted the model.

Importing Labels from Binary Mask

This class-specific function allows you to import binary images from an external source as labels for the currently selected class. This is useful when the ground truth for a specific image is already available or when you wish to use a binary image obtained through a different modality as annotation for the training.

Be aware that this function overwrites existing labels for this class and that this functionality can possibly create a huge number of labels that might lead to memory issues depending on the system configuration and the selected feature extractor.

Prerequisites:

  • The label image to be imported has exactly the same dimensions in XY as the currently selected training image.
  • You have opened the user interface for training, see Creating and Training an Intellesis Segmentation Model.

  1. Right-click a class and select Import Labels from Binary Mask.
     The explorer opens.
  2. Navigate to the label image you want to import and click Open.

The imported labels are displayed in the Image view. They have the color of the selected class and correspond exactly to the loaded binary mask.

Converting Segmentations to Labels

With this function you can convert the result of a segmentation in the Intellesis trainings interface directly to labels and thereby increase the number of labels for the next training iteration.

Prerequisites:

  • You have opened the user interface for training, see Creating and Training an Intellesis Segmentation Model.
  • You have performed a segmentation.

  1. Right-click a class and select Segmentation to Labels.

The segmentations are converted to labels and are visible in the Labels channel. These can be further refined with the brush and delete tools.

Using Pretrained Deep Learning Networks for Image Segmentation

In ZEN you can use pre-trained deep learning models for image segmentation. You can use models provided by Zeiss or load your own models. These models can be imported in ZEN via the Intellesis Segmentation tool, see Importing an Intellesis Segmentation Model.
After the import the model can be used for the following workflows:

Using networks provided by ZEISS

Zeiss provides some pre-trained networks for you to use (subject to change without notice). These networks are available for download on the ZEISS GitHub page for Open Application Development (OAD) and can be found inside the Machine-Learning section.

Note: These networks are copyright protected!

Condition of Use
These pre-trained networks were trained with "best effort" on the available training data and are provided "as is" without warranty of any kind. The licensor assumes no responsibility for the functionality and fault-free condition of a pre-trained network under conditions outside the described scope. Be aware that no pre-trained network will perform equally well on every sample, especially not on samples it was not trained for. Therefore, use such pre-trained networks at your own risk; it is up to the user to evaluate and decide whether the obtained segmentation results are valid for the images currently being segmented with such a network. By downloading you agree to the above terms.

Detailed Information about pre-trained DNNs
Such networks are specific for the application they have been trained for. Detailed information can be provided on demand.

Using your own networks

You can also train and use your own networks. To be able to use your own networks in ZEN, your networks have to fulfill certain specifications detailed in the ANN Model Specification.
Additional information about ZEISS machine learning, including an example of how to train a model and convert it into a czmodel can be found in this Readme on GitHub. It also explains the usage of the PyPi package which is free to use for everybody.

Performing an Image Analysis Using an Intellesis Segmentation Model

Once you have trained a model for segmentation, you can also use it in the Image Analysis for further analysis. To use the trained model, you must first create a new image analysis (IA) setting (*.czias format). There are two options:

  • Create a new image analysis setting from the Intellesis segmentation model. This is the only option for Intellesis segmentation models trained on a multi-channel image.
  • Use an Intellesis segmentation model in the Automatic Segmentation step of an image analysis setting, see Automatic Segmentation. This option is only possible for models trained on single-channel images, but allows you to create image analysis settings with a complex hierarchy.
Prerequisite: You are in the Intellesis Segmentation tool.

  1. Select the trained model which you want to use to create an analysis setting, click , and select Create Analysis Setting.
     The dialog for saving the setting opens. The setting will be saved as a *.czias file in the ZEN default folder for image analysis settings (usually under User/Documents/Carl Zeiss/ZEN/Documents/Image Analysis Settings).
  2. Click Save.
     The file is saved.
  3. Change to the Image Analysis tool and select the setting from the dropdown list. Note that the setting is only available in the dropdown list if you have used the default folder for saving. Otherwise the setting must be loaded from the file system (specific location) via the Import option.
     The image analysis setting is loaded with the classes defined in the Intellesis segmentation model.
  4. You can now continue with setting up an image analysis. For more information, see Image Analysis Wizard.

Sandstone dataset segmented using Intellesis Segmentation inside the Image Analysis Wizard, showing the actual segmentation step. Instead of conventional thresholds, the Intellesis segmentation model is used to segment the image. With the Min. Confidence (%) parameter, it is possible to exclude pixels where the model exhibits a low confidence level, applicable to all classes. The binary functions Fill Holes and Separate are applied solely to the binary masks produced by the Intellesis segmentation step, making them independent of the actual segmentation process.

Sandstone dataset segmented using Intellesis Segmentation in the Image Analysis Wizard, showing the measurement results for one particular class (shown in green).

Changing the Tile Border Size for Deep Learning Networks

Undo Border Size Changes

There is no way to undo the change of the border size unless you remember the original value and change it back with the same workflow described here.

Prerequisite: You have imported a deep learning network, see Importing an Intellesis Segmentation Model.

  1. On the Analysis tab, in the Intellesis Segmentation tool, select the network as your Model.
  2. Click and select Change Border Size.
     The Change Border Size dialog opens.
  3. Change the Border Size to fit your needs. Note that while increasing the border size reduces segmentation artifacts in the output, it also decreases the tiling speed.
  4. Click OK.

You have changed the border size for tiling. If there are still tiling artifacts with the maximum border size, consider retraining the model with larger tiles.

Remarks and Additional Information

  • Segmentation performance depends, among other factors, on the system performance and the available free RAM and GPU memory.
  • Whenever using Intellesis Segmentation it is strongly recommended not to use other memory- or GPU-intensive applications at the same time.
  • Deep Feature Extraction uses the GPU (NVIDIA only) if present on the system. It is recommended to use a GPU with at least 8 GB of RAM.
  • When installing the GPU libraries it is required to use the latest drivers, which can be obtained from the NVIDIA homepage (https://www.nvidia.com/Download/index.aspx?lang=en-us).
  • In case of an approved ZEISS workstation, the latest drivers can be found on the installer.
  • When using the Deep Feature Extractor on a GPU system, Tensorflow will occupy only as much GPU RAM as needed to ensure system stability. When the segmentation is finished, this GPU memory is released automatically.
  • If the GPU memory is still occupied when another GPU-intensive application is started, the memory cannot be used by this new process, and a CPU fallback will be used or performance issues may occur.
  • In this case, restart the software to free all possible GPU memory and then start the GPU-intensive application.

Feature Extractors

Intellesis Basic Features

  • To calculate the features, various filters with various filter sizes and parameters are applied to the region around each pixel (2D kernels).
  • Results are concatenated and yield the final feature vector describing the pixel.

Basic Features 33

Used Filters:

  • Gaussian filter (20 different sigma) = 20 feature dimensions
  • Sobel filter (1 sigma) = 1 feature dimension
  • Gabor filter (1 theta, 2 different sigma, 2 different frequencies) = 4 feature dimensions
  • Mean filter (5 different sizes) = 5 feature dimensions
  • Hessian filter (1 sigma) = 3 feature dimensions (one for derivative in direction xx, one for derivative in direction xy and one for derivative in direction yy)
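
The following sketch assembles a 33-dimensional per-pixel feature image in the spirit of this list, using scikit-image and SciPy filters. The concrete sigma values, Gabor parameters and mean-filter sizes are illustrative assumptions; the exact ZEN parameters are not published here:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import hessian_matrix
from skimage.filters import gaussian, sobel, gabor

def basic_features_33(img: np.ndarray) -> np.ndarray:
    img = img.astype(np.float64)
    feats = []
    feats += [gaussian(img, sigma=s) for s in np.linspace(0.5, 10, 20)]  # 20 dims
    feats += [sobel(img)]                                                #  1 dim
    for freq in (0.1, 0.3):                                              #  4 dims
        for sigma in (1, 3):
            feats += [gabor(img, frequency=freq, sigma_x=sigma, sigma_y=sigma)[0]]
    feats += [uniform_filter(img, size=s) for s in (3, 5, 7, 11, 15)]    #  5 dims
    feats += list(hessian_matrix(img, sigma=1.0))   # Hxx, Hxy, Hyy      #  3 dims
    return np.stack(feats, axis=-1)                 # (H, W, 33) feature image
```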

Intellesis Deep Features

  • Entire image as input for pre-trained network.
  • Note: If you use the CPU for segmentation with Deep Feature sets, the results can be different on different machines because they are hardware (CPU) dependent.
  • Take the output from an intermediate layer of that network as feature vector, e.g. output from layer 3 was processed by preceding layers 1 and 2.
    • Deep Features 50: Using layer 2 with reduced feature dimension = 50
    • Deep Features 64: Using layer 1 with full feature dimension = 64
    • Deep Features 70: Using layer 3 with reduced feature dimension = 70
    • Deep Features 128: Using layer 2 with full feature dimension = 128
    • Deep Features 256: Using layer 3 with full feature dimension = 256

Change Border Size Dialog

Parameter

Description

Total Tile Width

Displays the total tile width used by the network.

Total Tile Height

Displays the total tile height used by the network.

Border Size

Sets the border size of the tiles. The lower limit of the border size is zero and the upper limit is a quarter of the smallest dimension of the tile.

Tile Overlap

Displays the tile overlap, which is the sum of the overlap on the left and right side, see Tile Border Size Example. It is updated according to changes of the border size.

Tile Border Size Example

The tile overlap in % is the sum of the overlap on the left and right side. Consider the following two examples as an illustration of the overlap:

Border Size of 10 % with a resulting overlap of 20 %

Border Size of 25 % with a resulting overlap of 50 %
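
The overlap arithmetic is simply twice the border size, as a small worked example confirms:

```python
def tile_overlap_percent(border_size_percent: float) -> float:
    """Overlap = border on the left + border on the right."""
    return 2 * border_size_percent

assert tile_overlap_percent(10) == 20   # border 10 % -> overlap 20 %
assert tile_overlap_percent(25) == 50   # border 25 % -> overlap 50 %
```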

Physiology (Dynamics)

This module enables the analysis of physiological time series data with Ca2+ calibration, including mean ROI measurement. It supports imaging with single-wavelength (e.g. Fluo-4) and dual-wavelength dyes (e.g. Fura-2), allows ratio calculations, and offers flexible charting and image display as well as data table display with data export functionality. It also offers definable switches for online annotations, changing of acquisition speed, and freely configurable TTL triggers. Some functionality is generally available; for the full set of features you need the dedicated license for the module, see Licensing and Functionalities of Physiology.

Licensing and Functionalities of Physiology

Some basic functionality for physiology experiments is generally available in the software, but the full Physiology (Dynamics) functionality requires a license.

Basic functionality

The basic functionality is available for time series images and time series with z-stack images opened in the software (excluding ZEN lite), or multi-positions (scenes) where a full time series is collected sequentially at each position (Full time series per tile Region). Note that for acquiring time series images, you also need the license for the Time Series module. The functionality generally available in ZEN includes:

  • Using the MeanROI offline functions to specify user-defined measurement regions (ROIs) after acquisition of your time lapse experiment and analyze their time-dependent changes in intensity.
  • The functionality to display the intensity curves in charts or export the values in the form of tables.
  • Definable switches for online annotations and change of acquisition speed. Pausing and refocusing are possible via a live camera view.

Licensed Functionality

If you have licensed this functionality and activated it under Tools > Toolkit Manager, the additional functionality includes:

  • The option to calculate online/offline (during/post acquisition) ratios and display a ratio image.
  • Additional display layouts and analysis functions (ROI tracing etc.).

Workflow MeanROI View (Offline)

Adjusting ROIs for Time Points

If objects move laterally in the course of the time series, you can adjust the ROIs at each Time Point in order to follow the objects.

  1. You have defined at least one ROI.
  2. You are in the MeanROI view.
  1. Open the Dimensions tab in the general view options.
  2. Use the Time slider to scroll through the time points. Stop at the first time point at which you want to adjust a ROI.
  3. Open the ROI Tracing tab and activate the Enable ROI Tracing checkbox. To edit the position of a single key frame, change the Key Frame Edit Mode to single mode. Note that you can only select the key frame edit mode if the frame number is set to a value > 1.
  4. Adjust the position of the ROI using drag & drop. To do this, select the ROI in the image area by pressing the left mouse button. Then move the ROI to the new position and release the mouse button. Note that rectangular or contour ROIs can also be rotated.
  5. If needed, you can change the shape of a ROI by right-clicking on it and selecting Edit Points (e.g. for polygon contours). It is also possible to rotate a ROI.
    Note that if the area of the ROI changes, the mean intensity value will change. For ratio values in which thresholding is applied, only "valid" pixels will be respected (see Basics of Calculation of Intensity and Ratio Values). Ratios can only be performed if you have the module Physiology (Dynamics).
  6. Adjust the shape or rotation of the ROI by dragging the contour points or using the rotation handle.
  7. Changes to the position and shape of the ROIs are adopted for all subsequent time points.
  8. Repeat the previous steps for all other time points for which you want to adjust an ROI.
    For a selected ROI you can see a list of the time points at which its position/shape was modified. Since the distance (in frames) between key frames can vary, linear interpolation is used by default to progress the ROI smoothly through the intermediate time points. Alternatively, deactivate the interpolation (Constant) or switch to Spline if that better describes the movement of the object you are tracing; see the sketch after this list.
  1. You have successfully adjusted the measurement regions to the course of the experiment.
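As a rough illustration of how key-frame interpolation works (a minimal NumPy sketch with hypothetical key frames and coordinates, not ZEN code):

```python
import numpy as np

# Hypothetical key frames: frame index -> ROI center coordinates
key_frames = np.array([0, 10, 30])
key_x = np.array([50.0, 80.0, 95.0])
key_y = np.array([40.0, 42.0, 60.0])

# Linear interpolation of the ROI center for every frame in between
frames = np.arange(key_frames[0], key_frames[-1] + 1)
x = np.interp(frames, key_frames, key_x)
y = np.interp(frames, key_frames, key_y)

# A Spline-like progression could use e.g. scipy.interpolate.CubicSpline
# instead of np.interp; with Constant, no positions are interpolated and
# the ROI exists only at the key frames themselves.
```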

Adjusting the Display

Here you will find out how to adjust the display of the measured intensity values in charts and tables according to your wishes.

  1. You are in the MeanROI view or MeanROI Setup.
  2. You have defined at least one ROI.
  1. Select the Layout tab in the view options.
  2. To adjust the layout of the image and diagram display, select the desired display mode under Image and Chart.
  3. If you also want your data to be displayed in table form (note for MeanROI setup the table is only displayed after the acquisition is completed), select the desired display mode under Image and Chart with Table. With the Export tab, you can export the data table directly as a comma separated values file or create a separate data table document.
    Tables created in this manner contain additional information, for details see Basics of Calculation of Intensity and Ratio Values. The table in the MeanROI view only shows two values per ROI and time point, the mean intensity of pixels above the set threshold per channel and the mean ratio value derived from (common) "valid" pixels.
  4. In the MeanROI view (not the MeanROI setup) you can interact with a table in various ways. See Interaction with a Table in MeanROI View.
  5. You can determine the zoom (range) of the charts of all given channels (including Ratio charts) by clicking on the chart, selecting it, and scrolling with the mouse wheel.
  6. For Offline Analysis only: Select a suitable layout for the image, chart and table display.
  7. If you want to adjust the axis scaling (range), go to the Charts tab in the view options.
  8. To define the minimum and maximum values of the axes manually, click on the Fixed button under X-/Y-Axis.
  9. The Min and Max input fields for the axis are activated.
  10. Enter the desired values first into the Max and then the Min input fields.
  11. The minimum and maximum axis values of the diagrams are adjusted. Note that the Y-axis scaling can be adjusted individually for each chart.
  12. To change the unit of the X axis, click on the Fixed button under X Units.
  13. The dropdown menu for the units is activated. You can now select the desired unit.
  14. On the Layout tab, it is also possible to determine whether a given channel view and/or chart panel should be hidden. This is useful if these do not contain information that needs to be visible, thus increasing the space on the screen for the remaining items. For example, in many applications transmitted light is used to monitor the specimen, but the intensity information (chart) is not required. The controls behave in a similar manner to the channel toggles on the Dimensions tab.
  1. You have successfully adjusted the display of the intensity values.

Using Background Correction

Use this function to subtract background values from the measurement values. A background correction allows you to make a better comparison of the magnitude of any fluorescence intensity changes observed over the time course of an experiment. Determine the background value with the help of a Background ROI or define a fixed value. Note that the background correction for ROI is only available if there are at least two ROIs defined in the image!
If the ratio calculations are enabled, the background correction parameters are defined on the Ratio tab. The background correction values on the MeanROI tab are disabled, i.e. not used in this case.
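The correction itself is a straightforward subtraction, applied per channel and per time point. A minimal NumPy sketch with hypothetical values (not ZEN code) illustrating both the ROI and the Constant mode:

```python
import numpy as np

# Hypothetical mean intensities per time point (one channel)
roi_mean = np.array([210.0, 340.0, 520.0, 480.0])   # measurement ROI
bg_mean = np.array([55.0, 57.0, 54.0, 56.0])        # Background ROI, same channel

# ROI mode: subtract the background mean measured at the same
# time point in the same channel
corrected = roi_mean - bg_mean

# Constant mode: subtract a fixed user-defined value instead
corrected_const = roi_mean - 50.0
```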

Defining a Background ROI

  1. You are in the MeanROI view or MeanROI setup.
  1. At the desired time point of the time series, draw a ROI into a part of the image that contains only background signal in all channels.
  2. Go to the MeanROI tab > Background Correction section and activate the radio button ROI. Note that this function is only available if two or more ROIs are present.
  3. In the drop down, select the ROI-ID of the ROI that you defined in the first step.
  4. To edit the Background ROI, simply draw a new ROI and select it from the dropdown list.
  1. You have successfully defined a Background ROI. The mean intensity of the background ROI is subtracted from the measured values of the ROIs in a channel- and time-point-specific manner. The corrected values are adopted into all diagrams and tables.

Defining a fixed background value

  1. You are in the MeanROI View or MeanROI Setup.
  1. On the MeanROI tab in the Background Correction section select the Constant option.
  2. The associated input field is activated.
  3. Enter a fixed background value into the Constant input field.
  4. Press Enter to update the measurements.
  1. The defined background value is subtracted from all measured values of the ROIs in a time-point-specific manner.

Exporting a Data Table

  1. You are in the MeanROI view.
  2. You have defined at least one ROI.
  1. Select the Export tab in the View Options.
  2. In the Data Table section click on the Save As (*.csv) button.
  3. The Save As dialog opens.
  4. Enter a suitable file name, navigate to the desired folder and click on Save.
  5. All the measurement data are saved as comma-separated values in a *.csv file. For each channel and each ROI this file contains the time information, marker events, the geometric area of the ROIs, the area of the ROI adjusted by threshold (only with the Physiology (Dynamics) module), the mean intensity of the ROI, the mean intensity of the ROI adjusted by threshold (only with the Physiology (Dynamics) module), the mean ratio value (only with the Physiology (Dynamics) module), the focus position, and incubation events/values (if configured).

Calculating a Ratio for One Wavelength

  1. To calculate ratios (quotient of two fluorescence intensities) and display ratio images, you need the Physiology (Dynamics) module.
  2. You have a suitable image data set open.
  3. You are in the MeanROI view on the Ratio tab (view option).
  1. Activate the checkbox Enable Ratio Calculation.
  2. In the Method dropdown list, select the Single Wavelength (F/F0) entry.
  3. In the Calculation dropdown list select the channel for calculating the ratio.
  4. In the Reference image (Ft0) setup, define the frames of the time series image from which you want the reference value Ft0 to be calculated (see the sketch after these steps).
  5. Click on the Update button.
  6. The ratio values are calculated. The ratio image and a diagram for the ratio values are displayed in the MeanROI view. For very large images (pixels and time points) it might be necessary to use the Cache Ratio Image function on the Ratio tab, as this will eliminate flickering when playing through the images at speed.
  1. You have successfully calculated a ratio for a single wavelength dye such as Fluo-4.
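Conceptually, the single wavelength ratio is each time point's intensity divided by the averaged reference value. A minimal sketch with hypothetical values (not ZEN code):

```python
import numpy as np

# Hypothetical mean ROI intensities per time point for one channel
f = np.array([100.0, 102.0, 98.0, 250.0, 230.0, 180.0])

# Ft0: average fluorescence of the designated reference frames
# (here the first three frames, i.e. the pre-stimulus baseline)
ft0 = f[:3].mean()

# Single wavelength ratio trace F/F0 for every time point
ratio = f / ft0
```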

Basics of Calculation of Intensity and Ratio Values

The ratio calculation in ZEN blue MeanROI/Physiology functions in the following manner:

After you have set up the ratio to your satisfaction (background correction/thresholding), you can gather/ view all the results using the export functions found on the export tab in MeanROI view. You can also view a smaller table within the MeanROI view itself by activating the appropriate layout.

How are your threshold and background values handled for intensity and ratio measurements? If both are active, the threshold is applied before the background subtraction. For charts/tables the intensity values of any given ROI are handled as follows: if a pixel in the ROI is below the threshold value, it is ignored for the calculation of the mean value of the ROI. The background is then subtracted from that mean to give the corrected intensity value of the ROI (note that if no pixel remains valid after applying the threshold, the corrected mean intensity is always 0 and hence the ratio is also 0). In the case of the ratio image, each pixel is validated against the threshold value: if the pixel is above the threshold, its value is kept (i.e. it is valid); otherwise it is set to NaN (Not a Number, which is not the same as zero) and considered invalid. As before, the background correction is applied after the threshold. If a pixel value is NaN, the corresponding ratio pixel is NaN. If the pixel is still valid, the value pixel value - background value is used in the ratio calculation. If a negative value results, it is clipped to 0.
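A minimal NumPy sketch of the logic just described (illustrative only, not ZEN code):

```python
import numpy as np

def roi_corrected_mean(pixels: np.ndarray, threshold: float, background: float) -> float:
    """Chart/table logic: pixels at or below the threshold are ignored for the
    mean; the background is subtracted from the mean afterwards. If no pixel
    remains valid, the corrected mean (and hence the ratio) is 0."""
    valid = pixels[pixels > threshold]
    if valid.size == 0:
        return 0.0
    return float(valid.mean()) - background

def validate(pixels: np.ndarray, threshold: float, background: float) -> np.ndarray:
    """Ratio-image logic: pixels not above the threshold become NaN (invalid);
    the background is subtracted after thresholding; negatives are clipped to 0."""
    out = np.where(pixels > threshold, pixels.astype(float), np.nan)
    return np.clip(out - background, 0.0, None)

def ratio_image(ch1, ch2, thr1, thr2, bg1=0.0, bg2=0.0):
    """Per-pixel ratio; a NaN in either channel propagates to the ratio image."""
    return validate(ch1, thr1, bg1) / validate(ch2, thr2, bg2)
```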

The following example shows how a ratio value is generated based on the applied background and thresholding values:

Consider a region of interest that is 6 pixels wide by 1 pixel high. The pixels of the region in the Wavelength 1 image are as follows:
[50, 75, 100, 125, 150, 175].

For the purpose of this example, assume the threshold = 60 for wavelength 1.

ZEN thresholds Wavelength 1 to obtain:
[--, 75, 100, 125, 150, 175].

The pixels of the region in the Wavelength 2 image are as follows:
[25, 25, 25, 25, 100, 100].

For the purpose of the example, assume the threshold = 50. ZEN thresholds Wavelength 2 to obtain:
[---, ---, --- , ---, 100, 100].

ZEN computes the ratio by ratioing only the averaged values of the valid pixels in each individual wavelength. To recap, the pixels for each wavelength were:

[--, 75, 100, 125, 150, 175] (Wavelength 1)
[---, ---, --- , ---, 100, 100] (Wavelength 2)

The ratio value of this region is calculated by taking into account only the common area, i.e. the set of pixels that are valid in both wavelengths (in this example only 2 “valid” pixels).

Using the threshold values as above, the pixels that are used to calculate the ratio average are:
150/100
175/100
which gives a ratio of 1.625.
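The numbers of this example can be reproduced in a few lines (plain NumPy sketch; note that here ratioing the averaged values and averaging the per-pixel ratios both give 1.625):

```python
import numpy as np

w1 = np.array([50, 75, 100, 125, 150, 175], dtype=float)  # Wavelength 1
w2 = np.array([25, 25, 25, 25, 100, 100], dtype=float)    # Wavelength 2

# A pixel contributes only if it is valid (above threshold) in BOTH wavelengths
common = (w1 > 60) & (w2 > 50)

# Ratio of the averaged valid values: 162.5 / 100 = 1.625
print(w1[common].mean() / w2[common].mean())
```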

To get an overview of all the results, including the original values not corrected for their validity in this manner, use the data table creation function in the MeanROI export tab. This opens, for example, a new document or allows you to export the results as a *.csv file. This data table/ *.csv includes for each ROI the following information/measurements:

ID in table header

Description

<Channel name>_<Region ID>_Area

Geometric area of ROI (constant for both channels).

<Channel name>_<Region ID>_IntensityAreaThrs

Threshold corrected area within ROI for given channel.

<Channel name>_<Region ID>_IntensityMean

Mean intensity of pixels within the geometric area of the ROI.

<Channel name>_<Region ID>_IntensityMeanThrs

Mean intensity of pixels above the set threshold for the given channel.

Ratio <Region ID>

Mean Ratio value derived from common “valid” pixels.

This is repeated for the second channel, and at the very end you will find the Ratio value. Thus, the threshold corrected values are provided for each channel (mean intensity and the corresponding area from which this is derived) as well as the ratio value for the common valid pixels. Relative time, markers, focus values and parameters from the incubation (if configured) are also listed. In the embedded table in Mean ROI view you will find a summary that gives the following values: Mean intensity of pixels above the set threshold for each channel (wavelength) and the corresponding ratio values. No Area values are given here.
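If you process the exported *.csv downstream, the column IDs above make it easy to extract individual measurements. A minimal pandas sketch; the file name and the channel/region IDs are placeholders, the column pattern follows the table above:

```python
import pandas as pd

# Hypothetical file name; the column IDs follow the pattern in the table above
df = pd.read_csv("physiology_export.csv")

# Threshold-corrected mean intensity trace of one ROI in one channel
# ("Fura-2 340 nm" and "R1" are placeholder channel/region IDs)
trace = df["Fura-2 340 nm_R1_IntensityMeanThrs"]

# Mean ratio values derived from the common "valid" pixels of that ROI
ratio = df["Ratio R1"]
```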

Interaction with a Table in MeanROI View

In the MeanROI view (not the MeanROI setup) you can interact with a table in the following ways:

  • Scroll vertically or horizontally. Time values are given at the far left, and at the far right any markers are shown at the time point at which they were created. In between you find the values for each ROI in the first channel, then the second, and so on, followed by the ratio values.
  • If you click on a column header, you select the entire column, and in the charts the trace corresponding to this ROI is highlighted by a thicker line. Multiple columns can be selected by pressing and holding the Shift key.
  • If you click any given value in a column, not only will the trace of the corresponding ROI be highlighted, the images that correspond to this time point will be displayed, and the playhead (vertical blue line) of all charts will synchronize to this time point. This allows quick and easy examination of the data/events.

Workflow Physiology (Dynamics) Experiments

If you own the Physiology (Dynamics) module you can use the MeanROI setup to specify user-defined measurement regions (ROIs) before the acquisition of your time lapse experiment and analyze their time-dependent changes in intensity online during acquisition. Ratios can also be calculated and displayed online - these are the typical functions used in physiology/calcium Fura-2 applications.

Before the experiment

A precondition for a physiology experiment is a Time Series experiment (which can include a z-stack acquisition), which is set up in the Time Series tool. Adding a time dimension to your experiment allows you to activate the optional tool Dynamics. This tool contains the button for opening MeanROI Setup. Here you can draw in ROIs and adjust the display layout of the measurement results. Note that when the setup is opened, a snap is automatically acquired, on the basis of which you can configure the settings for the subsequent experiment, such as the ratio parameters. The structure of the MeanROI setup is based on the MeanROI view, making it easier to learn. Note that online measurements on tiles and positions experiments are not possible, but you can perform measurements on tiles or position data collected over time (i.e. multi-scene time series) in the MeanROI view post-acquisition.

During the experiment

After being started, physiology experiments are displayed in the online mode of the MeanROI View. This allows you to analyze and follow the experiment during acquisition. The structure and options largely correspond to the offline mode of the MeanROI View. We therefore recommend that you familiarize yourself with the MeanROI View (offline) before performing your Physiology experiment.

After the experiment

After you have performed your Physiology experiment the data are displayed in the offline mode of the MeanROI view and can be analyzed, processed and exported there. For more information, see also Workflow MeanROI View (Offline).

  1. To perform physiology experiments, you need the Physiology (Dynamics) module.
  2. You have created a new experiment, defined at least one channel and adjusted the focus and exposure time, see also Set up a new experiment and Set up multi-channel experiments.
  3. You have licensed this functionality and activated it under Tools > Toolkit Manager.
  4. You are on the Acquisition tab.
  1. Activate Time Series in the Acquisition Dimensions section.
  2. The Time Series tool is displayed in the Left Tool Area under Multidimensional Acquisition.
  3. Activate Dynamics in the experiment manager.
  4. The Dynamics tool is now displayed in the Left Tool Area under Applications.
  5. Note that the tool is not available if the Tiles or Panorama dimensions are activated. Deactivate these dimensions to make the tool available.
  6. Set up a time series experiment, see Acquiring Time Series Images.
  7. Open the Dynamics tool.
  8. Click MeanROI Setup.
  1. You have completed the general prerequisites for Physiology experiments.

Setting up an Experiment in MeanROI Setup

  1. You have read the Workflow Physiology (Dynamics) Experiments chapter.
  1. Activate the Dynamics checkbox in the Experiment Manager.
  2. In the Dynamics tool, click on the MeanROI Setup button.
  3. MeanROI setup opens.
  4. An image is acquired automatically on the basis of which you can configure your settings. You can click on Snap at any time to update the image.

Drawing in ROIs

  1. You are in the MeanROI View or in the MeanROI Setup on Acquisition tab.
  1. Go to the Graphics tab in the View Options.
  2. Select a tool for drawing in ROIs, e.g. the Polygon tool.
  3. Activate the Keep tool checkbox.
  4. The selected tool remains active after you have drawn in an ROI. This means you can draw in several ROIs without having to re-select the tool.
  5. Using the selected tool, in the image view draw in the objects or regions (ROIs) for which intensity measurements are required.
  6. The ROIs are displayed in the list (Annotations/ Measurements Layer) on the Graphics tab.
  7. Intensity measurements are performed for each ROI and displayed in the chart area to the right of the image view.
  1. You have successfully defined measurement regions for the intensity measurement.

Measurement time

Note that the time taken to initially create measurements will vary as some data is cached to memory. Thus, when a long time series image is opened that already contains ROIs, you might have to wait briefly until ZEN completes its measurements. The duration depends on, for example, the number of ROIs, the number of time points, and the image size (number of pixels).

Using Background Correction

Use this function to subtract background values from the measurement values. A background correction allows you to make a better comparison of the magnitude of any fluorescence intensity changes observed over the time course of an experiment. Determine the background value with the help of a Background ROI or define a fixed value. Note that the background correction for ROI is only available if there are at least two ROIs defined in the image!
If the ratio calculations are enabled, the background correction parameters are defined on the Ratio tab. The background correction values on the MeanROI tab are disabled, i.e. not used in this case.

Defining a Background ROI

  1. You are in the MeanROI view or MeanROI setup.
  1. At the desired time point of the time series, draw a ROI into a part of the image that contains only background signal in all channels.
  2. Go to the MeanROI tab > Background Correction section and activate the radio button ROI. Note that this function is only available if two or more ROIs are present.
  3. In the drop down, select the ROI-ID of the ROI that you defined in the first step.
  4. To edit the Background ROI, simply draw a new ROI and select it from the dropdown list.
  1. You have successfully defined a Background ROI. The mean intensity of the background ROI is subtracted from the measured values of the ROIs in a channel- and time-point-specific manner. The corrected values are adopted into all diagrams and tables.

Defining a fixed background value

  1. You are in the MeanROI View or MeanROI Setup.
  1. On the MeanROI tab in the Background Correction section select the Constant option.
  2. The associated input field is activated.
  3. Enter a fixed background value into the Constant input field.
  4. Press Enter to update the measurements.
  1. The defined background value is subtracted from all measured values of the ROIs in a time-point-specific manner.

Starting and Influencing an Experiment

  1. You have read the Workflow Physiology (Dynamics) Experiments chapter and set up an experiment in MeanROI Setup.
  2. You are on the Acquisition tab.
  1. Start your Physiology experiment by clicking on the Start Experiment button.
  2. The time series experiment is started. The MeanROI View (online) opens and displays the current images and the intensity curves for each ROI measured online. The intensity curves are displayed in the Time Line View and in the diagrams. Note that the MeanROI view only starts displaying at the third time point, which is noticeable when the interval time is longer. This display delay should thus fall within the typical baseline of this type of experiment, i.e. prior to the first stimulus of the sample.
  3. You can pause the experiment at any time by clicking on the Pause Experiment button and continue it again by clicking on the Continue Experiment button.
  4. The focus can be adjusted during the experiment. To avoid acquiring out-of-focus images, pause your experiment, use the Live acquisition button to adjust the focus, and then continue the experiment. Note that the Live view only works for experiments run in interactive mode; it is not possible in triggered acquisition scenarios.
  5. Adjust the display of the intensity values during the experiment by changing the settings on the Layout or Charts tab. The unit of the X-axis cannot be changed during the experiment.
  6. You can move and change ROIs during acquisition. The changes are adopted for all time points, see Drawing in and adjusting ROIs. Note that ROI tracing functions (these allow objects to be followed in XY) are only available after an acquisition.
  7. Activate Switches in the Time Series tool during the experiment to perform the corresponding actions.
  8. Various events, such as the activation of switches or the pausing of the experiment, are labeled in the Time Line view by markers.
  9. On the Dimensions tab deactivate the Follow Acquisition checkbox to analyze the data acquired up to that point. To do this, select the corresponding time points using the Time slider, the diagram sliders or the Time Line view slider in the MeanROI view.
  10. Change the size of the area marked in blue in the Time Line View to adjust the section displayed in the charts (time axis).
  1. You have successfully started the experiment, analyzed it online and influenced it.

Adjusting ROIs during Experiments

If objects move laterally in the course of the experiment, you can adjust the ROIs at any time during the experiment in order to follow the objects.

  1. You have defined at least one ROI.
  2. You have started your Physiology experiment.
  1. In the Experiment Manager, click on the Pause Experiment button.
  2. Adjust the position of the ROI using drag & drop. To do this, select the ROI in the image area by left-clicking and hold the mouse button down. Then move the ROI to the new position and release the mouse button.
  3. To change the shape of an ROI, left-click it and drag the bounds to adjust the size.
  4. Changes to the position and shape of the ROIs are adopted for all time points.
  5. Repeat the previous steps for all subsequent ROIs that you wish to adjust. Note that you can select multiple ROIs and adjust all the positions simultaneously.
  1. You have successfully adjusted the measurement regions (ROIs) to the course of the experiment.

Sample Experiment Fura-2 with DG4/5

Step 1: Creating Channels

  1. To perform the experiment, you need the Physiology (Dynamics) module.
  2. You have a Sutter DG4/5 with appropriate excitation filters for Fura-2 and a Fura-2 filter set in the microscope's reflector wheel.
  3. You are on the Acquisition tab.
  1. Create a new experiment in the Experiment Manager, e.g. "Physiology Fura-2".
  2. Add the channel Fura-2 using Smart Setup.
  3. Activate the Time Series checkbox in the acquisition dimensions.
  4. Open the Channels tool.
  5. Select the Fura-2 channel from the list.
  6. Click on Options and select Duplicate.
  7. Select the first Fura-2 channel from the list.
  8. Click on the Options and select Rename.
  9. You can now rename the channel, e.g. Fura-2 340 nm.
  10. Repeat steps 7 and 8 to rename the second channel, e.g. Fura-2 380 nm.
  11. Select the Fura-2 380 nm channel.
  12. Select another LUT from the dropdown list, e.g. red.
  13. Select the entry 21 HE Ex. FURA 380 from the Excitation dropdown list.
  14. The excitation filter is used for this channel.
  15. Adjust the exposure time and focus for both channels.
  1. You have created the channels for your experiment.

Step 3: Setting Up an Online Ratio

  1. Open the MeanROI setup from inside the Dynamics tool.
  2. The MeanROI setup opens, and snaps of the configured channels are acquired automatically and displayed in the Center Screen Area. The diagrams for each image are displayed to the right of this.
  3. Select the Online Ratio tab from the view options and activate live ratio generation.
  4. Under Method select the Dual Wavelength entry from the dropdown list.
  5. Under Calculation select the Fura-2 340 nm entry from the dropdown list in the numerator of the formula.
  6. Under Calculation select the Fura-2 380 nm entry from the dropdown list in the denominator of the formula.
  7. A preview of the ratio image, which is calculated according to the ratio settings, is displayed.
  1. You have successfully activated the ratio functions and specified the calculation of the ratio.

Step 4: MeanROI Setup

  1. Open the MeanROI setup from inside the Dynamics tool.
  2. On the Graphics tab, select a tool for drawing in ROIs, e.g. Circle.
  3. Activate the Keep Tool checkbox.
  4. Draw your ROIs into one of the images.
  5. Deactivate the Keep Tool checkbox and select the selection tool (arrow) again.
  6. On the Layouts tab select a layout for the image and diagram display, e.g. multichannel image and single channel charts.
  7. Go to Charts tab and click on the Fixed button under X Units and select a unit from the dropdown list, e.g. seconds.
  8. Click on Exit at the top left of MeanROI Setup.
  1. You have successfully configured and adjusted the MeanROI Setup.

Step 5: Starting, Analyzing and Influencing an Experiment

  1. Start the experiment by clicking on the Start Experiment button.
  2. The experiment is started. In our example an image is acquired every second for a period of 10 minutes. The experiment opens in the online mode of the MeanROI View, which displays the current images and measurements.
  3. Activate the created switch at the desired time point. To do this, open the Switches section in the Time Series tool. Click on a switch as soon as you want its action to be performed, e.g. click on the "Fast" switch to acquire the subsequent images as quickly as possible one after the other. A marker will mark the time point at which the switch was activated on the X axis in the color of the switch (e.g. blue).
  4. Once the time series has been completed you can analyze the experiment in the offline mode of the MeanROI view, process it and export its values.
  1. You have successfully performed the experiment.

Mean ROI View

In the Mean ROI view you can draw ROIs and measure their intensity profile after acquiring time series experiments. The intensity profiles are displayed as charts and can be exported to data tables.

  1. The Physiology (Dynamics) module adds features beyond those of MeanROI for the offline analysis of physiology experiments, e.g. ratio functions and ROI tracing.
  2. In this view the Image area is always on the left and the charting area always on the right. Depending on which Region layout you have selected in the Layout tab, the MeanROI view can have a different appearance.

1

Image area
Here you see the images for each channel of the time series and the ratio image (if the ratio calculation is enabled). The display of images can be adapted in the Layout tab.

2

Charting Area
Here you see the charts for the values of all channels selected in the Layout tab as well as for the ratio calculation (if it is enabled on the Ratio tab). If a ROI is selected in the image area on the left, the corresponding plot line is highlighted (the plot line is thicker) in the charts. The lines in the merged channel image can either have the color of the ROI, or the color of the channel (option available on the Charts tab).

Playhead (blue line)
Indicates the current frame of the time series visible in the image panel(s). The position is synchronized with the displayed image frame number (and vice versa) and can be moved via drag & drop. The current time point of the visible frame is displayed to the right of the playhead line in the same time unit as the x-axis of the chart.

3

Table
Displays the values for all channels and regions at the different time points, as well as the temperature, focus (if present in the image metadata) and information about markers. If you deactivate a channel in the Visible Charts section of the Layout tab, the corresponding columns are hidden in the table.
This table is synchronized with the image view and the charts. If you select a field in the table, the corresponding ROI is selected in the image view and the charts (playheads) are updated accordingly.

4

Time Line View
This view is activated in the Layout tab. The chart supports similar functions as the other charts in MeanROI. Here you can limit the time range with the zoom functionality. The actions in this chart are synchronized with the others in the MeanROI view.

5

View Options
Here you have your standard view options as well as specific options for MeanROI, for example for the Layout or the calculation of the Ratio.

Hover over the plot with the mouse (crosshair). A tool tip appears with details of the intensity value at this position, ROI ID #, channel, and time point (in the currently set time unit of the x-axis). Note that these values (intensity and time) are interpolated. You can visualize the time points along a plot by activating the Show Tick Marks function on the Charts tab.

Mean ROI Tab

Parameter

Description

Background Correction

If you have activated the Live Ratio Generation in the Online Ratio tab, or the Ratio Calculation in the Ratio tab, the background correction is disabled here and only visible in the Online Ratio/Ratio tab. Also note that the correction for ROI is only available if there are at least two ROIs defined in the image!
The following modes are available:

-

None

No background correction is performed.

-

Constant

Allows a user-defined numeric value to be entered for both channels in the spin box.

-

ROI

Allows you to select the background ROI; the determined value will be channel-specific.

Layout and Charts Default Settings

-

Define Default

Defines the current layout and chart setup as the default. The layout and charts can be changed in the Layout tab.

-

Apply Default

Displays the default setup for layout and charts.

Restart Measurements

Only visible if the measurements calculation was canceled.
Restarts the measurements calculation.

Layout Tab

Parameter

Description

MeanROI View Layouts

Adjusts how the images, charts, and the table will be displayed.

-

Image and Chart

Selects one of three different layouts of how an image together with a chart will be displayed. If you click on one of the buttons the layout will be changed.

-

Image and Chart with Table

Selects one of three different layouts of how an image and a chart together with a table will be displayed. If you click on one of the buttons the layout will be changed.

Visible Views

Selects the channels for which the image should be displayed in the image area.

-

Single Channel View

Activated: Only one channel can be selected whose image is displayed.
Deactivated: You can manually switch on/off channels whose images should (not) be displayed.

Visible Charts

Selects the channels for which the charts and table columns should be displayed in the chart area and the table.

-

Single Channel View

Activated: Only one channel can be selected whose chart and information is displayed.
Deactivated: You can manually switch on/off channels whose charts and information should (not) be displayed.

Synchronize Charts and Channels

Activated: Synchronizes the chart and channel settings.

Show Markers/Switches

Activated: The temporal position of any switches and markers is always displayed on the charts, both during and after acquisition.

Show Time Line View

Only visible if you have licensed the Physiology (Dynamics) module.
Activated: The Time Line View panel is displayed below the other image chart panels of the Center Screen Area. The Time Line View panel is designed to provide an overview of the experiment whilst allowing the user to examine the detail displayed in the other chart panels by means of an integrated zoom tool. The Time Line View can be hidden by deselecting the checkbox as required, both during and after an experiment.

Show View Captions

Activated: Displays the channel name clearly with the image of each channel in the multichannel view layout.
Deactivated: Hides the channel name of each image in the image view.

Charts Tab

Parameter

Description

All Chart Settings
(X-/Y-Axis)

Note that a function is active when the button is highlighted in blue.

The settings for the X- and Y-Axis (only if Show All is activated) are the same, see description below. The Y-axis settings are always applied to the selected chart. The currently selected chart name (channel) is displayed above the Y-axis settings.

-

Auto

The scaling of the respective axis is automatic, so that all values are displayed optimally.

-

Fixed

The upper and lower limit of the axis can be defined using the min and max spin boxes.

X-Units

-

Auto

The units are selected automatically.

-

Fixed

You can select the desired unit for the x-axis from the dropdown list.

Show Tick Marks

Activated: Displays tick marks in the chart. You can set the Form and Size of the tick marks. The tick marks have to be set per chart.

Show Legend

Activated: Displays the chart legend.

Show Axis Captions

Activated: Displays captions of the axis.

Use Channel Color

Only visible if you display the merged channels chart.
Activated: Displays the lines in the chart in the colors of the channel. If a channel has no color defined, the chart line is displayed in white. The colors are synchronized with the channel settings, i.e. they update accordingly if the channel colors are changed.

Export Tab

Parameter

Description

Data Table

-

As New Document

Opens the measurement data table in a new document tab. The table displays all measurement values and area for all ROIs in each channel. If event markers are present, these are also listed here at the appropriate time points. For a description of the measured parameters, see Basics of Calculation of Intensity and Ratio Values.

-

Save as *.csv

Opens the Save As dialog and allows the measurement data to be exported as a comma separated value (*.csv) file. The following values are exported for each ROI and channel: Intensity, area and if present event markers. The exported values are the raw data without the subtraction of any background correction.

Ratio image

-

As New Document

Opens the ratio image in a separate new document as a *.czi file (current Z only).

-

Save as

Opens the Save As dialogue to save the ratio image directly to a *.czi file.

Online Ratio Tab

Parameter

Description

Calculation Dropdown

Selects the ratiometric method you want to use. Single and dual wavelength dyes are supported, plus three additional formulas for further adapted (online or offline) image ratio calculations. The ratio set-up changes in accordance with your selection.

-

Single Wavelength

Select the channel in the dropdown menu. The Ft0 value is the averaged fluorescence from the specified number of image frames. The number of frames to average is defined in the spin box of the reference image set-up. The spin box at the far left is a multiplication factor.

-

Dual Wavelength

Select the channels required to calculate the ratio values/image in the dropdown lists, e.g. for Fura-2, a dual excitation dye, the numerator is the 340 nm image and the denominator the 380 nm image. For dual emission dyes the function is identical. The spin box at the far right is a multiplication factor.

-

Image Ratio Type 2

The formula calculates the normalized ratio of the difference between two weighted channel intensities.

-

Image Ratio Type 3

The formula calculates the ratio between the weighted difference and the weighted sum of two channel intensities.

-

Image Ratio Type 4

The formula calculates the ratio between the intensity difference of two channels in relation to the intensity of one channel.

Background Correction

A background correction can be performed on a channel-by-channel basis. The selection of a background correction method modifies the ratio set-up formula accordingly. Note that the correction by ROI is only available if there are at least two ROI defined in the image! The following modes are available:

-

None

No background correction is performed.

-

Constant

Allows a user-defined numeric value to be entered for each channel in the appropriate spin box.

-

ROI

Allows you to select the background ROI defined in the Mean ROI view/setup.

Note that for dual wavelength protocols the same ROI is used in each case, but its channel specific values are applied for the correction.

For the ratio types Single Wavelength, Image Ratio Type 2, 3, and 4, no ROI background correction is available.

Ratio Clipping

Activated: Sets the factor for ratio clipping.

Color

Selects the color (LUT) used to display the ratio image. By default, the Rainbow LUT is used as it allows intensity changes to be followed easily.

Enable Threshold

Activated: Allows the threshold values to be set for the ratio calculation.

-

Channel/Threshold

A threshold value can be applied in the form of a constant integer value for each channel individually. Thresholds help to reduce noise anomalies that are caused by pixel-to-pixel variations in areas between cells or near cell borders during the ratio calculation. Enter the desired threshold value for each channel into the spin boxes provided. For more detailed information on how ZEN handles thresholds, see Basics of Calculation of Intensity and Ratio Values.

ROI Tracing Tab

This tab is only available if you have licensed the Physiology (Dynamics) module.
ROI tracing allows you to adjust the position of your ROI to follow the lateral movement of an object whose mean intensity is to be measured. This is done by defining a series of one or more so-called key frames for individual ROIs. In this manner, complex object movements can be corrected.

Parameter

Description

Enable ROI Tracing

Enables the functionality for ROI tracing.

Selected ROI

Displays the number and shape of the currently selected ROI.

Key Frame Edit Mode

Only available if you have selected a ROI.


All Key Frames

Manipulates the ROI for all time points/key frames.


Single Key Frame

Manipulates the ROI for the currently selected time point and creates a key frame. Note that a single key frame adjustment can only be performed when the frame number (time point) is set to 2 or higher.

Interpolation

Only available if you have selected a ROI.
Selects an interpolation method for the ROI changes between the time points.

Constant

Does not interpolate the ROI position between the key frames, i.e. the ROI is only present at the set key frames.

Linear

Determines the ROI position at the time points between key frames based on linear interpolation.

Spline

Determines the ROI position at the time points between key frames based on spline interpolation.

Key frame list

Displays the key frames of the ROI and the changes compared to the previous key frame.


Add

Adds the current position as key frame.


Delete

Deletes the currently selected key frame.

Show

Trajectories

Activated: Displays the trajectories between the key frames in the image.

Ticks

Activated: Displays ticks for the (center) position of the ROI for each time point in the image. These ticks are only visible if Linear or Spline is selected as the interpolation method.

Ghosted key frames

Activated: Displays the shape of the ROI at each key frame.
Deactivated: Displays the shape of the ROI only at the currently selected time point.

All ghosted

Only available if Ghosted key frames is activated.
Activated: Displays the shape of the ROI for all time points. This effect is only visible if Linear or Spline is selected as interpolation method.

FRAP Efficiency Analysis

This module enables you to analyze time series acquisitions with bleach events to determine the half time of recovery/decrease of fluorescent signals. It supports mono- and bi-exponential fit algorithms, including options for background correction and correction of imaging-induced photobleaching. You can also evaluate grouped regions of interest. Additionally, you can determine the fading factor from a reference region (Ref.) from the present experiment or a control experiment and reuse it for subsequent experiments.
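For orientation, the half time of recovery follows directly from the fitted rate constant. A generic mono-exponential sketch using SciPy with synthetic data (an illustration of the fitting idea, not ZEN's actual implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, f0, a, k):
    """Mono-exponential recovery: F(t) = f0 + a * (1 - exp(-k * t))."""
    return f0 + a * (1.0 - np.exp(-k * t))

# Synthetic post-bleach recovery curve (hypothetical values)
t = np.linspace(0.0, 60.0, 121)
rng = np.random.default_rng(0)
data = mono_exp(t, 0.2, 0.6, 0.15) + rng.normal(0.0, 0.01, t.size)

# Fit the model and derive the half time of recovery
(f0, a, k), _ = curve_fit(mono_exp, t, data, p0=(0.2, 0.5, 0.1))
half_time = np.log(2.0) / k   # time to reach half of the recovered plateau
print(f"half time of recovery: {half_time:.2f} s")
```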

Automated Photomanipulation

This module is exclusively available for the Celldiscoverer 7 and allows automated photoactivation and bleaching at multiple positions. It is not applicable to Tile Regions. Using this module, the system executes the following experiment steps without user interaction:

  • Acquisition of a multi-position image as defined in the Tiles Tool.
  • Identification of the photomanipulation ROIs based on a customized image analysis that was defined beforehand in the Image Analysis Wizard.
  • Photomanipulation experiment as defined for Bleaching and in the Time Series Tool.

For Automated Photomanipulation, you need to license the function and activate it under Tools > Toolkit Manager. The tool for the module needs to be displayed on the Applications tab.

Using Automated Photomanipulation Settings

Automated Photomanipulation offers you the possibility to save your whole experiment setup in a settings file.

Creating an Automated Photomanipulation setting

  1. On the Applications tab, open the Automated Photomanipulation tool.
  2. Click on Options and select New.
  3. Name the setting and press Enter to confirm.
  1. You have created a setting for Automated Photomanipulation.

Saving an Automated Photomanipulation setting

When you have set up your Automated Photomanipulation experiment and created a setting, you can save the setup as a setting.

  1. Click on Options and select Save.
  1. Your experiment setup is now saved.

Importing and exporting an Automated Photomanipulation setting

  1. On the Applications tab, open the Automated Photomanipulation tool.
  2. Click on Options and select Import or Export.
  3. A file browser opens.
  4. Select the file you want to import or the folder where you want to export the setting to.
  5. Click on Open and/or Save.
  1. You have now imported/exported a setting.

Deleting an Automated Photomanipulation setting

  1. On the Applications tab, open the Automated Photomanipulation tool.
  2. Select the setting you want to delete in the drop-down list.
  3. Click on Options and select Delete.
  4. Confirm that you want to delete the file.
  1. The selected setting is deleted.

ZEN Connect

This module enables you to work with images from multiple sources: zoom in from the full macroscopic view of your sample down to nanoscale details. The Correlative Workspace (CWS) is the efficient way to analyze and correlate images from multiple sources. You can manage, correct, and align these images in 2D as well as in 3D. It works with images from SEM, FIB-SEM, X-ray, light microscopes and any optical images, e.g., from your digital camera. Its sample-centric workspace lets you build a seamless multimodal, multiscale picture of your sample. Use it to guide further investigations and target additional acquisitions.

The module employs a novel graphical user interface concept that makes it easy to investigate all your samples. Design a workflow tailored precisely to the complexity of your experiment, no matter whether it’s a simple task or a compound experiment. A sophisticated workflow environment guides you all the way from the setup for automated acquisition to post processing and customized exports, and right on through to analysis.

Licensing and Functionalities of ZEN Connect

For working with ZEN Connect projects or images, you might need a separate license. The basic ZEN Connect functionality is available for all versions. This functionality includes:

  • ZEN Connect correlative workspace, including the display of images with their relations.
  • Manual alignment of captured images.
  • Auto-registration of images using stage coordinates.
  • Image acquisition into the project.
  • Import of images into the correlative workspace.
  • Interactive control of stage movement from the correlative workspace.

Licensed Functionality

If you have the necessary license, additional functionality for 2D and 3D work is available.

Additional 2D functionality:

  • Export of merged project view as image.
  • Movie export as fly-through videos.
  • Import of third-party microscopy images powered by Bio-Formats.
  • SerialEM export.
  • Adding of measurements to the ZEN Connect project.
  • Two dedicated wizards to support manual alignment and 2D point alignment.
  • S&F calibration, see also Shuttle & Find.
  • Definition of regions of interest in the correlative workspace.
  • Retrieval of defined regions of interest.

Additional 3D functionality:

  • Control of the displayed z-position in ZEN Connect.
  • Alignment of images in z-dimension.
  • Viewing of two 3D stacks (provided you have the necessary licenses).
  • Alignment of two 3D stacks in x, y and z (provided you have the necessary licenses).
  • Import FIB stacks.
  • Alignment wizard for 3D Point alignment.

ZEN Connect 3D View

The 3D view for ZEN Connect allows you to see and align two 3D volumes from your project. The viewer and its functionality are based on the Tomo3D view, see Tomo3D View. For this functionality, you also need the license for the 3D Toolkit.

1

Image Views
Area where you interact with the image views, set the cut lines with the mouse and align the adjustable image in the 2D views. You can display up to four different views, including a 3D and different 2D views. The number of image views can be set in the Ortho Display tab, the type can be set by the dropdown in the top left corner of each view. For further information on the functionality, see Tomo3D View.

2

ZEN Connect Tool
The standard ZEN Connect tool where you manage your projects and contained images, see ZEN Connect Tool.

3

View Options
Area for general and the Tomo3D specific view options as well as the dedicated Alignment and Connect 3D tabs, see Alignment Tab ZEN Connect 3D and Connect 3D Tab.

Connect 3D Tab

Parameter

Description

Image Table

Displays information for the volumes currently opened in the 3D view.

#

Displays the number of the volume.


Visibility

Displays and toggles the visibility of the respective volume.

Name

Displays the name of the volume. A checkmark in front of the Name column indicates that this volume is currently selected for alignment.

Align Volume 1

Starts the alignment mode to align volume 1.

Align Volume 2

Starts the alignment mode to align volume 2.

Opening the Correlative Workspace

  1. Start the software. For more information, see Starting Software.
  1. The software opens and ZEN Connect is available.

Note that before working with ZEN Connect, you need to create a ZEN Connect project. For more information, see Creating a ZEN Connect Project.

Non Image Data

In ZEN Connect, it is also possible to import non-image data into your project, have a visual representation (marker) of it in the image area, and align the position of the data marker with respect to the images in the project. The data is listed under Non-Image Data in the tree of the ZEN Connect tool and represented by the marker in the image area. By default, the marker for this non-image data is toggled invisible. To toggle data visible and invisible, see also Moving or Hiding Images.

Project and Image Management

Creating a ZEN Connect Project

Within a ZEN Connect project, in the ZEN Connect tree view, you manage your data in a project structure tree combined with the viewer. Before acquiring or importing any images, you need to create the ZEN Connect project. Only within a ZEN Connect project can you use all ZEN Connect functionality.

You can open only one ZEN Connect project at a time.

  1. You have set up the sample on the microscope.
  1. In the ZEN Connect tool, click Create.
  2. The New Document Dialog dialog opens.
  3. In the New ZEN Connect Project Setup area, select the Project Path where you want to store the ZEN Connect project file.
  4. Select the relevant data to configure the ZEN Connect project. If you select the holder/carrier now, you can change it later. Note: If you change the holder/carrier after a S&F calibration, the S&F calibration needs to be redone.
  5. Click on OK.
  6. The ZEN Connect project is created with the project file name <Projectname>.a5proj. Note that all new images are saved in the subordinate folder <Projectname_data>. You can always check the path on the Acquisition tab in the Auto Save tool. Alternatively, right-click on the project container in the Center Screen Area and select Open Containing Folder.
  7. In the Image View, the sample holder is displayed.
  8. In the ZEN Connect tool, the empty ZEN Connect project is displayed. Here, the structure of the ZEN Connect project will be displayed as soon as you acquire or import images. In the folder on your computer, the ZEN Connect project file <CWS project name>.a5proj is generated. A <ZEN Connect project name>.a5lock file is generated to prevent more than one user from working on the project at the same time. It is generated whenever you load a ZEN Connect project.
  9. At the bottom of the Image View, a scale bar with size, the width of the field of view (FOV), and scaling is displayed.
  10. You have created a ZEN Connect project.
  11. Acquire an image.
  12. In the Project View, a new session node is created, and each acquisition is displayed.
  13. In the Image View, all images are displayed. They are marked with a colored frame (blue: normal image, red: selected image).
  14. When you close the project or the software, you are prompted to save the project file.

For information on setting up holders and carriers, see Selecting and Clearing Carrier/Holder.
For information on the ZEN Connect tool, see ZEN Connect Tool.

Loading a ZEN Connect Project

You can load any of your ZEN Connect projects to continue with your work. You can also load existing ZEISS Atlas 5 projects.

Open first the ZEN Connect project before performing a S&F calibration.

  1. You have created a ZEN Connect project, or a ZEISS Atlas 5 project is available. ZEISS Atlas 5 projects belong to the ZEISS ATLAS 5 software. ZEN Connect supports these formats.
  1. Select File > Open, navigate to the ZEN Connect project and open it.
  1. In the ZEN Connect Project View, the current state of the Connect project is displayed. In the Image View, the sample holders are marked, and previously acquired images are displayed. The current stage position is marked with a cross hair. If you want to acquire additional images to the project, align the new session with the existing data.

Renaming Images in a ZEN Connect Project

  1. You have loaded a ZEN Connect project.
  1. In the Project view or in the Layer view, select an image to rename.
  2. Right-click, select Rename data and rename the image, or press the F2 key. Alternatively, you can double-click the image name to rename it, or click on the Context menu button and select Rename data.
  1. You have renamed the image. The name is updated in the Layer view or the Project view accordingly.
  2. The image name is not changed on the disk.

Opening Images in ZEN

  1. You have loaded a ZEN Connect project.
  1. In the Project View or in the Layers View, select one or more images you want to open, right-click and select Open image(s) in ZEN. Alternatively, click on the Context menu button and select Open image(s) in ZEN. You can also simply double-click an image in the Project View or in the Layers View.
  1. The image is opened in ZEN and displayed on a separate tab.

ZEN Data Storage

If you have opened a project from ZEN Data Storage, you can also open an individual image this way. The image is then downloaded; changes are not updated in the viewer and the project until you upload the image again. For information on uploading an image to the data storage, see Saving an Image to ZEN Data Storage.

Toggling View Modes

In ZEN Connect you can switch between two different view modes for your projects. The default is the carrier/holder view mode, where the coordinate system of the correlative workspace is aligned with the screen and images of the current system/session might be rotated. The second is the stage centric view mode, where the coordinate system of the current session is aligned with the screen and the carrier/sample holder as well as other sessions might be rotated.

  1. You have opened a ZEN Connect project.
  1. In the button bar below the Image View of the correlative workspace, click on the button for the carrier/ holder or the stage centric view mode.
  1. The view is changed according to the selected view mode.

Import and Export

Importing Data

You can import simple images, such as camera images or more complex images, such as a light microscope image with overlays, to your ZEN Connect project. For more information, see Adding an Image to the ZEN Connect Project.

Alternatively, you have the option to import BioFormats into your ZEN Connect project. For more information, see Importing Third-party Images.

Note: If you import an Airyscan image, ZEN Connect displays only the raw data and not the calculated Airyscan. Such images should be processed before you add them to ZEN Connect. If you want to add an unprocessed Airyscan image, a warning will appear asking if you want to continue.

Exporting Single Image Data

You can export data of ZEN Connect projects as a single image for distribution to collaborators, or for use in publications. The content can be a single image, tiles, a collection of images, or a view of the entire ZEN Connect project. You can drag or resize the region to control the area that you want to export, and choose whether image names and frames are shown on the exported image. You can pan and zoom using the mouse in the Image View to get fine control of the export area.

  1. You have loaded a ZEN Connect project.
  2. In the loaded ZEN Connect project, you have activated and deactivated the respective areas of interest.
  1. In the ZEN Connect tool, open the Project View or the Layers View. Right-click an image and select Single Image Export.
    Alternatively, select an image and choose Single Image Export from the Export button.
  2. A wizard opens.
  3. Make your settings and click Export Data.
  4. Navigate to the folder where you want to store the exported image. The default file name is the ZEN Connect project name. Click Save.
  1. You have exported one image in a standard image format. The exported image is based on the export area you set up in the Image View.

Exporting a ZEN Connect Project as a Video

You can export data of ZEN Connect projects as a video.

  1. You have loaded a ZEN Connect project.
  2. In the loaded ZEN Connect project, you have activated and deactivated the respective areas of interest.
  1. In the ZEN Connect tool, open the Project View or the Layers View. Right-click an image and select Video Export.
    Alternatively, select an image and, for the Export button, select Video Export and click it.
  2. The wizard for video export opens.
  3. Choose your key frames by positioning the export area in the Image View and add them to the list of key frames by clicking Add current view as key frame.
  4. Make your settings, and click Start Export.
  5. Navigate to the folder where you want to store the exported video. The default file name is the ZEN Connect project name. Click Save.
  1. You have exported a video in a standard video format.

Exporting Data for SerialEM

ZEN offers the functionality to export image data as an MRC file, which makes it compatible with the software application SerialEM and available for TEM users. This export is available for z-stacks and multi-channel images with all the common pixel types (8/16 bit, 32 bit float, RGB 24/32/48). Tiles, time series, multi-scene images, and images with unprocessed data or special dimensions are not supported. For such images, you can use the image processing function Create Image Subset to extract a single image or stack and then export it with this function.
The export creates an MRC file as well as a NAV file, which contains regions of interest (e.g. points, rectangles, polygons, ...). The NAV file can be loaded in the SerialEM software, which then loads the MRC file, so that the image is shown with the respective regions.
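
The exported MRC file is a standard format, so it can also be inspected outside SerialEM, for example with the open-source Python package mrcfile (the file name below is hypothetical); the NAV file is plain text and can be viewed in any text editor:

    import mrcfile

    # Inspect the exported volume (file name is hypothetical)
    with mrcfile.open('my_project.mrc') as mrc:
        print(mrc.data.shape)    # e.g. (z, y, x) for a z-stack
        print(mrc.voxel_size)    # voxel size stored in the MRC header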

  1. You have opened an image in ZEN.
  1. Click on File > Export/Import > MRC Export.
  2. A file explorer opens.
  3. In the file explorer, select a folder to save the file.
  4. Name the file and click on Save.
  1. You have exported the image data as an MRC (and NAV) file.

Alignment

In a ZEN Connect project, you can manually align images in your workspace to correct their position or size with respect to the samples. To do so, you activate the alignment process and start aligning image data. Within a ZEN Connect project, you can also calibrate your system using a sample holder with fiducial markers by moving between the markers and confirming their positions.

Activating the Alignment Process

The alignment process lets you align your current session with fiducial marks or previous images. You can align image data manually.

You should create a new session any time the alignment of the sample in the microscope has been disturbed.

  1. A ZEN Connect project is loaded.
  1. In the Layers View or in the Project View, select the image you want to align. Alternatively, you can select a region to select several images at once.
  2. The image is marked with a square in each frame corner.
  3. As long as the alignment process is not activated, this is indicated with a little lock next to the cursor.
  4. Right-click the selected image and select Align Data.
    Alternatively, right-click the image(s) in the ZEN Connect tool and select Align Data. You can also, for the Alignment button, select Align and click it.
  1. You have activated the alignment process for one or more images. The Alignment Tab below the Image View is displayed.
  2. You can start aligning image data. If you start an alignment on a session node, the set alignment is used for all current and future images of the session. You can use this if you change your sample between different systems and want to align their coordinate systems to each other.

Aligning Image Data

In the alignment process, you have various options to align image data. Note that you can change the alignment mode during the alignment process. The alignment edits you have made are preserved, but you have to restart the pinning process if you have inserted any pins before changing the mode.

Note: The alignment process can be executed multiple times. Each time you run it, the end result of the last alignment is used as the starting point for the new one. If the image is far out of alignment at the start, it is easiest to run the alignment once roughly and then a second time with more precision; the second pass starts from the first and lets you establish a precise alignment quickly.

  1. You have loaded a ZEN Connect project and activated the alignment process.
  1. In the Alignment tab, select one of the following alignment modes and the region you want to align.

Translate Only

  1. Click and drag with the mouse to translate the image you are aligning with respect to everything else.
  2. You can zoom in and out with the mouse wheel, or press and hold the CTRL key to pan while you are in the process of aligning the image.

Translate and Rotate Only

  1. Right-click the location you have lined up to insert the first pin (a red and gray pin icon). The pin locks the image to the reference at this location. Press the DEL key to remove the last pin you inserted.
  2. After you insert the first pin, dragging with the mouse rotates the item around the pin.

Translate, Rotate and Scale Only

  1. If one of the images is smaller than the other, you can scale it. Right-click to insert a pin, and drag with the mouse to scale and rotate the image.

Translate, Rotate, Scale and Shear

  1. Right-click to insert a second pin, and drag with the mouse to shear the image.
  2. After you insert the second pin, your input will also stretch and shear the item.

Image data from microscopes should not need to be stretched or sheared to perform alignment. If large corrections are needed after inserting the second pin, this can indicate other problems, such as equipment calibration issues.
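
The four modes correspond to progressively less constrained 2D affine transformations. A minimal sketch (illustration only, not ZEN code; the pin position and angle below are made up):

    import numpy as np

    def translate(tx, ty):
        # Translate Only: moves the item in x and y (homogeneous coordinates)
        return np.array([[1.0, 0.0, tx],
                         [0.0, 1.0, ty],
                         [0.0, 0.0, 1.0]])

    def rotate_about(px, py, theta):
        # Translate and Rotate: rotation around a pin at (px, py),
        # as after inserting the first pin
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        return translate(px, py) @ R @ translate(-px, -py)

    # Adding uniform scale gives the third mode; allowing the remaining
    # off-diagonal freedom of the 2x3 affine matrix adds shear (fourth mode).
    M = rotate_about(10.0, 20.0, np.deg2rad(5.0)) @ translate(3.0, -1.5)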

Alignment Handles

  1. If you select Alignment Handles, you can use handles to rotate, translate, and scale the image.

Flipping the Image

You can flip your image to mirror it.

  • To flip the image horizontally, click on the Flip Horizontally button.
  • To flip the image vertically, click on the Flip Vertically button.
  • To flip the image stack in z direction, click on the Flip in Z button.

Reset alignment

  1. Click on the Reset button to reset the alignment you performed.
  2. The alignment is reverted to its state when you started aligning. The alignment mode is still activated.

Cancel alignment

  1. Click on the Cancel button to discard the alignment you performed.
  2. The current alignment is cancelled and reverted to the alignment in place before you started the alignment mode. The alignment mode is exited.

Finish alignment

  1. Click on the Finish button to finish the alignment mode and to save the alignment information.

Clear alignment

  1. Click on the Clear button.
  2. The session is restored to its unaligned state.

Aligning Images in Z Direction

In ZEN Connect you can align your images/sessions not only in the x and y directions, but also in z.

  1. You have opened a ZEN Connect project containing images/z-stacks with z information.
  1. Select the image or session you want to shift in z direction.
  2. In the ZEN Connect tool, for the Alignment button, select Align and click it. Alternatively, right-click the image and select Align Data.
  3. The Alignment Tab is displayed below the Image View.
  4. For Alignment Mode select 3D Alignment in the dropdown list.
  5. For Relative Z Offset set the value for your shift in z direction.
  6. Click on Finish.
  1. You have now aligned your data in z direction. For an illustration of the alignment see Example for Z Alignment.
    Note that for the z alignment the view of the aligned stack remains the same, whereas the view of the other stacks changes.

Setting an image to the current Global Z

You can also set the center of a z-stack to the currently selected z-position of the Global-Z slider.

  1. You have opened a ZEN Connect project containing images/z-stacks with z information.
  1. Activate the Global-Z slider and move to the z-position where you want your image to be placed.
  2. Select the image you want to shift in z direction.
  3. In the ZEN Connect tool, for the Alignment button, select Align and click it. Alternatively, right-click the image and select Align Data.
  4. The Alignment Tab is displayed below the Image View.
  5. For Alignment Mode select 3D Alignment in the dropdown list.
  6. On the Alignment tab, click on the Set to current Global-Z button.
  7. Click on Finish.
  1. You have now set the center of your z-stack to the currently selected Global-Z.

Aligning Non-Image Data

  1. You have opened a ZEN Connect project with non-image data.
  2. Your non-image data is toggled visible, see also Moving or Hiding Images.
  1. In the ZEN Connect tool or in the Image View, right-click the non-image data and select Align Point Position. Alternatively, in the ZEN Connect tool, select the non-image data and, for the Alignment button, select Align and click it.
  2. You enter the alignment mode.
  3. In the image area, click at the position where you want to place the non-image data.
  4. Click on Finish.
  1. You have aligned your non-image data.

Aligning Images in the Point Alignment Wizard

  1. You have opened a ZEN Connect project.
  1. In the correlative workspace or in the ZEN Connect tool, select the image(s) you want to align.
  2. In the ZEN Connect tool, for the Alignment button, select Point Alignment Wizard and click on it.
  3. The ZEN Connect Point Alignment Wizard opens.
  4. In the list on the left, use the Add button to add as many points as necessary for your point alignment.
    With one point, only a translation is possible; with two, translation and rotation; three or more points enable all transformations (see the sketch after this procedure).
  5. In the Algorithm dropdown list, select the alignment operations you want to perform.
  6. Click on Draw for the first point.
  7. You enter the drawing mode for the first point.
  8. In the Image Window on the left, click to set the point in your image(s) (Subject point).
  9. In the Project Window on the right, click to set the corresponding location for this point in the project (Reference point).
  10. Both color markers in the table are green and the first point for the alignment is set successfully.
  11. Repeat these three steps for every point you add/need.
  12. If you want to redraw a point pair, click on Redraw and click to set the new positions in both windows.
  13. Click on Next.
  14. The second step of the wizard opens. It displays a preview of the final alignment result and values for the parameter changes.
  15. If you want to change the alignment, click on Back to get back to the previous step. Otherwise, click on Finish to save the alignment and close the wizard.
  1. You have aligned the selected image(s) in the ZEN Connect project.
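
The wizard's algorithms correspond to standard least-squares estimation of a transform from point pairs. As an illustration only (not ZEN's actual implementation; all point values below are made up), a similarity transform (translation, rotation, uniform scale) can be estimated from two or more subject/reference pairs like this:

    import numpy as np

    def estimate_similarity(subject, reference):
        # Least-squares similarity transform (scale s, rotation R,
        # translation t) such that reference ≈ s * R @ subject + t.
        # subject, reference: (N, 2) arrays of matching points, N >= 2.
        mu_s, mu_r = subject.mean(axis=0), reference.mean(axis=0)
        S0, R0 = subject - mu_s, reference - mu_r
        U, sing, Vt = np.linalg.svd(S0.T @ R0)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        s = (sing * np.array([1.0, d])).sum() / (S0 ** 2).sum()
        t = mu_r - s * (R @ mu_s)
        return s, R, t

    # Three hypothetical point pairs: the reference is the subject rotated
    # by 90°, scaled by 2, and shifted by (2, 1).
    subj = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0]])
    ref = np.array([[2.0, 1.0], [2.0, 21.0], [-8.0, 21.0]])
    s, R, t = estimate_similarity(subj, ref)   # s ≈ 2, t ≈ (2, 1)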

Aligning Images in the 3D Point Alignment Wizard

  1. You have opened a ZEN Connect project.
  1. In the correlative workspace or in the ZEN Connect tool, select the image(s) you want to align.
  2. In the ZEN Connect tool, for the Alignment button, select Point Alignment 3D Wizard and click on it.
  3. The ZEN Connect 3D Point Alignment Wizard opens.
  4. In the left image window (Tomo3D view), select the 2D image view you want to use from the dropdown, as alignment points can only be set in the 2D views.
  5. In the list on the left, click Add to add as many points as necessary for your point alignment.
    With one point, only a translation is possible; with two, translation and rotation; four or more points enable all transformations.
  6. In the Algorithm dropdown list, select the alignment operations you want to perform.
  7. Click Draw for the first point.
  8. You enter the drawing mode for the first point.
  9. In the Image Window on the left, click to set the point in your image(s) (Subject point).
  10. In the Project Window on the right, click to set the corresponding location for this point in the project (Reference point).
  11. Both color markers in the table are green and the first point for the alignment is set successfully.
  12. Repeat these steps for every point you add/need.
  13. If necessary, use the controls in the Dimensions tabs, e.g. to switch to a different z-slice to set the points on specific z-levels.
  14. If necessary, change the left image view via the dropdown to add or check points in other 2D image dimensions.
  15. The view changes to the selected 2D image dimensions.
  16. If you want to redraw a point pair, click Redraw and click to set the new positions in both windows.
  17. Click Next.
  18. The second step of the wizard opens. It displays a preview of the final alignment result and values for the parameter changes.
  19. If you want to change the alignment, click Back to get back to the previous step. Otherwise, click Finish to save the alignment and close the wizard.
  1. You have aligned the selected image(s) in the ZEN Connect project.

ZEN Connect Tool

The ZEN Connect tool provides a Layers View and a Project View of the image data that you have acquired for the ZEN Connect project. Every image that you have acquired for the ZEN Connect project is listed. As you acquire or import more image data, the new data is listed in the views.

The ZEN Connect tool offers different options to open a ZEN Connect project or to create a new ZEN Connect project.

Alternatively, you can create a new ZEN Connect project via File > New Document. For more information, see New Document Dialog.

The ZEN Connect tool displays the following:

  • Images that have been acquired for the ZEN Connect project.
  • Images that have been imported to the ZEN Connect project.
  • Position of the image in the project or the layers.
  • Non image data added to the ZEN Connect project.

ZEN Connect Project Layers View

Images in your ZEN Connect project are displayed in the Image View according to their position in the Layers View. With drag & drop, you can move images above and below other images. You can also hide them completely. For more information, see Moving or Hiding Images. Additionally, you can see the objective magnification with which an image was acquired and whether an image is a z-stack.

Select Template Dialog

In the Select Template Dialog, you select carriers and holders.

Parameter

Description

Celldiscoverer

Only available for the Celldiscoverer application.

Shows a list of Celldiscoverer sample holders.

Tiles

Shows a list of all generic sample holders.

Correlative

Shows a list of all relevant correlative sample holders.

Alignment Tab

The Alignment tab is visible as soon as you enter the alignment mode. For more information, see Activating the Alignment Process.

You can perform a three-point alignment to:

  • Line up an imported image with reference marks, such as the precision fiducials on a CorrMic Holder.
  • Line up features in an imported image with LM, EM, and SEM images of the same features.
  • Line up a session of LM, EM, and SEM imagery with a previously acquired LM, EM, and SEM imagery session.

With the three-point alignment process you can set the position, rotation, and scale of an image or tile. This is used to line the image or tile up with reference marks or other images. Once an image is lined up, it can be used as a reference (road map) to move the stage to control further image acquisition.

Parameter

Description

Alignment Mode

Sets which data properties you can change during the alignment.

-

Translate Only

Moves the item you are aligning in x and y only without changing its size or orientation.

-

Translate and Rotate Only

Moves the item in x and y direction and changes its orientation. It does not change the scale of the item you are aligning.

-

Translate, Rotate and Scale Only

Moves, reorients, and resizes the item you are aligning. It does not shear it.

-

Translate, Rotate, Scale and Shear

Supports full three-point alignment.

-

Alignment Handles

Displays alignment handles to rotate and resize the image.

-

3D Alignment

Displays options to align an image in z direction and for three-dimensional rotation.


Flip Horizontally

Mirrors the image horizontally.


Flip Vertically

Mirrors the image vertically.


Flip in Z

Mirrors the image stack in z direction around the middle z-height.

Relative Z Offset

Only visible if 3D Alignment is selected.
Sets the z offset for the selected image.

Step Size

Only visible if 3D Alignment is selected.
Sets the step size for the Relative Z Offset input.

Set to current Global-Z

Only visible if 3D Alignment is selected, and only active if the Global-Z slider is activated on the Dimensions tab.
Sets the z value of the currently selected Global-Z as the z value for the center of the z-stack.

Apply 3D Rotation

Only visible if 3D Alignment is selected.
Enables the controls below for the three-dimensional rotation of the z-stack.

-

Rotation X-Axis

Sets the rotation around the x-axis with the slider or input field. Click on the button to reset the angle.

-

Rotation Y-Axis

Sets the rotation around the y-axis with the slider or input field. Click on the button to reset the angle.

-

Rotation Z-Axis

Sets the rotation around the z-axis with the slider or input field. Click on the button to reset the angle.

-

Angle Step Size

Sets the angle step size for the slider/input fields above.

-

View Cube Control

With this cube control, you can rotate the stack interactively. It has a visual representation of the current stack (white box) and the cutting plane which is displayed in the 2D view above.

-

Presets

Sets the View Cube Control to a predefined position. You can set it to the Default, or you can select an orientation from the dropdown of the button to show the Default, Viewer Perspective, Left, or Right orientation.

-

Manual Control

Controls the orientation of the View Cube Control.

Reset

Resets the alignment to its state when you started this alignment operation.

Clear

Resets the alignment to where it was when the data was first acquired or imported.

Finish

Exits from the alignment operation, keeping the alignment you have established.

Cancel

Returns to the alignment as it was when you started, and exits from the alignment operation.

Button Bar below Image View

Parameter

Description


Zoom To Extent

Resets the view space of the Image View to be centered on the holder with a field of view (FOV) that includes all visible images in the project. For more information, see Zooming to Extent.


Pan & Zoom

Activates the mouse for panning around and zooming in and out in the Image View. For more information, see Panning & Zooming.


Select Region

Selects an image or a region in the Image View. For more information, see Selecting Region.


Interact with Measurement

Activates a mode to interact with measurements added to the project. For more information, see Editing Measurements in a ZEN Connect Project.


Create Region of Interest

Creates a Region of Interest in the Image View. For more information, see Using Regions of Interest in ZEN Connect.

A

Hides or displays image names and frames of images in the Image View. For more information, see Toggling the Display of Region Caption and Frame.

Select Carrier/Holder

Opens a dialog to select a matching carrier or sample holder. For more information, see Selecting and Clearing Carrier/Holder.


Stage Centric View Mode

Activates a stage centric view mode where the coordinate system of the current session is aligned with the screen and the carrier/sample holder as well as other sessions might be rotated. For more information, see Toggling View Modes.


Carrier/Holder View Mode

Activates the carrier/holder view mode where the coordinate system of the correlative workspace is aligned with the screen and images of the current session might be rotated. For more information, see Toggling View Modes.

Grab Image

Creates an image of the ZEN Connect project. For more information, see Grabbing an Image.

Field of View Width Dialog

Parameter

Description

FOV Width

Displays the current width of the field of view and allows you to enter a value.

OK

Sets the width to the entered value.

Cancel

Closes the dialog box without setting the field of view.

Pixel Size Dialog

Parameter

Description

Pixel Size

Displays the current pixel size and allows you to enter a value.

OK

Sets the pixel size to the entered value.

Cancel

Closes the dialog box without setting the pixel size.
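
The two dialogs are two views of the same calibration: for square pixels, the field-of-view width equals the pixel size multiplied by the image width in pixels. A worked example with made-up numbers:

    pixel_size_um = 0.5                       # hypothetical pixel size in µm
    width_px = 1024                           # hypothetical image width in pixels
    fov_width_um = pixel_size_um * width_px   # 512 µm field-of-view width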

Video Export Wizard

Parameter

Description

Burn in Data Bar

Burns the currently configured data bar into the exported video.

Show Region Caption

Controls whether the image names are shown in the exported video.

Show Region Outline

Controls whether the image frames are shown in the exported video.

Export Resolution

Sets the resolution and format for the video export.

Resolutions available in the drop down list:

  • 320 x 240 (4:3)
  • 428 x 240 (16:9)
  • 640 x 480 (4:3) (default)
  • 854 x 480 (16:9)
  • 960 x 720 (4:3)
  • 1280 x 720 (16:9)
  • 1440 x 1080 (4:3)
  • 1920 x 1080 (16:9)

Zoom to Extent

Places the image at the center of the preview area.

Rotation

You can use the slider or the input field to rotate the view.

Start Delay

Sets the delay at the start of the video. The default setting is 1.0 seconds.

Key Frames

Lists the key frames, including the data of the position and FOV.

Move Up

Moves the selected key frame up in the list.

Move Down

Moves the selected key frame down in the list.

Delete key frame

Deletes the selected key frame.

Go to key frame

Displays the selected key frame.

Reset key frame to current view

Sets the values of the selected key frame to the current view.

Options

Load Export Key Frames

Loads stored key frames.

Save Export Key Frames

Saves the key frames in XML format.

Transit To

Sets the transition time for zooming to a selected key frame. The default setting is 3.5 seconds.

Pause At

Sets the time to stay at the selected key frame. The default setting is 0.5 seconds.

Return to first at end

Returns to the first key frame at the end of the video.

Add current view as key frame

Adds the current view as a key frame.

Preview export

Displays a real-time preview of the video.

Start Export

Exports and saves the video.

Finish

Saves the changes and closes the wizard.

Cancel

Cancels the video export.
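
As a rough estimate only (this assumes each key frame contributes its Transit To time plus its Pause At time, which is an assumption rather than a documented formula), the length of the exported video can be approximated from the wizard defaults:

    start_delay = 1.0               # s, wizard default
    transits = [3.5, 3.5, 3.5]      # s per key-frame transition (default 3.5)
    pauses = [0.5, 0.5, 0.5]        # s pause at each key frame (default 0.5)
    approx_length_s = start_delay + sum(transits) + sum(pauses)   # 13.0 s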

Regions Tool

This tool displays a list of regions of interest (ROI) which are drawn into a ZEN Connect project.

Parameter

Description

ROI List

Here you see a list of all ROIs in your ZEN Connect project.
Double-clicking an entry moves the stage to the center of the respective ROI.


Delete

Deletes the currently selected region of interest.


Rename

Allows you to rename the currently selected region of interest.


Move to

Moves the stage to the center of the currently selected region.

ZEN Connect Manual Alignment Wizard

1

Alignment Parameter
Control parameters to align the respective image in the project. For more information, see Alignment Parameter Section.

2

Image View
Displays the images of the project and allows alignment of images.

3

View Options
Here you have the general options of the Dimensions tab. The options are always those of the image selected in the Select Node For Dimensions tab.

ZEN Connect Point Alignment Wizard

This wizard guides you through a three-point alignment of the image in your ZEN Connect project.

Step 1: Setup Points

1

Point Alignment Options
Options to configure your point alignment. For more information, see Point Alignment Options.

2

Image Window
Displays the image(s) you are aligning. Area where you set the subject points.

3

Project Window
Displays all the images of the project except the one you are aligning. Area where you set the reference points.

Point Alignment Options

Parameter

Description

Point list

Name

Displays the name of the point.

Subject

Displays the status of the subject point with a color.

  • Yellow: You are in drawing mode and have not yet set the subject point in the Image Window.
  • Green: You have set the subject point.

Reference

Displays the status of the reference point with a color.

  • Red: You have not yet set the subject point and cannot set the reference point.
  • Yellow: You are in drawing mode and have not yet set the reference point in the Project Window.
  • Green: You have set the reference point.

Draw

Only visible if no points have been drawn for the current point entry.
Enters the drawing mode for the respective point.

Cancel

Only visible if you are in drawing mode.
Cancels the drawing of points for the current point.

Redraw

Only visible if you have already drawn the reference and subject point for this entry.
Reenters the drawing mode to redraw the points for this entry.


Add

Adds another point entry in the list.


Delete

Deletes the currently selected list entry and removes all drawn points of the entry.

Algorithm

Selects the algorithm for alignment. The algorithm is preselected based on the number of positioned points.

Autoselect

Automatically selects the algorithm suitable for the drawn points.

Translation

Moves the item you are aligning in x and y only, without changing its size or orientation.

Translation and Rotation

Moves the item in x and y direction and changes its orientation. It does not change the scale of the item you are aligning.

Translation and Scale

Moves and resizes the item you are aligning.

Allow all transformations

Supports all possible transformations.

Parameter

Description

Next

Moves on to the next step of the wizard.

Finish

Saves the changes and closes the wizard.

Cancel

Closes the wizard without saving.

Step 2: Preview

This step displays a preview of the finished alignment and the parameter values of each alignment.

Parameter

Description

Algorithm Result

Displays the resulting alignment changes.

Translation

Displays the resulting translation in X-Direction and Y-Direction.

Rotation

Displays the resulting rotation angle around the z-axis.

Scaling

Displays the resulting scaling factor for the X-Dimension and Y-Dimension.

Back

Moves to the previous step of the wizard.

Finish

Saves the changes and closes the wizard.

Cancel

Closes the wizard without saving.

Shuttle & Find

SEM/LM system for correlative microscopy

This module enables you to locate sample positions in two different microscopes, e.g. a light microscope and a scanning electron microscope (SEM). Afterwards, you can correlate the two images into one merged image. This technique is called correlative microscopy, or just "CorrMic". It combines the two worlds of scanning electron microscopy and light microscopy and brings them together in one image. To use the functionality, you need a license for the Connect Toolkit.

The samples can be mounted in specially designed correlative holder systems (with three correlative calibration markers) from ZEISS. User-defined holder systems with three calibration markers can be used as well. Biological samples are mainly deposited on cover glasses or on TEM grids. In contrast to biological samples, material samples vary strongly in shape and size. The correlative holders were designed to meet these requirements.

Example of a correlative ZEISS sample holder

Settings and Image Acquisition with the Light Microscope

Before acquiring an image with the light microscope and using it for correlative microscopy, it is necessary to make general settings, e.g. stage calibration, camera orientation, objective calibration, and correct scaling. Please note that we do not describe all these topics within this guide, as we focus on the Shuttle & Find workflow only.

Furthermore, we will not describe basic functionality of the software in this guide, such as the program layout or general image acquisition topics.

Starting the LM Software

For correlative microscopy with light microscopes, the ZEN software has to be installed. In addition, you need a license for the Connect Toolkit.

  1. To start the software, double-click the ZEN program icon on your desktop.
  2. The software starts.
  3. In the Left Tool Area switch to the Acquisition tab and activate Shuttle & Find.
  4. Open the Shuttle & Find tool.
  1. You have successfully started the software. Now you can start working with the Shuttle & Find module.

Defining a New Sample Holder Template

With this dialog you can define new correlative holders in addition to the existing holder templates. It is not mandatory to use correlative holders from ZEISS; user-defined correlative holders with three fiducial markers can be used as well.

  1. To open the dialog, click on Add in the Select Template dialog. This dialog can be opened via the Shuttle & Find tool.
  2. The New Template dialog opens.
  3. Type in a name for the new holder or sample carrier. An image of the new holder can be loaded as well.
  4. Insert the distances (in millimeters) between the first and the second marker and between the second and third marker.
  5. The distances can be determined using the Stage Control dialog, accessible via the Light Path tool in the Right Tool Area. We recommend doing this before starting the New Template dialog. Write down the distances so you are prepared to enter them in the New Template dialog.
  6. Activate the live view in the Center Screen Area by clicking on the Live button in the Locate tab.
  7. Navigate the stage manually to the calibration marker on the sample holder by means of the joystick and note the x/y-coordinates of the marker.
  8. Repeat this procedure for all three markers and calculate the distances between marker 1 and marker 2 and between marker 2 and marker 3, respectively (see the example after this procedure).
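
For example, the marker distances follow from the noted x/y coordinates by simple Euclidean geometry (all coordinates below are made up):

    import math

    # Hypothetical marker coordinates noted from the Stage Control dialog (mm)
    m1 = (10.000, 12.500)
    m2 = (55.120, 12.480)
    m3 = (55.150, 42.310)

    d12 = math.dist(m1, m2)   # distance between marker 1 and marker 2
    d23 = math.dist(m2, m3)   # distance between marker 2 and marker 3
    print(round(d12, 3), round(d23, 3))   # values to enter in the dialog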

Calibrating the Sample Holder

Correlative sample holders have three fiducial markers enabling a three-point calibration (labeled with the numbers 1-2-3). The calibration markers each consist of a small L-shaped marker (length 50 µm) and a large L-shaped marker (length 1 mm). The larger marker is used for coarse orientation, whereas the smaller marker is used for the calibration.

Preparing Calibration

  1. Click on Live in the Acquisition tab to activate the live view in the Center Screen Area.
  2. Navigate the stage manually to the first calibration marker on the sample holder (marked with No. 1) by means of the joystick. It is sufficient to move the stage to the larger L-shaped calibration marker; the smaller marker will be detected automatically within the Sample Holder Calibration Wizard. To locate the marker positions, we recommend using a dry objective with low magnification (5x – 20x).
  3. Open the Shuttle & Find tool.
  4. Click on Calibrate… to open the Sample Holder Calibration Wizard.

Setting Calibration Options

Sample Holder Calibration Wizard Options

In step 1 of the wizard, the following options should be activated to follow our recommended workflow:

  1. Check if the Automatic movement to next marker checkbox is activated.
  2. This will automatically move the stage to the next marker position after you have confirmed the position of the marker and clicked on Next.
  3. Check if the Use automatic marker detection checkbox is activated.
  4. The software will try to find the correct positions of each marker automatically.
  5. If you need to change the marker color, or check if the marker orientation is set correctly, activate the Use settings for marker detection checkbox to access these functions.
  6. Click on Next to move to the next wizard step.

Acquiring the LM Image

Image acquisition is basically performed as you are used to within the ZEN software. The file format for Shuttle & Find data is the common *.czi file format. Saved images can be loaded in ZEN via the menu File > Open.

After image acquisition, the next step in the correlative workflow is to define/draw ROIs/POIs in your image. To do so, you can use the Region tools on the S&F tab, see Regions, Find and Dimensions.

LM image

Shuttle & Find Sample Positions at the Electron Microscope

Now you can transfer (Shuttle) the sample and the LM (Light Microscope) image file (.czi) to the SEM (Scanning Electron Microscope). There you can easily relocate (Find) the same sample positions and acquire a corresponding image within the ZEN SEM software. To do so, perform exactly the same steps as for the light microscope.

Mounting the Sample Holder to the SEM

For imaging your sample in the SEM, insert the sample holder (2) in the special SEM adapter (1) and mount it to the SEM.

The arrow of the sample holder has to face the arrow of the SEM adapter.

Sample holder mounted in SEM adapter

Starting the ZEN SEM Software

For correlative microscopy with scanning electron microscopes, SmartSEM and ZEN SEM have to be installed. SmartSEM remains the control software of the scanning electron microscope; ZEN SEM comes as an add-on to SmartSEM for performing correlative microscopy and using Shuttle & Find on an SEM.

  1. You have started SmartSEM.
  1. Start the ZEN software by clicking on the program icon on your desktop.
  2. The application selection window appears.
  3. Click on the SEM button to start.
  4. You will see a reduced user interface compared to the full software. In the Left Tool Area, only the SEM Acquisition tab and the Processing tab are available. On the SEM Acquisition tab you will find the Shuttle & Find tool, which has three additional buttons at the lower part of the tool.

Selecting the Sample Holder

This step is exactly the same as for the light microscope, so please read the chapter Selecting the Sample Holder for the exact steps you have to perform.

Calibrating the Sample Holder

Like the step before, this step is exactly the same as for the light microscope, so please refer to the chapter Calibrating the Sample Holder for details.

  1. The calibration of the sample holder has to be done on both systems, the LM and the SEM. Otherwise, the relocation of your sample positions or ROIs/POIs stored in the image will not be successful.
  2. Note that for Shuttle & Find the beam shift must be switched off. The beam shift is deactivated in SmartSEM as follows:
  3. Call up the shortcut menu Center Point/Feature by right-clicking on the Stage property page.
  4. Select Center Point/Feature and select Stage only.

Acquiring an EM Image

  1. Load your LM image (.czi) into ZEN SEM.
  2. The image will be displayed in the center screen area.
  3. Activate the Live mode.
  4. You will see the live image from the SEM. Note that all settings for the SEM image have to be made within the SmartSEM software.
  5. Activate the S&F View in the Center Screen Area.
  6. Go to the S&F tab.
  7. Check if the Double click in image to move stage and Show splitter view checkboxes are activated (default setting).
  8. In the left image container you see the live image from the SEM. The right image container is empty.
  9. Drag the loaded LM image from the Images and Documents gallery into the empty image container.
  1. Now you can easily relocate sample positions by double-clicking within the image or by clicking the ROI/POI button on the S&F tab (if ROIs/POIs are drawn in and selected).
  2. For image acquisition, you have to use the Snap button within ZEN SEM. Note that we do not describe setup and image acquisition with the SEM; please read the online help or user guide of the SEM software.
SEM and LM image

Fine Calibration of the Sample Holder

The precision of relocation can be improved by determining an offset value. This value describes the position offset between the loaded image and the live image. The defined offset value is only valid for the loaded image. If another image is loaded or if you close the dialog, the offset value is deleted.

  1. An offset is visible when you try to relocate marker positions on the live image compared to the LM image.
  1. Click on the Set Offset button.
  2. The stage moves to the selected marker position. Then a message appears which asks you to move the stage to the correct position.
  3. Move the stage manually to the correct position by using the joystick.
  4. Confirm the message by clicking on the OK button.
  1. Now you can repeat the relocation. The positions should now be identical.
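
Conceptually, the offset is just the vector between where the stage lands and where the feature actually is; ZEN determines and applies it internally, but the idea can be sketched as follows (all coordinates are made up):

    # Stage position after relocation vs. position after manual correction (µm)
    landed = (1234.0, 567.0)
    actual = (1236.5, 565.8)
    offset = (actual[0] - landed[0], actual[1] - landed[1])

    def corrected(target):
        # subsequent relocations in the loaded image are shifted by the offset
        return (target[0] + offset[0], target[1] + offset[1])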

Shuttle & Find with an EVO 10

To use Shuttle & Find (software and correlative holders) with an EVO 10, make sure that the stage limits (for x, y, and z) are set as follows:

Holder Positions

The holder positions must be oriented as shown in the images.

NOTICE

If you set a wrong orientation the stage cannot be moved to all correlative markers because of the stage limits for the EVO 10.

  1. The holder has to be mounted into the EVO in such a way that the correlative markers (1) and (2) are near the chamber door, whereas marker (3) is located furthest from the chamber door (see Mounting A/B).
  2. If necessary, the SEM image can be rotated to match the LM image using the Scan Rotate option in SmartSEM.

Mounting A:

Mounting B:

Correlative Sample Holders

Name

Image

Life Science cover glass 22 x 22

Life Science Cryo Holder

Life Science for TEM Grids

Cover glass with fiducials 22 x 22

MAT Flat Stubs A

MAT Flat Stubs

MAT Universal A

MAT Universal B_A

MAT Universal B_B
