This article is intended as a starting point for users who are completely new to the platform. It explains some core principles and concepts, as well as basic functions.
arivis solutions are widely reputed for their ability to handle large image data. At the core of this ability is the way that we store, access and process image data. A more thorough explanation of why we do things this way and the basic concepts behind it can be found here, but for now, it suffices to state that Vision4D requires images to be saved in a specific format to work.
The first two points to note on this are:
As mentioned above, importing images is necessary, and usually very straightforward. You can simply drag & drop a file from the file explorer into an open viewer to start the import process. Then, choose a name and save location for the results.
In cases where the file selected is fully supported that is all that is required. The software will automatically read and copy all the image data and metadata. However, it is also possible to import some partially supported files or use more advanced import scenarios when needed.
For example, if the images are stored as multiple TIFF files, the software will be able to read the image information, but some of the metadata relating to the dimensionality of the image set may be missing or unrecognized. In such cases, use simple import scenarios, for example, import simple Z-stacks or import time series.
More complex import scenarios are also possible for multidimensional imports or imports with stitching of tiled images.
Once the images have been imported into a *.sis file, you can open the file in the same way (drag & drop into an open viewer), or simply double click on the file in the Windows Explorer.
This article describes annotation of images and creation of training models for Deep Learning segmentation in arivis Pro using the optional arivis AI toolkit.
Image segmentation, the process of identifying objects from an image, has been around for as long as digital images. Usually, this process involves creating and using algorithms to classify pixels in an image and then using this classification to identify objects. Various algorithms exist to classify pixels, which often work fast and well but have several limitations regarding the types of images that can be analyzed, and often require a high level of knowledge and skill to apply successfully, whereas humans can usually be trained quickly to recognise objects from images. Deep Learning works by using human knowledge of what constitutes an object to train a model that a computer can use to interpret the image data and produce objects. The exact workings of how a DL network performs the classification are highly complex, but the process of creating a network and applying it to images is accessible to anyone with basic computer skills.
Since arivis Vision4D 3.6 it has been possible to run DL inference for segmentation. This inference can use pre-trained models created with apeer and other platforms that use ONNX models. This only requires that the Vision4D license include the Analysis module.
With Vision4D/arivis Pro 4.1 it is possible to train models directly in the application. This requires a license that includes the AI toolkit and also requires that the arivis Deep Learning package for GPU acceleration be installed.
Once a model has been trained, actually using it in a pipeline is just as easy as using any other pipeline operation, and the results of a DL segmentation can be used in a pipeline in exactly the same way as any other pipeline segments. This includes the ability to:
The simplest way to use a trained network is to create a new pipeline that uses it immediately after training. At the bottom of the DL Trainer panel, once the training is complete, we can click Open in pipeline. The software will automatically open the analysis panel, create a new pipeline, add the Deep Learning Segmenter operation to the pipeline, and select the trained model for use with the operation.
With our trained model in the pipeline, we can use this segmentation operation and its output like any other pipeline segmentation operation.
See this other article on using Deep Learning in pipelines for more information on configuring such pipelines with DL models, including custom models created elsewhere, such as on ZEISS arivis Cloud.
Use 4D Opacity Settings mapping to apply a color scale based on an axis rather than intensity
Maximum intensity projections are a great way to render images in 3D, as they allow the user to see through the entire depth of an image. However, a MIP can look rather flat unless the volume is rotating or moving, making it difficult to get a sense of depth from a 2D display. We can use color gradients and map them to an axis rather than to intensity to better visualize such images.
Once the set has been re-colored we can change the mapping to any axis instead of the intensity from the 4D Opacity Settings panel.
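To make the idea of axis-mapped coloring concrete, here is a minimal numpy sketch (a generic illustration, not the arivis implementation): instead of coloring by intensity, each projected pixel is colored by the z-position where its maximum occurs, so depth becomes visible in a single 2D image.

```python
import numpy as np

def depth_coded_mip(stack):
    """Compute a MIP and color it by the z-position of each maximum,
    mimicking an axis-mapped color gradient (illustrative sketch only)."""
    mip = stack.max(axis=0)                   # classic maximum intensity projection
    depth = stack.argmax(axis=0)              # z index where the maximum occurs
    t = depth / max(stack.shape[0] - 1, 1)    # normalize depth to the 0..1 range
    # simple blue-to-red gradient: near planes blue, far planes red
    rgb = np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)
    # modulate color by projected intensity so dim regions stay dark
    return rgb * (mip / max(mip.max(), 1))[..., None]

# tiny synthetic stack: one bright voxel in plane 0, another in plane 2
stack = np.zeros((3, 2, 2))
stack[0, 0, 0] = 100   # shallow structure -> rendered blue
stack[2, 1, 1] = 100   # deep structure -> rendered red
img = depth_coded_mip(stack)
```

The same principle applies whichever axis the gradient is mapped to; the software simply substitutes a coordinate for intensity when looking up the color scale.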
This guide explains how to switch between CSV and XLSX export for spreadsheet-style content export.
Pipelines are a great tool to extract information from images, and the Objects window can provide a lot of powerful insights through its interactive interface with many display options. However, when dealing with multiple image sets or if statistical analysis tools are required that are not included in Vision4D then exporting the results of the analysis to a spreadsheet can be really useful.
Both the Objects table and the analysis panel's Export Object Features operation default to saving the results as MS Excel .XLSX files.
And while XLSX is a widely supported format, using alternatives like CSV can be preferable in some circumstances.
If the pipeline is used in a batch analysis, the options set in the pipeline remain in effect. Therefore, if a CSV file is the required output in a batch, this should be defined in the pipeline operation, not in the Batch Analysis window.
Using CSV export in batch analysis also allows the batch to concatenate analysis results from multiple image sets. To enable this option, start by opening the Batch Analysis window, and after clicking the Next button select the option to "Combine exported CSV files".
Finish by setting up the other export options (output folder and nomenclature) and click Run to start the analysis.
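The effect of combining exported CSV files can be illustrated with a short stdlib-only sketch: per-image result tables sharing the same header are stacked into one table, with a column recording which image each row came from. The "Source" column name and the helper are hypothetical, chosen only for the example; they do not describe the exact layout arivis produces.

```python
import csv
import io

def combine_csv_exports(named_csvs):
    """Concatenate per-image CSV exports into one table, keeping a single
    header row and adding a 'Source' column (hypothetical column name)."""
    combined, header_written = [], False
    for source, text in named_csvs:
        reader = csv.reader(io.StringIO(text))
        header = next(reader)                      # each export starts with a header
        if not header_written:
            combined.append(["Source"] + header)   # keep the header only once
            header_written = True
        for row in reader:
            combined.append([source] + row)
    return combined

# two mock per-image exports with identical headers
a = "Id,Volume\n1,120\n2,85\n"
b = "Id,Volume\n1,240\n"
rows = combine_csv_exports([("image_a", a), ("image_b", b)])
```

This kind of concatenation is why CSV (rather than XLSX) is required for the combine option: plain-text tables with identical headers can be appended trivially.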
If the pipeline has been configured to export CSV files in arivis Vision4D, it will also export CSV files in VisionHub. To review the results, simply double click on an analysis job and the options to view or download the CSV results will be available on the Results pane on the right of the window.
This guide clarifies the practical differences between background subtraction and shading correction, detailing the meaning and effects of both methods on images. Finally, it focuses on the background subtraction techniques available in arivis Pro.
There's a great deal of confusion regarding the use of the Shading correction and Background subtraction on images for quantitative fluorescence microscopy.
Shading correction and background subtraction allow you to quantify intensities more accurately and improve image quality for display. Moreover, they can be very useful for object detection tasks.
Background subtraction is a technique for separating foreground elements from the background. The definition of background is simple: anything in the image that is not an object of interest counts as background. This technique improves the precision and reliability with which objects can be separated from the rest of the scene, regardless of changes in the images. Background subtraction is almost always mandatory for object tracking in time-lapse datasets. There are several techniques for background subtraction; this document will show the options available in arivis Pro.
Shading correction (also known as flat-field correction) is a technique used to improve image quality by correcting uneven illumination in the image. It cancels the effects of image artifacts caused by variations in the pixel-to-pixel sensitivity of the detector and by distortions in the optical path. The shading effect is usually visible as areas of different intensity distributed across the entire image. In some cases, the image might be bright in the center and decrease in brightness toward the edge of the field of view. In other cases, the image might be darker on the left side and lighter on the right side. The shading effect makes object detection very complex.
This guide will only focus on the Background subtraction topic.
The image below is a typical example of uneven illumination between the center and the edges of the field of view. The same structures will have a different intensity range depending on whether they are located closer to the borders or to the center. This makes their segmentation very complex.
The shading effect is even more visible when stitching multiple fields. The dark border pattern makes the reconstruction imperfect.
arivis Pro offers several background subtraction approaches. All of them are available as operators in the pipeline workflow. The background subtraction results can be saved as a new SIS file, a new Image-Set, or as an additional channel in the active Image-Set for display purposes.
The background subtraction operator offers several methods and sources for computing the result.
Each of them can have one or more associated parameters.
The Background subtraction operator generates the corrected image only temporarily, for the purpose of detecting objects. Intensity measurements of the detected objects are performed on the original image (before background subtraction and any other processing).
The background is not homogeneous: there is an intensity gradient from the top to the bottom of the image.
The background subtraction has maintained almost all the structures of interest.
The method is fine for this kind of image.
The background subtraction has partially removed the structures of interest.
The method is too strong for this kind of image.
The background subtraction has removed the structures of interest.
The method is too strong for this kind of image.
The morphology filter Shape should be selected according to the size of the structures (small structures = Box, big structures = Sphere)
The Perform Plane Wise option must be selected.
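As an illustration of how a morphology-based background subtraction works, and why the footprint shape matters, here is a minimal sketch using scipy's grey-scale opening. This is a generic textbook approach under assumed parameters, not the arivis implementation: the background is estimated by an opening whose footprint is larger than the structures of interest, then subtracted.

```python
import numpy as np
from scipy import ndimage

def subtract_background(image, size=5, shape="box"):
    """Estimate the background with a grey-scale morphological opening and
    subtract it. A 'box' footprint suits small structures; a round footprint
    (a disk here, standing in for 'Sphere' in 3D) suits larger ones."""
    if shape == "box":
        background = ndimage.grey_opening(image, size=(size, size))
    else:
        y, x = np.ogrid[-size:size + 1, -size:size + 1]
        footprint = x * x + y * y <= size * size      # disk-shaped footprint
        background = ndimage.grey_opening(image, footprint=footprint)
    return np.clip(image - background, 0, None)       # no negative intensities

# synthetic image: smooth gradient background plus one small bright spot
yy, xx = np.mgrid[0:32, 0:32]
image = (yy * 2).astype(float)    # top-to-bottom intensity gradient
image[16, 16] += 50               # small structure of interest
result = subtract_background(image, size=3)
```

The opening removes anything smaller than the footprint (the spot), leaving the smooth gradient as the background estimate; subtracting it flattens the gradient while preserving the structure, which is exactly the trade-off the method comparisons above describe.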
This guide explains how to draw objects interactively.
The interactive object drawing method allows you to define structure edges directly using one of the many available graphic tools. Object drawing is often, though not always, the only approach available to capture an object's outline and shape. This is especially true when images have a complex texture or poorly defined edges (e.g., EM or CT images).
Vision4D offers a wide collection of drawing tools, comprising fully manual methods as well as some semi-automatic approaches. The drawing tool collection is available on the top icon panel. Its content varies according to the viewing method setting.
With the 4D viewing method active, the Magic Wand tool and the tools to place simple geometric shapes, markers or spheres on the volume are available. The 2D viewing method gives access to more tools, including the Manual Drawing tool, the Magic Wand tool, and the simple geometric shapes creator. In this mode, the drawn objects can also be edited.
The objects created manually are, for all intents and purposes, full segments, exactly like objects obtained using the automatic approach (Pipeline). All the available measurements can be quantified.
These objects can also be used as free-shape ROIs to mask part of the volume, or used to evaluate interactions and relationships between structures (Objects Compartmentalization, Objects Co-Localization, etc.).
To draw objects in 4D, select the 4D view.
The Magic Wand tool selects the object based on the intensity range detected in the area under the mouse cursor.
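The magic-wand principle can be sketched in a few lines of numpy/scipy (a simplified generic model, not the Vision4D algorithm): keep the pixels whose intensity lies within a tolerance of the seed pixel, then retain only the connected region containing the seed.

```python
import numpy as np
from scipy import ndimage

def magic_wand(image, seed, tolerance):
    """Select the connected region whose intensities lie within `tolerance`
    of the seed pixel (simplified sketch of magic-wand behaviour)."""
    seed_value = image[seed]
    within = np.abs(image - seed_value) <= tolerance   # pixels in the intensity range
    labels, _ = ndimage.label(within)                  # connected components
    return labels == labels[seed]                      # keep the seed's component only

image = np.array([
    [10, 11, 50],
    [12, 10, 50],
    [50, 50, 11],
])
# clicking at (0, 0) with tolerance 3 selects the connected dim patch,
# but not the isolated dim pixel at (2, 2)
mask = magic_wand(image, seed=(0, 0), tolerance=3)
```

This also explains why the selection depends on where exactly the cursor is placed: the seed pixel's own intensity defines the accepted range.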
The TAG is an object property assigned by the operator that created the object (only the preprocessing operators don't use TAGs) and is used to link all the detected objects to a common collection. By default, the TAG inherits its name from the operator that generated it. For example, the objects created by BLOB FINDER will have a TAG called “Blob Finder”. TAGs are Vision4D's way of creating logical and hierarchical relationships between objects without duplicating them. A single object can belong to many collections and can therefore have numerous TAGs attached to it. The rule is: one object, many TAGs.
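The "one object, many TAGs" rule is essentially a many-to-many mapping between objects and named collections. A tiny Python sketch of that data model (hypothetical names, purely illustrative):

```python
# tag name -> set of object ids; objects are referenced, never duplicated
tags = {}

def tag_object(obj_id, tag):
    """Attach a TAG to an object by adding its id to that tag's collection."""
    tags.setdefault(tag, set()).add(obj_id)

# object 7 is created by Blob Finder, then also kept by a later filter step
tag_object(7, "Blob Finder")
tag_object(7, "Nucleus")
```

Because only ids are stored per collection, the same object can appear under any number of tags at no extra storage cost.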
The operator's name can be edited. We strongly suggest renaming operators according to the type of structure detected (e.g., DAPI Nuclei, GFP Cells, etc.).
This guide explains how to perform the SIS file structure optimization (Defragmentation).
All redundancies and unused space are removed, and the SIS file is compacted and packed. This task reduces the overall file size.
Defragmentation is strongly recommended after any editing of the Image-Set, Channels, Planes or Time points.
It is also useful after stitching, volume fusion or additional dataset imports. Always check the file status before transferring the file to a permanent storage unit or before sharing it by uploading it to any transfer service.
This guide describes how to create maximum intensity projections, and how these can be used for further downstream processing.
When using fluorescence microscopy, especially for in vitro cell cultures it is common to acquire multiple planes to capture the full depth of the sample. In many cases the 3D information is useful and arivis can use it to segment 3D objects and measure volumetric properties. However, in some cases the main advantage of acquiring a stack is to be able to get the whole sample in focus at the optical resolution that allows us to measure the features of interest. In such cases compressing the stack into a single plane can provide all the information that is required while working from much simpler, and smaller datasets. Maximum Intensity Projections (MIPs) are an efficient way to compress such images into a single plane.
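Conceptually, a MIP simply keeps, for each (y, x) position, the brightest voxel along the projection axis. A minimal numpy sketch (generic, not the arivis implementation):

```python
import numpy as np

def maximum_intensity_projection(stack, axis=0):
    """Collapse a z-stack into a single plane by keeping the brightest
    voxel along the chosen axis for each remaining position."""
    return stack.max(axis=axis)

# 3-plane stack: each output pixel is the maximum across the planes
stack = np.array([
    [[1, 9], [2, 2]],
    [[5, 3], [8, 1]],
    [[4, 4], [0, 7]],
])
mip = maximum_intensity_projection(stack)
```

Note that this discards all depth information, which is exactly why MIPs yield much smaller, simpler datasets, and why they are only appropriate when the 3D information itself is not needed for the analysis.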
The first point to note is that many acquisition systems are capable of producing these projection images at the time of capture, or at the time of saving, and that if our imaging device is capable of doing this automatically this may be the most efficient way of producing these images for analysis. This is generally the preferred method because:
Having said all this, arivis Pro can produce Maximum Intensity Projections from the Projection Viewer.
As mentioned in the introduction, it is best, where possible, to generate the MIPs at the point of image capture.
If we want to use images in a batch analysis, it may be preferable to save the MIPs as new documents to facilitate the batch configuration, since different pipeline parameters will most likely be required to process MIPs than to process stacks.
For simple snapshots to be added to presentations or publications, copying the viewer content to the clipboard, at the resolution currently displayed on the screen, is the fastest way to export the results, and in most cases produces images of acceptable quality. Creating MIPs as described here is only necessary if we want the full native resolution of the dataset or if we want to use it in image analysis pipelines.
For publication it is also worth considering whether a 3D snapshot or video could be a better output.
Very short, basic overview of how to create and run pipelines in ZEISS arivis Pro.
The video above shows the complete process of creating, modifying and executing a pipeline for tracking on a single dataset. Let's break down the various steps.
All segmentation operations, and the operations needed to enhance or otherwise modify the segmentation, can be built into a pipeline within the analysis panel. When we first open the analysis panel we can create new pipelines from scratch or use existing sample pipelines.
The first operation in our pipeline is the Input ROI. This operation is included by default in every pipeline and allows us to limit the portion of the dataset to be analysed. The main effect of restricting the Input ROI is to speed up pipeline execution. This is particularly useful during the pipeline building process, as we might go back and forth through the pipeline trying to optimise the parameters, but it can also be used to facilitate the segmentation of very large objects (objects typically larger than 1000 pixels across), and to avoid wasting time processing portions of the image that are not of interest, without the need to crop the data.
The Blob Finder is a popular segmentation operation available in arivis. It is a fairly powerful operation because it is quite robust in dealing with noise and uneven backgrounds without the need for additional image pre-processing. The exact details of how this operation works are covered in the help files, which we can access by pressing the F1 key on our keyboard.
The first parameter that needs to be set in any segmentation operation is the channel from which the segmentation extracts objects. Most operations use only one channel for segmentation, but some, including the Machine Learning and Deep Learning segmenters, can use multiple channel inputs.
When setting up any segmentation operation, we usually have a couple of parameters to set, and we can use the preview to help us set the correct values.
The exact values we used in this case aren't particularly important; by using the preview we can adjust the parameters until the segmentation seems optimal. If we use this pipeline with multiple images, it is of course also important to use the same settings for all our images, and therefore to test on a variety of images.
Every segmentation operation, whether it creates de novo objects or modifies existing ones, uses tags to help us select the objects in downstream pipeline operations. The default tag is the name of the operation, which is fine as a first default but likely not ideal if our pipeline contains several segmentation or object processing operations of the same type; it is therefore recommended to change the tag to something more appropriate.
Useful tip with regards to naming: start with names that describe the process then narrow down the naming to specific object types. For example, we might start with a tag "DAPI Seg" and then use the tag "Nucleus" once all false positives have been removed.
Whenever we create or modify objects, the purpose is usually to extract some useful numerical value pertaining to these objects. In arivis we call these Features of the objects, and these can include values like:
The complete list of available features is covered in the help files (User Interface>Additional Windows> View Objects> Object Features), and additional features can be created by using Custom Features. Custom Features can be used to:
The Export Object Features operation allows us to create an Excel spreadsheet containing the numerical information generated by the pipeline. In our example we exported a spreadsheet containing the information pertaining to both tracks and the tracked objects.
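As a rough illustration of where such per-object values come from, here is a scipy sketch that derives two simple features (voxel count and mean intensity) from a label image. The feature names are hypothetical and this is not how arivis defines its features; it only shows the general mechanism of measuring each labeled object against the original intensity data.

```python
import numpy as np
from scipy import ndimage

def object_features(intensity, labels):
    """Compute a few per-object features from a label image
    (illustrative only; not the arivis feature definitions)."""
    ids = [i for i in np.unique(labels) if i != 0]              # 0 = background
    counts = ndimage.sum(np.ones_like(labels), labels, ids)     # voxels per object
    means = ndimage.mean(intensity, labels, ids)                # mean intensity per object
    return {int(i): {"VoxelCount": int(c), "MeanIntensity": float(m)}
            for i, c, m in zip(ids, counts, means)}

intensity = np.array([[10, 10, 0],
                      [10, 0, 20],
                      [0, 20, 20]])
labels = np.array([[1, 1, 0],
                   [1, 0, 2],
                   [0, 2, 2]])
features = object_features(intensity, labels)
```

A table of such per-object rows is exactly what ends up in the exported spreadsheet, one row per object with one column per feature.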
The general process of doing image analysis in arivis is fairly simple. We use the Analysis Panel to create Pipelines. Those pipelines are built from individual operations that work with each other to extract the information we need. Pipelines can then easily be re-used with other images as needed, including in batch mode to streamline the process.
Of course, the process we described here covers only the basic principles of pipelines:
The full breadth of what can be done in a pipeline cannot be covered here. The inclusion of Machine Learning and Deep Learning makes it possible to segment objects that were previously impossible to segment, and the pipeline tools allow us to extract all sorts of useful information from these segmentations. Please check our pipeline examples to find out more about the types of information arivis can extract from images, and to learn more about how individual operations work.
Finally, again since our Knowledge Base and sample pipelines couldn't hope to fully cover what can be achieved in a pipeline, don't hesitate to get in touch with your local ZEISS representative.
This article covers the Spine Tracer pipeline operation and how it can be used in conjunction with Neurite Tracer and Neuron Tracer operations.
The Spine Tracer module was introduced in arivis Pro 4.3 as a complement to the Neuron Tracer functionality introduced in arivis Vision4D 4.0. It is designed to identify and quantify neuronal spines. It requires that neurites have already been segmented in the current pipeline as a prerequisite.
The Spine Tracer is then used to detect and quantify dendritic spines in the immediate vicinity of the detected neurites.
As mentioned above, the first prerequisite for spine detection is having already detected neurites. We can use either the Neurite Tracer or Neuron Tracer for this task. The Spine Tracer operation can be added immediately after the Neurite Tracer in the pipeline, or after some filtering operations.
The Spine Tracer can use 3 different methods to detect spines:
These options make it easy for the majority of users to carry out spine quantification with little additional effort while providing the flexibility to address more challenging images.
In every case below, the first step is to select the input tag for the trace objects. Most pipelines of this type will only have one tag for the traces, but it is possible to have several tracing operations in the same pipeline; we therefore need to take care to select the correct tag for the traces at this stage.
The integration of AI (ML and DL) algorithms in arivis has hugely facilitated the segmentation of complex structures from noisy images while requiring little image processing experience from users. Because of this, and thanks to significant improvements in GPU computing, it is now possible to use AI models that can be significantly better and overall faster than traditional algorithms in all aspects of the image analysis workflow. Indeed, the AI Assisted method mentioned above uses a custom spine detection model to facilitate this type of analysis. However, it is also possible to use AI models for both neurite enhancement and spine detection if the included algorithms do not provide adequate results.
This complete pipeline uses a custom DL model to enhance both the neurites and the spine heads as separate classes, then uses the resulting probability maps to trace the neurites and segment the spines:
Note that the model must be created in advance of pipeline execution. These models can be created using arivis Cloud, and model creation is included in the arivis AI Toolkit module, but arivis also supports any DL model that can be saved as an ONNX file for this purpose.
The first step in this case is to use the Deep Learning Reconstruction operation to create the probability maps. In our example the model has 2 classes, but it is also possible to run 2 separate DL reconstruction operations, each with its own single class to enhance the spines and neurites separately, or use DL to enhance the spines alone if the arivis tracing algorithms suffice for trace detection.
These probability maps are stored as temporary new channels that are available to the pipeline but automatically deleted upon pipeline completion. Consult this article on configuring the Result Storage operation to learn about storing the probability map permanently if needed.
In our example, the Neurite Tracer uses a threshold based algorithm on the Neurites probability map to detect the neurite. The Spine Tracer then uses both probability maps for the spine detection. The Trace channel is used to enhance the neck detection while the Probability map channel is used to detect the spines.
As before we still have the max. spine length parameter, but we also have a Head threshold parameter to optimise the spine detection. Since probability maps aren't necessarily binary and the results will depend on the quality of the model, some experimentation by the user with this parameter may be required to obtain the best results.
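The general idea behind a head threshold on a non-binary probability map can be sketched as follows (a generic thresholding-plus-labeling illustration, not the arivis spine detection algorithm): pixels above the threshold are kept, and each connected region becomes a candidate spine head.

```python
import numpy as np
from scipy import ndimage

def segment_heads(probability_map, head_threshold=0.5):
    """Threshold a probability map and label the connected regions as
    candidate spine heads (sketch of the general idea only)."""
    mask = probability_map >= head_threshold   # keep confident pixels
    labels, count = ndimage.label(mask)        # one label per candidate head
    return labels, count

# mock probability map with two high-probability regions
prob = np.array([
    [0.1, 0.8, 0.1, 0.0],
    [0.2, 0.9, 0.0, 0.7],
    [0.0, 0.1, 0.0, 0.6],
])
labels, count = segment_heads(prob, head_threshold=0.5)
# raising the threshold keeps only the most confident region
labels_strict, count_strict = segment_heads(prob, head_threshold=0.85)
```

This is why the threshold needs tuning per model: a well-calibrated model gives near-binary maps where the value barely matters, while a weaker model produces intermediate probabilities where small threshold changes split, merge, or drop candidate heads.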