This article explains how to use the Compartments operation with imported Atlas objects to get segmentation results per atlas region.
Anatomical atlases are valuable tools for contextualising segmentation results: they allow users not only to obtain object counts and morphologies, but also to see how these change across the regions of an organism or organ. This article does not show how to import and register atlas objects to a sample dataset; it only covers how to use pipelines that combine imported atlas objects with image segmentation to contextualise the information.
As always, the first step in the pipeline is the selection of the Input ROI. To speed up the pipeline it is generally advisable to limit the analysis to a specified region of interest; in this case, that may mean restricting it to the bounds of specific regions or of the whole atlas. In our example, the atlas regions have already been imported as objects, so we can use the bounds of these objects in XY together with their extent in Z to set the input ROI. The bounds and plane range will depend on which objects are selected, whether just one, a few, or all of them.
If any voxel operations are required to facilitate the segmentation these can be added to the pipeline prior to any segmentation step.
To use the atlas objects for compartmentalisation they must be imported into the pipeline. Here we've used the Import Document Objects operation. Because there are no other objects in the dataset at present, the Tag field can be left empty; otherwise, tags can be used to specify exactly which objects should be used. We've also renamed the operation to "Atlas" so that all the objects get this tag, which makes subsequent pipeline operations easier to read.
From there on we can segment and filter any of the objects that we want to compartmentalise. In our example we've used a blob finder segmentation and further filtered those objects to more specifically identify cells, but any segmentation tool can be used with whatever filter is required.
Having segmented all the objects to be compartmentalised, the final step is to group objects to their specific atlas regions. This is done using the Compartments operation.
Object tags are added to the pipeline using the green "+" sign and arranged in order of hierarchy using the arrows. In the example above, objects with the tag "Cell Flt" have been configured to be children of objects with the tag "Atlas". The threshold of inclusion has been set so that if at least 50% of an object with the tag "Cell Flt" is within the boundary of an object with the tag "Atlas", it becomes a child of that object.
Finally, as with any pipeline, we must save the objects.
The results of the pipeline are compartmentalised objects. There are several ways to review and interpret the data in the Objects table. Selecting the "Atlas" tag displays all the atlas objects and their features. These can include any morphological features of the objects (volume, surface area, sphericity, etc.), but also data concerning the children compartmentalised within them, such as the number of children, or derived statistics like numerical density, which can be defined as a custom feature by dividing the number of children by the volume of the parent.
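As an illustration, the numerical-density custom feature described above amounts to a simple division. A minimal Python sketch (the region names, counts, and volumes are made up for illustration; this is not the arivis custom-feature syntax):

```python
def numerical_density(num_children, parent_volume_um3):
    """Numerical density: children per unit parent volume (objects/µm³)."""
    if parent_volume_um3 <= 0:
        raise ValueError("parent volume must be positive")
    return num_children / parent_volume_um3

# Hypothetical atlas regions with child-cell counts and volumes
regions = [
    {"name": "Cortex", "children": 120, "volume_um3": 4.0e6},
    {"name": "Hippocampus", "children": 60, "volume_um3": 1.5e6},
]
for r in regions:
    r["density"] = numerical_density(r["children"], r["volume_um3"])
```

Comparing densities rather than raw counts makes regions of different sizes directly comparable.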
The Objects table can also be arranged in the so-called "Master-Detail" view, where parents, tracks, or groups appear in the top part of the table and their children appear at the bottom. In this layout it is possible to view not just the features of the groups, but also those of the objects within them.
Image courtesy: Kirsty Craigie, University of Edinburgh, UK.
This article explains how to export movies from images in arivis Pro.
As we can see above, the Movie Export function can be a fast and effective way to create an animation. However, it is very limited by the types of transition and effects we can apply to images. Also, for datasets with a lot of depth a volumetric rendering animation can be much more effective at communicating spatial relationships, and using clipping planes and object display transitions can make it easier to understand the relationships between objects. As a whole, volumetric rendering animations can create a much more powerful impression of the image and analysis.
Clicking the Export button above opens the Storyboard Movie Export window where we can configure our export options.
Unlike snapshots and HD image exports, which can normally be held in memory for easy copy & paste into an output destination (e.g. a PowerPoint presentation), movies are generally much larger and therefore need to be written to the hard disk, so the first options we find relate to the save location and name. These are entirely up to you (the default is to save the movie with the same name as the original file, in the same folder as the dataset). For the video format we need to consider where the video will be shown. Most modern devices support the H264 format, and it is our recommendation. It is also a good format for importing into video editing software if we want to splice our arivis video with other footage, but some older systems may not support it, and MPEG4 may be more suitable in such cases.
After the file saving settings we have the movie video output options. These can make a significant impact on the quality of the video output and the time required to create it.
Generally speaking, the higher the quality of the video the longer the movie export will take.
The Video Resolution setting affects both the size of the output file and the quality of the output. Here, the way the video will be presented matters. If we include the video in a PowerPoint presentation, the XGA or HD (720p) options are usually fine, especially if the video does not take up the entire slide. If we export it for full-screen viewing, then Full HD (1080p) up to Ultra HD (4K) may be preferable, but we should remember that the time it takes to render the video will increase significantly, and so will the output file size, for what may appear to be only a marginal gain in image quality in many cases. For example, going from 1080p to 4K will typically double the rendering time. arivis also supports 360° video output, which can be quite interesting in some cases, but it requires viewers that support this format and some user interactivity to set the view angle during playback.
The frame rate affects how smooth the video playback can be. Low frame rates (below 24FPS) can appear a little jerky, while 25-30FPS is the standard for most videos. Using 60FPS can appear smoother, and is recommended for 360° videos, but some video players won't be able to play back these high frame rates smoothly, resulting in noticeable jumps as the media player lags and then catches up. The most important thing to note is that going from 30FPS to 60FPS will require at least twice as much time to process the video and will result in a file that is about twice as large.
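The scaling claims above can be sanity-checked with simple arithmetic. A small sketch, assuming render time and file size scale roughly linearly with pixel count and frame count (a simplification; real codecs and renderers vary):

```python
def relative_cost(width, height, fps, base=(1280, 720, 25)):
    """Rough scaling factor for render time / file size versus a
    720p-at-25FPS baseline, assuming cost is proportional to
    pixels-per-frame times frames-per-second."""
    base_w, base_h, base_fps = base
    return (width * height * fps) / (base_w * base_h * base_fps)

# 1080p at 30 FPS costs roughly 2.7x the 720p/25FPS baseline
factor = relative_cost(1920, 1080, 30)
```

Doubling only the frame rate (720p at 50FPS) doubles the factor, which matches the "twice the time, twice the size" rule of thumb above.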
Generally, we recommend using a low video resolution and frame rate (720p at 25FPS) to produce videos quickly, switching to higher resolutions and frame rates only when needed to really showcase the data.
This setting is mostly related to how the software handles large datasets. By large datasets we mean datasets for which a single timepoint cannot easily fit in the video memory of the GPU.
Viewing data in 3D and rendering videos are both highly dependent on the GPU. We cover how arivis handles large data in the 4D viewer in this article, but in short: since most GPUs have a finite amount of video memory and a finite ability to render 3D datasets within a given time, arivis typically down-samples data to make it possible to render quickly in 3D. This down-sampling can lead to a noticeable drop in the level of detail we can render and can also introduce noticeable down-sampling artefacts. As explained in this article about rendering HD screenshots, it is always possible to render both screenshots and videos at the highest level of detail, but for large datasets this comes at the cost of long processing times.
The data resolution slider adapts both to the PC configuration and to the dataset.
The scale goes from 64MB of VRAM usage on the left to the full resolution of the dataset on the right, and each graduation mark typically represents a doubling of the memory usage.
Note that since the 3D viewer renders an 8-bit version of the images, the VRAM requirement may be significantly smaller than the actual dataset size. Also, since only one timepoint can be rendered in any given frame, the amount of memory required is limited by the amount of data in a single timepoint.
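To estimate whether a single timepoint fits in the green part of the scale, its 8-bit VRAM footprint can be approximated from the stack dimensions. A rough sketch (the example dimensions are hypothetical, and real memory use includes additional overhead):

```python
def vram_mb(size_x, size_y, size_z, channels, bytes_per_voxel=1):
    """Approximate VRAM needed for one timepoint, assuming 8-bit
    rendering (1 byte per voxel per channel)."""
    return size_x * size_y * size_z * channels * bytes_per_voxel / (1024 ** 2)

# e.g. a 2048 x 2048 x 500 stack with 2 channels needs about 4 GB
mb = vram_mb(2048, 2048, 500, 2)
```

Comparing this figure with the GPU's available VRAM gives a quick indication of how far the data resolution slider can be pushed while staying in the green zone.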
The colour coding reflects the hardware configuration. The green part of the scale represents the amount of VRAM available on the GPU. Sticking to the green part of the scale reduces the loading times and also leads to much faster renders.
Since most computers typically have more system memory than video memory, arivis can use system RAM as temporary storage for the image data instead, up to the amount available. Loading is usually comparatively faster, because reading data from disk into RAM is quicker than loading it into VRAM, but rendering time suffers because the GPU must then read data from RAM rather than from the faster, closer VRAM.
But since some datasets can be larger than even the available RAM, arivis also allows creating high-resolution videos, even at native resolution, using a hybrid rendering approach, though this comes at the cost of rendering time.
If we are rendering a very large dataset, it is recommended to use the minimum data resolution required to see the detail we need. Some experimentation with HD screenshots may be worthwhile to find the optimal settings. It may also be worth splitting the animation into several movies and using volume clipping to narrow the rendering to the specific regions of interest that require a high level of detail, and then, if necessary, merging the movies into a single file using video editing software.
None of the options above can be used to record application processes like object creation, pipeline execution, or other image and object modifications. However, as mentioned above, a picture is worth a thousand words, and a movie even more so; sometimes it is easier to create a movie to explain a process. In such cases it is better to use screen recording software. Some basic screen recorders do not include video editing tools, but their movies can be imported into video editing software; other screen recorders include advanced movie editing options. An example can be seen above, where we used screen recording software to capture the process of generating and exporting the storyboard.
How to run the Random-subsampling operator in arivis Pro:
In order to run the Random-subsampling operator in arivis Pro, the objects of interest should first be created and imported into the pipeline. No custom Python environment is required for this operator.
This operator is designed for applications dealing with large numbers of objects (hundreds of thousands to millions), where working with the entire population, while feasible in arivis Pro, would be very time-consuming without advancing the biological findings.
We suggest first testing this operator on a small subset of the objects, as running it on a large dataset can take a few hours.
Select the Python Image Filter operator and use the button with three dots to upload the Random-subsampling operator. The .py script file can be downloaded from here.
Input_tag: the object tag to use for random selection.
Subset-size: the size of the randomly sampled population. It should be smaller than the total number of objects with the given tag.
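The operator's behaviour can be sketched in plain Python: sampling without replacement from the tagged objects, with a guard against subset sizes larger than the population. This is an illustrative stand-in, not the operator's actual source:

```python
import random

def random_subsample(object_ids, subset_size, seed=None):
    """Return a random subset of object IDs, sampled without replacement."""
    object_ids = list(object_ids)
    if subset_size > len(object_ids):
        raise ValueError("subset size must not exceed the number of objects")
    rng = random.Random(seed)  # a seed makes the selection reproducible
    return rng.sample(object_ids, subset_size)

# e.g. pick 500 of 100,000 hypothetical object IDs
subset = random_subsample(range(100000), 500, seed=42)
```

Because the sample is drawn without replacement, each object appears at most once in the resulting subset.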
When importing objects created on another image set, as in the case of atlas regions, it is sometimes necessary to move them to match the destination image set. This article explains how objects can be imported and moved to a new position.
Atlases can be a very useful tool for contextualising segmentation results by allowing the compartmentalisation of objects to specific regions. Creating these regions is often done on a "standard" image set, but when they are copied over to a new image set, slight changes in the sampling can cause a misalignment of the atlas regions with the destination image set. The Copy/Move Objects... function allows the user to move the objects to the correct location.
If the objects have been exported as an objects database file (.OBJECTDB), they can be imported simply from the Objects menu or the Im/Export button in the objects table.
Once imported the objects will appear in the Objects table and the 2D/4D viewers.
To move the objects we start by selecting all the objects that need to be moved. To do this, simply select any object in the Objects table then press Ctrl-A on your keyboard to select all. Then, in the Objects menu we can select the option to Copy/Move Objects...
This will open a new window where we can select the parameters for the move. In this case we first need to move all the objects 106 planes back up the stack:
Note that the Move option has been selected at the top and we are currently only moving the objects -106 planes relative to their current position. Selecting an absolute position would move every object to the specified plane.
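The difference between a relative and an absolute move can be illustrated with a small sketch (the function and parameter names are hypothetical, not the arivis API):

```python
def move_plane(current_plane, value, mode="relative"):
    """Relative mode offsets each object's plane by `value`;
    absolute mode sets every object to plane `value`."""
    if mode == "relative":
        return current_plane + value
    if mode == "absolute":
        return value
    raise ValueError("mode must be 'relative' or 'absolute'")

# Three hypothetical objects, each shifted 106 planes back up the stack
planes = [210, 340, 512]
moved = [move_plane(p, -106, "relative") for p in planes]
```

With a relative move the spacing between objects is preserved; an absolute move would collapse all objects onto the same plane.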
Finally, if an XY adjustment is also required we can use the Move Objects Tool in the 2D viewer to drag them to the correct location:
Kirsty Craigie, The University of Edinburgh.
This guide explains how to perform a Compartmentalization analysis.
It shows how to set up a study focused on the interactions and relationships between the compartments of the structure under evaluation.
The concept of compartmentalization is closely tied to the study of the interactions and relationships between a structure's compartments.
Complex hierarchies between structures can be established and evaluated using this operator. Objects inside a parent structure can be selected, and their position within the main structure, their distribution (clustering), and other features can be evaluated. A child object can in turn be a parent for other objects. The number of available nested levels is, theoretically, unlimited.
In the example above, the compartmentalization extends over 3 levels. The result is a hierarchical link between the cell (reference) and its nucleus (subject). The nucleus is, in turn, related to the vesicles it contains. Finally, the vesicle count per cell is obtained.
Compartmentalization analysis is not limited to biological samples, even if this is the most common situation. Any structure located inside a defined surrounding volume can be evaluated.
It is not mandatory for the parent object to be a defined structure (e.g. a cell or nucleus); it can be an anatomical region or, generally speaking, a sub-region of interest within the sample volume. These regions can be drawn either manually or using the interactive method.
The compartmentalization approach is the basis on which more complex and sophisticated evaluations can be performed.
The Compartments operator allows the user to set the structure hierarchy used to evaluate the relationships between levels. Several nested levels are possible, as well as two or more compartments at the same hierarchical level.
Inputs: Select the TAGs of the structures to be compartmentalized.
The top list shows the reference TAG, while the other lists show the children TAGs. By default, 2 hierarchical levels are provided.
Additional children can be added using the + Add input
Moves the selected TAG to be a "child" of the TAG above, pushing it down in the hierarchy.
Moves the selected TAG to the same level as the TAG above, pushing it up in the hierarchy.
Opens the Tag relationship panel, from which various options can be defined.
The following options (rules) are available:
Child object must be completely inside the parent structure.
Checks whether a "child" object is partially covered by a "parent" structure. The minimum amount of overlap can be set using either the slider or the text box.
The Min. Overlap value defines the percentage of the child object's volume that must be covered by the parent structure for it to be considered compartmentalized.
If you choose 0%, it still checks whether there is at least 1 voxel of coverage.
Note
Both the selected parent TAG and the child TAG can label multiple objects. For each object in the parent TAG, all the child objects are compared against it to establish whether they belong to it.
The child object must be located within the Max. Distance value (children's distances are measured outside the parent structure only).
These 3 options can be set independently or in combination.
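The Inside and Intersecting rules can be illustrated on voxel sets. A simplified sketch, in which sets of coordinates stand in for the dense voxel masks the software actually uses:

```python
def overlap_fraction(child_voxels, parent_voxels):
    """Fraction of the child's voxels covered by the parent (0..1)."""
    child, parent = set(child_voxels), set(parent_voxels)
    return len(child & parent) / len(child)

def is_compartmentalized(child_voxels, parent_voxels, min_overlap=0.5):
    """Intersecting rule: the child belongs if its coverage meets the
    threshold; at 0% at least one covered voxel is still required."""
    frac = overlap_fraction(child_voxels, parent_voxels)
    if min_overlap == 0:
        return frac > 0
    return frac >= min_overlap

# A 10x10 parent region and a 4-voxel child straddling its boundary
parent = {(x, y, 0) for x in range(10) for y in range(10)}
child = {(x, 0, 0) for x in range(8, 12)}  # 2 of its 4 voxels are inside
```

At a 50% threshold this child is included; raising the threshold to 60% would exclude it.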
The Compartments operator generates different outputs. Each of them can be enabled and named freely by the user. Multiple outputs allow the user to better group and distinguish between the different features available.
From the operator menu, the main compartment TAG can be renamed. By default, the TAG name is the same as the input TAG name.
Click on the colored box to change the objects colorization.
Each parent structure reports the children belonging to it according to the compartmentalization settings.
A new TAG collecting the parent structures can be created. Press the «Configure output» icon to the right of the parent selection.
Check ON to activate the option and type the Tag name in the text box.
Pull down the list of additional Features on the right in order to create group statistics
The parent structures are shown with the related features.
A new TAG collecting the children structures can be created.
Press the «Configure output» icon to the right of the child selection.
Check ON to activate the option and type the Tag name in the text box.
Pull down the list of additional features on the right in order to create group statistics.
All the children objects are shown with the related features.
Additional features, related to a specific relationship criterion, can be set.
This article goes through the basics of segmenting objects in Vision4D.
Segmentation in Vision4D is done through the Analysis panel. In the analysis panel, we can build pipelines by adding operations that create objects and assign them tags that we can use in downstream pipeline operations to further refine the analysis.
A simple pipeline like the one above is unlikely to be the whole of the image analysis. We can also:
Use cases and result interpretation for the Compartments pipeline operation.
In a nutshell, it establishes relationships between objects, assigning child and parent links, and permits the analysis of those relationships.
A parent-child relationship is a hierarchical structure in which some objects belong to others. The individual objects have their own characteristics (volume, surface area, intensity), but also features dependent on those relationships (ID of the parent, number of children). The Compartments operation is specifically concerned with establishing the relationships between parent segments and their children. Other types of parents exist, such as tracks or groups, but the Compartments operation deals specifically with the overlap or proximity of different object types.
Establishing parent/child relationships can be valuable in a variety of cases, such as:
In each of these cases, the number and size of the objects of interest are relevant based on their position relative to a parent.
The first thing needed before adding the Compartments operation is a set of parent and child objects. Since many segmentation methods can produce unwanted objects, it is also best to apply any object filtering operations before adding the Compartments operation to the pipeline.
The Compartments operation can then be added to the pipeline like any other operation.
Once added to the pipeline we can configure the operation parameters.
Each Compartments operation can have one Parent object tag and multiple children object tags. If there are several potential parent objects in your pipeline, either multiple Compartments operations must be used (one for each parent type), or the parents must be given a common tag with the Combine Segment Outputs operation. The tag to be used for the parents can be selected from previous pipeline tags.
Parents can have multiple children tags. When added to the pipeline the operation starts with one parent and one child tag. To add another child object tag we can just click "+ Add input" and select from the available object tags. Note that the children cannot be either the parents or a subset thereof.
Once the children have been selected we need to configure by what criteria children can be included. This is done by clicking the relation icon:
Then we can select under what conditions objects can be considered a child of any given parent:
If the Inside rule is selected, any potential child object that is fully within the boundaries of the parent will be included.
If the Intersecting rule is selected, a child that crosses the boundary of the parent can be included, depending on the amount of overlap between the parent and child candidate, as defined by the Min. Overlap slider.
If the Close rule is selected, potential children can also be included based on the proximity of these objects to potential parents. Note that children can only belong to one parent so if a child is within the max allowed distance of two parents it will be assigned to the closest one.
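The Close rule's closest-parent assignment can be sketched as follows (centroid distances are used here for simplicity, and the names and positions are hypothetical; the actual operation works on full object geometries):

```python
import math

def assign_to_closest_parent(child_pos, parents, max_distance):
    """Assign a child to the nearest parent centroid within max_distance.
    Returns the parent name, or None if no parent is close enough."""
    best_name, best_dist = None, max_distance
    for name, pos in parents.items():
        dist = math.dist(child_pos, pos)
        if dist <= best_dist:
            best_name, best_dist = name, dist
    return best_name

# Two hypothetical parents; the child is within range of both,
# so it is assigned to the closer one
parents = {"A": (0.0, 0.0, 0.0), "B": (10.0, 0.0, 0.0)}
owner = assign_to_closest_parent((4.0, 0.0, 0.0), parents, max_distance=8.0)
```

Because a child can belong to only one parent, ties in range are resolved purely by distance, as described above.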
The result of the operation is a special type of segment grouping. Each parent object will have a number of children, and each child will be linked to the parent. From this we can do a variety of things.
First, the number of children is a feature of the parents. You can display the number of children a parent has as a feature in the Objects table:
Likewise, the ID of the parent or children can also be displayed as a feature of the respective object types. But most importantly, those features and the children's features can be used to inform other pipeline operations or custom features.
Along with Custom Features, the objects table layout can also be changed to provide easier access to the features of parent and children.
The default layout for the Objects table is a single table showing all the objects with the selected tags. If the Compartments operation has just been run it may look something like this:
However, in this layout, the table doesn't give any clear indication of which objects are parents and which are children, or the relationship between them. Instead we can switch to the Master/Detail layout where parents can be displayed on the top, and the respective children shown in their own table at the bottom:
In the Master-Detail layout, each table behaves somewhat independently: different tags can be selected at the top and bottom, but the Detail table only shows the children of the parents selected in the Master table.
Of course, the objects table is a powerful tool for visualizing and sorting the results of the Compartments operation. However, in many cases, it can be advantageous to export the results so that further analysis of the results can be done in different software like Excel. In that case, the results can be exported, either out of the Objects table using the Im/export button, or directly in the pipeline by adding the Export Objects Features operation.
The Export Object Features operation can be configured in a variety of ways, including the Master-Detail report as seen above.
As with the Master-Detail layout in the objects table, the features to be exported can be set independently for both the main table export and the details tables.
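As a sketch of what a Master-style export might contain, the following builds a parent table with a derived child-count column using only the Python standard library (the IDs, tags, and column names are hypothetical, not the actual arivis export format):

```python
import csv
import io

# Hypothetical exported rows: parents and children linked by parent ID
parents = [
    {"id": 1, "tag": "Atlas", "volume_um3": 4.0e6},
    {"id": 2, "tag": "Atlas", "volume_um3": 1.5e6},
]
children = [
    {"id": 10, "parent_id": 1, "tag": "Cell Flt"},
    {"id": 11, "parent_id": 1, "tag": "Cell Flt"},
    {"id": 12, "parent_id": 2, "tag": "Cell Flt"},
]

def write_master_csv(parents, children):
    """Write a master table with one row per parent and a child count."""
    counts = {p["id"]: 0 for p in parents}
    for c in children:
        counts[c["parent_id"]] += 1
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["id", "tag", "volume_um3", "children"])
    writer.writeheader()
    for p in parents:
        writer.writerow({**p, "children": counts[p["id"]]})
    return out.getvalue()

master_csv = write_master_csv(parents, children)
```

A table in this shape can be opened directly in Excel or any statistics package for downstream analysis.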
The Compartments operation, together with the objects tables and features, can provide powerful insights into object relationships in image analysis. Unfortunately, it is difficult to do this topic justice in a knowledge base article due to the vast array of potential uses and applications where these tools can be used. Hopefully, this article provides a valuable overview. Please contact support using the link at the top of the page if you require any additional help with your analysis.