This guide explains how to create a density distribution map, also called a "Heat-Map". The script generates contiguous boxes in the X, Y and Z directions that can be used as ROIs for further analysis (compartmentalization, gradient distribution, etc.).
Python script code usage rights: The user has permission to use, modify and distribute this code, as long as this copyright notice remains part of the code itself: Copyright (c) 2021 arivis AG, Germany. All Rights Reserved.
In order to define the contiguous sub-regions (sampling volumes), some parameters of the script should be adjusted to match your analysis needs. These parameters are located in the code area labeled USER SETTING (see the sketch below).
SIZE_BOX_X : Set the X sub-volume size.
SIZE_BOX_Y : Set the Y sub-volume size.
SIZE_BOX_Z : Set the Z sub-volume size.
All the values are expressed in metric units (µm).
If one of the box dimensions is larger than the corresponding volume dimension, the script will not run and an error message is issued.
Only the parameters located in the USER SETTING area can be modified. Don’t change any other number, definition or text in the code outside this dedicated area.
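As an illustration only, the USER SETTING area typically looks like the sketch below. The parameter names SIZE_BOX_X, SIZE_BOX_Y and SIZE_BOX_Z come from the script itself; the example values and the size check shown here are assumptions added for clarity, not the script's actual implementation.

```python
# --- USER SETTING -------------------------------------------------
# Sub-volume (box) size, expressed in metric units (µm).
SIZE_BOX_X = 50.0   # X sub-volume size
SIZE_BOX_Y = 50.0   # Y sub-volume size
SIZE_BOX_Z = 20.0   # Z sub-volume size
# --- end of USER SETTING ------------------------------------------

# Hypothetical validation step: if a box dimension exceeds the matching
# volume dimension, the script stops and reports an error.
def check_box_size(volume_size_um):
    """volume_size_um: (x, y, z) extent of the image volume in µm."""
    for box, vol, axis in zip((SIZE_BOX_X, SIZE_BOX_Y, SIZE_BOX_Z),
                              volume_size_um, "XYZ"):
        if box > vol:
            raise ValueError(
                f"SIZE_BOX_{axis} ({box} µm) exceeds the volume extent "
                f"({vol} µm) along {axis}.")
```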
Run the DivideScope RevE (3_4) Python script by pressing the Run Script button or the F5 key.
Note: Activate the Output panel if it is not already displayed. The status of the script execution (including errors) is shown there.
This is the script result, a 3D matrix of sub-volumes:
The sub-volume segments are shown in the objects table under the tag "Script".
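The box creation itself is handled by the script through the arivis API, but conceptually the contiguous sub-volumes are simply a tiling of the volume's bounding box. The sketch below is a plain-Python illustration of that idea; the function name, argument layout and border-clipping strategy are assumptions made here, not the script's implementation.

```python
def tile_volume(volume_size_um, box_size_um):
    """Return the (x0, y0, z0, x1, y1, z1) corners, in µm, of contiguous
    boxes tiling the volume along X, Y and Z.

    Boxes at the upper border are clipped to the volume extent, so the
    whole volume is covered even when it is not an exact multiple of
    the box size. (Illustrative sketch only.)
    """
    boxes = []
    vx, vy, vz = volume_size_um
    bx, by, bz = box_size_um
    z = 0.0
    while z < vz:
        y = 0.0
        while y < vy:
            x = 0.0
            while x < vx:
                boxes.append((x, y, z,
                              min(x + bx, vx), min(y + by, vy), min(z + bz, vz)))
                x += bx
            y += by
        z += bz
    return boxes

# Example: a 200 x 200 x 60 µm volume tiled with 50 x 50 x 20 µm boxes
# yields 4 x 4 x 3 = 48 contiguous sub-volumes.
print(len(tile_volume((200, 200, 60), (50, 50, 20))))  # -> 48
```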
The pipeline has to be created according to the user's analysis requirements as well as the sample typology.
The sample labeling, the imaging technique (fluorescence, EM, tomography, bright-field, ...) and the image characteristics are important in driving the pipeline setup.
Knowledge of the biological structures under evaluation, their behavior and the expected trend of their features is equally important. All of the above information should be used to build a target-driven pipeline.
To achieve the goals of this application note, only a couple of operators are mandatory, as described below:
Please refer to arivis Pro Help for more details.
The pipeline can be executed step by step (back and forth). This method allows running and undoing a single operation. Either the arrow buttons or the Operation list can be used to step through the list of operators.
This icon, located on the right side of the operator title bar, shows the operator status.
Task running:
Task completed:
Results (segments and measurements) will be stored in the dataset only if the Store Objects operator has been set correctly. Tick the appropriate options as shown below before completing the pipeline execution.
Features can be added or removed from the data table using the Feature Column command.