Background and integration of tracking systems in arivis Vision4D.
Object tracking is the process of identifying objects and features and measuring changes in these objects over time. It is different from looking at a simple before/after, 2-timepoint dataset and measuring macro changes. Instead, tracking follows specific objects through a series of consecutive time points to better understand the dynamic processes driving the changes seen from start to finish.
Tracking systems take a series of images acquired over multiple time points and then try to identify the same specific objects or features from one time point to the next so that we can measure and monitor specific changes.
Examples of tracking applications include wound healing assays (changes in wound area), cell migration studies (velocity and distance travelled), and the monitoring of intensity changes in signalling events, among many others. What all these applications have in common is that we are monitoring how a specific feature (wound area, velocity, intensity, etc.) changes over a time series.
The process of track creation can generally be considered as two separate processes: recognizing the objects in each time point, and linking those objects from one time point to the next to form tracks.
Both of these tasks can be carried out in a variety of ways depending on the specific application.
For example, object recognition can be done through automatic segmentation of an image, or it could be done by creating a simple region of interest and duplicating that region over all the available time points. Tracking can be done manually by a user making interpretations of the image data, or algorithmically by identifying segmented objects from one time-point to the next.
With this in mind, the tracking accuracy is highly dependent both on the ability to recognize the same object accurately from one time-point to the next, and the ability to recognize the objects in each time point individually in the first place. Both of these are highly dependent on the quality of the image data and the sampling frequency.
Along with image quality being a very significant factor in segmentation, sampling resolution is also critical for correct tracking. If the movement is too great compared to the typical object separation, correct identification of movement can be challenging or even impossible.
If, however, we can increase the acquisition frequency so that the movement from one time point to the next is small compared to the typical object separation, the confidence in the correct identification of tracks improves significantly.
As a rule of thumb, it is generally preferable to take images frequently enough that the typical movement of objects from one time-point to the next is no more than 20% of the typical distance between neighboring objects.
Similarly, in cases where we are looking to monitor intensity changes, it's important to take images frequently enough to measure those changes accurately. If those changes are rhythmical or affected by rhythms in the sample (e.g. heartbeats), the acquisition frequency should be at least 4 times higher than the frequency of the changes in the sample.
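As a rough illustration of these two rules of thumb, the short Python sketch below computes the longest usable frame interval for each; the speeds, spacings, and frequencies used are purely illustrative assumptions.

```python
# Minimal sketch of the two sampling rules of thumb above; all numbers
# (speeds, spacings, frequencies) are illustrative assumptions.

def max_interval_for_motion(speed_um_per_s, neighbor_distance_um, fraction=0.2):
    """Longest frame interval such that objects move no more than
    `fraction` (20% by default) of the typical neighbor distance."""
    return fraction * neighbor_distance_um / speed_um_per_s

def max_interval_for_rhythm(rhythm_frequency_hz, oversampling=4.0):
    """Longest frame interval that samples a rhythmic change
    at least `oversampling` times per cycle."""
    return 1.0 / (oversampling * rhythm_frequency_hz)

# Example: cells moving ~0.5 um/s, spaced ~20 um apart, in a sample
# with a 1 Hz rhythm (e.g. a heartbeat).
dt_motion = max_interval_for_motion(0.5, 20.0)   # 8.0 s
dt_rhythm = max_interval_for_rhythm(1.0)         # 0.25 s
print(f"Acquire at least every {min(dt_motion, dt_rhythm):.2f} s")
```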
All this taken together means that in many cases tracking will not be possible, and looking at macro before/after changes may be the most suitable type of analysis. Also, in many cases, the required sampling frequency, coupled with the exposure time needed to capture images of sufficient quality, may restrict acquisitions to single planes, thereby limiting the ability to measure changes in 4D.
Having acquired images for the purpose of tracking, the first task will be to establish an effective segmentation strategy. Several factors can complicate good segmentation throughout the time series, and most of these can, to some extent, be handled by image and object processing algorithms.
Correct segmentation is crucial for successful tracking. Even a per-time-point error rate as low as 5% means that the mean length of a correctly followed track would only be about 20 time points (roughly 1/0.05). Of course, even incomplete tracks can still provide valuable information, and some degree of error is inevitable, but generally, the better the segmentation, the better the tracking results.
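A minimal sketch of the arithmetic behind this claim, assuming each time point carries an independent error probability p:

```python
# Sketch of why small per-time-point error rates still limit track length:
# if each time point has an independent error probability p, correct track
# lengths follow a geometric distribution with mean ~1/p.

p = 0.05                      # assumed 5% segmentation error per time point
print(1 / p)                  # mean correct track length: 20.0 time points
print(f"{(1 - p) ** 50:.1%}") # only ~7.7% of tracks stay correct for 50 points
```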
Some common problems in segmentation include noise, uneven backgrounds, and touching objects that need to be split apart. In many cases, the Blob Finder segmenter can overcome all of these issues in one operation.
However, as stated above, the accuracy of the segmentation is particularly important for good tracking. Since tracking algorithms will typically try to find corresponding objects in time point n+1 for any objects in time point n, minimizing the number of incorrect or missing objects is particularly important. As such, there are typically two main problems with segmentation with regard to tracking: segmented objects that should not be tracked at all, and objects that are incorrectly split or merged.
Often the segmentation step will create objects that need not be tracked. In most cases, the objects to be tracked tend to have some common features that can be used to distinguish them from those that do not. We can use the Segment Feature Filter operation to tag only those objects we want, as sketched below.
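The sketch below shows the idea behind this kind of feature filtering outside Vision4D; the data structure, feature names, and thresholds are illustrative assumptions, not the Vision4D API.

```python
# Illustrative sketch of the idea behind a segment feature filter; the
# data structure and feature names are assumptions, not the Vision4D API.

segments = [
    {"id": 1, "volume_um3": 850.0, "mean_intensity": 1200},
    {"id": 2, "volume_um3": 40.0,  "mean_intensity": 300},  # likely debris
    {"id": 3, "volume_um3": 910.0, "mean_intensity": 1100},
]

def tag_for_tracking(segs, min_volume=100.0, max_volume=5000.0, min_intensity=500):
    """Tag only segments whose features fall within the expected ranges."""
    return [s for s in segs
            if min_volume <= s["volume_um3"] <= max_volume
            and s["mean_intensity"] >= min_intensity]

print([s["id"] for s in tag_for_tracking(segments)])  # [1, 3]
```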
Problems with splitting can usually be dealt with by refining the segmentation method, either using different splitting parameters or even a different segmentation operation, together with segment feature filters. The main aim is to avoid situations like the one shown here where, depending on the settings used, the tracking algorithm may need to decide what to do with two objects where there should only be one, or vice-versa.
Perfect segmentation is highly unlikely with non-perfect images, but we should strive to reduce the potential sources of error.
Once the segmentation has been optimized, we can add the tracking operation to our pipeline.
The tracking operation offers a range of parameters that must be set according to the needs of the analysis. As stated above, when tracking we identify objects at every time point and then try to establish the connections between these objects. What is or is not allowed depends on the parameters of the operation. Broadly, these parameters tell the algorithm what kind of motion to expect, how far objects can move from one time point to the next, and whether tracks are allowed to fuse or divide.
The main aim of specifying the motion type is to facilitate the correct identification of objects from one timepoint to the next. When several objects are moving around in 3D space, it is quite likely that at some point ambiguous situations arise when multiple candidates might be considered as the tracked object.
In Vision4D, three methods of motion detection are available:
Linear Regression: assumes that objects tend to continue along their recent direction of travel; the next position is predicted by extrapolating from the positions in previous time points, and candidates are searched for around that predicted position.
Conal Angle: assumes a broadly consistent direction of travel; candidates are searched for within a cone around the previous direction, up to a maximum permitted deviation.
Brownian Motion: assumes essentially random movement; candidates are simply searched for within the search radius around the previous position.
The size of the search radius, in all cases, must be set in line with the expected movement from one timepoint to the next, bearing in mind the need to reduce this distance to a practical range during the acquisition of the images.
When selecting Linear Regression or Conal Angle, additional parameters need to be set.
In both cases we can set the "Max time points" option to define how many time points to use to calculate the direction of movement.
In the case of Conal Angle, we also need to set the maximum permitted deviation from previous directions.
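To make the three motion models concrete, here is a minimal 2D sketch of the candidate-gating logic each one implies; the function names and parameter values are illustrative assumptions, not the Vision4D implementation.

```python
# 2D sketch of the candidate-gating logic behind the three motion models;
# function names and parameters are illustrative, not the Vision4D code.
import math

def brownian_candidates(last_pos, detections, search_radius):
    """Brownian Motion: accept any detection within a radius of the
    previous position."""
    return [d for d in detections if math.dist(last_pos, d) <= search_radius]

def linear_regression_prediction(history, max_time_points=3):
    """Linear Regression: extrapolate the average displacement over the
    last few time points to predict where to search next."""
    pts = history[-max_time_points:]
    if len(pts) < 2:
        return pts[-1]
    vx = (pts[-1][0] - pts[0][0]) / (len(pts) - 1)
    vy = (pts[-1][1] - pts[0][1]) / (len(pts) - 1)
    return (pts[-1][0] + vx, pts[-1][1] + vy)

def within_cone(history, candidate, max_deviation_deg=30.0):
    """Conal Angle: accept a candidate only if its direction deviates from
    the previous direction of travel by less than the permitted angle."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    prev = math.atan2(y1 - y0, x1 - x0)
    new = math.atan2(candidate[1] - y1, candidate[0] - x1)
    diff = abs((new - prev + math.pi) % (2 * math.pi) - math.pi)
    return math.degrees(diff) <= max_deviation_deg

history = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]
print(linear_regression_prediction(history))  # ~(3.0, 0.3)
print(within_cone(history, (3.0, 0.25)))      # True: nearly straight on
print(within_cone(history, (1.5, 2.0)))       # False: sharp turn
```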
Sometimes, tracking involves fusions and divisions of tracks.
If fusions are allowed and only one segmented object can be found within the search radius of where two objects were found previously, the algorithm will assume that those two objects merged into a single one, and the tracks will merge.
Likewise, if divisions are allowed and two objects are found in the search radius of a track where only one existed previously, the algorithm will assume that the object divided or split and the tracking will continue along both branches.
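The sketch below illustrates how a fusion candidate can be recognised under these rules, using the same illustrative conventions as above (not the Vision4D code):

```python
# Sketch of how fusion and division candidates can be identified when
# linking one time point to the next (illustrative only, not Vision4D code).
import math

def match_candidates(tracks, detections, search_radius):
    """tracks: {track_id: last_position}. Returns, per detection, the list
    of tracks whose search radius contains it: one detection claimed by two
    tracks suggests a fusion; one track matching two detections, a division."""
    det_to_tracks = {i: [] for i in range(len(detections))}
    for tid, pos in tracks.items():
        for i, det in enumerate(detections):
            if math.dist(pos, det) <= search_radius:
                det_to_tracks[i].append(tid)
    return det_to_tracks

# Two tracks, but only one object detected between their last positions:
tracks = {"A": (0.0, 0.0), "B": (4.0, 0.0)}
detections = [(2.0, 0.0)]
print(match_candidates(tracks, detections, search_radius=3.0))
# {0: ['A', 'B']} -> a candidate fusion event
```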
Additional options and points of note relating to track fusions and divisions are available and described in the help files.
Once the tracking parameters have been set as needed, the operation can be run and the segmented images will be analysed to create tracks, which then appear in the viewer and as objects in the Objects table.
This enables the user to do a few things.
First, if the tracks appear wrong, we can revert the tracking operation, change the parameters to try to improve the results, and run the operation again. If no good tracking parameters exist because the image does not provide good conditions for automatic tracking, manual correction of the tracks is also possible.
Secondly, the objects table can be configured to display features of the tracks and their segments. Since the tracks are essentially groups of objects it is often best to switch the Objects table display to the Master/Detail view, which shows the tracks in the upper table and the segmented objects in those tracks in the lower table.
Each table can then be configured individually to show pertinent information for both tracks, and the segments in those tracks.
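As an illustration of this master/detail relationship, per-track summary features can be thought of as aggregates over the segment rows that make up each track; the table layout below is a hypothetical example, not the Vision4D export format.

```python
# Illustration of the master/detail idea: tracks are groups of segment
# rows, so per-track features are aggregates over their segments. The
# table layout here is a hypothetical example, not the Vision4D format.
import pandas as pd

segments = pd.DataFrame({
    "track_id":  [1, 1, 1, 2, 2],
    "time":      [0, 1, 2, 0, 1],
    "volume":    [850, 870, 860, 400, 420],
    "intensity": [1200, 1150, 1180, 700, 710],
})

# "Master" table: one summary row per track.
tracks = segments.groupby("track_id").agg(
    n_time_points=("time", "count"),
    mean_volume=("volume", "mean"),
    mean_intensity=("intensity", "mean"),
)
print(tracks)
```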
Finally, the results can be exported, either as a pipeline operation, or from the Object window's Im/Export... menu.
The tracking tools in arivis Vision4D are very good, but tracking results are unlikely to be perfect. If the results of the tracking operation are unsatisfactory, the parameters can be adjusted and the operation re-run, or the tracks can be corrected manually using the Track Editor.
The Track Editor can be found in the Objects menu.
In the Track editor, users can link, split, and merge tracks, as well as assign segments to tracks if they were missed, or manually remove individual segments incorrectly assigned to a track.
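Conceptually, linking two tracks amounts to joining two ordered lists of segments, as in this illustrative sketch (the names and structure are assumptions, not the Track Editor's internals):

```python
# Sketch of a manual track-linking edit: tracks as ordered lists of
# (time point, segment id). Names and structure are illustrative only.
track_a = [(0, "seg01"), (1, "seg07"), (2, "seg12")]
track_b = [(4, "seg31"), (5, "seg40")]

def link_tracks(a, b):
    """Append track b to track a; the gap at t=3 remains, matching the
    case where a segment was missed in one time point."""
    assert a[-1][0] < b[0][0], "tracks must not overlap in time"
    return a + b

print(link_tracks(track_a, track_b))
```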
In this example, we have two tracks that appear like they should be connected but are not:
In such cases, the first thing to do is to double-check that they are indeed tracks that should be connected, as it could be that they only appear so due to the current visualisation parameters. For example, tracks at different depths can appear to touch in a 2D projection even though the objects never actually meet in 3D.
Once the connection is confirmed, editing the tracks is as simple as drawing a line between the last time point where the track was correct, and the next time point.
Further details on track editing are available in the help files.
This guide explains how to perform object tracking using existing, previously detected segments.
The segments belonging to the track must be manually selected before applying the pipeline to generate the track.
Multiple segments can be grouped into different tags by following the described procedure.
Measuring how often and how long some objects come into contact with each other can be a useful tool in biological relationship analysis. This guide explains how grouping combined with tracking can provide this information.
The key to identifying interactions is to tag subjects based on their proximity to, or overlap with, reference objects. To enable this step, both types of objects must first be segmented and classified. Use any segmentation tool that is appropriate, and use filters if necessary to identify only those segmented objects that are likely subjects, for example by excluding objects outside specific volume ranges or other feature ranges. A sketch of proximity-based tagging follows below.
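The sketch below illustrates proximity-based contact tagging in the abstract; the positions, names, and distance threshold are illustrative assumptions, not the Vision4D API, and a real analysis might use surface distance or voxel overlap instead.

```python
# Sketch of tagging contact events by proximity; object positions and the
# distance threshold are illustrative assumptions, not the Vision4D API.
import math

subjects   = {"s1": (10.0, 5.0, 2.0), "s2": (40.0, 8.0, 3.0)}
references = {"r1": (11.0, 5.5, 2.0), "r2": (100.0, 50.0, 9.0)}

def tag_contacts(subjects, references, max_distance=3.0):
    """Tag a subject as 'contact' if any reference lies within max_distance."""
    return {
        sid: "contact" if any(math.dist(pos, ref) <= max_distance
                              for ref in references.values())
        else "no contact"
        for sid, pos in subjects.items()
    }

print(tag_contacts(subjects, references))
# {'s1': 'contact', 's2': 'no contact'}
```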
Having tagged contact events, we can then go on to track the objects.
Here the tracking operation is configured with parameters suited to the expected motion of the subjects, as discussed in the sections above.
Having tracked the objects, there are a few things that can be done to help interpret the data and extract the required information. Most of it is done from the Objects table.
First, in the object colouring options, setting the track colour to match the segment colour makes it really easy to see in the viewer if and where a tracked object makes contact.
This colouring is also reflected in the Track Editor.