Simplify your objective
The arivis AI toolkit employs powerful neural networks capable of handling a wide range of segmentation tasks. For more complex tasks, however, a large number of annotations is typically needed to train the network effectively, and collecting that much annotated data can be challenging and time-consuming. To mitigate this, we recommend simplifying the task, for example by standardizing your imaging conditions.
Standardize imaging conditions
Imaging conditions have a large impact on the complexity of a segmentation task. Standardizing imaging parameters makes the task easier for the algorithm to learn and ultimately reduces the number of annotations required. To achieve this, consider the following guidelines when collecting images for your dataset to ensure optimal performance with the arivis AI toolkit:
- use the same illumination parameters, so that images have similar intensity histograms
- use the same magnification and binning, so that objects have a similar size in pixels
- keep the size of individual regions/objects to be segmented below 320 × 320 pixels
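Before annotating, it can be worth verifying that your annotated regions respect the size guideline above. A minimal NumPy sketch (the function name and the binary-mask input are our own illustration, not part of the toolkit) could look like this:

```python
import numpy as np

def check_region_size(mask, max_side=320):
    """Return True if the region's bounding box fits within max_side x max_side.

    `mask` is a 2D boolean array marking a single annotated region.
    """
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return True  # an empty mask trivially fits
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    return bool(height <= max_side and width <= max_side)

# Example: a region with a 50 x 400 bounding box exceeds the recommended limit
mask = np.zeros((500, 500), dtype=bool)
mask[100:150, 50:450] = True
print(check_region_size(mask))  # False
```

If a region fails this check, consider imaging at lower magnification or higher binning so the object fits within the recommended footprint.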
Standardize experimental conditions
Beyond the optical parameters, standardizing other experimental conditions can also be beneficial. The objective is to obtain images in which your objects or regions of interest are as uniform as possible. Depending on your use case, the following measures may help:
- use the same sample preparation (microscopy images)
- keep the size, location, and orientation of your region of interest constant
- keep background homogeneous
- keep object density low
Avoid complexity when defining classes
Sometimes, it can be tempting to create segmentation categories for problems that could be solved more easily with post-processing. For example, training an algorithm to distinguish between "small cells" and "large cells" may not be the best approach if the only difference between the two classes is cell size. While it is possible to train such an algorithm using many annotated cells that cover the size boundary between the classes, it is often simpler to train a generic algorithm that segments cells of all sizes and then filter its output by size in a post-processing step.
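The size-filtering step described above can be sketched in plain NumPy, assuming a labeled segmentation mask where 0 is background and each object carries a unique positive label (the function name and thresholds are illustrative, not part of the toolkit):

```python
import numpy as np

def filter_labels_by_area(labels, min_area=0, max_area=np.inf):
    """Keep only labeled objects whose pixel count lies in [min_area, max_area].

    `labels` is an integer array: 0 is background, each object has
    a unique positive label (e.g. the output of a labeling step).
    """
    counts = np.bincount(labels.ravel())          # pixels per label
    keep = (counts >= min_area) & (counts <= max_area)
    keep[0] = False                               # background is never an object
    return np.where(keep[labels], labels, 0)      # drop rejected objects

# Example: two "cells" of 4 and 9 pixels; keep only the large one
labels = np.zeros((6, 6), dtype=int)
labels[0:2, 0:2] = 1   # small cell, area 4
labels[3:6, 3:6] = 2   # large cell, area 9
large_only = filter_labels_by_area(labels, min_area=5)
```

The same idea extends to any measurable property: segment generically once, then partition the objects by measurements afterwards.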
Another case where a separate category is unnecessary is segmenting "in focus" and "out of focus" objects into separate classes. Instead, you can train a model to segment all recognizable objects and use the mean or maximum pixel intensity within the segmented area to decide whether an object is sufficiently in focus for your application.
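As a rough sketch of that intensity-based focus check (again using a labeled mask and illustrative function names of our own, with NumPy assumed):

```python
import numpy as np

def mean_intensity_per_object(image, labels):
    """Map each positive label to the mean pixel intensity inside its area."""
    sums = np.bincount(labels.ravel(), weights=image.ravel())
    counts = np.bincount(labels.ravel())
    return {lbl: sums[lbl] / counts[lbl]
            for lbl in range(1, len(counts)) if counts[lbl] > 0}

def in_focus_labels(image, labels, min_mean_intensity):
    """Labels whose mean intensity reaches the (application-specific) threshold."""
    means = mean_intensity_per_object(image, labels)
    return [lbl for lbl, m in means.items() if m >= min_mean_intensity]

# Example: one bright (in-focus) and one dim (out-of-focus) object
labels = np.zeros((5, 5), dtype=int)
labels[0:2, 0:2] = 1
labels[3:5, 3:5] = 2
image = np.zeros((5, 5))
image[0:2, 0:2] = 200.0   # sharp, bright object
image[3:5, 3:5] = 40.0    # blurred, dim object
print(in_focus_labels(image, labels, 100.0))  # [1]
```

The appropriate threshold depends on your optics and staining, so it should be calibrated on a few representative images rather than fixed in advance.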
Start simple and increase complexity as needed
When developing a DL-based segmentation model, the ultimate goal is often to achieve robust segmentation of multiple object classes across various imaging conditions. However, if the parameter space is large and you only provide a few annotated objects, it can be difficult for the algorithm to learn all of this complexity at once. To mitigate this challenge, we recommend the following step-by-step approach:
- Start by annotating a single class
- Aim for annotating approximately 50 objects or regions in similar images
- After training, inspect the results to evaluate the accuracy of segmentation
- To improve the accuracy of the algorithm, consider adding images with more variability to your dataset and repeating the earlier steps
- Once you are satisfied with the performance on the previous class, start annotating new classes by repeating the earlier steps for each of them
The instructions provided here will not only assist your algorithm in effectively learning the task from the annotated dataset used for training, but they will also aid in creating a segmentation model that generalizes well and performs effectively on data acquired from future standardized experiments.
Have questions? We're here to help.
Have a question about optimizing your conditions? Feel free to reach out to our support team.