ZEISS Knowledge Base

arivis AI: Machine Learning and Deep Learning

Using StarDist

This article guides you through the implementation of StarDist segmentation.

Preliminary Remarks

Vision4D is able to run deep learning applications such as StarDist using external and arivis-independent Python libraries and tools produced by third parties.

These tools must be installed by the user under their own responsibility, strictly following the instructions in this document. arivis has tested the setup protocol on several computers; however, due to the different and unpredictable hardware and software configurations of any given computer system, the results may vary on a case-by-case basis. Therefore, arivis declines any responsibility concerning the correct installation and setup of these tools on the individual user’s workstation. arivis cannot be made responsible for any malfunctioning or failure of the deep learning environment setup. arivis does not guarantee technical support on the setup task or on any deep learning application. Furthermore, arivis also declines any responsibility regarding the validity of the scientific results gathered from the deep learning application.

How does it work?

The StarDist workflow is based on three main steps:

  1. Object Annotation
  2. Network creation and training
  3. Inference (image analysis)

Object annotation

This task consists of manually drawing the shape of the objects over a set of representative images (2D or 3D). The reference objects should capture all of their possible variations within the reference samples. The annotations are then used to create a binary mask image (the ground truth). Both the annotations and the related binary masks are used afterward by the training task to build the neural network.

The annotation task is a manual activity and is therefore time-consuming. The number of annotations required must be estimated in advance in order to get reliable training results; annotations can be added during training if required. The number of annotations typically starts from a minimum of 200 samples, spread over 10 - 20 different images, and can reach a thousand (or even tens of thousands) in more complex cases.

In order to increase the number of samples without acquiring new images, it is also possible to reuse the existing images after applying operations such as rotations, random flips, or intensity shifts to the originals (data augmentation).
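These augmentation operations can be sketched in a few lines of NumPy. The snippet below is only an illustration of the idea (rotations, flips, intensity shifts); the `augment` function and its parameters are hypothetical and not part of the arivis or StarDist tooling:

```python
import numpy as np

def augment(image, rng=None):
    """Return a randomly augmented copy of a square 2D image.

    Applies a random 90-degree rotation, an optional flip, and a
    small global intensity shift, preserving the image shape."""
    rng = rng or np.random.default_rng()
    out = np.rot90(image, k=int(rng.integers(0, 4)))          # random 90-degree rotation
    if rng.random() < 0.5:
        out = np.flip(out, axis=int(rng.integers(0, 2)))      # random flip
    out = out + float(rng.normal(0.0, 0.05)) * image.mean()   # small intensity shift
    return out
```

Each annotated image can be passed through such a function several times to multiply the number of training samples; note that the corresponding masks must receive the same geometric transforms, but not the intensity shift.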

Creating and training a neural network

This task takes both the sample images and the related mask images to build the neural network. The training is a loop in which, in each cycle, two parameters are computed: the training loss and the validation loss.
The progress of training can be evaluated by comparing the training loss with the validation loss. During training, both values should decrease until reaching a minimum, which should not change significantly with further cycles. Comparing the development of the validation loss with the training loss can give insights into the model’s performance. If both the training and validation loss values are still decreasing, training is still necessary. If the validation loss suddenly increases again while the training loss decreases towards zero, it usually means that the network is overfitting to the training data.
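The stopping criteria described above can be expressed as a small heuristic. The function below is a hypothetical illustration only; the window size and the diagnosis labels are our own and are not part of any training framework:

```python
def diagnose_training(train_loss, val_loss, window=5):
    """Read the last `window` values of each loss curve and
    classify the training state as described in the text."""
    def trend(values):
        recent = values[-window:]
        return recent[-1] - recent[0]  # negative means still decreasing

    if trend(train_loss) < 0 and trend(val_loss) < 0:
        return "keep training"      # both losses still falling
    if trend(train_loss) < 0 and trend(val_loss) > 0:
        return "overfitting"        # validation loss rising again
    return "converged"              # losses have flattened out
```

For example, a training loss of [0.9, 0.5, 0.3, 0.2, 0.1] paired with a validation loss of [0.6, 0.5, 0.5, 0.6, 0.8] would be classified as overfitting.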

Fundamentally, the training is based on mathematical operations. These operations are repetitive and time-consuming, and can easily be parallelized, so using GPU resources improves training performance by reducing the total time. Working with the CPU only, a complex training can take 7 to 10 days, while using a GPU the total time may be reduced to mere hours (10 to 12).

Image analysis

Once the neural network is trained, it can be used to analyze sample images.

Why use StarDist within Vision4D?

arivis Vision4D (V4D) is a modular software for working with multi-channel 2D, 3D, and 4D images of almost unlimited size, independent of available RAM. Many imaging systems, such as high-speed confocal, light sheet / SPIM, and 2-photon microscope systems can produce vast amounts of multi-channel data, which Vision4D handles without constraints.

V4D allows the user to execute complex analysis tasks in automatic or batch mode. This includes sophisticated pre-processing algorithms, multiple segmentation approaches (including machine learning segmentation tools), and powerful data handling. Datasets from megabytes to terabytes in size can be quantified by Vision4D with a single pipeline on virtually any computer that is capable of holding the data (see system requirements).

StarDist represents an advanced method to detect roundish objects such as cells and nuclei, especially in crowded fields where the objects are overlapping, but it is limited to these cases. The new frontiers of image analysis in life science require the capability to analyze the complex interactions between biological structures. Vision4D has the tools to satisfy these requirements. StarDist can be integrated into the V4D analysis workflow and directly contribute to better detection of its target structures.

StarDist can currently be executed as a Python script but, in the near future, it will be available as a V4D pipeline operator, making its usage even more flexible and powerful.

Application examples

Nuclei Tracking

StarDist is used to segment the nuclei over multiple time points, while further operations take the result of the segmentation to create tracks and measure their properties.

Distribution Analysis

StarDist segments the nuclei; then a pipeline operation can import Atlas regions from a labelled image, compartmentalize the nuclei into specific regions, and report numerical density.

Measuring Cell Volumes

StarDist is used to segment the cell nuclei, and a region growing operation can then be used to identify the boundaries of the cells based on a membrane staining.

How to get StarDist working with Vision4D

The StarDist Python package is required to get it to work with Vision4D and must be added to an existing Python 3.x environment. We have tested and strongly recommend the Anaconda 3.x Python distribution for this application.

Once the StarDist package has been correctly set up, Vision4D must also be configured accordingly by changing the scripting preferences.

Setting up Vision4D preferences

With Vision4D open, go to Extras > Preferences.

In the preferences window, select the Scripting tab on the left and select "Anaconda environment".

You will then need to browse your computer for the Anaconda installation folder and select the Stardist environment previously created.

By default, new environments are stored under the \envs folder located in the Anaconda installation folder, e.g. C:\Anaconda3\envs\stardist

Having selected the correct environment folder, we can then run "Install arivis package" and "Test Environment" to check compatibility:

arivis also provides a free Python script to run the StarDist algorithm inside of V4D.

The script allows the user to select the active channel, as well as the time points and Z planes (full-range or sub-selection thereof), from which to segment objects. A new channel will be created to store the labeled objects found by StarDist.

Loading the script

A startup package including the Python script, the technical instructions, and a test image is available on request. Contact your local arivis area sales manager for more information about how to get the Python script mentioned here, or use the contact form on our website.

In the Script Editor, the script can be opened by dragging the .py file into an open window, or by going to File > Open... and navigating to the script file.

Once loaded, some parameters will need to be changed for your specific images.

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ USER SETTINGS @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# -----------------------------------------------------------------------------
# Stardist 2D - Script
# INPUT_CHANNEL: channel with the target objects
# (only used to read/write the voxels)
# ---> imageset.get_channeldata / imageset.set_channeldata
# -----------------------------------------------------------------------------
INPUT_CHANNEL = 1  # <---- Count starts from 0 (Ch#1 == 0)

# name of the new channel (skeleton storage)
OUTPUT_CH_NAME = "Stardist_"  # Net_Skel_

#MODEL3D_PATH = "D:/Arivis_Dataset/Carlo Stardist/2021-05-10/3D/2021-04-22/models"
MODEL2D_PATH = "D:/Arivis_Dataset/Carlo Stardist/2021-05-10/2D/2021-05-03/models"
MODEL_NAME = "stardist"

# PARAMETERS YOU CAN CHANGE
CURRENT_TIME_POINT = True
FIRST_PLANE = -1  # -1 == BOTTOM
LAST_PLANE = -1   # -1 == TOP

# @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ END USER SETTINGS @@@@@@@@@@@@@@@@@@@@@@@@@@@@

Of specific importance are:

  • Input Channel number (remember the first channel is channel "0"). This specifies which channel the segmentation is performed on.
  • Model path and name. These depend on your installation and the model you are using.
  • Input range.
  • CURRENT_TIME_POINT: if true, only the current time point is analyzed; if false, the entire time series.
  • FIRST_PLANE: the first plane in the stack to be used for segmentation.
  • LAST_PLANE: the last plane in the stack to be used for segmentation.
  • Note that if both the first and last plane parameters are set to "-1", the script will use the full stack.

Once the parameters have been set, the script can be saved and run.
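The -1 sentinel convention for FIRST_PLANE and LAST_PLANE can be illustrated with a small helper. This is a hypothetical re-implementation for illustration only; the actual logic lives inside the provided script and may differ:

```python
def resolve_plane_range(first_plane, last_plane, num_planes):
    """Map the FIRST_PLANE / LAST_PLANE settings to concrete plane
    indices: -1 means the bottom (first) or top (last) of the stack."""
    first = 0 if first_plane == -1 else first_plane
    last = num_planes - 1 if last_plane == -1 else last_plane
    if not 0 <= first <= last < num_planes:
        raise ValueError("invalid plane range for this stack")
    return first, last
```

With both parameters at -1 and a 50-plane stack, resolve_plane_range(-1, -1, 50) yields (0, 49), i.e. the full stack is segmented.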

Download Full "How to run StarDist within arivis Vision4D" PDF

Applying cellpose models (arivis Vision4D 3.4.0 to 4.1.0)

Overview

To use cellpose in Vision4D we need the following:

  1. Install anaconda python
  2. Import the cellpose environment in the Anaconda Navigator
  3. Configure Vision4D to install the arivis libraries into the cellpose environment
  4. Use the Python Segmenter operation to load the cellpose script and define the required parameters

Introduction

Cellpose is a deep-learning (DL) based algorithm for cell and nucleus segmentation. It was created by the Stringer and Pachitariu groups and was originally published in Stringer et al., Nature Methods, 2021.

Cellpose uses a cell detection method that predicts object shape using a flow representation of cell dynamics, which is well suited to approximating and defining the complex borders of cells in microscopy images. These representations are used for DL model training and predictions (inference). Full documentation of the method can be found on the cellpose website.

 

Vision4D can be configured to execute cellpose segmentation within analysis pipelines, thereby enabling users to take advantage of both the advanced segmentation enabled by cellpose and the image and segment processing and visualization tools offered by Vision4D. This article explains how to download and install the necessary tools, and how to configure the pipeline Python Segmenter operation to segment objects using cellpose. 

By integrating cellpose into a pipeline, users can take advantage of the full functionality of the Vision4D pipeline concept to:

  • Process large multidimensional images
  • Enable segmentation in an easy-to-use interface
  • Enable the visualization of objects segmented using cellpose in 4D with advanced object display options
  • Facilitate complex further analysis like parent-child relationships and tracking

Preliminary Remarks

Vision4D runs deep learning applications for instance segmentation such as Cellpose and StarDist using external and arivis-independent Python libraries and tools produced by third parties.

These tools must be installed by the user under their own responsibility, strictly following the instructions in this document. arivis has tested the setup protocol on several computers; however, due to the different and unpredictable hardware and software configurations of any given computer system, the results may vary on a case-by-case basis. Therefore, arivis declines any responsibility concerning the correct installation and setup of these tools on the individual user’s workstation. arivis cannot be made responsible for any malfunctioning or failure of the deep learning environment setup. arivis does not guarantee technical support on the setup task or on any deep learning application. Furthermore, arivis also declines any responsibility regarding the validity of the scientific results gathered from the deep learning application.

Installing the prerequisites

To use the cellpose script in Vision4D, we need 3 configuration steps:

  1. Install Anaconda Python
  2. Import the cellpose environment
  3. Configure the Vision4D Machine Learning preferences to use the environment.

Installing Anaconda

Instructions for downloading, installing, and configuring anaconda for Vision4D can be found here

Docker support for instance segmentation

What is Docker?

Docker is an open-source platform (https://www.docker.com/) that allows you to build, test, and deploy applications quickly and easily. Docker packages software into standardized units called containers, which contain everything the software needs to run. Containers virtualize the operating system of a server, essentially running the package as a Virtual Machine (VM).

What is the difference between Docker Desktop and Docker Engine?

Docker Desktop

Targeted for workstation use, Docker Desktop is free for personal use, academia, and open source projects, but requires a commercial subscription for larger organizations (more than 250 employees or more than $10 million in annual revenue). Docker Desktop is more user-friendly and is aimed at developers and those simply using Docker. It provides an easy-to-use GUI, includes additional tools like the Docker Dashboard for easier container management, and automatically updates to the latest version.

For detailed information see the Docker Desktop docs: https://docs.docker.com/desktop/

arivis Pro 4.2 onwards and ZEN products can connect with Docker Desktop on the local workstation, see:

Docker Engine

It is the main service that runs containers; on non-Linux hosts it runs inside a Linux kernel Virtual Machine (VM). Docker Engine is open source and free to use. It operates primarily through the command-line interface and is often used in production environments. It’s highly configurable and can be adjusted to suit a variety of use cases.

The engine is an integral component of any Docker installation, including Docker Desktop. In a server environment, the engine can be installed standalone.

For detailed information see the Docker Engine docs: https://docs.docker.com/engine/

arivis Pro 4.3 onwards can connect with Docker Engine using TCP and Remote Docker support, see:

Docker virtualization requirements

Virtualization features in both the BIOS (on bare-metal installations) and the operating system are required for containers to work. Typically this is Intel Virtualization Technology (VT-x) on Intel systems, or SVM Mode (AMD-V) on AMD systems; see https://docs.docker.com/desktop/troubleshoot-and-support/troubleshoot/topics/#virtualization

Installing Docker Engine on AWS

Overview

Guidance on installing a standalone Docker Engine for Instance segmentation in the Amazon Cloud environment.

arivis Pro instance segmentation is typically performed on a local workstation using Docker Desktop. This process is discussed in the KB article here:

Installing Docker for AI Instance Segmentation

Docker Desktop is not licensed or intended for multi-session use. On a server environment, the Docker Engine must be used.

Introduction

The Docker Engine can be installed as a standalone instance, to be shared remotely as a service.

The segmentation container from arivis Cloud is Linux-based, so for both reduced costs and ease of implementation we recommend a Linux-based distribution to host the Docker Engine.

Here is an overview of the required steps when creating the virtual machine on AWS.

Installing Docker Engine

Install | Docker Docs

Set up Docker's apt repository:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

 

Install the Docker packages:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Enabling Docker Remote access

Configure remote access for Docker daemon | Docker Docs

Please note that opening TCP access to the Docker Engine is a security risk.

Connection to the containers can provide root access. Ensure that the necessary firewall restrictions are in place to allow only expected clients. Within a cloud environment external access is typically blocked by default, but access from other machines within the virtual network needs to be considered.

Edit the systemctl service override:

sudo systemctl edit docker.service

The Docker instructions specify using 127.0.0.1, which will only bind to the localhost interface. To permit external connections, use 0.0.0.0 to listen on all interfaces; you can change this to a specific interface IP as required.

Add these lines between the top comments:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375

Save the file (CTRL+X) and reload the systemctl configuration.

sudo systemctl daemon-reload

Restart Docker

sudo systemctl restart docker.service

Within AWS, the VM Security Group settings must receive a new port rule allowing port 2375 from the specific clients that need to connect. A port range for the containers is also required (e.g. if 10 containers may run in parallel, use 5000-5009):
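Once the daemon is listening and the Security Group rule is in place, the endpoint can be sanity-checked from a client machine. The snippet below is a minimal stdlib sketch (the host name and helper are our own, not part of any arivis tooling); it only verifies that something accepts the TCP connection, not that it is actually the Docker Engine:

```python
import socket

def docker_tcp_reachable(host, port=2375, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds
    within `timeout` seconds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False
```

If this returns False from a client that should have access, re-check the dockerd -H tcp:// configuration and the Security Group port rule.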

Configuring arivis Pro to use Remote Docker Engine

  1. Create an access token in arivis Cloud.
  2. Copy the access token and paste it into the Access token field.
  3. Inside the Remote URL field, select your server IP.
  4. Click Apply to end the configuration.

Last updated: 2025.02.18

Code snippets were used from the links provided at the time of writing. Check the contained links for updates to any presented commands.

Installing Docker Engine on Azure

Overview

Guidance on installing a standalone Docker Engine for Instance segmentation in the Azure Cloud environment.

arivis Pro instance segmentation is typically performed on a local workstation using Docker Desktop. This process is discussed in the KB article here:

Installing Docker for AI Instance Segmentation

Docker Desktop is not licensed or intended for multi-session use. On a server environment, the Docker Engine must be used.

Introduction

The Docker Engine can be installed as a standalone instance, to be shared remotely as a service.

The segmentation container from arivis Cloud is Linux-based, so for both reduced costs and ease of implementation we recommend a Linux-based distribution to host the Docker Engine.

Here is an overview of the required steps when creating the virtual machine on Azure.

Configuring Disk

The image is transferred to the Docker container prior to processing.

  1. Select a disk. The disk must be large enough to hold all required models and all concurrently processed images. The IOPS of the storage should at least match the network performance of the VM.
  2. You can now start the VM.

Installing the NVIDIA GPU Extension

  1. Ensure the VM is running.
  2. Use the filter type GPU to find the NVIDIA GPU Driver Extension.
  3. Run through the creation/deployment process.
  4. For the deployment error “NVIDIA GPU not found on this VM size”, select a different GPU instance type and re-deploy the extension.
  5. For the deployment error “Code 14” on Linux, try disabling Secure Boot in the VM and re-deploy the extension:

Enabling Docker Remote access

Configure remote access for Docker daemon | Docker Docs

Note that opening TCP access to the Docker Engine is a security risk.

Connection to the containers can provide root access. Ensure that the necessary firewall restrictions are in place to allow only expected clients. Within a cloud environment external access is typically blocked by default, but access from other machines within the virtual network needs to be considered.

Edit the systemctl service override:

sudo systemctl edit docker.service

The Docker instructions specify using 127.0.0.1, which will only bind to the localhost interface. To permit external connections, use 0.0.0.0 to listen on all interfaces; you can change this to a specific interface IP as required.

Add these lines between the top comments:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375

Save the file (CTRL+X) and reload the systemctl configuration.

sudo systemctl daemon-reload

Restart Docker:

sudo systemctl restart docker.service

Within Azure, the VM Network settings must receive a new port rule allowing port 2375 from the specific clients that need to connect. A port range for the containers is also required (e.g. if 10 containers may run in parallel, use 5000-5009):

Installing the NVIDIA Container Toolkit

Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit 1.17.0 documentation

Configure the production repository:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

Update the packages list from the repository:

sudo apt-get update

Install the NVIDIA Container Toolkit packages:

sudo apt-get install -y nvidia-container-toolkit

Configuring arivis Pro to use Remote Docker Engine

  1. Create an access token in arivis Cloud.
  2. Copy the access token and paste it into the Access token field.
  3. Inside the Remote URL field, select your server IP.
  4. Click Apply to end the configuration.

Last updated: 2024.11.27

Code snippets were used from the links provided at the time of writing. Check the contained links for updates to any presented commands.

Deep Learning segmentation pipelines

How to create, share and import pipelines that use Deep Learning segmentation.

Overview

Link your arivis Pro installation to your arivis Cloud account

  1. Log in to your arivis Cloud account and create an access token
  2. In arivis Pro, open the arivis Cloud Model Store and enter your access token 

Creating a pipeline that includes ZEISS arivis AI segmentation

  1. Add the DL Segmenter to your pipeline and select your model
  2. Build the rest of your pipeline as needed

Exporting pipelines with DL instance segmentation

  1. Export your pipeline from the arivis Pro analysis panel
  2. Share your ONNX/CZANN file or share your model using your arivis Cloud account

Importing pipelines with DL segmentation

  1. Connect to your arivis Cloud account and accept the shared model invitation; if necessary, create an access token
  2. Import the pipeline in the arivis Pro analysis panel and point the segmentation operation to your downloaded model
  3. Run your pipeline

Introduction

Since arivis Vision4D 3.6, arivis has included Deep Learning inference operations in the analysis panel. As of the release of arivis Pro 4.2, pipelines can run DL instance segmentation. Like all pipelines, those that include DL instance segmentation can be shared for use on other systems and with other images, and can also be run in batch. However, DL pipelines must be linked to the specific model on which they are based, and these models must be exported along with the pipeline and linked back to the pipeline before execution.

There are two ways to select a model for use in a pipeline. Whether we are using the Deep Learning Reconstruction or the Deep Learning Segmenter, once it is added to the pipeline we need to select the model to be used:

The model can either be loaded from a file, as an ONNX or CZANN file, or selected from the arivis Cloud model store.

ONNX or CZANN models typically only allow semantic segmentation. If the model has been created outside of the ZEISS ecosystem, other formats are commonly used, but these can usually be converted to ONNX, and we provide some scripts to do this, including for PyTorch and Cellpose.

Access to models from arivis Cloud in arivis Pro is managed through access tokens. These access tokens give an arivis Pro installation access to every model in an arivis Cloud account. Therefore, rather than sharing an access token, which would give access to every model linked to an account, it is better to share the specific model with collaborators so that they can access the model by creating their own access token.

arivis Cloud models can be trained for either instance or semantic segmentation. Semantic models can be exported as ONNX or CZANN directly from your arivis Cloud account. Instance models are only accessible through access tokens, as mentioned above. Also, please note that DL segmentation with arivis Cloud instance models requires that you have installed and configured Docker on your system.

In either case, it is also a good idea to install and use the GPU acceleration package if you haven't done so already, as this can significantly speed up DL and ML tasks.

Linking arivis Pro to your arivis Cloud account

As mentioned above, arivis Pro can use ONNX or CZANN files with DL operations, and these files are relatively simple to create, share and use. However, arivis Cloud instance segmentation creates models with additional dependencies which would ordinarily require the installation of additional software libraries and could cause compatibility issues. To avoid such issues, arivis uses Docker containers to store both the model and the dependencies. To facilitate management of these containers we access our models through the arivis Cloud Model Store. 

Creating access tokens

Configuring arivis Pro using access tokens

To give arivis Pro access to your models, we need to provide it with the access token generated previously. We find the arivis Cloud AI Model Store under the Analysis menu.

The first time we access the Model Store we'll be prompted to enter an access token and select a destination folder for the downloaded models.

When we click OK the Model store will automatically populate with all the models linked to your arivis Cloud account.

Note that some models may be incompatible due to versioning issues. If preferred, we can simply hide all incompatible models.

Also, the list of available models is dynamically updated after each restart of the application, so if a new model is added there is no need for a new access token.

Creating Pipelines with DL Segmentation

In many ways, creating a pipeline that uses DL Instance or Semantic Segmentation is no different to creating any other pipelines. There is nothing special in the way the objects created by DL segmentation are handled compared to any other pipeline created segments. The Features available are the same, including Custom Features, and they can be used for any downstream segment processing operations, including tracking, parent-child analysis, and segment morphology operations to mention just a few. This ability to do both DL segmentation and traditional segmentation, and use the resulting segments all in the same pipeline, with the ability to batch process, is one of the key strengths of the arivis approach.

To create an analysis pipeline that uses DL we start the same way we always do to create pipelines, that is to say we open the Analysis panel, either from the Analysis menu or from the Shortcuts toolbar.

Then, in the Analysis panel we can create a new pipeline by using + New Pipeline, or choose an existing pipeline to modify. 

With the pipeline open, we can set up the Input ROI and any other operations as needed, and add the Deep Learning Segmenter to the pipeline using + Add Operation.

Note that there are two ways to use DL in pipelines.

The Deep Learning Reconstruction can use a model to create new images of the probability maps from that model. These probability maps can be used like any channel in the pipeline. This includes filtering (denoising, morphology, image math, etc.) and segmentation. We can, for example, use a Blob Finder on a probability map of a semantic model to obtain an instance segmentation result. The Deep Learning Reconstruction only supports ONNX or CZANN models.

However, the majority of cases will call for the Deep Learning Segmenter, which uses the model to generate objects from the image.

Once we've added the Deep Learning Segmenter to our pipeline, all we need to do is select which model we want to use. If we use the ONNX or CZANN file option, we then click the browse button and select our model file.

If we use arivis Cloud models, we can either select from previously downloaded models, or open the Model Store to download models as needed.

Once we've selected the model, the operation works like any other segmentation operation. We can preview the results, choose an output tag and colour, and the segmented objects can be used in downstream pipeline operations like any other pipeline objects.

 

Exporting Pipelines with DL Instance Segmentation

Exporting the pipeline

As with any pipeline created in arivis, we can use the Analysis Panel menu to open the export window...

...choose a save location...

...and send the .pipeline file to our collaborator.

But if the pipeline requires additional files, like DL models, these need to be shared as well.

An ONNX or CZANN file can just be shared together with the pipeline like any other file. The most practical way to do this may be to copy both the exported pipeline and the model file to a new folder, compress this folder into a ZIP file, and share the ZIP using your preferred file sharing method. Email should work fine, though some email clients have file size limits so other cloud-sharing methods might work better (OneDrive, Dropbox, etc.).
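If you share pipelines often, the folder-and-ZIP step can be scripted. The helper below is a stdlib sketch; the function name and the example file names are ours, not part of arivis Pro:

```python
import zipfile
from pathlib import Path

def bundle_for_sharing(pipeline_file, model_file, zip_path):
    """Pack an exported .pipeline file and its ONNX/CZANN model
    into a single compressed ZIP archive for sharing."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in (pipeline_file, model_file):
            zf.write(f, arcname=Path(f).name)  # store flat, without directories
    return zip_path
```

For example, bundle_for_sharing("nuclei.pipeline", "nuclei.onnx", "nuclei_share.zip") produces one file that can be attached to an email or uploaded to a cloud share.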

Sharing an arivis Cloud model

Importing the pipeline and model

If we receive a pipeline that includes a DL model from a collaborator, first we must save the pipeline to our workstation. 

If the model was shared as an ONNX or CZANN file together with the pipeline, we simply save that file to the workstation along with the pipeline.

If the model was shared using an arivis cloud account, we can log in to our arivis Cloud account, accept the invitation to the shared model, and if necessary link the arivis installation to our Cloud account as described above. Once the model appears in the Model Store, we can click the Download link on the right:

Impressum
Carl-Zeiss-Strasse 22
73447 Oberkochen
Germany
Legal