Overview of Current and Past Projects
Current fluorescence microscopes allow studying early embryonic development in 3D and over time (3D+t). To decipher large-scale tissue reorganization at the cellular level, automatic segmentation and tracking methods are of utmost importance for coping with this potentially terabyte-scale 3D+t image data. Fundamental problems observed during both segmentation and tracking are data-intrinsic events such as cell movement, cell division, or cell death, as well as inhomogeneous expression of fluorescent dyes, imaging artifacts, and algorithmic flaws. Reconstructing the lineage from the fertilized egg to maturely developed tissues and organs, however, is indispensable for answering fundamental questions in developmental biology and related fields. The aim of this project is the development of a learning-based, content- and context-aware cell tracking pipeline for large 2D+t live-cell imaging experiments from high-content screens and for 3D+t image data of developing zebrafish embryos.
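As a minimal illustration of the linking step in such a tracking pipeline, the following Python sketch matches segmented cell centroids of two consecutive frames using the Hungarian algorithm; all names, the toy data, and the distance threshold are purely illustrative and not part of the actual pipeline.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_frames(centroids_t, centroids_t1, max_dist=20.0):
    # Pairwise Euclidean distances between all detections of the two frames.
    cost = cdist(centroids_t, centroids_t1)
    # Globally optimal one-to-one assignment (Hungarian algorithm).
    rows, cols = linear_sum_assignment(cost)
    # Discard implausibly long links, e.g., disappearing or dividing cells.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Toy example: 50 cells in 3D that move slightly between frames t and t+1.
frame_t = np.random.rand(50, 3) * 100
frame_t1 = frame_t + np.random.randn(50, 3)
links = link_frames(frame_t, frame_t1)

In a real pipeline, the plain distance cost would be replaced by learned, content- and context-aware similarity scores, and division and death events would need dedicated handling.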
Funding: Deutsche Forschungsgemeinschaft (DFG)
Contact: Zhu Chen, Johannes Stegmaier
Animal experiments are still an indispensable component in many areas of research. The current EU directive on this topic (Directive 2010/63/EU) is based on the 3Rs principle and aims to replace, reduce, and refine animal testing wherever possible. In addition to regular observation and assessment of the health status of laboratory animals by trained personnel, continuously recorded video data provide a rich and objective source of information for analyzing the behavior and stress perception of laboratory animals. While direct quantitative analysis of video data by researchers is nearly impossible and far too time-consuming, automatic localization of laboratory animals, automatic detection of anatomical landmarks, and analysis of animal movement trajectories using state-of-the-art computer vision algorithms are becoming increasingly accurate and allow detailed analyses of social and individual behavior. Tracking of laboratory animals in video data can be performed non-invasively, allowing detailed behavioral studies without potential bias caused by human interference. The home-cage developed within the DFG-funded Research Unit FOR 2591: Severity Assessment in Animal-Based Research represents such a non-invasive environment. However, the quality of the obtained quantifications depends significantly on the robustness of the detection and tracking algorithms, i.e., the animals must be tracked without error over longer periods of time. Although video recording and automated archiving have already been established in the previous phases of the FOR 2591 Research Unit, the processing and analysis of the recorded video data remain a challenge that we address in this subproject.
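Once reliable animal trajectories are available, simple activity measures can already be derived automatically. The following Python sketch, with hypothetical names and thresholds that are not part of the FOR 2591 pipeline, computes the distance traveled and the fraction of time spent moving from a sequence of detected centroid positions.

import numpy as np

def activity_measures(trajectory, fps=30.0, speed_thresh=2.0):
    # trajectory: array of shape (n_frames, 2) with centroid positions in pixels.
    steps = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)  # displacement per frame
    distance = steps.sum()                                       # total path length in pixels
    moving = (steps * fps) > speed_thresh                        # speed above threshold (px/s)
    return distance, moving.mean()

# Toy trajectory: 5 minutes of random-walk data at 30 frames per second.
traj = np.cumsum(np.random.randn(9000, 2), axis=0)
total_distance, moving_fraction = activity_measures(traj)

Such summary statistics are only meaningful if the underlying detection and tracking are robust over long recording periods, which is precisely the challenge addressed in this subproject.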
Funding: Deutsche Forschungsgemeinschaft (DFG)
Project Website: https://severity-assessment.de/
Contact: Emil Mededovic (RWTH Aachen University), Johannes Stegmaier
Worldwide, over 43 million people are affected by blindness, and this number is continuously rising. More than 2 million individuals suffer from age-related macular degeneration alone. Retinal implants represent an innovative technology aimed at restoring vision. The Graduate School 2610 – InnoRetVision, funded by the German Research Foundation (DFG), is a program dedicated to training PhD candidates in the field of retinal implants; collectively, more than 40 doctoral students from RWTH Aachen University, the University of Duisburg-Essen, and Forschungszentrum Jülich are supervised within it.
In this subproject (continued at the Chair of Imaging and Computer Vision at RWTH Aachen University), we are working on algorithmic advances to cope with the inherently low resolution of current retinal implants. The most commonly used Argus II system has only 60 electrodes, equating to a resolution of 6 × 10 pixels. This is insufficient to render even simple images, such as a “Space Invader” (11 × 8 = 88 pixels).
Furthermore, nonlinear effects occur, such as the unintended activation of axons (nerve fibers) instead of somas (cell bodies) in the retina. The perceived image quality is limited by severely reduced contrast, and color perception is currently not possible. Our research aims to overcome these challenges and significantly improve the functionality of retinal implants. This includes work on semantically relevant downsampling, minimizing nonlinear effects, and developing technologies to enhance contrast and enable color perception.
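As a point of reference for the resolution problem, the following Python sketch reduces a camera frame to the 6 × 10 electrode grid by naive block averaging. It is purely illustrative: the semantically relevant downsampling investigated in the project aims to preserve task-relevant image content rather than averaging uniformly.

import numpy as np

def downsample_to_electrodes(image, grid=(6, 10)):
    # Average-pool a grayscale image (H x W) to one intensity value per electrode.
    gh, gw = grid
    h = image.shape[0] - image.shape[0] % gh   # crop to a block-divisible size
    w = image.shape[1] - image.shape[1] % gw
    blocks = image[:h, :w].reshape(gh, h // gh, gw, w // gw)
    return blocks.mean(axis=(1, 3))

# Toy camera frame reduced to the 6 x 10 electrode grid.
stimulus = downsample_to_electrodes(np.random.rand(480, 640))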
Funding: Deutsche Forschungsgemeinschaft (DFG)
Contact: Henning Konermann (RWTH Aachen University), Yuli Wu, Johannes Stegmaier
Early embryonic development can be studied in space and time (3D+t) at cellular resolution using current fluorescence microscopy techniques such as light-sheet or confocal microscopy. Automatic segmentation and tracking algorithms are used to extract thousands of cell movement trajectories from potentially terabyte-scale 3D+t image data sets that offer the possibility for a detailed analysis of inter-individual differences. A fundamental problem that remains after having obtained such tracked point clouds, however, is the comparison of individual experiments to confirm biological hypotheses in multiple repeats. The lack of fully automated solutions to this 3D+t alignment problem currently limits whole-embryo analyses to simple specimens, early time points or manual analyses. The aim of the proposed project is the development of new methods for automated spatiotemporal alignment of large 3D+t point clouds. As complex organisms usually lack one-to-one cell correspondences that could be used for registration, a fundamental part of the project will be the development of generic descriptors to identify various anatomical regions at different developmental stages using both classical and machine learning-based approaches. We are working towards general methods for spatiotemporal alignment of 3D+t point cloud data sets, open-source implementations and the application of the new methods to large-scale light-sheet microscopy experiments of zebrafish and fruit fly embryos.
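As a small illustration of the rigid alignment step that becomes possible once putative correspondences (e.g., derived from such anatomical descriptors) are available, the following Python sketch implements the classic Kabsch algorithm on toy data; it is a self-contained example, not the registration method developed in the project.

import numpy as np

def kabsch(P, Q):
    # Rigid transform (R, t) minimizing ||R @ p + t - q|| over corresponding rows of P and Q.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                     # 3 x 3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # enforce a proper rotation (no reflection)
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy example: nuclei centroids of one embryo, rotated and shifted.
P = np.random.rand(1000, 3)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([5.0, 0.0, 0.0])
R, t = kabsch(P, Q)                               # recovers the rotation and the shift

The hard part addressed in the project is precisely what this sketch assumes away: establishing reliable correspondences between embryos that lack a one-to-one cell mapping, and handling non-rigid, stage-dependent deformations.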
Funding: Deutsche Forschungsgemeinschaft (DFG)
Contact: Johannes Stegmaier
Multidimensional fluorescence microscopy allows capturing 3D videos (3D+t) of entire model organisms at high spatial and temporal resolution. Automated image analysis can be used to detect and segment fluorescently labeled structures like cell nuclei and plasma membranes and to follow their temporal dynamics in terabyte-scale 3D+t image data sets. However, no algorithms are available to date that automatically provide error-free segmentation and tracking results. Moreover, deep learning-based methods, although potentially perfectly suited for such analysis tasks, are not yet sufficiently applicable to these large-scale image analysis problems due to a significant lack of suitable training data, limited GPU memory, and the substantial time required for manual annotations. The aim of this project is to develop new methods for generating synthetic training data that can be used for 3D segmentation and tracking tasks in developmental biology. The generated data will enable the training of data-hungry supervised deep learning models and are additionally suitable for extending image analysis competitions by providing benchmark data sets for large-scale 3D+t problems.
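The idea of paired synthetic data can be illustrated with a toy example: spherical “nuclei” are placed in a 3D volume, the instance labels serve as ground truth, and a blurred, noisy version of the foreground serves as the image. The Python sketch below is purely illustrative and does not reflect the generative methods developed in the project.

import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_pair(shape=(64, 128, 128), n_nuclei=40, radius=5, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.zeros(shape, dtype=np.uint16)      # instance segmentation ground truth
    zz, yy, xx = np.indices(shape)
    for i in range(1, n_nuclei + 1):
        # Random center that keeps the sphere inside the volume.
        cz, cy, cx = (rng.integers(radius, s - radius) for s in shape)
        sphere = (zz - cz)**2 + (yy - cy)**2 + (xx - cx)**2 <= radius**2
        labels[sphere] = i
    # Blurred, noisy foreground as a crude stand-in for a fluorescence image.
    image = gaussian_filter((labels > 0).astype(np.float32), sigma=1.5)
    image += rng.normal(0.0, 0.05, shape)
    return image, labels

image, labels = synthetic_pair()                   # one synthetic image/label training pair

Realistic synthetic data additionally has to mimic nucleus textures, membrane signals, optical blur, and temporal dynamics, which is what the methods developed in this project target.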
Funding: Deutsche Forschungsgemeinschaft (DFG)
Contact: Rüveyda Yilmaz (RWTH Aachen University), Dennis Eschweiler (RWTH Aachen University / UKA Aachen), Johannes Stegmaier