Image Registration: The Navigation System for Digital Pathology Diagnostics and Biomedical Research
DOI: https://doi.org/10.47184/tp.2025.01.02

Image registration is the automatic alignment of similar images, such as consecutive tissue sections, in pathology diagnostics and biomedical research. In a digital workflow, image registration allows pathologists to seamlessly switch between various stains, minimizing repetitive navigation tasks and thereby streamlining the diagnostic procedure. In research, it integrates direct and third-party information from differently stained slides, combining otherwise separate information in pharmacological studies and biomarker discovery. For this integration to work in practice, image registration must be robust against variations found in routine pathology images, such as differences in stain intensity and artifacts, to ensure widespread applicability. Diverse approaches are possible for registration: feature-based, intensity-based, deep learning, and combinations thereof. As computational pathology continues to advance, leveraging image registration technologies significantly improves diagnostic efficiency and accuracy, paving the way for enhanced patient outcomes and groundbreaking research discoveries.
Keywords: rigid transformation, deformable transformation, image viewer, image management system
Routine pathology diagnostics involves multiple repetitive steps such as examining Whole Slide Images (WSIs), procuring additional samples, analyzing non-image patient data, adhering to diagnostic guidelines, drafting reports, and participating in video conferences on short notice. Rapid execution is sometimes necessary, particularly during intraoperative frozen section examination.
One task when examining differently stained consecutive slides is the repeated identification of corresponding regions of interest across multiple slides. This is especially relevant in examinations such as breast and prostate cancer diagnostics, which typically require a series of consecutive slides stained with various immunohistochemical (IHC) reagents. To streamline the workflow, it is crucial to minimize time-consuming navigation tasks. In road transport, navigation assistance is omnipresent; why should it be any different for pathology?
Image registration provides a solution: By automatically aligning all consecutive tissue sections from a block, pathologists can navigate between stains without losing focus on the key areas.
Biomedical Research
Beyond guiding users to their desired destination, image registration in histology integrates the information from distinct immunohistochemical or immunofluorescent slides. It also combines the results from third-party analysis tools, enhancing the overall synthesis of information. Merging data from multiple stained slides is beneficial for pharmacological research, where often only a combination of markers in a specific area is relevant for prognosis.
What is Image Registration?
Originating in applied mathematics, image registration describes the process of finding a suitable image transformation that achieves optimal similarity between images. Transformations can be either rigid or deformable. Rigid transformations are represented by a global transformation rule composed of rotation and translation, affecting the whole image. In contrast, deformable transformations shift individual points “elastically” within a grid overlaid on the image. Typically, images are first coarsely aligned using a global, rigid transformation, followed by a detailed local deformation at each grid point (Fig. 1).
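To make the distinction concrete, the following minimal Python sketch (purely illustrative and not taken from any software mentioned here) applies a global rigid transform to a few coordinates and then shifts them with a simplified grid-based deformation; real deformable registrations interpolate the grid smoothly, for example with B-splines.

```python
import numpy as np

def rigid_transform(points, angle_rad, tx, ty):
    """Apply a global rigid transform (rotation + translation) to N x 2 points."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return points @ rotation.T + np.array([tx, ty])

def deformable_transform(points, grid_points, grid_displacements):
    """Shift each point by the displacement of its nearest control grid point.
    Real methods interpolate the grid smoothly; nearest-neighbour lookup is used
    here only to keep the illustration short."""
    out = points.copy()
    for i, p in enumerate(points):
        nearest = np.argmin(np.linalg.norm(grid_points - p, axis=1))
        out[i] = p + grid_displacements[nearest]
    return out

# Coarse rigid alignment first, then a local deformation per grid point.
pts = np.array([[10.0, 10.0], [50.0, 20.0], [30.0, 40.0]])
coarse = rigid_transform(pts, angle_rad=np.deg2rad(5), tx=3.0, ty=-2.0)

grid = np.array([[0.0, 0.0], [0.0, 50.0], [50.0, 0.0], [50.0, 50.0]])
displacements = np.array([[0.5, 0.0], [0.0, -0.5], [-0.3, 0.2], [0.1, 0.1]])
fine = deformable_transform(coarse, grid, displacements)
```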
Requirements for Clinical and Research Applications
To be effective, image registration methods must cope with the real-world quality of routine pathology images. Slides may differ in stain intensity and may contain artifacts such as hair, tears, missing tissue, folds, pen markings, or shadows. Additionally, digitized images exhibit variations in brightness, saturation, blur, and contrast.
In clinical use, speed is another crucial factor, as pathologists examine a high volume of slides daily. Conversely, researchers prioritize accuracy over speed, especially when aligning differently stained slides to gain insight into biological processes.
The accuracy of image registration is limited by the underlying data, as consecutive slides differ from one another, depending on the slice thickness and the distance between slices. In contrast, restaining or co-staining allows precise identification of identical structures by repeating the cycle of staining, scanning, and washing. Research has shown the limits of accuracy that can be reached in both settings [1].
Methodological Diversity
Various approaches to image registration have been proposed in the literature. They can be categorized into three primary types: feature-based, intensity-based, and deep learning approaches.
Feature-based approaches identify key points in the images and compute descriptors that characterize each point and its surroundings. These descriptors are then compared across images to find matches and to compute a transformation or deformation that best aligns the key points.
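As an illustration of this idea, a feature-based coarse alignment of two low-resolution section thumbnails can be sketched with standard OpenCV building blocks; the file names and parameters below are placeholders, and the sketch does not represent the pipeline of any specific product discussed in this article.

```python
import cv2
import numpy as np

# Two consecutive sections as grayscale thumbnails (placeholder file names).
fixed = cv2.imread("section_he.png", cv2.IMREAD_GRAYSCALE)
moving = cv2.imread("section_ihc.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect key points and compute descriptors in both images.
orb = cv2.ORB_create(nfeatures=5000)
kp_fixed, des_fixed = orb.detectAndCompute(fixed, None)
kp_moving, des_moving = orb.detectAndCompute(moving, None)

# 2. Match descriptors across the images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_moving, des_fixed), key=lambda m: m.distance)

# 3. Estimate a transform from the matched key points; RANSAC rejects outliers.
src = np.float32([kp_moving[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_fixed[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
transform, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

# 4. Warp the moving image into the coordinate system of the fixed image.
aligned = cv2.warpAffine(moving, transform, (fixed.shape[1], fixed.shape[0]))
```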
Intensity-based methods use the RGB or gray values of each pixel and compare them with the corresponding pixel values in the other images. They may also use derivatives or other functions to guide optimization algorithms towards matching edges.
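A minimal intensity-based sketch, assuming two grayscale thumbnails of equal size, could optimize a rigid transform by maximizing the normalized cross-correlation between the fixed image and the warped moving image. Production systems use more sophisticated similarity measures, multi-resolution schemes, and regularized deformations; the code below only illustrates the principle.

```python
import numpy as np
from scipy import ndimage, optimize

def warp(moving, params):
    """Rigidly warp `moving` by (angle in degrees, shift_y, shift_x)."""
    angle, dy, dx = params
    rotated = ndimage.rotate(moving, angle, reshape=False, order=1)
    return ndimage.shift(rotated, (dy, dx), order=1)

def dissimilarity(params, fixed, moving):
    """Negative normalized cross-correlation between fixed and warped moving image."""
    warped = warp(moving, params)
    f = fixed - fixed.mean()
    w = warped - warped.mean()
    denom = np.sqrt((f ** 2).sum() * (w ** 2).sum()) + 1e-8
    return -(f * w).sum() / denom

def register(fixed, moving):
    """Find the rigid parameters that best align `moving` to `fixed`."""
    result = optimize.minimize(dissimilarity, x0=np.zeros(3),
                               args=(fixed, moving), method="Powell")
    return result.x  # optimal (angle, shift_y, shift_x)
```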
Deep learning approaches use neural networks for feature identification and matching. For instance, SuperPoint [2] employs a convolutional neural network (CNN) for localization and description and SuperGlue [3] utilizes a graph neural network for feature matching.
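The matching step of such pipelines can be illustrated generically: given learned descriptors produced by a detector network, even a simple mutual-nearest-neighbour rule yields candidate correspondences. The sketch below is a strong simplification; learned matchers such as SuperGlue replace this rule with a trained graph neural network.

```python
import torch

def mutual_nearest_neighbour_matches(desc_a, desc_b):
    """Match two sets of L2-normalised learned descriptors (N_a x D and N_b x D),
    keeping only pairs that are each other's nearest neighbour."""
    similarity = desc_a @ desc_b.t()        # cosine similarity matrix
    nn_ab = similarity.argmax(dim=1)        # best match in B for each point in A
    nn_ba = similarity.argmax(dim=0)        # best match in A for each point in B
    idx_a = torch.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a          # keep mutual nearest neighbours only
    return torch.stack([idx_a[mutual], nn_ab[mutual]], dim=1)
```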
Hybrid approaches combine multiple techniques. For instance, HistokatFusion provides an optional feature-based coarse registration alongside an intensity-based fine registration. Notably, HistokatFusion is also capable of fully intensity-based processing [4].
HistokatFusion
The Fraunhofer Institute for Digital Medicine MEVIS has developed image registration techniques over many years. The method was initially developed for radiology to detect changes in follow-up scans of the same patient or to apply atlases for enhanced image information. The source code has been extensively optimized for runtime, robustness, and enhanced features, such as masking of bones to restrict implausible deformations [5]. Later, the method was adapted to histological images with minor modifications. It remains highly efficient, fully automatic, and robust, as validated in challenges such as ANHIR (1st place) and ACROBAT 2023 (2nd place). While AI-based registration was explored, numerical methods were favored for their independence from task-specific training data and their direct applicability to new datasets.
Among other applications, HistokatFusion was used to analyze Tumor Infiltrating Lymphocytes (TILs) in breast cancer, registering differently stained IHC slides to explore correlations between active antibodies in tumor and stroma areas [6].
“GPS for Histology”
The MEVIS Tissue Navigator, powered by HistokatFusion, allows pathologists to bookmark regions of interest by simply marking them. Upon reopening a case, the viewer automatically navigates to the previously viewed region or the last marking. When new images arrive, the Tissue Navigator initiates HistokatFusion to register them. The registration results are employed in the viewer to deform the currently displayed image, offering a fast alternative to time-consuming whole slide image deformation (Fig. 2).
The registration enables the automatic transfer of regions of interest to other slides, and the viewer navigates to the corresponding destination immediately, akin to a GPS for Histology.
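Conceptually, transferring a region of interest amounts to mapping its coordinates through the stored registration result. The hypothetical sketch below assumes a simple homogeneous 3 x 3 transform per slide pair; a deformable registration would additionally apply a local displacement field to each point.

```python
import numpy as np

def transfer_roi(roi_points, transform_3x3):
    """Map ROI vertices (N x 2, source-slide pixel coordinates) into the
    coordinate system of a registered target slide."""
    homogeneous = np.hstack([roi_points, np.ones((len(roi_points), 1))])
    mapped = homogeneous @ transform_3x3.T
    return mapped[:, :2] / mapped[:, 2:3]

# A rectangular ROI marked on one slide, transferred to a registered slide
# (the transform values are purely illustrative).
roi = np.array([[1000.0, 2000.0], [1400.0, 2000.0],
                [1400.0, 2300.0], [1000.0, 2300.0]])
transform = np.array([[0.999, -0.035, 120.0],
                      [0.035,  0.999, -80.0],
                      [0.0,    0.0,     1.0]])
print(transfer_roi(roi, transform))
```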
Utilizing the Tissue Navigator has been shown to reduce diagnosis time by 17 %, saving approximately 100 seconds per case compared to an analog workflow [7].
Image Registration for AI
Image registration in computational pathology delivers the most value when integrated with image viewers or image management systems. Tools such as MIKAIA® benefit from integrated image registration for viewing, annotating, and running image analysis, but also from combining interactive training with the alignment of differently stained slides, enabling rapid training of customized AI models without technical expertise.
Additionally, image registration can be utilized to generate ground truth for AI training. Compared to labor- and time-intensive manual annotations, utilizing the information from biologically accurate IHC stains can aid in training an AI model to extract knowledge from H&E-stained WSIs. HistokatFusion has already contributed to epithelium detection in H&E [8] and to PIN4-based pre-training of foundation models [9]. The future of image registration offers further possibilities to support AI training for classification, segmentation, captioning, and more.
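As a simplified illustration of this idea, a label mask derived from an IHC slide (for example, a thresholded DAB signal) could be warped into the coordinate system of the registered H&E slide and used as training targets for a segmentation model; the sketch assumes an affine registration result for brevity.

```python
import cv2
import numpy as np

def transfer_labels(ihc_mask, affine_2x3, he_shape):
    """Warp a binary label mask from IHC coordinates into the registered
    H&E coordinate system. `affine_2x3` stands in for the output of any
    registration method; nearest-neighbour interpolation keeps labels discrete."""
    return cv2.warpAffine(ihc_mask.astype(np.uint8), affine_2x3,
                          (he_shape[1], he_shape[0]),
                          flags=cv2.INTER_NEAREST)
```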
Conclusion
Image registration is a transformative technique that significantly enhances multi-slide workflows. By navigating between consecutive tissue sections and integrating the information from various stains, it allows pathologists to focus on critical areas while minimizing repetitive tasks.
HistokatFusion offers a highly effective, accurate, and robust solution for image registration. Its fully automatic nature eliminates the need for cumbersome manual landmarks, allowing it to operate in the background. It can be run at high resolution for superior results or configured to align images within a few seconds for a rough comparison.
The integration of image registration into routine diagnostics not only streamlines workflows but also enhances the precision of histopathological analysis.