This is ongoing work done in collaboration with the RIKEN CBS lab for Molecular Mechanisms of Brain Development.
The overall goal of this project is to automate the segmentation and registration of in situ hybridization (ISH) gene expression in the adult marmoset brain.
Main Figure 1. 3D rendering of ISH expression data for three genes in the adult marmoset brain after segmentation, alignment, and integration into standard space (drd2: blue, tgfrb2: green, c1ql2: pink).
Main Figure 2. Coronal, horizontal, and sagittal views of five genes overlaid onto backlit images in standard space (dgkz: purple, drd2: blue, tgfrb2: green, il1rap: yellow, c1ql2: pink).
Main Figure 3. Coronal, horizontal, and sagittal views of backlit images (left: NanoZoomer) aligned to serial two-photon tomography images (right: TissueCyte; used with permission from Dr. A. Watakabe) in standard space, showing the alignment of anatomical structures.
Gene expression brain atlases in lower-order model organisms, such as the Allen Mouse Brain Atlas, are widely used in neuroscience research. Primate brain atlases are necessary to facilitate understanding of human development and disease. The Marmoset Gene Atlas, created by the Brain/MINDS project in Japan, is an ISH database of gene expression in the marmoset brain. We are working on image segmentation, registration, and integration into standard space, which are needed for quantification and mapping of gene expression to a 3D atlas. We prioritize the automation of this analysis pipeline using deep learning methods to reduce human error and bias.
Figure 1. Project pipeline overview.
The overall goal of image registration is to align brain images to a common template space.
Registration of brain images to a common template space facilitates better spatial understanding of gene expression (Main Figures 1, 2) and enables integration of data from different imaging modalities (e.g., serial two-photon tomography, MRI; Main Figure 3). In other words, gene expression data from a single animal can be registered to an anatomically correct space, and data from different marmosets, acquired by different research centers, can be combined to facilitate a more holistic understanding of the marmoset brain (Skibbe et al., 2022). We describe the alignment of ISH images obtained from brain sections taken from a single marmoset. Image registration was performed in the reverse order of image acquisition (Fig. 2). Three image modalities were collected: blockface, backlit, and ISH gene expression.
Figure 2. Image acquisition resulted in 3 images: blockface, backlit, and ISH gene expression. All 3 images were used for image registration to map ISH gene expression to the 3D marmoset brain.
Advanced Normalization Tools (ANTs and ANTsPy) were used to register images within one subject and to an anatomically correct adult marmoset template space. For the 3D reconstruction of the backlit images, the backlit images were registered to the blockface image stack by iteratively applying affine registration (rotation, translation, scaling) combined with deformable registration (symmetric normalization, SyN) (Fig. 3). ISH images were then registered to the reconstructed 3D backlit image stack using the same iterative algorithm. For all registration tasks, we used normalized mutual information as the optimization metric.
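To illustrate the optimization metric, normalized mutual information can be computed from the joint intensity histogram of a fixed and a moving image. The NumPy sketch below is illustrative only (it is not the ANTs implementation, and the function name is ours); it shows why the metric peaks when intensity patterns are aligned:

```python
import numpy as np

def normalized_mutual_information(fixed, moving, bins=32):
    """Illustrative NMI = (H(F) + H(M)) / H(F, M) from a joint histogram.

    Sketch only; ANTs uses its own optimized metric implementation.
    """
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)               # marginal of the fixed image
    py = pxy.sum(axis=0)               # marginal of the moving image
    # Shannon entropies; skip zero-probability bins to avoid log(0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy
```

Perfectly aligned identical images give the maximum value of 2, while unrelated images give values near 1, so the registration optimizer drives the transform toward higher NMI.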
Figure 3. Registration of backlit to blockface (left), ISH to backlit (center), and backlit to three ISH gene expression images (right).
The registration of images to the Brain/MINDS Marmoset Connectivity Atlas (BMCA) reference space enables integration with imaging data from other modalities, including anterograde tracer data.
Figure 4. Alignment of images acquired using different modalities to a common template space: backlit, anterograde and retrograde tracer images, and a labelled brain atlas. Images from the Brain/MINDS Marmoset Connectivity Atlas (Skibbe et al., 2022).
For gene segmentation, we developed a semi-supervised model to reduce the amount of labelled data needed for training.
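The loss formulation is not detailed here; a common semi-supervised setup combines a supervised segmentation loss on labelled slices with a consistency loss on unlabelled ones, and the NumPy sketch below shows that idea under that assumption (all function names are hypothetical):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between predicted probabilities and binary labels."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def consistency_loss(pred_a, pred_b):
    """Mean squared difference between predictions for two augmented
    views of the same unlabelled image."""
    return np.mean((pred_a - pred_b) ** 2)

def semi_supervised_loss(pred_labelled, labels, pred_view1, pred_view2,
                         weight=1.0):
    """Supervised term on labelled data plus a weighted unsupervised
    consistency term on unlabelled data (one common formulation)."""
    return (dice_loss(pred_labelled, labels)
            + weight * consistency_loss(pred_view1, pred_view2))
```

The unlabelled term lets the model learn from the many unannotated ISH sections, which is what reduces the labelling burden.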
Model outputs were compared to outputs from supervised loss only and from thresholding (Fig. 5). Fine-grained details, i.e., gene expression at the single-cell level, were clearer in the model's outputs than in the supervised-loss-only and thresholding outputs (top two rows). Artifacts such as folds in the tissue were incorrectly segmented as signal by the supervised-loss-only and thresholding approaches, whereas the model correctly identified them as tissue rather than signal (third row). Finally, gene expression contained within tissue folds was clearly segmented in the model's outputs, but not in the supervised-loss-only or thresholding outputs (bottom row).
Figure 5. Comparison of labels from the model, supervised loss only, and thresholding.
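The thresholding baseline in Fig. 5 is not specified; a common choice for a global intensity threshold is Otsu's method, sketched below in NumPy as an assumption (function names are ours), which illustrates why a single global cutoff mislabels dark artifacts such as tissue folds:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Global threshold maximizing between-class variance (Otsu's method).

    Assumed baseline; the actual thresholding method is not stated.
    """
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # background class weight per cutoff
    mu = np.cumsum(p * centers)        # cumulative mean intensity
    mu_t = mu[-1]                      # global mean intensity
    w1 = 1.0 - w0                      # foreground class weight
    valid = (w0 > 0) & (w1 > 0)
    # Between-class variance for each candidate threshold
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

def segment_by_threshold(image):
    """Binary mask: 'signal' wherever intensity exceeds the global cutoff."""
    return image > otsu_threshold(image)
```

Because the cutoff is global, any pixel darker (or brighter) than the threshold is labelled signal regardless of context, which is consistent with the fold artifacts seen in the thresholding column of Fig. 5.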