
Automatic rigid image Fusion of preoperative MR and intraoperative US acquired after craniotomy

Abstract

Background

Neuronavigation of preoperative MRI is limited by several sources of error. Intraoperative ultrasound (iUS) with navigated probes, which provides automatic superposition of preoperative MRI and iUS as well as three-dimensional iUS reconstruction, may overcome some of these limitations. The aim of the present study is to verify the accuracy of an automatic MRI–iUS fusion algorithm intended to improve MR-based neuronavigation accuracy.

Methods

An algorithm using a Linear Correlation of Linear Combinations (LC2)-based similarity metric was retrospectively evaluated on twelve datasets acquired in patients with brain tumors. A series of landmarks was defined in both the MRI and iUS scans. The Target Registration Error (TRE) was determined for each pair of landmarks before and after automatic Rigid Image Fusion (RIF). The algorithm was tested under two conditions of initial image alignment: registration-based fusion (RBF), as given by the navigated ultrasound probe, and different simulated coarse alignments during a convergence test.

Results

Except for one case, RIF was successfully applied in all patients using the RBF as initial alignment. In these patients, the mean TRE was significantly reduced from 4.03 (± 1.40) mm after RBF to 2.08 (± 0.96) mm after RIF (p = 0.002). In the convergence test, the mean TRE after the initial perturbations was 8.82 (± 0.23) mm, which was reduced to a mean TRE of 2.64 (± 1.20) mm after RIF (p < 0.001).

Conclusions

The integration of an automatic image fusion method for co-registration of preoperative MRI and iUS data may improve the accuracy of MR-based neuronavigation.

Background

Maximal safe resection is an important prognostic factor for gliomas. [1,2,3] Neuronavigation of preoperative MRI is the current standard in surgical management of brain tumors. However, neuronavigation systems show limited accuracy due to multiple physical, technical, operational, and biological issues. [4,5,6,7,8] In order to account for non-biological factors limiting the navigation accuracy, e.g. fundamental aspects of optical navigation or operational constraints, such as loose reference systems, head clamps, etc., [9, 10] intraoperative imaging strategies have been developed to enhance and maintain accuracy throughout the procedure. Examples of 3D imaging modalities are intraoperative MRI (iMRI), intraoperative computed tomography (iCT) and 3D intraoperative ultrasound (iUS).

Compared to iCT, iUS imaging is not associated with radiation exposure and is more cost-efficient. Furthermore, its acquisition time is shorter than that of iCT or iMRI, allowing it to be applied multiple times during surgery while providing high spatial resolution and reasonable soft tissue contrast. [11,12,13,14,15,16,17] Neuronavigation systems enable reconstruction and integration of 3D iUS imaging (Ultrasound Navigation, Brainlab AG, Germany) by means of tracked ultrasound probes and intelligent image reconstruction, enabling the surgeon to use this modality in parallel to previously patient-registered MRI data. [18,19,20,21] To this end, the iUS software creates a new patient registration, links it to a previously created registration, i.e. registration-based fusion (RBF), and superimposes iUS and MRI scans in the neuronavigation system. RBF requires that both registered scans share a common (physical) coordinate system, which, however, might be imprecise due to the above-mentioned factors limiting the navigation accuracy of the preoperatively acquired MRI.

In order to account for non-biological factors compromising the patient registration of the MRI and to enable an automatic update of the patient registration by means of iUS, an image co-registration algorithm to rigidly align preoperative MRI and intraoperative 3D iUS scans has been previously proposed. [22] This method is based on a modality-specific metric to find the optimal registration result [23] and has been successfully applied to retrospective iUS-MRI data before. [24] However, clinical integration of this method in commercially available neuronavigation systems has not been accomplished so far. Therefore, the goal of the present study is to evaluate a similar implementation of an automatic rigid image co-registration algorithm, integrated in a CE-certified / FDA-cleared medical device software, to enable intraoperative re-registration of MRI-derived planning data based on iUS. Since factors related to the resection of brain tissue (e.g. CSF leakage, tissue removal) result in a loss of the rigid spatial relationship of intracranial anatomies, only iUS scans acquired after craniotomy and immediately before the start of resection were considered.

Methods

Twelve consecutive brain tumor patients were included in the present study (Table 1). The local Ethics Committee approved this investigation. All patients signed a written informed consent. For each patient, preoperative MRI and iUS scans were acquired during clinical routine. The image pairs were retrospectively enriched with verification landmarks to determine the accuracy of the automatic rigid fusion algorithm currently under development at Snke OS GmbH (part of Brainlab AG, Germany).

Table 1 Demographic and Clinical data. F: female. M: male. IDH: isocitrate dehydrogenase

Neuronavigation Protocol

The MRI protocol acquired for neuronavigation on the day before surgery was:

  • T1 (voxel size = 1 × 1 × 1 mm, FOV = 24 cm, slice thickness = 1 mm, TA = 3 min).

  • T2 (voxel size = 0.8 × 0.9 × 2.6 mm, FOV = 26 cm, slice thickness = 2.6 mm, TA = 5 min).

  • FLAIR (voxel size = 0.9 × 1.2 × 4 mm, FOV = 24 cm, slice thickness = 4 mm, TA = 2.23 min).

For neuronavigation, all preoperative MRI scans were rigidly fused, and the 3D T1-weighted MRI was registered to an iCT acquired for patient registration by means of CT-based automatic image registration (Cranial Navigation with Automatic Image Registration, Brainlab AG, Germany). An iUS scan was conducted before dural opening using a dedicated machine (bk5000, BK Medical Holding Company, GE Healthcare, United States). The ultrasound system is integrated with the neuronavigation system (Ultrasound Navigation, Brainlab AG, Germany), which provides user-guided, continuous, optically tracked iUS sampling and can be used to reconstruct a patient-registered 3D iUS dataset for neuronavigation. [19, 21] However, changes to the registration setup, and thus inconsistencies between the patient-registered MRI and the separately registered iUS scan, may affect the superposition of both modalities, i.e. RBF. Re-registration using automatic image co-registration, i.e. rigid image fusion (RIF), may address this ambiguity by providing an update to the initial registration of the preoperative MRI.
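In terms of coordinate transforms, RBF and the RIF update can be viewed as simple compositions of rigid 4 × 4 matrices. The sketch below (Python/NumPy) is a minimal illustration of this interpretation; the matrix names and conventions are assumptions for illustration and do not correspond to the vendor software.

```python
import numpy as np

# Assumed convention: T_a_from_b maps homogeneous coordinates given in
# space b into space a. "patient" denotes the tracking/registration space.

def compose_rbf(T_patient_from_mr, T_patient_from_us):
    """Registration-based fusion (RBF): both scans are registered to the same
    patient space, so their superposition follows by composition."""
    return np.linalg.inv(T_patient_from_mr) @ T_patient_from_us  # T_mr_from_us

def update_mr_registration(T_patient_from_us, T_mr_from_us_rif):
    """Rigid image fusion (RIF) re-estimates the MR-US alignment from image
    content; the preoperative MR patient registration can then be updated to
    become consistent with the iUS-derived alignment."""
    return T_patient_from_us @ np.linalg.inv(T_mr_from_us_rif)  # new T_patient_from_mr
```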

Automatic rigid image Fusion

The prototype implementation of the automatic MRI-iUS rigid image fusion algorithm retrospectively evaluated in this work (the release process was ongoing during preparation of the manuscript) was inspired by earlier works [22, 23] and has previously been successfully applied to representative MR-US registration scenarios. [24] In essence, this rigid co-registration algorithm applies a Linear Correlation of Linear Combinations (LC2)-based similarity metric, which exhibits local invariance to how much two channels of information contribute to a US image, i.e. it correlates the US image with both the MRI intensity values and their spatial gradient magnitude, and allows for fully automatic MR-US registration in a matter of seconds. [22]
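To make the principle concrete, the following sketch (Python/NumPy) computes a patch-wise LC2-style similarity between a US volume and an MRI volume already resampled onto the same grid under a candidate transform; an optimizer would then search for the rigid transform maximizing this score. This is an illustrative re-implementation following the published description [22], not the evaluated prototype, and all parameter values are assumptions.

```python
import numpy as np

def lc2_similarity(us, mri, patch_radius=3, eps=1e-6):
    """LC2: within each local patch, fit the US intensities as a linear
    combination of MRI intensity, MRI gradient magnitude and a constant;
    score the fraction of US variance explained, weighted by that variance."""
    us, mri = us.astype(float), mri.astype(float)
    grad_mag = np.sqrt(sum(g ** 2 for g in np.gradient(mri)))   # spatial gradient magnitude
    r = patch_radius
    total, weight_sum = 0.0, 0.0
    for idx in np.ndindex(*[s - 2 * r for s in us.shape]):      # patch corners (dense, slow but clear)
        sl = tuple(slice(i, i + 2 * r + 1) for i in idx)
        u = us[sl].ravel()
        var_u = u.var()
        if var_u < eps:                                         # uninformative patch (e.g. shadow)
            continue
        A = np.column_stack([mri[sl].ravel(), grad_mag[sl].ravel(), np.ones(u.size)])
        coef, *_ = np.linalg.lstsq(A, u, rcond=None)            # local linear combination
        residual_var = np.mean((u - A @ coef) ** 2)
        total += var_u * (1.0 - residual_var / var_u)           # variance-weighted local LC2
        weight_sum += var_u
    return total / weight_sum if weight_sum > 0 else 0.0
```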

Landmark Definition

To evaluate the image co-registration accuracy of the proposed image fusion algorithm, landmark pairs were retrospectively determined for reliably identifiable intracranial anatomical structures in both the MR and the US scan. Brain shift may affect soft and hard tissues in different ways. In order to be representative of the real fusion in each patient, landmarks were chosen on both rigid (e.g. falx, skull base) and deformable structures (ventricles, sulci, vessels), at different distances from the craniotomy. Moreover, landmarks had to be easy to identify on both US and MRI images.

A multi-staged approach was chosen to ensure high landmark precision and address interrater variability. First, a clinical expert labelled relevant structures as an initial proposal. Second, the proposed landmarks were technically verified (i.e. correct dataset labelling and adherence to the naming convention) and refined if needed, and, lastly, clinically validated by the medical expert. All clinically confirmed landmarks were then used for automated batch-processing to apply RIF and subsequently measure the Euclidean distance for each landmark pair. Examples of different pairs of landmarks are shown in Figs. 1 and 2.

Fig. 1

Clinically determined landmarks for a representative patient (Patient 3) with a non-contrast-enhancing glioma. The landmarks are displayed simultaneously in axial, coronal and sagittal projections in T2 (top row), iUS (middle row) and contrast-enhanced T1 images (lower row), based on a registration-based fusion (RBF) of the preoperative MR and the iUS scan obtained intraoperatively before dural opening. A spatial shift of anatomical structures, e.g. the lateral ventricle, can be observed based on the RBF data

Fig. 2

Representative landmarks for a patient (Patient 2) with a lesion in the occipital lobe. A spatial shift of anatomical structures, e.g. the falx, can be observed based on the RBF MR-US data

Quantitative performance test

The reliability of the proposed fusion algorithm was determined in two steps. First, the RBF was used as the initial alignment to calculate the automatic RIF. Second, RIF was calculated for a broad range of initial image pre-alignments, i.e. a clinically representative capture range of RBFs. This allows the performance of the proposed algorithm to be investigated for real-world clinical cases and provides information on the method's sensitivity to different initial alignments. To this end, the Euclidean distance between a landmark defined in preoperative MR image space and the equivalent landmark identified in the US scan was determined, the so-called Target Registration Error (TRE). TREs were averaged across the available landmark pairs of each fusion pair and measured before and after fusing the images automatically.
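The TRE computation itself reduces to a few lines; the sketch below (Python/NumPy, with illustrative names) mirrors the definition above for a set of paired landmarks and a candidate rigid transform.

```python
import numpy as np

def target_registration_error(mr_landmarks, us_landmarks, T_us_from_mr):
    """mr_landmarks, us_landmarks: (N, 3) arrays of paired points in mm.
    T_us_from_mr: 4x4 rigid transform mapping MR image space into US space.
    Returns per-landmark TREs and their mean (the patient-level mean TRE)."""
    mr_h = np.hstack([mr_landmarks, np.ones((len(mr_landmarks), 1))])  # homogeneous coordinates
    mapped = (T_us_from_mr @ mr_h.T).T[:, :3]                          # MR landmarks mapped into US space
    tre = np.linalg.norm(mapped - us_landmarks, axis=1)                # Euclidean distances in mm
    return tre, tre.mean()
```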

To consider different initial alignments of the datasets, a convergence test was performed. This test was inspired by previous work. [22] The basic principle is to use landmark-based fusion parameters – as derived from an automatic rigid registration using landmark coordinates only, i.e. landmark-based fusion (LBF) – to provide the initial alignment between the MR and US scan (a minimal sketch of such a landmark-based rigid fit is given after the list below). This alignment is then perturbed by random rotations and translations, and an automatic fusion is calculated at each new perturbed position. Measuring the mean TRE at the perturbed position gives a measure of the initial image alignment; measuring the mean TRE after automatic fusion gives a measure of the algorithm's accuracy. This approach has several advantages:

  • Increased testing power, since many fusions per image pair can be computed, each with slightly different initial alignment.

  • A range of possible initial alignments that includes and goes beyond what is present in the original datasets is covered.

  • It allows the sensitivity to the initial alignment to be determined.
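A landmark-based rigid fit of this kind can be computed in closed form; the sketch below (Python/NumPy) uses the classic Kabsch least-squares solution and is an illustrative stand-in for the LBF used here, not the product implementation.

```python
import numpy as np

def landmark_based_fusion(mr_pts, us_pts):
    """Least-squares rigid alignment (rotation + translation, no scaling) of
    paired landmarks; returns the 4x4 transform mapping MR points onto US points."""
    mr_c, us_c = mr_pts.mean(axis=0), us_pts.mean(axis=0)
    H = (mr_pts - mr_c).T @ (us_pts - us_c)                      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = us_c - R @ mr_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```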

The following perturbation parameters were selected to cover a range of possible and realistic initial alignments between the image pairs:

  • Rotation range: 0° – 10°.

  • Translation range: 0–40 mm.

  • Number of perturbations per image pair: 100.

Once the input data are defined (image pairs plus landmarks), the test is performed in a fully automated manner within the testing infrastructure at Snke OS. This includes (1) computing the landmark-based initial alignment, (2) perturbing this initial alignment, (3) computing the automatic image-based fusion from the perturbed position, (4) computing the mean TRE values averaged across the patient-individual landmarks, and (5) quantifying the maximum, minimum, mean and median of the average TRE over all iterations of the convergence test per patient dataset.
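Under the stated perturbation ranges, the convergence test loop could be sketched as follows (Python, reusing the target_registration_error and landmark_based_fusion sketches above). The function run_rif stands in for the proprietary LC2-based fusion and, like all other names here, is an assumption for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

def random_perturbation(max_rot_deg=10.0, max_trans_mm=40.0):
    """Random rigid perturbation (4x4 matrix) within the ranges used in the study."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(np.deg2rad(rng.uniform(0, max_rot_deg)) * axis).as_matrix()
    T[:3, 3] = rng.uniform(0, max_trans_mm) * direction
    return T

def convergence_test(mr_pts, us_pts, run_rif, n_perturbations=100):
    """Per perturbation: mean TRE at the perturbed pose and after automatic RIF."""
    T_lbf = landmark_based_fusion(mr_pts, us_pts)          # unperturbed starting alignment
    results = []
    for _ in range(n_perturbations):
        T_init = random_perturbation() @ T_lbf             # perturbed starting pose
        _, tre_init = target_registration_error(mr_pts, us_pts, T_init)
        T_rif = run_rif(T_init)                            # automatic image-based fusion (assumed)
        _, tre_rif = target_registration_error(mr_pts, us_pts, T_rif)
        results.append((tre_init, tre_rif))
    return np.array(results)                               # summarize with min/max/mean/median
```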

Results

Except for one patient (Patient 2), all calculations could be performed successfully without the need for any manual interaction (in terms of adjusting the region of interest (ROI) used for image fusion). The patient requiring manual adjustment of the fusion ROI was excluded from further quantitative analysis (and was qualitatively assessed in Fig. 3). The results for the RBF quantification and the subsequently calculated RIFs are given in Table 2. On average, a mean TRE of 4.03 (± 1.40) mm was determined for RBF, whereas RIF showed a TRE of 2.08 (± 0.96) mm, representing a significant reduction of the registration error (p = 0.002). For the convergence test, the simulated initial alignments yielded a mean initial TRE of 8.82 (± 0.23) mm, as given in Table 3. The mean TRE for RIF after the convergence test was 2.64 (± 1.20) mm, a significant improvement in accuracy (p < 0.001). Results of the convergence test are illustrated in Fig. 4.

Fig. 3

Patient dataset excluded from the quantitative evaluation of the automatic image fusion algorithm. This dataset showed an initial misalignment of the RBF (A), and the algorithm, using the default fusion region of interest (ROI) determined from the field of view of the ultrasound scan, was not able to align both datasets in a fully automated manner (B). After manual definition of the fusion ROI – focusing on the clinical target and some deep anatomical structures serving as references for the RIF (C) – the proposed method was able to provide a highly accurate co-registration result (D)

Table 2 Patient-individual mean Target Registration Error (TRE in mm) after Landmark-Based Fusion (LBF), registration-based fusion (RBF) and the proposed Rigid Image Fusion (RIF), the latter performed based on RBF.
Table 3 Convergence test for Rigid Image Fusion (RIF): perturbation yields higher mean Target Registration Error (TRE in mm) values (i.e. initial TRE) compared to RBF, and the proposed method shows significantly reduced values after RIF (p < 0.001)
Fig. 4

Patient-individual mean Target Registration Error (TRE in mm) determined by means of the convergence test, simulating a set of perturbations of the initial rigid alignment of each MR-US pair (initial TRE), measuring the spatial registration after the proposed automatic Rigid Image Fusion (RIF), and compared with the best achievable fusion accuracy when the determined landmarks alone are used as reference (Landmark-Based Fusion, LBF).

To interpret the results in terms of the reliability of the landmark definition, LBF was additionally quantified, yielding a mean TRE of 1.33 (± 0.61) mm. This indicates that the landmarks themselves carry a certain imprecision, which limits the theoretically achievable accuracy of RIF.

Discussion

Intraoperative orientation and recognition of both the tumor border and eloquent structures are fundamental to achieve the best oncological and functional outcome. Neuronavigation of preoperative imaging may be inaccurate due to various technical and physiological sources of error. The integration of intraoperative imaging and multiple monitoring techniques aims at reducing the uncertainty of neuronavigation, thus improving both safety and efficacy of resection [14, 25,26,27,28].

Intraoperative MRI represents the state of the art in neuro-oncological surgery, but it is available in only a few neurosurgical centers. Intraoperative CT has recently been established to automatically register a preoperatively obtained MRI dataset by means of tracked iCT scanning and automatic multi-modal MR-CT rigid image fusion, yielding a patient registration accuracy below 1.5 mm. [29,30,31] It is less expensive and faster than iMRI, but it produces lower-quality images and exposes the patient to radiation. Ultrasound is the cheapest and fastest of the three available intraoperative imaging techniques, but interpretation of the acquired images is more difficult. Neuronavigation systems that allow immediate superimposition of iUS images with (typically more familiar) MRI scans may facilitate identification both of the tumor and of eloquent cortical and subcortical areas. This fusion relies on the initial registration of the patient; it is thus affected by the same sources of error described for neuronavigation. [7] In the present study, the mean TRE of RBF, determined as the initial superimposition accuracy of preoperative MRI and 3D iUS (automatically provided by the commercially available software used intraoperatively), was 4.03 mm, with a maximum patient-individual mean TRE of 7.38 mm. Interestingly, the initial registration was conducted by means of iCT-based automatic image registration, which was previously reported to be a very accurate method. [29, 32] Therefore, the navigation errors registered in this work are most likely caused by operational and technical effects, e.g. mechanical forces applied during craniotomy or loose reference marker adapters.

Prada et al. [33] described a workflow for intraoperative adjustment of neuronavigation based on iUS. It is an efficient method, but it is highly dependent on the experience of the user and has a widely variable processing time. Here, preliminary results of a new method for automatic adjustment of the MRI-iUS fusion are presented to evaluate its performance in terms of accuracy and workflow efficiency (i.e. reduction of the processing time without any user dependence). It is (semi-)automatically integrated in the navigation workflow to improve the quality of the initial registration within a few seconds (see Video 1, showing that a user interface application needs to be started and approved upon accuracy review). This may facilitate, for example, correction of errors due to inaccurate registration (without the need for iCT [29]) or to minimal movements of the reference array or of the head.

This pilot investigation was performed considering only iUS data acquired before dural opening, and therefore on data that reflect, within each patient, the same anatomical situation as the preoperative MRI. Future studies are required to evaluate the capability of the proposed method to compensate for the brain shift that occurs during surgical resection.

The TRE determined in this work is in line with previously described methods for fusion of MRI and US images. [22, 34,35,36] However, TRE measurements are generally prone to systematic and random (e.g. subjective) errors, such as those due to limited spatial resolution, non-rigid modifications of intracranial anatomy, or uncertainties in recognizing the same anatomical structures in both imaging modalities (MRI and iUS). Brain deformations produced by the convex probe during US scanning (with the probe gently pushed onto the dura or parenchyma to maximize the area of contact) may limit rigid fusion and result in biased TRE measurements (probably more prominent for superficial and softer structures and mostly absent for deep or rigid structures). As a consequence, the TRE cannot be equal to zero (even for the most advanced algorithm). In our case series the LBF, which is the fusion obtained using landmark coordinate information only and thus represents the lowest possible TRE, was on average 1.33 mm. Therefore, it can be argued that the landmark definition is subject to this degree of uncertainty in terms of spatial localization and that a TRE of approximately 2 mm for the RIF represents an optimal co-registration result.

Limitations

In one case (Patient 2) automatic RIF failed, and a manual definition of the ROI used for the image fusion calculation was necessary to achieve a reasonable RIF result (see Fig. 3). Such registration artifacts are a consequence of a statistically possible error of the fusion algorithm (inherent in the nature of an algorithm that relies on statistical principles to optimize similarity), which may occasionally result in iUS scans being co-registered with MRI regions in other parts of the brain (or even outside the skull). In this case, RIF resulted in a co-registration error that was evident even for an inexperienced user, so it would not have been misleading in clinical practice. An optimal RIF (Fig. 3) was achieved after manual constraint of the ROI, which is by definition initially set according to the dimensions of the acquired 3D iUS scan. Nevertheless, such artifacts reinforce the awareness that the surgeon must verify the image fusion result. The occurrence of this error will, however, be the object of specific tests to further refine the proposed method.

The present article provides preliminary results in a small cohort of patients. Future studies are deemed necessary to evaluate the clinical value of the method prospectively, in a larger group of patients and at different phases of surgery, to verify its efficacy in correcting the brain shift that occurs during resection.

Conclusions

The integration of an algorithm for automatic fusion of preoperative MRI and iUS may improve the accuracy of target registration in neuronavigation for brain tumors. Future studies will evaluate its efficacy in clinical practice in a larger cohort of patients.

Data Availability

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

References

  1. Lacroix M, Abi-Said D, Fourney DR, Gokaslan ZL, Shi W, DeMonte F, et al. A multivariate analysis of 416 patients with glioblastoma multiforme: prognosis, extent of resection, and survival. J Neurosurg. 2001;95(2):190–8.

  2. McGirt MJ, Chaichana KL, Attenello FJ, Weingart JD, Than K, Burger PC, et al. Extent of surgical resection is independently associated with survival in patients with hemispheric infiltrating low-grade gliomas. Neurosurgery. 2008;63(4):700–7; author reply 707–8.

  3. Sanai N, Polley MY, McDermott MW, Parsa AT, Berger MS. An extent of resection threshold for newly diagnosed glioblastomas. J Neurosurg. 2011;115(1):3–8.

  4. Nimsky C, Ganslandt O, Cerny S, Hastreiter P, Greiner G, Fahlbusch R. Quantification of, visualization of, and compensation for brain shift using intraoperative magnetic resonance imaging. Neurosurgery. 2000;47(5):1070–9; discussion 1079–80.

  5. Orringer DA, Golby A, Jolesz F. Neuronavigation in the surgical management of brain tumors: current and future trends. Expert Rev Med Devices. 2012;9(5):491–500.

  6. Steinmeier R, Rachinger J, Kaus M, Ganslandt O, Huk W, Fahlbusch R. Factors influencing the application accuracy of neuronavigation systems. Stereotact Funct Neurosurg. 2000;75(4):188–202.

  7. Wang MN, Song ZJ. Classification and analysis of the errors in neuronavigation. Neurosurgery. 2011;68(4):1131–43; discussion 1143.

  8. Pfisterer WK, Papadopoulos S, Drumm DA, Smith K, Preul MC. Fiducial versus nonfiducial neuronavigation registration assessment and considerations of accuracy. Neurosurgery. 2008;62(3 Suppl 1):201–7; discussion 207–8.

  9. Stieglitz LH, Fichtner J, Andres R, Schucht P, Krähenbühl AK, Raabe A, et al. The silent loss of neuronavigation accuracy: a systematic retrospective analysis of factors influencing the mismatch of frameless stereotactic systems in cranial neurosurgery. Neurosurgery. 2013;72(5):796–807.

  10. De Lorenzo D, De Momi E, Conti L, Votta E, Riva M, Fava E, et al. Intraoperative forces and moments analysis on patient head clamp during awake brain surgery. Med Biol Eng Comput. 2013;51(3):331–41.

  11. Coburger J, Scheuerle A, Kapapa T, Engelke J, Thal DR, Wirtz CR, et al. Sensitivity and specificity of linear array intraoperative ultrasound in glioblastoma surgery: a comparative study with high field intraoperative MRI and conventional sector array ultrasound. Neurosurg Rev. 2015;38(3):499–509; discussion 509.

  12. Fountain DM, Bryant A, Barone DG, Waqar M, Hart MG, Bulbeck H, et al. Intraoperative imaging technology to maximise extent of resection for glioma: a network meta-analysis. Cochrane Database Syst Rev. 2021;1:CD013630.

  13. Gerard IJ, Kersten-Oertel M, Hall JA, Sirhan D, Collins DL. Brain shift in neuronavigation of brain tumors: an updated review of intra-operative ultrasound applications. Front Oncol. 2020;10:618837.

  14. Mazzucchi E, La Rocca G, Hiepe P, Pignotti F, Galieri G, Policicchio D, et al. Intraoperative integration of multimodal imaging to improve neuronavigation: a technical note. World Neurosurg. 2022;S1878-8750(22)00773-2.

  15. Policicchio D, Doda A, Sgaramella E, Ticca S, Veneziani Santonio F, Boccaletti R. Ultrasound-guided brain surgery: echographic visibility of different pathologies and surgical applications in neurosurgical routine. Acta Neurochir. 2018;160(6):1175–85.

  16. Policicchio D, Ticca S, Dipellegrini G, Doda A, Muggianu G, Boccaletti R. Multimodal surgical management of cerebral lesions in motor-eloquent areas combining intraoperative 3D ultrasound with neurophysiological mapping. J Neurol Surg A Cent Eur Neurosurg. 2021;82(4):344–56.

  17. La Rocca G, Ius T, Mazzucchi E, Simboli GA, Altieri R, Garbossa D, et al. Trans-sulcal versus trans-parenchymal approach in supratentorial cavernomas: a multicentric experience. Clin Neurol Neurosurg. 2020;197:106180.

  18. Alshareef M, Lowe S, Park Y, Frankel B. Utility of intraoperative ultrasonography for resection of pituitary adenomas: a comparative retrospective study. Acta Neurochir (Wien). 2021;163(6):1725–34.

  19. Barak T, Vetsa S, Nadar A, Jin L, Gupte TP, Fomchenko EI, et al. Surgical strategies for older patients with glioblastoma. J Neurooncol. 2021;155(3):255–64.

  20. Shetty P, Yeole U, Singh V, Moiyadi A. Navigated ultrasound-based image guidance during resection of gliomas: practical utility in intraoperative decision-making and outcomes. Neurosurg Focus. 2021;50(1):E14.

  21. Saß B, Pojskic M, Zivkovic D, Carl B, Nimsky C, Bopp MHA. Utilizing intraoperative navigated 3D color Doppler ultrasound in glioma surgery. Front Oncol. 2021;11:656020.

  22. Wein W, Ladikos A, Fuerst B, Shah A, Sharma K, Navab N. Global registration of ultrasound to MRI using the LC2 metric for enabling neurosurgical guidance. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2013. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer; 2013. https://doi.org/10.1007/978-3-642-40811-3_5

  23. Wein W, Brunke S, Khamene A, Callstrom MR, Navab N. Automatic CT-ultrasound registration for diagnostic imaging and image-guided intervention. Med Image Anal. 2008;12(5):577–85.

  24. Xiao Y, Rivaz H, Chabanas M, Fortin M, Machado I, Ou Y, et al. Evaluation of MRI to ultrasound registration methods for brain shift correction: the CuRIOUS2018 challenge. IEEE Trans Med Imaging. 2020;39(3):777–86.

  25. Mazzucchi E, La Rocca G, Ius T, Sabatino G, Della Pepa GM. Multimodality imaging techniques to assist surgery in low-grade gliomas. World Neurosurg. 2020;133:423–5.

  26. Ius T, Mazzucchi E, Tomasino B, Pauletto G, Sabatino G, Della Pepa GM, et al. Multimodal integrated approaches in low grade glioma surgery. Sci Rep. 2021;11(1):9964.

  27. Della Pepa GM, Ius T, Menna G, La Rocca G, Battistella C, Rapisarda A, et al. «Dark corridors» in 5-ALA resection of high-grade gliomas: combining fluorescence-guided surgery and contrast-enhanced ultrasonography to better explore the surgical field. J Neurosurg Sci. 2019;63(6):688–96.

  28. Della Pepa GM, Ius T, La Rocca G, Gaudino S, Isola M, Pignotti F, et al. 5-Aminolevulinic acid and contrast-enhanced ultrasound: the combination of the two techniques to optimize the extent of resection in glioblastoma surgery. Neurosurgery. 2020;86(6):E529–40.

  29. Carl B, Bopp M, Saß B, Nimsky C. Intraoperative computed tomography as reliable navigation registration device in 200 cranial procedures. Acta Neurochir (Wien). 2018;160(9):1681–9.

  30. Carl B, Bopp M, Chehab S, Bien S, Nimsky C. Preoperative 3-dimensional angiography data and intraoperative real-time vascular data integrated in microscope-based navigation by automatic patient registration applying intraoperative computed tomography. World Neurosurg. 2018;113:e414–25.

  31. Eggers G, Kress B, Rohde S, Mühling J. Intraoperative computed tomography and automated registration for image-guided cranial surgery. Dentomaxillofac Radiol. 2009;38(1):28–33.

  32. Carl B, Bopp M, Sass B, Pojskic M, Gjorgjevski M, Voellger B, et al. Reliable navigation registration in cranial and spine surgery based on intraoperative computed tomography. Neurosurg Focus. 2019;47(6):E11.

  33. Prada F, Del Bene M, Mattei L, Lodigiani L, DeBeni S, Kolev V, et al. Preoperative magnetic resonance and intraoperative ultrasound fusion imaging for real-time neuronavigation in brain tumor surgery. Ultraschall Med. 2014;36(2):174–86.

  34. Zeineldin RA, Karar ME, Coburger J, Wirtz CR, Mathis-Ullrich F, Burgert O. Towards automated correction of brain shift using deep deformable magnetic resonance imaging-intraoperative ultrasound (MRI-iUS) registration. Curr Dir Biomed Eng. 2020;6(1):20200039.

  35. Chen SJS, Reinertsen I, Coupé P, Yan CXB, Mercier L, Del Maestro DR, et al. Validation of a hybrid Doppler ultrasound vessel-based registration algorithm for neurosurgery. Int J Comput Assist Radiol Surg. 2012;7(5):667–85.

  36. Fuerst B, Wein W, Müller M, Navab N. Automatic ultrasound–MRI registration for neurosurgery using the 2D and 3D LC2 metric. Med Image Anal. 2014;18(8):1312–9.


Acknowledgements

Not applicable.

Funding

No funding has been received for this specific research.

Author information


Contributions

Manuscript drafting: EM, PH, ML. Conception of the work: EM, PH. Data acquisition: GLR, FP, EM. Data analysis and interpretation: EM, PR, ML. Critical revision: GS, GLR.

Corresponding author

Correspondence to Edoardo Mazzucchi.

Ethics declarations

Ethics approval and consent to participate

The study was conducted in accordance with the principles of the Declaration of Helsinki. The Ethics Committee of Fondazione Policlinico Gemelli approved the study (protocol N. 13891/18, study ID 2015). All patients signed an informed consent.

Consent for publication

Not applicable.

Competing interests

Edoardo Mazzucchi and Giovanni Sabatino are consultants for Brainlab AG. Patrick Hiepe and Max Langhof are employees of Brainlab AG. The other authors have nothing to declare.


Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Video 1

Screen capture video showing the user interface application that has to be launched and approved to provide the automatic rigid image fusion.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Mazzucchi, E., Hiepe, P., Langhof, M. et al. Automatic rigid image Fusion of preoperative MR and intraoperative US acquired after craniotomy. Cancer Imaging 23, 37 (2023). https://doi.org/10.1186/s40644-023-00554-x
