- Review
- Open access
Extracting value from total-body PET/CT image data - the emerging role of artificial intelligence
Cancer Imaging volume 24, Article number: 51 (2024)
Abstract
The evolution of Positron Emission Tomography (PET), culminating in the Total-Body PET (TB-PET) system, represents a paradigm shift in medical imaging. This paper explores the transformative role of Artificial Intelligence (AI) in enhancing clinical and research applications of TB-PET imaging. Clinically, TB-PET’s superior sensitivity facilitates rapid imaging, low-dose imaging protocols, improved diagnostic capabilities and higher patient comfort. In research, TB-PET shows promise in studying systemic interactions and enhancing our understanding of human physiology and pathophysiology. In parallel, AI’s integration into PET imaging workflows—spanning from image acquisition to data analysis—marks a significant development in nuclear medicine. This review delves into the current and potential roles of AI in augmenting TB-PET/CT’s functionality and utility. We explore how AI can streamline current PET imaging processes and pioneer new applications, thereby maximising the technology’s capabilities. The discussion also addresses necessary steps and considerations for effectively integrating AI into TB-PET/CT research and clinical practice. The paper highlights AI’s role in enhancing TB-PET’s efficiency and addresses the challenges posed by TB-PET’s increased complexity. In conclusion, this exploration emphasises the need for a collaborative approach in the field of medical imaging. We advocate for shared resources and open-source initiatives as crucial steps towards harnessing the full potential of the AI/TB-PET synergy. This collaborative effort is essential for revolutionising medical imaging, ultimately leading to significant advancements in patient care and medical research.
Introduction
Positron Emission Tomography (PET) has evolved from its initial role as a specialised research tool into an indispensable element in clinical diagnostics, thereby significantly enhancing our understanding of physiological and molecular activities within the human body. The integration of PET with computed tomography (PET/CT) marked a major advancement, melding metabolic imaging with anatomical detail to improve the accuracy and comprehensiveness of diagnostic assessments [1]. This evolution has broadened PET’s applications across various clinical disciplines, notably oncology, neurology, and cardiology [2]. The introduction of total-body (TB) PET/CT or extended axial field-of-view PET/CT systems represents a further enhancement, offering increased volume sensitivity and a synchronous view of bodily processes [3,4,5]. This advancement has not only streamlined scanning efficiency but also extended the scope of PET applications into uncharted territories in both clinical practice and medical research [6,7,8,9].
The ‘value’ of TB-PET extends well beyond its technological advancements. Its true value is encapsulated in the flexibility of imaging protocols as well as in novel applications in both clinical settings and research domains. Clinically, TB-PET is notable for its enhanced sensitivity and efficiency, enabling rapid imaging and low-dose protocols, as well as facilitating delayed and same-day dual-tracer imaging [10]. These attributes have already markedly improved diagnostic capabilities and patient experiences. While still in its early stages in clinical research, TB-PET has shown promise in exploring systemic interactions across organ systems and in fostering a more holistic understanding of the human body [11, 12].
To date, Artificial Intelligence (AI) has already established a significant presence in the realm of radiology, and its impact is increasingly evident in nuclear medicine as well [13]. Over the years, AI has been successfully integrated into the entire PET imaging workflow, including aspects such as image acquisition, image reconstruction, data corrections, and data mining. In the context of TB-PET, the application of AI in augmenting TB-PET’s value is still in its nascent stages but is showing steady growth. Given that TB-PET generates dense and rich datasets, AI is expected to play a central role in transforming these data into meaningful insights. Moving beyond its previous role as a supplementary technology, AI is emerging as a fundamental component in the future of TB-PET research.
This manuscript aims to explore both the current and potential roles of AI in enhancing the functionality and utility of TB-PET. Central to this inquiry is an examination of how AI can not only streamline existing TB-PET procedures but also pioneer previously unexplored applications, fully capitalising on the technology’s advanced capabilities. This paper will also discuss the critical steps and factors necessary for the effective integration of AI into TB-PET research.
Current applications of total-body PET: enhancing efficiency with AI
The clinical community has shown ardent interest in TB-PET, largely because of its greater sensitivity compared to traditional short-axial field-of-view PET/CT systems. This enhanced volume sensitivity facilitates two key imaging options: rapid acquisitions with conventional dose injection and low-dose imaging over standard acquisition times. The first approach allows for swift, comprehensive imaging, essential for a detailed assessment of disease in a single bed position. Conversely, the latter option allows for the distribution of radiation dose over time, enabling longitudinal studies for more detailed disease observation and characterization. Furthermore, TB-PET’s heightened sensitivity also permits single-day, dual-tracer imaging [10]. This approach entails sequential scanning utilising disparate tracers, thereby substantially optimising patient throughput and scanning logistics. Additionally, the advent of dynamic imaging in a single bed position, supplemented by vendors integrating direct parametric reconstructions into TB-PET systems, provides the opportunity for more nuanced characterization of oncological cases in clinical routine [14, 15]. Yet despite these advancements, TB-PET also introduces new challenges in the domain of clinical imaging.
Revealing more, demanding greater quantification
Initial investigations using the uEXPLORER (United Imaging) with healthy subjects have demonstrated remarkable detail in PET images from extended scan durations (up to 20 min), showcasing clear delineation of vessel walls, spinal cord, and brain structures [5]. Subsequent clinical studies employing either the uEXPLORER or Siemens Quadra TB-PET/CT system have further demonstrated improvements in both image quality and lesion quantification [16,17,18,19,20]. Notably, delayed imaging techniques have been observed to enhance the contrast between lesions and their background while simultaneously reducing image noise [19, 21]. Beyond oncology, the efficacy of ultra-low-dose TB-PET in imaging cardiovascular conditions and autoimmune inflammatory diseases underlines its broad clinical utility [22, 23].
The comprehensive diagnostic capabilities offered by TB-PET, though invaluable, also come with risks of information overload for clinicians tasked with interpreting these complex scans. Traditionally, results have been derived through either visual assessment or labour-intensive manual segmentation, approaches that are increasingly inadequate given the breadth and volume of data provided by TB-PET. This is where AI-driven segmentation and detection tools become crucial, offering streamlined processing and interpretation of diverse biomarkers, from tumour loads (Fig. 1) and aortic wall uptake to systemic inflammation.
Currently, no single algorithm exists that can match the multi-label classification skills of a clinician across a variety of clinical scenarios. Nonetheless, significant progress has been made in tumour segmentation within 18F-FDG PET/CT imaging [24,25,26], driven by deep learning frameworks like nnU-Net [27] and MONAI Auto3DSeg [28], and supported by open-source datasets from initiatives such as AutoPET [29] and HECKTOR [30]. Despite these advances, the challenge of algorithmic generalisation beyond specific training datasets persists. This limitation becomes particularly pronounced in total-body PET imaging, which encompasses a diverse range of clinical findings, from various tumour types to pathologies like inflammation and infection, particularly since these may coexist in individual patients. Consequently, developing individual algorithms for each distinct aspect within this domain is impractical, considering the vast diversity of data involved.
In response to these challenges, the concept of foundational models offers a promising path forward. The success of vision models such as Meta’s ‘Segment Anything Model’ (SAM) [31] in general applications has inspired similar innovations in medical imaging. The Medical SAM (MedSAM), for instance, demonstrates the potential of these models to segment any specified area in medical imaging based on varied inputs like bounding boxes or points [32]. Interestingly, the native SAM is already capable of performing semantic segmentation on 2D PET images (Fig. 2) without any modification. A similar approach for 3D, tailored for PET imaging, could accelerate the analysis of complex TB-PET datasets. A foundational model that is agnostic to tracer or disease would allow clinicians to efficiently segment and analyse diverse data, greatly facilitating the diagnostic process. The clinical impact of such a model could be profound, potentially automating the detection of key biomarkers such as Total Lesion Glycolysis (TLG) and Metabolic Tumour Volume (MTV), and efficiently quantifying systemic inflammation. This innovation holds the promise of becoming an essential tool in routine clinical practice, enabling more effective and efficient data mining from TB-PET imaging studies.
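To make this concrete, the snippet below sketches how the native SAM can be prompted with a bounding box on a single 2D PET slice, and how slice-level surrogates of MTV and TLG could then be read off the predicted mask. This is a minimal illustration rather than a validated pipeline: the checkpoint path, input file, SUV windowing, prompt box, and voxel volume are all placeholders, and a clinical measurement would aggregate the mask over the full 3D volume.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pre-trained SAM backbone (checkpoint path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

def segment_lesion(suv_slice: np.ndarray, box_xyxy: np.ndarray) -> np.ndarray:
    """Prompt SAM with a bounding box on a single 2D PET (SUV) slice."""
    # SAM expects an 8-bit RGB image: window the SUVs (here 0-10)
    # and replicate the single channel three times.
    img = np.clip(suv_slice / 10.0, 0.0, 1.0)
    rgb = np.stack([(img * 255).astype(np.uint8)] * 3, axis=-1)
    predictor.set_image(rgb)
    masks, _, _ = predictor.predict(box=box_xyxy, multimask_output=False)
    return masks[0]  # boolean lesion mask for this slice

suv_slice = np.load("suv_slice.npy")  # illustrative input slice
mask = segment_lesion(suv_slice, np.array([120, 80, 160, 130]))

# Slice-level surrogates of MTV and TLG; a real measurement would sum
# the segmented voxels over every slice of the 3D volume.
voxel_vol_ml = 0.0214  # illustrative voxel volume (mL)
mtv = mask.sum() * voxel_vol_ml
tlg = mtv * suv_slice[mask].mean()
print(f"MTV (slice): {mtv:.2f} mL, TLG (slice): {tlg:.2f}")
```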
In the radiation drama of total-body PET/CT: CT plays the lead
The substantial sensitivity increase in total-body PET/CT imaging has led to the advent of ultra-low-dose PET techniques, using as little as 1/20th of the standard dose while maintaining clinical image quality [33]. This advancement has broadened the scope for dose-optimised longitudinal imaging, with applications spanning various clinical areas [21, 34]. These include the development of new radiopharmaceuticals, monitoring of treatment responses, early detection of malignancy-related vascular complications, immune response imaging in infectious diseases, and paediatric imaging [21, 34,35,36,37,38]. The core concept involves dividing the total radiation dose across multiple scans to avoid additional radiation exposure. However, it is important to note that in TB-PET/CT imaging, the primary source of radiation exposure is often not the PET component but rather the CT component. This aspect becomes particularly relevant in dual-time-point and dual-tracer studies, where patients undergo two CT scans [39, 40]. While CT is indispensable in providing essential anatomical details and enabling attenuation correction in clinical PET/CT studies, in the context of longitudinal imaging, either reducing the CT dose or omitting repeated CT scans could be beneficial. Such an approach aligns with the ‘As Low As Reasonably Achievable’ (ALARA) principle [41], supporting radiation safety initiatives like the ‘Image Gently’ campaign [42], which advocates for minimising radiation exposure, particularly in paediatric imaging. Researchers have already used AI in tackling this challenge, particularly in the context of attenuation correction. Sari et al. developed a deep learning-based method to create attenuation maps for PET scans without needing CT scans for correction [43]. Specifically, a convolutional neural network (CNN) was used to enhance initial µ-maps generated using a joint activity and attenuation reconstruction algorithm, showing promising results in enabling CT-free attenuation and scatter correction. This approach could be particularly useful in longitudinal imaging studies, where reducing or omitting CT scans can significantly lower patient radiation exposure while maintaining imaging quality. Likewise, Guo et al. [44] addressed a key challenge in CT-free PET imaging using deep learning (DL): the heterogeneity of tracers and scanners. They simplified this complex issue through domain decomposition, separating the learning process into low-frequency, anatomy-dependent attenuation correction while preserving high-frequency, anatomy-independent textures. This approach, trained with just one tracer on one scanner, showed robustness and effectiveness across various tracers and scanners, enhancing the potential for clinical translation of DL methods in PET imaging.
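To illustrate the flavour of such deep learning-based attenuation correction, the sketch below shows a toy residual CNN that refines an initial µ-map (e.g., from a joint activity/attenuation reconstruction) towards a CT-derived reference. It is deliberately far simpler than the architectures used in the cited studies [43, 44]; the tensor shapes and single training step are purely illustrative.

```python
import torch
import torch.nn as nn

class MuMapRefiner(nn.Module):
    """Toy 3D residual CNN that refines an initial (e.g., MLAA-derived) µ-map.
    A stand-in for the far deeper architectures used in the cited studies."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, mu_init: torch.Tensor) -> torch.Tensor:
        # Predict a residual correction to the initial µ-map.
        return mu_init + self.net(mu_init)

model = MuMapRefiner()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# One illustrative training step: mu_init stands in for the
# joint-reconstruction µ-map, mu_ct for the CT-derived reference
# (batch x channel x D x H x W; random tensors used here).
mu_init = torch.rand(1, 1, 32, 64, 64)
mu_ct = torch.rand(1, 1, 32, 64, 64)
optimiser.zero_grad()
loss = loss_fn(model(mu_init), mu_ct)
loss.backward()
optimiser.step()
```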
In another study by Hu et al., [45] an ultra-low-dose CT (ULDCT) reconstructed with an artificial intelligence iterative reconstruction algorithm (AIIR) was evaluated for use in 18F-FDG total-body PET/CT examinations. The study, including both phantom and clinical components, explored the feasibility of ULDCT (10 mAs) reconstructed with AIIR in comparison to standard-dose CT (SDCT) (120 mAs) using hybrid iterative reconstruction (HIR). The results indicated that while ULDCT-AIIR did not completely match the image quality of SDCT-HIR, it significantly reduced image noise and improved the signal-to-noise ratio (SNR), suggesting its potential application under specific circumstances in PET/CT examinations.
These advancements in AI for PET imaging not only enhance attenuation correction but also significantly increase the value of total-body PET by facilitating low-dose longitudinal imaging. This progression marks a pivotal step in maximising the clinical utility of PET imaging, offering more frequent and safer imaging options for patient monitoring and disease progression assessment, in line with minimising radiation exposure.
Advancing disease characterization amidst growing data complexities
The utilisation of dual-tracer PET/CT imaging with 18F-FDG and 68Ga-PSMA has been instrumental in enhancing our understanding of tumour biology, specifically in terms of aggressiveness and differentiation. This approach, which combines 18F-FDG and 68Ga-PSMA tracers, has been implemented in preliminary studies using conventional PET/CT systems [39]. These studies have primarily focused on patient prognostic stratification. However, the integration of this dual tracer method into clinical routine has been limited. The primary challenges include increased radiation exposure and logistical complexities, such as organising scans on two separate days.
Recent advancements in TB-PET/CT have shown promising developments in addressing these challenges. Clinically viable protocols have been developed that allow for the sequential imaging of 68Ga-PSMA and 18F-FDG on the same day [40]. These protocols typically involve administering a standard dose of 68Ga-PSMA, followed by a low-dose 18F-FDG scan. Additionally, TB-PET/CT has been explored for dual-tracer PET/CT scans using 18F-FDG and FAPI tracers, offering insights into the tumour-associated microenvironment [10].
However, it is important to distinguish these practices from multiplexed PET imaging. Multiplexed PET imaging involves administering a mixture of tracers to the patient and employing advanced reconstruction techniques to isolate individual signals. This method offers two significant advantages: firstly, it facilitates a single imaging session without the need for a second CT or subsequent scan. Secondly, it enables voxelwise alignment, providing true spatial multiplexing. This capability is crucial for understanding the spatial heterogeneity of tumours, as the multiplexed image can simultaneously highlight various attributes of the tumour under investigation.
The advent of reconstruction-based multiplexing in PET/CT imaging represents a significant advancement in the field, offering a sophisticated approach to capturing complex biological processes in a single imaging session [46]. While this technique holds great promise, its implementation in clinical practice is not yet widespread, primarily due to its complexity. However, an equally effective alternative can be achieved through the precise spatial alignment of dual-tracer PET/CT images, a method that can be readily applied in current clinical settings with the aid of artificial intelligence (AI).
The spatial alignment of two distinct tracers in PET/CT imaging presents a notable challenge, as these tracers often exhibit varying activity distributions. A promising solution to this problem is aligning the corresponding CT images first and then transferring the derived motion fields to their PET counterparts. This technique, especially relevant in sequential dual-tracer scans performed on the same day, could effectively mimic the outcomes of reconstruction-based multiplexing, thus offering a ‘pseudo-multiplexing’ effect (Fig. 3).
Nevertheless, aligning the CT images is a complex task. Conventional diffeomorphic algorithms, despite their capability to handle large deformations, may not provide the necessary precision. Research has shown that augmenting these algorithms with dense segmentation maps can greatly enhance the accuracy of the motion fields, leading to more accurate alignment [47]. In this context, the use of advanced open-source CT organ segmentation tools such as MOOSE [48] and TotalSegmentator [49], which are based on the robust nnU-Net [27] AI framework, becomes crucial. These tools facilitate detailed whole-body segmentations, which, when integrated into the registration process, significantly improve alignment accuracy.
For the registration process, one can choose between classical diffeomorphic algorithms [50, 51], known for their effectiveness in computational neuroanatomy, and contemporary learning-based algorithms like VoxelMorph [47]. Learning-based algorithms offer a substantial advantage in terms of computational speed, as they eliminate the need for per-image optimisation during inference, unlike classical diffeomorphic algorithms, which are more computationally demanding. By leveraging AI, particularly in the alignment of dual-tracer TB-PET/CT images, we can approach the intricacies of tumour heterogeneity with a level of precision and detail akin to that achieved in multiplexing techniques used in immunohistochemistry.
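As a minimal sketch of the CT-guided alignment described above, the ANTsPy snippet below registers the two same-day CT volumes with the SyN diffeomorphic algorithm [50] and then reuses the resulting deformation to warp the second tracer’s PET into the first tracer’s frame. The file names are placeholders, and a production pipeline would additionally exploit dense segmentation maps (e.g., from MOOSE or TotalSegmentator) to constrain the registration, as discussed above.

```python
import ants

# CT volumes from the two same-day tracer acquisitions (paths illustrative).
ct_fixed = ants.image_read("ct_psma.nii.gz")
ct_moving = ants.image_read("ct_fdg.nii.gz")
pet_moving = ants.image_read("pet_fdg.nii.gz")

# Diffeomorphic (SyN) registration on the anatomically stable CT images.
reg = ants.registration(fixed=ct_fixed, moving=ct_moving,
                        type_of_transform="SyN")

# Reuse the CT-derived deformation to bring the second tracer's PET into
# the first tracer's frame (the 'pseudo-multiplexing' alignment step).
pet_aligned = ants.apply_transforms(fixed=ct_fixed, moving=pet_moving,
                                    transformlist=reg["fwdtransforms"],
                                    interpolator="linear")
ants.image_write(pet_aligned, "pet_fdg_in_psma_space.nii.gz")
```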
Another emerging area of interest in TB-PET imaging is dynamic imaging, adding a temporal domain to the rich 3-D information intrinsic to PET. Recent research indicates that by analysing the raw time-activity curves (TACs) of tumour regions, it is possible to assess the spatial heterogeneity within tumours [20, 52]. At the same time, kinetic modelling has garnered considerable attention as well. Research groups are exploring its applicability in oncology, particularly in ways to abbreviate the scan duration required for kinetic analysis [53, 54]. The objective is to extract kinetic parameters that provide a more nuanced understanding of the tumour under investigation. Nonetheless, these dynamic PET imaging techniques present several challenges. Characterising TACs requires precise tumour segmentation, and kinetic modelling depends on segmenting specific regions to determine the image-derived input function (IDIF). Both tasks are labour-intensive and demand high precision. In this context, whole-body AI-based organ segmentation tools like MOOSE [48] and TotalSegmentator [49] prove invaluable. They facilitate the segmentation process for IDIF extraction, as both cover the major input-function regions, thereby streamlining kinetic modelling (Fig. 4). Employing a foundational AI model for tumour segmentation, as previously discussed, can significantly ease the extraction and analysis of tumour TACs. Integrating AI into TB-PET imaging workflows is essential to fully leverage dynamic imaging’s potential. Automating these processes reduces the manual and cognitive burden on clinicians, allowing them to concentrate more on interpretation and clinical decision-making.
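The following sketch illustrates the basic mechanics of IDIF extraction and Patlak graphical analysis once an AI-derived aorta mask and a tumour mask are available. The file names, frame timing, and the choice of the linear-phase start (t*) are all illustrative and tracer-dependent; Patlak analysis assumes irreversible uptake, as for 18F-FDG.

```python
import numpy as np

# dyn_pet: 4D dynamic PET (frames x Z x Y x X); masks exported from an
# AI segmentation tool such as MOOSE/TotalSegmentator. File names are
# illustrative.
dyn_pet = np.load("dyn_pet.npy")
aorta_mask = np.load("aorta_mask.npy").astype(bool)
tumour_mask = np.load("tumour_mask.npy").astype(bool)
t = np.load("frame_midtimes_min.npy")  # frame mid-times in minutes

# Image-derived input function (IDIF): mean aortic activity per frame.
idif = dyn_pet[:, aorta_mask].mean(axis=1)

def patlak_ki(tissue_tac, cp, t, t_star_idx):
    """Patlak graphical analysis: over the linear phase (frames from
    t_star_idx onwards), the slope of C_t(t)/Cp(t) against
    (integral of Cp from 0 to t)/Cp(t) estimates the net influx rate Ki."""
    x = np.array([np.trapz(cp[:i + 1], t[:i + 1]) for i in range(len(t))]) / cp
    y = tissue_tac / cp
    slope, _ = np.polyfit(x[t_star_idx:], y[t_star_idx:], 1)
    return slope

tumour_tac = dyn_pet[:, tumour_mask].mean(axis=1)
print("Patlak Ki:", patlak_ki(tumour_tac, idif, t, t_star_idx=10))
```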
Novel applications from total-body PET: AI - a key enabler in generating value
Comprehensive health assessment with total-body PET: a unified diagnostic approach
The capability of TB-PET/CT to simultaneously image the entire human body, combined with its high spatial and temporal resolution, presents certain unique opportunities. Recent research has demonstrated the potential of conducting sub-second image reconstructions with TB-PET/CT, closely mirroring the temporal resolution achieved by functional Magnetic Resonance Imaging (fMRI) [55]. These advanced capabilities in TB-PET/CT could herald a paradigm shift from traditional imaging (often colloquially referred to as “lumpology” [56]) to a renewed focus on PET’s fundamental strengths in assessing physiological and pathophysiological functions and processes. This capacity for high-temporal dynamic imaging across all organs promises to deliver a wealth of clinically relevant data, surpassing mere identification of pathologies and encompassing a comprehensive understanding of bodily functions. Measurements including first-pass cardiac ejection fractions, as well as pulmonary and renal perfusion assessments, may be derived through the analysis of finely sampled PET frames followed by voxel-level data evaluation, thereby providing an extensive assessment of health [37, 57]. In such studies, volume-of-interest definition and motion correction will be crucial post-processing steps, essential for the generation of data that is both quantitative and useful. Furthermore, as previously discussed, the availability of various AI-based organ segmentation algorithms could prove indispensable in facilitating such research endeavours. Addressing motion correction in total-body PET presents a complex challenge, given the multifaceted nature of motion encountered in such settings. This includes gross body motion, respiratory and cardiac movements, as well as abdominal motion, with the motion profile varying from rigid structures like the brain to more deformable ones like the gut and bladder. Developing a motion compensation tool that effectively manages this range of motion profiles across various tracers poses significant difficulty.
Recent research has explored the application of diffeomorphic registration for total-body motion correction. For instance, Sun et al. [58] utilised Symmetric Normalisation [50] for whole-body motion correction in 18F-FDG PET/CT scans. In a similar vein, we introduced FALCON [59], a diffeomorphic algorithm optimised for speed and applied across various tracers to correct for total-body motion, albeit compromising the symmetric property of the algorithm for enhanced computational efficiency [51]. Notably, both these algorithms demonstrate limitations in correcting early frames (less than 2 min post-injection), where tracer dynamics undergo rapid changes critical for clinical perfusion parameters. The primary challenge here lies in the disparity of image content in these early frames, attributable to the swiftly changing tracer kinetics, which complicates the task of any correction algorithm.
To address this specific issue, the use of conditional Generative Adversarial Networks (GANs) has been proposed and effectively implemented in both brain [60] and total-body studies [61]. The objective of these networks is to create synthetic images resembling those of later frames from the early imaging data. However, a hurdle in this approach is the limited generalizability across different tracers, necessitating specific training for each type of tracer used.
With the emergence of generative AI models, such as diffusion models [62], there is potential to develop a more universal model capable of generalising across multiple tracers. Such a model could theoretically create a pseudo-late-frame image from early-frame data or transform all images into an intermediate synthetic form to facilitate motion correction, potentially overcoming the current limitations in early frame motion correction.
Total-body PET with AI: a window into understanding normal physiology and health
Originally, PET imaging predominantly served as a tool for exploring physiological processes prior to its evolution into a clinical diagnostic instrument [63]. Concerns regarding radiation exposure have steered the medical community towards alternative modalities, notably MRI. However, the advent of TB-PET, coupled with advancements in minimising CT radiation exposure, has paved the way for ultra-low dose imaging. This innovation holds the promise of safely extending PET imaging applications to healthy populations, thereby broadening its utility in understanding normal physiology and non-malignant disease processes.
Comprehending normal physiology is paramount for the accurate interpretation of disease-related anomalies. Within the field of oncology, PET imaging has predominantly concentrated on tumours and their immediate surroundings. Nevertheless, the wider scientific consensus views cancer as a systemic condition, thus underscoring the need to extend focus beyond just the tumour’s locale. Observing the macroenvironment, particularly organ systems not directly compromised by tumour invasion, is crucial for a holistic understanding of the systemic and toxic effects of cancer and its therapies [64, 65]. This approach is not only pertinent in oncology but may also hold significant relevance in elucidating musculoskeletal disorders and metabolic diseases, where systemic factors play a key role [66, 67].
The creation of a ‘normative database’ derived from healthy individuals is instrumental in facilitating the rapid systemic analysis of pathological cases. The notion of a normative database is well-established in medicine, providing clinicians with a benchmark of ‘normalcy’ for various parameters. This concept has been extensively applied in the realm of neuroimaging, where it has become a cornerstone in the identification of pathological conditions [68,69,70,71]. Extending this approach to total-body PET would allow for a similar utility in detecting systemic anomalies, offering a comprehensive reference point for distinguishing between normal and abnormal physiological states across the entire body.
Initial research in the realm of whole-body MRI, particularly under the scope of Imiomics [72], has laid the groundwork for establishing a proof-of-concept normative database. This database focused on quantifying average distributions of adipose and lean tissue within an asymptomatic population. Participants for this study were randomly selected from the general population, which meant that not all individuals were in perfect health. In this sample, 2% had diagnosed diabetes, 8% were known to have hypertension, and 4% were undergoing statin therapy. However, none of the participants suffered from severe diseases, such as cancer, myocardial infarction, stroke, heart failure, or chronic obstructive lung disease. Though not representative of a completely healthy cohort, this initial effort has laid the groundwork for developing a comprehensive total-body normative database, a crucial step in expanding the potential of PET imaging in systemic health assessment.
Aligning total-body PET images across individuals presents a significant challenge, particularly when compared to MRI. This difficulty arises from PET’s relatively lower resolution and variable tracer uptake characteristics. Nevertheless, it is feasible to utilise the accompanying CT images to facilitate alignment, subsequently transferring the deformable fields to their PET counterparts. In the process of constructing a normative database, the deformable alignment of healthy control images is a key step in creating a standard atlas of healthy individuals.
During this alignment process, two elements are of paramount importance: firstly, the alignment between subjects, and secondly, the segmentation that supports and enhances this alignment (Fig. 5). Recent advancements in AI, as discussed in the context of multiplexing, can greatly expedite this process. Tools producing dense segmentation maps, along with learning-based diffeomorphic methods like VoxelMorph [47], have the potential to significantly streamline the creation of normative databases. However, it is crucial to consider various confounding factors, such as age, body mass index (BMI), and gender, when developing these databases. Careful accounting for these variables is essential to ensure that the normative database accurately reflects the diversity and range of the healthy population [73]. This careful consideration is vital for the database to be a reliable and representative tool in clinical and research settings.
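Once healthy-control scans have been warped into a common template space, the normative comparison itself is conceptually simple, as the sketch below shows: voxelwise mean and standard deviation maps define ‘normalcy’, and a patient scan is expressed as a z-score deviation map. In practice, the statistics would be stratified by the confounders mentioned above (age, BMI, sex); the file names here are illustrative.

```python
import numpy as np

# Healthy-control SUV volumes already warped into a common CT-defined
# template space (e.g., via the registration sketched earlier), stacked
# as (subjects x Z x Y x X). File names are illustrative.
suv_aligned = np.load("healthy_suv_in_template.npy")

# Voxelwise normative statistics; in practice these would be computed
# per stratum (age band, BMI class, sex).
mu = suv_aligned.mean(axis=0)
sigma = suv_aligned.std(axis=0) + 1e-6  # guard against division by zero

# A new patient scan, warped into the same template space.
patient = np.load("patient_suv_in_template.npy")
z_map = (patient - mu) / sigma  # |z| >> 2 flags candidate systemic anomalies
np.save("patient_zmap.npy", z_map)
```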
The creation of a normative database via TB-PET not only paves the way for high-throughput screening in at-risk populations, such as those at risk of lung cancer (Fig. 6) or breast cancer, but also presents the opportunity to explore comprehensive assessments of physiological health and ageing effects throughout the body. Notably, a crucial milestone in this endeavour is the reduction of the effective radiation dose to patients to levels below 1 mSv per scan. While 18F-FDG remains the tracer of choice for many clinical applications, generating similar normative databases for additional tracers that are now routinely used in clinical practice, including PSMA and somatostatin receptor ligands, and emerging tracers, such as FAPI agents, will also be beneficial.
Making sense of systemic information provided by total-body PET: AI
In previous sections, we have established that TB-PET/CT generates a comprehensive array of multi-dimensional systemic data. The extraction of meaningful insights from such data necessitates the adoption of robust analytical techniques, among which AI stands out as particularly suited for this task. Recent research initiatives have focused on delving into this multidimensional data to understand systemic effects across both healthy and pathological cohorts. These studies primarily utilise classical correlation analysis methods, which involve extracting organ-specific Standardised Uptake Values (SUVs) and generating correlation heatmaps within the cohorts under study. The fundamental aim is to identify variations in the resulting correlation maps [8, 11, 74, 75].
A notable advancement in this field was introduced by Sun et al. [11], who proposed a novel methodology centred on the identification of individual deviations from normative patterns. This is achieved through a perturbation-based approach, where the baseline healthy correlation network is disrupted by integrating pathological cases, thereby facilitating the detection of individual anomalies. However, it is crucial to recognise that these studies typically involve relatively small sample sizes. Moreover, it is imperative to understand that these are correlation-focused studies that do not inherently imply causality.
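As an illustration of the correlation-based analyses described above, the snippet below builds an inter-organ SUV correlation heatmap from a cohort table of organ-wise SUV means. The CSV layout is hypothetical (one row per subject, one column per organ, e.g., exported from an organ segmentation tool).

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# One row per subject, one column per organ-wise SUVmean (layout hypothetical).
df = pd.read_csv("organ_suv_cohort.csv", index_col="subject_id")

corr = df.corr(method="spearman")  # inter-organ correlation network
sns.heatmap(corr, cmap="vlag", center=0, square=True)
plt.title("Inter-organ SUV correlations")
plt.tight_layout()
plt.savefig("organ_correlation_heatmap.png", dpi=300)
```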
In the context of analysing comprehensive datasets derived from TB-PET/CT scans, a multitude of methodological approaches are available to researchers. Key among these is the utilisation of robust computational frameworks such as scikit-learn [76], which facilitate the compilation of an extensive array of parameters from total-body datasets. These parameters include SUVs, kinetic parameters, and additional clinical data, such as volumetric measurements obtained from CT scans. Subsequent to parameter extraction, various machine learning algorithms can be employed to effectively differentiate between distinct groups, thus framing this analysis as a classification problem.
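A minimal scikit-learn sketch of this classification framing follows; the feature table and the binary ‘group’ label are illustrative stand-ins for organ SUVs, kinetic parameters, and CT-derived volumes.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical tabulated total-body features: organ SUVs, kinetic
# parameters, CT-derived volumes, plus a binary 'group' label
# (e.g., patient vs. control).
df = pd.read_csv("tb_pet_features.csv")
X, y = df.drop(columns="group"), df["group"]

clf = make_pipeline(
    StandardScaler(),
    RandomForestClassifier(n_estimators=300, random_state=0),
)
# Cross-validation guards against overfitting; keep the sample-to-feature
# ratio in mind (a 10:1 minimum is a common rule of thumb).
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold CV AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```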
Alongside these conventional methodologies, the emergence of Automated Machine Learning (AutoML) represents a significant advancement in the field of medical image analysis. AutoML particularly enhances the automatic analysis of tabulated data from TB-PET scans. By automating critical tasks like model selection, hyperparameter tuning, and validation, AutoML renders advanced analytical techniques more accessible and efficient. Prominent frameworks in this domain include Google’s AutoML [77], H2O AutoML [78], and TPOT (Tree-based Pipeline Optimization Tool) [79]. Google’s AutoML is notable for its user-friendly interface and powerful algorithms that adeptly handle complex data structures, making them suitable for researchers with varying levels of programming expertise. H2O AutoML is acclaimed for its efficiency in rapidly producing high-quality models. Conversely, TPOT leverages a genetic programming approach to optimise machine learning pipelines, ensuring optimal model adaptation for specific datasets.
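By way of example, the TPOT sketch below runs a small genetic-programming search over preprocessing and model pipelines on the same hypothetical feature table; the search budget (generations, population size) is kept deliberately tiny for illustration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

# Same hypothetical feature table as in the scikit-learn sketch.
df = pd.read_csv("tb_pet_features.csv")
X, y = df.drop(columns="group"), df["group"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Genetic-programming search over preprocessing + model pipelines.
tpot = TPOTClassifier(generations=5, population_size=20, cv=5,
                      random_state=0, verbosity=2)
tpot.fit(X_tr, y_tr)
print("Held-out accuracy:", tpot.score(X_te, y_te))
tpot.export("best_tb_pet_pipeline.py")  # emit the winning pipeline as code
```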
The incorporation of these AutoML frameworks into the analysis of total-body PET data substantially streamlines the identification of relevant features and patterns. By automating the more labour-intensive aspects of model building, researchers can devote greater attention to interpreting results and extracting clinically relevant insights. Additionally, the iterative model refinement and adaptability to new data inherent in AutoML ensure that analyses keep pace with evolving medical datasets.
To further enhance the transparency and interpretability of these algorithms, the application of explainable AI methods is advantageous. Techniques such as SHAP [80] (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) [81] elucidate how individual features contribute to specific algorithmic decisions. This clarity is instrumental in elevating the interpretability of the results.
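A brief SHAP sketch on the same hypothetical feature table shows how such attributions can be obtained for a tree-based classifier; note that the exact return type of shap_values differs slightly between SHAP versions.

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Same hypothetical feature table as above.
df = pd.read_csv("tb_pet_features.csv")
X, y = df.drop(columns="group"), df["group"]
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to individual organ/kinetic
# features via Shapley values.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive the classifier across the cohort.
shap.summary_plot(shap_values, X)
```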
However, when employing these machine learning techniques, it is crucial to exercise caution to circumvent issues like overfitting and underfitting. A commonly overlooked yet critical aspect is the sample-to-feature ratio [82, 83]. Maintaining a minimum ratio of 10:1 is widely recommended, serving as a reasonable benchmark to ensure the robustness and reliability of the model’s performance.
The recent advancements in deep learning open promising avenues for mining TB-PET datasets, especially through the creation of embeddings [84]. Utilising deep learning architectures like convolutional neural networks (CNNs) [85] or Vision Transformers [86], three-dimensional PET images can be transformed into high-dimensional vector embeddings. These embeddings have the potential to concisely capture the comprehensive physiological and metabolic profiles of patients, offering a distilled yet information-rich representation of the original dataset.
The role of vector databases [87] in this context is crucial and deserves emphasis. Traditional relational databases are not optimised for handling the high-dimensional data typical of deep learning outputs. Vector databases, on the other hand, are specifically designed to store, index, and retrieve high-dimensional vectors efficiently. This makes them uniquely suited for dealing with the kind of complex, feature-rich data produced by deep learning models applied to TB-PET datasets. Their ability to perform similarity searches and clustering at scale adds significant value, allowing researchers to quickly and accurately group patients into meaningful categories, such as responders and non-responders to treatments like radioligand therapy and immunotherapy.
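The snippet below sketches this pattern with FAISS, a widely used open-source similarity-search library: precomputed patient embeddings (loaded here from a hypothetical file) are indexed and queried for nearest physiological neighbours.

```python
import numpy as np
import faiss

# Patient embeddings from a 3D CNN/ViT encoder applied to TB-PET volumes,
# shape (n_patients, d); the file name is hypothetical.
emb = np.load("patient_embeddings.npy").astype("float32")
d = emb.shape[1]

index = faiss.IndexFlatL2(d)  # exact L2 nearest-neighbour index
index.add(emb)

# Retrieve the five most similar physiological profiles to a query scan,
# e.g. to compare treatment courses among metabolic 'nearest neighbours'.
query = emb[0:1]
dists, ids = index.search(query, 5)
print("Nearest patients:", ids[0], "distances:", dists[0])
```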
Incorporating vector databases into this process facilitates the handling and analysis of these complex embeddings, enhancing the potential of deep learning techniques to discern subtle patterns and correlations within the data. This synergy between deep learning and vector databases can significantly augment the precision and effectiveness of treatments, leading to more personalised therapeutic strategies.
Charting the future of total-body PET with AI - a call for collaborative innovation
As one reflects on the advancements in TB-PET and its clinical applications, it becomes evident that AI stands at the forefront of enhancing this field. TB-PET’s increased sensitivity and comprehensive diagnostic capabilities, though invaluable, introduce the challenge of managing and interpreting vast amounts of complex data. Here, AI emerges not just as a tool but as a pivotal catalyst in transforming TB-PET from a diagnostic modality to a comprehensive solution for personalised medicine.
The clinical community’s growing interest in TB-PET is primarily driven by its capability for rapid and low-dose imaging. This advancement, however, brings to the fore the need for sophisticated analytical methods capable of managing the resultant data deluge. In this regard, AI-driven tools for segmentation and detection are becoming increasingly crucial. These tools not only streamline the processing of complex datasets but also enable the nuanced interpretation of diverse biomarkers, thus enhancing the overall utility of TB-PET.
Yet, as we advance in integrating AI into TB-PET, challenges persist, notably in ensuring the broad applicability of algorithms across varied clinical scenarios. The development of foundational models, inspired by their success in general vision tasks, is a promising avenue for overcoming these challenges. Such models, adept at segmenting any specified area within medical imaging datasets, hold the potential to revolutionise TB-PET analysis by automating essential processes and improving diagnostic precision. However, the realisation of these foundational models, like MedSAM in radiology, is contingent on the availability of large-scale, diverse datasets and considerable computational resources. The PET imaging field currently faces a gap in available data volumes, with significant initiatives like AutoPET [29] and HECKTOR [88] providing only a limited number of images. This situation underscores an urgent need within the PET community for a collective effort in data pooling. The prevailing concerns about data protection hindering the sharing of PET images must be re-examined. Given that high-resolution modalities such as CT have been successfully open-sourced, PET imaging should also venture down this path. It is imperative for the community not only to advocate for, but also to actively pursue, the open-source availability of PET data, subject to the patient privacy regulations that operate in certain jurisdictions.
The creation of a comprehensive normative database from TB-PET scans further exemplifies the need for extensive data pooling. Given the vast variability in human physiology, constructing such a database requires data from diverse and large population samples, something that single sites cannot achieve alone. A normative database, crucial for distinguishing between normal and pathological states, would benefit immensely from a collaborative approach to data collection and sharing. Emulating the open-source successes of radiology could significantly accelerate advancements in TB-PET analysis, paving the way for more personalised and effective patient care.
Building on the momentum of integrating AI into TB-PET and addressing the challenges of data availability and algorithmic applicability, it is essential to also consider the role of AI in enhancing the safety and efficiency of PET/CT imaging. This is particularly pertinent in scenarios like low-dose longitudinal studies, paediatric imaging, or screening, where optimising the CT component of PET/CT imaging becomes crucial. While TB-PET’s increased sensitivity enables inherently low-dose imaging, the radiation dose primarily stems from the CT component, necessitating careful consideration in repeated imaging scenarios. Advancements in AI offer potential solutions to reduce, or in some cases eliminate, the need for CT scans, even though CT itself already has a role in some screening approaches and is likely to provide complementary diagnostic information. Consequently, this approach requires a balanced perspective. CT scans provide essential anatomical details vital for various TB-PET data mining applications, including organ segmentation, multiplexing, and creating normative databases. These tasks depend heavily on CT, as working directly on PET data is challenging owing to the image variability introduced by different tracers. It is crucial to reduce the dose while preserving the critical diagnostic and analytical value that CT imaging brings to TB-PET.
In advancing TB-PET data-mining, the role of Automated Machine Learning (AutoML) is pivotal. AutoML streamlines the process of applying ML algorithms, making it more accessible and efficient. It automates crucial tasks like model selection, hyperparameter tuning, and validation, which are often barriers to effective data analysis in medical imaging. This automation is particularly beneficial in TB-PET, where the data’s multidimensionality can be overwhelming and nuanced. With AutoML and explainable AI paradigms, researchers and clinicians can more readily analyse and interpret complex datasets. Importantly, the progress and acceleration of AI and ML in TB-PET should take cues from the broader AI community, especially regarding the open-source movement. The rapid advancement in AI fields is partly attributed to the community’s commitment to open-sourcing and collaborative development, avoiding the pitfalls of redundant efforts. A prime example is nnU-Net, an open-source framework that has standardised neural network applications in medical imaging. Before nnU-Net, numerous variations of U-Net architectures proliferated, but its introduction streamlined the development process, demonstrating that open-source collaboration can lead to more efficient and effective solutions.
Building on our previous discussions, it is evident there is a pressing need for a unified community initiative to consolidate resources, software, and data in TB-PET and AI. Currently, these elements are fragmented, impeding the pace of progress. Platforms like enhance.pet (https://enhance.pet) serve as a promising model, offering a centralised web hub for data, software, and educational resources. Similarly, the National PET Imaging Platform (NPIP, https://npip.org.uk/) represents another step in the right direction, aiming to create a cohesive framework for advancing PET imaging through shared resources and collective expertise.
Conclusion
In conclusion, this manuscript has comprehensively explored the transformative role of AI in elevating the capabilities of TB-PET/CT imaging. As we have elucidated, the integration of AI not only augments the efficiency of TB-PET but also unlocks novel applications in both clinical and research settings. However, the journey towards fully realising the potential of AI in TB-PET is not just a technological challenge but a collaborative endeavour. It calls for the dismantling of data silos, the creation of open-source tools, and the establishment of platforms for knowledge and resource exchange.
Data availability
Not applicable.
References
Beyer T, Townsend DW, Brun T, Kinahan PE, Charron M, Roddy R, et al. A combined PET/CT scanner for clinical oncology. J Nucl Med off Publ Soc Nucl Med. 2000;41:1369–79.
Vaquero JJ, Kinahan P. Positron Emission Tomography: current challenges and opportunities for Technological advances in clinical and preclinical Imaging systems. Annu Rev Biomed Eng. 2015;17:385–414.
Prenosil GA, Sari H, Fürstner M, Afshar-Oromieh A, Shi K, Rominger A et al. Performance Characteristics of the Biograph Vision Quadra PET/CT system with long axial field of view using the NEMA NU 2-2018 Standard. J Nucl Med. 2021;jnumed.121.261972.
Spencer BA, Berg E, Schmall JP, Omidvari N, Leung EK, Abdelhafez YG, et al. Performance evaluation of the uEXPLORER Total-Body PET/CT scanner based on NEMA NU 2-2018 with additional tests to characterize PET scanners with a long Axial Field of View. J Nucl Med. 2021;62:861–70.
Badawi RD, Shi H, Hu P, Chen S, Xu T, Price PM, et al. First Human Imaging Studies with the EXPLORER Total-Body PET Scanner*. J Nucl Med. 2019;60:299–303.
Cherry SR, Jones T, Karp JS, Qi J, Moses WW, Badawi RD. Total-body PET: maximizing sensitivity to create new opportunities for clinical research and patient care. J Nucl Med. 2018;59:3–12.
Cherry SR, Badawi RD, Karp JS, Moses WW, Price P, Jones T. Total-body imaging: transforming the role of positron emission tomography. Sci Transl Med. 2017;9:eaaf6169.
Knuuti J, Tuisku J, Kärpijoki H, Iida H, Maaniitty T, Latva-Rasku A, et al. Quantitative perfusion imaging with total-body PET. J Nucl Med. 2023;64:S11–9.
Sundar LKS, Hacker M, Beyer T. Whole-body PET imaging: a catalyst for whole-person research? J Nucl Med. 2023;64:197–9.
Liu G, Mao W, Yu H, Hu Y, Gu J, Shi H. One-stop [18F]FDG and [68Ga]Ga-DOTA-FAPI-04 total-body PET/CT examination with dual-low activity: a feasibility study. Eur J Nucl Med Mol Imaging. 2023;50:2271–81.
Sun T, Wang Z, Wu Y, Gu F, Li X, Bai Y, et al. Identifying the individual metabolic abnormities using whole-body PET imaging from a systematic perspective. J Nucl Med. 2022;63:3213–3213.
Sundar LS, Badawi RD, Spencer BA, Li E, Cherry SR, Abdelhafez YG, Hacker M, Jones T, Beyer T. Enhance-PET: exploring the human functional connectome using total-body [18F]FDG-PET. Eur J Nucl Med Mol Imaging. 2021;48:201.
Shiyam Sundar LK, Muzik O, Buvat I, Bidaut L, Beyer T. Potentials and caveats of AI in hybrid imaging. Methods. 2021;188:4–19.
Chen R, Yang X, Ng YL, Yu X, Huo Y, Xiao X, et al. First Total-Body Kinetic modeling and Parametric Imaging of Dynamic 68 Ga-FAPI-04 PET in pancreatic and gastric Cancer. J Nucl Med. 2023;64:960–7.
Sari H, Mingels C, Alberts I, Hu J, Buesser D, Shah V, et al. First results on kinetic modelling and parametric imaging of dynamic 18F-FDG datasets from a long axial FOV PET scanner in oncological patients. Eur J Nucl Med Mol Imaging. 2022;49:1997–2009.
Alberts I, Sari H, Mingels C, Afshar-Oromieh A, Pyka T, Shi K, et al. Long-axial field-of-view PET/CT: perspectives and review of a revolutionary development in nuclear medicine based on clinical experience in over 7000 patients. Cancer Imaging. 2023;23:28.
Sui X, Liu G, Hu P, Chen S, Yu H, Wang Y, et al. Total-body PET/Computed tomography highlights in clinical practice: experiences from Zhongshan Hospital, Fudan University. PET Clin. 2021;16:9–14.
Dimitrakopoulou-Strauss A, Pan L, Sachpekidis C. Long axial field of view (LAFOV) PET-CT: implementation in static and dynamic oncological studies. Eur J Nucl Med Mol Imaging. 2023;50:3354–62.
Ng QK-T, Triumbari EKA, Omidvari N, Cherry SR, Badawi RD, Nardo L. Total-body PET/CT– first clinical experiences and future perspectives. Semin Nucl Med. 2022;52:330–9.
Tan H, Gu Y, Yu H, Hu P, Zhang Y, Mao W, et al. Total-body PET/CT: current applications and future perspectives. Am J Roentgenol. 2020;215:325–37.
Alavi A, Saboury B, Nardo L, Zhang V, Wang M, Li H, et al. Potential and most relevant applications of total body PET/CT imaging. Clin Nucl Med. 2022;47:43.
Abdelhafez Y, Raychaudhuri SP, Mazza D, Sarkar S, Hunt HL, McBride K, et al. Total-body 18F-FDG PET/CT in autoimmune inflammatory arthritis at Ultra-low Dose: initial observations. J Nucl Med. 2022;63:1579–85.
Høilund-Carlsen PF, Piri R, Gerke O, Edenbrandt L, Alavi A. Assessment of total-body atherosclerosis by PET/Computed tomography. PET Clin. 2021;16:119–28.
Sibille L, Seifert R, Avramovic N, Vehren T, Spottiswoode B, Zuehlsdorff S, et al. 18F-FDG PET/CT uptake classification in Lymphoma and Lung Cancer by using deep convolutional neural networks. Radiology. 2020;294:445–52.
Capobianco N, Meignan M, Cottereau A-S, Vercellino L, Sibille L, Spottiswoode B, et al. Deep-learning 18F-FDG uptake classification enables total metabolic tumor volume estimation in diffuse large B-Cell lymphoma. J Nucl Med. 2021;62:30–6.
Girum KB, Rebaud L, Cottereau A-S, Meignan M, Clerc J, Vercellino L, et al. 18F-FDG PET maximum-intensity projections and Artificial Intelligence: a Win-Win Combination to easily measure prognostic biomarkers in DLBCL patients. J Nucl Med. 2022;63:1925–32.
Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021;18:203–11.
Cardoso MJ, Li W, Brown R, Ma N, Kerfoot E, Wang Y et al. MONAI: An open-source framework for deep learning in healthcare [Internet]. arXiv; 2022 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/2211.02701.
Gatidis S, Hepp T, Früh M, La Fougère C, Nikolaou K, Pfannenberg C, et al. A whole-body FDG-PET/CT dataset with manually annotated Tumor lesions. Sci Data. 2022;9:601.
Andrearczyk V, Oreiller V, Boughdad S, Rest CCL, Elhalawani H, Jreige M et al. Overview of the HECKTOR Challenge at MICCAI 2021: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT Images [Internet]. arXiv; 2022 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/2201.04138.
Kirillov A, Mintun E, Ravi N, Mao H, Rolland C, Gustafson L et al. Segment Anything [Internet]. arXiv; 2023 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/2304.02643.
Ma J, He Y, Li F, Han L, You C, Wang B. Segment Anything in Medical Images [Internet]. arXiv; 2023 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/2304.12306.
Nardo L, Pantel AR. Oncologic applications of long Axial Field-of-view PET/Computed tomography. PET Clin. 2021;16:65–73.
Katal S, Eibschutz LS, Saboury B, Gholamrezanezhad A, Alavi A. Advantages and applications of total-body PET scanning. Diagnostics. 2022;12:426.
Rodriguez JA, Selvaraj S, Bravo PE. Potential Cardiovascular applications of total-body PET imaging. PET Clin. 2021;16:129–36.
Chen W, Liu L, Li Y, Li S, Li Z, Zhang W, et al. Evaluation of pediatric malignancies using total-body PET/CT with half-dose [18F]-FDG. Eur J Nucl Med Mol Imaging. 2022;49:4145–55.
Cherry SR, Diekmann J, Bengel FM. Total-body Positron Emission Tomography: adding New perspectives to Cardiovascular Research. JACC Cardiovasc Imaging. 2023;16:1335–47.
Omidvari N, Jones T, Price PM, Ferre AL, Lu J, Abdelhafez YG, et al. First-in-human immunoPET imaging of COVID-19 convalescent patients using dynamic total-body PET and a CD8-targeted minibody. Sci Adv. 2023;9:eadh7968.
Michalski K, Ruf J, Goetz C, Seitz AK, Buck AK, Lapa C, et al. Prognostic implications of dual tracer PET/CT: PSMA ligand and [18F]FDG PET/CT in patients undergoing [177Lu]PSMA radioligand therapy. Eur J Nucl Med Mol Imaging. 2021;48:2024–30.
Alberts I, Schepers R, Zeimpekis K, Sari H, Rominger A, Afshar-Oromieh A. Combined [68 Ga]Ga-PSMA-11 and low-dose 2-[18F]FDG PET/CT using a long-axial field of view scanner for patients referred for [177Lu]-PSMA-radioligand therapy. Eur J Nucl Med Mol Imaging. 2023;50:951–6.
Miller D, Schauer D. The ALARA principle in medical imaging. AAPM Newsl. 2015;40:38–40.
Cohen MD. ALARA, image gently and CT-induced cancer. Pediatr Radiol. 2015;45:465–70.
Sari H, Teimoorisichani M, Mingels C, Alberts I, Panin V, Bharkhada D, et al. Quantitative evaluation of a deep learning-based framework to generate whole-body attenuation maps using LSO background radiation in long axial FOV PET scanners. Eur J Nucl Med Mol Imaging. 2022;49:4490–502.
Guo R, Xue S, Hu J, Sari H, Mingels C, Zeimpekis K, et al. Using domain knowledge for robust and generalizable deep learning-based CT-free PET attenuation and scatter correction. Nat Commun. 2022;13:5882.
Hu Y, Zheng Z, Yu H, Wang J, Yang X, Shi H. Ultra-low-dose CT reconstructed with the artificial intelligence iterative reconstruction algorithm (AIIR) in 18F-FDG total-body PET/CT examination: a preliminary study. EJNMMI Phys. 2023;10:1.
Pratt EC, Lopez-Montes A, Volpe A, Crowley MJ, Carter LM, Mittal V, et al. Simultaneous quantitative imaging of two PET radiotracers via the detection of positron–electron annihilation and prompt gamma emissions. Nat Biomed Eng. 2023;7:1028–39.
Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, Dalca AV. VoxelMorph: a Learning Framework for Deformable Medical Image Registration. IEEE Trans Med Imaging. 2019;38:1788–800.
Sundar LKS, Yu J, Muzik O, Kulterer OC, Fueger B, Kifjak D, et al. Fully automated, semantic segmentation of whole-body 18 F-FDG PET/CT images based on Data-Centric Artificial Intelligence. J Nucl Med. 2022;63:1941–8.
Wasserthal J, Breit H-C, Meyer MT, Pradella M, Hinck D, Sauter AW, et al. TotalSegmentator: robust segmentation of 104 anatomic structures in CT images. Radiol Artif Intell. 2023;5:e230024.
Avants BB, Epstein CL, Grossman M, Gee JC. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med Image Anal. 2008;12:26–41.
Venet L, Pati S, Feldman MD, Nasrallah MP, Yushkevich P, Bakas S. Accurate and robust alignment of differently stained histologic images based on Greedy Diffeomorphic Registration. Appl Sci. 2021;11:1892.
Chitalia R, Viswanath V, Pantel AR, Peterson LM, Gastounioti A, Cohen EA, et al. Functional 4-D clustering for characterizing intratumor heterogeneity in dynamic imaging: evaluation in FDG PET as a prognostic biomarker for breast cancer. Eur J Nucl Med Mol Imaging. 2021;48:3990–4001.
Liu G, Yu H, Shi D, Hu P, Hu Y, Tan H, et al. Short-time total-body dynamic PET imaging performance in quantifying the kinetic metrics of 18F-FDG in healthy volunteers. Eur J Nucl Med Mol Imaging. 2022;49:2493–503.
Feng T, Zhao Y, Shi H, Li H, Zhang X, Wang G, et al. Total-body quantitative Parametric Imaging of Early Kinetics of 18F-FDG. J Nucl Med. 2021;62:738–44.
Zhang X, Cherry SR, Xie Z, Shi H, Badawi RD, Qi J. Subsecond total-body imaging using ultrasensitive positron emission tomography. Proc Natl Acad Sci. 2020;117:2265–7.
Hofman MS, Hicks RJ. Moving Beyond Lumpology: PET/CT Imaging of Pheochromocytoma and Paraganglioma. Clin Cancer Res. 2015;21:3815–7.
Hicks RJ. So, you want to get into total-body PET/CT scanning? An installation guide for beginners! Cancer Imaging. 2023;23:35.
Sun T, Wu Y, Wei W, Fu F, Meng N, Chen H, et al. Motion correction and its impact on quantification in dynamic total-body 18F-fluorodeoxyglucose PET. EJNMMI Phys. 2022;9:62.
Sundar LKS, Lassen ML, Gutschmayer S, Ferrara D, Calabrò A, Yu J, et al. Fully automated, fast motion correction of dynamic whole-body and total-body PET/CT imaging studies. J Nucl Med. 2023;64:1145–53.
Sundar LKS, Iommi D, Muzik O, Chalampalakis Z, Klebermass E-M, Hienert M, et al. Conditional generative adversarial networks aided motion correction of dynamic 18F-FDG PET brain studies. J Nucl Med. 2021;62:871–9.
Sundar LS, Iommi D, Spencer B, Wang Q, Cherry S, Beyer T, et al. Data-driven motion compensation using cGAN for total-body [18F]FDG-PET imaging. J Nucl Med. 2021;62:35–35.
Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B. High-Resolution Image Synthesis with Latent Diffusion Models [Internet]. arXiv; 2022 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/2112.10752.
Phelps ME, Mazziotta JC. Positron Emission Tomography: human brain function and Biochemistry. Science. 1985;228:799–809.
Penet M-F, Winnard PTJ, Jacobs MA, Bhujwalla ZM. Understanding cancer-induced cachexia: imaging the flame and its fuel. Curr Opin Support Palliat Care. 2011;5:327.
Argilés JM, López-Soriano FJ, Stemmler B, Busquets S. Cancer-associated cachexia — understanding the tumour macroenvironment and microenvironment to improve management. Nat Rev Clin Oncol. 2023;20:250–64.
Sattar N, McCarey DW, Capell H, McInnes IB. Explaining how high-Grade systemic inflammation accelerates vascular risk in rheumatoid arthritis. Circulation. 2003;108:2957–63.
Hotamisligil GS. Inflammation and metabolic disorders. Nature. 2006;444:860–7.
Sundar LKS, Muzik O, Rischka L, Hahn A, Lanzenberger R, Hienert M, et al. Promise of fully Integrated PET/MRI: Noninvasive Clinical quantification of cerebral glucose metabolism. J Nucl Med. 2020;61:276–84.
Blatter DD, Bigler ED, Gale SD, Johnson SC, Anderson CV, Burnett BM, et al. Quantitative volumetric analysis of brain MR: normative database spanning 5 decades of life. AJNR Am J Neuroradiol. 1995;16:241–51.
Costes N, Merlet I, Ostrowsky K, Faillenot I, Lavenne F, Zimmer L, et al. A 18F-MPPF PET normative database of 5-HT1A receptor binding in men and women over aging. J Nucl Med. 2005;46:1980–9.
Savli M, Bauer A, Mitterhauser M, Ding Y-S, Hahn A, Kroll T, et al. Normative database of the serotonergic system in healthy subjects using multi-tracer PET. NeuroImage. 2012;63:447–59.
Lind L, Kullberg J, Ahlström H, Michaëlsson K, Strand R. Proof of principle study of a detailed whole-body image analysis technique, Imiomics, regarding adipose and lean tissue distribution. Sci Rep. 2019;9:7388.
Gutschmayer S, Muzik O, Hacker M, Ferrara D, Zuehlsdorff S, Beyer T et al. Automated and tracer-independent generation of a total-body PET/CT normative database for future holistic patient analysis. Leipzig; 2023 [cited 2024 Feb 26]. p. s-0043-1766169. Available from: http://www.thieme-connect.de/DOI/DOI?10.1055/s-0043-1766169.
Suchacki KJ, Alcaide-Corral CJ, Nimale S, Macaskill MG, Stimson RH, Farquharson C, et al. A systems-Level analysis of total-body PET Data reveals complex skeletal metabolism networks in vivo. Front Med. 2021;8:740615.
Dias AH, Hansen AK, Munk OL, Gormsen LC. Normal values for 18F-FDG uptake in organs and tissues measured by dynamic whole body multiparametric FDG PET in 126 patients. EJNMMI Res. 2022;12:15.
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825–30.
An end-to-end AutoML solution for tabular data at KaggleDays [Internet]. 2019 [cited 2024 Feb 26]. Available from: https://blog.research.google/2019/05/an-end-to-end-automl-solution-for.html?m=1.
LeDell E, Poirier S. H2O AutoML: Scalable Automatic Machine Learning [Internet]. [cited 2024 Feb 26]. Available from: https://www.automl.org/wp-content/uploads/2020/07/AutoML_2020_paper_61.pdf.
Olson RS, Moore JH. TPOT: a tree-based pipeline optimization tool for automating machine learning. In: Hutter F, Kotthoff L, Vanschoren J, editors. Automated Machine Learning [Internet]. Cham: Springer International Publishing; 2019 [cited 2024 Feb 26]. p. 151–60. Available from: https://doi.org/10.1007/978-3-030-05318-5_8.
Lundberg S, Lee S-I. A Unified Approach to Interpreting Model Predictions [Internet]. arXiv; 2017 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/1705.07874.
Ribeiro MT, Singh S, Guestrin C. “Why should I trust you?”: explaining the predictions of any classifier [Internet]. arXiv; 2016 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/1602.04938.
Hua J, Xiong Z, Lowey J, Suh E, Dougherty ER. Optimal number of features as a function of sample size for various classification rules. Bioinformatics. 2005;21:1509–15.
Figueroa RL, Zeng-Treitler Q, Kandula S, Ngo LH. Predicting sample size required for classification performance. BMC Med Inf Decis Mak. 2012;12:8.
Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S et al. Learning Transferable Visual Models From Natural Language Supervision [Internet]. arXiv; 2021 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/2103.00020.
LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–44.
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN et al. Attention Is All You Need [Internet]. arXiv; 2023 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/1706.03762.
Pan JJ, Wang J, Li G. Survey of Vector Database Management Systems [Internet]. arXiv; 2023 [cited 2024 Feb 26]. Available from: http://arxiv.org/abs/2310.14021.
Andrearczyk V, Oreiller V, Boughdad S, Le Rest CC, Tankyevych O, Elhalawani H, et al. Automatic Head and Neck Tumor segmentation and outcome prediction relying on FDG-PET/CT images: findings from the second edition of the HECKTOR challenge. Med Image Anal. 2023;90:102972.
Acknowledgements
We extend our heartfelt thanks to Rodney Hicks and Jason Callahan for their invaluable collaboration in this project. Their generosity in sharing datasets, coupled with their willingness to share their experiences using our ENHANCE software stack, has been instrumental in shaping our understanding of its practical applications. Their contributions have significantly influenced the development and refinement of this manuscript. Additionally, the figures they provided have greatly enhanced its depth and clarity. We are immensely grateful for their insights and support.
Funding
No funding was received for this work.
Author information
Contributions
LKS initiated the conceptualization, literature review, and manuscript drafting for this review. SG and MM significantly contributed through figure development and critical manuscript revisions. As the senior author, TB provided overarching guidance and final editorial oversight, ensuring academic rigour.
Ethics declarations
Ethical approval and consent to participate
Not applicable.
Consent for publication
All authors consent for publication.
Competing interests
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article
Shiyam Sundar, L.K., Gutschmayer, S., Maenle, M. et al. Extracting value from total-body PET/CT image data - the emerging role of artificial intelligence. Cancer Imaging 24, 51 (2024). https://doi.org/10.1186/s40644-024-00684-w