ABSTRACT
OBJECTIVES: To evaluate the performance and potential biases of deep-learning models in detecting chronic obstructive pulmonary disease (COPD) on chest CT scans across different ethnic groups, specifically non-Hispanic White (NHW) and African American (AA) populations. MATERIALS AND METHODS: Inspiratory chest CT and clinical data from 7549 individuals in the Genetic Epidemiology of COPD (COPDGene) study (mean age 62 years, interquartile range 56-69), including 5240 NHW and 2309 AA individuals, were retrospectively analyzed. Several factors influencing binary COPD classification performance across ethnic populations were examined: (1) training population: NHW-only, AA-only, a balanced set (half NHW, half AA), and the entire set (all NHW + AA individuals); (2) learning strategy: three supervised learning (SL) versus three self-supervised learning (SSL) methods. Distribution shifts across ethnicities were further assessed for the top-performing methods. RESULTS: The learning strategy significantly influenced model performance, with SSL methods achieving higher performance than SL methods (p < 0.001) across all training configurations. Training on balanced datasets containing NHW and AA individuals improved model performance compared to population-specific datasets. Distribution shifts were found between ethnicities for the same health status, particularly when models were trained with nearest-neighbor contrastive SSL. Training on a balanced dataset resulted in fewer distribution shifts across ethnicity and health status, highlighting its efficacy in reducing biases. CONCLUSION: Our findings demonstrate that utilizing SSL methods and training on large, balanced datasets can enhance COPD detection performance and reduce biases across diverse ethnic populations. These findings emphasize the importance of equitable AI-driven healthcare solutions for COPD diagnosis.
CRITICAL RELEVANCE STATEMENT: Self-supervised learning coupled with balanced datasets significantly improves COPD detection model performance, addressing biases across diverse ethnic populations and emphasizing the crucial role of equitable AI-driven healthcare solutions. KEY POINTS: Self-supervised learning methods outperform supervised learning methods, showing higher AUC values (p < 0.001). Balanced datasets with non-Hispanic White and African American individuals improve model performance. Training on diverse datasets enhances COPD detection accuracy. Ethnically diverse datasets reduce bias in COPD detection models. SimCLR models mitigate biases in COPD detection across ethnicities.
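The subgroup comparisons above rest on per-group AUC values. As an illustrative sketch (not the study's code), AUC can be computed directly from its rank interpretation, i.e., the probability that a randomly chosen COPD case scores higher than a randomly chosen control; the scores and group labels below are hypothetical:

```python
def auc(scores_pos, scores_neg):
    """AUC equals the probability that a positive case is ranked above a
    negative one (Mann-Whitney U statistic); ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs per subgroup: (COPD scores, non-COPD scores).
groups = {
    "NHW": ([0.9, 0.8, 0.7], [0.2, 0.4, 0.3]),
    "AA":  ([0.8, 0.6, 0.5], [0.3, 0.5, 0.1]),
}
per_group_auc = {g: auc(pos, neg) for g, (pos, neg) in groups.items()}
auc_gap = abs(per_group_auc["NHW"] - per_group_auc["AA"])  # subgroup gap
```

Auditing the gap between per-group AUCs in this way is one simple check for the kind of ethnicity-dependent performance differences the study investigates.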
ABSTRACT
OBJECTIVES: To achieve a consensus on definitions for different aspects of radiomics workflows and thereby support their translation into clinical usage, and to assess experts' perspectives on the most important challenges for successful clinical workflow implementation. MATERIALS AND METHODS: The consensus was achieved through a multi-stage process. Stage 1 comprised a definition screening, a retrospective analysis with semantic mapping of terms found in 22 workflow definitions, and the compilation of an initial baseline definition. Stages 2 and 3 consisted of a Delphi process with over 45 experts from sites participating in the German Research Foundation (DFG) Priority Program 2177. Stage 2 aimed to achieve a broad consensus on a definition proposal, while stage 3 identified the importance of translational challenges. RESULTS: Workflow definitions from 22 publications (published 2012-2020) were analyzed. Sixty-nine definition terms were extracted and mapped, and semantic ambiguities (e.g., homonymous and synonymous terms) were identified and resolved. The consensus definition was developed via a Delphi process. The final definition, comprising seven phases and 37 aspects, reached a high overall consensus (> 89% of experts "agree" or "strongly agree"). Two aspects did not reach a strong consensus. In addition, the Delphi process identified and characterized, from the participating experts' perspective, the ten most important challenges in radiomics workflows. CONCLUSION: To overcome semantic inconsistencies between existing definitions and to offer a well-defined, broad, referenceable terminology, a consensus workflow definition for radiomics-based setups and a mapping of terms to the existing literature were compiled. Moreover, the most relevant challenges towards clinical application were characterized. CRITICAL RELEVANCE STATEMENT: Lack of standardization represents one major obstacle to the successful clinical translation of radiomics.
Here, we report a consensus workflow definition covering different aspects of radiomics studies and highlight important challenges to advance the clinical adoption of radiomics. KEY POINTS: Published radiomics workflow terminologies are inconsistent, hindering standardization and translation. A consensus radiomics workflow definition proposal with high agreement was developed. Result resources are publicly available for further exploitation by the scientific community.
ABSTRACT
Background: Chronic obstructive pulmonary disease (COPD) poses a substantial global health burden, demanding advanced diagnostic tools for early detection and accurate phenotyping. Along these lines, this study seeks to enhance COPD characterization on chest computed tomography (CT) by comparing the spatial and quantitative relationships between traditional parametric response mapping (PRM) and a novel self-supervised anomaly detection approach, and to unveil potential additional insights into the dynamic transitional stages of COPD. Methods: Non-contrast inspiratory and expiratory CT scans of 1,310 never-smokers, GOLD 0 individuals, and COPD patients (GOLD 1-4) from the COPDGene dataset were retrospectively evaluated. A novel self-supervised anomaly detection approach was applied to quantify lung abnormalities associated with COPD as regional deviations. These regional anomaly scores were qualitatively and quantitatively compared, per GOLD class, to PRM volumes (emphysema: PRMEmph; functional small-airway disease: PRMfSAD) and to principal component analysis (PCA) and clustering applied to the self-supervised latent space. The anomaly score's relationship to pulmonary function tests (PFTs) was also evaluated. Results: Initial t-distributed stochastic neighbor embedding (t-SNE) visualization of the self-supervised latent space highlighted distinct spatial patterns, revealing clear separations between regions with and without emphysema and air trapping. Four stable clusters were identified within this latent space by the PCA and cluster analysis. As the GOLD stage increased, PRMEmph, PRMfSAD, anomaly score, and Cluster 3 volumes exhibited escalating trends, contrasting with a decline in Cluster 2. Patient-wise anomaly scores differed significantly across GOLD stages (p < 0.01), except between never-smokers and GOLD 0 patients. In contrast, PRMEmph, PRMfSAD, and cluster classes showed fewer significant differences.
Pearson correlation coefficients revealed moderate correlations between the anomaly score and PFTs (0.41-0.68), except for functional residual capacity and smoking duration. The anomaly score was correlated with PRMEmph (r = 0.66, p < 0.01) and PRMfSAD (r = 0.61, p < 0.01). Anomaly scores significantly improved the fit of PRM-adjusted multivariate models for predicting clinical parameters (p < 0.001). Bland-Altman plots revealed that the agreement between PRM-derived volumes and cluster volumes was not constant across the range of measurements. Conclusion: Our study highlights the synergistic utility of the anomaly detection approach and traditional PRM in capturing the nuanced heterogeneity of COPD. The observed disparities in spatial patterns, cluster dynamics, and correlations with PFTs underscore the distinct yet complementary strengths of these methods. Integrating anomaly detection and PRM offers a promising avenue for understanding COPD pathophysiology, potentially informing more tailored diagnostic and intervention approaches to improve patient outcomes.
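The Bland-Altman analysis mentioned above compares paired volume measurements via the bias (mean difference) and the 95% limits of agreement (bias ± 1.96 standard deviations of the differences). A minimal sketch with made-up paired volumes:

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement series."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired volumes (ml): PRM-derived vs. cluster-derived.
prm     = [100.0, 150.0, 210.0, 320.0]
cluster = [ 95.0, 160.0, 200.0, 355.0]
bias, (lo, hi) = bland_altman(prm, cluster)
```

Plotting the differences against the pairwise means then reveals whether agreement stays constant across the measurement range, which is the property the study found violated.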
ABSTRACT
OBJECTIVES: To quantify regional manifestations related to COPD as anomalies from a modeled distribution of normal-appearing lung on chest CT using a deep learning (DL) approach, and to assess its potential to predict disease severity. MATERIALS AND METHODS: Paired inspiratory/expiratory CT and clinical data from the COPDGene and COSYCONET cohort studies were included. COPDGene data served as training/validation/test sets (N = 3144/786/1310) and COSYCONET as an external test set (N = 446). To differentiate low-risk individuals (healthy/minimal disease, GOLD 0) from COPD patients (GOLD 1-4), the self-supervised DL model learned semantic information from 50 × 50 × 50 voxel samples of segmented intact lungs. An anomaly detection approach was then trained to quantify lung abnormalities related to COPD as regional deviations. Four supervised DL models were run for comparison. The clinical and radiological predictive power of the proposed anomaly score was assessed using linear mixed effects models (LMM). RESULTS: The proposed approach achieved an area under the curve of 84.3 ± 0.3 (p < 0.001) for COPDGene and 76.3 ± 0.6 (p < 0.001) for COSYCONET, outperforming supervised models even when including only inspiratory CT. Anomaly scores significantly improved the fit of LMMs for predicting lung function, health status, and quantitative CT features (emphysema/air trapping; p < 0.001). Higher anomaly scores were significantly associated with exacerbations in both cohorts (p < 0.001) and with greater dyspnea scores in COPDGene (p < 0.001). CONCLUSION: Quantifying heterogeneous COPD manifestations as anomalies offers advantages over supervised methods and was found to be predictive of lung function impairment and morphological deterioration. CLINICAL RELEVANCE STATEMENT: Using deep learning, lung manifestations of COPD can be identified as deviations from normal-appearing chest CT and assigned an anomaly score that is consistent with decreased pulmonary function, emphysema, and air trapping.
KEY POINTS: • A self-supervised DL anomaly detection method discriminated low-risk individuals from COPD subjects, outperforming classic DL methods on two datasets (COPDGene AUC = 84.3%, COSYCONET AUC = 76.3%). • Our contrastive task exhibits robust performance even without the inclusion of expiratory images, while voxel-based methods demonstrate significant performance enhancement when incorporating expiratory images in the COPDGene dataset. • Anomaly scores improved the fit of linear mixed effects models in predicting clinical parameters and imaging alterations (p < 0.001) and were directly associated with clinical outcomes (p < 0.001).
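The anomaly score concept, quantifying deviation from a modeled distribution of normal-appearing lung, can be illustrated with a deliberately simplified stand-in: per-dimension z-scores of a patch embedding relative to statistics fitted on "normal" embeddings. The study's actual method is a self-supervised contrastive model; the diagonal-covariance scoring and the toy data here are assumptions for illustration only:

```python
import statistics

def fit_normal_model(embeddings):
    """Per-dimension mean and std estimated from normal-appearing patches."""
    dims = list(zip(*embeddings))
    return ([statistics.mean(d) for d in dims],
            [statistics.stdev(d) for d in dims])

def anomaly_score(x, mean, std):
    """Average absolute z-score of an embedding relative to the modeled
    'normal' distribution (diagonal-covariance simplification)."""
    return sum(abs((v - m) / s) for v, m, s in zip(x, mean, std)) / len(x)

normal = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]   # toy healthy-lung embeddings
mean, std = fit_normal_model(normal)             # mean=[1.0, 2.0], std=[1.0, 1.0]
score_typical = anomaly_score([1.0, 2.0], mean, std)  # 0.0: matches the model
score_deviant = anomaly_score([4.0, 5.0], mean, std)  # 3.0: far from 'normal'
```

Aggregating such regional scores per patient mirrors how a patient-wise anomaly score can summarize heterogeneous, spatially distributed disease manifestations.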
Subject(s)
Deep Learning , Chronic Obstructive Pulmonary Disease , Severity of Illness Index , X-Ray Computed Tomography , Humans , Chronic Obstructive Pulmonary Disease/diagnostic imaging , Chronic Obstructive Pulmonary Disease/physiopathology , Male , Female , X-Ray Computed Tomography/methods , Middle Aged , Aged , Predictive Value of Tests , Lung/diagnostic imaging , Cohort Studies
ABSTRACT
Automated image analysis plays an increasing role in radiology by detecting and quantifying image features beyond the perception of the human eye. Common AI-based approaches address a single medical problem, although patients often present with multiple interacting, frequently subclinical medical conditions. A holistic imaging diagnostics tool based on artificial intelligence (AI) has the potential to provide an overview of multi-system comorbidities within a single workflow. An interdisciplinary, multicentric team of medical experts and computer scientists designed a pipeline comprising AI-based tools for the automated detection, quantification and characterization of the most common pulmonary, metabolic, cardiovascular and musculoskeletal comorbidities in chest computed tomography (CT). To provide a comprehensive evaluation of each patient, a multidimensional workflow was established with algorithms operating synchronously on a decentralized Joint Imaging Platform (JIP). The results for each patient are transferred to a dedicated database and summarized in a structured report with reference to available reference values and annotated sample images of detected pathologies. Hence, this tool allows for the comprehensive, large-scale analysis of imaging biomarkers of comorbidities in chest CT, first in research and subsequently in clinical routine. Moreover, the tool accommodates the quantitative analysis and classification of each pathology, providing integral diagnostic and prognostic value, and subsequently leading to improved preventive patient care and further possibilities for future studies.
ABSTRACT
BACKGROUND: Although digital and data-based technologies are widespread in various industries in the context of Industry 4.0, the use of smart connected devices in health care is still in its infancy. Innovative solutions for the medical environment are hampered by difficult access to medical device data and high barriers to market entry due to proprietary systems. OBJECTIVE: In the proof-of-concept project OP 4.1, we show the business viability of connecting and augmenting medical devices and data through software add-ons by giving companies a technical and commercial platform for the development, implementation, distribution, and billing of innovative software solutions. METHODS: The creation of a central platform prototype required the collaboration of several independent market participants, including medical users, software developers, medical device manufacturers, and platform providers. A dedicated consortium of clinical and scientific partners as well as industry partners was set up. RESULTS: We demonstrate the successful development of a prototype of a user-centric, open, and extensible platform for the intelligent support of processes, starting with the operating room. By connecting heterogeneous data sources and medical devices from different manufacturers and making them accessible to software developers and medical users, the cloud-based platform OP 4.1 enables the augmentation of medical devices and procedures through software-based solutions. The platform also allows for the demand-oriented billing of apps and medical devices, thus permitting software-based solutions to fast-track their economic development and become commercially successful. CONCLUSIONS: The technology and business platform OP 4.1 creates a multisided market for the successful development, implementation, distribution, and billing of new software solutions in the operating room and in the health care sector in general.
Consequently, software-based medical innovation can be translated into clinical routine quickly, efficiently, and cost-effectively, optimizing the treatment of patients through smartly assisted procedures.
ABSTRACT
BACKGROUND: Hepatectomy, living donor liver transplantation and other major hepatic interventions rely on precise calculation of the total, remnant and graft liver volume. However, liver volume might differ between the pre- and intraoperative situation. To model liver volume changes and to develop and validate pre- and intraoperative assistance systems, exact information about the influence of lung ventilation and intraoperative surgical state on liver volume is essential. METHODS: This study assessed the effects of respiratory phase, pneumoperitoneum for laparoscopy, and laparotomy on liver volume in a live porcine model. Nine CT scans were conducted per pig (N = 10), one for each combination of the three operative states (native, pneumoperitoneum and laparotomy) and the three respiratory states (expiration, middle inspiration and deep inspiration). Manual segmentations of the liver were generated and converted to a mesh model, and the corresponding liver volumes were calculated. RESULTS: With pneumoperitoneum the liver volume decreased on average by 13.2% (112.7 ml ± 63.8 ml, p < 0.0001) and after laparotomy by 7.3% (62.0 ml ± 65.7 ml, p = 0.0001) compared to the native state. From expiration to middle inspiration the liver volume increased on average by 4.1% (31.1 ml ± 55.8 ml, p = 0.166) and from expiration to deep inspiration by 7.2% (54.7 ml ± 51.8 ml, p = 0.007). CONCLUSIONS: Considerable changes in liver volume were caused by pneumoperitoneum, laparotomy and respiration. These findings provide knowledge for the refinement of available preoperative simulation and operation planning and help to adjust preoperative imaging parameters to best suit the intraoperative situation.
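Liver volumes in the study were calculated from segmentation-derived mesh models. One standard way to obtain the volume of a closed triangle mesh, shown here as a generic sketch rather than the study's implementation, is the divergence theorem: summing the signed volumes of tetrahedra formed by the origin and each outward-oriented face:

```python
def mesh_volume(vertices, faces):
    """Volume of a closed triangle mesh via the divergence theorem:
    V = (1/6) * sum over faces of a . (b x c), with faces wound
    counter-clockwise as seen from outside the surface."""
    total = 0.0
    for i, j, k in faces:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # scalar triple product a . (b x c)
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx))
    return total / 6.0

# Unit right tetrahedron with outward-oriented faces; exact volume is 1/6.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
vol = mesh_volume(verts, faces)   # 1/6
```

The sign of the result also checks the mesh orientation: reversed winding yields a negative volume, a useful sanity check before comparing pre- and intraoperative volumes.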
Subject(s)
Laparoscopy , Liver Transplantation , Animals , Hepatectomy , Humans , Three-Dimensional Imaging , Laparotomy , Liver/diagnostic imaging , Liver/surgery , Living Donors , Swine
ABSTRACT
PURPOSE: Image analysis is one of the most promising applications of artificial intelligence (AI) in health care, potentially improving prediction, diagnosis, and treatment of diseases. Although scientific advances in this area critically depend on the accessibility of large-volume and high-quality data, sharing data between institutions faces various ethical and legal constraints as well as organizational and technical obstacles. METHODS: The Joint Imaging Platform (JIP) of the German Cancer Consortium (DKTK) addresses these issues by providing federated data analysis technology in a secure and compliant way. Using the JIP, medical image data remain in the originator institutions, but analysis and AI algorithms are shared and jointly used. Common standards and interfaces to local systems ensure permanent data sovereignty of participating institutions. RESULTS: The JIP is established in the radiology and nuclear medicine departments of 10 university hospitals in Germany (DKTK partner sites). In multiple complementary use cases, we show that the platform fulfills all relevant requirements to serve as a foundation for multicenter medical imaging trials and research on large cohorts, including the harmonization and integration of data, interactive analysis, automatic analysis, federated machine learning, and extensibility and maintenance processes, which are elementary for the sustainability of such a platform. CONCLUSION: The results demonstrate the feasibility of using the JIP as a federated data analytics platform in heterogeneous clinical information technology and software landscapes, solving an important bottleneck for the application of AI to large-scale clinical imaging data.
Subject(s)
Artificial Intelligence , Radiology , Data Science , Delivery of Health Care , Germany , Humans
ABSTRACT
PURPOSE: We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS: QIICR was motivated by three use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support for improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS: Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and in standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing a programmatic communication interface and by refining best practices for the curation of QI analysis results. CONCLUSION: The tools, DICOM capabilities, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community.
Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.
Subject(s)
Glioblastoma , Medical Informatics , Prostatic Neoplasms , Diagnostic Imaging , Humans , Male , National Cancer Institute (U.S.) , United States
ABSTRACT
PURPOSE: Fracture reduction and fixation of syndesmotic injuries is a common procedure in trauma surgery. Intra-operative evaluation of the surgical outcome is challenging due to high inter-individual anatomical variation. A comparison to the contralateral uninjured ankle would be highly beneficial but would also incur additional radiation exposure and time consumption. In this work, we pioneer automatic contralateral side comparison while avoiding an additional 3D scan. METHODS: We reconstruct an accurate 3D surface of the uninjured ankle joint from three low-dose 2D fluoroscopic projections. Through CNN-complemented 3D shape model segmentation, we create a reference model of the injured ankle while addressing the issues of metal artifacts and initialization. Following 2D-3D multiple bone reconstruction, a final reference contour can be created and matched to the uninjured ankle for contralateral side comparison without any user interaction. RESULTS: The accuracy and robustness of individual workflow steps were assessed using 81 C-arm datasets, with 2D and 3D images available for injured and uninjured ankles. Furthermore, the entire workflow was tested on eleven clinical cases. These experiments showed an overall average Hausdorff distance of [Formula: see text] mm measured at the clinical evaluation level. CONCLUSION: Reference contours of the contralateral side reconstructed from three projection images can assist surgeons in optimizing reduction results, reducing the duration of radiation exposure and potentially improving postoperative outcomes in the long term.
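The evaluation above reports a Hausdorff distance between reference and reconstructed contours. As a generic illustration of that metric (2D point sets standing in for contours; not the study's evaluation code):

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets: the largest
    distance from any point in one set to its nearest point in the other."""
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

# Toy 2D contours standing in for reference vs. reconstructed bone outlines.
ref   = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
recon = [(0.0, 0.1), (1.0, 0.0), (1.0, 1.2)]
dist = hausdorff(ref, recon)   # ≈ 0.2, the worst-case contour deviation
```

Because it reports the worst-case deviation, the Hausdorff distance is a conservative measure of contour agreement, which suits a surgical quality check.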
Subject(s)
Ankle Injuries/surgery , Ankle Joint/surgery , Internal Fracture Fixation/methods , Three-Dimensional Imaging/methods , Intraoperative Monitoring/methods , Ankle Injuries/diagnostic imaging , Ankle Joint/diagnostic imaging , Humans , Anatomic Models , X-Ray Computed Tomography/methods , Treatment Outcome
ABSTRACT
BACKGROUND: The Response Assessment in Neuro-Oncology (RANO) criteria and requirements for a uniform protocol have been introduced to standardise assessment of MRI scans in both clinical trials and clinical practice. However, these criteria mainly rely on manual two-dimensional measurements of contrast-enhancing (CE) target lesions and thus restrict both reliability and accurate assessment of tumour burden and treatment response. We aimed to develop a framework relying on artificial neural networks (ANNs) for fully automated quantitative analysis of MRI in neuro-oncology to overcome the inherent limitations of manual assessment of tumour burden. METHODS: In this retrospective study, we compiled a single-institution dataset of MRI data from patients with brain tumours being treated at Heidelberg University Hospital (Heidelberg, Germany; Heidelberg training dataset) to develop and train an ANN for automated identification and volumetric segmentation of CE tumours and non-enhancing T2-signal abnormalities (NEs) on MRI. Independent testing and large-scale application of the ANN for tumour segmentation was done in a single-institution longitudinal testing dataset from the Heidelberg University Hospital and in a multi-institutional longitudinal testing dataset from the prospective randomised phase 2 and 3 European Organisation for Research and Treatment of Cancer (EORTC)-26101 trial (NCT01290939), acquired at 38 institutions across Europe. In both longitudinal datasets, spatial and temporal tumour volume dynamics were automatically quantified to calculate time to progression, which was compared with time to progression determined by RANO, both in terms of reliability and as a surrogate endpoint for predicting overall survival. 
We integrated this approach for fully automated quantitative analysis of MRI in neuro-oncology within an application-ready software infrastructure and applied it in a simulated clinical environment of patients with brain tumours from the Heidelberg University Hospital (Heidelberg simulation dataset). FINDINGS: For training of the ANN, MRI data were collected from 455 patients with brain tumours (one MRI per patient) being treated at Heidelberg hospital between July 29, 2009, and March 17, 2017 (Heidelberg training dataset). For independent testing of the ANN, an independent longitudinal dataset of 40 patients, with data from 239 MRI scans, was collected at Heidelberg University Hospital in parallel with the training dataset (Heidelberg test dataset), and 2034 MRI scans from 532 patients at 34 institutions collected between Oct 26, 2011, and Dec 3, 2015, in the EORTC-26101 study were of sufficient quality to be included in the EORTC-26101 test dataset. The ANN yielded excellent performance for accurate detection and segmentation of CE tumours and NE volumes in both longitudinal test datasets (median DICE coefficient for CE tumours 0·89 [95% CI 0·86-0·90], and for NEs 0·93 [0·92-0·94] in the Heidelberg test dataset; CE tumours 0·91 [0·90-0·92], NEs 0·93 [0·93-0·94] in the EORTC-26101 test dataset). Time to progression from quantitative ANN-based assessment of tumour response was a significantly better surrogate endpoint than central RANO assessment for predicting overall survival in the EORTC-26101 test dataset (hazard ratios ANN 2·59 [95% CI 1·86-3·60] vs central RANO 2·07 [1·46-2·92]; p<0·0001) and also yielded a 36% margin over RANO (p<0·0001) when comparing reliability values (ie, agreement in the quantitative volumetrically defined time to progression [based on radiologist ground truth vs automated assessment with ANN] of 87% [266 of 306 with sufficient data] compared with 51% [155 of 306] with local vs independent central RANO assessment). 
In the Heidelberg simulation dataset, which comprised 466 patients with brain tumours, with 595 MRI scans obtained between April 27 and Sept 17, 2018, automated on-demand processing of MRI scans and quantitative tumour response assessment within the simulated clinical environment required 10 min of computation time (average per scan). INTERPRETATION: Overall, we found that the ANN enabled objective and automated assessment of tumour response in neuro-oncology at high throughput and could ultimately serve as a blueprint for the application of ANNs in radiology to improve clinical decision making. Future research should focus on prospective validation within clinical trials and application for automated high-throughput imaging biomarker discovery and extension to other diseases. FUNDING: Medical Faculty Heidelberg Postdoc-Program, Else Kröner-Fresenius Foundation.
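Segmentation accuracy above is reported as the Dice coefficient, 2|A∩B| / (|A| + |B|) for ground-truth and predicted masks. A minimal sketch on toy flattened binary masks (illustrative only, not the study's evaluation code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks:
    twice the overlap divided by the total number of foreground voxels."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree fully

# Toy flattened segmentation masks (1 = tumour voxel).
ground_truth = [1, 1, 1, 0, 0, 0, 1, 0]
prediction   = [1, 1, 0, 0, 0, 1, 1, 0]
score = dice(ground_truth, prediction)   # 2*3 / (4+4) = 0.75
```

Values near 0·9, as reported for the ANN, indicate near-complete voxel-wise overlap between automated and expert segmentations.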
Subject(s)
Brain Neoplasms/diagnostic imaging , Brain Neoplasms/therapy , Computer-Assisted Diagnosis , Computer-Assisted Image Interpretation , Magnetic Resonance Imaging , Neural Networks (Computer) , Automation , Brain Neoplasms/pathology , Phase II Clinical Trials as Topic , Phase III Clinical Trials as Topic , Factual Databases , Disease Progression , Female , Germany , Humans , Male , Multicenter Studies as Topic , Predictive Value of Tests , Randomized Controlled Trials as Topic , Reproducibility of Results , Retrospective Studies , Time Factors , Treatment Outcome , Tumor Burden , Workflow
ABSTRACT
Radiomics, the extraction of quantitative features from radiologic images, shows increasing potential to contribute to modern personalized medicine approaches. MITK Phenotyping is an openly distributed radiomics framework implementing an exhaustive set of features, adhering to the most recent international standards, and supporting a variety of user interfaces and programming languages.
Subject(s)
Image Enhancement/methods , Computer-Assisted Image Processing/methods , Algorithms , Humans , Phenotype , Precision Medicine/methods , Software
ABSTRACT
BACKGROUND: Many medical imaging techniques utilize fitting approaches for quantitative parameter estimation and analysis. Common examples are pharmacokinetic modeling in dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI)/computed tomography (CT), apparent diffusion coefficient calculations and intravoxel incoherent motion modeling in diffusion-weighted MRI, and Z-spectra analysis in chemical exchange saturation transfer MRI. Most available software tools are limited to a special purpose and do not allow for custom developments and extensions. Furthermore, they are mostly designed as stand-alone solutions using external frameworks and thus cannot easily be incorporated natively into the analysis workflow. RESULTS: We present a framework for medical image fitting tasks that is included in the Medical Imaging Interaction Toolkit (MITK), following a rigorous open-source, well-integrated and operating-system-independent policy. From a software engineering perspective, the local models, the fitting infrastructure and the results representation are abstracted and can thus be easily adapted to any model fitting task on image data, independent of image modality or model. Several ready-to-use libraries for model fitting and use cases, including fit evaluation and visualization, were implemented. Their embedding into MITK allows for easy data loading, pre- and post-processing, and thus a natural inclusion of model fitting into an overarching workflow. As an example, we present a comprehensive set of plug-ins for the analysis of DCE MRI data, which we validated on existing and novel digital phantoms, yielding competitive deviations between fit and ground truth. CONCLUSIONS: Providing a very flexible environment, our software mainly addresses developers of medical imaging software that includes model fitting algorithms and tools.
Additionally, the framework is of high interest to users in the domain of perfusion MRI, as it offers feature-rich, freely available, validated tools to perform pharmacokinetic analysis on DCE MRI data, with both interactive and automated batch-processing workflows.
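Among the fitting tasks listed above, apparent diffusion coefficient (ADC) calculation is a compact example: the mono-exponential model S(b) = S0·exp(-b·ADC) becomes linear after taking logarithms. The log-linear least-squares sketch below is a common simplification, not MITK's implementation:

```python
import math

def fit_adc(b_values, signals):
    """Fit S(b) = S0 * exp(-b * ADC) by linear least squares on log-signal:
    ln S = ln S0 - b * ADC. Returns (S0, ADC)."""
    xs, ys = b_values, [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), -slope

# Synthetic noiseless signal with S0 = 1000 and ADC = 1.2e-3 mm^2/s.
b = [0.0, 400.0, 800.0]
s = [1000.0 * math.exp(-bi * 1.2e-3) for bi in b]
s0, adc = fit_adc(b, s)   # recovers S0 ≈ 1000, ADC ≈ 1.2e-3
```

On noisy data the log transform biases the estimate, which is one reason a general nonlinear fitting framework like the one described is valuable beyond this simple case.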
Subject(s)
Algorithms , Contrast Media , Diagnostic Imaging/methods , Diffusion Magnetic Resonance Imaging/methods , Glioblastoma/diagnosis , Software , X-Ray Computed Tomography/methods , Glioblastoma/diagnostic imaging , Humans , Image Enhancement/methods
ABSTRACT
PURPOSE: Due to rapid developments in the research areas of medical imaging, medical image processing and robotics, computer-assisted interventions (CAI) are becoming an integral part of modern patient care. From a software engineering point of view, these systems are highly complex, and research can benefit greatly from reusing software components. This is supported by a number of open-source toolkits for medical imaging and CAI, such as the Medical Imaging Interaction Toolkit (MITK), the public software library for ultrasound imaging research (PLUS) and 3D Slicer. A toolkit-independent communication protocol such as the open image-guided therapy link (OpenIGTLink) can be used to combine the advantages of these toolkits and enable an easier realization of a clinical CAI workflow. METHODS: MITK-OpenIGTLink is presented as a network interface within MITK that allows easy-to-use, asynchronous two-way messaging between MITK and clinical devices or other toolkits. Performance and interoperability tests with MITK-OpenIGTLink were carried out covering the whole CAI workflow from data acquisition over processing to visualization. RESULTS: We present how MITK-OpenIGTLink can be applied in different usage scenarios. In performance tests, tracking data were transmitted at a frame rate of up to 1000 Hz with a latency of 2.81 ms. Transmission of images with typical ultrasound (US) and greyscale high-definition (HD) resolutions of [Formula: see text] and [Formula: see text] is possible at up to 512 and 128 Hz, respectively. CONCLUSION: With the integration of OpenIGTLink into MITK, this protocol is now supported by all established open-source toolkits in the field. This eases interoperability between MITK and toolkits such as PLUS or 3D Slicer and facilitates cross-toolkit research collaborations. MITK and its submodule MITK-OpenIGTLink are provided open source under a BSD-style licence ( http://mitk.org ).
Subject(s)
Image Processing, Computer-Assisted/methods , Software , Surgery, Computer-Assisted/methods , Telecommunications , Ultrasonography , Humans , Robotic Surgical Procedures , Robotics , Workflow
ABSTRACT
AIM: To perform a quantitative, volumetric analysis of the therapeutic effects of trans-arterial chemoembolization (TACE) in hepatocellular carcinoma (HCC) patients. PATIENTS AND METHODS: The entire tumor volume and a subset of hypervascular tumor portions were analyzed pre- and post-TACE in magnetic resonance imaging datasets of 22 HCC patients using a semi-automated segmentation and evaluation tool from the Medical Imaging Interaction Toolkit. Results were compared to mRECIST measurements, and inter-reader variability was assessed. RESULTS: Mean total tumor volume increased statistically significantly after TACE (84.6 ml pre- vs. 97.1 ml post-TACE, p=0.03), while hypervascular tumor volume decreased from 9.1 ml pre- to 3.7 ml post-TACE (p=0.0001). Likewise, mRECIST diameters decreased significantly after therapy (44.2 vs. 15.4 mm). In the inter-reader assessment, overlap errors were 12.3-17.7% for the entire tumor volume and 36.3-64.2% for the enhancing tumor volume. CONCLUSION: Quantification of therapeutic changes after TACE is feasible using a semi-automated segmentation and evaluation tool. Following TACE, hypervascular tumor volume decreases significantly.
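The two quantities reported above are straightforward to compute once segmentations exist. A small sketch, using one common definition of inter-reader overlap error (1 minus the Dice coefficient over the segmented voxel sets); the function names are illustrative, not the evaluation tool's API:

```python
def relative_change(pre_ml: float, post_ml: float) -> float:
    """Percent change of a tumor volume after therapy."""
    return 100.0 * (post_ml - pre_ml) / pre_ml

def overlap_error(reader_a: set, reader_b: set) -> float:
    """Inter-reader overlap error as 1 - Dice, in percent.

    Each argument is the set of voxel indices one reader segmented.
    """
    inter = len(reader_a & reader_b)
    dice = 2.0 * inter / (len(reader_a) + len(reader_b))
    return 100.0 * (1.0 - dice)

# Hypervascular volume shrank from 9.1 ml to 3.7 ml after TACE:
print(round(relative_change(9.1, 3.7), 1))  # about -59.3 percent
```

The much larger overlap errors for the enhancing (hypervascular) volume than for the whole tumor are expected under this metric: Dice penalizes disagreement more heavily when the structures being compared are small.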
Subject(s)
Carcinoma, Hepatocellular/therapy , Chemoembolization, Therapeutic , Liver Neoplasms/therapy , Magnetic Resonance Imaging , Aged , Aged, 80 and over , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/pathology , Female , Humans , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Male , Middle Aged , Treatment Outcome , Tumor Burden
ABSTRACT
PURPOSE: Assistance algorithms for medical tasks have great potential to support physicians with their daily work. However, medicine is also one of the most demanding domains for computer-based support systems, since medical assistance tasks are complex and the practical experience of the physician is crucial. Recent developments in the area of cognitive computing appear to be well suited to tackle medicine as an application domain. METHODS: We propose a system based on the idea of cognitive computing, consisting of auto-configurable medical assistance algorithms and their self-adapting combination. The system enables automatic execution of new algorithms, given that they are made available as Medical Cognitive Apps and are registered in a central semantic repository. Learning components can be added to the system to optimize the results in cases where numerous Medical Cognitive Apps are available for the same task. Our prototypical implementation is applied to the areas of surgical phase recognition based on sensor data and image processing for tumor progression mappings. RESULTS: Our results suggest that such assistance algorithms can be automatically configured in execution pipelines, candidate results can be automatically scored and combined, and the system can learn from experience. Furthermore, our evaluation shows that the Medical Cognitive Apps provide the same correct results as in local execution and run in a reasonable amount of time. CONCLUSION: The proposed solution is applicable to a variety of medical use cases and effectively supports the automated and self-adaptive configuration of cognitive pipelines based on medical interpretation algorithms.
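The register-then-score-and-combine idea above can be sketched as a tiny plugin registry: apps register for a task, and when several serve the same task, each candidate result is scored and the best one selected. All names and the confidence-based scoring rule are illustrative, not the paper's actual semantic-repository API:

```python
# Central registry mapping a task name to the apps that can handle it.
REGISTRY: dict[str, list] = {}

def register(task: str):
    """Decorator: make a function available as an app for a task."""
    def wrap(fn):
        REGISTRY.setdefault(task, []).append(fn)
        return fn
    return wrap

@register("phase-recognition")
def app_a(sensor_data):
    return {"phase": "resection", "confidence": 0.7}

@register("phase-recognition")
def app_b(sensor_data):
    return {"phase": "resection", "confidence": 0.9}

def run_task(task: str, data):
    """Run every registered app and keep the highest-scoring result."""
    candidates = [app(data) for app in REGISTRY.get(task, [])]
    return max(candidates, key=lambda r: r["confidence"])

best = run_task("phase-recognition", data=[])
print(best)  # app_b's result wins with confidence 0.9
```

A learning component, as mentioned in the abstract, would replace the fixed `max` selection with a scorer that is updated from experience.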
Subject(s)
Algorithms , Cognition/physiology , Computers , Humans
ABSTRACT
BACKGROUND: Laparoscopic liver surgery is particularly challenging owing to restricted access, risk of bleeding, and lack of haptic feedback. Navigation systems have the potential to improve information on the exact position of intrahepatic tumors, and thus facilitate oncological resection. This study aims to evaluate the feasibility of a commercially available augmented reality (AR) guidance system employing intraoperative robotic C-arm cone-beam computed tomography (CBCT) for laparoscopic liver surgery. METHODS: A human liver-like phantom with 16 target fiducials was used to evaluate the Syngo iPilot® AR system. Subsequently, the system was used for the laparoscopic resection of a hepatocellular carcinoma in segment 7 of a 50-year-old male patient. RESULTS: In the phantom experiment, the AR system showed a mean target registration error of 0.96 ± 0.52 mm, with a maximum error of 2.49 mm. The patient successfully underwent the operation and showed no postoperative complications. CONCLUSION: The use of intraoperative CBCT and AR for laparoscopic liver resection is feasible and could be considered an option for future liver surgery in complex cases.
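The accuracy figures above summarize per-fiducial target registration errors: the Euclidean distance between each true fiducial position and where the AR overlay places it. A minimal sketch with made-up coordinates (the study's fiducial data are not reproduced here):

```python
import math

def target_registration_errors(targets, mapped):
    """Per-fiducial Euclidean errors, summarized as mean, std, max."""
    errs = [math.dist(p, q) for p, q in zip(targets, mapped)]
    n = len(errs)
    mean = sum(errs) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errs) / n)
    return mean, std, max(errs)

# Illustrative fiducial positions in mm (not the study's data):
true_pts  = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]
shown_pts = [(1, 0, 0), (10, 1, 0), (0, 10, 2)]
mean, std, worst = target_registration_errors(true_pts, shown_pts)
print(round(mean, 2), round(worst, 2))
```

Reporting both the mean ± std (0.96 ± 0.52 mm) and the maximum (2.49 mm) matters clinically: the worst-case error, not the average, bounds how close a resection margin can safely be planned.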
Subject(s)
Carcinoma, Hepatocellular/surgery , Cone-Beam Computed Tomography/methods , Fiducial Markers , Hepatectomy/methods , Laparoscopy/methods , Liver Neoplasms/surgery , Phantoms, Imaging , Surgery, Computer-Assisted/instrumentation , Carcinoma, Hepatocellular/diagnostic imaging , Equipment Design , Humans , Liver Neoplasms/diagnostic imaging , Male , Middle Aged , Reproducibility of Results , Time Factors
ABSTRACT
PURPOSE: The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. METHODS: MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. RESULTS: MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. CONCLUSIONS: MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.
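The micro-service idea described above, where small system-wide features are registered against an interface and looked up by consumers, can be sketched as a tiny service registry. This is a generic illustration of the OSGi-inspired pattern, not MITK's actual C++ micro-service API:

```python
# Minimal service-registry sketch in the spirit of OSGi-style
# micro services: modules register implementations of an interface,
# consumers look them up by interface name. Names are illustrative.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, interface: str, impl) -> None:
        """A module publishes an implementation for an interface."""
        self._services.setdefault(interface, []).append(impl)

    def get(self, interface: str):
        """Return the most recently registered implementation, if any."""
        impls = self._services.get(interface)
        return impls[-1] if impls else None

registry = ServiceRegistry()
registry.register("IImageIO", "DICOMReader")   # from one module
registry.register("IImageIO", "NrrdReader")    # from another module
print(registry.get("IImageIO"))
```

The key design property is the one the abstract names: modules never reference each other directly, only the interface, so optional features (tracking, diffusion imaging, external packages) can be added or removed without touching consumers.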
Subject(s)
Algorithms , Computer Systems , Diagnostic Imaging/methods , Image Interpretation, Computer-Assisted/methods , Software , Therapy, Computer-Assisted/methods , User-Computer Interface , Humans
ABSTRACT
PURPOSE: The time-of-flight (ToF) technique is an emerging technique for rapidly acquiring distance information and is becoming increasingly popular for intra-operative surface acquisition. Using the ToF technique as an intra-operative imaging modality requires seamless integration into the clinical workflow. We thus aim to integrate ToF support into an existing framework for medical image processing. METHODS: MITK-ToF was implemented as an extension of the open-source C++ Medical Imaging Interaction Toolkit (MITK) and provides the basic functionality needed for rapid prototyping and development of image-guided therapy (IGT) applications that utilize range data for intra-operative surface acquisition. This framework was designed with a module-based architecture separating the hardware-dependent image acquisition task from the processing of the range data. RESULTS: The first version of MITK-ToF has been released as an open-source toolkit and supports several ToF cameras and basic processing algorithms. The toolkit, a sample application, and a tutorial are available from http://mitk.org. CONCLUSIONS: With the increased popularity of time-of-flight cameras for intra-operative surface acquisition, integration of range data support into medical image processing toolkits such as MITK is a necessary step. Handling acquisition of range data from different cameras and processing of the data requires the establishment and use of software design principles that emphasize flexibility, extensibility, robustness, performance, and portability. The open-source toolkit MITK-ToF satisfies these requirements for the image-guided therapy community and has already been used in several research projects.
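The module split described above, a hardware-dependent acquisition interface behind which any camera can sit, with processing code that only ever sees range data, is a classic abstract-interface design. A sketch of the idea; class and method names are illustrative, not MITK-ToF's actual API:

```python
from abc import ABC, abstractmethod

class ToFCamera(ABC):
    """Hardware-dependent side: each supported camera implements this."""

    @abstractmethod
    def acquire_distances(self) -> list:
        """Return one frame of per-pixel distances in meters."""

class FakeCamera(ToFCamera):
    """Stand-in device, useful for tests and prototyping without hardware."""

    def acquire_distances(self):
        return [[0.5, 0.6], [0.7, 0.8]]

def threshold_surface(frame, max_depth: float):
    """Hardware-independent processing: keep points nearer than max_depth."""
    return [[d for d in row if d < max_depth] for row in frame]

frame = FakeCamera().acquire_distances()
print(threshold_surface(frame, 0.7))  # far pixels dropped from the surface
```

Because the processing step depends only on the `ToFCamera` interface, a new camera model is supported by adding one subclass, without touching any processing algorithm, which is exactly the flexibility and extensibility the conclusion calls for.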
Subject(s)
Diagnostic Imaging , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Software , Algorithms , Humans , Pattern Recognition, Automated/methods , Software Design , User-Computer Interface
ABSTRACT
Thoroughly designed open-source toolkits have emerged to boost progress in medical imaging. The Insight Toolkit (ITK) provides this for the algorithmic scope of medical imaging, especially for segmentation and registration. But medical imaging algorithms have to be clinically applied to be useful, which additionally requires visualization and interaction. The Visualization Toolkit (VTK) has powerful visualization capabilities, but only low-level support for interaction. In this paper, we present the Medical Imaging Interaction Toolkit (MITK). The goal of MITK is to significantly reduce the effort required to construct specifically tailored, interactive applications for medical image analysis. MITK allows an easy combination of algorithms developed with ITK and visualizations created with VTK, and extends these two toolkits with features that are outside the scope of both. MITK adds support for complex interactions with multiple states as well as undo capabilities, a very important prerequisite for convenient user interfaces. Furthermore, MITK facilitates the realization of multiple, different views of the same data (such as a multiplanar reconstruction and a 3D rendering) and supports the visualization of 3D+t data, whereas VTK is only designed to create one kind of view of 2D or 3D data. MITK reuses virtually everything from ITK and VTK. Thus, it is not at all a competitor to ITK or VTK, but an extension, which eases the combination of both and adds the features required for interactive, convenient-to-use medical imaging software. MITK is an open-source project (www.mitk.org).
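The undo capability mentioned above is commonly realized as a command stack: each interaction records an operation together with its inverse, and undo replays the inverses in reverse order. A generic sketch of that pattern, not MITK's actual interaction API:

```python
# Command-stack sketch of undo support: every executed interaction
# pushes its inverse; undo pops and runs the most recent inverse.
class UndoStack:
    def __init__(self):
        self._done = []

    def execute(self, do_fn, undo_fn):
        """Perform an operation and remember how to revert it."""
        do_fn()
        self._done.append(undo_fn)

    def undo(self):
        """Revert the most recent operation, if any."""
        if self._done:
            self._done.pop()()

points = []          # e.g. user-placed seed points in a view
stack = UndoStack()
stack.execute(lambda: points.append((3, 4)), points.pop)
stack.execute(lambda: points.append((5, 6)), points.pop)
stack.undo()         # reverts only the last interaction
print(points)
```

Pairing each state change with its inverse at the moment it happens is what lets a multi-state interaction scheme stay undoable without the framework having to understand every operation's semantics.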