ABSTRACT
Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint: a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
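To make the fingerprint idea concrete, the hedged sketch below encodes a few illustrative fingerprint properties and maps them to candidate metrics. The property names and selection rules are invented for this example and do not reproduce the actual Metrics Reloaded decision trees.

```python
# Hypothetical sketch of fingerprint-driven metric selection; properties and rules
# are illustrative only and do not mirror the Metrics Reloaded recommendations.
from dataclasses import dataclass

@dataclass
class ProblemFingerprint:
    task: str                 # e.g. "image_classification", "semantic_segmentation"
    class_imbalance: bool     # strong prevalence differences between classes?
    small_structures: bool    # target structures only a few pixels/voxels in size?
    boundary_critical: bool   # does the application depend on accurate contours?

def suggest_metrics(fp: ProblemFingerprint) -> list:
    """Return candidate validation metrics for a given problem fingerprint."""
    if fp.task == "image_classification":
        # Accuracy is misleading under class imbalance; prefer prevalence-aware metrics.
        return ["balanced_accuracy", "AUROC"] if fp.class_imbalance else ["accuracy", "F1"]
    if fp.task == "semantic_segmentation":
        metrics = ["dice"]
        if fp.boundary_critical:
            metrics.append("normalized_surface_distance")  # boundary-aware complement
        if fp.small_structures:
            metrics.append("per_structure_recall")         # overlap metrics penalize small objects
        return metrics
    raise ValueError(f"unsupported task: {fp.task}")

print(suggest_metrics(ProblemFingerprint("semantic_segmentation", True, True, True)))
```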
Subjects
Algorithms; Image Processing, Computer-Assisted; Machine Learning; Semantics
ABSTRACT
BACKGROUND: With Surgomics, we aim for personalized prediction of the patient's surgical outcome using machine learning (ML) on multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial, but still a bottleneck, we prospectively investigate active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features. METHODS: To establish a process for development of surgomic features, ten video-based features related to bleeding, as a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset, consisting of 22 videos from two centers. RESULTS: In total, 14,004 frames were tag-annotated. A mean F1-score of 0.75 ± 0.16 was achieved for all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater agreement (1-rater-kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments, with a significant difference in the McNemar test comparing correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring fewer training frames. CONCLUSION: We presented ten surgomic features relevant for bleeding events in esophageal surgery, automatically extracted from surgical video using ML. AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source.
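For illustration, the sketch below contrasts the two frame-selection strategies compared in the study: equidistant sampling and uncertainty-driven active learning. The per-frame uncertainty scores are random placeholders; in the study they would come from the Bayesian ResNet18 models.

```python
# Illustrative comparison of EQS and AL frame selection; the uncertainty values are
# placeholders for the predictive uncertainty of a Bayesian model (e.g. MC dropout).
import numpy as np

rng = np.random.default_rng(0)
n_frames, budget = 10_000, 200
predictive_entropy = rng.random(n_frames)   # placeholder per-frame uncertainty

def equidistant_sampling(n_frames: int, budget: int) -> np.ndarray:
    """EQS baseline: pick frames at regular temporal intervals."""
    return np.linspace(0, n_frames - 1, budget, dtype=int)

def active_learning_sampling(uncertainty: np.ndarray, budget: int) -> np.ndarray:
    """AL: pick the frames the current model is most uncertain about."""
    return np.argsort(uncertainty)[-budget:]

eqs_idx = equidistant_sampling(n_frames, budget)
al_idx = active_learning_sampling(predictive_entropy, budget)
print(len(set(eqs_idx.tolist()) & set(al_idx.tolist())), "frames chosen by both strategies")
```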
Subjects
Esophagectomy; Robotics; Humans; Bayes Theorem; Esophagectomy/methods; Machine Learning; Minimally Invasive Surgical Procedures/methods; Prospective Studies
ABSTRACT
Photoacoustic imaging potentially allows for the real-time visualization of functional human tissue parameters such as oxygenation but is subject to a challenging underlying quantification problem. While in silico studies have revealed the great potential of deep learning (DL) methodology in solving this problem, the inherent lack of an efficient gold standard method for model training and validation remains a grand challenge. This work investigates whether DL can be leveraged to accurately and efficiently simulate photon propagation in biological tissue, enabling photoacoustic image synthesis. Our approach is based on estimating the initial pressure distribution of the photoacoustic waves from the underlying optical properties using a back-propagatable neural network trained on synthetic data. In proof-of-concept studies, we validated the performance of two complementary neural network architectures, namely a conventional U-Net-like model and a Fourier Neural Operator (FNO) network. Our in silico validation on multispectral human forearm images shows that DL methods can speed up image generation by a factor of 100 when compared to Monte Carlo simulations with 5×10⁸ photons. While the FNO is slightly more accurate than the U-Net, when compared to Monte Carlo simulations performed with a reduced number of photons (5×10⁶), both neural network architectures achieve equivalent accuracy. In contrast to Monte Carlo simulations, the proposed DL models can be used as inherently differentiable surrogate models in the photoacoustic image synthesis pipeline, allowing for back-propagation of the synthesis error and gradient-based optimization over the entire pipeline. Due to their efficiency, they have the potential to enable large-scale training data generation that can expedite the clinical application of photoacoustic imaging.
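The differentiability mentioned in the last sentences is the key practical point; the minimal sketch below illustrates it with a stand-in network: because the surrogate is differentiable, a synthesis error can be back-propagated to the optical-property input and minimized by gradient descent. The tiny convolutional model is purely illustrative and not the U-Net or FNO architecture evaluated here.

```python
# Minimal sketch of gradient-based optimization through a differentiable surrogate;
# the small convolutional network is a stand-in for the trained U-Net/FNO models.
import torch
import torch.nn as nn

surrogate = nn.Sequential(             # maps optical properties -> initial pressure p0
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optical_props = torch.rand(1, 2, 64, 64, requires_grad=True)  # absorption, scattering
target_p0 = torch.rand(1, 1, 64, 64)                          # reference pressure map

opt = torch.optim.Adam([optical_props], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(optical_props), target_p0)
    loss.backward()        # not possible with a non-differentiable Monte Carlo simulator
    opt.step()
print(f"final synthesis error: {loss.item():.4f}")
```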
Subjects
Deep Learning; Humans; Spectrum Analysis; Forearm; Monte Carlo Method; Neural Networks, Computer
ABSTRACT
Machine learning methods exploiting multi-parametric biomarkers, especially based on neuroimaging, have huge potential to improve early diagnosis of dementia and to predict which individuals are at risk of developing dementia. To benchmark algorithms in the field of machine learning and neuroimaging in dementia and assess their potential for use in clinical practice and clinical trials, seven grand challenges have been organized in the last decade: MIRIAD (2012), Alzheimer's Disease Big Data DREAM (2014), CADDementia (2014), Machine Learning Challenge (2014), MCI Neuroimaging (2017), TADPOLE (2017), and the Predictive Analytics Competition (2019). Based on two challenge evaluation frameworks, we analyzed how these grand challenges are complementing each other regarding research questions, datasets, validation approaches, results and impact. The seven grand challenges addressed questions related to screening, clinical status estimation, prediction and monitoring in (pre-clinical) dementia. There was little overlap in clinical questions, tasks and performance metrics. While this helps provide insight into a broad range of questions, it also limits the validation of results across challenges. The validation process itself was mostly comparable between challenges, using similar methods for ensuring objective comparison, uncertainty estimation and statistical testing. In general, winning algorithms performed rigorous data pre-processing and combined a wide range of input features. Despite high state-of-the-art performances, most of the methods evaluated by the challenges are not clinically used. To increase impact, future challenges could pay more attention to statistical analysis of which factors (i.e., features, models) relate to higher performance, to clinical questions beyond Alzheimer's disease, and to using testing data beyond the Alzheimer's Disease Neuroimaging Initiative. Grand challenges would be an ideal venue for assessing the generalizability of algorithm performance to unseen data of other cohorts. Key for increasing impact in this way are larger testing data sizes, which could be reached by sharing algorithms rather than data to exploit data that cannot be shared. Given the potential and lessons learned in the past ten years, we are excited by the prospects of grand challenges in machine learning and neuroimaging for the next ten years and beyond.
Subjects
Alzheimer Disease; Cognitive Dysfunction; Alzheimer Disease/diagnostic imaging; Cognitive Dysfunction/diagnosis; Early Diagnosis; Humans; Machine Learning; Magnetic Resonance Imaging/methods; Neuroimaging/methods
ABSTRACT
BACKGROUND: Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analyzing them with machine learning methods to leverage the potential of this data in analogy to Radiomics and Genomics. METHODS: We defined Surgomics as the entirety of surgomic features, i.e., process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team, we discussed potential data sources such as endoscopic videos, vital sign monitoring, and medical devices and instruments, together with the respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers for rating the features' clinical relevance and technical feasibility. RESULTS: In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants), the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance" for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility for automatic extraction, as rated by (computer) scientists, was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION: Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
Subjects
Machine Learning; Surgeons; Humans; Morbidity
ABSTRACT
Information technology (IT) can enhance or change many scenarios in cancer research for the better. In this paper, we introduce several examples, starting with clinical data reuse and collaboration including data sharing in research networks. Key challenges are semantic interoperability and data access (including data privacy). We deal with gathering and analyzing genomic information, where cloud computing, uncertainties and reproducibility challenge researchers. Also, new sources for additional phenotypical data are shown in patient-reported outcome and machine learning in imaging. Last, we focus on therapy assistance, introducing tools used in molecular tumor boards and techniques for computer-assisted surgery. We discuss the need for metadata to aggregate and analyze data sets reliably. We conclude with an outlook towards a learning health care system in oncology, which connects bench and bedside by employing modern IT solutions.
Subjects
Medical Oncology/methods; Neoplasms/diagnosis; Neoplasms/therapy; Biomedical Research/methods; Humans; Information Technology; Machine Learning; Reproducibility of Results
ABSTRACT
BACKGROUND: Augmented reality (AR) systems are currently being explored by a broad spectrum of industries, mainly for improving point-of-care access to data and images. In surgery in particular, and especially for timely decisions in emergency cases, fast and comprehensive access to images at the patient bedside is mandatory. Currently, imaging data are accessed at a distance from the patient both in time and space, i.e., at a specific workstation. Mobile technology and 3-dimensional (3D) visualization of radiological imaging data promise to overcome these restrictions by making bedside AR feasible. METHODS: In this project, AR was realized in a surgical setting by fusing a 3D representation of structures of interest with live camera images on a tablet computer using marker-based registration. The intent of this study was to focus on a thorough evaluation of AR. Feasibility, robustness, and accuracy were thus evaluated consecutively in a phantom model and a porcine model. Additionally, feasibility was evaluated in one male volunteer. RESULTS: In the phantom model (n = 10), AR visualization was feasible in 84% of the visualization space with high accuracy (mean reprojection error ± standard deviation (SD): 2.8 ± 2.7 mm; 95th percentile = 6.7 mm). In the porcine model (n = 5), AR visualization was feasible in 79% with high accuracy (mean reprojection error ± SD: 3.5 ± 3.0 mm; 95th percentile = 9.5 mm). Furthermore, AR was successfully used and proved feasible in a male volunteer. CONCLUSIONS: Mobile, real-time, and point-of-care AR for clinical purposes proved feasible, robust, and accurate in the phantom, animal, and single-trial human models shown in this study. Consequently, AR following a similar implementation proved robust and accurate enough to be evaluated in clinical trials assessing accuracy and robustness in clinical reality, as well as integration into the clinical workflow. If these further studies prove successful, AR might revolutionize data access at the patient bedside.
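As a hedged illustration of how the reported accuracy statistics could be computed, the snippet below derives mean, standard deviation and 95th percentile of reprojection errors from paired projected and reference landmark positions; the coordinates are synthetic placeholders.

```python
# Sketch of reprojection-error statistics from paired landmark positions (in mm);
# the coordinate arrays are random placeholders, not measured data.
import numpy as np

rng = np.random.default_rng(0)
projected = rng.random((100, 2)) * 50               # AR-projected landmark positions [mm]
reference = projected + rng.normal(0, 2, (100, 2))  # measured reference positions [mm]

errors = np.linalg.norm(projected - reference, axis=1)   # Euclidean error per landmark
print(f"mean ± SD: {errors.mean():.1f} ± {errors.std():.1f} mm; "
      f"95th percentile: {np.percentile(errors, 95):.1f} mm")
```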
Subjects
Imaging, Three-Dimensional; Point-of-Care Systems; Surgery, Computer-Assisted/methods; Animals; Feasibility Studies; Humans; Magnetic Resonance Imaging; Male; Models, Animal; Phantoms, Imaging; Pilot Projects; Prospective Studies; Swine; Tomography, X-Ray Computed
ABSTRACT
Spectral imaging has the potential to become a key technique in interventional medicine as it unveils much richer optical information compared to conventional RGB (red, green, and blue)-based imaging, thus allowing for high-resolution functional tissue analysis in real time. Its higher information density particularly shows promise for the development of powerful perfusion monitoring methods for clinical use. However, even though in vivo validation of such methods is crucial for their clinical translation, the biomedical field suffers from a lack of publicly available datasets for this purpose. Closing this gap, we generated the SPECTRAL Perfusion Arm Clamping dAtaset (SPECTRALPACA). It comprises ten spectral videos (~20 Hz, approx. 20,000 frames each) systematically recorded from the hands of ten healthy human participants in different functional states. We paired each spectral video with concisely tracked regions of interest and corresponding diffuse reflectance measurements recorded with a spectrometer. Providing the first openly accessible in-human spectral video dataset for perfusion monitoring, our work facilitates the development and validation of new functional imaging methods.
Subjects
Skin; Humans; Skin/blood supply; Skin/diagnostic imaging; Video Recording; Hand/blood supply; Arm/blood supply; Arm/diagnostic imaging
ABSTRACT
Intelligent systems in interventional healthcare depend on the reliable perception of the environment. In this context, photoacoustic tomography (PAT) has emerged as a non-invasive, functional imaging modality with great clinical potential. Current research focuses on converting the high-dimensional spectral data, which is not directly human-interpretable, into the underlying functional information, specifically the blood oxygenation. One of the largely unexplored issues stalling clinical advances is the fact that the quantification problem is ambiguous, i.e., that radically different tissue parameter configurations could lead to almost identical photoacoustic spectra. In the present work, we tackle this problem with conditional Invertible Neural Networks (cINNs). Going beyond traditional point estimates, our network is used to compute an approximation of the conditional posterior density of tissue parameters given the photoacoustic spectrum. An automatic mode detection algorithm then extracts the plausible solution from the sample-based posterior. According to a comprehensive validation study based on both synthetic and real images, our approach is well-suited for exploring ambiguity in quantitative PAT.
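A minimal sketch of the sample-based posterior analysis, assuming posterior samples are already available: the mode of a kernel density estimate serves as the plausible solution. In the actual pipeline the samples would be drawn from the trained cINN conditioned on a measured spectrum; here a bimodal toy distribution mimics an ambiguous case.

```python
# Toy mode detection on a sample-based posterior; in practice the samples would be
# drawn from the conditional INN given a photoacoustic spectrum.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Bimodal toy posterior over blood oxygenation sO2, mimicking an ambiguous spectrum.
samples = np.concatenate([rng.normal(0.55, 0.03, 4000), rng.normal(0.90, 0.02, 1000)])

kde = gaussian_kde(samples)
grid = np.linspace(0.0, 1.0, 1000)
mode = grid[np.argmax(kde(grid))]          # automatic mode detection on the density estimate
print(f"plausible sO2 estimate (posterior mode): {mode:.2f}")
```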
Subjects
Algorithms; Image Processing, Computer-Assisted; Neural Networks, Computer; Oxygen; Photoacoustic Techniques; Photoacoustic Techniques/methods; Humans; Image Processing, Computer-Assisted/methods; Oxygen/blood; Oxygen/metabolism; Phantoms, Imaging
ABSTRACT
Ultrasound (US) has gained popularity as a guidance modality for percutaneous needle insertions because it is widely available and non-ionizing. However, coordinating scanning and needle insertion still requires significant experience. Current assistance solutions utilize optical or electromagnetic tracking (EMT) technology directly integrated into the US device or probe. This results in specialized devices or introduces additional hardware, limiting the ergonomics of both the scanning and insertion process. We developed the first US navigation solution designed to be used as a non-permanent accessory for existing US devices while maintaining the ergonomics of the scanning process. A miniaturized EMT source is reversibly attached to the US probe, temporarily creating a combined modality that simultaneously provides real-time anatomical imaging and instrument tracking. Studies performed with 11 clinical operators show that the proposed navigation solution can guide needle insertions with a targeting accuracy of about 5 mm, which is comparable to existing approaches and unaffected by repeated attachment and detachment of the miniaturized tracking solution. The assistance proved particularly helpful for non-expert users and for needle insertions performed outside of the US plane. The small size and reversible attachability of the proposed navigation solution promise streamlined integration into the clinical workflow and widespread access to US-navigated punctures.
Subjects
Electromagnetic Phenomena; Needles; Humans; Ultrasonography, Interventional/methods; Ultrasonography, Interventional/instrumentation; Miniaturization; Equipment Design; Phantoms, Imaging
ABSTRACT
PURPOSE: Surgical scene segmentation is crucial for providing context-aware surgical assistance. Recent studies highlight the significant advantages of hyperspectral imaging (HSI) over traditional RGB data in enhancing segmentation performance. Nevertheless, current HSI datasets remain limited and do not capture the full range of tissue variations encountered clinically. METHODS: Based on a total of 615 hyperspectral images from 16 pigs, featuring porcine organs in different perfusion states, we carry out an exploration of distribution shifts in spectral imaging caused by perfusion alterations. We further introduce a novel strategy to mitigate such distribution shifts, utilizing synthetic data for test-time augmentation. RESULTS: The effect of perfusion changes on state-of-the-art (SOA) segmentation networks depended on the organ and the specific perfusion alteration induced. In the case of the kidney, we observed a performance decline of up to 93% when applying an SOA network under ischemic conditions. Our method improved on the SOA by a factor of up to 4.6. CONCLUSION: Given its potential wide-ranging relevance to diverse pathologies, our approach may serve as a pivotal tool to enhance neural network generalization within the realm of spectral imaging.
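The sketch below shows the generic test-time augmentation pattern assumed here: predictions of a segmentation network are averaged over several augmented copies of a test image. The stand-in model and the augment function are placeholders; how the synthetic, perfusion-shifted variants are generated is specific to the proposed method and not reproduced.

```python
# Generic test-time augmentation sketch; the 1x1 convolution stands in for a spectral
# segmentation network and `augment` for the synthetic perfusion-shift generation.
import torch
import torch.nn as nn

model = nn.Conv2d(100, 5, kernel_size=1)      # stand-in for a segmentation network
image = torch.rand(1, 100, 32, 32)            # 100-band hyperspectral patch

def augment(x: torch.Tensor) -> torch.Tensor:
    """Placeholder: perturb band intensities to mimic perfusion-induced spectral shifts."""
    return x * (1.0 + 0.05 * torch.randn_like(x))

def tta_predict(model: nn.Module, image: torch.Tensor, n_aug: int = 8) -> torch.Tensor:
    """Average class probabilities over n_aug augmented copies and return the label map."""
    model.eval()
    with torch.no_grad():
        probs = [torch.softmax(model(augment(image)), dim=1) for _ in range(n_aug)]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

print(tta_predict(model, image).shape)        # torch.Size([1, 32, 32])
```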
Subjects
Hyperspectral Imaging; Animals; Swine; Hyperspectral Imaging/methods; Kidney/diagnostic imaging; Image Processing, Computer-Assisted/methods
ABSTRACT
Spreading depolarizations (SDs) are a marker of brain injury and have a causative effect on ischemic lesion progression. The hemodynamic responses elicited by SDs are contingent upon the metabolic integrity of the affected tissue, with vasoconstrictive reactions leading to pronounced hypoxia often indicating poor outcomes. The stratification of hemodynamic responses within different cortical layers remains poorly characterized. This pilot study sought to elucidate the depth-specific hemodynamic changes in response to SDs within the gray matter of the gyrencephalic swine brain. Employing a potassium chloride-induced SD model, we utilized multispectral photoacoustic imaging (PAI) to estimate regional cerebral oxygen saturation (rcSO2%) changes consequent to the induced SDs. Regions of interest were demarcated at three cortical depths covering up to 4 mm. Electrocorticography (ECoG) strips were placed to validate the presence of SDs. Through PAI, we detected 12 distinct rcSO2% responses, which corresponded with SDs detected in ECoG. Notably, a higher frequency of hypoxic responses was observed in the deeper cortical layers compared to superficial layers, where hyperoxic and mixed responses predominated (p < 0.001). These data provide novel insights into the differential oxygenation patterns across cortical layers in response to SDs, underlining the complexity of cerebral hemodynamics post-injury.
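One common way to derive oxygen saturation estimates from multispectral photoacoustic data is linear spectral unmixing; the hedged sketch below illustrates that principle with arbitrary placeholder absorption values rather than measured hemoglobin coefficients, and is not necessarily the estimation scheme used in this study.

```python
# Illustrative linear spectral unmixing for an sO2 estimate; the endmember spectra are
# arbitrary placeholders, not tabulated hemoglobin absorption coefficients.
import numpy as np
from scipy.optimize import nnls

hbo2 = np.array([0.29, 0.32, 0.35, 0.44, 0.56])   # placeholder HbO2 absorption per wavelength
hb = np.array([0.79, 0.71, 0.81, 0.44, 0.39])     # placeholder Hb absorption per wavelength
A = np.column_stack([hbo2, hb])                   # unmixing matrix (wavelengths x endmembers)

measured = 0.7 * hbo2 + 0.3 * hb                  # simulated per-pixel photoacoustic spectrum
(c_hbo2, c_hb), _ = nnls(A, measured)             # non-negative least-squares unmixing
so2 = c_hbo2 / (c_hbo2 + c_hb)
print(f"estimated sO2: {so2:.2f}")                # recovers 0.70 for the simulated spectrum
```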
ABSTRACT
Augmented reality for laparoscopic liver resection is a visualisation mode that allows a surgeon to localise tumours and vessels embedded within the liver by projecting them on top of a laparoscopic image. Preoperative 3D models extracted from Computed Tomography (CT) or Magnetic Resonance (MR) imaging data are registered to the intraoperative laparoscopic images during this process. Regarding 3D-2D fusion, most algorithms use anatomical landmarks to guide registration, such as the liver's inferior ridge, the falciform ligament, and the occluding contours. These are usually marked by hand in both the laparoscopic image and the 3D model, which is time-consuming and prone to error. Therefore, there is a need to automate this process so that augmented reality can be used effectively in the operating room. We present the Preoperative-to-Intraoperative Laparoscopic Fusion challenge (P2ILF), held during the Medical Image Computing and Computer Assisted Intervention (MICCAI 2022) conference, which investigates the possibilities of detecting these landmarks automatically and using them in registration. The challenge was divided into two tasks: (1) a 2D and 3D landmark segmentation task and (2) a 3D-2D registration task. The teams were provided with training data consisting of 167 laparoscopic images and 9 preoperative 3D models from 9 patients, with the corresponding 2D and 3D landmark annotations. A total of 6 teams from 4 countries participated in the challenge, whose results were assessed for each task independently. All the teams proposed deep learning-based methods for the 2D and 3D landmark segmentation tasks and differentiable rendering-based methods for the registration task. The proposed methods were evaluated on 16 test images and 2 preoperative 3D models from 2 patients. In Task 1, the teams were able to segment most of the 2D landmarks, while the 3D landmarks proved more challenging to segment. In Task 2, only one team obtained acceptable qualitative and quantitative registration results. Based on the experimental outcomes, we propose three key hypotheses that determine current limitations and future directions for research in this domain.
ABSTRACT
[This corrects the article DOI: 10.2196/44204.].
ABSTRACT
Spreading depolarizations (SDs) have been linked to infarct volume expansion following ischemic stroke. Therapeutic hypothermia provides a neuroprotective effect after ischemic stroke. This study aimed to evaluate the effect of hypothermia on the propagation of SDs and infarct volume in an ischemic swine model. Through left orbital exenteration, middle cerebral arteries were surgically occluded (MCAo) in 16 swine. Extensive craniotomy and durotomy were performed. Six hypothermic and five normothermic animals were included in the analysis. An intracranial temperature probe was placed in the right frontal subdural space. One hour after ischemic onset, mild hypothermia was induced and eighteen hours of electrocorticographic (ECoG) and intrinsic optical signal (IOS) recordings were acquired. Postmortem, 4 mm-thick slices were stained with 2,3,5-triphenyltetrazolium chloride to estimate the infarct volume. Compared to normothermia (36.4 ± 0.4°C), hypothermia (32.3 ± 0.2°C) significantly reduced the frequency and expansion of SDs (ECoG: 3.5 ± 2.1, 73.2 ± 5.2% vs. 1.0 ± 0.7, 41.9 ± 21.8%; IOS: 3.9 ± 0.4, 87.6 ± 12.0% vs. 1.4 ± 0.7, 67.7 ± 8.3%, respectively). Furthermore, infarct volume among hypothermic animals was significantly reduced (23.2 ± 1.8% vs. 32.4 ± 2.5%). Therapeutic hypothermia reduces infarct volume as well as the frequency and expansion of SDs following cerebral ischemia.
Subjects
Brain Ischemia; Hypothermia, Induced; Hypothermia; Ischemic Attack, Transient; Ischemic Stroke; Animals; Swine; Cerebral Infarction
ABSTRACT
Even though radiomics can hold great potential for supporting clinical decision-making, its current use is mostly limited to academic research, without applications in routine clinical practice. The workflow of radiomics is complex due to several methodological steps and nuances, which often leads to inadequate reporting and evaluation, and poor reproducibility. Available reporting guidelines and checklists for artificial intelligence and predictive modeling include relevant good practices, but they are not tailored to radiomic research. There is a clear need for a complete radiomics checklist for study planning, manuscript writing, and evaluation during the review process to facilitate the repeatability and reproducibility of studies. We here present a documentation standard for radiomic research that can guide authors and reviewers. Our motivation is to improve the quality and reliability and, in turn, the reproducibility of radiomic research. We name the checklist CLEAR (CheckList for EvaluAtion of Radiomics research), to convey the idea of being more transparent. With its 58 items, the CLEAR checklist should be considered a standardization tool providing the minimum requirements for presenting clinical radiomics research. In addition to a dynamic online version of the checklist, a public repository has also been set up to allow the radiomics community to comment on the checklist items and adapt the checklist for future versions. Prepared and revised by an international group of experts using a modified Delphi method, we hope the CLEAR checklist will serve well as a single and complete scientific documentation tool for authors and reviewers to improve the radiomics literature.
ABSTRACT
Challenges have become the state-of-the-art approach to benchmark image analysis algorithms in a comparative manner. While the validation on identical data sets was a great step forward, results analysis is often restricted to pure ranking tables, leaving relevant questions unanswered. Specifically, little effort has been put into systematically investigating what characterizes images in which state-of-the-art algorithms fail. To address this gap in the literature, we (1) present a statistical framework for learning from challenges and (2) instantiate it for the specific task of instrument instance segmentation in laparoscopic videos. Our framework relies on the semantic meta data annotation of images, which serves as the foundation for a general linear mixed model (GLMM) analysis. Based on 51,542 meta data annotations performed on 2,728 images, we applied our approach to the results of the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge 2019 and revealed underexposure, motion and occlusion of instruments as well as the presence of smoke or other objects in the background as major sources of algorithm failure. Our subsequent method development, tailored to the specific remaining issues, yielded a deep learning model with state-of-the-art overall performance and specific strengths in the processing of images in which previous methods tended to fail. Due to the objectivity and generic applicability of our approach, it could become a valuable tool for validation in the field of medical image analysis and beyond.
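As a simplified stand-in for the described GLMM analysis, the sketch below fits a linear mixed model relating a per-image performance score to semantic meta data tags, with the algorithm as the random-effect grouping. The column names and synthetic data are illustrative only.

```python
# Simplified mixed-model sketch: which meta data tags are associated with lower scores?
# Synthetic data; in the study, scores and tags come from the challenge submissions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "dice": rng.beta(8, 2, n),                        # per-image segmentation score
    "underexposed": rng.integers(0, 2, n),            # binary meta data annotations
    "smoke": rng.integers(0, 2, n),
    "occlusion": rng.integers(0, 2, n),
    "algorithm": rng.integers(0, 10, n).astype(str),  # random-effect grouping variable
})

model = smf.mixedlm("dice ~ underexposed + smoke + occlusion", df, groups=df["algorithm"])
result = model.fit()
print(result.summary())   # fixed-effect coefficients hint at sources of failure
```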
Subjects
Algorithms; Laparoscopy; Humans; Image Processing, Computer-Assisted/methods
ABSTRACT
Laparoscopic surgery has evolved as a key technique for cancer diagnosis and therapy. While characterization of the tissue perfusion is crucial in various procedures, such as partial nephrectomy, doing so by means of visual inspection remains highly challenging. We developed a laparoscopic real-time multispectral imaging system featuring a compact and lightweight multispectral camera and the possibility to complement the conventional surgical view of the patient with functional information at a video rate of 25 Hz. To enable contrast agent-free ischemia monitoring during laparoscopic partial nephrectomy, we phrase the problem of ischemia detection as an out-of-distribution detection problem that does not rely on data from any other patient and uses an ensemble of invertible neural networks at its core. An in-human trial demonstrates the feasibility of our approach and highlights the potential of spectral imaging combined with advanced deep learning-based analysis tools for fast, efficient, reliable, and safe functional laparoscopic imaging.
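A conceptual sketch of the out-of-distribution formulation follows, with Gaussian mixture models standing in for the ensemble of invertible neural networks: density models are fitted only on the patient's own baseline spectra, and later spectra are flagged when their ensemble negative log-likelihood exceeds a threshold calibrated on that baseline.

```python
# Conceptual out-of-distribution detection sketch; Gaussian mixtures replace the
# invertible neural networks of the actual system, and all spectra are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(2000, 16))   # in-distribution (perfused) spectra
ischemic = rng.normal(1.5, 1.0, size=(200, 16))    # shifted spectra after clamping

ensemble = [GaussianMixture(n_components=3, random_state=s).fit(baseline) for s in range(5)]

def nll(x: np.ndarray) -> np.ndarray:
    """Mean negative log-likelihood over the ensemble, per spectrum."""
    return -np.mean([m.score_samples(x) for m in ensemble], axis=0)

threshold = np.percentile(nll(baseline), 99)       # calibrated on the baseline only
print("fraction flagged as out-of-distribution:", np.mean(nll(ischemic) > threshold))
```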
Subjects
Contrast Media; Laparoscopy; Humans; Nephrectomy/methods; Neural Networks, Computer; Laparoscopy/methods; Ischemia
ABSTRACT
Hyperspectral Imaging (HSI) is a relatively new medical imaging modality that exploits an area of diagnostic potential formerly untouched. Although exploratory translational and clinical studies exist, no surgical HSI datasets are openly accessible to the general scientific community. To address this bottleneck, this publication releases HeiPorSPECTRAL ( https://www.heiporspectral.org ; https://doi.org/10.5281/zenodo.7737674 ), the first annotated high-quality standardized surgical HSI dataset. It comprises 5,758 spectral images acquired with the TIVITA® Tissue and annotated with 20 physiological porcine organs from 8 pigs per organ, distributed over a total of 11 pigs. Each HSI image features a resolution of 480 × 640 pixels acquired over the 500-1000 nm wavelength range. The acquisition protocol has been designed such that the variability of organ spectra as a function of several parameters, including the camera angle and the individual, can be assessed. A comprehensive technical validation confirmed both the quality of the raw data and the annotations. We envision potential reuse within this dataset, but also its reuse as baseline data for future research questions outside this dataset. Measurement(s): Spectral Reflectance. Technology Type(s): Hyperspectral Imaging. Sample Characteristic - Organism: Sus scrofa.
Subjects
Hyperspectral Imaging; Swine; Swine/anatomy & histology; Animals
ABSTRACT
BACKGROUND: Small bowel malperfusion (SBM) can cause high morbidity and severe surgical consequences. However, there is no standardized objective measuring tool for the quantification of SBM. Indocyanine green (ICG) imaging can be used for visualization, but lacks standardization and objectivity. Hyperspectral imaging (HSI), as a newly emerging technology in medicine, might present advantages over conventional ICG fluorescence or in combination with it. METHODS: HSI baseline data from physiological small bowel, avascular small bowel and small bowel after intravenous application of ICG were recorded in a total of 54 in vivo pig models. Visualizations of avascular small bowel after mesotomy were compared between HSI only (1), ICG-augmented HSI (IA-HSI) (2), clinical evaluation through the eyes of the surgeon (3) and conventional ICG imaging (4). The primary research focus was the localization of resection borders as suggested by each of the four methods. Distances between these borders were measured, and histological samples were obtained from the regions in between in order to quantify necrotic changes 6 h after mesotomy for every region. RESULTS: StO2 images (1) were capable of visualizing areas of physiological perfusion and areas of clearly impaired perfusion. However, the exact borders where physiological perfusion started to decrease could not be clearly identified. Instead, IA-HSI (2) suggested a sharp resection line where StO2 values started to decrease. Clinical evaluation (3) suggested a resection line 23 mm (±7 mm) and conventional ICG imaging (4) even suggested a resection line 53 mm (±13 mm) closer towards the malperfused region. Histopathological evaluation of the region that was sufficiently perfused only according to conventional ICG (R3) already revealed a significant increase in pre-necrotic changes in 27% (±9%) of the surface area. Therefore, conventional ICG seems less sensitive than IA-HSI with regard to the detection of insufficient tissue perfusion. CONCLUSIONS: In this experimental animal study, IA-HSI (2) was superior for the visualization of segmental SBM compared to HSI only (1), clinical evaluation (3) or conventional ICG imaging (4) regarding histopathological safety. ICG application caused visual artifacts in the StO2 values of the HSI camera, as the values significantly increased. This is caused by the optical properties of systemic ICG and does not reflect a true increase in oxygenation levels. However, this empirical finding can be used to visualize segmental SBM utilizing ICG as a contrast agent in an IA-HSI approach. Clinical applicability and relevance will have to be explored in clinical trials. LEVEL OF EVIDENCE: Not applicable. Translational animal science. Original article.
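Purely as a toy illustration of border localization from oxygenation data: given a one-dimensional StO2 profile sampled along the bowel axis, a resection border can be suggested at the first position where StO2 drops below a chosen cut-off. The profile shape and the threshold below are hypothetical and not taken from this study.

```python
# Toy resection-border localization from a synthetic 1-D StO2 profile; threshold and
# profile are hypothetical and serve only to illustrate the idea.
import numpy as np

positions_mm = np.arange(0, 200, 2.0)                       # sampling points along the bowel
sto2 = 75 - 50 / (1 + np.exp(-(positions_mm - 120) / 10))   # synthetic perfusion drop-off [%]

threshold = 55.0                                            # hypothetical StO2 cut-off [%]
border_idx = int(np.argmax(sto2 < threshold))               # first sub-threshold position
print(f"suggested resection border at {positions_mm[border_idx]:.0f} mm")
```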