Results 1 - 20 of 44
1.
IEEE Trans Med Imaging ; PP, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38739509

ABSTRACT

X-ray computed tomography (CT) is a crucial tool for non-invasive medical diagnosis that uses differences in materials' attenuation coefficients to generate contrast and provide 3D information. Grating-based dark-field-contrast X-ray imaging is an innovative technique that utilizes small-angle scattering to generate co-registered images carrying additional microstructural information. While human chest dark-field radiography is already feasible, its diagnostic value is expected to increase in a tomographic setup. However, the susceptibility of Talbot-Lau interferometers to mechanical vibrations, coupled with the need to minimize data acquisition times, has so far hindered clinical application and the combination of X-ray dark-field imaging with large field-of-view (FOV) tomography. In this work, we propose a processing pipeline to address this issue in a human-sized clinical dark-field CT prototype. We present the corrective measures applied in the processing and reconstruction algorithms to mitigate the effects of vibrations and deformations of the interferometer gratings. This is achieved by identifying spatially and temporally variable vibrations in air reference scans. By translating the found correlations to the sample scan, we can identify and mitigate the relevant fluctuation modes for scans of arbitrary sample sizes. This approach eliminates the requirement for sample-free detector area while still cleanly separating fluctuation and sample information. As a result, samples of arbitrary dimensions can be reconstructed without vibration artifacts. To demonstrate the viability of the technique for human-scale objects, we present reconstructions of an anthropomorphic thorax phantom.
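
The abstract leaves the mode-identification step abstract. A common way to find fluctuation modes in reference (air) scans is a principal-component decomposition, in the spirit of eigen-flat-field methods; the sketch below assumes that reading, uses our own function names, and omits the paper's additional sample/fluctuation separation step.

```python
import numpy as np

def fluctuation_modes(air_frames, n_modes=8):
    """Estimate dominant fluctuation modes from flattened air scans.

    air_frames: (n_frames, n_pixels) array. Returns the mean frame
    and the first n_modes orthonormal eigenframes (rows).
    """
    mean = air_frames.mean(axis=0)
    # SVD of the centered stack; rows of vt are orthonormal eigenframes.
    _, _, vt = np.linalg.svd(air_frames - mean, full_matrices=False)
    return mean, vt[:n_modes]

def remove_fluctuations(frame, mean, modes):
    """Fit and subtract the reference modes from one sample frame."""
    coeffs = modes @ (frame - mean)         # least-squares coefficients
    return frame - mean - modes.T @ coeffs  # residual signal
```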

2.
Eur Radiol Exp ; 8(1): 54, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38698099

ABSTRACT

BACKGROUND: We aimed to improve the image quality (IQ) of sparse-view computed tomography (CT) images using a U-Net for lung metastasis detection and to determine the best tradeoff between number of views, IQ, and diagnostic confidence. METHODS: CT images from 41 subjects aged 62.8 ± 10.6 years (mean ± standard deviation; 23 men; 34 with lung metastasis, 7 healthy) were retrospectively selected (2016-2018) and forward projected onto 2,048-view sinograms. Six corresponding sparse-view CT data subsets at varying levels of undersampling were reconstructed from sinograms using filtered backprojection with 16, 32, 64, 128, 256, and 512 views. A dual-frame U-Net was trained and evaluated for each subsampling level on 8,658 images from 22 diseased subjects. A representative image per scan was selected from 19 subjects (12 diseased, 7 healthy) for a single-blinded multireader study. These slices, for all levels of subsampling, with and without U-Net postprocessing, were presented to three readers. IQ and diagnostic confidence were ranked using predefined scales. Subjective nodule segmentation was evaluated using sensitivity and the Dice similarity coefficient (DSC); a clustered Wilcoxon signed-rank test was used. RESULTS: The 64-projection sparse-view images yielded a sensitivity of 0.89 and a DSC of 0.81, while their U-Net-postprocessed counterparts showed improved metrics (0.94 sensitivity, 0.85 DSC; p = 0.400). Fewer views led to insufficient IQ for diagnosis. At higher view counts, no substantial discrepancies were noted between sparse-view and postprocessed images. CONCLUSIONS: Projection views can be reduced from 2,048 to 64 while maintaining IQ and radiologists' confidence at a satisfactory level. RELEVANCE STATEMENT: Our reader study demonstrates the benefit of U-Net postprocessing for regular CT screenings of patients with lung metastasis, increasing IQ and diagnostic confidence while reducing the dose. KEY POINTS: • Sparse-projection-view streak artifacts reduce the quality and usability of sparse-view CT images. • U-Net-based postprocessing removes sparse-view artifacts while maintaining diagnostically accurate IQ. • Postprocessed sparse-view CTs drastically increase radiologists' confidence in diagnosing lung metastasis.


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Male , Middle Aged , Tomography, X-Ray Computed/methods , Female , Retrospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Aged
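
The simulation side of this study, forward projection to a 2,048-view sinogram followed by filtered backprojection from view subsets, can be sketched with scikit-image; the Shepp-Logan phantom stands in for the clinical slices, and the U-Net itself is omitted.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

def sparse_view_fbp(image, n_views):
    """Forward project onto n_views angles, then reconstruct with FBP."""
    theta = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(image, theta=theta)
    return iradon(sinogram, theta=theta, filter_name="ramp")

phantom = shepp_logan_phantom()  # stand-in for a clinical CT slice
# Subsampling levels evaluated in the study; a U-Net (not shown) is then
# trained to map each sparse-view reconstruction to its full-view target.
recons = {v: sparse_view_fbp(phantom, v) for v in (16, 32, 64, 128, 256, 512)}
```
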
3.
Radiol Artif Intell ; : e230275, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717293

ABSTRACT

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To explore the potential benefits of deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection. Materials and Methods In this retrospective study, a U-Net was trained for artifact reduction on simulated sparseview cranial CT scans from 3000 patients obtained from a public dataset and reconstructed with varying sparse-view levels. Additionally, the EfficientNetB2 was trained on full-view CT data from 17,545 patients for automated hemorrhage detection. Detection performance was evaluated using the area under the receiver operator characteristic curve (AUC), with differences assessed using the DeLong test, along with confusion matrices. A total variation (TV) postprocessing approach, commonly applied to sparse-view, served as the basis for comparison. A Bonferronicorrected significance level of 0.001/6 = 0.00017 was used to accommodate for multiple hypotheses testing. Results Images with U-Net postprocessing were better than unprocessed and TV-processed images with respect to image quality and automated hemorrhage detection. With U-Net postprocessing, the number of views could be reduced from 4096 (AUC: 0.97; 95% CI: 0.97-0.98) to 512 (0.97; 0.97-0.98; P < .00017) and to 256 views (0.97; 0.96-0.97; P < .00017) with minimal decrease in hemorrhage detection performance. This was accompanied by mean structural similarity index measure increases of 0.0210 (95% CI: 0.0210-0.0211) and 0.0560 (95% CI: 0.0559-0.0560) relative to unprocessed images. Conclusion U-Net based artifact reduction substantially enhances automated hemorrhage detection in sparse-view cranial CTs. ©RSNA, 2024.

4.
Rofo ; 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38663428

ABSTRACT

The aim of this study was to explore the potential of weak supervision in a deep learning-based label prediction model. The goal was to use this model to extract labels from German free-text thoracic radiology reports on chest X-ray images and to train chest X-ray classification models. The proposed label extraction model for German thoracic radiology reports uses a German BERT encoder as a backbone and classifies a report based on the CheXpert labels. To investigate the efficient use of manually annotated data, the model was trained using manual annotations, weak rule-based labels, and both. Rule-based labels were extracted from 66071 retrospectively collected radiology reports from 2017-2021 (DS 0), and 1091 reports from 2020-2021 (DS 1) were manually labeled according to the CheXpert classes. Label extraction performance was evaluated with respect to mention extraction, negation detection, and uncertainty detection by measuring F1 scores. The influence of the label extraction method on chest X-ray classification was evaluated on a pneumothorax data set (DS 2) containing 6434 chest radiographs with associated reports and expert diagnoses of pneumothorax. For this, DenseNet-121 models trained on manual annotations, rule-based and deep learning-based label predictions, and publicly available data were compared. The proposed deep learning-based labeler (DL) performed considerably better on average than the rule-based labeler (RB) for all three tasks on DS 1, with F1 scores of 0.938 vs. 0.844 for mention extraction, 0.891 vs. 0.821 for negation detection, and 0.624 vs. 0.518 for uncertainty detection. Pre-training on DS 0 and fine-tuning on DS 1 performed better than training only on either DS 0 or DS 1. Chest X-ray pneumothorax classification results (DS 2) were highest when training with DL labels, with an area under the receiver operating characteristic curve (AUC) of 0.939, compared to RB labels with an AUC of 0.858. Training with manual labels performed slightly worse than training with DL labels, with an AUC of 0.934. In contrast, training with a public data set resulted in an AUC of 0.720. Our results show that leveraging a rule-based report labeler for weak supervision leads to improved labeling performance. The pneumothorax classification results demonstrate that our proposed deep learning-based labeler can serve as a substitute for manual labeling, requiring only 1000 manually annotated reports for training. · The proposed deep learning-based label extraction model for German thoracic radiology reports performs better than the rule-based model. · Training with limited supervision outperformed training with a small manually labeled data set. · Using predicted labels for pneumothorax classification from chest radiographs performed equally to using manual annotations. Wollek A, Haitzer P, Sedlmeyr T et al. Language model-based labeling of German thoracic radiology reports. Fortschr Röntgenstr 2024; DOI 10.1055/a-2287-5054.
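
A minimal sketch of the described setup, a German BERT encoder with a classification head: the checkpoint name is an assumption, and CheXpert's separate mention/negation/uncertainty outputs are collapsed into a plain multi-label head for brevity.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "bert-base-german-cased"  # assumed checkpoint; any German BERT works
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL,
    num_labels=14,                              # CheXpert observation classes
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)

report = "Kein Nachweis eines Pneumothorax. Herz normal groß."
inputs = tokenizer(report, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)  # per-label probabilities
```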

5.
Biomed Opt Express ; 15(2): 1219-1232, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404325

ABSTRACT

Real-time 3D fluorescence microscopy is crucial for the spatiotemporal analysis of live organisms, such as neural activity monitoring. The eXtended field-of-view light field microscope (XLFM), also known as the Fourier light field microscope, is a straightforward, single-snapshot solution to achieve this. The XLFM acquires spatial-angular information in a single camera exposure. In a subsequent step, a 3D volume can be algorithmically reconstructed, making it exceptionally well-suited for real-time 3D acquisition and potential analysis. Unfortunately, traditional reconstruction methods (like deconvolution) require lengthy processing times (0.02 Hz), hampering the speed advantages of the XLFM. Neural network architectures can overcome the speed constraints but do not automatically provide a way to certify the realism of their reconstructions, which is essential in the biomedical realm. To address these shortcomings, this work proposes a novel architecture to perform fast 3D reconstructions of live immobilized zebrafish neural activity based on a conditional normalizing flow. It reconstructs volumes at 8 Hz spanning 512 × 512 × 96 voxels, and it can be trained in under two hours due to the small dataset requirements (50 image-volume pairs). Furthermore, normalizing flows provide a way to compute the exact likelihood of a sample. This allows us to certify whether a predicted output is in- or out-of-distribution (OOD) and to retrain the system when a novel sample is detected. We evaluate the proposed method using a cross-validation approach involving multiple in-distribution samples (genetically identical zebrafish) and various out-of-distribution ones.
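
The OOD rule enabled by exact likelihoods is independent of the flow's architecture: score each sample with log_prob and flag anything below a low quantile of the training likelihoods. The toy affine flow below merely stands in for the paper's conditional normalizing flow.

```python
import torch
from torch import distributions as D

# Toy stand-in flow: any model exposing log_prob() supports the same rule.
base = D.Independent(D.Normal(torch.zeros(2), torch.ones(2)), 1)
flow = D.TransformedDistribution(base, [D.transforms.AffineTransform(1.0, 2.0)])

def is_ood(x, train_log_liks, quantile=0.01):
    """Flag x as out-of-distribution when its exact log-likelihood falls
    below a low quantile of the likelihoods seen during training."""
    threshold = torch.quantile(train_log_liks, quantile)
    return flow.log_prob(x) < threshold

train_log_liks = flow.log_prob(flow.sample((1000,)))  # in-distribution scores
print(is_ood(torch.tensor([[50.0, -50.0]]), train_log_liks))  # -> tensor([True])
```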

6.
Rofo ; 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38295825

ABSTRACT

PURPOSE: The aim of this study was to develop an algorithm to automatically extract annotations from German thoracic radiology reports to train deep learning-based chest X-ray classification models. MATERIALS AND METHODS: An automatic label extraction model for German thoracic radiology reports was designed based on the CheXpert architecture. The algorithm can extract labels for twelve common chest pathologies, the presence of support devices, and "no finding". For iterative improvements and to generate a ground truth, a web-based multi-reader annotation interface was created. With the proposed annotation interface, a radiologist annotated 1086 retrospectively collected radiology reports from 2020-2021 (data set 1). The effect of automatically extracted labels on chest radiograph classification performance was evaluated on an additional in-house pneumothorax data set (data set 2), containing 6434 chest radiographs with corresponding reports, by comparing DenseNet-121 models trained on extracted labels from the associated reports, image-based pneumothorax labels, and publicly available data, respectively. RESULTS: Comparing automated to manual labeling on data set 1, class-wise F1 scores for mention extraction ranged from 0.8 to 0.995, negation detection F1 scores from 0.624 to 0.981, and uncertainty detection F1 scores from 0.353 to 0.725. Extracted pneumothorax labels on data set 2 had a sensitivity of 0.997 [95% CI: 0.994, 0.999] and a specificity of 0.991 [95% CI: 0.988, 0.994]. The model trained on publicly available data achieved an area under the receiver operating characteristic curve (AUC) for pneumothorax classification of 0.728 [95% CI: 0.694, 0.760], while the models trained on automatically extracted labels and on manual annotations achieved values of 0.858 [95% CI: 0.832, 0.882] and 0.934 [95% CI: 0.918, 0.949], respectively. CONCLUSION: Automatic label extraction from German thoracic radiology reports is a promising substitute for manual labeling. By reducing the time required for data annotation, larger training data sets can be created, resulting in improved overall modeling performance. Our results demonstrate that a pneumothorax classifier trained on automatically extracted labels strongly outperforms a model trained on publicly available data, without additional annotation time, and performs competitively with manually labeled data. KEY POINTS: · An algorithm for automatic German thoracic radiology report annotation was developed. · Automatic label extraction is a promising substitute for manual labeling. · The classifier trained on extracted labels outperformed the model trained on publicly available data.
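
The rule-based (CheXpert-style) side of such labelers reduces to mention, negation, and uncertainty matching. A toy sketch for a single finding; the German patterns are illustrative, not the study's actual rules.

```python
import re

FINDING = re.compile(r"pneumothorax", re.IGNORECASE)
NEGATION = re.compile(r"\b(kein|keine|ohne|ausschluss)\b", re.IGNORECASE)
UNCERTAIN = re.compile(r"\b(fraglich|möglich|verdacht)\b", re.IGNORECASE)

def label_report(text):
    """Return None (no mention), 'negative', 'uncertain', or 'positive'."""
    if not FINDING.search(text):
        return None
    if NEGATION.search(text):
        return "negative"
    if UNCERTAIN.search(text):
        return "uncertain"
    return "positive"

print(label_report("Kein Anhalt für Pneumothorax."))  # -> negative
```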

7.
Med Phys ; 51(4): 2721-2732, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37831587

ABSTRACT

BACKGROUND: Deep learning models are being applied to more and more use cases, with astonishing success stories, but how do they perform in the real world? Models are typically tested on specific, cleaned data sets, but when deployed in the real world, a model will encounter unexpected, out-of-distribution (OOD) data. PURPOSE: To investigate the impact of OOD radiographs on existing chest x-ray classification models and to increase their robustness against OOD data. METHODS: The study employed the commonly used chest x-ray classification model CheXnet, trained on the ChestX-ray14 data set, and tested its robustness against OOD data using three public radiography data sets, IRMA, Bone Age, and MURA, as well as the ImageNet data set. To detect OOD data for multi-label classification, we proposed in-distribution voting (IDV). OOD detection performance is measured across data sets using area under the receiver operating characteristic curve (AUC) analysis and compared with Mahalanobis-based OOD detection, MaxLogit, MaxEnergy, self-supervised OOD detection (SS OOD), and CutMix. RESULTS: Without additional OOD detection, the chest x-ray classifier failed to discard any OOD images, with an AUC of 0.5. The proposed IDV approach trained on ID (ChestX-ray14) and OOD data (IRMA and ImageNet) achieved, on average, an OOD AUC of 0.999 across the three data sets, surpassing all other OOD detection methods. Mahalanobis-based OOD detection achieved an average OOD detection AUC of 0.982. IDV trained solely with a few thousand ImageNet images had an AUC of 0.913, considerably higher than MaxLogit (0.726), MaxEnergy (0.724), SS OOD (0.476), and CutMix (0.376). CONCLUSIONS: Except for Mahalanobis-based OOD detection and the proposed IDV method, the performance of the tested OOD detection methods did not translate well to radiography data sets. Consequently, training solely on ID data led to incorrect classification of OOD images as ID, resulting in increased false-positive rates. IDV substantially improved the model's ID classification performance, even when trained with data that will not occur in the intended use case or test set (ImageNet), without additional inference overhead or a performance decrease in the target classification. The corresponding code is available at https://gitlab.lrz.de/IP/a-knee-cannot-have-lung-disease.


Subject(s)
Voting , X-Rays , Radiography , ROC Curve
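
The abstract does not detail how in-distribution voting works; the sketch below encodes one plausible reading, per-class confidence votes from a multi-label classifier trained with OOD images labeled all-negative, and should not be taken as the paper's exact rule.

```python
import numpy as np

def in_distribution_voting(probs, thresholds):
    """Each class head votes; an image counts as in-distribution (ID)
    if at least one head is confident.

    probs: (n_images, n_classes) sigmoid outputs of a multi-label
    classifier trained on ID data plus OOD images labeled all-negative.
    thresholds: (n_classes,) per-class cutoffs tuned on validation data.
    """
    votes = probs >= thresholds  # (n_images, n_classes) boolean votes
    return votes.any(axis=1)     # True = keep as ID, False = flag as OOD
```
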
8.
J Imaging ; 9(12), 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38132688

ABSTRACT

Public chest X-ray (CXR) data sets are commonly compressed to a lower bit depth to reduce their size, potentially hiding subtle diagnostic features. In contrast, radiologists apply a windowing operation to the uncompressed image to enhance such subtle features. While it has been shown that windowing improves classification performance on computed tomography (CT) images, the impact of such an operation on CXR classification performance remains unclear. In this study, we show that windowing strongly improves the CXR classification performance of machine learning models and propose WindowNet, a model that learns multiple optimal window settings. Our model achieved an average AUC score of 0.812 compared with the 0.759 score of a commonly used architecture without windowing capabilities on the MIMIC data set.
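
The windowing operation itself is easy to express as a differentiable layer with learnable centers and widths; the sketch below captures the idea behind WindowNet, not its published implementation.

```python
import torch
from torch import nn

class WindowLayer(nn.Module):
    """Apply M learnable intensity windows to a one-channel radiograph,
    mimicking a radiologist's window/level adjustment."""

    def __init__(self, n_windows=8):
        super().__init__()
        self.center = nn.Parameter(torch.rand(n_windows))             # level
        self.width = nn.Parameter(torch.rand(n_windows) * 0.5 + 0.1)  # width

    def forward(self, x):  # x: (B, 1, H, W), intensities in [0, 1]
        c = self.center.view(1, -1, 1, 1)
        w = self.width.view(1, -1, 1, 1)
        # Linear window: map [c - w/2, c + w/2] to [0, 1], clamp outside.
        return ((x - (c - w / 2)) / w).clamp(0.0, 1.0)  # (B, M, H, W)
```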

9.
World Allergy Organ J ; 16(10): 100820, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37822702

ABSTRACT

Background: Immunoglobulin E (IgE) blood tests are used to detect sensitizations and potential allergies. Recent studies suggest that specific IgE sensitization patterns arising from molecular interactions affect an individual's risk of developing allergic symptoms. Objective: The aim of this study was to reveal specific IgE sensitization patterns and investigate their clinical implications in Hymenoptera venom allergy. Methods: In this cross-sectional study, 257 hunters or fishers with self-completed surveys on previous Hymenoptera stings were analyzed. Blood samples were taken to determine Hymenoptera IgE sensitization levels. Using dimensionality reduction and clustering, specific IgE levels for 10 Hymenoptera venom allergens were evaluated for clinical relevance. Results: Three clusters were unmasked using novel dimensionality reduction and clustering methods based solely on specific IgE levels to Hymenoptera venom allergens. These clusters show different characteristics regarding previous systemic reactions to Hymenoptera stings. Conclusion: Our study was able to unmask non-linear sensitization patterns for specific IgE tests in Hymenoptera venom allergy. We were able to derive risk clusters for anaphylactic reactions following Hymenoptera stings and pinpoint the allergens relevant for clustering (rApi m 10, rVes v 1, whole bee venom, and wasp venom).
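
The abstract names neither the dimensionality-reduction nor the clustering algorithm; with PCA and k-means as stand-ins and placeholder data in place of the specific-IgE matrix, the analysis skeleton looks like this.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder for the real data: 257 subjects x 10 specific-IgE levels.
X = np.random.default_rng(0).lognormal(size=(257, 10))

embedded = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(np.log1p(X)))  # log-transform is common
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedded)
```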

10.
ArXiv ; 2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37396615

ABSTRACT

Real-time 3D fluorescence microscopy is crucial for the spatiotemporal analysis of live organisms, such as neural activity monitoring. The eXtended field-of-view light field microscope (XLFM), also known as the Fourier light field microscope, is a straightforward, single-snapshot solution to achieve this. The XLFM acquires spatial-angular information in a single camera exposure. In a subsequent step, a 3D volume can be algorithmically reconstructed, making it exceptionally well-suited for real-time 3D acquisition and potential analysis. Unfortunately, traditional reconstruction methods (like deconvolution) require lengthy processing times (0.02 Hz), hampering the speed advantages of the XLFM. Neural network architectures can overcome the speed constraints at the expense of lacking certainty metrics, which renders them untrustworthy for the biomedical realm. This work proposes a novel architecture to perform fast 3D reconstructions of live immobilized zebrafish neural activity based on a conditional normalizing flow. It reconstructs volumes at 8 Hz spanning 512 × 512 × 96 voxels, and it can be trained in under two hours due to the small dataset requirements (10 image-volume pairs). Furthermore, normalizing flows allow for exact likelihood computation, enabling distribution monitoring, followed by out-of-distribution detection and retraining of the system when a novel sample is detected. We evaluate the proposed method using a cross-validation approach involving multiple in-distribution samples (genetically identical zebrafish) and various out-of-distribution ones.

11.
Radiol Artif Intell ; 5(2): e220187, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37035429

ABSTRACT

Purpose: To investigate the chest radiograph classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps, using the example of pneumothorax classification. Materials and Methods: In this retrospective study, ViTs were fine-tuned for lung disease classification using four public datasets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData. Saliency maps were generated using transformer multimodal explainability and gradient-weighted class activation mapping (GradCAM). Classification performance was evaluated on the Chest X-Ray 14, VinBigData, and Society for Imaging Informatics in Medicine-American College of Radiology (SIIM-ACR) Pneumothorax Segmentation datasets using area under the receiver operating characteristic curve (AUC) analysis and compared with convolutional neural networks (CNNs). The explainability methods were evaluated with positive and negative perturbation, sensitivity-n, effective heat ratio, intra-architecture repeatability, and inter-architecture reproducibility. In the user study, three radiologists classified 160 chest radiographs with and without saliency maps for pneumothorax and rated their usefulness. Results: ViTs had chest radiograph classification AUCs comparable to those of state-of-the-art CNNs: 0.95 (95% CI: 0.94, 0.95) versus 0.83 (95% CI: 0.83, 0.84) on Chest X-Ray 14, 0.84 (95% CI: 0.77, 0.91) versus 0.83 (95% CI: 0.76, 0.90) on VinBigData, and 0.85 (95% CI: 0.85, 0.86) versus 0.87 (95% CI: 0.87, 0.88) on SIIM-ACR. Both saliency map methods unveiled a strong bias toward pneumothorax tubes in the models. Radiologists found 47% of the attention-based saliency maps and 39% of the GradCAM saliency maps useful. The attention-based methods outperformed GradCAM on all metrics. Conclusion: ViTs performed similarly to CNNs in chest radiograph classification, and their attention-based saliency maps were more useful to radiologists and outperformed GradCAM. Keywords: Conventional Radiography, Thorax, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN). Online supplemental material is available for this article. © RSNA, 2023.
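
Of the two saliency approaches, the attention-based one builds on aggregating attention across layers. Plain attention rollout, a simpler relative of the gradient-weighted transformer explainability method evaluated in the paper, illustrates the mechanics.

```python
import torch

def attention_rollout(attentions):
    """Aggregate ViT attention across layers by matrix products, adding
    the identity to account for residual connections.

    attentions: list of (heads, tokens, tokens) tensors, one per layer
    (batch dimension omitted). Returns the CLS token's attention over
    the patch tokens, reshapeable into a saliency map.
    """
    n = attentions[0].size(-1)
    rollout = torch.eye(n)
    for attn in attentions:
        a = attn.mean(dim=0)                 # average over heads
        a = a + torch.eye(n)                 # residual connection
        a = a / a.sum(dim=-1, keepdim=True)  # re-normalize rows
        rollout = a @ rollout
    return rollout[0, 1:]  # CLS -> patch tokens
```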

12.
IEEE Trans Med Imaging ; 42(10): 2876-2885, 2023 10.
Article in English | MEDLINE | ID: mdl-37115841

ABSTRACT

Grating-based phase- and dark-field-contrast X-ray imaging is a novel technology that aims to extend conventional attenuation-based X-ray imaging by unlocking two additional contrast modalities. The so-called phase-contrast and dark-field channels provide enhanced soft-tissue contrast and additional microstructural information. Accessing this additional information comes at the expense of a more intricate measurement setup and necessitates sophisticated data processing. A big challenge in translating grating-based dark-field computed tomography to medical applications lies in minimizing the data acquisition time. While a continuously moving detector is ideal, it prohibits conventional phase-stepping techniques, which require multiple projections under the same angle with different grating positions. One solution to this problem is the so-called sliding window processing approach, which is compatible with continuous data acquisition. However, conventional sliding window techniques lead to crosstalk artifacts between the three image channels if the projection of the sample moves too fast on the detector within a processing window. In this work, we introduce a new interpretation of the phase retrieval problem for continuous acquisitions as a demodulation problem. In this interpretation, we identify the origin of the crosstalk artifacts as partially overlapping modulation sidebands. Furthermore, we present three algorithmic extensions that improve conventional sliding-window-based phase retrieval and mitigate crosstalk artifacts. The presented algorithms are tested in a simulation study and on experimental data from a human-scale dark-field CT prototype. In both cases, they achieve a substantial reduction of the occurring crosstalk artifacts.


Subject(s)
Algorithms , Tomography, X-Ray Computed , Humans , X-Rays , Tomography, X-Ray Computed/methods , Radiography , Computer Simulation , Phantoms, Imaging
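
The demodulation view generalizes classic phase stepping, where the three signal channels fall out of the first two Fourier coefficients of each pixel's stepping curve. For reference, assuming the steps sample exactly one grating period:

```python
import numpy as np

def phase_stepping_retrieval(steps):
    """Per-pixel signal extraction from a phase-stepping scan.

    steps: (n_steps, H, W) intensities over one grating period.
    Returns mean (attenuation channel), differential phase, and
    visibility; the dark-field signal follows from the visibility
    reduction relative to a reference (air) scan.
    """
    c = np.fft.fft(steps, axis=0)
    mean = np.abs(c[0]) / steps.shape[0]
    phase = np.angle(c[1])
    visibility = 2.0 * np.abs(c[1]) / np.abs(c[0])
    return mean, phase, visibility
```
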
13.
J Eur Acad Dermatol Venereol ; 37(5): 1071-1079, 2023 May.
Article in English | MEDLINE | ID: mdl-36606561

ABSTRACT

BACKGROUND: Artificial intelligence (AI) and convolutional neural networks (CNNs) represent rising trends in modern medicine. However, comprehensive data on the performance of AI practices on clinical dermatologic images are non-existent. Furthermore, the role of professional data selection for training remains unknown. OBJECTIVES: The aims of this study were to develop AI applications for outlier detection of dermatological pathologies, to evaluate CNN architectures' performance on dermatological images, and to investigate the role of professional pre-processing of the training data, serving as one of the first anchor points regarding data selection criteria in dermatological AI-based binary classification tasks of non-melanoma pathologies. METHODS: Six state-of-the-art CNN architectures were evaluated for their accuracy, sensitivity, and specificity on five dermatological diseases using five data subsets, including data selected by two dermatologists, one with 5 and the other with 11 years of clinical experience. RESULTS: Overall, 150 CNNs were evaluated on up to 4051 clinical images. The best accuracy was reached for onychomycosis (accuracy = 1.000), followed by bullous pemphigoid (accuracy = 0.951) and lupus erythematosus (accuracy = 0.912). The CNNs InceptionV3, Xception, and ResNet50 achieved the best accuracy in 9, 8, and 6 out of 25 data sets, respectively (36.0%, 32.0%, and 24.0%). On average, the data set provided by the senior physician and the data set agreed upon by both dermatologists performed best (accuracy = 0.910). CONCLUSIONS: This AI approach for the detection of outliers in dermatological diagnoses represents one of the first studies to evaluate the performance of different CNNs for binary decisions on clinical non-dermatoscopic images of a variety of dermatological diseases other than melanoma. The selection of images by an experienced dermatologist during pre-processing had substantial benefits for the performance of the CNNs. These comparative results might guide future AI approaches to dermatology diagnostics, and the evaluated CNNs might be applicable for the future training of dermatology residents.


Subject(s)
Dermatology , Melanoma , Skin Diseases , Humans , Artificial Intelligence , Neural Networks, Computer , Melanoma/diagnosis , Melanoma/pathology , Skin Diseases/diagnosis
14.
IEEE Trans Med Imaging ; 42(3): 774-784, 2023 03.
Article in English | MEDLINE | ID: mdl-36301786

ABSTRACT

X-ray computed tomography (CT) is an invaluable imaging technique for non-invasive medical diagnosis. However, for soft tissues in the human body, the differences in attenuation are inherently small. Grating-based X-ray phase-contrast imaging is a relatively novel method that detects additional interaction mechanisms between photons and matter, namely refraction and small-angle scattering, to generate additional images with different contrast. The experimental setup involves a Talbot-Lau interferometer whose susceptibility to mechanical vibrations has in the past hindered acquisition schemes suitable for clinical routine. We present a processing pipeline to identify spatially and temporally variable fluctuations occurring in an interferometer installed on a continuously rotating clinical CT gantry. The correlations of the vibrations in the modular grating setup are exploited to identify a small number of relevant fluctuation modes, allowing for a sample reconstruction free of vibration artifacts.


Subject(s)
Interferometry , Vibration , Humans , Interferometry/methods , Tomography, X-Ray Computed/methods , Radiography , X-Rays
15.
Proc Natl Acad Sci U S A ; 119(8), 2022 02 22.
Article in English | MEDLINE | ID: mdl-35131900

ABSTRACT

X-ray computed tomography (CT) is one of the most commonly used three-dimensional medical imaging modalities today. It has been refined over several decades, with the most recent innovations including dual-energy and spectral photon-counting technologies. Nevertheless, it has been discovered that wave-optical contrast mechanisms beyond the presently used X-ray attenuation offer the potential of complementary information, particularly on otherwise unresolved tissue microstructure. One such approach is dark-field imaging, which has recently been introduced and has already demonstrated significantly improved radiological benefit in small-animal models, especially for lung diseases. Until now, however, dark-field CT could not be translated to the human scale and has been restricted to benchtop and small-animal systems, with scan durations of several minutes or more. This is mainly because the adaptation and upscaling to the mechanical complexity, speed, and size of a human CT scanner has so far remained an unsolved challenge. Here, we report the successful integration of a Talbot-Lau interferometer into a clinical CT gantry and present dark-field CT results of a human-sized anthropomorphic body phantom, reconstructed from a single-rotation scan performed in 1 s. Moreover, we present our key hardware and software solutions to the roadblocks that have so far kept dark-field CT from being translated from the optical bench into a rapidly rotating CT gantry, with all its associated challenges such as vibrations, continuous rotation, and a large field of view. This development enables clinical dark-field CT studies with human patients in the near future.


Subject(s)
Scattering, Small Angle , Tomography, X-Ray Computed/instrumentation , Tomography, X-Ray Computed/methods , Algorithms , Animals , Humans , Imaging, Three-Dimensional , Interferometry/methods , Phantoms, Imaging , Radiography , Tomography Scanners, X-Ray Computed , X-Rays
16.
Med Image Anal ; 76: 102307, 2022 02.
Article in English | MEDLINE | ID: mdl-34861602

ABSTRACT

Skin disease is one of the most common diseases in the world. Deep learning-based methods have achieved excellent skin lesion recognition performance, most of which are based only on dermoscopy images. Recent works that use multi-modality data (patient meta-data, clinical images, and dermoscopy images) adopt a one-stage fusion approach and optimize information fusion only at the feature level. These methods do not use information fusion at the decision level and thus cannot fully exploit the data of all modalities. This work proposes a novel two-stage multi-modal learning algorithm (FusionM4Net) for multi-label skin disease classification. In the first stage, we construct a FusionNet, which exploits and integrates the representations of clinical and dermoscopy images at the feature level and then uses Fusion Scheme 1 to conduct information fusion at the decision level. In the second stage, to further incorporate the patient's meta-data, we propose Fusion Scheme 2, which integrates the multi-label predictive information from the first stage with the patient's meta-data to train an SVM cluster. The final diagnosis is formed by fusing the predictions from the first and second stages. Our algorithm was evaluated on the seven-point checklist dataset, a well-established multi-modality multi-label skin disease dataset. Without using the patient's meta-data, the first stage of FusionM4Net (FusionM4Net-FS) achieved an average accuracy of 75.7% for multi-classification tasks and 74.9% for diagnostic tasks, which is more accurate than other state-of-the-art methods. By further fusing the patient's meta-data in FusionM4Net's second stage (FusionM4Net-SS), the entire FusionM4Net boosts the average accuracy to 77.0% and the diagnostic accuracy to 78.5%, indicating robust and excellent classification performance on this label-imbalanced dataset. The corresponding code is available at: https://github.com/pixixiaonaogou/MLSDR.


Subject(s)
Algorithms , Skin Diseases , Humans , Skin Diseases/diagnostic imaging
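
The two-stage structure, feature-level fusion first and decision-level fusion with meta-data through an SVM second, can be schematized as follows; placeholder arrays stand in for the FusionNet outputs and the real patient data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
stage1_probs = rng.random((413, 5))  # placeholder FusionNet class probabilities
metadata = rng.random((413, 12))     # placeholder encoded patient meta-data
labels = rng.integers(0, 5, 413)     # placeholder diagnosis labels

# Stage two: train an SVM on stage-one predictions plus meta-data.
stage2_in = np.hstack([stage1_probs, metadata])
svm = SVC(probability=True).fit(stage2_in, labels)

# Final diagnosis: decision-level fusion of both stages' predictions.
fused = (stage1_probs + svm.predict_proba(stage2_in)) / 2
diagnosis = fused.argmax(axis=1)
```
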
17.
Opt Express ; 28(11): 16554-16568, 2020 May 25.
Article in English | MEDLINE | ID: mdl-32549475

ABSTRACT

Recently, Fourier light field microscopy was proposed to overcome the limitations of conventional light field microscopy by placing a micro-lens array at the aperture stop of the microscope objective instead of at the image plane. In this way, a collection of orthographic views from different perspectives is directly captured. When inspecting fluorescent samples, the sensitivity and noise of the sensors are a major concern, and large sensor pixels are required to cope with low-light conditions, which implies under-sampling issues. In this context, we analyze the sampling patterns of Fourier light field microscopy to understand to what extent computational super-resolution can be triggered during deconvolution in order to improve the resolution of the 3D reconstruction of the imaged data.

18.
Sci Rep ; 9(1): 18123, 2019 12 02.
Article in English | MEDLINE | ID: mdl-31792293

ABSTRACT

Fluorescence imaging opens new possibilities for intraoperative guidance and early cancer detection, in particular when using agents that target specific disease features. Nevertheless, photon scattering in tissue degrades image quality, leads to ambiguity in fluorescence image interpretation, and challenges clinical translation. We introduce the concept of capturing the spatially-dependent impulse response of an image and investigate Spatially Adaptive Impulse Response Correction (SAIRC), a method proposed to improve the achieved accuracy and sensitivity. Unlike classical methods that presume a homogeneous spatial distribution of optical properties in tissue, SAIRC explicitly measures the optical heterogeneity in tissues. This information allows, for the first time, the application of spatially-dependent deconvolution to correct captured fluorescence images for their modification by photon scatter. Using experimental measurements from phantoms and animals, we investigate the improvement in resolution and quantification over non-corrected images. We discuss how the proposed method is essential for maximizing the performance of fluorescence molecular imaging in the clinic.
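
Spatially-dependent deconvolution can be approximated by deconvolving image tiles with locally measured point spread functions. The crude sketch below (uniform tiling, no boundary blending, Richardson-Lucy as the per-tile deconvolver) is not claimed to match SAIRC's actual scheme.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def tilewise_deconvolution(image, psfs, tile=64):
    """Deconvolve each tile with its local PSF.

    image: 2D float array with dimensions divisible by `tile`.
    psfs: dict mapping tile index (i, j) to that tile's measured PSF.
    Tile-boundary blending is omitted for brevity.
    """
    out = np.zeros_like(image, dtype=float)
    for i in range(0, image.shape[0], tile):
        for j in range(0, image.shape[1], tile):
            patch = image[i:i + tile, j:j + tile]
            psf = psfs[(i // tile, j // tile)]
            out[i:i + tile, j:j + tile] = richardson_lucy(patch, psf, num_iter=30)
    return out
```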

19.
Opt Express ; 27(22): 31644-31666, 2019 Oct 28.
Article in English | MEDLINE | ID: mdl-31684394

ABSTRACT

The sampling patterns of the light field microscope (LFM) are highly depth-dependent, which implies a non-uniform recoverable lateral resolution across depth. Moreover, reconstructions using state-of-the-art approaches suffer from strong artifacts at axial ranges where the LFM samples the light field at a coarse rate. In this work, we analyze the sampling patterns of the LFM and introduce a flexible light field point spread function model (LFPSF) to cope with arbitrary LFM designs. We then propose a novel aliasing-aware deconvolution scheme to address the sampling artifacts. We demonstrate the high potential of the proposed method on real experimental data.

20.
Z Med Phys ; 29(2): 86-101, 2019 May.
Article in English | MEDLINE | ID: mdl-30686613

ABSTRACT

This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we start reviewing the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Medical image processing is obviously one of the areas that has been strongly affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modeling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future.


Subject(s)
Deep Learning , Diagnostic Imaging , Image Processing, Computer-Assisted/methods , Humans
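
Since the review starts from the perceptron, a minimal concrete instance may help: the classic error-driven update rule on a linearly separable toy problem.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt perceptron: update weights only on misclassified points."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):              # labels yi in {0, 1}
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi         # error-driven update
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w = train_perceptron(X, np.array([0, 0, 0, 1]))  # learns logical AND
```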