Results 1 - 20 of 45
1.
Proc Natl Acad Sci U S A ; 119(8). 2022 Feb 22.
Article in English | MEDLINE | ID: mdl-35131900

ABSTRACT

X-ray computed tomography (CT) is one of the most commonly used three-dimensional medical imaging modalities today. It has been refined over several decades, with the most recent innovations including dual-energy and spectral photon-counting technologies. Nevertheless, it has been discovered that wave-optical contrast mechanisms, beyond the presently used X-ray attenuation, offer the potential of complementary information, particularly on otherwise unresolved tissue microstructure. One such approach is dark-field imaging, which has recently been introduced and has already demonstrated significantly improved radiological benefit in small-animal models, especially for lung diseases. Until now, however, dark-field CT could not be translated to the human scale and has been restricted to benchtop and small-animal systems, with scan durations of several minutes or more. This is mainly because the adaptation and upscaling to the mechanical complexity, speed, and size of a human CT scanner remained an unsolved challenge. Here, we report the successful integration of a Talbot-Lau interferometer into a clinical CT gantry and present dark-field CT results of a human-sized anthropomorphic body phantom, reconstructed from a single-rotation scan performed in 1 s. Moreover, we present our key hardware and software solutions to the previously unsolved roadblocks that have so far kept dark-field CT from being translated from the optical bench into a rapidly rotating CT gantry, with all its associated challenges such as vibrations, continuous rotation, and a large field of view. This development enables clinical dark-field CT studies with human patients in the near future.
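In grating interferometry, the attenuation and dark-field channels are retrieved per detector pixel from a phase-stepping curve, which is approximately sinusoidal; the dark-field signal is the sample-induced reduction of the fringe visibility relative to a reference scan. A minimal single-pixel sketch (the step count and intensities below are illustrative, not values from the paper):

```python
import numpy as np

def stepping_fit(intensities, steps):
    """Fit a0 + a1*cos(2*pi*steps + phi) to a phase-stepping curve
    via linear least squares; returns (mean a0, amplitude a1, phase phi)."""
    A = np.column_stack([np.ones_like(steps),
                         np.cos(2 * np.pi * steps),
                         np.sin(2 * np.pi * steps)])
    c0, cc, cs = np.linalg.lstsq(A, intensities, rcond=None)[0]
    return c0, np.hypot(cc, cs), np.arctan2(-cs, cc)

steps = np.linspace(0, 1, 8, endpoint=False)       # one grating period
ref = 100 + 40 * np.cos(2 * np.pi * steps)          # flat-field curve
smp = 60 + 10 * np.cos(2 * np.pi * steps + 0.3)     # behind the sample

a0_r, a1_r, _ = stepping_fit(ref, steps)
a0_s, a1_s, _ = stepping_fit(smp, steps)
transmission = a0_s / a0_r                          # attenuation channel
dark_field = (a1_s / a0_s) / (a1_r / a0_r)          # visibility reduction
```

A dark-field value below 1 indicates small-angle scattering by unresolved microstructure.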


Subject(s)
Scattering, Small Angle; Tomography, X-Ray Computed/instrumentation; Tomography, X-Ray Computed/methods; Algorithms; Animals; Humans; Imaging, Three-Dimensional; Interferometry/methods; Phantoms, Imaging; Radiography; Tomography Scanners, X-Ray Computed; X-Rays
2.
J Eur Acad Dermatol Venereol ; 37(5): 1071-1079, 2023 May.
Article in English | MEDLINE | ID: mdl-36606561

ABSTRACT

BACKGROUND: Artificial intelligence (AI) and convolutional neural networks (CNNs) represent rising trends in modern medicine. However, comprehensive data on the performance of AI practices in clinical dermatologic images are non-existent. Furthermore, the role of professional data selection for training remains unknown. OBJECTIVES: The aims of this study were to develop AI applications for outlier detection of dermatological pathologies, to evaluate CNN architectures' performance on dermatological images, and to investigate the role of professional pre-processing of the training data, serving as one of the first anchor points regarding data selection criteria in dermatological AI-based binary classification tasks of non-melanoma pathologies. METHODS: Six state-of-the-art CNN architectures were evaluated for their accuracy, sensitivity, and specificity for five dermatological diseases using five data subsets, including data selected by two dermatologists, one with 5 and the other with 11 years of clinical experience. RESULTS: Overall, 150 CNNs were evaluated on up to 4051 clinical images. The best accuracy was reached for onychomycosis (accuracy = 1.000), followed by bullous pemphigoid (accuracy = 0.951) and lupus erythematosus (accuracy = 0.912). The CNNs InceptionV3, Xception and ResNet50 achieved the best accuracy in 9, 8, and 6 out of 25 data sets, respectively (36.0%, 32.0% and 24.0%). On average, the data set provided by the senior physician and the data set agreed upon by both dermatologists performed best (accuracy = 0.910). CONCLUSIONS: This AI approach for the detection of outliers in dermatological diagnoses represents one of the first studies to evaluate the performance of different CNNs for binary decisions in clinical non-dermatoscopic images of a variety of dermatological diseases other than melanoma.
The selection of images by an experienced dermatologist during pre-processing had substantial benefits for the performance of the CNNs. These comparative results might guide future AI approaches to dermatology diagnostics, and the evaluated CNNs might be applicable for the future training of dermatology residents.
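The reported accuracy, sensitivity, and specificity all derive from each classifier's binary confusion matrix. A minimal sketch with hypothetical counts (not numbers from the study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# illustrative counts for one disease-vs-rest decision
acc, sens, spec = binary_metrics(tp=90, fp=5, tn=95, fn=10)
```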


Subject(s)
Dermatology; Melanoma; Skin Diseases; Humans; Artificial Intelligence; Neural Networks, Computer; Melanoma/diagnosis; Melanoma/pathology; Skin Diseases/diagnosis
3.
Nat Methods ; 14(11): 1079-1082, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28967889

ABSTRACT

A long-standing objective in neuroscience has been to image distributed neuronal activity in freely behaving animals. Here we introduce NeuBtracker, a tracking microscope for simultaneous imaging of neuronal activity and behavior of freely swimming fluorescent reporter fish. We showcase the value of NeuBtracker for screening neurostimulants with respect to their combined neuronal and behavioral effects and for determining spontaneous and stimulus-induced spatiotemporal patterns of neuronal activation during naturalistic behavior.


Subject(s)
Behavior, Animal; Fishes/physiology; Animals; Microscopy/methods; Neurons/physiology; Swimming/physiology
4.
Opt Express ; 28(11): 16554-16568, 2020 May 25.
Article in English | MEDLINE | ID: mdl-32549475

ABSTRACT

Recently, Fourier light field microscopy was proposed to overcome the limitations of conventional light field microscopy by placing a micro-lens array at the aperture stop of the microscope objective instead of the image plane. In this way, a collection of orthographic views from different perspectives is directly captured. When inspecting fluorescent samples, the sensitivity and noise of the sensors are a major concern, and large sensor pixels are required to cope with low-light conditions, which implies under-sampling issues. In this context, we analyze the sampling patterns in Fourier light field microscopy to understand to what extent computational super-resolution can be exploited during deconvolution to improve the resolution of the 3D reconstruction of the imaged data.
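Resolution recovery in light field microscopy is typically posed as a deconvolution problem; the super-resolution analysis above builds on Richardson-Lucy-style iterations. A generic 1D sketch with a synthetic Gaussian PSF (not the paper's light-field-specific forward model):

```python
import numpy as np

def richardson_lucy(measured, psf, iters=200):
    """Plain Richardson-Lucy deconvolution in 1D.

    A generic sketch of the iterative deconvolution step; the paper's
    scheme additionally accounts for the depth-dependent sampling of the
    Fourier light field microscope."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full_like(measured, measured.mean())   # flat positive init
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

truth = np.zeros(64)
truth[20] = 1.0                      # two point sources
truth[40] = 0.5
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
measured = np.convolve(truth, psf / psf.sum(), mode="same")
recovered = richardson_lucy(measured, psf)
```

The iteration sharpens the blurred peaks while keeping the estimate non-negative.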

5.
Opt Express ; 27(22): 31644-31666, 2019 Oct 28.
Article in English | MEDLINE | ID: mdl-31684394

ABSTRACT

The sampling patterns of the light field microscope (LFM) are highly depth-dependent, which implies non-uniform recoverable lateral resolution across depth. Moreover, reconstructions using state-of-the-art approaches suffer from strong artifacts at axial ranges, where the LFM samples the light field at a coarse rate. In this work, we analyze the sampling patterns of the LFM, and introduce a flexible light field point spread function model (LFPSF) to cope with arbitrary LFM designs. We then propose a novel aliasing-aware deconvolution scheme to address the sampling artifacts. We demonstrate the high potential of the proposed method on real experimental data.

6.
Opt Express ; 23(12): 15134-15151, 2015 Jun 15.
Article in English | MEDLINE | ID: mdl-26193497

ABSTRACT

Recently, a method was presented to reconstruct X-ray scattering tensors from projections obtained in a grating interferometry setup. The original publications present a rather specialised approach, for instance by suggesting a single SART-based solver. In this work, we propose a novel approach to solving the inverse problem, allowing the use of algorithms other than SART (such as conjugate gradient), faster tensor recovery, and an intuitive visualisation. Furthermore, we introduce constraint enforcement for X-ray tensor tomography (cXTT) and demonstrate that this yields visually smoother results in comparison to the state-of-the-art approach, similar to regularisation.
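Once the tensor recovery is written as a linear inverse problem, a generic solver can replace SART. A toy sketch of conjugate gradient on the normal equations (the random matrix is a stand-in for the actual tensor tomography operator, which is not specified in the abstract):

```python
import numpy as np

def conjugate_gradient(A, b, iters=100, tol=1e-10):
    """Solve the normal equations A^T A x = A^T b by conjugate gradient,
    one drop-in alternative to a SART-style iterative solver."""
    x = np.zeros(A.shape[1])
    r = A.T @ b                 # residual of the normal equations at x = 0
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))   # toy projection operator
x_true = rng.normal(size=10)
x_rec = conjugate_gradient(A, A @ x_true)
```

For a well-conditioned system, CG converges in at most as many iterations as there are unknowns.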

7.
Med Phys ; 51(4): 2721-2732, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37831587

ABSTRACT

BACKGROUND: Deep learning models are being applied to more and more use cases with astonishing success stories, but how do they perform in the real world? Models are typically tested on specific cleaned data sets, but when deployed in the real world, the model will encounter unexpected, out-of-distribution (OOD) data. PURPOSE: To investigate the impact of OOD radiographs on existing chest x-ray classification models and to increase their robustness against OOD data. METHODS: The study employed the commonly used chest x-ray classification model, CheXnet, trained on the chest x-ray 14 data set, and tested its robustness against OOD data using three public radiography data sets: IRMA, Bone Age, and MURA, and the ImageNet data set. To detect OOD data for multi-label classification, we proposed in-distribution voting (IDV). The OOD detection performance is measured across data sets using area under the receiver operating characteristic curve (AUC) analysis and compared with Mahalanobis-based OOD detection, MaxLogit, MaxEnergy, self-supervised OOD detection (SS OOD), and CutMix. RESULTS: Without additional OOD detection, the chest x-ray classifier failed to discard any OOD images, with an AUC of 0.5. The proposed IDV approach trained on ID (chest x-ray 14) and OOD data (IRMA and ImageNet) achieved, on average, 0.999 OOD AUC across the three data sets, surpassing all other OOD detection methods. Mahalanobis-based OOD detection achieved an average OOD detection AUC of 0.982. IDV trained solely with a few thousand ImageNet images had an AUC of 0.913, which was considerably higher than MaxLogit (0.726), MaxEnergy (0.724), SS OOD (0.476), and CutMix (0.376). CONCLUSIONS: Apart from Mahalanobis-based OOD detection and the proposed IDV method, the performance of the tested OOD detection methods did not translate well to radiography data sets. Consequently, training solely on ID data led to OOD images being incorrectly classified as ID, resulting in increased false-positive rates.
IDV substantially improved the model's ID classification performance, even when trained with data that will not occur in the intended use case or test set (ImageNet), without additional inference overhead or performance decrease in the target classification. The corresponding code is available at https://gitlab.lrz.de/IP/a-knee-cannot-have-lung-disease.
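Of the baselines compared above, MaxLogit and MaxEnergy are simple to state: both score a sample from its raw logits and flag low scores as OOD. (The proposed IDV method itself is not specified in the abstract; see the linked repository.) A sketch of the two baseline scores:

```python
import numpy as np

def maxlogit_score(logits):
    """MaxLogit OOD score: the highest raw logit; low values suggest OOD."""
    return np.max(logits, axis=-1)

def energy_score(logits, T=1.0):
    """Negative free energy, T * logsumexp(logits / T); low values suggest OOD."""
    z = logits / T
    m = z.max(axis=-1, keepdims=True)
    return T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

id_logits = np.array([[8.0, 1.0, 0.5]])    # confident in-distribution sample
ood_logits = np.array([[1.1, 0.9, 1.0]])   # flat, low-confidence sample
```

In practice, a threshold on either score is calibrated on held-out ID data.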


Subject(s)
Voting; X-Rays; Radiography; ROC Curve
8.
Med Phys ; 51(10): 7404-7414, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39012833

ABSTRACT

BACKGROUND: Computed tomography (CT) relies on the attenuation of x-rays and is, hence, of limited use for weakly attenuating organs of the body, such as the lung. X-ray dark-field (DF) imaging is a recently developed technology that utilizes x-ray optical gratings to enable small-angle scattering as an alternative contrast mechanism. The DF signal provides structural information about the micromorphology of an object, complementary to the conventional attenuation signal. A first human-scale x-ray DF CT has been developed by our group. Despite specialized processing algorithms, reconstructed images remain affected by streaking artifacts, which often hinder image interpretation. In recent years, convolutional neural networks have gained popularity in the field of CT reconstruction, amongst others for streak artifact removal. PURPOSE: Reducing streak artifacts is essential for the optimization of image quality in DF CT, and artifact-free images are a prerequisite for potential future clinical application. The purpose of this paper is to demonstrate the feasibility of CNN post-processing for artifact reduction in x-ray DF CT and how multi-rotation scans can serve as a pathway for training data. METHODS: We employed a supervised deep-learning approach using a three-dimensional dual-frame UNet in order to remove streak artifacts. Required training data were obtained from the experimental x-ray DF CT prototype at our institute. Two different operating modes were used to generate input and corresponding ground truth data sets. Clinically relevant scans at dose-compatible radiation levels were used as input data, and extended scans with substantially fewer artifacts were used as ground truth data. The latter is neither dose- nor time-compatible and, therefore, unfeasible for clinical imaging of patients. RESULTS: The trained CNN was able to greatly reduce streak artifacts in DF CT images.
The network was tested against images with entirely different, previously unseen image characteristics. In all cases, CNN processing substantially increased the image quality, which was quantitatively confirmed by increased image quality metrics. Fine details are preserved during processing, despite the output images appearing smoother than the ground truth images. CONCLUSIONS: Our results showcase the potential of a neural network to reduce streak artifacts in x-ray DF CT. The image quality is successfully enhanced in dose-compatible x-ray DF CT, which plays an essential role for the adoption of x-ray DF CT into modern clinical radiology.


Subject(s)
Artifacts; Image Processing, Computer-Assisted; Neural Networks, Computer; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Humans
9.
Biomed Opt Express ; 15(2): 1219-1232, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404325

ABSTRACT

Real-time 3D fluorescence microscopy is crucial for the spatiotemporal analysis of live organisms, such as neural activity monitoring. The eXtended field-of-view light field microscope (XLFM), also known as the Fourier light field microscope, is a straightforward, single-snapshot solution to achieve this. The XLFM acquires spatial-angular information in a single camera exposure. In a subsequent step, a 3D volume can be algorithmically reconstructed, making it exceptionally well-suited for real-time 3D acquisition and potential analysis. Unfortunately, traditional reconstruction methods (like deconvolution) require lengthy processing times (0.02-0.20 Hz), hampering the speed advantages of the XLFM. Neural network architectures can overcome the speed constraints but do not automatically provide a way to certify the realism of their reconstructions, which is essential in the biomedical realm. To address these shortcomings, this work proposes a novel architecture to perform fast 3D reconstructions of live immobilized zebrafish neural activity based on a conditional normalizing flow. It reconstructs volumes at 8 Hz spanning 512 x 512 x 96 voxels, and it can be trained in under two hours due to the small dataset requirements (50 image-volume pairs). Furthermore, normalizing flows provide a way to compute the exact likelihood of a sample. This allows us to certify whether a predicted output is in- or out-of-distribution and to retrain the system when a novel sample is detected. We evaluate the proposed method on a cross-validation approach involving multiple in-distribution samples (genetically identical zebrafish) and various out-of-distribution ones.
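The out-of-distribution certification relies on the defining property of normalizing flows: the change-of-variables formula gives an exact log-likelihood, log p(x) = log p_z(f(x)) + log |det J_f(x)|. A deliberately tiny 1D affine-flow illustration of that idea (real flows stack many learned layers; the numbers here are made up):

```python
import numpy as np

def affine_flow_logprob(x, scale, shift):
    """Exact log-likelihood under a 1D affine flow z = (x - shift) / scale
    with a standard-normal base distribution, via change of variables:
    log p(x) = log N(z; 0, 1) - log|scale|."""
    z = (x - shift) / scale
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))
    return log_base - np.log(np.abs(scale))

# pretend the flow was trained so in-distribution data looks like N(5, 2)
in_sample, ood_sample = 5.2, 42.0
lp_in = affine_flow_logprob(in_sample, scale=2.0, shift=5.0)
lp_ood = affine_flow_logprob(ood_sample, scale=2.0, shift=5.0)
is_ood = lp_ood < -10.0   # flag samples below a likelihood threshold
```

A sample whose likelihood falls below the calibrated threshold triggers inspection or retraining.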

10.
IEEE Trans Med Imaging ; 43(11): 3820-3829, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38739509

ABSTRACT

X-ray computed tomography (CT) is a crucial tool for non-invasive medical diagnosis that uses differences in materials' attenuation coefficients to generate contrast and provide 3D information. Grating-based dark-field-contrast X-ray imaging is an innovative technique that utilizes small-angle scattering to generate additional co-registered images with complementary microstructural information. While it is already possible to perform human chest dark-field radiography, it is assumed that its diagnostic value increases when performed in a tomographic setup. However, the susceptibility of Talbot-Lau interferometers to mechanical vibrations, coupled with the need to minimize data acquisition times, has in the past hindered its application in clinical routines and the combination of X-ray dark-field imaging with large field-of-view (FOV) tomography. In this work, we propose a processing pipeline to address this issue in a human-sized clinical dark-field CT prototype. We present the corrective measures that are applied in the employed processing and reconstruction algorithms to mitigate the effects of vibrations and deformations of the interferometer gratings. This is achieved by identifying spatially and temporally variable vibrations in air reference scans. By translating the found correlations to the sample scan, we can identify and mitigate relevant fluctuation modes for scans with arbitrary sample sizes. This approach effectively eliminates the requirement for a sample-free detector area, while still distinctly separating fluctuation and sample information. As a result, samples of arbitrary dimensions can be reconstructed without being affected by vibration artifacts. To demonstrate the viability of the technique for human-scale objects, we present reconstructions of an anthropomorphic thorax phantom.


Subject(s)
Algorithms; Interferometry; Phantoms, Imaging; Tomography, X-Ray Computed; Interferometry/methods; Interferometry/instrumentation; Humans; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Vibration
11.
Radiol Artif Intell ; 6(4): e230275, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38717293

ABSTRACT

Purpose To explore the potential benefits of deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection. Materials and Methods In this retrospective study, a U-Net was trained for artifact reduction on simulated sparse-view cranial CT scans in 3000 patients, obtained from a public dataset and reconstructed with varying sparse-view levels. Additionally, EfficientNet-B2 was trained on full-view CT data from 17 545 patients for automated hemorrhage detection. Detection performance was evaluated using the area under the receiver operating characteristic curve (AUC), with differences assessed using the DeLong test, along with confusion matrices. A total variation (TV) postprocessing approach, commonly applied to sparse-view CT, served as the basis for comparison. A Bonferroni-corrected significance level of .001/6 ≈ .00017 was used to account for multiple hypothesis testing. Results Images with U-Net postprocessing were better than unprocessed and TV-processed images with respect to image quality and automated hemorrhage detection. With U-Net postprocessing, the number of views could be reduced from 4096 (AUC: 0.97 [95% CI: 0.97, 0.98]) to 512 (0.97 [95% CI: 0.97, 0.98], P < .00017) and to 256 views (0.97 [95% CI: 0.96, 0.97], P < .00017) with a minimal decrease in hemorrhage detection performance. This was accompanied by mean structural similarity index measure increases of 0.0210 (95% CI: 0.0210, 0.0211) and 0.0560 (95% CI: 0.0559, 0.0560) relative to unprocessed images. Conclusion U-Net-based artifact reduction substantially enhanced automated hemorrhage detection in sparse-view cranial CT scans. Keywords: CT, Head/Neck, Hemorrhage, Diagnosis, Supervised Learning Supplemental material is available for this article. © RSNA, 2024.
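Sparse-view data sets like those above are commonly simulated by keeping an equally spaced subset of the acquired projection angles. A sketch of that subsampling step (array sizes are illustrative; the study's reconstruction itself is a separate step):

```python
import numpy as np

def sparse_view_subset(sinogram, n_views):
    """Keep n_views equally spaced projection angles out of the full scan,
    emulating e.g. a 4096 -> 256-view sparse acquisition."""
    full = sinogram.shape[0]
    idx = np.round(np.arange(n_views) * full / n_views).astype(int)
    return sinogram[idx], idx

# toy sinogram: views x detector bins
full_sino = np.random.default_rng(1).normal(size=(4096, 128))
sparse_sino, kept = sparse_view_subset(full_sino, 256)
```

Reconstructing from the subset produces the streak artifacts that the U-Net is then trained to remove.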


Subject(s)
Artifacts; Deep Learning; Tomography, X-Ray Computed; Humans; Retrospective Studies; Tomography, X-Ray Computed/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Male; Female; Intracranial Hemorrhages/diagnostic imaging; Intracranial Hemorrhages/diagnosis
12.
Rofo ; 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38663428

ABSTRACT

The aim of this study was to explore the potential of weak supervision in a deep learning-based label prediction model. The goal was to use this model to extract labels from German free-text thoracic radiology reports on chest X-ray images and for training chest X-ray classification models. The proposed label extraction model for German thoracic radiology reports uses a German BERT encoder as a backbone and classifies a report based on the CheXpert labels. For investigating the efficient use of manually annotated data, the model was trained using manual annotations, weak rule-based labels, and both. Rule-based labels were extracted from 66071 retrospectively collected radiology reports from 2017-2021 (DS 0), and 1091 reports from 2020-2021 (DS 1) were manually labeled according to the CheXpert classes. Label extraction performance was evaluated with respect to mention extraction, negation detection, and uncertainty detection by measuring F1 scores. The influence of the label extraction method on chest X-ray classification was evaluated on a pneumothorax data set (DS 2) containing 6434 chest radiographs with associated reports and expert diagnoses of pneumothorax. For this, DenseNet-121 models trained on manual annotations, rule-based and deep learning-based label predictions, and publicly available data were compared. The proposed deep learning-based labeler (DL) performed on average considerably stronger than the rule-based labeler (RB) for all three tasks on DS 1, with F1 scores of 0.938 vs. 0.844 for mention extraction, 0.891 vs. 0.821 for negation detection, and 0.624 vs. 0.518 for uncertainty detection. Pre-training on DS 0 and fine-tuning on DS 1 performed better than training on either DS 0 or DS 1 alone. Chest X-ray pneumothorax classification results (DS 2) were highest when trained with DL labels, with an area under the receiver operating curve (AUC) of 0.939, compared to RB labels with an AUC of 0.858.
Training with manual labels performed slightly worse than training with DL labels, with an AUC of 0.934. In contrast, training with a public data set resulted in an AUC of 0.720. Our results show that leveraging a rule-based report labeler for weak supervision leads to improved labeling performance. The pneumothorax classification results demonstrate that our proposed deep learning-based labeler can serve as a substitute for manual labeling, requiring only 1000 manually annotated reports for training. · The proposed deep learning-based label extraction model for German thoracic radiology reports performs better than the rule-based model. · Training with limited supervision outperformed training with a small manually labeled data set. · Using predicted labels for pneumothorax classification from chest radiographs performed equally to using manual annotations. Wollek A, Haitzer P, Sedlmeyr T et al. Language model-based labeling of German thoracic radiology reports. Fortschr Röntgenstr 2024; DOI 10.1055/a-2287-5054.
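The rule-based side of such labelers typically boils down to mention matching plus negation scoping. A deliberately toy English-language sketch (the study's labeler operates on German reports with a far richer vocabulary and also handles uncertainty, which is omitted here):

```python
import re

# Toy CheXpert-style rule labeler: 1 = positive mention, 0 = negated mention,
# None = not mentioned. The phrases below are illustrative placeholders,
# not the study's actual (German) vocabulary.
MENTIONS = {"pneumothorax": r"pneumothorax", "effusion": r"(pleural )?effusion"}
NEGATION = r"\b(no|without|ruled out)\b[^.]*"   # scope limited to one sentence

def rule_label(report, finding):
    pattern = MENTIONS[finding]
    if re.search(NEGATION + pattern, report, flags=re.I):
        return 0          # mention inside a negation scope
    if re.search(pattern, report, flags=re.I):
        return 1          # positive mention
    return None           # finding not mentioned

report = "No pneumothorax. Small pleural effusion on the right."
label_ptx = rule_label(report, "pneumothorax")
label_eff = rule_label(report, "effusion")
```

The BERT-based labeler replaces these brittle patterns with learned classification of each finding.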

13.
Rofo ; 196(9): 956-965, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38295825

ABSTRACT

PURPOSE: The aim of this study was to develop an algorithm to automatically extract annotations from German thoracic radiology reports to train deep learning-based chest X-ray classification models. MATERIALS AND METHODS: An automatic label extraction model for German thoracic radiology reports was designed based on the CheXpert architecture. The algorithm can extract labels for twelve common chest pathologies, the presence of support devices, and "no finding". For iterative improvements and to generate a ground truth, a web-based multi-reader annotation interface was created. With the proposed annotation interface, a radiologist annotated 1086 retrospectively collected radiology reports from 2020-2021 (data set 1). The effect of automatically extracted labels on chest radiograph classification performance was evaluated on an additional, in-house pneumothorax data set (data set 2), containing 6434 chest radiographs with corresponding reports, by comparing a DenseNet-121 model trained on extracted labels from the associated reports, image-based pneumothorax labels, and publicly available data, respectively. RESULTS: Comparing automated to manual labeling on data set 1: "mention extraction" class-wise F1 scores ranged from 0.8 to 0.995, the "negation detection" F1 scores from 0.624 to 0.981, and F1 scores for "uncertainty detection" from 0.353 to 0.725. Extracted pneumothorax labels on data set 2 had a sensitivity of 0.997 [95 % CI: 0.994, 0.999] and specificity of 0.991 [95 % CI: 0.988, 0.994]. The model trained on publicly available data achieved an area under the receiver operating curve (AUC) for pneumothorax classification of 0.728 [95 % CI: 0.694, 0.760], while the models trained on automatically extracted labels and on manual annotations achieved values of 0.858 [95 % CI: 0.832, 0.882] and 0.934 [95 % CI: 0.918, 0.949], respectively. CONCLUSION: Automatic label extraction from German thoracic radiology reports is a promising substitute for manual labeling. 
By reducing the time required for data annotation, larger training data sets can be created, resulting in improved overall modeling performance. Our results demonstrated that a pneumothorax classifier trained on automatically extracted labels strongly outperformed the model trained on publicly available data, without the need for additional annotation time, and performed competitively with manually labeled data. KEY POINTS: · An algorithm for automatic German thoracic radiology report annotation was developed. · Automatic label extraction is a promising substitute for manual labeling. · The classifier trained on extracted labels outperformed the model trained on publicly available data. CITATION: Wollek A, Hyska S, Sedlmeyr T et al. German CheXpert Chest X-ray Radiology Report Labeler. Fortschr Röntgenstr 2024; 196: 956-965.


Subject(s)
Algorithms; Radiography, Thoracic; Radiography, Thoracic/methods; Humans; Germany; Retrospective Studies; Pneumothorax/diagnostic imaging; Neural Networks, Computer
14.
Eur Radiol Exp ; 8(1): 54, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38698099

ABSTRACT

BACKGROUND: We aimed to improve the image quality (IQ) of sparse-view computed tomography (CT) images using a U-Net for lung metastasis detection and to determine the best tradeoff between number of views, IQ, and diagnostic confidence. METHODS: CT images from 41 subjects aged 62.8 ± 10.6 years (mean ± standard deviation, 23 men), 34 with lung metastasis and 7 healthy, were retrospectively selected (2016-2018) and forward projected onto 2,048-view sinograms. Six corresponding sparse-view CT data subsets at varying levels of undersampling were reconstructed from sinograms using filtered backprojection with 16, 32, 64, 128, 256, and 512 views. A dual-frame U-Net was trained and evaluated for each subsampling level on 8,658 images from 22 diseased subjects. A representative image per scan was selected from 19 subjects (12 diseased, 7 healthy) for a single-blinded multireader study. These slices, for all levels of subsampling, with and without U-Net postprocessing, were presented to three readers. IQ and diagnostic confidence were ranked using predefined scales. Subjective nodule segmentation was evaluated using sensitivity and the Dice similarity coefficient (DSC); a clustered Wilcoxon signed-rank test was used. RESULTS: The 64-projection sparse-view images resulted in 0.89 sensitivity and 0.81 DSC, while their counterparts, postprocessed with the U-Net, had improved metrics (0.94 sensitivity and 0.85 DSC; p = 0.400). Fewer views led to insufficient IQ for diagnosis. For increased views, no substantial discrepancies were noted between sparse-view and postprocessed images. CONCLUSIONS: Projection views can be reduced from 2,048 to 64 while maintaining IQ and the confidence of the radiologists at a satisfactory level. RELEVANCE STATEMENT: Our reader study demonstrates the benefit of U-Net postprocessing for regular CT screenings of patients with lung metastasis to increase the IQ and diagnostic confidence while reducing the dose.
KEY POINTS: • Sparse-projection-view streak artifacts reduce the quality and usability of sparse-view CT images. • U-Net-based postprocessing removes sparse-view artifacts while maintaining diagnostically accurate IQ. • Postprocessed sparse-view CTs drastically increase radiologists' confidence in diagnosing lung metastasis.
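The Dice similarity coefficient used for the segmentation comparison is defined as 2|A∩B| / (|A| + |B|). A minimal sketch on toy binary masks:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((8, 8), dtype=int)
pred[2:6, 2:6] = 1     # predicted 16-pixel nodule
truth = np.zeros((8, 8), dtype=int)
truth[3:7, 3:7] = 1    # slightly shifted reference mask
dsc = dice(pred, truth)
```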


Subject(s)
Lung Neoplasms; Tomography, X-Ray Computed; Humans; Lung Neoplasms/diagnostic imaging; Male; Middle Aged; Tomography, X-Ray Computed/methods; Female; Retrospective Studies; Radiographic Image Interpretation, Computer-Assisted/methods; Aged
15.
World Allergy Organ J ; 16(10): 100820, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37822702

ABSTRACT

Background: Immunoglobulin E (IgE) blood tests are used to detect sensitizations and potential allergies. Recent studies suggest that specific IgE sensitization patterns due to molecular interactions affect an individual's risk of developing allergic symptoms. Objective: The aim of this study was to reveal specific IgE sensitization patterns and investigate their clinical implications in Hymenoptera venom allergy. Methods: In this cross-sectional study, 257 hunters or fishers with self-completed surveys on previous Hymenoptera stings were analyzed. Blood samples were taken to determine Hymenoptera IgE sensitization levels. Using dimensionality reduction and clustering, specific IgE for 10 Hymenoptera venom allergens were evaluated for clinical relevance. Results: Three clusters were unmasked using novel dimensionality reduction and clustering methods based solely on specific IgE levels to Hymenoptera venom allergens. These clusters show different characteristics regarding previous systemic reactions to Hymenoptera stings. Conclusion: Our study was able to unmask non-linear sensitization patterns for specific IgE tests in Hymenoptera venom allergy. We were able to derive risk clusters for anaphylactic reactions following Hymenoptera stings and pinpoint relevant allergens (rApi m 10, rVes v 1, whole bee, and wasp venom) for clustering.

16.
J Imaging ; 9(12). 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38132688

ABSTRACT

Public chest X-ray (CXR) data sets are commonly compressed to a lower bit depth to reduce their size, potentially hiding subtle diagnostic features. In contrast, radiologists apply a windowing operation to the uncompressed image to enhance such subtle features. While it has been shown that windowing improves classification performance on computed tomography (CT) images, the impact of such an operation on CXR classification performance remains unclear. In this study, we show that windowing strongly improves the CXR classification performance of machine learning models and propose WindowNet, a model that learns multiple optimal window settings. Our model achieved an average AUC score of 0.812 compared with the 0.759 score of a commonly used architecture without windowing capabilities on the MIMIC data set.
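A windowing operation maps a sub-range [center - width/2, center + width/2] of the raw intensities onto the full display range. A sketch of the fixed (non-learned) version that WindowNet generalizes by learning multiple window settings (values are illustrative):

```python
import numpy as np

def apply_window(image, center, width, out_max=255.0):
    """Clip intensities to [center - width/2, center + width/2] and rescale,
    mirroring the radiologist's window/level operation on raw pixel data."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(image, lo, hi) - lo) / (hi - lo) * out_max

raw = np.array([0.0, 1000.0, 2000.0, 4000.0])   # e.g. raw 12-bit intensities
windowed = apply_window(raw, center=2000.0, width=2000.0)
```

Values below the window floor saturate to 0 and values above the ceiling to the display maximum, spending the full output range on the diagnostically relevant band.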

17.
IEEE Trans Med Imaging ; 42(3): 774-784, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36301786

ABSTRACT

X-ray computed tomography (CT) is an invaluable imaging technique for non-invasive medical diagnosis. However, for soft tissue in the human body the difference in attenuation is inherently small. Grating-based X-ray phase-contrast is a relatively novel imaging method which detects additional interaction mechanisms between photons and matter, namely refraction and small-angle scattering, to generate additional images with different contrast. The experimental setup involves a Talbot-Lau interferometer whose susceptibility to mechanical vibrations hindered acquisition schemes suitable for clinical routine in the past. We present a processing pipeline to identify spatially and temporally variable fluctuations occurring in an interferometer installed on a continuously rotating clinical CT gantry. The correlations of the vibrations in the modular grating setup are exploited to identify a small number of relevant fluctuation modes, allowing for a sample reconstruction free of vibration artifacts.


Subject(s)
Interferometry, Vibration, Humans, Interferometry/methods, Tomography, X-Ray Computed/methods, Radiography, X-Rays
18.
IEEE Trans Med Imaging ; 42(10): 2876-2885, 2023 10.
Article in English | MEDLINE | ID: mdl-37115841

ABSTRACT

Grating-based phase- and dark-field-contrast X-ray imaging is a novel technology that aims to extend conventional attenuation-based X-ray imaging by unlocking two additional contrast modalities. The so-called phase-contrast and dark-field channels provide enhanced soft-tissue contrast and additional microstructural information. Accessing this additional information comes at the expense of a more intricate measurement setup and necessitates sophisticated data processing. A major challenge in translating grating-based dark-field computed tomography to medical applications lies in minimizing the data acquisition time. While a continuously moving detector is ideal, it prohibits conventional phase-stepping techniques, which require multiple projections under the same angle with different grating positions. One solution to this problem is the so-called sliding-window processing approach, which is compatible with continuous data acquisition. However, conventional sliding-window techniques lead to crosstalk artifacts between the three image channels if the projection of the sample moves too fast on the detector within a processing window. In this work, we introduce a new interpretation of the phase retrieval problem for continuous acquisitions as a demodulation problem. In this interpretation, we identify the origin of the crosstalk artifacts as partially overlapping modulation sidebands. Furthermore, we present three algorithmic extensions that improve conventional sliding-window-based phase retrieval and mitigate crosstalk artifacts. The presented algorithms are tested in a simulation study and on experimental data from a human-scale dark-field CT prototype. In both cases, they achieve a substantial reduction of the occurring crosstalk artifacts.
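The conventional phase-stepping retrieval that the sliding-window approach generalizes can be sketched as demodulating a sinusoidal stepping curve: the mean gives attenuation, the first-harmonic phase gives differential phase, and the relative first-harmonic amplitude (visibility) gives dark-field. This is a textbook single-pixel sketch under the assumption of equally spaced steps, not the authors' sliding-window algorithm.

```python
import numpy as np

def retrieve_signals(stepping_curve):
    """Demodulate an equally spaced phase-stepping curve
    I_k = a0 * (1 + v * cos(2*pi*k/N + phi)) via a DFT, returning
    (a0, v, phi): attenuation, visibility (dark-field), and phase."""
    n = len(stepping_curve)
    coeffs = np.fft.rfft(stepping_curve)
    a0 = coeffs[0].real / n                # mean -> attenuation channel
    a1 = 2.0 * np.abs(coeffs[1]) / n       # first-harmonic amplitude
    phi = np.angle(coeffs[1])              # first-harmonic phase
    return a0, a1 / a0, phi

# Synthetic curve with mean 10, visibility 0.4, phase 0.5 rad over 8 steps.
k = np.arange(8)
curve = 10.0 * (1.0 + 0.4 * np.cos(2.0 * np.pi * k / 8 + 0.5))
a0, vis, phi = retrieve_signals(curve)
```

A continuously moving sample effectively modulates these coefficients within the processing window, which is where the overlapping-sideband (crosstalk) problem described above arises.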


Subject(s)
Algorithms, Tomography, X-Ray Computed, Humans, X-Rays, Tomography, X-Ray Computed/methods, Radiography, Computer Simulation, Phantoms, Imaging
19.
Radiol Artif Intell ; 5(2): e220187, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37035429

ABSTRACT

Purpose: To investigate the chest radiograph classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps, using the example of pneumothorax classification. Materials and Methods: In this retrospective study, ViTs were fine-tuned for lung disease classification using four public datasets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData. Saliency maps were generated using transformer multimodal explainability and gradient-weighted class activation mapping (GradCAM). Classification performance was evaluated on the Chest X-Ray 14, VinBigData, and Society for Imaging Informatics in Medicine-American College of Radiology (SIIM-ACR) Pneumothorax Segmentation datasets using area under the receiver operating characteristic curve (AUC) analysis and compared with convolutional neural networks (CNNs). The explainability methods were evaluated with positive and negative perturbation, sensitivity-n, effective heat ratio, intra-architecture repeatability, and interarchitecture reproducibility. In the user study, three radiologists classified 160 chest radiographs with and without saliency maps for pneumothorax and rated their usefulness. Results: ViTs had chest radiograph classification AUCs comparable with state-of-the-art CNNs: 0.95 (95% CI: 0.94, 0.95) versus 0.83 (95% CI: 0.83, 0.84) on Chest X-Ray 14, 0.84 (95% CI: 0.77, 0.91) versus 0.83 (95% CI: 0.76, 0.90) on VinBigData, and 0.85 (95% CI: 0.85, 0.86) versus 0.87 (95% CI: 0.87, 0.88) on SIIM-ACR. Both saliency map methods unveiled a strong bias toward pneumothorax tubes in the models. Radiologists found 47% of the attention-based and 39% of the GradCAM saliency maps useful. The attention-based methods outperformed GradCAM on all metrics.
Conclusion: ViTs performed similarly to CNNs in chest radiograph classification, and their attention-based saliency maps were more useful to radiologists and outperformed GradCAM. Keywords: Conventional Radiography, Thorax, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN). Online supplemental material is available for this article. © RSNA, 2023.

20.
ArXiv ; 2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37396615

ABSTRACT

Real-time 3D fluorescence microscopy is crucial for the spatiotemporal analysis of live organisms, such as in neural activity monitoring. The eXtended field-of-view light field microscope (XLFM), also known as the Fourier light field microscope, is a straightforward, single-snapshot solution to achieve this. The XLFM acquires spatial-angular information in a single camera exposure. In a subsequent step, a 3D volume can be algorithmically reconstructed, making it exceptionally well suited for real-time 3D acquisition and potential analysis. Unfortunately, traditional reconstruction methods (like deconvolution) require lengthy processing times (0.0220 Hz), hampering the speed advantages of the XLFM. Neural network architectures can overcome the speed constraints, at the expense of lacking certainty metrics, which renders them untrustworthy for the biomedical realm. This work proposes a novel architecture to perform fast 3D reconstructions of live, immobilized zebrafish neural activity based on a conditional normalizing flow. It reconstructs volumes at 8 Hz spanning 512 × 512 × 96 voxels, and it can be trained in under two hours due to its small dataset requirements (10 image-volume pairs). Furthermore, normalizing flows allow for exact likelihood computation, enabling distribution monitoring, followed by out-of-distribution detection and retraining of the system when a novel sample is detected. We evaluate the proposed method with a cross-validation approach involving multiple in-distribution samples (genetically identical zebrafish) and various out-of-distribution ones.
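The exact-likelihood property of normalizing flows that enables the out-of-distribution monitoring described above comes from the change-of-variables formula: an invertible transform with a tractable Jacobian log-determinant turns the base density into an exact data density. The minimal numpy sketch below uses a single affine coupling step with fixed parameters; it illustrates the principle only and bears no relation to the paper's conditional architecture.

```python
import numpy as np

def affine_coupling_forward(x, log_scale, shift):
    """One affine coupling step: the second half of x is scaled and
    shifted (parameters fixed here; learned from the first half in a real
    flow). The Jacobian log-determinant is just the sum of log-scales."""
    d = len(x) // 2
    y = x.copy()
    y[d:] = x[d:] * np.exp(log_scale) + shift
    return y, np.sum(log_scale)

def log_likelihood(x, log_scale, shift):
    """Exact log p(x) via change of variables with a standard-normal
    base: log p(x) = log N(f(x); 0, I) + log |det df/dx|."""
    z, log_det = affine_coupling_forward(x, log_scale, shift)
    log_base = -0.5 * np.sum(z**2) - 0.5 * len(z) * np.log(2.0 * np.pi)
    return log_base + log_det
```

Because `log_likelihood` is exact rather than a bound, thresholding it on incoming samples gives a principled out-of-distribution detector of the kind the abstract describes.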
