1.
Bioengineering (Basel); 11(4), 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38671773

ABSTRACT

Deep learning is revolutionizing radiology report generation (RRG) with the adoption of vision encoder-decoder (VED) frameworks, which transform radiographs into detailed medical reports. Traditional methods, however, often generate reports of limited diversity and struggle with generalization. Our research introduces reinforcement learning and text augmentation to tackle these issues, significantly improving report quality and variability. By employing RadGraph as a reward metric and innovating in text augmentation, we surpass existing benchmarks on metrics such as BLEU4, ROUGE-L, F1CheXbert, and RadGraph, setting new standards for report accuracy and diversity on the MIMIC-CXR and Open-i datasets. Our VED model achieves F1-scores of 66.2 for CheXbert and 37.8 for RadGraph on the MIMIC-CXR dataset, and 54.7 and 45.6, respectively, on Open-i. These outcomes represent a significant breakthrough in the RRG field. The findings and implementation of the proposed approach, aimed at enhancing diagnostic precision and radiological interpretation in clinical settings, are publicly available on GitHub to encourage further advancements in the field.
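As a rough illustration of the reinforcement-learning ingredient described above, the sketch below shows a policy-gradient (REINFORCE-style) loss with a mean-reward baseline, where a sequence-level score such as a RadGraph-style overlap could serve as the reward. The function name and tensor layout are illustrative assumptions, not the paper's actual training code.

```python
import torch

def reinforce_loss(seq_log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Policy-gradient loss for sampled reports.

    seq_log_probs: (B,) log-probability of each sampled report under the current model.
    rewards:       (B,) sequence-level reward, e.g. an entity/relation overlap score
                   computed against the reference report.
    """
    advantage = rewards - rewards.mean()               # mean-reward baseline reduces variance
    return -(advantage.detach() * seq_log_probs).mean()
```

Minimizing this loss increases the likelihood of sampled reports whose reward exceeds the batch average, which is the basic mechanism behind reward-driven fine-tuning of a VED decoder.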

2.
Sci Rep; 13(1): 12284, 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37507517

ABSTRACT

One of the main activities of the nuclear industry is the characterisation of radioactive waste based on the detection of gamma radiation. Large volumes of radioactive waste are classified according to their average activity, but the radioactivity often exceeds the maximum allowed by regulators in specific parts of the bulk. In addition, radiation detection is currently based on static detection systems where the geometry of the bulk is fixed and well known. Furthermore, these systems are not portable and depend on transporting the waste to the places where the detection systems are located. However, there are situations where the geometry varies and where moving the waste is complex; this is especially true in compromised situations. We present a new model for nuclear waste management based on a portable and geometry-independent tomographic system for three-dimensional image reconstruction in gamma radiation detection. The system relies on the combination of a gamma radiation camera and a visible camera, which makes it possible to visualise radioactivity using augmented reality and computer vision techniques. This novel tomographic system has the potential to be a disruptive innovation in the nuclear industry for nuclear waste management.
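A minimal sketch of the kind of augmented-reality visualisation described above is shown below: a coarse gamma-activity map is colour-mapped, resized and blended onto the visible-camera frame. The function and the assumption that both cameras are already spatially registered are illustrative; the actual system described in the paper is more involved.

```python
import cv2
import numpy as np

def overlay_activity(visible_bgr: np.ndarray, gamma_counts: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend a low-resolution gamma-activity map onto a visible-camera frame (BGR, uint8)."""
    # Normalise the gamma counts to 8 bits and apply a heat colormap.
    norm = cv2.normalize(gamma_counts.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX)
    heat = cv2.applyColorMap(norm.astype(np.uint8), cv2.COLORMAP_JET)
    # Resize the heat map to the visible frame (assumes the two views are registered).
    heat = cv2.resize(heat, (visible_bgr.shape[1], visible_bgr.shape[0]), interpolation=cv2.INTER_LINEAR)
    return cv2.addWeighted(visible_bgr, 1.0 - alpha, heat, alpha, 0.0)
```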

4.
Sensors (Basel); 22(14), 2022 Jul 21.
Article in English | MEDLINE | ID: mdl-35891127

ABSTRACT

Quality assessment is one of the most common processes in the agri-food industry. Typically, this task involves the analysis of multiple views of the fruit, and analyzing these single views is a highly time-consuming operation. Moreover, there is usually significant overlap between consecutive views, so a mechanism may be needed to cope with the redundancy and prevent the multiple counting of defect points. This paper presents a method to create surface maps of fruit from collections of views obtained while the piece is rotating. This single image map combines the information contained in the views, thus reducing the number of analysis operations and avoiding possible miscounts in the number of defects. After fitting a simple geometrical model to each piece, the 3D rotation between consecutive views is estimated only from the captured images, without any further need for sensors or information about the conveyor. Because rotation is estimated directly from the views, this novel methodology is readily usable in high-throughput industrial inspection machines without any special hardware modification. As proof of this technique's usefulness, an application is shown where maps have been used as input to a CNN to classify oranges into different categories.
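As a very rough sketch of the map-building idea, the snippet below concatenates the central strip of each view into a single surface map, assuming the per-view rotation angles are already known and that the views share the same height. In the paper the rotation is estimated from the images themselves; this sketch only illustrates the final assembly step.

```python
import numpy as np

def build_surface_map(views: list[np.ndarray], angles_deg: list[float]) -> np.ndarray:
    """views: grayscale views of one fruit; angles_deg: rotation covered by each view."""
    strips = []
    for view, angle in zip(views, angles_deg):
        h, w = view.shape[:2]
        strip_w = max(1, int(w * angle / 180.0))      # crude unrolling: strip width ~ angular step
        c = w // 2
        strips.append(view[:, c - strip_w // 2 : c + strip_w // 2 + 1])
    return np.hstack(strips)                          # one map covering the inspected surface
```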


Subject(s)
Fruit
5.
Insights Imaging; 13(1): 122, 2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35900673

ABSTRACT

BACKGROUND: The role of chest radiography in COVID-19 disease has changed since the beginning of the pandemic, from a diagnostic tool when microbiological resources were scarce to one focused on detecting and monitoring COVID-19 lung involvement. Early detection of the disease on chest radiographs is still helpful in resource-poor environments. However, the sensitivity of a chest radiograph for diagnosing COVID-19 is modest, even for expert radiologists. In this paper, the performance of a deep learning algorithm on images from the first clinical encounter is evaluated and compared with a group of radiologists with different years of experience. METHODS: The algorithm uses an ensemble of four deep convolutional networks, Ensemble4Covid, trained to detect COVID-19 on frontal chest radiographs. The algorithm was tested using images from the first clinical encounter of positive and negative cases. Its performance was compared with that of five radiologists on a smaller test subset of patients. The algorithm's performance was also validated using the public dataset COVIDx. RESULTS: Compared to the consensus of five radiologists, the Ensemble4Covid model achieved an AUC of 0.85, whereas the radiologists achieved an AUC of 0.71. Compared with other state-of-the-art models, a single model of our ensemble showed nonsignificant differences on the public dataset COVIDx. CONCLUSION: The results show that using images from the first clinical encounter significantly reduces COVID-19 detection performance. Under these challenging conditions, the performance of our Ensemble4Covid model is considerably higher than that of a consensus of five radiologists. Artificial intelligence can be used for the fast diagnosis of COVID-19.
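For context, ensembling and the reported AUC can be reproduced in spirit with a few lines; the averaging scheme and variable names below are assumptions, not the paper's code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_auc(member_probs: np.ndarray, labels: np.ndarray) -> float:
    """member_probs: (n_models, n_images) per-model COVID-19 probabilities; labels: (n_images,) 0/1."""
    ensemble_score = member_probs.mean(axis=0)        # simple probability averaging across members
    return roc_auc_score(labels, ensemble_score)
```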

6.
IEEE Trans Med Imaging; 40(12): 3543-3554, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34138702

ABSTRACT

The emergence of deep learning has considerably advanced the state of the art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have all too often been trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that are generalizable across different clinical centres, imaging conditions or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired with four different scanner vendors in six hospitals and three different countries (Spain, Canada and Germany), which we provide as open access to the community to enable future research in the field.
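A minimal sketch of intensity-driven augmentation, of the kind the results point to, is shown below; the parameter ranges are illustrative assumptions rather than the challenge teams' settings.

```python
import numpy as np

def intensity_augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """img: 2D CMR slice scaled to [0, 1]; returns an intensity-perturbed copy."""
    gamma = rng.uniform(0.7, 1.5)                     # random gamma correction
    scale = rng.uniform(0.9, 1.1)                     # contrast jitter
    shift = rng.uniform(-0.1, 0.1)                    # brightness jitter
    out = np.clip(img, 0.0, 1.0) ** gamma
    return np.clip(out * scale + shift, 0.0, 1.0)
```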


Subject(s)
Heart; Magnetic Resonance Imaging; Cardiac Imaging Techniques; Heart/diagnostic imaging; Humans
7.
Sensors (Basel); 21(6), 2021 Mar 23.
Article in English | MEDLINE | ID: mdl-33806776

ABSTRACT

Automated fruit inspection using cameras involves the analysis of a collection of views of the same fruit obtained by rotating the fruit while it is transported. Conventionally, each view is analyzed independently. However, in order to get a global score of the fruit quality, it is necessary to match the defects between adjacent views to prevent counting them more than once and to ensure that the whole surface has been examined. To accomplish this goal, this paper estimates the 3D rotation undergone by the fruit using a single camera. A 3D model of the fruit geometry is needed to estimate the rotation, so this paper proposes to model the fruit shape as a 3D spheroid. The spheroid size and pose in each view are estimated from the silhouettes of all views. Once the geometric model has been fitted, a single 3D rotation is estimated for each view transition. Once all rotations have been estimated, it is possible to use them to propagate defects to neighboring views or even to build a topographic map of the whole fruit surface, thus opening the possibility of analyzing a single image (the map) instead of a collection of individual views. A large effort was made to make this method as fast as possible. Execution times are under 0.5 ms to estimate each 3D rotation on a standard Intel i7 CPU using a single core.
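To illustrate only the rotation-fitting step, the sketch below recovers a single 3D rotation from matched points on the spheroid surface using the Kabsch/Procrustes solution. How the matches are obtained from the silhouettes and views is not shown, and the paper's actual estimator may differ.

```python
import numpy as np

def fit_rotation(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """p, q: (N, 3) matched 3D surface points; returns R such that q ≈ p @ R.T."""
    p0, q0 = p - p.mean(axis=0), q - q.mean(axis=0)   # remove centroids
    u, _, vt = np.linalg.svd(p0.T @ q0)               # SVD of the cross-covariance matrix
    d = 1.0 if np.linalg.det(vt.T @ u.T) > 0 else -1.0
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T        # enforce a proper rotation (det = +1)
```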

8.
JAMA Netw Open; 3(3): e200265, 2020 Mar 02.
Article in English | MEDLINE | ID: mdl-32119094

ABSTRACT

Importance: Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives. Objective: To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms. Design, Setting, and Participants: In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016. Main Outcomes and Measures: Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2) and output a score that translated to a yes/no prediction of cancer within 12 months. Algorithm accuracy for breast cancer detection was evaluated using the area under the curve, and algorithm specificity was compared with radiologists' specificity at a fixed radiologists' sensitivity of 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessment was developed and evaluated. Results: Overall, 144 231 screening mammograms from 85 580 US women (952 cancer positive ≤12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166 578 examinations from 68 008 Swedish women (780 cancer positive). The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden) and a specificity of 66.2% (United States) and 81.2% (Sweden) at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity. Conclusions and Relevance: While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine learning methods to enhance mammography screening interpretation.
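As an illustration of the evaluation protocol, the sketch below computes an algorithm's specificity at a fixed radiologist sensitivity from its scores; the interpolation-free thresholding is a simplification, not the challenge's exact analysis code.

```python
import numpy as np
from sklearn.metrics import roc_curve

def specificity_at_sensitivity(labels: np.ndarray, scores: np.ndarray,
                               target_sensitivity: float = 0.859) -> float:
    """labels: (N,) 0/1 cancer status; scores: (N,) algorithm outputs."""
    fpr, tpr, _ = roc_curve(labels, scores)
    idx = int(np.searchsorted(tpr, target_sensitivity))   # first operating point reaching the sensitivity
    return 1.0 - fpr[min(idx, len(fpr) - 1)]
```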


Subject(s)
Breast Neoplasms/diagnostic imaging; Deep Learning; Image Interpretation, Computer-Assisted/methods; Mammography/methods; Radiologists; Adult; Aged; Algorithms; Artificial Intelligence; Early Detection of Cancer; Female; Humans; Middle Aged; Radiology; Sensitivity and Specificity; Sweden; United States
9.
Med Eng Phys; 42: 73-79, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28223012

ABSTRACT

A method for deriving 3D internal information in conventional X-ray settings is presented. It is based on the combination of a pair of radiographs from a patient and avoids the use of X-ray-opaque fiducials and external reference structures. To achieve this goal, we augment an ordinary X-ray device with a consumer RGB-D camera. The patient's rotation around the craniocaudal axis is tracked relative to this camera thanks to the depth information provided and the application of a modern surface-mapping algorithm. The measured spatial information is then translated to the reference frame of the X-ray imaging system. By using the intrinsic parameters of the diagnostic equipment, epipolar geometry, and X-ray images of the patient at different angles, 3D internal positions can be obtained. Both the RGB-D and X-ray instruments are first geometrically calibrated to find their joint spatial transformation. The proposed method is applied to three rotating phantoms. The first two consist of an anthropomorphic head and a torso, which are filled with spherical lead bearings at precise locations. The third one is made of simple foam and has metal needles of several known lengths embedded in it. The results show that it is possible to resolve anatomical positions and lengths with a millimetric level of precision. With the proposed approach, internal 3D reconstructed coordinates and distances can be provided to the physician. It also contributes to reducing the invasiveness of ordinary X-ray environments and can replace other types of clinical examinations that are mainly aimed at measuring or geometrically relating elements inside the patient's body.
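Once the two radiographs are related through the tracked rotation and the calibrated geometry, a 3D point can be recovered from its two image projections by linear triangulation, roughly as sketched below. The projection matrices are assumed to be known; this is a generic textbook illustration, not the paper's implementation.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """P1, P2: (3, 4) projection matrices; x1, x2: (u, v) pixel coordinates of the same point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)                       # homogeneous least-squares solution
    X = vt[-1]
    return X[:3] / X[3]                               # inhomogeneous 3D coordinates
```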


Subject(s)
Imaging, Three-Dimensional/instrumentation; Tomography, X-Ray Computed/instrumentation; Humans; Phantoms, Imaging
10.
Med Phys; 44(4): 1369-1378, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28160525

ABSTRACT

PURPOSE: To automatically adjust the initial window level (WL) and window width (WW) applied to mammographic images. The proposed intensity windowing (IW) method is based on maximizing the mutual information (MI) between a perceptual decomposition of the original 12-bit sources and their screen-displayed 8-bit version. Besides zoom, color inversion, and panning operations, IW is the most commonly performed task in daily screening and has a direct impact on diagnosis and the time involved in the process. METHODS: The authors present a human visual system and perception-based algorithm named GRAIL (Gabor-relying adjustment of image levels). GRAIL initially measures a mammogram's quality based on the MI between the original instance and its Gabor-filtered derivations. From this point on, the algorithm performs an automatic intensity windowing process that outputs the WL/WW that best displays each mammogram for screening. GRAIL starts with the default, high-contrast, wide-dynamic-range 12-bit data and then maximizes the graphical information presented on ordinary 8-bit displays. Tests were carried out with several mammogram databases. They comprise correlations and an ANOVA analysis with the manual IW levels established by a group of radiologists. A complete MATLAB implementation of GRAIL is available at https://github.com/TheAnswerIsFortyTwo/GRAIL. RESULTS: Auto-leveled images show superior quality, both perceptually and objectively, compared to their full intensity range and to the application of other common methods such as global contrast stretching (GCS). The correlations between the human-determined intensity values and the ones estimated by our method surpass those of GCS. The ANOVA analysis with the upper intensity thresholds reveals a similar outcome. GRAIL has also proven to perform especially well with images that contain microcalcifications and/or foreign X-ray-opaque elements and with healthy BI-RADS A-type mammograms. It can also speed up the initial screening time by a mean of 4.5 s per image. CONCLUSIONS: A novel methodology is introduced that enables a quality-driven balancing of the WL/WW of mammographic images. This correction seeks the representation that maximizes the amount of graphical information contained in each image. The presented technique can contribute to the diagnosis and the overall efficiency of the breast screening session by suggesting, at the outset, an optimal and customized windowing setting for each mammogram.
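For reference, applying a WL/WW pair to 12-bit data for 8-bit display reduces to the simple mapping below; the search for the optimal WL/WW via Gabor-based mutual information is the paper's contribution and is not shown here.

```python
import numpy as np

def apply_window(img12: np.ndarray, wl: float, ww: float) -> np.ndarray:
    """img12: raw 12-bit pixel data; wl/ww: window level and width; returns an 8-bit display image."""
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    out = (img12.astype(np.float32) - lo) / max(hi - lo, 1e-6)   # map the window to [0, 1]
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)     # quantize for an 8-bit display
```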


Subject(s)
Image Processing, Computer-Assisted/methods; Automation; Mammography
11.
Radiol Phys Technol; 10(1): 68-81, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27431651

ABSTRACT

We explore three different alternatives for obtaining intrinsic and extrinsic parameters in conventional diagnostic X-ray frameworks: the direct linear transform (DLT), the Zhang method, and the Tsai approach. We analyze and describe the computational, operational, and mathematical differences between these algorithms when they are applied to ordinary radiograph acquisition. For our study, we developed an initial 3D calibration frame with tin cross-shaped fiducials at specific locations. The three studied methods enable the derivation of projection matrices from 3D-to-2D point correspondences. We propose a set of metrics to compare the efficiency of each technique. One of these metrics consists of the calculation of the detector pixel density, which can also be included as part of the quality control sequence in general X-ray settings. The results show a clear superiority of the DLT approach, both in accuracy and operational suitability. We paid special attention to the Zhang calibration method. Although this technique has been extensively implemented in the field of computer vision, it has rarely been tested in depth in common radiograph production scenarios. Zhang's approach can operate on much simpler and more affordable 2D calibration frames, which were also tested in our research. We experimentally confirm that even three or four plane-image correspondences yield accurate focal lengths.
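The DLT step compared in the paper can be sketched as follows: given 3D fiducial coordinates and their detected 2D image projections, the 3x4 projection matrix is the null vector of a stacked linear system. The helper below is a generic textbook formulation, not the authors' code.

```python
import numpy as np

def dlt_projection_matrix(X3d: np.ndarray, x2d: np.ndarray) -> np.ndarray:
    """X3d: (N, 3) fiducial coordinates; x2d: (N, 2) image points; requires N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)                       # projection matrix, defined up to scale
```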


Subject(s)
Radiography/methods; Algorithms; Calibration; Radiography/instrumentation
12.
IEEE Trans Med Imaging; 35(8): 1952-61, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26978665

ABSTRACT

We present a methodology to recover the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials that are present in the scene. After calibration, equivalent points of interest can be easily identified with the help of epipolar geometry. The same procedure also allows the measurement of real anatomic lengths and angles and the recovery of accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and the necessary supporting frames), which can sometimes be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two possible frameworks are envisioned: an X-ray anode that shifts spatially around the patient/object, and a patient that moves/rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom, and a real brachytherapy session were carried out. The results show that it is possible to identify common points with a proper level of accuracy and retrieve three-dimensional locations, lengths, and shapes with a millimetric level of precision. The presented approach is simple and compatible with both current and legacy widespread diagnostic X-ray imaging deployments, and it can represent a good and inexpensive alternative to other radiological modalities such as CT.
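After calibration, the epipolar constraint mentioned above narrows the search for an equivalent point to a line in the other view. The sketch below estimates the fundamental matrix from matched fiducials and returns the epipolar line for a query point; the function name and the use of the eight-point method are illustrative assumptions.

```python
import cv2
import numpy as np

def epipolar_line(pts1: np.ndarray, pts2: np.ndarray, query_pt: np.ndarray) -> np.ndarray:
    """pts1, pts2: (N, 2) matched points with N >= 8; query_pt: (2,) point in image 1.
    Returns (a, b, c) such that a*x + b*y + c = 0 is the corresponding line in image 2."""
    F, _ = cv2.findFundamentalMat(pts1.astype(np.float32), pts2.astype(np.float32), cv2.FM_8POINT)
    lines = cv2.computeCorrespondEpilines(query_pt.reshape(1, 1, 2).astype(np.float32), 1, F)
    return lines.reshape(3)
```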


Subject(s)
Imaging, Three-Dimensional; Calibration; Phantoms, Imaging; X-Rays