Results 1 - 20 of 143,531
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subject(s)
Histological Techniques; Microscopy; Animals; Flow Cytometry; Image Processing, Computer-Assisted
2.
PLoS One ; 19(5): e0302880, 2024.
Article in English | MEDLINE | ID: mdl-38718092

ABSTRACT

Gastrointestinal (GI) cancer is the leading tumour type of the gastrointestinal tract and the fourth most significant cause of cancer death in men and women. A common treatment for GI cancer is radiation therapy, which involves directing a high-energy X-ray beam onto the tumor while avoiding healthy organs. To deliver high doses of X-rays, a system is needed for accurately segmenting the GI tract organs. The study presents a UMobileNetV2 model for semantic segmentation of the small intestine, large intestine, and stomach in MRI images of the GI tract. The model uses MobileNetV2 as the encoder in the contraction path and UNet layers as the decoder in the expansion path. The UW-Madison database, which contains MRI scans from 85 patients and 38,496 images, is used for evaluation. This automated technology has the capability to enhance the pace of cancer therapy by aiding the radiation oncologist in segmenting the organs of the GI tract. The UMobileNetV2 model is compared to three transfer learning models: Xception, ResNet 101, and NASNet mobile, which are used as encoders in the UNet architecture. The model is analyzed using three distinct optimizers, i.e., Adam, RMS, and SGD. The UMobileNetV2 model with the Adam optimizer outperforms all other transfer learning models. It obtains a dice coefficient of 0.8984, an IoU of 0.8697, and a validation loss of 0.1310, proving its ability to reliably segment the stomach and intestines in MRI images of gastrointestinal cancer patients.
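
As a point of reference for the reported metrics, the Dice coefficient and IoU of a predicted binary mask can be computed in a few lines of NumPy; this is a generic sketch, not code from the paper:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Compute the Dice coefficient and IoU for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

# Example: two partially overlapping square masks
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[20:50, 20:50] = True
print(dice_and_iou(a, b))
```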


Subject(s)
Gastrointestinal Neoplasms; Gastrointestinal Tract; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Gastrointestinal Neoplasms/diagnostic imaging; Gastrointestinal Neoplasms/pathology; Gastrointestinal Tract/diagnostic imaging; Semantics; Image Processing, Computer-Assisted/methods; Female; Male; Stomach/diagnostic imaging; Stomach/pathology
3.
Sci Rep ; 14(1): 10560, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38720020

ABSTRACT

Research on video analytics, especially human behavior recognition, has become increasingly popular in recent years. It is widely applied in virtual reality, video surveillance, and video retrieval. With the advancement of deep learning algorithms and computer hardware, the conventional two-dimensional convolution technique for training video models has been replaced by three-dimensional convolution, which enables the extraction of spatio-temporal features. Specifically, the use of 3D convolution in human behavior recognition has been the subject of growing interest. However, the increased dimensionality has led to challenges such as a dramatic increase in the number of parameters, increased time complexity, and a strong dependence on GPUs for effective spatio-temporal feature extraction. Training can be considerably slow without the support of powerful GPU hardware. To address these issues, this study proposes an Adaptive Time Compression (ATC) module. Functioning as an independent component, ATC can be seamlessly integrated into existing architectures and achieves data compression by eliminating redundant frames within video data. The ATC module effectively reduces GPU computing load and time complexity with negligible loss of accuracy, thereby facilitating real-time human behavior recognition.
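
The abstract does not spell out the ATC algorithm; purely as an illustration of temporal compression by discarding near-duplicate frames, a thresholded frame-difference filter might look like the following sketch (the `threshold` default is a hypothetical placeholder):

```python
import numpy as np

def drop_redundant_frames(frames: np.ndarray, threshold: float = 4.0) -> np.ndarray:
    """Keep a frame only if it differs enough from the last kept frame.

    frames: array of shape (T, H, W) with grayscale pixel values.
    threshold: mean absolute intensity difference below which a frame
               is considered redundant (hypothetical default).
    """
    kept = [0]
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(np.float32) -
                      frames[kept[-1]].astype(np.float32)).mean()
        if diff >= threshold:
            kept.append(t)
    return frames[kept]

video = np.random.randint(0, 256, (32, 64, 64), dtype=np.uint8)
print(drop_redundant_frames(video).shape)
```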


Subject(s)
Algorithms; Data Compression; Video Recording; Humans; Data Compression/methods; Human Activities; Deep Learning; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods
4.
Platelets ; 35(1): 2344512, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38722090

ABSTRACT

The last decade has seen increasing use of advanced imaging techniques in platelet research. However, there has been a lag in the development of image analysis methods, leaving much of the information trapped in images. Herein, we present a robust analytical pipeline for finding and following individual platelets over time in growing thrombi. Our pipeline covers four steps: detection, tracking, estimation of tracking accuracy, and quantification of platelet metrics. We detect platelets using a deep learning network for image segmentation, which we validated with proofreading by multiple experts. We then track platelets using a standard particle tracking algorithm and validate the tracks with custom image sampling - essential when following platelets within a dense thrombus. We show that our pipeline is more accurate than previously described methods. To demonstrate the utility of our analytical platform, we use it to show that in vivo thrombus formation is much faster than that ex vivo. Furthermore, platelets in vivo exhibit less passive movement in the direction of blood flow. Our tools are free and open source and written in the popular and user-friendly Python programming language. They empower researchers to accurately find and follow platelets in fluorescence microscopy experiments.


In this paper we describe computational tools to find and follow individual platelets in blood clots recorded with fluorescence microscopy. Our tools work in a diverse range of conditions, both in living animals and in artificial flow chamber models of thrombosis. Our work uses deep learning methods to achieve excellent accuracy. We also provide tools for visualizing data and estimating error rates, so you don't have to just trust the output. Our workflow measures platelet density, shape, and speed, which we use to demonstrate differences in the kinetics of clotting in living vessels versus a synthetic environment. The tools we wrote are open source, written in the popular Python programming language, and freely available to all. We hope they will be of use to other platelet researchers.
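
As a minimal sketch of the linking step (the authors use a standard particle-tracking algorithm; their open-source Python tools should be consulted for the real pipeline), greedy nearest-neighbour matching between detections in consecutive frames could be written as:

```python
import numpy as np
from scipy.spatial import cKDTree

def link_frames(prev_xy: np.ndarray, curr_xy: np.ndarray, max_dist: float = 5.0):
    """Greedily link detections in consecutive frames by nearest neighbour.

    Returns (prev_index, curr_index) pairs. max_dist is a hypothetical
    gating radius in pixels.
    """
    tree = cKDTree(curr_xy)
    dist, idx = tree.query(prev_xy, distance_upper_bound=max_dist)
    links, used = [], set()
    for i in np.argsort(dist):            # best (closest) matches first
        j = idx[i]
        if np.isfinite(dist[i]) and j not in used:
            links.append((int(i), int(j)))
            used.add(j)
    return links

prev = np.array([[10.0, 10.0], [30.0, 30.0]])
curr = np.array([[11.0, 10.5], [31.0, 29.0], [50.0, 50.0]])
print(link_frames(prev, curr))
```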


Subject(s)
Blood Platelets; Deep Learning; Thrombosis; Blood Platelets/metabolism; Thrombosis/blood; Humans; Image Processing, Computer-Assisted/methods; Animals; Mice; Algorithms
5.
Braz Oral Res ; 38: e032, 2024.
Article in English | MEDLINE | ID: mdl-38747819

ABSTRACT

This study assessed the reliability of a color measurement method using images obtained from a charge-coupled device (CCD) camera and a stereoscopic loupe. Disc-shaped specimens were created using the composite Filtek Z350 XT (shades DA1, DA2, DA3, and DA4) (n = 3). CIELAB color coordinates of the specimens were measured using the spectrophotometer SP60 over white and black backgrounds. Images of the same specimens were taken using a CCD camera attached to a stereoscopic loupe. The color of the image was measured (red-green-blue [RGB]) using an image processing software and converted to CIELAB coordinates. For each color coordinate, data from images were adjusted using linear regressions predicting those values from SP60. The whiteness index for dentistry (WID) and translucency parameter (TP00) of the specimens as well as the color differences (ΔE00) among pairwise shades were calculated. Data were analyzed via repeated-measures analysis of variance and Tukey's post hoc test (α = 0.05). Images obtained using the loupe tended to be darker and redder than the actual color. Data adjustment resulted in similar WID, ΔE00, and TP00 values to those observed for the spectrophotometer. Differences were observed only for the WID of shade DA3 and ΔE00 for comparing DA1 and DA3 over the black background. However, these differences were not clinically relevant. The use of adjusted data from images taken using a stereoscopic loupe is considered a feasible method for color measurement.
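
A rough sketch of the conversion and scoring steps described above, using scikit-image for the CIELAB conversion and the CIEDE2000 color difference; the WID formula follows the commonly cited definition (WID = 0.511 L* - 2.324 a* - 1.100 b*), and the patches below are simulated stand-ins for loupe images:

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_lab(rgb_patch: np.ndarray) -> np.ndarray:
    """Average CIELAB coordinates of an RGB patch (floats in [0, 1])."""
    return rgb2lab(rgb_patch).reshape(-1, 3).mean(axis=0)

def whiteness_index(lab: np.ndarray) -> float:
    """Whiteness index for dentistry (WID) from L*, a*, b*."""
    L, a, b = lab
    return 0.511 * L - 2.324 * a - 1.100 * b

patch1 = np.full((10, 10, 3), 0.80)   # lighter simulated specimen
patch2 = np.full((10, 10, 3), 0.75)   # slightly darker simulated specimen
lab1, lab2 = mean_lab(patch1), mean_lab(patch2)
print("WID:", whiteness_index(lab1))
print("dE00:", deltaE_ciede2000(lab1, lab2))
```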


Subject(s)
Color; Colorimetry; Composite Resins; Materials Testing; Spectrophotometry; Reproducibility of Results; Composite Resins/chemistry; Spectrophotometry/methods; Colorimetry/methods; Colorimetry/instrumentation; Analysis of Variance; Reference Values; Linear Models; Image Processing, Computer-Assisted/methods
6.
Opt Lett ; 49(10): 2621-2624, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38748120

ABSTRACT

Fluorescence fluctuation super-resolution microscopy (FF-SRM) has emerged as a promising method for the fast, low-cost, and uncomplicated imaging of biological specimens beyond the diffraction limit. Among FF-SRM techniques, super-resolution radial fluctuation (SRRF) microscopy is a popular technique but is prone to artifacts, resulting in low fidelity, especially under conditions of high-density fluorophores. In this Letter, we developed a novel, to the best of our knowledge, combinatory computational super-resolution microscopy method, namely VeSRRF, that demonstrated superior performance in SRRF microscopy. VeSRRF combined intensity and gradient variance reweighted radial fluctuations (VRRF) and enhanced-SRRF (eSRRF) algorithms, leveraging the enhanced resolution achieved through intensity and gradient variance analysis in VRRF and the improved fidelity obtained from the radial gradient convergence transform in eSRRF. Our method was validated using microtubules in mammalian cells as a standard biological model system. Our results demonstrated that VeSRRF consistently achieved the highest resolution and exceptional fidelity compared to those obtained from other algorithms in both single-molecule localization microscopy (SMLM) and FF-SRM. Moreover, we developed the VeSRRF software package that is freely available on the open-source ImageJ/Fiji software platform to facilitate the use of VeSRRF in the broader community of biomedical researchers. VeSRRF is an exemplary method in which complementary microscopy techniques are integrated holistically, creating superior imaging performance and capabilities.
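
A minimal sketch of the fluctuation principle underlying FF-SRM, not of VeSRRF itself: a per-pixel temporal-variance projection of an image stack, the simplest member of this family of analyses:

```python
import numpy as np

def fluctuation_map(stack: np.ndarray) -> np.ndarray:
    """Per-pixel temporal variance of a fluorescence image stack (T, H, W).

    FF-SRM methods such as SRRF exploit independent fluorophore
    fluctuations; a plain temporal-variance projection only hints at
    the radiality-weighted statistics VeSRRF actually computes.
    """
    return stack.astype(np.float64).var(axis=0)

stack = np.random.poisson(50, size=(200, 64, 64)).astype(np.float32)
print(fluctuation_map(stack).shape)
```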


Subject(s)
Algorithms; Microscopy, Fluorescence; Microscopy, Fluorescence/methods; Microtubules; Image Processing, Computer-Assisted/methods; Animals; Software
7.
Opt Lett ; 49(10): 2729-2732, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38748147

ABSTRACT

In recent years, the emergence of a variety of novel optical microscopy techniques has enabled the generation of virtual optical stains of unlabeled tissue specimens, which have the potential to transform existing clinical histopathology workflows. In this work, we present a simultaneous deep ultraviolet transmission and scattering microscopy system that can produce virtual histology images that show concordance to conventional gold-standard histological processing techniques. The results of this work demonstrate the system's diagnostic potential for characterizing unlabeled thin tissue sections and streamlining histological workflows.
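
The paper's virtual-staining model is not described in the abstract; as a loose illustration of how two label-free contrast channels can be false-colored into an H&E-like image, a Beer-Lambert-style mapping (with hypothetical channel-to-stain coefficients) might look like:

```python
import numpy as np

def pseudo_he(nuclear: np.ndarray, stromal: np.ndarray) -> np.ndarray:
    """Map two grayscale contrast channels (values in [0, 1]) to a
    pseudo-H&E RGB image via a Beer-Lambert-style transform. The
    coefficients below are hypothetical placeholders, not the
    authors' calibration.
    """
    k_nuc = np.array([0.86, 1.00, 0.30])   # haematoxylin-like (purple)
    k_str = np.array([0.05, 1.00, 0.54])   # eosin-like (pink)
    optical_density = nuclear[..., None] * k_nuc + stromal[..., None] * k_str
    return np.exp(-optical_density)        # simulated transmitted light

nuc, stro = np.random.rand(64, 64), np.random.rand(64, 64)
print(pseudo_he(nuc, stro).shape)          # (64, 64, 3)
```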


Subject(s)
Microscopy, Ultraviolet; Microscopy, Ultraviolet/methods; Humans; Ultraviolet Rays; Microscopy/methods; Image Processing, Computer-Assisted/methods
8.
Sci Data ; 11(1): 483, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729970

ABSTRACT

The Sparsely Annotated Region and Organ Segmentation (SAROS) dataset was created using data from The Cancer Imaging Archive (TCIA) to provide a large open-access CT dataset with high-quality annotations of body landmarks. In-house segmentation models were employed to generate annotation proposals on randomly selected cases from TCIA. The dataset includes 13 semantic body region labels (abdominal/thoracic cavity, bones, brain, breast implant, mediastinum, muscle, parotid/submandibular/thyroid glands, pericardium, spinal cord, subcutaneous tissue) and six body part labels (left/right arm/leg, head, torso). Case selection was based on the DICOM series description, gender, and imaging protocol, resulting in 882 patients (438 female) for a total of 900 CTs. Manual review and correction of proposals were conducted in a continuous quality control cycle. Only every fifth axial slice was annotated, yielding 20,150 annotated slices from 28 data collections. For reproducibility on downstream tasks, five cross-validation folds and a test set were pre-defined. The SAROS dataset serves as an open-access resource for training and evaluating novel segmentation models, covering various scanner vendors and diseases.
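
A sketch of how patient-level folds can be constructed so that no patient appears in both training and validation, mirroring the kind of pre-defined folds SAROS ships with; the case counts below are hypothetical:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical case table: each CT belongs to exactly one patient.
patient_ids = np.repeat(np.arange(180), 5)   # 180 patients x 5 CTs each
case_ids = np.arange(len(patient_ids))

# Five folds split by patient, so a patient never spans train and validation.
gkf = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(gkf.split(case_ids, groups=patient_ids)):
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val cases")
```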


Subject(s)
Tomography, X-Ray Computed; Whole Body Imaging; Female; Humans; Male; Image Processing, Computer-Assisted
9.
Sci Rep ; 14(1): 10801, 2024 05 11.
Article in English | MEDLINE | ID: mdl-38734727

ABSTRACT

The non-perfusion area (NPA) of the retina is an important indicator in the visual prognosis of patients with branch retinal vein occlusion (BRVO). However, the current evaluation method of NPA, fluorescein angiography (FA), is invasive and burdensome. In this study, we examined the use of deep learning models for detecting NPA in color fundus images, bypassing the need for FA, and we also investigated the utility of synthetic FA generated from color fundus images. The models were evaluated using the Dice score and Monte Carlo dropout uncertainty. We retrospectively collected 403 sets of color fundus and FA images from 319 BRVO patients. We trained three deep learning models on FA, color fundus images, and synthetic FA. As a result, though the FA model achieved the highest score, the other two models also performed comparably. We found no statistical significance in median Dice scores between the models. However, the color fundus model showed significantly higher uncertainty than the other models (p < 0.05). In conclusion, deep learning models can detect NPAs from color fundus images with reasonable accuracy, though with somewhat less prediction stability. Synthetic FA stabilizes the prediction and reduces misleading uncertainty estimates by enhancing image quality.
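
A generic sketch of Monte Carlo dropout uncertainty estimation as used in the study, not the authors' model: dropout stays active at inference time and the spread across stochastic forward passes serves as the uncertainty:

```python
import torch

def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Monte Carlo dropout: sample predictions with dropout enabled.

    Returns the mean prediction and per-pixel predictive variance.
    model.train() keeps dropout active; fine for this sketch, which
    has no batch-norm layers to worry about.
    """
    model.train()
    with torch.no_grad():
        samples = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

# Toy segmentation head with dropout so the sketch runs end to end
model = torch.nn.Sequential(torch.nn.Conv2d(3, 1, 3, padding=1),
                            torch.nn.Dropout2d(0.5))
mean, var = mc_dropout_predict(model, torch.randn(1, 3, 64, 64))
print(mean.shape, float(var.mean()))
```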


Subject(s)
Deep Learning; Fluorescein Angiography; Fundus Oculi; Retinal Vein Occlusion; Humans; Fluorescein Angiography/methods; Retrospective Studies; Retinal Vein Occlusion/diagnostic imaging; Male; Female; Aged; Middle Aged; Image Processing, Computer-Assisted/methods
10.
Nat Commun ; 15(1): 3992, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734767

ABSTRACT

Visual proteomics attempts to build atlases of the molecular content of cells, but the automated annotation of cryo-electron tomograms remains challenging. Template matching (TM) and methods based on machine learning detect structural signatures of macromolecules. However, their applicability remains limited in terms of both the abundance and size of the molecular targets. Here we show that the performance of TM is greatly improved by using template-specific search parameter optimization and by including higher-resolution information. We establish a TM pipeline with systematically tuned parameters for the automated, objective and comprehensive identification of structures with confidence 10 to 100-fold above the noise level. We demonstrate high-fidelity and high-confidence localizations of nuclear pore complexes, vaults, ribosomes, proteasomes, fatty acid synthases, lipid membranes and microtubules, and individual subunits inside crowded eukaryotic cells. We provide software tools for the generic implementation of our method that is broadly applicable towards realizing visual proteomics.
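
As a 2D stand-in for the paper's 3D template matching, normalized cross-correlation of a small template against a noisy image with scikit-image; the noise-derived confidence criterion is only gestured at in a comment:

```python
import numpy as np
from skimage.feature import match_template

# 2D illustration of TM in tomograms: correlate a small template
# against a noisy field with one embedded bright particle.
rng = np.random.default_rng(0)
image = rng.normal(0, 1, (128, 128))
image[60:69, 40:49] += 3.0                 # embedded particle
template = np.ones((9, 9))

scores = match_template(image, template, pad_input=True)
peak = np.unravel_index(np.argmax(scores), scores.shape)
# In the paper's spirit, a detection would be kept only if its score
# clears a threshold derived from the noise distribution of scores.
print(peak, scores[peak])
```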


Subject(s)
Cryoelectron Microscopy; Electron Microscope Tomography; Proteasome Endopeptidase Complex; Proteomics; Ribosomes; Software; Electron Microscope Tomography/methods; Cryoelectron Microscopy/methods; Ribosomes/ultrastructure; Ribosomes/metabolism; Proteasome Endopeptidase Complex/ultrastructure; Proteasome Endopeptidase Complex/metabolism; Proteasome Endopeptidase Complex/chemistry; Humans; Proteomics/methods; Nuclear Pore/ultrastructure; Nuclear Pore/metabolism; Microtubules/ultrastructure; Microtubules/metabolism; Fatty Acid Synthases/metabolism; Machine Learning; Imaging, Three-Dimensional/methods; Algorithms; Image Processing, Computer-Assisted/methods
11.
Sci Rep ; 14(1): 10781, 2024 05 11.
Article in English | MEDLINE | ID: mdl-38734781

ABSTRACT

Magnetic resonance (MR) acquisitions of the torso are frequently affected by respiratory motion with detrimental effects on signal quality. The motion of organs inside the body is typically decoupled from surface motion and is best captured using rapid MR imaging (MRI). We propose a pipeline for prospective motion correction of the target organ using MR image navigators providing absolute motion estimates in millimeters. Our method is designed to feature multi-nuclear interleaving for non-proton MR acquisitions and to tolerate local transmit coils with inhomogeneous field and sensitivity distributions. OpenCV object tracking was introduced for rapid estimation of in-plane displacements in 2D MR images. A full three-dimensional translation vector was derived by combining displacements from slices of multiple and arbitrary orientations. The pipeline was implemented on 3 T and 7 T MR scanners and tested in phantoms and volunteers. Fast motion handling was achieved with low-resolution 2D MR image navigators and direct implementation of OpenCV into the MR scanner's reconstruction pipeline. Motion-phantom measurements demonstrate high tracking precision and accuracy with minor processing latency. The feasibility of the pipeline for reliable in-vivo motion extraction was shown on heart and kidney data. Organ motion was manually assessed by independent operators to quantify tracking performance. Object tracking performed convincingly on 7774 navigator images from phantom scans and different organs in volunteers. In particular, the kernelized correlation filter (KCF) achieved similar accuracy (74%) as scored from inter-operator comparison (82%) while processing at a rate of over 100 frames per second. We conclude that fast 2D MR navigator images and computer vision object tracking can be used for accurate and rapid prospective motion correction. This and the modular structure of the pipeline allow the proposed method to be used in imaging of moving organs and in challenging applications like cardiac magnetic resonance spectroscopy (MRS) or magnetic resonance imaging (MRI) guided radiotherapy.
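
A minimal OpenCV sketch of KCF tracking on synthetic navigator-like frames; depending on the OpenCV build, the tracker factory lives under `cv2` or `cv2.legacy`, and the frames and bounding box here are fabricated for illustration:

```python
import cv2
import numpy as np

# The KCF factory moved between OpenCV versions; try both locations.
create_kcf = getattr(cv2, "TrackerKCF_create", None) or cv2.legacy.TrackerKCF_create

# Synthetic "navigator" frames: a bright square drifting 3 px/frame.
frames = [np.zeros((128, 128, 3), np.uint8) for _ in range(10)]
for t, f in enumerate(frames):
    cv2.rectangle(f, (20 + 3 * t, 40), (40 + 3 * t, 60), (255, 255, 255), -1)

tracker = create_kcf()
tracker.init(frames[0], (20, 40, 20, 20))      # (x, y, w, h) seed box
for f in frames[1:]:
    ok, box = tracker.update(f)
    if ok:
        print("in-plane displacement x (px):", box[0] - 20)
```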


Subject(s)
Phantoms, Imaging; Humans; Magnetic Resonance Spectroscopy/methods; Magnetic Resonance Imaging/methods; Respiration; Image Processing, Computer-Assisted/methods; Motion; Movement; Algorithms
12.
Sci Rep ; 14(1): 10820, 2024 05 11.
Article in English | MEDLINE | ID: mdl-38734825

ABSTRACT

Advancements in clinical treatment are increasingly constrained by the limitations of supervised learning techniques, which depend heavily on large volumes of annotated data. The annotation process is not only costly but also demands substantial time from clinical specialists. Addressing this issue, we introduce the S4MI (Self-Supervision and Semi-Supervision for Medical Imaging) pipeline, a novel approach that leverages advancements in self-supervised and semi-supervised learning. These techniques engage in auxiliary tasks that do not require labeling, thus simplifying the scaling of machine supervision compared to fully-supervised methods. Our study benchmarks these techniques on three distinct medical imaging datasets to evaluate their effectiveness in classification and segmentation tasks. Notably, we observed that self-supervised learning significantly surpassed the performance of supervised methods in the classification of all evaluated datasets. Remarkably, the semi-supervised approach demonstrated superior outcomes in segmentation, outperforming fully-supervised methods while using 50% fewer labels across all datasets. In line with our commitment to contributing to the scientific community, we have made the S4MI code openly accessible, allowing for broader application and further development of these methods. The code can be accessed at https://github.com/pranavsinghps1/S4MI .
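
S4MI's specific self- and semi-supervised techniques are in the linked repository; as a generic illustration of the semi-supervised idea, a single round of pseudo-labelling with a confidence cutoff (the 0.9 threshold is hypothetical) could look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Simulated labelled and unlabelled feature sets.
rng = np.random.default_rng(1)
X_lab, y_lab = rng.normal(size=(100, 16)), rng.integers(0, 2, 100)
X_unlab = rng.normal(size=(400, 16))

# 1) Train on the labelled pool, 2) pseudo-label confident unlabelled
# samples, 3) retrain on the augmented pool.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_lab, y_lab)
proba = clf.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.9            # hypothetical cutoff
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, proba.argmax(axis=1)[confident]])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
print(f"added {confident.sum()} pseudo-labelled samples")
```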


Subject(s)
Image Processing, Computer-Assisted; Supervised Machine Learning; Humans; Image Processing, Computer-Assisted/methods; Diagnostic Imaging/methods; Algorithms
13.
Radiat Oncol ; 19(1): 55, 2024 May 12.
Article in English | MEDLINE | ID: mdl-38735947

ABSTRACT

BACKGROUND: Currently, automatic esophagus segmentation remains a challenging task due to its small size, low contrast, and large shape variation. We aimed to improve the performance of esophagus segmentation in deep learning by applying a strategy that involves locating the object first and then performing the segmentation task. METHODS: A total of 100 cases with thoracic computed tomography scans from two publicly available datasets were used in this study. A modified CenterNet, an object location network, was employed to locate the center of the esophagus for each slice. Subsequently, the 3D U-net and 2D U-net_coarse models were trained to segment the esophagus based on the predicted object center. A 2D U-net_fine model was trained based on the updated object center according to the 3D U-net model. The dice similarity coefficient and the 95% Hausdorff distance were used as quantitative evaluation indexes for the delineation performance. The characteristics of the automatically delineated esophageal contours by the 2D U-net and 3D U-net models were summarized. Additionally, the impact of the accuracy of object localization on the delineation performance was analyzed. Finally, the delineation performance in different segments of the esophagus was also summarized. RESULTS: The mean dice coefficients of the 3D U-net, 2D U-net_coarse, and 2D U-net_fine models were 0.77, 0.81, and 0.82, respectively. The 95% Hausdorff distances for the above models were 6.55, 3.57, and 3.76, respectively. Compared with the 2D U-net, the 3D U-net has a lower incidence of delineating wrong objects and a higher incidence of missing objects. After using the fine object center, the average dice coefficient was improved by 5.5% in the cases with a dice coefficient less than 0.75, while that value was only 0.3% in the cases with a dice coefficient greater than 0.75. The dice coefficients were lower for the esophagus between the orifice of the inferior and the pulmonary bifurcation compared with the other regions. CONCLUSION: The 3D U-net model tended to delineate fewer incorrect objects but also to miss more objects. A two-stage strategy with accurate object location could enhance the robustness of the segmentation model and significantly improve the esophageal delineation performance, especially for cases with poor delineation results.
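
A generic sketch of the 95% Hausdorff distance used as an evaluation index, computed from surface-to-surface distances via distance transforms; results are in voxel units unless scaled by the spacing:

```python
import numpy as np
from scipy import ndimage

def hd95(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """95th-percentile Hausdorff distance between two binary masks.

    Surface voxels are found by erosion; distances between the two
    surfaces come from Euclidean distance transforms. Multiply by the
    voxel spacing for millimeters.
    """
    def surface(m):
        return m & ~ndimage.binary_erosion(m)
    sa, sb = surface(mask_a.astype(bool)), surface(mask_b.astype(bool))
    dt_a = ndimage.distance_transform_edt(~sa)
    dt_b = ndimage.distance_transform_edt(~sb)
    dists = np.concatenate([dt_b[sa], dt_a[sb]])   # symmetric
    return float(np.percentile(dists, 95))

a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[22:42, 22:42] = True
print(hd95(a, b))
```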


Subject(s)
Deep Learning; Esophagus; Humans; Esophagus/diagnostic imaging; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods
14.
Int J Oral Sci ; 16(1): 34, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38719817

ABSTRACT

Accurate segmentation of oral surgery-related tissues from cone beam computed tomography (CBCT) images can significantly accelerate treatment planning and improve surgical accuracy. In this paper, we propose a fully automated tissue segmentation system for dental implant surgery. Specifically, we propose an image preprocessing method based on data distribution histograms, which can adaptively process CBCT images acquired with different parameters. Based on this, we use the bone segmentation network to obtain segmentation results for the alveolar bone, teeth, and maxillary sinus. We then use the tooth and mandible regions as ROIs for tooth segmentation and mandibular canal segmentation, respectively. The tooth segmentation results also provide the order information of the dentition. The corresponding experimental results show that our method achieves higher segmentation accuracy and efficiency than existing methods. Its average Dice scores on the tooth, alveolar bone, maxillary sinus, and mandibular canal segmentation tasks were 96.5%, 95.4%, 93.6%, and 94.8%, respectively. These results demonstrate that it can accelerate the development of digital dentistry.
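
The paper's histogram-based preprocessing rule is more elaborate than this, but a percentile-driven intensity window is a minimal sketch of adapting to CBCT scans acquired with different parameters (the percentile cutoffs are hypothetical):

```python
import numpy as np

def adaptive_window(volume: np.ndarray, lo_pct: float = 0.5, hi_pct: float = 99.5):
    """Normalize a CBCT volume using percentiles of its own intensity
    histogram, so scans acquired with different parameters land on a
    comparable [0, 1] scale.
    """
    lo, hi = np.percentile(volume, [lo_pct, hi_pct])
    return np.clip((volume.astype(np.float32) - lo) / (hi - lo + 1e-7), 0.0, 1.0)

vol = np.random.normal(1000, 400, size=(32, 64, 64))
windowed = adaptive_window(vol)
print(windowed.min(), windowed.max())
```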


Subject(s)
Cone-Beam Computed Tomography; Cone-Beam Computed Tomography/methods; Humans; Alveolar Process/diagnostic imaging; Image Processing, Computer-Assisted/methods; Artificial Intelligence; Maxillary Sinus/diagnostic imaging; Maxillary Sinus/surgery; Mandible/diagnostic imaging; Mandible/surgery; Tooth/diagnostic imaging
15.
Sci Rep ; 14(1): 10569, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719918

ABSTRACT

Within the medical field of human assisted reproductive technology, a method for interpretable, non-invasive, and objective oocyte evaluation is lacking. To address this clinical gap, a workflow utilizing machine learning techniques has been developed involving automatic multi-class segmentation of two-dimensional images, morphometric analysis, and prediction of developmental outcomes of mature denuded oocytes based on feature extraction and clinical variables. Two separate models have been developed for this purpose: a model to perform multiclass segmentation, and a classifier model to classify oocytes as likely or unlikely to develop into a blastocyst (Day 5-7 embryo). The segmentation model is highly accurate at segmenting the oocyte, ensuring high-quality segmented images (masks) are utilized as inputs for the classifier model (mask model). The mask model displayed an area under the curve (AUC) of 0.63, a sensitivity of 0.51, and a specificity of 0.66 on the test set. The AUC underwent a reduction to 0.57 when features extracted from the ooplasm were removed, suggesting the ooplasm holds the information most pertinent to oocyte developmental competence. The mask model was further compared to a deep learning model, which also utilized the segmented images as inputs. The performance of both models combined in an ensemble model was evaluated, showing an improvement (AUC 0.67) compared to either model alone. The results of this study indicate that direct assessments of the oocyte are warranted, providing the first objective insights into key features for developmental competence, a step beyond the current standard of care, which relies solely on oocyte age as a proxy for quality.
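
A sketch of the ensembling step on simulated probabilities: averaging the two models' outputs and scoring with AUC, as in the reported comparison; the numbers here are fabricated for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated labels and predicted probabilities from two weak models.
rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 200)
p_mask_model = np.clip(y_true * 0.3 + rng.uniform(0, 0.7, 200), 0, 1)
p_deep_model = np.clip(y_true * 0.3 + rng.uniform(0, 0.7, 200), 0, 1)

# Simple probability-averaging ensemble.
p_ensemble = (p_mask_model + p_deep_model) / 2.0
for name, p in [("mask", p_mask_model), ("deep", p_deep_model),
                ("ensemble", p_ensemble)]:
    print(name, round(roc_auc_score(y_true, p), 3))
```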


Subject(s)
Blastocyst; Machine Learning; Oocytes; Humans; Blastocyst/cytology; Blastocyst/physiology; Oocytes/cytology; Female; Embryonic Development; Adult; Fertilization in Vitro/methods; Image Processing, Computer-Assisted/methods
16.
J Transl Med ; 22(1): 434, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720370

ABSTRACT

BACKGROUND: Cardiometabolic disorders pose significant health risks globally. Metabolic syndrome, characterized by a cluster of potentially reversible metabolic abnormalities, is a known risk factor for these disorders. Early detection and intervention for individuals with metabolic abnormalities can help mitigate the risk of developing more serious cardiometabolic conditions. This study aimed to develop an image-derived phenotype (IDP) for metabolic abnormality from unenhanced abdominal computed tomography (CT) scans using deep learning. We used this IDP to classify individuals with metabolic syndrome and predict future occurrence of cardiometabolic disorders. METHODS: A multi-stage deep learning approach was used to extract the IDP from the liver region of unenhanced abdominal CT scans. In a cohort of over 2,000 individuals the IDP was used to classify individuals with metabolic syndrome. In a subset of over 1,300 individuals, the IDP was used to predict future occurrence of hypertension, type II diabetes, and fatty liver disease. RESULTS: For metabolic syndrome (MetS) classification, we compared the performance of the proposed IDP to liver attenuation and visceral adipose tissue area (VAT). The proposed IDP showed the strongest performance (AUC 0.82) compared to attenuation (AUC 0.70) and VAT (AUC 0.80). For disease prediction, we compared the performance of the IDP to baseline MetS diagnosis. The models including the IDP outperformed MetS for type II diabetes (AUCs 0.91 and 0.90) and fatty liver disease (AUCs 0.67 and 0.62) prediction and performed comparably for hypertension prediction (AUCs of 0.77). CONCLUSIONS: This study demonstrated the superior performance of a deep learning IDP compared to traditional radiomic features to classify individuals with metabolic syndrome. Additionally, the IDP outperformed the clinical definition of metabolic syndrome in predicting future morbidities. Our findings underscore the utility of data-driven imaging phenotypes as valuable tools in the assessment and management of metabolic syndrome and cardiometabolic disorders.


Subject(s)
Deep Learning; Metabolic Syndrome; Phenotype; Humans; Metabolic Syndrome/diagnostic imaging; Metabolic Syndrome/complications; Female; Male; Middle Aged; Tomography, X-Ray Computed; Cardiovascular Diseases/diagnostic imaging; Adult; Image Processing, Computer-Assisted/methods
17.
Cancer Imaging ; 24(1): 60, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38720391

ABSTRACT

BACKGROUND: This study systematically compares the impact of innovative deep learning image reconstruction (DLIR, TrueFidelity) to conventionally used iterative reconstruction (IR) on nodule volumetry and subjective image quality (IQ) at highly reduced radiation doses. This is essential in the context of low-dose CT lung cancer screening, where accurate volumetry and characterization of pulmonary nodules in repeated CT scanning are indispensable. MATERIALS AND METHODS: A standardized CT dataset was established using an anthropomorphic chest phantom (Lungman, Kyoto Kaguku Inc., Kyoto, Japan) containing a set of 3D-printed lung nodules including six diameters (4 to 9 mm) and three morphology classes (lobular, spiculated, smooth), with an established ground truth. Images were acquired at varying radiation doses (6.04, 3.03, 1.54, 0.77, 0.41 and 0.20 mGy) and reconstructed with combinations of reconstruction kernels (soft and hard kernel) and reconstruction algorithms (ASIR-V and DLIR at low, medium and high strength). Semi-automatic volumetry measurements and subjective image quality scores recorded by five radiologists were analyzed with multiple linear regression and mixed-effect ordinal logistic regression models. RESULTS: Volumetric errors of nodules imaged with DLIR are up to 50% lower compared to ASIR-V, especially at radiation doses below 1 mGy and when reconstructed with a hard kernel. Also, across all nodule diameters and morphologies, volumetric errors are commonly lower with DLIR. Furthermore, DLIR renders higher subjective IQ, especially at the sub-mGy doses. Radiologists were up to nine times more likely to score the highest IQ-score to these images compared to those reconstructed with ASIR-V. Lung nodules with irregular margins and small diameters also had an increased likelihood (up to five times more likely) to be ascribed the best IQ scores when reconstructed with DLIR. CONCLUSION: We observed that DLIR performs as well as or even outperforms conventionally used reconstruction algorithms in terms of volumetric accuracy and subjective IQ of nodules in an anthropomorphic chest phantom. As such, DLIR potentially allows lowering the radiation dose to participants of lung cancer screening without compromising accurate measurement and characterization of lung nodules.
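
A minimal sketch of the volumetric-error computation enabled by the phantom's known ground truth: nodule volume from a binary mask and voxel spacing, expressed as a percentage error (all inputs below are fabricated):

```python
import numpy as np

def volume_error_pct(pred_mask: np.ndarray, spacing_mm: tuple,
                     true_volume_mm3: float) -> float:
    """Percentage volumetric error of a segmented nodule against a
    known ground-truth volume (here available from the 3D-printed
    phantom nodules)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    measured_mm3 = pred_mask.sum() * voxel_mm3
    return 100.0 * (measured_mm3 - true_volume_mm3) / true_volume_mm3

mask = np.zeros((40, 40, 40), bool); mask[10:20, 10:20, 10:20] = True
print(volume_error_pct(mask, (0.5, 0.5, 0.5), true_volume_mm3=130.0))
```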


Subject(s)
Deep Learning; Lung Neoplasms; Multiple Pulmonary Nodules; Phantoms, Imaging; Radiation Dosage; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Multiple Pulmonary Nodules/diagnostic imaging; Multiple Pulmonary Nodules/pathology; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Solitary Pulmonary Nodule/diagnostic imaging; Solitary Pulmonary Nodule/pathology; Radiographic Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
18.
F1000Res ; 13: 274, 2024.
Article in English | MEDLINE | ID: mdl-38725640

ABSTRACT

Background: The most recent advances in Computed Tomography (CT) image reconstruction technology are deep learning image reconstruction (DLIR) algorithms. Due to drawbacks of iterative reconstruction (IR) techniques, such as unfavorable image texture and nonlinear spatial resolution, DLIR is gradually replacing them. However, the potential use of DLIR in Head and Chest CT has to be examined further. Hence, the purpose of this study is to review the influence of DLIR on radiation dose (RD), image noise (IN), and image quality (IQ) compared with IR and FBP in Head and Chest CT examinations. Methods: We performed a detailed search in PubMed, Scopus, Web of Science, Cochrane Library, and Embase to find articles reporting the use of DLIR for Head and Chest CT examinations between 2017 and 2023. Data were retrieved from the short-listed studies following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results: Out of 196 articles found, 15 were included, with a total sample size of 1,292. Fourteen articles were rated as high quality and one as moderate quality. All studies compared DLIR to IR techniques; five studies also compared DLIR with IR and FBP. The review showed that DLIR improved IQ and reduced RD and IN for CT Head and Chest examinations. Conclusions: DLIR algorithms have demonstrated a notable enhancement in IQ with reduced IN for CT Head and Chest examinations at lower doses compared with IR and FBP. DLIR shows potential for enhancing patient care by reducing radiation risks and increasing diagnostic accuracy.


Subject(s)
Algorithms; Deep Learning; Head; Radiation Dosage; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Head/diagnostic imaging; Image Processing, Computer-Assisted/methods; Thorax/diagnostic imaging; Radiography, Thoracic/methods; Signal-To-Noise Ratio
19.
PLoS One ; 19(5): e0302067, 2024.
Article in English | MEDLINE | ID: mdl-38728318

ABSTRACT

Many lumbar spine diseases are caused by defects or degeneration of the lumbar intervertebral discs (IVD) and are usually diagnosed through inspection of the patient's lumbar spine MRI. Efficient and accurate assessments of the lumbar spine are essential but challenging, as the clinical radiologist workforce is not keeping pace with the demand for radiology services. In this paper, we present a methodology to automatically annotate lumbar spine IVDs with their height and degenerative state, the latter quantified using the Pfirrmann grading system. The method starts with semantic segmentation of a mid-sagittal MRI image into six distinct non-overlapping regions, including the IVD and vertebra regions. Each IVD region is then located and assigned its label. Using geometry, a line segment bisecting the IVD is determined and its Euclidean length is used as the IVD height. We then extract an image feature, called the self-similar color correlogram, from the nucleus of the IVD region as a representation of the region's spatial pixel intensity distribution. Finally, we use the IVD height data and a machine learning classification process to predict the Pfirrmann grade of the IVD. We considered five different deep learning networks and six different machine learning algorithms in our experiments and found the ResNet-50 model and an Ensemble of Decision Trees classifier to be the combination that gives the best results. When tested on a dataset containing 515 MRI studies, we achieved a mean accuracy of 88.1%.
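
A simplified sketch of a self-similar (auto-)correlogram on a grayscale patch; the published feature may differ in neighborhood definition, and this version uses axis-aligned offsets with wrap-around as a simplification:

```python
import numpy as np

def auto_correlogram(img: np.ndarray, levels: int = 8,
                     distances=(1, 3, 5)) -> np.ndarray:
    """Self-similar correlogram: for each quantized intensity level and
    distance d, the probability that a pixel d steps away (axis-aligned
    offsets only, with wrap-around) shares the same level."""
    q = np.floor(img / img.max() * (levels - 1e-6)).astype(int)
    feats = np.zeros((levels, len(distances)))
    for j, d in enumerate(distances):
        for shift in ((d, 0), (-d, 0), (0, d), (0, -d)):
            same = (q == np.roll(q, shift, axis=(0, 1)))
            for lvl in range(levels):
                sel = (q == lvl)
                if sel.any():
                    feats[lvl, j] += (same & sel).sum() / (4 * sel.sum())
    return feats.ravel()

patch = np.random.rand(32, 32)                 # stand-in for an IVD nucleus
print(auto_correlogram(patch).shape)           # (levels * len(distances),)
```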


Subject(s)
Intervertebral Disc; Lumbar Vertebrae; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Lumbar Vertebrae/diagnostic imaging; Intervertebral Disc/diagnostic imaging; Intervertebral Disc Degeneration/diagnostic imaging; Intervertebral Disc Degeneration/pathology; Machine Learning; Male; Female; Middle Aged; Image Processing, Computer-Assisted/methods; Adult
20.
Sci Rep ; 14(1): 10395, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38710726

ABSTRACT

This study assessed the feasibility of code-free deep learning (CFDL) platforms for predicting binary outcomes from fundus images in ophthalmology, evaluating two distinct online platforms (Google Vertex and Amazon Rekognition) and two distinct datasets. Two publicly available datasets, Messidor-2 and BRSET, were utilized for model development. Messidor-2 consists of fundus photographs from diabetic patients, and BRSET is a multi-label dataset. The CFDL platforms were used to create deep learning models, with no preprocessing of the images, by a single ophthalmologist without coding expertise. The performance metrics employed to evaluate the models were F1 score, area under the curve (AUC), precision, and recall. The performance metrics for referable diabetic retinopathy and macular edema were above 0.9 for both tasks on both CFDL platforms. The Google Vertex models demonstrated superior performance compared to the Amazon models, with the BRSET dataset achieving the highest accuracy (AUC of 0.994). Multi-classification tasks using only BRSET achieved similar overall performance between platforms, with Google Vertex achieving AUCs of 0.994 for laterality, 0.942 for age grouping, 0.779 for genetic sex identification, 0.857 for optic, and 0.837 for normality. The study demonstrates the feasibility of using automated machine learning platforms for predicting binary outcomes from fundus images in ophthalmology. It highlights the high accuracy achieved by the models in some tasks and the potential of CFDL as an entry-friendly way for ophthalmologists to familiarize themselves with machine learning concepts.


Subject(s)
Diabetic Retinopathy; Fundus Oculi; Machine Learning; Humans; Diabetic Retinopathy/diagnostic imaging; Female; Male; Deep Learning; Middle Aged; Adult; Health Personnel; Macular Edema/diagnostic imaging; Image Processing, Computer-Assisted/methods; Aged