Results 1 - 20 of 143,700
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subjects
Histological Techniques; Microscopy; Animals; Flow Cytometry; Image Processing, Computer-Assisted
2.
Radiat Oncol ; 19(1): 55, 2024 May 12.
Article in English | MEDLINE | ID: mdl-38735947

ABSTRACT

BACKGROUND: Automatic esophagus segmentation remains a challenging task because of the esophagus's small size, low contrast, and large shape variation. We aimed to improve the performance of deep learning-based esophagus segmentation by applying a strategy that first locates the object and then performs the segmentation task. METHODS: A total of 100 cases with thoracic computed tomography scans from two publicly available datasets were used in this study. A modified CenterNet, an object location network, was employed to locate the center of the esophagus in each slice. Subsequently, 3D U-net and 2D U-net_coarse models were trained to segment the esophagus based on the predicted object center. A 2D U-net_fine model was trained on the object center updated according to the 3D U-net model. The Dice similarity coefficient and the 95% Hausdorff distance were used as quantitative evaluation indexes of delineation performance. The characteristics of the esophageal contours automatically delineated by the 2D U-net and 3D U-net models were summarized, the impact of object localization accuracy on delineation performance was analyzed, and delineation performance in different segments of the esophagus was summarized. RESULTS: The mean Dice coefficients of the 3D U-net, 2D U-net_coarse, and 2D U-net_fine models were 0.77, 0.81, and 0.82, respectively; the corresponding 95% Hausdorff distances were 6.55, 3.57, and 3.76. Compared with the 2D U-net, the 3D U-net delineated wrong objects less often but missed objects more often. After using the fine object center, the mean Dice coefficient improved by 5.5% in cases with a Dice coefficient below 0.75, but by only 0.3% in cases with a Dice coefficient above 0.75. Dice coefficients were lower for the esophagus between the orifice of the inferior and the pulmonary bifurcation than for the other regions. CONCLUSION: The 3D U-net model tended to delineate fewer incorrect objects but also to miss more objects. A two-stage strategy with accurate object location can enhance the robustness of the segmentation model and significantly improve esophageal delineation performance, especially for cases with poor delineation results.
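The locate-then-segment hand-off described in this abstract can be sketched as follows. This is a minimal illustration with our own function name and a plain nested-list image representation, not the paper's code; in the actual pipeline the center comes from a modified CenterNet and the patch is fed to U-net models.

```python
def crop_around_center(image, center, size):
    """Crop a square patch of `size` pixels around a predicted object
    center (row, col), clamped to the image borders, so a segmentation
    network sees a small, centred field of view. Returns the patch and
    its top-left offset for pasting the segmentation back into the
    full image. Assumes `size` does not exceed the image dimensions.
    """
    rows, cols = len(image), len(image[0])
    half = size // 2
    r0 = min(max(center[0] - half, 0), rows - size)
    c0 = min(max(center[1] - half, 0), cols - size)
    return [row[c0:c0 + size] for row in image[r0:r0 + size]], (r0, c0)
```

The clamping is what makes the second stage robust to localization errors near the image border: the crop stays inside the volume even when the predicted center does not.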


Subjects
Deep Learning; Esophagus; Humans; Esophagus/diagnostic imaging; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods
3.
PLoS One ; 19(5): e0302883, 2024.
Article in English | MEDLINE | ID: mdl-38739605

ABSTRACT

Anemia is defined as a low hemoglobin (Hb) concentration and is highly prevalent worldwide. We report on the performance of a smartphone application (app) that records images in RAW format of the palpebral conjunctivae and estimates Hb concentration by relying upon computation of the tissue surface high hue ratio. Images of bilateral conjunctivae were obtained prospectively from a convenience sample of 435 Emergency Department patients using a dedicated smartphone. A previous computer-based and validated derivation data set associating estimated conjunctival Hb (HBc) and the actual laboratory-determined Hb (HBl) was used in deriving Hb estimations using a self-contained mobile app. Accuracy of HBc was 75.4% (95% CI 71.3, 79.4%) for all categories of anemia, and Bland-Altman plot analysis showed a bias of 0.10 and limits of agreement (LOA) of (-4.73, 4.93 g/dL). Analysis of HBc estimation accuracy around different anemia thresholds showed that AUC was maximized at transfusion thresholds of 7 and 9 g/dL which showed AUC values of 0.92 and 0.90 respectively. We found that the app is sufficiently accurate for detecting severe anemia and shows promise as a population-sourced screening platform or as a non-invasive point-of-care anemia classifier.
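The core computation, estimating Hb from the fraction of "high-hue" conjunctival pixels, can be sketched as below. The hue threshold and the linear coefficients here are illustrative placeholders only; the study used coefficients from its own previously validated derivation data set, not these values.

```python
import colorsys

def high_hue_ratio(pixels_rgb, hue_threshold=0.9):
    """Fraction of pixels whose HSV hue is at or above a 'high' threshold.
    Reddish pixels with a slight blue component have hue near 1.0 in the
    colorsys convention. The threshold 0.9 is an assumed placeholder.
    """
    high = sum(1 for r, g, b in pixels_rgb
               if colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0] >= hue_threshold)
    return high / len(pixels_rgb)

def estimate_hb(ratio, slope=20.0, intercept=2.0):
    """Map the high-hue ratio to an Hb estimate (g/dL) with a linear
    model; slope and intercept are hypothetical, not the study's."""
    return slope * ratio + intercept
```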


Subjects
Anemia; Conjunctiva; Hemoglobins; Smartphone; Humans; Anemia/diagnosis; Conjunctiva/blood supply; Conjunctiva/pathology; Female; Male; Hemoglobins/analysis; Middle Aged; Adult; Mobile Applications; Aged; Prospective Studies; Image Processing, Computer-Assisted/methods; Aged, 80 and over
4.
Biomed Phys Eng Express ; 10(4)2024 May 22.
Article in English | MEDLINE | ID: mdl-38744255

ABSTRACT

Purpose. To develop a method to extract statistical low-contrast detectability (LCD) and contrast-detail (C-D) curves from clinical patient images. Method. We used the region of air surrounding the patient as an alternative to a homogeneous region within the patient. A simple graphical user interface (GUI) was created to set the initial configuration for the region of interest (ROI), ROI size, and minimum detectable contrast (MDC). The process started by segmenting the air surrounding the patient with a threshold between -980 HU (Hounsfield units) and -1024 HU to obtain an air mask. The mask was trimmed using the patient center coordinates to avoid distortion from the patient table, and was then used to automatically place square ROIs of a predetermined size. The mean pixel value in HU within each ROI was calculated, and the standard deviation (SD) of all the means was obtained. The MDC for a particular target size was generated by multiplying the SD by 3.29. A C-D curve was obtained by iterating this process over the other ROI sizes. This method was applied to the homogeneous area of the uniformity module of an ACR CT phantom to find the correlation between the parameters inside and outside the phantom, and to 30 thoracic, 26 abdominal, and 23 head images. Results. The phantom images showed a significant linear correlation between the LCDs obtained from outside and inside the phantom, with R² values of 0.67 and 0.99 for variations in tube current and tube voltage, respectively. This indicates that the air region outside the phantom can act as a surrogate for the homogeneous region inside the phantom to obtain the LCD and C-D curves. Conclusion. The C-D curves obtained from outside the ACR CT phantom show a strong linear correlation with those from inside the phantom. The proposed method can also be used to extract the LCD from patient images by using the region of air outside the patient as a surrogate for a region inside the patient.
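The statistical LCD computation described above reduces to a few lines. A minimal sketch, assuming the per-ROI mean HU values have already been measured from the air mask; the function names and data layout are our own.

```python
from statistics import stdev

def minimum_detectable_contrast(roi_means_hu):
    """MDC for one ROI size: 3.29 times the SD of the per-ROI mean HU
    values, as described in the abstract (3.29 corresponds to a 95%
    statistical detection criterion on the noise in the ROI means)."""
    return 3.29 * stdev(roi_means_hu)

def contrast_detail_curve(means_by_roi_size):
    """Iterate over ROI sizes (dict: size -> list of per-ROI mean HU
    values) to build the contrast-detail curve as (size, MDC) pairs."""
    return sorted((size, minimum_detectable_contrast(means))
                  for size, means in means_by_roi_size.items())
```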


Subjects
Algorithms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; User-Computer Interface; Radiographic Image Interpretation, Computer-Assisted/methods
5.
J Microsc ; 294(3): 350-371, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38752662

ABSTRACT

Bioimage data are generated in diverse research fields throughout the life and biomedical sciences. Their potential for advancing scientific progress via modern, data-driven discovery approaches reaches beyond disciplinary borders. To fully exploit this potential, it is necessary to make bioimaging data, in general, multidimensional microscopy images and image series, FAIR, that is, findable, accessible, interoperable and reusable. These FAIR principles for research data management are now widely accepted in the scientific community and have been adopted by funding agencies, policymakers and publishers. To remain competitive and at the forefront of research, implementing the FAIR principles into daily routines is an essential but challenging task for researchers and research infrastructures. Imaging core facilities, well-established providers of access to imaging equipment and expertise, are in an excellent position to lead this transformation in bioimaging research data management. They are positioned at the intersection of research groups, IT infrastructure providers, the institution's administration, and microscope vendors. Within the framework of German BioImaging - Society for Microscopy and Image Analysis (GerBI-GMB), cross-institutional working groups and third-party funded projects were initiated in recent years to advance the bioimaging community's capability and capacity for FAIR bioimage data management. Here, we provide an imaging-core-facility-centric perspective outlining the experience and current strategies in Germany to facilitate the practical adoption of the FAIR principles, closely aligned with the international bioimaging community. We highlight which tools and services are ready to be implemented and what future directions for FAIR bioimage data have to offer.


Subjects
Microscopy; Biomedical Research/methods; Data Management/methods; Image Processing, Computer-Assisted/methods; Microscopy/methods
6.
Med Mycol ; 62(5)2024 May 03.
Article in English | MEDLINE | ID: mdl-38692846

ABSTRACT

Candida albicans is a pathogenic fungus that undergoes morphological transitions between hyphal and yeast forms, adapting to diverse environmental stimuli and exhibiting distinct virulence. Existing studies of antifungal blue light (ABL) therapy have either focused solely on hyphae or neglected to differentiate between morphologies, obscuring potential differential effects. To address this gap, we established a novel dataset of 150 C. albicans-infected mouse skin tissue slice images with meticulously annotated hyphae and yeast. Eleven representative convolutional neural networks were trained and evaluated on this dataset using seven metrics to identify the optimal model for segmenting hyphae and yeast in the original high-pixel-size images. Leveraging the segmentation results, we analyzed the differential impact of blue light on the invasion depth and density of both morphologies within the skin tissue. U-Net-BN outperformed the other models in segmentation accuracy, achieving the best overall performance. While both hyphae and yeast exhibited significant reductions in invasion depth and density at the highest ABL dose (180 J/cm2), only yeast was significantly inhibited at the lower dose (135 J/cm2). This novel finding emphasizes the importance of developing more effective treatment strategies for both morphologies.
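The post-segmentation analysis of invasion depth and density can be sketched as follows, with our own simplified definitions (deepest labeled pixel below a flat skin surface, and labeled-pixel fraction below it); the study's exact measurement procedure may differ.

```python
def invasion_depth_and_density(mask, surface_row, um_per_px):
    """Invasion depth (deepest labeled pixel below the skin surface,
    converted to micrometres) and density (labeled-pixel fraction below
    the surface) from a binary segmentation mask given as a 2-D list of
    0/1 values. A sketch with assumed definitions, not the paper's code.
    """
    depths = [r - surface_row for r, row in enumerate(mask)
              if r >= surface_row and any(row)]
    below = [v for r, row in enumerate(mask) if r >= surface_row for v in row]
    depth_um = (max(depths) if depths else 0) * um_per_px
    density = sum(below) / len(below) if below else 0.0
    return depth_um, density
```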


We studied the effects of blue light therapy on hyphal and yeast forms of Candida albicans. Through image segmentation techniques, we discovered that the changes in invasion depth and density differed between these two forms after exposure to blue light.


Subjects
Candida albicans; Hyphae; Animals; Mice; Candida albicans/radiation effects; Skin/microbiology; Phototherapy/methods; Image Processing, Computer-Assisted/methods; Light; Antifungal Agents/pharmacology; Antifungal Agents/therapeutic use; Neural Networks, Computer; Disease Models, Animal; Candidiasis/microbiology
7.
Methods Mol Biol ; 2800: 89-102, 2024.
Article in English | MEDLINE | ID: mdl-38709480

ABSTRACT

In recent years, Correlative Multimodal Imaging (CMI) has become an "en vogue" technique and a bit of a buzzword. It entails combining information from different imaging modalities to extract more information from a sample that would otherwise not be possible from each individual technique. The best established CMI technology is correlative light and electron microscopy (CLEM), which applies light and electron microscopy on the exact same sample/structure. In general, it entails the detection of fluorescently tagged proteins or structures by light microscopy and subsequently their relative intracellular localization is determined with nanometer resolution using transmission electron microscopy (TEM). Here, we describe the different steps involved in a "simple" CLEM approach. We describe the overall workflow, instrumentation, and basic principles of sample preparation for a CLEM experiment exploiting stable expression of fluorescent proteins.


Subjects
Microscopy, Electron, Transmission; Humans; Microscopy, Electron, Transmission/methods; Microscopy, Fluorescence/methods; Microscopy, Electron/methods; Image Processing, Computer-Assisted/methods; Animals
8.
Methods Mol Biol ; 2800: 167-187, 2024.
Article in English | MEDLINE | ID: mdl-38709484

ABSTRACT

Analyzing the dynamics of mitochondrial content in developing T cells is crucial for understanding the metabolic state during T cell development. However, monitoring mitochondrial content in real-time needs a balance of cell viability and image resolution. In this chapter, we present experimental protocols for measuring mitochondrial content in developing T cells using three modalities: bulk analysis via flow cytometry, volumetric imaging in laser scanning confocal microscopy, and dynamic live-cell monitoring in spinning disc confocal microscopy. Next, we provide an image segmentation and centroid tracking-based analysis pipeline for automated quantification of a large number of microscopy images. These protocols together offer comprehensive approaches to investigate mitochondrial dynamics in developing T cells, enabling a deeper understanding of their metabolic processes.
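The centroid-tracking step of the analysis pipeline can be illustrated with a greedy nearest-neighbour linker between consecutive frames. This is a generic sketch of the technique, not the chapter's exact code; the distance gate `max_dist` is an assumed parameter.

```python
import math

def link_centroids(prev, curr, max_dist=10.0):
    """Greedily link each centroid in frame `prev` to its nearest unused
    centroid in frame `curr`. Returns a dict mapping index in `prev` to
    index in `curr`; centroids farther than `max_dist` from any candidate
    are left unmatched (mitochondrion lost, split, or out of focus).
    """
    links, used = {}, set()
    for i, p in enumerate(prev):
        best_j, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            links[i] = best_j
            used.add(best_j)
    return links
```

Chaining these per-frame links over a time-lapse series yields trajectories from which dynamics such as displacement and content changes can be quantified.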


Subjects
Flow Cytometry; Microscopy, Confocal; Mitochondria; Single-Cell Analysis; T-Lymphocytes; Flow Cytometry/methods; Mitochondria/metabolism; Single-Cell Analysis/methods; T-Lymphocytes/metabolism; T-Lymphocytes/cytology; Microscopy, Confocal/methods; Animals; Image Processing, Computer-Assisted/methods; Humans; Mice; Mitochondrial Dynamics
9.
Methods Mol Biol ; 2800: 203-215, 2024.
Article in English | MEDLINE | ID: mdl-38709486

ABSTRACT

Cell tracking is an essential step in extracting cellular signals from moving cells, which is vital for understanding the mechanisms underlying various biological functions and processes, particularly in organs such as the brain and heart. However, cells in living organisms often exhibit extensive and complex movements caused by organ deformation and whole-body motion. These movements pose a challenge in obtaining high-quality time-lapse cell images and tracking the intricate cell movements in the captured images. Recent advances in deep learning techniques provide powerful tools for detecting cells in low-quality images with densely packed cell populations, as well as estimating cell positions for cells undergoing large nonrigid movements. This chapter introduces the challenges of cell tracking in deforming organs and moving animals, outlines the solutions to these challenges, and presents a detailed protocol for data preparation, as well as for performing cell segmentation and tracking using the latest version of 3DeeCellTracker. This protocol is expected to enable researchers to gain deeper insights into organ dynamics and biological processes.


Subjects
Cell Tracking; Deep Learning; Animals; Cell Tracking/methods; Image Processing, Computer-Assisted/methods; Cell Movement; Brain/cytology; Time-Lapse Imaging/methods
10.
Methods Mol Biol ; 2800: 231-244, 2024.
Article in English | MEDLINE | ID: mdl-38709488

ABSTRACT

In this chapter, we describe protocols for using the CellOrganizer software on the Jupyter Notebook platform to analyze and model cell and organelle shape and spatial arrangement. CellOrganizer is an open-source system for using microscope images to learn statistical models of the structure of cell components and how those components are organized relative to each other. Such models capture the statistical variation in the organization of cellular components by jointly modeling the distributions of their number, shape, and spatial distributions. These models can be created for different cell types or conditions and compared to reflect differences in their spatial organizations. The models are also generative, in that they can be used to synthesize new cell instances reflecting what a model learned and to provide well-structured cell geometries that can be used for biochemical simulations.


Subjects
Software; Image Processing, Computer-Assisted/methods; Models, Biological; Humans; Computer Simulation; Organelles/metabolism
11.
Methods Mol Biol ; 2800: 217-229, 2024.
Article in English | MEDLINE | ID: mdl-38709487

ABSTRACT

High-throughput microscopy has enabled screening of cell phenotypes at unprecedented scale. Systematic identification of cell phenotype changes (such as cell morphology and protein localization changes) is a major analysis goal. Because cell phenotypes are high-dimensional, unbiased approaches to detect and visualize the changes in phenotypes are still needed. Here, we suggest that changes in cellular phenotype can be visualized in reduced dimensionality representations of the image feature space. We describe a freely available analysis pipeline to visualize changes in protein localization in feature spaces obtained from deep learning. As an example, we use the pipeline to identify changes in subcellular localization after the yeast GFP collection was treated with hydroxyurea.


Subjects
Image Processing, Computer-Assisted; Phenotype; Image Processing, Computer-Assisted/methods; High-Throughput Screening Assays/methods; Microscopy/methods; Saccharomyces cerevisiae/metabolism; Saccharomyces cerevisiae/genetics; Deep Learning; Green Fluorescent Proteins/metabolism; Green Fluorescent Proteins/genetics; Hydroxyurea/pharmacology
12.
Radiat Oncol ; 19(1): 61, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773620

ABSTRACT

PURPOSE: Accurate deformable registration of magnetic resonance imaging (MRI) scans containing pathologies is challenging due to changes in tissue appearance. In this paper, we developed a novel automated three-dimensional (3D) convolutional U-Net based deformable image registration (ConvUNet-DIR) method using unsupervised learning to establish correspondence between baseline pre-operative and follow-up MRI scans of patients with brain glioma. METHODS: This study involved multi-parametric brain MRI scans (T1, T1-contrast enhanced, T2, FLAIR) acquired pre-operatively and at follow-up for 160 patients diagnosed with glioma, representing the BraTS-Reg 2022 challenge dataset. ConvUNet-DIR, a deep learning-based deformable registration workflow with a 3D U-Net-style architecture at its core, was developed to establish correspondence between the MRI scans. The workflow consists of three components: (1) the U-Net learns features from pairs of MRI scans and estimates a mapping between them, (2) the grid generator computes the sampling grid from the derived transformation parameters, and (3) the spatial transformation layer generates a warped image by applying the sampling operation using interpolation. A similarity measure was used as the loss function for the network, with a regularization parameter limiting the deformation. The model was trained via unsupervised learning on pairs of MRI scans from a training set (n = 102) and validated on a validation set (n = 26) to assess its generalizability. Its performance was evaluated on a test set (n = 32) by computing the Dice score and structural similarity index (SSIM). The model's performance was also compared with the baseline state-of-the-art VoxelMorph (VM1 and VM2) learning-based algorithms. RESULTS: The ConvUNet-DIR model performed accurate 3D deformable registration, achieving a mean Dice score of 0.975 ± 0.003 and an SSIM of 0.908 ± 0.011 on the test set (n = 32). Experimental results also demonstrated that ConvUNet-DIR outperformed the VoxelMorph algorithms on both Dice (VM1: 0.969 ± 0.006 and VM2: 0.957 ± 0.008) and SSIM (VM1: 0.893 ± 0.012 and VM2: 0.857 ± 0.017). The time required to register a pair of MRI scans is about 1 s on a CPU. CONCLUSIONS: The developed deep learning-based model can perform end-to-end deformable registration of a pair of 3D MRI scans for glioma patients without human intervention. The model provides accurate, efficient, and robust deformable registration without needing pre-alignment or labeling. It outperformed the state-of-the-art VoxelMorph learning-based deformable registration algorithms and other supervised/unsupervised deep learning-based methods reported in the literature.
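The training objective, a similarity measure plus a regularization parameter limiting the deformation, can be sketched in one dimension. The MSE similarity and finite-difference smoothness penalty below are common choices in unsupervised DIR; the paper's exact similarity measure and weighting are not specified here.

```python
def dir_loss(fixed, warped, displacement, lam=0.01):
    """Unsupervised DIR training loss: image similarity (here, MSE
    between the fixed image and the warped moving image) plus a
    smoothness penalty on the displacement field's finite-difference
    gradient, weighted by the regularization parameter `lam`.
    1-D lists of floats, for illustration only.
    """
    n = len(fixed)
    mse = sum((f - w) ** 2 for f, w in zip(fixed, warped)) / n
    smooth = sum((displacement[i + 1] - displacement[i]) ** 2
                 for i in range(len(displacement) - 1)) / max(len(displacement) - 1, 1)
    return mse + lam * smooth
```

Minimizing the first term alone would reward arbitrarily wild warps that match intensities; the second term is what keeps the estimated deformation anatomically plausible.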


Subjects
Brain Neoplasms; Deep Learning; Glioma; Magnetic Resonance Imaging; Unsupervised Machine Learning; Humans; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/radiotherapy; Glioma/diagnostic imaging; Glioma/radiotherapy; Glioma/pathology; Radiation Oncology/methods; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods
13.
Med Eng Phys ; 127: 104162, 2024 May.
Article in English | MEDLINE | ID: mdl-38692762

ABSTRACT

OBJECTIVE: Early detection of cardiovascular diseases is based on accurate quantification of left ventricle (LV) function parameters. In this paper, we propose a fully automatic framework for LV volume and mass quantification from 2D-cine MR images already segmented using U-Net. METHODS: The general framework consists of three main steps: data preparation, including automatic LV localization using a convolutional neural network (CNN) and application of morphological operations to exclude papillary muscles from the LV cavity; automatic extraction of the LV contours using a U-Net architecture; and finally, by integrating temporal information, which is manifested by a spatial motion of myocytes, as a third dimension, calculation of LV volume, LV ejection fraction (LVEF) and left ventricle mass (LVM). Based on these parameters, we detected and quantified cardiac contraction abnormalities using Python software. RESULTS: The CNN was trained with 35 patients and tested on 15 patients from the ACDC database with an accuracy of 99.15%. The U-Net architecture was trained using the ACDC database and evaluated using a local dataset, with a Dice similarity coefficient (DSC) of 99.78% and a Hausdorff distance (HD) of 4.468 mm (p < 0.001). Quantification results showed a strong correlation with physiological measures, with a Pearson correlation coefficient (PCC) of 0.991 for LV volume, 0.962 for LVEF, 0.98 for stroke volume (SV) and 0.923 for LVM after papillary muscle elimination. Clinically, our method allows regional and accurate identification of pathological myocardial segments and can serve as a diagnostic aid for cardiac contraction abnormalities. CONCLUSION: Experimental results prove the usefulness of the proposed method for LV volume and function quantification and verify its potential clinical applicability.
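The quantification step rests on standard formulas, which a short helper makes explicit. The myocardium density of 1.05 g/mL used for LV mass is a textbook constant, not a value taken from this paper.

```python
def lv_function(edv_ml, esv_ml, myocardial_volume_ml, density_g_per_ml=1.05):
    """Standard LV function parameters from end-diastolic (EDV) and
    end-systolic (ESV) cavity volumes plus the segmented myocardial
    volume. Density ~1.05 g/mL is the usual literature value for
    converting myocardial volume to mass.
    """
    sv = edv_ml - esv_ml                            # stroke volume (mL)
    lvef = 100.0 * sv / edv_ml                      # ejection fraction (%)
    lvm = myocardial_volume_ml * density_g_per_ml   # LV mass (g)
    return sv, lvef, lvm
```

For example, EDV 120 mL and ESV 48 mL give SV 72 mL and LVEF 60%, a normal ejection fraction.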


Subjects
Automation; Heart Ventricles; Image Processing, Computer-Assisted; Magnetic Resonance Imaging, Cine; Papillary Muscles; Humans; Heart Ventricles/diagnostic imaging; Magnetic Resonance Imaging, Cine/methods; Papillary Muscles/diagnostic imaging; Papillary Muscles/physiology; Image Processing, Computer-Assisted/methods; Organ Size; Male; Middle Aged; Neural Networks, Computer; Female; Stroke Volume
14.
Sci Rep ; 14(1): 10664, 2024 05 09.
Article in English | MEDLINE | ID: mdl-38724603

ABSTRACT

Kiwifruit soft rot is highly contagious and causes serious economic loss. Therefore, early detection and elimination of soft rot are important for postharvest treatment and storage of kiwifruit. This study aims to accurately detect kiwifruit soft rot based on hyperspectral images by using a deep learning approach for image classification. A dual-branch selective attention capsule network (DBSACaps) was proposed to improve the classification accuracy. The network uses two branches to separately extract the spectral and spatial features so as to reduce their mutual interference, followed by fusion of the two features through the attention mechanism. Capsule network was used instead of convolutional neural networks to extract the features and complete the classification. Compared with existing methods, the proposed method exhibited the best classification performance on the kiwifruit soft rot dataset, with an overall accuracy of 97.08% and a 97.83% accuracy for soft rot. Our results confirm that potential soft rot of kiwifruit can be detected using hyperspectral images, which may contribute to the construction of smart agriculture.


Subjects
Actinidia; Neural Networks, Computer; Plant Diseases; Actinidia/microbiology; Plant Diseases/microbiology; Deep Learning; Hyperspectral Imaging/methods; Fruit/microbiology; Image Processing, Computer-Assisted/methods
15.
Nat Commun ; 15(1): 3942, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729933

ABSTRACT

In clinical oncology, many diagnostic tasks rely on the identification of cells in histopathology images. While supervised machine learning techniques require labels, manual cell annotation is time-consuming. In this paper, we propose a self-supervised framework (enVironment-aware cOntrastive cell represenTation learning: VOLTA) for cell representation learning in histopathology images using a technique that accounts for the cell's mutual relationship with its environment. We subject our model to extensive experiments on data collected from multiple institutions comprising over 800,000 cells and six cancer types. To showcase the potential of our proposed framework, we apply VOLTA to ovarian and endometrial cancers and demonstrate that our cell representations can be utilized to identify the known histotypes of ovarian cancer and provide insights that link histopathology and molecular subtypes of endometrial cancer. Unlike supervised models, we provide a framework that can empower discoveries without any annotation data, even in situations where sample sizes are limited.


Subjects
Endometrial Neoplasms; Ovarian Neoplasms; Humans; Female; Endometrial Neoplasms/pathology; Ovarian Neoplasms/pathology; Machine Learning; Supervised Machine Learning; Algorithms; Image Processing, Computer-Assisted/methods
16.
Sci Rep ; 14(1): 10569, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719918

ABSTRACT

Within the medical field of human assisted reproductive technology, a method for interpretable, non-invasive, and objective oocyte evaluation is lacking. To address this clinical gap, a workflow utilizing machine learning techniques has been developed involving automatic multi-class segmentation of two-dimensional images, morphometric analysis, and prediction of developmental outcomes of mature denuded oocytes based on feature extraction and clinical variables. Two separate models have been developed for this purpose: a model to perform multiclass segmentation, and a classifier model to classify oocytes as likely or unlikely to develop into a blastocyst (Day 5-7 embryo). The segmentation model is highly accurate at segmenting the oocyte, ensuring high-quality segmented images (masks) are utilized as inputs for the classifier model (mask model). The mask model displayed an area under the curve (AUC) of 0.63, a sensitivity of 0.51, and a specificity of 0.66 on the test set. The AUC underwent a reduction to 0.57 when features extracted from the ooplasm were removed, suggesting the ooplasm holds the information most pertinent to oocyte developmental competence. The mask model was further compared to a deep learning model, which also utilized the segmented images as inputs. The performance of both models combined in an ensemble model was evaluated, showing an improvement (AUC 0.67) compared to either model alone. The results of this study indicate that direct assessments of the oocyte are warranted, providing the first objective insights into key features for developmental competence, a step above the current standard of care: solely utilizing oocyte age as a proxy for quality.


Subjects
Blastocyst; Machine Learning; Oocytes; Humans; Blastocyst/cytology; Blastocyst/physiology; Oocytes/cytology; Female; Embryonic Development; Adult; Fertilization in Vitro/methods; Image Processing, Computer-Assisted/methods
17.
Commun Biol ; 7(1): 571, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750282

ABSTRACT

Digital reconstruction has been instrumental in deciphering how in vitro neuron architecture shapes information flow. Emerging approaches reconstruct neural systems as networks with the aim of understanding their organization through graph theory. Computational tools dedicated to this objective build models of nodes and edges based on key cellular features such as somata, axons, and dendrites. Fully automatic implementations of these tools are readily available, but they may also be purpose-built from specialized algorithms in the form of multi-step pipelines. Here we review software tools informing the construction of network models, spanning from noise reduction and segmentation to full network reconstruction. The scope and core specifications of each tool are explicitly defined to assist bench scientists in selecting the most suitable option for their microscopy dataset. Existing tools provide a foundation for complete network reconstruction; however, more progress is needed in establishing morphological bases for directed/weighted connectivity and in software validation.


Subjects
Neurons; Software; Neurons/physiology; Humans; Animals; Algorithms; Nerve Net/physiology; Nerve Net/cytology; Image Processing, Computer-Assisted/methods; Models, Neurologic
18.
Sci Rep ; 14(1): 10909, 2024 05 13.
Article in English | MEDLINE | ID: mdl-38740903

ABSTRACT

To improve the recognition effect of the folk dance image recognition model and put forward new suggestions for teachers' teaching strategies, this study introduces a Deep Neural Network (DNN) to optimize the folk dance training image recognition model. Moreover, a corresponding teaching strategy optimization scheme is proposed according to the experimental results. Firstly, the image preprocessing and feature extraction of DNN are optimized. Secondly, classification and target detection models are established to analyze the folk dance training images, and the C-dance dataset is used for experiments. Finally, the results are compared with those of the Naive Bayes classifier, K-nearest neighbor, decision tree classifier, support vector machine, and logistic regression models. The results of this study provide new suggestions for teaching strategies. The research results indicate that the optimized classification model shows a significant improvement in classification accuracy across various aspects such as action complexity, dance types, movement speed, dance styles, body dynamics, and rhythm. The accuracy, precision, recall, and F1 scores have increased by approximately 14.7, 11.8, 13.2, and 17.4%, respectively. In the study of factors such as different training images, changes in perspective, lighting conditions, and noise interference, the optimized model demonstrates a substantial enhancement in recognition accuracy and robustness. These findings suggest that, compared to traditional models, the optimized model performs better in identifying various dances and movements, enhancing the accuracy and stability of classification. Based on the experimental results, strategies for optimizing the real-time feedback and assessment mechanism in folk dance teaching, as well as the design of personalized learning paths, are proposed. 
Therefore, this study holds the potential to be applied in the field of folk dance, promoting the development and innovation of folk dance education.
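The accuracy, precision, recall, and F1 figures reported above are standard classification metrics that can be reproduced for any model from its predictions; a minimal sketch in plain Python (the dance-move labels below are made-up illustrations, not data from the C-dance experiments):

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy over all classes; precision, recall, F1 for one positive class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical dance-move labels for six test images.
true_moves = ["spin", "leap", "spin", "step", "leap", "spin"]
pred_moves = ["spin", "leap", "step", "step", "leap", "spin"]
acc, prec, rec, f1 = classification_metrics(true_moves, pred_moves, positive="spin")
```

A per-class sweep of this function over every dance type, averaged, gives the macro scores that model comparisons like the one above typically report.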


Subjects
Dancing , Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Deep Learning , Teaching
19.
J Biomed Opt ; 29(6): 066501, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38799979

ABSTRACT

Significance: Spectroscopic single-molecule localization microscopy (sSMLM) combines nanoscopy and spectroscopy, enabling sub-10 nm resolution as well as simultaneous multicolor imaging of multi-labeled samples. Reconstruction of raw sSMLM data using deep learning is a promising approach for visualizing subcellular structures at the nanoscale. Aim: To develop a novel computational approach leveraging deep learning to reconstruct both label-free and fluorescence-labeled sSMLM imaging data. Approach: We developed a two-network-model-based deep learning algorithm, termed DsSMLM, to reconstruct sSMLM data. The effectiveness of DsSMLM was assessed by imaging diverse samples, including label-free single-stranded DNA (ssDNA) fibers, fluorescence-labeled histone markers on COS-7 and U2OS cells, and synthetic DNA origami nanorulers for simultaneous multicolor imaging. Results: For label-free imaging, a spatial resolution of 6.22 nm was achieved on ssDNA fibers; for fluorescence-labeled imaging, DsSMLM revealed the distribution of chromatin-rich and chromatin-poor regions defined by histone markers on the cell nucleus and also offered simultaneous multicolor imaging of nanoruler samples, distinguishing two dyes labeling three emission points separated by 40 nm. With DsSMLM, we observed enhanced spectral profiles with 8.8% more localization detections for single-color imaging and up to 5.05% more for simultaneous two-color imaging. Conclusions: We demonstrate the feasibility of deep learning-based reconstruction applicable to both label-free and fluorescence-labeled sSMLM imaging data. We anticipate our technique will be a valuable tool for high-quality super-resolution imaging, for a deeper understanding of the photophysics of DNA molecules, and for investigating multiple nanoscopic cellular structures and their interactions.
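Multicolor assignment in sSMLM ultimately rests on each localization's emission spectrum. As a point of reference for what the DsSMLM networks learn, here is a minimal sketch of the classical spectral-centroid rule for sorting localizations into dye channels (the wavelengths, intensities, and 600 nm threshold are illustrative assumptions, not values from the paper):

```python
def spectral_centroid(wavelengths_nm, intensities):
    """Intensity-weighted mean emission wavelength of one localization."""
    total = sum(intensities)
    return sum(w * i for w, i in zip(wavelengths_nm, intensities)) / total

def assign_channel(wavelengths_nm, intensities, threshold_nm=600.0):
    """Assign a localization to a dye channel by its spectral centroid."""
    centroid = spectral_centroid(wavelengths_nm, intensities)
    return "dye_A" if centroid < threshold_nm else "dye_B"

# Hypothetical per-localization spectra sampled at five wavelengths (nm).
wl = [560, 580, 600, 620, 640]
green_emitter = [0.9, 1.0, 0.5, 0.2, 0.1]  # emission peaks near 580 nm
red_emitter = [0.1, 0.2, 0.5, 1.0, 0.9]    # emission peaks near 620 nm
```

A learned reconstruction can outperform this fixed-threshold rule on dim or spectrally overlapping emitters, which is consistent with the higher localization-detection rates reported above.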


Subjects
Deep Learning , Single Molecule Imaging , Animals , Single Molecule Imaging/methods , Humans , Chlorocebus aethiops , COS Cells , Microscopy, Fluorescence/methods , Image Processing, Computer-Assisted/methods , DNA, Single-Stranded/chemistry , DNA, Single-Stranded/analysis , Algorithms , Histones/chemistry , Histones/analysis
20.
Article in Chinese | MEDLINE | ID: mdl-38802310

ABSTRACT

Objective: To select chest CT image patterns for the diagnosis of pneumoconiosis and to establish a method for determining the profusion of small round shadows on chest CT. Methods: In April 2021, 66 occupational pneumoconiosis patients with digital radiography (DR) chest radiographs and chest CT imaging data dominated by small round shadows were selected as study subjects. 1.5 mm and 5 mm chest CT axial images, 1 mm and 5 mm chest CT coronal multiplanar reconstruction (MPR) images, and 5 mm chest CT coronal maximum intensity projection (MIP) images were used to observe the imaging characteristics of the pneumoconiosis patients and were compared with the DR chest radiographs to establish an experimental chest CT standard. The consistency of the profusion results between the experimental chest CT standard and GBZ 70-2015 Diagnosis of Occupational Pneumoconiosis was then verified. Results: All 66 subjects were male, including 33 cases of stage Ⅰ, 17 cases of stage Ⅱ, and 16 cases of stage Ⅲ pneumoconiosis. Across the five chest CT image modes of the 66 subjects, the different CT image patterns could clearly display and identify abnormal findings such as small round shadows, large shadows, small shadow aggregation, honeycomb ground-glass shadows, patchy ground-glass shadows, uniform low-profusion ground-glass shadows, reticular ground-glass shadows, cord-like shadows, linear shadows, subpleural spinous shadows, subpleural nodules, various kinds of emphysema, and distorted or interrupted lung markings. Small shadow aggregation was usually accompanied by large shadows. The vascular shadows in the 5 mm CT images had good continuity, and small nodules were easy to distinguish. The 5 mm chest CT coronal MIP images used edge enhancement and were prone to fusion of small shadows and of fibrotic shadows. The 5 mm chest CT coronal MPR images were highly consistent with the DR chest radiographs in terms of reading completeness. Comparing the profusion of the DR chest radiographs and the 5 mm chest CT coronal MPR images of the 66 subjects under the GBZ 70-2015 standard gave a consistency of Kappa=0.64. Comparing the profusion results of the DR chest radiographs (under GBZ 70-2015) and the 5 mm chest CT coronal MPR images (under the experimental chest CT standard) gave Kappa=0.80, indicating high consistency. Conclusion: The 5 mm coronal MPR image is suitable for chest CT imaging in the diagnosis of pneumoconiosis. Following the selection path and method of the GBZ 70-2015 profusion criterion, the experimental chest CT standard established here shows high consistency with GBZ 70-2015 in determining the profusion of small round shadows on 5 mm coronal MPR chest CT images.
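The Kappa values quoted above are Cohen's kappa, an inter-rater agreement statistic corrected for chance agreement; a minimal sketch of its computation (the two hypothetical readings of profusion categories 0-3 below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(ratings_a)
    observed = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    # Expected agreement if both raters assigned categories independently.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical profusion categories from two reading methods for ten subjects.
dr_reading = [1, 2, 1, 3, 0, 2, 1, 1, 2, 3]
mpr_reading = [1, 2, 2, 3, 0, 2, 1, 1, 3, 3]
kappa = cohens_kappa(dr_reading, mpr_reading)
```

On the common Landis-Koch scale, kappa in 0.61-0.80 is read as substantial agreement, which is why 0.80 in the abstract is described as highly consistent while 0.64 is weaker.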


Subjects
Pneumoconiosis , Radiography, Thoracic , Tomography, X-Ray Computed , Humans , Pneumoconiosis/diagnostic imaging , Male , Tomography, X-Ray Computed/methods , Radiography, Thoracic/methods , Middle Aged , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Aged