Results 1-20 of 2,610
1.
Cell ; 187(12): 3141-3160.e23, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38759650

ABSTRACT

Systematic functional profiling of the gene set that directs embryonic development is an important challenge. To tackle this challenge, we used 4D imaging of C. elegans embryogenesis to capture the effects of 500 gene knockdowns and developed an automated approach to compare developmental phenotypes. The automated approach quantifies features (including germ layer cell numbers, tissue position, and tissue shape) to generate temporal curves whose parameterization yields numerical phenotypic signatures. In conjunction with a new similarity metric that operates across phenotypic space, these signatures enabled the generation of ranked lists of genes predicted to have similar functions, accessible in the PhenoBank web portal, for ∼25% of essential development genes. The approach identified new gene and pathway relationships in cell fate specification and morphogenesis and highlighted the utilization of specialized energy generation pathways during embryogenesis. Collectively, the effort establishes the foundation for comprehensive analysis of the gene set that builds a multicellular organism.
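The abstract does not specify the similarity metric; as a purely hypothetical sketch, ranking gene knockdowns by Euclidean distance between their numerical phenotypic signature vectors (all gene names and values below are invented for illustration) might look like:

```python
import numpy as np

# Hypothetical phenotypic signatures: one feature vector per gene knockdown
# (in the paper these come from parameterized temporal feature curves).
signatures = {
    "gene_a": np.array([0.9, 1.2, 0.1]),
    "gene_b": np.array([0.8, 1.1, 0.2]),   # phenotype similar to gene_a
    "gene_c": np.array([3.0, 0.1, 2.5]),   # distinct phenotype
}

def rank_similar(query, sigs):
    """Rank the other genes by Euclidean distance to the query's signature."""
    q = sigs[query]
    dists = {g: float(np.linalg.norm(v - q)) for g, v in sigs.items() if g != query}
    return sorted(dists, key=dists.get)

print(rank_similar("gene_a", signatures))  # gene_b ranks before gene_c
```

In the actual pipeline the signatures parameterize temporal curves of germ layer cell counts and tissue geometry; here they are stand-in three-component vectors.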


Subjects
Caenorhabditis elegans; Embryonic Development; Gene Expression Regulation, Developmental; Animals; Caenorhabditis elegans/embryology; Caenorhabditis elegans/genetics; Caenorhabditis elegans/metabolism; Caenorhabditis elegans Proteins/metabolism; Caenorhabditis elegans Proteins/genetics; Embryo, Nonmammalian/metabolism; Gene Expression Profiling/methods; Gene Knockdown Techniques; Phenotype
2.
Cell ; 173(3): 792-803.e19, 2018 04 19.
Article in English | MEDLINE | ID: mdl-29656897

ABSTRACT

Microscopy is a central method in life sciences. Many popular methods, such as antibody labeling, are used to add physical fluorescent labels to specific cellular constituents. However, these approaches have significant drawbacks, including inconsistency; limitations in the number of simultaneous labels because of spectral overlap; and necessary perturbations of the experiment, such as fixing the cells, to generate the measurement. Here, we show that a computational machine-learning approach, which we call "in silico labeling" (ISL), reliably predicts some fluorescent labels from transmitted-light images of unlabeled fixed or live biological samples. ISL predicts a range of labels, such as those for nuclei, cell type (e.g., neural), and cell state (e.g., cell death). Because prediction happens in silico, the method is consistent, is not limited by spectral overlap, and does not disturb the experiment. ISL generates biological measurements that would otherwise be problematic or impossible to acquire.


Subjects
Fluorescent Dyes/chemistry; Image Processing, Computer-Assisted/methods; Microscopy, Fluorescence/methods; Motor Neurons/cytology; Algorithms; Animals; Cell Line, Tumor; Cell Survival; Cerebral Cortex/cytology; Humans; Induced Pluripotent Stem Cells/cytology; Machine Learning; Neural Networks, Computer; Neurosciences; Rats; Software; Stem Cells/cytology
3.
Cell ; 170(2): 393-406.e28, 2017 Jul 13.
Article in English | MEDLINE | ID: mdl-28709004

ABSTRACT

Assigning behavioral functions to neural structures has long been a central goal in neuroscience and is a necessary first step toward a circuit-level understanding of how the brain generates behavior. Here, we map the neural substrates of locomotion and social behaviors for Drosophila melanogaster using automated machine-vision and machine-learning techniques. From videos of 400,000 flies, we quantified the behavioral effects of activating 2,204 genetically targeted populations of neurons. We combined a novel quantification of anatomy with our behavioral analysis to create brain-behavior correlation maps, which are shared as browsable web pages and interactive software. Based on these maps, we generated hypotheses of regions of the brain causally related to sensory processing, locomotor control, courtship, aggression, and sleep. Our maps directly specify genetic tools to target these regions, which we used to identify a small population of neurons with a role in the control of walking.


Subjects
Brain Mapping/methods; Drosophila melanogaster/physiology; Animals; Behavior, Animal; Female; Locomotion; Male; Software
4.
EMBO J ; 42(19): e113288, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37671467

ABSTRACT

Coordinated cardiomyocyte contraction drives the mammalian heart to beat and circulate blood. No consensus model of cardiomyocyte geometrical arrangement exists, due to the limited spatial resolution of whole heart imaging methods and the piecemeal nature of studies based on histological sections. By combining microscopy and computer vision, we produced the first-ever three-dimensional cardiomyocyte orientation reconstruction across mouse ventricular walls at the micrometer scale, representing a gain of three orders of magnitude in spatial resolution. We recovered a cardiomyocyte arrangement aligned to the long-axis direction of the outer ventricular walls. This cellular network lies in a thin shell and forms a continuum with longitudinally arranged cardiomyocytes in the inner walls, with a complex geometry at the apex. Our reconstruction methods can be applied at fine spatial scales to further understanding of heart wall electrical function and mechanics, and set the stage for the study of micron-scale fiber remodeling in heart disease.


Subjects
Heart Ventricles; Myocytes, Cardiac; Animals; Mice; Mammals
5.
J Cell Sci ; 137(4)2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38264939

ABSTRACT

Filopodia are slender, actin-filled membrane projections used by various cell types for environment exploration. Analyzing filopodia often involves visualizing them using actin, filopodia tip, or membrane markers. Because of the diversity of cell types that extend filopodia, from amoeboid to mammalian, it can be challenging to find a reliable filopodia analysis workflow suited to a given cell type and preferred visualization method. The lack of an automated workflow capable of analyzing amoeboid filopodia with only a filopodia tip label prompted the development of filoVision. filoVision is an adaptable deep learning platform featuring the tools filoTips and filoSkeleton. filoTips labels filopodia tips and the cytosol using a single tip marker, allowing information extraction without actin or membrane markers. In contrast, filoSkeleton combines tip marker signals with actin labeling for a more comprehensive analysis of filopodia shafts in addition to tip protein analysis. The ZeroCostDL4Mic deep learning framework facilitates accessibility and customization for different datasets and cell types, making filoVision a flexible tool for automated analysis of tip-marked filopodia across various cell types and user data.


Subjects
Actins; Deep Learning; Animals; Actins/metabolism; Pseudopodia/metabolism; Mammals/metabolism
6.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38920347

ABSTRACT

Artificial intelligence (AI)-powered drug development has received remarkable attention in recent years. It addresses the limitations of traditional experimental methods, which are costly and time-consuming. While many surveys have attempted to summarize related research, they focus only on general AI or on specific aspects such as natural language processing and graph neural networks. Considering the rapid advances in computer vision, using molecular images to enable AI appears to be a more intuitive and effective approach, since each chemical substance has a unique visual representation. In this paper, we provide the first survey on image-based molecular representation for drug development. The survey proposes a taxonomy based on the learning paradigms in computer vision and reviews a large number of corresponding papers, highlighting the contributions of molecular visual representation in drug development. We also discuss the applications, limitations, and future directions in the field. We hope this survey can offer valuable insight into the use of image-based molecular representation learning in the context of drug development.


Subjects
Drug Development; Drug Development/methods; Artificial Intelligence; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Machine Learning; Drug Discovery/methods
7.
Proc Natl Acad Sci U S A ; 120(27): e2220417120, 2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37364096

ABSTRACT

A longstanding line of research in urban studies explores how cities can be understood through their appearance. However, what remains unclear is to what extent urban dwellers' everyday life can be explained by the visual clues of the urban environment. In this paper, we address this question by applying a computer vision model to 27 million street view images across 80 counties in the United States. Then, we use the spatial distribution of notable urban features identified through the street view images, such as street furniture, sidewalks, building façades, and vegetation, to predict the socioeconomic profiles of their immediate neighborhood. Our results show that these urban features alone can account for up to 83% of the variance in people's travel behavior, 62% in poverty status, 64% in crime, and 68% in health behaviors. The results outperform models based on points of interest (POI), population, and other demographic data alone. Moreover, incorporating urban features captured from street view images can improve the explanatory power of these other methods by 5% to 25%. We propose "urban visual intelligence" as a process to uncover hidden city profiles, infer, and synthesize urban information with computer vision and street view images. This study serves as a foundation for future urban research interested in this process and understanding the role of visual aspects of the city.
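The variance-explained figures above come from predictive models whose details are in the paper; a minimal, self-contained sketch of the underlying idea (ordinary least squares on synthetic, invented "urban feature" data, not the authors' actual model or data) is:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: per-neighborhood counts of street-view features
# (e.g. sidewalks, street furniture, vegetation) and a socioeconomic outcome.
n = 200
X = rng.normal(size=(n, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.5, size=n)   # outcome = signal + noise

# Ordinary least squares fit with an intercept, then R^2
# (the share of outcome variance explained by the features).
Xb = np.c_[X, np.ones(n)]
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
resid = y - Xb @ w
r2 = 1 - resid.var() / y.var()
print(round(r2, 3))
```

With the signal-to-noise ratio chosen here, the features explain most of the outcome's variance, mirroring the kind of "up to 83% of variance" statement in the abstract (the number itself is not reproduced).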

8.
Gastroenterology ; 166(1): 155-167.e2, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37832924

ABSTRACT

BACKGROUND & AIMS: Endoscopic assessment of ulcerative colitis (UC) typically reports only the maximum severity observed. Computer vision methods may better quantify mucosal injury detail, which varies among patients. METHODS: Endoscopic video from the UNIFI clinical trial (A Study to Evaluate the Safety and Efficacy of Ustekinumab Induction and Maintenance Therapy in Participants With Moderately to Severely Active Ulcerative Colitis) comparing ustekinumab and placebo for UC was processed in a computer vision analysis that spatially mapped the Mayo Endoscopic Score (MES) to generate the Cumulative Disease Score (CDS). CDS was compared with the MES for differentiating ustekinumab vs placebo treatment response and agreement with symptomatic remission at week 44. Statistical power, effect size, and estimated sample sizes for detecting endoscopic differences between treatments were calculated using both CDS and MES measures. Endoscopic video from a separate phase 2 clinical trial replication cohort was analyzed for validation of CDS performance. RESULTS: Among 748 induction and 348 maintenance patients, CDS was lower in ustekinumab vs placebo users at week 8 (141.9 vs 184.3; P < .0001) and week 44 (78.2 vs 151.5; P < .0001). CDS was correlated with the MES (P < .0001) and all clinical components of the partial Mayo score (P < .0001). Stratification by pretreatment CDS revealed ustekinumab was more effective than placebo (P < .0001), with increasing effect in severe vs mild disease (-85.0 vs -55.4; P < .0001). Compared with the MES, CDS was more sensitive to change, requiring 50% fewer participants to demonstrate endoscopic differences between ustekinumab and placebo (Hedges' g = 0.743 vs 0.460). CDS performance in the JAK-UC replication cohort was similar to UNIFI.
CONCLUSIONS: As an automated and quantitative measure of global endoscopic disease severity, the CDS offers artificial intelligence enhancement of traditional MES capability to better evaluate UC in clinical trials and potentially practice.
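The sample-size claim follows from standard power analysis: required participants scale with the inverse square of the effect size. A sketch under textbook assumptions (two-sided α = 0.05, 80% power, equal arms; not necessarily the trial's exact calculation):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with Hedges' small-sample bias correction."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                 # Cohen's d with pooled SD
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # bias-correction factor
    return d * j

def n_per_group(g, z_alpha=1.96, z_beta=0.84):
    """Approximate participants per arm for a two-sample comparison."""
    return math.ceil(2 * ((z_alpha + z_beta) / g) ** 2)

# Larger effect size -> quadratically fewer participants needed.
print(n_per_group(0.743), n_per_group(0.460))
```

Under these assumptions, g = 0.743 needs 29 participants per arm versus 75 for g = 0.460, consistent in direction with the roughly 50% reduction reported in the abstract.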


Subjects
Colitis, Ulcerative; Humans; Artificial Intelligence; Colitis, Ulcerative/diagnosis; Colitis, Ulcerative/drug therapy; Colonoscopy/methods; Computers; Remission Induction; Severity of Illness Index; Ustekinumab/adverse effects
9.
Proc Natl Acad Sci U S A ; 119(39): e2115730119, 2022 09 27.
Article in English | MEDLINE | ID: mdl-36122244

ABSTRACT

Regardless of how much data artificial intelligence agents have available, agents will inevitably encounter previously unseen situations in real-world deployments. Reacting to novel situations by acquiring new information from other people (socially situated learning) is a core faculty of human development. Unfortunately, socially situated learning remains an open challenge for artificial intelligence agents because they must learn how to interact with people to seek out the information that they lack. In this article, we formalize the task of socially situated artificial intelligence (agents that seek out new information through social interactions with people) as a reinforcement learning problem where the agent learns to identify meaningful and informative questions via rewards observed through social interaction. We manifest our framework as an interactive agent that learns how to ask natural language questions about photos as it broadens its visual intelligence on a large photo-sharing social network. Unlike active-learning methods, which implicitly assume that humans are oracles willing to answer any question, our agent adapts its behavior based on observed norms of which questions people are or are not interested in answering. Through an 8-month deployment in which our agent interacted with 236,000 social media users, our agent improved its performance at recognizing new visual information by 112%. A controlled field experiment confirmed that our agent outperformed an active-learning baseline by 25.6%. This work advances opportunities for continuously improving artificial intelligence (AI) agents that better respect norms in open social environments.
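The agent's core mechanism (learning which questions yield answers from social reward) can be caricatured as a multi-armed bandit. This epsilon-greedy sketch, with invented question templates and answer rates, is an illustration only, not the paper's actual model:

```python
import random

# Hypothetical question templates; reward = 1 if a user answers, 0 otherwise.
templates = ["what_is_this", "where_taken", "who_is_pictured"]
counts = {t: 0 for t in templates}
values = {t: 0.0 for t in templates}   # running answer-rate estimates

def choose(eps=0.1):
    """Epsilon-greedy: usually ask the template with the highest answer rate."""
    if random.random() < eps:
        return random.choice(templates)
    return max(values, key=values.get)

def update(t, reward):
    """Incremental-mean update of the observed answer rate for template t."""
    counts[t] += 1
    values[t] += (reward - values[t]) / counts[t]

random.seed(0)
# Simulated social norms: people rarely answer "who_is_pictured".
answer_rate = {"what_is_this": 0.8, "where_taken": 0.5, "who_is_pictured": 0.05}
for _ in range(2000):
    t = choose()
    update(t, 1 if random.random() < answer_rate[t] else 0)

print(max(values, key=values.get))
```

After enough interactions, the estimated answer rates track the simulated norms, so the agent concentrates on questions people are willing to answer, the qualitative behavior the abstract describes.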


Subjects
Artificial Intelligence; Reinforcement, Psychology; Social Interaction; Humans; Reward; Social Norms
10.
Semin Cancer Biol ; 95: 52-74, 2023 10.
Article in English | MEDLINE | ID: mdl-37473825

ABSTRACT

Head and neck tumors (HNTs) constitute a multifaceted ensemble of pathologies that primarily involve regions such as the oral cavity, pharynx, and nasal cavity. The intricate anatomical structure of these regions poses considerable challenges to efficacious treatment strategies. Despite the availability of myriad treatment modalities, the overall therapeutic efficacy for HNTs continues to remain subdued. In recent years, the deployment of artificial intelligence (AI) in healthcare practices has garnered noteworthy attention. AI modalities, inclusive of machine learning (ML), neural networks (NNs), and deep learning (DL), when amalgamated into the holistic management of HNTs, promise to augment the precision, safety, and efficacy of treatment regimens. The integration of AI within HNT management is intricately intertwined with domains such as medical imaging, bioinformatics, and medical robotics. This article intends to scrutinize the cutting-edge advancements and prospective applications of AI in the realm of HNTs, elucidating AI's indispensable role in prevention, diagnosis, treatment, prognostication, research, and inter-sectoral integration. The overarching objective is to stimulate scholarly discourse and invigorate insights among medical practitioners and researchers to propel further exploration, thereby facilitating superior therapeutic alternatives for patients.


Subjects
Artificial Intelligence; Head and Neck Neoplasms; Humans; Machine Learning; Neural Networks, Computer; Head and Neck Neoplasms/diagnosis; Head and Neck Neoplasms/therapy; Diagnostic Imaging/methods
11.
BMC Bioinformatics ; 25(1): 178, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714921

ABSTRACT

BACKGROUND: In low- and middle-income countries, healthcare providers primarily use paper health records for capturing data, predominantly because of the prohibitive cost of acquiring and maintaining automated data capture devices and electronic medical records. Data recorded on paper health records are not easily accessible to healthcare providers in a digital format. The lack of real-time accessible digital data limits the ability of healthcare providers, researchers, and quality improvement champions to leverage data to improve patient outcomes. In this project, we demonstrate the novel use of computer vision software to digitize handwritten intraoperative data elements from smartphone photographs of paper anesthesia charts from the University Teaching Hospital of Kigali. We specifically report our approach to digitizing checkbox data, symbol-denoted systolic and diastolic blood pressure, and physiological data. METHODS: We implemented approaches for removing perspective distortions from smartphone photographs, removing shadows, and improving image readability through morphological operations. YOLOv8 models were used to deconstruct the anesthesia paper chart into specific data sections. Handwritten blood pressure symbols and physiological data were identified, and values were assigned using deep neural networks. Our work builds upon previous research by improving its methods, updating the deep learning models to newer architectures, and consolidating them into a single piece of software. RESULTS: The model for extracting the sections of the anesthesia paper chart achieved an average box precision of 0.99, an average box recall of 0.99, and an mAP50-95 of 0.97. Our software digitizes checkbox data with greater than 99% accuracy and digitizes blood pressure data with a mean absolute error of 1.0 and 1.36 mmHg for systolic and diastolic blood pressure, respectively. Overall accuracy for physiological data, which include oxygen saturation, inspired oxygen concentration, and end-tidal carbon dioxide concentration, was 85.2%. CONCLUSIONS: We demonstrate that under normal photography conditions we can digitize checkbox, blood pressure, and physiological data to within human accuracy when provided legible handwriting. Our contributions improve access to digital data for healthcare practitioners in low- and middle-income countries.
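The perspective-correction step can be illustrated with a direct linear transform that maps a photographed chart's corner points onto an upright rectangle. This pure-NumPy sketch (with invented corner coordinates; a production pipeline would typically use a library such as OpenCV) solves for the 3x3 homography:

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 perspective transform mapping 4 src points to dst."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography (up to scale) is the null vector of A, found via SVD.
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply the homography to a 2D point (homogeneous divide)."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)

# Hypothetical corners of a tilted chart photo -> upright chart rectangle.
src = [(10, 20), (390, 45), (370, 520), (25, 500)]
dst = [(0, 0), (400, 0), (400, 560), (0, 560)]
H = homography(src, dst)
print(warp_point(H, (390, 45)))
```

Warping every pixel through the inverse of this transform yields the deskewed chart on which shadow removal and morphological cleanup would then run.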


Subjects
Smartphone; Humans; Anesthesia; Electronic Health Records; Developing Countries; Image Processing, Computer-Assisted/methods; Deep Learning
12.
BMC Bioinformatics ; 25(1): 123, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38515011

ABSTRACT

BACKGROUND: Chromosomes are among the most fundamental structures of cell biology, organizing the hierarchical information held in DNA. DNA compacts itself by forming loops, and these regions are bound by various proteins, including CTCF, SMC3, and histone H3. Numerous sequencing methods, such as Hi-C, ChIP-seq, and Micro-C, have been developed to investigate these properties. Utilizing these data, scientists have developed a variety of loop prediction techniques that have greatly improved the characterization of loops and related features. RESULTS: In this study, we categorized 22 loop calling methods and conducted a comprehensive study of 11 of them. Additionally, we have provided detailed insights into the methodologies underlying these algorithms for loop detection, categorizing them into five distinct groups based on their fundamental approaches. Furthermore, we have included critical information such as resolution, input and output formats, and parameters. For this analysis, we utilized the GM12878 Hi-C datasets at 5 kb, 10 kb, 100 kb, and 250 kb resolutions. Our evaluation criteria encompassed various factors, including memory usage, running time, sequencing depth, and recovery of protein-specific sites such as CTCF, H3K27ac, and RNAPII. CONCLUSION: This analysis offers insights into the loop detection process of each method, along with its strengths and weaknesses, enabling readers to choose methods suited to their datasets. We evaluate the capabilities of these tools and introduce a novel Biological, Consistency, and Computational robustness score (BCC score) to measure their overall robustness, ensuring a comprehensive evaluation of their performance.
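The abstract does not give the BCC score's exact formula; one plausible reading, shown here as a hypothetical sketch with invented tool names and metric values, is to min-max normalize each of the three axes across tools and average them:

```python
import numpy as np

# Hypothetical raw metrics for three loop callers, one row per tool:
# [biological (e.g. CTCF recovery), consistency (across depths),
#  computational (e.g. inverse runtime)] -- higher is better on each axis.
raw = {
    "caller_a": [0.90, 0.80, 0.30],
    "caller_b": [0.60, 0.85, 0.90],
    "caller_c": [0.40, 0.50, 0.95],
}

def bcc(raw_scores):
    """Min-max normalize each column across tools, then average the axes."""
    m = np.array(list(raw_scores.values()), dtype=float)
    lo, hi = m.min(axis=0), m.max(axis=0)
    norm = (m - lo) / (hi - lo)
    return dict(zip(raw_scores, norm.mean(axis=1)))

scores = bcc(raw)
print(max(scores, key=scores.get))  # the tool with the most balanced profile
```

A tool that is merely adequate on all three axes can outrank one that excels on a single axis, which is the point of a combined robustness score.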


Subjects
Chromatin; Chromosomes; Chromatin/genetics; DNA; Chromatin Immunoprecipitation Sequencing; Algorithms
13.
Am J Epidemiol ; 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39013794

ABSTRACT

Deep learning is a subfield of artificial intelligence and machine learning, based mostly on neural networks and often combined with attention mechanisms, that has been used to detect and identify objects in text, audio, images, and video. Serghiou and Rough (Am J Epidemiol. 0000;000(00):0000-0000) present a primer for epidemiologists on deep learning models. These models provide substantial opportunities for epidemiologists to expand and amplify their research in both data collection and analysis by increasing the geographic reach of studies, including more research subjects, and working with large or high-dimensional data. The tools for implementing deep learning methods are not yet as straightforward or ubiquitous for epidemiologists as the traditional regression methods found in standard statistical software, but there are exciting opportunities for interdisciplinary collaboration with deep learning experts, just as epidemiologists have collaborated with statisticians, healthcare providers, urban planners, and other professionals. Despite the novelty of these methods, epidemiological principles of assessing bias, study design, interpretation, and others still apply when implementing deep learning methods or assessing the findings of studies that have used them.

14.
Annu Rev Neurosci ; 39: 217-36, 2016 07 08.
Article in English | MEDLINE | ID: mdl-27090952

ABSTRACT

In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important both to obtaining biologically relevant behavioral data and to enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.


Subjects
Algorithms; Artificial Intelligence; Behavior, Animal/physiology; Behavioral Sciences; Learning/physiology; Animals; Automation/methods; Behavioral Sciences/methods; Humans
15.
Article in English | MEDLINE | ID: mdl-38992406

ABSTRACT

Artificial intelligence (AI) refers to computer-based methodologies that use data to teach a computer to solve pre-defined tasks; these methods can be applied to identify patterns in large multi-modal data sources. AI applications in inflammatory bowel disease (IBD) include predicting response to therapy, disease activity scoring of endoscopy, drug discovery, and identifying bowel damage in images. As a complex disease with entangled relationships among genomics, metabolomics, the microbiome, and the environment, IBD stands to benefit greatly from methodologies that can handle this complexity. We describe current applications and critical challenges, and propose future directions of AI in IBD.

16.
J Comput Chem ; 45(16): 1380-1389, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38407482

ABSTRACT

Electrical equivalent circuits are a widely applied tool with which electrical processes can be rationalized. Fields ranging from bioelectrochemistry to batteries and fuel cells make use of this tool. Enabling meta-analysis of the similarities and differences among the circuits used will help to identify commonly used circuits and aid in evaluating the underlying physics. We present a method and an implementation that enable the conversion of circuits included in scientific publications into a machine-readable form for generating machine learning datasets or circuit simulations.
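A machine-readable circuit encoding can be as simple as a tokenized series/parallel notation. The notation, element names, and parser below are invented for illustration and are not the paper's actual format:

```python
from dataclasses import dataclass

@dataclass
class Element:
    kind: str   # "R" resistor, "C" capacitor, etc. (first letter of the name)
    name: str

def parse(circuit: str):
    """Tokenize a toy circuit string, e.g. 'R0-(R1|C1)' for a resistor in
    series ('-') with a parallel ('|') RC pair, a Randles-like circuit."""
    tokens, buf = [], ""
    for ch in circuit:
        if ch in "-(|)":
            if buf:
                tokens.append(Element(buf[0], buf))
                buf = ""
            tokens.append(ch)
        else:
            buf += ch
    if buf:
        tokens.append(Element(buf[0], buf))
    return tokens

print([t.name if isinstance(t, Element) else t for t in parse("R0-(R1|C1)")])
```

Once circuits are tokenized like this, counting recurring topologies across publications (the meta-analysis the abstract motivates) becomes a string/tree comparison problem.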

17.
J Urol ; 211(4): 575-584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38265365

ABSTRACT

PURPOSE: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, thereby limiting its use. We developed a novel computer-vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP). MATERIALS AND METHODS: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then utilized to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparing to manual human annotations as the reference standard. RESULTS: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence-enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%). CONCLUSIONS: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.


Subjects
Prostatectomy; Robotic Surgical Procedures; Humans; Male; Artificial Intelligence; Educational Status; Prostate/surgery; Prostatectomy/methods; Robotic Surgical Procedures/methods; Video Recording
18.
Electrophoresis ; 45(9-10): 794-804, 2024 May.
Article in English | MEDLINE | ID: mdl-38161244

ABSTRACT

Facial image-based kinship verification represents a burgeoning frontier within the realms of computer vision and biomedicine. Recent genome-wide association studies have underscored the heritability of human facial morphology, revealing its predictability based on genetic information. These revelations form a robust foundation for advancing facial image-based kinship verification. Despite strides in computer vision, there remains a discernible gap between the biomedical and computer vision domains. Notably, the absence of family photo datasets established through biological paternity testing methods poses a significant challenge. This study addresses this gap by introducing the biological kinship visualization dataset, encompassing 5773 individuals from 2412 families with biologically confirmed kinship. Our analysis delves into the distribution and influencing factors of facial similarity among parent-child pairs, probing the potential association between forensic short tandem repeat polymorphisms and facial similarity. Additionally, we have developed a machine learning model for facial image-based kinship verification, achieving an accuracy of 0.80 in the dataset. To facilitate further exploration, we have established an online tool and database, accessible at http://120.55.161.230:88/.
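Facial-similarity-based verification of this kind typically reduces to thresholding a similarity between learned face descriptors. This sketch uses hand-made three-dimensional "embeddings" and an arbitrary threshold purely for illustration; the study's actual model and features are not described here:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_kin(emb1, emb2, threshold=0.5):
    """Declare a pair kin when embedding similarity exceeds the threshold."""
    return cosine(emb1, emb2) >= threshold

# Hypothetical embeddings (a real system would use learned face descriptors).
parent = np.array([0.9, 0.1, 0.4])
child = np.array([0.8, 0.2, 0.5])       # similar facial features
stranger = np.array([-0.3, 0.9, -0.2])  # dissimilar features

print(is_kin(parent, child), is_kin(parent, stranger))  # prints: True False
```

In practice the threshold is tuned on verified pairs, which is exactly what a biologically confirmed kinship dataset like the one described above enables.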


Subjects
Face; Humans; Face/anatomy & histology; Forensic Genetics/methods; Genetic Association Studies/methods; Genome-Wide Association Study/methods; Machine Learning; Microsatellite Repeats
19.
Hum Reprod ; 39(4): 698-708, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38396213

ABSTRACT

STUDY QUESTION: Can the BlastAssist deep learning pipeline perform comparably to or outperform human experts and embryologists at measuring interpretable, clinically relevant features of human embryos in IVF? SUMMARY ANSWER: The BlastAssist pipeline can measure a comprehensive set of interpretable features of human embryos and either outperform or perform comparably to embryologists and human experts in measuring these features. WHAT IS KNOWN ALREADY: Some studies have applied deep learning and developed 'black-box' algorithms to predict embryo viability directly from microscope images and videos but these lack interpretability and generalizability. Other studies have developed deep learning networks to measure individual features of embryos but fail to conduct careful comparisons to embryologists' performance, which are fundamental to demonstrate the network's effectiveness. STUDY DESIGN, SIZE, DURATION: We applied the BlastAssist pipeline to 67 043 973 images (32 939 embryos) recorded in the IVF lab from 2012 to 2017 in Tel Aviv Sourasky Medical Center. We first compared the pipeline measurements of individual images/embryos to manual measurements by human experts for sets of features, including: (i) fertilization status (n = 207 embryos), (ii) cell symmetry (n = 109 embryos), (iii) degree of fragmentation (n = 6664 images), and (iv) developmental timing (n = 21 036 images). We then conducted detailed comparisons between pipeline outputs and annotations made by embryologists during routine treatments for features, including: (i) fertilization status (n = 18 922 embryos), (ii) pronuclei (PN) fade time (n = 13 781 embryos), (iii) degree of fragmentation on Day 2 (n = 11 582 embryos), and (iv) time of blastulation (n = 3266 embryos). In addition, we compared the pipeline outputs to the implantation results of 723 single embryo transfer (SET) cycles, and to the live birth results of 3421 embryos transferred in 1801 cycles. 
PARTICIPANTS/MATERIALS, SETTING, METHODS: In addition to EmbryoScope™ image data, manual embryo grading and annotations, and electronic health record (EHR) data on treatment outcomes were also included. We integrated the deep learning networks we developed for individual features to construct the BlastAssist pipeline. Pearson's χ2 test was used to evaluate the statistical independence of individual features and implantation success. Bayesian statistics was used to evaluate the association of the probability of an embryo resulting in live birth to BlastAssist inputs. MAIN RESULTS AND THE ROLE OF CHANCE: The BlastAssist pipeline integrates five deep learning networks and measures comprehensive, interpretable, and quantitative features in clinical IVF. The pipeline performs similarly or better than manual measurements. For fertilization status, the network performs with very good parameters of specificity and sensitivity (area under the receiver operating characteristics (AUROC) 0.84-0.94). For symmetry score, the pipeline performs comparably to the human expert at both 2-cell (r = 0.71 ± 0.06) and 4-cell stages (r = 0.77 ± 0.07). For degree of fragmentation, the pipeline (acc = 69.4%) slightly under-performs compared to human experts (acc = 73.8%). For developmental timing, the pipeline (acc = 90.0%) performs similarly to human experts (acc = 91.4%). There is also strong agreement between pipeline outputs and annotations made by embryologists during routine treatments. For fertilization status, the pipeline and embryologists strongly agree (acc = 79.6%), and there is strong correlation between the two measurements (r = 0.683). For degree of fragmentation, the pipeline and embryologists mostly agree (acc = 55.4%), and there is also strong correlation between the two measurements (r = 0.648). For both PN fade time (r = 0.787) and time of blastulation (r = 0.887), there's strong correlation between the pipeline and embryologists. 
For SET cycles, 2-cell time (P < 0.01) and 2-cell symmetry (P < 0.03) were significantly correlated with implantation success rate, while other features showed correlations with implantation success without statistical significance. In addition, 2-cell time (P < 5 × 10⁻¹¹), PN fade time (P < 5 × 10⁻¹⁰), degree of fragmentation on Day 3 (P < 5 × 10⁻⁴), and 2-cell symmetry (P < 5 × 10⁻³) showed statistically significant correlations with the probability of the transferred embryo resulting in live birth. LIMITATIONS, REASONS FOR CAUTION: We have not tested the BlastAssist pipeline on data from other clinics or other time-lapse microscopy (TLM) systems. The association study we conducted with live birth results does not take into account confounding variables, which will be necessary to construct an embryo selection algorithm. Randomized controlled trials (RCTs) will be necessary to determine whether the pipeline can improve success rates in clinical IVF. WIDER IMPLICATIONS OF THE FINDINGS: BlastAssist provides a comprehensive and holistic means of evaluating human embryos. Instead of using a black-box algorithm, BlastAssist outputs meaningful measurements of embryos that can be interpreted and corroborated by embryologists, which is crucial in clinical decision making. Furthermore, the unprecedentedly large dataset generated by BlastAssist measurements can be used as a powerful resource for further research in human embryology and IVF. STUDY FUNDING/COMPETING INTEREST(S): This work was supported by the Harvard Quantitative Biology Initiative, the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard (award number 1764269), the National Institutes of Health (award number R01HD104969), the Perelson Fund, and the Sagol fund for embryos and stem cells as part of the Sagol Network. The authors declare no competing interests. TRIAL REGISTRATION NUMBER: Not applicable.
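The feature-outcome association reported above rests on Pearson's χ2 test of independence between a measured feature and the implantation outcome. A minimal sketch of that test in Python, using a hypothetical 2 × 2 contingency table (the counts below are illustrative only, not values from the study):

```python
# Pearson's chi-squared test of independence on a 2x2 contingency table.
# Hypothetical counts (illustrative only): rows = 2-cell time below/above
# the median, columns = implanted / not implanted.
observed = [[120, 240],
            [60, 303]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# chi2 = sum over cells of (O - E)^2 / E, where E is the expected count
# under the null hypothesis that feature and outcome are independent.
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed[i][j] - expected) ** 2 / expected

# For a 2x2 table df = 1; the critical value at p = 0.01 is about 6.63.
print(f"chi2 = {chi2:.2f}; reject independence at p < 0.01: {chi2 > 6.63}")
```

In practice `scipy.stats.chi2_contingency` would be used instead, since it also returns the p-value and applies Yates' continuity correction for 2 × 2 tables.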


Subjects
Deep Learning , Pregnancy , Female , Humans , Embryo Implantation , Single Embryo Transfer/methods , Blastocyst , Live Birth , Fertilization in Vitro , Retrospective Studies
20.
Microvasc Res ; 151: 104610, 2024 01.
Article in English | MEDLINE | ID: mdl-37739214

ABSTRACT

Images contain a wealth of information that is often underanalyzed in biological studies. Developmental models of vascular disease are a powerful way to quantify developmentally regulated vessel phenotypes and identify the roots of the disease process. We present Vessel Metrics, a software tool specifically designed to analyze developmental vascular microscopy images, expediting the analysis of vascular images and providing consistency between research groups. We developed a segmentation algorithm that robustly quantifies different image types, developmental stages, organisms, and disease models at an accuracy level similar to that of a human observer. We validate the algorithm on confocal, lightsheet, and two-photon microscopy data in a zebrafish model expressing fluorescent protein in the endothelial nuclei. The tool accurately segments data taken by multiple scientists on varying microscopes. We validate vascular parameters such as vessel density, network length, and diameter across developmental stages, genetic mutations, and drug treatments, and show a favorable comparison to other freely available software tools. Additionally, we validate the tool in a mouse model. Vessel Metrics reduces the time to analyze experimental results, improves repeatability within and between institutions, and expands the percentage of a given vascular network analyzable in experiments.
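The vascular parameters named in the abstract (vessel density, network length, diameter) are conventionally derived from a binary segmentation mask. A minimal generic sketch, not the actual Vessel Metrics implementation, using scikit-image skeletonization; the function name and toy mask are illustrative assumptions:

```python
import numpy as np
from skimage.morphology import skeletonize

def vascular_parameters(mask: np.ndarray, pixel_size_um: float = 1.0):
    """Compute simple vascular parameters from a binary segmentation mask.

    Generic illustration of the parameters named in the abstract, not the
    Vessel Metrics code itself.
    """
    mask = mask.astype(bool)
    # Vessel density: fraction of the field of view covered by vessels.
    density = mask.sum() / mask.size
    # Network length: centerline (skeleton) pixel count times pixel size.
    skeleton = skeletonize(mask)
    length = skeleton.sum() * pixel_size_um
    # Mean diameter: vessel area divided by centerline length.
    area = mask.sum() * pixel_size_um ** 2
    diameter = area / length if length > 0 else 0.0
    return density, length, diameter

# Toy example: a 3-pixel-thick horizontal vessel in a 20 x 20 field.
mask = np.zeros((20, 20), dtype=bool)
mask[9:12, 5:15] = True
density, length, diameter = vascular_parameters(mask)
print(f"density={density:.3f}, length={length:.0f} um, diameter={diameter:.1f} um")
```

Real pipelines add preprocessing (denoising, vessel enhancement) and prune spurious skeleton branches before measuring, which is where tools like Vessel Metrics earn their keep.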


Subjects
Software , Zebrafish , Mice , Animals , Humans , Algorithms , Cell Nucleus , Image Processing, Computer-Assisted/methods , Microscopy, Confocal/methods