1.
PLoS Biol; 18(4): e3000678, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32243449

ABSTRACT

Histological atlases of the cerebral cortex, such as those made famous by Brodmann and von Economo, are invaluable for understanding human brain microstructure and its relationship with functional organization. However, these existing atlases are limited to small numbers of manually annotated samples from a single cerebral hemisphere, measured from 2D histological sections. We present the first whole-brain quantitative 3D laminar atlas of the human cerebral cortex. It was derived from a 3D histological atlas of the human brain at 20-micrometer isotropic resolution (BigBrain), using a convolutional neural network to automatically segment the cortical layers in both hemispheres. Our approach overcomes many of the historical challenges of measuring histological thickness in 2D, and the resulting laminar atlas provides an unprecedented level of precision and detail. We used this BigBrain cortical atlas to test whether thickness gradients previously reported from MRI in sensory and motor processing cortices were present in a histological atlas of cortical thickness, and which cortical layers contributed to these gradients. Cortical thickness increased across sensory processing hierarchies, driven primarily by layers III, V, and VI. In contrast, motor-frontal cortices showed the opposite pattern, with decreases in total and pyramidal layer thickness from motor to frontal association cortices. These findings illustrate how this laminar atlas will provide a link between single-neuron morphology, mesoscale cortical layering, macroscopic cortical thickness, and, ultimately, functional neuroanatomy.
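The abstract does not detail the segmentation network. Purely as an illustration of the general idea (per-profile layer classification on cortical intensity profiles), the following PyTorch sketch labels each depth sample of a synthetic staining-intensity profile with one of six layers plus background; all shapes, layer counts, and hyperparameters are placeholders, not the authors' architecture.

```python
# Minimal sketch (PyTorch, hypothetical shapes): a 1D convolutional network that
# labels each depth sample of a cortical intensity profile with one of six layers
# plus background. Illustrative only; not the published architecture.
import torch
import torch.nn as nn

class LayerSegmenter1D(nn.Module):
    def __init__(self, n_classes: int = 7, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, width, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(width, width, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(width, n_classes, kernel_size=1),  # per-depth class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth_samples) staining-intensity profiles
        return self.net(x)  # (batch, n_classes, depth_samples)

model = LayerSegmenter1D()
profiles = torch.randn(8, 1, 200)        # 8 synthetic profiles, 200 depth samples
logits = model(profiles)
labels = torch.randint(0, 7, (8, 200))   # synthetic per-depth layer labels
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()
```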


Subject(s)
Cerebral Cortex/anatomy & histology; Cerebral Cortex/diagnostic imaging; Imaging, Three-Dimensional/methods; Brain/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neural Networks, Computer
2.
Bioinformatics; 36(Suppl_1): i417-i426, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-32657403

ABSTRACT

MOTIVATION: The recent development of sequencing technologies has revolutionized our understanding of the inner workings of the cell as well as the way disease is treated. A single RNA sequencing (RNA-Seq) experiment, however, measures tens of thousands of parameters simultaneously. While the results are information-rich, data analysis remains a challenge. Dimensionality reduction methods help with this task by extracting patterns from the data and compressing them into compact vector representations. RESULTS: We present the factorized embeddings (FE) model, a self-supervised deep learning algorithm that simultaneously learns gene and sample representation spaces by tensor factorization. We ran the model on RNA-Seq data from two large-scale cohorts and observed that the sample representation captures information on both single-gene and global gene expression patterns. Moreover, we found that the gene representation space was organized such that tissue-specific genes, highly correlated genes, and genes participating in the same GO terms were grouped together. Finally, we compared the vector representations of samples learned by the FE model to those of similar models on 49 regression tasks. We report that the representations trained with FE rank first or second in all of the tasks, surpassing other representations, sometimes by a considerable margin. AVAILABILITY AND IMPLEMENTATION: A toy example in the form of a Jupyter Notebook, as well as the code and trained embeddings for this project, can be found at https://github.com/TrofimovAssya/FactorizedEmbeddings. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
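As a rough illustration of the factorized-embeddings idea (not the authors' implementation, which is available at the GitHub link above), the sketch below jointly learns a sample embedding table and a gene embedding table and trains an MLP to reconstruct expression values from concatenated (sample, gene) embeddings; the sizes and the synthetic data are assumptions.

```python
# Illustrative sketch of the factorized-embeddings idea: each (sample, gene) pair
# is mapped to its two embeddings, which an MLP combines to reconstruct the
# observed expression value. See the linked repository for the real code.
import torch
import torch.nn as nn

class FactorizedEmbeddings(nn.Module):
    def __init__(self, n_samples, n_genes, d_sample=50, d_gene=50):
        super().__init__()
        self.sample_emb = nn.Embedding(n_samples, d_sample)
        self.gene_emb = nn.Embedding(n_genes, d_gene)
        self.mlp = nn.Sequential(
            nn.Linear(d_sample + d_gene, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, sample_idx, gene_idx):
        z = torch.cat([self.sample_emb(sample_idx), self.gene_emb(gene_idx)], dim=-1)
        return self.mlp(z).squeeze(-1)  # predicted expression for each pair

# Training-step sketch on synthetic (sample, gene, expression) triplets.
model = FactorizedEmbeddings(n_samples=1000, n_genes=20000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s = torch.randint(0, 1000, (256,))
g = torch.randint(0, 20000, (256,))
y = torch.rand(256)                      # e.g. log-normalized expression values
loss = nn.functional.mse_loss(model(s, g), y)
opt.zero_grad()
loss.backward()
opt.step()
```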


Subject(s)
Algorithms; RNA; Sequence Analysis, RNA
7.
Article in English | MEDLINE | ID: mdl-39167505

ABSTRACT

Automatic medical image segmentation is a crucial topic in the medical domain and, consequently, a critical component of the computer-aided diagnosis paradigm. U-Net is the most widespread image segmentation architecture due to its flexibility, optimized modular design, and success across medical image modalities. Over the years, the U-Net model has received tremendous attention from academic and industrial researchers, who have extended it to address the scale and complexity created by medical tasks. These extensions commonly enhance the U-Net's backbone, bottleneck, or skip connections, incorporate representation learning, combine it with a Transformer architecture, or address probabilistic prediction of the segmentation map. Having a compendium of previously proposed U-Net variants makes it easier for machine learning researchers to identify relevant research questions and understand the challenges of the biological tasks the model must address. In this work, we discuss the practical aspects of the U-Net model and organize each variant into a taxonomy. Moreover, to measure the performance of these strategies in clinical applications, we propose fair evaluations of some unique and well-known designs on widely used datasets. Furthermore, we provide a comprehensive implementation library with trained models and, for the ease of future studies, an online list of U-Net papers with their official implementations where available. All information is gathered in a GitHub repository: https://github.com/NITR098/Awesome-U-Net.
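For readers unfamiliar with the architecture being surveyed, here is a minimal, generic U-Net sketch in PyTorch highlighting the three components most variants modify (encoder/backbone, bottleneck, skip connections). It is a toy two-level version for illustration only; the survey's reference implementations live in the linked repository.

```python
# Toy two-level U-Net: encoder, bottleneck, decoder with skip connections.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                     # full resolution
        e2 = self.enc2(self.pool(e1))         # 1/2 resolution
        b = self.bottleneck(self.pool(e2))    # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                  # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # -> shape (1, 2, 128, 128)
```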

8.
Res Sq; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38978576

ABSTRACT

Over 85 million computed tomography (CT) scans are performed annually in the US, of which approximately one quarter focus on the abdomen. Given the current shortage of both general and specialized radiologists, there is a large impetus to use artificial intelligence to alleviate the burden of interpreting these complex imaging studies while simultaneously using the images to extract novel physiological insights. Prior state-of-the-art approaches for automated medical image interpretation leverage vision-language models (VLMs) that utilize both the image and the corresponding textual radiology report. However, current medical VLMs are generally limited to 2D images and short reports. To overcome these shortcomings for abdominal CT interpretation, we introduce Merlin, a 3D VLM that leverages both structured electronic health records (EHR) and unstructured radiology reports for pretraining without requiring additional manual annotations. We train Merlin using a high-quality clinical dataset of paired CT scans (6+ million images from 15,331 CTs), EHR diagnosis codes (1.8+ million codes), and radiology reports (6+ million tokens). We comprehensively evaluate Merlin on 6 task types and 752 individual tasks. The non-adapted (off-the-shelf) tasks include zero-shot findings classification (31 findings), phenotype classification (692 phenotypes), and zero-shot cross-modal retrieval (image to findings and image to impressions), while model-adapted tasks include 5-year chronic disease prediction (6 diseases), radiology report generation, and 3D semantic segmentation (20 organs). We perform internal validation on a test set of 5,137 CTs, and external validation on 7,000 clinical CTs and two public CT datasets (VerSe, TotalSegmentator). Beyond these clinically relevant evaluations, we assess the efficacy of various network architectures and training strategies to show that Merlin performs favorably compared with existing task-specific baselines. We derive data scaling laws to empirically assess training data needs for requisite downstream task performance. Furthermore, unlike conventional VLMs that require hundreds of GPUs for training, we perform all training on a single GPU. This computationally efficient design can help democratize foundation model training, especially for health systems with compute constraints. We plan to release our trained models, code, and dataset, pending manual removal of all protected health information.
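The abstract describes pretraining a 3D image encoder against both radiology reports and EHR diagnosis codes. The sketch below is a hedged, generic illustration of that kind of dual supervision, pairing a CLIP-style contrastive image-report loss with a multi-label loss over diagnosis codes; the tiny encoders, dimensions, and data are placeholders and do not reflect Merlin's actual design.

```python
# Hedged sketch: a 3D image encoder trained with (a) a CLIP-style contrastive
# loss against report-text embeddings and (b) a multi-label loss over EHR codes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImageTextEHRModel(nn.Module):
    def __init__(self, d=256, n_codes=1000):
        super().__init__()
        self.image_encoder = nn.Sequential(      # stand-in for a 3D CT encoder
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, d),
        )
        self.text_proj = nn.Linear(768, d)       # stand-in for a report encoder output
        self.ehr_head = nn.Linear(d, n_codes)    # multi-label diagnosis-code prediction

    def forward(self, ct, report_emb):
        z_img = F.normalize(self.image_encoder(ct), dim=-1)
        z_txt = F.normalize(self.text_proj(report_emb), dim=-1)
        return z_img, z_txt, self.ehr_head(z_img)

def clip_loss(z_img, z_txt, temperature=0.07):
    logits = z_img @ z_txt.t() / temperature     # image-report similarity matrix
    targets = torch.arange(z_img.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

model = ToyImageTextEHRModel()
ct = torch.randn(4, 1, 32, 64, 64)               # tiny synthetic CT volumes
report_emb = torch.randn(4, 768)                 # precomputed report embeddings
codes = torch.randint(0, 2, (4, 1000)).float()   # multi-hot diagnosis codes
z_img, z_txt, code_logits = model(ct, report_emb)
loss = clip_loss(z_img, z_txt) + F.binary_cross_entropy_with_logits(code_logits, codes)
```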

9.
Nat Commun; 14(1): 4039, 2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37419921

ABSTRACT

Deep learning (DL) models can harness electronic health records (EHRs) to predict diseases and extract radiologic findings for diagnosis. With ambulatory chest radiographs (CXRs) frequently ordered, we investigated detecting type 2 diabetes (T2D) by combining radiographic and EHR data using a DL model. Our model, developed from 271,065 CXRs and 160,244 patients, was tested on a prospective dataset of 9,943 CXRs. Here we show the model effectively detected T2D with a ROC AUC of 0.84 and a 16% prevalence. The algorithm flagged 1,381 cases (14%) as suspicious for T2D. External validation at a distinct institution yielded a ROC AUC of 0.77, with 5% of patients subsequently diagnosed with T2D. Explainable AI techniques revealed correlations between specific adiposity measures and high predictivity, suggesting CXRs' potential for enhanced T2D screening.
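As a hedged illustration of how imaging and EHR data might be combined for this kind of opportunistic screening (not the study's actual model), the sketch below concatenates features from a pretrained chest X-ray backbone with tabular EHR features and feeds them to a small classifier that outputs a T2D risk logit; the backbone choice and feature dimensions are assumptions.

```python
# Illustrative late-fusion sketch: pretrained image features + tabular EHR
# features -> small classifier producing a T2D risk logit.
import torch
import torch.nn as nn
from torchvision import models

class FusionT2DClassifier(nn.Module):
    def __init__(self, n_ehr_features=32):
        super().__init__()
        backbone = models.densenet121(weights="DEFAULT")
        backbone.classifier = nn.Identity()       # keep the 1024-d image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(1024 + n_ehr_features, 256), nn.ReLU(),
            nn.Linear(256, 1),                    # logit for T2D
        )

    def forward(self, cxr, ehr):
        img_feat = self.backbone(cxr)             # (batch, 1024)
        return self.head(torch.cat([img_feat, ehr], dim=1)).squeeze(-1)

model = FusionT2DClassifier()
cxr = torch.randn(2, 3, 224, 224)                 # CXRs replicated to 3 channels
ehr = torch.randn(2, 32)                          # standardized EHR features
risk_logits = model(cxr, ehr)
```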


Subject(s)
Deep Learning; Diabetes Mellitus, Type 2; Humans; Diabetes Mellitus, Type 2/diagnostic imaging; Radiography, Thoracic/methods; Prospective Studies; Radiography
10.
Br J Radiol; 95(1139): 20210688, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36062807

ABSTRACT

OBJECTIVE: Chest X-rays are the most commonly performed diagnostic examinations. An artificial intelligence (AI) system that evaluates the images quickly and accurately could help reduce workload and streamline patient management, and an automated assistant may reduce interpretation time in daily practice. We aimed to investigate whether radiology residents incorporate the recommendations of an AI system into their final decisions, and to assess the diagnostic performance of the residents and the AI system. METHODS: Posteroanterior (PA) chest X-rays with confirmed diagnoses were evaluated by 10 radiology residents. After their initial interpretation, the residents reviewed the evaluations of the AI algorithm and made their final decisions. The residents' diagnostic performance without AI and after checking the AI results was compared. RESULTS: Across all radiological findings, the residents achieved a mean sensitivity of 37.9% (vs 39.8% with AI support) and a mean specificity of 93.9% (vs 93.9% with AI support). The residents obtained a mean AUC of 0.660 without AI vs 0.669 with AI support, while the AI algorithm alone achieved an overall mean AUC of 0.789. No significant difference was detected between decisions made with and without AI support. CONCLUSION: Although the AI algorithm's diagnostic accuracy was higher than the residents', the radiology residents did not change their final decisions after reviewing the AI recommendations. To benefit from these tools, the AI system's recommendations must be more precise for the user. ADVANCES IN KNOWLEDGE: This research provides information about the willingness or resistance of radiologists to work with AI technologies, assessed via diagnostic performance tests. It also reports the diagnostic performance of an existing AI algorithm determined on real-life data.
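For reference, per-finding metrics like the sensitivity, specificity, and AUC values reported above can be computed as follows with scikit-learn; the labels and scores here are synthetic stand-ins for reader or AI outputs.

```python
# Sketch of computing sensitivity, specificity, and AUC with scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)             # ground-truth finding present/absent
y_score = np.clip(y_true * 0.3 + rng.random(200) * 0.7, 0, 1)  # reader/AI confidence
y_pred = (y_score >= 0.5).astype(int)             # binarized decision

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")
```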


Subject(s)
Artificial Intelligence; Radiology; Humans; X-Rays; Radiology/methods; Algorithms; Radiologists
11.
Front Immunol; 13: 867443, 2022.
Article in English | MEDLINE | ID: mdl-35401501

ABSTRACT

Early T-cell development is precisely controlled by E proteins, which include the HEB/TCF12 and E2A/TCF3 transcription factors, together with NOTCH1 and pre-T cell receptor (TCR) signalling. Importantly, perturbations of early T-cell regulatory networks are implicated in leukemogenesis. NOTCH1 gain-of-function mutations invariably lead to T-cell acute lymphoblastic leukemia (T-ALL), whereas inhibition of E proteins accelerates leukemogenesis. Thus, NOTCH1, pre-TCR, E2A, and HEB functions are intertwined, but how these pathways contribute individually or synergistically to leukemogenesis remains to be documented. To directly address these questions, we leveraged Cd3e-deficient mice, in which pre-TCR signaling and progression through β-selection is abrogated, to dissect and decouple the roles of pre-TCR, NOTCH1, E2A, and HEB in SCL/TAL1-induced T-ALL, using Notch1 gain-of-function transgenic (Notch1ICtg) and Tcf12+/- or Tcf3+/- heterozygous mice. We now provide evidence that both HEB and E2A restrain cell proliferation at the β-selection checkpoint, while the clonal expansion of SCL-LMO1-induced pre-leukemic stem cells in T-ALL is uniquely dependent on Tcf12 gene dosage. At the molecular level, HEB protein levels are decreased via proteasomal degradation at the leukemic stage, pointing to a reversible loss-of-function mechanism. Moreover, in SCL-LMO1-induced T-ALL, loss of one Tcf12 allele is sufficient to bypass pre-TCR signaling, which is required for Notch1 gain-of-function mutations and for progression to T-ALL. In contrast, Tcf12 monoallelic deletion does not accelerate Notch1IC-induced T-ALL, indicating that Tcf12 and Notch1 operate in the same pathway. Finally, we identify a tumor suppressor gene set downstream of HEB that exhibits significantly lower expression levels in pediatric T-ALL than in B-ALL and brain cancer samples, the three most frequent pediatric cancers. In summary, our results indicate a tumor suppressor function of HEB/TCF12 in T-ALL that mitigates cell proliferation controlled by NOTCH1 in pre-leukemic stem cells and prevents NOTCH1-driven progression to T-ALL.


Subject(s)
Precursor T-Cell Lymphoblastic Leukemia-Lymphoma; Animals; Basic Helix-Loop-Helix Transcription Factors/metabolism; Humans; Mice; Precursor T-Cell Lymphoblastic Leukemia-Lymphoma/genetics; Proto-Oncogene Proteins/metabolism; Receptor, Notch1/genetics; Receptor, Notch1/metabolism; Receptors, Antigen, T-Cell; T-Cell Acute Lymphocytic Leukemia Protein 1; T-Lymphocytes/metabolism; Transcription Factors/metabolism
12.
NPJ Digit Med; 5(1): 89, 2022 Jul 11.
Article in English | MEDLINE | ID: mdl-35817953

ABSTRACT

Solid-organ transplantation is a life-saving treatment for end-stage organ disease in highly selected patients. Alongside the tremendous progress of the last several decades, new challenges have emerged. The growing disparity between organ demand and supply requires optimal patient/donor selection and matching, and improvements in long-term graft and patient survival require data-driven diagnosis and management of post-transplant complications. The growing abundance of clinical, genetic, radiologic, and metabolic data in transplantation has led to increasing interest in applying machine-learning (ML) tools that can uncover hidden patterns in large datasets. ML algorithms have been applied to predictive modeling of waitlist mortality, donor-recipient matching, survival prediction, and the diagnosis and prediction of post-transplant complications, with the aim of optimizing immunosuppression and management. In this review, we provide insight into the various applications of ML in transplant medicine, why these were used to evaluate specific clinical questions, and the potential of ML to transform the care of transplant recipients. Thirty-six articles were selected after a comprehensive search of the following databases: Ovid MEDLINE; Ovid MEDLINE Epub Ahead of Print and In-Process & Other Non-Indexed Citations; Ovid Embase; Cochrane Database of Systematic Reviews (Ovid); and Cochrane Central Register of Controlled Trials (Ovid). In summary, these studies show that ML techniques hold great potential to improve the outcomes of transplant recipients. Future work is required to improve the interpretability of these algorithms, ensure generalizability through larger-scale external validation, and establish the infrastructure needed to permit clinical integration.

13.
Cureus; 12(7): e9448, 2020 Jul 28.
Article in English | MEDLINE | ID: mdl-32864270

ABSTRACT

Introduction: The need to streamline patient management for coronavirus disease 2019 (COVID-19) has become more pressing than ever. Chest X-rays (CXRs) provide a non-invasive (potentially bedside) tool to monitor the progression of the disease. In this study, we present a severity score prediction model for COVID-19 pneumonia based on frontal chest X-ray images. Such a tool can gauge the severity of COVID-19 lung infections (and pneumonia in general) and can be used for escalation or de-escalation of care as well as for monitoring treatment efficacy, especially in the ICU. Methods: Images from a public COVID-19 database were scored retrospectively by three blinded experts in terms of the extent of lung involvement as well as the degree of opacity. A neural network model pre-trained on large (non-COVID-19) chest X-ray datasets was used to construct features for the COVID-19 images that are predictive for our task. Results: Training a regression model on a subset of the outputs from this pre-trained chest X-ray model predicts our geographic extent score (range 0-8) with a mean absolute error (MAE) of 1.14 and our lung opacity score (range 0-6) with an MAE of 0.78. Conclusions: These results indicate that our model can gauge the severity of COVID-19 lung infections and could be used for escalation or de-escalation of care as well as for monitoring treatment efficacy, especially in the ICU. To enable follow-up work, we make our code, labels, and data available online.
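The paper's own code is available online as noted above; purely as a hedged sketch of the described pipeline (features from a pre-trained CXR-style model feeding a simple regressor evaluated by MAE), the following uses a generic torchvision backbone and ridge regression on placeholder images and scores.

```python
# Hedged pipeline sketch: pretrained-backbone features -> ridge regression
# predicting a severity score, evaluated with mean absolute error.
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

backbone = models.densenet121(weights="DEFAULT")
backbone.classifier = torch.nn.Identity()          # keep the 1024-d image features
backbone.eval()

with torch.no_grad():
    images = torch.randn(50, 3, 224, 224)           # placeholder CXRs
    features = backbone(images).numpy()             # (50, 1024) feature matrix

geo_extent = np.random.uniform(0, 8, size=50)       # placeholder expert scores, range 0-8
reg = Ridge(alpha=1.0).fit(features[:40], geo_extent[:40])
pred = reg.predict(features[40:])
print("MAE:", mean_absolute_error(geo_extent[40:], pred))
```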
