Results 1 - 13 of 13
1.
Sci Rep ; 14(1): 23199, 2024 10 05.
Article in English | MEDLINE | ID: mdl-39369048

ABSTRACT

Deep neural networks are increasingly used in medical imaging for tasks such as pathology classification, but they face challenges due to the scarcity of high-quality, expert-labeled training data. Recent efforts have utilized pre-trained contrastive image-text models such as CLIP, adapting them for medical use by fine-tuning on chest X-ray images and corresponding reports for zero-shot pathology classification, thus eliminating the need for pathology-specific annotations. However, most studies continue to use the same contrastive learning objectives as in the general domain, overlooking the multi-labeled nature of medical image-report pairs. In this paper, we propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling. We aim to improve the performance of zero-shot pathology classification without relying on external knowledge. Our method can be applied to any pre-trained contrastive image-text encoder and easily transferred to out-of-domain datasets without further training, as it does not use external data. Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models, with an average macro AUROC increase of 4.3%. Additionally, our method outperforms the state-of-the-art and marginally surpasses board-certified radiologists in zero-shot classification of the five competition pathologies in the CheXpert dataset.
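The core mechanic here, a CLIP-style symmetric contrastive loss whose positive-pair term is relaxed, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact objective: the cap on the diagonal similarity (`relax`) and all function names are assumptions.

```python
import numpy as np

def _logsumexp(x, axis):
    # numerically stable log-sum-exp along one axis
    m = x.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis) + np.log(np.exp(x - m).sum(axis=axis))

def relaxed_clip_loss(img_emb, txt_emb, temperature=0.07, relax=0.2):
    """Symmetric InfoNCE loss over an image/report batch, with the
    positive-pair cosine similarity capped at (1 - relax) so matched
    multi-label pairs are not pushed toward perfect alignment."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = img @ txt.T                                   # (N, N) cosine similarities
    idx = np.arange(sim.shape[0])
    sim[idx, idx] = np.minimum(sim[idx, idx], 1.0 - relax)  # positive-pair relaxation
    logits = sim / temperature
    loss_i2t = -(logits[idx, idx] - _logsumexp(logits, axis=1))  # image -> report
    loss_t2i = -(logits[idx, idx] - _logsumexp(logits, axis=0))  # report -> image
    return float((loss_i2t + loss_t2i).mean() / 2)
```

With `relax=0` this reduces to the standard symmetric CLIP objective.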


Subjects
Neural Networks, Computer; Humans; Deep Learning; Image Processing, Computer-Assisted/methods; Radiography, Thoracic/methods; X-Rays; Algorithms
2.
Article in English | MEDLINE | ID: mdl-39355755

ABSTRACT

Volumetric biomedical microscopy has the potential to increase the diagnostic information extracted from clinical tissue specimens and improve the diagnostic accuracy of both human pathologists and computational pathology models. Unfortunately, barriers to integrating 3-dimensional (3D) volumetric microscopy into clinical medicine include long imaging times, poor depth/z-axis resolution, and an insufficient amount of high-quality volumetric data. Leveraging the abundance of high-resolution 2D microscopy data, we introduce masked slice diffusion for super-resolution (MSDSR), which exploits the inherent equivalence in the data-generating distribution across all spatial dimensions of biological specimens. This intrinsic characteristic allows for super-resolution models trained on high-resolution images from one plane (e.g., XY) to effectively generalize to others (XZ, YZ), overcoming the traditional dependency on orientation. We focus on the application of MSDSR to stimulated Raman histology (SRH), an optical imaging modality for biological specimen analysis and intraoperative diagnosis, characterized by its rapid acquisition of high-resolution 2D images but slow and costly optical z-sectioning. To evaluate MSDSR's efficacy, we introduce a new performance metric, SliceFID, and demonstrate MSDSR's superior performance over baseline models through extensive evaluations. Our findings reveal that MSDSR not only significantly enhances the quality and resolution of 3D volumetric data, but also addresses major obstacles hindering the broader application of 3D volumetric microscopy in clinical diagnostics and biomedical research.
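The isotropy assumption underpinning MSDSR, namely that XY, XZ, and YZ slices are draws from the same data-generating distribution, amounts to slicing a volume along each axis. A minimal sketch (the function name is mine, not from the paper):

```python
import numpy as np

def orthogonal_slices(volume):
    """Return the three stacks of 2D slices of a 3D volume.
    Under the isotropy assumption MSDSR exploits, a super-resolution
    model trained on XY slices can also be applied to XZ or YZ slices,
    e.g. to sharpen a poorly resolved z-axis."""
    nz, ny, nx = volume.shape
    xy = [volume[k, :, :] for k in range(nz)]   # plane the model is trained on
    xz = [volume[:, k, :] for k in range(ny)]   # super-resolved at test time
    yz = [volume[:, :, k] for k in range(nx)]   # super-resolved at test time
    return xy, xz, yz
```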

3.
Sci Rep ; 13(1): 20784, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38012171

ABSTRACT

During continuous charge-discharge cycling of lithium-sulfur batteries, one of the leading next-generation battery chemistries, polysulfides are generated in the electrolyte and degrade power and capacity by participating in the electrochemical process. The amount of polysulfides in the electrolyte can be estimated from the change in the Gibbs free energy of the electrolyte, ΔG, in the presence of polysulfide. However, obtaining ΔG for the diverse mixtures of components in the electrolyte is a complex and expensive task, making it a bottleneck in electrolyte optimization. In this work, we present a machine-learning approach for predicting the ΔG of electrolytes. The proposed architecture utilizes (1) an attention-based model (Attentive FP), a contrastive learning model (MolCLR), or Morgan fingerprints to represent chemical components, and (2) transformers to account for the interactions between chemicals in the electrolyte. This architecture was not only capable of predicting electrolyte properties, including those of chemicals not used during training, but also of providing insights into chemical interactions within electrolytes. It revealed that a chemical's interactions with other chemicals relate to its logP and molecular weight.
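The two-stage architecture described above, per-chemical embeddings followed by a transformer that mixes them, can be sketched with a single unparameterized self-attention step. Everything here (function names, mean pooling, the linear readout) is illustrative, not the paper's implementation:

```python
import numpy as np

def attention_mix(chem_embs):
    """One self-attention step over the chemicals in an electrolyte:
    each chemical's embedding is updated from its similarity-weighted
    interactions with the others (no learned projections, for brevity)."""
    scores = chem_embs @ chem_embs.T / np.sqrt(chem_embs.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)             # softmax attention weights
    return w @ chem_embs

def predict_delta_g(chem_embs, readout):
    """Mean-pool the interaction-aware embeddings and apply a linear
    readout to produce a scalar ΔG estimate for the mixture."""
    return float(attention_mix(chem_embs).mean(axis=0) @ readout)
```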

4.
Article in English | MEDLINE | ID: mdl-37654477

ABSTRACT

Learning high-quality, self-supervised, visual representations is essential to advance the role of computer vision in biomedical microscopy and clinical medicine. Previous work has focused on self-supervised representation learning (SSL) methods developed for instance discrimination and applied them directly to image patches, or fields-of-view, sampled from gigapixel whole-slide images (WSIs) used for cancer diagnosis. However, this strategy is limited because it (1) assumes patches from the same patient are independent, (2) neglects the patient-slide-patch hierarchy of clinical biomedical microscopy, and (3) requires strong data augmentations that can degrade downstream performance. Importantly, sampled patches from WSIs of a patient's tumor are a diverse set of image examples that capture the same underlying cancer diagnosis. This motivated HiDisc, a data-driven method that leverages the inherent patient-slide-patch hierarchy of clinical biomedical microscopy to define a hierarchical discriminative learning task that implicitly learns features of the underlying diagnosis. HiDisc uses a self-supervised contrastive learning framework in which positive patch pairs are defined based on a common ancestry in the data hierarchy, and a unified patch, slide, and patient discriminative learning objective is used for visual SSL. We benchmark HiDisc visual representations on two vision tasks using two biomedical microscopy datasets, and demonstrate that (1) HiDisc pretraining outperforms current state-of-the-art self-supervised pretraining methods for cancer diagnosis and genetic mutation prediction, and (2) HiDisc learns high-quality visual representations using natural patch diversity without strong data augmentations.
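HiDisc's key move, defining positive pairs by common ancestry, reduces to a comparison of hierarchy coordinates. A sketch with patches encoded as (patient, slide, patch) tuples; the encoding and function name are mine:

```python
def is_positive_pair(a, b, level):
    """Whether patches a and b form a positive pair at the given level of
    the patient-slide-patch hierarchy. Patch discrimination pairs two
    augmented views of one patch; slide and patient discrimination admit
    any two patches sharing that ancestor."""
    if level == "patch":
        return a == b               # same patch (different augmented views)
    if level == "slide":
        return a[:2] == b[:2]       # same patient and same slide
    if level == "patient":
        return a[0] == b[0]         # same patient
    raise ValueError(f"unknown level: {level}")
```

The unified objective then sums a contrastive loss over the pairs admitted at each of the three levels.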

5.
Nat Med ; 29(4): 828-832, 2023 04.
Article in English | MEDLINE | ID: mdl-36959422

ABSTRACT

Molecular classification has transformed the management of brain tumors by enabling more accurate prognostication and personalized treatment. However, timely molecular diagnostic testing for patients with brain tumors is limited, complicating surgical and adjuvant treatment and obstructing clinical trial enrollment. In this study, we developed DeepGlioma, a rapid (<90 seconds), artificial-intelligence-based diagnostic screening system to streamline the molecular diagnosis of diffuse gliomas. DeepGlioma is trained on a multimodal dataset that includes stimulated Raman histology (SRH), a rapid, label-free, non-consumptive optical imaging method, and large-scale public genomic data. In a prospective, multicenter, international testing cohort of patients with diffuse glioma (n = 153) who underwent real-time SRH imaging, we demonstrate that DeepGlioma can predict the molecular alterations used by the World Health Organization to define the adult-type diffuse glioma taxonomy (IDH mutation, 1p19q co-deletion and ATRX mutation), achieving a mean molecular classification accuracy of 93.3 ± 1.6%. Our results show how artificial intelligence and optical histology can be combined to provide a rapid and scalable adjunct to wet-lab methods for the molecular screening of patients with diffuse glioma.
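Because IDH, 1p19q, and ATRX status can co-occur in any combination, the prediction target is multi-label rather than multi-class. A sketch of such an output head with independent per-marker sigmoids; this is a generic illustration, the real DeepGlioma pipeline is multimodal and more involved:

```python
import numpy as np

MARKERS = ("IDH", "1p19q", "ATRX")

def call_mutations(logits, threshold=0.5):
    """Turn three real-valued logits into independent mutation calls via
    per-marker sigmoids, as in a generic multi-label classification head."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return {m: bool(p >= threshold) for m, p in zip(MARKERS, probs)}
```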


Subjects
Brain Neoplasms; Glioma; Adult; Humans; Artificial Intelligence; Prospective Studies; Glioma/diagnostic imaging; Glioma/genetics; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/genetics; Mutation; Isocitrate Dehydrogenase/genetics; Optical Imaging; Intelligence
6.
Neurosurgery ; 90(6): 758-767, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35343469

ABSTRACT

BACKGROUND: Accurate specimen analysis of skull base tumors is essential for providing personalized surgical treatment strategies. Intraoperative specimen interpretation can be challenging because of the wide range of skull base pathologies and lack of intraoperative pathology resources. OBJECTIVE: To develop an independent and parallel intraoperative workflow that can provide rapid and accurate skull base tumor specimen analysis using label-free optical imaging and artificial intelligence. METHODS: We used a fiber laser-based, label-free, nonconsumptive, high-resolution microscopy method (<60 seconds per 1 × 1 mm²), called stimulated Raman histology (SRH), to image a consecutive, multicenter cohort of patients with skull base tumors. SRH images were then used to train a convolutional neural network model using 3 representation learning strategies: cross-entropy, self-supervised contrastive learning, and supervised contrastive learning. Our trained convolutional neural network models were tested on a held-out, multicenter SRH dataset. RESULTS: SRH was able to image the diagnostic features of both benign and malignant skull base tumors. Of the 3 representation learning strategies, supervised contrastive learning most effectively learned the distinctive and diagnostic SRH image features for each of the skull base tumor types. In our multicenter testing set, cross-entropy achieved an overall diagnostic accuracy of 91.5%, self-supervised contrastive learning 83.9%, and supervised contrastive learning 96.6%. Our trained model was able to segment tumor-normal margins and detect regions of microscopic tumor infiltration in meningioma SRH images. CONCLUSION: SRH with trained artificial intelligence models can provide rapid and accurate intraoperative analysis of skull base tumor specimens to inform surgical decision-making.
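Of the three objectives compared, supervised contrastive learning differs from cross-entropy in that it pulls together all same-class examples in a batch. A simplified, unbatched sketch of that loss (after the Khosla et al. formulation; the paper's training setup has more moving parts):

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, every other example
    with the same label is a positive; all non-anchor examples form the
    normalizing denominator."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = (f @ f.T) / temperature
    n = len(labels)
    per_anchor = []
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue                      # anchors with no positive are skipped
        others = [j for j in range(n) if j != i]
        m = sim[i, others].max()          # stable log-sum-exp denominator
        log_denom = m + np.log(np.exp(sim[i, others] - m).sum())
        per_anchor.append(-np.mean([sim[i, j] - log_denom for j in positives]))
    return float(np.mean(per_anchor))
```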


Subjects
Brain Neoplasms; Meningeal Neoplasms; Skull Base Neoplasms; Artificial Intelligence; Brain Neoplasms/surgery; Humans; Meningeal Neoplasms/diagnostic imaging; Meningeal Neoplasms/surgery; Optical Imaging; Skull Base Neoplasms/diagnostic imaging; Skull Base Neoplasms/surgery
7.
Adv Neural Inf Process Syst ; 35(DB): 28502-28516, 2022 Dec.
Article in English | MEDLINE | ID: mdl-37082565

ABSTRACT

Accurate intraoperative diagnosis is essential for providing safe and effective care during brain tumor surgery. Our standard-of-care diagnostic methods are time, resource, and labor intensive, which restricts access to optimal surgical treatments. To address these limitations, we propose an alternative workflow that combines stimulated Raman histology (SRH), a rapid optical imaging method, with deep learning-based automated interpretation of SRH images for intraoperative brain tumor diagnosis and real-time surgical decision support. Here, we present OpenSRH, the first public dataset of clinical SRH images, spanning 300+ brain tumor patients and 1300+ unique whole-slide optical images. OpenSRH contains data from the most common brain tumor diagnoses, full pathologic annotations, whole-slide tumor segmentations, and raw and processed optical imaging data for end-to-end model development and validation. We provide a framework for patch-based whole-slide SRH classification and inference using weak (i.e., patient-level) diagnostic labels. Finally, we benchmark two computer vision tasks: multiclass histologic brain tumor classification and patch-based contrastive representation learning. We hope OpenSRH will facilitate the clinical translation of rapid optical imaging and real-time ML-based surgical decision support in order to improve the access, safety, and efficacy of cancer surgery in the era of precision medicine. Dataset access, code, and benchmarks are available at https://opensrh.mlins.org.
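Patch-based whole-slide inference under weak, patient-level labels typically aggregates patch predictions into one slide-level call. A minimal mean-pooling sketch; the OpenSRH benchmark code may aggregate differently:

```python
import numpy as np

def classify_slide(patch_probs):
    """Average patch-level class-probability vectors over a slide and
    return the argmax class along with the pooled distribution."""
    pooled = np.asarray(patch_probs, dtype=float).mean(axis=0)
    return int(pooled.argmax()), pooled
```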

8.
Neuro Oncol ; 23(1): 144-155, 2021 01 30.
Article in English | MEDLINE | ID: mdl-32672793

ABSTRACT

BACKGROUND: Detection of glioma recurrence remains a challenge in modern neuro-oncology. Noninvasive radiographic imaging is unable to definitively differentiate true recurrence from pseudoprogression. Even in biopsied tissue, it can be challenging to differentiate recurrent tumor and treatment effect. We hypothesized that intraoperative stimulated Raman histology (SRH) and deep neural networks can be used to improve the intraoperative detection of glioma recurrence. METHODS: We used fiber laser-based SRH, a label-free, nonconsumptive, high-resolution microscopy method (<60 sec per 1 × 1 mm²), to image a cohort of patients (n = 35) with suspected recurrent gliomas who underwent biopsy or resection. The SRH images were then used to train a convolutional neural network (CNN) and develop an inference algorithm to detect viable recurrent glioma. Following network training, the performance of the CNN was tested for diagnostic accuracy in a retrospective cohort (n = 48). RESULTS: Using patch-level CNN predictions, the inference algorithm returns a single Bernoulli distribution for the probability of tumor recurrence for each surgical specimen or patient. The external SRH validation dataset consisted of 48 patients (recurrent, 30; pseudoprogression, 18), and we achieved a diagnostic accuracy of 95.8%. CONCLUSION: SRH with CNN-based diagnosis can be used to improve the intraoperative detection of glioma recurrence in near-real time. Our results provide insight into how optical imaging and computer vision can be combined to augment conventional diagnostic methods and improve the quality of specimen sampling at glioma recurrence.
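The inference step, collapsing patch-level CNN outputs into one Bernoulli probability of recurrence per specimen, can be illustrated with a simple patch-voting rule. This is an assumption for illustration; the paper's actual inference algorithm may weight patches differently:

```python
def recurrence_probability(patch_tumor_probs, threshold=0.5):
    """Fraction of patches the CNN calls recurrent tumor, used as the
    parameter of a single Bernoulli distribution for the specimen."""
    calls = [p >= threshold for p in patch_tumor_probs]
    return sum(calls) / len(calls)
```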


Subjects
Brain Neoplasms; Glioma; Algorithms; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/surgery; Glioma/diagnostic imaging; Glioma/surgery; Humans; Neural Networks, Computer; Retrospective Studies
9.
Nat Med ; 26(1): 52-58, 2020 01.
Article in English | MEDLINE | ID: mdl-31907460

ABSTRACT

Intraoperative diagnosis is essential for providing safe and effective care during cancer surgery1. The existing workflow for intraoperative diagnosis based on hematoxylin and eosin staining of processed tissue is time, resource and labor intensive2,3. Moreover, interpretation of intraoperative histologic images is dependent on a contracting, unevenly distributed pathology workforce4. In the present study, we report a parallel workflow that combines stimulated Raman histology (SRH)5-7, a label-free optical imaging method, and deep convolutional neural networks (CNNs) to predict diagnosis at the bedside in near real time in an automated fashion. Specifically, our CNNs, trained on over 2.5 million SRH images, predict brain tumor diagnosis in the operating room in under 150 s, an order of magnitude faster than conventional techniques (for example, 20-30 min)2. In a multicenter, prospective clinical trial (n = 278), we demonstrated that CNN-based diagnosis of SRH images was noninferior to pathologist-based interpretation of conventional histologic images (overall accuracy, 94.6% versus 93.9%). Our CNNs learned a hierarchy of recognizable histologic feature representations to classify the major histopathologic classes of brain tumors. In addition, we implemented a semantic segmentation method to identify tumor-infiltrated diagnostic regions within SRH images. These results demonstrate how intraoperative cancer diagnosis can be streamlined, creating a complementary pathway for tissue diagnosis that is independent of a traditional pathology laboratory.


Subjects
Brain Neoplasms/diagnosis; Computer Systems; Monitoring, Intraoperative; Neural Networks, Computer; Spectrum Analysis, Raman; Algorithms; Brain Neoplasms/diagnostic imaging; Clinical Trials as Topic; Deep Learning; Humans; Image Processing, Computer-Assisted; Probability
11.
Article in English | MEDLINE | ID: mdl-26736781

ABSTRACT

Wound surface area changes over multiple weeks are highly predictive of the wound healing process. Furthermore, the quality and quantity of tissue in the wound bed also offer important prognostic information. Unfortunately, accurate measurements of wound surface area changes are out of reach in the busy wound practice setting. Currently, clinicians gauge wound size by estimating wound width and length using a scalpel after wound treatment, which is highly inaccurate. To address this problem, we propose an integrated system to automatically segment wound regions and analyze wound conditions in wound images. Unlike previous segmentation techniques that rely on handcrafted features or unsupervised approaches, our proposed deep learning method jointly learns task-relevant visual features and performs wound segmentation. Moreover, the learned features are applied to further analysis of wounds in two ways: infection detection and healing progress prediction. To the best of our knowledge, this is the first attempt to automate long-term prediction of general wound healing progress. Our method is computationally efficient, taking less than 5 seconds per wound image (480 by 640 pixels) on a typical laptop computer. Our evaluations on a large-scale wound database demonstrate the effectiveness and reliability of the proposed system.
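The longitudinal quantity the abstract highlights, wound surface-area change across weeks, is a simple percent reduction once segmentation yields an area per visit. A sketch of that measurement; the system's own healing-progress predictor is a learned model, not this formula:

```python
def percent_area_reduction(area_initial, area_current):
    """Percent reduction in segmented wound area between two visits;
    positive values indicate healing, negative values deterioration."""
    if area_initial <= 0:
        raise ValueError("initial area must be positive")
    return 100.0 * (area_initial - area_current) / area_initial
```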


Subjects
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Wound Healing; Automation; Humans; Machine Learning; Reproducibility of Results
13.
Nat Biotechnol ; 25(5): 584-92, 2007 May.
Article in English | MEDLINE | ID: mdl-17401361

ABSTRACT

Using 62 probe-level datasets obtained with a custom-designed Caulobacter crescentus microarray chip, we identify the transcriptional start sites of 769 genes, 53 of which are transcribed from multiple start sites. Transcriptional start sites are identified by analyzing probe-signal cross-correlation matrices created from probe pairs tiled every 5 bp upstream of the genes; signals from probes binding the same message are correlated. For genes transcribed from multiple promoters, the contribution of each promoter is identified. Knowing the transcription start site enables targeted searching for regulatory-protein binding motifs in the promoter regions of genes with similar expression patterns. We identified 27 motifs, 17 of which share no similarity with the characterized motifs of other C. crescentus transcriptional regulators. Using these motifs, we predict coregulated genes. We verified novel promoter motifs that regulate stress-response genes, including those responding to uranium challenge, a stress-response sigma factor, and a stress-response noncoding RNA.
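The detection signal here, that probes downstream of a start site bind the same message and therefore correlate across arrays, is a cross-correlation matrix over probe signal profiles. A sketch with synthetic signals; numpy's `corrcoef` stands in for the paper's actual analysis:

```python
import numpy as np

def probe_correlation_matrix(probe_signals):
    """Pairwise Pearson correlations between probes tiled every 5 bp
    upstream of a gene (rows = probes, columns = arrays). A block of
    high correlation beginning at some probe position marks a candidate
    transcription start site."""
    return np.corrcoef(np.asarray(probe_signals, dtype=float))
```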


Subjects
Caulobacter crescentus/genetics; Conserved Sequence/genetics; DNA, Bacterial/genetics; Models, Genetic; Oligonucleotide Array Sequence Analysis/methods; Regulon/genetics; Transcription, Genetic/genetics; Base Sequence; Computer Simulation; Molecular Sequence Data; Promoter Regions, Genetic/genetics; Sequence Analysis, DNA/methods