Results 1 - 20 of 12,012
1.
MAbs ; 14(1): 2020203, 2022.
Article in English | MEDLINE | ID: mdl-35133949

ABSTRACT

Despite recent advances in transgenic animal models and display technologies, humanization of mouse sequences remains one of the main routes for therapeutic antibody development. Traditionally, humanization is manual, laborious, and requires expert knowledge. Although automation efforts are advancing, existing methods are either demonstrated on a small scale or are entirely proprietary. To predict the immunogenicity risk, the human-likeness of sequences can be evaluated using existing humanness scores, but these lack diversity, granularity or interpretability. Meanwhile, immune repertoire sequencing has generated rich antibody libraries such as the Observed Antibody Space (OAS) that offer augmented diversity not yet exploited for antibody engineering. Here we present BioPhi, an open-source platform featuring novel methods for humanization (Sapiens) and humanness evaluation (OASis). Sapiens is a deep learning humanization method trained on the OAS using language modeling. Based on an in silico humanization benchmark of 177 antibodies, Sapiens produced sequences at scale while achieving results comparable to those of human experts. OASis is a granular, interpretable and diverse humanness score based on 9-mer peptide search in the OAS. OASis separated human and non-human sequences with high accuracy, and correlated with clinical immunogenicity. BioPhi thus offers an antibody design interface with automated methods that capture the richness of natural antibody repertoires to produce therapeutics with desired properties and accelerate antibody discovery campaigns. The BioPhi platform is accessible at https://biophi.dichlab.org and https://github.com/Merck/BioPhi.
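The peptide-level humanness idea behind OASis can be illustrated with a toy sketch: score a sequence by the fraction of its overlapping 9-mers found in a reference set of human-repertoire peptides. The sequences, reference set, and function names below are invented for illustration and are not BioPhi's actual implementation.

```python
# Hypothetical sketch of a 9-mer humanness score in the spirit of OASis.
# A real system would search billions of OAS peptides, not a toy set.

def ninemers(seq, k=9):
    """All overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def humanness_score(seq, human_peptides, k=9):
    """Fraction of the sequence's k-mers present in the reference set."""
    kmers = ninemers(seq, k)
    if not kmers:
        return 0.0
    hits = sum(1 for p in kmers if p in human_peptides)
    return hits / len(kmers)

# Toy reference set built from one "human" sequence.
human = "EVQLVESGGGLVQPGGSLRL"
reference = set(ninemers(human))

print(humanness_score(human, reference))  # 1.0 by construction
print(humanness_score("EVQLVESGGGAAAAAAAAAA", reference))
```

A granular report like OASis would additionally flag which individual 9-mers miss the reference, pointing at candidate positions for humanizing mutations.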


Subjects
Deep Learning, Animals, Antibodies, Mice
2.
Cancer Res ; 82(15): 2672-2673, 2022 Aug 03.
Article in English | MEDLINE | ID: mdl-35919991

ABSTRACT

Despite the crucial role of phenotypic and genetic intratumoral heterogeneity in understanding and predicting clinical outcomes for patients with cancer, computational pathology studies have yet to make substantial progress in this area. The major limiting factor has been the bulk gene-sequencing practice that results in loss of the spatial information of gene status, making the study of intratumoral heterogeneity difficult. In this issue of Cancer Research, Acosta and colleagues used deep learning to study whether localized gene mutation status can be predicted from localized tumor morphology in clear cell renal cell carcinoma. The algorithm was developed using curated sets of matched hematoxylin and eosin and IHC images, which represent spatially resolved morphology and genotype, respectively. This study confirms the existence of a strong link between morphology and underlying genetics at a regional level, paving the way for further investigations into intratumoral heterogeneity. See related article by Acosta et al., p. 2792.


Subjects
Deep Learning, Kidney Neoplasms, Humans, Kidney Neoplasms/genetics, Mutation
3.
PLoS One ; 17(8): e0272317, 2022.
Article in English | MEDLINE | ID: mdl-35930531

ABSTRACT

Extracting water bodies from remote sensing images is important in many fields, such as in water resources information acquisition and analysis. Conventional methods of water body extraction enhance the differences between water bodies and other interfering objects to improve the accuracy of water body boundary extraction, and multiple methods must be used alternately to extract water body boundaries more accurately. Water body extraction methods combined with neural networks struggle to improve the extraction accuracy of fine water bodies while ensuring a good overall extraction effect. In this study, false color processing and a generative adversarial network (GAN) were added to reconstruct remote sensing images and enhance the features of tiny water bodies. In addition, a multi-scale input strategy was designed to reduce the training cost. We input the processed data into a new water body extraction method based on strip pooling for remote sensing images, which is an improvement of DeepLabv3+. Strip pooling was introduced into the DeepLabv3+ network to better extract water bodies with a discrete distribution at long distances using different strip kernels. The experiments and tests show that the proposed method can improve the accuracy of water body extraction and is effective in fine water body extraction. Compared with seven other traditional remote sensing water body extraction methods and deep learning semantic segmentation methods, the prediction accuracy of the proposed method reaches 94.72%. In summary, the proposed method performs water body extraction better than existing methods.
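The strip pooling operation the abstract introduces can be sketched in a few lines: instead of square pooling windows, it averages along entire rows and columns with 1xW and Hx1 "strip" kernels, which suits long, thin, discretely distributed water pixels. This is a pure-Python illustration on a single-channel map with made-up values; the actual model applies it per channel inside a CNN.

```python
# Illustrative strip pooling on a 2D feature map (toy values).

def strip_pool(feature_map):
    """Return (row_strips, col_strips): the mean over each full row (1xW
    kernel) and each full column (Hx1 kernel) of the feature map."""
    h = len(feature_map)
    w = len(feature_map[0])
    row_strips = [sum(row) / w for row in feature_map]
    col_strips = [sum(feature_map[i][j] for i in range(h)) / h
                  for j in range(w)]
    return row_strips, col_strips

# A 3x4 map where a thin horizontal "river" fills the bottom row.
fmap = [[0, 1, 0, 1],
        [0, 0, 0, 0],
        [1, 1, 1, 1]]
rows, cols = strip_pool(fmap)
print(rows)  # [0.5, 0.0, 1.0] - the full bottom row stands out
print(cols)
```

In the real network these strip responses are broadcast back over the map and fused with the standard DeepLabv3+ branches, letting distant water pixels in the same row or column reinforce each other.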


Subjects
Deep Learning, Computer-Assisted Image Processing, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Remote Sensing Technology, Water
4.
Structure ; 30(8): 1047-1049, 2022 Aug 04.
Article in English | MEDLINE | ID: mdl-35931059

ABSTRACT

Accurate protein structure predictors use clusters of homologues, which disregard sequence-specific effects. In this issue of Structure, Weißenow and colleagues report a deep learning-based tool, EMBER2, that efficiently predicts the distances in a protein structure from its amino acid sequence alone. This approach should enable the analysis of mutation effects.


Subjects
Computational Biology, Deep Learning, Amino Acid Sequence, Language, Proteins/chemistry
5.
Sci Rep ; 12(1): 13462, 2022 Aug 05.
Article in English | MEDLINE | ID: mdl-35931705

ABSTRACT

Application of scanning transmission electron microscopy (STEM) to in situ observation will be essential in current and emerging data-driven materials science, given STEM's high affinity with various analytical options. As is well known, STEM's image acquisition time needs to be further shortened to capture a targeted phenomenon in real time, as STEM's current temporal resolution is far below that of conventional TEM. However, rapid image acquisition at a millisecond per frame or faster generally causes image distortion, poor electron signals, and unidirectional blurring, which are obstacles to realizing video-rate STEM observation. Here we show an image correction framework integrating deep learning (DL)-based denoising and image distortion correction schemes optimized for rapid STEM image acquisition. By comparing a series of distortion-corrected rapid scan images with corresponding regular scan speed images, the trained DL network is shown to remove not only the statistical noise but also the unidirectional blurring. This result demonstrates that rapid as well as high-quality image acquisition by STEM without hardware modification can be established by DL. The DL-based noise filter could be applied to in situ observation, such as dislocation activities under external stimuli, with high spatio-temporal resolution.


Subjects
Deep Learning, Diagnostic Imaging, Computer-Assisted Image Processing/methods, Scanning Transmission Electron Microscopy, Signal-to-Noise Ratio
6.
Sci Rep ; 12(1): 13468, 2022 Aug 05.
Article in English | MEDLINE | ID: mdl-35931710

ABSTRACT

We approach the task of detecting the illicit movement of cultural heritage from a machine learning perspective by presenting a framework for detecting a known artefact in a new and unseen image. To this end, we explore the machine learning problem of instance classification for large archaeological image datasets, i.e. where each individual object (instance) is itself a class to which all of the multiple images of that object belong. We focus on a wide variety of objects in the Durham Oriental Museum, with which we build a dataset of over 24,502 images of 4332 unique object instances. We experiment with state-of-the-art convolutional neural network models, the smaller variations of which are suitable for deployment on mobile applications. We find the exact object instance of a given image can be predicted from among 4332 others with ~ 72% accuracy, showing how effectively machine learning can detect a known object from a new image. We demonstrate that accuracy significantly improves as the number of images per object instance increases (up to ~ 83%), with an ensemble of classifiers scoring as high as 84%. We find that the correct instance is found in the top 3, 5, or 10 predictions of our best models ~ 91%, ~ 93%, or ~ 95% of the time, respectively. Our findings contribute to the emerging overlap of machine learning and cultural heritage, and highlight the potential available to future applications and research.
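The top-3/5/10 figures reported above are top-k accuracies: the correct instance merely has to appear among the model's k highest-scoring classes. A minimal sketch, with made-up scores rather than the paper's model outputs:

```python
# Top-k accuracy for instance classification (toy data).

def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k top-scored classes."""
    hits = 0
    for row, label in zip(scores, labels):
        ranked = sorted(range(len(row)), key=lambda c: row[c], reverse=True)
        if label in ranked[:k]:
            hits += 1
    return hits / len(labels)

# 3 samples scored over 4 hypothetical object instances.
scores = [[0.1, 0.6, 0.2, 0.1],
          [0.5, 0.1, 0.3, 0.1],
          [0.2, 0.2, 0.3, 0.3]]
labels = [1, 2, 0]
print(topk_accuracy(scores, labels, 1))
print(topk_accuracy(scores, labels, 3))  # 1.0 - all true labels in top 3
```

This is why top-k metrics suit a museum-lookup application: a curator can visually confirm the match from a short candidate list rather than relying on the single top prediction.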


Subjects
Deep Learning, Artifacts, Machine Learning, Neural Networks (Computer)
7.
PLoS One ; 17(8): e0269826, 2022.
Article in English | MEDLINE | ID: mdl-35925956

ABSTRACT

The complex feature characteristics and low contrast of cancer lesions, a high degree of inter-class resemblance between malignant and benign lesions, and the presence of various artifacts including hairs make automated melanoma recognition in dermoscopy images quite challenging. To date, various computer-aided solutions have been proposed to identify and classify skin cancer. In this paper, a deep learning model with a shallow architecture is proposed to classify lesions into benign and malignant. To achieve effective training while limiting overfitting problems due to limited training data, image preprocessing and data augmentation processes are introduced. After this, the 'box blur' down-scaling method is employed, which adds efficiency to our study by significantly reducing the overall training time and space complexity. Our proposed shallow convolutional neural network (SCNN_12) model is trained and evaluated on the Kaggle skin cancer data ISIC archive, which was augmented to 16,485 images by implementing different augmentation techniques. The model was able to achieve an accuracy of 98.87% with the Adam optimizer and a learning rate of 0.001. In this regard, the parameters and hyper-parameters of the model are determined by performing ablation studies. To assert no occurrence of overfitting, experiments are carried out exploring k-fold cross-validation and different dataset split ratios. Furthermore, to affirm the robustness of the model, it is evaluated on noisy data to examine the performance when the image quality gets corrupted. This research corroborates that effective training for medical image analysis, addressing training time and space complexity, is possible even with a lightweight network using a limited amount of training data.
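The 'box blur' down-scaling step can be sketched directly: each output pixel is the mean of a k x k block of the input, shrinking the image (and hence training cost) by a factor of k per side. A grayscale, pure-Python illustration; the block size and pixel values are made up, not the paper's settings:

```python
# Box-blur down-scaling: average each k x k block into one output pixel.

def box_blur_downscale(img, k):
    """Downscale a grayscale image by factor k via k x k block averaging.
    Assumes image dimensions are divisible by k."""
    h, w = len(img), len(img[0])
    out = []
    for bi in range(0, h, k):
        row = []
        for bj in range(0, w, k):
            block = [img[i][j] for i in range(bi, bi + k)
                               for j in range(bj, bj + k)]
            row.append(sum(block) / (k * k))
        out.append(row)
    return out

img = [[0, 0, 4, 4],
       [0, 0, 4, 4],
       [8, 8, 0, 0],
       [8, 8, 0, 0]]
print(box_blur_downscale(img, 2))  # [[0.0, 4.0], [8.0, 0.0]]
```

Averaging before down-sampling acts as a crude low-pass filter, so fine lesion texture is smoothed rather than aliased, which is what makes the reduction relatively safe for classification.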


Subjects
Deep Learning, Melanoma, Skin Neoplasms, Artifacts, Dermoscopy, Humans, Melanoma/diagnostic imaging, Melanoma/pathology, Neural Networks (Computer), Skin Neoplasms/diagnostic imaging, Skin Neoplasms/pathology
8.
Cell Rep ; 40(5): 111151, 2022 Aug 02.
Article in English | MEDLINE | ID: mdl-35926462

ABSTRACT

Serial section electron microscopy (ssEM) can provide comprehensive 3D ultrastructural information of the brain, albeit at exceptional computational cost. Targeted reconstruction of subcellular structures from ssEM datasets is less computationally demanding but still highly informative. We thus developed a region-CNN-based deep learning method to identify, segment, and reconstruct synapses and mitochondria to explore the structural plasticity of synapses and mitochondria in the auditory cortex of mice subjected to fear conditioning. Upon reconstructing over 135,000 mitochondria and 160,000 synapses, we find that fear conditioning significantly increases the number of mitochondria but decreases their size and promotes formation of multi-contact synapses, comprising a single axonal bouton and multiple postsynaptic sites from different dendrites. Modeling indicates that such multi-contact configuration increases the information storage capacity of new synapses by over 50%. With high accuracy and speed in reconstruction, our method yields structural and functional insight into cellular plasticity associated with fear learning.


Subjects
Deep Learning, Animals, Fear, Mice, Electron Microscopy, Mitochondria/ultrastructure, Neuronal Plasticity, Synapses/metabolism
9.
J Am Coll Cardiol ; 80(6): 613-626, 2022 Aug 09.
Article in English | MEDLINE | ID: mdl-35926935

ABSTRACT

BACKGROUND: Valvular heart disease is an important contributor to cardiovascular morbidity and mortality and remains underdiagnosed. Deep learning analysis of electrocardiography (ECG) may be useful in detecting aortic stenosis (AS), aortic regurgitation (AR), and mitral regurgitation (MR). OBJECTIVES: This study aimed to develop ECG deep learning algorithms to identify moderate or severe AS, AR, and MR alone and in combination. METHODS: A total of 77,163 patients undergoing ECG within 1 year before echocardiography from 2005-2021 were identified and split into train (n = 43,165), validation (n = 12,950), and test sets (n = 21,048; 7.8% with any of AS, AR, or MR). Model performance was assessed using area under the receiver-operating characteristic (AU-ROC) and precision-recall curves. External validation was conducted on an independent data set. Test accuracy was modeled using different disease prevalence levels to simulate screening efficacy using the deep learning model. RESULTS: The deep learning algorithm model accuracy was as follows: AS (AU-ROC: 0.88), AR (AU-ROC: 0.77), MR (AU-ROC: 0.83), and any of AS, AR, or MR (AU-ROC: 0.84; sensitivity 78%, specificity 73%), with similar accuracy in external validation. In screening program modeling, test characteristics were dependent on underlying prevalence and selected sensitivity levels. At a prevalence of 7.8%, the positive and negative predictive values were 20% and 97.6%, respectively. CONCLUSIONS: Deep learning analysis of the ECG can accurately detect AS, AR, and MR in this multicenter cohort and may serve as the basis for the development of a valvular heart disease screening program.
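The screening arithmetic in the RESULTS can be reproduced with Bayes' rule: given sensitivity 78%, specificity 73%, and disease prevalence 7.8%, the positive and negative predictive values come out at roughly the reported 20% and 97.6%.

```python
# Predictive values of a screening test from sensitivity, specificity,
# and prevalence (the numbers are those quoted in the abstract).

def predictive_values(sens, spec, prev):
    """Return (PPV, NPV) for a test applied at a given disease prevalence."""
    tp = sens * prev               # true positives, per unit population
    fp = (1 - spec) * (1 - prev)   # false positives
    tn = spec * (1 - prev)         # true negatives
    fn = (1 - sens) * prev         # false negatives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.78, 0.73, 0.078)
print(round(ppv, 3), round(npv, 3))  # ~0.196 and ~0.975
```

This makes the study's point concrete: at modest prevalence even a good AUC yields a low PPV, so such a model is better suited to ruling disease out (high NPV) and triaging patients toward echocardiography.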


Subjects
Aortic Valve Insufficiency, Aortic Valve Stenosis, Deep Learning, Heart Valve Diseases, Mitral Valve Insufficiency, Aortic Valve Insufficiency/diagnosis, Aortic Valve Stenosis/diagnosis, Electrocardiography, Heart Valve Diseases/diagnosis, Heart Valve Diseases/epidemiology, Humans, Mitral Valve Insufficiency/diagnosis, Mitral Valve Insufficiency/epidemiology
10.
BMC Bioinformatics ; 23(1): 318, 2022 Aug 04.
Article in English | MEDLINE | ID: mdl-35927611

ABSTRACT

BACKGROUND: Essential proteins are demonstrated to exert vital functions in cellular processes and are indispensable for the survival and reproduction of the organism. Traditional centrality methods perform poorly on complex protein-protein interaction (PPI) networks. Machine learning approaches based on high-throughput data lack the exploitation of the temporal and spatial dimensions of biological information. RESULTS: We put forward a deep learning framework to predict essential proteins by integrating features obtained from the PPI network, subcellular localization, and gene expression profiles. In our model, the node2vec method is applied to learn continuous feature representations for proteins in the PPI network, which capture the diversity of connectivity patterns in the network. The concept of depthwise separable convolution is employed on gene expression profiles to extract properties and observe the trends of gene expression over time under different experimental conditions. Subcellular localization information is mapped into a long one-dimensional vector to capture its characteristics. Additionally, we use a sampling method to mitigate the impact of imbalanced learning when training the model. With experiments carried out on data from Saccharomyces cerevisiae, results show that our model outperforms traditional centrality methods and machine learning methods. Likewise, comparative experiments show that our processing of the various biological information sources is preferable. CONCLUSIONS: Our proposed deep learning framework effectively identifies essential proteins by integrating multiple biological data sources, showing that a broader selection of subcellular localization information significantly improves prediction results and that depthwise separable convolution applied to gene expression profiles enhances performance.
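The appeal of depthwise separable convolution, here applied to gene expression profiles, is largely its parameter economy: a per-channel (depthwise) kernel followed by a 1x1 (pointwise) channel mixer replaces the full cross-channel kernel. A sketch of the parameter counts, with illustrative channel and kernel sizes (biases omitted):

```python
# Parameter counts: standard 1-D convolution vs depthwise separable.

def standard_conv_params(c_in, c_out, k):
    """One k-wide kernel per (input channel, output channel) pair."""
    return c_in * c_out * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise: one k-wide kernel per input channel.
    Pointwise: a 1x1 convolution mixing channels."""
    depthwise = c_in * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 16, 32, 5  # made-up shapes for illustration
print(standard_conv_params(c_in, c_out, k))   # 2560
print(separable_conv_params(c_in, c_out, k))  # 592
```

The depthwise stage fits the data's structure: each expression channel is filtered along the time axis independently (tracking its temporal trend), and only then are conditions mixed across channels.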


Subjects
Deep Learning, Computational Biology/methods, Machine Learning, Protein Interaction Maps, Proteins/metabolism, Saccharomyces cerevisiae/genetics, Saccharomyces cerevisiae/metabolism
11.
Sci Rep ; 12(1): 13276, 2022 Aug 02.
Article in English | MEDLINE | ID: mdl-35918392

ABSTRACT

Cynomolgus monkeys exhibit human-like features, such as a fovea, so they are often used in non-clinical research. Nevertheless, little is known about the natural variation of choroidal thickness in relation to origin and sex. A combination of deep learning and a deterministic computer vision algorithm was applied for automatic segmentation of foveolar optical coherence tomography (OCT) images in cynomolgus monkeys. The main evaluation parameters were choroidal thickness and surface area, measured from the deepest point on OCT images within the fovea (marked as the nulla), with regard to sex and origin. Reference choroid landmarks were set underneath the nulla and at 500 µm intervals laterally, up to a distance of 2000 µm nasally and temporally, complemented by a sub-analysis of the central bouquet of cones. 203 animals contributed 374 eyes for a reference choroid database. The overall average central choroidal thickness was 193 µm with a coefficient of variation of 7.8%, and the overall mean surface area of the central bouquet was 19,335 µm² temporally and 19,283 µm² nasally. The choroidal thickness of the fovea appears relatively homogeneous between the sexes and the studied origins. However, considerable natural variation has been observed, which needs to be appreciated.
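The 7.8% coefficient of variation quoted above is simply the standard deviation expressed relative to the mean. A small sketch with invented thickness values, not the study's measurements:

```python
# Coefficient of variation (CV) of central choroidal thickness, in percent.

import statistics

def coefficient_of_variation(values):
    """CV as a percentage: sample standard deviation relative to the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical central choroidal thicknesses, in micrometres.
thicknesses = [193, 180, 205, 198, 186]
cv = coefficient_of_variation(thicknesses)
print(round(cv, 1))
```

Because CV is unitless, it lets a thickness spread of a few micrometres be compared directly across sexes, origins, or even species with different baseline thicknesses.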


Subjects
Deep Learning, Optical Coherence Tomography, Animals, Choroid/diagnostic imaging, Fovea Centralis/diagnostic imaging, Humans, Macaca fascicularis, Optical Coherence Tomography/methods
12.
Sci Rep ; 12(1): 13281, 2022 Aug 02.
Article in English | MEDLINE | ID: mdl-35918498

ABSTRACT

The use of sharpness-aware minimization (SAM) as an optimizer that achieves high performance for convolutional neural networks (CNNs) is attracting attention in various fields of deep learning. We used deep learning to perform classification diagnosis in oral exfoliative cytology and analyzed performance, using SAM as the optimization algorithm to improve classification accuracy. The whole image of each oral exfoliative cytology slide was cut into tiles and labeled by an oral pathologist. The CNN was VGG16, and stochastic gradient descent (SGD) and SAM were used as optimizers. Each was analyzed with and without a learning rate scheduler over 300 epochs. The performance metrics used were accuracy, precision, recall, specificity, F1 score, AUC, and statistical significance and effect size. All optimizers performed better with the learning rate scheduler. In particular, SAM showed large effect sizes for accuracy (11.2) and AUC (11.0). SAM with a learning rate scheduler had the best performance of all models (AUC = 0.9328), and SAM tended to suppress overfitting compared with SGD. In oral exfoliative cytology classification, CNNs using SAM with a learning rate scheduler showed the highest classification performance. These results suggest that SAM can play an important role in primary screening in the oral cytological diagnostic environment.
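SAM's two-step update can be shown on a toy 1-D loss: first ascend to the (approximately) worst-case point within an L2 ball of radius rho around the weights, then apply the gradient computed there to the original weights. The loss, rho, and learning rate below are illustrative, not the paper's settings.

```python
# Minimal sharpness-aware minimization (SAM) sketch on L(w) = (w - 3)^2.

def grad(w):
    """Gradient of the toy loss L(w) = (w - 3)^2."""
    return 2 * (w - 3)

def sam_step(w, rho=0.05, lr=0.1):
    g = grad(w)
    eps = rho * g / (abs(g) + 1e-12)  # normalized ascent within the rho-ball
    g_adv = grad(w + eps)             # gradient at the perturbed weights
    return w - lr * g_adv             # SGD step using the SAM gradient

w = 0.0
for _ in range(100):
    w = sam_step(w)
print(w)  # settles near the flat minimum at w = 3
```

By optimizing the worst case in a neighborhood rather than the point loss, SAM prefers flat minima, which is the usual explanation for the overfitting suppression the abstract observes.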


Subjects
Deep Learning, Algorithms, Neural Networks (Computer)
13.
Rev Soc Bras Med Trop ; 55: e0420, 2022.
Article in English | MEDLINE | ID: mdl-35946631

ABSTRACT

BACKGROUND: Malaria is curable. Nonetheless, over 229 million cases of malaria were recorded in 2019, along with 409,000 deaths. Although over 42 million Brazilians are at risk of contracting malaria, 99% of all malaria cases in Brazil are located in or around the Amazon rainforest. Despite declining cases and deaths, malaria remains a major public health issue in Brazil. Accurate spatiotemporal prediction of malaria propagation may enable improved resource allocation to support efforts to eradicate the disease. METHODS: In response to calls for novel research on malaria elimination strategies that suit local conditions, in this study, we propose machine learning (ML) and deep learning (DL) models to predict the probability of malaria cases in the state of Amazonas. Using a dataset of approximately 6 million records (January 2003 to December 2018), we applied k-means clustering to group cities based on their similarity of malaria incidence. We evaluated random forest, long short-term memory (LSTM) and gated recurrent unit (GRU) models and compared their performance. RESULTS: The LSTM architecture achieved better performance in clusters with less variability in the number of cases, whereas the GRU presented better results in clusters with high variability. Although Diebold-Mariano testing suggested that the LSTM and GRU performed comparably, the GRU can be trained significantly faster, which could prove advantageous in practice. CONCLUSIONS: All models showed satisfactory accuracy and strong performance in predicting new cases of malaria, and each could serve as a supplemental tool to support regional policies and strategies.
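The clustering step described in METHODS can be sketched with a tiny 1-D k-means (Lloyd's algorithm) that groups cities by incidence before per-cluster forecasters are fitted. City incidence values, initial centers, and cluster count below are invented for the example.

```python
# Toy 1-D k-means for grouping cities by malaria incidence similarity.

def kmeans_1d(values, centers, iters=20):
    """Lloyd's algorithm on scalars; returns (assignments, centers)."""
    for _ in range(iters):
        assign = [min(range(len(centers)), key=lambda c: abs(v - centers[c]))
                  for v in values]
        for c in range(len(centers)):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign, centers

incidence = [2.1, 2.4, 2.0, 9.8, 10.5, 9.9]  # toy cases per 1,000 inhabitants
assign, centers = kmeans_1d(incidence, centers=[0.0, 5.0])
print(assign)   # low-incidence cities vs high-incidence cities
print(centers)  # per-cluster mean incidence
```

Training one recurrent model per cluster, as the study does, lets low-variability clusters (where LSTM did better) and high-variability clusters (where GRU did better) each get a suitable forecaster.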


Subjects
Deep Learning, Malaria, Brazil/epidemiology, Cities, Humans, Incidence, Malaria/epidemiology
14.
Biomed Eng Online ; 21(1): 55, 2022 Aug 08.
Article in English | MEDLINE | ID: mdl-35941613

ABSTRACT

BACKGROUND: Refractive error detection is a significant factor in preventing the development of myopia. To improve the efficiency and accuracy of refractive error detection, a refractive error detection network (REDNet) is proposed that combines the advantages of a convolutional neural network (CNN) and a recurrent neural network (RNN). It not only extracts the features of each image, but also fully utilizes the sequential relationship between images. In this article, we develop a system to predict the spherical power, cylindrical power, and spherical equivalent in multiple eccentric photorefraction images. APPROACH: First, images of the pupil area are extracted from multiple eccentric photorefraction images; then, the features of each pupil image are extracted using the REDNet convolution layers. Finally, the features are fused by the recurrent layers in REDNet to predict the spherical power, cylindrical power, and spherical equivalent. RESULTS: The results show that the mean absolute error (MAE) values of the spherical power, cylindrical power, and spherical equivalent can reach 0.1740 D (diopters), 0.0702 D, and 0.1835 D, respectively. SIGNIFICANCE: This method demonstrates a much higher accuracy than those of current state-of-the-art deep-learning methods. Moreover, it is effective and practical.
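Two small formulas anchor the abstract's quantities: the spherical equivalent follows the standard optics relation SE = sphere + cylinder / 2, and the reported errors are mean absolute errors. A sketch with invented diopter values, not the paper's data:

```python
# Spherical equivalent and mean absolute error, in diopters (D).

def spherical_equivalent(sphere_d, cylinder_d):
    """Standard relation: SE = sphere + cylinder / 2."""
    return sphere_d + cylinder_d / 2

def mean_absolute_error(pred, true):
    """MAE, the per-quantity error metric reported in the abstract."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

print(spherical_equivalent(-2.00, -0.50))  # -2.25 D

# Hypothetical model outputs vs ground truth, in diopters.
pred = [-2.10, -1.95, -2.40]
true = [-2.25, -2.00, -2.25]
print(round(mean_absolute_error(pred, true), 4))
```

An MAE near 0.17 D, as reported, is well under the 0.25 D step of a typical prescription, which is what makes the method clinically usable.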


Subjects
Deep Learning, Myopia, Refractive Errors, Humans, Neural Networks (Computer), Ocular Refraction, Refractive Errors/diagnosis
15.
PLoS One ; 17(8): e0272055, 2022.
Article in English | MEDLINE | ID: mdl-35944013

ABSTRACT

To develop deep learning models for predicting intraoperative hypotension (IOH) using waveforms from arterial blood pressure (ABP), electrocardiogram (ECG), and electroencephalogram (EEG), and to determine whether combining ABP with EEG or ECG improves model performance. Data were retrieved from VitalDB, a public data repository of vital signs taken during surgeries in 10 operating rooms at Seoul National University Hospital from January 6, 2005, to March 1, 2014. Retrospective data from 14,140 adult patients undergoing non-cardiac surgery with general anaesthesia were used. The predictive performances of models trained with different combinations of waveforms were evaluated and compared at 3, 5, 10, and 15 minutes before the event. Performance was calculated as area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), sensitivity, and specificity. Model performance was better in the model using both ABP and EEG waveforms than in all other models at all time points (3, 5, 10, and 15 minutes before an event). Using high-fidelity ABP and EEG waveforms, the model predicted IOH with an AUROC and AUPRC of 0.935 [0.932 to 0.938] and 0.882 [0.876 to 0.887] at 5 minutes before an IOH event. The model using both ABP and EEG was also better calibrated than those using other combinations or ABP alone. The results demonstrate that a predictive deep neural network can be trained using ABP, ECG, and EEG waveforms, and that the combination of ABP and EEG improves model performance and calibration.
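The headline metric here is AUROC, which equals the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal pairwise computation on toy predictions (the scores below are made up, not the study's outputs):

```python
# Rank-based AUROC: fraction of (positive, negative) pairs ranked correctly.

def auroc(scores, labels):
    """AUROC via pairwise comparison; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3]
labels = [1,   1,   0,   1,   0]
print(auroc(scores, labels))  # 5 of 6 pairs correct, ~0.833
```

For rare events such as IOH, the study's additional use of AUPRC is important: unlike AUROC, precision-recall curves degrade visibly when false positives swamp a small positive class.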


Subjects
Deep Learning, Hypotension, Adult, Arterial Pressure/physiology, Blood Pressure, Electrocardiography/methods, Electroencephalography, Humans, Hypotension/diagnosis, Retrospective Studies
16.
PLoS One ; 17(8): e0272696, 2022.
Article in English | MEDLINE | ID: mdl-35944056

ABSTRACT

INTRODUCTION: According to the World Health Organization, the tall cell variant (TCV) is an aggressive subtype of papillary thyroid carcinoma (PTC) comprising at least 30% epithelial cells two to three times as tall as they are wide. In practice, applying this definition is difficult, causing substantial interobserver variability. We aimed to train a deep learning algorithm to detect and quantify the proportion of tall cells (TCs) in PTC. METHODS: We trained the deep learning algorithm using supervised learning, testing it on an independent dataset, and further validating it on an independent set of 90 PTC samples from patients treated at the Hospital District of Helsinki and Uusimaa between 2003 and 2013. We compared the algorithm-based TC percentage to independent scoring by a human investigator and assessed how those scorings were associated with disease outcomes. Additionally, we assessed the TC score in 71 local and distant tumor relapse samples from patients with aggressive disease. RESULTS: In the test set, the deep learning algorithm detected TCs with a sensitivity of 93.7% and a specificity of 94.5%, whereas the sensitivity fell to 90.9% and the specificity to 94.1% for non-TC areas. In the validation set, the deep learning algorithm TC scores correlated with diminished relapse-free survival using cutoff points of 10% (p = 0.044), 20% (p < 0.01), and 30% (p = 0.036). The visually assessed TC score did not statistically significantly predict survival at any of the analyzed cutoff points. We observed no statistically significant difference in the TC score between primary tumors and relapse tumors determined by the deep learning algorithm or visually. CONCLUSIONS: We present a novel deep learning-based algorithm to detect tall cells, showing that a high deep learning-based TC score represents a statistically significant predictor of less favorable relapse-free survival in PTC.


Subjects
Papillary Carcinoma, Deep Learning, Thyroid Neoplasms, Papillary Carcinoma/diagnosis, Papillary Carcinoma/pathology, Humans, Neoplasm Local Recurrence/pathology, Papillary Thyroid Cancer/diagnosis, Papillary Thyroid Cancer/pathology, Thyroid Neoplasms/diagnosis, Thyroid Neoplasms/pathology
17.
Cancer Cell ; 40(8): 865-878.e6, 2022 Aug 08.
Article in English | MEDLINE | ID: mdl-35944502

ABSTRACT

The rapidly emerging field of computational pathology has demonstrated promise in developing objective prognostic models from histology images. However, most prognostic models are either based on histology or genomics alone and do not address how these data sources can be integrated to develop joint image-omic prognostic models. Additionally, identifying explainable morphological and molecular descriptors from these models that govern such prognosis is of interest. We use multimodal deep learning to jointly examine pathology whole-slide images and molecular profile data from 14 cancer types. Our weakly supervised, multimodal deep-learning algorithm is able to fuse these heterogeneous modalities to predict outcomes and discover prognostic features that correlate with poor and favorable outcomes. We present all analyses for morphological and molecular correlates of patient prognosis across the 14 cancer types at both a disease and a patient level in an interactive open-access database to allow for further exploration, biomarker discovery, and feature assessment.


Subjects
Deep Learning, Neoplasms, Algorithms, Genomics/methods, Humans, Neoplasms/genetics, Neoplasms/pathology, Prognosis
18.
J Cardiovasc Magn Reson ; 24(1): 47, 2022 Aug 11.
Article in English | MEDLINE | ID: mdl-35948936

ABSTRACT

BACKGROUND: Exercise cardiovascular magnetic resonance (Ex-CMR) is a promising stress imaging test for coronary artery disease (CAD). However, Ex-CMR requires accelerated imaging techniques that result in significant aliasing artifacts. Our goal was to develop and evaluate a free-breathing and electrocardiogram (ECG)-free real-time cine with deep learning (DL)-based radial acceleration for Ex-CMR. METHODS: A 3D (2D + time) convolutional neural network was implemented to suppress artifacts from aliased radial cine images. The network was trained using synthetic real-time radial cine images simulated using breath-hold, ECG-gated segmented Cartesian k-space data acquired at 3 T from 503 patients at rest. A prototype real-time radial sequence with acceleration rate = 12 was used to collect images with inline DL reconstruction. Performance was evaluated in 8 healthy subjects in whom only rest images were collected. Subsequently, 14 subjects (6 healthy and 8 patients with suspected CAD) were prospectively recruited for an Ex-CMR to evaluate image quality. At rest (n = 22), standard breath-hold ECG-gated Cartesian segmented cine and free-breathing ECG-free real-time radial cine images were acquired. During post-exercise stress (n = 14), only real-time radial cine images were acquired. Three readers evaluated residual artifact level in all collected images on a 4-point Likert scale (1-non-diagnostic, 2-severe, 3-moderate, 4-minimal). RESULTS: The DL model substantially suppressed artifacts in real-time radial cine images acquired at rest and during post-exercise stress. In real-time images at rest, 89.4% of scores were moderate to minimal. The mean score was 3.3 ± 0.7, representing increased (P < 0.001) artifacts compared to standard cine (3.9 ± 0.3). In real-time images during post-exercise stress, 84.6% of scores were moderate to minimal, and the mean artifact level score was 3.1 ± 0.6. Comparison of left-ventricular (LV) measures derived from standard and real-time cine at rest showed differences in LV end-diastolic volume (3.0 mL [- 11.7, 17.8], P = 0.320) that were not significantly different from zero. Differences in measures of LV end-systolic volume (7.0 mL [- 1.3, 15.3], P < 0.001) and LV ejection fraction (- 5.0% [- 11.1, 1.0], P < 0.001) were significant. Total inline reconstruction time of real-time radial images was 16.6 ms per frame. CONCLUSIONS: Our proof-of-concept study demonstrated the feasibility of inline real-time cine with DL-based radial acceleration for Ex-CMR.


Subjects
Deep Learning, Cine Magnetic Resonance Imaging, Electrocardiography, Humans, Computer-Assisted Image Interpretation/methods, Cine Magnetic Resonance Imaging/methods, Magnetic Resonance Spectroscopy, Predictive Value of Tests, Reproducibility of Results
19.
JAMA Netw Open ; 5(8): e2225608, 2022 Aug 01.
Article in English | MEDLINE | ID: mdl-35939301

ABSTRACT

Importance: Deep learning may be able to use patient magnetic resonance imaging (MRI) data to aid in brain tumor classification and diagnosis. Objective: To develop and clinically validate a deep learning system for automated identification and classification of 18 types of brain tumors from patient MRI data. Design, Setting, and Participants: This diagnostic study was conducted using MRI data collected between 2000 and 2019 from 37 871 patients. A deep learning system for segmentation and classification of 18 types of intracranial tumors based on T1- and T2-weighted images and T2 contrast MRI sequences was developed and tested. The diagnostic accuracy of the system was tested using 1 internal and 3 external independent data sets. The clinical value of the system was assessed by comparing the tumor diagnostic accuracy of neuroradiologists with vs without assistance of the proposed system using a separate internal test data set. Data were analyzed from March 2019 through February 2020. Main Outcomes and Measures: Changes in neuroradiologist clinical diagnostic accuracy in brain MRI scans with vs without the deep learning system were evaluated. Results: A deep learning system was trained among 37 871 patients (mean [SD] age, 41.6 [11.4] years; 18 519 women [48.9%]). It achieved a mean area under the receiver operating characteristic curve of 0.92 (95% CI, 0.84-0.99) on 1339 patients from 4 centers' data sets in diagnosis and classification of 18 types of tumors. Compared with neuroradiologists, the system achieved higher accuracy and sensitivity and similar specificity (for 300 patients in the Tiantan Hospital test data set: accuracy, 73.3% [95% CI, 67.7%-77.7%] vs 60.9% [95% CI, 46.8%-75.1%]; sensitivity, 88.9% [95% CI, 85.3%-92.4%] vs 53.4% [95% CI, 41.8%-64.9%]; and specificity, 96.3% [95% CI, 94.2%-98.4%] vs 97.9% [95% CI, 97.3%-98.5%]).
With the assistance of the deep learning system, the mean accuracy of neuroradiologists among 1166 patients increased by 12.0 percentage points, from 63.5% (95% CI, 60.7%-66.2%) without assistance to 75.5% (95% CI, 73.0%-77.9%) with assistance. Conclusions and Relevance: These findings suggest that deep learning system-based automated diagnosis may be associated with improved classification and diagnosis of intracranial tumors from MRI data among neuroradiologists.
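The accuracy, sensitivity, and specificity figures quoted above are standard confusion-matrix summaries. A minimal sketch of how they are computed, using hypothetical counts (not the study's actual confusion matrix):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of true tumor cases detected
    specificity = tn / (tn + fp)   # fraction of tumor-free cases correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts for one tumor class on a 300-patient test set
acc, sens, spec = diagnostic_metrics(tp=80, fp=10, tn=190, fn=20)
# acc = 0.9, sens = 0.8, spec = 0.95
```

In the multiclass setting of the study (18 tumor types), such counts would be tallied per class in a one-vs-rest fashion and then averaged.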


Subjects
Brain Neoplasms , Deep Learning , Adult , Brain , Brain Neoplasms/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging/methods , ROC Curve
20.
Proc Natl Acad Sci U S A ; 119(33): e2201062119, 2022 Aug 16.
Article in English | MEDLINE | ID: mdl-35939712

ABSTRACT

Following their success in numerous imaging and computer vision applications, deep-learning (DL) techniques have emerged as one of the most prominent strategies for accelerated MRI reconstruction. These methods have been shown to outperform conventional regularized methods based on compressed sensing (CS). However, in most comparisons, CS is implemented with two or three hand-tuned parameters, while DL methods enjoy a plethora of advanced data science tools. In this work, we revisit ℓ1-wavelet CS reconstruction using these modern tools. Using ideas such as algorithm unrolling and advanced optimization methods over large databases that DL algorithms utilize, along with conventional insights from wavelet representations and CS theory, we show that ℓ1-wavelet CS can be fine-tuned to a level close to DL reconstruction for accelerated MRI. The optimized ℓ1-wavelet CS method uses only 128 parameters (compared with >500,000 for DL), employs a convex reconstruction at inference time, and, in terms of quantitative quality metrics, performs within <1% of a DL approach that has been used in multiple studies.
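The workhorse of ℓ1-wavelet CS is iterative soft-thresholding (ISTA), whose per-iteration structure is also what algorithm unrolling turns into network layers. The sketch below is the classic baseline on a generic sparse-recovery problem, not the authors' optimized method: in CS-MRI the forward operator would combine undersampled Fourier sampling with an inverse wavelet transform, whereas here it is just a random matrix:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1: shrink toward zero by t, clip at zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, step, n_iter):
    """Plain ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)          # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Tiny sanity check: recover a sparse vector from noiseless measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)) / np.sqrt(40)
x_true = np.zeros(20)
x_true[[3, 11]] = [1.0, -2.0]
y = A @ x_true
x_hat = ista(A, y, lam=1e-4, step=0.2, n_iter=2000)
```

Unrolling fixes the number of iterations and lets quantities such as the per-iteration thresholds become the (here, only 128) trainable parameters, while the reconstruction remains convex at inference time.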


Subjects
Deep Learning , Image Processing, Computer-Assisted , Algorithms , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods