Results 1 - 11 of 11
1.
Bioinformatics ; 36(Suppl_2): i875-i883, 2020 12 30.
Article in English | MEDLINE | ID: mdl-33381813

ABSTRACT

MOTIVATION: Advances in automation and imaging have made it possible to capture a large image dataset that spans multiple experimental batches of data. However, accurate biological comparison across the batches is challenged by batch-to-batch variation (i.e. batch effect) due to uncontrollable experimental noise (e.g. varying stain intensity or cell density). Previous approaches to minimize the batch effect have commonly focused on normalizing the low-dimensional image measurements such as an embedding generated by a neural network. However, normalization of the embedding could suffer from over-correction and alter true biological features (e.g. cell size) due to our limited ability to interpret the effect of the normalization on the embedding space. Although techniques like flat-field correction can be applied to normalize the image values directly, they are limited transformations that handle only simple artifacts due to batch effect. RESULTS: We present a neural network-based batch equalization method that can transfer images from one batch to another while preserving the biological phenotype. The equalization method is trained as a generative adversarial network (GAN), using the StarGAN architecture that has shown considerable ability in style transfer. After incorporating new objectives that disentangle batch effect from biological features, we show that the equalized images have less batch information and preserve the biological information. We also demonstrate that the same model training parameters can generalize to two dramatically different types of cells, indicating this approach could be broadly applicable. AVAILABILITY AND IMPLEMENTATION: https://github.com/tensorflow/gan/tree/master/tensorflow_gan/examples/stargan. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
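The multi-objective training described above can be sketched abstractly. This is a hypothetical illustration, not the paper's implementation: the term names and default weights below are assumptions chosen only to convey how a StarGAN-style adversarial term combines with batch-classification, cycle-consistency, and phenotype-preservation objectives.

```python
# Hypothetical sketch of a StarGAN-style generator objective with an added
# disentanglement term; term names and weights are illustrative assumptions,
# not the values used in the paper.
def generator_loss(adv, batch_cls, cycle, phenotype,
                   w_cls=1.0, w_cyc=10.0, w_phe=1.0):
    """Combine the adversarial term (fool the batch discriminator), the
    target-batch classification term, the cycle-consistency reconstruction
    term, and a phenotype-preservation term into one scalar loss."""
    return adv + w_cls * batch_cls + w_cyc * cycle + w_phe * phenotype
```

The actual objectives and coefficients are defined in the linked repository.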


Subjects
Image Processing, Computer-Assisted ; Neural Networks, Computer ; Artifacts
2.
JAMA ; 316(22): 2402-2410, 2016 12 13.
Article in English | MEDLINE | ID: mdl-27898976

ABSTRACT

Importance: Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. Objective: To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. Design and Setting: A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Exposure: Deep learning-trained algorithm. Main Outcomes and Measures: The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. Results: The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). 
For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%. Conclusions and Relevance: In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.
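The two operating points above trade sensitivity against specificity along the same ROC curve. As a minimal illustration (with toy scores and labels, not the study's data), both metrics at a given score threshold reduce to:

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of calling 'referable' when
    score >= threshold, against binary reference labels (1 = RDR)."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    tn = sum(s < threshold and y == 0 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

Lowering the threshold moves the operating point toward higher sensitivity at the cost of specificity, which is exactly the difference between the study's two cut points.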


Subjects
Algorithms ; Diabetic Retinopathy/diagnostic imaging ; Fundus Oculi ; Machine Learning ; Macular Edema/diagnostic imaging ; Neural Networks, Computer ; Photography ; Female ; Humans ; Male ; Middle Aged ; Observer Variation ; Ophthalmologists ; Sensitivity and Specificity
3.
Nat Commun ; 15(1): 9449, 2024 Nov 01.
Article in English | MEDLINE | ID: mdl-39487163

ABSTRACT

Accelerating text input in augmentative and alternative communication (AAC) is a long-standing area of research with bearings on the quality of life in individuals with profound motor impairments. Recent advances in large language models (LLMs) pose opportunities for re-thinking strategies for enhanced text entry in AAC. In this paper, we present SpeakFaster, consisting of an LLM-powered user interface for text entry in a highly-abbreviated form, saving 57% more motor actions than traditional predictive keyboards in offline simulation. A pilot study on a mobile device with 19 non-AAC participants demonstrated motor savings in line with simulation and relatively small changes in typing speed. Lab and field testing on two eye-gaze AAC users with amyotrophic lateral sclerosis demonstrated text-entry rates 29-60% above baselines, due to significant saving of expensive keystrokes based on LLM predictions. These findings form a foundation for further exploration of LLM-assisted text entry in AAC and other user interfaces.
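The motor-saving figures above reduce to a simple ratio. A toy helper (hypothetical, not part of SpeakFaster) computing the fraction of keystrokes saved relative to a baseline entry method:

```python
def keystroke_savings(baseline_keys, actual_keys):
    """Fraction of motor actions saved versus a baseline method,
    e.g. a traditional predictive keyboard."""
    return (baseline_keys - actual_keys) / baseline_keys
```

For eye-gaze users, each avoided keystroke is an expensive dwell-based selection, which is why even modest savings translate into the text-entry-rate gains reported above.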


Subjects
Amyotrophic Lateral Sclerosis ; Communication Aids for Disabled ; Fixation, Ocular ; Humans ; Amyotrophic Lateral Sclerosis/physiopathology ; Female ; Male ; Pilot Projects ; Fixation, Ocular/physiology ; Language ; Adult ; User-Computer Interface ; Middle Aged ; Communication
4.
NPJ Digit Med ; 5(1): 45, 2022 Apr 08.
Article in English | MEDLINE | ID: mdl-35396385

ABSTRACT

Amyotrophic Lateral Sclerosis (ALS) disease severity is usually measured using the subjective, questionnaire-based revised ALS Functional Rating Scale (ALSFRS-R). Objective measures of disease severity would be powerful tools for evaluating real-world drug effectiveness, efficacy in clinical trials, and for identifying participants for cohort studies. We developed a machine learning (ML) based objective measure for ALS disease severity based on voice samples and accelerometer measurements from a four-year longitudinal dataset. 584 people living with ALS consented and carried out prescribed speaking and limb-based tasks. 542 participants contributed 5814 voice recordings, and 350 contributed 13,009 accelerometer samples, while simultaneously measuring ALSFRS-R scores. Using these data, we trained ML models to predict bulbar-related and limb-related ALSFRS-R scores. On the test set (n = 109 participants) the voice models achieved a multiclass AUC of 0.86 (95% CI, 0.85-0.88) on speech ALSFRS-R prediction, whereas the accelerometer models achieved a median multiclass AUC of 0.73 on 6 limb-related functions. The correlations across functions observed in self-reported ALSFRS-R scores were preserved in ML-derived scores. We used these models and self-reported ALSFRS-R scores to evaluate the real-world effects of edaravone, a drug approved for use in ALS. In the cohort of 54 test participants who received edaravone as part of their usual care, the ML-derived scores were consistent with the self-reported ALSFRS-R scores. At the individual level, the continuous ML-derived score can capture gradual changes that are absent in the integer ALSFRS-R scores. This demonstrates the value of these tools for assessing disease severity and, potentially, drug effects.

5.
Nat Commun ; 13(1): 1590, 2022 03 25.
Article in English | MEDLINE | ID: mdl-35338121

ABSTRACT

Drug discovery for diseases such as Parkinson's disease is impeded by the lack of screenable cellular phenotypes. We present an unbiased phenotypic profiling platform that combines automated cell culture, high-content imaging, Cell Painting, and deep learning. We applied this platform to primary fibroblasts from 91 Parkinson's disease patients and matched healthy controls, creating the largest publicly available Cell Painting image dataset to date at 48 terabytes. We use fixed weights from a convolutional deep neural network trained on ImageNet to generate deep embeddings from each image and train machine learning models to detect morphological disease phenotypes. Our platform's robustness and sensitivity allow the detection of individual-specific variation with high fidelity across batches and plate layouts. Lastly, our models confidently separate LRRK2 and sporadic Parkinson's disease lines from healthy controls (receiver operating characteristic area under curve 0.79 (0.08 standard deviation)), supporting the capacity of this platform for complex disease modeling and drug screening applications.
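The pipeline above (fixed ImageNet-trained network → deep embeddings → downstream classifier) can be caricatured with a nearest-centroid stand-in. The embeddings and labels below are toy values; the study classifies real convolutional-network activations with trained machine-learning models rather than this rule.

```python
from collections import defaultdict

# Toy stand-in for the paper's pipeline: precomputed deep embeddings
# (here, short toy vectors) are classified with a nearest-centroid rule.
def nearest_centroid_predict(embeddings, labels, query):
    """Assign `query` to the class whose mean embedding is closest."""
    groups = defaultdict(list)
    for emb, label in zip(embeddings, labels):
        groups[label].append(emb)
    # Mean embedding per class, dimension by dimension.
    centroids = {label: [sum(dim) / len(dim) for dim in zip(*embs)]
                 for label, embs in groups.items()}
    sq_dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], query))
```

Freezing the embedding network means only this small downstream step is fit to the disease data, which is what makes the approach robust across batches and plate layouts.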


Subjects
Deep Learning ; Parkinson Disease ; Fibroblasts ; Humans ; Machine Learning ; Neural Networks, Computer
6.
Lancet Digit Health ; 3(1): e10-e19, 2021 01.
Article in English | MEDLINE | ID: mdl-33735063

ABSTRACT

BACKGROUND: Diabetic retinopathy screening is instrumental to preventing blindness, but scaling up screening is challenging because of the increasing number of patients with all forms of diabetes. We aimed to create a deep-learning system to predict the risk of patients with diabetes developing diabetic retinopathy within 2 years. METHODS: We created and validated two versions of a deep-learning system to predict the development of diabetic retinopathy in patients with diabetes who had had teleretinal diabetic retinopathy screening in a primary care setting. The input for the two versions was either a set of three-field or one-field colour fundus photographs. Of the 575 431 eyes in the development set 28 899 had known outcomes, with the remaining 546 532 eyes used to augment the training process via multitask learning. Validation was done on one eye (selected at random) per patient from two datasets: an internal validation (from EyePACS, a teleretinal screening service in the USA) set of 3678 eyes with known outcomes and an external validation (from Thailand) set of 2345 eyes with known outcomes. FINDINGS: The three-field deep-learning system had an area under the receiver operating characteristic curve (AUC) of 0·79 (95% CI 0·77-0·81) in the internal validation set. Assessment of the external validation set-which contained only one-field colour fundus photographs-with the one-field deep-learning system gave an AUC of 0·70 (0·67-0·74). In the internal validation set, the AUC of available risk factors was 0·72 (0·68-0·76), which improved to 0·81 (0·77-0·84) after combining the deep-learning system with these risk factors (p<0·0001). In the external validation set, the corresponding AUC improved from 0·62 (0·58-0·66) to 0·71 (0·68-0·75; p<0·0001) following the addition of the deep-learning system to available risk factors. 
INTERPRETATION: The deep-learning systems predicted diabetic retinopathy development using colour fundus photographs, and the systems were independent of and more informative than available risk factors. Such a risk stratification tool might help to optimise screening intervals to reduce costs while improving vision-related outcomes. FUNDING: Google.
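Combining a deep-learning score with conventional risk factors, as in the AUC improvements reported above, is typically done with a simple calibrated model. A hedged sketch using a logistic combination; the weights are placeholders, not the coefficients fitted on the study's development set:

```python
import math

# Hypothetical logistic combination of a deep-learning risk score with
# clinical risk factors; weights are placeholders, not fitted values.
def combined_risk(dl_score, risk_factors, weights, bias=0.0):
    """weights[0] scales the DL score; weights[1:] scale the factors."""
    z = bias + weights[0] * dl_score
    z += sum(w * x for w, x in zip(weights[1:], risk_factors))
    return 1.0 / (1.0 + math.exp(-z))
```

Because the image-derived score and the clinical risk factors carry partly independent information, the combined model's AUC can exceed either input alone, as observed in both validation sets.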


Subjects
Deep Learning ; Diabetic Retinopathy/diagnosis ; Aged ; Area Under Curve ; Diagnostic Techniques, Ophthalmological ; Female ; Humans ; Kaplan-Meier Estimate ; Male ; Middle Aged ; Photography ; Prognosis ; ROC Curve ; Reproducibility of Results ; Risk Assessment/methods
7.
Nat Biomed Eng ; 4(2): 242, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32051580

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

8.
Nat Biomed Eng ; 4(1): 18-27, 2020 01.
Article in English | MEDLINE | ID: mdl-31873211

ABSTRACT

Owing to the invasiveness of diagnostic tests for anaemia and the costs associated with screening for it, the condition is often undetected. Here, we show that anaemia can be detected via machine-learning algorithms trained using retinal fundus images, study participant metadata (including race or ethnicity, age, sex and blood pressure) or the combination of both data types (images and study participant metadata). In a validation dataset of 11,388 study participants from the UK Biobank, the fundus-image-only, metadata-only and combined models predicted haemoglobin concentration (in g dl⁻¹) with mean absolute error values of 0.73 (95% confidence interval: 0.72-0.74), 0.67 (0.66-0.68) and 0.63 (0.62-0.64), respectively, and with areas under the receiver operating characteristic curve (AUC) values of 0.74 (0.71-0.76), 0.87 (0.85-0.89) and 0.88 (0.86-0.89), respectively. For 539 study participants with self-reported diabetes, the combined model predicted haemoglobin concentration with a mean absolute error of 0.73 (0.68-0.78) and anaemia with an AUC of 0.89 (0.85-0.93). Automated anaemia screening on the basis of fundus images could particularly aid patients with diabetes undergoing regular retinal imaging and for whom anaemia can increase morbidity and mortality risks.
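The mean absolute error values above are in g dl⁻¹ of haemoglobin. For reference, the metric itself is simply the average absolute difference between predicted and measured concentrations (toy values below, not study data):

```python
def mean_absolute_error(predicted, actual):
    """Average absolute difference between predicted and measured values,
    e.g. haemoglobin concentrations in g/dl."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
```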


Subjects
Anemia/diagnostic imaging ; Retina/diagnostic imaging ; Deep Learning ; Female ; Fundus Oculi ; Humans ; Male ; Middle Aged ; Prospective Studies ; ROC Curve
9.
Nat Commun ; 11(1): 130, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31913272

ABSTRACT

Center-involved diabetic macular edema (ci-DME) is a major cause of vision loss. Although the gold standard for diagnosis involves 3D imaging, 2D imaging by fundus photography is usually used in screening settings, resulting in high false-positive and false-negative calls. To address this, we train a deep learning model to predict ci-DME from fundus photographs, with an ROC-AUC of 0.89 (95% CI: 0.87-0.91), corresponding to 85% sensitivity at 80% specificity. In comparison, retinal specialists have similar sensitivities (82-85%), but only half the specificity (45-50%, p < 0.001). Our model can also detect the presence of intraretinal fluid (AUC: 0.81; 95% CI: 0.81-0.86) and subretinal fluid (AUC 0.88; 95% CI: 0.85-0.91). Using deep learning to make predictions from simple 2D images, without sophisticated 3D-imaging equipment and with better-than-specialist performance, has broad relevance to many other applications in medical imaging.


Subjects
Diabetic Retinopathy/diagnostic imaging ; Macular Edema/diagnostic imaging ; Aged ; Deep Learning ; Diabetic Retinopathy/genetics ; Female ; Humans ; Imaging, Three-Dimensional ; Macular Edema/genetics ; Male ; Middle Aged ; Mutation ; Photography ; Retina/diagnostic imaging ; Tomography, Optical Coherence
10.
SLAS Discov ; 24(8): 829-841, 2019 09.
Article in English | MEDLINE | ID: mdl-31284814

ABSTRACT

The etiological underpinnings of many CNS disorders are not well understood. This is likely due to the fact that individual diseases aggregate numerous pathological subtypes, each associated with a complex landscape of genetic risk factors. To overcome these challenges, researchers are integrating novel data types from numerous patients, including imaging studies capturing broadly applicable features from patient-derived materials. These datasets, when combined with machine learning, potentially hold the power to elucidate the subtle patterns that stratify patients by shared pathology. In this study, we interrogated whether high-content imaging of primary skin fibroblasts, using the Cell Painting method, could reveal disease-relevant information among patients. First, we showed that technical features such as batch/plate type, plate, and location within a plate lead to detectable nuisance signals, as revealed by a pre-trained deep neural network and analysis with deep image embeddings. Using a plate design and image acquisition strategy that accounts for these variables, we performed a pilot study with 12 healthy controls and 12 subjects affected by the severe genetic neurological disorder spinal muscular atrophy (SMA), and evaluated whether a convolutional neural network (CNN) generated using a subset of the cells could distinguish disease states on cells from the remaining unseen control-SMA pair. Our results indicate that these two populations could effectively be differentiated from one another and that model selectivity is insensitive to batch/plate type. One caveat is that the samples were also largely separated by source. These findings lay a foundation for how to conduct future studies exploring diseases with more complex genetic contributions and unknown subtypes.


Subjects
High-Throughput Screening Assays ; Machine Learning ; Molecular Imaging ; Neural Networks, Computer ; Deep Learning ; Humans ; Image Processing, Computer-Assisted
11.
IEEE Trans Pattern Anal Mach Intell ; 39(4): 677-691, 2017 04.
Article in English | MEDLINE | ID: mdl-27608449

ABSTRACT

Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.
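The "doubly deep" idea above, per-frame convolutional features fed through a learned temporal recurrence, can be sketched with a toy state update. A real LRCN uses an LSTM over CNN activations; this stand-in uses a single tanh recurrence with placeholder weights, purely to show nonlinearity entering the state update:

```python
import math

# Toy recurrent aggregation of per-frame feature vectors. The update
# h_t = tanh(w_x * x_t + w_h * h_{t-1}) stands in for the LSTM used in
# actual recurrent convolutional models; w_x and w_h are placeholders.
def recurrent_pool(frame_features, w_x=0.5, w_h=0.5):
    """Fold a sequence of feature vectors into one hidden state."""
    h = [0.0] * len(frame_features[0])
    for x in frame_features:
        h = [math.tanh(w_x * xi + w_h * hi) for xi, hi in zip(x, h)]
    return h
```

Unlike temporal averaging, the final state depends on the order of the frames, which is what lets such models capture temporal dynamics for activity recognition and video description.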
