Results 1 - 14 of 14
1.
Res Sq ; 2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37461545

ABSTRACT

Pathology reports are considered the gold standard in medical research due to their comprehensive and accurate diagnostic information. Natural language processing (NLP) techniques have been developed to automate information extraction from pathology reports. However, existing studies suffer from two significant limitations. First, they typically frame their tasks as report classification, which restricts the granularity of extracted information. Second, they often fail to generalize to unseen reports due to variations in language, negation, and human error. To overcome these challenges, we propose a BERT (bidirectional encoder representations from transformers) named entity recognition (NER) system to extract key diagnostic elements from pathology reports. We also introduce four data augmentation methods to improve the robustness of our model. Trained and evaluated on 1438 annotated breast pathology reports, acquired from a large medical center in the United States, our BERT model trained with data augmentation achieves an entity F1-score of 0.916 on an internal test set, surpassing the BERT baseline (0.843). We further assessed the model's generalizability using an external validation dataset from the United Arab Emirates, where our model maintained satisfactory performance (F1-score 0.860). Our findings demonstrate that our NER systems can effectively extract fine-grained information from widely diverse medical reports, offering the potential for large-scale information extraction in a wide range of medical and AI research. We publish our code at https://github.com/nyukat/pathology_extraction.
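The abstract does not spell out the four augmentation methods, so as a purely illustrative sketch, here is one common NER augmentation technique, label-wise token replacement: a token is swapped for another token observed elsewhere in training with the same entity label, which keeps the label sequence valid. All names here (`pools`, the labels) are hypothetical, not from the paper.

```python
import random

def label_wise_token_replacement(tokens, labels, pools, p=0.3, seed=0):
    """Replace each token, with probability p, by another token seen in
    training with the same entity label; the label sequence is unchanged,
    so the augmented sentence remains validly annotated."""
    rng = random.Random(seed)
    augmented = []
    for tok, lab in zip(tokens, labels):
        pool = pools.get(lab, [])
        if pool and rng.random() < p:
            augmented.append(rng.choice(pool))
        else:
            augmented.append(tok)
    return augmented, list(labels)
```

Here `pools` (a mapping from each entity label to tokens observed with that label) stands in for statistics that would be gathered from the annotated training reports.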

3.
J Digit Imaging ; 34(6): 1414-1423, 2021 12.
Article in English | MEDLINE | ID: mdl-34731338

ABSTRACT

Breast cancer is the most common cancer in women, and hundreds of thousands of unnecessary biopsies are done around the world at a tremendous cost. It is crucial to reduce the rate of biopsies that turn out to be benign tissue. In this study, we build deep neural networks (DNNs) to classify biopsied lesions as being either malignant or benign, with the goal of using these networks as second readers serving radiologists to further reduce the number of false-positive findings. We enhance the performance of DNNs that are trained to learn from small image patches by integrating global context provided in the form of saliency maps learned from the entire image into their reasoning, similar to how radiologists consider global context when evaluating areas of interest. Our experiments are conducted on a dataset of 229,426 screening mammography examinations from 141,473 patients. We achieve an AUC of 0.8 on a test set consisting of 464 benign and 136 malignant lesions.


Subjects
Breast Neoplasms , Mammography , Biopsy , Breast Neoplasms/diagnostic imaging , Early Detection of Cancer , Female , Humans , Neural Networks, Computer
4.
Nat Commun ; 12(1): 5645, 2021 09 24.
Article in English | MEDLINE | ID: mdl-34561440

ABSTRACT

Though consistently shown to detect mammographically occult cancers, breast ultrasound has been noted to have high false-positive rates. In this work, we present an AI system that achieves radiologist-level accuracy in identifying breast cancer in ultrasound images. Developed on 288,767 exams, consisting of 5,442,907 B-mode and Color Doppler images, the AI achieves an area under the receiver operating characteristic curve (AUROC) of 0.976 on a test set consisting of 44,755 exams. In a retrospective reader study, the AI achieves a higher AUROC than the average of ten board-certified breast radiologists (AUROC: 0.962 AI, 0.924 ± 0.02 radiologists). With the help of the AI, radiologists decrease their false positive rates by 37.3% and reduce requested biopsies by 27.8%, while maintaining the same level of sensitivity. This highlights the potential of AI in improving the accuracy, consistency, and efficiency of breast ultrasound diagnosis.
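The AUROC reported throughout these studies has a direct interpretation: the probability that a randomly chosen positive exam is scored above a randomly chosen negative one (ties counting half). A minimal sketch of that rank-based computation — not the authors' evaluation code:

```python
def auroc(scores, labels):
    """AUROC as the probability that a random positive outranks a random
    negative; ties contribute 0.5. Labels are 1 (positive) / 0 (negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranker scores 1.0; a coin flip scores about 0.5, which is why values such as 0.976 indicate strong separation of malignant from benign exams.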


Subjects
Algorithms , Artificial Intelligence , Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Early Detection of Cancer , Ultrasonography/methods , Adult , Aged , Breast Neoplasms/diagnosis , Female , Humans , Mammography/methods , Middle Aged , ROC Curve , Radiologists/statistics & numerical data , Reproducibility of Results , Retrospective Studies
5.
NPJ Digit Med ; 4(1): 80, 2021 May 12.
Article in English | MEDLINE | ID: mdl-33980980

ABSTRACT

During the coronavirus disease 2019 (COVID-19) pandemic, rapid and accurate triage of patients at the emergency department is critical to inform decision-making. We propose a data-driven approach for automatic prediction of deterioration risk using a deep neural network that learns from chest X-ray images and a gradient boosting model that learns from routine clinical variables. Our AI prognosis system, trained using data from 3661 patients, achieves an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI: 0.745-0.830) when predicting deterioration within 96 hours. The deep neural network extracts informative areas of chest X-ray images to assist clinicians in interpreting the predictions and performs comparably to two radiologists in a reader study. In order to verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, which produced accurate predictions in real-time. In summary, our findings demonstrate the potential of the proposed system for assisting front-line physicians in the triage of COVID-19 patients.

6.
Proc Mach Learn Res ; 143: 268-285, 2021 Jul.
Article in English | MEDLINE | ID: mdl-35088055

ABSTRACT

In the last few years, deep learning classifiers have shown promising results in image-based medical diagnosis. However, interpreting the outputs of these models remains a challenge. In cancer diagnosis, interpretability can be achieved by localizing the region of the input image responsible for the output, i.e. the location of a lesion. Alternatively, segmentation or detection models can be trained with pixel-wise annotations indicating the locations of malignant lesions. Unfortunately, acquiring such labels is labor-intensive and requires medical expertise. To overcome this difficulty, weakly-supervised localization can be utilized. These methods allow neural network classifiers to output saliency maps highlighting the regions of the input most relevant to the classification task (e.g. malignant lesions in mammograms) using only image-level labels (e.g. whether the patient has cancer or not) during training. When applied to high-resolution images, existing methods produce low-resolution saliency maps. This is problematic in applications in which suspicious lesions are small in relation to the image size. In this work, we introduce a novel neural network architecture to perform weakly-supervised segmentation of high-resolution images. The proposed model selects regions of interest via coarse-level localization, and then performs fine-grained segmentation of those regions. We apply this model to breast cancer diagnosis with screening mammography, and validate it on a large clinically-realistic dataset. Measured by Dice similarity score, our approach outperforms existing methods by a large margin in terms of localization performance of benign and malignant lesions, relatively improving the performance by 39.6% and 20.0%, respectively. Code and the weights of some of the models are available at https://github.com/nyukat/GLAM.
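The Dice similarity score used above to measure localization performance compares a predicted mask against the ground-truth annotation. A minimal version over flat binary masks (a generic sketch, not the GLAM evaluation code):

```python
def dice_score(pred, target):
    """Dice similarity 2|A∩B| / (|A| + |B|) for binary masks given as
    flat 0/1 sequences; conventionally 1.0 when both masks are empty."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0
```

In practice the masks are 2-D arrays flattened per image, and scores are averaged over the test set.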

7.
Med Image Anal ; 68: 101908, 2021 02.
Article in English | MEDLINE | ID: mdl-33383334

ABSTRACT

Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11.


Subjects
Breast Neoplasms , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Early Detection of Cancer , Female , Humans , Mammography , Neural Networks, Computer
8.
ArXiv ; 2020 Nov 04.
Article in English | MEDLINE | ID: mdl-32793769

ABSTRACT

During the coronavirus disease 2019 (COVID-19) pandemic, rapid and accurate triage of patients at the emergency department is critical to inform decision-making. We propose a data-driven approach for automatic prediction of deterioration risk using a deep neural network that learns from chest X-ray images and a gradient boosting model that learns from routine clinical variables. Our AI prognosis system, trained using data from 3,661 patients, achieves an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI: 0.745-0.830) when predicting deterioration within 96 hours. The deep neural network extracts informative areas of chest X-ray images to assist clinicians in interpreting the predictions and performs comparably to two radiologists in a reader study. In order to verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, which produced accurate predictions in real-time. In summary, our findings demonstrate the potential of the proposed system for assisting front-line physicians in the triage of COVID-19 patients.

9.
Radiology ; 296(3): 584-593, 2020 09.
Article in English | MEDLINE | ID: mdl-32573386

ABSTRACT

Background The methods for assessing knee osteoarthritis (OA) do not provide enough comprehensive information to make robust and accurate outcome predictions. Purpose To develop a deep learning (DL) prediction model for risk of OA progression by using knee radiographs in patients who underwent total knee replacement (TKR) and matched control patients who did not undergo TKR. Materials and Methods In this retrospective analysis that used data from the OA Initiative, a DL model on knee radiographs was developed to predict both the likelihood of a patient undergoing TKR within 9 years and Kellgren-Lawrence (KL) grade. Study participants comprised a case-control matched subcohort aged 45 to 79 years. Patients were matched to control patients according to age, sex, ethnicity, and body mass index. The proposed model used a transfer learning approach based on the ResNet34 architecture with sevenfold nested cross-validation. Receiver operating characteristic curve analysis and conditional logistic regression assessed model performance for predicting probability and risk of TKR compared with clinical observations and two binary outcome prediction models on the basis of radiographic readings: KL grade and OA Research Society International (OARSI) grade. Results Evaluated were 728 participants including 324 patients (mean age, 64 years ± 8 [standard deviation]; 222 women) and 324 control patients (mean age, 64 years ± 8; 222 women). The prediction model based on DL achieved an area under the receiver operating characteristic curve (AUC) of 0.87 (95% confidence interval [CI]: 0.85, 0.90), outperforming a baseline prediction model by using KL grade with an AUC of 0.74 (95% CI: 0.71, 0.77; P < .001). The risk for TKR increased with probability that a person will undergo TKR from the DL model (odds ratio [OR], 7.7; 95% CI: 2.3, 25; P < .001), KL grade (OR, 1.92; 95% CI: 1.17, 3.13; P = .009), and OARSI grade (OR, 1.20; 95% CI: 0.41, 3.50; P = .73).
Conclusion The proposed deep learning model better predicted risk of total knee replacement in osteoarthritis than did binary outcome models by using standard grading systems. © RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Richardson in this issue.


Subjects
Arthroplasty, Replacement, Knee/statistics & numerical data , Deep Learning , Knee Joint/diagnostic imaging , Osteoarthritis, Knee/diagnostic imaging , Aged , Female , Humans , Image Interpretation, Computer-Assisted , Knee Joint/surgery , Male , Middle Aged , Osteoarthritis, Knee/epidemiology , Osteoarthritis, Knee/surgery , Radiography , Retrospective Studies , Risk Factors
10.
JAMA Netw Open ; 3(3): e200265, 2020 03 02.
Article in English | MEDLINE | ID: mdl-32119094

ABSTRACT

Importance: Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives. Objective: To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms. Design, Setting, and Participants: In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016. Main Outcomes and Measurements: Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2) and output a score that translated to cancer yes/no within 12 months. Algorithm accuracy for breast cancer detection was evaluated using area under the curve and algorithm specificity compared with radiologists' specificity with radiologists' sensitivity set at 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessment was developed and evaluated. Results: Overall, 144 231 screening mammograms from 85 580 US women (952 cancer positive ≤12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166 578 examinations from 68 008 Swedish women (780 cancer positive). The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden) and 66.2% (United States) and 81.2% (Sweden) specificity at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). 
Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity. Conclusions and Relevance: While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine learning methods for enhancing mammography screening interpretation.


Subjects
Breast Neoplasms/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Mammography/methods , Radiologists , Adult , Aged , Algorithms , Artificial Intelligence , Early Detection of Cancer , Female , Humans , Middle Aged , Radiology , Sensitivity and Specificity , Sweden , United States
11.
IEEE Trans Med Imaging ; 39(4): 1184-1194, 2020 04.
Article in English | MEDLINE | ID: mdl-31603772

ABSTRACT

We present a deep convolutional neural network for breast cancer screening exam classification, trained, and evaluated on over 200000 exams (over 1000000 images). Our network achieves an AUC of 0.895 in predicting the presence of cancer in the breast, when tested on the screening population. We attribute the high accuracy to a few technical advances. 1) Our network's novel two-stage architecture and training procedure, which allows us to use a high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels. 2) A custom ResNet-based network used as a building block of our model, whose balance of depth and width is optimized for high-resolution medical images. 3) Pretraining the network on screening BI-RADS classification, a related task with more noisy labels. 4) Combining multiple input views in an optimal way among a number of possible choices. To validate our model, we conducted a reader study with 14 readers, each reading 720 screening mammogram exams, and show that our model is as accurate as experienced radiologists when presented with the same data. We also show that a hybrid model, averaging the probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately. To further understand our results, we conduct a thorough analysis of our network's performance on different subpopulations of the screening population, the model's design, training procedure, errors, and properties of its internal representations. Our best models are publicly available at https://github.com/nyukat/breast_cancer_classifier.
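The hybrid model described above averages two probability estimates per exam. Schematically (equal weighting is an assumption for illustration — the exact combination used in the paper is in its released code):

```python
def hybrid_malignancy_probability(radiologist_prob, model_prob, weight=0.5):
    """Combine a radiologist's estimated probability of malignancy with
    the network's prediction via a weighted average (0.5 = plain mean)."""
    return weight * radiologist_prob + (1.0 - weight) * model_prob
```

Averaging two imperfectly correlated predictors tends to reduce variance, which is one plausible reason the hybrid outperforms either predictor alone.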


Subjects
Breast Neoplasms/diagnostic imaging , Deep Learning , Early Detection of Cancer/methods , Image Interpretation, Computer-Assisted/methods , Mammography/methods , Breast/diagnostic imaging , Female , Humans , Radiologists
12.
Environ Sci Pollut Res Int ; 26(24): 24609-24619, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31236858

ABSTRACT

Phthalates (PAEs) in drinking water sources such as the Yangtze River in developing countries have aroused widespread concern. Water, suspended particulate matter (SPM), and sediment samples were collected from 15 sites in wet and dry seasons in Zhenjiang for the determination of six PAEs (DMP, DEP, DIBP, DBP, DEHP, and DOP) using solid-phase extraction (SPE) or ultrasonic extraction coupled with gas chromatography-mass spectrometry (GC-MS). The total concentrations of the six PAEs (Σ6PAEs) spanned 2.65-39.31 µg/L in water, 1.97-34.10 µg/g in SPM, and 0.93-34.70 µg/g in sediment. The partition coefficients (Kd1) of PAEs between the water and SPM phases ranged from 0.004 to 3.36 L/g in the wet season and from 0.12 to 2.84 L/g in the dry season; Kd2, between the water and sediment phases, was 0.001-9.75 L/g in the wet season and 0.006-8.05 L/g in the dry season. The dominant PAEs were DIBP, DBP, and DEHP in water and SPM, and DIBP, DEHP, and DOP in sediment. The concentration of DBP in water exceeded the China Surface Water Standard. The discharge of domestic sewage and industrial wastewater might be the main potential sources of PAEs. A risk assessment using the risk quotient (RQ) method revealed that DBP (0.01 < RQ < 1) posed a medium risk, while DIBP and DEHP (RQ > 1) posed a high environmental risk in water; DIBP (RQ > 1) also showed a high risk in sediment.
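The partition coefficients and risk quotients above follow standard definitions, which can be sketched as follows (function and variable names are illustrative; the RQ cutoffs are those quoted in the abstract):

```python
def partition_coefficient(c_particulate_ug_per_g, c_water_ug_per_L):
    """Kd (L/g): concentration sorbed to SPM or sediment (µg/g) divided
    by the dissolved concentration in water (µg/L)."""
    return c_particulate_ug_per_g / c_water_ug_per_L

def risk_category(mec, pnec):
    """Risk quotient RQ = MEC / PNEC (measured environmental concentration
    over predicted no-effect concentration), bucketed with the abstract's
    cutoffs: RQ > 1 high, 0.01 < RQ <= 1 medium, otherwise low."""
    rq = mec / pnec
    if rq > 1:
        return "high"
    if rq > 0.01:
        return "medium"
    return "low"
```

A higher Kd means the compound preferentially binds to particulates rather than staying dissolved, which is why the dominant congeners differ between the water and sediment phases.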


Subjects
Esters/analysis , Phthalic Acids , Rivers/chemistry , Wastewater/analysis , China , Cities , Esters/chemistry , Particulate Matter/analysis , Risk Assessment , Seasons , Sewage/analysis , Wastewater/chemistry
13.
J Cell Physiol ; 234(2): 1088-1098, 2019 02.
Article in English | MEDLINE | ID: mdl-30203485

ABSTRACT

Bovine mammary epithelial cells (MAC-Ts) are a common cell line for the study of mammary epithelial inflammation; these cells are used to mechanistically elucidate molecular underpinnings that contribute to bovine mastitis. Bovine mastitis is the most prevalent disease in dairy cattle, culminating in annual losses of two billion dollars for the US dairy industry, so there is an urgent need for improved therapeutic strategies. Histone deacetylase (HDAC) inhibitors are efficacious in rodent models of inflammation, yet their role in bovine mammary cells remains unclear. HDACs have traditionally been studied in the regulation of nucleosomal DNA, in which deacetylation of histones impacts chromatin accessibility and gene expression. Using MAC-T cells stimulated with tumor necrosis factor α (TNF-α) as a model for mammary cell inflammation, we report that inhibition of HDACs 1 and 2 (HDAC1/2) attenuated TNF-α-mediated inflammatory gene expression. Of note, HDAC1/2-mediated inflammatory gene expression was partly regulated by c-Jun N-terminal kinase (JNK) and extracellular signal-regulated kinase (ERK) phosphorylation: HDAC1/2 inhibition attenuated JNK and ERK activation and thus inflammatory gene expression. These data suggest that HDACs 1 and 2 regulate inflammatory gene expression via canonical (i.e., gene expression) and noncanonical (e.g., signaling-dependent) mechanisms. Although further studies using primary cell lines and animal models are needed, our combined data suggest that HDAC1/2-specific inhibitors may prove efficacious for the treatment of bovine mastitis.


Subjects
Epithelial Cells/drug effects , Extracellular Signal-Regulated MAP Kinases/metabolism , Histone Deacetylase 1/metabolism , Histone Deacetylase 2/metabolism , JNK Mitogen-Activated Protein Kinases/metabolism , Mammary Glands, Animal/drug effects , Tumor Necrosis Factor-alpha/pharmacology , Animals , Anti-Inflammatory Agents/therapeutic use , Cattle , Cell Line , Epithelial Cells/enzymology , Female , Gene Expression Regulation , Histone Deacetylase 1/antagonists & inhibitors , Histone Deacetylase 2/antagonists & inhibitors , Histone Deacetylase Inhibitors/therapeutic use , Mammary Glands, Animal/enzymology , Mastitis, Bovine/drug therapy , Mastitis, Bovine/enzymology , Phosphorylation , Signal Transduction
14.
Mach Learn Med Imaging ; 11861: 18-26, 2019 Oct.
Article in English | MEDLINE | ID: mdl-32149282

ABSTRACT

Deep learning models designed for visual classification tasks on natural images have become prevalent in medical image analysis. However, medical images differ from typical natural images in many ways, such as significantly higher resolutions and smaller regions of interest. Moreover, both the global structure and local details play important roles in medical image analysis tasks. To address these unique properties of medical images, we propose a neural network that is able to classify breast cancer lesions utilizing information from both a global saliency map and multiple local patches. The proposed model outperforms the ResNet-based baseline and achieves radiologist-level performance in the interpretation of screening mammography. Although our model is trained only with image-level labels, it is able to generate pixel-level saliency maps that provide localization of possible malignant findings.
