Results 1 - 4 of 4
1.
Acad Radiol ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39142976

ABSTRACT

RATIONALE AND OBJECTIVES: Generating radiology reports is often time-consuming and labor-intensive, and reports are prone to incompleteness, heterogeneity, and errors. Using natural language processing (NLP)-based techniques, this study explores the potential of ChatGPT (Generative Pre-trained Transformer), a prominent large language model (LLM), to make radiology report generation more efficient.

MATERIALS AND METHODS: Using a sample of 1000 records from the Medical Information Mart for Intensive Care (MIMIC) Chest X-ray Database, this investigation employed Claude.ai to extract initial radiological report keywords. ChatGPT then generated radiology reports using a consistent 3-step prompt template outline. Various lexical and sentence similarity techniques were used to evaluate the correspondence between the AI-generated reports and reference reports authored by medical professionals.

RESULTS: Performance varied among NLP models: BART (Bidirectional and Auto-Regressive Transformers) and XLM (Cross-lingual Language Model) displayed high proficiency (mean similarity scores up to 99.3%), closely mirroring physician reports, whereas DeBERTa (Decoding-enhanced BERT with disentangled attention) and sequence-matching models scored lower, indicating less alignment with medical language. In the Impression section, the word-embedding model excelled with a mean similarity of 84.4%, while others, such as the Jaccard index, performed worse.

CONCLUSION: The study highlights significant variation across NLP models in their ability to generate radiology reports consistent with medical professionals' language. Pairwise comparisons and Kruskal-Wallis tests confirmed these differences, emphasizing the need for careful selection and evaluation of NLP models in radiology report generation. This research underscores the potential of ChatGPT to streamline and improve the radiology reporting process, with implications for efficiency and accuracy in clinical practice.
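Of the similarity measures named above, the Jaccard index is the simplest to illustrate: it compares the word sets of two reports. The sketch below is a minimal illustration; the two report strings are hypothetical examples, not records from the MIMIC dataset.

```python
def jaccard_similarity(report_a: str, report_b: str) -> float:
    """Jaccard index over lowercase word sets: |A & B| / |A | B|."""
    a, b = set(report_a.lower().split()), set(report_b.lower().split())
    if not a and not b:
        return 1.0  # two empty reports are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical reference vs. generated Impression text
reference = "no acute cardiopulmonary abnormality"
generated = "no acute cardiopulmonary process"
score = jaccard_similarity(reference, generated)  # 3 shared / 5 total = 0.6
```

Because it ignores word order and synonymy, the Jaccard index tends to score paraphrases low, which is consistent with its weaker performance reported above.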

2.
Comput Biol Med ; 158: 106804, 2023 05.
Article in English | MEDLINE | ID: mdl-36989740

ABSTRACT

Cardiovascular disease is one of the leading causes of mortality worldwide, responsible for millions of deaths annually. One of the most promising approaches to this problem, which has gained traction recently, is cardiac tissue engineering (CTE). Many researchers have tried developing scaffolds with different materials, cell lines, and fabrication methods to help regenerate heart tissue. Machine learning (ML) is one of the hottest topics in science and technology, revolutionizing many fields and changing how problems are approached. ML has helped resolve long-standing scientific issues, including protein folding, a challenging problem in biology that remained unsolved for 50 years; however, it has not been well explored in tissue engineering. Our group developed AI-based software called MLATE (Machine Learning Applications in Tissue Engineering) to tackle tissue engineering challenges, which otherwise depend heavily on costly and time-consuming experiments. For the first time, to the best of our knowledge, a CTE scaffold dataset was created by collecting specifications from the literature, including the different materials, cell lines, and fabrication methods commonly used in CTE scaffold development. These specifications served as the study's variables. The CTE scaffolds were then rated on a 0-3 scale based on cell behaviors on the scaffold, such as cell viability, growth, proliferation, and differentiation; these ratings were treated as a function of the variables in the gathered dataset. This study was based solely on information available in the literature. Twenty-eight ML algorithms were then applied to determine the most effective one for predicting cell behavior on CTE scaffolds fabricated from different materials, compositions, and methods. The results indicated the high performance of XGBoost, with an accuracy of 87%. By implementing ensemble learning and combining the five best-performing algorithms, an accuracy of 93% was achieved with the AdaBoost Classifier and the Voting Classifier. Finally, the open-source software developed in this study was made publicly available by publishing the best model along with a step-by-step guide at: https://github.com/saeedrafieyan/MLATE.
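The voting step described above (combining the five best-performing models) can be sketched as a hard-vote ensemble in plain Python. The per-model predictions below are hypothetical placeholders, not outputs of the actual MLATE models.

```python
from collections import Counter

def hard_vote(predictions_per_model):
    """Majority vote across models.

    predictions_per_model: one prediction list per model; each inner
    list holds one class label (e.g., a 0-3 scaffold rating) per sample.
    """
    n_samples = len(predictions_per_model[0])
    voted = []
    for i in range(n_samples):
        votes = [preds[i] for preds in predictions_per_model]
        # Most common label among the models wins
        voted.append(Counter(votes).most_common(1)[0][0])
    return voted

# Hypothetical 0-3 scaffold ratings from three models on four samples
model_preds = [
    [3, 2, 0, 1],
    [3, 1, 0, 1],
    [2, 2, 0, 3],
]
print(hard_vote(model_preds))  # [3, 2, 0, 1]
```

Hard voting is the simplest form of the idea; a weighted or soft (probability-averaging) vote, as offered by scikit-learn's VotingClassifier, is a common refinement.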


Subjects
Tissue Engineering , Tissue Scaffolds , Tissue Engineering/methods , Heart , Machine Learning , Software
3.
BMC Med Inform Decis Mak ; 23(1): 16, 2023 01 23.
Article in English | MEDLINE | ID: mdl-36691030

ABSTRACT

BACKGROUND: Detecting brain tumors in their early stages is crucial. Brain tumors are classified by biopsy, which can only be performed through definitive brain surgery. Computational intelligence-oriented techniques can help physicians identify and classify brain tumors. Here, we propose two deep learning methods and several machine learning approaches for diagnosing three tumor types (glioma, meningioma, and pituitary gland tumors), as well as healthy brains without tumors, using magnetic resonance brain images, enabling physicians to detect tumors at early stages with high accuracy.

MATERIALS AND METHODS: A dataset of 3264 Magnetic Resonance Imaging (MRI) brain images, comprising images of glioma, meningioma, pituitary gland tumors, and healthy brains, was used in this study. First, preprocessing and augmentation algorithms were applied to the MRI brain images. Next, we developed a new 2D Convolutional Neural Network (CNN) and a convolutional auto-encoder network, both trained with our chosen hyperparameters. The 2D CNN comprises several convolution layers; all layers in this hierarchical network use a 2×2 kernel. The network consists of eight convolutional and four pooling layers, with batch-normalization layers applied after every convolution layer. The modified auto-encoder network combines a convolutional auto-encoder with a convolutional classification network that takes the encoder's final output layer as its input. Six machine learning techniques applied to classify brain tumors were also compared in this study.

RESULTS: The training accuracies of the proposed 2D CNN and auto-encoder network were 96.47% and 95.63%, respectively. The average recall values for the 2D CNN and auto-encoder networks were 95% and 94%, respectively. The areas under the ROC curve for both networks were 0.99 or 1. Among the machine learning methods applied, the Multilayer Perceptron (MLP) (28%) and K-Nearest Neighbors (KNN) (86%) achieved the lowest and highest accuracy rates, respectively. Statistical tests showed a significant difference between the means of the two methods developed in this study and several machine learning methods (p-value < 0.05).

CONCLUSION: The proposed 2D CNN achieves optimal accuracy in classifying brain tumors. Comparing the performance of various CNNs and machine learning methods in diagnosing the three tumor types showed that the 2D CNN achieved exemplary performance and optimal execution time without latency. The proposed network is less complex than the auto-encoder network and can be employed by radiologists and physicians in clinical systems for brain tumor detection.
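The stated architecture (eight 2×2 convolutions and four pooling layers) can be sanity-checked by tracing how the spatial size of the feature maps shrinks layer by layer. The layout below (two convolutions per pooling stage) and the 128-pixel input side are assumptions for illustration only, since the abstract does not give the exact layer ordering or input resolution.

```python
def trace_spatial_size(input_size, stages=4, convs_per_stage=2, kernel=2, pool=2):
    """Feature-map side length through [conv x convs_per_stage, pool] x stages.

    Conv: valid padding, stride 1 -> size - kernel + 1.
    Pool: non-overlapping window  -> size // pool.
    """
    size = input_size
    for _ in range(stages):
        for _ in range(convs_per_stage):
            size = size - kernel + 1  # 2x2 conv shrinks each side by 1
        size = size // pool           # 2x2 pooling halves each side
    return size

# Hypothetical 128x128 MRI slice through 8 convs + 4 pools
print(trace_spatial_size(128))  # 6
```

A trace like this is a quick way to confirm that an input resolution survives the full stack with a usable feature-map size before the classifier head.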


Subjects
Brain Neoplasms , Deep Learning , Glioma , Meningeal Neoplasms , Meningioma , Pituitary Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Machine Learning , Magnetic Resonance Imaging/methods , Meningioma/diagnostic imaging , Pituitary Neoplasms/diagnostic imaging
4.
Aesthetic Plast Surg ; 2022 May 25.
Article in English | MEDLINE | ID: mdl-35614157

ABSTRACT

BACKGROUND: Accurate assessment of breast volume aids preoperative planning and intraoperative judgment in both cosmetic and reconstructive breast surgery. In this prospective study, a formula was derived using a machine learning algorithm (Gradient Boosted Model).

METHOD: A prospective study was performed on 39 female-to-male transgender patients. Bilateral mastectomy was performed on all patients. Preoperative anthropometric measurements were taken on the 78 breasts of these patients. Breast weight was measured postoperatively with a digital scale, and breast volume was measured with a calibrated container (water displacement technique). The authors built a model based on Python's CatBoostClassifier. Finally, an Android application was built for ease of real-time use.

RESULTS: Eight anthropometric measurements were collected preoperatively as independent variables. Breast vertical perimeter at the lower half, upper pole, sternal notch to nipple, and nipple to IMF correlated most strongly with volume and weight. Based on the machine learning model, the following formula was established: Breast volume = (breast width) × 24.69 + (nipple to IMF) × 49.03 - (sternal notch to nipple) × 1.34 + (anterior axillary line to medial border) × 6.57 - (upper pole) × 1.27 - (chest perimeter IMF) × 5.63 + (chest perimeter nipple) × 10.40 + (breast vertical perimeter at lower half) × 9.20 - 1133.74. The R² of the model is 0.93, and the RMSE is 62.4.

CONCLUSION: Our formula is an accurate method for preoperative breast volume assessment. We built an Android app (Breast Volume Predictor), available as a free download on the Google Play Store, for real-time use of the resulting formula.

LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
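The published regression formula transcribes directly into code. The function below is a straight implementation of the coefficients given in the abstract; the parameter names are descriptive labels chosen here, and the measurement units are not stated in the abstract, so treat them as assumptions.

```python
def breast_volume(breast_width, nipple_to_imf, sternal_notch_to_nipple,
                  axillary_to_medial, upper_pole, chest_perimeter_imf,
                  chest_perimeter_nipple, vertical_perimeter_lower_half):
    """Breast volume estimate from the abstract's published formula
    (linear combination of eight anthropometric measurements)."""
    return (breast_width * 24.69
            + nipple_to_imf * 49.03
            - sternal_notch_to_nipple * 1.34
            + axillary_to_medial * 6.57
            - upper_pole * 1.27
            - chest_perimeter_imf * 5.63
            + chest_perimeter_nipple * 10.40
            + vertical_perimeter_lower_half * 9.20
            - 1133.74)
```

Note that with small measurements the linear formula goes negative (the intercept is -1133.74), so it is only meaningful within the anthropometric range of the study cohort.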
