Results 1 - 20 of 89
1.
Front Med (Lausanne) ; 11: 1390634, 2024.
Article in English | MEDLINE | ID: mdl-39091290

ABSTRACT

In the relentless pursuit of precision medicine, the intersection of cutting-edge technology and healthcare has given rise to a transformative era. At the forefront of this revolution stands the burgeoning field of wearable and implantable biosensors, promising a paradigm shift in how we monitor, analyze, and tailor medical interventions. As these miniature marvels seamlessly integrate with the human body, they weave a tapestry of real-time health data, offering unprecedented insights into individual physiological landscapes. This review embarks on a journey into the realm of wearable and implantable biosensors, where the convergence of biology and technology heralds a new dawn in personalized healthcare. Here, we explore the intricate web of innovations, challenges, and the immense potential these bioelectronic sentinels hold in sculpting the future of precision medicine.

2.
Article in English | MEDLINE | ID: mdl-38967074

ABSTRACT

Viral diseases have always been a threat to mankind throughout history, and many people have lost their lives to epidemics of these diseases. In recent years, despite the progress of science, we are still witnessing pandemics of dangerous diseases such as COVID-19 all over the world, which can be a warning for humanity. Ferula is a genus of flowering plants commonly found in Central Asia, and its species have shown antiviral activity against a variety of viruses, including respiratory syncytial virus, Herpes simplex virus type 1, influenza, human immunodeficiency virus, hepatitis B, and coronaviruses. In this study, we intend to review the antiviral effects of Ferula plants, emphasizing the therapeutic potential of these plants in the treatment of COVID-19. Google, PubMed, Web of Science, and Scopus databases were searched to review the relevant literature on the antiviral effect of Ferula or its isolated compounds. The search was performed using the keywords Ferula, antiviral, Coronaviruses, respiratory syncytial virus, Herpes simplex virus type 1, influenza, human immunodeficiency virus, and hepatitis B. According to the reviewed articles and available scientific evidence, plants of this genus have strong antiviral effects. Clinical studies have also shown that some species, such as Ferula assa-foetida, can be used effectively in the treatment of COVID-19. Ferula plants have inhibitory effects on various viruses, making them an attractive alternative to conventional antiviral agents. Therefore, these plants are a natural source of valuable compounds that can help us fight infectious diseases.

3.
Int Endod J ; 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39056554

ABSTRACT

The integration of artificial intelligence (AI) in healthcare has seen significant advancements, particularly in areas requiring image interpretation. Endodontics, a specialty within dentistry, stands to benefit immensely from AI applications, especially in interpreting radiographic images. However, there is a knowledge gap among endodontists regarding the fundamentals of machine learning and deep learning, hindering the full utilization of AI in this field. This narrative review aims to: (A) elaborate on the basic principles of machine learning and deep learning and present the basics of neural network architectures; (B) explain the workflow for developing AI solutions, from data collection through clinical integration; (C) discuss specific AI tasks and applications relevant to endodontic diagnosis and treatment. The article shows that AI offers diverse practical applications in endodontics. Computer vision methods help analyse images while natural language processing extracts insights from text. With robust validation, these techniques can enhance diagnosis, treatment planning, education, and patient care. In conclusion, AI holds significant potential to benefit endodontic research, practice, and education. Successful integration requires an evolving partnership between clinicians, computer scientists, and industry.

4.
Int Endod J ; 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075670

ABSTRACT

Artificial intelligence (AI) is emerging as a transformative technology in healthcare, including endodontics. A gap in knowledge exists in understanding AI's applications and limitations among endodontic experts. This comprehensive review aims to (A) elaborate on technical and ethical aspects of using data to implement AI models in endodontics; (B) elaborate on evaluation metrics; (C) review the current applications of AI in endodontics; and (D) review the limitations and barriers to real-world implementation of AI in the field of endodontics and its future potential and directions. The article shows that AI techniques have been applied in endodontics for critical tasks such as detection of radiolucent lesions, analysis of root canal morphology, prediction of treatment outcome and post-operative pain, and more. Deep learning models like convolutional neural networks demonstrate high accuracy in these applications. However, challenges remain regarding model interpretability, generalizability, and adoption into clinical practice. When thoughtfully implemented, AI has great potential to aid with diagnostics, treatment planning, clinical interventions, and education in the field of endodontics. However, concerted efforts are still needed to address limitations and to facilitate integration into clinical workflows.

5.
BMC Oral Health ; 24(1): 574, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38760686

ABSTRACT

BACKGROUND: To develop and validate a deep learning model for automated assessment of endodontic case difficulty from periapical radiographs. METHODS: A dataset of 1,386 periapical radiographs was compiled from two clinical sites. Two dentists and two endodontists annotated the radiographs for difficulty using the "simple assessment" criteria from the American Association of Endodontists' case difficulty assessment form in the Endocase application. A classification task labeled cases as "easy" or "hard", while regression predicted overall difficulty scores. Convolutional neural networks (i.e. VGG16, ResNet18, ResNet50, ResNext50, and Inception v2) were used, with a baseline model trained via transfer learning from ImageNet weights. The other models were pre-trained using self-supervised contrastive learning (i.e. BYOL, SimCLR, MoCo, and DINO) on 20,295 unlabeled dental radiographs to learn representations without manual labels. Both the baseline and self-supervised models were evaluated using 10-fold cross-validation, with performance compared to seven human examiners (three general dentists and four endodontists) on a hold-out test set. RESULTS: The baseline VGG16 model attained 87.62% accuracy in classifying difficulty. Self-supervised pretraining did not improve performance. Regression predicted scores with a mean error of ±3.21 points. All models outperformed the human raters, whose inter-examiner reliability was poor. CONCLUSION: This pilot study demonstrated the feasibility of automated endodontic difficulty assessment via deep learning models.
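The evaluation pipeline described here (10-fold cross-validation, classification accuracy, and a score-error metric for the regression task) can be sketched with a few plain-Python helpers. The fold-splitting scheme and the per-fold accuracies below are illustrative assumptions for demonstration only, not the study's actual data or code.

```python
import statistics

def kfold_indices(n, k=10):
    """Split n sample indices into k roughly equal, contiguous folds.
    (Real pipelines usually shuffle or stratify first.)"""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def accuracy(y_true, y_pred):
    """Fraction of 'easy'/'hard' labels predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_abs_error(y_true, y_pred):
    """Mean absolute error for predicted difficulty scores."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical per-fold accuracies, summarized as mean +/- SD the way
# 10-fold cross-validation results are typically reported.
fold_acc = [0.88, 0.85, 0.90, 0.87, 0.86, 0.89, 0.88, 0.87, 0.85, 0.91]
summary = f"{statistics.mean(fold_acc):.4f} +/- {statistics.stdev(fold_acc):.4f}"
```

The same helpers apply to both tasks: `accuracy` for the binary difficulty labels and `mean_abs_error` for the regressed difficulty score.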


Subject(s)
Deep Learning , Humans , Pilot Projects , Radiography, Dental , Neural Networks, Computer
6.
J Oral Rehabil ; 51(8): 1632-1644, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38757865

ABSTRACT

BACKGROUND AND OBJECTIVE: The accurate diagnosis of temporomandibular disorders continues to be a challenge, despite the existence of internationally agreed-upon diagnostic criteria. The purpose of this study is to review applications of deep learning models in the diagnosis of temporomandibular joint arthropathies. MATERIALS AND METHODS: An electronic search was conducted on PubMed, Scopus, Embase, Google Scholar, IEEE, arXiv, and medRxiv up to June 2023. Studies that reported the efficacy (outcome) of prediction, object detection or classification of TMJ arthropathies by deep learning models (intervention) of human joint-based or arthrogenous TMDs (population) in comparison to a reference standard (comparison) were included. To evaluate the risk of bias, included studies were critically analysed using the quality assessment of diagnostic accuracy studies (QUADAS-2). Diagnostic odds ratios (DOR) were calculated. Forest plots and funnel plots were created using STATA 17 and MetaDiSc. RESULTS: Full-text review was performed on 46 out of the 1056 identified studies, and 21 studies met the eligibility criteria and were included in the systematic review. Four studies were graded as having a low risk of bias for all domains of QUADAS-2. The accuracy of all included studies ranged from 74% to 100%. Sensitivity ranged from 54% to 100%, specificity: 85%-100%, Dice coefficient: 85%-98%, and AUC: 77%-99%. The datasets were then pooled based on the sensitivity, specificity, and dataset size of seven studies that qualified for meta-analysis. The pooled sensitivity was 95% (85%-99%), specificity: 92% (86%-96%), and AUC: 97% (96%-98%). DORs were 232 (74-729). According to Deeks' funnel plot and statistical evaluation (p = .49), publication bias was not present. CONCLUSION: Deep learning models can detect TMJ arthropathies with high sensitivity and specificity.
Clinicians, and especially those not specialized in orofacial pain, may benefit from this methodology for assessing TMD as it facilitates a rigorous and evidence-based framework, objective measurements, and advanced analysis techniques, ultimately enhancing diagnostic accuracy.
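A diagnostic odds ratio of the kind pooled in this meta-analysis can be derived directly from sensitivity and specificity: it equals the positive likelihood ratio divided by the negative likelihood ratio. The sketch below plugs in the review's pooled values; note that a study-level pooled DOR (232 here) generally differs somewhat from the DOR implied by pooled sensitivity and specificity, because the pooling is done across studies rather than on the pooled rates.

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = LR+ / LR-, where
    LR+ = sensitivity / (1 - specificity) and
    LR- = (1 - sensitivity) / specificity."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos / lr_neg

# Pooled values reported in the review: sensitivity 95%, specificity 92%.
dor = diagnostic_odds_ratio(0.95, 0.92)  # ~218.5
```

A DOR of 1 means the test carries no diagnostic information; values in the hundreds, as reported here, indicate strong discrimination.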


Subject(s)
Deep Learning , Temporomandibular Joint Disorders , Humans , Temporomandibular Joint Disorders/diagnosis , Sensitivity and Specificity
7.
Article in English | MEDLINE | ID: mdl-38570273

ABSTRACT

OBJECTIVES: This study aims to evaluate the correctness of the answers generated by Google Bard, GPT-3.5, GPT-4, Claude-Instant, and Bing chatbots to decision-making clinical questions in the oral and maxillofacial surgery (OMFS) area. STUDY DESIGN: A group of 3 board-certified oral and maxillofacial surgeons designed a questionnaire with 50 case-based questions in multiple-choice and open-ended formats. Answers of chatbots to multiple-choice questions were examined against the chosen option by 3 referees. The chatbots' answers to the open-ended questions were evaluated based on the modified global quality scale. A P-value under .05 was considered significant. RESULTS: Bard, GPT-3.5, GPT-4, Claude-Instant, and Bing answered 34%, 36%, 38%, 38%, and 26% of the questions correctly, respectively. In open-ended questions, GPT-4 scored the most answers evaluated as grades "4" or "5," and Bing scored the most answers evaluated as grades "1" or "2." There were no statistically significant differences between the 5 chatbots in responding to the open-ended (P = .275) and multiple-choice (P = .699) questions. CONCLUSION: Considering the major inaccuracies in the responses of chatbots, despite their relatively good performance in answering open-ended questions, this technology cannot yet be trusted as a consultant for clinicians in decision-making situations.


Subject(s)
Artificial Intelligence , Clinical Decision-Making , Humans , Surveys and Questionnaires , Surgery, Oral , Internet
8.
Pediatr Dent ; 46(1): 27-35, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38449036

ABSTRACT

Purpose: To systematically evaluate artificial intelligence applications for diagnostic and treatment planning possibilities in pediatric dentistry. Methods: PubMed®, EMBASE®, Scopus, Web of Science™, IEEE, medRxiv, arXiv, and Google Scholar were searched using specific search queries. The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) checklist was used to assess the risk of bias of the included studies. Results: Based on the initial screening, 33 eligible studies were included (among 3,542). Eleven studies appeared to have low bias risk across all QUADAS-2 domains. Most applications focused on early childhood caries diagnosis and prediction, tooth identification, oral health evaluation, and supernumerary tooth identification. Six studies evaluated AI tools for mesiodens or supernumerary tooth identification on radiographs, four for primary tooth identification and/or numbering, seven to detect caries on radiographs, and 12 to predict early childhood caries. For these four tasks, the reported accuracy of AI varied from 60 percent to 99 percent, sensitivity was from 20 percent to 100 percent, specificity was from 49 percent to 100 percent, F1-score was from 60 percent to 97 percent, and the area-under-the-curve varied from 87 percent to 100 percent. Conclusions: The overall body of evidence regarding artificial intelligence applications in pediatric dentistry does not allow for firm conclusions. For a wide range of applications, AI shows promising accuracy. Future studies should focus on a comparison of AI against the standard of care and employ a set of standardized outcomes and metrics to allow comparison across studies.


Subject(s)
Artificial Intelligence , Pediatric Dentistry , Child , Child, Preschool , Humans , Dental Caries/diagnostic imaging , Dental Caries/therapy , Oral Health , Tooth, Supernumerary
9.
J Dent ; 144: 104938, 2024 05.
Article in English | MEDLINE | ID: mdl-38499280

ABSTRACT

OBJECTIVES: Artificial Intelligence has applications such as Large Language Models (LLMs), which simulate human-like conversations. The potential of LLMs in healthcare is not fully evaluated. This pilot study assessed the accuracy and consistency of chatbots and clinicians in answering common questions in pediatric dentistry. METHODS: Two expert pediatric dentists developed thirty true or false questions involving different aspects of pediatric dentistry. Publicly accessible chatbots (Google Bard, ChatGPT4, ChatGPT 3.5, Llama, Sage, Claude 2 100k, Claude-instant, Claude-instant-100k, and Google Palm) were employed to answer the questions (3 independent new conversations). Three groups of clinicians (general dentists, pediatric specialists, and students; n = 20/group) also answered. Responses were graded by two pediatric dentistry faculty members, along with a third independent pediatric dentist. Resulting accuracies (percentage of correct responses) were compared using analysis of variance (ANOVA), and post-hoc pairwise group comparisons were corrected using Tukey's HSD method. Cronbach's alpha was calculated to determine consistency. RESULTS: Pediatric dentists were significantly more accurate (mean ± SD: 96.67% ± 4.3%) than other clinicians and chatbots (p < 0.001). General dentists (88.0% ± 6.1%) also demonstrated significantly higher accuracy than chatbots (p < 0.001), followed by students (80.8% ± 6.9%). ChatGPT showed the highest accuracy (78% ± 3%) among chatbots. All chatbots except ChatGPT3.5 showed acceptable consistency (Cronbach's alpha > 0.7). CLINICAL SIGNIFICANCE: Based on this pilot study, chatbots may be valuable adjuncts for educational purposes and for distributing information to patients. However, they are not yet ready to serve as substitutes for human clinicians in diagnostic decision-making. CONCLUSION: In this pilot study, chatbots showed lower accuracy than dentists. Chatbots may not yet be recommended for clinical pediatric dentistry.
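Consistency of the kind reported here (Cronbach's alpha over three repeated conversations per chatbot) can be computed from a short sample-variance formulation. The grading matrix below is hypothetical, invented purely to illustrate the calculation, not the study's data.

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(runs):
    """runs: k score lists (one per repeated conversation), each of
    length n_questions; 1 = graded correct, 0 = incorrect.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(runs)
    totals = [sum(scores) for scores in zip(*runs)]  # per-question totals
    item_vars = sum(variance(r) for r in runs)
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Hypothetical grading of 3 independent conversations over 10 questions:
runs = [
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    [1, 1, 0, 1, 1, 0, 1, 0, 1, 0],
    [1, 1, 0, 1, 1, 1, 1, 1, 1, 0],
]
alpha = cronbach_alpha(runs)  # ~0.89, above the 0.7 acceptability threshold
```

Perfectly consistent repeated runs give alpha = 1; values above 0.7 are conventionally treated as acceptable, matching the threshold used in the abstract.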


Subject(s)
Dentists , Pediatric Dentistry , Humans , Pilot Projects , Dentists/psychology , Artificial Intelligence , Communication , Surveys and Questionnaires , Child
10.
Article in English | MEDLINE | ID: mdl-38553304

ABSTRACT

OBJECTIVES: In this study, we assessed 6 different artificial intelligence (AI) chatbots' (Bing, GPT-3.5, GPT-4, Google Bard, Claude, Sage) responses to controversial and difficult questions in oral pathology, oral medicine, and oral radiology. STUDY DESIGN: The chatbots' answers were evaluated by board-certified specialists using a modified version of the global quality score on a 5-point Likert scale. The quality and validity of chatbot citations were evaluated. RESULTS: Claude had the highest mean score of 4.341 ± 0.582 for oral pathology and medicine. Bing had the lowest score of 3.447 ± 0.566. In oral radiology, GPT-4 had the highest mean score of 3.621 ± 1.009 and Bing the lowest score of 2.379 ± 0.978. GPT-4 achieved the highest mean score of 4.066 ± 0.825 for performance across all disciplines. Of the citations generated by the chatbots, 82 out of 349 (23.50%) were fabricated. CONCLUSIONS: The chatbot that provided the highest-quality information for controversial topics across the dental disciplines examined was GPT-4. Although the majority of chatbots performed well, it is suggested that developers of AI medical chatbots incorporate scientific citation authenticators to validate the outputted citations, given the relatively high number of fabricated citations.


Subject(s)
Artificial Intelligence , Oral Medicine , Humans , Radiology , Pathology, Oral
11.
J Endod ; 50(5): 562-578, 2024 May.
Article in English | MEDLINE | ID: mdl-38387793

ABSTRACT

AIMS: Future dental and endodontic education must adapt to the current digitalized healthcare system in a hyper-connected world. The purpose of this scoping review was to investigate the ways an endodontic education curriculum could benefit from the implementation of artificial intelligence (AI) and overcome the limitations of this technology in the delivery of healthcare to patients. METHODS: An electronic search was carried out up to December 2023 using MEDLINE, Web of Science, Cochrane Library, and a manual search of reference literature. Grey literature and ongoing clinical trials were also searched via ClinicalTrials.gov. RESULTS: The search identified 251 records, of which 35 were deemed relevant to artificial intelligence (AI) and endodontic education. Areas in which AI might aid students with their didactic and clinical endodontic education were identified as follows: 1) radiographic interpretation; 2) differential diagnosis; 3) treatment planning and decision-making; 4) case difficulty assessment; 5) preclinical training; 6) advanced clinical simulation and case-based training; 7) real-time clinical guidance; 8) autonomous systems and robotics; 9) progress evaluation and personalized education; 10) calibration and standardization. CONCLUSIONS: AI in endodontic education will support clinical and didactic teaching through individualized feedback; enhanced, augmented, and virtually generated training aids; automated detection and diagnosis; treatment planning and decision support; and AI-based student progress evaluation and personalized education. Its implementation will inarguably change the current concept of teaching endodontics. Dental educators would benefit from introducing AI in clinical and didactic pedagogy; however, they must be aware of AI's limitations and the challenges to overcome.


Subject(s)
Artificial Intelligence , Curriculum , Education, Dental , Endodontics , Endodontics/education , Humans , Education, Dental/methods , Clinical Competence
12.
Clin Oral Investig ; 28(1): 88, 2024 Jan 13.
Article in English | MEDLINE | ID: mdl-38217733

ABSTRACT

OBJECTIVE: This study aimed to review and synthesize studies using artificial intelligence (AI) for classifying, detecting, or segmenting oral mucosal lesions on photographs. MATERIALS AND METHODS: Inclusion criteria were (1) studies employing AI to (2) classify, detect, or segment oral mucosal lesions, (3) on oral photographs of human subjects. Included studies were assessed for risk of bias using Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). A PubMed, Scopus, Embase, Web of Science, IEEE, arXiv, medRxiv, and grey literature (Google Scholar) search was conducted until June 2023, without language limitation. RESULTS: After initial searching, 36 eligible studies (from 8734 identified records) were included. Based on QUADAS-2, only 7% of studies were at low risk of bias for all domains. Studies employed different AI models and reported a wide range of outcomes and metrics. The accuracy of AI for detecting oral mucosal lesions ranged from 74 to 100%, while that for clinicians unaided by AI ranged from 61 to 98%. The pooled diagnostic odds ratio for studies which evaluated AI for diagnosing or discriminating potentially malignant lesions was 155 (95% confidence interval 23-1019), while that for cancerous lesions was 114 (59-221). CONCLUSIONS: AI may assist in oral mucosal lesion screening, although the expected accuracy gains and further health benefits remain unclear so far. CLINICAL RELEVANCE: Artificial intelligence assists oral mucosal lesion screening and may foster more targeted testing and referral in the hands of non-specialist providers, for example. So far, it remains unclear whether accuracy gains compared with specialized providers can be realized.


Subject(s)
Artificial Intelligence , Mouth Mucosa , Humans , Referral and Consultation
13.
Dentomaxillofac Radiol ; 53(1): 5-21, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38183164

ABSTRACT

OBJECTIVES: Improved tools based on deep learning can be used to accurately number and identify teeth. This study aims to review the use of deep learning in tooth numbering and identification. METHODS: An electronic search was performed through October 2023 on PubMed, Scopus, Cochrane, Google Scholar, IEEE, arXiv, and medRxiv. Studies that used deep learning models with segmentation, object detection, or classification tasks for teeth identification and numbering of human dental radiographs were included. For risk of bias assessment, included studies were critically analysed using quality assessment of diagnostic accuracy studies (QUADAS-2). To generate plots for meta-analysis, MetaDiSc and STATA 17 (StataCorp LP, College Station, TX, USA) were used. Pooled diagnostic odds ratios (DORs) were calculated. RESULTS: The initial search yielded 1618 studies, of which 29 were eligible based on the inclusion criteria. Five studies were found to have low bias across all domains of the QUADAS-2 tool. Deep learning has been reported to have an accuracy range of 81.8%-99% in tooth identification and numbering and a precision range of 84.5%-99.94%. Reported sensitivity ranged from 75.5% to 98%, specificity from 79.9% to 99%, and F1-scores from 87% to 98%. Only 6 studies found the deep learning model to be less than 90% accurate. The average DOR of the pooled data set was 1612, the sensitivity was 89%, the specificity was 99%, and the area under the curve was 96%. CONCLUSION: Deep learning models can successfully detect, identify, and number teeth on dental radiographs. Deep learning-powered tooth numbering systems can enhance complex automated processes, such as accurately reporting which teeth have caries, thus aiding clinicians in making informed decisions during clinical practice.
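As one concrete illustration of the output stage of a tooth-numbering system, a 32-class detector's class index can be mapped to the two-digit FDI notation used on adult dental charts. The quadrant-wise class ordering assumed below is hypothetical; real models may enumerate tooth classes differently.

```python
def class_to_fdi(class_idx):
    """Map a 0-31 detector class index to two-digit FDI notation.
    Assumption (hypothetical): classes run quadrant by quadrant
    (upper right, upper left, lower left, lower right), with 8 teeth
    per quadrant ordered from central incisor to third molar."""
    if not 0 <= class_idx <= 31:
        raise ValueError("class index must be 0-31 for permanent teeth")
    quadrant, position = divmod(class_idx, 8)
    # FDI: first digit = quadrant (1-4), second digit = position (1-8).
    return (quadrant + 1) * 10 + (position + 1)
```

Under this assumed ordering, class 0 maps to tooth 11 (upper-right central incisor) and class 31 to tooth 48 (lower-right third molar).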


Subject(s)
Deep Learning , Dental Caries , Tooth , Humans , Radiography, Dental , Tooth/diagnostic imaging
14.
Oral Radiol ; 40(1): 1-20, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37855976

ABSTRACT

PURPOSE: This study aims to review deep learning applications for detecting head and neck cancer (HNC) using magnetic resonance imaging (MRI) and radiographic data. METHODS: Through January 2023, a search of PubMed, Scopus, Embase, Google Scholar, IEEE, and arXiv was carried out. The inclusion criteria were implementing head and neck medical images (computed tomography (CT), positron emission tomography (PET), MRI, planar scans, and panoramic X-ray) of human subjects with segmentation, object detection, and classification deep learning models for head and neck cancers. The risk of bias was rated with the quality assessment of diagnostic accuracy studies (QUADAS-2) tool. For the meta-analysis, the diagnostic odds ratio (DOR) was calculated. Deeks' funnel plot was used to assess publication bias. The MIDAS and Metandi packages were used to analyze diagnostic test accuracy in STATA. RESULTS: From 1967 studies, 32 were found eligible after the search and screening procedures. According to the QUADAS-2 tool, 7 included studies had a low risk of bias for all domains. Across all included studies, accuracy varied from 82.6 to 100%, specificity ranged from 66.6 to 90.1%, and sensitivity from 74 to 99.68%. Fourteen studies that provided sufficient data were included in the meta-analysis. The pooled sensitivity was 90% (95% CI 0.82-0.94), and the pooled specificity was 92% (95% CI 0.87-0.96). The DOR was 103 (27-251). Publication bias was not detected based on a p-value of 0.75 in the meta-analysis. CONCLUSION: Deep learning models can enhance head and neck cancer screening with high specificity and sensitivity.


Subject(s)
Deep Learning , Head and Neck Neoplasms , Humans , Sensitivity and Specificity , Head and Neck Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods
16.
J Endod ; 50(2): 144-153.e2, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37977219

ABSTRACT

INTRODUCTION: The aim of this study was to leverage label-efficient self-supervised learning (SSL) to train a model that can detect external cervical resorption (ECR) and differentiate it from caries. METHODS: Periapical (PA) radiographs of teeth with ECR defects were collected. Two board-certified endodontists reviewed PA radiographs and cone beam computed tomographic (CBCT) images independently to determine presence of ECR (ground truth). Radiographic data were divided into 3 regions of interest (ROIs): healthy teeth, teeth with ECR, and teeth with caries. Nine contrastive SSL models (SimCLR v2, MoCo v2, BYOL, DINO, NNCLR, SwAV, MSN, Barlow Twins, and SimSiam) were implemented in the assessment alongside 7 baseline deep learning models (ResNet-18, ResNet-50, VGG16, DenseNet, MobileNetV2, ResNeXt-50, and InceptionV3). A 10-fold cross-validation strategy and a hold-out test set were employed for model evaluation. Model performance was assessed via various metrics including classification accuracy, precision, recall, and F1-score. RESULTS: Included were 190 PA radiographs, composed of 470 ROIs. Results from 10-fold cross-validation demonstrated that most SSL models outperformed the transfer learning baseline models, with DINO achieving the highest mean accuracy (85.64 ± 4.56), significantly outperforming 13 other models (P < .05). DINO reached the highest test set (ie, 3 ROIs) accuracy (84.09%) while MoCo v2 exhibited the highest recall and F1-score (77.37% and 82.93%, respectively). CONCLUSIONS: This study showed that AI can assist clinicians in detecting ECR and differentiating it from caries. Additionally, it introduced the application of SSL in detecting ECR, emphasizing that SSL-based models can outperform transfer learning baselines and reduce reliance on large, labeled datasets.
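Contrastive SSL methods of the SimCLR family pre-train on unlabeled radiographs by pulling two augmented views of the same image together in embedding space while pushing other images away. The following is a minimal pure-Python sketch of the NT-Xent loss that SimCLR uses; the toy embeddings and temperature are illustrative assumptions, and production code would use a deep learning framework with an image encoder.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent_loss(embeddings, temperature=0.5):
    """NT-Xent (SimCLR-style) loss. Convention assumed here:
    embeddings[2i] and embeddings[2i+1] are two augmented views
    of the same radiograph crop (a positive pair)."""
    n = len(embeddings)
    losses = []
    for i in range(n):
        j = i + 1 if i % 2 == 0 else i - 1  # index of the positive view
        denom = sum(math.exp(cosine(embeddings[i], embeddings[k]) / temperature)
                    for k in range(n) if k != i)
        pos = math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
        losses.append(-math.log(pos / denom))
    return sum(losses) / n

# Toy check: well-aligned positive pairs should give a lower loss
# than mismatched pairs.
aligned = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
misaligned = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
```

Minimizing this loss over many augmented batches is what lets methods like SimCLR and MoCo learn useful representations from the 20,000+ unlabeled radiographs before any ECR labels are introduced.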


Subject(s)
Dental Caries , Tooth , Humans , Cone-Beam Computed Tomography/methods , Artificial Intelligence , Tomography, X-Ray Computed/methods , Supervised Machine Learning
17.
AIDS Res Hum Retroviruses ; 40(3): 141-147, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37565279

ABSTRACT

Adult T cell leukemia/lymphoma (ATLL) is a malignancy with a poor prognosis caused by human T lymphocyte virus type 1 (HTLV-1) infection. Tax and HBZ are two major viral proteins that may be involved in oncogenesis by disrupting apoptosis. Because Bcl-xL plays an integral role in the anti-apoptotic pathway, this study examines the interaction between host apoptosis and oncoproteins. We investigated 37 HTLV-1-infected individuals, including 18 asymptomatic carriers and 19 ATLL subjects. mRNA was extracted and converted to cDNA from peripheral blood mononuclear cells, and then gene expression was determined using TaqMan q-PCR. Moreover, the HTLV-1 proviral load (PVL) was measured using a commercial absolute quantification kit (Novin Gene, Iran). Data analysis revealed that mean TAX and HBZ expression and PVL were significantly higher in the ATLL group than in the carrier group (p = .003, p < .001, and p = .002, respectively). There was no statistical difference in Bcl-xL gene expression between the study groups (p = .323). It is proposed that this anti-apoptotic pathway may not be directly involved in the development of ATLL. Bcl-xL, TAX, and HBZ gene expression, and PVL, can be utilized as prognostic markers.


Subject(s)
HIV Infections , Human T-lymphotropic virus 1 , Leukemia-Lymphoma, Adult T-Cell , Lymphoma , Adult , Humans , Leukemia-Lymphoma, Adult T-Cell/genetics , Human T-lymphotropic virus 1/genetics , Leukocytes, Mononuclear , Basic-Leucine Zipper Transcription Factors/genetics , Basic-Leucine Zipper Transcription Factors/metabolism , Retroviridae Proteins/genetics , Retroviridae Proteins/metabolism , HIV Infections/pathology , Lymphoma/pathology , Gene Expression , Gene Products, tax/genetics , Gene Products, tax/metabolism
18.
Int Endod J ; 57(3): 305-314, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38117284

ABSTRACT

AIM: This study aimed to evaluate and compare the validity and reliability of responses provided by GPT-3.5, Google Bard, and Bing to frequently asked questions (FAQs) in the field of endodontics. METHODOLOGY: FAQs were formulated by expert endodontists (n = 10) and collected through GPT-3.5 queries (n = 10), with every question posed to each chatbot three times. Responses (N = 180) were independently evaluated by two board-certified endodontists using a modified Global Quality Score (GQS) on a 5-point Likert scale (5: strongly agree; 4: agree; 3: neutral; 2: disagree; 1: strongly disagree). Disagreements on scoring were resolved through evidence-based discussions. The validity of responses was analysed by categorizing scores into valid or invalid at two thresholds: the low threshold was set at a score of ≥4 for all three responses, whilst the high threshold was set at a score of 5 for all three responses. Fisher's exact test was conducted to compare the validity of responses between chatbots. Cronbach's alpha was calculated to assess reliability via the consistency of repeated responses for each chatbot. RESULTS: All three chatbots provided answers to all questions. Using the low-threshold validity test (GPT-3.5: 95%; Google Bard: 85%; Bing: 75%), there was no significant difference between the platforms (p > .05). When using the high-threshold validity test, the chatbot scores were substantially lower (GPT-3.5: 60%; Google Bard: 15%; Bing: 15%). The validity of GPT-3.5 responses was significantly higher than Google Bard and Bing (p = .008). All three chatbots achieved an acceptable level of reliability (Cronbach's alpha > 0.7). CONCLUSIONS: GPT-3.5 provided more credible information on topics related to endodontics compared to Google Bard and Bing.
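The two validity thresholds described in the methodology reduce to a simple rule over the three repeated GQS grades per question. The helper and the sample grades below are an illustrative sketch of that rule, not the study's actual scoring code.

```python
def validity(scores, strict=False):
    """Low threshold (default): all repeated responses graded >= 4.
    High threshold (strict=True): all repeated responses graded exactly 5."""
    if strict:
        return all(s == 5 for s in scores)
    return all(s >= 4 for s in scores)

# Hypothetical GQS grades for one FAQ asked three times:
grades = [5, 4, 5]
low = validity(grades)                 # valid at the low threshold
high = validity(grades, strict=True)   # invalid at the high threshold
```

Because a single grade of 4 fails the strict rule, the high-threshold validity rates in the results drop sharply relative to the low-threshold ones.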


Subject(s)
Artificial Intelligence , Endodontics , Reproducibility of Results , Software , Information Sources
19.
Article in English | MEDLINE | ID: mdl-38095650

ABSTRACT

Cardiotoxicity caused by anthracycline chemotherapy is one of the leading causes of mortality and morbidity in cancer survivors. Continuous infusion (CI) instead of bolus (BOL) injection is one of the methods that seems to be effective in reducing doxorubicin (DOX) cardiotoxicity. Given the variety of results, we decided to compare these two approaches regarding toxicity and efficacy and report the final results for different cancers. We included 21 studies (four preclinical and seventeen clinical trials) up to May 15, 2023. In children with acute lymphoblastic leukemia (ALL) and adults with chronic lymphocytic leukemia (CLL) and gastric cancer, results were in favor of BOL injection, without an increase in cardiotoxicity. On the other hand, CI was shown to be the better option in patients with small-cell lung cancer (SCLC) and breast cancer. Mixed results were also observed in adult patients with sarcoma. Overall, it can be concluded that the benefits of CI, especially in adults, outweigh its disadvantages. However, due to the variety of results and heterogeneity of studies, further clinical trials with a larger sample size and a longer duration of follow-up are needed to make a more accurate comparison between CI and BOL injection.

20.
Article in English | MEDLINE | ID: mdl-38095652

ABSTRACT

The development of invasive fungal infections (IFIs) is a serious complication in acute myeloid leukemia (AML) patients who undergo induction-to-remission chemotherapy. Given the increased mortality in AML patients with IFI despite prophylaxis, this problem needs to be addressed. Statins have traditionally been employed in clinical settings as lipid-lowering agents. Nonetheless, recent investigations have brought to light their antifungal properties in animal and in vitro studies. The objective of this study was to assess the effectiveness of atorvastatin added to the routine IFI prophylaxis regimen in patients diagnosed with AML. A randomized, multicenter, triple-blind study was conducted on 76 AML patients aged 18-70, who received either placebo or atorvastatin in addition to fluconazole. Patients were followed for 30 days for the development of IFIs, patient survival, and atorvastatin-related adverse drug reactions. Data were analyzed with SPSS version 26.0, with a significance level of 0.05 used as the threshold for all statistical tests. Because the two groups differed significantly in age, the analysis adjusted for age; it showed that atorvastatin reduced the development of both probable and proven IFI (based on EORTC/MSGERC criteria) compared to placebo. IFI-free survival was also significantly better in the atorvastatin group. The incidence of aspergillosis did not differ between the two groups. No serious adverse events related to atorvastatin were observed. The present investigation has substantiated earlier in vitro and animal research on the fungicidal effect of statins and suggests the need for additional research with larger sample sizes and an extended duration of follow-up. Trial registration: This study was registered on the Iranian Registry of Clinical Trials as IRCT20210503051166N1 (date of confirmation: 2021.05.03).
