1.
Br J Ophthalmol ; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38749531

ABSTRACT

BACKGROUND/AIMS: To compare the performance of generative versus retrieval-based chatbots in answering patient inquiries regarding age-related macular degeneration (AMD) and diabetic retinopathy (DR).

METHODS: We evaluated four chatbots: generative models (ChatGPT-4, ChatGPT-3.5 and Google Bard) and a retrieval-based model (OcularBERT) in a cross-sectional study. Their response accuracy to 45 questions (15 AMD, 15 DR and 15 others) was evaluated and compared. Three masked retinal specialists graded the responses using a three-point Likert scale: either 2 (good, error-free), 1 (borderline) or 0 (poor with significant inaccuracies). The scores were aggregated, ranging from 0 to 6. Based on majority consensus among the graders, the responses were also classified as 'Good', 'Borderline' or 'Poor' quality.

RESULTS: Overall, ChatGPT-4 and ChatGPT-3.5 outperformed the other chatbots, both achieving median scores (IQR) of 6 (1), compared with 4.5 (2) in Google Bard, and 2 (1) in OcularBERT (all p ≤ 8.4×10⁻³). Based on the consensus approach, 83.3% of ChatGPT-4's responses and 86.7% of ChatGPT-3.5's were rated as 'Good', surpassing Google Bard (50%) and OcularBERT (10%) (all p ≤ 1.4×10⁻²). ChatGPT-4 and ChatGPT-3.5 had no 'Poor' rated responses. Google Bard produced 6.7% Poor responses, and OcularBERT produced 20%. Across question types, ChatGPT-4 outperformed Google Bard only for AMD, and ChatGPT-3.5 outperformed Google Bard for DR and others.

CONCLUSION: ChatGPT-4 and ChatGPT-3.5 demonstrated superior performance, followed by Google Bard and OcularBERT. Generative chatbots are potentially capable of answering domain-specific questions outside their original training. Further validation studies are still required prior to real-world implementation.
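The grading scheme described in the methods can be sketched as follows. Note this is an illustrative reading of the abstract, not the study's actual analysis code; in particular, the abstract does not say how a three-way disagreement among graders is resolved, so the fallback to 'Borderline' below is an assumption.

```python
from collections import Counter

def aggregate_scores(grades):
    """Sum three graders' Likert scores (each 0, 1 or 2) into a 0-6 total,
    as described in the methods."""
    assert len(grades) == 3 and all(g in (0, 1, 2) for g in grades)
    return sum(grades)

def consensus_label(grades):
    """Majority-consensus quality label: 'Good' (2), 'Borderline' (1) or
    'Poor' (0) when at least two of the three graders agree.
    Falling back to 'Borderline' on a three-way split is an assumption;
    the abstract does not specify a tie-breaking rule."""
    names = {2: "Good", 1: "Borderline", 0: "Poor"}
    score, count = Counter(grades).most_common(1)[0]
    return names[score] if count >= 2 else "Borderline"
```

For example, grader scores of (2, 2, 1) aggregate to 5 and yield a 'Good' consensus, since two of three graders agree on 2.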

2.
Sensors (Basel) ; 21(13)2021 Jun 23.
Article in English | MEDLINE | ID: mdl-34201798

ABSTRACT

As network attacks constantly and dramatically evolve, exhibiting new patterns, intelligent Network Intrusion Detection Systems (NIDS) using deep-learning techniques have been actively studied to tackle these problems. Recently, various autoencoders have been used for NIDS in order to accurately and promptly detect unknown types of attacks (i.e., zero-day attacks) and also alleviate the burden of the laborious labeling task. Although autoencoders are effective in detecting unknown types of attacks, it takes tremendous time and effort to find the optimal model architecture and hyperparameter settings that yield the best detection performance. This can be an obstacle that hinders practical applications of autoencoder-based NIDS. To address this challenge, we rigorously study autoencoders using the benchmark datasets NSL-KDD, IoTID20, and N-BaIoT. We evaluate multiple combinations of different model structures and latent sizes, using a simple autoencoder model. The results indicate that the latent size of an autoencoder model can have a significant impact on the IDS performance.
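The core idea — reconstruct traffic features through a bottleneck of a chosen latent size and flag samples with high reconstruction error as anomalous — can be illustrated with a minimal linear autoencoder built via SVD. This is a PCA-style stand-in for the trained deep autoencoders the study actually evaluates; the function names and the closed-form construction are illustrative assumptions, not the paper's method.

```python
import numpy as np

def linear_autoencoder(X, latent_size):
    """Build the optimal *linear* autoencoder for training data X:
    encoder/decoder share the top-`latent_size` right singular vectors
    of the centered data (equivalent to PCA). A stand-in for a trained
    deep autoencoder."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:latent_size]                      # (latent_size, n_features)
    encode = lambda X_new: (X_new - mean) @ W.T   # project into latent space
    decode = lambda Z: Z @ W + mean               # map back to feature space
    return encode, decode

def reconstruction_error(X, encode, decode):
    """Per-sample mean squared reconstruction error. In an
    autoencoder-based NIDS, samples whose error exceeds a threshold
    fitted on benign traffic are flagged as attacks."""
    X_hat = decode(encode(X))
    return np.mean((X - X_hat) ** 2, axis=1)
```

Sweeping `latent_size` and measuring detection quality on held-out benign and attack traffic mirrors the study's finding: too small a bottleneck also loses benign structure, while too large a one reconstructs attacks well, so the latent size directly shapes the separation between normal and anomalous errors.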
