Results 1 - 3 of 3
1.
Cell Rep Phys Sci ; 4(11)2023 Nov.
Article in English | MEDLINE | ID: mdl-38078148

ABSTRACT

Large language models like ChatGPT can generate authentic-seeming text at lightning speed, but many journal publishers reject language models as authors on manuscripts. Thus, a means to accurately distinguish human-generated from artificial intelligence (AI)-generated text is immediately needed. We recently developed an accurate AI text detector for scientific journals and, herein, test its ability in a variety of challenging situations, including on human text from a wide variety of chemistry journals, on AI text from the most advanced publicly available language model (GPT-4), and, most importantly, on AI text generated using prompts designed to obfuscate AI use. In all cases, AI and human text were assigned with high accuracy. ChatGPT-generated text can be readily detected in chemistry journals; this advance is a fundamental prerequisite for understanding how automated text generation will impact scientific publishing from now into the future.

2.
J Am Soc Mass Spectrom ; 34(12): 2775-2784, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-37897440

ABSTRACT

To achieve high-quality omics results, systematic variability in mass spectrometry (MS) data must be adequately addressed. Effective data normalization is essential for minimizing this variability. The abundance of approaches and the data-dependent nature of normalization have led some researchers to develop open-source academic software for choosing the best approach. While these tools are certainly beneficial to the community, none of them meet all of the needs of all users, particularly users who want to test new strategies that are not available in these products. Herein, we present a simple workflow that facilitates the identification of optimal normalization strategies using straightforward evaluation metrics, employing both supervised and unsupervised machine learning. The workflow offers a "DIY" aspect, where the performance of any normalization strategy can be evaluated for any type of MS data. As a demonstration of its utility, we apply this workflow to two distinct datasets, an ESI-MS dataset of extracted lipids from latent fingerprints and a cancer spheroid dataset of metabolites ionized by MALDI-MSI, for which we identified the best-performing normalization strategies.


Subject(s)
Neoplasms , Unsupervised Machine Learning , Humans , Workflow , Software , Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization
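The "DIY" evaluation idea in the abstract above can be sketched in a few lines: apply several candidate normalization strategies to the same intensity matrix and rank them by a simple unsupervised metric, such as how much each strategy reduces feature-wise variability across replicate samples. The strategy names, toy data, and metric here are illustrative assumptions, not the paper's actual workflow or datasets.

```python
# Hypothetical sketch: score candidate normalization strategies on a toy
# intensity matrix by mean coefficient of variation (CV) across replicates.
# Lower CV after normalization suggests the strategy removed more of the
# systematic (e.g., sampling-depth) variability.
import statistics

def tic_normalize(row):
    """Total-ion-current style: scale a sample so its intensities sum to 1."""
    total = sum(row)
    return [x / total for x in row]

def median_normalize(row):
    """Scale a sample by its median intensity."""
    med = statistics.median(row)
    return [x / med for x in row]

def mean_cv(matrix):
    """Mean coefficient of variation per feature (column); lower is better."""
    cvs = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        cvs.append(statistics.stdev(col) / statistics.mean(col))
    return statistics.mean(cvs)

# Toy data: 3 replicate samples x 4 features; sample 2 measured at ~2x depth.
raw = [
    [10.0, 20.0, 30.0, 40.0],
    [22.0, 41.0, 59.0, 82.0],
    [11.0, 19.0, 31.0, 39.0],
]

strategies = {"none": lambda r: list(r), "TIC": tic_normalize, "median": median_normalize}
scores = {name: mean_cv([fn(row) for row in raw]) for name, fn in strategies.items()}
best = min(scores, key=scores.get)
print(best, {k: round(v, 3) for k, v in scores.items()})
```

Because the metric is computed the same way for every strategy, any new normalization can be dropped into the `strategies` dictionary and scored on any matrix, which is the "DIY" aspect the abstract describes.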
3.
Cell Rep Phys Sci ; 4(6)2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37426542

ABSTRACT

ChatGPT has enabled access to artificial intelligence (AI)-generated writing for the masses, initiating a culture shift in the way people work, learn, and write. The need to discriminate human writing from AI is now both critical and urgent. Addressing this need, we report a method for discriminating text generated by ChatGPT from that of (human) academic scientists, relying on prevalent and accessible supervised classification methods. The approach uses new features for discriminating these humans from AI; for example, scientists write long paragraphs and have a penchant for equivocal language, frequently using words like "but," "however," and "although." With a set of 20 features, we built a model that assigns the author, as human or AI, at over 99% accuracy. This strategy could be further adapted and developed by others with basic skills in supervised classification, enabling access to many highly accurate and targeted models for detecting AI usage in academic writing and beyond.
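The feature idea in the abstract above can be sketched as simple stylometric extraction: represent a document by measures such as words per sentence and the frequency of equivocal words ("but," "however," "although"), which could then feed any off-the-shelf supervised classifier. The feature set, example texts, and thresholds below are illustrative assumptions, not the paper's actual 20-feature model.

```python
# Hypothetical sketch: extract a few stylometric features of the kind the
# abstract describes. A real classifier would be trained on many labeled
# human- and AI-written documents using features like these.
import re

EQUIVOCAL = {"but", "however", "although"}

def extract_features(text):
    """Return a small dict of stylometric features for one document."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "n_words": len(words),
        "words_per_sentence": len(words) / max(len(sentences), 1),
        "equivocal_rate": sum(w in EQUIVOCAL for w in words) / max(len(words), 1),
    }

# Toy example texts (invented for illustration).
human = ("The results were promising; however, the sample size was small. "
         "Although the trend held, further replicates are needed.")
ai = ("The results demonstrate a clear improvement. The method is robust. "
      "The approach generalizes well.")

f_human = extract_features(human)
f_ai = extract_features(ai)
print(f_human, f_ai)
```

On this toy pair, the hedging-word rate separates the two texts, mirroring the abstract's observation that scientists favor equivocal language; a trained model would combine many such features rather than rely on any single one.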
