1.
Front Big Data ; 7: 1387325, 2024.
Article in English | MEDLINE | ID: mdl-39345825

ABSTRACT

Introduction: Recent advancements in Natural Language Processing (NLP) and widely available social media data have made it possible to predict human personality in various computational applications. In this context, pre-trained Large Language Models (LLMs) have gained recognition for their exceptional performance on NLP benchmarks. However, these models require substantial computational resources, escalating their carbon and water footprints. Consequently, a shift toward more computationally efficient smaller models is observed. Methods: This study compares a small model, ALBERT (11.8M parameters), with a larger model, RoBERTa (125M parameters), in predicting Big Five personality traits. It utilizes the PANDORA dataset comprising Reddit comments, processing them on a Tesla P100-PCIE-16GB GPU. The study customized both models to support multi-output regression and added two linear layers for fine-grained regression analysis. Results: Results are evaluated using Mean Squared Error (MSE) and Root Mean Squared Error (RMSE), considering the computational resources consumed during training. While ALBERT consumed less system memory and emitted less heat, it required more computation time than RoBERTa. Both models produced comparable levels of MSE, RMSE, and training-loss reduction. Discussion: This highlights the influence of training-data quality on model performance, outweighing the significance of model size. Theoretical and practical implications are also discussed.
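To make the setup above concrete, here is a minimal sketch of how a pre-trained encoder such as ALBERT (or RoBERTa, by swapping the checkpoint name) can be extended with two linear layers for multi-output regression over the Big Five traits. The pooling strategy, hidden size, and example inputs are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch: attaching a two-layer regression head to a pre-trained
# encoder (ALBERT here; RoBERTa would swap the checkpoint) for Big Five
# multi-output regression. Layer sizes and pooling are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TraitRegressor(nn.Module):
    def __init__(self, backbone_name="albert-base-v2", hidden=128, n_traits=5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone_name)
        dim = self.encoder.config.hidden_size
        # Two linear layers for fine-grained regression, as described above.
        self.head = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_traits))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool token embeddings (an assumption; CLS pooling also works).
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
        return self.head(pooled)

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = TraitRegressor()
batch = tokenizer(["I spent the weekend reorganizing my bookshelf."],
                  return_tensors="pt", padding=True, truncation=True)
preds = model(batch["input_ids"], batch["attention_mask"])   # shape (1, 5)
loss = nn.MSELoss()(preds, torch.rand(1, 5))                  # targets would be trait scores
```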

2.
Front Psychiatry ; 15: 1437569, 2024.
Article in English | MEDLINE | ID: mdl-39149156

ABSTRACT

Introduction: With rapid advancements in natural language processing (NLP), predicting personality using this technology has become a significant research interest. In personality prediction, exploring appropriate questions that elicit natural language is particularly important because questions determine the context of responses. This study aimed to predict levels of neuroticism, a core psychological trait known to predict various psychological outcomes, using responses to a series of open-ended questions developed based on the five-factor model of personality. This study examined the model's accuracy and explored the influence of item content in predicting neuroticism. Methods: A total of 425 Korean adults were recruited and responded to 18 open-ended questions about their personalities, along with the measurement of the Five-Factor Model traits. In total, 30,576 Korean sentences were collected. To develop the prediction models, the pre-trained language model KoBERT was used. Accuracy, F1 Score, Precision, and Recall were calculated as evaluation metrics. Results: The results showed that items inquiring about social comparison, unintended harm, and negative feelings performed better in predicting neuroticism than other items. For predicting depressivity, items related to negative feelings, social comparison, and emotions showed superior performance. For dependency, items related to unintended harm, social dominance, and negative feelings were the most predictive. Discussion: We identified items that performed better at neuroticism prediction than others. Prediction models developed based on open-ended questions that theoretically aligned with neuroticism exhibited superior predictive performance.
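The per-item comparison reported above can be illustrated with a small evaluation sketch that computes Accuracy, Precision, Recall, and F1 for models trained on responses to individual question items. The item names, the high/low neuroticism label coding, and the prediction values are hypothetical, not the study's data.

```python
# Sketch of a per-item evaluation: comparing which open-ended questions
# best predict (binarized) neuroticism. Items and values are illustrative.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical true labels and predictions from models trained on single items.
per_item = {
    "social_comparison": ([1, 0, 1, 1, 0, 0], [1, 0, 1, 1, 0, 1]),
    "unintended_harm":   ([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 0]),
    "daily_routine":     ([1, 0, 1, 1, 0, 0], [0, 1, 1, 0, 0, 1]),
}
for item, (y_true, y_pred) in per_item.items():
    print(f"{item:18s} acc={accuracy_score(y_true, y_pred):.2f} "
          f"prec={precision_score(y_true, y_pred):.2f} "
          f"rec={recall_score(y_true, y_pred):.2f} "
          f"f1={f1_score(y_true, y_pred):.2f}")
```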

3.
Neural Netw ; 169: 542-554, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37952390

ABSTRACT

The personality prediction task not only helps us to better understand personal needs and preferences but is also essential for many fields such as psychology and behavioral economics. Current personality prediction primarily focuses on discovering personality traits through user posts. Additionally, there are methods that utilize psychological information to uncover certain underlying personality traits. Although significant progress has been made in personality prediction, we believe that current solutions still overlook the long-term stability of personality and are constrained by the challenge of capturing consistent personality-related clues across different views in a simple and efficient manner. To this end, we propose HG-PerCon, which utilizes user representations based on historical semantic information and psychological knowledge for cross-view contrastive learning. Specifically, we design a transformer-based module to obtain user representations with long-lasting personality-related information from their historical posts. We leverage a psychological knowledge graph that incorporates language styles to generate user representations guided by psychological knowledge. Additionally, we employ contrastive learning to capture the consistency of users' personality-related clues across views. We evaluated the effectiveness of our model experimentally; our approach achieved reductions of 2%, 4%, and 6% in RMSE compared to the second-best baseline method.


Subjects
Learning, Personality, Knowledge, Language, Semantics
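The cross-view contrastive idea can be illustrated with an InfoNCE-style objective that pulls together each user's post-history representation and their knowledge-guided representation while pushing apart mismatched users. This is a generic sketch of the technique under assumed embedding sizes, not the HG-PerCon implementation.

```python
# Minimal sketch of a cross-view contrastive (InfoNCE-style) objective between
# a user's post-history representation and a knowledge-guided representation.
import torch
import torch.nn.functional as F

def cross_view_infonce(z_posts, z_knowledge, temperature=0.1):
    """z_posts, z_knowledge: (batch, dim) user embeddings from the two views."""
    a = F.normalize(z_posts, dim=-1)
    b = F.normalize(z_knowledge, dim=-1)
    logits = a @ b.t() / temperature      # similarity between every pair of users
    targets = torch.arange(a.size(0))     # matching views share the same index
    # Symmetric loss: each view should retrieve its own counterpart.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

z_hist = torch.randn(8, 256)   # e.g. from a transformer over historical posts
z_kg = torch.randn(8, 256)     # e.g. from a knowledge-graph-guided encoder
loss = cross_view_infonce(z_hist, z_kg)
```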
4.
Front Psychol ; 13: 865841, 2022.
Article in English | MEDLINE | ID: mdl-36467183

ABSTRACT

In this work, we demonstrate how textual content from answers to interview questions related to past behavior and situational judgement can be used to infer personality traits. We analyzed responses from over 58,000 job applicants who completed an online text-based interview that also included a personality questionnaire based on the HEXACO personality model to self-rate their personality. The inference model training utilizes a fine-tuned version of InterviewBERT, a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model extended with a large interview answer corpus of over 3 million answers (over 330 million words). InterviewBERT is able to better contextualize interview responses based on the interview-specific knowledge learnt from the answer corpus, in addition to the general language knowledge already encoded in the initial pre-trained BERT. Further, the attention-based learning approaches in InterviewBERT enable the development of explainable personality inference models that can address concerns of model explainability, a frequently raised issue when using machine learning models. We obtained an average correlation of r = 0.37 (p < 0.001) across the six HEXACO dimensions between the self-rated and the language-inferred trait scores, with the highest correlation of r = 0.45 for Openness and the lowest of r = 0.28 for Agreeableness. We also show that the mean differences in inferred trait scores between male and female groups are similar to those reported by others using standard self-rated item inventories. Our results show the potential of using InterviewBERT to infer personality in an explainable manner using only the textual content of interview responses, making personality assessments more accessible and removing the subjective biases involved in human interviewer judgement of candidate personality.
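The reported evaluation, per-dimension Pearson correlations between self-rated and language-inferred HEXACO scores, could be computed along the following lines. The data below is simulated purely to show the calculation; the noise level and sample size are assumptions.

```python
# Sketch of the correlation-based evaluation: Pearson r between self-rated
# and language-inferred HEXACO scores, per dimension. Data is simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
dims = ["Honesty-Humility", "Emotionality", "Extraversion",
        "Agreeableness", "Conscientiousness", "Openness"]
n = 200
for dim in dims:
    self_rated = rng.normal(size=n)
    inferred = 0.4 * self_rated + rng.normal(scale=0.9, size=n)  # assumed signal/noise mix
    r, p = pearsonr(self_rated, inferred)
    print(f"{dim:18s} r = {r:.2f} (p = {p:.3g})")
```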

5.
Acta Psychol (Amst) ; 230: 103740, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36126377

ABSTRACT

Speech is a powerful medium through which a variety of psychologically relevant phenomena are expressed. Here we take a first step in evaluating the potential of using voice samples as non-self-report measures of personality. In particular, we examine the extent to which linguistic and vocal information extracted from semi-structured vocal samples can be used to predict conventional measures of personality. We extracted 94 linguistic features (using Linguistic Inquiry and Word Count, LIWC 2015) and 272 vocal features (using pyAudioAnalysis) from 614 voice samples of at least 50 words. Using a two-stage, fully automatable machine learning pipeline, we evaluated the extent to which these features predicted self-report personality scales (Big Five Inventory). For comparison purposes, we also examined the predictive performance of these voice features with respect to depression, age, and gender. Results showed that voice samples accounted for 10.67% of the variance in personality traits on average and that the same samples could also predict depression, age, and gender. Moreover, the results reported here provide a conservative estimate of the degree to which features derived from voice samples could be used to predict personality traits and suggest a number of opportunities to optimize personality prediction and better understand how voice samples carry information about personality.


Subjects
Voice, Humans, Personality, Speech
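A fully automatable pipeline of the kind described might look like the sketch below: standardize the combined linguistic and vocal feature matrix, fit a regularized regressor for one trait, and score it with cross-validation. The feature matrix is simulated, and the LIWC/pyAudioAnalysis extraction step is not reproduced; model choice and dimensions are assumptions.

```python
# Sketch of a standardize-then-regress pipeline over combined linguistic and
# vocal features, evaluated with cross-validation. Data is simulated.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(614, 94 + 272))              # 94 linguistic + 272 vocal features
y = 0.3 * (X[:, :5] @ rng.normal(size=5)) + rng.normal(size=614)  # one simulated trait

pipeline = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
r2_scores = cross_val_score(pipeline, X, y, cv=5, scoring="r2")
print("Mean cross-validated R^2:", r2_scores.mean())  # cf. the ~10.67% variance explained
```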
6.
Front Psychol ; 13: 865541, 2022.
Article in English | MEDLINE | ID: mdl-35465529

ABSTRACT

Background: Self-report multiple-choice questionnaires have been widely used to quantitatively measure personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that of psychology due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of Phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in Phase I. This study aims to develop the semi-structured interview and open-ended questions for FFM-based personality assessments, designed with experts in the field of clinical and personality psychology (Phase I), and to collect personality-related text data using the interview questions and self-report measures of personality and psychological distress (Phase II). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions, the FFM personality constructs, and psychological distress, to demonstrate the validity of natural language-based personality prediction. Methods: The Phase I (pilot) study was conducted with fifty-nine native Korean adults to acquire personality-related text data from the semi-structured interview and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists. Based on the established interview questions, a total of 300 Korean adults will be recruited using a convenience sampling method via an online survey. The text data collected from interviews will be analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, will be analyzed together in the model to predict individuals' FFM personality traits and level of psychological distress (Phase II).

7.
Future Gener Comput Syst ; 132: 266-281, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35342213

ABSTRACT

Continuous passive sensing of daily behavior from mobile devices has the potential to identify behavioral patterns associated with different aspects of human characteristics. This paper presents novel analytic approaches to extract and understand these behavioral patterns and their impact on predicting adaptive and maladaptive personality traits. Our machine learning analysis extends previous research by showing that both adaptive and maladaptive traits are associated with passively sensed behavior, providing initial evidence for the utility of this type of data for studying personality and its pathology. The analysis also suggests directions for future confirmatory studies into the underlying behavior patterns that link adaptive and maladaptive variants, consistent with contemporary models of personality pathology.
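For illustration, a machine learning analysis of this general kind could relate passively sensed daily-behavior features to a trait score as sketched below. The feature names, simulated data, and model choice are assumptions for illustration, not the paper's pipeline.

```python
# Illustrative sketch: predicting a (simulated) adaptive or maladaptive trait
# score from passively sensed daily-behavior features, scored out-of-fold.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 300
features = {
    "screen_unlocks_per_day": rng.poisson(60, n),
    "mean_sleep_duration_h": rng.normal(7, 1, n),
    "locations_visited_per_day": rng.poisson(4, n),
    "calls_outgoing_per_day": rng.poisson(3, n),
}
X = np.column_stack(list(features.values()))
trait = 0.02 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(size=n)  # simulated trait score

preds = cross_val_predict(GradientBoostingRegressor(random_state=0), X, trait, cv=5)
r, _ = pearsonr(trait, preds)
print(f"Out-of-fold correlation between sensed-behavior predictions and trait: r={r:.2f}")
```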
