1.
Article in English | MEDLINE | ID: mdl-36231724

ABSTRACT

The results of gender equality indicators across the world, in the form of the prevalence of intimate partner violence (IPV) against women, are striking; they have drawn the attention of policy makers and necessitate the adoption of a comprehensive system to address the problem. The situation of IPV in Pakistan is alarming. This study examines the acceptability attitudes of women and men toward intimate partner violence against women through data science. It discovers and contrasts the frequently co-occurring reasons for which husbands' beating of their wives is believed to be legitimate by both partners in the province of Punjab, Pakistan. Although the discovered frequently co-occurring reasons, such as "arguing with the husband and neglecting the children" taken together, are similar for both genders, the fraction of wives believing in such reasons is significantly greater than that of husbands. This psychological disparity across genders could help identify the social and cultural factors to which it is attributed. The identified co-occurring groups of reasons are expected to help in understanding the problem at a deeper level and in devising better strategies to mitigate it.
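The abstract does not give implementation details, but the kind of co-occurrence analysis it describes can be illustrated with a minimal sketch of frequent-pair mining over survey responses. The respondent data, reason labels, and support threshold below are hypothetical placeholders, not the study's data:

```python
from itertools import combinations
from collections import Counter

# Hypothetical survey responses: each entry lists the reasons a respondent
# considered wife-beating justified (labels are illustrative only).
responses = [
    {"argues_with_husband", "neglects_children"},
    {"argues_with_husband", "neglects_children", "goes_out_without_telling"},
    {"neglects_children"},
    {"argues_with_husband", "neglects_children"},
]

min_support = 0.5  # assumed threshold: fraction of respondents

# Count how often each pair of reasons is endorsed together.
pair_counts = Counter()
for reasons in responses:
    for pair in combinations(sorted(reasons), 2):
        pair_counts[pair] += 1

# Keep the pairs whose support meets the threshold.
frequent_pairs = {
    pair: count / len(responses)
    for pair, count in pair_counts.items()
    if count / len(responses) >= min_support
}
print(frequent_pairs)
# e.g. {('argues_with_husband', 'neglects_children'): 0.75}
```

Running the same counting separately over husbands' and wives' responses would give the per-gender support fractions that the study contrasts.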


Subject(s)
Data Science, Intimate Partner Violence, Attitude, Child, Female, Humans, Intimate Partner Violence/psychology, Male, Prevalence, Risk Factors, Sexual Partners, Spouses/psychology
2.
Entropy (Basel); 22(10), 2020 Sep 29.
Article in English | MEDLINE | ID: mdl-33286862

ABSTRACT

Complexity and high dimensionality are inherent concerns of big data. Feature selection has gained prime importance in coping with this issue by reducing the dimensionality of datasets. The compromise between maximum classification accuracy and minimum dimensionality is as yet an unsolved puzzle. Recently, Monte Carlo Tree Search (MCTS)-based techniques have been proposed that attain great success in feature selection by constructing a binary feature selection tree and efficiently focusing on the most valuable features in the feature space. However, one challenging problem associated with such approaches is the trade-off between the depth of the tree search and the number of simulations. With a limited number of simulations, the tree might not reach sufficient depth, thus inducing a bias towards randomness in feature subset selection. In this paper, a new algorithm for feature selection is proposed in which multiple feature selection trees are built iteratively in a recursive fashion. The state space of every successor feature selection tree is smaller than that of its predecessor, thus increasing the impact of the tree search in selecting the best features while keeping the number of MCTS simulations fixed. In this study, experiments are performed on 16 benchmark datasets for validation purposes. We also compare the performance with state-of-the-art methods in the literature, both in terms of classification accuracy and feature selection ratio.
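The abstract describes the recursive construction at the level of ideas only. The sketch below illustrates just the shrinking state space across successive searches under a fixed budget; a simple random-subset search stands in for the MCTS tree of the original method, and the dataset (load_wine), classifier (KNeighborsClassifier), and budget are arbitrary illustrative choices, not the paper's:

```python
import random

from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=3)

def accuracy(features):
    """Cross-validated accuracy of the classifier on a candidate feature subset."""
    if not features:
        return 0.0
    return cross_val_score(clf, X[:, list(features)], y, cv=3).mean()

def search_tree(pool, budget):
    """Stand-in for one MCTS feature-selection tree: with the simulation budget
    fixed, sample random subsets of the current pool and keep the smallest one
    that does not lose accuracy."""
    best_subset, best_score = list(pool), accuracy(pool)
    for _ in range(budget):
        subset = [f for f in pool if random.random() < 0.5]
        score = accuracy(subset)
        if score >= best_score and 0 < len(subset) < len(best_subset):
            best_subset, best_score = subset, score
    return best_subset, best_score

# Build successive "trees": each successor searches only the features kept by
# its predecessor, so the state space shrinks while the budget stays fixed.
pool = list(range(X.shape[1]))
budget = 50
while True:
    subset, score = search_tree(pool, budget)
    print(f"pool {len(pool)} features -> kept {len(subset)}, accuracy {score:.3f}")
    if len(subset) >= len(pool):  # no further reduction, stop recursing
        break
    pool = subset
```

Because each round restricts the pool to the previously selected features, the same simulation budget covers a progressively larger share of the remaining search space, which is the effect the abstract attributes to the recursive construction.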

3.
Entropy (Basel); 20(5), 2018 May 20.
Article in English | MEDLINE | ID: mdl-33265475

ABSTRACT

Given the increasing size and complexity of the datasets needed to train machine learning algorithms, it is necessary to reduce the number of features required to achieve high classification accuracy. This paper presents a novel and efficient approach, based on Monte Carlo Tree Search (MCTS), to find the optimal feature subset within the feature space. The algorithm searches for the best feature subset by combining the benefits of tree search with random sampling. Starting from an empty root node, the tree is incrementally built by adding nodes that represent the inclusion or exclusion of features in the feature space. Every iteration leads to a feature subset by following the tree and default policies. The accuracy of the classifier on the feature subset is used as the reward and propagated backwards to update the tree. Finally, the subset with the highest reward is chosen as the best feature subset. The efficiency and effectiveness of the proposed method are validated through experiments on many benchmark datasets. The results are also compared with prominent methods in the literature, demonstrating the superiority of the proposed method.
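The abstract names the algorithm's components (an incrementally built binary tree over include/exclude decisions, tree and default policies, classifier accuracy as the reward, backpropagation) without giving code. The following is a minimal sketch assuming a standard UCB1 tree policy and a random default policy; the dataset, classifier, simulation budget, and exploration constant are illustrative choices, not taken from the paper:

```python
import math
import random

from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
n_features = X.shape[1]
clf = KNeighborsClassifier(n_neighbors=3)

def reward(mask):
    """Classification accuracy on the selected features (the MCTS reward)."""
    cols = [i for i, keep in enumerate(mask) if keep]
    if not cols:
        return 0.0
    return cross_val_score(clf, X[:, cols], y, cv=3).mean()

class Node:
    def __init__(self, depth, decision, parent):
        self.depth = depth        # index of the feature decided at this node
        self.decision = decision  # True = include, False = exclude
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def ucb(self, c=1.4):
        """UCB1 score balancing average reward and exploration."""
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

root = Node(depth=-1, decision=None, parent=None)
best_mask, best_score = None, -1.0

for _ in range(200):  # fixed simulation budget
    # Tree policy: descend by UCB1 until a node that is not fully expanded.
    node, path = root, []
    while len(node.children) == 2 and node.depth + 1 < n_features:
        node = max(node.children, key=Node.ucb)
        path.append(node.decision)
    # Expansion: add one untried include/exclude child, if any remain.
    if node.depth + 1 < n_features:
        tried = {c.decision for c in node.children}
        decision = random.choice([d for d in (True, False) if d not in tried])
        child = Node(node.depth + 1, decision, node)
        node.children.append(child)
        node = child
        path.append(decision)
    # Default policy: decide the remaining features at random.
    mask = path + [random.random() < 0.5 for _ in range(n_features - len(path))]
    score = reward(mask)
    if score > best_score:
        best_mask, best_score = mask, score
    # Backpropagation: update statistics along the visited path.
    while node is not None:
        node.visits += 1
        node.value += score
        node = node.parent

print("best subset:", [i for i, keep in enumerate(best_mask) if keep],
      f"accuracy {best_score:.3f}")
```

Increasing the simulation budget lets the tree policy reach deeper levels before the random default policy takes over, which is exactly the depth-versus-simulations trade-off discussed in the previous record.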
