1.
ScientificWorldJournal; 2022: 8002363, 2022.
Article in English | MEDLINE | ID: mdl-36225947

ABSTRACT

The search for the right person for the right job, or in other words, the selection of the candidate whose skills best match those demanded by employers for a specific set of duties in a job appointment, is a key premise of the personnel selection pipeline of recruitment departments. This task is usually performed by human experts who examine candidates' résumés or curricula vitae in search of the skills needed to fill the vacant position. Recent advances in AI, specifically in text analytics and natural language processing, have sparked research interest in applying these technologies to help recruiters accomplish this task, or part of it, automatically, using algorithms for information extraction, parsing, representation, and matching of résumés and job descriptions, or sections thereof. In this study, we aim to better understand how the research landscape in this field has evolved. To do this, we follow a multifaceted bibliometric approach aimed at identifying trends, dynamics, structures, and visual mappings of the most relevant topics, highly cited or influential papers, authors, and universities working on these topics, based on a publication record retrieved from the Scopus and Google Scholar bibliographic databases. We conclude that, unlike a traditional literature review, the bibliometric-guided approach allowed us to build a more comprehensive picture of the evolution of research on this subject and to clearly identify paradigm shifts from the earliest stages to the most recent efforts proposed to address this problem.
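To make the matching task surveyed here concrete, the minimal sketch below ranks hypothetical résumés against a job description using TF-IDF vectors and cosine similarity. It is a generic illustration of one common representation-and-matching scheme, not a method proposed or evaluated in this study, and the example texts are invented.

```python
# Illustrative sketch only (not this paper's method): rank résumés against a job
# description by TF-IDF cosine similarity, using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data engineer with Python, SQL and cloud ETL pipeline experience."
resumes = [  # hypothetical free-text résumés
    "Built ETL pipelines in Python and SQL on AWS; five years as a data engineer.",
    "Front-end developer experienced in React, TypeScript and UI design.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + resumes)

# Similarity of each résumé to the job description; higher means a closer match.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for score, text in sorted(zip(scores, resumes), reverse=True):
    print(f"{score:.2f}  {text[:60]}")
```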


Subject(s)
Algorithms, Bibliometrics, Humans, Artificial Intelligence
2.
ScientificWorldJournal; 2021: 6616654, 2021.
Article in English | MEDLINE | ID: mdl-33859542

ABSTRACT

BACKGROUND: After several waves of the COVID-19 pandemic, countries around the world are struggling to revive their economies by slowly lifting the mobility restrictions and social distancing measures applied during the crisis. Meanwhile, recent studies provide compelling evidence of how physical distancing, the use of face masks, and handwashing habits can reduce the risk of SARS-CoV-2 transmission. In this context, we investigated the effect that these personal protection habits can have in preventing new waves of contagion. METHODS: We extended an agent-based COVID-19 epidemic model of a simulated community to incorporate the mechanisms of these personal care habits and measure their effect on person-to-person transmission. A full factorial experiment design was performed to illustrate the extent to which the interplay between these habits mitigates the spread of the disease. A global sensitivity analysis was performed on the parameters controlling these habits to further validate the results. RESULTS: We found that keeping physical distance is the dominant habit in reducing disease transmission, although adopting either or both of the other two habits is necessary to some extent to suppress a new outbreak entirely. When physical distance is not observed, adherence to mask use or handwashing still yields a significant decrease in infections and mortality, but the epidemic still unfolds. We also found that, in all scenarios, the combined effect of adhering to the three habits is more powerful than adopting them separately. CONCLUSIONS: Our findings suggest that broad adherence of the population to voluntary self-care habits would help contain new outbreaks. The purpose of our model is illustrative; it reinforces the importance of urging citizens to adopt this combination of personal care habits as a primary collective protection measure to keep communities from returning to confinement while immunisation is carried out in the late stages of the pandemic.
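As a rough illustration of the mechanism described above, the sketch below runs a stripped-down agent-based loop in which each adopted habit multiplicatively reduces the per-contact transmission probability. All parameter values (base probability, habit efficacies, adherence, contact rate, recovery time) are illustrative assumptions, not the calibrated settings of the paper's model.

```python
# Minimal agent-based sketch of habit-modulated transmission (illustrative only).
import random

random.seed(1)
N, DAYS, CONTACTS = 2000, 120, 8
BASE_P = 0.06                                                   # per-contact infection prob. (assumed)
EFFECT = {"distancing": 0.80, "mask": 0.50, "handwash": 0.30}   # assumed risk reductions per habit
ADHERENCE = 0.7                                                 # prob. an agent adopts each habit
RECOVERY_DAYS = 10

agents = [{"habits": {h for h in EFFECT if random.random() < ADHERENCE},
           "state": "S", "t": 0} for _ in range(N)]
for a in random.sample(agents, 10):          # seed a handful of initial infections
    a["state"] = "I"

def p_transmit(src, dst):
    p = BASE_P
    for habit, eff in EFFECT.items():
        if habit in src["habits"] or habit in dst["habits"]:
            p *= 1.0 - eff                   # each adopted habit scales the risk down
    return p

for day in range(DAYS):
    infectious = [a for a in agents if a["state"] == "I"]
    for src in infectious:
        for dst in random.sample(agents, CONTACTS):
            if dst["state"] == "S" and random.random() < p_transmit(src, dst):
                dst["state"] = "I"
        src["t"] += 1
        if src["t"] >= RECOVERY_DAYS:
            src["state"] = "R"

print("ever infected:", sum(a["state"] != "S" for a in agents), "of", N)
```

Re-running with higher adherence or stronger efficacies shows the qualitative effect the paper studies systematically through its factorial design.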


Subject(s)
COVID-19, Systems Analysis, COVID-19/mortality, COVID-19/prevention & control, COVID-19/transmission, Habits, Humans, Masks, Personal Protective Equipment, Physical Distancing, Population Density, Quarantine
3.
Front Public Health; 11: 1207624, 2023.
Article in English | MEDLINE | ID: mdl-37808978

ABSTRACT

Malaria is a common and serious disease that primarily affects developing countries, and its spread is influenced by a variety of environmental and human behavioral factors; accurate prevalence prediction has therefore been identified as a critical component of the Global Technical Strategy for Malaria 2016-2030. While traditional differential equation models can perform basic forecasting, supervised machine learning algorithms provide more accurate predictions, as demonstrated by a recent study using an elastic net model (REMPS). Nevertheless, current short-term prediction systems do not achieve the accuracy required for routine clinical practice. To improve in this direction, stacked hybrid models have been proposed, in which the outputs of several machine learning models are aggregated by a meta-learner predictive model. In this paper, we propose an alternative specialist hybrid approach that combines a linear predictive model, which specializes in the linear component of the malaria prevalence signal, with a recurrent neural network predictive model, which specializes in the non-linear residuals of the linear prediction and is trained with a novel asymmetric loss. Our findings show that the specialist hybrid approach outperforms the current state-of-the-art stacked models on an open-source dataset containing 22 years of malaria prevalence data from the city of Ibadan in southwest Nigeria. The specialist hybrid approach is a promising alternative to current prediction methods, as well as a tool to improve decision-making and resource allocation for malaria control in high-risk countries.
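The following sketch illustrates the specialist hybrid decomposition under simplifying assumptions: an ordinary least-squares autoregression stands in for the linear specialist, a small PyTorch LSTM trained on its residuals stands in for the recurrent specialist, and an asymmetric squared loss penalises under-prediction more heavily. The synthetic series, window length, and the 2.0 penalty weight are illustrative choices, not the paper's data or settings.

```python
# Sketch of a linear + residual-RNN hybrid with an asymmetric loss (assumed setup).
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
series = 50 + 10 * np.sin(np.arange(264) / 6.0) + rng.normal(0, 2, 264)  # fake monthly prevalence
LAGS = 12

def windows(x, lags):
    X = np.stack([x[i:i + lags] for i in range(len(x) - lags)])
    return X, x[lags:]

X, y = windows(series, LAGS)

# 1) Linear specialist: least-squares autoregression on lagged values.
design = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
linear_pred = design @ coef
residuals = y - linear_pred

# 2) Non-linear specialist: LSTM trained on the residual signal.
Xr, yr = windows(residuals, LAGS)
Xr_t = torch.tensor(Xr, dtype=torch.float32).unsqueeze(-1)   # (batch, seq, 1)
yr_t = torch.tensor(yr, dtype=torch.float32)

class ResidualLSTM(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

def asymmetric_mse(pred, target, under_weight=2.0):
    err = target - pred
    # err > 0 means under-prediction; weight it more heavily (2.0 is an assumption).
    w = torch.where(err > 0, under_weight * torch.ones_like(err), torch.ones_like(err))
    return (w * err ** 2).mean()

model = ResidualLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = asymmetric_mse(model(Xr_t), yr_t)
    loss.backward()
    opt.step()

# 3) Hybrid forecast = linear component + predicted non-linear residual.
hybrid = linear_pred[LAGS:] + model(Xr_t).detach().numpy()
print("hybrid MAE:", np.abs(hybrid - y[LAGS:]).mean())
```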


Subject(s)
Malaria, Neural Networks, Computer, Humans, Prevalence, Nigeria/epidemiology, Algorithms, Malaria/epidemiology
4.
BioData Min; 10: 12, 2017.
Article in English | MEDLINE | ID: mdl-28331548

ABSTRACT

BACKGROUND: Discovering relevant features (biomarkers) that discriminate etiologies of a disease is useful for providing biomedical researchers with candidate targets for further laboratory experimentation while saving costs; dependencies among biomarkers may suggest additional valuable information, for example, to characterize complex epistatic relationships from genetic data. The use of classifiers to guide the search for biomarkers (the so-called wrapper approach) has been widely studied. However, simultaneously searching for relevance and dependencies among markers is less explored ground. RESULTS: We propose a new wrapper method that builds upon the discrimination power of a weighted kernel classifier to guide the search for a probabilistic model of simultaneous marginal and interacting effects. The feasibility of the method was evaluated in three empirical studies. The first assessed its ability to discover complex epistatic effects on a large-scale testbed of generated human genetic problems; the method succeeded in 4 out of 5 of these problems while providing more accurate and expressive results than a baseline technique that also considers dependencies. The second study evaluated the performance of the method on benchmark classification tasks; on average, its prediction accuracy was comparable to that of two other baseline techniques whilst finding smaller subsets of relevant features. The last study aimed at discovering relevance/dependency in a hepatitis dataset; in this regard, evidence recently reported in the medical literature corroborated our findings. As a byproduct, the method was implemented and made freely available as a toolbox of software components deployed within an existing visual data-mining workbench. CONCLUSIONS: The mining advantages exhibited by the method come at the expense of higher computational complexity, posing interesting algorithmic challenges regarding its applicability to large-scale datasets. Extending the probabilistic assumptions of the method to continuous distributions and higher-degree interactions is also appealing. As a final remark, we advocate broadening the use of visual graphical software tools, as they enable biodata researchers to focus on experiment design, visualisation, and data analysis rather than on refining their scripting and programming skills.
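For illustration only, the toy wrapper below scores candidate feature subsets by the cross-validated accuracy of an RBF-kernel SVM inside a greedy forward search. It captures the generic "classifier guides the search" wrapper idea mentioned above; the paper's probabilistic model of marginal and interacting (epistatic) effects is not reproduced here, and the dataset, classifier, and stopping rule are assumptions.

```python
# Toy wrapper feature selection: greedy forward search scored by a kernel classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, n_informative=5, random_state=0)

def score(subset):
    # Wrapper criterion: cross-validated accuracy of an RBF-kernel SVM on the subset.
    return cross_val_score(SVC(kernel="rbf", gamma="scale"), X[:, subset], y, cv=5).mean()

selected, remaining, best = [], list(range(X.shape[1])), 0.0
while remaining:
    gains = [(score(selected + [f]), f) for f in remaining]
    new_best, f = max(gains)
    if new_best <= best:            # stop when no candidate improves the wrapper score
        break
    best = new_best
    selected.append(f)
    remaining.remove(f)

print("selected features:", selected, "cv accuracy:", round(best, 3))
```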

5.
PLoS One; 3(3): e1806, 2008 Mar 26.
Article in English | MEDLINE | ID: mdl-18509521

ABSTRACT

BACKGROUND: The analysis of complex proteomic and genomic profiles involves identifying significant markers within a set of hundreds or even thousands of variables, which constitutes a high-dimensional problem space. Noise, redundancy, and combinatorial interactions in the profile make the selection of relevant variables harder. METHODOLOGY/PRINCIPAL FINDINGS: Here we propose a method to select variables based on their estimated relevance to hidden patterns. Our method combines a weighted-kernel discriminant with an iterative stochastic probability estimation algorithm to discover the relevance distribution over the set of variables. We verified the ability of our method to select predefined relevant variables in synthetic proteome-like data and then assessed its performance on biological high-dimensional problems. Experiments were run on serum proteomic datasets of infectious diseases. The resulting variable subsets achieved classification accuracies of 99% on Human African Trypanosomiasis, 91% on Tuberculosis, and 91% on Malaria serum proteomic profiles, with fewer than 20% of the variables selected. Our method scaled up to dimensionalities several orders of magnitude higher, as shown with gene expression microarray datasets on which we obtained classification accuracies close to 90% with fewer than 1% of the total number of variables. CONCLUSIONS: Our method consistently found relevant variables attaining high classification accuracies across synthetic and biological datasets. Notably, it yielded very compact subsets compared to the original number of variables, which should simplify downstream biological experimentation.
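A rough sketch of the iterative stochastic relevance-estimation idea, under assumptions: variable subsets are sampled from the current relevance distribution, each subset is scored by a cross-validated RBF-kernel SVM (a stand-in for the paper's weighted-kernel discriminant), and variables appearing in high-scoring subsets are reinforced. The reinforcement rule, subset size, and synthetic data are illustrative choices, not the paper's.

```python
# Sketch: estimate a relevance distribution over variables by stochastic reinforcement.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=100, n_informative=6, random_state=0)
n_vars, subset_size, iters = X.shape[1], 10, 30

relevance = np.ones(n_vars) / n_vars         # start from a uniform relevance distribution
for _ in range(iters):
    subset = rng.choice(n_vars, size=subset_size, replace=False, p=relevance)
    acc = cross_val_score(SVC(kernel="rbf"), X[:, subset], y, cv=3).mean()
    relevance[subset] += acc ** 4             # sharper reward for high-accuracy subsets (assumed rule)
    relevance /= relevance.sum()              # renormalise to keep a probability distribution

top = np.argsort(relevance)[::-1][:10]
print("top-ranked variables:", top)
```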


Subject(s)
Algorithms, Computational Biology/statistics & numerical data, Pattern Recognition, Automated, Software, Genomics/statistics & numerical data, Humans, Oligonucleotide Array Sequence Analysis/statistics & numerical data, Proteomics/statistics & numerical data