Results 1 - 6 of 6
1.
Cogn Sci ; 47(4): e13279, 2023 04.
Article in English | MEDLINE | ID: mdl-37052215

ABSTRACT

The enormous scale of the available information and products on the Internet has necessitated the development of algorithms that intermediate between options and human users. These algorithms attempt to provide the user with relevant information. In doing so, they face a tension between selecting items whose ratings are uncertain, in order to learn more about the user, and selecting items they are certain will secure high ratings. This tension is an instance of the exploration-exploitation trade-off in the context of recommender systems. Because humans are in this interaction loop, the long-term trade-off behavior depends on human variability. Our goal is to characterize the trade-off behavior as a function of the human variability fundamental to such human-algorithm interaction. To tackle the characterization, we first introduce a unifying model that smoothly transitions between active learning and recommending relevant information. The unifying model gives us access to a continuum of algorithms along the exploration-exploitation trade-off. We then present two experiments to measure the trade-off behavior under two very different levels of human variability. The experimental results inform a thorough simulation study in which we modeled and varied human variability systematically over a wide range. The main result is that the exploration-exploitation trade-off grows in severity as human variability increases, but there exists a regime of low variability where algorithms balanced in exploration and exploitation can largely overcome the trade-off.
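The abstract leaves the unifying model unspecified, but the continuum it describes can be sketched with a toy simulation: a single mixing weight lam slides the selection rule from pure exploitation (lam = 0, pick the highest estimated rating) to pure uncertainty-driven exploration (lam = 1, probe the least-shown items), while noisier simulated feedback stands in for higher human variability. Everything below (the count-based uncertainty heuristic, the noise model, all names) is an illustrative assumption, not the paper's model.

import numpy as np

rng = np.random.default_rng(0)
n_items = 100
true_ratings = rng.uniform(0, 1, n_items)  # hypothetical ground-truth preferences

est_mean = np.zeros(n_items)   # running estimate of each item's rating
counts = np.zeros(n_items)     # how often each item was shown

def select_item(lam):
    """Continuum from exploitation (lam=0) to exploration (lam=1)."""
    uncertainty = 1.0 / np.sqrt(counts + 1.0)  # crude stand-in for model uncertainty
    return int(np.argmax((1.0 - lam) * est_mean + lam * uncertainty))

def simulate(lam, human_noise, steps=2000):
    """Noisier feedback models higher human variability."""
    est_mean[:], counts[:] = 0.0, 0.0
    reward = 0.0
    for _ in range(steps):
        i = select_item(lam)
        feedback = true_ratings[i] + rng.normal(0.0, human_noise)
        counts[i] += 1
        est_mean[i] += (feedback - est_mean[i]) / counts[i]  # incremental mean
        reward += true_ratings[i]
    return reward / steps

for lam in (0.0, 0.3, 1.0):
    print(lam, simulate(lam, 0.1), simulate(lam, 0.5))

Sweeping lam at two noise levels mirrors, in miniature, the paper's comparison of balanced algorithms under low and high human variability.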


Subject(s)
Algorithms , Exploratory Behavior , Humans , Uncertainty , Computer Simulation , Internet
2.
Front Big Data ; 4: 693494, 2021.
Article in English | MEDLINE | ID: mdl-34396093

ABSTRACT

Despite advances in deep learning methods for song recommendation, most existing methods do not take advantage of the sequential nature of song content. In addition, there is a lack of methods that can explain their predictions using the content of recommended songs, and only a few approaches can handle the item cold-start problem. In this work, we propose a hybrid deep learning model that uses collaborative filtering (CF) and deep learning sequence models on the Musical Instrument Digital Interface (MIDI) content of songs to provide accurate recommendations, while also being able to generate a relevant, personalized explanation for each recommended song. Compared to state-of-the-art methods, our validation experiments showed that in addition to generating explainable recommendations, our model stood out among the top performers in terms of recommendation accuracy and the ability to handle the item cold-start problem. Moreover, validation shows that our personalized explanations capture properties that are in accordance with the user's preferences.
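The architecture is described only at a high level, so the following is a minimal sketch of one plausible reading, assuming PyTorch: a collaborative-filtering user embedding is scored against a song vector produced by an LSTM over the song's MIDI note tokens. The class name, dimensions, and tokenization are assumptions for illustration, not the paper's design.

import torch
import torch.nn as nn

class HybridMidiRecommender(nn.Module):
    """Hypothetical hybrid of CF and a sequence model over MIDI content."""
    def __init__(self, n_users, n_note_tokens, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)        # collaborative-filtering side
        self.note_emb = nn.Embedding(n_note_tokens, dim)  # content side (MIDI events)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, user_ids, note_seqs):
        u = self.user_emb(user_ids)                      # (batch, dim)
        _, (h, _) = self.lstm(self.note_emb(note_seqs))  # final hidden state
        return (u * h.squeeze(0)).sum(dim=-1)            # predicted affinity

model = HybridMidiRecommender(n_users=1000, n_note_tokens=128)
scores = model(torch.tensor([3, 7]), torch.randint(0, 128, (2, 50)))
print(scores.shape)  # torch.Size([2])

Because the song side depends only on MIDI content, such a model can score a brand-new song with no interaction history, which is one way a content-aware hybrid addresses the item cold-start problem the abstract mentions.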

3.
PLoS One ; 15(8): e0235502, 2020.
Article in English | MEDLINE | ID: mdl-32790666

ABSTRACT

Traditionally, machine learning algorithms relied on reliable labels from experts to build predictions. More recently, however, algorithms have been receiving data from the general population in the form of labels, annotations, etc. The result is that algorithms are subject to bias born from ingesting unchecked information, such as biased samples and biased labels. Furthermore, people and algorithms are increasingly engaged in interactive processes wherein neither the human nor the algorithm receives unbiased data. Algorithms can also make biased predictions, leading to what is now known as algorithmic bias. In turn, humans' reactions to the output of machine learning methods affected by algorithmic bias worsen the situation by making decisions based on biased information, which will likely be consumed by algorithms later. Some recent research has focused on the ethical and moral implications of machine learning algorithmic bias for society. However, most research has so far treated algorithmic bias as a static factor, which fails to capture the dynamic and iterative properties of bias. We argue that algorithmic bias interacts with humans in an iterative manner, which has a long-term effect on algorithms' performance. For this purpose, we present an iterated-learning framework, inspired by human language evolution, to study the interaction between machine learning algorithms and humans. Our goal is to study two sources of bias that interact: the process by which people select information to label (human action) and the process by which an algorithm selects the subset of information to present to people (iterated algorithmic bias mode). We investigate three forms of iterated algorithmic bias (personalization filter, active learning, and random) and how they affect the performance of machine learning algorithms by formulating research questions about the impact of each type of bias. Based on statistical analyses of the results of several controlled experiments, we found that the three different iterated bias modes, as well as initial training data class imbalance and human action, do affect the models learned by machine learning algorithms. We also found that iterated filter bias, which is prominent in personalized user interfaces, can lead to more inequality in estimated relevance and to a limited human ability to discover relevant data. Our findings indicate that the relevance blind spot (items from the testing set whose predicted relevance probability is less than 0.5 and which thus risk being hidden from humans) amounted to 4% of all relevant items when using a content-based filter that predicts relevant items. A similar simulation using a real-life rating data set found that the same filter resulted in a blind spot size of 75% of the relevant testing set.
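The iterated filter bias loop and the blind-spot measure can be reproduced in miniature. The sketch below, a loose paraphrase of the setup rather than the study's actual protocol, uses scikit-learn: at each iteration the filter shows only its top-ranked unlabeled items, the simulated human labels exactly those, and the model retrains; the blind spot is then the share of truly relevant items whose predicted relevance stays below 0.5. The data, pool sizes, and batch size are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical item pool: 2-D features, relevant when x0 + x1 > 0.
X = rng.normal(size=(5000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

labeled = list(rng.choice(len(X), size=50, replace=False))
model = LogisticRegression()

for _ in range(20):
    model.fit(X[labeled], y[labeled])
    p = model.predict_proba(X)[:, 1]
    # Iterated filter bias: only items the current filter already ranks
    # as relevant are shown to, and hence labeled by, the simulated human.
    seen = set(labeled)
    shown = [i for i in np.argsort(-p) if i not in seen][:25]
    labeled.extend(shown)

# Relevance blind spot: relevant items whose predicted probability stays
# below 0.5 and which therefore risk never being surfaced to a human.
p = model.predict_proba(X)[:, 1]
print(f"blind spot: {np.mean(p[y == 1] < 0.5):.1%} of relevant items")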


Subject(s)
Learning , Machine Learning/standards , Bias , Humans
4.
PLoS One ; 12(11): e0187426, 2017.
Article in English | MEDLINE | ID: mdl-29121052

ABSTRACT

The goal of this study is to develop a model that explains the relationship between microRNAs, transcription factors, and their co-target genes. This relationship was previously reported in gene regulatory loops associated with 24-hour (24h) and 7-day (7d) time periods following ischemia-reperfusion injury in the rat retina. Using a model system of retinal ischemia-reperfusion injury, we propose that microRNAs first influence transcription factors, which in turn act as mediators to influence the transcription of genes via triadic regulatory loops. Analysis of the relative contributions of direct and indirect regulatory influences on genes revealed that a substantial fraction of the regulatory loops (69% for 24 hours and 77% for 7 days) could be explained by causal mediation. Over 40% of the mediated loops at both time points were regulated by transcription factors only, while about 20% of the loops were regulated entirely by microRNAs. The remaining fractions of the mediated regulatory loops were cooperatively mediated by both microRNAs and transcription factors. The results from these analyses were supported by the expression patterns of the genes, transcription factors, and microRNAs involved in the mediated loops at both post-ischemic time points. Additionally, network motif detection for the mediated loops showed a handful of time-specific motifs related to ischemia-reperfusion injury in the rat retina. In summary, the effects of microRNAs on genes are mediated, in large part, via transcription factors.
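The mediation logic (microRNA -> transcription factor -> gene) follows the classic regression decomposition of a total effect into direct and indirect parts. Below is a minimal sketch on synthetic expression data using Baron-Kenny style estimates; the coefficients and noise levels are invented for illustration and are not the study's data, and the study's exact test may differ.

import numpy as np

rng = np.random.default_rng(2)
n = 200

# Synthetic triad: miRNA -> TF -> gene, plus a weak direct miRNA -> gene path.
mirna = rng.normal(size=n)
tf = 0.8 * mirna + rng.normal(scale=0.5, size=n)               # mediator
gene = 0.6 * tf + 0.1 * mirna + rng.normal(scale=0.5, size=n)

def ols(y, *cols):
    """Least-squares slopes for y ~ intercept + cols."""
    A = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

(a,) = ols(tf, mirna)               # path miRNA -> TF
b, c_prime = ols(gene, tf, mirna)   # TF -> gene controlling for miRNA; direct path
(total,) = ols(gene, mirna)

print(f"indirect (mediated) effect a*b = {a * b:.2f}")
print(f"direct effect c' = {c_prime:.2f}")
print(f"proportion mediated = {a * b / total:.0%}")

A loop counts as "mediated" when the indirect component a*b accounts for most of the total effect; a criterion of this kind underlies the 69% and 77% figures reported above.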


Subject(s)
Reperfusion Injury/genetics , Retina/pathology , Animals , Disease Models, Animal , MicroRNAs/genetics , MicroRNAs/metabolism , Time Factors , Transcription Factors/metabolism
5.
BioData Min ; 9: 17, 2016.
Article in English | MEDLINE | ID: mdl-27152122

ABSTRACT

BACKGROUND: The volume of biomedical literature and its underlying knowledge base is rapidly expanding, far beyond the ability of any single person to read through it all. Several automated methods have been developed to help address this problem. The present study reports the results of a text mining approach that extracts gene interactions from the data warehouse of published experimental results, which are then used to construct an interaction network associated with glaucoma. To the best of our knowledge, there is, as yet, no glaucoma interaction network derived solely from text mining approaches. Such a network could provide a useful summative knowledge base to complement other forms of clinical information related to this disease. RESULTS: A glaucoma corpus was constructed from PubMed Central and a text mining approach was applied to extract genes and their relations from this corpus. The extracted relations between genes were checked against reference interaction databases and classified broadly as known or new relations. The extracted genes and relations were then used to construct a glaucoma interaction network. Analysis of the resulting network indicated that it bears the characteristics of a small-world interaction network. Our analysis showed the presence of seven glaucoma-linked genes that defined the network's modularity. A web-based system for browsing and visualizing the extracted glaucoma-related interaction networks is available at http://neurogene.spd.louisville.edu/GlaucomaINViewer/Form1.aspx. CONCLUSIONS: This study reports the first version of a glaucoma interaction network built using a text mining approach. The power of such an approach lies in its ability to cover a wide range of glaucoma-related studies published over many years, so that a bigger picture of the disease can be established. To the best of our knowledge, this is the first glaucoma interaction network to summarize the known literature. The major findings were a set of relations that could not be found in existing interaction databases and were thus considered new, together with a smaller subnetwork consisting of interconnected clusters of the seven glaucoma genes. Future work can be applied towards obtaining an improved version of this network.
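Once relations are extracted, building and characterizing the network is straightforward with networkx. The edge list below is a placeholder using a few well-known glaucoma-associated gene symbols, not the relations mined in the study; the small-world indicators (clustering well above a comparable random graph, short average paths) and the hub listing mirror the analyses described above.

import networkx as nx

# Placeholder text-mined gene pairs; in the study these come from
# relation extraction over a PubMed Central glaucoma corpus.
extracted_relations = [
    ("MYOC", "CYP1B1"), ("MYOC", "OPTN"), ("OPTN", "TBK1"),
    ("CYP1B1", "LTBP2"), ("WDR36", "MYOC"), ("OPTN", "WDR36"),
]

G = nx.Graph()
G.add_edges_from(extracted_relations)  # duplicate pairs collapse automatically

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("average clustering:", nx.average_clustering(G))
if nx.is_connected(G):
    print("average shortest path:", nx.average_shortest_path_length(G))

# Highest-degree genes: candidates for the hubs that shape modularity.
print(sorted(G.degree, key=lambda kv: -kv[1])[:3])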

6.
IEEE Trans Neural Netw Learn Syst ; 27(12): 2486-2498, 2016 12.
Article in English | MEDLINE | ID: mdl-26529786

ABSTRACT

We demonstrate a new deep learning autoencoder network, trained by a nonnegativity-constraint algorithm (the nonnegativity-constrained autoencoder), that learns features exhibiting a part-based representation of the data. The learning algorithm is based on constraining the network's weights to be nonnegative. The algorithm's performance is assessed by how well it decomposes data into parts, and its prediction performance is tested on three standard image data sets and one text data set. The results indicate that the nonnegativity constraint forces the autoencoder to learn features that amount to a part-based representation of the data, while improving sparsity and reconstruction quality in comparison with the traditional sparse autoencoder and nonnegative matrix factorization. It is also shown that this newly acquired representation improves the prediction performance of a deep neural network.
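The constraint can be sketched as an ordinary autoencoder whose loss adds a quadratic penalty on the negative part of every weight matrix, pushing the weights toward nonnegativity in the spirit of nonnegative matrix factorization. The sketch below assumes PyTorch; the penalty form, its weight 1e-2, and the layer sizes are illustrative assumptions, not necessarily the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NCAutoencoder(nn.Module):
    """Autoencoder trained with a penalty that discourages negative weights."""
    def __init__(self, n_in=784, n_hidden=128):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.dec(torch.sigmoid(self.enc(x)))

def negativity_penalty(model):
    # Quadratic penalty on the negative part of each weight matrix;
    # drives weights toward >= 0, encouraging part-based features.
    return sum(F.relu(-p).pow(2).sum()
               for name, p in model.named_parameters() if "weight" in name)

model = NCAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)  # stand-in batch (e.g., flattened MNIST pixels)

for _ in range(5):
    opt.zero_grad()
    loss = F.mse_loss(model(x), x) + 1e-2 * negativity_penalty(model)
    loss.backward()
    opt.step()
print(loss.item())

With near-nonnegative weights, each hidden unit can only add input structure rather than cancel it, which is why the learned features tend to look like parts, much as in nonnegative matrix factorization.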
