Results 1 - 6 of 6
1.
Proc Natl Acad Sci U S A ; 116(48): 24268-24274, 2019 Nov 26.
Article in English | MEDLINE | ID: mdl-31712420

ABSTRACT

A wide range of research has promised new tools for forecasting infectious disease dynamics, but little of that research is currently being applied in practice, because tools do not address key public health needs, do not produce probabilistic forecasts, have not been evaluated on external data, or do not provide sufficient forecast skill to be useful. We developed an open collaborative forecasting challenge to assess probabilistic forecasts for seasonal epidemics of dengue, a major global public health problem. Sixteen teams used a variety of methods and data to generate forecasts for 3 epidemiological targets (peak incidence, the week of the peak, and total incidence) over 8 dengue seasons in Iquitos, Peru, and San Juan, Puerto Rico. Forecast skill was highly variable across teams and targets. While numerous forecasts showed high skill for midseason situational awareness, early-season skill was low, and skill was generally lowest for high-incidence seasons, those for which forecasts would be most valuable. A comparison of modeling approaches revealed that average forecast skill was lower for models that included biologically meaningful data and mechanisms, and that both multimodel and multiteam ensemble forecasts consistently outperformed individual model forecasts. Leveraging these insights, data, and the forecasting framework will be critical to improving forecast skill and applying forecasts in real time for epidemic preparedness and response. Moreover, key components of this project (integration with public health needs, a common forecasting framework, shared and standardized data, and open participation) can help advance infectious disease forecasting beyond dengue.
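As a rough illustration of the ensemble result reported above, the sketch below combines three hypothetical probabilistic forecasts of the peak week with equal weights and scores each against an observed peak using the logarithmic score. The member forecasts, the 52-bin layout, and the scoring choice are assumptions for illustration, not the challenge's actual models or evaluation code.

```python
import numpy as np

# Hypothetical member forecasts: each is a probability distribution over 52
# weekly bins for the week of the peak (a discretized Gaussian here).
def discretized_gaussian(center, spread, n_bins=52):
    weeks = np.arange(n_bins)
    p = np.exp(-0.5 * ((weeks - center) / spread) ** 2)
    return p / p.sum()

members = [discretized_gaussian(c, s) for c, s in [(30, 3.0), (28, 5.0), (33, 2.0)]]

# Equal-weight multimodel ensemble: average the member probabilities bin by bin.
ensemble = np.mean(members, axis=0)

observed_peak_week = 31  # hypothetical observation

def log_score(forecast, observed_bin):
    """Logarithmic score: log of the probability assigned to the observed bin."""
    return np.log(forecast[observed_bin])

for i, m in enumerate(members, 1):
    print(f"model {i}: log score = {log_score(m, observed_peak_week):+.3f}")
print(f"ensemble: log score = {log_score(ensemble, observed_peak_week):+.3f}")
```

The ensemble's score is typically less variable than any single member's because averaging spreads probability across the plausible peak weeks, which is consistent with the finding that ensembles outperformed individual models.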


Subjects
Dengue/epidemiology, Epidemiologic Methods, Disease Outbreaks, Epidemics/prevention & control, Humans, Incidence, Models, Statistical, Peru/epidemiology, Puerto Rico/epidemiology
2.
JMIR AI ; 2: e42936, 2023 Feb 07.
Article in English | MEDLINE | ID: mdl-38875587

ABSTRACT

BACKGROUND: Emerging artificial intelligence (AI) applications have the potential to improve health, but they may also perpetuate or exacerbate inequities. OBJECTIVE: This review aims to provide a comprehensive overview of the health equity issues related to the use of AI applications and to identify strategies proposed to address them. METHODS: We searched PubMed, Web of Science, the IEEE (Institute of Electrical and Electronics Engineers) Xplore Digital Library, ProQuest U.S. Newsstream, Academic Search Complete, the Food and Drug Administration (FDA) website, and ClinicalTrials.gov to identify academic and gray literature related to AI and health equity that was published between 2014 and 2021, as well as additional literature on AI and health equity during the COVID-19 pandemic published in 2020 and 2021. Literature was eligible for inclusion in our review if it identified at least one equity issue and a corresponding strategy to address it. To organize and synthesize equity issues, we adopted a 4-step AI application framework: Background Context, Data Characteristics, Model Design, and Deployment. We then created a many-to-many mapping of the links between issues and strategies. RESULTS: In 660 documents, we identified 18 equity issues and 15 strategies to address them. Equity issues related to Data Characteristics and Model Design were the most common. The most commonly recommended strategies to improve equity were improving the quantity and quality of data, evaluating the disparities introduced by an application, increasing model reporting and transparency, involving the broader community in AI application development, and improving governance. CONCLUSIONS: Stakeholders should review our many-to-many mapping of equity issues and strategies when planning, developing, and implementing AI applications in health care so that they can make appropriate plans to ensure equity for populations affected by their products. AI application developers should consider adopting equity-focused checklists, and regulators such as the FDA should consider requiring them. Because our review was limited to documents published online, developers may have unpublished knowledge of additional issues and strategies that we were unable to identify.
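The many-to-many mapping described in the methods can be represented as a pair of indexes, one from issues to strategies and one back. The sketch below is purely illustrative; the issue and strategy labels are paraphrased from the strategies listed above and are not the review's actual taxonomy.

```python
from collections import defaultdict

# Illustrative (not the review's actual) issue-to-strategy links.
links = [
    ("unrepresentative training data", "improve data quantity and quality"),
    ("unrepresentative training data", "evaluate disparities introduced by the application"),
    ("opaque model behavior", "increase model reporting and transparency"),
    ("lack of community input", "involve the broader community in development"),
]

# Build the many-to-many mapping in both directions.
issue_to_strategies = defaultdict(set)
strategy_to_issues = defaultdict(set)
for issue, strategy in links:
    issue_to_strategies[issue].add(strategy)
    strategy_to_issues[strategy].add(issue)

print(sorted(issue_to_strategies["unrepresentative training data"]))
```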

3.
Neural Netw ; 129: 359-384, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32599541

ABSTRACT

We show that the backpropagation algorithm is a special case of the generalized Expectation-Maximization (EM) algorithm for iterative maximum likelihood estimation. We then apply the recent result that carefully chosen noise can speed the average convergence of the EM algorithm as it climbs a hill of probability or log-likelihood. It follows that injecting such noise can speed the average convergence of the backpropagation algorithm for both the training and pretraining of multilayer neural networks. The beneficial noise adds to the hidden and visible neurons and related parameters. The noise also applies to regularized regression networks. This beneficial noise is just that noise that makes the current signal more probable. We show that such noise also tends to improve classification accuracy. The geometry of the noise-benefit region depends on the probability structure of the neurons in a given layer. The noise-benefit region in noise space lies above the noisy-EM (NEM) hyperplane for classification and involves a hypersphere for regression. Simulations demonstrate these noise benefits using MNIST digit classification. The NEM noise benefits substantially exceed those of simply adding blind noise to the neural network. We further prove that the noise speed-up applies to the deep bidirectional pretraining of neural-network bidirectional associative memories (BAMs) or their functionally equivalent restricted Boltzmann machines. We then show that learning with basic contrastive divergence also reduces to generalized EM for an energy-based network probability. The optimal noise adds to the input visible neurons of a BAM in stacked layers of trained BAMs. Global stability of generalized BAMs guarantees rapid convergence in pretraining when neural signals feed back between contiguous layers. Bipolar coding of inputs further improves pretraining performance.
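A hedged restatement of the positivity condition behind "noise that makes the current signal more probable", with notation assumed here (Y the noised signal, Z the latent or complete-data variables, N the noise, and theta_k the current parameter estimate):

```latex
% Assumed notation; a sketch of the NEM-style positivity condition, not a
% verbatim transcription of the paper's theorem statement.
\[
  \mathbb{E}_{Y,Z,N \mid \theta_k}
  \!\left[
    \ln \frac{p\,(Y + N,\, Z \mid \theta_k)}{p\,(Y,\, Z \mid \theta_k)}
  \right] \;\ge\; 0 .
\]
% Noise realizations with p(y + n, z | theta_k) >= p(y, z | theta_k) make the
% current signal more probable and so speed the average likelihood ascent of
% EM, and hence of backpropagation as its special case.
```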


Subjects
Algorithms, Deep Learning, Neural Networks, Computer, Deep Learning/trends, Neurons/physiology, Probability
4.
Neural Netw ; 78: 15-23, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26700535

ABSTRACT

Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives.
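The sketch below illustrates the output-neuron case in a hedged form: candidate noise for the softmax outputs is kept only if it lies on the beneficial side of a hyperplane defined by the current output activations. The specific acceptance test n . log(a) >= 0 and the gradient form a - (t + n) are assumptions chosen to match the description above, not a verbatim transcription of the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def nem_output_noise(activations, scale=0.1, max_tries=100):
    """Sample uniform noise and keep it only if it lies above the hyperplane
    n . log(a) = 0 defined by the current softmax activations a (assumed form
    of the NEM hyperplane condition)."""
    log_a = np.log(activations)
    for _ in range(max_tries):
        n = rng.uniform(-scale, scale, size=activations.shape)
        if n @ log_a >= 0.0:            # above the hyperplane: keep the noise
            return n
    return np.zeros_like(activations)   # fall back to no noise

# Toy training step: noise is added to the one-hot target before computing the
# cross-entropy gradient at the softmax layer (grad = a - (t + n)).
logits = rng.normal(size=10)
a = softmax(logits)
t = np.eye(10)[3]                       # hypothetical true class
n = nem_output_noise(a)
grad_logits = a - (t + n)
print("noise accepted:", bool(n @ np.log(a) >= 0.0),
      "| grad norm:", np.linalg.norm(grad_logits))
```

Noise sampled from below the same hyperplane (n . log(a) < 0) would, per the abstract, slow training on average, which is why the rejection step matters more than the noise magnitude.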


Subjects
Neural Networks, Computer, Normal Distribution, Algorithms, Learning/physiology, Likelihood Functions, Random Allocation
5.
Neural Netw ; 37: 132-40, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23137615

ABSTRACT

Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
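A minimal sketch of the clustering noise benefit, assuming noise enters as small annealed perturbations of the samples during the k-means centroid update; the cooling schedule and the injection point are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three well-separated Gaussian clusters as toy data.
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in [(-2, 0), (2, 0), (0, 3)]])

def noisy_kmeans(X, k=3, iters=20, noise0=0.5):
    centroids = X[rng.choice(len(X), k, replace=False)]
    for t in range(iters):
        scale = noise0 / (t + 1)                         # annealed noise scale
        Xn = X + rng.normal(0.0, scale, size=X.shape)    # inject noise into samples
        # Assign each (noisy) sample to its nearest centroid, then update.
        labels = np.argmin(((Xn[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = Xn[labels == j].mean(axis=0)
    return centroids

print(noisy_kmeans(X))
```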


Subjects
Algorithms, Artifacts, Artificial Intelligence, Computer Simulation, Models, Neurological, Cluster Analysis, Humans, Stochastic Processes
6.
IEEE Trans Syst Man Cybern B Cybern ; 41(5): 1183-97, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21478078

ABSTRACT

Fuzzy rule-based systems can approximate prior and likelihood probabilities in Bayesian inference and thereby approximate posterior probabilities. This fuzzy approximation technique allows users to apply a much wider and more flexible range of prior and likelihood probability density functions than found in most Bayesian inference schemes. The technique does not restrict the user to the few known closed-form conjugacy relations between the prior and likelihood. It allows the user in many cases to describe the densities with words, and just two rules can absorb any bounded closed-form probability density directly into the rulebase. Learning algorithms can tune the expert rules as well as grow them from sample data. The learning laws and fuzzy approximators have a tractable form because of the convex-sum structure of additive fuzzy systems. This convex-sum structure carries over to the fuzzy posterior approximator. We prove a uniform approximation theorem for Bayesian posteriors: An additive fuzzy posterior uniformly approximates the posterior probability density if the prior or likelihood densities are continuous and bounded and if separate additive fuzzy systems approximate the prior and likelihood densities. Simulations demonstrate this fuzzy approximation of priors and posteriors for the three most common conjugate priors (as when a beta prior combines with a binomial likelihood to give a beta posterior). Adaptive fuzzy systems can also approximate non-conjugate priors and likelihoods as well as approximate hyperpriors in hierarchical Bayesian inference. The number of fuzzy rules can grow exponentially in iterative Bayesian inference if the previous posterior approximator becomes the new prior approximator.
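A minimal sketch of the beta-binomial example mentioned above, assuming a simple standard-additive-model (SAM) construction: Gaussian if-part sets with then-part centroids set to the target density, one fuzzy approximator each for the prior and the likelihood, and a normalized product for the posterior. The rule count, set widths, and fitting rule are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from math import comb, gamma

def beta_pdf(t, a, b):
    return gamma(a + b) / (gamma(a) * gamma(b)) * t**(a - 1) * (1 - t)**(b - 1)

def sam_approximator(target, centers, width):
    """Simple SAM: Gaussian if-part sets at `centers`, then-part centroids set
    to the target density there.  Output is the convex combination
    sum_j a_j(x) c_j / sum_j a_j(x)."""
    c = target(centers)
    def F(x):
        a = np.exp(-0.5 * ((x[:, None] - centers[None]) / width) ** 2)
        return (a @ c) / a.sum(axis=1)
    return F

theta = np.linspace(1e-3, 1 - 1e-3, 400)
n, x_obs = 10, 7                                      # hypothetical binomial data

prior = lambda t: beta_pdf(t, 2.0, 3.0)               # Beta(2, 3) prior
likelihood = lambda t: comb(n, x_obs) * t**x_obs * (1 - t)**(n - x_obs)

centers = np.linspace(0.0, 1.0, 25)
F_prior = sam_approximator(prior, centers, width=0.05)
F_like = sam_approximator(likelihood, centers, width=0.05)

# Fuzzy posterior: normalized product of the two fuzzy approximators,
# compared against the exact Beta(2 + x, 3 + n - x) posterior.
unnormalized = F_prior(theta) * F_like(theta)
fuzzy_post = unnormalized / (unnormalized.sum() * (theta[1] - theta[0]))
true_post = beta_pdf(theta, 2.0 + x_obs, 3.0 + n - x_obs)
print("max abs error:", np.max(np.abs(fuzzy_post - true_post)))
```

The convex-sum structure shows up in F(x): each output is a weighted average of the then-part centroids, which is why the product of two such approximators remains tractable to normalize numerically.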
