Results 1 - 20 of 42
1.
J Forensic Sci; 69(5): 1699-1705, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38978157

ABSTRACT

During an investigation using Forensic Investigative Genetic Genealogy, which is a novel approach for solving violent crimes and identifying human remains, reference testing-when law enforcement requests a DNA sample from a person in a partially constructed family tree-is sometimes used when an investigation has stalled. Because the people considered for a reference test have not opted in to allow law enforcement to use their DNA profile in this way, reference testing is viewed by many as an invasion of privacy and by some as unethical. We generalize an existing mathematical optimization model of the genealogy process by incorporating the option of reference testing. Using simulated versions of 17 DNA Doe Project cases, we find that reference testing can solve cases more quickly (although many reference tests are required to substantially hasten the investigative process), but only rarely (<1%) solves cases that cannot otherwise be solved. Through a mixture of mathematical and computational analysis, we find that the most desirable people to test are at the bottom of a path descending from an ancestral couple that is most likely to be related to the target. We also characterize the rare cases where reference testing is necessary for solving the case: when there is only one descending path from an ancestral couple, which precludes the possibility of identifying an intersection (e.g., marriage) between two descendants of two different ancestral couples.


Subjects
DNA Fingerprinting, Pedigree, Humans, DNA Fingerprinting/methods, Forensic Genetics/methods, Genetic Privacy, Likelihood Functions
2.
J Forensic Sci; 67(6): 2218-2229, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36059116

ABSTRACT

The genealogy process is typically the most time-consuming part of-and a limiting factor in the success of-forensic genetic genealogy, which is a new approach to solving violent crimes and identifying human remains. We formulate a stochastic dynamic program that-given the list of matches and their genetic distances to the unknown target-chooses the best decision at each point in time: which match to investigate (i.e., find its ancestors and look for most recent common ancestors between the match and the target), which set of potential most recent common ancestors to descend from (i.e., find its descendants, with the goal of identifying a marriage between the maternal and paternal sides of the target's family tree), or whether to terminate the investigation. The objective is to maximize the probability of finding the target minus a cost associated with the expected size of the final family tree. We estimate the parameters of our model using data from 17 cases (eight solved, nine unsolved) from the DNA Doe Project. We assess the Proposed Strategy using simulated versions of the 17 DNA Doe Project cases, and compare it to a Benchmark Strategy that ranks matches by their genetic distance to the target and only descends from known common ancestors between a pair of matches. The Proposed Strategy solves cases ≈10-fold faster than the Benchmark Strategy, and does so by aggressively descending from a set of potential most recent common ancestors between the target and a match even when this set has a low probability of containing the correct most recent common ancestor. Our analysis provides a mathematical foundation for improving the genealogy process in forensic genetic genealogy.


Subjects
DNA, Forensic Genetics, Humans, Pedigree, DNA/genetics, Probability, Models, Genetic
3.
Proc Natl Acad Sci U S A; 117(24): 13421-13427, 2020 Jun 16.
Article in English | MEDLINE | ID: mdl-32482858

ABSTRACT

Although the backlog of untested sexual assault kits in the United States is starting to be addressed, many municipalities are opting for selective testing of samples within a kit, where only the most probative samples are tested. We use data from the San Francisco Police Department Criminalistics Laboratory, which tests all samples but also collects information on the samples flagged by sexual assault forensic examiners as most probative, to build a standard machine learning model that predicts (based on covariates gleaned from sexual assault kit questionnaires) which samples are most probative. This model is embedded within an optimization framework that selects which samples to test from each kit to maximize the Combined DNA Index System (CODIS) yield (i.e., the number of kits that generate at least one DNA profile for the criminal DNA database) subject to a budget constraint. Our analysis predicts that, relative to a policy that tests only the samples deemed probative by the sexual assault forensic examiners, the proposed policy increases the CODIS yield by 45.4% without increasing the cost. Full testing of all samples has a slightly lower cost-effectiveness than the selective policy based on forensic examiners, but more than doubles the yield. In over half of the sexual assaults, a sample was not collected during the forensic medical exam from the body location deemed most probative by the machine learning model. Our results suggest that electronic forensic records coupled with machine learning and optimization models could enhance the effectiveness of criminal investigations of sexual assaults.
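The abstract does not detail the optimization framework; as a rough, hypothetical sketch of the underlying idea (greedily choose the test with the largest marginal increase in expected CODIS yield, subject to a budget), one might write something like the following, where all probabilities, costs, and names are illustrative assumptions:

```python
from math import prod

def select_samples(kits, budget, cost_per_test=1.0):
    """Greedy sketch: repeatedly test the sample with the largest marginal
    increase in expected CODIS yield (number of kits with >= 1 DNA profile).
    kits[i][j] is a predicted probability that sample j of kit i yields a
    usable profile (all inputs here are hypothetical)."""
    chosen = [[] for _ in kits]                      # selected sample indices per kit
    remaining = [list(range(len(k))) for k in kits]  # untested sample indices per kit
    spent = 0.0

    def kit_yield(i):
        # P(at least one already-selected sample in kit i yields a profile)
        return 1.0 - prod(1.0 - kits[i][j] for j in chosen[i])

    while spent + cost_per_test <= budget:
        best = None
        for i, rem in enumerate(remaining):
            for j in rem:
                gain = (1.0 - kit_yield(i)) * kits[i][j]
                if best is None or gain > best[0]:
                    best = (gain, i, j)
        if best is None or best[0] <= 0.0:
            break
        _, i, j = best
        chosen[i].append(j)
        remaining[i].remove(j)
        spent += cost_per_test
    return chosen
```

For example, with kits [[0.9, 0.5], [0.4]] and a budget of two tests, this sketch tests the strongest sample of each kit rather than both samples of the first kit, since the second sample of kit 0 adds only 0.1 × 0.5 to the expected yield.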


Subjects
Crime Victims, Forensic Sciences/economics, Law Enforcement/methods, Sex Offenses, Specimen Handling/economics, Adult, Cost-Benefit Analysis, Crime Victims/statistics & numerical data, DNA/analysis, Databases, Nucleic Acid, Female, Forensic Sciences/statistics & numerical data, Humans, Machine Learning, Male, San Francisco, Sex Offenses/statistics & numerical data, Specimen Handling/statistics & numerical data
5.
J Forensic Sci; 63(4): 1110-1121, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29505678

ABSTRACT

Motivated by the debate over how to deal with the huge backlog of untested sexual assault kits in the U.S.A., we construct and analyze a mathematical model that predicts the expected number of hits (i.e., a new DNA profile matches a DNA sample in the criminal database) as a function of both the proportion of the backlog that is tested and whether the victim-offender relationship is used to prioritize the kits that are tested. Refining the results in Ref. (Criminol Public Policy, 2016, 15, 555), we use data from Detroit, where government funding was used to process ≈15% of their backlog, to predict that prioritizing stranger kits over nonstranger kits leads to only a small improvement in performance (a 0.034 increase in the normalized area under the hits vs. proportion-of-backlog-tested curve). Two rough but conservative cost-benefit analyses-one for testing the entire backlog and a marginal one for testing kits from nonstranger assaults-suggest that testing all sexual assault kits in the backlog is quite cost-effective: for example, spending ≈$1641 to test a kit averts sexual assaults costing ≈$133,484 on average.
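The closing cost-benefit claim can be checked with one line of arithmetic: at roughly $1641 per kit tested and roughly $133,484 in averted assault costs, each testing dollar averts on the order of $81 in assault costs.

```python
cost_per_kit = 1641       # approximate cost to test one kit (from the abstract)
averted = 133484          # approximate assault costs averted per kit tested
ratio = averted / cost_per_kit
print(round(ratio, 1))    # → 81.3
```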


Subjects
Crime Victims, DNA Fingerprinting, Databases, Nucleic Acid, Models, Theoretical, Sex Offenses, Cost-Benefit Analysis, Criminal Law, DNA/analysis, Humans, Police, Resource Allocation, Specimen Handling, United States
6.
J Forensic Sci; 62(5): 1188-1196, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28374484

ABSTRACT

Ballistic imaging systems can help solve crimes by comparing images of cartridge cases, which are recovered from a crime scene or test-fired from a gun, to a database of images obtained from past crime scenes. Many U.S. municipalities lack the resources to process all of their cartridge cases. Using data from Stockton, CA, we analyze two problems: how to allocate limited capacity to maximize the number of cartridge cases that generate at least one hit, and how to prioritize the cartridge cases that are processed to maximize the usefulness (i.e., obtained before the corresponding criminal case is closed) of hits. The number of hits can be significantly increased by prioritizing crime scene evidence over test-fires, and by ranking calibers by their hit probability and processing only the higher ranking calibers. We also estimate that last-come first-served increases the proportion of hits that are useful by only 0.05 relative to first-come first-served.
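The caliber-prioritization heuristic described above can be sketched in a few lines; the hit probabilities, caseload volumes, and function name below are purely illustrative, not data from the Stockton study:

```python
def calibers_to_process(hit_prob, volume, capacity):
    """Sketch of the prioritization idea: rank calibers by their hit
    probability and process them in that order until imaging capacity
    runs out. hit_prob and volume are hypothetical per-caliber inputs."""
    order = sorted(hit_prob, key=hit_prob.get, reverse=True)
    plan, used = [], 0
    for cal in order:
        if used + volume[cal] <= capacity:
            plan.append(cal)      # this caliber fits in remaining capacity
            used += volume[cal]
    return plan
```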

7.
PLoS One; 12(1): e0163956, 2017.
Article in English | MEDLINE | ID: mdl-28068341

ABSTRACT

Fecal microbiota transplantation is being assessed as a treatment for chronic microbiota-related diseases such as ulcerative colitis. Results from an initial randomized trial suggest that remission rates depend on unobservable features of the fecal donors and observable features of the patients. We use mathematical modeling to assess the efficacy of pooling stools from different donors during multiple rounds of treatment. In the model, there are two types of patients and two types of donors, where the patient type is observable and the donor type (effective or not effective) is not observable. In the model, clinical outcomes from earlier rounds of treatment are used to estimate the current likelihood that each donor is effective, and then each patient in each round is treated by a pool of donors that are currently deemed to be the most effective. Relative to the no-pooling case, pools of size two or three significantly increase the proportion of patients in remission during the first several rounds of treatment. Although based on data from a single randomized trial, our modeling suggests that pooling of stools - via daily cycling of encapsulated stool from several different donors - may be beneficial in fecal microbiota transplantation for chronic microbiota-related diseases.
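A minimal sketch of the belief-updating step described above (estimating the likelihood that a donor is the effective type from earlier remission outcomes), assuming hypothetical remission rates p_eff and p_ineff for the two donor types:

```python
def donor_posterior(prior, outcomes, p_eff=0.4, p_ineff=0.1):
    """Posterior probability that a donor is the 'effective' type, given
    remission outcomes (True/False) of patients treated with that donor's
    stool. p_eff and p_ineff are assumed remission rates under each donor
    type, not values from the trial."""
    like_eff = like_ineff = 1.0
    for remission in outcomes:
        like_eff *= p_eff if remission else 1.0 - p_eff
        like_ineff *= p_ineff if remission else 1.0 - p_ineff
    num = prior * like_eff
    return num / (num + (1.0 - prior) * like_ineff)
```

Donors with the highest current posterior would then be pooled for the next round of treatment.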


Subjects
Fecal Microbiota Transplantation, Gastroenteritis/microbiology, Gastroenteritis/therapy, Microbiota, Models, Theoretical, Algorithms, Chronic Disease, Fecal Microbiota Transplantation/methods, Humans, Treatment Outcome
8.
PLoS One; 11(12): e0168432, 2016.
Article in English | MEDLINE | ID: mdl-27997571

ABSTRACT

The U.S. is the main country in the world that delivers its food assistance primarily via transoceanic shipments of commodity-based in-kind food. This approach is costlier and less timely than cash-based assistance, which includes cash transfers, food vouchers, and local and regional procurement, where food is bought in or near the recipient country. These drawbacks are exacerbated by a requirement that half of the U.S.'s transoceanic food shipments be sent on U.S.-flag vessels. We estimate the effect of these U.S. food assistance distribution policies on child mortality in northern Kenya by formulating and optimizing a supply chain model. In our model, monthly orders of transoceanic shipments and cash-based interventions are chosen to minimize child mortality subject to an annual budget constraint and to policy constraints on the allowable proportions of cash-based interventions and non-US-flag shipments. By varying the restrictiveness of these policy constraints, we assess the impact of possible changes in U.S. food aid policies on child mortality. The model includes an existing regression model that uses household survey data and geospatial data to forecast the mean mid-upper-arm circumference Z scores among children in a community, and allows food assistance to increase Z scores, and Z scores to influence mortality rates. We find that cash-based interventions are a much more powerful policy lever than the U.S.-flag vessel requirement: switching to cash-based interventions reduces child mortality from 4.4% to 3.7% (a 16.2% relative reduction) in our model, whereas eliminating the U.S.-flag vessel restriction without increasing the use of cash-based interventions generates a relative reduction in child mortality of only 1.1%. The great majority of the gains achieved by cash-based interventions are due to their reduced cost, not their reduced delivery lead times; i.e., the reduction of shipping expenses allows for more food to be delivered, which reduces child mortality.


Subjects
Child Mortality, Food Supply, Models, Biological, Ships, Child, Child, Preschool, Humans, Infant, Kenya/epidemiology, Male, United States
9.
Malar J; 14: 479, 2015 Dec 01.
Article in English | MEDLINE | ID: mdl-26619943

ABSTRACT

BACKGROUND: Motivated by the observation that children suffering from undernutrition are more likely to contract disease and more likely to die if they do, mathematical modelling is used to explore the ramifications of targeting preventive disease measures to undernourished children. METHODS: A malaria model is constructed with superinfection and heterogeneous susceptibility, where a portion of this susceptibility is due to undernutrition (as measured by weight-for-age z scores); so as to isolate the impact of supplementary food on malaria from the influence of confounding factors, the portion of the total susceptibility that is due to undernutrition is estimated from a large randomized trial of supplementary feeding. Logistic regression is used to estimate mortality given malaria infection as a function of weight-for-age z scores. The clinical malaria morbidity and malaria mortality are analytically computed for a variety of policies involving supplementary food and insecticide-treated bed nets. RESULTS: The portion of heterogeneity in susceptibility that is due to undernutrition is estimated to be 90.3%. Targeting insecticide-treated bed nets to undernourished children leads to fewer malaria deaths than the random distribution of bed nets in the hypoendemic and mesoendemic settings. When baseline bed net coverage for children is 20%, supplementary food given to underweight children is estimated to reduce malaria mortality by 7.2-22.9% as the entomological inoculation rate ranges from 500 to 1.0. In the hyperendemic setting, supplementary food has a bigger impact than bed nets, particularly when baseline bed net coverage is high. CONCLUSIONS: Although the results are speculative (e.g., they are based on parameter estimates that do not possess the traditional statistical significance level), the biological plausibility of the modelling assumptions and the high price-sensitivity of demand for bed nets suggest that free bed net distribution targeted to undernourished children in areas suffering from both undernutrition and malaria (e.g., sub-Saharan Africa) should be the subject of a randomized trial in a hypoendemic or mesoendemic setting.


Subjects
Disease Susceptibility, Malaria/epidemiology, Malaria/mortality, Nutritional Status, Adult, Africa South of the Sahara/epidemiology, Animals, Child, Preschool, Humans, Infant, Models, Statistical
10.
PLoS One; 10(12): e0144967, 2015.
Article in English | MEDLINE | ID: mdl-26714283

ABSTRACT

Court-mandated downsizing of the CA prison system has led to a redistribution of detainees from prisons to CA county jails, and subsequent jail overcrowding. Using data that is representative of the LA County jail system, we build a mathematical model that tracks the flow of individuals during arraignment, pretrial release or detention, case disposition, jail sentence, and possible recidivism during pretrial release, after a failure to appear in court, during non-felony probation and during felony supervision. We assess 64 joint pretrial release and split-sentencing (where low-level felon sentences are split between jail time and mandatory supervision) policies that are based on the type of charge (felony or non-felony) and the risk category as determined by the CA Static Risk Assessment tool, and compare their performance to that of the policy LA County used in early 2014, before split sentencing was in use. In our model, policies that offer split sentences to all low-level felons optimize the key tradeoff between public safety and jail congestion by, e.g., simultaneously reducing the rearrest rate by 7% and the mean jail population by 20% relative to the policy LA County used in 2014. The effectiveness of split sentencing is due to two facts: (i) convicted felony offenders comprised ≈ 45% of LA County's jail population in 2014, and (ii) compared to pretrial release, split sentencing exposes offenders to much less time under recidivism risk per saved jail day.


Subjects
Criminology/statistics & numerical data, Policies, Prisons/statistics & numerical data, Criminal Law, Humans, Los Angeles, Models, Statistical, Risk Assessment, Safety
11.
Microbiome; 3: 75, 2015 Dec 17.
Article in English | MEDLINE | ID: mdl-26675010

ABSTRACT

BACKGROUND: Fecal microbiota transplantation is an effective treatment for recurrent Clostridium difficile infection and is being investigated as a treatment for other microbiota-associated diseases. To facilitate these activities, an international public stool bank has been created, which screens donors and processes stools in a standardized manner. The goal of this research is to use mathematical modeling and analysis to optimize screening and donor management at the stool bank. RESULTS: Compared to the current policy of screening active donors every 60 days before releasing their quarantined stools for sale, costs can be reduced by 10.3 % by increasing the screening frequency to every 36 days. In addition, the stool production rate varies widely across donors, and using donor-specific screening, where higher producers are screened more frequently, also reduces costs, as does introducing an interim (i.e., between consecutive regular tests) stool test for just rotavirus and C. difficile. We also derive a donor release (i.e., into the system) policy that allows the supply to approximately match an exponentially increasing deterministic demand. CONCLUSIONS: More frequent screening, interim screening for rotavirus and C. difficile, and donor-specific screening, where higher stool producers are screened more frequently, are all cost-reducing measures. If screening costs decrease in the future (e.g., as a result of bringing screening in house), a bottleneck for implementing some of these recommendations may be the reluctance of donors to undergo serum screening more frequently than monthly.


Subjects
Biological Specimen Banks/standards, Donor Selection/methods, Fecal Microbiota Transplantation, Feces/microbiology, Clostridioides difficile/isolation & purification, Clostridium Infections/therapy, Disease Management, Donor Selection/economics, Fecal Microbiota Transplantation/standards, Feces/virology, Humans, Models, Statistical, Rotavirus/isolation & purification
12.
PLoS One; 9(6): e99632, 2014.
Article in English | MEDLINE | ID: mdl-24967745

ABSTRACT

Motivated by the lack of randomized controlled trials with an intervention-free control arm in the area of child undernutrition, we fit a trivariate model of weight-for-age z score (WAZ), height-for-age z score (HAZ) and diarrhea status to data from an observational study of supplementary feeding (100 kCal/day for children with WAZ [Formula: see text]) in 17 Guatemalan communities. Incorporating time lags, intention to treat (i.e., to give supplementary food), seasonality and age interactions, we estimate how the effect of supplementary food on WAZ, HAZ and diarrhea status varies with a child's age. We find that the effect of supplementary food on all 3 metrics decreases linearly with age from 6 to 20 mo and has little effect after 20 mo. We derive 2 food allocation policies that myopically (i.e., looking ahead 2 mo) minimize either the underweight or stunting severity - i.e., the sum of squared WAZ or HAZ scores for all children with WAZ or HAZ [Formula: see text]. A simulation study based on the statistical model predicts that the 2 derived policies reduce the underweight severity (averaged over all ages) by 13.6-14.1% and reduce the stunting severity at age 60 mo by 7.1-8.0% relative to the policy currently in use, where all policies have a budget that feeds [Formula: see text]% of children. While these findings need to be confirmed on additional data sets, it appears that in a low-dose (100 kCal/day) supplementary feeding setting in Guatemala, allocating food primarily to 6-12 mo infants can reduce the severity of underweight and stunting.
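As an illustration of the myopic allocation idea, here is a sketch that feeds the children whose predicted WAZ gain most reduces underweight severity; the boost values and the −2 cutoff are assumptions for illustration, not the paper's fitted trivariate model:

```python
def allocate_food(waz, boost, budget, cutoff=-2.0):
    """Myopic sketch: give supplementary food to the `budget` children whose
    feeding most reduces underweight severity, defined here as the sum of
    squared WAZ over children below `cutoff`. boost[i] is a hypothetical
    predicted WAZ gain for child i if fed."""
    def severity(z):
        return z * z if z < cutoff else 0.0

    # rank children by the severity reduction their feeding would achieve
    ranked = sorted(range(len(waz)),
                    key=lambda i: severity(waz[i]) - severity(waz[i] + boost[i]),
                    reverse=True)
    return ranked[:budget]
```

Note that the child with the lowest WAZ is not necessarily fed first: a severely underweight child who would remain far below the cutoff after feeding can yield a smaller severity reduction than a child lifted above it.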


Subjects
Food, Fortified, Infant Nutrition Disorders/diet therapy, Age Factors, Female, Guatemala, Humans, Infant, Infant Nutrition Disorders/prevention & control, Male, Models, Statistical
13.
PLoS One; 9(5): e94087, 2014.
Article in English | MEDLINE | ID: mdl-24787752

ABSTRACT

Motivated by India's nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident's biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India's program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India's biometric program. The mean delay is [Formula: see text] sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32-41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident.
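The likelihood-ratio decision rule can be sketched as follows; for simplicity the scores are treated as independent (the paper instead builds a joint probabilistic model of all 12 scores), and the density functions and threshold are hypothetical:

```python
def verify(scores, genuine_pdf, imposter_pdf, threshold=1e4):
    """Likelihood-ratio sketch: accept a resident as genuine when the
    likelihood of the acquired similarity scores under the genuine
    hypothesis, divided by the likelihood under the imposter hypothesis,
    exceeds a threshold. Independence across scores is an assumption
    made here for brevity."""
    lr = 1.0
    for s in scores:
        lr *= genuine_pdf(s) / imposter_pdf(s)
    return lr >= threshold
```

Raising the threshold lowers the false accept rate at the cost of a higher false reject rate; a two-stage policy would acquire more biometrics instead of deciding when the ratio lands near the threshold.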


Subjects
Biometric Identification, Biometry, Internet, Algorithms, Humans, India, Likelihood Functions, Policy Making
14.
J Forensic Sci; 59(1): 103-11, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24400829

ABSTRACT

Firearms identification imaging systems help solve crimes by comparing newly acquired images of cartridge casings or bullets to a database of images obtained from past crime scenes. We formulate an optimization problem that bases its matching decisions not only on the similarity between pairs of images, but also on the time and spatial location of each new acquisition and each database entry. The objective is to maximize the detection probability subject to a constraint on the false positive rate. We use data on all cartridge casings matches detected in Israel during 2006-2008 to estimate most of the model parameters. We estimate matching accuracy from two different studies and predict that the optimal use of extraneous information would increase the detection probability from 0.931 to 0.987 and from 0.707 to 0.844, respectively. These improvements are achieved by favoring pairs of images that are closer together in space and time.

15.
Manage Sci; 59(4): 782-795, 2013 Apr 01.
Article in English | MEDLINE | ID: mdl-23956465

ABSTRACT

Due to the health and economic costs of childhood obesity, coupled with studies suggesting the benefits of comprehensive (dietary, physical activity and behavioral counseling) intervention, the United States Preventive Services Task Force recently recommended childhood screening and intervention for obesity beginning at age six. Using a longitudinal data set consisting of the body mass index of 3164 children up to age 18 and another longitudinal data set containing the body mass index at ages 18 and 40 and the presence or absence of disease (hypertension and diabetes) at age 40 for 747 people, we formulate and numerically solve - separately for boys and girls - a dynamic programming problem for the optimal biennial (i.e., at ages 2, 4, …, 16) obesity screening thresholds. Unlike most screening problem formulations, we take a societal viewpoint, where the state of the system at each age is the population-wide probability density function of the body mass index. Compared to the biennial version of the task force's recommendation, the screening thresholds derived from the dynamic program achieve a relative reduction in disease prevalence of 3% at the same screening (and treatment) cost, or - due to the flatness of the disease vs. screening tradeoff curve - achieve the same disease prevalence at a 28% relative reduction in cost. Compared to the task force's policy, which uses the 95th percentile of body mass index (from cross-sectional growth charts tabulated by the Centers for Disease Control and Prevention) as the screening threshold for each age, the dynamic programming policy treats mostly 16-year-olds (including many who are not obese) and very few males under 14 years old. While our results suggest that adult hypertension and diabetes are minimized by focusing childhood obesity screening and treatment on older adolescents, the shortcomings in the available data and the narrowness of the medical outcomes considered prevent us from making a recommendation about childhood obesity screening policies.

16.
Proc Natl Acad Sci U S A; 110(12): 4545-50, 2013 Mar 19.
Article in English | MEDLINE | ID: mdl-23487755

ABSTRACT

Several aid groups have proposed strategies for allocating ready-to-use (therapeutic and supplementary) foods to children in developing countries. Analysis is needed to investigate whether there are better alternatives. We use a longitudinal dataset of 5,657 children from Bwamanda to construct a bivariate time-series model that tracks each child's height-for-age z score (HAZ) and weight-for-height z score (WHZ) throughout the first 5 y of life. Our optimization model chooses which individual children should receive ready-to-use therapeutic or supplementary food based on a child's sex, age, HAZ, and WHZ, to minimize the mean number of disability-adjusted life years (DALYs) per child during 6-60 mo of age [which includes childhood mortality calculated from a logistic regression and the lifelong effects of stunting (i.e., low HAZ)] subject to a budget constraint. Compared with the strategies proposed by the aid groups, which do not use HAZ information, the simple strategy arising from our analysis [which prioritizes children according to low values of a linear combination of HAZ, WHZ, and age and allocates the entire budget to therapeutic (i.e., 500 kcal/d) food for the prioritized children] reduces the number of DALYs by 9% (for the same budget) or alternatively incurs the same number of DALYs with a 61% reduction in cost. Whereas our qualitative conclusions appear to be robust, the quantitative results derived from our analysis should be treated with caution because of the lack of reliable data on the impact of supplementary food on HAZ and WHZ, the application of our model to a single cohort of children and the inclusion and exclusion errors related to imperfect food targeting.


Subjects
Developing Countries, Energy Intake, Food Supply, Malnutrition/prevention & control, Models, Biological, Adolescent, Age Factors, Child, Child, Preschool, Female, Humans, Infant, Male, Malnutrition/economics, Malnutrition/epidemiology, Sex Factors
17.
Obesity (Silver Spring); 20(7): 1437-43, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22240724

ABSTRACT

To address growing concerns over childhood obesity, the United States Preventive Services Task Force (USPSTF) recently recommended that children undergo obesity screening beginning at age 6. An Expert Committee recommends starting at age 2. Analysis is needed to assess these recommendations and investigate whether there are better alternatives. We model the age- and sex-specific population-wide distribution of BMI through age 18 using National Longitudinal Survey of Youth (NLSY) data. The impact of treatment on BMI is estimated using the targeted systematic review performed to aid the USPSTF. The prevalence of hypertension and diabetes at age 40 are estimated from the Panel Study of Income Dynamics (PSID). We fix the screening interval at 2 years, and derive the age- and sex-dependent BMI thresholds that minimize adult disease prevalence, subject to referring a specified percentage of children for treatment yearly. We compare this optimal biennial policy to biennial versions of the USPSTF and Expert Committee recommendations. Compared to the USPSTF recommendation, the optimal policy reduces adult disease prevalence by 3% in relative terms (the absolute reductions are <1%) at the same treatment referral rate, or achieves the same disease prevalence at a 28% reduction in treatment referral rate. Compared to the Expert Committee recommendation, the reductions change to 6% and 40%, respectively. The optimal policy treats mostly 16-year-olds and few children under age 14. Our results suggest that adult disease is minimized by focusing childhood obesity screening and treatment on older adolescents.


Subjects
Health Policy, Mass Screening/organization & administration, Obesity/diagnosis, Obesity/prevention & control, Primary Health Care/organization & administration, Adolescent, Age of Onset, Behavior Therapy, Body Mass Index, Child, Child, Preschool, Evidence-Based Medicine, Female, Humans, Longitudinal Studies, Male, Obesity/epidemiology, Obesity/therapy, Prevalence, United States/epidemiology, Young Adult
18.
Transfusion; 52(1): 108-17, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21756261

ABSTRACT

BACKGROUND: Recent studies show that transfusing older blood may lead to increased mortality. This raises the issue of whether transfusing fresher blood can be achieved without jeopardizing blood availability. STUDY DESIGN AND METHODS: We propose a simple family of policies that is defined by a single threshold: rather than transfusing the oldest available blood that is younger than 42 days, we transfuse the oldest blood that is younger than the threshold, and if there is no blood younger than the threshold then we transfuse the youngest blood that is older than the threshold. To assess this policy, we build a simulation model using data from Stanford University Medical Center. We focus on the tradeoff between the mean age of transfused blood and the fraction of transfused blood that is imported. RESULTS: For hospitals in which the local supply is greater than demand, our policy with a threshold of 14 days leads to a decrease of 10 to 20 days in the mean age of transfused blood while increasing the fraction of imported blood to less than 0.005 (i.e., 0.5%). If the health benefits from transfusing fresher blood can be confirmed by randomized clinical trials, then conservative assumptions suggest that this policy could reduce the annual number of transfused patients who die within 1 year by 20,000. CONCLUSION: The proposed allocation policy with a threshold of 14 days could allow many US hospitals to significantly reduce the age of transfused blood, thereby possibly reducing morbidity and mortality, while having a negligible impact on supply chain operations.
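The single-threshold issuing policy is simple enough to state directly in code; this sketch (with hypothetical names) returns the age of the unit to transfuse from the ages currently in stock:

```python
def issue_unit(ages_in_stock, threshold=14, max_age=42):
    """Threshold issuing sketch from the abstract: transfuse the oldest unit
    younger than the threshold; if none exists, transfuse the youngest unit
    at or above the threshold. Units at or past max_age days are outdated
    and never issued. Returns None if nothing usable is in stock."""
    usable = [a for a in ages_in_stock if a < max_age]
    if not usable:
        return None
    young = [a for a in usable if a < threshold]
    if young:
        return max(young)   # oldest blood younger than the threshold
    return min(usable)      # youngest blood at/above the threshold
```

Setting threshold=42 recovers the standard oldest-first (first-in, first-out) policy.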


Subjects
Blood Transfusion/methods , Blood Transfusion/economics , Humans , Time Factors
19.
Risk Anal ; 30(9): 1315-27, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20840487

ABSTRACT

We superimpose a radiation fallout model onto a traffic flow model to assess the evacuation versus shelter-in-place decisions after the daytime ground-level detonation of a 10-kt improvised nuclear device in Washington, DC. In our model, ≈80k people are killed by the prompt effects of blast, burn, and radiation. Of the ≈360k survivors without access to a vehicle, 42.6k would die if they immediately self-evacuated on foot. Sheltering above ground would save several thousand of these lives, and sheltering in a basement (or near the middle of a large building) would save even more of them. Among survivors of the prompt effects with access to a vehicle, the number of deaths depends on the fraction of people who shelter in a basement rather than self-evacuate in their vehicle: 23.1k people die if 90% shelter in a basement and 54.6k die if 10% shelter. Sheltering above ground saves approximately half as many lives as sheltering in a basement. The details related to delayed (i.e., organized) evacuation, search and rescue, decontamination, and situational awareness (via, e.g., telecommunications) have very little impact on the number of casualties. Although antibiotics and transfusion support have the potential to save ≈10k lives (and the number of lives saved by medical care increases with the fraction of people who shelter in basements), the logistical challenge appears to be well beyond current response capabilities. Taken together, our results suggest that the government should initiate an aggressive outreach program to educate citizens and the private sector about the importance of sheltering in place in a basement for at least 12 hours after a terrorist nuclear detonation.
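The abstract reports deaths among vehicle-owning survivors at two compliance levels: 54.6k if 10% shelter in a basement and 23.1k if 90% do. As a rough illustration only, one can interpolate between these two reported endpoints; linearity between them is my assumption, not a claim of the paper, whose model is far more detailed.

```python
def interpolated_deaths_thousands(shelter_fraction):
    """Linearly interpolate deaths (in thousands) among vehicle-owning
    survivors between the two scenarios reported in the abstract:
    54.6k deaths at 10% basement sheltering, 23.1k at 90%.
    Linear behavior between the endpoints is an illustrative assumption.
    """
    f_lo, d_lo = 0.10, 54.6
    f_hi, d_hi = 0.90, 23.1
    slope = (d_hi - d_lo) / (f_hi - f_lo)
    return d_lo + (shelter_fraction - f_lo) * slope
```

Under this assumption, 50% compliance would correspond to roughly 38.9k deaths; the actual model's response need not be linear.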


Subjects
Terrorism/prevention & control , Humans , Models, Theoretical , Radiation Injuries/prevention & control , Radioactive Fallout/prevention & control , Risk Management , Security Measures , Terrorism/legislation & jurisprudence , United States
20.
Risk Anal ; 29(7): 949-62, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19392673

ABSTRACT

We construct a mathematical model of aerosol (i.e., droplet-nuclei) transmission of influenza within a household containing one infected person, and embed it into an epidemic households model in which infected individuals occasionally infect someone from another household; in a companion paper, we argue that the contribution from contact transmission is trivial for influenza and the contribution from droplet transmission is likely to be small. Our model predicts that the key infection control measure is the use of N95 respirators, and that the combination of respirators, humidifiers, and ventilation reduces the threshold parameter (which dictates whether or not an epidemic breaks out) by approximately 20% if 70% of households comply, and by approximately 40% if 70% of households and workplaces comply (an approximately 28% reduction would have been required to control the 1918 pandemic). However, only approximately 30% of the benefits in the household are achieved if these interventions are used only after the infected person develops symptoms. It is also important for people to sleep in separate bedrooms throughout the pandemic, space permitting. Surgical masks with a device (e.g., nylon hosiery) to reduce face-seal leakage are a reasonable alternative to N95 respirators if the latter are in short supply.
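The role of the threshold parameter can be made concrete with a standard back-of-the-envelope calculation: an epidemic dies out when the parameter falls below 1, so a baseline value R needs a fractional reduction of at least 1 − 1/R. The abstract's figure that a ≈28% reduction would have controlled the 1918 pandemic thus implies a baseline parameter near 1.39; that specific value is inferred here, not stated in the abstract.

```python
def required_reduction(r0):
    """Fractional reduction in the threshold parameter needed to push it
    below 1 (epidemic dies out), i.e. solve r0 * (1 - r) = 1 for r."""
    return 1.0 - 1.0 / r0

def controls_epidemic(r0, reduction):
    """True if the given fractional reduction brings the parameter below 1."""
    return r0 * (1.0 - reduction) < 1.0
```

With a baseline of ≈1.39, the reported ≈40% reduction (households plus workplaces complying) would control such a pandemic, while the ≈20% reduction (households alone) would not — consistent with the 28% requirement quoted in the abstract.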


Subjects
Disease Outbreaks/prevention & control , Influenza, Human/prevention & control , Disinfection/methods , Family Characteristics , Humans , Hygiene , Influenza A Virus, H5N1 Subtype/pathogenicity , Influenza, Human/transmission , Masks/virology , Models, Theoretical