Results 1 - 20 of 30
1.
Cleft Palate Craniofac J ; : 10556656241233220, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38347701

ABSTRACT

OBJECTIVE: To determine whether facial growth at five years differs between children with a left-sided versus a right-sided cleft lip and palate. DESIGN: Retrospective cohort study. SETTING: Seven UK regional cleft centres. PATIENTS: Patients born between 2000 and 2014 with a complete unilateral cleft lip and palate (UCLP). MAIN OUTCOME MEASURE: 5-Year-Old's Index scores. RESULTS: 378 children were included: 256 (68%) had a left-sided UCLP and 122 (32%) had a right-sided UCLP. 5-Year-Old's Index scores ranged from 1 (good) to 5 (poor). A higher proportion of patients achieved good scores (1 and 2) with left-sided UCLP (43%) than with right-sided UCLP (37%), but the evidence for a difference was weak (adjusted summary odds ratio 1.27, 95% CI 0.87 to 1.87; P = .22). CONCLUSIONS: Whilst maxillary growth may differ between left-sided and right-sided UCLP, definitive analysis requires older growth indices and arch forms.

2.
Gerontology ; 69(6): 783-798, 2023.
Article in English | MEDLINE | ID: mdl-36470216

ABSTRACT

INTRODUCTION: Falls have major implications for quality of life, independence, and the cost of health services. Strength and balance training has been found to be effective in reducing the rate/risk of falls, as long as there is adequate fidelity to the evidence-based programme. The aims of this study were to (1) assess the feasibility of using the "Motivate Me" and "My Activity Programme" interventions to support falls rehabilitation when delivered in practice and (2) assess the study design and trial procedures for the evaluation of the intervention. METHODS: A two-arm pragmatic feasibility randomized controlled trial was conducted with five health service providers in the UK. Patients aged 50+ years eligible for a falls rehabilitation exercise programme from community services were recruited and received either (1) the standard service with a smartphone for outcome measurement only or (2) the standard service plus the "Motivate Me" and "My Activity Programme" apps. The primary outcome was the feasibility of the intervention, study design, and procedures (including recruitment rate, adherence, and dropout). Outcome measures included balance, function, falls, strength, fear of falling, health-related quality of life, resource use, and adherence, measured at baseline and at three and six months post-randomization. Blinded assessors collected the outcome measures. RESULTS: Twenty-four patients were randomized to the control group and 26 to the intervention group, with a mean age of 77.6 (range 62-92) years. We recruited 37.5% of eligible participants across the five clinical sites. 77% of the intervention group completed their full exercise programme (including the use of the app). Response rates at 6 months were 77-80% across outcome measures, but this was affected by the COVID-19 pandemic. There was a mean 2.6 ± 1.9 point difference between groups in the change in Berg balance score from baseline to 3 months and a mean 4.4 ± 2.7 point difference from baseline to 6 months in favour of the intervention group. There were fewer falls (1.8 ± 2.8 vs. 9.1 ± 32.6) and fewer injurious falls (0.1 ± 0.5 vs. 0.4 ± 0.6) in the intervention group, and higher adherence scores at three (17.7 ± 6.8 vs. 13.1 ± 6.5) and six months (15.2 ± 7.8 vs. 14.9 ± 6.1). There were no related adverse events. Health professionals and patients had few technical issues with the apps. CONCLUSIONS: The motivational apps and trial procedures were feasible for health professionals and patients. There are positive indications from the outcome measures in the feasibility trial, and key criteria for progression to a full trial were met.


Subject(s)
COVID-19 , Independent Living , Humans , Aged , Aged, 80 and over , Smartphone , Quality of Life , Feasibility Studies , Pandemics , Fear , Exercise Therapy/methods , Health Services , Cost-Benefit Analysis
3.
Sensors (Basel) ; 23(20)2023 Oct 23.
Article in English | MEDLINE | ID: mdl-37896741

ABSTRACT

GPS-based maneuvering target localization and tracking is a crucial aspect of autonomous driving and is widely used in navigation, transportation, autonomous vehicles, and other fields. The classical tracking approach employs a Kalman filter with precise system parameters to estimate the state. However, it is difficult to model their uncertainty because of the complex motion of maneuvering targets and the unknown sensor characteristics. Furthermore, GPS data often involve unknown colored noise, making it challenging to obtain accurate system parameters, which can degrade the performance of the classical methods. To address these issues, we present a state estimation method based on the Kalman filter that does not require predefined parameters but instead uses attention learning. We use a transformer encoder with a long short-term memory (LSTM) network to extract dynamic characteristics, and estimate the system model parameters online using the expectation-maximization (EM) algorithm, based on the output of the attention learning module. Finally, the Kalman filter computes the dynamic state estimates using the parameters of the learned system dynamics and measurement characteristics. Based on GPS simulation data and the Geolife Beijing vehicle GPS trajectory dataset, the experimental results demonstrated that our method outperformed classical and pure model-free network estimation approaches in estimation accuracy, providing an effective solution for practical maneuvering-target tracking applications.
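
For context, here is a minimal sketch of the classical Kalman-filter baseline the abstract contrasts against, assuming a simple 2-D constant-velocity motion model with hand-set noise covariances (the very quantities the paper instead learns online via attention and EM); all variable names and values are illustrative, not taken from the paper.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=0.1, r=5.0):
    """Constant-velocity Kalman filter over 2-D GPS positions zs (N x 2)."""
    F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0],       # only position is observed
                  [0, 1, 0, 0]])
    Q = q * np.eye(4)                 # process noise (hand-set here, learned online in the paper)
    R = r * np.eye(2)                 # measurement noise (likewise assumed known here)
    x = np.array([zs[0, 0], zs[0, 1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```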

4.
Entropy (Basel) ; 25(2)2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36832613

ABSTRACT

The environment and development are major issues of general concern. After suffering the harms of environmental pollution, people began to pay attention to environmental protection and to research on pollutant prediction. Many air pollutant prediction studies have tried to predict pollutants by revealing their evolution patterns, emphasizing the fitting analysis of time series but ignoring the spatial transmission effect of adjacent areas, which leads to low prediction accuracy. To solve this problem, we propose a time series prediction network with self-optimization ability, based on a spatio-temporal graph neural network (BGGRU), to mine the changing pattern of the time series and the spatial propagation effect. The proposed network includes spatial and temporal modules. The spatial module uses a graph sampling and aggregation network (GraphSAGE) to extract the spatial information of the data. The temporal module uses a Bayesian graph gated recurrent unit (BGraphGRU), which applies a graph network to the gated recurrent unit (GRU) to fit the temporal information of the data. In addition, this study used Bayesian optimization to address the inaccuracy caused by inappropriate hyperparameters. The high accuracy of the proposed method was verified on actual PM2.5 data from Beijing, China, providing an effective method for predicting PM2.5 concentration.
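
A minimal sketch of the mean-aggregation step at the core of GraphSAGE, the kind of operation the spatial module uses to pool information from neighbouring monitoring stations; the adjacency structure, feature matrix, and weight shapes below are toy assumptions, not the paper's configuration.

```python
import numpy as np

def graphsage_mean_layer(H, adj, W_self, W_neigh):
    """One GraphSAGE layer with mean aggregation.

    H       : (n_nodes, d_in) node features (e.g., per-station pollutant readings)
    adj     : (n_nodes, n_nodes) binary adjacency matrix
    W_self  : (d_in, d_out) weights applied to each node's own features
    W_neigh : (d_in, d_out) weights applied to the aggregated neighbour features
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero for isolated nodes
    H_neigh = (adj @ H) / deg                         # mean of neighbour features
    out = H @ W_self + H_neigh @ W_neigh
    return np.maximum(out, 0.0)                       # ReLU

# toy usage: 4 stations, 3 input features, 2 output features
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(graphsage_mean_layer(H, adj, rng.normal(size=(3, 2)), rng.normal(size=(3, 2))))
```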

5.
Entropy (Basel) ; 24(3)2022 Feb 25.
Article in English | MEDLINE | ID: mdl-35327846

ABSTRACT

Compared with mechanism-based modeling methods, data-driven modeling based on big data has become a popular research field in recent years because of its applicability. However, more data is not always better when building a forecasting model in practical settings: because of the noise, conflict, redundancy, and inconsistency of big time-series data, forecasting accuracy may actually decrease. This paper proposes a deep network that selects and understands data to improve performance. Firstly, a data self-screening layer (DSSL) with a maximal information distance coefficient (MIDC) is designed to filter input data with high correlation and low redundancy; then, a variational Bayesian gated recurrent unit (VBGRU) is used to improve the anti-noise ability and robustness of the model. A verification experiment on 24 h PM2.5 concentration forecasting using Beijing's air quality and meteorological data shows that the proposed model is more accurate than competing models.
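
The abstract does not define the maximal information distance coefficient, so as an illustrative stand-in here is the same screening idea implemented with mutual information from scikit-learn: rank candidate input series by dependence on the forecasting target and skip near-duplicate columns. The threshold values, function names, and synthetic data are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def screen_inputs(X, y, top_k=5, redundancy_corr=0.95):
    """Keep the top_k inputs most informative about y, skipping highly correlated duplicates."""
    mi = mutual_info_regression(X, y, random_state=0)
    order = np.argsort(mi)[::-1]
    kept = []
    for j in order:
        if len(kept) == top_k:
            break
        # redundancy check: skip a column that is almost collinear with an already-kept one
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < redundancy_corr for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = 2 * X[:, 0] + np.sin(X[:, 3]) + 0.1 * rng.normal(size=500)
print(screen_inputs(X, y))   # expected to favour columns 0 and 3
```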

6.
Br J Clin Pharmacol ; 87(10): 4001-4012, 2021 10.
Article in English | MEDLINE | ID: mdl-33739542

ABSTRACT

AIMS: This study aimed to investigate the prescribing trajectory, geographical variation and population factors, including socioeconomic status (SES), related to the prescribing of gabapentinoids in primary care in England. METHODS: This ecological study used practice-level dispensing data and statistics from the UK National Health Service Digital and the Office for National Statistics from 2013 to 2019. The prescribing of gabapentinoids (in defined daily doses [DDDs]/1000 people) was measured annually and quarterly. General practices were categorised according to quarterly prescribing in a group-based trajectory model. One-year prescribing in 2018/19 was associated with practice-level covariates in a mixed-effects multilevel regression, adjusted for the cluster effects of Clinical Commissioning Groups (CCGs), and mapped geographically. RESULTS: The annual national prescription rate increased by 70%, from 2800 to 4773 DDDs/1000 people, over the period 2013/14 to 2018/19. General practices were stratified into six trajectory groups. Practices with the highest level and the greatest increase in prescribing (n = 789; 9.8%) were mainly located in the north of England and along the east and south coastline. Socioeconomic status, demographic characteristics and relevant disease conditions were significantly associated with prescribing. For every decrease in the Index of Multiple Deprivation decile (becoming less affluent), the prescribing of gabapentinoids increased significantly by 203 (95% CI: 183-222) DDDs/1000 registrants. CONCLUSIONS: Gabapentinoid prescribing trajectories varied across geographical regions and were associated with socioeconomic status, CCG locality (geography) and other population characteristics. These factors should be considered in future studies investigating the determinants of gabapentinoid prescribing and the risk of harms associated with gabapentinoids.


Subject(s)
General Practice , State Medicine , Drug Utilization , Humans , Practice Patterns, Physicians' , Primary Health Care
7.
Cochrane Database Syst Rev ; 3: CD014545, 2021 03 15.
Article in English | MEDLINE | ID: mdl-33720395

ABSTRACT

BACKGROUND: The detection and diagnosis of caries at the earliest opportunity is fundamental to the preservation of tooth tissue and maintenance of oral health. Radiographs have traditionally been used to supplement the conventional visual-tactile clinical examination. Accurate, timely detection and diagnosis of early signs of disease could afford patients the opportunity of less invasive treatment with less destruction of tooth tissue, reduce the need for treatment with aerosol-generating procedures, and potentially result in a reduced cost of care to the patient and to healthcare services. OBJECTIVES: To determine the diagnostic accuracy of different dental imaging methods to inform the detection and diagnosis of non-cavitated enamel-only coronal dental caries. SEARCH METHODS: Cochrane Oral Health's Information Specialist undertook a search of the following databases: MEDLINE Ovid (1946 to 31 December 2018); Embase Ovid (1980 to 31 December 2018); US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov, to 31 December 2018); and the World Health Organization International Clinical Trials Registry Platform (to 31 December 2018). We studied reference lists as well as published systematic review articles. SELECTION CRITERIA: We included diagnostic accuracy study designs that compared a dental imaging method with a reference standard (histology, excavation, enhanced visual examination), studies that evaluated the diagnostic accuracy of single index tests, and studies that directly compared two or more index tests. Studies reporting at either the patient or the tooth surface level were included. In vitro and in vivo studies were eligible for inclusion. Studies that explicitly recruited participants with more advanced lesions that were obviously into dentine or frankly cavitated were excluded. We also excluded studies that artificially created carious lesions and those that used an index test during the excavation of dental caries to ascertain the optimum depth of excavation. DATA COLLECTION AND ANALYSIS: Two review authors extracted data independently and in duplicate using a standardised data extraction form and quality assessment based on QUADAS-2 specific to the clinical context. Estimates of diagnostic accuracy were determined using the bivariate hierarchical method to produce summary points of sensitivity and specificity with 95% confidence regions. Comparative accuracy of different radiograph methods was assessed based on indirect and direct comparisons between methods. Potential sources of heterogeneity were pre-specified and explored visually and more formally through meta-regression. MAIN RESULTS: We included 104 datasets from 77 studies reporting a total of 15,518 tooth sites or surfaces. The most frequently reported imaging methods were analogue radiographs (55 datasets from 51 studies) and digital radiographs (42 datasets from 40 studies), followed by cone beam computed tomography (CBCT) (7 datasets from 7 studies). Only 17 studies were of an in vivo study design, carried out in a clinical setting. No studies were considered to be at low risk of bias across all four domains, but 16 studies were judged to have low concern for applicability across all domains. The patient selection domain had the largest number of studies judged to be at high risk of bias (43 studies); the index test, reference standard, and flow and timing domains were judged to be at high risk of bias in 30, 12, and 7 studies respectively.
Studies were synthesised using a hierarchical bivariate method for meta-analysis. There was substantial variability in the results of the individual studies, with sensitivities that ranged from 0 to 0.96 and specificities from 0 to 1.00. For all imaging methods, the estimated summary point for sensitivity and specificity was 0.47 (95% confidence interval (CI) 0.40 to 0.53) and 0.88 (95% CI 0.84 to 0.92), respectively. In a cohort of 1000 tooth surfaces with a prevalence of enamel caries of 63%, this would result in 337 tooth surfaces being classified as disease free when enamel caries was truly present (false negatives), and 43 tooth surfaces being classified as diseased in the absence of enamel caries (false positives). Meta-regression indicated that measures of accuracy differed according to the imaging method (Chi2(4) = 32.44, P < 0.001), with the highest sensitivity observed for CBCT and the highest specificity observed for analogue radiographs. None of the specified potential sources of heterogeneity were able to explain the variability in results. No studies included restored teeth in their sample or reported the inclusion of sealants. We rated the certainty of the evidence as low for sensitivity and specificity and downgraded two levels in total for risk of bias due to limitations in the design and conduct of the included studies, indirectness arising from the in vitro studies, and the observed inconsistency of the results. AUTHORS' CONCLUSIONS: The design and conduct of studies to determine the diagnostic accuracy of methods to detect and diagnose caries in situ are particularly challenging. Low-certainty evidence suggests that imaging for the detection or diagnosis of early caries may have poor sensitivity but acceptable specificity, resulting in a relatively high number of false-negative results with the potential for early disease to progress. If left untreated, the opportunity to provide professional or self-care practices to arrest or reverse early caries lesions will be missed. The specificity of lesion detection is, however, relatively high, and one could argue that initiation of non-invasive management (such as the use of topical fluoride) is probably of low risk. CBCT showed superior sensitivity to analogue or digital radiographs but has very limited applicability to the general dental practitioner. Moreover, given the high radiation dose and the potential for caries-like artefacts from existing restorations, its use cannot be justified in routine caries detection. Nonetheless, if early incidental carious lesions are detected in CBCT scans taken for other purposes, these should be reported. CBCT has the potential to be used as a reference standard in diagnostic studies of this type. Despite the robust methodology applied in this comprehensive review, the results should be interpreted with some caution due to shortcomings in the design and execution of many of the included studies. Future research should evaluate the comparative accuracy of different methods, be undertaken in a clinical setting, and focus on minimising bias arising from the use of imperfect reference standards in clinical studies.
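
As a check on how the summary estimates translate into counts, a short arithmetic sketch for the hypothetical cohort of 1000 tooth surfaces with 63% prevalence; it uses the rounded summary sensitivity and specificity, so the counts differ slightly from the review's own figures (337 and 43), which are derived from unrounded estimates.

```python
prevalence, n = 0.63, 1000
sensitivity, specificity = 0.47, 0.88

diseased = prevalence * n                        # 630 surfaces with enamel caries
healthy = n - diseased                           # 370 surfaces without
false_negatives = diseased * (1 - sensitivity)   # ~334 missed lesions (review reports 337)
false_positives = healthy * (1 - specificity)    # ~44 false alarms (review reports 43)
print(round(false_negatives), round(false_positives))
```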


Subject(s)
Cone-Beam Computed Tomography , Datasets as Topic , Dental Caries/diagnostic imaging , Radiography, Dental/methods , Adult , Bias , Child , Cone-Beam Computed Tomography/statistics & numerical data , Dentition, Permanent , False Negative Reactions , False Positive Reactions , Humans , Radiography, Dental/statistics & numerical data , Radiography, Dental, Digital/statistics & numerical data , Reference Standards , Sensitivity and Specificity , Tooth, Deciduous
8.
Sensors (Basel) ; 21(6)2021 Mar 16.
Article in English | MEDLINE | ID: mdl-33809743

ABSTRACT

State estimation is widely used in various automated systems, including IoT systems, unmanned systems, robots, etc. In traditional state estimation, measurement data are instantaneous and processed in real time. As modern systems develop, sensors can acquire and store more and more signals. How to use this measurement big data to improve the performance of state estimation has therefore become a hot research issue in this field. This paper reviews the development of state estimation and future development trends. First, we review the model-based state estimation methods, including the Kalman filter and its variants, such as the extended Kalman filter (EKF), unscented Kalman filter (UKF), cubature Kalman filter (CKF), etc. Particle filters and Gaussian mixture filters that can handle mixed Gaussian noise are also discussed. These methods place high demands on the model, yet accurate system models are not easy to obtain in practice. The emergence of robust filters, the interacting multiple model (IMM), and adaptive filters is also mentioned here. Secondly, the current research status of data-driven state estimation methods based on network learning is introduced. Finally, the main research results for hybrid filters obtained in recent years, which combine model-based and data-driven methods, are summarized and discussed. This paper builds on state estimation research results to provide a more detailed overview of model-driven, data-driven, and hybrid-driven approaches. The main algorithm of each method is provided so that beginners can have a clearer understanding. Additionally, future development trends for researchers in state estimation are discussed.
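
As an illustration of the model-based family the review covers, a minimal extended Kalman filter (EKF) sketch for a scalar nonlinear system; the state transition, measurement function, and noise levels are toy assumptions chosen only to show the linearise-predict-update pattern.

```python
import numpy as np

# toy nonlinear system: x_k = 0.5*x + 25*x/(1+x^2) + w,   z_k = x^2/20 + v
f = lambda x: 0.5 * x + 25 * x / (1 + x**2)
h = lambda x: x**2 / 20
F_jac = lambda x: 0.5 + 25 * (1 - x**2) / (1 + x**2) ** 2   # df/dx
H_jac = lambda x: x / 10                                    # dh/dx
Q, R = 1.0, 1.0                                             # assumed noise variances

def ekf(zs, x0=0.0, P0=1.0):
    x, P, est = x0, P0, []
    for z in zs:
        # predict through the nonlinear dynamics, linearising for the covariance
        x_pred = f(x)
        F = F_jac(x)
        P_pred = F * P * F + Q
        # update with the linearised measurement model
        H = H_jac(x_pred)
        S = H * P_pred * H + R
        K = P_pred * H / S
        x = x_pred + K * (z - h(x_pred))
        P = (1 - K * H) * P_pred
        est.append(x)
    return np.array(est)
```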

9.
Entropy (Basel) ; 23(2)2021 Feb 11.
Article in English | MEDLINE | ID: mdl-33670098

ABSTRACT

Trend prediction based on sensor data in a multi-sensor system is an important topic. As the number of sensors increases, we can measure and store more and more data. However, this increase in data has not effectively improved prediction performance. This paper focuses on this problem and presents a distributed predictor that can cope with unrelated data and sensor noise: First, we define the causality entropy to calculate each measurement's causality. Then, the series causality coefficient (SCC) is proposed to select the highly causal measurements as the input data. To overcome the traditional deep learning network's over-fitting to sensor noise, the Bayesian method is used to obtain the weight distribution characteristics of the sub-predictor network. A multi-layer perceptron (MLP) is constructed as the fusion layer to fuse the results from the different sub-predictors. Experiments on meteorological data from Beijing verify the effectiveness of the proposed method. The results show that the proposed predictor can effectively model the multi-sensor system's big measurement data to improve prediction performance.
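
A sketch of the fusion idea in the final step: each sub-predictor produces its own forecast and a small MLP learns to combine them. Plain linear regressors stand in for the paper's Bayesian sub-predictor networks, and all data, split sizes, and layer widths are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 1000
sensors = [rng.normal(size=(n, 3)) for _ in range(3)]           # three sensor groups
y = sensors[0][:, 0] + 0.5 * sensors[1][:, 1] + 0.1 * rng.normal(size=n)

# stand-in sub-predictors: one simple model per sensor group
subs = [LinearRegression().fit(X[:800], y[:800]) for X in sensors]
stacked_train = np.column_stack([m.predict(X[:800]) for m, X in zip(subs, sensors)])
stacked_test = np.column_stack([m.predict(X[800:]) for m, X in zip(subs, sensors)])

# MLP fusion layer combines the sub-predictor outputs into the final forecast
fusion = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
fusion.fit(stacked_train, y[:800])
print("fusion MSE:", np.mean((fusion.predict(stacked_test) - y[800:]) ** 2))
```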

10.
Sensors (Basel) ; 20(5)2020 Feb 29.
Article in English | MEDLINE | ID: mdl-32121411

ABSTRACT

Smart agricultural sensing has recently enabled great advantages in practical applications, making it one of the most important and valuable systems. For outdoor plantation farms, the prediction of climate data, such as temperature, wind speed, and humidity, enables the planning and control of agricultural production to improve the yield and quality of crops. However, it is not easy to accurately predict climate trends because the sensing data are complex, nonlinear, and contain multiple components. This study proposes a hybrid deep learning predictor in which an empirical mode decomposition (EMD) method is used to decompose the climate data into fixed component groups with different frequency characteristics; a gated recurrent unit (GRU) network is then trained for each group as a sub-predictor, and finally the sub-predictor outputs are summed to obtain the prediction result. Experiments based on climate data from an agricultural Internet of Things (IoT) system validate the proposed model. The prediction results show that the proposed predictor can provide more accurate predictions of temperature, wind speed, and humidity, meeting the needs of precision agricultural production.
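
A sketch of the decompose-predict-recombine pipeline described here, assuming the PyEMD package (published on PyPI as EMD-signal) for the decomposition; a one-step persistence forecast stands in for the per-component GRU sub-predictors, which are beyond the scope of a short example, and the synthetic series is illustrative.

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
temperature = 20 + 5 * np.sin(2 * np.pi * t) + 0.5 * rng.normal(size=t.size)  # toy climate series

# 1) decompose into intrinsic mode functions (IMFs) with different frequency content
imfs = EMD().emd(temperature)

# 2) forecast each component separately (persistence stands in for a trained GRU sub-predictor)
component_forecasts = [imf[-1] for imf in imfs]

# 3) recombine: the final prediction is the sum of the component forecasts
prediction = sum(component_forecasts)
print(f"{len(imfs)} components, one-step-ahead forecast: {prediction:.2f}")
```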


Subject(s)
Agriculture , Deep Learning , Crops, Agricultural , Temperature
11.
Sensors (Basel) ; 18(9)2018 Sep 12.
Article in English | MEDLINE | ID: mdl-30213109

ABSTRACT

In this paper, a novel semi-supervised segmentation framework based on a spot-divergence supervoxelization of multi-sensor fusion data is proposed for autonomous forest machine (AFM) applications in complex environments. Given the multi-sensor measuring system, our framework addresses three successive steps: firstly, the relationship between the multi-sensor coordinates is jointly calibrated to form higher-dimensional fusion data. Then, spot-divergence supervoxels representing the size-change property are generated to produce feature vectors that cover comprehensive multi-sensor information at each time step. Finally, Gaussian density peak clustering is proposed to segment supervoxels into semantic objects in a semi-supervised way, which requires no manually preset parameters. It is demonstrated that the proposed framework achieves a good balance between supervoxel generation and semantic segmentation. Comparative experiments show good performance in segmenting various objects in terms of segmentation accuracy (F-score up to 95.6%) and operation time, which would improve the intelligent capability of AFMs.

12.
Sensors (Basel) ; 17(7)2017 Jul 20.
Article in English | MEDLINE | ID: mdl-28726729

ABSTRACT

Online denoising is motivated by real-time applications in industrial processes, where the data must be usable soon after they are collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed to process practical measurement data with colored noise, where the characteristics of the colored noise are captured in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on a second-order adaptive statistics model, and the other updates the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation is implemented via the Kalman filter in a recursive way, so the online purpose is attained. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. The results show that the proposed method not only handles signals with colored noise but also achieves a tradeoff between efficiency and accuracy.
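
A minimal sketch of the Yule-Walker step in the loop: estimating AR coefficients of a colored-noise process from sample autocovariances by solving the Toeplitz system. The AR order, the synthetic data, and the function name are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import toeplitz

def yule_walker(x, order=2):
    """Estimate AR(order) coefficients from sample autocovariances (Yule-Walker equations)."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = toeplitz(acov[:order])        # autocovariance matrix
    r = acov[1:order + 1]
    return np.linalg.solve(R, r)      # AR coefficients phi_1 .. phi_order

# toy colored noise: an AR(2) process with known coefficients
rng = np.random.default_rng(0)
phi_true = np.array([0.6, -0.3])
e = rng.normal(size=5000)
x = np.zeros(5000)
for k in range(2, 5000):
    x[k] = phi_true[0] * x[k - 1] + phi_true[1] * x[k - 2] + e[k]
print(yule_walker(x, order=2))        # should be close to [0.6, -0.3]
```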

13.
BMC Med Inform Decis Mak ; 16: 106, 2016 08 09.
Article in English | MEDLINE | ID: mdl-27506547

ABSTRACT

BACKGROUND: Well-designed clinical prediction models (CPMs) often out-perform clinicians at estimating probabilities of clinical outcomes, though their adoption by family physicians is variable. How family physicians interact with CPMs is poorly understood; a better understanding, framed within a context-sensitive theoretical framework, may therefore improve CPM development and implementation. The aim of this study was to investigate why family physicians do or do not use CPMs, interpreting these findings within a theoretical framework to provide recommendations for the development and implementation of future CPMs. METHODS: This was a mixed-methods study in North West England comprising an online survey and focus groups. RESULTS: One hundred and thirty-eight respondents completed the survey, which found that the main perceived advantages of using CPMs were that they guided appropriate treatment (weighted rank [r] = 299; maximum r = 414 throughout), justified treatment decisions (r = 217), and incorporated a large body of evidence (r = 156). The most commonly reported barriers to using CPMs were lack of time (r = 163), irrelevance to some patients (r = 161), and poor integration with electronic health records (r = 147). Eighteen clinicians participated in two focus groups (i.e. nine in each), which revealed 13 interdependent themes affecting CPM use under three overarching domains: clinician factors, CPM factors and contextual factors. Themes were interdependent, indicating the tensions family physicians experience in providing evidence-based care for individual patients. CONCLUSIONS: The survey and focus groups showed that CPMs were valued when they supported clinical decision making and were robust. Barriers to their use related to their being time-consuming, difficult to use and not always adding value. Therefore, to be successful, CPMs should offer a relative advantage over current working, be easy to implement, be supported by training, policy and guidelines, and fit within the organisational culture.


Subject(s)
Clinical Decision-Making/methods , Models, Theoretical , Physicians, Family/statistics & numerical data , Practice Guidelines as Topic/standards , England , Family Practice , Focus Groups , Humans , Inventions , Surveys and Questionnaires
14.
Sensors (Basel) ; 16(11)2016 Oct 28.
Article in English | MEDLINE | ID: mdl-27801827

ABSTRACT

Algal bloom is a typical phenomenon of the eutrophication of rivers and lakes and makes the water dirty and smelly. It is a serious threat to water security and public health. Most research on remediating this pollution has focused on the principles of the remediation approaches, but few studies have addressed the decision-making involved in selecting an approach. Existing research relies on a single type of decision-making information, which is highly subjective and makes little use of the data from water quality sensors. To utilize these data and solve the rational decision-making problem, a novel group decision-making method is proposed that uses the sensor data together with fuzzy evaluation information. Firstly, the optimal similarity aggregation model of group opinions is built based on a modified similarity measurement of Vague values. Secondly, each approach's ability to improve the water quality indexes is expressed using Vague evaluation methods. Thirdly, the water quality sensor data are analyzed to match the features of the alternative approaches with grey relational degrees. This allows the best remediation approach to be selected to suit the current water status. Finally, the selection model is applied to the remediation of algal bloom in lakes. The results show the method's rationality and feasibility when using different data from different sources.
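
A sketch of the grey relational degree computation used in the matching step: sensor readings are compared against the feature profile of each candidate remediation approach, and the approach with the highest degree is the closest match. The water-quality indicators, profile values, and the conventional distinguishing coefficient of 0.5 are illustrative assumptions.

```python
import numpy as np

def grey_relational_degree(reference, candidates, rho=0.5):
    """Grey relational degree of each candidate sequence to the reference sequence.

    reference  : (m,) normalised current water-quality profile
    candidates : (n, m) normalised profiles that each remediation approach targets
    rho        : distinguishing coefficient, conventionally 0.5
    """
    delta = np.abs(candidates - reference)                 # absolute differences per indicator
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)  # grey relational coefficients
    return coeff.mean(axis=1)                              # average over the indicators

# toy example: 3 remediation approaches scored on 4 indicators already scaled to [0, 1]
current = np.array([0.8, 0.6, 0.9, 0.4])
approaches = np.array([[0.7, 0.5, 0.8, 0.5],
                       [0.2, 0.9, 0.3, 0.8],
                       [0.8, 0.6, 0.9, 0.3]])
print(grey_relational_degree(current, approaches))   # the highest degree marks the best match
```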

15.
Pharm Stat ; 14(3): 216-25, 2015.
Article in English | MEDLINE | ID: mdl-25810342

ABSTRACT

The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Pre-clinically, large numbers of screening studies to identify synergistic pairs of compounds are often run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a pre-clinical in vitro assay in the presence of uncertainty about the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimum set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This is found to give useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach in which the design points are constrained to be distributed log-normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still out-perform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed in positions where a 50% response is expected. More uncertainty in the monotherapy parameters leads to an optimal design with design points that are more spread out.
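
For reference, a sketch of the Hill monotherapy model assumed in the abstract, with shared lower and upper asymptotes; the parameter values are arbitrary and only illustrate why design points tend to sit near the dose giving a 50% response (the EC50).

```python
import numpy as np

def hill(dose, lower, upper, ec50, slope):
    """Hill equation with lower/upper asymptotes; response rises from lower to upper with dose."""
    dose = np.asarray(dose, dtype=float)
    return lower + (upper - lower) * dose**slope / (ec50**slope + dose**slope)

# illustrative monotherapy: 50% of the maximal effect is reached exactly at dose = ec50
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
print(hill(doses, lower=0.0, upper=1.0, ec50=1.0, slope=1.5))
# -> the response at dose 1.0 is 0.5, the midpoint between the asymptotes
```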


Subject(s)
Drug Antagonism , Drug Evaluation, Preclinical/methods , Drug Synergism , Algorithms , Animals , Cell Line/drug effects , Dose-Response Relationship, Drug , Humans , Models, Statistical , Research Design
16.
BMJ Open ; 14(2): e078264, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341207

ABSTRACT

INTRODUCTION: The prevalence of gestational diabetes mellitus (GDM) is rising in the UK and is associated with maternal and neonatal complications. National Institute for Health and Care Excellence guidance advises first-line management with healthy eating and physical activity which is only moderately effective for achieving glycaemic targets. Approximately 30% of women require medication with metformin and/or insulin. There is currently no strong evidence base for any particular dietary regimen to improve outcomes in GDM. Intermittent low-energy diets (ILEDs) are associated with improved glycaemic control and reduced insulin resistance in type 2 diabetes and could be a viable option in the management of GDM. This study aims to test the safety, feasibility and acceptability of an ILED intervention among women with GDM compared with best National Health Service (NHS) care. METHOD AND ANALYSIS: We aim to recruit 48 women with GDM diagnosed between 24 and 30 weeks gestation from antenatal clinics at Wythenshawe and St Mary's hospitals, Manchester Foundation Trust, over 13 months starting in November 2022. Participants will be randomised (1:1) to ILED (2 low-energy diet days/week of 1000 kcal and 5 days/week of the best NHS care healthy diet and physical activity advice) or best NHS care 7 days/week until delivery of their baby. Primary outcomes include uptake and retention of participants to the trial and adherence to both dietary interventions. Safety outcomes will include birth weight, gestational age at delivery, neonatal hypoglycaemic episodes requiring intervention, neonatal hyperbilirubinaemia, admission to special care baby unit or neonatal intensive care unit, stillbirths, the percentage of women with hypoglycaemic episodes requiring third-party assistance, and significant maternal ketonaemia (defined as ≥1.0 mmol/L). Secondary outcomes will assess the fidelity of delivery of the interventions, and qualitative analysis of participant and healthcare professionals' experiences of the diet. Exploratory outcomes include the number of women requiring metformin and/or insulin. ETHICS AND DISSEMINATION: Ethical approval has been granted by the Cambridge East Research Ethics Committee (22/EE/0119). Findings will be disseminated via publication in peer-reviewed journals, conference presentations and shared with diabetes charitable bodies and organisations in the UK, such as Diabetes UK and the Association of British Clinical Diabetologists. TRIAL REGISTRATION NUMBER: NCT05344066.


Subject(s)
Diabetes Mellitus, Type 2 , Diabetes, Gestational , Metformin , Female , Humans , Infant, Newborn , Pregnancy , Diabetes Mellitus, Type 2/drug therapy , Diabetes, Gestational/diagnosis , Diet , Feasibility Studies , Hypoglycemic Agents/therapeutic use , Insulin/therapeutic use , Metformin/therapeutic use , Obesity/drug therapy , State Medicine , Randomized Controlled Trials as Topic
17.
Stat Med ; 32(15): 2544-54, 2013 Jul 10.
Article in English | MEDLINE | ID: mdl-23280944

ABSTRACT

In phase III cancer clinical trials, overall survival is commonly used as the definitive endpoint. In phase II clinical trials, however, more immediate endpoints such as incidence of complete or partial response within 1 or 2 months or progression-free survival (PFS) are generally used. Because of the limited ability to detect change in overall survival with response, the inherent variability of PFS and the long wait for progression to be observed, more informative and immediate alternatives to overall survival are desirable in exploratory phase II trials. In this paper, we show how comparative trials can be designed and analysed using change in tumour size as the primary endpoint. The test developed is based on the framework of score statistics and will formally incorporate the information of whether patients survive until the time at which change in tumour size is assessed. Using an example in non-small cell lung cancer, we show that the sample size requirements for a trial based on change in tumour size are favourable compared with alternative randomized trials and demonstrate that these conclusions are robust to our assumptions.


Subject(s)
Clinical Trials as Topic/methods , Neoplasms/pathology , Neoplasms/therapy , Biostatistics/methods , Carcinoma, Non-Small-Cell Lung/pathology , Carcinoma, Non-Small-Cell Lung/therapy , Clinical Trials as Topic/statistics & numerical data , Clinical Trials, Phase II as Topic/methods , Clinical Trials, Phase II as Topic/statistics & numerical data , Clinical Trials, Phase III as Topic/methods , Clinical Trials, Phase III as Topic/statistics & numerical data , Databases, Factual , Endpoint Determination/methods , Endpoint Determination/statistics & numerical data , Humans , Lung Neoplasms/pathology , Lung Neoplasms/therapy , Models, Statistical , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size
18.
Pharm Stat ; 12(5): 300-8, 2013.
Article in English | MEDLINE | ID: mdl-23907796

ABSTRACT

Pre-clinical studies may be used to screen for synergistic combinations of drugs. The types of in vitro assays used for this purpose will depend upon the disease area of interest. In oncology, one frequently used study measures cell line viability: cells placed into wells on a plate are treated with doses of two compounds, and cell viability is assessed from an optical density measurement corrected for blank well values. These measurements are often transformed and analysed as cell survival relative to untreated wells. The monotherapies are assumed to follow the Hill equation with lower and upper asymptotes at 0 and 1, respectively. Additionally, a common variance about the dose-response curve may be assumed. In this paper, we consider two models for incorporating synergy parameters. We investigate the effect of different models of biological variation on the assessment of synergy from both of these models. We show that estimates of the synergy parameters appear to be robust, even when estimates of the other model parameters are biased. Using untransformed measurements provides better coverage of the 95% confidence intervals for the synergy parameters than using transformed measurements, and the requirement to fit the upper asymptote does not cause difficulties. Assuming homoscedastic variances appears to be robust. The added complexity of determining and fitting an appropriate heteroscedastic model does not seem to be justified.


Subject(s)
Drug Evaluation, Preclinical/statistics & numerical data , Drug Synergism , Drug Therapy, Combination/statistics & numerical data , Models, Biological
19.
Front Neurorobot ; 17: 1181864, 2023.
Article in English | MEDLINE | ID: mdl-37389197

ABSTRACT

Introduction: Global navigation satellite system (GNSS) signals can be lost in viaducts, urban canyons, and tunnel environments. Accurately locating pedestrians during Global Positioning System (GPS) signal outages has been a significant challenge. This paper proposes a location estimation method that uses only inertial measurements. Methods: The method is based on deep network models with feature-mode matching. First, a framework is designed to extract the features of inertial measurements and match them with deep networks. Second, feature extraction and classification methods are investigated to achieve mode partitioning and to lay the foundation for evaluating different deep networks. Third, typical deep network models are analyzed to match the various features. The selected models can be trained on the different modes of inertial measurements to obtain localization information. The experiments are performed with the inertial odometry dataset from Oxford University. Results and discussion: The results demonstrate that networks matched to the different feature modes give more accurate position estimates, which can improve the localization accuracy of pedestrians during GPS signal outages.

20.
Pharm Stat ; 11(2): 107-17, 2012.
Article in English | MEDLINE | ID: mdl-22337619

ABSTRACT

The issues and dangers involved in testing multiple hypotheses are well recognised within the pharmaceutical industry. In reporting clinical trials, strenuous efforts are taken to avoid the inflation of type I error, with procedures such as the Bonferroni adjustment and its many elaborations and refinements being widely employed. Typically, such methods are conservative. They tend to be accurate if the multiple test statistics involved are mutually independent and achieve less than the type I error rate specified if these statistics are positively correlated. An alternative approach is to estimate the correlations between the test statistics and to perform a test that is conditional on those estimates being the true correlations. In this paper, we begin by assuming that test statistics are normally distributed and that their correlations are known. Under these circumstances, we explore several approaches to multiple testing, adapt them so that type I error is preserved exactly and then compare their powers over a range of true parameter values. For simplicity, the explorations are confined to the bivariate case. Having described the relative strengths and weaknesses of the approaches under study, we use simulation to assess the accuracy of the approximate theory developed when the correlations are estimated from the study data rather than being known in advance and when data are binary so that test statistics are only approximately normally distributed.
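
A sketch of the bivariate case discussed here: given a known correlation between two normal test statistics, find the exact two-sided critical value that preserves the familywise type I error and compare it with the conservative Bonferroni value. scipy is assumed available, and the significance level and correlation are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import brentq

def familywise_error(c, rho):
    """P(|Z1| > c or |Z2| > c) for bivariate standard normals with correlation rho."""
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    # rectangle probability P(-c <= Z1 <= c, -c <= Z2 <= c) by inclusion-exclusion
    inside = (mvn.cdf([c, c]) - mvn.cdf([-c, c]) - mvn.cdf([c, -c]) + mvn.cdf([-c, -c]))
    return 1.0 - inside

alpha, rho = 0.05, 0.6
exact_c = brentq(lambda c: familywise_error(c, rho) - alpha, 1.0, 4.0)
bonferroni_c = norm.ppf(1 - alpha / 4)   # two two-sided tests: per-test level alpha/2
print(f"exact critical value: {exact_c:.3f}, Bonferroni: {bonferroni_c:.3f}")
# the exact value is smaller, i.e. the correlation-adjusted test is less conservative
```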


Subject(s)
Clinical Trials as Topic/methods , Endpoint Determination , Research Design , Computer Simulation , Data Interpretation, Statistical , Drug Industry/methods , Humans