Results 1 - 20 of 247

1.
CA Cancer J Clin ; 73(1): 72-112, 2023 01.
Article in English | MEDLINE | ID: mdl-35916666

ABSTRACT

Sinonasal malignancies make up <5% of all head and neck neoplasms, with an incidence of 0.5-1.0 per 100,000. The outcome of these rare malignancies has been poor, whereas significant progress has been made in the management of other cancers. The objective of the current review was to describe the incidence, causes, presentation, diagnosis, treatment, and recent developments of malignancies of the sinonasal tract. The diagnoses covered in this review included sinonasal undifferentiated carcinoma, sinonasal adenocarcinoma, sinonasal squamous cell carcinoma, and esthesioneuroblastoma, which are exclusive to the sinonasal tract. In addition, the authors covered malignancies that are likely to be encountered in the sinonasal tract: primary mucosal melanoma, NUT (nuclear protein of the testis) carcinoma, and extranodal natural killer cell/T-cell lymphoma. To keep this review as concise and focused as possible, sarcomas and malignancies that can be classified as salivary gland neoplasms were excluded.


Subjects
Carcinoma; Maxillary Sinus Neoplasms; Melanoma; Nose Neoplasms; Paranasal Sinuses; Humans; Carcinoma/diagnosis; Maxillary Sinus Neoplasms/diagnosis; Maxillary Sinus Neoplasms/pathology; Nasal Cavity/pathology; Nose Neoplasms/diagnosis; Nose Neoplasms/epidemiology; Nose Neoplasms/therapy; Paranasal Sinuses/pathology
2.
Oncologist ; 29(7): 547-550, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38824414

ABSTRACT

Missing visual elements (MVE) in Kaplan-Meier (KM) curves can misrepresent data, preclude curve reconstruction, and hamper transparency. This study evaluated KM plots of phase III oncology trials. MVE were defined as an incomplete y-axis range or missing number at risk table in a KM curve. Surrogate endpoint KM curves were additionally evaluated for complete interpretability, defined by (1) reporting the number of censored patients and (2) correspondence of the disease assessment interval with the number at risk interval. Among 641 trials enrolling 518,235 patients, 116 trials (18%) had MVE in KM curves. Industry sponsorship, larger trials, and more recently published trials were correlated with lower odds of MVE. Only 3% of trials (15 of 574) published surrogate endpoint KM plots with complete interpretability. Improvements in the quality of KM curves of phase III oncology trials, particularly for surrogate endpoints, are needed for greater interpretability, reproducibility, and transparency in oncology research.
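The two MVE criteria described above are mechanical enough to audit automatically. A minimal sketch (the `fig_meta` schema is hypothetical, not the study's actual extraction pipeline):

```python
# Hypothetical audit of Kaplan-Meier figure metadata for missing visual
# elements (MVE), per the two criteria in the abstract: an incomplete
# y-axis range or a missing number-at-risk table.
def has_mve(fig_meta):
    """Return True if the KM figure exhibits a missing visual element."""
    full_y_axis = fig_meta.get("y_axis_range") == (0.0, 1.0)
    has_risk_table = fig_meta.get("number_at_risk_table", False)
    return not (full_y_axis and has_risk_table)

trials = [
    {"y_axis_range": (0.0, 1.0), "number_at_risk_table": True},   # complete
    {"y_axis_range": (0.5, 1.0), "number_at_risk_table": True},   # truncated axis
    {"y_axis_range": (0.0, 1.0), "number_at_risk_table": False},  # no risk table
]
flagged = sum(has_mve(t) for t in trials)
print(f"{flagged}/{len(trials)} KM figures flagged with MVE")  # prints "2/3 ..."
```
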


Subjects
Clinical Trials, Phase III as Topic; Kaplan-Meier Estimate; Humans; Clinical Trials, Phase III as Topic/standards; Neoplasms/therapy; Medical Oncology/standards; Medical Oncology/methods
3.
Bioinformatics ; 39(3)2023 03 01.
Article in English | MEDLINE | ID: mdl-36825820

ABSTRACT

MOTIVATION: Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks. Many existing pre-trained models, on the other hand, are resource-intensive and computationally heavy owing to factors such as embedding size, hidden dimension and number of layers. The natural language processing community has developed numerous strategies to compress these models utilizing techniques such as pruning, quantization and knowledge distillation, resulting in models that are considerably faster, smaller and subsequently easier to use in practice. By the same token, in this article, we introduce six lightweight models, namely, BioDistilBERT, BioTinyBERT, BioMobileBERT, DistilBioBERT, TinyBioBERT and CompactBioBERT, which are obtained either by knowledge distillation from a biomedical teacher or by continual learning on the PubMed dataset. We evaluate all of our models on three biomedical tasks and compare them with BioBERT-v1.1 to create the best efficient lightweight models that perform on par with their larger counterparts. RESULTS: We trained six different models in total, with the largest having 65 million parameters and the smallest having 15 million, a far lower range than BioBERT's 110 million. Based on our experiments on three different biomedical tasks, we found that models distilled from a biomedical teacher and models that have been additionally pre-trained on the PubMed dataset can retain up to 98.8% and 98.6% of the performance of BioBERT-v1.1, respectively. Overall, our best model below 30M parameters is BioMobileBERT, while our best models over 30M parameters are DistilBioBERT and CompactBioBERT, which keep up to 98.2% and 98.8% of the performance of BioBERT-v1.1, respectively. AVAILABILITY AND IMPLEMENTATION: Code is available at: https://github.com/nlpie-research/Compact-Biomedical-Transformers.
Trained models can be accessed at: https://huggingface.co/nlpie.
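The distillation objective shared by models trained from a biomedical teacher can be sketched as a temperature-softened KL divergence between teacher and student outputs (illustrative only; this is Hinton-style soft-label distillation, not the authors' exact training code):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) between softened distributions, scaled by T^2
    so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches its teacher exactly incurs (numerically) zero loss;
# mismatched logits incur a positive loss.
assert distillation_loss([1.0, 2.0, 0.5], [1.0, 2.0, 0.5]) < 1e-12
assert distillation_loss([2.0, 1.0, 0.5], [1.0, 2.0, 0.5]) > 0.0
```
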


Subjects
Natural Language Processing; PubMed; Datasets as Topic
4.
BMC Infect Dis ; 24(1): 205, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38360603

ABSTRACT

Hand, foot and mouth disease (HFMD) is caused by a variety of enteroviruses and occurs in large outbreaks in which a small proportion of children deteriorate rapidly with cardiopulmonary failure. Determining which children are likely to deteriorate is difficult, and health systems may become overloaded during outbreaks as many children require hospitalization for monitoring. Heart rate variability (HRV) may help distinguish those with more severe disease but requires simple, scalable methods to collect ECG data. We carried out a prospective observational study to examine the feasibility of using wearable devices to measure HRV in 142 children admitted with HFMD at a children's hospital in Vietnam. ECG data were collected in all children. The HRV indices calculated were lower in those with enterovirus A71-associated HFMD compared to those with other viral pathogens. HRV analysis based on data collected from wearable devices is feasible in a low- and middle-income country (LMIC) and may help classify disease severity in HFMD.
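Two standard time-domain HRV indices of the kind compared in such studies can be computed from successive RR intervals in a few lines (illustrative; the abstract does not specify the study's exact index set):

```python
import statistics

def hrv_indices(rr_ms):
    """Time-domain HRV from successive RR intervals (milliseconds):
    SDNN  = standard deviation of all intervals,
    RMSSD = root mean square of successive differences."""
    sdnn = statistics.pstdev(rr_ms)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return sdnn, rmssd

# A perfectly regular rhythm has zero variability on both indices.
sdnn, rmssd = hrv_indices([800, 800, 800, 800])
```

Lower values of such indices in one patient group versus another are the kind of contrast the study reports between enterovirus A71 and other pathogens.
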


Subjects
Enterovirus A, Human; Enterovirus Infections; Enterovirus; Hand, Foot and Mouth Disease; Child; Humans; Infant; Hand, Foot and Mouth Disease/diagnosis; Heart Rate; Feasibility Studies; China/epidemiology
5.
PLoS Genet ; 17(10): e1009436, 2021 10.
Article in English | MEDLINE | ID: mdl-34662334

ABSTRACT

Campylobacteriosis is among the world's most common foodborne illnesses, caused predominantly by the bacterium Campylobacter jejuni. Effective interventions require determination of the infection source, which is challenging as transmission occurs via multiple sources such as contaminated meat, poultry, and drinking water. Strain variation has allowed source tracking based upon allelic variation in multi-locus sequence typing (MLST) genes, allowing isolates from infected individuals to be attributed to specific animal or environmental reservoirs. However, the accuracy of probabilistic attribution models has been limited by the ability to differentiate isolates based upon just 7 MLST genes. Here, we broaden the input data spectrum to include core genome MLST (cgMLST) and whole genome sequences (WGS), and implement multiple machine learning algorithms, allowing more accurate source attribution. We increase attribution accuracy from 64% using the standard iSource population genetic approach to 71% for MLST, 85% for cgMLST and 78% for kmerized WGS data using the classifier we named aiSource. To gain insight beyond the source model prediction, we use Bayesian inference to analyse the relative affinity of C. jejuni strains to infect humans and identify potential differences in source-to-human transmission ability among clonally related isolates in the most common disease-causing lineage (ST-21 clonal complex). Using generalizable, computationally efficient methods based upon machine learning and population genetics, we provide a scalable approach to global disease surveillance that can continuously incorporate novel samples for source attribution and identify fine-scale variation in transmission potential.
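At its simplest, the attribution task assigns a clinical isolate's allele profile to the most similar reservoir. A toy 1-nearest-neighbour sketch on 7-locus MLST profiles (the profiles below are invented; aiSource itself uses trained classifiers over far richer cgMLST/WGS inputs):

```python
def hamming(a, b):
    """Number of loci at which two allele profiles differ."""
    return sum(x != y for x, y in zip(a, b))

def attribute_source(isolate, reservoirs):
    """Attribute a clinical isolate to the reservoir whose representative
    allele profile is nearest in allele space (1-NN on Hamming distance)."""
    return min(reservoirs, key=lambda src: hamming(isolate, reservoirs[src]))

# Invented 7-locus MLST profiles for three candidate reservoirs.
reservoirs = {
    "chicken": (1, 4, 2, 2, 1, 3, 1),
    "cattle":  (2, 1, 5, 2, 2, 3, 1),
    "water":   (9, 9, 9, 1, 1, 1, 2),
}
# One allele away from the chicken profile -> attributed to chicken.
assert attribute_source((1, 4, 2, 2, 1, 3, 2), reservoirs) == "chicken"
```

The limited resolution of 7 loci, which motivates the move to cgMLST and kmerized WGS in the paper, is visible here: many distinct strains collapse onto identical short profiles.
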


Subjects
Campylobacter Infections/microbiology; Campylobacter jejuni/genetics; Gastroenteritis/microbiology; Animals; Bayes Theorem; Chickens/microbiology; Genetics, Population/methods; Humans; Machine Learning; Meat/microbiology; Multilocus Sequence Typing/methods; Whole Genome Sequencing/methods
6.
BMC Med Inform Decis Mak ; 24(1): 183, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937744

ABSTRACT

The analysis of extensive electronic health records (EHR) datasets often calls for automated solutions, with machine learning (ML) techniques, including deep learning (DL), taking a lead role. One common task involves categorizing EHR data into predefined groups. However, the vulnerability of EHRs to noise and errors stemming from data collection processes, as well as potential human labeling errors, poses a significant risk. This risk is particularly prominent during the training of DL models, where the possibility of overfitting to noisy labels can have serious repercussions in healthcare. Despite the well-documented existence of label noise in EHR data, few studies have tackled this challenge within the EHR domain. Our work addresses this gap by adapting computer vision (CV) algorithms to mitigate the impact of label noise in DL models trained on EHR data. Notably, it remains uncertain whether CV methods, when applied to the EHR domain, will prove effective, given the substantial divergence between the two domains. We present empirical evidence demonstrating that these methods, whether used individually or in combination, can substantially enhance model performance when applied to EHR data, especially in the presence of noisy/incorrect labels. We validate our methods and underscore their practical utility in real-world EHR data, specifically in the context of COVID-19 diagnosis. Our study highlights the effectiveness of CV methods in the EHR domain, making a valuable contribution to the advancement of healthcare analytics and research.
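One family of computer-vision label-noise methods of the kind adapted here relies on the "small-loss trick": samples a network fits with low loss early in training are more likely to be correctly labelled. A minimal sketch (illustrative; the abstract does not name the exact method mix):

```python
def small_loss_selection(losses, noise_rate):
    """Return indices of the (1 - noise_rate) fraction of samples with the
    smallest training loss; the remainder are treated as likely mislabelled
    and excluded (or down-weighted) when updating the model."""
    keep = int(round(len(losses) * (1.0 - noise_rate)))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(ranked[:keep])

# With an assumed 25% noise rate, the single highest-loss sample (index 1)
# is dropped from the update.
assert small_loss_selection([0.1, 2.3, 0.2, 0.05], 0.25) == [0, 2, 3]
```
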


Subjects
Electronic Health Records; Humans; Deep Learning; COVID-19; Machine Learning
7.
BMC Med Inform Decis Mak ; 24(1): 117, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38702692

ABSTRACT

BACKGROUND: Irregular time series (ITS) are common in healthcare, as patient data are recorded in an electronic health record (EHR) system as per clinical guidelines/requirements rather than for research, and depend on a patient's health status. Due to this irregularity, it is challenging to develop machine learning techniques that uncover the vast intelligence hidden in EHR big data without losing performance on downstream patient outcome prediction tasks. METHODS: In this paper, we propose Perceiver, a cross-attention-based transformer variant that is computationally efficient and can handle long sequences of time series in healthcare. We further develop continuous patient state attention models, using Perceiver and the transformer, to deal with ITS in EHR. The continuous patient state models utilise neural ordinary differential equations to learn patient health dynamics, i.e., the patient health trajectory from observed irregular time steps, which enables them to sample the patient state at any time. RESULTS: The proposed models' performance on the in-hospital mortality prediction task is examined on the PhysioNet-2012 challenge and MIMIC-III datasets. The Perceiver model either outperforms or performs on par with baselines, and reduces computations by about nine times compared to the transformer model, with no significant loss of performance. Experiments to examine irregularity in healthcare reveal that the continuous patient state models outperform baselines. Moreover, the predictive uncertainty of the model is used to refer extremely uncertain cases to clinicians, which enhances the model's performance. Code is publicly available and verified at https://codeocean.com/capsule/4587224. CONCLUSIONS: Perceiver presents a computationally efficient potential alternative for processing long sequences of time series in healthcare, and the continuous patient state attention models outperform traditional and advanced techniques in handling irregularity in the time series.
Moreover, the predictive uncertainty of the model helps in the development of transparent and trustworthy systems, which can be utilised as per the availability of clinicians.
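The core idea of the continuous patient-state models, dynamics that can be integrated forward to any query time between irregular observations, can be illustrated with the simplest ODE solver (fixed-step Euler; the paper uses learned neural ODE dynamics, and the decay function below is invented):

```python
def euler_trajectory(h0, f, times):
    """Advance a latent patient state h between irregular observation
    times by Euler integration of dh/dt = f(h, t)."""
    h, t = h0, times[0]
    states = [h]
    for t_next in times[1:]:
        h = h + (t_next - t) * f(h, t)  # one Euler step over the visit gap
        states.append(h)
        t = t_next
    return states

# Toy dynamics dh/dt = -0.5 * h: the state decays between irregular visits,
# and the gap length (0.5 vs 1.5 here) determines how far it decays.
states = euler_trajectory(1.0, lambda h, t: -0.5 * h, [0.0, 0.5, 2.0])
assert states == [1.0, 0.75, 0.1875]
```

A learned model replaces the hand-written `f` with a neural network and a proper ODE solver, but the interface, state in, state at an arbitrary later time out, is the same.
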


Subjects
Electronic Health Records; Humans; Machine Learning; Hospital Mortality; Models, Theoretical
8.
Radiology ; 307(1): e220715, 2023 04.
Article in English | MEDLINE | ID: mdl-36537895

ABSTRACT

Background Radiomics is the extraction of predefined mathematic features from medical images for the prediction of variables of clinical interest. While some studies report superlative accuracy of radiomic machine learning (ML) models, the published methodology is often incomplete, and the results are rarely validated in external testing data sets. Purpose To characterize the type, prevalence, and statistical impact of methodologic errors present in radiomic ML studies. Materials and Methods Radiomic ML publications were reviewed for the presence of performance-inflating methodologic flaws. Common flaws were subsequently reproduced with randomly generated features interpolated from publicly available radiomic data sets to demonstrate the precarious nature of reported findings. Results In an assessment of radiomic ML publications, the authors uncovered two general categories of data analysis errors: inconsistent partitioning and unproductive feature associations. In simulations, the authors demonstrated that inconsistent partitioning inflates radiomic ML accuracy by 1.4 times over unbiased performance and that correcting for the flawed methodology results in areas under the receiver operating characteristic curve approaching a value of 0.5 (random chance). With use of randomly generated features, the authors illustrated that unproductive associations between radiomic features and gene sets can imply false causality for biologic phenomena. Conclusion Radiomic machine learning studies may contain methodologic flaws that undermine their validity. This study provides a review template to avoid such flaws. © RSNA, 2022 Supplemental material is available for this article. See also the editorial by Jacobs in this issue.
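The "inconsistent partitioning" flaw is easy to reproduce with random features, echoing the spirit of the authors' simulations: selecting the best feature on all samples before splitting leaks test labels into model selection, even when the labels are pure noise. A toy mean-threshold classifier sketch (illustrative only; not the paper's code):

```python
import random

random.seed(0)
n, p = 60, 500
labels = [random.randint(0, 1) for _ in range(n)]           # pure-noise labels
features = [[random.random() for _ in range(n)] for _ in range(p)]

def accuracy(feat, labs):
    """Best accuracy of a mean-threshold classifier on a single feature."""
    thr = sum(feat) / len(feat)
    preds = [1 if x > thr else 0 for x in feat]
    acc = sum(pr == y for pr, y in zip(preds, labs)) / len(labs)
    return max(acc, 1.0 - acc)  # allow either class orientation

def subset(values, idx):
    return [values[i] for i in idx]

train, test = list(range(n // 2)), list(range(n // 2, n))

# FLAWED: choose the best feature using ALL samples (test labels leak into
# selection), then report accuracy on the held-out half.
best = max(range(p), key=lambda j: accuracy(features[j], labels))
flawed = accuracy(subset(features[best], test), subset(labels, test))

# UNBIASED: choose the feature on the training half only.
best_tr = max(range(p), key=lambda j: accuracy(subset(features[j], train),
                                               subset(labels, train)))
fair = accuracy(subset(features[best_tr], test), subset(labels, test))

print(round(flawed, 2), round(fair, 2))  # flawed is typically inflated
```

With 500 random features and only 60 samples, the flawed pipeline usually reports accuracy well above chance on labels that carry no signal at all.
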


Subjects
Machine Learning; Humans; ROC Curve; Retrospective Studies
9.
Brief Bioinform ; 22(6)2021 11 05.
Article in English | MEDLINE | ID: mdl-34414415

ABSTRACT

Antimicrobial resistance (AMR) poses a threat to global public health. To mitigate the impacts of AMR, it is important to identify the molecular mechanisms of AMR and thereby determine optimal therapy as early as possible. Conventional machine learning-based drug-resistance analyses assume genetic variations to be homogeneous, thus not distinguishing between coding and intergenic sequences. In this study, we represent genetic data from Mycobacterium tuberculosis as a graph, and then adopt a deep graph learning method, a heterogeneous graph attention network ('HGAT-AMR'), to predict anti-tuberculosis (TB) drug resistance. The HGAT-AMR model is able to accommodate incomplete phenotypic profiles, as well as provide 'attention scores' for genes and single nucleotide polymorphisms (SNPs), both at a population level and for individual samples. These scores indicate which inputs the model is 'paying attention to' in making its drug resistance predictions. The results show that the proposed model generated the best area under the receiver operating characteristic curve (AUROC) for isoniazid and rifampicin (98.53% and 99.10%), the best sensitivity for three first-line drugs (94.91% for isoniazid, 96.60% for ethambutol and 90.63% for pyrazinamide), and maintained performance when the data were associated with incomplete phenotypes (i.e. for those isolates for which phenotypic data for some drugs were missing). We also demonstrate that the model successfully identifies genes and SNPs associated with resistance to a particular drug while mitigating the influence of the overall resistance profile, which is consistent with domain knowledge.


Subjects
Antitubercular Agents/pharmacology; Drug Resistance, Bacterial/genetics; Mycobacterium tuberculosis/drug effects; Microbial Sensitivity Tests; Mycobacterium tuberculosis/genetics; Polymorphism, Single Nucleotide
10.
Am J Otolaryngol ; 44(2): 103781, 2023.
Article in English | MEDLINE | ID: mdl-36640532

ABSTRACT

OBJECTIVE: Osteoradionecrosis (ORN) of the mandible is a devastating complication of external beam radiation therapy (EBRT) for head and neck squamous cell carcinoma (HNSCC). We sought to ascertain ORN risk in a Veteran HNSCC population treated with definitive or adjuvant EBRT and followed prospectively. STUDY DESIGN: Retrospective analysis of a prospective cohort. SETTING: Tertiary care Veterans Health Administration (VHA) medical center. METHODS: Patients with HNSCC who initiated treatment at the Michael E. DeBakey Veterans Affairs Medical Center (MEDVAMC) are prospectively tracked for quality-of-care purposes through the end of the cancer surveillance period (5 years post treatment completion). We retrospectively analyzed this patient cohort and extracted clinical and pathologic data for 164 patients with SCC of the oral cavity, oropharynx, larynx, and hypopharynx who received definitive or adjuvant EBRT (2016-2020). RESULTS: Most patients were dentate, and 80% underwent dental extractions prior to EBRT, of whom 16 (16%) had complications. The rate of ORN was 3.7% for oral cavity SCC patients and 8.1% for oropharyngeal SCC patients. Median time to ORN development was 156 days, and the earliest case was detected at 127 days post EBRT completion. All ORN patients were dentate and underwent extraction prior to EBRT start. CONCLUSION: ORN development can occur early following EBRT in a Veteran population with significant comorbid conditions, but overall rates are in line with the general population. Prospective tracking of HNSCC patients throughout the post-treatment surveillance period is critical to early detection of this devastating EBRT complication.


Subjects
Head and Neck Neoplasms; Osteoradionecrosis; Veterans; Humans; Retrospective Studies; Squamous Cell Carcinoma of Head and Neck/radiotherapy; Squamous Cell Carcinoma of Head and Neck/epidemiology; Osteoradionecrosis/diagnosis; Osteoradionecrosis/epidemiology; Osteoradionecrosis/etiology; Prospective Studies; Early Detection of Cancer; Mandible; Head and Neck Neoplasms/epidemiology; Head and Neck Neoplasms/radiotherapy; Head and Neck Neoplasms/complications; Comorbidity
11.
Sensors (Basel) ; 23(18)2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37766044

ABSTRACT

Gestational diabetes mellitus (GDM) is a subtype of diabetes that develops during pregnancy. Managing blood glucose (BG) within the healthy physiological range can reduce clinical complications for women with gestational diabetes. The objectives of this study are to (1) develop benchmark glucose prediction models with long short-term memory (LSTM) recurrent neural network models using time-series data collected from the GDm-Health platform, (2) compare the prediction accuracy with published results, and (3) suggest an optimized clinical review schedule with the potential to reduce the overall number of blood tests for mothers with stable and within-range glucose measurements. A total of 190,396 BG readings from 1110 patients were used for model development, validation and testing under three different prediction schemes: 7 days of BG readings to predict the next 7 or 14 days, and 14 days to predict 14 days. Our results show that the optimized BG schedule, based on a 7-day observational window to predict the BG of the next 14 days, achieved root mean square errors (RMSE) of 0.958 ± 0.007, 0.876 ± 0.003, 0.898 ± 0.003, 0.622 ± 0.003, 0.814 ± 0.009 and 0.845 ± 0.005 for the after-breakfast, after-lunch, after-dinner, before-breakfast, before-lunch and before-dinner predictions, respectively. This is the first machine learning study to suggest an optimized blood glucose monitoring frequency, namely 7 days of monitoring to predict the next 14 days, based on the accuracy of blood glucose prediction. Moreover, the accuracy of our proposed model based on the fingerstick blood glucose test is on par with the benchmark performance of one-hour prediction models using continuous glucose monitoring (CGM) readings. In conclusion, the stacked LSTM model is a promising approach for capturing the patterns in time-series data, resulting in accurate predictions of BG levels.
Using a deep learning model with routine fingerstick glucose collection is a promising, predictable and low-cost solution for BG monitoring for women with gestational diabetes.
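The three prediction schemes are sliding-window splits of each patient's daily series; the 7-day-to-14-day scheme can be sketched as follows (illustrative helper, not the GDm-Health code; real preprocessing would also handle the six per-day meal slots and missing readings):

```python
def make_windows(series, obs_len=7, horizon=14):
    """Slice a daily glucose series into (observation, target) pairs:
    obs_len days of readings predict the following horizon days."""
    pairs = []
    for start in range(len(series) - obs_len - horizon + 1):
        obs = series[start:start + obs_len]
        target = series[start + obs_len:start + obs_len + horizon]
        pairs.append((obs, target))
    return pairs

# A 30-day series yields 30 - 7 - 14 + 1 = 10 overlapping training pairs.
pairs = make_windows(list(range(30)))
assert len(pairs) == 10
assert pairs[0] == (list(range(7)), list(range(7, 21)))
```
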


Subjects
Diabetes, Gestational; Pregnancy; Humans; Female; Diabetes, Gestational/diagnosis; Blood Glucose; Blood Glucose Self-Monitoring/methods; Memory, Short-Term; Glucose
12.
Sensors (Basel) ; 23(14)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37514865

ABSTRACT

An electronic health record (EHR) is a vital, high-dimensional collection of medical concepts. Discovering the implicit correlations in this information, for both research and clinical insight, can improve treatment and care-management processes. The central challenge is the limitations of the data sources, which make it difficult to find a stable model that relates medical concepts and exploits their existing connections. This paper presents Patient Forest, a novel end-to-end approach for learning patient representations from tree-structured data for readmission and mortality prediction tasks. By leveraging statistical features, the proposed model is able to provide an accurate and reliable classifier for predicting readmission and mortality. Experiments on the MIMIC-III and eICU datasets demonstrate that Patient Forest outperforms existing machine learning models, especially when the training data are limited. Additionally, a qualitative evaluation of Patient Forest is conducted by visualising the learnt representations in 2D space using t-SNE, which further confirms the effectiveness of the proposed model in learning EHR representations.


Subjects
Electronic Health Records; Machine Learning; Humans
13.
Sensors (Basel) ; 23(18)2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37765761

ABSTRACT

Tetanus is a life-threatening bacterial infection that remains prevalent in low- and middle-income countries (LMICs), including Vietnam. Tetanus affects the nervous system, leading to muscle stiffness and spasms. Moreover, severe tetanus is associated with autonomic nervous system (ANS) dysfunction. To ensure early detection and effective management of ANS dysfunction, patients require continuous monitoring of vital signs using bedside monitors. Wearable electrocardiogram (ECG) sensors offer a more cost-effective and user-friendly alternative to bedside monitors. Machine learning-based ECG analysis can be a valuable resource for classifying tetanus severity; however, existing ECG signal analysis is excessively time-consuming. Because of the fixed-size kernel filters used in traditional convolutional neural networks (CNNs), they are limited in their ability to capture global context information. In this work, we propose 2D-WinSpatt-Net, a novel Vision Transformer that contains both local spatial window self-attention and global spatial self-attention mechanisms. 2D-WinSpatt-Net boosts the classification of tetanus severity in intensive-care settings for LMICs using wearable ECG sensors. A time-series image, obtained via continuous wavelet transforms of the one-dimensional ECG signal, is the input to the proposed 2D-WinSpatt-Net. In the classification of tetanus severity levels, 2D-WinSpatt-Net surpasses state-of-the-art methods in terms of performance and accuracy. It achieves remarkable results with an F1 score of 0.88 ± 0.00, precision of 0.92 ± 0.02, recall of 0.85 ± 0.01, specificity of 0.96 ± 0.01, accuracy of 0.93 ± 0.02 and AUC of 0.90 ± 0.00.
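The ECG-to-image step, a continuous wavelet transform stacked over several widths into a 2D scalogram, can be sketched in plain Python (the Ricker wavelet is used here for brevity; the abstract does not specify the study's mother wavelet):

```python
import math

def ricker(points, width):
    """Ricker ('Mexican hat') wavelet sampled at `points` positions."""
    amp = 2.0 / (math.sqrt(3.0 * width) * math.pi ** 0.25)
    centre = (points - 1) / 2.0
    return [amp * (1 - ((i - centre) / width) ** 2)
            * math.exp(-((i - centre) ** 2) / (2 * width ** 2))
            for i in range(points)]

def cwt_image(signal, widths, wavelet_points=31):
    """Convolve the signal with wavelets of several widths and stack the
    responses into a 2D 'image' (a scalogram) for an image classifier."""
    half = wavelet_points // 2
    image = []
    for width in widths:
        w = ricker(wavelet_points, width)
        row = [sum(signal[t + k - half] * w[k]
                   for k in range(wavelet_points)
                   if 0 <= t + k - half < len(signal))
               for t in range(len(signal))]
        image.append(row)
    return image

# A 64-sample ECG-like trace becomes a 4 x 64 scalogram, one row per width.
img = cwt_image([math.sin(0.3 * t) for t in range(64)], widths=[1, 2, 4, 8])
assert len(img) == 4 and all(len(row) == 64 for row in img)
```

The resulting 2D array is what a vision model such as 2D-WinSpatt-Net consumes in place of the raw 1D signal.
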


Subjects
Tetanus; Humans; Developing Countries; Electrocardiography; Patients; Critical Care
14.
Clin Infect Dis ; 2022 Aug 03.
Article in English | MEDLINE | ID: mdl-35917440

ABSTRACT

BACKGROUND: The SARS-CoV-2 Delta variant has been replaced by the highly transmissible Omicron BA.1 variant, and subsequently by Omicron BA.2. It is important to understand how these changes in dominant variants affect reported symptoms, while also accounting for symptoms arising from other co-circulating respiratory viruses. METHODS: In a nationally representative UK community study, the COVID-19 Infection Survey, we investigated symptoms in PCR-positive infection episodes vs. PCR-negative study visits over calendar time, by age and vaccination status, comparing periods when the Delta, Omicron BA.1 and BA.2 variants were dominant. RESULTS: Between October 2020 and April 2022, 120,995 SARS-CoV-2 PCR-positive episodes occurred in 115,886 participants, with 70,683 (58%) reporting symptoms. The comparator comprised 4,766,366 PCR-negative study visits (483,894 participants), of which 203,422 (4%) reported symptoms. Symptom reporting in PCR-positives varied over time, with a marked reduction in loss of taste/smell as Omicron BA.1 dominated, maintained with BA.2 (44%/45% 17 October 2021, 16%/13% 2 January 2022, 15%/12% 27 March 2022). Cough, fever, shortness of breath, myalgia, fatigue/weakness and headache also decreased after Omicron BA.1 dominated, but sore throat increased, the latter to a greater degree than concurrent increases in PCR-negatives. Fatigue/weakness increased again after BA.2 dominated, although to a similar degree to concurrent increases in PCR-negatives. Symptoms were consistently more common in adults aged 18-65 years than in children or older adults. CONCLUSIONS: Increases in sore throat (also common in the general community), and a marked reduction in loss of taste/smell, make Omicron harder to detect with symptom-based testing algorithms, with implications for institutional and national testing policies.

15.
PLoS Med ; 19(11): e1004107, 2022 11.
Article in English | MEDLINE | ID: mdl-36355774

ABSTRACT

BACKGROUND: Our understanding of the global scale of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection remains incomplete: routine surveillance data underestimate infection and cannot be used to infer population immunity; there is a predominance of asymptomatic infections; and access to diagnostics is uneven. We meta-analyzed SARS-CoV-2 seroprevalence studies, standardized to those described in the World Health Organization's Unity protocol (WHO Unity) for general population seroepidemiological studies, to estimate the extent of population infection and seropositivity to the virus 2 years into the pandemic. METHODS AND FINDINGS: We conducted a systematic review and meta-analysis, searching MEDLINE, Embase, Web of Science, preprints, and grey literature for SARS-CoV-2 seroprevalence studies published between January 1, 2020 and May 20, 2022. The review protocol is registered with PROSPERO (CRD42020183634). We included general population cross-sectional and cohort studies meeting an assay quality threshold (90% sensitivity, 97% specificity; exceptions for humanitarian settings). We excluded studies with an unclear or closed population sample frame. Eligible studies, those aligned with the WHO Unity protocol, were extracted and critically appraised in duplicate, with risk of bias evaluated using a modified Joanna Briggs Institute checklist. We meta-analyzed seroprevalence by country and month, pooling to estimate regional and global seroprevalence over time; compared seroprevalence from infection to confirmed cases to estimate underascertainment; meta-analyzed differences in seroprevalence between demographic subgroups such as age and sex; and identified national factors associated with seroprevalence using meta-regression. 
We identified 513 full texts reporting 965 distinct seroprevalence studies (41% low- and middle-income countries [LMICs]) sampling 5,346,069 participants between January 2020 and April 2022, including 459 low/moderate risk of bias studies with national/subnational scope in further analysis. By September 2021, global SARS-CoV-2 seroprevalence from infection or vaccination was 59.2%, 95% CI [56.1% to 62.2%]. Overall seroprevalence rose steeply in 2021 due to infection in some regions (e.g., 26.6% [24.6 to 28.8] to 86.7% [84.6% to 88.5%] in Africa in December 2021) and vaccination and infection in others (e.g., 9.6% [8.3% to 11.0%] in June 2020 to 95.9% [92.6% to 97.8%] in December 2021, in European high-income countries [HICs]). After the emergence of Omicron in March 2022, infection-induced seroprevalence rose to 47.9% [41.0% to 54.9%] in Europe HIC and 33.7% [31.6% to 36.0%] in Americas HIC. In 2021 Quarter Three (July to September), median seroprevalence-to-cumulative-incidence ratios ranged from around 2:1 in the Americas and Europe HICs to over 100:1 in Africa (LMICs). Children 0 to 9 years and adults 60+ were at lower risk of seropositivity than adults 20 to 29 (p < 0.001 and p = 0.005, respectively). In a multivariable model using prevaccination data, stringent public health and social measures were associated with lower seroprevalence (p = 0.02). The main limitations of our methodology include that some estimates were driven by certain countries or populations being overrepresented. CONCLUSIONS: In this study, we observed that global seroprevalence has risen considerably over time and with regional variation; however, over one-third of the global population are seronegative to the SARS-CoV-2 virus. Our estimates of infections based on seroprevalence far exceed reported Coronavirus Disease 2019 (COVID-19) cases. Quality and standardized seroprevalence studies are essential to inform COVID-19 response, particularly in resource-limited regions.
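The per-study uncertainty behind estimates like "59.2%, 95% CI [56.1% to 62.2%]" can be sketched with the Wilson score interval for a single serosurvey (illustrative only; the pooled intervals in the paper come from a meta-analysis model, and the 592/1000 counts below are invented):

```python
import math

def wilson_ci(positives, n, z=1.96):
    """Wilson score 95% interval for a prevalence estimate positives/n."""
    p = positives / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Invented example: 592 seropositives out of 1000 sampled; the interval
# brackets the point estimate of 59.2%.
lo, hi = wilson_ci(592, 1000)
assert lo < 0.592 < hi
```

Unlike the naive Wald interval, the Wilson interval stays inside (0, 1) even for prevalences near the extremes, which matters for early-pandemic surveys with very few positives.
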


Subjects
COVID-19; SARS-CoV-2; Child; Adult; Humans; COVID-19/epidemiology; Seroepidemiologic Studies; Cross-Sectional Studies; Pandemics
16.
Am J Respir Crit Care Med ; 204(1): 44-52, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33525997

ABSTRACT

Rationale: Late recognition of patient deterioration in hospital is associated with worse outcomes, including higher mortality. Despite the widespread introduction of early warning score (EWS) systems and electronic health records, deterioration still goes unrecognized. Objectives: To develop and externally validate a Hospital-wide Alerting via Electronic Noticeboard (HAVEN) system to identify hospitalized patients at risk of reversible deterioration. Methods: This was a retrospective cohort study of patients 16 years of age or above admitted to four UK hospitals. The primary outcome was cardiac arrest or unplanned admission to the ICU. We used patient data (vital signs, laboratory tests, comorbidities, and frailty) from one hospital to train a machine-learning model (gradient boosting trees). We internally and externally validated the model and compared its performance with existing scoring systems (including the National EWS, laboratory-based acute physiology score, and electronic cardiac arrest risk triage score). Measurements and Main Results: We developed the HAVEN model using 230,415 patient admissions to a single hospital. We validated HAVEN on 266,295 admissions to four hospitals. HAVEN showed substantially higher discrimination (c-statistic, 0.901 [95% confidence interval, 0.898-0.903]) for the primary outcome within 24 hours of each measurement than other published scoring systems (which range from 0.700 [0.696-0.704] to 0.863 [0.860-0.865]). With a precision of 10%, HAVEN was able to identify 42% of cardiac arrests or unplanned ICU admissions with a lead time of up to 48 hours in advance, compared with 22% by the next best system. Conclusions: The HAVEN machine-learning algorithm for early identification of in-hospital deterioration significantly outperforms other published scores such as the National EWS.
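The c-statistic used to compare HAVEN with other scores is the probability that a randomly chosen deteriorating patient receives a higher risk score than a randomly chosen stable one, and can be computed directly from paired comparisons (illustrative implementation on invented scores):

```python
def c_statistic(scores, labels):
    """AUROC computed as the probability that a positive case outranks a
    negative case, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect ranking -> 1.0; one tied pair out of four comparisons -> 0.875.
assert c_statistic([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]) == 1.0
assert c_statistic([0.8, 0.5, 0.5, 0.2], [1, 1, 0, 0]) == 0.875
```

A value of 0.901, as reported for HAVEN, means the model ranks the deteriorating patient above the stable one in roughly 90% of such pairs.
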


Subjects
Clinical Deterioration , Early Warning Score , Guidelines as Topic , Risk Assessment/standards , Vital Signs/physiology , Adolescent , Adult , Aged , Aged, 80 and over , Algorithms , Cohort Studies , Female , Humans , Machine Learning , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , Risk Factors , United Kingdom , Young Adult
17.
Sensors (Basel) ; 22(9)2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35591004

ABSTRACT

Non-invasive foetal electrocardiography (NI-FECG) has become an important prenatal monitoring method in the hospital. However, owing to its susceptibility to non-stationary noise sources and the lack of robust extraction methods, capturing high-quality NI-FECG remains a challenge. Recording waveforms of sufficient quality for clinical use typically requires human visual inspection of each recording. A Signal Quality Index (SQI) can help to automate this task but, in contrast to adult ECG, work on SQIs for NI-FECG is sparse. In this paper, a multi-channel signal quality classifier for NI-FECG waveforms is presented. The model can be used during NI-FECG capture to assist technicians in recording high-quality waveforms, which is currently a labour-intensive task. A Convolutional Neural Network (CNN) is trained to distinguish between NI-FECG segments of high and low quality. NI-FECG recordings with one maternal channel and three abdominal channels were collected from 100 subjects during a routine hospital screening (102.6 min of data). The model achieves an average 10-fold cross-validated AUC of 0.95 ± 0.02. The results show that the model can reliably assess FECG signal quality on our dataset. The proposed model can improve the automated capture and analysis of NI-FECG and reduce technician labour time.
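The evaluation protocol reported above (average 10-fold cross-validated AUC) can be sketched as follows. The paper's actual model is a CNN on raw four-channel NI-FECG segments; here a logistic regression on synthetic per-channel quality features stands in for it, so the features, labels, and classifier are all assumptions.

```python
# Sketch of 10-fold cross-validated AUC evaluation — NOT the paper's CNN.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
# One synthetic "quality" feature per channel (1 maternal + 3 abdominal)
X = rng.normal(size=(n, 4))
# High- vs low-quality label loosely driven by the channel features
y = (X.mean(axis=1) + 0.5 * rng.normal(size=n)) > 0

aucs = cross_val_score(LogisticRegression(), X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```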


Subjects
Deep Learning , Signal Processing, Computer-Assisted , Algorithms , Electrocardiography/methods , Female , Fetus , Humans , Neural Networks, Computer , Pregnancy
18.
Sensors (Basel) ; 22(13)2022 Jun 25.
Article in English | MEDLINE | ID: mdl-35808300

ABSTRACT

Gestational diabetes mellitus (GDM) is often diagnosed during the last trimester of pregnancy, leaving only a short timeframe for intervention. However, appropriate assessment, management, and treatment have been shown to reduce the complications of GDM. This study introduces a machine-learning-based stratification system for identifying patients at risk of exhibiting high blood glucose levels, based on daily blood glucose measurements and electronic health record (EHR) data from GDM patients. We internally trained and validated our model on a cohort of 1148 pregnancies at Oxford University Hospitals NHS Foundation Trust (OUH), and performed external validation on 709 patients from Royal Berkshire Hospital NHS Foundation Trust (RBH). We trained linear and non-linear tree-based regression models to predict the proportion of high readings (readings above the UK's National Institute for Health and Care Excellence [NICE] guideline) a patient may exhibit in upcoming days, and found that XGBoost achieved the highest performance during internal validation (0.021 [CI 0.019-0.023], 0.482 [0.442-0.516], and 0.112 [0.109-0.116] for MSE, R2, and MAE, respectively). The model also performed similarly during external validation, suggesting that our method is generalizable across different cohorts of GDM patients.
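The regression task above — predicting the proportion of upcoming readings above a threshold, scored by MSE, R2, and MAE — can be sketched as follows. Scikit-learn's gradient boosting stands in for XGBoost here, and the features and targets are synthetic, not EHR data.

```python
# Minimal sketch of the regression set-up — synthetic data, not patient EHR.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 5))  # hypothetical features (recent readings, etc.)
# Target: proportion of upcoming readings above a threshold, bounded to [0, 1]
y = np.clip(0.3 + 0.1 * X[:, 0] + 0.05 * rng.normal(size=n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MSE={mean_squared_error(y_te, pred):.3f}  "
      f"R2={r2_score(y_te, pred):.3f}  "
      f"MAE={mean_absolute_error(y_te, pred):.3f}")
```

External validation would repeat the metric computation on a held-out cohort from a second site, as the study does with RBH.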


Subjects
Diabetes, Gestational , Blood Glucose , Diabetes, Gestational/diagnosis , Diabetes, Gestational/therapy , Female , Humans , Machine Learning , Pregnancy , Pregnancy Trimester, Third , Risk Assessment
19.
Sensors (Basel) ; 22(10)2022 May 19.
Article in English | MEDLINE | ID: mdl-35632275

ABSTRACT

Sepsis is associated with high mortality, particularly in low-middle income countries (LMICs). Critical care management of sepsis is challenging in LMICs due to the lack of care providers and the high cost of bedside monitors. Recent advances in wearable sensor technology and machine learning (ML) models in healthcare promise to deliver new ways of digital monitoring integrated with automated decision systems to reduce the mortality risk in sepsis. In this study, we aim firstly to assess the feasibility of using wearable sensors instead of traditional bedside monitors in the sepsis care management of hospital-admitted patients, and secondly to introduce automated prediction models for the mortality prediction of sepsis patients. To this end, we continuously monitored 50 sepsis patients for nearly 24 h after their admission to the Hospital for Tropical Diseases in Vietnam. We then compared the performance and interpretability of state-of-the-art ML models for the task of mortality prediction of sepsis using the heart rate variability (HRV) signal from wearable sensors and vital signs from bedside monitors. Our results show that all ML models trained on wearable data outperformed ML models trained on data gathered from the bedside monitors for the task of mortality prediction, with the highest performance (area under the precision recall curve = 0.83) achieved using time-varying features of HRV and recurrent neural networks. Our results demonstrate that the integration of automated ML prediction models with wearable technology is well suited for helping clinicians who manage sepsis patients in LMICs to reduce the mortality risk of sepsis.
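Time-domain HRV features of the kind derived from wearable ECG in such studies can be sketched as below. The specific feature choices (SDNN, RMSSD) and the simulated RR intervals are assumptions; the study's best-performing model additionally used time-varying HRV features fed to a recurrent network, which is not reproduced here.

```python
# Sketch of basic time-domain HRV features — simulated RR intervals,
# not wearable-sensor data from the study.
import numpy as np

def hrv_features(rr_ms: np.ndarray) -> dict:
    """Time-domain HRV features from RR intervals in milliseconds."""
    diffs = np.diff(rr_ms)
    return {
        "mean_rr": float(rr_ms.mean()),
        "sdnn": float(rr_ms.std(ddof=1)),              # overall variability
        "rmssd": float(np.sqrt(np.mean(diffs ** 2))),  # beat-to-beat variability
    }

rng = np.random.default_rng(3)
rr = rng.normal(800, 50, size=300)  # ~75 bpm with moderate variability
feats = hrv_features(rr)
print(feats)
```

Feature vectors like these, computed over sliding windows, could then be fed to any of the ML models the study compares.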


Subjects
Sepsis , Wearable Electronic Devices , Developing Countries , Humans , Machine Learning , Sepsis/diagnosis , Vital Signs
20.
Sensors (Basel) ; 22(17)2022 Aug 30.
Article in English | MEDLINE | ID: mdl-36081013

ABSTRACT

Infectious diseases remain a common problem in low- and middle-income countries, including in Vietnam. Tetanus is a severe infectious disease characterized by muscle spasms and complicated by autonomic nervous system dysfunction in severe cases. Patients require careful monitoring using electrocardiograms (ECGs) to detect deterioration and the onset of autonomic nervous system dysfunction as early as possible. Machine learning analysis of the ECG has been shown to add value in predicting tetanus severity; however, any additional ECG signal analysis places a high demand on time-limited hospital staff and requires specialist equipment. Therefore, we present a novel approach to tetanus monitoring from low-cost wearable sensors combined with deep-learning-based automatic severity detection. This approach can automatically triage tetanus patients and reduce the burden on hospital staff. In this study, we propose a two-dimensional (2D) convolutional neural network with a channel-wise attention mechanism for the binary classification of ECG signals. According to the Ablett classification of tetanus severity, we define grades 1 and 2 as mild tetanus and grades 3 and 4 as severe tetanus. The one-dimensional ECG time series signals are transformed into 2D spectrograms. The 2D attention-based network is designed to extract the features from the input spectrograms. Experiments demonstrate a promising performance for the proposed method in tetanus classification with an F1 score of 0.79 ± 0.03, precision of 0.78 ± 0.08, recall of 0.82 ± 0.05, specificity of 0.85 ± 0.08, accuracy of 0.84 ± 0.04 and AUC of 0.84 ± 0.03.
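The pre-processing step described above — transforming a 1D ECG time series into a 2D spectrogram — can be sketched with a simple framed FFT. The sampling rate, window length, and synthetic signal are assumptions, and the paper's attention-based 2D CNN is not reproduced here.

```python
# Sketch of 1D-signal-to-2D-spectrogram conversion — synthetic signal,
# not real ECG; the downstream attention CNN is out of scope.
import numpy as np

def stft_power(x, nperseg=128, step=64):
    """Power spectrogram via a framed FFT with a Hann window."""
    win = np.hanning(nperseg)
    n_frames = 1 + (len(x) - nperseg) // step
    frames = np.stack([x[i * step : i * step + nperseg] * win
                       for i in range(n_frames)])
    # rfft per frame, then transpose to (frequency bins, time frames)
    return (np.abs(np.fft.rfft(frames, axis=1)) ** 2).T

fs = 250  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Crude ECG stand-in: a ~1.2 Hz fundamental (~72 bpm) plus noise
sig = np.sin(2 * np.pi * 1.2 * t) \
    + 0.1 * np.random.default_rng(4).normal(size=t.size)
spec = stft_power(sig)
print(spec.shape)  # (65, 38): 65 frequency bins, 38 time frames
```

The resulting 2D array is what an image-style CNN would consume in place of the raw 1D waveform.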


Subjects
Tetanus , Wearable Electronic Devices , Algorithms , Electrocardiography , Humans , Machine Learning , Neural Networks, Computer , Tetanus/diagnosis