Results 1 - 20 of 51
1.
Lancet Digit Health ; 5(5): e288-e294, 2023 05.
Article in English | MEDLINE | ID: mdl-37100543

ABSTRACT

As the health-care industry emerges into a new era of digital health driven by cloud data storage, distributed computing, and machine learning, health-care data have become a premium commodity with value for private and public entities. Current frameworks of health data collection and distribution, whether from industry, academia, or government institutions, are imperfect and do not allow researchers to leverage the full potential of downstream analytical efforts. In this Health Policy paper, we review the current landscape of commercial health data vendors, with special emphasis on the sources of their data, challenges associated with data reproducibility and generalisability, and ethical considerations for data vending. We argue for sustainable approaches to curating open-source health data to enable global populations to be included in the biomedical research community. However, to fully implement these approaches, key stakeholders should come together to make health-care datasets increasingly accessible, inclusive, and representative, while balancing the privacy and rights of individuals whose data are being collected.


Subject(s)
Algorithms; Biomedical Research; Datasets as Topic; Humans; Privacy; Reproducibility of Results; Datasets as Topic/economics; Datasets as Topic/ethics; Datasets as Topic/trends; Consumer Health Information/economics; Consumer Health Information/ethics
2.
Sci Rep ; 12(1): 14626, 2022 08 26.
Article in English | MEDLINE | ID: mdl-36028547

ABSTRACT

Polyp segmentation has achieved remarkable success over the years in the supervised learning setting. However, obtaining a large number of labeled datasets is commonly challenging in the medical domain. To address this problem, we employ semi-supervised methods and take advantage of unlabeled data to improve the performance of polyp image segmentation. First, we propose an encoder-decoder-based method well suited to polyps of varying shape, size, and scale. Second, we use a teacher-student training scheme in which the teacher model is an exponential moving average of the student model. Third, to leverage the unlabeled dataset, we enforce a consistency constraint, forcing the teacher model to produce similar outputs for different perturbed versions of a given input. Finally, we propose a method that improves on traditional pseudo-labeling by training the model with continuously updated pseudo-labels. We demonstrate the efficacy of the proposed method on different polyp datasets, attaining better results in semi-supervised settings. Extensive experiments show that our method can propagate the essential information in the unlabeled dataset to improve performance.
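The teacher-student scheme described in this abstract can be sketched as follows. This is a generic illustration of an exponential-moving-average (EMA) teacher and a mean-squared consistency loss, not the authors' actual implementation; the function names and the plain-list representation of model weights are illustrative only:

```python
def ema_update(teacher_weights, student_weights, alpha=0.99):
    # Teacher parameters track an exponential moving average of the
    # student's parameters: t <- alpha * t + (1 - alpha) * s.
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_weights, student_weights)]

def consistency_loss(student_out, teacher_out):
    # Mean squared difference between the student's and the teacher's
    # predictions on two perturbed views of the same unlabeled input.
    return sum((s - t) ** 2
               for s, t in zip(student_out, teacher_out)) / len(student_out)
```

In such schemes, the teacher is refreshed with `ema_update` after each student optimization step, and `consistency_loss` on unlabeled inputs is added to the supervised loss computed on the labeled subset.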


Subject(s)
Polyps/pathology; Supervised Machine Learning; Datasets as Topic/standards; Datasets as Topic/trends; Humans; Image Processing, Computer-Assisted; Polyps/diagnostic imaging
4.
J Neurosci ; 41(5): 927-936, 2021 02 03.
Article in English | MEDLINE | ID: mdl-33472826

ABSTRACT

High digital connectivity and a focus on reproducibility are contributing to an open science revolution in neuroscience. Repositories and platforms have emerged across the whole spectrum of subdisciplines, paving the way for a paradigm shift in the way we share, analyze, and reuse vast amounts of data collected across many laboratories. Here, we describe how open access web-based tools are changing the landscape and culture of neuroscience, highlighting six free resources that span subdisciplines from behavior to whole-brain mapping, circuits, neurons, and gene variants.


Subject(s)
Access to Information; Brain/physiology; Internet/trends; Nerve Net/physiology; Neurons/physiology; Animals; Brain/cytology; Datasets as Topic/trends; Gene Regulatory Networks/physiology; Humans; Nerve Net/cytology
5.
BMC Palliat Care ; 19(1): 89, 2020 Jun 23.
Article in English | MEDLINE | ID: mdl-32576171

ABSTRACT

BACKGROUND: There is increased interest in the analysis of large, national palliative care data sets that include patient-reported outcomes (PROs). No study has investigated whether it is best to include or exclude data from services with low response rates in order to obtain the patient-reported outcomes most representative of the national palliative care population. Thus, the aim of this study was to investigate whether services with low response rates should be excluded from analyses to prevent the effects of possible selection bias. METHODS: Data from the Danish Palliative Care Database covering 24,589 admissions of cancer patients to specialized palliative care were included. Patients reported ten aspects of quality of life using the EORTC QLQ-C15-PAL questionnaire. Multiple linear regression was performed to test whether response rate was associated with the ten aspects of quality of life. RESULTS: The scores of six quality of life aspects were significantly associated with response rate. However, in only two cases did patients from specialized palliative care services with lower response rates (<20.0%, 20.0-29.9%, 30.0-39.9%, 40.0-49.9% or 50.0-59.9%) report feeling better than patients from services with high response rates (≥60%), and in both cases the difference was less than 2 points on a 0-100 scale. CONCLUSIONS: The study hypothesis, that patients from specialized palliative care services with lower response rates report better quality of life than those from services with high response rates, was not supported. This suggests that there is no reason to exclude data from specialized palliative care services with low response rates.


Subject(s)
Data Accuracy; Datasets as Topic/trends; Palliative Care/statistics & numerical data; Patient Reported Outcome Measures; Registries/statistics & numerical data; Adult; Datasets as Topic/standards; Female; Humans; Male; Middle Aged; Palliative Care/methods; Quality of Health Care/standards; Quality of Health Care/statistics & numerical data; Research Subjects/statistics & numerical data; Surveys and Questionnaires
6.
Emerg Med J ; 36(8): 459-464, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31253597

ABSTRACT

INTRODUCTION: For the London Olympic and Paralympic Games in 2012, a sentinel ED syndromic surveillance system was established to enhance public health surveillance by obtaining data from a selected network of EDs, focusing on London. In 2017, a new national standard Emergency Care Dataset was introduced, which enabled Public Health England (PHE) to initiate the expansion of their sentinel system to national coverage. Prior to this initiative, we estimated the added value, and potential additional resource use, of an expansion of the sentinel surveillance system. METHODS: The detection capabilities of the sentinel and national systems were compared using the aberration detection methods currently used by PHE. Different scenarios were used to measure the impact on health at a local, subnational and national level, including improvements to sensitivity and timeliness, along with changes in specificity. RESULTS: The biggest added value was found to be for detecting local impacts, with an increase in sensitivity of over 80%. There were also improvements found at a national level with outbreaks being detected earlier and smaller impacts being detectable. However, the increased number of local sites will also increase the number of false alarms likely to be generated. CONCLUSION: We have quantified the added value of national ED syndromic surveillance systems, showing how they will enable detection of more localised events. Furthermore, national systems add value in enabling timelier public health interventions. Finally, we have highlighted areas where extra resource may be required to manage improvements in detection coverage.
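The sensitivity/false-alarm trade-off described in this abstract can be illustrated with a minimal exceedance rule of the kind used in syndromic surveillance. This is a generic moving-baseline detector for illustration only, not the aberration detection method actually used by PHE:

```python
def detect_aberrations(daily_counts, baseline=7, z=2.0):
    # Flag each day whose count exceeds the mean of the preceding
    # `baseline` days by more than z standard deviations. Raising z
    # reduces false alarms at the cost of sensitivity; adding more
    # local sites multiplies the tests run, and hence the alarms.
    flags = []
    for i in range(baseline, len(daily_counts)):
        window = daily_counts[i - baseline:i]
        mean = sum(window) / baseline
        sd = (sum((c - mean) ** 2 for c in window) / baseline) ** 0.5
        flags.append(daily_counts[i] > mean + z * sd)
    return flags
```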


Subject(s)
Datasets as Topic/standards; Emergency Medical Services/standards; Public Health/instrumentation; Datasets as Topic/trends; Emergency Medical Services/methods; Emergency Medical Services/trends; England; Humans; Population Surveillance/methods; Public Health/methods; Public Health/standards
8.
Rev Epidemiol Sante Publique ; 67 Suppl 1: S19-S23, 2019 Feb.
Article in French | MEDLINE | ID: mdl-30635133

ABSTRACT

Big Data, the production of massive amounts of heterogeneous data, is often presented as a means to ensure the economic survival and sustainability of health systems. According to this perspective, Big Data could help save the spirit of our welfare states, based on the principles of risk-sharing and equal access to care for all. According to a second, opposing perspective, Big Data would fuel a process of demutualization, transferring to individuals a growing share of the responsibility for managing their own health. This article develops a third approach: Big Data does not induce a loss of solidarity but rather a transformation of the European welfare-state model. It is now the data themselves that are pooled, and individual and collective responsibilities are thus redistributed. However, this model, new as it is, remains liberal in inspiration; it essentially allows the continuation of political liberalism by other means.


Subject(s)
Altruism; Datasets as Topic; Delivery of Health Care; Inventions; Biobehavioral Sciences; Datasets as Topic/standards; Datasets as Topic/supply & distribution; Datasets as Topic/trends; Delivery of Health Care/organization & administration; Delivery of Health Care/standards; Delivery of Health Care/trends; Genetic Testing/trends; High-Throughput Screening Assays/standards; High-Throughput Screening Assays/statistics & numerical data; High-Throughput Screening Assays/trends; Humans; Individuality; Inventions/trends; Precision Medicine/adverse effects; Precision Medicine/methods; Precision Medicine/standards; Precision Medicine/trends; Quality Improvement/trends; Risk Factors; Social Justice; Social Welfare
12.
Neural Netw ; 103: 29-43, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29625354

ABSTRACT

Multilayer bootstrap network builds a gradually narrowed multilayer nonlinear network from bottom up for unsupervised nonlinear dimensionality reduction. Each layer of the network is a nonparametric density estimator. It consists of a group of k-centroids clusterings. Each clustering randomly selects data points with randomly selected features as its centroids, and learns a one-hot encoder by one-nearest-neighbor optimization. Geometrically, the nonparametric density estimator at each layer projects the input data space to a uniformly-distributed discrete feature space, where the similarity of two data points in the discrete feature space is measured by the number of the nearest centroids they share in common. The multilayer network gradually reduces the nonlinear variations of data from bottom up by building a vast number of hierarchical trees implicitly on the original data space. Theoretically, the estimation error caused by the nonparametric density estimator is proportional to the correlation between the clusterings, both of which are reduced by the randomization steps.
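One layer of the network this abstract describes can be sketched roughly as follows. This is a simplified, pure-Python illustration of the abstract's description (random k centroids per clustering drawn over a random feature subset, then one-hot coding by nearest centroid), not the authors' reference implementation:

```python
import random

def build_layer(data, n_clusterings=10, k=3, n_features=2, seed=0):
    # Each clustering picks k random data points, restricted to a random
    # feature subset, as its centroids.
    rng = random.Random(seed)
    layer = []
    for _ in range(n_clusterings):
        features = rng.sample(range(len(data[0])), n_features)
        centroids = rng.sample(data, k)
        layer.append((features, centroids))
    return layer

def encode(x, layer):
    # One-hot code per clustering: a 1 at the nearest centroid (squared
    # Euclidean distance on that clustering's feature subset), 0 elsewhere;
    # the codes of all clusterings are concatenated.
    code = []
    for features, centroids in layer:
        dists = [sum((x[f] - c[f]) ** 2 for f in features) for c in centroids]
        nearest = dists.index(min(dists))
        code.extend(1 if i == nearest else 0 for i in range(len(centroids)))
    return code
```

The similarity of two inputs is then the number of nearest centroids their codes share, i.e. the dot product of their concatenated one-hot codes; stacking such layers with a shrinking k per layer yields the gradually narrowed network.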


Subject(s)
Algorithms; Neural Networks, Computer; Cluster Analysis; Datasets as Topic/trends
13.
Z Rheumatol ; 77(3): 195-202, 2018 Apr.
Article in German | MEDLINE | ID: mdl-29520680

ABSTRACT

Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageably vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research introduced big data, along with the development and application of analysis tools, into the field of rheumatology some 15 years ago. This includes in particular omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require the adaptation of existing software tools or the development of new ones. For these steps, structuring and evaluating the data according to their biological context is extremely important and not merely a mathematical problem. This aspect has to be considered far more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data have a primary structure determined by the technology applied and exhibit quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data from the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes, including in individual patients, is generating personal big data and requires new management strategies in order to develop data-driven, individualized interpretation concepts. With this perspective in mind, translating information derived from molecular big data will also require new specifications for education and professional competence.


Subject(s)
Big Data; Molecular Diagnostic Techniques/methods; Rheumatology/methods; Algorithms; Datasets as Topic/trends; Forecasting; Germany; Humans; Medical Records Systems, Computerized/trends; Molecular Diagnostic Techniques/trends; Patient Generated Health Data/trends; Rheumatology/trends; Software/trends
14.
J Tissue Viability ; 26(4): 226-240, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29030056

ABSTRACT

BACKGROUND: At present there is no established national minimum data set (MDS) for generic wound assessment in England, which has led to a lack of standardisation and variable assessment criteria being used across the country. This hampers the quality and monitoring of wound healing progress and treatment. AIM: To establish a generic wound assessment MDS to underpin clinical practice. METHOD: The project comprised 1) a literature review to provide an overview of wound assessment best practice and identify potential assessment criteria for inclusion in the MDS and 2) a structured consensus study using an adapted Research and Development/University of California at Los Angeles Appropriateness method. This incorporated experts in the wound care field considering the evidence of a literature review and their experience to agree the assessment criteria to be included in the MDS. RESULTS: The literature review identified 24 papers that contained criteria which might be considered as part of generic wound assessment. From these papers 68 potential assessment items were identified and the expert group agreed that 37 (relating to general health information, baseline wound information, wound assessment parameters, wound symptoms and specialists) should be included in the MDS. DISCUSSION: Using a structured approach we have developed a generic wound assessment MDS to underpin wound assessment documentation and practice. It is anticipated that the MDS will facilitate a more consistent approach to generic wound assessment practice and support providers and commissioners of care to develop and re-focus services that promote improvements in wound care.


Subject(s)
Datasets as Topic/trends; Physical Examination/methods; Wounds and Injuries/classification; Consensus; England; Humans; Physical Examination/trends
15.
Stat Methods Med Res ; 26(4): 1605-1610, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28482753

ABSTRACT

We asked three leading researchers in the area of dynamic treatment regimes to share their stories on how they became interested in this topic and their perspectives on the most important opportunities and challenges for the future.


Subject(s)
Precision Medicine/trends; Randomized Controlled Trials as Topic/methods; Datasets as Topic/trends; History, 20th Century; History, 21st Century; Humans; Randomized Controlled Trials as Topic/history
17.
Neuroimage ; 155: 549-564, 2017 07 15.
Article in English | MEDLINE | ID: mdl-28456584

ABSTRACT

Neuroscience is undergoing faster changes than ever before. For over 100 years our field qualitatively described and invasively manipulated single or few organisms to gain anatomical, physiological, and pharmacological insights. In the last 10 years, neuroscience has spawned quantitative datasets of unprecedented breadth (e.g., microanatomy, synaptic connections, and optogenetic brain-behavior assays) and size (e.g., cognition, brain imaging, and genetics). While growing data availability and information granularity have been amply discussed, we direct attention to a less explored question: How will the unprecedented data richness shape data analysis practices? Statistical reasoning is becoming more important for distilling neurobiological knowledge from healthy and pathological brain measurements. We argue that large-scale data analysis will use more statistical models that are non-parametric and generative, and that mix frequentist and Bayesian aspects, while supplementing classical hypothesis testing with out-of-sample predictions.
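Out-of-sample prediction, the complement to classical hypothesis testing that this abstract points to, can be sketched with a generic k-fold cross-validation loop. This is an illustration of the general idea, with hypothetical `fit`/`predict` callables, not a method taken from the paper:

```python
import random

def out_of_sample_error(xs, ys, fit, predict, k=5, seed=0):
    # k-fold cross-validation: fit on k-1 folds, measure squared error
    # on the held-out fold, and average over all held-out points.
    indices = list(range(len(xs)))
    random.Random(seed).shuffle(indices)
    folds = [indices[i::k] for i in range(k)]
    errors = []
    for fold in folds:
        held_out = set(fold)
        train = [i for i in indices if i not in held_out]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        errors += [(predict(model, xs[i]) - ys[i]) ** 2 for i in fold]
    return sum(errors) / len(errors)
```

For example, with a constant-mean model (`fit = lambda xs, ys: sum(ys) / len(ys)`, `predict = lambda m, x: m`), the returned error estimates how well the fitted mean generalizes to unseen observations rather than how well it fits the data in hand.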


Subject(s)
Data Interpretation, Statistical; Datasets as Topic/trends; Models, Theoretical; Neurosciences/trends; Humans
20.
Philos Trans A Math Phys Eng Sci ; 374(2080)2016 Nov 13.
Article in English | MEDLINE | ID: mdl-27698035

ABSTRACT

The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'.


Subject(s)
Datasets as Topic/trends; Information Storage and Retrieval/methods; Models, Theoretical; User-Computer Interface; Computer Simulation; Database Management Systems