Results 1 - 11 of 11
1.
Pediatr Crit Care Med; 24(4): 322-333, 2023 Apr 1.
Article in English | MEDLINE | ID: mdl-36735282

ABSTRACT

OBJECTIVES: Develop and deploy a disease cohort-based machine learning algorithm for timely identification of hospitalized pediatric patients at risk for clinical deterioration that outperforms our existing situational awareness program. DESIGN: Retrospective cohort study. SETTING: Nationwide Children's Hospital, a freestanding, quaternary-care, academic children's hospital in Columbus, OH. PATIENTS: All patients admitted to inpatient units participating in the preexisting situational awareness program from October 20, 2015, to December 31, 2019, excluding patients over 18 years old at admission and those with a neonatal ICU stay during their hospitalization. INTERVENTIONS: We developed separate algorithms for cardiac, malignancy, and general cohorts via lasso-regularized logistic regression. Candidate model predictors included vital signs, supplemental oxygen, nursing assessments, early warning scores, diagnoses, lab results, and situational awareness criteria. Model performance was characterized in clinical terms and compared with our previous situational awareness program based on a novel retrospective validation approach. Simulations with frontline staff, prior to clinical implementation, informed user experience and refined interdisciplinary workflows. Model implementation was piloted on cardiology and hospital medicine units in early 2021. MEASUREMENTS AND MAIN RESULTS: The Deterioration Risk Index (DRI) was 2.4 times as sensitive as our existing situational awareness program (sensitivities of 53% and 22%, respectively; p < 0.001) and required 2.3 times fewer alarms per detected event (121 DRI alarms per detected event vs 276 for existing program). Notable improvements were a four-fold sensitivity gain for the cardiac diagnostic cohort (73% vs 18%; p < 0.001) and a three-fold gain (81% vs 27%; p < 0.001) for the malignancy diagnostic cohort. 
Postimplementation pilot results over 18 months revealed a 77% reduction in deterioration events (three events observed vs 13.1 expected, p = 0.001). CONCLUSIONS: The etiology of pediatric inpatient deterioration requires acknowledgement of the unique pathophysiology among cardiology and oncology patients. Selection and weighting of diverse candidate risk factors via machine learning can produce a more sensitive early warning system for clinical deterioration. Leveraging preexisting situational awareness platforms and accounting for operational impacts of model implementation are key aspects to successful bedside translation.


Subject(s)
Clinical Deterioration, Neoplasms, Infant, Newborn, Child, Humans, Adolescent, Retrospective Studies, Inpatients, Intensive Care Units, Pediatric, Algorithms, Machine Learning
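The modeling approach described in the abstract above can be sketched in a few lines. This is a minimal illustration with synthetic data and hypothetical feature names, not the study's actual predictors, cohorts, or tuning.

```python
# Sketch of a lasso-regularized logistic regression risk model,
# assuming synthetic vital-sign-like features (not the study's data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical candidate predictors (e.g., heart rate, respiratory rate, early warning score)
X = rng.normal(size=(n, 3))
# Synthetic deterioration labels correlated with the first feature only
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 1.0).astype(int)

# L1 (lasso) penalty shrinks uninformative coefficients toward zero,
# performing feature selection and risk weighting in one fitting step
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)

risk = model.predict_proba(X)[:, 1]  # per-patient deterioration risk score
print(risk.shape)
```

In practice the regularization strength (here `C=0.5`, an arbitrary value) would be selected by cross-validation, and separate models would be fit per diagnostic cohort as the study describes.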
2.
JMIR Pediatr Parent; 5(1): e33614, 2022 Mar 21.
Article in English | MEDLINE | ID: mdl-35311681

ABSTRACT

BACKGROUND: Parental justice involvement (eg, prison, jail, parole, or probation) is an unfortunately common and disruptive household adversity for many US youths, disproportionately affecting families of color and rural families. Data on this adversity are not routinely captured in pediatric health care settings, and when they are, they are neither discrete nor readily analyzable for research purposes. OBJECTIVE: In this study, we outline our process of training a state-of-the-art natural language processing model on unstructured clinician notes from one large pediatric health system to identify patients who have experienced a justice-involved parent. METHODS: Using the electronic health record database of a large Midwestern pediatric hospital-based institution from 2011-2019, we located clinician notes (of any type and written by any type of provider) that were likely to contain evidence of family justice involvement via a justice-keyword search (eg, prison and jail). To train and validate the model, we used a labeled data set of 7500 clinician notes indicating whether the patient was ever exposed to parental justice involvement. We calculated the precision and recall of the model and compared those rates to the keyword search. RESULTS: The machine learning model increased the precision (positive predictive value) of locating children affected by parental justice involvement in the electronic health record from 61% (with a simple keyword search) to 92%. CONCLUSIONS: Machine learning may be a feasible approach to addressing gaps in our understanding of the health and health services of underrepresented youth who encounter childhood adversities that are not routinely captured, particularly for children of justice-involved parents.
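The precision comparison reported above can be made concrete with the standard definitions. The confusion counts below are illustrative values chosen to reproduce the reported precisions; they are not the study's actual counts.

```python
# Precision and recall as used to compare the keyword search
# against the trained NLP model. Counts are hypothetical.
def precision(tp, fp):
    """Positive predictive value: fraction of flagged notes that are truly positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Sensitivity: fraction of truly positive notes that get flagged."""
    return tp / (tp + fn)

# Illustrative counts that reproduce the reported precisions
keyword_p = precision(tp=61, fp=39)   # 0.61: simple keyword search
model_p = precision(tp=92, fp=8)      # 0.92: trained NLP model

# Recall on equally hypothetical counts (the abstract reports no recall figure)
model_r = recall(tp=92, fn=28)

print(keyword_p, model_p)
```

The practical effect of the precision gain is that far fewer flagged charts require manual review to find the same affected children.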

3.
J Magn Reson Imaging; 55(3): 698-719, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33314349

ABSTRACT

Arterial spin labeling (ASL) is a powerful noncontrast magnetic resonance imaging (MRI) technique that enables quantitative evaluation of brain perfusion. To optimize the clinical and research utilization of ASL, radiologists and physicists must understand the technical considerations and age-related variations in normal and disease states. We discuss advanced applications of ASL across the lifespan, with example cases from children and adults covering a wide variety of pathologies. Through literature review and illustrated clinical cases, we highlight the subtleties as well as pitfalls of ASL interpretation. First, we review basic physical principles, techniques, and artifacts. This is followed by a discussion of normal perfusion variants based on age and physiology. The three major categories of perfusion abnormalities (hypoperfusion, hyperperfusion, and mixed patterns) are covered with an emphasis on clinical interpretation and relationship to the disease process. Major etiologies of hypoperfusion include large artery, small artery, and venous disease; other vascular conditions; global hypoxic-ischemic injury; and neurodegeneration. Hyperperfusion is characteristic of vascular malformations and tumors. Mixed perfusion patterns can be seen with epilepsy, migraine, trauma, infection/inflammation, and toxic-metabolic encephalopathy. LEVEL OF EVIDENCE: 4. TECHNICAL EFFICACY STAGE: 3.


Subject(s)
Brain Diseases, Cerebrovascular Circulation, Adult, Arteries, Brain Diseases/diagnostic imaging, Cerebrovascular Circulation/physiology, Child, Humans, Magnetic Resonance Angiography/methods, Magnetic Resonance Imaging/methods, Spin Labels
4.
Magn Reson Imaging Clin N Am; 29(4): 583-593, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34717846

ABSTRACT

Bone MR imaging techniques use extremely rapid echo times to maximize detection of short-T2 tissues with low water concentrations. The major approaches used in clinical practice are ultrashort echo-time and zero echo-time. Synthetic CT generation is feasible using atlas-based, voxel-based, and deep learning approaches. Major clinical applications in the pediatric head and neck include evaluation for craniosynostosis, sinonasal and jaw imaging, trauma, interventional planning, and postoperative follow-up. In this article, we review the technical background and practical usefulness of bone MR imaging with key imaging examples.


Subject(s)
Magnetic Resonance Imaging, Tomography, X-Ray Computed, Child, Humans
5.
Top Magn Reson Imaging; 30(2): 105-115, 2021 Apr 1.
Article in English | MEDLINE | ID: mdl-33828062

ABSTRACT

ABSTRACT: Zero-echo time (ZTE) magnetic resonance imaging (MRI) is the newest in a family of MRI pulse sequences that involve ultrafast sequence readouts, permitting visualization of short-T2 tissues such as cortical bone. Inherent sequence properties enable rapid, high-resolution, quiet, and artifact-resistant imaging. ZTE can be performed as part of a "one-stop-shop" MRI examination for comprehensive evaluation of head and neck pathology. As a potential alternative to computed tomography for bone imaging, this approach could help reduce patient exposure to ionizing radiation and improve radiology resource utilization. Because ZTE is not yet widely used clinically, it is important to understand the technical limitations and pitfalls for diagnosis. Imaging cases are presented to demonstrate potential applications of ZTE for imaging of oral cavity, oropharynx, and jaw anatomy and pathology in adult and pediatric patients. Emerging studies indicate promise for future clinical implementation based on synthetic computed tomography image generation, 3D printing, and interventional applications.


Subject(s)
Magnetic Resonance Imaging/methods, Mouth/diagnostic imaging, Oropharynx/diagnostic imaging, Humans
6.
J Child Neurol; 35(13): 873-878, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32677477

ABSTRACT

Currently, the tracking of seizures is highly subjective, relying on qualitative information provided by the patient and family rather than quantifiable seizure data. Seizure detection devices have previously been used to detect seizure events in populations of patients with epilepsy. We therefore evaluated whether the Fitbit Charge 2 smartwatch could detect seizure events in patients admitted to an epilepsy monitoring unit, compared with continuous electroencephalographic (EEG) monitoring. A total of 40 patients who met the inclusion criteria were enrolled between 2015 and 2016. All seizure types were recorded. Twelve patients had a total of 53 epileptic seizures. The patient-aggregated receiver operating characteristic curve had an area under the curve of 0.58 [0.56, 0.60], indicating that the neural network models were generally able to detect seizure events at an above-chance level. However, the overall low specificity implied a false alarm rate that would likely make the model unsuitable in practice. Overall, the Fitbit Charge 2 activity tracker does not appear well suited, in its current form, to detect epileptic seizures when compared with data recorded from continuous EEG.


Subject(s)
Epilepsy/complications, Fitness Trackers, Monitoring, Physiologic/methods, Seizures/diagnosis, Seizures/etiology, Adolescent, Adult, Child, Female, Humans, Machine Learning, Male, Reproducibility of Results, Young Adult
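The evaluation reported above combines a discrimination metric (ROC-AUC) with specificity at an alarm threshold. The sketch below shows how both are computed; the scores are synthetic stand-ins for detector output, not the study's Fitbit or EEG data, and the threshold is arbitrary.

```python
# Sketch of the evaluation: ROC-AUC plus specificity at a fixed
# alarm threshold, on synthetic labels and detector scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)        # EEG-confirmed seizure labels (synthetic)
# Weakly informative detector scores -> AUC somewhat above chance (0.5)
scores = y_true * 0.5 + rng.normal(size=1000)

auc = roc_auc_score(y_true, scores)

# Specificity at an arbitrary threshold; low specificity means many false alarms
preds = scores > 0.5
tn = np.sum((preds == 0) & (y_true == 0))
fp = np.sum((preds == 1) & (y_true == 0))
specificity = tn / (tn + fp)
print(round(auc, 2), round(specificity, 2))
```

An AUC only slightly above 0.5, as in the study, means almost any operating point trades a small sensitivity gain for a large false-alarm burden.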
7.
8.
JMIR Form Res; 4(6): e18279, 2020 Jun 16.
Article in English | MEDLINE | ID: mdl-32459656

ABSTRACT

BACKGROUND: Qualitative self- or parent-reports used in assessing children's behavioral disorders are often inconvenient to collect and can be misleading due to missing information, rater biases, and limited validity. A data-driven approach to quantify behavioral disorders could alleviate these concerns. This study proposes a machine learning approach to identify screams in voice recordings that avoids the need to gather large amounts of clinical data for model training. OBJECTIVE: The goal of this study is to evaluate if a machine learning model trained only on publicly available audio data sets could be used to detect screaming sounds in audio streams captured in an at-home setting. METHODS: Two sets of audio samples were prepared to evaluate the model: a subset of the publicly available AudioSet data set and a set of audio data extracted from the TV show Supernanny, which was chosen for its similarity to clinical data. Scream events were manually annotated for the Supernanny data, and existing annotations were refined for the AudioSet data. Audio feature extraction was performed with a convolutional neural network pretrained on AudioSet. A gradient-boosted tree model was trained and cross-validated for scream classification on the AudioSet data and then validated independently on the Supernanny audio. RESULTS: On the held-out AudioSet clips, the model achieved a receiver operating characteristic (ROC)-area under the curve (AUC) of 0.86. The same model applied to three full episodes of Supernanny audio achieved an ROC-AUC of 0.95 and an average precision (positive predictive value) of 42% despite screams only making up 1.3% (n=92/7166 seconds) of the total run time. CONCLUSIONS: These results suggest that a scream-detection model trained with publicly available data could be valuable for monitoring clinical recordings and identifying tantrums as opposed to depending on collecting costly privacy-protected clinical data for model training.
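The two-stage pipeline described above (fixed pretrained audio embeddings feeding a gradient-boosted tree classifier) can be sketched as follows. The random vectors here are stand-ins for the pretrained CNN features, and all dimensions and labels are illustrative, not the study's data.

```python
# Sketch of the scream-classification pipeline: precomputed audio
# embeddings -> gradient-boosted tree classifier. All data synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, d = 600, 128                              # audio clips x embedding dimension
X = rng.normal(size=(n, d))                  # stand-in for pretrained CNN embeddings
y = (X[:, 0] + X[:, 1] > 0.5).astype(int)    # synthetic scream labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(round(auc, 2))
```

Freezing the embedding network and training only the tree model is what lets the approach work with modest amounts of labeled audio, which is the study's central point.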

9.
J Vis Exp; (140), 2018 Oct 5.
Article in English | MEDLINE | ID: mdl-30346402

ABSTRACT

Infants and toddlers view the world, at a basic sensory level, in a fundamentally different way from their parents. This is largely due to biological constraints: infants possess different body proportions than their parents, and their ability to control their own head movements is less developed. Such constraints limit the available visual input. This protocol aims to provide guiding principles for researchers using head-mounted cameras to understand the changing visual input experienced by the developing infant. Successful use of this protocol will allow researchers to design and execute studies of the developing child's visual environment set in the home or laboratory. From this method, researchers can compile an aggregate view of all the possible items in a child's field of view. This method does not directly measure exactly what the child is looking at. By combining this approach with machine learning, computer vision algorithms, and hand-coding, researchers can produce a high-density dataset to illustrate the changing visual ecology of the developing infant.


Subject(s)
Child Development, Video Recording/instrumentation, Video Recording/methods, Vision, Ocular/physiology, Child, Preschool, Female, Hand/physiology, Humans, Infant, Male, Visual Perception/physiology
10.
Proc ACM Int Conf Multimodal Interact; 2015: 351-354, 2015 Nov.
Article in English | MEDLINE | ID: mdl-28966999

ABSTRACT

Wearable devices are becoming part of everyday life, from first-person cameras (GoPro, Google Glass), to smart watches (Apple Watch), to activity trackers (FitBit). These devices are often equipped with advanced sensors that gather data about the wearer and the environment. These sensors enable new ways of recognizing and analyzing the wearer's everyday personal activities, which could be used for intelligent human-computer interfaces and other applications. We explore one possible application by investigating how egocentric video data collected from head-mounted cameras can be used to recognize social activities between two interacting partners (e.g. playing chess or cards). In particular, we demonstrate that just the positions and poses of hands within the first-person view are highly informative for activity recognition, and present a computer vision approach that detects hands to automatically estimate activities. While hand pose detection is imperfect, we show that combining evidence across first-person views from the two social partners significantly improves activity recognition accuracy. This result highlights how integrating weak but complementary sources of evidence from social partners engaged in the same task can help to recognize the nature of their interaction.
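One simple way to combine evidence across the two partners' views, in the spirit of the abstract above, is late fusion of per-view activity probabilities. The probabilities and activity names below are illustrative, not taken from the paper.

```python
# Sketch of late fusion across two first-person views: average the
# per-view activity distributions from each partner's (hypothetical)
# hand-based classifier. All values illustrative.
import numpy as np

activities = ["chess", "cards", "puzzle"]

view_a = np.array([0.50, 0.30, 0.20])   # partner A's classifier output
view_b = np.array([0.70, 0.20, 0.10])   # partner B's classifier output

# Averaging two weak distributions sharpens the shared mode
fused = (view_a + view_b) / 2
fused /= fused.sum()

print(activities[int(np.argmax(fused))])
```

Each view alone may be noisy, but when both weakly favor the same activity, the fused distribution favors it more decisively, which is the intuition behind the paper's accuracy gain.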

11.
Proc IEEE Int Conf Comput Vis; 2015: 1949-1957, 2015 Dec.
Article in English | MEDLINE | ID: mdl-29225555

ABSTRACT

Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that work well in only simple scenarios, such as with limited interaction with other people or in lab settings. We develop methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduce a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost. We show how these high-quality bounding boxes can be used to create accurate pixelwise hand regions, and as an application, we investigate the extent to which hand segmentation alone can distinguish between different activities. We evaluate these techniques on a new dataset of 48 first-person videos of people interacting in realistic environments, with pixel-level ground truth for over 15,000 hand instances.
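Detection work like this is typically scored by intersection-over-union (IoU) between predicted and ground-truth bounding boxes. The sketch below shows the standard computation; the box coordinates are illustrative, and the paper's own evaluation protocol may differ in detail.

```python
# Standard intersection-over-union (IoU) for axis-aligned boxes
# given as (x1, y1, x2, y2). Example boxes are illustrative.
def iou(a, b):
    """IoU of two boxes: intersection area / union area, in [0, 1]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 50, 50)    # predicted hand box
truth = (20, 20, 60, 60)   # ground-truth hand box
print(round(iou(pred, truth), 3))
```

A detection is conventionally counted as correct when IoU with a ground-truth box exceeds a threshold (0.5 is the common choice), which is how candidate-region quality comparisons like the one above are made.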
