ABSTRACT
The xCures platform aggregates, organizes, structures, and normalizes clinical EMR data across care sites, using advanced technologies to provide near real-time access. The platform generates data in a format that supports clinical care, accelerates research, and promotes artificial intelligence/machine learning algorithm development, highlighted by a clinical decision support algorithm for precision oncology.
Subjects
Artificial Intelligence, Electronic Health Records, Machine Learning, Neoplasms, Humans, Neoplasms/therapy, Algorithms, Medical Oncology/methods, Clinical Trials as Topic, Clinical Decision Support Systems
ABSTRACT
Light-sheet fluorescence microscopy (LSFM), a prominent fluorescence microscopy technique, offers enhanced temporal resolution for imaging biological samples in four dimensions (4D; x, y, z, time). Some of the most recent implementations, including inverted selective plane illumination microscopy (iSPIM) and lattice light-sheet microscopy (LLSM), move the sample substrate at an oblique angle relative to the detection objective's optical axis. Data from such tilted-sample-scan LSFMs require subsequent deskewing and rotation for proper visualisation and analysis. Such data preprocessing operations currently demand substantial memory allocation and pose significant computational challenges for large 4D datasets. The consequence is prolonged data preprocessing time compared to data acquisition time, which limits the ability to view the data live as it is captured by the microscope. To enable the fast preprocessing of large light-sheet microscopy datasets without significant hardware demand, we have developed WH-Transform, a memory-efficient transformation algorithm for deskewing and rotating the raw dataset, significantly reducing memory usage and run time by more than 10-fold for large image stacks. Benchmarked against the conventional method and existing software, our approach demonstrates linear runtime compared with the cubic and quadratic runtimes of the other approaches. Preprocessing a raw 3D volume of 2 GB (512 × 1536 × 600 pixels) can be accomplished in 3 s using a GPU with 24 GB of memory on a single workstation. Applied to 4D LLSM datasets of human hepatocytes, lung organoid tissue and brain organoid tissue, our method provided rapid and accurate preprocessing within seconds. Importantly, such preprocessing speeds now allow visualisation of the raw microscope data stream in real time, significantly improving the usability of LLSM in biology. In summary, this advancement holds transformative potential for light-sheet microscopy, enabling real-time, on-the-fly data preprocessing, visualisation, and analysis on standard workstations, thereby revolutionising biological imaging applications for LLSM and similar microscopes.
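For readers unfamiliar with this preprocessing step, the sketch below illustrates how deskewing (a shear proportional to the scan step) and rotation by the tilt angle can be composed into a single affine map and applied in one interpolation pass. The tilt angle, voxel sizes, and array layout are hypothetical, and this is a generic illustration rather than the WH-Transform implementation.

```python
# Minimal sketch: composing deskew (shear) and rotation into one affine
# transform for tilted-sample-scan light-sheet data. Angle, voxel sizes,
# and axis ordering are hypothetical; this is NOT WH-Transform itself.
import numpy as np
from scipy.ndimage import affine_transform

def deskew_rotate(stack, angle_deg=30.0, dz_um=0.5, dx_um=0.104):
    """stack: raw volume ordered (z, y, x); returns the transformed volume."""
    theta = np.deg2rad(angle_deg)
    shear = dz_um * np.cos(theta) / dx_um            # x-shift per z-plane (deskew)
    S = np.array([[1.0,   0.0, 0.0],                  # shear matrix in (z, y, x)
                  [0.0,   1.0, 0.0],
                  [shear, 0.0, 1.0]])
    R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],  # rotate about y by the tilt angle
                  [ 0.0,           1.0, 0.0          ],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
    M = R @ S                                        # single combined transform
    # affine_transform maps output coords to input coords, so pass the inverse;
    # a real pipeline would also compute a suitable output_shape and offset.
    return affine_transform(stack, np.linalg.inv(M), order=1)
```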
ABSTRACT
While Artificial Intelligence (AI) has the potential to transform the field of diagnostic radiology, important obstacles still inhibit its integration into clinical environments. Foremost among them is the inability to integrate clinical information and prior and concurrent imaging examinations, which can lead to diagnostic errors that could irreversibly alter patient care. For AI to succeed in modern clinical practice, model training and algorithm development need to account for relevant background information that may influence the presentation of the patient in question. While AI is often remarkably accurate in distinguishing binary outcomes (hemorrhage vs. no hemorrhage; fracture vs. no fracture), the narrow scope of current training datasets prevents AI from examining the entire clinical context of the image in question. In this article, we provide an overview of the ways in which failure to account for clinical data and prior imaging can adversely affect AI interpretation of imaging studies. We then showcase how emerging techniques such as multimodal fusion and combined neural networks can take advantage of both clinical and imaging data, as well as how development strategies like domain adaptation can ensure greater generalizability of AI algorithms across diverse and dynamic clinical environments.
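As a toy illustration of the kind of multimodal fusion the article discusses, the sketch below concatenates features from an image encoder with encoded tabular clinical variables before classification; all layer sizes and names are invented and do not correspond to any model described in the article.

```python
# Toy sketch of feature-level ("late") fusion of imaging and clinical data,
# one generic way to give an imaging model access to clinical context.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_clinical=16, n_classes=2):
        super().__init__()
        self.image_encoder = nn.Sequential(            # stand-in CNN backbone
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())     # -> 8 image features
        self.clinical_encoder = nn.Sequential(
            nn.Linear(n_clinical, 8), nn.ReLU())       # -> 8 clinical features
        self.head = nn.Linear(8 + 8, n_classes)        # classifier on fused features

    def forward(self, image, clinical):
        z = torch.cat([self.image_encoder(image),
                       self.clinical_encoder(clinical)], dim=1)
        return self.head(z)

logits = FusionNet()(torch.randn(4, 1, 64, 64), torch.randn(4, 16))
```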
ABSTRACT
OBJECTIVES: To develop, validate, and implement algorithms to identify diabetic retinopathy (DR) cases and controls from electronic health records (EHRs). MATERIALS AND METHODS: We developed and validated EHR-based algorithms to identify DR cases and individuals with type I or II diabetes without DR (controls) in 3 independent EHR systems: Vanderbilt University Medical Center Synthetic Derivative (VUMC), the VA Northeast Ohio Healthcare System (VANEOHS), and Massachusetts General Brigham (MGB). Cases were required to meet 1 of the following 3 criteria: (1) 2 or more dates with any DR ICD-9/10 code documented in the EHR, (2) at least one affirmative health-factor or EPIC code for DR along with an ICD-9/10 code for DR on a different day, or (3) at least one ICD-9/10 code for any DR occurring within 24 hours of an ophthalmology examination. Criteria for controls included affirmative evidence for diabetes as well as an ophthalmology examination. RESULTS: The algorithms, developed and evaluated in VUMC through manual chart review, resulted in a positive predictive value (PPV) of 0.93 for cases and a negative predictive value (NPV) of 0.91 for controls. Implementation of the algorithms yielded similar metrics in VANEOHS (PPV = 0.94; NPV = 0.86) and lower metrics in MGB (PPV = 0.84; NPV = 0.76). In comparison, the DR algorithm implemented in phenome-wide association studies (PheWAS) in VUMC yielded similar PPV (0.92) but substantially reduced NPV (0.48). Implementation of the algorithms in the Million Veteran Program identified over 62,000 DR cases with genetic data, including 14,549 African Americans and 6,209 Hispanics with DR. CONCLUSIONS/DISCUSSION: We demonstrate the robustness of the algorithms at 3 separate healthcare centers, with a minimum PPV of 0.84 and substantially improved NPV compared with existing automated methods. We strongly encourage independent validation and incorporation of features unique to each EHR to enhance algorithm performance for DR cases and controls.
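A schematic rendering of the three case criteria is sketched below; the record layout and field names are hypothetical stand-ins for the actual EHR tables, not the published implementation.

```python
# Sketch of the three case criteria as stated above; the field names
# (dr_icd_dates, dr_health_factor_dates, dr_icd_datetimes, ophtho_exam_times)
# are hypothetical placeholders for the real EHR data structures.
from datetime import timedelta

def is_dr_case(rec):
    icd_dates = set(rec.get("dr_icd_dates", []))          # dates carrying any DR ICD-9/10 code
    # Criterion 1: DR ICD codes documented on two or more distinct dates.
    if len(icd_dates) >= 2:
        return True
    # Criterion 2: an affirmative health-factor/EPIC DR code plus a DR ICD code on a different day.
    if any(hf != icd for hf in rec.get("dr_health_factor_dates", []) for icd in icd_dates):
        return True
    # Criterion 3: a DR ICD code within 24 hours of an ophthalmology examination.
    return any(abs(icd_dt - exam) <= timedelta(hours=24)
               for icd_dt in rec.get("dr_icd_datetimes", [])
               for exam in rec.get("ophtho_exam_times", []))
```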
Subjects
Algorithms, Diabetic Retinopathy, Electronic Health Records, Humans, Diabetic Retinopathy/diagnosis, Male, Middle Aged, Female, Case-Control Studies, Aged, International Classification of Diseases, Diabetes Mellitus, Type 2/diagnosis, Diabetes Mellitus, Type 1/diagnosis, Adult
ABSTRACT
The purpose of this study was to develop and validate an algorithm for identifying Veterans with a history of traumatic brain injury (TBI) in the Veterans Affairs (VA) electronic health record using VA Million Veteran Program (MVP) data. Manual chart review (n = 200) was first used to establish 'gold standard' diagnosis labels for TBI ('Yes TBI' vs. 'No TBI'). To develop our algorithm, we used PheCAP, a semi-supervised pipeline that relied on the chart review diagnosis labels to train and create a prediction model for TBI. Cross-validation was used to train and evaluate the proposed algorithm, 'TBI-PheCAP.' TBI-PheCAP performance was compared to existing TBI algorithms and phenotyping methods, and the final algorithm was run on all MVP participants (n = 702,740) to assign a predicted probability of TBI and a binary classification status, using a threshold chosen to yield 90% specificity. The TBI-PheCAP algorithm had an area under the receiver operating characteristic curve of 0.92, sensitivity of 84%, and positive predictive value (PPV) of 98% at 90% specificity. TBI-PheCAP generally performed better than other classification methods, with equivalent or higher sensitivity and PPV than existing rules-based TBI algorithms and MVP TBI-related survey data. Given its strong classification metrics, the TBI-PheCAP algorithm is recommended for use in future population-based TBI research.
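The classification step described above, a binary status at a probability cutoff chosen to give 90% specificity, can be illustrated as follows; the variable names are placeholders and this is not the PheCAP code.

```python
# Sketch: pick the probability cutoff yielding 90% specificity on labeled
# validation data, then apply it to the full cohort. y_val, p_val, and
# p_all are hypothetical arrays of labels and predicted probabilities.
import numpy as np
from sklearn.metrics import roc_curve

def threshold_at_specificity(y_val, p_val, target_spec=0.90):
    fpr, tpr, thresholds = roc_curve(y_val, p_val)
    ok = np.where(1 - fpr >= target_spec)[0]   # specificity = 1 - false positive rate
    return thresholds[ok[-1]]                  # most sensitive cutoff still meeting the target

# tbi_status = (p_all >= threshold_at_specificity(y_val, p_val)).astype(int)
```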
Subjects
Algorithms, Brain Injuries, Traumatic, Electronic Health Records, United States Department of Veterans Affairs, Veterans, Humans, Brain Injuries, Traumatic/diagnosis, United States, Male, Female, Middle Aged, Adult, Reproducibility of Results
ABSTRACT
Light-sheet fluorescence microscopy (LSFM), a prominent fluorescence microscopy technique, offers enhanced temporal resolution for imaging biological samples in four dimensions (4D; x, y, z, time). Some of the most recent implementations, including inverted selective plane illumination microscopy (iSPIM) and lattice light-sheet microscopy (LLSM), rely on tilting the sample plane by 30-45 degrees with respect to the light sheet to ease sample preparation. Data from such tilted-sample-plane LSFMs require subsequent deskewing and rotation for proper visualization and analysis. Such transformations currently demand substantial memory allocation. This poses computational challenges, especially with large datasets. The consequence is long processing times compared to data acquisition times, which currently limits the ability to view the data live as it is captured by the microscope. To enable the fast preprocessing of large light-sheet microscopy datasets without significant hardware demand, we have developed WH-Transform, a novel GPU-accelerated, memory-efficient algorithm that integrates deskewing and rotation into a single transformation, significantly reducing memory requirements and the preprocessing run time by at least 10-fold for large image stacks. Benchmarked against conventional methods and existing software, our approach demonstrates linear scalability. Processing large 3D stacks of up to 15 GB is now possible within one minute using a single GPU with 24 GB of memory. Applied to 4D LLSM datasets of human hepatocytes, human lung organoid tissue, and human brain organoid tissue, our method outperforms alternatives, providing rapid, accurate preprocessing within seconds. Importantly, such processing speeds now allow visualization of the raw microscope data stream in real time, significantly improving the usability of LLSM in biology. In summary, this advancement holds transformative potential for light-sheet microscopy, enabling real-time, on-the-fly data processing, visualization, and analysis on standard workstations, thereby revolutionizing biological imaging applications for LLSM, SPIM and similar light microscopes.
ABSTRACT
To enhance market demand and fish utilization, cutting is an essential processing step for fish. Bighead carp were cut into four primary cuts: head, dorsal, belly, and tail, collectively accounting for 77.03% of the fish's total weight. These cuts were refrigerated at 4 °C for 10 days, during which the muscle from each cut was analyzed. Pseudomonas fragi proliferated most rapidly and was most abundant in eye muscle (EM), while Aeromonas sobria showed similar growth patterns in tail muscle (TM). Notably, EM exhibited the highest rate of fat oxidation, and TM experienced the most rapid protein degradation. Furthermore, to facilitate cutting in mechanical processing, a machine vision-based algorithm was developed. This algorithm used color thresholding and morphological parameters to segment the image background and delineate the bighead carp regions. Consequently, each cut of bighead carp had a different storage quality, and the machine vision-based algorithm proved effective for processing bighead carp.
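A minimal sketch of a colour-threshold-plus-morphology segmentation of this kind is shown below; the HSV bounds and kernel size are invented values, not the calibrated thresholds from the study.

```python
# Sketch of a colour-threshold + morphology segmentation step; the HSV
# range and structuring-element size are hypothetical placeholders.
import cv2
import numpy as np

def segment_carp(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Keep pixels whose colour falls inside a (hypothetical) fish-flesh range.
    mask = cv2.inRange(hsv, np.array([0, 30, 40]), np.array([30, 255, 255]))
    # Morphological opening/closing removes speckle and fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep the largest connected component as the carp region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    return mask
```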
Subjects
Algorithms, Carps, Food Storage, Seafood, Carps/growth & development, Animals, Seafood/analysis, Pseudomonas/growth & development, Aeromonas/growth & development
ABSTRACT
The ability to estimate lower-extremity mechanics in real-world scenarios may untether biomechanics research from a laboratory environment. This is particularly important for military populations, where outdoor ruck marches over variable terrain and the addition of external load are cited as leading causes of musculoskeletal injury. As such, this study aimed to examine (1) the validity of a minimal inertial measurement unit (IMU) sensor system for quantifying lower-extremity kinematics during treadmill walking and running compared with optical motion capture (OMC) and (2) the sensitivity of this IMU system to kinematic changes induced by load, grade, or a combination of the two. The IMU system was able to estimate hip and knee range of motion (ROM) with moderate accuracy during walking but not running. However, statistical parametric mapping (SPM) analyses revealed that IMU and OMC kinematic waveforms were significantly different at most gait phases. The IMU system was capable of detecting differences in knee kinematic waveforms that occur with added load but was not sensitive to changes in grade that influence lower-extremity kinematics when measured with OMC. While IMUs may be able to identify hip and knee ROM during gait, they are not suitable for replicating lab-level kinematic waveforms.
Subjects
Knee Joint, Walking, Biomechanical Phenomena, Gait, Range of Motion, Articular, Humans
ABSTRACT
OBJECTIVE: This study aimed to 1) investigate algorithm enhancements for identifying patients eligible for genetic testing of hereditary cancer syndromes using family history data from electronic health records (EHRs); and 2) assess their impact on relative differences across sex, race, ethnicity, and language preference. MATERIALS AND METHODS: The study used EHR data from a tertiary academic medical center. A baseline rule-based algorithm, relying on structured family history data (structured data; SD), was enhanced using a natural language processing (NLP) component and a relaxed criteria algorithm (partial match [PM]). The identification rates and differences were analyzed considering sex, race, ethnicity, and language preference. RESULTS: Among 120,007 patients aged 25-60, detection rate differences were found across all groups using the SD algorithm (all P < 0.001). Both enhancements increased identification rates; NLP led to a 1.9% increase and the relaxed criteria algorithm (PM) led to an 18.5% increase (both P < 0.001). Combining SD with NLP and PM yielded a 20.4% increase (P < 0.001). Similar increases were observed within subgroups. Relative differences persisted across most categories for the enhanced algorithms, with disproportionately higher identification of patients who are White, female, non-Hispanic, and whose preferred language is English. CONCLUSION: Algorithm enhancements increased identification rates for patients eligible for genetic testing of hereditary cancer syndromes, regardless of sex, race, ethnicity, and language preference. However, differences in identification rates persisted, emphasizing the need for additional strategies to reduce disparities, such as addressing underlying biases in EHR family health information and selectively applying algorithm enhancements for disadvantaged populations. Systematic assessment of differences in algorithm performance across population subgroups should be incorporated into algorithm development processes.
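The combination of the three identification pathways can be sketched as below; the flag names and data layout are hypothetical and do not reproduce the study's implementation.

```python
# Sketch: combining the structured-data (SD), NLP, and partial-match (PM)
# pathways and reporting identification rates; column names are invented.
import pandas as pd

def identification_rates(df: pd.DataFrame) -> pd.Series:
    eligible = {
        "SD only":       df["sd_flag"],
        "SD + NLP":      df["sd_flag"] | df["nlp_flag"],
        "SD + PM":       df["sd_flag"] | df["pm_flag"],
        "SD + NLP + PM": df["sd_flag"] | df["nlp_flag"] | df["pm_flag"],
    }
    return pd.Series({name: flag.mean() for name, flag in eligible.items()})

# Rates by subgroup, e.g. per sex: df.groupby("sex").apply(identification_rates)
```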
Subjects
Algorithms, Neoplastic Syndromes, Hereditary, Humans, Female, Genetic Testing, Electronic Health Records, Natural Language Processing
ABSTRACT
Coronavirus disease 2019 (COVID-19), the disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus, has had extensive economic, social, and public health impacts in the United States and around the world. To date, there have been more than 600 million reported infections worldwide and more than 6 million reported deaths. Retrospective analysis, which identified comorbidities, risk factors, and treatments, has underpinned the response. As the situation transitions to an endemic phase, retrospective analyses using electronic health records will be important to identify the long-term effects of COVID-19. However, these analyses can be complicated by incomplete records, which make it difficult to differentiate visits where the patient had COVID-19. To address this issue, we trained a random forest classifier to assign a probability of a patient having been diagnosed with COVID-19 during each visit. Using these probabilities, we found that higher COVID-19 probabilities were associated with a future diagnosis of myocardial infarction, urinary tract infection, acute renal failure, and type 2 diabetes.
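A minimal sketch of the modelling step, assuming a visit-level feature table with hypothetical column names, is shown below.

```python
# Sketch: fit a random forest on labeled visits, then assign every visit a
# probability of a COVID-19 diagnosis. `visits`, its feature columns, and
# the label column are hypothetical placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

features = ["fever", "cough", "sars_cov_2_pcr_ordered", "o2_saturation", "icu_flag"]
X_train, X_test, y_train, y_test = train_test_split(
    visits[features], visits["covid_dx_label"], test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

# Probability that each visit involved a COVID-19 diagnosis.
visits["covid_probability"] = clf.predict_proba(visits[features])[:, 1]
```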
ABSTRACT
Background: Continuous electrocardiographic (ECG) monitoring is used to identify ventricular tachycardia (VT), but false alarms occur frequently. Objective: The purpose of this study was to compare the rate of 30-day in-hospital mortality associated with VT alerts generated from bedside ECG monitors with that associated with alerts from a new algorithm among intensive care unit (ICU) patients. Methods: We conducted a retrospective cohort study in consecutive adult ICU patients at an urban academic medical center and compared current bedside monitor VT alerts, VT alerts from a new (unannotated) algorithm, and true (annotated) VT. We used survival analysis to explore the association between VT alerts and mortality. Results: We included 5679 ICU admissions (mean age 58 ± 17 years; 48% women), of which 503 (8.9%) experienced 30-day in-hospital mortality. A total of 30.1% had at least 1 current bedside monitor VT alert, 14.3% had a VT alert from the new unannotated algorithm, and 11.6% had true annotated VT. A bedside monitor VT alert was not associated with an increased rate of 30-day mortality (adjusted hazard ratio [aHR] 1.06; 95% confidence interval [CI] 0.88-1.27), but there was an association for VT alerts from our new unannotated algorithm (aHR 1.38; 95% CI 1.12-1.69) and for true annotated VT (aHR 1.39; 95% CI 1.12-1.73). Conclusion: VT alerts from the new unannotated algorithm and true annotated VT were associated with an increased rate of 30-day in-hospital mortality, whereas current bedside monitor VT alerts were not. Our new algorithm may accurately identify high-risk VT; however, prospective validation is needed.
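The survival analysis described can be sketched as a Cox proportional hazards model with a VT-alert indicator plus adjustment covariates; the column names below are hypothetical.

```python
# Sketch of a Cox model of 30-day in-hospital mortality with a VT-alert
# indicator; `admissions` and its columns are hypothetical placeholders.
from lifelines import CoxPHFitter

cols = ["days_to_event_or_censor", "died_30d", "vt_alert", "age", "female"]
cph = CoxPHFitter()
cph.fit(admissions[cols],
        duration_col="days_to_event_or_censor",
        event_col="died_30d")
cph.print_summary()   # adjusted hazard ratio for vt_alert with 95% CI
```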
ABSTRACT
BACKGROUND: Comprehensive models of survivorship care are necessary to improve access to and coordination of care. New models of care provide the opportunity to address the complexity of physical and psychosocial problems and long-term health needs experienced by patients following cancer treatment. OBJECTIVE: This paper presents our expert-informed, rules-based survivorship algorithm to build a nurse-led model of survivorship care to support men living with prostate cancer (PCa). The algorithm is called No Evidence of Disease (Ned) and supports timelier decision-making, enhanced safety, and continuity of care. METHODS: An initial rule set was developed and refined through working groups with clinical experts across Canada (eg, nurse experts, physician experts, and scientists; n=20) and patient partners (n=3). Algorithm priorities were defined through a multidisciplinary consensus meeting with clinical nurse specialists, nurse scientists, nurse practitioners, urologic oncologists, urologists, and radiation oncologists (n=17). The system was refined and validated using the nominal group technique. RESULTS: Four levels of alert classification were established, initiated by responses on the Expanded Prostate Cancer Index Composite for Clinical Practice survey and mediated by changes relative to minimal clinically important difference alert thresholds, alert history, and clinical urgency, with patient autonomy influencing clinical acuity. Patient autonomy was supported through tailored education as a first line of response and through alert escalation depending on a patient-initiated request for a nurse consultation. CONCLUSIONS: The Ned algorithm is positioned to facilitate PCa nurse-led care models with a high nurse-to-patient ratio. This novel expert-informed PCa survivorship care algorithm contains a defined escalation pathway for clinically urgent symptoms while honoring patient preference. Though further validation is required through a pragmatic trial, we anticipate that the Ned algorithm will support timelier decision-making and enhance continuity of care through more frequent automated checkpoints, while empowering patients to self-manage their symptoms more effectively than standard care. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.1136/bmjopen-2020-045806.
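A highly simplified sketch of a four-level, rules-based alert classifier of this kind is given below; the threshold and argument names are invented for illustration and do not reproduce the Ned rule set.

```python
# Toy sketch of four-level alert classification driven by survey-score change,
# alert history, clinical urgency, and patient-initiated escalation.
MCID = 3   # hypothetical minimal clinically important difference for one survey domain

def classify_alert(score_change, urgent_symptom, prior_alerts, patient_requests_nurse):
    if urgent_symptom:
        level = 3                        # clinically urgent symptom: immediate escalation
    elif score_change >= MCID and prior_alerts >= 1:
        level = 2                        # repeated clinically meaningful worsening
    elif score_change >= MCID:
        level = 1                        # first meaningful worsening: tailored education
    else:
        level = 0                        # no alert
    if patient_requests_nurse:
        level = max(level, 2)            # patient-initiated nurse request raises acuity
    return level
```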
ABSTRACT
Challenges have become the state-of-the-art approach to benchmark image analysis algorithms in a comparative manner. While validation on identical data sets was a great step forward, results analysis is often restricted to pure ranking tables, leaving relevant questions unanswered. Specifically, little effort has been put into systematically investigating what characterizes images in which state-of-the-art algorithms fail. To address this gap in the literature, we (1) present a statistical framework for learning from challenges and (2) instantiate it for the specific task of instrument instance segmentation in laparoscopic videos. Our framework relies on the semantic metadata annotation of images, which serves as the foundation for a general linear mixed model (GLMM) analysis. Based on 51,542 metadata annotations performed on 2,728 images, we applied our approach to the results of the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge 2019 and revealed underexposure, motion, and occlusion of instruments, as well as the presence of smoke or other objects in the background, as major sources of algorithm failure. Our subsequent method development, tailored to the specific remaining issues, yielded a deep learning model with state-of-the-art overall performance and specific strengths in the processing of images in which previous methods tended to fail. Due to the objectivity and generic applicability of our approach, it could become a valuable tool for validation in the field of medical image analysis and beyond.
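A mixed-model analysis of this kind can be sketched as below, relating a per-image segmentation score to semantic image properties with a random intercept per algorithm; the column names and the exact model family are illustrative rather than the paper's GLMM specification.

```python
# Sketch: linear mixed model of per-image performance vs. image metadata,
# with a random intercept for each algorithm. `results` is a hypothetical
# DataFrame with one row per (algorithm, image) pair.
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "dice ~ underexposed + motion_blur + occlusion + smoke",
    data=results,
    groups=results["algorithm"])   # random intercept per algorithm
print(model.fit().summary())
```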
Subjects
Algorithms, Laparoscopy, Humans, Image Processing, Computer-Assisted/methods
ABSTRACT
This study aimed to validate a 7-sensor inertial measurement unit system against optical motion capture to estimate bilateral lower-limb kinematics. Hip, knee, and ankle sagittal plane peak angles and range of motion (ROM) were compared during bodyweight squats and countermovement jumps in 18 participants. In the bodyweight squats, left peak hip flexion (intraclass correlation coefficient [ICC] = .51), knee extension (ICC = .68), and ankle plantar flexion (ICC = .55), and hip (ICC = .63) and knee (ICC = .52) ROM had moderate agreement, and right knee ROM had good agreement (ICC = .77). Relatively higher agreement was observed in the countermovement jumps compared with the bodyweight squats, with moderate to good agreement in right peak knee flexion (ICC = .73) and in right (ICC = .75) and left (ICC = .83) knee ROM. Moderate agreement was observed for right ankle plantar flexion (ICC = .63) and ROM (ICC = .51). Moderate agreement (ICC > .50) was observed in all variables in the left limb except hip extension, knee flexion, and dorsiflexion. In general, there was poor agreement for peak flexion angles and at least moderate agreement for joint ROM. Future work will aim to optimize methodologies to increase usability and confidence in data interpretation by minimizing variance in system-based differences and may also benefit from expanding planes of movement.
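Agreement statistics of this type are commonly two-way random-effects ICCs; a minimal sketch of an absolute-agreement ICC(2,1) computation (Shrout & Fleiss), assuming paired per-participant values from the two systems, is shown below. It is a generic illustration, not necessarily the ICC form used in the study.

```python
# Sketch: two-way random-effects, absolute-agreement ICC(2,1) for one
# variable measured by two systems; `imu` and `omc` are hypothetical
# arrays of per-participant values (same length, same order).
import numpy as np

def icc_2_1(imu, omc):
    Y = np.column_stack([imu, omc])              # n subjects x k raters (k = 2 systems)
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    ms_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # between systems
    sse = np.sum((Y - Y.mean(axis=1, keepdims=True)
                    - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)
```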
Subjects
Ankle, Lower Extremity, Humans, Biomechanical Phenomena, Ankle Joint, Knee Joint, Posture, Range of Motion, Articular
ABSTRACT
Phillip L. Geissler made important contributions to the statistical mechanics of biological polymers, heterogeneous materials, and chemical dynamics in aqueous environments. He devised analytical and computational methods that revealed the underlying organization of complex systems at the frontiers of biology, chemistry, and materials science. In this retrospective we celebrate his work at these frontiers.
Subjects
Physics, Male, Humans, Retrospective Studies, Chemistry, Physical
ABSTRACT
BACKGROUND: Accurate projections of procedural case durations are complex but critical to the planning of perioperative staffing, operating room resources, and patient communication. Nonlinear prediction models using machine learning methods may provide opportunities for hospitals to improve upon current estimates of procedure duration. OBJECTIVE: The aim of this study was to determine whether a machine learning algorithm scalable across multiple centers could estimate case duration within a tolerance limit, given the substantial operating room resources whose allocation depends on case duration. METHODS: Deep learning, gradient boosting, and ensemble machine learning models were generated using perioperative data available at 3 distinct time points: the time of scheduling, the time of patient arrival to the operating or procedure room (primary model), and the time of surgical incision or procedure start. The primary outcome was procedure duration, defined by the time between the arrival and the departure of the patient from the procedure room. Model performance was assessed by mean absolute error (MAE), the proportion of predictions falling within 20% of the actual duration, and other standard metrics. Performance was compared with a baseline method of historical means within a linear regression model. Model features driving predictions were assessed using Shapley additive explanations values and permutation feature importance. RESULTS: A total of 1,177,893 procedures from 13 academic and private hospitals between 2016 and 2019 were used. Across all procedures, the median procedure duration was 94 (IQR 50-167) minutes. In estimating the procedure duration, the gradient boosting machine was the best-performing model, demonstrating an MAE of 34 (SD 47) minutes, with 46% of the predictions falling within 20% of the actual duration in the test data set. This represented a statistically and clinically significant improvement in predictions compared with a baseline linear regression model (MAE 43 min; P<.001; 39% of the predictions falling within 20% of the actual duration). The most important features in model training were historical procedure duration by surgeon, the word "free" within the procedure text, and the time of day. CONCLUSIONS: Nonlinear models using machine learning techniques may be used to generate high-performing, automatable, explainable, and scalable prediction models for procedure duration.
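A minimal sketch of a gradient-boosting duration model with the two headline metrics (MAE and the share of predictions within 20% of the actual duration) is given below; the feature names and the `cases` data frame are hypothetical, and this is not the study's model.

```python
# Sketch: gradient-boosted regression of procedure duration, evaluated by
# MAE and the fraction of predictions within 20% of the actual duration.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

features = ["surgeon_historical_mean", "scheduled_duration", "hour_of_day", "asa_class"]
X_train, X_test, y_train, y_test = train_test_split(
    cases[features], cases["procedure_minutes"], test_size=0.2, random_state=0)

model = HistGradientBoostingRegressor().fit(X_train, y_train)
pred = model.predict(X_test)

mae = mean_absolute_error(y_test, pred)
within_20pct = np.mean(np.abs(pred - y_test) <= 0.20 * y_test)
print(f"MAE {mae:.1f} min; {within_20pct:.0%} of predictions within 20%")
```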
ABSTRACT
BACKGROUND: Under the paradigm of precision medicine (PM), patients with the same disease can receive different personalized therapies according to their clinical and genetic features. These therapies are determined by the totality of all available clinical evidence, including results from case reports, clinical trials, and systematic reviews. However, it is increasingly difficult for physicians to find such evidence from scientific publications, whose volume is growing at an unprecedented pace. OBJECTIVE: In this work, we propose the PM-Search system to facilitate the retrieval of clinical literature that contains critical evidence for or against giving specific therapies to certain cancer patients. METHODS: The PM-Search system combines a baseline retriever that selects document candidates at a large scale and an evidence reranker that finely reorders the candidates based on their evidence quality. The baseline retriever uses query expansion and keyword matching with the ElasticSearch retrieval engine, and the evidence reranker fits pretrained language models to expert annotations that are derived from an active learning strategy. RESULTS: The PM-Search system achieved the best performance in the retrieval of high-quality clinical evidence at the Text Retrieval Conference PM Track 2020, outperforming the second-ranking systems by large margins (0.4780 vs 0.4238 for standard normalized discounted cumulative gain at rank 30 and 0.4519 vs 0.4193 for exponential normalized discounted cumulative gain at rank 30). CONCLUSIONS: We present PM-Search, a state-of-the-art search engine to assist the practice of evidence-based PM. PM-Search uses a novel active learning strategy based on Bidirectional Encoder Representations from Transformers for Biomedical Text Mining (BioBERT) that models evidence quality and improves model performance. Our analyses show that evidence quality is a distinct aspect from general relevance, and specific modeling of evidence quality beyond general relevance is required for a PM search engine.
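A generic retrieve-then-rerank pipeline of the kind described can be sketched as follows; the index name, fields, and the off-the-shelf cross-encoder checkpoint are placeholders and do not represent the PM-Search configuration or its BioBERT-based evidence reranker.

```python
# Sketch: keyword retrieval with Elasticsearch followed by neural reranking
# of the candidates. Index name, fields, and model checkpoint are hypothetical.
from elasticsearch import Elasticsearch
from sentence_transformers import CrossEncoder

es = Elasticsearch("http://localhost:9200")
query = "EGFR L858R lung adenocarcinoma osimertinib"

# Stage 1: baseline retriever (BM25 keyword matching) pulls broad candidates.
hits = es.search(index="pm_abstracts",
                 query={"match": {"abstract": query}},
                 size=100)["hits"]["hits"]

# Stage 2: a cross-encoder rescores each (query, abstract) pair and reranks.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, h["_source"]["abstract"]) for h in hits])
reranked = [h for _, h in sorted(zip(scores, hits), key=lambda p: -p[0])]
```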
ABSTRACT
BACKGROUND: The detection of early changes in vital signs (VSs) enables timely intervention; however, the measurement of VSs requires hands-on technical expertise and is often time-consuming. The contactless measurement of VSs is beneficial to prevent infection, such as during the COVID-19 pandemic. Lifelight is novel software being developed to measure VSs by remote photoplethysmography based on video captures of the face via the integral camera on mobile phones and tablets. We report two early studies in the development of Lifelight. OBJECTIVE: The objective of the Vital Sign Comparison Between Lifelight and Standard of Care: Development (VISION-D) study (NCT04763746) was to measure respiratory rate (RR), pulse rate (PR), and blood pressure (BP) simultaneously by using current standard-of-care manual methods and the Lifelight software to iteratively refine the software algorithms. The objective of the Vital Sign Comparison Between Lifelight and Standard of Care: Validation (VISION-V) study (NCT03998098) was to validate the use of Lifelight software to accurately measure VSs. METHODS: BP, PR, and RR were measured simultaneously using Lifelight, a sphygmomanometer (BP and PR), and the manual counting of RR. Accuracy performance targets for each VS were defined from a systematic literature review of the performance of state-of-the-art VS technologies. RESULTS: The VISION-D data set (17,233 measurements from 8585 participants) met the accuracy targets for RR (mean error 0.3, SD 3.6 vs target mean error 2.3, SD 5.0; n=7462), PR (mean error 0.3, SD 4.0 vs mean error 2.2, SD 9.2; n=10,214), and diastolic BP (mean error -0.4, SD 8.5 vs mean error 5.5, SD 8.9; n=8951); for systolic BP, the mean error target was met but not the SD (mean error 3.5, SD 16.8 vs mean error 6.7, SD 15.3; n=9233). Fitzpatrick skin type did not affect accuracy. The VISION-V data set (679 measurements from 127 participants) met all the standards: mean error -0.1, SD 3.4 for RR; mean error 1.4, SD 3.8 for PR; mean error 2.8, SD 14.5 for systolic BP; and mean error -0.3, SD 7.0 for diastolic BP. CONCLUSIONS: At this early stage in development, Lifelight demonstrates sufficient accuracy in the measurement of VSs to support certification for a Level 1 Conformité Européenne mark. As the use of Lifelight does not require specific training or equipment, the software is potentially useful for the contactless measurement of VSs by nonclinical staff in residential and home care settings. Work is continuing to enhance data collection and processing to achieve the robustness and accuracy required for routine clinical use. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/14326.
ABSTRACT
In combination with appropriate data processing algorithms, wearable inertial sensors enable the measurement of motor activities in children's and adolescents' habitual environments after rehabilitation. However, existing algorithms were predominantly designed for adult patients, and their outcomes might not be relevant for a pediatric population. In this study, we identified the needs of pediatric rehabilitation to create the basis for developing new algorithms that derive clinically relevant outcomes for children and adolescents with neuromotor impairments. We conducted an international survey with health professionals of pediatric neurorehabilitation centers, provided them a list of 34 outcome measures currently used in the literature, and asked them to rate the clinical relevance of these measures for a pediatric population. The survey was completed by 62 therapists, 16 doctors, and 9 nurses of 16 different pediatric neurorehabilitation centers from Switzerland, Germany, and Austria. They had an average work experience of 13 ± 10 years. The most relevant outcome measures were the duration of lying, sitting, and standing positions; the amount of active self-propulsion during wheeling periods; the hand use laterality; and the duration, distance, and speed of walking periods. The health profession, work experience, and workplace had a minimal impact on the priorities of health professionals. Eventually, we complemented the survey findings with the family priorities of a previous study to provide developers with the clinically most relevant outcomes to monitor everyday life motor activities of children and adolescents with neuromotor impairments.
ABSTRACT
The authors present bio-optical data spanning 316 sets of observations made at 34 inland waterbodies in Australia. The data were collected over the period 2013-2021 and comprise radiometric measurements of remote sensing reflectance (Rrs) and the diffuse attenuation coefficient (Kd); optical backscattering; absorption of coloured dissolved organic matter (aCDOM), phytoplankton (aph), and non-algal particles (aNAP); HPLC analysis of algal pigments including chlorophyll-a (CHL-a); organic and inorganic total suspended solids (TSS); and total and dissolved organic carbon concentration. Data collection was timed to coincide with either Landsat 8 or Sentinel-2 overpasses. The dataset covers a diverse range of optical water types and is suitable for algorithm development, satellite calibration and validation, as well as machine learning applications.