Results 1-20 of 74
1.
Psychol Med ; : 1-9, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39247942

ABSTRACT

This position paper by the international IMMERSE consortium reviews the evidence for a digital mental health solution based on Experience Sampling Methodology (ESM) for advancing person-centered mental health care and outlines a research agenda for implementing innovative digital mental health tools into routine clinical practice. ESM is a structured diary technique for recording real-time self-report data about the current mental state using a mobile application. We will review how ESM may contribute to (1) service user engagement and empowerment, (2) self-management and recovery, (3) goal direction in clinical assessment and management of care, and (4) shared decision-making. However, despite the evidence demonstrating the value of ESM-based approaches in enhancing person-centered mental health care, ESM is hardly integrated into clinical practice. Therefore, we propose a global research agenda for implementing ESM in routine mental health care that addresses six key challenges: (1) the motivation and ability of service users to adhere to ESM monitoring, reporting, and feedback; (2) the motivation and competence of clinicians in routine healthcare delivery settings to integrate ESM into the workflow; (3) the technical requirements and (4) governance requirements for integrating these data into the clinical workflow; (5) the financial and competence-related resources required for IT infrastructure and clinician time; and (6) implementation studies that build the evidence base. While focused on ESM, the research agenda holds broader implications for implementing digital innovations in mental health. This paper calls for a shift in focus from developing new digital interventions to overcoming implementation barriers, which is essential for achieving a true transformation toward person-centered care in mental health.

2.
JMIR Med Inform ; 12: e57153, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39158950

ABSTRACT

BACKGROUND: Leveraging electronic health record (EHR) data for clinical or research purposes heavily depends on data fitness. However, there is a lack of standardized frameworks to evaluate EHR data suitability, leading to inconsistent quality in data use projects (DUPs). This research focuses on the Medical Informatics for Research and Care in University Medicine (MIRACUM) Data Integration Centers (DICs) and examines empirical practices on assessing and automating the fitness-for-purpose of clinical data in German DIC settings. OBJECTIVE: The study aims (1) to capture and discuss how MIRACUM DICs evaluate and enhance the fitness-for-purpose of observational health care data and examine the alignment with existing recommendations and (2) to identify the requirements for designing and implementing a computer-assisted solution to evaluate EHR data fitness within MIRACUM DICs. METHODS: A qualitative approach was followed using an open-ended survey across DICs of 10 German university hospitals affiliated with MIRACUM. Data were analyzed using thematic analysis following an inductive qualitative method. RESULTS: All 10 MIRACUM DICs participated, with 17 participants revealing various approaches to assessing data fitness, including the 4-eyes principle and data consistency checks such as cross-system data value comparison. Common practices included a DUP-related feedback loop on data fitness and using self-designed dashboards for monitoring. Most experts had a computer science background and a master's degree, suggesting strong technological proficiency but potentially lacking clinical or statistical expertise. Nine key requirements for a computer-assisted solution were identified, including flexibility, understandability, extendibility, and practicability. Participants used heterogeneous data repositories for evaluating data quality criteria and practical strategies to communicate with research and clinical teams. 
CONCLUSIONS: The study identifies gaps between current practices in MIRACUM DICs and existing recommendations, offering insights into the complexities of assessing and reporting clinical data fitness. Additionally, a tripartite modular framework for fitness-for-purpose assessment was introduced to streamline the forthcoming implementation. It provides valuable input for developing and integrating an automated solution across multiple locations. This may range from statistical comparisons to advanced machine learning algorithms for operationalizing frameworks such as the 3×3 data quality assessment framework. These findings provide foundational evidence for future design and implementation studies to enhance data quality assessments for specific DUPs in observational health care settings.
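The cross-system data value comparison mentioned in the abstract above can be illustrated with a minimal sketch in Python. All record layouts and field names below (`ehr`, `lab`, `dob`, `sex`) are hypothetical and are not taken from the MIRACUM tooling:

```python
# Minimal sketch of a cross-system consistency check: compare the same
# data elements as exported by two hypothetical source systems and
# report the mismatching fields per patient identifier.

def cross_system_check(system_a, system_b, fields):
    """Return {patient_id: [mismatching field names]} for ids present in both systems."""
    issues = {}
    for pid in system_a.keys() & system_b.keys():
        diffs = [f for f in fields if system_a[pid].get(f) != system_b[pid].get(f)]
        if diffs:
            issues[pid] = diffs
    return issues

# Hypothetical exports from two source systems:
ehr = {"p1": {"dob": "1980-02-01", "sex": "F"}, "p2": {"dob": "1975-07-11", "sex": "M"}}
lab = {"p1": {"dob": "1980-02-01", "sex": "F"}, "p2": {"dob": "1975-07-12", "sex": "M"}}

print(cross_system_check(ehr, lab, ["dob", "sex"]))  # {'p2': ['dob']}
```

Real DIC pipelines would compare harmonized FHIR resources rather than flat dictionaries, but the principle, flagging fields whose values diverge between source systems, is the same.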

3.
J Med Internet Res ; 26: e51297, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39178413

ABSTRACT

BACKGROUND: The record of the origin and history of data, known as provenance, holds importance. Provenance information leads to higher interpretability of scientific results and enables reliable collaboration and data sharing. However, the lack of comprehensive evidence on provenance approaches hinders the uptake of good scientific practice in clinical research. OBJECTIVE: This scoping review aims to identify approaches and criteria for provenance tracking in the biomedical domain. We reviewed the state-of-the-art frameworks, associated artifacts, and methodologies for provenance tracking. METHODS: This scoping review followed the methodological framework developed by Arksey and O'Malley. We searched the PubMed and Web of Science databases for English-language articles published from 2006 to 2022. Title and abstract screening were carried out by 4 independent reviewers using the Rayyan screening tool. A majority vote was required for consensus on the eligibility of papers based on the defined inclusion and exclusion criteria. Full-text reading and screening were performed independently by 2 reviewers, and information was extracted into a pretested template for the 5 research questions. Disagreements were resolved by a domain expert. The study protocol has previously been published. RESULTS: The search resulted in a total of 764 papers. Of 624 identified, deduplicated papers, 66 (10.6%) studies fulfilled the inclusion criteria. We identified diverse provenance-tracking approaches, ranging from practical provenance processing and management to theoretical frameworks distinguishing diverse concepts and details of data and metadata models, provenance components, and notations. A substantial majority investigated underlying requirements to varying extents and validation intensities but lacked completeness in provenance coverage. The most frequently cited requirements concerned knowledge about data integrity and reproducibility. Moreover, these revolved around robust data quality assessments, consistent policies for sensitive data protection, improved user interfaces, and automated ontology development. We found that different stakeholder groups benefit from the availability of provenance information. We also recognized that the term provenance is subject to an evolutionary and technical process with multifaceted meanings and roles. Challenges included organizational and technical issues linked to data annotation, provenance modeling, and performance, amplified by subsequent matters such as enhanced provenance information and quality principles. CONCLUSIONS: As data volumes grow and computing power increases, the challenge of scaling provenance systems to handle data efficiently and support complex queries intensifies, necessitating automated and scalable solutions. With rising legal and scientific demands, there is an urgent need for greater transparency in implementing provenance systems in research projects, despite the challenges of unresolved granularity and knowledge bottlenecks. We believe that our recommendations promote quality and guide the implementation of auditable and measurable provenance approaches and solutions in the daily tasks of biomedical scientists. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/31750.


Subject(s)
Workflow, Humans, Biomedical Research/methods
4.
Stud Health Technol Inform ; 316: 1704-1708, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176538

ABSTRACT

In the light of big data-driven clinical research, fair access to real-world clinical health data yields evidence to improve patient care. Germany's healthcare system provides an abundant data resource but poses unique challenges due to its federated nature, heterogeneity, and high data-protection standards. The Medical Informatics Initiative (MII) developed concepts that are being implemented in the German Portal for Medical Research Data (FDPG) to grant access to distributed data sources across state borders. The portal currently provides access to more than 10 million patient resources containing hundreds of millions of laboratory parameters, diagnostic reports, administered medications, procedures, and specimens. Upcoming datasets include, among others, oncological data, molecular analysis results, and microbiological findings. Here, we describe the philosophy, implementation, and experience behind the framework: standardized access processes, interoperable FAIR data, software for in-depth feasibility queries, tools to support researchers and hospital stakeholders alike, as well as transparency measures to provide data use information for patients. Challenges remain in improving data quality and automating technical and organizational processes.


Subject(s)
Biomedical Research, Germany, Humans, Patient Portals, Big Data, Electronic Health Records
5.
Article in German | MEDLINE | ID: mdl-38753022

ABSTRACT

The Interoperability Working Group of the Medical Informatics Initiative (MII) is the platform for the coordination of overarching procedures, data structures, and interfaces between the data integration centers (DIC) of the university hospitals and national and international interoperability committees. The goal is the joint content-related and technical design of a distributed infrastructure for the secondary use of healthcare data that can be accessed via the Research Data Portal for Health. Important general conditions are data privacy and IT security for the use of health data in biomedical research. To this end, suitable methods are used in dedicated task forces to enable procedural, syntactic, and semantic interoperability for data use projects. The MII core dataset was developed as several modules with corresponding information models and implemented using the HL7® FHIR® standard to enable content-related and technical specifications for the interoperable provision of healthcare data through the DIC. International terminologies and consented metadata are used to describe these data in more detail. The overall architecture, including overarching interfaces, implements the methodological and legal requirements for a distributed data use infrastructure, for example, by providing pseudonymized data or by federated analyses. With these results of the Interoperability Working Group, the MII presents a future-oriented solution for the exchange and use of healthcare data, the applicability of which goes beyond research and can play an essential role in the digital transformation of the healthcare system.


Subject(s)
Health Information Interoperability, Humans, Datasets as Topic, Electronic Health Records, Germany, Health Information Interoperability/standards, Medical Informatics, Medical Record Linkage/methods, Systems Integration
6.
Sci Rep ; 14(1): 6391, 2024 03 16.
Article in English | MEDLINE | ID: mdl-38493266

ABSTRACT

The purpose of this feasibility study is to investigate whether latent diffusion models (LDMs) are capable of generating contrast-enhanced (CE) MRI-derived subtraction maximum intensity projections (MIPs) of the breast that are conditioned by lesions. We trained an LDM with n = 2832 CE-MIPs of breast MRI examinations of n = 1966 patients (median age: 50 years) acquired between the years 2015 and 2020. The LDM was subsequently conditioned with n = 756 segmented lesions from n = 407 examinations, indicating their location and BI-RADS scores. By applying the LDM, synthetic images were generated from the segmentations of an independent validation dataset. Lesions, anatomical correctness, and the realistic impression of synthetic and real MIP images were further assessed in a multi-rater study with five independent raters, each evaluating n = 204 MIPs (50% real/50% synthetic images). The detection of synthetic MIPs by the raters was akin to random guessing, with an AUC of 0.58. Interrater reliability of the lesion assessment was high both for real (Kendall's W = 0.77) and synthetic images (W = 0.85). A higher AUC was observed for the detection of suspicious lesions (BI-RADS ≥ 4) in synthetic MIPs (0.88 vs. 0.77; p = 0.051). Our results show that LDMs can generate lesion-conditioned MRI-derived CE subtraction MIPs of the breast; however, they also indicate that the LDM tended to generate rather typical or 'textbook' representations of lesions.
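For context on the reported rater performance: an AUC can be computed as the probability that a randomly chosen positive case is ranked above a randomly chosen negative one (ties counting half). The sketch below is purely illustrative; the scores are invented and this is not the study's evaluation code:

```python
# Rank-based AUC: probability that a positive scores higher than a negative,
# with ties counted as 0.5. Illustrative only.

def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical rater confidence scores for "this image is synthetic":
scores = [0.9, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1,   0,   1,    0,   1,   0]   # 1 = actually synthetic
print(round(auc(scores, labels), 2))  # 0.67
```

An AUC near 0.5 (such as the reported 0.58) means the raters' rankings barely separated synthetic from real images, i.e. performance close to chance.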


Subject(s)
Breast Neoplasms, Contrast Media, Humans, Middle Aged, Female, Reproducibility of Results, Magnetic Resonance Imaging/methods, Breast/diagnostic imaging, Breast/pathology, Physical Examination, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Retrospective Studies
7.
JMIR Res Protoc ; 13: e53627, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38441925

ABSTRACT

BACKGROUND: Complex and expanding data sets in clinical oncology applications require flexible and interactive visualization of patient data to provide the maximum amount of information to physicians and other medical practitioners. Interdisciplinary tumor conferences in particular benefit from customized tools to integrate, link, and visualize relevant data from all professions involved. OBJECTIVE: The scoping review proposed in this protocol aims to identify and present currently available data visualization tools for tumor boards and related areas. The objective of the review will be to provide an overview not only of digital tools currently used in tumor board settings, but also of the data included, the respective visualization solutions, and their integration into hospital processes. METHODS: The planned scoping review process is based on the Arksey and O'Malley scoping study framework. The following electronic databases will be searched for articles published in English: PubMed, Web of Knowledge, and SCOPUS. Eligible articles will first undergo a deduplication step, followed by the screening of titles and abstracts. Second, a full-text screening will be used to reach the final decision about article selection. At least 2 reviewers will independently screen titles, abstracts, and full-text reports. Conflicting inclusion decisions will be resolved by a third reviewer. The remaining literature will be analyzed using a data extraction template proposed in this protocol. The template includes a variety of meta information as well as specific questions aiming to answer the research question: "What are the key features of data visualization solutions used in molecular and organ tumor boards, and how are these elements integrated and used within the clinical setting?" The findings will be compiled, charted, and presented as specified in the scoping study framework. Data for included tools may be supplemented with additional manual literature searches. The entire review process will be documented in alignment with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) flowchart. RESULTS: The results of this scoping review will be reported per the expanded PRISMA-ScR guidelines. A preliminary search using PubMed, Web of Knowledge, and Scopus resulted in 1320 articles after deduplication that will be included in the further review process. We expect the results to be published during the second quarter of 2024. CONCLUSIONS: Visualization is a key process in leveraging a data set's potentially available information and enabling its use in an interdisciplinary setting. The scoping review described in this protocol aims to present the status quo of visualization solutions for tumor board and clinical oncology applications and their integration into hospital processes. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/53627.

8.
JMIR Form Res ; 7: e50027, 2023 Dec 07.
Article in English | MEDLINE | ID: mdl-38060305

ABSTRACT

BACKGROUND: Secondary investigations into digital health records, including electronic patient data from German medical data integration centers (DICs), pave the way for enhanced future patient care. However, only limited information is captured regarding the integrity, traceability, and quality of the (sensitive) data elements. This lack of detail diminishes trust in the validity of the collected data. From a technical standpoint, adhering to the widely accepted FAIR (Findability, Accessibility, Interoperability, and Reusability) principles for data stewardship necessitates enriching data with provenance-related metadata. Provenance offers insights into the readiness for the reuse of a data element and serves as a supplier of data governance. OBJECTIVE: The primary goal of this study is to augment the reusability of clinical routine data within a medical DIC for secondary utilization in clinical research. Our aim is to establish provenance traces that underpin the status of data integrity, reliability, and consequently, trust in electronic health records, thereby enhancing the accountability of the medical DIC. We present the implementation of a proof-of-concept provenance library integrating international standards as an initial step. METHODS: We adhered to a customized road map for a provenance framework and examined the data integration steps across the ETL (extract, transform, and load) phases. Following a maturity model, we derived requirements for a provenance library. Using this research approach, we formulated a provenance model with associated metadata and implemented a proof-of-concept provenance class. Furthermore, we seamlessly incorporated the internationally recognized World Wide Web Consortium (W3C) provenance standard, aligned the resultant provenance records with the interoperable health care standard Fast Healthcare Interoperability Resources, and presented them in various representation formats.
Ultimately, we conducted a thorough assessment of provenance trace measurements. RESULTS: This study marks the inaugural implementation of integrated provenance traces at the data element level within a German medical DIC. We devised and executed a practical method that synergizes the robustness of quality- and health standard-guided (meta)data management practices. Our measurements indicate commendable pipeline execution times, attaining notable levels of accuracy and reliability in processing clinical routine data, thereby ensuring accountability in the medical DIC. These findings should inspire the development of additional tools aimed at providing evidence-based and reliable electronic health record services for secondary use. CONCLUSIONS: The research method outlined for the proof-of-concept provenance class has been crafted to promote effective and reliable core data management practices. It aims to enhance biomedical data by imbuing it with meaningful provenance, thereby bolstering the benefits for both research and society. Additionally, it facilitates the streamlined reuse of biomedical data. As a result, the system mitigates risks, as data analysis without knowledge of the origin and quality of all data elements is rendered futile. While the approach was initially developed for the medical DIC use case, these principles can be universally applied throughout the scientific domain.
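The element-level provenance traces described above build on W3C PROV concepts such as entities, activities, and agents. The sketch below shows one possible shape of such a record for a single ETL step; the field names and identifiers are illustrative and do not reflect the authors' actual schema:

```python
# Sketch of a W3C-PROV-style provenance record for one ETL step, captured
# per data element. Field names and identifiers are illustrative only.
from datetime import datetime, timezone

def record_step(entity_id, activity, used, agent):
    """Return one provenance record describing how a data element was produced."""
    return {
        "entity": entity_id,                # the produced data element
        "wasGeneratedBy": activity,         # the transformation that produced it
        "used": used,                       # input entities consumed by the step
        "wasAssociatedWith": agent,         # the software agent running the step
        "generatedAtTime": datetime.now(timezone.utc).isoformat(),
    }

rec = record_step("lab:loinc-718-7#p1", "etl:unit-normalisation",
                  ["src:lab-export-row-42"], "software:dic-pipeline-v1")
print(rec["wasGeneratedBy"])  # etl:unit-normalisation
```

Chaining such records across the extract, transform, and load phases yields a trace from each research-ready data element back to its source, which is what makes integrity and reuse-readiness auditable.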

9.
J Med Internet Res ; 25: e48809, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37938878

ABSTRACT

BACKGROUND: In the context of the Medical Informatics Initiative, medical data integration centers (DICs) have implemented complex data flows to transfer routine health care data into research data repositories for secondary use. Data management practices are of importance throughout these processes, and special attention should be given to provenance aspects. Insufficient knowledge can lead to validity risks and reduce the confidence and quality of the processed data. The need to implement maintainable data management practices is undisputed, but there is a great lack of clarity on their status. OBJECTIVE: Our study examines the current data management practices throughout the data life cycle within the Medical Informatics in Research and Care in University Medicine (MIRACUM) consortium. We present a framework for the maturity status of data management practices and present recommendations to enable trustful dissemination and reuse of routine health care data. METHODS: In this mixed methods study, we conducted semistructured interviews with stakeholders from 10 DICs between July and September 2021. We used a self-designed questionnaire, tailored to the MIRACUM DICs, to collect qualitative and quantitative data. Our study method is compliant with the Good Reporting of a Mixed Methods Study (GRAMMS) checklist. RESULTS: Our study provides insights into the data management practices at the MIRACUM DICs. We identify several traceability issues that can be partially explained by a lack of contextual information within nonharmonized workflow steps, unclear responsibilities, missing or incomplete data elements, and incomplete information about the computational environment. Based on the identified shortcomings, we suggest a data management maturity framework to reach more clarity and to help define enhanced data management strategies.
CONCLUSIONS: The data management maturity framework supports the production and dissemination of accurate and provenance-enriched data for secondary use. Our work serves as a catalyst for the derivation of an overarching data management strategy, with data integrity and provenance characteristics as key factors. We envision that this work will lead to the generation of FAIRer and well-maintained health research data of high quality.


Subject(s)
Data Management, Medical Informatics, Humans, Delivery of Health Care, Surveys and Questionnaires
10.
JMIR Res Protoc ; 12: e46471, 2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37566443

ABSTRACT

BACKGROUND: The anonymization of Common Data Model (CDM)-converted EHR data is essential to ensure data privacy in the use of harmonized health care data. However, applying data anonymization techniques can significantly affect many properties of the resulting data sets and can thus bias research results. Few studies have reviewed these applications with a reflection of approaches to manage data utility and quality concerns in the context of CDM-formatted health care data. OBJECTIVE: Our intended scoping review aims to identify and describe (1) how formal anonymization methods are carried out with CDM-converted health care data, (2) how data quality and utility concerns are considered, and (3) how the various CDMs differ in terms of their suitability for recording anonymized data. METHODS: The planned scoping review is based on the framework of Arksey and O'Malley. Following this framework, only articles published in English will be included. The retrieval of literature items will be based on a search string combining keywords related to data anonymization, CDM standards, and data quality assessment. The proposed literature search query will be validated by a librarian and accompanied by manual searches to include further informal sources. Eligible articles will first undergo a deduplication step, followed by the screening of titles. Second, a full-text reading will allow the 2 reviewers involved to reach the final decision about article selection, while a domain expert will support the resolution of citation selection conflicts. Additionally, key information will be extracted, categorized, summarized, and analyzed using a proposed template in an iterative process. Tabular and graphical analyses will be conducted in alignment with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) checklist. We also performed tentative searches on Web of Science to estimate the feasibility of retrieving eligible articles. RESULTS: Tentative searches on Web of Science resulted in 507 nonduplicated matches, suggesting the availability of potentially relevant articles. Further analysis and selection steps will allow us to derive a final literature set. The completion of this scoping review study is expected by the end of the fourth quarter of 2023. CONCLUSIONS: Outlining the approaches for applying formal anonymization methods to CDM-formatted health care data, while taking into account data quality and utility concerns, should provide useful insights into existing approaches and future research directions based on identified gaps. This protocol describes a schedule for performing a scoping review, which should support the conduct of follow-up investigations. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/46471.
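One formal anonymization criterion commonly examined in such reviews is k-anonymity, which requires every combination of quasi-identifier values in a data set to occur at least k times. A minimal check (not part of the review protocol; all column names are invented) might look like this:

```python
# Minimal k-anonymity check: every combination of quasi-identifier values
# must occur at least k times. Illustrative; column names are invented.
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return all(c >= k for c in counts.values())

rows = [
    {"age_band": "30-39", "zip3": "681", "dx": "J45"},
    {"age_band": "30-39", "zip3": "681", "dx": "E11"},
    {"age_band": "40-49", "zip3": "682", "dx": "I10"},
]
print(is_k_anonymous(rows, ["age_band", "zip3"], 2))  # False (one group of size 1)
```

Generalizing values (e.g. coarser age bands) until the check passes is exactly the kind of transformation whose effect on data utility and quality the review sets out to examine.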

11.
Biomedicines ; 11(5)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37239004

ABSTRACT

We aimed to automate Gram-stain analysis to speed up the detection of bacterial strains in patients suffering from infections. We performed comparative analyses of vision transformers (VTs) using various configurations, including model size (small vs. large), training epochs (1 vs. 100), and quantization schemes (tensor- or channel-wise) using float32 or int8, on publicly available (DIBaS, n = 660) and locally compiled (n = 8500) datasets. Six VT models (BEiT, DeiT, MobileViT, PoolFormer, Swin and ViT) were evaluated and compared to two convolutional neural networks (CNNs), ResNet and ConvNeXT. An overview of performance, including accuracy, inference time, and model size, was also visualized. Frames per second (FPS) of small models consistently surpassed their large counterparts by a factor of 1-2×. DeiT small was the fastest VT in the int8 configuration (6.0 FPS). In conclusion, VTs consistently outperformed CNNs for Gram-stain classification in most settings, even on smaller datasets.
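Tensor-wise int8 quantization, one of the schemes compared above, maps all values of a tensor to 8-bit integers using a single shared scale (channel-wise quantization uses one scale per channel instead). A stripped-down sketch of the symmetric variant in plain Python, not the frameworks' actual implementation:

```python
# Symmetric tensor-wise int8 quantization: one scale for the whole tensor,
# values mapped into [-127, 127]. A conceptual sketch of the idea only.

def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # avoid zero scale
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.4, -1.27, 0.0, 0.8]
q, scale = quantize_int8(weights)
print(q)  # [40, -127, 0, 80]
```

Storing 8-bit integers instead of 32-bit floats shrinks the tensor roughly 4× and enables faster integer arithmetic, which is why the study's int8 configurations reach higher FPS.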

12.
Biomedicines ; 10(11)2022 Nov 04.
Article in English | MEDLINE | ID: mdl-36359328

ABSTRACT

Despite the emergence of mobile health and the success of deep learning (DL), deploying production-ready DL models to resource-limited devices remains challenging. In particular, the inference speed of DL models becomes relevant at deployment time. We aimed to accelerate inference time for Gram-stained analysis, which is a tedious and manual task involving microorganism detection on whole slide images. Three DL models were optimized in three steps: transfer learning, pruning, and quantization, and were then evaluated on two Android smartphones. Most convolutional layers (≥80%) had to be retrained for adaptation to the Gram-stained classification task. The combination of pruning and quantization demonstrated its utility in reducing the model size and inference time without compromising model quality. Pruning mainly contributed to model size reduction by 15×, while quantization reduced inference time by 3× and decreased model size by 4×. The combination of the two reduced the baseline model size by an overall factor of 46×. Optimized models were smaller than 6 MB and were able to process one image in <0.6 s on a Galaxy S10. Our findings demonstrate that methods for model compression are highly relevant for the successful deployment of DL solutions to resource-limited devices.
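Pruning, as used in the optimization pipeline above, is often implemented as magnitude pruning: the weights with the smallest absolute values are set to zero so the model can be stored sparsely. A conceptual sketch, not the paper's pipeline:

```python
# Magnitude-based weight pruning: zero out the fraction `sparsity` of weights
# with the smallest absolute values. Conceptual sketch only.

def prune(weights, sparsity):
    n_prune = int(len(weights) * sparsity)
    # Indices ordered from smallest to largest magnitude:
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    keep = set(order[n_prune:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
print(prune(w, 0.5))  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

After pruning, sparse storage formats only record the nonzero weights, which is how zeroing small weights translates into the large on-disk size reductions reported above.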

13.
J Med Internet Res ; 24(10): e38041, 2022 10 24.
Article in English | MEDLINE | ID: mdl-36279164

ABSTRACT

BACKGROUND: Visual analysis and data delivery in the form of visualizations are of great importance in health care, as such forms of presentation can reduce errors and improve care and can also help provide new insights into long-term disease progression. Information visualization and visual analytics also address the complexity of long-term, time-oriented patient data by reducing inherent complexity and facilitating a focus on underlying and hidden patterns. OBJECTIVE: This review aims to provide an overview of visualization techniques for time-oriented data in health care, supporting the comparison of patients. We systematically collected literature and report on the visualization techniques supporting the comparison of time-based data sets of single patients with those of multiple patients or their cohorts and summarized the use of these techniques. METHODS: This scoping review used the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist. After all collected articles were screened by 16 reviewers according to the criteria, 6 reviewers extracted the set of variables under investigation. The characteristics of these variables were based on existing taxonomies or identified through open coding. RESULTS: Of the 249 screened articles, we identified 22 (8.8%) that fit all criteria and reviewed them in depth. We collected and synthesized findings from these articles for medical aspects such as medical context, medical objective, and medical data type, as well as for the core investigated aspects of visualization techniques, interaction techniques, and supported tasks. The extracted articles were published between 2003 and 2019 and were mostly situated in clinical research. These systems used a wide range of visualization techniques, most frequently showing changes over time. 
Timelines and temporal line charts occurred 8 times each, followed by histograms with 7 occurrences and scatterplots with 5 occurrences. We report on the findings quantitatively through visual summarization, as well as qualitatively. CONCLUSIONS: The articles under review generally mitigated complexity through visualization and supported diverse medical objectives. We identified 3 distinct patient entities: single patients, multiple patients, and cohorts. Cohorts were typically visualized in condensed form, either through prior data aggregation or through visual summarization, whereas visualizations of individual patients often contained finer details. All the systems provided mechanisms for viewing and comparing patient data. However, explicitly comparing a single patient with multiple patients or a cohort was supported by only a few systems. These systems mainly use basic visualization techniques, with some using novel visualizations tailored to a specific task. Overall, we found the visual comparison of measurements between single and multiple patients or cohorts to be underdeveloped, and we argue for further research, including a systematic review, as well as for the development of a design space.


Subject(s)
Checklist, Delivery of Health Care, Humans, Publications
14.
Stud Health Technol Inform ; 293: 19-27, 2022 May 16.
Article in English | MEDLINE | ID: mdl-35592955

ABSTRACT

The academic research environment is characterized by self-developed, innovative, customized solutions, which are often free for third parties to use, with open-source code and open licenses. On the other hand, they are maintained only to a very limited extent after the end of project funding. The ToolPool Gesundheitsforschung addresses the problem of finding ready-to-use solutions by building a registry of proven and supported tools, services, concepts, and consulting offers. The goal is to provide an up-to-date selection of "relevant" solutions for a given domain that are immediately usable and actually used by third parties, rather than aiming at a complete list of all solutions belonging to that domain. Proof of relevance and usage must be provided, for example, by concrete application scenarios, experience reports by uninvolved third parties, references in publications, or workshops held. Quality assurance is carried out for new entries using an agreed list of admission criteria, and for existing entries at least once a year by a special task force. Currently, 79 solutions are represented; this number is to be significantly expanded by involving new editors from current national funding initiatives in Germany.


Subject(s)
Software, Epidemiologic Studies, Germany, Registries
15.
BMC Med Imaging ; 22(1): 69, 2022 04 13.
Article in English | MEDLINE | ID: mdl-35418051

ABSTRACT

BACKGROUND: Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge from similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approach for the medical image classification task. METHODS: A total of 425 peer-reviewed articles published in English up to December 31, 2020, were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS: The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The remaining studies applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION: The investigated studies demonstrated the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g., ResNet or Inception) as feature extractors, which can save computational costs and time without degrading predictive power.
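The feature-extractor approach recommended above (freeze a pretrained backbone, train only a new classifier head) can be sketched in miniature. In this illustrative toy, a fixed random projection stands in for a pretrained CNN such as ResNet or Inception, and only a logistic-regression head is trained; this is not any reviewed study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone (e.g. ResNet/Inception up to
# its penultimate layer): a fixed mapping whose weights are never updated.
W_backbone = rng.normal(size=(64, 16))

def extract_features(x):
    """Frozen feature extractor: weights stay fixed during head training."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU activations

# Toy binary task: the label depends on the input through two coordinates.
X = rng.normal(size=(200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new classification head is trained (logistic regression via
# plain gradient descent on the mean log loss).
F = extract_features(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * (F.T @ (p - y) / len(y))  # gradient w.r.t. head weights
    b -= 0.5 * np.mean(p - y)            # gradient w.r.t. head bias

acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y)
```

The design point is that `W_backbone` never receives gradients, which is what saves the computation (and data) that full fine-tuning would require.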


Subject(s)
Machine Learning; Neural Networks, Computer; Databases, Factual; Humans
16.
Nervenarzt ; 93(3): 279-287, 2022 Mar.
Article in German | MEDLINE | ID: mdl-33730181

ABSTRACT

BACKGROUND: Ward-equivalent treatment (StäB), a form of crisis resolution and home treatment in Germany, was introduced in 2018 as a new model of mental health service delivery for people with an indication for inpatient care. Rapid progress in the field of information and communication technology offers entirely new opportunities for innovative digital mental health care, such as telemedicine, eHealth, or mHealth interventions. OBJECTIVE: This review aims to provide a comprehensive overview of novel digital forms of service delivery that may contribute to personalized delivery of StäB, improved clinical and social outcomes, and reduced direct and indirect costs. METHOD: This work is based on a narrative review. RESULTS: Four primary digital forms of service delivery were identified that can be used for personalized delivery of StäB: (1) communication, continuity of care, and flexibility through online chat and video calls; (2) real-time monitoring of symptoms and behavior through ecological momentary assessment (EMA); (3) use of multimodal EMA data to generate and offer personalized feedback on subjective experience and behavioral patterns; and (4) adaptive ecological momentary interventions (EMI) tailored to the person, moment, and context in daily life. CONCLUSION: New digital forms of service delivery have considerable potential to increase the effectiveness and cost-effectiveness of crisis resolution, home treatment, and assertive outreach. An important next step is to model and initially evaluate these novel digital forms of service delivery in the context of StäB, and to carefully investigate their quality from the user perspective, as well as their safety, feasibility, initial process and outcome quality, and barriers and facilitators of implementation.


Subject(s)
Ecological Momentary Assessment, Telemedicine, Germany, Humans
17.
JMIR Res Protoc ; 10(11): e31750, 2021 Nov 22.
Article in English | MEDLINE | ID: mdl-34813494

ABSTRACT

BACKGROUND: Provenance supports the understanding of data genesis and is a key factor in ensuring the trustworthiness of digital objects containing (sensitive) scientific data. Provenance information contributes to a better understanding of scientific results and fosters collaboration on existing data as well as data sharing. This encompasses defining comprehensive concepts and standards for transparency and traceability, reproducibility, validity, and quality assurance during clinical and scientific data workflows and research. OBJECTIVE: The aim of this scoping review is to investigate existing evidence regarding approaches and criteria for provenance tracking, and to disclose current knowledge gaps in the biomedical domain. The review covers modeling aspects and metadata frameworks for meaningful and usable provenance information during the creation, collection, and processing of (sensitive) scientific biomedical data, as well as quality aspects of provenance criteria. METHODS: This scoping review will follow the methodological framework by Arksey and O'Malley. Relevant publications will be obtained by querying PubMed and Web of Science. All English-language papers published between January 1, 2006, and March 23, 2021, will be included. Database retrieval will be accompanied by a manual search for grey literature. Potential publications will then be exported into reference management software and duplicates will be removed. Afterwards, the obtained set of papers will be transferred into a systematic review management tool. All publications will be screened, extracted, and analyzed: title and abstract screening will be carried out by 4 independent reviewers, with a majority vote required for eligibility decisions based on the defined inclusion and exclusion criteria. Full-text reading will be performed independently by 2 reviewers, and in the last step, key information will be extracted using a pretested template. If agreement cannot be reached, the conflict will be resolved by a domain expert. Charted data will be analyzed by categorizing and summarizing the individual data items based on the research questions. Tabular or graphical overviews will be given where applicable. RESULTS: The reporting follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews. Electronic database searches in PubMed and Web of Science resulted in 469 matches after deduplication. As of September 2021, the scoping review is in the full-text screening stage. Data extraction using the pretested charting template will follow the full-text screening stage. We expect the scoping review report to be completed by February 2022. CONCLUSIONS: Information about the origin of healthcare data has a major impact on the quality and reusability of scientific results as well as follow-up activities. This protocol outlines plans for a scoping review that will provide information about current approaches, challenges, and knowledge gaps in provenance tracking in the biomedical sciences. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/31750.
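The screening decision procedure in the protocol (majority vote among independent reviewers, with ties escalated to a domain expert) can be sketched as follows; the function name and vote labels are hypothetical illustrations, not part of the protocol.

```python
from collections import Counter

def screening_decision(votes):
    """Majority vote on a paper's eligibility.

    votes: list of 'include' / 'exclude' strings, one per reviewer.
    Returns 'include' or 'exclude' on a clear majority, or 'conflict'
    when the vote is tied (e.g. 2-2 among 4 reviewers), which the
    protocol escalates to a domain expert.
    """
    counts = Counter(votes)
    include, exclude = counts["include"], counts["exclude"]
    if include > exclude:
        return "include"
    if exclude > include:
        return "exclude"
    return "conflict"  # tie: resolved by a domain expert

# Four reviewers screen two papers:
clear = screening_decision(["include", "include", "exclude", "include"])
tied = screening_decision(["include", "include", "exclude", "exclude"])
```

With 4 reviewers a tie is possible, which is why an explicit escalation path matters; with the 2-reviewer full-text stage, any disagreement is a tie.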

18.
Front Oncol ; 11: 662013, 2021.
Article in English | MEDLINE | ID: mdl-34249698

ABSTRACT

Prehabilitation has shown its potential for most intra-cavity surgery patients in enhancing preoperative functional capacity and postoperative outcomes. However, its large-scale implementation is limited by several constraints, such as: i) unsolved practicalities of the service workflow; ii) challenges associated with change management in collaborative care; iii) insufficient access to prehabilitation; iv) a relevant percentage of program drop-outs; v) the need for program personalization; and vi) economic sustainability. Transferring prehabilitation programs from the hospital setting to the community would potentially provide a new scenario with greater accessibility, as well as offer an opportunity to effectively address the aforementioned issues and, thus, optimize healthcare value generation. A core aspect of optimal management of prehabilitation programs is the use of proper technological tools enabling: i) customizable and interoperable integrated care pathways that facilitate personalization of the service and effective engagement among stakeholders; ii) remote monitoring (i.e., of physical activity, physiological signs, and patient-reported outcome and experience measures) to support patient adherence to the program and empowerment for self-management; and iii) health risk assessment supporting decision making for personalized service selection. The current manuscript details a proposal to bring digital innovation to community-based prehabilitation programs. Moreover, this approach has the potential to be adopted by programs supporting long-term management of cancer patients and chronic patients, and prevention of multimorbidity in subjects at risk.

20.
Sci Rep ; 11(1): 5529, 2021 03 09.
Article in English | MEDLINE | ID: mdl-33750857

ABSTRACT

Computer-assisted reporting (CAR) tools have been suggested to improve radiology report quality by context-sensitively recommending key imaging biomarkers. However, studies evaluating machine learning (ML) algorithms on cross-lingual ontological (RadLex) mappings for developing embedded CAR algorithms are lacking. We therefore compared ML algorithms developed on human expert-annotated features against those developed on fully automated cross-lingual (German to English) RadLex mappings, using 206 CT reports of suspected stroke. The target label was whether the Alberta Stroke Programme Early CT Score (ASPECTS) should have been provided (yes/no: 154/52). We focused on the probabilistic outputs of ML algorithms including tree-based methods, elastic net, support vector machines (SVMs), and fastText (a linear classifier), all evaluated in the same 5 × 5-fold nested cross-validation framework. This allowed for model stacking and classifier rankings. Performance was evaluated using discrimination and calibration metrics (AUC, Brier score, log loss) and calibration plots. Contextual ML-based assistance recommending ASPECTS was feasible. SVMs showed the highest accuracies both on human-extracted (87%) and RadLex features (findings: 82.5%; impressions: 85.4%). FastText achieved the highest accuracy (89.3%) and AUC (92%) on impressions. Boosted trees fitted on findings had the best calibration profile. Our approach provides guidance for choosing ML classifiers for CAR tools in a fully automated and language-agnostic fashion, using bag-of-RadLex terms on limited expert-labelled training data.
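The Brier score and log loss used here to assess probabilistic outputs are simple functions of predicted probabilities and binary labels. A minimal sketch, with toy labels and probabilities rather than the study's data:

```python
import math

def brier_score(y_true, p_pred):
    """Mean squared difference between predicted probability and outcome."""
    return sum((p - y) ** 2 for y, p in zip(y_true, p_pred)) / len(y_true)

def log_loss(y_true, p_pred, eps=1e-15):
    """Negative mean log-likelihood of the true labels; probabilities
    are clipped to avoid log(0)."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# A well-calibrated confident classifier scores lower (better) on both
# metrics than an uninformative one that always predicts 0.5.
y = [1, 0, 1, 1, 0]
confident = [0.9, 0.1, 0.8, 0.95, 0.2]
uninformative = [0.5] * 5
```

Unlike accuracy, both metrics penalize miscalibrated confidence, which is why they suit the comparison of probabilistic outputs across classifiers.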
