Results 1 - 20 of 150
1.
Med Image Anal ; 95: 103191, 2024 May 04.
Article in English | MEDLINE | ID: mdl-38728903

ABSTRACT

Prostate cancer is the second most frequent cancer in men worldwide after lung cancer. Its diagnosis is based on the identification of the Gleason score, which evaluates the abnormality of cells in glands through the analysis of the different Gleason patterns within tissue samples. Recent advancements in computational pathology, a domain aiming at developing algorithms to automatically analyze digitized histopathology images, have led to a large variety and availability of datasets and algorithms for Gleason grading and scoring. However, there is no clear consensus on which methods are best suited for each problem in relation to the characteristics of data and labels. This paper provides a systematic comparison, on nine datasets, of state-of-the-art training approaches for deep neural networks (including fully-supervised learning, weakly-supervised learning, semi-supervised learning, Additive-MIL, Attention-Based MIL, Dual-Stream MIL, TransMIL, and CLAM) applied to Gleason grading and scoring tasks. The nine datasets are collected from pathology institutes and openly accessible repositories. The results show that the best methods for the Gleason grading and Gleason scoring tasks are fully-supervised learning and CLAM, respectively, guiding researchers toward the best practice to adopt depending on the task to solve and the labels that are available.
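Attention-Based MIL, one of the compared approaches, aggregates patch-level embeddings into a single slide-level representation through a learned softmax weighting. A minimal NumPy sketch of the pooling step only (toy dimensions and untrained random parameters, not the paper's implementation):

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention pooling: score each patch embedding, softmax the scores,
    and return the weighted mean as the slide-level embedding."""
    scores = np.tanh(instances @ V) @ w            # (n_patches,)
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()              # softmax over patches
    return weights @ instances, weights            # (d,), (n_patches,)

rng = np.random.default_rng(0)
patches = rng.normal(size=(50, 32))               # 50 patch embeddings, d = 32
V = rng.normal(size=(32, 16)) * 0.1               # toy, untrained parameters
w = rng.normal(size=16) * 0.1
slide_embedding, attn = attention_mil_pool(patches, V, w)
```

In a real pipeline V and w are trained jointly with the classifier; here they only illustrate that the attention weights form a convex combination of patch embeddings.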

2.
Comput Methods Programs Biomed ; 250: 108187, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38657383

ABSTRACT

BACKGROUND AND OBJECTIVE: The automatic registration of differently stained whole slide images (WSIs) is crucial for improving diagnosis and prognosis by fusing complementary information emerging from different visible structures. It is also useful for quickly transferring annotations between consecutive or restained slides, thus significantly reducing annotation time and associated costs. Nevertheless, slide preparation differs for each stain, and the tissue undergoes complex and large deformations. Therefore, a robust, efficient, and accurate registration method is highly desired by the scientific community and by hospitals specializing in digital pathology. METHODS: We propose a two-step hybrid method consisting of (i) a deep learning- and feature-based initial alignment algorithm, and (ii) intensity-based nonrigid registration using instance optimization. The proposed method does not require any fine-tuning to a particular dataset and can be used directly for any desired tissue type and stain. The registration time is low, allowing efficient registration even for large datasets. The method was proposed for the ACROBAT 2023 challenge organized during the MICCAI 2023 conference, where it scored 1st place. The method is released as open-source software. RESULTS: The proposed method is evaluated using three open datasets: (i) the Automatic Nonrigid Histological Image Registration Dataset (ANHIR), (ii) the Automatic Registration of Breast Cancer Tissue Dataset (ACROBAT), and (iii) the Hybrid Restained and Consecutive Histological Serial Sections Dataset (HyReCo). The target registration error (TRE) is used as the evaluation metric. We compare the proposed algorithm to other state-of-the-art solutions, showing considerable improvement. Additionally, we perform several ablation studies concerning the resolution used for registration and the robustness and stability of the initial alignment. The method achieves the most accurate results on the ACROBAT dataset and cell-level registration accuracy on the restained slides from the HyReCo dataset, and it is among the best methods evaluated on the ANHIR dataset. CONCLUSIONS: The article presents an automatic and robust registration method that outperforms other state-of-the-art solutions. The method does not require any fine-tuning to a particular dataset and can be used out-of-the-box for numerous types of microscopic images. The method is incorporated into the DeeperHistReg framework, allowing others to directly use it to register, transform, and save WSIs at any desired pyramid level (resolution up to 220k x 220k). We provide free access to the software. The results are fully and easily reproducible. The proposed method is a significant contribution to improving WSI registration quality, thus advancing the field of digital pathology.
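The intensity-based step of such a pipeline can be illustrated, in heavily simplified form, as a search for the transform minimizing an intensity dissimilarity between fixed and moving images. The sketch below restricts the transform to integer translations and uses brute force rather than gradient-based instance optimization; it is not the DeeperHistReg implementation:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=5):
    """Exhaustive search over integer shifts, minimizing mean squared error.
    A toy stand-in for the intensity-based optimization step."""
    best = (np.inf, (0, 0))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            candidate = np.roll(moving, (dy, dx), axis=(0, 1))
            mse = np.mean((fixed - candidate) ** 2)
            if mse < best[0]:
                best = (mse, (dy, dx))
    return best[1]

rng = np.random.default_rng(1)
fixed = rng.normal(size=(64, 64))
moving = np.roll(fixed, (-3, 2), axis=(0, 1))   # simulate a shifted re-scan
shift = register_translation(fixed, moving)      # recovers (3, -2)
```

Real WSI registration must additionally handle nonrigid deformation, multi-resolution pyramids, and stain differences, which is where the learned initial alignment and instance optimization come in.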


Subject(s)
Algorithms; Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Software; Image Interpretation, Computer-Assisted/methods; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Female; Staining and Labeling
3.
Radiology ; 310(2): e231319, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319168

ABSTRACT

Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish 33 reference filtered images of 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations × three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Out of 486 features, 458 were found to be reproducible across nine teams with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.
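The Laplacian of Gaussian is one of the standardized convolutional filters. A sketch of a sampled LoG kernel (normalization constants omitted, mean-subtracted so that the response to a homogeneous region is zero), assuming only NumPy:

```python
import numpy as np

def log_kernel(sigma, radius):
    """Sampled Laplacian-of-Gaussian kernel (constant factors omitted),
    mean-subtracted so a constant image yields a zero response."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x**2 + y**2
    k = (r2 / (2 * sigma**2) - 1) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()

def convolve2d_valid(image, kernel):
    # 'valid' correlation; the kernel is symmetric, so it equals convolution
    windows = np.lib.stride_tricks.sliding_window_view(image, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

k = log_kernel(sigma=1.5, radius=4)
flat = np.full((32, 32), 7.0)                    # homogeneous region
response = convolve2d_valid(flat, k)             # ~0 everywhere
```

Standardized implementations additionally fix boundary handling, kernel truncation, and normalization, which is exactly what the reference filtered images are meant to pin down.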


Subject(s)
Image Processing, Computer-Assisted; Radiomics; Humans; Reproducibility of Results; Biomarkers; Multimodal Imaging
4.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.


Subject(s)
Artificial Intelligence
5.
Insights Imaging ; 15(1): 8, 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38228979

ABSTRACT

PURPOSE: To propose a new quality scoring tool, the METhodological RadiomICs Score (METRICS), to assess and improve the research quality of radiomics studies. METHODS: We conducted an online modified Delphi study with a group of international experts. It was performed in three consecutive stages: Stage#1, item preparation; Stage#2, panel discussion among EuSoMII Auditing Group members to identify the items to be voted on; and Stage#3, four rounds of the modified Delphi exercise by panelists to determine the items eligible for METRICS and their weights. The consensus threshold was 75%. The category and item weights were calculated based on the median ranks derived from expert panel opinion and their rank-sum-based conversion to importance scores. RESULTS: In total, 59 panelists from 19 countries participated in the selection and ranking of the items and categories. The final METRICS tool included 30 items within 9 categories. According to their weights, the categories were, in descending order of importance: study design, imaging data, image processing and feature extraction, metrics and comparison, testing, feature processing, preparation for modeling, segmentation, and open science. A web application and a repository were developed to streamline the calculation of the METRICS score and to collect feedback from the radiomics community. CONCLUSION: In this work, we developed a scoring tool for assessing the methodological quality of radiomics research, with a large international panel and a modified Delphi protocol. With its conditional format covering methodological variations, it provides a well-constructed framework of the key methodological concepts for assessing the quality of radiomics research papers. CRITICAL RELEVANCE STATEMENT: A quality assessment tool, the METhodological RadiomICs Score (METRICS), is made available by a large group of international domain experts, with transparent methodology, aiming at evaluating and improving research quality in radiomics and machine learning. KEY POINTS: • A methodological scoring tool, METRICS, was developed for assessing the quality of radiomics research, with a large international expert panel and a modified Delphi protocol. • The proposed scoring tool presents expert opinion-based importance weights of categories and items with a transparent methodology for the first time. • METRICS accounts for varying use cases, from handcrafted radiomics to entirely deep learning-based pipelines. • A web application has been developed to help with the calculation of the METRICS score ( https://metricsscore.github.io/metrics/METRICS.html ) and a repository created to collect feedback from the radiomics community ( https://github.com/metricsscore/metrics ).
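A METRICS-style score is a weighted fraction of satisfied items among the applicable ones; conditional (not-applicable) items drop out of the denominator. A sketch with hypothetical items and weights (the real item weights come from the Delphi panel and the web application above):

```python
def metrics_style_score(items):
    """items: list of (weight, satisfied, applicable) tuples. The score is
    the weighted fraction of satisfied items among the applicable ones."""
    applicable = [(w, ok) for w, ok, app in items if app]
    total = sum(w for w, _ in applicable)
    earned = sum(w for w, ok in applicable if ok)
    return 100.0 * earned / total

# hypothetical weights and items -- illustrative only
checklist = [
    (5.0, True, True),    # e.g. "eligibility criteria reported"
    (3.0, False, True),   # e.g. "external testing performed"
    (2.0, True, False),   # not applicable -> excluded from the denominator
]
score = metrics_style_score(checklist)   # 5 / 8 * 100 = 62.5
```

The conditional denominator is the design choice that lets one tool cover both handcrafted-radiomics and end-to-end deep learning pipelines.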

6.
BMJ Open ; 13(12): e076865, 2023 12 09.
Article in English | MEDLINE | ID: mdl-38070902

ABSTRACT

INTRODUCTION: Radiological imaging is one of the most frequently performed diagnostic tests worldwide. The free text contained in radiology reports is currently only rarely used for secondary purposes, including research and predictive analysis. However, these data could be made available by means of information extraction (IE) based on natural language processing (NLP). Recently, a new approach to NLP, large language models (LLMs), has gained momentum and continues to improve the performance of IE-related tasks. The objective of this scoping review is to show the state of research regarding IE from free-text radiology reports based on LLMs, to investigate the applied methods, and to guide future research by showing open challenges and limitations of current approaches. To our knowledge, no systematic or scoping review of IE from radiology reports based on LLMs has been published. Existing publications are outdated and do not cover LLM-based methods. METHODS AND ANALYSIS: This protocol is designed based on the JBI Manual for Evidence Synthesis, chapter 11.2: 'Development of a scoping review protocol'. Inclusion criteria and a search strategy comprising four databases (PubMed, IEEE Xplore, Web of Science Core Collection and ACM Digital Library) are defined. Furthermore, we describe the screening process, data charting, analysis and presentation of extracted data. ETHICS AND DISSEMINATION: This protocol describes the methodology of a scoping literature review and does not comprise research on or with humans, animals or their data. Therefore, no ethical approval is required. After the publication of this protocol and the conduct of the review, its results will be published in an open access journal dedicated to biomedical informatics/digital health.


Subject(s)
Radiology; Research Design; Humans; Information Storage and Retrieval; Radiography; Language; Review Literature as Topic
7.
Article in English | MEDLINE | ID: mdl-38082977

ABSTRACT

The acquisition of whole slide images is prone to artifacts that can require human control and re-scanning, both in clinical workflows and in research-oriented settings. Quality control algorithms are a first step to overcome this challenge, as they limit the use of low-quality images. Developing quality control systems in histopathology is not straightforward, partly due to the limited availability of data related to this topic. We address the problem by proposing a tool to augment data with artifacts. The proposed method seamlessly generates and blends artifacts from an external library into a given histopathology dataset. The datasets augmented with the blended artifacts are then used to train an artifact detection network in a supervised way. We use the YOLOv5 model for artifact detection with a slightly modified training pipeline. The proposed tool can be extended into a complete framework for the quality assessment of whole slide images. Clinical relevance: The proposed method may be useful for the initial quality screening of whole slide images. Each year, millions of whole slide images are acquired and digitized worldwide. Many of them contain artifacts that affect subsequent AI-oriented analysis. Therefore, a tool operating at the acquisition phase and improving the initial quality assessment is crucial to increase the performance of digital pathology algorithms, e.g., for early cancer diagnosis.
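The core blending operation of such an augmentation tool can be sketched as plain alpha compositing of an artifact patch onto a tissue tile; the shapes and values below are illustrative and do not come from the paper's artifact library:

```python
import numpy as np

def blend_artifact(tile, artifact, alpha_mask):
    """Alpha-composite an artifact patch onto a histopathology tile.
    alpha_mask in [0, 1]: 0 keeps the tissue, 1 shows only the artifact."""
    a = alpha_mask[..., None]                     # broadcast over RGB
    return (1 - a) * tile + a * artifact

tile = np.full((8, 8, 3), 200.0)                  # bright tissue tile
artifact = np.zeros((8, 8, 3))                    # dark blob (e.g. dust)
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 0.5     # soft-edged region
augmented = blend_artifact(tile, artifact, mask)
```

The mask position doubles as a free bounding-box label, which is what makes the augmented tiles usable for supervised training of a detector such as YOLOv5.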


Subject(s)
Artifacts; Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Algorithms
8.
Eur J Radiol ; 169: 111159, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37976760

ABSTRACT

PURPOSE: To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). METHOD: A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, bioRxiv, medRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts, and full texts, resolving differences through discussion. RESULTS: 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, Covid-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The most frequently provided explanations were local (78.1 %); 5.7 % were global, and 16.2 % combined both local and global approaches. Post-hoc approaches were predominantly employed. The terminology used varied, sometimes using explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3) interchangeably. CONCLUSION: The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably. Future XAI development should consider user needs and perspectives.


Subject(s)
Alzheimer Disease; Brain Neoplasms; Humans; Artificial Intelligence; Radiography; Brain/diagnostic imaging
9.
Sci Rep ; 13(1): 19518, 2023 11 09.
Article in English | MEDLINE | ID: mdl-37945653

ABSTRACT

The analysis of veterinary radiographic imaging data is an essential step in the diagnosis of many thoracic lesions. Given the limited time that physicians can devote to a single patient, it would be valuable to implement an automated system to help clinicians make faster but still accurate diagnoses. Currently, most such systems are based on supervised deep learning approaches. However, the problem with these solutions is that they need a large database of labeled data. Access to such data is often limited, as it requires a great investment of both time and money. Therefore, in this work we present a solution that achieves higher classification scores using knowledge transfer from inter-species and inter-pathology self-supervised learning methods. Before training the network for classification, the model was pretrained using self-supervised learning approaches on publicly available unlabeled radiographic data of human and dog images, which substantially increased the number of images available for this phase. The self-supervised learning approaches included the Beta Variational Autoencoder, the Soft-Introspective Variational Autoencoder, and the Simple Framework for Contrastive Learning of Visual Representations (SimCLR). After the initial pretraining, fine-tuning was performed on the collected veterinary dataset using 20% of the available data. Next, a latent space exploration was performed for each model, after which the encoding part of the model was fine-tuned again, this time in a supervised manner for classification. SimCLR proved to be the most beneficial pretraining method, so experiments with various fine-tuning methods were carried out for it. We achieved mean ROC AUC scores of 0.77 and 0.66, respectively, for the laterolateral and dorsoventral projection datasets. The results show significant improvement compared to using the model without any pretraining approach.
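The contrastive objective behind SimCLR (the NT-Xent loss) treats two augmented views of the same radiograph as a positive pair and all other views in the batch as negatives. A minimal NumPy sketch of the loss itself, not the training code used in the study:

```python
import numpy as np

def nt_xent(z, temperature=0.5):
    """NT-Xent loss used by SimCLR: rows 0..N-1 and N..2N-1 are the two
    augmented views of a batch; (i, i+N) are positive pairs."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                # never contrast with self
    n = len(z) // 2
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(2)
views = rng.normal(size=(8, 16))
aligned = np.vstack([views, views])               # perfect positive pairs
shuffled = np.vstack([views, rng.normal(size=(8, 16))])
loss_aligned = nt_xent(aligned)                   # low: views agree
loss_shuffled = nt_xent(shuffled)                 # higher: positives random
```

Minimizing this loss pulls the two views of each image together in embedding space, which is what lets unlabeled human and dog radiographs improve the downstream veterinary classifier.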


Subject(s)
Deep Learning; Humans; Animals; Dogs; Radiography; Databases, Factual; Investments; Knowledge; Supervised Machine Learning
10.
Comput Med Imaging Graph ; 110: 102310, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37979340

ABSTRACT

Non-Small Cell Lung Cancer (NSCLC) accounts for about 85% of all lung cancers. Developing non-invasive techniques for NSCLC histology characterization may not only help clinicians make targeted therapeutic decisions but also spare subjects from undergoing lung biopsy, which is challenging and can lead to clinical complications. The motivation behind the study presented here is to develop an advanced on-cloud decision-support system, named LUCY, for non-small cell LUng Cancer histologY characterization directly from thorax Computed Tomography (CT) scans. This aim was pursued by selecting thorax CT scans of 182 LUng ADenocarcinoma (LUAD) and 186 LUng Squamous Cell carcinoma (LUSC) subjects from four openly accessible data collections (NSCLC-Radiomics, NSCLC-Radiogenomics, NSCLC-Radiomics-Genomics and TCGA-LUAD); implementing and comparing two end-to-end neural networks, whose core layer is a convolutional long short-term memory layer; evaluating performance on the test dataset (NSCLC-Radiomics-Genomics) from a subject-level perspective in relation to NSCLC histological subtype location and grade; and dynamically and visually interpreting the achieved results by producing and analyzing one heatmap video for each scan. LUCY reached test Area Under the receiver operating characteristic Curve (AUC) values above 77% in all NSCLC histological subtype location and grade groups, and a best AUC value of 97% on the entire dataset reserved for testing, proving high generalizability to heterogeneous data and robustness. Thus, LUCY is a clinically useful decision-support system able to timely, non-invasively and reliably provide visually understandable predictions on LUAD and LUSC subjects in relation to clinically relevant information.


Subject(s)
Carcinoma, Non-Small-Cell Lung; Carcinoma, Squamous Cell; Lung Neoplasms; Humans; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/pathology; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Carcinoma, Squamous Cell/pathology; Tomography, X-Ray Computed/methods; ROC Curve
11.
Transl Vis Sci Technol ; 12(11): 25, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37982767

ABSTRACT

Purpose: Adaptive optics scanning light ophthalmoscope (AOSLO) imaging offers a microscopic view of the living retina, holding promise for diagnosing and researching eye diseases like retinitis pigmentosa and Stargardt's disease. The clinical impact of AOSLO hinges on early detection through automated analysis tools. Methods: We introduce Cone Density Estimation (CoDE) and CoDE for Diagnosis (CoDED). CoDE is a deep density estimation model for cone counting that estimates a density function whose integral equals the number of cones. CoDED integrates CoDE with deep image classifiers for diagnosis. We use two AOSLO image datasets to train and evaluate the cone density estimation and classification models for retinitis pigmentosa and Stargardt's disease. Results: Bland-Altman plots show that CoDE outperforms state-of-the-art models for cone density estimation. CoDED reported an F1 score of 0.770 ± 0.04 for disease classification, outperforming traditional convolutional networks. Conclusions: CoDED shows promise in classifying retinitis pigmentosa and Stargardt's disease cases from a single AOSLO image. Our preliminary results suggest the potential role of analyzing patterns in the retinal cellular mosaic to aid in the diagnosis of genetic eye diseases. Translational Relevance: Our study explores the potential of deep density estimation models to aid in the analysis of AOSLO images. Although the initial results are encouraging, more research is needed to fully realize the potential of such methods in the treatment and study of genetic retinal pathologies.
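The counting-by-density idea behind CoDE can be illustrated simply: if each cone contributes a unit-mass blob to a predicted density map, the map's integral (its sum, for a discrete image) recovers the count. A sketch with hypothetical cone positions, not the model's actual output:

```python
import numpy as np

def gaussian_blob(shape, center, sigma=2.0):
    """A 2D Gaussian bump normalized to integrate (sum) to exactly 1."""
    y, x = np.mgrid[:shape[0], :shape[1]]
    g = np.exp(-((y - center[0])**2 + (x - center[1])**2) / (2 * sigma**2))
    return g / g.sum()

shape = (64, 64)
cones = [(10, 12), (30, 40), (50, 20)]            # hypothetical cone centres
density = sum(gaussian_blob(shape, c) for c in cones)
count = density.sum()                             # integral ~= number of cones
```

A density-estimation network is trained to regress such maps, which sidesteps the need to detect each cone individually in low-quality or diseased mosaics.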


Subject(s)
Retinal Cone Photoreceptor Cells; Retinitis Pigmentosa; Humans; Ophthalmoscopy/methods; Retinal Cone Photoreceptor Cells/pathology; Retina/diagnostic imaging; Ophthalmoscopes; Retinitis Pigmentosa/diagnosis; Retinitis Pigmentosa/genetics
12.
Sci Rep ; 13(1): 17024, 2023 10 09.
Article in English | MEDLINE | ID: mdl-37813976

ABSTRACT

The aim of this study was to develop and test an artificial intelligence (AI)-based algorithm for detecting common technical errors in canine thoracic radiography. The algorithm was trained using a database of thoracic radiographs from three veterinary clinics in Italy, which were evaluated for image quality by three experienced veterinary diagnostic imagers. The algorithm was designed to classify the images as correct or as having one or more of the following errors: rotation, underexposure, overexposure, incorrect limb positioning, incorrect neck positioning, blurriness, cut-off, or the presence of foreign objects or medical devices. The algorithm correctly identified errors in thoracic radiographs with an overall accuracy of 81.5% in latero-lateral and 75.7% in sagittal images. The most accurately identified errors were limb mispositioning and underexposure, both in latero-lateral and sagittal images. The accuracy of the developed model in the classification of technically correct radiographs was fair in latero-lateral and good in sagittal images. The authors conclude that their AI-based algorithm is a promising tool for improving the accuracy of radiographic interpretation by identifying technical errors in canine thoracic radiographs.


Subject(s)
Algorithms; Artificial Intelligence; Animals; Dogs; Radiography; Radiography, Thoracic/veterinary; Radiography, Thoracic/methods; Italy; Retrospective Studies
13.
Sci Data ; 10(1): 648, 2023 09 22.
Article in English | MEDLINE | ID: mdl-37737210

ABSTRACT

Human activity recognition and clinical biomechanics are challenging problems in physical telerehabilitation medicine. However, most publicly available datasets on human body movements cannot be used to study both problems in an out-of-the-lab movement acquisition setting. The objective of the VIDIMU dataset is to pave the way towards affordable patient gross motor tracking solutions for daily life activity recognition and kinematic analysis. The dataset includes 13 activities registered using a commodity camera and five inertial sensors. The video recordings were acquired from 54 subjects, of whom 16 also had simultaneous recordings of inertial sensors. The novelty of the dataset lies in: (i) the clinical relevance of the chosen movements, (ii) the combined use of affordable video and custom sensors, and (iii) the implementation of state-of-the-art tools for multimodal data processing of 3D body pose tracking and motion reconstruction in a musculoskeletal model from inertial data. The validation confirms that a minimally disturbing acquisition protocol, performed under real-life conditions, can provide a comprehensive picture of human joint angles during daily life activities.


Subject(s)
Activities of Daily Living; Movement; Humans; Biomechanical Phenomena; Clinical Relevance; Motion; Recognition, Psychology
14.
J Pathol Inform ; 14: 100332, 2023.
Article in English | MEDLINE | ID: mdl-37705689

ABSTRACT

Computational pathology can significantly benefit from ontologies to standardize the employed nomenclature and help with knowledge extraction processes for high-quality annotated image datasets. The end goal is to reach a shared model for digital pathology that overcomes data variability and integration problems. Indeed, data annotation in such a specific domain is still an unsolved challenge, and datasets cannot be readily reused in diverse contexts due to heterogeneity issues with the adopted labels, multilingualism, and differing clinical practices. Material and methods: This paper presents the ExaMode ontology, modeling the histopathology process by considering 3 key cancer diseases (colon, cervical, and lung tumors) and celiac disease. The ExaMode ontology has been designed bottom-up in an iterative fashion with continuous feedback and validation from pathologists and clinicians. The ontology is organized into 5 semantic areas that define an ontological template to model any disease of interest in histopathology. Results: The ExaMode ontology is currently being used as a common semantic layer in: (i) an entity linking tool for the automatic annotation of medical records; (ii) a web-based collaborative annotation tool for histopathology text reports; and (iii) a software platform for building holistic solutions integrating multimodal histopathology data. Discussion: The ExaMode ontology is a key means of storing data in a graph database according to the RDF data model. The creation of an RDF dataset can help develop more accurate algorithms for image analysis, especially in the field of digital pathology. This approach allows for seamless data integration and a unified query access point, from which we can extract relevant clinical insights about the considered diseases using SPARQL queries.
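The graph-query workflow described in the Discussion can be sketched with a toy in-memory triple store and a SPARQL-like basic graph pattern match; the terms below are illustrative and are not the ontology's actual IRIs:

```python
# Toy RDF-style triple store; "exa:" terms are hypothetical examples,
# not identifiers from the ExaMode ontology itself.
triples = [
    ("report:1", "exa:hasDiagnosis", "exa:ColonAdenocarcinoma"),
    ("report:1", "exa:hasProcedure", "exa:Biopsy"),
    ("report:2", "exa:hasDiagnosis", "exa:CeliacDisease"),
]

def match(pattern):
    """SPARQL-like basic graph pattern over (s, p, o): None is a variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

colon_reports = [s for s, _, o in match((None, "exa:hasDiagnosis", None))
                 if o == "exa:ColonAdenocarcinoma"]
```

A real deployment would store the triples in a graph database and express the same pattern as a SPARQL query against the shared semantic layer.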

15.
Neuroimage Clin ; 39: 103491, 2023.
Article in English | MEDLINE | ID: mdl-37659189

ABSTRACT

INTRODUCTION: Over the past few years, the deep learning community has developed and validated a plethora of tools for lesion detection and segmentation in Multiple Sclerosis (MS). However, there is an important gap between validating models technically and clinically. To this end, a six-step framework necessary for the development, validation, and integration of quantitative tools in the clinic was recently proposed under the name of the Quantitative Neuroradiology Initiative (QNI). AIMS: To investigate to what extent automatic tools in MS fulfill the QNI framework necessary to integrate automated detection and segmentation into the clinical neuroradiology workflow. METHODS: Adopting the systematic Cochrane literature review methodology, we screened and summarised published scientific articles that perform automatic MS lesion detection and segmentation. We categorised the retrieved studies based on their degree of fulfillment of the QNI's six steps, which cover a tool's technical assessment, clinical validation, and integration. RESULTS: We found 156 studies; 146/156 (94%) fulfilled the first QNI step, 155/156 (99%) the second, 8/156 (5%) the third, 3/156 (2%) the fourth, 5/156 (3%) the fifth, and only one study the sixth. CONCLUSIONS: To date, little has been done to evaluate the clinical performance and the integration into the clinical workflow of available methods for MS lesion detection/segmentation. In addition, the socio-economic effects and the impact on patient management of such tools remain almost unexplored.


Subject(s)
Ambulatory Care Facilities; Multiple Sclerosis; Humans; Workflow; Multiple Sclerosis/diagnostic imaging
16.
BMC Ophthalmol ; 23(1): 220, 2023 May 17.
Article in English | MEDLINE | ID: mdl-37198558

ABSTRACT

BACKGROUND: Amblyopia is the most common developmental vision disorder in children. The initial treatment consists of refractive correction. When this is insufficient, occlusion therapy may further improve visual acuity. However, the challenges and compliance issues associated with occlusion therapy may result in treatment failure and residual amblyopia. Virtual reality (VR) games developed to improve visual function have shown positive preliminary results. The aim of this study is to determine the efficacy of these games in improving vision, attention, and motor skills in patients with residual amblyopia, and to identify brain-related changes. We hypothesize that VR-based training with the suggested ingredients (3D cues and rich feedback), combined with increasing difficulty levels and the use of various games in a home-based environment, is crucial for the efficacy of vision recovery treatment and may be particularly effective in children. METHODS: The AMBER study is a randomized, cross-over, controlled trial designed to assess the effect of binocular stimulation (VR-based stereoptic serious games) in individuals with residual amblyopia (n = 30, 6-35 years of age), compared to refractive correction, on vision, selective attention and motor control skills. Additionally, participants will be compared to a control group of age-matched healthy individuals (n = 30) to account for the unique benefit of VR-based serious games. All participants will play serious games 30 min per day, 5 days per week, for 8 weeks. The games are delivered with the Vivid Vision Home software. The amblyopic cohort will receive both treatments in a randomized order according to the type of amblyopia, while the control group will only receive the VR-based stereoscopic serious games. The primary outcome is visual acuity in the amblyopic eye. Secondary outcomes include stereoacuity, functional vision, cortical visual responses, selective attention, and motor control. The outcomes will be measured before and after each treatment, with an 8-week follow-up. DISCUSSION: The VR-based games used in this study have been conceived to deliver binocular visual stimulation tailored to the individual visual needs of the patient, which will potentially result in improved basic and functional vision skills as well as visual attention and motor control skills. TRIAL REGISTRATION: This protocol is registered on ClinicalTrials.gov (identifier: NCT05114252) and in the Swiss National Clinical Trials Portal (identifier: SNCTP000005024).


Subject(s)
Amblyopia, Video Games, Child, Humans, Amblyopia/therapy, Vision, Binocular/physiology, Visual Acuity, Treatment Outcome, Randomized Controlled Trials as Topic
17.
Stud Health Technol Inform ; 302: 586-590, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203753

RESUMEN

Risk of bias (RoB) assessment of randomized clinical trials (RCTs) is vital to conducting systematic reviews. Manual RoB assessment for hundreds of RCTs is a cognitively demanding, lengthy process and is prone to subjective judgment. Supervised machine learning (ML) can help to accelerate this process but requires a hand-labelled corpus. There are currently no RoB annotation guidelines for randomized clinical trials or annotated corpora. In this pilot project, we test the practicality of directly using the revised Cochrane RoB 2.0 guidelines for developing an RoB annotated corpus using a novel multi-level annotation scheme. We report inter-annotator agreement among four annotators who used Cochrane RoB 2.0 guidelines. The agreement ranges between 0% for some bias classes and 76% for others. Finally, we discuss the shortcomings of this direct translation of annotation guidelines and scheme and suggest approaches to improve them to obtain an RoB annotated corpus suitable for ML.
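As a concrete illustration of the agreement figures reported above (0% to 76% across bias classes), the sketch below shows how raw pairwise percent agreement and chance-corrected Cohen's kappa could be computed over annotator label sequences. The function names and label values are illustrative assumptions, not taken from the study.

```python
from collections import Counter
from itertools import combinations

def percent_agreement(a, b):
    """Fraction of items on which two annotators assign the same label."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators."""
    po = percent_agreement(a, b)                      # observed agreement
    ca, cb = Counter(a), Counter(b)
    n = len(a)
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)  # expected by chance
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

def mean_pairwise_agreement(annotations):
    """Average raw agreement over all annotator pairs (e.g., four annotators)."""
    pairs = list(combinations(annotations, 2))
    return sum(percent_agreement(a, b) for a, b in pairs) / len(pairs)
```

For multiple annotators, a multi-rater statistic such as Fleiss' kappa would be the natural extension of the pairwise version sketched here.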


Subject(s)
Judgment, Research Design, Randomized Controlled Trials as Topic, Bias, Risk Assessment
18.
Insights Imaging ; 14(1): 75, 2023 May 04.
Article in English | MEDLINE | ID: mdl-37142815

RESUMEN

Even though radiomics can hold great potential for supporting clinical decision-making, its current use is mostly limited to academic research, without applications in routine clinical practice. The workflow of radiomics is complex due to several methodological steps and nuances, which often leads to inadequate reporting and evaluation, and poor reproducibility. Available reporting guidelines and checklists for artificial intelligence and predictive modeling include relevant good practices, but they are not tailored to radiomic research. There is a clear need for a complete radiomics checklist for study planning, manuscript writing, and evaluation during the review process to facilitate the repeatability and reproducibility of studies. We here present a documentation standard for radiomic research that can guide authors and reviewers. Our motivation is to improve the quality and reliability and, in turn, the reproducibility of radiomic research. We name the checklist CLEAR (CheckList for EvaluAtion of Radiomics research), to convey the idea of being more transparent. With its 58 items, the CLEAR checklist should be considered a standardization tool providing the minimum requirements for presenting clinical radiomics research. In addition to a dynamic online version of the checklist, a public repository has also been set up to allow the radiomics community to comment on the checklist items and adapt the checklist for future versions. Prepared and revised by an international group of experts using a modified Delphi method, we hope the CLEAR checklist will serve well as a single and complete scientific documentation tool for authors and reviewers to improve the radiomics literature.

19.
Med Phys ; 50(9): 5682-5697, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36945890

RESUMEN

BACKGROUND: To test and validate novel CT techniques, such as texture analysis in radiomics, repeat measurements are required. Current anthropomorphic phantoms lack fine texture and true anatomic representation. 3D printing of iodinated ink on paper is a promising phantom manufacturing technique. Previously acquired or artificially created CT data can be used to generate realistic phantoms. PURPOSE: To present the design process of an anthropomorphic 3D-printed iodine ink phantom, highlighting the advantages and pitfalls of its use, and to analyze the phantom's X-ray attenuation properties and the influence of the printing process on the imaging characteristics by comparison with the original input dataset. METHODS: Two patient CT scans and artificially generated test patterns were combined in a single dataset for phantom printing and cropped to a size of 26 × 19 × 30 cm³. This DICOM dataset was printed on paper using iodinated ink. The phantom was CT-scanned and compared to the original image dataset used for printing. The water-equivalent diameter of the phantom was compared to that of a patient cohort (N = 104). Iodine concentrations in the phantom were measured using dual-energy CT. 86 radiomics features were extracted from 10 repeat phantom scans and from the input dataset; features were compared individually using histogram analysis and overall using principal component analysis (PCA). The frequency content was compared using the normalized spectrum modulus. RESULTS: Low-density structures are depicted incorrectly, while soft-tissue structures show excellent visual accordance with the input dataset. Maximum deviations of around 30 HU between the original dataset and phantom HU values were observed. The phantom has X-ray attenuation properties comparable to a lightweight adult patient (~54 kg, BMI 19 kg/m²). Iodine concentrations in the phantom varied between 0 and 50 mg/ml. PCA of the radiomics features shows that the different tissue types separate into similar areas of the PCA representation in the phantom scans as in the input dataset. Individual feature analysis revealed a systematic shift of first-order radiomics features relative to the original dataset, while some higher-order radiomics features did not shift. The normalized spectrum modulus |f(ω)| of the phantom data agrees well with the original data. However, relative to the maximum of the spectrum modulus, all frequencies occur more prominently in the phantom than in the original dataset, especially mid-frequencies (e.g., for ω = 0.3942 mm⁻¹, |f(ω)|_original = 0.09 · |f_max|_original versus |f(ω)|_phantom = 0.12 · |f_max|_phantom). CONCLUSIONS: 3D iodine-ink printing can be used to produce anthropomorphic phantoms with the water-equivalent diameter of a lightweight adult patient. Challenges include small residual air enclosures and the fidelity of HU values. For soft tissue, there is good agreement between the HU values of the phantom and the input dataset. Radiomics texture features of the phantom scans are similar to those of the input dataset, but systematic shifts of first-order radiomics features, due to differences in HU values, need to be considered. The paper substrate influences the spatial frequency distribution of the phantom scans. This phantom type is of very limited use for dual-energy CT analyses.
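A normalized spectrum modulus comparison of the kind reported in the results could be sketched as below: a radially averaged 2D FFT modulus, scaled by its maximum, computed for each scan and then compared bin by bin. This is a minimal illustration assuming a square 2D slice; the frequency scaling is approximate and the function is not the authors' actual implementation.

```python
import numpy as np

def normalized_spectrum_modulus(image, pixel_spacing_mm=1.0):
    """Radially averaged 2D FFT modulus, normalized to its maximum.

    Returns (spatial frequency per radial bin in mm^-1, |f(w)| / |f_max|).
    """
    f = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    cy, cx = np.asarray(f.shape) // 2
    y, x = np.indices(f.shape)
    r = np.hypot(y - cy, x - cx).astype(int)       # integer radial bin per pixel
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=f.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(radial.size) / (image.shape[0] * pixel_spacing_mm)
    return freqs, radial / radial.max()
```

Evaluating both the phantom scan and the input dataset with this function at the same pixel spacing would allow the ratio |f(ω)|/|f_max| to be compared at matched frequencies, as in the ω = 0.3942 mm⁻¹ example above.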


Subject(s)
Ink, Tomography, X-Ray Computed, Humans, Tomography, X-Ray Computed/methods, Phantoms, Imaging, Printing, Three-Dimensional
20.
JAMIA Open ; 6(1): ooac107, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36632329

RESUMEN

Objective: The aim of this study was to test the feasibility of PICO (participants, interventions, comparators, outcomes) entity extraction using weak supervision and natural language processing. Methodology: We repurpose more than 127 medical and nonmedical ontologies and expert-generated rules to obtain multiple noisy labels for PICO entities in the evidence-based medicine (EBM)-PICO corpus. These noisy labels are aggregated using simple majority voting and generative modeling to obtain consensus labels. The resulting probabilistic labels are used as weak signals to train a weakly supervised (WS) discriminative model and observe performance changes. We also examine mistakes in the EBM-PICO corpus that could have led to inaccurate evaluation of previous automation methods. Results: In total, 4081 randomized clinical trials were weakly labeled to train the WS models and compared against full supervision. The models were trained separately for each PICO entity and evaluated on the EBM-PICO test set. A WS approach combining ontologies and expert-generated rules outperformed full supervision on the participant entity by 1.71% macro-F1. Error analysis on the EBM-PICO subset revealed 18-23% erroneous token classifications. Discussion: Automatic PICO entity extraction accelerates the writing of clinical systematic reviews, which commonly use PICO information to filter health evidence. However, PICO extends to further entities: PICOS (S for study type and design), PICOC (C for context), and PICOT (T for timeframe), for which labelled datasets are unavailable. In such cases, the ability to use weak supervision overcomes the expensive annotation bottleneck. Conclusions: We show the feasibility of WS PICO entity extraction using freely available ontologies and heuristics without manually annotated data. Weak supervision achieves encouraging performance compared to full supervision but requires careful design to outperform it.
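The simple majority-voting aggregation step described in the methodology can be sketched as follows; the generative-model alternative (as used in Snorkel-style frameworks) is not shown, and the PICO tags and abstain convention are assumptions for illustration only.

```python
from collections import Counter

ABSTAIN = None  # a labeling source (ontology match or rule) may abstain on a token

def majority_vote(votes):
    """Consensus label for one token from multiple noisy labeling sources."""
    cast = [v for v in votes if v is not ABSTAIN]
    if not cast:
        return ABSTAIN               # no source fired on this token
    (label, _), = Counter(cast).most_common(1)  # ties break by first occurrence
    return label

def aggregate(token_votes):
    """token_votes: per-token lists of source votes -> consensus label sequence."""
    return [majority_vote(v) for v in token_votes]
```

The consensus sequence produced this way could then serve as the weak training signal for a discriminative token classifier, with the generative model offering a principled alternative when sources have very different accuracies.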
