Results 1 - 20 of 47
1.
BMC Med Imaging ; 21(1): 77, 2021 05 08.
Article in English | MEDLINE | ID: mdl-33964886

ABSTRACT

BACKGROUND: One challenge in training deep convolutional neural network (CNN) models with whole slide images (WSIs) is providing the required large number of costly, manually annotated image regions. Strategies to alleviate the scarcity of annotated data include using transfer learning, data augmentation and training the models with less expensive image-level annotations (weakly-supervised learning). However, it is not clear how to use transfer learning in a CNN model when different data sources are available for training, or how to leverage the combination of large amounts of weakly annotated images with a set of local region annotations. This paper aims to evaluate CNN training strategies based on transfer learning to leverage the combination of weak and strong annotations in heterogeneous data sources. The trade-off between classification performance and annotation effort is explored by evaluating a CNN that learns from strong labels (region annotations) and is later fine-tuned on a dataset with less expensive weak (image-level) labels. RESULTS: As expected, the model performance on strongly annotated data steadily increases as the percentage of strong annotations used increases, reaching a performance comparable to pathologists ([Formula: see text]). Nevertheless, the performance sharply decreases when the model is applied to the WSI classification scenario ([Formula: see text]), and it remains lower regardless of the number of annotations used. The model performance increases when fine-tuning the model for the task of Gleason scoring with the weak WSI labels [Formula: see text]. CONCLUSION: Combining weak and strong supervision improves over strong supervision alone in the classification of Gleason patterns using tissue microarrays (TMA) and WSI regions. Our results provide effective strategies for training CNN models that combine few annotated data and heterogeneous data sources. The performance in the controlled TMA scenario increases with the number of annotations used to train the model. Nevertheless, the performance is hindered when the trained TMA model is applied directly to the more challenging WSI classification problem. This demonstrates that a good pre-trained model for prostate cancer TMA image classification may lead to the best downstream model if fine-tuned on the WSI target dataset. We have made available the source code repository for reproducing the experiments in the paper: https://github.com/ilmaro8/Digital_Pathology_Transfer_Learning.
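
A minimal PyTorch sketch of the fine-tuning step described above: a CNN first trained on strongly annotated regions is later fine-tuned with image-level (weak) labels. The backbone choice, dataloaders, and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_region_model(num_classes: int) -> nn.Module:
    """CNN trained on strongly annotated regions (e.g., TMA patches)."""
    model = models.densenet121(weights="DEFAULT")
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

def finetune_on_weak_labels(model, weak_loader, num_epochs=5, lr=1e-4):
    """Fine-tune the pretrained region model using image-level (weak) labels."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(num_epochs):
        for images, weak_labels in weak_loader:  # one weak label per image
            optimizer.zero_grad()
            loss = criterion(model(images), weak_labels)
            loss.backward()
            optimizer.step()
    return model
```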


Subjects
Neoplasm Grading/methods; Neural Networks, Computer; Prostatic Neoplasms/pathology; Supervised Machine Learning; Datasets as Topic; Diagnosis, Computer-Assisted/methods; Humans; Male; Neoplasm Grading/classification; Prostate/pathology; Prostatectomy/methods; Prostatic Neoplasms/surgery; Tissue Array Analysis
2.
Sensors (Basel) ; 21(22)2021 Nov 11.
Article in English | MEDLINE | ID: mdl-34833573

ABSTRACT

One major challenge limiting the use of dexterous robotic hand prostheses controlled via electromyography and pattern recognition is the substantial effort required to train complex models from scratch. To overcome this problem, several studies in recent years proposed to use transfer learning, combining pre-trained models (obtained from prior subjects) with training sessions performed on a specific user. Although a few promising results were reported in the past, it was recently shown that the use of conventional transfer learning algorithms does not increase performance if proper hyperparameter optimization is performed on the standard approach that does not exploit transfer learning. The objective of this paper is to introduce novel analyses on this topic by using a random forest classifier without hyperparameter optimization and to extend them with experiments performed on data recorded from the same patient, but in different data acquisition sessions. Two domain adaptation techniques were tested on the random forest classifier, allowing us to conduct experiments on healthy subjects and amputees. In contrast to several previous papers, our results show that there are no appreciable improvements in terms of accuracy, regardless of the transfer learning techniques tested. The lack of benefit from adaptive learning is also demonstrated for the first time in an intra-subject experimental setting, using as a source ten data acquisitions recorded from the same subject but on five different days.
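
An illustrative sketch (not the paper's exact protocol) of the simplest form of "pre-trained + user-specific" training tested with a random forest: sEMG feature data from other subjects or sessions is pooled with a small calibration set from the target user, with no hyperparameter optimization, and compared against a target-only baseline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def train_and_evaluate(X_source, y_source, X_target_cal, y_target_cal,
                       X_target_test, y_target_test):
    # Baseline: target calibration data only.
    baseline = RandomForestClassifier(n_estimators=100, random_state=0)
    baseline.fit(X_target_cal, y_target_cal)

    # Naive transfer: pool source data with the target calibration data.
    pooled = RandomForestClassifier(n_estimators=100, random_state=0)
    pooled.fit(np.vstack([X_source, X_target_cal]),
               np.concatenate([y_source, y_target_cal]))

    return (accuracy_score(y_target_test, baseline.predict(X_target_test)),
            accuracy_score(y_target_test, pooled.predict(X_target_test)))
```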


Subjects
Amputees; Artificial Limbs; Algorithms; Electromyography; Hand; Humans; Pattern Recognition, Automated
3.
Sensors (Basel) ; 20(15)2020 Aug 01.
Article in English | MEDLINE | ID: mdl-32752155

ABSTRACT

BACKGROUND: Muscle synergy analysis is an approach to understand the neurophysiological mechanisms behind the hypothesized ability of the Central Nervous System (CNS) to reduce the dimensionality of muscle control. The muscle synergy approach is also used to evaluate motor recovery and the evolution of patients' motor performance in both single-session and longitudinal studies. Synergy-based assessments are subject to various sources of variability: natural trial-by-trial variability of performed movements, intrinsic characteristics of subjects that change over time (e.g., recovery, adaptation, exercise), as well as experimental factors such as different electrode positioning. These sources of variability need to be quantified in order to resolve challenges for the application of muscle synergies in clinical environments. The objective of this study is to analyze the stability and similarity of extracted muscle synergies under the effect of factors that may induce variability, including inter- and intra-session variability within subjects and the differentiation of inter-subject variability. The analysis was performed using the comprehensive, publicly available hand grasp NinaPro Database, featuring surface electromyography (EMG) measures from two EMG electrode bracelets. METHODS: Intra-session, inter-session, and inter-subject synergy stability was analyzed using the following measures: variance accounted for (VAF) and number of synergies (NoS) as measures of the stability of the reconstruction quality, and cosine similarity for comparison of the spatial composition of extracted synergies. Moreover, an approach based on virtual electrode repositioning was applied to shed light on the influence of electrode position on inter-session synergy similarity. RESULTS: Inter-session synergy similarity was significantly lower than intra-session similarity, both considering the coefficient of variation of VAF (approximately 0.2-15% for inter vs. approximately 0.1-2.5% for intra, depending on NoS) and the coefficient of variation of NoS (approximately 6.5-14.5% for inter vs. approximately 3-3.5% for intra, depending on VAF), as well as synergy similarity (approximately 74-77% for inter vs. approximately 88-94% for intra, depending on the selected VAF). Virtual electrode repositioning revealed that a slightly different electrode position can lower the similarity of synergies from the same session and can increase the similarity between sessions. Finally, the similarity of inter-subject synergies was not significantly different from the similarity of inter-session synergies (both on average approximately 84-90%, depending on the selected VAF). CONCLUSION: Synergy similarity was lower in inter-session conditions than in intra-session conditions. This finding should be considered when interpreting results from multi-session assessments. Lastly, electrode positioning might play an important role in the lower similarity of synergies across different sessions.
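
A sketch of the core quantities used in this kind of analysis, under the usual NMF-based synergy model: the VAF of the reconstruction and the cosine similarity between matched synergy vectors from two sessions (the matching strategy is simplified here and the function names are assumptions).

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_synergies(emg_envelopes: np.ndarray, n_synergies: int):
    """emg_envelopes: (n_samples, n_muscles) non-negative EMG envelopes."""
    model = NMF(n_components=n_synergies, init="nndsvda", max_iter=1000)
    activations = model.fit_transform(emg_envelopes)  # temporal coefficients
    synergies = model.components_                     # (n_synergies, n_muscles)
    residual = emg_envelopes - activations @ synergies
    vaf = 1.0 - np.sum(residual ** 2) / np.sum(emg_envelopes ** 2)
    return synergies, activations, vaf

def cosine_similarity(w1: np.ndarray, w2: np.ndarray) -> float:
    """Similarity between two spatial synergy vectors."""
    return float(np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2)))
```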


Subjects
Hand Strength; Muscle, Skeletal; Activities of Daily Living; Adult; Biomechanical Phenomena; Electromyography; Female; Hand; Humans; Male; Young Adult
4.
J Neuroeng Rehabil ; 16(1): 63, 2019 05 28.
Article in English | MEDLINE | ID: mdl-31138257

ABSTRACT

BACKGROUND: Hand grasp patterns require complex coordination. The reduction of kinematic dimensionality is a key process for studying the patterns underlying hand usage and grasping. It allows defining metrics for motor assessment and rehabilitation and developing assistive devices and prosthesis control methods. Several studies have been presented in this field, but most of them targeted a limited number of subjects, focused on postures rather than entire grasping movements, and did not perform separate analyses for tasks and subjects, which can limit the impact on rehabilitation and assistive applications. This paper provides a comprehensive mapping of synergies from hand grasps targeting activities of daily living. It clarifies several current limits of the field and fosters the development of applications in rehabilitation and assistive robotics. METHODS: In this work, hand kinematic data of 77 subjects, performing up to 20 hand grasps, were acquired with a 22-sensor CyberGlove II data glove and analyzed. Principal Component Analysis (PCA) and hierarchical cluster analysis were used to extract and group kinematic synergies that summarize the coordination patterns available for hand grasps. RESULTS: Twelve synergies were found to account for >80% of the overall variation. The first three synergies accounted for more than 50% of the total amount of variance and consisted of: flexion and adduction of the metacarpophalangeal (MCP) joints of fingers 3 to 5 (synergy #1), palmar arching and flexion of the wrist (synergy #2), and opposition of the thumb (synergy #3). Further synergies refine movements and have higher variability among subjects. CONCLUSION: Kinematic synergies are extracted from a large number of subjects (77) and grasps related to activities of daily living (20). The number of motor modules required to perform the motor tasks is higher than previously described. Twelve synergies are responsible for most of the variation in hand grasping. The first three are used as primary synergies, while the remaining ones target finer movements (e.g., independence of thumb and index finger). The results generalize the description of hand kinematics, better clarifying several limits of the field and fostering the development of applications in rehabilitation and assistive robotics.
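
A minimal sketch of the dimensionality-reduction step: PCA on data-glove joint angles and selection of the number of kinematic synergies needed to exceed a variance threshold (80% in the study). Array shapes and the helper name are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def kinematic_synergies(joint_angles: np.ndarray, var_threshold: float = 0.80):
    """joint_angles: (n_samples, 22) angles from a 22-sensor data glove."""
    pca = PCA()                               # PCA centers the data internally
    scores = pca.fit_transform(joint_angles)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    n_synergies = int(np.searchsorted(cumulative, var_threshold) + 1)
    return pca.components_[:n_synergies], scores[:, :n_synergies], n_synergies
```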


Subjects
Activities of Daily Living; Hand Strength/physiology; Motor Activity/physiology; Biomechanical Phenomena; Datasets as Topic; Female; Humans; Male; Principal Component Analysis
5.
J Neuroeng Rehabil ; 16(1): 28, 2019 02 15.
Article in English | MEDLINE | ID: mdl-30770759

ABSTRACT

BACKGROUND: Proper modeling of human grasping and of hand movements is fundamental for robotics, prosthetics, physiology and rehabilitation. The taxonomies of hand grasps that have been proposed in the scientific literature so far are based on qualitative analyses of the movements and thus are usually not quantitatively justified. METHODS: This paper presents, to the best of our knowledge, the first quantitative taxonomy of hand grasps based on biomedical data measurements. The taxonomy is based on electromyography and kinematic data recorded from 40 healthy subjects performing 20 unique hand grasps. For each subject, a set of hierarchical trees is computed for several signal features. Afterwards, the trees are combined, first into modality-specific (i.e., muscular and kinematic) taxonomies of hand grasps and then into a general quantitative taxonomy of hand movements. The modality-specific taxonomies provide similar results despite describing different parameters of hand movements, one being muscular and the other kinematic. RESULTS: The general taxonomy merges the kinematic and muscular descriptions into a comprehensive hierarchical structure. The obtained results clarify what has been proposed in the literature so far and partially confirm the qualitative parameters used to create previous taxonomies of hand grasps. According to the results, hand movements can be divided into five movement categories defined based on the overall grasp shape, finger positioning and muscular activation. Part of the results appears qualitatively in accordance with previous results describing kinematic hand grasping synergies. CONCLUSIONS: The taxonomy of hand grasps proposed in this paper clarifies with quantitative measurements what has been proposed in the field on a qualitative basis, thus having a potential impact on several scientific fields.
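
A sketch of how a single-modality hierarchical tree of hand grasps can be built from feature vectors (one row per grasp type); the actual study combines per-subject trees across subjects and modalities. The distance metric and linkage method are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

def grasp_taxonomy(grasp_features: np.ndarray, grasp_names: list):
    """grasp_features: (n_grasps, n_features), e.g., averaged EMG or kinematic features."""
    distances = pdist(grasp_features, metric="cosine")   # pairwise grasp distances
    tree = linkage(distances, method="average")          # hierarchical tree
    return dendrogram(tree, labels=grasp_names, no_plot=True)  # taxonomy structure
```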


Subjects
Hand Strength/physiology; Hand/physiology; Adult; Algorithms; Biomechanical Phenomena; Classification; Electromyography; Female; Fingers; Hand/anatomy & histology; Healthy Volunteers; Humans; Male; Movement; Reference Values; Signal Processing, Computer-Assisted
6.
Comput Methods Programs Biomed ; 250: 108187, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38657383

ABSTRACT

BACKGROUND AND OBJECTIVE: The automatic registration of differently stained whole slide images (WSIs) is crucial for improving diagnosis and prognosis by fusing complementary information emerging from different visible structures. It is also useful to quickly transfer annotations between consecutive or restained slides, thus significantly reducing the annotation time and associated costs. Nevertheless, the slide preparation is different for each stain and the tissue undergoes complex and large deformations. Therefore, a robust, efficient, and accurate registration method is highly desired by the scientific community and by hospitals specializing in digital pathology. METHODS: We propose a two-step hybrid method consisting of (i) a deep learning- and feature-based initial alignment algorithm, and (ii) an intensity-based nonrigid registration using instance optimization. The proposed method does not require any fine-tuning to a particular dataset and can be used directly for any desired tissue type and stain. The registration time is low, allowing one to perform efficient registration even for large datasets. The method was proposed for the ACROBAT 2023 challenge organized during the MICCAI 2023 conference and scored 1st place. The method is released as open-source software. RESULTS: The proposed method is evaluated using three open datasets: (i) the Automatic Nonrigid Histological Image Registration dataset (ANHIR), (ii) the Automatic Registration of Breast Cancer Tissue dataset (ACROBAT), and (iii) the Hybrid Restained and Consecutive Histological Serial Sections dataset (HyReCo). The target registration error (TRE) is used as the evaluation metric. We compare the proposed algorithm to other state-of-the-art solutions, showing considerable improvement. Additionally, we perform several ablation studies concerning the resolution used for registration and the robustness and stability of the initial alignment. The method achieves the most accurate results for the ACROBAT dataset, cell-level registration accuracy for the restained slides from the HyReCo dataset, and is among the best methods evaluated on the ANHIR dataset. CONCLUSIONS: The article presents an automatic and robust registration method that outperforms other state-of-the-art solutions. The method does not require any fine-tuning to a particular dataset and can be used out-of-the-box for numerous types of microscopic images. The method is incorporated into the DeeperHistReg framework, allowing others to directly use it to register, transform, and save the WSIs at any desired pyramid level (resolution up to 220k x 220k). We provide free access to the software. The results are fully and easily reproducible. The proposed method is a significant contribution to improving WSI registration quality, thus advancing the field of digital pathology.
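
A small sketch of the evaluation metric mentioned above: the target registration error (TRE) between transformed source landmarks and the corresponding target landmarks, with optional normalization by the image diagonal (a common convention, assumed here rather than taken from the paper).

```python
import numpy as np

def target_registration_error(warped_landmarks: np.ndarray,
                              target_landmarks: np.ndarray,
                              image_shape=None) -> np.ndarray:
    """Both landmark arrays have shape (n_landmarks, 2) in pixel coordinates."""
    tre = np.linalg.norm(warped_landmarks - target_landmarks, axis=1)
    if image_shape is not None:   # relative TRE, normalized by the image diagonal
        tre = tre / np.sqrt(image_shape[0] ** 2 + image_shape[1] ** 2)
    return tre
```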


Subjects
Algorithms; Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Software; Image Interpretation, Computer-Assisted/methods; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Female; Staining and Labeling
7.
Med Image Anal ; 95: 103191, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38728903

ABSTRACT

Prostate cancer is the second most frequent cancer in men worldwide after lung cancer. Its diagnosis is based on the identification of the Gleason score, which evaluates the abnormality of cells in glands through the analysis of the different Gleason patterns within tissue samples. Recent advancements in computational pathology, a domain aiming at developing algorithms to automatically analyze digitized histopathology images, have led to a large variety and availability of datasets and algorithms for Gleason grading and scoring. However, there is no clear consensus on which methods are best suited for each problem in relation to the characteristics of data and labels. This paper provides a systematic comparison, on nine datasets, of state-of-the-art training approaches for deep neural networks (including fully-supervised learning, weakly-supervised learning, semi-supervised learning, Additive-MIL, Attention-Based MIL, Dual-Stream MIL, TransMIL and CLAM) applied to Gleason grading and scoring tasks. The nine datasets are collected from pathology institutes and openly accessible repositories. The results show that the best methods for the Gleason grading and Gleason scoring tasks are fully supervised learning and CLAM, respectively, guiding researchers to the best practice to adopt depending on the task to solve and the labels that are available.
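
A compact PyTorch sketch of attention-based MIL pooling, one family of weakly supervised approaches compared above: per-patch embeddings are weighted by learned attention scores and aggregated into a slide-level prediction. Dimensions and the class name are illustrative, not any specific compared implementation.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=4):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_features):              # (n_patches, feat_dim)
        weights = torch.softmax(self.attention(patch_features), dim=0)
        slide_embedding = (weights * patch_features).sum(dim=0)
        return self.classifier(slide_embedding)     # slide-level logits
```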


Subjects
Deep Learning; Neoplasm Grading; Prostatic Neoplasms; Humans; Prostatic Neoplasms/pathology; Prostatic Neoplasms/diagnostic imaging; Male; Algorithms; Image Interpretation, Computer-Assisted/methods
8.
Sci Rep ; 13(1): 1095, 2023 01 19.
Article in English | MEDLINE | ID: mdl-36658254

ABSTRACT

Several challenges prevent extracting knowledge from biomedical resources, including data heterogeneity and the difficulty for medical doctors to obtain and collaborate on data and annotations. Therefore, flexibility in their representation and interconnection is required; it is also essential to be able to interact easily with such data. In recent years, semantic tools have been developed: semantic wikis are collections of wiki pages that can be annotated with properties and thus combine flexibility and expressiveness, two desirable aspects when modeling databases, especially in the dynamic biomedical domain. However, the semantic and collaborative analysis of biomedical data is still an unsolved challenge. The aim of this work is to create a tool for easing the design and setup of semantic databases and to make it possible to enrich them with biostatistical applications. As a side effect, this also makes them reproducible, fostering their application by other research groups. A command-line tool has been developed for creating all the structures required by Semantic MediaWiki. In addition, a way to expose statistical analyses as R Shiny applications in the interface is provided, along with a facility to export Prolog predicates for reasoning with external tools. The developed software allowed a set of biomedical databases for the Neuroscience Department of the University of Padova to be created in a more automated way. They can be extended with additional qualitative and statistical analyses of data, including, for instance, regressions, geographical distribution of diseases, and clustering. The software is released as open-source code and published under the GPL-3 license at https://github.com/mfalda/tsv2swm .
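
A toy sketch of the kind of transformation such a tool automates: turning TSV rows into Semantic MediaWiki page text with [[Property::value]] annotations. The column-to-property mapping and page naming are assumptions, not the tool's actual behavior.

```python
import csv

def tsv_to_wiki_pages(tsv_path: str, title_column: str) -> dict:
    """Return a mapping from page title to Semantic MediaWiki page body."""
    pages = {}
    with open(tsv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            title = row[title_column]
            body = "\n".join(f"[[{column}::{value}]]"
                             for column, value in row.items()
                             if column != title_column and value)
            pages[title] = body
    return pages
```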


Subjects
Semantics; Software; Databases, Factual
9.
Neuroscience ; 514: 100-122, 2023 03 15.
Article in English | MEDLINE | ID: mdl-36708799

ABSTRACT

Muscle synergy analysis investigates the neurophysiological mechanisms that the central nervous system employs to coordinate muscles. Several models have been developed to decompose electromyographic (EMG) signals into spatial and temporal synergies. However, using multiple approaches can complicate the interpretation of results. Spatial synergies represent invariant muscle weights modulated by time-varying temporal coefficients; temporal synergies are invariant temporal profiles that coordinate varying muscle weights. While non-negative matrix factorization allows both spatial and temporal synergies to be extracted, the comparison between the two approaches has rarely been investigated on a large set of multi-joint upper-limb movements. Spatial and temporal synergies were extracted from two datasets with proximal (16 subjects, 10M, 6F) and distal upper-limb movements (30 subjects, 21M, 9F), focusing on their differences in reconstruction accuracy and inter-individual variability. We showed the existence of both spatial and temporal structure in the EMG data by comparing synergies with those from a surrogate dataset in which the phases were shuffled while preserving the frequency content of the original data. The two models provide a compact characterization of motor coordination at the spatial or temporal level, respectively. However, fewer temporal synergies are needed to achieve the same reconstruction R2: spatial and temporal synergies may capture different hierarchical levels of motor control and are dual approaches to the characterization of low-dimensional coordination of the upper limb. Lastly, a detailed characterization of the structure of the temporal synergies suggested that they can be related to intermittent control of the movement, allowing high flexibility and dexterity. These results improve the understanding of neurophysiology in several fields such as motor control, rehabilitation, and prosthetics.
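
A sketch of the surrogate-data control described above: the phases of each EMG channel are randomized in the frequency domain while the amplitude spectrum (frequency content) is preserved. The function name is an assumption.

```python
import numpy as np

def phase_shuffled_surrogate(emg: np.ndarray, rng=None) -> np.ndarray:
    """emg: (n_samples, n_channels). Returns a phase-randomized surrogate."""
    rng = np.random.default_rng(rng)
    spectrum = np.fft.rfft(emg, axis=0)
    random_phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=spectrum.shape))
    random_phases[0] = 1.0                     # keep the DC component untouched
    surrogate = np.fft.irfft(np.abs(spectrum) * random_phases,
                             n=emg.shape[0], axis=0)
    return surrogate
```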


Subjects
Muscle, Skeletal; Temporal Muscle; Humans; Muscle, Skeletal/physiology; Electromyography; Movement/physiology; Upper Extremity/physiology
10.
Article in English | MEDLINE | ID: mdl-38082977

ABSTRACT

The acquisition of whole slide images is prone to artifacts that can require human control and re-scanning, both in clinical workflows and in research-oriented settings. Quality control algorithms are a first step to overcome this challenge, as they limit the use of low-quality images. Developing quality control systems in histopathology is not straightforward, partly due to the limited availability of data related to this topic. We address the problem by proposing a tool to augment data with artifacts. The proposed method seamlessly generates and blends artifacts from an external library onto a given histopathology dataset. The datasets augmented with the blended artifacts are then used to train an artifact detection network in a supervised way. We use the YOLOv5 model for artifact detection with a slightly modified training pipeline. The proposed tool can be extended into a complete framework for the quality assessment of whole slide images. Clinical relevance: The proposed method may be useful for the initial quality screening of whole slide images. Each year, millions of whole slide images are acquired and digitized worldwide. Many of them contain artifacts that affect the subsequent AI-oriented analysis. Therefore, a tool operating at the acquisition phase and improving the initial quality assessment is crucial to increase the performance of digital pathology algorithms, e.g., in early cancer diagnosis.
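
A minimal sketch of the augmentation idea: an artifact crop with an alpha mask is blended onto a clean histopathology patch at a random location, and the resulting bounding box serves as a detection label. The artifact library, mask generation, and the assumption that the artifact is smaller than the patch are simplifications, not the paper's exact pipeline.

```python
import numpy as np

def blend_artifact(patch: np.ndarray, artifact: np.ndarray,
                   alpha_mask: np.ndarray, rng=None):
    """patch: (H, W, 3); artifact: (h, w, 3); alpha_mask: (h, w) in [0, 1], h<H, w<W."""
    rng = np.random.default_rng(rng)
    h, w = artifact.shape[:2]
    y = rng.integers(0, patch.shape[0] - h)          # random placement
    x = rng.integers(0, patch.shape[1] - w)
    region = patch[y:y + h, x:x + w].astype(float)
    blended = alpha_mask[..., None] * artifact + (1 - alpha_mask[..., None]) * region
    out = patch.copy()
    out[y:y + h, x:x + w] = blended.astype(patch.dtype)
    bbox = (x, y, x + w, y + h)                      # detection label for training
    return out, bbox
```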


Subjects
Artifacts; Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Algorithms
11.
Sci Rep ; 13(1): 19518, 2023 11 09.
Article in English | MEDLINE | ID: mdl-37945653

ABSTRACT

The analysis of veterinary radiographic imaging data is an essential step in the diagnosis of many thoracic lesions. Given the limited time that physicians can devote to a single patient, it would be valuable to implement an automated system to help clinicians make faster but still accurate diagnoses. Currently, most such systems are based on supervised deep learning approaches. However, the problem with these solutions is that they need a large database of labeled data. Access to such data is often limited, as it requires a great investment of both time and money. Therefore, in this work we present a solution that allows higher classification scores to be obtained using knowledge transfer from inter-species and inter-pathology self-supervised learning methods. Before training the network for classification, pretraining of the model was performed using self-supervised learning approaches on publicly available unlabeled radiographic data of human and dog images, which allowed the number of images available for this phase to be substantially increased. The self-supervised learning approaches included the Beta Variational Autoencoder, the Soft-Introspective Variational Autoencoder, and a Simple Framework for Contrastive Learning of Visual Representations (SimCLR). After the initial pretraining, fine-tuning was performed for the collected veterinary dataset using 20% of the available data. Next, a latent space exploration was performed for each model, after which the encoding part of the model was fine-tuned again, this time in a supervised manner for classification. The Simple Framework for Contrastive Learning of Visual Representations proved to be the most beneficial pretraining method; therefore, experiments with various fine-tuning methods were carried out for this method. We achieved mean ROC AUC scores of 0.77 and 0.66 for the laterolateral and dorsoventral projection datasets, respectively. The results show a significant improvement compared to using the model without any pretraining approach.
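
A sketch of the contrastive (SimCLR-style) pretraining objective that proved most beneficial: the NT-Xent loss over two augmented views of each image. This is a generic formulation, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    """z1, z2: (batch, dim) projections of two augmented views of the same images."""
    batch = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2B, dim)
    similarity = z @ z.t() / temperature                       # (2B, 2B) logits
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    similarity = similarity.masked_fill(mask, float("-inf"))   # drop self-pairs
    # The positive of sample i is its other view: i + B for i < B, i - B otherwise.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(z.device)
    return F.cross_entropy(similarity, targets)
```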


Subjects
Deep Learning; Humans; Animals; Dogs; Radiography; Databases, Factual; Investments; Knowledge; Supervised Machine Learning
12.
J Pathol Inform ; 14: 100183, 2023.
Article in English | MEDLINE | ID: mdl-36687531

ABSTRACT

Computational pathology targets the automatic analysis of Whole Slide Images (WSIs). WSIs are high-resolution digitized histopathology images, stained with chemical reagents to highlight specific tissue structures and scanned with whole slide scanners. The application of different parameters during WSI acquisition may lead to stain color heterogeneity, especially considering samples collected from several medical centers. Stain color heterogeneity often limits the robustness of methods developed to analyze WSIs, in particular Convolutional Neural Networks (CNNs), the state-of-the-art algorithms for most computational pathology tasks. Stain color heterogeneity is still an unsolved problem, although several methods have been developed to alleviate it, such as Hue-Saturation-Contrast (HSC) color augmentation and stain augmentation methods. The goal of this paper is to present Data-Driven Color Augmentation (DDCA), a method to improve the efficiency of color augmentation methods by increasing the reliability of the samples used for training computational pathology models. During CNN training, a database including over 2 million H&E color variations collected from private and public datasets is used as a reference to discard augmented data with color distributions that do not correspond to realistic data. DDCA is applied to HSC color augmentation, stain augmentation and H&E-adversarial networks in colon and prostate cancer classification tasks. DDCA is then compared with 11 state-of-the-art baseline methods for handling color heterogeneity, showing that it can substantially improve classification performance on unseen data including heterogeneous color variations.
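
A toy sketch of the filtering idea behind DDCA: an augmented patch is kept only if its color statistics fall within the range observed in a reference database of realistic H&E stain variations. The statistic (per-channel RGB mean) and threshold are simplified assumptions, not the paper's actual criterion.

```python
import numpy as np

def is_realistic(augmented_patch: np.ndarray,
                 reference_means: np.ndarray, n_std: float = 3.0) -> bool:
    """augmented_patch: (H, W, 3) RGB; reference_means: (N, 3) per-image RGB means
    collected from the reference H&E database."""
    patch_mean = augmented_patch.reshape(-1, 3).mean(axis=0)
    mu = reference_means.mean(axis=0)
    sigma = reference_means.std(axis=0) + 1e-8
    return bool(np.all(np.abs(patch_mean - mu) <= n_std * sigma))
```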

13.
J Pathol Inform ; 14: 100332, 2023.
Article in English | MEDLINE | ID: mdl-37705689

ABSTRACT

Computational pathology can significantly benefit from ontologies to standardize the employed nomenclature and help with knowledge extraction processes for high-quality annotated image datasets. The end goal is to reach a shared model for digital pathology to overcome data variability and integration problems. Indeed, data annotation in such a specific domain is still an unsolved challenge and datasets cannot be steadily reused in diverse contexts due to heterogeneity issues of the adopted labels, multilingualism, and different clinical practices. Material and methods: This paper presents the ExaMode ontology, modeling the histopathology process by considering 3 key cancer diseases (colon, cervical, and lung tumors) and celiac disease. The ExaMode ontology has been designed bottom-up in an iterative fashion with continuous feedback and validation from pathologists and clinicians. The ontology is organized into 5 semantic areas that define an ontological template to model any disease of interest in histopathology. Results: The ExaMode ontology is currently being used as a common semantic layer in: (i) an entity linking tool for the automatic annotation of medical records; (ii) a web-based collaborative annotation tool for histopathology text reports; and (iii) a software platform for building holistic solutions integrating multimodal histopathology data. Discussion: The ExaMode ontology is a key means to store data in a graph database according to the RDF data model. The creation of an RDF dataset can help develop more accurate algorithms for image analysis, especially in the field of digital pathology. This approach allows for seamless data integration and a unified query access point, from which relevant clinical insights about the considered diseases can be extracted using SPARQL queries.
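
A hedged sketch of the access pattern described: once reports are stored as RDF according to the ontology, clinical insights can be retrieved with SPARQL (here via rdflib). The namespace, predicate name, and file path are illustrative assumptions, not the actual ExaMode schema.

```python
from rdflib import Graph

graph = Graph()
graph.parse("examode_records.ttl", format="turtle")   # hypothetical RDF dump

query = """
PREFIX exa: <https://example.org/examode#>
SELECT ?report ?diagnosis WHERE {
    ?report exa:hasDiagnosis ?diagnosis .
}
"""
for report, diagnosis in graph.query(query):
    print(report, diagnosis)
```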

14.
Stud Health Technol Inform ; 180: 828-32, 2012.
Article in English | MEDLINE | ID: mdl-22874308

ABSTRACT

Currently, trans-radial amputees can only perform a few simple movements with prosthetic hands. This is mainly due to low control capabilities and the long training time required to learn to control them with surface electromyography (sEMG). This is in contrast with recent advances in mechatronics, thanks to which mechanical hands have multiple degrees of freedom and, in some cases, force control. To help improve the situation, we are building the NinaPro (Non-Invasive Adaptive Prosthetics) database, a database of about 50 hand and wrist movements recorded from several healthy subjects and, currently, very few amputees, which will help the community to test and improve sEMG-based natural control systems for prosthetic hands. In this paper we describe the experimental experiences and practical aspects related to the data acquisition.
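
A hedged sketch of how NinaPro acquisitions are typically loaded for analysis: the database distributes MATLAB files whose variables (assumed here to include 'emg' and 'stimulus', i.e., the movement label over time) can be read with SciPy. Field names should be checked against the specific NinaPro release being used.

```python
from scipy.io import loadmat

def load_ninapro_file(path: str):
    """Load sEMG signals and movement labels from one NinaPro .mat acquisition."""
    data = loadmat(path)
    emg = data["emg"]            # (n_samples, n_electrodes) sEMG signals
    labels = data["stimulus"]    # (n_samples, 1) movement id, 0 = rest
    return emg, labels
```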


Subjects
Amputees/rehabilitation; Databases, Factual; Electromyography/statistics & numerical data; Hand/physiopathology; Movement; Muscle, Skeletal/physiopathology; Wrist/physiopathology; Adult; Hand/surgery; Humans; Information Storage and Retrieval/methods; Male; Muscle Contraction
15.
Front Neurosci ; 16: 732156, 2022.
Article in English | MEDLINE | ID: mdl-35720729

ABSTRACT

Muscle synergies have been largely used in many application fields, including motor control studies, prosthesis control, movement classification, rehabilitation, and clinical studies. Due to the complexity of the motor control system, the full repertoire of the underlying synergies has been identified only for some classes of movements and scenarios. Several extraction methods have been used to extract muscle synergies. However, some of these methods may not effectively capture the nonlinear relationship between muscles and impose constraints on input signals or extracted synergies. Moreover, other approaches such as autoencoders (AEs), an unsupervised neural network, were recently introduced to study bioinspired control and movement classification. In this study, we evaluated the performance of five methods for the extraction of spatial muscle synergies, namely principal component analysis (PCA), independent component analysis (ICA), factor analysis (FA), nonnegative matrix factorization (NMF), and AEs, using simulated data and a publicly available database. To analyze the performance of the considered extraction methods with respect to several factors, we generated a comprehensive set of simulated data (ground truth), including spatial synergies and temporal coefficients. The signal-to-noise ratio (SNR) and the number of channels (NoC) were varied when generating simulated data to evaluate their effects on ground truth reconstruction. This study also tested the efficacy of each synergy extraction method when coupled with standard classification methods, including K-nearest neighbors (KNN), linear discriminant analysis (LDA), support vector machines (SVM), and random forests (RF). The results showed that both SNR and NoC affected the outputs of the muscle synergy analysis. Although AEs showed better performance than FA in variance accounted for and better than PCA in synergy vector similarity and activation coefficient similarity, NMF and ICA outperformed the other three methods. Classification tasks showed that classification algorithms were sensitive to the synergy extraction method, and KNN and RF outperformed the other two methods for all extraction methods; in general, the classification accuracy of NMF and PCA was higher. Overall, the results suggest selecting suitable methods when performing muscle synergy-related analysis.
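
A sketch of one arm of such a comparison: spatial synergies are extracted with one method (NMF here), the EMG is projected onto them, and the resulting activations are classified with one of the tested classifiers. The full study repeats this for PCA, ICA, FA, and autoencoders and for several classifiers; the pipeline below is a simplified assumption.

```python
from sklearn.decomposition import NMF
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def synergy_then_classify(emg_envelopes, movement_labels, n_synergies=4):
    """emg_envelopes: (n_samples, n_muscles); movement_labels: (n_samples,)."""
    nmf = NMF(n_components=n_synergies, init="nndsvda", max_iter=1000)
    activations = nmf.fit_transform(emg_envelopes)   # per-sample synergy coefficients
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, activations, movement_labels, cv=5).mean()
```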

16.
J Pathol Inform ; 13: 100139, 2022.
Article in English | MEDLINE | ID: mdl-36268087

ABSTRACT

Exa-scale volumes of medical data have been produced for decades. In most cases, the diagnosis is reported in free text, encoding medical knowledge that is still largely unexploited. To allow the medical knowledge included in reports to be decoded, we propose an unsupervised knowledge extraction system combining a rule-based expert system with pre-trained Machine Learning (ML) models, namely the Semantic Knowledge Extractor Tool (SKET). Combining rule-based techniques and pre-trained ML models provides highly accurate results for knowledge extraction. This work demonstrates the viability of unsupervised Natural Language Processing (NLP) techniques to extract critical information from cancer reports, opening opportunities such as data mining for knowledge extraction purposes, precision medicine applications, structured report creation, and multimodal learning. SKET is a practical and unsupervised approach to extracting knowledge from pathology reports, which opens up unprecedented opportunities to exploit textual and multimodal medical information in clinical practice. We also propose SKET eXplained (SKET X), a web-based system providing visual explanations of the algorithmic decisions taken by SKET. SKET X is designed and developed to support pathologists and domain experts in understanding SKET predictions, possibly driving further improvements to the system.
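
A toy sketch of the rule-based half of such a pipeline: a small dictionary of concepts and synonyms matched against a free-text report. The concept list and synonyms are made up for illustration; SKET itself combines this kind of matching with pre-trained ML models.

```python
import re

CONCEPT_SYNONYMS = {                     # hypothetical concept dictionary
    "adenocarcinoma": ["adenocarcinoma"],
    "high-grade dysplasia": ["high grade dysplasia", "high-grade dysplasia"],
    "hyperplastic polyp": ["hyperplastic polyp"],
}

def extract_concepts(report_text: str) -> set:
    """Return the set of known concepts mentioned in a free-text report."""
    text = report_text.lower()
    return {concept for concept, synonyms in CONCEPT_SYNONYMS.items()
            if any(re.search(r"\b" + re.escape(s) + r"\b", text) for s in synonyms)}
```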

17.
NPJ Digit Med ; 5(1): 102, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35869179

ABSTRACT

The digitalization of clinical workflows and the increasing performance of deep learning algorithms are paving the way towards new methods for tackling cancer diagnosis. However, the availability of medical specialists to annotate digitized images and free-text diagnostic reports does not scale with the need for large datasets required to train robust computer-aided diagnosis methods that can target the high variability of clinical cases and data produced. This work proposes and evaluates an approach to eliminate the need for manual annotations to train computer-aided diagnosis tools in digital pathology. The approach includes two components: the automatic extraction of semantically meaningful concepts from diagnostic reports, and their use as weak labels to train convolutional neural networks (CNNs) for histopathology diagnosis. The approach is trained (through 10-fold cross-validation) on 3,769 clinical images and reports, provided by two hospitals, and tested on over 11,000 images from private and publicly available datasets. The CNN, trained with automatically generated labels, is compared with the same architecture trained with manual labels. Results show that combining text analysis and end-to-end deep neural networks allows building computer-aided diagnosis tools that reach solid performance (micro-accuracy = 0.908 at image-level) based only on existing clinical data, without the need for manual annotations.
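
A small sketch of the label-generation step that removes manual annotation: each image inherits, as a weak label, the diagnostic concept automatically extracted from its associated report. The concept-to-class mapping is an illustrative assumption.

```python
CLASS_INDEX = {"normal": 0, "low-grade dysplasia": 1,
               "high-grade dysplasia": 2, "adenocarcinoma": 3}   # hypothetical classes

def weak_labels_from_reports(image_ids, report_concepts):
    """report_concepts: dict mapping image id -> concept string extracted from its report."""
    return {image_id: CLASS_INDEX[report_concepts[image_id]]
            for image_id in image_ids
            if report_concepts.get(image_id) in CLASS_INDEX}
```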

18.
Med Image Anal ; 73: 102165, 2021 10.
Article in English | MEDLINE | ID: mdl-34303169

ABSTRACT

Convolutional neural networks (CNNs) are state-of-the-art computer vision techniques for various tasks, particularly for image classification. However, there are domains where the training of classification models that generalize over several datasets is still an open challenge because of the highly heterogeneous data and the lack of large datasets with local annotations of the regions of interest, such as histopathology image analysis. Histopathology concerns the microscopic analysis of tissue specimens processed on glass slides to identify diseases such as cancer. Digital pathology concerns the acquisition, management and automatic analysis of digitized histopathology images, which are large, on the order of 100,000 x 100,000 pixels per image. Digital histopathology images are highly heterogeneous due to the variability of the image acquisition procedures. Creating locally labeled regions (required for training) is time-consuming and often expensive in the medical field, as physicians usually have to annotate the data. Despite the advances in deep learning, leveraging strongly and weakly annotated datasets to train classification models is still an unsolved problem, mainly when data are very heterogeneous. Large amounts of data are needed to create models that generalize well. This paper presents a novel approach to train CNNs that generalize to heterogeneous datasets originating from various sources and without local annotations. The data analysis pipeline targets Gleason grading on prostate images and includes two models in sequence, following a teacher/student training paradigm. The teacher model (a high-capacity neural network) automatically annotates a set of pseudo-labeled patches used to train the student model (a smaller network). The two models are trained with two different teacher/student approaches: semi-supervised learning and semi-weakly supervised learning. For each of the two approaches, three student training variants are presented. The baseline is provided by training the student model only with the strongly annotated data. Classification performance is evaluated on the student model at the patch level (using the local annotations of the Tissue Micro-Arrays Zurich dataset) and at the global level (using the TCGA-PRAD, The Cancer Genome Atlas-PRostate ADenocarcinoma, whole slide image Gleason score). The teacher/student paradigm allows the models to better generalize on both datasets, despite the inter-dataset heterogeneity and the small number of local annotations used. The classification performance is improved at the patch level (up to κ=0.6127±0.0133 from κ=0.5667±0.0285), at the TMA core level (Gleason score) (up to κ=0.7645±0.0231 from κ=0.7186±0.0306) and at the WSI level (Gleason score) (up to κ=0.4529±0.0512 from κ=0.2293±0.1350). The results show that with the teacher/student paradigm, it is possible to train models that generalize on datasets from entirely different sources, despite the inter-dataset heterogeneity and the lack of large datasets with local annotations.
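
A sketch of the semi-supervised teacher/student step described above: the teacher's confident predictions on unlabeled patches become pseudo-labels for training the smaller student. The confidence threshold and model/loader definitions are illustrative assumptions.

```python
import torch

@torch.no_grad()
def pseudo_label(teacher, unlabeled_loader, confidence_threshold=0.9):
    """Collect patches whose teacher prediction is confident enough to train the student."""
    teacher.eval()
    patches, labels = [], []
    for batch in unlabeled_loader:            # batch may be (images,) or images
        images = batch[0] if isinstance(batch, (list, tuple)) else batch
        probs = torch.softmax(teacher(images), dim=1)
        confidence, predicted = probs.max(dim=1)
        keep = confidence >= confidence_threshold
        patches.append(images[keep])
        labels.append(predicted[keep])
    return torch.cat(patches), torch.cat(labels)   # pseudo-labeled training set
```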


Subjects
Neural Networks, Computer; Prostatic Neoplasms; Humans; Male; Neoplasm Grading; Prostatic Neoplasms/diagnostic imaging; Supervised Machine Learning
19.
Front Artif Intell ; 4: 744476, 2021.
Article in English | MEDLINE | ID: mdl-35146422

ABSTRACT

The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although a large number of control approaches have been developed and investigated in the last decades, limited robustness in real-life conditions has often prevented their application in clinical settings and in commercial products. In this paper, we investigate a multimodal approach that exploits eye-hand coordination to improve the control of myoelectric hand prostheses. The analyzed data are from the publicly available MeganePro Dataset 1, which includes multimodal data from transradial amputees and able-bodied subjects while grasping numerous household objects with ten grasp types. A continuous grasp-type classification based on surface electromyography served as both intent detector and classifier. At the same time, the information provided by eye-hand coordination parameters, gaze data and object recognition in first-person videos allowed identification of the object a person aims to grasp. The results show that the inclusion of visual information significantly increases the average offline classification accuracy, by up to 15.61 ± 4.22% for the transradial amputees and by up to 7.37 ± 3.52% for the able-bodied subjects, allowing transradial amputees to reach an average classification accuracy comparable to that of intact subjects. This suggests that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved by including visual information extracted by leveraging natural eye-hand coordination behavior, without placing additional cognitive burden on the user.
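
An illustrative sketch (not the paper's exact fusion rule) of how visual context can re-weight the sEMG classifier output: grasp-type probabilities are multiplied by a prior over grasps conditioned on the object identified from gaze and first-person video.

```python
import numpy as np

def fuse_emg_and_vision(emg_probs: np.ndarray,
                        object_grasp_prior: np.ndarray) -> int:
    """emg_probs, object_grasp_prior: (n_grasp_types,) probability vectors."""
    fused = emg_probs * object_grasp_prior
    fused = fused / fused.sum() if fused.sum() > 0 else emg_probs
    return int(np.argmax(fused))              # predicted grasp type
```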

20.
Sci Rep ; 11(1): 3964, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33597566

ABSTRACT

The interpretation of thoracic radiographs is a challenging and error-prone task for veterinarians. Despite recent advancements in machine learning and computer vision, the development of computer-aided diagnostic systems for radiographs remains a challenging and unsolved problem, particularly in the context of veterinary medicine. In this study, a novel method based on a multi-label deep convolutional neural network (CNN) was developed for the classification of thoracic radiographs in dogs. All thoracic radiographs of dogs acquired at the institution between 2010 and 2020 were retrospectively collected. Radiographs were taken with two different radiograph acquisition systems and were divided into two data sets accordingly. One data set (Data Set 1) was used for training and testing, and another data set (Data Set 2) was used to test the generalization ability of the CNNs. The radiographic findings used as non-mutually exclusive labels to train the CNNs were: unremarkable, cardiomegaly, alveolar pattern, bronchial pattern, interstitial pattern, mass, pleural effusion, pneumothorax, and megaesophagus. Two different CNNs, based on the ResNet-50 and DenseNet-121 architectures respectively, were developed and tested. The CNN based on ResNet-50 had an Area Under the Receiver Operating Characteristic Curve (AUC) above 0.8 for all the included radiographic findings except for the bronchial and interstitial patterns, on both Data Set 1 and Data Set 2. The CNN based on DenseNet-121 had a lower overall performance. Statistically significant differences in generalization ability between the two CNNs were evident, with the CNN based on ResNet-50 showing better performance for alveolar pattern, interstitial pattern, megaesophagus, and pneumothorax.
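
A minimal PyTorch sketch of the multi-label setup described above: a ResNet-50 backbone with a 9-output head (one logit per radiographic finding) trained with a binary cross-entropy loss, since the labels are not mutually exclusive. Weight initialization and the helper name are assumptions.

```python
import torch.nn as nn
from torchvision import models

N_FINDINGS = 9   # unremarkable, cardiomegaly, alveolar pattern, ..., megaesophagus

def build_multilabel_resnet50() -> nn.Module:
    """ResNet-50 with one independent (sigmoid) output per radiographic finding."""
    model = models.resnet50(weights="DEFAULT")
    model.fc = nn.Linear(model.fc.in_features, N_FINDINGS)
    return model

criterion = nn.BCEWithLogitsLoss()   # one sigmoid/BCE term per finding
```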


Subjects
Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Thoracic/classification; Animals; Cardiomegaly/diagnostic imaging; Deep Learning; Dogs; Lung/cytology; Lung/diagnostic imaging; Machine Learning; Neural Networks, Computer; Radiography/classification; Retrospective Studies