Results 1 - 11 of 11
2.
Methods ; 218: 110-117, 2023 10.
Article in English | MEDLINE | ID: mdl-37543302

ABSTRACT

Deep learning has brought significant progress to medical image analysis. However, its lack of interpretability carries a high risk of wrong diagnoses when little clinical knowledge is embedded in the models. In other words, we believe it is crucial for humans to interpret how deep learning works in medical analysis, so that knowledge constraints can be added appropriately to correct the bias behind wrong results. With this purpose, we propose the Representation Group-Disentangling Network (RGD-Net) to explain the processes of feature extraraction and decision making inside a deep learning framework: the feature space of input X-ray images is completely disentangled into independent feature groups, and each group contributes to the diagnosis of a specific disease. Specifically, we first state the problem definition for interpretable prediction with an auto-encoder structure. Then, group-disentangled representations are extracted from input X-ray images with the proposed Group-Disentangle Module, which constructs a semantic latent space by enforcing semantic consistency of attributes. Afterwards, adversarial constraints on the mapping from features to diseases are introduced to prevent model collapse during training. Finally, a novel locally tuned medical application is designed on top of RGD-Net, which can aid clinicians in reaching reasonable diagnoses. Extensive experiments on public datasets show that RGD-Net is superior to comparative methods by leveraging the potential factors contributing to different diseases. We believe our work brings interpretability to uncovering the inherent patterns deep learning learns in medical image analysis.
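The abstract describes the group-disentangling idea only at a high level. As a rough illustration, not the authors' implementation, the PyTorch sketch below splits an auto-encoder latent vector into independent groups, each feeding its own disease-specific head; all layer sizes, module names, and the toy input are assumptions.

```python
# Minimal sketch of a group-disentangled auto-encoder with per-group disease heads.
import torch
import torch.nn as nn

class GroupDisentangledAE(nn.Module):
    def __init__(self, in_dim=1024, group_dim=32, num_groups=4):
        super().__init__()
        latent_dim = group_dim * num_groups
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        # one binary classifier head per feature group / disease
        self.heads = nn.ModuleList(nn.Linear(group_dim, 1) for _ in range(num_groups))
        self.num_groups = num_groups

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z)
        groups = torch.chunk(z, self.num_groups, dim=1)   # independent feature groups
        logits = torch.cat([head(g) for head, g in zip(self.heads, groups)], dim=1)
        return recon, logits

x = torch.randn(8, 1024)                  # stand-in for flattened X-ray features
model = GroupDisentangledAE()
recon, logits = model(x)
loss = nn.functional.mse_loss(recon, x) + \
       nn.functional.binary_cross_entropy_with_logits(logits, torch.rand(8, 4).round())
loss.backward()
```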


Subject(s)
Oligopeptides , Semantics , Humans
3.
Article in English | MEDLINE | ID: mdl-37030846

ABSTRACT

Deep learning methods have achieved great success in the medical image analysis domain. However, most of them suffer from slow convergence and high computational cost, which prevents their wider use in practical scenarios. Moreover, it has been shown that exploring and embedding context knowledge in deep networks can significantly improve accuracy. Motivated by these observations, we present CDT-CAD, i.e., context-aware deformable transformers for end-to-end chest abnormality detection on X-ray images. CDT-CAD first constructs an iterative context-aware feature extractor, which not only enlarges receptive fields to encode multi-scale context information via dilated context encoding blocks, but also captures unique and scalable feature variation patterns in the wavelet frequency domain via frequency pooling blocks. Afterwards, a deformable transformer detector is built on the extracted context features to accurately classify disease categories and locate abnormal regions; only a small set of key points is sampled, which leads the detector to focus on an informative feature subspace and accelerates convergence. In comparative experiments on the VinBig Chest and Chest Det 10 datasets, CDT-CAD demonstrates its effectiveness in recognizing chest abnormalities, outperforming existing methods by 1.4% in AP50 and 6.0% in AR on the VinBig dataset, and by 0.9% and 2.1%, respectively, on the Chest Det-10 dataset.
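As a hedged sketch of what a dilated context encoding block can look like in general (the exact CDT-CAD design is not given in the abstract), the snippet below uses parallel 3x3 convolutions with increasing dilation rates to enlarge the receptive field and fuse multi-scale context; channel counts and dilation rates are illustrative assumptions.

```python
# Parallel dilated convolutions followed by 1x1 fusion and a residual connection.
import torch
import torch.nn as nn

class DilatedContextBlock(nn.Module):
    def __init__(self, channels=64, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates)
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.relu(self.fuse(multi_scale)) + x   # residual fusion

feat = torch.randn(1, 64, 128, 128)       # a feature map from some backbone
print(DilatedContextBlock()(feat).shape)  # torch.Size([1, 64, 128, 128])
```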

4.
IEEE/ACM Trans Comput Biol Bioinform ; 20(4): 2518-2529, 2023.
Article in English | MEDLINE | ID: mdl-37097792

ABSTRACT

Modern healthcare cyber-physical systems have begun to rely more and more on distributed AI leveraging the power of Federated Learning (FL). Its ability to train Machine Learning (ML) and Deep Learning (DL) models for a wide variety of medical fields, while at the same time protecting the privacy of the sensitive information present in the medical sector, makes FL a necessary tool in modern health and medical systems. Unfortunately, due to the polymorphy of distributed data and the shortcomings of distributed learning, the local training of federated models sometimes proves inadequate, which negatively affects the federated optimization process and, in turn, the subsequent performance of the remaining federated models. Badly trained models can have dire implications in healthcare due to the critical nature of the field. This work strives to solve this problem by applying a post-processing pipeline to the models used by FL. In particular, the proposed work ranks the models by how fair they are, by discovering and inspecting micro-manifolds that cluster each neural model's latent knowledge. It applies a completely unsupervised, model- and data-agnostic methodology that can be leveraged for general model fairness discovery. The proposed methodology is tested against a variety of benchmark DL architectures and in the FL environment, showing an average 8.75% increase in federated model accuracy in comparison with similar work.
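The abstract does not spell out how the micro-manifold ranking works. Purely as an assumed illustration of the general idea, the sketch below clusters a model's latent representations and uses the spread of per-cluster accuracy as a fairness proxy, ranking lower spread as fairer; the clustering algorithm, cluster count, and scoring rule are all assumptions, not the paper's method.

```python
# Cluster latent features and score a model by how uneven its per-cluster accuracy is.
import numpy as np
from sklearn.cluster import KMeans

def fairness_score(latents, preds, labels, n_clusters=10, seed=0):
    """Smaller spread of per-cluster accuracy -> 'fairer' latent space."""
    clusters = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(latents)
    accs = []
    for c in range(n_clusters):
        mask = clusters == c
        if mask.any():
            accs.append((preds[mask] == labels[mask]).mean())
    return float(np.std(accs))

rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 16))                  # latent features from one local model
labels = rng.integers(0, 2, size=500)
preds = labels.copy(); preds[:50] = 1 - preds[:50]    # a 90%-accurate toy model
print("fairness proxy:", fairness_score(latents, preds, labels))
```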


Subject(s)
Benchmarking , Machine Learning , Delivery of Health Care
5.
IEEE J Biomed Health Inform ; 27(10): 5110-5121, 2023 10.
Article in English | MEDLINE | ID: mdl-37018727

ABSTRACT

Automatic generation of medical reports can provide diagnostic assistance to doctors and reduce their workload. To improve the quality of generated medical reports, previous methods widely inject auxiliary information into the model through knowledge graphs or templates. However, they suffer from two problems: 1) the injected external information is limited in amount and can hardly meet the content needs of medical report generation; 2) the injected external information increases the complexity of the model and is hard to integrate reasonably into the generation process. Therefore, we propose an Information Calibrated Transformer (ICT) to address these issues. First, we design a Precursor-information Enhancement Module (PEM), which effectively extracts numerous inter- and intra-report features from the datasets as auxiliary information, without external injection, and this auxiliary information is dynamically updated during training. Second, a combination of PEM and our proposed Information Calibration Attention Module (ICA) is designed and embedded into ICT. In this way, the auxiliary information extracted by PEM is flexibly injected into ICT with only a small increase in model parameters. Comprehensive evaluations validate that ICT is not only superior to previous methods on the X-ray datasets IU-X-Ray and MIMIC-CXR, but can also be successfully extended to the CT COVID-19 dataset COV-CTR.
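For readers unfamiliar with how auxiliary features can be injected into a transformer with few extra parameters, the snippet below is a minimal, assumed sketch of the general mechanism: decoder states attend over a bank of auxiliary report features and the result is added residually. It is not the ICT/PEM/ICA code; all shapes and names are illustrative.

```python
# Cross-attention from decoder states to an auxiliary feature bank, fused residually.
import torch
import torch.nn as nn

d_model = 256
calib_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

decoder_states = torch.randn(2, 40, d_model)   # 2 reports, 40 tokens being generated
aux_memory = torch.randn(2, 64, d_model)       # auxiliary inter-/intra-report features

calibrated, _ = calib_attn(query=decoder_states, key=aux_memory, value=aux_memory)
fused = decoder_states + calibrated            # residual injection of auxiliary information
print(fused.shape)                             # torch.Size([2, 40, 256])
```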


Asunto(s)
COVID-19 , Aprendizaje Profundo , Humanos , Calibración , Suministros de Energía Eléctrica , Conocimiento
6.
Sensors (Basel) ; 23(3)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36772734

ABSTRACT

The implementation of smart networks has made great progress due to the development of the Internet of Things (IoT). LoRa is one of the most prominent technologies in the IoT industry, primarily due to its ability to achieve long-distance transmission while consuming little power. In this work, we modeled different environments and assessed network performance by observing the effects of various factors and network parameters. The path loss model, the deployment area size, the transmission power, the spreading factor, the number of nodes and gateways, and the antenna gain have a significant effect on the main performance metrics, such as the energy consumption and the data extraction rate of a LoRa network. To examine these parameters, we performed simulations in OMNeT++ using the open-source framework FLoRa. The scenarios investigated in this work include the simulation of rural and urban environments and a parking area model. The results indicate that optimizing the key parameters can have a huge impact on the deployment of smart networks.
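To make concrete why the spreading factor and transmit power dominate LoRa energy consumption, the sketch below estimates packet airtime with the standard Semtech SX127x time-on-air formula and multiplies it by an assumed supply voltage and TX current; the numeric values are illustrative, not taken from the paper.

```python
# LoRa time-on-air and per-packet energy as a function of the spreading factor.
import math

def lora_airtime(payload_bytes, sf, bw=125e3, cr=1, preamble=8,
                 explicit_header=True, crc=True):
    t_sym = (2 ** sf) / bw
    de = 1 if (sf >= 11 and bw == 125e3) else 0        # low data-rate optimization
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym       # seconds

V, I_TX = 3.3, 0.120                                   # assumed 3.3 V supply, 120 mA TX current
for sf in (7, 9, 12):
    t = lora_airtime(20, sf)
    print(f"SF{sf}: airtime {t*1000:6.1f} ms, energy per packet {V*I_TX*t*1000:5.1f} mJ")
```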

7.
IEEE J Biomed Health Inform ; 27(4): 1701-1708, 2023 04.
Article in English | MEDLINE | ID: mdl-36126032

ABSTRACT

Colonic adenocarcinoma is a disease that severely endangers human life, caused by carcinogenesis of the mucosal epithelium. Segmentation of potentially cancerous glands is key to the detection and diagnosis of colonic adenocarcinoma. Cancerous tissue appears differently across glands in colon pathology images, and the changes of glands from benign to malignant cannot be segmented accurately with a single network. Given these issues, a two-path gland segmentation algorithm for colon pathology images based on local semantic guidance is proposed in this paper. An improved candidate region search algorithm is adopted to expand the original image dataset and generate sub-datasets sensitive to specific features. Then, a semantic feature-guided model is employed to extract local adenocarcinoma features, and it acts on the backbone network together with attention-based context feature extraction. In this way, a larger receptive field and more local feature information are obtained, the network's ability to learn the morphological features of glands is enhanced, and the performance of automatic gland segmentation is ultimately improved. The algorithm is verified on the Warwick-QU dataset. Compared with currently popular segmentation algorithms, our algorithm performs well in terms of Dice coefficient, F1 score, and Hausdorff distance on different types of test sets.
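For reference, the metrics reported above (Dice coefficient and Hausdorff distance) are typically computed as in the generic sketch below; this is standard evaluation code, not the paper's implementation, and the toy masks are placeholders.

```python
# Dice coefficient and symmetric Hausdorff distance for binary segmentation masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hausdorff(pred, gt):
    p = np.argwhere(pred); g = np.argwhere(gt)        # foreground pixel coordinates
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

gt = np.zeros((64, 64), dtype=np.uint8);  gt[20:40, 20:40] = 1   # toy gland mask
pred = np.zeros_like(gt);                 pred[22:42, 18:38] = 1
print("Dice:", round(dice(pred, gt), 3), " Hausdorff:", hausdorff(pred, gt))
```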


Asunto(s)
Adenocarcinoma , Semántica , Humanos , Algoritmos , Colon/diagnóstico por imagen
8.
IEEE J Biomed Health Inform ; 26(12): 5817-5828, 2022 12.
Article in English | MEDLINE | ID: mdl-34971545

ABSTRACT

In the era of smart cities, intelligent medical image recognition has become a promising way to support remote patient diagnosis in the Internet of Medical Things (IoMT). Although deep learning-based recognition approaches have developed greatly during the past decade, explainability remains a main obstacle to pushing them to higher levels, because the internal principles of deep learning models are hard to grasp clearly. In contrast, conventional machine learning (CML)-based methods are well explainable, as they give relatively clear meanings to their parameters. Motivated by this view, this paper combines deep learning with CML and proposes a hybrid intelligence-driven medical image recognition framework for IoMT. On the one hand, a convolutional neural network is utilized to extract deep, abstract features from the input images. On the other hand, CML-based techniques are employed to reduce the dimensionality of the extracted features and to construct a strong classifier that outputs the recognition results. A real dataset on pathologic myopia is selected to establish a simulation scenario for assessing the proposed recognition framework. Results reveal that the proposal improves recognition accuracy by about two to three percent.
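A hedged sketch of the hybrid pipeline described above: a CNN extracts deep features, then classical ML (here PCA for dimensionality reduction and an SVM classifier) forms the explainable decision stage. The tiny untrained CNN and random data are placeholders standing in for the pathologic myopia dataset and the paper's actual networks.

```python
# CNN feature extraction followed by a classical PCA + SVM decision stage.
import torch
import torch.nn as nn
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 16*4*4 = 256-d features

images = torch.randn(60, 3, 64, 64)                          # stand-in for fundus images
labels = np.random.randint(0, 2, size=60)                    # pathologic myopia yes/no
with torch.no_grad():
    deep_feats = cnn(images).numpy()

clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(deep_feats, labels)
print("train accuracy:", clf.score(deep_feats, labels))
```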


Subject(s)
Machine Learning , Neural Networks, Computer , Humans , Computer Simulation , Internet , Intelligence
9.
Neural Netw ; 125: 290-302, 2020 May.
Article in English | MEDLINE | ID: mdl-32151916

ABSTRACT

With the rapid development and wide application of computers, camera devices, networks, and hardware technology, 3D object (or model) retrieval has attracted widespread attention and has become a hot research topic in the computer vision domain. Deep learning features available for 3D object retrieval have been proven to outperform hand-crafted features in retrieval performance. However, most existing networks do not take into account the impact of multi-view image selection on network training, and using contrastive loss alone only forces same-class samples to be as close as possible. In this work, a novel solution named Multi-view Discrimination and Pairwise CNN (MDPCNN) for 3D object retrieval is proposed to tackle these issues. It can take multiple batches and multiple views as input simultaneously by adding a Slice layer and a Concat layer. Furthermore, a highly discriminative network is obtained by training on samples that are hard to classify, selected by clustering. Lastly, we deploy the contrastive-center loss and the contrastive loss as the optimization objective, which yields better intra-class compactness and inter-class separability. Large-scale experiments show that the proposed MDPCNN achieves a significant improvement over state-of-the-art algorithms in 3D object retrieval.
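The pairwise contrastive loss mentioned above has a standard textbook form, shown below in PyTorch; the contrastive-center loss that the paper combines it with is omitted here, and the margin value is an assumption.

```python
# Pairwise contrastive loss: pull same-class pairs together, push different-class pairs apart.
import torch
import torch.nn.functional as F

def contrastive_loss(feat_a, feat_b, same_class, margin=1.0):
    """same_class: 1.0 where the pair shares a label, 0.0 otherwise."""
    d = F.pairwise_distance(feat_a, feat_b)
    pos = same_class * d.pow(2)                              # same-class pairs: small distance
    neg = (1.0 - same_class) * F.relu(margin - d).pow(2)     # different-class pairs: at least margin apart
    return (pos + neg).mean()

a, b = torch.randn(16, 128), torch.randn(16, 128)            # paired view embeddings
y = torch.randint(0, 2, (16,)).float()
print(contrastive_loss(a, b, y))
```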


Subject(s)
Deep Learning/standards , Cluster Analysis , Pattern Recognition, Automated/methods
10.
IEEE J Biomed Health Inform ; 23(4): 1363-1373, 2019 07.
Article in English | MEDLINE | ID: mdl-30629519

ABSTRACT

Accurate and automatic organ segmentation is critical for computer-aided analysis towards clinical decision support and treatment planning. State-of-the-art approaches have achieved remarkable segmentation accuracy on large organs, such as the liver and kidneys. However, most of these methods do not perform well on small organs, such as the pancreas, gallbladder, and adrenal glands, especially when sufficient training data are lacking. This paper presents an automatic approach for small organ segmentation with limited training data using two cascaded steps: localization and segmentation. The localization stage extracts the region of interest after registering the images to a common template; during the segmentation stage, a voxel-wise label map of the extracted region of interest is obtained and then transformed back to the original space. In the localization step, we propose to utilize a graph-based groupwise image registration method to build the registration template, so as to minimize potential bias and avoid a fuzzy template. More importantly, a novel knowledge-aided convolutional neural network is proposed to improve segmentation accuracy in the second stage. The proposed network is flexible and can combine the strengths of both deep learning and traditional methods, consequently achieving better segmentation than either individual method. The ISBI 2015 VISCERAL challenge dataset is used to evaluate the presented approach. Experimental results demonstrate that the proposed method outperforms cutting-edge deep learning approaches, traditional forest-based approaches, and multi-atlas approaches in the segmentation of small organs.
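Schematically, the localize-then-segment pipeline can be pictured as in the sketch below, where registration and the knowledge-aided CNN are reduced to placeholders and only the crop/segment/paste-back bookkeeping is shown; nothing here reproduces the paper's actual models.

```python
# Two-stage small-organ segmentation skeleton: localize an ROI, segment it, map labels back.
import numpy as np

def localize_roi(volume, template_box):
    # placeholder for template registration: here the ROI box is simply given
    (z0, z1), (y0, y1), (x0, x1) = template_box
    return volume[z0:z1, y0:y1, x0:x1], template_box

def segment_roi(roi):
    # placeholder for the knowledge-aided CNN: a simple threshold stands in
    return (roi > roi.mean()).astype(np.uint8)

def segment_small_organ(volume, template_box):
    roi, box = localize_roi(volume, template_box)
    roi_labels = segment_roi(roi)
    labels = np.zeros(volume.shape, dtype=np.uint8)
    (z0, z1), (y0, y1), (x0, x1) = box
    labels[z0:z1, y0:y1, x0:x1] = roi_labels      # transform labels back to the original space
    return labels

ct = np.random.rand(64, 128, 128)                  # toy CT volume
mask = segment_small_organ(ct, ((20, 40), (30, 90), (40, 100)))
print(mask.sum(), "voxels labeled as organ")
```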


Subject(s)
Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Adrenal Glands/diagnostic imaging , Algorithms , Fuzzy Logic , Gallbladder/diagnostic imaging , Humans , Pancreas/diagnostic imaging , Tomography, X-Ray Computed
11.
Biomed Res Int ; 2016: 6183218, 2016.
Article in English | MEDLINE | ID: mdl-27127791

ABSTRACT

Intraoperative diagnosis of tumors and definition of tumor borders using fast histopathology is often not sufficiently informative, primarily due to alteration of the tissue architecture during the sample preparation step. Confocal laser endomicroscopy (CLE) provides microscopic information about tissue in real time at the cellular and subcellular levels, where tissue characterization is possible. One major challenge is to categorize these images reliably during surgery, as quickly as possible. To address this, we propose an automated tissue differentiation algorithm based on machine learning. During a training phase, a large number of image frames with known tissue types are analyzed and the most discriminant image-based signatures for the various tissue types are identified. During the procedure, the algorithm uses the learnt image features to assign a tissue type to each acquired image frame. We have verified this method on two types of brain tumors: glioblastoma and meningioma. The algorithm was trained using 117 image sequences containing over 27 thousand images captured from more than 20 patients, and achieved an average cross-validation accuracy of better than 83%. We believe this algorithm could be a useful component of an intraoperative pathology system for guiding the resection procedure based on cellular-level information.
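The train-and-cross-validate loop implied by the abstract can be sketched generically as below; the histogram signature and SVM classifier are illustrative stand-ins, not the paper's actual image signatures or model, and the data are synthetic.

```python
# Per-frame signature extraction and k-fold cross-validation of a tissue classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def frame_signature(frame, bins=32):
    hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0), density=True)
    return hist

rng = np.random.default_rng(1)
glioblastoma = rng.beta(2, 5, size=(100, 64, 64))     # toy CLE frames for one class
meningioma   = rng.beta(5, 2, size=(100, 64, 64))     # toy CLE frames for the other class
X = np.array([frame_signature(f) for f in np.concatenate([glioblastoma, meningioma])])
y = np.array([0] * 100 + [1] * 100)

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("cross-validation accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```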


Subject(s)
Brain Neoplasms/pathology , Microscopy, Confocal/methods , Microsurgery/methods , Neuroendoscopy/methods , Surgery, Computer-Assisted/methods , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/surgery , Humans , Image Interpretation, Computer-Assisted , Intravital Microscopy/methods , Machine Learning , Pattern Recognition, Automated , Reproducibility of Results , Sensitivity and Specificity