Results 1 - 5 of 5
1.
BMC Med Inform Decis Mak; 22(1): 122, 2022 May 4.
Article in English | MEDLINE | ID: mdl-35509058

ABSTRACT

Liver cancer is a malignant tumor with high morbidity and mortality and a tremendous negative impact on human survival. However, recognizing tens of thousands of liver cancer histopathology images by the naked eye is a challenging task, particularly for inexperienced clinicians, and the time-consuming, tedious work and huge number of images impose a great burden on clinical diagnosis. Our study therefore combines convolutional neural networks with histopathology images and adopts a feature fusion approach to help clinicians efficiently discriminate the differentiation types of primary hepatocellular carcinoma histopathology images, improving their diagnostic efficiency and relieving their workload. In this study, for the first time, tumors from 73 patients with different differentiation types of primary liver cancer were classified. We evaluated the classification of liver cancer differentiation types using four pre-trained deep convolutional neural networks and nine different machine learning (ML) classifiers on a dataset of liver cancer histopathology images with multiple differentiation types, comparing test set accuracy, validation set accuracy, running time under different strategies, precision, recall, and F1 value. Experimental results show that the fusion network (FuNet) structure is a good choice: it covers both channel attention and spatial attention, suppresses interference from less informative channels, and clarifies the importance of each spatial location by learning location-specific weights, which we apply to the classification of multi-differentiated types of liver cancer.
In addition, with the FuNet fusion strategy and after dimensionality reduction of the fused features by principal component analysis (PCA), the stacking-based ensemble learning classifier outperforms the other ML classifiers in most cases, achieving a satisfactory test set accuracy of 72.46%, which has practical value.
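The stacking idea described above can be sketched in a few lines. This is a hypothetical toy example (the base classifiers and meta-weights below are illustrative stand-ins, not the paper's FuNet pipeline or its nine actual ML classifiers): base learners each emit a prediction, and a meta-learner combines those predictions into the final label.

```python
# Minimal stacking sketch with two toy base classifiers and a
# weighted-vote meta-learner (all names and thresholds hypothetical).

def base_predict_a(x):
    # toy base classifier: thresholds the first feature
    return 1 if x[0] > 0.5 else 0

def base_predict_b(x):
    # toy base classifier: thresholds the mean of all features
    return 1 if sum(x) / len(x) > 0.5 else 0

def stack_predict(x, meta_weights):
    # meta-learner: weighted vote over the base-classifier outputs
    meta_features = [base_predict_a(x), base_predict_b(x)]
    score = sum(w * f for w, f in zip(meta_weights, meta_features))
    return 1 if score >= 0.5 else 0

sample = [0.9, 0.2, 0.4]
print(stack_predict(sample, meta_weights=[0.6, 0.4]))  # → 1
```

In a real stacking setup, the meta-learner's weights would themselves be fit on held-out base-classifier predictions rather than chosen by hand.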


Subjects
Carcinoma, Hepatocellular/pathology; Liver Neoplasms/pathology; Neural Networks, Computer; Carcinoma, Hepatocellular/diagnostic imaging; Humans; Liver Neoplasms/diagnostic imaging; Machine Learning
2.
Curr Med Imaging; 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38415461

ABSTRACT

BACKGROUND: Multimodal medical image fusion currently suffers from problems such as loss of texture detail, which blurs edge contours, and loss of image energy, which reduces contrast. OBJECTIVE: To solve these problems and obtain higher-quality fused images, this study proposes an image fusion method based on local saliency energy and multi-scale fractal dimension. METHODS: First, using a non-subsampled contourlet transform, the medical image was decomposed into 4 layers of high-pass sub-bands and 1 low-pass sub-band. Second, to fuse the high-pass sub-bands of layers 2 to 4, fusion rules based on a multi-scale morphological gradient and an activity measure were used as external stimuli in a pulse-coupled neural network. Third, fusion rules based on the improved multi-scale fractal dimension and a new local saliency energy were proposed for the low-pass sub-band and for the 1st-layer high-pass sub-band closest to the low-pass sub-band, respectively. Lastly, the fused image was created by performing the inverse non-subsampled contourlet transform on the fused sub-bands. RESULTS: On three multimodal medical image datasets, the proposed method was compared with 7 other fusion methods using 5 common objective evaluation metrics. CONCLUSION: Experiments showed that this method preserves the contrast and edges of the fused image well and is strongly competitive in both subjective and objective evaluation.
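A local-energy fusion rule of the general kind used above can be illustrated very simply. This sketch is not the paper's saliency-energy rule: it is a generic baseline that, for each coefficient, keeps the source whose 3x3 neighbourhood has the higher energy (sum of squares).

```python
# Toy local-energy fusion rule over two same-sized 2-D coefficient maps
# (a simplified stand-in for the paper's local-saliency-energy rule).

def local_energy(img, r, c):
    # sum of squared coefficients in the 3x3 window around (r, c)
    h, w = len(img), len(img[0])
    total = 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                total += img[rr][cc] ** 2
    return total

def fuse(img_a, img_b):
    # per-coefficient selection: keep the source with higher local energy
    h, w = len(img_a), len(img_a[0])
    return [[img_a[r][c]
             if local_energy(img_a, r, c) >= local_energy(img_b, r, c)
             else img_b[r][c]
             for c in range(w)] for r in range(h)]

a = [[9, 9, 0, 0], [9, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
b = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9]]
fused = fuse(a, b)
print(fused[0][0], fused[3][3])  # → 9 9
```

The fused map keeps the salient block from each source; the paper's actual method additionally works per sub-band of the contourlet decomposition and uses a more elaborate saliency measure.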

3.
J Cancer Res Clin Oncol; 149(7): 3287-3299, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35918465

ABSTRACT

PURPOSE: Skin cancer is one of the ten most common cancer types in the world, and early diagnosis and treatment can effectively reduce patient mortality, so developing an intelligent diagnosis system for skin cancer is of great significance. At present, most intelligent skin cancer diagnosis systems use only skin image data; multi-modal cross-fusion analysis combining image data with patient clinical data remains limited. To further explore the complementary relationship between image data and patient clinical data, we propose the multimodal data fusion diagnosis network (MDFNet), a data-fusion framework for skin cancer. METHODS: MDFNet establishes an effective mapping among heterogeneous data features and fuses clinical skin images with patient clinical data, solving the feature paucity and insufficient feature richness of single-mode data. RESULTS: The proposed smart skin cancer diagnosis model achieves an accuracy of 80.42%, an improvement of about 9% over the model using only medical images, effectively confirming the fusion advantage of MDFNet. CONCLUSIONS: MDFNet can serve as an effective auxiliary diagnostic tool for skin cancer, helping physicians improve clinical decision-making and the efficiency of clinical diagnosis; moreover, its data fusion method fully exploits the advantage of information convergence and offers a useful reference for the intelligent diagnosis of many clinical diseases.
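The baseline form of fusing heterogeneous features, as described above, is to map clinical fields into a numeric vector and join it with the image feature vector. This sketch assumes hypothetical clinical fields (age, sex, lesion site) and a plain concatenation; MDFNet's actual mapping is a learned one.

```python
# Illustrative multimodal fusion by concatenation (field names and
# category lists are hypothetical examples, not MDFNet's schema).

def one_hot(value, categories):
    # encode a categorical field as a 0/1 vector
    return [1.0 if value == c else 0.0 for c in categories]

def fuse_features(image_features, age, sex, site):
    clinical = [age / 100.0]                      # scale age to roughly [0, 1]
    clinical += one_hot(sex, ["male", "female"])
    clinical += one_hot(site, ["head", "trunk", "limb"])
    return image_features + clinical              # joint feature vector

vec = fuse_features([0.12, 0.87, 0.45], age=54, sex="female", site="trunk")
print(len(vec))  # → 9 (3 image features + 6 encoded clinical features)
```

A downstream classifier trained on such joint vectors sees both modalities at once, which is the complementarity the paper exploits with its learned fusion instead of this simple concatenation.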


Subjects
Physicians; Skin Neoplasms; Humans; Skin Neoplasms/diagnostic imaging; Skin/diagnostic imaging; Clinical Decision-Making; Reference Values
4.
Med Phys; 50(3): 1507-1527, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36272103

ABSTRACT

BACKGROUND: Esophageal cancer has become one of the cancers that most seriously threaten human life and health, and its incidence and mortality rates remain among the highest of malignant tumors. Histopathological image analysis is the gold standard for diagnosing the different differentiation types of esophageal cancer. PURPOSE: The grading accuracy and interpretability of auxiliary diagnostic models for esophageal cancer are seriously limited by small inter-class differences, imbalanced data distribution, and poor model interpretability. We therefore developed the category imbalance attention block network (CIABNet) to address these problems. METHODS: First, quantitative metrics and model visualization results are integrated to transfer knowledge from source domain images and better identify the regions of interest (ROI) in the esophageal cancer target domain. Second, to attend to subtle inter-class differences, we propose a concatenate fusion attention block, which simultaneously focuses on contextual local feature relationships and on changes in channel attention weights among different regions. Third, we propose a category imbalance attention module, which treats each esophageal cancer differentiation class fairly by aggregating information of different intensities at multiple scales and exploring more representative regional features for each class, effectively mitigating the negative impact of category imbalance. Finally, we use feature map visualization to interpret whether the ROIs of the model and of pathologists are the same or similar, improving the interpretability of the model.
RESULTS: The experimental results show that the CIABNet model outperforms other state-of-the-art models in classifying the differentiation types of esophageal cancer, achieving an average classification accuracy of 92.24%, average precision of 93.52%, average recall of 90.31%, average F1 value of 91.73%, and average AUC of 97.43%. In addition, in identifying histopathological images of esophageal cancer, the ROIs of the CIABNet model are essentially similar or identical to those of pathologists. CONCLUSIONS: Our experimental results show that the proposed computer-aided diagnostic algorithm has great potential for histopathological images of multi-differentiated types of esophageal cancer.
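The category-imbalance problem the abstract targets is commonly countered by weighting each class inversely to its frequency, so rare differentiation grades contribute as much to the loss as common ones. The sketch below shows that generic re-weighting, not CIABNet's attention module; the grade labels are hypothetical.

```python
# Inverse-frequency class weights: weight_c = n / (k * count_c),
# so a perfectly balanced dataset gives every class weight 1.0.

def class_weights(labels):
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n, k = len(labels), len(counts)
    return {c: n / (k * count) for c, count in counts.items()}

labels = ["well"] * 6 + ["moderate"] * 3 + ["poor"] * 1
print(round(class_weights(labels)["poor"], 3))  # → 3.333
```

Multiplying each sample's loss by its class weight keeps the rare "poor" grade from being drowned out during training.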


Subjects
Esophageal Neoplasms; Humans; Esophageal Neoplasms/diagnostic imaging; Benchmarking; Image Processing, Computer-Assisted
5.
Sci Rep; 12(1): 15103, 2022 Sep 6.
Article in English | MEDLINE | ID: mdl-36068309

ABSTRACT

Histopathological image analysis is the gold standard for pathologists to grade colorectal cancers of different differentiation types. However, diagnosis by pathologists is highly subjective and prone to misdiagnosis. In this study, we constructed a new attention mechanism named MCCBAM, based on a channel attention mechanism and a spatial attention mechanism, and developed a computer-aided diagnosis (CAD) method based on CNN and MCCBAM, called HCCANet. The study included 630 histopathology images denoised with Gaussian filtering, and gradient-weighted class activation mapping (Grad-CAM) was used to visualize regions of interest in HCCANet to improve its interpretability. The experimental results show that the proposed HCCANet model outperforms four advanced deep learning techniques (ResNet50, MobileNetV2, Xception, and DenseNet121) and four classical machine learning techniques (KNN, NB, RF, and SVM), achieving classification accuracies of 90.2%, 85%, and 86.7% for colorectal cancers with high, medium, and low differentiation levels, respectively, with an overall accuracy of 87.3% and an average AUC of 0.9. In addition, the MCCBAM constructed in this study outperforms several commonly used attention mechanisms (SAM, SENet, SKNet, Non_Local, CBAM, and BAM) on the backbone network. In conclusion, the HCCANet model proposed in this study is feasible for the postoperative adjuvant diagnosis and grading of colorectal cancer.
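Combined channel and spatial attention of the kind CBAM-style mechanisms use can be sketched on a tiny feature map. This is an illustrative toy in the spirit of MCCBAM, not its actual architecture: each channel is scaled by a sigmoid gate from its global-average response, then each spatial position by a gate from the cross-channel mean.

```python
# Toy channel-then-spatial attention over a [channels][height][width]
# feature map stored as nested lists (a CBAM-like sketch, not MCCBAM).
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def channel_spatial_attention(fmap):
    c, h, w = len(fmap), len(fmap[0]), len(fmap[0][0])
    # channel attention: one sigmoid gate per channel from its global mean
    ch_gate = [sigmoid(sum(sum(row) for row in ch) / (h * w)) for ch in fmap]
    gated = [[[ch_gate[i] * fmap[i][r][col] for col in range(w)]
              for r in range(h)] for i in range(c)]
    # spatial attention: one sigmoid gate per position from cross-channel mean
    sp_gate = [[sigmoid(sum(gated[i][r][col] for i in range(c)) / c)
                for col in range(w)] for r in range(h)]
    return [[[sp_gate[r][col] * gated[i][r][col] for col in range(w)]
             for r in range(h)] for i in range(c)]

fmap = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 2.0]]]
out = channel_spatial_attention(fmap)
print(round(out[0][0][0], 3))
```

Real CBAM-style modules learn the gating from both max- and average-pooled statistics through small networks; the fixed sigmoid-of-mean gates here only convey the two-stage channel-then-spatial structure.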


Subjects
Colorectal Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Colorectal Neoplasms/classification; Colorectal Neoplasms/pathology; Humans; Image Processing, Computer-Assisted; Machine Learning; Neoplasm Grading; Normal Distribution; Spatial Analysis