Results 1 - 20 of 28
1.
Neural Netw ; 179: 106561, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39084171

ABSTRACT

Person re-identification (ReID) has made good progress in stationary domains. However, the ReID model must be retrained to adapt to new scenarios (domains) as they emerge unexpectedly, which leads to catastrophic forgetting. Continual learning trains the model in the order of domain emergence to alleviate catastrophic forgetting, yet the generalization ability of the model is still limited due to the distribution difference between training and testing domains. To address this problem, we propose the generalized continual person re-identification (GCReID) model to continuously train an anti-forgetting and generalizable model. We endeavor to increase the diversity of samples with priors to simulate unseen domains. Meta-training and meta-testing are adopted to enhance the generalization of the model. Universal knowledge extracted from all seen domains and the simulated domains is stored in a set of feature embeddings. This knowledge is continually updated and applied to guide meta-training and meta-testing via a graph attention network. Extensive experiments on 12 benchmark datasets and comparisons with 6 representative models demonstrate the effectiveness of the proposed GCReID model in enhancing generalization performance on unseen domains and alleviating catastrophic forgetting of seen domains. The code will be available at https://github.com/DFLAG-NEU/GCReID if our work is accepted.
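
As a concrete illustration of how a set of stored knowledge embeddings could be refined with graph attention before guiding meta-training and meta-testing, the following is a minimal sketch assuming PyTorch; the single attention head, fully connected graph, and layer sizes are illustrative assumptions, not the released GCReID code.

```python
# Minimal sketch of a single-head graph-attention update over stored
# knowledge embeddings; an illustrative assumption, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, x):              # x: (num_nodes, dim) knowledge embeddings
        h = self.proj(x)
        n = h.size(0)
        # pairwise attention logits e_ij computed from concatenated node pairs
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = torch.tanh(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        alpha = F.softmax(e, dim=-1)   # attention over a fully connected graph
        return alpha @ h               # updated knowledge embeddings

knowledge = torch.randn(6, 128)        # 6 stored domain embeddings, 128-dim
updated = SimpleGraphAttention(128)(knowledge)
print(updated.shape)                   # torch.Size([6, 128])
```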


Subjects
Computer Neural Networks, Humans, Knowledge, Psychological Generalization, Learning, Biometric Identification/methods, Machine Learning, Algorithms
2.
Med Image Anal ; 97: 103272, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39024972

ABSTRACT

Landmark detection is a crucial task in medical image analysis, with applications across various fields. However, current methods struggle to accurately locate landmarks in medical images with blurred tissue boundaries due to low image quality. In particular, in echocardiography, sparse annotations make it challenging to predict landmarks with positional stability and temporal consistency. In this paper, we propose a spatio-temporal graph convolutional network tailored for echocardiography landmark detection. We specifically sample landmark labels from the left ventricular endocardium and pre-calculate their correlations to establish structural priors. Our approach involves a graph convolutional neural network that learns the interrelationships among landmarks, significantly enhancing landmark accuracy within ambiguous tissue contexts. Additionally, we integrate gated recurrent units to capture the temporal consistency of landmarks across consecutive frames, augmenting the model's resilience to unlabeled data. Through validation across three echocardiography datasets, our method demonstrates superior accuracy when contrasted with alternative landmark detection models.
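
The temporal-consistency idea can be illustrated with a minimal sketch of gated recurrent units running over per-frame landmark features, assuming PyTorch; the feature dimension, hidden size, and coordinate head are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: a GRU adds temporal context to per-frame landmark features
# before regressing (x, y) coordinates; shapes and names are illustrative.
import torch
import torch.nn as nn

num_frames, feat_dim, num_landmarks = 16, 64, 7
frame_features = torch.randn(1, num_frames, feat_dim)   # (batch, time, features)

gru = nn.GRU(input_size=feat_dim, hidden_size=128, batch_first=True)
head = nn.Linear(128, num_landmarks * 2)                 # predict (x, y) per landmark

hidden_states, _ = gru(frame_features)                   # temporal context per frame
coords = head(hidden_states).view(1, num_frames, num_landmarks, 2)
print(coords.shape)                                      # torch.Size([1, 16, 7, 2])
```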


Subjects
Echocardiography, Computer Neural Networks, Humans, Echocardiography/methods, Heart Ventricles/diagnostic imaging, Anatomic Landmarks, Computer-Assisted Image Processing/methods, Spatio-Temporal Analysis, Algorithms
3.
Quant Imaging Med Surg ; 14(7): 5176-5204, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39022282

ABSTRACT

Background and Objective: Cervical cancer clinical target volume (CTV) outlining and organs at risk segmentation are crucial steps in the diagnosis and treatment of cervical cancer. Manual segmentation is inefficient and subjective, leading to the development of automated or semi-automated methods. However, limitations in image quality, organ motion, and individual differences still pose significant challenges. Despite the number of studies on medical image segmentation, a comprehensive review of the field is lacking. The purpose of this paper is to comprehensively review the literature on the different types of medical image segmentation for cervical cancer and to discuss the current state of and challenges in the segmentation process. Methods: As of May 31, 2023, we conducted a comprehensive literature search on Google Scholar, PubMed, and Web of Science using the following term combinations: "cervical cancer images", "segmentation", and "outline". The included studies focused on the segmentation of cervical cancer using computed tomography (CT), magnetic resonance (MR), and positron emission tomography (PET) images, with eligibility screening performed by two independent investigators. Key Content and Findings: This paper reviews representative papers on CTV and organs at risk segmentation in cervical cancer and classifies the methods into three categories based on image modality. Traditional and deep learning methods are comprehensively described. The similarities and differences of related methods are analyzed, and their advantages and limitations are discussed in depth. We have also included experimental results obtained on our private datasets to verify the performance of selected methods. The results indicate that the residual module and the squeeze-and-excitation block can significantly improve the performance of a model. Additionally, the segmentation method based on an improved level set demonstrates better segmentation accuracy than the other methods. Conclusions: The paper provides valuable insights into the current state of the art in cervical cancer CTV outlining and organs at risk segmentation, highlighting areas for future research.
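
For readers unfamiliar with the squeeze-and-excitation block that the review highlights, the following is a minimal PyTorch sketch of the standard design; the channel count and reduction ratio are illustrative choices.

```python
# Minimal PyTorch sketch of a squeeze-and-excitation (SE) block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, H, W)
        weights = self.fc(x.mean(dim=(2, 3)))  # squeeze: global average pooling
        return x * weights[:, :, None, None]   # excitation: channel re-weighting

features = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(features).shape)             # torch.Size([2, 64, 32, 32])
```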

4.
BMC Med Imaging ; 24(1): 47, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38373915

ABSTRACT

BACKGROUND: Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an important role in the diagnosis and treatment of breast cancer. However, obtaining all eight temporal images of DCE-MRI requires a long scanning time, which causes patient discomfort during the scanning process. Therefore, to reduce this time, the multi-temporal feature fusing neural network with co-attention (MTFN) is proposed to generate the eighth temporal image of DCE-MRI, which enables the acquisition of DCE-MRI images without a full scan. METHODS: In this paper, we propose the multi-temporal feature fusing neural network with co-attention (MTFN) for DCE-MRI synthesis, in which the co-attention module fully fuses the features of the first and third temporal images to obtain hybrid features. The co-attention explores long-range dependencies, not just relationships between pixels. Therefore, the hybrid features are more helpful for generating the eighth temporal image. RESULTS: We conduct experiments on a private breast DCE-MRI dataset from hospitals and the multimodal Brain Tumor Segmentation Challenge 2018 dataset (BraTS2018). Compared with existing methods, the experimental results show that our method improves on them and can generate more realistic images. Meanwhile, we also use the synthetic images to classify the molecular subtypes of breast cancer: the accuracies on the original eighth temporal images and the generated images are 89.53% and 92.46%, respectively, an improvement of about 3%, and the classification results verify the practicability of the synthetic images. CONCLUSIONS: The results of subjective evaluation and objective image quality metrics show the effectiveness of our method, which can obtain comprehensive and useful information. The improvement in classification accuracy proves that the images generated by our method are practical.
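
One hedged way to picture the co-attention fusion of two temporal images is cross-attention between their feature maps, sketched below with PyTorch's built-in multi-head attention; the token layout and dimensions are illustrative assumptions, not the paper's exact module.

```python
# Minimal sketch of cross-attention fusion between features of two DCE-MRI
# time points; it illustrates the long-range dependency idea only.
import torch
import torch.nn as nn

dim = 64
t1_feat = torch.randn(1, 256, dim)   # first temporal image: 256 spatial tokens
t3_feat = torch.randn(1, 256, dim)   # third temporal image: 256 spatial tokens

attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
# queries from one time point attend over the other, mixing information
hybrid, _ = attn(query=t1_feat, key=t3_feat, value=t3_feat)
print(hybrid.shape)                  # torch.Size([1, 256, 64])
```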


Subjects
Algorithms, Breast Neoplasms, Humans, Female, Computer-Assisted Image Interpretation/methods, Magnetic Resonance Imaging/methods, Breast/pathology, Breast Neoplasms/pathology, Computer-Assisted Image Processing
5.
iScience ; 26(7): 107005, 2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37534183

ABSTRACT

Proposing a general segmentation approach for lung lesions, including pulmonary nodules, pneumonia, and tuberculosis, in CT images will improve efficiency in radiology. However, the performance of generative adversarial networks is hampered by the limited availability of annotated samples and the catastrophic forgetting of the discriminator, whereas the universality of traditional morphology-based methods is insufficient for segmenting diverse lung lesions. A cascaded dual-attention network with a context-aware pyramid feature extraction module was designed to address these challenges. A self-supervised rotation loss was designed to mitigate discriminator forgetting. The proposed model achieved Dice coefficients of 70.92, 73.55, and 68.52% on multi-center pneumonia, lung nodule, and tuberculosis test datasets, respectively. No significant decrease in accuracy was observed (p > 0.10) when a small training sample size was used. The cyclic training of the discriminator was reduced with self-supervised rotation loss (p < 0.01). The proposed approach is promising for segmenting multiple lung lesion types in CT images.
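
The self-supervised rotation loss can be sketched as a four-way rotation-prediction objective attached to the discriminator's encoder; the tiny encoder and head below are illustrative assumptions, not the paper's implementation (PyTorch assumed).

```python
# Minimal sketch of a rotation-prediction loss that can be added to a
# discriminator to counter forgetting; angles and modules are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotation_loss(images, encoder, rot_head):
    """images: (B, C, H, W); rot_head predicts which of 4 rotations was applied."""
    rotated, labels = [], []
    for k in range(4):                                   # 0, 90, 180, 270 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    logits = rot_head(encoder(torch.cat(rotated)))
    return F.cross_entropy(logits, torch.cat(labels))

encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
rot_head = nn.Linear(8, 4)
loss = rotation_loss(torch.randn(4, 1, 64, 64), encoder, rot_head)
print(loss.item())
```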

6.
Comput Med Imaging Graph ; 108: 102264, 2023 09.
Article in English | MEDLINE | ID: mdl-37418789

ABSTRACT

Cardiovascular disease is the leading cause of death worldwide, and acute coronary syndrome (ACS) is a common first manifestation. Studies have shown that pericoronary adipose tissue (PCAT) computed tomography (CT) attenuation and atherosclerotic plaque characteristics can be used to predict future adverse ACS events. However, radiomics-based methods have limitations in extracting features of PCAT and atherosclerotic plaques. Therefore, we propose a hybrid deep learning framework capable of extracting coronary CT angiography (CCTA) imaging features of both PCAT and atherosclerotic plaques for ACS prediction. The framework comprises a two-stream CNN feature extraction (TSCFE) module to extract the features of PCAT and atherosclerotic plaques, respectively, and a channel feature fusion (CFF) module to explore the correlations between their features. A trilinear-based fully connected (FC) prediction module then maps the high-dimensional representations stepwise to the low-dimensional label space. The framework was validated on retrospectively collected suspected coronary artery disease cases examined by CCTA. The prediction accuracy, sensitivity, specificity, and area under the curve (AUC) are all higher than those of classical image classification networks and state-of-the-art medical image classification methods. The experimental results show that the proposed method can effectively and accurately extract CCTA imaging features of PCAT and atherosclerotic plaques and explore their feature correlations to produce impressive performance. Thus, it has potential value for clinical applications in accurate ACS prediction.
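
A minimal sketch of fusing two feature streams along the channel dimension with a learned re-weighting is given below, assuming PyTorch; the layer sizes are illustrative and this is not the paper's TSCFE/CFF design.

```python
# Minimal sketch: concatenate two feature streams (e.g. PCAT and plaque)
# along channels and re-weight them with a learned gate; sizes are illustrative.
import torch
import torch.nn as nn

pcat_feat = torch.randn(2, 128, 8, 8)      # stream 1 features
plaque_feat = torch.randn(2, 128, 8, 8)    # stream 2 features

fused = torch.cat([pcat_feat, plaque_feat], dim=1)          # (2, 256, 8, 8)
gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(256, 256), nn.Sigmoid())
weights = gate(fused)[:, :, None, None]                      # per-channel weights
print((fused * weights).shape)                                # torch.Size([2, 256, 8, 8])
```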


Subjects
Acute Coronary Syndrome, Coronary Artery Disease, Atherosclerotic Plaque, Humans, Atherosclerotic Plaque/diagnostic imaging, Acute Coronary Syndrome/diagnostic imaging, Retrospective Studies, Coronary Angiography/methods, Coronary Artery Disease/diagnostic imaging, Computed Tomography Angiography/methods, Adipose Tissue/diagnostic imaging, Coronary Vessels
7.
Quant Imaging Med Surg ; 13(7): 4429-4446, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37456326

ABSTRACT

Background: Breast cancer is a major cause of mortality among women worldwide. Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) is an imaging technique that can show temporal information about the kinetics of the contrast agent in suspicious breast lesions with acceptable spatial resolution. Computer-aided detection systems assist in the detection of lesions through medical image processing techniques combined with computerized analysis and calculation, which in turn helps radiologists recognize the molecular subtypes of breast lesions and supports better treatment planning. Methods: In this paper, a computer-aided diagnosis method is proposed to automatically locate breast cancer lesions and identify the molecular subtypes of breast cancer with heterogeneity analysis of radiomics data. A faster region-based convolutional network (Faster R-CNN) framework is first applied to the images to detect breast cancer lesions. Then, the heterogeneous regions of every breast cancer lesion are extracted. Based on the multiple visual and kinetic radiomics features extracted from the heterogeneous regions, a temporal bag-of-visual-words model is proposed, which takes into account the dynamic characteristics of both the lesion and heterogeneous regions in images over time. The recognition of the molecular subtypes of breast lesions is realized with a stacking classification model. Results: At the genetic level, breast cancer is divided into four molecular subtypes, namely luminal epithelial type A (Luminal A), luminal epithelial type B (Luminal B), HER-2 overexpression, and basal cell type. The experimental results show that the precision for the four subtypes is 93%, 94%, 83%, and 86%; the recall is 96%, 80%, 91%, and 94%; and the F1-score is 95%, 86%, 87%. Conclusions: The experimental results demonstrate the influence of heterogeneous regions on the recognition task. The DCE-MRI-based approach to identifying the molecular subtypes of breast cancer for noninvasive diagnosis will contribute to the development of breast cancer treatment, improved outcomes, and reduced mortality.
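
The final stacking step can be illustrated with scikit-learn's StackingClassifier on synthetic four-class data; the base learners and data below are illustrative assumptions, not the paper's feature set or tuned models.

```python
# Minimal sketch of a stacking classifier for a four-class recognition task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=50, n_classes=4,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_train, y_train)
print("test accuracy:", stack.score(X_test, y_test))
```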

8.
Eur Radiol ; 33(12): 8477-8487, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37389610

ABSTRACT

OBJECTIVE: The current study aimed to explore a deep convolutional neural network (DCNN) model that integrates multidimensional CMR data to accurately identify LV paradoxical pulsation in patients with isolated anterior infarction after reperfusion by primary percutaneous coronary intervention. METHODS: A total of 401 participants (311 patients and 90 age-matched volunteers) were recruited for this prospective study. The two-dimensional UNet segmentation model of the LV and the classification model for identifying paradoxical pulsation were established using the DCNN model. Features of 2- and 3-chamber images were extracted with 2-dimensional (2D) and 3D ResNets using masks generated by the segmentation model. Next, the accuracy of the segmentation model was evaluated using the Dice score, and that of the classification model using the receiver operating characteristic (ROC) curve and confusion matrix. The areas under the ROC curve (AUCs) of the physicians in training and the DCNN models were compared using the DeLong method. RESULTS: The DCNN model showed that the AUCs for the detection of paradoxical pulsation were 0.97, 0.91, and 0.83 in the training, internal, and external testing cohorts, respectively (p < 0.001). The 2.5-dimensional model established using the end-systolic and end-diastolic images combined with 2-chamber and 3-chamber images was more efficient than the 3D model. The discrimination performance of the DCNN model was better than that of physicians in training (p < 0.05). CONCLUSIONS: Compared to models trained on 2-chamber or 3-chamber images alone or on 3D multiview data, our 2.5D multiview model combines the information of the 2-chamber and 3-chamber views more efficiently and obtains the highest diagnostic sensitivity. CLINICAL RELEVANCE STATEMENT: A deep convolutional neural network model that integrates 2-chamber and 3-chamber CMR images can identify LV paradoxical pulsation, which correlates with LV thrombosis, heart failure, and ventricular tachycardia after reperfusion by primary percutaneous coronary intervention for isolated anterior infarction. KEY POINTS: • The epicardial segmentation model was established using the 2D UNet based on end-diastole 2- and 3-chamber cine images. • The DCNN model proposed in this study discriminated LV paradoxical pulsation after anterior AMI from CMR cine images more accurately and objectively than physicians in training. • The 2.5-dimensional multiview model combined the information of the 2- and 3-chamber views efficiently and obtained the highest diagnostic sensitivity.


Subjects
Deep Learning, Myocardial Infarction, Humans, Prospective Studies, Magnetic Resonance Imaging, Computer Neural Networks, Myocardial Infarction/diagnostic imaging
9.
IEEE Trans Med Imaging ; 42(8): 2386-2399, 2023 08.
Article in English | MEDLINE | ID: mdl-37028009

ABSTRACT

Increased pericardial adipose tissue (PEAT) is associated with a series of cardiovascular diseases (CVDs) and metabolic syndromes. Quantitative analysis of PEAT by means of image segmentation is of great significance. Although cardiovascular magnetic resonance (CMR) has been utilized as a routine method for non-invasive and non-radioactive CVD diagnosis, segmentation of PEAT in CMR images is challenging and laborious. In practice, no public CMR datasets are available for validating automatic PEAT segmentation. Therefore, we first release a benchmark CMR dataset, MRPEAT, which consists of cardiac short-axis (SA) CMR images from 50 hypertrophic cardiomyopathy (HCM), 50 acute myocardial infarction (AMI), and 50 normal control (NC) subjects. We then propose a deep learning model, named 3SUnet, to segment PEAT on MRPEAT, tackling the challenges that PEAT is relatively small and diverse and that its intensities are hard to distinguish from the background. The 3SUnet is a triple-stage network whose backbones are all Unet. One Unet is used to extract, for any given image, a region of interest (ROI) that completely contains the ventricles and PEAT, using a multi-task continual learning strategy. Another Unet is adopted to segment PEAT in the ROI-cropped images. The third Unet is utilized to refine PEAT segmentation accuracy guided by an image-adaptive probability map. The proposed model is qualitatively and quantitatively compared with state-of-the-art models on the dataset. We obtain the PEAT segmentation results through 3SUnet, assess the robustness of 3SUnet under different pathological conditions, and identify the imaging indications of PEAT in CVDs. The dataset and all source codes are available at https://dflag-neu.github.io/member/csz/research/.


Subjects
Benchmarking, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Heart Ventricles, Pericardium/diagnostic imaging, Soil, Computer-Assisted Image Processing/methods
10.
Comput Biol Med ; 156: 106705, 2023 04.
Article in English | MEDLINE | ID: mdl-36863190

ABSTRACT

Left ventricular ejection fraction (LVEF) is essential for evaluating left ventricular systolic function. However, its clinical calculation requires the physician to interactively segment the left ventricle and obtain the mitral annulus and apical landmarks. This process is poorly reproducible and error-prone. In this study, we propose a multi-task deep learning network, EchoEFNet. The network uses ResNet50 with dilated convolutions as the backbone to extract high-dimensional features while maintaining spatial information. The branch networks use our designed multi-scale feature fusion decoder to segment the left ventricle and detect landmarks simultaneously. The LVEF is then calculated automatically and accurately using the biplane Simpson's method. The model was tested on the public dataset CAMUS and the private dataset CMUEcho. The experimental results showed that EchoEFNet outperformed other deep learning methods in both geometrical metrics and the percentage of correct keypoints. The correlations between the predicted LVEF and the true values on the CAMUS and CMUEcho datasets were 0.854 and 0.916, respectively.
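
For reference, the biplane Simpson (method-of-disks) calculation that the segmentation and landmarks feed into can be worked through numerically as below; the disk diameters and long-axis lengths are made-up illustrative values, not patient data.

```python
# Worked sketch of the biplane Simpson method-of-disks LVEF calculation.
import numpy as np

def biplane_simpson_volume(d_a4c, d_a2c, length, n_disks=20):
    """Volume (ml) from paired disk diameters (cm) in the 4- and 2-chamber
    views; `length` is the long-axis length in cm."""
    d_a4c, d_a2c = np.asarray(d_a4c), np.asarray(d_a2c)
    return np.pi / 4.0 * np.sum(d_a4c * d_a2c) * (length / n_disks)

rng = np.random.default_rng(0)                      # fabricated example diameters
edv = biplane_simpson_volume(rng.uniform(3.5, 4.5, 20), rng.uniform(3.5, 4.5, 20), 8.5)
esv = biplane_simpson_volume(rng.uniform(2.0, 3.0, 20), rng.uniform(2.0, 3.0, 20), 7.0)
lvef = (edv - esv) / edv * 100
print(f"EDV={edv:.1f} ml, ESV={esv:.1f} ml, LVEF={lvef:.1f}%")
```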


Subjects
Deep Learning, Left Ventricular Function, Stroke Volume, Echocardiography/methods, Heart Ventricles/diagnostic imaging
11.
Neural Netw ; 161: 105-115, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36739628

ABSTRACT

Person re-identification (ReID), considered a sub-problem of image retrieval, is critical for intelligent security. The general practice is to train a deep model on images from a particular scenario (also known as a domain) and perform retrieval tests on images from the same domain. Thus, the model has to be retrained to ensure good performance on unseen domains. Unfortunately, retraining introduces the so-called catastrophic forgetting problem inherent in deep learning models. To address this problem, we propose a Continual person re-identification model via a Knowledge-Preserving (CKP) mechanism. The proposed model is able to accumulate knowledge from continuously changing scenarios. The knowledge is updated via a graph attention network from a human-cognition-inspired perspective as the scenario changes. The accumulated knowledge is used to guide the learning process of the proposed model on image samples from newly arriving domains. We finally evaluate and compare CKP with fine-tuning, continual learning in image classification and person re-identification, and joint training. Experiments on representative benchmark datasets (Market1501, DukeMTMC, CUHK03, CUHK-SYSU, and MSMT17, which arrive in different orders) demonstrate the advantages of the proposed model in preventing forgetting, and experiments on other benchmark datasets (GRID, SenseReID, CUHK01, CUHK02, VIPER, iLIDS, and PRID, which are not available during training) demonstrate its generalization ability. CKP outperforms the best comparative model by 0.58% and 0.65% on seen domains (datasets available during training), and by 0.95% and 1.02% on never-seen domains (datasets not available during training), in terms of mAP and Rank1, respectively. The arrival order of the training datasets, the guidance of accumulated knowledge for learning new knowledge, and parameter settings are also discussed.


Subjects
Biometric Identification, Humans, Biometric Identification/methods, Benchmarking, Longitudinal Studies
12.
Technol Cancer Res Treat ; 22: 15330338221139164, 2023.
Article in English | MEDLINE | ID: mdl-36601655

ABSTRACT

Introduction: Segmentation of the clinical target volume (CTV) from CT images is critical for cervical cancer brachytherapy, but this task is time-consuming, laborious, and not reproducible. In this work, we aim to propose an end-to-end model to accurately segment the CTV for cervical cancer brachytherapy. Methods: In this paper, an improved M-Net model (Mnet_IM) is proposed to segment the CTV of cervical cancer from CT images. An input branch and an output branch are attached to the bottom layer to deal with the difficulty of locating the CTV, whose contrast is lower than that of the surrounding organs and tissues. A progressive fusion approach is then proposed to recover the prediction results layer by layer to enhance the smoothness of the segmentation results. A loss function is defined on each of the multiscale outputs to form a deep supervision mechanism. The numbers of feature-map channels directly connected to the inputs are finally homogenized for each image resolution to reduce feature redundancy and computational burden. Results: Experimental results of the proposed model and some representative models on 5438 image slices from 53 cervical cancer patients demonstrate the advantages of the proposed model in terms of segmentation accuracy metrics such as average surface distance, 95% Hausdorff distance, surface overlap, surface Dice, and volumetric Dice. Conclusion: The CTV predicted by the proposed Mnet_IM agrees better with the manually labeled ground truth than that of representative state-of-the-art models.
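
The deep supervision mechanism mentioned above (one loss per multiscale output, summed) can be sketched as follows in PyTorch; the Dice-style loss and weights are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of deep supervision: attach a loss to every multiscale output.
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def deep_supervision_loss(multiscale_logits, target, weights=(1.0, 0.5, 0.25)):
    """multiscale_logits: list of (B, 1, h, w) outputs at decreasing resolution."""
    total = 0.0
    for logits, w in zip(multiscale_logits, weights):
        up = F.interpolate(torch.sigmoid(logits), size=target.shape[-2:],
                           mode="bilinear", align_corners=False)
        total = total + w * dice_loss(up, target)       # supervise every scale
    return total

target = (torch.rand(1, 1, 128, 128) > 0.7).float()
outputs = [torch.randn(1, 1, s, s) for s in (128, 64, 32)]
print(deep_supervision_loss(outputs, target).item())
```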


Subjects
Brachytherapy, Deep Learning, Uterine Cervical Neoplasms, Female, Humans, Uterine Cervical Neoplasms/diagnostic imaging, Uterine Cervical Neoplasms/radiotherapy, X-Ray Computed Tomography/methods, Computer-Assisted Image Processing/methods
13.
Med Biol Eng Comput ; 60(11): 3325-3340, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36169905

ABSTRACT

Accurate abdominal vessel segmentation of CT angiography (CTA) data is essential for diagnosis and surgical planning. However, it is a difficult problem owing to the following challenges: (1) the complex abdominal vessel structure containing a wide range of vessel branch sizes, (2) the low contrast of small vessels, and (3) the uneven distribution of vessel gray levels. With full consideration of these challenges, we propose an automatic vessel segmentation algorithm. For challenge 1, the algorithm's framework is divided into large and small vessel segmentation and has the following steps. Firstly, a vessel-model-embedded fuzzy c-means (VMEFCM) method, designed with full consideration of challenge 2, is presented to obtain the initial vessel voxels. Then, considering challenge 3, a large vessel segmentation method based on the initial vessel voxels, similarity, and morphology is proposed. Finally, a small vessel segmentation method based on the spine is described. Extensive analysis is carried out on simulated datasets and 78 CTA datasets. The experimental results indicate that each step of the algorithm achieves the expected results and that the proposed algorithm is effective and accurate with low computational cost. The Dice coefficient, sensitivity, Jaccard coefficient, and precision were 93.7±2.8%, 93.7±2.8%, 88.2±4.8%, and 94.2±7.5%, respectively.


Subjects
Algorithms, Computed Tomography Angiography, Computed Tomography Angiography/methods, Prospective Studies
14.
Int J Comput Assist Radiol Surg ; 17(10): 1879-1890, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35764765

ABSTRACT

PURPOSE: Coronary artery segmentation in coronary computed tomography angiography (CTA) images plays a crucial role in diagnosing cardiovascular diseases. However, due to the complexity of coronary CTA images and the coronary structure, it is difficult to segment coronary arteries accurately and efficiently from numerous coronary CTA images automatically. METHOD: In this study, an automatic method based on a symmetrical radiation filter (SRF) and D-means is presented. The SRF, which is applied to the three orthogonal planes, is designed to filter suspicious vessel tissue according to the gradient changes on vascular boundaries so as to segment coronary arteries accurately and reduce the computational cost. Additionally, D-means local clustering is embedded into the vessel segmentation to eliminate the impact of noise in coronary CTA images. RESULTS: The results of the proposed method were compared against manual delineations in 210 coronary CTA data sets. The average values of true positive, false positive, Jaccard measure, and Dice coefficient were [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively. Moreover, comparisons on the delineated data sets and public data sets showed that the proposed method outperforms related methods. CONCLUSION: The experimental results indicate that the proposed method can perform complete, robust, and accurate segmentation of coronary arteries with low computational cost. Therefore, the proposed method is effective for vessel segmentation of coronary CTA images without extensive training data and can meet the needs of clinical applications.


Subjects
Computed Tomography Angiography, Three-Dimensional Imaging, Algorithms, Computed Tomography Angiography/methods, Coronary Angiography/methods, Humans, Three-Dimensional Imaging/methods, X-Ray Computed Tomography/methods
15.
Math Biosci Eng ; 19(5): 4881-4891, 2022 03 14.
Article in English | MEDLINE | ID: mdl-35430845

ABSTRACT

Gene expression data are high-dimensional. Because disease-related genes account for only a tiny fraction of them, a deep learning model, namely GSEnet, is proposed to extract instructive features from gene expression data. This model consists of three modules, namely the pre-conv module, the SE-Resnet module, and the SE-conv module. The effectiveness of the proposed model in improving the performance of 9 representative classifiers is evaluated, using seven evaluation metrics on the GSE99095 dataset. The robustness and advantages of the proposed model compared with representative feature selection methods are also discussed. Results show the superiority of the proposed model in improving classification precision and accuracy.


Subjects
Leukemia, Computer Neural Networks, Gene Expression, Humans, Leukemia/genetics
16.
IEEE J Biomed Health Inform ; 26(1): 79-89, 2022 01.
Article in English | MEDLINE | ID: mdl-34057903

ABSTRACT

Automated pancreatic cancer segmentation is highly crucial for computer-assisted diagnosis. The general practice is to label images from selected modalities, since it is expensive to label all modalities. This practice has generated significant interest in transferring knowledge from the labeled modalities to unlabeled ones. However, the imaging-parameter inconsistency between modalities leads to a domain shift, limiting transfer learning performance. Therefore, we propose an unsupervised domain adaptation segmentation framework for pancreatic cancer based on a graph convolutional network (GCN) and a meta-learning strategy. Our model first transforms the source image into a target-like visual appearance through the synergistic collaboration between image and feature adaptation. Specifically, we employ encoders incorporating adversarial learning to separate domain-invariant features from domain-specific ones to achieve the visual appearance translation. Then, the meta-learning strategy, with its good generalization capabilities, is exploited to strike a reasonable balance in the training of the source and transformed images. Thus, the model acquires more correlated features and improves its adaptability to the target images. Moreover, a GCN is introduced to supervise the high-dimensional abstract features directly related to the segmentation outcomes, and hence ensures the integrity of key structural features. Extensive experiments on four multi-parameter pancreatic-cancer magnetic resonance imaging datasets demonstrate improved performance in all adaptation directions, confirming our model's effectiveness for unlabeled pancreatic cancer images. The results are promising for reducing the burden of annotation and improving the performance of computer-aided diagnosis of pancreatic cancer. Our source code will be released at https://github.com/SJTUBME-QianLab/UDAseg once this manuscript is accepted for publication.
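
The adversarial separation of domain-invariant features is often implemented with a gradient-reversal layer; the following PyTorch sketch shows that common pattern as an illustration, without claiming it is the exact mechanism used in the paper.

```python
# Minimal sketch of a gradient-reversal layer: the forward pass is the identity,
# while the backward pass negates gradients flowing to the feature encoder.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None   # reversed gradients for the encoder

features = torch.randn(4, 16, requires_grad=True)
domain_logits = GradReverse.apply(features, 1.0).sum()
domain_logits.backward()
print(features.grad[0, :4])                    # gradients are negated
```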


Subjects
Computer-Assisted Image Processing, Pancreatic Neoplasms, Humans, Magnetic Resonance Imaging, Pancreatic Neoplasms/diagnostic imaging
17.
Comput Methods Programs Biomed ; 197: 105752, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32971487

ABSTRACT

Retinal vascular disease has always been a focus of medical attention. However, segmentation of the retinal vessels from fundus images is still an open problem due to intensity inhomogeneity in the image and the thickness diversity of the retinal vessels. In this paper, we propose Frangi-based multi-scale level sets to segment retinal vessels from fundus images. Vascular structures are first enhanced by the Frangi filter, and the local optimal scales are obtained at the same time. The enhanced image and local optimal scales are taken as inputs to the proposed level set models. The effectiveness of the proposed multi-scale level sets relative to their fixed-scale versions has been evaluated using the DRIVE and STARE image repositories. In addition, the proposed level set models have been tested on the DRIVE and STARE images, and experiments show that they produce segmentation accuracy at the same level as state-of-the-art methods.
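
The enhancement step can be illustrated with the multi-scale Frangi filter available in scikit-image; the synthetic vessel image and sigma range below are illustrative assumptions, not the paper's data or tuned settings.

```python
# Minimal sketch of multi-scale Frangi vessel enhancement on a synthetic image.
import numpy as np
from skimage.filters import frangi

# synthetic image: a dark curved "vessel" on a bright background
yy, xx = np.mgrid[0:128, 0:128]
vessel = np.exp(-((yy - 64 - 10 * np.sin(xx / 12.0)) ** 2) / (2 * 2.0 ** 2))
image = 1.0 - 0.8 * vessel                       # vessels darker than background

enhanced = frangi(image, sigmas=np.arange(1, 6), black_ridges=True)
print(enhanced.shape, float(enhanced.max()))     # vesselness response per pixel
```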


Subjects
Algorithms, Retinal Diseases, Fundus Oculi, Humans, Retinal Vessels/diagnostic imaging
18.
Comput Med Imaging Graph ; 85: 101783, 2020 10.
Article in English | MEDLINE | ID: mdl-32858495

ABSTRACT

Vessel segmentation has always been a considerably challenging task due to the varying thickness of vessels and the weak contrast of medical image intensities. In this paper, an effective method is proposed, which consists of four steps. Firstly, color input images are converted into grayscale with predetermined weightings aimed at increasing image contrast. Secondly, the image intensities are extended from the region of interest to the whole image domain with a mirroring operation to avoid undesired boundaries being introduced by the image filtering operations in the next step. Thirdly, an improved multi-scale enhancement method inspired by Frangi filtering is proposed to enhance the contrast between blood vessels and other objects in the image. Finally, an improved level set model is proposed to segment blood vessels from the enhanced images and the original grayscale images. The proposed method has been evaluated on two retinal vessel image repositories, namely DRIVE and STARE. Experimental results and comparison with 13 existing methods show that the proposed method produces segmentation accuracy at the same level as representative methods in the literature. Its effectiveness for segmenting other types of vessels is also discussed at the end of this paper.
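
The mirroring operation in the second step can be illustrated with NumPy's reflect padding, which extends intensities beyond the region of interest so that subsequent filtering does not create artificial edges; the array sizes below are illustrative.

```python
# Minimal sketch of mirroring intensities outward before filtering.
import numpy as np

roi = np.arange(16, dtype=float).reshape(4, 4)     # stand-in for a fundus ROI
pad = 2
mirrored = np.pad(roi, pad_width=pad, mode="reflect")
print(mirrored.shape)                              # (8, 8)
# ...apply Frangi-style filtering on `mirrored`, then crop back to the ROI:
filtered_roi = mirrored[pad:-pad, pad:-pad]
print(np.allclose(filtered_roi, roi))              # True (no filtering applied here)
```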


Subjects
Algorithms, Retinal Vessels, Fundus Oculi, Computer-Assisted Image Processing, Retinal Vessels/diagnostic imaging
19.
Comput Math Methods Med ; 2020: 7595174, 2020.
Article in English | MEDLINE | ID: mdl-32565883

ABSTRACT

Image segmentation is still an open problem, especially when the intensities of the objects of interest overlap due to the presence of intensity inhomogeneities. A bias-correction-embedded level set model is proposed in this paper, in which inhomogeneities are estimated by orthogonal primary functions. First, an inhomogeneous intensity clustering energy is defined based on the global distribution characteristics of the image intensities, and membership functions of the clusters described by the level set function are then introduced to define the data-term energy of the proposed model. Second, a regularization term and an arc length term are also included to regularize the level set function and smooth its zero-level-set contour, respectively. Third, the proposed model is extended to multichannel and multiphase patterns to segment color images and images with multiple objects, respectively. Experimental results and comparison with relevant models demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy on widely used synthetic and real images and on the BrainWeb and IBSR image repositories.
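
As one concrete reading of the description above, a representative bias-corrected two-phase clustering energy of this kind can be written as follows; the kernel K, the weights µ and ν, and the two-phase membership functions are illustrative assumptions, not the paper's exact definition.

```latex
% Representative bias-corrected two-phase level set energy (illustrative form)
E(\phi, b, c_1, c_2) =
  \sum_{i=1}^{2} \int_{\Omega} \Big( \int_{\Omega} K(\mathbf{y}-\mathbf{x})
      \big| I(\mathbf{x}) - b(\mathbf{y})\, c_i \big|^2 \, d\mathbf{y} \Big)
      M_i\!\big(\phi(\mathbf{x})\big)\, d\mathbf{x}
  \;+\; \nu \int_{\Omega} \big| \nabla H\!\big(\phi(\mathbf{x})\big) \big| \, d\mathbf{x}
  \;+\; \mu \int_{\Omega} \tfrac{1}{2}\big( |\nabla \phi(\mathbf{x})| - 1 \big)^2 d\mathbf{x},
\qquad b(\mathbf{y}) = \mathbf{w}^{\top} G(\mathbf{y}).
```

Here the first term is the inhomogeneous intensity clustering (data) energy with membership functions M1(φ) = H(φ) and M2(φ) = 1 − H(φ), the second is the arc length term smoothing the zero-level-set contour, the third regularizes the level set function, and the bias field b is expanded in a set of orthogonal basis functions G with coefficients w.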


Subjects
Computer-Assisted Image Interpretation/statistics & numerical data, Computer-Assisted Image Processing/statistics & numerical data, Algorithms, Artificial Intelligence/statistics & numerical data, Bias, Brain/diagnostic imaging, Cluster Analysis, Computational Biology, Factual Databases, Humans, Three-Dimensional Imaging/statistics & numerical data, Magnetic Resonance Imaging/statistics & numerical data, Statistical Models, Neuroimaging/statistics & numerical data, Automated Pattern Recognition/statistics & numerical data
20.
Comput Methods Programs Biomed ; 186: 105189, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31759298

ABSTRACT

Background and Objective: Processing of medical imaging big data is deeply challenging due to the size of the data, computational complexity, secure storage, and inherent privacy issues. The traditional picture archiving and communication system, an imaging technology used in the healthcare industry, generally relies on centralized high-performance disk storage arrays in practical solutions. Existing storage solutions are not suitable for the diverse range of medical imaging big data that needs to be stored reliably and accessed in a timely manner. Cloud computing is emerging as an economical solution, providing scalability, elasticity, performance, and better cost management. Cloud-based storage architectures for medical imaging big data have therefore attracted increasing attention in industry and academia. Methods: This study presents a novel, fast, and scalable framework for a medical image storage service based on a distributed file system. Two innovations of the framework are introduced in this paper. An integrated medical imaging content indexing file model for large-scale image sequences is designed to achieve high storage efficiency on the distributed file system. A virtual file pooling technology is proposed, which uses the memory-mapped file method to achieve an efficient data reading process and provides a data swapping strategy within the pool. Results: The experiments show that the framework not only delivers file reading and writing performance that meets the requirements of real-time application domains, but also brings greater convenience to clinical system developers through multiple client access types. The framework supports different user client types through unified micro-service interfaces, which largely meet the needs of clinical system development, especially for online applications. The experimental results demonstrate that the framework can meet the needs of real-time data access as well as a traditional picture archiving and communication system. Conclusions: This framework aims to allow rapid data access to massive numbers of medical images, as demonstrated by the online web client implemented in this paper for the MISS-D framework for real-time data interaction. The framework also provides a substantial subset of the features of existing open-source and commercial alternatives and has a wide range of potential applications.
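
The memory-mapped reading behind the virtual file pool can be illustrated with the Python standard library's mmap module; the file name and raw frame layout below are illustrative assumptions, not the MISS-D implementation.

```python
# Minimal sketch of memory-mapped reading of one frame from a raw image sequence.
import mmap
import numpy as np

# write a fake image sequence to disk: 8 frames of 64 x 64 uint16 pixels
frames = np.random.randint(0, 4096, size=(8, 64, 64), dtype=np.uint16)
frames.tofile("sequence.raw")

frame_bytes = 64 * 64 * 2
with open("sequence.raw", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        offset = 5 * frame_bytes                     # read frame 5 only,
        frame5 = np.frombuffer(mm[offset:offset + frame_bytes],   # without loading
                               dtype=np.uint16).reshape(64, 64)   # the whole file
print(frame5.shape)                                  # (64, 64)
```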


Subjects
Information Storage and Retrieval/methods, Radiology Information Systems/instrumentation, Big Data, Diagnostic Imaging