Results 1 - 7 of 7
1.
Med Image Anal ; 97: 103241, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38897032

ABSTRACT

Although U-shaped networks have achieved remarkable performance in many medical image segmentation tasks, they rarely model the sequential relationship between hierarchical layers. This weakness makes it difficult for the current layer to effectively utilize the historical information of the previous layer, leading to unsatisfactory segmentation results for lesions with blurred boundaries and irregular shapes. To solve this problem, we propose a novel dual-path U-Net, dubbed I2U-Net. The newly proposed network encourages historical information re-usage and re-exploration through rich information interaction between the dual paths, allowing deep layers to learn more comprehensive features that contain both low-level detail and high-level semantic abstraction. Specifically, we introduce a multi-functional information interaction module (MFII), which can model cross-path, cross-layer, and cross-path-and-layer information interactions via a unified design, allowing the proposed I2U-Net to behave similarly to an unfolded RNN and enjoy its advantage in modeling time-sequence information. In addition, to further selectively and sensitively integrate the information extracted by the encoder of the dual paths, we propose a holistic information fusion and augmentation module (HIFA), which can efficiently bridge the encoder and the decoder. Extensive experiments on four challenging tasks, including skin lesion, polyp, brain tumor, and abdominal multi-organ segmentation, consistently show that the proposed I2U-Net has superior performance and generalization ability over other state-of-the-art methods. The code is available at https://github.com/duweidai/I2U-Net.
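The abstract does not spell out the MFII design, so the following is only a minimal, hypothetical sketch of a dual-path block in which two paths exchange features through 1x1 convolutions, loosely analogous to the cross-path interaction described above. Module and layer names are assumptions; the authors' actual implementation is in the linked repository.

```python
# Hypothetical dual-path block with cross-path information exchange (not the
# authors' MFII); see https://github.com/duweidai/I2U-Net for the real code.
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv_a = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_b = nn.Conv2d(channels, channels, 3, padding=1)
        # 1x1 convolutions let each path read the other's features
        self.exchange_ab = nn.Conv2d(channels, channels, 1)
        self.exchange_ba = nn.Conv2d(channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        a = self.act(self.conv_a(feat_a))
        b = self.act(self.conv_b(feat_b))
        # Each path re-uses the other's features, in the spirit of an
        # unfolded RNN passing state between steps.
        return a + self.exchange_ba(b), b + self.exchange_ab(a)

# usage
x1 = torch.randn(1, 32, 64, 64)
x2 = torch.randn(1, 32, 64, 64)
y1, y2 = DualPathBlock(32)(x1, x2)
```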

2.
J Appl Clin Med Phys ; 24(7): e13964, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36929569

ABSTRACT

BACKGROUND: Automatically assessing the malignant status of lung nodules from CT scan images can help reduce the workload of radiologists while improving their diagnostic accuracy. PURPOSE: Despite remarkable progress in the automatic diagnosis of pulmonary nodules with deep learning technologies, two significant problems remain outstanding. First, end-to-end deep learning solutions tend to neglect the empirical (semantic) features accumulated by radiologists and rely only on automatic features discovered by neural networks to produce the final diagnostic results, leading to questionable reliability and interpretability. Second, inconsistent diagnoses between radiologists, a widely acknowledged phenomenon in clinical settings, are rarely examined and quantitatively explored by existing machine learning approaches. This paper addresses both problems. METHODS: We propose a novel deep neural network called MS-Net, which comprises two sequential modules: a feature derivation and initial diagnosis module (FDID), followed by a diagnosis refinement (DR) module. Specifically, to take advantage of both accumulated empirical features and discovered automatic features, the FDID module of MS-Net first derives a range of perceptible features and provides two initial diagnoses for lung nodules; these results are then fed to the subsequent DR module to further refine the diagnoses. In addition, to fully consider both individual and panel diagnostic opinions, we propose a new loss function, called collaborative loss, which collaboratively optimizes an individual radiologist's opinion and those of her peers to provide a more accurate diagnosis. RESULTS: We evaluate the performance of the proposed MS-Net on the Lung Image Database Consortium image collection (LIDC-IDRI). It achieves an accuracy of 92.4%, a sensitivity of 92.9%, and a specificity of 92.0% when panel labels are used as the ground truth, which is superior to other state-of-the-art diagnosis models. As a byproduct, MS-Net can automatically derive a range of semantic features of lung nodules, increasing the interpretability of the final diagnoses. CONCLUSIONS: The proposed MS-Net provides automatic and accurate diagnosis of lung nodules, meeting the need for a reliable computer-aided diagnosis system in clinical practice.
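As a rough illustration of the idea behind the collaborative loss, the sketch below combines a cross-entropy term on the panel (consensus) label with the mean cross-entropy over individual radiologists' labels. The weighting scheme and function names are assumptions, not the exact loss defined in the paper.

```python
# Hedged sketch of a collaborative loss: penalize disagreement with the panel
# label and with each individual radiologist's label. The alpha weighting is
# an assumption for illustration only.
import torch
import torch.nn.functional as F

def collaborative_loss(logits, individual_labels, panel_label, alpha=0.5):
    """logits: (B, C); individual_labels: list of (B,) label tensors; panel_label: (B,)."""
    panel_term = F.cross_entropy(logits, panel_label)
    individual_term = torch.stack(
        [F.cross_entropy(logits, y) for y in individual_labels]
    ).mean()
    return alpha * panel_term + (1 - alpha) * individual_term
```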


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Reproducibility of Results; Tomography, X-Ray Computed/methods; Lung/pathology; Radiologists; Solitary Pulmonary Nodule/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods
3.
Med Image Anal ; 85: 102745, 2023 04.
Article in English | MEDLINE | ID: mdl-36630869

ABSTRACT

Automatic segmentation of coronary arteries provides vital assistance for accurate and efficient diagnosis and evaluation of coronary artery disease (CAD). However, coronary artery segmentation (CAS) remains highly challenging due to the large-scale variations exhibited by coronary arteries, their complicated anatomical structures and morphologies, and the low contrast between vessels and their background. To comprehensively tackle these challenges, we propose a novel multi-attention, multi-scale 3D deep network for CAS, which we call CAS-Net. Specifically, we first propose an attention-guided feature fusion (AGFF) module that efficiently fuses adjacent hierarchical features in the encoding and decoding stages to capture latent semantic information more effectively. We then propose a scale-aware feature enhancement (SAFE) module, which dynamically adjusts the receptive fields to extract more expressive features, thereby enhancing the feature representation capability of the network. Furthermore, we employ a multi-scale feature aggregation (MSFA) module to learn a more distinctive semantic representation for refining the vessel maps. In addition, considering that the limited amount of training data annotated with a high-quality gold standard is also a significant factor restricting the development of CAS, we construct a new dataset containing 119 cases of coronary computed tomographic angiography (CCTA) volumes with annotated coronary arteries. Extensive experiments on our self-collected dataset and three publicly available datasets demonstrate that the proposed method has good segmentation performance and generalization ability, outperforming multiple state-of-the-art algorithms on various metrics. Compared with U-Net3D, the proposed method significantly improves the Dice similarity coefficient (DSC) by at least 4% on each dataset, owing to the synergistic effect among the three core modules, AGFF, SAFE, and MSFA. Our implementation is released at https://github.com/Cassie-CV/CAS-Net.
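The Dice similarity coefficient (DSC) quoted in the comparison with U-Net3D is the standard overlap metric; a minimal NumPy implementation for binary vessel masks is shown below.

```python
# Standard Dice similarity coefficient for binary masks:
# DSC = 2 * |P ∩ G| / (|P| + |G|)
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```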


Subjects
Algorithms; Coronary Vessels; Humans; Angiography; Benchmarking; Attention; Image Processing, Computer-Assisted
4.
Comput Biol Med ; 152: 106321, 2023 01.
Article in English | MEDLINE | ID: mdl-36463792

ABSTRACT

Automatic segmentation and classification of lesions are two clinically significant tasks in the computer-aided diagnosis of skin diseases. Both tasks are challenging due to the non-negligible lesion differences among dermoscopic images from different patients. In this paper, we propose a novel pipeline to efficiently perform skin lesion segmentation and classification, consisting of a segmentation network and a classification network. To improve the performance of the segmentation network, we propose a Multi-Scale Holistic Feature Exploration (MSH) module to thoroughly exploit perceptual clues latent among the multi-scale feature maps synthesized by the decoder. The MSH module enables holistic exploration of features across multiple scales to more effectively support downstream image analysis. To boost the performance of the classification network, we propose a Cross-Modality Collaborative Feature Exploration (CMC) module to discover latent discriminative features by collaboratively exploiting potential relationships between cross-modal features of dermoscopic images and clinical metadata. The CMC module dynamically captures versatile interaction effects among cross-modal features during the model's representation learning by discriminatively and adaptively learning the interaction weight associated with each cross-modality feature pair. In addition, to effectively reduce background noise and boost the lesion discrimination ability of the classification network, we crop the images based on lesion masks generated by the best segmentation model. We evaluate the proposed pipeline on four public skin lesion datasets, where ISIC 2018 and PH2 are used for segmentation, and ISIC 2019 and ISIC 2020 are combined into a new dataset, ISIC 2019&2020, for classification. The pipeline achieves Jaccard indices of 83.31% and 90.14% in skin lesion segmentation, and an AUC of 97.98% and an accuracy of 92.63% in skin lesion classification, which is superior to the performance of representative state-of-the-art skin lesion segmentation and classification methods. Last but not least, the new segmentation model uses far fewer parameters (3.3 M) than its peer approaches, greatly reducing the number of labeled samples required for model training and yielding substantially stronger robustness than its peers.
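The mask-guided cropping step can be illustrated with a short sketch: the dermoscopic image is cropped to the bounding box of the predicted lesion mask (plus a margin) before being passed to the classification network. The margin value and function name below are assumptions, not taken from the paper.

```python
# Hedged sketch of mask-guided cropping before classification: keep the
# bounding box of the predicted lesion (plus a margin) to suppress background.
import numpy as np

def crop_to_mask(image: np.ndarray, mask: np.ndarray, margin: int = 16) -> np.ndarray:
    """image: (H, W, 3); mask: (H, W) binary lesion mask from the segmentation model."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:  # no lesion predicted: keep the full image
        return image
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, mask.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, mask.shape[1])
    return image[y0:y1, x0:x1]
```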


Subjects
Metadata; Skin Diseases; Humans; Dermoscopy/methods; Skin Diseases/diagnostic imaging; Skin/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
5.
Med Image Anal ; 82: 102623, 2022 11.
Article in English | MEDLINE | ID: mdl-36179379

ABSTRACT

Medical image segmentation methods based on deep learning have made remarkable progress. However, such methods are sensitive to the data distribution, so even slight domain shifts cause a decline in performance in practical applications. To relieve this problem, many domain adaptation methods learn domain-invariant representations by alignment or adversarial training while ignoring domain-specific representations. In response to this issue, this paper rethinks the traditional domain adaptation framework and proposes a novel orthogonal decomposition adversarial domain adaptation (ODADA) architecture for medical image segmentation. The main idea behind the proposed ODADA model is to decompose the input features into domain-invariant and domain-specific representations and then use a newly designed orthogonal loss function to encourage their independence. Furthermore, we propose a two-step optimization strategy to extract domain-invariant representations by separating out domain-specific representations, thereby mitigating the performance degradation caused by domain shifts. Encouragingly, the proposed ODADA framework is plug-and-play and can replace the traditional adversarial domain adaptation module. The proposed method has consistently demonstrated its effectiveness through comprehensive experiments on three publicly available datasets: a cross-site prostate segmentation dataset, a cross-site COVID-19 lesion segmentation dataset, and a cross-modality cardiac segmentation dataset. The source code is available at https://github.com/YonghengSun1997/ODADA.
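The abstract does not give the orthogonal loss explicitly; a common way to encourage independence between two feature sets, sketched below under that assumption, is to penalize the squared Frobenius norm of their cross-correlation. The released code at the URL above contains the authors' actual formulation.

```python
# Hedged sketch of an orthogonality penalty between domain-invariant and
# domain-specific features (not necessarily ODADA's exact loss).
import torch
import torch.nn.functional as F

def orthogonal_loss(feat_invariant: torch.Tensor, feat_specific: torch.Tensor) -> torch.Tensor:
    """Both inputs: (B, D) feature matrices for a batch of samples."""
    fi = F.normalize(feat_invariant, dim=1)
    fs = F.normalize(feat_specific, dim=1)
    cross = fi.t() @ fs          # (D, D) cross-correlation between the two parts
    return (cross ** 2).sum()    # small when the two subspaces are orthogonal
```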


Subjects
COVID-19; Humans; Image Processing, Computer-Assisted/methods
6.
Med Image Anal ; 75: 102293, 2022 01.
Article in English | MEDLINE | ID: mdl-34800787

ABSTRACT

Computer-Aided Diagnosis (CAD) for dermatological diseases offers one of the most notable showcases where deep learning technologies display impressive performance in approaching and surpassing human experts. In such a CAD process, a critical step is segmenting skin lesions from dermoscopic images. Despite the remarkable successes attained by recent deep learning efforts, much improvement is still anticipated for challenging cases, e.g., segmenting lesions that are irregularly shaped, have low contrast, or possess blurry boundaries. To address these inadequacies, this study proposes a novel Multi-scale Residual Encoding and Decoding network (Ms RED) for skin lesion segmentation, which is able to segment a variety of lesions accurately, reliably, and efficiently. Specifically, a multi-scale residual encoding fusion module (MsR-EFM) is employed in the encoder, and a multi-scale residual decoding fusion module (MsR-DFM) is applied in the decoder to fuse multi-scale features adaptively. In addition, to enhance the representation learning capability of the newly proposed pipeline, we propose a novel multi-resolution, multi-channel feature fusion module (M2F2), which replaces the conventional convolutional layers in the encoder and decoder networks. Furthermore, we introduce a pooling module (Soft-pool) to medical image segmentation for the first time, which retains more useful information during down-sampling and yields better segmentation performance. To validate the effectiveness and advantages of the proposed network, we compare it with several state-of-the-art methods on ISIC 2016, 2017, 2018, and PH2. Experimental results consistently demonstrate that the proposed Ms RED attains significantly superior segmentation performance across five widely used evaluation criteria. Last but not least, the new model uses far fewer parameters than its peer approaches, greatly reducing the number of labeled samples required for model training, which in turn produces substantially faster training convergence than its peers. The source code is available at https://github.com/duweidai/Ms-RED.
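As a hedged illustration of softmax-weighted pooling in the spirit of Soft-pool, the sketch below weights each activation by its exponential within a 2x2 window, so strong responses are preserved better than with plain average pooling during down-sampling. Ms RED's exact module may differ and is available in the released code.

```python
# Softmax-weighted 2x2 pooling sketch: output = sum(x * e^x) / sum(e^x) per window.
import torch
import torch.nn.functional as F

def soft_pool2d(x: torch.Tensor, kernel_size: int = 2, stride: int = 2) -> torch.Tensor:
    """x: (B, C, H, W). Returns the softmax-weighted average over each pooling window."""
    weights = torch.exp(x)
    pooled_num = F.avg_pool2d(x * weights, kernel_size, stride)
    pooled_den = F.avg_pool2d(weights, kernel_size, stride)
    return pooled_num / (pooled_den + 1e-7)
```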


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Diagnosis, Computer-Assisted; Disease Progression; Humans; Software
7.
J Med Internet Res ; 24(1): e32394, 2022 01 21.
Article in English | MEDLINE | ID: mdl-34878410

ABSTRACT

BACKGROUND: Due to the urgency caused by the worldwide COVID-19 pandemic, vaccine manufacturers have had to shorten and parallelize development steps to accelerate COVID-19 vaccine production. Although all the usual safety and efficacy monitoring mechanisms remain in place, varied attitudes toward the new vaccines have arisen among different population groups. OBJECTIVE: This study aimed to discern the evolution of, and disparities in, attitudes toward COVID-19 vaccines among various population groups through the study of large-scale tweets spanning a whole year. METHODS: We collected over 1.4 billion tweets from June 2020 to July 2021, covering several critical phases of the development and inoculation of COVID-19 vaccines worldwide. We first developed a data mining model that incorporates a series of deep learning algorithms for inferring a range of individual characteristics, both in reality and in cyberspace, as well as the sentiments and emotions expressed in tweets. We then conducted an observational study, including an overall analysis, a longitudinal study, and a cross-sectional study, to collectively explore the attitudes of major population groups. RESULTS: Our study yielded 3 main findings. First, the whole population's attentiveness toward vaccines was strongly correlated (Pearson r=0.9512) with official COVID-19 statistics, including confirmed cases and deaths. Such attentiveness was also noticeably influenced by major vaccine-related events. Second, after the beginning of large-scale vaccine inoculation, the sentiments of all population groups stabilized, followed by a considerably pessimistic trend after June 2021. Third, attitude disparities toward vaccines existed among population groups defined by 8 different demographic characteristics. By crossing the 2 dimensions of attitude, we found that among population groups carrying low sentiments, some had high attentiveness ratios, such as males and individuals aged ≥40 years, while some had low attentiveness ratios, such as individuals aged ≤18 years, those with occupations of the 3rd category, those with an account age of <5 years, and those with fewer than 500 followers. These findings can serve as a guide in deciding who should be given more attention and what kinds of support should be provided to alleviate concerns about vaccines. CONCLUSIONS: This study tracked the year-long evolution of attitudes toward COVID-19 vaccines among various population groups defined by 8 demographic characteristics, revealing significant disparities in attitudes along multiple dimensions. Based on these findings, governments and public health organizations should provide targeted interventions to address different concerns, especially among males, older people, and other individuals with low levels of education, low awareness of news, low income, and light use of social media. Moreover, public health authorities may consider cooperating with Twitter users who have high levels of social influence to promote the acceptance of COVID-19 vaccines among all population groups.
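The reported Pearson r between daily attentiveness and official statistics is a standard correlation; the short sketch below shows how such a value could be computed on daily series. The data here are synthetic placeholders, not the study's data.

```python
# Pearson correlation between a daily attentiveness series and daily case counts,
# computed on synthetic placeholder data for illustration only.
import numpy as np

rng = np.random.default_rng(0)
confirmed_cases = rng.poisson(lam=50_000, size=365).astype(float)       # placeholder daily cases
attentiveness = 0.02 + 1e-7 * confirmed_cases + rng.normal(0, 0.001, 365)  # placeholder tweet share

r = np.corrcoef(attentiveness, confirmed_cases)[0, 1]
print(f"Pearson r = {r:.4f}")
```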


Subjects
COVID-19; Social Media; Aged; Attitude; COVID-19 Vaccines; Child, Preschool; Cross-Sectional Studies; Humans; Longitudinal Studies; Male; Pandemics; SARS-CoV-2