Results 1 - 3 of 3
1.
Comput Biol Med; 177: 108670, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38838558

ABSTRACT

No-reference image quality assessment (IQA) is a critical step in medical image analysis, with the objective of predicting perceptual image quality without the need for a pristine reference image. Applying no-reference IQA to CT scans is valuable as an automated, objective approach to assessing scan quality, optimizing radiation dose, and improving overall healthcare efficiency. In this paper, we introduce DistilIQA, a novel distilled Vision Transformer network designed for no-reference CT image quality assessment. DistilIQA integrates convolutional operations and multi-head self-attention mechanisms by incorporating a powerful convolutional stem at the beginning of the traditional ViT network. Additionally, we present a two-step distillation methodology aimed at improving network performance and efficiency. In the first step, a "teacher ensemble network" is constructed by training five Vision Transformer networks using a five-fold data-split scheme. In the second step, a "student network", comprising a single Vision Transformer, is trained using the original labeled dataset together with the predictions generated by the teacher network as new labels. DistilIQA is evaluated on the task of quality-score prediction from low-dose chest CT scans obtained from the LDCT and Projection data collection of the Cancer Imaging Archive, along with low-dose abdominal CT images from the LDCTIQAC2023 Grand Challenge. Our results demonstrate DistilIQA's strong performance on both benchmarks, surpassing a variety of CNN and Transformer architectures. Moreover, our comprehensive experimental analysis demonstrates the effectiveness of incorporating convolutional operations within the ViT architecture and highlights the advantages of our distillation methodology.
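The two-step distillation above is simple enough to sketch in code. The following is a minimal PyTorch illustration of the idea only, not the paper's implementation: the TinyViTRegressor class, the stem dimensions, the five dummy teachers, and the 0.5 loss weight are all assumptions chosen for brevity.

```python
# Minimal sketch of teacher-ensemble distillation for quality-score regression.
# Everything here (model, sizes, loss weight, dummy data) is illustrative.
import torch
import torch.nn as nn

class TinyViTRegressor(nn.Module):
    """Stand-in for a Vision Transformer quality-score regressor."""
    def __init__(self, dim=64):
        super().__init__()
        # Convolutional stem: hypothetical, echoing the paper's idea of
        # prepending convolutions to the ViT.
        self.stem = nn.Sequential(
            nn.Conv2d(1, dim, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
        )
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        x = self.stem(x)                  # (B, dim, H', W')
        x = x.flatten(2).transpose(1, 2)  # (B, H'*W', dim) token sequence
        x = self.encoder(x).mean(dim=1)   # global average over tokens
        return self.head(x).squeeze(-1)   # scalar quality score per image

def distillation_step(student, teachers, images, labels, alpha=0.5):
    """One student update: mix ground-truth labels with the teacher
    ensemble's averaged predictions used as soft labels."""
    with torch.no_grad():
        soft = torch.stack([t(images) for t in teachers]).mean(dim=0)
    pred = student(images)
    loss = alpha * nn.functional.mse_loss(pred, labels) \
         + (1 - alpha) * nn.functional.mse_loss(pred, soft)
    return loss

# Usage on dummy data:
teachers = [TinyViTRegressor().eval() for _ in range(5)]  # 5-fold teacher ensemble
student = TinyViTRegressor()
images = torch.randn(8, 1, 64, 64)   # batch of single-channel CT-like patches
labels = torch.rand(8)               # quality scores in [0, 1]
loss = distillation_step(student, teachers, images, labels)
loss.backward()
```

The key design point is the second loss term: averaging the fold-wise teachers produces smoothed soft labels, so the single student inherits some of the ensemble's robustness without paying its inference cost.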


Subjects
Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Neural Networks, Computer
2.
Artif Intell Med; 119: 102154, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34531013

ABSTRACT

Deep learning plays a critical role in medical image segmentation. Nevertheless, manually designing a neural network for a specific segmentation problem is a difficult and time-consuming task due to the massive hyperparameter search space, long training times, and large volumetric data. As a result, most hand-designed networks are highly complex, task-specific, and over-parametrized. Recently, multiobjective neural architecture search (NAS) methods have been proposed to automate the design of accurate and efficient segmentation architectures. However, they search for only the micro- or only the macro-structure of the architecture, do not use the information produced during the optimization process to increase the efficiency of the search, or do not consider the volumetric nature of medical images. In this work, we present EMONAS-Net, an Efficient MultiObjective NAS framework for 3D medical image segmentation that optimizes both the segmentation accuracy and the size of the network. EMONAS-Net has two key components: a novel search space that covers the configuration of both the micro- and macro-structure of the architecture, and a Surrogate-assisted Multiobjective Evolutionary-based Algorithm (SaMEA) that efficiently searches for the best hyperparameter values. The SaMEA algorithm uses the information collected during the initial generations of the evolutionary process to identify the most promising subproblems and to select the best-performing hyperparameter values during mutation, improving convergence speed. Furthermore, a Random Forest surrogate model is incorporated to accelerate the fitness evaluation of candidate architectures. EMONAS-Net is tested on prostate segmentation from the MICCAI PROMISE12 challenge, hippocampus segmentation from the Medical Segmentation Decathlon challenge, and cardiac segmentation from the MICCAI ACDC challenge. In all three benchmarks, the proposed framework finds architectures that perform better than, or comparably to, competing state-of-the-art NAS methods while being considerably smaller and reducing the architecture search time by more than 50%.
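As a rough intuition for how a surrogate can cut evaluation cost, the toy loop below sketches surrogate-assisted evolutionary search in Python with scikit-learn. It is deliberately simplified to a single objective: the real SaMEA is multiobjective and trains actual segmentation networks, whereas the true_fitness function, the mutation operator, and all population sizes here are placeholder assumptions.

```python
# Hedged sketch of a surrogate-assisted evolutionary search loop, in the
# spirit of (but much simpler than) the SaMEA algorithm described above.
import random
from sklearn.ensemble import RandomForestRegressor

def true_fitness(genome):
    """Placeholder for an expensive evaluation (train + validate a network).
    Here: a cheap synthetic function standing in for segmentation error."""
    return sum((g - 0.3) ** 2 for g in genome)

def mutate(genome, rate=0.3):
    """Perturb each gene with probability `rate`, clipped to [0, 1]."""
    return [min(1.0, max(0.0, g + random.uniform(-0.2, 0.2)))
            if random.random() < rate else g for g in genome]

random.seed(0)
dims, pop_size, generations = 6, 10, 20
population = [[random.random() for _ in range(dims)] for _ in range(pop_size)]
archive_x, archive_y = [], []

# Warm-up: evaluate the initial population exactly and archive it for the surrogate.
for g in population:
    archive_x.append(g); archive_y.append(true_fitness(g))

surrogate = RandomForestRegressor(n_estimators=50, random_state=0)
for gen in range(generations):
    surrogate.fit(archive_x, archive_y)
    # Generate candidates and rank them by the cheap surrogate prediction...
    candidates = [mutate(random.choice(population)) for _ in range(30)]
    ranked = sorted(candidates, key=lambda c: surrogate.predict([c])[0])
    # ...but spend the expensive evaluation only on the most promising few.
    for cand in ranked[:3]:
        archive_x.append(cand); archive_y.append(true_fitness(cand))
    # Survivor selection over everything evaluated so far.
    best = sorted(zip(archive_x, archive_y), key=lambda p: p[1])[:pop_size]
    population = [g for g, _ in best]

print("best fitness found:", min(archive_y))
```

The pattern to notice is the asymmetry: the Random Forest ranks many cheap candidates, and the expensive evaluation is spent only on the few the surrogate ranks best, which is what accelerates the search.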


Subjects
Imaging, Three-Dimensional; Neural Networks, Computer; Algorithms; Humans; Image Processing, Computer-Assisted; Male
3.
Neural Netw; 126: 76-94, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32203876

ABSTRACT

Fully Convolutional Networks (FCNs) have emerged as powerful segmentation models but are usually designed manually, which requires extensive time and can result in large, complex architectures. There is growing interest in automatically designing efficient architectures that can accurately segment 3D medical images. However, most approaches either do not fully exploit volumetric information or do not optimize the model's size. To address these problems, we propose AdaEn-Net, a self-adaptive 2D-3D ensemble of FCNs for 3D medical image segmentation that incorporates volumetric data and adapts to a particular dataset by optimizing both the model's performance and its size. The AdaEn-Net consists of a 2D FCN that extracts intra-slice information and a 3D FCN that exploits inter-slice information. The architectures and hyperparameters of the 2D and 3D networks are found through a multiobjective evolutionary-based algorithm that maximizes the expected segmentation accuracy and minimizes the number of parameters in the network. The main contribution of this work is a model that fully exploits volumetric information and automatically searches for a high-performing and efficient architecture. The AdaEn-Net was evaluated for prostate segmentation on the PROMISE12 Grand Challenge and for cardiac segmentation on the MICCAI ACDC challenge. In the first challenge, the AdaEn-Net ranks 9th out of 297 submissions and surpasses the performance of an automatically generated segmentation network while producing an architecture with 13× fewer parameters. In the second challenge, the proposed model ranks within the top 8 submissions and outperforms an architecture designed with reinforcement learning while having 1.25× fewer parameters.
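To make the 2D-3D ensembling concrete, here is a minimal PyTorch sketch assuming tiny stand-in FCNs and equal-weight probability averaging; in AdaEn-Net itself, the 2D and 3D architectures (and how they are combined) are the ones found by the evolutionary search, so everything below is illustrative only.

```python
# Minimal sketch of a 2D-3D ensemble: a slice-wise 2D FCN and a volumetric
# 3D FCN whose class-probability maps are averaged. Tiny nets and 0.5/0.5
# weights are illustrative assumptions.
import torch
import torch.nn as nn

class Tiny2DFCN(nn.Module):
    def __init__(self, classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, classes, 1),
        )
    def forward(self, x):  # x: (B, 1, H, W)
        return self.net(x)

class Tiny3DFCN(nn.Module):
    def __init__(self, classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, classes, 1),
        )
    def forward(self, x):  # x: (B, 1, D, H, W)
        return self.net(x)

def ensemble_segment(vol, fcn2d, fcn3d):
    """vol: (B, 1, D, H, W). Run the 2D FCN on each slice (intra-slice cues)
    and the 3D FCN on the whole volume (inter-slice cues), then average."""
    b, c, d, h, w = vol.shape
    slices = vol.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
    logits2d = fcn2d(slices).reshape(b, d, -1, h, w).permute(0, 2, 1, 3, 4)
    logits3d = fcn3d(vol)
    probs = 0.5 * torch.softmax(logits2d, dim=1) + 0.5 * torch.softmax(logits3d, dim=1)
    return probs.argmax(dim=1)  # (B, D, H, W) label volume

vol = torch.randn(1, 1, 8, 32, 32)            # dummy single-channel volume
mask = ensemble_segment(vol, Tiny2DFCN(), Tiny3DFCN())
print(mask.shape)                              # torch.Size([1, 8, 32, 32])
```

Running the 2D network slice by slice captures intra-slice detail at low cost, while the 3D network sees inter-slice context; averaging their softmax maps is one simple way to fuse the two.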


Subjects
Image Enhancement/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Humans