Results 1 - 5 of 5
1.
Sensors (Basel); 23(5), 2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36905009

ABSTRACT

The aim of this study was to evaluate the feasibility of a noninvasive and low-operator-dependent imaging method for carotid artery stenosis diagnosis. A previously developed prototype for 3D ultrasound scans, based on a standard ultrasound machine and a pose-reading sensor, was used for this study. Working in a 3D space and processing data using automatic segmentation lowers operator dependency. Additionally, ultrasound imaging is a noninvasive diagnosis method. Artificial intelligence (AI)-based automatic segmentation of the acquired data was performed for the reconstruction and visualization of the scanned area: the carotid artery wall, the circulated lumen of the carotid artery, soft plaque, and calcified plaque. A qualitative evaluation was conducted by comparing the US reconstruction results with CT angiographies of healthy patients and patients with carotid artery disease. The overall scores for the automated segmentation using the MultiResUNet model, across all segmented classes in our study, were 0.80 for IoU and 0.94 for Dice. The present study demonstrated the potential of the MultiResUNet-based model for the automated segmentation of 2D ultrasound images for atherosclerosis diagnosis purposes. Using 3D ultrasound reconstructions may help operators achieve better spatial orientation and evaluation of the segmentation results.
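Since this abstract reports per-class IoU and Dice scores, a minimal sketch of how these overlap metrics are typically computed from labelled masks may be useful; the class labels in the comments below are illustrative assumptions, not the study's actual encoding.

```python
# Minimal sketch (not the authors' code): per-class IoU and Dice between a
# predicted segmentation mask and a ground-truth mask, both integer-labelled.
# Assumed labels for illustration: 0 = background, 1 = artery wall,
# 2 = lumen, 3 = soft plaque, 4 = calcified plaque.
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray, num_classes: int):
    """Return {class_id: (IoU, Dice)} for two integer-labelled masks."""
    scores = {}
    for c in range(num_classes):
        p = pred == c
        t = truth == c
        intersection = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        total = p.sum() + t.sum()
        iou = intersection / union if union else float("nan")
        dice = 2 * intersection / total if total else float("nan")
        scores[c] = (iou, dice)
    return scores
```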


Subjects
Artificial Intelligence; Computed Tomography Angiography; Humans; Thyroid Gland; Carotid Arteries/diagnostic imaging; Ultrasonography/methods; Intelligence; Imaging, Three-Dimensional/methods
2.
Sensors (Basel); 22(19), 2022 Sep 20.
Article in English | MEDLINE | ID: mdl-36236200

ABSTRACT

This research aimed to evaluate Mask R-CNN and U-Net convolutional neural network models for pixel-level classification, in order to perform the automatic segmentation of two-dimensional ultrasound images of dental arches and identify the anatomical elements required for periodontal diagnosis. A secondary aim was to evaluate the efficiency of a method that uses 3D ultrasound reconstructions of the examined periodontal tissue to correct the operator-segmented ground-truth masks, improving the quality of the datasets used for training the neural network models. METHODS: Ultrasound periodontal investigations were performed on 52 teeth of 11 patients using a 3D ultrasound scanner prototype. The original ultrasound images were segmented by an operator with limited experience, using region-growing-based segmentation algorithms. Three-dimensional ultrasound reconstructions were used for the quality check and correction of the segmentation. Mask R-CNN and U-Net were trained and used to identify the periodontal tissue elements. RESULTS: The average Intersection over Union ranged between 10% for the periodontal pocket and 75.6% for the gingiva. Even though the original dataset contained 3417 images from 11 patients and the corrected dataset only 2135 images from 5 patients, the prediction accuracy was significantly better for the models trained with the corrected dataset. CONCLUSIONS: The proposed quality-check and correction method, which evaluates the operator's ground-truth segmentation in 3D space, had a positive impact on the quality of the datasets, as demonstrated by the higher IoU obtained after retraining the models on the corrected dataset.
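The initial operator segmentation is described as relying on region-growing-based algorithms; the sketch below shows one simple seeded region-growing variant for a 2D grayscale image. The intensity tolerance and 4-connectivity are illustrative assumptions, not the prototype's actual parameters.

```python
# Hedged sketch of seeded region growing: starting from a seed pixel, add
# 4-connected neighbours whose intensity stays within `tol` of the seed value.
from collections import deque
import numpy as np

def region_grow(image: np.ndarray, seed: tuple, tol: float = 10.0) -> np.ndarray:
    """Return a boolean mask grown from `seed` (row, col) over `image`."""
    mask = np.zeros(image.shape, dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```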


Subjects
Artificial Intelligence; Neural Networks, Computer; Algorithms; Humans; Image Processing, Computer-Assisted/methods; Ultrasonography
3.
Sensors (Basel); 22(9), 2022 Apr 27.
Article in English | MEDLINE | ID: mdl-35591048

ABSTRACT

The aim of this study was to develop and evaluate a 3D ultrasound scanning method. The main requirements were a freehand scanner architecture and high accuracy of the reconstructions. A quantitative evaluation of a freehand 3D ultrasound scanner prototype was performed by comparing the ultrasonographic reconstructions with the CAD (computer-aided design) model of the scanned object, to determine the accuracy of the result. For six consecutive scans, the 3D ultrasonographic reconstructions were scaled and aligned with the model. The mean distance between the 3D objects ranged from 0.019 mm to 0.05 mm, and the standard deviation from 0.287 mm to 0.565 mm. Despite some inherent limitations of our study, the quantitative evaluation of the 3D ultrasonographic reconstructions showed results comparable to other studies performed on smaller areas of the scanned objects, demonstrating the future potential of the developed prototype.
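Assuming the reconstruction has already been scaled and rigidly aligned to the CAD model, the reported mean distance and standard deviation can be approximated as nearest-neighbour point-to-point distances, as in this hedged sketch (not the study's actual evaluation code); both surfaces are assumed to be sampled as (N, 3) point arrays in millimetres.

```python
# Illustrative sketch: point-to-nearest-point distance statistics between an
# aligned ultrasound reconstruction and the CAD reference, using a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

def surface_distance_stats(reconstruction: np.ndarray, cad_model: np.ndarray):
    """Mean and standard deviation of nearest-neighbour distances (mm)."""
    distances, _ = cKDTree(cad_model).query(reconstruction)
    return distances.mean(), distances.std()
```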


Subjects
Imaging, Three-Dimensional; Imaging, Three-Dimensional/methods; Phantoms, Imaging; Ultrasonography
4.
Biology (Basel); 9(11), 2020 Nov 12.
Article in English | MEDLINE | ID: mdl-33198415

ABSTRACT

Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related deaths worldwide, and its mortality rate is correlated with tumor staging; i.e., early detection and treatment are important factors for patient survival. This paper presents the development of a novel visualization and detection system for HCC, which is a component module of a robotic system for the targeted treatment of HCC. The system has two modules: one for tumor visualization, which uses image fusion (IF) between preoperatively acquired computed tomography (CT) and real-time ultrasound (US), and a second module for automatic HCC detection from CT images. Convolutional neural networks (CNNs), trained on 152 contrast-enhanced CT images, are used for tumor segmentation. Probabilistic maps are shown, as well as a 3D representation of the HCC within the liver tissue. The development of the visualization and detection system represents a milestone in testing the feasibility of a novel robotic system for the targeted treatment of HCC. Further optimizations of the tumor visualization and detection system are planned, with the aim of introducing more relevant functions and increasing its accuracy.
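As a rough illustration of the probabilistic maps mentioned above, the following hypothetical sketch turns per-pixel CNN logits into an HCC probability map and a binary tumor mask; the class ordering and the 0.5 threshold are assumptions for illustration, not the paper's actual post-processing.

```python
# Hypothetical post-processing sketch: softmax over per-pixel class logits,
# then thresholding the tumour-class probability into a binary mask.
import numpy as np

def logits_to_probability_map(logits: np.ndarray) -> np.ndarray:
    """Softmax over the class axis of a (num_classes, H, W) logit volume;
    returns the HCC probability map, assuming class index 1 is 'tumour'."""
    exp = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs = exp / exp.sum(axis=0, keepdims=True)
    return probs[1]

def probability_to_mask(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarise the probability map into a tumour mask."""
    return prob_map >= threshold
```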

5.
Sensors (Basel); 20(11), 2020 May 29.
Article in English | MEDLINE | ID: mdl-32485986

ABSTRACT

The emergence of deep-learning methods in different computer vision tasks has proved to offer increased detection, recognition, or segmentation accuracy when large annotated image datasets are available. In the case of medical image processing and computer-aided diagnosis within ultrasound images, where the amount of available annotated data is smaller, a natural question arises: are deep-learning methods better than conventional machine-learning methods? How do the conventional machine-learning methods behave in comparison with deep-learning methods on the same dataset? Based on a study of various deep-learning architectures, a lightweight multi-resolution Convolutional Neural Network (CNN) architecture is proposed. It is suitable for differentiating, within ultrasound images, between hepatocellular carcinoma (HCC) and the cirrhotic parenchyma (PAR) on which the HCC has evolved. The proposed deep-learning model is compared with other CNN architectures adapted by transfer learning for the ultrasound binary classification task, as well as with conventional machine-learning (ML) solutions trained on textural features. The results show that the deep-learning approach outperforms the classical machine-learning solutions by providing higher classification performance.
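The conventional baseline described above relies on textural features; a hedged sketch of that kind of pipeline, using GLCM descriptors and an SVM, is shown below. The specific features, GLCM parameters, and classifier are illustrative assumptions rather than the paper's actual setup.

```python
# Sketch of a texture-feature baseline: GLCM descriptors per ultrasound patch,
# classified with a support vector machine (HCC vs. cirrhotic parenchyma).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Extract a few GLCM texture descriptors from an 8-bit ultrasound patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_texture_svm(patches, labels):
    """Fit an RBF SVM on GLCM features; labels are assumed to encode
    0 = cirrhotic parenchyma (PAR), 1 = HCC (hypothetical encoding)."""
    X = np.vstack([glcm_features(p) for p in patches])
    return SVC(kernel="rbf").fit(X, labels)
```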


Subjects
Carcinoma, Hepatocellular; Deep Learning; Liver Neoplasms; Machine Learning; Ultrasonography; Carcinoma, Hepatocellular/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Liver Neoplasms/diagnostic imaging; Neural Networks, Computer