Results 1 - 20 of 171
1.
Neural Netw ; 181: 106754, 2024 Sep 22.
Article in English | MEDLINE | ID: mdl-39362185

ABSTRACT

Accurate segmentation of thyroid nodules is essential for early screening and diagnosis, but it can be challenging due to the nodules' varying sizes and positions. To address this issue, we propose a multi-attention guided UNet (MAUNet) for thyroid nodule segmentation. We use a multi-scale cross attention (MSCA) module for initial image feature extraction; by integrating interactions between features at different scales, it reduces the impact of thyroid nodule shape and size on the segmentation results. Additionally, we incorporate a dual attention (DA) module into the skip connections of the UNet, which promotes information exchange and fusion between the encoder and decoder. To test the model's robustness and effectiveness, we conduct extensive experiments on multi-center ultrasound images provided by 17 local hospitals. The model is trained using a federated learning mechanism to ensure privacy protection. The experimental results show that the Dice scores of the model on the datasets from the three centers are 0.908, 0.912 and 0.887, respectively. Compared to existing methods, our method demonstrates higher generalization ability on multi-center datasets and achieves better segmentation results.
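Privacy-preserving multi-center training of the kind described above is commonly implemented with federated averaging, in which each hospital trains locally and only model weights are shared. The sketch below shows one such aggregation round under assumptions not stated in the abstract (equal client weighting, a stand-in one-layer model); it illustrates the mechanism, not MAUNet's actual training code.

    import copy
    import torch
    import torch.nn as nn

    def federated_average(global_model, client_states):
        """One round of federated averaging over client model weights.

        Only weights are exchanged, so raw ultrasound images never leave each center.
        Equal client weighting is an assumption for this sketch.
        """
        avg_state = copy.deepcopy(client_states[0])
        for key in avg_state:
            stacked = torch.stack([state[key].float() for state in client_states])
            avg_state[key] = stacked.mean(dim=0)
        global_model.load_state_dict(avg_state)
        return global_model

    # Toy usage with a stand-in segmentation model (a single conv layer).
    model = nn.Conv2d(1, 1, kernel_size=3, padding=1)
    clients = [copy.deepcopy(model) for _ in range(3)]   # imagine each trained locally
    model = federated_average(model, [c.state_dict() for c in clients])
    print(sum(p.numel() for p in model.parameters()))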

2.
BMC Med Imaging ; 24(1): 275, 2024 Oct 11.
Article in English | MEDLINE | ID: mdl-39394589

ABSTRACT

Early screening methods for the thyroid gland include palpation and imaging. Although palpation is relatively simple, its effectiveness in detecting early clinical signs of thyroid disease may be limited, especially in children, whose thyroid has had less time to grow; accurate assessment of the gland is therefore crucial foundational work. However, accurately determining the location and size of the thyroid gland in children is a challenging task. In current clinical practice, accuracy depends on the experience of the ultrasound operator, which leads to subjective results, and agreement on thyroid identification is poor even among experts. To extract sufficient texture information from pediatric thyroid ultrasound images while reducing computational complexity and the number of parameters, this paper designs a novel U-Net-based network called DC-Contrast U-Net, which aims to achieve better segmentation performance with lower complexity in medical image segmentation. The results show that, compared with other U-Net-related segmentation models, the proposed DC-Contrast U-Net achieves higher segmentation accuracy while improving inference speed, making it a promising candidate for future deployment on medical edge devices in clinical practice.
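The abstract does not detail DC-Contrast U-Net's building blocks, but a standard way to cut parameters and computation in a U-Net-style encoder is the depthwise separable convolution sketched below; the block and its channel sizes are illustrative assumptions, not the paper's actual layer.

    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        """Depthwise 3x3 conv followed by a pointwise 1x1 conv: a common low-parameter
        substitute for a standard convolution (sketch only, not the paper's design)."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    block = DepthwiseSeparableConv(32, 64)
    print(block(torch.randn(1, 32, 128, 128)).shape)   # torch.Size([1, 64, 128, 128])
    # Parameter comparison with a standard 3x3 convolution of the same width:
    standard = nn.Conv2d(32, 64, kernel_size=3, padding=1)
    print(sum(p.numel() for p in block.parameters()), sum(p.numel() for p in standard.parameters()))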


Subjects
Thyroid Gland, Ultrasonography, Humans, Ultrasonography/methods, Thyroid Gland/diagnostic imaging, Child, Preschool Child, Computer-Assisted Image Interpretation/methods, Infant, Female, Neural Networks (Computer), Adolescent, Male, Algorithms
3.
Sci Rep ; 14(1): 22754, 2024 10 01.
Article in English | MEDLINE | ID: mdl-39354128

ABSTRACT

Accurate and unbiased classification of breast lesions is pivotal for early diagnosis and treatment, and a deep learning approach can effectively represent and utilize the digital content of images for more precise medical image analysis. Breast ultrasound imaging is useful for detecting and distinguishing benign masses from malignant masses. Based on the different ways in which benign and malignant tumors affect neighboring tissues, i.e., the pattern of growth and border irregularities, the degree of penetration into adjacent tissue, and tissue-level changes, we investigated the relationship between breast cancer imaging features and the roles of intra- and extra-lesional tissues, and their impact on refining the performance of deep learning classification. The novelty of the proposed approach lies in considering the features extracted from the tissue inside the tumor (by performing an erosion operation) and from the lesion and surrounding tissue (by performing a dilation operation) for classification. This study uses these new features and three pre-trained deep neural networks to address the challenge of breast lesion classification in ultrasound images. To improve the classification accuracy and interpretability of the model, the proposed model leverages transfer learning to accelerate the training process. Three modern pre-trained CNN architectures (MobileNetV2, VGG16, and EfficientNet-B7) are used for transfer learning and fine-tuning for optimization. There are concerns about neural networks producing erroneous outputs in the presence of noisy images, variations in input data, or adversarial attacks; thus, the proposed system uses the BUS-BRA database (two classes, benign and malignant) for training and testing and the unseen BUSI database (two classes, benign and malignant) for testing. Extensive experiments recorded accuracy and AUC as performance parameters. The results indicate that the proposed system outperforms the existing breast cancer detection algorithms reported in the literature. AUC values of 1.00 are obtained for VGG16 and EfficientNet-B7 in the dilation cases. The proposed approach will facilitate this challenging and time-consuming classification task.
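The erosion/dilation step that produces the two tissue variants can be sketched with standard morphological operations; the kernel size below is an illustrative assumption, since the abstract does not report the structuring element used.

    import numpy as np
    import cv2

    def lesion_variants(image, lesion_mask, kernel_size=15):
        """Build an eroded-mask crop (tissue inside the tumor only) and a dilated-mask
        crop (lesion plus surrounding tissue). Kernel size is an assumed value."""
        kernel = np.ones((kernel_size, kernel_size), np.uint8)
        eroded = cv2.erode(lesion_mask, kernel)      # shrinks the mask inward
        dilated = cv2.dilate(lesion_mask, kernel)    # grows the mask outward
        inner_only = image * (eroded > 0)            # intra-lesional tissue
        with_margin = image * (dilated > 0)          # lesion plus peri-lesional margin
        return inner_only, with_margin

    # Toy usage with a synthetic grayscale image and a circular lesion mask.
    img = (np.random.rand(256, 256) * 255).astype(np.uint8)
    mask = np.zeros((256, 256), np.uint8)
    cv2.circle(mask, center=(128, 128), radius=50, color=255, thickness=-1)
    inner, margin = lesion_variants(img, mask)
    print(inner.shape, margin.shape)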


Subjects
Breast Neoplasms, Deep Learning, Humans, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Breast Neoplasms/classification, Breast Neoplasms/diagnosis, Female, Neural Networks (Computer), Breast Ultrasonography/methods, Breast/diagnostic imaging, Breast/pathology, Computer-Assisted Image Interpretation/methods, Algorithms
4.
BMC Med Inform Decis Mak ; 24(1): 281, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39354496

ABSTRACT

Polycystic Ovarian Disease, or Polycystic Ovary Syndrome (PCOS), is becoming increasingly common among women, owing to poor lifestyle choices. According to research conducted by the National Institutes of Health, PCOS, an endocrine condition common in women of childbearing age, has become a significant contributing factor to infertility. Ovarian abnormalities brought on by PCOS carry a high risk of miscarriage, infertility, cardiac problems, diabetes, uterine cancer, etc. Ovarian cysts, obesity, menstrual irregularities, elevated levels of male hormones, acne vulgaris, hair loss, and hirsutism are some of the symptoms of PCOS. PCOS is not easy to diagnose because its symptoms appear in different combinations in different women and its diagnosis requires multiple criteria. Biochemical testing and ovarian scanning are time-consuming, and the financial expense is a hardship for patients. Thus, early prognosis of PCOS is crucial to avoid infertility. The goal of the proposed work is to analyse PCOS symptoms based on clinical data for early diagnosis and to classify patients as PCOS-affected or not. To achieve this objective, a clinical features dataset and an ultrasound imaging dataset from Kaggle are utilized. Initially, 541 instances of 45 clinical features such as testosterone, hirsutism, family history, BMI, fast food consumption, menstrual disorder and risk factors are considered, and a correlation-based feature extraction method is applied to this dataset, yielding 17 features. The extracted features are fed to various machine learning algorithms, namely Logistic Regression, Naïve Bayes and Support Vector Machine. The performance of each method is evaluated based on accuracy, precision, recall and F1-score, and the results show that among the three models, the Support Vector Machine achieved the highest accuracy of 94.44%. In addition, 3856 ultrasound images are analysed by a CNN-based deep learning algorithm and a VGG16 transfer learning algorithm. The performance of these models is evaluated using training accuracy and loss as well as validation accuracy and loss, and the results show that VGG16 outperforms the CNN model with a validation accuracy of 98.29%.
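A minimal sketch of the clinical-feature branch (correlation-based selection of 17 features followed by an SVM), run on synthetic stand-in data; the correlation measure and selection rule are assumptions, since the abstract does not specify the exact feature-extraction procedure.

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for the Kaggle clinical dataset (541 rows x 45 features).
    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.normal(size=(541, 45)), columns=[f"feat_{i}" for i in range(45)])
    y = (X["feat_0"] + 0.5 * X["feat_1"] + rng.normal(size=541) > 0).astype(int)

    # Correlation-based feature selection: keep the 17 features most correlated with the label.
    corr = X.apply(lambda col: abs(np.corrcoef(col, y)[0, 1]))
    selected = corr.sort_values(ascending=False).head(17).index
    X_sel = X[selected]

    X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=0)
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    svm.fit(X_tr, y_tr)
    print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))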


Subjects
Polycystic Ovary Syndrome, Humans, Polycystic Ovary Syndrome/diagnosis, Female, Prognosis, Artificial Intelligence, Adult, Ultrasonography
5.
SLAS Technol ; 29(6): 100198, 2024 Oct 11.
Article in English | MEDLINE | ID: mdl-39396733

ABSTRACT

Traditional imaging methods have limitations in the diagnosis of peripheral pulmonary lesions (PPL). The aim of this study is to evaluate the diagnostic value of the distance measurement method based on radial endobronchial ultrasound (rEBUS) combined with transbronchial lung biopsy (TBLB) for peripheral lung lesions. A group of patients with peripheral lung lesions was recruited, and rEBUS examination was performed during TBLB. rEBUS ultrasound images combined with strain information were used to evaluate the morphological characteristics of peripheral lung lesions and the elastic properties of their internal tissues. Compared with pathological examination results, both rEBUS-D-TBLB and rEBUS-GS-TBLB showed a high positive diagnostic rate for PPL under bronchoscopy; however, rEBUS-D-TBLB was more effective than rEBUS-GS-TBLB in diagnosing benign PPL of ≥ 3 cm. The combined rEBUS-TBLB distance measurement method showed high accuracy and sensitivity in diagnosing peripheral lung lesions. The ultrasound images provide clear morphological features of the lesion, while the strain information from rEBUS provides elastic information about its internal tissue, further improving diagnostic accuracy.

6.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(5): 895-902, 2024 Oct 25.
Article in Chinese | MEDLINE | ID: mdl-39462656

ABSTRACT

Existing classification methods for myositis ultrasound images suffer from either poor classification performance or high computational cost. Motivated by this difficulty, a lightweight neural network based on a soft-threshold attention mechanism is proposed for better classification of idiopathic inflammatory myopathies (IIMs). The proposed network is constructed by alternating depthwise separable convolution (DSC) and conventional convolution (CConv). Moreover, a soft-threshold attention mechanism is leveraged to enhance the extraction of key features. Compared with the current dual-branch feature fusion myositis classification network with the highest classification accuracy, the classification accuracy of the proposed network increased by 5.9%, reaching 96.1%, while its computational complexity is only 0.25% of that of the existing method. These results support that the proposed method can provide physicians with more accurate classification results at a lower computational cost, thereby greatly assisting clinical diagnosis.
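One plausible form of a soft-threshold attention mechanism is channel-wise soft thresholding with a learned threshold, as used in deep residual shrinkage networks; the sketch below is a generic module of that kind, offered as an illustration rather than the paper's exact design.

    import torch
    import torch.nn as nn

    class SoftThresholdAttention(nn.Module):
        """Channel-wise soft thresholding: a small gating branch predicts a per-channel
        threshold and small-magnitude activations are shrunk toward zero (sketch only)."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.gap = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                                    nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x):
            abs_mean = self.gap(x.abs()).flatten(1)            # (N, C) mean |activation|
            tau = abs_mean * self.fc(abs_mean)                 # per-channel threshold
            tau = tau.unsqueeze(-1).unsqueeze(-1)              # (N, C, 1, 1)
            return torch.sign(x) * torch.relu(x.abs() - tau)   # soft thresholding

    attn = SoftThresholdAttention(channels=32)
    print(attn(torch.randn(2, 32, 64, 64)).shape)   # torch.Size([2, 32, 64, 64])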


Subjects
Myositis, Neural Networks (Computer), Ultrasonography, Humans, Myositis/diagnostic imaging, Myositis/classification, Ultrasonography/methods, Algorithms, Computer-Assisted Image Processing/methods
7.
Article in English | MEDLINE | ID: mdl-39289317

ABSTRACT

PURPOSE: Ultrasound imaging has emerged as a promising, cost-effective, portable and non-irradiating modality for the diagnosis and follow-up of diseases. Motion analysis can be performed by segmenting anatomical structures of interest and then tracking them over time. However, doing so in a robust way is challenging, as ultrasound images often display low contrast and blurry boundaries. METHODS: In this paper, a robust descriptor inspired by the fractal dimension is presented to locally characterize the gray-level variations of an image. This descriptor is an adaptive grid pattern whose scale varies locally with the gray-level variations of the image. Robust features, which are more likely to be consistently tracked over time despite the presence of noise, are then located based on the gray-level variations. RESULTS: The method was validated on three datasets: segmentation of the left ventricle on simulated echocardiography (Dice coefficient, DC), accuracy of diaphragm motion tracking for healthy subjects (mean sum of distances, MSD) and for a scoliosis patient (root mean square error, RMSE). Results show that the method segments the left ventricle accurately (DC = 0.84) and robustly tracks the diaphragm motion for healthy subjects (MSD = 1.10 mm) and for the scoliosis patient (RMSE = 1.22 mm). CONCLUSIONS: This method has the potential to segment structures of interest according to their texture in an unsupervised fashion, as well as to help analyze the deformation of tissues. Possible applications are not limited to US images: the same principle could also be applied to other medical imaging modalities such as MRI or CT scans.
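For reference, the classical box-counting estimate of the fractal dimension that inspires such descriptors can be computed as below; this is the generic global estimator on a binary patch, not the authors' adaptive, locally scaled grid descriptor.

    import numpy as np

    def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
        """Estimate the box-counting (fractal) dimension of a 2-D binary mask."""
        counts = []
        for s in box_sizes:
            # Pad so the image splits evenly into s x s boxes.
            h = int(np.ceil(mask.shape[0] / s)) * s
            w = int(np.ceil(mask.shape[1] / s)) * s
            padded = np.zeros((h, w), dtype=bool)
            padded[:mask.shape[0], :mask.shape[1]] = mask
            # A box is "occupied" if it contains at least one foreground pixel.
            boxes = padded.reshape(h // s, s, w // s, s).any(axis=(1, 3))
            counts.append(max(boxes.sum(), 1))
        # Slope of log(count) vs. log(1/s) gives the box-counting dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return float(slope)

    # Toy usage: a fully filled patch has dimension 2.
    patch = np.ones((64, 64), dtype=bool)
    print(box_counting_dimension(patch))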

8.
Med Biol Eng Comput ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39292382

ABSTRACT

Atherosclerosis causes heart disease by forming plaques in arterial walls. Intravascular ultrasound (IVUS) imaging provides a high-resolution cross-sectional view of coronary arteries and plaque morphology. Healthcare professionals diagnose and quantify atherosclerosis manually or using VH-IVUS software. Since manual or VH-IVUS software-based diagnosis is time-consuming, automated plaque characterization tools are essential for accurate atherosclerosis detection and classification. Recently, deep learning (DL) and computer vision (CV) approaches have emerged as promising tools for automatically classifying plaques on IVUS images. With this motivation, this manuscript proposes an automated atherosclerotic plaque classification method using a hybrid Ant Lion Optimizer with Deep Learning (AAPC-HALODL) technique on IVUS images. The AAPC-HALODL technique uses a faster regional convolutional neural network (Faster RCNN)-based segmentation approach to identify diseased regions in the IVUS images. Next, the ShuffleNet-v2 model generates a useful set of feature vectors from the segmented IVUS images, and its hyperparameters are optimally selected using the HALO technique. Finally, an average ensemble classification process comprising a stacked autoencoder (SAE) and a deep extreme learning machine (DELM) model is utilized. The MICCAI Challenge 2011 dataset was used for the AAPC-HALODL simulation analysis. A detailed comparative study showed that the AAPC-HALODL approach outperformed other DL models with a maximum accuracy of 98.33%, precision of 97.87%, sensitivity of 98.33%, and F-score of 98.10%.
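A minimal sketch of the final two stages, feature extraction with a ShuffleNet-v2 backbone followed by an average ensemble of two classifier heads; the simple MLP heads stand in for the SAE and DELM members, which the abstract does not specify in detail.

    import torch
    import torch.nn as nn
    from torchvision import models

    # torchvision's ShuffleNet V2 with the classifier head removed, standing in for
    # the paper's ShuffleNet-v2 feature stage.
    backbone = models.shufflenet_v2_x1_0(weights=models.ShuffleNet_V2_X1_0_Weights.DEFAULT)
    backbone.fc = nn.Identity()          # keep the 1024-d pooled features
    backbone.eval()

    # Two illustrative classifier heads standing in for the SAE and DELM members.
    clf_a = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 2))
    clf_b = nn.Sequential(nn.Linear(1024, 128), nn.ReLU(), nn.Linear(128, 2))

    @torch.no_grad()
    def ensemble_predict(x):
        """Average ensemble of two heads on shared ShuffleNet features."""
        feats = backbone(x)                                  # (N, 1024)
        probs_a = torch.softmax(clf_a(feats), dim=1)
        probs_b = torch.softmax(clf_b(feats), dim=1)
        return (probs_a + probs_b) / 2                       # averaged class probabilities

    print(ensemble_predict(torch.randn(2, 3, 224, 224)).shape)   # torch.Size([2, 2])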

9.
Heliyon ; 10(16): e36426, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253160

ABSTRACT

Objective: It is challenging to accurately distinguish atypical endometrial hyperplasia (AEH) and endometrial cancer (EC) under routine transvaginal ultrasound (TVU) examination. Our research aims to use a few-shot learning (FSL) method to identify non-atypical endometrial hyperplasia (NAEH), AEH, and EC based on limited TVU images. Methods: The TVU images of pathologically confirmed NAEH, AEH, and EC patients (n = 33 per class) were split into a support set (SS, n = 3 per class) and a query set (QS, n = 30 per class). Next, we used a dual-pretrained ResNet50 V2, pretrained first on ImageNet and then on additionally collected TVU images, to extract 1×64 feature vectors from the TVU images in SS and QS. Then, the Euclidean distances were calculated between each TVU image in QS and the nine TVU images in SS. Finally, the k-nearest neighbor (KNN) algorithm was used to diagnose the TVU images in QS. Results: The overall accuracy and macro precision of the proposed FSL model in QS were 0.878 and 0.882 respectively, superior to the automated machine learning models, the traditional ResNet50 V2 model, a junior sonographer, and a senior sonographer. When identifying EC, the proposed FSL model achieved the highest precision of 0.964, the highest recall of 0.900, and the highest F1-score of 0.931. Conclusions: The proposed FSL model, combining a dual-pretrained ResNet50 V2 feature extractor and a KNN classifier, performed well in identifying NAEH, AEH, and EC patients with limited TVU images, showing potential for computer-aided disease diagnosis.
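The embedding-plus-KNN pipeline can be sketched as follows; torchvision's ResNet-50 and an untrained 64-dimensional projection stand in for the paper's dual-pretrained ResNet50 V2 extractor, and random tensors stand in for the support and query images.

    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.neighbors import KNeighborsClassifier

    # Stand-in embedding network (not the paper's dual-pretrained ResNet50 V2).
    resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    resnet.fc = nn.Identity()
    resnet.eval()
    project = nn.Linear(2048, 64)        # untrained projection to mimic the 1x64 vectors

    @torch.no_grad()
    def embed(images):
        return project(resnet(images))   # (N, 64) feature vectors

    # Support set: 3 classes (NAEH, AEH, EC) x 3 images each; query set to classify.
    support_images = torch.randn(9, 3, 224, 224)
    support_labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]
    query_images = torch.randn(5, 3, 224, 224)

    # Euclidean-distance KNN over the embeddings, as in the described FSL pipeline.
    knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
    knn.fit(embed(support_images).numpy(), support_labels)
    print(knn.predict(embed(query_images).numpy()))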

10.
Sci Rep ; 14(1): 21845, 2024 09 19.
Article in English | MEDLINE | ID: mdl-39300284

ABSTRACT

The gallbladder (GB) is a small pouch-like organ located beneath the liver. Gallbladder cancer (GBC) is a deadly illness that is difficult to detect at an early stage, yet early diagnosis can significantly improve the survival rate. Non-ionizing energy, low cost, and convenience make ultrasound (US) a common non-invasive diagnostic modality for patients with GB diseases. Automatic recognition of GBC from US images is a significant problem that has gained much attention from researchers. Recently, machine learning (ML) techniques based on convolutional neural network (CNN) architectures have driven transformative progress in radiology and medical image analysis for diseases such as lung, pancreatic, and breast cancer and melanoma. Deep learning (DL), a branch of artificial intelligence (AI), can help in the initial analysis of GBC from medical images. This manuscript presents an Automated Gallbladder Cancer Detection using an Artificial Gorilla Troops Optimizer with Transfer Learning (GBCD-AGTOTL) technique on ultrasound images. The GBCD-AGTOTL technique examines US images for the presence of gallbladder cancer using a DL model. In the initial stage, the GBCD-AGTOTL technique preprocesses the US images using a median filtering (MF) approach. It then applies the Inception module for feature extraction, which learns the complex and intrinsic patterns in the preprocessed image. Besides, an AGTO algorithm-based hyperparameter tuning procedure takes place, which optimally picks the hyperparameter values of the Inception technique. Lastly, a bidirectional gated recurrent unit (BiGRU) model is used to classify gallbladder cancer. A series of simulation analyses were performed to assess the performance of the GBCD-AGTOTL technique on the GBC dataset. The experimental outcomes confirmed the enhanced ability of the GBCD-AGTOTL technique in detecting gallbladder cancer.
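The median-filtering preprocessing stage can be sketched in a few lines; the 3×3 kernel size is an assumption, since the abstract does not report the filter parameters.

    import numpy as np
    from scipy.ndimage import median_filter

    def preprocess_us_image(image, kernel_size=3):
        """Median-filter preprocessing step of the kind described above.

        kernel_size is an assumed value; the abstract does not report the filter size.
        """
        return median_filter(image, size=kernel_size)

    # Toy usage on a synthetic noisy image.
    noisy = np.random.rand(256, 256)
    print(preprocess_us_image(noisy).shape)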


Subjects
Deep Learning, Gallbladder Neoplasms, Ultrasonography, Gallbladder Neoplasms/diagnostic imaging, Humans, Ultrasonography/methods, Neural Networks (Computer), Machine Learning, Algorithms
11.
Sci Rep ; 14(1): 22422, 2024 09 28.
Article in English | MEDLINE | ID: mdl-39341859

ABSTRACT

Breast cancer, a prevalent and life-threatening disease, necessitates early detection for effective intervention and improved patient outcomes. This paper focuses on the critical problem of identifying and segmenting breast tumors using an Attention U-Net model. The model is applied to the Breast Ultrasound Image Dataset (BUSI), comprising 780 breast images categorized into three distinct groups: 437 benign, 210 malignant, and 133 normal cases. The proposed model leverages the attention-driven U-Net's encoder blocks to capture hierarchical features effectively. It comprises four decoder blocks, a pivotal component of the U-Net architecture responsible for expanding the encoded feature representation obtained from the encoder and reconstructing spatial information. Four attention gates are incorporated strategically to enhance feature localization during decoding, a design that facilitates accurate segmentation of breast tumors in ultrasound images and accurate delineation of tumor borders. The experimental findings demonstrate outstanding performance, achieving an overall accuracy of 0.98, precision of 0.97, recall of 0.90, and a Dice score of 0.92. This research aims to advance automated breast cancer segmentation, emphasizing the importance of early detection in boosting diagnostic capabilities and enabling prompt, targeted medical interventions.
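An additive attention gate of the kind used in Attention U-Net skip connections can be sketched as below; the channel sizes are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn

    class AttentionGate(nn.Module):
        """Additive attention gate for a U-Net skip connection (minimal sketch)."""
        def __init__(self, gate_ch, skip_ch, inter_ch):
            super().__init__()
            self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # gating signal from decoder
            self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # skip features from encoder
            self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)
            self.relu = nn.ReLU(inplace=True)
            self.sigmoid = nn.Sigmoid()

        def forward(self, g, x):
            # g and x are assumed to share spatial size here (upsample g beforehand if not).
            alpha = self.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))  # (N, 1, H, W)
            return x * alpha                      # re-weighted skip features

    gate = AttentionGate(gate_ch=256, skip_ch=128, inter_ch=64)
    out = gate(torch.randn(1, 256, 32, 32), torch.randn(1, 128, 32, 32))
    print(out.shape)  # torch.Size([1, 128, 32, 32])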


Subjects
Breast Neoplasms, Humans, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Female, Breast Ultrasonography/methods, Algorithms, Computer-Assisted Image Interpretation/methods, Factual Databases, Computer-Assisted Image Processing/methods
12.
Med Biol Eng Comput ; 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39215783

ABSTRACT

Deep learning has been widely used in ultrasound image analysis, and it also benefits kidney ultrasound interpretation and diagnosis. However, the importance of ultrasound image resolution often goes overlooked within deep learning methodologies. In this study, we integrate the ultrasound image resolution into a convolutional neural network and explore the effect of the resolution on diagnosis of kidney tumors. In the process of integrating the image resolution information, we propose two different approaches to narrow the semantic gap between the features extracted by the neural network and the resolution features. In the first approach, the resolution is directly concatenated with the features extracted by the neural network. In the second approach, the features extracted by the neural network are first dimensionally reduced and then combined with the resolution features to form new composite features. We compare these two approaches incorporating the resolution with the method without incorporating the resolution on a kidney tumor dataset of 926 images consisting of 211 images of benign kidney tumors and 715 images of malignant kidney tumors. The area under the receiver operating characteristic curve (AUC) of the method without incorporating the resolution is 0.8665, and the AUCs of the two approaches incorporating the resolution are 0.8926 (P < 0.0001) and 0.9135 (P < 0.0001) respectively. This study has established end-to-end kidney tumor classification systems and has demonstrated the benefits of integrating image resolution, showing that incorporating image resolution into neural networks can more accurately distinguish between malignant and benign kidney tumors in ultrasound images.
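The first approach described above, directly concatenating the resolution value with the CNN features, can be sketched as follows; the ResNet-18 backbone, layer sizes, and resolution values are illustrative assumptions, not the paper's network.

    import torch
    import torch.nn as nn
    from torchvision import models

    class ResolutionAwareClassifier(nn.Module):
        """Sketch of fusing a scalar image-resolution value with pooled CNN features."""
        def __init__(self, num_classes=2):
            super().__init__()
            backbone = models.resnet18(weights=None)
            backbone.fc = nn.Identity()            # 512-d pooled image features
            self.backbone = backbone
            self.head = nn.Sequential(nn.Linear(512 + 1, 128), nn.ReLU(),
                                      nn.Linear(128, num_classes))

        def forward(self, image, resolution):
            feats = self.backbone(image)                                 # (N, 512)
            fused = torch.cat([feats, resolution.unsqueeze(1)], dim=1)   # append resolution
            return self.head(fused)

    model = ResolutionAwareClassifier()
    logits = model(torch.randn(4, 3, 224, 224), torch.tensor([0.10, 0.12, 0.08, 0.11]))
    print(logits.shape)  # torch.Size([4, 2])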

13.
Ophthalmol Ther ; 13(10): 2645-2659, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39127983

ABSTRACT

INTRODUCTION: The aim of this work is to develop a deep learning (DL) system for rapidly and accurately screening for intraocular tumor (IOT), retinal detachment (RD), vitreous hemorrhage (VH), and posterior scleral staphyloma (PSS) using ocular B-scan ultrasound images. METHODS: Ultrasound images from five clinically confirmed categories, namely vitreous hemorrhage, retinal detachment, intraocular tumor, posterior scleral staphyloma, and normal eyes, were used to develop and evaluate a fine-grained classification system (the Dual-Path Lesion Attention Network, DPLA-Net). Images were acquired at five centers by different sonographers and divided into training, validation, and test sets in a ratio of 7:1:2. Two senior ophthalmologists and four junior ophthalmologists were recruited to evaluate the system's performance. RESULTS: This multi-center cross-sectional study was conducted in six hospitals in China. A total of 6054 ultrasound images were collected; 4758 images were used for the training and validation of the system, and 1296 images were used as a testing set. DPLA-Net achieved a mean accuracy of 0.943 in the testing set, and the area under the curve was 0.988 for IOT, 0.997 for RD, 0.994 for PSS, 0.988 for VH, and 0.993 for normal. With the help of DPLA-Net, the accuracy of the four junior ophthalmologists improved from 0.696 (95% confidence interval [CI] 0.684-0.707) to 0.919 (95% CI 0.912-0.926, p < 0.001), and the time used for classifying each image was reduced from 16.84 ± 2.34 s to 10.09 ± 1.79 s. CONCLUSIONS: The proposed DPLA-Net showed high accuracy for screening and classifying multiple ophthalmic diseases using B-scan ultrasound images across multiple centers. Moreover, the system can improve the efficiency of classification by ophthalmologists.

14.
Microsc Res Tech ; 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39145424

ABSTRACT

Ultrasound images are susceptible to various forms of quality degradation that negatively impact diagnosis. Common degradations include speckle noise, Gaussian noise, salt-and-pepper noise, and blurring. This research proposes an accurate ultrasound image denoising strategy based on first detecting the noise type, so that a suitable denoising method can be applied for each corruption. The technique relies on convolutional neural networks to categorize the type of noise affecting an input ultrasound image. Pre-trained convolutional neural network models, including GoogleNet, VGG-19, AlexNet and AlexNet-support vector machine (SVM), are developed and trained to perform this classification. A dataset of 782 numerically generated ultrasound images across different diseases and noise types is utilized for model training and evaluation. Results show that AlexNet-SVM achieves the highest accuracy of 99.2% in classifying noise types. The top-performing model is then applied to real ultrasound images with different noise corruptions to demonstrate the efficacy of the proposed detect-then-denoise system. RESEARCH HIGHLIGHTS: Proposes an accurate ultrasound image denoising strategy based on detecting the noise type first. Uses pre-trained convolutional neural networks to categorize the noise type in input images. Evaluates GoogleNet, VGG-19, AlexNet, and AlexNet-support vector machine (SVM) models on a dataset of 782 synthetic ultrasound images. AlexNet-SVM achieves the highest accuracy of 99.2% in classifying noise types. Demonstrates the efficacy of the proposed detect-then-denoise system on real ultrasound images.
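The routing step of a detect-then-denoise system can be sketched as below; the noise label is assumed to come from a classifier such as the AlexNet-SVM model, and the OpenCV filters chosen for each noise type are generic illustrations rather than the paper's denoisers.

    import numpy as np
    import cv2

    def denoise_by_type(image, noise_type):
        """Route an ultrasound image to a denoiser chosen for the detected noise type."""
        if noise_type == "salt_and_pepper":
            return cv2.medianBlur(image, 3)                     # median filter removes impulses
        if noise_type == "gaussian":
            return cv2.GaussianBlur(image, (5, 5), sigmaX=1.0)  # smooths additive Gaussian noise
        if noise_type == "speckle":
            return cv2.fastNlMeansDenoising(image, h=10)        # non-local means for speckle-like noise
        return image                                            # unknown type: leave unchanged

    noisy = (np.random.rand(128, 128) * 255).astype(np.uint8)
    clean = denoise_by_type(noisy, "salt_and_pepper")
    print(clean.shape)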

15.
World J Clin Cases ; 12(22): 4932-4939, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39109037

ABSTRACT

BACKGROUND: Collision tumors are neoplasms in which two histologically distinct tumors coexist in the same mass without histological admixture. They are clinically rare. AIM: To investigate the ultrasound features of ovarian collision tumors and to apply the Ovarian-Adnexal Reporting and Data System (O-RADS) to evaluate their risk and pathological characteristics. METHODS: This study retrospectively analyzed 17 cases of ovarian collision tumor diagnosed pathologically from January 2020 to December 2023. All clinical features, ultrasound images and histopathological features were collected and analyzed. The O-RADS score was used for classification and was determined by two senior doctors in the gynecological ultrasound group. Lesions with an O-RADS score of 1-3 were classified as benign, and lesions with an O-RADS score of 4 or 5 were classified as malignant. RESULTS: There were 17 collision tumors detected in 16 of 6274 patients who underwent gynecological surgery. The average age of the 17 women with ovarian collision tumor was 36.7 years (range 20-68 years); one tumor occurred bilaterally and the rest unilaterally. The average tumor diameter was 10 cm: three tumors were 2-5 cm, 11 were 5-10 cm, and three were > 10 cm. Five (29.4%) tumors with an O-RADS score of 3 were endometriotic cysts with fibroma/serous cystadenoma, presenting as unilocular or multilocular cysts containing a small number of parenchymal components. Eleven (64.7%) tumors had an O-RADS score of 4, including two in category 4A, six in category 4B, and three in category 4C; all were multilocular cystic tumors with solid components or multiple papillary components. One (5.9%) tumor had an O-RADS score of 5; this case was a solid mass with a small amount of pelvic effusion detected on ultrasound, and the pathology was high-grade serous carcinoma combined with mature cystic teratoma. Nine (52.9%) tumors had elevated serum carbohydrate antigen 125 (CA125) and two (11.8%) had elevated serum CA19-9. Histological and pathological results showed that epithelial-cell-derived tumors combined with other tumors were the most common, which differs from previous reports. CONCLUSION: The ultrasound features of ovarian collision tumors have certain specificity, but preoperative ultrasound diagnosis remains difficult. The combination of an epithelial tumor and a mesenchymal cell tumor is one of the most common types of ovarian collision tumor. The O-RADS score of ovarian collision tumors is mostly ≥ 4, which allows sensitive detection of malignant tumors.

16.
Phys Med Biol ; 69(15)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-38986480

ABSTRACT

OBJECTIVE: Automated detection and segmentation of breast masses in ultrasound images are critical for breast cancer diagnosis, but remain challenging due to limited image quality and complex breast tissues. This study aims to develop a deep learning-based method that enables accurate breast mass detection and segmentation in ultrasound images. APPROACH: A novel convolutional neural network-based framework that combines the You Only Look Once (YOLO) v5 network and the Global-Local (GOLO) strategy was developed. First, YOLOv5 was applied to locate the mass regions of interest (ROIs). Second, a Global Local-Connected Multi-Scale Selection (GOLO-CMSS) network was developed to segment the masses. The GOLO-CMSS operated both on the entire images globally and on the mass ROIs locally, and then integrated the two branches for a final segmentation output. Particularly, in the global branch, CMSS applies Multi-Scale Selection (MSS) modules to automatically adjust the receptive fields, and Multi-Input (MLI) modules to enable fusion of shallow and deep features at different resolutions. The USTC dataset containing 28,477 breast ultrasound images was collected for training and testing. The proposed method was also tested on three public datasets, UDIAT, BUSI and TUH. The segmentation performance of GOLO-CMSS was compared with other networks and three experienced radiologists. MAIN RESULTS: YOLOv5 outperformed other detection models with average precisions of 99.41%, 95.15%, 93.69% and 96.42% on the USTC, UDIAT, BUSI and TUH datasets, respectively. The proposed GOLO-CMSS showed superior segmentation performance over other state-of-the-art networks, with Dice similarity coefficients (DSCs) of 93.19%, 88.56%, 87.58% and 90.37% on the USTC, UDIAT, BUSI and TUH datasets, respectively. The mean DSC between GOLO-CMSS and each radiologist was significantly better than that between radiologists (p < 0.001). SIGNIFICANCE: Our proposed method can accurately detect and segment breast masses with a performance comparable to radiologists, highlighting its great potential for clinical implementation in breast ultrasound examination.
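One way to read the global-local integration is as pasting the ROI branch's output back into the whole-image probability map and combining the two inside the detected box; the sketch below uses simple averaging, which is an assumption rather than the paper's actual fusion rule.

    import numpy as np

    def fuse_global_local(global_prob, local_prob, box):
        """Average the global-branch and ROI-branch probabilities inside the detected box.

        box is (y0, x0, y1, x1) from the detector (e.g., YOLOv5); local_prob must
        already be resized to the box. The 50/50 averaging rule is an assumption.
        """
        y0, x0, y1, x1 = box
        fused = global_prob.copy()
        fused[y0:y1, x0:x1] = 0.5 * global_prob[y0:y1, x0:x1] + 0.5 * local_prob
        return fused

    global_prob = np.random.rand(256, 256)          # whole-image branch output
    local_prob = np.random.rand(64, 96)             # ROI branch output (resized to the box)
    mask = fuse_global_local(global_prob, local_prob, (100, 80, 164, 176)) > 0.5
    print(mask.shape, mask.dtype)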


Subjects
Breast Neoplasms, Deep Learning, Computer-Assisted Image Processing, Humans, Breast Neoplasms/diagnostic imaging, Computer-Assisted Image Processing/methods, Ultrasonography/methods, Female, Breast Ultrasonography/methods, Neural Networks (Computer)
17.
Biomed Eng Lett ; 14(4): 785-800, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38946824

ABSTRACT

The aim of this study is to propose a new diagnostic model based on "segmentation + classification" to improve routine thyroid nodule ultrasonography screening by utilizing key domain knowledge from medical diagnostic tasks. A multi-scale segmentation network based on a multi-parallel atrous spatial pyramid pooling structure is proposed. First, in the segmentation network, precise information about the underlying feature space is obtained by an Attention Gate. Second, the dilated convolution part of Atrous Spatial Pyramid Pooling (ASPP) is cascaded for multiple downsampling. Finally, a three-branch classification network combined with expert knowledge is designed, drawing on doctors' clinical diagnosis experience, to extract features from the original image of the nodule, the regional image of the nodule, and the edge image of the nodule, respectively, and to improve the classification accuracy of the model by utilizing the Coordinate Attention (CA) mechanism and cross-level feature fusion. The multi-scale segmentation network achieves 94.27%, 93.90% and 88.85% in mean pixel accuracy (mPA), Dice coefficient (Dice) and mean intersection over union (MIoU), respectively, and the accuracy, specificity and sensitivity of the classification network reach 86.07%, 81.34% and 90.19%, respectively. Comparison tests show that this method outperforms the classical U-Net, AGU-Net and DeepLab V3+ models as well as the more recent nnU-Net, Swin UNetr and MedFormer models. As an auxiliary diagnostic tool, this algorithm can help physicians more accurately assess whether thyroid nodules are benign or malignant. It can provide objective quantitative indicators, reduce the bias of subjective judgment, and improve the consistency and accuracy of diagnosis. Codes and models are available at https://github.com/enheliang/Thyroid-Segmentation-Network.git.

18.
Breast Cancer Res Treat ; 207(2): 453-468, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38853220

ABSTRACT

PURPOSE: This study aims to assess the diagnostic value of ultrasound habitat sub-region radiomics features, selected with an L2,1-norm method and combined with a fully connected neural network (FCNN), in relation to breast cancer Ki-67 status. METHODS: Ultrasound images from 528 cases of female breast cancer at the Affiliated Hospital of Xiangnan University and 232 cases of female breast cancer at the Affiliated Rehabilitation Hospital of Xiangnan University were selected for this study. We utilized deep learning methods to automatically outline the gross tumor volume and perform habitat clustering. Subsequently, habitat sub-regions were extracted to identify radiomics features, which underwent feature engineering using the L2,1-norm. A prediction model for the Ki-67 status of breast cancer patients was then developed using an FCNN. The model's performance was evaluated using accuracy, area under the curve (AUC), specificity (Spe), positive predictive value (PPV), negative predictive value (NPV), recall, and F1. In addition, calibration curves and clinical decision curves were plotted for the test set to visually assess the predictive accuracy and clinical benefit of the model. RESULTS: Based on the feature engineering using the L2,1-norm, a total of 9 core features were identified. The predictive model, constructed by the FCNN based on these 9 features, achieved the following scores: ACC 0.856, AUC 0.915, Spe 0.843, PPV 0.920, NPV 0.747, recall 0.974, and F1 0.890. Furthermore, calibration curves and clinical decision curves of the validation set demonstrated a high level of confidence in the model's performance and its clinical benefit. CONCLUSION: Habitat clustering of ultrasound images of breast cancer is effectively supported by the combined implementation of the L2,1-norm and FCNN algorithms, allowing for accurate classification of the Ki-67 status in breast cancer patients.
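A group-sparsity penalty of the L2,1 kind used for this sort of joint feature selection can be sketched as a column-wise penalty on the first fully connected layer; the layer sizes and regularization weight below are illustrative assumptions, not the paper's exact feature-engineering procedure.

    import torch
    import torch.nn as nn

    def l21_penalty(weight):
        """Sum of column-wise L2 norms of a linear layer's weight (out_features x in_features).

        Each input feature corresponds to one column, so penalizing this quantity
        encourages entire feature columns to shrink to zero (generic sketch).
        """
        return weight.norm(p=2, dim=0).sum()

    # Toy usage: first layer of a small fully connected network over radiomics features.
    fc1 = nn.Linear(in_features=100, out_features=32)
    task_loss = torch.tensor(0.0)                       # placeholder for the task loss
    loss = task_loss + 1e-3 * l21_penalty(fc1.weight)   # weight shape: (32, 100)
    loss.backward()
    print(fc1.weight.grad.shape)  # torch.Size([32, 100])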


Subjects
Breast Neoplasms, Ki-67 Antigen, Neural Networks (Computer), Humans, Female, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/metabolism, Breast Neoplasms/pathology, Ki-67 Antigen/metabolism, Ki-67 Antigen/analysis, Middle Aged, Adult, Aged, Deep Learning, Breast Ultrasonography/methods, Ultrasonography/methods, ROC Curve, Tumor Biomarkers, Radiomics
19.
BMC Med Imaging ; 24(1): 133, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840240

ABSTRACT

BACKGROUND: Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Nowadays, deep learning techniques are applied as auxiliary tools to provide predictive results that help doctors decide whether further examination or treatment is needed. This study aimed to develop a hybrid learning approach for breast ultrasound classification by extracting more potential features from local and multi-center ultrasound data. METHODS: We proposed a hybrid learning approach to classify breast tumors into benign and malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning, after which the model was fine-tuned locally on each dataset. The proposed model consisted of a convolutional neural network (CNN) and a graph neural network (GNN), aiming to extract features from images at a spatial level and from graphs at a geometric level. The input images are small-sized and free from pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, which saves labor and memory space. RESULTS: The classification AUCROC of our proposed method is 0.911, 0.871 and 0.767 for BUSI, BUS and OASBUD. The balanced accuracy is 87.6%, 85.2% and 61.4% respectively. The results show that our method outperforms conventional methods. CONCLUSIONS: Our hybrid approach can learn the inter-feature among multi-center data and the intra-feature of local data. It shows potential for aiding doctors in early-stage breast tumor classification in ultrasound.


Subjects
Breast Neoplasms, Deep Learning, Neural Networks (Computer), Breast Ultrasonography, Humans, Breast Neoplasms/diagnostic imaging, Female, Breast Ultrasonography/methods, Computer-Assisted Image Interpretation/methods, Adult
20.
Curr Med Imaging ; 20: e15734056293608, 2024.
Article in English | MEDLINE | ID: mdl-38712376

ABSTRACT

BACKGROUND: Transorbital ultrasonography (TOS) is a promising imaging technique that can be used to characterize the structures of the optic nerve and the alterations that may occur in them as a result of increased intracranial pressure (ICP) or the presence of other disorders such as multiple sclerosis (MS) and hydrocephalus. OBJECTIVE: The primary objective of this paper is to develop a fully automated system capable of segmenting and measuring the diameters of the structures associated with the optic nerve in TOS images, namely the optic nerve sheath diameter (ONSD) and the optic nerve diameter (OND). METHODS: The segmentation method is based on a pre-trained fully convolutional neural network (FCN) model. A total of 464 images from 110 subjects, acquired with four different devices, were used to develop and evaluate the method. RESULTS: The automatic measurements were compared with those of a manual operator. OND and ONSD show a typical error of -0.12 ± 0.32 mm and 0.14 ± 0.58 mm, respectively, with respect to the operator. The Pearson correlation coefficient (PCC) is 0.71 for OND and 0.64 for ONSD, indicating a positive correlation between the two measurement approaches. CONCLUSION: The developed technique is fully automatic, and the average error (AE) achieved for the ONSD measurement is compatible with the ranges of inter-operator variability reported in the literature.
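Turning a segmentation mask into a diameter measurement can be sketched as measuring the mask's horizontal extent at a chosen row and converting pixels to millimetres; the row index and pixel spacing below are assumed inputs, and the paper's exact measurement protocol is not reproduced here.

    import numpy as np

    def mask_width_mm(mask, row, pixel_spacing_mm):
        """Horizontal extent of a binary segmentation mask at a given row, in millimetres."""
        cols = np.flatnonzero(mask[row])
        if cols.size == 0:
            return 0.0
        return float(cols[-1] - cols[0] + 1) * pixel_spacing_mm

    # Toy usage: a synthetic 40-pixel-wide structure with 0.1 mm pixels -> 4.0 mm.
    mask = np.zeros((200, 200), dtype=bool)
    mask[90:110, 80:120] = True
    print(mask_width_mm(mask, row=100, pixel_spacing_mm=0.1))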


Subjects
Deep Learning, Optic Nerve, Ultrasonography, Humans, Optic Nerve/diagnostic imaging, Ultrasonography/methods, Neural Networks (Computer), Computer-Assisted Image Processing/methods