Results 1 - 20 of 14,063
1.
Article in English | MEDLINE | ID: mdl-38849632

ABSTRACT

OBJECTIVES: In patients with naïve glioblastoma multiforme (GBM), this study aims to assess the efficacy of deep learning algorithms in automating the segmentation of brain magnetic resonance (MR) images to accurately determine 3D masks for four distinct regions: enhanced tumor, peritumoral edema, non-enhanced/necrotic tumor, and total tumor. MATERIAL AND METHODS: A 3D U-Net neural network algorithm was developed for semantic segmentation of GBM. The training dataset comprised MR images from the Brain Tumor Segmentation Challenge 2021 (BraTS2021) repository, in which a group of expert neuroradiologists manually delineated ground-truth labels for diverse glioma (GBM and low-grade glioma) subregions across four MR sequences (T1w, T1w contrast-enhanced, T2w, and FLAIR) in 1251 patients. The in-house test was performed on 50 GBM patients from our cohort (PerProGlio project). The network's performance was optimized by exploring various hyperparameters, and the best parameter configuration was identified. The optimized network's performance was assessed using Dice score, precision, and sensitivity metrics. RESULTS: Our adaptation of the 3D U-Net with additional residual blocks demonstrated reliable performance on both the BraTS2021 dataset and the in-house PerProGlio cohort, employing only T1w-ce sequences for the enhanced and non-enhanced/necrotic tumor models and T1w-ce + T2w + FLAIR for peritumoral edema and total tumor. The mean Dice scores (training and test) were 0.89 and 0.75; 0.75 and 0.64; 0.79 and 0.71; and 0.60 and 0.55 for total tumor, edema, enhanced tumor, and non-enhanced/necrotic tumor, respectively. CONCLUSIONS: The results underscore the precision with which our network can segment GBM tumors and their distinct subregions. The accuracy achieved agrees with the coefficients reported in previous GBM studies.
In particular, our approach allows the model to be specialized for each tumor subregion, employing only those MR sequences that add value to the segmentation.
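As a reference for the evaluation metrics used throughout these studies, the Dice score, precision, and sensitivity reported above can all be computed from voxel-wise overlap counts. A minimal illustrative sketch in pure Python, using toy flattened masks rather than the authors' pipeline:

```python
def overlap_metrics(pred, truth):
    """Compute Dice, precision, and sensitivity for flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))      # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))  # false positives
    fn = sum(not p and t for p, t in zip(pred, truth))  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return dice, precision, sensitivity

# Toy 1D "masks" standing in for flattened 3D tumor volumes
pred = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0, 1, 0]
dice, prec, sens = overlap_metrics(pred, truth)
print(dice, prec, sens)  # 0.75 0.75 0.75
```

In practice these counts are taken over millions of voxels per volume, but the arithmetic is identical.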

2.
PeerJ Comput Sci ; 10: e2071, 2024.
Article in English | MEDLINE | ID: mdl-38855213

ABSTRACT

Colorectal cancer is an enormous health concern since it is among the most lethal types of malignancy. Manual examination has limitations, including subjectivity and data overload. To overcome these challenges, computer-aided diagnostic systems focusing on image segmentation and abnormality classification have been developed. This study presents a two-stage approach for the automatic detection of five types of colorectal abnormalities in addition to a control group: polyp, low-grade intraepithelial neoplasia, high-grade intraepithelial neoplasia, serrated adenoma, and adenocarcinoma. In the first stage, UNet3+ was used for image segmentation to locate the anomalies, while in the second stage, the Cross-Attention Multi-Scale Vision Transformer deep learning model was used to predict the type of anomaly after highlighting the anomaly on the raw images. In anomaly segmentation, UNet3+ achieved values of 0.9872, 0.9422, 0.9832, and 0.9560 for Dice coefficient, Jaccard index, sensitivity, and specificity, respectively. In anomaly detection, the Cross-Attention Multi-Scale Vision Transformer model attained a classification performance of 0.9340, 0.9037, 0.9446, 0.8723, 0.9102, and 0.9849 for accuracy, F1 score, precision, recall, Matthews correlation coefficient, and specificity, respectively. By achieving high performance in both the identification of anomalies and the segmentation of regions, the proposed approach shows its capacity to ease pathologists' workload and enhance the accuracy of colorectal cancer diagnosis.
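The classification metrics quoted above (accuracy, F1, precision, recall, Matthews correlation coefficient, and specificity) all derive from the binary confusion matrix. A small sketch with made-up counts, not the study's data:

```python
import math

def classification_metrics(tp, fp, fn, tn):
    """Accuracy, F1, precision, recall, MCC, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom  # Matthews correlation coefficient
    return accuracy, f1, precision, recall, mcc, specificity

# Hypothetical counts for one binary class
acc, f1, prec, rec, mcc, spec = classification_metrics(tp=90, fp=10, fn=5, tn=95)
print(f"acc={acc:.3f} f1={f1:.3f} mcc={mcc:.3f}")
```

For the multi-class setting in the paper, each class's metrics would be computed one-versus-rest and averaged.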

3.
PeerJ Comput Sci ; 10: e2076, 2024.
Article in English | MEDLINE | ID: mdl-38855260

ABSTRACT

Breast arterial calcifications (BAC) are a type of calcification commonly observed on mammograms and are generally considered benign and not associated with breast cancer. However, there is accumulating observational evidence of an association between BAC and cardiovascular disease, the leading cause of death in women. We present a deep learning method that could assist radiologists in detecting and quantifying BAC in synthesized 2D mammograms. We present a recurrent attention U-Net model consisting of encoder and decoder modules, each comprising multiple blocks that use a recurrent mechanism, with an attention module between the two. The model also includes a skip connection between the encoder and the decoder, similar to a U-shaped network. The attention module was used to enhance the capture of long-range dependencies and enable the network to effectively separate BAC from the background, whereas the recurrent blocks ensured better feature representation. The model was evaluated using a dataset containing 2,000 synthesized 2D mammogram images. We obtained 99.8861% overall accuracy, 69.6107% sensitivity, 66.5758% F1 score, and 59.5498% Jaccard coefficient. The presented model achieved promising performance compared with related models.

4.
Dev Sci ; : e13533, 2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38853379

ABSTRACT

Infants begin to segment word forms from fluent speech, a crucial task in lexical processing, between 4 and 7 months of age. Prior work has established that infants rely on a variety of cues available in the speech signal (i.e., prosodic, statistical, acoustic-segmental, and lexical) to accomplish this task. In two experiments with French-learning 6- and 10-month-olds, we use a psychoacoustic approach to examine if and how degradation of the two fundamental acoustic components extracted from speech by the auditory system, namely, temporal (both frequency and amplitude modulation) and spectral information, impacts word form segmentation. Infants were familiarized with passages containing target words, in which frequency modulation (FM) information was replaced with pure tones using a vocoder, while amplitude modulation (AM) was preserved in either 8 or 16 spectral bands. Infants were then tested on their recognition of the target versus novel control words. While the 6-month-olds were unable to segment in either condition, the 10-month-olds succeeded, although only in the 16-spectral-band condition. These findings suggest that 6-month-olds need FM temporal cues for speech segmentation while 10-month-olds do not, although they need the AM cues to be presented in enough spectral bands (i.e., 16). This developmental change in infants' sensitivity to spectrotemporal cues likely results from an increase in the range of available segmentation procedures, and/or a shift from a vowel to a consonant bias in lexical processing between the two ages, as vowels are more affected by our acoustic manipulations. RESEARCH HIGHLIGHTS: Although segmenting speech into word forms is crucial for lexical acquisition, the acoustic information that infants' auditory system extracts to process continuous speech remains unknown.
We examined infants' sensitivity to spectrotemporal cues in speech segmentation using vocoded speech, revealing a developmental change between 6 and 10 months of age. We showed that FM information, that is, the fast temporal modulations of speech, is necessary for 6- but not 10-month-old infants to segment word forms. Moreover, reducing the number of spectral bands impacts 10-month-olds' segmentation: they succeed when 16 bands are preserved but fail with 8 bands.

5.
Front Bioeng Biotechnol ; 12: 1285166, 2024.
Article in English | MEDLINE | ID: mdl-38872900

ABSTRACT

Objectives: The goal of this study was to explore the reliability and clinical value of fast, accurate automatic segmentation of the aortic root based on a deep learning tool compared with computed tomography angiography. Methods: A deep learning tool for automatic 3-dimensional aortic root reconstruction, the CVPILOT system (TAVIMercy Data Technology Ltd., Nanjing, China), was trained and tested using computed tomography angiography scans collected from 183 patients undergoing transcatheter aortic valve replacement from January 2021 to December 2022. The quality of the reconstructed models was assessed using validation data sets and evaluated clinically by experts. Results: The segmentation of the ascending aorta and the left ventricle attained Dice similarity coefficients (DSC) of 0.9806/0.9711 and 0.9603/0.9643 for the training and validation sets, respectively. The leaflets had a DSC of 0.8049/0.7931, and the calcification had a DSC of 0.8814/0.8630. After 6 months of application, the system modeling time was reduced to 19.83 s. Conclusion: For patients undergoing transcatheter aortic valve replacement, the CVPILOT system facilitates clinical workflow. The reliable evaluation quality of the platform indicates broad clinical application prospects in the future.

6.
Brain Inform ; 11(1): 15, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833195

ABSTRACT

Mapping neural connections within the brain has been a fundamental goal in neuroscience to better understand its functions and the changes that follow aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution brain-wide imaging. With this, image processing and analysis have become more crucial. However, despite the wealth of neural images generated, access to an integrated image-processing and analysis pipeline is challenging due to scattered information on available tools and methods. To map the neural connections, registration to atlases and feature extraction through segmentation and signal detection are necessary. In this review, we provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. Such an integrated workflow will facilitate researchers' efforts to map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review contributes to a deeper grasp of connecto-informatics, paving the way for better comprehension of brain connectivity and its implications.

7.
J Robot Surg ; 18(1): 237, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833204

ABSTRACT

A major obstacle in applying machine learning to medical fields is the disparity between the data distribution of the training images and the data encountered in clinics. This phenomenon can be explained by inconsistent acquisition techniques and large variations across the patient spectrum. The result is poor translation of the trained models to the clinic, which limits their implementation in medical practice. Patient-specific trained networks could provide a potential solution. Although patient-specific approaches are usually infeasible because of the expenses associated with on-the-fly labeling, the use of generative adversarial networks (GANs) enables this approach. This study proposes a patient-specific approach based on generative adversarial networks. In the presented training pipeline, the user trains a patient-specific segmentation network with extremely limited data, which is supplemented with artificial samples generated by generative adversarial models. This approach is demonstrated on endoscopic video data captured during fetoscopic laser coagulation, a procedure used for treating twin-to-twin transfusion syndrome by ablating the placental blood vessels. Compared to a standard deep learning segmentation approach, the pipeline achieved an intersection over union score of 0.60 using only 20 annotated images, versus 100 images for the standard approach. Furthermore, training with 20 annotated images without the pipeline achieves an intersection over union score of 0.30; incorporating the pipeline therefore corresponds to a 100% increase in performance. In short, a pipeline using GANs was used to generate artificial data that supplements the real data, allowing patient-specific training of a segmentation network.
We show that artificial images generated using GANs significantly improve performance in vessel segmentation and that training patient-specific models can be a viable solution to bring automated vessel segmentation to the clinic.


Subjects
Placenta, Humans, Pregnancy, Placenta/blood supply, Placenta/diagnostic imaging, Female, Deep Learning, Image Processing, Computer-Assisted/methods, Fetofetal Transfusion/surgery, Fetofetal Transfusion/diagnostic imaging, Machine Learning, Robotic Surgical Procedures/methods, Neural Networks, Computer
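The intersection over union (IoU) score used above to compare the GAN-supplemented and standard pipelines is a simple overlap ratio between predicted and ground-truth masks. An illustrative sketch with toy masks, not the paper's vessel data:

```python
def iou(pred, truth):
    """Intersection over union (Jaccard index) for flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))  # voxels in both masks
    union = sum(p or t for p, t in zip(pred, truth))   # voxels in either mask
    return inter / union if union else 1.0

pred = [1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 0]
print(iou(pred, truth))  # 2 overlapping voxels out of 4 in the union -> 0.5
```

Doubling this score (0.30 to 0.60), as the pipeline does with the same 20 annotated images, is what the abstract describes as a 100% increase in performance.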
8.
Gait Posture ; 113: 67-74, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38850852

ABSTRACT

INTRODUCTION: Foot and ankle alignment plays a pivotal role in human gait and posture. Traditional assessment methods, relying on 2D standing radiographs, present limitations in capturing the dynamic 3D nature of foot alignment during weight-bearing and are prone to observer error. This study aims to integrate weight-bearing CT (WBCT) imaging and advanced deep learning (DL) techniques to automate and enhance quantification of 3D foot and ankle alignment. METHODS: Thirty-two patients who underwent a WBCT of the foot and ankle were retrospectively included. After training and validation of a 3D nnU-Net model on 45 cases to automate the segmentation into bony models, 35 clinically relevant 3D measurements were automatically computed using a custom-made tool. Automated measurements were assessed for accuracy against manual measurements, while the latter were analyzed for inter-observer reliability. RESULTS: DL segmentation results showed a mean Dice coefficient of 0.95 and a mean Hausdorff distance of 1.41 mm. Good to excellent reliability and a mean prediction error of under 2 degrees were found for all angles except the talonavicular coverage angle and the distal metatarsal articular angle. CONCLUSION: In summary, this study introduces a fully automated framework for quantifying foot and ankle alignment, showcasing reliability comparable to current clinical practice measurements. This operator-friendly and time-efficient tool holds promise for implementation in clinical settings, benefiting both radiologists and surgeons. Future studies are encouraged to assess the tool's impact on streamlining image assessment workflows in a clinical environment.
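The Hausdorff distance reported for the bone segmentations measures the worst-case disagreement between two surfaces. A brute-force sketch on toy 2D point sets (illustrative only; production code would use an optimized spatial-index implementation):

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (Euclidean, brute force)."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

    def directed(src, dst):
        # For each source point, find its nearest neighbor; keep the worst case.
        return max(min(dist(p, q) for q in dst) for p in src)

    return max(directed(a, b), directed(b, a))

# Toy 2D contours standing in for 3D bone-surface meshes
a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(hausdorff(a, b))  # every point is exactly 1.0 away -> 1.0
```

A mean Hausdorff distance of 1.41 mm, as in the study, means the automated bone surfaces deviate from the manual ones by little more than a voxel in the worst direction, on average.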

9.
Comput Biol Med ; 178: 108667, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38850962

ABSTRACT

Nuclei segmentation and classification play a crucial role in pathology diagnosis, enabling pathologists to analyze cellular characteristics accurately. Overlapping clustered nuclei, misdetection of small-scale nuclei, and pleomorphic-nuclei-induced misclassification have always been major challenges in nuclei segmentation and classification tasks. To this end, we introduce an auxiliary task of nuclei boundary-guided contrastive learning to enhance the representativeness and discriminative power of visual features, particularly for addressing the challenge posed by the unclear contours of adherent nuclei and small nuclei. In addition, misclassifications resulting from pleomorphic nuclei often exhibit low classification confidence, indicating a high level of uncertainty. To mitigate misclassification, we capitalize on the characteristic clustering of similar cells to propose a locality-aware class embedding module, offering a regional perspective to capture category information. Moreover, we address uncertain classification in densely aggregated nuclei by designing a top-k uncertainty attention module that leverages deep features to enhance shallow features, thereby improving the learning of contextual semantic information. We demonstrate that the proposed network outperforms off-the-shelf methods in both nuclei segmentation and classification experiments, achieving state-of-the-art performance.

10.
Comput Struct Biotechnol J ; 23: 2304-2325, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38845821

ABSTRACT

Understanding the intricate relationships between gene expression levels and epigenetic modifications in a genome is crucial to comprehending the pathogenic mechanisms of many diseases. With the advancement of DNA Methylome Profiling techniques, the emphasis on identifying Differentially Methylated Regions (DMRs/DMGs) has become crucial for biomarker discovery, offering new insights into the etiology of illnesses. This review surveys the current state of computational tools/algorithms for the analysis of microarray-based DNA methylation profiling datasets, focusing on key concepts underlying the diagnostic/prognostic CpG site extraction. It addresses methodological frameworks, algorithms, and pipelines employed by various authors, serving as a roadmap to address challenges and understand changing trends in the methodologies for analyzing array-based DNA methylation profiling datasets derived from diseased genomes. Additionally, it highlights the importance of integrating gene expression and methylation datasets for accurate biomarker identification, explores prognostic prediction models, and discusses molecular subtyping for disease classification. The review also emphasizes the contributions of machine learning, neural networks, and data mining to enhance diagnostic workflow development, thereby improving accuracy, precision, and robustness.

11.
Heliyon ; 10(11): e31844, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38845948

ABSTRACT

Water imbibition is an important process in reservoir rocks during hydraulic fracturing and water-based enhanced oil recovery operations. However, the water imbibition behavior in tight sandstones has not been fully understood due to their complex pore structure, including the presence of nano- and micron-sized pores and throats, surface properties, and wide variation in mineralogy. The present study focuses on the effect of spontaneous water imbibition on the porosity evolution of a tight sandstone. Within this context, a core of Torrey Buff sandstone was characterized by using a combination of multiscale imaging methods (X-ray Computed Tomography, Scanning Electron Microscopy), laboratory experiments (porosity-permeability measurements), and analytical techniques (X-ray Diffraction, Fourier Transform Infrared Spectroscopy, Scanning Electron Microscopy-Energy Dispersive Spectroscopy, and Thermogravimetry). The studied tight sandstone core has a porosity of 12.3 % and a permeability of 2.05 mD, with minerals of quartz (58 %), clays (kaolinite and illite, 23 %), K-feldspar (7 %), dolomite (7 %) and calcite (5 %). Primary and secondary pores, ranging in size from 60 to 140 µm and from 30 to 50 µm, respectively, are mostly filled with highly soluble carbonate minerals and hydrophilic illite, which influence the spontaneous water imbibition capacity of the tight sandstone. The multiscale imaging technique indicates that after a 10-h water imbibition experiment, the average pore size of the tight sandstone increased by 1.28 %, reaching 2.35 % at the rock-water contact and 0.13 % at the top of the core. In other words, throughout the core, the porosity changes upon water imbibition are not uniform but show an almost linear trend. This observation could be explained by the significant contribution of highly soluble carbonates and hydrophilic illite to the microstructure of the tight sandstone.
This study implies that multiscale imaging techniques, crucial in examining spontaneous water imbibition, hold promise for further research in enhanced oil recovery or hydraulic fracking in tight sandstones.

12.
Quant Imaging Med Surg ; 14(6): 4067-4085, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38846298

ABSTRACT

Background: The segmentation of prostates from transrectal ultrasound (TRUS) images is a critical step in the diagnosis and treatment of prostate cancer. Nevertheless, the manual segmentation performed by physicians is a time-consuming and laborious task. To address this challenge, there is a pressing need for computerized algorithms capable of autonomously segmenting prostates from TRUS images. However, automatic prostate segmentation in TRUS images has always been a challenging problem, since prostates in TRUS images have ambiguous boundaries and inhomogeneous intensity distribution. Although many prostate segmentation methods have been proposed, they still need to be improved due to their lack of sensitivity to edge information. Consequently, the objective of this study is to devise a highly effective prostate segmentation method that overcomes these limitations and achieves accurate segmentation of prostates in TRUS images. Methods: A three-dimensional (3D) edge-aware attention generative adversarial network (3D EAGAN)-based prostate segmentation method is proposed in this paper, which consists of an edge-aware segmentation network (EASNet) that performs the prostate segmentation and a discriminator network that distinguishes predicted prostates from real prostates. The proposed EASNet is composed of an encoder-decoder-based U-Net backbone network, a detail compensation module (DCM), four 3D spatial and channel attention modules (3D SCAM), an edge enhancement module (EEM), and a global feature extractor (GFE). The DCM compensates for the loss of detailed information caused by the down-sampling process of the encoder. The features of the DCM are selectively enhanced by the 3D spatial and channel attention module. Furthermore, the EEM guides shallow layers in the EASNet to focus on contour and edge information in prostates.
Finally, features from shallow layers and hierarchical features from the decoder module are fused through the GFE to predict the prostate segmentation. Results: The proposed method is evaluated on our TRUS image dataset and the open-source µRegPro dataset. Specifically, experimental results on the two datasets show that the proposed method significantly improved the average segmentation Dice score from 85.33% to 90.06%, the Jaccard score from 76.09% to 84.11%, the Hausdorff distance (HD) from 8.59 mm to 4.58 mm, the precision score from 86.48% to 90.58%, and the recall score from 84.79% to 89.24%. Conclusions: A novel 3D EAGAN-based prostate segmentation method is proposed, consisting of an EASNet and a discriminator network. Experimental results demonstrate that the proposed method achieves satisfactory results on 3D TRUS image segmentation for prostates.

13.
Article in English | MEDLINE | ID: mdl-38848695

ABSTRACT

Recent advancements in computational intelligence, deep learning, and computer-aided detection have had a significant impact on the field of medical imaging. The task of image segmentation, which involves accurately interpreting and identifying the content of an image, has garnered much attention. The main objective of this task is to separate objects from the background, thereby simplifying and enhancing the significance of the image. However, existing methods for image segmentation have limitations when applied to certain types of images. This survey paper aims to highlight the importance of image segmentation techniques by providing a thorough examination of their advantages and disadvantages. The accurate detection of cancer regions in medical images is crucial for ensuring effective treatment. In this study, we have also extensively analysed Computer-Aided Diagnosis (CAD) systems for cancer identification, with a focus on recent research advancements. The paper critically assesses various techniques for cancer detection and compares their effectiveness. Convolutional neural networks (CNNs) have attracted particular interest due to their ability to segment and classify medical images in large datasets, thanks to their capacity for self-learning and decision-making.

14.
Dev Cell ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38848718

ABSTRACT

Characterizing cellular features during seed germination is crucial for understanding the complex biological functions of different embryonic cells in regulating seed vigor and seedling establishment. We performed spatially enhanced resolution omics sequencing (Stereo-seq) and single-cell RNA sequencing (scRNA-seq) to capture spatially resolved single-cell transcriptomes of germinating rice embryos. An automated cell-segmentation model, employing deep learning, was developed to accommodate the analysis requirements. The spatial transcriptomes of 6, 24, 36, and 48 h after imbibition unveiled both known and previously unreported embryo cell types, including two unreported scutellum cell types, corroborated by in situ hybridization and functional exploration of marker genes. Temporal transcriptomic profiling delineated gene expression dynamics in distinct embryonic cell types during seed germination, highlighting key genes involved in nutrient metabolism, biosynthesis, and signaling of phytohormones, reprogrammed in a cell-type-specific manner. Our study provides a detailed spatiotemporal transcriptome of rice embryo and presents a previously undescribed methodology for exploring the roles of different embryonic cells in seed germination.

15.
Radiother Oncol ; 197: 110367, 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38834152

ABSTRACT

BACKGROUND: The number of metastatic lymph nodes (MLNs) is crucial for the survival of nasopharyngeal carcinoma (NPC), but manual counting is laborious. This study aims to explore the feasibility and prognostic value of automatic MLN segmentation and counting. METHODS: We retrospectively enrolled 980 newly diagnosed patients in the primary cohort and 224 patients from two external cohorts. We utilized the nnU-Net model for automatic MLN segmentation on multimodal magnetic resonance imaging. MLN counting methods, including manual delineation-assisted counting (MDAC) and a fully automatic lymph node counting system (AMLNC), were compared with manual evaluation (gold standard). RESULTS: In the internal validation group, the MLN segmentation results showed acceptable agreement with manual delineation, with a mean Dice coefficient of 0.771. The consistency among the three counting methods was as follows: 0.778 (gold vs. AMLNC), 0.638 (gold vs. MDAC), and 0.739 (AMLNC vs. MDAC). MLN numbers were categorized into a three-category variable (1-4, 5-9, >9) and a two-category variable (<4, ≥4) based on the gold standard and AMLNC. These categorical variables demonstrated acceptable discriminating ability for 5-year overall survival (OS), progression-free survival, and distant metastasis-free survival. Compared with the base prediction model, the model incorporating two-category AMLNC counting numbers showed an improved C-index for 5-year OS prediction (0.658 vs. 0.675, P = 0.045). All results were successfully validated in the external cohort. CONCLUSIONS: The AMLNC system offers a time- and labor-saving approach for fully automatic MLN segmentation and counting in NPC. MLN counting using AMLNC demonstrated non-inferior performance in survival discrimination compared with manual detection.

16.
Methods ; 229: 9-16, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38838947

ABSTRACT

Robust segmentation of large and complex conjoined tree structures in 3-D is a major challenge in computer vision. This is particularly true in computational biology, where we often encounter data structures that are large in size but few in number, which poses a hard problem for learning algorithms. We show that merging multiscale opening with geodesic path propagation can shed new light on this classic machine vision challenge, while circumventing the learning issue by developing an unsupervised visual geometry approach (digital topology/morphometry). The novelty of the proposed MSO-GP method comes from the geodesic path propagation being guided by a skeletonization of the conjoined structure, which helps to achieve robust segmentation results in a particularly challenging task in this area, that of artery-vein separation from non-contrast pulmonary computed tomography angiograms. This is an important first step in measuring vascular geometry to then diagnose pulmonary diseases and develop image-based phenotypes. We first present proof-of-concept results on synthetic data, and then verify the performance on pig lung and human lung data, with less segmentation time and less user intervention than the competing methods require.
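Geodesic path propagation, one ingredient of the MSO-GP method, amounts to computing shortest within-object paths that cannot leave the segmented structure. A minimal breadth-first-search sketch on a toy 2D mask (illustrative only, not the MSO-GP implementation, which additionally uses multiscale opening and skeleton guidance):

```python
from collections import deque

def geodesic_distances(mask, seed):
    """BFS geodesic (within-object) distances on a 2D binary mask from a seed voxel."""
    rows, cols = len(mask), len(mask[0])
    dist = {seed: 0}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and mask[nr][nc] and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

# An L-shaped object: the geodesic path must go around the corner
mask = [
    [1, 1, 1],
    [0, 0, 1],
    [0, 0, 1],
]
d = geodesic_distances(mask, seed=(0, 0))
print(d[(2, 2)])  # 4 steps along the object, not the Euclidean shortcut
```

In artery-vein separation, distances like these, propagated from skeleton seed points in each vessel tree, decide which tree a contested voxel belongs to.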

17.
Heliyon ; 10(10): e31488, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38826726

ABSTRACT

Skin cancer is a pervasive and potentially life-threatening disease. Early detection plays a crucial role in improving patient outcomes. Machine learning (ML) techniques, particularly when combined with pre-trained deep learning models, have shown promise in enhancing the accuracy of skin cancer detection. In this paper, we enhanced the VGG19 pre-trained model with max pooling and a dense layer for the prediction of skin cancer (E-VGG19). Moreover, we also explored pre-trained models such as Visual Geometry Group 19 (VGG19), Residual Network 152 version 2 (ResNet152v2), Inception-Residual Network version 2 (InceptionResNetV2), Dense Convolutional Network 201 (DenseNet201), Residual Network 50 (ResNet50), and Inception version 3 (InceptionV3). For training, a skin lesions dataset with malignant and benign cases is used. The models extract features and divide skin lesions into two categories: malignant and benign. The features are then fed into machine learning methods, including Linear Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Decision Tree (DT), and Logistic Regression (LR). Our results demonstrate that combining the E-VGG19 model with traditional classifiers significantly improves the overall classification accuracy for skin cancer detection and classification. Moreover, we have also compared the performance of baseline classifiers and pre-trained models using metrics (recall, F1 score, precision, sensitivity, and accuracy). The experimental results provide valuable insights into the effectiveness of various models and classifiers for accurate and efficient skin cancer detection. This research contributes to ongoing efforts to create automated technologies for detecting skin cancer that can help healthcare professionals and individuals identify potential skin cancer cases at an early stage, ultimately leading to more timely and effective treatments.

18.
Front Bioeng Biotechnol ; 12: 1398237, 2024.
Article in English | MEDLINE | ID: mdl-38827037

ABSTRACT

Accurate medical image segmentation is critical for disease quantification and treatment evaluation. While traditional U-Net architectures and their transformer-integrated variants excel in automated segmentation tasks, they lack the ability to harness the image's intrinsic position and channel features, and they struggle with parameter efficiency and computational complexity, often due to the extensive use of Transformers. Moreover, research employing dual attention mechanisms over position and channel has not been specifically optimized for the high-detail demands of medical images. To address these issues, this study proposes a novel deep medical image segmentation framework, called DA-TransUNet, which integrates the Transformer and a dual attention block (DA-Block) into the traditional U-shaped architecture. Tailored to the high-detail requirements of medical images, DA-TransUNet optimizes the intermittent channels of Dual Attention (DA) and employs DA in each skip connection to effectively filter out irrelevant information. This integration significantly enhances the model's capability to extract features, thereby improving the performance of medical image segmentation. DA-TransUNet is validated on medical image segmentation tasks, consistently outperforming state-of-the-art techniques across 5 datasets. In summary, DA-TransUNet has made significant strides in medical image segmentation, offering new insights into existing techniques. It strengthens model performance from the perspective of image features, thereby advancing the development of high-precision automated medical image diagnosis. The code and parameters of our model will be publicly available at https://github.com/SUN-1024/DA-TransUnet.

19.
J Med Imaging (Bellingham) ; 11(3): 034504, 2024 May.
Article in English | MEDLINE | ID: mdl-38827779

ABSTRACT

Purpose: Accurate segmentation of the endometrium in ultrasound images is essential for gynecological diagnostics and treatment planning. Manual segmentation methods are time-consuming and subjective, prompting the exploration of automated solutions. We introduce "segment anything with inception module" (SAIM), a specialized adaptation of the Segment Anything Model (SAM), tailored specifically for the segmentation of endometrium structures in ultrasound images. Approach: SAIM incorporates enhancements to the image encoder structure and integrates point prompts to guide the segmentation process. We utilized ultrasound images from patients undergoing hysteroscopic surgery in the gynecological department to train and evaluate the model. Results: Our study demonstrates SAIM's superior segmentation performance through quantitative and qualitative evaluations, surpassing existing automated methods. SAIM achieves a Dice similarity coefficient of 76.31% and an intersection-over-union score of 63.71%, outperforming traditional task-specific deep learning models and other SAM-based foundation models. Conclusions: The proposed SAIM achieves high segmentation accuracy, providing high diagnostic precision and efficiency. Furthermore, it could serve as an efficient tool for junior medical professionals in education and diagnosis.
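The Dice similarity coefficient and intersection-over-union (IoU) reported above are closely related overlap metrics (Dice = 2·IoU / (1 + IoU)). A minimal implementation for binary masks, with illustrative toy masks:

```python
# Overlap metrics for binary segmentation masks.
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

# Toy 8x8 masks: each covers 16 pixels, overlapping in a 3x3 region (9 px).
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
gt   = np.zeros((8, 8), bool); gt[3:7, 3:7]   = True
print(dice(pred, gt))  # 2*9 / (16+16) = 0.5625
print(iou(pred, gt))   # 9 / (16+16-9) = 9/23 ≈ 0.3913
```

Because Dice weights the intersection twice, it is always at least as large as IoU on the same pair of masks, which is why the paper's 76.31% Dice corresponds to a lower IoU of 63.71%.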

20.
J Med Phys ; 49(1): 12-21, 2024.
Article in English | MEDLINE | ID: mdl-38828062

ABSTRACT

Introduction: Segmentation and analysis of organs at risk (OARs) and tumor volumes are integral concepts in the development of radiotherapy treatment plans and the prediction of patients' treatment outcomes. Aims: To develop a research tool, PAHPhysRAD, that can be used to semi- and fully automate segmentation of OARs. In addition, the proposed software seeks to extract 3214 radiomic features from tumor volumes and user-specified dose-volume parameters. Materials and Methods: Developed within MATLAB, PAHPhysRAD provides a comprehensive suite of segmentation tools, including manual, semi-automatic, and automatic options. For semi-autosegmentation, Meta AI's Segment Anything Model was incorporated using the bounding-box method. Autosegmentation of OARs and tumor volumes is implemented through a module that enables the addition of models in Open Neural Network Exchange (ONNX) format. To validate the radiomic feature extraction module in PAHPhysRAD, radiomic features extracted from the gross tumor volume of 15 non-small cell lung carcinoma patients were compared against the features extracted from 3D Slicer™. The dose-volume parameter extraction module was validated using dose-volume data extracted from 28 tangential field-based breast treatment planning datasets. The ipsilateral lung volume receiving ≥20 Gy (V20) and the mean doses received by the heart and ipsilateral lung were compared against the parameters extracted from Eclipse. Results: The Wilcoxon signed-rank test revealed no significant difference between the majority of the radiomic features derived from PAHPhysRAD and 3D Slicer. The average mean lung and heart doses calculated in Eclipse were 5.51 ± 2.28 Gy and 1.64 ± 1.98 Gy, respectively. Similarly, the average mean lung and heart doses calculated in PAHPhysRAD were 5.45 ± 2.89 Gy and 1.67 ± 2.08 Gy, respectively.
Conclusion: The MATLAB-based graphical user interface, PAHPhysRAD, offers a user-friendly platform for viewing and analyzing medical scans, with options to extract radiomic features and dose-volume parameters. Its versatility, compatibility, and potential for further development make it an asset in medical image analysis.
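The dose-volume parameters validated above (mean dose and V20, the percentage of a structure's volume receiving ≥20 Gy) reduce to simple per-voxel statistics once a structure's dose array is available. A hedged sketch with made-up voxel doses (the tool itself is MATLAB-based; this only illustrates the definitions):

```python
# Dose-volume parameters from a per-voxel dose array for one structure.
import numpy as np

def mean_dose(dose_voxels):
    """Mean dose (Gy) over the structure's voxels."""
    return float(np.mean(dose_voxels))

def v_x(dose_voxels, threshold_gy):
    """Vx: percentage of the structure's volume receiving >= threshold_gy."""
    return 100.0 * float(np.mean(dose_voxels >= threshold_gy))

# Illustrative ipsilateral-lung voxel doses (Gy); 3 of 8 voxels are >= 20 Gy.
lung_dose = np.array([1.0, 5.0, 12.0, 22.0, 30.0, 2.0, 19.9, 25.0])
print(mean_dose(lung_dose))   # 14.6125
print(v_x(lung_dose, 20.0))   # 37.5
```

This assumes equal voxel volumes within the structure; with anisotropic or resampled grids the average would instead be volume-weighted.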
