Results 1 - 20 of 72
1.
Comput Biol Med ; 177: 108628, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38810476

ABSTRACT

BACKGROUND AND OBJECTIVE: The metabolic syndrome induced by obesity is closely associated with cardiovascular disease, and its prevalence is increasing globally year by year. Obesity is a risk marker for detecting this disease. However, current research on computer-aided detection of adipose distribution is hampered by the lack of open-source large abdominal adipose datasets. METHODS: In this study, a benchmark Abdominal Adipose Tissue CT Image Dataset (AATCT-IDS) containing 300 subjects is prepared and published. AATCT-IDS provides 13,732 raw CT slices, and the researchers individually annotate the subcutaneous and visceral adipose tissue regions of 3213 of those slices that have the same slice distance to validate denoising methods, train semantic segmentation models, and study radiomics. For each task, this paper compares and analyzes the performance of various methods on AATCT-IDS by combining the visualization results and evaluation data, thereby verifying the research potential of this dataset for the three types of tasks above. RESULTS: In the comparative study of image denoising, algorithms using a smoothing strategy suppress mixed noise at the expense of image detail and obtain better evaluation scores. Methods such as BM3D preserve the original image structure better, although their evaluation scores are slightly lower. The results show significant differences between the two groups. In the comparative study of semantic segmentation of abdominal adipose tissue, the segmentation results of each model show distinct structural characteristics. Among them, BiSeNet obtains segmentation results only slightly inferior to U-Net with the shortest training time and effectively separates small, isolated regions of adipose tissue. In addition, the radiomics study based on AATCT-IDS reveals three adipose distributions in the subject population. CONCLUSION: AATCT-IDS contains the ground truth of adipose tissue regions in abdominal CT slices.
This open-source dataset can attract researchers to explore the multi-dimensional characteristics of abdominal adipose tissue and thus help physicians and patients in clinical practice. AATCT-IDS is freely published for non-commercial purposes at: https://figshare.com/articles/dataset/AATTCT-IDS/23807256.
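The denoising comparison above hinges on a fidelity score such as PSNR: smoothing filters trade image detail for a higher score. A minimal sketch of that trade-off on synthetic data (not AATCT-IDS slices; the 3×3 box filter here is only a stand-in for the smoothing-strategy denoisers):

```python
import numpy as np

def psnr(ref, img, data_range=255.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def mean_filter3(img):
    """3x3 box (smoothing) filter with edge replication."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy: 1 + dy + img.shape[0],
                          1 + dx: 1 + dx + img.shape[1]]
    return out / 9.0

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))        # synthetic "slice"
noisy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255)
print(psnr(clean, noisy), psnr(clean, mean_filter3(noisy)))
```

Smoothing raises the PSNR here, but on real CT it also blurs the structures a method like BM3D tries to preserve.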

2.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38652552

ABSTRACT

The brain networks for the first (L1) and second (L2) languages are dynamically formed in the bilingual brain. This study delves into the neural mechanisms associated with logographic-logographic bilingualism, where both languages employ visually complex and conceptually rich logographic scripts. Using functional magnetic resonance imaging, we examined the brain activity of Chinese-Japanese bilinguals and Japanese-Chinese bilinguals as they engaged in rhyming tasks with Chinese characters and Japanese Kanji. Results showed that Japanese-Chinese bilinguals processed both languages using common brain areas, demonstrating an assimilation pattern, whereas Chinese-Japanese bilinguals recruited additional neural regions in the left lateral prefrontal cortex for processing Japanese Kanji, reflecting their accommodation to the higher phonological complexity of L2. In addition, Japanese speakers relied more on the phonological processing route, while Chinese speakers favored visual form analysis for both languages, indicating differing neural strategy preferences between the two bilingual groups. Moreover, multivariate pattern analysis demonstrated that, despite the considerable neural overlap, each bilingual group formed distinguishable neural representations for each language. These findings highlight the brain's capacity for neural adaptability and specificity when processing complex logographic languages, enriching our understanding of the neural underpinnings supporting bilingual language processing.


Subject(s)
Brain Mapping, Brain, Magnetic Resonance Imaging, Multilingualism, Humans, Male, Female, Young Adult, Brain/physiology, Brain/diagnostic imaging, Adult, Phonetics, Reading, Language, Japan
3.
Comput Biol Med ; 173: 108342, 2024 May.
Article in English | MEDLINE | ID: mdl-38522249

ABSTRACT

BACKGROUND AND OBJECTIVE: Intracerebral hemorrhage is one of the diseases with the highest mortality and poorest prognosis worldwide. Spontaneous intracerebral hemorrhage (SICH) typically presents acutely, so prompt and expedited radiological examination is crucial for diagnosis, localization, and quantification of the hemorrhage. Early detection and accurate segmentation of perihematomal edema (PHE) play a critical role in guiding appropriate clinical intervention and enhancing patient prognosis. However, the progress and assessment of computer-aided diagnostic methods for PHE segmentation and detection face challenges due to the scarcity of publicly accessible brain CT image datasets. METHODS: This study establishes a publicly available CT dataset named PHE-SICH-CT-IDS for perihematomal edema in spontaneous intracerebral hemorrhage. The dataset comprises 120 brain CT scans and 7,022 CT images, along with corresponding medical information for the patients. To demonstrate its effectiveness, classical algorithms for semantic segmentation, object detection, and radiomic feature extraction are evaluated. The experimental results confirm the suitability of PHE-SICH-CT-IDS for assessing the performance of segmentation, detection and radiomic feature extraction methods. RESULTS: This study conducts numerous experiments using classical machine learning and deep learning methods, demonstrating the differences between various segmentation and detection methods on PHE-SICH-CT-IDS. The highest precision achieved in semantic segmentation is 76.31%, while object detection attains a maximum precision of 97.62%. The experimental results on radiomic feature extraction and analysis prove the suitability of PHE-SICH-CT-IDS for evaluating image features and highlight the predictive value of these features for the prognosis of SICH patients.
CONCLUSION: To the best of our knowledge, this is the first publicly available dataset for PHE in SICH, comprising various data formats suitable for applications across diverse medical scenarios. We believe that PHE-SICH-CT-IDS will attract researchers to explore novel algorithms, providing valuable support for clinicians and patients in the clinical setting. PHE-SICH-CT-IDS is freely published for non-commercial purposes at https://figshare.com/articles/dataset/PHE-SICH-CT-IDS/23957937.


Subject(s)
Brain Edema, Humans, Brain Edema/diagnostic imaging, Benchmarking, Radiomics, Semantics, Edema, Cerebral Hemorrhage/diagnostic imaging, X-Ray Computed Tomography
4.
Comput Biol Med ; 171: 108217, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38430743

ABSTRACT

BACKGROUND: Endometrial cancer is one of the most common tumors in the female reproductive system and is the third most common gynecological malignancy causing death, after ovarian and cervical cancer. Early diagnosis can significantly improve the 5-year survival rate of patients. With the development of artificial intelligence, computer-assisted diagnosis plays an increasingly important role in improving the accuracy and objectivity of diagnosis and reducing the workload of doctors. However, the absence of publicly available image datasets restricts the application of computer-assisted diagnostic techniques. METHODS: In this paper, a publicly available Endometrial Cancer PET/CT Image Dataset for Evaluation of Semantic Segmentation and Detection of Hypermetabolic Regions (ECPC-IDS) is published. Specifically, the segmentation section includes PET and CT images, with a total of 7159 images in multiple formats. To prove the effectiveness of segmentation on ECPC-IDS, six deep learning semantic segmentation methods are selected to test the image segmentation task. The object detection section also includes PET and CT images, with a total of 3579 images and XML files containing annotation information. Eight deep learning methods are selected for experiments on the detection task. RESULTS: This study is conducted using deep learning-based semantic segmentation and object detection methods to demonstrate the distinguishability of ECPC-IDS. Considered separately, the minimum and maximum Dice values on PET images are 0.546 and 0.743, respectively, and the minimum and maximum Dice values on CT images are 0.012 and 0.510. The object detection section's maximum mAP values on PET and CT images are 0.993 and 0.986, respectively. CONCLUSION: As far as we know, this is the first publicly available dataset of endometrial cancer with a large number of multi-modality images.
ECPC-IDS can assist researchers in exploring new algorithms to enhance computer-assisted diagnosis, benefiting both clinical doctors and patients. ECPC-IDS is freely published for non-commercial purposes at: https://figshare.com/articles/dataset/ECPC-IDS/23808258.
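The Dice values reported above measure the overlap between a predicted mask and the ground truth. A minimal illustration on toy binary masks (the data are illustrative, not from ECPC-IDS):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A∩B| / (|A|+|B|), with eps guarding the empty-mask case."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

pred   = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
print(dice(pred, target))
```

A perfect match gives 1.0; disjoint masks give (near) 0, which is why very hard CT cases can score as low as the 0.012 minimum quoted above.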


Subject(s)
Endometrial Neoplasms, Positron Emission Tomography/Computed Tomography, Humans, Female, Artificial Intelligence, Computer-Assisted Image Processing/methods, Semantics, Benchmarking, Endometrial Neoplasms/diagnostic imaging
5.
Data Brief ; 53: 110141, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38406254

ABSTRACT

A benchmark histopathological Hematoxylin and Eosin (H&E) image dataset for Cervical Adenocarcinoma in Situ (CAISHI), containing 2240 histopathological images of cervical adenocarcinoma in situ (AIS), is established to fill the current data gap; 1010 are images of normal cervical glands and the other 1230 are images of cervical AIS. The sampling method is endoscope biopsy. Pathological sections are obtained by H&E staining from Shengjing Hospital, China Medical University. The images have a magnification of 100× and are captured by an Axio Scope.A1 microscope. The size of each image is 3840 × 2160 pixels, and the format is ".png". The collection of CAISHI is subject to an ethical review by China Medical University with approval number 2022PS841K. These images are analyzed at multiple levels, including classification tasks and image retrieval tasks. A variety of computer vision and machine learning methods are used to evaluate the performance of the data. For classification tasks, classical machine learning classifiers such as k-means, support vector machines (SVM), and random forests (RF), as well as convolutional neural network classifiers such as Residual Network 50 (ResNet50), Vision Transformer (ViT), Inception version 3 (Inception-V3), and Visual Geometry Group Network 16 (VGG-16), are used. In addition, a Siamese network is used to evaluate few-shot learning tasks. For image retrieval, color features, texture features, and deep learning features are extracted, and their performance is tested. CAISHI can help with the early diagnosis and screening of cervical cancer. Researchers can use this dataset to develop new computer-aided diagnostic tools that could improve the accuracy and efficiency of cervical cancer screening and advance the development of automated diagnostic algorithms.
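A retrieval pipeline of the kind evaluated above reduces to feature extraction plus nearest-neighbour ranking. This toy sketch uses a grayscale histogram as a stand-in for the colour/texture features; the images are synthetic, not CAISHI data:

```python
import numpy as np

def gray_histogram(img, bins=16):
    """Normalized intensity histogram of an 8-bit image (a simple
    colour-style feature)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

def retrieve(query_feat, db_feats):
    """Indices of database images sorted by L1 distance to the query."""
    dists = [float(np.abs(query_feat - f).sum()) for f in db_feats]
    return list(np.argsort(dists))

db = [np.full((8, 8), 10), np.full((8, 8), 200)]       # dark vs bright image
feats = [gray_histogram(im) for im in db]
ranking = retrieve(gray_histogram(np.full((8, 8), 205)), feats)
print(ranking)
```

The bright query ranks the bright database image first; real retrieval experiments swap in richer texture or deep features but keep the same structure.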

6.
Comput Biol Med ; 167: 107620, 2023 12.
Article in English | MEDLINE | ID: mdl-37922604

ABSTRACT

In recent years, there has been a growing reliance on image analysis methods to bolster dentistry practice, such as image classification, segmentation and object detection. However, the availability of related benchmark datasets remains limited. Hence, we spent six years preparing and testing a benchmark Oral Implant Image Dataset (OII-DS) to support work in this research domain. OII-DS is a benchmark oral image dataset consisting of 3834 oral CT images and 15240 oral implant images. It serves the purposes of object detection and image classification. To demonstrate the validity of OII-DS, the most representative algorithms and metrics are selected for testing and evaluation of each function. For object detection, five object detection algorithms are adopted, and four evaluation criteria are used to assess the detection of each of the five object types; additionally, mean average precision (mAP) serves as the evaluation metric for multi-object detection. For image classification, 13 classifiers are tested and evaluated on each of the five categories against four evaluation criteria. Experimental results affirm the high quality of the data in OII-DS, rendering it suitable for evaluating object detection and image classification methods. Furthermore, OII-DS is openly available for non-commercial purposes at: https://doi.org/10.6084/m9.figshare.22608790.
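The detection criteria above ultimately rest on box overlap, measured as intersection-over-union (IoU): a prediction counts as a hit when its IoU with a ground-truth box exceeds a threshold. A minimal implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

mAP then averages, over classes, the precision accumulated along confidence-ranked detections judged by this IoU test.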


Subject(s)
Algorithms, Benchmarking, Computer-Assisted Image Processing/methods
7.
Comput Biol Med ; 165: 107388, 2023 10.
Article in English | MEDLINE | ID: mdl-37696178

ABSTRACT

Colorectal cancer (CRC) is currently one of the most common and deadly cancers: it is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related deaths in the United States and other developed countries. Histopathological images contain rich phenotypic information and play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and efficiency of image analysis in intestinal histopathology, computer-aided diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies, together with the medical background of intestinal histopathology. Second, we introduce both traditional ML methods commonly used in intestinal histopathology and deep learning (DL) methods. Then, we provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition, among other tasks, for histopathological images of the intestine. Finally, the existing methods are summarized and their application prospects in this field are discussed.


Subject(s)
Medicine, Computer-Assisted Diagnosis, Computer-Assisted Image Processing, Intestines, Machine Learning
8.
Comput Methods Programs Biomed ; 240: 107701, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37480645

ABSTRACT

OBJECTIVE: Cardiotocography (CTG) records the fetal heart rate and uterine contraction signals during pregnancy. The level of prenatal fetal intrauterine monitoring can be used to evaluate fetal intrauterine safety and reduce perinatal morbidity and mortality. Perinatal asphyxia is the leading cause of neonatal hypoxic-ischemic encephalopathy and one of the leading causes of neonatal death and disability; severe asphyxia can cause permanent brain and nervous system damage and leave nervous system sequelae of varying degrees. METHODS: This paper evaluates the classification performance of several machine learning methods on CTG data to support doctors' clinical judgment. The dataset used is from the public UCI repository and contains 2126 samples. RESULTS: The accuracy of each model exceeds 80%: XGBoost has the highest accuracy at 91%, followed by Random Tree (90%), LightGBM (90%), Decision Tree (83%), and KNN (81%). On the other metrics, the models perform as follows: XGBoost (precision: 90%, recall: 93%, F1 score: 90%), Random Tree (precision: 88%, recall: 91%, F1 score: 89%), LightGBM (precision: 87%, recall: 93%, F1 score: 90%), Decision Tree (precision: 83%, recall: 86%, F1 score: 84%), and KNN (precision: 77%, recall: 85%, F1 score: 81%). CONCLUSION: XGBoost performs best among all the models. This result also shows that using machine learning methods to evaluate fetal health status from CTG data is feasible, which can provide doctors with an objective assessment to assist in clinical diagnosis.
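The precision, recall, and F1 figures above follow mechanically from confusion counts. A small helper makes the relationship explicit (the counts below are illustrative, not taken from the CTG study):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive and
    false-negative counts, with zero-division guards."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(prf1(8, 2, 2))
```

F1 is the harmonic mean of precision and recall, which is why LightGBM's 87%/93% pair can tie XGBoost's 90%/93% pair at roughly the same F1.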


Subject(s)
Asphyxia, Brain Injuries, Female, Pregnancy, Newborn Infant, Humans, Fetus, Brain, Machine Learning
9.
Comput Biol Med ; 162: 107070, 2023 08.
Article in English | MEDLINE | ID: mdl-37295389

ABSTRACT

Cervical cancer is the fourth most common cancer among women, and cytopathological images are often used to screen for it. However, manual examination is laborious and the misdiagnosis rate is high. In addition, cervical cancer nest cells are dense and complex, with high overlap and opacity, which increases the difficulty of identification. Computer-aided automatic diagnosis systems address this problem. In this paper, a weakly supervised cervical cancer nest image identification approach using a Conjugated Attention Mechanism and Visual Transformer (CAM-VT) is proposed, which can analyze pap slides quickly and accurately. CAM-VT uses conjugated attention mechanism and visual transformer modules for local and global feature extraction, respectively, and then adds an ensemble learning module to further improve identification capability. To determine a reasonable interpretation, comparative experiments are conducted on our datasets. The average accuracy on the validation set over three repeated experiments using the CAM-VT framework is 88.92%, which is higher than the best result among 22 well-known deep learning models. Moreover, we conduct ablation experiments and extended experiments on Hematoxylin and Eosin stained gastric histopathological image datasets to verify the capability and generalization ability of the framework. Finally, the top-5 and top-10 positive probability values for cervical nests are 97.36% and 96.84%, which have important clinical and practical significance. The experimental results show that the proposed CAM-VT framework has excellent performance in potential cervical cancer nest image identification tasks for practical clinical work.


Subject(s)
Uterine Cervical Neoplasms, Female, Humans, Uterine Cervical Neoplasms/diagnostic imaging, Computer-Assisted Diagnosis, Eosine Yellowish-(YS), Hematoxylin, Probability, Computer-Assisted Image Processing
10.
J Nanobiotechnology ; 21(1): 150, 2023 May 08.
Article in English | MEDLINE | ID: mdl-37158923

ABSTRACT

BACKGROUND: Nanotheranostics advances anticancer management by combining therapeutic and diagnostic functions, coupling programmed cell death (PCD) initiation with imaging-guided treatment, thus increasing the efficacy of tumor ablation and efficiently fighting cancer. However, how mild photothermal/radiation therapy with imaging-guided, precisely mediated PCD in solid tumors, involving processes related to apoptosis and ferroptosis, enhances the inhibition of breast cancer is not fully understood. RESULTS: Herein, targeted peptide-conjugated gold nanocages, iRGD-PEG/AuNCs@FePt NP ternary metallic nanoparticles (Au@FePt NPs), were designed to achieve photoacoustic imaging (PAI)/magnetic resonance imaging (MRI) guided synergistic therapy. Tumor-targeting Au@FePt forms reactive oxygen species (ROS), initiated by X-ray-induced dynamic therapy (XDT) in collaboration with photothermal therapy (PTT), inducing ferroptosis-augmented apoptosis to realize effective antitumor therapeutics. The relatively high photothermal conversion ability of Au@FePt increases the temperature in the tumor region and hastens Fenton-like processes to achieve enhanced synergistic therapy. In particular, RNA sequencing identified Au@FePt induction of the apoptosis pathway in the transcriptome profile. CONCLUSION: Au@FePt combined XDT/PTT therapy activates apoptosis- and ferroptosis-related proteins in tumors to achieve breast cancer ablation in vitro and in vivo. PAI/MRI images demonstrated that Au@FePt provides real-time guidance for monitoring the synergistic anti-cancer therapy effect. Therefore, we have provided a multifunctional nanotheranostic modality for tumor inhibition and cancer management with high efficacy and limited side effects.


Subject(s)
Ferroptosis, Neoplasms, Photothermal Therapy, Magnetic Resonance Imaging, Apoptosis, Gold
11.
Comput Biol Med ; 161: 107034, 2023 07.
Article in English | MEDLINE | ID: mdl-37230019

ABSTRACT

In recent years, with the advancement of computer-aided diagnosis (CAD) technology and whole slide imaging (WSI), histopathological WSI has gradually come to play a crucial role in the diagnosis and analysis of diseases. To increase the objectivity and accuracy of pathologists' work, artificial neural network (ANN) methods have been widely needed in the segmentation, classification, and detection of histopathological WSI. However, existing review papers focus only on equipment hardware, development status, and trends, and do not summarize in detail the artificial neural networks used for full-slide image analysis. In this paper, WSI analysis methods based on ANNs are reviewed. First, the development status of WSI and ANN methods is introduced. Second, we summarize the common ANN methods. Next, we discuss publicly available WSI datasets and evaluation metrics. The ANN architectures for WSI processing are then divided into classical neural networks and deep neural networks (DNNs) and analyzed. Finally, the application prospects of these analytical methods in this field are discussed. One important potential method is the Vision Transformer.


Subject(s)
Computer-Assisted Diagnosis, Neural Networks (Computer), Computer-Assisted Diagnosis/methods, Computer-Assisted Image Processing/methods
12.
NMR Biomed ; : e4945, 2023 Apr 03.
Article in English | MEDLINE | ID: mdl-37012600

ABSTRACT

Parametrial infiltration (PMI) is an essential factor in staging and planning treatment of cervical cancer. The purpose of this study was to develop a radiomics model for assessing PMI in patients with stage IB-IIB cervical cancer using features from 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET)/MR images. In this retrospective study, 66 patients with International Federation of Gynecology and Obstetrics stage IB-IIB cervical cancer (22 with PMI and 44 without PMI) who underwent 18F-FDG PET/MRI were divided into a training dataset (n = 46) and a testing dataset (n = 20). Features were extracted from both the tumoral and peritumoral regions in 18F-FDG PET/MR images. Single-modality and multimodality radiomics models were developed with random forest to predict PMI. The performance of the models was evaluated with F1 score, accuracy, and area under the curve (AUC). The Kappa test was used to observe the differences between PMI evaluated by the radiomics-based models and the pathological results. The intraclass correlation coefficient for features extracted from each region of interest (ROI) was measured, and three-fold cross-validation was conducted to confirm the diagnostic ability of the features. The radiomics models developed from features of the tumoral region in T2-weighted images (F1 score = 0.400, accuracy = 0.700, AUC = 0.708, Kappa = 0.211, p = 0.329) and the peritumoral region in PET images (F1 score = 0.533, accuracy = 0.650, AUC = 0.714, Kappa = 0.271, p = 0.202) achieved the best performances in the testing dataset among the four single-ROI radiomics models. The combined model using features from the tumoral region in T2-weighted images and the peritumoral region in PET images achieved the best overall performance (F1 score = 0.727, accuracy = 0.850, AUC = 0.774, Kappa = 0.625, p < 0.05). The results suggest that 18F-FDG PET/MRI can provide complementary information on cervical cancer, and that a radiomics-based method integrating features from the tumoral and peritumoral regions in 18F-FDG PET/MR images gives superior performance for evaluating PMI.
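The AUC values above can be read as a rank statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney U equivalence). A compact sketch of that reading, on synthetic scores rather than the study's PET/MRI features:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) statistic:
    compare every positive score with every negative score; ties count 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))   # perfectly ranked
```

An AUC of 0.774, as for the combined model, means roughly 77% of PMI/non-PMI pairs are ranked correctly by the model's score.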

13.
Front Microbiol ; 14: 1084312, 2023.
Article in English | MEDLINE | ID: mdl-36891388

ABSTRACT

Nowadays, the detection of environmental microorganism indicators is essential for assessing the degree of pollution, but traditional detection methods consume substantial manpower and material resources. It is therefore necessary to prepare microbial datasets for use in artificial intelligence. The Environmental Microorganism Image Dataset Seventh Version (EMDS-7) is a microscopic image dataset for multi-object detection in artificial intelligence; this approach reduces the chemicals, manpower and equipment used in the process of detecting microorganisms. EMDS-7 includes the original Environmental Microorganism (EM) images and the corresponding object labeling files in ".XML" format. The EMDS-7 dataset consists of 41 types of EMs, with a total of 2,65 images and 13,216 labeled objects, and mainly focuses on object detection. To prove the effectiveness of EMDS-7, we select the most commonly used deep learning methods (Faster Region-based Convolutional Neural Network (Faster-RCNN), YOLOv3, YOLOv4, SSD, and RetinaNet) and evaluation indices for testing and evaluation. EMDS-7 is freely published for non-commercial purposes at: https://figshare.com/articles/dataset/EMDS-7_DataSet/16869571.
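Object-labeling files in ".XML" format are typically parsed per image before training a detector. A sketch assuming a Pascal-VOC-style layout; the actual EMDS-7 schema and the class name used below are assumptions, not taken from the dataset:

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation snippet in Pascal-VOC style (schema assumed).
XML = """
<annotation>
  <filename>em_0001.png</filename>
  <object>
    <name>Rotifera</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>95</ymax></bndbox>
  </object>
</annotation>
"""

def parse_objects(xml_text):
    """Return (class_name, (xmin, ymin, xmax, ymax)) pairs from one
    annotation file."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        coords = tuple(int(box.findtext(k))
                       for k in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((name, coords))
    return objects

print(parse_objects(XML))
```

Detectors such as Faster-RCNN or YOLO consume exactly these (class, box) pairs as training targets.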

14.
Phys Med ; 107: 102534, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36804696

ABSTRACT

BACKGROUND AND PURPOSE: Colorectal cancer has become the third most common cancer worldwide, accounting for approximately 10% of cancer patients. Early detection of the disease is important for the treatment of colorectal cancer patients, and histopathological examination is the gold standard for screening. However, the current lack of histopathological image datasets of colorectal cancer, especially enteroscope biopsies, hinders the accurate evaluation of computer-aided diagnosis techniques. Therefore, a multi-category colorectal cancer dataset is needed to test various medical image classification methods for high classification accuracy and strong robustness. METHODS: A new publicly available Enteroscope Biopsy Histopathological H&E Image Dataset (EBHI) is published in this paper. To demonstrate the effectiveness of the EBHI dataset, we have utilized several machine learning methods, convolutional neural networks and novel Transformer-based classifiers for experimentation and evaluation, using images with a magnification of 200×. RESULTS: Experimental results show that deep learning methods perform well on the EBHI dataset. Classical machine learning methods achieve a maximum accuracy of 76.02%, and deep learning methods achieve a maximum accuracy of 95.37%. CONCLUSION: To the best of our knowledge, EBHI is the first publicly available colorectal histopathology enteroscope biopsy dataset with four magnifications and five types of images of tumor differentiation stages, totaling 5532 images. We believe that EBHI could attract researchers to explore new classification algorithms for the automated diagnosis of colorectal cancer, which could help physicians and patients in clinical settings.


Subject(s)
Colorectal Neoplasms, Neural Networks (Computer), Humans, Algorithms, Computer-Assisted Diagnosis/methods, Biopsy, Colorectal Neoplasms/diagnostic imaging
15.
Front Med (Lausanne) ; 10: 1114673, 2023.
Article in English | MEDLINE | ID: mdl-36760405

ABSTRACT

Background and purpose: Colorectal cancer is a common fatal malignancy, the fourth most common cancer in men and the third most common cancer in women worldwide. Timely detection of the cancer in its early stages is essential for treating the disease. Currently, there is a lack of datasets for histopathological image segmentation of colorectal cancer, which often hampers assessment accuracy when computer technology is used to aid diagnosis. Methods: This study provides a new publicly available Enteroscope Biopsy Histopathological Hematoxylin and Eosin Image Dataset for Image Segmentation Tasks (EBHI-Seg). To demonstrate the validity and extensiveness of EBHI-Seg, experimental results on EBHI-Seg are evaluated using classical machine learning methods and deep learning methods. Results: The experimental results show that deep learning methods have better image segmentation performance on EBHI-Seg. The maximum Dice score for the classical machine learning methods is 0.948, while that for the deep learning methods is 0.965. Conclusion: This publicly available dataset contains 4,456 images of six types of tumor differentiation stages and the corresponding ground truth images. The dataset can help researchers develop new segmentation algorithms for the medical diagnosis of colorectal cancer, which can be used in clinical settings to help doctors and patients. EBHI-Seg is publicly available at: https://figshare.com/articles/dataset/EBHI-SEG/21540159/1.

16.
Br J Cancer ; 128(7): 1267-1277, 2023 03.
Article in English | MEDLINE | ID: mdl-36646808

ABSTRACT

BACKGROUND: To develop and test a Prostate Imaging Stratification Risk (PRISK) tool for precisely assessing the International Society of Urological Pathology Gleason grade (ISUP-GG) of prostate cancer (PCa). METHODS: This study included 1442 patients with prostate biopsy from two centres (training, n = 672; internal test, n = 231; external test, n = 539). PRISK is designed to classify ISUP-GG 0 (benign), ISUP-GG 1, ISUP-GG 2, ISUP-GG 3 and ISUP-GG 4/5. Clinical indicators and high-throughput MRI features of PCa were integrated and modelled with hybrid stacked-ensemble learning algorithms. RESULTS: PRISK achieved a macro area-under-curve of 0.783, 0.798 and 0.762 for the classification of ISUP-GGs in the training, internal test and external test data. Permitting an error of ±1 grade in ISUP-GG, the overall accuracy of PRISK is nearly comparable to that of invasive biopsy (training: 85.1% vs 88.7%; internal test: 85.1% vs 90.4%; external test: 90.4% vs 94.2%). PSA ≥ 20 ng/ml (odds ratio [OR], 1.58; p = 0.001) and PRISK ≥ GG 3 (OR, 1.45; p = 0.005) were two independent predictors of biochemical recurrence (BCR)-free survival, with a C-index of 0.76 (95% CI, 0.73-0.79) for BCR-free survival prediction. CONCLUSIONS: PRISK might offer a potential alternative for non-invasively assessing the ISUP-GG of PCa.
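The "±1 grade" accuracy reported above is an error-tolerant accuracy over ordinal labels. A minimal version, with grades encoded as integers and illustrative data rather than the PRISK cohorts:

```python
def tolerant_accuracy(pred_grades, true_grades, tol=1):
    """Fraction of cases whose predicted grade is within +/- tol of the
    truth; with tol=0 this reduces to ordinary accuracy."""
    pairs = list(zip(pred_grades, true_grades))
    hits = sum(abs(p - t) <= tol for p, t in pairs)
    return hits / len(pairs)

print(tolerant_accuracy([0, 1, 3, 4], [0, 2, 1, 4]))
```

Reporting both tol=0 and tol=1 accuracy, as the abstract effectively does, separates exact grading from clinically tolerable near-misses.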


Subject(s)
Deep Learning, Prostatic Neoplasms, Male, Humans, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/surgery, Neoplasm Grading, Prostate/diagnostic imaging, Prostate/surgery, Prostate/pathology, Magnetic Resonance Imaging
17.
Front Med (Lausanne) ; 9: 1072109, 2022.
Article in English | MEDLINE | ID: mdl-36569152

ABSTRACT

Introduction: Gastric cancer is the fifth most common and the fourth most deadly cancer in the world. Early detection guides the treatment of gastric cancer, and computer technology has advanced rapidly to assist physicians in diagnosing gastric cancer pathology images. Ensemble learning is a way to improve the accuracy of algorithms, and finding multiple learning models with complementary behavior is its foundation. This paper therefore compares the performance of multiple algorithms in anticipation of applying ensemble learning to a practical gastric cancer classification problem. Methods: This experimental platform explores the complementarity of sub-size pathology image classifiers when machine performance is insufficient. We chose seven classical machine learning classifiers and four deep learning classifiers for classification experiments on the GasHisSDB database. For classical machine learning, five different virtual image features were extracted and matched to multiple classifier algorithms; for deep learning, we chose three convolutional neural network classifiers and a novel Transformer-based classifier. Results: The experimental platform, on which a large number of classical machine learning and deep learning methods were run, demonstrates that classifier performance differs on GasHisSDB. Among the classical machine learning models, some classifiers classify the Abnormal category very well, while others excel at the Normal category; the deep learning models likewise include several that are complementary. Discussion: When machine performance is insufficient, suitable classifiers can be selected for ensemble learning. This experimental platform demonstrates that multiple classifiers are indeed complementary and can improve the efficiency of ensemble learning. This can better assist doctors in diagnosis, improve the detection of gastric cancer, and increase the cure rate.
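The complementary-classifier idea above can be sketched with a soft-voting ensemble: base learners with different inductive biases are combined by averaging their predicted probabilities. This is a minimal illustration on synthetic data, not the authors' pipeline; the real study used pathology image features from GasHisSDB.

```python
# Soft-voting ensemble of three classifiers with different inductive
# biases, the basic mechanism behind combining complementary models.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for extracted image features (binary labels).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.3f}")
```

Soft voting helps precisely when the base models err on different examples, which is the complementarity the abstract measures.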

18.
Front Physiol ; 13: 994304, 2022.
Article in English | MEDLINE | ID: mdl-36311222

ABSTRACT

Objectives: We aimed to establish machine learning models based on texture analysis to predict pelvic lymph node metastasis (PLNM) and expression of cyclooxygenase-2 (COX-2) in cervical cancer with PET/CT-negative pelvic lymph nodes (PLN). Methods: Eight hundred and thirty-seven texture features were extracted from PET/CT images of 148 early-stage cervical cancer patients with negative PLN. The machine learning models were established by logistic regression on selected features and evaluated by the area under the curve (AUC). The correlation between the selected PET/CT texture features predicting PLNM or COX-2 expression and the corresponding immunohistochemical (IHC) texture features was analyzed with the Spearman test. Results: Fourteen texture features were retained to calculate the Rad-score for PLNM and COX-2. The PLNM model showed good prediction accuracy in the training and testing datasets (AUC = 0.817, p < 0.001; AUC = 0.786, p < 0.001, respectively). The COX-2 model also performed well in predicting COX-2 expression levels in the training and testing datasets (AUC = 0.814, p < 0.001; AUC = 0.748, p = 0.001). The wavelet-LHH-GLCM ClusterShade of the PET image selected to predict PLNM was weakly correlated with the corresponding feature of the IHC image (r = -0.165, p < 0.05), and the wavelet-LLL-GLRLM LongRunEmphasis of the PET image selected to predict COX-2 showed a weak positive correlation with the corresponding IHC feature (r = 0.238, p < 0.05). Conclusion: This study underlined the significant application of machine learning models based on PET/CT texture analysis for predicting PLNM and COX-2 expression, which could be a novel tool to assist the clinical management of cervical cancer with negative PLN on PET/CT images.
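The modelling step described above (logistic regression over selected texture features, scored by AUC) can be sketched as follows. The feature values and outcome here are synthetic; the actual study used 14 selected PET/CT texture features from 148 patients.

```python
# Logistic-regression "Rad-score" model evaluated by AUC, on synthetic
# data shaped like the study (148 subjects, 14 selected features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, k = 148, 14
X = rng.normal(size=(n, k))
# Outcome weakly driven by the first few features, mimicking a signal.
y = (X[:, :3].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The Rad-score is the fitted linear predictor; AUC is computed from
# the predicted probability, which ranks subjects identically.
rad_score = X_te @ model.coef_.ravel() + model.intercept_[0]
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```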

19.
BMC Cancer ; 22(1): 947, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36050751

ABSTRACT

PURPOSE: To explore the diagnostic value of integrated positron emission tomography/magnetic resonance imaging (PET/MRI) for the staging of endometrial carcinoma and to investigate the associations between quantitative parameters derived from PET/MRI and clinicopathological characteristics of endometrial carcinoma. METHODS: Altogether, 57 patients with endometrial carcinoma who underwent PET/MRI and PET/computed tomography (PET/CT) preoperatively were included. The diagnostic performance of PET/MRI and PET/CT for staging was compared by three readers. Associations between PET/MRI quantitative parameters of primary tumor lesions and clinicopathological characteristics of endometrial carcinoma were analyzed. Histopathological results were used as the reference standard. RESULTS: The overall accuracy of International Federation of Gynecology and Obstetrics (FIGO) staging was 86.0% for PET/MRI and 77.2% for PET/CT. PET/MRI had higher accuracy in diagnosing myometrial invasion and cervical invasion and equivalent accuracy in diagnosing pelvic lymph node metastasis compared with PET/CT, although the differences were not significant. All PET/MRI quantitative parameters differed significantly between stage I and stage III tumors. Only SUVmax/ADCmin differed significantly between stage I and II tumors, and no parameter differed significantly between stage II and III tumors. SUVmax/ADCmin had the highest area under the receiver operating characteristic (ROC) curve for differentiating stage I tumors from the other stages of endometrial carcinoma. CONCLUSIONS: PET/MRI had higher accuracy than PET/CT for the staging of endometrial carcinoma, mainly for FIGO stage I tumors. PET/MRI quantitative parameters, especially SUVmax/ADCmin, were associated with tumor stage and other clinicopathological characteristics. Hence, PET/MRI may be a valuable imaging diagnostic tool for preoperative staging of endometrial carcinoma.


Subject(s)
Endometrial Neoplasms , Fluorodeoxyglucose F18 , Endometrial Neoplasms/diagnostic imaging , Endometrial Neoplasms/pathology , Female , Humans , Magnetic Resonance Imaging/methods , Neoplasm Staging , Positron Emission Tomography Computed Tomography , Positron-Emission Tomography/methods , Radiopharmaceuticals
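Evaluating a single continuous parameter such as the SUVmax/ADCmin ratio with a ROC curve, as in the study above, needs no model fitting: the parameter itself serves as the decision score. The values below are simulated for illustration; real SUV and ADC measurements come from PET/MRI.

```python
# AUC of one continuous parameter (here a simulated SUVmax/ADCmin
# ratio) for separating stage I from stage II/III tumors.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
stage_I = rng.normal(loc=5.0, scale=2.0, size=30)   # lower ratios (simulated)
advanced = rng.normal(loc=9.0, scale=2.5, size=27)  # higher ratios (simulated)

ratio = np.concatenate([stage_I, advanced])
label = np.concatenate([np.zeros(30), np.ones(27)])  # 1 = stage II/III

auc = roc_auc_score(label, ratio)  # the raw ratio is the score
print(f"AUC of SUVmax/ADCmin, stage I vs II-III: {auc:.3f}")
```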
20.
EClinicalMedicine ; 53: 101662, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36147628

ABSTRACT

Background: Accurate identification of ovarian cancer (OC) is of paramount importance for clinical treatment success. Artificial intelligence (AI) is a potentially reliable assistant for medical image recognition. We systematically reviewed, for the first time, articles on the diagnostic performance of AI in OC from medical imaging. Methods: The Medline, Embase, IEEE, PubMed, Web of Science, and Cochrane Library databases were searched for related studies published until August 1, 2022. Inclusion criteria were studies that developed or used AI algorithms in the diagnosis of OC from medical images. The binary diagnostic accuracy data were extracted to derive the outcomes of interest: sensitivity (SE), specificity (SP), and area under the curve (AUC). The study was registered with PROSPERO, CRD42022324611. Findings: Thirty-four eligible studies were identified, of which twenty-eight were included in the meta-analysis, with a pooled SE of 88% (95% CI: 85-90%), SP of 85% (82-88%), and AUC of 0.93 (0.91-0.95). Analysis of different algorithms revealed a pooled SE of 89% (85-92%) and SP of 88% (82-92%) for machine learning, and a pooled SE of 88% (84-91%) and SP of 84% (80-87%) for deep learning. Acceptable diagnostic performance was demonstrated in subgroup analyses stratified by imaging modality (ultrasound, magnetic resonance imaging, or computed tomography), sample size (≤300 or >300), AI algorithms versus clinicians, year of publication (before or after 2020), geographical distribution (Asia or non-Asia), and risk-of-bias level (≥3 or <3 domains at low risk). Interpretation: AI algorithms exhibited favorable performance for the diagnosis of OC through medical imaging. More rigorous reporting standards that address the specific challenges of AI research could improve future studies. Funding: This work was supported by the Natural Science Foundation of China (No. 82073647 to Q-JW and No. 82103914 to T-TG), the LiaoNing Revitalization Talents Program (No. XLYC1907102 to Q-JW), and the 345 Talent Project of Shengjing Hospital of China Medical University (No. M0268 to Q-JW and No. M0952 to T-TG).
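A pooled sensitivity like the one reported above is typically a weighted summary of per-study estimates. This is an illustrative fixed-effect sketch (not the authors' code, which would likely use a bivariate random-effects model): per-study sensitivities are pooled on the logit scale with inverse-variance weights. The study counts below are invented examples, not data from the review.

```python
# Fixed-effect pooling of per-study sensitivities on the logit scale,
# the kind of summary behind pooled SE/SP figures in meta-analyses.
import math

studies = [  # (true positives, false negatives) per hypothetical study
    (45, 6), (80, 12), (33, 4), (120, 15),
]

weights, logits = [], []
for tp, fn in studies:
    sens = tp / (tp + fn)
    var = 1.0 / tp + 1.0 / fn  # approximate variance of logit(sens)
    weights.append(1.0 / var)
    logits.append(math.log(sens / (1.0 - sens)))

pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
lo, hi = pooled_logit - 1.96 * se, pooled_logit + 1.96 * se
inv = lambda x: 1.0 / (1.0 + math.exp(-x))  # back-transform to a proportion
print(f"pooled sensitivity: {inv(pooled_logit):.3f} "
      f"(95% CI {inv(lo):.3f}-{inv(hi):.3f})")
```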
