Results 1 - 20 of 2,802
1.
Stud Health Technol Inform ; 314: 183-184, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38785028

ABSTRACT

Melanoma is an extremely aggressive type of skin lesion. Although its mortality rate is high, the projected five-year survival rate when it is detected at an initial stage is notably high. Advances in Artificial Intelligence in recent years have enabled diverse solutions aimed at assisting medical diagnosis. This proposal presents an architecture for melanoma classification.


Subject(s)
Melanoma, Skin Neoplasms, Melanoma/classification, Humans, Skin Neoplasms/classification, Artificial Intelligence, Diagnosis, Computer-Assisted/methods
2.
BMC Oral Health ; 24(1): 598, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38778322

ABSTRACT

BACKGROUND: Machine learning (ML), a branch of artificial intelligence (AI), could help clinicians and oral pathologists address diagnostic problems involving potentially malignant lesions, oral cancer, periodontal diseases, salivary gland disease, oral infections, immune-mediated disease, and others. AI can detect micro-features beyond the resolution of the human eye and provide solutions in critical diagnostic cases. OBJECTIVE: The objective of this study was to develop software, fed with all the necessary data, to act as an AI-based program for diagnosing oral diseases. Our research question was: can we develop computer-aided software for accurate diagnosis of oral diseases based on clinical and histopathological data inputs? METHOD: The study sample included clinical images, patient symptoms, radiographic images, histopathological images, and texts for the oral diseases of interest (premalignant lesions, oral cancer, salivary gland neoplasms, immune-mediated oral mucosal lesions, and oral reactive lesions). In total, 28 diseases were enrolled, retrieved from the archives of the oral and maxillofacial pathology department. The material comprised 11,200 texts and 3,000 images (2,800 images for training the program, 100 images for testing it, and 100 cases for calculating accuracy, sensitivity, and specificity). RESULTS: The correct diagnosis rates for group 1 (software users), group 2 (microscope users), and group 3 (hybrid) were 87%, 90.6%, and 95%, respectively. Inter-observer reliability was assessed with Cronbach's alpha and the intraclass correlation coefficient, yielding values of 0.934, 0.712, and 0.703 for groups 1, 2, and 3, respectively. All groups showed acceptable reliability, with the Diagnosis Oral Diseases Software (DODS) group showing the highest reliability. However, the accuracy, sensitivity, and specificity of the software were lower than those of oral pathologists (master's degree). CONCLUSION: The correct diagnosis rate of DODS was comparable to that of oral pathologists using standard microscopic examination. The DODS program could be used as a diagnostic guidance tool with high reliability and accuracy.
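The inter-observer reliability figures above come from Cronbach's alpha. As an illustration (not from the paper), a minimal computation of Cronbach's alpha over a cases-by-raters score matrix might look like:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_cases, n_raters) matrix of diagnosis scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of raters/items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-rater variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the case totals
    return k / (k - 1) * (1.0 - item_var / total_var)

# Perfectly consistent raters yield alpha = 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Values near 1 indicate that raters (or, here, diagnostic pathways) rank cases consistently, which is how the abstract's 0.934 for the software group should be read.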


Subject(s)
Artificial Intelligence, Mouth Diseases, Software, Humans, Mouth Diseases/pathology, Mouth Diseases/diagnosis, Mouth Diseases/diagnostic imaging, Diagnosis, Computer-Assisted/methods, Sensitivity and Specificity, Mouth Neoplasms/pathology, Mouth Neoplasms/diagnostic imaging, Mouth Neoplasms/diagnosis, Machine Learning
3.
Comput Biol Med ; 175: 108483, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38704900

ABSTRACT

The timely and accurate diagnosis of breast cancer is pivotal for effective treatment, but current automated mammography classification methods have their constraints. In this study, we introduce a hybrid model that couples the Extreme Learning Machine (ELM) with FuNet transfer learning on the MIAS dataset. The approach embeds an Enhanced Quantum-Genetic Binary Grey Wolf Optimizer (Q-GBGWO) within the ELM framework to elevate its performance. Our contributions are twofold: first, we employ a feature fusion strategy to optimize feature extraction, significantly enhancing breast cancer classification accuracy; second, Q-GBGWO optimizes the ELM parameters, demonstrating its efficacy within the ELM classifier and marking a considerable advance beyond traditional methods. Comparative evaluations against various optimization techniques show the strong performance of the Q-GBGWO-ELM model: classification accuracy of 96.54% for Normal, 97.24% for Benign, and 98.01% for Malignant classes; sensitivity of 96.02%, 96.54%, and 97.75%; and specificity of 96.69%, 97.38%, and 98.16%, respectively. These metrics reflect the model's ability to classify the three breast tissue classes accurately. Our approach integrates image data, deep feature extraction, and optimized ELM classification, marking a step toward earlier breast cancer detection and better patient outcomes.
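For readers unfamiliar with the base classifier, an Extreme Learning Machine trains only its output layer: hidden weights are random and fixed, and output weights come from a single least-squares solve (the optimizer in the paper tunes this setup further). A minimal sketch, with hypothetical synthetic data standing in for MIAS mammogram features:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=32):
    """ELM training: random fixed hidden layer + pseudoinverse output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # closed-form least-squares solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Two well-separated synthetic classes stand in for mammogram feature vectors.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.repeat([0, 1], 50)
Y = np.eye(2)[y]                                 # one-hot targets
W, b, beta = elm_fit(X, Y)
acc = (elm_predict(X, W, b, beta) == y).mean()
```

Because training is a single linear solve, ELMs are fast, which is what makes wrapping them in a metaheuristic parameter search affordable.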


Subject(s)
Breast Neoplasms, Machine Learning, Humans, Breast Neoplasms/diagnostic imaging, Female, Mammography/methods, Diagnosis, Computer-Assisted/methods
4.
Sci Rep ; 14(1): 10714, 2024 05 10.
Article in English | MEDLINE | ID: mdl-38730250

ABSTRACT

A prompt diagnosis of breast cancer in its earliest phases is necessary for effective treatment. While Computer-Aided Diagnosis systems play a crucial role in automated mammography image processing, interpretation, grading, and early detection of breast cancer, existing approaches face limitations in achieving optimal accuracy. This study addresses these limitations by hybridizing the improved quantum-inspired binary Grey Wolf Optimizer with a Support Vector Machine using the Radial Basis Function kernel. This hybrid approach aims to enhance the accuracy of breast cancer classification by determining the optimal Support Vector Machine parameters. The motivation for this hybridization lies in the need for improved classification performance compared with existing optimizers such as Particle Swarm Optimization and the Genetic Algorithm. We evaluate the efficacy of the proposed IQI-BGWO-SVM approach on the MIAS dataset using several metrics, including accuracy, sensitivity, and specificity, and additionally explore IQI-BGWO-SVM for feature selection and compare the results. Experimental findings show that the suggested IQI-BGWO-SVM technique outperforms state-of-the-art classification methods on the MIAS dataset, with mean accuracy, sensitivity, and specificity of 99.25%, 98.96%, and 100%, respectively, under a tenfold cross-validation data partition.
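The core of a binary Grey Wolf Optimizer is the standard GWO position update squashed through a transfer function so positions stay binary; the quantum-inspired encoding in the paper refines this further. Below is a generic sketch of that idea on a toy bit-matching fitness, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def binary_gwo(fitness, dim, n_wolves=12, iters=60):
    """Binary GWO: wolves follow the three best solutions; a sigmoid
    transfer function converts the continuous update into bit choices."""
    pos = rng.integers(0, 2, (n_wolves, dim)).astype(float)
    best, best_score = None, -np.inf
    for t in range(iters):
        scores = np.array([fitness(p) for p in pos])
        if scores.max() > best_score:
            best_score, best = scores.max(), pos[scores.argmax()].copy()
        alpha, beta, delta = pos[np.argsort(scores)[::-1][:3]]
        a = 2.0 * (1 - t / iters)                 # exploration -> exploitation
        for i in range(n_wolves):
            step = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                step += leader - A * np.abs(C * leader - pos[i])
            prob = 1 / (1 + np.exp(-step / 3))    # sigmoid transfer function
            pos[i] = (rng.random(dim) < prob).astype(float)
    return best, best_score

# Toy fitness: recover a target bit mask (e.g. a feature subset or an
# encoded SVM parameter choice) by counting matching bits.
target = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
mask, score = binary_gwo(lambda p: (p == target).sum(), dim=8)
```

In the paper's setting the fitness would instead be cross-validated SVM accuracy for the parameters encoded by each wolf.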


Subject(s)
Algorithms, Breast Neoplasms, Support Vector Machine, Humans, Breast Neoplasms/diagnosis, Female, Mammography/methods, Diagnosis, Computer-Assisted/methods
5.
Artif Intell Med ; 152: 102883, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38657439

ABSTRACT

Hematology is the study of the diagnosis and treatment of blood diseases, including cancer. Cancer is considered among the deadliest diseases across all age categories, and diagnosing it at an early stage is essential for a cure. Hematologists and pathologists rely on microscopic evaluation of blood or bone marrow smear images to diagnose blood-related ailments. The abundance of overlapping cells, cells of varying densities among platelets, uneven illumination, and the sheer number of red and white blood cells make diagnosis from blood cell images difficult, and the traditional workflow demands substantial, time-consuming effort from pathologists. Machine learning and deep learning techniques now make it possible to automate diagnostic processes and categorize microscopic blood cells, improving both the accuracy and the speed of the procedure, since models developed with these methods can underpin assistive tools. In this article, we acquired, analyzed, scrutinized, and finally selected around 57 research papers on machine learning and deep learning methodologies employed in the diagnosis and classification of leukemia over the past 20 years (published between 2003 and 2023), drawn from PubMed, IEEE, Science Direct, Google Scholar, and other pertinent sources. Our primary emphasis is on evaluating the advantages and limitations of comparable research endeavors so as to provide a concise and valuable research directive for fellow researchers in the field.


Subject(s)
Deep Learning, Hematologic Neoplasms, Machine Learning, Humans, Hematologic Neoplasms/diagnosis, Hematologic Neoplasms/classification, Diagnosis, Computer-Assisted/methods
6.
Comput Biol Med ; 175: 108394, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38657464

ABSTRACT

Gastroesophageal reflux disease (GERD) profoundly compromises quality of life, and prolonged untreated cases carry a heightened risk of severe complications such as esophageal injury and esophageal carcinoma, making early diagnosis paramount. This study introduces a wrapper-based feature selection model for GERD prediction, named bSCCRUN-FKNN-FS, built on an enhanced Runge Kutta algorithm (SCCRUN) and fuzzy k-nearest neighbors (FKNN). The Runge Kutta algorithm (RUN) is a metaheuristic designed around the Runge-Kutta method, but its local search capability and convergence accuracy are insufficient. To improve convergence accuracy, spiraling communication and collaboration (SCC) is introduced: by facilitating information exchange among population individuals, SCC expands the solution search space. The optimization capabilities of SCCRUN are validated experimentally against classical and state-of-the-art algorithms on the IEEE CEC 2017 benchmark. On this basis, the bSCCRUN-FKNN-FS model is proposed. Between 2019 and 2023, a dataset of 179 cases (110 GERD patients and 69 healthy individuals) was collected from Zhejiang Provincial People's Hospital and used to compare the proposed model against similar algorithms. Features such as the internal diameter of the esophageal hiatus during distention, the esophagogastric junction diameter during distention, and the external diameter of the esophageal hiatus during non-distention were found to play crucial roles in GERD prediction. Experimental findings demonstrate the outstanding performance of the proposed model, with a predictive accuracy of 93.824%, underscoring its advantage in identifying and predicting GERD patients.
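Fuzzy k-nearest neighbors, the classifier wrapped by the feature selector above, assigns graded class memberships rather than a hard vote, weighting each neighbor by inverse distance. A minimal sketch of the classic Keller-style rule (illustrative, not the authors' code):

```python
import numpy as np

def fuzzy_knn(X_train, y_train, x, k=3, m=2.0):
    """Fuzzy k-NN: class membership = distance-weighted share of the
    k nearest neighbors; m > 1 controls the fuzziness of the weighting."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]                               # k nearest neighbors
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
    classes = np.unique(y_train)
    u = np.array([w[y_train[nn] == c].sum() for c in classes])
    u /= u.sum()                                         # memberships sum to 1
    return classes[np.argmax(u)], u

# Toy 2-D data: the query point sits inside the class-0 cluster.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
label, memberships = fuzzy_knn(X, y, np.array([0.05, 0.05]))
```

The membership vector, not just the hard label, is what gives FKNN its appeal in clinical prediction: it conveys how confidently a case belongs to each class.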


Subject(s)
Algorithms, Gastroesophageal Reflux, Gastroesophageal Reflux/physiopathology, Gastroesophageal Reflux/diagnosis, Humans, Male, Female, Fuzzy Logic, Early Diagnosis, Diagnosis, Computer-Assisted/methods
7.
Comput Biol Med ; 175: 108519, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38688128

ABSTRACT

Lung cancer seriously threatens human health owing to its high lethality and morbidity, and lung adenocarcinoma is one of its most common subtypes. Pathological diagnosis is regarded as the gold standard for cancer diagnosis, yet traditional manual screening of lung cancer pathology images is time-consuming and error-prone. Computer-aided diagnostic systems have emerged to solve this problem, but current methods cannot fully exploit the beneficial features inherent in patches and are characterized by high model complexity and significant computational effort. In this study, a deep learning framework called Multi-Scale Network (MSNet) is proposed for the automatic detection of lung adenocarcinoma pathology images. MSNet is designed to efficiently harness the valuable features within data patches while reducing model complexity, computational demands, and storage requirements. The framework employs a dual data-stream input method that combines Swin Transformer and MLP-Mixer models to capture global information across patches and local information within each patch. MSNet then uses a Multilayer Perceptron (MLP) module to fuse the local and global features and perform classification, outputting the final detection results. In addition, a dataset of lung adenocarcinoma pathology images containing three categories was created for training and testing the MSNet framework. Experimental results show that the diagnostic accuracy of MSNet on lung adenocarcinoma pathology images is 96.55%. In summary, MSNet delivers high classification performance and shows effectiveness and potential for the classification of lung adenocarcinoma pathology images.
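MSNet's final stage, as described, fuses local (per-patch) and global (cross-patch) features with an MLP before classification. The sketch below is only a schematic stand-in for that fusion step: random weights and simple pooled statistics replace the Swin Transformer and MLP-Mixer streams, whose outputs would be the real inputs here.

```python
import numpy as np

rng = np.random.default_rng(2)

def fuse_and_classify(patch_feats, W1, W2):
    """Concatenate a local and a global summary of patch features,
    then classify with a one-hidden-layer MLP (softmax output)."""
    local = patch_feats.mean(axis=0)           # stand-in for the local stream
    glob = patch_feats.max(axis=0)             # stand-in for the global stream
    fused = np.concatenate([local, glob])      # feature fusion
    h = np.tanh(fused @ W1)
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()                         # probabilities over 3 classes

feats = rng.normal(size=(16, 8))               # 16 patches, 8-dim features each
W1 = rng.normal(size=(16, 32)) * 0.1           # fused dim = 8 + 8 = 16
W2 = rng.normal(size=(32, 3)) * 0.1
probs = fuse_and_classify(feats, W1, W2)
```

The design point the abstract makes is that both streams feed one lightweight fusion head, so the expensive backbones do not have to be duplicated per task.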


Subject(s)
Adenocarcinoma of Lung, Lung Neoplasms, Neural Networks, Computer, Humans, Adenocarcinoma of Lung/diagnostic imaging, Adenocarcinoma of Lung/pathology, Adenocarcinoma of Lung/classification, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/pathology, Lung Neoplasms/classification, Deep Learning, Image Interpretation, Computer-Assisted/methods, Diagnosis, Computer-Assisted/methods
8.
Int Ophthalmol ; 44(1): 191, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38653842

ABSTRACT

Optical Coherence Tomography (OCT) is widely recognized as the leading modality for assessing retinal diseases, playing a crucial role in diagnosing retinopathy while remaining non-invasive. The increasing volume of OCT images underscores the growing importance of automated image analysis. Age-related Macular Degeneration (AMD) and Diabetic Macular Edema (DME) are among the most common causes of visual impairment, and early detection and timely intervention for diabetes-related conditions are essential for preventing complications and reducing the risk of blindness. This study introduces a Computer-Aided Diagnosis (CAD) system based on Convolutional Neural Network (CNN) models, aiming to identify and classify retinal OCT images into AMD, DME, and Normal classes. Leveraging the efficiency of CNNs for feature learning and classification, several networks, including pre-trained VGG16, VGG19, and Inception_V3, a custom from-scratch model, and the BCNN(VGG16)², BCNN(VGG19)², and BCNN(Inception_V3)² models, are developed for this classification task. The proposed approach was evaluated on two datasets: the public DUKE dataset and a private Tunisian dataset. The combination of the Inception_V3 model with features extracted by the proposed custom CNN achieved the highest accuracy, 99.53%, on the DUKE dataset. The results on both the public DUKE and the Tunisian datasets demonstrate the proposed approach as a significant tool for efficient and automatic retinal OCT image classification.


Subject(s)
Deep Learning, Macular Degeneration, Macular Edema, Tomography, Optical Coherence, Humans, Tomography, Optical Coherence/methods, Macular Degeneration/diagnosis, Macular Edema/diagnosis, Macular Edema/diagnostic imaging, Macular Edema/etiology, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/diagnostic imaging, Neural Networks, Computer, Retina/diagnostic imaging, Retina/pathology, Diagnosis, Computer-Assisted/methods, Aged, Female, Male
9.
Med Image Anal ; 94: 103157, 2024 May.
Article in English | MEDLINE | ID: mdl-38574544

ABSTRACT

Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained on high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality depends heavily on the endoscopist's skill and experience and on the specifications of the screening system; factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of the images. This so-called domain gap between the data used to develop a system and the data it encounters after deployment, and its impact on the performance of the deep neural networks (DNNs) underlying endoscopic CAD systems, remains largely unexplored. As many such systems, e.g., for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both imaging equipment and experience vary considerably. This study therefore evaluates the impact of the domain gap on the clinical performance of CADe/CADx for various endoscopic applications. We leverage two publicly available datasets (KVASIR-SEG and GIANA) and two in-house datasets, and investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations as well as on a prospectively collected dataset of 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable performance decline of 11.6% (±1.5) relative to the reference within the clinically calibrated bounds of image degradation. Nevertheless, more advanced DNN architectures and self-supervised in-domain pre-training mitigate this drop to 7.7% (±2.03), and these enhancements also yield the highest performance on the manually collected test set of lower-subjective-quality images. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications, highlights the importance of robustness evaluation when developing DNNs for endoscopy, and proposes strategies to mitigate performance loss.
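The clinically calibrated image degradations the study applies can be imitated with simple synthetic corruptions. A generic sketch (box blur as a motion-blur stand-in, dimming, and sensor noise; not the paper's calibrated pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)

def degrade(img, blur_k=5, brightness=0.8, noise_std=0.02):
    """Apply endoscopy-style degradations to a [0, 1] grayscale image:
    separable box blur, brightness reduction, additive Gaussian noise."""
    kernel = np.ones(blur_k) / blur_k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, out)
    out = brightness * out + rng.normal(0.0, noise_std, img.shape)
    return np.clip(out, 0.0, 1.0)

img = np.full((64, 64), 0.9)     # bright dummy frame
bad = degrade(img)
```

Sweeping parameters like `blur_k` and `brightness` over clinically observed ranges is one way to chart how quickly a model's accuracy decays across the domain gap.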


Subject(s)
Diagnosis, Computer-Assisted, Neural Networks, Computer, Humans, Diagnosis, Computer-Assisted/methods, Endoscopy, Gastrointestinal, Image Processing, Computer-Assisted/methods
10.
Respir Res ; 25(1): 177, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658980

ABSTRACT

BACKGROUND: Computer Aided Lung Sound Analysis (CALSA) aims to overcome limitations associated with standard lung auscultation by removing the subjective component and allowing quantification of sound characteristics. In this proof-of-concept study, a novel automated approach was evaluated in real patient data by comparing lung sound characteristics to structural and functional imaging biomarkers. METHODS: Patients with cystic fibrosis (CF) aged >5 y were recruited in a prospective cross-sectional study. CT scans were analyzed by the CF-CT scoring method and Functional Respiratory Imaging (FRI). A digital stethoscope was used to record lung sounds at six chest locations. The following sound characteristics were determined: expiration-to-inspiration (E/I) signal power ratios within different frequency ranges, the number of crackles per respiratory phase, and wheeze parameters. Linear mixed-effects models were computed to relate CALSA parameters to imaging biomarkers at the lobar level. RESULTS: In total, 222 recordings from 25 CF patients were included. Significant associations were found between E/I ratios and structural abnormalities, of which the ratio between 200 and 400 Hz appeared most clinically relevant owing to its relation with bronchiectasis, mucus plugging, bronchial wall thickening, and air trapping on CT. The number of crackles was also associated with multiple structural abnormalities as well as with regional airway resistance determined by FRI. Wheeze parameters were not considered in the statistical analysis, since wheezing was detected in only one recording. CONCLUSIONS: The present study is the first to investigate associations between auscultatory findings and imaging biomarkers, which are considered the gold standard for evaluating the respiratory system. Despite the exploratory nature of this study, the results showed various meaningful associations that highlight the potential value of automated CALSA as a novel non-invasive outcome measure in future research and clinical practice.


Subject(s)
Biomarkers, Cystic Fibrosis, Respiratory Sounds, Humans, Cross-Sectional Studies, Male, Female, Prospective Studies, Adult, Cystic Fibrosis/physiopathology, Cystic Fibrosis/diagnostic imaging, Young Adult, Adolescent, Auscultation/methods, Tomography, X-Ray Computed/methods, Lung/diagnostic imaging, Lung/physiopathology, Child, Proof of Concept Study, Diagnosis, Computer-Assisted/methods, Middle Aged
11.
Comput Methods Programs Biomed ; 247: 108101, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38432087

ABSTRACT

BACKGROUND AND OBJECTIVE: Deep learning approaches are being increasingly applied for medical computer-aided diagnosis (CAD). However, these methods generally target only specific image-processing tasks, such as lesion segmentation or benign state prediction. For the breast cancer screening task, single feature extraction models are generally used, which directly extract only those potential features from the input mammogram that are relevant to the target task. This can lead to the neglect of other important morphological features of the lesion as well as other auxiliary information from the internal breast tissue. To obtain more comprehensive and objective diagnostic results, in this study, we developed a multi-task fusion model that combines multiple specific tasks for CAD of mammograms. METHODS: We first trained a set of separate, task-specific models, including a density classification model, a mass segmentation model, and a lesion benignity-malignancy classification model, and then developed a multi-task fusion model that incorporates all of the mammographic features from these different tasks to yield comprehensive and refined prediction results for breast cancer diagnosis. RESULTS: The experimental results showed that our proposed multi-task fusion model outperformed other related state-of-the-art models in both breast cancer screening tasks in the publicly available datasets CBIS-DDSM and INbreast, achieving a competitive screening performance with area-under-the-curve scores of 0.92 and 0.95, respectively. CONCLUSIONS: Our model not only allows an overall assessment of lesion types in mammography but also provides intermediate results related to radiological features and potential cancer risk factors, indicating its potential to offer comprehensive workflow support to radiologists.


Subject(s)
Breast Neoplasms, Humans, Female, Breast Neoplasms/diagnosis, Early Detection of Cancer, Mammography/methods, Neural Networks, Computer, Diagnosis, Computer-Assisted/methods, Breast/diagnostic imaging, Breast/pathology
12.
PLoS One ; 19(3): e0298527, 2024.
Article in English | MEDLINE | ID: mdl-38466701

ABSTRACT

Lung cancer is one of the leading causes of cancer-related deaths worldwide; to reduce the mortality rate, early detection and proper treatment must be ensured. Computer-aided diagnosis methods analyze different modalities of medical images to increase diagnostic precision. In this paper, we propose an ensemble model, the Mitscherlich function-based Ensemble Network (MENet), which combines the prediction probabilities obtained from three deep learning models, Xception, InceptionResNetV2, and MobileNetV2, to improve the accuracy of lung cancer prediction. The ensemble approach is based on the Mitscherlich function, which produces a fuzzy rank used to combine the outputs of the base classifiers. The proposed method is trained and tested on two publicly available lung cancer datasets, the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) dataset and LIDC-IDRI, both computed tomography (CT) scan datasets. The results, in terms of several standard metrics, show that the proposed method performs better than state-of-the-art methods. The code for the proposed work is available at https://github.com/SuryaMajumder/MENet.
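The fuzzy-rank ensemble maps each base model's class confidences through the Mitscherlich growth function and fuses the results. Below is a schematic sketch of that idea; the exact rank generation, normalization, and fusion in MENet may differ:

```python
import numpy as np

def mitscherlich(p, a=1.0, b=1.0, c=1.0):
    """Mitscherlich growth curve: a monotone map from confidence to fuzzy rank."""
    return a - b * np.exp(-c * p)

def fuzzy_rank_fusion(prob_list):
    """Fuse class-probability vectors from several models via summed fuzzy ranks."""
    fused = np.sum([mitscherlich(np.asarray(p)) for p in prob_list], axis=0)
    return int(np.argmax(fused)), fused

# Three hypothetical base classifiers (e.g. Xception / InceptionResNetV2 /
# MobileNetV2 softmax outputs) that mostly agree on class 2.
probs = [[0.1, 0.2, 0.7], [0.2, 0.1, 0.7], [0.3, 0.4, 0.3]]
winner, fused = fuzzy_rank_fusion(probs)
```

Because the saturating curve compresses differences among already-confident predictions, a fuzzy-rank fusion weights moderate disagreements differently than a plain probability average would.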


Subject(s)
Lung Neoplasms, Humans, Lung Neoplasms/diagnostic imaging, Lung/diagnostic imaging, Tomography, X-Ray Computed/methods, Diagnosis, Computer-Assisted/methods, Iraq
13.
Comput Biol Med ; 172: 108267, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38479197

ABSTRACT

Early detection of colon adenomatous polyps is pivotal in reducing colon cancer risk. In this context, accurately distinguishing between adenomatous polyp subtypes, especially tubular and tubulovillous, from hyperplastic variants is crucial. This study introduces a cutting-edge computer-aided diagnosis system optimized for this task. Our system employs advanced Supervised Contrastive learning to ensure precise classification of colon histopathology images. Significantly, we have integrated the Big Transfer model, which has gained prominence for its exemplary adaptability to visual tasks in medical imaging. Our novel approach discerns between in-class and out-of-class images, thereby elevating its discriminatory power for polyp subtypes. We validated our system using two datasets: a specially curated one and the publicly accessible UniToPatho dataset. The results reveal that our model markedly surpasses traditional deep convolutional neural networks, registering classification accuracies of 87.1% and 70.3% for the custom and UniToPatho datasets, respectively. Such results emphasize the transformative potential of our model in polyp classification endeavors.
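Supervised contrastive learning, the training strategy named above, pulls same-class embeddings together and pushes different-class embeddings apart. A numpy sketch of the supervised contrastive loss evaluated on pre-computed embeddings (illustrative only; the paper trains full networks with it):

```python
import numpy as np

def supcon_loss(features, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalised embeddings:
    for each anchor, all other same-class samples act as positives."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / tau                      # temperature-scaled similarities
    n, total = len(labels), 0.0
    for i in range(n):
        others = np.arange(n) != i           # exclude the anchor itself
        log_denom = np.log(np.exp(sim[i][others]).sum())
        pos = others & (labels == labels[i])
        if pos.any():
            total += -(sim[i][pos] - log_denom).mean()
    return total / n

# Tight same-class clusters should score a lower loss than shuffled labels.
emb = np.array([[1.0, 0.0], [0.95, 0.05], [0.0, 1.0], [0.05, 0.95]])
good = supcon_loss(emb, np.array([0, 0, 1, 1]))
bad = supcon_loss(emb, np.array([0, 1, 0, 1]))
```

The loss directly rewards the in-class/out-of-class separation the abstract credits for the model's discriminatory power between polyp subtypes.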


Subject(s)
Adenomatous Polyps, Colonic Polyps, Humans, Colonic Polyps/diagnostic imaging, Neural Networks, Computer, Diagnosis, Computer-Assisted/methods, Diagnostic Imaging
14.
Med Eng Phys ; 124: 104101, 2024 02.
Article in English | MEDLINE | ID: mdl-38418029

ABSTRACT

With the advancement of deep learning technology, computer-aided diagnosis (CAD) is playing an increasing role in medical diagnosis. In particular, the emergence of Transformer-based models has broadened the application of computer vision in medical image processing. In the diagnosis of thyroid disease, classifying benign versus malignant thyroid nodules under the TI-RADS system is heavily influenced by the subjective judgment of ultrasonographers and also imposes an extremely heavy workload on them. To address this, we propose the Swin-Residual Transformer (SRT), which incorporates residual blocks and a triplet loss into the Swin Transformer (SwinT). It improves sensitivity to both global and localized features of thyroid nodules and better distinguishes small feature differences. In our experiments, the SRT model achieves an accuracy of 0.8832 with an AUC of 0.8660, outperforming state-of-the-art convolutional neural network (CNN) and Transformer models. Ablation experiments confirm the improved performance on the thyroid nodule classification task after introducing the residual blocks and triplet loss. These results validate the potential of the proposed SRT model to improve the diagnosis of thyroid nodules in ultrasound images, and offer a feasible safeguard against excessive puncture sampling of thyroid nodules in future clinical diagnosis.
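The triplet loss added to SwinT enforces a margin between same-class and different-class embedding distances, which is what sharpens sensitivity to small feature differences. A minimal sketch of the standard triplet margin loss (the generic formulation, not necessarily the authors' exact variant):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors:
    loss = max(0, d(a, p) - d(a, n) + margin)."""
    d_pos = np.linalg.norm(anchor - positive)   # same-class distance
    d_neg = np.linalg.norm(anchor - negative)   # different-class distance
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
ok = triplet_loss(a, np.array([0.1, 0.0]), np.array([3.0, 0.0]))   # margin satisfied
hard = triplet_loss(a, np.array([0.5, 0.0]), np.array([0.6, 0.0])) # margin violated
```

Only triplets that violate the margin contribute gradient, so training focuses on the borderline benign-versus-malignant pairs that matter clinically.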


Subject(s)
Delayed Emergence from Anesthesia, Thyroid Nodule, Humans, Thyroid Nodule/diagnostic imaging, Thyroid Nodule/pathology, Ultrasonography, Diagnosis, Computer-Assisted/methods
15.
United European Gastroenterol J ; 12(4): 487-495, 2024 May.
Article in English | MEDLINE | ID: mdl-38400815

ABSTRACT

OBJECTIVE: Using endoscopic images, we previously developed computer-aided diagnosis models to predict the histopathology of gastric neoplasms; however, no model that categorizes every stage of gastric carcinogenesis has been published. In this study, a deep-learning-based diagnosis model was developed and validated to automatically classify all stages of gastric carcinogenesis, including atrophy and intestinal metaplasia, in endoscopy images. DESIGN: A total of 18,701 endoscopic images were collected retrospectively and randomly divided into training, validation, and internal-test datasets in an 8:1:1 ratio. The primary outcome was lesion-classification accuracy across six categories: normal, atrophy, intestinal metaplasia, dysplasia, early gastric cancer, and advanced gastric cancer. External validation of the established model used 1,427 novel images from other institutions that were not used in training, validation, or internal testing. RESULTS: The internal-test lesion-classification accuracy was 91.2% (95% confidence interval: 89.9%-92.5%). On external validation, the established model achieved an accuracy of 82.3% (80.3%-84.3%). The external-test per-class area under the receiver operating characteristic curve for the diagnosis of atrophy and intestinal metaplasia was 93.4% and 91.3%, respectively. CONCLUSIONS: The established model demonstrated high performance in the diagnosis of preneoplastic lesions (atrophy and intestinal metaplasia) as well as gastric neoplasms.
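The confidence interval quoted for the internal-test accuracy is consistent with a normal-approximation binomial interval over an internal test split of roughly 1,870 images (one tenth of 18,701; this split size is my assumption, not stated explicitly). A stdlib sketch:

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Accuracy with a normal-approximation 95% confidence interval."""
    p = correct / total
    half = z * math.sqrt(p * (1.0 - p) / total)
    return p, p - half, p + half

# 1,706 / 1,870 correct reproduces roughly the abstract's 91.2% (89.9%-92.5%).
p, lo, hi = accuracy_ci(1706, 1870)
```

For proportions near 0 or 1, a Wilson or Clopper-Pearson interval would be the more robust choice, but at n in the thousands the normal approximation is adequate.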


Subject(s)
Diagnosis, Computer-Assisted, Gastroscopy, Metaplasia, Stomach Neoplasms, Humans, Stomach Neoplasms/pathology, Stomach Neoplasms/diagnosis, Stomach Neoplasms/diagnostic imaging, Retrospective Studies, Diagnosis, Computer-Assisted/methods, Male, Female, Metaplasia/pathology, Metaplasia/diagnostic imaging, Gastroscopy/methods, Middle Aged, Deep Learning, Precancerous Conditions/pathology, Precancerous Conditions/diagnosis, Precancerous Conditions/diagnostic imaging, Atrophy, Carcinogenesis/pathology, Aged, ROC Curve, Neoplasm Staging, Gastric Mucosa/pathology, Gastric Mucosa/diagnostic imaging, Reproducibility of Results
16.
Microsc Res Tech ; 87(6): 1271-1285, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38353334

ABSTRACT

Skin is the exposed part of the human body and constantly shields it from UV rays, heat, light, dust, and other hazardous radiation. Skin cancer is one of the most dangerous illnesses that affect people. Melanoma, a type of skin cancer, starts in the melanocytes, the cells that produce the pigment that gives human skin its colour. Reducing the fatality rate from skin cancer requires early detection and diagnosis of conditions like melanoma. In this article, a melanoma classification method based on a self-attention cycle-consistent generative adversarial network optimized with the Archerfish Hunting Optimization Algorithm (SACCGAN-AHOA-MC-DI) is proposed for dermoscopic images. First, the input skin dermoscopic images are gathered from the ISIC 2019 dataset. The input images are then pre-processed using adjusted quick shift phase-preserving dynamic range compression (AQSP-DRC) to remove noise and improve image quality. The pre-processed images are fed to piecewise fuzzy C-means clustering (PF-CMC) for ROI segmentation. The segmented ROI is supplied to the Hexadecimal Local Adaptive Binary Pattern (HLABP) to extract radiomic features: grayscale statistic features (standard deviation, mean, kurtosis, and skewness) together with Haralick texture features (contrast, energy, entropy, homogeneity, and inverse difference moments). The extracted features are fed to the self-attention based cycle-consistent generative adversarial network (SACCGAN), which classifies the skin lesions as melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, vascular lesion, squamous cell carcinoma, or melanoma. In general, SACCGAN does not adopt any optimization mode to determine the ideal parameters needed to assure accurate classification of skin cancer. Hence, the Archerfish Hunting Optimization Algorithm (AHOA) is used to optimize the SACCGAN classifier so that it categorizes skin cancer accurately.
The proposed method attains 23.01%, 14.96%, and 45.31% higher accuracy and 32.16%, 11.32%, and 24.56% lower computational time compared with existing methods: a melanoma prediction method for unbalanced data using a SqueezeNet optimized through bald eagle search optimization (CNN-BES-MC-DI), a hyper-parameter-optimized CNN based on the grey wolf optimization algorithm (CNN-GWOA-MC-DI), and DEANN-based skin cancer detection using fuzzy C-means clustering (DEANN-MC-DI). RESEARCH HIGHLIGHTS: This manuscript proposes a self-attention based cycle-consistent GAN for melanoma classification from dermoscopic images (SACCGAN-AHOA-MC-DI), implemented in Python. Adjusted quick shift phase-preserving dynamic range compression (AQSP-DRC) removes noise and improves the quality of skin dermoscopic images.
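The first-order grayscale statistic features named in this abstract (mean, standard deviation, skewness, and kurtosis) can be computed directly from ROI pixel intensities. The sketch below is a minimal pure-Python illustration, not the paper's HLABP pipeline; the function name and the use of population (biased) moments are assumptions.

```python
import math

def grayscale_stats(pixels):
    """First-order grayscale statistic features of an ROI:
    mean, standard deviation, skewness, and kurtosis
    (population moments, standardized by the population std)."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    if std == 0:
        # Constant region: higher standardized moments are undefined; report zeros.
        return {"mean": mean, "std": 0.0, "skewness": 0.0, "kurtosis": 0.0}
    skewness = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3)
    kurtosis = sum((p - mean) ** 4 for p in pixels) / (n * std ** 4)
    return {"mean": mean, "std": std, "skewness": skewness, "kurtosis": kurtosis}
```

In a real pipeline these scalars would be concatenated with the Haralick texture features into one feature vector per lesion.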


Asunto(s)
Queratosis Actínica , Melanoma , Neoplasias Cutáneas , Humanos , Melanoma/diagnóstico , Neoplasias Cutáneas/diagnóstico , Melanocitos/patología , Algoritmos , Diagnóstico por Computador/métodos
17.
Comput Methods Programs Biomed ; 244: 107999, 2024 Feb.
Artículo en Inglés | MEDLINE | ID: mdl-38194766

RESUMEN

BACKGROUND AND OBJECTIVE: Thyroid nodule segmentation is a crucial step in the diagnostic procedure of physicians and computer-aided diagnosis systems. However, prevailing studies often treat segmentation and diagnosis as independent tasks, overlooking the intrinsic relationship between these processes. The sequential execution of these independent tasks in computer-aided diagnosis systems may lead to the accumulation of errors. Therefore, it is worth combining them as a whole by exploring the relationship between thyroid nodule segmentation and diagnosis. According to the diagnostic procedure of the thyroid imaging reporting and data system (TI-RADS), the assessment of shape and margin characteristics is the prerequisite for radiologists to discriminate benign and malignant thyroid nodules. Inspired by TI-RADS, this study aims to integrate these tasks into a cohesive process, thereby enhancing the accuracy and interpretability of thyroid nodule analysis. METHODS: Specifically, this paper proposes a shape-margin knowledge augmented network (SkaNet) for simultaneous thyroid nodule segmentation and diagnosis. Owing to the visual feature similarities between segmentation and diagnosis, SkaNet shares visual features in the feature extraction stage and then uses a dual-branch architecture to perform the segmentation and diagnosis tasks respectively. In the shared feature extraction, the combination of convolutional feature maps and self-attention maps allows both local information and global patterns in thyroid nodule images to be exploited. To enhance discriminative features, an exponential mixture module is introduced, combining convolutional feature maps and self-attention maps through exponential weighting. SkaNet is then jointly optimized by a knowledge augmented multi-task loss function with a constraint penalty term.
The constraint penalty term embeds shape and margin characteristics through numerical computations, establishing a vital relationship between thyroid nodule diagnosis results and segmentation masks. RESULTS: We evaluate the proposed approach on a public thyroid ultrasound dataset (DDTI) and a locally collected thyroid ultrasound dataset. The experimental results reveal the value of our contributions and demonstrate that our approach can yield significant improvements compared with state-of-the-art counterparts. CONCLUSIONS: SkaNet highlights the potential of combining thyroid nodule segmentation and diagnosis with knowledge augmented learning into a unified framework, which captures the key shape and margin characteristics for discriminating benign and malignant thyroid nodules. Our findings suggest promising insights for advancing computer-aided diagnosis joint with segmentation.
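The abstract names an exponential mixture module that fuses convolutional and self-attention feature maps through exponential weighting, but does not give its exact formulation. The sketch below assumes a softmax-style exponential weighting of the two maps; the function name and the scalar weights are hypothetical, and SkaNet's actual module may use learned, per-element weights.

```python
import math

def exponential_mixture(conv_feat, attn_feat, a=1.0, b=1.0):
    """Hedged sketch of an exponential mixture: combine convolutional
    and self-attention feature values with softmax-style weights
    exp(a) and exp(b), normalized so the weights sum to one."""
    wa, wb = math.exp(a), math.exp(b)
    s = wa + wb
    return [(wa * c + wb * t) / s for c, t in zip(conv_feat, attn_feat)]
```

With equal weights the module reduces to an elementwise average of the two maps; unequal weights bias the fused features toward one source.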


Asunto(s)
Nódulo Tiroideo , Humanos , Nódulo Tiroideo/diagnóstico por imagen , Nódulo Tiroideo/patología , Ultrasonografía/métodos , Diagnóstico por Computador/métodos , Diagnóstico Diferencial
18.
Radiol Phys Technol ; 17(1): 195-206, 2024 Mar.
Artículo en Inglés | MEDLINE | ID: mdl-38165579

RESUMEN

Somatostatin receptor scintigraphy (SRS) is an essential examination for the diagnosis of neuroendocrine tumors (NETs). This study developed a method to individually optimize the display of whole-body SRS images using a deep convolutional neural network (DCNN) reconstructed by transfer learning of a DCNN constructed using Gallium-67 (67Ga) images. The initial DCNN was constructed using U-Net to optimize the display of 67Ga images (493 cases/986 images), and a DCNN with transposed weight coefficients was reconstructed for the optimization of whole-body SRS images (133 cases/266 images). A DCNN was constructed for each observer using reference display conditions estimated in advance. Furthermore, to eliminate information loss in the original image, a grayscale linear process is performed based on the DCNN output image to obtain the final linearly corrected DCNN (LcDCNN) image. To verify the usefulness of the proposed method, an observer study using a paired-comparison method was conducted on the original, reference, and LcDCNN images of 15 cases with 30 images. The paired comparison method showed that in most cases (29/30), the LcDCNN images were significantly superior to the original images in terms of display conditions. When comparing the LcDCNN and reference images, the number of LcDCNN and reference images that were superior to each other in the display condition was 17 and 13, respectively, and in both cases, 6 of these images showed statistically significant differences. The optimized SRS images obtained using the proposed method, while reflecting the observer's preference, were superior to the conventional manually adjusted images.
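The grayscale linear process applied after the DCNN output to avoid information loss is not specified in detail in this abstract. A generic linear window mapping, which rescales display grayscale linearly between a lower and upper bound, might look like the sketch below; the function name and the normalization to [0, 1] are assumptions.

```python
def linear_display(pixel, low, high):
    """Map a pixel value linearly into [0, 1] display grayscale
    between the window bounds `low` and `high`; values outside
    the window are clamped to black (0.0) or white (1.0)."""
    if high <= low:
        raise ValueError("window upper bound must exceed lower bound")
    if pixel <= low:
        return 0.0
    if pixel >= high:
        return 1.0
    return (pixel - low) / (high - low)
```

In this setting the DCNN's role would be to predict suitable `low`/`high` display bounds per image, while the linear mapping itself preserves relative intensities inside the window.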


Asunto(s)
Redes Neurales de la Computación , Receptores de Somatostatina , Diagnóstico por Computador/métodos , Tomografía Computarizada por Rayos X , Cintigrafía
19.
J Gastroenterol Hepatol ; 39(5): 927-934, 2024 May.
Artículo en Inglés | MEDLINE | ID: mdl-38273460

RESUMEN

BACKGROUND AND AIM: Computer-aided detection (CADe) systems can efficiently detect polyps during colonoscopy. However, false-positive (FP) activation is a major limitation of CADe. We aimed to compare the rate and causes of FP using CADe before and after an update designed to reduce FP. METHODS: We analyzed CADe-assisted colonoscopy videos recorded between July 2022 and October 2022. The number and causes of FPs and excessive time spent by the endoscopist on FP (ET) were compared pre- and post-update using 1:1 propensity score matching. RESULTS: During the study period, 191 colonoscopy videos (94 and 97 in the pre- and post-update groups, respectively) were recorded. Propensity score matching resulted in 146 videos (73 in each group). The mean number of FPs and median ET per colonoscopy were significantly lower in the post-update group than those in the pre-update group (4.2 ± 3.7 vs 18.1 ± 11.1; P < 0.001 and 0 vs 16 s; P < 0.001, respectively). Mucosal tags, bubbles, and folds had the strongest association with decreased FP post-update (pre-update vs post-update: 4.3 ± 3.6 vs 0.4 ± 0.8, 0.32 ± 0.70 vs 0.04 ± 0.20, and 8.6 ± 6.7 vs 1.6 ± 1.7, respectively). There was no significant decrease in the true positive rate (post-update vs pre-update: 95.0% vs 99.2%; P = 0.09) or the adenoma detection rate (post-update vs pre-update: 52.1% vs 49.3%; P = 0.87). CONCLUSIONS: The updated CADe can reduce FP without impairing polyp detection. A reduction in FP may help relieve the burden on endoscopists.
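The 1:1 propensity score matching used to pair pre- and post-update videos is often implemented as greedy nearest-neighbor matching without replacement within a caliper. The sketch below is a generic illustration of that idea, not the study's actual matching procedure; the caliper value, function name, and tie-breaking by list order are assumptions.

```python
def greedy_match(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.
    Each treated score is paired with the closest unused control
    score within `caliper`; returns (treated_idx, control_idx) pairs.
    Greedy matching without replacement is order-dependent."""
    matches = []
    used = set()
    for i, t in enumerate(treated):
        best, best_d = None, caliper
        for j, c in enumerate(control):
            if j in used:
                continue
            d = abs(t - c)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches
```

Treated units with no control inside the caliper are simply dropped, which is why matching reduced the 191 videos to 146 (73 per group) in the study.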


Asunto(s)
Pólipos del Colon , Colonoscopía , Diagnóstico por Computador , Humanos , Colonoscopía/métodos , Diagnóstico por Computador/métodos , Reacciones Falso Positivas , Masculino , Femenino , Persona de Mediana Edad , Pólipos del Colon/diagnóstico , Pólipos del Colon/diagnóstico por imagen , Anciano , Grabación en Video , Puntaje de Propensión , Factores de Tiempo
20.
J Xray Sci Technol ; 32(1): 53-68, 2024.
Artículo en Inglés | MEDLINE | ID: mdl-38189730

RESUMEN

BACKGROUND: With the rapid growth of Deep Neural Networks (DNN) and Computer-Aided Diagnosis (CAD), an increasing number of significant works have addressed cancer-related diseases. Skin cancer is among the most hazardous types of cancer and is difficult to diagnose in its early stages. OBJECTIVE: The diagnosis of skin cancer is a challenge for dermatologists because an abnormal lesion looks like an ordinary nevus at the initial stages. Therefore, early identification of lesions (the origin of skin cancer) is essential and helps treat skin cancer patients effectively. The extensive development of automated skin cancer diagnosis systems significantly supports dermatologists. METHODS: This paper performs skin cancer classification using various deep-learning frameworks after resolving the class imbalance problem in the ISIC-2019 dataset. A fine-tuned ResNet-50 model is used to evaluate performance on the original data, the augmented data, and the augmented data with focal loss added. Focal loss mitigates overfitting under class imbalance by assigning higher weights to hard, misclassified images. RESULTS: The augmented data with focal loss gave good classification performance, with 98.85% accuracy, 95.52% precision, and 95.93% recall. The Matthews correlation coefficient (MCC) is a robust metric for evaluating multi-class classification quality, and it likewise indicated outstanding performance when using augmented data with focal loss.
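Focal loss, as described above, down-weights easy examples so training concentrates on hard misclassified images. A minimal binary-case sketch of the standard formulation follows; the paper's multi-class variant and hyper-parameter values may differ, and the default gamma=2.0, alpha=0.25 here are merely the commonly cited choices.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one example.
    p: predicted probability of the positive class (0 < p < 1)
    y: true label, 0 or 1
    The (1 - pt)**gamma factor shrinks the loss of well-classified
    examples; alpha balances the positive/negative classes."""
    pt = p if y == 1 else 1 - p
    a = alpha if y == 1 else 1 - alpha
    return -a * (1 - pt) ** gamma * math.log(pt)
```

With gamma = 0 and alpha = 1 this reduces to plain cross-entropy; increasing gamma progressively suppresses the contribution of confidently correct predictions.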


Asunto(s)
Aprendizaje Profundo , Nevo , Neoplasias Cutáneas , Humanos , Neoplasias Cutáneas/diagnóstico por imagen , Neoplasias Cutáneas/patología , Nevo/patología , Redes Neurales de la Computación , Diagnóstico por Computador/métodos