Results 1 - 17 of 17
1.
Front Med (Lausanne) ; 11: 1444708, 2024.
Article in English | MEDLINE | ID: mdl-39188873

ABSTRACT

Background: Pneumonia and lung cancer have a mutually reinforcing relationship. Lung cancer patients are prone to contracting COVID-19 and have poorer prognoses. Additionally, COVID-19 infection can affect anticancer treatment for lung cancer patients. Developing an early diagnostic system for COVID-19 pneumonia can help improve the prognosis of lung cancer patients with COVID-19 infection. Method: This study proposes a neural network for COVID-19 diagnosis based on non-enhanced CT scans, consisting of two 3D convolutional neural networks (CNNs) connected in series to form two diagnostic modules. The first diagnostic module separates COVID-19 pneumonia patients from patients with other pneumonias, while the second diagnostic module distinguishes severe COVID-19 patients from ordinary COVID-19 patients. We also analyzed the correlation between the deep learning features of the two diagnostic modules and various laboratory parameters, including KL-6. Result: The first diagnostic module achieved an accuracy of 0.9669 on the training set and 0.8884 on the test set, while the second diagnostic module achieved an accuracy of 0.9722 on the training set and 0.9184 on the test set. A strong correlation was observed between the deep learning features of the second diagnostic module and KL-6. Conclusion: Our neural network can differentiate COVID-19 pneumonia from other pneumonias on CT images and can distinguish ordinary COVID-19 patients from those with white lung. COVID-19 patients with white lung have greater alveolar damage than ordinary COVID-19 patients, and our deep learning features can serve as an imaging biomarker.
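
The abstract describes two 3D CNN modules applied in series. The following is a minimal sketch of that serial design, assuming PyTorch; the layer widths, input volume size, and 0.5 decision thresholds are illustrative assumptions, not the architecture published in the paper.

```python
# Hedged sketch of two 3D-CNN diagnostic modules connected in series (PyTorch).
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """A tiny 3D CNN that maps a CT volume to a single binary logit."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling -> (B, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))   # raw logit

def diagnose(volume: torch.Tensor, module1: Small3DCNN, module2: Small3DCNN) -> str:
    """Run the two modules in series: COVID-19 vs. other pneumonia, then severity."""
    if torch.sigmoid(module1(volume)).item() < 0.5:
        return "other pneumonia"
    is_severe = torch.sigmoid(module2(volume)).item() >= 0.5
    return "severe COVID-19" if is_severe else "ordinary COVID-19"

# Example: one non-enhanced CT volume resampled to 64x128x128 voxels (assumed size).
volume = torch.randn(1, 1, 64, 128, 128)
print(diagnose(volume, Small3DCNN(), Small3DCNN()))
```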

2.
BMC Womens Health ; 24(1): 425, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39060940

ABSTRACT

PURPOSE: To build a Multi-Task Learning (MTL)-based Artificial Intelligence (AI) model that can simultaneously predict clinical stage, histology, grade, and lymph node metastasis (LNM) for cervical cancer before surgery. METHODS: This retrospective and prospective cohort study was conducted at Beijing Chaoyang Hospital, Capital Medical University, with a training set collected from January 2001 to March 2014 and a validation set collected from January 2018 to November 2021. Preoperative clinical information of cervical cancer patients was used. An Artificial Neural Network (ANN) algorithm was used to build the MTL-based AI model. Accuracy and weighted F1 scores were calculated as evaluation indicators. The performance of the MTL model was compared with Single-Task Learning (STL) models. Additionally, a Turing test was performed by 20 gynecologists and compared with this AI model. RESULTS: A total of 223 cervical cancer cases were retrospectively enrolled into the training set, and 58 cases were prospectively collected as an independent validation set. The accuracies of this ANN-based cervical cancer AI model in predicting stage, histology, grade, and LNM were 75%, 95%, 86%, and 76%, respectively, and the corresponding weighted F1 scores were 70%, 94%, 86%, and 76%. The average time for the AI model to simultaneously predict stage, histology, grade, and LNM was 0.01 s (95% CI: 0.01-0.01) per 20 patients. The mean times for doctors alone and for doctors assisted by AI were 581.1 s (95% CI: 300.0-900.0) and 534.8 s (95% CI: 255.0-720.0) per 20 patients, respectively. Except for LNM, both the accuracy and F1 score of the AI model were significantly better than those of the STL AI models, doctors, and AI-assisted doctors in predicting stage, grade, and histology (P < 0.05). The time consumption of the AI model was significantly less than that of doctors and of AI-assisted doctors (P < 0.05). CONCLUSION: A multi-task learning AI model can simultaneously predict stage, histology, grade, and LNM for cervical cancer preoperatively with minimal time consumption. To maximize its benefit, the model should be integrated into routine clinical workflows as a decision-support tool for gynecologists. Future studies should focus on refining the model for broader clinical applications, increasing the diversity of the training datasets, and enhancing its adaptability to various clinical settings. Additionally, continuous feedback from clinical practice should be incorporated to ensure the model's accuracy and reliability, ultimately improving personalized patient care and treatment outcomes.
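
To illustrate the multi-task ANN idea in this abstract, here is a hedged sketch with one shared trunk and four task heads (stage, histology, grade, LNM), assuming PyTorch; the feature count, class counts, and loss weighting are placeholders, not the values used in the study.

```python
# Sketch of a multi-task ANN: shared trunk, four classification heads.
import torch
import torch.nn as nn

class MultiTaskANN(nn.Module):
    def __init__(self, n_features: int = 20):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({
            "stage":     nn.Linear(32, 4),   # e.g. stages I-IV (assumed)
            "histology": nn.Linear(32, 3),
            "grade":     nn.Linear(32, 3),
            "lnm":       nn.Linear(32, 2),
        })

    def forward(self, x):
        h = self.trunk(x)
        return {task: head(h) for task, head in self.heads.items()}

model = MultiTaskANN()
x = torch.randn(8, 20)                          # a batch of preoperative features
targets = {"stage": torch.randint(0, 4, (8,)),
           "histology": torch.randint(0, 3, (8,)),
           "grade": torch.randint(0, 3, (8,)),
           "lnm": torch.randint(0, 2, (8,))}
# Summed cross-entropy over all tasks (unweighted, for illustration only).
loss = sum(nn.functional.cross_entropy(logits, targets[t])
           for t, logits in model(x).items())
loss.backward()
```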


Subject(s)
Artificial Intelligence , Uterine Cervical Neoplasms , Humans , Female , Uterine Cervical Neoplasms/surgery , Uterine Cervical Neoplasms/pathology , Retrospective Studies , Prospective Studies , Middle Aged , Adult , Neoplasm Staging/methods , Neoplasm Grading/methods , Neural Networks, Computer , Algorithms , Aged , Lymphatic Metastasis , Cohort Studies
3.
Comput Struct Biotechnol J ; 24: 205-212, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38510535

ABSTRACT

The diagnosis of cancer is typically based on histopathological sections or biopsies on glass slides. Amid the rapid growth of oncology data, artificial intelligence (AI) approaches have greatly enhanced our ability to extract quantitative information from digital histopathology images. Gynecological cancers are major diseases affecting women's health worldwide. They are characterized by high mortality and poor prognosis, underscoring the critical importance of early detection, treatment, and identification of prognostic factors. This review highlights the various clinical applications of AI in gynecological cancers using digitized histopathology slides. In particular, deep learning models have shown promise in accurate diagnosis, classification of histopathological subtypes, and prediction of treatment response and prognosis. Furthermore, integration with transcriptomics, proteomics, and other multi-omics techniques can provide valuable insights into the molecular features of these diseases. Despite the considerable potential of AI, substantial challenges remain. Further improvements in data acquisition and model optimization are required, and broader clinical applications, such as biomarker discovery, need to be explored.

4.
Comput Biol Med ; 171: 108217, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38430743

ABSTRACT

BACKGROUND: Endometrial cancer is one of the most common tumors of the female reproductive system and is the third most common gynecological malignancy causing death, after ovarian and cervical cancer. Early diagnosis can significantly improve the 5-year survival rate of patients. With the development of artificial intelligence, computer-assisted diagnosis plays an increasingly important role in improving the accuracy and objectivity of diagnosis and reducing the workload of doctors. However, the absence of publicly available image datasets restricts the application of computer-assisted diagnostic techniques. METHODS: In this paper, a publicly available Endometrial Cancer PET/CT Image Dataset for Evaluation of Semantic Segmentation and Detection of Hypermetabolic Regions (ECPC-IDS) is published. Specifically, the segmentation section includes PET and CT images, totaling 7,159 images in multiple formats. To demonstrate the effectiveness of segmentation on ECPC-IDS, six deep learning semantic segmentation methods are selected to test the image segmentation task. The object detection section also includes PET and CT images, totaling 3,579 images with XML annotation files. Eight deep learning methods are selected for experiments on the detection task. RESULTS: This study is conducted using deep learning-based semantic segmentation and object detection methods to demonstrate the distinguishability of ECPC-IDS. For segmentation, the minimum and maximum Dice values on PET images are 0.546 and 0.743, respectively, and the minimum and maximum Dice values on CT images are 0.012 and 0.510, respectively. For object detection, the maximum mAP values on PET and CT images are 0.993 and 0.986, respectively. CONCLUSION: To the best of our knowledge, this is the first publicly available dataset of endometrial cancer with a large number of multi-modality images. ECPC-IDS can assist researchers in exploring new algorithms to enhance computer-assisted diagnosis, benefiting both clinical doctors and patients. ECPC-IDS is freely published for non-commercial use at: https://figshare.com/articles/dataset/ECPC-IDS/23808258.
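
The segmentation results above are reported as Dice values. As a small, self-contained sketch of how such a score can be computed for a predicted mask against a ground-truth mask, here is one way to do it with NumPy; the mask shapes and example rectangles are assumptions for demonstration only.

```python
# Dice coefficient for binary segmentation masks: 2*|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Example: a predicted hypermetabolic-region mask vs. an annotated ground truth.
pred = np.zeros((128, 128), dtype=np.uint8); pred[30:80, 30:80] = 1
gt   = np.zeros((128, 128), dtype=np.uint8); gt[40:90, 40:90] = 1
print(f"Dice: {dice_coefficient(pred, gt):.3f}")
```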


Subject(s)
Endometrial Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Female , Artificial Intelligence , Image Processing, Computer-Assisted/methods , Semantics , Benchmarking , Endometrial Neoplasms/diagnostic imaging
5.
Data Brief ; 53: 110141, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38406254

ABSTRACT

A benchmark histopathological Hematoxylin and Eosin (H&E) image dataset for Cervical Adenocarcinoma in Situ (CAISHI), containing 2,240 histopathological images of cervical Adenocarcinoma in Situ (AIS), is established to fill the current data gap: 1,010 images of normal cervical glands and 1,230 images of cervical AIS. The sampling method is endoscopic biopsy. Pathological sections are obtained by H&E staining from Shengjing Hospital, China Medical University. The images have a magnification of 100× and are captured with an Axio Scope.A1 microscope. The image size is 3840 × 2160 pixels, and the format is ".png". The collection of CAISHI was approved in an ethical review by China Medical University (approval number 2022PS841K). These images are analyzed at multiple levels, including classification tasks and image retrieval tasks. A variety of computer vision and machine learning methods are used to evaluate the performance of the data. For classification tasks, a variety of classical machine learning classifiers, such as k-means, support vector machines (SVM), and random forests (RF), as well as convolutional neural network classifiers such as Residual Network 50 (ResNet50), Vision Transformer (ViT), Inception version 3 (Inception-V3), and Visual Geometry Group Network 16 (VGG-16), are used. In addition, a Siamese network is used to evaluate few-shot learning tasks. For image retrieval, color features, texture features, and deep learning features are extracted, and their performance is tested. CAISHI can help with the early diagnosis and screening of cervical cancer. Researchers can use this dataset to develop new computer-aided diagnostic tools that could improve the accuracy and efficiency of cervical cancer screening and advance the development of automated diagnostic algorithms.
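
As an illustration of the kind of color-feature image retrieval evaluated on such a dataset, here is a hedged sketch using color histograms and nearest-neighbour search; the bin counts, distance metric, and the randomly generated stand-in images are assumptions, not the paper's retrieval pipeline.

```python
# Content-based retrieval sketch: color-histogram features + nearest neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def color_histogram(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated per-channel histograms of an RGB image, L1-normalised."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    v = np.concatenate(feats).astype(float)
    return v / (v.sum() + 1e-9)

# Stand-in gallery of 100 tiles and one query (replace with real H&E images).
rng = np.random.default_rng(0)
gallery = rng.integers(0, 256, size=(100, 64, 64, 3))
query = rng.integers(0, 256, size=(64, 64, 3))

index = NearestNeighbors(n_neighbors=5, metric="euclidean")
index.fit(np.stack([color_histogram(img) for img in gallery]))
dist, idx = index.kneighbors(color_histogram(query)[None, :])
print("top-5 retrieved gallery indices:", idx[0])
```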

6.
Comput Biol Med ; 165: 107388, 2023 10.
Article in English | MEDLINE | ID: mdl-37696178

ABSTRACT

Colorectal cancer (CRC) is currently one of the most common and deadly cancers. It is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related death in the United States and other developed countries. Histopathological images contain rich phenotypic information and play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and efficiency of image analysis in intestinal histopathology, computer-aided diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies, together with the medical background of intestinal histopathology. Second, we introduce traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. Then, we provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition, among other tasks, for histopathological images of the intestine. Finally, existing methods are summarized and their application prospects in this field are discussed.


Subject(s)
Medicine , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted , Intestines , Machine Learning
7.
Comput Biol Med ; 162: 107070, 2023 08.
Article in English | MEDLINE | ID: mdl-37295389

ABSTRACT

Cervical cancer is the fourth most common cancer among women, and cytopathological images are often used to screen for this cancer. However, manual examination is laborious and the misdiagnosis rate is high. In addition, cervical cancer nest cells are dense and complex, with high overlap and opacity, which increases the difficulty of identification. Computer-aided automatic diagnosis systems address this problem. In this paper, a weakly supervised cervical cancer nest image identification approach using a Conjugated Attention Mechanism and Visual Transformer (CAM-VT) is proposed, which can analyze pap slides quickly and accurately. CAM-VT uses conjugated attention mechanism and visual transformer modules for local and global feature extraction, respectively, and then designs an ensemble learning module to further improve identification capability. To determine a reasonable configuration, comparative experiments are conducted on our datasets. The average accuracy on the validation set over three repeated experiments using the CAM-VT framework is 88.92%, which is higher than the best result among 22 well-known deep learning models. Moreover, we conduct ablation experiments and extended experiments on Hematoxylin and Eosin stained gastric histopathological image datasets to verify the capability and generalization ability of the framework. Finally, the top-5 and top-10 positive probability values for cervical nests are 97.36% and 96.84%, which have important clinical and practical significance. The experimental results show that the proposed CAM-VT framework has excellent performance in potential cervical cancer nest image identification tasks for practical clinical work.
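
To make the local/global dual-branch idea concrete, here is a deliberately simplified sketch in PyTorch: one branch with channel attention over CNN features (local), one with a transformer encoder over image patches (global), combined by soft voting. Layer sizes, patching, and the averaging rule are illustrative assumptions and do not reproduce the CAM-VT architecture.

```python
# Simplified dual-branch sketch: attention-weighted CNN + transformer encoder.
import torch
import torch.nn as nn

class LocalBranch(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1))
        self.attn = nn.Sequential(nn.Linear(32, 32), nn.Sigmoid())  # channel attention
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        z = self.conv(x).flatten(1)
        return self.fc(z * self.attn(z))

class GlobalBranch(nn.Module):
    def __init__(self, n_classes: int = 2, patch: int = 16):
        super().__init__()
        self.embed = nn.Conv2d(3, 64, kernel_size=patch, stride=patch)  # patch embedding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
            num_layers=2)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, 64)
        return self.fc(self.encoder(tokens).mean(dim=1))

x = torch.randn(4, 3, 224, 224)
logits = (LocalBranch()(x) + GlobalBranch()(x)) / 2          # soft-voting combination
print(logits.shape)                                          # torch.Size([4, 2])
```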


Subject(s)
Uterine Cervical Neoplasms , Female , Humans , Uterine Cervical Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted , Eosine Yellowish-(YS) , Hematoxylin , Probability , Image Processing, Computer-Assisted
8.
Phys Med ; 107: 102534, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36804696

ABSTRACT

BACKGROUND AND PURPOSE: Colorectal cancer has become the third most common cancer worldwide, accounting for approximately 10% of cancer cases. Early detection of the disease is important for the treatment of colorectal cancer patients. Histopathological examination is the gold standard for screening colorectal cancer. However, the current lack of histopathological image datasets of colorectal cancer, especially enteroscope biopsies, hinders the accurate evaluation of computer-aided diagnosis techniques. Therefore, a multi-category colorectal cancer dataset is needed to test various medical image classification methods for high classification accuracy and strong robustness. METHODS: A new publicly available Enteroscope Biopsy Histopathological H&E Image Dataset (EBHI) is published in this paper. To demonstrate the effectiveness of the EBHI dataset, we have evaluated several machine learning, convolutional neural network, and novel transformer-based classifiers, using images with a magnification of 200×. RESULTS: Experimental results show that deep learning methods perform well on the EBHI dataset. Classical machine learning methods achieve a maximum accuracy of 76.02%, and deep learning methods achieve a maximum accuracy of 95.37%. CONCLUSION: To the best of our knowledge, EBHI is the first publicly available colorectal histopathology enteroscope biopsy dataset with four magnifications and five types of images of tumor differentiation stages, totaling 5,532 images. We believe that EBHI could attract researchers to explore new classification algorithms for the automated diagnosis of colorectal cancer, which could help physicians and patients in clinical settings.


Subject(s)
Colorectal Neoplasms , Neural Networks, Computer , Humans , Algorithms , Diagnosis, Computer-Assisted/methods , Biopsy , Colorectal Neoplasms/diagnostic imaging
9.
Front Med (Lausanne) ; 10: 1114673, 2023.
Article in English | MEDLINE | ID: mdl-36760405

ABSTRACT

Background and purpose: Colorectal cancer is a common fatal malignancy, the fourth most common cancer in men, and the third most common cancer in women worldwide. Timely detection of the cancer in its early stages is essential for treating the disease. Currently, there is a lack of datasets for histopathological image segmentation of colorectal cancer, which often hampers assessment accuracy when computer technology is used to aid diagnosis. Methods: The present study provides a new publicly available Enteroscope Biopsy Histopathological Hematoxylin and Eosin Image Dataset for Image Segmentation Tasks (EBHI-Seg). To demonstrate the validity and extensiveness of EBHI-Seg, experimental results on EBHI-Seg are evaluated using classical machine learning methods and deep learning methods. Results: The experimental results show that deep learning methods achieve better image segmentation performance on EBHI-Seg. The maximum Dice value for the classical machine learning methods is 0.948, while the maximum Dice value for the deep learning methods is 0.965. Conclusion: This publicly available dataset contains 4,456 images of six types of tumor differentiation stages and the corresponding ground truth images. The dataset can help researchers develop new segmentation algorithms for the medical diagnosis of colorectal cancer, which can be used in the clinical setting to help doctors and patients. EBHI-Seg is publicly available at: https://figshare.com/articles/dataset/EBHI-SEG/21540159/1.

10.
Front Med (Lausanne) ; 9: 1072109, 2022.
Article in English | MEDLINE | ID: mdl-36569152

ABSTRACT

Introduction: Gastric cancer is the fifth most common cancer in the world and the fourth most deadly. Early detection is key to guiding the treatment of gastric cancer. Computer technology has advanced rapidly and can now assist physicians in diagnosing gastric cancer from pathological images. Ensemble learning is a way to improve the accuracy of algorithms, and finding multiple complementary learning models is the basis of ensemble learning. Therefore, this paper compares the performance of multiple algorithms in anticipation of applying ensemble learning to a practical gastric cancer classification problem. Methods: This experimental platform explores the complementarity of sub-size pathology image classifiers when machine performance is limited. We choose seven classical machine learning classifiers and four deep learning classifiers for classification experiments on the GasHisSDB database. For the classical machine learning algorithms, five different image features are extracted to match the multiple classifier algorithms. For deep learning, we choose three convolutional neural network classifiers and a novel Transformer-based classifier. Results: The experimental platform, on which a large number of classical machine learning and deep learning methods are evaluated, demonstrates that different classifiers perform differently on GasHisSDB. Among the classical machine learning models, some classifiers classify the Abnormal category very well, while others excel at classifying the Normal category. Several of the deep learning models are also complementary to one another. Discussion: Suitable classifiers can be selected for ensemble learning when machine performance is limited. This experimental platform demonstrates that multiple classifiers are indeed complementary and can improve the efficiency of ensemble learning. This can better assist doctors in diagnosis, improve the detection of gastric cancer, and increase the cure rate.
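
As a small illustration of pairing complementary classifiers into an ensemble, here is a hedged sketch using scikit-learn's soft-voting ensemble; the synthetic feature vectors and the two member classifiers are placeholders for the many classical and deep models the study actually compares.

```python
# Soft-voting ensemble of two complementary classifiers (scikit-learn sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for image feature vectors labelled Normal (0) / Abnormal (1).
X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

members = [("lr", LogisticRegression(max_iter=1000)),
           ("rf", RandomForestClassifier(n_estimators=100, random_state=0))]
ensemble = VotingClassifier(estimators=members, voting="soft").fit(X_tr, y_tr)

for name, clf in members:
    print(name, accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te)))
print("ensemble", accuracy_score(y_te, ensemble.predict(X_te)))
```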

11.
Front Physiol ; 13: 948767, 2022.
Article in English | MEDLINE | ID: mdl-36091379

ABSTRACT

Purpose: We aim to develop and validate PET/CT image-based radiomics to determine the Ki-67 status of high-grade serous ovarian cancer (HGSOC), using metabolic subregion (habitat) evolution to improve the predictive ability of the model. We also illustrate the stratifying effect of the radiomics model on progression-free survival in ovarian cancer patients. Materials and methods: We retrospectively reviewed 161 patients with HGSOC from April 2013 to January 2019. Pre-treatment 18F-FDG PET/CT images, pathological reports, and follow-up data were analyzed. A randomized grouping method was used to divide ovarian cancer patients into a training group and a validation group. PET/CT images were fused to extract radiomics features of the whole tumor region and radiomics features based on the Habitat method. Features were dimensionality-reduced, and meaningful features were screened to form a signature for predicting the Ki-67 status of ovarian cancer. Meanwhile, survival analysis was conducted to explore the value of radiomics for stratified prognostic guidance in patients with ovarian cancer. Results: Compared with texture features extracted from the whole tumor, the texture features generated by the Habitat method better predicted Ki-67 status (p < 0.001). Habitat-based radiomics can predict Ki-67 expression accurately and has the potential to become a new marker in place of Ki-67. At the same time, the Habitat model better stratified prognosis (p < 0.05). Conclusion: We found a noninvasive imaging predictor that can guide prognostic stratification in ovarian cancer patients and is related to the expression of Ki-67 in tumor tissues. This method is of great significance for the diagnosis and treatment of ovarian cancer.
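
One common way to define metabolic "habitats" is to cluster voxels inside the tumor mask by their PET/CT intensities and then summarize each subregion. The sketch below illustrates that idea with k-means on synthetic volumes; the number of habitats, the per-habitat statistics, and the stand-in data are assumptions, not the paper's feature pipeline.

```python
# Habitat sketch: cluster tumour voxels by (PET, CT) intensity into subregions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pet = rng.random((64, 64, 64))             # stand-in co-registered PET volume
ct = rng.random((64, 64, 64))              # stand-in co-registered CT volume
mask = np.zeros_like(pet, dtype=bool); mask[20:40, 20:40, 20:40] = True

# Each tumour voxel becomes a (PET, CT) intensity pair.
voxels = np.stack([pet[mask], ct[mask]], axis=1)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)

for habitat in range(3):
    sub = voxels[labels == habitat]
    # Per-habitat summaries that could feed a Ki-67 prediction signature.
    print(f"habitat {habitat}: {len(sub)} voxels, "
          f"mean PET {sub[:, 0].mean():.3f}, mean CT {sub[:, 1].mean():.3f}")
```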

12.
Comput Biol Med ; 143: 105265, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35123138

ABSTRACT

In recent years, colorectal cancer has become one of the most significant diseases endangering human health. Deep learning methods are increasingly important for the classification of colorectal histopathology images. However, existing approaches focus more on end-to-end automatic classification by computers than on human-computer interaction. In this paper, we propose the IL-MCAM framework, which is based on attention mechanisms and interactive learning. The proposed IL-MCAM framework includes two stages: automatic learning (AL) and interactive learning (IL). In the AL stage, a multi-channel attention mechanism model containing three different attention mechanism channels and convolutional neural networks is used to extract multi-channel features for classification. In the IL stage, the proposed IL-MCAM framework continuously adds misclassified images to the training set in an interactive manner, which improves the classification ability of the MCAM model. We carried out a comparison experiment on our dataset and an extended experiment on the HE-NCT-CRC-100K dataset to verify the performance of the proposed IL-MCAM framework, achieving classification accuracies of 98.98% and 99.77%, respectively. In addition, we conducted an ablation experiment and an interchangeability experiment to verify the contribution and interchangeability of the three channels. The experimental results show that the proposed IL-MCAM framework has excellent performance in colorectal histopathological image classification tasks.
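
The interactive-learning stage described above repeatedly adds misclassified samples back to the training set and refits. The following is a minimal sketch of that loop with a simple classifier standing in for the MCAM network; the data, classifier, and number of rounds are illustrative assumptions.

```python
# Interactive-learning loop sketch: refit after adding misclassified samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_pool, y_train, y_pool = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000)
for round_idx in range(3):                       # a few interactive rounds
    clf.fit(X_train, y_train)
    wrong = clf.predict(X_pool) != y_pool        # samples a reviewer would flag
    X_train = np.vstack([X_train, X_pool[wrong]])
    y_train = np.concatenate([y_train, y_pool[wrong]])
    print(f"round {round_idx}: added {wrong.sum()} misclassified samples, "
          f"pool accuracy {(~wrong).mean():.3f}")
```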

13.
Comput Biol Med ; 142: 105207, 2022 03.
Article in English | MEDLINE | ID: mdl-35016101

ABSTRACT

BACKGROUND AND OBJECTIVE: Gastric cancer is the fifth most common cancer globally, and early detection of gastric cancer is essential to save lives. Histopathological examination of gastric cancer is the gold standard for its diagnosis. However, computer-aided diagnostic techniques are challenging to evaluate due to the scarcity of publicly available gastric histopathology image datasets. METHODS: In this paper, a novel publicly available Gastric Histopathology Sub-size Image Database (GasHisSDB) is published to evaluate classifiers' performance. Specifically, two types of data are included, normal and abnormal, with a total of 245,196 tissue case images. To demonstrate that image classification methods from different periods show discrepancies on GasHisSDB, we select a variety of classifiers for evaluation: seven classical machine learning classifiers, three convolutional neural network classifiers, and a novel transformer-based classifier are tested on image classification tasks. RESULTS: This study performed extensive experiments using traditional machine learning and deep learning methods, confirming that methods from different periods show discrepancies on GasHisSDB. Traditional machine learning achieved a best accuracy of 86.08% and a minimum of just 41.12%. The best accuracy of deep learning reached 96.47% and the lowest was 86.21%. Accuracy rates vary significantly across classifiers. CONCLUSIONS: To the best of our knowledge, this is the first publicly available gastric cancer histopathology dataset containing a large number of images for weakly supervised learning. We believe that GasHisSDB can attract researchers to explore new algorithms for the automated diagnosis of gastric cancer, which can help physicians and patients in the clinical setting.


Subject(s)
Stomach Neoplasms , Algorithms , Diagnosis, Computer-Assisted , Humans , Machine Learning , Neural Networks, Computer , Stomach Neoplasms/diagnostic imaging
14.
Comput Biol Med ; 141: 105026, 2022 02.
Article in English | MEDLINE | ID: mdl-34801245

ABSTRACT

Cervical cancer is a very common and fatal type of cancer in women. Cytopathology images are often used to screen for this cancer. Given that many errors can occur during manual screening, computer-aided diagnosis systems based on deep learning have been developed. Deep learning methods require a fixed input image dimension, but the dimensions of clinical medical images are inconsistent, and the aspect ratios of the images are distorted when they are resized directly. Clinically, the aspect ratios of cells in cytopathological images provide important information for doctors diagnosing cancer, which makes direct resizing problematic. However, many existing studies have resized the images directly and obtained highly robust classification results. To find a reasonable explanation, we conducted a series of comparative experiments. First, the raw data of the SIPaKMeD dataset are pre-processed to obtain standard and scaled datasets. Then, the datasets are resized to 224 × 224 pixels. Finally, 22 deep learning models are used to classify the standard and scaled datasets. The results indicate that deep learning models are robust to changes in the aspect ratio of cells in cervical cytopathological images. This conclusion is also validated on the Herlev dataset.
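
The two preprocessing choices being contrasted above can be made concrete with a short sketch: resizing directly to 224 × 224 (aspect ratio distorted) versus padding to a square before resizing (aspect ratio preserved). Pillow is assumed to be available, and the stand-in image size is illustrative.

```python
# Direct resize vs. pad-to-square resize for a cropped cell image (Pillow).
from PIL import Image

def resize_direct(img: Image.Image, size: int = 224) -> Image.Image:
    """Stretch to size x size; the cell's aspect ratio is not preserved."""
    return img.resize((size, size))

def resize_padded(img: Image.Image, size: int = 224) -> Image.Image:
    """Pad the shorter side to a square, then resize; aspect ratio preserved."""
    side = max(img.size)
    canvas = Image.new(img.mode, (side, side))           # black padding
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return canvas.resize((size, size))

cell = Image.new("RGB", (120, 60))                        # stand-in for a cropped cell
print(resize_direct(cell).size, resize_padded(cell).size)  # (224, 224) (224, 224)
```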


Subject(s)
Deep Learning , Uterine Cervical Neoplasms , Cervix Uteri , Diagnosis, Computer-Assisted , Female , Humans , Neural Networks, Computer , Uterine Cervical Neoplasms/diagnostic imaging
15.
Biomed Res Int ; 2021: 6671417, 2021.
Article in English | MEDLINE | ID: mdl-34258279

ABSTRACT

Gastric cancer is a common and deadly cancer worldwide. The gold standard for the detection of gastric cancer is histological examination by pathologists, in which Gastric Histopathological Image Analysis (GHIA) contributes significant diagnostic information. The histopathological images of gastric cancer contain rich characterization information, which plays a crucial role in the diagnosis and treatment of gastric cancer. To improve the accuracy and objectivity of GHIA, Computer-Aided Diagnosis (CAD) has been widely used in histological image analysis of gastric cancer. In this review, CAD techniques for pathological images of gastric cancer are summarized. The paper first summarizes image preprocessing methods, then introduces feature extraction methods, and then reviews existing segmentation and classification techniques. Finally, these techniques are systematically introduced and analyzed for the convenience of future researchers.


Subject(s)
Image Processing, Computer-Assisted , Stomach/diagnostic imaging , Stomach/pathology , Algorithms , Color , Computer-Aided Design , Diagnosis, Computer-Assisted/methods , Humans , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional , Machine Learning , Poisson Distribution , Reproducibility of Results , Stomach Neoplasms/diagnostic imaging
16.
Int J Med Inform ; 113: 85-95, 2018 05.
Article in English | MEDLINE | ID: mdl-29602437

ABSTRACT

A neurological illness is a disorder of the human nervous system that can result in various diseases, including motor disabilities. Neurological disorders may affect the motor neurons, which are associated with skeletal muscles and control body movement. Consequently, they cause diseases such as cerebral palsy, spinal scoliosis, peripheral paralysis of the arms/legs, hip joint dysplasia, and various myopathies. Vojta therapy is considered a useful technique to treat motor disabilities. In Vojta therapy, a specific stimulation is applied to the patient's body to elicit certain reflexive pattern movements that the patient is unable to perform in a normal manner. The repetition of the stimulation ultimately restores the previously blocked connections between the spinal cord and the brain, and after a few therapy sessions the patient can perform these movements without external stimulation. In this paper, we propose a computer vision-based system to monitor the correctness of the patient's movements during therapy using RGB-D data. The proposed framework works in three steps. In the first step, the patient's body is automatically detected and segmented, and two novel techniques are proposed for this purpose. In the second step, a multi-dimensional feature vector is computed to characterize the various movements of the patient's body during therapy. In the final step, a multi-class support vector machine is used to classify these movements. The experimental evaluation carried out on a large captured dataset shows that the proposed system is highly useful for monitoring the patient's body movements during Vojta therapy.
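
The final classification step, a multi-class SVM over movement feature vectors, can be sketched as follows with scikit-learn; the feature dimensionality, number of movement classes, and synthetic data are placeholders, not the descriptors computed from the RGB-D frames in the paper.

```python
# Multi-class SVM over per-segment movement feature vectors (sketch).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 48))                  # stand-in multi-dimensional feature vectors
y = rng.integers(0, 4, size=600)                # e.g. four reflexive movement patterns

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
svm = SVC(kernel="rbf", C=10.0, gamma="scale", decision_function_shape="ovr")
svm.fit(X_tr, y_tr)
print("movement classification accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```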


Subject(s)
Artificial Intelligence , Brain Diseases/rehabilitation , Monitoring, Physiologic , Movement Disorders/rehabilitation , Physical Therapy Modalities , Reflexotherapy/methods , Algorithms , Female , Humans , Image Processing, Computer-Assisted , Infant , Infant, Newborn , Male , Physical Stimulation
17.
Comput Med Imaging Graph ; 46 Pt 2: 95-107, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25795630

ABSTRACT

The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer-Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass but also for calcification CAD systems currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and of masses, respectively. First, textural patterns of breast tissue are analyzed using several multi-scale textural descriptors based on wavelets and the gray-level co-occurrence matrix. The second problem addressed is parameter selection and performance optimization: we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. To evaluate the proposed methods, two sets of suspicious mammogram regions are used. The first, obtained from the Digital Database for Screening Mammography (DDSM), contains 1,494 regions (1,000 normal and 494 abnormal samples). The second set of suspicious regions, obtained from the Mammographic Image Analysis Society database (mini-MIAS), contains 315 samples (207 normal and 108 abnormal). Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both the classifier hyper-parameters and parameters. Furthermore, the obtained results indicate the promising performance of the proposed textural features, more specifically those based on the co-occurrence matrix of the wavelet image representation.
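
To illustrate PSO-based model selection for an SVM, here is a compact sketch in which particles explore (log C, log gamma) and are scored by cross-validated accuracy; the swarm size, inertia, velocity coefficients, search bounds, and synthetic data are illustrative assumptions, not the configuration used in the paper.

```python
# PSO over SVM hyper-parameters (log10 C, log10 gamma), scored by cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=40, random_state=0)

def fitness(pos):
    C, gamma = 10.0 ** pos[0], 10.0 ** pos[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, n_iter, w, c1, c2 = 10, 15, 0.7, 1.5, 1.5
pos = rng.uniform([-2, -4], [3, 0], size=(n_particles, 2))   # log10 C, log10 gamma
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-2, -4], [3, 0])               # keep particles in bounds
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(f"best log10(C), log10(gamma): {gbest}, CV accuracy: {pbest_val.max():.3f}")
```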


Subject(s)
Algorithms , Breast Neoplasms/diagnostic imaging , Mammography/methods , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Support Vector Machine , Computer Simulation , False Positive Reactions , Female , Humans , Models, Statistical , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity