Results 1 - 20 of 53
1.
Health Informatics J ; 30(3): 14604582241288460, 2024.
Article in English | MEDLINE | ID: mdl-39305515

ABSTRACT

Importance: Medical imaging increases the workload involved in writing reports, and because reports lack a standardized format, they are not easily used as communication tools. Objective: Report descriptions must also be understandable during communication between the medical team and the patient. Automatically generated imaging reports with rich and understandable information can improve medical quality. Design, setting, and participants: The image analysis theory of Panofsky and Shatford, viewed from the perspective of image metadata, was used in this study to establish a medical image interpretation template (MIIT) for automated image report generation. Main outcomes and measures: The image information included Digital Imaging and Communications in Medicine (DICOM), reporting and data systems (RADSs), and image features used in computer-aided diagnosis (CAD). The utility of the template was evaluated by a questionnaire survey to determine whether the image content could be better understood. Results: In 100 responses, exploratory factor analysis revealed that the factor loadings of the facets were greater than 0.5, indicating construct validity, and the overall Cronbach's alpha was 0.916, indicating reliability. No significant differences were noted according to sex, age, or education. Conclusions and relevance: Overall, the results show that MIIT is helpful for understanding the content of medical images.
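
As a rough illustration of the reliability analysis described above, the sketch below computes Cronbach's alpha from a respondents-by-items score matrix with NumPy. The data and variable names are placeholders, not the study's questionnaire responses.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    item_variances = responses.var(axis=0, ddof=1)      # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    k = responses.shape[1]                               # number of items
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Illustrative data: 100 respondents answering 10 Likert-scale items.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(100, 10)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```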


Subject(s)
Metadata, Humans, Female, Shared Decision Making, Middle Aged, Adult, Surveys and Questionnaires, Reproducibility of Results, Breast/diagnostic imaging
2.
J Imaging Inform Med ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39284980

ABSTRACT

Conventionally, diagnosing septic arthritis relies on detecting the causal pathogens in samples of synovial fluid, synovium, or blood. However, isolating these pathogens through cultures takes several days, delaying both diagnosis and treatment. A quantitative classification model built from ultrasound images is therefore needed for rapid septic arthritis diagnosis. For the study, a database of 342 non-septic arthritis images and 168 septic arthritis images produced by grayscale (GS) and power Doppler (PD) ultrasound was constructed. In the proposed architecture of fusion with attention and selective transformation (FAST), both groups of images were combined in a vision transformer (ViT) with the convolutional block attention module, which incorporates spatial, modality, and channel features. Fivefold cross-validation was applied to evaluate generalization ability. The FAST architecture achieved an accuracy, sensitivity, specificity, and area under the curve (AUC) of 86.33%, 80.66%, 90.25%, and 0.92, respectively. This performance was higher than that of a conventional ViT (82.14%) and significantly better than that of either modality alone (GS 73.88%, PD 72.02%; p < 0.01). Through the integration of multi-modality images and the extraction of multiple channel features, the established model provided promising accuracy and AUC in septic arthritis classification. This end-to-end learning of ultrasound features can provide rapid and objective assessment suggestions for future clinical use.
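
The abstract describes fusing grayscale and power Doppler inputs through a convolutional block attention module (CBAM). The following PyTorch sketch shows a generic CBAM-style channel-plus-spatial attention applied to concatenated feature maps; it is not the authors' FAST implementation, and the tensor shapes and channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

# Fuse grayscale (GS) and power Doppler (PD) feature maps, then apply attention.
gs_feat = torch.randn(4, 64, 32, 32)   # illustrative GS feature maps
pd_feat = torch.randn(4, 64, 32, 32)   # illustrative PD feature maps
fused = torch.cat([gs_feat, pd_feat], dim=1)        # 128 channels after fusion
attended = SpatialAttention()(ChannelAttention(128)(fused))
print(attended.shape)   # torch.Size([4, 128, 32, 32])
```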

3.
PLoS One ; 19(1): e0292277, 2024.
Article in English | MEDLINE | ID: mdl-38271352

ABSTRACT

Colorectal cancer (CRC) is a major global health concern, with microsatellite instability-high (MSI-H) being a defining characteristic of hereditary nonpolyposis colorectal cancer syndrome and affecting 15% of sporadic CRCs. Tumors with MSI-H have unique features and a better prognosis than MSI-L and microsatellite stable (MSS) tumors. This study proposed establishing an MSI prediction model using more readily available and low-cost colonoscopy images instead of histopathology. The experiment utilized a database of 427 MSI-H and 1590 MSS colonoscopy images and vision transformer (ViT) models with different feature training approaches to establish the MSI prediction model. The accuracy of combining pre-trained ViT features was 84%, with an area under the receiver operating characteristic curve of 0.86, which was better than that of DenseNet201 (80%, 0.80) in the experiment with a support vector machine. The content-based image retrieval (CBIR) approach showed that ViT features can obtain a mean average precision of 0.81, compared to 0.79 for DenseNet201. ViT reduces issues that occur in convolutional neural networks, including the limited receptive field and vanishing gradients, and may be better at interpreting diagnostic information around tumors and surrounding tissues. By using CBIR, the presentation of similar images with the same MSI status would provide more convincing deep learning suggestions for clinical use.
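
For the retrieval part, a minimal sketch of leave-one-out content-based retrieval and mean average precision over pre-extracted feature vectors might look as follows; the feature extraction step (ViT or DenseNet201) is assumed to have happened elsewhere, and the arrays are synthetic placeholders.

```python
import numpy as np

def average_precision(relevant: np.ndarray) -> float:
    """AP for one query given a boolean array of relevance in ranked order."""
    hits = np.cumsum(relevant)
    precisions = hits / (np.arange(len(relevant)) + 1)
    return float((precisions * relevant).sum() / max(relevant.sum(), 1))

def cbir_map(features: np.ndarray, labels: np.ndarray) -> float:
    """Leave-one-out retrieval: each image queries the rest by cosine similarity."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed.T
    ap_scores = []
    for i in range(len(labels)):
        order = np.argsort(-sims[i])
        order = order[order != i]                   # drop the query itself
        ap_scores.append(average_precision(labels[order] == labels[i]))
    return float(np.mean(ap_scores))

# Illustrative: 200 images with 768-dim features and binary MSI-H / MSS labels.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 768))
lbls = rng.integers(0, 2, size=200)
print(f"mean average precision = {cbir_map(feats, lbls):.3f}")
```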


Subject(s)
Hereditary Nonpolyposis Colorectal Neoplasms, Colorectal Neoplasms, Humans, Microsatellite Instability, Colorectal Neoplasms/diagnostic imaging, Colorectal Neoplasms/genetics, Microsatellite Repeats, Hereditary Nonpolyposis Colorectal Neoplasms/diagnosis, Hereditary Nonpolyposis Colorectal Neoplasms/genetics, Hereditary Nonpolyposis Colorectal Neoplasms/pathology, Colonoscopy
4.
Phys Med Biol ; 69(4)2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38232396

ABSTRACT

Objective. Recognizing the seven most relevant organs in an abdominal computed tomography (CT) slice requires sophisticated knowledge. This study proposed automatically extracting relevant features and applying them in a content-based image retrieval (CBIR) system to provide similar evidence for clinical use. Approach. A total of 2827 abdominal CT slices, including 638 liver, 450 stomach, 229 pancreas, 442 spleen, 362 right kidney, 424 left kidney and 282 gallbladder tissues, were collected to evaluate the proposed CBIR in the present study. Upon fine-tuning, high-level features used to automatically interpret the differences among the seven organs were extracted via deep learning architectures, including DenseNet, Vision Transformer (ViT), and Swin Transformer v2 (SwinViT). Three images with different annotations were employed in the classification and query. Main results. The resulting performances included the classification accuracy (94%-99%) and retrieval result (0.98-0.99). Considering global features and multiple resolutions, SwinViT performed better than ViT. ViT also benefited from a better receptive field to outperform DenseNet. Additionally, the use of whole images can obtain almost perfect results regardless of which deep learning architecture is used. Significance. The experiment showed that using pretrained deep learning architectures and fine-tuning with enough data can achieve successful recognition of seven abdominal organs. The CBIR system can provide more convincing evidence for recognizing abdominal organs via similarity measurements, which could lead to additional possibilities in clinical practice.
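
A fine-tuning setup of the kind described, with an ImageNet-pretrained backbone re-headed for the seven organ classes, could be sketched as follows (assuming a recent torchvision weights API; the batch and training step are illustrative, not the paper's pipeline).

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained DenseNet and replace the classifier head with a
# 7-way output (liver, stomach, pancreas, spleen, right kidney, left kidney,
# gallbladder).
model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 7)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of CT slices shaped (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative batch; in practice the data come from a DataLoader over CT slices.
print(train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 7, (8,))))
```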


Subject(s)
Deep Learning, X-Ray Computed Tomography/methods, Abdomen/diagnostic imaging, Liver, Lung
5.
Med Phys ; 51(1): 126-138, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38043124

ABSTRACT

BACKGROUND: Acute stroke is the leading cause of death and disability globally, with an estimated 16 million cases each year. The progression of carotid stenosis reduces blood flow to the intracranial vasculature, causing stroke. Early recognition of ischemic stroke is crucial for disease treatment and management. PURPOSE: A computer-aided diagnosis (CAD) system was proposed in this study to rapidly evaluate ischemic stroke in carotid color Doppler (CCD). METHODS: Based on the ground truth from the clinical examination report, the vision transformer (ViT) features extracted from all CCD images (513 stroke and 458 normal images) were combined in machine learning classifiers to generate the likelihood of ischemic stroke for each image. The pretrained weights from ImageNet reduced the time-consuming training process. The accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve were calculated to evaluate the stroke prediction model. The chi-square test, DeLong test, and Bonferroni correction for multiple comparisons were applied to deal with the type-I error. Only p values equal to or less than 0.00125 were considered to be statistically significant. RESULTS: The proposed CAD system achieved an accuracy of 89%, a sensitivity of 94%, a specificity of 84%, and an area under the receiver operating characteristic curve of 0.95, outperforming the convolutional neural networks AlexNet (82%, p < 0.001), Inception-v3 (78%, p < 0.001), ResNet101 (84%, p < 0.001), and DenseNet201 (85%, p < 0.01). The computational time in model training was only 30 s, which would be efficient and practical in clinical use. CONCLUSIONS: The experiment shows the promising use of CCD images in stroke estimation. Using the pretrained ViT architecture, the image features can be automatically and efficiently generated without human intervention. The proposed CAD system provides a rapid and reliable suggestion for diagnosing ischemic stroke.
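
A sketch of the accuracy comparison with a chi-square test and a Bonferroni-corrected threshold (0.05 divided by the number of comparisons, which gives 0.00125 for 40 tests) is shown below; the per-image correctness arrays are synthetic stand-ins, and the DeLong test would require a separate implementation.

```python
import numpy as np
from scipy.stats import chi2_contingency

def compare_accuracy(correct_a: np.ndarray, correct_b: np.ndarray, n_tests: int) -> dict:
    """Chi-square test on correct/incorrect counts of two models,
    with a Bonferroni-corrected significance threshold."""
    table = np.array([
        [correct_a.sum(), (~correct_a).sum()],
        [correct_b.sum(), (~correct_b).sum()],
    ])
    chi2, p, _, _ = chi2_contingency(table)
    alpha = 0.05 / n_tests            # e.g. 0.05 / 40 = 0.00125 as in the abstract
    return {"chi2": chi2, "p": p, "significant": p <= alpha}

# Illustrative per-image correctness of the ViT-based CAD vs. a CNN baseline
# on 971 carotid color Doppler images.
rng = np.random.default_rng(0)
vit_correct = rng.random(971) < 0.89
cnn_correct = rng.random(971) < 0.82
print(compare_accuracy(vit_correct, cnn_correct, n_tests=40))
```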


Subject(s)
Ischemic Stroke, Stroke, Humans, Neural Networks (Computer), Machine Learning, ROC Curve, Stroke/diagnostic imaging
6.
PLoS One ; 18(11): e0288932, 2023.
Article in English | MEDLINE | ID: mdl-38032993

ABSTRACT

By keeping in step with social phenomena, a TV drama allows the audience to resonate with the characters and builds the desire to watch the next episode. In particular, drama ratings can serve as a criterion for advertisers deciding on ad placement and as a predictor of subsequent economic activity in the surrounding industries. To identify the dissemination patterns of social information about dramas, this study used machine learning to predict drama ratings and the contribution of various drama metadata, including broadcast year, broadcast season, TV stations, day of the week, broadcast time slot, genre, screenwriters, status as an original work or sequel, actors, and facial features on posters. A total of 800 Japanese TV dramas broadcast during prime time between 2003 and 2020 were collected for analysis. Four machine learning classifiers (naïve Bayes, artificial neural network, support vector machine, and random forest) were used to combine the metadata. With facial features, the accuracy of the random forest model increased from 75.80% to 77.10%, which shows that poster information can improve the accuracy of the overall predicted ratings. Using only posters to predict ratings with a convolutional neural network still obtained an accuracy rate of 71.70%. More insights about the correlations between drama metadata and social information dissemination patterns were explored.
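
A tabular-metadata pipeline of the kind described, with one-hot encoding of categorical fields feeding a random forest, might be set up as follows; the column names and synthetic rows are illustrative, not the collected drama dataset.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Synthetic metadata table; the real columns follow the abstract (year, season,
# station, weekday, time slot, genre, screenwriter, original/sequel, actors, ...).
rng = np.random.default_rng(0)
n = 200
dramas = pd.DataFrame({
    "year": rng.integers(2003, 2021, n),
    "season": rng.choice(["winter", "spring", "summer", "autumn"], n),
    "station": rng.choice(["A", "B", "C", "D"], n),
    "weekday": rng.choice(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], n),
    "genre": rng.choice(["romance", "mystery", "medical", "comedy"], n),
    "high_rating": rng.integers(0, 2, n),   # binarized rating label
})

pipeline = Pipeline([
    ("encode", ColumnTransformer(
        [("onehot", OneHotEncoder(handle_unknown="ignore"),
          ["season", "station", "weekday", "genre"])],
        remainder="passthrough")),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
X, y = dramas.drop(columns="high_rating"), dramas["high_rating"]
print(cross_val_score(pipeline, X, y, cv=5).mean())
```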


Subject(s)
Drama, Metadata, Bayes Theorem, Machine Learning, Information Dissemination, Support Vector Machine
7.
Comput Methods Programs Biomed ; 237: 107575, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37148635

ABSTRACT

PURPOSE: Septic arthritis is an infectious disease. Conventionally, the diagnosis of septic arthritis can only be based on the identification of causal pathogens taken from synovial fluid, synovium, or blood samples. However, the cultures require several days for the isolation of pathogens. A rapid assessment performed through computer-aided diagnosis (CAD) would bring timely treatment. METHODS: A total of 214 non-septic arthritis and 64 septic arthritis images generated by gray-scale (GS) and power Doppler (PD) ultrasound modalities were collected for the experiment. A deep learning-based vision transformer (ViT) with pre-trained parameters was used for image feature extraction. The extracted features were then combined in machine learning classifiers with ten-fold cross-validation to evaluate septic arthritis classification ability. RESULTS: Using a support vector machine, GS and PD features achieved accuracy rates of 86% and 91%, with areas under the receiver operating characteristic curve (AUCs) of 0.90 and 0.92, respectively. The best accuracy (92%) and best AUC (0.92) were obtained by combining both feature sets. CONCLUSIONS: This is the first CAD system based on a deep learning approach for the diagnosis of septic arthritis as seen on knee ultrasound images. Using the pre-trained ViT, both accuracy and computational cost improved compared with convolutional neural networks. Additionally, automatically combining GS and PD yields higher accuracy to better assist the physician's observations, thus providing a timely evaluation of septic arthritis.
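
Feature-level fusion of the two modalities with ten-fold cross-validation, as described above, could be sketched like this; the feature matrices are synthetic placeholders standing in for pre-extracted ViT features.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.svm import SVC

# Illustrative pre-extracted ViT features for the GS and PD views of each case.
rng = np.random.default_rng(0)
gs_features = rng.normal(size=(278, 768))
pd_features = rng.normal(size=(278, 768))
labels = np.array([0] * 214 + [1] * 64)          # non-septic vs. septic

combined = np.hstack([gs_features, pd_features])  # simple feature-level fusion
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
probs = cross_val_predict(SVC(probability=True), combined, labels,
                          cv=cv, method="predict_proba")[:, 1]
print("accuracy:", accuracy_score(labels, probs >= 0.5))
print("AUC:", roc_auc_score(labels, probs))
```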


Subject(s)
Arthritis, Deep Learning, Humans, Ultrasonography, Machine Learning, Neural Networks (Computer)
8.
Comput Med Imaging Graph ; 107: 102242, 2023 07.
Article in English | MEDLINE | ID: mdl-37172354

ABSTRACT

The prognosis of patients with colorectal cancer (CRC) mostly relies on the classic tumor node metastasis (TNM) staging classification. A more accurate and convenient prediction model would provide a better prognosis and assist in treatment. From May 2014 to December 2017, patients who underwent an operation for CRC were enrolled. The proposed feature ensemble vision transformer (FEViT) used ensemble classifiers to combine relevant colonoscopy features from the pretrained vision transformer with clinical features, including sex, age, family history of CRC, and tumor location, to establish the prognostic model. A total of 1729 colonoscopy images were included in the current retrospective study. For the prediction of patient survival, FEViT achieved an accuracy of 94% with an area under the receiver operating characteristic curve of 0.93, which was better than the TNM staging classification (90%, 0.83) in the experiment. FEViT mitigated the limited receptive field and vanishing-gradient problems of conventional convolutional neural networks and was a relatively effective and efficient procedure. The promising accuracy of FEViT in modeling survival makes the prognosis of CRC patients more predictable and practical.
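
Combining image-derived features with clinical covariates in a soft-voting ensemble, in the spirit of the described FEViT but not its actual implementation, might look like the following; all arrays and the survival labels are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Illustrative inputs: pre-extracted ViT features per colonoscopy image plus
# simple clinical covariates (sex, age, family history, tumor location code).
rng = np.random.default_rng(0)
vit_features = rng.normal(size=(300, 768))
clinical = np.column_stack([
    rng.integers(0, 2, 300),            # sex
    rng.integers(30, 90, 300),          # age
    rng.integers(0, 2, 300),            # family history of CRC
    rng.integers(0, 4, 300),            # tumor location (coded)
])
X = np.hstack([vit_features, clinical])
y = rng.integers(0, 2, 300)             # survival label (illustrative)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=200))],
    voting="soft")
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```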


Subject(s)
Colonoscopy, Colorectal Neoplasms, Humans, Neoplasm Staging, Retrospective Studies, Prognosis, Colorectal Neoplasms/pathology
9.
Healthcare (Basel) ; 10(8)2022 Aug 08.
Article in English | MEDLINE | ID: mdl-36011151

ABSTRACT

Colorectal cancer is the leading cause of cancer-associated morbidity and mortality worldwide. One of the causes of developing colorectal cancer is untreated colon adenomatous polyps. Clinically, polyps are detected during colonoscopy, and malignancy is determined according to the biopsy. To provide a quick and objective assessment to gastroenterologists, this study proposed a quantitative polyp classification via various image features in colonoscopy. The collected image database was composed of 1991 images, including 1053 hyperplastic polyps and 938 adenomatous polyps and adenocarcinomas. From each image, textural features were extracted and combined in machine learning classifiers, and machine-generated features were automatically selected in deep convolutional neural networks (DCNNs). The DCNNs included AlexNet, Inception-V3, ResNet-101, and DenseNet-201. AlexNet trained from scratch achieved the best performance, 96.4% accuracy, which was better than transfer learning and the textural features. Using the prediction models, the malignancy level of polyps can be evaluated during a colonoscopy to provide a rapid treatment plan.
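
Training AlexNet from scratch for the two polyp classes, as reported above, can be set up along these lines with torchvision (the batch is illustrative; real input would come from a colonoscopy DataLoader).

```python
import torch
import torch.nn as nn
from torchvision import models

# AlexNet with randomly initialized weights (training "from scratch") and a
# two-class head for hyperplastic vs. adenomatous/adenocarcinomatous polyps.
model = models.alexnet(weights=None, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```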

10.
Comput Biol Med ; 147: 105779, 2022 08.
Article in English | MEDLINE | ID: mdl-35797889

ABSTRACT

PURPOSE: Stroke is one of the leading causes of disability and mortality. Carotid atherosclerosis is a crucial factor in the occurrence of ischemic stroke. To achieve timely recognition, a computer-aided diagnosis (CAD) system was proposed to evaluate the ischemic stroke patterns in carotid color Doppler (CCD). METHODS: A total of 513 stroke and 458 normal CCD images were collected from 102 stroke and 75 normal patients, respectively. For each image, quantitative histogram, shape, and texture features were extracted to interpret the diagnostic information. In the experiment, a logistic regression classifier with backward elimination and leave-one-out cross validation was used to combine features as a prediction model. RESULTS: The performance of the CAD system using histogram, shape, and texture features achieved accuracies of 87%, 60%, and 87%, respectively. With respect to the combined features, the CAD achieved an accuracy of 89%, a sensitivity of 89%, a specificity of 88%, a positive predictive value of 89%, a negative predictive value of 88%, and Kappa = 0.77, with an area under the receiver operating characteristic curve of 0.94. CONCLUSIONS: Based on the extracted quantitative features in the CCD images, the proposed CAD system provides valuable suggestions for assisting physicians in improving ischemic stroke diagnoses during carotid ultrasound examination.
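
A leave-one-out logistic regression with a greedy, accuracy-driven backward elimination, a simplified stand-in for the feature selection described above, might be sketched as follows; the feature matrix is synthetic and the paper's elimination criterion may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loo_accuracy(X: np.ndarray, y: np.ndarray, cols: list) -> float:
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, cols], y, cv=LeaveOneOut()).mean()

def backward_elimination(X: np.ndarray, y: np.ndarray) -> list:
    """Greedily drop features while leave-one-out accuracy does not decrease."""
    cols = list(range(X.shape[1]))
    best = loo_accuracy(X, y, cols)
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for c in list(cols):
            trial = [k for k in cols if k != c]
            score = loo_accuracy(X, y, trial)
            if score >= best:
                best, cols, improved = score, trial, True
                break
    return cols

# Illustrative histogram/shape/texture feature matrix for 60 CCD images.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = rng.integers(0, 2, 60)
print(backward_elimination(X, y))
```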


Subject(s)
Ischemic Stroke, Carotid Arteries/diagnostic imaging, Computers, Computer-Assisted Diagnosis, Humans, Sensitivity and Specificity
11.
Cancers (Basel) ; 13(22)2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34830948

ABSTRACT

Gastrointestinal stromal tumors (GIST) are common mesenchymal tumors, and their effective treatment depends upon the mutational subtype of the KIT/PDGFRA genes. We established deep convolutional neural network (DCNN) models to rapidly predict drug-sensitive mutation subtypes from images of pathological tissue. A total of 5153 pathological images of 365 different GISTs from three different laboratories were collected and divided into training and validation sets. A transfer learning mechanism based on DCNN was used with four different network architectures, to identify cases with drug-sensitive mutations. The accuracy ranged from 87% to 75%. Cross-institutional inconsistency, however, was observed. Using gray-scale images resulted in a 7% drop in accuracy (accuracy 80%, sensitivity 87%, specificity 73%). Using images containing only nuclei (accuracy 81%, sensitivity 87%, specificity 73%) or cytoplasm (accuracy 79%, sensitivity 88%, specificity 67%) produced 6% and 8% drops in accuracy rate, respectively, suggesting buffering effects across subcellular components in DCNN interpretation. The proposed DCNN model successfully inferred cases with drug-sensitive mutations with high accuracy. The contribution of image color and subcellular components was also revealed. These results will help to generate a cheaper and quicker screening method for tumor gene testing.

12.
J Digit Imaging ; 34(3): 637-646, 2021 06.
Article in English | MEDLINE | ID: mdl-33963421

ABSTRACT

Acute stroke is one of the leading causes of disability and death worldwide. Regarding clinical diagnosis, a rapid and accurate procedure is necessary for patients suffering from acute stroke. This study proposes an automatic identification scheme for acute ischemic stroke using deep convolutional neural networks (DCNNs) based on non-contrast computed tomographic (NCCT) images. Our image database for the classification model was composed of 1254 grayscale NCCT images from 96 patients (573 images) with acute ischemic stroke and 121 normal controls (681 images). According to the consensus of critical stroke findings by two neuroradiologists, a gold standard was established and used to train the proposed DCNNs using machine-generated image features. The evaluated architectures included the earliest DCNN, AlexNet, together with the popular Inception-v3 and ResNet-101. Because of the limited dataset size, transfer learning with ImageNet parameters was also used. The established models were evaluated by tenfold cross-validation and tested on an independent dataset containing 50 patients with acute ischemic stroke (108 images) and 58 normal controls (117 images) from another institution. AlexNet without pretrained parameters achieved an accuracy of 97.12%, a sensitivity of 98.11%, a specificity of 96.08%, and an area under the receiver operating characteristic curve (AUC) of 0.9927. Using transfer learning, transferred AlexNet, transferred Inception-v3, and transferred ResNet-101 achieved accuracies between 90.49% and 95.49%. Tested with a dataset from another institution, AlexNet showed an accuracy of 60.89%, a sensitivity of 18.52%, and a specificity of 100%. Transferred AlexNet, Inception-v3, and ResNet-101 achieved accuracies of 81.77%, 85.78%, and 80.89%, respectively. The proposed DCNN architecture as a computer-aided diagnosis system showed that training from scratch can generate a customized model for a specific scanner, and transfer learning can generate a more generalized model to provide diagnostic suggestions of acute ischemic stroke to radiologists.
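
The transfer-learning variant, an ImageNet-pretrained backbone with a frozen feature extractor and a new two-class head, could be set up as follows (assuming a recent torchvision weights API; freezing the whole backbone is one possible choice, not necessarily the paper's).

```python
import torch.nn as nn
from torchvision import models

# Transfer learning sketch: start from ImageNet weights, freeze the backbone,
# and train only a new two-class head (acute ischemic stroke vs. normal control).
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                 # freeze pretrained layers
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable head

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```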


Subject(s)
Brain Ischemia, Ischemic Stroke, Stroke, Brain Ischemia/diagnostic imaging, Humans, Neural Networks (Computer), Stroke/diagnostic imaging, X-Ray Computed Tomography
13.
Ultrasound Med Biol ; 47(8): 2266-2276, 2021 08.
Article in English | MEDLINE | ID: mdl-34001404

ABSTRACT

Stroke is a leading cause of disability and death worldwide. Early and accurate recognition of acute stroke is critical for achieving a good prognosis. The novel automated system proposed in this study was based on convolutional neural networks (CNNs), which were used to identify lesion findings on carotid color Doppler (CCD) images in patients with acute ischemic stroke. An image database composed of 1032 CCD images from 106 patients with acute ischemic stroke (549 images) and from 79 normal controls (483 images) was retrospectively analyzed. Taking the consensus of two neuroradiologists as the gold standard, different CNN models with and without transfer learning were evaluated with 10-fold cross-validation. The diagnostic information provided from individual color channels was also explored. AlexNet, which was trained from scratch, achieved an accuracy of 91.67%, a sensitivity of 93.33%, a specificity of 90.20% and an area under the receiver operating characteristic curves (AUC) of 0.9432. Other transferred models achieved accuracies between 77.69% and 83.94%. In channel comparisons, the green channel had the best performance, with an accuracy of 87.50%, a sensitivity of 97.78%, a specificity of 78.43% and an AUC of 0.9507. The proposed CNN architecture, as a computer-aided diagnosis system, suggests using automatic feature extraction from CCD images to predict ischemic stroke. The developed scheme has the potential to provide diagnostic suggestions in clinical use.
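
For the single-channel comparison, isolating the green channel and replicating it so pretrained three-channel CNNs still accept the input could look like this; the batch is illustrative.

```python
import torch

def green_channel_input(batch: torch.Tensor) -> torch.Tensor:
    """Keep only the green channel of RGB CCD images and replicate it so the
    tensor still matches the 3-channel input expected by pretrained CNNs."""
    green = batch[:, 1:2, :, :]                 # channel order assumed R, G, B
    return green.repeat(1, 3, 1, 1)

images = torch.rand(4, 3, 224, 224)             # illustrative CCD batch
print(green_channel_input(images).shape)        # torch.Size([4, 3, 224, 224])
```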


Subject(s)
Carotid Arteries/diagnostic imaging, Ischemic Stroke/diagnostic imaging, Color Doppler Ultrasonography, Aged, Female, Humans, Male, Middle Aged, Neural Networks (Computer), Retrospective Studies
14.
Medicine (Baltimore) ; 99(8): e19123, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32080088

ABSTRACT

World Health Organization tumor classifications of the central nervous system differentiate glioblastoma multiforme (GBM) into wild-type (WT) and mutant isocitrate dehydrogenase (IDH) genotypes. This study proposes a noninvasive computer-aided diagnosis to interpret the status of IDH in glioblastomas from transformed magnetic resonance imaging patterns. The collected image database was composed of 32 WT and 7 mutant IDH cases. For each image, a ranklet transformation, which changes the original pixel values into relative coefficients, was first applied to reduce the effects of different scanning parameters and machines on the underlying patterns. Extracting various textural features from the transformed ranklet images and combining them in a logistic regression classifier allowed IDH prediction. We achieved an accuracy of 90%, a sensitivity of 57%, and a specificity of 97%. Four of the selected textural features in the classifier (homogeneity, difference entropy, information measure of correlation, and inverse difference normalized) were significant (P < .05), and the other two were close to significant (P = .06). The proposed computer-aided diagnosis system, based on radiomic textural features from ranklet-transformed images that use relative rankings of pixel values as intensity-invariant coefficients, is a promising noninvasive solution to provide recommendations about the IDH status in GBM across different healthcare institutions.
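
Gray-level co-occurrence texture descriptors of the kind listed above can be computed with scikit-image as sketched below (recent graycomatrix/graycoprops spelling assumed); only part of the paper's feature set is available directly, and the ranklet transformation itself is not shown.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(image: np.ndarray) -> dict:
    """GLCM descriptors from an 8-bit image. Only homogeneity and correlation
    of the paper's feature set are available directly in scikit-image; the
    remaining descriptors would need custom GLCM code."""
    glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("homogeneity", "correlation", "contrast")}

# Illustrative 8-bit patch; the paper computes features on ranklet-transformed images.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(texture_features(patch))
```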


Subject(s)
Brain Neoplasms/genetics, Computer-Assisted Diagnosis/methods, Glioblastoma/genetics, Isocitrate Dehydrogenase/genetics, Adult, Aged, Algorithms, Brain Neoplasms/diagnostic imaging, Female, Genotype, Glioblastoma/diagnostic imaging, Humans, Magnetic Resonance Imaging/methods, Male, Middle Aged, Mutation, Predictive Value of Tests, Preoperative Period, Sensitivity and Specificity
15.
BMC Bioinformatics ; 20(Suppl 19): 659, 2019 Dec 24.
Article in English | MEDLINE | ID: mdl-31870275

ABSTRACT

BACKGROUND: Accurate classification of diffuse gliomas, the most common tumors of the central nervous system in adults, is important for appropriate treatment. However, detection of isocitrate dehydrogenase (IDH) mutation and chromosome 1p/19q codeletion, biomarkers used to classify gliomas, is time- and cost-intensive, and diagnostic discordance remains an issue. Adenosine-to-inosine (A-to-I) RNA editing has emerged as a novel cancer prognostic marker, but its value for glioma classification remains largely unexplored. We aim to (1) unravel the relationship between RNA editing and IDH mutation and 1p/19q codeletion and (2) predict IDH mutation and 1p/19q codeletion status using machine learning algorithms. RESULTS: By characterizing genome-wide A-to-I RNA editing signatures of 638 gliomas, we found that tumors without IDH mutation exhibited a higher total editing level than those carrying it (Kolmogorov-Smirnov test, p < 0.0001). When tumor grade was considered, however, only grade IV tumors without IDH mutation exhibited a higher total editing level. According to 10-fold cross-validation, support vector machines (SVM) outperformed random forest and AdaBoost (DeLong test, p < 0.05). The areas under the receiver operating characteristic curve (AUC) of SVM in predicting IDH mutation and 1p/19q codeletion were 0.989 and 0.990, respectively. After performing feature selection, the AUCs of SVM and AdaBoost in predicting IDH mutation were higher than that of random forest (0.985 and 0.983 vs. 0.977; DeLong test, p < 0.05), but the AUCs of the three algorithms in predicting 1p/19q codeletion were similar (0.976-0.982). Furthermore, 67% of the six consistently misclassified samples by our 1p/19q codeletion prediction models were misclassifications in the original labelling after inspection of 1p/19q status and/or pathology report, highlighting the accuracy and clinical utility of our models. CONCLUSIONS: The study represents the first genome-wide analysis of the glioma editome and identifies RNA editing as a novel prognostic biomarker for glioma. Our prediction models provide standardized, accurate, reproducible and objective classification of gliomas. Our models are not only useful in clinical decision-making, but also able to identify editing events that have the potential to serve as biomarkers and therapeutic targets in glioma management and treatment.
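
The group comparison of total editing levels with a two-sample Kolmogorov-Smirnov test could be run as follows; the editing-level arrays are synthetic placeholders, not the 638-glioma dataset.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative total A-to-I editing levels for IDH-wild-type vs. IDH-mutant tumors
# (the split into 300 and 338 samples is arbitrary).
rng = np.random.default_rng(0)
idh_wildtype = rng.normal(loc=1.05, scale=0.15, size=300)
idh_mutant = rng.normal(loc=0.95, scale=0.15, size=338)

stat, p = ks_2samp(idh_wildtype, idh_mutant)
print(f"KS statistic = {stat:.3f}, p = {p:.2e}")
```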


Subject(s)
Brain Neoplasms/genetics, Glioma/genetics, Isocitrate Dehydrogenase/genetics, RNA Editing, Chromosome Aberrations, Human Chromosomes Pair 1, Human Chromosomes Pair 19, Humans, Machine Learning, Mutation, Neoplasm Grading
17.
PLoS One ; 14(2): e0212741, 2019.
Article in English | MEDLINE | ID: mdl-30817804

ABSTRACT

The lifetime prevalence of shoulder pain is nearly 70% and is mostly attributable to subacromial disorders. A rotator cuff tear is the most severe form of subacromial disorders, and most occur in the supraspinatus. For clinical examination, shoulder ultrasound is recommended to detect supraspinatus tears. In this study, a computer-aided tear classification (CTC) system was developed to identify supraspinatus tears in ultrasound examinations and reduce inter-operator variability. The observed cases included 89 ultrasound images of supraspinatus tendinopathy and 102 of supraspinatus tear from 136 patients. For each case, intensity and texture features were extracted from the entire lesion and combined in a binary logistic regression classifier for lesion classification. The proposed CTC system achieved an accuracy rate of 92% (176/191) and an area under receiver operating characteristic curve (Az) of 0.9694. Based on its diagnostic performance, the CTC system has promise for clinical use.


Subject(s)
Computer-Assisted Image Interpretation, Rotator Cuff Injuries/diagnosis, Rotator Cuff/diagnostic imaging, Shoulder Pain/etiology, Adult, Aged, Aged 80 and over, Feasibility Studies, Female, Humans, Male, Middle Aged, Automated Pattern Recognition, ROC Curve, Rotator Cuff Injuries/complications, Ultrasonography
18.
J Clin Med ; 7(12)2018 Nov 24.
Article in English | MEDLINE | ID: mdl-30477203

ABSTRACT

PURPOSE: Artificial neural networks (ANNs) are one type of artificial intelligence. Here, we use an ANN-based machine learning algorithm to automatically predict visual outcomes after ranibizumab treatment in diabetic macular edema. METHODS: Patient data were used to optimize ANNs for regression calculation. The target was established as the final visual acuity at 52, 78, or 104 weeks. The input baseline variables were sex, age, diabetes type or condition, systemic diseases, eye status, and treatment timetables. Three groups were randomly devised to build, test, and demonstrate the accuracy of the algorithms. RESULTS: At 52, 78 and 104 weeks, 512, 483 and 464 eyes were included, respectively. For the training group, testing group and validation group, the respective correlation coefficients were 0.75, 0.77 and 0.70 (52 weeks); 0.79, 0.80 and 0.55 (78 weeks); and 0.83, 0.47 and 0.81 (104 weeks), while the mean standard errors of final visual acuity were 6.50, 6.11 and 6.40 (52 weeks); 5.91, 5.83 and 7.59 (78 weeks); and 5.39, 8.70 and 6.81 (104 weeks). CONCLUSIONS: Machine learning achieved good correlation coefficients for predicting the prognosis of ranibizumab treatment from baseline characteristics alone. These models could be useful clinical tools for predicting treatment success.
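
An ANN regression of final visual acuity on baseline variables, evaluated by the correlation coefficient as above, might be sketched with scikit-learn like this; the feature matrix and target are synthetic placeholders.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Illustrative baseline variables (sex, age, diabetes type, baseline acuity, ...)
# regressed onto final visual acuity at week 52.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 12))
y = X[:, 3] * 5 + rng.normal(scale=3.0, size=512)   # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
r, _ = pearsonr(y_test, ann.predict(X_test))
print(f"correlation coefficient = {r:.2f}")
```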

20.
Med Phys ; 45(12): 5509-5514, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30325517

ABSTRACT

PURPOSE: Bronchoscopy is useful in lung cancer detection, but cannot be used to differentiate cancer types. A computer-aided diagnosis (CAD) system was proposed to distinguish malignant cancer types to achieve objective diagnoses. METHODS: Bronchoscopic images of 12 adenocarcinoma and 10 squamous cell carcinoma patients were collected. The images were transformed from the red-green-blue (RGB) to the hue-saturation-value (HSV) color space to obtain more meaningful color textures. By combining significant textural features (P < 0.05) in a machine learning classifier, a prediction model of malignant types was established. RESULTS: The performance of the CAD system achieved an accuracy of 86% (19/22), a sensitivity of 90% (9/10), a specificity of 83% (10/12), a positive predictive value of 82% (9/11), and a negative predictive value of 91% (10/11) in distinguishing lung cancer types. The area under the receiver operating characteristic curve was 0.82. CONCLUSIONS: On the basis of the extracted HSV textures of bronchoscopic images, the CAD system can provide recommendations for clinical diagnoses of lung cancer types.
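
The RGB-to-HSV transformation that precedes the texture analysis can be done with scikit-image as follows; the frame is a random placeholder for a bronchoscopic image.

```python
import numpy as np
from skimage.color import rgb2hsv

# Convert a bronchoscopic RGB frame to HSV so that hue and saturation textures
# can be analyzed separately from brightness; the array shape is illustrative.
rgb_frame = np.random.rand(256, 256, 3)         # float RGB in [0, 1]
hsv_frame = rgb2hsv(rgb_frame)
hue, saturation, value = np.moveaxis(hsv_frame, -1, 0)
print(hue.mean(), saturation.mean(), value.mean())
```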


Subject(s)
Bronchoscopy, Computer-Assisted Image Interpretation/methods, Lung Neoplasms/diagnostic imaging, Machine Learning, Adult, Aged, Aged 80 and over, Female, Humans, Male, Middle Aged