Results 1 - 20 of 62
1.
PLoS One ; 16(12): e0261307, 2021.
Article in English | MEDLINE | ID: mdl-34968393

ABSTRACT

Medical images commonly exhibit multiple abnormalities. Predicting them requires multi-class classifiers whose training and reliable performance can be affected by a combination of factors such as dataset size, data source, distribution, and the loss function used to train the deep neural network. Currently, cross-entropy remains the de facto loss function for training deep learning classifiers. This loss function, however, asserts equal learning from all classes, leading to a bias toward the majority class. Although the choice of loss function impacts model performance, to the best of our knowledge, no existing literature performs a comprehensive analysis and selection of an appropriate loss function for the classification task under study. In this work, we benchmark various state-of-the-art loss functions, critically analyze model performance, and propose improved loss functions for a multi-class classification task. We select a pediatric chest X-ray (CXR) dataset that includes images with no abnormality (normal) and images exhibiting manifestations consistent with bacterial and viral pneumonia. We construct prediction-level and model-level ensembles to improve classification performance. Our results show that, compared to the individual models and the state-of-the-art literature, weighted averaging of the predictions for the top-3 and top-5 model-level ensembles delivered significantly superior classification performance (p < 0.05) in terms of the MCC metric (0.9068; 95% confidence interval (0.8839, 0.9297)). Finally, we performed localization studies to interpret model behavior and confirm that the individual models and ensembles learned task-specific features and highlighted disease-specific regions of interest. The code is available at https://github.com/sivaramakrishnan-rajaraman/multiloss_ensemble_models.
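The weighted averaging of model predictions described above can be sketched in a few lines; the softmax outputs and ensemble weights below are illustrative stand-ins, not values from the study, and MCC is computed with scikit-learn:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

# Hypothetical softmax outputs of three models for 4 samples and 3 classes
# (normal, bacterial, viral) -- illustrative values only.
preds = np.array([
    [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7], [0.6, 0.3, 0.1]],
    [[0.7, 0.2, 0.1], [0.3, 0.6, 0.1], [0.2, 0.1, 0.7], [0.5, 0.4, 0.1]],
    [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.7, 0.2, 0.1]],
])
weights = np.array([0.5, 0.3, 0.2])  # e.g. proportional to validation MCC

# Weighted average of the per-model class probabilities
ensemble = np.tensordot(weights, preds, axes=1)  # shape (4, 3)
y_pred = ensemble.argmax(axis=1)

y_true = np.array([0, 1, 2, 0])
print(matthews_corrcoef(y_true, y_pred))
```

How the weights are chosen (uniform, validation-metric proportional, or learned) is a design decision; the study's exact weighting scheme is in its repository.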


Subject(s)
Algorithms , Diagnostic Imaging , Image Processing, Computer-Assisted/classification , Area Under Curve , Entropy , Humans , Lung/diagnostic imaging , ROC Curve , Thorax/diagnostic imaging , X-Rays
2.
Oxid Med Cell Longev ; 2021: 6280690, 2021.
Article in English | MEDLINE | ID: mdl-33688390

ABSTRACT

Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach to medical image classification. However, deep learning faces the following problems when applied to medical image classification. First, it is difficult to construct a deep learning model with excellent performance according to the characteristics of medical images. Second, current deep learning network structures and training strategies are poorly adapted to medical images. Therefore, this paper first introduces a visual attention mechanism into the deep learning model so that information can be extracted more effectively from medical images and reasoning can be realized at a finer granularity, which also increases the interpretability of the model. Additionally, to address the mismatch between deep learning network structures, training strategies, and medical images, this paper constructs a novel multiscale convolutional neural network model that automatically extracts high-level discriminative appearance features from the original image; the loss function uses a Mahalanobis distance optimization model to obtain a better training strategy, which improves the robustness of the network model. The medical image classification task is completed by the above method. Based on these ideas, this paper proposes a medical image classification algorithm based on a visual attention mechanism and a multiscale convolutional neural network. Lung nodule and breast cancer images were classified by this method. The experimental results show that the accuracy of the proposed medical image classification is not only higher than that of traditional machine learning methods but also improved over other deep learning methods, and the method has good stability and robustness.
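The abstract does not spell out its Mahalanobis-distance loss, but the underlying distance itself is standard; a minimal sketch, assuming hypothetical 2-D feature embeddings and a regularized covariance estimate:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of sample x from a class distribution."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Toy 2-D feature vectors for one class (hypothetical embeddings)
feats = np.array([[1.0, 2.0], [1.2, 1.8], [0.8, 2.2], [1.1, 2.1]])
mu = feats.mean(axis=0)
cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(2)  # regularize

print(mahalanobis(np.array([1.0, 2.0]), mu, cov))
```

Unlike Euclidean distance, this accounts for feature correlations and scales, which is the usual motivation for a Mahalanobis-based objective.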


Subject(s)
Algorithms , Attention/physiology , Image Processing, Computer-Assisted/classification , Neural Networks, Computer , Breast Neoplasms/classification , Breast Neoplasms/diagnostic imaging , Female , Humans , Lung Neoplasms/diagnostic imaging
3.
Am J Ophthalmol ; 226: 100-107, 2021 06.
Article in English | MEDLINE | ID: mdl-33577791

ABSTRACT

PURPOSE: To compare the performance of a novel convolutional neural network (CNN) classifier and human graders in detecting angle closure in EyeCam (Clarity Medical Systems, Pleasanton, California, USA) goniophotographs. DESIGN: Retrospective cross-sectional study. METHODS: Subjects from the Chinese American Eye Study underwent EyeCam goniophotography in 4 angle quadrants. A CNN classifier based on the ResNet-50 architecture was trained to detect angle closure, defined as inability to visualize the pigmented trabecular meshwork, using reference labels by a single experienced glaucoma specialist. The performance of the CNN classifier was assessed using an independent test dataset and reference labels by the single glaucoma specialist or a panel of 3 glaucoma specialists. This performance was compared to that of 9 human graders with a range of clinical experience. Outcome measures included area under the receiver operating characteristic curve (AUC) metrics and Cohen kappa coefficients in the binary classification of open or closed angle. RESULTS: The CNN classifier was developed using 29,706 open and 2,929 closed angle images. The independent test dataset was composed of 600 open and 400 closed angle images. The CNN classifier achieved excellent performance based on single-grader (AUC = 0.969) and consensus (AUC = 0.952) labels. The agreement between the CNN classifier and consensus labels (κ = 0.746) surpassed that of all non-reference human graders (κ = 0.578-0.702). Human grader agreement with consensus labels improved with clinical experience (P = 0.03). CONCLUSION: A CNN classifier can effectively detect angle closure in goniophotographs with performance comparable to that of an experienced glaucoma specialist. This provides an automated method to support remote detection of patients at risk for primary angle closure glaucoma.
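The outcome measures named above, AUC and the Cohen kappa coefficient, are standard and can be reproduced with scikit-learn; the labels and probabilities below are illustrative, not data from the study:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Hypothetical reference labels, CNN probabilities, and one human grader's
# calls for 8 goniophotographs (1 = angle closure, 0 = open angle).
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
cnn_prob = np.array([0.1, 0.3, 0.2, 0.6, 0.7, 0.9, 0.8, 0.4])
grader = np.array([0, 0, 1, 0, 1, 1, 1, 0])

print("AUC:", roc_auc_score(y_true, cnn_prob))       # ranking quality
print("kappa:", cohen_kappa_score(y_true, grader))   # chance-corrected agreement
```

AUC evaluates the continuous CNN output against the reference, while kappa compares the binary calls of two graders (or grader vs. consensus), which is why the study reports both.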


Asunto(s)
Diagnóstico por Computador/clasificación , Glaucoma de Ángulo Cerrado/diagnóstico , Procesamiento de Imagen Asistido por Computador/clasificación , Redes Neurales de la Computación , Fotograbar/clasificación , Anciano , Anciano de 80 o más Años , Segmento Anterior del Ojo/patología , Área Bajo la Curva , Asiático , China/etnología , Estudios Transversales , Sistemas Especialistas , Femenino , Glaucoma de Ángulo Cerrado/clasificación , Gonioscopía , Humanos , Masculino , Persona de Mediana Edad , Oftalmólogos , Reproducibilidad de los Resultados , Estudios Retrospectivos , Especialización
4.
IEEE Trans Image Process ; 30: 191-206, 2021.
Article in English | MEDLINE | ID: mdl-33136542

ABSTRACT

Kinship recognition is a prominent research area that aims to determine whether a kinship relation exists between two individuals. In general, a child resembles his or her parents more closely than others do, owing to genetically inherited facial features shared with the parents. Most existing research in kinship recognition focuses on full facial images to find these kinship similarities. This paper first presents kinship recognition for similar full facial images using the proposed global dual-tree complex wavelet transform (G-DTCWT). We then present novel patch-based kinship recognition methods based on the dual-tree complex wavelet transform (DT-CWT): local patch-based DT-CWT (LP-DTCWT) and selective patch-based DT-CWT (SP-DTCWT). LP-DTCWT extracts coefficients from smaller facial patches for kinship recognition. SP-DTCWT extends LP-DTCWT by extracting coefficients only from representative patches whose similarity scores lie above a normalized cumulative threshold, computed by a novel patch selection process. These representative patches contribute more of the similarity in parent/child image pairs and improve kinship accuracy. The proposed methods are extensively evaluated on different publicly available kinship datasets. Experimental results demonstrate the efficacy of the proposed methods on all kinship datasets. SP-DTCWT achieves accuracy competitive with state-of-the-art methods: mean kinship accuracy of 95.85% on the baseline KinFaceW-I and 95.30% on the KinFaceW-II dataset. Further, SP-DTCWT achieves the state-of-the-art accuracy of 80.49% on the largest kinship dataset, Families In the Wild (FIW).
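The paper's patch-selection rule is only summarized in the abstract, but the idea of keeping the top-scoring patches until a normalized cumulative threshold is reached can be sketched; the per-patch similarity scores and the 0.8 threshold below are hypothetical:

```python
import numpy as np

# Hypothetical per-patch similarity scores between a parent/child image pair
# (e.g. derived from wavelet coefficients); values are illustrative only.
scores = np.array([0.9, 0.7, 0.5, 0.3, 0.1])

def select_patches(scores, threshold=0.8):
    """Keep the highest-scoring patches whose normalized cumulative
    similarity first reaches `threshold` (a sketch of selective patching)."""
    order = np.argsort(scores)[::-1]          # patches by descending score
    cum = np.cumsum(scores[order]) / scores.sum()
    keep = order[: int(np.searchsorted(cum, threshold) + 1)]
    return sorted(keep.tolist())

print(select_patches(scores))
```

The effect is that low-similarity patches, which contribute little shared facial structure, are excluded from the final representation.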


Asunto(s)
Familia , Procesamiento de Imagen Asistido por Computador , Reconocimiento de Normas Patrones Automatizadas/métodos , Análisis de Ondículas , Bases de Datos Factuales , Cara/diagnóstico por imagen , Humanos , Procesamiento de Imagen Asistido por Computador/clasificación , Procesamiento de Imagen Asistido por Computador/métodos
5.
Sci Rep ; 10(1): 19560, 2020 11 11.
Article in English | MEDLINE | ID: mdl-33177565

ABSTRACT

The accurate recognition of multiple sclerosis (MS) lesions is challenged by the high sensitivity and imperfect specificity of MRI. To examine whether longitudinal changes in volume, surface area, 3-dimensional (3D) displacement (i.e. change in lesion position), and 3D deformation (i.e. change in lesion shape) could inform on the origin of supratentorial brain lesions, we prospectively enrolled 23 patients with MS and 11 patients with small vessel disease (SVD) and performed standardized 3-T 3D brain MRI studies. Bayesian linear mixed effects regression models were constructed to evaluate associations between changes in lesion morphology and disease state. A total of 248 MS and 157 SVD lesions were studied. Individual MS lesions demonstrated significant decreases in volume < 3.75mm3 (p = 0.04), greater shifts in 3D displacement by 23.4% with increasing duration between MRI time points (p = 0.007), and greater transitions to a more non-spherical shape (p < 0.0001). If 62.2% of lesions within a given MRI study had a calculated theoretical radius > 2.49 based on deviation from a perfect 3D sphere, a 92.7% in-sample and 91.2% out-of-sample accuracy was identified for the diagnosis of MS. Longitudinal 3D shape evolution and displacement characteristics may improve lesion classification, adding to MRI techniques aimed at improving lesion specificity.
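As one way to quantify deviation from a perfect 3D sphere (the abstract does not define its "theoretical radius" precisely, so this is an assumption), one can compare the radii implied by a lesion's volume and surface area, which coincide only for a sphere:

```python
import numpy as np

def radii_from_morphology(volume, surface_area):
    """Theoretical radii implied by a lesion's volume and surface area;
    they agree only for a perfect sphere (a sketch -- the study's exact
    'theoretical radius' definition is not spelled out in the abstract)."""
    r_vol = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
    r_area = np.sqrt(surface_area / (4.0 * np.pi))
    return r_vol, r_area

# Perfect sphere of radius 2: both radii agree
r_vol, r_area = radii_from_morphology(
    volume=4.0 / 3.0 * np.pi * 8.0, surface_area=4.0 * np.pi * 4.0)
print(r_vol, r_area)
```

For an irregular (non-spherical) lesion, surface area grows faster than the spherical minimum for its volume, so the two radii diverge; a ratio of such quantities is a common sphericity measure.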


Asunto(s)
Procesamiento de Imagen Asistido por Computador/métodos , Imagen por Resonancia Magnética/métodos , Esclerosis Múltiple/diagnóstico por imagen , Adulto , Enfermedades de los Pequeños Vasos Cerebrales/diagnóstico por imagen , Femenino , Humanos , Procesamiento de Imagen Asistido por Computador/clasificación , Imagenología Tridimensional/clasificación , Imagenología Tridimensional/métodos , Masculino , Persona de Mediana Edad , Trastornos Migrañosos/diagnóstico por imagen , Esclerosis Múltiple/tratamiento farmacológico
6.
Microscopy (Oxf) ; 69(2): 61-68, 2020 Apr 08.
Article in English | MEDLINE | ID: mdl-32115658

ABSTRACT

In this review, we focus on the applications of machine learning methods for analyzing image data acquired in imaging flow cytometry technologies. We propose that the analysis approaches can be categorized into two groups based on the type of data, raw imaging signals or features explicitly extracted from images, being analyzed by a trained model. We hope that this categorization is helpful for understanding uniqueness, differences and opportunities when the machine learning-based analysis is implemented in recently developed 'imaging' cell sorters.


Subject(s)
Flow Cytometry/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Image Processing, Computer-Assisted/classification , Optical Imaging
7.
PLoS One ; 13(9): e0203339, 2018.
Article in English | MEDLINE | ID: mdl-30208096

ABSTRACT

Recent technological developments have increased the complexity of image content, making the demand for image classification ever more imperative. Digital images play a vital role in many applied domains such as remote sensing, scene analysis, medical care, the textile industry, and crime investigation. Feature extraction and image representation are considered important steps in scene analysis, as they affect image classification performance. Automatic classification of images is an open research problem for image analysis and pattern recognition applications. The Bag-of-Features (BoF) model is commonly used for image classification, object recognition, and other computer vision problems. In the BoF model, the final feature-vector representation of an image contains no information about the co-occurrence of features in the 2D image space. This is a limitation, as the spatial arrangement of visual words in image space carries information beneficial for image representation and for learning a classification model. To address this, researchers have proposed different image representations. Among these, the division of image space into geometric sub-regions, each yielding a histogram for the BoF model, is a notable contribution for extracting spatial clues. Keeping this in view, we explore a Hybrid Geometric Spatial Image Representation (HGSIR) based on the combination of histograms computed over rectangular, triangular, and circular regions of the image. Five standard image datasets are used to evaluate the performance of the proposed research. The quantitative analysis demonstrates that the proposed approach outperforms state-of-the-art research in terms of classification accuracy.
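The basic BoF pipeline the paper builds on, vocabulary construction by clustering followed by histogram quantization, can be sketched as follows; the random descriptors and 16-word codebook are illustrative stand-ins for real local features:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical local descriptors (e.g. SIFT-like, 8-D) from training images
train_desc = rng.normal(size=(200, 8))

# Build the visual vocabulary (codebook) by clustering descriptors
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(train_desc)

def bof_histogram(descriptors, codebook):
    """Quantize an image's descriptors against the codebook and return a
    normalized visual-word histogram (the basic BoF representation)."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

img_desc = rng.normal(size=(50, 8))  # descriptors from one test image
h = bof_histogram(img_desc, codebook)
print(h.sum())  # normalized histogram
```

A spatial variant in the spirit of HGSIR would compute such a histogram separately over each geometric sub-region (rectangular, triangular, circular) and concatenate them, so the final vector retains coarse spatial layout.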


Asunto(s)
Procesamiento de Imagen Asistido por Computador/métodos , Reconocimiento de Normas Patrones Automatizadas/métodos , Algoritmos , Animales , Inteligencia Artificial , Bases de Datos Factuales/clasificación , Bases de Datos Factuales/estadística & datos numéricos , Humanos , Procesamiento de Imagen Asistido por Computador/clasificación , Procesamiento de Imagen Asistido por Computador/estadística & datos numéricos , Multimedia/estadística & datos numéricos , Reconocimiento de Normas Patrones Automatizadas/clasificación , Reconocimiento de Normas Patrones Automatizadas/estadística & datos numéricos , Fotograbar/estadística & datos numéricos
8.
Fed Regist ; 82(242): 60306-8, 2017 Dec 20.
Article in English | MEDLINE | ID: mdl-29260838

ABSTRACT

The Food and Drug Administration (FDA or we) is classifying the image processing device for estimation of external blood loss into class II (special controls). The special controls that apply to the device type are identified in this order and will be part of the codified language for the image processing device for estimation of external blood loss' classification. We are taking this action because we have determined that classifying the device into class II (special controls) will provide a reasonable assurance of safety and effectiveness of the device. We believe this action will also enhance patients' access to beneficial innovative devices, in part by reducing regulatory burdens.


Asunto(s)
Seguridad de Equipos/clasificación , Procesamiento de Imagen Asistido por Computador/clasificación , Procesamiento de Imagen Asistido por Computador/instrumentación , Fotometría/clasificación , Fotometría/instrumentación , Pérdida de Sangre Quirúrgica , Hemoglobinas , Humanos , Tapones Quirúrgicos de Gaza
9.
Fed Regist ; 82(198): 47967-9, 2017 Oct 16.
Article in English | MEDLINE | ID: mdl-29035494

ABSTRACT

The Food and Drug Administration (FDA or we) is classifying the automated image assessment system for microbial colonies on solid culture media into class II (special controls). The special controls that apply to the device type are identified in this order and will be part of the codified language for the automated image assessment system for microbial colonies on solid culture media's classification. We are taking this action because we have determined that classifying the device into class II (special controls) will provide a reasonable assurance of safety and effectiveness of the device. We believe this action will also enhance patients' access to beneficial innovative devices, in part by reducing regulatory burdens.


Asunto(s)
Aprobación de Recursos/legislación & jurisprudencia , Seguridad de Equipos/clasificación , Procesamiento de Imagen Asistido por Computador/clasificación , Procesamiento de Imagen Asistido por Computador/instrumentación , Técnicas Microbiológicas/clasificación , Técnicas Microbiológicas/instrumentación , Medios de Cultivo , Humanos , Estados Unidos
10.
JAMA Ophthalmol ; 135(9): 982-986, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28796856

ABSTRACT

Importance: Telemedicine in retinopathy of prematurity (ROP) has the potential for delivering timely care to premature infants at risk for serious ROP. Objective: To describe the characteristics of eyes at risk for ROP to provide insights into what types of ROP are most easily detected early by image grading. Design, Setting, and Participants: Secondary analysis of eyes with referral-warranted (RW) ROP (stage 3 ROP, zone I ROP, plus disease) on diagnostic examination from the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) study was conducted from May 1, 2011, to October 31, 2013, in 1257 premature infants with birth weights less than 1251 g in 13 neonatal units in North America. Data analysis was performed between February 1, 2016, and June 5, 2017. Interventions: Serial imaging sessions with concurrent diagnostic examinations for ROP. Main Outcomes and Measures: Time of detecting RW-ROP on image evaluation compared with clinical examination. Results: In the e-ROP study, 246 infants (492 eyes) were included in the analysis; 138 (56.1%) were male. A total of 447 eyes had RW-ROP on diagnostic examination. Image grading in 123 infants (mean [SD] gestational age, 24.8 [1.4] weeks) detected RW-ROP earlier than diagnostic examination (early) in 191 (42.7%) eyes by about 15 days and detected RW-ROP in 123 infants (mean [SD] gestational age, 24.6 [1.5] weeks) at the same time (same) in 200 (44.7%) eyes. Most of the early eyes (153 [80.1%]) interpreted as being RW-ROP positive on imaging evaluation agreed with examination findings when the examination subsequently documented RW-ROP. 
At the sessions in which RW-ROP was first found by examination, stage 3 or more ROP was noted earlier on image evaluation in 151 of 191 early eyes (79.1%) and in 172 of 200 same eyes (86.0%) (P = .08); the presence of zone I ROP was detected in 57 of 191 (29.8%) early eyes vs 64 of 200 (32.0%) same eyes (P = .90); and plus disease was noted in 30 of 191 (15.7%) early eyes and 45 of 200 (22.5%) same eyes (P = .08). Conclusions and Relevance: In both early and same eyes, zone I and/or stage 3 ROP accounted for a significant proportion of RW-ROP; plus disease played a relatively minor role. In most early RW-ROP eyes, the findings were consistent with clinical examination and/or image grading at the next session. As ROP telemedicine comes to be used more widely, development of standard approaches and protocols is essential.


Subject(s)
Image Processing, Computer-Assisted/classification , Retinopathy of Prematurity/diagnosis , Telemedicine/methods , Acute Disease , Birth Weight , Female , Gestational Age , Humans , Infant , Infant, Newborn , Infant, Premature , Infant, Very Low Birth Weight , Male , Ophthalmoscopy/methods , Retinopathy of Prematurity/classification , Risk Factors
11.
Fed Regist ; 80(38): 10330-3, 2015 Feb 26.
Article in English | MEDLINE | ID: mdl-25898424

ABSTRACT

The Food and Drug Administration (FDA) is classifying the Assisted Reproduction Embryo Image Assessment System into class II (special controls). The special controls that will apply to the device are identified in this order, and will be part of the codified language for the Assisted Reproduction Embryo Image Assessment System classification. The Agency is classifying the device into class II (special controls) in order to provide a reasonable assurance of safety and effectiveness of the device.


Asunto(s)
Aprobación de Recursos/legislación & jurisprudencia , Procesamiento de Imagen Asistido por Computador/clasificación , Procesamiento de Imagen Asistido por Computador/instrumentación , Microscopía/clasificación , Microscopía/instrumentación , Técnicas Reproductivas Asistidas/clasificación , Técnicas Reproductivas Asistidas/instrumentación , Cigoto , Transferencia de Embrión , Seguridad de Equipos/clasificación , Humanos , Obstetricia/instrumentación , Obstetricia/legislación & jurisprudencia , Estados Unidos
12.
Proc Natl Acad Sci U S A ; 111(15): 5544-9, 2014 Apr 15.
Article in English | MEDLINE | ID: mdl-24706844

ABSTRACT

The 26S proteasome is a 2.5 MDa molecular machine that executes the degradation of substrates of the ubiquitin-proteasome pathway. The molecular architecture of the 26S proteasome was recently established by cryo-EM approaches. For a detailed understanding of the sequence of events from the initial binding of polyubiquitylated substrates to the translocation into the proteolytic core complex, it is necessary to move beyond static structures and characterize the conformational landscape of the 26S proteasome. To this end we have subjected a large cryo-EM dataset acquired in the presence of ATP and ATP-γS to a deep classification procedure, which deconvolutes coexisting conformational states. Highly variable regions, such as the density assigned to the largest subunit, Rpn1, are now well resolved and rendered interpretable. Our analysis reveals the existence of three major conformations: in addition to the previously described ATP-hydrolyzing (ATPh) and ATP-γS conformations, an intermediate state has been found. Its AAA-ATPase module adopts essentially the same topology that is observed in the ATPh conformation, whereas the lid is more similar to the ATP-γS bound state. Based on the conformational ensemble of the 26S proteasome in solution, we propose a mechanistic model for substrate recognition, commitment, deubiquitylation, and translocation into the core particle.


Asunto(s)
Microscopía por Crioelectrón/estadística & datos numéricos , Procesamiento de Imagen Asistido por Computador/clasificación , Procesamiento de Imagen Asistido por Computador/métodos , Modelos Moleculares , Conformación Molecular , Complejo de la Endopetidasa Proteasomal/química , Bases de Datos Factuales
13.
PLoS One ; 9(2): e87097, 2014.
Article in English | MEDLINE | ID: mdl-24498292

ABSTRACT

Streetscapes are basic urban elements which play a major role in the livability of a city. The visual complexity of streetscapes is known to influence how people behave in such built spaces. However, how and which characteristics of a visual scene influence our perception of complexity have yet to be fully understood. This study proposes a method to evaluate the complexity perceived in streetscapes based on the statistics of local contrast and spatial frequency. Here, 74 streetscape images from four cities, including daytime and nighttime scenes, were ranked for complexity by 40 participants. Image processing was then used to locally segment contrast and spatial frequency in the streetscapes. The statistics of these characteristics were extracted and later combined to form a single objective measure. The direct use of statistics revealed structural or morphological patterns in streetscapes related to the perception of complexity. Furthermore, in comparison to conventional measures of visual complexity, the proposed objective measure exhibits a higher correlation with the opinion of the participants. Also, the performance of this method is more robust regarding different time scenarios.
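Simple estimators of the two image statistics used above, local contrast and spatial frequency, can be sketched as follows; the patch size and summary statistics are illustrative choices, not the paper's exact estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((64, 64))  # stand-in for a grayscale streetscape

def local_contrast_stats(img, patch=8):
    """Standard deviation of intensity in non-overlapping patches --
    a simple proxy for local contrast (a sketch of the idea only)."""
    h, w = img.shape
    tiles = img[: h - h % patch, : w - w % patch]
    tiles = tiles.reshape(h // patch, patch, w // patch, patch)
    stds = tiles.std(axis=(1, 3)).ravel()
    return stds.mean(), stds.std()  # summary statistics over patches

def spatial_frequency(img):
    """Mean magnitude of the non-DC Fourier spectrum as a coarse
    spatial-frequency summary."""
    spec = np.abs(np.fft.fft2(img))
    spec[0, 0] = 0.0  # drop the DC (mean intensity) component
    return spec.mean()

print(local_contrast_stats(image), spatial_frequency(image))
```

Combining such patchwise statistics into one scalar (e.g. a weighted sum fit to human rankings) is the general shape of an objective complexity measure of the kind the study proposes.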


Asunto(s)
Ciudades , Sensibilidad de Contraste , Planificación Ambiental/normas , Reconocimiento Visual de Modelos , Argelia , Algoritmos , Planificación Ambiental/estadística & datos numéricos , Femenino , Humanos , Procesamiento de Imagen Asistido por Computador/clasificación , Procesamiento de Imagen Asistido por Computador/normas , Japón , Masculino , Fotograbar/clasificación , Fotograbar/normas , Factores de Tiempo
14.
J Vet Diagn Invest ; 25(6): 765-9, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24153030

ABSTRACT

A 2-stage algorithmic framework was developed to automatically classify digitized photomicrographs of tissues obtained from bovine liver, lung, spleen, and kidney into different histologic categories. The categories included normal tissue, acute necrosis, and inflammation (acute suppurative; chronic). In the current study, a total of 60 images per category (normal; acute necrosis; acute suppurative inflammation) were obtained from liver samples, 60 images per category (normal; acute suppurative inflammation) were obtained from spleen and lung samples, and 60 images per category (normal; chronic inflammation) were obtained from kidney samples. An automated support vector machine (SVM) classifier was trained to assign each test image to a specific category. Using 10 training images/category/organ, 40 test images/category/organ were examined. Employing confusion matrices to represent category-specific classification accuracy, the classifier-attained accuracies were found to be in the 74-90% range. The same set of test images was evaluated using a SVM classifier trained on 20 images/category/organ. The average classification accuracies were noted to be in the 84-95% range. The accuracy in correctly identifying normal tissue and specific tissue lesions was markedly improved by a small increase in the number of training images. The preliminary results from the study indicate the importance and potential use of automated image classification systems in the histologic identification of normal tissues and specific tissue lesions.
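The SVM-plus-confusion-matrix evaluation described above can be sketched with scikit-learn; the synthetic "texture features" below stand in for the study's real histologic image features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Hypothetical 5-D texture features for two histologic categories
# (e.g. normal vs acute necrosis): 20 training + 40 test images per class.
X_train = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(3, 1, (40, 5))])
y_test = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf").fit(X_train, y_train)
cm = confusion_matrix(y_test, clf.predict(X_test))

# Category-specific accuracy = diagonal / row sums of the confusion matrix
per_class_acc = cm.diagonal() / cm.sum(axis=1)
print(cm)
print(per_class_acc)
```

Reading per-category accuracy off the confusion matrix, rather than overall accuracy alone, is what lets the study report the 74-90% and 84-95% ranges per tissue category.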


Asunto(s)
Histocitoquímica/veterinaria , Procesamiento de Imagen Asistido por Computador/métodos , Riñón/patología , Hígado/patología , Pulmón/patología , Bazo/patología , Animales , Bovinos , Histocitoquímica/métodos , Procesamiento de Imagen Asistido por Computador/clasificación , Máquina de Vectores de Soporte
15.
Clin Exp Ophthalmol ; 41(9): 842-52, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23566165

ABSTRACT

BACKGROUND: To determine the reliability and agreement of a new optic disc grading software program for use in clinical, epidemiological research. DESIGN: Reliability and agreement study. SAMPLES: 328 monoscopic and 85 stereoscopic optic disc images. METHODS: Optic disc parameters were measured using a new optic disc grading software (Singapore Optic Disc Assessment) that is based on polynomial curve-fitting algorithm. Two graders independently graded 328 monoscopic images to determine intergrader reliability. One grader regraded the images after 1 month to determine intragrader reliability. In addition, 85 stereo optic disc images were separately selected, and vertical cup-to-disc ratios were measured using both the new software and standardized Wisconsin manual stereo-grading method by the same grader 1 month apart. Intraclass correlation coefficient (ICC) and Bland-Altman plot analyses were performed. MAIN OUTCOME MEASURES: Optic disc parameters. RESULTS: The intragrader and intergrader reliability for optic disc measurements using Singapore Optic Disc Assessment was high (ICC ranging from 0.82 to 0.94). The mean differences (95% limits of agreement) for intergrader vertical cup-to-disc ratio measurements were 0.00 (-0.12 to 0.13) and 0.03 (-0.15 to 0.09), respectively. The vertical cup-to-disc ratio agreement between the software and Wisconsin grading method was extremely close (ICC = 0.94). The mean difference (95% limits of agreement) of vertical cup-to-disc ratio measurement between the two methods was 0.03 (-0.09 to 0.16). CONCLUSIONS: Intragrader and intergrader reliability using Singapore Optic Disc Assessment was excellent. This software was highly comparable with standardized stereo-grading method. Singapore Optic Disc Assessment is useful for grading digital optic disc images in clinical, population-based studies.
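The Bland-Altman quantities reported above, the mean difference and 95% limits of agreement, follow directly from paired measurements; the cup-to-disc ratios below are illustrative values, not study data:

```python
import numpy as np

# Hypothetical vertical cup-to-disc ratios from two grading methods
a = np.array([0.45, 0.50, 0.62, 0.30, 0.55, 0.70, 0.40, 0.48])
b = np.array([0.47, 0.52, 0.60, 0.33, 0.58, 0.68, 0.42, 0.50])

diff = a - b
bias = diff.mean()                       # mean difference between methods
sd = diff.std(ddof=1)                    # sample SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
print(f"mean difference {bias:+.3f}, 95% limits of agreement "
      f"({loa[0]:.3f}, {loa[1]:.3f})")
```

A mean difference near zero with narrow limits, as the study reports for the two grading methods, indicates the methods can be used interchangeably for practical purposes.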


Asunto(s)
Glaucoma/clasificación , Procesamiento de Imagen Asistido por Computador/clasificación , Disco Óptico/patología , Enfermedades del Nervio Óptico/clasificación , Programas Informáticos , Adulto , Anciano , Anciano de 80 o más Años , Estudios Transversales , Diseño de Investigaciones Epidemiológicas , Femenino , Glaucoma/diagnóstico , Glaucoma/etnología , Humanos , Masculino , Persona de Mediana Edad , Variaciones Dependientes del Observador , Enfermedades del Nervio Óptico/diagnóstico , Enfermedades del Nervio Óptico/etnología , Fotograbar , Reproducibilidad de los Resultados , Singapur/epidemiología
16.
Invest Ophthalmol Vis Sci ; 54(3): 1789-96, 2013 Mar 11.
Article in English | MEDLINE | ID: mdl-23361512

ABSTRACT

PURPOSE: To evaluate an automated analysis of retinal fundus photographs to detect and classify severity of age-related macular degeneration, compared with grading by the Age-Related Eye Disease Study (AREDS) protocol. METHODS: Following approval by the Johns Hopkins University School of Medicine's Institutional Review Board, digitized images (downloaded at http://www.ncbi.nlm.nih.gov/gap/) of field 2 (macular) fundus photographs from AREDS obtained over a 12-year longitudinal study were classified automatically using a visual-words method and compared with severity assigned by expert graders. RESULTS: Sensitivities and specificities, respectively, of automated imaging, when compared with expert fundus grading of 468 patients and 2145 fundus images, are: 98.6% and 96.3% when classifying categories 1 and 2 versus categories 3 and 4; 96.1% and 96.1% when classifying categories 1 and 2 versus category 3; 98.6% and 95.7% when classifying category 1 versus category 3; and 96.0% and 94.7% when classifying category 1 versus categories 3 and 4. CONCLUSIONS: Automated analysis for classification of age-related macular degeneration from digitized fundus photographs has high sensitivity and specificity when compared with expert graders and may have a role in screening or monitoring.
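The sensitivities and specificities reported above reduce to counts from a 2x2 confusion matrix; a minimal sketch with illustrative labels (not the AREDS data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity from binary labels
    (1 = AMD category 3/4, 0 = category 1/2 in this illustration)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative grader-vs-classifier labels only
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0, 0, 0, 1],
                                     [1, 1, 0, 0, 0, 1, 0, 1])
print(sens, spec)
```

Each category pairing in the study (e.g. categories 1 and 2 versus 3 and 4) defines a different binarization of the four AREDS severity categories, and this computation is repeated per pairing.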


Asunto(s)
Técnicas de Diagnóstico Oftalmológico , Fondo de Ojo , Degeneración Macular/clasificación , Degeneración Macular/diagnóstico , Fotograbar/métodos , Algoritmos , Reacciones Falso Positivas , Estudios de Seguimiento , Humanos , Procesamiento de Imagen Asistido por Computador/clasificación , Valor Predictivo de las Pruebas , Sensibilidad y Especificidad , Índice de Severidad de la Enfermedad
17.
Hum Brain Mapp ; 34(11): 3101-15, 2013 Nov.
Article in English | MEDLINE | ID: mdl-22711230

ABSTRACT

What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations.
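The core of noise-based image classification, a response-weighted average of the noise fields, can be sketched with a simulated template-matching observer; the template, noise, and response model below are all illustrative, not the study's stimuli or BOLD data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, size = 2000, 16

# Hidden template the simulated observer is sensitive to; the method
# should recover it from the noise fields and responses alone.
template = np.zeros((size, size))
template[4:12, 6:10] = 1.0  # hypothetical diagnostic region

noise = rng.normal(size=(n_trials, size, size))
# Scalar response per trial (standing in for a behavioral judgment or
# BOLD amplitude): here, a simple template-matching observer.
responses = (noise * template).sum(axis=(1, 2))

# Classification image: response-weighted average of the noise fields
cimg = np.tensordot(responses, noise, axes=1) / n_trials

# The recovered image correlates with the hidden template
corr = np.corrcoef(cimg.ravel(), template.ravel())[0, 1]
print(round(corr, 2))
```

The same weighted-average estimator applies whether the per-trial response is a behavioral classification or a neural amplitude, which is the extension the paper explores with fMRI.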


Subject(s)
Brain Mapping/methods , Face/physiology , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Social Perception , Visual Perception/physiology , Algorithms , Discrimination, Psychological/physiology , Echo-Planar Imaging/methods , Female , Humans , Image Processing, Computer-Assisted/classification , Magnetic Resonance Imaging/classification , Male , Oxygen/blood , Photic Stimulation , Psychomotor Performance/physiology , Young Adult
18.
J Med Syst ; 36(2): 865-81, 2012 Apr.
Article in English | MEDLINE | ID: mdl-20703647

ABSTRACT

The objective of this paper is to provide an improved technique that can assist oncopathologists in the correct screening of oral precancerous conditions, especially oral submucous fibrosis (OSF), with significant accuracy on the basis of collagen fibres in the sub-epithelial connective tissue. The proposed scheme is composed of collagen fibre segmentation, textural feature extraction and selection, screening performance enhancement under Gaussian transformation, and finally classification. In this study, collagen fibres are segmented on the R, G, and B color channels using a back-propagation neural network from 60 normal and 59 OSF histological images, followed by histogram specification to reduce stain intensity variation. Textural features of the collagen area are then extracted using fractal approaches, viz., the differential box counting and Brownian motion curve methods. Feature selection is done using the Kullback-Leibler (KL) divergence criterion, and the screening performance is evaluated with various statistical tests to confirm Gaussian nature. Here, the screening performance is enhanced under Gaussian transformation of the non-Gaussian features using a hybrid distribution. Moreover, the routine screening is designed around two statistical classifiers, viz., Bayesian classification and support vector machines (SVM), to classify normal and OSF. It is observed that the SVM with a linear kernel function provides better classification accuracy (91.64%) than the Bayesian classifier. The addition of fractal features of collagen under Gaussian transformation improves the Bayesian classifier's performance from 80.69% to 90.75%. The results are studied and discussed.
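The fractal texture features mentioned above are estimates of fractal dimension computed from the segmented collagen region. The sketch below uses plain box counting on a binary mask rather than the paper's differential box counting or Brownian motion curve estimators, which operate on gray levels; the mask and box sizes are illustrative.

```python
import numpy as np

def box_count(mask, size):
    """Count boxes of a given size that contain at least one foreground pixel."""
    h, w = mask.shape
    count = 0
    for r in range(0, h, size):
        for c in range(0, w, size):
            if mask[r:r + size, c:c + size].any():
                count += 1
    return count

def fractal_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting fractal dimension as the slope of
    log(count) against log(1/size)."""
    counts = [box_count(mask, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should come out close to dimension 2
square = np.zeros((64, 64), dtype=bool)
square[8:56, 8:56] = True
dim = fractal_dimension(square)
```

A ragged, fragmented collagen segmentation yields a lower dimension than a smooth, filled one, which is what makes such features discriminative between normal and OSF tissue.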


Subject(s)
Early Detection of Cancer/methods , Image Processing, Computer-Assisted/methods , Mouth Neoplasms/diagnosis , Oral Submucous Fibrosis/diagnosis , Precancerous Conditions/diagnosis , Bayes Theorem , Collagen , Connective Tissue/pathology , Humans , Image Processing, Computer-Assisted/classification , Mouth Mucosa/pathology , Mouth Neoplasms/classification , Mouth Neoplasms/pathology , Neural Networks, Computer , Normal Distribution , Oral Submucous Fibrosis/classification , Oral Submucous Fibrosis/pathology , Precancerous Conditions/classification , Precancerous Conditions/pathology , Sensitivity and Specificity , Support Vector Machine
19.
Neuroimage ; 58(2): 526-36, 2011 Sep 15.
Article in English | MEDLINE | ID: mdl-21723948

ABSTRACT

Pattern classification of brain imaging data can enable the automatic detection of differences in the cognitive processes of specific groups of interest. Furthermore, it can also give neuroanatomical information about the brain regions most relevant to detecting these differences by means of feature selection procedures, which are also well suited to the high dimensionality of brain imaging data. This work proposes the application of recursive feature elimination, using a machine learning algorithm based on composite kernels, to the classification of healthy controls and patients with schizophrenia. This framework, which evaluates nonlinear relationships between voxels, analyzes whole-brain fMRI data from an auditory task experiment that is segmented into anatomical regions and recursively eliminates the uninformative ones based on their relevance estimates, thus yielding the set of most discriminative brain areas for group classification. The collected data were processed using two analysis methods: the general linear model (GLM) and independent component analysis (ICA). GLM spatial maps, as well as ICA temporal lobe and default mode component maps, were then input to the classifier. A mean classification accuracy of up to 95%, estimated with a leave-two-out cross-validation procedure, was achieved by multi-source data classification. In addition, the classification accuracy obtained using multi-source data surpasses that reached using single-source data, showing that this algorithm takes advantage of the complementary nature of GLM and ICA.
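The region-wise recursive elimination loop can be sketched as follows, with plain gradient-descent logistic regression standing in for the paper's composite-kernel machine. The region names, voxel groupings, and synthetic data are illustrative assumptions; only the train/score/drop-the-weakest-region loop reflects the described procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_linear(X, y, iters=200, lr=0.1):
    """Plain logistic regression by gradient descent (a stand-in for the
    composite-kernel learning machine used in the paper)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def recursive_region_elimination(X, y, regions, keep=1):
    """Repeatedly train, score each anatomical region by the norm of its
    weights, and drop the least informative region."""
    active = list(regions)
    while len(active) > keep:
        cols = np.concatenate([regions[r] for r in active])
        w = train_linear(X[:, cols], y)
        # map weights back to their regions and score each region
        scores, start = {}, 0
        for r in active:
            n = len(regions[r])
            scores[r] = np.linalg.norm(w[start:start + n])
            start += n
        active.remove(min(active, key=lambda r: scores[r]))
    return active

# Toy data: only the 'temporal' region carries class signal
regions = {"temporal": [0, 1], "occipital": [2, 3], "frontal": [4, 5]}
y = rng.integers(0, 2, size=200).astype(float)
X = rng.normal(size=(200, 6))
X[:, 0] += 2 * y   # one informative voxel in the temporal region
kept = recursive_region_elimination(X, y, regions, keep=1)
```

The surviving regions are the discriminative brain areas the procedure reports.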


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Schizophrenia/pathology , Adolescent , Adult , Aged , Algorithms , Artificial Intelligence , Auditory Perception/physiology , Brain Mapping , Data Interpretation, Statistical , Female , Humans , Image Processing, Computer-Assisted/classification , Linear Models , Male , Middle Aged , Principal Component Analysis , Reproducibility of Results , Young Adult
20.
Neuroimage ; 58(2): 560-71, 2011 Sep 15.
Article in English | MEDLINE | ID: mdl-21729756

ABSTRACT

This paper describes a general kernel regression approach to predict experimental conditions from activity patterns acquired with functional magnetic resonance imaging (fMRI). The standard approach is to use classifiers that predict conditions from activity patterns. Our approach instead trains a different regression machine for each experimental condition, so that a predicted temporal profile is computed for each condition. A decision function then classifies the responses from the testing volumes into the corresponding category by comparing the predicted temporal profile elicited by each event against a canonical hemodynamic response function. This approach utilizes the temporal information in the fMRI signal and maintains more training samples, improving classification accuracy over an existing strategy. This paper also introduces efficient techniques of temporal compaction, which operate directly on kernel matrices for kernel classification algorithms such as the support vector machine (SVM). Temporal compaction can convert the kernel computed from each fMRI volume directly into the kernel computed from beta-maps, averages of volumes, or a spatial-temporal kernel. The proposed method was applied to three different datasets. The first is a block-design experiment with three conditions of image stimuli. The method outperformed the SVM classifiers of three different types of temporal compaction in single-subject leave-one-block-out cross-validation. Our method achieved 100% classification accuracy for six of the subjects and an average of 94% accuracy across all 16 subjects, exceeding the best SVM classification result of 83% accuracy (p=0.008). The second dataset is also a block-design experiment, with two conditions of visual attention (left or right). Our method yielded 96% accuracy and SVM yielded 92% (p=0.005). The third dataset is from a fast event-related experiment with two categories of visual objects. Our method achieved 77% accuracy, compared with 72% using SVM (p=0.0006).
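The per-condition regression-plus-HRF-matching idea can be sketched as below, with plain ridge regression standing in for the paper's kernel regression machines and a crude hand-rolled curve standing in for the canonical hemodynamic response function; all shapes, targets, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def canonical_hrf(length=12):
    """Rough gamma-shaped response with a small undershoot (a hand-rolled
    stand-in for the canonical HRF, not a validated model)."""
    t = np.arange(length, dtype=float)
    return t ** 5 * np.exp(-t) / 120.0 - 0.1 * t ** 10 * np.exp(-t) / 3628800.0

def ridge_fit(X, Y, lam=1.0):
    """One ridge regression per condition: activity patterns -> temporal profile."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def classify(x, machines, hrf):
    """Predict a temporal profile with each condition's machine and pick the
    condition whose prediction best matches the canonical HRF."""
    scores = [np.corrcoef(x @ W, hrf)[0, 1] for W in machines]
    return int(np.argmax(scores))

# Toy demo: two conditions with distinct (noisy) activity patterns
hrf = canonical_hrf()
templates = rng.normal(size=(2, 20))
X, labels = [], []
for _ in range(100):
    c = int(rng.integers(0, 2))
    X.append(templates[c] + 0.3 * rng.normal(size=20))
    labels.append(c)
X, labels = np.array(X), np.array(labels)

# Each machine targets the HRF for its own condition's volumes, zero otherwise
machines = [ridge_fit(X, np.where((labels == c)[:, None], hrf, 0.0)) for c in (0, 1)]
pred = classify(templates[0] + 0.3 * rng.normal(size=20), machines, hrf)
```

Only the machine trained on the matching condition produces an HRF-shaped profile, so the correlation-based decision function recovers the condition label.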


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Adult , Algorithms , Artificial Intelligence , Databases, Factual , Echo-Planar Imaging , Female , Humans , Image Processing, Computer-Assisted/classification , Linear Models , Magnetic Resonance Imaging/classification , Male , Photic Stimulation , Regression Analysis , Reproducibility of Results , Support Vector Machine , Young Adult