Results 1 - 20 of 62
1.
PLoS One ; 16(12): e0261307, 2021.
Article in English | MEDLINE | ID: mdl-34968393

ABSTRACT

Medical images commonly exhibit multiple abnormalities. Predicting them requires multi-class classifiers whose training and reliable performance can be affected by a combination of factors, such as dataset size, data source, distribution, and the loss function used to train deep neural networks. Currently, cross-entropy remains the de facto loss function for training deep learning classifiers. This loss function, however, asserts equal learning from all classes, leading to a bias toward the majority class. Although the choice of loss function impacts model performance, to the best of our knowledge, no existing literature comprehensively analyzes and selects an appropriate loss function for the classification task under study. In this work, we benchmark various state-of-the-art loss functions, critically analyze model performance, and propose improved loss functions for a multi-class classification task. We select a pediatric chest X-ray (CXR) dataset that includes images with no abnormality (normal) and images exhibiting manifestations consistent with bacterial and viral pneumonia. We construct prediction-level and model-level ensembles to improve classification performance. Our results show that, compared to the individual models and the state-of-the-art literature, weighted averaging of the predictions for the top-3 and top-5 model-level ensembles delivered significantly superior classification performance (p < 0.05) in terms of the Matthews correlation coefficient (MCC) metric (0.9068; 95% confidence interval, 0.8839-0.9297). Finally, we performed localization studies to interpret model behavior and confirm that the individual models and ensembles learned task-specific features and highlighted disease-specific regions of interest. The code is available at https://github.com/sivaramakrishnan-rajaraman/multiloss_ensemble_models.
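The weighted prediction averaging and MCC evaluation described above can be sketched as follows. The model weights, class counts, and probability values are hypothetical, and the MCC is computed with Gorodkin's multi-class generalization (which reduces to the familiar binary formula for two classes):

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """Weighted average of per-model class-probability matrices (n_samples x n_classes)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize the model weights
    stacked = np.stack(prob_list)              # (n_models, n_samples, n_classes)
    return np.tensordot(w, stacked, axes=1)    # (n_samples, n_classes)

def multiclass_mcc(y_true, y_pred, n_classes):
    """Multi-class Matthews correlation coefficient from a confusion matrix."""
    C = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    t_k = C.sum(axis=1)                        # true counts per class
    p_k = C.sum(axis=0)                        # predicted counts per class
    c, s = np.trace(C), C.sum()
    denom = np.sqrt(s**2 - (p_k**2).sum()) * np.sqrt(s**2 - (t_k**2).sum())
    return (c * s - (t_k * p_k).sum()) / denom if denom else 0.0
```

The ensemble's final label per sample is then `np.argmax` over the averaged probabilities.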


Subjects
Algorithms; Diagnostic Imaging; Image Processing, Computer-Assisted/classification; Area Under Curve; Entropy; Humans; Lung/diagnostic imaging; ROC Curve; Thorax/diagnostic imaging; X-Rays
2.
Oxid Med Cell Longev ; 2021: 6280690, 2021.
Article in English | MEDLINE | ID: mdl-33688390

ABSTRACT

Due to the complexity of medical images, traditional medical image classification methods cannot meet practical application needs. In recent years, the rapid development of deep learning theory has provided a technical approach to medical image classification. However, applying deep learning to medical image classification raises two problems. First, it is difficult to construct a deep learning model with excellent performance tailored to the characteristics of medical images. Second, current deep learning network structures and training strategies are poorly adapted to medical images. Therefore, this paper first introduces a visual attention mechanism into the deep learning model so that information relevant to the medical imaging problem can be extracted more effectively and reasoning can be performed at a finer granularity, which also increases the interpretability of the model. Additionally, to match the deep learning network structure and training strategy to medical images, this paper constructs a novel multiscale convolutional neural network model that automatically extracts high-level discriminative appearance features from the original image, while the loss function uses a Mahalanobis distance optimization model to obtain a better training strategy and improve the robustness of the network. Based on these ideas, this paper proposes a medical image classification algorithm combining a visual attention mechanism with a multiscale convolutional neural network, and applies it to classify lung nodule and breast cancer images. The experimental results show that the classification accuracy is not only higher than that of traditional machine learning methods but also improved over other deep learning methods, and the method exhibits good stability and robustness.
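The Mahalanobis distance underlying the loss mentioned above measures how far a feature vector lies from a class distribution, scaled by that distribution's covariance; a minimal sketch (the vectors and covariance here are illustrative, not the paper's):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of sample x from a class distribution (mean, covariance)."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

With an identity covariance this reduces to the Euclidean distance; a distance-based loss of this kind pulls features toward their class center while accounting for feature correlations.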


Subjects
Algorithms; Attention/physiology; Image Processing, Computer-Assisted/classification; Neural Networks, Computer; Breast Neoplasms/classification; Breast Neoplasms/diagnostic imaging; Female; Humans; Lung Neoplasms/diagnostic imaging
3.
Am J Ophthalmol ; 226: 100-107, 2021 06.
Article in English | MEDLINE | ID: mdl-33577791

ABSTRACT

PURPOSE: To compare the performance of a novel convolutional neural network (CNN) classifier and human graders in detecting angle closure in EyeCam (Clarity Medical Systems, Pleasanton, California, USA) goniophotographs. DESIGN: Retrospective cross-sectional study. METHODS: Subjects from the Chinese American Eye Study underwent EyeCam goniophotography in 4 angle quadrants. A CNN classifier based on the ResNet-50 architecture was trained to detect angle closure, defined as inability to visualize the pigmented trabecular meshwork, using reference labels by a single experienced glaucoma specialist. The performance of the CNN classifier was assessed using an independent test dataset and reference labels by the single glaucoma specialist or a panel of 3 glaucoma specialists. This performance was compared to that of 9 human graders with a range of clinical experience. Outcome measures included area under the receiver operating characteristic curve (AUC) metrics and Cohen kappa coefficients in the binary classification of open or closed angle. RESULTS: The CNN classifier was developed using 29,706 open and 2,929 closed angle images. The independent test dataset was composed of 600 open and 400 closed angle images. The CNN classifier achieved excellent performance based on single-grader (AUC = 0.969) and consensus (AUC = 0.952) labels. The agreement between the CNN classifier and consensus labels (κ = 0.746) surpassed that of all non-reference human graders (κ = 0.578-0.702). Human grader agreement with consensus labels improved with clinical experience (P = 0.03). CONCLUSION: A CNN classifier can effectively detect angle closure in goniophotographs with performance comparable to that of an experienced glaucoma specialist. This provides an automated method to support remote detection of patients at risk for primary angle closure glaucoma.
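Cohen's kappa, used above to compare the CNN classifier and human graders against consensus labels, corrects raw agreement for chance agreement; a self-contained sketch with made-up gradings:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' nominal labels (e.g. open/closed angle)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa is 1.0 for perfect agreement and 0.0 when the raters agree no more often than chance would predict.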


Subjects
Diagnosis, Computer-Assisted/classification; Glaucoma, Angle-Closure/diagnosis; Image Processing, Computer-Assisted/classification; Neural Networks, Computer; Photography/classification; Aged; Aged, 80 and over; Anterior Eye Segment/pathology; Area Under Curve; Asian People; China/ethnology; Cross-Sectional Studies; Intelligent Systems; Female; Glaucoma, Angle-Closure/classification; Gonioscopy; Humans; Male; Middle Aged; Ophthalmologists; Reproducibility of Results; Retrospective Studies; Specialization
4.
IEEE Trans Image Process ; 30: 191-206, 2021.
Article in English | MEDLINE | ID: mdl-33136542

ABSTRACT

Kinship recognition is a prominent research area that aims to determine whether a kinship relation exists between two individuals. In general, a child resembles his or her parents more closely than others do, owing to genetically inherited facial features shared with the parents. Most existing research in kinship recognition focuses on full facial images to find these kinship similarities. This paper first presents kinship recognition for similar full facial images using the proposed Global-based dual-tree complex wavelet transform (G-DTCWT). We then present novel patch-based kinship recognition methods based on the dual-tree complex wavelet transform (DT-CWT): Local Patch-based DT-CWT (LP-DTCWT) and Selective Patch-based DT-CWT (SP-DTCWT). LP-DTCWT extracts coefficients from smaller facial patches for kinship recognition. SP-DTCWT extends LP-DTCWT by extracting coefficients only for representative patches whose similarity scores lie above a normalized cumulative threshold, computed by a novel patch selection process. These representative patches contribute more similarity in parent/child image pairs and improve kinship recognition accuracy. The proposed methods are extensively evaluated on publicly available kinship datasets. Experimental results demonstrate their efficacy on all datasets: SP-DTCWT achieves accuracy competitive with state-of-the-art methods, with a mean kinship accuracy of 95.85% on the baseline KinFaceW-I and 95.30% on the KinFaceW-II datasets. Further, SP-DTCWT achieves the state-of-the-art accuracy of 80.49% on the largest kinship dataset, Families In the Wild (FIW).
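The SP-DTCWT patch-selection step, keeping only the most similar patches until their normalized cumulative similarity reaches a threshold, can be sketched as follows (the scores and threshold value are illustrative; the paper's exact similarity scoring is not reproduced here):

```python
import numpy as np

def select_patches(similarity_scores, threshold=0.8):
    """Keep the highest-scoring patches until their normalized cumulative
    similarity reaches the threshold; returns the kept patch indices."""
    s = np.asarray(similarity_scores, dtype=float)
    order = np.argsort(s)[::-1]                   # patches sorted by similarity, descending
    cum = np.cumsum(s[order]) / s.sum()           # normalized cumulative similarity
    k = int(np.searchsorted(cum, threshold) + 1)  # smallest k reaching the threshold
    return sorted(order[:k].tolist())             # kept indices, in original patch order
```

Raising the threshold keeps more patches; the selected subset is what the DT-CWT coefficients are then extracted from.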


Subjects
Family; Image Processing, Computer-Assisted; Pattern Recognition, Automated/methods; Wavelet Analysis; Databases, Factual; Face/diagnostic imaging; Humans; Image Processing, Computer-Assisted/classification; Image Processing, Computer-Assisted/methods
5.
Sci Rep ; 10(1): 19560, 2020 11 11.
Article in English | MEDLINE | ID: mdl-33177565

ABSTRACT

The accurate recognition of multiple sclerosis (MS) lesions is challenged by the high sensitivity and imperfect specificity of MRI. To examine whether longitudinal changes in volume, surface area, 3-dimensional (3D) displacement (i.e., change in lesion position), and 3D deformation (i.e., change in lesion shape) could inform on the origin of supratentorial brain lesions, we prospectively enrolled 23 patients with MS and 11 patients with small vessel disease (SVD) and performed standardized 3-T 3D brain MRI studies. Bayesian linear mixed effects regression models were constructed to evaluate associations between changes in lesion morphology and disease state. A total of 248 MS and 157 SVD lesions were studied. Individual MS lesions demonstrated significant decreases in volume < 3.75 mm³ (p = 0.04), greater shifts in 3D displacement by 23.4% with increasing duration between MRI time points (p = 0.007), and greater transitions to a more non-spherical shape (p < 0.0001). If 62.2% of lesions within a given MRI study had a calculated theoretical radius > 2.49 based on deviation from a perfect 3D sphere, a 92.7% in-sample and 91.2% out-of-sample accuracy was identified for the diagnosis of MS. Longitudinal 3D shape evolution and displacement characteristics may improve lesion classification, adding to MRI techniques aimed at improving lesion specificity.
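The abstract does not define its "theoretical radius", so the sketch below shows only one common way to quantify deviation from a perfect 3D sphere: compare the radius implied by a lesion's volume with the radius implied by its surface area (equal only for a true sphere). The function names and the index itself are hypothetical, not the paper's measure:

```python
import math

def radius_from_volume(v):
    """Radius of the perfect sphere having volume v."""
    return (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)

def radius_from_area(a):
    """Radius of the perfect sphere having surface area a."""
    return math.sqrt(a / (4.0 * math.pi))

def shape_deviation(volume, area):
    """Hypothetical non-sphericity index: >= 1.0, with 1.0 only for a perfect sphere."""
    return radius_from_area(area) / radius_from_volume(volume)
```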


Subjects
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Multiple Sclerosis/diagnostic imaging; Adult; Cerebral Small Vessel Diseases/diagnostic imaging; Female; Humans; Image Processing, Computer-Assisted/classification; Imaging, Three-Dimensional/classification; Imaging, Three-Dimensional/methods; Male; Middle Aged; Migraine Disorders/diagnostic imaging; Multiple Sclerosis/drug therapy
6.
Microscopy (Oxf) ; 69(2): 61-68, 2020 Apr 08.
Article in English | MEDLINE | ID: mdl-32115658

ABSTRACT

In this review, we focus on applications of machine learning methods for analyzing image data acquired with imaging flow cytometry technologies. We propose that the analysis approaches can be categorized into two groups based on the type of data analyzed by a trained model: raw imaging signals, or features explicitly extracted from images. We hope that this categorization is helpful for understanding the uniqueness, differences, and opportunities that arise when machine learning-based analysis is implemented in recently developed 'imaging' cell sorters.


Subjects
Flow Cytometry/methods; Image Processing, Computer-Assisted/methods; Machine Learning; Image Processing, Computer-Assisted/classification; Optical Imaging
7.
PLoS One ; 13(9): e0203339, 2018.
Article in English | MEDLINE | ID: mdl-30208096

ABSTRACT

Recent technological developments have increased the complexity of image content, making the demand for image classification ever more pressing. Digital images play a vital role in many applied domains such as remote sensing, scene analysis, medical care, the textile industry, and crime investigation. Feature extraction and image representation are considered an important step in scene analysis, as they affect image classification performance. Automatic classification of images is an open research problem for image analysis and pattern recognition applications. The Bag-of-Features (BoF) model is commonly used for image classification, object recognition, and other computer vision problems. In the BoF model, the final feature vector representation of an image contains no information about the co-occurrence of features in 2D image space. This is considered a limitation, as the spatial arrangement among visual words in image space carries information that is beneficial for image representation and for learning the classification model. To deal with this, researchers have proposed different image representations. Among these, the division of image space into different geometric sub-regions, over which histograms for the BoF model are extracted, is considered a notable contribution for capturing spatial clues. Keeping this in view, we explore a Hybrid Geometric Spatial Image Representation (HGSIR) based on the combination of histograms computed over rectangular, triangular, and circular regions of the image. Five standard image datasets are used to evaluate the performance of the proposed approach. The quantitative analysis demonstrates that it outperforms state-of-the-art methods in terms of classification accuracy.
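The geometric sub-region idea can be sketched for the simplest (rectangular-grid) case: compute a visual-word histogram per cell and concatenate, so the final vector retains coarse spatial information that a single global BoF histogram discards. The grid and vocabulary sizes below are illustrative:

```python
import numpy as np

def spatial_bof_histogram(visual_words, grid=(2, 2), vocab_size=8):
    """Concatenate visual-word histograms computed over rectangular sub-regions.

    visual_words: 2-D array of per-pixel (or per-patch) visual-word indices."""
    vw = np.asarray(visual_words)
    rows = np.array_split(np.arange(vw.shape[0]), grid[0])
    cols = np.array_split(np.arange(vw.shape[1]), grid[1])
    hists = []
    for r in rows:
        for c in cols:
            region = vw[np.ix_(r, c)]
            hists.append(np.bincount(region.ravel(), minlength=vocab_size))
    return np.concatenate(hists)   # length = grid[0] * grid[1] * vocab_size
```

Triangular and circular regions replace the rectangular mask with the corresponding region masks; the concatenation step is the same.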


Subjects
Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Animals; Artificial Intelligence; Databases, Factual/classification; Databases, Factual/statistics & numerical data; Humans; Image Processing, Computer-Assisted/classification; Image Processing, Computer-Assisted/statistics & numerical data; Multimedia/statistics & numerical data; Pattern Recognition, Automated/classification; Pattern Recognition, Automated/statistics & numerical data; Photography/statistics & numerical data
8.
Fed Regist ; 82(242): 60306-8, 2017 Dec 20.
Article in English | MEDLINE | ID: mdl-29260838

ABSTRACT

The Food and Drug Administration (FDA or we) is classifying the image processing device for estimation of external blood loss into class II (special controls). The special controls that apply to the device type are identified in this order and will be part of the codified language for the image processing device for estimation of external blood loss' classification. We are taking this action because we have determined that classifying the device into class II (special controls) will provide a reasonable assurance of safety and effectiveness of the device. We believe this action will also enhance patients' access to beneficial innovative devices, in part by reducing regulatory burdens.


Subjects
Equipment Safety/classification; Image Processing, Computer-Assisted/classification; Image Processing, Computer-Assisted/instrumentation; Photometry/classification; Photometry/instrumentation; Blood Loss, Surgical; Hemoglobins; Humans; Surgical Sponges
9.
Fed Regist ; 82(198): 47967-9, 2017 Oct 16.
Article in English | MEDLINE | ID: mdl-29035494

ABSTRACT

The Food and Drug Administration (FDA or we) is classifying the automated image assessment system for microbial colonies on solid culture media into class II (special controls). The special controls that apply to the device type are identified in this order and will be part of the codified language for the automated image assessment system for microbial colonies on solid culture media's classification. We are taking this action because we have determined that classifying the device into class II (special controls) will provide a reasonable assurance of safety and effectiveness of the device. We believe this action will also enhance patients' access to beneficial innovative devices, in part by reducing regulatory burdens.


Subjects
Device Approval/legislation & jurisprudence; Equipment Safety/classification; Image Processing, Computer-Assisted/classification; Image Processing, Computer-Assisted/instrumentation; Microbiological Techniques/classification; Microbiological Techniques/instrumentation; Culture Media; Humans; United States
10.
JAMA Ophthalmol ; 135(9): 982-986, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28796856

ABSTRACT

Importance: Telemedicine in retinopathy of prematurity (ROP) has the potential to deliver timely care to premature infants at risk for serious ROP. Objective: To describe the characteristics of eyes at risk for ROP to provide insights into which types of ROP are most easily detected early by image grading. Design, Setting, and Participants: Secondary analysis of eyes with referral-warranted (RW) ROP (stage 3 ROP, zone I ROP, plus disease) on diagnostic examination from the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) study, conducted from May 1, 2011, to October 31, 2013, in 1257 premature infants with birth weights less than 1251 g in 13 neonatal units in North America. Data analysis was performed between February 1, 2016, and June 5, 2017. Interventions: Serial imaging sessions with concurrent diagnostic examinations for ROP. Main Outcomes and Measures: Time of detecting RW-ROP on image evaluation compared with clinical examination. Results: In the e-ROP study, 246 infants (492 eyes) were included in the analysis; 138 (56.1%) were male. A total of 447 eyes had RW-ROP on diagnostic examination. Image grading detected RW-ROP earlier than diagnostic examination ("early") in 191 (42.7%) eyes of 123 infants (mean [SD] gestational age, 24.8 [1.4] weeks), by about 15 days, and detected RW-ROP at the same time ("same") in 200 (44.7%) eyes of 123 infants (mean [SD] gestational age, 24.6 [1.5] weeks). Most of the early eyes (153 [80.1%]) interpreted as RW-ROP positive on imaging evaluation agreed with examination findings when the examination subsequently documented RW-ROP. At the sessions in which RW-ROP was first found by examination, stage 3 or higher ROP was noted earlier on image evaluation in 151 of 191 early eyes (79.1%) and in 172 of 200 same eyes (86.0%) (P = .08); zone I ROP was detected in 57 of 191 (29.8%) early eyes vs 64 of 200 (32.0%) same eyes (P = .90); and plus disease was noted in 30 of 191 (15.7%) early eyes and 45 of 200 (22.5%) same eyes (P = .08). Conclusions and Relevance: In both early and same eyes, zone I and/or stage 3 ROP determined a significant proportion of RW-ROP; plus disease played a relatively minor role. In most early RW-ROP eyes, the findings were consistent with clinical examination and/or image grading at the next session. As ROP telemedicine is used more widely, development of standard approaches and protocols is essential.


Subjects
Image Processing, Computer-Assisted/classification; Retinopathy of Prematurity/diagnosis; Telemedicine/methods; Acute Disease; Birth Weight; Female; Gestational Age; Humans; Infant; Infant, Newborn; Infant, Premature; Infant, Very Low Birth Weight; Male; Ophthalmoscopy/methods; Retinopathy of Prematurity/classification; Risk Factors
11.
Fed Regist ; 80(38): 10330-3, 2015 Feb 26.
Article in English | MEDLINE | ID: mdl-25898424

ABSTRACT

The Food and Drug Administration (FDA) is classifying the Assisted Reproduction Embryo Image Assessment System into class II (special controls). The special controls that will apply to the device are identified in this order, and will be part of the codified language for the Assisted Reproduction Embryo Image Assessment System classification. The Agency is classifying the device into class II (special controls) in order to provide a reasonable assurance of safety and effectiveness of the device.


Subjects
Device Approval/legislation & jurisprudence; Image Processing, Computer-Assisted/classification; Image Processing, Computer-Assisted/instrumentation; Microscopy/classification; Microscopy/instrumentation; Reproductive Techniques, Assisted/classification; Reproductive Techniques, Assisted/instrumentation; Zygote; Embryo Transfer; Equipment Safety/classification; Humans; Obstetrics/instrumentation; Obstetrics/legislation & jurisprudence; United States
12.
Proc Natl Acad Sci U S A ; 111(15): 5544-9, 2014 Apr 15.
Article in English | MEDLINE | ID: mdl-24706844

ABSTRACT

The 26S proteasome is a 2.5 MDa molecular machine that executes the degradation of substrates of the ubiquitin-proteasome pathway. The molecular architecture of the 26S proteasome was recently established by cryo-EM approaches. For a detailed understanding of the sequence of events from the initial binding of polyubiquitylated substrates to the translocation into the proteolytic core complex, it is necessary to move beyond static structures and characterize the conformational landscape of the 26S proteasome. To this end we have subjected a large cryo-EM dataset acquired in the presence of ATP and ATP-γS to a deep classification procedure, which deconvolutes coexisting conformational states. Highly variable regions, such as the density assigned to the largest subunit, Rpn1, are now well resolved and rendered interpretable. Our analysis reveals the existence of three major conformations: in addition to the previously described ATP-hydrolyzing (ATPh) and ATP-γS conformations, an intermediate state has been found. Its AAA-ATPase module adopts essentially the same topology that is observed in the ATPh conformation, whereas the lid is more similar to the ATP-γS bound state. Based on the conformational ensemble of the 26S proteasome in solution, we propose a mechanistic model for substrate recognition, commitment, deubiquitylation, and translocation into the core particle.


Subjects
Cryoelectron Microscopy/statistics & numerical data; Image Processing, Computer-Assisted/classification; Image Processing, Computer-Assisted/methods; Models, Molecular; Molecular Conformation; Proteasome Endopeptidase Complex/chemistry; Databases, Factual
13.
PLoS One ; 9(2): e87097, 2014.
Article in English | MEDLINE | ID: mdl-24498292

ABSTRACT

Streetscapes are basic urban elements which play a major role in the livability of a city. The visual complexity of streetscapes is known to influence how people behave in such built spaces. However, how and which characteristics of a visual scene influence our perception of complexity have yet to be fully understood. This study proposes a method to evaluate the complexity perceived in streetscapes based on the statistics of local contrast and spatial frequency. Here, 74 streetscape images from four cities, including daytime and nighttime scenes, were ranked for complexity by 40 participants. Image processing was then used to locally segment contrast and spatial frequency in the streetscapes. The statistics of these characteristics were extracted and later combined to form a single objective measure. The direct use of statistics revealed structural or morphological patterns in streetscapes related to the perception of complexity. Furthermore, in comparison to conventional measures of visual complexity, the proposed objective measure exhibits a higher correlation with the opinion of the participants. Also, the performance of this method is more robust regarding different time scenarios.
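The paper's exact segmentation of contrast and spatial frequency is not spelled out in the abstract; one simple way to gather the local-contrast statistics it relies on is to tile the image and take the RMS contrast (intensity standard deviation) of each tile. The patch size below is a hypothetical choice, and the spatial-frequency channel is omitted:

```python
import numpy as np

def local_contrast_stats(gray, patch=8):
    """Mean and spread of patchwise RMS contrast over a grayscale image."""
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    contrasts = [
        g[i:i + patch, j:j + patch].std()      # RMS contrast of one tile
        for i in range(0, h - patch + 1, patch)
        for j in range(0, w - patch + 1, patch)
    ]
    c = np.asarray(contrasts)
    return c.mean(), c.std()
```

Statistics such as these can then be combined into a single objective complexity measure and correlated with participants' rankings.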


Subjects
Cities; Contrast Sensitivity; Environment Design/standards; Pattern Recognition, Visual; Algeria; Algorithms; Environment Design/statistics & numerical data; Female; Humans; Image Processing, Computer-Assisted/classification; Image Processing, Computer-Assisted/standards; Japan; Male; Photography/classification; Photography/standards; Time Factors
14.
J Vet Diagn Invest ; 25(6): 765-9, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24153030

ABSTRACT

A 2-stage algorithmic framework was developed to automatically classify digitized photomicrographs of tissues obtained from bovine liver, lung, spleen, and kidney into different histologic categories. The categories included normal tissue, acute necrosis, and inflammation (acute suppurative; chronic). In the current study, a total of 60 images per category (normal; acute necrosis; acute suppurative inflammation) were obtained from liver samples, 60 images per category (normal; acute suppurative inflammation) were obtained from spleen and lung samples, and 60 images per category (normal; chronic inflammation) were obtained from kidney samples. An automated support vector machine (SVM) classifier was trained to assign each test image to a specific category. Using 10 training images/category/organ, 40 test images/category/organ were examined. Using confusion matrices to represent category-specific classification accuracy, the classifier attained accuracies in the 74-90% range. The same set of test images was evaluated using an SVM classifier trained on 20 images/category/organ; the average classification accuracies were in the 84-95% range. The accuracy in correctly identifying normal tissue and specific tissue lesions was thus markedly improved by a small increase in the number of training images. These preliminary results indicate the importance and potential use of automated image classification systems in the histologic identification of normal tissues and specific tissue lesions.
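The confusion-matrix bookkeeping described above (category-specific accuracy read off the matrix diagonal) can be sketched with hypothetical labels:

```python
import numpy as np

def category_accuracies(y_true, y_pred, labels):
    """Confusion matrix and per-category accuracy (diagonal / row totals)."""
    idx = {lab: i for i, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1               # row = true category, column = predicted
    acc = cm.diagonal() / cm.sum(axis=1)      # fraction of each category classified correctly
    return cm, acc
```

In the study's setting this would be computed once per organ, with one row/column per histologic category.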


Subjects
Histocytochemistry/veterinary; Image Processing, Computer-Assisted/methods; Kidney/pathology; Liver/pathology; Lung/pathology; Spleen/pathology; Animals; Cattle; Histocytochemistry/methods; Image Processing, Computer-Assisted/classification; Support Vector Machine
15.
Clin Exp Ophthalmol ; 41(9): 842-52, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23566165

ABSTRACT

BACKGROUND: To determine the reliability and agreement of a new optic disc grading software program for use in clinical, epidemiological research. DESIGN: Reliability and agreement study. SAMPLES: 328 monoscopic and 85 stereoscopic optic disc images. METHODS: Optic disc parameters were measured using a new optic disc grading software (Singapore Optic Disc Assessment) that is based on polynomial curve-fitting algorithm. Two graders independently graded 328 monoscopic images to determine intergrader reliability. One grader regraded the images after 1 month to determine intragrader reliability. In addition, 85 stereo optic disc images were separately selected, and vertical cup-to-disc ratios were measured using both the new software and standardized Wisconsin manual stereo-grading method by the same grader 1 month apart. Intraclass correlation coefficient (ICC) and Bland-Altman plot analyses were performed. MAIN OUTCOME MEASURES: Optic disc parameters. RESULTS: The intragrader and intergrader reliability for optic disc measurements using Singapore Optic Disc Assessment was high (ICC ranging from 0.82 to 0.94). The mean differences (95% limits of agreement) for intergrader vertical cup-to-disc ratio measurements were 0.00 (-0.12 to 0.13) and 0.03 (-0.15 to 0.09), respectively. The vertical cup-to-disc ratio agreement between the software and Wisconsin grading method was extremely close (ICC = 0.94). The mean difference (95% limits of agreement) of vertical cup-to-disc ratio measurement between the two methods was 0.03 (-0.09 to 0.16). CONCLUSIONS: Intragrader and intergrader reliability using Singapore Optic Disc Assessment was excellent. This software was highly comparable with standardized stereo-grading method. Singapore Optic Disc Assessment is useful for grading digital optic disc images in clinical, population-based studies.
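The Bland-Altman agreement analysis used above reduces to the mean difference between two measurement methods and its 95% limits of agreement; a minimal sketch with made-up cup-to-disc measurements:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Mean difference and 95% limits of agreement (mean diff +/- 1.96 SD of differences)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    md, sd = d.mean(), d.std(ddof=1)          # sample SD of paired differences
    return md, (md - 1.96 * sd, md + 1.96 * sd)
```

Narrow limits centered near zero, as reported for the two grading methods, indicate close agreement.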


Subjects
Glaucoma/classification; Image Processing, Computer-Assisted/classification; Optic Disk/pathology; Optic Nerve Diseases/classification; Software; Adult; Aged; Aged, 80 and over; Cross-Sectional Studies; Epidemiologic Research Design; Female; Glaucoma/diagnosis; Glaucoma/ethnology; Humans; Male; Middle Aged; Observer Variation; Optic Nerve Diseases/diagnosis; Optic Nerve Diseases/ethnology; Photography; Reproducibility of Results; Singapore/epidemiology
16.
Invest Ophthalmol Vis Sci ; 54(3): 1789-96, 2013 Mar 11.
Article in English | MEDLINE | ID: mdl-23361512

ABSTRACT

PURPOSE: To evaluate an automated analysis of retinal fundus photographs to detect and classify severity of age-related macular degeneration compared with grading by the Age-Related Eye Disease Study (AREDS) protocol. METHODS: Following approval by the Johns Hopkins University School of Medicine's Institutional Review Board, digitized images (downloaded at http://www.ncbi.nlm.nih.gov/gap/) of field 2 (macular) fundus photographs from AREDS obtained over a 12-year longitudinal study were classified automatically using a visual words method and compared with severity assigned by expert graders. RESULTS: Sensitivities and specificities, respectively, of automated imaging, when compared with expert fundus grading of 468 patients and 2145 fundus images, are: 98.6% and 96.3% when classifying categories 1 and 2 versus categories 3 and 4; 96.1% and 96.1% when classifying categories 1 and 2 versus category 3; 98.6% and 95.7% when classifying category 1 versus category 3; and 96.0% and 94.7% when classifying category 1 versus categories 3 and 4. CONCLUSIONS: Automated analysis for classification of age-related macular degeneration from digitized fundus photographs has high sensitivity and specificity when compared with expert graders and may have a role in screening or monitoring.
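The sensitivities and specificities reported above follow directly from the 2x2 counts of automated calls against expert grades; a sketch with hypothetical labels (1 standing in for the "late AMD" side of each comparison):

```python
def sensitivity_specificity(y_true, y_pred, positive):
    """Sensitivity and specificity of binary predictions against reference grades."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    return tp / (tp + fn), tn / (tn + fp)
```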


Subjects
Diagnostic Techniques, Ophthalmological; Fundus Oculi; Macular Degeneration/classification; Macular Degeneration/diagnosis; Photography/methods; Algorithms; False Positive Reactions; Follow-Up Studies; Humans; Image Processing, Computer-Assisted/classification; Predictive Value of Tests; Sensitivity and Specificity; Severity of Illness Index
17.
Hum Brain Mapp ; 34(11): 3101-15, 2013 Nov.
Article in English | MEDLINE | ID: mdl-22711230

ABSTRACT

What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations.
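The noise-based classification-image idea sketched below averages the noise fields sorted by the behavioral or BOLD response and subtracts: pixels that systematically drive "face" responses stand out in the difference. The tiny 2x2 "noise fields" are purely illustrative:

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Classification image: mean noise field for high responses minus mean for low responses.

    noise_fields: array of shape (n_trials, h, w); responses: binary labels per trial."""
    n = np.asarray(noise_fields, dtype=float)
    r = np.asarray(responses).astype(bool)
    return n[r].mean(axis=0) - n[~r].mean(axis=0)
```

In the study's fMRI extension, the binary responses would come from thresholded BOLD amplitudes in face-selective regions rather than behavioral judgments.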


Subjects
Brain Mapping/methods , Face/physiology , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Social Perception , Visual Perception/physiology , Algorithms , Discrimination, Psychological/physiology , Echo-Planar Imaging/methods , Female , Humans , Image Processing, Computer-Assisted/classification , Magnetic Resonance Imaging/classification , Male , Oxygen/blood , Photic Stimulation , Psychomotor Performance/physiology , Young Adult
18.
J Med Syst ; 36(2): 865-81, 2012 Apr.
Article in English | MEDLINE | ID: mdl-20703647

ABSTRACT

The objective of this paper is to provide an improved technique that can assist oncopathologists in the correct screening of oral precancerous conditions, especially oral submucous fibrosis (OSF), with significant accuracy on the basis of collagen fibres in the sub-epithelial connective tissue. The proposed scheme comprises collagen fibre segmentation, textural feature extraction and selection, screening performance enhancement under Gaussian transformation, and finally classification. In this study, collagen fibres are segmented on the R, G, and B color channels using a back-propagation neural network from 60 normal and 59 OSF histological images, followed by histogram specification to reduce stain intensity variation. Next, textural features of the collagen area are extracted using fractal approaches, viz., differential box counting and the Brownian motion curve. Feature selection is performed using the Kullback-Leibler (KL) divergence criterion, and the screening performance is evaluated with various statistical tests to confirm Gaussian nature. The screening performance is then enhanced under Gaussian transformation of the non-Gaussian features using a hybrid distribution. The routine screening is designed around two statistical classifiers, viz., Bayesian classification and support vector machines (SVM), to classify normal and OSF images. It is observed that the SVM with a linear kernel function provides better classification accuracy (91.64%) than the Bayesian classifier. The addition of fractal features of collagen under Gaussian transformation improves the Bayesian classifier's performance from 80.69% to 90.75%.
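The Bayesian arm of such a scheme can be sketched as a naive Gaussian classifier: fit per-class feature means and variances, then assign each sample to the class with the higher Gaussian log-likelihood. A minimal sketch on synthetic features (the feature values, dimensionality, and class separation are assumptions, not the paper's histological data):

```python
import numpy as np

def gaussian_bayes_fit(X, y):
    """Per-class mean and variance for a naive Gaussian (Bayesian) classifier."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
    return params

def gaussian_bayes_predict(X, params):
    """Assign each sample to the class with the highest Gaussian log-likelihood."""
    classes = sorted(params)
    ll = []
    for c in classes:
        m, v = params[c]
        ll.append((-0.5 * (np.log(2 * np.pi * v) + (X - m) ** 2 / v)).sum(axis=1))
    return np.array(classes)[np.argmax(ll, axis=0)]

# Hypothetical textural features for "normal" (0) vs "OSF" (1) tissue,
# matching the study's 60/59 image counts but otherwise synthetic.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (60, 4)), rng.normal(1.5, 1.0, (59, 4))])
y = np.array([0] * 60 + [1] * 59)

params = gaussian_bayes_fit(X, y)
acc = (gaussian_bayes_predict(X, params) == y).mean()
```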


Subjects
Early Detection of Cancer/methods , Image Processing, Computer-Assisted/methods , Mouth Neoplasms/diagnosis , Oral Submucous Fibrosis/diagnosis , Precancerous Conditions/diagnosis , Bayes Theorem , Collagen , Connective Tissue/pathology , Humans , Image Processing, Computer-Assisted/classification , Mouth Mucosa/pathology , Mouth Neoplasms/classification , Mouth Neoplasms/pathology , Neural Networks, Computer , Normal Distribution , Oral Submucous Fibrosis/classification , Oral Submucous Fibrosis/pathology , Precancerous Conditions/classification , Precancerous Conditions/pathology , Sensitivity and Specificity , Support Vector Machine
19.
J Vis ; 11(8): 1-20, 2011 Jul 27.
Article in English | MEDLINE | ID: mdl-21795411

ABSTRACT

Perception of visual texture flows contributes to object segmentation, shape perception, and object recognition. To better understand the visual mechanisms underlying texture flow perception, we studied the factors limiting detection of simple forms of texture flows composed of local dot dipoles (Glass patterns) and related stimuli. To provide a benchmark for human performance, we derived an ideal observer for this task. We found that human detection thresholds were 8.0 times higher than ideal. We considered three factors that might account for this performance gap: (1) false matches between dipole dots (correspondence errors), (2) loss of sensitivity with increasing eccentricity, and (3) local orientation bandwidth. To estimate the effect of correspondence errors, we compared detection of Glass patterns with detection of matched line-segment stimuli, where no correspondence uncertainty exists. We found that eliminating correspondence errors reduced human thresholds by a factor of 1.8. We used a novel form of classification image analysis to directly estimate loss of sensitivity with eccentricity and local orientation bandwidth. Incorporating the eccentricity effects into the ideal observer model increased ideal thresholds by a factor of 2.9. Interestingly, estimated orientation bandwidth increased ideal thresholds by only 8%. Taking all three factors into account, human thresholds were only 58% higher than model thresholds. Our findings suggest that correspondence errors and eccentricity losses account for the great majority of the perceptual loss in the visual processing of Glass patterns.


Subjects
Form Perception/physiology , Image Processing, Computer-Assisted/classification , Orientation/physiology , Pattern Recognition, Visual , Space Perception/physiology , Adult , Humans , Models, Biological , Sensory Thresholds
20.
Neuroimage ; 58(2): 560-71, 2011 Sep 15.
Article in English | MEDLINE | ID: mdl-21729756

ABSTRACT

This paper describes a general kernel regression approach to predict experimental conditions from activity patterns acquired with functional magnetic resonance imaging (fMRI). The standard approach is to use classifiers that predict conditions from activity patterns. Our approach involves training a different regression machine for each experimental condition, so that a predicted temporal profile is computed for each condition. A decision function then classifies the responses from the testing volumes into the corresponding category by comparing the predicted temporal profile elicited by each event against a canonical hemodynamic response function. This approach utilizes the temporal information in the fMRI signal and retains more training samples, improving classification accuracy over an existing strategy. The paper also introduces efficient techniques of temporal compaction, which operate directly on kernel matrices for kernel classification algorithms such as the support vector machine (SVM). Temporal compaction can convert the kernel computed from each fMRI volume directly into the kernel computed from beta-maps, averages of volumes, or a spatial-temporal kernel. The proposed method was applied to three datasets. The first is a block-design experiment with three conditions of image stimuli; the method outperformed SVM classifiers under three types of temporal compaction in single-subject leave-one-block-out cross-validation, achieving 100% classification accuracy for six of the subjects and an average of 94% accuracy across all 16 subjects, exceeding the best SVM result of 83% accuracy (p=0.008). The second dataset is a block-design experiment with two conditions of visual attention (left or right); our method yielded 96% accuracy versus 92% for SVM (p=0.005). The third dataset is from a fast event-related experiment with two categories of visual objects; our method achieved 77% accuracy, compared with 72% using SVM (p=0.0006).
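The per-condition regression-plus-decision idea can be sketched in a toy setting: one ridge (linear-kernel) regression machine per condition maps voxel patterns to a predicted temporal profile, and a test block is assigned to the condition whose predicted profile correlates best with a canonical hemodynamic response. Everything below (the crude HRF shape, data sizes, spatial patterns, and noise level) is a simulation assumption, not the paper's pipeline or data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_time = 50, 40

def hrf_profile(n):
    """Crude gamma-like stand-in for a canonical hemodynamic response."""
    t = np.arange(n, dtype=float)
    h = t ** 2 * np.exp(-t / 2.0)
    return h / h.max()

canonical = hrf_profile(n_time)

# Two conditions, each with its own spatial pattern, modulated in time by
# the canonical response (a hypothetical stand-in for real fMRI volumes).
patterns = rng.standard_normal((2, n_vox))

def simulate(cond):
    return (canonical[:, None] * patterns[cond][None, :]
            + 0.5 * rng.standard_normal((n_time, n_vox)))

train = [simulate(0), simulate(1)]

# One regression machine per condition: ridge weights mapping a volume's
# voxel pattern to that condition's predicted temporal amplitude.
lam = 1.0
weights = []
for cond in (0, 1):
    X = train[cond]  # (time, voxels)
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_vox), X.T @ canonical)
    weights.append(w)

def classify(volumes):
    """Pick the condition whose predicted profile best matches the HRF."""
    scores = [np.corrcoef(volumes @ w, canonical)[0, 1] for w in weights]
    return int(np.argmax(scores))

pred = classify(simulate(0))
```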


Subjects
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Adult , Algorithms , Artificial Intelligence , Databases, Factual , Echo-Planar Imaging , Female , Humans , Image Processing, Computer-Assisted/classification , Linear Models , Magnetic Resonance Imaging/classification , Male , Photic Stimulation , Regression Analysis , Reproducibility of Results , Support Vector Machine , Young Adult