Results 1 - 20 of 20
1.
bioRxiv ; 2024 May 13.
Article in English | MEDLINE | ID: mdl-38328036

ABSTRACT

CryoEM democratization is hampered by limited access to costly plunge-freezing supplies. We introduce methods, called CryoCycle, for reliably blotting, vitrifying, and reusing clipped cryoEM grids. We demonstrate that vitreous ice may be produced by plunging clipped grids with purified proteins into liquid ethane and that clipped grids may be reused several times for different protein samples. Furthermore, we demonstrate the vitrification of thin areas of cells prepared on gold-coated, pre-clipped grids.

2.
Transl Vis Sci Technol ; 12(11): 25, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37982767

ABSTRACT

Purpose: Adaptive optics scanning light ophthalmoscope (AOSLO) imaging offers a microscopic view of the living retina, holding promise for diagnosing and researching eye diseases like retinitis pigmentosa and Stargardt's disease. AOSLO's clinical impact hinges on early detection through automated analysis tools. Methods: We introduce Cone Density Estimation (CoDE) and CoDE for Diagnosis (CoDED). CoDE is a deep density estimation model for cone counting that estimates a density function whose integral is equal to the number of cones. CoDED integrates CoDE with deep image classifiers for diagnosis. We use two AOSLO image datasets to train and evaluate the cone density estimation and classification models for retinitis pigmentosa and Stargardt's disease. Results: Bland-Altman plots show that CoDE outperforms state-of-the-art models for cone density estimation. CoDED achieved an F1 score of 0.770 ± 0.04 for disease classification, outperforming traditional convolutional networks. Conclusions: CoDED shows promise in classifying retinitis pigmentosa and Stargardt's disease cases from a single AOSLO image. Our preliminary results suggest that analyzing patterns in the retinal cellular mosaic may aid in the diagnosis of genetic eye diseases. Translational Relevance: Our study explores the potential of deep density estimation models to aid in the analysis of AOSLO images. Although the initial results are encouraging, more research is needed to fully realize the potential of such methods in the treatment and study of genetic retinal pathologies.
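The counting principle behind CoDE — predict a density map whose integral equals the number of cones — can be sketched in a few lines. This is a toy illustration with hand-placed Gaussian kernels, not the authors' network; coordinates and kernel parameters are made up:

```python
import math

def gaussian_kernel(size, sigma):
    """2D Gaussian kernel normalized to sum to 1 (unit mass)."""
    c = size // 2
    k = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2 * sigma ** 2))
          for j in range(size)] for i in range(size)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def density_map(shape, points, size=7, sigma=1.5):
    """Render each annotated cone as a unit-mass Gaussian; the map's
    integral (its sum) then equals the number of cones."""
    h, w = shape
    dm = [[0.0] * w for _ in range(h)]
    kern = gaussian_kernel(size, sigma)
    c = size // 2
    for (y, x) in points:
        for i in range(size):
            for j in range(size):
                yy, xx = y + i - c, x + j - c
                if 0 <= yy < h and 0 <= xx < w:
                    dm[yy][xx] += kern[i][j]
    return dm

def count_from_density(dm):
    """Integrate the density map to recover the object count."""
    return sum(sum(row) for row in dm)

cones = [(10, 10), (20, 30), (40, 15)]          # illustrative cone centers
dm = density_map((64, 64), cones)
print(round(count_from_density(dm), 2))          # 3.0: each kernel lies fully inside the map
```

A model trained to regress such maps can be supervised with a pixel-wise loss, and its count prediction is just the sum of its output.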


Subjects
Retinal Cone Photoreceptor Cells, Retinitis Pigmentosa, Humans, Ophthalmoscopy/methods, Retinal Cone Photoreceptor Cells/pathology, Retina/diagnostic imaging, Ophthalmoscopes, Retinitis Pigmentosa/diagnosis, Retinitis Pigmentosa/genetics
3.
J Mol Biol ; 435(8): 168008, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36773692

ABSTRACT

The severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) replicates and evades detection using ER membranes and their associated protein machinery. Among these hijacked human proteins is selenoprotein S (selenos). This selenoprotein takes part in protein quality control, signaling, and the regulation of cytokine secretion. While the role of selenos in the viral life cycle is not yet known, it has been reported to interact with SARS-CoV-2 nonstructural protein 7 (nsp7), a viral protein essential for the replication of the virus. We set out to study whether selenos and nsp7 interact directly and whether they can still bind when nsp7 is bound to the replication and transcription complex of the virus. Using biochemical assays, we show that selenos binds directly to nsp7. We also found that selenos can bind nsp7 when it is part of the coronavirus's minimal replication and transcription complex, composed of nsp7, nsp8, and the RNA-dependent RNA polymerase nsp12. Finally, through crosslinking experiments, we mapped the interaction sites of selenos and nsp7 in the replication complex and showed that the hydrophobic segment of selenos is essential for binding to nsp7. This arrangement leaves an extended helix and the intrinsically disordered segment of selenos, including the reactive selenocysteine, exposed and free to potentially recruit additional proteins to the replication and transcription complex.


Subjects
Membrane Proteins, SARS-CoV-2, Selenoproteins, Genetic Transcription, Viral Nonstructural Proteins, Virus Replication, Humans, RNA-Dependent RNA Polymerase/chemistry, SARS-CoV-2/genetics, SARS-CoV-2/physiology, Selenoproteins/genetics, Selenoproteins/metabolism, Viral Nonstructural Proteins/metabolism, Membrane Proteins/metabolism
4.
Arch Biochem Biophys ; 731: 109427, 2022 11 30.
Article in English | MEDLINE | ID: mdl-36241082

ABSTRACT

Selenoprotein S (selenos) is a small, intrinsically disordered membrane protein that is associated with various cellular functions, such as inflammatory processes, cellular stress response, protein quality control, and signaling pathways. It is primarily known for its contribution to the ER-associated degradation (ERAD) pathway, which governs the extraction of misfolded proteins or misassembled protein complexes from the ER to the cytosol for degradation by the proteasome. However, selenos's other cellular roles in signaling are equally vital, including the control of transcription factors and cytokine levels. Consequently, genetic polymorphisms of selenos are associated with increased risk for diabetes, dyslipidemia, and cardiovascular diseases, while high expression levels correlate with poor prognosis in several cancers. Its inhibitory role in cytokine secretion is also exploited by viruses. Since selenos binds multiple protein complexes, however, its specific contributions to various cellular pathways and diseases have been difficult to establish. Thus, the precise cellular functions of selenos and their interconnectivity have only recently begun to emerge. This review aims to summarize recent insights into the structure, interactome, and cellular roles of selenos.


Subjects
Membrane Proteins, Selenoproteins, Selenoproteins/chemistry, Membrane Proteins/metabolism, Cytokines
5.
Transl Vis Sci Technol ; 11(9): 29, 2022 09 01.
Article in English | MEDLINE | ID: mdl-36169966

ABSTRACT

Purpose: To develop an automated deep learning (DL) method to classify macular edema (ME) from optical coherence tomography (OCT) scans. Methods: A total of 4230 images were obtained from data repositories of patients seen at an ophthalmology clinic in Colombia and from two free open-access databases. The images were annotated with four biomarkers (BMs): intraretinal fluid, subretinal fluid, hyperreflective foci/tissue, and drusen. Two expert ophthalmologists then labeled each scan as control or as one of three ocular diseases: diabetic macular edema (DME), neovascular age-related macular degeneration (nAMD), or retinal vein occlusion (RVO). Our method was developed in four consecutive phases: segmentation of BMs, combination of BMs, feature extraction with convolutional neural networks to achieve binary classification for each disease, and, finally, multiclass classification of diseases and control images. Results: The accuracy of our model was 97% for nAMD, and 94%, 93%, and 93% for DME, RVO, and control, respectively. Area under curve values were 0.99, 0.98, 0.96, and 0.97, respectively. The mean Cohen's kappa coefficient for the multiclass classification task was 0.84. Conclusions: The proposed DL model can distinguish normal scans from those with ME and classify the cause of ME among three major exudative retinal diseases with high accuracy and reliability. Translational Relevance: Our DL approach can optimize the efficiency and timeliness of etiological diagnosis of ME, thus improving patient access and clinical decision making. It could be useful in places with a shortage of specialists and for readers who evaluate OCT scans remotely.


Subjects
Deep Learning, Diabetic Retinopathy, Macular Edema, Retinal Vein Occlusion, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/diagnostic imaging, Humans, Macular Edema/diagnostic imaging, Macular Edema/etiology, Reproducibility of Results, Retinal Vein Occlusion/diagnosis, Retinal Vein Occlusion/diagnostic imaging, Tomography, Optical Coherence/methods
6.
Comput Biol Med ; 145: 105472, 2022 06.
Article in English | MEDLINE | ID: mdl-35430558

ABSTRACT

Although for many diseases there is a progressive diagnosis scale, automatic analysis of grade-based medical images is quite often addressed as a binary classification problem, missing the finer distinctions and intrinsic relations between the different possible stages or grades. Ordinal regression (or classification) considers the order of the values of the categorical labels and thus takes into account the order of the grading scales used to assess the severity of different medical conditions. This paper presents a quantum-inspired deep probabilistic learning ordinal regression model for medical image diagnosis that takes advantage of the representational power of deep learning and the intrinsic ordinal information of disease stages. The method is evaluated on two different medical image analysis tasks: prostate cancer diagnosis and diabetic retinopathy grade estimation on eye fundus images. The experimental results show that the proposed method improves not only the diagnostic performance on the two tasks but also the interpretability of the results, by quantifying the uncertainty of the predictions in comparison to conventional deep classification and regression architectures. The code and datasets are available at https://github.com/stoledoc/DQOR.
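The ordinal-regression idea the paper builds on (the quantum-inspired model itself lives at the GitHub link above) can be illustrated with a plain cumulative-link model: a latent severity score is cut into ordered grades by thresholds, so predicted probabilities respect the grading scale by construction. The thresholds and scores below are invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_probs(score, thresholds):
    """Cumulative-link ordinal model: P(y <= k) = sigmoid(theta_k - score),
    with thresholds sorted ascending. Class probabilities are differences of
    adjacent cumulative terms, so they honor the grade ordering and sum to 1."""
    cum = [sigmoid(t - score) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Four hypothetical DR grades separated by three thresholds on a latent score.
thetas = [-1.0, 0.5, 2.0]
p = ordinal_probs(0.8, thetas)
print([round(x, 3) for x in p], round(sum(p), 3))
```

Raising the score monotonically shifts probability mass toward higher grades, which is exactly the structure a nominal softmax classifier ignores.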


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Prostatic Neoplasms, Diabetic Retinopathy/diagnostic imaging, Fundus Oculi, Humans, Male, Prostate, Prostatic Neoplasms/diagnostic imaging, Uncertainty
7.
Clin Oral Investig ; 26(3): 3085-3096, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34997358

ABSTRACT

OBJECTIVE: To create a mandibular shape prediction model using machine learning techniques and geometric morphometrics. MATERIALS AND METHODS: Six hundred twenty-nine radiographs were used to select the most appropriate craniomaxillary variables in different craniofacial pattern classifications using a support vector machine. To obtain the three-dimensional mandibular shape, a Procrustes fit was performed on 55 tomograms, in which 17 three-dimensional landmarks were digitized. Partial least squares regression was employed to find the best covariation between craniomaxillary angles and the symmetric components of mandibular shape. The model was applied to a new sample of six tomograms and evaluated by the mean absolute error. Each predicted mandible was assessed using the Hausdorff distance (HDu) and a color scale. The model was also applied exploratively to six new radiographs. RESULTS: Covariation was 88.66% (p < 0.0001), explained by twelve craniomaxillary variables. Differences between the original and predicted models were low, with a mean absolute error of 0.0143. The mean distance between meshes ranged from 0.0033 to 0.0059 HDu, and each color scale demonstrated general similarity between the surfaces. CONCLUSIONS: This approach offered promising results in obtaining a mandibular prediction model that enhances shape properties in an economical way and is applicable to a Latin American population. Clinical proof of this method will require further studies with larger samples. CLINICAL RELEVANCE: This method offers a reliable, economical alternative to traditional mandibular prediction methods and is applicable to the Latin American population.
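The Procrustes superimposition step mentioned in the methods removes translation, scale, and rotation from landmark configurations before shape analysis. The study uses 17 three-dimensional landmarks; this sketch is only the 2D analogue, where the optimal rotation has a simple closed form:

```python
import math

def procrustes_2d(ref, pts):
    """Ordinary Procrustes alignment in 2D: translate both configurations to
    a common centroid, scale each to unit centroid size, then rotate `pts`
    onto `ref` by the closed-form optimal angle."""
    def center(p):
        mx = sum(x for x, _ in p) / len(p)
        my = sum(y for _, y in p) / len(p)
        return [(x - mx, y - my) for x, y in p]

    def unit_scale(p):
        s = math.sqrt(sum(x * x + y * y for x, y in p))
        return [(x / s, y / s) for x, y in p]

    a = unit_scale(center(ref))
    b = unit_scale(center(pts))
    # Optimal rotation angle maximizing the sum of dot products a_i . (R b_i).
    num = sum(ay * bx - ax * by for (ax, ay), (bx, by) in zip(a, b))
    den = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))
    t = math.atan2(num, den)
    ct, st = math.cos(t), math.sin(t)
    rotated = [(x * ct - y * st, x * st + y * ct) for x, y in b]
    return a, rotated

# A unit square, then the same square scaled by 2, rotated 30 deg, translated.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
c30, s30 = math.cos(math.pi / 6), math.sin(math.pi / 6)
moved = [(2 * (x * c30 - y * s30) + 5, 2 * (x * s30 + y * c30) + 7) for x, y in square]
a, b = procrustes_2d(square, moved)
print(max(math.hypot(ax - bx, ay - by) for (ax, ay), (bx, by) in zip(a, b)))  # ~0 up to float error
```

In 3D the optimal rotation is obtained via an SVD of the cross-covariance matrix instead of a single angle, but the pipeline (center, scale, rotate) is the same.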


Subjects
Machine Learning, Mandible, Cephalometry/methods, Mandible/diagnostic imaging
8.
Comput Methods Programs Biomed ; 178: 181-189, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31416547

ABSTRACT

BACKGROUND AND OBJECTIVES: Spectral Domain Optical Coherence Tomography (SD-OCT) is a volumetric imaging technique that allows measuring patterns between layers, such as small amounts of fluid. Since 2012, automatic medical image analysis performance has steadily increased through the use of deep learning models that automatically learn relevant features for specific tasks, instead of relying on manually designed visual features. Nevertheless, providing insights into and interpretation of the predictions made by such models is still a challenge. This paper describes a deep learning model able to detect medically interpretable information in relevant images from a volume to classify diabetes-related retinal diseases. METHODS: This article presents a new deep learning model, OCT-NET, a customized convolutional neural network for processing scans extracted from optical coherence tomography volumes. OCT-NET is applied to the classification of three conditions seen in SD-OCT volumes. Additionally, the proposed model includes a feedback stage that highlights the areas of the scans to support the interpretation of the results. This information is potentially useful for a medical specialist while assessing the prediction produced by the model. RESULTS: The proposed model was tested on the public SERI+CUHK and A2A SD-OCT data sets containing healthy cases as well as diabetic retinopathy, diabetic macular edema, and age-related macular degeneration. The experimental evaluation shows that the proposed method outperforms conventional state-of-the-art convolutional deep learning models reported on the SERI+CUHK and A2A SD-OCT data sets, with a precision of 93% and an area under the ROC curve (AUC) of 0.99. CONCLUSIONS: The proposed method is able to classify the three studied retinal diseases with high accuracy. One advantage of the method is its ability to produce interpretable clinical information by highlighting the regions of the image that contribute most to the classifier's decision.


Subjects
Deep Learning, Diabetic Retinopathy/diagnostic imaging, Macular Degeneration/diagnostic imaging, Macular Edema/diagnostic imaging, Retinal Diseases/diagnostic imaging, Tomography, Optical Coherence, Aged, Aged, 80 and over, Algorithms, Area Under Curve, Humans, Middle Aged, Neural Networks, Computer, Pattern Recognition, Automated, Reproducibility of Results, Software
9.
Forensic Sci Int ; 281: 187.e1-187.e7, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29126697

ABSTRACT

BACKGROUND: The prediction of mandibular bone morphology in facial reconstruction for forensic purposes is usually performed assuming a straight profile corresponding to skeletal class I, with linear and parametric analyses that limit the search for relationships between mandibular and craniomaxillary variables. OBJECTIVE: To predict mandibular morphology from craniomaxillary variables on lateral radiographs in patients with skeletal class I, II and III, using machine learning techniques such as Artificial Neural Networks and Support Vector Regression. MATERIALS AND METHODS: 229 standardized lateral radiographs from Colombian patients of both sexes aged 18-25 years were collected. Coordinates of craniofacial landmarks were used to create mandibular and craniomaxillary variables. Selected mandibular measurements were predicted from 5 sets of craniomaxillary variables, or input characteristics, using these techniques, and predictions were evaluated through the correlation coefficient of a ridge regression between the real and the predicted value. RESULTS: Coefficients from 0.84 to 0.99 were obtained with Artificial Neural Networks for the 17 mandibular measures, and two coefficients above 0.7 were obtained with Support Vector Regression. CONCLUSION: The craniomaxillary variables used showed high ability to predict the selected mandibular variables, which may be key to facial reconstruction from specific craniomaxillary measures in the three skeletal classifications.
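The evaluation criterion — a correlation coefficient between real and predicted measurements — can be sketched as a plain Pearson correlation. The numbers below are illustrative, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between measured and predicted values:
    covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical real vs. predicted mandibular measurements (mm).
real = [70.2, 68.5, 74.1, 71.9, 69.3]
pred = [70.0, 68.9, 73.6, 72.4, 69.1]
print(round(pearson_r(real, pred), 3))
```

A coefficient near 1 (as the 0.84-0.99 range reported above) means the predictor preserves the ordering and spread of the true measurements, not that the absolute errors are small.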


Subjects
Cephalometry, Malocclusion/diagnostic imaging, Mandible/anatomy & histology, Neural Networks, Computer, Support Vector Machine, Adolescent, Adult, Anatomic Landmarks, Female, Humans, Male, Mandible/diagnostic imaging, Regression Analysis, Young Adult
10.
Sci Rep ; 7: 46450, 2017 04 18.
Article in English | MEDLINE | ID: mdl-28418027

ABSTRACT

With the increasing ability to routinely and rapidly digitize whole slide images with slide scanners, there has been interest in developing computerized image analysis algorithms for automated detection of disease extent from digital pathology images. The manual identification of presence and extent of breast cancer by a pathologist is critical for patient management for tumor staging and assessing treatment response. However, this process is tedious and subject to inter- and intra-reader variability. For computerized methods to be useful as decision support tools, they need to be resilient to data acquired from different sources, different staining and cutting protocols and different scanners. The objective of this study was to evaluate the accuracy and robustness of a deep learning-based method to automatically identify the extent of invasive tumor on digitized images. Here, we present a new method that employs a convolutional neural network for detecting presence of invasive tumor on whole slide images. Our approach involves training the classifier on nearly 400 exemplars from multiple different sites, and scanners, and then independently validating on almost 200 cases from The Cancer Genome Atlas. Our approach yielded a Dice coefficient of 75.86%, a positive predictive value of 71.62% and a negative predictive value of 96.77% in terms of pixel-by-pixel evaluation compared to manually annotated regions of invasive ductal carcinoma.
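The pixel-by-pixel metrics reported above (Dice coefficient, positive and negative predictive value) reduce to simple confusion-matrix counts over the predicted and annotated masks. A minimal sketch on toy binary masks:

```python
def pixel_metrics(pred, truth):
    """Pixel-wise Dice, PPV and NPV for binary masks given as flat 0/1 lists."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    dice = 2 * tp / (2 * tp + fp + fn)   # overlap between masks
    ppv = tp / (tp + fp)                 # precision of tumor pixels
    npv = tn / (tn + fn)                 # reliability of background pixels
    return dice, ppv, npv

pred  = [1, 1, 0, 0, 1, 0, 0, 0]   # toy predicted mask (flattened)
truth = [1, 0, 0, 0, 1, 1, 0, 0]   # toy ground-truth mask
print(pixel_metrics(pred, truth))
```

Note that with heavily imbalanced masks (mostly background), NPV can be high even when Dice is modest, which is consistent with the figures quoted in the abstract.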


Subjects
Breast Neoplasms/pathology, Carcinoma, Ductal, Breast/pathology, Image Interpretation, Computer-Assisted/methods, Adult, Aged, Algorithms, Breast Neoplasms/diagnostic imaging, Carcinoma, Ductal, Breast/diagnostic imaging, Deep Learning, Female, Humans, Middle Aged, Neoplasm Invasiveness, Tumor Burden, Young Adult
11.
Comput Methods Programs Biomed ; 127: 248-57, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26826901

ABSTRACT

BACKGROUND AND OBJECTIVE: The automatic classification of breast imaging lesions is currently an unsolved problem. This paper describes an innovative representation learning framework for breast cancer diagnosis in mammography that integrates deep learning techniques to automatically learn discriminative features, avoiding the design of specific hand-crafted image-based feature detectors. METHODS: A new biopsy-proven benchmarking dataset was built from 344 breast cancer patients' cases containing a total of 736 film mammography (mediolateral oblique and craniocaudal) views, representative of manually segmented lesions associated with masses: 426 benign lesions and 310 malignant lesions. The developed method comprises two main stages: (i) preprocessing to enhance image details and (ii) supervised training for learning both the features and the breast imaging lesion classifier. In contrast to previous works, we adopt a hybrid approach where convolutional neural networks are used to learn the representation in a supervised way, instead of designing particular descriptors to explain the content of mammography images. RESULTS: Experimental results using the developed benchmarking breast cancer dataset demonstrated that our method achieves significantly improved performance compared to state-of-the-art image descriptors, such as the histogram of oriented gradients (HOG) and the histogram of gradient divergence (HGD), increasing performance from 0.787 to 0.822 in terms of the area under the ROC curve (AUC). Interestingly, this model also outperforms a set of hand-crafted features that take advantage of additional information from segmentation by the radiologist. Finally, the combination of both representations, learned and hand-crafted, resulted in the best descriptor for mass lesion classification, obtaining an AUC of 0.826.
CONCLUSIONS: A novel deep learning-based framework for the automatic classification of breast mass lesions in mammography was developed.
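The area under the ROC curve (AUC) used throughout these results has a useful rank interpretation: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (ties counted as half). A minimal sketch with made-up classifier scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Wilcoxon-Mann-Whitney rank statistic: fraction of
    positive/negative pairs where the positive outscores the negative."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for malignant (positive) and benign cases.
malignant = [0.9, 0.8, 0.75, 0.6]
benign = [0.7, 0.5, 0.4, 0.3, 0.2]
print(auc(malignant, benign))  # 0.95: 19 of 20 pairs correctly ordered
```

This pairwise form is O(n·m); production implementations sort once and use ranks, but give the same value.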


Subjects
Breast Neoplasms/diagnosis, Machine Learning, Mammography, Neural Networks, Computer, Biopsy, Breast Neoplasms/pathology, Female, Humans
12.
Forensic Sci Int ; 261: 159.e1-6, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26782070

ABSTRACT

BACKGROUND: The mandibular bone is an important part of forensic facial reconstruction, and it is often lost in skeletonized remains. It is therefore necessary to facilitate the identification process by simulating the mandibular position solely from craniomaxillary measures. Different modeling techniques have been applied to this task, but they contemplate only a straight facial profile belonging to skeletal pattern Class I, leaving out the 24.5% of Colombian skeletal patterns corresponding to Classes II and III; moreover, craniofacial measures do not follow a parametric trend or a normal distribution. OBJECTIVE: The aim of this study was to employ an automatic non-parametric method, Support Vector Machines, to classify skeletal patterns through craniomaxillary variables, in order to simulate the natural mandibular position in a contemporary Colombian sample. MATERIALS AND METHODS: 229 lateral cephalograms of Colombian young adults of both sexes were collected. Landmark coordinate protocols were used to create craniomaxillary variables. A Support Vector Machine with a linear kernel was trained on a subset of the available data and evaluated over the remaining samples. The weights of the model were used to select the 10 variables best suited for classification accuracy. RESULTS: An accuracy of 74.51% was obtained, defined by the Pr-A-N, N-Pr-A, A-N-Pr, A-Te-Pr, A-Pr-Rhi, Rhi-A-Pr, Pr-A-Te, Te-Pr-A, Zm-A-Pr and PNS-A-Pr angles. The class precision and class recall showed a correct distinction of Class II from Class III and vice versa. CONCLUSIONS: Support Vector Machines produced an important classification model of skeletal patterns using craniomaxillary variables that are not commonly used in the literature, applicable to the 24.5% of the contemporary Colombian sample.
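The variable-selection step — using the weights of a trained linear SVM to keep the 10 most discriminative angles — amounts to ranking features by the absolute value of their weights. A sketch with invented weights (the first three names appear in the abstract; the others are generic placeholders):

```python
def top_features_by_weight(weights, names, k):
    """Rank features by |weight| from a linear model and keep the k largest.
    In a linear SVM, the magnitude of each weight reflects how strongly the
    corresponding (standardized) feature influences the decision boundary."""
    ranked = sorted(zip(names, weights), key=lambda t: abs(t[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

names = ["Pr-A-N", "N-Pr-A", "A-N-Pr", "angle-4", "angle-5"]
weights = [1.9, -1.4, 0.8, 0.1, -0.05]   # illustrative, not the study's weights
print(top_features_by_weight(weights, names, 3))  # ['Pr-A-N', 'N-Pr-A', 'A-N-Pr']
```

This weight-magnitude criterion is only meaningful for a linear kernel (as used in the study) and when features are on comparable scales.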


Subjects
Cephalometry/methods, Support Vector Machine, Adolescent, Adult, Anatomic Landmarks, Colombia, Female, Forensic Anthropology/methods, Humans, Image Processing, Computer-Assisted, Male, Prospective Studies, Skull/anatomy & histology, Young Adult
13.
Artif Intell Med ; 64(2): 131-45, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25976208

ABSTRACT

OBJECTIVE: The paper addresses the problem of automatic detection of basal cell carcinoma (BCC) in histopathology images. In particular, it proposes a framework to both learn the image representation in an unsupervised way and visualize discriminative features supported by the learned model. MATERIALS AND METHODS: This paper presents an integrated unsupervised feature learning (UFL) framework for histopathology image analysis that comprises three main stages: (1) local (patch) representation learning using different strategies (sparse autoencoders, reconstruction independent component analysis, and topographic independent component analysis (TICA)), (2) global (image) representation learning using a bag-of-features representation or a convolutional neural network, and (3) a visual interpretation layer to highlight the most discriminant regions detected by the model. The integrated unsupervised feature learning framework was exhaustively evaluated in a histopathology image dataset for BCC diagnosis. RESULTS: The experimental evaluation produced a classification performance of 98.1% in terms of the area under the receiver operating characteristic curve for the proposed framework, outperforming the state-of-the-art discrete cosine transform patch-based representation by 7%. CONCLUSIONS: The proposed UFL-representation-based approach outperforms state-of-the-art methods for BCC detection. Thanks to its visual interpretation layer, the method is able to highlight discriminative tissue regions, providing better diagnostic support. Among the different UFL strategies tested, TICA-learned features exhibited the best performance thanks to their ability to capture low-level invariances inherent to the nature of the problem.


Subjects
Carcinoma, Basal Cell/pathology, Decision Support Systems, Clinical, Decision Support Techniques, Image Interpretation, Computer-Assisted/methods, Pathology, Clinical/methods, Skin Neoplasms/pathology, Unsupervised Machine Learning, Area Under Curve, Automation, Laboratory, Biopsy, Carcinoma, Basal Cell/classification, Discriminant Analysis, Humans, Predictive Value of Tests, ROC Curve, Reproducibility of Results, Skin Neoplasms/classification, Staining and Labeling
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2015: 797-800, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26736382

ABSTRACT

Feature extraction is a fundamental step when mammography image analysis is addressed using learning-based approaches. Traditionally, problem-dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks for learning features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy, improving on the state-of-the-art representation from 79.9% to 86% in terms of area under the ROC curve.


Subjects
Neural Networks, Computer, Mammography, ROC Curve
15.
IEEE Trans Med Imaging ; 33(6): 1262-74, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24893256

ABSTRACT

Neurodegenerative diseases comprise a wide variety of mental symptoms whose evolution is not directly related to the visual analysis made by radiologists, who can hardly quantify systematic differences. Moreover, automatic brain morphometric analyses, which do perform this quantification, contribute very little to the comprehension of the disease; that is, many of these methods classify but do not produce useful anatomo-functional correlations. This paper presents a new fully automatic image analysis method that reveals discriminative brain patterns associated with the presence of neurodegenerative diseases, mining systematic differences and therefore objectively grading any neurological disorder. This is accomplished by a fusion strategy that mixes together bottom-up and top-down information flows. Bottom-up information comes from a multiscale analysis of different image features, while the top-down stage includes learning and fusion strategies formulated as a max-margin multiple-kernel optimization problem. The capacity to find discriminative anatomic patterns was evaluated using Alzheimer's disease (AD) as the use case. The classification performance was assessed under different configurations of the proposed approach in two public brain magnetic resonance datasets (OASIS and MIRIAD) with patients diagnosed with AD, showing an improvement varying from 6.2% to 13% in the equal error rate measure with respect to what has been reported by the feature-based morphometry strategy. In terms of the anatomical analysis, the discriminant regions found by the proposed approach correlate highly with what has been reported in clinical studies of AD.


Subjects
Alzheimer Disease/diagnosis, Brain/pathology, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Pattern Recognition, Automated/methods, Aged, Aged, 80 and over, Case-Control Studies, Humans, Middle Aged
16.
J Biomed Inform ; 51: 114-28, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24820052

ABSTRACT

This work proposes a histology image indexing strategy based on multimodal representations obtained from the combination of visual features and associated semantic annotations. The two data modalities are complementary information sources for an image retrieval system, since visual features lack explicit semantic information and semantic terms do not usually describe the visual appearance of images. The paper proposes a novel strategy to build a fused image representation using matrix factorization algorithms and data reconstruction principles to generate a set of multimodal features. The methodology can seamlessly recover the multimodal representation of images without semantic annotations, allowing us to index new images using visual features only, and also accepting single example images as queries. Experimental evaluations on three different histology image data sets show that our strategy is a simple yet effective approach to building multimodal representations for histology image search, and that it outperforms the popular late-fusion approach to combining information.


Subjects
Data Mining/methods, Image Interpretation, Computer-Assisted/methods, Microscopy/methods, Multimodal Imaging/methods, Pattern Recognition, Automated/methods, Radiology Information Systems/organization & administration, Subtraction Technique, Algorithms, Biopsy/methods, Humans, Natural Language Processing, Observer Variation, Reproducibility of Results, Semantics, Sensitivity and Specificity
17.
Artif Intell Med ; 52(2): 91-106, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21664806

ABSTRACT

OBJECTIVE: The paper addresses the problem of finding visual patterns in histology image collections. In particular, it proposes a method for correlating basic visual patterns with high-level concepts, combining an appropriate image collection representation with state-of-the-art machine learning techniques. METHODOLOGY: The proposed method starts by representing the visual content of the collection using a bag-of-features strategy. Then, two main visual mining tasks are performed: finding associations between visual patterns and high-level concepts, and performing automatic image annotation. Associations are found using minimum-redundancy-maximum-relevance feature selection and co-clustering analysis. Annotation is done by applying a support-vector-machine classifier. Additionally, the proposed method includes an interpretation mechanism that associates concept annotations with corresponding image regions. The method was evaluated on two data sets: one comprising histology images from the four fundamental tissues, and the other composed of histopathology images used for cancer diagnosis. Different visual-word representations and codebook sizes were tested. The performance in both concept association and image annotation tasks was qualitatively and quantitatively evaluated. RESULTS: The results show that the method is able to find highly discriminative visual features and to associate them with high-level concepts. In the annotation task the method showed a competitive performance: an increase of 21% in f-measure with respect to the baseline in the histopathology data set, and an increase of 47% in the histology data set. CONCLUSIONS: The experimental evidence suggests that the bag-of-features representation is a good alternative for representing visual content in histology images.
The proposed method exploits this representation to perform visual pattern mining from a wider perspective where the focus is the image collection as a whole, rather than individual images.
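The bag-of-features representation at the core of this method quantizes local patch descriptors against a visual codebook and summarizes each image as a normalized visual-word histogram. A minimal sketch with a toy 2-dimensional codebook (real codebooks are learned by clustering thousands of patch descriptors):

```python
def bag_of_features(patches, codebook):
    """Assign each patch descriptor to its nearest codebook word (squared
    Euclidean distance) and return the normalized word histogram that
    represents the whole image."""
    def nearest(p):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(p, codebook[i])))
    hist = [0.0] * len(codebook)
    for p in patches:
        hist[nearest(p)] += 1.0
    total = sum(hist)
    return [h / total for h in hist]

codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]              # toy 3-word codebook
patches = [[0.1, 0.0], [0.9, 1.1], [0.0, 0.9], [1.0, 1.0]]   # toy patch descriptors
print(bag_of_features(patches, codebook))  # [0.25, 0.5, 0.25]
```

Because the histogram discards patch positions, it captures the composition of the tissue rather than its layout, which suits collection-level mining of recurring visual patterns.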


Subjects
Artificial Intelligence ; Data Mining/methods ; Diagnostic Imaging/methods ; Histological Techniques ; Neoplasms/diagnosis ; Databases, Factual ; Humans ; Pattern Recognition, Automated/methods
18.
J Biomed Inform ; 44(4): 519-28, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21296682

ABSTRACT

Large amounts of histology images are captured and archived in pathology departments due to the ever-expanding use of digital microscopy. The ability to manage and access these collections of digital images is regarded as a key component of next-generation medical imaging systems. This paper addresses the problem of retrieving histopathology images from a large collection using an example image as the query. The proposed approach automatically annotates the images in the collection, as well as the query images, with high-level semantic concepts. This semantic representation delivers improved retrieval performance, providing more meaningful results. We model the problem of automatic image annotation using kernel methods, resulting in a unified framework that includes: (1) multiple features for image representation, (2) a feature integration and selection mechanism, and (3) an automatic semantic image annotation strategy. An extensive experimental evaluation demonstrated the effectiveness of the proposed framework in building meaningful image representations for learning and useful semantic annotations for image retrieval.
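The feature-integration step in a kernel framework like the one described rests on the fact that a weighted sum of valid kernels is itself a valid kernel, so heterogeneous features can each get their own kernel before being combined. A minimal sketch under assumed features (the synthetic "color" and "texture" histograms, the 0.6/0.4 weights, and the histogram-intersection choice are all illustrative, not from the paper):

```python
# Multiple-kernel integration sketch: one kernel per feature type,
# combined into a single precomputed Gram matrix for an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 30
y = np.array([i % 2 for i in range(n)])

def norm_hist(raw):
    # Turn arbitrary values into a normalized histogram-like vector.
    raw = np.abs(raw)
    return raw / raw.sum(axis=1, keepdims=True)

# Two hypothetical per-image feature sets, e.g. color and texture histograms.
color = norm_hist(rng.normal(y[:, None], 0.5, (n, 8)))
texture = norm_hist(rng.normal(1 - y[:, None], 0.5, (n, 12)))

# Histogram-intersection kernel for color features.
def hist_intersection(A, B):
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=-1)

K_color = hist_intersection(color, color)
K_texture = texture @ texture.T  # linear kernel for texture features

# Feature integration: a weighted sum of kernels is itself a kernel.
K = 0.6 * K_color + 0.4 * K_texture

clf = SVC(kernel="precomputed").fit(K, y)
```

Kernel weights can then be tuned (or sparsified to zero) to act as the feature selection mechanism the abstract mentions.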


Subjects
Database Management Systems ; Diagnostic Imaging ; Image Interpretation, Computer-Assisted/methods ; Pattern Recognition, Automated/methods ; Semantics ; Algorithms ; Carcinoma, Basal Cell/pathology ; Computational Biology ; Databases, Factual ; Histocytochemistry ; Humans ; Medical Informatics ; Skin Neoplasms/pathology
19.
J Pathol Inform ; 2: S4, 2011.
Article in English | MEDLINE | ID: mdl-22811960

ABSTRACT

Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images to capture the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and third, a probabilistic annotation model that links the visual appearance of morphological and architectural features to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, improvements of 64% and 24%, respectively, over a baseline annotation method based on support vector machines.
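The latent topic step described above (non-negative matrix factorization of an image-by-visual-word count matrix) can be sketched as follows. This is a generic NMF sketch, not the authors' model: the counts are synthetic, and the topic-to-label affinity matrix stands in for whatever probabilistic link the annotation model actually learns.

```python
# NMF latent-topic sketch over a synthetic image-by-visual-word count matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
# Rows: 12 images; columns: counts over a 30-word visual codebook.
V = rng.poisson(lam=2.0, size=(12, 30)).astype(float)

# Factor V ~ W @ H: W holds per-image topic weights,
# H holds per-topic visual-word profiles.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)
H = model.components_

# Normalize topic weights per image, then score the 10 annotation labels
# through a (hypothetical) topic-to-label affinity matrix.
P_topic = W / (W.sum(axis=1, keepdims=True) + 1e-12)
topic_to_label = rng.random((4, 10))       # stand-in for learned affinities
label_scores = P_topic @ topic_to_label    # per-image scores over 10 annotations
annotations = label_scores.argmax(axis=1)
```

Non-negativity is what makes the topics readable here: each topic is an additive combination of visual words, which suits part-based representations of tissue structures.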

20.
J Biomed Inform ; 42(2): 296-307, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19166974

ABSTRACT

Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This study presents an original method for quantification and classification of erythrocytes in stained thin blood films infected with Plasmodium falciparum. The proposed approach comprises three main phases: a preprocessing step that corrects luminance differences; a segmentation step that uses the normalized RGB color space to classify pixels as either erythrocyte or background, followed by an inclusion-tree representation that structures the pixel information into objects from which erythrocytes are identified; and a two-step classification process that identifies infected erythrocytes and differentiates the infection stage using a trained bank of classifiers. Additionally, user intervention is allowed when the approach cannot make a proper decision. Four hundred and fifty malaria images were used for training and evaluating the method. Automatic identification of infected erythrocytes showed a specificity of 99.7% and a sensitivity of 94%. The infection stage was determined with an average sensitivity of 78.8% and an average specificity of 91.2%.
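The segmentation idea in this abstract, classifying pixels in the normalized RGB (chromaticity) space so that luminance is factored out, can be sketched on a toy image. The image, colors, and the fixed 0.36 red-chromaticity threshold below are illustrative; the paper trains a classifier rather than thresholding a single channel.

```python
# Normalized-RGB pixel classification sketch on a toy "blood film".
import numpy as np

# 16x16 RGB image: a stained disk (erythrocyte) on a pale background.
img = np.full((16, 16, 3), [200.0, 190.0, 185.0])
yy, xx = np.mgrid[:16, :16]
disk = (yy - 8) ** 2 + (xx - 8) ** 2 <= 25
img[disk] = [210.0, 120.0, 130.0]  # stained cell: relatively more red

# Normalized RGB removes luminance: r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B).
s = img.sum(axis=2, keepdims=True)
rgb_n = img / s

# Classify pixels as erythrocyte where red chromaticity dominates.
# (Illustrative threshold; the paper uses a trained pixel classifier.)
mask = rgb_n[..., 0] > 0.36
```

Because both regions here are bright, a raw red-channel threshold would fail (the background's R=200 is nearly the cell's R=210); the chromaticity r separates them cleanly (about 0.35 vs 0.46), which is the point of normalizing.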


Subjects
Diagnosis, Computer-Assisted/methods ; Erythrocytes ; Image Processing, Computer-Assisted/methods ; Malaria, Falciparum/parasitology ; Plasmodium falciparum ; Animals ; Decision Trees ; Erythrocytes/parasitology ; Erythrocytes/pathology ; Humans ; Malaria, Falciparum/diagnosis ; Microscopy ; Parasitemia/diagnosis ; Parasitemia/parasitology ; Pattern Recognition, Automated ; Plasmodium falciparum/isolation & purification ; Plasmodium falciparum/parasitology ; Sensitivity and Specificity