Results 1 - 16 of 16
1.
PLoS One ; 19(4): e0302169, 2024.
Article in English | MEDLINE | ID: mdl-38687694

ABSTRACT

The current medical standard for setting an oral cancer (OC) diagnosis is histological examination of a tissue sample taken from the oral cavity. This process is time-consuming and more invasive than the alternative of acquiring a brush sample followed by cytological analysis. Using a microscope, skilled cytotechnologists are able to detect changes due to malignancy; however, introducing this approach into clinical routine is associated with challenges such as a lack of resources and experts. To design a trustworthy OC detection system that can assist cytotechnologists, we are interested in deep learning based methods that can reliably detect cancer given only per-patient labels (thereby minimizing annotation bias), and that also provide information regarding which cells are most relevant for the diagnosis (thereby enabling supervision and understanding). In this study, we compare two approaches suitable for OC detection and interpretation: (i) a conventional single-instance learning (SIL) approach and (ii) a modern multiple-instance learning (MIL) method. To facilitate systematic evaluation of the considered approaches, we introduce, in addition to a real OC dataset with patient-level ground-truth annotations, a synthetic dataset, PAP-QMNIST. This dataset shares several properties of OC data, such as image size and a large and varied number of instances per bag, and may therefore act as a proxy model of a real OC dataset, while, in contrast to OC data, it offers reliable per-instance ground truth, as defined by design. PAP-QMNIST has the additional advantage of being visually interpretable for non-experts, which simplifies analysis of the behavior of the methods. For both OC and PAP-QMNIST data, we evaluate the performance of the methods using three different neural network architectures.
Our study indicates, somewhat surprisingly, that on both synthetic and real data, the performance of the SIL approach is better than or equal to that of the MIL approach. Visual examination by a cytotechnologist indicates that the methods manage to identify cells that deviate from normality, including malignant cells as well as those suspicious for dysplasia. We share the code as open source.
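As a toy illustration of the two labelling paradigms compared above, the sketch below contrasts per-instance (SIL-style) and bag-level (MIL-style) aggregation of per-cell malignancy scores into one patient-level prediction. The scores, threshold, and pooling rules are illustrative assumptions, not the paper's trained models.

```python
# Illustrative only: hand-picked scores stand in for per-cell network outputs.
import numpy as np

def sil_bag_fraction(instance_scores, threshold=0.5):
    """SIL-style aggregation: classify every cell independently, then
    report the fraction of cells flagged as malignant for the patient."""
    return (instance_scores >= threshold).mean()

def mil_max_pool(instance_scores):
    """MIL-style max pooling: the bag (patient) score is driven by the
    single most suspicious cell (the 'any malignant cell' assumption)."""
    return instance_scores.max()

bag = np.array([0.10, 0.20, 0.15, 0.90, 0.05])  # one high-scoring cell
print(sil_bag_fraction(bag))  # 0.2: only one cell in five is flagged
print(mil_max_pool(bag))      # 0.9: the bag is flagged by its top cell
```

With only per-patient labels, a MIL model can be trained end-to-end on the pooled bag score, whereas an SIL model needs per-instance labels or must treat the bag label as the label of every instance.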


Subjects
Deep Learning; Mouth Neoplasms; Mouth Neoplasms/diagnosis; Mouth Neoplasms/pathology; Humans; Neural Networks, Computer
2.
J Maxillofac Oral Surg ; 23(1): 23-32, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38312957

ABSTRACT

Oral cancer is widely prevalent in low- and middle-income countries, with a high mortality rate and poor quality of life for patients after treatment. Early treatment of cancer increases patient survival, improves quality of life, and results in less morbidity and a better prognosis. To reach this goal, early detection of malignancies using technologies that can be used in remote and low-resource areas is desirable. Such technologies should be affordable, accurate, and easy to use and interpret. This review surveys technologies that have the potential for implementation in primary health care and general dental practice, considering global perspectives and with a focus on the population of India, where oral cancer is highly prevalent. The technologies reviewed include both sample-based methods, such as saliva and blood analysis and brush biopsy, and more direct screening of the oral cavity, including fluorescence, Raman techniques, and optical coherence tomography. Digitalisation, followed by automated artificial-intelligence-based analysis, is a key element in facilitating wide access to these technologies for non-specialist personnel and in rural areas, increasing the quality and objectivity of the analysis while simultaneously reducing the labour and the need for highly trained specialists.

3.
J Oral Pathol Med ; 52(9): 826-833, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37710407

ABSTRACT

BACKGROUND: Oral squamous cell carcinoma (OSCC) is a widespread disease with a 5-year survival of only 50%-60%. Individuals with potentially malignant precursor lesions are at high risk. METHODS: Survival could be increased by effective, affordable, and simple screening methods, along with a shift from incisional tissue biopsies to non-invasive brush biopsies for cytology diagnosis, which are easy to perform in primary care. Combined with explainable, fast, and objective artificial-intelligence characterisation of cells through deep learning, an easy-to-use, rapid, and cost-effective methodology for finding high-risk lesions is achievable. The collection of cytology samples offers the further opportunity of explorative genomic analysis. RESULTS: Our prospective multicentre study of patients with leukoplakia yields a vast number of oral keratinocytes. In addition to cytopathological analysis, whole-slide imaging, and the training of deep neural networks, samples are analysed according to a single-cell RNA sequencing protocol, enabling mapping of the entire keratinocyte transcriptome. Mapping the changes in the genetic profile, based on mRNA expression, facilitates the identification of biomarkers that predict cancer transformation. CONCLUSION: This position paper highlights non-invasive methods for identifying patients with oral mucosal lesions at risk of malignant transformation. Reliable non-invasive methods for screening at-risk individuals bring the early diagnosis of OSCC within reach. The use of biomarkers to decide on a targeted therapy is most likely to improve the outcome. With the large-scale collection of samples following patients over time, combined with genomic analysis and modern machine-learning-based approaches for finding patterns in data, this path holds great promise.


Subjects
Carcinoma, Squamous Cell; Head and Neck Neoplasms; Mouth Neoplasms; Humans; Mouth Neoplasms/diagnosis; Mouth Neoplasms/prevention & control; Mouth Neoplasms/genetics; Carcinoma, Squamous Cell/diagnosis; Carcinoma, Squamous Cell/prevention & control; Carcinoma, Squamous Cell/genetics; Squamous Cell Carcinoma of Head and Neck; Artificial Intelligence; Prospective Studies; Biomarkers; Leukoplakia, Oral/diagnosis; Leukoplakia, Oral/pathology
4.
Int J Dent Hyg ; 21(3): 524-532, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37401636

ABSTRACT

BACKGROUND: Oral cancer is a severe and potentially fatal disease usually starting in the squamous epithelium lining the oral cavity. Together with oropharyngeal carcinoma, it is the fifth to sixth most common malignancy worldwide. To limit the increase in global oral cancer incidence seen over the past two decades, the World Health Assembly adopted a resolution urging member states to integrate preventive measures, such as the engagement and training of dental personnel in screening, early diagnosis, and treatment, into their national cancer control programs. AIM: The aim of this study was to investigate whether dental hygienists (DHs) and dentists (Ds) in general dental practice (GDP) can be entrusted to perform brush sampling of oral potentially malignant disorders (OPMDs), and to evaluate their level of comfort in performing brush biopsies. METHODS: Participants were five DHs and five Ds who received one day of theoretical and clinical training in oral pathology to identify OPMDs (leukoplakia [LP], erythroplakia [EP], and oral lichen planus [OLP]) and to perform brush sampling for PAP cytology and high-risk human papillomavirus (hrHPV) analysis. RESULTS: Of 222 collected samples, 215 were adequate for morphological assessment and hrHPV analysis. All the participants agreed that sample collection can be incorporated into the routine clinical duties of DHs and Ds, and most of them reported that sample collection and processing was easy or quite easy. CONCLUSION: Dentists and DHs are capable of collecting satisfactory material for cytology and hrHPV analysis. All the participating DHs and Ds were of the opinion that brush sampling could be handled routinely by DHs and Ds in GDP.


Subjects
Mouth Diseases; Mouth Neoplasms; Precancerous Conditions; Humans; Mouth Mucosa/pathology; Dental Hygienists; Mouth Neoplasms/etiology; Biopsy/adverse effects; Precancerous Conditions/diagnosis; Precancerous Conditions/complications; Precancerous Conditions/pathology; Dentists
5.
PLoS One ; 18(3): e0282432, 2023.
Article in English | MEDLINE | ID: mdl-36867617

ABSTRACT

We present INSPIRE, a top-performing general-purpose method for deformable image registration. INSPIRE brings distance measures which combine intensity and spatial information into an elastic B-spline-based transformation model and incorporates an inverse-inconsistency penalization supporting symmetric registration performance. We introduce several theoretical and algorithmic solutions which provide high computational efficiency, and thereby applicability of the proposed framework in a wide range of real scenarios. We show that INSPIRE delivers highly accurate, as well as stable and robust, registration results. We evaluate the method on a 2D dataset created from retinal images, characterized by the presence of networks of thin structures; here INSPIRE exhibits excellent performance, substantially outperforming the widely used reference methods. We also evaluate INSPIRE on the Fundus Image Registration Dataset (FIRE), which consists of 134 pairs of separately acquired retinal images, where it again performs excellently, substantially outperforming several domain-specific methods. We further evaluate the method on four benchmark datasets of 3D magnetic resonance images of brains, for a total of 2088 pairwise registrations. A comparison with 17 other state-of-the-art methods reveals that INSPIRE provides the best overall performance. Code is available at github.com/MIDA-group/inspire.


Subjects
Brain; Image Processing, Computer-Assisted; Retina; Brain/diagnostic imaging; Fundus Oculi; Humans; Retina/diagnostic imaging
6.
ArXiv ; 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36945686

ABSTRACT

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

7.
PLoS One ; 17(11): e0276196, 2022.
Article in English | MEDLINE | ID: mdl-36441754

ABSTRACT

Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a potentially easier monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of rigid registration of multimodal biomedical and medical 2D and 3D images. We compare the performance of four Generative Adversarial Network (GAN)-based I2I translation methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on four publicly available multimodal (2D and 3D) datasets and compare with the performance of registration achieved by several well-known approaches acting directly on multimodal image data. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities which express distinctly different properties of the sample is not well handled by the I2I translation approach. The evaluated representation learning method, which aims to find abstract image-like representations of the information shared between the modalities, manages better, and so does the Mutual Information maximisation approach, acting directly on the original multimodal images. We share our complete experimental setup as open source (https://github.com/MIDA-group/MultiRegEval), including method implementations, evaluation code, and all datasets, for further reproducing and benchmarking.
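For context, the Mutual Information maximisation baseline mentioned above scores alignment from the joint intensity histogram of the two images. The following is a minimal sketch of such a measure; the histogram size and the synthetic test images are assumptions, not the paper's setup.

```python
# Minimal mutual-information (MI) similarity sketch; higher = better aligned.
import numpy as np

def mutual_information(a, b, bins=16):
    """MI estimated from a joint histogram of the two images' intensities."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                  # joint probability table
    px = p.sum(axis=1, keepdims=True)        # marginal of image a
    py = p.sum(axis=0, keepdims=True)        # marginal of image b
    nz = p > 0                               # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
misaligned = np.roll(img, 5, axis=0)         # simulate a 5-pixel shift
print(mutual_information(img, img) > mutual_information(img, misaligned))  # True
```

An intensity-based registration loop would search over transformations of one image to maximise this score against the fixed image; MI needs no intensity correspondence between modalities, which is why it can act directly on the original multimodal pair.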


Subjects
Benchmarking; Translations; Empirical Research; Image Processing, Computer-Assisted
8.
Article in English | MEDLINE | ID: mdl-30794174

ABSTRACT

Intensity-based image registration approaches rely on similarity measures to guide the search for geometric correspondences with high affinity between images. The properties of the measure used are vital for the robustness and accuracy of the registration. In this study, a symmetric, intensity-interpolation-free, affine registration framework based on a combination of intensity and spatial information is proposed. The excellent performance of the framework is demonstrated on a combination of synthetic tests, recovering known transformations in the presence of noise, and real applications in biomedical and medical image registration, for both 2D and 3D images. The method exhibits greater robustness and higher accuracy than similarity measures in common use when inserted into a standard gradient-based registration framework available as part of the open-source Insight Segmentation and Registration Toolkit (ITK). The method is also empirically shown to have a low computational cost, making it practical for real applications. Source code is available.

9.
IEEE Trans Image Process ; 23(1): 126-36, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24158476

ABSTRACT

We present four novel point-to-set distances defined for fuzzy or gray-level image data, two based on integration over α-cuts and two based on the fuzzy distance transform, and we explore their theoretical properties. Inserting the proposed point-to-set distances into existing definitions of set-to-set distances, among which are the Hausdorff distance and the sum of minimal distances, we define a number of distances between fuzzy sets. These set distances are directly applicable for comparing gray-level images or fuzzy segmented objects, but also for detecting patterns and matching parts of images. The distance measures integrate shape and intensity/membership of the observed entities, providing a highly applicable tool for image processing and analysis. Performance evaluation of the derived set distances in real image processing tasks is conducted and presented. It is shown that the considered distances have a number of appealing theoretical properties and exhibit very good performance in template matching and object classification for fuzzy segmented images, as well as when applied directly on gray-level intensity images. Examples include recognition of handwritten digits and identification of virus particles. The proposed set distances perform excellently on the MNIST digit classification task, achieving the best reported error rate for classification using only rigid-body transformations and a kNN classifier.
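As a crisp-set illustration of the "sum of minimal distances" set distance mentioned above (the fuzzy α-cut and distance-transform variants of the paper generalize this to gray-level memberships; the point sets here are made up):

```python
# Crisp-set 'sum of minimal distances' between two point sets (symmetric).
import numpy as np

def sum_of_minimal_distances(A, B):
    """Average, over both sets, of each point's Euclidean distance
    to the nearest point of the other set."""
    D = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1))
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])   # A shifted up by one unit
print(sum_of_minimal_distances(A, B))    # 1.0: every point is 1 away
```

Unlike the Hausdorff distance, which keeps only the worst-case minimum, this averages the minima from both directions, making it less sensitive to single outlying points.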


Subjects
Algorithms; Fuzzy Logic; Image Interpretation, Computer-Assisted/methods; Linear Models; Pattern Recognition, Automated/methods; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
10.
IEEE Trans Med Imaging ; 32(6): 983-94, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23322760

ABSTRACT

Cancer diagnosis is based on visual examination, under a microscope, of tissue sections from biopsies. But whereas pathologists rely on tissue stains to identify morphological features, automated tissue recognition using color is fraught with problems that stem from image intensity variations due to variations in tissue preparation, variations in the spectral signatures of the stained tissue, spectral overlap and spatial aliasing in acquisition, and acquisition noise. We present a blind method for color decomposition of histological images. The method decouples intensity from color information and bases the decomposition only on the tissue absorption characteristics of each stain. By modeling the charge-coupled device (CCD) sensor noise, we improve the method's accuracy. We extend current linear decomposition methods to include stained tissues where one spectral signature cannot be separated from all combinations of the other tissues' spectral signatures. We demonstrate both qualitatively and quantitatively that our method results in more accurate decompositions than methods based on non-negative matrix factorization and independent component analysis. The result is one density map for each stained tissue type, classifying portions of pixels into the correct stained tissue and allowing accurate identification of morphological features that may be linked to cancer.
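For orientation, the classical non-blind baseline that such linear decomposition methods build on is unmixing in optical-density (Beer-Lambert) space with a known stain matrix. The sketch below uses an assumed two-stain matrix and a synthetic pixel; the blind method described above additionally estimates the stain vectors from the data and models the CCD sensor noise.

```python
# Non-blind linear stain unmixing in optical density (OD) space.
import numpy as np

def unmix(rgb, stain_matrix, background=255.0):
    """Convert RGB to OD via Beer-Lambert, then solve OD = c @ stain_matrix
    for the per-stain densities c by least squares (pseudo-inverse)."""
    od = -np.log(np.clip(rgb, 1.0, None) / background)  # avoid log(0)
    return od @ np.linalg.pinv(stain_matrix)

# Assumed (illustrative) unit OD vectors, one row per stain
stains = np.array([[0.65, 0.70, 0.29],    # hematoxylin-like
                   [0.07, 0.99, 0.11]])   # eosin-like
stains /= np.linalg.norm(stains, axis=1, keepdims=True)

# Synthetic pixel containing 1.2 units of stain 0 and 0.4 units of stain 1
pixel = 255.0 * np.exp(-(1.2 * stains[0] + 0.4 * stains[1]))
print(unmix(pixel, stains))   # recovers approximately [1.2, 0.4]
```

The recovered densities play the role of the per-stain density maps: applied pixel-wise to a whole image, each output channel is one stained-tissue map.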


Subjects
Histocytochemistry/methods; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Humans; Poisson Distribution; Reproducibility of Results; Stomach/chemistry
11.
Comput Methods Programs Biomed ; 102(1): 25-34, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21269725

ABSTRACT

Bone-implant integration is measured in several ways. Traditionally and routinely, 2D histological sections of samples containing bone and the biomaterial are stained and analyzed using a light microscope. Such a histological section provides detailed cellular information about the bone regeneration in the proximity of the implant. However, this information reflects the integration in only a very small fraction of the sample: a 10 µm-thick slice. In this study, we show that feature values quantified on 2D sections are highly dependent on the orientation and placement of the section, suggesting that a 3D analysis of the whole sample is important for a more complete judgment of the bone structure in the proximity of the implant. We propose features describing the 3D data by extending the features traditionally used for 2D analysis. We present a method for extracting these features from 3D image data and measure them on five 3D SRµCT image volumes. We also simulate cuts through the image volume at all possible section positions. These simulations show that the measurement variations due to the orientation of the section around the center line of the implant are about 30%.


Subjects
Bone Remodeling; Bone and Bones/diagnostic imaging; Imaging, Three-Dimensional/methods; Prostheses and Implants; Titanium/chemistry; Tomography, X-Ray Computed/methods; Biocompatible Materials/chemistry
12.
Aging Cell ; 9(5): 685-97, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20633000

ABSTRACT

The skeletal muscle fibre is a syncytium where each myonucleus regulates the gene products in a finite volume of the cytoplasm, i.e., the myonuclear domain (MND). We analysed aging- and gender-related effects on myonuclei organization and MND size in single muscle fibres from six young (21-31 years) and nine old men (72-96 years), and from six young (24-32 years) and nine old women (65-96 years), using a novel image analysis algorithm applied to confocal images. Muscle fibres were classified according to myosin heavy chain (MyHC) isoform expression. Our image analysis algorithm was effective in determining the spatial organization of myonuclei and the distribution of individual MNDs along the single fibre segments. Significant linear relations were observed between MND size and fibre size, irrespective of age, gender, and MyHC isoform expression. The spatial organization of individual myonuclei, calculated as the distribution of nearest neighbour (NN) distances in 3D, and MND size were affected in old age, but the changes were dependent on MyHC isoform expression. In type I muscle fibres, average NN values were lower and showed an increased variability in old age, reflecting an aggregation of myonuclei. Average MND size did not change in old age, but there was an increased MND size variability. In type IIa fibres, average NN values and MND sizes were lower in old age, reflecting the smaller size of these muscle fibres in old age. It is suggested that these changes have a significant impact on protein synthesis and degradation during the aging process.
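The nearest-neighbour statistic used above can be computed directly from myonuclei coordinates. A minimal sketch with synthetic 3D positions (the coordinates and units are made up, and the full study additionally reconstructs positions from confocal stacks):

```python
# Nearest-neighbour (NN) distance of each myonucleus to its closest neighbour.
import numpy as np

def nn_distances(points):
    """Pairwise Euclidean distances; the diagonal (self-distance) is
    excluded before taking each point's minimum."""
    diff = points[:, None, :] - points[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(D, np.inf)
    return D.min(axis=1)

nuclei = np.array([[0.0, 0.0, 0.0],
                   [3.0, 0.0, 0.0],
                   [3.0, 4.0, 0.0],
                   [10.0, 0.0, 0.0]])
d = nn_distances(nuclei)
print(d)                  # [3. 3. 4. 7.]
print(d.mean(), d.std())  # lower mean with higher spread suggests aggregation
```

In this reading, a decrease in the average NN value together with an increase in its variability is the signature of myonuclear aggregation reported for type I fibres.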


Subjects
Aging/physiology; Cell Nucleus/metabolism; Muscle Fibers, Skeletal/cytology; Sex Characteristics; Adult; Aged; Aged, 80 and over; Anatomy, Cross-Sectional; Body Weight; Female; Humans; Male; Microscopy, Confocal; Phenotype; Young Adult
13.
Exp Physiol ; 94(1): 117-29, 2009 Jan.
Article in English | MEDLINE | ID: mdl-18820003

ABSTRACT

This comparative study of myonuclear domain (MND) size in mammalian species representing a 100,000-fold difference in body mass, ranging from 25 g to 2500 kg, was undertaken to improve our understanding of myonuclear organization in skeletal muscle fibres. Myonuclear domain size was calculated from three-dimensional reconstructions in a total of 235 single muscle fibre segments at a fixed sarcomere length. Irrespective of species, the largest MND size was observed in muscle fibres expressing fast myosin heavy chain (MyHC) isoforms, but in the two smallest mammalian species studied (mouse and rat), MND size was not larger in the fast-twitch fibres expressing the IIA MyHC isoform than in the slow-twitch type I fibres. In the larger mammals, the type I fibres always had the smallest average MND size, but contrary to mouse and rat muscles, type IIA fibres had lower mitochondrial enzyme activities than type I fibres. Myonuclear domain size was highly dependent on body mass in the two muscle fibre types expressed in all species, i.e., types I and IIA. Myonuclear domain size increased with body mass in muscle fibres expressing both the beta/slow (type I; r = 0.84, P < 0.001) and the fast IIA MyHC isoform (r = 0.90, P < 0.001). Thus, MND size scales with body size and is highly dependent on muscle fibre type, independent of species. However, myosin isoform expression is not the sole protein determinant of MND size, and other protein systems, such as mitochondrial proteins, may be equally or more important determinants of MND size.


Subjects
Body Size/physiology; DNA/metabolism; Muscle Fibers, Fast-Twitch/metabolism; Muscle Fibers, Slow-Twitch/metabolism; Myosin Heavy Chains/metabolism; Animals; Body Mass Index; Female; Horses; Humans; Male; Mice; Mice, Inbred C57BL; Perissodactyla; Protein Isoforms/metabolism; Rats; Rats, Sprague-Dawley; Species Specificity; Swine; Young Adult
14.
IEEE Trans Pattern Anal Mach Intell ; 31(2): 357-63, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19110499

ABSTRACT

We present a novel method that provides an accurate and precise estimate of the length of the boundary (perimeter) of an object by taking into account gray levels on the boundary of the digitization of the same object. Assuming a model where pixel intensity is proportional to the coverage of a pixel, we show that the presented method provides error-free measurements of the length of straight boundary segments in the case of nonquantized pixel values. For a more realistic situation, where pixel values are quantized, we derive optimal estimates that minimize the maximal estimation error. We show that the estimate converges toward a correct value as the number of gray levels tends toward infinity. The method is easy to implement; we provide the complete pseudocode. Since the method utilizes only a small neighborhood, it is very easy to parallelize. We evaluate the estimator on a set of concave and convex shapes with known perimeters, digitized at increasing resolution. In addition, we provide an example of applicability of the method on real images, by suggesting appropriate preprocessing steps and presenting results of a comparison of the suggested method with other local approaches.


Subjects
Algorithms; Artificial Intelligence; Colorimetry/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Color; Reproducibility of Results; Sensitivity and Specificity
15.
Cytometry A ; 57(1): 22-33, 2004 Jan.
Article in English | MEDLINE | ID: mdl-14699602

ABSTRACT

BACKGROUND: Rac1 is a GTP-binding molecule involved in a wide range of cellular processes. Using digital image analysis, agonist-induced translocation of green fluorescent protein (GFP)-Rac1 to the cellular membrane can be estimated quantitatively for individual cells. METHODS: A fully automatic image analysis method for cell segmentation, feature extraction, and classification of cells according to their activation, i.e., GFP-Rac1 translocation and ruffle formation upon stimulation, is described. Based on training data produced by visual annotation of four image series, a statistical classifier was created. RESULTS: The results of the automatic classification were compared with results from visual inspection of the same time sequences. The automatic classification differed from the visual classification at about the same level as visual classifications performed by two different skilled professionals differed from each other. Classification of a second image set, consisting of seven image series with different concentrations of agonist, showed that the classifier could detect an increased proportion of activated cells at increased agonist concentration. CONCLUSIONS: Intracellular activities, such as ruffle formation, can be quantified by fully automatic image analysis with an accuracy comparable to that achieved by visual inspection. This analysis can be done at a speed of hundreds of cells per second and without the subjectivity introduced by manual judgments.


Subjects
Cytoplasm/metabolism; Image Cytometry/methods; Image Enhancement/methods; rac1 GTP-Binding Protein/biosynthesis; rac1 GTP-Binding Protein/classification; Animals; CHO Cells; Cell Membrane/metabolism; Cell Nucleus/drug effects; Cell Nucleus/metabolism; Cricetinae; Cricetulus; Cytoplasm/drug effects; Dose-Response Relationship, Drug; Green Fluorescent Proteins; Humans; Insulin/pharmacology; Insulin-Like Growth Factor I/pharmacology; Luminescent Proteins/genetics; Luminescent Proteins/metabolism; Reproducibility of Results; Transfection; rac1 GTP-Binding Protein/genetics
16.
Anal Cell Pathol ; 24(2-3): 101-11, 2002.
Article in English | MEDLINE | ID: mdl-12446959

ABSTRACT

Automatic cell segmentation has various applications in cytometry, and while the nucleus is often very distinct and easy to identify, the cytoplasm is considerably more challenging. A new combination of image analysis algorithms for segmentation of cells imaged by fluorescence microscopy is presented. The algorithm consists of an image pre-processing step and a general segmentation and merging step, followed by a segmentation quality measurement. The quality measurement consists of a statistical analysis of a number of shape-descriptive features. Objects with features that differ from those of correctly segmented single cells can be further processed by a splitting step. This statistical analysis thus provides a feedback system for the separation of clustered cells. After the segmentation is completed, the quality of the final segmentation is evaluated. By training the algorithm on a representative set of training images, it is made fully automatic for subsequent images created under similar conditions. Automatic cytoplasm segmentation was tested on CHO cells stained with calcein. The fully automatic method showed between 89% and 97% correct segmentation compared to manual segmentation.
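The shape-feature feedback step described above can be sketched as a simple rule: objects whose shape descriptors deviate from single-cell statistics are sent to the splitting step. Circularity and the 0.6 cut-off below are illustrative assumptions, not the paper's trained statistical model, which combines several shape-descriptive features.

```python
# Flag segmented objects that are likely clusters of cells, based on shape.
import numpy as np

def circularity(area, perimeter):
    """4*pi*area / perimeter^2: equals 1.0 for a perfect disc."""
    return 4.0 * np.pi * area / perimeter ** 2

def needs_splitting(area, perimeter, cutoff=0.6):
    """Objects far less circular than a lone cell go to the splitting step."""
    return circularity(area, perimeter) < cutoff

# Disc of radius 10 vs. a dumbbell of two such discs (area and perimeter
# both roughly doubled), which halves the circularity to about 0.5.
print(needs_splitting(np.pi * 100, 2 * np.pi * 10))      # False: single cell
print(needs_splitting(2 * np.pi * 100, 4 * np.pi * 10))  # True: split it
```

Running the splitter only on flagged objects, and re-measuring afterwards, is what turns the quality measurement into the feedback loop the abstract describes.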


Subjects
Algorithms; Cytoplasm/ultrastructure; Flow Cytometry/methods; Image Processing, Computer-Assisted; Animals; Humans; Microscopy, Fluorescence/methods