Results 1 - 10 of 10
1.
PLoS One ; 19(4): e0302169, 2024.
Article in English | MEDLINE | ID: mdl-38687694

ABSTRACT

The current medical standard for setting an oral cancer (OC) diagnosis is histological examination of a tissue sample taken from the oral cavity. This process is time-consuming and more invasive than an alternative approach of acquiring a brush sample followed by cytological analysis. Using a microscope, skilled cytotechnologists are able to detect changes due to malignancy; however, introducing this approach into clinical routine is associated with challenges such as a lack of resources and experts. To design a trustworthy OC detection system that can assist cytotechnologists, we are interested in deep learning based methods that can reliably detect cancer, given only per-patient labels (thereby minimizing annotation bias), and also provide information regarding which cells are most relevant for the diagnosis (thereby enabling supervision and understanding). In this study, we compare two approaches suitable for OC detection and interpretation: (i) a conventional single instance learning (SIL) approach and (ii) a modern multiple instance learning (MIL) method. To facilitate systematic evaluation of the considered approaches, we introduce, in addition to a real OC dataset with patient-level ground truth annotations, a synthetic dataset, PAP-QMNIST. This dataset shares several properties of OC data, such as image size and a large and varied number of instances per bag, and may therefore act as a proxy model of a real OC dataset, while, in contrast to OC data, it offers reliable per-instance ground truth, as defined by design. PAP-QMNIST has the additional advantage of being visually interpretable for non-experts, which simplifies analysis of the behavior of the methods. For both OC and PAP-QMNIST data, we evaluate the performance of the methods using three different neural network architectures. Our study indicates, somewhat surprisingly, that on both synthetic and real data, the performance of the SIL approach is better than or equal to the performance of the MIL approach. Visual examination by a cytotechnologist indicates that the methods manage to identify cells which deviate from normality, including malignant cells as well as those suspicious for dysplasia. We share the code as open source.
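
As a rough illustration of the difference between the two learning setups discussed above, the following sketch contrasts a simple SIL-style aggregation of per-cell scores with attention-based MIL pooling. It is not the authors' implementation; the function names, the top-k rule, and the attention parameterization are assumptions made for the example.

```python
# Illustrative sketch, not the paper's implementation: two ways to turn
# per-cell scores/features into a per-patient (bag-level) prediction.
import numpy as np

def sil_bag_score(instance_probs, top_k=10):
    """Single instance learning: each cell is scored independently and the
    patient-level score aggregates the k most suspicious cells (assumed rule)."""
    top = np.sort(instance_probs)[-top_k:]
    return float(top.mean())

def mil_attention_pooling(instance_embeddings, V, w):
    """Attention-based MIL pooling (in the spirit of Ilse et al., 2018):
    a softmax over learned per-instance scores yields a single bag embedding."""
    scores = np.tanh(instance_embeddings @ V) @ w   # one score per instance
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                            # softmax attention weights
    return alpha @ instance_embeddings              # bag-level embedding

# Toy usage with random data standing in for one patient's brush sample.
rng = np.random.default_rng(0)
bag_probs = rng.random(500)                # 500 cells, per-cell malignancy scores
bag_emb = rng.normal(size=(500, 64))       # 500 cells, 64-d features
print(sil_bag_score(bag_probs))
print(mil_attention_pooling(bag_emb, rng.normal(size=(64, 16)), rng.normal(size=16)).shape)
```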


Subjects
Deep Learning; Mouth Neoplasms; Mouth Neoplasms/diagnosis; Mouth Neoplasms/pathology; Humans; Neural Networks, Computer
2.
Sci Rep ; 13(1): 18407, 2023 Oct 27.
Article in English | MEDLINE | ID: mdl-37891213

ABSTRACT

Mediastinal structure measurements are important for the radiologist's review of computed tomography pulmonary angiography (CTPA) examinations. In the reporting process, radiologists measure diameters, volumes, and organ densities for image quality assessment and risk stratification. However, manual measurement of these features is time-consuming. Here, we sought to develop a time-saving automated algorithm that can accurately detect, segment, and measure mediastinal structures in routine clinical CTPA examinations. In this study, 700 CTPA examinations were collected and annotated. Of these, a training set of 180 examinations was used to develop a fully automated deterministic algorithm. On the test set of 520 examinations, two radiologists validated the detection and segmentation performance quantitatively, and ground truth was annotated to validate the measurement performance. External validation was performed on 47 CTPAs from two independent datasets. The system had 86-100% detection and segmentation accuracy across the different tasks. The automatic measurements correlated well with those of the radiologist (Pearson's r = 0.68-0.99). Taken together, the fully automated algorithm accurately detected, segmented, and measured mediastinal structures in routine CTPA examinations with an adequate representation of common artifacts and medical conditions.
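
The agreement reported above can be quantified as in the following sketch, which computes Pearson's r between automated and radiologist measurements; the variable names and numeric values are invented for illustration.

```python
# Hedged sketch: quantifying agreement between automated and manual
# measurements with Pearson's r. Example values are made up.
import numpy as np
from scipy.stats import pearsonr

auto_diam = np.array([28.1, 31.4, 25.9, 35.2, 29.8])     # mm, algorithm
manual_diam = np.array([27.5, 32.0, 26.3, 34.6, 30.1])   # mm, radiologist

r, p_value = pearsonr(auto_diam, manual_diam)
print(f"Pearson's r = {r:.2f} (p = {p_value:.3f})")
```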


Subjects
Mediastinum; Trachea; Trachea/diagnostic imaging; Angiography; Algorithms; Tomography, X-Ray Computed/methods
3.
PLoS One ; 18(3): e0282432, 2023.
Article in English | MEDLINE | ID: mdl-36867617

ABSTRACT

We present INSPIRE, a top-performing general-purpose method for deformable image registration. INSPIRE combines distance measures that integrate intensity and spatial information with an elastic B-spline-based transformation model, and incorporates an inverse inconsistency penalization supporting symmetric registration performance. We introduce several theoretical and algorithmic solutions that provide high computational efficiency and thereby make the proposed framework applicable in a wide range of real scenarios. We show that INSPIRE delivers highly accurate, as well as stable and robust, registration results. We evaluate the method on a 2D dataset created from retinal images, characterized by the presence of networks of thin structures. Here INSPIRE exhibits excellent performance, substantially outperforming the widely used reference methods. We also evaluate INSPIRE on the Fundus Image Registration Dataset (FIRE), which consists of 134 pairs of separately acquired retinal images; here, too, it substantially outperforms several domain-specific methods. We further evaluate the method on four benchmark datasets of 3D magnetic resonance images of brains, for a total of 2088 pairwise registrations. A comparison with 17 other state-of-the-art methods reveals that INSPIRE provides the best overall performance. Code is available at github.com/MIDA-group/inspire.
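
A minimal sketch of the kind of symmetric objective described above, combining forward and backward image distances with an inverse inconsistency penalty, is given below. The warping scheme, the squared-difference distance, and the weight lam are assumptions for illustration, not the actual INSPIRE implementation (see github.com/MIDA-group/inspire for that).

```python
# Hedged sketch of a symmetric registration objective with an inverse
# inconsistency penalty; all details are illustrative.
import numpy as np

def warp(image, disp):
    """Toy nearest-neighbour warp of a 2D image by a dense displacement field."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    yy = np.clip(np.rint(ys + disp[..., 0]).astype(int), 0, h - 1)
    xx = np.clip(np.rint(xs + disp[..., 1]).astype(int), 0, w - 1)
    return image[yy, xx]

def warp_field(field, disp):
    """Resample each component of a displacement field with the same toy warp."""
    return np.stack([warp(field[..., k], disp) for k in range(2)], axis=-1)

def symmetric_loss(fixed, moving, fwd, inv, lam=0.1):
    """Forward and backward intensity distances plus a penalty on the
    composition of forward and inverse transforms deviating from identity."""
    d_fwd = np.mean((warp(moving, fwd) - fixed) ** 2)
    d_bwd = np.mean((warp(fixed, inv) - moving) ** 2)
    inconsistency = np.mean((fwd + warp_field(inv, fwd)) ** 2)
    return d_fwd + d_bwd + lam * inconsistency
```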


Subjects
Brain; Image Processing, Computer-Assisted; Retina; Brain/diagnostic imaging; Fundus Oculi; Humans; Retina/diagnostic imaging
4.
ArXiv ; 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36945686

ABSTRACT

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

5.
PLoS One ; 17(11): e0276196, 2022.
Article in English | MEDLINE | ID: mdl-36441754

ABSTRACT

Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a potentially easier monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of rigid registration of multimodal biomedical and medical 2D and 3D images. We compare the performance of four Generative Adversarial Network (GAN)-based I2I translation methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on four publicly available multimodal (2D and 3D) datasets and compare with the performance of registration achieved by several well-known approaches acting directly on multimodal image data. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities which express distinctly different properties of the sample is not well handled by the I2I translation approach. The evaluated representation learning method, which aims to find abstract image-like representations of the information shared between the modalities, manages better, and so does the Mutual Information maximisation approach, acting directly on the original multimodal images. We share our complete experimental setup as open source (https://github.com/MIDA-group/MultiRegEval), including method implementations, evaluation code, and all datasets, for further reproduction and benchmarking.
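
For reference, the mutual information measure mentioned above can be estimated from a joint intensity histogram as in the following sketch; the bin count and implementation details are illustrative rather than those used in the study.

```python
# Hedged sketch: mutual information between two images from a joint histogram,
# the kind of measure used for direct multimodal registration.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```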


Subjects
Benchmarking; Translations; Empirical Research; Image Processing, Computer-Assisted
6.
Patterns (N Y) ; 1(3): 100040, 2020 Jun 12.
Article in English | MEDLINE | ID: mdl-33205108

ABSTRACT

Image analysis is key to extracting quantitative information from scientific microscopy images, but the methods involved are now often so refined that they can no longer be unambiguously described by written protocols. We introduce BIAFLOWS, an open-source web tool for reproducibly deploying and benchmarking bioimage analysis workflows coming from any software ecosystem. A curated instance of BIAFLOWS populated with 34 image analysis workflows and 15 microscopy image datasets recapitulating common bioimage analysis problems is available online. The workflows can be launched and assessed remotely by comparing their performance visually and according to standard benchmark metrics. We illustrate these features by comparing seven nuclei segmentation workflows, including deep-learning methods. BIAFLOWS makes it possible to benchmark and share bioimage analysis workflows, hence safeguarding research results and promoting high-quality standards in image analysis. The platform is thoroughly documented and ready to gather annotated microscopy datasets and workflows contributed by the bioimaging community.
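
As an example of the kind of standard benchmark metric such a platform can report for segmentation workflows, the sketch below computes the Dice coefficient between a predicted and a ground-truth mask; it is illustrative and not tied to the BIAFLOWS implementation.

```python
# Hedged sketch: Dice similarity, a common benchmark metric for
# nuclei segmentation workflows. Purely illustrative.
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice similarity between two binary masks of the same shape."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0
```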

7.
Sci Data ; 7(1): 169, 2020 Jun 05.
Article in English | MEDLINE | ID: mdl-32503988

ABSTRACT

Modern histopathology workflows rely on the digitization of histology slides. The quality of the resulting digital representations, in the form of histology slide image mosaics, depends on various specific acquisition conditions and on the image processing steps that underlie the generation of the final mosaic, e.g. registration and blending of the contained image tiles. We introduce HISTOBREAST, an extensive collection of brightfield microscopy images that we collected in a principled manner under different acquisition conditions on Haematoxylin-Eosin (H&E) stained breast tissue. HISTOBREAST comprises neighbouring image tiles and an ensemble of mosaics composed from different combinations of the available image tiles, exhibiting progressively degraded quality levels. HISTOBREAST can be used to benchmark image processing and computer vision techniques with respect to their robustness to image modifications specific to brightfield microscopy of H&E stained tissues. Furthermore, HISTOBREAST can serve in the development of new image processing methods, with the purpose of ensuring robustness to typical image artefacts that raise interpretation problems for expert histopathologists and affect the results of computerized image analysis.


Subjects
Breast/diagnostic imaging; Image Processing, Computer-Assisted/methods; Microscopy/methods; Eosine Yellowish-(YS); Female; Hematoxylin; Humans; Software
8.
F1000Res ; 9: 613, 2020.
Article in English | MEDLINE | ID: mdl-32595963

ABSTRACT

We introduce the NEUBIAS Gateway, a new platform for publishing materials related to bioimage analysis, an interdisciplinary field bridging computer science and the life sciences. This emerging field has lacked a central place to share the efforts of the growing group of scientists addressing biological questions using image data. The Gateway welcomes a wide range of publication formats, including articles, reviews, reports, and training materials. We hope the Gateway helps this important field grow and enables more biologists and computational scientists to learn about and contribute to these efforts.


Subjects
Biological Science Disciplines; Image Interpretation, Computer-Assisted; Informatics; Publishing; Interdisciplinary Research
9.
Article in English | MEDLINE | ID: mdl-30794174

ABSTRACT

Intensity-based image registration approaches rely on similarity measures to guide the search for geometric correspondences with high affinity between images. The properties of the measure used are vital for the robustness and accuracy of the registration. In this study, a symmetric, intensity-interpolation-free, affine registration framework based on a combination of intensity and spatial information is proposed. The excellent performance of the framework is demonstrated on a combination of synthetic tests, recovering known transformations in the presence of noise, and real applications in biomedical and medical image registration, for both 2D and 3D images. The method exhibits greater robustness and higher accuracy than similarity measures in common use when inserted into a standard gradient-based registration framework available as part of the open source Insight Segmentation and Registration Toolkit (ITK). The method is also empirically shown to have a low computational cost, making it practical for real applications. Source code is available.
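
For context, a standard gradient-based affine registration of the kind ITK provides can be set up as in the following SimpleITK sketch. It uses an ordinary mean-squares metric as a stand-in; the intensity-and-spatial measure proposed in the abstract is not part of the toolkit and is not shown here, and the file names are placeholders.

```python
# Hedged sketch of a generic gradient-based affine registration in SimpleITK,
# with a standard metric standing in for the paper's proposed measure.
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)    # placeholder paths
moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()                                 # stand-in similarity measure
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(fixed.GetDimension()),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

final_transform = reg.Execute(fixed, moving)
resampled = sitk.Resample(moving, fixed, final_transform, sitk.sitkLinear, 0.0)
```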

10.
IEEE Trans Image Process ; 23(1): 126-36, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24158476

ABSTRACT

We present four novel point-to-set distances defined for fuzzy or gray-level image data, two based on integration over α-cuts and two based on the fuzzy distance transform, and explore their theoretical properties. Inserting the proposed point-to-set distances into existing definitions of set-to-set distances, among which are the Hausdorff distance and the sum of minimal distances, we define a number of distances between fuzzy sets. These set distances are directly applicable for comparing gray-level images or fuzzy segmented objects, but also for detecting patterns and matching parts of images. The distance measures integrate shape and intensity/membership of observed entities, providing a highly applicable tool for image processing and analysis. We evaluate the performance of the derived set distances in real image processing tasks. The considered distances have a number of appealing theoretical properties and exhibit very good performance in template matching and object classification for fuzzy segmented images, as well as when applied directly to gray-level intensity images. Examples include recognition of handwritten digits and identification of virus particles. The proposed set distances perform excellently on the MNIST digit classification task, achieving the best reported error rate for classification using only rigid body transformations and a kNN classifier.
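
A possible discretized reading of the construction described above, a point-to-set distance obtained by averaging crisp distances over α-cuts and then inserted into a sum-of-minimal-distances style set distance, is sketched below. It is an illustrative, unoptimized simplification under assumed parameters, not the authors' exact definitions.

```python
# Hedged sketch: alpha-cut-based point-to-fuzzy-set distance and a simple
# set-to-set distance built on top of it. Illustrative only.
import numpy as np
from scipy.ndimage import distance_transform_edt

def point_to_fuzzy_set_distance(point, membership, alphas=np.linspace(0.05, 1.0, 20)):
    """Average, over alpha levels, of the Euclidean distance from `point`
    to the alpha-cut {x : membership(x) >= alpha}."""
    dists = []
    for a in alphas:
        cut = membership >= a
        if not cut.any():
            continue
        dt = distance_transform_edt(~cut)   # distance to the nearest cut pixel
        dists.append(dt[point])
    return float(np.mean(dists)) if dists else np.inf

def sum_of_point_to_set_distances(membership_a, membership_b, threshold=0.5):
    """Symmetric set distance: sum the point-to-set distances from the
    (thresholded) support of each fuzzy set to the other fuzzy set."""
    pts_a = list(zip(*np.nonzero(membership_a >= threshold)))
    pts_b = list(zip(*np.nonzero(membership_b >= threshold)))
    d_ab = sum(point_to_fuzzy_set_distance(p, membership_b) for p in pts_a)
    d_ba = sum(point_to_fuzzy_set_distance(p, membership_a) for p in pts_b)
    return d_ab + d_ba
```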


Subjects
Algorithms; Fuzzy Logic; Image Interpretation, Computer-Assisted/methods; Linear Models; Pattern Recognition, Automated/methods; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity