Results 1 - 20 of 42
1.
J Cell Sci ; 135(7)2022 04 01.
Article in English | MEDLINE | ID: mdl-35420128

ABSTRACT

For the past century, the nucleus has been the focus of extensive investigations in cell biology. However, many questions remain about how its shape and size are regulated during development, in different tissues, or during disease and aging. To track these changes, microscopy has long been the tool of choice. Image analysis has revolutionized this field of research by providing computational tools that can be used to translate qualitative images into quantitative parameters. Many tools have been designed to delimit objects in 2D and, eventually, in 3D in order to define their shapes, their number or their position in nuclear space. Today, the field is driven by deep-learning methods, most of which take advantage of convolutional neural networks. These techniques are remarkably adapted to biomedical images when trained using large datasets and powerful computer graphics cards. To promote these innovative and promising methods to cell biologists, this Review summarizes the main concepts and terminologies of deep learning. Special emphasis is placed on the availability of these methods. We highlight why the quality and characteristics of training image datasets are important and where to find them, as well as how to create, store and share image datasets. Finally, we describe deep-learning methods well-suited for 3D analysis of nuclei and classify them according to their level of usability for biologists. Out of more than 150 published methods, we identify fewer than 12 that biologists can use, and we explain why this is the case. Based on this experience, we propose best practices to share deep-learning methods with biologists.


Subjects
Deep Learning , Cell Nucleus , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional , Microscopy/methods , Neural Networks, Computer
2.
Microsc Microanal ; : 1-14, 2022 Apr 07.
Article in English | MEDLINE | ID: mdl-35387704

ABSTRACT

In vivo transparent vessel segmentation is important for life science research. However, this task remains very challenging because of the fuzzy edges and the barely noticeable tubular characteristics of vessels under a light microscope. In this paper, we present a new machine learning method based on blood flow characteristics to segment the global vascular structure in vivo. Specifically, videos of blood flow in transparent vessels are used as input. We use a machine learning classifier to label vessel pixels from motion features extracted from moving red blood cells, and achieve vessel segmentation with a region-growing algorithm. Moreover, we utilize the movement characteristics of blood flow to distinguish between types of vessels, including arteries, veins, and capillaries. In the experiments, we evaluate the performance of our method on videos of zebrafish embryos. The experimental results show high vessel-segmentation accuracy, with an average accuracy of 97.98%, substantially higher than that of other segmentation or motion-detection algorithms. Our method is also robust to input videos with various time resolutions, down to a minimum of 3.125 fps.
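The pipeline this abstract describes — temporal motion features from flowing red blood cells, a per-pixel classifier, then region growing — can be sketched in simplified form. This is an illustrative reconstruction, not the authors' code: plain temporal standard deviation stands in for their learned motion features, and a fixed threshold replaces the trained classifier.

```python
import numpy as np
from collections import deque

def motion_map(frames):
    # Temporal standard deviation per pixel: pixels crossed by moving
    # red blood cells fluctuate over time; static background does not.
    return np.stack(frames).astype(float).std(axis=0)

def region_grow(seed, mask):
    # 4-connected region growing over a boolean "vessel candidate" mask.
    h, w = mask.shape
    visited = np.zeros_like(mask, dtype=bool)
    queue, region = deque([seed]), []
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or visited[y, x] or not mask[y, x]:
            continue
        visited[y, x] = True
        region.append((y, x))
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return region
```

Growing from a seed inside a vessel then yields the connected vascular structure; vessel-type labels (artery/vein/capillary) would come from further statistics of the motion signal.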

3.
BMC Bioinformatics ; 22(Suppl 3): 327, 2021 Jun 15.
Article in English | MEDLINE | ID: mdl-34130623

ABSTRACT

BACKGROUND: Proteins are of vital importance in the human body, and no movement or activity can be performed without them. Currently, rapidly developing microscopy imaging technologies are employed to observe proteins in various cells and tissues. Owing to the complex and crowded cellular environment and to the various types and sizes of proteins, a considerable number of protein images are generated every day and cannot be classified manually. Therefore, an automatic and accurate method should be designed to properly classify and analyse protein images with mixed patterns. RESULTS: In this paper, we first propose a novel customized architecture with adaptive concatenate pooling and "buffering" layers in the classifier part, which makes the network more adaptive to the training and testing datasets, and we develop a novel hard sampler at the end of our network to effectively mine samples from small classes. Furthermore, a new loss is presented to handle the label imbalance based on the effectiveness of samples. In addition, several novel and effective optimization strategies are adopted to solve the difficult training-time optimization problem, and accuracy is further increased by post-processing. CONCLUSION: Our methods outperformed GapNet-PL, the state-of-the-art method for multi-labelled protein classification on the HPA dataset, by more than 2% in F1 score. Experimental results on the test set split from the Human Protein Atlas dataset thus show that our methods perform well in automatically classifying multi-class, multi-labelled high-throughput microscopy protein images.


Subjects
Microscopy , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted , Proteins
4.
Sensors (Basel) ; 21(24)2021 Dec 13.
Article in English | MEDLINE | ID: mdl-34960422

ABSTRACT

With the development of light microscopy, it is becoming increasingly easy to obtain detailed multicolor fluorescence volumetric data, and appropriate visualization has become an integral part of fluorescence imaging. Virtual reality (VR) technology provides a new way of visualizing multidimensional image data or models, so that the entire 3D structure can be observed intuitively, together with object features or details on or within the object. As imaged volumetric data become more complex, demands on the control of virtual object properties increase, especially for multicolor objects obtained by fluorescence microscopy. Existing solutions based on universal VR controllers, or software-based controllers that require sufficient free space for the user to manipulate data in VR, are not usable in many practical applications. We therefore developed a custom gesture-based VR control system, built around a multitouch sensor disk, with a custom controller connected to the FluoRender visualization environment. Our control system may be a good choice for easier and more comfortable manipulation of virtual objects and their properties, especially with confocal microscopy, so far the most widely used technique for acquiring volumetric fluorescence data.


Subjects
Gestures , Virtual Reality , Microscopy, Confocal , Software
5.
Microsc Microanal ; 26(3): 387-396, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32241318

ABSTRACT

Fiber length has a strong impact on the mechanical properties of composite materials and is one of the most important quantitative features in characterizing microstructures for understanding material performance. Studies on determining fiber length distribution have primarily focused on sample preparation and fiber dispersion, while the subsequent image analysis is frequently performed manually or semi-automatically, requiring either careful sample preparation or manual intervention in image analysis and processing. In this article, an image processing and analysis method is developed, based on medial axis transformation via the multi-stencil fast marching method, for fiber length measurement on acquired microscopy images. The method runs fully automatically, without any user-induced delays, and offers high efficiency, sub-pixel accuracy, and excellent statistical representativeness.
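For intuition, once a fiber's medial axis (skeleton) has been extracted, its length can be read off by summing pixel-to-pixel links. The paper's multi-stencil fast marching yields sub-pixel accuracy; this link-counting sketch is only a coarse stand-in:

```python
import numpy as np

def skeleton_length(skel):
    # Sum link lengths over an 8-connected binary skeleton:
    # orthogonal neighbours contribute 1, diagonal neighbours sqrt(2).
    ys, xs = np.nonzero(skel)
    pts = set(zip(ys.tolist(), xs.tolist()))
    length = 0.0
    for (y, x) in pts:
        # Only "forward" directions, so each link is counted exactly once.
        for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
            if (y + dy, x + dx) in pts:
                length += (dy * dy + dx * dx) ** 0.5
    return length
```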

6.
BMC Bioinformatics ; 20(1): 472, 2019 Sep 14.
Article in English | MEDLINE | ID: mdl-31521104

ABSTRACT

BACKGROUND: Nucleus detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored to specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually train and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed. RESULTS: We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that, for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs may not deliver desirable results and may require model fine-tuning to be on a par with models trained on target data. We also observe that training with a mixture of target and other/non-target data does not always yield higher nucleus-detection accuracy, and proper data manipulation during model training may be needed to achieve good performance. CONCLUSIONS: We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report several significant findings, some of which might not have been reported in previous studies. The model performance analysis and observations should be helpful for nucleus detection in microscopy images.
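A regression network of the kind evaluated here outputs a proximity (density) map whose local maxima correspond to nucleus centers. The peak-picking post-processing might look like this — an illustrative sketch with assumed `min_distance` and `threshold` parameters, not the paper's exact procedure:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_peaks(density, min_distance=3, threshold=0.5):
    # A pixel is a detection if it equals the maximum of its local
    # neighbourhood and its response exceeds the threshold.
    local_max = maximum_filter(density, size=2 * min_distance + 1)
    mask = (density == local_max) & (density > threshold)
    return list(zip(*np.nonzero(mask)))
```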


Subjects
Image Interpretation, Computer-Assisted/methods , Microscopy/methods , Neural Networks, Computer , Humans
7.
BMC Bioinformatics ; 19(1): 365, 2018 Oct 03.
Article in English | MEDLINE | ID: mdl-30285608

ABSTRACT

BACKGROUND: Automatic and reliable characterization of cells in cell cultures is key to applications such as cancer research and drug discovery. Given recent advances in light microscopy and the need for accurate, high-throughput analysis, automated algorithms have been developed for segmenting and analyzing the cells in microscopy images. Nevertheless, accurate, generic and robust whole-cell segmentation remains a pressing need for precisely quantifying cells' morphological properties, phenotypes and sub-cellular dynamics. RESULTS: We present a single-channel whole-cell segmentation algorithm. We use markers that stain the whole cell, but with less staining in the nucleus, and without using a separate nuclear stain. We show the utility of our approach on microscopy images of cell cultures in a wide variety of conditions. Our algorithm uses a deep learning approach to learn and predict the locations of cells and their nuclei, and combines that with thresholding and watershed-based segmentation. We trained and validated our approach using different sets of images, containing cells stained with various markers and imaged at different magnifications. Our approach achieved an 86% similarity to ground truth segmentation when identifying and separating cells. CONCLUSIONS: The proposed algorithm automatically segments cells from single-channel images across a variety of markers and magnifications.
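The thresholding and watershed stage that the abstract combines with deep-learning predictions can be illustrated with a classical distance-transform watershed. This SciPy sketch is generic, not the paper's implementation, and the peak-window size is an assumption:

```python
import numpy as np
from scipy import ndimage as ndi

def split_touching_cells(binary):
    # Separate touching cells: peaks of the distance transform seed a
    # watershed over the inverted distance map.
    dist = ndi.distance_transform_edt(binary)
    peaks = (dist == ndi.maximum_filter(dist, size=7)) & (dist > 0)
    markers, _ = ndi.label(peaks)
    inverted = (dist.max() - dist).astype(np.uint16)
    labels = ndi.watershed_ift(inverted, markers)
    labels[~binary] = 0          # keep only the foreground labels
    return labels
```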


Subjects
Microscopy/methods , Algorithms , Humans
8.
Heliyon ; 10(3): e25367, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38327447

ABSTRACT

Water quality can be negatively affected by the presence of some toxic phytoplankton species, whose toxins are difficult to remove with conventional purification systems. This creates the need for periodic analyses, which are nowadays performed manually by experts. These labor-intensive processes are affected by subjectivity and expertise, causing unreliability. Some automatic systems have been proposed to address these limitations, but most are based on classical image processing pipelines whose designs do not scale easily. In this context, deep learning techniques are better suited to the detection and recognition of phytoplankton specimens in multi-specimen microscopy images, as they integrate both tasks in a single end-to-end trainable module that can automatically adapt to such a complex domain. In this work, we explore two object detectors, Faster R-CNN and RetinaNet, from the two-stage and one-stage paradigms, respectively. We use a dataset of multi-specimen microscopy images captured using a systematic protocol, which allows the use of widely available optical microscopes and avoids manual per-specimen adjustments that would require expert knowledge. We have made our dataset publicly available to improve reproducibility and to foster the development of new alternatives in the field. The selected Faster R-CNN methodology reaches maximum recall levels of 95.35%, 84.69%, and 79.81%, and precisions of 94.68%, 89.30% and 82.61%, for W. naegeliana, A. spiroides, and D. sociale, respectively. The system adapts to the difficulties of the dataset and improves on the reference state-of-the-art work overall, while also increasing automation and abstraction from the domain and simplifying the workflow and adjustment.
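Recall and precision figures such as those quoted above come from matching predicted boxes to ground-truth boxes at an intersection-over-union (IoU) threshold. A minimal sketch of that evaluation logic — illustrative only, with predictions assumed pre-sorted by confidence:

```python
def box_iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, iou_thr=0.5):
    # Greedy one-to-one matching of predictions to ground-truth boxes.
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            if i not in matched and box_iou(p, g) >= best_iou:
                best, best_iou = i, box_iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    return tp / len(preds), tp / len(gts)
```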

9.
J Neurosci Methods ; 411: 110273, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39197681

ABSTRACT

BACKGROUND: The segmentation of cells and neurites in microscopy images of neuronal networks provides valuable quantitative information about neuron growth and neuronal differentiation, including the number of cells, neurites, neurite length and neurite orientation. This information is essential for assessing the development of neuronal networks in response to extracellular stimuli, and is useful for studying neuronal structures, for example in research on neurodegenerative diseases and pharmaceuticals. NEW METHOD: We have developed NeuroQuantify, an open-source software package that uses deep learning to segment cells and neurites in phase contrast microscopy images quickly and efficiently. RESULTS: NeuroQuantify offers several key features: (i) automatic detection of cells and neurites; (ii) post-processing of the images for quantitative neurite length measurement based on segmentation of phase contrast microscopy images; and (iii) identification of neurite orientations. COMPARISON WITH EXISTING METHODS: NeuroQuantify overcomes some of the limitations of existing methods in the automatic and accurate analysis of neuronal structures. It has been developed for phase contrast images rather than fluorescence images. In addition to the typical functionality of cell counting, NeuroQuantify also detects and counts neurites, measures neurite lengths, and produces the neurite orientation distribution. CONCLUSIONS: We offer a valuable tool to assess network development rapidly and effectively. The user-friendly NeuroQuantify software can be freely downloaded and installed from GitHub at https://github.com/StanleyZ0528/neural-image-segmentation.
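For intuition, one simple way to assign an orientation to a segmented neurite is the principal axis of its mask pixels. This PCA-based sketch is hypothetical and unrelated to NeuroQuantify's actual implementation:

```python
import numpy as np

def dominant_orientation(mask):
    # Principal axis of the segmented pixels, in degrees from the x-axis,
    # folded into [0, 180) since a neurite has no preferred direction.
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)
    cov = coords.T @ coords / len(coords)
    vals, vecs = np.linalg.eigh(cov)
    vx, vy = vecs[:, np.argmax(vals)]   # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(vy, vx)) % 180
```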


Subjects
Deep Learning , Image Processing, Computer-Assisted , Neurites , Neurons , Software , Neurites/physiology , Neurons/cytology , Neurons/physiology , Image Processing, Computer-Assisted/methods , Animals , Microscopy, Phase-Contrast/methods , Humans
10.
Front Microbiol ; 15: 1255850, 2024.
Article in English | MEDLINE | ID: mdl-38533330

ABSTRACT

Data-driven Artificial Intelligence (AI)/Machine Learning (ML) image analysis approaches have gained a lot of momentum in analyzing microscopy images in bioengineering, biotechnology, and medicine. The success of these approaches relies crucially on the availability of high-quality microscopy images, which is often a challenge due to the diverse experimental conditions and modes under which these images are obtained. In this study, we propose the use of recent ML-based image super-resolution (SR) techniques to improve the quality of microscopy images, incorporate them into multiple ML-based image analysis tasks, and describe a comprehensive study investigating the impact of SR techniques on the segmentation of microscopy images. The impacts of four Generative Adversarial Network (GAN)- and transformer-based SR techniques on microscopy image quality are measured using three well-established quality metrics. These SR techniques are incorporated into multiple deep network pipelines using supervised, contrastive, and non-contrastive self-supervised methods to semantically segment microscopy images from multiple datasets. Our results show that the quality of microscopy images has a direct influence on ML model performance, and that both supervised and self-supervised network pipelines using SR images perform 2%-6% better than baselines not using SR. Based on our experiments, we also establish that the image quality improvement threshold range [20-64] for the complemented Perception-based Image Quality Evaluator (PIQE) metric can be used by domain experts as a pre-condition for incorporating SR techniques to significantly improve segmentation performance. A plug-and-play software platform developed to integrate SR techniques with various deep networks using supervised and self-supervised learning methods is also presented.

11.
Med Image Anal ; 97: 103227, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38897031

ABSTRACT

Automatic tracking of viral and intracellular structures displayed as spots with varying sizes in fluorescence microscopy images is an important task to quantify cellular processes. We propose a novel probabilistic tracking approach for multiple particle tracking based on multi-detector and multi-scale data fusion as well as Bayesian smoothing. The approach integrates results from multiple detectors using a novel intensity-based covariance intersection method which takes into account information about the image intensities, positions, and uncertainties. The method ensures a consistent estimate of multiple fused particle detections and does not require an optimization step. Our probabilistic tracking approach performs data fusion of detections from classical and deep learning methods as well as exploits single-scale and multi-scale detections. In addition, we use Bayesian smoothing to fuse information of predictions from both past and future time points. We evaluated our approach using image data of the Particle Tracking Challenge and achieved state-of-the-art results or outperformed previous methods. Our method was also assessed on challenging live cell fluorescence microscopy image data of viral and cellular proteins expressed in hepatitis C virus-infected cells and chromatin structures in non-infected cells, acquired at different spatial-temporal resolutions. We found that the proposed approach outperforms existing methods.
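Covariance intersection, the basis of the fusion step, combines two estimates whose cross-correlation is unknown while keeping the fused covariance consistent. A minimal sketch of the classical formulation with a fixed weight `w` (the paper's intensity-based variant derives the weighting from image intensities):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=0.5):
    # Fuse two estimates (x1, P1) and (x2, P2) without knowing their
    # cross-correlation: convex combination in information (inverse-covariance) form.
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1 - w) * I2)
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
    return x, P
```

With equal covariances and `w = 0.5`, the fused state is the midpoint of the two estimates and the fused covariance does not shrink, which is what makes the estimate conservative.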


Subjects
Bayes Theorem , Chromatin , Microscopy, Fluorescence , Microscopy, Fluorescence/methods , Humans , Hepacivirus , Algorithms , Image Processing, Computer-Assisted/methods , Deep Learning
12.
J Microsc ; 252(2): 149-58, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23962006

ABSTRACT

Automated tracking of cell populations is crucial for quantitative measurement of the dynamic cell-cycle behaviour of individual cells. This problem involves several subproblems, and high accuracy at each step is essential to avoid error propagation. In this paper, we propose a holistic three-component system to tackle it. For each phase, we first learn a mean shape as well as a model of the temporal dynamics of transformation, which are used to estimate a shape prior for the cell in the current frame. We then segment the cell using a level set-based shape prior model. Finally, we identify its phase based on the goodness-of-fit of the data to the segmentation model; this phase information is also used to fine-tune the segmentation result. We evaluate the performance of our method empirically in various aspects and in tracking individual cells from a HeLa H2B-GFP cell population. Highly accurate validation results confirm the robustness of our method in many realistic scenarios and the essential role of each component of our integrated system.


Subjects
Cell Tracking/methods , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Algorithms , Cell Cycle , Cell Line, Tumor , Green Fluorescent Proteins , HeLa Cells , Humans
13.
Math Biosci Eng ; 20(9): 16259-16278, 2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37920012

ABSTRACT

Cell segmentation from fluorescence microscopy images plays an important role in applications such as disease-mechanism assessment and drug discovery research. Existing segmentation methods often adopt image binarization as the first step, separating the foreground cells from the background so that subsequent processing steps are greatly facilitated. To this end, histogram thresholding can be performed on the input image: a Gaussian smoothing first suppresses the jaggedness of the histogram curve, and Rosin's method then determines a threshold for image binarization. However, an inappropriate amount of smoothing can lead to inaccurate segmentation of cells. To address this crucial problem, a multi-scale histogram thresholding (MHT) technique is proposed in the present paper, where the scale refers to the standard deviation of the Gaussian that determines the amount of smoothing. Specifically, the image histogram is smoothed at three chosen scales, and the smoothed histogram curves are then fused to conduct image binarization via thresholding. To further improve segmentation accuracy and overcome the difficulty of extracting overlapping cells, the proposed MHT technique is incorporated into a multi-scale cell segmentation framework, in which a region-based ellipse fitting technique identifies overlapping cells. Extensive experimental results on benchmark datasets show that the new method delivers superior performance compared to the current state of the art.
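Rosin's unimodal thresholding, on which the multi-scale scheme builds, places the threshold at the histogram bin farthest from the straight line joining the histogram peak to the last non-empty bin. A compact sketch of that step (the paper additionally smooths the histogram at three Gaussian scales and fuses the results):

```python
import numpy as np

def rosin_threshold(hist):
    # Line from the histogram peak to the last non-empty bin; the threshold
    # is the bin with maximum perpendicular distance to that line.
    p = int(np.argmax(hist))
    last = int(np.nonzero(hist)[0][-1])
    x1, y1, x2, y2 = p, hist[p], last, hist[last]
    xs = np.arange(p, last + 1)
    # Perpendicular distance of each point (x, hist[x]) to the line.
    num = np.abs((y2 - y1) * xs - (x2 - x1) * hist[p:last + 1] + x2 * y1 - y2 * x1)
    den = np.hypot(y2 - y1, x2 - x1)
    return int(xs[np.argmax(num / den)])
```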

14.
Phys Med Biol ; 68(2)2023 01 13.
Article in English | MEDLINE | ID: mdl-36577141

ABSTRACT

Objective. Corneal confocal microscopy (CCM) image analysis is a non-invasive in vivo clinical technique that can quantify corneal nerve fiber damage. However, the acquired CCM images are often accompanied by speckle noise and nonuniform illumination, which seriously affect the analysis and diagnosis of disease. Approach. In this paper, we first propose a variational Retinex model for inhomogeneity correction and noise removal in CCM images. In this model, the Beppo Levi space is introduced for the first time to constrain the smoothness of the illumination layer, and a fractional-order differential is adopted as the regularization term to constrain the reflectance layer. A denoising regularization term is also constructed with Block Matching 3D (BM3D) to suppress noise. Finally, by adjusting the uneven illumination layer, we obtain the final results. Second, an image quality evaluation metric is proposed to objectively evaluate the illumination uniformity of images. Main results. To demonstrate the effectiveness of our method, we tested it on 628 low-quality CCM images from the CORN-2 dataset. Extensive experiments show that the proposed method outperforms the other four related methods in terms of noise removal and uneven illumination suppression. Significance. This suggests that the proposed method may be helpful for the diagnosis and analysis of eye diseases.


Subjects
Image Processing, Computer-Assisted , Lighting , Image Processing, Computer-Assisted/methods , Microscopy, Confocal/methods , Nerve Fibers , Noise
15.
Med Image Anal ; 90: 102969, 2023 12.
Article in English | MEDLINE | ID: mdl-37802010

ABSTRACT

Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data is limited and previous work seldom delves into cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.


Subjects
Histological Techniques , Learning , Humans , Microscopy , Neural Networks, Computer , Staining and Labeling , Image Processing, Computer-Assisted
16.
Med Image Anal ; 86: 102768, 2023 05.
Article in English | MEDLINE | ID: mdl-36857945

ABSTRACT

While Generative Adversarial Networks (GANs) can now reliably produce realistic images in a multitude of imaging domains, they are ill-equipped to model thin, stochastic textures present in many large 3D fluorescent microscopy (FM) images acquired in biological research. This is especially problematic in neuroscience where the lack of ground truth data impedes the development of automated image analysis algorithms for neurons and neural populations. We therefore propose an unpaired mesh-to-image translation methodology for generating volumetric FM images of neurons from paired ground truths. We start by learning unique FM styles efficiently through a Gramian-based discriminator. Then, we stylize 3D voxelized meshes of previously reconstructed neurons by successively generating slices. As a result, we effectively create a synthetic microscope and can acquire realistic FM images of neurons with control over the image content and imaging configurations. We demonstrate the feasibility of our architecture and its superior performance compared to state-of-the-art image translation architectures through a variety of texture-based metrics, unsupervised segmentation accuracy, and an expert opinion test. In this study, we use 2 synthetic FM datasets and 2 newly acquired FM datasets of retinal neurons.
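The Gramian-based discriminator relies on Gram matrices of feature maps, which summarize texture statistics while discarding spatial layout. The underlying statistic is simple (a sketch of the quantity only; the paper's discriminator wraps it in a learned network):

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H, W) feature maps. The Gram matrix holds the
    # channel-by-channel correlations, normalized by spatial size.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)
```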


Subjects
Microscopy , Surgical Mesh , Humans , Imaging, Three-Dimensional/methods , Image Processing, Computer-Assisted/methods , Neurons
17.
Neural Netw ; 167: 810-826, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37738716

ABSTRACT

Self-supervised pre-training has become the preferred choice for establishing reliable neural networks for automated recognition of massive biomedical microscopy images, which are routinely annotation-free, without semantics, and without guarantee of quality. This paradigm is still in its infancy and is limited by closely related open issues: (1) how to learn robust representations in an unsupervised manner from unlabeled biomedical microscopy images of low sample diversity? and (2) how to obtain the most significant representations demanded by high-quality segmentation? Aiming at these issues, this study proposes a knowledge-based learning framework (TOWER) for enhanced recognition of biomedical microscopy images, which works in three phases by synergizing contrastive learning and generative learning methods: (1) Sample Space Diversification: reconstructive proxy tasks embed a priori knowledge, with context highlighted, to diversify the expanded sample space; (2) Enhanced Representation Learning: an informative noise-contrastive estimation loss regularizes the encoder to enhance representation learning from annotation-free images; (3) Correlated Optimization: the optimization of the encoder and the decoder during pre-training is correlated via image restoration from the proxy tasks, targeting the needs of semantic segmentation. Experiments on public datasets of biomedical microscopy images against state-of-the-art counterparts (e.g., SimCLR and BYOL) demonstrate that TOWER statistically outperforms all compared self-supervised methods, achieving a Dice improvement of 1.38 percentage points over SimCLR. TOWER also shows potential in multi-modality medical image analysis and enables label-efficient semi-supervised learning, e.g., reducing the annotation cost by up to 99% in pathological classification.
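The "informative noise-contrastive estimation loss" in phase (2) belongs to the NT-Xent family popularized by SimCLR. As a rough NumPy sketch of that standard loss — an assumption about the family, not the paper's exact regularizer:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    # Normalized-temperature cross-entropy over a batch of positive pairs:
    # z1[i] and z2[i] are two augmented views of the same image.
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)       # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index per row
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Well-aligned positive pairs drive the loss toward zero, while shuffled pairs raise it, which is the property the pre-training relies on.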


Subjects
Image Processing, Computer-Assisted , Microscopy , Knowledge , Knowledge Bases , Neural Networks, Computer , Supervised Machine Learning
18.
ACM BCB ; 20232023 Sep.
Article in English | MEDLINE | ID: mdl-39006863

ABSTRACT

In various applications, such as computer vision, medical imaging and robotics, three-dimensional (3D) image registration is a significant step. It aligns different datasets into a single coordinate system, providing a consistent perspective for further analysis. By precisely aligning images, we can compare, analyze, and combine data collected in different situations. This paper presents a novel approach to 3D (z-stack) microscopy and medical image registration, utilizing a combination of conventional and deep learning techniques for feature extraction and adaptive likelihood-based methods for outlier detection. The proposed method uses the Scale-Invariant Feature Transform (SIFT) and the ResNet50 deep neural network to extract effective features and obtain precise and exhaustive representations of image contents. The registration approach also employs the adaptive Maximum Likelihood Estimation SAmple Consensus (MLESAC) method, which optimizes outlier detection and increases resistance to noise and distortion, to improve the efficacy of the combined extracted features. This integrated approach demonstrates robustness, flexibility, and adaptability across a variety of imaging modalities, enabling the registration of complex images with higher precision. Experimental results on 3D MRI and 3D serial sections of multiplex microscopy images show that the proposed algorithm outperforms state-of-the-art image registration methods, including conventional SIFT, SIFT with Random Sample Consensus (RANSAC), and Oriented FAST and Rotated BRIEF (ORB) methods, as well as registration software packages such as bUnwarpJ and TurboReg, in terms of Mutual Information (MI), Phase Congruency-Based (PCB) metrics, and Gradient-Based Metrics (GBM).
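MLESAC refines the classic RANSAC loop by scoring hypotheses with an adaptive likelihood rather than a plain inlier count. The robust-fitting loop it builds on can be sketched as plain RANSAC over 2D feature correspondences — a NumPy sketch of the generic idea, not the paper's adaptive method:

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine fit: [x, y, 1] @ params -> (x', y').
    A = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T                      # 2x3 matrix [linear part | translation]

def ransac_affine(src, dst, n_iter=200, tol=1.0, seed=0):
    # Repeatedly fit minimal 3-point samples, keep the hypothesis with the
    # largest consensus set, then refit on all of its inliers.
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return fit_affine(src[best], dst[best]), best
```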

19.
Polymers (Basel) ; 14(21)2022 Nov 05.
Article in English | MEDLINE | ID: mdl-36365744

ABSTRACT

Considering that, in the context of the ecological restoration of a large number of exposed rock slopes, it is difficult for existing artificial soils to meet the requirements of mechanical performance and ecological construction at the same time, this paper investigates the stabilization benefits of polyvinyl acetate- and attapulgite-treated clayey soil through a series of laboratory experiments. To study the effectiveness of polyvinyl acetate (PVA) and attapulgite as soil stabilizers, a triaxial strength test, an evaporation test and a vegetation growth test were carried out on improved soil with different amounts of PVA content (0, 1%, 2%, 3%, and 4%) and attapulgite replacement (0, 2%, 4%, 6%, and 8%). The results show that polyvinyl acetate and attapulgite, added singly or in combination, can increase the peak deviator stress of the sample. Adding polyvinyl acetate improves soil strength by increasing the cohesion of the sample, whereas adding attapulgite improves soil strength mainly by increasing its internal friction angle; the composite greatly improves strength by increasing cohesion and internal friction angle at the same time. The effect of the added materials increased significantly with curing age. Moreover, polyvinyl acetate and attapulgite improve soil water retention by improving water-holding capacity, so that the soil can still support good vegetation growth under long-term drought conditions. Scanning electron microscopy (SEM) images indicated that PVA and attapulgite affect the strength characteristics of soil specimens through the reaction of PVA with water, which changes the soil structure, and through the interweaving of attapulgite with soil particles, which acts as the skeleton of the aggregate. Overall, PVA and attapulgite can effectively increase clayey soil stability by improving the cohesive force and internal friction angle of clayey soil.

20.
Diagnostics (Basel) ; 12(4)2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35453885

ABSTRACT

Colorectal cancer (CRC) is the second most common cancer in women and the third most common in men, with an increasing incidence. Pathology diagnosis, complemented with prognostic and predictive biomarker information, is the first step toward personalized treatment. The increased diagnostic load in the pathology laboratory, combined with the reported intra- and inter-observer variability in the assessment of biomarkers, has prompted the quest for reliable machine-based methods to incorporate into routine practice. Recently, Artificial Intelligence (AI) has made significant progress in the medical field, showing potential for clinical applications. Herein, we aim to systematically review the current research on AI in CRC image analysis. In histopathology, algorithms based on Deep Learning (DL) have the potential to assist in diagnosis, predict clinically relevant molecular phenotypes and microsatellite instability, identify histological features related to prognosis and correlated with metastasis, and assess specific components of the tumor microenvironment.
