ABSTRACT
The evolution of parasite resistance to antiparasitic agents has become a serious health issue, creating a critical and pressing need to develop new therapeutics that can overcome drug resistance. Nanoparticles are promising emerging drug carriers that have demonstrated efficiency in treating many parasitic diseases. Lately, attention has been drawn to broad-spectrum nanoparticles capable of converting absorbed light into heat via the photothermal effect. The present study is the first to assess the effect of silver nanoparticles (Ag NPs) and iron oxide nanoparticles (Fe3O4 NPs) as sole agents and in combination with a light-emitting diode (LED) on Blastocystis hominis (B. hominis) in vitro. Initially, the aqueously synthesized nanoparticles were characterized by UV-Vis spectroscopy, zeta potential, and transmission electron microscopy (TEM). The anti-Blastocystis efficiency of these NPs was tested separately under dark conditions. As these NPs have a wide absorption spectrum in the visible region, they were also excited by a continuous-wave LED with a wavelength band of 400-700 nm to test the photothermal effect. The sensitivity of B. hominis cysts was evaluated using laser scanning confocal microscopy, and the live and dead cells were segmented based on superpixels and the k-means clustering algorithm. Our findings showed that this excitation led to hyperthermia that induced a significant reduction in the number of cysts treated with photothermally active NPs. The results of this study elucidate the potential role of photothermally active NPs as an effective anti-Blastocystis agent. By using this approach, new therapeutic antiparasitic agents can be developed.
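The live/dead discrimination step described above (superpixel oversegmentation followed by k-means clustering of the confocal image) can be illustrated with a minimal sketch. This is not the authors' code: the scikit-image/scikit-learn calls, the choice of superpixel mean color as the feature, and all parameter values are assumptions.

```python
# Illustrative sketch (not the authors' code): superpixels + k-means on an RGB confocal image.
import numpy as np
from skimage import io, segmentation
from sklearn.cluster import KMeans

def segment_live_dead(image_path, n_segments=400, n_clusters=3):
    img = io.imread(image_path)                      # assumed RGB image of shape (H, W, 3)
    labels = segmentation.slic(img, n_segments=n_segments, compactness=10, start_label=0)

    # Mean color of each superpixel is used as its feature vector.
    n_sp = labels.max() + 1
    feats = np.array([img[labels == i].mean(axis=0) for i in range(n_sp)])

    # Cluster the superpixels (e.g. background / live / dead) with k-means.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    return km.labels_[labels]                        # per-pixel cluster map

# cluster_map = segment_live_dead("cysts_confocal.png")   # hypothetical file name
```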
Subjects
Blastocystis hominis, Cysts, Metal Nanoparticles, Humans, Silver/pharmacology, Antiparasitic Agents, Magnetic Iron Oxide Nanoparticles
ABSTRACT
The most critical aspect of panorama generation is maintaining local semantic consistency. Objects may be projected from different depths in the captured image. When warping the image to a unified canvas, pixels at the semantic boundaries of the different views are significantly misaligned. We propose two lightweight strategies to address this challenge efficiently. First, the original image is segmented into superpixels rather than regular grids to preserve the structure of each cell. We propose effective cost functions to generate the warp matrix for each superpixel. The warp matrix varies progressively for smooth projection, which contributes to a more faithful reconstruction of object structures. Second, to deal with artifacts introduced by stitching, we use a seam-line method tailored to superpixels. The algorithm takes into account the feature similarity of neighboring superpixels, including color difference, structure, and entropy. We also consider semantic information to avoid semantic misalignment. The optimal solution constrained by the cost functions is obtained under a graph model. The resulting stitched images exhibit improved naturalness. The algorithm is tested extensively on common panorama-stitching datasets. Experimental results show that the proposed algorithm effectively mitigates artifacts, preserves the completeness of semantics, and produces panoramic images with a subjective quality superior to that of alternative methods.
ABSTRACT
Liver fibrosis, a major global health issue, is marked by excessive collagen deposition that impairs liver function. Noninvasive methods for the direct visualization of collagen content are crucial for the early detection and monitoring of fibrosis progression. This study investigates the potential of spectral photoacoustic imaging (sPAI) to monitor collagen development in liver fibrosis. Utilizing a novel data-driven superpixel photoacoustic unmixing (SPAX) framework, we aimed to distinguish collagen presence and evaluate its correlation with fibrosis progression. We employed an established diethylnitrosamine (DEN) model in rats to study liver fibrosis over various time points. Our results revealed a significant correlation between increased collagen photoacoustic signal intensity and advanced fibrosis stages. Collagen abundance maps displayed dynamic changes throughout fibrosis progression. These findings underscore the potential of sPAI for the noninvasive monitoring of collagen dynamics and fibrosis severity assessment. This research advances the development of noninvasive diagnostic tools and personalized management strategies for liver fibrosis.
Subjects
Collagen, Liver Cirrhosis, Photoacoustic Techniques, Photoacoustic Techniques/methods, Animals, Liver Cirrhosis/diagnostic imaging, Liver Cirrhosis/pathology, Liver Cirrhosis/chemically induced, Liver Cirrhosis/metabolism, Collagen/metabolism, Collagen/chemistry, Rats, Liver/diagnostic imaging, Liver/pathology, Liver/metabolism, Male, Diethylnitrosamine/toxicity, Animal Disease Models
ABSTRACT
Semi-supervised graph convolutional networks (SSGCNs) have been proven to be effective in hyperspectral image classification (HSIC). However, limited training data and spectral uncertainty restrict the classification performance, and the computational demands of a graph convolution network (GCN) present challenges for real-time applications. To overcome these issues, a dual-branch fusion of a GCN and convolutional neural network (DFGCN) is proposed for HSIC tasks. The GCN branch uses an adaptive multi-scale superpixel segmentation method to build fusion adjacency matrices at various scales, which improves the graph convolution efficiency and node representations. Additionally, a spectral feature enhancement module (SFEM) enhances the transmission of crucial channel information between the two graph convolutions. Meanwhile, the CNN branch uses a convolutional network with an attention mechanism to focus on detailed features of local areas. By combining the multi-scale superpixel features from the GCN branch and the local pixel features from the CNN branch, this method leverages complementary features to fully learn rich spatial-spectral information. Our experimental results demonstrate that the proposed method outperforms existing advanced approaches in terms of classification efficiency and accuracy across three benchmark data sets.
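As a rough illustration of the input to a GCN branch of this kind, the sketch below builds a single-scale superpixel graph over a hyperspectral cube: SLIC segmentation on a crude single-band guide image, mean spectra as node features, and RBF-weighted edges between spatially adjacent superpixels. This is only a simplified stand-in for the paper's adaptive multi-scale fusion adjacency matrices; the guide image, scale, and sigma are assumptions.

```python
# Sketch of a one-scale superpixel graph for a hyperspectral cube (H, W, B); not the paper's method.
import numpy as np
from skimage.segmentation import slic

def superpixel_graph(hsi, n_segments=500, sigma=1.0):
    """Adjacency matrix over superpixels with RBF weights on mean spectra."""
    guide = hsi.mean(axis=2)                                   # crude single-band guide image
    labels = slic(guide, n_segments=n_segments, compactness=0.1,
                  channel_axis=None, start_label=0)
    n_sp = labels.max() + 1
    feats = np.array([hsi[labels == i].mean(axis=0) for i in range(n_sp)])

    # Spatial adjacency: superpixels that touch horizontally or vertically.
    A = np.zeros((n_sp, n_sp))
    for dr, dc in ((0, 1), (1, 0)):
        a = labels[: labels.shape[0] - dr, : labels.shape[1] - dc]
        b = labels[dr:, dc:]
        touch = a != b
        for i, j in zip(a[touch], b[touch]):
            A[i, j] = A[j, i] = np.exp(-np.sum((feats[i] - feats[j]) ** 2) / (2 * sigma ** 2))
    return A, feats, labels
```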
ABSTRACT
Patterns entered into knitting CAD software contain thousands or tens of thousands of different colors, which need to be merged by color-separation algorithms. However, for degraded patterns, current color-separation algorithms cannot achieve the desired results, and the number of clusters has to be set manually. In this paper, we propose a fast and automatic FCM color-separation algorithm based on superpixels, which first uses the Real-ESRGAN blind super-resolution network to sharpen the degraded patterns and obtain high-resolution images with clear boundaries. It then uses the improved MMGR-WT superpixel algorithm to pre-separate the high-resolution images and obtain superpixel images with smooth and accurate edges. Subsequently, the number of superpixel clusters is automatically calculated by an improved density peak clustering (DPC) algorithm. Finally, the superpixels are clustered using fast fuzzy c-means (FCM) based on a color histogram. The experimental results show that the algorithm not only automatically determines the number of colors in a pattern and achieves accurate color separation of degraded patterns, but also has a shorter running time. The color-separation results for 30 degraded patterns show that the segmentation accuracy of the proposed algorithm reaches 95.78%.
ABSTRACT
Multispectral satellite imagery offers a new perspective for spatial modelling, change detection, and land cover classification. The increased demand for accurate classification of geographically diverse regions has led to advances in object-based methods. A novel spatiotemporal method is presented for object-based land cover classification of satellite imagery using a Graph Neural Network. This paper introduces an innovative representation of sequential satellite images as a directed graph by connecting segmented land regions through time. The method's modular node classification pipeline utilises a Convolutional Neural Network as a multispectral image feature extraction network and a Graph Neural Network as a node classification model. To evaluate the performance of the proposed method, we utilised EfficientNetV2-S for feature extraction and the GraphSAGE algorithm with Long Short-Term Memory aggregation for node classification. This application on Sentinel-2 L2A imagery produced complete 4-year intermonthly land cover classification maps for two regions: Graz in Austria, and the region of Portoroz, Izola and Koper in Slovenia. The regions were classified with Corine Land Cover classes. In the level 2 classification of the Graz region, the method outperformed the state-of-the-art UNet model, achieving an average F1-score of 0.841 and an accuracy of 0.831, compared with UNet's 0.824 and 0.818, respectively. Similarly, the method demonstrated superior performance over UNet in both regions under the level 1 classification, which contains fewer classes. Individual classes were classified with accuracies of up to 99.17%.
ABSTRACT
Various statistical data indicate that mobile-source pollutants have become a significant contributor to atmospheric pollution, with vehicle tailpipe emissions being the primary contributor among these mobile sources. The motion shadow generated by a motor vehicle bears a visual resemblance to emitted black smoke, so this study focuses primarily on the interference of motion shadows in the detection of black smoke vehicles. Initially, the YOLOv5s model is used to locate moving objects, including motor vehicles, motion shadows, and black smoke emissions. The extracted images of these moving objects are then processed using simple linear iterative clustering (SLIC) to obtain superpixel images of the three categories for model training. Finally, these superpixel images are fed into a lightweight MobileNetv3 network to build a black smoke vehicle detection model for recognition and classification. This study breaks away from the traditional "detection first, then removal" approach for overcoming shadow interference and instead employs a "segmentation-classification" approach, ingeniously addressing the coexistence of motion shadows and black smoke emissions. Experimental results show that the Y-MobileNetv3 model, which takes motion shadows into account, achieves an accuracy rate of 95.17%, a 4.73% improvement over the N-MobileNetv3 model (which does not consider motion shadows). Moreover, the average single-image inference time is only 7.3 ms. The superpixel segmentation algorithm effectively clusters similar pixels, facilitating the detection of trace amounts of black smoke emitted by motor vehicles. The Y-MobileNetv3 model not only improves the accuracy of black smoke vehicle recognition but also meets real-time detection requirements.
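The "segmentation-classification" flow can be sketched as follows: SLIC superpixels are computed on a detected moving-object crop, the crop is replaced by its superpixel-averaged version, and a three-class MobileNetV3 classifier is applied to the result. The class names, input size, averaging step, and the untrained torchvision backbone are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of a SLIC + MobileNetV3 "segmentation-classification" pipeline; not the paper's code.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms
from skimage import io
from skimage.segmentation import slic

CLASSES = ["vehicle", "shadow", "black_smoke"]          # assumed three-way label set

model = models.mobilenet_v3_small(weights=None)          # lightweight classifier backbone
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, len(CLASSES))
model.eval()

to_input = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224), antialias=True),
])

def superpixel_average(crop, labels):
    """Replace every pixel by the mean color of its superpixel."""
    out = np.zeros_like(crop, dtype=np.float32)
    for sp in np.unique(labels):
        out[labels == sp] = crop[labels == sp].mean(axis=0)
    return out / 255.0

def classify_crop(crop_path):
    crop = io.imread(crop_path)[..., :3]                # moving-object crop from the detector
    labels = slic(crop, n_segments=200, compactness=10, start_label=1)
    sp_img = superpixel_average(crop, labels)
    with torch.no_grad():
        logits = model(to_input(sp_img).unsqueeze(0))
    return CLASSES[int(logits.argmax(dim=1))]
```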
ABSTRACT
Superpixel decomposition reconstructs an image from meaningful fragments in order to extract regional features, thus boosting the performance of advanced computer vision tasks. To further optimize computational efficiency as well as segmentation quality, a novel framework is proposed that generates superpixels by hybridizing two existing linear clustering frameworks. Instead of the conventional grid sampling of seeds for region clustering, a seeding step based on an accelerated convergence strategy is first introduced to locate the centers of the final superpixel clusters. Superpixels are then generated by a center-fixed online average clustering, which adopts region growing to label all pixels in an efficient one-pass manner. The experiments verify that integrating these two steps produces a synergistic effect and yields a more well-rounded method than either approach alone. Compared with other state-of-the-art superpixel algorithms, the proposed framework achieves comparable overall performance in terms of segmentation accuracy, spatial compactness, and running efficiency; moreover, an application to image segmentation verifies that it facilitates traffic scene analysis.
Subjects
Algorithms, Semantics, Cluster Analysis
ABSTRACT
An optical camera mounted on an underwater scooter can perform efficient shallow-water marine mapping. In this paper, an underwater image stitching method is proposed for detailed large-scene awareness based on a scooter-borne camera, comprising preprocessing, image registration, and post-processing. An underwater image enhancement algorithm based on the inherent underwater optical attenuation characteristics and the dark channel prior is presented to improve underwater feature matching. Furthermore, an optimal seam algorithm is used to generate a shape-preserving seam line in a superpixel-restricted area. The experimental results show the effectiveness of the proposed method in different underwater environments and its ability to generate natural underwater mosaics with few artifacts or visible seams.
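One ingredient of the enhancement stage, a dark-channel-prior transmission estimate, is sketched below. The underwater attenuation modelling, the refinement of the transmission map, and the paper's specific modifications are not reproduced; the patch size, omega, and background-light estimate are conventional assumptions.

```python
# Minimal dark-channel-prior sketch; parameter values are illustrative, not the paper's.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel channel minimum followed by a local minimum filter (the dark channel)."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, omega=0.95, patch=15):
    """Coarse transmission map t = 1 - omega * dark_channel(I / A)."""
    dc = dark_channel(img, patch)
    # Background light A: mean color of the brightest ~0.1% of dark-channel pixels.
    idx = np.argsort(dc, axis=None)[-max(1, dc.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    return 1.0 - omega * dark_channel(img / np.maximum(A, 1e-6), patch)

# img = io.imread("scooter_frame.png").astype(np.float32) / 255.0   # hypothetical input
# t = estimate_transmission(img)
```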
ABSTRACT
Nature-inspired artificial intelligence algorithms have been applied to color image quantization (CIQ) for some time. Among these algorithms, the particle swarm optimization algorithm (PSO-CIQ) and its numerous modifications are important in CIQ. In this article, the usefulness of such a modification, labeled IDE-PSO-CIQ and additionally using the idea of individual difference evolution based on the emotional states of particles, is tested. The superiority of this algorithm over the PSO-CIQ algorithm was demonstrated using a set of quality indices based on pixels, patches, and superpixels. Furthermore, both algorithms studied were applied to superpixel versions of quantized images, creating color palettes in much less time. A heuristic method was proposed to select the number of superpixels, depending on the size of the palette. The effectiveness of the proposed algorithms was experimentally verified on a set of benchmark color images. The results obtained from the computational experiments indicate a multiple reduction in computation time for the superpixel methods while maintaining the high quality of the output quantized images, slightly inferior to that obtained with the pixel methods.
ABSTRACT
Allocentric semantic 3D maps are highly useful for a variety of human-machine interaction tasks, since egocentric viewpoints can be derived by the machine for the human partner. Class labels and map interpretations, however, may differ or be missing for the participants due to their different perspectives, particularly when considering the viewpoint of a small robot, which differs significantly from that of a human. In order to overcome this issue and establish common ground, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as that of a small robot. We propose several approaches for acquiring semantic labels for images taken from unusual perspectives. We start with a partial 3D semantic reconstruction from the human perspective, which we transfer and adapt to the small robot's perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using a robot car with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot's perspective, with accuracy comparable to the original one. In addition, we exploit the gained information to improve the recognition performance of the deep network for the lower viewpoints and show that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations are close to real-time, so the approach enables interactive applications.
Subjects
Robotics, Humans, Robotics/methods, Semantics
ABSTRACT
The worldwide outbreak of COVID-19 was caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Spike proteins, which allow these viruses to infect host cells, are one of the distinctive biological traits of various prior viruses. As a result, the process by which these viruses infect people depends largely on spike proteins. The density of SARS-CoV-2 spike proteins must be estimated to better understand and develop diagnostics and vaccines against the COVID-19 pandemic. CT scans and X-rays show three characteristic findings: ground-glass opacities, consolidation, and crazy-paving patterns, which can occur separately or together. Although CT is sensitive to COVID-19, it is not very specific; therefore, patients with these findings should undergo more comprehensive clinical and laboratory tests to rule out other probable causes. This work collected 586 SARS-CoV-2 transmission electron microscopy (TEM) images from open sources for density estimation of virus spike proteins through a segmentation approach based on the superpixel technique. The mean spike densities of SARS-CoV-2 and SARS-CoV were 21.97 nm and 22.45 nm, respectively. In the future, we aim to include this model in an intelligent system to enhance the accuracy of viral detection and classification, and to remotely connect hospitals and public sites to conduct environmental hazard assessments and data collection.
ABSTRACT
Hyperspectral image (HSI) restoration plays an essential role in remote sensing image processing. Recently, superpixel-segmentation-based low-rank regularized methods for HSI restoration have shown outstanding performance. However, most of them simply segment the HSI according to its first principal component, which is suboptimal. In this paper, integrating superpixel segmentation with principal component analysis, we propose a robust superpixel segmentation strategy to better divide the HSI, which can further enhance its low-rank attribute. To better exploit the low-rank attribute, a weighted nuclear norm with three types of weighting is proposed to efficiently remove the mixed noise in the degraded HSI. Experiments conducted on simulated and real HSI data verify the performance of the proposed method for HSI restoration.
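A weighted nuclear norm penalty is typically minimized through weighted singular value thresholding, which shrinks each singular value by its own weight; a minimal sketch is given below. The particular weighting schemes of the paper are not reproduced, and the example weighting shown in the comment is only an assumption.

```python
# Sketch of weighted singular value thresholding (the proximal step of a weighted nuclear norm).
import numpy as np

def weighted_svt(X, weights):
    """Shrink each singular value of X by its corresponding weight, then reconstruct."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt

# Example (assumed weighting): heavier shrinkage of smaller singular values of a flattened
# superpixel block X of shape (n_pixels, n_bands).
# X_denoised = weighted_svt(X, weights=1.0 / (np.linalg.svd(X, compute_uv=False) + 1e-6))
```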
ABSTRACT
Clinker burning is the step with the greatest influence on cement quality during the production process. Appropriate characterisation for quality control and decision-making is therefore critical, not only to maintain stable production but also for the development of alternative cements. Scanning electron microscopy (SEM) in combination with energy-dispersive X-ray spectroscopy (EDX) delivers spatially resolved phase and chemical information for cement clinker. These data can be used to quantify phase fractions and the chemical composition of identified phases. This contribution aims to provide an overview of phase fraction quantification by semi-automatic phase segmentation using high-resolution backscattered electron (BSE) images and lower-resolution EDX element maps. To this end, a tool for image analysis was developed that uses state-of-the-art algorithms for pixel-wise image segmentation and labelling in combination with a decision tree that allows searching for specific clinker phases. Results show that this tool can be applied to segment sub-micron-scale clinker phases and to quantify all phase fractions. In addition, statistical evaluation of the data is implemented within the tool to reveal whether the imaged area is representative of all clinker phases.
Subjects
Construction Materials, Electrons, Construction Materials/analysis, Scanning Electron Microscopy, X-Ray Spectrometry/methods, Workflow
ABSTRACT
PURPOSE: Melanoma is known as the most aggressive form of skin cancer and one of the fastest-growing malignant tumors worldwide. Several computer-aided diagnosis systems for melanoma have been proposed; still, the algorithms encounter difficulties with early-stage lesions. This paper aims to discriminate between melanoma and benign skin lesions in dermoscopic images. METHODS: The proposed algorithm is based on the color and texture of skin lesions and introduces a novel feature extraction technique. The algorithm uses an automatic segmentation based on k-means, generating a fairly accurate mask for each lesion. The feature extraction consists of existing and novel color and texture attributes measuring how color and texture vary inside the lesion. To find the optimal results, all the attributes are extracted from lesions in five different color spaces (RGB, HSV, Lab, XYZ, and YCbCr) and used as the inputs to three classifiers (k-nearest neighbors, support vector machine, and artificial neural network). RESULTS: The PH2 dataset is used to assess the performance of the proposed algorithm. The results of our algorithm are compared to those of published articles that used the same dataset, showing that the proposed method outperforms the state of the art by attaining a sensitivity of 99.25%, a specificity of 99.58%, and an accuracy of 99.51%. CONCLUSION: The final results show that color combined with texture provides powerful and relevant attributes for melanoma detection and improves over the state of the art.
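The automatic k-means masking step can be illustrated with a short sketch: cluster the pixel colors of the dermoscopic image and keep the darker cluster as the lesion. The two-cluster setting and the darkest-cluster heuristic are assumptions; the published color/texture feature extraction and classification stages are not reproduced.

```python
# Hedged sketch of a k-means lesion mask for a dermoscopic RGB image; not the paper's exact code.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

def lesion_mask(img, k=2):
    """Binary lesion mask: k-means on pixel colors, keep the darkest cluster."""
    h, w, c = img.shape
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(
        img.reshape(-1, c).astype(float))
    labels = km.labels_.reshape(h, w)
    darkest = int(np.argmin(km.cluster_centers_.mean(axis=1)))   # lesion assumed darker than skin
    return labels == darkest

# mask = lesion_mask(io.imread("dermoscopy.png")[..., :3])   # hypothetical file name
```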
Subjects
Melanoma, Skin Neoplasms, Algorithms, Color, Dermoscopy/methods, Computer-Assisted Diagnosis/methods, Humans, Melanoma/diagnostic imaging, Melanoma/pathology, Skin Neoplasms/diagnostic imaging, Skin Neoplasms/pathology
ABSTRACT
We propose three methods for the color quantization of superpixel images. Prior to the application of each method, the target image is first segmented into a finite number of superpixels by grouping the pixels that are similar in color. The color of a superpixel is given by the arithmetic mean of the colors of all constituent pixels. Following this, the superpixels are quantized using common splitting or clustering methods, such as median cut, k-means, and fuzzy c-means. In this manner, a color palette is generated while the original pixel image undergoes color mapping. The effectiveness of each proposed superpixel method is validated via experimentation using different color images. We compare the proposed methods with state-of-the-art color quantization methods. The results show significantly decreased computation time along with high quality of the quantized images. However, a multi-index evaluation process shows that the image quality is slightly worse than that obtained via pixel methods.
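A minimal sketch of the superpixel quantization idea follows, using k-means as the palette builder: only the superpixel mean colors are clustered, and every pixel is then mapped through its superpixel to the nearest palette entry. The median-cut or fuzzy c-means variants would operate on the same superpixel colors. The segment count and palette size here are illustrative, not the values used in the paper.

```python
# Sketch of superpixel-based color quantization with a k-means palette; parameters are illustrative.
import numpy as np
from skimage import io, segmentation
from sklearn.cluster import KMeans

def quantize_via_superpixels(img, n_segments=1000, palette_size=16):
    labels = segmentation.slic(img, n_segments=n_segments, compactness=10, start_label=0)
    n_sp = labels.max() + 1
    # Superpixel mean colors are the only data that gets clustered.
    sp_colors = np.array([img[labels == i].mean(axis=0) for i in range(n_sp)])
    km = KMeans(n_clusters=palette_size, n_init=10, random_state=0).fit(sp_colors)
    # Map every pixel through its superpixel to the nearest palette entry.
    return km.cluster_centers_[km.labels_[labels]].astype(img.dtype)

# quantized = quantize_via_superpixels(io.imread("photo.png")[..., :3])   # hypothetical input
```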
Assuntos
Algoritmos , Análise por Conglomerados , CorRESUMO
Training a deep convolutional neural network (DCNN) to detect defects in substation equipment often requires a large defect dataset. However, such a dataset is not easily acquired, and the complex background of infrared images makes defect detection even more difficult. To alleviate this issue, this article presents a two-level defect detection model (TDDM). First, to extract the target equipment in the image, an instance segmentation module is constructed by training on an instance segmentation dataset. Then, the target equipment is divided into superpixels by a superpixel segmentation algorithm to obtain more detailed information. Next, a temperature probability density distribution is constructed from the superpixels, and a defect determination strategy is used to recognize the defect. Finally, experiments on a defect detection dataset verify the effectiveness of the TDDM.
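A rough sketch of the per-superpixel temperature analysis follows: SLIC restricted to the equipment mask, with a simple percentile threshold standing in for the paper's probability-density-based defect determination strategy. The mask-restricted SLIC call, the superpixel count, and the temperature threshold are assumptions.

```python
# Hedged sketch: superpixel temperature statistics over the segmented equipment region.
import numpy as np
from skimage.segmentation import slic

def defect_superpixels(temperature_map, equipment_mask, n_segments=200, t_thresh=80.0):
    """Flag superpixels of the segmented equipment whose temperatures look anomalous."""
    labels = slic(temperature_map, n_segments=n_segments, compactness=0.1,
                  channel_axis=None, mask=equipment_mask, start_label=1)
    hot = []
    for sp in np.unique(labels[labels > 0]):                 # label 0 = outside the equipment mask
        temps = temperature_map[labels == sp]
        if np.percentile(temps, 90) > t_thresh:              # simple surrogate for the paper's
            hot.append(int(sp))                              # probability-density criterion
    return labels, hot
```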
Subjects
Algorithms, Neural Networks (Computer)
ABSTRACT
Hyperspectral image classification has received a lot of attention in the remote sensing field. However, most classification methods require a large number of training samples to obtain satisfactory performance, and in real applications it is difficult for users to label sufficient samples. To overcome this problem, a novel multi-scale superpixel-guided structural profile method is proposed for the classification of hyperspectral images. First, the number of spectral bands of the original image is reduced with an averaging fusion method. Then, multi-scale structural profiles are extracted with the help of a superpixel segmentation method. Finally, the extracted multi-scale structural profiles are fused with an unsupervised feature selection method, followed by a spectral classifier, to obtain the classification results. Experiments on several hyperspectral datasets verify that the proposed method produces outstanding classification results with limited samples compared to other advanced classification methods. The classification accuracies obtained by the proposed method on the Salinas dataset are increased by 43.25%, 31.34%, and 46.82% in terms of overall accuracy (OA), average accuracy (AA), and Kappa coefficient, respectively, compared to recently proposed deep learning methods.
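The band-reduction step (averaging fusion) is simple enough to sketch directly: adjacent spectral bands are averaged in groups to shrink the spectral dimension before the superpixel-based profiles are extracted. The group size is an illustrative assumption.

```python
# Sketch of averaging fusion over adjacent bands of an (H, W, B) hyperspectral cube.
import numpy as np

def average_fusion(hsi, group_size=10):
    """Average each run of `group_size` adjacent bands to reduce the spectral dimension."""
    h, w, b = hsi.shape
    n_groups = int(np.ceil(b / group_size))
    return np.stack(
        [hsi[:, :, g * group_size:(g + 1) * group_size].mean(axis=2) for g in range(n_groups)],
        axis=2)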
ABSTRACT
Dealing with low-light images is a challenging problem in the image processing field. A mature low-light enhancement technology will not only be conducive to human visual perception but will also lay a solid foundation for subsequent high-level tasks, such as target detection and image classification. In order to balance the visual effect of the image and its contribution to subsequent tasks, this paper proposes utilizing shallow Convolutional Neural Networks (CNNs) as a preliminary image-processing stage to restore the necessary image feature information, followed by superpixel image segmentation to obtain image regions with similar colors and brightness, and finally an Attentive Neural Processes (ANPs) network to find a local enhancement function on each superpixel to further restore features and details. Through extensive experiments on synthesized and real low-light images, our algorithm reaches 23.402, 0.920, and 2.2490 for Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Natural Image Quality Evaluator (NIQE), respectively. As demonstrated by the experiments on Scale-Invariant Feature Transform (SIFT) feature detection and subsequent target detection, our approach achieves excellent results in both visual effect and image features.
Subjects
Image Enhancement, Computer-Assisted Image Processing, Algorithms, Humans, Image Enhancement/methods, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Signal-to-Noise Ratio
ABSTRACT
This work proposes a novel scheme for speckle suppression in medical images acquired by ultrasound sensors. The proposed method is based on a block matching procedure that uses mutual information as the similarity measure for grouping patches in a clustered area, resulting in a new despeckling method that integrates the statistical properties of an image and its texture when creating 3D groups in the BM3D scheme. For this purpose, the segmentation of ultrasound images is carried out using superpixels and a variation of the local binary patterns algorithm to improve the performance of the block matching procedure. The 3D groups are modeled as grouped tensors and despeckled with singular value decomposition. Moreover, a variant of the bilateral filter is used as a post-processing step to recover and enhance edge quality. Experimental results demonstrate that the designed framework achieves good despeckling performance in ultrasound images according to the objective quality criteria commonly used in the literature and to visual perception.
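Two pieces of this pipeline can be sketched compactly: a mutual-information similarity between intensity-quantized patches (a possible block-matching criterion) and SVD-based shrinkage of a stacked patch group. The superpixel/LBP-guided search area, the tensor modelling, and the bilateral post-filter are not reproduced; the bin count and rank are assumptions.

```python
# Hedged sketch: MI-based patch similarity and rank-truncated SVD shrinkage of a patch group.
import numpy as np
from sklearn.metrics import mutual_info_score

def patch_mi(p, q, bins=32):
    """Mutual information between two patches after joint intensity quantization."""
    edges = np.linspace(min(p.min(), q.min()), max(p.max(), q.max()) + 1e-6, bins)
    return mutual_info_score(np.digitize(p.ravel(), edges), np.digitize(q.ravel(), edges))

def despeckle_group(group, rank=3):
    """Stack matched patches as rows, keep the leading singular components, and rebuild."""
    rows = group.reshape(len(group), -1)
    U, s, Vt = np.linalg.svd(rows, full_matrices=False)
    r = min(rank, len(s))
    return ((U[:, :r] * s[:r]) @ Vt[:r]).reshape(group.shape)
```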