Results 1 - 7 of 7
1.
IEEE Trans Neural Netw Learn Syst ; 33(11): 6129-6143, 2022 Nov.
Article in English | MEDLINE | ID: mdl-33900925

ABSTRACT

Underwater image processing has shown significant potential for exploring underwater environments. It has been applied to a wide variety of fields, such as underwater terrain scanning and autonomous underwater vehicle (AUV)-driven applications like image-based underwater object detection. However, underwater images often suffer from degradation due to attenuation, color distortion, and noise from artificial lighting sources, as well as the effects of possibly low-end optical imaging devices, so object detection performance degrades accordingly. To tackle this problem, this article proposes a lightweight deep underwater object detection network. The key is a deep model that jointly learns color conversion and object detection for underwater images. The image color conversion module transforms color images into the corresponding grayscale images to alleviate the problem of underwater color absorption and to enhance object detection performance at lower computational complexity. Experimental results with our implementation on the Raspberry Pi platform demonstrate the effectiveness of the proposed lightweight jointly learned model for underwater object detection compared with state-of-the-art approaches.
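The learned color conversion module itself is not specified in the abstract; as a fixed, non-learned point of reference only (an assumption, not the paper's mapping), a standard luminance-weighted RGB-to-grayscale conversion in NumPy looks like this:

```python
import numpy as np

def to_grayscale(img):
    """Convert an RGB image (H, W, 3) with values in [0, 1] to grayscale
    using the standard Rec. 601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return img @ weights  # (H, W)

# Toy 2x2 "underwater" image with a strong blue cast.
rgb = np.array([[[0.1, 0.3, 0.8], [0.2, 0.4, 0.9]],
                [[0.0, 0.2, 0.7], [0.1, 0.3, 0.8]]])
gray = to_grayscale(rgb)  # single-channel image, cheaper to process
```

Discarding chroma is plausible here because, per the abstract, wavelength-dependent color absorption makes underwater color channels unreliable, while a single-channel input also lowers computational cost.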

2.
Front Pharmacol ; 12: 518406, 2021.
Article in English | MEDLINE | ID: mdl-33994999

ABSTRACT

Marsdeniae tenacissimae Caulis is a traditional Chinese medicine, named Tongguanteng (TGT), that is often used for the adjuvant treatment of cancer. In our previous study, we reported that an ethyl acetate extract of TGT had inhibitory effects against A549 adenocarcinoma cell growth. To identify the components of TGT with anti-tumor activity and to elucidate their underlying mechanisms of action, we developed a technique for isolating compounds, followed by cytotoxicity screening, network pharmacology analysis, and cellular and molecular experiments. We isolated a total of 19 compounds from a TGT ethyl acetate extract. Two novel steroidal saponins were characterized using ultra-performance liquid chromatography with a photodiode array detector coupled to quadrupole time-of-flight mass spectrometry (UPLC-ESI-Q/TOF-MS). We then screened these constituents for anti-cancer activity against non-small cell lung cancer (NSCLC) in vitro and obtained six target compounds. Furthermore, a compound-target-pathway network of these six bioactive ingredients was constructed to elucidate the potential pathways underlying their anticancer effects. Approximately 205 putative targets associated with TGT, as well as 270 putative targets related to NSCLC, were obtained from online databases and target prediction software. Protein-protein interaction networks for the drug and disease putative targets were generated, and 18 candidate targets were detected based on topological features. In addition, pathway enrichment analysis was performed to identify related pathways, including PI3K/AKT, VEGF, and EGFR tyrosine kinase inhibitor resistance, all of which are related to metabolic processes and intrinsic apoptotic pathways involving reactive oxygen species (ROS). Various cellular experiments were then conducted to validate the drug-target mechanisms predicted by the network pharmacology analysis. The experimental results showed that the four C21 steroidal saponins could upregulate Bax and downregulate Bcl-2 expression, thereby changing the mitochondrial membrane potential, producing ROS, and releasing cytochrome c, which finally activated caspase-3, caspase-9, and caspase-8, inducing apoptosis in A549 cells. These components also downregulated the expression of MMP-2 and MMP-9 proteins, weakening their degradation of extracellular matrix components and type IV collagen and inhibiting the migration and invasion of A549 cells. Our study elucidated the chemical composition and underlying anti-tumor mechanism of TGT, which may be utilized in the treatment of lung cancer.

3.
Article in English | MEDLINE | ID: mdl-32976099

ABSTRACT

Various weather conditions, such as rain, haze, or snow, can degrade the visual quality of images and videos, which may significantly degrade the performance of related applications. In this paper, a novel framework based on a sequential dual attention deep network, called SSDRNet (Sequential dual attention-based Single image DeRaining deep Network), is proposed for removing rain streaks (deraining) from a single image. Since the inherent correlation among rain streaks within an image should be stronger than that between the rain streaks and the background (non-rain) pixels, a two-stage learning strategy is implemented to better capture the distribution of rain streaks within a rainy image. The two-stage deep neural network primarily involves three blocks: residual dense blocks (RDBs), sequential dual attention blocks (SDABs), and multi-scale feature aggregation modules (MAMs), all delicately and specifically designed for rain removal. The two-stage strategy learns very fine details of the rain streaks in the image and then cleanly removes them. Extensive experimental results show that the proposed deep framework achieves the best performance on qualitative and quantitative metrics compared with state-of-the-art methods. The code and trained model of the proposed SSDRNet are available online at https://github.com/fityanul/SDAN-for-Rain-Removal.
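The abstract does not detail the internals of the SDABs; purely as an illustrative assumption, the "sequential dual attention" pattern (gate channels first, then spatial locations) can be sketched without the learned convolutions a real block would contain:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sequential_dual_attention(feat):
    """Toy sequential attention over a feature map of shape (C, H, W):
    channel gating first, then spatial gating on the gated result.
    A real SDAB would compute these gates with learned layers."""
    # Channel attention: scale each channel by a squashed global average.
    channel_att = sigmoid(feat.mean(axis=(1, 2)))   # (C,)
    feat = feat * channel_att[:, None, None]
    # Spatial attention: scale each location by a squashed cross-channel mean.
    spatial_att = sigmoid(feat.mean(axis=0))        # (H, W)
    return feat * spatial_att[None, :, :]

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
out = sequential_dual_attention(feat)  # same shape, re-weighted responses
```

The point of the sequential ordering is that the spatial gate is computed on already channel-reweighted features, letting the two gates specialize (e.g., which feature types respond to streaks, then where streaks are).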

4.
Article in English | MEDLINE | ID: mdl-31831420

ABSTRACT

Images and videos captured by outdoor visual devices are usually degraded by turbid media, such as haze, smoke, fog, rain, and snow. Haze is the most common in outdoor scenes due to atmospheric conditions. In this paper, a novel deep learning-based architecture (denoted MSRL-DehazeNet) for single image haze removal relying on multi-scale residual learning (MSRL) and image decomposition is proposed. Instead of learning an end-to-end mapping between each hazy image and its corresponding haze-free one, as adopted by most existing learning-based approaches, we reformulate the problem as restoration of the image base component. Based on the decomposition of a hazy image into base and detail components, haze removal (or dehazing) can be achieved by our multi-scale deep residual learning and our simplified U-Net learning a mapping only between hazy and haze-free base components, while the detail component is further enhanced via another learned convolutional neural network (CNN). Moreover, thanks to the basic building block of our deep residual CNN architecture and our simplified U-Net structure, the feature maps (produced by extracting structural and statistical features) from each previous layer can be fully preserved and fed into the next layer, so possible color distortion in the recovered image is avoided. The final haze-removed (or dehazed) image is obtained by integrating the haze-removed base and the enhanced detail components. Experimental results demonstrate the effectiveness of the proposed framework compared with state-of-the-art approaches.
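The base/detail decomposition the paper builds on can be illustrated independently of the learned networks. Here a simple box filter stands in (as an assumption) for whatever smoother produces the base layer; the key property is that the split is exactly invertible, so the two components can be restored separately and recombined:

```python
import numpy as np

def box_blur(img, k=5):
    """Brute-force k-by-k box filter with edge padding; a stand-in
    smoother for producing the base (structure) layer."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out += padded[pad + dy:pad + dy + img.shape[0],
                          pad + dx:pad + dx + img.shape[1]]
    return out / (k * k)

img = np.random.default_rng(1).uniform(size=(16, 16))
base = img_base = box_blur(img)   # smooth, haze-dominated component
detail = img - base               # fine structure, enhanced separately
reconstructed = base + detail     # the split is exactly invertible
```

In the paper's pipeline, `base` would be dehazed by the residual/U-Net branch and `detail` enhanced by a second CNN before the final addition.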

5.
IEEE Trans Image Process ; 24(3): 919-31, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25576569

ABSTRACT

This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the reconstructed HR details using dynamic texture synthesis (DTS). Most existing multiframe-based video superresolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate subpixel motion estimation between frames in an LR video. To achieve high-quality reconstruction of HR details for an LR video, we propose a texture-synthesis (TS)-based video SR method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a temporally coherent way, which effectively addresses the temporal incoherence problem caused by traditional TS-based image SR methods. To further reduce the complexity of the proposed method, our method only performs the TS-based SR on a set of key frames, while the HR details of the remaining nonkey frames are simply predicted using the bidirectional overlapped block motion compensation. After all frames are upscaled, the proposed DTS-SR is applied to maintain the temporal coherence in the HR video. Experimental results demonstrate that the proposed method achieves significant subjective and objective visual quality improvement over state-of-the-art video SR methods.

6.
Opt Express ; 21(22): 27127-41, 2013 Nov 04.
Article in English | MEDLINE | ID: mdl-24216937

ABSTRACT

Images and videos captured by optical devices are usually degraded by turbid media such as haze, smoke, fog, rain, and snow. Haze is the most common problem in outdoor scenes because of atmospheric conditions. This paper proposes a novel single image-based dehazing framework to remove haze artifacts from images, introducing two novel image priors: the pixel-based dark channel prior and the pixel-based bright channel prior. Based on the two priors and the haze optical model, we estimate the atmospheric light via haze density analysis and then estimate the transmission map, which is subsequently refined via a bilateral filter. As a result, high-quality haze-free images can be recovered with lower computational complexity than the state-of-the-art approach based on the patch-based dark channel prior.
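Unlike the patch-based dark channel, the pixel-based variant takes the channel minimum at each pixel alone, which avoids the patchwise minimum filter and its cost. Assuming the standard transmission formula t = 1 - omega * min_c(I_c / A_c) from the dark channel literature (the omega weight is a common convention, not taken from this abstract), a sketch:

```python
import numpy as np

def pixelwise_transmission(img, atmospheric_light, omega=0.95):
    """Pixel-based dark channel: per-pixel minimum over the RGB channels
    of the A-normalized image (no patch minimum), then the standard
    transmission estimate t = 1 - omega * dark."""
    normalized = img / atmospheric_light   # broadcasts over (H, W, 3)
    dark = normalized.min(axis=2)          # (H, W) pixel-based dark channel
    return 1.0 - omega * dark

hazy = np.array([[[0.9, 0.85, 0.8]]])      # one bright, haze-dominated pixel
A = np.array([1.0, 1.0, 1.0])              # atmospheric light (assumed known)
t = pixelwise_transmission(hazy, A)        # low t => dense haze at this pixel
```

The paper then refines this raw per-pixel map with a bilateral filter, which smooths it while respecting depth edges.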


Subject(s)
Algorithms; Artifacts; Atmosphere; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Models, Theoretical; Computer Simulation; Light; Scattering, Radiation
7.
IEEE Trans Image Process ; 21(4): 1742-55, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22167628

ABSTRACT

Rain removal from a video is a challenging problem and has recently been investigated extensively. Nevertheless, rain removal from a single image has rarely been studied in the literature; with no temporal information among successive frames to exploit, the problem becomes even more challenging. In this paper, we propose a single-image-based rain removal framework that properly formulates rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into low-frequency and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
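The first stage of this pipeline, splitting the image into low- and high-frequency parts with a bilateral filter, can be sketched directly; the later dictionary-learning/sparse-coding stage is omitted here. The brute-force filter below is a simple reference implementation (with assumed parameter values), not the authors' code:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a grayscale image in [0, 1]:
    each output pixel is a mean of its neighborhood, weighted by both
    spatial distance and intensity similarity (so edges are preserved)."""
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode="edge")
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out

img = np.random.default_rng(2).uniform(size=(12, 12))
low = bilateral_filter(img)   # edge-preserving low-frequency part
high = img - low              # HF part, where rain streaks concentrate
```

An edge-preserving filter matters here: a plain Gaussian blur would push strong scene edges into the HF part, where they could be confused with rain streaks during the sparse-coding separation.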


Subject(s)
Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Photography/methods; Rain; Subtraction Technique; Algorithms; Artificial Intelligence; Information Storage and Retrieval/methods; Reproducibility of Results; Sensitivity and Specificity