Results 1 - 14 of 14
1.
Electron Commer Res Appl ; 48: 101075, 2021.
Article in English | MEDLINE | ID: mdl-36569978

ABSTRACT

Arising from the global COVID-19 pandemic, social distancing has become the new norm that shapes consumers' shopping and consumption activities. In response, the contactless channel (i.e., shopping online, self-collecting and returning parcels via delivery lockers) is ideally positioned to fulfil consumers' shopping/logistics needs while avoiding all unnecessary social interactions. Thus, this study examines the factors that motivate consumers' migration to the contactless channel by viewing consumers' channel choice as both a health-related and a shopping behaviour. Anchored in the synthesised insights of protection motivation theory and automation acceptance theory, the conceptual framework and a series of hypotheses are proposed. A survey instrument is used for data collection, and the data are analysed using structural equation modelling. Our findings reveal that perceived channel characteristics such as compatibility and trust directly contribute to the relative value of the contactless channel; these characteristics are also correlated, with trust perception reinforcing compatibility perception. The channel characteristics are further influenced by consumers' perceived susceptibility to COVID-19; that is, susceptibility perception enhances channel compatibility but decreases consumers' trust in the contactless channel. However, the impacts of susceptibility become insignificant with a low level of severity perception, confirming the stage-based conceptualisation of severity. Furthermore, the severity perception of COVID-19 is found to amplify the positive impacts of susceptibility perception but attenuate its negative impact. Our study promotes a deeper integration between the health and service literature and encourages more interdisciplinary studies in this nexus. Considering the practical context of social distancing, our findings suggest a tension between compatibility perception and trust concerns that shapes consumers' behaviours.

2.
Sensors (Basel) ; 20(8)2020 Apr 11.
Article in English | MEDLINE | ID: mdl-32290472

ABSTRACT

Medical image fusion techniques, which play an increasingly important role in many clinical applications, combine medical images from different modalities to make diagnosis more reliable and accurate. To obtain a fused image with high visual quality and clear structural details, this paper proposes a convolutional neural network (CNN)-based medical image fusion algorithm. The proposed algorithm uses a trained Siamese convolutional network to fuse the pixel activity information of the source images and generate a weight map. Meanwhile, a contrast pyramid is used to decompose the source images. The source images are then integrated according to their different spatial frequency bands and a weighted fusion operator. Comparative experiments show that the proposed fusion algorithm effectively preserves the detailed structural information of the source images and achieves good visual quality.
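
A minimal sketch of the weighted multi-scale fusion step described above. The paper derives its weight map from a trained Siamese CNN and uses a contrast pyramid; here a local-energy map and a simple multi-scale (difference-of-Gaussians) decomposition stand in for both, so this is an illustration of the idea, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def local_activity(img, size=7):
    """Local energy (variance) as a stand-in for the CNN pixel-activity score."""
    mean = uniform_filter(img, size)
    return uniform_filter(img ** 2, size) - mean ** 2

def multiscale_decompose(img, levels=4, sigma=2.0):
    pyr, current = [], img.astype(np.float64)
    for _ in range(levels - 1):
        low = gaussian_filter(current, sigma)
        pyr.append(current - low)   # band-pass (detail) layer
        current = low
    pyr.append(current)             # residual low-pass layer
    return pyr

def fuse(img_a, img_b, levels=4):
    # Soft weight map in [0, 1]: higher where image A is locally more active.
    act_a, act_b = local_activity(img_a), local_activity(img_b)
    w = np.clip(act_a / (act_a + act_b + 1e-12), 0.0, 1.0)
    pyr_a = multiscale_decompose(img_a, levels)
    pyr_b = multiscale_decompose(img_b, levels)
    fused = [w * a + (1.0 - w) * b for a, b in zip(pyr_a, pyr_b)]
    return np.sum(fused, axis=0)    # collapse the decomposition

ct, mri = np.random.rand(256, 256), np.random.rand(256, 256)
print(fuse(ct, mri).shape)          # (256, 256)
```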


Subjects
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Algorithms , Humans , Positron-Emission Tomography , Tomography, X-Ray Computed
3.
Sensors (Basel) ; 20(6)2020 Mar 13.
Article in English | MEDLINE | ID: mdl-32182986

ABSTRACT

Multi-exposure image fusion (MEF) provides a concise way to generate high-dynamic-range (HDR) images. Although existing MEF methods can achieve precise fusion in various static scenes, their ghost-removal performance varies in dynamic scenes. This paper proposes a precise MEF method based on feature patches (FPM) to improve the robustness of ghost removal in dynamic scenes. A reference image is first selected according to a priori exposure quality and then used in a structure consistency test to resolve the ghosting issues that arise in dynamic-scene MEF. The source images are decomposed into spatial-domain structures by a guided filter, and both the base and detail layers of the decomposed images are fused to perform the MEF. Structure decomposition of image patches and an appropriate exposure evaluation are integrated into the proposed solution, and both global and local exposures are optimized to improve fusion performance. Compared with six existing MEF methods, the proposed FPM not only improves the robustness of ghost removal in dynamic scenes, but also performs well in terms of color saturation, image sharpness, and local detail processing.
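
A hedged sketch of the base/detail fusion step mentioned above. The paper uses a guided filter, a reference image, and a patch-structure consistency test; in this simplification a Gaussian smoother produces the base/detail split and a classic well-exposedness weight replaces the exposure-quality evaluation, so it only illustrates the general layer-wise MEF idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=5.0):
    base = gaussian_filter(img, sigma)      # base layer (large-scale structure)
    return base, img - base                 # detail layer (fine texture)

def well_exposedness(img, sigma=0.2):
    # Higher weight for pixels close to mid-gray, as in classic exposure fusion.
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(stack):
    """stack: list of float images in [0, 1] taken at different exposure levels."""
    weights = np.stack([well_exposedness(im) for im in stack])
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    bases, details = zip(*(decompose(im) for im in stack))
    fused_base = np.sum(weights * np.stack(bases), axis=0)        # weighted base fusion
    details = np.stack(details)
    idx = np.abs(details).argmax(axis=0)                          # keep strongest detail
    fused_detail = np.take_along_axis(details, idx[None], axis=0)[0]
    return np.clip(fused_base + fused_detail, 0.0, 1.0)

exposures = [np.random.rand(128, 128) * s for s in (0.3, 0.6, 0.9)]
print(fuse_exposures(exposures).shape)   # (128, 128)
```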

4.
Entropy (Basel) ; 20(7)2018 Jul 11.
Article in English | MEDLINE | ID: mdl-33265611

ABSTRACT

Multi-modality image fusion provides more comprehensive and sophisticated information in modern medical diagnosis, remote sensing, video surveillance, and other applications. Traditional multi-scale transform (MST)-based image fusion solutions have difficulty selecting the decomposition level and suffer from contrast loss in the fused image, while traditional sparse-representation (SR)-based image fusion methods suffer from the weak representation ability of a fixed dictionary. To overcome these deficiencies of MST- and SR-based methods, this paper proposes an image fusion framework that integrates the nonsubsampled contourlet transform (NSCT) with sparse representation. In this framework, NSCT decomposes the source images into low- and high-pass coefficients; the low- and high-pass coefficients are fused using SR and the sum-modified-Laplacian (SML), respectively; and the inverse NSCT is applied to the fused coefficients to obtain the final fused image. Principal component analysis (PCA) is used in dictionary training to reduce the dimension of the learned dictionary and the computational cost, and a novel SML-based high-pass fusion rule is applied to suppress pseudo-Gibbs phenomena around singularities of the fused image. Compared to three mainstream image fusion solutions, the proposed solution achieves better performance in structural similarity and detail preservation.
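
A small sketch of the sum-modified-Laplacian (SML) rule used for the high-pass coefficients. The NSCT decomposition and the SR-based low-pass fusion are omitted; only the "pick the coefficient whose neighbourhood has larger SML" step is shown, with an assumed 3x3 window.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(c):
    ml = np.zeros_like(c, dtype=np.float64)
    ml[1:-1, :] += np.abs(2 * c[1:-1, :] - c[:-2, :] - c[2:, :])   # vertical term
    ml[:, 1:-1] += np.abs(2 * c[:, 1:-1] - c[:, :-2] - c[:, 2:])   # horizontal term
    return ml

def sml(c, window=3):
    # Sum of the modified Laplacian over a local window.
    return uniform_filter(modified_laplacian(c), window) * window ** 2

def fuse_highpass(coef_a, coef_b):
    # Per position, keep the coefficient with the larger local SML (higher focus/detail).
    return np.where(sml(coef_a) >= sml(coef_b), coef_a, coef_b)

band_a, band_b = np.random.randn(64, 64), np.random.randn(64, 64)
print(fuse_highpass(band_a, band_b).shape)   # (64, 64)
```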

5.
Entropy (Basel) ; 20(12)2018 Dec 06.
Article in English | MEDLINE | ID: mdl-33266659

ABSTRACT

Multi-exposure image fusion methods are often applied to fuse low-dynamic-range images taken of the same scene at different exposure levels. The fused images not only contain more color and detail information, but also reproduce visual effects close to those perceived by the human eye. This paper proposes a novel multi-exposure image fusion (MEF) method based on adaptive patch structure. The proposed algorithm combines image cartoon-texture decomposition, image patch structure decomposition, and the structural similarity index to improve the local contrast of the image. Moreover, the proposed method can capture more detailed information from the source images and produce more vivid high-dynamic-range (HDR) images. Specifically, image texture entropy values are used to evaluate local image information for adaptive selection of the image patch size. An intermediate fused image is obtained by the proposed structure patch decomposition algorithm and is then optimized using the structural similarity index to obtain the final fused HDR image. Comparative experiments show that the proposed method obtains high-quality HDR images with better visual effects and more detailed information.
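
A minimal sketch of the patch-structure decomposition commonly used in patch-based MEF: each patch is split into mean intensity, signal strength (contrast), and a unit-norm structure component, the three parts are fused separately, and the patch is reassembled. The paper's adaptive patch-size selection via texture entropy, the cartoon-texture decomposition, and the SSIM refinement are omitted, and the exposure weights below are assumptions.

```python
import numpy as np

def decompose_patch(p):
    l = p.mean()                      # mean intensity
    d = p - l
    c = np.linalg.norm(d)             # signal strength (contrast)
    s = d / (c + 1e-12)               # unit-norm structure component
    return l, c, s

def fuse_patches(patches, exposure_weights):
    ls, cs, ss = zip(*(decompose_patch(p) for p in patches))
    w = np.asarray(exposure_weights, dtype=float)
    w /= w.sum()
    l_f = float(np.dot(w, ls))                    # weighted mean intensity
    c_f = max(cs)                                 # keep the strongest contrast
    s_f = sum(wi * si for wi, si in zip(w, ss))
    s_f /= np.linalg.norm(s_f) + 1e-12            # renormalise the structure
    return c_f * s_f + l_f                        # reassembled fused patch

patches = [np.random.rand(8, 8) for _ in range(3)]
print(fuse_patches(patches, [0.2, 0.5, 0.3]).shape)   # (8, 8)
```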

6.
Comput Biol Med ; 172: 108284, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38503086

ABSTRACT

3D MRI brain tumor segmentation is of great significance in clinical diagnosis and treatment. Accurate segmentation results are critical for determining the localization and spatial distribution of brain tumors in 3D MRI. However, most existing methods mainly focus on extracting global semantic features from the spatial and depth dimensions of a 3D volume, while ignoring voxel information, inter-layer connections, and detailed features. A 3D brain tumor segmentation network, SDV-TUNet (Sparse Dynamic Volume TransUNet), based on an encoder-decoder architecture is proposed to achieve accurate segmentation by effectively combining voxel information, inter-layer feature connections, and intra-axis information. Volumetric data is fed into a 3D network with extended depth modeling for dense prediction using two modules: a sparse dynamic (SD) encoder-decoder module and a multi-level edge feature fusion (MEFF) module. The SD encoder-decoder module extracts global spatial semantic features for brain tumor segmentation, employing multi-head self-attention and sparse dynamic adaptive fusion in a 3D extended shifted-window strategy. In the encoding stage, dynamic perception of regional connections and multi-axis information interactions are realized through local tight correlations and long-range sparse correlations. The MEFF module fuses multi-level local edge information in a layer-by-layer incremental manner and connects the fused features to the decoder module through skip connections to enhance the propagation of spatial edge information. The proposed method is applied to the BraTS2020 and BraTS2021 benchmarks, and the experimental results show its superior performance compared with state-of-the-art brain tumor segmentation methods. The source codes of the proposed method are available at https://github.com/SunMengw/SDV-TUNet.
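
A very reduced PyTorch sketch of the encoder-decoder plus skip-connection layout described above. The sparse-dynamic attention and the MEFF edge-fusion details are replaced by plain 3D convolutions, so this only illustrates how multi-level features flow from encoder to decoder, not the authors' SDV-TUNet; all widths and depths are assumptions.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.InstanceNorm3d(cout), nn.ReLU(inplace=True))

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch=4, n_classes=3, width=8):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, width), block(width, width * 2)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.dec1 = block(width * 3, width)      # concatenated skip + upsampled features
        self.head = nn.Conv3d(width, n_classes, 1)

    def forward(self, x):
        f1 = self.enc1(x)                        # high-resolution features (skip branch)
        f2 = self.enc2(self.pool(f1))            # deeper, lower-resolution features
        d1 = self.dec1(torch.cat([self.up(f2), f1], dim=1))
        return self.head(d1)                     # per-voxel class logits

vol = torch.randn(1, 4, 32, 32, 32)              # e.g. 4 MRI modalities
print(TinyUNet3D()(vol).shape)                   # torch.Size([1, 3, 32, 32, 32])
```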


Subjects
Brain Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Benchmarking , Neuroimaging , Semantics , Image Processing, Computer-Assisted
7.
Front Neurorobot ; 17: 1203962, 2023.
Article in English | MEDLINE | ID: mdl-37304664

ABSTRACT

As a type of biometric recognition, palmprint recognition uses unique discriminative features on a person's palm to identify that person. It has attracted much attention because of its advantages of contactlessness, stability, and security. Recently, many palmprint recognition methods based on convolutional neural networks (CNNs) have been proposed. However, convolutional neural networks are limited by the size of the convolutional kernel and lack the ability to extract global information from palmprints. This paper proposes a palmprint recognition framework, Transformer-GLGAnet, based on the integration of a CNN and a Transformer, which takes advantage of the CNN's local information extraction and the Transformer's global modeling capabilities. A gating mechanism and an adaptive feature fusion module are also designed for palmprint feature extraction: the gating mechanism filters features via a feature selection algorithm, and the adaptive feature fusion module fuses them with the features extracted by the backbone network. Extensive experiments on two datasets show a recognition accuracy of 98.5% for 12,000 palmprints in the Tongji University dataset and 99.5% for 600 palmprints in the Hong Kong Polytechnic University dataset, demonstrating that the proposed method outperforms existing methods on both palmprint recognition tasks. The source codes will be available at https://github.com/Ywatery/GLnet.git.
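
A PyTorch sketch of a gated adaptive feature-fusion step in the spirit described above: a learned gate decides, per feature dimension, how much of the CNN (local) branch and the Transformer (global) branch to keep. The dimension and the exact gating form are illustrative assumptions, not the GLGAnet design.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, cnn_feat, trans_feat):
        g = self.gate(torch.cat([cnn_feat, trans_feat], dim=-1))   # gate values in (0, 1)
        return g * cnn_feat + (1.0 - g) * trans_feat               # convex combination

fusion = GatedFusion(dim=512)
local_f, global_f = torch.randn(8, 512), torch.randn(8, 512)
print(fusion(local_f, global_f).shape)   # torch.Size([8, 512])
```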

8.
Math Biosci Eng ; 20(10): 18248-18266, 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-38052557

ABSTRACT

Real-time and efficient driver distraction detection is of great importance for road traffic safety and assisted driving. Designing a real-time lightweight model is crucial for in-vehicle edge devices with limited computational resources. However, most existing approaches focus on lighter and more efficient architectures, ignoring the loss of tiny-target detection performance that comes with lightweighting. In this paper, we present MTNet, a lightweight detector for driver distraction detection scenarios. MTNet consists of a multidimensional adaptive feature extraction block and a lightweight feature fusion block, and utilizes the IoU-NWD weighted loss function, all while considering the accuracy gain for tiny-target detection. In the feature extraction component, a lightweight backbone network is employed in conjunction with four attention mechanisms strategically integrated across the kernel space, which raises the performance ceiling of the lightweight network. The lightweight feature fusion module is designed to reduce computational complexity and memory access, and the interaction of channel information is improved through lightweight arithmetic operations. Additionally, the CFSM and EPIEM modules are employed to minimize redundant feature map computations and strike a better balance between model weight and accuracy. Finally, the IoU-NWD weighted loss function is formulated to enable more effective detection of tiny targets. We assess the performance of the proposed method on the LDDB benchmark. The experimental results demonstrate that our proposed method outperforms multiple advanced detection models.
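
A hedged PyTorch sketch of one common way to combine an IoU term with a normalized Wasserstein distance (NWD) term for (x1, y1, x2, y2) boxes, since NWD is known to be less sensitive to small location errors on tiny objects. The constant C, the weight alpha, and the exact combination rule are assumptions, not necessarily MTNet's IoU-NWD definition.

```python
import torch

def iou(a, b, eps=1e-7):
    lt, rb = torch.max(a[:, :2], b[:, :2]), torch.min(a[:, 2:], b[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area = lambda x: (x[:, 2] - x[:, 0]) * (x[:, 3] - x[:, 1])
    return inter / (area(a) + area(b) - inter + eps)

def nwd(a, b, c=12.8):
    # Model each box as a 2-D Gaussian N([cx, cy], diag(w/2, h/2)^2).
    to_gauss = lambda x: torch.stack([(x[:, 0] + x[:, 2]) / 2, (x[:, 1] + x[:, 3]) / 2,
                                      (x[:, 2] - x[:, 0]) / 2, (x[:, 3] - x[:, 1]) / 2], dim=1)
    w2 = torch.linalg.norm(to_gauss(a) - to_gauss(b), dim=1)   # 2-Wasserstein distance
    return torch.exp(-w2 / c)                                  # similarity in (0, 1]

def iou_nwd_loss(pred, target, alpha=0.5):
    return alpha * (1 - iou(pred, target)) + (1 - alpha) * (1 - nwd(pred, target))

p = torch.tensor([[10., 10., 20., 20.]])
t = torch.tensor([[12., 11., 22., 21.]])
print(iou_nwd_loss(p, t))
```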

9.
Comput Biol Med ; 167: 107621, 2023 12.
Article in English | MEDLINE | ID: mdl-37907030

ABSTRACT

Drug-target affinity (DTA) prediction, as an emerging and effective method, is widely applied to explore the strength of drug-target interactions in drug development research. By predicting these interactions, researchers can assess the potential efficacy and safety of candidate drugs at an early stage, narrowing down the search space for therapeutic targets and accelerating the discovery and development of new drugs. However, existing DTA prediction models mainly use graphical representations of drug molecules that lack information on interactions between individual substructures, which affects prediction accuracy and model interpretability. Therefore, TDGraphDTA, which applies transformers and diffusion on drug graphs to DTA prediction, is introduced to predict drug-target interactions using multi-scale information interaction and graph optimization. An interactive module is integrated into the extraction of drug and target features at different granularity levels. A diffusion-model-based graph optimization module is proposed to improve the representation of molecular graph structures and enhance the interpretability of graph representations while obtaining optimal feature representations. In addition, TDGraphDTA improves the accuracy and reliability of predictions by capturing relationships and contextual information between molecular substructures. The performance of the proposed TDGraphDTA was verified on three publicly available benchmark datasets (Davis, Metz, and KIBA). Compared with state-of-the-art baseline models, it achieved better results in terms of consistency index, R-squared, and other metrics. Furthermore, compared with some existing methods, the proposed TDGraphDTA is shown to have better structure-capturing capabilities, as visualized using Grad-AAM toxicity labels in the ToxCast dataset. The corresponding source codes are available at https://github.com/Lamouryz/TDGraph.
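
An illustrative PyTorch sketch of a drug/target interaction step: cross-attention lets drug-substructure tokens attend to protein-residue tokens, which is one way to realize "multi-scale information interaction". The graph encoder, the diffusion-based graph optimization, and the affinity head are omitted; the dimensions and token counts are assumptions, not TDGraphDTA's actual configuration.

```python
import torch
import torch.nn as nn

class CrossInteraction(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, drug_tokens, target_tokens):
        # Each drug substructure token queries the protein sequence representation.
        ctx, _ = self.attn(drug_tokens, target_tokens, target_tokens)
        return self.norm(drug_tokens + ctx)          # residual connection + normalisation

drug = torch.randn(2, 32, 128)      # 32 substructure tokens per molecule
protein = torch.randn(2, 200, 128)  # 200 residue tokens per target
print(CrossInteraction()(drug, protein).shape)   # torch.Size([2, 32, 128])
```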


Subjects
Benchmarking , Drug Development , Reproducibility of Results , Diffusion , Software
10.
J Imaging ; 8(5)2022 Apr 20.
Article in English | MEDLINE | ID: mdl-35621882

ABSTRACT

Although deep learning approaches are able to generate generic image features from massive labeled data, discriminative handcrafted features still have advantages in providing explicit domain knowledge and reflecting intuitive visual understanding. Much of the existing research focuses on integrating both handcrafted features and deep networks to leverage the benefits. However, the issues of parameter quality have not been effectively solved in existing applications of handcrafted features in deep networks. In this research, we propose a method that enriches deep network features by utilizing the injected discriminative shape features (generic edge tokens and curve partitioning points) to adjust the network's internal parameter update process. Thus, the modified neural networks are trained under the guidance of specific domain knowledge, and they are able to generate image representations that incorporate the benefits from both handcrafted and deep learned features. The comparative experiments were performed on several benchmark datasets. The experimental results confirmed our method works well on both large and small training datasets. Additionally, compared with existing models using either handcrafted features or deep network representations, our method not only improves the corresponding performance, but also reduces the computational costs.
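
A simplified PyTorch sketch of the general hybrid-feature idea: handcrafted shape descriptors are concatenated with learned CNN features before the classifier, so the handcrafted signal influences the gradients flowing through the network. The paper's actual injection scheme (guiding the parameter-update process with generic edge tokens and curve partitioning points) is more specific; the descriptor dimension and the tiny backbone below are assumptions.

```python
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, handcrafted_dim=64, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())          # -> 16-dim learned feature
        self.head = nn.Linear(16 + handcrafted_dim, n_classes)

    def forward(self, image, handcrafted):
        feat = torch.cat([self.cnn(image), handcrafted], dim=1)   # learned + handcrafted
        return self.head(feat)

model = HybridClassifier()
imgs, shape_desc = torch.randn(4, 3, 64, 64), torch.randn(4, 64)
print(model(imgs, shape_desc).shape)   # torch.Size([4, 10])
```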

11.
Front Neurosci ; 16: 1009581, 2022.
Article in English | MEDLINE | ID: mdl-36188458

ABSTRACT

Medical image segmentation has important auxiliary significance for clinical diagnosis and treatment. Most existing medical image segmentation solutions adopt convolutional neural networks (CNNs). Although these existing solutions can achieve good image segmentation performance, CNNs focus on local information and ignore global image information. Since the Transformer can encode the whole image, it has good global modeling ability and is effective for the extraction of global information. Therefore, this paper proposes a hybrid feature extraction network into which CNNs and a Transformer are integrated to utilize their respective advantages in feature extraction. To enhance low-dimensional texture features, this paper also proposes a multi-dimensional statistical feature extraction module to fully fuse the features extracted by the CNNs and the Transformer and enhance the segmentation performance on medical images. The experimental results confirm that the proposed method achieves better results in brain tumor segmentation and ventricle segmentation than state-of-the-art solutions.
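
A PyTorch sketch of a channel-wise statistical feature module: per-channel mean, standard deviation, and maximum are computed and used to re-weight a fused CNN/Transformer feature map. The statistics chosen and the re-weighting rule are assumptions meant to illustrate the idea of a statistical feature extraction module, not the paper's exact design.

```python
import torch
import torch.nn as nn

class StatFeature(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 * channels, channels), nn.Sigmoid())

    def forward(self, x):                          # x: (B, C, H, W) fused feature map
        flat = x.flatten(2)                        # (B, C, H*W)
        stats = torch.cat([flat.mean(-1), flat.std(-1), flat.amax(-1)], dim=1)
        weights = self.mlp(stats).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return x * weights                         # statistically re-weighted features

feat = torch.randn(2, 32, 64, 64)
print(StatFeature(32)(feat).shape)   # torch.Size([2, 32, 64, 64])
```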

12.
J Imaging ; 7(4)2021 Mar 25.
Article in English | MEDLINE | ID: mdl-34460512

ABSTRACT

As a crucial task in surveillance and security, person re-identification (re-ID) aims to identify targeted pedestrians across multiple images captured by non-overlapping cameras. However, existing person re-ID solutions face two main challenges: the lack of pedestrian identification labels in the captured images, and the domain-shift issue between different domains. A generative adversarial network (GAN)-based self-training framework with progressive augmentation (SPA) is proposed to obtain robust features of the unlabeled data from the target domain, according to the pre-knowledge of the labeled data from the source domain. Specifically, the proposed framework consists of two stages: a style transfer stage (STrans) and a self-training stage (STrain). First, the target data is complemented by a camera style transfer algorithm in the STrans stage, in which CycleGAN and a Siamese network are integrated to preserve the unsupervised self-similarity (the similarity of an image before and after transformation) and domain dissimilarity (the dissimilarity between a transferred source image and a target image). Second, clustering and classification are alternately applied to progressively enhance the model performance in the STrain stage, in which both global and local features of the target-domain images are obtained. Compared with state-of-the-art methods, the proposed method achieves competitive accuracy on two existing datasets.
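
A condensed sketch of the self-training stage: features of unlabeled target-domain images are clustered, the cluster indices become pseudo-labels, and the model would then be fine-tuned on them before the next round. The style-transfer stage (CycleGAN plus the Siamese constraint) and the progressive augmentation schedule are omitted, and the cluster count is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label_round(extract_features, images, n_identities=50):
    feats = np.stack([extract_features(img) for img in images])   # (N, D) embeddings
    labels = KMeans(n_clusters=n_identities, n_init=10).fit_predict(feats)
    return labels      # used as pseudo-identities for the next fine-tuning round

# toy stand-in for a re-ID feature extractor and an unlabeled target-domain set
fake_extractor = lambda img: np.random.rand(256)
pseudo = pseudo_label_round(fake_extractor, [None] * 200)
print(pseudo.shape, pseudo.max() + 1)   # (200,) with up to 50 pseudo-identities
```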

13.
J Imaging ; 7(1)2021 Jan 07.
Article in English | MEDLINE | ID: mdl-34460577

ABSTRACT

Person re-identification (Re-ID) is challenging due to a host of factors: the variety of human poses, difficulties in aligning bounding boxes, and complex backgrounds, among others. This paper proposes a new framework called EXAM (EXtreme And Moderate feature embeddings) for Re-ID tasks. It relies on discriminative feature learning with attention-based guidance during training. Here, "Extreme" refers to salient human features and "Moderate" refers to common human features. In this framework, the two types of embeddings are computed by global max-pooling and average-pooling operations, respectively, and are then jointly supervised by multiple triplet and cross-entropy loss functions. The processes of deducing attention from the learned embeddings and of discriminative feature learning are incorporated into this end-to-end framework and benefit from each other. Comparative experiments and ablation studies show that the proposed EXAM is effective and that its learned feature representation reaches state-of-the-art performance.
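
A PyTorch sketch of the dual-embedding head described above: global max pooling yields the "extreme" embedding, global average pooling the "moderate" one, and the two are supervised jointly with triplet and cross-entropy losses. The backbone, dimensions, identity count, and loss weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ExamHead(nn.Module):
    def __init__(self, channels=512, n_ids=751):
        super().__init__()
        self.classifier = nn.Linear(2 * channels, n_ids)

    def forward(self, feat_map):                                 # (B, C, H, W)
        extreme = feat_map.amax(dim=(2, 3))                      # salient (max-pooled) features
        moderate = feat_map.mean(dim=(2, 3))                     # common (average-pooled) features
        emb = torch.cat([extreme, moderate], dim=1)
        return emb, self.classifier(emb)

head = ExamHead()
triplet, ce = nn.TripletMarginLoss(margin=0.3), nn.CrossEntropyLoss()
fmap, ids = torch.randn(8, 512, 16, 8), torch.randint(0, 751, (8,))
emb, logits = head(fmap)
anchor, pos, neg = emb[0:1], emb[1:2], emb[2:3]                  # toy triplet
loss = triplet(anchor, pos, neg) + ce(logits, ids)               # joint supervision
print(float(loss))
```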

14.
Article in English | MEDLINE | ID: mdl-32640662

ABSTRACT

Shared autonomous vehicles (SAVs), which have several potential benefits, are an emerging innovative technology in the market. However, the successful operation of SAVs largely depends on the extent of travellers' intention to adopt them. This study aims to analyse the factors that influence the adoption of SAVs by integrating two theoretical perspectives: the unified theory of acceptance and use of technology 2 (UTAUT2) and the theory of planned behaviour (TPB). A valid survey sample of 268 participants in Da Nang, Vietnam was collected, and structural equation modelling was subsequently deployed to test the research model. The results indicate that the five UTAUT2 dimensions (performance expectation, effort expectation, habit, price value and hedonic motivation) are mediated by attitudes toward using SAVs. Further, the TPB constructs, namely attitude, subjective norm and perceived behavioural control, along with perceived facilitating conditions, are all effective predictors of the intention to use SAVs. The findings of this study can serve as a crucial resource for transport operators and the government to enhance transportation services and policies.
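
A hedged sketch of how the structural part of such a model could be specified with the semopy package, assuming each construct has already been reduced to an observed composite score (the column names and path structure below are hypothetical and do not reproduce the paper's full latent-variable model or its exact hypotheses).

```python
import numpy as np
import pandas as pd
import semopy

# Hypothetical path model: UTAUT2 dimensions -> attitude; attitude + TPB constructs
# and facilitating conditions -> behavioural intention (BI).
paths = """
ATT ~ PE + EE + HB + PV + HM
BI ~ ATT + SN + PBC + FC
"""

# Toy data standing in for 268 survey responses with composite construct scores.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(268, 9)),
                  columns=["ATT", "BI", "PE", "EE", "HB", "PV", "HM", "SN", "PBC"])
df["FC"] = rng.normal(size=268)

model = semopy.Model(paths)
model.fit(df)
print(model.inspect())   # path estimates, standard errors, p-values
```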


Subjects
Attitude , Intention , Adolescent , Adult , Female , Humans , Male , Middle Aged , Motivation , Surveys and Questionnaires , Vietnam , Young Adult