Results 1 - 4 of 4
1.
Neural Netw ; 145: 209-220, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34768091

ABSTRACT

Although significant progress has been made in synthesizing high-quality, visually realistic face images with unconditional Generative Adversarial Networks (GANs), there is still little control over the generation process for semantic face editing. In this paper, we propose a novel learning framework, called GuidedStyle, to achieve semantic face editing on a pretrained StyleGAN by guiding the image generation process with a knowledge network. Furthermore, we allow an attention mechanism in the StyleGAN generator to adaptively select a single layer for style manipulation. As a result, our method is able to perform disentangled and controllable edits along various attributes, including smiling, eyeglasses, gender, mustache, hair color, and attractiveness. Both qualitative and quantitative results demonstrate the superiority of our method over other competing methods for semantic face editing. Moreover, we show that our model can also be applied to different types of real and artistic face editing, demonstrating strong generalization ability.
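For illustration, the sketch below shows one way an attention mechanism could select a single generator layer for style manipulation, as described above: a softmax over learned layer logits concentrates a learned edit direction on one layer of the per-layer latent codes. The class name, the 18-layer / 512-dimensional W+ assumption, and the edit rule are all assumptions for this sketch, not the authors' GuidedStyle implementation.

```python
# Minimal PyTorch sketch of attention-based single-layer style editing (assumptions noted above).
import torch
import torch.nn as nn

class LayerSelectiveEdit(nn.Module):
    """Apply a learned attribute direction to per-layer latent codes, weighted by a
    softmax attention that can concentrate the edit on a single generator layer."""
    def __init__(self, num_layers: int = 18, w_dim: int = 512):
        super().__init__()
        self.direction = nn.Parameter(0.01 * torch.randn(w_dim))   # edit direction in W space
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))  # attention over generator layers

    def forward(self, w_plus: torch.Tensor, strength: float = 1.0) -> torch.Tensor:
        # w_plus: (batch, num_layers, w_dim) per-layer latent codes.
        attn = torch.softmax(self.layer_logits, dim=0)              # (num_layers,)
        edit = attn[None, :, None] * self.direction[None, None, :]  # (1, num_layers, w_dim)
        return w_plus + strength * edit                             # edited latents for the generator
```

In such a setup the edited latent codes would then be fed to the pretrained StyleGAN generator, which is omitted here.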


Subject(s)
Neural Networks, Computer , Semantics , Image Processing, Computer-Assisted
2.
Sci Rep ; 11(1): 15907, 2021 08 05.
Article in English | MEDLINE | ID: mdl-34354151

ABSTRACT

Programmed cell death ligand-1 (PD-L1) expression assessed by immunohistochemistry (IHC) assays is a predictive marker of response to anti-PD-1/PD-L1 therapy. With the growing use of anti-PD-1/PD-L1 inhibitor drugs, quantitative assessment of PD-L1 expression has become a new task for pathologists. Manually counting PD-L1-positive stained tumor cells is a subjective and time-consuming process. In this paper, we developed a new computer-aided Automated Tumor Proportion Scoring System (ATPSS) and evaluated the comparability of its image analysis with pathologist scores. A three-stage process combining image processing and deep learning techniques was designed to mimic the actual diagnostic workflow of pathologists. We conducted a multi-reader, multi-case study to evaluate the agreement between pathologists and ATPSS. Fifty-one surgically resected lung squamous cell carcinomas were prepared and stained using the Dako PD-L1 (22C3) assay, and six pathologists with different experience levels participated in the study. The Tumor Proportion Score (TPS) predicted by the proposed model had a high and statistically significant correlation with sub-specialty pathologists' scores, with a Mean Absolute Error (MAE) of 8.65 (95% confidence interval (CI): 6.42-10.90) and a Pearson Correlation Coefficient (PCC) of 0.9436 ([Formula: see text]), and the performance of our method on PD-L1-positive cases surpassed that of non-subspecialty and trainee pathologists. These experimental results indicate that the proposed automated system can be a powerful tool to improve pathologists' PD-L1 TPS assessment.
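The standard Tumor Proportion Score definition and the two agreement metrics reported above (MAE and Pearson correlation) can be sketched in a few lines. The cell counts and per-slide scores below are made-up placeholders, not data from the study.

```python
# Sketch of the TPS computation and the agreement metrics quoted in the abstract.
# All numbers are hypothetical placeholders, not study data.
import numpy as np
from scipy.stats import pearsonr

def tumor_proportion_score(pdl1_positive_tumor_cells: int, viable_tumor_cells: int) -> float:
    """TPS = PD-L1 positive tumor cells / all viable tumor cells * 100."""
    return 100.0 * pdl1_positive_tumor_cells / max(viable_tumor_cells, 1)

# Hypothetical per-slide scores: automated system (ATPSS) vs. sub-specialty pathologist.
auto_tps = np.array([5.0, 42.0, 80.0, 15.0, 60.0])
pathologist_tps = np.array([8.0, 40.0, 75.0, 20.0, 55.0])

mae = np.mean(np.abs(auto_tps - pathologist_tps))    # Mean Absolute Error
pcc, p_value = pearsonr(auto_tps, pathologist_tps)   # Pearson Correlation Coefficient
print(f"MAE = {mae:.2f}, PCC = {pcc:.4f} (p = {p_value:.3g})")
```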


Subject(s)
B7-H1 Antigen/genetics , Carcinoma, Squamous Cell/diagnosis , Gene Expression Profiling/methods , Adult , Aged , Automation, Laboratory/methods , B7-H1 Antigen/analysis , B7-H1 Antigen/metabolism , Biological Assay , Biomarkers, Tumor/metabolism , Carcinoma, Non-Small-Cell Lung/diagnosis , Carcinoma, Non-Small-Cell Lung/genetics , Carcinoma, Squamous Cell/genetics , China , Female , Gene Expression/genetics , Humans , Immunohistochemistry/methods , Lung/pathology , Lung Neoplasms/pathology , Male , Middle Aged , Transcriptome/genetics
3.
Neural Netw ; 136: 233-243, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33257223

ABSTRACT

Recently, convolutional neural network (CNN)-based facial landmark detection methods have achieved great success. However, most existing CNN-based facial landmark detection methods do not attempt to activate multiple correlated facial parts and learn distinct semantic features from them; consequently, they cannot accurately model the relationships among local details or fully explore more discriminative and fine-grained semantic features, and thus suffer from partial occlusions and large pose variations. To address these problems, we propose a cross-order cross-semantic deep network (CCDN) to boost semantic feature learning for robust facial landmark detection. Specifically, a cross-order two-squeeze multi-excitation (CTM) module is proposed to introduce cross-order channel correlations for learning more discriminative representations and activating multiple attention-specific parts. Moreover, a novel cross-order cross-semantic (COCS) regularizer is designed to drive the network to learn cross-order cross-semantic features from the different activations for facial landmark detection. By integrating the CTM module and the COCS regularizer, the proposed CCDN can effectively activate and learn finer and complementary cross-order cross-semantic features, improving the accuracy of facial landmark detection under extremely challenging scenarios. Experimental results on challenging benchmark datasets demonstrate the superiority of our CCDN over state-of-the-art facial landmark detection methods.
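As a rough, illustrative analogue of the CTM module's "two-squeeze, multi-excitation" idea, the block below squeezes a feature map into first- and second-order channel statistics and feeds them to several excitation branches, each producing its own channel-attention map and re-weighted activation. The exact CCDN design, including how cross-order correlations are computed, differs; every name and size here is an assumption.

```python
# Illustrative two-squeeze (mean + variance), multi-excitation channel attention (PyTorch).
# This approximates the spirit of the CTM module; it is not the paper's implementation.
import torch
import torch.nn as nn

class TwoSqueezeMultiExcitation(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, num_branches: int = 2):
        super().__init__()
        self.excitations = nn.ModuleList([
            nn.Sequential(
                nn.Linear(2 * channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )
            for _ in range(num_branches)  # several excitations -> several attention-specific parts
        ])

    def forward(self, x: torch.Tensor):
        b, c, _, _ = x.shape
        mean = x.mean(dim=(2, 3))                 # first-order squeeze: (b, c)
        var = x.var(dim=(2, 3), unbiased=False)   # second-order squeeze: (b, c)
        stats = torch.cat([mean, var], dim=1)     # (b, 2c)
        # Each branch re-weights the channels differently, yielding multiple activations.
        return [x * branch(stats).view(b, c, 1, 1) for branch in self.excitations]
```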


Subject(s)
Automated Facial Recognition/methods , Neural Networks, Computer , Semantic Web , Face , Humans , Semantics
4.
IEEE Trans Med Imaging ; 39(6): 1930-1941, 2020 06.
Article in English | MEDLINE | ID: mdl-31880545

ABSTRACT

Deep learning approaches are widely applied to histopathological image analysis due to the impressive levels of performance they achieve. However, when dealing with high-resolution histopathological images, using the original image as input to a deep learning model is computationally expensive, while downsampling the original image incurs information loss. Some hard-attention based approaches have emerged that select candidate lesion regions from images to avoid processing the whole original image. However, these hard-attention based approaches usually take a long time to converge under weak guidance, and uninformative patches may be fed to the classifier. To overcome this problem, we propose a deep selective attention approach that aims to select valuable regions in the original images for classification. In our approach, a decision network is developed to decide where to crop and whether the cropped patch is necessary for classification. The selected patches are then used to train the classification network, which in turn provides feedback to the decision network to update its selection policy. With such a co-evolution training strategy, we show that our approach achieves a fast convergence rate and high classification accuracy. Our approach is evaluated on a public breast cancer histopathological image database, where it demonstrates superior performance compared to state-of-the-art deep learning approaches, achieving approximately 98% classification accuracy while taking only 50% of the training time of the previous hard-attention approach.
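The co-evolution idea described above (a decision network that chooses where to crop and whether to keep the crop, with the classifier's output fed back as a training signal) can be sketched as a REINFORCE-style loop. Everything below (networks, crop size, reward definition, dummy data) is a simplified placeholder under those assumptions, not the paper's method or code.

```python
# Schematic co-evolution loop: a decision (policy) network proposes a crop and a keep/skip
# action; the classifier trains on kept crops and its confidence rewards the policy.
# All components below are simplified assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecisionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 8, 3, stride=4), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.loc_head = nn.Linear(8, 2)    # crop centre (x, y), normalized to [0, 1]
        self.keep_head = nn.Linear(8, 2)   # keep / skip logits

    def forward(self, thumbnail):
        h = self.backbone(thumbnail)
        return torch.sigmoid(self.loc_head(h)), self.keep_head(h)

def crop_at(image, centre, size=64):
    """Crop a (3, H, W) image around a normalized centre, kept inside the image."""
    _, H, W = image.shape
    cy = int(centre[1].item() * (H - size))
    cx = int(centre[0].item() * (W - size))
    return image[:, cy:cy + size, cx:cx + size]

decision = DecisionNet()
classifier = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
opt_d = torch.optim.Adam(decision.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-4)

image, label = torch.rand(3, 512, 512), torch.tensor([1])   # dummy "high-resolution" image + label
thumbnail = F.interpolate(image[None], size=128)            # low-resolution view for the decision net

for step in range(10):
    centre, keep_logits = decision(thumbnail)
    keep_dist = torch.distributions.Categorical(logits=keep_logits)
    keep = keep_dist.sample()
    if keep.item() == 1:                                      # patch judged necessary: train classifier
        patch = crop_at(image, centre[0])[None]
        logits = classifier(patch)
        cls_loss = F.cross_entropy(logits, label)
        opt_c.zero_grad(); cls_loss.backward(); opt_c.step()
        reward = logits.detach().softmax(-1)[0, label].item() # classifier confidence as feedback
    else:
        reward = 0.0                                          # skipped patch earns no reward
    # REINFORCE update for the keep/skip policy (for brevity, the location head
    # receives no learning signal in this sketch).
    policy_loss = -keep_dist.log_prob(keep) * reward
    opt_d.zero_grad(); policy_loss.backward(); opt_d.step()
```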


Subject(s)
Breast Neoplasms , Deep Learning , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Databases, Factual , Female , Humans , Image Processing, Computer-Assisted