Results 1 - 20 of 26
1.
Mol Biol Rep ; 51(1): 196, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38270719

ABSTRACT

Due to its role in apoptosis, differentiation, cell cycle arrest, and DNA damage repair under stress responses (oxidative stress, hypoxia, chemotherapeutic drugs, and UV irradiation or radiotherapy), FOXO3a is considered a key tumor suppressor that determines radiotherapeutic and chemotherapeutic responses in cancer cells. Mutations in the FOXO3a gene are rare, even in cancer cells. Post-translational regulation is the main mechanism for inactivating FOXO3a. The subcellular localization, stability, transcriptional activity, and DNA binding affinity of FOXO3a can be modulated via various post-translational modifications, including phosphorylation, acetylation, and interactions with other transcription factors or regulators. This review summarizes how proteins that interact with FOXO3a engage in cancer progression.


Subjects
Forkhead Box Protein O3, Neoplasms, Humans, Acetylation, Apoptosis, Cell Differentiation, Neoplasms/genetics, Transcription Factors, Forkhead Box Protein O3/genetics
2.
Curr Issues Mol Biol ; 45(12): 9943-9960, 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38132467

ABSTRACT

Enhanced ultraviolet-B (UV-B) radiation promotes anthocyanin biosynthesis in the leaves, flowers, and fruits of plants. However, the effects and underlying mechanisms of enhanced UV-B radiation on the accumulation of anthocyanins in the tubers of potato (Solanum tuberosum L.) remain unclear. Herein, reciprocal grafting experiments were first conducted using colored and uncolored potatoes, demonstrating that the anthocyanins in potato tubers are synthesized in situ and not transported from the leaves to the tubers. Furthermore, enhanced UV-B radiation (2.5 kJ·m⁻²·d⁻¹) applied to potato stems and leaves significantly increased the contents of total anthocyanin and of the monomers pelargonidin and peonidin in tubers of the red-fleshed potato '21-1', compared to the untreated control. A comparative transcriptomic analysis showed that there were 2139 differentially expressed genes (DEGs) under UV-B treatment in comparison to the control, including 1724 up-regulated and 415 down-regulated genes. The anthocyanin-related enzymatic genes in the tubers, such as PAL, C4H, 4CL, CHS, CHI, F3H, F3'5'H, ANS, UFGTs, and GSTs, were up-regulated under UV-B treatment, except for a down-regulated F3'H. A known anthocyanin-related transcription factor, StbHLH1, also showed a significantly higher expression level under UV-B treatment. Moreover, six differentially expressed MYB transcription factors were strongly correlated with almost all anthocyanin-related enzymatic genes. Additionally, a DEG enrichment analysis suggested that jasmonic acid might be a potential UV-B signaling molecule involved in the UV-B-induced anthocyanin biosynthesis in tubers. These results indicate that enhanced UV-B radiation on potato stems and leaves induces anthocyanin accumulation in the tubers by regulating the enzymatic genes and transcription factors involved in anthocyanin biosynthesis. This study provides novel insights into the mechanisms by which enhanced UV-B radiation regulates anthocyanin biosynthesis in potato tubers.

3.
IEEE Trans Image Process ; 30: 8797-8810, 2021.
Article in English | MEDLINE | ID: mdl-34673487

ABSTRACT

In this paper, we propose a novel controllable sketch-to-image translation framework that allows users to interactively and robustly synthesize and edit face images with hand-drawn sketches. Inspired by the coarse-to-fine painting process of human artists, we propose a novel dilation-based sketch refinement method that refines sketches at varying levels of coarseness without requiring real sketch training data. We further investigate multi-level refinement, which enables users to flexibly define how "reliable" the input sketch should be considered for the final output through a refinement-level control parameter, helping to balance the realism of the output against its structural consistency with the input sketch. This is realized by leveraging scale-aware style transfer to model and adjust the style features of sketches at different levels of coarseness. Moreover, advanced user controllability in terms of editing-region control, facial attribute editing, and spatially non-uniform refinement is further explored for fine-grained and semantic editing. We demonstrate the effectiveness of the proposed method in terms of visual quality and user controllability through extensive experiments, including qualitative and quantitative comparisons with state-of-the-art methods, ablation studies, and various applications.
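The dilation-based refinement idea above can be illustrated with a small sketch (not the authors' code): coarsening a binary sketch at several dilation radii with OpenCV, where the elliptical kernel and the radius values are assumptions made for the example.

    import cv2
    import numpy as np

    def coarsen_sketch(sketch, radii=(1, 3, 5)):
        # Return progressively coarser versions of a binary sketch
        # (255 = stroke, 0 = background); the radii are illustrative values.
        coarse = []
        for r in radii:
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * r + 1, 2 * r + 1))
            coarse.append(cv2.dilate(sketch, kernel))
        return coarse

    sketch = np.zeros((256, 256), dtype=np.uint8)
    cv2.line(sketch, (40, 60), (200, 180), 255, 1)    # a toy stroke
    pyramid = coarsen_sketch(sketch)                  # feed each level to the refinement network

Each coarser level corresponds to treating the input strokes as less "reliable", which is the trade-off the refinement-level control parameter exposes.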

4.
IEEE Trans Pattern Anal Mach Intell ; 43(11): 4161-4176, 2021 11.
Article in English | MEDLINE | ID: mdl-32365019

ABSTRACT

In this paper, we tackle the problem of pose-guided person image generation with unpaired data, which is challenging due to non-rigid spatial deformation. Instead of directly learning a fixed mapping between human bodies as previous methods do, we propose a new pathway that decomposes the single fixed mapping into two subtasks, namely semantic parsing transformation and appearance generation. First, to simplify the learning of non-rigid deformation, a semantic generative network is developed to transform semantic parsing maps between different poses. Second, guided by semantic parsing maps, we render the foreground and background images separately: a foreground generative network learns to synthesize semantic-aware textures, and a background generative network learns to predict missing background regions caused by pose changes. Third, we enable pseudo-label training with unpaired data and demonstrate that end-to-end training of the overall network further refines the semantic map prediction and, accordingly, the final results. Moreover, our method generalizes to other person image generation tasks defined on semantic maps, e.g., clothing texture transfer, controlled image manipulation, and virtual try-on. Experimental results on the DeepFashion and Market-1501 datasets demonstrate the superiority of our method, especially in preserving body shapes and clothing attributes, as well as rendering structure-coherent backgrounds.

5.
Article in English | MEDLINE | ID: mdl-31995485

ABSTRACT

With the prevalence of RGB-D cameras, multimodal video data have become more available for human action recognition. One main challenge for this task lies in how to effectively leverage their complementary information. In this work, we propose a Modality Compensation Network (MCN) to explore the relationships between different modalities and boost the representations for human action recognition. We regard RGB/optical flow videos as source modalities and skeletons as an auxiliary modality. Our goal is to extract more discriminative features from the source modalities with the help of the auxiliary modality. Built on deep Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, our model bridges data from the source and auxiliary modalities with a modality adaptation block to achieve adaptive representation learning, so that the network learns to compensate for the loss of skeletons at test time and even at training time. We explore multiple adaptation schemes to narrow the distance between the source and auxiliary modal distributions at different levels, according to the alignment of source and auxiliary data during training. In addition, skeletons are only required in the training phase; our model improves recognition performance using only source data at test time. Experimental results reveal that MCN outperforms state-of-the-art approaches on four widely used action recognition benchmarks.
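The abstract does not name the criterion used to narrow the distance between the source and auxiliary feature distributions; maximum mean discrepancy (MMD) is one common choice for this kind of distribution alignment, and the sketch below is only an illustrative stand-in, not the authors' modality adaptation block.

    import numpy as np

    def rbf_mmd2(x, y, sigma=1.0):
        # Squared MMD between feature sets x (n, d) and y (m, d) with an RBF kernel.
        def k(a, b):
            d2 = (a ** 2).sum(1)[:, None] + (b ** 2).sum(1)[None, :] - 2.0 * a @ b.T
            return np.exp(-d2 / (2.0 * sigma ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

    rng = np.random.default_rng(0)
    src_feat = rng.normal(0.0, 1.0, size=(128, 64))   # e.g., RGB/optical-flow features
    aux_feat = rng.normal(0.5, 1.0, size=(128, 64))   # e.g., skeleton features
    print(rbf_mmd2(src_feat, aux_feat))               # grows as the distributions drift apart

Minimizing such a term alongside the recognition loss would push the source features toward the auxiliary (skeleton) feature distribution during training.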

6.
IEEE Trans Pattern Anal Mach Intell ; 42(6): 1377-1393, 2020 Jun.
Article in English | MEDLINE | ID: mdl-30703011

ABSTRACT

Rain streaks, particularly in heavy rain, not only degrade visibility but also cause many computer vision algorithms to fail. In this paper, we address this visibility problem by focusing on single-image rain removal, even in the presence of dense rain streaks and rain-streak accumulation, which is visually similar to mist or fog. To achieve this, we introduce a new rain model and a deep learning architecture. Our rain model incorporates a binary rain map indicating rain-streak regions, and accommodates various shapes, directions, and sizes of overlapping rain streaks, as well as rain accumulation, to model heavy rain. Based on this model, we construct a multi-task deep network that jointly learns three targets: the binary rain-streak map, the rain streak layers, and the clean background, which is our ultimate output. To generate features that are invariant to rain streaks, we introduce a contextual dilated network, which is able to exploit regional contextual information. To handle various shapes and directions of overlapping rain streaks, our strategy is to use a recurrent process that progressively removes rain streaks. Our binary map provides a constraint and thus additional information for training our network. Extensive evaluation on real images, particularly in heavy rain, shows the effectiveness of our model and architecture.
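The rain model sketched above can be written compactly; the form below is an assumption consistent with the description (a binary map marking streak regions plus an accumulation term), not a quotation of the paper's exact equation:

    O = \alpha \odot \Big( B + \sum_{t=1}^{s} S_t \odot R \Big) + (1 - \alpha) \odot A

where O is the observed rainy image, B the clean background, S_t the individual overlapping streak layers, R the binary rain-streak map, A a global atmospheric (accumulation) term, \alpha a transmittance map, and \odot element-wise multiplication. The multi-task network then jointly predicts R, the streak layers, and B.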

7.
Genomics Proteomics Bioinformatics ; 17(3): 311-318, 2019 06.
Article in English | MEDLINE | ID: mdl-31465854

ABSTRACT

Next-generation sequencing has allowed identification of millions of somatic mutations in human cancer cells. A key challenge in interpreting cancer genomes is to distinguish drivers of cancer development among the identified genetic mutations. To address this issue, we present the first web-based application, the consensus cancer driver gene caller (C3), to identify consensus driver genes using six complementary strategies, i.e., frequency-based, machine-learning-based, functional-bias-based, clustering-based, statistical-model-based, and network-based strategies. This application allows users to specify customized operations when calling driver genes, and provides solid statistical evaluation and interpretable visualization of the integrated results. C3 is implemented in Python and is freely available for public use at http://drivergene.rwebox.com/c3.


Subjects
Algorithms, Neoplasms/genetics, Cluster Analysis, Humans, Internet, Machine Learning
8.
Article in English | MEDLINE | ID: mdl-30908221

ABSTRACT

As 3D scanning devices and depth sensors mature, point clouds have attracted increasing attention as a format for 3D object representation, with applications in various fields such as tele-presence, navigation, and heritage reconstruction. However, point clouds usually exhibit holes of missing data, mainly due to the limitations of acquisition techniques and complicated structure. Further, point clouds are defined on irregular non-Euclidean domains, which are challenging to handle, especially with conventional signal processing tools. Hence, leveraging recent advances in graph signal processing, we propose an efficient point cloud inpainting method that exploits both the local smoothness and the non-local self-similarity of point clouds. Specifically, we first propose a frequency interpretation in the graph nodal domain, based on which we derive the smoothing and denoising properties of a graph-signal smoothness prior in order to describe the local smoothness of point clouds. Second, we explore the characteristics of non-local self-similarity by globally searching for the area most similar to the missing region. The similarity metric between two areas is defined based on the direct component and the anisotropic graph total variation of the normals in each area. Finally, we formulate the hole-filling step as an optimization problem based on the selected most similar area, regularized by the graph-signal smoothness prior. In addition, we propose voxelization and automatic hole-detection methods applied to the point cloud prior to inpainting. Experimental results show that the proposed approach significantly outperforms four competing methods in both objective and subjective quality.
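As a rough illustration of the graph-signal smoothness prior mentioned above, the sketch below evaluates x^T L x for a signal defined on a point cloud; the k-nearest-neighbor graph and Gaussian edge weights are assumptions made for the example, not necessarily the construction used in the paper.

    import numpy as np

    def smoothness_prior(points, signal, k=6, sigma=0.05):
        # points: (n, 3) coordinates; signal: (n,) graph signal (e.g., a coordinate or color).
        n = len(points)
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        W = np.zeros((n, n))
        for i in range(n):
            nbrs = np.argsort(d2[i])[1:k + 1]               # skip the point itself
            W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
        W = np.maximum(W, W.T)                              # symmetrize the adjacency
        L = np.diag(W.sum(1)) - W                           # combinatorial graph Laplacian
        return float(signal @ L @ signal)                   # small value = smooth signal

    rng = np.random.default_rng(1)
    pts = rng.random((200, 3))
    print(smoothness_prior(pts, pts[:, 2]))                 # z-coordinate as the signal

Using x^T L x as a regularizer on the filled-in region favors geometry that varies smoothly over the local graph, which is the "local smoothness" half of the proposed prior.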

9.
Article in English | MEDLINE | ID: mdl-30640610

ABSTRACT

In this paper, we address rain removal from a single image, even in the presence of large rain streaks and rain-streak accumulation (where individual streaks cannot be seen and are thus visually similar to mist or fog). For rain streak removal, the mismatch between the streak sizes seen in the training and testing phases leads to poor performance, especially when there are large streaks. To mitigate this problem, we embed a hierarchical wavelet-transform representation into a recurrent rain removal process: 1) rain removal on the low-frequency component; 2) recurrent detail recovery on the high-frequency components under the guidance of the recovered low-frequency component. Benefiting from the recurrent multi-scale modeling of the wavelet-transform-like design, the proposed network trained on streaks of one size can adapt to larger ones, which significantly favors real rain streak removal. A dilated residual dense network is used as the basic model of the recurrent recovery process. The network includes multiple paths with different receptive fields and can thus make full use of multi-scale redundancy and utilize context information over large regions. Furthermore, to handle heavy rain cases where rain-streak accumulation is present, we construct a rain accumulation removal step that not only improves visibility but also enhances the details in dark regions. Evaluation on both synthetic and real images, particularly those containing large rain streaks and heavy accumulation, shows the effectiveness of our models, which significantly outperform the state-of-the-art methods.
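The hierarchical wavelet representation described above can be sketched with PyWavelets; the wavelet choice ('haar') and the single decomposition level are assumptions for illustration, and the restoration networks themselves are omitted.

    import numpy as np
    import pywt

    img = np.random.rand(256, 256)                 # placeholder for a rainy grayscale image
    low, (lh, hl, hh) = pywt.dwt2(img, 'haar')     # low-frequency band + three detail bands
    # 1) remove rain on `low` with the base network (not shown);
    # 2) recurrently recover details on lh/hl/hh guided by the restored low band;
    # 3) invert the transform to obtain the restored image.
    restored = pywt.idwt2((low, (lh, hl, hh)), 'haar')

Because the decomposition redistributes structure across scales, a network trained on one streak size can still cover larger streaks after this transform, which is consistent with the adaptation property claimed above.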

10.
IEEE Trans Image Process ; 28(2): 699-712, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30222570

ABSTRACT

In this paper, we address the problem of video rain removal by considering rain occlusion regions, i.e., regions with very low light transmittance for rain streaks. Unlike additive rain streaks, in such occlusion regions the details of the background are completely lost. Therefore, we propose a hybrid rain model to depict both rain streaks and occlusions. Integrating the hybrid model and useful motion segmentation context information, we present a Dynamic Routing Residue Recurrent Network (D3R-Net). D3R-Net first extracts spatial features with a residual network. The spatial features are then aggregated by recurrent units along the temporal axis. In the temporal fusion, the context information is embedded into the network in a "dynamic routing" way: a set of recurrent units handles the temporal fusion in given contexts, e.g., rain or non-rain regions, and in a given forward or backward pass one of these recurrent units is mainly activated. A context selection gate then detects the context and selects one of the temporally fused features generated by these units as the final fused feature. Finally, this feature plays the role of a "residual feature": it is combined with the spatial feature and used to reconstruct the negative rain streaks. In D3R-Net, we incorporate motion segmentation, which denotes whether a pixel belongs to fast-moving edges, and a rain-type indicator, which denotes whether a pixel belongs to rain streaks, rain occlusions, or non-rain regions, as the context variables. Extensive experiments on a series of synthetic and real videos with rain streaks verify not only the superiority of the proposed method over the state of the art but also the effectiveness of our network design and each of its components.
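One plausible way to write the hybrid rain model described above, given here only as an assumption consistent with the abstract (streaks added where transmittance is high, background replaced inside occlusion regions):

    O_t = (1 - \Omega_t) \odot (B_t + S_t) + \Omega_t \odot A_t

where O_t is the observed frame, B_t the clean background, S_t the additive rain-streak layer, \Omega_t a binary occlusion mask marking regions of very low transmittance, A_t the appearance inside occluded regions, and \odot element-wise multiplication. The context variables fed to the dynamic routing (motion segmentation and the rain-type indicator) tell the recurrent units which term dominates at each pixel.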

11.
Article in English | MEDLINE | ID: mdl-30281458

ABSTRACT

In this work, we present a new framework for the stylization of text-based binary images. First, our method stylizes stroke-based geometric shapes such as text, symbols, and icons in the target binary image based on an input style image. Second, the composition of the stylized geometric shape and a background image is explored. To accomplish this task, we propose legibility-preserving structure and texture transfer algorithms, which progressively narrow the visual differences between the binary image and the style image. The stylization is then followed by a context-aware layout design algorithm, where cues for both seamlessness and aesthetics are employed to determine the optimal layout of the shape in the background. Given the layout, the binary image is seamlessly embedded into the background by texture synthesis under a context-aware boundary constraint. Depending on the contents of the binary images, our method can be applied to many fields. We show that the proposed method is capable of addressing the unsupervised text stylization problem and is superior to state-of-the-art style transfer methods in automatic artistic typography creation. In addition, extensive experiments on various tasks, such as visual-textual presentation synthesis, icon/symbol rendering, and structure-guided image inpainting, demonstrate the effectiveness of the proposed method.

12.
IEEE Trans Image Process ; 27(6): 2828-2841, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29570085

ABSTRACT

Low-light image enhancement methods based on the classic Retinex model attempt to manipulate the estimated illumination and project it back onto the corresponding reflectance. However, the model does not consider the noise that inevitably exists in images captured in low-light conditions. In this paper, we propose a robust Retinex model that additionally considers a noise map, compared with the conventional Retinex model, to improve the enhancement of low-light images accompanied by intensive noise. Based on the robust Retinex model, we present an optimization function that includes novel regularization terms for the illumination and reflectance. Specifically, we use a norm constraint to enforce the piece-wise smoothness of the illumination, adopt a fidelity term on the gradients of the reflectance to reveal structural details in low-light images, and make the first attempt to estimate a noise map within the robust Retinex model. To solve the optimization problem effectively, we provide an augmented-Lagrange-multiplier-based alternating direction minimization algorithm that requires no logarithmic transformation. Experimental results demonstrate the effectiveness of the proposed method in low-light image enhancement. In addition, the proposed method can be generalized to a series of similar problems, such as image enhancement for underwater or remote sensing images and for images captured in hazy or dusty conditions.
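The abstract does not spell out the exact regularizers; one plausible instantiation of the robust Retinex objective it describes, written here only as an assumption, is

    \min_{R,\, L,\, N} \; \| I - R \circ L - N \|_F^2 + \alpha \| \nabla L \|_1 + \beta \| \nabla R - G \|_F^2 + \delta \| N \|_F^2

where I is the observed low-light image, R the reflectance, L the illumination, N the estimated noise map, \circ element-wise multiplication, G a guidance map for the reflectance gradients (the fidelity term mentioned above), and \alpha, \beta, \delta weights. Alternating direction minimization then updates R, L, and N in turn under an augmented Lagrangian.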

13.
IEEE Trans Cybern ; 48(1): 399-411, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28026798

ABSTRACT

In this paper, we present a novel method to super-resolve and recover facial depth maps. The key idea is to exploit an exemplar-based method to obtain reliable face priors from high-quality facial depth maps and use them to improve the input depth image. Specifically, a new neighbor embedding (NE) framework is designed for face prior learning and depth map reconstruction. First, face components are decomposed to form specialized dictionaries and are then reconstructed separately. Joint features, i.e., low-level depth and intensity cues together with high-level position cues, are used for robust patch-similarity measurement. The NE results are used to obtain face priors of facial structures and smooth maps, which are then combined in a unified optimization framework to recover high-quality facial depth maps. Finally, an edge enhancement process is applied to estimate the final high-resolution depth map. Experimental results demonstrate the superiority of our method over state-of-the-art depth map super-resolution techniques on both synthetic data and real-world data from Kinect.
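The neighbor-embedding step above can be sketched as a locally linear reconstruction: find the k most similar low-quality exemplar patches, solve for reconstruction weights, and apply the same weights to the paired high-quality depth patches. The k value, the ridge parameter, and the plain Euclidean patch distance below are assumptions for illustration; the paper's joint depth/intensity/position features are not reproduced here.

    import numpy as np

    def ne_reconstruct(lr_patch, lr_dict, hr_dict, k=5, lam=1e-3):
        # lr_patch: (d,) query feature; lr_dict: (n, d) exemplar features; hr_dict: (n, D) HR patches.
        idx = np.argsort(np.linalg.norm(lr_dict - lr_patch, axis=1))[:k]
        Z = lr_dict[idx] - lr_patch                  # neighbors centered on the query
        G = Z @ Z.T + lam * np.eye(k)                # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        w /= w.sum()                                 # reconstruction weights sum to one
        return w @ hr_dict[idx]                      # reuse the weights on the HR exemplars

    rng = np.random.default_rng(2)
    lr_dict, hr_dict = rng.random((500, 25)), rng.random((500, 100))
    hr_patch = ne_reconstruct(rng.random(25), lr_dict, hr_dict)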


Subjects
Biometric Identification/methods, Face/anatomy & histology, Face/diagnostic imaging, Computer-Assisted Image Processing/methods, Algorithms, Cybernetics, Humans
14.
IEEE Trans Image Process ; 26(4): 2016-2017, 2017 04.
Article in English | MEDLINE | ID: mdl-28212084

ABSTRACT

In this paper, we propose a novel full-reference objective quality assessment metric for screen content images (SCIs) based on structure features and uncertainty weighting (SFUW). The input SCI is first divided into textual and pictorial regions. The visual quality of textual regions is estimated based on perceptual structural similarity, where gradient information is adopted as the structural feature. To predict the visual quality of pictorial regions in SCIs, we extract structural and luminance features for similarity computation between the reference and distorted pictorial patches. To obtain the final visual quality of the SCI, we design an uncertainty weighting method grounded in perceptual theories to effectively fuse the visual quality of the textual and pictorial regions. Experimental results show that the proposed SFUW achieves better visual quality prediction for SCIs than existing metrics.

15.
Oncotarget ; 8(3): 5487-5497, 2017 Jan 17.
Article in English | MEDLINE | ID: mdl-27911868

ABSTRACT

INTRODUCTION: Management of lung cancer remains a challenge. Although clinical and biological patient data are crucial for cancer research, these data may be missing from registries and clinical trials. Biobanks provide a source of high-quality biological material for clinical research; however, linking these samples to the corresponding patient and clinical data is technically challenging. We describe the mobile Lung Cancer Care system (mLCCare), a novel tool which integrates biological and clinical patient data into a single resource. METHODS: mLCCare was developed as a mobile device application (app) and an internet website. Data storage is hosted on cloud servers, with the mobile app and website acting as front-ends to the system. mLCCare also facilitates communication with patients to remind them to take their medication and attend follow-up appointments. RESULTS: Between January 2014 and October 2015, 5,080 patients with lung cancer were registered with mLCCare. Data validation ensures that all patient information is of consistently high quality. Patient cohorts can be constructed via user-specified criteria, and data can be exported for statistical analysis by authorized investigators and collaborators. mLCCare forms the basis of an ongoing lung cancer registry and could be extended to a high-quality multisite patient registry. Integration of mLCCare with SMS messaging and WeChat functionality facilitates communication between physicians and patients. CONCLUSION: It is hoped that mLCCare will prove to be a powerful and widely used tool that will enhance both research and clinical practice.


Subjects
Lung Neoplasms, Mobile Applications, Registries, China, Humans, Internet
16.
IEEE Trans Image Process ; 24(9): 2797-810, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25966473

ABSTRACT

Sparse representation has recently attracted enormous interest in the field of image restoration. Conventional sparsity-based methods enforce sparse coding on small image patches with certain constraints, but they neglect the characteristics of image structures both within the same scale and across different scales. This drawback limits the modeling capability of sparsity-based super-resolution methods, especially for the recovery of observed low-resolution images. In this paper, we propose a joint super-resolution framework of structure-modulated sparse representations to improve the performance of sparsity-based image super-resolution. The proposed algorithm formulates a constrained optimization problem for high-resolution image recovery. A multistep magnification scheme with ridge regression is first used to exploit multiscale redundancy for the initial estimate of the high-resolution image. Then, gradient histogram preservation is incorporated as a regularization term in the sparse modeling of the image super-resolution problem. Finally, a numerical solution is provided for the joint problem of model parameter estimation and sparse representation. Extensive experiments on image super-resolution are carried out to validate the generality, effectiveness, and robustness of the proposed algorithm. Experimental results demonstrate that our algorithm, which recovers more fine structures and details from the input low-resolution image, outperforms state-of-the-art methods both subjectively and objectively in most cases.
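A compact way to write the constrained recovery problem sketched above, with operator names chosen here only for illustration, is

    \hat{x} = \arg\min_{x,\, \alpha} \; \| y - D H x \|_2^2 + \lambda \| \alpha \|_1 + \mu\, \rho\!\left( h_{\nabla x},\, h_r \right) \quad \text{s.t.} \;\; x = \Phi \alpha

where y is the observed low-resolution image, H a blurring operator, D a downsampling operator, \Phi \alpha the sparse representation of the high-resolution estimate x over a dictionary \Phi, h_{\nabla x} the gradient histogram of x, h_r the reference histogram to be preserved, and \rho a discrepancy between histograms. The multistep ridge-regression magnification supplies the initial x, and the model parameters and sparse codes are then estimated jointly.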

17.
IEEE Trans Image Process ; 22(4): 1456-69, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23221827

ABSTRACT

Image prior models based on sparse and redundant representations are attracting more and more attention in the field of image restoration. Conventional sparsity-based methods enforce a sparsity prior on small image patches independently. Unfortunately, these works neglect the contextual information between the sparse representations of neighboring image patches, which limits the modeling capability of the sparsity-based image prior, especially when the major structural information of the source image is lost during severe degradation. In this paper, we utilize the contextual information of local patches (denoted the context-aware sparsity prior) to enhance the performance of sparsity-based restoration. In addition, a unified framework based on the Markov random field model is proposed to turn the local prior into a global one so as to handle images of arbitrary size. An iterative numerical solution is presented to solve the joint problem of model parameter estimation and sparse recovery. Finally, experimental results on image denoising and super-resolution demonstrate the effectiveness and robustness of the proposed context-aware method.

18.
IEEE Trans Image Process ; 22(1): 215-28, 2013 Jan.
Article in English | MEDLINE | ID: mdl-22955903

ABSTRACT

In the first part of this paper, we derived a source model describing the relationship between the rate, distortion, and quantization steps of dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for the generalized Gaussian distribution. This source model consists of rate-quantization (R-Q), distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared with 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) a significant improvement in smoothing video quality compared with the latest two-pass rate control method.

19.
IEEE Trans Image Process ; 22(1): 202-14, 2013 Jan.
Article in English | MEDLINE | ID: mdl-22949060

ABSTRACT

This paper provides a systematic rate-distortion (R-D) analysis of dead-zone plus uniform threshold scalar quantization (DZ+UTSQ) with nearly uniform reconstruction quantization (NURQ) for the generalized Gaussian distribution (GGD), consisting of two aspects: R-D performance analysis and R-D modeling. In the R-D performance analysis, we first derive the preliminary constraint of optimum entropy-constrained DZ+UTSQ/NURQ for the GGD, under which the property of the GGD distortion-rate (D-R) function is elucidated. Then, for GGD sources of actual transform coefficients, the refined constraint and precise conditions of optimum DZ+UTSQ/NURQ are rigorously deduced over the practical coding bit-rate range, and efficient DZ+UTSQ/NURQ design criteria are proposed to simplify the use of effective quantizers in practice. In the R-D modeling, inspired by the R-D performance analysis, the D-R function is first developed, followed by novel rate-quantization (R-Q) and distortion-quantization (D-Q) models derived using analytical and heuristic methods. The D-R, R-Q, and D-Q models together form a source model describing the relationship between the rate, distortion, and quantization steps. One application of the proposed source model is an effective two-pass VBR coding algorithm implemented on the H.264/AVC reference software encoder, which achieves constant video quality and desirable rate control accuracy.
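For reference, the generalized Gaussian density assumed for the transform coefficients in the analysis above is commonly written (parameterization conventions vary) as

    p(x) = \frac{\beta}{2 \alpha\, \Gamma(1/\beta)} \exp\!\left( -\left( \frac{|x|}{\alpha} \right)^{\beta} \right)

where \alpha is a scale parameter, \beta a shape parameter (\beta = 2 recovers the Gaussian and \beta = 1 the Laplacian), and \Gamma the gamma function. The R-D analysis and the R-Q, D-Q, and D-R models above are stated for sources of this family.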

20.
Zhonghua Xue Ye Xue Za Zhi ; 27(6): 370-3, 2006 Jun.
Article in Chinese | MEDLINE | ID: mdl-17147224

ABSTRACT

OBJECTIVE: To explore the expression of CD66c (CEACAM6) in adult acute leukemia and its significance. METHODS: The acute leukemia cell lines HL-60, K562, LCL721.221, and Jurkat were cultured in vitro. RT-PCR and multi-parameter flow cytometry were applied to analyze CD66c mRNA and protein expression, respectively, in the cell lines and in patients' bone marrow leukemic cells. Cytogenetic analysis of 199 bone marrow samples from leukemia patients and minimal residual disease (MRD) detection in 25 CD66c-positive B-lineage ALL cases were performed. RESULTS: (1) CD66c protein expression, both on the cell surface and in the cytoplasm, was negative in all the cell lines. (2) Four of 127 AML cases (3.15%, mainly M2 and M4) and 28 of 79 ALL cases (35.44%, all B-lineage ALL) were CD66c positive, the ALL subtypes being common B-ALL (20/54) and pre-B ALL (8/11), including 8 Ph+ B-lineage ALL. (3) The six-month relapse rate differed significantly between MRD-positive and MRD-negative patients. (4) CD66c mRNA was strongly expressed in B-lineage ALL; among the cell lines, only HL-60 cells weakly expressed CD66c mRNA. CONCLUSION: CD66c expression could be a useful biomarker for MRD analysis in ALL and is closely associated with its transcription level.


Subjects
CD Antigens/biosynthesis, Carcinoembryonic Antigen/biosynthesis, Cell Adhesion Molecules/biosynthesis, Acute Myeloid Leukemia/metabolism, Precursor Cell Lymphoblastic Leukemia-Lymphoma/metabolism, Adolescent, Adult, Aged, Carcinoembryonic Antigen/genetics, GPI-Linked Proteins, HL-60 Cells, Humans, K562 Cells, Male, Middle Aged, Residual Neoplasm/metabolism, Messenger RNA/biosynthesis