Results 1 - 15 of 15
1.
Entropy (Basel) ; 25(1)2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36673277

ABSTRACT

The chaotic baseband wireless communication system (CBWCS) suffers bit error rate (BER) degradation due to its intrinsic intersymbol interference (ISI). To address this, an ISI-free chaotic filter based on root-raised-cosine (RRC) division is constructed to generate a chaotic signal, and a wireless communication system using this chaotic signal as the baseband waveform is proposed. The chaotic property is proved via the corresponding new hybrid dynamical system, which is topologically conjugate to symbolic sequences and has a positive Lyapunov exponent. Simulation results show that, under both single-path and multi-path channels, the proposed method outperforms CBWCS in both BER performance and computational complexity.
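Not code from the paper, but the zero-ISI property that motivates a raised-cosine baseband can be sketched numerically: a raised-cosine pulse crosses zero at every nonzero symbol instant, so neighboring symbols do not interfere at the sampling points. All names and parameter values below are illustrative.

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.25):
    """Raised-cosine pulse; zero at t = kT (k != 0): the Nyquist zero-ISI criterion."""
    t = np.asarray(t, dtype=float)
    # handle the removable singularity at |t| = T / (2 * beta)
    sing = np.isclose(np.abs(t), T / (2 * beta))
    denom = 1.0 - (2 * beta * t / T) ** 2
    denom[sing] = 1.0  # placeholder; the singular samples are overwritten below
    out = np.sinc(t / T) * np.cos(np.pi * beta * t / T) / denom
    out[sing] = (np.pi / 4) * np.sinc(1.0 / (2 * beta))  # L'Hopital limit
    return out

symbol_times = np.arange(-4, 5)        # t = kT for k = -4..4
pulse = raised_cosine(symbol_times)
print(pulse)  # ~[0, 0, 0, 0, 1, 0, 0, 0, 0]: no leakage onto neighboring symbols
```

An RRC filter, as used in the abstract, is the square root of this spectrum split between transmitter and receiver, so that their cascade is the ISI-free raised cosine.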

2.
IEEE Trans Pattern Anal Mach Intell ; 44(4): 2198-2215, 2022 04.
Article in English | MEDLINE | ID: mdl-33017289

ABSTRACT

For 360° video, the existing visual quality assessment (VQA) approaches are designed based on either whole frames or cropped patches, ignoring the fact that subjects can only access viewports. When watching 360° video, subjects select viewports through head movement (HM) and then fixate on attractive regions within the viewports through eye movement (EM). Therefore, this paper proposes a two-stage multi-task approach for viewport-based VQA on 360° video. Specifically, we first establish a large-scale VQA dataset of 360° video, called VQA-ODV, which collects the subjective quality scores and the HM and EM data on 600 video sequences. By mining our dataset, we find that the subjective quality of 360° video is related to camera motion, viewport positions and saliency within viewports. Accordingly, we propose a viewport-based convolutional neural network (V-CNN) approach for VQA on 360° video, which has a novel multi-task architecture composed of a viewport proposal network (VP-net) and viewport quality network (VQ-net). The VP-net handles the auxiliary tasks of camera motion detection and viewport proposal, while the VQ-net accomplishes the auxiliary task of viewport saliency prediction and the main task of VQA. The experiments validate that our V-CNN approach significantly advances state-of-the-art VQA performance on 360° video and is also effective in the three auxiliary tasks.


Subjects
Algorithms, Neural Networks (Computer), Eye Movements, Head Movements, Humans, Motion (Physics)
3.
IEEE J Biomed Health Inform ; 26(5): 2216-2227, 2022 05.
Article in English | MEDLINE | ID: mdl-34648460

ABSTRACT

Diabetic retinopathy (DR) is a leading cause of permanent blindness among working-age people. Automatic DR grading can help ophthalmologists provide timely treatment to patients. However, the existing grading methods are usually trained with high-resolution (HR) fundus images, so grading performance degrades substantially on low-resolution (LR) images, which are common in clinical practice. In this paper, we mainly focus on DR grading with LR fundus images. From our analysis of the DR task, we find that: 1) image super-resolution (ISR) can boost the performance of both DR grading and lesion segmentation; and 2) the lesion segmentation regions of fundus images are highly consistent with the pathological regions used for DR grading. Based on these findings, we propose a convolutional neural network (CNN)-based method for joint learning of multi-level tasks for DR grading, called DeepMT-DR, which can simultaneously handle the low-level task of ISR, the mid-level task of lesion segmentation and the high-level task of disease severity classification on LR fundus images. Moreover, a novel task-aware loss is developed to encourage ISR to focus on the pathological regions relevant to its subsequent tasks: lesion segmentation and DR grading. Extensive experimental results show that our DeepMT-DR method significantly outperforms other state-of-the-art methods for DR grading across three datasets. In addition, our method achieves comparable performance in the two auxiliary tasks of ISR and lesion segmentation.


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Diabetic Retinopathy/diagnostic imaging, Fundus Oculi, Humans, Neural Networks (Computer), Research Design, Severity of Illness Index
4.
Article in English | MEDLINE | ID: mdl-37015524

ABSTRACT

Blind visual quality assessment (BVQA) of 360° video plays a key role in optimizing immersive multimedia systems. When assessing the quality of 360° video, humans tend to perceive quality degradation progressively, from the viewport-based spatial distortion of each spherical frame, to motion artifacts across adjacent frames, ending with the video-level quality score; i.e., a progressive quality assessment paradigm. However, the existing BVQA approaches for 360° video neglect this paradigm. In this paper, we take into account the progressive paradigm of human perception of spherical video quality and propose a novel BVQA approach (named ProVQA) for 360° video that learns progressively from pixels, frames and video. Corresponding to these three levels, three sub-nets are designed in our ProVQA approach: the spherical perception aware quality prediction (SPAQ), motion perception aware quality prediction (MPAQ) and multi-frame temporal non-local (MFTN) sub-nets. The SPAQ sub-net first models spatial quality degradation based on the spherical perception mechanism of humans. Then, by exploiting motion cues across adjacent frames, the MPAQ sub-net incorporates motion contextual information for quality assessment of 360° video. Finally, the MFTN sub-net aggregates multi-frame quality degradation to yield the final quality score, exploring long-term quality correlation across multiple frames. The experiments validate that our approach significantly advances state-of-the-art BVQA performance on 360° video over two datasets; the code has been made public at https://github.com/yanglixiaoshen/ProVQA.

5.
IEEE Trans Med Imaging ; 40(9): 2463-2476, 2021 09.
Article in English | MEDLINE | ID: mdl-33983881

ABSTRACT

Given the outbreak of the COVID-19 pandemic and the shortage of medical resources, extensive deep learning models have been proposed for automatic COVID-19 diagnosis based on 3D computed tomography (CT) scans. However, the existing models process 3D lesion segmentation and disease classification independently, ignoring the inherent correlation between these two tasks. In this paper, we propose a joint deep learning model of 3D lesion segmentation and classification for diagnosing COVID-19, called DeepSC-COVID, as the first attempt in this direction. Specifically, we establish a large-scale CT database containing 1,805 3D CT scans with fine-grained lesion annotations, and reveal 4 findings about lesion differences between COVID-19 and community-acquired pneumonia (CAP). Inspired by our findings, DeepSC-COVID is designed with 3 subnets: a cross-task feature subnet for feature extraction, a 3D lesion subnet for lesion segmentation, and a classification subnet for disease diagnosis. Besides, a task-aware loss is proposed for learning the task interaction between the 3D lesion and classification subnets. Different from all existing models for COVID-19 diagnosis, our model is interpretable, with fine-grained 3D lesion distribution. Finally, extensive experimental results show that the joint learning framework in our model significantly improves 3D lesion segmentation and disease classification in both efficiency and efficacy.


Subjects
COVID-19, COVID-19 Testing, Humans, Pandemics, SARS-CoV-2, Tomography (X-Ray Computed)
6.
IEEE Trans Image Process ; 30: 2087-2102, 2021.
Article in English | MEDLINE | ID: mdl-33460380

ABSTRACT

When watching omnidirectional images (ODIs), subjects can access different viewports by moving their heads. Therefore, it is necessary to predict subjects' head fixations on ODIs. Inspired by generative adversarial imitation learning (GAIL), this paper proposes a novel approach to predict the saliency of head fixations on ODIs, named SalGAIL. First, we establish a dataset for attention on ODIs (AOI). In contrast to traditional datasets, our AOI dataset is large-scale, containing the head fixations of 30 subjects viewing 600 ODIs. Next, we mine our AOI dataset and report three findings: (1) head fixations are highly consistent among subjects, and this consistency grows as the number of subjects increases; (2) head fixations exhibit a front center bias (FCB); and (3) the magnitude of head movement is similar across subjects. According to these findings, our SalGAIL approach applies deep reinforcement learning (DRL) to predict the head fixations of one subject, in which GAIL learns the reward for DRL rather than using a traditional human-designed reward. Then, multi-stream DRL is developed to yield the head fixations of different subjects, and the saliency map of an ODI is generated by convolving the predicted head fixations. Finally, experiments validate the effectiveness of our approach in predicting saliency maps of ODIs, performing significantly better than 11 state-of-the-art approaches. Our AOI dataset and the code of SalGAIL are available online at https://github.com/yanglixiaoshen/SalGAIL.
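The final step, generating a saliency map by convolving predicted head fixations, amounts to summing a Gaussian blob at each fixation point. A minimal sketch (the function name, grid size and sigma below are assumptions, not taken from the SalGAIL code):

```python
import numpy as np

def fixations_to_saliency(fixations, height, width, sigma=8.0):
    """Sum a 2-D Gaussian centered at each (row, col) fixation point and
    normalize to [0, 1]: the 'convolve fixations with a Gaussian' step."""
    ys, xs = np.mgrid[0:height, 0:width]
    sal = np.zeros((height, width))
    for r, c in fixations:
        sal += np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
    return sal / sal.max()

# three toy fixations; the first two overlap, so that region dominates the map
smap = fixations_to_saliency([(20, 30), (22, 32), (60, 90)], 80, 120)
print(smap.shape, round(float(smap.max()), 2))  # (80, 120) 1.0
```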


Subjects
Deep Learning, Fixation (Ocular)/physiology, Head Movements/physiology, Image Processing (Computer-Assisted)/methods, Adolescent, Adult, Databases (Factual), Eye-Tracking Technology, Female, Humans, Male, Young Adult
7.
IEEE Trans Pattern Anal Mach Intell ; 43(3): 949-963, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31581073

ABSTRACT

The past few years have witnessed great success in applying deep learning to enhance the quality of compressed image/video. The existing approaches mainly focus on enhancing the quality of a single frame, without considering the similarity between consecutive frames. Since quality fluctuates heavily across compressed video frames, as investigated in this paper, frame similarity can be utilized to enhance low-quality frames given their neighboring high-quality frames. We refer to this task as Multi-Frame Quality Enhancement (MFQE). Accordingly, this paper proposes an MFQE approach for compressed video, as the first attempt in this direction. In our approach, we first develop a Bidirectional Long Short-Term Memory (BiLSTM) based detector to locate Peak Quality Frames (PQFs) in compressed video. Then, a novel Multi-Frame Convolutional Neural Network (MF-CNN) is designed to enhance the quality of compressed video, taking a non-PQF and its nearest two PQFs as input. In MF-CNN, motion between the non-PQF and the PQFs is compensated by a motion compensation subnet. Subsequently, a quality enhancement subnet fuses the non-PQF and the compensated PQFs, and then reduces the compression artifacts of the non-PQF. PQF quality is enhanced in the same way. Finally, experiments validate the effectiveness and generalization ability of our MFQE approach in advancing the state of the art in quality enhancement of compressed video.

8.
IEEE Trans Med Imaging ; 39(2): 413-424, 2020 02.
Article in English | MEDLINE | ID: mdl-31283476

ABSTRACT

Glaucoma is one of the leading causes of irreversible vision loss. Many approaches have recently been proposed for automatic glaucoma detection based on fundus images. However, none of the existing approaches can efficiently remove the high redundancy in fundus images, which may reduce the reliability and accuracy of glaucoma detection. To avoid this disadvantage, this paper proposes an attention-based convolutional neural network (CNN) for glaucoma detection, called AG-CNN. Specifically, we first establish a large-scale attention-based glaucoma (LAG) database, which includes 11 760 fundus images labeled as either positive glaucoma (4878) or negative glaucoma (6882). Among the 11 760 fundus images, attention maps of 5824 images are further obtained from ophthalmologists through a simulated eye-tracking experiment. Then, a new AG-CNN structure is designed, including an attention prediction subnet, a pathological area localization subnet, and a glaucoma classification subnet. The attention maps are predicted in the attention prediction subnet to highlight the salient regions for glaucoma detection, trained in a weakly supervised manner. In contrast to other attention-based CNN methods, the features are also visualized as the localized pathological area, which is further incorporated into our AG-CNN structure to enhance glaucoma detection performance. Finally, experimental results from testing on our LAG database and another public glaucoma database show that the proposed AG-CNN approach significantly advances the state of the art in glaucoma detection.


Subjects
Databases (Factual), Glaucoma/diagnostic imaging, Image Interpretation (Computer-Assisted)/methods, Neural Networks (Computer), Adult, Aged, Female, Humans, Male, Middle Aged, ROC Curve, Supervised Machine Learning
9.
JAMA Ophthalmol ; 137(12): 1353-1360, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31513266

ABSTRACT

Importance: A deep learning system (DLS) that could automatically detect glaucomatous optic neuropathy (GON) with high sensitivity and specificity could expedite screening for GON. Objective: To establish a DLS for detection of GON using retinal fundus images and glaucoma diagnosis with convolutional neural networks (GD-CNN) that has the ability to be generalized across populations. Design, Setting, and Participants: In this cross-sectional study, a DLS was developed for automated classification of GON using retinal fundus images obtained from the Chinese Glaucoma Study Alliance (CGSA), the Handan Eye Study, and online databases. A total of 241 032 images were selected as the training data set. The images were entered into the databases on June 9, 2009, obtained on July 11, 2018, and analyses were performed on December 15, 2018. The generalization of the DLS was tested in several validation data sets, which allowed assessment of the DLS in a clinical setting without exclusions, testing against variable image quality based on fundus photographs obtained from websites, evaluation in a population-based study that reflects a natural distribution of patients with glaucoma within the cohort, and an additive data set with a diverse ethnic distribution. An online learning system was established to transfer the trained and validated DLS to generalize the results with fundus images from new sources. To better understand the DLS decision-making process, a prediction visualization test was performed that identified the regions of the fundus images used by the DLS for diagnosis. Exposures: Use of a deep learning system. Main Outcomes and Measures: Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity for the DLS with reference to professional graders. Results: From a total of 274 413 fundus images initially obtained from CGSA, 269 601 images passed initial image quality review and were graded for GON. A total of 241 032 images (definite GON 29 865 [12.4%], probable GON 11 046 [4.6%], unlikely GON 200 121 [83%]) from 68 013 patients were selected using random sampling to train the GD-CNN model. Validation and evaluation of the GD-CNN model were assessed using the remaining 28 569 images from CGSA. The AUC of the GD-CNN model in primary local validation data sets was 0.996 (95% CI, 0.995-0.998), with sensitivity of 96.2% and specificity of 97.7%. The most common reason for both false-negative and false-positive grading by GD-CNN (51 of 119 [46.3%] and 191 of 588 [32.3%]) and manual grading (50 of 113 [44.2%] and 183 of 538 [34.0%]) was pathologic or high myopia. Conclusions and Relevance: Application of GD-CNN to fundus images from different settings and of varying image quality demonstrated high sensitivity, specificity, and generalizability for detecting GON. These findings suggest that an automated DLS could enhance current screening programs in a cost-effective and time-efficient manner.
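The sensitivity and specificity reported above reduce to simple ratios over the confusion matrix. A minimal sketch with made-up labels (1 = GON, 0 = non-GON); the function name and data are illustrative only:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# toy grades: 4 true positives-class cases, 4 true negatives-class cases
sens, spec = sens_spec([1, 1, 1, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 0, 1])
print(sens, spec)  # 0.75 0.75
```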


Subjects
Deep Learning, Diagnostic Techniques (Ophthalmological), Glaucoma, Open-Angle/diagnostic imaging, Optic Nerve Diseases/diagnostic imaging, Photography/methods, Area Under Curve, Cross-Sectional Studies, Databases (Factual), False Positive Reactions, Female, Fundus Oculi, Humans, Male, Predictive Value of Tests, ROC Curve, Sensitivity and Specificity
10.
IEEE Trans Image Process ; 28(11): 5663-5678, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31217108

ABSTRACT

The in-loop filter has been extensively studied in the High Efficiency Video Coding (HEVC) standard to reduce compression artifacts and thus improve coding efficiency. However, in the existing approaches, the in-loop filter is always applied to each single frame, without exploiting the content correlation among multiple frames. In this paper, we propose a multi-frame in-loop filter (MIF) for HEVC, which enhances the visual quality of each encoded frame by leveraging its adjacent frames. Specifically, we first construct a large-scale database containing encoded frames and their corresponding raw frames for a variety of content, which can be used to learn the in-loop filter in HEVC. Furthermore, we find that for an encoded frame there usually exist a number of reference frames of higher quality and similar content. Accordingly, a reference frame selector (RFS) is designed to identify these frames. Then, a deep neural network for MIF (known as MIF-Net) is developed to enhance the quality of each encoded frame by utilizing the spatial information of the frame itself and the temporal information of its neighboring higher-quality frames. MIF-Net is built on the recently developed DenseNet, benefiting from its improved generalization capacity and computational efficiency. In addition, a novel block-adaptive convolutional layer is designed and applied in MIF-Net to handle artifacts influenced by the coding tree unit (CTU) structure in HEVC. Extensive experiments show that our MIF approach achieves an average 11.621% saving of the Bjøntegaard delta bit-rate (BD-BR) on the standard test set, significantly outperforming the standard in-loop filter in HEVC and other state-of-the-art approaches.
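The BD-BR metric quoted above is the average bit-rate difference at equal quality, computed from cubic fits of log-rate against PSNR. A sketch of the standard calculation; the rate/PSNR data points below are invented for illustration:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bit-rate: average % rate change at equal quality,
    from cubic fits of log10(rate) as a function of PSNR, integrated over
    the overlapping PSNR interval. Negative means a bit-rate saving."""
    p1 = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)
    p2 = np.polyfit(psnr_test, np.log10(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int1 = np.polyval(np.polyint(p1), hi) - np.polyval(np.polyint(p1), lo)
    int2 = np.polyval(np.polyint(p2), hi) - np.polyval(np.polyint(p2), lo)
    avg_diff = (int2 - int1) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# toy case: the test codec spends 10% fewer bits at every quality point
anchor_r = [100, 200, 400, 800]
bd = bd_rate(anchor_r, [30, 33, 36, 39], [r * 0.9 for r in anchor_r], [30, 33, 36, 39])
print(round(bd, 1))  # ≈ -10.0
```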

11.
IEEE Trans Pattern Anal Mach Intell ; 41(11): 2693-2708, 2019 11.
Article in English | MEDLINE | ID: mdl-30047871

ABSTRACT

Panoramic video provides an immersive and interactive experience by enabling humans to control the field of view (FoV) through head movement (HM). Thus, HM plays a key role in modeling human attention on panoramic video. This paper establishes a database collecting subjects' HM in panoramic video sequences. From this database, we find that the HM data are highly consistent across subjects. Furthermore, we find that deep reinforcement learning (DRL) can be applied to predict HM positions, by maximizing the reward of imitating human HM scanpaths through the agent's actions. Based on our findings, we propose a DRL-based HM prediction (DHP) approach with offline and online versions, called offline-DHP and online-DHP. In offline-DHP, multiple DRL workflows are run to determine potential HM positions at each panoramic frame. Then, a heat map of the potential HM positions, named the HM map, is generated as the output of offline-DHP. In online-DHP, the next HM position of one subject is estimated given the currently observed HM position, which is achieved by developing a DRL algorithm upon the learned offline-DHP model. Finally, the experiments validate that our approach is effective in both offline and online prediction of HM positions for panoramic video, and that the learned offline-DHP model can improve the performance of online-DHP.


Subjects
Deep Learning, Head Movements/physiology, Image Processing (Computer-Assisted)/methods, Models (Statistical), Video Recording, Adolescent, Adult, Algorithms, Female, Humans, Male, Young Adult
12.
Article in English | MEDLINE | ID: mdl-29994256

ABSTRACT

High Efficiency Video Coding (HEVC) significantly reduces bit-rates over the preceding H.264 standard, but at the expense of extremely high encoding complexity. In HEVC, the quad-tree partition of coding units (CUs) consumes a large proportion of the encoding complexity, due to the brute-force search for rate-distortion optimization (RDO). Therefore, this paper proposes a deep learning approach to predict the CU partition, reducing HEVC complexity at both intra- and inter-modes, based on a convolutional neural network (CNN) and a long- and short-term memory (LSTM) network. First, we establish a large-scale database including substantial CU partition data for HEVC intra- and inter-modes, enabling deep learning on the CU partition. Second, we represent the CU partition of an entire coding tree unit (CTU) in the form of a hierarchical CU partition map (HCPM). Then, we propose an early-terminated hierarchical CNN (ETH-CNN) for learning to predict the HCPM. Consequently, the encoding complexity of intra-mode HEVC can be drastically reduced by replacing the brute-force search with ETH-CNN to decide the CU partition. Third, an early-terminated hierarchical LSTM (ETH-LSTM) is proposed to learn the temporal correlation of the CU partition. Then, we combine ETH-LSTM and ETH-CNN to predict the CU partition, reducing HEVC complexity at inter-mode. Finally, experimental results show that our approach outperforms other state-of-the-art approaches in reducing HEVC complexity at both intra- and inter-modes.
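The quad-tree CU partition that ETH-CNN predicts can be pictured as a recursive four-way split of a 64x64 CTU, recorded as a per-pixel depth map: one simple, HCPM-like representation. The split predicate below is a toy stand-in for the learned network, not the paper's model:

```python
import numpy as np

def partition(depth_map, x, y, size, depth, split_fn):
    """Toy quad-tree CU partition: recursively split while split_fn says so
    (down to the 8x8 minimum CU), recording each pixel's final CU depth."""
    if size > 8 and split_fn(x, y, size, depth):
        half = size // 2
        for dx in (0, half):
            for dy in (0, half):
                partition(depth_map, x + dx, y + dy, half, depth + 1, split_fn)
    else:
        depth_map[y:y + size, x:x + size] = depth

# toy predictor: keep splitting only the top-left quadrant at every level
split_fn = lambda x, y, size, depth: x == 0 and y == 0
hcpm = np.zeros((64, 64), dtype=int)
partition(hcpm, 0, 0, 64, 0, split_fn)
print(hcpm[0, 0], hcpm[0, 63])  # 3 1  (deepest split top-left, one split elsewhere)
```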

13.
Mol Med Rep ; 17(2): 3140-3145, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29257301

ABSTRACT

The prognosis for prostate cancer patients with distant metastasis is poor, with an average survival of 24-48 months. The exact mechanisms underlying prostate cancer metastasis remain to be elucidated, despite previous research efforts. The present study aimed to reveal the regulatory roles of miR-138, via the Wnt/β-catenin pathway, in prostate cancer cell migration and invasion. Reverse transcription-quantitative polymerase chain reaction was used to examine mRNA and protein expression levels, and a transwell assay was conducted to determine cell invasion and migration. A luciferase reporter assay was used to determine the targeting association between miR-138 and β-catenin. The present study identified microRNA (miR)-138 as an invasion and migration regulator in prostate cancer. miR-138 was downregulated in aggressive prostate cancer cell lines. Furthermore, following miR-138 overexpression, prostate cancer cells exhibited impaired invasive and migratory abilities; E-cadherin was upregulated and vimentin downregulated. In addition, it was demonstrated that miR-138 negatively regulated Wnt/β-catenin pathway activation in prostate cancer. When the pathway was then activated via β-catenin overexpression, the effects of miR-138 were reversed. The results suggest that miR-138 downregulation may contribute to prostate cancer progression and metastasis, and the findings provide a novel molecular therapeutic target for the treatment of prostate cancer metastasis.


Subjects
MicroRNAs/metabolism, Wnt Signaling Pathway, Antagomirs/metabolism, Cadherins/genetics, Cadherins/metabolism, Cell Line (Tumor), Cell Movement, Down-Regulation, Humans, Male, MicroRNAs/antagonists & inhibitors, MicroRNAs/genetics, Prostatic Neoplasms, Up-Regulation, Vimentin/genetics, Vimentin/metabolism, beta Catenin/metabolism
14.
Sensors (Basel) ; 16(5)2016 May 09.
Article in English | MEDLINE | ID: mdl-27171088

ABSTRACT

In multi-target tracking, the key problem lies in estimating the number and states of individual targets, where the challenge is that both the number of targets and their states are time-varying. Recently, several multi-target tracking approaches based on the sequential Monte Carlo probability hypothesis density (SMC-PHD) filter have been presented to solve this problem. However, most of these approaches select the transition density as the importance sampling (IS) function, which is inefficient in a nonlinear scenario. To enhance the performance of the conventional SMC-PHD filter, we propose in this paper two approaches using the cubature information filter (CIF) for multi-target tracking. More specifically, we first apply the posterior intensity as the IS function. Then, we propose to utilize the CIF algorithm with a gating method to calculate the IS function, namely the CISMC-PHD approach. Meanwhile, a fast implementation of the CISMC-PHD approach is proposed, which clusters the particles into several groups according to the Gaussian mixture components. With the constructed components, the IS function is approximated over components instead of individual particles. As a result, the computational complexity of the CISMC-PHD approach can be significantly reduced. The simulation results demonstrate the effectiveness of our approaches.
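The role of the IS function can be illustrated with a scalar importance-sampling estimate: samples drawn from a proposal are reweighted by the density ratio, and a proposal closer to the posterior (as the CIF provides) yields lower-variance weights. A minimal sketch, unrelated to the actual CISMC-PHD code; all values are toy choices:

```python
import math, random

random.seed(0)

def importance_sampling(n=200_000):
    """Estimate E[x^2] under N(0,1) by sampling from a wider proposal N(0,2)
    and reweighting each sample by the density ratio p(x)/q(x)."""
    def normal_pdf(x, sigma):
        return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
    total_w, total_wf = 0.0, 0.0
    for _ in range(n):
        x = random.gauss(0.0, 2.0)                    # draw from proposal q
        w = normal_pdf(x, 1.0) / normal_pdf(x, 2.0)   # importance weight
        total_w += w
        total_wf += w * x * x
    return total_wf / total_w                         # self-normalized estimate

print(round(importance_sampling(), 1))  # ≈ 1.0 (true E[x^2] under N(0,1) is 1)
```

A poorly matched proposal (e.g. the transition density in a sharply nonlinear scenario) would concentrate the weight on few samples and inflate the estimator's variance.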

15.
Cell Biochem Biophys ; 69(3): 503-7, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24526351

ABSTRACT

To observe the clinical effect of tanshinone IIA combined with endocrine therapy in treating advanced-stage prostate cancer, 96 cases of advanced-stage prostate cancer were divided into an observation group (44 cases) and a control group (46 cases). The control group was given leuprolide acetate 3.75 mg by hypodermic injection per month, combined with bicalutamide 50 mg per os per day, for a 6-month treatment course. The observation group was additionally given tanshinone IIA injection 60 mg intravenously per day; patients were treated for 2 weeks and paused for 2 weeks as one treatment course, for six courses in total. After treating for 6 months, the general therapeutic effect, prostate-specific antigen (PSA), free prostate-specific antigen (f-PSA), hemoglobin (Hb), the Quality of Life Questionnaire Core 30 (QLQ-C30), traditional Chinese medicine symptom score, International Prostate Symptom Score (I-PSS), and adverse effect rate were observed. The effective rates of the observation group and control group were 52.3 and 28.3%, respectively (P < 0.05). PSA, f-PSA, and Hb in the two groups had no statistical difference before treatment. PSA and f-PSA in both groups decreased markedly compared to before treatment, and they were lower in the observation group than in the control group (P < 0.01). Hb in the observation group was higher than before treatment, whereas Hb in the control group was lower than before treatment (P < 0.01). Quality of life, motive score, traditional Chinese medicine symptom score, and I-PSS in the observation group were significantly better than those in the control group after treatment (P < 0.01). Laboratory tests such as hemogram and liver and kidney function showed no obvious changes, and the adverse effect rate had no statistical difference. Routine endocrine treatment combined with tanshinone IIA can enhance the clinical effect of treating advanced-stage prostate cancer and improve the clinical symptom score.
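The P < 0.05 comparison of effective rates (52.3% vs 28.3%) is consistent with a Pearson chi-square test on a 2x2 table. The case counts 23/44 and 13/46 below are back-calculated from the reported percentages and group sizes, not taken from the paper:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# 23/44 responders (52.3%) in the observation group vs 13/46 (28.3%) in control
chi2 = chi_square_2x2(23, 21, 13, 33)
print(round(chi2, 2), chi2 > 3.84)  # 3.84 is the P = 0.05 critical value at 1 df
```

Since the statistic exceeds 3.84, the difference is significant at the 0.05 level, matching the reported P < 0.05.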


Subjects
Anilides/therapeutic use, Antineoplastic Agents (Hormonal)/therapeutic use, Antineoplastic Combined Chemotherapy Protocols, Benzofurans/therapeutic use, Leuprolide/therapeutic use, Nitriles/therapeutic use, Prostatic Neoplasms/drug therapy, Prostatic Neoplasms/pathology, Tosyl Compounds/therapeutic use, Aged, Anilides/administration & dosage, Antineoplastic Agents (Hormonal)/administration & dosage, Benzofurans/administration & dosage, Hemoglobins/metabolism, Humans, Leuprolide/administration & dosage, Male, Neoplasm Staging, Nitriles/administration & dosage, Prostate-Specific Antigen/blood, Prostatic Neoplasms/blood, Quality of Life, Tosyl Compounds/administration & dosage, Treatment Outcome