Results 1 - 20 of 20
1.
J Med Internet Res ; 26: e47125, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38422347

ABSTRACT

BACKGROUND: The adoption of predictive algorithms in health care comes with the potential for algorithmic bias, which could exacerbate existing disparities. Fairness metrics have been proposed to measure algorithmic bias, but their application to real-world tasks is limited. OBJECTIVE: This study aims to evaluate the algorithmic bias associated with the application of common 30-day hospital readmission models and assess the usefulness and interpretability of selected fairness metrics. METHODS: We used 10.6 million adult inpatient discharges from Maryland and Florida from 2016 to 2019 in this retrospective study. Models predicting 30-day hospital readmissions were evaluated: LACE Index, modified HOSPITAL score, and modified Centers for Medicare & Medicaid Services (CMS) readmission measure, which were applied as-is (using existing coefficients) and retrained (recalibrated with 50% of the data). Predictive performances and bias measures were evaluated for all, between Black and White populations, and between low- and other-income groups. Bias measures included the parity of false negative rate (FNR), false positive rate (FPR), 0-1 loss, and generalized entropy index. Racial bias represented by FNR and FPR differences was stratified to explore shifts in algorithmic bias in different populations. RESULTS: The retrained CMS model demonstrated the best predictive performance (area under the curve: 0.74 in Maryland and 0.68-0.70 in Florida), and the modified HOSPITAL score demonstrated the best calibration (Brier score: 0.16-0.19 in Maryland and 0.19-0.21 in Florida). Calibration was better in White (compared to Black) populations and other-income (compared to low-income) groups, and the area under the curve was higher or similar in the Black (compared to White) populations. The retrained CMS and modified HOSPITAL score had the lowest racial and income bias in Maryland. 
In Florida, both of these models overall had the lowest income bias and the modified HOSPITAL score showed the lowest racial bias. In both states, the White and higher-income populations showed a higher FNR, while the Black and low-income populations showed a higher FPR and a higher 0-1 loss. When stratified by hospital and population composition, these models demonstrated heterogeneous algorithmic bias in different contexts and populations. CONCLUSIONS: Caution must be taken when interpreting fairness measures at face value. A higher FNR or FPR could potentially reflect missed opportunities or wasted resources, but these measures could also reflect health care use patterns and gaps in care. Simply relying on the statistical notions of bias could obscure or underplay the causes of health disparity. Imperfect health data, analytic frameworks, and the underlying health systems must be carefully considered. Fairness measures can serve as a useful routine assessment to detect disparate model performances but are insufficient to inform mechanisms or policy changes. However, such an assessment is an important first step toward data-driven improvement to address existing health disparities.
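As a rough illustration of the parity measures described in this abstract, the FNR and FPR gaps between two groups can be computed directly from binary labels and predictions. This is a minimal sketch, not the study's code; the `rate_gaps` helper is hypothetical and assumes each group contains both outcome classes.

```python
import numpy as np

def rate_gaps(y_true, y_pred, group):
    """False-negative-rate and false-positive-rate gaps between two groups.

    y_true, y_pred: binary arrays (1 = readmitted / predicted readmission).
    group: boolean array marking membership in group A.
    Returns (FNR_A - FNR_B, FPR_A - FPR_B); a nonzero gap signals
    a violation of FNR/FPR parity.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    def fnr_fpr(mask):
        t, p = y_true[mask], y_pred[mask]
        fnr = np.mean(p[t == 1] == 0)  # missed events among true positives
        fpr = np.mean(p[t == 0] == 1)  # false alarms among true negatives
        return fnr, fpr

    fnr_a, fpr_a = fnr_fpr(group)
    fnr_b, fpr_b = fnr_fpr(~group)
    return fnr_a - fnr_b, fpr_a - fpr_b
```

As the conclusions caution, a nonzero gap by itself does not identify the mechanism: it may reflect model error, care-use patterns, or gaps in the underlying data.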


Subjects
Medicare, Patient Readmission, Aged, Adult, Humans, United States, Retrospective Studies, Hospitals, Florida/epidemiology
2.
Environ Monit Assess ; 196(3): 284, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38374477

ABSTRACT

Accurate and reliable air temperature forecasts are necessary for predicting and responding to thermal disasters such as heat strokes. Forecasts from Numerical Weather Prediction (NWP) models contain biases which require post-processing. Studies assessing the skill of probabilistic post-processing techniques (PPTs) on temperature forecasts in India are lacking. This study aims to evaluate probabilistic post-processing approaches such as Nonhomogeneous Gaussian Regression (NGR) and Bayesian Model Averaging (BMA) for improving daily temperature forecasts from two NWP models, namely, the European Centre for Medium Range Weather Forecasts (ECMWF) and the Global Ensemble Forecast System (GEFS), across the Indian subcontinent. The effect of probabilistic PPTs on heatwave prediction skill across India is also evaluated. Results show that probabilistic PPTs comprehensively outperform traditional approaches in forecasting temperatures across India at all lead times. In the Himalayan regions where the forecast skill of raw forecasts is low, the probabilistic techniques are not able to produce skillful forecasts even though they perform much better than traditional techniques. The NGR method is found to be the best-performing PPT across the Indian region. Post-processing Tmax forecasts using the NGR approach was found to considerably improve the heatwave prediction skill across highly heatwave-prone regions in India. The outcomes of this study will be helpful in setting up improved heatwave prediction and early warning systems in India.
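As a much-simplified sketch of the NGR idea referenced above — a Gaussian predictive distribution whose mean is linear in the ensemble mean and whose variance is linear in the ensemble variance — consider the following. The function names are hypothetical, and real NGR implementations fit all four coefficients jointly by minimizing the CRPS rather than the two-stage least squares shown here.

```python
import numpy as np

def fit_ngr_simplified(ens_mean, ens_var, obs):
    """Fit a simplified NGR model: obs ~ N(a + b*ens_mean, c + d*ens_var).

    a, b are fit by ordinary least squares on the ensemble mean;
    c, d by regressing squared residuals on the ensemble variance.
    (Proper NGR estimates all four parameters jointly by CRPS minimization.)
    """
    X = np.column_stack([np.ones_like(ens_mean), ens_mean])
    (a, b), *_ = np.linalg.lstsq(X, obs, rcond=None)
    resid2 = (obs - (a + b * ens_mean)) ** 2
    Xv = np.column_stack([np.ones_like(ens_var), ens_var])
    (c, d), *_ = np.linalg.lstsq(Xv, resid2, rcond=None)
    return a, b, max(c, 0.0), max(d, 0.0)  # clamp variance terms at zero

def ngr_predict(params, ens_mean, ens_var):
    """Mean and standard deviation of the calibrated Gaussian forecast."""
    a, b, c, d = params
    return a + b * ens_mean, np.sqrt(c + d * ens_var)
```

The bias-correction effect comes from a and b (shifting and rescaling the raw ensemble mean), while c and d inflate or deflate the ensemble spread to match observed forecast uncertainty.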


Subjects
Environmental Monitoring, Heat Stroke, Humans, Temperature, Bayes Theorem, Environmental Monitoring/methods, Weather
3.
Geophys Res Lett ; 47(14): e2020GL088662, 2020 Jul 28.
Article in English | MEDLINE | ID: mdl-32999514

ABSTRACT

Future changes in tropical cyclone properties are an important component of climate change impacts and risk for many tropical and midlatitude countries. In this study we assess the performance of a multimodel ensemble of climate models, at resolutions ranging from 250 to 25 km. We use a common experimental design including both atmosphere-only and coupled simulations run over the period 1950-2050, with two tracking algorithms applied uniformly across the models. There are overall improvements in tropical cyclone frequency, spatial distribution, and intensity in models at 25 km resolution, with several of them able to represent very intense storms. Projected tropical cyclone activity by 2050 generally declines in the South Indian Ocean, while changes in other ocean basins are more uncertain and sensitive to both tracking algorithm and imposed forcings. Coupled models with smaller biases suggest a slight increase in average tropical cyclone 10 m wind speeds by 2050.

4.
Proc Natl Acad Sci U S A ; 112(29): 8999-9003, 2015 Jul 21.
Article in English | MEDLINE | ID: mdl-26150515

ABSTRACT

The crystallographic reliability index Rcomplete is based on a method proposed more than two decades ago. Because its calculation is computationally expensive, its use did not spread in the crystallographic community, which instead favored the cross-validation method known as Rfree. The importance of Rfree has grown beyond a pure validation tool; however, its application requires a sufficiently large dataset. In this work we assess the reliability of Rcomplete and we compare it with k-fold cross-validation, bootstrapping, and jackknifing. As opposed to proper cross-validation as realized with Rfree, Rcomplete relies on a method of reducing bias from the structural model. We compare two different methods of reducing model bias and question the widely spread notion that random parameter shifts are required for this purpose. We show that Rcomplete has as little statistical bias as Rfree, with the benefit of a much smaller variance. Because the calculation of Rcomplete is based on the entire dataset instead of a small subset, it allows the estimation of maximum-likelihood parameters even for small datasets. Rcomplete enables maximum likelihood-based refinement to be extended to virtually all areas of crystallographic structure determination, including high-pressure studies, neutron diffraction studies, and datasets from free-electron lasers.
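The resampling schemes this abstract compares can be sketched generically. The toy example below (hypothetical helper names, with a simple 1-D linear fit standing in for a crystallographic model) contrasts k-fold cross-validation, which scores held-out data in the spirit of Rfree, with the jackknife, which reuses the entire dataset:

```python
import numpy as np

def kfold_cv_mse(x, y, k=10, seed=0):
    """k-fold cross-validated MSE for a 1-D linear least-squares fit.
    Each fold is held out once; the model is fit on the remaining data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        slope, intercept = np.polyfit(x[train], y[train], 1)
        pred = slope * x[fold] + intercept
        errs.append(np.mean((y[fold] - pred) ** 2))
    return float(np.mean(errs))

def jackknife_slope(x, y):
    """Leave-one-out (jackknife) estimates of the fitted slope:
    every observation is used, and no subset is permanently held out."""
    n = len(x)
    slopes = np.array([np.polyfit(np.delete(x, i), np.delete(y, i), 1)[0]
                       for i in range(n)])
    # jackknife standard error of the slope estimate
    return slopes.mean(), slopes.std(ddof=1) * np.sqrt(n - 1)
```

The analogy is loose: crystallographic refinement is far more complex than a line fit, but the trade-off is the same one the abstract describes — held-out validation sacrifices part of the data, while whole-dataset estimators must control model bias in another way.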

5.
Acta Crystallogr D Biol Crystallogr ; 71(Pt 3): 646-66, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25760612

ABSTRACT

A method is presented that modifies a 2mFobs - DFmodel σA-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mFobs - DFmodel σA-weighted map.


Subjects
Molecular Models
6.
J Struct Biol ; 186(1): 122-31, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24582855

ABSTRACT

Three-dimensional structures of biological assemblies may be calculated from images of single particles obtained by electron cryomicroscopy. A key step is the correct determination of the orientation of the particle in individual image projections. A useful tool for validation of the quality of a 3D map and its consistency with images is tilt-pair analysis. In a successful tilt-pair test, the relative angle between orientations assigned to each image of a tilt-pair agrees with the known relative rotation angle of the microscope specimen holder during the experiment. To make the procedure easy to apply to the increasing number of single particle maps, we have developed software and a web server for tilt-pair analysis. The tilt-pair analysis program reports the overall agreement of the assigned orientations with the known tilt angle and axis of the experiment and the distribution of tilt transformations for individual particles recorded in a single image field. We illustrate application of the validation tool to several single particle specimens and describe how to interpret the scores.
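The core computation of a tilt-pair test — the relative rotation between the two assigned orientations, compared against the known tilt of the specimen holder — reduces to a standard rotation-matrix identity, trace(R1ᵀR2) = 1 + 2·cos(θ). A minimal sketch with hypothetical helper names, not the published software:

```python
import numpy as np

def relative_tilt_angle(r1, r2):
    """Angle (degrees) of the relative rotation between two 3x3
    orientation matrices assigned to the untilted and tilted images."""
    r_rel = r1.T @ r2
    cos_t = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_t)))

def rot_x(deg):
    """Rotation about the x axis, standing in for the goniometer tilt axis."""
    t = np.radians(deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(t), -np.sin(t)],
                     [0.0, np.sin(t), np.cos(t)]])
```

In a successful test, the angles computed this way for individual particles cluster around the known experimental tilt; a scattered distribution indicates unreliable orientation assignment.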


Subjects
Cryoelectron Microscopy/methods, Three-Dimensional Imaging, Software, Algorithms, Internet, Molecular Models, Protein Quaternary Structure, Proteins/ultrastructure
7.
Ecol Evol ; 14(2): e10974, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38362172

ABSTRACT

Bioenergetics models estimate ectotherm growth, production, and prey consumption - all key for effective ecosystem management during changing global temperatures. Based on species-specific allometric and thermodynamic relationships, these models typically use the species' lab-derived optimum temperatures (physiological optimum) as opposed to empirical field data (realized thermal niche) that reflect actual thermal experience. Yet, dynamic behavioral thermoregulation mediated by biotic and abiotic interactions may provide substantial divergence between physiological optimum and realized thermal niche temperatures to significantly bias model outcomes. Here, using the Wisconsin bioenergetics model and in-situ year-round temperature data, we tested the two approaches and compared the maximum attainable lifetime weight and lifetime prey consumption estimates for two salmonid species with differing life histories. We demonstrate that using the realized thermal niche is the better approach because it eliminates significant biases in estimates produced by the physiological optimum. Specifically, using the physiological optimum, slower-growing Salvelinus namaycush maximum attainable lifetime weight was underestimated, and consumption overestimated, while fast-growing Oncorhynchus tshawytscha maximum attainable weight was overestimated. While the physiological optimum approach is useful for theoretical studies, our results demonstrate the critical importance of management models utilizing up-to-date, system- and species-specific field data that represent actual in-situ behavior (i.e., the realized thermal niche).

8.
J Imaging ; 10(7)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39057728

ABSTRACT

Images, texts, voices, and signals can be synthesized by latent spaces in a multidimensional vector, which can be explored without the hurdles of noise or other interfering factors. In this paper, we present a practical use case that demonstrates the power of latent space in exploring complex realities such as image space. We focus on DaVinciFace, an AI-based system that explores the StyleGAN2 space to create a high-quality portrait for anyone in the style of the Renaissance genius Leonardo da Vinci. The user enters one of their portraits and receives the corresponding Da Vinci-style portrait as an output. Since most of Da Vinci's artworks depict young and beautiful women (e.g., "La Belle Ferroniere", "Ginevra de' Benci"), we investigate the ability of DaVinciFace to account for other social categorizations, including gender, race, and age. The experimental results evaluate the effectiveness of our methodology on 1158 portraits, acting on the vector representations of the latent space to produce high-quality portraits that retain the facial features of the subject's social categories, and conclude that sparser vectors have a greater effect on these features. To objectively evaluate and quantify our results, we solicited human feedback via a crowd-sourcing campaign. Analysis of the human feedback showed a high tolerance for the loss of important identity features in the resulting portraits when the Da Vinci style is more pronounced, with some exceptions, including Africanized individuals.

9.
Acta Crystallogr D Biol Crystallogr ; 69(Pt 9): 1861-3, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23999309

ABSTRACT

Model building starting from, for example, a molecular-replacement solution with low sequence similarity introduces model bias, which can be difficult to detect, especially at low resolution. The program mrtailor removes low-similarity regions from a template PDB file according to sequence similarity between the target sequence and the template sequence and maps the target sequence onto the PDB file. The modified PDB file can be used to generate external restraints for low-resolution refinement with reduced model bias and can be used as a starting point for model building and refinement. The program can call ProSMART [Nicholls et al. (2012), Acta Cryst. D68, 404-417] directly in order to create external restraints suitable for REFMAC5 [Murshudov et al. (2011), Acta Cryst. D67, 355-367]. Both a command-line version and a GUI exist.


Subjects
Computer Simulation, Protein Databases, Sequence Alignment, Software, Amino Acid Sequence, User-Computer Interface
10.
Acta Crystallogr D Struct Biol ; 78(Pt 11): 1283-1293, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36322413

ABSTRACT

Structure predictions have matched the accuracy of experimental structures from close homologues, providing suitable models for molecular replacement phasing. Even in predictions that present large differences due to the relative movement of domains or poorly predicted areas, very accurate regions tend to be present. These are suitable for successful fragment-based phasing as implemented in ARCIMBOLDO. The particularities of predicted models are inherently addressed in the new predicted_model mode, rendering preliminary treatment superfluous but also harmless. B-value conversion from predicted LDDT or error estimates, the removal of unstructured polypeptide, hierarchical decomposition of structural units from domains to local folds and systematically probing the model against the experimental data will ensure the optimal use of the model in phasing. Concomitantly, the exhaustive use of models and stereochemistry in phasing, refinement and validation raises the concern of crystallographic model bias and the need to critically establish the information contributed by the experiment. Therefore, in its predicted_model mode ARCIMBOLDO_SHREDDER will first determine whether the input model already constitutes a solution or provides a straightforward solution with Phaser. If not, extracted fragments will be located. If the landscape of solutions reveals numerous, clearly discriminated and consistent probes or if the input model already constitutes a solution, model-free verification will be activated. Expansions with SHELXE will omit the partial solution seeding phases and all traces outside their respective masks will be combined in ALIXE, as far as consistent. This procedure completely eliminates the molecular replacement search model in favour of the inferences derived from this model. In the case of fragments, an incorrect starting hypothesis impedes expansion. The predicted_model mode has been tested in different scenarios.


Subjects
Peptides, X-Ray Crystallography, Molecular Models
11.
J Am Med Inform Assoc ; 28(3): 549-558, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33236066

ABSTRACT

OBJECTIVE: To illustrate the problem of subpopulation miscalibration, to adapt an algorithm for recalibration of the predictions, and to validate its performance. MATERIALS AND METHODS: In this retrospective cohort study, we evaluated the calibration of predictions based on the Pooled Cohort Equations (PCE) and the fracture risk assessment tool (FRAX) in the overall population and in subpopulations defined by the intersection of age, sex, ethnicity, socioeconomic status, and immigration history. We next applied the recalibration algorithm and assessed the change in calibration metrics, including calibration-in-the-large. RESULTS: 1 021 041 patients were included in the PCE population, and 1 116 324 patients were included in the FRAX population. Baseline overall model calibration of the 2 tested models was good, but calibration in a substantial portion of the subpopulations was poor. After applying the algorithm, subpopulation calibration statistics were greatly improved, with the variance of the calibration-in-the-large values across all subpopulations reduced by 98.8% and 94.3% in the PCE and FRAX models, respectively. DISCUSSION: Prediction models in medicine are increasingly common. Calibration, the agreement between predicted and observed risks, is commonly poor for subpopulations that were underrepresented in the development set of the models, resulting in bias and reduced performance for these subpopulations. In this work, we empirically evaluated an adapted version of the fairness algorithm designed by Hebert-Johnson et al. (2017) and demonstrated its use in improving subpopulation miscalibration. CONCLUSION: A postprocessing and model-independent fairness algorithm for recalibration of predictive models greatly decreases the bias of subpopulation miscalibration and thus increases fairness and equality.
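The calibration-in-the-large statistic used in this study, and the simplest possible per-subgroup correction, can be sketched as follows. The function names are hypothetical, and this intercept-only recalibration is far simpler than the algorithm adapted in the paper, which handles many overlapping subpopulations:

```python
import numpy as np

def calibration_in_the_large(p_pred, y, groups):
    """Mean predicted risk minus observed event rate, per subpopulation.
    Values near zero indicate good calibration-in-the-large."""
    return {g: float(p_pred[groups == g].mean() - y[groups == g].mean())
            for g in np.unique(groups)}

def recalibrate_intercept(p_pred, y, groups):
    """Shift each subgroup's predictions on the logit scale so that the
    subgroup's mean prediction moves toward its observed event rate.
    Assumes predicted probabilities lie strictly between 0 and 1."""
    logit = lambda p: np.log(p / (1 - p))
    inv_logit = lambda z: 1.0 / (1.0 + np.exp(-z))
    p_new = p_pred.astype(float).copy()
    for g in np.unique(groups):
        m = groups == g
        target = np.clip(y[m].mean(), 1e-6, 1 - 1e-6)
        shift = logit(target) - logit(p_pred[m]).mean()
        p_new[m] = inv_logit(logit(p_pred[m]) + shift)
    return p_new
```

Even this toy version illustrates the paper's central point: a model can look well calibrated overall while individual subpopulations are badly miscalibrated, and a post-processing correction can shrink those subgroup gaps without retraining the model.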


Subjects
Algorithms, Statistical Models, Adult, Aged, Bias, Female, Humans, Male, Middle Aged, Multivariate Analysis, Prognosis, Proportional Hazards Models, Retrospective Studies, Risk Assessment
12.
Diagnostics (Basel) ; 12(1)2021 Dec 24.
Article in English | MEDLINE | ID: mdl-35054207

ABSTRACT

Machine learning models have been successfully applied for analysis of skin images. However, due to the black box nature of such deep learning models, it is difficult to understand their underlying reasoning. This prevents a human from validating whether the model is right for the right reasons. Spurious correlations and other biases in data can cause a model to base its predictions on such artefacts rather than on the true relevant information. These learned shortcuts can in turn cause incorrect performance estimates and can result in unexpected outcomes when the model is applied in clinical practice. This study presents a method to detect and quantify this shortcut learning in trained classifiers for skin cancer diagnosis, since it is known that dermoscopy images can contain artefacts. Specifically, we train a standard VGG16-based skin cancer classifier on the public ISIC dataset, for which colour calibration charts (elliptical, coloured patches) occur only in benign images and not in malignant ones. Our methodology artificially inserts those patches and uses inpainting to automatically remove patches from images to assess the changes in predictions. We find that our standard classifier partly bases its predictions of benign images on the presence of such a coloured patch. More importantly, by artificially inserting coloured patches into malignant images, we show that shortcut learning results in a significant increase in misdiagnoses, making the classifier unreliable when used in clinical practice. With our results, we, therefore, want to increase awareness of the risks of using black box machine learning models trained on potentially biased datasets. Finally, we present a model-agnostic method to neutralise shortcut learning by removing the bias in the training dataset by exchanging coloured patches with benign skin tissue using image inpainting and re-training the classifier on this de-biased dataset.

13.
Top Cogn Sci ; 11(4): 811-816, 2019 10.
Article in English | MEDLINE | ID: mdl-30457220

ABSTRACT

Hope and Gabbert (2008) and Jay and colleagues (in press) show us that collaborative remembering, in certain contexts, may result in incomplete and less accurate memories. Here, I will discuss the evolutionary origins of this behavior, linking it to phenomena such as social contagion, conformity, and social learning, which are highly adaptive and widespread across non-human taxa.


Subjects
Cooperative Behavior, Memory/physiology, Mental Recall/physiology, Psychological Adaptation/physiology, Biological Evolution, Humans, Knowledge, Social Behavior, Social Learning/physiology
15.
Sci Total Environ ; 650(Pt 2): 2577-2586, 2019 Feb 10.
Article in English | MEDLINE | ID: mdl-30293009

ABSTRACT

Bridging the gap between the predictions of coarse-scale climate models and the fine-scale climatic reality is a key issue of hydrological research and water management. While many advances have been realized in developed countries, the situation is contrastingly different in most tropical regions where we still lack information on potential discrepancies between measured and modeled climatic conditions. Consequently, water managers in these regions often rely on non-academic expertise to help them plan their future strategies. This issue is particularly alarming in tropical mountainous areas where water demand is increasing rapidly and climate change is expected to have severe impacts. In this article, we addressed this issue by evaluating the limitations and prospects in using regional climate models for evaluating the impact of climate change on water availability in a watershed that provides Quito, the capital of Ecuador, with about 30% of its current water needs. In particular, we quantified the temporal and spatial discrepancies between predicted and observed precipitation and temperature, and explored underlying mechanisms at play. Our results provide a strong critique of the inappropriate use of regional models to inform water planning with regard to adaptation strategies to face climate change. As a multidisciplinary group composed of hydrologists, ecologists and water managers, we then propose a framework to guide future climate change impact studies in tropical mountain watersheds where hydro-climatological data are scarce.

16.
IUCrJ ; 5(Pt 2): 166-171, 2018 Mar 01.
Article in English | MEDLINE | ID: mdl-29765606

ABSTRACT

Determining macromolecular structures from X-ray data with resolution worse than 3 Å remains a challenge. Even if a related starting model is available, its incompleteness or its bias together with a low observation-to-parameter ratio can render the process unsuccessful or very time-consuming. Yet, many biologically important macromolecules, especially large macromolecular assemblies, membrane proteins and receptors, tend to provide crystals that diffract to low resolution. A new algorithm to tackle this problem is presented that uses a multivariate function to simultaneously exploit information from both an initial partial model and low-resolution single-wavelength anomalous diffraction data. The new approach has been used for six challenging structure determinations, including the crystal structures of membrane proteins and macromolecular complexes that have evaded experts using other methods, and large structures from a 3.0 Å resolution F1-ATPase data set and a 4.5 Å resolution SecYEG-SecA complex data set. All of the models were automatically built by the method to Rfree values of between 28.9 and 39.9% and were free from the initial model bias.

17.
Protein Sci ; 26(12): 2410-2416, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28960580

ABSTRACT

In 2012, Karplus and Diederichs demonstrated that the Pearson correlation coefficient CC1/2 is a far better indicator of the quality and resolution of crystallographic data sets than more traditional measures like merging R-factor or signal-to-noise ratio. More specifically, they proposed that CC1/2 be computed for data sets in thin shells of increasing resolution so that the resolution dependence of that quantity can be examined. Recently, however, the CC1/2 values of entire data sets, i.e., cumulative correlation coefficients, have been used as a measure of data quality. Here, we show that the difference in cumulative CC1/2 value between a data set that has been accurately measured and a data set that has not is likely to be small. Furthermore, structures obtained by molecular replacement from poorly measured data sets are likely to suffer from extreme model bias.
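The CC1/2 statistic discussed here is straightforward to sketch: the repeated observations of each reflection are randomly split into two halves, each half is merged (averaged), and the two half-dataset values are correlated. A toy version follows — the `cc_half` helper is hypothetical, not production merging code, and ignores symmetry, outlier rejection, and resolution binning:

```python
import numpy as np

def cc_half(obs, seed=0):
    """CC1/2 for a set of multiply measured reflections.

    obs: dict mapping a reflection identifier (e.g., an hkl index) to an
         array of its measured intensities (at least 2 observations each).
    Each reflection's observations are randomly split into two halves;
    the Pearson correlation between the half-dataset averages is returned.
    """
    rng = np.random.default_rng(seed)
    half1, half2 = [], []
    for intensities in obs.values():
        iobs = rng.permutation(np.asarray(intensities, float))
        k = len(iobs) // 2
        half1.append(iobs[:k].mean())
        half2.append(iobs[k:].mean())
    return float(np.corrcoef(half1, half2)[0, 1])
```

Computed in thin resolution shells, as Karplus and Diederichs proposed, the statistic falls toward zero where the data run out of signal; computed cumulatively over a whole dataset, as this abstract warns, strong low-resolution shells can mask poorly measured high-resolution data.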


Subjects
X-Ray Crystallography/methods, Factual Databases, Molecular Models, Photosystem II Protein Complex/chemistry, Signal-To-Noise Ratio
18.
Methods Enzymol ; 579: 227-53, 2016.
Article in English | MEDLINE | ID: mdl-27572729

ABSTRACT

Single-particle electron cryomicroscopy may be used to determine the structure of biological assemblies by aligning and averaging low-contrast projection images recorded in the electron microscope. Recent progress in both experimental and computational methods has led to higher resolution three-dimensional maps, including for more challenging low molecular weight proteins, and this has highlighted the problems of model bias and over-fitting during iterative refinement that can potentially lead to incorrect map features at low or high resolution. This chapter discusses the principles and practice of specific validation tests that demonstrate the consistency of a 3D map with projection images. In addition, the chapter describes tests that detect over-fitting during refinement and lead to more robust assessment of both global and local map resolution. Application of several of these tests together demonstrates the reliability of single-particle maps that underpins their correct biological interpretation.


Subjects
Algorithms, Cryoelectron Microscopy/methods, Computer-Assisted Image Processing/statistics & numerical data, Software, Bacterial Proteins/ultrastructure, Cryoelectron Microscopy/instrumentation, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging/instrumentation, Three-Dimensional Imaging/methods, Ketone Oxidoreductases/ultrastructure, Molecular Models, Protein Conformation, beta-Galactosidase/ultrastructure
19.
J Appl Crystallogr ; 49(Pt 3): 1021-1028, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-27275144

ABSTRACT

Advances in beamline optics, detectors and X-ray sources allow new techniques of crystallographic data collection. In serial crystallography, a large number of partial datasets from crystals of small volume are measured. Merging of datasets from different crystals in order to enhance data completeness and accuracy is only valid if the crystals are isomorphous, i.e. sufficiently similar in cell parameters, unit-cell contents and molecular structure. Identification and exclusion of non-isomorphous datasets is therefore indispensable and must be done by means of suitable indicators. To identify rogue datasets, the influence of each dataset on CC1/2 [Karplus & Diederichs (2012). Science, 336, 1030-1033], the correlation coefficient between pairs of intensities averaged in two randomly assigned subsets of observations, is evaluated. The presented method employs a precise calculation of CC1/2 that avoids the random assignment, and instead of using an overall CC1/2, an average over resolution shells is employed to obtain sensible results. The selection procedure was verified by measuring the correlation of observed (merged) intensities and intensities calculated from a model. It is found that inclusion and merging of non-isomorphous datasets may bias the refined model towards those datasets, and measures to reduce this effect are suggested.

20.
J Mol Biol ; 426(4): 980-93, 2014 Feb 20.
Article in English | MEDLINE | ID: mdl-24269527

ABSTRACT

Sharpening is a powerful method to restore the details from blurred electron density in crystals with high overall temperature factors (B-factors). This valuable technique is currently not optimally used because of the uncertainty in the scope of its application and ambiguities in practice. We performed an analysis of ~2000 crystal data sets deposited in the Protein Data Bank and show that sharpening improves the electron density map in many cases across all resolution ranges, often with dramatic enhancement for mid- and low-resolution structures. It is effective when used with either experimental or model phases without introducing additional bias. Our tests also provide a practical guide for optimal sharpening. We further show that anisotropic diffraction correction improves electron density in many cases but should be used with caution. Our study demonstrates that a routine practice of electron density sharpening may have a broad impact on the outcomes of structural biology studies.
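Sharpening itself is a simple resolution-dependent rescaling of structure-factor amplitudes. A minimal sketch, assuming the usual exp(-B·s²/4) scale convention in which a negative sharpening B-factor boosts high-resolution (small d-spacing) terms; the helper name is hypothetical, not from the study:

```python
import numpy as np

def sharpen_amplitudes(f_obs, d_spacing, b_sharpen):
    """Apply a B-factor sharpening scale exp(-B * s^2 / 4) to structure-
    factor amplitudes, where s = 1/d. A negative b_sharpen amplifies
    high-resolution reflections, restoring detail blurred by a high
    overall temperature factor."""
    s2 = 1.0 / np.asarray(d_spacing, float) ** 2
    return np.asarray(f_obs, float) * np.exp(-b_sharpen * s2 / 4.0)
```

The practical questions the study addresses — how negative to make the sharpening B-factor, and when the amplified high-resolution terms are signal rather than noise — are exactly where the deposited-structure survey provides guidance.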


Subjects
X-Ray Crystallography/methods, Molecular Models, Protein Conformation, Anisotropy, Protein Databases, Electrons, Maltose-Binding Proteins/chemistry, Corticotropin-Releasing Hormone Receptors/chemistry