Results 1 - 20 of 55
1.
J Comput Aided Mol Des; 36(9): 623-638, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36114380

ABSTRACT

In May 2022, JCAMD published a Special Issue in honor of Gerald (Gerry) Maggiora, whose scientific leadership over many decades advanced the fields of computational chemistry and chemoinformatics for drug discovery. Along the way, he has impacted many researchers in both academia and the pharmaceutical industry. In this Epilogue, we explain the origins of the Festschrift and present a series of first-hand vignettes, in approximate chronological sequence, that together paint a picture of this remarkable man. Whether highlighting Gerry's endless curiosity about the molecular life sciences, his willingness to challenge conventional wisdom, or his generous support of junior colleagues and peers, these colleagues and collaborators are united in their appreciation of his positive influence. These tributes also reflect key trends and themes in the evolution of modern drug discovery, seen through the lens of people who worked with a visionary leader. Junior scientists will find in them an inspiring roadmap for creative collegiality and collaboration.


Subjects
Biological Science Disciplines, Mentors, History, 20th Century, Humans
2.
PLoS One; 16(1): e0245874, 2021.
Article in English | MEDLINE | ID: mdl-33513170

ABSTRACT

OBJECTIVE: One of the greatest challenges in clinical trial design is dealing with the subjectivity and variability introduced by human raters when measuring clinical end-points. We hypothesized that robotic measures that capture the kinematics of human movements collected longitudinally in patients after stroke would bear a significant relationship to the ordinal clinical scales and potentially lead to the development of more sensitive motor biomarkers that could improve the efficiency and reduce the cost of clinical trials. MATERIALS AND METHODS: We used clinical scales and a robotic assay to measure arm movement in 208 patients 7, 14, 21, 30 and 90 days after acute ischemic stroke at two separate clinical sites. The robots are low-impedance, low-friction interactive devices that precisely measure speed, position and force, so that even a hemiparetic patient can generate a complete measurement profile. These profiles were used to develop predictive models of the clinical assessments employing a combination of artificial ant colonies and neural network ensembles. RESULTS: The resulting models replicated commonly used clinical scales to a cross-validated R² of 0.73, 0.75, 0.63 and 0.60 for the Fugl-Meyer, Motor Power, NIH Stroke and modified Rankin scales, respectively. Moreover, when suitably scaled and combined, the robotic measures demonstrated a significant increase in effect size from day 7 to 90 over historical data (1.47 versus 0.67). DISCUSSION AND CONCLUSION: These results suggest that it is possible to derive surrogate biomarkers that can significantly reduce the sample size required to power future stroke clinical trials.
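As a back-of-the-envelope illustration of the effect-size comparison quoted above (not the authors' modeling pipeline, which combines ant colony optimization with neural network ensembles), a standardized effect for paired longitudinal measurements can be computed as the mean day-7-to-day-90 change divided by the standard deviation of that change. The scores below are invented; only the formula is standard.

    import statistics

    def standardized_effect(day7, day90):
        # Standardized effect for paired data: mean change / SD of change
        changes = [b - a for a, b in zip(day7, day90)]
        return statistics.mean(changes) / statistics.stdev(changes)

    # Hypothetical composite robotic scores for five patients
    print(standardized_effect([10, 12, 9, 14, 11], [22, 25, 18, 27, 24]))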


Subjects
Movement, Recovery of Function, Robotics/methods, Stroke Rehabilitation/standards, Stroke/physiopathology, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Neurologic Examination/methods, Neurologic Examination/standards, Stroke Rehabilitation/methods
3.
SLAS Technol; 25(5): 427-435, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32726559

ABSTRACT

Covance Drug Development produces more than 55 million test results via its central laboratory services, requiring the delivery of more than 10 million reports annually to investigators at 35,000 sites in 89 countries. Historically, most of these data were delivered via fax or electronic data transfers in delimited text or SAS transport file format. Here, we present a new web portal that allows secure online delivery of laboratory results, reports, manuals, and training materials, and enables collaboration with investigational sites through alerts, announcements, and communications. The system leverages a three-tier architecture: preexisting data warehouses augmented with an application-specific relational database that stores configuration data and materialized views for performance optimization, a RESTful web application programming interface (API), and a browser-based single-page application for user access. This design offers greatly improved capabilities and user experience without requiring any changes to the underlying acquisition systems and data stores. Following a 3-month controlled rollout with 6,500 users at early-adopter sites, the Xcellerate Investigator Portal was deployed to all 240,000 existing users of Covance's Central Laboratory Services, gaining widespread acceptance and pointing to significant benefits in productivity, convenience, and user experience.
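To make the three-tier design concrete, here is a minimal sketch of the middle tier: a REST endpoint that serves laboratory results from a (SQLite-simulated) materialized view to a browser client. The table, route, and field names are hypothetical, not the portal's actual API.

    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)
    db = sqlite3.connect(":memory:", check_same_thread=False)
    db.execute("CREATE TABLE mv_lab_results (site_id TEXT, subject_id TEXT, test TEXT, value REAL)")
    db.execute("INSERT INTO mv_lab_results VALUES ('S001', 'P01', 'ALT', 31.0)")

    @app.route("/api/sites/<site_id>/results")
    def site_results(site_id):
        # Query the materialized view; the single-page app consumes the JSON
        rows = db.execute(
            "SELECT subject_id, test, value FROM mv_lab_results WHERE site_id = ?",
            (site_id,)).fetchall()
        return jsonify([{"subject": s, "test": t, "value": v} for s, t, v in rows])

    if __name__ == "__main__":
        app.run()  # e.g., GET /api/sites/S001/results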


Subjects
Communication, Internet, Laboratories, Software, Humans, User-Computer Interface
4.
Database (Oxford); 2019, 2019 Jan 1.
Article in English | MEDLINE | ID: mdl-30942863

ABSTRACT

Timely, consistent and integrated access to clinical trial data remains one of the pharmaceutical industry's most pressing needs. As part of a comprehensive clinical data repository, we have developed a data warehouse that can integrate operational data from any source, conform it to a canonical data model and make it accessible to study teams in a timely, secure and contextualized manner to support operational oversight, proactive risk management and other analytic and reporting needs. Our solution consists of a dimensional relational data warehouse; a set of extraction, transformation and loading processes to coordinate data ingestion and mapping; a generalizable metrics engine to enable the computation of operational metrics and key performance, quality and risk indicators; and a set of graphical user interfaces to facilitate configuration, management and administration. When combined with the appropriate data visualization tools, the warehouse enables convenient access to raw operational data and derived metrics to help track study conduct and performance, identify and mitigate risks, monitor and improve operational processes, manage resource allocation, and strengthen investigator and sponsor relationships, among other purposes.


Subjects
Clinical Trials as Topic, Data Warehousing, Database Management Systems, Humans, Research Report
5.
Database (Oxford); 2019, 2019 Jan 1.
Article in English | MEDLINE | ID: mdl-30854563

ABSTRACT

Clinical trial data are typically collected through multiple systems developed by different vendors using different technologies and data standards. These data need to be integrated, standardized and transformed for a variety of monitoring and reporting purposes. The need to process large volumes of often inconsistent data in the presence of ever-changing requirements poses a significant technical challenge. As part of a comprehensive clinical data repository, we have developed a data warehouse that integrates patient data from any source, standardizes it and makes it accessible to study teams in a timely manner to support a wide range of analytic tasks for both in-flight and completed studies. Our solution combines Apache HBase, a NoSQL column store; Apache Phoenix, a massively parallel relational query engine; and a user-friendly interface to facilitate efficient loading of large volumes of data under incomplete or ambiguous specifications, utilizing an extract-load-transform design pattern that defers data mapping until query time. This approach allows us to maintain a single copy of the data and transform it dynamically into any desired format without requiring additional storage. Changes to the mapping specifications can be easily introduced, and multiple representations of the data can be made available concurrently. Further, by versioning the data and the transformations separately, we can apply historical maps to current data or current maps to historical data, which simplifies the maintenance of data cuts and facilitates interim analyses for adaptive trials. The result is a highly scalable, secure and redundant solution that combines the flexibility of a NoSQL store with the robustness of a relational query engine to support a broad range of applications, including clinical data management, medical review, risk-based monitoring, safety signal detection, post hoc analysis of completed studies and many others.
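The extract-load-transform pattern described above can be illustrated with a minimal sketch: raw records are stored once, verbatim, and versioned mapping functions are applied only when the data is queried. Field names and mappings are hypothetical; the production system uses HBase and Phoenix, not Python dictionaries.

    # Raw records loaded verbatim from two source systems with different standards
    raw_store = [
        {"SUBJID": "001", "SEX": "M", "AGE": "34"},
        {"subject": "002", "gender": "f", "age_yrs": "41"},
    ]

    # Versioned transformation specs, maintained separately from the data
    maps = {
        "v1": lambda r: {
            "subject_id": r.get("SUBJID") or r.get("subject"),
            "sex": (r.get("SEX") or r.get("gender", "")).upper(),
            "age": int(r.get("AGE") or r.get("age_yrs")),
        },
    }

    def query(version="v1"):
        # Transform dynamically at read time; no second copy is ever stored
        return [maps[version](r) for r in raw_store]

    print(query())  # a historical map could equally be applied to current data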


Subjects
Clinical Trials as Topic, Data Warehousing, Database Management Systems, Humans, Machine Learning, User-Computer Interface
6.
Database (Oxford); 2019, 2019 Jan 1.
Article in English | MEDLINE | ID: mdl-30773591

ABSTRACT

Assembly of complete and error-free clinical trial data sets for statistical analysis and regulatory submission requires extensive effort and communication among investigational sites, central laboratories, pharmaceutical sponsors, contract research organizations and other entities. Traditionally, these data are captured, cleaned and reconciled through multiple disjointed systems and processes, which is resource-intensive and error-prone. Here, we introduce a new system for clinical data review that helps data managers identify missing, erroneous and inconsistent data and manage queries in a unified, system-agnostic and efficient way. Our solution enables timely and integrated access to all study data regardless of source; facilitates the review of validation and discrepancy checks and the management of the resulting queries; tracks the status of page review, verification and locking activities; monitors subject data cleanliness and readiness for database lock; and provides extensive configuration options to meet any study's needs, automation for regular updates, and fit-for-purpose user interfaces for global oversight and problem detection.


Subjects
Clinical Trials as Topic, Databases as Topic, Data Warehousing
7.
JAMIA Open; 2(2): 216-221, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31984356

ABSTRACT

OBJECTIVE: We present a new system to track, manage, and report on all risks and issues encountered during a clinical trial. MATERIALS AND METHODS: Our solution utilizes JIRA, a popular issue and project tracking tool for software development, augmented by third-party and custom-built plugins to provide the additional functionality missing from the core product. RESULTS: The new system integrates all issue types under a single tracking tool and offers a range of capabilities, including configurable issue management workflows, seamless integration with other clinical systems, extensive history, reporting, and trending, and an intuitive web interface. DISCUSSION AND CONCLUSION: By preserving the linkage between risks, issues, actions, decisions, and outcomes, the system allows study teams to assess the impact and effectiveness of their risk management strategies and present a coherent account of how the trial was conducted. Since the tool was put into production, we have observed an increase in the number of reported issues and a decrease in the median issue resolution time, which, along with the positive user feedback, point to marked improvements in quality, transparency, productivity, and teamwork.
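Because the system is built on JIRA, new risk items can be raised programmatically through JIRA's standard REST interface (POST /rest/api/2/issue). In this sketch the host, project key, "Risk" issue type, and credentials are placeholders; the custom plugins and workflows described above are not shown.

    import requests

    issue = {
        "fields": {
            "project": {"key": "RBM"},          # hypothetical project key
            "issuetype": {"name": "Risk"},      # custom issue type
            "summary": "Elevated query rate at site 1234",
            "description": "Raised by central monitoring.",
        }
    }
    resp = requests.post(
        "https://jira.example.com/rest/api/2/issue",
        json=issue,
        auth=("svc_account", "app-password"),   # placeholder credentials
        timeout=30,
    )
    print(resp.status_code, resp.json().get("key"))  # e.g., 201 RBM-42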

8.
Clin Ther; 40(7): 1204-1212, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30100201

ABSTRACT

PURPOSE: Clinical trial monitoring is an essential component of drug development aimed at safeguarding subject safety, data quality, and protocol compliance by focusing sponsor oversight on the most important aspects of study conduct. In recent years, regulatory agencies, industry consortia, and nonprofit collaborations between industry and regulators, such as TransCelerate and the International Council for Harmonisation, have been advocating a new, risk-based approach to monitoring clinical trials that places increased emphasis on critical data and processes and encourages greater use of centralized monitoring. However, how best to implement risk-based monitoring (RBM) remains unclear and subject to wide variations in tools and methodologies. The nonprescriptive nature of the regulatory guidelines, coupled with limitations in software technology, challenges in operationalization, and a lack of robust evidence of superior outcomes, has hindered its widespread adoption. METHODS: We describe a holistic solution that combines convenient access to data, advanced analytics, and seamless integration with established technology infrastructure to enable comprehensive assessment and mitigation of risk at the study, site, and subject level. FINDINGS: Using data from completed RBM studies carried out in the last 4 years, we demonstrate that our implementation of RBM improves the efficiency and effectiveness of the clinical oversight process as measured on various quality, timeline, and cost dimensions. IMPLICATIONS: These results provide strong evidence that our RBM methodology can significantly improve the clinical oversight process and do so at a lower cost through more intelligent deployment of monitoring resources to the sites that need the most attention.


Subjects
Clinical Trials as Topic, Data Accuracy, Guideline Adherence, Humans, Patient Safety, Risk
10.
Contemp Clin Trials Commun; 9: 108-114, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29696232

ABSTRACT

BACKGROUND: One of the keys to running a successful clinical trial is the selection of high quality clinical sites, i.e., sites that are able to enroll patients quickly, engage them on an ongoing basis to prevent drop-out, and execute the trial in strict accordance with the clinical protocol. Intuitively, the historical track record of a site is one of the strongest predictors of its future performance; however, issues such as data availability and wide differences in protocol complexity can complicate interpretation. Here, we demonstrate how operational data derived from central laboratory services can provide key insights into the performance of clinical sites and help guide operational planning and site selection for new clinical trials. METHODS: Our methodology uses the metadata associated with laboratory kit shipments to clinical sites (such as trial and anonymized patient identifiers, investigator names and addresses, sample collection and shipment dates, etc.) to reconstruct the complete schedule of patient visits and derive insights about the operational performance of those sites, including screening, enrollment, and drop-out rates and other quality indicators. This information can be displayed in its raw form or normalized to enable direct comparison of site performance across studies of varied design and complexity. RESULTS: Leveraging Covance's market leadership in central laboratory services, we have assembled a database of operational metrics that spans more than 14,000 protocols, 1,400 indications, 230,000 unique investigators, and 23 million patient visits and represents a significant fraction of all clinical trials run globally in the last few years. By analyzing this historical data, we are able to assess and compare the performance of clinical investigators across a wide range of therapeutic areas and study designs. This information can be aggregated across trials and geographies to gain further insights into country and regional trends, sometimes with surprising results. CONCLUSIONS: The use of operational data from Covance Central Laboratories provides a unique perspective into the performance of clinical sites with respect to many important metrics such as patient enrollment and retention. These metrics can, in turn, be used to guide operational planning and site selection for new clinical trials, thereby accelerating recruitment, improving quality, and reducing cost.
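As a simplified sketch of the methodology, the snippet below reconstructs per-site visit schedules from kit-shipment metadata and derives a normalized accrual rate (patients per site-month) that can be compared across studies. The record fields are hypothetical stand-ins for the identifiers and collection dates carried on each shipment.

    from collections import defaultdict
    from datetime import date

    shipments = [
        {"site": "S001", "patient": "P01", "collected": date(2017, 1, 10)},
        {"site": "S001", "patient": "P02", "collected": date(2017, 2, 3)},
        {"site": "S001", "patient": "P01", "collected": date(2017, 2, 14)},
    ]

    # Reconstruct each site's visit schedule: patient -> visit dates
    visits = defaultdict(lambda: defaultdict(list))
    for s in shipments:
        visits[s["site"]][s["patient"]].append(s["collected"])

    for site, patients in visits.items():
        dates = [d for ds in patients.values() for d in ds]
        months = max((max(dates) - min(dates)).days / 30.4, 1.0)
        # Normalized accrual, comparable across studies of different designs
        print(site, f"{len(patients) / months:.2f} patients/month")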

11.
Stroke; 45(1): 200-4, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24335224

ABSTRACT

BACKGROUND AND PURPOSE: Because robotic devices record the kinematics and kinetics of human movements with high resolution, we hypothesized that robotic measures collected longitudinally in patients after stroke would bear a significant relationship to standard clinical outcome measures and, therefore, might provide superior biomarkers. METHODS: In patients with moderate-to-severe acute ischemic stroke, we used clinical scales and robotic devices to measure arm movement 7, 14, 21, 30, and 90 days after the event at 2 clinical sites. The robots are interactive devices that measure speed, position, and force so that calculated kinematic and kinetic parameters could be compared with clinical assessments. RESULTS: Among 208 patients, the robotic measures predicted the clinical measures well (cross-validated R² for the modified Rankin scale = 0.60; National Institutes of Health Stroke Scale = 0.63; Fugl-Meyer = 0.73; Motor Power = 0.75). When suitably scaled and combined by an artificial neural network, the robotic measures demonstrated greater sensitivity in measuring the recovery of patients from day 7 to day 90 (increased standardized effect = 1.47). CONCLUSIONS: These results demonstrate that robotic measures of motor performance will more than adequately capture outcome, and the larger effect size will reduce the required sample size. Reducing sample size will likely improve study efficiency.


Subjects
Arm/physiology, Biomarkers, Movement/physiology, Robotics, Stroke Rehabilitation, Stroke/physiopathology, Aged, Biomechanical Phenomena, Data Interpretation, Statistical, Endpoint Determination, Ethnicity, Female, Functional Laterality/physiology, Humans, Male, Models, Anatomic, Nonlinear Dynamics, Predictive Value of Tests, Recovery of Function, Reproducibility of Results
12.
Curr Top Med Chem; 12(11): 1237-42, 2012.
Article in English | MEDLINE | ID: mdl-22571793

ABSTRACT

Drug discovery is a highly complex process requiring scientists from wide-ranging disciplines to work together in a well-coordinated and streamlined fashion. While the process can be compartmentalized into well-defined functional domains, the success of the entire enterprise rests on the ability to exchange data conveniently between these domains and to integrate it in meaningful ways to support the design, execution and interpretation of experiments aimed at optimizing the efficacy and safety of new drugs. This, in turn, requires information management systems that can support many different types of scientific technologies generating data of imposing complexity, diversity and volume. Here, we describe the key components of Advanced Biological and Chemical Discovery (ABCD), a software platform designed at Johnson & Johnson to bring coherence to the way discovery data is collected, annotated, organized, integrated, mined and visualized. Unlike the Gordian knot of one-off solutions built to serve a single purpose for a single set of users that one typically encounters in the pharmaceutical industry, we sought to develop a framework that could be extended and leveraged across different application domains and offer a consistent user experience marked by superior performance and usability. In this work, several major components of ABCD are highlighted, ranging from operational subsystems for managing reagents, reactions, compounds, and assays to advanced data mining and visualization tools for SAR analysis and interpretation. All these capabilities are delivered through a common application front-end called Third Dimension Explorer (3DX), a modular, multifunctional and extensible platform designed to be the "Swiss-army knife" of the discovery scientist.


Subjects
Drug Discovery, Software, Databases, Factual, Drug Industry
13.
J Chem Inf Model; 52(4): 867-81, 2012 Apr 23.
Article in English | MEDLINE | ID: mdl-22435959

ABSTRACT

The aim of virtual screening (VS) is to identify bioactive compounds through computational means, by employing knowledge about the protein target (structure-based VS) or known bioactive ligands (ligand-based VS). In VS, a large number of molecules are ranked according to their likelihood of being bioactive, with the aim of enriching the top fraction of the resulting list (which can then be tested in bioassays). At its core, VS attempts to improve the odds of identifying bioactive molecules by maximizing the true positive rate, that is, by ranking the truly active molecules as high as possible (and, correspondingly, the truly inactive ones as low as possible). In choosing the right approach, the researcher is faced with many questions: where does the optimal balance between efficiency and accuracy lie when evaluating a particular algorithm; do some methods perform better than others, and in what particular situations; and what do retrospective results tell us about the prospective utility of a particular method? Given the multitude of settings, parameters, and data sets the practitioner can choose from, many pitfalls lurk along the way that might render VS less efficient or downright useless. This review attempts to catalogue published and unpublished problems, shortcomings, failures, and technical traps of VS methods, with the aim of helping users avoid these pitfalls by making them aware of them in the first place.
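The enrichment goal described above is commonly quantified by an enrichment factor: how over-represented the true actives are in the top fraction of the ranked list relative to random selection. A minimal sketch with synthetic scores and labels:

    def enrichment_factor(scores, is_active, top_frac=0.01):
        # Rank by descending score and measure the hit rate in the top fraction
        ranked = sorted(zip(scores, is_active), key=lambda p: p[0], reverse=True)
        n_top = max(1, int(len(ranked) * top_frac))
        hit_rate_top = sum(active for _, active in ranked[:n_top]) / n_top
        hit_rate_all = sum(is_active) / len(is_active)
        return hit_rate_top / hit_rate_all  # 1.0 = no better than random

    scores = [0.91, 0.85, 0.40, 0.77, 0.12, 0.66]  # higher = predicted active
    labels = [1, 0, 0, 1, 0, 0]                    # ground truth from bioassays
    print(enrichment_factor(scores, labels, top_frac=0.34))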


Subjects
Algorithms, Molecular Docking Simulation, Proteins/chemistry, Small Molecule Libraries/chemistry, User-Computer Interface, Binding Sites, Databases, Chemical, High-Throughput Screening Assays, Humans, Ligands, Likelihood Functions, Protein Binding, Proteins/agonists, Proteins/antagonists & inhibitors, Structure-Activity Relationship
14.
J Chem Inf Model; 51(12): 3113-30, 2011 Dec 27.
Article in English | MEDLINE | ID: mdl-22035187

ABSTRACT

Efficient substructure searching is a key requirement for any chemical information management system. In this paper, we describe the substructure search capabilities of ABCD, an integrated drug discovery informatics platform developed at Johnson & Johnson Pharmaceutical Research & Development, L.L.C. The solution consists of several algorithmic components: 1) a pattern-mapping algorithm for solving the subgraph isomorphism problem, 2) an indexing scheme that enables very fast substructure searches on large structure files, 3) the incorporation of that indexing scheme into an Oracle cartridge to enable querying large relational databases through SQL, and 4) a cost estimation scheme that allows the Oracle cost-based optimizer to generate a good execution plan when a substructure search is combined with additional constraints in a single SQL query. The algorithm was tested on a public database comprising nearly 1 million molecules using 4,629 substructure queries, the vast majority of which were submitted by discovery scientists over the last 2.5 years of user acceptance testing of ABCD. Of these queries, 80.7% completed in less than one second and 96.8% in less than ten seconds on a single CPU; on eight processing cores, these numbers increased to 93.2% and 99.7%, respectively. The slower queries involved extremely generic patterns that returned the entire database as screening hits and required extensive atom-by-atom verification.
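The screen-then-verify strategy described above can be illustrated with the open-source RDKit (this is not the ABCD Oracle cartridge): a cheap fingerprint filter eliminates most non-matches, and only the survivors undergo full atom-by-atom subgraph isomorphism verification.

    from rdkit import Chem

    database = [Chem.MolFromSmiles(s) for s in
                ["c1ccccc1O", "CCN(CC)CC", "CC(=O)Oc1ccccc1C(=O)O", "C1CCCCC1"]]
    query = Chem.MolFromSmiles("c1ccccc1")  # benzene ring as substructure query

    # Stage 1: screening with pattern fingerprints; a molecule missing any
    # query bit cannot contain the substructure, so it is safely eliminated
    qfp = Chem.PatternFingerprint(query)
    candidates = [m for m in database
                  if (Chem.PatternFingerprint(m) & qfp).GetNumOnBits()
                  == qfp.GetNumOnBits()]

    # Stage 2: exact atom-by-atom verification on the survivors only
    hits = [Chem.MolToSmiles(m) for m in candidates if m.HasSubstructMatch(query)]
    print(hits)  # the two aromatic molecules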


Subjects
Algorithms, Drug Discovery, Informatics/methods, Small Molecule Libraries/chemistry, Databases, Factual, Drug Discovery/economics, Informatics/economics, Time Factors
15.
J Chem Inf Model; 51(12): 3275-86, 2011 Dec 27.
Article in English | MEDLINE | ID: mdl-22035213

ABSTRACT

We present a novel approach for enhancing the diversity of a chemical library, rooted in the theory of the wisdom of crowds. Our approach was motivated by a desire to tap into the collective experience of our global medicinal chemistry community and involved four basic steps: (1) Candidate compounds for acquisition were screened using various structural and property filters in order to eliminate clearly nondrug-like matter. (2) The remaining compounds were clustered together with our in-house collection using a novel fingerprint-based clustering algorithm that emphasizes common substructures and works with millions of molecules. (3) Clusters populated exclusively by external compounds were identified as "diversity holes," and representative members of these clusters were presented to our global medicinal chemistry community, who were asked to specify which ones they liked, disliked, or were indifferent to using a simple point-and-click interface. (4) The resulting votes were used to rank the clusters from most to least desirable, and to prioritize which ones should be targeted for acquisition. Analysis of the voting results reveals interesting voter behaviors and distinct preferences for certain molecular property ranges that are fully consistent with lead-like profiles established through systematic analysis of large historical databases.
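Step (4) can be sketched in a few lines: aggregate the like/dislike/indifferent votes into a score per cluster and rank. The +1/0/-1 weighting is an assumption for illustration; the paper does not prescribe exact weights.

    from collections import Counter

    votes = [  # (cluster_id, vote) pairs from the point-and-click interface
        (7, "like"), (7, "like"), (7, "dislike"),
        (3, "indifferent"), (3, "dislike"), (9, "like"),
    ]
    weight = {"like": 1, "indifferent": 0, "dislike": -1}

    scores = Counter()
    for cluster, vote in votes:
        scores[cluster] += weight[vote]

    # Most desirable diversity-hole clusters first -> acquisition priority
    for cluster, score in scores.most_common():
        print(cluster, score)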


Subjects
Small Molecule Libraries/chemistry, Chemistry, Pharmaceutical/methods, Cluster Analysis, Molecular Structure
16.
J Chem Inf Model; 51(11): 2852-9, 2011 Nov 28.
Article in English | MEDLINE | ID: mdl-21961974

ABSTRACT

Stochastic proximity embedding (SPE) was developed as a method for efficiently computing lower-dimensional embeddings of high-dimensional data sets. Rather than using a global minimization scheme, SPE relies on iteratively updating the distances between randomly selected pairs of points. This was found to generate embeddings of comparable quality to those obtained using classical multidimensional scaling algorithms. However, SPE obtains these results in O(n) rather than O(n²) time and is thus much better suited to large data sets. In an effort to both speed up SPE and apply it to even larger problems, we have created a multithreaded implementation that takes advantage of the growing general-purpose computing power of graphics processing units (GPUs). The use of GPUs allows data sets containing millions of points to be embedded on interactive time scales.
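For reference, a minimal serial sketch of the SPE update rule: repeatedly pick a random pair of points and nudge their embedded coordinates so that their embedded distance moves toward the original-space distance, while annealing the learning rate. The paper's contribution is parallelizing many such updates on a GPU; the schedule below is an assumption.

    import random
    import numpy as np

    def spe(data, dim=2, n_steps=100_000, lam=1.0, lam_min=0.01, eps=1e-9):
        n = len(data)
        x = np.random.rand(n, dim)                 # random initial embedding
        decay = (lam_min / lam) ** (1.0 / n_steps)
        for _ in range(n_steps):
            i, j = random.sample(range(n), 2)      # pick a random pair
            r = np.linalg.norm(data[i] - data[j])  # original-space distance
            d = np.linalg.norm(x[i] - x[j])        # embedded distance
            delta = lam * 0.5 * (r - d) / (d + eps) * (x[i] - x[j])
            x[i] += delta                          # pull together or push apart
            x[j] -= delta
            lam *= decay                           # anneal the learning rate
        return x

    coords = spe(np.random.rand(500, 10))          # 500 10-D points into 2-D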


Subjects
Computational Biology/methods, Drug Discovery/methods, Software, Algorithms, Computational Biology/statistics & numerical data, Computer Graphics, Computers, Databases, Factual, Drug Discovery/statistics & numerical data
17.
J Chem Inf Model; 51(11): 2843-51, 2011 Nov 28.
Article in English | MEDLINE | ID: mdl-21955134

ABSTRACT

We present a novel class of topological molecular descriptors, which we call power keys. Power keys are computed by enumerating all possible linear, branched, and cyclic subgraphs up to a given size, encoding the connected atoms and bonds into two separate components, and recording the number of occurrences of each subgraph. We have applied these new descriptors to the screening stage of substructure searching on a relational database of about 1 million compounds, using a diverse set of reference queries. The new keys can eliminate the vast majority (>99.9% on average) of nonmatching molecules within a fraction of a second. More importantly, for many of the queries the screening efficiency is 100%. We also identified a common feature of the molecules for which power keys have perfect discriminative ability. This feature can be exploited to obviate the need for expensive atom-by-atom matching in situations where some ambiguity can be tolerated (fuzzy substructure searching). Other advantages over commonly used molecular keys are also discussed.
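To make the idea concrete, here is a minimal sketch restricted to linear subgraphs (paths): each path up to a given length is encoded as an atom/bond string and counted, and a database molecule can contain the query only if it has at least as many occurrences of every query key. Real power keys also cover branched and cyclic subgraphs and use a two-component encoding.

    from collections import Counter

    def path_keys(atoms, bonds, max_len):
        # atoms: element symbols; bonds: (i, j, order) triples of a toy graph
        adj = {}
        for i, j, order in bonds:
            adj.setdefault(i, []).append((j, order))
            adj.setdefault(j, []).append((i, order))
        keys = Counter()
        def walk(path, key):
            keys[key] += 1
            if len(path) < max_len:
                for nbr, order in adj.get(path[-1], []):
                    if nbr not in path:            # simple paths only
                        walk(path + [nbr], key + str(order) + atoms[nbr])
        for start in range(len(atoms)):
            walk([start], atoms[start])
        return keys

    def may_contain(mol_keys, query_keys):
        # Screening test: every query key must occur at least as often
        return all(mol_keys[k] >= c for k, c in query_keys.items())

    ethanol = path_keys(["C", "C", "O"], [(0, 1, 1), (1, 2, 1)], max_len=3)
    query = path_keys(["C", "O"], [(0, 1, 1)], max_len=3)
    print(may_contain(ethanol, query))             # True: C-O path present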


Subjects
Computational Biology/methods, Drug Discovery/methods, Software, Algorithms, Computational Biology/statistics & numerical data, Databases, Factual, Drug Discovery/statistics & numerical data, Fuzzy Logic, Models, Molecular, Structure-Activity Relationship
18.
J Chem Inf Model; 51(8): 1807-16, 2011 Aug 22.
Article in English | MEDLINE | ID: mdl-21696144

ABSTRACT

The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPUs) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of an ordinary ~$500 video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is two orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1,000 queries). In the present study, we use the Elias gamma compression algorithm, which achieves a compression ratio as high as 0.097.
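Elias gamma coding is simple enough to sketch: the gaps between consecutive set bits of a sparse lossless fingerprint are each encoded as floor(log2 N) zeros followed by N in binary. A minimal encoder/decoder pair (bit strings are used for clarity; a real implementation packs bits into words):

    def gamma_encode(n):
        b = bin(n)[2:]                   # binary form, leading '1' included
        return "0" * (len(b) - 1) + b    # unary length prefix + binary value

    def gamma_decode(bits):
        out, i = [], 0
        while i < len(bits):
            z = 0
            while bits[i] == "0":        # count the unary zero prefix
                z, i = z + 1, i + 1
            out.append(int(bits[i:i + z + 1], 2))
            i += z + 1
        return out

    on_bits = [3, 17, 18, 900]           # set-bit positions of a sparse fingerprint
    gaps = [b - a for a, b in zip([0] + on_bits, on_bits)]
    code = "".join(gamma_encode(g) for g in gaps)
    assert gamma_decode(code) == gaps    # lossless round trip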


Subjects
Chemistry, Pharmaceutical/methods, Data Mining/methods, Organic Chemicals/analysis, Algorithms, Chemistry, Pharmaceutical/statistics & numerical data, Computer Graphics, Data Compression, Databases, Factual, Models, Chemical, Molecular Structure, Software
19.
J Chem Inf Model; 51(5): 1122-31, 2011 May 23.
Article in English | MEDLINE | ID: mdl-21504183

ABSTRACT

We introduce Single R-Group Polymorphisms (SRPs, pronounced 'sharps'), an intuitive framework for analyzing substituent effects and activity cliffs in a single congeneric series. An SRP is a pair of compounds that differ only in a single R-group position. Because the same substituent pair may occur in multiple SRPs in the series (i.e., with different combinations of substituents at the other R-group positions), SRP analysis makes it easy to identify systematic substituent effects and activity cliffs at each point of variation (R-cliffs). SRPs can be visualized as a symmetric heatmap in which each cell represents a particular pair of substituents, color-coded by the average difference in activity between the compounds that contain that particular SRP. SRP maps offer several advantages over existing techniques for visualizing activity cliffs: 1) the chemical structures of all the substituents are displayed simultaneously on a single map, directly engaging the pattern recognition abilities of the medicinal chemist; 2) the analysis is based on R-group decomposition, a natural paradigm for generating and rationalizing SAR; 3) the heatmap representation makes it easy to identify systematic trends in the data; and 4) the approach generalizes the concept of activity cliffs beyond similarity by allowing the analyst to sort the substituents according to any property of interest or place them manually in any desired order.
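A minimal sketch of SRP extraction from an R-group decomposition table: find compound pairs that differ at exactly one R position and average the activity difference for each substituent pair, which is what each heatmap cell summarizes. The series and pIC50 values below are invented.

    from collections import defaultdict
    from itertools import combinations

    series = [  # (R1, R2, pIC50) for a common scaffold
        ("H", "Cl", 6.1), ("Me", "Cl", 7.0),
        ("H", "OMe", 5.8), ("Me", "OMe", 6.9),
    ]

    cells = defaultdict(list)
    decomposed = [((r1, r2), act) for r1, r2, act in series]
    for (ra, act_a), (rb, act_b) in combinations(decomposed, 2):
        mismatch = [k for k in range(2) if ra[k] != rb[k]]
        if len(mismatch) == 1:                    # an SRP: one position differs
            k = mismatch[0]
            pair = tuple(sorted((ra[k], rb[k])))
            cells[(k, pair)].append(abs(act_a - act_b))

    for (pos, pair), deltas in cells.items():     # one value per heatmap cell
        print(f"R{pos + 1} {pair[0]}/{pair[1]}: "
              f"mean |dpIC50| = {sum(deltas) / len(deltas):.2f}")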


Subjects
Cathepsins/antagonists & inhibitors, Drug Discovery, Protease Inhibitors/chemistry, Software, Cathepsins/chemistry, Computer Graphics, Ligands, Molecular Structure, Structure-Activity Relationship