Results 1 - 20 of 29
1.
Nat Rev Neurosci ; 22(1): 55-67, 2021 01.
Article in English | MEDLINE | ID: mdl-33199854

ABSTRACT

Neuroscience research is undergoing a minor revolution. Recent advances in machine learning and artificial intelligence research have opened up new ways of thinking about neural computation. Many researchers are excited by the possibility that deep neural networks may offer theories of perception, cognition and action for biological brains. This approach has the potential to radically reshape our approach to understanding neural systems, because the computations performed by deep networks are learned from experience, and not endowed by the researcher. If so, how can neuroscientists use deep networks to model and understand biological brains? What is the outlook for neuroscientists who seek to characterize computations or neural codes, or who wish to understand perception, attention, memory and executive functions? In this Perspective, our goal is to offer a road map for systems neuroscience research in the age of deep learning. We discuss the conceptual and methodological challenges of comparing behaviour, learning dynamics and neural representations in artificial and biological systems, and we highlight new research questions that have emerged for neuroscience as a direct consequence of recent advances in machine learning.


Subject(s)
Brain; Deep Learning; Neural Networks, Computer; Humans; Neurosciences
2.
PLoS Comput Biol ; 19(1): e1010808, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36656823

ABSTRACT

Humans can learn several tasks in succession with minimal mutual interference but perform more poorly when trained on multiple tasks at once. The opposite is true for standard deep neural networks. Here, we propose novel computational constraints for artificial neural networks, inspired by earlier work on gating in the primate prefrontal cortex, that capture the cost of interleaved training and allow the network to learn two tasks in sequence without forgetting. We augment standard stochastic gradient descent with two algorithmic motifs, so-called "sluggish" task units and a Hebbian training step that strengthens connections between task units and hidden units that encode task-relevant information. We found that the "sluggish" units introduce a switch-cost during training, which biases representations under interleaved training towards a joint representation that ignores the contextual cue, while the Hebbian step promotes the formation of a gating scheme from task units to the hidden layer that produces orthogonal representations which are perfectly guarded against interference. Validating the model on previously published human behavioural data revealed that it matches performance of participants who had been trained on blocked or interleaved curricula, and that these performance differences were driven by misestimation of the true category boundary.
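The two algorithmic motifs described in this abstract can be sketched in a few lines (a minimal illustration; the sizes, rates and exact update rules below are assumptions for exposition, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_task, n_hidden, lr_hebb, tau = 2, 8, 0.1, 0.5

W_task = rng.normal(scale=0.1, size=(n_hidden, n_task))  # task units -> hidden
context_prev = np.zeros(n_task)

def sluggish_context(context_now, context_prev, tau):
    # "Sluggish" task units: leaky integration of the contextual cue, so the
    # effective task signal lags behind rapid task switches.
    return tau * context_prev + (1 - tau) * context_now

def hebbian_step(W_task, task_input, hidden_act, lr):
    # Hebbian step: strengthen connections between active task units and
    # hidden units that are currently active.
    return W_task + lr * np.outer(hidden_act, task_input)

task_input = np.array([1.0, 0.0])               # cue for task A
context = sluggish_context(task_input, context_prev, tau)
hidden_act = np.maximum(W_task @ context, 0.0)  # ReLU hidden activity
W_task = hebbian_step(W_task, task_input, hidden_act, lr_hebb)
```

Under interleaved training the leaky context blurs the task cue across switches, while the Hebbian term builds the gating from task units to hidden units.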


Subject(s)
Learning; Neural Networks, Computer; Animals; Humans; Machine Learning; Prefrontal Cortex; Curriculum
3.
J Stat Mech ; 2023(11): 114004, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-38524253

ABSTRACT

Learning in deep neural networks is known to depend critically on the knowledge embedded in the initial network weights. However, few theoretical results have precisely linked prior knowledge to learning dynamics. Here we derive exact solutions to the dynamics of learning with rich prior knowledge in deep linear networks by generalising Fukumizu's matrix Riccati solution (Fukumizu 1998). We obtain explicit expressions for the evolving network function, hidden representational similarity, and neural tangent kernel over training for a broad class of initialisations and tasks. The expressions reveal a class of task-independent initialisations that radically alter learning dynamics from slow non-linear dynamics to fast exponential trajectories while converging to a global optimum with identical representational similarity, dissociating learning trajectories from the structure of initial internal representations. We characterise how network weights dynamically align with task structure, rigorously justifying why previous solutions successfully described learning from small initial weights without incorporating their fine-scale structure. Finally, we discuss the implications of these findings for continual learning, reversal learning and learning of structured knowledge. Taken together, our results provide a mathematical toolkit for understanding the impact of prior knowledge on deep learning.
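A scalar caricature of the initialisation effect described here (not the paper's general matrix solution; the target, rates and step counts are assumptions): gradient descent on a two-layer linear "network" f(x) = w2*w1*x fit to a target map s shows the slow sigmoidal escape from small weights versus near-exponential convergence from a larger balanced initialisation.

```python
def train(w1, w2, s=1.0, lr=0.05, steps=200):
    # Gradient descent on L = 0.5 * (w2*w1 - s)^2, the scalar analogue of
    # a two-layer linear network fit to a target map s.
    losses = []
    for _ in range(steps):
        err = w2 * w1 - s
        losses.append(0.5 * err**2)
        g1, g2 = err * w2, err * w1     # gradients through the composed map
        w1, w2 = w1 - lr * g1, w2 - lr * g2
    return losses

slow = train(0.01, 0.01)   # small init: long plateau, then sigmoidal escape
fast = train(0.7, 0.7)     # larger balanced init: near-exponential decay
```

Both runs reach the same global optimum (w2*w1 = s), but their trajectories differ qualitatively, mirroring the dissociation described in the abstract.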

4.
Proc Natl Acad Sci U S A ; 116(23): 11537-11546, 2019 06 04.
Article in English | MEDLINE | ID: mdl-31101713

ABSTRACT

An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: What are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? We address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to this learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep-learning dynamics to give rise to these regularities.
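In this deep-linear setting the stage-like transitions have a compact closed form; as a sketch (notation assumed here: s_alpha is a singular value of the input-output correlations, a_0 the small initial mode strength, tau the learning time constant), each mode strength follows a sigmoidal trajectory:

```latex
% Sigmoidal growth of the strength a_\alpha(t) of the network mode tied to
% singular value s_\alpha of the input-output correlations
% (a_0: small initial strength, \tau: learning time scale).
a_\alpha(t) \;=\; \frac{s_\alpha\, e^{2 s_\alpha t/\tau}}
                       {e^{2 s_\alpha t/\tau} - 1 + s_\alpha / a_0}
```

Each mode escapes its plateau after a delay of roughly (tau / 2 s_alpha) ln(s_alpha / a_0), so modes are learned in order of decreasing singular value, producing the rapid developmental transitions and hierarchical differentiation described above.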

5.
J Stat Mech ; 2022(11): 114014, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-37817944

ABSTRACT

In animals and humans, curriculum learning (presenting data in a curated order) is critical to rapid learning and effective pedagogy. A long history of experiments has demonstrated the impact of curricula in a variety of animals but, despite its ubiquitous presence, a theoretical understanding of the phenomenon is still lacking. Surprisingly, in contrast to animal learning, curriculum strategies are not widely used in machine learning, and recent simulation studies conclude that curricula are moderately effective or even ineffective in most cases. This stark difference in the importance of curriculum raises a fundamental theoretical question: when and why does curriculum learning help? In this work, we analyse a prototypical neural network model of curriculum learning in the high-dimensional limit, employing statistical physics methods. We study a task in which a sparse set of informative features is embedded amidst a large set of noisy features. We analytically derive average learning trajectories for simple neural networks on this task, which establish a clear speed benefit for curriculum learning in the online setting. However, when training experiences can be stored and replayed (for instance, during sleep), the advantage of curriculum in standard neural networks disappears, in line with observations from the deep learning literature. Inspired by synaptic consolidation techniques developed to combat catastrophic forgetting, we propose curriculum-aware algorithms that consolidate synapses at curriculum change points and investigate whether this can boost the benefits of curricula. We derive generalisation performance as a function of consolidation strength (implemented as an L2 regularisation/elastic coupling connecting learning phases), and show that curriculum-aware algorithms can yield a large improvement in test performance.
Our reduced analytical descriptions help reconcile apparently conflicting empirical results, trace regimes where curriculum learning yields the largest gains, and provide experimentally accessible predictions for the impact of task parameters on curriculum benefits. More broadly, our results suggest that fully exploiting a curriculum may require explicit adjustments in the loss.
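The consolidation mechanism proposed in this abstract (an elastic L2 coupling between learning phases) can be sketched on a toy two-phase linear task; the task, sizes and hyperparameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
d, lam, lr, steps = 20, 1.0, 0.1, 300

def train_phase(w, X, y, w_anchor=None, lam=0.0):
    # Full-batch gradient descent on squared error; if an anchor is given,
    # add the elastic L2 coupling lam * (w - w_anchor) at every step.
    for _ in range(steps):
        g = X.T @ (X @ w - y) / len(y)
        if w_anchor is not None:
            g = g + lam * (w - w_anchor)
        w = w - lr * g
    return w

w_true = np.zeros(d); w_true[:3] = 1.0       # sparse informative features
X1 = rng.normal(size=(100, d)); y1 = X1 @ w_true         # clean first phase
w1 = train_phase(np.zeros(d), X1, y1)

X2 = rng.normal(size=(100, d))
y2 = X2 @ w_true + rng.normal(size=100)                  # noisy second phase
w_plain = train_phase(w1.copy(), X2, y2)                 # no consolidation
w_cons = train_phase(w1.copy(), X2, y2, w_anchor=w1, lam=lam)  # consolidated
```

With the anchor in place, the second phase behaves like ridge regression toward the phase-1 weights, so the noisy later phase degrades the earlier solution less.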

6.
J Stat Mech ; 2020(12): 124010, 2020 Dec.
Article in English | MEDLINE | ID: mdl-34262607

ABSTRACT

Deep neural networks achieve stellar generalisation even when they have enough parameters to easily fit all their training data. We study this phenomenon by analysing the dynamics and the performance of over-parameterised two-layer neural networks in the teacher-student setup, where one network, the student, is trained on data generated by another network, called the teacher. We show how the dynamics of stochastic gradient descent (SGD) is captured by a set of differential equations and prove that this description is asymptotically exact in the limit of large inputs. Using this framework, we calculate the final generalisation error of student networks that have more parameters than their teachers. We find that the final generalisation error of the student increases with network size when training only the first layer, but stays constant or even decreases with size when training both layers. We show that these different behaviours have their root in the different solutions SGD finds for different activation functions. Our results indicate that achieving good generalisation in neural networks goes beyond the properties of SGD alone and depends on the interplay of at least the algorithm, the model architecture, and the data set.
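The teacher-student setup can be sketched as follows (network sizes, activation and learning rate are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k_teacher, k_student, lr = 10, 2, 6, 0.05   # over-parameterised student

V_t = rng.normal(size=(k_teacher, d)) / np.sqrt(d)     # fixed teacher weights
V_s = rng.normal(size=(k_student, d)) / np.sqrt(d)     # student first layer
a_s = rng.normal(size=k_student) / np.sqrt(k_student)  # student readout

def teacher(x):
    # Labels are generated by the (hidden) teacher network.
    return np.tanh(V_t @ x).sum()

losses = []
for _ in range(5000):
    x = rng.normal(size=d)           # online SGD: a fresh example each step
    y = teacher(x)
    h = np.tanh(V_s @ x)
    err = a_s @ h - y
    losses.append(0.5 * err**2)
    grad_a = err * h                              # backprop through readout
    grad_V = err * np.outer(a_s * (1 - h**2), x)  # ...and through layer one
    a_s -= lr * grad_a
    V_s -= lr * grad_V
```

In this setup the student has more hidden units than the teacher, the regime whose final generalisation error the abstract analyses.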

7.
Diagnostics (Basel) ; 14(9), 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38732275

ABSTRACT

Injury to the recurrent laryngeal nerve (RLN) can be a devastating complication of thyroid and parathyroid surgery. Intraoperative neuromonitoring (IONM) has been proposed as a method to reduce the number of RLN injuries but the data are inconsistent. We performed a meta-analysis to critically assess the data. After applying inclusion and exclusion criteria, 60 studies, including five randomized trials and eight non-randomized prospective trials, were included. A meta-analysis of all studies demonstrated an odds ratio (OR) of 0.66 (95% CI [0.56, 0.79], p < 0.00001) favoring IONM compared to the visual identification of the RLN in limiting permanent RLN injuries. A meta-analysis of studies employing contemporaneous controls and routine postoperative laryngoscopy to diagnose RLN injuries (considered to be the most reliable design) demonstrated an OR of 0.69 (95% CI [0.56, 0.84], p = 0.0003), favoring IONM. Strong consideration should be given to employing IONM when performing thyroid and parathyroid surgery.

8.
ArXiv ; 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39070033

ABSTRACT

Biological and artificial learning agents face numerous choices about how to learn, ranging from hyperparameter selection to aspects of task distributions like curricula. Understanding how to make these meta-learning choices could offer normative accounts of cognitive control functions in biological learners and improve engineered systems. Yet optimal strategies remain challenging to compute in modern deep networks due to the complexity of optimizing through the entire learning process. Here we theoretically investigate optimal strategies in a tractable setting. We present a learning effort framework capable of efficiently optimizing control signals on a fully normative objective: discounted cumulative performance throughout learning. We obtain computational tractability by using average dynamical equations for gradient descent, available for simple neural network architectures. Our framework accommodates a range of meta-learning and automatic curriculum learning methods in a unified normative setting. We apply this framework to investigate the effect of approximations in common meta-learning algorithms; infer aspects of optimal curricula; and compute optimal neuronal resource allocation in a continual learning setting. Across settings, we find that control effort is most beneficial when applied to easier aspects of a task early in learning, followed by sustained effort on harder aspects. Overall, the learning effort framework provides a tractable theoretical test bed to study normative benefits of interventions in a variety of learning systems, as well as a formal account of optimal cognitive control strategies over learning trajectories posited by established theories in cognitive neuroscience.
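A toy instance of the normative objective (discounted cumulative performance minus an effort cost) illustrates why effort early in learning pays; the error model, cost and discount below are assumptions, not the paper's equations:

```python
import numpy as np

T, gamma, cost = 60, 0.95, 0.02   # horizon, discount factor, effort cost

def value(c):
    # Discounted cumulative performance minus effort cost, under an assumed
    # error model in which effort c_t speeds the decay of the error e_t.
    e, total = 1.0, 0.0
    for t in range(T):
        total += gamma**t * ((1.0 - e) - cost * c[t])
        e *= 1.0 - 0.1 - 0.2 * c[t]
    return total

front = np.zeros(T); front[:10] = 1.0   # effort applied early in learning
back = np.zeros(T); back[-10:] = 1.0    # same total effort, applied late
```

Early effort reduces the error sooner, so the performance gains accrue over more (and less heavily discounted) time steps than the same effort applied late.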

9.
Trends Neurosci ; 46(3): 199-210, 2023 03.
Article in English | MEDLINE | ID: mdl-36682991

ABSTRACT

How do humans and other animals learn new tasks? A wave of brain recording studies has investigated how neural representations change during task learning, with a focus on how tasks can be acquired and coded in ways that minimise mutual interference. We review recent work that has explored the geometry and dimensionality of neural task representations in neocortex, and computational models that have exploited these findings to understand how the brain may partition knowledge between tasks. We discuss how ideas from machine learning, including those that combine supervised and unsupervised learning, are helping neuroscientists understand how natural tasks are learned and coded in biological brains.


Subject(s)
Machine Learning; Neural Networks, Computer; Animals; Humans; Brain
10.
Neuron ; 111(12): 1966-1978.e8, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37119818

ABSTRACT

Mammals form mental maps of their environments by exploring their surroundings. Here, we investigate which elements of exploration are important for this process. We studied mouse escape behavior, in which mice are known to memorize subgoal locations (obstacle edges) to execute efficient escape routes to shelter. To test the role of exploratory actions, we developed closed-loop neural-stimulation protocols for interrupting various actions while mice explored. We found that blocking running movements directed at obstacle edges prevented subgoal learning; however, blocking several control movements had no effect. Reinforcement learning simulations and analysis of spatial data show that artificial agents can match these results if they have a region-level spatial representation and explore with object-directed movements. We conclude that mice employ an action-driven process for integrating subgoals into a hierarchical cognitive map. These findings broaden our understanding of the cognitive toolkit that mammals use to acquire spatial knowledge.


Subject(s)
Learning; Reinforcement, Psychology; Mice; Animals; Mammals
11.
Nat Neurosci ; 26(8): 1438-1448, 2023 08.
Article in English | MEDLINE | ID: mdl-37474639

ABSTRACT

Memorization and generalization are complementary cognitive processes that jointly promote adaptive behavior. For example, animals should memorize safe routes to specific water sources and generalize from these memories to discover environmental features that predict new ones. These functions depend on systems consolidation mechanisms that construct neocortical memory traces from hippocampal precursors, but why systems consolidation only applies to a subset of hippocampal memories is unclear. Here we introduce a new neural network formalization of systems consolidation that reveals an overlooked tension: unregulated neocortical memory transfer can cause overfitting and harm generalization in an unpredictable world. We resolve this tension by postulating that memories only consolidate when it aids generalization. This framework accounts for partial hippocampal-cortical memory transfer and provides a normative principle for reconceptualizing numerous observations in the field. Generalization-optimized systems consolidation thus provides new insight into how adaptive behavior benefits from complementary learning systems specialized for memorization and generalization.


Subject(s)
Learning; Memory Consolidation; Animals; Generalization, Psychological; Hippocampus
12.
Neuron ; 111(9): 1504-1516.e9, 2023 05 03.
Article in English | MEDLINE | ID: mdl-36898375

ABSTRACT

Human understanding of the world can change rapidly when new information comes to light, such as when a plot twist occurs in a work of fiction. This flexible "knowledge assembly" requires few-shot reorganization of neural codes for relations among objects and events. However, existing computational theories are largely silent about how this could occur. Here, participants learned a transitive ordering among novel objects within two distinct contexts before exposure to new knowledge that revealed how they were linked. Blood-oxygen-level-dependent (BOLD) signals in dorsal frontoparietal cortical areas revealed that objects were rapidly and dramatically rearranged on the neural manifold after minimal exposure to linking information. We then adapt online stochastic gradient descent to permit similar rapid knowledge assembly in a neural network model.


Subject(s)
Learning; Neural Networks, Computer; Humans; Frontal Lobe
13.
Elife ; 12, 2023 02 14.
Article in English | MEDLINE | ID: mdl-36786427

ABSTRACT

Making optimal decisions in the face of noise requires balancing short-term speed and accuracy. But a theory of optimality should account for the fact that short-term speed can influence long-term accuracy through learning. Here, we demonstrate that long-term learning is an important dynamical dimension of the speed-accuracy trade-off. We study learning trajectories in rats and formally characterize these dynamics in a theory expressed as both a recurrent neural network and an analytical extension of the drift-diffusion model that learns over time. The model reveals that choosing suboptimal response times to learn faster sacrifices immediate reward, but can lead to greater total reward. We empirically verify predictions of the theory, including a relationship between stimulus exposure and learning speed, and a modulation of reaction time by future learning prospects. We find that rats' strategies approximately maximize total reward over the full learning epoch, suggesting cognitive control over the learning process.
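The trade-off can be caricatured with the standard closed-form drift-diffusion expressions plus an assumed learning rule in which drift (perceptual sensitivity) grows with stimulus exposure; the parameters and the exposure-based rule are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def accuracy(mu, a):
    # DDM probability of a correct choice (unit noise, thresholds at +/- a)
    return 1.0 / (1.0 + np.exp(-2.0 * mu * a))

def decision_time(mu, a):
    # DDM mean decision time
    return (a / mu) * np.tanh(mu * a)

def total_reward(a, mu0=0.2, eta=0.01, trials=2000):
    # Assumed learning rule: drift grows in proportion to the time spent
    # on each decision, i.e. to stimulus exposure.
    mu, total = mu0, 0.0
    for _ in range(trials):
        total += accuracy(mu, a)
        mu += eta * decision_time(mu, a)
    return total

greedy = total_reward(a=0.5)    # fast responses, little exposure
patient = total_reward(a=2.0)   # slower responses, faster learning
```

Choosing response times that are suboptimal for immediate reward buys more exposure, faster learning, and greater total reward over the epoch, which is the regularity the abstract reports in rats.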


Subject(s)
Decision Making; Learning; Animals; Rats; Decision Making/physiology; Reaction Time/physiology; Reward; Neural Networks, Computer
14.
Neuron ; 110(7): 1258-1270.e11, 2022 04 06.
Article in English | MEDLINE | ID: mdl-35085492

ABSTRACT

How do neural populations code for multiple, potentially conflicting tasks? Here we used computational simulations involving neural networks to define "lazy" and "rich" coding solutions to this context-dependent decision-making problem, which trade off learning speed for robustness. During lazy learning the input dimensionality is expanded by random projections to the network hidden layer, whereas in rich learning hidden units acquire structured representations that privilege relevant over irrelevant features. For context-dependent decision-making, one rich solution is to project task representations onto low-dimensional and orthogonal manifolds. Using behavioral testing and neuroimaging in humans and analysis of neural signals from macaque prefrontal cortex, we report evidence for neural coding patterns in biological brains whose dimensionality and neural geometry are consistent with the rich learning regime.
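The lazy/rich contrast can be sketched in a toy simulation (task, sizes and rates are illustrative assumptions): "lazy" uses a fixed random projection to the hidden layer with only the readout trained, while "rich" trains both layers from small initial weights so that hidden units come to privilege task-relevant input features.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n, lr, steps = 10, 40, 200, 0.05, 2000

X = rng.normal(size=(n, d))
y = X[:, 0] + X[:, 1]             # only the first two input features matter

def relevance_ratio(V):
    # hidden-weight mass on relevant vs irrelevant input features
    return np.linalg.norm(V[:, :2]) / np.linalg.norm(V[:, 2:])

def train(train_hidden, scale):
    V = rng.normal(scale=scale, size=(k, d))   # input -> hidden
    a = rng.normal(scale=scale, size=k)        # hidden -> output
    for _ in range(steps):
        H = np.tanh(X @ V.T)
        err = H @ a - y
        grad_a = H.T @ err / n
        grad_V = ((np.outer(err, a) * (1 - H**2)).T @ X) / n
        a -= lr * grad_a
        if train_hidden:
            V -= lr * grad_V                   # feature learning (rich only)
    return relevance_ratio(V)

lazy_ratio = train(train_hidden=False, scale=1.0 / np.sqrt(d))  # random projection
rich_ratio = train(train_hidden=True, scale=0.05)               # rich regime
```

After rich training the hidden weights concentrate on the relevant features, whereas the lazy random projection keeps its unstructured mix, mirroring the structured, low-dimensional representations described above.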


Subject(s)
Neural Networks, Computer; Task Performance and Analysis; Brain; Learning; Prefrontal Cortex
15.
Neural Netw ; 132: 428-446, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33022471

ABSTRACT

We perform an analysis of the average generalization dynamics of large neural networks trained using gradient descent. We study the practically-relevant "high-dimensional" regime where the number of free parameters in the network is on the order of or even larger than the number of examples in the dataset. Using random matrix theory and exact solutions in linear models, we derive the generalization error and training error dynamics of learning and analyze how they depend on the dimensionality of data and signal to noise ratio of the learning problem. We find that the dynamics of gradient descent learning naturally protect against overtraining and overfitting in large networks. Overtraining is worst at intermediate network sizes, when the effective number of free parameters equals the number of samples, and thus can be reduced by making a network smaller or larger. Additionally, in the high-dimensional regime, low generalization error requires starting with small initial weights. We then turn to non-linear neural networks, and show that making networks very large does not harm their generalization performance. On the contrary, it can in fact reduce overtraining, even without early stopping or regularization of any sort. We identify two novel phenomena underlying this behavior in overcomplete models: first, there is a frozen subspace of the weights in which no learning occurs under gradient descent; and second, the statistical properties of the high-dimensional regime yield better-conditioned input correlations which protect against overtraining. We demonstrate that standard application of theories such as Rademacher complexity are inaccurate in predicting the generalization performance of deep neural networks, and derive an alternative bound which incorporates the frozen subspace and conditioning effects and qualitatively matches the behavior observed in simulation.
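The frozen-subspace observation is easy to demonstrate in the linear case (a minimal sketch, not the paper's general analysis): gradient-descent updates always lie in the row space of the data, so weight components in the null space of X keep their initial values while the training data are fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 12                      # fewer examples than parameters
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

w = rng.normal(size=d)            # non-zero initial weights
w0 = w.copy()
for _ in range(5000):
    # Full-batch gradient descent; each update is X.T @ (...), so it lies
    # entirely in the row space of X.
    w -= 0.02 * X.T @ (X @ w - y) / n

# Directions never probed by any training example: the null space of X.
_, _, Vt = np.linalg.svd(X)
null_basis = Vt[n:]               # last d - n right singular vectors
```

The model interpolates the training set, yet the d - n frozen directions retain whatever the initialisation put there, which is one ingredient of the generalisation bound discussed above.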


Subject(s)
Deep Learning
16.
Nat Neurosci ; 22(11): 1761-1770, 2019 11.
Article in English | MEDLINE | ID: mdl-31659335

ABSTRACT

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.


Subject(s)
Artificial Intelligence; Deep Learning; Neural Networks, Computer; Animals; Brain/physiology; Humans
17.
Obes Surg ; 18(2): 192-5; discussion 196, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18176831

ABSTRACT

BACKGROUND: Successful obesity surgery often results months later in redundant abdominal skin and subcutaneous tissue. Following open obesity surgery, ventral hernias are also common, yet little has been written about the safety of combining panniculectomy with ventral hernia repair. We performed a retrospective analysis of a single plastic surgeon's experience with panniculectomy following gastric bypass surgery including both patients undergoing and those not undergoing simultaneous ventral hernia repair. METHODS: We reviewed the hospital and office records of patients undergoing panniculectomy at two university-affiliated community hospitals from March 2002 to February 2005 following gastric bypass surgery. RESULTS: The records of 100 patients (91 women) were available for review. Median age was 48 (range 25-65) and median interval between bypass surgery and panniculectomy was 23 months (range 6-286). Median decrease in BMI was 19 (range 13-47). Eighty-three patients underwent panniculectomy combined with at least one other procedure, most commonly ventral hernia repair (70) and buttock lift (9). Forty hernia repairs were performed with mesh. No patient required mesh removal in the postoperative period. Median length of hospital stay was 3 days (range 1-7). Twenty-nine patients required outpatient sharp debridement. Ten patients were readmitted for management of wound complications. No patients sustained a stroke, myocardial infarction, or pulmonary embolus. There was no mortality. CONCLUSIONS: Following obesity surgery, simultaneous ventral hernia repair and panniculectomy can be accomplished safely with short hospital stays and few in-hospital complications. Postoperative wound problems are not infrequent but can be managed in the outpatient setting.


Subject(s)
Abdominal Fat/surgery; Gastric Bypass; Hernia, Ventral/surgery; Obesity, Morbid/surgery; Adult; Aged; Dermatologic Surgical Procedures; Female; Hernia, Ventral/etiology; Humans; Male; Middle Aged; Obesity, Morbid/complications; Retrospective Studies; Treatment Outcome; Weight Loss
18.
Am Surg ; 74(11): 1073-7, 2008 Nov.
Article in English | MEDLINE | ID: mdl-19062664

ABSTRACT

Adequate lymph node harvest among patients undergoing colectomy for cancer is critical for staging and therapy. Obesity is prevalent in the American population. We investigated whether lymph node harvest was compromised in obese patients undergoing colectomy for cancer. Medical records of patients who had undergone colectomy for colon cancer were reviewed. We correlated the number of lymph nodes with body mass index (BMI) and compared the number of lymph nodes among patients with BMI less than 30 kg/m2 to those with BMI of 30 kg/m2 or greater ("obese"). Among all 191 patients, the correlation coefficient was 0.04 (P > 0.2). The mean number of nodes harvested from 122 nonobese patients was 12.4 +/- 6 and that for 69 obese patients 12.8 +/- 6 (P > 0.2). Among 130 patients undergoing right colectomy and 35 patients undergoing sigmoid colectomy, the correlation coefficients were 0.02 (P > 0.2) and 0.16 (P > 0.2), respectively. There was not a statistically significant difference in lymph node harvest between obese and nonobese patients (14.1 +/- 7 vs. 13.8 +/- 6, P > 0.2; and 11.8 +/- 6 vs. 8.6 +/- 5, P > 0.2), respectively. Obesity did not compromise the number of lymph nodes harvested from patients undergoing colectomy for colon cancer.


Subject(s)
Adenocarcinoma/pathology; Colectomy; Colonic Neoplasms/pathology; Lymph Node Excision; Obesity/complications; Adenocarcinoma/complications; Adenocarcinoma/surgery; Adult; Aged; Body Mass Index; Body Size; Cohort Studies; Colonic Neoplasms/complications; Colonic Neoplasms/surgery; Female; Humans; Male; Middle Aged; Neoplasm Staging; Obesity/pathology; Obesity/surgery; Reproducibility of Results; Retrospective Studies
19.
J Trauma ; 64(3): 745-8, 2008 Mar.
Article in English | MEDLINE | ID: mdl-18332818

ABSTRACT

BACKGROUND: Cervical spine fractures in the elderly carry a mortality as high as 26%. We reviewed our experience to define the level of injury, prevalence of neurologic deficits, treatments employed, and the correlation between patients' pre- and posthospital residences. Also, we correlated the prevalence of advanced directives with length of stay. METHODS: We queried the data collected prospectively at an American College of Surgeons verified Level I hospital (National TRACS, American College of Surgeons) regarding patients aged 65 years or older presenting with cervical spine fractures (International Classification of Diseases-9 code 805.X) in calendar years 2000 through 2003. RESULTS: We identified 58 patients (ages 65-94). Mortality was 24%. Twelve patients had quadriplegia or paraplegia and seven of these patients died. Respiratory failure was the primary cause of death. Application of rigid collars and a halo brace were the most commonly employed therapies. Mortality rates for halo stabilization and rigid collar and halo stabilization were similar (23% vs. 29%). Despite having a higher mean Injury Severity Score, the 16 patients with advanced directives had an intensive care unit length of stay similar to that of patients without advanced directives but a statistically significant shorter overall length of stay (13 vs. 6.9 days). Eighteen of 45 patients living at home at the time of injury returned home. CONCLUSIONS: Cervical spine injury in the elderly does not inevitably relegate patients to a setting of more acute nursing care. The health and social factors that allowed many to return to living at home warrant investigation, as support of these factors may assist others with this injury.


Subject(s)
Cervical Vertebrae/injuries; Spinal Fractures/epidemiology; Advance Directives; Aged; Aged, 80 and over; Chi-Square Distribution; Female; Humans; Length of Stay/statistics & numerical data; Male; Michigan/epidemiology; Prevalence; Prospective Studies; Retrospective Studies; Spinal Fractures/therapy; Trauma Centers; Treatment Outcome
20.
J Enzyme Inhib Med Chem ; 23(4): 549-55, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18608778

ABSTRACT

Butyric acid and trichostatin A (TSA) are anti-cancer compounds that cause the upregulation of genes involved in differentiation and cell cycle regulation by inhibiting histone deacetylase (HDAC) activity. In this study we have synthesized and evaluated compounds that combine the bioavailability of short-chain fatty acids, like butyric acid, with the bidentate binding ability of TSA. A series of analogs were made to examine the effects of chain length, simple aromatic cap groups, and substituted hydroxamates on the compounds' ability to inhibit rat-liver HDAC using a fluorometric assay. In keeping with previous structure-activity relationships, the most effective inhibitors consisted of longer chains and hydroxamic acid groups. It was found that 5-phenylvaleric hydroxamic acid and 4-benzoylbutyric hydroxamic acid were the most potent inhibitors, with IC50 values of 5 microM and 133 microM, respectively.


Subject(s)
Enzyme Inhibitors/chemistry; Fatty Acids/chemistry; Histone Deacetylase Inhibitors; Hydroxamic Acids/chemistry; Animals; Enzyme Inhibitors/chemical synthesis; Enzyme Inhibitors/pharmacology; Fatty Acids/chemical synthesis; Fatty Acids/pharmacology; Histone Deacetylases/metabolism; Hydroxamic Acids/chemical synthesis; Hydroxamic Acids/pharmacology; Inhibitory Concentration 50; Rats