1.
Life Sci ; 330: 121996, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37536613

ABSTRACT

AIM: Sepsis is a common cause of acute kidney injury (AKI). Lipopolysaccharide (LPS) is the main gram-negative bacterial cell wall component, with a well-documented inflammatory impact. Diclofenac (DIC) is a non-steroidal anti-inflammatory drug with a potential nephrotoxic effect. Curcumin (CUR) and silymarin (SY) are natural products with a wide range of pharmacological activities, including antioxidant and anti-inflammatory effects. The objective of this study was to examine the protective impact of CUR and SY against kidney damage induced by LPS/DIC co-exposure. MATERIALS AND METHODS: Four groups of rats were used: control, LPS/DIC, LPS/DIC + CUR, and LPS/DIC + SY. The LPS/DIC combination induced renal injury at an LPS dose much lower than the nephrotoxic dose. KEY FINDINGS: Nephrotoxicity was confirmed by histopathological examination and a significant elevation of renal function markers. LPS/DIC induced oxidative stress in renal tissues, evidenced by decreased reduced glutathione and superoxide dismutase and increased lipid peroxidation. The inflammatory response to LPS/DIC was associated with a significant increase in renal IL-1β and TNF-α. Treatment with either CUR or SY reversed these changes. Moreover, LPS/DIC exposure was associated with upregulation of mTOR and the endoplasmic reticulum stress protein CHOP and downregulation of podocin. These effects were accompanied by increased gene expression of cystatin C and KIM-1. CUR and SY significantly ameliorated the effects of LPS/DIC on the aforementioned genes and proteins. SIGNIFICANCE: This study confirms the potential nephrotoxicity of LPS/DIC co-exposure; the mechanisms include upregulation of mTOR, CHOP, cystatin C, and KIM-1 and downregulation of podocin. Moreover, both CUR and SY are promising nephroprotective products against LPS/DIC co-exposure.


Subjects
Acute Kidney Injury, Curcumin, Silymarin, Animals, Rats, Acute Kidney Injury/chemically induced, Acute Kidney Injury/prevention & control, Acute Kidney Injury/drug therapy, Anti-Inflammatory Agents/pharmacology, Curcumin/pharmacology, Cystatin C, Diclofenac/adverse effects, Lipopolysaccharides/adverse effects, Oxidative Stress, Silymarin/pharmacology, TOR Serine-Threonine Kinases
2.
Front Neurosci ; 14: 424, 2020.
Article in English | MEDLINE | ID: mdl-32477050

ABSTRACT

A growing body of work underlines striking similarities between biological neural networks and recurrent, binary neural networks. A relatively smaller body of work, however, addresses the similarities between the learning dynamics employed in deep artificial neural networks and synaptic plasticity in spiking neural networks. This gap is largely due to the discrepancy between the dynamical properties of synaptic plasticity and the requirements for gradient backpropagation. Learning algorithms that approximate gradient backpropagation using local error functions can overcome this challenge. Here, we introduce Deep Continuous Local Learning (DECOLLE), a spiking neural network equipped with local error functions for online learning with no memory overhead for computing gradients. DECOLLE is capable of learning deep spatiotemporal representations from spikes relying solely on local information, making it compatible with neurobiology and neuromorphic hardware. Synaptic plasticity rules are derived systematically from user-defined cost functions and neural dynamics by leveraging existing autodifferentiation methods of machine learning frameworks. We benchmark our approach on the event-based neuromorphic datasets N-MNIST and DvsGesture, on which DECOLLE performs comparably to the state of the art. DECOLLE networks provide continuously learning machines that are relevant to biology and supportive of event-based, low-power computer vision architectures, matching the accuracies of conventional computers on tasks where temporal precision and speed are essential.
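
The core idea — each spiking layer drives a fixed random readout whose error trains only that layer's weights, so no gradient information has to be stored or propagated across layers — can be illustrated in a few lines. The NumPy fragment below is a minimal sketch under assumed dynamics (a simple leaky integrate-and-fire layer, a squared-error local readout, and a boxcar surrogate gradient); it is not the authors' implementation, and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out, T = 50, 40, 10, 100
W = rng.normal(0, 0.3, (n_hid, n_in))      # trainable layer weights
B = rng.normal(0, 0.3, (n_out, n_hid))     # fixed random local readout (never trained)
v = np.zeros(n_hid)                        # membrane potentials
alpha, thresh, lr = 0.9, 1.0, 1e-3

x_spikes = (rng.random((T, n_in)) < 0.05).astype(float)   # toy input spike train
target = np.zeros(n_out); target[3] = 1.0                 # toy local target

for t in range(T):
    # Leaky integrate-and-fire dynamics (assumed, simplified).
    v = alpha * v + W @ x_spikes[t]
    s = (v >= thresh).astype(float)                        # output spikes
    # Surrogate derivative of the spike nonlinearity (boxcar around threshold).
    surrogate = (np.abs(v - thresh) < 0.5).astype(float)
    v = v * (1.0 - s)                                      # reset after spiking

    # Local readout and local error: no error travels across layers,
    # so nothing needs to be stored for a later backward pass.
    err = B @ s - target
    W -= lr * np.outer((B.T @ err) * surrogate, x_spikes[t])
```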

3.
Urol Case Rep ; 33: 101261, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32489894

ABSTRACT

Carcinosarcoma of the kidney and renal pelvis (CSKP) is a rare and highly aggressive malignancy characterized by rapid progression and widespread metastases. To date, few studies describe the natural history of the disease. We present a patient placed on pembrolizumab therapy for suspected metastatic colon cancer. On surveillance imaging, the patient was found to have a right renal mass with caval extension and ultimately underwent radical surgery, which revealed carcinosarcoma with positive PD-L1 expression; there has been no evidence of recurrence to date. To our knowledge, this is the first case describing PD-L1 expression in CSKP, and it suggests a novel pathway for future treatment algorithms.

4.
Int J Angiol ; 28(3): 173-181, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31452585

ABSTRACT

This study aimed to report data on the feasibility, safety, and effectiveness of endovascular procedures in patients diagnosed with thromboangiitis obliterans presenting with critical limb ischemia (CLI). This prospective study was conducted on patients affected by Buerger's disease who presented to our center over a 2-year period. Clinical, radiological, and patient-based outcomes were recorded at 3, 6, and 12 months after the intervention. A total of 39 patients were included in the study. Fifteen patients (38.5%) underwent percutaneous transluminal angioplasty, another 15 patients (38.5%) were followed up on medical treatment, four patients (10.3%) underwent surgical bypass, and five patients (12.8%) underwent lumbar sympathectomy. The 12-month outcomes showed 66.7% technical success in the endovascular group, with a 46.7% patency rate (p = 0.06), an 86.7% limb salvage rate (LSR; p < 0.04), and 66.7% clinical improvement (p = 0.005). Endovascular management of Buerger's disease is feasible, safe, and effective, with high rates of limb salvage and clinical improvement.

5.
IEEE Trans Neural Netw Learn Syst ; 30(3): 644-656, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30047912

ABSTRACT

Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphics processing units (GPUs) are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1×1 to 7×7. NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq field-programmable gate array (FPGA) platform and present results showing how our implementation reduces external memory transfers and compute time in five different CNNs, ranging from small ones up to the widely known large VGG16 and VGG19. Post-synthesis simulations using Mentor Modelsim in a 28-nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the multiply-accumulate units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm2. As further proof of NullHop's usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations.
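
The sparsity argument is easy to see in miniature: after ReLU, a large fraction of activations is exactly zero, so storing only the non-zero entries lets an accelerator skip the corresponding multiply-accumulate (MAC) operations entirely. The sketch below (plain NumPy, illustrative values only) shows the arithmetic idea; it says nothing about NullHop's actual RTL, buffering, or compression format.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy post-ReLU feature map: most activations are exactly zero.
acts = np.maximum(rng.normal(-1.0, 1.0, 1024), 0.0)
weights = rng.normal(0.0, 0.1, 1024)

# Dense accumulation: one MAC per activation, zero or not.
dense = float(acts @ weights)
dense_macs = acts.size

# Zero-skipping accumulation: keep only (index, value) of non-zero activations
# and issue MACs for those alone -- the idea NullHop exploits in hardware.
nz_idx = np.flatnonzero(acts)
sparse = float(acts[nz_idx] @ weights[nz_idx])
sparse_macs = nz_idx.size

assert np.isclose(dense, sparse)
print(f"MACs: dense={dense_macs}, zero-skipping={sparse_macs} "
      f"({100 * sparse_macs / dense_macs:.1f}% of dense)")
```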

6.
Front Neurosci ; 12: 608, 2018.
Article in English | MEDLINE | ID: mdl-30233295

ABSTRACT

Error backpropagation is a highly effective mechanism for learning high-quality hierarchical features in deep networks. Updating the features or weights in one layer, however, requires waiting for the propagation of error signals from higher layers. Learning using delayed and non-local errors makes it hard to reconcile backpropagation with the learning mechanisms observed in biological neural networks, as it requires the neurons to maintain a memory of the input until the higher-layer errors arrive. In this paper, we propose an alternative learning mechanism where errors are generated locally in each layer using fixed, random auxiliary classifiers. Lower layers can thus be trained independently of higher layers, and training can either proceed layer by layer or simultaneously in all layers using local error information. We address biological plausibility concerns such as weight symmetry requirements and show that the proposed learning mechanism, based on fixed, broad, and random tuning of each neuron to the classification categories, outperforms the biologically motivated feedback alignment learning technique on the CIFAR10 dataset, approaching the performance of standard backpropagation. Our approach highlights a potential biological mechanism for the supervised, or task-dependent, learning of feature hierarchies. In addition, we show that it is well suited for learning deep networks in custom hardware, where it can drastically reduce memory traffic and data communication overheads. Code used to run all learning experiments is available under https://gitlab.com/hesham-mostafa/learning-using-local-erros.git.
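
A minimal sketch of the mechanism described here — each hidden layer is trained against the error of its own fixed, random classifier, so no error signal ever crosses layer boundaries — is given below. It uses a toy two-layer ReLU network in NumPy with made-up sizes and a plain softmax/cross-entropy-style local error; the paper's architectures and training details differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

n_in, n_h1, n_h2, n_cls, lr = 20, 64, 64, 5, 0.05
W1 = rng.normal(0, 0.1, (n_in, n_h1))
W2 = rng.normal(0, 0.1, (n_h1, n_h2))
C1 = rng.normal(0, 0.1, (n_h1, n_cls))   # fixed random auxiliary classifier, never trained
C2 = rng.normal(0, 0.1, (n_h2, n_cls))   # fixed random auxiliary classifier, never trained

x = rng.normal(0, 1, (32, n_in))                      # toy batch
y = np.eye(n_cls)[rng.integers(0, n_cls, 32)]         # toy one-hot labels

for _ in range(200):
    h1 = np.maximum(x @ W1, 0)
    h2 = np.maximum(h1 @ W2, 0)

    # Each layer receives its own error from its fixed random classifier;
    # no error information crosses layer boundaries.
    e1 = (softmax(h1 @ C1) - y) @ C1.T * (h1 > 0)
    e2 = (softmax(h2 @ C2) - y) @ C2.T * (h2 > 0)

    W1 -= lr * x.T @ e1 / len(x)
    W2 -= lr * h1.T @ e2 / len(x)    # h1 is treated as a fixed input to layer 2
```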

7.
Cent Eur J Immunol ; 43(2): 222-230, 2018.
Article in English | MEDLINE | ID: mdl-30135637

ABSTRACT

Proteolytic and antiproteolytic enzymes play a critical role in the physiology and pathology of different stages of human life. One important member of the proteolytic family is the plasminogen activation system (PAS), which includes several elements crucial for this review: the 50 kDa glycoprotein plasminogen activator inhibitor 1 (PAI-1), which inhibits tissue-type (tPA) and urokinase-type (uPA) plasminogen activators. These two activators convert plasminogen into its active form, plasmin, which can lyse a broad spectrum of proteins. The urokinase receptor (uPAR) is the binding site of uPA; this cell-surface glycoprotein facilitates urokinase-mediated activation of plasminogen, creating high proteolytic activity close to the cell surface. PAS activities have been reported to predict the outcome of kidney transplants. However, reports on the expression of PAS components in kidney transplants appear contradictory: on the one hand, impaired proteolytic activity has been reported to induce chronic allograft nephropathy, while on the other hand, treatment with uPA and tPA can restore the function of acute renal transplants. In this comprehensive review we describe the complexity of the PAS as well as its biological effects on renal allografts, and we offer a possible explanation for the reported controversy.

8.
Neural Comput ; 30(6): 1542-1572, 2018 06.
Article in English | MEDLINE | ID: mdl-29652581

ABSTRACT

Many recent generative models make use of neural networks to transform the probability distribution of a simple low-dimensional noise process into the complex distribution of the data. This raises the question of whether biological networks operate along similar principles to implement a probabilistic model of the environment through transformations of intrinsic noise processes. The intrinsic neural and synaptic noise processes in biological networks, however, are quite different from the noise processes used in current abstract generative networks. This, together with the discrete nature of spikes and local circuit interactions among the neurons, raises several difficulties when using recent generative modeling frameworks to train biologically motivated models. In this letter, we show that a biologically motivated model based on multilayer winner-take-all circuits and stochastic synapses admits an approximate analytical description. This allows us to use the proposed networks in a variational learning setting where stochastic backpropagation is used to optimize a lower bound on the data log likelihood, thereby learning a generative model of the data. We illustrate the generality of the proposed networks and learning technique by using them in a structured output prediction task and a semisupervised learning task. Our results extend the domain of application of modern stochastic network architectures to networks where synaptic transmission failure is the principal noise mechanism.
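
A rough sketch of the noise model the letter builds on: a winner-take-all circuit in which Bernoulli synaptic transmission failures are the only source of stochasticity, so repeated presentations of the same input yield a weight-shaped distribution over winners. The snippet below shows only this forward sampling step, with assumed parameters; the variational training procedure described in the abstract is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_units, p_transmit = 30, 8, 0.5
W = rng.normal(0.0, 1.0, (n_units, n_in))
x = (rng.random(n_in) < 0.3).astype(float)     # toy binary input

def wta_sample(x, W, p_transmit, rng):
    """One stochastic forward pass through a winner-take-all circuit.

    Synaptic transmission failures (Bernoulli masking of weights) are the
    only noise source; the unit with the largest noisy drive wins.
    """
    mask = rng.random(W.shape) < p_transmit    # which synapses transmit this time
    drive = (W * mask) @ x
    out = np.zeros(len(drive))
    out[np.argmax(drive)] = 1.0
    return out

# Repeated passes give different winners: the circuit realizes a
# distribution over outputs that is shaped by the weights.
counts = sum(wta_sample(x, W, p_transmit, rng) for _ in range(2000))
print(counts / counts.sum())
```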

9.
IEEE Trans Neural Netw Learn Syst ; 29(7): 3227-3235, 2018 07.
Article in English | MEDLINE | ID: mdl-28783639

ABSTRACT

Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the hard nonlinearity of spike generation and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme, where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry over directly to the training of such spiking networks, as we show when training on the permutation-invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
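
One common way to make "piecewise linear after a transformation of variables" concrete is to assume non-leaky integrate-and-fire neurons driven by exponentially decaying synaptic currents and to substitute z = exp(t); under these assumptions the first output spike time obeys z_out = sum_i(w_i * z_i) / (sum_i(w_i) - 1) over the causal input set, which is linear in the z variables. The sketch below checks a brute-force simulation of those assumed dynamics against the closed form; the constants and the specific neuron model are assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def first_spike_time_numeric(w, t_in, threshold=1.0, dt=1e-4, t_max=20.0):
    """Brute-force simulation of the assumed membrane dynamics:
    V(t) = sum over inputs that arrived before t of w_i * (1 - exp(-(t - t_i)))."""
    for t in np.arange(0.0, t_max, dt):
        active = t_in < t
        v = np.sum(w[active] * (1.0 - np.exp(-(t - t_in[active]))))
        if v >= threshold:
            return t
    return np.inf

def first_spike_time_closed_form(w, t_in):
    """Closed form in z = exp(t), assuming all inputs are in the causal set."""
    z_in = np.exp(t_in)
    z_out = np.sum(w * z_in) / (np.sum(w) - 1.0)
    return np.log(z_out)

w = np.array([0.9, 0.8, 0.6])        # toy weights (sum > 1 so the neuron fires)
t_in = np.array([0.1, 0.3, 0.5])     # toy input spike times
print(first_spike_time_numeric(w, t_in), first_spike_time_closed_form(w, t_in))
```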

10.
Front Neurosci ; 11: 496, 2017.
Article in English | MEDLINE | ID: mdl-28932180

ABSTRACT

Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. A further reduction in addition operations, owing to the sparsity in the forward neural and backpropagating error signal paths, contributes to a highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan-6 FPGA interfacing with an external 1 Gb DDR2 DRAM, showing only a small degradation in test error compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.
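
The multiplication-free arithmetic can be sketched independently of the pipelining: with ±1 (binary) state variables in the forward pass and errors truncated to {-1, 0, +1}, every weight update reduces to adding or subtracting the learning rate. The NumPy fragment below shows one such forward/backward step with made-up sizes and thresholds; the pipelined schedule, DRAM interface, and FPGA details from the paper are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

def ternarize(e, thr):
    """Truncate errors to {-1, 0, +1}: only sign information is kept."""
    return np.sign(e) * (np.abs(e) > thr)

n_in, n_hid, n_out, lr = 16, 32, 4, 2 ** -8
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))

x = np.sign(rng.normal(size=(8, n_in)))          # binary (+/-1) inputs
y = np.eye(n_out)[rng.integers(0, n_out, 8)]     # toy one-hot labels

# Forward pass with binary hidden states.
h_pre = x @ W1
h = np.sign(h_pre)                               # binary state variables
out = h @ W2

# Backward pass with ternary errors: since states are +/-1 and errors are
# in {-1, 0, +1}, each weight update is just +/- the learning rate (or 0),
# so no multiplier is needed in hardware.
e_out = ternarize(out - y, thr=0.1)
e_hid = ternarize((e_out @ W2.T) * (np.abs(h_pre) < 1.0), thr=0.1)

W2 -= lr * h.T @ e_out
W1 -= lr * x.T @ e_hid
```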

11.
Nat Commun ; 6: 8941, 2015 Dec 08.
Article in English | MEDLINE | ID: mdl-26642827

ABSTRACT

Constraint satisfaction problems are ubiquitous in many domains. They are typically solved using conventional digital computing architectures that do not reflect the distributed nature of many of these problems, and are thus ill-suited for solving them. Here we present a parallel analogue/digital hardware architecture specifically designed to solve such problems. We cast constraint satisfaction problems as networks of stereotyped nodes that communicate using digital pulses, or events. Each node contains an oscillator implemented using analogue circuits. The non-repeating phase relations among the oscillators drive the exploration of the solution space. We show that this hardware architecture can yield state-of-the-art performance on random SAT problems under reasonable assumptions on the implementation. We present measurements from a prototype electronic chip to demonstrate that a physical implementation of the proposed architecture is robust to practical non-idealities and to validate the theory proposed.
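
As a software caricature of the search principle — oscillators with non-repeating phase relations decide when each variable is reconsidered — the sketch below gives each Boolean variable of a toy 3-SAT instance a private oscillator with an incommensurate period and flips a variable when its oscillator fires while it sits in an unsatisfied clause. This only illustrates the event-driven exploration idea; the instance, the flip rule, and all constants are invented and do not describe the authors' analogue/digital circuits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-SAT instance: each clause is a list of (variable index, negated?) pairs.
n_vars = 12
clauses = [[(int(rng.integers(n_vars)), bool(rng.integers(2))) for _ in range(3)]
           for _ in range(40)]

def unsatisfied(assign):
    return [c for c in clauses if not any(assign[v] != neg for v, neg in c)]

assign = rng.integers(0, 2, n_vars).astype(bool)

# Each variable owns an oscillator with an incommensurate period, so the order
# in which variables are revisited never repeats exactly -- a software stand-in
# for the non-repeating phase relations described in the abstract.
periods = 1.0 + rng.random(n_vars)
next_fire = periods.copy()

t = 0.0
while t < 200.0:
    unsat = unsatisfied(assign)
    if not unsat:
        break
    v = int(np.argmin(next_fire))            # which oscillator fires next
    t = next_fire[v]
    next_fire[v] += periods[v]
    # Flip the variable only if it appears in a currently unsatisfied clause.
    if any(v == var for c in unsat for var, _ in c):
        assign[v] = not assign[v]

print("unsatisfied clauses remaining:", len(unsatisfied(assign)))
```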

12.
Neural Comput ; 27(12): 2510-47, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26496042

ABSTRACT

Gamma-band rhythmic inhibition is a ubiquitous phenomenon in neural circuits, yet its computational role remains elusive. We show that a model of gamma-band rhythmic inhibition allows networks of coupled cortical circuit motifs to search for network configurations that best reconcile external inputs with an internal consistency model encoded in the network connectivity. We show that Hebbian plasticity allows the networks to learn the consistency model by example. The search dynamics driven by rhythmic inhibition enable the described networks to solve difficult constraint satisfaction problems without making assumptions about the form of stochastic fluctuations in the network. We show that the search dynamics are well approximated by a stochastic sampling process. We use the described networks to reproduce perceptual multistability phenomena with switching times that are a good match to experimental data and show that they provide a general neural framework that can be used to model other perceptual inference phenomena.
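
A deliberately simplified caricature of "search driven by rhythmic inhibition": in the sketch below, a Hopfield-style network's symmetric weights stand in for the internal consistency model, an external input biases the search, and a sinusoidally oscillating inhibition level periodically loosens and tightens the activation threshold so the state keeps visiting new configurations. The model, constants, and update rule are assumptions for illustration only, not the circuit motifs analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 30
J = rng.normal(0, 1, (n, n))
J = (J + J.T) / 2.0                   # symmetric "consistency" weights
np.fill_diagonal(J, 0)
ext = rng.normal(0, 0.5, n)           # external input to reconcile with J
s = rng.integers(0, 2, n).astype(float)

def energy(s):
    return -0.5 * s @ J @ s - ext @ s

best_e = energy(s)
for step in range(3000):
    # Gamma-like rhythmic inhibition: a global threshold that rises and falls.
    inhibition = 2.0 * (1.0 + np.sin(2 * np.pi * step / 25.0))
    i = rng.integers(n)               # asynchronous single-unit update
    s[i] = float(J[i] @ s + ext[i] - inhibition > 0)
    best_e = min(best_e, energy(s))

print("best energy found:", best_e)
```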


Subjects
Gamma Rhythm/physiology, Models, Neurological, Neural Inhibition/physiology, Neuronal Plasticity/physiology, Neurons/physiology, Action Potentials/physiology, Cerebral Cortex/physiology, Computer Simulation, Humans, Neural Networks, Computer, Stochastic Processes
13.
Front Neurosci ; 9: 357, 2015.
Article in English | MEDLINE | ID: mdl-26483629

ABSTRACT

Synaptic plasticity plays a crucial role in allowing neural networks to learn and adapt to various input environments. Neuromorphic systems need to implement plastic synapses to obtain basic "cognitive" capabilities such as learning. One promising and scalable approach for implementing neuromorphic synapses is to use nano-scale memristors as synaptic elements. In this paper we propose a hybrid CMOS-memristor system comprising CMOS neurons interconnected through TiO2-x memristors, and spike-based learning circuits that modulate the conductance of the memristive synapse elements according to a spike-based Perceptron plasticity rule. We highlight a number of advantages for using this spike-based plasticity rule as compared to other forms of spike timing dependent plasticity (STDP) rules. We provide experimental proof-of-concept results with two silicon neurons connected through a memristive synapse that show how the CMOS plasticity circuits can induce stable changes in memristor conductances, giving rise to increased synaptic strength after a potentiation episode and to decreased strength after a depression episode.
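
The spike-based perceptron rule referred to here can be paraphrased as: step the memristor conductance up or down only when the postsynaptic outcome disagrees with a target signal, and only for synapses whose presynaptic neuron spiked, while keeping the conductance within the device range. The sketch below encodes that paraphrase in NumPy with invented constants; the chip's actual circuits additionally gate updates on postsynaptic state variables, which is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, g_min, g_max, dg = 20, 0.1, 1.0, 0.02
g = rng.uniform(g_min, g_max, n_pre)      # memristor conductances
theta = 8.0                               # postsynaptic firing threshold

def perceptron_update(g, pre_spikes, target):
    """Spike-based perceptron rule (sketch): step conductances only when the
    postsynaptic outcome disagrees with the target, in the direction that
    reduces the error, and only for synapses that received a spike."""
    post = float(g @ pre_spikes >= theta)
    err = target - post                   # +1: should have fired, -1: should not
    g = np.clip(g + dg * err * pre_spikes, g_min, g_max)
    return g, post

pre = (rng.random(n_pre) < 0.5).astype(float)   # one fixed input spike pattern
for _ in range(50):
    g, post = perceptron_update(g, pre, target=1.0)
print("postsynaptic neuron fires for this pattern:", bool(post))
```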

14.
Front Neurosci ; 9: 141, 2015.
Article in English | MEDLINE | ID: mdl-25972778

ABSTRACT

Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses, for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse circuits and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm2 and consumes approximately 4 mW in typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
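
The "bi-stable spike-based plasticity" mentioned above is commonly realized with an internal synaptic variable that receives spike-driven jumps and otherwise drifts toward one of two stable states, so the stored efficacy is effectively binary and robust to noise. The sketch below is a guess at that style of mechanism with invented constants; it is a rough software analogue, not the circuit implemented on this device.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bistable internal synaptic variable X in [0, 1]: spike-driven jumps plus a
# drift toward 0 or 1 depending on which side of 0.5 it currently sits on.
X, dt, drift, jump = 0.2, 1e-3, 0.5, 0.15

def step(X, pre_spike, post_depolarized):
    if pre_spike:
        # Jump up on pre/post coincidence, down otherwise.
        X += jump if post_depolarized else -jump
    else:
        # Between spikes, drift toward the nearest stable state (0 or 1).
        X += dt * drift * (1.0 if X > 0.5 else -1.0)
    return float(np.clip(X, 0.0, 1.0))

for t in range(2000):
    pre = rng.random() < 0.02        # presynaptic Poisson spikes
    post = rng.random() < 0.7        # postsynaptic neuron mostly depolarized
    X = step(X, pre, post)

print("internal variable:", round(X, 2), "-> efficacy:", "high" if X > 0.5 else "low")
```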

15.
Can J Urol ; 21(6): 7578-81, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25483769

ABSTRACT

Our objective is to describe a novel presentation of subcutaneous penile insertion of foreign bodies. This practice is performed globally and has mostly been reported outside of the United States. We present three cases of incarcerated males who implanted sculpted dominoes into the penile subcutaneous tissue. The patients presented with erosion of the foreign bodies through the skin without evidence of infection. We believe that insertion of foreign bodies into penile subcutaneous tissue by incarcerated American males for sexual enhancement is more widespread than previously reported. Erosion is a novel presentation.


Subjects
Foreign Bodies/complications, Penis, Prisoners, Prostheses and Implants, Sexual Behavior, Subcutaneous Tissue, Adult, Humans, Incidence, Male, Penile Diseases/epidemiology, Penis/surgery, Risk Factors, Subcutaneous Tissue/surgery, United States/epidemiology, Urologic Surgical Procedures, Male
16.
Neural Comput ; 26(9): 1973-2004, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24877737

ABSTRACT

Understanding the sequence generation and learning mechanisms used by recurrent neural networks in the nervous system is an important problem that has been studied extensively. However, most of the models proposed in the literature are either not compatible with neuroanatomy and neurophysiology experimental findings, or are not robust to noise and rely on fine tuning of the parameters. In this work, we propose a novel model of sequence learning and generation that is based on the interactions among multiple asymmetrically coupled winner-take-all (WTA) circuits. The network architecture is consistent with mammalian cortical connectivity data and uses realistic neuronal and synaptic dynamics that give rise to noise-robust patterns of sequential activity. The novel aspect of the network we propose lies in its ability to produce robust patterns of sequential activity that can be halted, resumed, and readily modulated by external input, and in its ability to make use of realistic plastic synapses to learn and reproduce the arbitrary input-imposed sequential patterns. Sequential activity takes the form of a single activity bump that stably propagates through multiple WTA circuits along one of a number of possible paths. Because the network can be configured to either generate spontaneous sequences or wait for external inputs to trigger a transition in the sequence, it provides the basis for creating state-dependent perception-action loops. We first analyze a rate-based approximation of the proposed spiking network to highlight the relevant features of the network dynamics and then show numerical simulation results with spiking neurons, realistic conductance-based synapses, and spike-timing dependent plasticity (STDP) rules to validate the rate-based model.
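
A rate-based toy of the propagating activity bump: a ring of soft winner-take-all populations with asymmetric forward coupling and slow adaptation, so the current winner excites its successor and eventually fatigues, handing the bump forward. The constants and the softmax competition below are invented for illustration; the paper's spiking network, conductance-based synapses, and STDP learning are not modeled.

```python
import numpy as np

# A ring of winner-take-all (WTA) populations with asymmetric forward coupling
# and slow adaptation: the winning population excites its successor and its
# own fatigue eventually hands the activity bump forward.
n, steps = 8, 400
w_self, w_fwd, tau_a, gain = 2.0, 1.2, 0.02, 4.0

r = np.zeros(n)
r[0] = 1.0                        # activity bump starts at population 0
a = np.zeros(n)                   # adaptation (fatigue) variables

history = []
for t in range(steps):
    drive = w_self * r + w_fwd * np.roll(r, 1) - a
    e = np.exp(gain * drive)
    r = e / e.sum()               # soft winner-take-all competition
    a += tau_a * (r - 0.5 * a)    # winners slowly fatigue, losers recover
    history.append(int(np.argmax(r)))

print("winning population over time:", history[::50])
```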


Subjects
Neural Networks, Computer, Action Potentials, Cerebral Cortex/physiology, Computer Simulation, Neuronal Plasticity/physiology, Neurons/physiology, Synapses/physiology
17.
Appl Environ Microbiol ; 68(5): 2619-23, 2002 May.
Article in English | MEDLINE | ID: mdl-11976147

ABSTRACT

An efficient transformation protocol for Gluconobacter oxydans and Acetobacter liquefaciens strains was developed by preparing electrocompetent cells grown on yeast extract-ethanol medium. Plasmid pBBR122 was used as a broad-host-range vector to clone the Escherichia coli lacZY genes in G. oxydans and A. liquefaciens. Although both lac genes were functionally expressed in both acetic acid bacteria, only a few transformants were able to grow on lactose. However, this ability strictly depended on the presence of a plasmid expressing both lac genes. Mutations in the plasmids and/or in the chromosome were excluded as the cause of the ability to grow on lactose.


Subjects
Escherichia coli Proteins, Escherichia coli/genetics, Lac Operon/genetics, Membrane Transport Proteins/biosynthesis, Monosaccharide Transport Proteins, Symporters, Acetobacter/genetics, Acetobacter/growth & development, Cloning, Molecular, Culture Media, Gene Expression, Genetic Vectors, Gluconobacter oxydans/genetics, Gluconobacter oxydans/growth & development, Lactose/metabolism, Membrane Transport Proteins/genetics, Plasmids/genetics