Cholinergic Activation of M2 Receptors Leads to Context-Dependent Modulation of Feedforward Inhibition in the Visual Thalamus

"Cholinergic modulation regulates the output mode of these interneurons and controls the resulting inhibition" - It's all right there and couldn't be more explicit.

Yet more evidence, in this research series, for the cholinergic role in switching the firing mode in the visual thalamus. It is pertinent to my case, and I'll be going over it with my neurologist to decide on the most targeted and decisive way to counteract it via medication-specific antagonism. The mechanism I've been referring to in previous studies is tested here once again.

It's stated outright at the top of the article. Remember that acetylcholine is present as a neurotransmitter in all three subsections of the mammalian nervous system: the peripheral nervous system (motor and sensory nerves throughout the body), the autonomic nervous system (which regulates subconscious or involuntary functions like breathing, digestion, and heart rate), and the central nervous system (brain and spinal cord). But unlike in the peripheral and autonomic nervous systems, where it is principally excitatory, in the CNS it straddles and defies the excitatory-inhibitory dichotomy and serves as a vital neuromodulator regulating neuronal excitability.

(From PLOS Biology at plos.org, the open-access, peer-reviewed scientific journal)

> Cholinergic Activation of M2 Receptors Leads to Context-Dependent Modulation of Feedforward Inhibition in the Visual Thalamus

**In many brain regions, inhibition is mediated by numerous classes of specialized interneurons, but within the rodent dorsal lateral geniculate nucleus (dLGN), a single class of interneuron is present. dLGN interneurons inhibit thalamocortical (TC) neurons and regulate the activity of TC neurons evoked by retinal ganglion cells (RGCs), thereby controlling the visually evoked signals reaching the cortex. It is not known whether neuromodulation can regulate interneuron firing mode and the resulting inhibition. Here, we examine this in brain slices. We find that cholinergic modulation regulates the output mode of these interneurons and controls the resulting inhibition in a manner that is dependent on the level of afferent activity. When few RGCs are activated, acetylcholine suppresses synaptically evoked interneuron spiking, and strongly reduces disynaptic inhibition. In contrast, when many RGCs are coincidently activated, single stimuli promote the generation of a calcium spike, and stimulation with a brief train evokes prolonged plat

... keep reading on reddit ➑

πŸ‘︎ 8
πŸ“…︎ Dec 30 2021
Model-based control design books? Focused on Feedback control and Feedforward control.

Greetings,

I attended a poorly constructed course on Modern Design of Control Systems at university for my master's degree; the course's teaching had suffered from academic changes across recent years. Although I passed the exam comfortably, I feel like I've learned only the things needed to "pass the exam"...

Are there any well-suited books or manuals on model-based control system design targeted at university students, ideally with some exercises to perform in MATLAB, that I could use to improve my knowledge?

Any help and advice will be appreciated.

Thank you all for your time and attention

πŸ‘︎ 13
πŸ“…︎ Jan 06 2022
Feedforward delay of deep neural networks

I'm wondering if evaluating a DNN (i.e. feeding the values forward through each layer) would ever be a problem on a hardware-limited platform like a small drone.

For a time-sensitive application like using a DNN as a controller, how large would the network have to be for the feed-forward process to impose considerable issues in terms of time delay?

Have there been any studies exploring this concept?
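For a rough feel, the forward pass can simply be timed on the target class of hardware. A minimal probe (my own sketch; the layer sizes and loop count are illustrative assumptions):

import time
import numpy as np

def forward(x, weights):
    # dense forward pass with tanh activations
    for W in weights:
        x = np.tanh(W @ x)
    return x

sizes = [64, 256, 256, 8]   # hypothetical controller network
weights = [np.random.randn(n_out, n_in) for n_in, n_out in zip(sizes, sizes[1:])]
x = np.random.randn(sizes[0])

t0 = time.perf_counter()
for _ in range(1000):
    forward(x, weights)
per_pass = (time.perf_counter() - t0) / 1000
print(f"{per_pass * 1e6:.1f} us per forward pass")

Comparing that number against the control-loop period (e.g. 1 ms for a 1 kHz loop) shows how much headroom a network of a given size leaves.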

πŸ‘︎ 7
πŸ‘€︎ u/Cogitarius
πŸ“…︎ Jan 18 2022
Deep Neural Network from Scratch in Python | Fully Connected Feedforward Neural Network youtu.be/b_w4eEiogaE
πŸ‘︎ 6
πŸ‘€︎ u/research_pie
πŸ“…︎ Nov 23 2021
Deep Neural Network from Scratch in Python | Fully Connected Feedforward Neural Network youtu.be/b_w4eEiogaE
πŸ‘︎ 20
πŸ‘€︎ u/research_pie
πŸ“…︎ Dec 10 2021
How to add a feedforward system to chlorination disinfection in wastewater treatment? Maybe put a flow transmitter at the effluent end monitoring Cl2 concentration and controlling a valve. Could the feedforward system also be used to feed the effluent concentration signal forward and stop all flow valves?
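Not speaking from plant experience, just sketching the usual structure: a flow-proportional feedforward dose trimmed by feedback on the measured residual. All names and gains below are hypothetical. Note that a transmitter at the effluent end gives a feedback measurement; feedforward proper acts on the influent flow before the disturbance reaches the residual.

def dose_rate(flow, cl2_demand, residual_meas, residual_sp, kp=0.5):
    # feedforward: dose scales with influent flow and expected Cl2 demand
    ff = flow * cl2_demand
    # feedback trim: correct for the error in the measured effluent residual
    fb = kp * (residual_sp - residual_meas)
    return max(ff + fb, 0.0)   # dosing command, clamped at zero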
πŸ‘︎ 4
πŸ‘€︎ u/No-Invite-7082
πŸ“…︎ Nov 26 2021
This was the post about adding feedforward control to
πŸ‘︎ 5
πŸ‘€︎ u/No-Invite-7082
πŸ“…︎ Nov 25 2021
Roadrunner Feedforward Tuning Error

Our team is using Roadrunner this year, and we had previously started tuning the Drive Velocity PID until we decided that we wanted to switch to the feedforward control. When trying to run both the Manual and Automatic feedforward tuners, an error message pops up, saying "Error: Feedforward constants usually don't need to be tested when using the built-in drive motor velocity PID." Has anyone else encountered this, and if so, how can we fix this?

πŸ‘︎ 6
πŸ‘€︎ u/grandeari
πŸ“…︎ Oct 27 2021
Feedforward speed of deep neural networks

When implementing neural networks for real-time control systems, would the feedforward process ever pose a problem?

For instance, a PID controller would be nearly instantaneous, as not very many calculations have to take place, but would a deep neural network consisting of hundreds/thousands of units used in place of the PID take too long to feedforward through and therefore cause problems?

I imagine, of course, this would entirely be a function of the neural network's size, but I am wondering where the limits would be. If anyone could share their knowledge or point me in the direction of any studies/papers, I would appreciate it.
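As a back-of-envelope alternative to benchmarking, you can count operations: a dense layer from n_in to n_out units costs roughly 2·n_in·n_out multiply-adds. The layer sizes and processor throughput below are illustrative assumptions only:

sizes = [64, 512, 512, 512, 8]                 # hypothetical network
flops = sum(2 * a * b for a, b in zip(sizes, sizes[1:]))
throughput = 1e8                               # assumed FLOP/s of a small embedded CPU
print(f"{flops} FLOPs -> ~{flops / throughput * 1e6:.0f} us per pass")

By this estimate a ~1 MFLOP network on a ~100 MFLOP/s processor already eats ~10 ms per pass, which would dominate a fast control loop; a PID, by comparison, is a handful of operations.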

πŸ‘︎ 5
πŸ‘€︎ u/Cogitarius
πŸ“…︎ Nov 22 2021
[D] [R] Spiking Neural Networks and multiplexing feedforward and feedback signals

Can anyone help me in finding examples or articles on SNNs and multiplexing feedforward and feedback signals similar to the following article?

https://www.nature.com/articles/s41593-021-00857-x

πŸ‘︎ 6
πŸ‘€︎ u/Waste_Screen_5803
πŸ“…︎ Sep 29 2021
Feedforward for arms and elevators

Hi, our team wants to use feedforward for our arms and elevator. Should we use the FTCLib feedforward models, and if so, how would we find the gains? Thank you for all of your help! https://docs.ftclib.org/ftclib/features/controllers#armfeedforward
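As I read the linked docs, the FTCLib models follow the WPILib-style formulas: for an arm, u = kS·sgn(v) + kCos·cos(θ) + kV·v + kA·a, and for an elevator the cosine term becomes a constant kG. A Python transcription for illustration (mine, not FTCLib code):

import math

def sign(v):
    return (v > 0) - (v < 0)   # signum: -1, 0, or 1

def arm_feedforward(theta, v, a, kS, kCos, kV, kA):
    # kS: static friction, kCos: gravity (largest when the arm is horizontal),
    # kV: velocity gain, kA: acceleration gain
    return kS * sign(v) + kCos * math.cos(theta) + kV * v + kA * a

def elevator_feedforward(v, a, kS, kG, kV, kA):
    # the gravity load on an elevator is constant, so cos(theta) becomes kG
    return kS * sign(v) + kG + kV * v + kA * a

A common way to estimate the gains empirically: kG (or kCos with the arm horizontal) is the smallest power that holds the mechanism in place, kS is the extra power needed to just start it moving, kV is the slope of a steady-state velocity-versus-power line, and kA can often start at zero.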

πŸ‘︎ 8
πŸ‘€︎ u/Cheeseboy8020
πŸ“…︎ Aug 06 2021
Feedforward Controller based on Gaussian Process Regression or Artificial Neural Networks

Hi Everyone,

Last semester I did my first course in machine learning. The course was called Machine Learning for Control Systems. The topics were about approximating transfer functions using Gaussian Process Regression (GPR) and Artificial Neural Networks (ANNs), and about controlling systems using reinforcement learning.

The GPR and ANN solutions were very good at approximating functions. However, I don't quite understand how I can make a feedforward controller from these estimated transfer functions. Pretty much all of these transfer functions are difficult to model (because they are very non-linear). Ideally I would keep the model non-linear so that it can correct for the nonlinearities of the true system.

The question thus remains: "How can we make a feedforward controller based on a function estimate made with a GPR or an ANN?"

Is there anyone here who has done this before?

Many thanks in advance!
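One standard recipe (an assumption on my part, not necessarily what the course intended) is to fit the *inverse* model: regress the plant input u on the plant output y from logged data, then evaluate the fitted model at the reference to get the feedforward command. A toy sketch with scikit-learn's GPR, using an invented nonlinear plant as a stand-in for real data:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

U = np.random.uniform(-1.0, 1.0, (200, 1))   # logged plant inputs
Y = np.sin(2.0 * U) + 0.3 * U**3             # logged plant outputs (toy plant)

g_inv = GaussianProcessRegressor().fit(Y, U.ravel())   # fit the inverse map y -> u

def feedforward(r):
    # feedforward command: the input predicted to produce output r
    return g_inv.predict(np.atleast_2d(r))[0]

print(feedforward(0.5))

The caveat is that the inverse must exist (no two logged inputs producing the same output), and for a dynamic system you would regress on a window of past outputs and inputs rather than a single value.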

πŸ‘︎ 5
πŸ‘€︎ u/hidjedewitje
πŸ“…︎ Jul 11 2021
Lack of activation in transformer feedforward layer?

It appears that transformer (pointwise) feedforward layers typically take the following form:

import torch.nn.functional as F

def feedforward(x, W1, W2, p=0.1):
    # position-wise feedforward block of a transformer
    # W1, W2 are trainable weight matrices
    x = F.dropout(F.gelu(W1 @ x), p)  # expansion matmul followed by GELU
    x = F.dropout(W2 @ x, p)          # projection matmul, no second activation
    return x

I'm curious as to why the second matrix multiplication is not followed by an activation unlike the first one. Is there any particular reason why a non-linearity would be trivial or even avoided in the second operation? For reference, variations of this can be witnessed in a number of different implementations, including BERT-pytorch and attention-is-all-you-need-pytorch.

Thank you for your insight in advance!

πŸ‘︎ 2
πŸ‘€︎ u/JST99
πŸ“…︎ May 21 2021
ResMLP: Feedforward networks for image classification with data-efficient training by Hugo Touvron et al. deepai.org/publication/re…
πŸ‘︎ 3
πŸ‘€︎ u/deep_ai
πŸ“…︎ May 13 2021
Tuning gravity feedforward?

Hello, I'm from team 16605, and our team saw that you can control an elevator with feedforward control. We are using the Roadrunner gravity feedforward, which can be found here. We would like to know if there is a way to tune this feedforward, and if not, what do we put into the coefficients? Thank you so much for your help!
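If no automated tuner applies, the gravity term is commonly found by hand: command a constant power and raise it until the elevator just holds position against gravity. A toy sketch of that search (the plant check below is invented for illustration, not Roadrunner API):

def holds_position(power, gravity_power=0.23):
    # stand-in for a real on-robot test: does this power keep the lift from sagging?
    return power >= gravity_power   # hypothetical plant needing 0.23 power

kG, step = 0.0, 0.01
while not holds_position(kG):
    kG += step                      # nudge the holding power upward
print("gravity feedforward kG =", round(kG, 2))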

πŸ‘︎ 7
πŸ‘€︎ u/Cheeseboy8020
πŸ“…︎ May 16 2021
[R] Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes

Hi r/ML,

I'm writing a series of papers on a new way to think about neural networks called Tensor Programs that I'm really excited about. The first paper was published in NeurIPS 2019, but I figured it's never too late to share with the community! I'll put the paper link here and also say a few words about the content.

Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes

paper: https://arxiv.org/abs/1910.12478

code: https://github.com/thegregyang/GP4A

What is a Gaussian process? You can think of a GP as just a fancy way of saying "multivariate Gaussian distribution". Thus our result says: for a wide, randomly initialized network f and two inputs x, y, the distribution of (f(x), f(y)) looks like some 2D Gaussian. Similarly, for k inputs x_1, x_2, ..., x_k, the distribution of (f(x_1), ..., f(x_k)) looks like a kD Gaussian. The covariance of these kD Gaussians is the most important data associated with the GP, called the kernel of the GP.
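A quick empirical check of this claim is easy to run (a sketch of mine, not code from the paper or the GP4A repo): sample many wide, randomly initialized one-hidden-layer ReLU networks and inspect the joint distribution of (f(x), f(y)); its empirical covariance approximates the GP kernel.

import numpy as np

width, trials = 4096, 2000
X = np.stack([[1.0, 0.0], [0.5, 0.5]]).T   # columns are the two inputs x and y
outs = []
for _ in range(trials):
    W = np.random.randn(width, 2) / np.sqrt(2)     # random input layer
    v = np.random.randn(width) / np.sqrt(width)    # random readout layer
    outs.append(v @ np.maximum(W @ X, 0.0))        # (f(x), f(y)) under ReLU
outs = np.asarray(outs)
print(np.cov(outs.T))   # ~2x2 kernel matrix; each marginal looks Gaussian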

NNGP Correspondence This correspondence between wide neural networks (NN) and Gaussian processes (GP) has a long history, starting with Radford Neal in 1994, and extended over the years (e.g. 1 2 3 4 5). Our paper shows this correspondence is architecturally universal, as the title says.

Architectural universality This architectural universality will be a recurring pattern in this series of papers, and is one of the reasons I'm really excited about it: theoretical understanding of deep learning has always had a problem scaling up results beyond one- or multi-layer perceptrons, and this gap grows wider by the day as mainstream deep learning moves to transformers and beyond. With tensor programs, for the first time, you really just need to show your results once and it's true for all architectures. It's like a CUDA for theory.

OK so, what is a tensor program? In a gist, it's just a sequence of computation composed of matrix multiplication and coordinatewise nonlinearities --- simple, right? It turns out that practically all modern and classical neural networks can be written in this way (this sounds stupidly obvious but I'm hiding some details here; see paper). This expressivity is half of the power of tensor

... keep reading on reddit ➑

πŸ‘︎ 95
πŸ‘€︎ u/thegregyang
πŸ“…︎ Jul 31 2020
Will psych classes (like psych 215) only offer feedforward exams next semester?

I've heard rumours that the psych department will be implementing feedforward exams for next semester. Is this still true or did they decide to drop feedforward exams?

πŸ‘︎ 7
πŸ‘€︎ u/11-Olivia
πŸ“…︎ Dec 14 2020
Image to painting with feedforward CNNs
πŸ‘︎ 33
πŸ‘€︎ u/singam96
πŸ“…︎ Jan 10 2021
Are There Any Non-VCA Feedforward Compressors?

Hello everyone.

Currently doing some research on feedback and feedforward compressor topology. The idea is pretty straightforward but in my research, I've only been able to find feedforward designs in VCA compressors.

So my question, if someone can enlighten me, is: can non-VCA compressors (variable-mu, optical, diode bridge, PWM, FET, etc.) have feedforward designs or are they all feedback by their very nature?

Thanks in advance for any answers!
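I can't speak to the circuit side, but the topological difference is easy to state in DSP terms: a feedforward compressor derives its gain from a detector on the *input*, a feedback one from a detector on its own *output*. A digital sketch of the feedforward case (illustrative only, not a model of any specific unit):

import numpy as np

def compress_ff(x, thresh=0.5, ratio=4.0, alpha=0.999):
    x = np.asarray(x, dtype=float)
    env, y = 0.0, np.empty_like(x)
    for n, s in enumerate(x):
        env = max(abs(s), alpha * env)   # peak envelope follower on the INPUT
        g = 1.0 if env <= thresh else (thresh + (env - thresh) / ratio) / env
        y[n] = g * s                     # apply gain reduction above threshold
    return y

The feedback version would update env from y[n-1] instead, which is why its behavior is entangled with the gain element itself; whether a particular analog gain element (opto, variable-mu, FET) leaves room for an input-side detector path is then a separate circuit question.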

πŸ‘︎ 2
πŸ‘€︎ u/arthurwfox
πŸ“…︎ Mar 15 2021
I made software to visualize a feedforward neural network with pure Kotlin and Swing
πŸ‘︎ 53
πŸ‘€︎ u/longuyen2306
πŸ“…︎ Jan 11 2021
[D][P] Feedforward Noise Cancelling Project. Looking for some advice before i dive into details.

Hi, I am a beginner with ML and have finished one minor project so far, using a simple neural network.

Now I am heading towards a new, more complex project and am searching for a good approach.

Since I don't have much experience, I want to introduce the project and am hoping for some ideas.

(Which approach makes sense: NN, RNN, RL, or an alternative approach?)

The main goal of the project is to find the parameters of a signal generator: an amplitude (A_i) and phase (P_i) for a sinusoid at each fixed frequency i (i = 50 Hz, 100 Hz, ...). The generated signal is then added to the "main" signal in order to cancel out some noise (which consists of these n × 50 Hz frequencies). Basically, it is a noise cancelling problem and can be solved by a feedforward algorithm.

The input is a measurement (duration 2-8 seconds) of the main signal. If a Fourier transform (FFT) is performed, the amplitudes of the 50 Hz harmonics (50 Hz, 100 Hz, 150 Hz, ...) are clearly observable. With the correct A_i and P_i, the amplitude at frequency i can be suppressed (the frequency is compensated by the additional signal created by our signal generator). The goal is now to find these perfect A_i and P_i for each frequency i.

The first measurements have shown that for each FFT_amplitude(frequency i) a unique minimum can be found, and a gradient towards this minimum has been observed (simplified: FFT_amplitude = (A_i)^2 + (P_i)^2).

The loop for the optimization would look like:

1. Measurement of the main signal → FFT
2. Input (n × 1 array): [FFTamplitude_1, ..., FFTamplitude_n]
3. Optimization algorithm (NN, RNN, RL?)
4. Output (n × 2 array): [[A_1, P_1], [A_2, P_2], ..., [A_n, P_n]]
5. Signal processing
6. Measurement of the main signal with the added signal

The loss function could look like:

Loss = sum(FFTamplitude_i), to be minimized.

Since FFT_amplitude = (A_i)^2 + (P_i)^2, a training simulation can be created in order to "pre"-train the algorithm.
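Before training anything, a direct baseline may be worth noting (my sketch, assuming the harmonics are roughly stationary over the 2-8 s measurement): read A_i and P_i straight off the FFT bins of step 1 and emit the phase-inverted sum.

import numpy as np

def harmonic_params(x, fs, n_harmonics, f0=50.0):
    # read amplitude and phase of each harmonic of f0 from the FFT of signal x
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    out = []
    for i in range(1, n_harmonics + 1):
        k = np.argmin(np.abs(freqs - i * f0))            # nearest FFT bin
        out.append((2.0 * np.abs(X[k]) / len(x), np.angle(X[k])))
    return out

def cancel_signal(params, fs, n, f0=50.0):
    # same amplitudes, phases shifted by pi, so the sum cancels the harmonics
    t = np.arange(n) / fs
    return sum(a * np.cos(2 * np.pi * i * f0 * t + p + np.pi)
               for i, (a, p) in enumerate(params, start=1))

A learned model mainly earns its keep if the optimal (A_i, P_i) drift or depend on conditions a single FFT snapshot doesn't capture; this baseline also gives you targets for pre-training.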

I hope I made the situation clear.

One problem I see:

The output is an array of parameters, but the loss function has to be calculated from the measurement in step 6 (see above). I would have to treat the output as a layer in order to get backpropagation working, right? (Please add some advice.)

Furthermore, the algorithm always has to wait for the measurement. Is that an issue? I want to use Keras.

What do you think? Are NN, RNN, RL capable to s

... keep reading on reddit ➑

πŸ‘︎ 2
πŸ‘€︎ u/NotThatGuy97
πŸ“…︎ Apr 05 2021
Conversion of a dense feedforward neural net to a reservoir computer/ESN in Python/Pytorch

Hey all,

I've done a bit of searching around, including this subreddit, and am still struggling a bit with the conversion of a three-layer (input, hidden, output) feedforward neural network to a reservoir computer, a la echo state networks. I'm programming in python and *just* transitioning into PyTorch.

Here are my modifications so far:

  • applying an adjacency matrix with roughly 10% density to either my input weights or my output weights
  • keeping input weights constant and only training output weights

I am wondering how to get started on incorporating the time steps and state space.

Is anyone able to help out?

Thanks so much!
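For comparison against your modifications, here is a minimal ESN loop in plain numpy (my own sketch; the sizes, ~10% density, and 0.9 spectral radius are conventional defaults, not canonical values). The reservoir supplies the time steps and state space; only the linear readout is trained.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))        # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rng.random((n_res, n_res)) < 0.1               # keep ~10% of connections
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # rescale spectral radius to 0.9

def run_reservoir(U):
    # roll the state forward in time, collecting one reservoir state per input
    x, states = np.zeros(n_res), []
    for u in U:                                      # U has shape (T, n_in)
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.asarray(states)                        # shape (T, n_res)

def fit_readout(states, Y, ridge=1e-6):
    # train only the output weights, via ridge regression
    A = states.T @ states + ridge * np.eye(n_res)
    return np.linalg.solve(A, states.T @ Y)          # W_out, shape (n_res, n_out)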

πŸ‘︎ 4
πŸ‘€︎ u/coolimnancy
πŸ“…︎ Jan 19 2021
