Understanding Linear Predictive Coding

Disclaimer: I have almost no knowledge of DSP (and my math knowledge is not great either); I'm trying to understand this to build a very simple vocoder.

As far as I know, using an LPC method (Burg, Levinson-Durbin) lets you compute the denominator coefficients of an all-pole IIR filter.
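In equation form (as far as I understand it), that means an all-pole synthesis filter of order p:

\[
H(z) = \frac{1}{1 + \sum_{k=1}^{p} a_k z^{-k}},
\qquad
y[n] = x[n] - \sum_{k=1}^{p} a_k \, y[n-k]
\]

where x[n] is the excitation (impulse or carrier), y[n] is the output, and a_1..a_p are the LPC coefficients (ignoring the gain term for now).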

Exciting this filter with an impulse on the first sample allows recreating the original signal, with limited accuracy.

AFAIK this filter would also allow passing a different signal (the carrier) through it, which would be enough for a vocoder.

However, I don't fully understand how I should translate this into (Rust) code.

I got this for the IIR filter (only the denominator, as I think that's the only part I need):

// process one carrier sample through the all-pole (denominator-only) filter
fn process(&mut self, input: f32) -> f32 {
    let order = self.coeffs.len();

    // start from the current input sample
    let mut sum = input;

    // apply the filter: subtract coeffs[j] * output[n - 1 - j] for each past output
    for j in 0..order {

        // the ring-buffer index of the (j + 1)-th most recent output
        let current_buf_index = (self.buffer_index + order - j) % order;

        // update the sum
        sum -= self.coeffs[j] * self.buffer[current_buf_index];
    }

    // advance the ring buffer and store the new output sample
    self.buffer_index = (self.buffer_index + 1) % order;
    self.buffer[self.buffer_index] = sum;

    sum
}

Here, the input (what sum starts as) is the current sample of the carrier, self.coeffs are the denominator coeffs (array), self.buffer starts off filled with 0 and has length self.coeffs.len() (the order / number of coeffs), self.buffer_index starts off at 0, and the code gets called once for every sample.
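For completeness, the surrounding state I have in mind looks roughly like this (just a sketch; the struct name Filter and the constructor are only for illustration, and the process function above would live in this impl block too):

struct Filter {
    // denominator coefficients a_1..a_p (a_0 = 1 is implicit and not stored)
    coeffs: Vec<f32>,
    // ring buffer holding the last coeffs.len() output samples
    buffer: Vec<f32>,
    // index of the most recently written output sample
    buffer_index: usize,
}

impl Filter {
    fn new(coeffs: Vec<f32>) -> Self {
        let order = coeffs.len();
        Filter {
            coeffs,
            buffer: vec![0.0; order],
            buffer_index: 0,
        }
    }
}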

1: Is this correct? (If not, how could I check where it goes wrong?)

2: If I implement the code from here: http://www.emptyloop.com/technotes/A%20tutorial%20on%20linear%20prediction%20and%20Levinson-Durbin.pdf and pass a short audio segment into ForwardLinearPrediction, can I use the result as the coeffs in my filter code (roughly as in the sketch below)? (If not, how should I use it to get a vocoder?)
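In case it helps clarify what I mean, here is a rough sketch of how I understand the autocorrelation + Levinson-Durbin step (my own code, not the tutorial's; the names autocorrelation and lpc are just mine), with the returned coefficients intended to be used directly as self.coeffs in the filter above:

// autocorrelation r[0..=max_lag] of one analysis frame
fn autocorrelation(frame: &[f32], max_lag: usize) -> Vec<f32> {
    (0..=max_lag)
        .map(|lag| {
            frame[lag..]
                .iter()
                .zip(frame.iter())
                .map(|(a, b)| a * b)
                .sum::<f32>()
        })
        .collect()
}

// Levinson-Durbin: returns (a_1..a_p, prediction error power)
fn lpc(frame: &[f32], order: usize) -> (Vec<f32>, f32) {
    let r = autocorrelation(frame, order);

    // a[0] = 1 is the implicit leading coefficient of A(z)
    let mut a = vec![0.0f32; order + 1];
    a[0] = 1.0;
    let mut err = r[0];

    for i in 1..=order {
        // a silent frame would otherwise divide by zero
        if err == 0.0 {
            break;
        }

        // reflection coefficient k_i
        let mut acc = r[i];
        for j in 1..i {
            acc += a[j] * r[i - j];
        }
        let k = -acc / err;

        // update a_1..a_i (the symmetric update needs a copy of the old values)
        let prev = a.clone();
        for j in 1..i {
            a[j] = prev[j] + k * prev[i - j];
        }
        a[i] = k;

        err *= 1.0 - k * k;
    }

    // drop a_0 = 1; the rest go straight into the denominator-only filter
    (a[1..].to_vec(), err)
}

Per frame of the modulator I would then do something like let (coeffs, _err) = lpc(&frame, order);, set those as the filter's coeffs, and push the corresponding carrier samples through process(). Is that the right idea?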

3: Is there a proper way of blending between filter coeffs (so I can go from one filter to another without it becoming unstable halfway, and so it "sounds" correct)?

4: What are the differences between the different LPC methods?

Thank you in advance

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/skythedragon64
πŸ“…︎ Feb 26 2021
ELI5: how does audio encoding/decoding work? And what in the world is linear predictive coding?

I have been searching all over the internet, but there is simply no "Audio Encoding for Dummies". I am not at all familiar with this technology, and want to know how it works in very simple terms. All the articles I've found are just way too technical and complex for me to understand. Kudos to anyone who can get this into my head: what is linear predictive coding, and how does it relate to audio encoding/decoding and, more specifically, to cell phones?

Thanks!

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/ilkin247
πŸ“…︎ Nov 25 2019
Predisposition to domain-wide maladaptive changes in predictive coding in auditory phantom perception sciencedirect.com/science…
πŸ‘︎ 23
πŸ’¬︎
πŸ‘€︎ u/yellow_cube
πŸ“…︎ Dec 22 2021
Abdullah Ali: Predictive coding is a consequence of energy efficiency in recurrent neural networks (Neuromatch Conference) youtube.com/watch?v=JTt9v…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/pianobutter
πŸ“…︎ Jan 06 2022
Perceptual Predictions, Predictive Coding by Thoughts and Memory Stores

There’s the conclusion that the brain constantly generates predictions. It uses these to get ahead of experiences, stay safe and so on. The brain is also explained with an internal model of the world, updating with the external.

Fine.

But whatever prediction conjoins or continues with thoughts. Any prediction the brain generates is of the same form as thought. There is no fence between thoughts and predictions.

Also, to predict anything is to know, already. And to know anything in the brain is the memory.

https://www.reddit.com/r/psychologyresearch/comments/rhxmvm/multisensory_integration_is_predicated_on_thoughts/

What is often said, that the brain has an internal model, is more or less like saying that the memory has information on things.

It is the way the information had been known that it is often 'expected' to be, except there are changes, which could lead to reactions, then adjustment of how else it could be known.

Internal models of the world or guesses are from already-stored information in the memory.

Updates are additions to the memory store, or to the group of stores with commonalities.

https://www.reddit.com/r/Brain/comments/rok32y/thoughts_encirclement_of_existence/

Theoretically, the brain understands the world by conversion of everything external, internal, physical or in general reality to thoughts.

It is thoughts that go to the memory to be stored. It is thoughts the memory gives in situations when presented with an incomingβ€Šβ€”β€Šalready convertedβ€Šβ€”β€Šthought on a situation to know what might follow.

It is thought that goes to the destination where emotions are determined. It is thoughts, at this destination, that result in frowning or smiling, as reactions.

https://www.reddit.com/r/RandomThoughts/comments/rgekso/youre_not_seeing_youre_thinking_theory/

Interoception or internal perception of systems, organs in the body also follows the same thought conversion process. The memory has a store for the liver (registered) and knows what the optimal working for the liver is. In a problem, thoughts from its memory store can be sent to a point to feel danger, towards getting attention.

https://www.reddit.com/r/cognitivescience/comments/rrdskv

... keep reading on reddit ➑

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/stpvd
πŸ“…︎ Jan 02 2022
The evolution of brain architectures for predictive coding and active inference (2021) royalsocietypublishing.or…
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/pianobutter
πŸ“…︎ Dec 30 2021
Predictive Coding Theories of Cortical Function arxiv.org/pdf/2112.10048.…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/pianobutter
πŸ“…︎ Dec 21 2021
Predictive Coding, Variational Autoencoders, and Biological Connections (2021) doi.org/10.1162/neco_a_01…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/pianobutter
πŸ“…︎ Nov 12 2021
Predictive coding I: Introduction (2021) marksprevak.com/publicati…
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/pianobutter
πŸ“…︎ Nov 12 2021
What do people here think about the predictive coding theory of Autism?

So since January I have been reading about Autism, and cognitive theories about Autism, and one of the cognitive theories that rings most true to me is (some version of) the predictive coding theory of Autism. Bear in mind that I am not a cognitive scientist (well, as a linguist kind of, but in a completely different field) and I just interpret it as a layman, but a layman who, as an Autistic person, does have some relevant life experience. The predictive coding theory of Autism is an application of the predictive coding theory of the mind, which is a theory about how the human mind works in general, across all neuro-types. The idea is that the mind always makes predictions about what is going to happen, and only fires up, so to speak, when those predictions haven't been borne out. However, since predictions never come true a hundred percent, if your brain fired up every time a prediction hasn't come true exactly, it would still be too much to handle, so people's brains have thresholds for when they will come into action and when to just ignore it, and this threshold differs.

An example: say you are a striker in a soccer^(1) team. The right winger has the ball on the right side. You have played with them before several times, and your brain predicts based on that experience that they are going to give a crosspass, and also where the ball is going to land, and you "intuitively" go to that spot. Now, the right winger doesn't hit the ball quite right, so the prediction doesn't quite come true, the crosspass is too short, and you have to adapt your movement. Now, when your right winger does this, not only is where the ball lands different from your internal prediction, but so are, for example, the movements of your team mates and opponents, and this might be just too much to take it all in, and therefore you have to filter out some of those challenges to your prediction. The difference between a skilled soccer player and an unskilled soccer player is that the predictions of the first one are better, and more fine-grained for different situations (like the right winger not hitting the ball correctly), so when things are different there are fewer things they need to change in their mind, so it takes less energy and time.

So with that in mind, there are two theories developed to explain Autism from this idea. The first one is that our predictions are just bad, and therefore there is too much we need to change all the time. The second one (which is the one I believe to be true) is that our bra

... keep reading on reddit ➑

πŸ‘︎ 13
πŸ’¬︎
πŸ‘€︎ u/merijn2
πŸ“…︎ Sep 14 2021
Sources on predictive coding?

Hi everyone, I was looking to do some reading on predictive coding, but having difficulty understanding a lot of the jargon and technical terms. Are there any papers or journals someone could point me to to help me gain some insight on predictive coding and what it exactly is and how it relates to thinking/thought processes?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/betsubetsubetsu
πŸ“…︎ Oct 18 2021
I built a predictive model but the predictions are very off on the test dataset... most of the predictions are way lower than the actual values. Do I need to select a new modelling technique? I am using linear regression.
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/sarthak004
πŸ“…︎ Jul 24 2021
[2110.02345] Unsupervised Speech Segmentation and Variable Rate Representation Learning using Segmental Contrastive Predictive Coding arxiv.org/abs/2110.02345
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/nshmyrev
πŸ“…︎ Oct 09 2021
[D] Motivation of Contrastive Predictive Coding

I am going through the literature on unsupervised/self-supervised learning and got stuck on the motivation behind CPC as described in "Representation Learning with Contrastive Predictive Coding" by Oord et al. From the paper:

>"One of the challenges of predicting high-dimensional data is that unimodal losses such as mean-squared error and cross-entropy are not very useful, and powerful conditional generative models which need to reconstruct every detail in the data are usually required. But these models are computationally intense, and waste capacity at modeling the complex relationships in the data x, often ignoring the context c. For example, images may contain thousands of bits of information while the high-level latent variables such as the class label contain much less information (10 bits for 1,024 categories). This suggests that modeling p(x|c) directly may not be optimal for the purpose of extracting shared information between x and c."

Given the context that this paper focuses on time-series prediction, I have a couple questions about this motivation.

  1. By "conditional generative models" are they referring to models like CGANs?
  2. Why would modeling p(x|c) not promote shared information between x and c? I think I understand their example when the context contains less information, but if a generative model was conditioned on higher-dimensional data, would it necessarily ignore that context as well?
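For reference, my reading is that the loss the paper uses instead (InfoNCE) only asks the model to pick the true future sample out of a set of negatives given the context, rather than reconstruct it:

\[
\mathcal{L}_N = -\,\mathbb{E}_X \left[ \log \frac{f_k(x_{t+k}, c_t)}{\sum_{x_j \in X} f_k(x_j, c_t)} \right]
\]

where X contains one positive and N-1 negative samples and f_k is a simple (log-bilinear) scoring function, so no full generative model of p(x|c) is needed.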
πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/eleswon
πŸ“…︎ Sep 25 2021
Associative Memories via Predictive Coding
πŸ‘︎ 5
πŸ’¬︎
πŸ“…︎ Sep 28 2021
(PDF) Precise Minds in Uncertain Worlds: Predictive Coding in Autism researchgate.net/publicat…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Skyoneo5
πŸ“…︎ Oct 16 2021
How plausible would a Bayesian epistemological theory founded on a Bayesian cognitive scientific theory (e.g. Friston's free energy principle or predictive coding) be?

also: how fruitful / reasonable is the idea, does it make sense, etc.

also: Bayesian cognitive science / the Bayesian Brain hypothesis, and specific theories like Friston's free energy principle and predictive processing hypothesis and theories like those of Hohwy and Clark, etc.

In particular I think it might also ground an evidentialist and proper-functionalist (and thus also reliabilist) epistemological theory.

I think the theory might ground a naturalistic, ideal, Hegelian dialectic, pragmatist, 4E (embodied, etc) epistemology.

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/mochaelo
πŸ“…︎ Aug 11 2021
"Predictive Coding: a Theoretical and Experimental Review", Millidge et al 2021 arxiv.org/abs/2107.12979
πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/gwern
πŸ“…︎ Aug 21 2021
Predictive Coding: A Theoretical and Experimental Review arxiv.org/abs/2107.12979
πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/pianobutter
πŸ“…︎ Jul 30 2021
Predictive Coding has been Unified with Backpropagation lesswrong.com/posts/JZZEN…
πŸ‘︎ 42
πŸ’¬︎
πŸ‘€︎ u/clockworktf2
πŸ“…︎ Apr 03 2021
Coding Linear Regression | 100 Days of TensorFlow: Episode 6 youtube.com/watch?v=BqUE_…
πŸ‘︎ 61
πŸ’¬︎
πŸ‘€︎ u/quicksote
πŸ“…︎ Oct 08 2021
Predictive Coding: a Theoretical and Experimental Review arxiv.org/abs/2107.12979v…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/ShareScienceBot
πŸ“…︎ Jul 30 2021
The predictive coding theory of autism spectrumnews.org/news/pre…
πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/ksk1222
πŸ“…︎ Jul 13 2021
Estimation of risk difference with linear probability model vs predictive margins from logistic model

Hello all,

I am trying to estimate an adjusted risk difference for a binary outcome using multivariable regression. The two approaches I am familiar with are (1) use a linear probability model [with sandwich estimator standard errors] to directly estimate the RD; and (2) use a nonlinear model (e.g., logistic regression), calculate the predicted probabilities for the whole sample in the presence and absence of the exposure, and take the difference, using the bootstrap for standard errors. Both approaches are discussed in the literature, but I haven't found any work comparing them to one another.
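For concreteness, the quantity I have in mind for approach 2 (standardization / predictive margins) is roughly

\[
\widehat{RD} = \frac{1}{n} \sum_{i=1}^{n} \left[ \hat{p}(Y = 1 \mid A = 1, Z_i) - \hat{p}(Y = 1 \mid A = 0, Z_i) \right]
\]

where A is the exposure, Z_i are the covariates for subject i, and the predicted probabilities come from the fitted logistic model, whereas approach 1 reads the RD directly off the coefficient on A in the linear probability model.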

In my analysis, some of the estimates from approach 1 (linear probability model) are substantively larger than those from approach 2. I can present both sets of results, but I want to be able to discuss why they might differ, and I'm currently not sure. I know that approach 2 gives an estimate specific to my sample and its distribution of covariates (i.e., marginal rather than conditional). However, my understanding is that approach 1 could be interpreted as either a marginal or conditional RD, so I'm not certain why it wouldn't line up.

Does anyone have insights as to why/under what conditions these approaches would produce different results?

Many thanks!

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/artdco
πŸ“…︎ Mar 22 2021
"Z-IL: Predictive Coding Can Do Exact Backpropagation on Any Neural Network", Salvatori et al 2021 (scaling local learning rules to ImageNet AlexNet/Resnet & ALE DRL at similar compute cost) arxiv.org/abs/2103.04689
πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/gwern
πŸ“…︎ May 02 2021
Predictive Coding Can Do Exact Backpropagation on Any Neural Network arxiv.org/abs/2103.04689
πŸ‘︎ 23
πŸ’¬︎
πŸ‘€︎ u/nickb
πŸ“…︎ Jun 04 2021
Coding a loop for many linear regressions

For a final set of analyses for my MS thesis, I need to run many linear regressions. I have some basic knowledge of using loops, but I would like to know if it's possible to use them for many regressions and save the summary output to a document or data frame. In the past I've just run each regression separately, which was extremely tedious.

In my dataset, columns B-M are the response variables and columns Q to the end are the predictors.

edit: added screenshot and further explanation

https://preview.redd.it/bqtaweebwkh71.png?width=2560&format=png&auto=webp&s=6501426850128c98511344633b9cf3b7ccfc1a0b

πŸ‘︎ 14
πŸ’¬︎
πŸ‘€︎ u/dendroeco
πŸ“…︎ Aug 15 2021
Coding Linear Regression | 100 Days of TensorFlow: Episode 6 youtube.com/watch?v=BqUE_…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/Ill_Force756
πŸ“…︎ Oct 08 2021
Question about Bayesian brain / predictive coding: How far up does it go?

Hello,

I've been very interested over the past few years in Scott's posts about predictive coding: the idea that the brain naturally works in a Bayesian way to generate its model of the world. I've especially been interested in the way that this can be used to look at mental illnesses, and I've even been using it to help with my treatment of my obsessive-compulsive disorder (with somewhat limited success, but still more than I had been having beforehand).

There's one question I have, though, and I'm not sure if I've seen this addressed.

From what I can tell, predictive coding seems to go all the way up: the Bayesian processes seem to happen from the lowest levels of sensory perception (creating optical illusions, causing people to skim over repeated "the"s in sentences, the wine tasting illusion, etc.) all the way up to cognitive belief-forming (e.g. the polar bear example - if your friend tells you that they saw a polar bear, you're naturally skeptical because your prior on polar bears being in the region is extremely low). This makes intuitive sense to me.

What I'm confused about, then, is why people seem to fail at Bayesian reasoning in certain contexts. The classic example is disease testing: If you give someone a test with an 80% accuracy rate (i.e. it detects disease 80% of the time if you have it), people will automatically assume that a positive test means there's an 80% chance they have the disease, even if the disease is extremely rare. In other words, they mix up P(H|E) and P(E|H), or commit the base rate fallacy (I assume these mean the same thing?).
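To make the example concrete (with made-up numbers, since only the 80% hit rate is given): assume a 1% base rate and a 10% false-positive rate. Then

\[
P(D \mid +) = \frac{P(+ \mid D)\, P(D)}{P(+ \mid D)\, P(D) + P(+ \mid \neg D)\, P(\neg D)}
= \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.1 \times 0.99} \approx 0.075,
\]

i.e. about 7.5%, nowhere near the 80% people intuitively report.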

Why does this happen? If the brain naturally works in a Bayesian way, even on the level of conscious belief-formation (e.g. the polar bear example, mental illnesses, etc), why do people make these mistakes? Is there a certain level of cognition, extremely high up, where the natural Bayesian processes break down? Or is there some sort of confusion that happens when numbers in particular are used? Does anyone know what's going on here? Am I misunderstanding something?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/ingx32
πŸ“…︎ Mar 25 2021
How do I measure the predictive value of a linear mixed effects model?

I've used multiple linear regression in the past and often evaluated the predictive value of my model using R2 or adjusted R2. Is there an equivalent to this for a linear mixed effects model? I tried searching the literature but there doesn't seem to be a good consensus.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/premed4
πŸ“…︎ Mar 13 2021
Predictive Coding Can Do Exact Backpropagation on Any Neural Network arxiv.org/abs/2103.04689
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/010011000111
πŸ“…︎ Jun 04 2021
[R] Predictive Coding Approximates Backprop along Arbitrary Computation Graphs

https://openreview.net/forum?id=PdauS7wZBfC

Abstract: The backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. Recently it has been shown that backprop in multilayer-perceptrons (MLPs) can be approximated using predictive coding, a biologically-plausible process theory of cortical computation which relies solely on local and Hebbian updates. The power of backprop, however, lies not in its instantiation in MLPs, but rather in the concept of automatic differentiation which allows for the optimisation of any differentiable program expressed as a computation graph. Here, we demonstrate that predictive coding converges asymptotically (and in practice rapidly) to exact backprop gradients on arbitrary computation graphs using only local learning rules. We apply this result to develop a straightforward strategy to translate core machine learning architectures into their predictive coding equivalents. We construct predictive coding CNNs, RNNs, and the more complex LSTMs, which include a non-layer-like branching internal graph structure and multiplicative interactions. Our models perform equivalently to backprop on challenging machine learning benchmarks, while utilising only local and (mostly) Hebbian plasticity. Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry, and may also contribute to the development of completely distributed neuromorphic architectures.

πŸ‘︎ 82
πŸ’¬︎
πŸ‘€︎ u/japanhue
πŸ“…︎ Oct 05 2020
[R] Predictive coding is a consequence of energy efficiency in recurrent neural networks biorxiv.org/content/10.11…
πŸ‘︎ 59
πŸ’¬︎
πŸ‘€︎ u/hardmaru
πŸ“…︎ Feb 17 2021
Loren Lugosch: "Predictive coding in machines and brains" (blog post, 2020)

https://lorenlugosch.github.io/posts/2020/07/predictive-coding/

Introduction to predictive coding / predictive processing coming from the signal theory / engineering side.

Headings:

  • Predictive coding for data compression
  • Predictive coding for representation learning
  • Predictive coding for computational efficiency
  • Predictive coding in the brain
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Daniel_HMBD
πŸ“…︎ Apr 03 2021
