Visualising convolutions to explain the Central Limit Theorem [D] [E]

There have been many excellent articles and videos about the Central Limit Theorem (CLT). But besides formal proofs, I was not able to find a good intuitive explanation for why it is true.

Here is my attempt to provide an intuitive explanation using convolutions:

https://www.cantorsparadise.com/the-central-limit-theorem-why-is-it-so-2ae93edf6e8?sk=d531b2503b70e5eaedb7d75040c0b325
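To see the idea numerically, here is a minimal Julia sketch of my own (not code from the article), using DSP.jl: convolving a distribution with itself a handful of times already produces the Gaussian bell shape.

    using DSP, Plots

    p = fill(1/6, 6)     # pmf of a single fair die roll
    # pmf of the sum of 10 rolls: convolve p with itself 9 more times
    q = foldl((acc, _) -> DSP.conv(acc, p), 1:9; init = p)
    plot(q)              # already visibly bell-shaped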

Check it out if you are interested, and I would love to hear your feedback. I know that many people in this sub have much more stats knowledge than I do!

Thanks :)

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/mathsTeacher82
πŸ“…︎ Aug 12 2021
🚨︎ report
How to prove the convolution theorem in Julia?

I've been trying to prove the convolution theorem in Julia, but without success. Say I have two functions f1 = sin(30πt) and f2 = sin(10πt). Then I have the following code:

using FFTW, DSP, Plots   # fft/ifft come from FFTW

fconv_fda = ifft(fft(f1) .* fft(f2))        # frequency-domain product (circular convolution)
fconv_tda = DSP.conv(f1, f2)[1:length(f1)]  # time-domain linear convolution, truncated
plot(fconv_tda)
plot!(real.(fconv_fda))

But the two plots turn out different. What am I doing wrong here?
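For what it's worth, the likely culprit: ifft(fft(f1) .* fft(f2)) computes a circular convolution of length N, while DSP.conv(f1, f2) computes a linear convolution of length 2N - 1, so the first N samples differ by wrap-around. A minimal sketch of the fix, assuming a sampling grid t = range(0, 1; length = 256) (the post doesn't specify one): zero-pad both signals to the full linear length before transforming.

    using FFTW, DSP

    t  = range(0, 1; length = 256)           # assumed sampling grid
    f1 = sin.(30π .* t)
    f2 = sin.(10π .* t)

    N = length(f1) + length(f2) - 1          # full linear-convolution length
    pad(x) = vcat(x, zeros(N - length(x)))   # zero-pad to length N

    fconv_fda = real.(ifft(fft(pad(f1)) .* fft(pad(f2))))
    fconv_tda = DSP.conv(f1, f2)

    maximum(abs.(fconv_fda .- fconv_tda))    # ≈ 0 up to round-off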

πŸ‘︎ 8
πŸ’¬︎
πŸ“…︎ Apr 28 2021
🚨︎ report
Did my textbook skip a step? [Diff Eq - Convolution Theorem]

I don't see how to logically do this integral, especially without any work shown.

Here's the book example, and here's my work:

https://imgur.com/EdMWWUK

https://imgur.com/JQ8crcV

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/jaescott
πŸ“…︎ Nov 18 2019
🚨︎ report
[Inverse Laplace transform] Evaluate an inverse Laplace transform using the convolution theorem

Here is the problem, and here is what I have done. I'm not sure where to go, because it's not coming out how I think it should.
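For reference, here is the convolution theorem for Laplace transforms together with a generic worked instance; the actual problem isn't shown, so the example below is mine and purely illustrative:

    L{f * g}(s) = F(s) G(s),  where  (f * g)(t) = \int_0^t f(\tau) g(t - \tau) d\tau.

    Example: to invert 1/(s(s^2 + 1)), take F(s) = 1/s (so f(t) = 1) and
    G(s) = 1/(s^2 + 1) (so g(t) = sin t); then
    L^{-1}{ 1/(s(s^2 + 1)) } = \int_0^t sin(\tau) d\tau = 1 - cos(t).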

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/circaking
πŸ“…︎ Mar 07 2019
🚨︎ report
What are convolutions and the convolution theorem?

And how do they relate to Laplace transforms?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/jflow2
πŸ“…︎ Feb 10 2018
🚨︎ report
A convolution theorem for the Fourier-Sato transform?

For two (sufficiently nice) real-valued functions f and g on a finite dimensional real vector space E with convolution f * g, we have the so-called convolution theorem:

F{f * g} = F{f} · F{g}

where F{f} is the Fourier transform of f (a function on the dual space E^*), and the product on the right is the pointwise product.

In Kashiwara and Schapira's book Sheaves on Manifolds, they define the "Fourier-Sato transform" for (R^+ -conic, complexes of) sheaves (of A-modules, for a suitably nice ring A) on the vector space E as follows:

  • Denote by p_1 and p_2 the first and second projections from E x E^* (where E^* is the dual of E), and set

D^- = { (x,y) in E x E^* | <x,y> <= 0 }, where <x,y> denotes the canonical pairing between E and E^*.

  • For an object G of the derived category D_{R^+ }^+ (E), the Fourier-Sato transform of G is defined to be the composition

FS(G) := Rp_{2!}( (p_1^{-1} G)_{D^-} )

and FS(G) is now an object of the derived category D_{R^+ }^+ (E^* ) of complexes on the dual space E^* .

Also in Sheaves on Manifolds, they define a "convolution" operation on Ob(D^+ (E)) by setting F * G := Rs_!( F ⊠^L G ), where ⊠^L is the left derived functor of the external tensor product ⊠ on E x E, and s : E x E -> E is the addition map (x,y) |-> x + y.

tl;dr: Does anyone know if there is a corresponding convolution theorem for the Fourier-Sato transform?

There aren't too many references for the Fourier-Sato transform (as in, I only know of Sheaves on Manifolds), and I haven't seen this mentioned anywhere. It doesn't seem like it'd be THAT hard to prove, if it's true at all (famous last words!). Thoughts?

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/fuckyourcalculus
πŸ“…︎ Jul 28 2014
🚨︎ report
A diversion: convoluted theorems

Here's a little game I thought up, similar to the whole "complicated proof of a simple statement" game.

State a well-known theorem in terms much more complicated than necessary to introduce the theorem. Then, the goal for others is to try to figure out what the theorem is. For example:

The free commutative monoid on one generator admits a semiring structure whose multiplicative monoid is the free commutative monoid on a countably infinite set of generators.

πŸ‘︎ 59
πŸ’¬︎
πŸ‘€︎ u/whirligig231
πŸ“…︎ Aug 18 2015
🚨︎ report
PhD Dissertations and Imposter Syndrome (Rant / Venting)

Sometimes, even good news isn't quite good enough.

I've posted several times here about some of the peculiar difficulties I've dealt with in my journey toward a PhD in mathematics, either in threads, or as submitted posts like this one. The central problem I've been dealing with is that my research topic, the Collatz Conjecture, and the tools I've been using to study it (harmonic analysis, analytic number theory, and, most recently, non-archimedean (functional) analysis) are all completely outside the purview of the expertise of my university's mathematical faculty.

For most of my time in graduate school, the most troubling manifestation of this problem was that I wasn't certain I would be able to get a PhD, seeing as there was no one in my orbit capable of rendering judgment on the merit of my work. The agreement I'd reached with my department was that if I could get something of mine published in a reputable journal, that would suffice as the "expert approval" needed to justify conferring upon me the doctoral degree that I've been working toward all this time. Unfortunately, my attempts to get myself published have not met with success, and, certainly, the backlog of excess submissions caused by the pandemic has only made matters worse.

In mid November 2021, however, I received some truly wonderful news: my department decided that they would not require me to get something published. They will accept whatever original work I have done.

Although I have no evidence for this, given the way the head of graduate studies phrased the message, I have a strong suspicion that when I informed my advisor I had independently rediscovered a good deal of the contents of W. M. Schikhof's PhD dissertation (Non-Archimedean Harmonic Analysis, 1967), that was what convinced them that I was worthy of their gracious leap of faith.

While this news has definitely taken a great deal of stress off my shoulders, me being me (that is, obsessive), I've found a new, daunting psychological difficulty to nail onto my skull:

I'm worried that I don't deserve it, because I haven't done enough.

The positives:

  1. I know for a fact that my work is cutting-edge, insofar as novelty goes. The only major antecedent work in a comparable vein that I can point to is Tao's 2019 paper on the Collatz Conjecture, but, even then, the similarity is only in the fact that our approaches share essentially the same central object of study; otherwise, they couldn't be more different. His take is
... keep reading on reddit ➡

πŸ‘︎ 30
πŸ’¬︎
πŸ‘€︎ u/Aurhim
πŸ“…︎ Jan 10 2022
🚨︎ report
Do I finally understand Bell's Theorem? Critique my 'for fun' paper "A Convoluted View of Reality: An Explanation of How John Bell Performed the Ultimate Nerd Snipe". drive.google.com/file/d/0…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/Bmhowe34
πŸ“…︎ Dec 23 2014
🚨︎ report
Fourier Transform for Products

I was wondering: if the Fourier transform decomposes a function into a sum of sinusoids, is there a transform which decomposes a function into a product of sinusoids? Perhaps the convolution theorem can be used here, but I'm not sure how.
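For reference, the convolution theorem and its dual form, which is what the question is circling around (the 1/(2\pi) factor depends on the Fourier convention used):

    F{f * g}(\omega) = F{f}(\omega) F{g}(\omega)
    F{f g}(\omega) = (1/(2\pi)) (F{f} * F{g})(\omega)

So multiplying signals in time convolves their spectra, which is why a product of sinusoids shows up as sum and difference frequencies: sin(a) sin(b) = [cos(a - b) - cos(a + b)] / 2.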

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/MimirYT
πŸ“…︎ Dec 27 2021
🚨︎ report
Theoretical Computer Science Subjects.

Hey, I will be applying for summer 2022 at TU Berlin for their master's in computer science. Just wanna know which of these subjects qualify as theoretical CS subjects:

  1. Theory of Computation: This course will focus on the inherent capabilities and limitations of mathematical models of computation, and their relationships with formal languages. Rigorous arguments and proofs of correctness will be emphasized. Particular topics to be covered include: • Finite automata, regular languages, regular grammars • Deterministic and nondeterministic automata • Context free grammars, languages, pushdown-automata • Turing machines, Church's thesis, undecidable problems • NP completeness

  2. Machine Learning: Machine Learning is concerned with computer programs that automatically improve their performance through experience. Topics such as Bayesian networks, decision tree learning, support vector machines, statistical learning methods, unsupervised learning and reinforcement learning would be discussed in this course. Theoretical concepts such as inductive bias, the PAC (probably approximately correct) learning framework, Bayesian learning methods and margin-based learning would be discussed in the course.

  3. AI: In this course, we will study the most fundamental knowledge for understanding AI. We will introduce some basic search algorithms for problem solving; knowledge representation and reasoning; pattern recognition; fuzzy logic; neural networks and genetic algorithms. The latter three synergistically form Soft Computing, which happens to be an important component of AI.

  4. Algebra and differential equations: Differential equations of first order: Variables separable form or reducible to variables separable form, Exact Differential Equations, Linear Differential equations of first order, Ordinary Linear Differential Equations of higher order with constant coefficients. Homogeneous and Nonhomogeneous equations; Methods of variation of Parameters and undetermined coefficients; Euler's Equations.

  • Laplace transforms, inverse transforms, transforms of derivatives and integrals, unit function, step functions, convolution of functions, partial fractions, applications to the solution of ordinary differential equations and systems of differential equations.
  • Sets, Algebra of sets; Binary operation on a set. Groups, Subgroups, Rings and Fields. (Only definitions and some important properties n
... keep reading on reddit ➡

πŸ‘︎ 3
πŸ’¬︎
πŸ“…︎ Nov 29 2021
🚨︎ report
SERIOUS: This subreddit needs to understand what a "dad joke" really means.

I don't want to step on anybody's toes here, but the amount of non-dad jokes here in this subreddit really annoys me. First of all, dad jokes CAN be NSFW, it clearly says so in the sub rules. Secondly, it doesn't automatically make it a dad joke if it's from a conversation between you and your child. Most importantly, the jokes that your CHILDREN tell YOU are not dad jokes. The point of a dad joke is that it's so cheesy only a dad who's trying to be funny would make such a joke. That's it. They are stupid plays on words, lame puns and so on. There has to be a clever pun or wordplay for it to be considered a dad joke.

Again, to all the fellow dads, I apologise if I'm sounding too harsh. But I just needed to get it off my chest.

πŸ‘︎ 17k
πŸ’¬︎
πŸ‘€︎ u/anywhereiroa
πŸ“…︎ Jan 15 2022
🚨︎ report
[D] ICLR2022 review stats

I crawled the ICLR2022 preliminary reviews with some help from another repo and uploaded the crawled raw data (crawled today around 2 PM UTC+1). You can also find some quick stats like

  • distribution of mean scores, etc.
  • best paper by mean/median score
  • most controversial paper by std of scores

in the following notebook:

https://github.com/VietTralala/ICLR2022-OpenReviewData/blob/master/analyze_reviews.ipynb
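As a small illustration of the kind of aggregation involved (my own Julia sketch with scores hard-coded from the excerpt below; the linked notebook itself is Python):

    using Statistics

    scores = Dict(
        "LdlwbBP2mlq" => [8, 8, 8],   # per-paper review scores
        "BrPdX1bDZkQ" => [8, 8, 6],
    )
    for (id, s) in scores
        println(id, ": mean=", mean(s), " median=", median(s),
                " std=", std(s; corrected = false))
    end

The uncorrected (population) standard deviation reproduces the 0.942809 in the table below.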

Feel free to play around with it ✌

Excerpt of the data

best 10 papers by median score

| paper_id | title | link | keywords | mean | max | min | std | median | num |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LdlwbBP2mlq | Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond | https://openreview.net/forum?id=LdlwbBP2mlq | Local SGD, Minibatch SGD, Shuffling, Without-replacement, Convex Optimization, Stochastic Optimization, Federated Learning, Large Scale Learning, Distributed Learning | 8 | 8 | 8 | 0 | 8 | 3 |
| iMSjopcOn0p | MT3: Multi-Task Multitrack Music Transcription | https://openreview.net/forum?id=iMSjopcOn0p | music transcription, transformer, multi-task learning, low resource learning, music understanding, music information retrieval | 8 | 8 | 8 | 0 | 8 | 4 |
| BrPdX1bDZkQ | DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations | https://openreview.net/forum?id=BrPdX1bDZkQ | imitation learning, offline imitation learning, imperfect demonstration, non-expert demonstration | 7.33333 | 8 | 6 | 0.942809 | 8 | 3 |

sOK-zS6WHB Responsible Disclosure of Gene
... keep reading on reddit ➡

πŸ‘︎ 33
πŸ’¬︎
πŸ‘€︎ u/roVinchi
πŸ“…︎ Nov 09 2021
🚨︎ report
[D] Geometric Deep Learning Blueprint (Video on MLST)

YT: https://youtu.be/bIZB1hIJ4u8

Pod: https://anchor.fm/machinelearningstreettalk/episodes/60-Geometric-Deep-Learning-Blueprint-Special-Edition-e17i495

"Symmetry, as wide or narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty, and perfection." and that was a quote from Hermann Weyl, a German mathematician who was born in the late 19th century.

Hey folks. Hope you don't mind me posting this here. I have stopped posting MLST stuff on this reddit, but I feel that this one in particular is pretty technical and of an academic nature and is relevant.

We spoke with Professor Michael Bronstein (head of graph ML at Twitter) and Dr. Petar Veličković (Senior Research Scientist at DeepMind), and Dr. Taco Cohen and Prof. Joan Bruna about their new proto-book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges.

There is a long list of references given in the YouTube comments.

Hope you enjoy!

TOC:

[00:00:00] Tim Intro
[00:01:55] Fabian Fuchs article
[00:04:05] High dimensional learning and curse
[00:05:33] Inductive priors
[00:07:55] The proto book
[00:09:37] The domains of geometric deep learning
[00:10:03] Symmetries
[00:12:03] The blueprint
[00:13:30] NNs don't deal with network structure (TedX)
[00:14:26] Penrose - standing edition
[00:15:29] Past decade revolution (ICLR)
[00:16:34] Talking about the blueprint
[00:17:11] Interpolated nature of DL / intelligence
[00:21:29] Going back to Euclid
[00:22:42] Erlangen program
[00:24:56] "How is geometric deep learning going to have an impact"
[00:26:36] Introduce Michael and Petar
[00:28:35] Petar Intro
[00:32:52] Algorithmic reasoning
[00:36:16] Thinking fast and slow (Petar)
[00:38:12] Taco Intro
[00:46:52] Deep learning is the craze now (Petar)
[00:48:38] On convolutions (Taco)
[00:53:17] Joan Bruna's voyage into geometric deep learning
[00:56:51] What is your most passionately held belief about machine learning? (Bronstein)
[00:57:57] Is the function approximation theorem still useful? (Bruna)
[01:11:52] Could an NN learn a sorting algorithm efficiently? (Bruna)
[01:17:08] Curse of dimensionality / manifold hypothesis (Bronstein)
[01:25:17] Will we ever understand approximation of d

... keep reading on reddit ➡

πŸ‘︎ 72
πŸ’¬︎
πŸ‘€︎ u/timscarfe
πŸ“…︎ Sep 19 2021
🚨︎ report
Guide: How to get into stochastic analysis

Foreword:

I've seen quite a lot of comments and posts here, along with friends asking how to get into stochastic analysis and probability theory in general, so I thought I would write a guide as to how to most effectively get into the subject, including book recommendations.

In my biased opinion, stochastic analysis is an extremely deep and beautiful field. This is basically what I wish someone had written for me when first getting into the subject. Being honest, I would love to see guides like this for other subjects too. For me personally, that's PDE and geometric analysis.

Of course this will not cover nearly all areas of modern stochastic analysis, but it should get you to the point of being able to read a good portion of current papers on the arXiv, ask and answer interesting questions, and to jump into further topics if desired.

The first two parts should be done more or less in order, but the further topics can be explored in any order desired. These are at the boundary of my current knowledge, though, so I might not have the best resources/plan here myself; some of it will be more recommendations of potential topics to explore rather than a full guide. So, here goes!

Preliminaries:

I'll assume you have a decent understanding of pre-calc, early calculus, and linear algebra. Khan Academy is probably the canonical resource here for the first two. For linear algebra, I like Strang's Introduction to Linear Algebra.

  1. First off, you should have a good grasp of undergraduate level real analysis. For those of you familiar with the book, this means the equivalent of most of Rudin's Principles of Mathematical Analysis, popularly known as Baby Rudin. But I don't think this is the best book to learn from, especially if you're learning for the first time. A good series of books for this are Tao's Analysis 1 and Analysis 2. For a lighter introduction, one could use Abbott's Understanding Analysis, which is very friendly, although it doesn't really cover enough of what you need to know from undergrad analysis, so you should supplement with other books. Morgan's Real Analysis is a good alternative to Tao's books.

  2. Next up is measure theory. Modern probability theory is built entirely on measure theory, so you would really want to know this well. Later on, stochastic processes and stochastic calculus take this theory to its very limits, so the technical parts you learn here will surely not be wasted. There

... keep reading on reddit ➡

πŸ‘︎ 217
πŸ’¬︎
πŸ‘€︎ u/PaboBormot
πŸ“…︎ Oct 07 2021
🚨︎ report
What can be approached as a formal system?

Hello all,

I want to preface that I am a biochemist with little to no training in logic. I stumbled by pure chance upon a curious philosophical work from 1937 (!) called 'The Axiomatic Method in Biology' by one J. H. Woodger. In summary, it uses the language of R&W's Principia Mathematica to create "a biological axiom-system" to prove certain theorems about Mendelian theory and a/sexual division (or at least, that's the gist I get from skimming it).

It looks to be a serious work and a forerunner in the application of logic to biological theory - even if it is an unpleasant read. A quick search online for its influence shows there is some very niche work on axiomatizing genetics, but I do not know how to read the papers well enough to understand them.

Overall, it's intriguing that I have never heard of anyone trying to organize biology (or some subset like genetics) by formalizing it or building it from the ground-up, so to speak. I don't want to shun the idea outright, but I don't know how suitable a tool modern logic is for this kind of task.

Where else has logic found an application? I see that game theory and probability theory have axiomatic treatments, but what else? Do you think it's a misplaced use of logic? If not, how do you figure the best way to approach a convoluted topic (such as biology!) with a tool so neat and well-behaved?

πŸ‘︎ 20
πŸ’¬︎
πŸ‘€︎ u/Valetudinarian
πŸ“…︎ Dec 04 2021
🚨︎ report
Blind Girl Here. Give Me Your Best Blind Jokes!

Do your worst!

πŸ‘︎ 5k
πŸ’¬︎
πŸ‘€︎ u/Leckzsluthor
πŸ“…︎ Jan 02 2022
🚨︎ report
This subreddit is 10 years old now.

I'm surprised it hasn't decade.

πŸ‘︎ 14k
πŸ’¬︎
πŸ‘€︎ u/frexyincdude
πŸ“…︎ Jan 14 2022
🚨︎ report
Dropped my best ever dad joke & no one was around to hear it

For context I'm a Refuse Driver (Garbage man) & today I was on food waste. After I'd tipped I was checking the wagon for any defects when I spotted a lone pea balanced on the lifts.

I said "hey look, an escaPEA"

No one near me but it didn't half make me laugh for a good hour or so!

Edit: I can't believe how much this has blown up. Thank you everyone, I've had a blast reading through the replies 😂

πŸ‘︎ 20k
πŸ’¬︎
πŸ‘€︎ u/Vegetable-Acadia
πŸ“…︎ Jan 11 2022
🚨︎ report
What starts with a W and ends with a T

It really does, I swear!

πŸ‘︎ 6k
πŸ’¬︎
πŸ‘€︎ u/PsychedeIic_Sheep
πŸ“…︎ Jan 13 2022
🚨︎ report
Why did Karen press Ctrl+Shift+Delete?

Because she wanted to see the task manager.

πŸ‘︎ 11k
πŸ’¬︎
πŸ‘€︎ u/Eoussama
πŸ“…︎ Jan 17 2022
🚨︎ report
How to shuffle a deck of cards well

Hello, this is a simple question I've been scratching my head over for about a decade. Here it goes:

  • A shuffling strategy for a deck of N cards is a probability distribution over permutations on N elements. You apply the strategy by sampling the distribution and applying the permutation on the deck.

  • A strategy is fair if after k successive independent applications to any initial state, for k -> inf the distribution converges to the uniform distribution over all permutations.

  • Which strategies are fair?

I must have asked around a thousand math-smart people and tried to attack this myself on numerous occasions but I've always come up short. I've picked this problem back up recently and finally I was able to make some progress after learning about Fourier transforms on finite groups. I think what I have in my hands sounds like a correct solution, though the proof is kinda clunky and betrays my lack of knowledge in this subject. I'm posting this in case you can maybe spot some mistakes or shortcuts. Or whether this is actually a well known result which I've never known how to google.

Proposed Solution

  • A strategy with distribution p(g) is fair iff supp(p)^k = S_N eventually in k.

Note the condition only states that for large enough k, you can build any permutation as the product of exactly k elements of supp(p), and it is thus a stronger condition than supp(p) generating all permutations, which only means any permutation is built as a product of supp(p) elements, but each permutation may use a different number of factors. For example, the swap {u} in S_2 = {1,u} does generate the whole of S_2, but no power {u}^k is the whole of S_2 by itself, because they alternate {u},{1},{u},... and indeed, the strategy of always choosing to swap isn't really fair.

However, note also that if supp(p) generates S_N and also p(1) > 0, then the condition is true. For example, my claim is that with a 52 card deck, you could do something as stilted-looking as this: roll 11 dice (even unfair ones), compute m as the sum of all values minus 11, and if m is less than 51, swap the m-th card with the next one, and if it's 52 or larger, do nothing. Since those swaps generate the whole S_52, and the identity is always possible, this strategy would actually be fair, even if the probabilities it assigns to each operation are completely bonkers, and successive iterations will converge to a uniform shuffle (though it would definitely take a little while).
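The criterion is easy to sanity-check by brute force for tiny decks. Here is a minimal Julia sketch of my own (assumptions: N = 3, support = the identity plus the two adjacent transpositions, permutations written in one-line notation):

    # Grow supp(p)^k and watch it fill S_N.
    compose(p, q) = p[q]                     # (p ∘ q): apply q first, then p

    function closure_growth(support, N, kmax)
        S = Set([collect(1:N)])              # supp(p)^0 = {identity}
        for k in 1:kmax
            S = Set(compose(p, q) for p in support for q in S)
            println("k = $k: |supp(p)^k| = ", length(S), " of ", factorial(N))
        end
    end

    closure_growth([[1, 2, 3], [2, 1, 3], [1, 3, 2]], 3, 4)   # hits all 6 at k = 3

Since the identity is in the support, supp(p)^k only ever grows with k, matching the remark above about p(1) > 0.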

T

... keep reading on reddit ➡

πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/cancrizans
πŸ“…︎ Sep 25 2021
🚨︎ report
What is a bisexual person doing when they're not dating anybody?

They're on standbi

πŸ‘︎ 11k
πŸ’¬︎
πŸ‘€︎ u/Toby-the-Cactus
πŸ“…︎ Jan 12 2022
🚨︎ report
A "revolution" in PDE, distributional calculus, and Sato's hyperfunctions

In this post I want to talk about one of the basic ideas in the analysis of PDE: weak solutions.

(This post grew out of a Quick Question I had, and didn't get a response to beyond u/grothendieck1 responding that they had the same question. It's also something of a response to the recent discussion about whether algebraic geometry is particularly prone to revolutions -- the use of low-regularity solutions seems like a comparable development in the analysis of PDE.)

When one studies differential equations it quickly becomes apparent that restricting to analytic or even smooth functions is just not feasible. A few examples:

  • Pretending that the Dirac delta "function" is actually a function allows us to at least formally solve many linear PDE. If P is a linear differential operator and PK = \delta, then the Fourier transform of the equation PK = \delta is an algebraic equation that can be solved explicitly for the Fourier transform of K. Then, if u is the convolution u = K * f, then Pu = f (spelled out just after this list).
  • A smooth function u solves the Euler-Lagrange equations for some action I subject to some boundary data v iff u is a minimizer of I with respect to the constraint v. A priori there is no reason for a minimizer to be smooth, however, so the Euler-Lagrange equations may make no sense in general. Hilbert's 19th problem implies that minimizers are analytic subject to certain constraints on I. The hard part of the solution is de Giorgi-Nash-Moser theory, which says that if u is a Lipschitz minimizer and I is "regular" then \nabla u has a positive Hölder exponent -- once we get past the threshold of Hölder gradients, the rest is not bad.
  • Stochastic PDE allow one to model systems with white noise, which is the derivative of Brownian motion and therefore about as far from a smooth function as possible (in fact its Hölder exponent is < 0.5).
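Concretely, for the first bullet, writing P(i\xi) for the symbol of P and ignoring normalizations:

    \widehat{PK}(\xi) = P(i\xi) \widehat{K}(\xi) = \widehat{\delta}(\xi) = 1,  so  \widehat{K}(\xi) = 1 / P(i\xi),

and then u = K * f has \widehat{u} = \widehat{K} \widehat{f} = \widehat{f} / P(i\xi), hence Pu = f by the convolution theorem.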

Owing to these considerations and others, one is led to the notion of weak solution, a function that satisfies an integral version of a PDE. For example, u solves the (EDIT: inhomogeneous) Laplace equation \Delta u = f iff for every smooth function \psi of compact support (briefly, every test function \psi), the integral of \nabla u \cdot \nabla \psi equals the integral of -f \psi (spelled out just after this paragraph). If a weak solution is smooth, then it honestly satisfies the PDE, so weak solutions are generalizations of "strong" (i.e. smooth) solutions. (EDIT: The theory of weak solutions is largely due to Sobolev's introduction of Sobolev spaces in the 1930s. This motivated the need t
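Spelled out, the weak formulation comes from multiplying \Delta u = f by a test function \psi and integrating by parts once:

    \int \nabla u \cdot \nabla \psi \, dx = - \int f \psi \, dx  for every test function \psi.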

... keep reading on reddit ➡

πŸ‘︎ 550
πŸ’¬︎
πŸ‘€︎ u/catuse
πŸ“…︎ May 29 2021
🚨︎ report
Geddit? No? Only me?
πŸ‘︎ 6k
πŸ’¬︎
πŸ‘€︎ u/shampy311
πŸ“…︎ Dec 28 2021
🚨︎ report
I wanna hear your best airplane puns.

Pilot on me!!

πŸ‘︎ 3k
πŸ’¬︎
πŸ‘€︎ u/Paulie_Felice
πŸ“…︎ Jan 07 2022
🚨︎ report
E or ß?
πŸ‘︎ 9k
πŸ’¬︎
πŸ‘€︎ u/Amazekam
πŸ“…︎ Jan 03 2022
🚨︎ report
What did Spartacus say when the lion ate his wife?

Nothing, he was gladiator.

πŸ‘︎ 9k
πŸ’¬︎
πŸ‘€︎ u/rj104
πŸ“…︎ Jan 15 2022
🚨︎ report
Pun intended.
πŸ‘︎ 5k
πŸ’¬︎
πŸ‘€︎ u/Sharmaji1301
πŸ“…︎ Jan 15 2022
🚨︎ report
No spoilers
πŸ‘︎ 9k
πŸ’¬︎
πŸ‘€︎ u/Onfour
πŸ“…︎ Jan 06 2022
🚨︎ report
Covid problems
πŸ‘︎ 7k
πŸ’¬︎
πŸ‘€︎ u/theincrediblebou
πŸ“…︎ Jan 12 2022
🚨︎ report
These aren't dad jokes...

Dad jokes are supposed to be jokes you can tell a kid and they will understand it and find it funny.

This sub is mostly just NSFW puns now.

If it needs a NSFW tag it's not a dad joke. There should just be a NSFW puns subreddit for that.

Edit: I'm not replying any longer and turning off notifications, but to all those that say "no one cares", there sure are a lot of you arguing about it. Maybe I'm wrong, but you people don't need to be rude about it. If you really don't care, don't comment.

πŸ‘︎ 12k
πŸ’¬︎
πŸ‘€︎ u/Lance986
πŸ“…︎ Dec 15 2021
🚨︎ report
[D] What is the name of this theorem in ML?

I recently discovered something called the Convolution Theorem. I was impressed by its rigor, its elegance, and the wide range of its applications. (I felt slightly guilty that I did not find it earlier in my life.)

A similar event happened to me recently. I was watching a lecture by Ilya Sutskever. Sutskever describes a theorem, but does not give its name.

> One fact, that's actually a fact. It's a mathematical theorem that you can prove. If you could find the shortest program that does very well on {fitting} your data, then you will achieve the best generalization that is possible. With a little bit of modification, you can turn it into a precise theorem. On a very intuitive level, it's easy to see why it should be the case. If you have data, and you are able to find a shorter program that generates this data, then you have successfully extracted all conceivable regularity in the data into this program. Then you can use this object to make the best predictions possible. If you have data that is so complex, that there is no way to express it as a shorter program, then it means that your data is totally random. There is no way to extract any regularity from it whatsoever.

Does anyone know what this theorem is called?

πŸ‘︎ 12
πŸ’¬︎
πŸ‘€︎ u/moschles
πŸ“…︎ Aug 29 2021
🚨︎ report
