A list of puns related to "Convolution Theorem"
There have been many excellent articles and videos about the Central Limit Theorem (CLT). But besides formal proofs, I was not able to find a good intuitive explanation for why it is true.
Here is my attempt to provide an intuitive explanation using convolutions:
Check it out if you are interested, and I would love to hear your feedback. I know that many people in this sub have much more stats knowledge than myself!
Thanks :)
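For anyone who wants to poke at the convolution idea directly before watching: the density of a sum of independent variables is the convolution of their densities, so the CLT amounts to repeated self-convolution turning essentially any density bell-shaped. A minimal sketch of that (my own illustration, using the DSP and Plots packages):

using DSP, Plots

function clt_demo(n)
    p = fill(1/10, 10)         # discrete uniform density on 10 points
    q = p
    for _ in 2:n
        q = DSP.conv(q, p)     # density of the sum after adding one more copy
    end
    return q
end

plot(clt_demo(30))             # visibly Gaussian-shaped after 30 convolutions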
I've been trying to verify the convolution theorem in Julia, but without success. Say I have two functions f1 = sin(30πt) & f2 = sin(10πt). Then I have the following code:
using FFTW, DSP, Plots
t = range(0, 1, length=256)
f1 = sin.(30π .* t)
f2 = sin.(10π .* t)
fconv_fda = ifft(fft(f1) .* fft(f2))        # frequency domain: product of FFTs
fconv_tda = DSP.conv(f1, f2)[1:length(f1)]  # time domain: direct convolution
plot(fconv_tda)
plot!(real.(fconv_fda))
But both the plots turn out to be different. What am I doing wrong here?
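For anyone hitting the same wall, as far as I can tell the mismatch is expected: multiplying the length-N FFTs computes the circular convolution, while DSP.conv computes the linear one. Zero-padding both signals to length 2N-1 before transforming reconciles the two. A minimal sketch, reusing f1 and f2 from the code above:

N = length(f1)
L = 2N - 1                                  # length of the linear convolution
pad(x) = vcat(x, zeros(L - length(x)))      # zero-pad up to length L
fconv_padded = real.(ifft(fft(pad(f1)) .* fft(pad(f2))))
maximum(abs.(fconv_padded .- DSP.conv(f1, f2)))   # ≈ 0 up to round-off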
I don't see how to logically do this integral, especially without any work shown.
Here's the book example and here's my work:
https://imgur.com/EdMWWUK
https://imgur.com/JQ8crcV
And how is it related to Laplace transforms?
For two (sufficiently nice) real-valued functions f and g on a finite-dimensional real vector space E with convolution f * g, we have the so-called convolution theorem:
F{f * g} = F{f} · F{g}
where F{f} is the Fourier transform of f (a function on the dual space E^*, defined via the standard pairing <-,-> between E and E^*), and the right-hand side is the pointwise product.
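Spelled out with one common normalization (constants differ by convention), for \xi in the dual space E^*:

F{f}(\xi) = \int_E f(x) e^{-i<x,\xi>} dx, and then F{f * g}(\xi) = F{f}(\xi) F{g}(\xi).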
In Kashiwara and Schapira's book Sheaves on Manifolds, they define the "Fourier-Sato transform" for (R^+ -conic, complexes of) sheaves (of A-modules, for a suitably nice ring A) on the vector space E as follows:
D^- = {(x,y) in E×E^* | <x,y> <= 0},
FS(G) := Rp2_!((p1^{-1} G)_{D^-}),
and FS(G) is now an object of the derived category D^+_{R^+}(E^*) of conic complexes on the dual space E^*.
Also in Sheaves on Manifolds, they define a "convolution" operation on Ob(D^+(E)) by setting F * G := Rs_!(F [x]^L G), where [x]^L is the derived external tensor product on E×E and s: E×E -> E is the map (x,y) |-> x+y.
tl;dr: Does anyone know if there is a corresponding convolution theorem for the Fourier-Sato transform?
There aren't too many references for the Fourier-Sato transform (as in, I only know of Sheaves on Manifolds), and I haven't seen this mentioned anywhere. It doesn't seem like it'd be THAT hard to prove, if it's true at all (famous last words!). Thoughts?
Here's a little game I thought up, similar to the whole "complicated proof of a simple statement" game.
State a well-known theorem in terms much more complicated than necessary to introduce the theorem. Then, the goal for others is to try to figure out what the theorem is. For example:
The free commutative monoid on one generator admits a semiring structure whose multiplicative monoid is the free commutative monoid on a countably infinite set of generators.
Sometimes, even good news isn't quite good enough.
I've posted several times here about some of the peculiar difficulties I've dealt with in my journey toward a PhD in mathematics, either in threads or as submitted posts like this one. The central problem I've been dealing with is that my research topic, the Collatz Conjecture, and the tools I've been using to study it (harmonic analysis, analytic number theory, and, most recently, non-archimedean (functional) analysis) are all completely outside the expertise of my university's mathematics faculty.
For most of my time in graduate school, the most troubling manifestation of this problem was that I wasn't certain I would be able to get a PhD, seeing as there was no one in my orbit capable of rendering judgment on the merit of my work. The agreement I'd reached with my department was that if I could get something of mine published in a reputable journal, that would suffice as the "expert approval" needed to justify conferring upon me the doctoral degree I've been working toward all this time. Unfortunately, my attempts to get published have not met with success; and certainly the backlog of excess submissions caused by the pandemic has only made matters worse.
In mid-November 2021, however, I received some truly wonderful news: my department decided that they would not require me to get something published. They will accept whatever original work I have done.
Although I have no evidence for this, given the way the head of graduate studies phrased the message, I have a strong suspicion about what convinced them I was worthy of their gracious leap of faith: my informing my advisor that I had independently rediscovered a good deal of the contents of W. M. Schikhof's PhD dissertation (Non-Archimedean Harmonic Analysis, 1967).
While this news has definitely taken a great deal of stress off my shoulders, me being me (that is, obsessive), I've found a new, daunting psychological difficulty to nail onto my skull:
I'm worried that I don't deserve it, because I haven't done enough.
The positives:
I was wondering: if the Fourier transform decomposes a function into a sum of sinusoids, is there a transform which decomposes a function into a product of sinusoids? Perhaps the convolution theorem can be used here, but I'm not sure how.
Hey, I will be applying for summer 2022 at TU Berlin for their master's in computer science course. Just wanna know which of these subjects qualify as theoretical CS subjects:
Theory of Computation: This course will focus on the inherent capabilities and limitations of mathematical models of computation, and their relationships with formal languages. Rigorous arguments and proofs of correctness will be emphasized. Particular topics to be covered include: finite automata, regular languages, and regular grammars; deterministic and nondeterministic automata; context-free grammars, languages, and pushdown automata; Turing machines, Church's thesis, and undecidable problems; NP-completeness.
Machine Learning: Machine Learning is concerned with computer programs that automatically improve their performance through experience. Topics such as Bayesian networks, decision tree learning, support vector machines, statistical learning methods, unsupervised learning and reinforcement learning would be discussed in this course. Theoretical concepts such as inductive bias, the PAC (probably approximately correct) learning framework, Bayesian learning methods and margin-based learning would be discussed in the course.
AI: In this course, we will study the most fundamental knowledge for understanding AI. We will introduce basic search algorithms for problem solving; knowledge representation and reasoning; pattern recognition; fuzzy logic; neural networks; and genetic algorithms. The latter three synergistically form Soft Computing, which is an important component of AI.
Algebra and differential equations: * Differential equations: differential equations of first order (separable-variables form or reducible to it, exact differential equations, linear differential equations of first order); ordinary linear differential equations of higher order with constant coefficients; homogeneous and nonhomogeneous equations; methods of variation of parameters and undetermined coefficients; Euler's equations.
I don't want to step on anybody's toes here, but the number of non-dad jokes in this subreddit really annoys me. First of all, dad jokes CAN be NSFW; it clearly says so in the sub rules. Secondly, a joke isn't automatically a dad joke just because it's from a conversation between you and your child. Most importantly, the jokes that your CHILDREN tell YOU are not dad jokes. The point of a dad joke is that it's so cheesy only a dad who's trying to be funny would make such a joke. That's it. They are stupid plays on words, lame puns and so on. There has to be a clever pun or wordplay for it to be considered a dad joke.
Again, to all the fellow dads, I apologise if I'm sounding too harsh. But I just needed to get it off my chest.
I crawled the ICLR2022 preliminary reviews with some help from another repo and uploaded the crawled raw data (crawled today around 2 PM UTC+1). You can also find some quick stats, like the ones
in the following notebook:
https://github.com/VietTralala/ICLR2022-OpenReviewData/blob/master/analyze_reviews.ipynb
Feel free to play around with it!
paper_id | title | link | keywords | mean | max | min | std | median | num |
---|---|---|---|---|---|---|---|---|---|
LdlwbBP2mlq | Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond | https://openreview.net/forum?id=LdlwbBP2mlq | Local SGD, Minibatch SGD, Shuffling, Without-replacement, Convex Optimization, Stochastic Optimization, Federated Learning, Large Scale Learning, Distributed Learning | 8 | 8 | 8 | 0 | 8 | 3 |
iMSjopcOn0p | MT3: Multi-Task Multitrack Music Transcription | https://openreview.net/forum?id=iMSjopcOn0p | music transcription, transformer, multi-task learning, low resource learning, music understanding, music information retrieval | 8 | 8 | 8 | 0 | 8 | 4 |
BrPdX1bDZkQ | DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations | https://openreview.net/forum?id=BrPdX1bDZkQ | imitation learning, offline imitation learning, imperfect demonstration, non-expert demonstration | 7.33333 | 8 | 6 | 0.942809 | 8 | 3 |
sOK-zS6WHB | Responsible Disclosure of Gene |
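The summary columns can be recomputed from the per-paper review scores; a minimal sketch (scores hard-coded here from the rows above; the layout is my own, the actual raw data lives in the repo):

using Statistics
scores = Dict(                      # per-paper review scores, from the table above
    "LdlwbBP2mlq" => [8, 8, 8],
    "BrPdX1bDZkQ" => [8, 8, 6],
)
for (id, s) in scores
    # population std (corrected=false) reproduces the table, e.g. 0.942809
    println(id, ": mean=", mean(s), " std=", std(s, corrected=false),
            " median=", median(s), " num=", length(s))
end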
YT: https://youtu.be/bIZB1hIJ4u8
"Symmetry, as wide or narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty, and perfection." and that was a quote from Hermann Weyl, a German mathematician who was born in the late 19th century.
Hey folks. Hope you don't mind me posting this here. I have stopped posting MLST stuff on this reddit, but I feel that this one in particular is pretty technical and of an academic nature and is relevant.
We spoke with Professor Michael Bronstein (head of graph ML at Twitter), Dr. Petar Veličković (Senior Research Scientist at DeepMind), Dr. Taco Cohen, and Prof. Joan Bruna about their new proto-book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges.
There is a long list of references given in the YouTube comments.
Hope you enjoy!
TOC:
[00:00:00] Tim Intro
[00:01:55] Fabian Fuchs article
[00:04:05] High dimensional learning and curse
[00:05:33] Inductive priors
[00:07:55] The proto book
[00:09:37] The domains of geometric deep learning
[00:10:03] Symmetries
[00:12:03] The blueprint
[00:13:30] NNs don't deal with network structure (TEDx)
[00:14:26] Penrose - standing edition
[00:15:29] Past decade revolution (ICLR)
[00:16:34] Talking about the blueprint
[00:17:11] Interpolated nature of DL / intelligence
[00:21:29] Going back to Euclid
[00:22:42] Erlangen program
[00:24:56] "How is geometric deep learning going to have an impact?"
[00:26:36] Introduce Michael and Petar
[00:28:35] Petar Intro
[00:32:52] Algorithmic reasoning
[00:36:16] Thinking fast and slow (Petar)
[00:38:12] Taco Intro
[00:46:52] Deep learning is the craze now (Petar)
[00:48:38] On convolutions (Taco)
[00:53:17] Joan Bruna's voyage into geometric deep learning
[00:56:51] What is your most passionately held belief about machine learning? (Bronstein)
[00:57:57] Is the function approximation theorem still useful? (Bruna)
[01:11:52] Could an NN learn a sorting algorithm efficiently (Bruna)
[01:17:08] Curse of dimensionality / manifold hypothesis (Bronstein)
[01:25:17] Will we ever understand approximation of d
Foreword:
I've seen quite a lot of comments and posts here, along with friends asking how to get into stochastic analysis and probability theory in general, so I thought I would write a guide on how to most effectively get into the subject, including book recommendations.
In my biased opinion, stochastic analysis is an extremely deep and beautiful field. This is basically what I wish someone had written for me when I was first getting into the subject. Being honest, I would love to see guides like this for other subjects too; for me personally, that's PDE and geometric analysis.
Of course this will not cover nearly all areas of modern stochastic analysis, but it should get you to the point of being able to read a good portion of current papers on the arXiv, ask and answer interesting questions, and to jump into further topics if desired.
The first two parts should be done more or less in order, but the further topics can be explored in any order desired. These are at the boundary of my current knowledge, though, so I might not have the best resources or plan for them myself; some of it will be recommendations of potential topics to explore rather than a full guide. So, here goes!
Preliminaries:
I'll assume you have a decent understanding of pre-calc, early calculus, and linear algebra. Khan Academy is probably the canonical resource for the first two. For linear algebra, I like Strang's Introduction to Linear Algebra.
First off, you should have a good grasp of undergraduate-level real analysis. For those of you familiar with the book, this means the equivalent of most of Rudin's Principles of Mathematical Analysis, popularly known as Baby Rudin. But I don't think this is the best book to learn from, especially if you're learning for the first time. A good series of books for this is Tao's Analysis 1 and Analysis 2. For a lighter introduction, one could use Abbott's Understanding Analysis, which is very friendly, although it doesn't cover quite enough of what you need to know from undergrad analysis, so you should supplement it with other books. Morgan's Real Analysis is a good alternative to Tao's books.
Next up is measure theory. Modern probability theory is built entirely on measure theory, so you would really want to know this well. Later on, stochastic processes and stochastic calculus take this theory to its very limits, so the technical parts you learn here will surely not be wasted. There
Hello all,
I want to preface that I am a biochemist with little to no training in logic. I stumbled by pure chance upon a curious philosophical work from 1937 (!) called 'The Axiomatic Method in Biology' by one J. H. Woodger. In summary, it uses the language of Russell and Whitehead's Principia Mathematica to create "a biological axiom-system" and to prove certain theorems about Mendelian theory and a/sexual division (or at least, that's the gist I get from skimming it).
It looks to be a serious work and a forerunner in the application of logic to biological theory - even if it is an unpleasant read. A quick search online for its influence shows there is some very niche work on axiomatizing genetics, but I do not know how to read the papers well enough to understand them.
Overall, it's intriguing that I have never heard of anyone trying to organize biology (or some subset like genetics) by formalizing it or building it from the ground up, so to speak. I don't want to shun the idea outright, but I don't know how suitable a tool modern logic is for this kind of task.
Where else has logic found an application? I see that game theory and probability theory have axiomatic treatments, but what else? Do you think it's a misplaced use of logic? If not, what do you figure is the best way to approach a convoluted topic (such as biology!) with a tool so neat and well-behaved?
Do your worst!
I'm surprised it hasn't decade.
For context I'm a Refuse Driver (Garbage man) & today I was on food waste. After I'd tipped I was checking the wagon for any defects when I spotted a lone pea balanced on the lifts.
I said "hey look, an escaPEA"
No one near me but it didn't half make me laugh for a good hour or so!
Edit: I can't believe how much this has blown up. Thank you everyone, I've had a blast reading through the replies!
It really does, I swear!
Because she wanted to see the task manager.
Hello, this is a simple question I've been scratching my head over for about a decade. Here it goes:
A shuffling strategy for a deck of N cards is a probability distribution over permutations on N elements. You apply the strategy by sampling the distribution and applying the permutation on the deck.
A strategy is fair if after k successive independent applications to any initial state, for k -> inf the distribution converges to the uniform distribution over all permutations.
Which strategies are fair?
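In symbols (my paraphrase of the definition above): writing p for the strategy, viewed as a probability distribution on S_N, and p^{*k} for its k-fold convolution,

p^{*k}(\sigma) = \sum_{\tau \in S_N} p^{*(k-1)}(\sigma \tau^{-1}) p(\tau),

so fairness asks that p^{*k}(\sigma) -> 1/N! for every permutation \sigma as k -> \infty.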
I must have asked around a thousand math-smart people and tried to attack this myself on numerous occasions, but I've always come up short. I've picked this problem back up recently, and finally I was able to make some progress after learning about Fourier transforms on finite groups. I think what I have in my hands sounds like a correct solution, though the proof is kinda clunky and betrays my lack of knowledge in this subject. I'm posting this in case you can maybe spot some mistakes or shortcuts. Or in case this is actually a well-known result which I've never known how to google.
Note the condition only states that for large enough k, you can build any permutation as the product of exactly k elements of supp(p), and it is thus a stronger condition than supp(p) generating all permutations, which only means any permutation is built as a product of supp(p) elements, but each permutation may use a different number of factors. For example, the swap {u} in S_2 = {1,u} does generate the whole of S_2, but no power {u}^k is the whole of S_2 by itself, because they alternate {u},{1},{u},... and indeed, the strategy of always choosing to swap isn't really fair.
However, note also that if supp(p) generates S_N and also p(1) > 0, then the condition is true. For example, my claim is that with a 52-card deck, you could do something as stilted-looking as this: roll 11 dice (even unfair ones), compute m as the sum of all values minus 11, and if 1 <= m <= 51, swap the m-th card with the next one; otherwise do nothing. Since those swaps generate the whole of S_52, and the identity is always possible, this strategy would actually be fair, even if the probabilities it assigns to each operation are completely bonkers, and successive iterations will converge to a uniform shuffle (though it would definitely take a little while).
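That claim is easy to sanity-check numerically. Here is a minimal Monte Carlo sketch of the same kind of strategy (my own scaled-down version: a 4-card deck and 3 dice instead of 52 cards and 11 dice, so that all 24 permutations can be tabulated):

const N = 4                                      # deck size, scaled down from 52
function one_shuffle!(deck)
    m = sum(rand(1:6, 3)) - 3                    # m in 0:15
    if 1 <= m < N
        deck[m], deck[m+1] = deck[m+1], deck[m]  # swap the m-th card with the next
    end                                          # otherwise do nothing (identity)
    return deck
end
counts = Dict{Vector{Int},Int}()
for _ in 1:100_000
    deck = collect(1:N)
    for _ in 1:200                               # many successive applications
        one_shuffle!(deck)
    end
    counts[copy(deck)] = get(counts, deck, 0) + 1
end
extrema(values(counts))                          # all 24 counts should be ≈ 100_000/24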
T
They're on standbi
In this post I want to talk about one of the basic ideas in the analysis of PDE: weak solutions.
(This post grew out of a Quick Question I had, and didn't get a response to beyond u/grothendieck1 responding that they had the same question. It's also something of a response to the recent discussion about whether algebraic geometry is particularly prone to revolutions -- the use of low-regularity solutions seems like a comparable development in the analysis of PDE.)
When one studies differential equations it quickly becomes apparent that restricting to analytic or even smooth functions is just not feasible. A few examples:
Owing to these considerations and others, one is led to the notion of weak solution, a function that satisfies an integral version of a PDE. For example, u solves the (EDIT: inhomogeneous) Laplace equation \Delta u = f iff for every smooth function \psi of compact support (briefly, every test function \psi), the integral of \nabla u \cdot \nabla \psi equals the integral of -f \psi. If a weak solution is smooth, then it honestly satisfies the PDE, so weak solutions are generalizations of "strong" (i.e. smooth) solutions. (EDIT: The theory of weak solutions is largely due to Sobolev's introduction of Sobolev spaces in the 1930s. This motivated the need t
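For concreteness, the sign in that definition comes from a single integration by parts, with the boundary term vanishing because \psi has compact support:

\int \Delta u \, \psi \, dx = -\int \nabla u \cdot \nabla \psi \, dx, so \Delta u = f weakly iff \int \nabla u \cdot \nabla \psi \, dx = -\int f \psi \, dx for every test function \psi.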
Pilot on me!!
Nothing, he was gladiator.
Dad jokes are supposed to be jokes you can tell a kid, and they will understand them and find them funny.
This sub is mostly just NSFW puns now.
If it needs an NSFW tag, it's not a dad joke. There should just be an NSFW puns subreddit for that.
Edit: I'm not replying any longer and am turning off notifications, but to all those who say "no one cares": there sure are a lot of you arguing about it. Maybe I'm wrong, but you people don't need to be rude about it. If you really don't care, don't comment.
I recently discovered something called the Convolution Theorem. I was impressed by its rigor, its elegance, and the wide range of its applications. (I felt slightly guilty that I had not found it earlier in my life.)
A similar event happened to me recently. I was watching a lecture by Ilya Sutskever. Sutskever describes a theorem but does not give its name.
> One fact, that's actually a fact. It's a mathematical theorem that you can prove. If you could find the shortest program that does very well on {fitting} your data, then you will achieve the best generalization that is possible. With a little bit of modification, you can turn it into a precise theorem. On a very intuitive level, it's easy to see why it should be the case. If you have data, and you are able to find a shorter program that generates this data, then you have successfully extracted all conceivable regularity in the data into this program. Then you can use this object to make the best predictions possible. If you have data that is so complex that there is no way to express it as a shorter program, then it means that your data is totally random. There is no way to extract any regularity from it whatsoever.
Does anyone know what this theorem is called?