A list of puns related to "Fourier integral operator"
I'm currently teaching a diff eq class and in today's discussion session after introducing the Laplace transform, a student asked what other types of integral transforms there are.
I told him about Fourier transforms, but when he asked for others I was honestly stumped; I never really thought about it when I was in school and never really encountered any in my curriculum.
What other types of integral transforms are there? And what are their uses? Are these types of transforms specific to applications, such as in physics, or are there other such transforms that are used in pure math? Is there some field of study where they are important?
I'd like to be able to give my student a more comprehensive answer and honestly, I'm curious to broaden my knowledge as well! Sorry if this is a quick question - I'm honestly not sure how broad the scope of the answer may be.
Fourier Series, Fourier Integral, Fourier Transform, Discrete Fourier Transform, Fast Fourier Transform
(Forgive me if I didn't tag this correctly, wasn't sure what the best fit was).
For work, I want to implement the CMOS noise reduction algorithm that's in this paper:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7582543/
I'm having difficulty with this part, immediately preceding equation (8): https://imgur.com/2kQDjDH
Thanks for any input.
When we have an integral operator, we can have one that acts like
I[f](x) = int g(x,t) f(t) dt
where g(x,t) is called a kernel. Now, there are a lot of kernels: the kernel of a linear map (the set of vectors mapped to the zero element), the kernel a corn cob is made of, the kernel at the lowest level of a layered OS architecture, and so on. So why is THIS one called a kernel? Does the word 'kernel' communicate something about g(x,t), or is it just a name?
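Whatever the etymology, it is easy to see numerically what the kernel does. A small sketch with toy choices of my own (a Gaussian kernel g(x,t) = exp(-(x-t)^2) applied to f(t) = sin(t)):

```python
import numpy as np

# Toy example (my own choices): Gaussian kernel g(x,t) = exp(-(x-t)^2), f(t) = sin(t)
def g(x, t):
    return np.exp(-(x - t) ** 2)

def f(t):
    return np.sin(t)

t = np.linspace(-10, 10, 2001)      # quadrature grid for the t-integral
dt = t[1] - t[0]
x = np.linspace(-3, 3, 7)           # points at which we evaluate I[f]

# I[f](x) = int g(x,t) f(t) dt, approximated by a Riemann sum for each x
If = np.array([np.sum(g(xi, t) * f(t)) * dt for xi in x])

print(If)   # a smoothed (Gaussian-blurred) copy of sin at the x grid
```

Each output value I[f](x) is a g-weighted average of f near x, so the kernel really is the "core" ingredient that determines what the operator does.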
Hi all. Consider the discrete vs continuous Fourier transforms. The discrete transform allows one to write a function as a linear combination of sin(mx) and cos(nx). The continuous transform allows one to write a function as an integral of terms of the form c(s)e^{ist} over all real s. The continuous transform is essentially an integral version of the linear combination of sines and cosines.
Now, if one understands what a Fourier series is, one realizes that the discrete transform is essentially a change of basis for functions, with the new basis being the trig functions (specifically at integer multiples of a base frequency). The continuous transform involves ALL functions of the form sin(ax), cos(bx), for all real numbers a and b. I wouldn't really think of this larger set of functions as a basis, because a basis usually is something for linear combinations, NOT "integral" combinations, if you will. Similarly, the Laplace transform allows you to write a function (for t>0) as another "integral" combination of functions of the form e^{zt}, where z is any complex number.
My question is as follows: are there any fields of study that understand or study these "integral bases", i.e. one-parameter families of functions that can represent a function using an integral (instead of a linear sum) as in the above transforms? Are there any books that talk about this subject that you know of? What would these sorts of families be called, other than "integral bases"?
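For what it's worth, the "integral combination" idea is easy to play with numerically. A sketch (my own illustration, using a Gaussian as the test function and the family e^{isx} as the "integral basis"):

```python
import numpy as np

# "Integral combination" in action: represent f as an integral over the family
# e^{isx} rather than a sum. Test function (my choice): the Gaussian e^{-x^2/2}.
t = np.linspace(-10, 10, 2001)          # grid for the "analysis" integral
s = np.linspace(-10, 10, 2001)          # grid of basis-function parameters
dt = t[1] - t[0]
ds = s[1] - s[0]

f = np.exp(-t ** 2 / 2)

# coefficient of each basis function e^{isx}: c(s) = (1/2pi) int f(t) e^{-ist} dt
c = np.array([np.sum(f * np.exp(-1j * si * t)) for si in s]) * dt / (2 * np.pi)

# reconstruction is an integral over s, not a sum over n: f(x) = int c(s) e^{isx} ds
x = np.linspace(-3, 3, 13)
f_rec = np.array([np.sum(c * np.exp(1j * xi * s)) for xi in x]) * ds

err = np.max(np.abs(f_rec.real - np.exp(-x ** 2 / 2)))
print(err)  # tiny reconstruction error
```

The coefficients c(s) play exactly the role of Fourier coefficients, with the sum over n replaced by an integral over s.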
Ok, so I'm currently working on a problem which I posted here on Math Stack Exchange, but I'm not getting any help. Basically it has to do with a connection between Fourier series and contour integration, and I literally can't figure out why the series and the integral described in the post are connected.
Any help would be wonderful.
I've already read the chapter in my textbook and it's not making much sense at all. My class is using Modern Physics by Serway, Moses, and Moyer. I feel like they're leaving stuff out, and I don't understand the example problems. Also, there's no mention of any famous example problems like the particle in the infinite well, or any explanation of stuff like that. I feel like I'm getting screwed for when exams come up by reading the book, because I have no idea what's really going on.
Is there anything else I could use? I'm looking for YouTube videos or other textbooks that are available for free online, or at least some example problem sets. I've already watched 2 YouTube videos that made a ton more sense than my textbook. Also, I'm going to try to read #37 and #38 of the Feynman Lectures; will these help?
Hi everyone, I have a question about the video "The Fourier Transform". I tried to answer it myself and, to make it more readable, I wrote it up in LaTeX. Could you tell me what you think of it? Is it sound? Are there any errors?
Thank you all in advance
first page of the demonstration
View the full paper presentation here which includes a time-stamped outline:
Numerical solvers for Partial Differential Equations are notoriously slow. They need to evolve their state by tiny steps in order to stay accurate, and they need to repeat this for each new problem. Neural Fourier Operators, the architecture proposed in this paper, can evolve a PDE in time by a single forward pass, and do so for an entire family of PDEs, as long as the training set covers them well. By performing crucial operations only in Fourier Space, this new architecture is also independent of the discretization or sampling of the underlying signal and has the potential to speed up many scientific applications.
Abstract:
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation (including the turbulent regime). Our Fourier neural operator shows state-of-the-art performance compared to existing neural network methodologies and it is up to three orders of magnitude faster compared to traditional PDE solvers.
Authors: Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar
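Not the authors' actual code (see the linked repository for that), but the core Fourier-layer idea described in the abstract can be sketched in a few lines of NumPy: transform, act on a truncated set of low frequencies with learned weights, transform back. The weights below are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_layer_1d(x, weights, modes):
    """One spectral convolution: FFT, keep the `modes` lowest frequencies,
    multiply them by (learned) complex weights, inverse FFT."""
    xf = np.fft.rfft(x)                      # frequency-space representation
    out = np.zeros_like(xf)
    out[:modes] = xf[:modes] * weights       # act only on the retained modes
    return np.fft.irfft(out, n=x.shape[-1])

# stand-ins for trained parameters: random complex weights on 8 modes
modes = 8
weights = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)

x = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
y = fourier_layer_1d(x, weights, modes)
print(y.shape)  # (64,): same sampling as the input
```

Because the weights act on Fourier modes rather than grid points, the same layer applies to any sampling fine enough to resolve those modes, which is the discretization-independence mentioned above.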
Hi guys, I have 2 MCQ questions here: https://imgur.com/BWM3udk
For question 1, I assume that the DC component is 0, because usually the DC component is the first term of the Fourier series. But I don't think this is the right way to solve it.
For question 2, this is my working: https://imgur.com/FiMYiOS but I'm not sure if it is correct, especially the last part.
This new paper by researchers from Caltech and Purdue is notable for making significant advancements in solving Partial Differential Equations, critical for understanding the world around us. Here is a great video that explains the paper in detail. (You can click the time-stamps on the right to jump between sections.)
Quick Summary:
Numerical solvers for Partial Differential Equations are notoriously slow. They need to evolve their state by tiny steps in order to stay accurate, and they need to repeat this for each new problem. Neural Fourier Operators, the architecture proposed in this paper, can evolve a PDE in time by a single forward pass, and do so for an entire family of PDEs, as long as the training set covers them well. By performing crucial operations only in Fourier Space, this new architecture is also independent of the discretization or sampling of the underlying signal and has the potential to speed up many scientific applications.
More information about the research:
Paper: https://arxiv.org/abs/2010.08895
Code: https://github.com/zongyi-li/fourier_neural_operator/blob/master/fourier_3d.py
MIT Technology Review: https://www.technologyreview.com/2020/10/30/1011435/ai-fourier-neural-network-cracks-navier-stokes-and-partial-differential-equations/
https://i.redd.it/thvnxea9lao61.gif
So this is more of just a fun post: I'm curious if anyone has any applications or ideas which use fractional operators. I also wanted to show off my GIF.
A bit of background on what's in the GIF and fractional operators in general.
Recall that the 2D discrete Fourier transform, F, is a linear operator on a space of matrices (or d by d arrays). If we apply the Fourier transform 4 times we get back the identity, i.e. F^4 = F∘F∘F∘F = I. Note that people have figured out how to let these exponents take non-integer values! This corresponds to fractional Fourier transforms. So for example the half Fourier transform F^(1/2) is something that functions like the square root of the Fourier transform. If we let G = F^(1/2), then we have that G∘G = F, or a bit more concretely, for any matrix/image X, we have that G(G(X)) = F(X). These special exponents behave like regular old exponents in a lot of ways, and it has been observed that one can construct F^a for arbitrary real-valued a.
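Here is a sketch of one way such fractional powers can be built (1D for brevity, and not necessarily the construction used for the GIF; it is a generic spectral-projector construction). Since the unitary DFT satisfies F^4 = I, its eigenvalues lie among the 4th roots of unity {1, i, -1, -i}, and F^a is the sum over these eigenvalues of lam^a times the projector onto the corresponding eigenspace:

```python
import numpy as np

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix, so F^4 = I

def frac_dft(a):
    """F^a via spectral projectors (valid because F^4 = I, so the
    eigenvalues of F are among the 4th roots of unity)."""
    powers = [np.eye(n, dtype=complex), F, F @ F, F @ F @ F]
    G = np.zeros((n, n), dtype=complex)
    for k in range(4):
        lam = np.exp(1j * np.pi * k / 2)                         # i^k
        P = sum(lam ** (-m) * powers[m] for m in range(4)) / 4   # projector onto lam-eigenspace
        G += lam ** a * P                                        # principal branch of lam^a
    return G

G = frac_dft(0.5)                                    # a "half Fourier transform"
x = np.random.default_rng(1).standard_normal(n)
print(np.max(np.abs(G @ (G @ x) - F @ x)))           # G(G(x)) = F(x), up to roundoff
```

There are several inequivalent definitions of the fractional DFT in the literature; this one simply fixes a branch for each eigenvalue, which is enough to make G∘G = F hold.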
The GIF I've posted takes an image of a pagoda X and applies increasing fractional degrees of Fourier transforms. Specifically, it shows F^a(X) as a goes from 0 to 4.
Links, more on fractional operators
Conclusion
I'm curious if anyone has any interesting ideas...
The way Fourier series are typically taught, you take the integral of f(x)cos(nx) or f(x)sin(nx) from -pi to pi and then divide by pi. This works because cos(nx) and sin(mx) are orthogonal on the interval [-pi, pi], as are sin(nx) and sin(mx) for n =/= m, and cos(nx) and cos(mx) for n =/= m.
So I learned about inner product spaces and I realized that cos(nx) and sin(nx) are used as the basis for the function space C[-pi, pi] when you develop a Fourier series. This makes sense because these functions are orthogonal to each other -- but they're not orthonormal, because the integral of (sin(nx))^2 from -pi to pi is pi, not 1, and same for cos(nx).
To make them orthonormal, all you have to do is divide them by sqrt(pi). Then the 1/pi term in the Fourier coefficients is no longer necessary, and you can treat it like a vector space, where we can represent a vector as v = ∑(v∙e(n))e(n).
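That claim is easy to check numerically; a quick sketch with a couple of arbitrarily chosen modes:

```python
import numpy as np
from scipy.integrate import quad

# unit norms after dividing by sqrt(pi), and orthogonality across distinct modes
norm_sin = quad(lambda x: (np.sin(3 * x) / np.sqrt(np.pi)) ** 2, -np.pi, np.pi)[0]
norm_cos = quad(lambda x: (np.cos(5 * x) / np.sqrt(np.pi)) ** 2, -np.pi, np.pi)[0]
cross = quad(lambda x: np.sin(2 * x) * np.sin(4 * x) / np.pi, -np.pi, np.pi)[0]

print(norm_sin, norm_cos, cross)  # 1, 1, 0 (up to quadrature error)
```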
My question is whether that sqrt(pi) term is purely coincidental or if there's something more going on here, because I know that that's the value of the Gaussian integral. What's weird to me is that you can normalize trig functions over [-pi, pi] by dividing by sqrt(pi), and you can normalize e^(-x^(2)) over [-inf, +inf] by doing the same thing. I know that the generalized Gaussian integral is important in probability, and that its limit is the Dirac delta, which is again very important. Am I on to something, or am I misguided?
And lastly, the reason Fourier series were invented was to solve the heat equation and other PDEs. Is there something relating function spaces, PDEs, and the Gaussian integral? What branch/theory of math is this all a part of?
PS: Reddit should really implement support for subscripts, or some kind of basic text editor.
Fourier series allow us to represent any integrable periodic function as an infinite sum of sine and/or cosine functions. My understanding is that all periodic functions over the complex numbers form an infinite dimensional vector space and Fourier series are simply a convenient choice of basis. Am I correct? If not, what am I missing?
If so, does that mean that we could, for example, represent an arbitrary periodic function as an infinite series of square waves? Or triangle waves? Or, in other words, given some specific type of periodic function over the complex numbers, call it T, is it possible to represent every other function in the aforementioned vector space as an infinite series of functions of type T?
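For the trig basis, at least, the series picture is easy to verify numerically; for example, the square wave has the standard sine series (4/pi) ∑ sin(kx)/k over odd k, and the partial sums visibly converge away from the jumps:

```python
import numpy as np

# Fourier sine series of the square wave sgn(sin x): (4/pi) * sum over odd k of sin(kx)/k
x = np.linspace(0.1, np.pi - 0.1, 200)   # stay away from the jump points
target = np.sign(np.sin(x))              # equals 1 everywhere on this range

partial = sum(4 / (np.pi * k) * np.sin(k * x) for k in range(1, 2000, 2))
print(np.max(np.abs(partial - target)))  # small, away from the discontinuities
```

As for square-wave bases: yes, such things exist. The Walsh functions are a classical complete orthonormal system built entirely from step functions. But not every family T will work; completeness (and, for convenient coefficient formulas, orthogonality) is what makes a family usable as a basis.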
When calculating the Fourier series for an even function of x, f(x), of period T, the coefficient Bm that multiplies the sine in the FS equation will zero out because the integral from -L to L of the product of an even function and an odd function is zero.
Now, suppose I have the function f(x) = 0 (-3<x<-1), 1 (-1<x<1), 0 (1<x<3).
I know that the integral from -3 to 3 of f(x) sin[m·pi·x/L] dx is zero. But since f(x) is even, could I write: the integral from -3 to 3 of f(x) sin[m·pi·x/L] dx = 2 times the integral from 0 to 3 of f(x) sin[m·pi·x/L] dx?
Furthermore, if I know that the only region that will not yield zero is from 0 to 1, can I then say: 2 times the integral from 0 to 3 of f(x) sin[m·pi·x/L] dx = 2 times the integral from 0 to 1 of f(x) sin[m·pi·x/L] dx?
I know that if I were to integrate only the even f(x), all this simplification would be fine. But given that I have an odd times an even function inside the integral, I'm not sure that move is okay.
Thanks
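A quick numerical check (m = 1, L = 3, with the f above) suggests the halving step is where it goes wrong: the symmetric integral vanishes because the integrand (even times odd) is odd, but twice the half-range integral does not vanish.

```python
import numpy as np
from scipy.integrate import quad

L = 3.0
m = 1
f = lambda x: 1.0 if -1 < x < 1 else 0.0                 # the even f(x) above
integrand = lambda x: f(x) * np.sin(m * np.pi * x / L)   # even times odd: odd

full = quad(integrand, -3, 3, points=[-1, 1])[0]    # symmetric integral of an odd function
doubled = 2 * quad(integrand, 0, 3, points=[1])[0]  # the proposed "2x half-range" shortcut

print(full, doubled)   # full is 0; doubled is not
```

The "2 times the integral from 0 to L" identity needs the whole integrand to be even, not just f.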
Hi r/learnmath
I'm reading this article, page 38. An integral operator was defined like this, where:
f dot: a family of functions defined on the boundary of a domain omega.
k: a "kernel" of sorts, defined like this, with some nice properties.
P: a polynomial written in terms of the functions f. On a side note, similarly to one-variable polynomials, differentiating it acts like getting rid of the first term and sliding the rest back one degree, like this.
Basically, all the operator does is take a set of functions defined on the boundary of omega and give you one function defined on the inside of omega.
Now, the heart of the matter: I'm trying to differentiate it m times. My attempt was to bring the differentiation sign inside the integral and then apply the Leibniz formula, like so.
But his result, on the other hand, was this.
What frustrates me is how he got that additional P(X,Z) term, and what that Z even means if it's an arbitrary point of the boundary. The author doesn't explain it or cite any reference about it, and I couldn't find anything similar to this.
Any help would be appreciated, thanks. (Also, I'm sorry for my bad English.)
I have an integral
[; f(t,\tau) = \int_0^\infty ds I(s) G(t-s-\tau) e^{ik\omega_0 t}f_k(s) ;]
Here the function G is a Gaussian. I would like to find the Fourier transform of f with respect to t and evaluate the integral over s. Is it allowed to first perform the Fourier transform of the functions dependent on t and then perform the integral? I.e.,
[; f(\omega,\tau) = \int_0^\infty ds I(s) e^{i\omega(s+\tau)} \tilde{G}(\omega+k\omega_0)e^{ik\omega_0(s+\tau)}f_k(s) ;]
Is this valid?
Hello all.
I'm doing a quantum problem where I had to transform a wavefunction from momentum to position space and ended up with the integral below.
The answer key lists the solution, but doesn't give any indication how to solve the integral.
Anyone have any ideas? I'm preparing for quals and really want to figure this out.
[; \int_{-\infty}^{\infty} e^{p(ix-ia)}\, e^{-\frac{it}{2m} p^2}\, dp = \sqrt{\frac{m}{2 \pi i t}}\, e^{\frac{i m}{2 t}(x-a)^2} ;]
Image:
https://i.imgur.com/ZGFyHyC.jpg
Any help on any of these questions would be very much appreciated
https://imgur.com/a/sLx2NId
According to Fourier analysis, any function can be described as an infinite sum of sinusoidal functions with different coefficients in front of them. Let's say that we have a function that is just f(t) = a·cos(vt) and we want to find out how this function is described in frequency space (we should find a Dirac delta function around k = v, where k describes the frequency). We would find this by applying
'[;S(k)=\int_{-\infty}^{\infty} f(t)\, e^{-2i\pi kt}\, dt;]'
then, because of the orthogonality between cos(vt) and any other sinusoidal function that doesn't have frequency v, everything would cancel out except a·cos(vt)/cos(vt) inside the integral. Here we would be able to factor the a out front and essentially have "how much (a)" there is of the cosine function with frequency v in the original function.
My question is: how do we get the a out of our integral? To me it seems that cos(vt)/cos(vt) should be 1, and integrating 1 from minus inf to inf is infinity. It's not enough to just integrate over the period, as all of the different frequencies will have different periods.
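The suspicion is right that the continuous integral diverges for a pure cosine: S(k) really is a delta function, and a only appears as the delta's weight. Over a finite window everything becomes finite, which is easy to see with a sampled signal and the FFT (a discrete stand-in, my own illustration, not the continuous transform itself):

```python
import numpy as np

# sample a*cos(2*pi*f0*t) over a whole number of periods and take the DFT
a, f0 = 2.5, 5                    # amplitude and frequency in cycles per unit time
n = 1024
t = np.arange(n) / n              # one unit of time, so bin k holds k cycles/unit
sig = a * np.cos(2 * np.pi * f0 * t)

X = np.fft.rfft(sig) / n          # normalized DFT coefficients
peak = int(np.argmax(np.abs(X)))

print(peak, 2 * np.abs(X[peak]))  # bin 5, amplitude 2.5
```

Normalizing by the window length is what keeps the coefficient finite; as the window grows without that normalization, the spike's height grows while its width shrinks, which is exactly the delta-function limit.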
The equation is nicely formatted on the link below.
https://math.stackexchange.com/questions/2981774/solving-an-integral-equation-possibly-fredholm-1st-kind-containing-quartic-ex
Main issue is that π(x) needs to be moved to the other side (I believe), but I'm not sure if that's possible without the RHS blowing up. Also, I've seen people say the Fourier transform method to solve this is possible when π(x) is Gaussian, but that seems to give me the exact same issue.
I see how a discrete Fourier expansion works by forming a linear combination of orthogonal functions. And when we write such an expansion, we are working in a basis of those orthogonal functions.
So when we use the integral Fourier transform, are we working with a continuous distribution of basis functions?
If so, are these continuous bases related to infinite dimensional matrices?
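In finite dimensions the discrete transform literally is a change-of-basis matrix, which is one way to make the intuition precise; a small sketch:

```python
import numpy as np

n = 8
k = np.arange(n)
# DFT as an explicit change-of-basis matrix (unitary with the 1/sqrt(n) normalization)
F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

# its rows/columns form an orthonormal set: F^H F = I
print(np.max(np.abs(F.conj().T @ F - np.eye(n))))   # ~ 0

x = np.random.default_rng(0).standard_normal(n)
c = F @ x                 # coefficients of x in the Fourier basis
x_rec = F.conj().T @ c    # change back: perfect reconstruction
print(np.max(np.abs(x_rec - x)))                    # ~ 0
```

The continuous transform replaces the matrix-vector sum ∑_j F_{kj} x_j by the integral ∫ e^{-ist} f(t) dt, so the kernel e^{-ist} plays the role of a continuously indexed "matrix entry", which is one precise sense of an infinite-dimensional matrix.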
Numerical solvers for Partial Differential Equations are notoriously slow. They need to evolve their state by tiny steps in order to stay accurate, and they need to repeat this for each new problem. Neural Fourier Operators, the architecture proposed in this paper, can evolve a PDE in time by a single forward pass, and do so for an entire family of PDEs, as long as the training set covers them well. By performing crucial operations only in Fourier Space, this new architecture is also independent of the discretization or sampling of the underlying signal and has the potential to speed up many scientific applications.
OUTLINE:
0:00 - Intro & Overview
6:15 - Navier Stokes Problem Statement
11:00 - Formal Problem Definition
15:00 - Neural Operator
31:30 - Fourier Neural Operator
48:15 - Experimental Examples
50:35 - Code Walkthrough
1:01:00 - Summary & Conclusion
Paper: https://arxiv.org/abs/2010.08895
Blog: https://zongyi-li.github.io/blog/2020/fourier-pde/
Code: https://github.com/zongyi-li/fourier_neural_operator/blob/master/fourier_3d.py
MIT Technology Review: https://www.technologyreview.com/2020/10/30/1011435/ai-fourier-neural-network-cracks-navier-stokes-and-partial-differential-equations/