A list of puns related to "Dirac Delta Function"
Ok, so you know the Dirac delta function? Zero everywhere, except at one point it's infinity, and if you integrate across that point you get a finite value.
Is there an 'inverse Dirac delta'? By that I mean a function that's constant everywhere, but if you integrate from negative infinity to infinity, you get some finite, non-zero value. So it's basically 'infinitesimally above zero everywhere'.
You make the Dirac delta function by taking a Gaussian or something and squeezing it together infinitely into a point. But you'd make this one the opposite way: by taking a function and stretching it out, squashing it down flatter and flatter into a constant floor.
Anybody use this concept or name it anywhere?
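One way to play with the idea: flatten a unit-area Gaussian and watch what happens. A minimal numpy sketch (the grid and widths are my arbitrary choices, not from the post):

```python
import numpy as np

# Flatten a unit-area Gaussian by growing sigma: the peak value tends to
# 0 everywhere, yet the total area stays 1 -- the "constant floor" idea.
x = np.linspace(-1000, 1000, 2_000_001)
dx = x[1] - x[0]
for sigma in (1.0, 10.0, 100.0):
    g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    print(f"sigma={sigma:6.1f}  peak={g.max():.2e}  area={np.sum(g) * dx:.4f}")
```

So the pointwise limit is 0 everywhere, and the unit of area "escapes to infinity"; acting on integrable test functions the limit behaves like the zero distribution, which I suspect is why this object doesn't get its own name the way the delta does.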
I know Hamilton's method. But the functional derivative of distance is given as https://linksharing.samsungcloud.com/9qseeso9w1UV
where δ(x − x_i) is the Dirac delta function.
Please help, with all kindness!
Just learned about this in the context of Fourier transforms, and I'm still struggling to get a clear mental image of what it's actually doing. For instance, I have no idea why integrating f(x) times the delta function from minus infinity to infinity should give you f(0). I understand the proof, but it's extremely counterintuitive. I am doing a maths degree, not physics, so perhaps the intuition is lost on me because of that. Any help is appreciated.
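A numeric picture that may help: replace δ with a narrow unit-area Gaussian δ_ε and shrink ε. The spike only "sees" f near 0, where f(x) ≈ f(0), so the integral is ≈ f(0) times the spike's unit area. A hedged numpy sketch (the function f and the widths are my own arbitrary choices):

```python
import numpy as np

# Sifting property, numerically: ∫ f(x) δ_ε(x) dx → f(0) as ε → 0,
# where δ_ε is a unit-area Gaussian of width ε.
f = lambda x: np.cos(x) + x**3          # f(0) = 1
x = np.linspace(-5, 5, 1_000_001)
dx = x[1] - x[0]
for eps in (1.0, 0.1, 0.01):
    delta_eps = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    print(eps, np.sum(f(x) * delta_eps) * dx)   # tends to f(0) = 1
```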
Might be kind of a dumb question, but in Griffiths' Electrodynamics (3rd ed.), he explains that the integral of a function multiplied by the delta function at x=a is simply the value of the original function at x=a (assuming the bounds of the integral contain x=a).
Why don't you actually integrate the original function? How do you simply end up with the value of the original function at x=a without any integration at all? I'm also somewhat confused as to how you would perform this without actually evaluating the integral at the correct bounds and only including the point at which x=a.
For example, an integral from x=2 to x=6 of f(x)=(3x^2 - 2x - 1) multiplied by delta(x-3) would just result in f(3)=20, without even considering the bounds or performing any integration.
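For what it's worth, this can be checked numerically by standing in for δ with a narrow unit-area Gaussian (width and grid are my arbitrary choices). All of the spike's weight sits near x=3, so the rest of f never contributes, which is why it looks like "no integration happened":

```python
import numpy as np

# ∫ from 2 to 6 of (3x² - 2x - 1) δ(x-3) dx, with δ replaced by a
# narrow unit-area Gaussian centred at x = 3.
f = lambda x: 3 * x**2 - 2 * x - 1      # f(3) = 20
x = np.linspace(2, 6, 400_001)          # the stated bounds, containing 3
dx = x[1] - x[0]
eps = 1e-3
delta = np.exp(-(x - 3)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
print(np.sum(f(x) * delta) * dx)        # ≈ 20, i.e. f(3)
```

Move the bounds to, say, 4 to 6 and the spike is missed entirely, so the integral is ≈ 0; the bounds matter exactly through whether they contain x=a.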
I was wondering, first of all is the following statement correct, and second, does it have an elegant proof. I don't require a rigorous proof (I am an engineer), but I would like to hear some rational reasoning for the following:
>Consider the set {δ(t), δ′(t), ..., δ⁽ⁿ⁾(t)}, where δ(t) is a Dirac delta function⁽*⁾ and n ∈ ℕ. Are the elements of this set linearly independent?
>
>In other words, does a₀δ(t) + a₁δ′(t) + ⋯ + aₙδ⁽ⁿ⁾(t) = 0 imply that a₀ = a₁ = a₂ = ⋯ = aₙ = 0?
I would propose a proof of sorts using the Laplace transform, as follows:
a₀δ(t) + a₁δ′(t) + ⋯ + aₙδ⁽ⁿ⁾(t) = 0 ⟹ ℒ{a₀δ(t) + a₁δ′(t) + ⋯ + aₙδ⁽ⁿ⁾(t)} = ℒ{0}
⟹ a₀ + a₁s + a₂s² + ⋯ + aₙsⁿ = 0, which then implies a₀ = a₁ = a₂ = ⋯ = aₙ = 0,
because the set {1, s, s², ..., sⁿ} clearly is linearly independent.
Is this proof correct? Is there some "more elementary" "proof" ?
*I know it isn't a real function, but let's pretend for simplicity's sake :-)
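The key Laplace step can be sanity-checked symbolically, if that helps: acting with δ⁽ᵏ⁾ on e^(−st) picks out sᵏ. A sympy spot check for small k (a sketch, not a proof):

```python
import sympy as sp

# ∫ δ⁽ᵏ⁾(t) e^{-s t} dt = (-1)^k (d/dt)^k e^{-s t} at t = 0, i.e. s^k --
# exactly the polynomial coefficients that appear after transforming.
t = sp.symbols('t', real=True)
s = sp.symbols('s', positive=True)
for k in range(4):
    Lk = sp.integrate(sp.exp(-s * t) * sp.DiracDelta(t, k), (t, -sp.oo, sp.oo))
    print(k, sp.simplify(Lk))   # s**k
```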
I just started reading Griffiths' Electrodynamics textbook last night and it introduced this function. I understand the definition of it fine, but what I don't get is why we care about it. The book specifically referenced the vector function r̂/r², where r̂ is the unit radial vector, and showed how its divergence is 0 everywhere except at r=0, at which point it diverges to infinity. From there, the DD function was defined. But what's so important about that function?
Don't get me wrong -- I'm not someone who thinks math is only important if it has an obvious practical application. But this particular function (yes, technically it's a distribution) doesn't seem all that interesting in its own right, plus it was presented in a manner that heavily implied that it's very important to electrodynamics, but didn't explain how.
So I'm trying to understand what a Dirac delta function actually looks like, which I know may be a bit pointless, as I've been told the Dirac delta only really makes sense under the integral. But anyway, I know one may interpret it basically as a spike that doesn't technically go to infinity, but if one integrates it you get 1, and if you integrate it multiplied by f(x) you get f(0).
Anyway, so I know that the delta function δ(x) basically looks like a spike at 0, and δ(x−a) is a spike at x=a. So we get this 'spike' when the argument is zero. Now say we have δ(g(x)) and g(x) has multiple roots. Does that mean the delta looks like many spikes where g(x) = 0?
Am I thinking about this correctly? Any help is appreciated.
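Yes, that's the right picture. For simple roots x_i of g the standard identity is δ(g(x)) = Σᵢ δ(x−x_i)/|g′(x_i)|: one spike per root, each weighted by 1/|g′(x_i)|. A hedged numpy check with my own toy choices g(x) = x²−1 and f(x) = x²+x+2:

```python
import numpy as np

# g(x) = x² - 1 has simple roots ±1 with |g'(±1)| = 2, so
# ∫ f(x) δ(g(x)) dx should be ≈ f(1)/2 + f(-1)/2 = (4 + 2)/2 = 3.
g = lambda x: x**2 - 1
f = lambda x: x**2 + x + 2
x = np.linspace(-4, 4, 800_001)
dx = x[1] - x[0]
eps = 1e-3
delta_eps = np.exp(-g(x)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
print(np.sum(f(x) * delta_eps) * dx)    # ≈ 3: two weighted spikes
```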
Hi all
I'm working through some quantitative finance studies and I've run into a spot of confusion. A key part of my studies revolves around the Wiener process / Brownian motion, in which a normal distribution with a mean of zero and variance dx exists.
My question is:
This is confusing me, as the infinite sum of normal distributions with variance dx is a normal distribution with zero mean and variance x. The infinite sum of Dirac deltas would surely be 0?
Thanks in advance for any help
I just heard that the integral of the Dirac delta function times another function f(x) is defined as the value of the integral of the Dirac delta function (which is 1) times the value of the function at 0, treated as a constant in the integral. It blows my mind, but why is it true? I'm confused.
What does it actually mean for an expression involving Dirac's delta to be equal to something? Say for example the identity:
x^n · δ(x) = 0 for natural n
I can't input x=0 because it makes no sense, but I can take the integral of the left side around zero and see it is equal to 0. Does that mean the expression without the integral is also zero? Is equality defined differently here?
I'm quite confused, thank you in advance.
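For what it's worth, the way I've seen it defined: two distributions are equal iff they give the same number when paired (integrated) against every smooth test function φ, so "x^n · δ(x) = 0" is a statement about those pairings, not about pointwise values like x=0. A small sympy spot check against one sample test function (my choice, not from the post):

```python
import sympy as sp

# Pairing x**n * DiracDelta(x) with a sample smooth test function:
# the spike samples x**n * phi(x) at x = 0, giving 0**n * phi(0) = 0.
x = sp.symbols('x', real=True)
phi = sp.exp(-x**2)                      # sample test function
for n in (1, 2, 3):
    val = sp.integrate(x**n * sp.DiracDelta(x) * phi, (x, -sp.oo, sp.oo))
    print(n, val)                        # 0 each time
```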
Having trouble with this question; the answer I got is 1/2 and I'm not sure if it's right, so I'm asking for help.
I know it's not a function from R to R, because ∞ isn't a real number, but why not simply define it as a function from R to R ∪ {∞}, the projectively extended real number line? My understanding is that at least part of the motivation for the DD function is related to the function R/r² and what happens when r=0, and the projective number line was invented at least partly to extend the domain of 1/x to include 0. So why not use what seems like an obvious relationship to let the DD function actually be a function?
https://i.imgur.com/4NrQEDZ.jpg So I'm trying to prove the blue expression, and I am aware of the two black expressions at the top of the page. Yet I'm pretty stumped.
I know it involves using the integral at the bottom of the page. The Dirac delta will have spikes when g(x) = 0, and if one says that this occurs at x_i, one may Taylor expand about this point and sub it into the argument. Then the next step involves a change of variables to y, but beyond that I'm pretty confused about what to do next.
Any help is very much appreciated.
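For what it's worth, here is a sketch of the standard argument with the steps you describe filled in (hedged: it assumes every root x_i of g is simple, i.e. g′(x_i) ≠ 0):

```latex
% Near each simple root x_i:  g(x) \approx g'(x_i)(x - x_i), so
\int f(x)\,\delta\bigl(g(x)\bigr)\,dx
   = \sum_i \int f(x)\,\delta\bigl(g'(x_i)(x-x_i)\bigr)\,dx
% Substitute y = g'(x_i)(x - x_i); the absolute value appears because
% the orientation of dy flips with the sign of g'(x_i):
   = \sum_i \frac{1}{|g'(x_i)|}\int f\Bigl(x_i+\tfrac{y}{g'(x_i)}\Bigr)\,\delta(y)\,dy
   = \sum_i \frac{f(x_i)}{|g'(x_i)|}
% Since this holds for every test function f:
\quad\Longrightarrow\quad
\delta\bigl(g(x)\bigr)=\sum_i \frac{\delta(x-x_i)}{|g'(x_i)|}.
```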
Hi,
I'm currently revising for my exams and one of my lecture slides looks like this. I understand the concept, but when doing the example I don't understand how it simplifies to that. When I do it, I get the integral at the start followed by f(−1) + 3f(1) + 2f(−3). I don't get how that simplifies to the answer given.
please criticize my """proof""".
DD=Dirac Delta function
A challenge asked: prove DD(sqrt(x))=0. It asked to do so via change of variable in the integration.
I assumed while doing this proof that it was asking that the expression DD(sqrt(x)) acted as zero in the "context" of an integral.
I do not think the man assigning the challenge meant for me to use the integral of an exponential expression for DD, and he dissuaded the use of the composition property of DD.*
Int[ DD(x) ] dx = Int[ DD(sqrt(x)) / (2 sqrt(x)) ] dx (just changed variables)
Next I equate the arguments of the integrals, which I don't think is legal because DD isn't continuous (or even a function, really, right?), but whatever, let's roll with it:
DD(x) = DD(sqrt(x)) / (2 sqrt(x))
DD(x) · 2 sqrt(x) = DD(sqrt(x)) --- now investigate whatever this is in the context of the regular DD(x).
Change variable from x to x − x' for comfort (lol)
Int[ DD(x−x') · 2 sqrt(x−x') ] dx' = 2 sqrt(x−x) = 0?
*If you're wondering why I don't just ask him, the course was given roughly 10 years ago. Without me in it.
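For whatever it's worth, here's a route that uses only the change of variables the challenge mentions, interpreted (as you did) in the context of an integral against a test function f:

```latex
% Substitute y = \sqrt{x}, so x = y^2 and dx = 2y\,dy:
\int_0^{\infty} f(x)\,\delta\bigl(\sqrt{x}\bigr)\,dx
   = \int_0^{\infty} f(y^2)\,\delta(y)\,2y\,dy
   = \bigl[\,2y\,f(y^2)\,\bigr]_{y=0} = 0 .
% The spike sits at y = 0, where the Jacobian factor 2y vanishes, so
% the whole pairing is 0 -- i.e. \delta(\sqrt{x}) acts as the zero
% distribution, which is essentially your argument made tidy.
```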
I have a question concerning a Dirac delta identity. I have the following integral.
[; \int \delta(t-r/c) dr ;]
Naively, I would think that since the spike sits at r=ct, the delta function integrates to one. However, the right answer is c. I know this is because of the identity δ(ax) = δ(x)/|a|. But how do I interpret why the naive answer is wrong?
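One hedged way to see it numerically (c = 3, t = 2, and the spike width are my own sample values): the spike has unit area in its own argument u = t − r/c, but the integration runs over r, and in r the bump is stretched by a factor of c.

```python
import numpy as np

# ∫ δ(t - r/c) dr with δ replaced by a narrow unit-area Gaussian in
# u = t - r/c.  Since dr = c |du|, the bump's area measured in r is c.
c, t = 3.0, 2.0
r = np.linspace(0.0, 20.0, 2_000_001)
dr = r[1] - r[0]
eps = 1e-3
u = t - r / c                        # spike sits at u = 0, i.e. r = c*t = 6
delta_eps = np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
print(np.sum(delta_eps) * dr)        # ≈ 3, i.e. c, not 1
```

So the naive answer implicitly assumes the spike is unit-area in r; it is unit-area in its own argument, and rescaling the argument rescales the area, which is exactly δ(ax) = δ(x)/|a|.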
Image of the math I am having trouble with:
https://drive.google.com/file/d/1hXFOvg5aeRVUfHPnoohmRnkg6ICFWe4z/view?usp=sharing
How does the first line simplify to the second line? For some background, η = diag(−1,1,1,1), h is the metric perturbation, and the ξ are Killing vectors. This is part of a GR question regarding gravitational waves, and we are supposed to calculate all relevant quantities to first order (where applicable).
I have to prove that δ(2t) = ½ δ(t), but I'm having trouble with it.
I have to use the approximation of the unit step function u_Δ(t) to prove it. In this function, the step is not instantaneous but is a straight line with slope 1/Δ from t=0 to t=Δ. The derivative of this function is δ_Δ(t), which is constant at 1/Δ between t=0 and t=Δ.
So:
u_Δ(t) = { t/Δ for 0 < t < Δ
δ_Δ(t) = { 1/Δ for 0 < t < Δ
Then I would guess
u_Δ(2t) = { 2t/Δ for 0 < 2t < Δ
δ_Δ(2t) = { 2/Δ for 0 < 2t < Δ
Both δ_Δ(t) and δ_Δ(2t) in this case can be integrated and shown to be 1, but the one with 2t is higher and narrower. I don't see how this should result in the thing I'm supposed to prove.
This solution on Slader uses a different method, where the 2t is inserted in the range but not in the value. This is probably the correct way, but I don't understand why.
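If it helps, here's a hedged numeric check of the two readings (Δ = 0.01 is my arbitrary choice). "δ_Δ(2t)" in the identity means composition: plug 2t into δ_Δ, which keeps the height 1/Δ but halves the support to 0 < t < Δ/2, so its area is 1/2. The 2/Δ you wrote down is instead d/dt[u_Δ(2t)], which picks up a factor 2 from the chain rule; that one does integrate to 1, but it approximates 2δ(2t), not δ(2t). That is exactly the "2t in the range but not in the value" move.

```python
import numpy as np

# Composition vs chain rule: delta_D(2*t) has height 1/D on 0 < t < D/2,
# so its area is 1/2 -- matching  δ(2t) = ½ δ(t).
D = 0.01
t = np.linspace(-1.0, 1.0, 2_000_001)
dt = t[1] - t[0]
delta_D = lambda tau: np.where((tau > 0) & (tau < D), 1.0 / D, 0.0)
print(np.sum(delta_D(t)) * dt)        # ≈ 1.0
print(np.sum(delta_D(2 * t)) * dt)    # ≈ 0.5
```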
To find a vector potential we write
1. Bz = ∂xAy − ∂yAx,
and try r = (x² + y²)^(1/2):
2. Ax = −B ∂y[g(r)] = −B (y/r) g′(r),
3. Ay = B ∂x[g(r)] = B (x/r) g′(r).
Then if
4. (∂²x + ∂²y) g = δ²(x),
we get the right formula for Bz. This is the equation for the Coulomb potential in 2 dimensions, or equivalently for an infinite line source in 3 dimensions, so the solution is
5. g = ln(r).
Now
6. Ax dx + Ay dy = Aθ dθ + Ar dr = B (x dy − y dx)/r²
implies that Ar = 0 and Aθ = −B, so the line integral around the circle is −2πB. This follows from Stokes's theorem, which says that the line integral around any curve is equal to the integral of the magnetic flux through any surface bounded by that curve.
So I am not sure where the equations for Ax and Ay (2 and 3) come from. Are they a manipulation of the first equation, or some rule to be remembered?
I do get that the right hand side is the partial of a function g(r).
Why is g = ln(r) the solution to the Coulomb potential in 2 dimensions?
I do see how equation 4 resembles Laplace's equation, except the Dirac delta means that it's equal to something non-zero at the origin, right?
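Not a full answer, but the algebra can be spot-checked with sympy (a sketch, assuming g = ln r as in step 5). Equations 2-3 read to me as an ansatz: take A to be a rotated gradient of some radial g; plugging them into equation 1 gives Bz = B·(∂²x + ∂²y)g, which vanishes away from the origin, so all of the "source" sits in the 2-D delta, as you suspected.

```python
import sympy as sp

# Plug the ansatz (2)-(3) with g = ln r into (1): away from the origin
# Bz = B * Laplacian(g) = 0, so the whole source is the delta at r = 0.
x, y, B = sp.symbols('x y B', real=True)
r = sp.sqrt(x**2 + y**2)
g = sp.log(r)
Ax = -B * sp.diff(g, y)
Ay = B * sp.diff(g, x)
Bz = sp.simplify(sp.diff(Ay, x) - sp.diff(Ax, y))
lap = sp.simplify(sp.diff(g, x, 2) + sp.diff(g, y, 2))
print(Bz, lap)    # both 0, valid for (x, y) != (0, 0)
```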
Thanks for any help!
https://i.imgur.com/CHsEP58.jpg
Can someone check if I did this right?
Also, I'm having a hard time visualizing Heaviside functions when they're written out like this. Does it just mean that the function is 0 except from π to 2π, where the function looks like sin(2t)?
We're also supposed to plot this function and its derivative. I thought the derivative of a Heaviside is the Dirac delta function, but I also read that it's not really a function? So what would the derivative graph look like? Just 2cos(2t) from π to 2π?
Also, we have to graph a phase plane of the function and its derivative. What would that look like?
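Hedged sketch, assuming the expression is (u(t−π) − u(t−2π))·sin(2t) as you describe: yes, the window picture is right. There's also a nice bonus here: in general the derivative picks up delta spikes at the window edges weighted by the jump of the function there, but sin(2π) = sin(4π) = 0, so both jumps vanish and the derivative really is just 2cos(2t) on (π, 2π), as you guessed. A numeric check (the grid is my choice):

```python
import numpy as np

# The windowed signal and its numerical derivative: no delta spikes
# survive because sin(2t) vanishes at both window edges t = π and t = 2π.
t = np.linspace(0.0, 3 * np.pi, 300_001)
window = ((t > np.pi) & (t < 2 * np.pi)).astype(float)
f = window * np.sin(2 * t)
df = np.gradient(f, t)
inside = (t > np.pi + 0.1) & (t < 2 * np.pi - 0.1)
print(np.max(np.abs(df[inside] - 2 * np.cos(2 * t[inside]))))   # tiny
```

For the phase plane, plot (f, f′): it sits at the origin outside the window, and inside it traces the ellipse (sin 2t, 2cos 2t).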
Thanks!
I know that the integral from negative infinity to infinity would be one. I also know that the integral from any negative number to infinity would be one. I'm teaching myself ODEs using Khan Academy, and learning about Laplace transforms requires you to take integrals from 0 to infinity; I'm not sure how this affects the integral of the Dirac delta function.
Using their informal definition of the Dirac delta function, it is the limit as T → 0 of the function that equals 1/(2T) when −T < t < T and 0 elsewhere. By this definition it would seem the integral of the Dirac delta function from 0 to infinity would be 1/2, but this is just an informal definition. Is the integral from 0 to infinity defined for the Dirac delta function? What is it?
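Short version as I understand it: the value of ∫₀^∞ δ is a convention rather than something the informal definition pins down; it depends on how the approximating spike straddles 0. For the symmetric box Khan uses you indeed get 1/2, which is why many texts define the Laplace transform integral from 0⁻ instead, so the whole spike is captured and the transform of δ is 1. A numeric check of the box version (T = 0.01 is an arbitrary choice):

```python
import numpy as np

# The symmetric box 1/(2T) on (-T, T) puts half its unit area at t >= 0,
# so its integral over [0, ∞) is 1/2 for every T.
T = 0.01
t = np.linspace(0.0, 1.0, 1_000_001)
dt = t[1] - t[0]
box = np.where(t < T, 1.0 / (2 * T), 0.0)
print(np.sum(box) * dt)    # ≈ 0.5
```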
So I'm going through Jackson EM for self-edification purposes, and I can't recall or find the derivation of, or reason for, the following: [; \Delta \left( \frac{1}{r} \right) = -4\pi\delta \left( \textbf{x}-\textbf{x'} \right) ;]. I am aware that Jackson states it is due to the singular nature, but I can't grasp how that is related to the Laplacian of [;\frac{1}{r};]. Is this something just to be taken at face value, or is there a more illuminating explanation I am missing?
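The standard illuminating argument, sketched (with x′ at the origin for brevity): a direct computation gives ∇²(1/r) = 0 for r ≠ 0, yet the divergence theorem forces every ball around the singularity to contain a fixed amount of "source", and that combination is exactly a delta.

```latex
% For r \neq 0:  \nabla^2(1/r) = 0  (direct computation in spherical
% coordinates).  Over a ball B_R centred on the singularity:
\int_{B_R} \nabla^2 \frac{1}{r}\, d^3x
  = \oint_{\partial B_R} \nabla\frac{1}{r}\cdot d\mathbf{a}
  = \oint_{\partial B_R} \Bigl(-\frac{\hat{\mathbf r}}{R^2}\Bigr)\cdot\hat{\mathbf r}\,R^2\,d\Omega
  = -4\pi .
% Zero away from the origin but integrating to -4\pi over any ball that
% contains it: precisely  \nabla^2(1/r) = -4\pi\,\delta^3(\mathbf{x}-\mathbf{x}').
```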
Hey, guys!
I've recently learned of the Dirac Delta Function at uni. My professor briefly went over these three conditions (?) that it satisfies: https://qph.fs.quoracdn.net/main-qimg-37b3ca9307739ded3788d421a9eb3d45
I get the first, but is somebody able to explain how the second works? I tried to evaluate the integral, but it didn't seem to work out for me. :(
Any help is greatly appreciated. :D
I have been trying to implement this paper that describes a retroreflection model, in Open Shading Language. The retroreflection shader is divided into three parts - mirror reflection, retroreflection and diffuse reflection. The mirror reflection part uses a Dirac delta function like so -
https://preview.redd.it/mx5psj0r46131.png?width=495&format=png&auto=webp&s=1908c7f1173bd7fbb87440c074836a320c8c6dd6
where Wo and Wi are the viewing and incident directions respectively, Ks is the specular reflectance, F is a function representing the Fresnel reflectance, and the numerator of the ratio is the aforementioned delta function.
R(Wo) is defined as -
https://preview.redd.it/8hy36y8v46131.png?width=391&format=png&auto=webp&s=0b0ae6f1a55e13b02171e8f10c99f5550f2fc211
where Wi, Wo, and Wn are the incident, viewing, and normal directions respectively.
So from what I've read online, in this case, the function returns a value of zero whenever wi = R(w0). What value will be returned when the values are unequal? I don't get how it's supposed to take an "infinite" value in this case.
I came to this question by looking at the Fourier transform of a hyperbolic cosine. Let's say
[;f(x)=\cosh(ax);],
where a is a complex number and x is real. Then the Fourier transform is
[;F(\omega)=\sqrt{\frac{\pi}{2}}\,\delta(\omega-ia)+\sqrt{\frac{\pi}{2}}\,\delta(\omega+ia);].
So it's the sum of two Dirac deltas that take their nonzero values at the complex numbers [;z=\pm ia;]. I want to know what happens if I integrate along a contour that surrounds one of these numbers (i.e., does that Dirac delta have a residue?).
I found a few references to this question online, but so far have not come to a definitive answer. This is the best source I've found so far, but it doesn't seem to answer the question. He seems to integrate along a contour where the Dirac delta actually lies on the contour, so that's not really a residue. I guess this question is related to how to take the inverse Fourier transform and recover the hyperbolic cosine, since you would need to integrate along a contour that contains the nonzero values of the Dirac delta. Anyone have ideas?
problem: https://imgur.com/a/uGoif
how did wolfram alpha get from the expanded form to delta(y-4)?
delta is the Dirac-Delta function
Apparently, the derivative is minus Dirac's delta function, but I don't understand how that can be the case