A list of puns related to "Self Adjoint Operator"
Hello everyone!
I'm gearing up for teaching in person (in Florida) next week, after nearly 2 years of teaching online. This is amid the worst COVID numbers we have ever seen here, and Florida is one of the three hot spots of the world right now. It's a bit daunting, but I'm doing my best to leverage online platforms to run a flipped classroom and record everything to help students who are concerned and want to distance themselves.
But I'm not here to talk about any of that.
If you're only vaguely aware of Hilbert space theory and Operators, then the video linked below is a start on discussing the Spectral Theorem for Operators, where we go over the theory for Self Adjoint Compact Operators (following Lang's Real and Functional Analysis). I am motivating it with Dynamic Mode Decompositions, where we absolutely need this theory to establish convergence of models obtained from DMD algorithms to those of the true dynamics.
This is continuing my course on Data Driven Methods in Dynamical Systems that I started in spring, and this series expands on our discussion of Dynamic Mode Decompositions. The video linked below is setting us up to talk about how to get convergent routines, and it comes down to some 101 theorems from Functional Analysis.
Right now, the literature has settled on purely heuristic motivations for DMD, and the available convergence theories haven't been strong enough to give convergence of the spectra in DMD. Some recent work by my colleagues and me has begun to illuminate how you can achieve actual convergence, and this video series is intended to build up to that new theory.
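For anyone who hasn't seen DMD before, here is a minimal sketch of the exact-DMD recipe in plain NumPy; the function name, the random snapshot matrices, and the rank r are my own placeholders, and this is only meant to show where the spectrum whose convergence is at issue comes from, not to reproduce any particular algorithm from the videos.

```python
import numpy as np

def exact_dmd(X, Y, r):
    """Minimal exact-DMD sketch: snapshot pairs X[:, k] -> Y[:, k] under the dynamics.

    Returns approximate eigenvalues and DMD modes of the rank-r
    least-squares fit A ~ Y X^+.
    """
    # Truncated SVD of the input snapshots
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]

    # Project A = Y X^+ onto the leading POD modes
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)

    # Eigenvalues of the reduced operator approximate the DMD spectrum
    eigvals, W = np.linalg.eig(A_tilde)

    # Exact DMD modes
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy usage with random data standing in for real snapshots
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 40))
Y = rng.standard_normal((100, 40))
eigvals, modes = exact_dmd(X, Y, r=10)
```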
Hello
Does anyone have some examples of bounded self-adjoint operators, or an application of the spectral theorem for bounded self-adjoint operators?
Right now I can only find self-adjoint linear operators on a finite-dimensional Hilbert space, self-adjoint compact operators, multiplication operators, and the discrete Laplacian operator on ℓ²(ℤ).
(z₁Ψ + z₂Φ)* = (z₁*)Ψ* + (z₂*)Φ* for some z₁, z₂ ∈ ℂ.
(the asterisks on the complex numbers denote complex conjugation and the asterisks on the operators denote taking the adjoint)
Suppose T is a self-adjoint positive unbounded operator on a Hilbert space. I know there exists T^(1/2) self-adjoint and positive such that T = T^(1/2) T^(1/2), but I also know that I can define T^(1/2) via the spectral theorem. Do the two definitions give the same operator? If yes, why?
Thanks to anyone who'll answer.
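In case it helps, a minimal sketch (my own, assuming T = ∫ λ dE(λ) is the spectral resolution of T, and glossing over the extra domain care the unbounded case needs) of why the two constructions agree:

```latex
% Spectral-theorem definition of the square root:
T = \int_{[0,\infty)} \lambda \, dE(\lambda), \qquad
T^{1/2} := \int_{[0,\infty)} \lambda^{1/2} \, dE(\lambda),
\qquad\text{so}\qquad
T^{1/2} T^{1/2} = \int_{[0,\infty)} \lambda \, dE(\lambda) = T .
% So the spectral square root IS a positive self-adjoint square root.
% Conversely, if S is positive self-adjoint with S^2 = T, then S commutes
% with T and hence with its spectral projections, and the uniqueness theorem
% for positive square roots forces S = T^{1/2}; so the two definitions agree.
```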
Hi everyone, I'm trying to prove that any self-adjoint linear operator A:H->H on a Hilbert space is bounded. I figured the best way would be to show that A is continuous, and therefore bounded. I followed a path of the form:
take x, y close to each other.
|A(x-y)|^2 = <A(x-y),A(x-y)> = <(x-y),(A^2)(x-y)> <= |x-y||A^2(x-y)|
But this didn't get me anywhere. Any tips would be appreciated
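Not the whole proof, but a sketch of the usual route (this is the Hellinger–Toeplitz theorem; it assumes A is symmetric and defined on all of H, which is what makes boundedness automatic): instead of estimating |A(x-y)| directly, show the graph of A is closed and apply the closed graph theorem.

```latex
% Sketch: suppose x_n \to x and A x_n \to y. For every z \in H,
\langle y, z \rangle
  = \lim_{n} \langle A x_n, z \rangle
  = \lim_{n} \langle x_n, A z \rangle
  = \langle x, A z \rangle
  = \langle A x, z \rangle ,
% so y = Ax. Hence the graph of A is closed, and the closed graph theorem
% (A is defined on all of H) gives boundedness.
```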
Any cool consequences? Any particular reasons why it's so cool?
In my physics course I learned that self-adjoint operators only have real eigenvalues, and we even proved it. When we take the momentum operator (we also proved that it is self-adjoint) p_x = -ih d/dx, e^x is an eigenfunction with the eigenvalue -ih, which is complex. I know that this cannot be possible, but I don't know what exactly is wrong.
I'm gearing up for my Quals and need to make sure I understand the difference in case it gets asked. Would you guys agree with this distinction? An operator A is Hermitian if <Au,v> = <u,Av> for all u, v in the domain of A. This doesn't necessarily mean A = A*, as the domain of A* could be larger than the domain of A. So if A is Hermitian and D[A] = D[A*], then A is self-adjoint. Please correct me in even the smallest detail, as I would much rather hear how wrong I am from you guys than from my qualifying committee.
EDIT: formatting
OK.
I know that self-adjoint operators on complex Hilbert spaces always have real eigenvalues, so measurements will always produce a real number, which is desirable.
Also Stone's theorem. https://en.wikipedia.org/wiki/Stone%27s_theorem_on_one-parameter_unitar
But this doesn't quite add up to an intuitive understanding yet.
I have a self-adjoint differential operator (w'''' + n^2 * w'' = 0) with self-adjoint boundary conditions (w(0) = w'(0) = w(1) = w'(1) = 0). If my knowledge of the theory serves me right, this is a Hermitian operator and, thus, the eigenfunctions corresponding to different eigenvalues should be orthogonal.
The eigenfunctions can be divided into two groups: Even (n_i = 2π, 4π, 6π, ...) where w_i(x) = A_i*(1 - cos(n_i*x))
Odd (n_i = 2.86π, 4.92π, 6.94π, ...) where w_i(x) = A_i*(1 - cos(n_i*x) - (2/n_i)*(n_i*x - sin(n_i*x)))
Orthogonality holds between an even and an odd eigenfunction, but not between two even or two odd. Am I incorrect in assuming orthogonality is guaranteed or is something else at play?
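If it helps to see which pairs actually fail, here is a quick numerical check of the plain L² inner products on [0, 1], using the formulas above with A_i = 1 and the approximate n_i values quoted in the post (whether plain L² is even the right inner product for this operator is exactly the question):

```python
import numpy as np
from scipy.integrate import quad

def w_even(n):
    # Even eigenfunctions as written above, with A_i = 1
    return lambda x: 1.0 - np.cos(n * x)

def w_odd(n):
    # Odd eigenfunctions as written above, with A_i = 1
    return lambda x: 1.0 - np.cos(n * x) - (2.0 / n) * (n * x - np.sin(n * x))

def inner(f, g):
    # Plain L^2(0, 1) inner product
    return quad(lambda x: f(x) * g(x), 0.0, 1.0)[0]

even = [w_even(2 * np.pi), w_even(4 * np.pi)]
odd = [w_odd(2.86 * np.pi), w_odd(4.92 * np.pi)]

print("even-even:", inner(even[0], even[1]))
print("odd-odd:  ", inner(odd[0], odd[1]))
print("even-odd: ", inner(even[0], odd[0]))
```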
I know they generalize the conjugate transpose of a matrix but I am not sure how. I also know they involve a restriction mapping. I have seen some examples of them but I don't understand why the examples satisfy the definition.
I just finished an introductory course to partial differential equations. We covered how to solve homogeneous problems by separation of variables and nonhomogeneous problems with the method of eigenfunction expansion.
At the heart of the course was solving the Sturm Liouville (S-L) eigenvalue problem to get your eigenvalues and eigenfunctions from the boundary conditions.
All semester when we solved the S-L problem we had to show that the differential operator and the boundary conditions made the operator self-adjoint, which I mastered, but the professor never really explained what exactly that tells us about the problem.
So my question is what is self-adjointness and why is it useful/important?
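For what it's worth, the computation that usually gets packaged as "self-adjoint" in the Sturm–Liouville setting is Lagrange's identity; a minimal sketch in the standard regular S-L form (my notation, with weight σ):

```latex
% Regular Sturm--Liouville operator L[y] = (p y')' + q y on [a, b]:
\int_a^b \big( u\, L[v] - v\, L[u] \big)\, dx
  = \Big[\, p\,( u v' - v u') \,\Big]_a^b .
% Self-adjoint boundary conditions are exactly the ones that kill the
% boundary term, so \langle u, Lv \rangle = \langle Lu, v \rangle.
% If L[w_m] = -\lambda_m \sigma w_m and L[w_n] = -\lambda_n \sigma w_n, then
(\lambda_m - \lambda_n) \int_a^b \sigma\, w_m\, w_n \, dx = 0 ,
% so eigenfunctions with different eigenvalues are orthogonal in the
% \sigma-weighted inner product, which is what eigenfunction expansions use.
```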
I have a linear differential operator and its complementary solution u. Is there a way to show that the complementary solution of the adjoint operator is u/(p(x)*W), where p is the coefficient of the second-order term and W is the Wronskian?
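Not a proof of the general statement, but a quick sanity check of the claimed formula on a toy operator of my own choosing, which at least shows what it asserts:

```latex
% Take L[y] = y'' + y'  (so p = 1, q = 1, r = 0).
% Complementary solutions: u_1 = 1, u_2 = e^{-x}; Wronskian W = -e^{-x}.
% The adjoint operator is L^{+}[v] = v'' - v'.
\frac{u_1}{p\,W} = -e^{x}, \qquad \frac{u_2}{p\,W} = -1 ,
% and both of these do satisfy v'' - v' = 0, matching the claimed
% complementary solutions u/(p(x) W) of the adjoint operator.
```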
https://imgur.com/gallery/rM2FNST
They both seem to have the same formula using the integral and taking a conjugate? What is the difference between them? Why does the adjoint have a | between them? What do they actually mean?
I have some more general questions that are more of intuitive nature regarding Operators used in Functional Analysis.
How did anyone ever come up with the concept of Adjoint Operators? Does it arise somewhat naturally when solving "practical" problems (some integrals)? Same question for Compact Operators, basically. Is the definition intuitive in the sense that it creates the beautiful results we get from Compact Operators? I am talking about the definition that for every bounded sequence x_n, the sequence Tx_n contains a convergent subsequence (why not, for example, say that Tx_n should be ). There must be some explanation for why it is defined like that. I would appreciate some references where I can read more into this stuff.
Thanks!
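One place where both notions do show up directly from integrals, as a hedged illustration (standard Hilbert–Schmidt setup, not tied to any particular textbook):

```latex
% Integral operator on L^2(a, b) with kernel k:
(Tf)(x) = \int_a^b k(x, y)\, f(y)\, dy .
% Its adjoint comes from swapping the order of integration:
(T^{*}g)(y) = \int_a^b \overline{k(x, y)}\, g(x)\, dx ,
% and if \int_a^b\!\!\int_a^b |k(x, y)|^2\, dx\, dy < \infty (a Hilbert--Schmidt
% kernel), then T is compact, i.e. it maps bounded sequences to sequences
% with convergent subsequences -- one classical source of that definition.
```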
Hi I am an engineering student independently going through Friedberg's, Insel's, and Spence's Linear Algebra Textbook. This is my first exposure to upper level math. It has been going well until I got up to section 6.3, the Adjoint of a Linear Operator. I understand the definition that the adjoint behaves under the property <x, T(y)> = <T*(x), y>. However, I just do not understand what the adjoint of an operator means. What does it do to the operator? What does it do to the inner product space it is operating in?
I am a graduate student in a Nuclear Engineering PhD program. Nuclear Engineers are interested in solving various forms of the transport equation. Typical approaches to solving these problems involve using Green's functions and integral equations. Since the transport equation is integro-differential we also talk about linear operator theory quite a bit. Naturally adjoint operators are discussed when we want to compute sensitivity coefficients.
Unsurprisingly, the rigorous mathematical foundations for these subjects are glossed over in most Nuclear Engineering textbooks. So, I am looking for some general references to help me get a better understanding of these topics. Thanks in advance!
My question is very simple. Given the adjoint of L (not to be confused with the formal adjoint of L), which in turn is given by the formal adjoint L+ together with the requirement that the bilinear concomitant associated with L vanish, consider a nonhomogeneous boundary value problem, say L[φ] = h(ζ). If the adjoint exists, then this problem has a solution φ(ζ) if and only if any function ξ(ζ) that satisfies the boundary conditions produced by setting the bilinear concomitant associated with L to zero has zero inner product with h(ζ) (with respect to some weight function, call it ω(ζ)); that is to say, given that ξ(ζ) and h(ζ) are orthogonal with respect to ω(ζ), there exists a solution to the nonhomogeneous linear differential equation L[φ] = h. Obviously this solution is not explicitly given and may not even have a closed form. With that being said, could one not simply use a modified Fourier-Hankel method to solve the coupled system of differential equations given by L and its adjoint L+, obtaining a solution by choosing the weight function ω(ζ) so that the inner products involving h(ζ) and h'(ζ) have analytic expressions? It seems obvious to me that this would yield a much more useful and applicable result.
Addendum: I know this is fairly mathematical, but this is an elementary problem that I feel not only has obvious repercussions in applied fields, but could be understood and intuited by any modern human.
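Restating the solvability condition above as one displayed formula, just as a sketch in the notation used above:

```latex
% Fredholm-alternative form of the condition:
L[\varphi] = h(\zeta)\ \text{has a solution}
\quad\Longleftrightarrow\quad
\int \xi(\zeta)\, h(\zeta)\, \omega(\zeta)\, d\zeta = 0
\ \text{ for every } \xi \text{ with } L^{+}[\xi] = 0
% and \xi satisfying the adjoint boundary conditions obtained from the
% vanishing of the bilinear concomitant.
```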
This is from Friedberg's Linear Algebra 4th edition, section 6.3 #3c.
V = P_1 (R) with the inner product int(fg) from -1 to 1, and T(f) = f' + 3f. We're meant to evaluate T* with f(t) = 4 - 2t. The answer in the book is 12 + 6t.
I first tried transforming the basis vectors into (3, 0) and (1, 3), forming the matrix
[3 0]
[1 3]
then transposing it and multiplying (4, -2). I get 12 -2t, which doesn't agree with the answer in the book.
I then tried solving directly from the definition of the adjoint to solve for T* in general with x = 1 + x and y = y_1 + y_2 t:
<T(x), y> = <x, T*(y)>
which gives me the integral of 16 + 4t - 6t^2 , which I can't factor into <1 + x, something> to get T*.
I tried mapping into R^2 (at this point I was just trying anything) and solving from the definition, and I get:
<(w, x), T*(y, z)> = <T(w, x), (y, z)>
= <(4w, 3x), (y, z)>
= 4wy + 3xz
= <(w, x), (4w, 3x)>
So T* would be 4x + 3xt, which for 4 - 2t gives 16 - 6t, which isn't even what I got before.
I'm completely stumped. I've been working on this one problem for hours now. This is homework so I would appreciate it if you didn't solve the problem for me but just gave a hint about what the hell I'm doing wrong.
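Not a solution, just a small SymPy checker of my own: since the defining identity <T(g), f> = <g, T*(f)> is the only thing the answer has to satisfy, you can test any candidate for T*(f) against both basis vectors of P_1(R) and see whether your candidate (or the book's) passes.

```python
import sympy as sp

t = sp.symbols('t')

def inner(f, g):
    # Inner product on P_1(R): integral of f*g over [-1, 1]
    return sp.integrate(f * g, (t, -1, 1))

def T(f):
    return sp.diff(f, t) + 3 * f

f = 4 - 2 * t
candidate = 12 + 6 * t   # the book's answer; swap in your own attempts here

for g in (sp.Integer(1), t):   # basis of P_1(R)
    # Both differences should be 0 if `candidate` really is T*(f)
    print(sp.simplify(inner(T(g), f) - inner(g, candidate)))
```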
I only know that ||T|| = ||T*||, but this equality could hold even when D(T*) is a smaller subspace of H. Could someone clarify this? Thanks!
Hello!
I am having trouble understanding what an adjoint operator is in the context of differential operators. Near as I can find it, if you have uL(v), where L is your operator, then your adjoint is vL(u)?
I've tried looking at texts but I can't seem to grasp exactly how to find the adjoint. Any help would be appreciated.
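A minimal sketch of how the formal adjoint of a differential operator is usually defined, assuming a second-order operator with smooth coefficients a, b, c (my notation): you pair u against L(v), integrate by parts until all derivatives sit on u, and whatever operator appears on u is the adjoint.

```latex
% For L[v] = a(x) v'' + b(x) v' + c(x) v, integrating u L[v] by parts twice:
\int u\, L[v]\, dx = \int v\, L^{*}[u]\, dx + \text{boundary terms},
\qquad
L^{*}[u] = (a u)'' - (b u)' + c u .
% So the pairing is u L[v] with v L^{*}[u], not with v L[u] in general;
% L is called (formally) self-adjoint when L^{*} = L, which for the form
% above happens exactly when b = a'.
```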
So, I'm studying for a linear algebra prelim, and I have a question. I've seen adjoints treated differently in a number of different places, so I just wondered if any of you could help clarify this: How is the conjugate transpose of the matrix form of a linear operator related to its adjoint? What is the relationship between normal matrices and adjoints?
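A small NumPy sanity check of the relationship, as I understand it (my own snippet, assuming the standard inner product on C^n, i.e. the matrix is taken with respect to an orthonormal basis): the matrix of the adjoint is the conjugate transpose, and "normal" means the matrix commutes with its conjugate transpose.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_adj = A.conj().T   # conjugate transpose = matrix of the adjoint (orthonormal basis)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# <Ax, y> == <x, A* y>, with <u, v> = sum(u_i * conj(v_i))
lhs = np.vdot(y, A @ x)      # np.vdot conjugates its first argument
rhs = np.vdot(A_adj @ y, x)
print(np.isclose(lhs, rhs))  # True

# "Normal" means A commutes with its adjoint: A A* = A* A
# (Hermitian, skew-Hermitian, and unitary matrices are all normal;
#  a random A generically is not.)
print(np.allclose(A @ A_adj, A_adj @ A))
```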