Knuckle dragging engineering student here getting twisted up with mathematics definitions.
I completely understand the additivity and homogeneity rules:
L(f(x)+g(x)) = L(f(x)) + L(g(x)); L(kf(x)) = kL(f(x))
Totally understood, basically the same as linear functions. I have a problem from the text that I think may just be poorly written. It's asking me to determine whether the following operator is linear (using dummy numbers here), where D = d/dx and D^2 = d^2/dx^2:
8x^3D^2 + 2xD + 7
That's all it gives me. How is the operator supposed to be applied? Just the same as multiplying (f(x)+g(x)), where it's obviously linear? Or do I substitute in for x, where it's obviously nonlinear? Or assume D ≡ d/dx acting on f(x), plug in (f(x)+g(x)) after each differential operator, and then the constant term makes it look nonlinear? There's no guidance in the text whatsoever, and online resources are conflicting.
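For what it's worth, under the usual reading the trailing 7 acts by multiplication, i.e. L[f] = 8x^3 f'' + 2x f' + 7f, so the constant term multiplies f rather than being added, and the operator is linear. A quick sympy check of both rules (my interpretation, not the textbook's):

```python
# A quick sympy check of both rules, reading the operator as
# L[f] = 8x^3 f'' + 2x f' + 7f (the trailing 7 multiplies f).
import sympy as sp

x, k = sp.symbols('x k')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

def L(h):
    return 8*x**3*sp.diff(h, x, 2) + 2*x*sp.diff(h, x) + 7*h

print(sp.simplify(L(f + g) - (L(f) + L(g))))  # 0  -> additive
print(sp.simplify(L(k*f) - k*L(f)))           # 0  -> homogeneous
```

If instead the 7 were read as "add 7 to the result", the map would be affine rather than linear, which is exactly the ambiguity you are running into.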
Reading my linear algebra notes I found the following question: Is there an invertible linear operator T in L(V) such that X_T = (-1)^n t^n?
Note: V is a vector space with finite dimension.
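A hint, in case it's useful (my own reasoning, not from the notes): with the convention X_T(t) = det(T - tI), we have X_T(0) = det(T). Here X_T(0) = 0, so det(T) = 0 and T cannot be invertible. Equivalently, the only eigenvalue of T is 0, so T is nilpotent.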
Writing my PhD in literary studies, trying to make sure my scientific info is accurate.
Obviously, I recognize that the compound structure of the sentence probably simplifies the formalism of quantum mechanics beyond the point of usefulness, but still, is it wrong? Does it get the relationship between the various concepts wrong?
If you can think of a better way to express all of the above in one sentence, I won't say no to reading it.
Thanks!
https://i.redd.it/thvnxea9lao61.gif
So this is more of just a fun post: I'm curious whether anyone has any applications or ideas which use fractional operators. I also wanted to show off my GIF.
A bit of background on what's in the GIF and fractional operators in general.
Recall that the 2D discrete Fourier transform, F, is a linear operator on a space of matrices (or d-by-d arrays). If we apply the Fourier transform 4 times, we get back the identity, i.e. F^4 = F(F(F(F))) = I. Note that people have figured out how to let these exponents take non-integer values! This corresponds to fractional Fourier transforms. So, for example, the half Fourier transform F^(1/2) is something that functions like the square root of the Fourier transform. If we let G = F^(1/2), then we have that G(G) = F, or, a bit more concretely, for any matrix/image X, we have that G(G(X)) = F(X). These fractional exponents behave like regular old exponents in a lot of ways, and it has been observed that one can construct F^a for arbitrary real-valued a.
The GIF I've posted takes an image of a pagoda, X, and applies increasing fractional degrees of the Fourier transform. Specifically, it shows F^a(X) as a goes from 0 to 4.
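For anyone curious how F^a can actually be computed, here is a minimal eigendecomposition sketch (one choice among several inequivalent definitions, since the DFT's repeated eigenvalues make its fractional powers non-unique; plain numpy assumed):

```python
# Minimal eigendecomposition sketch of F^a (plain numpy).
import numpy as np

d = 16
F = np.fft.fft(np.eye(d)) / np.sqrt(d)   # unitary DFT matrix, so F^4 = I

w, V = np.linalg.eig(F)                  # eigenvalues lie in {1, -1, i, -i}
Vinv = np.linalg.inv(V)

def frac_F(a):
    """A fractional power F^a = V diag(w^a) V^{-1}."""
    return V @ np.diag(w**a) @ Vinv

G = frac_F(0.5)                          # a "half Fourier transform"
print(np.allclose(G @ G, F))             # True (up to numerical error)

# For a 2D image X, apply along both axes, e.g. F^a X (F^a)^T:
X = np.random.rand(d, d)
Xa = frac_F(0.5) @ X @ frac_F(0.5).T
```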
Links, more on fractional operators
Conclusion
I'm curious if anyone has any interesting ideas...
I'm very new to high school math, and I've been doing a ton of research on linear operators. I'm trying to think of some real-world applications (as opposed to just applying the theory) that would benefit from the knowledge I have of linear operators.
I'm not asking for a linear operator that can solve quadratic equations; I'm talking about a linear operator that can solve linear equations. I know that there are operators that can solve quadratic and linear equations, but I don't know much about how to solve linear equations.
If I do find a linear operator that can solve linear and quadratic equations, what would be some real-world applications that would benefit from that knowledge?
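For what it's worth, my best reading of "a linear operator that solves linear equations" is the inverse operator: if the system is Ax = b with A an invertible matrix, then A^{-1} is itself a linear operator, and applying it solves the system. That is the workhorse behind real applications like circuit analysis, structural simulation, and least-squares data fitting. A minimal sketch (numpy assumed):

```python
# Solving Ax = b is just applying the (linear) inverse operator of A.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # the linear operator, as a matrix
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)           # applies A^{-1} without forming it
print(x, np.allclose(A @ x, b))     # [1. 3.] True
```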
Thanks in advance!
I am unsure of how one actually shows that L is a linear operator when:
(a, b, c) ---> (2a - 2b + c, a - 2b + c, -2a + 3b - c)
What is written above is supposed to be the linear transformation taking the three-dimensional vector (a, b, c) to the other 3D vector on the right.
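Here's the standard route (assuming the map is just multiplication by the matrix read off from the components, and that the final term was meant to be -c): once the map is written as x ↦ Ax, linearity follows from the matrix identities A(u + v) = Au + Av and A(ku) = k(Au). A numeric sanity check with numpy:

```python
# The map written as a matrix; linearity then follows from the two
# matrix identities A(u + v) = Au + Av and A(ku) = k(Au).
import numpy as np

A = np.array([[ 2, -2,  1],
              [ 1, -2,  1],
              [-2,  3, -1]], dtype=float)

u = np.array([1.0, 2.0, 3.0])
v = np.array([-4.0, 0.5, 2.0])
k = 7.0

print(np.allclose(A @ (u + v), A @ u + A @ v))  # True: additivity
print(np.allclose(A @ (k * u), k * (A @ u)))    # True: homogeneity
```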
I hope you can help,
best regards,
Robin.
One book I'd eventually like to read is Kato's Perturbation Theory for Linear Operators. From what I've read it is incredibly dense and incredibly good. It's one of those books that starts with nothing and builds an incredible castle of math, starting from the foundation and eventually reaching the stars.
However, one of the things that has been keeping me from diving in (besides never having enough time) is that the book is a bit old-fashioned; the first edition came out in 1966. I've looked around to see if anyone else has written a more recent book on the subject, but so far I haven't found anything that made me think: "This is better than Kato".
Has anyone found a book that is a newer take on Kato's Perturbation Theory for Linear Operators? Or has anyone else read Kato's book and formed an opinion?
Researchers from Brown University have built DeepONet, a novel neural network-based model that can efficiently learn both linear and nonlinear operators. This novel model was inspired by earlier studies led by researchers at Fudan University.
A continuous function does not have any abrupt changes in value. More precisely, small changes in a continuous function's output can be assured by restricting to sufficiently small changes in its input. Many studies show that artificial neural networks (ANNs) are highly efficient approximators of continuous functions. However, not many studies have yet focused on their ability to approximate nonlinear operators.
Inspired by the papers published by Chen and Chen at Fudan University, which discuss function approximation using a single layer of neurons, the researchers decided to explore the possibility of building a neural network that could approximate both linear and nonlinear operators.
So I'm trying, and currently failing, to understand the idea of linear operators and matrices.
I've been trying to comprehend the following page of a mathematical methods for engineers and physicists text, to no avail:
https://i.imgur.com/zBZ1p3w.jpg
https://m.imgur.com/SDFtcQp
So the way I'm trying to understand it is that I have some vector a. I operate on this vector using A, which transforms it into another vector y in the same vector space.
And I can have a basis e_i in my vector space, where all vectors in this basis are linearly independent, and I can represent my vector a using this basis as having some component along each of the basis vectors.
I can operate with A on one of these basis vectors, and the result is eq. 8.23. But at this point I'm pretty lost. I'm unsure what j means, and I'm basically confused.
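In case it helps (I'm going from the standard form of that equation, since the page itself isn't quoted here): eq. 8.23 is usually written A e_j = sum_i A_ij e_i. The j just labels which basis vector you fed into A. You apply A to the j-th basis vector e_j, the result is some new vector in the same space, and you expand that result back in the basis; the coefficients A_ij of that expansion are exactly the entries of the j-th column of the matrix of A. The i is then just the summation index running over the basis.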
If anyone could offer any help or recommend any resources, it would be very much appreciated, as I feel like I've been reading the same page for hours on end.
I have a linear differential operator and its complementary solution u. Is there a way to show that the complementary solution of the adjoint operator is u/(p(x)·W), where p is the coefficient of the second-order derivative and W is the Wronskian?
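I can't offer the general proof here, but a sanity check on a concrete case is easy (a sketch only; I'm assuming the Lagrange adjoint L*v = (pv)'' - (qv)' + rv for Ly = py'' + qy' + ry):

```python
# Concrete check: L y = y'' + y' (p = 1, q = 1, r = 0),
# complementary solutions u1 = 1 and u2 = exp(-x).
import sympy as sp

x = sp.symbols('x')
p, q, r = sp.Integer(1), sp.Integer(1), sp.Integer(0)
u1, u2 = sp.Integer(1), sp.exp(-x)
W = u1*sp.diff(u2, x) - sp.diff(u1, x)*u2    # Wronskian

def Lstar(v):
    # Lagrange adjoint: (p v)'' - (q v)' + r v
    return sp.diff(p*v, x, 2) - sp.diff(q*v, x) + r*v

for u in (u1, u2):
    print(sp.simplify(Lstar(u / (p*W))))     # 0 and 0
```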
So I don't know how I would go about finding the fundamental solution, that is, a generalized function u, for the operator L = -D^2 + I, where I is the identity, D = d/dx is the differential operator, and δ_0 is the Dirac delta, such that
Lu = δ_0
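For what it's worth, the classical answer here is u(x) = e^(-|x|)/2, and you can check it by hand: away from 0 we have u'' = u, so -u'' + u = 0 there, and u' jumps by -1 across x = 0, which makes -u'' produce exactly δ_0. A quick sympy check of the smooth part and the jump:

```python
# The smooth part and the derivative jump of u(x) = exp(-|x|)/2.
import sympy as sp

x = sp.symbols('x')
u_pos = sp.exp(-x)/2    # u on x > 0
u_neg = sp.exp(x)/2     # u on x < 0

print(sp.simplify(-sp.diff(u_pos, x, 2) + u_pos))  # 0: -u'' + u = 0 for x > 0
print(sp.simplify(-sp.diff(u_neg, x, 2) + u_neg))  # 0: -u'' + u = 0 for x < 0

# u' jumps by -1 at x = 0, so -u'' contributes exactly +delta_0:
jump = sp.diff(u_pos, x).subs(x, 0) - sp.diff(u_neg, x).subs(x, 0)
print(jump)  # -1
```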
Before anyone says anything: we unfortunately CAN'T use the Cayley-Hamilton theorem (we're proving a special case of it, actually).
So this is a part of a bigger proof on my homework. I have already reliably proven that any operator S on a complex vector space satisfies its characteristic polynomial, i.e. the characteristic polynomial evaluated at S is 0.
We assume the theorem that says that if B is a basis for V, then the vector space of matrices M_n(F) is isomorphic to L(V,V), by taking T to its matrix representation in M_n(F) with respect to the basis B.
Can anyone check over a part of my proof for me? I feel as though I'm overlooking something when it comes to viewing polynomials in F[t] inside sets like C[t].
excerpt of my proof found here
Any help would be greatly appreciated
Are there any operators that satisfy the condition L(f+g) = L(f) + L(g) but not L(cf) = cL(f) when c is an imaginary number?
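One standard example, in case it's useful: complex conjugation, L(f) = conj(f). It is additive (and even homogeneous for real scalars), but L(i·f) = -i·L(f) rather than i·L(f); such maps are called antilinear or conjugate-linear. A tiny numeric illustration (numpy assumed):

```python
# Complex conjugation is additive but not homogeneous over C.
import numpy as np

L = np.conj
f = np.array([1 + 2j, 3 - 1j])
g = np.array([0 + 1j, 2 + 2j])
c = 1j

print(np.allclose(L(f + g), L(f) + L(g)))  # True:  L(f+g) = L(f)+L(g)
print(np.allclose(L(c * f), c * L(f)))     # False: L(cf) != cL(f) for c = i
```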
In my differential equations class we learned that when you have a second-order homogeneous ODE whose differential operator is a square, the substitution y = e^(rx) only produces one solution. It's not too difficult to find the other solution (xe^(rx)) in this case because the order is low, but when we have a higher-order operator, how do we show that multiplying the solution by x produces a new solution?
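The usual argument rests on the identity (D - r)(x^m e^(rx)) = m x^(m-1) e^(rx): each application of (D - r) knocks the power of x down by one, so (D - r)^k annihilates x^m e^(rx) for every m < k. A quick sympy check of one instance:

```python
# (D - r)^3 annihilates x^2 e^(rx), since each application of (D - r)
# lowers the power of x by one.
import sympy as sp

x, r = sp.symbols('x r')

def D_minus_r(h):
    return sp.diff(h, x) - r*h

h = x**2 * sp.exp(r*x)
for _ in range(3):
    h = D_minus_r(h)
print(sp.simplify(h))   # 0
```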
I asked a similar question to this yesterday; however, my understanding has, I think, improved, and now my question has slightly changed.
So my understanding of linear transformations and matrices is this: say I'm in 2D and have the standard basis i = <1,0> and j = <0,1>, and I have some vector in this space, say V = <2,3> = 2i + 3j. And then I have some operator A, which is a matrix. Now the first column of the matrix is going to tell me where my i ends up, and the second column is going to tell me where my j ends up (expressed in the original basis, I think). And since it's a linear transformation, my V is still going to be 2i + 3j, but now i and j have changed, so I can find my new V by subbing in my new i and j.
However, I'm getting confused and bogged down by the maths jargon; see eq. 8.23:
https://i.imgur.com/SDFtcQp.jpg
I'm frankly struggling to connect my (attempted) intuitive understanding to this more rigorous way of writing it. So if anyone could help explain what this equation is telling me, it would be much appreciated.
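Your intuition does match the formal statement; a tiny numeric check (numpy assumed, and assuming eq. 8.23 has the standard form A e_j = sum_i A_ij e_i):

```python
# Column j of A is A applied to the j-th basis vector: that's eq. 8.23.
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  0.5]])
i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

print(np.allclose(A @ i_hat, A[:, 0]))  # True: column 1 = new i
print(np.allclose(A @ j_hat, A[:, 1]))  # True: column 2 = new j

V = np.array([2.0, 3.0])                # V = 2i + 3j
print(np.allclose(A @ V, 2*A[:, 0] + 3*A[:, 1]))  # True
```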
I was able to identify that the parity operator and the translation operator must commute, as Pf(x-a) = f(-x-a), intuitively. Is there any way to show this mathematically?
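One caution, assuming the usual definitions T_a f(x) = f(x - a) and P f(x) = f(-x): composing directly gives (P T_a f)(x) = (T_a f)(-x) = f(-x - a), while (T_a P f)(x) = (P f)(x - a) = f(a - x). So P T_a = T_{-a} P rather than P T_a = T_a P: parity intertwines a translation with the opposite translation, and the two operators commute outright only for a = 0 (or when restricted to even functions). Writing out the compositions pointwise like this is the standard way to make the intuition rigorous.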
Suppose you have the equation L_1 f + L_2 f = Q and you want to find f = (L_1 + L_2)^{-1} Q. Is there anything useful that helps find this inverse?
Similarly, what can be said about the eigenfunctions of a sum of linear operators, if anything at all?
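One standard tool, in case it applies (this assumes L_1 is invertible and ||L_1^{-1} L_2|| < 1): the Neumann series, (L_1 + L_2)^{-1} = sum_k (-L_1^{-1} L_2)^k L_1^{-1}. A finite-dimensional sketch with numpy:

```python
# Neumann-series sketch for (L1 + L2)^{-1}, valid when ||L1^{-1} L2|| < 1.
import numpy as np

rng = np.random.default_rng(0)
L1 = 5.0 * np.eye(4)                     # dominant, easily invertible part
L2 = 0.3 * rng.standard_normal((4, 4))   # small perturbation

M = -np.linalg.solve(L1, L2)             # -L1^{-1} L2
term = np.linalg.inv(L1)                 # k = 0 term: L1^{-1}
approx = np.zeros((4, 4))
for _ in range(60):
    approx += term
    term = M @ term                      # multiply in another factor of M

print(np.allclose(approx, np.linalg.inv(L1 + L2)))  # True
```

On the eigenfunction question: in general nothing can be said, but if L_1 and L_2 commute (and are diagonalizable), they share a basis of eigenfunctions, and on a shared eigenfunction the eigenvalues simply add.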
I'm trying to work out a solution to a differential equation whose solutions are Banach-space-valued functions on [0, ∞), that is, f ∈ C(R_+, X), where X is a Banach space of scalar-valued functions. I am trying to prove the existence of a weak solution via the Lax-Milgram theorem. But to apply Lax-Milgram, I need my vector space to be a Hilbert space. Also, if someone has the book "Banach and Hilbert space of vector-valued functions: their general theory and applications to holomorphy", please post it here; I can't find a free PDF of it.
Both of these are linear transformations: the identity linear map [I : V --> V] and a linear operator [T : V --> V]. The book says the identity linear map arises when V maps to itself by the identity, whereas a linear operator is formed whenever the domain equals the range. Then how are they different? I can't get it.
Hi, I am an engineering student independently going through Friedberg, Insel, and Spence's Linear Algebra textbook. This is my first exposure to upper-level math. It had been going well until I got to Section 6.3, the Adjoint of a Linear Operator. I understand the definition, that the adjoint behaves under the property <x, T(y)> = <T*(x), y>. However, I just do not understand what the adjoint of an operator means. What does it do to the operator? What does it do to the inner product space it is operating in?
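One concrete handle that might help (a standard fact, sketched numerically below): in an orthonormal basis of a finite-dimensional inner product space, the matrix of T* is the conjugate transpose of the matrix of T. So the adjoint is the operator-level version of conjugate-transposing a matrix, and self-adjoint operators are exactly the ones whose matrices are Hermitian.

```python
# <x, Ty> = <T*x, y> with T* the conjugate transpose; here <u, v> is the
# complex inner product that conjugates its first argument (np.vdot).
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3))
Tstar = T.conj().T

x = rng.standard_normal(3) + 1j*rng.standard_normal(3)
y = rng.standard_normal(3) + 1j*rng.standard_normal(3)

print(np.isclose(np.vdot(x, T @ y), np.vdot(Tstar @ x, y)))  # True
```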
I know for a fact that the derivative is a linear operator. I've heard of the concept that a linear operator can be written as a matrix, or something to the effect that a matrix is an instance of a linear operator under a certain basis.
I might be abusing something here, but does that mean I can represent a derivative as a matrix?
If not, where did I go wrong? If so, what would this matrix look like?
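Yes, this works exactly as you'd hope on a finite-dimensional space of functions. A sketch on polynomials of degree at most 3, in the basis {1, x, x^2, x^3} (numpy assumed):

```python
# d/dx as a matrix on polynomials of degree <= 3, basis {1, x, x^2, x^3}.
import numpy as np

D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)

p = np.array([1, 2, 3, 4], dtype=float)  # p(x) = 1 + 2x + 3x^2 + 4x^3
print(D @ p)                             # [2. 6. 12. 0.] = 2 + 6x + 12x^2
```

On an infinite-dimensional space, such as all smooth functions, the same idea gives an "infinite matrix", which is part of why the derivative is the textbook example of a linear operator.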
Hi, everyone. I read Axler's Linear Algebra Done Right earlier this summer, and he consistently referred to a linear transformation from a vector space into the same space as an operator, which made a lot of sense in context. However, I am taking my first graduate linear algebra course this fall, and the textbook that course uses seems to use the term endomorphism to convey the same idea. Is there a significant difference between these terms? I was speaking with a friend about this, and he told me that endomorphism is more than likely the more general term for the concept, but he wasn't sure. I am interested in understanding the nuance in using one term over the other. The question might be pedantic to a degree, but I'm just genuinely curious.
Hi all,
I am going through Hoffman and Kunze and I am confused by the switch between chapters 3 and 5 to how linear operators are represented in matrix form.
In chapter 3, the matrix representation A of a linear operator T on a vector space V has entries A(i, j) = f_i (Ta_j), where a_j are basis vectors of V and f_i are the dual basis vectors.
In chapter 5, though, the matrix representation A of a linear operator T on a free K-module V has entries A(i, j) = f_j (Tb_i), where b_i are basis elements of V and f_j are the dual basis elements. Presumably, then, we are to view the basis elements as "row vectors."
Why do the authors make this switch? Is it simply to make it more convenient to treat multilinear functions as acting on "row vectors" of a matrix?
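One possible reading (this is my guess at the rationale, not something Hoffman and Kunze state): the two conventions are transposes of each other. With A(i, j) = f_i(T a_j), the coordinate column of Tx is A times the coordinate column of x, which matches the usual column-vector equation y = Ax. With A(i, j) = f_j(T b_i), the i-th row of A is the coordinate row of T b_i, which matches viewing a matrix as a stack of row vectors; that is convenient in the determinant chapter, where multilinear functions are applied to the rows of a matrix.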
Hello all, I will link the question and some extra info, but the only part I'm struggling with is the one involving the uniform boundedness theorem: showing that for our T_n there is a c_x > 0, etc.
Here is our set of functions : https://gyazo.com/04c395fbdf0b60590656d43e0d2b3224
And here is the question I'm struggling with (2ii, I've done 2i):
https://gyazo.com/38f129f77a7ae0ca64470efb53865fdb
The only example I have using the uniform boundedness theorem shows that a normed space is not a Banach space, but I'm not sure how to apply that in the context of this question. Thanks!
My current answer is to find all the roots of the characteristic polynomial, and if one of them satisfies the equation Ax = ax, where 'A' is the operator's matrix, 'x' is the vector itself, and 'a' is one of the roots, then 'x' is an eigenvector.
But I think this is a bad answer, because what if we have, say, 100 roots? Sure, it's unlikely this will happen in real life, but just hypothetically speaking. Basically, I think there is a better way to do this. And maybe my current answer is just wrong, who knows. Any help is appreciated. TIA.
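A simpler test that avoids computing roots entirely (standard linear algebra, sketched with numpy): a nonzero x is an eigenvector of A exactly when Ax is a scalar multiple of x, and you can check that directly.

```python
# "Is x an eigenvector of A?" without computing any roots:
# a nonzero x qualifies iff A @ x is parallel to x.
import numpy as np

def is_eigenvector(A, x, tol=1e-10):
    return np.linalg.matrix_rank(np.column_stack([x, A @ x]), tol=tol) <= 1

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
print(is_eigenvector(A, np.array([1.0, 0.0])))  # True  (eigenvalue 2)
print(is_eigenvector(A, np.array([1.0, 1.0])))  # False
```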
I'm not sure how to do it. My first thought was to chase the eigenvectors x that correspond to some eigenvalue, using the fact that if A(x) = L*x then A^2(x) = L^2 * x, but I'm not sure if this is the route to go, nor am I sure how to finish it. I'm really looking for any input here; all help will be appreciated.
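For what it's worth, the fact you're leaning on does hold and is easy to check numerically (numpy assumed):

```python
# If A x = L x, then A^2 x = L^2 x.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
L, V = np.linalg.eig(A)
for i in range(2):
    x = V[:, i]
    print(np.allclose(A @ (A @ x), L[i]**2 * x))  # True, True
```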