Linear Algebra: For my HW assignment I'm confused about what to do for 2b. I found a guide on Stack Overflow for the case where the basis isn't the natural basis, but I don't know what to do if it is. My thought would be that it's just the matrix of coefficients of the linear operator… but that seems too simple. reddit.com/user/JMoneyG02…
πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/JMoneyG0208
πŸ“…︎ Dec 05 2021
🚨︎ report
Determining whether an operator is linear

Knuckle-dragging engineering student here, getting twisted up by mathematics definitions.

I completely understand the additivity and homogeneity rules:

L(f(x)+g(x)) = L(f(x)) + L(g(x)); L(kf(x)) = kL(f(x))

Totally understood, basically the same as linear functions. I have this problem from the text that I think may just be poorly written. It's asking to determine whether the following operator is linear (using dummy numbers here), where D = d/dx, D^2 = d^2/dx^2:

8x^3D^2 + 2xD + 7

That's all it gives me. How is the operator supposed to be applied? Just the same as multiplying (f(x)+g(x)), where it's obviously linear? Or do I substitute in for x, where it's obviously nonlinear? Or assume D ≡ d/dx (f(x)) and plug in (f(x)+g(x)) after each differential operator for f(x), where the constant makes it nonlinear? No guidance in the text whatsoever, and online resources conflict.
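For what it's worth, under the usual textbook reading the trailing 7 means "multiply by 7" (i.e. 7I), so L[f] = 8x^3 f'' + 2x f' + 7f, which is linear. A quick sympy sanity check of that reading (the interpretation of the constant term is an assumption on my part, not something the text states):

```python
import sympy as sp

x, k = sp.symbols("x k")
f = sp.Function("f")(x)
g = sp.Function("g")(x)

# Assumed reading: the trailing 7 is 7*I (pointwise multiplication by 7),
# so L[u] = 8x^3 u'' + 2x u' + 7u
def L(u):
    return 8 * x**3 * sp.diff(u, x, 2) + 2 * x * sp.diff(u, x) + 7 * u

# Additivity: L[f + g] - (L[f] + L[g]) should simplify to 0
assert sp.simplify(L(f + g) - (L(f) + L(g))) == 0
# Homogeneity: L[k f] - k L[f] should simplify to 0
assert sp.simplify(L(k * f) - k * L(f)) == 0
```

If the 7 were instead read as "add the constant function 7" (L[f] = ... + 7), the map would be affine but not linear, since L[0] ≠ 0.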

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/_ginj_
πŸ“…︎ Oct 22 2021
🚨︎ report
Is there a linear operator that satisfies certain condition

Reading my linear algebra notes I found the following question: is there an invertible linear operator T in L(V) such that X_T = (-1)^n t^n?

Note: V is a vector space with finite dimension.

πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/finball07
πŸ“…︎ Oct 04 2021
🚨︎ report
Quick Question! Is this sentence accurate? "the waves of quantum physics are virtual complex-valued probability amplitudes whose superpositions of the position operator generate infinite-dimensional Hilbert spaces which evolve in accordance with SchrΓΆdinger's linear equation"

I'm writing my PhD thesis in literary studies, and trying to make sure my scientific info is accurate.

Obviously, I recognize that the compound structure of the sentence probably simplifies the formalism of quantum mechanics beyond the point of usefulness, but still, is it wrong? Does it miss the relationship between various concepts?

If you can think of a better way to express all of the above in one sentence I won't say no to reading it.

Thanks!

πŸ‘︎ 37
πŸ’¬︎
πŸ‘€︎ u/Dexav
πŸ“…︎ Aug 12 2021
🚨︎ report
[D] Taking a GIF trip through linear operator space from the identity function to the 2D Discrete Fourier transform. (OC) On the potential of fractional operators.

https://i.redd.it/thvnxea9lao61.gif

So this is more of just a fun post: I'm curious if anyone has any applications or ideas that use fractional operators. I also wanted to show off my GIF.

A bit of background on what's in the GIF and fractional operators in general.

Recall that the 2D discrete Fourier transform, F, is a linear operator on a space of matrices (or d by d arrays). If we apply the Fourier transform 4 times we get back the identity, i.e. F^4 = F(F(F(F))) = I. Note that people have figured out how to let these exponents take non-integer values! This corresponds to fractional Fourier transforms. So for example the half Fourier transform F^(1/2) is something that functions like the square root of the Fourier transform. If we let G = F^(1/2) then we have that G(G) = F, or maybe a bit more concretely, for any matrix/image X, we have that G(G(X)) = F(X). These special exponents behave like regular old exponents in a lot of ways, and it has been observed that one can construct F^a for arbitrary real-valued a.

The GIF I've posted takes an image of a pagoda X and applies increasing fractional degrees of Fourier transforms. Specifically, it shows F^a (X) as a goes from 0 to 4.
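For readers who want to experiment, one common construction of F^a is to take fractional powers of the eigenvalues of the DFT matrix (this is an assumption on my part; the OP may have used a different method, and branch choices differ between definitions). A 1D sketch:

```python
import numpy as np

def dft_matrix(n):
    """Unitary 1D DFT matrix; satisfies F^4 = I."""
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    return np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

def frac_power(F, a):
    """F^a via eigendecomposition (one common construction;
    other definitions pick different eigenvalue branches)."""
    lam, V = np.linalg.eig(F)
    return V @ np.diag(lam**a) @ np.linalg.inv(V)

F = dft_matrix(8)
I = np.eye(8)

# applying the DFT four times is the identity
assert np.allclose(np.linalg.matrix_power(F, 4), I)

# the "half" transform composed with itself recovers F
G = frac_power(F, 0.5)
assert np.allclose(G @ G, F)

# a = 0 gives the identity, a = 1 gives F itself
assert np.allclose(frac_power(F, 0.0), I)
assert np.allclose(frac_power(F, 1.0), F)
```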

Links, more on fractional operators

Conclusion
I'm curious if anyone has any interesting ideas...

πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/robertavdm
πŸ“…︎ Mar 21 2021
🚨︎ report
What would be some real-world applications of the concept of "linear operators"?

I'm very new to high school math, and I've been doing a ton of research on linear operators. I'm trying to think of some real-world applications (as opposed to just applying the theorem) that would benefit from the knowledge I have of linear operators.

I'm not asking for a linear operator that can solve quadratic equations, I'm talking about a linear operator that can solve linear equations. I know that there are linear operators that can solve quadratic and linear equations, but I don't know much about how to solve linear equations.

If I do find a linear operator that can solve linear and quadratic equations, what would be some real-world applications that would benefit from that knowledge?

Thanks in advance!

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/mathGPT2Bot
πŸ“…︎ Dec 10 2020
🚨︎ report
Show that L is a linear operator.

I am unsure of how one actually shows that L is a linear operator when:

(a,b,c) ---> (2a-2b+c, a-2b+c, -2a+3b-c)

What is written above is supposed to be the linear transformation taking the three-dimensional vector (a,b,c) to the other 3D vector on the right.
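A numerical spot check (not a proof, and taking the last entry to be -2a+3b-c, which is my reading of the formula) might look like this; the actual proof is the same two checks done with arbitrary symbols:

```python
import numpy as np

def L(v):
    a, b, c = v
    # third component assumed to be -2a + 3b - c
    return np.array([2*a - 2*b + c, a - 2*b + c, -2*a + 3*b - c])

rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)
k = 3.7

# additivity and homogeneity on sample vectors
assert np.allclose(L(u + v), L(u) + L(v))
assert np.allclose(L(k * u), k * L(u))

# equivalently, L is multiplication by a fixed matrix whose
# columns are L(e1), L(e2), L(e3)
M = np.column_stack([L(e) for e in np.eye(3)])
assert np.allclose(L(u), M @ u)
```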

I hope you can help,
best regards,
Robin.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/RobinHyaku
πŸ“…︎ Mar 05 2021
🚨︎ report
Is there a sequel to Kato's Perturbation Theory for Linear Operators?

One book I'd eventually like to read is Kato's Perturbation Theory for Linear Operators. From what I've read it is incredibly dense and incredibly good. It's one of those books that starts with nothing and builds an incredible castle of math starting with the foundation and eventually reaching the stars.

However, one of the things that has been keeping me from diving in (besides never having enough time) is that the book is a bit old-fashioned; the first edition was in 1966. I've looked around to see if anyone else has written a more recent book on the subject, but so far I haven't found anything that made me think: "This is better than Kato".

Has anyone found a book that is a newer take on Kato's Perturbation Theory for Linear Operators? Or has anyone else read Kato's book and formed an opinion?

πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/Topoltergeist
πŸ“…︎ Apr 05 2021
🚨︎ report
[R] Brown University Researchers Introduce DeepONet, A Model Based On Deep Neural Network, To Approximate Both Linear and Nonlinear Operators (Paper and Github link included)

Researchers from Brown University have built DeepONet, a novel neural network-based model that can efficiently learn both linear and nonlinear operators. This novel model was inspired by earlier studies led by researchers at Fudan University.

A continuous function does not have any abrupt changes in value. More precisely, small changes in a continuous function's output can be assured by restricting to sufficiently small changes in its input. Many studies show that artificial neural networks (ANNs) are highly efficient approximators of continuous functions. However, not many studies have focused on their ability to approximate nonlinear operators.

Inspired by the papers published by Chen and Chen at Fudan University, which discuss functional approximation using a single layer of neurons, the researchers decided to explore the possibility of building a neural network that could approximate both linear and nonlinear operators.

Summary: https://www.marktechpost.com/2021/04/15/brown-university-researchers-introduce-deeponet-a-model-based-on-deep-neural-network-to-approximate-both-linear-and-nonlinear-operators/

Paper: https://www.nature.com/articles/s42256-021-00302-5

Github: https://github.com/lululxvi/deeponet

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/techsucker
πŸ“…︎ Apr 15 2021
🚨︎ report
DeepONet: A deep neural network-based model to approximate linear and nonlinear operators techxplore.com/news/2021-…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Sorin61
πŸ“…︎ Apr 12 2021
🚨︎ report
Linear operators and matrices

So I'm trying, and currently failing, to understand the idea of linear operators and matrices.

I've been trying to comprehend the following page of my mathematical methods text for engineers and physicists, to no avail.

https://i.imgur.com/zBZ1p3w.jpg

https://m.imgur.com/SDFtcQp

So the way I'm trying to understand it: I have some vector a. I operate on this vector with A, which transforms it into another vector y in the same vector space.

And I can have a basis e_i in my vector space, where all the basis vectors are linearly independent, and I can represent my vector a in this basis as having some component along each of the basis vectors.

I can apply A to one of these basis vectors, and the result is eq. 8.23. But at this point I'm pretty lost: I'm unsure what j means, and I'm basically confused.

If anyone could offer any help or recommend any resources it would be very much appreciated, as I feel like I've been reading the same page for hours on end.
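For reference, eq. 8.23 in texts of this kind is usually some version of the following (my guess at the book's convention): apply A to the j-th basis vector and expand the result back in the same basis,

```latex
\mathcal{A}\,\mathbf{e}_j \;=\; \sum_{i} A_{ij}\,\mathbf{e}_i ,
\qquad\text{so that}\qquad
\mathcal{A}\,\mathbf{a} \;=\; \sum_j a_j\,\mathcal{A}\,\mathbf{e}_j
  \;=\; \sum_i \Big( \sum_j A_{ij} a_j \Big)\,\mathbf{e}_i .
```

Here j labels which basis vector is being acted on, i indexes the components of the result, and the numbers A_{ij} are exactly the entries of the matrix representing the operator.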

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/robej
πŸ“…︎ Feb 04 2021
🚨︎ report
Complementary solution to the formal adjoint of a linear differential operator

I have a linear differential operator and its complementary solution u. Is there a way to show that the complementary solution of the adjoint operator is u/(p(x)*W), where p is the coefficient of the second-order derivative and W is the Wronskian?

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/MindlessTie2
πŸ“…︎ Mar 25 2021
🚨︎ report
Finding the Fundamental Solution to a Linear Operator

So, I don't know how I would go about finding the fundamental solution, that is, a generalized function u, for the operator L = -D^2 + I (where I is the identity, D = d/dx is the differential operator, and Ξ΄_0 is the Dirac delta) such that

Lu = Ξ΄_0

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/idaelikus
πŸ“…︎ Mar 25 2021
🚨︎ report
[Abstract Linear Algebra] Proving the characteristic polynomial of T evaluated at T is 0, given T is a linear operator on a finite dimensional vector space V over some subfield of the complex numbers

Before anyone says anything: we unfortunately CAN'T use the Cayley-Hamilton theorem (we're proving a special case of it, actually).

So this is part of a bigger proof on my homework. I have already reliably proven that any operator S on a complex vector space has its characteristic polynomial evaluate to 0 at S.

We assume the theorem that says if B is a basis for V, then the vector space of matrices Mn(F) is isomorphic to L(V,V) by taking T to its matrix representation in Mn(F) with respect to the basis B.

Can anyone check over part of my proof for me? I feel as though I'm overlooking something when it comes to viewing polynomials in F[t] inside sets like C[t].

excerpt of my proof found here

Any help would be greatly appreciated

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/lopopon
πŸ“…︎ Mar 10 2021
🚨︎ report
What are some examples of additive but non-linear operators?

Are there any operators that satisfy the condition L(f+g) = L(f) + L(g) but not L(cf) = cL(f) when c is an imaginary number?

πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Dec 16 2020
🚨︎ report
If P(D) is a linear differential operator and y satisfies P(D)y = 0, how can we show that xy satisfies P(D)^2 (xy) = 0?

In my differential equations class we learned that when you have a second-order homogeneous ODE whose differential operator is a square, the substitution y = e^rx only produces one solution. It's not too difficult to find the other solution (xe^rx) in this case because the order is low, but when we have a higher-order operator, how do we show that multiplying the solution by x produces a new solution?
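A concrete sympy check for the simplest case P(D) = D - r (the choice of operator and root here is illustrative only). The general mechanism behind it is the commutator identity P(D)(xy) = x P(D)y + P'(D)y, which makes P(D)^2 kill xy whenever P(D) kills y:

```python
import sympy as sp

x = sp.symbols("x")
r = sp.Rational(3, 2)              # sample root (illustrative choice)

def P(u):                          # P(D) = D - r
    return sp.diff(u, x) - r * u

y = sp.exp(r * x)                  # solves P(D) y = 0
assert sp.simplify(P(y)) == 0

# x*y is not annihilated by P(D) alone...
assert sp.simplify(P(x * y)) != 0
# ...but it is annihilated by P(D)^2
assert sp.simplify(P(P(x * y))) == 0
```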

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/okaythanksbud
πŸ“…︎ Feb 27 2021
🚨︎ report
Linear operators and matrices

I asked a similar question yesterday; however, my understanding has (I think) improved, and now my question has slightly changed.

So my understanding of linear transformations and matrices is this: say I'm in 2D with the standard basis i = <1,0> and j = <0,1>, and I have some vector in this space, say V = <2,3> = 2i + 3j. Then I have some operator A, which is a matrix. The first column of the matrix tells me where my i ends up, and the second column tells me where my j ends up (in the original basis, that is, I think). And since it's a linear transformation, my V is still going to be 2i + 3j, but now i and j have changed, so I can find my new V by substituting in my new i and j.

However, I'm getting confused and bogged down by the math jargon; see eq. 8.23:

https://i.imgur.com/SDFtcQp.jpg

I'm frankly struggling to connect my (attempted) intuitive understanding to this more rigorous way. If anyone could help explain what this equation is telling me, it would be much appreciated.
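A tiny numpy illustration of the "columns are images of the basis vectors" picture described above (the matrix here is a made-up example):

```python
import numpy as np

# a sample 2x2 operator (hypothetical numbers)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

# column 0 is where i-hat lands, column 1 is where j-hat lands
assert np.allclose(A @ i_hat, A[:, 0])
assert np.allclose(A @ j_hat, A[:, 1])

# linearity: V = 2i + 3j goes to 2*(A i) + 3*(A j)
V = 2 * i_hat + 3 * j_hat
assert np.allclose(A @ V, 2 * A[:, 0] + 3 * A[:, 1])
```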

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/robej
πŸ“…︎ Feb 05 2021
🚨︎ report
Commutation of parity and linear translation operator

I was able to identify intuitively that the parity and linear translation operators must commute, as Pf(x-a) = f(-x-a). Is there any way to show this mathematically?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Apocalypse-
πŸ“…︎ Oct 23 2020
🚨︎ report
Are there any general properties/theorems/tricks for finding the inverse of a sum of linear operators?

Suppose you have the equation L_1 f + L_2 f = Q and you want to find f = (L_1 + L_2)^{-1} Q. Is there anything useful to help find this inverse?

Similarly, what can be said about the eigenfunctions of a sum of linear operators, if anything at all?
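One classical tool, when L_2 is "small" relative to L_1, is the Neumann series (L_1 + L_2)^{-1} = sum_k (-L_1^{-1} L_2)^k L_1^{-1}, valid when ||L_1^{-1} L_2|| < 1. A finite-dimensional sketch (the matrices here are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
L1 = 4.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))  # well-conditioned
L2 = 0.3 * rng.normal(size=(n, n))                    # "small" vs L1
Q = rng.normal(size=n)

# Neumann series: (L1+L2)^{-1} = sum_k (-L1^{-1} L2)^k L1^{-1},
# convergent when ||L1^{-1} L2|| < 1
L1_inv = np.linalg.inv(L1)
M = -L1_inv @ L2
assert np.linalg.norm(M, 2) < 1

f = np.zeros(n)
term = L1_inv @ Q
for _ in range(200):        # accumulate sum_k M^k L1^{-1} Q
    f += term
    term = M @ term

assert np.allclose(f, np.linalg.solve(L1 + L2, Q))
```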

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/NukeBeach
πŸ“…︎ Sep 16 2020
🚨︎ report
Can we define an inner product on the space of bounded Linear operators from Banach space X to itself, i.e B(X)?

I'm trying to work out a solution to a differential equation whose solutions are Banach space-valued functions on [0, ∞), that is, f ∈ C(R+, X), where X is a Banach space of scalar-valued functions. I am trying to prove existence of a weak solution via the Lax-Milgram theorem, but to apply Lax-Milgram I need my vector space to be a Hilbert space. Also, if someone has the book "Banach and Hilbert Spaces of Vector-Valued Functions: Their General Theory and Applications to Holomorphy", please post it here; I can't find a free PDF.

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/aliinpower
πŸ“…︎ Jun 08 2020
🚨︎ report
Difference between identity linear map and linear operator

Both of these are linear transformations: the identity linear map [I : V --> V] and a linear operator [T : V --> V]. The book says the identity linear map arises when V maps into itself, whereas a linear operator is formed when domain = range. Then how are they different? I can't get it.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/fresh_A_F
πŸ“…︎ Aug 04 2020
🚨︎ report
[Linear Algebra] Adjoint of Linear Operator

Hi, I am an engineering student independently going through Friedberg, Insel, and Spence's Linear Algebra textbook. This is my first exposure to upper-level math. It had been going well until I got to section 6.3, the Adjoint of a Linear Operator. I understand the definition, that the adjoint satisfies the property <x, T(y)> = <T*(x), y>. However, I just do not understand what the adjoint of an operator means. What does it do to the operator? What does it do to the inner product space it acts on?
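One concrete anchor: in C^n with the standard inner product, the adjoint is just the conjugate transpose, so the defining property <x, T(y)> = <T*(x), y> can be checked directly (a numerical illustration, not a substitute for the general theory):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)

def inner(u, v):
    # standard inner product on C^n, conjugate-linear in the first slot
    return np.vdot(u, v)

# with this inner product, the adjoint is the conjugate transpose
T_adj = T.conj().T
assert np.isclose(inner(x, T @ y), inner(T_adj @ x, y))
```

Geometrically, T* is the unique operator that "moves T to the other side" of every inner product; properties of T (self-adjoint, normal, unitary) are statements about how T relates to this partner.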

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/chick3234
πŸ“…︎ Jun 06 2020
🚨︎ report
If a derivative is a linear operator, and a linear operator can be represented by a matrix, how would I represent a derivative as a matrix?

I know for a fact that a derivative is a linear operator. I've heard that a linear operator can be written as a matrix, or something to the effect that a matrix is an instance of a linear operator under a certain basis.

I might be abusing something here, but does that mean I can represent a derivative as a matrix?
If not, where did I go wrong? If so, what would this matrix look like?
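Yes; on a finite-dimensional space of polynomials this works out explicitly. A sketch on polynomials of degree at most 3 with the monomial basis {1, x, x^2, x^3} (the basis choice is mine):

```python
import numpy as np

# d/dx maps x^k -> k x^(k-1), so column k has entry k in row k-1
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

# p(x) = 5 + 4x - x^3, stored as coefficients in this basis
p = np.array([5., 4., 0., -1.])

# D @ p gives the coefficients of p'(x) = 4 - 3x^2
assert np.allclose(D @ p, [4., 0., -3., 0.])

# D is nilpotent: differentiating four times kills any cubic
assert np.allclose(np.linalg.matrix_power(D, 4), 0)
```

On an infinite-dimensional space (all polynomials, or smooth functions) the same idea gives an "infinite matrix", which is why the finite truncation is the standard classroom example.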

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/CoffeeVector
πŸ“…︎ Mar 31 2020
🚨︎ report
[Late Undergrad / Early Grad Linear Algebra] Terminology: Endomorphism vs. Operator?

Hi, everyone. I read Axler's Linear Algebra Done Right earlier this summer, and he constantly referred to a linear transformation from a vector space into the same space as an operator, which made a lot of sense in context. However, I am taking my first graduate linear algebra course this fall, and the textbook that course uses seems to use the term endomorphism to convey the same idea. Is there a significant difference between these terms? I was speaking with a friend about this, and he told me that endomorphism is more than likely the more general term for the concept, but he wasn't sure. I am interested in understanding the nuance in using one term over the other. The question might be pedantic to a degree, but I'm just genuinely curious.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/TheBlueSpirit67
πŸ“…︎ Jun 28 2020
🚨︎ report
[University] Question about Hoffman and Kunze - Inconsistency Between Chapters 3 and 5 on Matrix Representation of Linear Operators?

Hi all,

I am going through Hoffman and Kunze and I am confused by the switch between chapters 3 and 5 to how linear operators are represented in matrix form.

In chapter 3, the matrix representation A of a linear operator T on a vector space V has entries A(i, j) = f_i (Ta_j), where a_j are basis vectors of V and f_i are the dual basis vectors.

In chapter 5, though, the matrix representation A of a linear operator T on a free K-module V has entries A(i, j) = f_j (Tb_i), where b_i are basis elements of V and f_j are the dual basis elements. Presumably, then, we are to view the basis elements as "row vectors."

Why do the authors make this switch? Is it simply to make it more convenient to treat multilinear functions as acting on "row vectors" of a matrix?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/legendariers
πŸ“…︎ Sep 04 2020
🚨︎ report
The Nature Of The Mathematical Universe: Did Bounded Linear Operators Exist In The Time Of Dinosaurs? billwadge.wordpress.com/2…
πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/mystikaldanger
πŸ“…︎ May 03 2019
🚨︎ report
[Functional Analysis] Linear operators and Uniform Boundedness Theorem

Hello all, I will link the question and some extra info, but the only part I'm struggling with is the one involving the uniform boundedness theorem: showing that for our T_n there is a c_x > 0, etc.

Here is our set of functions : https://gyazo.com/04c395fbdf0b60590656d43e0d2b3224

And here is the question I'm struggling with (2ii, I've done 2i):

https://gyazo.com/38f129f77a7ae0ca64470efb53865fdb

The only example I have using the uniform boundedness theorem shows that a normed space is not a Banach space, but I'm not sure how to apply that in the context of this question. Thanks!

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Jayjenken
πŸ“…︎ Dec 01 2019
🚨︎ report
[Linear Algebra] How to find out if a vector is an eigenvector to a given operator

My current answer is to find all the roots of the characteristic polynomial, and if one of them satisfies the equation Ax = ax (where 'A' is the operator matrix, 'x' is the vector itself, and 'a' is one of the roots), then 'x' is an eigenvector.

But I think this is a bad answer, because what if we have, say, 100 roots? Sure, it's unlikely this will happen in real life, but hypothetically speaking. Basically, I think there is a better way to do this. And maybe my current answer is just wrong, who knows. Any help is appreciated. TIA.
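A simpler route, if the question is only about one given vector: check directly that Av is a scalar multiple of v, with no characteristic polynomial at all. A numpy sketch (the tolerance handling is one reasonable choice, not the only one):

```python
import numpy as np

def is_eigenvector(A, v, tol=1e-10):
    """Check whether v is an eigenvector of A directly:
    v != 0 and A @ v is a scalar multiple of v (no root-finding)."""
    v = np.asarray(v, dtype=float)
    if np.linalg.norm(v) < tol:
        return False                      # the zero vector never counts
    w = A @ v
    # candidate eigenvalue from a Rayleigh-style quotient
    lam = np.dot(v, w) / np.dot(v, v)
    return np.linalg.norm(w - lam * v) < tol * max(1.0, np.linalg.norm(w))

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
assert is_eigenvector(A, [1.0, 1.0])      # eigenvalue 3
assert is_eigenvector(A, [1.0, -1.0])     # eigenvalue 1
assert not is_eigenvector(A, [1.0, 0.0])  # not an eigenvector
```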

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/surboi
πŸ“…︎ Apr 22 2020
🚨︎ report
[Linear Algebra] Need help with proving that a linear operator A is diagonalizable if A^3 = A

I'm not sure how to do it. My first thought was to chase the eigenvectors x that correspond to some eigenvalue, considering that if A(x) = L*x then A^2(x) = L^2 * x, but I'm not sure if this is the route to go, nor am I sure how to finish it. I'm really looking for any input here; all help will be appreciated.
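One standard route (a sketch, assuming the scalar field is R or C, so that the polynomial below splits into distinct linear factors):

```latex
% A^3 = A means p(t) = t^3 - t = t(t-1)(t+1) annihilates A, so the
% minimal polynomial of A divides p. Since p has distinct roots, the
% minimal polynomial has distinct roots too, hence A is diagonalizable;
% equivalently, the space splits as
V \;=\; \ker(A) \,\oplus\, \ker(A - I) \,\oplus\, \ker(A + I).
```

The eigenvalue observation in the post fits this: any eigenvalue L must satisfy L^3 = L, so L is 0, 1, or -1.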

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/Gmaster228
πŸ“…︎ Jan 21 2020
🚨︎ report