If we multiply an eigenvector by a matrix and take the norm of the resulting vector, will that be the eigenvalue?

So, for example, if x is an eigenvector of the matrix A, is ||Ax|| the eigenvalue associated with x?
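
A quick sanity check, assuming NumPy (not mentioned in the post): since Ax = lambda x, we have ||Ax|| = |lambda| ||x||, so this recovers the absolute value of the eigenvalue, and only when x is a unit vector.

    import numpy as np
    A = np.array([[2.0, 0.0],
                  [0.0, -3.0]])
    lam, V = np.linalg.eig(A)          # columns of V are unit eigenvectors
    i = np.argmin(lam)                 # pick the eigenpair with lam = -3
    x = V[:, i]
    print(np.linalg.norm(A @ x))       # 3.0 = |lam|, not lam itself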

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Inig039
πŸ“…︎ Dec 19 2021
🚨︎ report
Linear Algebra - Matrix Norms

Can someone explain the relationship between the spectral norm and the Frobenius norm? I know that the Frobenius norm is greater than or equal to the spectral norm, but why?
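
One way to see it, checked numerically below (NumPy assumed): ||A||_F^2 is the sum of all squared singular values, while ||A||_2 is only the largest singular value, so ||A||_2 <= ||A||_F always, with equality when A has rank at most one.

    import numpy as np
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    spectral = np.linalg.norm(A, 2)         # largest singular value
    frobenius = np.linalg.norm(A, 'fro')    # sqrt of sum of squared entries
    print(spectral <= frobenius)            # True
    s = np.linalg.svd(A, compute_uv=False)  # all singular values
    print(np.isclose(frobenius, np.sqrt((s ** 2).sum())))  # True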

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/BeefSquatchKing
πŸ“…︎ Dec 13 2021
🚨︎ report
How to write the L2 norm of the product of a matrix and a vector in terms of their individual L2 norms?

Let M be a square matrix in R^nxn, I the identity matrix, and x a column vector in R^n.

How to write the expression

|| (M - I) x||^2

in terms of individual L2 norms of M and x?

Can I say that

|| (M - I) x||^2 <= || M - I ||^2 || x ||^2

always holds?
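
For any induced (operator) norm this holds by definition, since ||Bx|| <= ||B|| ||x||; and if a bound purely in terms of ||M|| is wanted, the triangle inequality adds ||M - I|| <= ||M|| + 1. A quick spot-check with the spectral norm, assuming NumPy:

    import numpy as np
    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 5))
    x = rng.standard_normal(5)
    B = M - np.eye(5)
    lhs = np.linalg.norm(B @ x) ** 2
    rhs = np.linalg.norm(B, 2) ** 2 * np.linalg.norm(x) ** 2
    print(lhs <= rhs + 1e-12)               # True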

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Last_Farmer1746
πŸ“…︎ Jun 01 2021
🚨︎ report
Matrix norms on tensor products

Hey

I am surprised that, apparently, ||A ⊗ B|| = ||A|| ||B|| does not hold in general for all matrix norms... there seem to be some mathematical subtleties that need to be considered here. On the other hand, I heard it is true for any submultiplicative norm.

It would be really nice if you could give me a quick (physicist-friendly) answer as to when it holds and when it does not. In particular, does it hold for Ky Fan norms? They are submultiplicative. The matrices in question are Hermitian and positive. And is there some source I can cite for this property?

Thank you guys :)
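
For the spectral norm at least, equality always holds, since the singular values of A ⊗ B are exactly the pairwise products sigma_i(A) * sigma_j(B). A quick numerical check (mine, NumPy assumed):

    import numpy as np
    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((4, 4))
    lhs = np.linalg.norm(np.kron(A, B), 2)
    rhs = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
    print(np.isclose(lhs, rhs))     # True for the spectral norm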

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/herrschoftszeitn
πŸ“…︎ Mar 13 2021
🚨︎ report
Command for the Norm of a nxnxn matrix

I am trying to find a command that will let me take the norm of an nxnxn matrix, but my searching online hasn't led me anywhere. Does anyone know of a command that will do it?
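
The post doesn't say which software is in use; if NumPy is an option (an assumption on my part), its default norm handles N-D arrays by flattening them:

    import numpy as np
    T = np.arange(27, dtype=float).reshape(3, 3, 3)
    print(np.linalg.norm(T))           # 2-norm of the flattened array
    print(np.sqrt((T ** 2).sum()))     # same value, computed by hand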

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/jimwilki
πŸ“…︎ Feb 15 2021
🚨︎ report
[Linear algebra] Having tough time proving matrix norm identity

Hi! Suppose that a ∈ R^n and A = a * a^T. We want to show that the matrix 2-norm equals the square of the vector 2-norm: ||A||_2 = (||a||_2)^2, or equivalently that the LHS is equal to the sum of squares of the components a_i.

Knowing that the matrix norm is ||A|| = sup{||Ax|| : x ∈ R^n and ||x|| = 1}, we can expand ||Ax|| into a sum inside the square root. Next I will write the sum inside the square root in Python-like syntax, as I do not know how to write it in LaTeX.

s = 0
for i in range(0, n):
    t = 0
    for j in range(0, n):           # fixed: this colon was missing
        t += x[j] * a[j]            # inner sum is the dot product a.x
    s += (t * t) * a[i] * a[i]      # contributes (a.x)^2 * a_i^2

In that sum, if we can show that the innermost sum is equal to the square of the term a_i, we are done, as then the LHS is equal to the RHS. But here is the problem: that sounds absurd! We would need to show that an individual term is equal to a sum that does not depend on that term's index.

How should I proceed?
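
A hint plus a numerical check (both mine, not from the thread): since Ax = (a.x) a, we get ||Ax|| = |a.x| ||a||, and Cauchy-Schwarz gives |a.x| <= ||a|| for unit x, with equality at x = a/||a||, so no term-by-term identity is needed.

    import numpy as np
    rng = np.random.default_rng(3)
    a = rng.standard_normal(5)
    A = np.outer(a, a)                     # A = a a^T
    print(np.isclose(np.linalg.norm(A, 2),
                     np.linalg.norm(a) ** 2))   # True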

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/wabhabin
πŸ“…︎ Apr 26 2020
🚨︎ report
If the series of norms converges then the matrix series converges?

Hi, I heard during our class at university that there is a theorem stating that, given a sequence of matrices (A_k), if the series of ‖A_k‖ converges then the series of the A_k converges too. Our book doesn't state this theorem anywhere, and googling around, the closest I found was a result from the book "Matrix Analysis" (Roger A. Horn).

Can you please confirm this is correct (there is no proof there)? In that case, what I'm looking for would follow by picking a_k = 1 for every k, right? Thanks in advance!
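
For what it's worth, the theorem is just "absolute convergence implies convergence", which holds in any complete normed space, and matrix spaces are complete. A minimal illustration (mine, NumPy assumed) using the Neumann series, where sum ||A^k|| is dominated by a geometric series:

    import numpy as np
    A = np.array([[0.1, 0.2],
                  [0.3, 0.1]])                # ||A||_inf = 0.4 < 1
    S = np.zeros_like(A)
    P = np.eye(2)
    for _ in range(200):                      # partial sums of sum A^k
        S += P
        P = P @ A
    print(np.allclose(S, np.linalg.inv(np.eye(2) - A)))   # True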

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/BaffoRasta
πŸ“…︎ Jun 18 2020
🚨︎ report
[Linear Algebra] Stuck at a matrix norm proof involving eigenvalues

Hi, I am currently stuck at proving that the 2-norm of a symmetric real matrix A ∈ R^(nxn) is equal to the largest absolute eigenvalue of A. The only hint that I have been given is that

||A||_2 = (lambda_max(A*A^T))^(1/2).

I do not see any value in the hint, so my idea was to prove this via ||A||_2 = sup{||Ax|| : x ∈ R^n, ||x|| = 1}. Right now the only parts that I am struggling with are i.) to show/argue that the supremum vector is always an eigenvector, to get

||Ax||_2 = ||lambda*x||_2 = abs(lambda) * ||x||_2 = abs(lambda), since ||x||_2 = 1,

and ii.) to argue that the maximization problem is the same as finding the largest aforementioned lambda.

How should I proceed?
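
For what it's worth, the hint does the heavy lifting (a sketch, mine): for symmetric A, A*A^T = A^2, whose eigenvalues are lambda_i^2, so the hint collapses to sqrt(max_i lambda_i^2) = max_i |lambda_i|. A NumPy spot-check:

    import numpy as np
    rng = np.random.default_rng(4)
    B = rng.standard_normal((4, 4))
    A = (B + B.T) / 2                          # a random symmetric matrix
    print(np.isclose(np.linalg.norm(A, 2),
                     np.abs(np.linalg.eigvalsh(A)).max()))   # True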

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/wabhabin
πŸ“…︎ May 01 2020
🚨︎ report
How to compute the L1 norm of a matrix?

I'm getting quite confused by the computation of the L1 norm. I know that for a vector, you just take the sum of the absolute values. But how do you compute it for a matrix?

Here's an example of a matrix:

https://preview.redd.it/cmwh0ogn25t41.jpg?width=241&format=pjpg&auto=webp&s=ebc54d00f6cb663bcd7cc8bc7437dd1582c64586

How do we come up with the L1 norm of 8? For your reference, this is computed using the numpy package of Python.

https://preview.redd.it/8d2yk6rx25t41.jpg?width=362&format=pjpg&auto=webp&s=843a141fdbb888c5cd2355f51683168cd5f72288
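
The matrix itself is only in the screenshots, but numpy's behavior can be shown on a hypothetical example (mine): for matrices, ord=1 gives the norm induced by the vector 1-norm, which is the maximum absolute column sum, not the sum of all entries.

    import numpy as np
    A = np.array([[1.0, -2.0],
                  [3.0,  5.0]])
    print(np.linalg.norm(A, 1))          # 7.0
    print(np.abs(A).sum(axis=0).max())   # 7.0: column sums are 4 and 7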

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/coloriz95
πŸ“…︎ Apr 16 2020
🚨︎ report
Infinity norm of a matrix ambiguity [University]

https://imgur.com/a/5b3V7wj

In the lecture notes of the neural networks course I'm attending, I found this definition of the infinity norm of a matrix. However, from previous courses I recall the infinity norm of a matrix as

\|A\|_{\infty} = \max_i \sum_{j=1}^{m} |a_{i,j}|.

Which is the correct definition? Are the two definitions related somehow?

EDIT: I corrected the absolute value in the formula.
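
I can't see the linked definition, but the usual source of this ambiguity is the entrywise max norm max_ij |a_ij| versus the induced infinity norm, the maximum absolute row sum; NumPy computes the latter. A small comparison (mine):

    import numpy as np
    A = np.array([[1.0, -4.0],
                  [2.0,  1.0]])
    print(np.abs(A).max())               # 4.0  (entrywise max)
    print(np.linalg.norm(A, np.inf))     # 5.0  (max row sum: |1| + |-4|)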

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Arabum97
πŸ“…︎ Feb 04 2020
🚨︎ report
Linear Algebra Question: Representation changing from sum of 2-Norm of vectors to Frobenius-Norm of Matrix

What is an intuitive way of seeing the transition from (2.71) to (2.72)? I can't convince myself that these two are equal unless I write everything in terms of scalars, which is inefficient.

https://imgur.com/a/yH10oE7
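
The equations are only in the image, but the identity usually behind such steps (a guess on my part, so treat this as a sketch) is that the sum of squared 2-norms of a matrix's columns equals its squared Frobenius norm, since both count every squared entry exactly once:

    import numpy as np
    rng = np.random.default_rng(5)
    A = rng.standard_normal((4, 3))
    col_sum = sum(np.linalg.norm(A[:, j]) ** 2 for j in range(A.shape[1]))
    print(np.isclose(col_sum, np.linalg.norm(A, 'fro') ** 2))   # True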

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/jollytopper
πŸ“…︎ Mar 22 2020
🚨︎ report
Bound on the norm of the product of second moment matrix with random vector

https://mathoverflow.net/questions/351809/is-there-a-bound-on-the-norm-of-the-product-of-second-moment-matrix-with-random

Let $X_1,\dots,X_n$ be vectors in $\mathbb{R}^d$, all inside the unit $\ell_2$ ball. Let $P$ be a vector in the probability simplex $\Delta_n$ with $P_i > 0$ for all $i$. Consider the second moment matrix $\Sigma(P) = \sum_{i=1}^n P_i X_i X_i^\top$, and assume the $X_i$ are such that $\Sigma(P)$ is full rank. Does the following bound always hold? If not, when does it hold?

$$\|\Sigma(P)^{-1}X_j\| \leq \frac{1}{P_j} \quad \forall j \in \{1,\dots,n\}$$

For instance, if $n = d$ and $X_i = e_i$ are the canonical basis vectors of $\mathbb{R}^d$, then this bound holds with equality.
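
The canonical-basis case at the end is easy to verify numerically (my sketch, NumPy assumed): with X_i = e_i, Sigma(P) = diag(P), so Sigma(P)^{-1} e_j = e_j / P_j and the bound is tight:

    import numpy as np
    d = 4
    P = np.array([0.1, 0.2, 0.3, 0.4])       # a point in the simplex
    X = np.eye(d)                             # columns are e_1, ..., e_d
    Sigma = sum(P[i] * np.outer(X[:, i], X[:, i]) for i in range(d))
    for j in range(d):
        v = np.linalg.solve(Sigma, X[:, j])   # Sigma^{-1} e_j
        print(np.isclose(np.linalg.norm(v), 1.0 / P[j]))   # True: equality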

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/sudeepraja
πŸ“…︎ Feb 03 2020
🚨︎ report
The gradient of the matrix norm

I just came across the following proof that requires the gradient of

g(x) = (1/2) (x^T A^T A x) / (x^Tx),

a Rayleigh quotient tied to the matrix 2-norm (its supremum over x ≠ 0 is ||A||_2^2 / 2), where A is an m x n matrix and x is an n-vector.

I've tried evaluating this expression in tensor notation but I cannot seem to get the correct answer! Any help would be great, thanks!
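
In case it helps, a quotient-rule sketch (mine; with B = A^T A, double-check the algebra):

grad g(x) = ( B x (x^T x) - (x^T B x) x ) / (x^T x)^2
          = ( A^T A x - 2 g(x) x ) / (x^T x)

Setting the gradient to zero gives A^T A x = 2 g(x) x, so stationary points of g are eigenvectors of A^T A, which is exactly why maximizing g recovers the (squared, halved) matrix 2-norm.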

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/harmonicChanting
πŸ“…︎ Mar 25 2020
🚨︎ report
How to compute resolvent norm of this matrix?
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/vordankarmir
πŸ“…︎ Mar 23 2019
🚨︎ report
Matrix norm

Can someone explain to me, in an intuitive way, what the norm of a matrix is? I know it's an extension of the definition of a vector norm, but I can't "see" it. Please help, I'd appreciate any answer, thank you.
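
One concrete way to "see" it (a sketch, NumPy assumed): the induced norm of a matrix is the largest factor by which it can stretch a unit vector, so sampling random unit vectors approaches the exact value from below.

    import numpy as np
    rng = np.random.default_rng(6)
    A = np.array([[3.0, 1.0],
                  [0.0, 2.0]])
    best = 0.0
    for _ in range(20000):
        x = rng.standard_normal(2)
        x /= np.linalg.norm(x)                    # random unit vector
        best = max(best, np.linalg.norm(A @ x))   # stretch factor of x
    print(best)                     # creeps up toward ...
    print(np.linalg.norm(A, 2))     # ... the exact induced 2-norm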

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Shiba_ou
πŸ“…︎ Oct 14 2019
🚨︎ report
Prove that ||A^k|| <= ||A||^k for a matrix A where || || denotes the norm/magnitude/length of the matrix

I can show things that are similar to this, but I can't quite show this property. Any help, hints, etc. are greatly appreciated!
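
A hint and a spot-check (both mine): for a submultiplicative norm, ||A^k|| = ||A^(k-1) A|| <= ||A^(k-1)|| ||A||, and induction on k finishes the proof.

    import numpy as np
    rng = np.random.default_rng(7)
    A = rng.standard_normal((4, 4))
    for k in range(1, 6):
        lhs = np.linalg.norm(np.linalg.matrix_power(A, k), 2)
        rhs = np.linalg.norm(A, 2) ** k
        assert lhs <= rhs + 1e-9       # spectral norm is submultiplicative
    print("||A^k|| <= ||A||^k held for k = 1..5")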

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Andym2019
πŸ“…︎ Sep 06 2019
🚨︎ report
[Linear Algebra] If a matrix has norm < 1 then v^T v > v^T M v.

Say M is an nxn matrix with ||M||_2 < 1 and v ≠ 0. Then v^(T)v > v^(T)Mv.

I do some diagonalization and use the fact that the eigenvalues of M are all smaller in norm than 1. But isn't there a simpler way, maybe one without eigenvalues?
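
One eigenvalue-free route (a sketch, mine): by Cauchy-Schwarz, v^T M v <= ||v|| ||Mv|| <= ||M||_2 ||v||^2 < v^T v for v ≠ 0, since ||M||_2 < 1. A numerical spot-check, NumPy assumed:

    import numpy as np
    rng = np.random.default_rng(8)
    M = rng.standard_normal((4, 4))
    M *= 0.9 / np.linalg.norm(M, 2)       # rescale so ||M||_2 = 0.9 < 1
    for _ in range(1000):
        v = rng.standard_normal(4)
        assert v @ v > v @ M @ v          # strict inequality every time
    print("v'v > v'Mv held for 1000 random v")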

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/TransientObsever
πŸ“…︎ Jul 17 2018
🚨︎ report
Proof of the matrix norm identity ||A|| = max_{x != 0} ||Ax||/||x|| = max_{||x|| = 1} ||Ax|| ... prof said this proof requires an understanding of analysis.

http://imgur.com/a/oKDF0

I wrote up the proof there... I'm not sure why it follows from this assumption of the size of sets :(
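
In case it helps, the step usually goes through homogeneity of the norm (a sketch, not necessarily the prof's argument): for any x != 0,

||Ax|| / ||x|| = || A (x / ||x||) ||,   and   || x / ||x|| || = 1,

so every value attained by the quotient over x != 0 is attained by ||Ay|| over unit vectors y, and conversely (take x = y). The two sets of values coincide, hence so do their suprema. The analysis ingredient is compactness of the unit sphere, which turns the supremum into an attained maximum.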

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/DarkTriadBAMN
πŸ“…︎ Mar 13 2017
🚨︎ report
[Linear Algebra/Numerics] Trying to control the norm of an inverse matrix

Hi! I've been stuck on this problem for a while now and it seems really straightforward, but I can't find a decent solution. Any help is appreciated!

Here's what I know (||.|| being the infinity-norm for matrices):

  • Ah is a regular, positive definite matrix, with ||Ah^(-1)|| <= 1/8
  • Ah depends on h > 0, and ||Ah|| -> inf (for h -> 0)

Now how can ||(I + 1/2 Ah)^(-1)|| be controlled?
I wanted to apply the Neumann series bound, but it seems to be applicable only when ||Ah|| can be controlled. Then, while looking for a formula to calculate the inverse matrix, I stumbled upon the recursive definition of Ken Miller. Surely there have to be more elegant approaches?

Thanks!

Edit: One more idea I had. Matrix spaces are finite-dimensional, so all norms are equivalent, right? Then can I switch over to the 2-norm instead, argue that the eigenvalues of the new matrix are just transformed a little, and thus control the norm of the inverse that way?
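
The edit's instinct seems right; here is a small numerical sketch (mine, assuming Ah is symmetric positive definite): in the 2-norm the eigenvalues of I + 1/2 Ah are 1 + lambda_i/2 >= 1, so ||(I + 1/2 Ah)^(-1)||_2 <= 1 however large ||Ah|| grows, and norm equivalence converts this into an infinity-norm bound up to a dimension-dependent constant.

    import numpy as np
    rng = np.random.default_rng(9)
    B = rng.standard_normal((5, 5))
    Ah = B @ B.T + 1e3 * np.eye(5)            # SPD with a very large norm
    inv = np.linalg.inv(np.eye(5) + 0.5 * Ah)
    print(np.linalg.norm(Ah, np.inf))         # huge
    print(np.linalg.norm(inv, 2))             # <= 1 regardless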

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/MathQuestions123
πŸ“…︎ Nov 06 2016
🚨︎ report
Can somebody please give me a hint for this matrix norm problem? I'm not sure where to start.

Question: Show that for the vector norm, here, the subordinate matrix norm is this.

Ideas: Definitely going to be a proof by induction. Also, I think I can somehow leverage the fact that a vector with only one 1 and the rest 0's has a norm value of 1 for the above definition.

The definition of a subordinate matrix norm I learned in class is this.

Sorry about the links, I'm not sure how to get LaTeX to work.

Please help me,

Thank you

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/typsi5
πŸ“…︎ Jul 30 2012
🚨︎ report
l_2 Matrix norm and Moore-Penrose Pseudoinverse

TeXTheWorld 1.3.2 is broken on Reddit, so I'm going to plaintext this. Sorry in advance.

Let A be a positive semi-definite matrix. All norms here are l_2 norms. I know that ||A^+|| = k (that's a matrix norm). Can I conclude that for any vector x:

x'x <= c x'Ax

for some constant, c?
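
A quick experiment (mine, with a toy diagonal A) suggests the answer depends on whether x has a component in the kernel of A: with c = k the bound holds on the range of A, but it fails for kernel vectors, where x'Ax = 0 while x'x > 0.

    import numpy as np
    A = np.diag([2.0, 0.5, 0.0])                # PSD and singular
    k = np.linalg.norm(np.linalg.pinv(A), 2)    # = 1 / 0.5 = 2
    x = np.array([1.0, 1.0, 0.0])               # lies in range(A)
    print(x @ x <= k * (x @ A @ x))             # True
    z = np.array([0.0, 0.0, 1.0])               # lies in ker(A)
    print(z @ z <= k * (z @ A @ z))             # False: z'Az = 0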

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Actor_Critic
πŸ“…︎ Jan 30 2014
🚨︎ report
[Linear Algebra] 2-norm of a matrix and using A^T A vs A A^T

(Sorry if the LaTeX doesn't render, I've never used the plugin before!)

Disclaimer: I've never formally taken linear algebra, so some of the eigenvalue stuff goes over my head. I'm just trying to wrap my head around why I'm able to do the following:

I'm doing an assignment that requires computing the 1, 2, and $\infty$-norm on various matrices by hand. It says explicitly in the assignment that if we're spending too much time doing the computation, we need to stop and consider an easier way to do it.

The particular matrix of interest is

$A=\left(\begin{matrix}
1&2&2\\
2&1&-2\\
0&-2&2
\end{matrix}\right)$

To compute the 2-norm, I need to find

$\sqrt{\lambda_{\max}(A^T A)}$

but finding the eigenvalues of A^T A is a bitch and a half. However, A A^T is a much nicer product; it yields the result

$\left(\begin{matrix}
9&0&0\\
0&9&-6\\
0&-6&8
\end{matrix}\right)$

which is easily solved. When I asked if I should be using this form to solve the eigenvalues and thus the 2-norm, I was told that I was correct.

Why is this the case? I already know that they yield different matrix products, but wouldn't they also yield different eigenvalues?
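
For intuition, a quick NumPy check (mine) that the swap is safe: A^T A and A A^T always share the same nonzero eigenvalues, namely the squared singular values of A, so either product yields the same 2-norm.

    import numpy as np
    A = np.array([[1.0,  2.0,  2.0],
                  [2.0,  1.0, -2.0],
                  [0.0, -2.0,  2.0]])
    e1 = np.sort(np.linalg.eigvalsh(A.T @ A))
    e2 = np.sort(np.linalg.eigvalsh(A @ A.T))
    print(np.allclose(e1, e2))                        # True
    print(np.sqrt(e1.max()), np.linalg.norm(A, 2))    # same value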

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Pianoplunkster
πŸ“…︎ Oct 01 2016
🚨︎ report
Norm of pseudoinverse of matrix

Let A be a symmetric positive semi-definite real n x n matrix whose entries are all bounded by some constant, c. I know that ||A||_2 is bounded by some constant that depends on c and n, where ||A||_2 denotes the matrix norm induced by the l_2 vector norm.

Given this information, can I say anything about ||A^+||_2 (that's a Moore-Penrose pseudoinverse, not a transpose)? E.g., can we conclude that ||A^+||_2 is bounded? More specifically, is there a k_2 such that ||A^+||_2 <= k_2 for all A with all entries less than c?
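
A cautionary toy example (mine): bounded entries alone do not bound ||A^+||_2, because a positive eigenvalue can be arbitrarily small while every entry stays below c.

    import numpy as np
    for eps in (1e-1, 1e-4, 1e-8):
        A = np.diag([1.0, eps])          # PSD, entries bounded by c = 1
        print(eps, np.linalg.norm(np.linalg.pinv(A), 2))   # = 1 / eps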

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Actor_Critic
πŸ“…︎ Jan 28 2014
🚨︎ report
ELI5: What is the spectral norm of a matrix?
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Darth_Bac0n
πŸ“…︎ Aug 22 2016
🚨︎ report
