A list of puns related to "Matrix norm"
So, for example: if x is an eigenvector and A is the matrix, is ||Ax|| the eigenvalue of x?
Can someone explain the relationship between the spectral norm and the Frobenius norm? I know that the Frobenius norm is greater than or equal to the spectral norm, but why?
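A quick numpy sanity check (my own illustration, not from the post): the Frobenius norm squares and sums *all* singular values, while the spectral norm keeps only the largest one, which is why ||A||_F >= ||A||_2.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    fro = np.linalg.norm(A, 'fro')            # sqrt of the sum of all squared singular values
    spec = np.linalg.norm(A, 2)               # the largest singular value
    sing = np.linalg.svd(A, compute_uv=False)
    print(fro, np.sqrt(np.sum(sing**2)))      # identical
    print(spec, sing[0])                      # identical
    print(fro >= spec)                        # True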
Let M be a square matrix in R^nxn, I be an identity matrix and x be a column vector in R^n.
How to write the expression
|| (M - I) x||^2
in terms of individual L2 norms of M and x?
Can I say that
|| (M - I) x||^2 <= || M - I ||^2 || x ||^2
always holds?
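For what it's worth, if ||·|| on matrices means the operator norm induced by the vector 2-norm (an assumption on my part), then the bound holds by definition of the induced norm. A small numpy check:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    M = rng.standard_normal((n, n))
    x = rng.standard_normal(n)
    B = M - np.eye(n)

    lhs = np.linalg.norm(B @ x)**2                          # ||(M - I) x||^2
    rhs = np.linalg.norm(B, 2)**2 * np.linalg.norm(x)**2    # ||M - I||_2^2 ||x||_2^2
    print(lhs <= rhs)                                       # True for the induced 2-norm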
Hey
I am surprised that, apparently, || A tensor B || = || A || ||B|| does not hold in general for all matrix norms... there seem to be some mathematical subtleties that need to be considered here. On the other hand, I heard it is true for any submultiplicative norm.
It would be really nice if you could give me a quick (understandable for physicists) answer as to when it holds and when it does not. In particular, does it hold for Ky Fan norms? The Ky Fan norm is submultiplicative. The matrices in question are positive and Hermitian. And is there some source I can cite for this property?
Thank you guys :)
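This doesn't settle the Ky Fan case, but here's a quick numpy experiment (my own illustration) for three common norms, using positive semidefinite Hermitian matrices as in your setup. The underlying fact is that the singular values of A tensor B are exactly the products sigma_i(A)*sigma_j(B), which makes the spectral, Frobenius, and trace norms multiplicative under the Kronecker product:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((3, 3)); A = X @ X.T      # random positive semidefinite matrices,
    Y = rng.standard_normal((4, 4)); B = Y @ Y.T      # as in the question
    K = np.kron(A, B)                                 # the tensor (Kronecker) product

    for name, p in [("spectral", 2), ("trace/nuclear", "nuc"), ("Frobenius", "fro")]:
        lhs = np.linalg.norm(K, p)
        rhs = np.linalg.norm(A, p) * np.linalg.norm(B, p)
        print(name, np.isclose(lhs, rhs))             # True for all three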
I am trying to find a command that will let me take the norm of an nxnxn array, but my searching online hasn't led me anywhere. Does anyone know of a command that will do it?
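Assuming you're in numpy and you mean the Frobenius-style norm (square root of the sum of squared entries) of a 3-D array, np.linalg.norm with its default arguments flattens the array and does exactly that; other tensor norms don't have a one-line command as far as I know:

    import numpy as np

    T = np.arange(27, dtype=float).reshape(3, 3, 3)   # an example 3x3x3 array
    # With ord=None and axis=None, numpy returns the 2-norm of the flattened array:
    print(np.linalg.norm(T))
    print(np.sqrt(np.sum(T**2)))                      # same value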
Hi! Suppose that a ∈ R^n and A = a * a^T. We want to show that the matrix 2-norm is equal to the square of the vector 2-norm: ||A||_2 = (||a||_2)^2, or equivalently that the LHS of the equation is equal to the sum of squares of the components a_i.
Knowing that the matrix norm is ||A|| = sup{||Ax|| : x ∈ R^n and ||x|| = 1}, we can expand ||Ax|| into a sum inside the square root. Next I will write the sum inside the square root in Python-like syntax, as I do not know how to write it in LaTeX.
s = 0
for i in range(n):
    t = 0
    for j in range(n):
        t += x[j] * a[j]        # inner sum: the dot product of x and a
    s += (t * t) * a[i] * a[i]  # s accumulates ||Ax||^2 (before the square root)
In that sum, if we can show that the innermost sum is equal to the square of the term a_i, we are done, as then the LHS equals the RHS. But here is the problem: that sounds absurd! We would need to show that an individual term is equal to a sum that does not depend on that term's index.
How should I proceed?
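Not an answer, but a numeric sanity check on the claim itself (my own addition); it also hints that the supremum is attained at the single vector x = a/||a||_2 rather than term by term:

    import numpy as np

    rng = np.random.default_rng(3)
    a = rng.standard_normal(5)
    A = np.outer(a, a)                    # A = a a^T
    print(np.linalg.norm(A, 2))           # matrix 2-norm (largest singular value)
    print(np.linalg.norm(a)**2)           # squared vector 2-norm -- same value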
Hi, I heard during our class at university that there exists a theorem stating that, given a sequence of matrices (A_k), if the series of ||A_k|| converges then the series of the A_k converges too, but our book doesn't state this theorem anywhere, so I googled around and couldn't find anything like it. The closest I found was this, from the book "Matrix Analysis" (Roger A. Horn).
Can you please confirm this is correct (there is no proof there)? In that case, what I'm looking for would follow by picking a_k = 1 for every k, right? Thanks in advance!
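A small numeric illustration of the statement (my own, using the exponential series as the example, which converges absolutely since sum_k ||M||^k / k! is finite):

    import numpy as np

    rng = np.random.default_rng(4)
    M = 0.5 * rng.standard_normal((3, 3))
    # A_k = M^k / k!  -- here sum_k ||A_k|| <= sum_k ||M||^k / k! converges,
    # so the matrix series should converge as well (to the matrix exponential).
    S = np.zeros((3, 3))
    term = np.eye(3)
    for k in range(1, 60):
        S_prev = S.copy()
        S = S + term
        term = term @ M / k
    print(np.linalg.norm(S - S_prev))   # tiny: the partial sums have stabilized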
Hi, I am currently stuck at proving that the 2-norm of a symmetric real matrix A ∈ R^(nxn) is equal to the largest absolute eigenvalue of the matrix A. The only hint that I have been given is that
||A||_2 = (lambda_max(A*A^T))^(1/2). I do not see any value in the hint, so my idea was to prove this via ||A||_2 = sup{||Ax||_2 : x ∈ R^n, ||x||_2 = 1}. Right now the only parts I am struggling with are i.) to show/argue that the maximizing vector is always an eigenvector, to get
||Ax||_2 = ||lambda*x||_2 = abs(lambda) * ||x||_2 = abs(lambda), since ||x||_2 = 1
ii.) to argue that the maximization problem is the same as finding the largest such abs(lambda).
How should I proceed?
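A numeric check plus the way the hint connects (my reading, so double-check): for symmetric A we have A*A^T = A^2, whose eigenvalues are lambda_i^2, so (lambda_max(A*A^T))^(1/2) = max_i |lambda_i|.

    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.standard_normal((4, 4))
    A = (X + X.T) / 2                              # a random symmetric matrix
    print(np.linalg.norm(A, 2))                    # induced 2-norm
    print(np.max(np.abs(np.linalg.eigvalsh(A))))   # largest absolute eigenvalue -- same value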
Getting quite confused with the computation of the L1 norm. I know that for a vector you just take the sum of the absolute values, but how do you compute it for a matrix?
Here's an example of a matrix:
https://preview.redd.it/cmwh0ogn25t41.jpg?width=241&format=pjpg&auto=webp&s=ebc54d00f6cb663bcd7cc8bc7437dd1582c64586
How do we come up with the L1 norm of 8? For your reference, this is computed using the numpy package of Python.
https://preview.redd.it/8d2yk6rx25t41.jpg?width=362&format=pjpg&auto=webp&s=843a141fdbb888c5cd2355f51683168cd5f72288
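I can't see the exact matrix in the screenshot, so here is a made-up 2x2 example that also gives 8: numpy's matrix 1-norm is the maximum absolute *column* sum (the norm induced by the vector L1 norm), not the sum of all entries.

    import numpy as np

    B = np.array([[5., 1.],      # hypothetical matrix, not the one in the image
                  [3., 2.]])
    print(np.linalg.norm(B, 1))                 # 8.0  (= |5| + |3|, the largest column sum)
    print(np.max(np.sum(np.abs(B), axis=0)))    # same: max absolute column sum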
https://imgur.com/a/5b3V7wj
In the lecture notes of the course I'm attending about neural networks, I found this definition of the infinity norm of a matrix.
However, from previous courses, I recall the infinity norm of a matrix as \left\| A \right\|_{\infty} = \max_{i} \sum_{j=1}^{m} |a_{i,j}|.
Which is the correct definition? Are the 2 definitions related somehow?
EDIT: I corrected the absolute value in the formula.
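I can't see the definition in the lecture-note screenshot, but for reference, the max-row-sum formula you recall is the norm induced by the vector infinity norm, and it is what numpy computes:

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((3, 4))
    print(np.linalg.norm(A, np.inf))            # numpy's matrix infinity norm
    print(np.max(np.sum(np.abs(A), axis=1)))    # maximum absolute row sum -- same value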
What is an intuitive way of seeing the transition from (2.71) to (2.72)? I can't convince myself that these two are equal unless I write everything in terms of scalars, which is inefficient.
https://mathoverflow.net/questions/351809/is-there-a-bound-on-the-norm-of-the-product-of-second-moment-matrix-with-random
Let $X_1,\dots,X_n$ be vectors in $\mathbb{R}^d$. Assume all of the vectors are inside the unit $\ell_2$ ball. Let $P$ be a vector in the probability simplex $\Delta_n$ with $P_i>0$ for all $i$. Consider the second moment matrix $\Sigma(P) = \sum_{i=1}^n P_i X_i X_i^\top$. Assume the $X_i$s are such that $\Sigma(P)$ is full rank. Does the following bound always hold? If not, when does it hold? $$|\Sigma(P)^{-1}X_j| \leq \frac{1}{P_j} \quad \forall j\in {1,\dots,n}$$ For instance, if $n=d$ and $X_i=e_i$ are the canonical basis vectors of $\mathbb{R}^d$, then this bound holds with equality?
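A quick numeric check of the canonical-basis case mentioned at the end (my own addition, not part of the MathOverflow post): there $\Sigma(P) = \mathrm{diag}(P)$, so $\Sigma(P)^{-1}X_j = e_j/P_j$ and the bound holds with equality.

    import numpy as np

    rng = np.random.default_rng(7)
    n = d = 4
    X = np.eye(d)                                # X_i = e_i, the canonical basis
    P = rng.random(n); P /= P.sum()              # a point in the simplex with P_i > 0
    Sigma = sum(P[i] * np.outer(X[i], X[i]) for i in range(n))   # = diag(P)
    for j in range(n):
        lhs = np.linalg.norm(np.linalg.solve(Sigma, X[j]))
        print(np.isclose(lhs, 1.0 / P[j]))       # equality in the canonical-basis case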
I just came across the following proof that requires the gradient of
g(x) = (1/2) (x^T A^T A x) / (x^Tx),
which is related to the matrix 2-norm; here A is an m x n matrix and x is an n-vector.
I've tried evaluating this expression in tensor notation but I cannot seem to get the correct answer! Any help would be great, thanks!
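In case it helps to have something to check against, here is the gradient I get from the quotient rule (so treat it as my derivation, not the proof's), verified with finite differences; B = A^T A below:

    import numpy as np

    rng = np.random.default_rng(8)
    m, n = 5, 4
    A = rng.standard_normal((m, n))
    x = rng.standard_normal(n)
    B = A.T @ A

    def g(x):
        return 0.5 * (x @ B @ x) / (x @ x)

    # candidate gradient: grad g = B x / (x^T x) - (x^T B x) x / (x^T x)^2
    grad = (B @ x) / (x @ x) - (x @ B @ x) / (x @ x)**2 * x

    # central finite-difference check
    eps = 1e-6
    fd = np.array([(g(x + eps * np.eye(n)[i]) - g(x - eps * np.eye(n)[i])) / (2 * eps)
                   for i in range(n)])
    print(np.max(np.abs(grad - fd)))   # should be on the order of 1e-8 or smaller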
Can someone explain to me in an intuitive way what the norm of a matrix is? I know it's an extension of the definition of a vector norm, but I can't "see" it. Please help; I'd appreciate any answer. Thank you.
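One way to "see" the induced norm, sketched in numpy (my own illustration): a matrix maps the unit sphere to an ellipsoid, and the induced 2-norm is the largest stretch factor the matrix can apply to any unit vector.

    import numpy as np

    rng = np.random.default_rng(9)
    A = rng.standard_normal((3, 3))
    xs = rng.standard_normal((100000, 3))
    xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # many random unit vectors
    stretch = np.linalg.norm(xs @ A.T, axis=1)        # ||A x|| for each unit vector x
    print(stretch.max())                              # close to ...
    print(np.linalg.norm(A, 2))                       # ... the matrix 2-norm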
I can show things that are similar to this, but I can't quite show this property. Any help, hints, etc. are greatly appreciated!
Say M is an nxn matrix with ||M||_2 < 1 and v ≠ 0. Then v^(T)v > v^(T)Mv.
I can do some diagonalization and use the fact that the eigenvalues of M are all smaller than 1 in norm. But isn't there a simpler way, maybe one without eigenvalues?
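One eigenvalue-free route that might work: by Cauchy-Schwarz, v^T M v <= ||v|| ||Mv|| <= ||M||_2 ||v||^2 < ||v||^2 = v^T v. A quick numeric check of the statement itself (my addition):

    import numpy as np

    rng = np.random.default_rng(10)
    M = rng.standard_normal((4, 4))
    M *= 0.9 / np.linalg.norm(M, 2)          # rescale so that ||M||_2 = 0.9 < 1
    for _ in range(1000):
        v = rng.standard_normal(4)
        assert v @ v > v @ M @ v             # never fires
    print("ok")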
http://imgur.com/a/oKDF0
I wrote up the proof there... I'm not sure why it follows from this assumption of the size of sets :(
Hi! I've been stuck with this problem for a while now and it seems really straight-forward, but I can't find a decent solution. Any help is appreciated!
Here's what I know (||.|| being the infinity-norm for matrices):
Now how can ||(I + 1/2 Ah)^(-1)|| be controlled?
I wanted to apply the Neumann series bound, but it seems to be applicable only when ||Ah|| can be controlled. Then, while looking for a formula to compute the inverse matrix, I stumbled upon the recursive definition by Ken Miller. Surely there must be more elegant approaches?
Thanks!
Edit: One more idea I had. Matrix spaces are finite-dimensional, so all norms are equivalent, right? Then can I switch over to the 2-norm instead, argue that the eigenvalues of the new matrix are only transformed a little, and thus control the norm of the inverse that way?
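As you note, the Neumann-series route only helps when ||(1/2)Ah|| < 1; in that regime the standard bound ||(I + B)^(-1)|| <= 1 / (1 - ||B||) applies. A small numpy check of that bound in the infinity norm (my own sketch, assuming such a bound on Ah is available):

    import numpy as np

    rng = np.random.default_rng(11)
    n = 4
    Ah = rng.standard_normal((n, n))
    Ah /= np.linalg.norm(Ah, np.inf)                  # normalize so that ||Ah||_inf = 1
    B = 0.5 * Ah                                      # so ||B||_inf = 0.5 < 1
    inv = np.linalg.inv(np.eye(n) + B)
    print(np.linalg.norm(inv, np.inf))                # actual norm of the inverse
    print(1.0 / (1.0 - np.linalg.norm(B, np.inf)))    # Neumann-series bound (= 2.0)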
Question: Show that for the vector norm, here, the subordinate matrix norm is this.
Ideas: It's definitely going to be a proof by induction. Also, I think I can somehow leverage the fact that a vector with a single 1 and the rest 0's has norm 1 under the definition above.
The definition of a subordinate matrix norm I learned in class is this.
Sorry about the links, I'm not sure how to get latex to work.
Please help me,
Thank you
TeXTheWorld 1.3.2 is broken on Reddit, so I'm going to plaintext this. Sorry in advance.
Let A be a positive semi-definite matrix. All norms here are l_2 norms. I know that ||A^+ ||=k (that's a matrix norm). Can I conclude that for any vector x:
x'x <= c x'Ax
for some constant, c?
(Sorry if the LaTeX doesn't render, I've never used the plugin before!)
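Not an answer, just a numeric probe of the two regimes (my own addition, so treat it as a sketch): when A is nonsingular, c = ||A^{-1}||_2 = k appears to work; when A is singular, any x in the null space gives x'Ax = 0 while x'x > 0, which suggests the inequality can only hold for x in the range of A.

    import numpy as np

    rng = np.random.default_rng(12)
    # positive definite case: c = ||A^{-1}||_2 = 1 / lambda_min seems to do the job
    X = rng.standard_normal((4, 4))
    A = X @ X.T + 0.1 * np.eye(4)
    k = np.linalg.norm(np.linalg.pinv(A), 2)
    for _ in range(1000):
        x = rng.standard_normal(4)
        assert x @ x <= k * (x @ A @ x) + 1e-9   # never fires

    # singular PSD case: a null-space vector breaks the inequality for every c
    a = rng.standard_normal(4)
    A2 = np.outer(a, a)                          # rank-1 PSD matrix
    x = np.linalg.svd(A2)[0][:, -1]              # a unit vector in the null space
    print(x @ x, x @ A2 @ x)                     # 1.0 vs ~0.0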
Disclaimer: I've never formally taken linear algebra, so some of the eigenvalue stuff goes over my head. I'm just trying to wrap my head around why I'm able to do the following:
I'm doing an assignment that requires computing the 1, 2, and $\infty$-norm on various matrices by hand. It says explicitly in the assignment that if we're spending too much time doing the computation, we need to stop and consider an easier way to do it.
The particular matrix of interest is
$A=\left(\begin{matrix}
1&2&2\\
2&1&-2\\
0&-2&2
\end{matrix}\right)$
To compute the 2-norm, I need to find
$\sqrt{\lambda_{\max}(A^T A)}$
but finding the eigenvalues of A^T A is a bitch and a half. However, A A^T is a much nicer product; it yields the result
$\left(\begin{matrix}
9&0&0\\
0&9&-6\\
0&-6&8
\end{matrix}\right)$
which is easily solved. When I asked if I should be using this form to solve the eigenvalues and thus the 2-norm, I was told that I was correct.
Why is this the case? I already know that they yield different matrix products, but wouldn't they also yield different eigenvalues?
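The standard fact at work here (not stated in the post, so double-check with your instructor): the nonzero eigenvalues of XY and YX always coincide, and for square A the products A^T A and A A^T have exactly the same spectrum, so either one gives the same 2-norm. A check with your matrix:

    import numpy as np

    A = np.array([[1.,  2.,  2.],
                  [2.,  1., -2.],
                  [0., -2.,  2.]])
    print(np.linalg.eigvalsh(A.T @ A))   # eigenvalues, sorted ascending
    print(np.linalg.eigvalsh(A @ A.T))   # same spectrum
    print(np.linalg.norm(A, 2)**2)       # the largest of them = ||A||_2^2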
Let [; A ;] be a symmetric positive semi-definite real [; n \times n ;] matrix whose entries are all bounded by some constant, [; c ;]. I know that [; \lVert A \rVert_2 ;] is bounded by some constant that depends on [; c ;] and [; n ;], where here [; \lVert A \rVert_2 ;] denotes the matrix norm induced by the [; l_2 ;] vector norm.
Given this information, can I say anything about [; \lVert A^+ \rVert_2 ;] (that's a Moore-Penrose Pseudoinverse, not a transpose)? E.g., can we conclude that [; \lVert A^+ \rVert_2 ;] is bounded (more specifically, is there a [; k_2 ;] such that [; \lVert A^+ \rVert_2 \leq k_2 ;] for all [; A ;] with all entries less than [; c ;]?)
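A small numeric probe (my own, not from the post) suggesting that bounded entries alone do not bound ||A^+||_2: an arbitrarily small positive eigenvalue is still allowed, and the pseudoinverse inverts it.

    import numpy as np

    c = 1.0
    for eps in [1e-2, 1e-4, 1e-8]:
        A = np.diag([eps, c])                         # PSD, all entries bounded by c
        print(np.linalg.norm(np.linalg.pinv(A), 2))   # = 1/eps, grows without bound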