```
-- ghc 8.6.3
import Control.Monad (replicateM)

-- Read n, then n rows of whitespace-separated entries;
-- print YES if the matrix equals its transpose.
main :: IO ()
main = do
  num <- getLine
  mat <- replicateM (read num) getLine
  putStrLn
    . result
    . (== map words mat)
    . transpose
    . map words
    $ mat

-- Transpose of a list-of-rows matrix (assumes a rectangular, non-empty matrix).
transpose :: [[a]] -> [[a]]
transpose ([]:_) = []
transpose m      = map head m : transpose (map tail m)

result :: Bool -> String
result True  = "YES"
result False = "NO"
```
i) Show that A A^t is symmetric
For A A^t to be symmetric, it must hold that A A^t = (A A^t)^t. From the transpose properties, (A A^t)^t = (A^t)^t A^t, which is A A^t. Is this correct?
ii)If matrix A is symmetric, show that B^t A B is also symmetric
What am I supposed to do here??
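For part (ii), a sketch of the standard computation, using the same transpose rules as in part (i) plus the hypothesis A^t = A:

(B^t A B)^t = B^t A^t (B^t)^t = B^t A B,

applying (XY)^t = Y^t X^t twice and then substituting A^t = A. As in part (i), the whole task is to verify that the matrix equals its own transpose.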
I am finding it hard to understand the following piece of code, which builds a symmetric adjacency matrix from a given sparse adjacency matrix.
```
# build symmetric adjacency matrix
adj = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj)
```
This is what I tried out:
```
import numpy as np
import scipy.sparse as sp

adj = np.array([[2., 0., 0.],
                [3., 0., 0.],
                [3., 0., 0.]])
adjs = sp.coo_matrix(adj)

adjs.T.todense()
# matrix([[2., 3., 3.],
#         [0., 0., 0.],
#         [0., 0., 0.]])

adjs.T.multiply(adjs.T > adjs).todense()
# matrix([[0., 3., 3.],
#         [0., 0., 0.],
#         [0., 0., 0.]])

# note: compared with the formula above, this adds adjs twice
q = adjs + adjs.T.multiply(adjs.T > adjs) + adjs - adjs.multiply(adjs.T > adjs)
q.todense()
# matrix([[4., 3., 3.],
#         [6., 0., 0.],
#         [6., 0., 0.]])
```
In the output the diagonals are not zero.
The code is part of the graph convolutional network paper by Kipf, and it's in the utils.py file here:
https://github.com/yao8839836/text_gcn/blob/b230e8083f838953646a9034b60abc5f69b062f6/utils.py#L176
I do not understand how that code makes a matrix symmetric. Can anyone give an explanation?
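One way to see what the expression does, sketched below: wherever adj[j,i] > adj[i,j], the two multiply terms swap in the larger value, so the result is the elementwise maximum of adj and adj.T, which is symmetric by construction.

```
# Sketch: the utils.py expression computes an elementwise max of adj and
# adj.T; max(x, y) is symmetric in its two arguments, so the result is a
# symmetric matrix.
import numpy as np
import scipy.sparse as sp

adj = sp.coo_matrix(np.array([[2., 0., 0.],
                              [3., 0., 0.],
                              [3., 0., 0.]]))
sym = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj)
print(sym.todense())
# matrix([[2., 3., 3.],
#         [3., 0., 0.],
#         [3., 0., 0.]])
print((sym != sym.T).nnz == 0)  # True: no entry differs from its mirror
```

(The non-zero diagonals and the asymmetry in the run above come from the extra `+ adjs` term in the poster's q, not from the original formula.)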
I'm facing difficulty with solving 8 equations in 8 variables.
Hi,
I can't lie, I'm on a pretty big time crunch. I'm trying to make an R function that takes a matrix as input and checks if this matrix is symmetric. If this matrix is indeed symmetric, nothing happens -- it spits out, "Your input matrix is symmetric." However, if the input matrix is *not symmetric*, the function then combs through the entire matrix and searches for which entries don't match. I'd love it to return something like, "Entry [4,5] does not equal entry [5,4]" or if there are multiple entries that are not equal, it would just print a big list of statements like the one above. Here is what I have so far:
```
SymmetricErrorFinder <- function(matrix) {
  if (isSymmetric(matrix)) {
    print("Your input matrix is symmetric!")
  } else {
    for (i in 1:nrow(matrix)) {
      for (j in 1:ncol(matrix)) {
        if (matrix[i, j] != matrix[j, i]) {
          # prints the value, not the indices, and stops at the first
          # mismatch (the problem described below)
          return(cat("The entry that needs correcting is", matrix[i, j]))
        }
      }
    }
  }
}
```
But the problem with this is that when it returns `matrix[i,j]`, it will return a specific number, not the index of the entry, if you will. See what I mean? I'm sorry if it isn't clear. I also can't figure out how to get it to say "Entry [i,j] does not equal entry [j,i]", but if that's impossible, what I have written in the code block above will absolutely work too, since what I'm trying to create is a symmetric matrix after all. Help me reddit, you're my only hope. Please and thanks as always <3
Edit: I'll add some context. This is not a final exam question nor a homework assignment problem or anything. I'm working on a big final project and I've created a matrix that is 130 rows by 130 columns. I'll save you the math, that's 16,900 entries. It's all either 1's or 0's. Specifically the diagonals are supposed to be 0 (I should probably add that into my function somehow...). I need this matrix to be symmetric. I plugged in what I had into R and it came back negative. I do not have the time to hunt through all this to try and find where the inconsistency is at. Hence the need for this function :)
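A sketch of the index-reporting logic in Python/NumPy (the helper name and test matrix are made up for illustration, this is not the poster's R code):

```
# List every index pair (i, j) with M[i, j] != M[j, i].
import numpy as np

def find_asymmetries(M):
    bad = []
    n = M.shape[0]
    for i in range(n):
        for j in range(i + 1, n):      # upper triangle is enough; (j, i) is the mirror
            if M[i, j] != M[j, i]:
                bad.append((i, j))
    return bad

M = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 0, 0]])
for i, j in find_asymmetries(M):
    print(f"Entry [{i},{j}] does not equal entry [{j},{i}]")
```

The same structure ports straight back to R: replacing return(cat(...)) with a plain cat("Entry [", i, ",", j, "] does not equal entry [", j, ",", i, "]\n") keeps the loop going, so every mismatch gets printed instead of just the first.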
Hey,
I'm currently working on a project where I need to do computations on symmetric positive definite matrices of very large sizes for benchmarking purposes. The way the test matrices are currently generated is just by initialising a random matrix A and computing A^T A. This, however, gets extremely slow for large sizes. I'm now looking to speed this up by coming up with an index-based expression (something like A_i,j = i + j) for a symmetric positive definite matrix of size N x N. However, I'm having trouble coming up with such a matrix. I can think of simple cases like diagonal matrices, but the matrix should be as dense as possible for testing purposes.
Would appreciate any input.
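Two index-based candidates, sketched in Python/numpy under the assumption that real SPD is what's needed; both are fully dense and cost only O(N^2) to fill:

```
import numpy as np

def min_matrix(n):
    """M[i, j] = min(i, j) + 1. Dense, symmetric, positive definite:
    it is the Gram matrix of the prefix vectors e_1 + ... + e_k."""
    idx = np.arange(1, n + 1)
    return np.minimum.outer(idx, idx).astype(float)

def dominant_matrix(n):
    """All-ones matrix plus n on the diagonal: symmetric and strictly
    diagonally dominant with positive diagonal, hence positive definite."""
    return np.ones((n, n)) + n * np.eye(n)

# quick sanity check: Cholesky succeeds exactly for SPD matrices
np.linalg.cholesky(min_matrix(5))
np.linalg.cholesky(dominant_matrix(5))
```

One caveat for benchmarking: such structured matrices can be much better (or worse) conditioned than random A^T A, which may matter depending on what is being timed.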
I can't find anything on the Internet that is beginner-friendly, and our teacher explained nothing related to this. Can anyone give me an algorithm, or a link to any book/website where this method is explained?
Thanks in advance!
S is defined by S = 2NN^(T) - I, where N is the steering vector of the axis (this is in 3D) and I is the identity matrix.
Also, S^(2) = I.
Thanks in advance for your help :)
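A sketch of why S^2 = I, assuming N is a unit column vector (N^T N = 1):

S^2 = (2NN^T - I)(2NN^T - I) = 4N(N^T N)N^T - 2NN^T - 2NN^T + I = 4NN^T - 4NN^T + I = I.

Geometrically, S fixes vectors along N and negates vectors perpendicular to it, and doing that reflection twice gets you back where you started.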
Now, the question itself gives us 3 M(2x2) matrices, and asks us to find whether any other symmetric 2x2 matrix is a linear combination of those 3. The matrices are:
A1=
[2 0]
[0 7]
A2=
[13 0]
[0 5]
A3=
[0 3]
[3 0]
Edit: Just realized the matrices don't look right, but I can't attach a photo
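Reading the matrices as A1 = [[2, 0], [0, 7]], A2 = [[13, 0], [0, 5]], A3 = [[0, 3], [3, 0]], a quick way to check, sketched in Python: symmetric 2x2 matrices [[a, b], [b, c]] form a 3-dimensional space with coordinates (a, b, c), and the three matrices span all of it exactly when their coordinate vectors are linearly independent.

```
import numpy as np

coords = np.array([
    [2,  0, 7],   # A1 = [[2, 0], [0, 7]]
    [13, 0, 5],   # A2 = [[13, 0], [0, 5]]
    [0,  3, 0],   # A3 = [[0, 3], [3, 0]]
])
print(np.linalg.matrix_rank(coords))  # 3
```

The rank is 3 (A3 alone handles the off-diagonal entry, and A1, A2 have independent diagonals since 2*5 - 7*13 != 0), so every symmetric 2x2 matrix is a linear combination of A1, A2, A3.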
I can't even imagine a case where this is true, let alone a situation in which it is true in general. What am I missing?
Scalar a s.t. 0 < a < 1
Q an nxn matrix, symmetric positive semidefinite.
p, q vectors in R^n
This inequality is rumored to be false:
(ap + (1-a)q)^T Q (ap + (1-a)q) > a*(p^T Q p) + (1-a)*(q^T Q q)
There should be a way to manipulate the sides of this inequality to show that it could not possibly be true. The likely culprit (I presume) is that the bounds on a cause this to be true:
a^2 < a
And so, for example, this turns out to be (ironically) true:
a^2 (p^T Q p) < a (p^T Q p)
From 0 < a < 1 we have both a > 0 and (1-a) > 0.
Q symmetric positive semidefinite entails that for all vectors x in R^n,
x^T Q x >= 0
I have tried to expand the left-hand side to collect terms that look like
a^2 (p^T Q p)
But instead it is spitting things at me that look like
p^T Q q
I cannot make heads or tails of these "mixed" terms. Is there some linear algebra trick I am missing?
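The trick for the mixed terms, sketched: apply x^T Q x >= 0 to x = p - q, and use the symmetry of Q so that q^T Q p = p^T Q q. Expanding the left-hand side gives

(ap + (1-a)q)^T Q (ap + (1-a)q) = a^2 (p^T Q p) + 2a(1-a) (p^T Q q) + (1-a)^2 (q^T Q q).

From (p - q)^T Q (p - q) >= 0 we get 2 (p^T Q q) <= p^T Q p + q^T Q q, and since a(1-a) > 0,

LHS <= [a^2 + a(1-a)] (p^T Q p) + [(1-a)^2 + a(1-a)] (q^T Q q) = a (p^T Q p) + (1-a) (q^T Q q).

So the left side is never strictly greater than the right side; the rumored inequality fails for every choice of p, q, a. This is exactly the statement that x -> x^T Q x is convex for positive semidefinite Q.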
If A is an m×n matrix and I know that the rows are all linearly independent, does that mean the symmetric matrix AA^T ...
(A^T meaning transpose of A)
... which is m×m, is known to be non-singular?
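Yes; a sketch of the standard argument: suppose AA^T x = 0. Then x^T A A^T x = ||A^T x||^2 = 0, so A^T x = 0. But A^T x is the linear combination of the rows of A with coefficients x_1, ..., x_m, and the rows are linearly independent, so x = 0. A square matrix whose null space is {0} is non-singular.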
This is not a serious post but I was just wondering out of curiosity. I am in a mathematical optimization course where we may change our estimate of a Hessian (symmetric matrix) by adding a rank-2 matrix to it (the sum of a rank-1 matrix and its transpose).
In fact, if you take any square matrix A (symmetric or not), adding it to its transpose (A + A^T) creates a symmetric matrix. If you on the other hand have a symmetric matrix and want to represent it as a sum B = A + A^T, the trivial solution is just A = (1/2) B, forcing A to be symmetric.
However, I am wondering if there is a way to factorize a symmetric matrix B such that we force A to not be symmetric. Does anyone know if people have done this?
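One way to see the full solution set, sketched in Python/numpy: the difference of any two solutions of A + A^T = B is skew-symmetric, so every solution has the form A = B/2 + K with K^T = -K, and any nonzero K gives a non-symmetric A.

```
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)); B = B + B.T          # a symmetric B
K = rng.standard_normal((4, 4)); K = (K - K.T) / 2    # a skew-symmetric K
A = B / 2 + K                                         # K + K^T = 0 keeps the sum equal to B
print(np.allclose(A + A.T, B))   # True
print(np.allclose(A, A.T))       # False: A is not symmetric
```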
Hi, all
1. Like, would the eigenvector be computed somehow from the i-th row elements of the symmetric matrix?
2. Also, if it has more than one eigenvalue of 0, would those eigenvectors have any connection to each other? Like, could the others be computed from one of them in a quick way?
3. Or could the eigenvectors corresponding to the 0 eigenvalue be computed from the eigenvectors corresponding to the non-zero eigenvalues?
thanks a ton
Hi everyone,
I'm struggling with understanding how the cross product of 2 vectors, A and B, can be represented as
A x B = S(A) B
where S(A) is the skew symmetric matrix we create from vector A.
I know it works only because I've used it a lot and seen it in many places, but is there a way to show that S(A)B is actually equivalent to the cross product?
Thank you to anyone who has suggestions!
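A sketch of the verification, writing A = (a1, a2, a3) and B = (b1, b2, b3):

S(A) = [  0  -a3   a2 ]
       [ a3    0  -a1 ]
       [-a2   a1    0 ]

so S(A)B = (a2 b3 - a3 b2, a3 b1 - a1 b3, a1 b2 - a2 b1), which is, component by component, the definition of A x B. The identity is just the three components of the cross product packed into a matrix-vector product.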
Positive, definite, well-conditioned.
Hi,
For the symmetric matrix A = [7 2 0; 2 6 2; 0 2 5], I need to find a decomposition A = QDQ^T. I know D must be a diagonal matrix consisting of the eigenvalues of A, and Q must be an orthogonal matrix whose columns are orthonormal eigenvectors spanning the eigenspaces of A. I believe this is also going to require the Gram-Schmidt procedure if A contains repeated eigenvalues.
However, the question is said to be worth 2 marks, and this is easily hours' worth of work (it must be done by hand). Am I missing something, like some easy method around this or something???
Thanks!
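For this particular A the eigenvalues turn out to be distinct, so no Gram-Schmidt is needed (eigenvectors of a symmetric matrix for distinct eigenvalues are automatically orthogonal). A quick numerical check of the decomposition, sketched with numpy:

```
import numpy as np

A = np.array([[7., 2., 0.],
              [2., 6., 2.],
              [0., 2., 5.]])
w, Q = np.linalg.eigh(A)   # eigenvalues (ascending) and orthonormal eigenvectors
D = np.diag(w)
print(w)                            # [3. 6. 9.]
print(np.allclose(A, Q @ D @ Q.T))  # True
```

By hand, the characteristic polynomial factors as (λ - 3)(λ - 6)(λ - 9), which is likely the intended 2-mark shortcut: three small eigenvector computations and a normalization.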
if a matrix has, say, the elements
1 3i
3i -1
then is it symmetric? sorry about the shit notation
I know symmetric matrices have orthogonal eigenvectors, but does this go both ways? If a linear map has orthogonal eigenvectors, does it imply that the matrix representing this linear map is symmetric?
For example this matrix is symmetric:
5 6 7
6 3 2
7 2 1
With missing values:
5 NAN 7
NAN 3 2
7 2 1
I can't assume that the NANs are equal, right?
For example, if I have the 2x2 matrix A =
1 1
0 2
we see that this is not symmetric. In my textbook, they just diagonalize this matrix A using the similarity transformation
D = P^(-1)AP. So they construct P from the eigenvectors of A, and they just construct P^(-1) by taking the inverse of P.
My question is why can't we use this method for a real, symmetric matrix, such as S =
4 1
1 -2 ?
So if it's a real, symmetric matrix, do we have to diagonalize it using the similarity transformation O^(T)SO = D? Can't we use the similarity transformation P^(-1)AP = D?
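We can, actually; a numerical sketch with numpy showing that both transformations diagonalize the same symmetric S. The orthogonal O is just a convenient special choice, available because the eigenvectors of a real symmetric matrix can be taken orthonormal, so that O^(-1) = O^T and no inverse needs to be computed:

```
import numpy as np

S = np.array([[4., 1.],
              [1., -2.]])
w, P = np.linalg.eig(S)    # generic eigendecomposition, P need not be orthogonal
print(np.allclose(np.linalg.inv(P) @ S @ P, np.diag(w)))  # True
w2, O = np.linalg.eigh(S)  # symmetric routine: O is orthogonal
print(np.allclose(O.T @ S @ O, np.diag(w2)))              # True
```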
I know to begin with:
v1 (dot product) A*v2 = v1^(T) * (A v2) <--- just saying v1 dotted with something else is like treating v1 as a matrix and using its transpose
first off, why even start with that?
if it's not too much trouble, could you explain how this proof is carried out - for some reason proofs always stump me =[
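A sketch of where that starting identity leads, for eigenpairs (λ1, v1) and (λ2, v2) of a symmetric A with λ1 != λ2:

λ1 (v1 · v2) = (A v1)^T v2 = v1^T A^T v2 = v1^T A v2 = λ2 (v1 · v2),

so (λ1 - λ2)(v1 · v2) = 0, and since λ1 != λ2, we get v1 · v2 = 0. Rewriting the dot product as a transpose is the whole point of the starting step: it is what lets A hop from one vector to the other using A^T = A.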
Dear All, for my PhD I'm trying to create a little macro that would order a fairly large symmetric matrix into a descending list with corresponding variable descriptions.
Some background:
My PhD is a social science enquiry into privacy engineering, looking at engineering decisions that were made as a result of privacy conflicts.
I have used 160+ codes to analyse the interviews with stakeholders.
The database is a matrix that shows co-occurrence of codes throughout all the interviews. Here are two screenshots of what the database looks like: http://imgur.com/a/LxDdL
I will need to do this operation several times during my PhD.
My Excel experience:
I have Excel 2011 for Mac
I have no real experience with macros in Excel, but can do some basic programming. I understand the concepts.
My question:
I would like to make a macro that sorts the data in order of frequency (highest to lowest combination frequency), along with the codes in the (A) column and (1) row.
After doing some research, I'm thinking about making a macro that uses things like 'get max value', then records the value and the corresponding codes (in column A and row 1) in a separate sheet, then deletes this highest code, and continues until all fields have been sorted.
Do you know whether such a macro has been developed, or could you help me find the right approach for this?
Any help is much appreciated!
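Not a VBA macro, but if exporting the sheet to CSV is an option, a few lines of Python/pandas produce the same descending list; the file name and layout (codes in the first row and first column) are assumptions:

```
import pandas as pd

df = pd.read_csv("cooccurrence.csv", index_col=0)  # codes as row/column labels
pairs = df.stack()                                 # (row_code, col_code) -> frequency
# the matrix is symmetric, so keep each pair once (upper triangle only)
pairs = pairs[pairs.index.get_level_values(0) < pairs.index.get_level_values(1)]
print(pairs.sort_values(ascending=False).head(20)) # top 20 code pairs
```

This avoids the delete-the-maximum loop entirely: sorting the stacked list does the whole ordering in one step.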
Hey guys, here's the thing:
I have this exercise in which I need to find the Basis and Dim of Ker and Img of some matrix A, the usual tedious stuff, nothing fancy here.
The next exercise is the same but the matrix is AA^T. I can go over the same procedure of finding the Basis and Dim all over again, but is there any insight about the Ker and Img of this matrix that I'm missing here? (I know that AA^T is always a symmetric matrix). The new matrix (AA^T) looks pretty nasty and the calculations are not easy at all with this one.
Edit: my 3x4 matrix A is:
1 4 1 3
2 2 1 2
1 2 4 1
and my 3x3 AA^T matrix is:
27 17 16
17 13 12
16 12 22
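The insight being asked about, sketched numerically: over the reals, AA^T x = 0 implies x^T A A^T x = ||A^T x||^2 = 0, so ker(AA^T) = ker(A^T), and hence rank(AA^T) = rank(A^T) = rank(A). Here A has rank 3, so the 3x3 matrix AA^T is full rank: its kernel is {0} and its image is all of R^3, with no row reduction of the nasty matrix needed.

```
import numpy as np

A = np.array([[1, 4, 1, 3],
              [2, 2, 1, 2],
              [1, 2, 4, 1]])
print(np.linalg.matrix_rank(A))        # 3
print(np.linalg.matrix_rank(A @ A.T))  # 3: full rank, so Ker = {0}, Img = R^3
```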
Is there something like this
Array{Float64,2}(undef, 3, 3)
for symmetric matrices?