"Elastic Matrix not positive-definite for orthotropic material"

Hello, I'm trying to run a simulation and keep receiving this error, and I was wondering what I need to do to correct it. The simulation itself is fine, as I have run it with other materials, so I know the problem is with the custom materials I'm adding, but I'm not sure how to fix it. Thank you.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/abstracity
πŸ“…︎ Jan 11 2022
🚨︎ report
Can someone ELI5 what a Positive Definite Matrix is?

I am struggling to set up a vector minimization problem and am trying to figure out what my model matrix "A" is.

I know A must be a positive definite matrix, which means x^(T)Ax > 0 for all x =/= 0.

What is x? What is the benefit of having or assuming a positive definite matrix? Where else are these useful? Wikipedia asserts that the matrix is also symmetric; why?
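
For a concrete picture, here is a minimal Python/NumPy sketch (the matrix A below is just an illustrative choice). Here x is any nonzero vector you probe A with: positive definiteness says the quadratic form x^(T)Ax is positive in every direction, which is exactly what makes a minimization problem built on A well-posed (the objective curves upward everywhere, so a unique minimum exists). Symmetry is usually assumed because the quadratic form only sees the symmetric part of A: x^(T)Ax = x^(T)((A + A^(T))/2)x.

    import numpy as np

    # Illustrative symmetric positive definite matrix: A = B^T B + I,
    # since x^T A x = ||Bx||^2 + ||x||^2 > 0 for every x != 0.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((3, 3))
    A = B.T @ B + np.eye(3)

    # Definition test: x^T A x > 0 for many random nonzero x.
    xs = rng.standard_normal((1000, 3))
    print(np.all(np.einsum('ij,jk,ik->i', xs, A, xs) > 0))  # True

    # Equivalent test for symmetric A: all eigenvalues positive.
    print(np.all(np.linalg.eigvalsh(A) > 0))                # True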

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Elilora
πŸ“…︎ Nov 08 2021
🚨︎ report
SVD Decomposition and Eigen Decomposition of a positive semi-definite matrix

Hi, I hope this question fits here.

I have a question regarding the SVD and Eigen decomposition of a symmetric, positive semi-definite matrix. As far as I know, both decompositions should be the same, but I don't understand why. Can you guys help me understand this/find me a good explanation online?
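
One way to see it: the spectral theorem gives A = QΞ›Q^T with Q orthonormal and Ξ› β‰₯ 0, and that is already in the form UΞ£V^T required of an SVD, with U = V = Q and Ξ£ = Ξ›. So for a symmetric positive semi-definite matrix the two decompositions coincide, up to ordering and per-column sign choices. A minimal NumPy check on an arbitrary illustrative PSD matrix:

    import numpy as np

    # Arbitrary symmetric PSD example: A = B B^T (rank-deficient is fine).
    rng = np.random.default_rng(1)
    B = rng.standard_normal((4, 2))
    A = B @ B.T

    evals, evecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    U, s, Vt = np.linalg.svd(A)        # singular values in descending order

    # Singular values equal the eigenvalues (just sorted differently;
    # tiny negative eigenvalues are floating-point noise, hence the clip).
    print(np.allclose(np.sort(s), np.sort(evals.clip(min=0))))  # True
    print(np.allclose(U @ np.diag(s) @ Vt, A))                  # True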

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/yarin10121
πŸ“…︎ Dec 16 2021
🚨︎ report
Useful explanation video on lme4 warning, β€œThe Hessian Matrix is not Positive definite.” youtu.be/84LpYeyLvmY
πŸ‘︎ 56
πŸ’¬︎
πŸ‘€︎ u/Stauce52
πŸ“…︎ Feb 16 2021
🚨︎ report
a variance matrix S is positive definite

In a proof to show that the eigenvalues of a variance matrix S are non-negative, my lecturer took an eigenvector x and showed that x^(T)Sx = Ξ»x^(T)x (where Ξ» is the corresponding eigenvalue) and stated that clearly x^(T)Sx is non-negative. This is not clear to me.

I can see that S is symmetric and that x^(T)x is a positive scalar, but I cannot see why x^(T)Sx is necessarily non-negative.
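
The missing step is that x^(T)Sx is itself a variance: if S is the sample covariance of data y, then x^(T)Sx is the sample variance of the projected data x^(T)y - a sum of squares, hence non-negative. No positivity of the entries of S is needed (covariance matrices can have negative entries). A NumPy sketch with made-up data:

    import numpy as np

    rng = np.random.default_rng(2)
    Y = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 3))
    S = np.cov(Y, rowvar=False)            # 3x3 sample covariance matrix

    x = rng.standard_normal(3)
    proj = (Y - Y.mean(axis=0)) @ x        # x^T (y_i - ybar) per sample
    # x^T S x is exactly the sum of squared projections / (n - 1):
    print(np.isclose(x @ S @ x, proj @ proj / (len(Y) - 1)))  # True
    print((S < 0).any())                   # entries need not be positive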

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/callumbous
πŸ“…︎ Nov 07 2020
🚨︎ report
Prove that if matrix A is positive definite, then it is nonsingular.

First, going through what it means to be positive definite and non-singular:

Positive definite implies

  • det(A) > 0

  • All eigenvalues of A are positive, and so 0 is not an eigenvalue of A

Nonsingular implies

  • det(A) =/= 0
  • All eigenvalues of A are nonzero
  • The product of eigenvalues of A = det(A)

It seems as though these two characterizations go hand in hand, though I assume a matrix with negative eigenvalues could still be non-singular without being positive definite. Can this be proven directly, or do I need to prove it by contradiction?

Thanks!
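
For what it's worth, both routes work, and the shortest is a direct contrapositive that never touches determinants; a sketch in LaTeX notation:

    % If A were singular, some x != 0 would satisfy Ax = 0, and then
    %   x^T A x = x^T (Ax) = 0,
    % contradicting x^T A x > 0 for all x != 0. Hence A is nonsingular.
    A \text{ singular} \;\Rightarrow\; \exists\, x \neq 0 : Ax = 0
      \;\Rightarrow\; x^{\mathsf{T}} A x = 0,
    \quad \text{contradicting} \quad x^{\mathsf{T}} A x > 0 \;\;\forall\, x \neq 0.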

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/planetzephyr
πŸ“…︎ Sep 14 2020
🚨︎ report
[D] Second order SGD methods usually approximate Hessian with positive definite matrix (e.g. Gauss-Newton) - can it handle saddles?

For example, due to symmetry of parameters, functions optimized in machine learning usually have a huge number of local minima and saddles, growing exponentially with dimension - it seems crucial for gradient descent methods to pass saddles at a safe distance(?).

I am trying to understand second order SGD convergence methods (slides), and it seems like they are often attracted to saddles - for example, natural gradient wants to take us to a nearby point with zero gradient.

There are many approaches that try to escape non-convexity by approximating the Hessian with some positive definite matrix (e.g. Gauss-Newton).

While such an approximation pretends that the minimized function is locally convex, in fact it isn't - we may be near a saddle at that moment.

How do such positive Hessian approximations handle saddles?

For example, naively, the covariance matrix of recent gradients ignores the sign of the curvature - it looks similar near a minimum and near a saddle - so why doesn't using it attract us to saddles?
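
One answer I believe is standard here is the "saddle-free" fix: keep the Newton scaling but rescale by the absolute values of the Hessian's eigenvalues, so negative-curvature directions repel instead of attract. A toy NumPy sketch on f(w) = w1^2 - w2^2, whose origin is a saddle:

    import numpy as np

    # f(w) = w1^2 - w2^2: gradient (2*w1, -2*w2), Hessian diag(2, -2).
    w = np.array([1.0, 1.0])
    g = np.array([2 * w[0], -2 * w[1]])
    H = np.diag([2.0, -2.0])

    # Plain Newton solves H d = g and jumps straight onto the saddle:
    print(w - np.linalg.solve(H, g))       # [0. 0.]

    # Rescaling by |H| (absolute eigenvalues) keeps the per-direction
    # step size but restores descent, so the step flees the saddle in w2:
    evals, V = np.linalg.eigh(H)
    H_abs = V @ np.diag(np.abs(evals)) @ V.T
    print(w - np.linalg.solve(H_abs, g))   # [0. 2.]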

πŸ‘︎ 50
πŸ’¬︎
πŸ‘€︎ u/jarekduda
πŸ“…︎ Mar 13 2019
🚨︎ report
Say matrix A is symmetric positive definite. For any vector v, I want to somehow show that ||v|| * ||v|| <= ||Av|| * ||A^-1 v||

I can't even imagine a case where this is true, let alone see why it should hold in general. What am I missing?
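
A hedged hint rather than a full answer: symmetry lets you rewrite ||v||^2 as an inner product of the two vectors on the right, after which Cauchy-Schwarz finishes it.

    % Since A^T = A, we get (Av)^T (A^{-1}v) = v^T A A^{-1} v = v^T v, so
    \|v\|^2 = \langle Av,\; A^{-1}v \rangle \;\le\; \|Av\|\,\|A^{-1}v\|
    % by Cauchy-Schwarz; equality holds when v is an eigenvector of A.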

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/moschles
πŸ“…︎ Feb 24 2020
🚨︎ report
Suppose that 0.0 < a < 1.0. Q is an nxn symmetric positive semi-definite matrix. Allegedly this inequality is false: (ap + (1-a)q)^T Q (ap + (1-a)q) > a*(p^T Q p) + (1-a)*(q^T Q q). But how?

Scalar a s.t. 0 < a < 1

Q is an nxn matrix, symmetric positive semi-definite.

p, q vectors in R^n

This inequality is rumored to be false:

(ap + (1-a)q)^T Q (ap + (1-a)q) > a*(p^T Q p) + (1-a)*(q^T Q q)

There should be a way to manipulate the sides of this inequality to show that it could not possibly be true. The likely culprit (I presume) is the bound on a, which makes the following true:

a^2 < a

And so, for example, this turns out to be (ironically) true

a^2 p^T Q p < a p^T Q p

From 0 < a < 1 we have both a > 0 and (1-a) > 0.

Q being symmetric positive semi-definite entails that for all vectors x in R^n,

x^T Q x >= 0

I have tried to expand the left-hand side to collect terms that look like

a^2 p^T Q p

but instead it keeps spitting out terms that look like

p^T Q q

I cannot make heads or tails of these "mixed" terms. Is there some linear algebra trick I am missing?
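
The trick, I believe, is not to fight the mixed term but to absorb it into a square. In LaTeX notation, expanding and then subtracting the left side from the right side:

    % Expanding (using symmetry, q^T Q p = p^T Q q):
    (ap + (1-a)q)^{\mathsf{T}} Q\, (ap + (1-a)q)
      = a^2\, p^{\mathsf{T}} Q p + 2a(1-a)\, p^{\mathsf{T}} Q q + (1-a)^2\, q^{\mathsf{T}} Q q
    % RHS minus LHS, using a - a^2 = (1-a) - (1-a)^2 = a(1-a):
    a\, p^{\mathsf{T}} Q p + (1-a)\, q^{\mathsf{T}} Q q - \mathrm{LHS}
      = a(1-a)\, (p-q)^{\mathsf{T}} Q\, (p-q) \;\ge\; 0

Since a(1-a) > 0 and Q is positive semi-definite, the left side can never strictly exceed the right side, so the rumored inequality is indeed false - this is just convexity of the quadratic form x -> x^T Q x.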

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/moschles
πŸ“…︎ Feb 02 2020
🚨︎ report
Linear Algebra: Understanding a factored positive definite matrix [undergrad]

The following question is part of a self-study guide given in an advanced linear algebra course.

https://i.imgur.com/omdWY9C.png

I'm confident that condition (a) is true. However, I'm having a lot of trouble parsing condition (b).

I understand that ( a_ij v_i v_j ) must evaluate to a positive value for all i, j to satisfy the statement. But I'm having trouble grasping how the left and right sides of the equation relate to each other, and what exactly is happening on the side with the summation.

Can anyone explain?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/9B9B33
πŸ“…︎ Jan 13 2020
🚨︎ report
Definite glitch in the Matrix! My wife is living a double life across 2 dimensions!
πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/glitterxgrunge
πŸ“…︎ Jan 03 2020
🚨︎ report
Model convergence problem; non-positive-definite Hessian matrix - small variance

I want to see the differences between the 6 conditions regarding a centralization index (CI). I am trying to fit a GLMM using the package glmmTMB in R, but the following warning appears:

Warning messages: 1: In fitTMB(TMBStruc) : Model convergence problem; non-positive-definite Hessian matrix. See vignette('troubleshooting')

I have a large data set (the result of a bootstrap), and the interaction of 2 groups (big, small) and 3 distances (3lo, 7lo, 7up), hence 6 conditions.

I am using a negative binomial family with the mean as a weight (since CI is calculated as max - mean). Residual plots show little variance (which makes sense looking at the CI values), also in agreement with underdispersion.

I tried to transform CI (sqrt and log) but the variance is the same. At some point I gave up on modelling and calculated a quick Kruskal-Wallis test and pairwise comparisons, but all p-values were 2e-16 for all interactions.

The troubleshooting page is not helping either, perhaps because I am not well versed in stats and am missing some basics.

Any comment or suggestion is highly appreciated.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/city-of-the-rain
πŸ“…︎ Nov 22 2019
🚨︎ report
[Linear Algebra] How does adding a positive scalar along the diagonal (except the first entry) change the invertibility of a positive semi-definite matrix?

If I have some symmetric positive semi-definite matrix M, and add some positive number a to each diagonal entry of M except M_(1,1) (which is not zero), is the matrix now positive definite and hence invertible? I thought there would be a way to show that the eigenvalues will increase, but I'm having trouble with this approach.
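
I believe the answer is yes, and the quadratic form gives it directly: with D = diag(0, 1, ..., 1), x^(T)(M + aD)x = x^(T)Mx + a(x_2^2 + ... + x_n^2). If any of x_2, ..., x_n is nonzero, this is strictly positive; otherwise x = x_1 e_1 and x^(T)Mx = M_(1,1) x_1^2 > 0. A NumPy sanity check on an illustrative singular PSD matrix:

    import numpy as np

    rng = np.random.default_rng(3)
    B = rng.standard_normal((2, 4))
    M = B.T @ B                         # 4x4 symmetric PSD, rank <= 2
    assert M[0, 0] > 0                  # the M_(1,1) != 0 hypothesis

    a = 0.5
    D = np.diag([0.0, 1.0, 1.0, 1.0])   # shift every diagonal entry but M_(1,1)
    print(np.linalg.eigvalsh(M))                    # has (near-)zero eigenvalues
    print(np.linalg.eigvalsh(M + a * D).min() > 0)  # True: now positive definite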

πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Jul 21 2019
🚨︎ report
Does anyone know of an easy way to find the eigenvectors / eigenvalues of a positive-definite, symmetric matrix in Swift? I really really don’t want to write that code...
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/LimitlessDensity
πŸ“…︎ Oct 02 2019
🚨︎ report
If the reduced row echelon form of a matrix is the identity matrix, is the matrix positive definite?

I have a 3x3 matrix that contains 3 time-varying variables. It turns out that the rref of the matrix is the identity matrix, independent of the time-varying variables.

According to https://www.math.utah.edu/~zwick/Classes/Fall2012_2270/Lectures/Lecture33_with_Examples.pdf , a symmetric matrix is positive definite when the pivots are positive. The pivots of an identity matrix are obviously positive; does this mean that my matrix is positive definite?
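
A hedged word of caution: I believe the answer is no in general, because row reduction to rref may rescale rows by negative numbers, which destroys exactly the sign information the pivot test relies on; the pivots in those notes come from elimination on a symmetric matrix without row exchanges or rescaling (the D of an LDL^T factorization). A one-line NumPy counterexample:

    import numpy as np

    # A = -I row-reduces to the identity (scale each row by -1),
    # yet it is negative definite, not positive definite.
    A = -np.eye(3)
    print(np.linalg.eigvalsh(A))   # [-1. -1. -1.]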

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/txinkai
πŸ“…︎ Apr 25 2019
🚨︎ report
If matrix A is symmetric and a certain condition holds then it is positive definite [University, Numerical Math]

Hi!

So the condition I am talking about is this one: a_ii > βˆ‘_{jβ‰ i} |a_ij| for every row i.

My idea for the proof would be:

Since the matrix is symmetric, I can diagonalize it and get a matrix where all the eigenvalues appear on the diagonal. Since the operations that lead to the diagonalized matrix are all linear, I can say that for this matrix the condition a_ii > βˆ‘_{jβ‰ i} |a_ij| still holds. The condition can then also be rewritten as a_ii - βˆ‘_{jβ‰ i} |a_ij| > 0. Now, in the diagonalized matrix it holds that βˆ‘_{jβ‰ i} |a_ij| = 0, which leads to a_ii > 0, which means that the eigenvalues are positive. This means that the matrix is positive definite.

Is my proof correct?

Thanks for any tips/help
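
For comparison, the route I believe is standard for this result is Gershgorin's circle theorem: every eigenvalue of a symmetric matrix lies in some interval [a_ii - r_i, a_ii + r_i] with r_i = βˆ‘_{jβ‰ i} |a_ij|, so strict row dominance (which forces a_ii > 0) pushes every interval, hence every eigenvalue, to the right of zero. (Note that a similarity transform does not generally preserve the dominance condition, which is what makes the diagonalization step delicate.) A NumPy illustration with a matrix of my own choosing:

    import numpy as np

    # Symmetric, strictly diagonally dominant, positive diagonal.
    A = np.array([[ 4.0, -1.0,  2.0],
                  [-1.0,  5.0,  1.0],
                  [ 2.0,  1.0,  6.0]])

    radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    print(np.diag(A) - radii)       # Gershgorin lower bounds: [1. 3. 3.]
    print(np.linalg.eigvalsh(A))    # all eigenvalues are indeed positive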

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/craaaft
πŸ“…︎ Oct 26 2017
🚨︎ report
How come this one-line code makes the matrix positive definite?

The question is asked here about how to generate a positive definite sparse precision matrix: https://stats.stackexchange.com/questions/155543/how-to-generate-a-sparse-inverse-covariance-matrix-for-sampling-multivariate-gau

And the code to make it positive definite is: theta = theta - (min(eig(theta))-.1) * eye(k)

So this subtracts (smallest eigenvalue - 0.1) from the diagonal of theta, i.e. it adds 0.1 - Ξ»_min to every diagonal entry. How come this makes the matrix positive definite?

I think it has to do with the fact that a matrix with only positive eigenvalues is positive definite. But why does adding this particular shift to the diagonal leave only positive eigenvalues?

Does this work for any matrix?
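
The mechanism, I believe, is this: adding c to the whole diagonal is theta + c*I, which keeps every eigenvector and shifts every eigenvalue by exactly c, so choosing c = 0.1 - Ξ»_min moves the smallest eigenvalue to exactly 0.1. It guarantees positive definiteness only when theta is symmetric (so that "all eigenvalues positive" really implies positive definite). A NumPy translation of the Matlab one-liner:

    import numpy as np

    rng = np.random.default_rng(4)
    B = rng.standard_normal((4, 4))
    theta = B + B.T                  # symmetric, typically indefinite
    k = theta.shape[0]

    # Equivalent of: theta = theta - (min(eig(theta)) - .1) * eye(k)
    theta2 = theta - (np.linalg.eigvalsh(theta).min() - 0.1) * np.eye(k)

    print(np.linalg.eigvalsh(theta).min())    # usually negative here
    print(np.linalg.eigvalsh(theta2).min())   # 0.1 (up to rounding)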

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/mattematik
πŸ“…︎ Feb 18 2018
🚨︎ report
P1: Pink rock, white matrix? Topside has definite crystalline structure.
πŸ‘︎ 24
πŸ’¬︎
πŸ‘€︎ u/tiltawhirling
πŸ“…︎ Aug 17 2016
🚨︎ report
Mapping positive definite matrix through a similarity transformation

Hello all,

This is my first post on the group. I'm hoping that the group can help me better understand something.

I'm designing a feedback controller with LQR in Matlab. My LTI model (A,B,C,D) is very stiff and the LQR numerics struggle with it (complaining about an ill-conditioned system). So I chose to balance the system using Matlab's BALREAL. This resolved the numerical challenge (at least LQR stopped complaining).

BALREAL returns the similarity transformation matrices to convert between the original state vector 'x' and the newly balanced state vector 'x_b'. That relation is,

x_b = Tx, and x = Tix_b

LQR produces an optimal state-feedback gain matrix that minimizes,

J = Integral {x'Qx + u'Ru + 2*x'Nu} dt

or, if used with the new system it would be,

J = Integral {x_b'Q_bx_b + u'Ru + 2*x_b'Nu} dt

(NOTE: I'm not specifying an N matrix...)

where 'u' are the inputs into the system.

Now to my question. The state space for the original LTI system has physical meaning, whereas the state space for the newly balanced system does not. When designing my controllers with LQR, I know how to construct my Q & R matrices because of the physical meaning in the state vector 'x'. However, I lose that intuition when trying to apply LQR to the new balanced system. Meaning, what is the relationship between Q_b and Q?

If we substitute the definition of 'x_b' into the cost function for the original system, we get

J = Integral {x_b'Ti'QTix_b + u'Ru + 2*x_b'Ti'Nu} dt

To me this means that,

Q_b = Ti'QTi

Now for the problem... let's assume that I have a fully actuated 2 degree-of-freedom mass-spring-damper with a state vector of,

x = [z1; z2; zdot1; zdot2]

where z1 and z2 are the positions of the masses and zdot1 and zdot2 are their respective rates.

My experience with the LQR-based controller design process has been that if I want to control z1 more than z2, and I don't really care about zdot1 and zdot2, then a good set of Q & R matrices would be

Q = [a 0 0 0
     0 b 0 0
     0 0 0 0
     0 0 0 0]

R = [1 0
     0 1]

Choosing the ratio between 'a' and 'b' when defining 'Q' determines how well z1 performs relative to z2. Meaning, if I make 'a' a lot bigger than 'b', then the feedback control law from LQR will allocate more control authority to z1, and assuming the dynamics of the two DOFs are similar, the response time of z1 will be superior to that of z2. Correct?

I can construct my Q matrix this way because of

... keep reading on reddit ➑

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/joehays
πŸ“…︎ May 26 2017
🚨︎ report
What did the positive definite matrix say when asked if it was certain all of its eigenvalues were positive?

Well of course I'm Schur!

πŸ‘︎ 12
πŸ’¬︎
πŸ‘€︎ u/_emmylynne_
πŸ“…︎ Oct 02 2018
🚨︎ report
How can one prove that a symmetric n*n matrix A is negative definite if the determinant of its kth leading principal submatrix (the matrix formed by the entries between a11 and akk) has the same sign as (-1)^k for every k, and positive definite if these determinants are positive for all k?

This arises in my calculus course, where it was stated without proof. Is there a proof that can be accessed somewhere?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/markpreston54
πŸ“…︎ Mar 22 2019
🚨︎ report
If a matrix is not positive definite, negative definite, positive semidefinite or negative semidefinite, is it then (always) indefinite?

This is probably trivial and obviously true, but I can't find it in my book, and I want to be 100% sure.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Greaseddog
πŸ“…︎ Mar 28 2017
🚨︎ report
x' h x = 0 => h x = 0 for h, a positive semi definite symmetric matrix. Proof I cannot remember, maybe someone can help

h is a positive semi-definite symmetric matrix. Given x' h x = 0, show h x = 0.

My question is: do you use the spectral theorem? Do you show it by using the fact that you can get an orthonormal basis? I just cannot remember the proof and cannot find an exposition anywhere...
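
One standard argument, as far as I recall, goes through the positive semi-definite square root that the spectral theorem provides (the orthonormal basis is hidden inside it):

    % Spectral theorem: h = Q \Lambda Q^T with \Lambda \ge 0, so the PSD
    % square root B = Q \Lambda^{1/2} Q^T satisfies B^T B = B^2 = h. Then
    0 = x^{\mathsf{T}} h\, x = x^{\mathsf{T}} B^{\mathsf{T}} B\, x = \|Bx\|^2
      \;\Rightarrow\; Bx = 0 \;\Rightarrow\; hx = B(Bx) = 0.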

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/enock999
πŸ“…︎ Oct 13 2016
🚨︎ report
Tensor Methods and Recommender Systems / projected fixed-point iteration on a low-rank matrix manifold / "Compress and eliminate" solver for symmetric positive definite sparse matrices nuit-blanche.blogspot.com…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/compsens
πŸ“…︎ Apr 11 2016
🚨︎ report
Not sure how to proceed with this positive definite matrix question

Question: For what values of a is this matrix positive definite? [[1,a,a],[a,1,a],[a,a,1]]?

Answer: This is the same as asking for what values of a we have x^T Ax > 0 for every nonzero x.

some algebra => x^2 + 2axy + 2axz + y^2 + 2ayz + z^2 > 0

Obviously a = 0 works (A is then the identity), but how do I find the full range of values?
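
One route is Sylvester's criterion: A is positive definite iff all leading principal minors are positive. Here they are 1, 1 - a^2, and det(A) = (1 - a)^2 (1 + 2a), which pins down -1/2 < a < 1 (note that a = 1 gives the all-ones matrix, which is only positive semi-definite). A NumPy check of that range:

    import numpy as np

    def is_pd(a):
        A = np.array([[1, a, a], [a, 1, a], [a, a, 1]], dtype=float)
        return np.linalg.eigvalsh(A).min() > 1e-12  # tolerance for rounding

    for a in [-0.6, -0.5, 0.0, 0.99, 1.0]:
        print(a, is_pd(a))
    # -0.6 False, -0.5 False (boundary), 0.0 True, 0.99 True, 1.0 False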

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/typsi5
πŸ“…︎ Jul 16 2012
🚨︎ report
Showing that a matrix is positive (semi-)definite math.stackexchange.com/qu…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/huy227
πŸ“…︎ Dec 07 2013
🚨︎ report