A list of puns related to "Eigenvalue"
"You will collapse bridges and get fired if you don't."
Source: my teacher
Hi all
I'm taking an introductory math course at university and seem to have either A) gotten the worst teacher ever; or B) utterly forgotten what numbers even are. Anyway, I've attached an image of the equation in question, including the "characteristic equation" answer. My lecturer has said that "you [students] can do the arithmetic on your own as it's rather simple"
I tried to work it out myself (section 1) and ended up with an answer nowhere near what he did (section 2 - entirely copied from lecture slides).
I tried using the Rule of Sarrus and couldn't work out the product of the 3 bracketed terms, so I put it into a matrix calculator and tried to find a method there. However, it only gave me the same formula I had been using, and still gave a different answer from what the lecturer got! (section 3)
I'm sure it is simple, but I have absolutely no idea how he got the factors in the characteristic equation he provided.
Please help. I don't even know where to start.
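Since the actual matrix is only in the attached image, here is a generic sketch of the arithmetic with a made-up 3x3 example (Python/SymPy; the matrix below is purely illustrative, not the one from the lecture): expand det(A - lambda*I) and factor it to get the characteristic equation.

```python
import sympy as sp

x = sp.symbols('lambda')
# made-up 3x3 example matrix (NOT the one from the lecture slides)
A = sp.Matrix([[2, 1, 0],
               [1, 3, 1],
               [0, 1, 2]])
# characteristic polynomial: det(A - lambda*I)
p = (A - x * sp.eye(3)).det().expand()
print(sp.factor(p))               # factored characteristic polynomial
print(sp.solve(sp.Eq(p, 0), x))   # its roots = the eigenvalues
```

Comparing your own expansion term by term against the factored form is usually the fastest way to spot where the arithmetic diverged from the lecturer's.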
I'm currently testing my LQR controller on a discretised state-space model of the cart-pole problem.
I first hand-calculated the continuous state-space model, then used c2d (MATLAB) to discretise it with a sample time Ts = 0.01.
I then created a closed-loop system called sys_dsc_cl using ss(AA-BB*KK, BB, CC, DD, Ts).
I did a step()
and it shows that the 4 states stabilise as expected. But the puzzling thing is that when I queried the eigenvalues, the real parts of all 4 eigenvalues are positive, which contradicts what I have learnt: that eigenvalues (poles) need to be negative in order to be stable.
mp = 0.3;
mc = 0.5;
L = 0.15;
g = 9.81;
A = [0 1 0 0; 0 0 (mp*g/mc) 0; 0 0 0 1; 0 0 ((mp+mc)*g/(L*mc)) 0];
B = [0; 1/mc; 0; 1/(L*mc)];
C = eye(4);
D = [0; 0; 0; 0];
Q = diag([1, 1, 10, 100]);
R = 1;
freq = 100;
Ts = 1 / freq;
sys_cont = ss(A, B, C, D);
sys_dsc = c2d(sys_cont, Ts);
[AA, BB, CC, DD] = ssdata(sys_dsc);
KK = dlqr(AA, BB, Q, R);
sys_dsc_cl = ss(AA-BB*KK, BB, CC, DD, Ts);
% step(AA-BB*KK, BB, CC, DD)
eig(sys_dsc_cl)'
eig(AA-BB*KK)'
step(sys_dsc_cl)
output:
0.2840 + 0.0000i 0.9930 - 0.0074i 0.9930 + 0.0074i 0.9907 + 0.0000i
0.2840 + 0.0000i 0.9930 - 0.0074i 0.9930 + 0.0074i 0.9907 + 0.0000i
https://preview.redd.it/yl4gc32b1rt61.jpg?width=560&format=pjpg&auto=webp&s=6410bd17a9829ee87ded9719c5bc5346a7ad03b3
I even verified this in Simulink by doing the below. Note that inside the discrete state-space block, the values are AA, BB, CC, DD with an initial state of [0 0 deg2rad(5) 0].
https://preview.redd.it/xx82oevq1rt61.jpg?width=1201&format=pjpg&auto=webp&s=a58dfe5b7194abc45325b05ca93c62de8e246756
https://preview.redd.it/nj2r9e8y1rt61.jpg?width=3000&format=pjpg&auto=webp&s=f3f4667488346252293589c021e5554669ff9059
I looked at the scope and it shows it stabilising (similar to the step function in matlab).
All in all, everything stabilises, even though the eigenvalues are positive! Can someone tell me where I went wrong?
I'm pretty sure the stabilising part is correct, but for some reason the extraction of the eigenvalues (poles) is going wrong.
Can someone advise where I went wrong, please?
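For what it's worth, in discrete time the stability boundary is the unit circle (|z| < 1), not the imaginary axis, so the "negative real part" rule from continuous time does not apply to the eigenvalues of AA-BB*KK. A sketch re-checking the numbers in Python (SciPy stand-ins for c2d and dlqr; parameters copied from the script above):

```python
import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import solve_discrete_are

# cart-pole parameters from the post
mp_, mc, L, g = 0.3, 0.5, 0.15, 9.81
A = np.array([[0, 1, 0, 0],
              [0, 0, mp_ * g / mc, 0],
              [0, 0, 0, 1],
              [0, 0, (mp_ + mc) * g / (L * mc), 0]])
B = np.array([[0.0], [1 / mc], [0.0], [1 / (L * mc)]])
Ts = 0.01

# zero-order-hold discretisation, then discrete LQR via the DARE
Ad, Bd, _, _, _ = cont2discrete((A, B, np.eye(4), np.zeros((4, 1))), Ts)
Q = np.diag([1.0, 1.0, 10.0, 100.0])
R = np.array([[1.0]])
P = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)

# discrete-time stability test: |z| < 1, NOT Re(z) < 0
z = np.linalg.eigvals(Ad - Bd @ K)
print(np.abs(z))  # all magnitudes below 1 -> stable
```

The magnitudes of the closed-loop poles (including 0.9930 ± 0.0074i from the output above) are all below 1, which is exactly the discrete-time stability condition, so the step response and the eigenvalues agree after all.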
I can figure out why positive operators are Hermitian pretty easily, but I do not see why we can just assume that the eigenvalues are positive as well. Why can't zero be an eigenvalue?
Hello everyone, I'm making a program for structural analysis... the main issue is that it requires the program to calculate the eigenvalues and eigenvectors of a matrix (1x1, 2x2, 3x3 and 4x4). I was wondering if anyone here knows how to do it in Visual Basic.
I found some code on Google that supposedly does it automatically, but since I'm new to programming I don't really know how to apply it :(
Help!!
How do you think about them in relation to topology and transformations?
What happens if the measurement outcome is somewhere in the continuous part of the spectrum? What would the state after the measurement be?
In class we were recently shown that the z-component of the angular momentum operator acting on the |l,m> state yields m hbar |l,m>. This seems like a reasonable result, since hbar has units of angular momentum. However, when it came to the eigenvalue equation for L^2, it seems much less intuitive that the eigenvalue is hbar^2 l(l+1). I've read that this can be derived using ladder operators and commutation relations, but how would one start?
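A standard way to start, sketched below: define the ladder operators, rewrite L^2 in terms of them, and act on the highest state |l,l>, which the raising operator annihilates.

```latex
% ladder operators: L_{\pm} = L_x \pm i L_y, with [L_z, L_\pm] = \pm\hbar L_\pm,
% and the operator identity
L^2 = L_- L_+ + L_z^2 + \hbar L_z
% The top state |l,l> is annihilated by L_+, so
L^2 \lvert l,l\rangle
  = \bigl(0 + \hbar^2 l^2 + \hbar^2 l\bigr)\lvert l,l\rangle
  = \hbar^2\, l(l+1)\,\lvert l,l\rangle
% Since L^2 commutes with L_-, every |l,m> reached by lowering
% carries the same eigenvalue \hbar^2 l(l+1).
```

So l(l+1) is not an assumption: it falls out of evaluating L^2 on the top rung and noting that lowering cannot change the L^2 eigenvalue.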
I only know that the first step is assuming that the matrix A is an nxn matrix. I'm stuck at trying to prove that the matrix is not invertible.
Hey everyone, the end of semester is closer than ever and projects are coming. One of those requires me to calculate the eigenvalues and eigenvectors of a matrix (1x1, 2x2, 3x3 and 4x4). As a newbie in Excel and Visual Basic, I was wondering if any of you knows how to do this in Excel and Visual Basic. I found some code on Google about this, but I don't really know how to apply it to my matrix and programme.
Any help would be awesome.
Thanks
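Since the matrices are at most 4x4, the 2x2 case can even be done in closed form from the characteristic polynomial. A minimal sketch (in Python rather than VB, purely to illustrate the arithmetic a port would follow):

```python
import math

def eig2x2(a, b, c, d):
    # eigenvalues of [[a, b], [c, d]] from the characteristic
    # polynomial: lam^2 - (a+d)*lam + (a*d - b*c) = 0
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        raise ValueError("complex eigenvalues")
    r = math.sqrt(disc)
    return (tr + r) / 2, (tr - r) / 2

def eigvec2x2(a, b, c, d, lam):
    # a nonzero solution of (A - lam*I) v = 0
    if b != 0:
        return (b, lam - a)
    if c != 0:
        return (lam - d, c)
    # diagonal matrix: pick the matching coordinate axis
    return (1.0, 0.0) if abs(a - lam) < 1e-12 else (0.0, 1.0)

print(eig2x2(2, 1, 1, 2))  # (3.0, 1.0)
```

For the 3x3 and 4x4 cases a closed form gets unwieldy, and an iterative method (or a library call, if allowed) is the usual route.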
I've been tasked with finding the eigenvalues and eigenvectors of any n x n matrix without using any hard functions like eig(), det(), etc. Simple functions like length() and randi() are permitted.
All I have right now is the code to create a random n x n matrix and make it symmetric:
n = randi([2,10]);
A = randi([-10,10],n);
A = (A + A.')/2;
I genuinely don't know how to go about doing this. Previous similar assignments included the use of a For Loop so I figured that would be the way to go. I also tried something along the lines of
syms x
lambda = A - x*eye(n)
in order to get something resembling the characteristic equation, but it's essentially unsolvable without the usage of det().
What I do have available to me is a previous code I wrote that would solve any n x n matrix using Gauss-Jordan elimination but that's about it.
Any hints?
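One standard route that needs nothing beyond loops, matrix multiplication, and square roots is the (unshifted) QR iteration: factor A = QR, form RQ, repeat; for a symmetric matrix the iterates approach a diagonal matrix whose entries are the eigenvalues. A sketch in Python (illustrative only; the same steps port to MATLAB, with the Gram-Schmidt factorization written by hand so no eig()/det()/qr() is needed):

```python
import numpy as np

def gram_schmidt_qr(A):
    # classical Gram-Schmidt QR factorization, no library qr()
    n = A.shape[0]
    Q = np.zeros((n, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)   # in MATLAB: sqrt(sum(v.^2))
        Q[:, j] = v / R[j, j]
    return Q, R

def eig_qr(A, iters=500):
    # unshifted QR iteration; for symmetric A the iterates converge
    # to a (nearly) diagonal matrix holding the eigenvalues
    Ak = A.astype(float).copy()
    for _ in range(iters):
        Q, R = gram_schmidt_qr(Ak)
        Ak = R @ Q
    return np.sort(np.diag(Ak))

A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(eig_qr(A))
```

Once an eigenvalue lam is in hand, your existing Gauss-Jordan code can recover the eigenvector by solving (A - lam*I) v = 0.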
So here's the problem I am doing: https://prnt.sc/11ajlie , and I am a bit stuck. Could the Bessel equation's general solution be applied somehow..?
I am hoping someone can help me begin to address my deficiencies in understanding this problem. On Math Stack Exchange, the OP is asking how to find eigenvalues and eigenvectors of block Toeplitz matrices. I have a similar problem I wish to solve, but I would like to understand their work first, so my question is this:
How did the OP go from T, a 2N-by-2N matrix, represented as a sum of tensor products (the Pauli-matrix step), to T as a sum of 2-by-2 matrices after plugging in the eigenvalues of each N-by-N sub-block of T?
I understand that the eigenvalues of the tridiagonal Toeplitz matrices fill each of the elements of the 2-by-2 representation of T, and also that the eigenvectors of tridiagonal N-by-N Toeplitz matrices (e.g., A, B, etc) are all the same, and thus simultaneously diagonalize all blocks of T(2N-by-2N).
Thanks for the help!
I'm currently running an FEM model of a cold-formed steel purlin. I was wondering if there is anyone out there who knows how to determine the critical moment from an eigenvalue generated by a buckling analysis.
Edit 1: The beam in question is simply supported with a uniformly distributed load on the compression flange. The principal failure mode is buckling; I would imagine it is lateral-torsional buckling.
Edit 2: I'm using LUSAS FEA if it helps. I know it's garbage but it's the best I have.
I'm working on learning the power method to evaluate all eigenvalues and associated eigenvectors for an nxn matrix.
My basic understanding is that we guess an eigenvector for the dominant eigenvalue, put it through an iterating function, and then use that output as the next guess (iterating until the desired error is reached). However, I don't understand solving for the other eigenvectors and eigenvalues: we slightly manipulate the guess vector, and then iterate the same as before.
For my problem at hand, I have a 2x2 matrix, so 2 sets of eigenvectors and eigenvalues.
My problem - I'm not sure which eigenvector we are supposed to use for the next iteration. (I'll attach photos of the work)
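For reference, here is a minimal sketch of power iteration plus Hotelling deflation for a symmetric matrix (Python, with a made-up 2x2 example). The key point for the second eigenpair: the next round of iteration is applied to the *deflated* matrix A - l1*v1*v1', built from the eigenpair already found, not to A itself.

```python
import numpy as np

def power_iteration(A, iters=500):
    # repeatedly apply A and renormalise; the iterate aligns with the
    # eigenvector of the eigenvalue of largest magnitude
    x = np.random.default_rng(0).normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)
    return x @ A @ x, x   # Rayleigh quotient, unit eigenvector

A = np.array([[2.0, 1.0], [1.0, 3.0]])
l1, v1 = power_iteration(A)

# Hotelling deflation (symmetric A): subtract the found component,
# then power iteration on the deflated matrix yields the next pair
l2, v2 = power_iteration(A - l1 * np.outer(v1, v1))
print(sorted([l1, l2]))
```

In exact arithmetic the deflated matrix has the same eigenpairs as A except that l1 is replaced by 0, which is why plain power iteration on it converges to the second-largest eigenvalue.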
What do Eigenvalues and Eigenvectors represent? I have seen a lot of articles and videos on this question but I have not been able to really connect with what they are saying. I have like a basic idea but it isn't clear.
Also just like how Gauss-Jordan elimination is the process of aligning the planes represented by a system of linear equations with their respective coordinate axes, what does the process of finding the Eigenvalues and Eigenvector of a matrix represent?
I want to preface by saying I am in my 3rd year of Engineering at a reputable university. I am definitely not giving myself enough credit as I have taken (and passed with good grades) introductory linear algebra, quantum physics, ODE, etc. and courses in general that require me to be comfortable with some amount of linear algebra/eigenvalue computing and MATLAB.
I just don't understand what the heck we are doing with eigenvalues (yes, I know they can say important stuff about the states of a system, say in quantum physics, but at the same time, I don't?). The introduction is always "Ax=yx; isn't it so cool that we can have a vector that, when transformed by A, is simply scaled?!"
No! This is not "cool" to me. I don't get why it matters; it just seems like something really random with tons of math and theory behind it. The best analogy I can give is from chess: new players just know how the pieces move and the basic rules; intermediate players (me) understand a bit of tactics when manipulating those pieces and to some extent recognize how some moves have long-term implications by creating weaknesses or strengthening positions; great players (usually GMs) understand the motive behind everything that is being done and see the picture more clearly. I want to get to that point with eigenvalues.
Please do not tell me to go watch 3B1B - I know it works for some, but not me - all my exposure to linear algebra has been too much hand waving/visualization and I need some concrete intuition for why the heck "Ax=yx" is something I want to "solve" and learn "tricks" to solve.
By tricks, I mean random things like, "symmetric matrices have all real eigenvalues", or just random facts along those lines. I just don't know why the heck I need to care - but I want to because it is what I believe will separate me from "an engineer" to someone who can excel in fields like computer vision or physics as I often hear random snippets about how crucial eigenvalues are. Sure MATLAB can solve them for me - but I want to make sure I understand why they are important/motivation/I'm so lost.
I apologize for the rant.
TLDR: I'm a fairly experienced university-engineering student with exposure to various practical/theoretical linear algebra. I just cannot get over the motivation for why some random dude thought "wow I want to really solve Ax=yx" other than the garbage about how it's a scaled vector. Any insight/readings you guys can refer me to will be appreciated so much.
Edit: In case it's relevant, I am
I have two matrices [M] and [K]. Why do I get different values for eig(K,M) & eig(A), and for [V,D] = eig(K,M) & [V,D] = eig(A)? It seems the values are switched.
% Eigenvalue & Eigenvector
k1 = 3; k2 = 3; k3 = 3; %%Stiffness
K = [k1+k2 -k2; -k2 k2+k3];
m1 = 0.18534; m2 = 0.38090; %%Mass
M = diag([m1 m2]);
A = inv(M)*K; %%Coefficients
eig(K,M)
eig(A)
[V,D] = eig(K,M)
[V,D] = eig(A)
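For comparison, a sketch of the same computation in Python/SciPy (same K and M values as above). Both calls solve for the same spectrum, but eigenvalue solvers make no promise about the order in which they return eigenvalues, which is consistent with the values merely looking "switched"; sort before comparing.

```python
import numpy as np
from scipy.linalg import eig

# same stiffness and mass matrices as the MATLAB script above
K = np.array([[6.0, -3.0], [-3.0, 6.0]])
M = np.diag([0.18534, 0.38090])

w_gen = eig(K, M, right=False)                   # generalized: K v = w M v
w_std = eig(np.linalg.solve(M, K), right=False)  # standard: inv(M)*K
# identical spectra, possibly returned in a different order
print(np.sort(w_gen.real), np.sort(w_std.real))
```

The eigenvectors can also differ by scaling between the two calls, since both solvers normalise however they like; only the eigenvalue/eigenvector *pairing* is meaningful.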
Supposedly this is an easy computation but I simply don't understand.
Given something like this:
Variables put into the PCA | Component 1 | Component 2 |
---|---|---|
variable 1 | x1 | y1 |
variable 2 | x2 | y2 |
variable 3 | x3 | y3 |
variable 4 | x4 | y4 |
Let's pretend the eigenvalues are 2.5 for Component 1 and 0.8 for Component 2. How can I estimate the correlation between two of my variables (let's just say 1 and 3)?
I am also confused about correlation between components. Is the correlation between principal components always 0?
Thank you for taking time out to read this.
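Since the x/y entries in the table are unspecified, here is a sketch with made-up loadings (Python/NumPy). The assumption, stated plainly: the table columns are the unit-norm eigenvector components of the correlation matrix. Under that assumption the correlation matrix is approximately reconstructed from the retained components as V diag(lambda) V'; if your software instead reports loadings already scaled by sqrt(lambda), drop the diag factor.

```python
import numpy as np

# hypothetical loadings: rows = variables 1..4, cols = components 1..2
# (made-up numbers standing in for x1..x4, y1..y4)
V = np.array([[0.60,  0.30],
              [0.50, -0.40],
              [0.55,  0.20],
              [0.30, -0.70]])
lams = np.array([2.5, 0.8])  # eigenvalues from the question

# truncated spectral reconstruction of the correlation matrix
R_approx = V @ np.diag(lams) @ V.T
print(R_approx[0, 2])  # estimated correlation between variables 1 and 3
```

It is only an estimate because two of the four components are discarded. And yes, the component scores themselves are uncorrelated by construction, which is why the cross-component terms drop out of the reconstruction.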
I am given a graph G for which the number of closed walks of length n is given by: 6^n + 3^n + 2^n + 4*(-2)^n + 4.
These terms come from the eigenvalues of the adjacency matrix. In the exercise I am asked to prove there exist 11 vertices and at most 39 edges, and to say what happens to the number of closed walks of length n if we add a loop on each vertex (+1 to every eigenvalue, e.g. 7^n + 4^n + ...). All of this is done.
The last question is to provide of an example of such a graph.
My only idea of an approach is that the adjacency matrix of an example is diagonalisable with the eigenvalues above. So I am trying to find some orthogonal P (since the adjacency matrix is symmetric) such that P^-1 D P = A. The requirements for A are: nonnegative integer entries & eigenvalues 6, 3, 2, -2, -2, -2, -2, -1, -1, -1, -1 (and that P is orthogonal).
I also tried to brute-force it, and managed to get (2, -2) and (-2, 6) as eigenvalues of graphs and then to "cut" the graph into connected components... Nothing seems to work.
Any help is appreciated!
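The identity behind the given formula is that the number of closed walks of length n equals tr(A^n) = sum over eigenvalues of lambda^n. A quick numerical check of that identity on a small example graph (the 5-cycle here, purely as an illustration, not the 11-vertex graph from the exercise):

```python
import numpy as np

# adjacency matrix of the 5-cycle C5 (an illustration, not the
# 11-vertex graph from the exercise)
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1

lams = np.linalg.eigvalsh(A)
for n in (1, 2, 3, 4):
    walks = np.trace(np.linalg.matrix_power(A, n))
    # closed walks of length n = sum of n-th powers of the eigenvalues
    assert np.isclose(walks, np.sum(lams ** n))
print("tr(A^n) matches sum(lambda^n)")
```

The same check, run against any candidate 11-vertex adjacency matrix, is a cheap way to verify a proposed example before trying to prove it matches the spectrum.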
Hey
First: is there some nice way to find the minimum eigenvalue of a real Hermitian, finite-dimensional matrix with trace 1, similar to finding the largest one with the operator norm?
Second: let's say we are given some (minimum) eigenvalue of said matrix; how does it change after a projection operator acts on the matrix?
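On the first question: one common trick is a spectral shift. For any c at least as large as the largest eigenvalue, c*I - A is positive semidefinite and its largest eigenvalue is c minus the smallest eigenvalue of A, so any "largest eigenvalue" method (operator-norm estimates, power iteration) can be reused. A sketch under that assumption, with a made-up 2x2 example:

```python
import numpy as np

def dominant_eig(A, iters=2000):
    # plain power iteration for the eigenvalue of largest magnitude
    x = np.random.default_rng(1).normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)
    return x @ A @ x  # Rayleigh quotient of the converged iterate

A = np.array([[2.0, 1.0], [1.0, 4.0]])  # made-up Hermitian example
# shift trick: lambda_min(A) = c - lambda_max(c*I - A) for c >= ||A||
c = np.linalg.norm(A, 2) + 1.0          # spectral norm as the bound
lam_min = c - dominant_eig(c * np.eye(2) - A)
print(lam_min)
```

For the second question, the effect of a projection P A P depends on the range of P; by Cauchy interlacing the compressed spectrum sits between the extreme eigenvalues of A, so the minimum can only move up (toward, at most, the larger eigenvalues).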
The theory is, in our view, not physically valid because of the same problem as most GUTs and TOEs: too many dimensions.
Is the local Lorentz invariance of general relativity implemented by gauge bosons that have their own Yang-Mills-like action? β https://arxiv.org/pdf/2008.10381.pdf
A Yukawa contribution to gravity is really problematic. So it is not clear if this makes sense; but the numbers have it viable so far.
We call it out because it discusses a massive Lorentz gauge boson as a possible dark matter explanation. It is also an entry point to the Lorentz gauge boson that we may discuss soon, based on the following paper that we are slowly analyzing (it has relations to our multi-fold approach, but nothing is straightforward). That paper does not seem to gather much community support (to the point that arXiv's apparent censure of it, which is not an original lack-of-author-endorsement issue, may raise many questions…): "The characteristic equation of the exceptional Jordan algebra: its eigenvalues, and their possible connection with mass ratios of quarks and leptons" — https://www.researchgate.net/publication/348355473_The_characteristic_equation_of_the_exceptional_Jordan_algebra_its_eigenvalues_and_their_possible_connection_with_mass_ratios_of_quarks_and_leptons. (This set of papers, and the underlying Trace Dynamics work of Adler, have no easy entry points, as do Octonions, …)
As we progressively go through the paper, it should be clear that the 8 dimensions (vs. 4) implied by this model are problematic; they are incompatible with an asymptotically safe quantum gravity and the SM. See https://shmaesphysics.wordpress.com/2020/09/19/renormalization-and-asymptotic-safety-of-gravity-in-a-multi-fold-universe-more-tracking-of-the-standard-model-at-the-cost-of-supersymmetries-guts-and-superstrings/ and https://shmaesphysics.wordpress.com/2021/02/28/more-on-multi-fold-particles-as-microscopic-black-holes-with-higgs-regularizing-extremality-and-singularities/
This is more of a terminology question than a mathematics question. Eigen'stuff' is associated with a specific matrix, which is in turn associated with a linear transformation. However, for example, for the transformation T: V -> V such that T(x) = Ax, would I say that lambda or the vector v is an Eigen'stuff' of A, or an Eigen'stuff' of T?