A list of puns related to "Overdetermined"
Say I have M equations in N unknowns, with coefficients a_mn and M > N, in this form:
a_11 x_1 + a_12 x_2 + ... a_1N x_N = C_1
...
a_M1 x_1 + a_M2 x_2 + ... a_MN x_N = C_M
I am able to find x_1, x_2, ..., x_N just fine with least squares.
What I don't understand is: if I were to "normalize" one of the equations, say by dividing the first one by a_11 throughout, into something like this
x_1 + a_12/a_11 x_2 + ... a_1N/a_11 x_N = C_1/a_11
and leave the others untouched, I get a different set of x_1, x_2, ..., x_N.
Why is that? What is the significance of multiplying one of the equations by a constant? It doesn't change that particular line in any way, yet it shifts the minimum point significantly. If an arbitrary constant can shift the minimum point, should I normalize all the equations to some common standard, say by dividing each equation by its respective C_k so that the constant term is 1?
I'm more of a programmer than a mathematician - this is my first post here. Hopefully my formatting works out.
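The effect of scaling one row can be seen directly in a few lines of NumPy (a made-up 4×2 system, not the poster's data): least squares minimizes the sum of squared residuals, so multiplying an equation by k multiplies its residual by k and its contribution to the objective by k², which moves the minimizer even though the line itself is unchanged.

```python
import numpy as np

# An invented overdetermined, inconsistent system: 4 equations, 2 unknowns.
A = np.array([[2., 1.], [1., 3.], [4., -1.], [1., 1.]])
c = np.array([3., 5., 2., 4.])
x1 = np.linalg.lstsq(A, c, rcond=None)[0]   # ordinary least squares

# Scale the first equation by 10: same line geometrically,
# but its residual now counts 100x more in the squared objective.
A2, c2 = A.copy(), c.copy()
A2[0] *= 10.0
c2[0] *= 10.0
x2 = np.linalg.lstsq(A2, c2, rcond=None)[0]
# x1 != x2, because the minimizer shifts toward satisfying equation 1
```

This is also why, in practice, equations are weighted by the reciprocal of their measurement uncertainty (weighted least squares) rather than by an arbitrary normalization such as dividing each equation by its C_k.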
Let p and v be the unknown 3D position and velocity of a point at t = 0. We're assuming that the point isn't accelerating.
I have a sequence of timestamped range measurements between this point and other points with known positions. Call these r_i for the range, p_i for the position, and t_i for the timestamp.
So the measurement residual would be something like r_i - norm((p + v * t_i) - p_i).
The problem is overdetermined, so I have more than 6 measurements (anywhere from tens to hundreds.)
I have a working solution implemented using Levenberg-Marquardt, but it is too slow to run on embedded hardware despite my best efforts to tune it (for some of my test datasets it takes hundreds of iterations to converge, which works out to hundreds of milliseconds).
Could there be a closed-form solution? Or an optimization method that converges faster?
I also have an EKF working, but it suffers from slow convergence and gets really overconfident really quickly, though it does seem to converge eventually in most cases.
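For a residual of the form r_i − norm(p + v·t_i − p_i) there is no exact closed form, but on clean data the problem is mild enough that plain Gauss-Newton (essentially LM with zero damping) often converges in a handful of iterations. Below is a minimal sketch on synthetic, noiseless data — all values and the initial guess are invented for the demo, not taken from the post:

```python
import numpy as np

# Synthetic ground truth and measurements (invented for the demo).
rng = np.random.default_rng(0)
p_true = np.array([1.0, -2.0, 0.5])      # position at t = 0
v_true = np.array([0.1, 0.05, -0.02])    # constant velocity
t = np.linspace(0.0, 10.0, 50)           # timestamps t_i
anchors = rng.uniform(-10, 10, (50, 3))  # known positions p_i
ranges = np.linalg.norm(p_true + np.outer(t, v_true) - anchors, axis=1)

# Gauss-Newton on the residual f_i = ||p + v t_i - p_i|| - r_i.
x = np.concatenate([anchors.mean(axis=0), np.zeros(3)])  # crude initial guess
for _ in range(25):
    q = x[:3] + np.outer(t, x[3:]) - anchors  # point minus anchor_i, (50, 3)
    d = np.linalg.norm(q, axis=1)             # predicted ranges
    f = d - ranges                            # residuals
    u = q / d[:, None]                        # unit vectors = d(range)/dp
    J = np.hstack([u, u * t[:, None]])        # Jacobian w.r.t. [p; v], (50, 6)
    step = np.linalg.lstsq(J, -f, rcond=None)[0]
    x += step
    if np.linalg.norm(step) < 1e-12:
        break
```

If LM takes hundreds of iterations on this problem, the usual suspects are a poor initial guess or overly conservative damping updates; a cheap pseudo-linear initializer (squaring the range equations) followed by a few Gauss-Newton steps is a common pattern.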
I was reading a summary of undergraduate math, and under the linear algebra section it mentions that when there are more equations than unknowns, there is usually no solution. I have tried to find a proof online, to no avail. Does a proof of this even exist?
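The standard intuition behind "usually no solution" can be checked numerically: with M > N, the right-hand side b must lie in the column space of A, a subspace of dimension at most N inside R^M, so a generic b misses it. A quick sketch with an arbitrary random system (sizes invented):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))   # 5 equations, 3 unknowns
b = rng.normal(size=5)

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
# rank_Ab == rank_A + 1 here: b is not in the column space of A,
# so the system has no solution -- the generic outcome when M > N.
```

Formally, the system is consistent exactly when rank([A | b]) = rank(A) (the Rouché–Capelli theorem), and for a random b that equality fails with probability 1 whenever rank(A) < M.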
For the life of me I can't find a coherent definition of this phrase
This is in the context of the compatibilist solution to the causal exclusion problem of mental causation.
My understanding is that overdetermination is defined as: an effect having more than one sufficient cause.
I'm just a bit confused on different examples of overdetermination that often come up, and what they are trying to say about mental causation. The term 'overdetermination' itself seems to be used differently in places, too.
Some common examples might be the firing squad (multiple bullets acting as a sufficient cause of death), a man who is simultaneously shot and struck by lightning etc. These are examples of overdetermination. Am I right in thinking that the point of these is to show that these are non-problematic cases of overdetermination? And that the problem with mental causation (along with physical causation) is that it would be overdetermined systematically?
For example, I have seen the causal exclusion principle stated as:
>No single event can have more than one sufficient cause occurring at any given time unless it is a genuine case of overdetermination. (Kim 2005)
The 'genuine case of overdetermination' seems to refer to these kinds of coincidental examples. Why not just deny overdetermination outright? Why accept these more coincidental cases of overdetermination and not the systematic case of mental/physical causes?
I'm trying to understand the root of why overdetermination is bad for the mental/physical case. It seems like the issue is not just more than one sufficient cause, but because it's systematically more than one sufficient cause. Why is that different?
Hello!
So I have three sets of linear equations that I need to solve:
2x + 2y + 2z = 4; 3x + 4y - 2z = 8; 4x - 2y + 3z = 1
3x - 2y + 3z = 7; -2x + 5y - 4z = -4
3x + 2y - 2z = 3; -2x + 3y + 2z = -6; 4x - 3y + 4z = 0; 2x + 2y + 3z = -1
I'm solving all of them using A*b = X, with A being a matrix containing the coefficients, b our unknowns and X the answers => b = (A^-1)*X to solve for x, y and z:
% a)
Aa = [2,2,2; 3,4,-2; 4,-2,3];
Ba = [4; 8; 1];
xa = Aa\Ba;
% b)
Ab = [3,-2,3; -2,5,-4];
Bb = [7; -4];
xb = Ab\Bb;
% c)
Ac = [3,2,-2; -2,3,2; 4,-3,4; 2,2,3];
Bc = [3; -6; 0; -1];
xc = Ac\Bc;
For a), this produces a nice solution, as the number of unknowns equals the number of equations. However, for b) and c), Matlab produces a solution where one unknown is equal to 0 (b, the underdetermined system) or produces an approximation (c, the overdetermined system).
What is Matlab doing here? My understanding is that for both b) and c) it's effectively impossible to obtain an exact solution (e.g. x = 4, y = 7 and z = 1, kind of thing), and instead it's doing the best it can with what it's been given, but after scouring the internet I still haven't quite got my head around it, hence this post.
Any help is appreciated!
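The same behaviour can be reproduced outside MATLAB, which may make it clearer. For an overdetermined system, backslash returns the least-squares minimizer of ||Ax − B||; for an underdetermined one it returns a *basic* solution with at most rank(A) nonzero entries (hence the zero), whereas the pseudoinverse returns the minimum-norm exact solution. A NumPy sketch using the b) and c) data from the post:

```python
import numpy as np

# Overdetermined system (c): 4 equations, 3 unknowns.
Ac = np.array([[3., 2., -2.], [-2., 3., 2.], [4., -3., 4.], [2., 2., 3.]])
Bc = np.array([3., -6., 0., -1.])
# lstsq minimizes ||Ac x - Bc||: the "approximation" MATLAB returns.
xc = np.linalg.lstsq(Ac, Bc, rcond=None)[0]

# Underdetermined system (b): 2 equations, 3 unknowns.
Ab = np.array([[3., -2., 3.], [-2., 5., -4.]])
Bb = np.array([7., -4.])
# The pseudoinverse picks the exact solution of minimum norm;
# MATLAB's backslash instead returns a basic solution with a zero entry.
xb = np.linalg.pinv(Ab) @ Bb
```

So for b) there are infinitely many exact solutions and the solver must pick one by some rule; for c) there is (generically) no exact solution and the solver returns the best fit in the least-squares sense.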
A while back, I was browsing translated versions of Legendre's original appendix where he derived the least-squares method. In his original paper he mentioned that if we consider a system of equations:
[; E = a + bx + cy + fz + ... ;]
[; E' = a' + b'x + c'y + f'z + ... ;]
where a, b, c, f, ... and a', b', c', f', ... denote the different coefficients of the system (a' does NOT mean a derivative), x, y, z, ... the unknowns, and E the error of each equation, then the system can be converted to
[; 0 = \int ab + x\int b^2 + y\int bc + z\int bf + ... ;]
[; 0 = \int ac + x\int bc + y\int c^2 + z\int fc + ... ;]
such that it minimises the sum of squared errors. (∫ab denotes the sum of similar products, i.e., ab + a'b' + a''b'' + ...) (sorry for the confusing notation)
I know that partial differentiation of the errors is involved, but apart from that, that's all I have.
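The missing step is exactly that partial differentiation. Writing each error as E_i = a_i + b_i x + c_i y + f_i z + ⋯ and using Legendre's ∫ for the sum over equations, a sketch of the derivation:

\[
S \;=\; \int E^{2} \;=\; \sum_i \left(a_i + b_i x + c_i y + f_i z + \cdots\right)^{2}
\]

\[
\frac{\partial S}{\partial x} \;=\; 2\sum_i b_i\left(a_i + b_i x + c_i y + f_i z + \cdots\right) \;=\; 0
\quad\Longrightarrow\quad
0 \;=\; \int ab + x\int b^{2} + y\int bc + z\int bf + \cdots
\]

which is the first normal equation quoted above; differentiating S with respect to y in the same way gives the second, and so on for each unknown.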
When does an overdetermined system have infinitely many solutions?
I thought it could only have one unique solution or no solution.
take for instance this matrix with 4 rows and 3 columns (4 equations 3 unknowns):
>1 2 6 =1
>0 1 3 =1
>0 0 1 =2/9
>0 0 (-6+1) =b
For which a and b does this system have infinitely many solutions? are there even such a and b?
I suppose the system has one unique solution if the last row is of the form
>0 0 0=0,
since then every column is a pivot column?
And no solution if the last row is of the form
> 0 0 0 = c
where c can be anything nonzero? Or if the 3rd and 4th rows are "incompatible", i.e. if x_3 = 3 but also x_3 = 4 or something like that, that would also make it have no solution.
Can this be done for an m x n system?
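Yes, and the general criterion works for any m × n system: there are infinitely many solutions exactly when rank(A) = rank([A | b]) < n, i.e. the system is consistent but some column has no pivot. A small NumPy check on a made-up 4×3 example built to be consistent and rank-deficient (not the matrix from the post):

```python
import numpy as np

# 4 equations, 3 unknowns, but only 2 independent rows:
A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # 2 x row 1
              [1., 0., 1.],
              [2., 2., 4.]])   # row 1 + row 3
b = A @ np.array([1., 1., 1.]) # right-hand side consistent by construction

rA = np.linalg.matrix_rank(A)
rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
# rA == rAb == 2 < 3 unknowns: consistent with a free variable,
# so this overdetermined system has infinitely many solutions.
```

rank([A | b]) > rank(A) gives no solution, and rank(A) = rank([A | b]) = n gives exactly one, matching the echelon-form cases described above.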
I know this is a longshot, since this might be a problem that is much more difficult to fix than I am thinking. With that being said, I'd love any help I can get.
Here's a copy of the code: http://pastebin.com/8Q2UyGP4
An explanation:
I'm following this paper to model a system of synchronized metronomes. This is for a class project, but due to circumstances outside of my control (professor promising to help, but only just finding time), I'm behind schedule. I have code that works now, but I'm trying to make it "elegant" for a presentation. My hope is to submit it to the Wolfram Demonstrations Project next semester.
If you evaluate the code, NDSolve gives the following error:
NDSolve::overdet: "There are fewer dependent variables,
{θ[1][t],θ[2][t],θ[3][t]}, than equations,
so the system is overdetermined."
I don't really know what this means, but from my understanding it's saying that I have conflicting equations. Does anyone have any suggestions, or could you explain in further detail what the error means? Thanks a bunch!
EDIT: I've noticed a few errors in the code above, but am still having trouble. I'm going to revert to my other code that I know works, and will deal with this after the deadline. Thanks!
Is overdetermination a more broad form of dialectics? Something of a pluralistic "antithesis" maybe? If this is not correct, what is the distinction?
Kim, Merricks, Block, and many others often say that causal overdetermination (i.e. an event having multiple sufficient causes) is something that should be avoided. Although I share the intuition, I find it hard to justify. So what exactly is wrong with causal overdetermination? Why is it a problem if it exists?
Phil
Sudden Lee
Go post NSFW jokes somewhere else. If I can't tell my kids this joke, then it is not a DAD JOKE.
If you feel it's appropriate to share NSFW jokes with your kids, that's on you. But a real, true dad joke should work for anyone's kid.
Mods... If you exist... Please, stop this madness. Rule #6 should simply not allow NSFW or (wtf) NSFL tags. Also, remember that MINORS browse this subreddit too? Why put that in rule #6, then allow NSFW???
Please consider changing rule #6. I love this sub, but the recent influx of NSFW tagged posts that get all the upvotes, just seem wrong when there are good solid DAD jokes being overlooked because of them.
Thank you,
A Dad.
Because a toothbrush works better
So far nobody has given me a straight answer
Hey everyone, this course on vector/multivariable calculus is going to open on July 14th and it's open for registration now, free to audit.
https://www.edx.org/course/multivariable-calculus-1-vectors-and-derivatives (part 1 of 3)
If you know someone who might be interested in taking it, then please let them know about it. The dedicated course staff participate on the forum and they put a ton of effort into producing the material. I haven't seen the course yet, but if history is a good indicator then it's going to be phenomenal.
Unit 1: Functions of two variables
Unit 2: Geometry of derivatives
Unit 3: Optimization
Unit 4: Matrices
Unit 5: Curves
Most professors emphasize that underdetermined systems always have free variables; however, can't overdetermined systems have them too? For example,
\begin{bmatrix}
1 & 0 & 3 & 0 & a \\
0 & 1 & 4 & 0 & b \\
0 & 0 & 0 & 1 & c \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
As you can see, x_3 is a free variable.
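Right: free variables come from rank(A) being smaller than the number of unknowns, which can happen regardless of how many rows there are. A NumPy sketch of the example above with concrete values a = b = c = 1 (invented for the demo), keeping only the three nonzero rows of the reduced matrix:

```python
import numpy as np

# Coefficient part of the RREF above, with a = b = c = 1:
A = np.array([[1., 0., 3., 0.],
              [0., 1., 4., 0.],
              [0., 0., 0., 1.]])
rhs = np.array([1., 1., 1.])

# rank 3 < 4 unknowns: x_3 is free even though the original
# augmented matrix had more rows (6) than columns (5).
x_part = np.linalg.lstsq(A, rhs, rcond=None)[0]  # one particular solution
null = np.array([-3., -4., 1., 0.])              # x_3-direction: A @ null = 0
# every x_part + t * null solves the system, for any t
```

So the row count only bounds the rank from above; whenever a column ends up without a pivot, the corresponding variable is free, overdetermined or not.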