A list of puns related to "Abel–Ruffini theorem"
Why do the quadratic, cubic, and quartic formulas work, while no quintic formula can exist?
Hey there, I'll be starting uni in a few months and I've got some spare time in the summer, so I've decided to get a bit of a taste of abstract algebra.
The Abel-Ruffini theorem just sounds fascinating to me. I don't know how one could prove such an abstract thing about polynomials, so I decided to try to understand it, but I've got no idea how I should proceed.
Afaik, it's about Galois theory, and for that I should probably have some idea of group, ring & field theory, but I probably can't understand the whole of those theories. Currently I only really know the definitions of groups, rings & fields, so you can just say that I know almost nothing about abstract algebra.
So, do you have any suggestions on how I should proceed? Any specific terms or definitions I should be aware of, or the extent to which I should learn those theories to understand Abel-Ruffini? Or any pathways you could suggest, etc. Also, I've got no knowledge of linear algebra either, so if it's required I could use some suggestions there too.
Also, I know that I shouldn't learn math theorem by theorem, and that I should probably learn those subjects on their own, etc., but I'll learn them at uni anyway. I currently just want something to spend time with. Thanks in advance.
In the wikipedia article on the Abel-Ruffini theorem (http://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem), they mention that V. Arnold discovered a topological proof of the theorem. However, none of the references they cited had much of an explanation.
Can anyone elaborate on how this proof goes? It sounds fascinating.
https://youtu.be/RhpVSV6iCko
Early on, this video demonstrates that in a mapping of a quintic polynomial's roots to its coefficients, it's possible to swap two of the roots without swapping two of the coefficients, and that this should somehow hint that we can't solve for a quintic polynomial's roots in terms of radicals of its coefficients. I don't see at all where this jump is coming from. Fundamentally, we're just mapping sets of coefficients to sets of roots. Why does 'swapping' matter at all?
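(Not from the video, just a small numerical sketch of the "swap" it describes, using numpy and a quintic with roots 1 through 5 picked for illustration.) Rotate two of the roots half a turn around their midpoint so they trade places: the coefficients move along a closed loop and return exactly to their starting values, because they depend only on the set of roots, while the individual roots end up permuted.

```python
import numpy as np

roots = np.array([1.0, 2.0, 3.0, 4.0, 5.0], dtype=complex)

def swapped_roots(t):
    """Roots at 'time' t in [0, 1]: roots[0] and roots[1] rotate by angle pi*t
    around their midpoint, so at t = 1 they have exchanged places."""
    r = roots.copy()
    mid = (roots[0] + roots[1]) / 2
    phase = np.exp(1j * np.pi * t)
    r[0] = mid + (roots[0] - mid) * phase
    r[1] = mid + (roots[1] - mid) * phase
    return r

# Coefficients of the monic quintic with these roots, at the start and end of the motion.
start_coeffs = np.poly(swapped_roots(0.0))
end_coeffs = np.poly(swapped_roots(1.0))

print(np.allclose(start_coeffs, end_coeffs))  # True: the coefficients trace a closed loop
print(np.round(swapped_roots(1.0)[:2]))       # ~[2, 1]: the two roots have traded places
```

Loosely speaking, any continuous single-valued expression in the coefficients must also return to its starting value along such a loop, so it cannot keep track of which root is which; Arnold's topological argument refines this observation to handle the multi-valuedness of radicals.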
I'm reading this book right now and it is an absolute dream to work through. I love how self-contained it is. I also like how there is a clear "end goal" of the book in the form of Abel's theorem, and the rest of the book is structured around developing the necessary theory through solving problems.
Are there any books similar to this that can be read with basic knowledge about proofs/calculus/linear algebra? The particular topic doesn't matter.
Hi there. I have a pretty quick question about using Abel's Theorem to compute the Wronskian of two solutions for a linear, homogeneous, second order differential equation, specifically how it applies to determining the linear independence of those two solutions.
Let's say we compute the Wronskian using Abel's Theorem; we will end up with an answer of a form like:
C*e^(some function of t)
or maybe:
C*(some function of t)
Given that we have often set our constant of integration, C, to zero for simplicity's sake leading up to this point, what is stopping us from doing that here? Is it required that C be non-zero in this case? Otherwise it seems like a big assumption that our Wronskian will NOT be equal to zero and thus that our solutions are in fact linearly independent.
This is just a general question, and not really related to any homework, but I've been struggling to find any resources (including the main ones listed in this sub's sidebar) that really explain what our C can and cannot be here.
Thank you for your help.
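For whatever it's worth, here is a minimal SymPy sketch of the point being asked about, using the standard example y'' + y = 0 (so p(t) = 0 and Abel's formula reads W(t) = C·e^0 = C). In this setting the C isn't a constant we are free to set: it is determined by which pair of solutions we plug in, with an independent pair forcing C ≠ 0 and a dependent pair forcing C = 0.

```python
import sympy as sp

t = sp.symbols('t')

def wronskian(y1, y2):
    # W(y1, y2) = y1*y2' - y2*y1'
    return sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))

# y'' + y = 0 has p(t) = 0, so Abel's formula gives W(t) = C * exp(0) = C.
print(wronskian(sp.cos(t), sp.sin(t)))      # 1 -> this pair forces C = 1 (never zero: independent)
print(wronskian(sp.cos(t), 2 * sp.cos(t)))  # 0 -> this pair forces C = 0 (identically zero: dependent)
```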
Hi. In school, we are learning about the Abel partial summation formula, and I am having trouble understanding the exact purpose of this theorem. I remember the teacher used it to show a relation between the harmonic series and the natural log function. To me, this example looked weird, because we already know beforehand that the harmonic series diverges. It made me question why we would want to convert a series like that into a logarithmic function. Anything that would help me understand this theorem would be great.
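For context, here is a small numerical sketch (my own, not from the class): part (1) checks the summation-by-parts identity itself on arbitrary sequences, and part (2) shows the payoff in the harmonic-series example, which is not a re-proof of divergence but the rate of growth: H_N − ln N settles down to a constant (the Euler–Mascheroni constant, ≈ 0.5772).

```python
import math
import random

# (1) Abel partial summation: with A_n = a_1 + ... + a_n,
#     sum_{n=1}^{N} a_n*b_n = A_N*b_N - sum_{n=1}^{N-1} A_n*(b_{n+1} - b_n).
N = 50
a = [random.random() for _ in range(N)]
b = [random.random() for _ in range(N)]
A = [sum(a[: n + 1]) for n in range(N)]                       # partial sums A_n
lhs = sum(a[n] * b[n] for n in range(N))
rhs = A[-1] * b[-1] - sum(A[n] * (b[n + 1] - b[n]) for n in range(N - 1))
print(abs(lhs - rhs) < 1e-9)   # True: the identity holds

# (2) The fact the formula is used to explain: H_N grows like ln(N) plus a constant.
for N in (10, 1000, 100000):
    H = sum(1.0 / n for n in range(1, N + 1))
    print(N, H - math.log(N))  # -> 0.6263..., 0.5777..., 0.5772...
```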
Didn't he prove Fermat's Last Theorem in the '90s? Why did he only just win it?
That's like goin around callin Future Nayvadius
I've seen it for polynomials, but I'm not sure whether it also works for binomials.
Basically a theorem that says "all but some number of cases" satisfy the theorem