A list of puns related to "Orthogonal polynomials"
I've got a set of orthonormal (and monic) polynomials that I generated using the well known three term recurrence relation. When I plot the functions, the interval upon which they are orthogonal is obvious. However I don't know how to determine an appropriate weighting function to use for the inner product. (The polynomials do not match any of the classic or well described functions as far as I can tell.) How does one typically find the weighting function? BTW, just an amateur math enthusiast here, so I probably won't understand anything beyond undergraduate level math.
{1, x, x² - 1/3, ...}
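One standard trick (not from the post; a sketch that assumes you have the coefficients b_k of the monic recurrence p_{k+1} = (x - a_k) p_k - b_k p_{k-1}): put them into the symmetric tridiagonal Jacobi matrix. Its eigenvalues and eigenvectors give Gaussian quadrature nodes and weights for the unknown measure (the Golub-Welsch idea), and that discrete measure approximates the weight you are after. Below, the Legendre values a_k = 0, b_k = k^2/(4k^2 - 1) stand in for "your" coefficients, and mu0 = 2 is the total mass of the weight; swap in your own numbers.

n  <- 20
k  <- 1:(n - 1)
bk <- k^2 / (4 * k^2 - 1)            # stand-in recurrence coefficients (Legendre)
J  <- diag(0, n)                     # diagonal a_k = 0 in this example
J[cbind(k, k + 1)] <- sqrt(bk)       # superdiagonal
J[cbind(k + 1, k)] <- sqrt(bk)       # subdiagonal
e   <- eigen(J, symmetric = TRUE)
ord <- order(e$values)
nodes   <- e$values[ord]             # quadrature nodes: support of the measure
weights <- 2 * e$vectors[1, ord]^2   # mu0 * (first eigenvector component)^2
plot(nodes, cumsum(weights), type = "s")   # approximates the CDF of the measure
lines(nodes, nodes + 1)                    # exact CDF of w(x) = 1 on [-1, 1]

The interval shows up as the range of the nodes, and the slope of the cumulative weights hints at the density w(x).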
"Show that the polynomials, q_n = D^n u_n = D^n (x^2 - 1)^n , are orthogonal on L^2[-1,1] (with D^n = (d/dx)^n) and are therefore scalar multiples of the Legendre Polynomials by showing by induction that:
D^k u_n( Β± 1) = 0 for k < n - (it isn't hugely clear but this is u_n evaluated at Β± 1)
βͺ D^n u_n, D^m u_m β« = - βͺ D^(n-1) u_n, D^(m+1) u_m β« (Where βͺ f,g β« denotes an inner product β« f(x) g(x) dx over [-1,1]
βͺ q_n, q_m β« = 0, unless n = m
I spent a while expanding out the products and derivatives of the first one hoping to find a pattern, but ended up with a bunch of ugly factorials and I feel like this was the incorrect direction.
Could someone please point me in the right direction?
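In case it helps, here is a sketch (my own write-up, not from the problem sheet) of the integration-by-parts step, which is where the first part gets used:

\[
\langle D^{n}u_n,\,D^{m}u_m\rangle
= \int_{-1}^{1} D^{n}u_n \, D^{m}u_m \, dx
= \Big[ D^{n-1}u_n \, D^{m}u_m \Big]_{-1}^{1} - \int_{-1}^{1} D^{n-1}u_n \, D^{m+1}u_m \, dx
= -\langle D^{n-1}u_n,\,D^{m+1}u_m\rangle,
\]

where the boundary term vanishes by the first part (since n - 1 < n). Assuming m < n and iterating n times moves all the derivatives onto u_m, giving ⟨q_n, q_m⟩ = (-1)^n ⟨u_n, D^(n+m) u_m⟩ = 0, because u_m has degree 2m < n + m. No factorials needed.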
Hi,
I am trying to improve the predictive power of a multiple regression model. One of the scatterplots shows a quadratic relationship between my Y and one of the variables, and I would like to add this to the model.
So far so good.
The problem I have is that in R it is always suggested to use the function poly, which, according to R's help, is a function to get the orthogonal polynomials of something:
poly(x, degree)
Although I can add the variable manually to the model without using this function (which is very easy, I just have to add
~ x + I(x^2)
to the formula), I want to understand what an orthogonal polynomial is and why it is suggested to use this function in R instead of "normal" polynomials. Please explain like I am 5 :)
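For what it's worth, here is a small illustration (my own, not from the help page) of why poly() is recommended in regression: the raw columns x and x^2 are nearly collinear, the columns of poly(x, 2) are orthonormal, and yet both parameterisations give identical fitted values.

x <- 1:100
cor(x, x^2)                       # close to 1: raw powers are nearly collinear
P <- poly(x, 2)
round(crossprod(P), 10)           # essentially the identity: columns are orthonormal
y <- 1 + 2 * x - 0.03 * x^2 + rnorm(100)
fit_raw  <- lm(y ~ x + I(x^2))
fit_orth <- lm(y ~ poly(x, 2))
all.equal(fitted(fit_raw), fitted(fit_orth))   # TRUE: same model, different basis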
Recently, I've been researching a way to find any orthogonal polynomial, sort of reverse-engineering them. The goal is to find all sets of orthogonal polynomials and extend to more dimensions.
Ortho. polynomials can be represented as the determinant of a moment matrix (a Hankel matrix bordered by the powers of x).
p(x)=det T(x)
T(x) =
( u(0)    u(1)    ...   u(n)
  u(1)    u(2)    ...   u(n+1)
  ...
  u(n-1)  u(n)    ...   u(2n-1)
  1       x       ...   x^n )
Where u(n) is the nth moment of the weight function.
I have also found matrix representations of ortho. polynomials, which are the determinant of a matrix A:
p(x) = det A
Where A =
( ax    r(2)   0      ...    0
  r(2)  ax     r(3)   0      ...
  0     r(3)   ax     r(4)   ...
  ...
  0     ...    0      r(n)   ax )
r(n) are constants depending only on n, the size of the matrix
For Hermite and n = 2:

T = ( 4   2
      1   x^2 )

A = ( 2x        sqrt(2)
      sqrt(2)   2x )
Now, the task is to find a generalized B,
where T = BA,
such that det B = 1 for all B (i.e. B has unit determinant). I have started working out B's for Hermite and hope to generalize to all orthogonal polynomials. I have even extended the matrix A to include Bessel functions, which involve a complex r(n).
Has anyone seen work covering or expanding on this?
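Not a reference, but a small numerical check of the moment-determinant construction may be useful (my own sketch, using the Legendre weight w(x) = 1 on [-1, 1]; the 4/3 factor below is specific to n = 2):

u <- function(k) if (k %% 2 == 0) 2 / (k + 1) else 0    # moments of w(x) = 1 on [-1, 1]
p <- function(x, n) {                                    # det of the bordered moment matrix T(x)
  M <- rbind(
    t(sapply(0:(n - 1), function(i) sapply(0:n, function(j) u(i + j)))),
    x^(0:n)
  )
  det(M)
}
xs <- seq(-1, 1, by = 0.5)
cbind(detT = sapply(xs, p, n = 2), legendre = (4/3) * (xs^2 - 1/3))   # the columns agree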
This type of question is common on the theoretical part of my exam, and I cannot find any way to solve it in my lecture notes or online.
Derive the homogeneous linear second order differential equation that is satisfied by Legendre polynomials (starting from Rodrigues' formula)
Essentially work the other way around -- instead of starting with the equation, you're given the solutions. This question can also be posed with the formula for Hermite or Laguerre polynomials.
I know how to solve the equation using the Frobenius method and derive the polynomials, but I have no idea how to even begin if starting with the polynomials to derive the equation.
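For what it's worth, the standard route (my own sketch) is: set u = (x^2 - 1)^n, observe that (x^2 - 1) u' = 2nx u, differentiate that identity n + 1 times with the Leibniz rule, and write y = D^n u:

\[
(x^2-1)\,y'' + 2(n+1)x\,y' + n(n+1)\,y \;=\; 2nx\,y' + 2n(n+1)\,y,
\]

which simplifies to Legendre's equation

\[
(1-x^2)\,y'' - 2x\,y' + n(n+1)\,y = 0 .
\]

The Rodrigues formula differs from y only by the constant factor 1/(2^n n!), which does not change the equation. The same trick works for the Hermite and Laguerre versions of the question.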
Anyone know a good book/paper that really goes into depth on Orthogonal polynomials and Sturm-Liouville Theory? It would be great if it also covers applications to physics.
I feel like I've done a shit ton of reading on poly() and I'm still struggling to wrap my head around it.
So, the way I understand it, poly(x, #) generates orthogonal polynomials - that is, polynomials of x that are uncorrelated with x^2, x^3, etc.
But if it's by itself... what is it being uncorrelated to?
The help file suggests the following: > which contains the centering and normalization constants used in constructing the orthogonal polynomials and class c("poly", "matrix").
I figured this to mean that poly(x,1) would be equivalent to a standardized x, but this isn't the case:
x <- seq(from=1, to=5, length.out=5)
x_standardized <- (x - mean(x)) / sd(x)   # centre and scale x
stats:::predict.poly(object = poly(x,1))
x_standardized
Which are different. The answer is - for whatever reason - x_standardized/2.
Yet, this isn't the answer either, because:
x <- seq(from=1, to=10, length.out=10)
x_standardized <- (x - mean(x)) / sd(x)
stats:::predict.poly(object = poly(x,1))
are completely different (and the factor is no longer 2).
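For what it's worth, I think the mystery factor is sqrt(n - 1): poly() centres x and rescales the column to unit sum of squares (Euclidean norm 1), whereas standardising divides by the sample sd, which uses n - 1. That gives a ratio of 2 when n = 5 and 3 when n = 10. A quick check (my own):

x <- seq(from = 1, to = 10, length.out = 10)
x_standardized <- (x - mean(x)) / sd(x)
p1 <- poly(x, 1)[, 1]
all.equal(p1, x_standardized / sqrt(length(x) - 1))   # should be TRUE
sum(p1^2)                                             # 1: unit sum of squares, not unit variance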
Take the space P_2(R) = {ax^2 + bx + c : a, b, c in R} with the inner product <p(x), q(x)> = int(0,1) p(x) q(x) dx. Find the orthogonal set of the set S = {x, 1}.
I'm not sure what I need to do here. I'm trying to find q(x) such that <p(x), q(x)> = 0, but I'm lost. What does S = {x, 1} mean? Does it mean that p(x) = ax^2 + bx + c and p(1) = a + b + c? Or that p(x) = x and p(1) = 1? And do I need to find a q(y) = ay^2 + by + c, or a q(x) = rx^2 + sx + t?
Thanks for any help.
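One way to read it (an assumption on my part: "the orthogonal set of S" means the polynomials in P_2 orthogonal to both elements of S = {x, 1}): write q(x) = ax^2 + bx + c and impose orthogonality to 1 and to x,

\[
\langle q, 1\rangle = \int_0^1 (ax^2+bx+c)\,dx = \tfrac{a}{3}+\tfrac{b}{2}+c = 0,
\qquad
\langle q, x\rangle = \int_0^1 (ax^3+bx^2+cx)\,dx = \tfrac{a}{4}+\tfrac{b}{3}+\tfrac{c}{2} = 0,
\]

which forces b = -a and c = a/6. So the polynomials orthogonal to S are exactly the multiples of x^2 - x + 1/6 (a rescaled shifted Legendre polynomial).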
I am doing some work on multivariate Chebyshev polynomials, and I'm looking for proofs that the roots (as a function of the coefficients) of polynomials in an orthogonal basis are well-conditioned. For example, it is asserted that the roots of polynomials in the Chebyshev basis are well-conditioned on [-1,1]. However, despite my searching, I can't find a proof of this fact.
Can someone point me in the right direction?
After using the Gram-Schmidt algorithm to find an orthogonal basis that is made up of polynomials, how do you check that it is an orthogonal basis?
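One quick numerical check (a sketch that assumes the inner product is ∫ p q over [-1, 1]; swap in your own interval and weight): compute all pairwise inner products of the candidate basis and confirm the off-diagonal entries are zero up to rounding.

basis <- list(
  function(x) rep(1, length(x)),   # p0(x) = 1
  function(x) x,                   # p1(x) = x
  function(x) x^2 - 1/3            # p2(x) = x^2 - 1/3
)
ip <- function(f, g) integrate(function(x) f(x) * g(x), -1, 1)$value
G <- outer(seq_along(basis), seq_along(basis),
           Vectorize(function(i, j) ip(basis[[i]], basis[[j]])))
round(G, 10)                       # should be a diagonal matrix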
I learned today that the Legendre Polynomials are "orthogonal over a unit circle". I'm wondering how one could show that?
Preface : I am a senior physics major.
In the lecture series Perturbation and Asymptotics (lecture 6), a method for determining the next number in a sequence is laid out. In it the lecturer set up a three-step process.
Given {a_n}, imagine a_n to be the 2n-th moment: a_n = [; \int w(x)\, x^{2n}\, dx ;]
If given {b_n}, construct a set of polynomials such that P_0(x) = 1, P_1(x) = x, P_{n+1}(x) = x P_n(x) - b_n P_{n-1}(x).
Then the product of the polynomials P_n(x) and P_m(x) (n != m) must be orthogonal to w(x), i.e. [; \int w(x)\, P_n(x)\, P_m(x)\, dx = 0 ;].
The function w(x) is unknown, but assumed to be an even function, and in the end is really just used to construct an identity.
This process allows you to construct a new set of polynomials that can then be used to determine either {a_n} or {b_n}, depending on which partial set you are given.
Why does this method work?
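Part of the answer (my own sketch, not from the lecture) is that the moments and the recurrence coefficients determine each other. Writing mu_k for the k-th moment of w (so mu_{2n} = a_n, and the odd moments vanish because w is even), the Hankel determinants D_n = det[mu_{i+j}], i, j = 0..n, satisfy b_n = D_n D_{n-2} / D_{n-1}^2, so the {a_n} fix the {b_n}, and running the relation the other way recovers moments from the b_n. A quick check for w(x) = 1 on [-1, 1], where the exact answer is b_n = n^2 / (4n^2 - 1):

mu <- function(k) if (k %% 2 == 0) 2 / (k + 1) else 0    # moments of w(x) = 1 on [-1, 1]
D  <- function(n) det(outer(0:n, 0:n, Vectorize(function(i, j) mu(i + j))))
b  <- function(n) D(n) * D(n - 2) / D(n - 1)^2
cbind(recovered = sapply(2:5, b), exact = (2:5)^2 / (4 * (2:5)^2 - 1))   # identical columns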
Note
This is currently incomplete, I will continue to add to this outline
Goals
A survey on the major topics of numerical analysis. My initial thought was to cover a full numerical analysis book, but that does not seem practical. Personally I would like to go over the wavelet transform, so my suggestion is Numerical Mathematics by Quarteroni, Sacco, and Saleri. The book is a bit dense, and has received negative reviews on Amazon for it. I haven't had quite so negative an experience with it during a detailed numerical ODEs course. The presentation (from what I remember) was theoretically heavy, with practical exercises (in the form of Matlab programs).
The major topics that will be covered are chapters 10-14:
-Orthogonal polynomials in approximation theory
-Numerical solutions to ordinary differential equations
-Two-point boundary value problems
-Parabolic and Hyperbolic Initial Value Problems (This is for PDEs)
The free resource I've found on numerical analysis covers all but chapter 10, as well as the topics of parts 1-3 of the book (basics of computer arithmetic, numerical linear algebra, "around functions and functionals", which includes root-finding, interpolation, and numerical differentiation and integration).
Free Resources
Notes on Fourier and Wavelet transform
Books
Syllabus
Topic | Book chapters | free resources |
---|---|---|
Matlab and Octave learning resources
Related topics and further reading
That is to say, how do I find some [; w(x) ;] such that [; \int dx \; w(x)\; L_n(x)\; L_m(x)\; e^{i\;(n+m)\;(\frac{c_1}{\sqrt{x}}-1)}=f(n)\delta_{mn} ;]
where [;L_n(x);] is the n-th Laguerre polynomial? I know this is oddly specific, but it's getting to be kind of a roadblock on an optics project I'm working on. Thanks in advance for the help!
Edit: to clarify based on some of the responses, I'm looking for an inner product under which [; L_n(x)\;e^{i\;n\;(\frac{c_1}{\sqrt{x}}-1)} ;]
are orthogonal. Just using the inner product that makes the Laguerre polynomials orthogonal doesn't work, and I'm not sure how to tweak it to make it work. I left the f(n) on the right side just for generality's sake. Sorry if that led to any confusion; I don't care about f(n), just that for mismatched n and m the inner product goes to 0.
This recurring thread will be for questions that might not warrant their own thread. We would like to see more conceptual-based questions posted in this thread, rather than "what is the answer to this problem?". For example, here are some kinds of questions that we'd like to see in this thread:
Including a brief description of your mathematical background and the context for your question can help others give you an appropriate answer. For example consider which subject your question is related to, or the things you already know or have tried.
So they basically told me that some functions live in a function space and that this is an infinite-dimensional vector space. Does this mean that there are an infinite amount of basis vectors to represent a function? I'm a bit confused, can someone help me out?
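A concrete example of what that means (my addition): on [-1, 1] the Legendre polynomials P_0, P_1, P_2, ... form one such basis, and representing a reasonably nice function takes infinitely many coordinates c_n:

\[
f(x) = \sum_{n=0}^{\infty} c_n P_n(x), \qquad c_n = \frac{2n+1}{2}\int_{-1}^{1} f(x)\,P_n(x)\,dx .
\]

So yes: because the space is infinite-dimensional, a basis has infinitely many elements, and a general function needs infinitely many of them in its expansion (finitely many only if it happens to be a polynomial).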
Hi everyone. I am a second-year math student from a not very reputable university. I took linear algebra (matrices, inverses, determinants, Cayley-Hamilton, bilinear and quadratic forms, polynomials, Gram-Schmidt, orthogonal subspaces, orthogonal projections), real analysis (series, sequences, functional series, functional sequences, continuity, uniform continuity, uniform convergence, integration, differentiation), topology (connectedness, compactness, Hausdorff spaces, metric spaces, separability, sequences in different topologies, homeomorphisms) and analytic geometry. But after taking these courses I still don't feel that I am good at proofs, and I am wondering whether math is the right track for me. I don't even know what level I am at. Am I worse at math than the average math student, or maybe average?
https://math.stackexchange.com/users/933947/unit-1991 - this is actually my Stack Exchange account, on which I posted math questions. It would be great if you could take a look at that account's questions and say approximately what my current level of math knowledge is, so I can decide whether going for a master's degree is worth it.
Hello Everyone,
I created a YouTube channel (here's the link) a few months ago in which I post detailed lectures in higher mathematics.
I have been uploading Real Analysis and Linear Algebra videos.
I have covered the following topics so far:
Future lectures will cover
The course will be complete by the end of February after which I plan to start with group theory.
Almost every lecture begins with two or three problems.
My aim behind making these videos is to create a video book, so that anyone who wants to learn need not look elsewhere (though, of course, other sources can surely help).
I hope that the people here would find the content useful and interesting.
Thank you.
PS. According to the forum rules, self-promotion is allowed on Saturdays, so I hope I am not crossing any boundaries.
Fourier's theorem states:
A mathematical theorem stating that a PERIODIC function f(x) which is reasonably continuous may be expressed as the sum of a series of sine or cosine ....
Why can't we use triangle waves or another wave? What makes sine waves special? If the Fourier transform is essentially a Taylor expansion with sines instead of polynomials, why can't we expand using a different oscillating basis function?
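You can expand in other orthogonal families (Legendre polynomials, Walsh functions, wavelets, and so on); sines are singled out mainly because they are eigenfunctions of d^2/dx^2 and behave simply under time shifts. And a triangle wave is itself a sum of cosines; a tiny check (my own sketch) of the identity |x| = pi/2 - (4/pi) * sum over odd n of cos(nx)/n^2 on [-pi, pi]:

x   <- seq(-pi, pi, length.out = 400)
tri <- abs(x)                                   # one period of a triangle wave
N   <- seq(1, 19, by = 2)                       # first 10 odd harmonics
f10 <- pi/2 - (4/pi) * rowSums(outer(x, N, function(xx, nn) cos(nn * xx) / nn^2))
max(abs(f10 - tri))                             # already only about 0.03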
Hello guys, I know this might be a silly question, but could someone please tell me the difference between Seidel and Zernike coefficients?
I have read the explanation in different sources but it is still not so intuitive for me.
Thanks in advance!
Hello everyone
I have encountered the following problem, related to reconstructing a positive-valued particle density function f: [0,1]^2 -> R_{>0}.
Basically I am given measurements m_i = integral over [0,1]^2 of f(x) g_i(x) dx, where the g_i are weighting functions that are known in advance, so the measurements correspond to weighted integrals of f against the weights g_i.
My question is: given the m_i, is there a general numerical approach to reconstruct f?
If it helps, I attach a picture of a typical weighting function:
Typical weighting function g_i; red corresponds to 0, blue/green corresponds to 1.
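A common starting point (my own sketch; the g_i below are made-up stand-ins, not your real ones) is to expand f in a small basis phi_j, so that each measurement m_i is linear in the unknown coefficients, and then solve the resulting least-squares problem (adding regularisation or a positivity constraint as needed):

grid <- expand.grid(x = seq(0, 1, length.out = 60), y = seq(0, 1, length.out = 60))
w    <- rep(1 / nrow(grid), nrow(grid))          # crude quadrature weight per grid point
Phi  <- with(grid, cbind(1, x, y, x * y))        # basis phi_j: 1, x, y, xy (an assumption)
G    <- with(grid, cbind(1, x, y, x * y, x^2, y^2))   # hypothetical weighting functions g_i
f_true <- with(grid, 1 + x + 2 * x * y)          # known test density to simulate the m_i
m      <- as.numeric(t(G) %*% (w * f_true))      # m_i = integral of f * g_i
A      <- t(G) %*% (w * Phi)                     # A[i, j] = integral of g_i * phi_j
coefs  <- qr.solve(A, m)                         # least-squares recovery of the coefficients
f_hat  <- as.numeric(Phi %*% coefs)
max(abs(f_hat - f_true))                         # near zero for this consistent toy example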