Legendre polynomials are a family of orthogonal polynomials that occur frequently in science and engineering, so generating them efficiently matters in these fields. There are several ways to evaluate a Legendre polynomial: generating functions, Rodrigues' formula, recurrence relations, Gram-Schmidt orthogonalization, etc. One of the most accurate methods is the recurrence relation.
Here we use Bonnet's recurrence relation for Legendre polynomials (written with the index shifted so that it gives P(n, x) directly):

n * P(n, x) = (2*n - 1) * x * P(n-1, x) - (n - 1) * P(n-2, x),   for n >= 2
We define the Legendre polynomial as a function P(n, x), where n is the order of the polynomial and x is the evaluation point. The base cases are: if n is 0, the polynomial value is always 1, and if n is 1, it is x. These are the initial values needed for the recurrence relation.
For other values of n, the function is defined recursively, directly from the Bonnet recurrence. Thus, P(n, x) returns values of the Legendre polynomial by recursion (a function defined in terms of itself, together with base cases).
# Legendre polynomial
def P(n, x):
    if n == 0:
        return 1   # P0 = 1
    elif n == 1:
        return x   # P1 = x
    else:
        return ((2 * n - 1) * x * P(n - 1, x) - (n - 1) * P(n - 2, x)) / float(n)

# Suppose we want to find the value of the
# 3rd-order Legendre polynomial at x = 5.
# We can display the value as follows:

# Driver program
n = 3
X = 5
print("The value of the polynomial at given point is:", P(n, X))
Output:
The value of the polynomial at given point is: 305.0
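As a quick cross-check (a sketch added here, not part of the program above), the third-order polynomial has the closed form P3(x) = (5x^3 - 3x)/2, which at x = 5 gives (625 - 15)/2 = 305; NumPy's Legendre series evaluator legval agrees:

# Cross-check the recursive result against the closed form and NumPy.
# The coefficient vector [0, 0, 0, 1] selects the series 0*P0 + 0*P1 + 0*P2 + 1*P3.
from numpy.polynomial.legendre import legval

print(P(3, 5))                    # 305.0 from the recursion above
print(legval(5, [0, 0, 0, 1]))    # 305.0 from NumPy
print((5 * 5**3 - 3 * 5) / 2)     # 305.0 from the closed form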
Recursion of integrated Legendre polynomials
StackOverflow question
I am writing this recursion in Python and don't understand why the official solution is different from mine. The trivial cases for n = 1, 2 are clear. This is my approach:
return ((2*(k-1)-1)*x*leg(k-1) - ((k-1)-2)*leg(k-2)) / k
This is the official solution:
return ((2*k-1)*x*leg(k-1) - (k-1)*leg(k-2)) / k
Why do they decrease k when calling the function, but not in the coefficient (2*k-1) in the first part? And why is the coefficient in the second part changed to (k-1)?
Answer:
So generally, as far as I understand, your issue stems from the fact that the formula in your attached image gives L_{k+1}(x), while the official solution implements L_{k}(x), without the intermediate derivation that shows how to obtain L_{k}(x) from L_{k+1}(x).
I further think that there is some confusion here, so I will slightly deviate from the notation. Let m = k+1 in what follows. We then obtain, through straightforward substitution:
m * L(x, m) = (2*(m+1)-1) * x * L(x, m-1) - ((m-1)-2) * L(x, m-2) # for m >= 3
which yields
L(x, m) = ((2*m + 2 - 1) * x * L(x, m-1) - (m-3) * L(x, m-2)) / m
and in Python syntax, this is:
def L(x, m):
    if m == 1:
        return x
    elif m == 2:
        return 0.5 * (x**2 - 1)
    else:  # do this for all m >= 3
        return ((2*m + 1) * x * L(x, m-1) - (m-3) * L(x, m-2)) / m
Why do they decrease k when calling the function, but not in the coefficient (2*k-1) in the first part?
IMHO they did; follow my derivation above.
And why is the coefficient in the second part changed to (k-1)?
I honestly do not know; to me, it seems like they made a mistake during substitution, i.e. they must have put m+1 instead of m-1.
The expression (2*(k-1)-1) first computes k-1, multiplies it by 2, and then subtracts 1, which is different from 2*k-1. For example, k = 5 yields (2*(5-1)-1) = 7 with your solution and (2*5-1) = 9 with the official solution.
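To make the numerical difference concrete, here is a minimal sketch (not from the original thread; the helper names leg_mine and leg_official are made up) that implements both recursions with the base cases given in the question and prints the first few values at a sample point. It only shows that the two coefficient choices define different polynomials; it does not settle which one matches the intended definition of the integrated Legendre polynomials.

# Compare the questioner's recursion with the official one at a sample point.
def leg_mine(k, x):
    if k == 1:
        return x
    elif k == 2:
        return 0.5 * (x**2 - 1)
    else:
        return ((2*(k-1) - 1) * x * leg_mine(k-1, x) - ((k-1) - 2) * leg_mine(k-2, x)) / k

def leg_official(k, x):
    if k == 1:
        return x
    elif k == 2:
        return 0.5 * (x**2 - 1)
    else:
        return ((2*k - 1) * x * leg_official(k-1, x) - (k-1) * leg_official(k-2, x)) / k

x = 0.5
for k in range(1, 6):
    print(k, leg_mine(k, x), leg_official(k, x))  # values diverge from k = 3 onward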
numpy.polynomial.legendre.legmulx
polynomial.legendre.legmulx(c)
Multiply a Legendre series by x. Multiply the Legendre series c by x, where x is the independent variable.
Examples:
>>> from numpy.polynomial import legendre as L
>>> L.legmulx([1, 2, 3])
array([ 0.66666667, 2.2, 1.33333333, 1.8]) # may vary
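Since legmulx returns the coefficients of x times the input series, a quick way to sanity-check the result (a sketch, not part of the NumPy documentation) is to evaluate both sides with legval at an arbitrary point:

# Evaluate the product series and compare with x0 * (original series at x0).
from numpy.polynomial import legendre as L

c = [1, 2, 3]
x0 = 0.5                          # arbitrary test point
lhs = L.legval(x0, L.legmulx(c))  # value of the multiplied series at x0
rhs = x0 * L.legval(x0, c)        # x0 times the original series at x0
print(lhs, rhs)                   # both should print the same value (up to rounding)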
Legendre polynomials in recurrent neural networks
A recurrent neural network that contains a d-dimensional memory vector m(t) can be optimized such that its neural activities obey the linear time-invariant system given by the following state-space representation:

theta * dm(t)/dt = A m(t) + B u(t)

where u(t) is the input signal and A (d x d) and B (d x 1) are fixed matrices chosen so that m(t) represents the sliding window described next.
In this case, the sliding window of u across the past theta units of time is best approximated by a linear combination of the first d shifted Legendre polynomials, weighted together by the elements of m at time t:

u(t - theta') ~ sum_{i=0}^{d-1} P_i(2*theta'/theta - 1) * m_i(t),   0 <= theta' <= theta

where P_i is the i-th Legendre polynomial, so P_i(2r - 1) is the i-th shifted Legendre polynomial on [0, 1].
When combined with deep learning methods, these networks can be trained to outperform long short-term memory units and related architectures, while using fewer computational resources.
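As an illustration of the state-space system above, the following sketch builds A and B using the closed-form entries reported in the Legendre Memory Unit literature (an assumption here, not stated in the text above), integrates theta * dm/dt = A m + B u with a simple forward-Euler step, and then decodes the sliding window with shifted Legendre polynomials. All parameter values (d, theta, dt, the sine input) are arbitrary choices for the demo.

import numpy as np
from numpy.polynomial import legendre as L

# Illustrative parameters (arbitrary choices for this sketch)
d = 6        # memory dimension
theta = 1.0  # window length in seconds
dt = 1e-3    # forward-Euler time step

# A and B entries as reported in the LMU literature (an assumption here):
# a_ij = (2i+1) * (-1 if i < j else (-1)**(i-j+1)),  b_i = (2i+1) * (-1)**i
A = np.array([[(2*i + 1) * (-1.0 if i < j else (-1.0)**(i - j + 1))
               for j in range(d)] for i in range(d)])
B = (2*np.arange(d) + 1.0) * (-1.0)**np.arange(d)

# Integrate theta * dm/dt = A m + B u with a slowly varying test input u(t)
T = 3.0
ts = np.arange(0.0, T, dt)
u = np.sin(2 * np.pi * ts / 3.0)
m = np.zeros(d)
for n in range(len(ts)):
    m = m + (dt / theta) * (A @ m + B * u[n])

# Decode the sliding window: u(t - q*theta) ~ sum_i P_i(2q - 1) * m_i(t)
for q in (0.0, 0.25, 0.5, 0.75, 1.0):
    basis = L.legvander([2*q - 1], d - 1)[0]   # [P_0, ..., P_{d-1}] evaluated at 2q - 1
    decoded = basis @ m
    true_value = np.sin(2 * np.pi * (ts[-1] - q * theta) / 3.0)
    # the two columns should roughly agree; accuracy improves with larger d
    print(f"delay {q*theta:.2f}s: decoded {decoded:+.3f}, true {true_value:+.3f}")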