L'Hopital's rule & Taylor series
Now we will revolutionize the way we compute function limits.
More precisely, we will develop a very useful method for limits of the $\frac{0}{0}$ and $\frac{\infty}{\infty}$ types.
The trick is that instead of the quotient of the original functions f and g, we look at the quotient of their derivatives.
We can do this because those two limits are the same.
At least for limits of the $\frac{0}{0}$ and $\frac{\infty}{\infty}$ types, if a few conditions are met.
Let’s see a few examples.
Now we can use l’Hopital’s Rule.
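For instance, a standard $\frac{0}{0}$ case (not necessarily the example the author had in mind):

$$\lim_{x\to 0}\frac{e^x-1}{x} \;=\; \left[\tfrac{0}{0}\right] \;=\; \lim_{x\to 0}\frac{e^x}{1} \;=\; 1$$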
We may have to use l’Hopital’s Rule twice in a row.
Sometimes we have to use it more than twice.
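For instance (a standard illustration, not necessarily the author's example), $\frac{x^2}{e^x}$ needs two rounds, and $\frac{x^n}{e^x}$ needs $n$ of them:

$$\lim_{x\to\infty}\frac{x^2}{e^x} \;=\; \left[\tfrac{\infty}{\infty}\right] \;=\; \lim_{x\to\infty}\frac{2x}{e^x} \;=\; \left[\tfrac{\infty}{\infty}\right] \;=\; \lim_{x\to\infty}\frac{2}{e^x} \;=\; 0$$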
And then sometimes we need to know when to stop.
If we used l’Hopital’s Rule again here, the derivative of the numerator would become quite ugly.
So, instead, we divide the numerator and the denominator by x.
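A classic case of this kind (assumed here for illustration; the original example is not shown) is

$$\lim_{x\to\infty}\frac{\sqrt{x^2+1}}{x},$$

where l'Hopital's Rule only flips the fraction over, giving $\frac{x/\sqrt{x^2+1}}{1}$, and gets us nowhere, while dividing the numerator and the denominator by $x$ gives $\lim_{x\to\infty}\sqrt{1+\frac{1}{x^2}} = 1$.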
And now comes the real thrill.
L’Hopital’s Rule works for $0\cdot\infty$-type cases as well. All we have to do is turn the product into a fraction.
Let's see how.
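For instance (a standard $0\cdot\infty$ example, not necessarily the one the author uses):

$$\lim_{x\to 0^+} x\ln x \;=\; \lim_{x\to 0^+}\frac{\ln x}{\frac{1}{x}} \;=\; \left[\tfrac{-\infty}{\infty}\right] \;=\; \lim_{x\to 0^+}\frac{\frac{1}{x}}{-\frac{1}{x^2}} \;=\; \lim_{x\to 0^+}(-x) \;=\; 0$$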
Here we have a few limits that may be useful for the rest of our lives:
A FEW IMPORTANT LIMITS
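A few standard limits of this kind (standard facts, though not necessarily the author's exact selection):

$$\lim_{x\to\infty}\frac{\ln x}{x} = 0, \qquad \lim_{x\to\infty}\frac{x^n}{e^x} = 0 \;(n>0), \qquad \lim_{x\to 0^+} x\ln x = 0, \qquad \lim_{x\to 0^+} x^x = 1, \qquad \lim_{x\to\infty}\left(1+\frac{1}{x}\right)^{x} = e$$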
And here we have a few limits that would be quite difficult to compute using other methods.
Here we could try another round of l’Hopital’s Rule, but that would get quite rough. So instead, we exercise self-control and divide by .
And now we will do a very funny thing.
Here is this function:
Well, there is nothing funny so far.
Now let’s subtract from it
Next, add to it
If we keep subtracting and adding these weird things...
then step by step a familiar curve will take shape.
This curve is .
The explanation for the mysterious appearance of is:
By adding various powers of x, we can recreate a whole variety of functions.
Let's see how.
Let $f$ be differentiable $k$ times on an interval $I$ that contains the number $a$. Then the Taylor polynomial of order $k$ generated by the function $f$ at the point $a$ is:

$$T_k(x) = \sum_{n=0}^{k} \frac{f^{(n)}(a)}{n!}\,(x-a)^n$$
Let's see an example.
Here is, for instance, a function $f$. Let's find its Taylor polynomial of order 6 at the point $a = 0$.
Ingredients:
The $n$th derivative at $0$:
The zeroth derivative is the function itself.
We do not indicate the third derivative with three primes here, just as we would not use eight primes for the eighth derivative; it's easy to see why.
And now comes the Taylor polynomial:
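As a computational aside (not part of the original lesson), the definition above translates directly into code. Here is a minimal Python sketch that evaluates $T_k$ term by term, using $f(x)=\cos x$ at $a=0$ as a stand-in, since the original example function is not shown:

```python
import math

# Taylor polynomial from the definition:
#   T_k(x) = sum_{n=0}^{k} f^(n)(a) / n! * (x - a)^n
# The original example function is not shown, so as a stand-in we use
# f(x) = cos(x) expanded at a = 0 (an assumption for illustration).

def nth_derivative_of_cos(n, x):
    """n-th derivative of cos evaluated at x; the derivatives repeat with period 4."""
    cycle = [math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t), math.sin]
    return cycle[n % 4](x)

def taylor_polynomial(x, a=0.0, k=6):
    """Order-k Taylor polynomial of cos around a, evaluated at x."""
    return sum(
        nth_derivative_of_cos(n, a) / math.factorial(n) * (x - a) ** n
        for n in range(k + 1)
    )

print(taylor_polynomial(0.5), math.cos(0.5))  # near 0 the two values agree to about six decimals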
The Taylor polynomial of the function around zero
is useful because it provides a good approximation of the function near zero.
For example, if we want to figure out the value of
without a calculator, then since near zero
Now let’s see,
The same thing using a calculator:
In fact, the calculator itself uses a Taylor polynomial to find the value of , except that it does not use a 4th-order polynomial, but one of a higher order.
Next, calculate .
Well, we are not that lucky this time...
To be honest, the difference is a bit too much.
The truth is that a Taylor polynomial generated around zero provides a good approximation only for numbers close to zero. If a number is farther away from zero, we can do two things.
The first option is using a much higher order Taylor polynomial.
The more terms of the Taylor polynomial we generate, the longer the section of the function that takes shape.
The second option is to stick with the fourth-order version, but move it closer to 4.
We regenerate the Taylor polynomial, but this time around a number close to 4.
For instance, seems good.
It is a good idea to pick a number where the function and its derivatives take values that are easy to compute.
Conveniently, is like that: and
Now we can get down to the Taylor polynomial.
If we want to compute the value of even more precisely, then we would have to generate more terms of the Taylor polynomial.
The more terms we generate, the more decimals will be accurate for .
If we are extremely greedy, we should generate all of them.
Well, that means an infinite number of terms, and what we get this way is called a Taylor series.
Let $f$ be differentiable any number of times on an interval $I$ that contains the number $a$. Then the Taylor series generated by the function $f$ at the point $a$ is:

$$\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n$$
Here is, for instance
Let's see the fifth-order Taylor polynomial and Taylor series at point .
The Taylor polynomial only approximates the original function;
the Taylor series, on the other hand, reproduces the function exactly at every point where it converges (at least for the well-behaved functions we deal with here).
The Taylor polynomial generated by the function $f$ at $a$ provides a good approximation of the function around the number $a$.
The more terms of the Taylor polynomial we generate, the longer the section of the function that takes shape.
If we get into generating the Taylor polynomial so much that we forget to stop...
well, then we get an infinite number of terms, and we call that a Taylor series.
The Taylor polynomial only approximates the original function; the Taylor series, on the other hand, reproduces the function exactly at every point where it converges.
Let's see the Taylor series of a few functions.
Let’s start with , for instance.
Let’s generate its Taylor series at $a = 0$.
Taylor series where $a = 0$ are often called Maclaurin series, too.
We can figure out the remaining terms based on this.
TAYLOR SERIES OF A FEW SPLENDID FUNCTIONS
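For reference, a few standard Maclaurin series (standard facts; the author's own table may differ in selection):

$$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}, \qquad \sin x=\sum_{n=0}^{\infty}\frac{(-1)^n x^{2n+1}}{(2n+1)!}, \qquad \cos x=\sum_{n=0}^{\infty}\frac{(-1)^n x^{2n}}{(2n)!},$$
$$\frac{1}{1-x}=\sum_{n=0}^{\infty}x^n \;\;(|x|<1), \qquad \ln(1+x)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}x^n}{n} \;\;(-1<x\le 1).$$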
Let's see one more.
Let’s generate the Taylor series of at .
Let's try to figure out the kth term.
We got it.
Now we can get down to the Taylor series.
In similarly exciting fashion we can create the Taylor series of many other functions.
Here is, for example, the Taylor series of and at zero.
And now, if we want to know the Taylor series of – let’s say – , all we have to do is replace x with 2x.
And Voila! – it is done.
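To make the substitution concrete (assuming, for the sake of illustration, that the series in question is that of $e^x$):

$$e^{x}=\sum_{n=0}^{\infty}\frac{x^n}{n!} \quad\Longrightarrow\quad e^{2x}=\sum_{n=0}^{\infty}\frac{(2x)^n}{n!}=\sum_{n=0}^{\infty}\frac{2^n x^n}{n!}.$$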
Getting the Taylor series of is also similarly pleasant.
Here is a more interesting case:
There are some cases that are even more exciting:
But enough excitement for today.
Now we are launching a very interesting endeavor:
we will compute, without using a calculator, the value of $\cos 1$.
We will use the Taylor polynomial of $\cos x$.
Let’s generate the fourth-order Taylor polynomial around zero:

$$T_4(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!}$$
If x is near zero, then $\cos x \approx T_4(x)$, therefore

$$\cos 1 \approx T_4(1) = 1 - \frac{1}{2} + \frac{1}{24} = \frac{13}{24} \approx 0.5417$$
The question is how accurate this result is.
Well, according to the calculator, $\cos 1 \approx 0.5403$.
It seems we only get the value of $\cos 1$ to two digits of precision.
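A quick numerical check of the above (a minimal Python sketch, not part of the original lesson):

```python
import math

# 4th-order Taylor polynomial of cos x around 0: 1 - x^2/2! + x^4/4!
def cos_taylor4(x):
    return 1 - x**2 / 2 + x**4 / 24

approx = cos_taylor4(1.0)   # 0.541666...
exact = math.cos(1.0)       # 0.540302...
print(approx, exact, abs(approx - exact))  # error is roughly 0.00136
```

Both values round to 0.54, which is exactly the two-digit agreement observed above.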
Of course, it is easy now.
If we already know the exact value of cos1, it is not a big deal to figure out our error afterwards.
It would be great to know the extent of our error, even without knowing the exact value of cos1.
In other words, we need to state the size of our error while we have no idea what the exact result is.
This sounds impossible, but we will still do it.
This is what the Lagrange remainder term is for:

$$R_k(x) = \frac{f^{(k+1)}(c)}{(k+1)!}\,(x-a)^{k+1}$$
We need one more derivative.
We need to substitute not zero, but a number c.
This c is always some number between a and x.
The special thing about the Lagrange remainder term is that the Taylor polynomial only approximates the function...
but by adding the remainder term to it, it will be the same.
So, the remainder term tells us the amount of our error.
We don’t know the exact value of c, so we don’t know the exact remainder either, but we can estimate it.
Based on this, our error is less than 0.00139:
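One way to arrive at that bound (a sketch of the estimate; the original computation is not reproduced here): every derivative of $\cos x$ is $\pm\sin x$ or $\pm\cos x$, so $|f^{(n)}(c)|\le 1$ for any $c$. Moreover, the $x^5$ term of the Taylor polynomial of $\cos x$ is zero, so $T_4=T_5$, and we may estimate the remainder of order 5:

$$|R_5(1)| = \left|\frac{f^{(6)}(c)}{6!}\cdot 1^6\right| \le \frac{1}{720} \approx 0.00139.$$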
Well, this is why the Lagrange remainder term is useful.
Let's see another case.
Let’s compute, with an error of less than 0.05, the value of
We will use the Taylor polynomial of at .
The error must be less than 0.05.
Let’s see where we are now.
If we use the second-order Taylor polynomial, then we calculate the error term from this:

$$R_2(x) = \frac{f'''(c)}{3!}\,(x-a)^3$$
We want to compute , therefore .
We will try to find an upper estimate of the remainder term.
All we know about this c is that it is a number between 1 and 2.
The error is less than 0.0625, but that is not enough for us.
We want the error to be less than 0.05, so we need to suffer some more.
Well, isn’t this enough yet?
Finally, the error is below the specified 0.05, so we will compute the value of using a third-order Taylor polynomial.
Here comes:
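The numbers above (the bound $0.0625=\frac{1}{16}$ and $c$ between 1 and 2) are consistent with computing $\sqrt{2}$ from $f(x)=\sqrt{x}$ expanded at $a=1$; under that assumption, a sketch of the computation looks like this:

$$f(x)=x^{1/2},\quad f'(x)=\tfrac{1}{2}x^{-1/2},\quad f''(x)=-\tfrac{1}{4}x^{-3/2},\quad f'''(x)=\tfrac{3}{8}x^{-5/2},\quad f^{(4)}(x)=-\tfrac{15}{16}x^{-7/2}$$

With the second-order polynomial, $1\le c\le 2$ gives $|f'''(c)|\le\frac{3}{8}$, so $|R_2(2)|\le\frac{3/8}{3!}\cdot 1^3=\frac{1}{16}=0.0625$, which is not below 0.05. With the third-order polynomial, $|f^{(4)}(c)|\le\frac{15}{16}$, so $|R_3(2)|\le\frac{15/16}{4!}=\frac{15}{384}\approx 0.039<0.05$, and

$$\sqrt{2}\approx T_3(2)=1+\tfrac{1}{2}-\tfrac{1}{8}+\tfrac{1}{16}=\tfrac{23}{16}=1.4375.$$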