# Functions of two variables

Content of the topic

Functions of two variables and the partial derivatives

DIFFERENTIATION AND LOCAL EXTREMA OF MULTIVARIATE FUNCTIONS

Functions of two variables take two real numbers and assign a third real number to them. In other words, they assign a third number to a pair of numbers.

We could look at these number-pairs as coordinates in a plane.

Functions of two variables assign a third coordinate, the height, to the points of this plane.

By assigning this third (height) coordinate to all points of the domain, a surface is taking shape above the x,y plane. This is the graph of the function.

Some properties of single-variable functions carry over to two-variable functions, but some do not.

There is no point, for instance, in talking about monotonicity in the case of two-variable functions, as it would be quite difficult to say whether a surface is increasing or decreasing.

On the other hand, the concepts of minimum and maximum do carry over.

We should imagine the maximum of a two-variable function as a peak of a mountain,

and the minimum as a valley.

Let's see some two-variable functions.

LOCAL MINIMUM

LOCAL MAXIMUM

Our task is to find out where the minimum,

the maximum or even the saddle point of a two-variable function happens to be.

Just like in the one-variable case, we will have to differentiate here, too, but now we have x as well as y, so we have to differentiate with respect to x and also with respect to y, which should be twice as much fun.

These derivatives are called partial derivatives.

Let's see the partial derivatives.

Let’s differentiate this function, for instance.

PARTIAL DERIVATIVES

PARTIAL DERIVATIVE OF FUNCTION  WITH RESPECT TO

we differentiate with respect to x, while y is held constant

differentiate with respect to x

y is treated as a constant,

if it stands by itself, its derivative is zero

if it is multiplied by some expression with x, then it stays as is

PARTIAL DERIVATIVE OF FUNCTION  WITH RESPECT TO

we differentiate with respect to y, while x is held constant

differentiate with respect to y

x is treated as a constant,

if it stands by itself, its derivative is zero

if it is multiplied by some expression with y, then it stays as is.

There is another notation for partial derivatives.

This

We will use both notations.
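To make the rules above concrete, here is a small sketch with a function of our own choosing (it is not the one from the text): for f(x, y) = x^2*y + 3y we write down both partial derivatives by hand and check them numerically.

```python
# Hypothetical example (our own choice, not the function from the text):
# f(x, y) = x**2 * y + 3*y
def f(x, y):
    return x**2 * y + 3*y

def f_x(x, y):
    # Differentiate with respect to x, treating y as a constant:
    # d/dx (x**2 * y) = 2*x*y, and d/dx (3*y) = 0.
    return 2*x*y

def f_y(x, y):
    # Differentiate with respect to y, treating x as a constant:
    # d/dy (x**2 * y) = x**2, and d/dy (3*y) = 3.
    return x**2 + 3

# Verify both partials against central finite differences at a sample point.
h = 1e-6
x0, y0 = 1.5, -2.0
approx_fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2*h)
approx_fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2*h)
print(abs(approx_fx - f_x(x0, y0)) < 1e-4)  # True
print(abs(approx_fy - f_y(x0, y0)) < 1e-4)  # True
```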

Here comes another function, let’s differentiate this one, too.

SECOND ORDER DERIVATIVES

Each of the first order partial derivatives can be differentiated further, with respect to x as well as with respect to y.

This way we get four second order derivatives.

The two outer ones are called pure second order derivatives,

and the two middle ones are the mixed second order derivatives.

The two mixed second order derivatives are usually equal.

Well, to be exact, they are equal if the function is twice totally differentiable.

For now, we should simply remember that they are equal, and leave the exceptions to the professionals-only section, where the precise definitions of multivariate differentiation will be discussed.
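A quick sketch (with a hypothetical function of ours, f(x, y) = x^3*y^2) shows the two mixed second order derivatives coming out equal:

```python
# Hypothetical example (our own choice): f(x, y) = x**3 * y**2.
def f_x(x, y):      # first order partial with respect to x
    return 3 * x**2 * y**2

def f_y(x, y):      # first order partial with respect to y
    return 2 * x**3 * y

def f_xy(x, y):     # differentiate f_x with respect to y: d/dy (3*x**2*y**2)
    return 6 * x**2 * y

def f_yx(x, y):     # differentiate f_y with respect to x: d/dx (2*x**3*y)
    return 6 * x**2 * y

# For this smooth function the two mixed partials agree everywhere.
print(f_xy(1.5, -2.0) == f_yx(1.5, -2.0))  # True
```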


Finding local extrema and saddle points using partial differentiation

Now, let’s see how we can find local minima and maxima using partial differentiation.

Differentiation:

Solving the system of equations:

The resulting number pairs are points in the x,y plane.

These points are called stationary points, and at these points

the function can have a minimum, a maximum or a saddle point.

The solutions of the system of equations are the stationary points.

And now we can get to the second order derivatives.

We arrange them neatly in a matrix that is called a Hessian matrix.

And then we substitute the stationary points.

We have to take these matrices and look at their ... ahem ... determinants.

If somebody has not heard about the determinants of matrices yet, well, no problem, it is a very simple thing.

Here is a 2x2 matrix,

and its determinant is a number.

This number can be positive, negative or zero.

Let’s say for this matrix here,

the determinant is -14.

We calculate the determinant of the Hessian matrix, which can be positive, negative or zero.

If the determinant is positive, that means there is a minimum or a maximum. Which one it is depends on the sign of the upper left entry of the Hessian matrix: if it is positive, there is a minimum; if it is negative, there is a maximum.

If it is negative, then there is a saddle point.

If it is zero, then further investigation is necessary, but it doesn’t happen very often.

We will try summarizing it in this tiny space here.
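The whole test can also be summarized in a few lines of code. The classifier below is a sketch, and the sample function f(x, y) = x^3 - 3x + y^2 is our own hypothetical example, not the one from the text:

```python
def classify(f_xx, f_yy, f_xy):
    """Second-derivative test: classify a stationary point from the
    entries of the 2x2 Hessian matrix evaluated at that point."""
    det = f_xx * f_yy - f_xy**2     # determinant of the Hessian
    if det > 0:
        return "local minimum" if f_xx > 0 else "local maximum"
    if det < 0:
        return "saddle point"
    return "inconclusive"           # det == 0: further investigation needed

# Hypothetical example: f(x, y) = x**3 - 3*x + y**2.
# f_x = 3x^2 - 3 and f_y = 2y vanish at the stationary points (1, 0)
# and (-1, 0); the second derivatives are f_xx = 6x, f_yy = 2, f_xy = 0.
print(classify(f_xx=6, f_yy=2, f_xy=0))    # at (1, 0):  local minimum
print(classify(f_xx=-6, f_yy=2, f_xy=0))   # at (-1, 0): saddle point
```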

Let's see what happens at the two stationary points.

Well, it seems  is a saddle point.

And  is a local minimum.

Let's see another one like this.

Let’s find the local extrema and saddle points of the following function.

Here are the stationary points:

And now come the second derivatives.

Next, let's see what happens at the stationary points.

Differentiation:

Solving the system of equations:

, ,

, ,

Two stationary points:  and

Here comes the Hessian matrix:

Now let's see the stationary points!

First let's check .

Substitute zero for x, y and z:

This is indefinite, so  is a saddle point.

Next, let's see .

Substitute one for x and y, and zero for z:

This is positive definite, so it is a local minimum.
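For a 3x3 Hessian like the one in this example, definiteness can be checked with Sylvester's criterion on the leading principal minors. The sketch below uses a hypothetical diagonal matrix of ours, not the actual Hessian of this problem:

```python
def det2(m):
    # Determinant of a 2x2 matrix given as nested lists.
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def det3(m):
    # Determinant of a 3x3 matrix, expanded along the first row.
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def definiteness(H):
    """Sylvester's criterion for a symmetric 3x3 matrix H:
    check the signs of the leading principal minors d1, d2, d3."""
    d1 = H[0][0]
    d2 = det2([row[:2] for row in H[:2]])
    d3 = det3(H)
    if d1 > 0 and d2 > 0 and d3 > 0:
        return "positive definite"       # local minimum
    if d1 < 0 and d2 > 0 and d3 < 0:
        return "negative definite"       # local maximum
    return "indefinite or inconclusive"  # saddle point, or more work needed

# Hypothetical Hessian (ours): diagonal entries 2, 2, 2.
print(definiteness([[2, 0, 0], [0, 2, 0], [0, 0, 2]]))  # positive definite
```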


The equation of the tangent plane

If we remember, the geometric interpretation of the derivative of a single-variable function is the slope of the tangent line.

The equation of the tangent for function  at point  is:

The tangent of a single variable function is a line, and the tangent of a two-variable function is a plane.

The number of coordinates is increased by 1, so it is not x and y, but x, y and z.

The equation of the plane tangent to function  at point  is:

Well, this is the equation of the tangent plane.

Let's see an example.

Here is this function, for instance:

and we are looking for the tangent plane at point .

Here comes the equation of the tangent plane,

and we have to calculate these.

Well, this is the equation of the tangent plane:

If we expand the parentheses, and get all terms on one side,

then we can see the normal vector of the plane.

And here is the normal vector:

The first two coordinates are the derivatives with respect to x and y,

and the third coordinate is negative one.
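Here is the same computation as a sketch, with a hypothetical function of ours, f(x, y) = x^2 + y^2, and the point (1, 2):

```python
# Hypothetical example (ours): f(x, y) = x**2 + y**2, tangent plane at (1, 2).
def f(x, y):
    return x**2 + y**2

def f_x(x, y):
    return 2*x

def f_y(x, y):
    return 2*y

a, b = 1.0, 2.0

def tangent_plane(x, y):
    # z = f(a, b) + f_x(a, b)*(x - a) + f_y(a, b)*(y - b)
    return f(a, b) + f_x(a, b)*(x - a) + f_y(a, b)*(y - b)

# Rearranged to f_x(a,b)*(x - a) + f_y(a,b)*(y - b) - (z - f(a,b)) = 0,
# the normal vector of the plane is (f_x(a, b), f_y(a, b), -1).
normal = (f_x(a, b), f_y(a, b), -1.0)
print(normal)                           # (2.0, 4.0, -1.0)
print(tangent_plane(a, b) == f(a, b))   # True: the plane touches the surface
```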

What should parameter  be, so that the tangent at point

to function  would also pass through point  ?

A plane passes through a point if the equation holds when substituting the point’s coordinates into the equation of the plane.

Here is :


Now, let's see the vector.

The vector in the formula must be of unit length.

Since  is not of unit length here,

we turn this into a unit vector.

We divide the vector by its own length:

The equation of the plane that is tangent to the surface given by  at point  is:

The normal vector of the tangent plane is . This is easy to see if we move z to the right side of the equation of the tangent plane.

THE GRADIENT AND THE DIRECTIONAL DERIVATIVE

The vector made up of the  function's partial derivatives with respect to x and y is called the gradient of the  function.

, or  for short.

The gradient helps us calculate the directional derivative. The directional derivative describes how steeply the surface of the function slopes along a given arbitrary  direction.

So, imagine a mountain climber standing at point P on the surface who decides to move in the  direction. The directional derivative tells him how steeply he would have to climb.

Calculating the directional derivative is very simple: it is the dot product of the gradient and the unit-length vector .

The  directional derivative of the  function at point  is:

(  is a unit vector here)

Let's see an example of this!

Let's calculate the directional derivative of  for direction  at point .

According to the formula, the directional derivative is:

Here this funny  symbol is the symbol of differentiation, and it is pronounced as "d", but there is a bit more friendly notation for the directional derivative: .

We need the partial derivatives for calculating the gradient.

To get the directional derivative, we take the dot product of the gradient and the  vector, but it is not a unit vector now; its length is:

To turn this into a unit vector we divide the vector by its own length:


Therefore the directional derivative is:

If a mountain climber asked us which direction he should take from point P in order to climb the steepest route, well...

we could actually give him an answer.

The steepest rise on a surface is always in the direction of the gradient vector.

That means if the climber starts climbing

in the direction of the gradient, then he will be climbing the steepest route.
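The whole directional-derivative recipe fits in a short sketch. The function f(x, y) = x^2*y, the point P = (1, 2) and the direction v = (3, 4) below are our own hypothetical choices:

```python
import math

# Hypothetical example (ours): f(x, y) = x**2 * y at P = (1, 2),
# direction v = (3, 4).
def grad_f(x, y):
    return (2*x*y, x**2)    # (f_x, f_y)

px, py = 1.0, 2.0
v = (3.0, 4.0)

# The formula needs a unit vector, so divide v by its own length first.
length = math.hypot(*v)                 # 5.0
e = (v[0] / length, v[1] / length)      # (0.6, 0.8)

# Directional derivative = dot product of the gradient and the unit vector.
g = grad_f(px, py)                      # (4.0, 1.0)
directional = g[0]*e[0] + g[1]*e[1]
print(directional)                      # about 3.2

# The steepest ascent at P is in the direction of the gradient itself.
g_len = math.hypot(*g)
steepest = (g[0] / g_len, g[1] / g_len)
```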


The implicit differentiation rule

The function  is an explicit function, its derivative, as expected, is .

A function is implicit if y is not expressed explicitly, that is, it is not in the form y = ...

We get an implicit function if we mess up the function, like this:

and then we take the square root, too

So, this is an implicit function.

If we now have to differentiate this newly created implicit function, we can do so by differentiating both sides of the equation, treating y as a function.

actually, it is a function, since .

Well, the derivative of the x on the right side is most definitely 1.

The left side is much more exciting. Here we have a composite function:

And then it also has to be multiplied by the derivative of the inside function.

We need , in other words, the derivative of the function that was given in implicit form.

Let’s try to express

Here it is.

Since , if we substitute this for y...

And this is the same as the explicit derivative.

It is fair to ask why we bothered so much with this, if at the end, we got the same result, except it was a lot more complicated.

Well, the answer is that unfortunately there are some functions that have no explicit forms.

This function has an explicit form, so in this case, it was unnecessary to suffer through the implicit differentiation.

But take a look at this one, for instance.

In this case, y cannot be expressed in any way, so we are forced to use implicit differentiation.

So, we differentiate both sides, but let's not forget that y is a function here.

So, for example  is a composite function.

Therefore, we differentiate it as a composite:

Take the derivative of the outside function,

multiplied by the derivative of the inside function.

Now let’s see the implicit differentiation.

We differentiate both sides of the equation:

We need the derivative of y, so we collect all  terms on one side, and send all others to the other side:

Then we factor out .

and finally, we divide (and conquer!):

Well, this is the derivative of our function that was given in implicit form.

Now let’s see the differentiation rule for implicit functions.

The point of this method is to make our life easier.

It says that if  is an implicit function, then its derivative is:

Well, so far, there is nothing encouraging about this... But let’s see how it works in practice.

Here is the implicit function:

where all terms should be collected on one side,

and it should be called F.

Before we fall victim to a fatal mistake, we must make it clear that this  is not a two-variable function, but an implicit function.

The difference between  and  is huge.

Let's see what the difference is.

Function  is a two-variable function indeed, and x and y can be given freely, but

is not a two-variable function. Let's just try substituting 0 for x and 1 for y.

We would get 2=0, which is not true, so here only one of x and y can vary freely; the other cannot. That is why this function is a single-variable function.

Now, that we clarified all this, let’s see what the formula says.

The formula says that we should differentiate this function  using the customary partial differentiation with respect to x and y.

And here is the implicit derivative.

It is exactly the same result as earlier,

only this time it was much simpler.

Now that's what the implicit differentiation rule is good for.
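As a sketch of the rule y' = -F_x / F_y, here is a hypothetical example of ours, the unit circle x^2 + y^2 = 1, where the explicit derivative is also available for comparison:

```python
import math

# Hypothetical example (ours): the unit circle x**2 + y**2 = 1, written
# as F(x, y) = x**2 + y**2 - 1 = 0.
def F_x(x, y):    # partial derivative of F with respect to x
    return 2*x

def F_y(x, y):    # partial derivative of F with respect to y
    return 2*y

def y_prime(x, y):
    # The implicit differentiation rule: y' = -F_x / F_y.
    return -F_x(x, y) / F_y(x, y)

# On the upper half of the circle we also have the explicit form
# y = sqrt(1 - x**2), whose derivative is -x / sqrt(1 - x**2).
x0 = 0.6
y0 = math.sqrt(1 - x0**2)               # 0.8
explicit = -x0 / math.sqrt(1 - x0**2)
print(abs(y_prime(x0, y0) - explicit) < 1e-12)  # True: both give -0.75
```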

The rule works for more variables, too.

It says that if  is a single variable implicit function, then its derivative is:

If  is an n-variable implicit function, then the derivative of  as an implicit function with respect to variable  is:

Let's see an example for this!

This is a two-variable implicit function.

Even though it has three letters: x, y and z, notice that only two of them can be given freely, due to the equation.

In two-variable functions, x and y are usually the variables, so we can treat this function as

z = (some expression of x and y)

Let’s differentiate this with respect to x, and with respect to y!
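Here is a sketch of the multivariable version of the rule, with a hypothetical example of ours, the sphere x^2 + y^2 + z^2 = 9, which defines z implicitly as a function of x and y:

```python
import math

# Hypothetical example (ours): the sphere x**2 + y**2 + z**2 = 9, written
# as F(x, y, z) = x**2 + y**2 + z**2 - 9 = 0.
def z_x(x, y, z):
    return -(2*x) / (2*z)   # dz/dx = -F_x / F_z

def z_y(x, y, z):
    return -(2*y) / (2*z)   # dz/dy = -F_y / F_z

# On the upper hemisphere z = sqrt(9 - x**2 - y**2) explicitly, so we can
# verify the rule with finite differences at the point (1, 2, 2).
def z_explicit(x, y):
    return math.sqrt(9 - x**2 - y**2)

h = 1e-6
approx_zx = (z_explicit(1 + h, 2) - z_explicit(1 - h, 2)) / (2*h)
approx_zy = (z_explicit(1, 2 + h) - z_explicit(1, 2 - h)) / (2*h)
print(abs(approx_zx - z_x(1, 2, 2)) < 1e-6)  # True: both are about -0.5
print(abs(approx_zy - z_y(1, 2, 2)) < 1e-6)  # True: both are about -1.0
```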

Problem | Find the local extrema and saddle points

Problem | Find the local extrema and saddle points

Problem | Find the local extrema and saddle points

Problem | Find the local extrema and saddle points

Problem | Find the local extrema and saddle points

Problem | Find the local extrema and saddle points

Problem | Find the local extrema and saddle points

Problem | Find the local extrema and saddle points

Problem | Find the local extrema and saddle points

Problem | Find the local extrema and saddle points