SUNY Geneseo Department of Mathematics
Tuesday, October 25
Math 223 01
Fall 2022
Prof. Doug Baldwin
Finishing yesterday’s example, and drawing on “Absolute Maxima and Minima” in section 3.7 of the textbook.
Find the absolute minimum and maximum of z = 2x^2 + 3y^2 on the triangle in the XY-plane with vertices (1,0), (-1,1), and (-1,-1).
Yesterday, we found that there’s a critical point at (0, 0), which might be the minimum or maximum.
But the minimum and/or maximum might also lie at a corner of the triangle, i.e., at (1,0), (-1,1), or (-1,-1), or anywhere along its sides.
We worked out an equation for one side, namely y = -x/2 + 1/2, but weren’t really sure what to do with it.
So where should we pick up?
Substitute the equation for the side into the function definition in place of y, getting a single-variable formula for z that applies along that side. Then find extreme values for that single-variable function using its derivative. Do this for each of the three sides, ending up with a total of 7 points where the absolute minimum or maximum might occur. Finish by calculating z at each of those points, and noting the smallest and largest values:
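For example, on the side from (1,0) to (-1,1), where y = -x/2 + 1/2, substituting gives

z = 2x^2 + 3(-x/2 + 1/2)^2 = 11x^2/4 - 3x/2 + 3/4

and setting dz/dx = 11x/2 - 3/2 to 0 gives x = 3/11, i.e., the point (3/11, 4/11). In outline, the other sides work the same way: the side from (1,0) to (-1,-1) yields (3/11, -4/11), and the side x = -1 gives z = 2 + 3y^2, whose derivative 6y is 0 at (-1, 0). Evaluating z at all 7 candidates gives z = 0 at (0,0), z = 2 at (1,0) and (-1,0), z = 6/11 at (3/11, 4/11) and (3/11, -4/11), and z = 5 at (-1,1) and (-1,-1). So the absolute minimum is 0, at (0,0), and the absolute maximum is 5, at (-1,1) and (-1,-1).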
In its most general form, “optimization” is the process of finding a maximum or minimum value for some function.
For example, consider a Cobb-Douglas function z = 2x^0.4 y^0.5. This is a kind of function sometimes used in economics to model the output of a company or economy (z) in terms of various inputs (x and y in this case, e.g., labor and capital). If the preceding function modeled the output of a company, they would reasonably want to make its value as large as possible. How can they do that?
Easy — make x and y infinitely large.
But that’s not a very practical answer. In real life, there are limitations, or constraints, placed on the inputs to a function: for example, the total amount of money a company can spend on capital and labor might be limited, and the company needs to find the optimal trade-off within that limit.
So a practical optimization problem involves a function to optimize — the so-called “objective function” — and one or more equations or inequalities, the constraints, that the inputs have to satisfy.
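For instance (with made-up numbers, purely for illustration), the company above might face the problem

maximize z = 2x^0.4 y^0.5 subject to x + y = 10

where z is the objective function and x + y = 10 is the constraint (say, a total budget of 10 to split between labor and capital).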
Sometimes such constraints are why you find yourself looking for a minimum or maximum over some region in the input space (like we just did in the absolute extremes example). And so sometimes you can solve constrained optimization problems as extreme value problems over some region. But there are other techniques too….
Next: Lagrange multipliers, a method for solving some multivariable constrained optimization problems.
Please read section 3.8 in the textbook.