
### Maximizing 2 Variables

Posted: Mon Nov 13, 2017 1:31 pm UTC
"Tragedy of the Commons" by Prof. Hardin wrote:It is not mathematically possible to maximize for two (or more) variables at the same time. This was clearly stated by von Neumann and Morgenstern (3), but the principle is implicit in the theory of partial differential equations, dating back at least to D’Alembert (1717–1783).

This does not seem correct to me. I know that if two variables are directly proportional, then maximizing one requires maximizing the other. I am willing to bet that other examples disproving this claim exist. I think that Prof. Hardin, a biologist, misinterpreted the mathematics. However, I would like to double check before making a fool of myself.

### Re: Maximizing 2 Variables

Posted: Mon Nov 13, 2017 3:09 pm UTC
It is in fact not possible, in general, to maximize two variables at the same time.

You can maximize a function, though; that's fun and good.

### Re: Maximizing 2 Variables

Posted: Mon Nov 13, 2017 4:22 pm UTC
If x/y = c and c is a constant, then maximizing x maximizes y, unless "maximizing" means something other than what I think it means.

### Re: Maximizing 2 Variables

Posted: Mon Nov 13, 2017 4:50 pm UTC
I think the late Dr. Hardin was speaking in general terms, that you can't always maximize both of two inter-related variables. Of course there are specific cases in which you can. Just like saying you can't trisect an angle with a compass and straight-edge, while there are specific angles that you can trisect with those tools.

### Re: Maximizing 2 Variables

Posted: Mon Nov 13, 2017 5:01 pm UTC
jewish_scientist wrote:If x/y = c and c is a constant, then maximizing x maximizes y, unless "maximizing" means something other than what I think it means.

Yeah, the problem is that "having two variables" means something different from what you think it means.

### Re: Maximizing 2 Variables

Posted: Mon Nov 13, 2017 6:08 pm UTC
What does it mean then?

### Re: Maximizing 2 Variables

Posted: Mon Nov 13, 2017 7:18 pm UTC
The general, arbitrary case. There is an x, and there is a y.

### Re: Maximizing 2 Variables

Posted: Tue Nov 14, 2017 1:51 am UTC
If you have two variables that are functions of the same parameter, for instance x(t) and y(t), then in general the maxima of x(t) will not be maxima of y(t). I'm pretty sure that's all it comes down to. Obviously if x = c1t and y = c2t, then there simply aren't any maxima at all (unless c1 = c2 = 0). And if x = c1t and y = c2t are subject to a constraint, then you are still really only maximizing one variable.
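For a concrete instance of this, here's a quick numerical sketch (plain Python; sin and cos are chosen purely as an illustrative pair of functions of the same parameter):

```python
import math

# Sample x(t) = sin t and y(t) = cos t over one period [0, 2*pi].
ts = [2 * math.pi * k / 1000 for k in range(1001)]
xs = [math.sin(t) for t in ts]
ys = [math.cos(t) for t in ts]

t_xmax = ts[xs.index(max(xs))]  # x(t) peaks near t = pi/2 ...
t_ymax = ts[ys.index(max(ys))]  # ... but y(t) peaks at t = 0

print(t_xmax, t_ymax)  # the two maxima occur at different parameter values
```

At the maximum of x, y = cos(pi/2) = 0, nowhere near its own maximum of 1.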

### Re: Maximizing 2 Variables

Posted: Thu Nov 16, 2017 4:34 pm UTC
Yeah, with x(t) and y(t) plotted on an x-y plane, maximizing x and y at the same time means having a cusp that "points" toward the top-right.

[Attachment: cusp.png, a parametric curve with a cusp pointing toward the top-right]

Such a cusp is not a general feature of parametric plots, and it's irrelevant (to the general case) that if your graph is just a line segment through the origin then you can identify the top-right endpoint of that segment.
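For contrast, here's a hand-built special case where both coordinates do peak at once (the functions below are invented solely to produce such a cusp, not taken from the picture):

```python
# A curve with a simultaneous local maximum of both coordinates at t = 0:
# x(t) = -t**2 and y(t) = -t**2 + t**3 both peak there, and the two branches
# (t < 0 and t > 0) meet in a cusp "pointing" toward the top-right.
ts = [k / 1000 - 0.5 for k in range(1001)]  # t in [-0.5, 0.5]
xs = [-t ** 2 for t in ts]
ys = [-t ** 2 + t ** 3 for t in ts]

i_xmax = xs.index(max(xs))
i_ymax = ys.index(max(ys))
print(ts[i_xmax], ts[i_ymax])  # both maxima at t = 0.0
```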

### Re: Maximizing 2 Variables

Posted: Fri Nov 17, 2017 9:16 am UTC
gmalivuk wrote:Yeah, with x(t) and y(t) plotted on an x-y plane, maximizing x and y at the same time means having a cusp that "points" toward the top-right.

[Attachment: cusp.png]

Such a cusp is not a general feature of parametric plots, and it's irrelevant (to the general case) that if your graph is just a line segment through the origin then you can identify the top-right endpoint of that segment.

In your picture you can maximise x further, at the cost of taking a hit on y, by choosing the right-most point of that loop at the bottom right.

--
Generally, if you are in a situation where you would like to maximise two variables, you have to choose an objective function. In its simplest form you simply assign weights to the variables x and y, as a measure of how much value you attach to increasing one over the other. For example, if increasing x by 1 unit is always worth twice as much to you as increasing y by 1 unit, then you would have the objective function F(x,y)=2x+y. This offers you a way of deciding whether for example (6,20) is better than (16,13).
If you have a picture of your search space (whether it is a line like gmalivuk's picture, or a region of the x-y plane) then you can draw lines F(x,y)=c for various values of c in the picture. My example F gives you parallel lines, with higher values of c the further up and to the right you go. Your optimal point is the furthest point (or points) of your search space in that direction.
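That comparison can be spelled out in a couple of lines (plain Python, using the example weighting F(x,y) = 2x + y from the post above):

```python
def F(x, y):
    # Example weighting: a unit of x is worth twice a unit of y.
    return 2 * x + y

print(F(6, 20))   # 32
print(F(16, 13))  # 45, so (16, 13) is the better point under this weighting
```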

### Re: Maximizing 2 Variables

Posted: Fri Nov 17, 2017 10:24 am UTC
Again, you are maximizing the objective function, not "two variables". You can certainly maximize a function of multiple variables (if it has a maximum).

### Re: Maximizing 2 Variables

Posted: Fri Nov 17, 2017 11:12 am UTC
Eebster the Great wrote:Again, you are maximizing the objective function, not "two variables". You can certainly maximize a function of multiple variables (if it has a maximum).

I'm not disagreeing, just explaining why you need such a function and giving a simple example.

### Re: Maximizing 2 Variables

Posted: Fri Nov 17, 2017 2:15 pm UTC
My MSPaint masterpiece wasn't meant to show a global maximum, just what a local maximum (of two variables at the same time) would look like.

### Re: Maximizing 2 Variables

Posted: Fri Nov 17, 2017 3:00 pm UTC
Local maxima get a bad rap.

### Re: Maximizing 2 Variables

Posted: Fri Nov 17, 2017 8:21 pm UTC
To give a real-world example of an objective function: say you're designing a telescope with two curved mirrors. No matter what, it'll have some aberrations that mess up the view, most importantly coma, astigmatism, and spherical aberration.

You cannot minimize all of these at the same time if you have two curved mirrors. How do you decide how curved the mirrors are?

Well, you first need to figure out how important avoiding each of these aberrations is to you. Assume you can calculate a value for each aberration given how curved each mirror is. You can then make a function like so:

```
cost(coma, spherical aberration, astigmatism) =
    importance(coma) + importance(spherical aberration) + importance(astigmatism)
```

You then pick curves for the mirrors that make this function fall into a local minimum.

The objective function is what allows you to decide whether it’s more important to maximize or minimize one variable over another.
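As a toy sketch of that recipe (the aberration formulas below are invented stand-ins, not real optics; only the weighted-cost structure and the minimum search matter):

```python
# Made-up aberration models as functions of two mirror curvatures c1, c2.
def coma(c1, c2):
    return (c1 - 0.3) ** 2 + 0.1 * c2 ** 2

def spherical(c1, c2):
    return (c1 + c2 - 0.5) ** 2

def astigmatism(c1, c2):
    return (c2 - 0.2) ** 2

# "Importance" weights: how much you care about each aberration.
WEIGHTS = {"coma": 3.0, "spherical": 1.0, "astigmatism": 2.0}

def cost(c1, c2):
    return (WEIGHTS["coma"] * coma(c1, c2)
            + WEIGHTS["spherical"] * spherical(c1, c2)
            + WEIGHTS["astigmatism"] * astigmatism(c1, c2))

# Crude grid search stands in for whatever minimum-finder you'd actually use.
grid = [i / 100 for i in range(-100, 101)]
best = min((cost(a, b), a, b) for a in grid for b in grid)
print(best)  # (lowest weighted cost, c1, c2)
```

Change the weights and the optimal curvatures change with them, which is the whole point: the weights encode a design decision, not a mathematical fact.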

### Re: Maximizing 2 Variables

Posted: Sat Nov 18, 2017 2:31 am UTC
Another example is the typical calculus-class problem of maximizing a function (like the area of a region) whose parameters (like the dimensions of the region) are subject to some constraint (like a constant perimeter). For instance, if a farmer with 100 m of fencing wants a rectangular pen made of fencing on three sides and the wall of his barn on the fourth, he gets the maximum area by choosing the two parallel sides to be 25 m each and the remaining side to be 50 m, yielding an area of 1250 m².
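A quick brute-force check of the farmer's pen, just to confirm the calculus answer numerically:

```python
# Fencing covers two sides of length x and one of length y (the barn wall is
# the fourth side), so 2*x + y = 100 and the area is A(x) = x * (100 - 2*x).
def area(x):
    return x * (100 - 2 * x)

best_x = max((x / 10 for x in range(0, 501)), key=area)  # x in 0.1 m steps
print(best_x, 100 - 2 * best_x, area(best_x))  # 25.0 50.0 1250.0
```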

### Re: Maximizing 2 Variables

Posted: Sat Nov 18, 2017 5:40 am UTC
If an objective function can find the value that maximizes variables weighted by their importance, then what does Hardin's quote mean?

### Re: Maximizing 2 Variables

Posted: Sat Nov 18, 2017 6:09 am UTC
Again, you are then maximizing a *function* of the two variables. You are not maximizing the variables themselves. Each individual variable might not be at its max value; the function itself is.

### Re: Maximizing 2 Variables

Posted: Sun Nov 19, 2017 8:28 am UTC
jewish_scientist wrote:If an objective function can find the value that maximizes variables weighted by their importance, then what does Hardin's quote mean?

That there is not always an impartial, 'technical' method to pick an objective function. Pick one objective function to get this answer, pick another function to get another answer. Which one is optimal?

It just shifts the problem from 'what is the optimal result for this problem' to 'what is the optimal objective function'.

### Re: Maximizing 2 Variables

Posted: Sun Nov 19, 2017 3:40 pm UTC
jewish_scientist wrote:
If an objective function can find the value that maximizes variables weighted by their importance, then what does Hardin's quote mean?
Maximizing variables weighted by importance maximizes something like a*x+b*y. In other words, it maximizes something like the sum of two variables. The quote means you can't (in general) maximize both variables at the same time.

Assuming a and b are both positive (to keep the usual sense of "maximum"), the choice of weights still ranges from finding the highest point on the graph to finding the rightmost point, and everything in between: the "northeasternmost" point, the "north-northeasternmost" point, the "northeast-by-northernmost" point, or any other direction in the first quadrant.

If the graph is a circle (x = cos t, y = sin t), what's the "correct" objective function to maximize to pick the "maximum" point on that circle?
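A numerical illustration of why there is no canonical answer: each weighting (a, b) picks out a different "maximum" point on the circle (the helper name `argmax_on_circle` is just for this sketch):

```python
import math

# Maximizing F(x, y) = a*x + b*y over the circle x = cos t, y = sin t picks
# the point of the circle in the direction of the vector (a, b).
def argmax_on_circle(a, b):
    ts = [2 * math.pi * k / 10000 for k in range(10000)]
    t = max(ts, key=lambda t: a * math.cos(t) + b * math.sin(t))
    return math.cos(t), math.sin(t)

print(argmax_on_circle(1, 0))  # ~(1, 0): weight x alone, get the rightmost point
print(argmax_on_circle(0, 1))  # ~(0, 1): weight y alone, get the topmost point
print(argmax_on_circle(1, 1))  # ~(0.707, 0.707): equal weights, the "northeast" point
```

Every choice of weights is defensible and every one gives a different point, so "the" maximum of the circle simply isn't defined.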