# Duality

## Revision as of 23:38, 16 November 2020

Author: Claire Gauthier, Trent Melsheimer, Alexa Piper, Nicholas Chung, Michael Kulbacki (SysEn 6800 Fall 2020)

Steward: TA's name, Fengqi You

## Introduction

Every linear programming optimization problem may be viewed either from the primal or the dual; this is the principle of **duality**. Duality develops the relationships between one linear programming problem and another, related linear programming problem. If the primal linear programming (LP) problem is a maximization problem, the dual can be used to find upper bounds on its optimal value. If the primal LP problem is a minimization problem, the dual can be used to find lower bounds. For example, in economics, if the primal optimization problem deals with production and consumption levels, then the dual of that problem relates to the prices of goods and services. The dual variables in this example can be referred to as shadow prices. The shadow price of a constraint in a linear programming problem is the difference between the optimized value of the objective function and the value of the objective function, evaluated at the optimal basis, when the right-hand side (RHS) of that constraint is increased by one unit. (9) The shadow price of the i-th constraint is *only* valid within the RHS range of the i-th constraint. (10)
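As a concrete illustration of this definition, the sketch below re-solves an LP after increasing one constraint's RHS by one unit; the change in the optimal objective is that constraint's shadow price. The problem data and the use of SciPy are assumptions for illustration, not part of the original text.

```python
from scipy.optimize import linprog

# Hypothetical primal: maximize 3*x1 + 5*x2
# subject to: x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0.
# SciPy minimizes, so the objective is negated.
c = [-3, -5]
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

base = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")

# Increase the RHS of the second constraint by one unit (12 -> 13).
bumped = linprog(c, A_ub=A_ub, b_ub=[4, 13, 18], method="highs")

# Shadow price = new optimal objective - old optimal objective
# (objectives are negated back to the maximization sense).
shadow_price = (-bumped.fun) - (-base.fun)
print(shadow_price)  # 1.5 for this instance
```

Note that the 1.5 obtained this way is valid only while the RHS stays within the allowable range of that constraint, exactly as the text cautions.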

According to the American mathematical scientist George Dantzig, duality for linear optimization was conjectured by John von Neumann after Dantzig presented a problem in linear optimization. Von Neumann determined that the two-person zero-sum matrix game (from game theory) was equivalent to linear programming. Proofs of the duality theory were published by the Canadian mathematician Albert W. Tucker and his group in 1948. (6)

## Theory, methodology, and/or algorithmic discussions

### Definition:

**Primal**

Maximize z = cᵀx

subject to:

Ax ≤ b, x ≥ 0

**Dual**

Minimize v = bᵀy

subject to:

Aᵀy ≥ c, y ≥ 0

Between the primal and the dual, the variables **x** and **y** switch places with each other. The coefficient vector of the primal (c) becomes the right-hand side (RHS) of the dual. The RHS of the primal (b) becomes the coefficient vector of the dual. The less-than-or-equal-to constraints in the primal become greater-than-or-equal-to constraints in the dual problem.

### Constructing a Dual:
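The swap between c, b, and A described above can be sketched mechanically with NumPy. The numbers below are hypothetical and serve only to show how the pieces trade places:

```python
import numpy as np

# Hypothetical primal (max form): maximize c @ x  s.t.  A @ x <= b, x >= 0
c = np.array([3.0, 5.0])             # objective coefficients
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])           # 3 constraints, 2 variables
b = np.array([4.0, 12.0, 18.0])      # right-hand sides

# Dual (min form): minimize b @ y  s.t.  A.T @ y >= c, y >= 0
c_dual = b        # primal RHS becomes the dual objective
A_dual = A.T      # the constraint matrix is transposed
b_dual = c        # primal objective becomes the dual RHS

# Constraint count and variable count trade places as well.
print(A.shape, "->", A_dual.shape)  # (3, 2) -> (2, 3)
```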

### Duality Properties:

#### Weak Duality

- let **x** be any feasible solution to the primal
- let **y** be any feasible solution to the dual
- (z value for **x**) ≤ (v value for **y**)

The weak duality theorem says that the z value for x in the primal is always less than or equal to the v value for y in the dual.

The difference between (v value for y) and (z value for x) is called the optimal *duality gap*, which is always nonnegative.

#### Strong Duality Lemma

- let **x** be any feasible solution to the primal
- let **y** be any feasible solution to the dual
- if (z value for **x**) = (v value for **y**), then **x** is optimal for the primal and **y** is optimal for the dual

**Graphical Explanation**

Essentially, as values of x or y are chosen closer to the optimal solution, the value of z for the primal and the value of v for the dual converge towards the optimal objective value. On a number line, the value of z, which is being maximized, approaches the optimum from the left, while the value of v, which is being minimized, approaches it from the right.

- if the primal is unbounded, then the dual is infeasible
- if the dual is unbounded, then the primal is infeasible

#### Strong Duality Theorem

If the primal problem has an optimal solution **x**, then the dual problem has an optimal solution **y** such that the optimal objective values are equal: z = v.
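To see the theorem numerically, one can solve a small primal and its dual independently and check that the optimal objectives coincide. The sketch below uses SciPy and invented problem data, purely for illustration:

```python
from scipy.optimize import linprog

# Hypothetical primal: maximize 4*x1 + 3*x2
# subject to: 2*x1 + x2 <= 10, x1 + 3*x2 <= 15, x >= 0.
# SciPy minimizes, so the objective is negated.
primal = linprog(c=[-4, -3],
                 A_ub=[[2, 1], [1, 3]],
                 b_ub=[10, 15],
                 method="highs")

# Its dual: minimize 10*y1 + 15*y2
# subject to: 2*y1 + y2 >= 4, y1 + 3*y2 >= 3, y >= 0.
# The ">=" rows are negated into SciPy's "<=" form.
dual = linprog(c=[10, 15],
               A_ub=[[-2, -1], [-1, -3]],
               b_ub=[-4, -3],
               method="highs")

z_star = -primal.fun   # undo the sign flip used for maximization
v_star = dual.fun
print(z_star, v_star)  # both 24.0 for this instance: z* = v*
```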

## Numerical Example

### Construct the Dual for the following maximization problem:

**maximize**

**subject to:**

For the problem above, form augmented matrix A. The first two rows represent constraints one and two respectively. The last row represents the objective function.

Find the transpose of matrix A

From the last row of the transpose of matrix A, we can derive the objective function of the dual. Each of the preceding rows represents a constraint. Note that the original maximization problem had three variables and two constraints. The dual problem has two variables and three constraints.

**minimize**

**subject to:**
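The original numbers of this example are not reproduced above, so as a stand-in the sketch below applies the same augmented-matrix procedure to a hypothetical maximization problem with three variables and two constraints, matching the structure the text describes:

```python
import numpy as np

# Hypothetical primal (the original example's data are not recoverable here):
#   maximize 5*x1 + 4*x2 + 3*x3
#   subject to: 2*x1 + 3*x2 +   x3 <= 5
#               4*x1 +   x2 + 2*x3 <= 11
A = np.array([[2.0, 3.0, 1.0],
              [4.0, 1.0, 2.0]])
b = np.array([5.0, 11.0])
c = np.array([5.0, 4.0, 3.0])

# Augmented matrix: constraint rows [A | b], last row [c | 0].
aug = np.block([[A, b.reshape(-1, 1)],
                [c.reshape(1, -1), np.zeros((1, 1))]])

aug_T = aug.T  # transpose

# In the transpose, the last row gives the dual objective coefficients (b),
# and the preceding rows give the dual constraints A.T @ y >= c.
dual_objective = aug_T[-1, :-1]   # [5, 11]  -> minimize 5*y1 + 11*y2
dual_lhs = aug_T[:-1, :-1]        # A.T      -> three dual constraints
dual_rhs = aug_T[:-1, -1]         # [5, 4, 3]
print(dual_objective, dual_rhs)
```

As the text notes, the primal's three variables and two constraints become the dual's two variables and three constraints.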

## Applications

Duality appears in many linear and nonlinear optimization models. One application is structural design: in a structural design model, the tensions on the beams are the primal variables, and the displacements on the nodes are the dual variables. Another application for duality is in modeling electrical networks: the current flows can be modeled as the primal variables, and the voltage differences are the dual variables. In many of these applications, we can solve the dual in cases when solving the primal is more difficult. If, for example, there are many more constraints than variables *(m >> n)*, it may be easier to solve the dual. (4)

Dual problems and their solutions are used in connection with the following optimization topics:

**Karush-Kuhn-Tucker (KKT) Conditions**

- The optimal solution to the dual problem is a vector of the KKT multipliers.
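In practice, LP solvers report these KKT multipliers alongside the primal solution. As a sketch (hypothetical data; SciPy's HiGHS interface exposes the multipliers as `ineqlin.marginals`):

```python
from scipy.optimize import linprog

# Hypothetical primal: maximize 3*x1 + 5*x2
# subject to: x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0.
res = linprog(c=[-3, -5],
              A_ub=[[1, 0], [0, 2], [3, 2]],
              b_ub=[4, 12, 18],
              method="highs")

# The Lagrange (KKT) multipliers of the inequality constraints are the
# dual variables. In SciPy's minimization convention they are nonpositive;
# negating them recovers the shadow prices of the maximization problem.
duals = [-m for m in res.ineqlin.marginals]
print(duals)  # approximately [0.0, 1.5, 1.0] for this instance
```

The zero multiplier on the first constraint reflects complementary slackness: that constraint is not binding at the optimum, so its dual variable vanishes.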

**Dual Simplex Method**

- Solving a Linear Programming problem by the simplex method gives a solution to its dual as a by-product. The dual simplex algorithm works to reduce the infeasibility of the dual problem; it can be thought of as a disguised simplex method working on the dual. In the dual simplex method, we maintain dual feasibility by imposing the condition that the objective function includes every variable with a nonpositive coefficient, and we terminate when the primal feasibility conditions are satisfied. (3)
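Many solvers let you request the dual simplex directly. In SciPy this can be done through the HiGHS dual simplex backend, `method="highs-ds"` (a sketch with hypothetical data):

```python
from scipy.optimize import linprog

# Hypothetical problem: minimize 2*x1 + 3*x2
# subject to: x1 + x2 >= 4, x1 + 2*x2 >= 6, x >= 0.
# The ">=" rows are negated into SciPy's "<=" form.
res = linprog(c=[2, 3],
              A_ub=[[-1, -1], [-1, -2]],
              b_ub=[-4, -6],
              method="highs-ds")  # HiGHS dual simplex

print(res.x, res.fun)  # optimum x = (2, 2) with objective 10 for this instance
```

The dual simplex is often the method of choice when re-solving after a RHS change, since the old basis remains dual feasible.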

**Game Theory**

- Duality theory is closely related to game theory. Game theory is an approach used to deal with multi-person decision problems: the game is the decision-making problem, and the players are the decision-makers. Each player chooses a strategy or an action to be taken, and each player then receives a payoff once every player has selected a strategy. The zero-sum game that von Neumann determined was equivalent to linear programming is one in which the gain of one player results in the loss of another. This general situation of a zero-sum game has characteristics similar to duality. (7)
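The equivalence can be sketched by solving a small zero-sum game as an LP. The payoff matrix below is hypothetical (matching pennies shifted so all entries are positive); one standard formulation minimizes the sum of scaled strategy weights subject to every opponent column yielding at least one unit:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical zero-sum payoff matrix for the row player: matching pennies
# shifted by +2 so all entries are positive. Shifting raises the game's
# value by 2 but leaves the optimal strategies unchanged.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# LP: minimize sum(u)  subject to  A.T @ u >= 1, u >= 0.
# Then value = 1 / sum(u) and the mixed strategy is p = u / sum(u).
res = linprog(c=np.ones(2), A_ub=-A.T, b_ub=-np.ones(2), method="highs")

value = 1.0 / res.x.sum()          # value of the shifted game
strategy = res.x / res.x.sum()     # row player's optimal mixed strategy
print(value, strategy)  # 2.0 and [0.5, 0.5] for this instance
```

The dual of this LP yields the column player's optimal mixed strategy, which is the game-theoretic face of LP duality.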

## Conclusion

The idea of duality has brought another viewpoint to every linear and nonlinear programming optimization problem since 1948. (6) This technique can be applied to situations such as analyzing economic constraints, allocating resources, and bounding optimization problems. Switching the objective function gives insight into shadow prices...

## References

1. https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec18_duality_thy.pdf
2. http://web.mit.edu/15.053/www/AMP-Chapter-04.pdf
3. http://cgm.cs.mcgill.ca/~avis/courses/567/notes/ch10.pdf
4. Professor You, lecture slides (Linear Programming, Duality)
5. https://math.dartmouth.edu/~m126w18/pdf/part4.pdf
6. https://en.wikipedia.org/wiki/Duality_(optimization)
7. https://faculty.math.illinois.edu/~stolee/Teaching/13-482/gametheory.pdf
8. "John Von Neumann." John Von Neumann Biography. N.p., n.d. Web. 25 May 2014.
9. https://ideas.repec.org/p/fth/nesowa/96-18.html
10. https://ocw.mit.edu/courses/sloan-school-of-management/15-053-optimization-methods-in-management-science-spring-2013/lecture-notes/MIT15_053S13_lec6.pdf