File:Knapsack.svg

Example of a one-dimensional (constraint) knapsack problem: which boxes should be chosen to maximize the amount of money while still keeping the overall weight under or equal to 15 kg? A multiple constrained problem could consider both the weight and volume of the boxes. Modeling the shapes and sizes would instead constitute a packing problem.
(Solution: if any number of each box is available, then three yellow boxes and three grey boxes; if only the shown boxes are available, then all but the green box.)

The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most useful items.

The problem often arises in resource allocation with financial constraints. A similar problem also appears in combinatorics, complexity theory, cryptography and applied mathematics.

The decision problem form of the knapsack problem is the question "can a value of at least V be achieved without exceeding the weight W?"

Definition

In the following, we have n kinds of items, 1 through n. Each kind of item i has a value $v_i$ and a weight $w_i$. We usually assume that all values and weights are nonnegative. To simplify the representation, we can also assume that the items are listed in increasing order of weight. The maximum weight that we can carry in the bag is W.

The most common formulation of the problem is the 0-1 knapsack problem, which restricts the number $x_i$ of copies of each kind of item to zero or one. Mathematically the 0-1 knapsack problem can be formulated as:

  • maximize $\sum_{i=1}^{n} v_i x_i$
  • subject to $\sum_{i=1}^{n} w_i x_i \le W$ and $x_i \in \{0, 1\}$ for all $i$
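
To make this formulation concrete, here is a minimal Python sketch (the item values, weights and capacity are invented purely for illustration) that simply enumerates every 0-1 assignment and keeps the best feasible one; it is only practical for very small n:

  from itertools import product

  # Hypothetical instance: values, weights and the capacity W are made up.
  values = [4, 2, 10, 1]
  weights = [12, 1, 4, 2]
  W = 15

  best_value, best_choice = 0, None
  # Try every assignment x = (x_1, ..., x_n) with x_i in {0, 1};
  # it is feasible if the total weight does not exceed W.
  for x in product((0, 1), repeat=len(values)):
      total_weight = sum(w * xi for w, xi in zip(weights, x))
      total_value = sum(v * xi for v, xi in zip(values, x))
      if total_weight <= W and total_value > best_value:
          best_value, best_choice = total_value, x

  print(best_value, best_choice)   # -> 13 (0, 1, 1, 1)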

The bounded knapsack problem restricts the number $x_i$ of copies of each kind of item to a maximum non-negative integer value $c$. Mathematically the bounded knapsack problem can be formulated as:

  • maximize $\sum_{i=1}^{n} v_i x_i$
  • subject to $\sum_{i=1}^{n} w_i x_i \le W$ and $x_i \in \{0, 1, \ldots, c\}$ for all $i$

The unbounded knapsack problem places no upper bound on the number of copies of each kind of item: it is formulated as above, except that the only restriction on $x_i$ is that it be a nonnegative integer.

Of particular interest is the special case of the problem with these properties:

  • it is a decision problem,
  • it is a 0-1 problem,
  • for each kind of item, the weight equals the value: $w_i = v_i$.

Notice that in this special case, the problem is equivalent to this: given a set of nonnegative integers, does any subset of it add up to exactly W? Or, if negative weights are allowed and W is chosen to be zero, the problem is: given a set of integers, does any subset add up to exactly 0? This special case is called the subset sum problem. In the field of cryptography the term knapsack problem is often used to refer specifically to the subset sum problem.
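
To make the subset sum special case concrete, the following minimal Python sketch (with an invented example set) decides whether any subset of a set of nonnegative integers adds up exactly to a target W:

  def subset_sum(numbers, target):
      # Track every sum that some subset of the numbers seen so far can reach.
      reachable = {0}
      for x in numbers:
          reachable |= {s + x for s in reachable if s + x <= target}
      return target in reachable

  # Hypothetical example: does a subset of {3, 34, 4, 12, 5, 2} sum to exactly 9?
  print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # -> True (4 + 5)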

If multiple knapsacks are allowed, the problem is better thought of as the bin packing problem.

Computational complexity

The knapsack problem is interesting from the perspective of computer science because

  • there is a pseudo-polynomial time algorithm using dynamic programming
  • there is a fully polynomial-time approximation scheme, which uses the pseudo-polynomial time algorithm as a subroutine
  • solving the problem exactly is NP-hard (its decision form is NP-complete), thus it is expected that no algorithm can be both correct and fast (polynomial-time) on all cases
  • many cases that arise in practice, and "random instances" from some distributions, can nonetheless be solved exactly.

The subset sum version of the knapsack problem is commonly known as one of Karp's 21 NP-complete problems.

There have been attempts to use subset sum as the basis for public key cryptography systems, such as the Merkle-Hellman knapsack cryptosystem. These attempts typically used some group other than the integers. Merkle-Hellman and several similar algorithms were later broken, because the particular subset sum problems they produced were in fact solvable by polynomial-time algorithms.

One theme in research literature is to identify what the "hard" instances of the knapsack problem look like[1][2], or viewed another way, to identify what properties of instances in practice might make them more amenable than their worst-case NP-complete behaviour suggests.

Several algorithms are freely available to solve knapsack problems, based on the dynamic programming approach,[3] the branch and bound approach,[4] or hybridizations of both approaches.[5][6][7][8]

Dynamic programming solution

Unbounded knapsack problem

If all weights ($w_1, \ldots, w_n$ and $W$) are nonnegative integers, the knapsack problem can be solved in pseudo-polynomial time using dynamic programming. The following describes a dynamic programming solution for the unbounded knapsack problem.

To simplify things, assume all weights are strictly positive ($w_i > 0$). We wish to maximize the total value subject to the constraint that the total weight is less than or equal to W. Then for each $w \le W$, define $m[w]$ to be the maximum value that can be attained with total weight less than or equal to $w$. $m[W]$ is then the solution to the problem.

Observe that m[w] has the following properties:

  • $m[0] = 0$ (the sum of zero items, i.e., the summation of the empty set)
  • $m[w] = \max_{i:\, w_i \le w} \left( v_i + m[w - w_i] \right)$

where $v_i$ is the value of the i-th kind of item.

Here the maximum of the empty set is taken to be zero. Tabulating the results from $m[0]$ up through $m[W]$ gives the solution. Since the calculation of each $m[w]$ involves examining $n$ items, and there are $W$ values of $m[w]$ to calculate, the running time of the dynamic programming solution is $O(nW)$. Dividing $w_1, w_2, \ldots, w_n$ and $W$ by their greatest common divisor is an obvious way to improve the running time.
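
The recurrence above translates almost line for line into code. A minimal Python sketch (the item data are invented for illustration):

  def unbounded_knapsack(values, weights, W):
      # m[w] = maximum value attainable with total weight <= w,
      # with an unlimited number of copies of each item.
      m = [0] * (W + 1)            # m[0] = 0, the empty sum
      for w in range(1, W + 1):
          m[w] = max((v + m[w - wi] for v, wi in zip(values, weights) if wi <= w),
                     default=0)    # the maximum of the empty set is taken to be zero
      return m[W]

  # Hypothetical instance.
  print(unbounded_knapsack(values=[4, 2, 10, 1], weights=[12, 1, 4, 2], W=15))   # -> 36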

The $O(nW)$ complexity does not contradict the fact that the knapsack problem is NP-complete, since $W$, unlike $n$, is not polynomial in the length of the input to the problem. The length of the input to the problem is proportional to the number of bits in $W$, $\log W$, not to $W$ itself.

0-1 knapsack problem

A similar dynamic programming solution for the 0-1 knapsack problem also runs in pseudo-polynomial time. As above, assume $w_1, \ldots, w_n$ and $W$ are strictly positive integers. Define $m[i, w]$ to be the maximum value that can be attained with weight less than or equal to $w$ using the first $i$ items.

We can define $m[i, w]$ recursively, with $m[0, w] = 0$, as follows:

  • $m[i, w] = m[i-1, w]$ if $w_i > w$ (the new item weighs more than the current weight limit)
  • $m[i, w] = \max(m[i-1, w],\ m[i-1, w - w_i] + v_i)$ if $w_i \le w$.

The solution can then be found by calculating $m[n, W]$. To do this efficiently we can use a table to store previous computations. This solution will therefore run in $O(nW)$ time and $O(nW)$ space.
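
Written out as code, the table-filling above might look like the following Python sketch (the item data are invented for illustration):

  def knapsack_01(values, weights, W):
      # m[i][w] = maximum value using the first i items with total weight <= w.
      n = len(values)
      m = [[0] * (W + 1) for _ in range(n + 1)]    # m[0][w] = 0 for all w
      for i in range(1, n + 1):
          vi, wi = values[i - 1], weights[i - 1]
          for w in range(W + 1):
              if wi > w:
                  m[i][w] = m[i - 1][w]            # item i does not fit
              else:
                  m[i][w] = max(m[i - 1][w], m[i - 1][w - wi] + vi)
      return m[n][W]

  # Hypothetical instance.
  print(knapsack_01(values=[4, 2, 10, 1], weights=[12, 1, 4, 2], W=15))   # -> 13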

Greedy approximation algorithm

George Dantzig proposed a greedy approximation algorithm to solve the unbounded knapsack problem.[9] His version sorts the items in decreasing order of value per unit of weight, $v_i / w_i$. It then proceeds to insert them into the sack, starting with as many copies as possible of the first kind of item until there is no longer space in the sack for more. Provided that there is an unlimited supply of each kind of item, if $m^*$ is the maximum value of items that fit into the sack, then the greedy algorithm is guaranteed to achieve at least a value of $m^*/2$. However, for the bounded problem, where the supply of each kind of item is limited, the algorithm may be far from optimal.
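
A minimal Python sketch of this greedy procedure for the unbounded problem (the item data are invented for illustration):

  def greedy_unbounded(values, weights, W):
      # Consider items in decreasing order of value per unit of weight and
      # pack as many copies of each as still fit.
      order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
      total_value, remaining = 0, W
      for i in order:
          copies = remaining // weights[i]
          total_value += copies * values[i]
          remaining -= copies * weights[i]
      return total_value

  # Hypothetical instance; the result is guaranteed to be at least half the optimum.
  print(greedy_unbounded(values=[4, 2, 10, 1], weights=[12, 1, 4, 2], W=15))   # -> 36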

Dominance relations to simplify the resolution of the unbounded knapsack problem

Some relations between items imply that quite a lot of items may be useless when building an optimal solution. These relations are known as dominance relations. When an item i is known to be dominated by a set of items J, it can be discarded from the set of items usable to build an optimal solution. Dominance relations allow the size of the search space to be reduced significantly. All the dominance relations enumerated below can be derived from the following inequalities:

$\sum_{j \in J} w_j x_j \le \alpha\, w_i$ and $\sum_{j \in J} v_j x_j \ge \alpha\, v_i$ for some nonnegative integers $x_j$ ($j \in J$),

where $\alpha$ is a positive integer and $J$ is a set of items that does not contain item i.

Collective dominance

The i-th item is collectively dominated by J, written $i \ll J$, if and only if $\sum_{j \in J} w_j x_j \le w_i$ and $\sum_{j \in J} v_j x_j \ge v_i$ for some nonnegative integers $x_j$, i.e. $\alpha = 1$. Verifying this dominance is computationally hard, so it can only be exploited within a dynamic programming approach.

Threshold dominance

The i-th item is threshold dominated by J, written $i \prec\prec J$, if and only if the above inequalities hold for some $\alpha \ge 1$. This is an obvious generalization of collective dominance, obtained by replacing the single item i with a compound item consisting of $\alpha$ copies of item i. The smallest such $\alpha$ defines the threshold of item i, written $t_i = (\alpha - 1)\, w_i$.

Multiple dominance

The i-th item is multiply dominated by a single item j, written $i \ll_m j$, if and only if $w_j x_j \le w_i$ and $v_j x_j \ge v_i$ for $x_j = \lfloor w_i / w_j \rfloor$, i.e. $J = \{j\}$ and $\alpha = 1$. This dominance can be exploited efficiently during preprocessing because it can be detected relatively easily.

Modular dominance

Let b be the best item, i.e. $v_b / w_b \ge v_j / w_j$ for all j. The i-th item is modularly dominated by a single item j, written $i \ll_\equiv j$, if and only if $w_j + t\, w_b \le w_i$ and $v_j + t\, v_b \ge v_i$ for some nonnegative integer $t$, i.e. $J = \{b, j\}$, $\alpha = 1$, $x_b = t$ and $x_j = 1$.

Applications

Knapsack problems can be applied to real-world decision-making processes in a wide variety of fields, such as finding the least wasteful way to cut raw materials,[10] selection of capital investments and financial portfolios,[11] selection of assets for asset-backed securitization,[12] and generating keys for the Merkle–Hellman knapsack cryptosystem.[13]

One early application of knapsack algorithms was in the construction and scoring of tests in which the test-takers have a choice as to which questions they answer. On tests with a homogeneous distribution of point values for each question, it is a fairly simple process to provide the test-takers with such a choice. For example, if an exam contains 12 questions each worth 10 points, the test-taker need only answer 10 questions to achieve a maximum possible score of 100 points. However, on tests with a heterogeneous distribution of point values—that is, when different questions or sections are worth different amounts of points—it is more difficult to provide choices. Feuerman and Weiss proposed a system in which students are given a heterogeneous test with a total of 125 possible points. The students are asked to answer all of the questions to the best of their abilities. Of the possible subsets of problems whose total point values add up to 100, a knapsack algorithm would determine which subset gives each student the highest possible score.[14]
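
As a toy illustration of the Feuerman and Weiss setup (the question data below are invented), a small dynamic program over attainable point totals can pick, among the subsets of questions worth exactly 100 possible points, the one on which a given student scores highest:

  # Hypothetical exam: (possible points, points this student would actually earn).
  questions = [(25, 20), (25, 24), (20, 10), (20, 18), (20, 15), (15, 15)]

  # best[p] = highest earned score over subsets whose possible points total exactly p.
  best = {0: 0}
  for possible, earned in questions:
      for p, score in list(best.items()):          # snapshot: each question used at most once
          if p + possible <= 125:
              best[p + possible] = max(best.get(p + possible, -1), score + earned)

  # Of the subsets whose point values add up to exactly 100, report the best achievable score.
  print(best.get(100))   # -> 82 (answer everything except the first 25-point question)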

History

The knapsack problem has been studied for several centuries, with early works dating as far back as 1897.[15] It is not known how the name "knapsack problem" originated, though the problem was referred to as such in the early works of mathematician Tobias Dantzig (1884–1956),[citation needed] suggesting that the name could have existed in folklore before a mathematical problem had been fully defined.[16]

The quadratic knapsack problem was first introduced by Gallo, Hammer, and Simeone in 1980.[17]

A 1998 study of the Stony Brook University algorithms repository showed that, out of 75 algorithmic problems, the knapsack problem was the 18th most popular and the 4th most needed after kd-trees, suffix trees, and the bin packing problem.[18]

See also

  • List of knapsack problems
  • Packing problem
  • Cutting stock problem
  • Continuous knapsack problem

Notes

  1. Pisinger, D. 2003. Where are the hard knapsack problems? Technical Report 2003/08, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark.
  2. L. Caccetta, A. Kulanoot, Computational Aspects of Hard Knapsack Problems, Nonlinear Analysis 47 (2001) 5547–5558.
  3. Rumen Andonov, Vincent Poirriez, Sanjay Rajopadhye (2000) Unbounded Knapsack Problem : dynamic programming revisited European Journal of Operational Research 123: 2. 168-181 http://dx.doi.org/10.1016/S0377-2217(99)00265-9
  4. S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementation , John Wiley and Sons, 1990
  5. S. Martello, D. Pisinger, P. Toth, Dynamic programming and strong bounds for the 0-1 knapsack problem , Manag. Sci., 45:414-424, 1999.
  6. Vincent Poirriez, Nicola Yanev, Rumen Andonov (2009) A Hybrid Algorithm for the Unbounded Knapsack Problem Discrete Optimization http://dx.doi.org/10.1016/j.disopt.2008.09.004
  7. G. Plateau, M. Elkihel, A hybrid algorithm for the 0-1 knapsack problem, Methods of Oper. Res., 49:277-293, 1985.
  8. S. Martello, P. Toth, A mixture of dynamic programming and branch-and-bound for the subset-sum problem, Manag. Sci., 30:765-771
  9. George B. Dantzig - Discrete-Variable Extremum Problems, OPERATIONS RESEARCH Vol. 5, No. 2, April 1957, pp. 266-288, DOI: http://dx.doi.org/10.1287/opre.5.2.266
  10. Kellerer, Pferschy, and Pisinger 2004, p. 449
  11. Kellerer, Pferschy, and Pisinger 2004, p. 461
  12. Kellerer, Pferschy, and Pisinger 2004, p. 465
  13. Kellerer, Pferschy, and Pisinger 2004, p. 472
  14. Feuerman, Martin; Weiss, Harvey (1973). "A Mathematical Programming Model for Test Construction and Scoring". Management Science 19 (8): 961–966.
  15. Mathews, G. B. (1897). "On the partition of numbers". Proceedings of the London Mathematical Society 28: 486–490.
  16. Kellerer, Pferschy, and Pisinger 2004, p. 3
  17. Gallo, G.; Hammer, P. L.; Simeone, B. (1980). "Quadratic knapsack problems". Mathematical Programming Studies 12: 132–149.
  18. Skiena, S. S. (1999). "Who is Interested in Algorithms and Why? Lessons from the Stony Brook Algorithms Repository". ACM SIGACT News 30 (3).

References

  • Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman. A6: MP9, p. 247.
  • Kellerer, Hans; Pferschy, Ulrich; Pisinger, David (2004). Knapsack Problems. Springer.
