PLS (complexity)




In computational complexity theory, Polynomial Local Search (PLS) is a complexity class that models the difficulty of finding a locally optimal solution to an optimization problem. The main characteristics of problems that lie in PLS are that the cost of a solution can be calculated in polynomial time and the neighborhood of a solution can be searched in polynomial time. Therefore it is possible to verify whether or not a solution is a local optimum in polynomial time.
Furthermore, depending on the problem and the algorithm used to solve it, it may be faster to find a local optimum than a global optimum.




Contents






  • 1 Description


    • 1.1 Example




  • 2 Formal Definition


    • 2.1 Example neighborhood structures




  • 3 The standard Algorithm


  • 4 Reductions


    • 4.1 PLS-reduction


    • 4.2 Tight PLS-reduction


      • 4.2.1 Definition Transition graph


      • 4.2.2 Definition Tight PLS-reduction






  • 5 Relationship to other complexity classes


  • 6 PLS-completeness


    • 6.1 Definition


    • 6.2 List of PLS-complete Problems




  • 7 References





Description


When searching for a local optimum, there are two interesting issues to deal with: first, how to find a local optimum, and second, how long it takes to find one. For many local search algorithms it is not known whether they can find a local optimum in polynomial time.[1] To answer the question of how long it takes to find a local optimum, Johnson, Papadimitriou and Yannakakis[2] introduced the complexity class PLS in their paper "How easy is local search?". It contains local search problems for which local optimality can be verified in polynomial time.


A local search problem is in PLS if the following properties are satisfied:



  • The size of every solution is polynomially bounded in the size of the instance I.

  • It is possible to find some solution of a problem instance in polynomial time.

  • It is possible to calculate the cost of each solution in polynomial time.

  • It is possible to find all neighbors of each solution in polynomial time.


With these properties, it is possible to find for each solution s the best neighboring solution or, if there is no such better neighboring solution, state that s is a local optimum.



Example


Consider the following instance I of the Max-2Sat problem: (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ x3). The aim is to find an assignment that maximizes the sum of the satisfied clauses.


A solution s for that instance is a bit string that assigns every x_i the value 0 or 1. In this case, a solution consists of 3 bits, for example s = 000, which stands for the assignment of x1 to x3 with the value 0. The set of solutions F_L(I) is the set of all possible assignments of x1, x2 and x3.


The cost of each solution is the number of satisfied clauses, so c_L(I, s = 000) = 2, because the second and the third clause are satisfied.


A Flip neighbor of a solution s is reached by flipping one bit of the bit string s, so the neighbors of s = 000 are N(I, 000) = {100, 010, 001} with the following costs:

c_L(I, 100) = 2
c_L(I, 010) = 2
c_L(I, 001) = 2


If we are looking for a solution with maximum cost, s has no neighbor with better cost. Even though s is not a global optimum (which would be, for example, the solution s' = 111 that satisfies all clauses, with c_L(I, s') = 3), s is a local optimum, because none of its neighbors has better cost.


Intuitively it can be argued that this problem lies in PLS, because:



  • It is possible to find a solution to an instance in polynomial time, for example by setting all bits to 0.

  • It is possible to calculate the cost of a solution in polynomial time, by going once through the whole instance and counting the clauses that are satisfied.

  • It is possible to find all neighbors of a solution in polynomial time, by taking the set of solutions that differ from s in exactly one bit.
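The three properties above can be made concrete in a short sketch. The helper names (`cost`, `flip_neighbors`, `is_local_optimum`) and the encoding of literals are illustrative choices, not part of the formal definition:

```python
# Illustrative sketch of the Max-2Sat instance (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ x3)
# under the Flip neighborhood. A positive integer i encodes the literal x_i,
# a negative -i encodes ¬x_i; a solution is a tuple of 0/1 bits.
CLAUSES = [(1, 2), (-1, 3), (-2, 3)]

def cost(s):
    """Number of satisfied clauses, i.e. c_L(I, s)."""
    satisfied = lambda lit: (s[abs(lit) - 1] == 1) == (lit > 0)
    return sum(any(satisfied(lit) for lit in clause) for clause in CLAUSES)

def flip_neighbors(s):
    """All solutions at Hamming distance exactly one from s."""
    return [tuple(b ^ (i == j) for j, b in enumerate(s)) for i in range(len(s))]

def is_local_optimum(s):
    """s is locally optimal if no Flip neighbor has strictly better cost."""
    return all(cost(r) <= cost(s) for r in flip_neighbors(s))
```

With these definitions, cost((0, 0, 0)) evaluates to 2 and is_local_optimum((0, 0, 0)) to True, while the global optimum 111 has cost 3, matching the example above.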



Formal Definition


A local search problem L has a set D_L of instances, which are encoded using strings over a finite alphabet Σ. For each instance I there exists a finite solution set F_L(I). Let R be the relation that models L. The relation R ⊆ D_L × F_L(I) := {(I, s) | I ∈ D_L, s ∈ F_L(I)} is in PLS [2][3][4] if:



  • The size of every solution s ∈ F_L(I) is polynomially bounded in the size of I

  • Problem instances I ∈ D_L and solutions s ∈ F_L(I) are verifiable in polynomial time

  • There is a polynomial-time computable function A: D_L → F_L(I) that returns for each instance I ∈ D_L some solution s ∈ F_L(I)

  • There is a polynomial-time computable function B: D_L × F_L(I) → ℝ+ [5] that returns for each solution s ∈ F_L(I) of an instance I ∈ D_L the cost c_L(I, s)

  • There is a polynomial-time computable function N: D_L × F_L(I) → Powerset(F_L(I)) that returns the set of neighbors for an instance–solution pair

  • There is a polynomial-time computable function C: D_L × F_L(I) → N(I, s) ∪ {OPT} that returns a neighboring solution s' with better cost than solution s, or states that s is locally optimal

  • For every instance I ∈ D_L, R contains exactly the pairs (I, s) where s is a locally optimal solution of I


The solutions of an instance I have the structure of an implicit graph (also called transition graph[6]): the vertices are the solutions, and two solutions s, s' ∈ F_L(I) are connected by a directed arc iff s' ∈ N(I, s).


A local optimum is a solution s that has no neighbor with better cost. In the implicit graph, a local optimum is a sink. A neighborhood in which every local optimum is also a global optimum, i.e. a solution with the best possible cost, is called an exact neighborhood.[6][1]



Example neighborhood structures


Example neighborhood structures for problems with boolean variables (or bit strings) as solutions:




  • Flip[2] - The neighbors of a solution s = x_1,...,x_n are reached by negating (flipping) one arbitrary bit x_i. So a solution s and all its neighbors r ∈ N(I, s) have Hamming distance one: H(s, r) = 1.


  • Kernighan-Lin[2][6] - A solution r is a neighbor of a solution s if r can be obtained from s by a sequence of greedy flips in which no bit is flipped twice. That means: starting with s, the Flip neighbor s_1 of s with the best cost, or the least loss of cost, is chosen as a neighbor of s in the Kernighan-Lin structure; then the best (or least bad) Flip neighbor of s_1; and so on, up to the solution s_i in which every bit of s has been negated. Note that it is not allowed to flip a bit back once it has been flipped.


  • k-Flip[7] - A solution r is a neighbor of a solution s if the Hamming distance H between s and r is at most k, so H(s, r) ≤ k.


Example neighborhood structures for problems on graphs:




  • Swap[8] - A partition (P_2, P_3) of the nodes of a graph is a neighbor of a partition (P_0, P_1) if (P_2, P_3) can be obtained from (P_0, P_1) by swapping one node p_0 ∈ P_0 with a node p_1 ∈ P_1.


  • Kernighan-Lin[1][2] - A partition (P_2, P_3) is a neighbor of (P_0, P_1) if (P_2, P_3) can be obtained by a greedy sequence of swaps of nodes in P_0 with nodes in P_1. In each step, the two nodes p_0 ∈ P_0 and p_1 ∈ P_1 are swapped for which the partition ((P_0 \ p_0) ∪ p_1, (P_1 \ p_1) ∪ p_0) gains the highest possible weight, or loses the least possible weight. Note that no node is allowed to be swapped twice.


  • Fiduccia-Matheyses[1][9] - This neighborhood is similar to the Kernighan-Lin neighborhood structure, a greedy sequence of swaps, except that each swap happens in two steps. First the node p_0 ∈ P_0 with the highest gain of cost, or the least loss of cost, is moved to P_1; then the node p_1 ∈ P_1 with the highest gain of cost, or the least loss of cost, is moved to P_0 to balance the partitions again. Experiments have shown that Fiduccia-Mattheyses has a smaller run time per iteration of the standard algorithm, though it sometimes finds an inferior local optimum.


  • FM-Swap[1] - This neighborhood structure is based on the Fiduccia-Mattheyses neighborhood structure. Each solution s = (P_0, P_1) has only one neighbor, the partition obtained after the first swap of Fiduccia-Mattheyses.
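As a small sketch of the Swap neighborhood above (the function name and the set-based representation of partitions are illustrative assumptions), the neighbors of a partition can be enumerated directly:

```python
# Illustrative sketch of the Swap neighborhood: a neighbor of the partition
# (P0, P1) is obtained by swapping one node p0 ∈ P0 with one node p1 ∈ P1.
from itertools import product

def swap_neighbors(p0, p1):
    """All partitions reachable from (p0, p1) by a single swap."""
    return [((p0 - {a}) | {b}, (p1 - {b}) | {a})
            for a, b in product(sorted(p0), sorted(p1))]
```

For (P0, P1) = ({1, 2}, {3, 4}) this yields the four partitions obtained by exchanging one node from each side; a uniform partition of 2n nodes thus has n² Swap neighbors.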



The standard Algorithm


Consider the following computational problem:
Given some instance I of a PLS problem L, find a locally optimal solution s ∈ F_L(I) such that c_L(I, s') ≥ c_L(I, s) for all s' ∈ N(I, s).


Every local search problem can be solved using the following iterative improvement algorithm:[2]



  1. Use A_L to find an initial solution s

  2. Use the algorithm C_L to find a better solution s' ∈ N(I, s). If such a solution exists, replace s by s' and repeat step 2; otherwise, return s
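The two steps above can be sketched as follows for a maximization problem, with the Max-2Sat instance from the example as a demo; the helper names (`standard_algorithm`, `flip`, `cost`) are illustrative:

```python
# Illustrative sketch of the standard algorithm (maximization version):
# step 1 corresponds to A_L, the improving move in the loop to C_L.
def standard_algorithm(initial, neighbors, cost):
    s = initial()                                   # step 1: initial solution
    while True:
        better = [r for r in neighbors(s) if cost(r) > cost(s)]
        if not better:
            return s                                # s is a local optimum
        s = better[0]                               # step 2: improve and repeat

# Demo on Max-2Sat (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ x3) with Flip neighbors.
clauses = [(1, 2), (-1, 3), (-2, 3)]
cost = lambda s: sum(any((s[abs(l) - 1] == 1) == (l > 0) for l in c)
                     for c in clauses)
flip = lambda s: [tuple(b ^ (i == j) for j, b in enumerate(s))
                  for i in range(len(s))]
```

Starting from 000 the algorithm stops immediately at the local optimum 000 (cost 2), while starting from 100 it moves to 101, a global optimum with cost 3; which local optimum is reached depends on the start solution.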


Unfortunately, it generally takes an exponential number of improvement steps to find a local optimum, even if the problem L can be solved exactly in polynomial time.[2] It is not always necessary to use the standard algorithm; there may be a different, faster algorithm for a certain problem. For example, a local search algorithm used for Linear programming is the Simplex algorithm.


The run time of the standard algorithm is pseudo-polynomial in the number of different costs of a solution.[10]


The standard algorithm needs only polynomial space: it only has to save the current solution s, whose size is polynomially bounded by definition.[1]



Reductions


A reduction of one problem to another may be used to show that the second problem is at least as difficult as the first. In particular, a PLS-reduction is used to prove that a local search problem lying in PLS is also PLS-complete, by reducing a problem already known to be PLS-complete to the one that shall be proven PLS-complete.



PLS-reduction


A local search problem L_1 is PLS-reducible[2] to a local search problem L_2 if there are two polynomial-time computable functions f: D_1 → D_2 and g: D_1 × F_2(f(I_1)) → F_1(I_1) such that:



  • if I_1 is an instance of L_1, then f(I_1) is an instance of L_2

  • if s_2 is a solution for f(I_1) of L_2, then g(I_1, s_2) is a solution for I_1 of L_1

  • if s_2 is a local optimum for instance f(I_1) of L_2, then g(I_1, s_2) has to be a local optimum for instance I_1 of L_1


It is sufficient to map only the local optima of f(I_1) to the local optima of I_1, and to map all other solutions, for example, to the standard solution returned by A_1.[6]


PLS-reductions are transitive.[2]



Tight PLS-reduction



Definition Transition graph


The transition graph[6] T_I of an instance I of a problem L is a directed graph. The nodes represent all elements of the finite solution set F_L(I), and the edges point from one solution to a neighbor with strictly better cost. Therefore the transition graph is acyclic. A sink, which is a node with no outgoing edges, is a local optimum.
The height of a vertex v is the length of the shortest path from v to the nearest sink.
The height of the transition graph is the largest of the heights of all vertices, that is, the length of the longest shortest path from a node to its nearest sink.
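For a small instance the transition graph and its height can be computed by brute force. This sketch uses a toy maximization problem over n-bit strings under the Flip neighborhood; the helper names and the toy cost function (number of ones) are illustrative assumptions:

```python
# Illustrative sketch: build the transition graph of a tiny maximization
# problem over n-bit strings under Flip, and compute its height.
from functools import lru_cache
from itertools import product

def transition_graph(n, cost):
    """Edges from each solution to its strictly better Flip neighbors."""
    graph = {}
    for s in product((0, 1), repeat=n):
        flips = (tuple(b ^ (i == j) for j, b in enumerate(s)) for i in range(n))
        graph[s] = [r for r in flips if cost(r) > cost(s)]
    return graph

def height(graph):
    """Largest, over all nodes, shortest distance to a sink (local optimum)."""
    @lru_cache(maxsize=None)
    def h(s):
        # The graph is acyclic (edges strictly improve cost), so this terminates.
        return 0 if not graph[s] else 1 + min(h(r) for r in graph[s])
    return max(h(s) for s in graph)
```

With cost = number of ones over 3-bit strings, the only sink is 111 and every improving step flips a 0 to a 1, so the height of the graph is 3 (the distance from 000).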



Definition Tight PLS-reduction


A PLS-reduction (f, g) from a local search problem L_1 to a local search problem L_2 is a
tight PLS-reduction[8] if for any instance I_1 of L_1 a subset R of solutions
of the instance I_2 = f(I_1) of L_2 can be chosen, so that the following properties are satisfied:




  • R contains, among other solutions, all local optima of I_2

  • For every solution p of I_1, a solution q ∈ R of I_2 = f(I_1) can be constructed in polynomial time, so that g(I_1, q) = p

  • If the transition graph T_{f(I_1)} of f(I_1) contains a directed path from q to q_0 with q, q_0 ∈ R, but all internal vertices of the path are outside R, then for the corresponding solutions p = g(I_1, q) and p_0 = g(I_1, q_0) either p = p_0 holds or T_{I_1} contains an edge from p to p_0



Relationship to other complexity classes


PLS lies between the functional versions of P and NP: FP ⊆ PLS ⊆ FNP.[2]


PLS is also a subclass of TFNP,[11] which describes computational problems for which a solution is guaranteed to exist and can be recognized in polynomial time. For a problem in PLS, a solution is guaranteed to exist because the minimum-cost vertex of the entire transition graph is a valid solution, and the validity of a solution can be checked by computing its neighbors and comparing its cost to each of theirs.


It is also proven that if a PLS problem is NP-hard, then NP = co-NP.[2]



PLS-completeness



Definition


A local search problem L is PLS-complete[2] if




  • L is in PLS

  • every problem in PLS can be PLS-reduced to L


The optimization version of the circuit problem under the Flip neighborhood structure was the first problem shown to be PLS-complete.[2]



List of PLS-complete Problems


This is an incomplete list of some known problems that are PLS-complete.



Overview of PLS-complete problems and how they are reduced to each other. Syntax: Optimization-Problem/Neighborhood structure. Dotted arrow: PLS-reduction from a problem L to a problem Q: L ← Q. Black arrow: tight PLS-reduction.[4]


Notation: Problem / Neighborhood structure




  • Min/Max-circuit/Flip has been proven to be the first PLS-complete problem.[2]


  • Positive-not-all-equal-max-3Sat/Flip has been proven to be PLS-complete via a tight PLS-reduction from Min/Max-circuit/Flip to Positive-not-all-equal-max-3Sat/Flip. Note that Positive-not-all-equal-max-3Sat/Flip can be reduced from Max-Cut/Flip too.[8]


  • Positive-not-all-equal-max-3Sat/Kernighan-Lin has been proven to be PLS-complete via a tight PLS-reduction from Min/Max-circuit/Flip to Positive-not-all-equal-max-3Sat/Kernighan-Lin.[1]


  • Max-2Sat/Flip has been proven to be PLS-complete via a tight PLS-reduction from Max-Cut/Flip to Max-2Sat/Flip.[1][8]


  • Min-4Sat-B/Flip has been proven to be PLS-complete via a tight PLS-reduction from Min-circuit/Flip to Min-4Sat-B/Flip.[7]


  • Max-4Sat-B/Flip(or CNF-SAT) has been proven to be PLS-complete via a PLS-reduction from Max-circuit/Flip to Max-4Sat-B/Flip.[12]


  • Max-4Sat-(B=3)/Flip has been proven to be PLS-complete via a PLS-reduction from Max-circuit/Flip to Max-4Sat-(B=3)/Flip.[13]


  • Max-Uniform-Graph-Partitioning/Swap has been proven to be PLS-complete via a tight PLS-reduction from Max-Cut/Flip to Max-Uniform-Graph-partitioning/Swap.[8]


  • Max-Uniform-Graph-Partitioning/Fiduccia-Matheyses is stated to be PLS-complete without proof.[1]


  • Max-Uniform-Graph-Partitioning/FM-Swap has been proven to be PLS-complete via a tight PLS-reduction from Max-Cut/Flip to Max-Uniform-Graph-partitioning/FM-Swap.[8]


  • Max-Uniform-Graph-Partitioning/Kernighan-Lin has been proven to be PLS-complete via a PLS-reduction from Min/Max-circuit/Flip to Max-Uniform-Graph-Partitioning/Kernighan-Lin.[2] There is also a tight PLS-reduction from Positive-not-all-equal-max-3Sat/Kernighan-Lin to Max-Uniform-Graph-Partitioning/Kernighan-Lin.[1]


  • Max-Cut/Flip has been proven to be PLS-complete via a tight PLS-reduction from Positive-not-all-equal-max-3Sat/Flip to Max-Cut/Flip.[1][8]


  • Max-Cut/Kernighan-Lin is claimed to be PLS-complete without proof.[6]


  • Min-Independent-Dominating-Set-B/k-Flip has been proven to be PLS-complete via a tight PLS-reduction from Min-4Sat-B’/Flip to Min-Independent-Dominating-Set-B/k-Flip.[7]


  • Weighted-Independent-Set/Change is claimed to be PLS-complete without proof.[2][8][6]


  • Maximum-Weighted-Subgraph-with-property-P/Change is PLS-complete if property P = ”has no edges”, as it then equals Weighted-Independent-Set/Change. It has also been proven to be PLS-complete for a general hereditary, non-trivial property P via a tight PLS-reduction from Weighted-Independent-Set/Change to Maximum-Weighted-Subgraph-with-property-P/Change.[14]


  • Set-Cover/k-change has been proven to be PLS-complete for each k ≥ 2 via a tight PLS-reduction from (3, 2, r)-Max-Constraint-Assignment/Change to Set-Cover/k-change.[15]


  • Metric-TSP/k-Change has been proven to be PLS-complete via a PLS-reduction from Max-4Sat-B/Flip to Metric-TSP/k-Change.[13]


  • Metric-TSP/Lin-Kernighan has been proven to be PLS-complete via a tight PLS-reduction from Max-2Sat/Flip to Metric-TSP/Lin-Kernighan.[16]


  • Local-Multi-Processor-Scheduling/k-change has been proven to be PLS-complete via a tight PLS-reduction from Weighted-3Dimensional-Matching/(p, q)-Swap to Local-Multi-Processor-scheduling/(2p+q)-change, where (2p + q) ≥ 8.[5]


  • Selfish-Multi-Processor-Scheduling/k-change-with-property-t has been proven to be PLS-complete via a tight PLS-reduction from Weighted-3Dimensional-Matching/(p, q)-Swap to (2p+q)-Selfish-Multi-Processor-Scheduling/k-change-with-property-t, where (2p + q) ≥ 8.[5]

  • Finding the pure Nash Equilibrium in a General-Congestion-Game/Change has been proven PLS-complete via a tight PLS-reduction from Positive-not-all-equal-max-3Sat/Flip to General-Congestion-Game/Change.[17]

  • Finding the pure Nash Equilibrium in a Symmetric General-Congestion-Game/Change has been proven to be PLS-complete via a tight PLS-reduction from an asymmetric General-Congestion-Game/Change to symmetric General-Congestion-Game/Change.[17]

  • Finding a pure Nash Equilibrium in an Asymmetric Directed-Network-Congestion-Games/Change has been proven to be PLS-complete via a tight reduction from Positive-not-all-equal-max-3Sat/Flip to Directed-Network-Congestion-Games/Change [17] and also via a tight PLS-reduction from 2-Threshold-Games/Change to Directed-Network-Congestion-Games/Change.[18]

  • Finding a pure Nash Equilibrium in an Asymmetric Undirected-Network-Congestion-Games/Change has been proven to be PLS-complete via a tight PLS-reduction from 2-Threshold-Games/Change to Asymmetric Undirected-Network-Congestion-Games/Change.[18]

  • Finding a pure Nash Equilibrium in a 2-Threshold-Game/Change has been proven to be PLS-complete via a tight reduction from Max-Cut/Flip to 2-Threshold-Game/Change.[18]

  • Finding a pure Nash Equilibrium in Market-Sharing-Game/Change with polynomial bounded costs has been proven to be PLS-complete via a tight PLS-reduction from 2-Threshold-Games/Change to Market-Sharing-Game/Change.[18]

  • Finding a pure Nash Equilibrium in an Overlay-Network-Design/Change has been proven to be PLS-complete via a reduction from 2-Threshold-Games/Change to Overlay-Network-Design/Change. Analogously to the proof of asymmetric Directed-Network-Congestion-Game/Change, the reduction is tight.[18]


  • Min-0-1-Integer Programming/k-Flip has been proven to be PLS-complete via a tight PLS-reduction from Min-4Sat-B’/Flip to Min-0-1-Integer Programming/k-Flip.[7]


  • Max-0-1-Integer Programming/k-Flip is claimed to be PLS-complete via a PLS-reduction, but the proof is left out.[7]


  • (p, q, r)-Max-Constraint-Assignment


    • (3, 2, 3)-Max-Constraint-Assignment-3-partite/Change has been proven to be PLS-complete via a tight PLS-reduction from Circuit/Flip to (3, 2, 3)-Max-Constraint-Assignment-3-partite/Change.[19]


    • (2, 3, 6)-Max-Constraint-Assignment-2-partite/Change has been proven to be PLS-complete via a tight PLS-reduction from Circuit/Flip to (2, 3, 6)-Max-Constraint-Assignment-2-partite/Change.[19]


    • (6, 2, 2)-Max-Constraint-Assignment/Change has been proven to be PLS-complete via a tight reduction from Circuit/Flip to (6, 2, 2)-Max-Constraint-Assignment/Change.[19]


    • (4, 3, 3)-Max-Constraint-Assignment/Change equals Max-4Sat-(B=3)/Flip and has been proven to be PLS-complete via a PLS-reduction from Max-circuit/Flip.[13] It is claimed that the reduction can be extended so tightness is obtained.[19]




  • Nearest-Colorful-Polytope/Change has been proven to be PLS-complete via a PLS-reduction from Max-2Sat/Flip to Nearest-Colorful-Polytope/Change.[3]


  • Stable-Configuration/Flip in a Hopfield network has been proven to be PLS-complete if the thresholds are 0 and the weights are negative via a tight PLS-reduction from Max-Cut/Flip to Stable-Configuration/Flip.[1][8][16]


  • Weighted-3Dimensional-Matching/(p, q)-Swap has been proven to be PLS-complete for p ≥ 9 and q ≥ 15 via a tight PLS-reduction from (2, 3, r)-Max-Constraint-Assignment-2-partite/Change to Weighted-3Dimensional-Matching/(p, q)-Swap.[5]



References



  • Yannakakis, Mihalis (2009), "Equilibria, fixed points, and complexity classes", Computer Science Review, 3 (2): 71–85, CiteSeerX 10.1.1.371.5034, doi:10.1016/j.cosrev.2009.03.004.




  1. ^ abcdefghijkl Yannakakis, Mihalis (2003). Local search in combinatorial optimization - Computational complexity. Princeton University Press. pp. 19–55. ISBN 9780691115221.


  2. ^ abcdefghijklmnop Johnson, David S; Papadimitriou, Christos H; Yannakakis, Mihalis (1988). "How easy is local search?". Journal of Computer and System Sciences. 37 (1): 79–100. doi:10.1016/0022-0000(88)90046-3.


  3. ^ ab Mulzer, Wolfgang; Stein, Yannik (10 December 2014). "Computational Aspects of the Colorful Caratheodory Theorem". Discrete & Computational Geometry. 60 (3): 720–755. arXiv:1412.3347. Bibcode:2014arXiv1412.3347M. doi:10.1007/s00454-018-9979-y.


  4. ^ ab Borzechowski, Michaela. "The complexity class Polynomial Local Search (PLS) and PLS-complete problems" (PDF).


  5. ^ abcd Dumrauf, Dominic; Monien, Burkhard; Tiemann, Karsten (2009). "Multiprocessor Scheduling is PLS-Complete". System Sciences, 2009. HICSS'09. 42nd Hawaii International Conference on: 1–10.


  6. ^ abcdefg Michiels, Wil; Aarts, Emile; Korst, Jan (2010). Theoretical aspects of local search. Springer Science & Business Media. ISBN 9783642071485.


  7. ^ abcde Klauck, Hartmut (1996). "On the hardness of global and local approximation". Proceedings of the 5th Scandinavian Workshop on Algorithm Theory: 88–99.


  8. ^ abcdefghi Schäffer, Alejandro A.; Yannakakis, Mihalis (February 1991). "Simple Local Search Problems that are Hard to Solve". SIAM Journal on Computing. 20 (1): 56–87. doi:10.1137/0220004.


  9. ^ Fiduccia, C. M.; Mattheyses, R. M. (1982). "A Linear-time Heuristic for Improving Network Partitions". Proceedings of the 19th Design Automation Conference: 175–181.


  10. ^ Angel, Eric; Christopoulos, Petros; Zissimopoulos, Vassilis (2014). Paradigms of Combinatorial Optimization: Problems and New Approaches - Local Search: Complexity and Approximation (2 ed.). John Wiley & Sons, Inc., Hoboken. pp. 435–471. doi:10.1002/9781119005353.ch14. ISBN 9781119005353.


  11. ^ Megiddo, Nimrod; Papadimitriou, Christos H (1991). "On total functions, existence theorems and computational complexity". Theoretical Computer Science. 81 (2): 317–324. doi:10.1016/0304-3975(91)90200-L.


  12. ^ Krentel, M. (1 August 1990). "On Finding and Verifying Locally Optimal Solutions". SIAM Journal on Computing. 19 (4): 742–749. doi:10.1137/0219052. ISSN 0097-5397.


  13. ^ abc Krentel, Mark W. (1989). "Structure in locally optimal solutions". Proceedings of the 30th Annual Symposium on Foundations of Computer Science: 216–221.


  14. ^ Shimozono, Shinichi (1997). "Finding optimal subgraphs by local search". Theoretical Computer Science. 172 (1): 265–271. doi:10.1016/S0304-3975(96)00135-1.


  15. ^ Dumrauf, Dominic; Süß, Tim (2010). "On the Complexity of Local Search for Weighted Standard Set Problems". CiE 2010: Programs, Proofs, Processes. Lecture Notes in Computer Science. 6158. Springer, Berlin, Heidelberg. pp. 132–140. CiteSeerX 10.1.1.762.6801. doi:10.1007/978-3-642-13962-8_15. ISBN 978-3-642-13961-1.


  16. ^ ab Papadimitriou, C.H.; Schäffer, A. A.; Yannakakis, M. (1990). "On the complexity of local search". Proceedings of the 22nd ACM Symposium on Theory of Computing: 438–445.


  17. ^ abc Fabrikant, Alex; Papadimitriou, Christos; Talwar, Kunal (2004). The Complexity of Pure Nash Equilibria. Proceedings of the Thirty-sixth Annual ACM Symposium on Theory of Computing. ACM. pp. 604–612. CiteSeerX 10.1.1.3.7861. doi:10.1145/1007352.1007445. ISBN 978-1581138528.


  18. ^ abcde Ackermann, Heiner; Röglin, Heiko; Vöcking, Berthold (2008). "On the Impact of Combinatorial Structure on Congestion Games". J. ACM. 55 (6): 25:1–25:22. CiteSeerX 10.1.1.634.4913. doi:10.1145/1455248.1455249. ISSN 0004-5411.


  19. ^ abcd Dumrauf, Dominic; Monien, Burkhard (2013). "On the PLS-complexity of Maximum Constraint Assignment". Theor. Comput. Sci. 469: 24–52. doi:10.1016/j.tcs.2012.10.044. ISSN 0304-3975.







