Wikipedia only lists two problems under "unsolved problems in computer science":
What are other major problems that should be added to this list?
Rules:
- Only one problem per answer
- Provide a brief description and any relevant links
Complexity of matrix multiplication: can two $n \times n$ matrices be multiplied in $O(n^{2+\epsilon})$ time for every $\epsilon > 0$? The exponent of the best known upper bound even has a special symbol, $\omega$. Currently $\omega$ is approximately 2.376, by the Coppersmith-Winograd algorithm. Update: Andrew Stothers (in his 2010 thesis) showed that $\omega < 2.3737$, which was improved by Virginia Vassilevska Williams (in a Nov 2011 preprint) to $\omega < 2.3727$. Both bounds are obtained by a careful analysis of the basic Coppersmith-Winograd technique.
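For intuition, the first sub-cubic algorithm, Strassen's, already shows how trading eight recursive block products for seven drives the exponent below 3 (to $\log_2 7 \approx 2.81$). A minimal sketch, assuming square matrices whose dimension is a power of two:

```python
# Strassen's algorithm sketch: matrices as lists of lists, n a power of two.
def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    # Seven recursive products instead of eight -- the source of the speedup.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

The Coppersmith-Winograd line of work refines this same idea (fewer "essential" multiplications per block step) with far more sophisticated machinery.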
The complexity of Graph Isomorphism (GI) has been an open question for several decades. Stephen Cook mentioned it in his 1971 paper on the NP-completeness of SAT. Determining whether two graphs are isomorphic can usually be done quickly in practice. Read and Corneil reviewed the many attempts to tackle the complexity of GI up to that point: The Graph Isomorphism Disease, Journal of Graph Theory 1, 339–363, 1977. GI is not known to be in co-NP, but there is a simple randomized interactive protocol for Graph Non-Isomorphism (GNI), so GI (= co-GNI) is believed to be "close to" NP ${}\cap{}$ co-NP. On the other hand, if GI is NP-complete, then the Polynomial Hierarchy collapses, so GI is unlikely to be NP-complete (Boppana, Håstad, Zachos, Does co-NP Have Short Interactive Proofs?, IPL 25, 127–132, 1987). Shiva Kintali has a nice discussion of the complexity of GI at his blog.
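To make the gap concrete, here is the obvious brute-force isomorphism test (a sketch, not any of the practical solvers mentioned above): try all $n!$ vertex bijections. The open question is how much of this exponential search is really necessary.

```python
# Brute-force graph isomorphism: try every vertex bijection (n! time).
from itertools import permutations

def isomorphic(adj1, adj2):
    """adj1, adj2: lists of adjacency sets over vertices 0..n-1."""
    n = len(adj1)
    if n != len(adj2):
        return False
    verts = range(n)
    for pi in permutations(verts):
        # pi is an isomorphism iff it maps edges to edges and
        # non-edges to non-edges.
        if all((pi[v] in adj2[pi[u]]) == (v in adj1[u])
               for u in verts for v in verts):
            return True
    return False
```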
Complexity of Factoring: Is Factoring in $\mathsf{P}$?
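For contrast, the naive algorithm below (a sketch) runs in time exponential in the bit-length of $N$; the open question is whether a classical polynomial-time algorithm exists. (Shor's quantum algorithm puts Factoring in BQP, which leaves the classical question open.)

```python
# Trial division: correct but exponential in the number of bits of n.
def factor(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # leftover prime factor
    return factors
```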
The exponential-time hypothesis (ETH) asserts that solving SAT requires exponential, $2^{\Omega(n)}$, time. ETH implies many things, for instance that SAT is not in P, so ETH implies P ≠ NP. See Impagliazzo, Paturi, Zane, Which Problems Have Strongly Exponential Complexity?, JCSS 63, 512–530, 2001. ETH is widely believed, but likely to be difficult to prove, as it implies many other complexity class separations.
Is there a pivoting rule for the simplex algorithm that yields worst-case polynomial running time? More generally, is there any strongly polynomial algorithm for linear programming?
Is the unique games conjecture true?
Permanent versus Determinant. The permanent versus determinant question is interesting because of two facts. First, the permanent of a 0/1 matrix counts the number of perfect matchings in the bipartite graph it describes; computing the permanent is therefore #P-complete. At the same time, the definition of the permanent is very close to that of the determinant, ultimately differing only by a simple sign change. Determinant calculations are well known to be in P. Studying the difference between the permanent and the determinant, and how many determinant calculations are required to compute the permanent, speaks to P versus #P.
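The "simple sign change" is easy to see side by side: both are sums over all permutations of products of matrix entries, but the determinant weights each product by the permutation's sign. A brute-force sketch (exponential in $n$, of course):

```python
# det(A) = sum over permutations of sign(pi) * prod A[i][pi[i]];
# perm(A) is the same sum with every sign set to +1.
from itertools import permutations

def sign(pi):
    # Parity of a permutation via counting inversions.
    inv = sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi))
              if pi[i] > pi[j])
    return -1 if inv % 2 else 1

def perm_and_det(A):
    n = len(A)
    perm = det = 0
    for pi in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][pi[i]]
        perm += prod
        det += sign(pi) * prod
    return perm, det
```

For the all-ones $2 \times 2$ matrix (the biadjacency matrix of $K_{2,2}$) the permanent is 2, counting both perfect matchings, while the determinant is 0.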
Can we compute the FFT in much less than $O(n \log n)$ time? In the same (very) general vein, there are many questions about improving the running times of classical problems and algorithms: e.g., can all-pairs shortest paths (APSP) be solved in $O(n^{3-\epsilon})$ time?
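The $O(n \log n)$ bound in question is the one achieved by the textbook Cooley-Tukey recursion; whether it can be beaten in general remains open. A minimal sketch for $n$ a power of two:

```python
# Recursive Cooley-Tukey FFT: T(n) = 2 T(n/2) + O(n) = O(n log n).
import cmath

def fft(a):
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0] * n
    for k in range(n // 2):
        # Twiddle factor e^{-2*pi*i*k/n} combines the two half-size DFTs.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```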
NP versus co-NP. The NP versus co-NP question is interesting because NP ≠ co-NP implies P ≠ NP (since P is closed under complement). It also relates to "duality": the separation between finding/verifying examples and finding/verifying counterexamples. In fact, proving that a problem is in both NP and co-NP is our first good evidence that a problem which seems to be outside P is also likely not NP-complete.
Immerman and Vardi showed that fixed-point logic captures PTIME on the class of ordered structures. One of the biggest open problems in descriptive complexity theory is whether the dependency on the order can be removed: is there a logic that captures PTIME on all finite structures?
Put simply, a logic capturing PTIME would be a programming language for graph problems that works directly on the graph structure, without access to an encoding of the vertices and edges, and that expresses exactly the polynomial-time-decidable graph properties.
If there is no logic that captures PTIME, then $P \neq NP$, since NP is captured by existential second-order logic (Fagin's theorem). Conversely, a logic capturing PTIME would provide a possible line of attack on P vs NP. See Lipton's blog for an informal discussion and M. Grohe, The Quest for a Logic Capturing PTIME (LICS 2008) for a more technical survey.
The dynamic optimality conjecture for splay trees. Or more generally: Is any online dynamic binary search tree O(1)-competitive?
Can we compute the edit distance between two strings of length $n$ in sub-quadratic time, i.e., in $O(n^{2-\epsilon})$ time for some $\epsilon>0$?
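The quadratic baseline the question asks to beat is the classic Wagner-Fischer dynamic program, sketched here with a rolling row so it uses $O(n)$ space:

```python
# O(n*m) edit-distance DP: dist[i][j] = edit distance between
# s[:i] and t[:j], computed row by row.
def edit_distance(s, t):
    m, n = len(s), len(t)
    prev = list(range(n + 1))  # distances from the empty prefix of s
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,        # delete s[i-1]
                         cur[j - 1] + 1,     # insert t[j-1]
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitute
        prev = cur
    return prev[n]
```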
A linear-time deterministic algorithm for the minimum spanning tree problem.
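For reference, the textbook deterministic algorithms pay a sorting or heap cost; Kruskal's, sketched below, runs in $O(m \log m)$. A randomized expected-linear-time algorithm is known (Karger, Klein, Tarjan), but a deterministic linear-time one is open.

```python
# Kruskal's MST with union-find; the sort dominates: O(m log m).
def mst_weight(n, edges):
    """edges: list of (weight, u, v) tuples with vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:        # edge joins two components: keep it
            parent[ru] = rv
            total += w
    return total
```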
BQP = P? Also: is NP contained in BQP? I know this violates the rules by having two questions in one answer, but taken together with the P vs NP question, they are not necessarily independent questions.
Since I know computational geometry professors visit this site and have not mentioned this, I am not sure, but the existence or non-existence of deterministic (or randomized) subquadratic algorithms for 3SUM-hard problems seems to still be open.
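For 3SUM itself (given numbers, do three of them sum to zero?), the standard quadratic algorithm is sort-plus-two-pointers; the conjecture is that no truly subquadratic algorithm exists, and many geometry problems inherit hardness from it by reduction.

```python
# Standard O(n^2) 3SUM: sort once, then scan with two pointers.
def three_sum(nums):
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return (a[i], a[lo], a[hi])
            if s < 0:
                lo += 1   # need a larger sum
            else:
                hi -= 1   # need a smaller sum
    return None
```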
A little further away from the mainstream: what is the measure of NP within EXP? (Informally, if you have all problems in EXP on a table, and you pick one up uniformly at random, what is the probability that the problem you chose is also in NP? This question has been formalized by the notion of resource-bounded measure. It is known that P has measure zero within EXP, i.e., the problem you picked up from the table is almost surely not in P.)
Arguably the major open problem of proof complexity: demonstrate super-polynomial size lower bounds on propositional proofs (also called Frege proofs).

Informally, a Frege proof system is just a standard propositional proof system for proving propositional tautologies (of the kind one learns in a basic logic course), having axioms and deduction rules, where proof-lines are written as formulas. The size of a Frege proof is the number of symbols it takes to write down the proof. The problem then asks whether there is a family $(F_n)_{n=1}^\infty$ of propositional tautological formulas for which there is no polynomial $p$ such that the minimal Frege proof size of $F_n$ is at most $p(|F_n|)$, for all $n=1,2,\ldots$ (where $|F_n|$ denotes the size of the formula $F_n$).

Formal definition of a Frege proof system

Definition (Frege rule): A Frege rule is a sequence of propositional formulas $A_0(\overline x),\ldots,A_k(\overline x)$, for $k \ge 0$, written as $\frac{A_1(\overline x), \ldots, A_k(\overline x)}{A_0(\overline x)}$. In case $k = 0$, the Frege rule is called an axiom scheme. A formula $F_0$ is said to be derived by the rule from $F_1,\ldots,F_k$ if $F_0,\ldots,F_k$ are all substitution instances of $A_0,\ldots,A_k$ for the same substitution of formulas for the $\overline x$ variables (that is, there are formulas $B_1,\ldots,B_n$ such that $F_i = A_i(B_1/x_1,\ldots,B_n/x_n)$, for all $i=0,\ldots,k$). The Frege rule is said to be sound if whenever an assignment satisfies the formulas $A_1,\ldots,A_k$ above the line, it also satisfies the formula $A_0$ below the line.

Definition (Frege proof): Given a set of Frege rules, a Frege proof is a sequence of formulas such that every proof-line is either an axiom or was derived by one of the given Frege rules from previous proof-lines. If the sequence terminates with the formula $A$, then the proof is said to be a proof of $A$. The size of a Frege proof is the sum of the sizes of all the formulas in the proof.

A proof system is said to be implicationally complete if for every set of formulas $T$, if $T$ semantically implies $F$, then there is a proof of $F$ using (possibly) axioms from $T$. A proof system is said to be sound if it admits proofs of only tautologies (when not using auxiliary axioms, like the $T$ above).

Definition (Frege proof system): Given a propositional language and a set $P$ of sound Frege rules, we say that $P$ is a Frege proof system if $P$ is implicationally complete. Note that a Frege proof is always sound, since the Frege rules are assumed to be sound. We do not need to work with a specific Frege proof system, since a basic result in proof complexity states that any two Frege proof systems, even over different languages, are polynomially equivalent [Reckhow, PhD thesis, University of Toronto, 1976].

Establishing lower bounds on Frege proofs could be viewed as a step towards proving $NP \neq coNP$, since if $NP \neq coNP$ then no propositional proof system (including Frege) can have polynomial-size proofs of all tautologies.
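As a concrete example (a standard one, not taken from the definitions above), modus ponens is a sound Frege rule with $k = 2$:

```latex
% Modus ponens as a Frege rule: from A and A -> B, derive B.
\frac{x_1 \qquad x_1 \to x_2}{x_2}
% One substitution instance, with B_1 := (p \lor q) and B_2 := r:
\frac{(p \lor q) \qquad (p \lor q) \to r}{r}
```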
I know the OP asked for only one problem per post, but the RTA (Rewriting Techniques and their Applications) and TLCA (Typed Lambda Calculi and their Applications) conferences both maintain lists of open problems in their fields. These lists are quite useful, as they also include pointers to previous work on attempts to solve these problems.
Problems that are P-complete are not known to be parallelizable. P-complete problems include Horn-SAT and Linear Programming. Proving that they are inherently sequential would require separating some notion of parallelizable problems (such as NC or LOGCFL) from P. Computer processor designs are increasing the number of processing units in the hope that this will yield improved performance. If fundamental algorithms such as Linear Programming are inherently not parallelizable, then there are significant consequences.
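Horn-SAT illustrates the tension nicely: it is solvable in polynomial time by a simple marking (unit-propagation) procedure, yet that procedure looks stubbornly sequential, and the problem is P-complete. A sketch of the marking algorithm:

```python
# Horn-SAT (each clause has at most one positive literal) in poly time:
# grow the set of variables forced true until a fixpoint or a contradiction.
def horn_sat(clauses):
    """clauses: lists of ints; positive int = positive literal (DIMACS style).
    Returns the minimal model as a set of true variables, or None if unsat."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l > 0 and l in true_vars for l in clause):
                continue  # clause already satisfied
            # Literals not yet falsified (negative literals die as vars turn true).
            alive = [l for l in clause if not (l < 0 and -l in true_vars)]
            if not alive:
                return None  # all literals falsified: unsatisfiable
            if len(alive) == 1 and alive[0] > 0:
                true_vars.add(alive[0])  # forced unit
                changed = True
    return true_vars
```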
What is the approximability of Metric TSP? Christofides' Algorithm from 1975 is a polynomial-time (3/2)-approximation algorithm. Is it NP-hard to do better?
Sensitivity versus block sensitivity. Boolean sensitivity is interesting because block sensitivity, a close relative, is polynomially related to several other important and interesting complexity measures (like the certificate complexity of a boolean function). If sensitivity is always polynomially related to block sensitivity, we have an extremely simple measure of a boolean function that is related to so many others. One might read Rubinstein's "Sensitivity vs. block sensitivity of Boolean functions" or Kenyon and Kutin's "Sensitivity, block sensitivity, and l-block sensitivity of boolean functions."
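For small $n$ sensitivity is easy to compute by brute force, which makes the definition concrete: it is the maximum, over inputs, of how many single-bit flips change the function's value. (Block sensitivity instead maximizes over disjoint blocks of bits flipped together.) A sketch:

```python
# Sensitivity of f: {0,1}^n -> {0,1}, with inputs encoded as integers.
def sensitivity(f, n):
    best = 0
    for x in range(2 ** n):
        # Count single-bit flips of x that change f's value.
        s = sum(1 for i in range(n) if f(x) != f(x ^ (1 << i)))
        best = max(best, s)
    return best
```

For OR and for parity on $n$ bits the answer is $n$, the maximum possible.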
Is there a Quantum PCP theorem?
What is the query complexity of testing triangle-freeness in dense graphs (i.e., distinguishing triangle-free graphs from those $\epsilon$-far from being triangle-free)? The known upper bound is a tower of exponentials in $1/\epsilon$, while the known lower bound is only mildly superpolynomial in $1/\epsilon$. This is a pretty basic question in extremal graph theory/additive combinatorics that has been open for nearly 30 years.
There are a lot of open problems in lambda calculi (typed and untyped). See the TLCA list of open problems for details; there is also a nice PDF version without the frames. I particularly like problem #5.
Does there exist any hypothesis class that is NP-Hard to (improperly) PAC learn? This has some possible implications for complexity, and I think the best progress on this question is here: http://www.cs.princeton.edu/~dxiao/docs/ABX08.pdf
Parity games are two-player infinite-duration graph games, whose natural decision problem is in NP and co-NP, and whose natural search problem is in PPAD and PLS. http://en.wikipedia.org/wiki/Parity_game Can parity games be solved in polynomial time? (More generally, a long-standing major open question in mathematical programming is whether P-matrix Linear Complementarity Problems can be solved in polynomial time.)
Give an explicit function with exponential circuit complexity. Shannon proved in 1949 that if you pick a boolean function at random, then it has exponential circuit complexity with probability almost one. In comparison, the best lower bound known for an explicit boolean function $f:\{0,1\}^n \rightarrow \{0,1\}$ is $5n - o(n)$ (K. Iwama, O. Lachish, H. Morizumi, R. Raz).
Separate NEXP from BPP. People tend to believe BPP = P, but no one can separate NEXP from BPP.