Can we?

August 28th, 2008

I’d been in a miserable mood for weeks—regular readers will know that, for whatever reason, I go through these moods from time to time—and, strangely enough, a key to getting out of it seems to have been watching the Democratic convention and reading Obama’s two books.  I’m not saying this ought to have helped, only that it did.  Why?  Well, I can think of three possible reasons:

Firstly, it’s a truism that the cure for misery is to find something greater than yourself to worry about.  (Quantum complexity research used to fill that role for me, and will hopefully do so again in the near future.)  For someone who’s spent so much of his life inside his own head, it’s fascinating to watch people actually going out and doing something that, while often corny and cringe-inducing, also bears some recognizable relation to the public good.  What a strange, novel idea!  What’s even stranger, they might even succeed this year.  Of course, there’s a paradox at the heart of this philosophy: if you only worry about something greater than yourself because it distracts you from the tragedy of your own existence, then are you really worried about it in the first place?  But this is no more paradoxical than so much else about the human condition.  The hope is that caring about something greater than yourself will become a self-fulfilling prophecy.

Secondly, there’s the fact that a man whose writing demonstrates a finely-developed capacity for introspection, self-criticism, and doubt might become the Leader of the Free World in two months.  Reading Obama’s books, this introspectiveness—which is difficult to fake, and which probably doesn’t help him with most voters anyway—struck me as his most endearing quality.  (Of course, Obama also possesses a finely-developed capacity to suppress his capacity for introspection.  If he didn’t, then he’d still be an obscure instructor at the University of Chicago rather than a rock-star messiah.)  I’ll freely confess to bias in this matter.  I’m sure part of the reason why I’ve never been able to identify with the Republican Right, the Chomskyan Far Left, or the Libertarian Outwards—besides my actual disagreement with those philosophies—has been the serene confidence of those philosophies’ major proponents.  Any worldview that isn’t wracked by self-doubt and confusion over its own identity is not a worldview for me.

Thirdly, seeing President Clinton in his stride always cheers me up a little.

Given the above, I’d like to propose the following question: what non-obvious things can nerds who are so inclined do to help the Democrats win in November?  I’m not talking about voting, donating money, licking envelopes, or standing on street corners “Baracking the vote”: the first two are easy and obvious, while the latter two are unsuitable for nerds.  The sorts of ideas I’m looking for are ones that (1) exploit nerds’ nerdiness, (2) go outside the normal channels of influence, (3) increase nerds’ effective voting power by several orders of magnitude, (4) are legal, (5) target critical swing states, and (6) can be done as a hobby.

Do such ideas exist?  Well, the prototype for such an idea is Nadertrading, which I was involved with in the 2000 election cycle (see here).  Before the main Nadertrading sites were shut down by Republican state attorneys-general (on doubtful legal grounds), we Nadertraders had convinced several hundred Nader supporters in Florida to commit to voting for Gore, in exchange for Gore supporters in safe states voting for Nader on their behalf.  Had Nadertrading been allowed to continue just a couple weeks longer, it might have prevented Bush from taking power and thereby changed the history of the world.  I’m looking for the Nadertrading of 2008, and I haven’t found it yet.

A few possibilities:

  • Nadertrading Redux. Ralph is running again, and it might be worthwhile to try to reduce his influence in swing states once more.  The trouble is that, after 2000, anyone who would still vote for Nader is likely beyond the reach of any outcomes-based consideration.
  • Lobbyists for McCain.  In 2004, I participated in a Billionaires for Bush march in NYC, and can testify that it was a blast.  It seems the 2008 analogue is Lobbyists for McCain.  Downsides: (1) this joke has been done before, and (2) it’s not clear to me that satire, even when amusing and well-executed, actually changes anyone’s mind about anything.
  • Publicize and correct voting machine flaws.  Researchers have demonstrated that a voting machine virus would be almost trivial to install and could go completely undetected by poll workers.  And while some might find such a scenario implausible, it does seem likely that more mundane voting machine problems—system crashes, dropped and lost votes, confusing interfaces, etc.—will determine the outcome this year, exactly as they did in 2000 and possibly in 2004.  These irregularities have, for whatever reasons, been far more likely to favor Republicans than Democrats.  To their credit, computer scientists have been at the forefront of studying and publicizing these voting machine flaws, and have even succeeded in improving election procedures in California.  The downsides?  Firstly, it’s probably already too late to do much before November; secondly, computer scientists have been screaming about these problems for years and yet depressingly little has changed in the swing states.
  • Build a database and/or statistical model for identifying “problem precincts”.  Wouldn’t it have been helpful if, before the 2000 election, prominent Democrats had known about Theresa LePore, and the possibility that her butterfly ballots flapping their wings in Florida would cause the destruction of New Orleans five years later?  Or if before the 2004 election, they’d known to concentrate their monitoring efforts on particular counties in Ohio?  (A side note: improving the Democrats’ ability to challenge results after the election is over strikes me as a complete waste of time.  Whoever the networks announce as the presumptive winner on election night, that’s who the winner is going to be.)  I don’t know how to predict 2008’s likely trouble zones, and even if I did, I don’t know what I would do about them.  But this still strikes me as the most promising of the four listed directions.

Quantum Computing Since Democritus Lecture 20: Cosmology and Complexity

August 21st, 2008

Come watch me attempt to explain the implications of a positive cosmological constant for computational complexity theory.  If this blog is about anything, it’s about me talking about subjects I don’t understand sufficiently well and thereby making a fool of myself.  But it’s also about experts taking the time to correct me.  The latter is the primary saving grace.

A fictional gap?

August 16th, 2008

Consider the following template, which (with small variations) might describe a third of the world’s movies and novels:

  1. Girl, the protagonist, is set to marry the well-off, educated Dependable Guy, who does something insufferable for a living such as working.  Girl’s parents strongly favor this union.
  2. Girl meets (or re-meets) Dashing Artist Guy, who steals her heart away.  In the closing scene, Girl and DAG walk happily into the sunset; Dependable Guy is not shown.

It’s a classic Darwinian tale of female mate selection, pitting the stable, nerdy provider-male against the self-confident lothario—and at least in fiction, the latter appears to win every single contest.  Insofar as teenagers are influenced by media at all, I’m guessing this sort of thing has a much larger impact on them than unfortunate public-service billboards.

(Has the popularity of the Girl-Picks-DAG story template been constant at least since Shakespeare, or did it increase with the sexual revolution?  Does it represent, in coded form, the triumph of genetic fitness over parenting ability as a selection criterion—despite what may have been the limited value of artist genes on the ancestral savanna?)

Typically the story’s writers stack the deck in Dashing Artist Guy’s favor by making Dependable Guy some combination of mean, old, lecherous, ugly, humorless, or vengeful, or by making Girl’s marriage to him a forced one.  (Think of Cal in Titanic or Lazar Wolf in Fiddler on the Roof.)  In the more interesting variants, Dependable Guy might have none of the negative qualities, and Girl might be shown agonizing over a genuine choice—but she still always goes for DAG in the end.

Interestingly, the writers invariably stack the deck further by portraying DAG as 100% committed to Girl—even though a realistic assessment might find that if DAG stole one heart, then he can and will steal plenty of others as well, and thus the notion of Girl and DAG living together happily ever after may simply represent audience members’ wish-fulfillment fantasy.  Indeed, skepticism about DAG’s long-term motives might be the reason Girl’s parents favor Dependable Guy.

So here’s my question for you: what examples can you come up with, in any of the world’s fiction, where Girl (the protagonist) chooses Dependable Guy—or at least wishes she did?

  • 2 bonus points if Dependable Guy is portrayed as a nerd (and remains so throughout the story)
  • 5 bonus points if he’s the one Girl’s parents are rooting for
  • 10 bonus points if Girl makes her decision by thinking about the likelihood DAG will stay with her, and concluding it’s not that great

Racking my highly-fallible memory for a few minutes, the closest example I could come up with—and it’s not that close—is Gone With The Wind.  Sure, there are comedies about a nerdy guy winning the girl of his dreams, but there the protagonist always seems to be the nerd rather than the girl, and by the end the audience discovers that he isn’t ‘really’ a nerd anyway.  Also, they’re comedies.

Quantum Computing Since Democritus Lecture 19: Time Travel

July 31st, 2008

A visitor from the year 2006, this time travel lecture appears before you now due to a blip in the space/time/procrastination continuum.  No grandfathers were harmed in the writing of it.  I’m looking backward to your comments.

(Alas, these forehead-bangingly obvious lines can now never be unwritten … or can they?)

The Pareto curve of freedom

July 29th, 2008

Inspired by the discussion of my fable, and in particular a comment by Ronald de Wolf, today I decided to do some amateur political science. Specifically, I created a scatterplot that places 156 of the world’s countries (those for which data were available) along two axes:

  1. Their “political freedom”, as rated by Freedom House’s 2008 Freedom in the World survey. This is a 0-to-100 scale, which includes 60 points for various civil liberties (such as freedom of speech and freedom of religion) and 40 points for various political rights (such as transparent elections). (Note that I used the raw scores, rather than the less informative 1-to-7 rankings.)
  2. Their “economic freedom”, as rated by the Heritage Foundation and Wall Street Journal’s 2008 Index of Economic Freedom. This is also a 0-to-100 scale, which scores countries on the sorts of things libertarians and laissez-faire economists love: free trade, deregulation, privatization, low taxes and tariffs, low or nonexistent minimum wage, etc. (A sketch of the plotting script follows this list.)
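As promised, here is roughly what such a throwaway script looks like: a minimal sketch, assuming the two indices have been saved to hypothetical files freedom_house_2008.csv and heritage_2008.csv, each with “country” and “score” columns.

    import pandas as pd
    import matplotlib.pyplot as plt

    # hypothetical filenames; each CSV has columns "country" and "score"
    political = pd.read_csv("freedom_house_2008.csv")
    economic = pd.read_csv("heritage_2008.csv")
    df = political.merge(economic, on="country",
                         suffixes=("_political", "_economic"))

    plt.scatter(df["score_political"], df["score_economic"], s=15)
    plt.xlabel("Political freedom (Freedom House 2008 raw score, 0-100)")
    plt.ylabel("Economic freedom (Heritage/WSJ 2008 index, 0-100)")
    plt.show()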

The motivation was simple. Among educated people, political freedom is universally acknowledged as both good and important, whereas economic freedom (as defined by Heritage and the Wall Street Journal) is not. Indeed, a huge fraction of the disagreement between liberals and conservatives—at least over economics—seems to boil down to a single question: Is economic freedom (again, as defined by Heritage/WSJ) the friend or enemy of political freedom?

On one side of this question, we have Milton Friedman:

Historical evidence speaks with a single voice on the relation between political freedom and a free market.  I know of no example in time or place of a society that has been marked by a large measure of political freedom that has not also used something comparable to a free market to organize the bulk of economic activity. (From Capitalism and Freedom, quoted by Wu and Davis)

On the other side we have anti-globalization activists like Naomi Klein (author of The Shock Doctrine), who say that “economic freedom” simply means the freedom of multinational corporations to exploit the public, and as such is incompatible with political freedom.  Klein argues that free-market economic policies almost never win democratically, and hence the ruling elites have had to force these policies on unwilling populations using strong-arm tactics of the World Bank and IMF; cynical exploitation of wars, hurricanes, and other disasters; and (when all else fails) state-sponsored torture and murder.

My modest goal was to use the available cross-country data to test these two hypotheses. But before we get to that, a few caveats.

Caveat #1: I know full well that the questions I’m talking about have already been studied in great detail by professional political scientists. Google Scholar turned up Lundström 2005 doing a correlational study between the Freedom House index and various components of the Economic Freedom of the World index (which is similar to the Heritage index), as well as Wu and Davis 1999, Wu and Davis 2005, Berggren 2003, Carbone 2007, and lots more. (Though see also Doucouliagos 2005 for evidence of publication bias in this area.) So why bother to reinvent the wheel?  A few answers:

  • This project was really just a way to procrastinate.
  • I like making charts.
  • My methods were somewhat different from those in the published literature. Rather than using the accepted methodology of the social sciences—which consists of reducing all questions to chi-squared significance tests—I felt free to use my own Doofus Methodology, which consists of staring at graphs and seeing if something pops out at me. After careful deliberation, I decided on the latter methodology for three reasons. First, ultimately I only care about correlations that are strong enough to be obvious to the naked eye.  Second, I might actually know something about some of the countries in question—they’re not just interchangeable data points—and given how informal this study was anyway, I saw no reason to jettison that knowledge.  Third, as we’ll see, when we’re asking about the best forms of government, doing regression analysis on all the countries that happen to exist today can be seriously misleading.  To put it bluntly, the majority of countries are so abysmal in terms of both economic freedom and political freedom, that trying to gain insight from them into a hypothesized tradeoff between the two freedoms is like studying a remedial class of second-graders to find out whether algebraic or geometric insight is more important for winning the Fields Medal.  It’s the outlier countries, the Singapores and Icelands, that should interest us at least as much as the pack.

Caveat #2: The problems with the Freedom House and Heritage surveys—and for that matter, any surveys that try to rank countries on some linear scale of “freedom”—are evident. Indeed, reading the survey methodologies, I found plenty of things to complain about, as I’m sure you would as well. Nevertheless, both surveys struck me as (1) having reasonably consistent methodologies, (2) being reasonably well-accepted by social scientists, and (3) giving results that agree pretty well with intuition, for most of the countries I know something about.  So lacking a better alternative, I decided to go along with these indices.  Just to double-check, I also looked at the Freedom House index versus the Economic Freedom of the World index, and the plot was extremely similar to the one versus the Heritage index.

Caveat #3: A major limitation of my scatterplot is that it only looks at the world of 2008, and disregards a vast wealth of historical examples (Chile under Pinochet, the US under Reagan…).  Future research by amateur procrastinating bloggers should clearly take the available historical data into account as well.

Granting all of this, what can we potentially learn?

1. Political and economic freedom are correlated. Any doofus could have predicted this, and lo and behold, it’s apparent from even a glance at the data. Looking at the countries in question, it seems clear that part of this correlation is due to both freedoms being correlated with economic development, i.e. “having your national shit together.”  In a country like Denmark, you can criticize the government and start a business. In a country like North Korea, you can neither criticize the government nor start a business, at least without being shot.  The studies I linked to above claim some evidence that this obvious correlation has a causal component, as follows: by and large, economic freedom helps make countries richer, and being richer helps make them more politically free.  Assuming that claim is correct, score one for Milton Friedman.

2. A wide range of economic freedom levels is compatible with a “near-maximal” level of political freedom. Let’s look only at the countries on the far right of the scatterplot—those with “US-or-above” levels of political freedom (Australia, Austria, Bahamas, Barbados, Belgium, Canada, Chile, Cyprus, Czech Republic, Denmark, Estonia, Finland, Germany, Iceland, Ireland, Luxembourg, Malta, Netherlands, New Zealand, Norway, Portugal, Spain, Sweden, Switzerland, UK, US, and Uruguay). Here the correlation between economic and political freedom seems to disappear entirely, or even become slightly negative. The economic freedom scores of these countries range from 64.3 to 82.4, which is almost half of the total spread across all countries on earth (excepting a few dictatorships like North Korea, Cuba, and Zimbabwe). More to the point, this list includes countries commonly regarded as “socialist” in contemporary political debate (like the Scandinavian countries), and countries regarded as “capitalist” (like Australia, Chile, and the US).  Thus, the idea that countries already enjoying a high level of political freedom would increase their political freedom even more by lowering taxes, privatizing industries, and so on does not seem to be borne out by this dataset.

3. There might be a “Pareto curve of freedom”: that is, a basic tension between economic and political freedom that prevents them from being maximized simultaneously. I’ll admit that the evidence on this point is inconclusive.  Firstly, there aren’t enough data points; secondly, the lack of any example of a country maximizing both freedoms is obviously not an impossibility proof.  A true believer in Ayn Rand’s utopia, like a true believer in Marxism, could always disregard any empirical finding by saying that the right experiment has never been tried yet, and would self-evidently succeed if it were.

However, if we do construct the “Pareto curve of freedom” for the Freedom House/Heritage data, what we find is this:

  • Iceland, with economic freedom score of 76.5 and political freedom score of 100
  • Canada, with economic freedom score of 80.2 and political freedom score of 99
  • Ireland, with economic freedom score of 82.4 and political freedom score of 97
  • Singapore, with economic freedom score of 87.4 and political freedom score of 49

(The US is conspicuously not on the Pareto curve, though wounded patriots can console themselves that it’s the only country of anywhere near its population size that comes close.)

Note that Hong Kong is not in this dataset, since as part of China, it isn’t ranked separately by Freedom House. However, Heritage gives Hong Kong an economic freedom score of 90.3, which is the highest in the world (Singapore is #2). The political freedom score for China itself is a dismal 18. So, if we assigned Hong Kong the point (18,90.3), that would be a fifth point on the Pareto curve.
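In code, the frontier computation is a few lines. Here is a minimal sketch using the scores quoted above, plus one fictional, dominated country (“Freedonia”, made up purely for illustration) to show the filtering:

    def pareto_frontier(points):
        # keep each country not weakly dominated on both axes by another
        return {c: p for c, p in points.items()
                if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                           for q in points.values())}

    scores = {  # (economic freedom, political freedom), as quoted above
        "Iceland":   (76.5, 100),
        "Canada":    (80.2,  99),
        "Ireland":   (82.4,  97),
        "Singapore": (87.4,  49),
        "Hong Kong": (90.3,  18),  # Heritage score, China's political score
        "Freedonia": (75.0,  95),  # fictional country, dominated by Iceland
    }
    print(sorted(pareto_frontier(scores)))  # the five Pareto points above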

To check the robustness of the Pareto curve, I recalculated it using the Economic Freedom of the World index in place of the Heritage index.  The result was basically similar: clustered on the right we find Finland, Iceland, and Luxembourg maximizing political freedom, then Canada, then Switzerland, then New Zealand, and then, as before, Singapore way off on its own maximizing economic freedom.

To confirm the hypothesis of a tradeoff between economic freedom and political freedom, what we’d need in the dataset are “more Singapores”—or better yet, some countries that interpolated between the Western democracies and Singapore.  Conversely, to disprove the tradeoff hypothesis, all it would take is a single country that dominated the rest of the world on both axes, with the political freedom of Scandinavia and the economic freedom of Singapore.  I find it interesting that no such country seems to exist, not even a small city-state or island.

Incidentally, the tradeoff idea is not necessarily rejected by libertarians.  Friedman himself stressed that “political freedom, once established, has a tendency to destroy economic freedom.”  To put it bluntly, if poor people can vote, one of the main things they vote for is to redistribute money to themselves.  There are then three possibilities: either redistribution takes place (and economic freedom as defined by Heritage and the Wall Street Journal goes down), or the poor majority is violently suppressed (and political freedom goes down), or the government is overthrown.  Amusingly, Friedman and Klein seem to be in complete agreement on this central point: it’s just that one of them laments it while the other relishes it.

In summary, I conjecture that the relationship between economic freedom and political freedom is similar to that between jogging and health.  In general, we expect people to be healthier the more they jog, with at least part of the relationship being causal. But it doesn’t follow that jogging 20 hours per day is healthier than jogging one hour; indeed the former might even be detrimental.

Of course, people could accept all this (even find it blindingly obvious), and still vehemently disagree about the quantitative aspect: exactly how far out is the Pareto curve?  How much jogging is too much?  As usual, it’s the complexity-theoretic questions that are the interesting ones.  The tragedy is that you never even get to those questions if you’re too hung up on computability.

Quantum Computing Since Democritus Lecture 18: Free Will

July 28th, 2008

If you don’t like this latest lecture, please don’t blame me: I had no choice! (Yeah, yeah, I know. You presumably have no choice in criticizing it either. But it can’t hurt to ask!)

Those of you who’ve been reading this blog since the dark days of 2005 might recognize some of the content from this post about Newcomb’s Paradox, and this one about the Free Will Theorem.

The Routerhead: a fable

July 25th, 2008

Inspired by: reading Naomi Klein’s The Shock Doctrine this week alongside Ludwig von Mises’s The Anti-Capitalistic Mentality.

Addendum (7/28): Here’s my review of The Shock Doctrine.  If you want to know what I thought of the book, you should probably just read the review and ignore the dumb fable that follows.


I tried unplugging the router and plugging it back in, messing around with my DHCP settings — everything I could think of.  Still no Internet.  Hours passed, then a day.  In desperation, I finally called the tech support number for my Internet service provider, Laissez-Faire Solutions.  After putting me on hold for an hour with Brahms and Beethoven, “Ayn” finally picked up the phone.

“I don’t know what to tell you,” she said curtly, after I’d explained the situation. “Your connection ought to be working perfectly.”

“But it isn’t.”

“It ought to be.”

“But it isn’t!  Look, isn’t it possible that there’s some failure on your end?”

“You don’t understand, sir. There can be no such thing as a failure on our end. If a failure exists, then it must by definition be on your end and your end alone. What is provided to your home qua home is Internet access qua Internet access. It follows, then, as surely as A is A, that either your router is not configured properly, or your cable is disconnected, or in some other way your own stupidity or incompetence has prevented you from getting online, a failure you now seek absurdly to blame on the Internet itself.”

“I’m not blaming anything on the Internet.  I’m blaming it on you, my ISP.”

“Let me ask you something,” said Ayn.  “Did anyone hold you at gunpoint, or otherwise coerce you to sign up for Internet service with us?”

“Well, I guess not…”

“Then what exactly is your complaint?”

“That the service you agreed to provide isn’t working.”

“As I explained previously, it is working, by definition.  If for some fanciful reason you think otherwise, obviously you have the freedom of switching providers.”

“But all the others suck as much as you do.”

“That is not possible.  Were it the case that every Internet provider sucked, a provider that didn’t suck would have arisen and driven all the others out of business.  The market abhors a vacuum.”

“Yeah, I’ve been waiting more than a decade for this particular vacuum to be filled.  Until it happens, what else would you suggest I do?”

“Did you try going to Google again?”

“I’m still getting a ‘Page Not Found’ error.”

Frustrated, I decided to call the Tech-support Cooperative of the People’s International Proletariat (TCPIP).  Karl picked up the phone.  As I related my conversation with Ayn, Karl doubled over with laughter. “You mean you actually believe they want it to work?”

“Who exactly is the ‘they’ we’re talking about?”

“All of them — the service providers, the government, even the academics who designed the Internet in the first place.  We’ve amassed mountains of evidence that they’re all conspiring to keep the Internet broken, in order to force people like you to sign up for expensive, exploitative ‘solutions’ — solutions no one would ever agree to under normal circumstances!  Won’t you join us this weekend? We’re going to cruise around some rich neighborhoods and slice their fiber-optic cables.  Maybe the fatcats will finally get it, once their precious Internet connections work exactly as well as ours do…”

“To be honest, I really just wanted to check my email and blog comments.”

“This is not about the individual; it’s about the community!  There can be no truly reliable connections until the Internet as a whole has been demolished and rebuilt from scratch, until we’ve established a new social order on this planet where everyone is responsible for everyone else being able to get online…”

“Until the millennium comes, can you put me in touch with someone who specializes in fixing Internet connections now?”

“Traitors!  Don’t you see Internet access has to get much, much worse before it can get better?  That fixing your connection would just be a ruse to lull you into complacency and dim your justified anger?”

So what did I end up doing?  Well, until my connection starts working again, I’ve found an unsecured wireless network in my apartment building that I’m sometimes able to leech off of, as well as a nearby cafe that offers free wireless from 10 to 4 on weekdays.  And when all else fails I use my Blackberry, pecking out emails on the microscopic keyboard (though that connection, too, has become finicky lately).

I talked again this morning to Ayn and Karl, and they completely agreed with each other that I was beyond hope.  By focusing so obsessively on “fixing” a “problem,” they explained, I’d become distracted from the real goal: namely, comprehending a universal principle that explained why my Internet access wasn’t working, as well as every other question that I might ever want an answer to.  Maybe they’re right.  All I know is, at least for now I can usually get my email when I have to.

Quantum Computing Since Democritus Lecture 17: Fun With The Anthropic Principle

July 24th, 2008

Here it is. There was already a big anthropic debate in the Lecture 16 comments — spurred by a “homework exercise” at the end of that lecture — so I feel absolutely certain that there’s nothing more to argue about. On the off chance I’m wrong, though, you’re welcome to restart the debate; maybe you’ll even tempt me to join in eventually.

The past couple weeks, I was at Foo Camp in Sebastopol, CA, where I had the opportunity to meet some wealthy venture capitalists, and tell them all about quantum computing and why not to invest in it hoping for any short-term payoff other than interesting science. Then I went to Reed College in Portland, OR, to teach a weeklong course on “The Complexity of Boolean Functions” at MathCamp’2008. MathCamp is (as the name might suggest) a math camp for high school students. I myself attended it way back in 1996, where some guy named Karp gave a talk about P and NP that may have changed the course of my life.

Alas, neither camp is the reason I haven’t posted anything for two weeks; for that I can only blame my inherent procrastination and laziness, as well as my steadily-increasing, eminently-justified fear of saying something stupid or needlessly offensive (i.e., the same fear that leads wiser colleagues not to start blogs in the first place).

Quantum Computing Since Democritus Lecture 16: Interactive Proofs

July 8th, 2008

In which I try to give a non-rigorous taste of the interactive proofs revolution that rocked the complexity world in the 1990s, as well as its consequences for circuit lower bounds. I argue that these results matter because they offer a tiny glimpse of how one can exploit the structure of problems like 3SAT to prove lower bounds—something we know will eventually be needed for the P vs. NP question. If you got off the train before its latest tour through the Complexity Badlands, don’t worry: it will double back into Philosophers’ Valley (where everyone has an opinion and no one has a result) by Lecture 17 (“Fun With The Anthropic Principle”).

Arithmetic natural proofs theory is sought

June 30th, 2008

This post will be longer and more technical than most — but what can I say? Sometimes you need to get something technical off your chest. The topic is something my student, Andy Drucker, and I (along with several interested others) have been thinking about on and off for months, and if we’re not going to get a paper out of it, at least we’ll have this blog post.

Complexity theory could be defined as the field concerned with deep, nontrivial, mathematically-sophisticated justifications for failure. For example, we can’t solve NP-complete problems in polynomial time, but maybe that’s not so bad, since we conjecture there is no solution (P≠NP). Of course, we also can’t prove P≠NP — but maybe that’s not so bad either, since we have good explanations for why the problem is so hard, like relativization, natural proofs, and algebrization.

On the other hand, consider the problem of showing that the Permanent of an n×n matrix requires arithmetic circuits of more than polynomial size. (Given a field F—which we’ll assume for this post is finite—an arithmetic circuit over F is a circuit whose only allowed operations are addition, subtraction, and multiplication over F, and that doesn’t have direct access to the bit representations of the field elements.)

The problem of circuit lower bounds for the Permanent is currently at the frontier of complexity theory. As we now know, it’s intimately related both to derandomizing polynomial identity testing and to the τ problem of Blum, Cucker, Shub, and Smale. Alas, not only can we not prove that Perm∉AlgP/poly (which is the street name for this conjecture), we don’t have any good excuse for why we can’t prove it! Relativization and algebrization don’t seem to apply here, since no one would think of using diagonalization-based techniques on such a problem in the first place. So that leaves us with natural proofs.

The theory of natural proofs, which was developed by Razborov and Rudich in 1993 and for which they recently shared the Gödel Prize, started out as an attempt to explain why it’s so hard to prove NP⊄P/poly (i.e., that SAT doesn’t have polynomial-size circuits, which is a slight strengthening of P≠NP). They said: suppose the proof were like most of the circuit lower bound proofs that we actually know (as a canonical example, the proof that Parity is not in AC0). Then as a direct byproduct, the proof would yield an efficient algorithm A that took as input the truth table of a Boolean function f, and determined that either:

  1. f belongs to an extremely large class C of “random-looking” functions, which includes SAT but does not include any function computable by polynomial-size circuits, or
  2. f does not belong to C.

(The requirement that A run in time polynomial in the size of the truth table, N = 2^n, is called constructivity. The requirement that C be a large class of functions — say, at least a 2^(-poly(n)) fraction of functions — is called largeness.)
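To make “constructivity” and “largeness” concrete, here is a toy sketch (my own illustration, not anything from the Razborov-Rudich paper). The property it tests — “f agrees with Parity on at most 3/4 of inputs” — satisfies both formal conditions, though unlike a useful property it does nothing to exclude functions computable by small circuits:

    from itertools import product

    def natural_property_A(truth_table):
        # truth_table: list of N = 2^n bits, f's values on inputs in
        # lexicographic order.  Constructivity: we run in time linear in N.
        n = len(truth_table).bit_length() - 1
        agree = sum(f_x == sum(x) % 2
                    for x, f_x in zip(product([0, 1], repeat=n), truth_table))
        # Largeness: a random f agrees with Parity on ~N/2 inputs, so all
        # but an exponentially small fraction of functions pass this test.
        return agree <= 3 * len(truth_table) // 4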

Razborov and Rudich then pointed out that such a polynomial-time algorithm A could be used to distinguish truly random functions from pseudorandom functions with non-negligible bias. As follows from the work of Håstad-Impagliazzo-Levin-Luby and Goldreich-Goldwasser-Micali, one could thereby break one-way functions in subexponential time, and undermine almost all of modern cryptography! In other words, if cryptography is possible, then proofs with the property above are not possible. The irony — we can’t prove lower bounds because lower bounds very much like the ones we want to prove are true — is thick enough to spread on toast.

Now suppose we tried to use the same argument to explain why we can’t prove superpolynomial arithmetic circuit lower bounds for the Permanent, over some finite field F. In that case, a little thought reveals that what we’d need is an arithmetic pseudorandom function family over F. More concretely, we’d need a family of functions g_s : F^n → F, where s is a short random “seed”, such that:

  1. Every g_s is computable by a polynomial-size, constant-depth (or at most log-depth) arithmetic circuit, but
  2. No polynomial-time algorithm, given oracle access to g_s (for a randomly-chosen s), is able to distinguish g_s from a random low-degree polynomial over F with non-negligible bias.

It’s important not to get so hung up on definitional details that you miss the substantive issue here. However, three comments on the definition seem in order.

Firstly, we restrict g_s to be computable by constant- or log-depth circuits, since that’s the regime we’re ultimately interested in (more about this later). The Permanent is a low-degree polynomial, and well-known depth reduction theorems say (roughly speaking) that any low-degree polynomial that’s computable by a small circuit is also computable by a small circuit with very small depth.

Secondly, we say that no polynomial-time algorithm should be able to distinguish g_s from a random low-degree polynomial, rather than a random function. The reason is clear: if g_s is itself a low-degree polynomial, then it can always be distinguished easily from a random function, just by picking a random line and doing polynomial interpolation! On the other hand, it’s reasonable to hope that within the space of low-degree polynomials, g_s looks random—and that’s all we need to draw a natural proofs conclusion. Note that the specific distribution over low-degree polynomials that we simulate doesn’t really matter: it could be (say) the uniform distribution over all degree-d polynomials for some fixed d, or the uniform distribution over polynomials in which no individual variable is raised to a higher power than d.
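Here is that random-line distinguisher as a sketch, under the assumptions that g has total degree at most d and that p > d+1 (so the sample points along the line stay distinct mod p):

    import random

    def lagrange_at(xs, ys, x, p):
        # interpolate the points (xs[i], ys[i]) over F_p; evaluate at x
        total = 0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            num, den = 1, 1
            for j, xj in enumerate(xs):
                if j != i:
                    num = num * (x - xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, p - 2, p)) % p
        return total

    def looks_degree_d(g, n, d, p, trials=20):
        # restricting a degree-<=d polynomial to the line t -> a*t + b
        # gives a univariate polynomial of degree <= d, so d+1 samples
        # must predict a (d+2)-nd; a random function flunks this test
        # on each trial with probability 1 - 1/p
        for _ in range(trials):
            a = [random.randrange(p) for _ in range(n)]
            b = [random.randrange(p) for _ in range(n)]
            ts = list(range(d + 2))
            ys = [g(tuple((ai * t + bi) % p for ai, bi in zip(a, b)))
                  for t in ts]
            if lagrange_at(ts[:-1], ys[:-1], ts[-1], p) != ys[-1]:
                return False
        return True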

Thirdly, to get a close analogy with the original Razborov-Rudich theory, we stipulated that no ordinary (Boolean) polynomial-time algorithm should be able to distinguish g_s from a random low-degree polynomial. However, this is not essential. If we merely knew (for example) that no polynomial-size arithmetic circuit could distinguish g_s from a random low-degree polynomial, then we’d get the weaker but still interesting conclusion that any superpolynomial arithmetic circuit lower bound for the Permanent would have to be “arithmetically non-naturalizing”: that is, it would have to exploit some property of the Permanent that violates either largeness or else “arithmetic constructivity.” There’s a smooth tradeoff here, between the complexity of the distinguishing algorithm and the strength of the natural proofs conclusion that you get.

There’s no question that, if we had an arithmetic pseudorandom function family as above, it would tell us something useful about arithmetic circuit lower bounds. For we do have deep and nontrivial arithmetic circuit lower bounds — for example, those of Nisan and Wigderson (see also here), Razborov and Grigoriev, Grigoriev and Karpinski, Shpilka and Wigderson, Raz (see also here), Raz, Shpilka, and Yehudayoff, Raz and Yehudayoff, and Mignon and Ressayre. And as far as I can tell, all of these lower bounds do in fact naturalize in the sense above. (Indeed, they should even naturalize in the strong sense that there are quasipolynomial-size arithmetic circuits for the relevant properties.) Concretely, most of these techniques involve looking at the truth table (or rather, the “value table”) of the function g : F^n → F to be lower-bounded, constructing so-called partial-derivatives matrices from that truth table, and then lower-bounding the ranks of those matrices. But these operations—in particular, computing the rank—are all polynomial-time (or quasipolynomial-time for arithmetic circuits). Thus, if we could construct arithmetic pseudorandom functions, we could use them to argue that no techniques similar to the ones we know will work to prove superpolynomial arithmetic circuit lower bounds for the Permanent.
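As a cartoon of the rank-based template (not the actual partial-derivatives matrices from those papers, which are defined more cleverly): split the variables into two halves, arrange the value table of g : F_p^k × F_p^k → F_p into a p^k-by-p^k matrix, and compute its rank over F_p by Gaussian elimination, in time polynomial in the size of the table:

    from itertools import product

    def rank_mod_p(M, p):
        # rank of an integer matrix over F_p, via Gaussian elimination
        M = [[v % p for v in row] for row in M]
        rank = 0
        for c in range(len(M[0])):
            piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
            if piv is None:
                continue
            M[rank], M[piv] = M[piv], M[rank]
            inv = pow(M[rank][c], p - 2, p)
            M[rank] = [v * inv % p for v in M[rank]]
            for r in range(len(M)):
                if r != rank and M[r][c]:
                    f = M[r][c]
                    M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
            rank += 1
        return rank

    def bipartition_rank(g, k, p):
        # value-table matrix M[x][y] = g(x concatenated with y); low rank
        # is the sort of structure these lower-bound arguments exploit
        pts = list(product(range(p), repeat=k))
        return rank_mod_p([[g(x + y) for y in pts] for x in pts], p)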

So the problem is “merely” one of constructing these goddamned arithmetic pseudorandom functions. Not surprisingly, it’s easy to construct arithmetic function families that seem pseudorandom (concrete example coming later), but the game we’re playing is that you need to be able to base the pseudorandomness of your PRF on some ‘accepted’ or ‘established’ computational intractability assumption. And here, alas, the current toolbox of complexity theory simply doesn’t seem up for the job.

To be sure, we have pseudorandom function families that are computable by constant-depth Boolean threshold circuits — most famously, those of Naor and Reingold, which are pseudorandom assuming that factoring Blum integers or the decisional Diffie-Hellman problem are hard. (Both assumptions, incidentally, are false in the quantum world, but that’s irrelevant for natural proofs purposes, since the proof techniques that we know how to think about yield polynomial-time classical algorithms.) However, the Naor-Reingold construction is based on modular exponentiation, and doing modular exponentiation in constant depth crucially requires using the bit representation of the input numbers. So this is not something that’s going to work in the arithmetic circuit world.

At the moment, it seems the closest available result to what’s needed is that of Klivans and Sherstov in computational learning theory. These authors show (among other things) that if the n^1.5-approximate shortest vector problem is hard for quantum computers, then learning depth-3 arithmetic circuits from random examples is intractable for classical computers. (Here quantum computing actually is relevant—since by using techniques of Regev, it’s possible to use a quantum hardness assumption to get a classical hardness consequence!)

This result seems like exactly what we need—so then what’s the problem? Why aren’t we done? Well, it’s that business about the random examples. If the learner is allowed to make correlated or adaptive queries to the arithmetic circuit’s truth table — as we need to assume it can, in the arithmetic natural proofs setting — then we don’t currently have any hardness result. Furthermore, there seems to me to be a basic difficulty in extending Klivans-Sherstov to the case of adaptive queries (though Klivans himself seemed more optimistic). In particular, there’s a nice idea due to Angluin and Kharitonov, which yields a generic way (using digital signatures) for converting hardness-of-learning results against nonadaptive queries to hardness-of-learning results against adaptive queries. But interestingly, the Angluin-Kharitonov reduction depends essentially on our being in the Boolean world, and seems to break down completely in the arithmetic circuit world.

So, is this all Andy and I can say—that we tried to create an arithmetic natural proofs theory, but that ultimately, our attempt to find a justification of failure to find a justification of failure was itself a failure? Well, not quite. I’d like to end this post with one theorem, and one concrete conjecture.

The theorem is that, if we don’t care about the depth of the arithmetic circuits (or, more-or-less equivalently, the degree of the polynomials that they compute), then we can create arithmetic pseudorandom functions over finite fields, and hence the arithmetic natural proofs theory that we wanted. Furthermore, the only assumption we need for this is that pseudorandom functions exist in the ordinary Boolean world — about the weakest assumption one could possibly hope for!

This theorem might seem surprising, since after all, we don’t believe that there’s any general way to take a polynomial-size Boolean circuit C that operates on finite field elements x_1,…,x_n represented in binary, and simulate C by a polynomial-size arithmetic circuit that uses only addition, subtraction, and multiplication, and not any bit operations. (The best such simulation, due to Boneh and Lipton, is based on elliptic curves and takes moderately exponential time.) Nevertheless, Andy and I are able to show that for pseudorandomness purposes, unbounded-depth Boolean circuits and unbounded-depth arithmetic circuits are essentially equivalent.

To prove the theorem: let the input to our arithmetic circuit C be elements x_1,…,x_n of some finite field F_p (I’ll assume for simplicity that p is prime; you should think of p as roughly exponential in n). Then what C will do is first compute various affine functions of the input:

y_1 = a_0 + a_1x_1 + … + a_nx_n,  y_2 = b_0 + b_1x_1 + … + b_nx_n,  etc.,

as many such functions as are needed, for coefficients a_i, b_i, etc. that are chosen at random and then “hardwired” into the circuit. C will then raise each y_i to the (p-1)/2 power, using repeated squaring. Note that in a finite field F_p, raising y≠0 to the (p-1)/2 power yields either +1 or -1, depending on whether or not y is a quadratic residue. Let z_i = (y_i+1)/2. Then we now have a collection of 0/1 “bits.” Using these bits as our input, we can now compute whatever Boolean pseudorandom function we like, as follows: NOT(x) corresponds to the polynomial 1-x, AND(x,y) corresponds to xy, and OR(x,y) corresponds to 1-(1-x)(1-y). The result of this will be a collection of pseudorandom output bits, call them w_1,…,w_m. The final step is to convert back into a pseudorandom finite field element, which we can do as follows:

Output = w_1 + 2w_2 + 4w_3 + 8w_4 + … + 2^(m-1)w_m.

This will be pseudorandom assuming (as we can) that 2^m is much larger than p.
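Here is the whole pipeline as a Python sketch (my own illustration of the construction above). The seed fixes the “hardwired” affine coefficients, and SHA-256 stands in for the Boolean pseudorandom function, which in the real argument would itself be unrolled into +, -, × gates via the 1-x / xy / 1-(1-x)(1-y) translation. Assumes m ≤ 256, since SHA-256 only supplies 256 output bits:

    import hashlib
    import random

    def arithmetic_prf(xs, seed, p, m):
        rng = random.Random(seed)  # "hardwired" random affine coefficients
        bits = []
        for _ in range(m):
            a = [rng.randrange(p) for _ in range(len(xs) + 1)]
            y = (a[0] + sum(ai * xi for ai, xi in zip(a[1:], xs))) % p
            # y^((p-1)/2) is +1 or -1 (or 0), by quadratic residuosity;
            # z = (y^((p-1)/2) + 1)/2 turns it into a 0/1 "bit"
            bits.append(1 if pow(y, (p - 1) // 2, p) == 1 else 0)
        # Boolean PRF on the bits (keyed SHA-256 as a stand-in) ...
        digest = hashlib.sha256(repr((seed, bits)).encode()).digest()
        w = [(digest[i // 8] >> (i % 8)) & 1 for i in range(m)]
        # ... then recombine: Output = w_1 + 2w_2 + ... + 2^(m-1)w_m mod p
        return sum(wi << i for i, wi in enumerate(w)) % p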

But why does this construction work? That is, assuming the Boolean circuit was pseudorandom, why is the arithmetic circuit simulating it also pseudorandom? Well, this is a consequence of two basic facts:

  1. Affine combinations constitute a family of pairwise-independent hash functions. That is, for every pair (x_1,…,x_n)≠(y_1,…,y_n), the probability over a_0,…,a_n that a_0+a_1x_1+…+a_nx_n = a_0+a_1y_1+…+a_ny_n is only 1/p. Furthermore, the pairwise independence can be easily seen to be preserved under raising various affine combinations to the (p-1)/2 power.
  2. If we draw f from a family of pseudorandom functions, and draw h from a family of pairwise-independent hash functions, then f(h(x)) will again be a pseudorandom function. Intuitively this is “obvious”: after all, the only way to distinguish f(h(x)) from random without distinguishing f from random would be to find two inputs x,y such that h(x)=h(y), but since h is pairwise-independent and since the outputs f(h(x)) aren’t going to help, finding such a collision should take exponential time. A formal security argument can be found (e.g.) in this paper by Bellare, Canetti, and Krawczyk.

Now for the conjecture. I promised earlier that I’d give you an explicit candidate for a (low-degree) arithmetic pseudorandom function, so here it is. Given inputs x_1,…,x_n ∈ F_p, let m be polynomially related to n, and let L_1(x_1,…,x_n),…,L_{m^2}(x_1,…,x_n) be affine functions of x_1,…,x_n, with the coefficients chosen at random and then “hardwired” into our circuit, as before. Arrange L_1(x_1,…,x_n),…,L_{m^2}(x_1,…,x_n) into an m×m matrix, and take the determinant of that matrix as your output. That’s it.

(The motivation for this construction is Valiant’s result from the 1970s, that determinant is universal under projections. That might suggest, though of course it doesn’t prove, that breaking this function should be as hard as breaking any other arithmetic pseudorandom function.)
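For concreteness, here is the candidate as a sketch, with the affine forms “hardwired” via a seeded generator and the determinant computed by Gaussian elimination over F_p:

    import random

    def det_mod_p(M, p):
        # determinant over F_p, by Gaussian elimination
        M = [[v % p for v in row] for row in M]
        m, det = len(M), 1
        for c in range(m):
            piv = next((r for r in range(c, m) if M[r][c]), None)
            if piv is None:
                return 0
            if piv != c:
                M[c], M[piv] = M[piv], M[c]
                det = -det % p
            det = det * M[c][c] % p
            inv = pow(M[c][c], p - 2, p)
            for r in range(c + 1, m):
                f = M[r][c] * inv % p
                if f:
                    M[r] = [(a - f * b) % p for a, b in zip(M[r], M[c])]
        return det

    def candidate_prf(xs, seed, p, m):
        # m^2 hardwired random affine forms L_k(x) = c_0 + c_1x_1 + ... + c_nx_n
        rng = random.Random(seed)
        vals = [(c[0] + sum(ci * xi for ci, xi in zip(c[1:], xs))) % p
                for c in ([rng.randrange(p) for _ in range(len(xs) + 1)]
                          for _ in range(m * m))]
        # arrange into an m-by-m matrix; the output is its determinant
        return det_mod_p([vals[i*m:(i+1)*m] for i in range(m)], p)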

Certainly the output of the above generator will be computable by an arithmetic circuit of size m^(O(log m))·n. On the other hand, I conjecture that if you don’t know L_1,…,L_{m^2}, and are polynomial-time bounded, then the output of the generator will be indistinguishable to you from that of a random polynomial of degree m. I’m willing to offer $50 to anyone who can prove or disprove this conjecture—or for that matter, who can prove or disprove the more basic conjecture that there exists a low-degree arithmetic pseudorandom function family! (Here, of course, “prove” means “prove modulo some accepted hardness assumption,” while “disprove” means “disprove.”)

But please be quick about it! If we don’t hurry up and find a formal barrier to proving superpolynomial lower bounds for the Permanent, someone might actually roll up their sleeves and prove one—and we certainly don’t want that!