
The Election Night Time Warp

November 3, 2020


Has Election Night—not just the election—been modeled adequately?

The Conversation source

Kurt Gödel famously found solutions to Albert Einstein’s equations of general relativity that allow trajectories to loop through time.

Today, early on Election Day and a usual time for our posts on Gödel, I talk about the trajectory of counting votes after tomorrow’s polls close and time-warp effects that everyone watching the returns will see.

I am picking up the vein of ethical algorithms in yesterday’s post, but with a different take about ethical modeling in novel situations where reliable training data is unavailable. I believe there is a responsibility for running simulations not just of tomorrow’s election, such as FiveThirtyEight conducts, but also of tomorrow’s count as it may unfold hour by hour and stretch over many following days.

Gödel also famously believed he had found a logical flaw in the U.S. Constitution that allowed a mechanism for legally instituting a dictatorship, but his argument was never recorded. We discussed this at length in 2013, including noting a 2012 paper by Enrique Guerra-Pujol of the University of Central Florida College of Business, who is a frequent reader of this blog. Guerra-Pujol offers a detailed construction of a mechanism based on self-reference applied to the short Article V on amending the Constitution. This topic has been addressed several times since, but what stands out to us is a 2019 paper by Valeria Zahoransky and Christoph Benzmüller. This paper attempts to find a proof of Guerra-Pujol’s mechanism by automated logical inference using the Isabelle/HOL proof assistant. They found a model that satisfies the argument.

Election Evolution Over Time

My post on Election Day 2016 contrasted the relatively stable time evolution of polls in 2012 with the gyrations of 2016. That post began by defending Nate Silver of FiveThirtyEight and his 30% likelihood for Donald Trump against those who had Hillary Clinton over 90%. At the time of our 2012 post on Barack Obama versus Mitt Romney, I had thought Silver’s error bars were too wide, but in 2016 they looked right on the basis of last-minute decision making by impulse that my chess model also registers.

This year, the polls have been even steadier than in 2012, and undecided voters are found to be scarce. Of course, the pandemic has been the greatest factor in the poll numbers, but my first point is that election forecasting takes only the numbers as primary input. The algorithms do not have an input that could take values Covid 19, Covid 23, Covid 17… In a normal election, the polling numbers would seem to point to an easier call than in 2012. Silver has, however, expressed caution in explaining why his odds stay just short of 90% for Joe Biden as I write at midnight. My point of further disquiet begins with a simple fact:

The models have been trained on data from past elections.

A major difference on our flight path to Election Day is the upsurge in early voting and turnout overall. Texas reports that it has already registered more early votes than total votes cast in 2016. We are not saying this difference has been overlooked—of course, models are being adjusted for it every hour. What we are doubting is the existence of a basis for making those adjustments with high confidence. Before we discuss the major issue of the flight path after the polls close, let me insert an analogy from my current chess work.

My Own Ethical Interpolation

My statistical chess model is trained on millions of moves from games at the hours-long pace of standard in-person chess. The pandemic has led to chess moving online, where fast time controls are the norm. For instance, the hallowed US Championships were just held in a format of three games per day at a pace of 25 minutes for the whole game plus 5 seconds for each move played, which equates to G/30 (game in 30 minutes) under a simplification, also used here, of reckoning a nominal 60 moves so that the increments add 5 minutes. The Online Olympiad held in August by the International Chess Federation (FIDE) gave only 15 minutes plus 5 seconds per move, equating to G/20. I have been asked for input on games at paces all the way down to 1-minute “Bullet” chess.

My solid estimates of how less time affects skill come from the annual FIDE World Rapid and Blitz Championships, which are contested at equivalents of G/25 and G/5, respectively, by hundreds of elite male and female players. For time controls in between, I face twin problems of scant data from in-person chess—basically none for players below master level—and online chess having evident contamination from cheating, as I discussed in a post last June. Hence I interpolate to determine model settings for other time controls.
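For a concrete sense of what such interpolation looks like, here is a minimal sketch. The anchor values, the two-parameter inverse-polynomial form s(t) = a + b/t, and the least-squares fit are all illustrative assumptions, not my model's actual curves or parameters.

```python
# A minimal sketch of interpolating a skill-scaling parameter between
# time-control anchors.  The anchor values and the two-parameter form
# s(t) = a + b/t are illustrative assumptions, not the model's actual curves.

anchors = {120: 1.00, 25: 0.82, 5: 0.55}   # equivalent minutes -> made-up scaling

xs = [1.0 / t for t in anchors]            # regress on x = 1/t (inverse polynomial)
ys = [anchors[t] for t in anchors]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def scaling(t_minutes):
    """Interpolated scaling at an equivalent time control of t minutes."""
    return a + b / t_minutes

for t in (30, 20, 15, 3):                  # G/30, G/20, G/15, G/3
    print(f"G/{t}: estimated scaling {scaling(t):.3f}")
```

This is only the skeleton; the next paragraph describes the cross-checks and field tests that give the real curve its support.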

This diagram shows internal evidence supporting the orange curve obtained via Aitken extrapolation, in that inverse polynomial curves based on other reasoning converge to it. The curve has been supported by recent field tests, including a restricted invitational tournament recently run by Chess.com at G/3 and a large junior league in Britain at G/15 equivalent. Still, the ethical status is:

  1. The pandemic has injected me into online fast chess ahead of the year-long timeframe that would be needed to clean the available large data and rebuild my model directly for the full gamut of different time controls.

  2. So it is ethical for me to give my best estimates, as supported by field tests and cross-checks in my model, but with the caveat of their being “an extrapolation of an interpolation.”

  3. But in principle there are more-reliable ways to do the modeling.

My second point is that this year’s election models are in similar boats: The pandemic forces their cantilevered use in new situations. As with chess, the data needed for direct training may not be available. As we said in yesterday’s post, this year’s election task may be “Ethics-Hard.” Now we come to my third and main point about responsibility.

A Night of Waves and Blue/Red Shifts

The new dimension of time is the order in which all the following categories of votes will be counted, in the states that variously allow them:

  1. Early votes cast in person, as my wife and I did on the first possible day in New York.

  2. Early votes sent by mail.

  3. In-person votes on Election Day.

  4. Regular votes by mail in states that vote that way.

  5. Absentee ballots, which are the only non-Tuesday option in some states.

This is approximately the order in which votes will be counted, again with state-by-state differences, which may especially lead to votes in category 2 being counted later. The issue is that the Democratic and Republican shares are expected to vary greatly across the categories, enough to cause large shifts in the perceived leader over real time.

Geoffrey Skelley of FiveThirtyEight has an article showing how a 5-point win for Biden in Pennsylvania might still present as a 16-point lead for President Trump on Election Night, when all in-person votes are counted but only some of the votes by mail. Here are the article’s key graphics, the left one showing actual proportions from Pennsylvania’s June primary.

Composite of two figures from FiveThirtyEight Skelley article

Other states that allow early tallying of early votes may show an initial Biden lead before a red-shift toward Trump on Election Night, with more of Biden’s vote share remaining to be counted. Still others are less clear. FiveThirtyEight on Saturday posted a useful state-by-state guide.
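To make the time warp concrete, here is a toy calculation in the spirit of Skelley’s graphic. All of the batch sizes, partisan shares, and the counting order are invented for illustration; they are not Pennsylvania’s actual figures.

```python
# A toy re-creation of the time warp with invented numbers (not Skelley's
# actual Pennsylvania figures): count ballots category by category and
# watch the reported two-party margin swing.

batches = [                                # (category, ballots, Dem two-party share)
    ("Election Day in-person votes", 3_000_000, 0.43),
    ("early in-person votes",          500_000, 0.55),
    ("mail ballots, first wave",     1_000_000, 0.65),
    ("mail ballots, later wave",     1_500_000, 0.65),
]

dem = rep = 0.0
for name, total, dem_share in batches:
    dem += total * dem_share
    rep += total * (1 - dem_share)
    margin = 100 * (dem - rep) / (dem + rep)
    leader = "D" if margin >= 0 else "R"
    print(f"after {name:29s}: {leader} +{abs(margin):.1f}")
```

With these made-up inputs, a final six-point Biden margin appears as a double-digit Trump lead through most of the counting, which is the kind of trajectory worth showing viewers before they see it live.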

Why Is It Important to Model the Count?

Awareness of the time-shifts is important not only for perception but also for the timing of legal challenges that are expected to arise. For instance, there is a continued specter of invalidating 127,000 early votes already cast via a novel drive-up system in Harris County, Texas; although a district judge rejected the challenge today, it has been appealed higher. There is an expectation of legal battles nationwide over procedures that have been altered by measures to cope with the pandemic.

Public perception, however, is the most immediate concern. President Trump stated in a flagged Tweet that we “must have final total on November 3rd” and followed up by saying, “…instead of counting ballots for two weeks, which is totally inappropriate, and I don’t believe that’s by our laws.” Justice Brett Kavanaugh wrote in a formal opinion that states “want to avoid the chaos and suspicions of impropriety that can ensue if thousands of absentee ballots flow in after Election Day and potentially flip the results of an election.” Note his phrase “flip the results”—a sure sign that perception is reality.

Of course many outlets besides FiveThirtyEight are aware of the new election-reporting physics and have published their own guides. They are adjusting their election-night projection models accordingly. Our main question goes further in terms of responsibility:

Has anyone been running simulations of how Election Night vote-counting may unfold?

We believe such simulations are just as important as the ones they have run of a timeless election. Showing them is not only important to inform the many who will be watching; it would also be a vaccine against pressure that exploits unexpected perception.

Yet it must be acknowledged that there is neither hard data nor sure knowledge of county-level vote-tallying policies and schedules on which to train such simulations. Nor may there be as many cross-checks as in my chess interpolation situation.
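Even so, one could sample over the unknowns rather than fix them. The sketch below treats the share of mail ballots counted by midnight as an unknown drawn from an assumed range; every number and distribution in it is a placeholder, not data.

```python
# A sketch of Monte Carlo over the unknowns: how big might the apparent
# Election Night lead be if we do not know how much of the mail vote will
# have been tallied by midnight?  Every number and range here is invented.
import random

random.seed(2020)
IN_PERSON, IN_PERSON_D = 4_500_000, 0.43    # hypothetical composition
MAIL, MAIL_D = 2_500_000, 0.65

final_margin = 100 * ((IN_PERSON * IN_PERSON_D + MAIL * MAIL_D) -
                      (IN_PERSON * (1 - IN_PERSON_D) + MAIL * (1 - MAIL_D))) / (IN_PERSON + MAIL)

night_margins = []
for _ in range(10_000):
    counted = random.uniform(0.2, 0.8)      # unknown share of mail tallied by midnight
    dem = IN_PERSON * IN_PERSON_D + MAIL * counted * MAIL_D
    rep = IN_PERSON * (1 - IN_PERSON_D) + MAIL * counted * (1 - MAIL_D)
    night_margins.append(100 * (dem - rep) / (dem + rep))

night_margins.sort()
print(f"final margin: D +{final_margin:.1f}")
print(f"median midnight margin: R +{-night_margins[len(night_margins) // 2]:.1f}")
print(f"90% of runs between R +{-night_margins[500]:.1f} and R +{-night_margins[9500]:.1f}")
```

Under these invented assumptions, a race the Democrat ultimately wins by under two points shows a Republican lead of anywhere from about half a point to nearly ten points at midnight, which is precisely the spread of perceptions that deserves advance modeling.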

This may be another “Ethics-Hard” problem. But it is one already involved in adjusting the projection models that definitely are being deployed. Aside from the many novel and vital modeling problems from the pandemic, it may be the most important one of our near future. And we are thinking of this less than 48 hours—now less than 24 hours—before the first polls close.

Open Problems

How do you think Election Night results will unfold, apart from your estimate of the time-independent “ground truth” of the electorate’s intentions?

The Night of the Ethical Algorithm

November 2, 2020


Algorithms for the Election

Michael Kearns and Aaron Roth are the authors of the book The Ethical Algorithm: The Science of Socially Aware Algorithm Design. It has earned strong reviews, including this one in Nature—impressive.

Michael is a long-time friend who is a leader in machine learning, artificial intelligence, and much more. He also overlapped with Ken at Oxford while visiting Les Valiant there in the mid-1980s. He is at the University of Pennsylvania in computer science along with his co-author Roth. Cynthia Dwork and Roth wrote an earlier book on the related issue of Differential Privacy.

Today we will talk about making algorithms ethical.

Tuesday is the 2020 US national election for President, for Congress, and for state and local offices. Every four years we have a national election, and we cannot imagine a better motivation for making sure that algorithms are ethical.

The word “algorithm” appears 157 times in their book. Two words used hand-in-hand with it are “data” (132 times) and “model” (103 times), both spread through all of the book’s 232 pages. Models of electorates, trained on data from past elections, inform the algorithms used by news agencies to make election-night projections. These carry more responsibilities than election-eve forecasts. There have been infamous mistakes, most notably the premature calls of Florida both ways in the 2000 election.

We believe that Tuesday’s election in our novel pandemic situation requires attention to ethics from first principles. We will discuss why this is important, what it means to be ethical here, and how one can make an algorithm ethical.

The Issue

Algorithms have been around forever. Euclid devised his gcd algorithm in 300 BCE. In the first half of the last century, the central issue was how to define that an algorithm is effective. This led to showing that some problems are uncomputable, so that algorithms for them are impossible.

In the second half, the emphasis shifted to whether algorithms are efficient. This led to classifying problems as feasible or (contingently) hard. Although many algorithms for feasible problems have been improved in ways that redouble the effect of faster and cheaper hardware, the study of complexity classes such as {\mathsf{NP}} has given reasons why algorithms for hard problems may never be improvable.

The new territory, that of Kearns and Roth, is whether algorithms are ethical. Current ones that they and others have critiqued as unethical accompany models for the likes of mortgages, small-business loans, parole decisions, and college admissions. The training data for these models often bakes in past biases. Besides problems of racial and gender bias and concerns of societal values, the raw fact is that past biases cause the models to miss the mark for today’s applications. For algorithms with such direct application to society, ethical design is critical.

But this requirement is further-reaching than one might initially imagine, so that, as with computability and complexity, the factors can be ingrained in the problems themselves.

Consider a simple problem: We are given a collection of pairs of numbers {(x,y)}. We are to predict whether the pair has the property

\displaystyle  x + y > 0.

This is pretty easy if we can use both {x} and {y}. But imagine a world where {x} is allowed to be viewed but {y} is secret. Perhaps the law requires that we cannot use {y}—it is illegal. Now we might do as poorly as {50\%}. Suppose that the data consists of

\displaystyle  (1,-1), (1,1), (2,-2), (2,2) \dots

Then seeing only {x} gives no advantage, since each value of {x} occurs once with each label, while seeing both is perfect. Thus what in these simplified terms counts as an ethical algorithm is a poor predictor, whereas an unethical one is perfect.
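Here is the example spelled out as a quick check, extending the displayed data pattern one more step:

```python
# The example in code, with the pattern continued one more step.
data = [(1, -1), (1, 1), (2, -2), (2, 2), (3, -3), (3, 3)]
labels = [int(x + y > 0) for x, y in data]

full_view = [int(x + y > 0) for x, y in data]   # allowed to see both x and y
x_only = [1 for x, y in data]                   # best a predictor seeing only x can do:
                                                # each x value occurs with both labels,
                                                # so any rule based on x alone gets half right

def accuracy(preds):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

print("seeing x and y:", accuracy(full_view))   # 1.0
print("seeing x only :", accuracy(x_only))      # 0.5
```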

The blurb for the Kearns-Roth book says that they “…explain how we can better embed human principles into machine code—without halting the advance of data-driven scientific exploration.” While we agree their approach is vital, we suspect that as with complexity there will be indelibly ethically hard tasks. We wonder whether election modeling has already become one of them.

Ken and I have two separate takes on this. We will do the first and then the other in a second post.

Red/Blue Leakage, Bias, and Ethics

One question on everyone’s minds is whether we will see a repeat of the forecasting misses from 2016. Let us recall that our own Election Day 2016 post started by defending Nate Silver of FiveThirtyEight for giving Donald Trump as much as a 30% chance to defeat Hillary Clinton. He had been attacked by representatives of many news and opinion agencies whose models had Clinton well over 90%.

We wonder whether these models were affected by the kind of biases highlighted in the book by Kearns and Roth. We must say right away that we are neither alleging conscious biases nor questioning the desire for prediction accuracy. One issue in ethical modeling (for parole, loans, admissions) is the divergence between algorithm outcomes that are most predictive and those that are best for society. Here we agree that accurate prediction—and accurate projections as results come in after the polls close—is paramount. However, the algorithms used for the latter projections (which were not at fault in 2016 but have been wrong previously) may be even more subject to what we as computer scientists with a crypto background see as a “leakage” issue.

Here is the point. Ideally, models using polling data and algorithms reading the Election Night returns should read the numbers as if they did not have ‘R’ and ‘D’ attached to them. Their own workings should be invariant under transformations that interchange Joe Biden and Donald Trump, or whichever candidates are opposed in a local race. However, a crucial element in the projection models in particular is knowledge of voting geography. They must use data on the general voting preferences of regions where much of the vote is still outstanding. Thus they cannot avoid intimate knowledge of who is ‘R’ and who is ‘D.’ There is no double-blind or zero-knowledge approach to the subjects being projected.
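Here is a sketch of the symmetry test that a truly “double-blind” projector would pass. Both toy projectors and all numbers below are invented for illustration; no real projection model works this simply, but the second one shows how a party-keyed prior about the outstanding vote breaks the invariance.

```python
# A toy version of the symmetry test a "double-blind" projector would pass.
# Both projectors and all numbers are invented for illustration.

def naive_projector(returns):
    """Project final shares from the partial returns alone."""
    counted = sum(returns.values())
    return {party: votes / counted for party, votes in returns.items()}

def prior_projector(returns, outstanding_lean):
    """Blend partial returns with a party-keyed prior about uncounted regions."""
    counted = sum(returns.values())
    return {party: 0.5 * votes / counted + 0.5 * outstanding_lean[party]
            for party, votes in returns.items()}

def swap(d):
    return {"D": d["R"], "R": d["D"]}

returns = {"D": 400_000, "R": 600_000}
lean = {"D": 0.65, "R": 0.35}               # prior belief about the outstanding vote

for name, project in [("numbers only", naive_projector),
                      ("with geographic prior", lambda r: prior_projector(r, lean))]:
    out, out_swapped = project(returns), project(swap(returns))
    symmetric = abs(out["D"] - out_swapped["R"]) < 1e-9
    print(f"{name:21s} projector invariant under D/R swap: {symmetric}")
```

The first projector is useless as a projector but perfectly symmetric; the second is the more accurate one and is exactly the one that cannot be blinded.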

There is also the question of error bars. A main point of our 2016 post (and of Silver’s analysis) was the high uncertainty factor that could be read from how the Clinton-Trump race unfolded. Underestimating uncertainty causes overconfidence in models. This can result from “groupthink” of the kind we perceive in newsrooms of many of the same outlets that are doing the projections. The algorithms ought to be isolated from opinions of those in the organization, but again there is reason from the last election to wonder about leakage.

Unlike cases addressed by Kearns and Roth, we do not see a solution to suggest. As in our simple {x+y} example, prior knowledge of the data in full may be needed for prediction. This may just be an “Ethics-Hard” problem.

Open Problems

The word “election” does not appear in Kearns and Roth’s book. What further application of their standpoint to elections would you make?

Ken sees a larger question of ethical modeling decisions given the unprecedented circumstances of the current election. This comes not from spatial geography distorted by the pandemic but rather from the dimension of time injected by massive early voting and late counting of many mailed ballots. He will address this next.

A Vast and Tiny Breakthrough

October 26, 2020


Christofides bound beaten by an epsilon’s idea of epsilon

src1, src2, src3

Anna Karlin, Nathan Klein, and Shayan Oveis Gharan have made a big splash with the number

\displaystyle \frac{1}{1,000,000,000,000,000,000,000,000,000,000,000,000}.

No, that is not the amount of the US debt, or of the new relief bill. It is the fraction by which the hallowed 44-year-old upper bound of {1.5} on the approximation ratio of the metric Traveling Salesperson Problem has been improved. With the help of randomization, we hasten to add.

Today we discuss the larger meaning of their tiny breakthrough.

Can We Solve It?

October 23, 2020


It is a Friday

James Maynard is a number theorist. He attended Cambridge as an undergrad and then moved to do his grad work at Oxford at Balliol College. He is now a professor at Oxford. He is one of the world experts on prime density type theorems.

Today, since it is Friday, I thought we would discuss a timely idea of Maynard. Not an idea about time complexity or time in physics, but involving the use of time.

Vaccines are Not Developing

October 22, 2020


The search for a vaccine is not a development.

Edward Jenner was an English physician who created the first vaccine, one for smallpox.

In 1798 he used the weak cow-pox to fool our immune system into creating protection against the deadly small-pox. Jenner is said to have saved more lives than any other human.

Today there is an attempt to create a vaccine against our small-pox of the 21st century.

In his day small-pox killed 10% or more of populations. In our day there is a similar threat, and thus the immense interest in the development of a vaccine. However, there is a pervasive misunderstanding about vaccines for COVID-19. Read the New York Times or watch cable news—CNN, FOX, MSNBC—where “experts” explain how AstraZeneca, Johnson & Johnson, Novavax, and other drug companies are developing a vaccine. What “developing” means could potentially affect all of us, and a better understanding could save millions of lives.

They are not currently developing the vaccines; they are testing them. The point we want to emphasize is:

The development of a vaccine does not change the vaccine. The vaccine is the same at the start of its testing trials, and remains the same throughout.

The Oxford vaccine AZD1222 is the same today as it was months ago when it was created. The same is true for the other vaccines currently being tested around the world.

A vaccine is not developed in the usual sense. Drug companies can modify how the drug is made, how it is stored, how it is given, how many doses are needed, and so on. Drug companies cannot modify the vaccine without starting over—the vaccine must remain the same. Trials can lead to a vaccine being adopted, or they can cause it to be abandoned. In the latter case the drug company can try again, but with a different vaccine.

Not Development

Think of what development means elsewhere.

  • In programming an app: We build a version and try it out. We find bugs and fix them. We use version numbers. Note, there is no AZD1222 version 3.
  • In writing a book: We make a draft. We have people read the draft. We fix typos and inaccuracies. Our quantum book’s {2^{nd}} edition is on version 11.
  • In engineering a product: You get the idea.

Here is a sample explaining vaccine development:

  • Phase I: The vaccine is given to healthy volunteers to see whether it is safe and what dosage (quantity) is most effective.
  • Phase II: The vaccine is given to target groups that would be vaccinated to see what dosage (quantity) is most effective and how well the immune system responds to it.
  • Phase III: The vaccine is given to an even larger group, consisting of thousands of people, to see how well the vaccine works to prevent COVID-19. People who do receive the vaccine are then compared with people who did not receive the vaccine.

Note: there is no step that modifies the vaccine.

Consequences

There are several consequences of this insight about vaccines. For one, it makes sense to order millions of doses of a vaccine, even one that has not yet been proved to be safe and effective. For example,

The European Commission has placed its first advance order for a coronavirus vaccine, snapping up 300 million doses of AstraZeneca’s AZD1222 candidate developed by the University of Oxford, with an option on another 100 million.

Note that we would never order a large number of copies of a book before all the editing was done and the typos were fixed. That such advance orders are placed is a “proof” that the vaccine stays the same.

Actually, it may even make sense to begin taking the vaccine, especially for high-risk people. In the past, inventors of vaccines have often taken their own new vaccine even before they were sure it worked.

Open Problems

I am a computer scientist with no experience in vaccines. In 1954 I did help test the Jonas Salk polio vaccine. My help was in the form of supplying an arm that got a shot of the Salk polio vaccine; I was nine years old then. But I have a math view of vaccines—a viewpoint that sheds light on this misunderstanding.

Are Black Holes Necessary?

October 11, 2020

Our congratulations on the 2020 Nobel Prize in Physics

Composite crop of src1, src2

Roger Penrose, Reinhard Genzel, and Andrea Ghez have won the 2020 Nobel Prize in Physics. The prize is divided half to Penrose for theoretical work and half to Genzel and Ghez for finding a convincing and appreciably large practical example.

Today we congratulate the winners and give further musings on the nature of knowledge and the role of theory.

The physics Nobel has always had the rule that it cannot be for a theory alone, no matter how beautiful and how many mathematical discoveries follow from its development. Stephen Hawking’s theory of black-hole radiation is almost universally accepted, despite its association with paradox, yet it was said that only an empirical confirmation such as mini-black holes being discovered to explode in an accelerator core would have brought it a Nobel. The official citation to Sir Roger says that his prize is:

“for the discovery that black hole formation is a robust prediction of the general theory of relativity.”

What is a “robust” prediction? The word strikes us as having overtones of necessity. Necessary knowledge is the kind we deal with in mathematics. The citation to Genzel and Ghez stays on empirical grounds:

“for the discovery of a supermassive compact object at the centre of our galaxy.”

The “object” must be a black hole—given relativity and its observed gravitational effects, it cannot be otherwise. Among many possible witnesses for the reality of black holes—one being the evident origin of the gravitational waves whose detection brought the 2017 Nobel—the centers of galaxies are hefty examples. The combination of these citations opens several threads we’d like to discuss.

The Proof Horizon of a Black Hole

Dick and I are old enough to remember when black holes had the status of conjecture. One of my childhood astronomy books stated that the Cygnus X-1 X-ray source was the best known candidate for a black hole. In 1974, Hawking bet Kip Thorne that it was not a black hole. The bet lasted until 1990, when Hawking conceded. He wrote the following in his famous book, A Brief History of Time:

This was a form of insurance policy for me. I have done a lot of work on black holes, and it would all be wasted if it turned out that black holes do not exist. But in that case, I would have the consolation of winning my bet. … When we made the bet in 1975, we were 80% certain that Cygnus X-1 was a black hole. By now [1988], I would say that we are about 95% certain, but the bet has yet to be settled.

In the 1980s, I was a student and then postdoc in Penrose’s department, so I was imbued with the ambience of black holes and never had a thought about doubting their existence. I even once spent an hour with John Wheeler, who coined the term “black hole,” when Penrose delegated me to accompany Wheeler to Oxford’s train station for his return to London. But it seems from the record that the progression to regarding black holes as proven entities was as gradual as many argue the act of crossing a large black hole’s event horizon to be. Although the existence of a central black hole from data emanating from Sagittarius had been proposed at least as far back as 1971, the work by Ghez and then Genzel cited for their prize began in 1995. The official announcement for Riccardo Giacconi’s share of the 2002 physics Nobel stated:

“He also detected sources of X-rays that most astronomers now consider to contain black holes.”

This speaks to lingering doubt, at least about where black holes might be judged to exist, if not about their existence at all.

However their time of confirmation might be pinpointed, it is the past five years that have given by far the greatest flood of evidence, including the first visual image of a black hole last year. The fact of their presence in our universe is undeniable. But necessity is a separate matter, and with Penrose this goes back to 1964.

Relativity and Necessity

We have mentioned Kurt Gödel’s solution to the equations of general relativity (GR) in which time travel is possible. This does not mean that time travel must be possible, or that it is possible in our universe. A “solution” to GR is more like a model in logic: it may satisfy a theory’s axioms but have other properties that are contingent (unless the theory is categorical, meaning that all of its models are isomorphic). Gödel’s model has a negative value for Einstein’s cosmological constant; the 2011 physics Nobel went to the discovery that in our universe the constant has a tiny positive value. GR also allows solutions in which some particles (called tachyons) travel faster than light.

That GR has solutions allowing black holes had been known from its infancy in work by Karl Schwarzschild and Johannes Droste. There are also solutions without black holes; a universe with no mass is legal in GR in many ways besides the case of special relativity. Penrose took the opposite tack, of giving minimal conditions under which black holes are necessary. Following this article, we list them informally as follows:

  1. Sufficiently large concentrations of mass exerting gravity exist.
  2. Gravity always attracts, never repels.
  3. No physical effect can travel faster than light.
  4. Gravity determines how light bends and moves.
  5. The space-time manifold is metrically complete.

Penrose showed that any system obeying these properties and evolving in accordance with GR must develop black holes. He showed this without any symmetry assumptions on the system. Thus black holes are a prediction with the force of a theorem proved from minimal axioms.

His 1965 paper actually argued by contradiction. He derived five properties that a system would need in order to avoid forming a singularity and then showed that they are mutually inconsistent. Here is the crux of his paper:

[ Snip from paper ]

In the diagram, time flows up. The point in a nutshell—a very tight nutshell—is that once a surface flows inside the cylinder at the Schwarzschild radius then light and any other motion from it can go only inward toward a singularity. The analysis is possible without the kind of symmetry assumption that had been used to tame the algebraic complexity of the equations of GR. The metric completeness mandates a singularity apart from any symmetries; a periodic equilibrium is ruled out by analysis of Cauchy surfaces.

Necessary For Us?

Like Richard Feynman with his famous diagrams for quantum field theory, Penrose developed his diagrams as tools for shortcutting the vicissitudes of GR. We could devote entire other posts to his famous tiles and triangle and other combinatorial inventions. His tools enable quantifying black-hole formation from observations in our universe.

The question of necessity, however, pertains to other possible universes. Let us take for granted that GR and quantum theory are facets of a physical theory that governs the entire cosmos—the long-sought “theory of everything”—and let us also admit the contention of inflationary theorists that multiple universes are a necessary consequence of any inflation theory. The question remains, are black holes necessary in those universes?

It is possible that those universes might not satisfy axiom 1 above, or might have enough complexity for existence of black holes but not large-scale formation of them. The question then becomes whether black holes must exist in any universe rich enough for sentient life forms such as ourselves to develop. This is a branch of the anthropic principle.

Lee Smolin proposed a mechanism via which black holes engender new universes and so propagate the complexity needed for their large-scale formation. Since complexity also attends the development of sentient life forms, this would place our human existence in the wake of consequence, as opposed to the direction of logic when reasoning by the anthropic principle.

A Little More About Science

The 2020 Nobel Prize in Chemistry was awarded this week to Jennifer Doudna and Emmanuelle Charpentier for their lead roles in developing the CRISPR gene-editing technology, specifically around the protein Cas9.

We argue that it would be hard to find two more different types of results:

{\bullet } Penrose shows that black holes and general relativity are connected, which is a math result. We still cannot create black holes in a lab to experiment with—or maybe we could but should be very afraid of going anywhere near doing so. It was not clear that there could ever be a real application of this result.

{\bullet } Charpentier and Doudna discover that an existing genetic mechanism could be used to edit genetic material. Clearly this can be, and was, experimented on in labs. It is also clear that there are applications of this result. Indeed, it is now a standard tool used in countless labs. There are even patent battles over the method.

We like the fact that Nobels are given for such diverse types of research. It is not just that one is for astrophysics and one for chemistry. It is that Nobels can be given for very different types of research. We think this is important.

But wait. These results do have something in common, something that sets them apart from any research we can do in complexity theory. Both operate like this:

Observe something important in nature, something that is there independently of us. Then, in Penrose’s case, explain why it is true; in Charpentier and Doudna’s case, use it to solve some important problems.

We wonder if anything like this could be done in our research world—say in complexity theory?

Open Problems

Besides our congratulations to all those mentioned in this post, Ken expresses special thanks to Sir Roger among other Oxford Mathematical Institute fellows for the kindness recorded here.


Knowledge is Good

October 6, 2020


Science is good too

Emil Faber is the pretend founder of the pretend Faber College. The 1978 movie Animal House starts with a close-up of Faber’s statue, which has the inscription, Knowledge Is Good.

Today, Ken and I thought we might talk about knowledge, science, mathematics, proofs, and more.

IBM Conference on the Informational Lens

September 27, 2020


Some differences from the Computational Lens

Chai Wah Wu, Jonathan Lenchner, Charles Bennett, and Yuhai Tu are the moderators for the four days of the First IBM Research Workshop on the Informational Lens. The virtual workshop begins Tuesday morning at 10:45 ET. The conference has free registration and of course is online.

Today we preview the conference and discuss a few of the talks.


Puzzle Reviews by a Puzzle Writer

September 22, 2020


Not puzzling reviews

Princeton University Press page

Jason Rosenhouse is a professor in the Department of Mathematics at James Madison University. His research focuses on algebraic graph theory and analytic number theory involving exponential sums. The former includes a neat paper on expansion properties of a family of graphs associated to block designs, with two undergraduates among its authors. But besides his “real” research, he has written a number of books on puzzles, such as The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser. His book Games for Your Mind: The History and Future of Logic Puzzles is soon to be published.

Today Ken and I thought we would highlight his recent review of a book on math puzzles.

Ken Regan Turned 61

September 15, 2020


Happy birthday to Ken

Ken Regan is of course my partner on GLL. He is on the faculty of the computer science department at the University at Buffalo. His PhD was in 1986 from Oxford University and was titled On the separation of complexity classes. He was the PhD student of Dominic Welsh, who was a student of John Hammersley.

Today I would like to wish Ken a happy birthday.

Read more…