G17 Claudia Cohen Hall
249 South 36th Street
The Thomas and Yvonne Williams Symposia for the Advancement of Logic, Philosophy, and Technology,
The AMCS/PICS Colloquium,
and The Department of Philosophy present
Computation: From Axiomatization to Embodiment
William Bialek, John Archibald Wheeler/Battelle Professor in Physics, Princeton University
Wilfried Sieg, Patrick Suppes Professor of Philosophy and Co-director of the Laboratory for Symbolic and Educational Computing, Carnegie Mellon University
Umesh Vazirani, Roger A. Strauch Professor of EECS and Director of the Berkeley Quantum Computation Center, University of California at Berkeley
Friday, December 5, 2014
2:00 - 5:00 p.m.
Sponsored by the School of Arts and Sciences and the School of Engineering and Applied Science.
Titles and Abstracts of Presentations:
Why did we get that one wrong? (William Bialek)
Abstract: A painfully familiar fact about computation in the human brain is that it sometimes produces incorrect answers. Different kinds of errors are the subjects of independent literatures: on emotional interference with rational judgment, on problems in reasoning about probability, and more. We also get the wrong answer in much simpler problems of sensory information processing, and in some cases we can see that to err is not merely human—we make the same mistakes as insects, separated from us by hundreds of millions of years of evolution.
In the noisy and cluttered environment where brains have to compute, precise “right answers” generally are inaccessible. The best the brain can do is to minimize errors, and almost always this involves trading between two different kinds of errors: random errors driven by noise and clutter, and systematic errors that would persist even if the noise could magically be removed. One idea, then, is that the systematic errors of our perception arise from the attempt to minimize total error in the presence of noise. If this is correct, then our mistakes do not reflect limitations of our biological hardware, but rather are corollaries of near optimal strategies for noise reduction.
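The trade-off between random and systematic errors can be illustrated with a toy Gaussian model (all numbers and the model itself are illustrative assumptions, not from the talk): the estimator that minimizes total squared error shrinks noisy sense data toward a prior expectation, and this noise-optimal strategy is systematically biased.

```python
import random, statistics

random.seed(0)

# Hypothetical Gaussian toy model: stimulus x ~ N(mu0, s0^2), sense datum y = x + noise.
mu0, s0, sn = 0.0, 1.0, 2.0
x = [random.gauss(mu0, s0) for _ in range(100_000)]
y = [xi + random.gauss(0.0, sn) for xi in x]

# Unbiased strategy: trust the datum. Error-minimizing (MMSE) strategy:
# shrink the datum toward the prior mean, weighted by the noise level.
w = s0**2 / (s0**2 + sn**2)
x_opt = [w * yi + (1 - w) * mu0 for yi in y]

mse_raw = statistics.fmean((yi - xi)**2 for yi, xi in zip(y, x))      # ~ sn^2 = 4.0
mse_opt = statistics.fmean((ei - xi)**2 for ei, xi in zip(x_opt, x))  # ~ 0.8

# The optimal strategy makes systematic errors: large stimuli are underestimated...
bias = statistics.fmean(ei - xi for ei, xi in zip(x_opt, x) if xi > 1.5)
assert bias < -0.5
# ...yet its total error is smaller than the unbiased strategy's.
assert mse_opt < mse_raw
```

The systematic bias here is not a hardware flaw; it is a corollary of minimizing total error in the presence of noise.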
If the brain’s computations are near-optimal strategies, then there are physical limits on the reliability of the output, and I’ll give a quick review of work showing that real brains (including ours) can approach these limits. Then I’ll dig into the problem of visual motion estimation, where I think we are on the verge of testing the “errors as corollaries of optimality” theory in quantitative detail. Along the way I hope to convince you that sense data are vastly less informative than they seem, a fact about the world that should have broader implications. Finally, some of the same mathematics that we use in understanding errors of motion estimation can be applied to “higher level” errors such as the spurious detection of patterns in random sequences.
What is the concept of computation? (Wilfried Sieg)
Abstract: What is a computer? As we seem to know what computers are, aren’t computations then just processes that can be carried out by computers? To give a sense of what kind of processes might be involved, I sketch some contexts that called for a rigorous notion of computability, namely, problems in mathematics (e.g., Hilbert’s 10th problem), decision problems in logic (e.g., the Entscheidungsproblem for logic), and definitions of “formal system” (e.g., for Gödel’s theorems).
The first classical approach takes processes involved in effectively calculating number theoretic functions as fundamental; it led, through Gödel and Church, to a notion of computability in logical calculi and absoluteness theorems. The second classical approach takes processes as basic that are involved in mechanically deciding problems concerning finite strings; it led, through Turing and Post, to a notion of computability in formal calculi and reducibility theorems.
Generalized features of formal calculi motivate the formulation of an abstract concept of a computable dynamical system. The defining features are finiteness and locality conditions that are satisfied by the standard concrete notions of computation. In addition, a representation theorem can be established: Turing machines can simulate the computations of any concrete system falling under the abstract concept. I sketch an extension of this axiomatic approach to obtain computable parallel dynamical systems and briefly discuss some applications.
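The finiteness and locality conditions are easiest to see in a concrete machine. A minimal sketch (the machine and its transition table are illustrative, not from the talk): a Turing machine computing the binary successor function, whose entire behavior is a finite table and whose every step reads, writes, and moves by exactly one tape cell.

```python
BLANK = "_"
# Finiteness: the whole machine is this finite transition table.
# delta: (state, read_symbol) -> (new_state, written_symbol, head_move)
delta = {
    ("scan", "0"): ("scan", "0", +1),
    ("scan", "1"): ("scan", "1", +1),
    ("scan", BLANK): ("carry", BLANK, -1),   # reached the right end; start carrying
    ("carry", "1"): ("carry", "0", -1),      # 1 + carry = 0, carry propagates left
    ("carry", "0"): ("halt", "1", 0),        # 0 + carry = 1, done
    ("carry", BLANK): ("halt", "1", 0),      # overflow: write a new leading 1
}

def run(tape_str):
    """Run the successor machine on a binary string, return the result."""
    tape = dict(enumerate(tape_str))
    state, head = "scan", 0
    while state != "halt":
        # Locality: each step touches one cell and moves the head one step.
        state, tape[head], move = delta[(state, tape.get(head, BLANK))]
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, BLANK) for i in range(lo, hi + 1)).strip(BLANK)

print(run("1011"))  # 1011 (eleven) + 1 -> 1100 (twelve)
```

Any concrete system satisfying the abstract finiteness and locality conditions can, per the representation theorem, be simulated by a machine of exactly this kind.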
Science in the limit of exponential complexity (Umesh Vazirani)
Abstract: One of the remarkable discoveries of the last quarter century is that quantum many body systems violate the extended Church-Turing thesis and exhibit exponential complexity: describing the state of such a system of even a few hundred particles would require a classical memory larger than the size of the Universe. This means that a test of quantum mechanics in the limit of high complexity would require scientists to experimentally study systems that are inherently exponentially more powerful than human thought itself!
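The memory claim survives a back-of-envelope check (the figure of roughly 10^80 atoms in the observable Universe is a standard order-of-magnitude estimate, not from the abstract):

```python
# A system of n two-level quantum particles is described by 2**n complex amplitudes.
n = 300
amplitudes = 2**n                  # dimension of the joint state space
bytes_needed = amplitudes * 16     # two 8-byte floats per complex amplitude

ATOMS_IN_UNIVERSE = 10**80         # common order-of-magnitude estimate
# Even storing one byte per atom in the Universe falls short by many orders of magnitude.
print(bytes_needed > ATOMS_IN_UNIVERSE)  # True
```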
A little reflection shows that the standard scientific method of "predict and verify" is no longer viable in this regime, since calculating the theory's prediction is computationally infeasible. Does this mean that it is impossible to do science in this regime? A remarkable connection with the theory of interactive proof systems (the crown jewel of computational complexity theory) suggests a potential way around this impasse: interactive experiments. Rather than carry out a single experiment, the experimentalist performs a sequence of experiments, and rather than predicting the outcome of each experiment, the experimentalist checks a posteriori that the outcomes of the experiments are consistent with each other and with the theory to be tested. Whether this approach will formally work is intimately related to the power of a certain kind of interactive proof system, a question that is currently wide open. Two natural variants of this question have recently been answered in the affirmative, resulting in a breakthrough in the closely related area of testing untrusted quantum devices.