I spent the day at the Simons Foundation, where there were talks on evidence all day. Highlights for me were the following. Thomas Hales (Pitt) spoke about his proof of the Kepler Conjecture (about packings of spheres) and the problem of verifying it. A dozen (yes, 12) referees were assigned, a conference was held, and the refereeing proceeded through a three-year seminar process. And yet the referees could neither verify nor refute the proof, which relies heavily on computation. Hales's response was to eschew the refereeing process entirely and move to formal computational verification, in which a computer proof checker, rather than a human referee, certifies every logical step. The project may complete soon, some 16 years after his original 1998 proof. The proof runs to 300 pages and contains thousands of inequalities, each numbered with a seven-digit random integer hash. And so on!
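For readers who haven't seen formal verification: the idea is that both the theorem statement and its proof are written in a machine-checkable logic, and the proof assistant's kernel plays referee. Here is a toy sketch in Lean (Flyspeck itself uses HOL Light and Isabelle, and its inequalities are vastly harder than these):

```lean
-- Toy illustration only, not from Flyspeck. If either proof below
-- were wrong, the file would simply fail to compile; acceptance by
-- the kernel replaces acceptance by a human referee.

-- A machine-checked inequality about natural numbers:
theorem toy_inequality (n : Nat) : n < n + 1 :=
  Nat.lt_succ_self n

-- A closed numerical inequality, checked by computation in the kernel:
example : (100 : Nat) ^ 2 < 2 ^ 14 := by decide
```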
David Donoho (Stanford) and Bill Press (Texas) spoke about the problem of reproducibility in medical studies and trials: There is now solid evidence that most biomedical results do not hold up under replication, that most effect sizes decrease as samples get larger, and that independent sets of studies come to different conclusions. There are many possible and probable reasons for this, including perverse incentives, investigator degrees of freedom, and editorial decisions at journals. Interestingly, irreproducibility increases with a study's citation rate and impact. Donoho argued for moving to pre-registration of methods for confirmatory trials; Press argued for changing incentive structures. They both also argued for changes to educational practices, which relates to things we are thinking about in the Moore–Sloan Data Science Environments.
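Neither speaker showed code, but a minimal simulation (my own toy model, with assumed numbers) illustrates one mechanism behind shrinking effect sizes: if only statistically significant results get published, small studies systematically overstate the true effect, and larger replications then appear to walk it back.

```python
# Toy model of publication bias (the "winner's curse"), not from the
# talks. Many two-arm studies estimate the same true effect; only
# estimates with z > 1.96 are "published". Published effects shrink
# toward the truth as the per-arm sample size grows.
import numpy as np

rng = np.random.default_rng(42)
TRUE_EFFECT = 0.2      # assumed true standardized effect size
N_STUDIES = 10_000     # simulated studies per sample size

def published_effects(n_per_arm):
    """Simulate studies and keep only the 'significant' estimates."""
    se = np.sqrt(2.0 / n_per_arm)                # std. error of the estimate
    estimates = rng.normal(TRUE_EFFECT, se, N_STUDIES)
    return estimates[estimates / se > 1.96]      # selection on significance

for n in (20, 100, 1000):
    pub = published_effects(n)
    print(f"n per arm = {n:4d}: mean published effect = {pub.mean():.2f} "
          f"(truth = {TRUE_EFFECT})")
```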
Tim Maudlin (NYU) talked about the foundations of quantum mechanics, building heavily on old work by J. S. Bell, who, he argued, is one of the deepest thinkers ever on the foundations of physics. He asked the key question of whether an effective theory might also be a true theory, and what that would mean. He argued that the foundational issues that plague quantum mechanics undermine its claim to be predictive in a principled way: Sure, you can predict g−2 of the electron to 11 decimals, but if you don't know what the fundamental objects of the theory are or mean (that is, you don't have a proper ontology), you are making those predictions by way of heuristic decisions or distinctions. For example, the idea of "measurement" that has to be invoked in most descriptions of quantum mechanics is neither well defined nor non-arbitrary.
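For concreteness about the precision Maudlin was conceding (a standard textbook fact, not something from the talk): the quantity is the electron's anomalous magnetic moment, whose leading quantum-electrodynamics contribution is Schwinger's one-loop term:

```latex
% Schwinger (1948); higher-order QED terms carry the
% theory-experiment agreement to roughly the eleven-decimal
% level cited above.
\begin{equation}
  a_e \equiv \frac{g - 2}{2}
      = \frac{\alpha}{2\pi} + O(\alpha^2)
      \approx 0.00116 .
\end{equation}
```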
Fascinating stuff, David. I look forward to your ruminations on Kepler-10c!
-Lee B