Today I dropped in on Detecting the Unexpected in Baltimore to give a last-minute replacement talk. In the question period of my talk, Tom Loredo (Cornell) got us talking about precision vs accuracy. My position is a hard one: We never have ground truth about things like chemical abundances of stars; every chemical abundance is a latent variable; there is no external information we can use to determine whether our abundance measurements are really accurate. My view is that a model is accurate only inasmuch as it makes correct predictions about qualitatively different data. So we are left with only precision for many of our questions of greatest interest. More on this in some longer form, later.
Highlights (for me; very subjective) of the day's talks were stories about citizen science. Chris Lintott (Oxford) told us about tremendous lessons learned from years of Zooniverse, and the non-trivial connections between how you structure a project and how engaged users will become. He also talked about a long-term vision for partnering machine learning and human actors, and he answered very thoughtfully a question about the ethical aspects of crowd-sourcing. Brooke Simmons (UCSD) showed us how easy it is to set up a crowd-sourcing project on Zooniverse; they have built an amazingly simple interface and toolkit. Steven Silverberg (Oklahoma) told us about Disk Detective and Julie Banfield (ANU) told us about Radio Galaxy Zoo. Both projects have amazing super-users, who have contributed to published papers. In the latter project, they have (somewhat serendipitously) found the largest radio galaxy known! One take-away from my perspective is that essentially all of the discoveries of the Unexpected have happened in the forums—in the deep social interaction parts of the citizen-science sites.