Like other scientists, astrophysicists first used computers as glorified calculators, but in the emerging era of Big Data, computers are poised to become true scientific partners. Automated tools being developed at Carnegie Mellon University in a new federally sponsored project could hasten that reality.
The sheer size of cosmological data sets and the increasing complexity of theories explaining astrophysical phenomena make such tools essential for continued scientific progress, said Jeff Schneider, a research professor in the Robotics Institute who is leading the three-year, $1.6 million project sponsored by the U.S. Department of Energy.
“Astrophysicists can no longer do ‘science by eye,’” said Schneider, who will be joined by collaborators in physics, machine learning and statistics. “There are just too many variables.”
So the research team will develop machine learning and statistical methods to sift through enormous data sets, comparing complex astrophysical simulations with large bodies of observational data in search of instances where theory and observation fail to match up.
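The core idea can be illustrated with a minimal sketch. This is not the CMU team's actual pipeline; it simply shows how a standard two-sample statistical test (here, Kolmogorov-Smirnov) can flag a distribution-level mismatch between simulated and observed values. All data and parameters below are hypothetical stand-ins.

```python
# Illustrative sketch only (not the project's actual method): use a
# two-sample Kolmogorov-Smirnov test to flag a mismatch between a
# simulated distribution and an "observed" one.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-in simulation output, e.g. a predicted galaxy property.
simulated = rng.normal(loc=0.0, scale=1.0, size=2000)

# Stand-in observations: same shape, slightly shifted, mimicking a
# place where theory and data disagree.
observed = rng.normal(loc=0.3, scale=1.0, size=2000)

stat, p_value = ks_2samp(simulated, observed)
if p_value < 0.01:
    print(f"Possible theory/observation mismatch (KS p = {p_value:.2e})")
else:
    print("No significant mismatch detected")
```

In practice the project's methods would have to scale this kind of check to many variables and far larger data sets than any human could examine by eye.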
“The computer can test so many more hypotheses than a human can,” he said, which eventually will make the computer a critical part of scientific discovery. “To the scientist, this will be a qualitatively different experience.”
The research team includes Barnabas Poczos of the Machine Learning Department; Shirley Ho, Rachel Mandelbaum, and Hy Trac of the Department of Physics; and Peter Freeman, Christopher Genovese and Chad Schafer from the Department of Statistics.