
Carnegie Mellon University
In this talk, I argue that human creators and generative models can coexist. To achieve this coexistence, we need to allow creators to leverage these models while retaining control over the creation process and ownership of their data. I will begin by introducing several conditional generative models that improve creators’ control over outputs. Next, I will describe an efficient method for removing copyrighted content from pretrained text-to-image models. Finally, I will discuss our data attribution algorithms, which evaluate the influence of each training image on a generated sample.
Bio: Jun-Yan Zhu is the Michael B. Donohue Assistant Professor of Computer Science and Robotics at CMU’s School of Computer Science. Prior to joining CMU, he was a Research Scientist at Adobe Research and a postdoc at MIT CSAIL. He obtained his Ph.D. from UC Berkeley and his B.E. from Tsinghua University. He studies computer vision, computer graphics, and computational photography. He is the recipient of the Packard Fellowship for Science and Engineering, the Samsung AI Researcher of the Year Award, the NSF CAREER Award, the ACM SIGGRAPH Outstanding Doctoral Dissertation Award, and the UC Berkeley EECS David J. Sakrison Memorial Prize for outstanding doctoral research, among other awards. His work and commentary have been covered by The New Yorker, The New York Times, BBC, CNN, Reuters, and The Economist.