Researchers at Carnegie Mellon University's Robotics Institute and the Massachusetts Institute of Technology (MIT) created CHARCHA (Computer Human Assessment for Recreating Characters with Human Actions), a secure, personalized verification protocol that governs when an individual's likeness may appear in generative video content. The team developed CHARCHA in response to ethical concerns in generative AI, such as unauthorized deepfakes.
“When we realized how easy it was to scrape data from the internet and create realistic AI content without consent, we knew we wanted to develop a safeguard,” said Mehul Agarwal, co-lead researcher and a 2024 master's student in machine learning at CMU. “We are reacting to the growing ability of malicious actors to misuse generative AI and trying to stay ahead of the curve.”
The CHARCHA system was inspired by CAPTCHA's verification legacy. Whereas CAPTCHA uses text or image tests to differentiate humans from bots, CHARCHA relies on real-time physical interaction to make the same distinction. The program asks users to perform a series of randomized physical actions in front of a webcam, such as turning their head from left to right, squinting their eyes, or smiling with teeth. The live verification process, which takes about 90 seconds, analyzes these actions to confirm that the person is present and performing the requested movements. Because the user must respond to the system in real time, CHARCHA prevents pre-recorded video or still images from passing verification.
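The article does not publish CHARCHA's implementation, but the flow it describes (randomized prompts, live capture, per-action verification) can be sketched as a simple challenge-response loop. The challenge count and the `capture_frames` and `action_performed_live` placeholders below are hypothetical, included only to show the structure; the action list uses the examples named above:

```python
import random

# Illustrative action list drawn from the examples in the article.
ACTIONS = [
    "turn your head from left to right",
    "squint your eyes",
    "smile with teeth",
]

def capture_frames(seconds: int) -> list[str]:
    """Stand-in for live webcam capture at roughly 30 frames per second."""
    return [f"frame-{i}" for i in range(seconds * 30)]

def action_performed_live(frames: list[str], action: str) -> bool:
    """Stand-in for the real-time detector that checks the requested
    movement actually occurs in the captured frames."""
    return bool(frames)

def run_charcha_session(num_challenges: int = 3) -> list[str] | None:
    """Issue randomized physical challenges; about 90 seconds end to end."""
    verified: list[str] = []
    for action in random.sample(ACTIONS, k=num_challenges):
        print(f"Please {action} now.")
        frames = capture_frames(seconds=3)
        # The challenge is random and must be performed live, so a
        # pre-recorded clip or still photo cannot anticipate it.
        if not action_performed_live(frames, action):
            return None  # verification fails; no images are used
        verified.extend(frames)
    return verified  # only verified frames go on to personalize the model

if __name__ == "__main__":
    result = run_charcha_session()
    print("verified" if result else "rejected")
```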
“Built-in algorithms analyze micro-movements to verify that the user is physically present and not a simulation,” said Gauri Agarwal, co-lead researcher and a CMU School of Computer Science graduate now working at the MIT Media Lab. “Once it is satisfied that you've completed the action accurately and in real time, the program will use those images to train our model.”
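The detection algorithms themselves are not described beyond analyzing micro-movements, but one simple signal of that kind is frame-to-frame pixel change: a photograph held up to the camera is nearly static, while a live face moves continuously. The function and threshold below are assumptions for illustration, not CHARCHA's actual method:

```python
import numpy as np

def micro_motion_score(frames: np.ndarray) -> float:
    """Mean absolute intensity change between consecutive grayscale frames.

    `frames` is assumed to have shape (num_frames, height, width).
    """
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean())

def looks_live(frames: np.ndarray, threshold: float = 1.5) -> bool:
    # Threshold is illustrative; a real system would calibrate it and
    # combine far stronger cues (landmarks, timing of responses, etc.).
    return micro_motion_score(frames) > threshold

# Example: varying frames register as motion; a repeated still frame does not.
rng = np.random.default_rng(0)
live = rng.integers(0, 256, size=(30, 64, 64))
still = np.tile(live[:1], (30, 1, 1))
print(looks_live(live), looks_live(still))  # True False
```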
The CHARCHA process offers an unusual degree of autonomy to the users who choose to interact with it. With the program, users do not have to abandon generative AI content entirely; instead, they can personalize music videos and other content with confidence.
“Many platforms store data indefinitely, and they have unclear policies on how AI-generated content may be used,” said Mehul. “CHARCHA shifts the responsibility to users by allowing them to verify themselves before any images can be generated. It does not rely on external privacy policies and gives people greater control over their likeness.”
Gauri and Mehul Agarwal presented CHARCHA at the 2024 Conference on Neural Information Processing Systems (NeurIPS), where it drew significant interest from industry leaders.
“I think the positive response from our audience at the conference really highlights the need for security surrounding generative AI tools,” said Gauri. “This helped confirm our belief that CHARCHA could be an essential tool for the future.”
The team has created a website to promote CHARCHA, where users can join a waitlist to create their own music videos ethically and with consent.
For More Information: Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu