Three Indian American Purdue University researchers have developed a patent-pending system that protects against identity leakage during AI photo editing by reducing AI’s ability to detect attributes like eye color and facial hair.
The system, developed by Vaneet Aggarwal, Dipesh Tamboli and Vineet Punyamoorty, operates before and after photos are uploaded to an AI editing platform, according to a media release from the West Lafayette, Indiana-based public research university.
It is expected to help consumers, businesses and institutions in editing and sharing profile photos, ID images and personal pictures without exposing their private identities to external platforms.
“Results of validation testing show that we can preserve editing quality while dramatically reducing what AI models can learn about your identity,” Aggarwal said. “This is a critical step toward trustworthy generative AI.”
Their research has been published in the peer-reviewed journal IEEE Transactions on Artificial Intelligence.
Aggarwal is a University Faculty Scholar and the Reilly Professor of Industrial Engineering with courtesy appointments in the Department of Computer Science and the Elmore Family School of Electrical and Computer Engineering. Tamboli is a doctoral alumnus and Punyamoorty is a doctoral candidate in computer and electrical engineering; both worked in Aggarwal’s research group.
“Our system allows users to mask sensitive regions on their photo, like the face, from an AI editing service,” Tamboli said. “Those regions are masked locally on the user’s device using a detailed outline of the region.”
Tamboli said only the masked image is sent to the AI editing service. “After the image is edited by AI, our system reintegrates the sensitive region back into the edited image using geometric alignment and blending,” he said.
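The workflow Tamboli describes can be illustrated with a minimal sketch. The code below is hypothetical and greatly simplified: it uses grayscale pixel grids, a rectangular sensitive region instead of a detailed outline, and a simple linear border blend in place of the system's geometric alignment and blending. Function names such as `mask_region` and `reintegrate` are illustrative, not the Purdue system's API.

```python
# Illustrative sketch of the mask-locally / edit-remotely / reintegrate flow.
# Images here are grayscale pixel grids (lists of lists of ints, 0-255).

def mask_region(image, top, left, bottom, right, fill=128):
    """Replace the sensitive rectangle with a neutral fill before upload,
    so the AI editing service never sees those pixels."""
    masked = [row[:] for row in image]
    for y in range(top, bottom):
        for x in range(left, right):
            masked[y][x] = fill
    return masked

def reintegrate(edited, original, top, left, bottom, right, feather=1):
    """Paste the original sensitive pixels back into the edited image,
    linearly blending a thin border so the seam is less visible."""
    out = [row[:] for row in edited]
    for y in range(top, bottom):
        for x in range(left, right):
            # Distance (in pixels) from the rectangle's edge.
            d = min(y - top, bottom - 1 - y, x - left, right - 1 - x)
            alpha = min(1.0, (d + 1) / (feather + 1))  # 1.0 in the interior
            out[y][x] = round(alpha * original[y][x]
                              + (1 - alpha) * edited[y][x])
    return out
```

In the real system, the reintegration step must also geometrically align the saved region with the edited image, since editing can shift or rescale content; this sketch assumes the region stays in place.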
Aggarwal said the Purdue system is the first solution to deliver full privacy, because sensitive data never leaves the user’s device. It produces seamless, natural results in the final edited image and works with any commercial generative AI model, so no retraining is needed.
“It’s privacy by design,” he said. “With our system, the AI platform never sees the face, but the final edited image still looks completely natural.”
The researchers disclosed the system to the Purdue Innovates Office of Technology Commercialization, which has applied for a patent to protect the intellectual property.
Addressing privacy risks from AI editing tools, Tamboli said modern generative AI tools edit photos with impressive realism but require users to upload full, unaltered images to cloud-based systems. These images contain private details such as the face and other identifying features.
“Requiring full, unaltered images creates serious privacy and security risks,” he said. “Once a photo is uploaded, users lose control over where their biometric data goes, how it is stored or how it might be misused.”
Tamboli said previous privacy approaches, which rely on blurring sensitive regions, locking parts of an image, applying stylization filters or avoiding cloud uploads entirely, fail to fully protect personal identity.
The researchers validated their system by testing how well leading AI foundation models infer biometric attributes from masked versus unmasked images. They found the Purdue system significantly reduced the ability of AI models to detect attributes such as eye color, facial hair and age group. In some cases, attribute-classification accuracy dropped by more than 80%, demonstrating strong protection against identity leakage.
The research team is taking steps to bring the technology closer to real-world deployment, including expanding the system to protect additional sensitive features such as medical details, ID documents and other privacy-critical content.