RenderBender: A Survey on Adversarial Attacks Using Differentiable Rendering

International Joint Conference on Artificial Intelligence (IJCAI'25), 2025

Matthew Hull
Haoran Wang
Matthew Lau
Alec Helbling
Mansi Phute
Chao Zhang
Zsolt Kira
Willian Lunardi
Martin Andreoni
Wenke Lee
Duen Horng Chau

Abstract

Differentiable rendering techniques like Gaussian Splatting and Neural Radiance Fields have become powerful tools for generating high-fidelity models of 3D objects and scenes. Their ability to produce scene models that are both physically plausible and differentiable is a key ingredient for generating physically plausible adversarial attacks on DNNs. However, the adversarial machine learning community has yet to fully explore these capabilities, partly due to differing attack goals (e.g., misclassification, misdetection) and the wide range of possible scene manipulations used to achieve them (e.g., altering textures or meshes). This survey contributes a framework that unifies diverse goals and tasks, facilitating easy comparison of existing work, identifying research gaps, and highlighting future directions, ranging from expanding attack goals and tasks to account for new modalities, state-of-the-art models, tools, and pipelines, to underscoring the importance of studying real-world threats in complex scenes.
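
To illustrate the core mechanism the abstract alludes to, the sketch below shows how a victim classifier's loss can be backpropagated through a differentiable rendering step into a scene parameter (here, a texture). This is not code from the paper: the toy render function (a bilinear texture warp in plain PyTorch), the target label, and all hyperparameters are illustrative assumptions standing in for full differentiable renderers such as PyTorch3D, Mitsuba 3, or 3D Gaussian Splatting.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen victim classifier (off-the-shelf torchvision model).
model = resnet18(weights=ResNet18_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Scene parameter under attack: a 64x64 RGB texture.
texture = torch.rand(1, 3, 64, 64, device=device, requires_grad=True)

def render(tex):
    # Toy differentiable "renderer": resample the texture onto a 224x224 view
    # with an identity affine warp. A real attack would use a full
    # differentiable renderer mapping mesh, texture, pose, and lighting to an image.
    theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]], device=tex.device)
    grid = F.affine_grid(theta, size=(1, 3, 224, 224), align_corners=False)
    return F.grid_sample(tex, grid, align_corners=False)

# Standard ImageNet normalization, applied differentiably.
mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

target = torch.tensor([954], device=device)  # assumed target label ("banana" in ImageNet-1k)
optimizer = torch.optim.Adam([texture], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    image = (render(texture.clamp(0, 1)) - mean) / std
    loss = F.cross_entropy(model(image), target)  # targeted misclassification objective
    loss.backward()  # gradients flow through the renderer into the texture
    optimizer.step()

print("predicted:", model((render(texture.clamp(0, 1)) - mean) / std).argmax(dim=1).item())

The surveyed attacks vary mainly in what replaces this toy setup: the renderer, the manipulated scene parameters (texture, mesh geometry, pose, lighting), and the attack goal (misclassification, misdetection).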

BibTeX

@inproceedings{DBLP:conf/ijcai/HullRB25,
  author={Matthew Hull and Haoran Wang and Matthew Lau and Alec Helbling and Mansi Phute and Chao Zhang and Zsolt Kira and Willian Lunardi and Martin Andreoni and Wenke Lee and Duen Horng Chau},
  title={RenderBender: A Survey on Adversarial Attacks Using Differentiable Rendering},
  year={2025},
  url={https://www.ijcai.org/proceedings/2025},
  booktitle={IJCAI},
  crossref={conf/ijcai/2025}
}