About

I am a final-year PhD student at IISc Bangalore, advised by Prof. R. Venkatesh Babu. I develop methods to enhance controllability in visual generative models by designing interfaces beyond text for interacting with them.

I spent an amazing summer at Snap Research working on controllable image editing. Prior to my PhD, I gained industry experience at Samsung Research, where I developed face-editing solutions for Samsung smartphones, and at ShareChat, where I focused on multimodal algorithms for content moderation.

I obtained my B.Tech. in Mathematics and Computing from IIT Delhi, where I worked with Prof. Prem Kalra on synthetic makeup transfer.


Recent News

  • [Nov 24] Our work on generating consistent mirror reflections (Reflecting Reality) accepted at 3DV 2025.
  • [Aug 24] Our work on generating multiple attribute edit variations (Attribute Diffusion) accepted at WACV 2025.
  • [Jul 24] Awarded the prestigious Satish Dhawan Research Award for contributions towards controllable generative models.
  • [Jul 24] Our works on fine-grained attribute control (PreciseControl) and affordance-aware human placement (Text2Place) accepted at ECCV 2024.

Publications

2025

Zero-Shot Depth-Aware Image Editing

Zero-Shot Depth-Aware Image Editing with Diffusion Models

Rishubh Parihar*, Sachidanand VS*, R. Venkatesh Babu

ICCV 2025

MonoPlace3D

MonoPlace3D: Learning 3D-Aware Object Placement for Monocular 3D Detection

Rishubh Parihar*, Srinjay Sarkar*, Sarthak Vora*, Jogendra Nath Kundu, R. Venkatesh Babu

CVPR 2025

Compass Control

🧭 Compass Control: Multi-Object Orientation Control for Text-to-Image Generation

Rishubh Parihar*, Vaibhav Agarwal*, Sachidanand VS, R. Venkatesh Babu

CVPR 2025

Reflecting Reality

Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections

Ankit Dhiman*, Manan Shah*, Rishubh Parihar, Yash Bhalgat, Lokesh R Boregowda, R. Venkatesh Babu

3DV 2025

Attribute Diffusion

Attribute Diffusion: Diffusion Driven Diverse Attribute Editing

Rishubh Parihar*, Prasanna Balaji*, Raghav Magazine, Sarthak Vora, Varun Jampani, R. Venkatesh Babu

WACV 2025 (Workshop on Diffusion Models, NeurIPS 2023)

2024

PreciseControl

PreciseControl: Enhancing Text-to-Image Diffusion Models with Fine-Grained Attribute Control

Rishubh Parihar*, Sachidanand VS*, Sabariswaran Mani, Tejan Karmali, R. Venkatesh Babu

ECCV 2024

Text2Place

Text2Place: Affordance Aware Text Guided Human Placement

Rishubh Parihar, Harsh Gupta, Sachidanand VS, R. Venkatesh Babu

ECCV 2024

BalancingAct

Balancing Act: Distribution-Guided Debiasing in Diffusion Models

Rishubh Parihar*, Abhijnya Bhat*, Saswat Mallick, Abhipsa Basu, Jogendra Nath Kundu, R. Venkatesh Babu

CVPR 2024

2023

Strata-NeRF

Strata-NeRF: Neural Radiance Fields for Stratified Scenes

Ankit Dhiman, R Srinath, Harsh Rangwani, Rishubh Parihar, Lokesh R Boregowda, Srinath Sridhar, R. Venkatesh Babu

ICCV 2023

Motion Style

We never go out of Style: Motion Disentanglement by Subspace Decomposition of Latent Space

Rishubh Parihar, Raghav Magazine, Piyush Tiwari, R. Venkatesh Babu

Workshop on AI for Content Creation, CVPR 2023

2022

FLAME

Everything is There in Latent Space: Attribute Editing and Attribute Style Manipulation by StyleGAN Latent Space Exploration

Rishubh Parihar, Ankit Dhiman, Tejan Karmali, R. Venkatesh Babu

ACM Multimedia 2022

HSR

Hierarchical Semantic Regularization of Latent Spaces in StyleGANs

Tejan Karmali, Rishubh Parihar, Susmit Agrawal, Harsh Rangwani, Varun Jampani, Maneesh Singh, R. Venkatesh Babu

ECCV 2022