Text2Place: Affordance Aware Human Guided Placement

ECCV 2024

Indian Institute of Science, Bengaluru

TLDR: Given a background image, Text2Place predicts a plausible semantic region, compatible with the text prompt, in which to place humans. Next, given a few subject images, we perform subject-conditioned inpainting to realistically place the person in an appropriate pose following the scene affordances.

Abstract

For a given scene, humans can easily reason about the locations and poses at which to place objects. Designing a computational model that reasons about these affordances poses a significant challenge, mirroring the intuitive reasoning abilities of humans. This work tackles the problem of realistic human insertion into a given background scene, which we term Semantic Human Placement. The task is extremely challenging given the diversity of backgrounds, the scale and pose of the generated person, and the need to preserve the person's identity. We divide the problem into two stages: i) learning semantic masks with text guidance to localize regions in the image where humans can be placed, and ii) subject-conditioned inpainting that places a given subject within the semantic mask while adhering to the scene affordances. To learn semantic masks, we leverage rich object-scene priors from text-to-image generative models and optimize a novel parameterization of the semantic mask, eliminating the need for large-scale training. To the best of our knowledge, we are the first to provide an effective solution for realistic human placement in diverse real-world scenes. The proposed method generates highly realistic scene compositions while preserving both the background and the subject's identity. Further, we present results for several downstream tasks: scene hallucination from a single or multiple generated persons and text-based attribute editing. With extensive comparisons against strong baselines, we show the superiority of our method in realistic human placement.




Method Overview

Our approach consists of two stages:

a) Semantic Mask Optimization. Given a background image \( \mathcal{I}_b \), we initialize a mask \( \mathcal{M} \), parameterized as Gaussian blobs, and a foreground person image \( \mathcal{I}_p \). These are combined into a composite image \( \mathcal{I}_c \), on which we compute the SDS loss with the action prompt. During optimization, only \( \mathcal{M} \) and \( \mathcal{I}_p \) are updated through \( \mathcal{I}_c \). After optimization, \( \mathcal{M} \) converges to a plausible human-placement region, which is then used for inpainting. A minimal PyTorch sketch of this stage follows.
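
The sketch below is an illustration, not the released implementation: it uses a single isotropic Gaussian blob, a toy 64x64 resolution, and a random stand-in sds_grad where the real method would query a frozen text-to-image diffusion model.

import torch

H = W = 64  # toy mask resolution for illustration

# Learnable blob parameters: center (x, y) in [0, 1] and a log standard deviation.
center = torch.tensor([0.5, 0.7], requires_grad=True)
log_sigma = torch.tensor([-1.5], requires_grad=True)
person = torch.rand(3, H, W, requires_grad=True)  # foreground person layer I_p

def render_mask(center, log_sigma):
    # Render a soft mask M in (0, 1] from the Gaussian blob parameters.
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij"
    )
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    return torch.exp(-d2 / (2 * torch.exp(log_sigma) ** 2))

def sds_grad(image, prompt):
    # Stand-in for the SDS gradient: the real pipeline noises the composite,
    # denoises it with a frozen text-conditioned diffusion model, and uses the
    # predicted-noise residual. A random tensor keeps this sketch runnable.
    return torch.randn_like(image)

background = torch.rand(3, H, W)  # background image I_b, kept fixed
opt = torch.optim.Adam([center, log_sigma, person], lr=1e-2)

for step in range(500):
    mask = render_mask(center, log_sigma)                # M
    composite = mask * person + (1 - mask) * background  # I_c
    grad = sds_grad(composite, "a person sitting on a sofa")
    opt.zero_grad()
    composite.backward(gradient=grad)  # chain rule pushes the signal into M and I_p
    opt.step()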

b) Subject-Conditioned Inpainting. Given a few subject images, we perform Textual Inversion to obtain a token embedding \( \mathbf{V}^* \) for the subject. Next, we use the inpainting pipeline of a T2I model to perform personalized inpainting of the subject within the predicted mask.
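
A hedged sketch of this stage with the diffusers library is shown below. The checkpoint name, file paths, and the <V*> placeholder token are illustrative assumptions; the sketch only shows the general recipe of loading a learned Textual Inversion embedding and inpainting inside the predicted mask.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
# Register the subject token learned via Textual Inversion (path is illustrative).
pipe.load_textual_inversion("learned_embeds.bin", token="<V*>")

background = Image.open("background.png").convert("RGB").resize((512, 512))
mask = Image.open("placement_mask.png").convert("L").resize((512, 512))  # from stage one

result = pipe(
    prompt="a photo of <V*> person sitting on the sofa",
    image=background,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
result.save("placed.png")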


Results

Text2Place enables placement of celebrities in diverse backgrounds.


Applications

Text-Based Editing

We can perform text-based editing of the placed subject by leveraging the text-conditioned inpainting model.
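
Continuing the inpainting sketch above (same assumed pipe, background, and mask; the prompt is an illustrative example), attribute editing amounts to rewording the prompt while keeping the placement mask fixed:

edited = pipe(
    prompt="a photo of <V*> person in a red jacket sitting on the sofa",
    image=background,
    mask_image=mask,
).images[0]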


Example 1

Example 2



Pose Variations

Our novel blob parameterization of the semantic mask leaves enough room for diverse pose variations during human placement. We leverage diffusion-based inpainting to generate diverse poses within the predicted placement mask.
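
In the sketch below (reusing the assumed pipe, background, and mask from earlier), pose diversity comes from resampling the same inpainting call with different random seeds:

poses = [
    pipe(
        prompt="a photo of <V*> person sitting on the sofa",
        image=background,
        mask_image=mask,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    for seed in range(4)
]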



Person Hallucination

We can hallucinate new persons by passing only an action prompt, without subject conditioning (e.g., 'a person sitting on a sofa'). The inpainting pipeline generates realistic outputs with humans in diverse poses that are consistent with the background and follow the text prompt.
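
In terms of the earlier sketch, hallucinating a new person is the same inpainting call with the subject token dropped from the prompt:

hallucinated = pipe(
    prompt="a person sitting on the sofa",
    image=background,
    mask_image=mask,
).images[0]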



Scene Hallucination

We can also generate scenes compatible with the person's given pose. We first place a human subject in the background using the predicted semantic mask. Next, we invert the semantic mask and outpaint the region around the subject using the same T2I inpainting pipeline.
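
A short sketch of this step, reusing pipe and mask along with the subject-placed result from the earlier snippets (the scene prompt is an illustrative example):

import numpy as np
from PIL import Image

inverted = Image.fromarray(255 - np.array(mask))  # 1 - M: everything but the subject
scene = pipe(
    prompt="a cozy living room with large windows",
    image=result,
    mask_image=inverted,
).images[0]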



Two-Person Scene Hallucination



Object Scene Placement

Our method can be easily adapted to the placement of novel objects, since we perform only test-time optimization to obtain the semantic placement masks.




BibTeX

@inproceedings{rishubh2024text2place,
  title={Text2Place: Affordance Aware Human Guided Placement},
  author={Rishubh Parihar and Harsh Gupta and Sachidanand VS and R. Venkatesh Babu},
  booktitle={European Conference on Computer Vision},
  year={2024}
}

Acknowledgements

We thank Tejan Karmali for regular discussions and Abhijnya Bhat for reviewing the draft and providing helpful feedback. This work was partly supported by PMRF from Govt. of India (Rishubh Parihar) and Kotak IISc AI-ML Centre.