arXiv:2402.04504 [cs.CV]

Text2Street: Controllable Text-to-image Generation for Street Views

Jinming Su, Songen Gu, Yiting Duan, Xingyue Chen, Junfeng Luo

Published 2024-02-07 (Version 1)

Text-to-image generation has made remarkable progress with the emergence of diffusion models. However, generating street-view images from text remains difficult: the road topology of street scenes is complex, traffic conditions are diverse, and weather varies widely, all of which conventional text-to-image models struggle to handle. To address these challenges, we propose a novel controllable text-to-image framework, named Text2Street. In this framework, we first introduce a lane-aware road topology generator, which performs text-to-map generation with accurate road structure and lane lines, aided by a counting adapter, to realize controllable road topology generation. Then, a position-based object layout generator is proposed to obtain text-to-layout generation through an object-level bounding box diffusion strategy, realizing controllable traffic-object layout generation. Finally, a multiple-control image generator is designed to integrate the road topology, object layout, and weather description, realizing controllable street-view image generation. Extensive experiments show that the proposed approach achieves controllable street-view text-to-image generation and validate the effectiveness of the Text2Street framework for street views.
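The abstract mentions an "object-level bounding box diffusion strategy" for generating traffic-object layouts but does not give its details. As a hypothetical illustration of how diffusion can operate on box coordinates rather than pixels, the sketch below applies a standard DDPM-style forward noising process to normalized (cx, cy, w, h) boxes; the schedule parameters and function names are assumptions, not the paper's actual method.

```python
import numpy as np

def make_noise_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; returns the cumulative products alpha_bar_t,
    which control how much of the clean signal survives at each timestep."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    return np.cumprod(alphas)

def forward_diffuse_boxes(boxes, t, alpha_bar, rng):
    """Noise normalized (cx, cy, w, h) boxes to timestep t via the closed-form
    DDPM forward process q(x_t | x_0); a trained denoiser would learn to
    invert this, starting from pure noise, to sample a traffic-object layout."""
    noise = rng.standard_normal(boxes.shape)
    x_t = np.sqrt(alpha_bar[t]) * boxes + np.sqrt(1.0 - alpha_bar[t]) * noise
    return x_t, noise

# Example: three traffic objects as normalized boxes in [0, 1] (illustrative values).
rng = np.random.default_rng(0)
alpha_bar = make_noise_schedule()
boxes = np.array([[0.50, 0.60, 0.10, 0.20],
                  [0.30, 0.70, 0.05, 0.10],
                  [0.80, 0.65, 0.12, 0.25]])
noisy_boxes, eps = forward_diffuse_boxes(boxes, t=500, alpha_bar=alpha_bar, rng=rng)
```

Operating at the box level keeps the layout representation compact (4 numbers per object), which is presumably what lets the generator control object positions explicitly before any pixels are synthesized.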

Related articles:
arXiv:1909.07083 [cs.CV] (Published 2019-09-16)
Controllable Text-to-Image Generation
arXiv:2305.18583 [cs.CV] (Published 2023-05-29)
Controllable Text-to-Image Generation with GPT-4
arXiv:2305.14720 [cs.CV] (Published 2023-05-24)
BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing