{ "id": "2302.05543", "version": "v1", "published": "2023-02-10T23:12:37.000Z", "updated": "2023-02-10T23:12:37.000Z", "title": "Adding Conditional Control to Text-to-Image Diffusion Models", "authors": [ "Lvmin Zhang", "Maneesh Agrawala" ], "comment": "33 pages", "categories": [ "cs.CV", "cs.AI", "cs.GR", "cs.HC", "cs.MM" ], "abstract": "We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.", "revisions": [ { "version": "v1", "updated": "2023-02-10T23:12:37.000Z" } ], "analyses": { "keywords": [ "text-to-image diffusion models", "adding conditional control", "control pretrained large diffusion models", "controlnet learns task-specific conditions", "support additional input conditions" ], "note": { "typesetting": "TeX", "pages": 33, "language": "en", "license": "arXiv", "status": "editable" } } }