{ "id": "2305.11116", "version": "v1", "published": "2023-05-18T16:57:57.000Z", "updated": "2023-05-18T16:57:57.000Z", "title": "LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation", "authors": [ "Yujie Lu", "Xianjun Yang", "Xiujun Li", "Xin Eric Wang", "William Yang Wang" ], "categories": [ "cs.CV", "cs.CL" ], "abstract": "Existing automatic evaluation of text-to-image synthesis provides only an image-text matching score, without considering object-level compositionality, which results in poor correlation with human judgments. In this work, we propose LLMScore, a new framework that offers evaluation scores with multi-granularity compositionality. LLMScore leverages large language models (LLMs) to evaluate text-to-image models. It first transforms the image into image-level and object-level visual descriptions. An evaluation instruction is then fed into the LLMs to measure the alignment between the synthesized image and the text, ultimately generating a score accompanied by a rationale. Our extensive analysis reveals that LLMScore has the highest correlation with human judgments across a wide range of datasets (Attribute Binding Contrast, Concept Conjunction, MSCOCO, DrawBench, PaintSkills). Notably, LLMScore achieves a Kendall's tau correlation with human evaluations that is 58.8% and 31.2% higher than the commonly used text-image matching metrics CLIP and BLIP, respectively.", "revisions": [ { "version": "v1", "updated": "2023-05-18T16:57:57.000Z" } ], "analyses": { "keywords": [ "large language models", "text-to-image synthesis evaluation", "text-image matching metrics clip", "llmscore achieves kendall's tau correlation", "human judgments" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }