A collection of awesome papers on the alignment of diffusion models.
If you are interested in the alignment of diffusion models, please refer to our survey paper "Alignment of Diffusion Models: Fundamentals, Challenges, and Future", which, to our knowledge, is the first survey on this topic.
We hope to explore the alignment of diffusion models together with more researchers.
We try to include recent papers promptly, and they will be added in future revisions of our survey paper. Corrections and suggestions are welcome.
- ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation. NeurIPS 2023, [pdf]
- DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models. NeurIPS 2023, [pdf]
- Aligning Text-to-Image Models using Human Feedback. arXiv 2023, [pdf]
- Aligning Text-to-Image Diffusion Models with Reward Backpropagation. arXiv 2023, [pdf]
- Directly Fine-Tuning Diffusion Models on Differentiable Rewards. ICLR 2024, [pdf]
- CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching. NeurIPS 2024, [pdf]
- PRDP: Proximal Reward Difference Prediction for Large-Scale Reward Finetuning of Diffusion Models. CVPR 2024, [pdf]
- Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases. ICML 2024, [pdf]
- Feedback Efficient Online Fine-Tuning of Diffusion Models. ICML 2024, [pdf]
- Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control. arXiv 2024, [pdf]
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review. arXiv 2024, [pdf]
- Aligning Few-Step Diffusion Models with Dense Reward Difference Learning. arXiv 2024, [pdf]
- Reward Fine-Tuning Two-Step Diffusion Models via Learning Differentiable Latent-Space Surrogate Reward. arXiv 2024, [pdf]
- Focus-N-Fix: Region-Aware Fine-Tuning for Text-to-Image Generation. arXiv 2025, [pdf]
- ADT: Tuning Diffusion Models with Adversarial Supervision. arXiv 2025, [pdf]
- Diffusion Model Alignment Using Direct Preference Optimization. CVPR 2024, [pdf]
- Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model. CVPR 2024, [pdf]
- A Dense Reward View on Aligning Text-to-Image Diffusion with Preference. ICML 2024, [pdf]
- Self-Play Fine-tuning of Diffusion Models for Text-to-image Generation. NeurIPS 2024, [pdf]
- Aligning Diffusion Models by Optimizing Human Utility. arXiv 2024, [pdf]
- Tuning Timestep-Distilled Diffusion Model Using Pairwise Sample Optimization. arXiv 2024, [pdf]
- Scalable Ranked Preference Optimization for Text-to-Image Generation. arXiv 2024, [pdf]
- Prioritize Denoising Steps on Diffusion Model Preference Alignment via Explicit Denoised Distribution Estimation. arXiv 2024, [pdf]
- PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation. arXiv 2024, [pdf]
- SafetyDPO: Scalable Safety Alignment for Text-to-Image Generation. arXiv 2024, [pdf]
- DSPO: Direct Score Preference Optimization for Diffusion Model Alignment. ICLR 2025, [pdf]
- Direct Distributional Optimization for Provable Alignment of Diffusion Models. ICLR 2025, [pdf]
- Boost Your Human Image Generation Model via Direct Preference Optimization. CVPR 2025, [pdf]
- Curriculum Direct Preference Optimization for Diffusion and Consistency Models. CVPR 2025, [pdf]
- Aesthetic Post-Training Diffusion Models from Generic Preferences with Step-by-step Preference Optimization. CVPR 2025, [pdf]
- Personalized Preference Fine-tuning of Diffusion Models. CVPR 2025, [pdf]
- Towards Better Alignment: Training Diffusion Models with Reinforcement Learning Against Sparse Rewards. CVPR 2025, [pdf]
- Calibrated Multi-Preference Optimization for Aligning Diffusion Models. CVPR 2025, [pdf]
- InPO: Inversion Preference Optimization with Reparametrized DDIM for Efficient Diffusion Model Alignment. CVPR 2025, [pdf]
- Refining Alignment Framework for Diffusion Models with Intermediate-Step Preference Ranking. arXiv 2025, [pdf]
- D3PO: Preference-Based Alignment of Discrete Diffusion Models. arXiv 2025, [pdf]
- Aligning Text to Image in Diffusion Models is Easier Than You Think. arXiv 2025, [pdf]
- Optimizing Prompts for Text-to-Image Generation. NeurIPS 2023, [pdf]
- RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions. CHI 2023, [pdf]
- Improving Text-to-Image Consistency via Automatic Prompt Optimization. TMLR 2024, [pdf]
- Dynamic Prompt Optimizing for Text-to-Image Generation. CVPR 2024, [pdf]
- ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization. NeurIPS 2024, [pdf]
- Towards Better Text-to-Image Generation Alignment via Attention Modulation. arXiv 2024, [pdf]
- Inference-Time Alignment of Diffusion Models with Direct Noise Optimization. arXiv 2024, [pdf]
- Not All Noises Are Created Equally: Diffusion Noise Selection and Optimization. arXiv 2024, [pdf]
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding. arXiv 2024, [pdf]
- Golden Noise for Diffusion Models: A Learning Framework. arXiv 2024, [pdf]
- ReNeg: Learning Negative Embedding with Reward Guidance. arXiv 2024, [pdf]
- A General Framework for Inference-time Scaling and Steering of Diffusion Models. arXiv 2025, [pdf]
- Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review. arXiv 2025, [pdf]
- Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps. arXiv 2025, [pdf]
- Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection. ICLR 2025, [pdf]
- Test-time Alignment of Diffusion Models without Reward Over-optimization. ICLR 2025, [pdf]
- DyMO: Training-Free Diffusion Model Alignment with Dynamic Multi-Objective Scheduling. CVPR 2025, [pdf]
- Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation. NeurIPS 2023, [pdf]
- AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model. ICLR 2024, [pdf]
- HIVE: Harnessing Human Feedback for Instructional Visual Editing. CVPR 2024, [pdf]
- InstructVideo: Instructing Video Diffusion Models with Human Feedback. CVPR 2024, [pdf]
- DreamReward: Text-to-3D Generation with Human Preference. ECCV 2024, [pdf]
- Tango 2: Aligning Diffusion-Based Text-to-Audio Generations Through Direct Preference Optimization. ACM MM 2024, [pdf]
- VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation. EMNLP 2024, [pdf]
- Video Diffusion Alignment via Reward Gradients. arXiv 2024, [pdf]
- Aligning Target-Aware Molecule Diffusion Models with Exact Energy Optimization. arXiv 2024, [pdf]
- VideoRepair: Improving Text-to-Video Generation via Misalignment Evaluation and Localized Refinement. arXiv 2024, [pdf]
- LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment. arXiv 2024, [pdf]
- SoPo: Text-to-Motion Generation Using Semi-Online Preference Optimization. arXiv 2024, [pdf]
- OnlineVPO: Align Video Diffusion Model with Online Video-Centric Preference Optimization. arXiv 2024, [pdf]
- VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation. arXiv 2024, [pdf]
- Inference-Time Text-to-Video Alignment with Diffusion Latent Beam Search. arXiv 2025, [pdf]
- HuViDPO: Enhancing Video Generation through Direct Preference Optimization for Human-Centric Alignment. arXiv 2025, [pdf]
- VideoDPO: Omni-Preference Alignment for Video Diffusion Generation. CVPR 2025, [pdf]
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generative Transformers. ICCV 2023, [pdf]
- Human Preference Score: Better Aligning Text-to-Image Models with Human Preference. ICCV 2023, [pdf]
- ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation. NeurIPS 2023, [pdf]
- Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation. NeurIPS 2023, [pdf]
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation. NeurIPS 2023, [pdf]
- VPGen & VPEval: Visual Programming for Text-to-Image Generation and Evaluation. NeurIPS 2023, [pdf]
- Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis. arXiv 2023, [pdf]
- GenEval: An Object-Focused Framework for Evaluating Text-to-Image Alignment. NeurIPS 2023 Datasets and Benchmarks, [pdf]
- Holistic Evaluation of Text-to-Image Models. NeurIPS 2023, [pdf]
- Social Reward: Evaluating and Enhancing Generative AI through Million-User Feedback from an Online Creative Community. ICLR 2024, [pdf]
- Rich Human Feedback for Text to Image Generation. CVPR 2024, [pdf]
- Learning Multi-Dimensional Human Preference for Text-to-Image Generation. CVPR 2024, [pdf]
- Evaluating Text-to-Visual Generation with Image-to-Text Generation. ECCV 2024, [pdf]
- Multimodal Large Language Models Make Text-to-Image Generative Models Align Better. NeurIPS 2024, [pdf]
- Measuring Style Similarity in Diffusion Models. arXiv 2024, [pdf]
- T2I-CompBench++: An Enhanced and Comprehensive Benchmark for Compositional Text-to-Image Generation. TPAMI 2025, [pdf]
- Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons. Biometrika 1952, [pdf]
- Individual Choice Behavior. John Wiley 1959, [pdf]
- The Analysis of Permutations. Journal of the Royal Statistical Society. Series C (Applied Statistics) 1975, [pdf]
- Learning-to-Rank with Partitioned Preference: Fast Estimation for the Plackett-Luce Model. AISTATS 2021, [pdf]
- Models of Human Preference for Learning Reward Functions. arXiv 2022, [pdf]
- Beyond Preferences in AI Alignment. arXiv 2024, [pdf]
- Training Language Models to Follow Instructions with Human Feedback. NeurIPS 2022, [pdf]
- Constitutional AI: Harmlessness from AI Feedback. arXiv 2022, [pdf]
- RRHF: Rank Responses to Align Language Models with Human Feedback without Tears. NeurIPS 2023, [pdf]
- RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment. TMLR 2024, [pdf]
- RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback. ICML 2024, [pdf]
- Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs. ACL 2024, [pdf]
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model. NeurIPS 2023, [pdf]
- Preference Ranking Optimization for Human Alignment. AAAI 2024, [pdf]
- A General Theoretical Paradigm to Understand Learning from Human Preferences. AISTATS 2024, [pdf]
- KTO: Model Alignment as Prospect Theoretic Optimization. ICML 2024, [pdf]
- LiPO: Listwise Preference Optimization through Learning-to-Rank. arXiv 2024, [pdf]
- ORPO: Monolithic Preference Optimization without Reference Model. arXiv 2024, [pdf]
- DPO Kernels: A Semantically-Aware, Kernel-Enhanced, and Divergence-Rich Paradigm for Direct Preference Optimization. arXiv 2024, [pdf]
- Scaling Laws for Reward Model Overoptimization. ICML 2023, [pdf]
- The Alignment Problem from a Deep Learning Perspective. ICLR 2024, [pdf]
- Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints. ICLR 2024, [pdf]
- Nash Learning from Human Feedback. ICML 2024, [pdf]
- Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint. ICML 2024, [pdf]
- Dense Reward for Free in Reinforcement Learning from Human Feedback. ICML 2024, [pdf]
- Position: A Roadmap to Pluralistic Alignment. ICML 2024, [pdf]
- Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications. ICML 2024, [pdf]
- MaxMin-RLHF: Alignment with Diverse Human Preferences. ICML 2024, [pdf]
- Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment. ICML 2024, [pdf]
- Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences. ICML 2024, [pdf]
- Generalized Preference Optimization: A Unified Approach to Offline Alignment. ICML 2024, [pdf]
- Human Alignment of Large Language Models through Online Preference Optimisation. ICML 2024, [pdf]
- Understanding the Learning Dynamics of Alignment with Human Feedback. ICML 2024, [pdf]
- Position: Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback. ICML 2024, [pdf]
- Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study. ICML 2024, [pdf]
- BOND: Aligning LLMs with Best-of-N Distillation. arXiv 2024, [pdf]
- Safety Alignment Backfires: Preventing the Re-emergence of Suppressed Concepts in Fine-tuned Text-to-Image Diffusion Models. arXiv 2024, [pdf]
- Does RLHF Scale? Exploring the Impacts From Data, Model, and Method. arXiv 2024, [pdf]
- Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback. arXiv 2025, [pdf]
- Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step. arXiv 2025, [pdf]
If you find this paper list useful for your research, please consider citing our survey paper on this topic!
@article{liu2024alignment,
title = {Alignment of Diffusion Models: Fundamentals, Challenges, and Future},
author = {Liu, Buhua and Shao, Shitong and Li, Bao and Bai, Lichen and Xu, Zhiqiang and Xiong, Haoyi and Kwok, James and Helal, Sumi and Xie, Zeke},
journal = {arXiv preprint arXiv:2409.07253},
year = {2024}
}