LLM-based multi-agent poetry generation in non-cooperative environments

Authors

  • Ran Zhang, University of Mannheim, Natural Language Learning and Generation (NLLG) Lab
  • Steffen Eger, University of Technology Nuremberg (UTN), Natural Language Learning and Generation (NLLG) Lab

Keywords

poetry generation, social learning, multi-agent system

Abstract

Despite substantial progress in large language models (LLMs) for automatic poetry generation, LLM-generated poetry often lacks diversity, and the training process differs greatly from human learning. Under the rationale that poetry generation systems should learn more like humans and produce more diverse and novel output, we introduce a framework based on social learning that emphasizes non-cooperative interactions alongside cooperative ones to encourage diversity. Our experiments are the first attempt at LLM-based multi-agent poetry generation in non-cooperative environments, employing both TRAINING-BASED agents (GPT-2) and PROMPT-BASED agents (GPT-3 and GPT-4). Evaluation on 96K generated poems shows that our framework improves the performance of TRAINING-BASED agents, yielding a 3.0–3.7 percentage point (pp) increase in diversity and a 5.6–11.3 pp increase in novelty, as measured by distinct and novel n-grams. Poems generated by TRAINING-BASED agents also exhibit clear group divergence in lexicon, style, and semantics. PROMPT-BASED agents likewise benefit from non-cooperative environments; however, they show decreasing lexical diversity over time and fail to exhibit the intended group-based divergence within the social network. Our work argues for a paradigm shift in creative tasks such as automatic poetry generation toward social learning processes (via LLM-based agent modeling) similar to human interaction.
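
As a rough illustration of the two metrics named above, the following Python sketch computes them under their common definitions: distinct-n (unique n-grams divided by total n-grams in the generated poems) and novel-n (the fraction of generated n-grams absent from a reference corpus, e.g. the training data). Function names and whitespace tokenization are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of distinct-n and novel-n under their common
    # definitions; tokenization and names are assumptions, not the
    # paper's code.
    from collections import Counter

    def ngrams(tokens, n):
        """All contiguous n-grams of a token sequence."""
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def distinct_n(poems, n=2):
        """Unique n-grams / total n-grams across all generated poems."""
        counts = Counter(g for poem in poems for g in ngrams(poem.split(), n))
        total = sum(counts.values())
        return len(counts) / total if total else 0.0

    def novel_n(poems, reference, n=2):
        """Fraction of generated n-grams unseen in the reference corpus."""
        seen = {g for text in reference for g in ngrams(text.split(), n)}
        gen = [g for poem in poems for g in ngrams(poem.split(), n)]
        return sum(g not in seen for g in gen) / len(gen) if gen else 0.0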

DOI

https://doi.org/10.15398/jlm.v13i2.432

Published

2026-02-20

How to Cite

Zhang, R., & Eger, S. (2026). LLM-based multi-agent poetry generation in non-cooperative environments. Journal of Language Modelling, 13(2), 261–318. https://doi.org/10.15398/jlm.v13i2.432
