LLM-based multi-agent poetry generation in non-cooperative environments
Keywords: poetry generation, social learning, multi-agent system

Abstract
Despite substantial progress in large language models (LLMs) for automatic poetry generation, LLM-generated poetry often lacks diversity, and the training process differs greatly from human learning. Under the rationale that poetry generation systems should learn more like humans and produce more diverse and novel outputs, we introduce a social learning-based framework that emphasizes non-cooperative interactions, in addition to cooperative interactions, to encourage diversity. Our experiments represent the first attempt at LLM-based multi-agent poetry generation in non-cooperative environments, employing both TRAINING-BASED agents (GPT-2) and PROMPT-BASED agents (GPT-3 and GPT-4). Evaluation on 96K generated poems demonstrates that our framework improves the performance of TRAINING-BASED agents, yielding a 3.0–3.7 percentage point (pp) increase in diversity and a 5.6–11.3 pp increase in novelty, as measured by distinct and novel n-grams. Poems generated by TRAINING-BASED agents also exhibit clear group divergence in lexicon, style, and semantics. PROMPT-BASED agents likewise benefit from non-cooperative environments. However, these agents show a decrease in lexical diversity over time and fail to demonstrate the intended group-based divergence within the social network. Our work argues for a paradigm shift in creative tasks such as automatic poetry generation to include social learning processes (via LLM-based agent modeling) similar to human interaction.
DOI: https://doi.org/10.15398/jlm.v13i2.432
License
Copyright (c) 2026 Ran Zhang, Steffen Eger
This work is licensed under a Creative Commons Attribution 4.0 International License.