On Meta-Prompting
Authors: Adrian de Wynter, Xun Wang, Qilong Gu, Si-Qing Chen
Abstract: Modern generative language models can interpret input strings as instructions, or prompts, and carry out tasks based on them. Many approaches to prompting and pre-training these models involve the automated generation of these prompts: meta-prompting, or prompting to obtain prompts. We propose a theoretical framework based on category theory to generalize and describe such approaches. The framework is flexible enough to account for stochasticity, and it allows us to obtain formal results on task-agnosticity and on the equivalence of various meta-prompting approaches. Experimentally, we test our framework in two active areas of model research: creativity and ideation. We find that user preference strongly favors (p < 0.01) the prompts generated under meta-prompting, as well as their corresponding outputs, over a series of hardcoded baseline prompts that include the original task definition. Using our framework, we argue that meta-prompting is more effective than basic prompting at generating desirable outputs.
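For illustration, a minimal sketch of the meta-prompting pattern described above (prompting a model to produce a task prompt, then using that prompt to obtain the final output). The `complete` function and the wording of the meta-prompt are placeholders for illustration only and are not taken from the paper; any generative model call could stand in.

```python
def complete(prompt: str) -> str:
    """Placeholder for a call to any generative language model.
    Replace with an actual model or API invocation."""
    raise NotImplementedError

def meta_prompt(task_description: str) -> str:
    """Step 1: prompt the model to write a prompt for the task
    (i.e., prompting to obtain a prompt)."""
    meta = (
        "You are an expert prompt writer. Write a clear, detailed prompt "
        "that instructs a language model to perform the following task:\n"
        f"{task_description}"
    )
    return complete(meta)

def run_task(task_description: str, task_input: str) -> str:
    """Step 2: apply the generated prompt to the actual task input."""
    generated_prompt = meta_prompt(task_description)
    return complete(f"{generated_prompt}\n\nInput:\n{task_input}")
```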