Researchers from the Mohamed bin Zayed University of AI conducted a study of 26 different prompting strategies for optimising responses from various large language models (LLMs), including ChatGPT. Their findings revealed that direct prompts, and prompts offering a tip, dramatically increased response quality, with tips yielding improvements of up to 45%. Surprisingly, dropping politeness in favour of neutral, direct phrasing also led to better outcomes.
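
As a concrete illustration of the direct, tip-offering style, here is a minimal sketch assuming the `openai` Python client; the model name, task, and tip amount are illustrative choices, not taken from the study.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Direct phrasing with no politeness filler, plus a tip incentive at the end.
prompt = (
    "Explain how gradient descent works in two short paragraphs. "
    "I'm going to tip $200 for a better solution!"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```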

While all of the strategies produced satisfactory results, some boosted performance by over 40%. OpenAI itself recommends various tactics for improving ChatGPT's effectiveness, but its official documentation does not explicitly align any of them with the 26 strategies the researchers tested.

The study also highlighted several best practices for prompt design:

  • Conciseness and Clarity: Clear, direct prompts reduce confusion and improve response relevance.
  • Contextual Relevance: Providing sufficient background helps the model understand the task context.
  • Task Alignment: Ensuring the prompt aligns closely with the task improves accuracy.
  • Example Demonstrations: Including examples can guide the model in complex tasks.
  • Avoiding Bias: Neutral language helps minimise model bias.
  • Incremental Prompting: Structuring prompts to guide the model through sequential steps is effective, especially with larger models, which showed the greatest gains (see the sketch after this list).
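
As a rough illustration of how several of these practices can be combined, the sketch below assembles a prompt with up-front context, a one-shot example demonstration, and an incremental "step by step" instruction; the function name and all wording here are illustrative, not taken from the paper.

```python
def build_prompt(context: str, examples: list[tuple[str, str]], task: str) -> str:
    """Assemble a prompt from context, demonstrations, and stepwise guidance."""
    parts = [f"Context: {context}", ""]  # contextual relevance: background first
    for question, answer in examples:    # example demonstrations (few-shot)
        parts += [f"Q: {question}", f"A: {answer}", ""]
    parts += [
        f"Q: {task}",                    # task alignment: one clear question
        "Work through the problem step by step, then state the final answer.",  # incremental prompting
    ]
    return "\n".join(parts)              # concise, neutral wording throughout


prompt = build_prompt(
    context="You are checking short arithmetic answers for a maths tutor.",
    examples=[("What is 12 * 8?", "96")],
    task="What is 17 * 23?",
)
print(prompt)
```

Keeping the demonstration and the final question in the same Q/A format is one simple way to satisfy both the task-alignment and example-demonstration principles at once.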

The research concluded that these principled prompting strategies are effective at sharpening the focus and contextual understanding of LLMs, significantly improving response quality. Future research will explore how fine-tuning LLMs with these optimised prompts can further improve model performance.