Prompt Writing Best Practices
Writing good prompts for language models like ChatGPT is an art and a still-evolving area of research. While there are some good practices to follow, we highly recommend experimenting with different ways of phrasing your prompts to see what produces the best results.
Some general tips:
- Imperative instructions: Instead of kindly asking for a favor in a conversational tone, giving short, clear instructions often produces better output. Instead of "Can you please summarize this article?", simply write "Summarize the following article:".
- Context: It helps the language model to know in what setting it is being used. This can be a single sentence at the beginning of the prompt, like "Your job is to help an editor finish a new article." or "You are responsible for proofreading and fact-checking online articles before they get published."
- Language: If you want the output in a language other than English (for example German), write your prompt in that language as well. Adding a simple language hint at the beginning of the prompt, like "(deutsch)", is often enough.
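Taken together, these tips can be applied programmatically. The following is a minimal sketch — the `build_prompt` helper and its parameters are hypothetical, not part of any API — that assembles a prompt from an optional language hint, an optional context sentence, and an imperative instruction:

```python
def build_prompt(instruction, text, context=None, language_hint=None):
    """Assemble a prompt following the tips above: optional language hint,
    optional context sentence, then a short imperative instruction."""
    parts = []
    if language_hint:
        parts.append(f"({language_hint})")  # e.g. "(deutsch)" for German output
    if context:
        parts.append(context)
    parts.append(instruction)
    parts.append(text)
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Summarize the following article:",
    text="<article text goes here>",
    context="You are responsible for proofreading and fact-checking online articles.",
    language_hint="deutsch",
)
```

The resulting string starts with the language hint, followed by the context sentence and the imperative instruction, and can be sent to the model as-is.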
A "hallucination" happens when the language model generates text that sounds reasonable and fitting, but contains false claims and invented facts. This is an inherent challenge with all language models, as there is currently no feasible way to automatically fact-check their output, and their only objective is to produce text that sounds plausible. There are, however, some proven ways to reduce the chance of hallucinations.
Especially when rewriting articles (e.g. summarization), we can ask the model to stick to the source it was given and not invent additional facts. Phrases that can work for this are "Make sure that your summary is consistent with the text and does not contradict it." or "Verify that all your claims are supported by the original text."
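One way to bake such grounding phrases into every summarization request is a small helper like the sketch below; the function name and phrase list are illustrative, not a fixed API:

```python
# Grounding phrases from the tips above; extend or reword as needed.
GROUNDING_PHRASES = [
    "Make sure that your summary is consistent with the text and does not contradict it.",
    "Verify that all your claims are supported by the original text.",
]

def grounded_summary_prompt(article):
    """Build a summarization prompt that asks the model to stick to the source."""
    instruction = "Summarize the following article. " + " ".join(GROUNDING_PHRASES)
    return instruction + "\n\n" + article
```

Because the phrases live in one place, they can be tuned centrally once you find wording that works well for your model.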
Reducing the temperature of a prompt can also reduce hallucinations, as the model then sticks to the most likely, and thus more likely true, phrasings.
The temperature setting controls the "creativity" of the model. A higher temperature results in more diverse and creative output, while a lower temperature makes the output more deterministic and focused.
If some creativity and out-of-the-box thinking is desired, set this value high, e.g. 0.7 to 1. Re-generating the text will then give noticeably different answers, and it can work well to simply generate a few variants and let the model inspire you. Common use cases are title generation, meta-description generation, or creative rewriting tasks like turning a bullet-point list into a full article.
If, on the other hand, the task is rather straightforward, less creativity is needed, and/or the model should stick closely to the given text and re-use existing phrases, set the temperature to a lower value like 0 to 0.5. Beware that re-generating a response with a low (or even zero) temperature may produce the same result again, so you cannot generate different variants this way. Common use cases are summarizing or shortening existing articles and excerpt generation.
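These guidelines can be captured in a simple per-task lookup. The task names and exact values below are illustrative choices within the ranges above, not fixed recommendations:

```python
# Illustrative temperature choices per task type, following the ranges above.
TEMPERATURES = {
    "title_generation": 0.9,      # creative: varied suggestions on re-generation
    "article_from_bullets": 0.8,  # creative rewriting
    "summarization": 0.2,         # stick closely to the source text
    "excerpt_generation": 0.0,    # fully deterministic
}

def temperature_for(task):
    """Return a temperature for the given task, with a middle-ground default."""
    return TEMPERATURES.get(task, 0.5)
```

The returned value would then be passed as the temperature parameter of whatever model API you use, so that creative tasks get varied output while summarization stays close to the source.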