How to Write the Best Prompts and Use Prompt Chaining?


By Albert Mao

Feb 5, 2024

This article summarizes best practices for designing prompts and introduces the method of prompt chaining for decomposing complex and lengthy tasks into simple subtasks for higher efficiency.

Prompting steers large language models (LLMs) toward generating relevant responses aligned with user intent. While language models generate output based on the preceding sequence of tokens, prompting techniques can elicit step-by-step, human-like reasoning and reduce inaccuracies, commonly known as hallucinations.

In our previous articles, we discussed how large language models demonstrate remarkable zero-shot reasoning capabilities while being excellent few-shot learners with task-specific examples. Other approaches, such as Chain-of-Thought prompting and Tree-of-Thought Prompting, advance LLM capabilities even further by steering the models through a step-by-step process before arriving at a solution. These methodologies let neural networks learn from their mistakes and excel in tasks where models typically struggle, such as arithmetic, symbolic or logical reasoning.

While each of these prompting approaches has its own merits, the effectiveness of prompting ultimately depends on the quality of individual prompts. When a task is too large or complex, even the best prompt and prompting technique may fail, which calls for decomposing the problem at hand into smaller parts via a process known as prompt chaining. Keep reading to learn about both below.

Tips for Designing a Good Prompt

Although studies show that LLMs can infer user intent to some degree, neural models perform best when given high-quality prompts instructing them in detail on the expected task. While there are various approaches to designing good prompts, most of them boil down to the following best practices:

  • Place the instructions describing what the model is expected to do at the beginning of the prompt.

  • Use separators such as ### or """ to delimit the instructions from the context.

  • Be specific about the expectations for the output by identifying its format, style and other parameters in the prompt. For example, you can specify how many words or sentences should be included in the output, the style and tone of voice as well as the audience who will read it.

  • Avoid vague prompts that leave room for ambiguity. For example, instead of prompting the model to "create a short list of product benefits," ask the LLM to "create a list of 5 bullet points, with one sentence per bullet, highlighting the product's benefits in order of importance for the selected target audience."

  • Leverage zero-shot or few-shot prompting, providing the model with relatable examples, or fine-tune it on a larger dataset.

  • Focus on what the model should do instead of prompting an LLM on what it should avoid.

  • Use leading words at the end of a prompt to nudge the model into an expected mode. For example, the instruction "let's think step by step" added to the end of the prompt makes the model break the solution into steps, which increases the accuracy of the output and allows tracing of the reasoning process.
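The tips above can be combined into a single prompt template. Below is a minimal sketch in Python: instructions come first, ### separators delimit the context, the output format is spelled out, and a leading phrase closes the prompt. The `build_prompt` helper and its parameters are illustrative, not part of any specific library.

```python
def build_prompt(task: str, context: str, n_bullets: int = 5) -> str:
    """Assemble a prompt that follows the best practices listed above."""
    return (
        f"{task}\n"
        # Be specific about the expected output format.
        f"Write exactly {n_bullets} bullet points, one sentence each, "
        "ordered by importance for the target audience.\n"
        # Delimit the context with ### separators.
        "###\n"
        f"{context}\n"
        "###\n"
        # End with a leading phrase that nudges step-by-step reasoning.
        "Let's think step by step."
    )

prompt = build_prompt(
    task="List the product's benefits for busy professionals.",
    context="Product: a scheduling app that auto-books meetings.",
)
print(prompt)
```

The resulting string can be sent to any LLM API; the template keeps the instructions up front and the variable context clearly fenced off.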


What Is Prompt Chaining?

While prompting steers LLMs towards generating more accurate and relevant responses, prompt chaining is designed to make this even more efficient by decomposing complex tasks into subtasks to keep the prompt design as succinct and simple as possible. With the prompt chaining method, LLMs build on their previous answers to deliver more nuanced output.
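The pattern itself is simple: the output of one prompt is embedded in the next. Below is a minimal sketch where `fake_llm` is a stand-in for a real model call (an actual system would invoke an API client here); the two-step chain is the point, not the stub.

```python
def fake_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    return f"<answer to: {prompt[:40]}...>"

def chain(document: str) -> str:
    """Two-step prompt chain: summarize, then build on the summary."""
    # Step 1: produce an intermediate output from the raw document.
    summary = fake_llm(f"Summarize the following text:\n###\n{document}\n###")
    # Step 2: the second prompt builds on the first answer.
    return fake_llm(f"Write three follow-up questions about this summary:\n{summary}")
```

Each step stays short and focused, so every individual prompt can follow the design tips from the previous section.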

In some respects, prompt chaining looks similar to the Tree-of-Thought method while having distinct differences in application and execution. Like Tree-of-Thought, prompt chaining follows an incremental approach, using intermediate outputs to generate the final answer. Unlike Tree-of-Thought, prompt chaining runs in a sequence, processing one step at a time, and is better suited for well-structured algorithmic processes such as summarizing, coding, debugging or planning.

The principle of prompt chaining and its benefits are schematically illustrated in a figure in the work by Wu et al. [2022] below.

Figure 1: In the above example, No-Chaining (A) is compared to Prompt Chaining (B), where the original feedback is split into subtasks resulting in more precise suggestions/output. Source: Wu et al. [2022]

Use Case of Prompt Chaining

Added transparency and controllability via prompt chaining expand the scope of potential AI applications to more complex tasks where LLMs still fall short. 

Below are just a few examples demonstrating the capabilities of prompt chaining in practical applications.

Legal Documents Classification

In a study by Trautmann et al. [2023], entitled "Large Language Model Prompt Chaining for Long Legal Document Classification," a team from Thomson Reuters Labs utilizes the prompt chaining method for the classification of extensive legal documents. Without prompt chaining, these tasks presented serious challenges to LLMs due to the sheer length of such documents and their specialized terminology.
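A rough sketch of this kind of chain follows a chunk-then-summarize-then-classify pattern: split the long document into pieces that fit the context window, summarize each piece, and classify from the combined summaries. The chunk size and the `fake_llm` stub below are illustrative assumptions, not the paper's exact implementation.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns canned answers for the sketch.
    return "summary" if prompt.startswith("Summarize") else "Contract"

def classify_long_document(text: str, chunk_size: int = 1000) -> str:
    """Chain: chunk a long document, summarize chunks, classify the whole."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    # Step 1: summarize each chunk independently so it fits the context window.
    summaries = [fake_llm(f"Summarize this legal text:\n{c}") for c in chunks]
    # Step 2: classify the document from the concatenated summaries.
    joined = "\n".join(summaries)
    return fake_llm(f"Classify the document type given these summaries:\n{joined}")
```

Because each prompt only ever sees a chunk or a short list of summaries, the chain sidesteps the context-length limits that make single-prompt classification of long documents unreliable.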

Schematically, the prompt chaining for legal document classification is demonstrated in Figure 2 below.

Figure 2: Application of multi-step prompt chaining for labeling lengthy legal documents. Source: Trautmann et al. [2023]

Scriptwriting

In a work by Mirowski et al. [2022] titled "Co-Writing Screenplays and Theatre Scripts with Language Models," researchers leverage prompt chaining to develop a framework for scriptwriting. The system, named Dramatron, proved effective for creating coherent scripts and screenplays, including the title, characters, story, location descriptions and dialogue.
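This kind of hierarchical chaining can be sketched as a pipeline in which each stage's output is folded into the next prompt. The stage names below follow the article's description of Dramatron, but `fake_llm` and the exact prompt wording are illustrative assumptions, not the system's actual prompts.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[generated from: {prompt.splitlines()[0]}]"

def generate_script(log_line: str) -> dict:
    """Hierarchical chain: log line -> title -> characters -> dialogue."""
    # Each stage's prompt includes the outputs of the previous stages.
    title = fake_llm(f"Write a title for this log line:\n{log_line}")
    characters = fake_llm(f"Create characters for:\n{log_line}\nTitle: {title}")
    dialogue = fake_llm(f"Write dialogue for these characters:\n{characters}")
    return {"title": title, "characters": characters, "dialogue": dialogue}
```

The design choice here is that later stages never see the raw log line alone; they always receive the accumulated context, which is what keeps the generated script coherent across stages.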

Figure 3: In Dramatron, the LLM generates a coherent script starting from a log line (brief, answering basic questions such as Who, What, Why, Where, When and How). The arrows in Figure 3 schematically outline how generated text is used to construct prompts for further output by the LLM. Source: Mirowski et al. [2022].

Implement AI Technology Easier with VectorShift

As making prompts specific and to the point helps steer LLMs into desired behavior, prompt chaining expands the capabilities of AI applications even further. By decomposing larger tasks into manageable blocks, users can increase model performance and address more complex and creative problems.

Meanwhile, prompt chaining in various contexts, such as creating chatbot assistants, coding, legal work, and more, can be made an essentially plug-and-play process when leveraging the SDK interfaces and no-code functionality available for AI applications with VectorShift. For more information, please don't hesitate to contact our team or request a free demo.

© 2023 VectorShift, Inc. All Rights Reserved.