Unlocking LLMs' Potential with Tree-of-Thought Prompting

By Albert Mao

Jan 5, 2024

This article explains the Tree-of-Thought (ToT) prompting framework for large language models and its advantages over other prompting techniques, such as Chain-of-Thought prompting.

Prompt engineering is pushing the boundaries of large language models, helping to unlock their potential for solving more complex tasks and producing more reliable output. To this end, several studies have explored ways of enabling LLMs to engage in decision-making through a tree-like process now known as the Tree-of-Thought (ToT). Keep reading to find out what ToT is, how it compares with other methods, how it can help LLMs make better decisions, and how existing studies approach it.

What is Tree-of-Thought Prompting?

In a standard scenario, LLMs arrive at solutions in a linear fashion, generating each token based on the preceding sequence of tokens without taking any corrective action when errors occur. This limits the model's ability to correct mistakes: errors compound as more tokens are generated, which can lower the quality of the output.

The Tree-of-Thought (ToT) framework is designed to guide LLMs through a process closer to deliberate human reasoning. This approach allows a large language model to explore multiple solution paths, self-evaluate its choices and backtrack or edit earlier steps if necessary before arriving at the final answer.

How Does Tree-of-Thought Prompting Compare with Other Methods?

Yao et al. (2023) compare the Tree-of-Thought approach schematically with other prompting methods, as shown below:

Figure 1: High-level scheme comparing various prompting approaches, where boxes represent language sequences (thoughts) serving as steps toward problem-solving. Image source: Yao et al. (2023)

Like Chain-of-Thought (CoT) prompting, the ToT prompting technique is designed to elicit reasoning from LLMs when solving complex tasks.

Unlike CoT, the Tree-of-Thought technique does not rely on zero-shot prompting or on manually or automatically designed demonstrations. Instead, in the Tree-of-Thought methodology, a large language model builds a tree-like structure of thoughts, each represented by a token sequence, self-evaluates those thoughts and explores them with search algorithms, resembling a human reasoning process.

When to Use Tree of Thought Prompting

Tree-of-Thought prompting has demonstrated significant improvement in LLMs' ability to solve complex tasks. In Yao et al. (2023), the research team showed how ToT works on non-trivial assignments such as the Game of 24 math puzzle, creative writing tasks and mini crosswords. Most strikingly, on the Game of 24, GPT-4 achieved a success rate of 74% with ToT prompting, compared to just 4% with Chain-of-Thought prompting.

In general, the researchers note the potential of ToT prompting for a wide range of tasks requiring mathematical, symbolic, commonsense and knowledge reasoning. For example, Tree-of-Thought prompting can be applied to supply chain optimization and similar processes, helping reduce costs, identify bottlenecks and analyze the most expedient routes.

Current Research on Tree-of-Thought Prompting

As of this writing, there are three major pieces of work exploring different paths to Tree-of-Thought prompting.

Yao et al. (2023)

In the study Tree of Thoughts: Deliberate Problem Solving with Large Language Models, Yao et al. generalize the Chain-of-Thought approach into their Tree-of-Thoughts framework and run experiments across various types of tasks. The researchers elicit a tree-like reasoning process from LLMs using search algorithms, including breadth-first search (BFS) and depth-first search (DFS), implemented through custom code, as sketched below.
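
To make the search concrete, the sketch below shows one way a BFS-style ToT loop can be wired up in Python. It is an illustrative assumption rather than the authors' code: the prompts, the 0-to-10 scoring scale and the generic llm callable (any text-in, text-out model call) are placeholders.

from typing import Callable

def tree_of_thought_bfs(
    problem: str,
    llm: Callable[[str], str],  # any text-in, text-out LLM call (placeholder)
    depth: int = 3,
    breadth: int = 5,
    k: int = 5,
) -> str:
    """Expand candidate reasoning steps level by level, keeping the `breadth` best at each level."""
    frontier = [""]  # each entry is a partial chain of thoughts
    for _ in range(depth):
        candidates = []
        for state in frontier:
            # Thought generation: ask the LLM to propose k candidate next steps.
            proposal = llm(
                f"Problem: {problem}\nSteps so far:\n{state}\n"
                f"Propose {k} possible next steps, one per line."
            )
            for thought in proposal.splitlines()[:k]:
                candidates.append(state + thought + "\n")

        # Self-evaluation: ask the LLM to score each candidate, then prune
        # (breadth-first search with beam width `breadth`).
        def score(candidate: str) -> float:
            verdict = llm(
                f"Problem: {problem}\nSteps so far:\n{candidate}\n"
                "Rate from 0 to 10 how likely these steps are to reach a solution. Answer with a number."
            )
            try:
                return float(verdict.strip().split()[0])
            except (ValueError, IndexError):
                return 0.0

        candidates.sort(key=score, reverse=True)
        frontier = candidates[:breadth] or frontier
    return frontier[0]  # the most promising chain of thoughts found

In Yao et al.'s experiments the proposal and evaluation prompts are task-specific, so a production version would replace these generic prompts with task-tailored ones.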

Long et al. (2023)

In another work, titled Large Language Model Guided Tree-of-Thought, Long et al. (2023) use a different approach to engage LLMs in tree-like, multi-round conversations. To achieve this, the team introduces a ToT Controller trained through reinforcement learning (RL). The general schema for this implementation of ToT is provided below:

Figure 2: Scheme of software implementing the Tree-of-Thought approach through a prompter agent, a checker module, a memory module and a ToT controller. Image source: Long et al. (2023)
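
Read as a control loop, the schema in Figure 2 can be approximated in a few lines. The sketch below is an interpretation of the figure, not the paper's implementation: the llm and check callables, the ANSWER: convention and the fixed backtracking rule (which Long et al. replace with an RL-trained controller) are all assumptions made for illustration.

def tot_controller_loop(problem, llm, check, max_rounds=20):
    """Multi-round ToT loop: propose a step, check it, record or backtrack, repeat."""
    memory = []  # memory module: accepted partial solutions so far
    for _ in range(max_rounds):
        state = "\n".join(memory)
        # Prompter agent: assemble the next prompt from the problem and the conversation memory.
        prompt = (
            f"Problem: {problem}\nAccepted steps so far:\n{state}\n"
            "Give the next partial solution, or the final answer prefixed with ANSWER:."
        )
        candidate = llm(prompt)
        if check(problem, memory, candidate):    # checker module: is the proposed step valid?
            memory.append(candidate)
            if candidate.startswith("ANSWER:"):  # hypothetical convention for a final answer
                return "\n".join(memory)
        elif memory:
            # ToT controller: on an invalid step, backtrack by dropping the last accepted step
            # (Long et al. learn this decision with RL; here it is a fixed rule).
            memory.pop()
    return "\n".join(memory)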

Hulbert (2023)

Another work worth noting is the study by Dave Hulbert, which leverages a ToT-style prompt to improve the reasoning of GPT-3.5 on a tricky question. While GPT-3.5 returned an incorrect answer to the question with a CoT prompt, it was able to arrive at the correct solution with the ToT approach. The question and the prompt are provided in the example below.


Example of Tree of Thought Prompting

In Hulbert's study, ChatGPT-3.5 was given a representative question:

Bob is in the living room.
He walks to the kitchen, carrying a cup.
He puts a ball in the cup and carries the cup to the bedroom.
He turns the cup upside down, then walks to the garden.
He puts the cup down in the garden then walks to the garage.
Where is the ball?

When given the CoT prompt, "Think carefully and logically explaining your response," ChatGPT-3.5 incorrectly replied, "The ball is in the garden," while ChatGPT-4 provided a correct response with an explanation.

Then, the researcher applied a ToT prompt built upon a CoT technique:

Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realizes they're wrong at any point, then they leave.
The question is

Given this prompt, ChatGPT-3.5 engaged in a tree-like thought process and arrived at the right answer. A minimal way to reproduce the experiment is sketched below.
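
For readers who want to try this themselves, the sketch below sends the same ToT prompt to GPT-3.5 using the OpenAI Python SDK (v1.x). The model name and the wiring are illustrative assumptions; they are not part of Hulbert's write-up.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hulbert's ToT prompt followed by the ball-and-cup question from the example above.
tot_prompt = (
    "Imagine three different experts are answering this question.\n"
    "All experts will write down 1 step of their thinking, then share it with the group.\n"
    "Then all experts will go on to the next step, etc.\n"
    "If any expert realizes they're wrong at any point, then they leave.\n"
    "The question is: Bob is in the living room. He walks to the kitchen, carrying a cup. "
    "He puts a ball in the cup and carries the cup to the bedroom. He turns the cup upside down, "
    "then walks to the garden. He puts the cup down in the garden then walks to the garage. "
    "Where is the ball?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": tot_prompt}],
)
print(response.choices[0].message.content)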

Draw Upon Tree-of-Thought Prompting with VectorShift

Tree-of-Thought has proven itself a powerful technique that builds on the Chain-of-Thought approach. The ToT scheme lets an LLM's reasoning process explore multiple paths and correct errors on the way to solving complex problems.

VectorShift can help leverage ToT schemes and build them directly into your applications with no-code or SDK interfaces. For more information, please don't hesitate to get in touch with the VectorShift team or request a free demo.

© 2023 VectorShift, Inc. All Rights Reserved.
