ChatGPT doesn't plan forward - understanding the autoregressive nature of LLMs

The inspiration for this article came to me recently while I was watching a great keynote by Prof. Yann LeCun.

I strongly recommend dedicating an hour to watch it. 

Here, I am going to focus on one specific concept: the autoregressive nature of LLMs and its real-world implications.

Challenge with code generation

This talk helped me understand one of the persistent issues I run into while working with Claude.

At a high level: when presented with a complex task and a choice between returning a valid answer or an error, Claude would always return an answer - sometimes a valid one, but most often a hallucinated, invalid response.

Let's use an example for clarity. Imagine you were building a code generator that, given a set of functions and a user request, would either return valid code that, when executed, fulfils the user's request using the given functions, or return an empty function.

A prompt might look like this:

Here is the task: <task> (...) </task>

In order to complete the task, you must do the following:
1. Analyse the task carefully and break it down into separate steps.
2. For each step, you must assign a function that can fully satisfy that step.
3. Generate valid TypeScript code that completes the entire task using the provided API endpoints.

If you cannot complete the task due to a missing or incomplete endpoint, you must return an empty function.
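For context, here's a minimal sketch of how such a generator might call Claude with that prompt. It assumes the official @anthropic-ai/sdk TypeScript client; the buildPrompt helper, the model name, and the shape of the task/function descriptions are illustrative, not part of my original setup.

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Illustrative helper: wraps the user's task and the available functions
// in the prompt shown above.
function buildPrompt(task: string, functionDocs: string): string {
  return [
    `Here is the task: <task>${task}</task>`,
    `Here are the available functions: <functions>${functionDocs}</functions>`,
    "In order to complete the task, you must do the following:",
    "1. Analyse the task carefully and break it down into separate steps.",
    "2. For each step, you must assign a function that can fully satisfy that step.",
    "3. Generate valid TypeScript code that completes the entire task using the provided API endpoints.",
    "If you cannot complete the task due to a missing or incomplete endpoint, you must return an empty function.",
  ].join("\n\n");
}

async function generateCode(task: string, functionDocs: string): Promise<string> {
  const message = await client.messages.create({
    model: "claude-3-5-sonnet-latest", // any recent Claude model works here
    max_tokens: 1024,
    messages: [{ role: "user", content: buildPrompt(task, functionDocs) }],
  });
  // The SDK returns a list of content blocks; we only expect plain text here.
  return message.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("");
}
```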

While the details are simplified for brevity, the core message is clear: Claude needs to carry out a systematic analysis and, based on that analysis, determine the output.

The problem was that Claude would always return a code snippet, regardless of whether the necessary API endpoints were available or not.

Autoregressive nature of LLMs explained

Autoregressive LLMs, like GPT, are fundamentally designed to predict the next word in a sequence based on the preceding context.

However, their ability to "plan forward" in the same way humans can intentionally craft a narrative or strategize is limited. They don't inherently have foresight or understanding of future implications.

That said, if provided with a prompt that suggests a certain direction or objective, these models can generate text that appears coherent and in line with that direction because they've been trained on vast amounts of text and have learned patterns, styles, and typical narrative structures.

But this should not be mistaken for genuine forward planning or intentionality. The model is still reacting word-by-word based on its learned patterns and doesn't truly "understand" or "plan" in the human sense.

Thank you ChatGPT for this explanation 😆


To put it into perspective once again: by the time Claude would have realised it was missing the endpoints needed to complete the user's task, the decision to output code had already been made. Think of it as a streaming response, where the tokens had already been sent out to the client.
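To make that concrete, here's a toy sketch of the autoregressive loop (no real model involved, just to illustrate the point): each token depends only on the tokens already emitted, and once a token is yielded to the client, there is no mechanism to take it back.

```ts
// A "model" here is just any function that picks the next token from the context.
type NextToken = (context: string[]) => string;

function* generate(nextToken: NextToken, prompt: string[], maxTokens: number) {
  const context = [...prompt];
  for (let i = 0; i < maxTokens; i++) {
    const token = nextToken(context); // decided from past tokens only
    context.push(token);
    yield token; // streamed out immediately; earlier tokens can't be revised
  }
}
```

By the time the loop "discovers" a missing endpoint halfway through, the opening lines of the code snippet are already out of the door.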

From direction to reflection

The fix turned out to be really simple, but not that obvious at first.

Instead of giving Claude a directive about the kind of response it should produce:

If you cannot complete the task due to a missing or incomplete endpoint, you must return an empty function.

I adjusted the prompt to let Claude reflect and comment on its own output:

If you could not complete the task due to a missing or incomplete endpoint, you must include an <error> tag at the end of your response with a message explaining what was missing.

And that did the trick!

If the tag is present in the response, it indicates the code is incomplete and cannot be evaluated further.
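In code, the check is trivial. Here's a hypothetical post-processing step (the evaluateResponse name and the assumption that the message is wrapped in a closing </error> tag are mine, not part of the original setup):

```ts
// Returns either the generated code or the model's own explanation of what was missing.
function evaluateResponse(response: string): { code: string } | { error: string } {
  const match = response.match(/<error>([\s\S]*?)<\/error>/);
  if (match) {
    return { error: match[1].trim() }; // e.g. "No endpoint available for deleting orders"
  }
  return { code: response.trim() };
}
```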

Claude works really well with XML tags, so I often use them as a means of passing parameters and responses. I'll cover that in one of the upcoming blog posts.


I encourage you to experiment with how you prompt your LLM and verify whether it is providing factual answers or simply satisfying your request (and hallucinating at the same time). Asking it to self-reflect instead of giving direction may surprise you - in a positive way!
