This is me, cadCAD GPT!

This is Part II in a series of three articles introducing cadCAD GPT, an open-source Large Language Model (LLM) framework to support token system simulations based on radCAD or cadCAD Python models.

For further reading:

  • Part I: Hello, cadCAD GPT! Requirements and conceptual design of LLMs to support token system simulations

  • Part III: Let’s chat! Experiments and further development of cadCAD GPT

The Key Components of cadCAD GPT

Token systems are complex dynamical systems with emergent properties. Evaluating, stress-testing, and enhancing their designs requires running extensive system simulations. Token engineers have to iterate on experiments and parameter settings, and weigh a multitude of intermediary results, before arriving at a definitive conclusion. cadCAD GPT supports and structures this process, allowing token engineers to concentrate their efforts on formulating the essential questions and making informed decisions. Moreover, a natural language interface enables stakeholders without a token engineering background to interact with system models. cadCAD GPT thus transcends today’s boundaries, empowering a wide spectrum of stakeholder groups to use simulations in their decision-making processes.

In this article, we introduce the cadCAD GPT components and how they function. We demonstrate how cadCAD GPT agents are constructed, how to connect cadCAD GPT with any radCAD or cadCAD model, and how the framework can be further expanded and customized.

At its core, cadCAD GPT takes three steps to get to a simulation result:

  • The cadcad_gpt chatbot takes in a user query and passes it to the Planner Agent.

  • The Planner Agent returns a task list specifying which tools and information to use, in the correct order, to answer the user query.

  • The Executor Agent loops through the task list one item at a time, with access to tools and memory. It reasons about which inputs the tools need, based on the user query and context, and remembers the results to proceed to the next task.

This orchestration core of cadCAD GPT is equipped with modular, customizable toolkits and memory. It grants cadCAD GPT access to today’s most powerful data analysis and machine learning libraries and external data to simulate token systems.
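As a rough illustration of this loop, the sketch below shows how the three steps fit together. It is a minimal, hypothetical sketch: the function names are placeholders, not the actual cadCAD GPT API.

```python
# Minimal, hypothetical sketch of the cadCAD GPT orchestration loop.
# All names below are placeholders, not the actual cadCAD GPT API.

def planner_agent(user_query):
    # In cadCAD GPT this is an LLM call; here we return a canned task list.
    return ["fetch the current prey death rate", "plot the prey population column"]

def executor_agent(task, memory):
    # In cadCAD GPT this is an LLM call that selects a tool plus arguments,
    # runs it in a Python shell, and records the observation.
    observation = f"completed: {task}"
    memory.append({"task": task, "observation": observation})
    return observation

def answer(user_query):
    memory = []                              # short-term memory shared across tasks
    task_list = planner_agent(user_query)    # steps 1 and 2: plan an ordered task list
    for task in task_list:                   # step 3: execute the tasks one by one
        executor_agent(task, memory)
    return memory                            # all agent messages are surfaced to the user

print(answer("What is the prey death rate? Plot prey population over time."))
```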

The cadCAD GPT Chatbot (cadcad_gpt.py)

cadCAD_GPT receives the user input and prints the user output. It sets up and orchestrates all agents, tools, and memory items. While the Planner and Executor Agents process the steps toward the outcome, cadCAD_GPT collects all communication between the agents and makes it available to the user.

To initiate the chatbot, we pass an OpenAI API key and the cadCAD or radCAD model objects model, simulation, and experiment. Additionally, any information useful for running the simulations can be added as docs (see the Memory section below).

Initializing CadCAD_GPT
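For illustration, initialization might look like the snippet below. The constructor call matches the one shown later in this article; the import paths and file names are assumptions:

```python
# Illustrative setup; import paths and file names are assumptions.
from cadcad_gpt import CadCAD_GPT                  # the cadCAD GPT chatbot class
from model import model, simulation, experiment    # your radCAD/cadCAD model objects

openai_key = "sk-..."                              # your OpenAI API key
docs = open("model_documentation.md").read()       # optional model documentation (see Memory below)

cadcad_gpt = CadCAD_GPT(openai_key, model, simulation, experiment, docs)
```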

In the current version, with no graphical UI available yet, any question is passed to cadCAD GPT as a plain string.

Prompting cadCAD GPT
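Conceptually, that looks like the single line below; the exact call syntax is an assumption on our side, not the documented API:

```python
# Illustrative only: the user query is handed over as a plain Python string.
cadcad_gpt("What is the current prey death rate, and how does the prey population evolve?")
```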

Generally, cadCAD GPT user output collects all communication between the Planner and Executor agents. In the current version’s default mode, the user output includes the Planner Agent’s task list and the Executor Agent’s thoughts, actions, and observations. However, cadCAD GPT user outputs can be customized to include or exclude any messages between agents. Our framework allows developers to include all messages to easily verify the results – or display only selected messages for better UX.

cadCAD GPT response (User Output, default mode)

Planner Agent (agents.py)

After any user input, cadCAD GPT triggers the Planner Agent to process the user question. The Planner Agent breaks down the user question into the low-level steps required to achieve the goal. It can reason about the user query and plan the steps needed to accomplish the task. It then describes each step with the tool to use and the context to pass to that tool, and finally provides the task list as output in a parseable format. What’s unique about the Planner Agent is that it does not follow predefined workflows: it uses an LLM to make contextual decisions at each step, which enables it to solve non-deterministic workflows.

Planner Agent output (complete). For better UX, cadCAD GPT only prints the last line, marked with ``` in the default mode.

The Planner Agent runs on OpenAI’s gpt-3.5-turbo with a system prompt that includes instructions on how to reason about a user query. The prompt is inspired by ideas from Chain-of-Thought Reasoning and instructs the LLM to generate intermediate reasoning steps to improve its contextual reasoning abilities. Additionally, the Planner Agent can access a dynamic list of the available tool names and descriptions (see Toolkit below). Finally, we tuned the Planner Agent’s behavior with few-shot examples to optimize task list creation and further processing by the Executor Agent. This part is a key lever for optimizing agents for particular use cases and tweaking how they respond to user prompts; it should only be touched with a sufficient level of experience in AI agent design.

Once the Planner Agent finishes creating the task list, cadcad_gpt parses the plan into a Python list and passes it to the Executor Agent one task at a time.
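To illustrate the parsing step, the sketch below assumes the Planner Agent’s output ends with the task list written as a Python-style list of strings; the actual delimiter and format in cadCAD GPT may differ:

```python
import ast

# Example Planner output: reasoning steps followed by a final, parseable task list.
planner_output = """Thought: the user wants the prey death rate and a plot.
['Use model_info to fetch the prey_death_rate parameter',
 'Use plotter to plot the prey_population column']"""

# Keep only the final, parseable part and turn it into a Python list of tasks.
task_list_str = planner_output[planner_output.index("["):]
task_list = ast.literal_eval(task_list_str)

for task in task_list:
    print(task)   # each task is handed to the Executor Agent one at a time
```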

Executor Agent (agents.py)

The Executor Agent specializes in working with tools and memory. When receiving a task, it uses an LLM to figure out the tool to use and the correct arguments to pass to complete the task.

First, the Executor Agent has a Thought: it reasons about the task to accomplish. Second, it selects an Action: this step makes a call to OpenAI and generates a JSON object that includes the name of the function to call and the arguments to pass to it. Then, the Executor Agent makes an Observation: it executes the function with the given arguments in a Python shell and reasons about the results. These Thought-Action-Observation loops are inspired by the ReAct framework. Along with Program-Aided Language Modelling techniques, these approaches show remarkably good results in diverse language reasoning, symbolic reasoning, and decision-making tasks.

Finally, the Executor Agent saves the chat history in its short-term memory to aid in contextual decision-making for subsequent steps in the task list.

Executor Agent’s Action step output (JSON including a function to call and arguments to pass to the function). In default mode, cadCAD GPT prints this step in a more readable format.
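As an illustration of the Action step, the sketch below shows how such a function call can be requested and parsed with the pre-1.0 openai Python client and gpt-3.5-turbo-0613. The function schema is a simplified example, not cadCAD GPT’s exact definition:

```python
import json
import openai

openai.api_key = "sk-..."

# Simplified schema for one tool (cadCAD GPT generates these from docstrings).
functions = [{
    "name": "change_param",
    "description": "Change a model parameter and re-run the simulation.",
    "parameters": {
        "type": "object",
        "properties": {
            "param": {"type": "string", "description": "Name of the parameter to change."},
            "value": {"type": "number", "description": "New value for the parameter."},
        },
        "required": ["param", "value"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Set the prey death rate to 0.03 and run the simulation."}],
    functions=functions,
    function_call="auto",
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    name = message["function_call"]["name"]                   # e.g. "change_param"
    args = json.loads(message["function_call"]["arguments"])  # e.g. {"param": "prey_death_rate", "value": 0.03}
    print(name, args)  # the Executor Agent would now execute the matching Python tool
```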
cadCAD GPT user input and output (default mode)

The example above shows a complete set of cadCAD GPT messages, from user input to user output (default mode), for a multi-step task list. cadCAD GPT first prints the Planner Agent’s task list. Then, it displays the Thought, Action, and Observation loops created by the Executor Agent. Finally, since the user in our example asked for a plot, the Executor Agent provides a diagram using the plotter tool (see Toolkit below).

The Executor Agent uses OpenAI’s gpt-3.5-turbo-0613, a model equipped to select Python functions to run while solving a task, a capability called Function Calling (see Toolkit below). An initial system prompt includes basic instructions and information about the parameters of the simulation model.

Toolkit (toolkit.py)

Tools available in cadCAD GPT's Toolkit, and planned expansions (italic).

One of cadCAD GPT’s most powerful features is its Toolkits, made accessible to agents via Function Calling (see below). cadCAD GPT agents can select and run Python tools and thus enable natural language access to powerful data analysis and machine learning libraries. cadCAD GPT agents can interact with cadCAD/radCAD models to run experiments, analyze results, visualize plots, access APIs, and more. cadCAD GPT collects all tools (and memory accessible via tools) in a Toolkit class. The Planner Agent reviews the toolkit descriptions to find suitable tools for the task list and then hands them to the Executor Agent to parse arguments and execute.

Function Calling

To unlock OpenAI’s function calling capabilities, every tool in cadCAD GPT has to include a description as a docstring in """triple double quotes""", following PEP 257 – Docstring Conventions.

With these descriptions included, cadCAD_GPT automatically generates a function calling schema for all tools in the toolkit and makes it readily available to both the Planner and the Executor Agent.

Function description to enable Function Calling, according to PEP 257 – Docstring Conventions
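As an example, a tool with a PEP 257 docstring, together with a simplified, illustrative schema builder, might look like the sketch below; how cadCAD_GPT actually generates its schemas may differ:

```python
import inspect

def change_param(param: str, value: float):
    """Change a parameter of the cadCAD/radCAD simulation and re-run it."""
    ...  # update the parameter, re-run the simulation, refresh the dataframe

def function_schema(fn):
    # Illustrative schema builder: derive an OpenAI function-calling schema
    # from the tool's name, type hints, and PEP 257 docstring.
    type_map = {str: "string", float: "number", int: "integer", bool: "boolean"}
    params = {
        name: {"type": type_map.get(p.annotation, "string")}
        for name, p in inspect.signature(fn).parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {"type": "object", "properties": params, "required": list(params)},
    }

print(function_schema(change_param))
```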

Modular Toolkits

We aim to continuously add new tools, and new memory made accessible via tools, to cadCAD GPT. cadCAD GPT makes it easy to expand toolkits and add information and data sources: all Python functions are available to the cadCAD GPT agents as soon as they are added to the toolkit class, provided they contain the necessary elements (see “Function Calling” above; a sketch of adding a custom tool follows the list below). This flexibility makes cadCAD GPT a powerful framework for any project’s specific simulation needs. At the time of publishing, the cadCAD GPT Toolkit includes the following tools:

  • model_info(): returns the current values of the radCAD/cadCAD model objects’ parameters

  • change_param(): changes a parameter of the cadCAD simulation and re-runs the simulation to update the dataframe

  • analysis_agent: a specialized agent stored as a tool; builds and executes Python pandas queries to analyze the dataframe

  • model_documentation(): allows natural language question answering over the model documentation using a Retrieval Augmented Generation approach; this is an example of how long-term memory is made accessible via the respective tool

  • plotter(): plots any column of the dataframe
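As a sketch of how a new tool might be added, the snippet below defines a simple price-lookup tool and a hypothetical stand-in for the toolkit class; the add_tool interface and the CoinGecko example are assumptions, not part of cadCAD GPT’s current toolkit:

```python
import requests

class Toolkit:
    """Hypothetical stand-in for cadCAD GPT's toolkit class."""
    def __init__(self):
        self.tools = {}

    def add_tool(self, fn):
        # Agents discover tools by name and docstring (see Function Calling above).
        self.tools[fn.__name__] = fn

def token_price(token_id: str) -> float:
    """Fetch the current USD price of a token from the CoinGecko API."""
    url = "https://api.coingecko.com/api/v3/simple/price"
    resp = requests.get(url, params={"ids": token_id, "vs_currencies": "usd"}, timeout=10)
    return resp.json()[token_id]["usd"]

toolkit = Toolkit()
toolkit.add_tool(token_price)
print(list(toolkit.tools))   # ['token_price']
```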

Memory (memories.py)

Memory is information available to cadCAD GPT. An off-the-shelf LLM, like OpenAI’s GPT models, is not trained on specific token engineering domain knowledge, or that information might not be publicly available. Thus, we have to make this knowledge accessible via Memory. Moreover, token engineers can instruct cadCAD GPT to include or exclude particular data and information. This supports factual consistency, improves the reliability of the generated responses, and helps to mitigate the problem of LLM hallucinations.

Examples of memory include data stored on external servers, knowledge about the industry in a digital book (PDF), or the context of a task stored in cadCAD GPT’s message history. In our framework, we conceptualize memories in two categories: short-term memory and long-term memory.

Long-term Memory

Long-term memory is information available to cadCAD GPT at any point in time; it does not have to be included in the user input. Both the Planner and the Executor Agent can access long-term memory in a controllable, verifiable way. Below, we show how different types of long-term information can be made accessible to LLM agents.

Long-term memory via a semantically searchable vector database

Not all user questions to cadCAD GPT require running a simulation. Users can also ask cadCAD GPT about the purpose of a model or the definition of certain terms in an output they received.

In the example below, a user asks cadCAD GPT questions about the radCAD/cadCAD model itself, which is made available in the documentation file (cadcad_gpt = CadCAD_GPT(openai_key, model, simulation, experiment, docs)). Best practices for building such a model documentation file recommend including an introduction to the model, the model’s assumptions, the set of parameters and metrics to observe, and an explanation of the state update logic.

We take this documentation file and split it into chunks to embed them in a vector database. This allows cadCAD GPT to semantically search the file and answer questions via a Retrieval Augmented Generation setup. Similarly, any text-based data can be made available to agents via semantically searchable vector databases. When executing the task list, the Executor Agent utilizes the model_documentation tool to fetch the information needed for the user output.
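A minimal sketch of such a retrieval step, using the pre-1.0 openai embeddings API, fixed-size chunks, and cosine similarity (the chunking strategy and vector store used in cadCAD GPT may differ), could look like this:

```python
import numpy as np
import openai

openai.api_key = "sk-..."

def embed(text):
    # Pre-1.0 openai client; returns one embedding vector per input string.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp["data"][0]["embedding"])

# Split the model documentation into fixed-size chunks and embed each chunk.
docs = open("model_documentation.md").read()
chunks = [docs[i:i + 1000] for i in range(0, len(docs), 1000)]
index = [(chunk, embed(chunk)) for chunk in chunks]

def model_documentation(question, top_k=2):
    """Return the documentation chunks most relevant to the question."""
    q = embed(question)
    scored = sorted(index, key=lambda item: -np.dot(q, item[1]) /
                    (np.linalg.norm(q) * np.linalg.norm(item[1])))
    return "\n\n".join(chunk for chunk, _ in scored[:top_k])

# The retrieved chunks are then passed to the LLM as context to answer the question.
print(model_documentation("What is the purpose of this model?"))
```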

Fetching information from long-term memory

Long-term memory as numerical, tabular data

Another type of long-term memory is the output of simulation runs, blockchain transaction data, or any type of tabular, numerical, or text data. The example below shows how the current value of the parameter “prey death rate” can be extracted from a predator-prey simulation by prompting cadCAD GPT in natural language. The Planner Agent selects the “model_info” tool to fetch the parameter value, and the Executor Agent returns the result.

Fetching a parameter value from a simulation
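For illustration, a model_info-style tool could read parameter values straight from the radCAD model object; this assumes the model exposes its parameters as a params dictionary, which the actual tool may handle differently:

```python
def model_info(model):
    """Return the current parameter values of the radCAD/cadCAD model."""
    # Assumption: the model object exposes its parameters as a dict,
    # e.g. {"prey_death_rate": [0.02], "prey_birth_rate": [0.15], ...}
    return dict(model.params)

# Hypothetical usage with a predator-prey model:
# print(model_info(model)["prey_death_rate"])
```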

Short-term memory

Short-term memory is information cadCAD GPT updates dynamically before making it available to agents. In default mode, short-term memory is available to agents only while processing the current user input.

cadCAD GPT stores the message history between the user, the Planner Agent, and the Executor Agent in short-term memory to enable optimal, contextual decisions during task list execution. In cadCAD GPT’s default mode, we delete this message history periodically (see Executor Agent above). This keeps the agents’ attention on the task and fits within the context window limitations of OpenAI’s gpt-3.5-turbo-0613.
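Conceptually, this short-term memory can be pictured as a plain message list that grows during one user query and is wiped afterwards; the sketch below is illustrative, not the exact data structure cadCAD GPT uses:

```python
class ShortTermMemory:
    """Illustrative message buffer, wiped between user queries."""
    def __init__(self):
        self.messages = []

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def clear(self):
        # Called once a user query is fully answered, keeping the context
        # window small for gpt-3.5-turbo-0613.
        self.messages = []

memory = ShortTermMemory()
memory.add("planner", "['fetch prey_death_rate', 'plot prey_population']")
memory.add("executor", "Observation: prey_death_rate = 0.02")
memory.clear()   # reset before the next user prompt
```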

cadCAD GPT Further Expansions

Due to its highly modular design, cadCAD GPT can be customized and expanded according to any project’s individual simulation needs. We aim to develop the following expansions with cadCAD GPT alpha users and contributors:

  • Adding new tools:

    • Integrate Python parameter sweeps and A/B testing tools

    • Auto-create a standard cadCAD/radCAD model documentation to allow users to ask questions about the model

  • Optimizing short-term memory

    • Build better logic for wiping and updating short-term memory to improve contextual decision-making for both Planner and Executor Agents

  • Converting short-term memory to long-term memory

    • Allow agents to remember important aspects of a conversation beyond a single user prompt

    • Version control of model parameter settings and experiments

    • Enable undoing tasks

  • Adding Retrieval Augmented Generation to tune Planner Agent’s planning abilities further

    • Build a repository of user inputs and their expected task lists in a token engineering context; dynamically fetching these into the system prompt provides the Planner Agent with better few-shot examples (see Planner Agent above)

    • Fine-tune the Planner Agent’s model once a sufficiently large repository of token engineering user inputs and task lists is available

  • Newer/alternative LLMs

    • Update cadCAD GPT to use OpenAI GPT-4 Turbo, with its new features for calling models and tools, a 128K context window, and more

    • Test alternative open-source LLMs

If you are interested in contributing to cadCAD GPT’s further development, sign up for the demo below or drop us a line at contact@tokenengineering.net.

Summary

cadCAD GPT is designed to harness the immense potential of Large Language Models (LLMs) for supporting token system simulations. This article introduces the inner workings of cadCAD GPT and its ability to interact with Python models following the cadCAD/radCAD model structure. Through the concept of Toolkits, cadCAD GPT provides access to cutting-edge data analysis and machine learning libraries, enabling the utilization of data in diverse formats stored in Memory. A notable highlight is cadCAD GPT's capability to handle non-deterministic task sequencing while empowering its human collaborators to oversee and track the workflow of AI agents, ensuring the generation of verifiable and reproducible results.

With cadCAD GPT, token engineers gain the tools needed to explore and optimize the design of complex systems with the support of LLM agents, ultimately contributing to the evolution of token engineering practices.

cadCAD GPT Demo

cadCAD GPT will be available on Thursday, Nov 30, 3:00pm UTC
Sign up for the demo and be the first to get access!

Acknowledgements

cadCAD GPT was kickstarted by funding received from Token Engineering Commons. We thank the TE Commons community, and Gideon Rosenblatt in particular, who encouraged us to embark on this exciting journey. Big thank you to our advisors Roderick McKinley, Richard Blythman, and Robert Koschig for ongoing support and feedback. Shoutout to Dr. Achim Struve, Dimitrios Chatzianagnostou, Stephanie Tramicheck, Ivan Bermejo, Rohan Sundar, and Lukasz Szymanski for the most valuable alpha user feedback and insights, and Kaidlyne Neukam for her tireless support in publishing this work.

The token sales spreadsheet model that informed our token sales experiments is available online, along with a comprehensive online course by Roderick McKinley.

TE Academy is the home for the token engineering community. Learn how to design token systems with rigor and responsibility! Sign up for our newsletter to receive the latest token engineering trends, tools, job offers and ecosystem news.
