In this article, we will explore the correct methods for hashing and signing a complex data structure. This structure encompasses a combination of simple data such as bool, address, and uint256, as well as more intricate data like arrays and nested structs. Specifically, we'll focus on a structure utilized by Kwenta, a derivatives trading platform powered by Synthetix. This structure is pivotal in defining off-chain limit orders, which are subsequently executed on-chain with cryptographic guarantees of integrity. The aim is to provide a clear and comprehensive guide.
🌊 Before diving into the main content, it's expected that you already have some experience with EIP-712. Additionally, possessing some familiarity with encoding in Solidity will be helpful.
I needed to build limit orders for Kwenta's trading engine. These orders had to specify every aspect of a trade that an unknown party might execute in the future. It was also vital to ensure an order could not be used to exploit the trader or the protocol at execution time. Finally, they needed to be efficient, requiring minimal on-chain computation.
A naive approach might involve storing the orders on-chain as soon as they are created, and then referring to these details later to ensure certain conditions are met before execution. However, this method can be expensive (for both the trader and protocol), especially if the conditions to be verified and stored on-chain are extensive.
I discovered that signing structured data off-chain, with on-chain verification as described in EIP-712, offered an excellent solution to my challenge. This approach eliminates the need to store order details on-chain, significantly reducing gas consumption. Consequently, the level of specificity required for conditions when creating an order doesn't negatively impact traders, even if the order may never be executed. Additionally, canceling outstanding orders becomes more cost-effective because most of the work occurs off-chain, and the nonce (an on-chain identifier for orders used to mitigate replay attacks) can easily be invalidated on-chain as necessary. Furthermore, if a trader prefers to prioritize execution efficiency over absolute trustlessness, condition validation, which can be the most expensive aspect, can also be delegated to the off-chain system. However, it's important to note that certain details, such as nonce and signer authenticity, are always verified on-chain.
📖 The off-chain verification piece of the solution is a bit out of scope for this article. If you do want to read about it, though, I have written extensively about the techniques/strategies I've used in the protocol's wiki.
Several articles do a fantastic job of walking through the process of hashing structured data following the standards described in EIP-712. However, I have found that many of these do not discuss the motivations behind certain encoding choices, nor how to handle more complicated structured data.
Let's start by defining the limit order data structure, which we will refer to as a conditional order for the rest of this article (as it is also referred to in documentation and code):
struct OrderDetails {
    uint128 marketId;
    uint128 accountId;
    int128 sizeDelta;
    uint128 settlementStrategyId;
    uint256 acceptablePrice;
    bool isReduceOnly;
    bytes32 trackingCode;
    address referrer;
}

struct ConditionalOrder {
    OrderDetails orderDetails;
    address signer;
    uint256 nonce;
    bool requireVerified;
    address trustedExecutor;
    uint256 maxExecutorFee;
    bytes[] conditions;
}
📖 The details around what specific member variables like requireVerified or trackingCode mean are unimportant here, but if you're interested, check out the wiki!
To properly hash this data, you will want to define a couple of things first.
The domain separator (DOMAIN_SEPARATOR) is a unique, context-specific piece of data that establishes a foundational security layer by binding a signature to domain-specific information. For example, the inclusion of chainId prevents a signed message from being replayed against a duplicate contract deployed on a different chain. For our example, the domain separator will be the hash of the following encoded contents concatenated together:
domain type hash (DOMAIN_TYPEHASH)
domain details (i.e., hashed name, hashed version, non-hashed chain id, non-hashed verifying contract)
🧂 An optional salt value may be appended to the domain details as a final measure, though its inclusion is not necessary in this instance.
📚 EIP-712 only mandates one domain details field, with additional fields being elective. This gives implementors flexibility in enhancing domain security (i.e., a name alone suffices to meet the standard's requirements).
That's a lot of hashing already 😅 so here is some code to help illustrate:
bytes32 DOMAIN_TYPEHASH = keccak256(
    "EIP712Domain(string name,string version,uint256 chainId,address verifyingContract)"
);

bytes32 DOMAIN_SEPARATOR = keccak256(
    abi.encode(
        DOMAIN_TYPEHASH,
        keccak256(bytes(name)),    // name is a string, so cast to bytes before hashing
        keccak256(bytes(version)), // same for version
        chainId,
        verifyingContract
    )
);
🏌️♂️ The DOMAIN_TYPEHASH and DOMAIN_SEPARATOR are typically constant within a specific context, so they can be cached to save gas in subsequent transactions. See Solady’s _cachedDomainSeparator for example.
You may notice some “inconsistencies” already: some data, like the name and version, is hashed within the domain details, while other data is not. When you are encoding any structured data following the EIP-712 specification, there are a few things to consider:
Is the data an atomic type?
Is the data a dynamic type?
Is the data a reference type?
📚 EIP-712 neatly categorizes every type in Solidity into one of three distinct types, offering clear guidance on handling them. It's crucial to note, though, that EIP-712 is tailored to be EVM-specific but language-agnostic. This means it can be applied even in programming languages like Vyper. However, as our focus is on Solidity, this article will be specifically curated to discuss concepts in Solidity terms.
After determining the type of data you're working with, encoding it essentially becomes a process of following specific directions. However, take caution; correctly classifying the data type can be trickier than it seems. For instance, you might initially think of address[] as a dynamic type, but under EIP-712 guidelines, it's actually classified as a reference type.
Atomic data is straightforward. From the perspective of a Solidity developer, there's no need for additional preparation before encoding it.
💡 Atomicity in Data Types: In some contexts, atomic refers to the simplest types of data that can't be broken down further. For example, an integer or a boolean value in a programming language is often considered atomic because it represents a single, indivisible value.
Dynamic data, such as string and bytes, simply requires hashing, which converts it into an atomic type (bytes32) before it is encoded. In our example, this applies to the name and version member variables.
Reference types (struct and arrays defined as T[], where T is some generic type) are broken down into their contents, which are then encoded recursively.
📚 The standard defines reference types as “(…) arrays and structs. Arrays are either fixed size or dynamic and denoted by Type[n] or Type[], respectively. Structs are references to other structs by their name.”
For encoding a struct, each member variable is processed based on its type. Take, for example, the EIP712Domain type; each of its member variables is encoded according to its specific type. chainId is an atomic type, so it can be encoded directly without any changes. However, name is a dynamic type, necessitating hashing before encoding. After encoding each member variable, the next step is to concatenate them in order, ensuring each encoded member is precisely 32 bytes in length. This might require padding for some member values, making the use of abi.encode essential (see later sections for why this matters). The need to hash dynamic types arises in part from this requirement for uniform 32-byte encodings.
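The struct-encoding procedure just described can be sketched outside Solidity, too. Below is a minimal, illustrative Python sketch of encoding the EIP712Domain members into one 32-byte word each; note that hashlib.sha3_256 is only a stand-in for keccak256 (Ethereum uses the pre-standardization Keccak, so real EIP-712 digests will differ), and the name/version/chain values are hypothetical:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for keccak256: Ethereum uses pre-standard Keccak,
    # so real on-chain digests will differ from sha3_256 output.
    return hashlib.sha3_256(data).digest()

def encode_domain(name: str, version: str, chain_id: int, contract: int) -> bytes:
    type_hash = h(
        b"EIP712Domain(string name,string version,"
        b"uint256 chainId,address verifyingContract)"
    )
    # Each member becomes exactly one 32-byte word:
    return b"".join([
        type_hash,
        h(name.encode()),              # dynamic type: hash first
        h(version.encode()),           # dynamic type: hash first
        chain_id.to_bytes(32, "big"),  # atomic type: pad to 32 bytes
        contract.to_bytes(32, "big"),  # atomic type: pad to 32 bytes
    ])

encoded = encode_domain("Kwenta", "1", 10, 0xBEEF)
assert len(encoded) == 5 * 32  # five members, one word apiece
```

Because every member occupies exactly one word, the resulting byte string can always be split back into its five fields deterministically.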
For arrays, the encoding process involves processing each member (or element) individually based on its type, similar to how a struct is processed. These processed elements are then concatenated, adhering to the same rule of ensuring each encoded member is exactly 32 bytes in length, as established for encoding a struct.
🚨 Despite the fact that string/bytes are implemented as arrays “under the hood” (i.e., Byte[], which technically follows the T[] pattern that defines reference arrays), the EIP clearly distinguishes them as "dynamic types," while categorizing other arrays as "reference types."
Now is an opportune moment to pause and examine the distinct differences between abi.encode and abi.encodePacked. The use of abi.encode ensures unambiguous results as it includes metadata like type information and offsets. On the other hand, abi.encodePacked might yield ambiguous results, particularly when encoding two or more dynamic elements, because it excludes such metadata. abi.encodePacked creates a more compact encoding by skipping padding between elements, with the exception of arrays, where padding is indeed included. However, even with arrays, it's important to note that while abi.encodePacked may retain padding, it forfeits metadata. This distinction can be crucial in certain encoding scenarios.
Here's an illustration in code of how abi.encodePacked can result in ambiguous outcomes, and, in this specific scenario, lead to a collision:
abi.encodePacked("ab", "c") == abi.encodePacked("a", "bc")
In the previous example, it's worth noting that without including metadata, distinguishing between the two pieces of original data becomes impossible. However, including metadata alone may still not be sufficient. If we were to include metadata and combine everything without proper padding, it would remain exceedingly challenging to comprehend the data. Let's use the following example to illustrate this point:
console.logBytes(abi.encodePacked("ab", "c"));
/*
0x
616263
*/
console.logBytes(abi.encodePacked("a", "bc"));
/*
0x
616263
*/
console.logBytes(abi.encode("ab", "c"));
/*
0x
0000000000000000000000000000000000000000000000000000000000000040
0000000000000000000000000000000000000000000000000000000000000080
0000000000000000000000000000000000000000000000000000000000000002
6162000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000001
6300000000000000000000000000000000000000000000000000000000000000
*/
console.logBytes(abi.encode("a", "bc"));
/*
0x
0000000000000000000000000000000000000000000000000000000000000040
0000000000000000000000000000000000000000000000000000000000000080
0000000000000000000000000000000000000000000000000000000000000001
6100000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000002
6263000000000000000000000000000000000000000000000000000000000000
*/
If we removed the padding (but kept every meaningful byte) from the latter two examples, we would have:
console.logBytes(abi.encode("ab", "c")) -> 0x40800261620163
console.logBytes(abi.encode("a", "bc")) -> 0x40800161026263
The integrity of this data without padding has been compromised because it is no longer evident where one encoded piece of content or metadata starts or ends (just like we saw in abi.encodePacked). When data is padded to conform uniformly to 32-byte words, it becomes feasible to predictably parse the data word by word.
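To make the “word by word” point concrete, here is a small Python sketch (not part of the article's original Solidity code) that parses the abi.encode("ab", "c") output shown above using nothing but fixed 32-byte words and the offset/length metadata:

```python
# The exact bytes printed by console.logBytes(abi.encode("ab", "c")) above.
encoded = bytes.fromhex(
    "0000000000000000000000000000000000000000000000000000000000000040"
    "0000000000000000000000000000000000000000000000000000000000000080"
    "0000000000000000000000000000000000000000000000000000000000000002"
    "6162000000000000000000000000000000000000000000000000000000000000"
    "0000000000000000000000000000000000000000000000000000000000000001"
    "6300000000000000000000000000000000000000000000000000000000000000"
)

def word(data: bytes, i: int) -> bytes:
    """Return the i-th 32-byte word."""
    return data[32 * i : 32 * (i + 1)]

def read_string(data: bytes, head_word: int) -> str:
    """Follow an offset word to its (length, contents) tail."""
    offset = int.from_bytes(word(data, head_word), "big")
    length = int.from_bytes(data[offset : offset + 32], "big")
    return data[offset + 32 : offset + 32 + length].decode()

# Because every field is a whole word, parsing is unambiguous:
assert read_string(encoded, 0) == "ab"
assert read_string(encoded, 1) == "c"
```

Strip the padding and the offsets stop pointing at word boundaries, which is exactly why the unpadded variants above are unrecoverable.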
Here’s another example, using a test in Foundry, that demonstrates how abi.encodePacked pads arrays but omits metadata, in contrast to abi.encode, which preserves such information:
function test_1() public pure {
    uint256[] memory arr = new uint256[](2);
    arr[0] = 6;
    arr[1] = 9;
    // see contents of encodedArr & encodePackedArr below
    bytes memory encodedArr = abi.encode(arr);
    bytes memory encodePackedArr = abi.encodePacked(arr);
    assert(keccak256(encodedArr) != keccak256(encodePackedArr));
}
/*
encodedArr:
0x
0000000000000000000000000000000000000000000000000000000000000020 (offset)
0000000000000000000000000000000000000000000000000000000000000002 (length)
0000000000000000000000000000000000000000000000000000000000000006 (element)
0000000000000000000000000000000000000000000000000000000000000009 (element)
encodePackedArr:
0x
0000000000000000000000000000000000000000000000000000000000000006 (element)
0000000000000000000000000000000000000000000000000000000000000009 (element)
*/
Observe how both encodings apply padding to the array, but the packed version omits the metadata, specifically the offset and length.
🎬 If you wish to learn all there is to know about ABI encoding for Solidity, check out this fantastic video dedicated to the topic.
We need to establish a hash that captures both the shape and values of the typed data. Looking at the conditional order data structure we defined earlier, it becomes evident that we're handling another reference type, similar to the EIP712Domain. To add to the complexity, one of the structs includes a member variable that represents a non-fixed-length array of dynamically typed data (a reference type (T[]) of dynamic types (T is of type bytes)). What a mess 😵💫!
Let's start with the simpler task of hashing the nested order details struct member variable of the conditional order struct and generating an ORDER_HASH. This object defines specific order details for a given perpetual futures market in Synthetix v3. Since it doesn't contain any dynamic or reference types, the process should be pretty easy.
bytes32 ORDER_DETAILS_TYPEHASH = keccak256(
    "OrderDetails(uint128 marketId,uint128 accountId,int128 sizeDelta,uint128 settlementStrategyId,uint256 acceptablePrice,bool isReduceOnly,bytes32 trackingCode,address referrer)"
);

bytes32 ORDER_HASH = keccak256(
    abi.encode(
        ORDER_DETAILS_TYPEHASH,
        marketId,
        accountId,
        sizeDelta,
        settlementStrategyId,
        acceptablePrice,
        isReduceOnly,
        trackingCode,
        referrer
    )
);
Consider how we use abi.encode and not abi.encodePacked. We want this order hash to be predictable and non-ambiguous for the same reasons listed previously. Using abi.encodePacked here would pack any data less than 32 bytes together without padding. Another interesting observation is that abi.encode, in this case, does not include any metadata because all of the contents are atomic.
Observe the following example, showing that even without metadata, all the content is comprehensively included (also, notice the differences between both encoding schemes):
function test_2() public pure {
    address addr = address(0xBEEF);
    bytes32 b1 = bytes32(uint256(99));
    uint256 num1 = 19;
    uint128 num2 = 29;
    bytes memory encode = abi.encode(addr, b1, num1, num2);
    bytes memory encodePacked = abi.encodePacked(addr, b1, num1, num2);
    assert(keccak256(encode) != keccak256(encodePacked));
}
/*
encode:
0x
000000000000000000000000000000000000000000000000000000000000beef
0000000000000000000000000000000000000000000000000000000000000063
0000000000000000000000000000000000000000000000000000000000000013
000000000000000000000000000000000000000000000000000000000000001d
encodePacked:
0x
000000000000000000000000000000000000beef000000000000000000000000
0000000000000000000000000000000000000063000000000000000000000000
0000000000000000000000000000000000000013000000000000000000000000
0000001d
*/
Next, we tackle the conditional order hash (CONDITIONAL_ORDER_HASH), the most complex data structure we've faced so far. By carefully navigating each step, I aim to highlight and address any remaining questions about handling dynamic and reference types.
bytes32 CONDITIONAL_ORDER_TYPEHASH = keccak256(
    "ConditionalOrder(OrderDetails orderDetails,address signer,uint256 nonce,bool requireVerified,address trustedExecutor,uint256 maxExecutorFee,bytes[] conditions)OrderDetails(uint128 marketId,uint128 accountId,int128 sizeDelta,uint128 settlementStrategyId,uint256 acceptablePrice,bool isReduceOnly,bytes32 trackingCode,address referrer)"
);

bytes32 CONDITIONAL_ORDER_HASH = keccak256(
    abi.encode(
        CONDITIONAL_ORDER_TYPEHASH,
        ORDER_HASH,
        signer,
        nonce,
        requireVerified,
        trustedExecutor,
        maxExecutorFee,
        conditions /***** TODO: this will NOT work 🚨 *****/
    )
);
📚 Notice how the order details member variable (orderDetails in the CONDITIONAL_ORDER_TYPEHASH) is treated relative to the other types within the type hash. Don’t be confused! From the standard: “If the struct type references other struct types (…), then the set of referenced struct types is collected, sorted by name, and appended to the encoding.”
I intentionally marked the conditions member variable as “TODO” to emphasize how we handle this particular type of data. Since the conditions member variable is a reference type (i.e., it is of type bytes[]), it's essential to encode it following precise instructions to ensure that the result is predictable and free from collisions.
So, would this work?
bytes32 hashedConditions = keccak256(conditions);
As you might’ve guessed, it would not (in fact, it wouldn't even compile, since keccak256 accepts a single bytes argument, not bytes[]).
You may wonder, "Well, a string is an array of bytes, and we got a predictable and non-ambiguous result from hashing the whole thing. So why can't we do the same with the conditions member variable?"
It's crucial to keep in mind that the EIP provides precise guidelines on how to handle dynamic types. So, why do we handle dynamic types differently than explicit array types? The reason lies in the versatility of arrays, as they can contain elements of any type, including dynamic and reference types. Consequently, if we don't encode the members of an array correctly, collisions can occur, as demonstrated in the example given earlier (in 2.1.2). These collisions could lead to unpredictable and unsafe data, underscoring the importance of handling dynamic types with care.
Now that we've established the need to encode each element in an array, let's apply these principles to our conditions. Each element in this array is a dynamic type, which means we need to hash each element prior to encoding. Simple enough. Once each element has been hashed, we can proceed to concatenate them. However, it's important to note that this can be a potential point of confusion. The EIP specifies that we only need to concatenate each encoded element.
🚨 In this context, our objective is to avoid including any additional data, specifically metadata related to the array being encoded.
Examining test_1() from section 2.1.2, when we encode the array arr using abi.encode, we observe that it indeed pads each element in the array. However, it also prefixes the data with metadata. Given our specification for array handling, we aim to exclude this metadata from the encoding. In contrast, abi.encodePacked pads each element in the array sequentially, and importantly, it does not include metadata. The latter aligns perfectly with our requirements, making it the preferred choice for our encoding needs.
So, below is one way you can encode the conditions member variable (note the array must be allocated with the right length before it is populated):
bytes32[] memory hashedConditionElements = new bytes32[](conditions.length);
for (uint256 i = 0; i < conditions.length; i++) {
    hashedConditionElements[i] = keccak256(conditions[i]);
}
bytes32 hashedConditions =
    keccak256(abi.encodePacked(hashedConditionElements));
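For comparison, the same element-wise procedure can be mirrored outside Solidity. This is an illustrative Python sketch, with hashlib.sha3_256 standing in for keccak256 (real on-chain digests differ), showing that the concatenation carries nothing but the 32-byte element hashes — no offsets, no length prefix:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for keccak256; Ethereum's Keccak predates SHA-3,
    # so actual digests will differ from sha3_256 output.
    return hashlib.sha3_256(data).digest()

def hash_bytes_array(conditions: list[bytes]) -> bytes:
    # EIP-712 handling of a bytes[] member: hash each dynamic
    # element, concatenate the hashes with no metadata, then
    # hash the concatenation.
    return h(b"".join(h(c) for c in conditions))

elements = [b"condition-1", b"condition-2", b"condition-3"]
packed = b"".join(h(c) for c in elements)
assert len(packed) == 3 * 32  # only the element hashes, nothing else
assert len(hash_bytes_array(elements)) == 32
```

Because every element is reduced to exactly 32 bytes before concatenation, the packed form is already unambiguous and needs no offsets or length words.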
Based on our knowledge of encoding reference types, consider if we knew the array of conditions contained only two elements. With that information, we could actually assert the following:
// assume conditions is a storage variable of type bytes[] defined elsewhere
function test_3() public view {
    bytes32 hash1 = keccak256(
        abi.encode(
            keccak256(conditions[0]),
            keccak256(conditions[1])
        )
    );
    bytes32 hash2 = keccak256(
        abi.encodePacked(
            keccak256(conditions[0]),
            keccak256(conditions[1])
        )
    );
    assert(hash1 == hash2);
}
In the contrived example above, the choice between using abi.encode or abi.encodePacked doesn't actually matter in terms of the result. In both cases, we are encoding two atomic types, each with a length of 32 bytes, and there's no metadata involved. Therefore, abi.encode doesn't need to add any padding, and abi.encodePacked wouldn't have included padding regardless. The result is consistent in terms of encoding these specific atomic types.
Here's an even simpler fuzz test showcasing the logic:
function test_4(bytes32 x, bytes32 y) public pure {
    bytes memory encode = abi.encode(x, y);
    bytes memory encodePacked = abi.encodePacked(x, y);
    assert(keccak256(encode) == keccak256(encodePacked));
}
However, in the wild, when the length of the conditions is unknown, we need to iterate over each element, hash it, and then create a new array (hashedConditionElements). Afterward, we must use abi.encodePacked to concatenate every processed element; otherwise, metadata will be included when it shouldn’t be. Only in cases when the array length is known could we write code similar to test_3().
🧠 Make sure you understand when to use abi.encodePacked. If you neglect to use abi.encodePacked when attempting to exclude metadata, your on-chain signature verification system could yield false-negative results when widely-used Web3 libraries like ethers or viem generate correct signatures based on EIP-712.
Below is an example asserting that encoding an array via abi.encodePacked and abi.encode will yield different results:
function test_5(bytes32 x, bytes32 y) public pure {
    bytes32[] memory arr = new bytes32[](2);
    arr[0] = x;
    arr[1] = y;
    assert(keccak256(abi.encodePacked(arr)) != keccak256(abi.encode(arr)));
}
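The inequality in test_5 comes entirely from the metadata words abi.encode prepends for a dynamic array. A hypothetical Python sketch of both layouts (for a single top-level bytes32[] argument, matching the word-by-word breakdown of test_1 earlier) makes the 64-byte gap explicit:

```python
def abi_encode_bytes32_array(elements: list[bytes]) -> bytes:
    # abi.encode layout for one dynamic-array argument:
    # an offset word, a length word, then the elements.
    head = (32).to_bytes(32, "big")               # offset to array data
    length = len(elements).to_bytes(32, "big")    # element count
    return head + length + b"".join(elements)

def abi_encode_packed_bytes32_array(elements: list[bytes]) -> bytes:
    # abi.encodePacked: the elements only (bytes32 values
    # are already full words, so no extra padding appears).
    return b"".join(elements)

xs = [bytes(32), bytes(31) + b"\x01"]
assert abi_encode_bytes32_array(xs) != abi_encode_packed_bytes32_array(xs)
# the packed form is exactly two 32-byte metadata words shorter
assert (len(abi_encode_bytes32_array(xs))
        - len(abi_encode_packed_bytes32_array(xs))) == 64
```

This mirrors the encodedArr/encodePackedArr comparison from test_1: same padded elements, differing only in the offset and length words.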
Now that we know how to process conditions, let's build the conditional order hash again.
CONDITIONAL_ORDER_TYPEHASH = keccak256(
    "ConditionalOrder(OrderDetails orderDetails,address signer,uint256 nonce,bool requireVerified,address trustedExecutor,uint256 maxExecutorFee,bytes[] conditions)OrderDetails(uint128 marketId,uint128 accountId,int128 sizeDelta,uint128 settlementStrategyId,uint256 acceptablePrice,bool isReduceOnly,bytes32 trackingCode,address referrer)"
);

bytes32[] memory hashedConditionElements = new bytes32[](co.conditions.length);
for (uint256 i = 0; i < co.conditions.length; i++) {
    hashedConditionElements[i] = keccak256(co.conditions[i]);
}
bytes32 hashedConditions =
    keccak256(abi.encodePacked(hashedConditionElements));

CONDITIONAL_ORDER_HASH = keccak256(
    abi.encode(
        CONDITIONAL_ORDER_TYPEHASH,
        ORDER_HASH,
        signer,
        nonce,
        requireVerified,
        trustedExecutor,
        maxExecutorFee,
        hashedConditions /***** this will work ✅ *****/
    )
);
Now that we've successfully hashed the conditional order, let's document some key observations that can serve as valuable principles for future reference:
Any data type that requires declaration of its storage location (e.g., memory, storage) needs preprocessing before encoding.
Reference types are processed recursively, meaning their nested components are also encoded following the same principles.
While abi.encode consistently ensures data predictability and safety, in certain cases abi.encodePacked achieves the same outcome, and in some (such as concatenating hashed array elements) it must be used.
To arrive at the final destination, the final hash, we must hash everything that has been created up to this stage:
bytes32 msgHash = keccak256(
    abi.encodePacked(
        "\x19\x01",
        domainSeparator,
        CONDITIONAL_ORDER_HASH
    )
);
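Structurally, this digest is just a hash over a 66-byte preimage: the two-byte "\x19\x01" prefix followed by two 32-byte hashes. A Python sketch with hypothetical placeholder values (sha3_256 again standing in for keccak256, so the digest itself is not a real EIP-712 digest) shows the shape:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for keccak256 (real Keccak digests differ).
    return hashlib.sha3_256(data).digest()

# hypothetical 32-byte values standing in for the real hashes
domain_separator = h(b"example domain")
struct_hash = h(b"example conditional order")

# "\x19\x01" || domainSeparator || hashStruct(message), per EIP-712
preimage = b"\x19\x01" + domain_separator + struct_hash
digest = h(preimage)

assert len(preimage) == 2 + 32 + 32  # fixed 66-byte preimage
assert len(digest) == 32
```

The fixed-width preimage is what makes the final abi.encodePacked safe: every component's length is known in advance, so no ambiguity is possible.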
Once more, we observe that we can confidently utilize abi.encodePacked while still adhering to the standard. The contents being encoded in this context, thanks to the precise rules followed in generating the CONDITIONAL_ORDER_HASH and domainSeparator, are deterministic. Additionally, there is only one dynamic type present (the prefix string), and we deliberately avoid padding it. Consequently, concerns about collisions are unwarranted in this scenario.
As a final step, Foundry lets us (in Solidity 🙏) define a private key, generate a public key from it, and sign data in a test environment. Below is how you can sign the hash we created:
(uint8 v, bytes32 r, bytes32 s) = vm.sign(privateKey, msgHash);
Kwenta utilizes a different approach for signature generation in its front end. Similar to many Web3 applications, it depends on a third-party tool (not written in Solidity) for creating signatures. To guarantee that our on-chain verification mechanism correctly authenticates signatures produced by this tool, we have incorporated a Hardhat test to assert the anticipated result in this context. I strongly recommend that anyone working with EIP-712 verify their work by testing its functionality against battle-tested third-party tools such as ethers and viem.
That's it 🏁
Thank you for reading, and I hope that this information proves valuable in addressing any challenges you may encounter while hashing complex typed data. If you have any questions, require clarification, or have differing perspectives on any of the points presented, please feel free to reach out!
Special thanks to the following colleagues for their contributions to this content, whether through discussions, peer reviews, or guidance: Tom, Jeremy, Aleksey, Adam, Korede, Melville, Jordan, and Jesper.