A technical overview of developing gold.xyz

January 27th, 2022

This article is about gold.xyz, our latest product on Solana, and the things we learned during its development. I'll specifically delve into our biggest pain point, namely porting Rust code to JavaScript/TypeScript (JS/TS) for the front-end, and how we eventually solved it via Wasm. This article is a bit technical, subjective and opinionated, written by me: a Rust dev with minimal JS experience who wants to write as much Rust and as little JS as possible.

Fundraising with NFTs

We came up with the idea of gold.xyz 2 weeks before the 2021 autumn Solana Hackathon. Our Rust team was super confident that we could deliver the MVP—enabling anyone to start a fundraiser where users can bid on NFTs in consecutive auction cycles—in this short amount of time. Oh boy, how wrong we were... Anyway, we decided to give it a go and jumped into action.

We were off to a good start and the Rust contract was 90% finished in about 10 days. We were using the Metaplex standard for the NFTs. The goal was to have a MasterEdition NFT minted for each fundraiser and a ChildEdition NFT minted to the highest bidder at every auction cycle within a fundraiser. By ChildEdition I mean a unique edition (snapshot) of a MasterEdition NFT, similar to the ERC721 standard on Ethereum.

Since you can set the maximum number of edition NFTs to be minted for a MasterEdition, the owner of the auction could set how many auction cycles they'd like to have during the fundraiser. Thus, upon initialization, a MasterEdition is minted with a total supply equal to the number of auction cycles. Furthermore, if you know how PDAs (program-derived addresses) work, you can imagine how easy it is to find an edition NFT if you know the public key of the MasterEdition NFT's account. So we thought it would add a nice structure to our system if we didn't auction separate MasterEdition NFTs within a fundraiser, but had a single MasterEdition that encompasses the NFT family of its editions. This would also facilitate token-gated access management once the NFTs are auctioned off.

It all seemed fine, but then we realized that the metadata of a ChildEdition cannot be modified because it is a static snapshot of the MasterEdition metadata. Even if the metadata of the MasterEdition was set to be mutable, the ChildEdition metadata could not be modified. It was a shame, because we wanted our child NFTs to store unique metadata with unique images. Since we didn't want to rewrite the code to only mint MasterEditions in every cycle, we came up with the following workaround: we set the MasterEdition metadata to be mutable and update its metadata after every new edition mint. That way, each edition takes a snapshot of a unique metadata containing unique NFT images and attributes.

With that sorted out, we thought the remaining 4 days of the Hackathon would be enough to mock up a webapp through which users can interact with the contract. Thereupon came the setback that eventually led us to miss the Hackathon deadline. We had all kinds of data structures in Rust that somehow had to be ported to JS. For example, all fundraisers had a unique ID that mapped to the public key of the fundraiser's state account where all kinds of info about the fundraiser is stored. For this mapping we used a BTreeMap<AuctionId, Pubkey> (as HashMap is not supported by the Solana runtime due to its random state) and stored it in a central pool account. Thus, whenever the webapp wanted to query the blockchain for the state of a fundraiser, we queried the central fundraiser pool account for the BTreeMap and then queried the specific fundraiser state by taking the Pubkey value from the BTreeMap via the unique fundraiser ID key. Since then we have reworked this part to not use a BTreeMap, but other contracts of ours still make use of this type. So how do we move something like a BTreeMap from Rust to JS? Enter Borsh.
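The two-step lookup described above can be sketched with plain standard-library types. Note that AuctionId, Pubkey and FundraiserPool here are simplified stand-ins for illustration, not our actual on-chain definitions:

```rust
use std::collections::BTreeMap;

// Simplified stand-ins for the on-chain types: an AuctionId is a short
// fixed-size byte string and a Pubkey is 32 bytes.
type AuctionId = [u8; 8];
type Pubkey = [u8; 32];

// The central pool account's state holds the ID -> state-account mapping.
struct FundraiserPool {
    pool: BTreeMap<AuctionId, Pubkey>,
}

impl FundraiserPool {
    // Resolve a fundraiser's state account from its unique ID; the webapp
    // then queries that account for the fundraiser's actual state.
    fn state_account(&self, id: &AuctionId) -> Option<&Pubkey> {
        self.pool.get(id)
    }
}

fn main() {
    let mut pool = FundraiserPool { pool: BTreeMap::new() };
    pool.pool.insert(*b"gold0001", [7u8; 32]);
    assert!(pool.state_account(b"gold0001").is_some());
    assert!(pool.state_account(b"gold0002").is_none());
}
```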

Serialization with Borsh

Borsh is a serialization framework developed by Near. Almost all of our Rust data structures stored in accounts had a #[derive(BorshSerialize, BorshDeserialize)] attribute which enabled us to serialize and deserialize them via Borsh. The same serialization framework is available for JS/TS, see BorshJS. Thus, by using BorshJS, one is able to query the data from an account via Solana's RPC client and deserialize this raw byte sequence into a given JS type using a schema. A schema is a layout map that helps the deserializer figure out how many bytes it should read next to deserialize them into a data structure. Unfortunately, at that time, BorshJS didn't support BTreeMap serialization and deserialization out-of-the-box, so we wrote an extension for it. Still, our data was so intricately stored in various accounts, and we had so little JS experience, that we ran out of time writing the serialization schema and the classes that were the TS equivalents of our Rust data structures.

It was only 2 days before the deadline that we came across the Anchor framework, which conveniently ports your Rust code to JS in order to facilitate developing a webapp around your contract. However, at that point it felt like we needed to adjust our Solana coding experience to fit into Anchor's framework. It seemed like a tradeoff between convenience and giving up low-level control over account handling. Furthermore, deploying a tutorial Anchor-generated program was twice as expensive as deploying our whole program. This might be due to the added security of Anchor with all kinds of internal account checks. All in all, the main reasons we didn't go with Anchor were that its Idl parser was missing support for the BTreeMap type, which was the backbone of our implementation, and that we wanted to write our Rust code free from any structural constraints.

Anyway, we missed the Hackathon deadline but were confident about our project, so we started to look for alternatives for our development stack. We had to address porting our Rust code to JS in an automated way while maintaining as much freedom in how we write Rust code as possible. The first step was to open a PR in the BorshJS repo that adds de/serialization support for a Map type. It was surprisingly straightforward to implement this; we just needed an idea of how Borsh serializes a BTreeMap on the Rust side. Turns out, it's similar to a Vec<T>, i.e. there are 4 bytes reserved at the front that contain the length of the vector in little-endian representation, followed by the serialized output of each T in the vector. A BTreeMap<K, V> also has 4 bytes reserved at the front that tell us how many key-value (K-V) pairs there are in our map, followed by the sequence of the serialized key and value pairs.
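The wire format described above can be illustrated by hand-rolling the two layouts with small concrete element types. This sketch only mimics the Borsh format for illustration; real code would of course use the borsh crate:

```rust
use std::collections::BTreeMap;

// Borsh-style Vec layout: 4-byte little-endian length prefix,
// then each element serialized in order (u16s are little-endian too).
fn serialize_vec_u16(v: &[u16]) -> Vec<u8> {
    let mut out = (v.len() as u32).to_le_bytes().to_vec();
    for x in v {
        out.extend_from_slice(&x.to_le_bytes());
    }
    out
}

// Borsh-style map layout: 4-byte little-endian pair count,
// then each serialized key followed by its serialized value.
fn serialize_map_u8_u16(map: &BTreeMap<u8, u16>) -> Vec<u8> {
    let mut out = (map.len() as u32).to_le_bytes().to_vec();
    for (k, v) in map {
        out.push(*k);
        out.extend_from_slice(&v.to_le_bytes());
    }
    out
}

fn main() {
    // A 2-element Vec<u16>: length prefix 2, then 0x0102 and 0x0304 in LE.
    assert_eq!(
        serialize_vec_u16(&[0x0102, 0x0304]),
        vec![2, 0, 0, 0, 0x02, 0x01, 0x04, 0x03]
    );
    // A 2-entry map: pair count 2, then key-value pairs in key order.
    let map: BTreeMap<u8, u16> = [(1u8, 0x0a0b_u16), (2u8, 0x0c0d_u16)].into();
    assert_eq!(
        serialize_map_u8_u16(&map),
        vec![2, 0, 0, 0, 1, 0x0b, 0x0a, 2, 0x0d, 0x0c]
    );
}
```

Since a BTreeMap iterates its entries in key order, the serialized output for maps is deterministic, which is exactly what you want for on-chain data.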

Now that every data structure we used was serializable on both the Rust and JS side, we created a tool that automatically generates the TS class equivalents of our Rust data structures. We created a trait called BorshSchema that can be derived for almost any struct and enum in Rust by just adding #[derive(BorshSchema)] on top of it. Then, a parser, inspired by Anchor's Idl solution, looks for all data structures with BorshSchema and generates the TS code and required schema layout for BorshJS de/serialization. For example, the following struct

struct SomeStruct {
	foo: u32,
	bar: Option<u64>,
	baz: Vec<String>,
	quux: BTreeMap<[u8; 32], Pubkey>,
}

will generate the following TS output:

export class SomeStruct extends Struct {
	foo: number;
	bar: BN | null;
	baz: string[];
	quux: Map<Uint8Array, PublicKey>;
}

export const SCHEMA = new Map<any, any>([
	[
		SomeStruct,
		{
			kind: 'struct',
			fields: [
				['foo', 'u32'],
				['bar', { kind: 'option', type: 'u64' }],
				['baz', ['string']],
				['quux', { kind: 'map', key: [32], value: 'publicKey' }],
			],
		},
	],
]);

so that whenever we have a serialized SomeStruct stored in an account, we can easily deserialize it on the TS side into a SomeStruct class. The best part is that it works with enum types as well. Check out how BorshJS implements something like an enum in TS using a special constructor in the Enum superclass. The same goes for the Struct superclass. So, for example

struct FooStruct {
	foo: Option<String>,
}

enum SomeEnum {
	UnitVariant,
	UnnamedFields(u64, [String; 2]),
	NamedFields {
		foo_struct: FooStruct,
		bar: Vec<u8>,
	},
}

will result in

export class FooStruct extends Struct {
	foo: string | null;
}

export class SomeEnum extends Enum {
	someEnumUnitVariant: SomeEnumUnitVariant;
	someEnumUnnamedFields: SomeEnumUnnamedFields;
	someEnumNamedFields: SomeEnumNamedFields;
}

export class SomeEnumUnitVariant extends Struct {}

export class SomeEnumUnnamedFields extends Struct {
	unnamed_1: BN;
	unnamed_2: string[];
}

export class SomeEnumNamedFields extends Struct {
	fooStruct: FooStruct;
	bar: number[];
}

export const SCHEMA = new Map<any, any>([
	[
		FooStruct,
		{
			kind: 'struct',
			fields: [['foo', { kind: 'option', type: 'string' }]],
		},
	],
	[
		SomeEnum,
		{
			kind: 'enum',
			field: 'enum',
			values: [
				['someEnumUnitVariant', SomeEnumUnitVariant],
				['someEnumUnnamedFields', SomeEnumUnnamedFields],
				['someEnumNamedFields', SomeEnumNamedFields],
			],
		},
	],
	[SomeEnumUnitVariant, { kind: 'struct', fields: [] }],
	[
		SomeEnumUnnamedFields,
		{
			kind: 'struct',
			fields: [
				['unnamed_1', 'u64'],
				['unnamed_2', ['string', 2]],
			],
		},
	],
	[
		SomeEnumNamedFields,
		{
			kind: 'struct',
			fields: [
				['fooStruct', FooStruct],
				['bar', ['u8']],
			],
		},
	],
]);

You can imagine how cumbersome it would be to write these things out manually every time you introduce a new type on the Rust side that you'd like to deserialize on the TS side.

Alright, so we have BTreeMap de/serialization support on the TS side. We can serialize (write) virtually anything into an account's data field in the contract—written in Rust—and deserialize (read) that same data from the same account's data field in the webapp—written in TS. Next step is to generate instructions in the webapp that are sent to the contract processor.

Contract instructions with Wasm

A contract instruction executed by a Solana program contains the program's ID, the pubkey of every account that will be read or modified by the program, and the instruction data that encodes external inputs to the contract. Imagine we have an instruction factory—mainly used for contract tests—like this in Rust:

#[derive(BorshSchema, BorshSerialize, BorshDeserialize)]
pub struct InitializeContractArgs {
	pub foo: u64,
	pub bar: Pubkey,
}

pub fn initialize_contract(args: &InitializeContractArgs) -> Instruction {
	// some program-derived addresses
	let (pda_1, _) = Pubkey::find_program_address(&pda_1_seeds(), &PROG_ID);
	let (pda_2, _) = Pubkey::find_program_address(&pda_2_seeds(&args.bar), &PROG_ID);
	// ... additional logic
	let accounts = vec![
		AccountMeta::new(args.bar, true),
		AccountMeta::new(pda_1, false),
		AccountMeta::new_readonly(pda_2, false),
	];

	let instruction_data: Vec<u8> = compute_instruction_data(args);
	Instruction {
		program_id: PROG_ID,
		accounts,
		data: instruction_data,
	}
}
You can see that we are computing PDAs, creating AccountMeta vectors and generating a binary representation of the instruction data. However, this code is only usable in Rust, while we need to generate these instructions in the webapp from the user input in InitializeContractArgs. So how do we avoid writing all of this again in TS? Enter Wasm.

WebAssembly, or Wasm, is a binary instruction format that is blazingly fast and lightweight, perfect for web-based applications. The best part is, Rust can easily be compiled to a Wasm target using the awesome wasm-bindgen tool, facilitating high-level interaction between our Rust code and JS. I won't go into details of how it works, but we are going to tell the compiler to convert code with the #[wasm_bindgen] attribute into a Wasm binary that can be used directly by our webapp. There's one caveat though, namely that not every type can cross the ABI from Rust to Wasm, i.e. you can't just put #[wasm_bindgen] on anything that you want to use on the JS side. For example, if InitializeContractArgs in the above example contained fields that don't implement the IntoWasmAbi trait, it could not be converted via wasm_bindgen. Anyway, a serialized stream of bytes &[u8] can always cross the Wasm ABI, so we chose the easy (but not necessarily best) way and Borsh-serialize everything that goes into the Wasm module. Thus, the code above can be used with Wasm like this:

#[wasm_bindgen(js_name = "initializeContractWasm")]
pub fn initialize_contract_wasm(serialized_args: &[u8]) -> Result<String, JsValue> {
	let args: InitializeContractArgs =
		solana_program::borsh::try_from_slice_unchecked(serialized_args)
			.map_err(|e| JsValue::from(e.to_string()))?;
	let instruction = initialize_contract(&args);
	// serialize the instruction to a JSON string via Serde
	serde_json::to_string(&instruction).map_err(|e| JsValue::from(e.to_string()))
}
Sooo, what do we have here? Well, we are telling the compiler to turn the initialize_contract_wasm function into an initializeContractWasm function that can be called directly from JS. We send an unsigned 8-bit integer array into the function and receive the instruction as a String—since Instruction can be serialized to a JSON string via Serde—or a JsValue wrapping our error type when our function fails. Essentially we receive InitializeContractArgs as a serialized byte array, deserialize it back, pass it to initialize_contract and then serialize the resulting instruction to a String which can be deserialized on the JS side into an Instruction. Yeah, it sounds complicated, and there are many serializations in the process, but it does the trick for now, as &[u8] and String types can cross the ABI between Rust and Wasm.

On the JS side we can simply call our Wasm binary by asynchronously importing the generated function from the wasm-pack output directory.

import { serialize } from "borsh";

// ...

const { initializeContractWasm } = await import("./pkg"); // import wasm function
const initializeContractArgs = new InitializeContractArgs({ ... }); // initialize transaction args
const serializedArgs = serialize(SCHEMA, initializeContractArgs); // serialize args
const initializeContractInstruction = parseInstruction(
	initializeContractWasm(serializedArgs) // call wasm and parse instruction
);
const transaction = new Transaction().add(initializeContractInstruction);

In the above example you can see that we need to initialize the instruction's input data, then serialize it using the serialize method from BorshJS and the SCHEMA generated by our BorshSchema derive macro. Then we call the imported Wasm function and parse the returned instruction, since a solana_program::instruction::Instruction type stores account metadata in its accounts field, whereas an Instruction type in @solana/web3.js has a keys field for this purpose. Furthermore, every other field in a Rust instruction is snake_case, while the TS counterpart uses camelCase.
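For illustration, the snake_case to camelCase renaming that a parseInstruction-style helper has to perform on field names could look like this. This is a minimal sketch in Rust; our actual helper lives on the TS side of the webapp:

```rust
// Convert a snake_case field name (e.g. "program_id") to its
// camelCase counterpart (e.g. "programId"): drop each underscore
// and capitalize the character that follows it.
fn snake_to_camel(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    let mut upper_next = false;
    for c in s.chars() {
        if c == '_' {
            upper_next = true;
        } else if upper_next {
            out.extend(c.to_uppercase());
            upper_next = false;
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(snake_to_camel("program_id"), "programId");
    assert_eq!(snake_to_camel("recent_blockhash"), "recentBlockhash");
    assert_eq!(snake_to_camel("data"), "data"); // no underscores, unchanged
}
```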

Intermezzo - Wasm compatibility

This little interlude will be about how we managed to compile our code to Wasm; who knows, it might be helpful for someone out there. Trying to compile something that uses a lot of 3rd party dependencies to a Wasm target is always a nail-biting experience for me. You never know when the compiler will throw the first error saying that some random dependency of another dependency has some piece of code that doesn't compile to a Wasm target. Well, this happened many times while trying to make Wasm work, and I'll show how we found a workaround.

Before solana v1.9.0, some Solana crates were using memmap2 v0.1.0, which could not be compiled to Wasm. Thankfully, others needed memmap2 to compile to a Wasm target as well, so we found old discussions and PRs about the topic and were happy to find out that memmap2 v0.5.0 had been published and it compiles to Wasm. However, all published Solana crates were using the v0.1.0 version. I used cargo tree to get an idea of what my dependency tree looked like and took note of every crate that had memmap2 v0.1.0 as a dependency. Then I forked the source code of solana and bumped all memmap2 dependencies to v0.5.0. Finally, instead of using a published version of solana-program in my Cargo.toml, I set every Solana crate to use my fork of the solana source code as a git dependency.
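In Cargo.toml, one way to express such an override is a [patch] section pointing the affected crates at the fork. The repository URL and branch name below are made up for illustration:

```toml
# Replace the crates.io versions of the Solana crates with a fork
# whose memmap2 dependency has been bumped to a Wasm-compatible version.
# (Hypothetical URL and branch, shown only to illustrate the mechanism.)
[patch.crates-io]
solana-program = { git = "https://github.com/our-org/solana", branch = "memmap2-bump" }
solana-sdk = { git = "https://github.com/our-org/solana", branch = "memmap2-bump" }
```

With a patch like this, every crate in the tree that asks for solana-program resolves to the fork, which is why the transitive memmap2 v0.1.0 requirement disappears.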

To my dismay, I still got the same error when trying to compile the code to Wasm. I had forgotten that I was using the spl-token and metaplex-metadata crates, each of them depending on an older version of solana-program and thus on memmap2 v0.1.0. So I forked the solana-program-library and metaplex source code and pointed their Solana dependencies to our previously forked solana repo with the updated memmap2 dependency. This time it worked and our code compiled to a Wasm target; however, managing git dependencies on this scale quickly becomes a pain. But once you go down the rabbit hole, there's no turning back.

Thankfully, solana v1.9.0+ is out, and it resolves the memmap2 problem entirely, since it uses v0.5.0. However, spl-token v3.2.0, the latest published version of spl-token, still depends on solana-program v1.7.4, which in turn depends on solana-frozen-abi v1.7.4 and thus on memmap2 v0.1.0. Therefore, until a newer version of spl-token is published, we are using our fork of spl-token. Not to mention that metaplex also depends on spl-token v3.2.0 and thus indirectly on memmap2 v0.1.0. But hopefully, once a Wasm-compatible version of spl-token is published, metaplex will also publish an updated version that can be built for a Wasm target. Anyway, our contract now compiles to Wasm and all instructions can easily be called from the TS side without the need to write them in TS as well. Why work twice and risk introducing bugs in the JS implementation when we already have these well-tested functions in Rust?

RPC client

Since the instructions are now conveniently created via Wasm, the question arises: why don't we query the blockchain via Wasm as well? The data required for our webapp to visualize the contract state was stored in various accounts with PDA seed interconnections and other tricky solutions. With little TS experience, it was slow and painful to write all these queries, so we thought: why not write the blockchain queries in Rust, test them there so we are confident they work, and then just add a Wasm wrapper that turns this data into something useful for our webapp?

Sounds good, but as it turns out, the RpcClient used to query the blockchain is essentially a reqwest::blocking::Client, which is not Wasm-compatible. So I quickly gathered what info we need from the blockchain for our webapp and mocked up an asynchronous RpcClient that supports the following simple queries:

  • get_lamports(account: &Pubkey) - returns the balance of an account in lamports (1 lamport = 10^-9 SOL).
  • get_owner(account: &Pubkey) - returns the owner of an account.
  • get_account_data(account: &Pubkey) - returns the binary data stored in the account.
  • get_and_deserialize_account_data(account: &Pubkey) - fetches the account data and attempts to deserialize it into a given Rust type.
  • get_minimum_balance_for_rent_exemption(data_len: usize) - returns the minimum balance required for an account to be rent exempt if the data it stores has data_len size.

Of course, the Solana RpcClient has many more methods, e.g. it can send transactions as well, but for now these methods were all we needed to fetch data for our webapp. We felt way more comfortable writing things in Rust because it gave us more confidence that our code works as intended. Everything was in one place, everything was nicely tested in Rust, and thanks to wasm-bindgen it was just a simple wasm-pack build command to generate Wasm from our code.

Putting it all together

After we had the building blocks (Rust + Wasm + webapp) it was time to put them together. After a bit of research we found this article, which was essentially what we needed. We had to configure NextJS 12, which uses the Speedy Web Compiler written in Rust, add wasm-pack-plugin, and we were ready to go. You can find the final code here with the following structure:

  • src contains all pages/components that our webapp uses
  • src also contains the contract logic, which is a standalone package containing a minimal TS layer between Wasm calls and functions that the webapp uses directly
  • rust is a submodule where the Rust contract code resides with the Wasm bindings

The template stack can be found here, but it is still in active development.


All in all, setting up this development stack for Solana was an exciting, although sometimes painful journey. However, we learned a lot along the way and I am very happy with the end result, because porting our Rust contracts to TS has become a much quicker and more pleasant experience. Of course, it's not as automated and refined as Anchor, but I think we managed to strike a nice balance between automation, freedom, flexibility and low-level control over our Rust contracts with minimal structural constraints. Documentation about gold.xyz is coming soon; make sure to follow the project on Twitter for the latest updates.
