Welcome to our new blog series exploring the use of machine learning to accurately price NFTs. Over the next few weeks, we’ll be diving deeper into how we use machine learning and crowd intelligence to generate automated and up-to-date NFT pricing, why machine learning models are necessary to achieve reliable valuations at scale, and much more.
Owners of non-fungible tokens (NFTs) often wonder what their assets are worth, or if an asset they want to purchase is selling at a reasonable price. While it’s relatively straightforward to obtain the market price for fungible and liquid cryptocurrencies like Bitcoin or Ether at any point in time, the vast majority of NFTs are illiquid and lack rich price histories, much like physical collectibles, real estate, and art in the “real world”.
Most non-fungible tokens on the market are also selling for the first time ever. In such cases, buyers need to learn about the market and the mechanics of the application, find comparable assets, and form a valuation based on this information. These costs of information gathering create a massive barrier to entry for interested users who are concerned about the risk of purchasing something for more than it’s worth and don’t want to spend hours developing their own price assessments before entering the market.
For these reasons, NFTs define an asset class that is both non-fungible and low-velocity, meaning they don’t change hands very often. This makes NFTs very difficult to price. Part of our goal in developing real-time NFT pricing tools is to increase transparency in NFT markets, making it easier for people to participate in, and companies to build products on top of, the NFT economy.
Trusted NFT valuations are key for mass adoption
NFTs offer us a vehicle for tokenizing anything, while the explosive growth of DeFi has demonstrated the power of permissionless financial primitives. However, due to the nascency and inherent illiquidity of NFTs, DeFi and NFTs have yet to fully converge. DeFi products are currently limited to building upon a relatively small set of fungible assets, while the value held in NFTs has largely remained under-utilized and stagnant. Thus far, most of the experimentation around NFTs has centered on art, collectibles, and their sovereign ownership.
While powerful, this is just scratching the surface of what NFTs can truly represent. It’s at the intersection of DeFi and NFTs where many of the most exotic financial instruments will live, enabled by near real-time NFT pricing. One day, you will be able to open leveraged UNI positions backed by Decentraland parcels, trade synthetic positions of NFTs, and stake CryptoPunks as reinsurance and earn interest on them.
Along with making it easier to utilize NFTs in DeFi, accurate NFT pricing will enable a number of powerful benefits for NFT marketplaces and protocols. Research has shown that information transparency leads to increased activity in marketplaces. In the traditional art world, according to a report by Artsy: “The lack of access to artworks’ prices was the most frequently mentioned roadblock for collectors when trying to buy art.” The same applies to NFTs.
Tools which make it easier for participants to value NFTs can significantly reduce the costs of onboarding new users, lead to greater revenue for sellers, and increase overall activity in the market. We believe improved NFT pricing will increase transparency, reduce information frictions in NFT markets, and encourage participation from a broader community of developers - opening the floodgates to a new wave of NFT products and protocols.
How to price NFTs using machine learning
At Upshot, we’ve been heads down developing specialized machine learning algorithms to price NFTs at scale. Our machine learning models ingest historical sales data and NFT metadata, constructing features from this information to generate accurate, reliable pricing. So, how exactly do our machine learning models predict NFT prices?
While models for predicting prices of liquid financial assets like stocks and cryptocurrencies often rely on historical price trends, such approaches may only partially help in the less liquid world of NFTs. NFT characteristics or metadata (tags and labels which describe the features of an NFT and its creator) are a valuable source of information.
Transformations of historical sales data based on these features give more dense and informative datasets than simply relying on an individual NFT’s price trend alone. Our algorithms take these characteristics as inputs and can automatically identify and pool information from past sales of similar NFTs, helping resolve the issue of illiquidity.
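As a rough illustration of this pooling idea, summary statistics of past sales that share metadata traits can stand in for an individual NFT’s sparse price history. This is a minimal sketch, not Upshot’s actual pipeline; the column names, traits, and prices below are purely hypothetical:

```python
import pandas as pd

# Hypothetical sales history: each row is one past sale, tagged with metadata.
sales = pd.DataFrame({
    "project":   ["punks", "punks", "punks", "parcels", "parcels"],
    "trait":     ["hoodie", "hoodie", "cap", "beach", "road"],
    "price_eth": [12.0, 15.0, 9.0, 4.0, 2.5],
})

# Pool information across similar NFTs: aggregate past sales that share a
# project and trait into "comparable" features.
comps = (
    sales.groupby(["project", "trait"])["price_eth"]
         .agg(comp_mean="mean", comp_count="count")
         .reset_index()
)

# Join the pooled features back onto each sale, enriching its feature vector
# with information from similar assets rather than its own history alone.
features = sales.merge(comps, on=["project", "trait"])
print(features)
```

Even an NFT that has never sold before inherits a `comp_mean` from its trait group, which is how pooling helps resolve the illiquidity problem described above.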
We train machine learning models that are able to predict transaction prices based on variables constructed from NFT metadata and proxies for the state of the market at different levels of granularity (Ethereum > NFT market > specific NFT project, etc.).
We validate the predictions by examining their accuracy on data not used in the training process and obtain error bounds by comparing our predictions to realized sale prices. Both the predicted pricing and error bounds provide useful information to NFT buyers, sellers, or developers building products on top of the NFT economy.
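The held-out validation and error-bound steps can be sketched as follows. This is an illustrative toy, assuming a simple linear model on synthetic data in place of the real pricing model; the interval comes from quantiles of held-out residuals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for engineered NFT features and realized sale prices.
X = rng.normal(size=(500, 3))
y = 10 + 3 * X[:, 0] + rng.normal(scale=1.0, size=500)

# Hold out data the model never sees during training.
X_train, y_train = X[:400], y[:400]
X_test, y_test = X[400:], y[400:]

# A simple least-squares fit as a placeholder for the pricing model.
A_train = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

A_test = np.column_stack([np.ones(len(X_test)), X_test])
preds = A_test @ coef

# Error bounds: compare predictions to realized prices on held-out data and
# take empirical quantiles of the residuals.
residuals = y_test - preds
lower, upper = np.quantile(residuals, [0.05, 0.95])
print(f"90% interval around a prediction: [{lower:+.2f}, {upper:+.2f}]")
```

A buyer or developer can then treat `prediction + lower` and `prediction + upper` as a rough plausible price range rather than relying on a single point estimate.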
Our approach to valuation is similar to how art or real estate valuation works at a high level, although these traditional markets usually rely on simpler linear models with a known set of predictor variables, established from decades of research and observation. In contrast, the variables that determine NFT prices are not yet well understood. A lot of our research effort has focused on constructing different predictor variables, using automated methods to uncover the most important ones, and iterating to arrive at a lean but powerful model.
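One crude way to automate that variable screening is to rank candidate predictors by their strength of association with price and keep the strongest for the lean model. The sketch below uses absolute correlation on simulated data; the variable names are illustrative assumptions, not our actual feature set:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical candidate predictor variables (names are illustrative only).
candidates = {
    "trait_rarity":    rng.normal(size=n),
    "collection_vol":  rng.normal(size=n),
    "eth_price_trend": rng.normal(size=n),
    "mint_age_days":   rng.normal(size=n),
}

# Simulated prices driven strongly by rarity and weakly by collection volume.
price = (5 * candidates["trait_rarity"]
         + 1 * candidates["collection_vol"]
         + rng.normal(size=n))

# Automated screen: rank candidates by absolute correlation with price.
scores = {
    name: abs(np.corrcoef(values, price)[0, 1])
    for name, values in candidates.items()
}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

In practice one would iterate with richer importance measures, but the principle is the same: let the data reveal which variables matter instead of assuming a fixed set up front.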
Are Upshot’s appraisals still crowdsourced?
If you’ve been following Upshot for a while, you may be wondering what happened to our crowdsourced approach to generating NFT appraisals. Well, nothing, really. Upshot works by aggregating answers to questions from different actors and scoring them based on a new information measure, called Determinant Mutual Information, that aims to reward honest answers more than dishonest ones. This hasn’t changed; what has changed is our approach to eliciting answers to questions and the types of actors we cater experiences towards.
Our initial focus was on creating experiences for human appraisers to reveal their beliefs about the values of different NFTs through very inefficient, mechanical interactions (i.e. by answering questions through a UI). The problem with this approach is that it (1) isn’t very scalable, since this type of interaction can only support so many appraisals over any reasonable timespan, and (2) isn’t very accurate, since asking human appraisers questions requires approximations that can significantly affect the accuracy of resulting appraisals.
On the flip side, ML models are highly scalable and can be much more accurate than human appraisers. As such, we have been building tools for developers who are building NFT pricing models, and are placing less emphasis on an experience centered around answering questions through a UI. It’s our belief that pricing models will be the most accurate appraisers in the Upshot ecosystem going forward, and model creators will be paid based on how effective their models are. Our own ML models are meant to bootstrap the network of “artificial appraisers” as more projects and developers begin pointing their pricing models at the subjective aggregation mechanism at Upshot’s core.
Stay tuned - next week we will take a look at NFT pricing using last price, moving averages, and machine learning, including how to understand prediction errors and make use of the upper and lower bounds generated by our ML pricing to make better decisions.
For more reading on machine learning and valuation in the traditional art world, which shares many parallels to the NFT world, check out this article from the Harvard Data Science Review.
Stay in the loop!
+ Join us on Discord
+ Follow us on Twitter
+ Subscribe to our newsletter