There are at least three good places to get data on blockchains: BigQuery, Nansen, and Dune Analytics. I will show how each can be used to answer a very basic question: what is the historical floor price of a popular NFT collection on OpenSea? Along the way, I will highlight the trade-offs between the data sources (summarized below):
Nansen is a subscription service. The standard tier ($399 / quarter) gives you access to an excellent UI with many views (see the many great videos on usage here). There is one central limitation: the data is read-only unless you pay $3,990 / quarter for access to CSV downloads. This may not matter to users who only want to view the dashboards. In any case, NFT God Mode answers our question (e.g., for BAYC) over the past month.
What if we want to see a longer historical timeframe of floor price? Without access to the Nansen raw data, we are limited to what the dashboard provides. Dune Analytics provides an extensive library of existing, user-generated queries and dashboards for free. The queries can easily be forked, making the logic composable. In addition, some of the dashboards are extremely good (rivaling Nansen in information quality).
There is also a Python client for Dune, which lets you easily pull the results of any query into Python. Here is my Dune query to answer our question. And below is the Python code I use to access the raw query result and further manipulate / plot the data:
import pandas as pd
from duneanalytics import DuneAnalytics

# log in and fetch an auth token
dune = DuneAnalytics('name', 'pw')
dune.login()
dune.fetch_auth_token()

# fetch the result of the query
result_id = dune.query_result_id(query_id=294423)
data = dune.query_result(result_id)
result_list = data['data']['get_result_by_result_id']
result_list_clean = [e['data'] for e in result_list]
d = pd.DataFrame(result_list_clean)

# reshape and plot
d['Day'] = pd.to_datetime(d.Day)
d_ = d.pivot(index='Day', columns='Name', values='Floor (Approx)')
d_.plot(figsize=(15, 5))
This plot shows the BAYC floor (along with MAYC and Runners) for the past year:
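If the raw floor series is noisy, a rolling median is one way to smooth it. Here is a minimal sketch, assuming a pivoted DataFrame shaped like `d_` in the snippet above (the data here is synthetic, not real floor prices):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the pivoted floor-price DataFrame `d_` above:
# one row per day, one column per collection.
days = pd.date_range("2021-05-01", periods=365, freq="D")
rng = np.random.default_rng(42)
d_ = pd.DataFrame(
    {"BAYC": 40 + np.abs(rng.normal(0, 2, len(days))).cumsum() * 0.05},
    index=days,
)

# A 7-day rolling median damps single-day outliers in the floor price;
# min_periods=1 avoids NaNs at the start of the series.
smoothed = d_.rolling(window=7, min_periods=1).median()

print(smoothed.tail())
```

The smoothed frame has the same shape as the input, so it can be plotted on the same axes as the raw series.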
nick.eth has already written a very good overview of Ethereum analytics using BigQuery. In short, BigQuery can be used to access a lot of raw blockchain transaction data (full list of datasets here). BigQuery is populated by the Blockchain ETL library (authored by Evgeny Medvedev, the co-founder of Nansen). It is easy to access data for any ERC-721 contract from the tables below (see example). BigQuery is well-optimized, familiar to many, and offers a Python client for composing queries in a Jupyter notebook.
SELECT contracts.address, COUNT(1) AS tx_count
FROM `bigquery-public-data.crypto_ethereum.contracts` AS contracts
JOIN `bigquery-public-data.crypto_ethereum.transactions` AS transactions
  ON (transactions.to_address = contracts.address)
WHERE contracts.is_erc721 = TRUE
GROUP BY contracts.address
ORDER BY tx_count DESC
LIMIT 10
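The same query can be run from the BigQuery Python client mentioned above. A sketch is below; the execution lines are commented out because they require `google-cloud-bigquery` to be installed and GCP credentials configured in the environment:

```python
# Top ERC-721 contracts by transaction count, mirroring the SQL above.
TOP_ERC721_QUERY = """
SELECT contracts.address, COUNT(1) AS tx_count
FROM `bigquery-public-data.crypto_ethereum.contracts` AS contracts
JOIN `bigquery-public-data.crypto_ethereum.transactions` AS transactions
  ON transactions.to_address = contracts.address
WHERE contracts.is_erc721 = TRUE
GROUP BY contracts.address
ORDER BY tx_count DESC
LIMIT 10
"""

# To execute (requires `pip install google-cloud-bigquery` and GCP
# credentials; uncomment to run against the public dataset):
# from google.cloud import bigquery
# client = bigquery.Client()
# df = client.query(TOP_ERC721_QUERY).to_dataframe()  # pandas DataFrame
# print(df)
```

`to_dataframe()` lands the result directly in pandas, which is what makes the notebook workflow convenient.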
In short, Nansen is an excellent source of data if you only need read-only access to high-quality dashboards, or if you have a large budget to pay for API access. In addition, Nansen has labeled ~100M wallets (associating an address with an ETH name and / or institution), which is very useful for grouping and interpretability.
Dune took a different approach, letting the community create the dashboards and queries. It’s the most composable (e.g., it’s easy to fork queries from others and re-build dashboards), but less streamlined than Nansen (e.g., dashboards must be searched for and saved). It’s excellent for developers: you can fork existing logic, and the Python client pulls query results into Python for further analysis or plotting.
BigQuery is a great source of raw data with fast execution and familiar tools (e.g., a great Python client). Of course, it lacks the extensive library of example queries / dashboards that Dune offers. In addition, Dune (along with Nansen) appears to have several tables (e.g., for OpenSea and DeFi) that are missing from BigQuery.