Discovery Engine Overview

The Discovery Engine is the heart of Lucidity’s infrastructure.
In this post, we take a deeper look at the technology powering the engine, which optimises leverage positions with high accuracy and adaptability, tailored to an individual user’s risk appetite.

The engine is being designed to serve multiple stakeholders and builders across the leverage landscape, ranging from vaults, yield aggregators, and portfolio managers to lending protocols, asset managers, and liquidators, powering the next era of leverage use cases.

Discovery Engine V1

The target audience for the engine’s V1 is end-user-facing protocols that aim to optimise a user’s overall leverage position across protocols. The engine’s output is a relative score for every available market for the selected assets, which can be used to power automations and strategies such as auto-refinance, liquidation protection, auto-rebalance, and looping.

User Inputs

The engine takes two sets of inputs from a user:

  • Asset Selection:

    • Supply Asset

    • Debt Asset

  • Optimization Parameters: Users can enable various parameters to tailor the engine's scoring based on their preferences.

    • For Lenders:

      • Supplied Liquidity: The total supplied liquidity in a protocol for the selected asset

      • Net Supply APR: The net annual percentage rate earned on the supply asset, including any protocol rewards

    • For Borrowers:

      • Net APY: supply APY - (borrow APY * LTV), i.e. the yield on supplied collateral net of borrowing costs (see the worked example after this list)

      • Available Liquidity: The total available liquidity of the borrow asset in a protocol

      • Loan-to-Value (LTV) Ratio: The ratio of the maximum borrowable amount to the collateral supplied

      • Liquidation Threshold (LT): The loan-to-value ratio at which a position becomes liquidatable
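
To make the Net APY definition concrete, here is a quick worked example; the rates and LTV below are hypothetical, not live market data:

```python
# Worked Net APY example with hypothetical rates.
supply_apy = 0.05   # 5% earned on the supplied collateral
borrow_apy = 0.03   # 3% paid on the debt
ltv = 0.75          # borrowed amount as a fraction of collateral

net_apy = supply_apy - borrow_apy * ltv   # 0.05 - 0.0225
print(f"Net APY: {net_apy:.2%}")          # Net APY: 2.75%
```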

Deep Learning-Based Scoring

To dynamically handle the different combinations of parameters a user can select, and to decide the weights for a composite score, we leverage deep learning techniques such as attention mechanisms and meta-models.

Implementation

Step 1: Data preparation

  • Collect and normalise data: index historical on-chain data for all relevant parameters and normalise each parameter to a [0, 1] range (see the sketch after this list).

  • Label creation: define the target variable (label) based on a composite scoring criterion that reflects the desired outcomes (e.g., historical returns, risk-adjusted returns).

  • Dynamic feature set: create a comprehensive feature matrix corresponding to each available market.
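
As an illustration of the normalisation step, here is a minimal sketch of per-parameter [0, 1] scaling; the markets, columns, and values are hypothetical:

```python
import numpy as np

# Hypothetical feature matrix: one row per available market, one column per
# indexed parameter (e.g. net supply APR, LTV, available liquidity).
raw = np.array([
    [0.043, 0.80, 1_200_000.0],   # market A
    [0.051, 0.75, 300_000.0],     # market B
    [0.038, 0.82, 5_000_000.0],   # market C
])

def min_max_normalise(x: np.ndarray) -> np.ndarray:
    """Scale each column (parameter) independently to the [0, 1] range."""
    col_min = x.min(axis=0)
    col_range = x.max(axis=0) - col_min
    col_range[col_range == 0] = 1.0   # guard against zero-variance columns
    return (x - col_min) / col_range

features = min_max_normalise(raw)   # every value now lies in [0, 1]
```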

Step 2: Designing the model architecture

  • Flexible input layer: capable of accommodating a varying number of features, allowing customisation based on user-selected parameters.

  • Deep learning layers: multiple layers of neural networks, including dense layers, dropout layers for regularisation, and advanced activation functions to enhance model learning.

  • Risk sensitivity layer: adjusts scores based on the user’s risk sensitivity by modifying the influence of various inputs according to the overall risk score assigned to the user. Currently the user supplies this sensitivity directly (safe/intermediate/degen); over time we will also index a wallet’s historical data to model risk sensitivity automatically. A sketch of the full architecture follows this list.
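
A minimal PyTorch sketch of this architecture is below; the layer sizes, activation choice, and risk-bucket handling are illustrative assumptions, not the production model:

```python
import torch
import torch.nn as nn

class ScoringModel(nn.Module):
    """Sketch of the Step 2 architecture; sizes and choices are illustrative."""

    def __init__(self, n_features: int, hidden: int = 64, dropout: float = 0.2):
        super().__init__()
        # Flexible input: n_features varies with the user's parameter selection.
        self.backbone = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.GELU(),              # advanced activation function
            nn.Dropout(dropout),    # regularisation
            nn.Linear(hidden, hidden),
            nn.GELU(),
            nn.Dropout(dropout),
        )
        # Risk sensitivity layer: a learned modulation per risk bucket
        # (0 = safe, 1 = intermediate, 2 = degen).
        self.risk_embedding = nn.Embedding(3, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor, risk_bucket: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)
        h = h * self.risk_embedding(risk_bucket)  # scale features by risk profile
        return self.head(h).squeeze(-1)           # one score per market

model = ScoringModel(n_features=3)
scores = model(torch.rand(5, 3), torch.ones(5, dtype=torch.long))  # 5 markets
```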

Step 3: Incorporating the attention mechanism

  • Dynamic weight allocation: an attention mechanism is integrated to assign dynamic weights to various inputs, focusing on the most impactful features for prediction at any given time.

  • Enhanced contextual understanding: helps the model understand and prioritise which parameters matter most in different lending and borrowing scenarios (see the sketch below).
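
The sketch below shows one simple way such an attention layer could look in PyTorch; the embedding size and the learned-query formulation are assumptions for illustration:

```python
import torch
import torch.nn as nn

class ParameterAttention(nn.Module):
    """Sketch of Step 3: a learned query attends over the user-selected
    parameters and assigns each one a dynamic weight."""

    def __init__(self, d_model: int = 16):
        super().__init__()
        self.embed = nn.Linear(1, d_model)             # lift scalar values
        self.query = nn.Parameter(torch.randn(d_model))
        self.scale = d_model ** -0.5

    def forward(self, params: torch.Tensor):
        # params: (markets, n_params) normalised values; n_params can vary.
        keys = self.embed(params.unsqueeze(-1))        # (markets, n_params, d_model)
        weights = torch.softmax(keys @ self.query * self.scale, dim=-1)
        score = (weights * params).sum(dim=-1)         # attention-weighted score
        return score, weights

layer = ParameterAttention()
score, weights = layer(torch.rand(4, 2))  # e.g. Net APY + Available Liquidity
print(weights)  # per-market importance of each selected parameter
```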

Step 4: Meta-model configuration

  • Training the base models: individual models are trained for each parameter like Net APY, LTV, and Available Liquidity.

  • Meta-model training: the meta-model, which is another neural network, takes the outputs of individual base models as inputs and learns the optimal way to combine them into a comprehensive score based on past performance and correlations.
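
Here is a minimal sketch of the base-model/meta-model split; the parameter set, network sizes, and the zero-masking of unselected parameters are illustrative assumptions:

```python
import torch
import torch.nn as nn

PARAMS = ["net_apy", "ltv", "available_liquidity"]  # illustrative parameter set

def make_base_model() -> nn.Module:
    # Small per-parameter scorer (illustrative architecture).
    return nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

# Step 4a: one base model per parameter.
base_models = nn.ModuleDict({p: make_base_model() for p in PARAMS})

# Step 4b: the meta-model is another small network that takes the base
# models' outputs and learns how to combine them into one composite score.
meta_model = nn.Sequential(nn.Linear(len(PARAMS), 8), nn.ReLU(), nn.Linear(8, 1))

def composite_score(inputs: dict) -> torch.Tensor:
    """inputs maps selected parameter names to (markets, 1) normalised values;
    unselected parameters contribute a zero base score."""
    n_markets = next(iter(inputs.values())).shape[0]
    cols = [
        base_models[p](inputs[p]) if p in inputs else torch.zeros(n_markets, 1)
        for p in PARAMS
    ]
    return meta_model(torch.cat(cols, dim=-1)).squeeze(-1)
```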

Step 5: Dynamic scoring and optimisation

  • User input reception: receives the user’s optimisation preferences and selected parameters

  • Real-time score calculation: based on the user inputs, the model dynamically configures the input vectors and computes scores using both the base models and the meta-model.

Workflow example with the Meta-model approach

  1. User selection: Suppose a user selects "Net APY" and "Available Liquidity" as parameters.

  2. Parameter normalisation: These parameters are first normalised.

  3. Input configuration: an input vector with normalised values for the selected parameters is created.

  4. Base model scoring: this vector is fed into the base models to generate scores for each parameter.

  5. Meta-model processing:

    • The individual scores are fed into the meta-model.

    • The meta-model computes a composite score by effectively weighing and combining these individual scores.

  6. Output delivery: the engine outputs the highest-scoring market as the optimal choice based on the composite score.
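
Putting the steps together, the sketch below runs this workflow end to end using the base-model/meta-model sketch from Step 4; the markets and normalised values are hypothetical:

```python
import torch

# 1-2. The user selects "Net APY" and "Available Liquidity"; the values
#      below are already normalised to [0, 1] (hypothetical, 3 markets).
selected = {
    "net_apy": torch.tensor([[0.90], [0.40], [0.65]]),
    "available_liquidity": torch.tensor([[0.20], [1.00], [0.55]]),
}

# 3-5. Input configuration, base model scoring, and meta-model processing
#      (composite_score is the sketch from Step 4 above).
scores = composite_score(selected)

# 6. Output delivery: the highest-scoring market is the optimal choice.
best_market = int(torch.argmax(scores))
print(f"optimal market index: {best_market}")
```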

Benefits of this approach

  • Flexibility:

    • The model can handle any combination of parameters selected by the user.

    • The attention mechanism or meta-model dynamically adjusts the weights based on the importance of each parameter.

  • Scalability:

    • The system scales well with the addition of new parameters.

    • The attention mechanism or meta-model adapts to new data and learns optimal combinations.

  • User-centric:

    • Provides tailored output based on user preferences.

    • Allows users to see the influence of each parameter on the final score.

By incorporating an attention mechanism or a meta-model, our system dynamically handles different combinations of parameters and optimises for user-selected criteria in a sophisticated and scalable manner. This deep learning approach ensures that the system is both flexible and adaptive to varying user needs.

Backtesting framework

We are also developing a robust backtesting framework to evaluate the performance of selected protocols over long historical periods. This framework will simulate both ideal and extreme market conditions, providing valuable insights into the efficacy of our scoring model.
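
As a flavour of what such a framework measures, the toy backtest below compounds the realised return of whichever market a selection policy picks at each step; the return series, policies, and horizon are all synthetic assumptions:

```python
import numpy as np

# Synthetic daily returns for 3 hypothetical markets over one year.
rng = np.random.default_rng(0)
n_steps, n_markets = 365, 3
realised = rng.normal(loc=0.0002, scale=0.002, size=(n_steps, n_markets))

def backtest(pick_market) -> float:
    """Compound the daily return of whichever market the policy picks."""
    equity = 1.0
    for t in range(n_steps):
        equity *= 1.0 + realised[t, pick_market(t)]
    return equity

# Naive stand-in for the scoring engine: follow yesterday's best market.
follow_best = lambda t: 0 if t == 0 else int(np.argmax(realised[t - 1]))
static = lambda t: 0  # baseline: never rebalance out of market 0

print(f"policy: {backtest(follow_best):.4f}x, baseline: {backtest(static):.4f}x")
```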

System overview

To ensure robustness, security, and scalability of our discovery engine, we have meticulously designed the following components:

  1. Data ingestion: indexing real-time and historical on-chain data across integrated protocols. Read more about our data engine here.

  2. Data processing: cleaning, normalising, and validating this data to ensure integrity.

  3. Prediction models: utilising advanced ML algorithms for protocol discovery and user behaviour predictions.

  4. Real-time data updates: continuously updating datasets and models with new information.

  5. Output logging: logging every prediction and significant system action for accountability and analysis.

  6. Backtesting framework: simulating market scenarios to evaluate model performance.

Production deployment strategies

  • Scalability: Leveraging cloud services for dynamic resource scaling.

  • Security: Implementing comprehensive security measures.

  • Compliance: Ensuring adherence to financial regulations and data protection laws.

Maintenance and continuous improvement

  • Automated pipelines: establishing CI/CD pipelines for seamless updates.

  • A/B testing: regularly testing new models or features before full-scale deployment.

  • User feedback integration: continuously refining the user experience and model accuracy based on feedback.

A glimpse into the future

We are continuously improving the Discovery Engine’s predictive capabilities to deliver even more accurate and personalised scoring.

Our goal is to evolve the Discovery Engine to offer insights into risk frameworks and optimisations across three dimensions: protocol-level, asset-level, and borrower-level. Builders will be able to fine-tune the engine’s output to suit their specific use case and target audience.

Join us on this journey

If you are building within the leverage landscape, we invite you to explore our Discovery Engine and experience firsthand how it can revolutionise your DeFi lending and borrowing strategies. This also allows us to collect feedback and make sure we’re building in the right direction.

Stay tuned as we roll out more features and enhancements to help you navigate the DeFi landscape with confidence.
