Metrics-Based Voting - How Quantitative Metrics Shape Better Decisions
GovXS
0x8038
April 7th, 2025

Abstract

Metrics-based Voting is an emerging class of governance mechanisms that integrates quantitative data with voter preferences to support collective decision-making. In this article, we explore the design space of such systems, with a focus on how metric scores—whether retrospectively measured or predictively generated—can be combined with expressed preferences to allocate resources. Drawing on recent experiments in the crypto sector, we examine key design choices, including the selection of metrics, evaluation algorithms, vote elicitation, aggregation, and normalization. We highlight methodological challenges such as sensitivity of evaluation algorithms, risks of data manipulation, and the absence of strategyproofness in metric-weighted settings. Finally, we introduce a formal model and the GovXS verification approach to evaluate and optimize metrics-based voting systems under real-world constraints.

Introduction

Integrating data and artificial intelligence (AI) with voting mechanisms aims to blend human preferences with neutral, objective evaluation of information. A central challenge in governance is ensuring that voters have the necessary bandwidth, expertise, and neutrality to evaluate options effectively. In practice, this is sometimes handled by simple delegation, which introduces risks such as centralization, principal-agent failures, and vulnerability to attacks [1]. What if we could combine human preferences and metrics-based evidence to increase efficiency, reduce bias, and further strengthen decentralized decision-making?

Metrics-based Voting is a class of governance mechanisms in which voting options are evaluated based on predefined metrics. For example, in a funding system for on-chain projects, factors such as fees generated and the number of unique users may determine the allocation of rewards. Rather than voting directly on projects, participants vote on the relative importance of different metrics, and the final decision is computed by aggregating these preferences with the metric scores of each option.

From a social choice perspective, metrics-based voting merges two distinct concepts:

Subjective Preferences: Voters submit personal preferences, which are then aggregated according to a voting rule. This aligns with a classical social choice theory approach, where votes are considered as individual preferences over possible outcomes. Theoretical concepts such as Arrow’s impossibility theorem and the Gibbard-Satterthwaite theorem are relevant in this context [2].

Objective Ground Truth: The system seeks to approximate an objective evaluation by defining and measuring relevant metrics. The assumption in this setting is that some objective, correct ground truth exists that the community tries to reveal – by defining metrics to evaluate voting options and determining the final voting outcome. This approach corresponds to epistemic democracy, which emphasizes decision-making based on verifiable data [3]. A related theoretical result is Condorcet’s jury theorem, which describes conditions under which collective decision-making converges toward a correct outcome.

By integrating both subjective and objective components, metrics-based voting provides a framework where neutral data and algorithmic information processing complement human choices and personal preferences in governance.

Related Work

In digital networks, integrating human preferences with evidence derived from metrics has gained increasing attention, resulting in a variety of approaches.

Broadly, metrics-based voting can be categorized into two distinct classes. In retrospective mechanisms, the metric component measures past achievements. In predictive mechanisms, the metric component attempts to predict future performance, for example by using prediction markets to incentivize good forecasting.

Retrospective Mechanisms

In Optimism’s Retro Funding round 4 (2024), voters were asked to vote on 16 impact metrics. The resulting ballot data was used to construct a weighting function that distributed funding among roughly 200 projects based on measured impact. In the subsequent Onchain Builders and Dev Tooling rounds, Optimism further developed evaluation algorithms to assess impact. In the Dev Tooling round, the algorithms applied a value chain graph linking development tools to on-chain builders, utilizing OpenRank’s EigenTrust implementation to iteratively distribute trust across projects. Here, voters selected the evaluation algorithm via approval voting, and project funding was allocated accordingly.

Predictive Mechanisms

DAOstack’s Holographic Consensus (2018) introduced a mechanism in which network participants stake on proposals, filtering options and reducing information overload for voters. By staking, participants provide feedback to the governance system about which proposals may be under- or over-rated.

Futarchy, originally proposed by Robin Hanson (2000), replaces direct voting on policy proposals with voting on a metric that defines success (e.g., economic growth, public welfare). Prediction markets then determine which policies are most likely to optimize the chosen metric, with participants rewarded based on the accuracy of their forecasts once actual outcomes can be measured.

In late 2024, Gitcoin and Filecoin announced an experiment combining predictive mechanisms with community voting. In Filecoin’s second Retroactive Public Goods Funding (RetroPGF2) round, a prediction market ran alongside badgeholder voting.

In 2025, the Uniswap Foundation and Optimism Foundation announced a broader collaboration on information markets for governance, expanding on Futarchy through Butter’s implementation. Butter’s Conditional Funding Markets allow traders to buy and sell tokens whose prices reflect expected performance on a measurable metric chosen by the DAO. Prediction outcomes determine how funds are distributed to projects; over time, successful forecasters make profits (and gain influence), while those with inaccurate predictions lose capital. In parallel, MetaDAO has launched a platform to facilitate Metric Markets, an approach that predicts contributor performance on specific metrics and thus helps inform grant decisions. Such Futarchy implementations can either serve as standalone decision systems that directly execute outcomes, or act as inputs to evaluation algorithms within broader metrics-based voting frameworks.

(Update 2025/04/16) The Deep Funding initiative tests metrics-based capital allocation for Ethereum’s open-source infrastructure over a dependency graph comprising 30+ core repositories (including consensus and execution clients) and their 5,000 downstream package dependencies. Through an open competition, model developers submit ML/AI-based algorithms that propose funding distributions across the graph. Submitted models are evaluated through pairwise comparisons conducted by expert judges on randomly sampled nodes. The expert votes serve as a benchmark to assess alignment between algorithmic outputs and expert judgment.

Connections to Scientific Research

Metrics-based voting is a new approach with limited real-world deployment, but many recent applications—such as those discussed above—aim to address participatory budgeting problems. This is a well-studied domain in political science, offering a rich body of research to inform emerging designs. Concepts such as approval-based committee voting and its representation properties [4], as well as approval-based budgeting [5], provide theoretical foundations for this class of governance mechanisms. More general frameworks for participatory budgeting offer comparative models [6] for preference elicitation, welfare objectives, fairness axioms, and voter incentives.

In artificial intelligence (AI), metrics-based decision-making has long posed a fundamental challenge due to the lack of a formal specification of the ground truth agreed upon by society - or an approximation thereof. Recent work in the field of Computational Social Choice proposes to let models of multiple people’s moral values vote over the relevant alternatives [7], even in the absence of such ground truth principles. General approaches for automated ethical decision making include concrete algorithms informed by a new theory of swap-dominance efficient voting rules [8].

From an epistemic perspective, the interplay between objective facts and subjective preferences in collective decision-making has been explored by David Estlund [9] and List and Goodin [10], who examine mechanisms designed to improve truth-tracking in group decisions.

Mechanisms that incentivize honest information reporting, such as peer-prediction [11], Impartial Peer-Review [12], and ‘Mechanisms for making crowds truthful’ [13], share commonalities with the challenge of ensuring reliable importance statements about metrics.

Focus of this Article

For the rest of this article, we focus on combining metric scores with voter preferences to determine outcomes. At this point, metric scores may be either measured, based on retrospective data, or predicted, as generated by forecasting mechanisms such as prediction markets. We abstract away from the upstream mechanisms that produce metric forecasts and concentrate on how they are used in the voting rule to make collective decisions.

Recent implementations in the crypto sector—particularly in the context of capital allocation—provide illustrative examples.

Designing Metrics-Based Voting Systems

A metrics-based voting system consists of the following components:

  • Metrics & Evaluation Algorithms:

    • A system for tracking data and verifying contributions (e.g., active users identified via Sybil-resistant addresses, or fee revenues by smart contracts).

    • Evaluation metrics that transform raw data into quantifiable indicators of value.

  • Voter Preferences & Voting Rules:

    • Definition of voter eligibility and mechanisms for eliciting and casting votes on the relevance of metrics.

    • Procedures for vote elicitation, aggregation, and normalization

  • Voting Outcome:

    • Final outcome of the metrics-based voting process

Designing a metrics-based voting system requires careful consideration of metrics selection, evaluation algorithms, and voting rules to ensure robustness and meaningful results. Below, we outline the key steps in this process.

Step 1: Define the Objective of the Voting Process

In metrics-based voting, there is no universal voting design, metric, or evaluation algorithm; the choice of methodology depends on the specific objective of the voting process. Clearly defining the purpose establishes the foundation for metric selection and system design.

Key questions include:

  • Is the goal to identify the most successful candidates based on measured past results, or to predict future potential?

  • Should the system prioritize long-term impact or short-term performance?

  • Should voting reflect retrospective achievements or expected future contributions?

The answers to these questions inform the selection of relevant metrics in the next step.

Step 2: Define Metrics and Data Sources

Metrics translate the voting purpose into measurable criteria. Choosing appropriate metrics and respective data sources requires balancing relevance, reliability, and data availability.

Key considerations:

  • Relevance – Do the selected metrics accurately reflect the intended evaluation criteria?

  • Data Availability – Is there sufficient and verifiable data to compute meaningful outcomes?

  • Granularity – Are the data points detailed enough to capture significant differences among candidates?

  • Performance & Reliability – Are the metrics stable over time and consistently measurable across candidates?

Step 3: Develop the Evaluation Algorithm

Evaluation algorithms fine-tune how metrics are translated into scores that reflect the stated purpose. In many cases, relying solely on raw data is insufficient or inappropriate, and additional computation steps are required to derive each option’s final score (a minimal sketch follows the list below).

Key tasks:

  • Define how each metric is measured (e.g., should "fees generated" be tracked on a rolling monthly basis or summed across the evaluation period? Measured in native tokens or in USD?).

  • Select an appropriate aggregation method (e.g., averages, trends, weighted sums, logarithmic scoring).

  • Ensure fairness in metric weighting to prevent unintended biases.

  • Conduct validation tests to assess the algorithm’s robustness and stability.
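To make these tasks concrete, here is a minimal sketch of an evaluation pipeline in Python. The metric names ("fees_usd", "unique_users"), the weights, and the log-scaling and min-max normalization choices are illustrative assumptions, not a prescribed design.

```python
import numpy as np

# Hypothetical raw metric data summed over the evaluation period.
raw = {
    "project_a": {"fees_usd": 120_000.0, "unique_users": 3_500},
    "project_b": {"fees_usd": 8_000.0,   "unique_users": 9_000},
    "project_c": {"fees_usd": 55_000.0,  "unique_users": 1_200},
}
weights = {"fees_usd": 0.6, "unique_users": 0.4}  # illustrative metric weights

def evaluate(raw, weights):
    projects = list(raw)
    scores = {p: 0.0 for p in projects}
    for m in weights:
        # Log-scale to dampen outliers, then min-max normalize to [0, 1]
        # so metrics on very different scales become comparable.
        col = np.log1p(np.array([raw[p][m] for p in projects]))
        col = (col - col.min()) / (col.max() - col.min() + 1e-12)
        for p, v in zip(projects, col):
            scores[p] += weights[m] * float(v)
    return scores

print(evaluate(raw, weights))
```

Validation tests (Step 5) would then probe how sensitive the resulting ranking is to these weighting and scaling choices.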

Step 4: Define the Voting Rule

Voting rules govern how voter preferences are collected and aggregated into a final decision.

The three key components are:

Vote Elicitation Method

  • Should voters select a single winner, rank candidates, or rate metrics on a scale?

  • Should voters allocate points or tokens across different options?

Vote Aggregation Rule

  • Should results be determined using Mean, Median, Weighted Sum, Quadratic Voting, or other methods?

  • How should votes be combined with metric scores?

Normalization

  • If distributing funding, slots, or another limited resource, should allocation be proportional to votes?

Each choice influences fairness, strategic behavior, and the overall integrity of the voting system.

Step 5: Validate and Optimize the Voting Design

Before implementation, the system must undergo validation to ensure it aligns with its intended function.

Key validation checks:

  • Metric robustness – Can metrics be easily measured, and are they resistant to manipulation?

  • Evaluation algorithm performance – Does the system effectively distinguish strong candidates from weak ones?

  • Voting rule effectiveness – Does the rule structure yield accurate, fair, and meaningful results?

  • Attack vector analysis – What are the possible ways to manipulate the system, and how costly would attacks be?

Given that building metrics-based voting infrastructure is resource-intensive, early validation ensures that the system design process is maximally efficient.

Step 6: Establish Data Pipelines and Voting Infrastructure

The technical infrastructure must support data collection, processing, and secure voting execution.

Key components:

  • Data Collection and Validation – Ensure data pipelines pull from trusted sources, include fraud detection mechanisms, and update in real-time.

  • Scoring and Evaluation Engine – Implement software that calculates evaluation scores.

  • Voting Interface – Develop a user-friendly and secure platform for casting and verifying votes.

Establishing robust data and computation pipelines ensures accuracy, security, and reliability.

Step 7: Continuous Monitoring and Adaptation

Metrics-based voting systems must evolve to reflect changing priorities, adapt to new threats, and improve fairness over time.

Key monitoring areas:

  • Shifts in purpose and priorities – Introduce new metrics to accommodate changing project landscapes

  • Detection of gaming strategies – Implement strategies to detect malicious behavior

  • Emerging attack vectors – Defend the system against manipulation and attacks

  • Algorithm tuning and recalibration – Periodically reassess weighting methods, normalization approaches, and aggregation rules.

Ongoing governance mechanisms should be in place to make these adjustments without disrupting the integrity of the system.

Addressing New Challenges

At GovXS, we test and verify voting systems for security, efficiency, and fairness. Metrics-based voting poses a number of new challenges that any ecosystem must address.

Sensitivity of Evaluation Algorithms

The design of evaluation algorithms significantly impacts voting outcomes. Optimism’s Retro Funding S7 Onchain Builders evaluation algorithm has variants that favor either Growth or Retention. While all variants process the same metrics and underlying data, the Growth variant “Accelerator” rewards increases in value (the positive difference between the current and previous period’s metric values), while the Retention variant “Goldilocks” takes the minimum of the current and previous period’s values, rewarding sustained, stable metrics.
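The difference can be illustrated with a simplified sketch. This abstracts the published eval-algos (linked in the source below) into two toy scoring functions; it is not Optimism’s actual implementation, and the numbers are hypothetical.

```python
def growth_score(current: float, previous: float) -> float:
    # "Accelerator"-style: reward the positive period-over-period increase.
    return max(current - previous, 0.0)

def retention_score(current: float, previous: float) -> float:
    # "Goldilocks"-style: reward the level sustained across both periods.
    return min(current, previous)

# Same underlying data, different favored profiles:
projects = {"steady": (100.0, 95.0), "spiky": (150.0, 20.0)}
for name, (current, previous) in projects.items():
    print(name, growth_score(current, previous), retention_score(current, previous))
# "spiky" dominates under the growth variant (130 vs 5),
# "steady" dominates under the retention variant (95 vs 20).
```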

Figure: Optimism's evaluation methodology developed for the Retro Funding S7 Onchain Builders Mission. Source: https://github.com/ethereum-optimism/Retro-Funding/tree/main/eval-algos

In general, an optimal evaluation algorithm should align with the intended design goals while minimizing the impact of outliers to prevent disproportionate distortions in scoring.

At GovXS, we start with a definition of KPIs to measure performance. Then, we feed real historical data into a model of the voting system. By systematically testing algorithmic variants against this data, we assess the sensitivity of evaluation algorithms under real-world conditions. We also stress-test systems using synthetic data and extreme-value scenarios. This analysis helps determine whether evaluation algorithms disproportionately favor certain candidate profiles and to what extent results remain stable across distributions.

Data Manipulation

Data manipulation in metrics-based voting can occur when candidates artificially inflate their scores or selectively optimize certain metrics to game the system. This is particularly problematic in scenarios where candidates learn how the evaluation algorithm works and adjust their behavior to maximize their ranking without genuinely improving quality or performance.

For example, an open-source funding system might measure “commits to a code repository”, and add a time decay for developer rewards so that inactive developers receive less than those who contributed more recently. With this mechanism in mind, developers may submit frequent but low-value commits to maintain eligibility. Or teams may delay merging contributions to ensure that rewards fall within an optimal timeframe.
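As an illustration of how such a time-decay rule could be gamed, consider a hypothetical exponentially decayed commit weight; the half-life and the scoring logic are assumptions made only for the sake of the example.

```python
def commit_weight(age_days: float, half_life_days: float = 30.0) -> float:
    # Hypothetical decay: a commit loses half its weight every `half_life_days`.
    return 0.5 ** (age_days / half_life_days)

# One substantial commit 90 days ago vs. weekly trivial commits over the same window.
substantial = commit_weight(90)                            # ~0.125
trivial = sum(commit_weight(d) for d in range(0, 91, 7))   # ~5.9
print(substantial, trivial)  # the stream of trivial commits scores far higher
```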

Mitigating data manipulation in metrics-based voting therefore starts with detecting unusual, malicious patterns.

At GovXS, we apply time-series analysis to historical data to identify anomalous trends (e.g., irregular liquidity spikes in mining programs). Applied to metrics-based voting, we collect historical data to reveal trends, patterns, and anomalies. By training machine learning models, we can extract relevant features from raw data and identify outliers in metric values. Using Hidden Markov Models (HMMs), we can identify latent patterns that suggest gaming strategies. These insights inform refinements to evaluation algorithms and infrastructure for detecting and mitigating fraudulent behavior.
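A minimal sketch of the kind of decomposition illustrated below, assuming a daily TVL series loaded from a hypothetical CSV file (pool_tvl.csv with columns date and tvl) and a weekly seasonal period:

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Load a daily TVL series (hypothetical file and column names) and fill gaps.
tvl = (pd.read_csv("pool_tvl.csv", parse_dates=["date"])
         .set_index("date")["tvl"]
         .asfreq("D")
         .interpolate())

# Decompose into trend, weekly seasonality, and residuals.
decomposition = seasonal_decompose(tvl, model="additive", period=7)

# Flag days whose residual is far outside the typical range as candidate anomalies.
resid = decomposition.resid.dropna()
anomalies = resid[resid.abs() > 3 * resid.std()]
print(anomalies)
```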

Time series decomposition for finding patterns in liquidity mining activities in Uniswap pools. The top diagram shows the original TVL values, while the second chart indicates the trend. The third chart suggests strong seasonality: there may be specific days of the week when volume and fees are higher. The residuals (fourth diagram), representing the random variation not explained by trend or seasonality, are fairly consistent over time; however, the large spikes in residuals suggest anomalies due to external factors worth exploring further.

Best Voting Rule: Vote Elicitation, Aggregation and Normalization

A key challenge in metrics-based voting is selecting appropriate vote elicitation, aggregation, and normalization methods.

Our formal model for analyzing metrics-based voting designs consists of:

  • k metrics

  • m projects with values over the k metrics: represented as a value matrix val, with 0 < val[j, f] < 1 being the value of project j with respect to metric f.

  • n voters with importance over the k metrics: represented as an importance matrix imp, with 0 < imp[i, f] < 1 being the importance voter i assigns to metric f.

  • a voting rule R, a function that outputs an aggregated importance vector agg, with agg[f] being the aggregated importance of metric f.

  • Then, given a value matrix and an aggregated importance vector, the score of each project is computed by a linear combination: score[j] := \sum_{f \in [k]} agg[f] \cdot val[j, f].

This model allows us to plug in different voting design choices based on context-specific objectives.
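The model translates directly into code. A minimal sketch using NumPy follows; the example value and importance matrices are arbitrary.

```python
import numpy as np

def project_scores(val: np.ndarray, imp: np.ndarray, rule) -> np.ndarray:
    """
    val:  (m, k) value matrix, val[j, f] in (0, 1) = value of project j on metric f.
    imp:  (n, k) importance matrix, imp[i, f] in (0, 1) = importance voter i assigns to metric f.
    rule: voting rule R mapping the importance matrix to an aggregated importance vector agg (length k).
    Returns score[j] = sum_f agg[f] * val[j, f] for each project j.
    """
    agg = rule(imp)
    return val @ agg

# Two aggregation rules R to plug in:
mean_rule = lambda imp: imp.mean(axis=0)
median_rule = lambda imp: np.median(imp, axis=0)

val = np.array([[0.9, 0.1],      # project 0: strong on metric 0
                [0.4, 0.6]])     # project 1: stronger on metric 1
imp = np.array([[0.2, 0.8],
                [0.7, 0.3],
                [0.5, 0.5]])     # 3 voters, 2 metrics

print(project_scores(val, imp, mean_rule))    # ~[0.47, 0.51]
print(project_scores(val, imp, median_rule))  # median agg = [0.5, 0.5] -> [0.5, 0.5]
```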

Vote elicitation options range from selecting the single best evaluation algorithm (with predefined metric weights) to allowing voters to dynamically adjust metric weights based on their preferences.

Vote aggregation methods define how individual voter preferences are combined into a collective decision or outcome. These methods determine the extent to which each voter influences the final result and how diverse inputs are reconciled. Methods range from simple averaging rules (e.g., mean or median) to weighted and non-linear schemes (e.g., quadratic or convex aggregation). Additionally, new governance paradigms, such as continuous elections [14], introduce dynamic models where votes are cast and updated over time, rather than at discrete intervals. Examples include Commitment Voting [15], Perpetual Voting [16], and Conviction Voting [17], which embed temporal or stake-based logic into the voting process.

Formally, an optimal aggregation method minimizes the distance between individual voter preferences and the overall outcome, while incentivizing voters to vote according to their true preferences. For example, we can measure the total voter cost of different aggregation methods (how much the overall voting outcome deviates from the individual preferences stated in the voting), and choose the option that minimizes this value.
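One way to operationalize this, continuing the sketch above, is to define cost as the L1 distance between each voter's stated importance vector and the aggregated importance vector. The distance choice is an assumption; other norms are possible.

```python
def total_voter_cost(imp: np.ndarray, rule) -> float:
    # Sum over voters of the L1 distance between their stated importance
    # and the aggregated importance vector produced by the rule.
    agg = rule(imp)
    return float(np.abs(imp - agg).sum())

for name, rule in [("mean", mean_rule), ("median", median_rule)]:
    print(name, round(total_voter_cost(imp, rule), 3))
# The per-metric median minimizes this L1 cost by construction;
# the mean would instead minimize the sum of squared deviations.
```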

The diagrams above show the impact of an attack on a project’s score and ranking. We assess the robustness of four different elicitation methods, comparing the mean (blue) and median (brown) aggregation methods. In the setup explored here, cumulative elicitation is significantly more robust to attacks and is the best choice to protect the vote against malicious behavior.

Importantly, vote elicitation and aggregation should not be assessed in isolation. The example above shows the performance of different elicitation methods using randomly generated voter inputs. We compare score and rank changes for two aggregation rules—arithmetic mean and median—to illustrate how results vary under different input formats. A key strength of our simulation framework is its ability to model the interplay between metrics, evaluation algorithms, elicitation formats, and aggregation rules, enabling use case-specific analysis and optimization.

A notable insight from our recent theoretical analysis is that strategyproofness—a desirable property in classical voting theory—does not hold in general for metrics-based voting. In particular, traditional strategyproof rules such as the median do not guarantee truthful reporting when applied to metric-weighted inputs. This implies that voters may still have incentives to misrepresent preferences to achieve more favorable outcomes. As a result, additional design measures are required to support incentive compatibility in metrics-based settings.
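A simple way to probe this empirically, continuing the sketch above, is a brute-force search for profitable misreports. The sketch assumes, as a modeling choice, that a voter's disutility is the L1 distance between the final project scores and the scores their own importance vector would induce, and then scans a coarse grid of alternative reports.

```python
import itertools

def disutility(val, reported_imp, rule, true_vec) -> float:
    # Distance between the realized project scores and the scores induced
    # by the voter's true importance vector.
    ideal = val @ true_vec
    return float(np.abs(project_scores(val, reported_imp, rule) - ideal).sum())

def has_beneficial_misreport(val, imp, rule, voter, grid_steps=5) -> bool:
    true_vec = imp[voter].copy()
    truthful = disutility(val, imp, rule, true_vec)
    for report in itertools.product(np.linspace(0, 1, grid_steps), repeat=imp.shape[1]):
        trial = imp.copy()
        trial[voter] = report
        if disutility(val, trial, rule, true_vec) < truthful - 1e-9:
            return True   # this voter can gain by misreporting under this rule
    return False

# Check each rule on the example instance above:
print(has_beneficial_misreport(val, imp, mean_rule, voter=0))
print(has_beneficial_misreport(val, imp, median_rule, voter=0))
```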

Lastly, most voting cases require normalization of voting results. For example, in metrics-based voting over funds distribution, a predefined total funding amount must be shared proportionally according to voting outcomes. This normalization step impacts system properties and, to a certain degree, distorts the precision an evaluation algorithm aims to achieve. We analyze how normalization impacts system properties and propose safeguards to minimize distortions while ensuring fair allocations.
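Concretely, proportional normalization might look like the following sketch, continuing the example above; the budget figure is a placeholder.

```python
def allocate_budget(scores: np.ndarray, budget: float) -> np.ndarray:
    # Proportional normalization: each project's share of the fixed budget
    # is its score divided by the sum of all scores.
    return budget * scores / scores.sum()

scores = project_scores(val, imp, mean_rule)
print(allocate_budget(scores, budget=1_000_000))  # hypothetical 1M token pool
```

Any additional constraints layered on top of this step, such as per-project minimums or caps, are precisely where further distortion of the raw scores enters.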

Attack Vectors

Our attack analysis examines the viability, cost, and computational complexity of manipulating metrics-based voting systems.

The focus of our analysis is to understand the costs versus the profits of an attack, and the computational complexity of conducting an attack. Our preliminary results suggest that certain attacks are weakly NP-hard under specific elicitation and aggregation models, indicating that metrics-based voting may be more resistant to manipulation than traditional voting systems. We are currently conducting further theoretical validation of these findings across common voting frameworks.

Summary

Metrics-based voting is a class of voting designs that integrates quantifiable metrics with voter preferences to enhance decision-making. Instead of direct voting on candidates, participants express preferences over the importance of different metrics, and final outcomes are computed by combining these preferences with measured data.

Benefits

  • Reduces subjectivity by incorporating structured, data-driven evaluation.

  • Increases efficiency by automating key aspects of decision-making.

  • Enhances fairness by weighting decisions based on transparent, predefined criteria.

Challenges

  • Evaluation algorithm sensitivity – Small design choices can significantly impact outcomes.

  • Data manipulation risks – Participants may attempt to game metrics for personal gain.

  • Voting rule complexity – Designing fair, strategyproof aggregation methods remains an open problem.

  • New attack vectors to manipulate the voting results

At GovXS, we design, test, and validate metrics-based voting systems, support their implementation, and establish risk management strategies tailored to each use case. Through comprehensive analysis, we deliver voting designs that are maximally resilient—integrating objective data and subjective preferences to support transparent, fair, and verifiable collective decision-making.

If you are looking for:
✔ Data manipulation detection
✔ Optimal, stable evaluation algorithms
✔ The best voting rules
✔ Measures to protect against attacks

Contact us!

References

[1] Fritsch, R., Müller, M., Wattenhofer, R., 2022, 'Analyzing Voting Power in Decentralized Governance: Who controls DAOs?', arXiv preprint arXiv:2204.01176.

[2] Sen, Amartya, 1986, ‘Social choice theory.’ Handbook of mathematical economics 3: 1073-1181

[3] Schwartzberg, M., 2015, ‘Epistemic Democracy and Its Challenges’, Annual Review of Political Science, 18, 187–203.

[4] Haris Aziz, Markus Brill, Vincent Conitzer, Edith Elkind, Rupert Freeman, and Toby Walsh, 2017, ‘Justified representation in approval-based committee voting’, Social Choice and Welfare, 48(2), 461–485.

[5] Nimrod Talmon and Piotr Faliszewski, 2019, ‘A framework for approval-based budgeting methods’, in Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 2181–2188.

[6] Haris Aziz and Nisarg Shah, 2021, ‘Participatory budgeting: Models and approaches’, Pathways Between Social Science and Computational Social Science: Theories, Methods, and Interpretations, 215–236.

[7] Conitzer, V., Sinnott-Armstrong, W., Schaich Borg, J., Deng, Y., & Kramer, M. (2017). Moral Decision Making Frameworks for Artificial Intelligence. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11140

[8] Noothigattu, R., Gaikwad, S.S., Awad, E., Dsouza, S., Rahwan, I., Ravikumar, P., Procaccia, A.D., 2017, 'A Voting-Based System for Ethical Decision Making', arXiv preprint arXiv:1709.06692.

[9] Estlund, David, 1997, Beyond Fairness and Deliberation: The Epistemic Dimension of Democratic Authority. In James Bohman & William Rehg, Deliberative Democracy: Essays on Reason and Politics. MIT Press. pp. 173-204.

[10] List, Christian & Goodin, Robert E., 2001, Epistemic democracy: Generalizing the Condorcet jury theorem. Journal of Political Philosophy 9 (3):277–306.

[11] Nolan Miller, Paul Resnick, and Richard Zeckhauser, 2005, ‘Eliciting informative feedback: The peer-prediction method’, Management Science, 51(9), 1359–1373.

[12] Kurokawa, David, Omer Lev, Jamie Morgenstern, and Ariel D. Procaccia. "Impartial Peer Review." In IJCAI, vol. 15, pp. 582-588. 2015.

[13] Radu Jurca and Boi Faltings, 2009, ‘Mechanisms for making crowds truthful’, Journal of Artificial Intelligence Research, 34, 209–253.

[14] Del Pia, A., Knop, D., Lassota, A., Sornat, K., & Talmon, N., 2024. Aggregation of Continuous Preferences in One Dimension. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24 (pp. 2748–2756).

[15] Berg, Chris and Davidson, Sinclair and Potts, Jason, 2020, Commitment Voting: A Mechanism for Intensity of Preference Revelation and Long-Term Commitment in Blockchain Governance. Available at SSRN: https://ssrn.com/abstract=3742435 or http://dx.doi.org/10.2139/ssrn.3742435

[16] L. Bulteau, N. Hazon, R. Page, A. Rosenfeld and N. Talmon, 2021, “Justified Representation for Perpetual Voting," in IEEE Access, vol. 9, pp. 96598-96612.

[17] Emmett, J. (2019). Conviction voting: A novel continuous decision making alternative to governance. Commons Stack.

About GovXS

GovXS is a research initiative under Token Engineering Academy. Team members include Nimrod Talmon, PhD, Angela Kreitenweis, Eyal Briman, and Muhammad Idrees. GovXS is a member of the Token Engineering Academy Applied Research Network.
