In philosophy, "knowledge" is famously defined as "justified, true belief" (the JTB theory). The definition requires three conditions:
Belief: A subjective attitude that a proposition is true
Justification: The belief must be held for some reason (e.g. evidence)
Correctness: The belief must be true
By comparison, one of the greatest advancements in the cryptocurrency space has been in "zero-knowledge" proofs. Zero-knowledge proofs enable actors to prove that they know the solution to a problem without revealing that solution to the verifier. This definition is made formal in the seminal paper by Goldwasser, Micali, and Rackoff, "The Knowledge Complexity of Interactive Proof Systems":
What does this definition mean in plain English? An algorithm is zero-knowledge if, given bounded queries and time, a third party can't tell the difference between the algorithm's output and the output the third party could have generated on their own. If you read the full paper, you'll see that L refers to an NP language, meaning that the outputs of this algorithm can be efficiently verified.
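The simulation idea above can be made concrete with the Schnorr identification protocol (my example, not from the paper itself): a prover who knows the discrete log x of y = g^x can produce accepting transcripts, but so can a simulator who knows only the public y. Both kinds of transcripts verify and are identically distributed, which is exactly the sense in which the verifier "could have generated the output on their own". A minimal sketch over a toy group (the tiny parameters are for illustration only):

```python
import random

# Toy subgroup: g = 2 has prime order q = 11 in Z_23*
p, q, g = 23, 11, 2
x = random.randrange(1, q)   # prover's secret
y = pow(g, x, p)             # public key: y = g^x mod p

def real_transcript():
    """Honest prover, who knows the secret x."""
    r = random.randrange(q)
    t = pow(g, r, p)                 # commitment
    c = random.randrange(q)          # verifier's challenge
    s = (r + c * x) % q              # response uses the secret
    return t, c, s

def simulated_transcript():
    """Simulator, who knows only the public y: pick c and s first."""
    c = random.randrange(q)
    s = random.randrange(q)
    t = (pow(g, s, p) * pow(y, -c, p)) % p   # solve backwards for the commitment
    return t, c, s

def verify(t, c, s):
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

Both `verify(*real_transcript())` and `verify(*simulated_transcript())` return True, yet the simulator never touched x.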
If that's zero-knowledge, then it implies sufficient criteria for knowledge: a verifiable solution to an algorithm that a third party could not have efficiently generated on their own. The criteria imply a few properties:
Non-triviality: The solution could not have been efficiently computed independently
Verifiability: The solution is efficiently verifiable (it belongs to an NP language L)
Correctness: The solution is the correct output of a deterministic algorithm
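Non-triviality and verifiability can be illustrated with subset-sum, a classic NP problem (my example): finding a subset of numbers that hits a target may require exponential search, but checking a claimed solution is a single linear pass.

```python
from itertools import combinations

def verify(numbers, target, witness):
    """Efficient check: linear in the witness, however hard finding it was."""
    return sum(numbers[i] for i in witness) == target

def find_witness(numbers, target):
    """Brute-force search: worst case exponential in len(numbers)."""
    for size in range(len(numbers) + 1):
        for subset in combinations(range(len(numbers)), size):
            if verify(numbers, target, subset):
                return subset
    return None
```

For example, `find_witness([3, 34, 4, 12, 5, 2], 9)` returns the indices `(2, 4)` (since 4 + 5 = 9); presenting that witness is non-trivial, and verifying it is cheap.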
(Note that Tarun Chitra's post instead focuses on knowledge soundness, rather than on the definition of "knowledge" itself.) This formalism for knowledge is quite different from the philosophical definition above. Why the departure? Is this one of the many cases of computer scientists having a conversation of their own, or are there legitimate reasons why we've re-formalized "knowledge"?
There are certainly parallels between the two formalisms of knowledge. Looking at the JTB properties:
Beliefs are typically only worth asserting when the proposition is not self-evident, bearing resemblance to non-triviality.
Justification is a broader concept than verifiability, but both achieve similar ends in that they allow third parties to understand the reasoning.
Correctness in both definitions is about determining the truth of a proposition.
At first glance, you might argue that the philosophical definition is broader, but this isn't true. For example, computationally, if you can provide a non-trivial, correct & verifiable solution, that qualifies as knowledge, even if you have no idea how you arrived at the solution. Under JTB, this lack of justification might prevent it from being knowledge.
Conversely, the cryptographic formalism does not accommodate mere "information" as knowledge. Consider a scenario where a coin is tossed 100 times and you witness the outcome. Under JTB, you do have knowledge of the coin tosses, though this example can get complicated if you guess the coin tosses, or if someone attempts to lie to you about the outcome. Under the computational definition, this information (as in pure bits such as those resulting from a coin toss) can be computationally simulated, so under no circumstance is this knowledge.
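The simulation argument here is almost tautological. In this sketch (illustrative only), a "simulator" with no access to the real coin produces outputs with exactly the same distribution as a witness's report, so no distinguisher can do better than chance:

```python
import random

def witness_report():
    """What someone who watched 100 real fair-coin tosses would report."""
    return [random.randint(0, 1) for _ in range(100)]

def simulator():
    """Never saw the coin -- just samples 100 fresh uniform bits."""
    return [random.randint(0, 1) for _ in range(100)]
```

Both functions sample the same uniform distribution, so their transcripts are perfectly indistinguishable: the witnessed outcome carries information, but by the computational definition, no knowledge.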
Per the above, you can see that there are meaningful constraints on cryptographic proofs compared to JTB. As we expand the use cases for blockchains and cryptography, one question is how we might prove more types of knowledge.
There's no advantage in relaxing non-triviality, as that would only admit types of knowledge that didn't need proving at all - anyone could have efficiently computed the solution to a trivial problem. Instead, I'm curious to explore what happens if we relax the NP problem constraint. This allows us to expand to other types of problems, including co-NP problems, PSPACE-complete problems, or problems that require quantum proofs:
co-NP would allow us to prove non-membership claims – i.e. a zero-knowledge proof that a claim is false. This is hard in NP because proving a negative requires ruling out every possible outcome to show a statement is always false, which is much more computationally intensive than exhibiting a single witness that a statement is true. The difficulty here is of course proving efficiently that the prover did in fact check every single outcome. There is likely a subset of co-NP that is more tractable.
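The asymmetry can be sketched with SAT (a naive encoding of my own, for illustration): certifying satisfiability takes one cheap witness check, while certifying unsatisfiability naively requires ruling out all 2^n assignments.

```python
from itertools import product

# A formula is a list of clauses; a literal i > 0 means x_i, i < 0 means NOT x_i.
def satisfies(clauses, assignment):
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def certify_sat(clauses, witness):
    """NP side: one efficient check of a single claimed witness."""
    return satisfies(clauses, witness)

def certify_unsat(clauses, n_vars):
    """co-NP side: naively must rule out every one of 2^n assignments."""
    return all(not satisfies(clauses, a)
               for a in product([False, True], repeat=n_vars))
```

For instance, `certify_sat([[1, 2], [-1]], (False, True))` is one call, while `certify_unsat([[1], [-1]], 1)` already has to sweep the whole assignment space - and that sweep is precisely what's hard to prove was done honestly.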
PSPACE-complete problems (the hardest of the problems solvable within polynomial space) include complex game-like situations or interactions that are harder to verify in NP. These sorts of proofs may be helpful as we think about simulating AIs and metaverse-like worlds while ensuring trust and verifiability. Polynomial space of course doesn't imply polynomial time, so this runs into a verification problem just like the above.
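As a toy version of such game-like reasoning (my example - a simple subtraction game, not itself PSPACE-complete), a minimax search that uses only space proportional to the game's depth, even though the tree of play sequences it explores grows exponentially:

```python
def first_player_wins(pile, max_take=3):
    """Subtraction game: players alternately take 1..max_take stones,
    and taking the last stone wins.  Recursion depth (space) is at most
    `pile`, while the number of play sequences explored is exponential."""
    return any(not first_player_wins(pile - k, max_take)
               for k in range(1, min(max_take, pile) + 1))
```

With `max_take=3`, the losing positions for the player to move are exactly the multiples of 4 - e.g. `first_player_wins(4)` is False while `first_player_wins(5)` is True - a fact that is easy to state but whose naive certificate is the whole game tree.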
"Quantum Merlin-Arthur" (QMA) problems are the quantum analog of NP. These problems involve proving knowledge of quantum states without revealing them, unlocking other types of use cases. I would bet that there is some professor somewhere who is researching this.
Bayesian/probabilistic knowledge: instead of proving a statement is absolutely true, imagine proving a statement is highly probable to be true. This could be useful in cases where a complete deterministic proof is intractable, or in approximation problems where an exact solution is not necessary. I could prove a claim of the form "At least x% of set A belongs to set B" with some minimum confidence, possibly under the random oracle model. This might be provably impossible or require a commit-reveal scheme to prevent abuse of the random oracle.
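A sketch of the statistical half of such a claim (my example; the hard part - making the sampling verifiable and non-interactive - is exactly what would need the random oracle or a commit-reveal scheme): estimate the overlap of A with B by sampling, with a one-sided Hoeffding bound supplying the confidence.

```python
import math
import random

def membership_lower_bound(A, B, n_samples=1000, delta=0.01):
    """Returns (estimate, lower_bound): with probability >= 1 - delta,
    the true fraction of A's elements that lie in B is >= lower_bound."""
    B = set(B)
    hits = sum(random.choice(A) in B for _ in range(n_samples))
    p_hat = hits / n_samples
    # One-sided Hoeffding: P(p < p_hat - eps) <= exp(-2 * n * eps^2) = delta
    eps = math.sqrt(math.log(1 / delta) / (2 * n_samples))
    return p_hat, p_hat - eps
```

With 1,000 samples and delta = 0.01, the slack eps is about 0.048 - so observing every sample in B supports the claim "at least 95% of A belongs to B" with 99% confidence. In an adversarial setting, the sampler's randomness would itself need to be trusted, hence the random oracle or commit-reveal machinery.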
As blockchains proliferate and AI advances, I will be curious to see how cryptographic proofs are expanded to new methods and use cases, and how that changes the relationship between cryptographic knowledge and philosophical knowledge.
Thank you to Tarun Chitra for helping inspire this post and Neel Somani for edits.