As the integration of machine learning (ML) models across industries grows more complex, firms increasingly rely on APIs and services from providers such as Amazon, Google, and Microsoft to adopt sophisticated ML architectures. While such low-trust assumptions suffice for some use cases, this modus operandi raises a critical concern about the integrity and validity of ML inference. As the capabilities and use cases of ML models in production grow, authenticating the source of an inference will become ever more relevant. We therefore need mechanisms with which we can trustlessly prove and verify the sources and traces of inference. We call this Validity ML.