Sub-Thread Weekly: #16

From text to textiles, from stores to stories: every week we take a confidential behind-the-scenes look, chronicling the experiences and accounts behind some of the most pivotal, impactful, and transformative narratives that make up the real fabric of fashion.

These entries are included as material in the Web3 Fashion Manifesto and also open sourced under CC0 to the Meta Source Vaults.


It’s just about as tedious as supernatural forces can be.

Experts predict that in the next 10 to 100 years, scientists will succeed in creating human-level Artificial General Intelligence.

While it is most likely that this task will be accomplished by a government agency or a large corporation, the possibility remains that it will be done by a single inventor or a small team of researchers.

After all, the history of computer science has been built by such garage inventors: from Jobs, to Gates, to Page, and more.

But how do you prove that a superintelligent system has been constructed without revealing its design?

If you’ve just invented a paradigm shifting technology worth trillions of dollars, who do you trust?

Sample a bit of noise and add it to the image. Then do it again: sample another bit of noise, add it in. Keep going, potentially for 1,000 steps, or 4,000, or thousands and thousands more. So many steps, and that’s just the forward direction.
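
In code, that loop is almost embarrassingly simple. A minimal sketch in PyTorch, assuming a linear beta schedule; the step count and schedule values here are illustrative, not any paper’s exact settings:

```python
import torch

num_steps = 1000
betas = torch.linspace(1e-4, 0.02, num_steps)  # illustrative linear schedule

def forward_diffuse(x0: torch.Tensor) -> torch.Tensor:
    """Repeatedly mix a little Gaussian noise into the image."""
    x = x0
    for beta in betas:
        noise = torch.randn_like(x)  # sample a bit of noise
        x = torch.sqrt(1 - beta) * x + torch.sqrt(beta) * noise
    return x  # after enough steps, x is close to pure noise
```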

Diffusion models beat GANs on image synthesis: their sample quality is superior to that of the current state-of-the-art generative models.

What are you going to end up with?

If you sample so many times, for so many steps, for so long, then you are going to end up with random noise itself.

Take yourself from the data space to a normal distribution.
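
You don’t even have to walk there one step at a time. With a fixed schedule, the forward process has a closed form, so you can jump straight to any timestep; a sketch under the same assumed schedule, where alpha-bar is the cumulative product of one minus beta:

```python
import torch

betas = torch.linspace(1e-4, 0.02, 1000)        # same illustrative schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # alpha_bar at each timestep

def noisy_at(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Jump to step t directly: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps.
    As t grows, alpha_bar_t -> 0 and x_t -> pure Gaussian noise."""
    eps = torch.randn_like(x0)
    return torch.sqrt(alpha_bars[t]) * x0 + torch.sqrt(1 - alpha_bars[t]) * eps
```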

Can we learn a function that runs this process in reverse, some kind of neural net?

What if we could invert this mapping? A process that, handed an image with some noise, can tell you what image it came from. Is that possible?

It’s certainly thinkable. But is it doable?
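
Doable, in fact: the standard recipe trains a network to predict the noise that was mixed in, with a plain mean-squared-error loss. A minimal sketch, where `model` stands in for any image-to-image network (a U-Net, in most diffusion work) that takes the noisy image and the timestep:

```python
import torch
import torch.nn.functional as F

def training_step(model, x0, alpha_bars):
    """One denoising training step: corrupt the image, ask the
    network for the noise back, penalize the squared error."""
    batch = x0.shape[0]
    t = torch.randint(0, len(alpha_bars), (batch,))      # random timestep
    eps = torch.randn_like(x0)                           # the noise we add
    a = alpha_bars[t].view(-1, 1, 1, 1)
    x_t = torch.sqrt(a) * x0 + torch.sqrt(1 - a) * eps   # noised input
    eps_pred = model(x_t, t)                             # network's guess
    return F.mse_loss(eps_pred, eps)
```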

Run it back through the model, denoising as you go. For unconditional image synthesis, the researchers found a better architecture through a series of ablations.

The scorecard is Fréchet Inception Distance (FID), where lower is better (the formula is sketched just after the scores):

  • FID of 2.97 on ImageNet 128×128
  • FID of 4.59 on ImageNet 256×256
  • FID of 7.72 on ImageNet 512×512
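
For reference, FID measures the distance between Gaussian fits to Inception-v3 features of real and generated images; lower means the two distributions sit closer together:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\bigl(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\bigr)
```

where (μ_r, Σ_r) and (μ_g, Σ_g) are the feature means and covariances for real and generated samples.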

The diffusion models match BigGAN-deep with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution.
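
Those 25 passes come from striding over the timestep schedule instead of visiting every step. A hypothetical sketch of respaced ancestral sampling; the helper names are made up, the beta-rebuilding trick follows the standard respacing idea, and `model` is the trained noise predictor from above:

```python
import torch

@torch.no_grad()
def respaced_sample(model, shape, alpha_bars, num_steps=25):
    """Reverse the diffusion in a few strided steps: pick a short
    subsequence of timesteps and rebuild betas from alpha-bar ratios."""
    T = len(alpha_bars)
    ts = torch.linspace(T - 1, 0, num_steps).long()  # strided, descending
    x = torch.randn(shape)                           # start from pure noise
    for i, t in enumerate(ts):
        a_t = alpha_bars[t]
        a_prev = alpha_bars[ts[i + 1]] if i + 1 < len(ts) else torch.tensor(1.0)
        beta = 1 - a_t / a_prev                      # respaced beta for this jump
        t_batch = torch.full((shape[0],), int(t))
        eps_pred = model(x, t_batch)                 # predicted noise
        # Posterior mean under the epsilon-parameterization:
        x = (x - beta / torch.sqrt(1 - a_t) * eps_pred) / torch.sqrt(1 - beta)
        if i + 1 < len(ts):                          # no noise on the final step
            x = x + torch.sqrt(beta) * torch.randn_like(x)
    return x
```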

How are you coping with all of the overwhelming evidence? Are you curious about what is going to happen next? Following the action with interest?

Lean forward.


Any value brought in from sales of NFTs minted through this article will be used to build out the F₃M Realm treasury, which will eventually be governed and coordinated by the DAO, further decentralising the web3 fashion capital stack.

F₃Manifesto (F₃M) is a rally flag for the entire web3 fashion movement. It’s a label and realm that is built for so much more than just the digital and physical threads and collections that it will spin up and release.
