A Singular Trajectory

In 1999, Ray Kurzweil predicted the following:

A 2009 computer would be a tablet-sized or smaller device with a high-quality but somewhat conventional display; by 2019 computers would be "largely invisible," with images mostly projected directly onto the retina; and by 2029 computers would communicate through direct neural pathways. Similarly, in 2009 there would be interest and speculation about the Turing test; by 2019 there would be "prevalent reports" of computers passing the test, though not rigorously; and by 2029 machines would "routinely" pass it, though controversy would remain over how machine and human intelligence compare.

Unfortunately, predictions like these seem to be "always 20 years away". But they won't remain the same distance away forever. Observing the current advances in AI, robotics, and manufacturing capabilities, one can make a case that there are general indicators of an acceleration towards the singularity. There may even be longstanding principles that will survive the technological and academic disruption to come.

Do GPT Users Dream of Companions?

On November 30, 2022, OpenAI released ChatGPT, a conversational interface to a large language model. To many this was a revolutionary moment, simply because the chatbot was incredibly articulate. The work it could output was mind-boggling for two reasons: it abstracted away computation that would otherwise cost time, and its answers were convincingly real (whenever OpenAI deemed it safe to answer). It's quite remarkable that you can query an LLM today and get a valid, if not sound, answer within seconds, when that answer would cost a domain expert several minutes of consideration and an online forum several hours of debate.

It's a guess, though. ChatGPT isn't considering your prompt; it's a broom with arms, eternally tasked with carrying water.

Yen Sid's warnings come after the spell

Chatbots have always been objects of desired companionship. The motivation behind the Turing test might be the desire for a rigorous chatbot that can't break immersion. We might have already seen the endgame of this in recent articles about testers claiming sentience.

What remains to be tested is whether humans, being social animals, are augmented best by digital homunculi. We hunted together, we farmed together, and now society can be described as a giant buffer of managers & operators of industrial-scale machines, more socialized together than ever. What if humanity only reaches its most productive state because the evolutionary "prefrontal cortex" path of toolmaking & spirituality converges on us fooling ourselves into believing our tools have souls? I would confidently argue that the shifting debate around sentience & intelligent work is one of the first inflection points of the singularity: it directly leads to gain of function, and we are already crossing it.

But what if it's a trap? I could easily argue that humans optimize for the path of least resistance, choosing to copy or "google" the knowledge that they might have acquired differently through critical thought and repeated failure. And so ChatGPT comes along, with noncommittal, unsound, & inaccurate answers, muddying a pool of water that we might have claimed to be the "wisdom of the crowds". A student might use an LLM to write their essay for them, get good grades, and fall flat on their face in the real world. Stack Overflow might get sybil-attacked for personal gain, and the audience (programmers) may conform in some way to a symphony of deepfakes. A script kiddie might prompt ChatGPT for malware. It's not particularly hysterical to say that a shortcut can get abused, and one should ask the salient question: will the mainstream use of LLMs dull our productive capacity, especially for sound, valid, & divergent thinking?

The Final Puppeteer

The deepest impact that AI can make is on the culture of allocating human capital. A recent opinion piece described the reaction to ChatGPT fairly well:

The delighted ones were those transfixed by discovering that a machine could apparently carry out a written commission competently. The outrage was triggered by fears of redundancy on the part of people whose employment requires the ability to write workmanlike prose. And the lamentations came from earnest folks (many of them teachers at various levels) whose day jobs involve grading essays hitherto written by students.

So far, so predictable. If we know anything from history, it is that we generally overestimate the short-term impact of new communication technologies, while grossly underestimating their long-term implications. So it was with print, movies, broadcast radio and television and the internet. And I suspect we have just jumped on to the same cognitive merry-go-round.

In trying to understand the implications of AI, I try to isolate the short-term disruption to guess at the intermediate & long-term consequences, knowing full well that my based conjecture will likely be inaccurate, if not completely wrong. That being said, perhaps a good way of describing the backlash is via market dynamics. AI assistants change the scarcity of content creation, thereby becoming market-makers to some minor degree. Whenever a proverbial "genie" leaves the "bottle", it is the consumers that benefit asymmetrically, by repricing the market and phasing out suboptimal suppliers. In turn, the suppliers of AI-based production gain cumulatively more capital over time. Is the ethical trespass in the new business practice (like crawling through private artwork) concerning? Does this necessitate reparations like UBI? Maybe, but let's not pretend that the entire genie can be stuffed back in the bottle with all of its granted wishes reverted.

One could argue that there is an oligopoly of firms that can afford to crawl the entire Internet to produce a training dataset. There will likely be a finite number of SaaS firms that can afford to consume such resources to produce novel ML models. Fewer may be capable of achieving & retaining PMF should ML-based commerce become volatile enough. In the past we were coaxed with psyops like HAL 9000, Skynet, and the Butlerian Jihad. What these don't sufficiently describe is a semi-stable society with many firms & intelligent agents cooperating over a scarce reagent within an AI economy. To play devil's advocate: what is the probability that our current capitalist society produces a technocracy, without negative feedback, that can phase out socioeconomic classes and fundamental tenets like human/property rights, or accelerate the possibility of some form of mass destruction?

That might sound buzzwordy, but the impending inflection point to look out for will be a fundamental impact on the way society operates. Right now, I'm writing about an "edition" of an amorphous century-old meme spurred on by a recent chatbot. In a year, someone might write about knee-jerk legislation or a tulip mania concerning a specific AI-dependent product. Within 5-10 years, there will be a realized reckoning around the sole-proprietorship economy, present forms of government, and personal autonomy/consumption. The "Megacorp" model might remain predominant throughout this disruption, we may find ourselves in a "network state", or perhaps something more Orwellian will emerge. All because computers (procured by any means) will compile our collective use of natural language to subsume many operational and economic functions in today's society. Whatever the timeline is, this will be a noticeable inflection point well before the "singularity".

Methodology and Technique

In the case of InstructGPT, the secret sauce is curating the best mimic using a pool of humans (a process called Reinforcement Learning from Human Feedback, or RLHF). But InstructGPT is an LLM, which is somewhat static & easy to reset. One prompts the interface, and the model behind the scenes has been repeatedly trained to produce the most highly rewarded response to prompts.

OpenAI's illustration of RLHF
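To make that loop concrete, here is a toy sketch of optimizing a "policy" against a frozen reward model. Everything in it is hypothetical and drastically simplified: the policy is three canned responses instead of a language model, and the update is plain REINFORCE rather than the PPO variant OpenAI actually uses.

```python
import math
import random

# Toy RLHF-style loop: a "policy" over three canned responses and a
# stub reward model standing in for one trained on human preferences.
RESPONSES = ["I don't know.", "Here is a careful answer...", "lol"]
logits = [0.0, 0.0, 0.0]  # the entire "policy"

def reward_model(response: str) -> float:
    """Stub for a model trained on human preference labels."""
    return {"I don't know.": 0.1,
            "Here is a careful answer...": 1.0,
            "lol": -1.0}[response]

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(2000):
    probs = softmax(logits)
    i = random.choices(range(len(RESPONSES)), weights=probs)[0]
    r = reward_model(RESPONSES[i])
    # REINFORCE: d/d_logit_j of log pi(i) = 1{j == i} - pi(j)
    for j in range(len(logits)):
        logits[j] += 0.05 * r * ((1.0 if j == i else 0.0) - probs[j])

best = max(range(len(logits)), key=lambda j: logits[j])
print(RESPONSES[best])  # converges to the most rewarded response
```

The point of the sketch is the incentive structure: the model never "considers" anything, it just drifts toward whatever the reward model scores highly.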

However, there are challenges to "perfect" NLP. From the HuggingFace article on RLHF:

While these techniques are extremely promising and impactful and have caught the attention of the biggest research labs in AI, there are still clear limitations. The models, while better, can still output harmful or factually inaccurate text without any uncertainty. This imperfection represents a long-term challenge and motivation for RLHF – operating in an inherently human problem domain means there will never be a clear final line to cross for the model to be labeled as complete.

When deploying a system using RLHF, gathering the human preference data is quite expensive due to the mandatory and thoughtful human component. RLHF performance is only as good as the quality of its human annotations, which come in two varieties: human-generated text, such as that used to fine-tune the initial LM in InstructGPT, and labels of human preferences between model outputs.

Generating well-written human text answering specific prompts is very costly, as it often requires hiring part-time staff (rather than being able to rely on product users or crowdsourcing). Thankfully, the scale of data used in training the reward model for most applications of RLHF (~50k labeled preference samples) is not as expensive. However, it is still a higher cost than academic labs would likely be able to afford. Currently, there only exists one large-scale dataset for RLHF on a general language model (from Anthropic) and a couple of smaller-scale task-specific datasets (such as summarization data from OpenAI). The second challenge of data for RLHF is that human annotators can often disagree, adding a substantial potential variance to the training data without ground truth.
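Those preference labels are typically distilled into a reward model with a pairwise ranking objective. Here is a minimal sketch of that loss in the Bradley-Terry form commonly cited in the RLHF literature; the scores below are invented:

```python
import math

def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): minimizing this pushes the
    reward model to score the human-preferred completion higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A hypothetical annotated pair where the annotator preferred A over B:
print(pairwise_preference_loss(r_chosen=2.0, r_rejected=0.5))  # ~0.20 (agrees)
print(pairwise_preference_loss(r_chosen=0.5, r_rejected=2.0))  # ~1.70 (penalized)
```

Annotator disagreement shows up here as the same pair appearing with flipped labels, which is exactly the variance without ground truth that the article warns about.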

RLHF can be applied to machine learning outside of natural language processing (NLP). For example, DeepMind has explored using it for multimodal agents. The same challenges apply in this context:

Scalable reinforcement learning (RL) relies on precise reward functions that are cheap to query. When RL has been possible to apply, it has led to great achievements, creating AIs that can match extrema in the distribution of human talent (Silver et al., 2016; Vinyals et al., 2019). However, such reward functions are not known for many of the open-ended behaviours that people routinely engage in. For example, consider an everyday interaction, such as asking someone “to set a cup down near you.” For a reward model to adequately assess this interaction, it would need to be robust to the multitude of ways that the request could be made in natural language and the multitude of ways the request could be fulfilled (or not), all while being insensitive to irrelevant factors of variation (the colour of the cup) and ambiguities inherent in language (what is ‘near’?). To instill a broader range of expert-level capabilities with RL, we therefore need a method to produce precise, queryable reward functions that respect the complexity, variability, and ambiguity of human behaviour. Instead of programming reward functions, one option is to build them using machine learning. Rather than try to anticipate and formally define rewarding events, we can instead ask humans to assess situations and provide supervisory information to learn a reward function. For cases where humans can naturally, intuitively, and quickly provide such judgments, RL using such learned reward models can effectively improve agents (Christiano et al., 2017; Ibarz et al., 2018; Stiennon et al., 2020; Ziegler et al., 2019).

Many elements leading to the singularity await further development, and it's feasible that we can identify what they are with more surety than we can predict the time it will cost us to implement them. Chris Lattner mentions the "sparsely-gated Mixture of Experts" from his POV:

To describe it simply, maybe there's an intermediary that can curate which of many "experts" handle an input and combine their outputs.

This is a wide design space for further research. Maybe the intermediary should be selective in a different manner.

Maybe the intermediary can take advantage of spatial data.
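As a toy illustration of the sparsely-gated idea, here is a minimal sketch in the spirit of Shazeer et al.'s layer. The dimensions, the top-2 routing, and the tanh experts are arbitrary choices of mine, not anything from Lattner's discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

D, H, N_EXPERTS, TOP_K = 16, 32, 8, 2
experts = [(rng.normal(size=(D, H)), rng.normal(size=(H, D)))
           for _ in range(N_EXPERTS)]          # each expert: a tiny MLP
W_gate = rng.normal(size=(D, N_EXPERTS))       # the "intermediary"

def moe_layer(x: np.ndarray) -> np.ndarray:
    scores = x @ W_gate                        # one routing logit per expert
    top = np.argsort(scores)[-TOP_K:]          # keep only the k best experts
    gate = np.exp(scores[top] - scores[top].max())
    gate /= gate.sum()                         # softmax over the survivors
    out = np.zeros(D)
    for g, idx in zip(gate, top):
        w_in, w_out = experts[idx]
        out += g * (np.tanh(x @ w_in) @ w_out) # combine selected outputs
    return out

y = moe_layer(rng.normal(size=D))
print(y.shape)  # (16,): same shape as the input, yet only 2 of 8 experts ran
```

The appeal is that total capacity grows with the number of experts while per-input compute stays roughly constant.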

One particularly fascinating work is the NetHack Learning Environment (NLE). Much like Twitch Plays Pokemon was viable because the JRPG was turn-based with relatively simple input, NLE is also turn-based with just keyboard input. Moreover, it has procedural generation across several environments at different stages of the game, making it a devilishly useful crucible for training AI. From my own experience of playing this game, one has to curate & combine many strategies on a turn-by-turn basis. With metagaming strategies (cheating) like polypiling and bones harvesting, there are a lot of ways for an AI to learn further on a game-by-game basis.

*slaps interface* "this Unicode can fit so many objects in it"
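For a feel of the interface, a random-agent loop might look like the following. This assumes the `nle` package and its gym registration; task names like "NetHackScore-v0" come from the NLE repository, and the `step` return signature may differ across gym versions:

```python
import gym
import nle  # noqa: F401  (importing registers the NetHack tasks with gym)

env = gym.make("NetHackScore-v0")
obs = env.reset()                      # glyphs, messages, and character stats
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample() # one keypress, one turn
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode score:", total_reward)
```

Swapping the random sampling for a learned policy is where the turn-by-turn curation of strategies comes in.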

There's corporate-upscaled ML like the recent developments at Tesla & Neuralink. Unfortunately, these cannot be viewed through the OSS lens through which other AI research might be viewed. It's not really blue-skies research, but there's some useful insight in purpose-specific industrial applications. One major nuance is that industrial-scale production invites industrial-scale feedback for brute-force reinforcement learning. Optimus might be a gimmick, but it might improve androids more than Atlas has in the past 9 years. Neuralink implants might kill the subject, but they force the development of incredibly precise surgical machinery & parts.

Feedback in manufacturing is great, but it will be most in demand within the health sector. Right now, we're early adopters of retail biosensors. In time, homomorphic encryption will allow machine learning to take advantage of a massive corpus of health data. We've crowdsourced the consumption of pharmaceuticals for tens of thousands of years, but it remains to be seen how we coexist with AI that can manage dosages of any arbitrary substance over any arbitrary timespan. In the meantime, homomorphic encryption remains as underused as it is inefficient compared to "plaintext" computation.
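To make that concrete, here is a sketch of a third party computing on encrypted biosensor readings, assuming the TenSEAL library's CKKS interface. The readings, weights, and "dosage score" are invented for illustration, and the parameters are tutorial defaults rather than a vetted security configuration:

```python
import tenseal as ts

# Client-side: encrypt private heart-rate readings under CKKS.
context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

heart_rates = [61.0, 64.5, 118.0, 59.0]  # never leaves the client in plaintext
enc = ts.ckks_vector(context, heart_rates)

# Server-side: compute a (toy) weighted score on ciphertext it cannot read.
weights = [0.1, 0.1, 0.7, 0.1]
enc_score = enc.dot(weights)

# Back on the client: only the secret-key holder can decrypt the result.
print(enc_score.decrypt())  # ~[101.05]
```

The "inefficient compared to plaintext" caveat is real: each ciphertext here is orders of magnitude larger and slower to compute on than the four floats it hides.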

Google Brain just released Robotics Transformer-1 (RT-1). It may only be an arm performing simple tasks in its first version, but there's clear potential to iterate on it with more tokenized actions that take place in common construction environments. It won't impress as much as the version that can manifest a self-assembling factory. Since the global economy revolves around freight transport, I would not be surprised if such a facility eventually builds 100x more "zero-emission" container ships than the current global fleet of ~6,000. This would also be a massive sea change in the housing crisis, wherever zoning ordinances allow it to take full effect.
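The core trick enabling that iteration is emitting robot actions as discrete tokens, the same currency as language. Here is a minimal sketch of uniform action tokenization; the 256-bin count matches the RT-1 paper, but the dimension names and ranges are hypothetical:

```python
N_BINS = 256              # RT-1 buckets each action dimension into 256 bins

ACTION_RANGES = {         # hypothetical arm-action dimensions
    "gripper_x": (-0.5, 0.5),
    "gripper_y": (-0.5, 0.5),
    "gripper_z": (0.0, 1.0),
    "gripper_open": (0.0, 1.0),
}

def tokenize(action: dict) -> list:
    tokens = []
    for name, (lo, hi) in ACTION_RANGES.items():
        frac = (action[name] - lo) / (hi - lo)  # normalize to [0, 1]
        tokens.append(min(N_BINS - 1, int(frac * N_BINS)))
    return tokens

def detokenize(tokens: list) -> dict:
    return {name: lo + (tok + 0.5) / N_BINS * (hi - lo)
            for tok, (name, (lo, hi)) in zip(tokens, ACTION_RANGES.items())}

cmd = {"gripper_x": 0.12, "gripper_y": -0.3, "gripper_z": 0.8, "gripper_open": 1.0}
toks = tokenize(cmd)
print(toks)              # [158, 51, 204, 255]
print(detokenize(toks))  # close to the original command
```

Once actions are tokens, the same transformer machinery that predicts words can predict motor commands, which is why "more tokenized actions" is a plausible axis of iteration.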

I should also mention the Alberta Plan, which lays out 12 plausible steps of ability development towards AGI.

Simply put, the inflection points in the methodology & techniques to transition from ANI to AGI to ASI will be self-explanatory.

From ChatGPT's output to our eyes

“Progress Should be Exponential”

The above statement is probably wrong, and blind to external context. We humans are already complex, tool-using sparse neural networks as individuals; as groups we are self-organizing, social-learning, and environment-engineering. If you've gotten this far, I apologize for the misdirect in the "Puppeteer" section, which presumes Hobbes over Locke. Recent developments in cryptography and distributed (adversarial) computing could not make it clearer that humans are self-governing to such a degree that they can maintain a global state (history) that is Turing-complete. There's also the phenomenon known as the Mechanical Turk. The point is: whatever AI product drops in any arbitrary timespan, there's going to be a ripe developer ecosystem that can outpace it through coordinated execution, augmented by contemporaneous AI tooling and verifiable work.

This leads to the present thought experiment: do we even need to realize every predicted inflection point before The Singularity™? For every proprietary improvement in commercialized model training, there is probably a viable method for realizing that improvement in the public domain. Stable Diffusion has already spurred a dialogue around this concept. I would argue further that crowdsourcing has accelerated sufficiently in the past decade (as Twitch Plays Pokemon, social networks, and DAOs have demonstrated) that the singularity is already a red herring. Just as Ethereum scaling solutions use cryptography like zk-SNARKs to reduce the infrastructural needs of the network, we will implement lightweight solutions that reduce the need for AI to be brute-forced & monetized by a present-day megacorporation.

In fact, one of the best arguments against our current elation at OpenAI's model is the somewhat predictable behavior of financial markets and similar systems of social capital on social networks. Twitter aggregates the news because its users can broadcast and be amplified worldwide by legitimate personalities. Growth stocks can rise and fall tremendously with global trends like COVID lockdowns and central bank monetary policies. It doesn't take a lot of imagination to visualize a startup that, in a very short timeframe, manifests AI-like PMF as a self-regulating, self-orchestrating community. Likewise, it should be hard to imagine that OpenAI can maintain a 5-year moat any more easily than it can get funded at a $29 billion valuation. There are probably hundreds of billions of dollars in OpEx that can be freed up across many sectors with present technology and further business development. Some fraction of this windfall will probably flow into self-sustaining crowdsourcing & AI development, if & when the federal funds rate hits zero.

In the series Westworld, an AI system named Rehoboam imposes order on human affairs by manipulating and predicting the future via analysis of a large dataset. In our world, governments & corporations attempt the same enterprise. Repeatedly since the Industrial Revolution, disruptive innovations have manifested outside of bureaucracies; today, they're happening at an ever-increasing pace. The public domain has grown in depth & scope in recent decades, forcing many technologies to be open-sourced regardless of their commercialization.

Who can say, with any certainty, whether this trajectory is inevitable or impossible?
