Preface: It took me way longer than expected to finish this essay. Apologies, dear reader! I had to cut out a lot, and it is still both too long and too short. If you are in a hurry, feel free to scroll down to the end and focus on the modes and strategies of propaganda.
Why is it important to think about propaganda? Because at a macro level propaganda is noise, and in times of turbulence there is nothing more valuable than the ability to see through the noise.
I’ve been meaning to write about the events in Ukraine for a while now, but discovered that what I wanted to say can be broken into at least three different posts. This is the first post in that series.
Allen Dulles, one of the founders of the CIA and for decades the éminence grise of the US establishment, once said that people can be confused by facts, but it is very difficult to confuse them if they know the trends.
Nothing helps us better understand a given historical moment than skipping 30+ years into the past and exploring the imaginaries of the future that people held back then. Our medieval ancestors inhabited a world where the future existed as part of a sacralized cyclical time, on which the three Abrahamic religions superimposed the myth of the final revelation. The result was a synthetic vision of time: at once cyclical, as personified in the festive rituals of the pre-Abrahamic solar calendar and the precession of the equinoxes, and millenarian, as personified by the concept of a linear and foreordained end to the cycle. The future contained a repetition of rituals leading to an apocalypse.
The Protestant revolution in Europe, and the displacement of the theist principle by that of Reason, shattered both of these futures. Now the future became infinite progress. The French Revolution and the Napoleonic Wars stabilized this future as the dominant paradigm of the West. The only question left was where infinite progress leads. This is the context in which Nietzsche declared that God is dead, a statement still vastly misunderstood, arguing that the void in the future of progress had to be filled by the Übermensch.
Of course, the entire edifice of progress and reason was smashed to pieces in the Great War, as entire generations of Western men, reared on the myth of progress and the triumph of reason, fertilized the fields of Europe with their blood. Blood and soil was all that was left of the future now. Not surprisingly, that was the future picked up by the National-Socialists and Fascists, leading into the Second Great War, which accomplished the seemingly impossible by burning the future entirely in the hellish fire of the Bomb. After that, no future was left in the West, only an infinite, one-dimensional now of endless consumption. The future was supplanted by two terms – more and now – which encapsulate everything the West has stood for ever since.
In the 1950s and 60s, the Soviet Union decided in a brief moment of collective hallucination to imagine a different future, in the stars. The Soviets even sent a multitude of emissaries into that future, first animals, then Gagarin and Tereshkova. However, the euphoria subsided, a generation woke up from the hallucination, and the future came crashing down, symbolized by the collapse of the Union and the fall of the Mir space station. More and now triumphed here too.
These are some loosely organized observations about the nature of network topologies in the wild.
In terms of both agency and information, all entities, be they singular [person], plural [clan/tribe/small company], or meta-plural [nation/empire/global corporation], are essentially stacks of various network topologies. To understand how these entities operate in space, their topologies can be simplified to a set of basic characteristics. When networks are mapped and discussed, it is usually at this 2-dimensional level. However, in addition to operating in space, all entities have to perform themselves in time.
This performative aspect of networks is harder to grasp, as it involves a continuously looping process of encountering other networks and adapting to them. In the process of performative adaptation all networks experience dynamic changes to their topologies, which in turn challenge their internal coherence. This process is fractal, in that at any one moment there is a vast multiplicity of networks interacting with each other across the entire surface of their periphery [important qualification here – fully distributed networks are all periphery]. There are several important aspects to this process, which for simplicity’s sake can be reduced to an interaction of two networks and classified as follows:
1] the topology of the network we are observing [A];
The more complex and orderly a system, the more prone it is to mistake its internal states for external reality. It mistakes its internal order for an external one.
That is why when things glitch and get weird we see new and strange states appear. The system’s internal cohesion momentarily glitches or breaks, and we get the chance to reframe our cognitive image of reality.
That’s when we learn.
Building on my earlier posts on paradigm shifts and framing, I continue exploring the process of shifting perception between models of reality. Paradigm shifts are fundamentally always shifts in the way we perceive reality. Perception itself is the dynamic outcome of the interactions between frames and schema.
When this model of perception is inserted into a complex and chaotically changing environment, we end up with a cyclical process: the reception and processing of external stimuli, followed by action or its absence, and then a renewed reception of stimuli that closes the feedback loop. This process maps very well onto John Boyd’s OODA loop concept, where OODA stands for observe-orient-decide-act. The key stage of the OODA loop is orientation, because it is in the orientation stage that external stimuli, and frames, interface with the internal perceptual frame and the schema that form it. In this lecture I discuss the OODA loop as a cyclical decision-making and feedback process, and focus on the orientation stage as the key aspect of that process.
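The cycle described above can be sketched in code. This is a minimal toy model, not Boyd's formalism: all function names, the schema contents, and the stimulus/action vocabulary are my own illustrative assumptions. The point is the shape of the loop, and the way orientation sits between raw stimulus and decision.

```python
# A toy sketch of the OODA loop as a feedback cycle.
# Names and values are illustrative assumptions, not Boyd's own notation.

def observe(environment):
    """Receive a raw external stimulus from the environment."""
    return environment["stimulus"]

def orient(stimulus, schema):
    """The key stage: filter the stimulus through the internal schema."""
    return schema.get(stimulus, "unknown")

def decide(perception):
    """Pick an action based on the oriented perception."""
    return {"threat": "withdraw", "opportunity": "advance"}.get(perception, "wait")

def act(action, environment):
    """Acting changes the environment, which closes the feedback loop."""
    if action == "advance":
        environment["stimulus"] = "threat"       # pressing forward provokes resistance
    elif action == "withdraw":
        environment["stimulus"] = "opportunity"  # pulling back opens new space
    return environment

# Run the cycle a few times; in practice it loops continuously.
schema = {"threat": "threat", "opportunity": "opportunity"}
env = {"stimulus": "opportunity"}
for _ in range(3):
    action = decide(orient(observe(env), schema))
    env = act(action, env)
```

Note that if the schema lacks an entry for a stimulus, orientation returns "unknown" and the actor simply waits: a crude stand-in for the way a perception frame that cannot accommodate a stimulus stalls the whole loop.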
We live in interesting times. Times of transition, involving the collapse of an old order and the shift to a new paradigm. Such transitions are often mistaken for technological changes or revolutionary shifts in material conditions. Actually they are neither. Paradigm shifts are fundamentally always shifts of perception, that is, shifts in the way we perceive reality and therefore in the way we act in the world.
Therefore, to understand a paradigm shift we need to understand how perceptions of reality can be modulated and altered at scale. In other words, we need to understand the mechanics of perception. In this lecture I discuss the concepts of schema and frames as the building blocks of the mechanics of perception. I examine the way framing can be used to alter perceptions and discuss the case of Edward Bernays’ Torches of Freedom campaign.
You better start believing in paradigm shifts, Miss Turner, because you’re in one.
Here’s a lecture I recorded recently, discussing the concept of paradigms and the process of paradigm shifts. I discuss paradigm shifts based on Thomas Kuhn’s The Structure of Scientific Revolutions, though my focus is on a more general understanding of the process, involving awareness of change and the key mechanics of the phase transition from one paradigm to another. I also use Jordan Hall’s excellent short essay On Thinking and Simulated Thinking to illustrate how paradigm shifts necessitate a phase transition in thinking about and orienting ourselves in a given reality.
I think the year 2020 so far bears the marks of a massive socio-political-economic phase transition to a new paradigm, and that the mid 2020s will be unrecognizable to someone from 1996 or 2006.
There is a story that when Zhou Enlai, the late premier of China under Mao, visited France for the first time in the 1950s, he was asked what he thought of the French Revolution. His answer was ‘It’s too early to say.’
please scan your item
please place your item in the bagging area
please remove your unscanned item
please place your item in the bagging area
please place your item in the bagging area
please wait for assistance
In his Thought Reform and the Psychology of Totalism Robert Jay Lifton gives the following interesting definition of the language of a totalist environment [p.429]:
The language of the totalist environment is characterized by the thought-terminating cliché. […] [B]rief, highly reductive, definitive-sounding phrases, easily memorized and easily expressed […] become the start and finish of any ideological analysis.
In other words, the preponderance of thought-terminating cliché phrases such as ‘agree to disagree’, ‘it’s all relative’, ‘this is hate-speech’, ‘these are the facts’, ‘[authority figures] all agree’, ‘this is [x] privilege’, ‘that’s your opinion’ is a symptom of being in a totalist environment.
A totalist environment is characterized by fully synthetic thinking, itself a function of a dynamic milieu control of information. In an environment of dynamic milieu control, certain information inputs – phrases, words, images, feelings – are branded as undesirable and banned from circulation. This in turn means that any and all thoughts associated with these information inputs become undesirable and dangerous.
Every system is in essence a network of actors that perform it into existence from moment to moment. The participants in the system, or actors in the network, enact and perform it through their daily routine operations.
Some of these routine operations are beneficial to the system being performed, and some are not. Some add to the energy of the system and therefore reduce entropy, while others take away from that energy and increase entropy. If the former outweigh the latter, we can say the system is net positive in its energy balance because it generates more energy than it wastes. If the latter outweigh the former, we can say the system is net negative in its energy balance as it wastes more energy than it generates. How to distinguish between the two in practice?
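The energy-balance rule above can be made concrete with a toy ledger. The operations and their numbers below are entirely hypothetical; the only point is the bookkeeping: sum each routine operation's contribution, and the sign of the total tells you whether the system is net positive or net negative.

```python
# A toy energy ledger for a system's routine operations.
# Positive values generate energy for the system; negative values waste it.
# All operations and numbers are hypothetical illustrations.

operations = {
    "serve_customer":   +3.0,  # contact with external reality, value created
    "maintain_tools":   +1.0,  # upkeep that preserves future capacity
    "file_report":      -0.5,  # internal friction: paperwork
    "approval_meeting": -2.0,  # internal friction: bureaucratic sign-off
}

balance = sum(operations.values())
is_net_positive = balance > 0  # generates more energy than it wastes
```

In this hypothetical ledger the balance comes out positive; add one or two more layers of reporting and sign-off, and the same system tips into a net negative balance.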
The rule of thumb is that any action that increases complexity in a system is long-term entropic for that system. In other words, it increases disorder and the energy costs needed to maintain the internal coherence of the system, and is therefore irrational from the system’s perspective. For example, this includes all actions and system routines that increase friction within the system, such as adding steps needed to complete a task, adding reporting paperwork, adding bureaucratic levels a message must go through, etc. Every operation a piece of information needs to go through in order to travel between the periphery, where contact with external reality happens, and the center, where decision making occurs, comes at an energy cost and generates friction. Over time and at scale these stack up and increase entropy within the system.
Needless to say, the more hierarchical and centralized an organization is, the more entropy it generates internally.
I had an interesting conversation about my essay on the Red Queen Trap with someone on LinkedIn, and it made me think about something I did not explain in the essay.
In an ideal environment each element of a system will be acting rationally and striving towards its own preservation and, by extension, the preservation of the system. Rational action here can be understood as the action resulting in optimal energy efficiency from a given number of viable options, where optimal energy efficiency is a function of the energy that must be spent on the action vs the energy that is gained from performing the action. The scenario I describe in the Red Queen Trap essay is set in such an ideal environment.
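The definition of rational action above amounts to a simple selection rule: given a set of viable options, pick the one with the best ratio of energy gained to energy spent. The options and their numbers in this sketch are hypothetical; only the selection rule comes from the text.

```python
# Rational action as defined above: from a set of viable options,
# choose the one with optimal energy efficiency (gain vs cost).
# The option names and numbers are hypothetical illustrations.

def energy_efficiency(option):
    """Efficiency = energy gained from the action vs energy spent on it."""
    return option["gain"] / option["cost"]

options = [
    {"name": "repair",     "cost": 5.0,  "gain": 8.0},   # efficiency 1.6
    {"name": "expand",     "cost": 10.0, "gain": 12.0},  # efficiency 1.2
    {"name": "do_nothing", "cost": 1.0,  "gain": 0.5},   # efficiency 0.5
]

rational_choice = max(options, key=energy_efficiency)
```

In the ideal environment the essay describes, every actor reliably lands on the most efficient option; the next paragraphs are about what happens when, in the real world, they do not.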
However, in the real world individual network actors often do not act rationally towards their own or the system’s preservation. This is not necessarily out of stupidity or malice but is often due to limited information – what Clausewitz called ‘the fog of war’ – or a host of other potential motivations which appear irrational from the perspective of the system’s survival. What is more, the closer an actor is to the system’s decision-making centers, the higher the impact of their irrational decisions on the overall state of the system. The irrational decisions of front-line staff [the periphery] are of an entirely different magnitude from the irrational decisions of senior management [the decision-making center].
In practice this means that in complex hierarchical systems decision-making centers will have much higher entropy than the periphery. In other words, they will be dissipating a lot of energy on internal battles over irrational decisions, in effect actively sabotaging the internal cohesion of the system. As a reminder, the lower the internal cohesion of a system, the more energy the system must spend on performing itself into existence. The higher entropy of decision-making centers may be harder to observe in the normal course of operations but becomes immediately visible during special cases such as organizational mergers or other types of system-wide restructuring.
The Red Queen Trap takes its name from the famous Red Queen paradox in Lewis Carroll’s Through the Looking-Glass. In this story, a sequel to Alice’s Adventures in Wonderland, Alice climbs through a mirror and enters a world in which everything is reversed. There she encounters the Red Queen, who explains to her the rules of a world resembling a game of chess. Among other things, the Red Queen tells Alice:
It takes all the running you can do, to keep in the same place.
On the face of it, this is an absurd paradox, but it reveals an important insight about a critical point in the life of every system. Let me explain.
Every system, be that a single entity or a large organization, must perform itself into existence from moment to moment. If it stops doing that it succumbs to entropy and falls apart. Spoiler alert: in the long run entropy always wins.