This entry is dedicated to Gordon Earle Moore (January 3, 1929 - March 24, 2023). Gordon Moore was an American businessman, engineer, and co-founder of Intel Corporation. His contributions to chemistry and computer science changed the world and fueled the information revolution.
Moore was a California kid. Born the second son of Walter Harold (a deputy sheriff) and Florence “Mira” (a homemaker), he grew up in San Mateo County and displayed an early passion for chemistry. This passion would take him through Berkeley to Caltech, where he received his PhD in chemistry in 1954. His postdoctoral research at the Applied Physics Laboratory at Johns Hopkins eventually led him to Shockley Semiconductor Laboratory, which was funded by Beckman Instruments. William Shockley established the seminal lab in Mountain View, California, partly for proximity to his ailing mother, and thus, “Silicon Valley” was born.
Moore briefly joined Shockley but quickly left after recognizing a misalignment with senior leadership. Backed by Sherman Fairchild, he and seven fellow defectors started the influential Fairchild Semiconductor Corporation. It was during his time at Fairchild that he gained fame for predicting the semiconductor market’s trajectory over the next 10 years. Moore’s Law was born from a 1965 article Moore wrote for Electronics magazine. The prediction became the industry’s target for miniaturization and is responsible for many areas of technological change. Understanding Moore’s Law offers an important perspective on the evolution of tech.
Moore and Robert Noyce went on to found NM Electronics in 1968, which was soon renamed Intel Corporation. Intel pioneered new tech for computer memory, integrated circuits, and microprocessor design. You can see the doubling effect clearly in the graph below:
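To make the doubling concrete, here’s a minimal Python sketch of the arithmetic. The only real data point assumed is the 1971 Intel 4004 at roughly 2,300 transistors; everything else is just the compounding assumption of doubling every two years.

```python
# Moore's Law as compound doubling: count(t) = count_0 * 2 ** (years / doubling_period)
def projected_transistors(start_count: int, years: float, doubling_years: float = 2.0) -> int:
    """Project a transistor count forward under a fixed doubling period."""
    return int(start_count * 2 ** (years / doubling_years))

# Illustrative: starting from the Intel 4004 (~2,300 transistors, 1971),
# doubling every two years for five decades.
for decade in range(0, 51, 10):
    print(f"1971 + {decade:2d} yrs: ~{projected_transistors(2300, decade):,} transistors")
```

Fifty years of doubling turns 2,300 into roughly 77 billion, which is why the graph looks like a hockey stick on a linear axis.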
So why is the history of the semiconductor industry so important to what’s happening right now?
It’s really all about transistors and performance. More transistors per chip, and smaller, faster-switching transistors, meant more computing work done per second. One classic yardstick for CPU performance is MIPS, which stands for millions of instructions per second. Let’s just say we’ve come a long way with speed and computing power in the last 40+ years. This continuum has allowed the WAY we compute to change.
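For a sense of scale, here’s a quick sketch of the MIPS arithmetic. The clock rates and instructions-per-cycle figures are hypothetical ballpark values, not benchmarks.

```python
# MIPS = clock rate (Hz) * instructions retired per cycle / 1,000,000
def mips(clock_hz: float, instructions_per_cycle: float) -> float:
    return clock_hz * instructions_per_cycle / 1e6

# Hypothetical comparison: a ~1 MHz 1970s-era CPU averaging 0.5 instructions
# per cycle vs. a modern 4 GHz core retiring ~4 per cycle.
print(f"Vintage core: {mips(1e6, 0.5):,.1f} MIPS")   # ~0.5 MIPS
print(f"Modern core:  {mips(4e9, 4.0):,.0f} MIPS")   # ~16,000 MIPS
```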
Early chips were central processing units (CPUs). These chips were good for general-purpose tasks like running operating systems, applications, and simple gaming. In the 1990s, as consumers moved to computing en masse, chip design expanded from central processing to graphical processing to handle complex graphics and parallel computing. NVIDIA entered the world stage in 1999 with the GeForce 256, marketed as the world’s first GPU, another breakthrough in technology that enabled a much more robust computing experience.
Currently, we’re seeing an explosion in demand for neural network and machine learning processing. CPUs/GPUs can do this work, but gradually, neural network tasks will shift to dedicated NPUs (neural processing units). These chips mimic the way neurons and synapses process information in the human brain.
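That brain analogy is less mystical than it sounds: at bottom it’s weighted sums plus a nonlinearity, repeated at enormous scale. Here’s a minimal sketch of one artificial neuron (the weights are made-up values, not a trained model):

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One artificial neuron: a weighted sum of inputs (the 'synapses'),
    squashed by a sigmoid activation (the 'firing'). An NPU's job is to
    run enormous numbers of these multiply-accumulate ops in parallel."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes output to (0, 1)

# Illustrative values only -- real weights come from training.
print(neuron([0.9, 0.1, 0.4], [0.5, -0.2, 0.8], bias=-0.3))  # ~0.61
```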
Here are some practical applications of NPUs, according to Bing-enabled GPT:
AI scene recognition in photographs, including object detection.
Judging light sources and shadow detail to synthesize night scenes.
Voice recognition.
Predictive rendering to improve the smoothness of video.
Predictive touch processing to improve tracking and responsiveness.
There are a ton of different types of processing units, specialized for holography, dataflow, acceleration, and even wearables. But we have to remember processors are designed for speed. Enterprise system design, by contrast, is currently akin to a Rube Goldberg machine or the Self-Operating Napkin.
The average enterprise deploys over 200 apps. That translates to between 40 and 60 per department and 10 per user. Couple that with the average American carrying 80 apps on their phone, using 9 per day and 30 per month (source). To get those apps to ‘talk,’ each connection needs an API, which creates redundant communication patterns and slows communication across the system. Those APIs also require management, which means more resources.
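There’s simple combinatorics behind that redundancy: wiring every app directly to every other app grows with the square of the app count, while a shared hub grows linearly. A quick sketch using the 200-app figure above (the hub model is the idealized alternative, not a claim about any specific product):

```python
def point_to_point(apps: int) -> int:
    """Every pair of apps gets its own API integration: n * (n - 1) / 2."""
    return apps * (apps - 1) // 2

def hub_and_spoke(apps: int) -> int:
    """Each app connects once to a shared hub: n integrations."""
    return apps

for n in (10, 50, 200):
    print(f"{n:3d} apps: {point_to_point(n):6,} point-to-point vs. {hub_and_spoke(n):3d} via a hub")
```

At 200 apps, that’s 19,900 possible point-to-point integrations to build, secure, and manage.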
So… let’s take a breath.
The reason we’re FEELING anxious is that our whole computing system is out of whack. Disparate methods of communication keep our notification systems in constant ping: a red dot here, a chime there, maybe a periodic vibration over there. This has us hypervigilant. Our processors have gone from centralized and simple to complex and neurologically inspired in 20 years. The systems we’re working on are effectively taped together with APIs, and there’s no centralized datasphere from which to find, save, share, and search. On top of that, management styles are different and leadership expectations are too.
There’s good news. It can be fixed. 💥
It starts with teams. Team dynamics, team structure, and even the methodology by which teams are formed will change. Teams thrive when they embody a full spectrum of acumen. Thought diversity, individual personality type, leadership style, and the ability to recognize and act upon change are the ingredients in the sauce. Providing that diversity, that level of intentionality when assembling a team, and tying the team to the very best resources is the modern responsibility of the organization.
A cross-section of most current teams resembles our current systems architecture. Teams are as misaligned and disparate as the applications designed to support them.
At its core, the problem with commercial real estate (CRE) is people, not place. If we want to fix the office, we have to address the anxiety caused by our systems. We have to rebuild our teams.
People first, then platform, followed by place….
The office is the simplest to fix.
#TotalTenancy™ #OrionGrowth
Photo by BoliviaInteligente on Unsplash