e/acc or Effective Accelerationism
It’s interesting how perspectives and philosophies bloom and take shape as modern technology unfolds before our eyes. One such philosophy germinating in the AI/ML space is Effective Accelerationism, or e/acc for short. The name is a play on Effective Altruism (EA), the philosophy that aims to use “evidence and reason” to figure out how to benefit society as much as possible.
EA has grown in popularity over the years, manifesting in all sorts of ways, such as ESG stocks and SJWs, with the ultimate unfortunate culmination in the revelation of SBF’s true intentions. I don’t want to be too critical of EA, as it is a noble philosophy wherein the end game is to benefit society. However, the utilitarian notion that “actions are just if they are useful for the greater good” leads to “ethics by spreadsheet”, and often falls short in tackling the nuanced problems presented in daily life.
e/acc is not antithetical to EA, or even against it, but rather a different path to a similar outcome. While EA leans authoritarian with top-down resource control, e/acc leans decentralized and favors a bottom-up approach to optimizing growth. Both sides are aligned with the purpose of benefiting the future of civilization.
The elephant in the room is of course AI, whose development e/acc is obviously trying to accelerate and EA is cautiously trying to decelerate, due to fears such as the annihilation of life on Earth, or perhaps just total human enslavement to the machines. Herein I will refer to such fears as “doomerism”, but I will leave it to the reader to form their own opinion on where they land on the sliding scale of fear.
To get to the heart of e/acc, we have to (in true e/acc fashion) construct it from the bottom up, not deconstruct it from the top down. From the top down it looks very much like classic libertarianism: “don’t tread on my AI”, small government, anti-regulation. As it rapidly gains notoriety, signal-boosted by some of the biggest names in VC and tech, it will be easy to attack it as such, but a quick peek under the hood reveals connective tissue to many other trends that have been developing in our society.
It begins with the laws of thermodynamics.
There are four laws of thermodynamics, which are fundamental principles that describe the behavior of energy and matter in thermodynamic systems:
The zeroth law of thermodynamics states that if two systems are each in thermal equilibrium with a third system, they are in thermal equilibrium with each other.
The first law of thermodynamics (also known as the law of conservation of energy) states that energy cannot be created or destroyed, only transformed from one form to another, and that the total energy of an isolated system is constant.
The second law of thermodynamics states that the total entropy of an isolated system always increases over time, or at best remains constant, but can never decrease.
The third law of thermodynamics states that as the temperature of a system approaches absolute zero, its entropy approaches a minimum value and becomes constant.
These laws are foundational principles in the field of thermodynamics, and they provide a framework for understanding and analyzing the behavior of energy and matter in various systems.
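For reference, the first three laws admit compact mathematical statements. This is standard textbook notation, not something from the e/acc material itself:

```latex
% First law: the change in internal energy of a system equals
% the heat added to it minus the work done by it
\Delta U = Q - W

% Second law: the entropy of an isolated system never decreases
\Delta S \geq 0

% Third law: entropy approaches a constant minimum value
% as temperature approaches absolute zero
\lim_{T \to 0} S = S_0
```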
The founders of e/acc, Twitter anons Beff Jezos and Bayeslord, who have ties to theoretical physics and now work in AI, posit a few ideas that you have to wrap your head around metaphorically in order to grasp the fundamentals of the philosophy. The following is my own interpretation as I try to condense the material into something more accessible to a general audience.
1. The 2nd law of thermodynamics can be used to describe the evolution of complex life. Utilizing free energy increases entropy, which pushes time forward, and leads to more complex life.
2. Complex life can mean any meta organism such as a cell, a human, an organization, corporation, or civilization.
3. Capitalism is a type of intelligence inherent in the meta organism that helps proliferate growth by dynamically optimizing for consumption of free energy.
I know. That’s a lot. I am trying to make this as simple as possible. If you want to read more about it you can here. For now, let’s create one more overarching principle:
“Humanity solves problems through technological advancement and growth.”
Easy. So how do we measure our success in the e/acc world? There is something called the Kardashev scale, which measures a civilization’s level of technological advancement by how much energy it is able to use. There are three types of civilizations according to the Kardashev scale:
Type 1: Ability to harness all the energy that reaches its home planet
Type 2: Ability to harness the full energy output of its own star (e.g. via a Dyson sphere)
Type 3: Ability to harness energy at the scale of its own galaxy
Unfortunately, we are not quite near the pinnacle of Type 1 yet, but the intention of e/acc should now be clear. We should be aiming to harness as much energy as possible, and the way to do that is through technology. Harnessing this energy will preserve and proliferate complex life (in whatever form that may be), and it is technology, fueled by the intelligence of capitalism, that will make this happen.
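The scale also admits a continuous interpolation due to Carl Sagan, K = (log10 P − 6) / 10 with P measured in watts, which makes it easy to see where we stand. A minimal sketch (the ~2×10^13 W figure for humanity’s current power consumption is an order-of-magnitude assumption for illustration):

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type 1 ~ 1e16 W, Type 2 ~ 1e26 W, Type 3 ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's current global power consumption is roughly 2e13 W
# (an assumed order-of-magnitude figure).
print(round(kardashev(2e13), 2))  # ~0.73: not yet Type 1
print(kardashev(1e16))            # 1.0: Type 1
print(kardashev(1e26))            # 2.0: Type 2
```

By this measure we sit somewhere around K ≈ 0.7, which is what the paragraph above means by “not quite near the pinnacle of Type 1 yet.”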
Let me say that again backwards:
Fueled by capitalism, technology enables the ability to consume free energy, which thermodynamically increases entropy, preserving and proliferating complex life.
That is the underlying principle of e/acc as I understand it. Utilizing this principle leads to all sorts of conceptual frameworks you can use in daily life. From the personal perspective, one should want to obtain skills and utilize leverage to maximize talent. AI happens to be possibly the greatest way to leverage skills that has ever been invented. From the aggregate perspective, society should want to democratize AI, creating more of what the e/acc founders refer to as AI “variance”, a prelude to entropy, which also has its own utility in the form of a sort of AI vs AI antitrust safety net.
Coming full circle, this brings us back to EA and its tendency towards doomerism, as it optimizes for minimizing pain and suffering. AI, in all its glory, represents a hypothetical existential threat to society that needs to be decelerated. People are afraid of an intelligence monopoly, where there is a single AI, and it has its own set of principles unaligned with that of human civilization.
This point on AI alignment is a rational fear, but in the context of the e/acc framework, it is one that is best mitigated through variance. Since we don’t live in a vacuum, any deceleration by a corporation or nation state only means that a competitive corporation or nation state will advance in technology, creating the exact monopoly or oligopoly the doomers fear. You can liken it to the justification for the annual increase in defense spending by the US government. It’s easy to say we should stop building surface-to-air missile technology, but not when adversaries are increasing their stealth air capabilities. Now, whether it costs $20,000 for a hammer and $30,000 for a toilet seat is a different question.
Do we really want China or (god forbid) Facebook to aggregate all of the AI power thanks to regulatory capture?
We are already living under the utilitarian framework of minimizing pain and suffering. Optimizing for this has led us to pharmaceuticals, dopamine addiction, and overall non-productivity. This instant gratification slows consumption of free energy and is not optimal for the future growth of civilization under the e/acc framework.
We are “pleasuring ourselves to death”.
So, how to proceed? Well, ideally this means increasing the intelligence of meta organisms, which means accelerating AI. Minimizing existential risk means favoring growth in a more broadly defined notion of an organism.
At its core, e/acc is a vessel for a brand of ethos that I have been talking about for quite some time now. It is something that I’ve tapped into subconsciously. I have always wondered why my articles on middleman extraction and bloated bureaucracy were the ones to attract so much attention and so many followers. People really gravitate to this notion that to be successful is to be productive. And to be productive you need to leverage your skills, and AI is by far the best way to leverage those skills that we have ever seen. e/acc takes this a step further and says that to reduce the speed at which intelligence increases is to reduce the speed at which a civilization’s intelligence increases. This is akin to death in our universe.
The only path forward is to be individually productive at the unit level. Bottom-up emergent altruism instead of top-down utilitarianism. Humans are the molecules that make up the organism of civilization. We can’t have too many unproductive cells or we will be unproductive as a society.

This is already happening to some degree, and EA is playing a big part in it. Utilitarianism is inherently creating an incentive structure that is enabling for those who are unproductive. This happens not purposefully, but through bad agents. As we saw in Covid lockdowns, and continue to see in the labor market, people everywhere are performing the mental cost-benefit analysis of working vs receiving subsidy. Receiving 50% less pay for 100% less work is seemingly optimal for many people. Or consider the corporations touted as ESG that prey on the exploitation of a community via greenwashing or demographic-based marketing tactics. People want to be benevolent and feel good about supporting certain types of businesses, but that signal is not always truthful. Apple and Nike still have manufacturing in China with supply chains of dubious legality that we overlook.

That is all to say that top-down EA is easily hijacked by centralized culture and cannot be relied upon. One must take matters into their own hands and be productive, consume free energy, increase entropy, and adhere to the advice that I have given throughout my substack: just do stuff.