Ok so I have been waiting to write about the six-month anniversary of ChatGPT and how amazing the escalation of AI iteration has been, the insane number of products that have launched, et cetera, but then I had an epiphany. It's not what you all are thinking: is AI at the top of its hype cycle?
It is commonly said by enthusiasts that all tech products go through the Gartner Hype Cycle, and it is commonly said by realists that not all tech products go through the Gartner Hype Cycle. To be clear, I am not super hype on the hype cycle, but I do have an inkling that if AI is at the top of its hype cycle, the current wave of doomerism rhetoric is merely a tool to turn that hype cycle into an S-curve.
Here’s why:
The calls for AI being an "existential crisis" came way too fast. Everyone has been saying for years that AI is going to automate away millions of jobs, and it has yet to in aggregate. The current wave of "doomerism" feels too closely linked to politics; the narrative seems too choreographed. Like the video montages of Sinclair Broadcast Group anchors spouting the exact same news coverage word for word across hundreds of news shows, the talk track on AI danger reeks of the same general aroma.
Sam Altman going to Congress to pre-emptively self-regulate. At the outset, it looks like a smart move from Altman, a regulatory-capture sort of chess move. He is essentially saying "our technology is so profound and world-changing, and I am so altruistic and kind, that I want to make sure we protect humanity from the dangers of AI domination." Sam has always leaned very progressive, and he has not been shy in proclaiming that AI will absolve humans from most work, which will eventually call for a Universal Basic Income. If the technology is as profound as he says it is, he is a god amongst men, savior of humanity (by blocking it). He also gets to own the company that has the most developed tech, while instructing the government on how to limit competing organizations. If the technology is not as profound as he says, and GPT-4 (which is pretty fucking astonishing btw) is the top of the S-curve, then he has successfully propagated the continuation of the AI hype cycle. As the astonishing wave of product escalation cools off, Altman can claim that this is "on purpose," so as not to allow humanity to fall to waste at the hands of the AI. It's the perfect poker hand: he either has the nuts or he doesn't, but no one is in a position to call.
The open letter calling for "all AIs larger than GPT-4" to be slowed down, to give other people time to catch up, cosigned by many including Elon Musk. Again, from the outside looking in, the rift between Altman and Musk looks like two competing giants both looking to help humanity, but should the split have been manufactured, it offers more credence to Musk's call for the slowdown of AI (which, to his credit, he has been calling for for years). If Musk were still a part of OpenAI, both of them calling for regulation could look collusion-y, especially with Musk no longer being a darling of the Media-Left. But with Musk and Altman now seemingly on opposite ends of the political spectrum, seemingly no longer simpatico, it lends extra weight to the overall calls for deceleration/regulation. Musk is also known to overpromise and underdeliver: Full Self-Driving has taken longer than promised, and the household robot butler thing could be pushed back in the same manner. AI regulation (self-inflicted or not) could be the scapegoat needed for stalled-out Tesla stock growth in the coming years.
The rapid pace of product development has been due almost entirely to OpenAI's API releases. The manner in which they released versions of GPT-3.5 and GPT-4, first to consumers and then to developers, has provided a bit of a governor on the tools that can be built. With each release seemingly having been iterated on using the preceding version (AI being leveraged to create better AI), the releases could just as easily have been pre-planned for such an escalation.
This one is kind of obvious but saying that AI is going to end the world just makes the technology seem that much cooler, and more dangerous. This makes it more scarce, more desirable, and more expensive.
I'm not saying I am fully bought in on my hypothesis here, but my conspiracy sense is tingling. While AI products are mind-blowing, and I consider myself an accelerationist, I have always thought the doomerism to be quite silly. What reason could there be to cause such hysteria? Well, that leads to the last point I will make on this: political power.
AI vs Unions. We all know unions are an enormous political power, and AI is going to be the antithesis of a union. I can foresee AI being used as a political wedge in the next election, with labor unions at the forefront. The Media has been anti-Tech for quite some time now, as they have had their lunch eaten to the brink of starvation. We also know the Media and the political left have been growing much closer. It is not a stretch to think that Altman and Musk (technologists) are using this as some form of political capital. Most of us (I hope) don't buy into the Left vs Right hysterics that the Media likes to portray. The conflict is between haves and have-nots, and the real battle (as I have said many times) is between the Top and the Bottom. Altman and Musk represent opposite sides on the Bottom, so it is incongruous that they would be advocates of this obviously Top ideology. But when so much money is at stake, and you remember that electric energy is upstream of all of this, it is not too far-fetched. Plus, when you factor in Musk's history with the government, using government incentives to boost Tesla sales, and having the government be the biggest client of SpaceX, it further solidifies the point that the easiest way to make money is to be aligned with the system.
Clearly, a hype cycle suffers a massive deflation of energy after it peaks and enters the trough of disillusionment. Markets follow hype, and prices correlate accordingly (see: Crypto). VC pivoted from Web3 to AI on a dime. Where, and how, are we going to pump our money into things if AI peaks and all we get are a bunch of 10x developers and a lousy t-shirt? GAMMA stocks, which drive the equity markets, are all tied to the future development of AI. They don't need tent-poles; they need steady, continuous stock growth via value capture of compute and API token usage. Throw in a few products to boot, along with their newfound friendliness with government, and they have enough to ride through the next few years of high interest rates and non-free money.
Is AI the best piece of technology introduced in the past decade? Yes. But as I have written, the value you as a consumer get out of it is directly correlated with how much effort you put in. 10x of 0 effort is still 0. None of this changes how we meager underlings work on the ground to build value, as we can only build the new cool tools with the tools we are given. It is good practice, though, not to project too far into the future or to make major life decisions based on assumptions about what could happen. With the value capture coming at the energy/compute/model level, and the commoditization of tools via open-source code, the immediate future of AI is (and always has been) not what AI is going to automate, but what new and interesting products can be built utilizing AI that were not previously possible. Said another way, the discovery of fire was cool, but even cooler was the ability to cook meat with it, and the culture shifts that came after, which are still developing to this day. So if you are a capital allocator, or a founder, or an operator: don't look for the fire, look for the BBQ.
Excellent post Brian, thank you for sharing. I like the lens through which you are putting the current discourse.
It's hard to tell where AI the tech is in those curves, but as far as I'm concerned, the public discourse is definitely at the peak of the hype cycle. See https://mokagio.substack.com/p/beware-the-ai-apocalypse-prophecies.
The conclusion is also inspiring: AI value to individual consumers is, currently, directly proportional to the effort they put into it. The idea resonates. I concluded my TDD with Copilot guide, https://github.com/readme/guides/github-copilot-automattic, saying:
> Armed with an excavator, you'll dig a hole faster than with a shovel. But you need to know how to operate the machine before you can safely start digging. Likewise, AI can help you write code faster, but doing so effectively requires you to learn how to use it.
I also think the AI as enabler perspective is the only productive way to look at this (and any other) technology. Love the fire -> cooking analogy. If you indulge another self-plug, in "What happens when publishing apps is as easy as sharing videos?", https://mokagio.substack.com/p/what-happens-when-publishing-apps, I came to a similar conclusion.
> Generative AIs could do to programming what WordPress and YouTube did to publishing. They could remove the barrier to entry and give anyone with access to the internet a shot at making their ideas concrete. [...]
>
> Imagine what could happen if anyone could get their app ideas into the world. Imagine how much time could be saved if anyone who needed to automate a mundane task could write a script for it. Imagine all the untapped potential that could be unleashed and all the progress it would generate.
Thanks for sharing. Keep up the great work!