Video summary:
In this discussion moderated by Niklas Adalberth, founder of the Norrsken Foundation, Daniel Schmachtenberger, founding member of the Consilience Project, talks about the need to shift our focus from valuing economic success to valuing impact on society and the environment. He criticizes the celebration of unicorns (startups with billion-dollar valuations) without consideration of the harm they may cause to society or the planet. Schmachtenberger argues that the current growth-oriented society is not compatible with a finite planet and highlights the need for impact entrepreneurs who address the world’s greatest challenges.
He discusses the concept of the metacrisis, which refers to the underlying dynamics that give rise to various crises, such as climate change, social injustice, and species extinction. Schmachtenberger emphasizes the role of technology, particularly exponential technologies like artificial intelligence (AI) and synthetic biology, in exacerbating these crises. He raises concerns about the misuse of AI, such as the development of AI weapons, and the potential catastrophic consequences of advanced technologies.
Schmachtenberger suggests that the current global market system, driven by a narrow focus on economic growth and financial interests, is incompatible with the well-being of the planet and humanity. He describes the market as a misaligned superintelligence that relies on human action and incentivizes the pursuit of monetary gain above all else. He argues that a fundamental change in the market system is necessary to ensure compatibility with the biosphere and the long-term survival of humanity.
While acknowledging the complexity and darkness of the challenges we face, Schmachtenberger encourages people to confront their feelings of depression and outrage. He emphasizes the importance of recognizing the inherent beauty and sacredness of life, and the need to take action to preserve it. He urges individuals to engage in activities that align with their values and to actively seek out information and perspectives that remind them of the true state of the world. He concludes by emphasizing the need for a paradigm shift in our economic, governance, education, and religious systems to safely steward the power of advanced technologies and prevent global catastrophe.
“Metacrisis” huh. As a name for the thing I like “the Long Emergency” better. AI in its currently existing forms is important only in that it’s yet another refinement that can have some use in enhancing the efficiency of the systems we’ve been building up for the past few hundred years, but we are at the point where the diminishing returns from such novelties will no longer be effective in staving off disaster for any appreciable length of time. Large language models are already passé.
I find ‘metacrisis’ more descriptive and satisfying for the reasons Daniel talked about in the video - that it’s not just the many crises we face, but the underlying systems that are creating the crises (i.e., Moloch). Also, it doesn’t matter if AI is not effective at staving off disaster; as long as it creates value for the market, it will be deployed at mind-boggling scale and resource use even as the world burns.
I think this speaker and the people pushing the term “metacrisis” in general misunderstand humanity’s predicament in a way that leads them to rate too highly the potential of those AI systems that have been made so far (that we know of). It’s interesting technology, but its potential threats are even more over-hyped lately than its potential benefits. We have better things to do than worry too much about either.
What do you think he’s getting wrong about our predicament? AI wasn’t really a focus in this talk, just showing how we knowingly develop dangerous tools and act against our collective interests because of our system of incentives and multipolar traps.
I just don’t think that the “exponential tech curve” is all that exponential or all that relevant a factor compared to for example the pretty low-tech way in which we’re burning ungodly quantities of fossil fuels and using the energy thus produced to eat the whole planet. It’s not only AI that I think is over-hyped, it’s many of the things I saw when scanning through the video transcript. Finely-tuned supply chains, genetically modified crops, ridiculous financial system fuckery, and other such things are increasingly required to keep it all barely chugging along, but it seems to me that they and “Tech” in general are not the cause of or the solution to our problems unless you go back to technology and modes of social organization invented in the 19th century and before. Crooked Timber: “As Cosma said, the true Singularity began two centuries ago at the commencement of the Long Industrial Revolution.”
But then again, a substantial part of my reaction was prompted by things I read on searxing the word “metacrisis”, so perhaps my reaction is not entirely fair to this video.
Fair enough, all good points, but the reason we’re all here beyond the natural carrying capacity and eating the planet is the exponential tech curve (Haber-Bosch and others) that we’ve been on since discovering fossil fuels. If the energy is there, we will use it to grow to the detriment of everything else; it’s in our nature. If we somehow manage to complete the green energy transition, that’s probably even worse for our long-term survival, because instead of running out of accessible fossil fuels and being forced to degrow, we’ll keep the growth machine running and accelerate this mass extinction and knock down the rest of the planetary boundaries. All new technologies will allow us to increase the scale of our impacts on the planet.
Thanks for the link, I haven’t read it yet but it looks interesting.
Yeah, it’s the “tech curve” being exponential that I don’t see happening. The Haber-Bosch process (dating from just after the end of the 19th century, by only a few years) was revolutionary, and a fine example of the kind of rapid increase in our ability to exploit the hell out of everything that hasn’t been happening so much lately. The increase in technologically enabled power may have started looking exponential at a certain point, but I don’t believe it has continued like that all the way to the present. The gains today are more incremental, less momentous. The rise and fall of Moore’s law shows a similar pattern in microcosm: great new invention, rapid improvement, exponential growth that people assume will last forever, then its limits are approached and further progress in that direction becomes slow and complicated. When I try to imagine how the total curve of technological power has gone, I can’t avoid the impression that its rate of growth topped out somewhere around the mid 20th century at the latest.
Sure, it’d be disastrous if we somehow kept up the exponential growth in energy use, production, and population for any great length of time even without fossil fuels, but the idea that this might actually happen starting from this level seems more like a techno-optimist fantasy than any kind of realistic scenario worth considering. Like some other ideas of the Consilience Project, it seems to me as if it might be more relevant to some future post-collapse civilisation that needs to avoid making the same mistakes that will bring down ours.
I admit I’m on the verge of losing sight of the overall point of this thread. But thinking about it more, I will add that looking at the actual tech curve itself may not be that important, depending on where you draw the line between technology and capability. For example, it may not matter that the increase in transistor density is slowing down when global total compute keeps increasing exponentially. Further, how would quantum computing factor into this? (The movement in the cryptography space suggests that a post-quantum world is imminent.) On the topic of LLMs, would it matter if those stagnate while the ability of companies and states to manipulate us and drown us in misinformation keeps growing exponentially? And how would the advent of AGI factor into this? In some ways, that would be the last invention we have to make ourselves. I guess the point is that some advancements, even ones that are merely incremental, seem to have an outsized effect on our ability to impact the world around us.
Anyway, I read the article you linked and enjoyed it. It reminded me a bit of Meditations On Moloch, which also attempts to explain the behavior of civilizations - a good read if you haven’t seen it yet.
Great video, thanks for posting it.