A Critique of Longtermism


I recently watched a video about longtermism on Sabine Hossenfelder's YouTube channel, which I highly recommend for its objective perspective on scientific news and complex real-world issues. Longtermism, as defined in the video, is "the philosophical idea that the long-term future of humanity is way more important than the present." Longtermists maintain that because the approximately 8 billion living humans are a tiny fraction of the lives that will populate the future, long-term issues are many orders of magnitude more significant than seemingly pressing issues like starvation, which afflict a 'mere' few hundred million. Sabine's video presents longtermist views espoused by technocrats Elon Musk and Peter Thiel, along with common arguments surrounding the ideology. I want to posit an argument against longtermism as a nonpartisan teenager who is admittedly dissatisfied with the trajectory of human innovation.


Most longtermists contend that society should prioritize issues that threaten humanity in the distant future over exigent ones. Their rationale is that any investment in long-term problems, no matter how speculative, yields an enormous return once you factor in the trillions of humans who will live after us. However, they fail to consider that investment in exigent issues, like education, empowers more people to tackle long-term problems. Although this philosophical article is not the place to calculate these competing returns, the comparison is at least worth mentioning.


Furthermore, longtermists argue that it is acceptable to risk valuable resources on predictions about the long-term future, a position that is questionable at best. Not only have experts regularly failed to predict the future, but their predictions have influenced some of the most abysmal decisions in history. Countless preventable financial crises demonstrate the disastrous consequences of such predictive hubris. I'm not saying that we shouldn't commit any resources to a vision for the future. Experts have predicted that climate change will pose a massive risk to humanity if left unchecked. However, we should prioritize non-speculative predictions rooted in rigorously tested science over fantasies peddled on a billionaire's whim.


And if issues that may become pressing in 100 years are more worthy of resource expenditure than those that may become pressing in 10 years, how far ahead should we prioritize? By the same logic, issues that may exist 100 years from now matter less than issues that may exist 1,000 years from now, and so on. Longtermism fails to deliver a crystallized plan of action and sets no bounds on itself. I hold nothing against billionaires and politicians, but we can't ignore the inherently political nature of human progress. Even without corroborating evidence, my argument soundly refutes longtermism, at least until Gates, Altman, and Zuck become more transparent about their plans.



