“The Permanent Problem” is an ongoing series of essays about the challenges of capitalist mass affluence as well as the solutions to them. You can access the full collection here, or subscribe to brinklindsey.substack.com to get them straight to your inbox. 

Over the course of twenty essays so far, I’ve attempted to describe the nature of the predicament now facing the United States and other rich democracies – and, by extension, all of humanity in the longer run. We have achieved material plenty, and a corresponding expansion of human possibilities beyond all prior imagining, but we have thus far been unable to translate this achievement into the widespread flourishing that constitutes the next great leap of progress.

We feel stuck, and tantalized – our expectations raised but unmet, the possibility of a much better life within our view but beyond our grasp. There is the sense that we are living in a no-man’s-land, a time of transition, that we are moving beyond one stage of history but have not yet found the next. Hence all the talk of “late capitalism,” and the fact that we so frequently identify our current position by what it no longer is: post-industrial, post-modern. Gramsci’s famous line, uttered in a very different context, crops up again and again these days because it seems so timely: “The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.”

I’ve painted a fairly dark picture up to now. I understand and occasionally try to remind readers of the mitigating context – namely, that we are members of the richest, healthiest, best-educated societies that have ever existed. I don’t believe we’re doomed, not by a long shot: There is much in what’s going on today to be encouraged by and hopeful about. But I do think our problems are formidable, and that a future much worse than today along many key dimensions looms as a very real possibility. We are at a critical juncture, playing for high stakes, and things could go terribly wrong.

Moreover, it’s my sense that achieving real progress while simultaneously avoiding disaster will require big changes in outlooks, institutions, norms, and policies. The path we have been on in recent decades has by and large been the wrong one, compounding rather than alleviating our difficulties. This is why I’ve adopted the language of crisis to describe our current situation: to set the stage for discussing possible ways forward. In the essays to come, I will explore possible new paths for social development and progress, paths that represent significant, even radical departures from the status quo. By highlighting the downsides and dangers of the path we’re on, I’ve hoped to make readers more open to considering those departures sympathetically. And it seemed to me that my ideas about possibilities for rejuvenation and renewal would make more sense, and my discussion of them would be more persuasive, if I first laid out in some detail the problems they are meant to solve.

But before embarking on this next phase of the series, I want to push back against what I’ve done up to now. Persuasive writing often involves a bit of deception: papering over the weak points in arguments, not coming clean with readers about the writer’s own doubts. It’s fairly common among professional scribblers – and it describes my own experience pretty much every time out of the blocks – to experience the writing process as a bipolar roller coaster ride, careening back and forth between the exhilarating feeling that you’ve struck gold and the exasperated sense that you have nothing to say of any conceivable interest. Writing in essay form allows you to be fully open with readers – to share with them not only your conclusions, but the often circuitous path you took to arrive at them. So let me take advantage of this format and expose my own doubts about the project I’ve undertaken.

Specifically, I want to make the case here that all my talk of crisis is overblown. Sure, there are big problems at present, but please name a time when there haven’t been. Posting recently on AI and existential risk, Tyler Cowen made a point about the doomsayers that is also relevant to my own hand-wringing:

No one can foresee those futures! Once you keep up the arguing, you also are talking yourself into an illusion of predictability. Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison. The case for destruction is so much more readily articulable — “boom!” Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns.

These words hit home for me because I’ve cried wolf before – earlier in my career, on a very different topic, but it’s the wrongest I’ve ever been in my public life. And if I’m wrong again this time, I’m being wrong in the same way.

My reaction was hardly uncommon, but the 9/11 attacks really threw me for a loop. I knew about al-Qaeda, and had a general abstract appreciation of the fact that Islamist terrorism was a problem and worth worrying about, but I hadn’t taken it all that seriously. My mental image of the terrorist threat was the 1993 World Trade Center bombing – which not only failed to do great damage, but also came off as a cartoonishly amateur operation (the group’s spiritual leader was a blind cleric who wore a Santa Claus hat, and one of the perpetrators was arrested trying to get the deposit back on the van they rented to carry the bomb). So I understood that there was a risk – six people had been murdered in that attack – but the fact that the risk existed didn’t have any real influence over my thinking. It didn’t cause me to reexamine any premises or rethink any positions.

9/11 changed all that. In addition to the shock and horror and grief that I felt along with so many others, I was upset with myself for my own prior complacency. And so, with that kind of pendulum swing that intellectual types are prone to, I swung hard in the other direction. When 9/11 was followed by various anthrax scares, and then Washington, D.C., was gripped with fear for weeks as a pair of snipers drove around shooting people at random, I became convinced that we had entered a dark and dangerous new phase of history. I saw how incredibly fragile our relatively high-trust society was, how vulnerable it was to random acts of destruction. Very small numbers of people willing to risk it all to see the world burn sufficed to unleash untold chaos, and it now seemed to me that an adequate supply of such people clearly existed. 9/11, I feared, was the signal to them that their moment was at hand.

This sense of crisis – the sense that, in the face of violent nihilism, our contemporary way of life was a sitting duck – led me to think that bold, decisive, aggressive action was called for. Swept up in that panic, I bought the idea that we could not tolerate rogue states with WMD programs; I bought the idea that our support for Arab dictatorships had led to our current vulnerability, and that democratizing that part of the world was a national security imperative. So I supported the Iraq War, supported it vociferously and in writing, and argued heatedly with colleagues at the Cato Institute (the staff was split over the war, but most people – including the foreign policy team and the senior leadership – were adamantly opposed).

It took me a couple of years to face up to and admit how wrong I had been. We were not, in fact, sitting ducks: A significant but far from radical upgrading of our intelligence and law enforcement efforts succeeded in rolling up terrorist networks and preventing another major attack from occurring. The dark and dangerous new phase of history had been a figment of my overactive imagination.

The whole episode left me with a much stronger sense of my own fallibility, and was one important step in my broader move away from ideological certainties of any kind (although, ironically enough, I had erred in this instance by bucking libertarian orthodoxy). In particular, I saw that I was especially vulnerable to a particular kind of error. I believe that my most distinctive intellectual talent is a gift for generalization and synthesis, but that gift comes with a matching booby prize: a kind of cognitive pareidolia, connecting dots across a broad canvas to see a compelling picture of something that doesn’t really exist.

So now that I’m once again crying wolf, arguing that we’re in a time of crisis, I can’t help but worry that I’m overreacting in the same way that I did two decades ago. This worry is reinforced by my recognition that things have looked bad before, maybe worse than they do now, yet in the end the resilience of liberal democratic capitalism sufficed to carry us through those times of trouble. I was a teenager during the 1970s, and I remember the general sense that everything was falling apart: Watergate, the last overloaded helicopter taking off from the U.S. embassy in Saigon, gas lines, “Whip Inflation Now,” the massive crime wave and the “Dirty Harry”/“Death Wish” reaction. In just 18 months over 1971 and 1972, terrorists set off an astonishing 2,500 bombs around the country. Yet by the 80s, the mood had shifted dramatically: economic growth returned, the stock market soared, gas prices came back down, and patriotism became fashionable again. The 1930s, when both my parents were born, were an altogether sterner test. Yet despite the worst economic crisis in modern history, and the outbreak of the bloodiest war in human history, both capitalism and liberal democracy endured and indeed triumphed.

The lessons of history, including personal history, thus offer good reasons to believe that our social and political order, which right now looks so fragile and beset with problems, is in fact incredibly durable, possessed of deep inner strengths and an indefatigable capacity for reinvention and renewal. And to be fair to myself, I’ve never said otherwise. I’ve employed the language of crisis, not that of inevitable doom or irresistible decline. Crises can be overcome, just as we overcame the crises of the 30s and 70s. But we should grant that this is not the only possible outcome: Just because we have made it through past crises doesn’t mean we’ll rise to the occasion again. Remember, in the story of the boy who cried wolf, the wolf eventually shows up.

For the remainder of this essay, then, let me go beyond these general historical reassurances and offer specific reasons for thinking that the problems we face now are surmountable – and, indeed, are currently in the process of being surmounted. For each of capitalism’s three crises I have discussed – the crises of dynamism, inclusion, and politics – we can see developments now underway that may well lead us past our current difficulties.

As to dynamism, we are currently witnessing exciting progress and key breakthroughs in a wide range of technological fields. Just over the past few months, the pace of astonishing new developments in generative AI models has been nothing short of overwhelming – for those of us on the outside looking in, amazing things are happening so fast it’s just impossible to keep up. ChatGPT is, by one measure at least, already the most successful product in the history of capitalism: It reached 100 million active monthly users in just two months. These new large language models hold out the possibility of significant, economywide productivity gains and potentially transformative new capabilities in, among other places, medical care and education. On the other hand, many people with impressive expertise in the field are deeply concerned that AI’s disruptive and destructive potential could lead to catastrophe – up to and including human extinction. With possibilities ranging from the millenarian to the apocalyptic, the current state of affairs may be simultaneously thrilling and deeply unsettling – but the last thing you’d do is complain about a lack of dynamism.

Of course, remarkable progress in information technology is nothing new: Rollicking creative destruction in the realm of bits has been a constant even as we have seen a general slowdown in growth and innovation elsewhere. But now, developments on many fronts suggest that a new era of rapid progress in the realm of atoms may actually be at hand. In energy, we see not only plummeting costs for solar and wind power, but also important breakthroughs in geothermal, small modular nuclear, and fusion. In transportation, SpaceX successfully launched its mammoth new Starship in an April test flight, new companies are trying to revive supersonic air travel, drones are becoming ubiquitous, and driverless cars, trucks, and delivery vehicles continue to improve. In healthcare, CRISPR and AlphaFold represent breathtaking new capabilities that could unlock a whole host of miraculous cures and treatments.  

Although it’s encouraging to see so much in the pipeline of promising new technologies, we need to remember that the crisis of dynamism extends well past the R&D phase. Our most vexing difficulties consist of all the obstacles that block the path between invention and implementation at scale. Here, too, though, there are hopeful signs. The brute facts of climate change and runaway housing prices are forcing progressives to face up to the fact that good intentions are not enough; there is growing recognition that the status quo of “everything-bagel liberalism,” which prioritizes saying yes to favored interest groups over actually getting things done, must be abandoned in favor of a “liberalism that builds.” While the so-called “abundance agenda” remains little more than an idea on a blackboard, the fact that these ideas are now in circulation and gaining traction marks an important change. As Lucy told Charlie Brown, “The mere fact that you realize you need help indicates that you are not too far gone.”

The crisis of inclusion has many dimensions, but they all relate back to the declining relative value of the economic contributions of ordinary workers. Over recent years, however, that gap has actually been closing in the United States. There are many different metrics for assessing such things, but my colleague Matt Yglesias offers a good rundown in a recent “Slow Boring” post. According to the Congressional Budget Office, the U.S. Gini coefficient – the broadest single measure of income inequality – declined by 5 percent between 2007 and 2019. That’s not nearly enough to reverse the big run-up in inequality during the 1980s and 90s, but it’s encouraging nonetheless. More recent data are harder to parse because of the pandemic and the ensuing rise in inflation, but the growth in real wages since 2019 has been strongest at the bottom and weakest at the top. What we’ve learned, then, from the experience of the late 90s and the years since 2015 is that neither wage stagnation nor current income gaps are inevitable: When labor markets are tight, wages can grow smartly and incomes in the bottom half can gain ground on those at the top. A decent amount of the economic marginalization we’ve seen in recent decades has been due less to the inevitable workings of capitalism than to inadequate demand. (Alas, with the return of inflation and its massive unpopularity with voters, maintaining tight labor markets may prove politically challenging.)

Meanwhile, according to that same CBO report that discussed Gini coefficient trends, over the period from 1979 to 2019, the U.S. tax and transfer system taken as a whole grew more progressive. In other words, the net effect of taxes and transfers in reducing inequality has increased over time. Notwithstanding the shift away from class-based politics and the growing professionalization of the political process, the egalitarian impulse has thus remained strong enough to produce a clear progressive trend in fiscal policy. And in recent years, especially in response to the pandemic, a newfound willingness to use aggressive government spending to support household incomes has been in evidence. Real incomes surged in 2020 despite the huge disruption in economic activity, all on the strength of transfers. (Since I’m trying to be upbeat here, I won’t dwell on the fate of the expanded Child Tax Credit, an anti-poverty wonder drug that slashed child poverty rates by 30 percent until the program was discontinued after less than a year – with remarkably little rallying to its defense.)

The surging development of generative AI promises to have a major impact on, among other things, labor markets – but in what direction? Fears of mass joblessness rise with every AI breakthrough, and it’s certainly possible that the next rounds of automation will further marginalize the contributions of ordinary workers. Most prior automation has focused on replacing manual and routine clerical labor, thus ultimately reducing relative demand for what ordinary workers have to offer. But generative AI looks poised to automate professional expertise and complex cognitive tasks like computer programming and written communication – and thus, over time, it may eat away at the wage premium currently enjoyed by college-educated knowledge workers. Early indications are that ChatGPT reduces the time to complete writing tasks about equally for everyone, but that it offers a relatively bigger quality boost to the output of less productive workers. In other words, ChatGPT seems to reduce skill inequality among workers.

Meanwhile, over the longer term, large language models hold out the promise of personalized AI tutoring and coaching – think of the “A Young Lady’s Illustrated Primer” in Neal Stephenson’s The Diamond Age. We know that educational achievement gaps can be closed with intensive high-quality instruction; unfortunately, it’s a silver bullet that doesn’t scale because silver bullets are too expensive. But if AI were able to give every child an immensely knowledgeable, unfailingly patient private tutor with intimate knowledge of his intellectual strengths and weaknesses, liberation from one-size-fits-all classrooms could mean a dramatic leveling up in human capital across the population. And since our new class divide is ultimately a human capital divide, AI breakthroughs may end up reversing the long inegalitarian slide we’ve been on for decades.

And what about our political crisis? In my last essay, I concluded that liberal democracy looks likely to withstand the authoritarian populist challenge. Populism has proved a powerful mobilizer of discontent, enough to disrupt politics in many different countries and to knock a few of them at least temporarily off the democratic path. But at the same time, it’s also highly effective in provoking a massive countermobilization, as we have seen in the United States. The U.S. situation is complicated by the fact that our constitution’s major anti-majoritarian institutions – the Senate, the Electoral College, and the Supreme Court – all presently serve to exaggerate the power of the right. That alignment, though, is not a longstanding one, and we shouldn’t expect it to last forever. And so, I am reasonably confident that the solid majorities that regularly rise to reject Trump and Trumpism will prevail in the end.

The corresponding (but not equivalently threatening) form of extremism on the left, the new style of social justice radicalism that emerged in the 2010s, now looks to be past its peak. What is widely referred to as “wokeness” – I stay away from the word because it’s become a general term of abuse on the right – originated in those petri dishes of new intellectual fashions, university campuses and the internet. From there, in relatively rapid fashion, it fanned out into K-12 education, the professions, and corporate America. During this expansion phase, fear of being called out for racism, sexism, or various phobias created a vacuum that activists rushed to fill, shaking up institutions and putting their stamp on them. At this point, though, momentum appears to be flagging – as documented in this recent piece by Niskanen senior fellow Musa al-Gharbi. Arguing along similar lines, Noah Smith compares the current wave of social justice activism to a prairie fire – it can cover a lot of ground quickly, but just as quickly consumes all its fuel and exhausts itself. Specifically, he notes, “there’s a tendency for movements that rely heavily on opportunistic denunciation (i.e., what we now call ‘canceling’ and what 70s activists called ‘trashing’) to burn themselves out; once everyone either gets denounced or realized that they’re in line for denunciation, there’s just no more fuel for the fire.”

Even as we can look forward to a politics less deformed by mutual demonization, we can also see how the force of events might push politics back toward more serious engagement with real-world problems. During the “holiday from history” that commenced with the death of the Soviet Union and the birth of the internet, the stakes of political conflict declined sharply – a change that was predictably accompanied by a turn toward the frivolous and self-indulgent. But although we may have tuned out reality for a while, reality wasn’t done with us. Climate change went from being an intellectual abstraction to a fact of life; infectious disease came roaring back, with a series of 21st century outbreaks that fizzled and then a global pandemic that killed 20 million; China’s abandonment of liberalization and return to one-man rule, together with Russia’s delusional push to restore its empire, have sufficed to bring great power rivalry out of mothballs. Of course we could fail to manage these challenges successfully, but their existence and salience do at least create incentives for the resumption of a more sober and constructive politics.

Of course, responding to those incentives effectively will not be easy in the current media environment. In its vast complex of funhouse mirrors, everything gets distorted, everything gets sensationalized, everything gets trivialized. Never has more timely and relevant information been at our fingertips, yet never has assembling a common set of facts on which to base collective action been more challenging. I’ve given full vent to my concerns about this state of affairs in earlier essays, but here – where I’m trying to push back against doom and gloom – I can note simply that new communications technologies are always profoundly disruptive. The invention of the alphabet and the ensuing spread of literacy helped to usher in the Axial Age; the printing press, which made possible the first truly mass media, fanned the flame of religious wars in Europe that redrew maps and claimed millions of lives. The enormous usefulness of these technologies, though, has in all previous cases so outweighed their potential for harm that our world is now unimaginable without them. With regard to the internet and social media, we are still in very early days. There is good reason to expect that these tools will be modified over time to amplify their virtues and mitigate their defects; likewise, there is reason to expect that we will gradually adapt to these technologies, learning to use them more effectively while reducing our vulnerability to distraction and manipulation.

I’m not going to back away from my assessment that we are living in a time of crisis. The fact that we’ve weathered other storms before does nothing to refute the claim that we’re in the middle of a raging one now. We face big and deep-seated problems, and it’s not hysterical to point them out, along with the nontrivial risk of catastrophe that they bring. But our past successes in muddling through should be a source of comfort and, more importantly, equanimity. Marx was right to say that men make their own history, but not in circumstances of their own choosing. It was not our choice to live in times as filled with promise and peril as these, but it is our opportunity. As we try to write the upcoming pages, neither Panglossian complacency nor resigned defeatism will be of any help at all.