March 26, 2026

The future of innovation, with Andrew McAfee

Brink Lindsey

After years of disappointing productivity growth, are we about to experience an AI-powered breakout? On this episode of The Permanent Problem podcast, Brink Lindsey welcomes Andrew McAfee, a principal research scientist at MIT Sloan School of Management and the author of (most recently) The Geek Way, to discuss the current state and future prospects of technological and economic dynamism. They start off by reviewing recent developments in AI and discussing whether LLMs will lead soon to superhuman machine intelligence. They then dive into the potential of current LLM technology to substitute for white-collar knowledge work, emphasizing the tortuous, trial-and-error process of technological diffusion and the distinction between eliminating tasks and eliminating jobs. Here McAfee points out how the new style of business organization he calls the “Geek Way” can accelerate this discovery process. Finally, Lindsey and McAfee review the political barriers to innovation erected by today’s interest-group “vetocracy” and the daunting severity of the problem in western Europe.

Transcript

Brink Lindsey: Hello everybody and welcome to The Permanent Problem. On today’s episode, I’m delighted to have as my guest Andrew McAfee, who is a research scientist at MIT and the author of a number of excellent books, three of them with Erik Brynjolfsson — The Race Against the Machine, The Second Machine Age, and Machine, Platform, Crowd — and then more recently the anti-degrowth manifesto More from Less and most recently The Geek Way. And you’re also running a Substack by the same name, The Geek Way, right now. So Andrew McAfee, welcome to the show. Wonderful to have you.

Andrew McAfee: It’s really good to be here. Thank you for having me. Erik and I call those books The Machine Trilogy.

Lindsey: That’s good, good. So I’ve just written a book, The Permanent Problem, which you graciously blurbed. And in it, I talk about a triple crisis of capitalism. The central crisis in my argument is a crisis of inclusion, a weakening connection between economic growth and overall wellbeing. But a subsidiary crisis is what I call a crisis of dynamism — the fact that the actual capacity of the system to keep delivering economic growth is sputtering in advanced economies around the world. And then finally, the third element is a crisis of politics. We’ve got big collective action problems which are causing the other crises, which we might be able to do something about through politics, and yet our politics is now swamped with dysfunction as well. So we’re going to focus on dynamism, on the current state of technological dynamism and its prospects.

I focus on the downside in the book. We’re going to focus more on the upside here, but the fact is that the 21st century has been a slow-growth century for the most advanced economies of the capitalist world. Outright stagnation for years at a stretch has not been uncommon in many European economies. And even in the United States, which has been the star performer among the richest economies, growth is still way down from what the average rate was during the 20th century. Although perhaps just now, things might be starting to tick upwards. So we’ll talk about that. And I call this a crisis of dynamism rather than just a decline in dynamism because I think things could go either way. We have some real dazzling possibilities right now for new technologies that are transformative in their potential, and yet we have a lot of barriers to actually moving from the lab to diffusion out through the economy.

But first, let’s focus on those possibilities. Over the past decade or so, after a very sleepy few decades of technological dynamism in the world of atoms, as opposed to the world of bits, we’re seeing real excitement, a real flurry of innovation and breakthroughs in hard tech — most spectacularly, dramatically falling costs for solar and wind power. We’ve got a lot of new investment and interest in advanced geothermal and new nuclear designs. We’ve seen spectacular decreases in launch costs thanks to SpaceX. We’ve got self-driving cars in a number of US cities.

And then going back to the world of bits, just in recent years, all we can think about these days is the breakout of artificial intelligence in the form of large language models. Just recently, just in the last month or so, we’ve had another round of heavy-duty hype and panic that followed the release of a couple of new models from OpenAI and Anthropic and some viral doom-and-gloom posts that riff on where people think that technology is going. So let’s talk first about AI and let me introduce it with the metaphor that you and Erik used in your earlier books, the second half of the chessboard. It feels like we’re there now. So explain that metaphor and then relate it to where we are in AI these days.

McAfee: It’s actually Ray Kurzweil’s metaphor. We cited him when we appropriated it, which Brink, as you know, is totally fair game in academia.

Lindsey: Yes.

McAfee: You can take someone else’s idea and run with it, but you need to cite them. So it was Ray’s original metaphor, and Erik and I wrote Race Against the Machine. It was a glorified pamphlet. We self-published it on Kindle because we had a couple of ideas we wanted to get out there and we got some attention to it. We said, “Hey, well done.” And then—

Lindsey: By the way, there was just this one year of Kindle Singles, 2011, 2012. I wrote one. They were great, but just one year of them.

McAfee: And we got inspired by Tyler Cowen who put out The Great Stagnation, I think originally as a Kindle book. And we’re like, “Okay, great.” And then Rafe Sagalyn, who’s an amazing, wonderful literary agent — he’s still our agent, still my agent — called us up, and he doesn’t cold-call many authors. And he called us up and he said, “Okay, what you two did with that was cute, but now…” Rafe doesn’t use this language. He said, “Now we’re going to do the grownup version where you get an actual publisher and an actual editor and an actual advance. Because this book, these ideas deserve more space and more of a platform.”

And he said, “In particular, this metaphor of the second half of the chessboard is really interesting.” And we’re like, “Oh, okay.” So that metaphor, again, which comes from Kurzweil, is that the way to think about exponential progress — one way to think about it — is to imagine putting one grain of rice on the first square of a chessboard, and then doubling it for each of the following 63 squares. And what he’s trying to convey with that is that exponential—

Lindsey: And wasn’t there some legend about some advisor to the emperor who asked for payment in those terms?

McAfee: Yeah, exactly. And the only fuzziness in the legend is whether the emperor beheads the guy when he finds out exactly how much rice he owes him. But the moral of the legend, the math lesson hidden in the legend, is that exponential progress is deceptive because it looks underwhelming for a while. And then at one point in time, it looks astonishing and there’s this, where did that come from?

Lindsey: Right, this discontinuous leap, it appears out of nowhere.

McAfee: Right. But all that’s going on is that steady doubling in the background. And the reason that metaphor or that fable is relevant these days is that Moore’s Law is steady exponential progress in all of the building blocks of computing — memory, processing, storage, bandwidth, energy cost, almost doesn’t matter. They all have been doubling with almost clockwork predictability and regularity for the entire history of the digital era. And so Kurzweil grabbed this notion of the second half of the chessboard to make us aware that at some point, that constant Moore’s Law in the background, that constant doubling is suddenly going to look very, very weird. And so if you start with putting one grain of rice on the first square of that chessboard, double, double, double. Kurzweil’s point was the first half of the chessboard — by the time you get to 32 squares, you’ve got a big pile of rice, but you’re not freaking out about anything.

And it’s only in the second half of the chessboard that the doubling yields eye-wateringly big numbers. By the time you get to the 64th square, you’re counting grains of rice in the quintillions. That’s why the emperor gets mad at the clever advisor. And I think we calculated that the resulting pile of rice is way bigger than Mount Everest.
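The chessboard arithmetic is easy to verify: square n holds 2^(n-1) grains, so the running total is a geometric series. A quick sketch in Python (not part of the conversation, just checking the numbers):

```python
# Square n of the chessboard holds 2**(n-1) grains of rice.
def grains_on(square: int) -> int:
    return 2 ** (square - 1)

# Cumulative total through square n: 1 + 2 + 4 + ... + 2**(n-1) == 2**n - 1.
def total_through(square: int) -> int:
    return 2 ** square - 1

print(f"{total_through(32):,}")  # first half of the board: 4,294,967,295 (~4.3 billion)
print(f"{grains_on(64):,}")      # 64th square alone: 9,223,372,036,854,775,808 (~9.2 quintillion)
print(f"{total_through(64):,}")  # whole board: 18,446,744,073,709,551,615 (~18.4 quintillion)
```

The first half of the board ends at a few billion grains; the 64th square alone holds about 9.2 quintillion, which is why the doubling looks unremarkable for 32 squares and absurd thereafter.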

And so we use that imagery of… When we wrote The Second Machine Age, which was in 2014, we said, depending on how you start counting, it feels like we’re entering the second half of the chessboard. And I think that is a very loose, very poorly phrased prediction that has held up pretty well because here we are on the second half of the chessboard. And one of the things that you should expect are multiple weird surprises, because Kurzweil’s point and our point was that things are different in the second half of the chessboard when you’ve got this much accumulated computing muscle behind you for everything. And one of the weirdest things that’s happened is the emergence of generative AI and large language models. And one of the fun parts of my job is I get to talk to the alpha geeks of the computer science and AI communities. One of the questions I’ve learned to ask them is, “Were you all expecting this? Were you all expecting that it was only a matter of time until we get large language models that can do all these astonishing things?” And without exception, they’ve said, “Oh, hell no. This was a curveball. This was out of left field.” The paper that started the whole thing — I’m sure you know this — was written in 2017 by a large team from Google, and it proposed a new approach for machine translation. And it said, “This seems to work really well. You don’t even need matched sets of documents in the two languages that you’re trying to go between. All you do is upload a huge amount of documents in different languages and it figures it out.” This was wild.

And then two other wild things occurred. People realized that as you added more data and more compute, the performance of the system, the error rate, the things that it could do kept getting better. And so we say that we’re now in the scaling era of AI and it’s not over yet. We keep on waiting for that scaling to plateau or asymptote. And in most ways, it has not done that yet. And then the third thing that was really weird was that this approach, this transformer architecture, was not just good for automatic translation — it was good for almost any language task that we could think of. And so one of the points that Rod Brooks, fantastic computer scientist and co-founder of iRobot, has been making for a long time is that computers have always been lousy at writing code. They were just very bad at writing their own language. Now they’re not. And part of that excitement that you bring up are these recent models in particular from Anthropic, with the latest models of Claude and Claude Code and Claude Cowork, where these things are really, really good at writing code.

And I’ve lost track of the number of colleagues I have who have said to me at some point, “I wrote a thousand lines of Python today.” And my initial reaction was, “Yeah, good for you. You had a productive day.” And they said, “You don’t understand. I don’t know Python. I have no idea what I’m doing, but all I’m doing is talking to Claude and telling it what I want. And all of a sudden, I’ve got a huge amount of new code on my computer.” It’s not just us dilettantes — the people who write industrial-strength code for a living, many of them are incredibly excited about this. Anthropic just announced a little while ago that they now have a module that works for COBOL, this way-more-than-half-a-century-old computer language that is—

Lindsey: The equivalent of cuneiform for computer languages.

McAfee: Yeah, exactly. But it’s apparently still running the old mainframes that sit in the basements of a lot of banks and keep track of who exactly has how much money. COBOL is still being used for that stuff.

Lindsey: I think unemployment insurance systems, likewise. Yeah.

McAfee: Yeah. I was teaching a group of executives at a bank the other day, and I was trying to make this point about Moore’s Law and obsolescence. And I said, “Now, we don’t have very many 20-year-old computers.” And then they all looked at each other and said, “Oh yeah, we actually do.” But Anthropic’s point was we can reach way back into the mists of time and probably recreate that COBOL code and finally get rid of that clanky mainframe in the basement of the bank. That’s where we are now. It’s just a very weird time.

Lindsey: And so when AI can code, then you get this potential positive feedback loop acceleration where you use AI to improve the next model of AI and you start to get towards that nirvana of recursive self-improvement.

McAfee: And I love it. In my X feed these days, that’s now an abbreviation — it’s just RSI. And my alpha geeks in Silicon Valley are talking about we’re now here. This is not hypothetical anymore. Things are recursively self-improving. I ran into the Anthropic CEO, Dario Amodei, at Davos earlier this year, and I just wanted to say hi. I said, “Hey, this is amazing. It must be a blast to be running this.” And he had this dazed look on his face and he said, “It’s astonishing.” And I said, “Okay, which of your models should I use? I’m not a professional software developer.” And he said, “Get Claude Cowork and mess around with it.” And I said, “Okay. By the way, is it true that Claude Cowork was entirely written by Claude?” And he said, “Yeah, basically.” So that’s where we are.

Brink, you might know of Patrick Collison, who’s the co-founder of Stripe and just unbelievably accomplished, very thoughtful, really smart guy and a very good programmer himself. I was reading his X feed and I think early this week he said something — if I’m exaggerating, I’m not doing it deliberately and I’m not exaggerating very much — he said, “I downloaded half a century of weather data and trained a weather forecasting system over the weekend. And then I asked it to suggest improvements to itself and it did that.” This is while he’s running a gigantic payments company as his day job. So we are in uncharted territory.

Lindsey: So we’ve now reached the point where AI, the large language models, have capabilities which look incredibly disruptive, transformative — they’re a big deal. This is a general purpose technology that is going to diffuse throughout the economy, change business practices, change lots of things, right? But it ain’t stopped now, right? We have not plateaued.

So before we talk about what’s going on right now, let’s talk about your thoughts on where this current large language model architecture is going to go. Some people are arguing breathlessly that we’re a year or two away from whatever you want to call it — superhuman AI, strong AI, a country of geniuses in a data center. Others think that something that’s purely a creature of language and doesn’t have any contact with the external world can’t be very creative outside of its training data. And so for the real breakthrough — the kind of AI that will tell us how to build fusion reactors and cure cancer — maybe we’re going to need something beyond just the large language model architecture. What’s your hunch?

McAfee: And thank you for that final word. It’s my hunch. As you just articulated, there’s an extremely active debate going on among the alpha geeks of AI and computer science about how far this LLM approach will take us. And there are some who say it will get us to artificial general intelligence, where the technology can do essentially anything a person can do — that’s the general part — and do it at superhuman levels. Sometimes they talk about ASI, artificial superintelligence. It can clearly get us a long way there. We are seeing it make advances in some of the hard sciences. Just recently, people have been feeding the latest models unsolved mathematics conjectures, asking them to look at proofs, and asking them to look through deeply theoretical physics papers to see if there are mistakes in them, and the technologies are doing a good job at those things.

So these are weirdly capable machines. The debate is whether they’re going to get us all the way there or not. And I don’t have an informed perspective on that debate, but I do listen pretty carefully when people like Fei-Fei Li and Yann LeCun talk about the fact that it is a model of language, a very powerful one, and that we need more than that to reach this promised land of actual artificial general intelligence.

And their main critique, if I understand it correctly, is that, as you pointed out, these large language models don’t have a world model. They just have a model of this thing that you and I are doing. And because this thing is so powerful and so flexible, a large language model actually gets you a long way. And we can take out the language — we can swap in maybe a large genetics model or a large materials model or a large pathology model. There was just a wild attempt to throw a bunch of sleep data into a transformer architecture, and we now have a large sleep model. I think that’s wild because sleep is this fundamentally important phenomenon for many, many living things that we understand very poorly.

Lindsey: We still don’t have a very good idea why we have to do this, why we have to conk out—

McAfee: Right. And what happens and how it’s connected to our health and our longevity — and we’re going to get insight from all that. What Yann and Fei-Fei and other people say is, “That’s very powerful. It’s still not a complete world model. The machine — we’re not teaching it how this universe that we live in works, how if you drop something, it falls toward the center of the earth, unless it’s lighter than air, in which case it goes up. The mind-bending cluster of rules that govern how the universe works.” They say that’s not the goal of a large language model, so we shouldn’t expect them to get there. And they have both founded companies to build world models and to take another vector on AI. I’m excited about that. These are incredibly smart, very, very good researchers.

Lindsey: Now they’ve got some resources behind them, we’ll see what they do, right?

McAfee: They’ve got real resources behind them. I’m super eager to see what they come up with. And more generally, we have been throwing a ton of our effort at large language models for the past — call it at least three years, at least the ChatGPT era. Great. I think the same as you. I am really heartened to see pretty serious amounts of money starting to be thrown at other vectors of AI. Wonderful. The more shots on goal we get, the more likely we are to come up with very powerful things.

Lindsey: Yeah. In my own confessedly primitive way of thinking about these things — because it’s not informed at all — but it seems like there could be, in the corpus of what we know right now, which is what we’ve trained AI on, all kinds of implicit latent knowledge that we just haven’t connected the dots on. So it could be that just out of that, AI large language models can make all kinds of truly interesting, important discoveries just by finding what’s latent in what we already know, but didn’t know that we knew. But it still seems to me… I’m partial to the argument that you need some kind of external model of the world. So I’m very interested to see how those approaches work out.

McAfee: But you also brought up the other fundamental point, which is that we’re not done getting value out of large language models, let alone large fill-in-the-blank models. I’ve been following some of the best mathematicians in the world saying, again just recently, “These models went from being not that useful to my actual work to being very, very useful and very good at them.” Again, to your initial point, exponential progress is weird and deceptive, and it feels like in many arenas, we’re there. We’re in the second half of the chessboard.

Lindsey: So it seems to me now, it’s clear enough that large language models in their current state are very good coders. And so they’re going to transform the work of software engineers. They’re going to substitute for a lot of work that’s being done now. They’re going to empower some people to be much more productive than before. It could be that at some point there are loops that humans get cut out of altogether. I don’t know, but it’s just obvious that big change is coming to software engineering. It seems pretty obvious to me that with the state of the art right now, big change is coming to knowledge work generally. These models have real facility with language: they churn out well-organized, thoughtful prose, make good arguments, anticipate counterarguments. Whether we want to call it thinking or not, they’re doing the equivalent of thinking at quite a high level, certainly at a level as high as or higher than many white-collar workers are operating at today.

And so I think it’s already inevitable that what we’ve got is going to replace a lot of existing white-collar knowledge work. That’s not the same thing as eliminating a lot of white-collar knowledge jobs, but it is going to replace a lot of work. The most breathless projections of transformative change, I think, wildly underestimate how friction-ridden a process the diffusion of these capabilities is. The capabilities themselves are great; you can test them against Math Olympiad questions and the like in artificial settings.

But out in the real world, to actually take these capabilities and figure out how you get a durable commercial advantage out of them: the people at the frontier labs don’t know the answers to those questions. They have no idea. It’s the people in those businesses who are going to have to figure it out, and there’s going to be a lot of trial and error. We know that every general purpose technology that came before took decades for people to figure out; it took decades, for example, to work out how to use electricity to completely reconfigure the layout of factories. So this will probably go faster, but tell me your thoughts about where we are now in this process and what you see as the prospects for dramatic change in business practices, business processes, and just the organization of business life because of these technologies.

McAfee: I want to bring up one other factor that’s going to be really important for determining the pace of change here. I’m doing this really interesting round of executive teaching at MIT Sloan, where I work. And as you can imagine, interest in AI and curiosity about AI and alarm about AI are very widespread in the business world. So there’s this really interesting series of two-day bootcamps that MIT Sloan put together in partnership with McKinsey where the requirement is that most, if not all, of the senior team of decent-sized companies — companies you have absolutely heard of — together come to campus for a couple of days for an AI bootcamp. And not just learn to use ChatGPT or whatever, but to give the entire senior leadership team a focused opportunity together to understand some aspects of what’s going on and to brainstorm about what they’re going to do about it.

In my decently long career in business schools, I have never seen this density of the entire leadership teams of medium to big companies coming through focused on this one topic and willing to devote this level of energy and time to it. And the one thing that I observe over and over is that in these companies that are coming through — and I want to be clear, their senior leadership team is engaged, whether or not they themselves have a clear idea of what AI is going to be doing for them or how it’s going to change their company — they are keenly interested in the question and they know it’s an important question. It ain’t going to be quick inside these companies, for reasons that have absolutely nothing to do with the technology.

These companies — I’m going to use the label legacy companies for them, and what I mean by that is they became big and successful during the 20th century, not the 21st century — these companies have been around for a while for the most part. And I think they’re still carrying around a metaphorical 20th-century playbook for leadership, management, what running a company actually looks like when you’re doing it well.

And that metaphorical 20th-century playbook involves a whole lot of integration, collaboration, communication, socializing ideas, a whole lot of checks and balances, a real deep-seated fear of making costly mistakes, a reverence for process, and a whole lot of planning and analysis upfront. And that playbook is going to limit the speed with which these companies can get things done.

You have probably heard — I forget if it’s Francis Fukuyama’s definition of bureaucracy, I think it might be his — he says, “It’s a promiscuous distribution of the power to hold things up.” And so we wonder why big companies get so bureaucratic. I think it’s because they’re following this metaphorical playbook where, without intending to jam everything up, that’s actually what you wind up with. That’s the organization that you wind up with.

Lindsey: And it’s just more or less inevitable that when a new technology hits established enterprises with established ways of doing things, they’re going to want to put this thing on top of what they’ve got already, right?

McAfee: Right. And they’re going to—

Lindsey: And they’re going to be very scared and averse to going and tearing out their foundations and rebuilding everything from scratch.

McAfee: And they’re going to want to think about it for a while and they’re going to want to get all hands on deck and they’re going to want to make sure that all the people who think about different aspects of the company are involved in this, and they’re going to do it at a cadence that hearkens back to the 20th century.

Lindsey: Right. And that respects all of those stakeholders it needs to bring along. Meanwhile, some little tiny startup is going to enter their industry, not face any of those constraints, not have to rebuild, but build from scratch. And so it’s very frequently the case that it’s new enterprises that ultimately figure out how to make proper use of new technologies because the established ones are just too wedded to old ways of doing things.

McAfee: And as you just pointed out, the pattern from business history is fairly clear. We’ve got pretty good data about the most recent big general purpose technology that washed across the economy, which was electrification of manufacturing about a century ago. We’ve got decent data about what happened there. The companies on top at the start of that transition were not the companies on top at the end of that transition. For the reasons we’ve been talking about, they typically didn’t see all the benefits of electrification.

And to simplify a little bit, if you were running a factory powered by a big old steam engine in the basement, when electricity came along and the cost advantages became obvious, a lot of factories said, “Okay, great. We’ll take the steam engine out and we’ll put a great big motor in the basement — ta-da, we have electrified.” And they actually had. Their factory ran on electricity. And then these weirdos showed up and said, “Wait a minute, why stop at one? These electric motor things are still powerful when they’re small — we could put them all over the factory.”

Lindsey: And why do we need multi-story factories? Why can’t we just build everything flat and have lots of motors, right? Yeah.

McAfee: And you can go back and look — they had journals of manufacturing back then. This debate played out for decades where the idea that you would have a separate motor on every machine in the factory, this was crazy talk until it was inevitable. So this is to say, when you’ve got a bunch of incumbents around, the transition takes time.

Now, you identified the main force that will cause this transition to be fast, which is disruptors — startups or small companies that don’t have that legacy, either of mindset or of established norm and process and routine — that come into these industries. And very often, not always, but very often, one of them comes along that makes the incumbents look bad at their own game from a standing start. I’m thinking of SpaceX in this case, which was founded this century, in 2002. Fifteen years after SpaceX was founded, it relaunched the first commercially viable orbital rocket. It went up, it came back down, they refurbished it, and they put it back up. They did that within 15 years of their founding.

Today, the rest of the space industry combined has done zero relaunches of commercially viable orbital rockets. Blue Origin has put it up and brought it back down, and now they’re doing the refurbishment process. So that number’s going to go from zero to one. But again, all the incumbents put together have done zero. SpaceX has bathed the world in cheap connectivity from satellites with Starlink, and they remain the only organization certified by NASA to take American astronauts into space from American soil. Boeing thought they were going to get that same designation in 2024 when their Starliner spacecraft took two astronauts up to the space station. NASA looked at the data from that flight and—

Lindsey: Didn’t let them bring them home, right?

McAfee: No. They said, “You’re not bringing those astronauts home.” The story is unbelievable, and I think it’s indicative of something. The story is crazy, and we only learned it last year into this year. As Butch Wilmore was piloting the Starliner spacecraft toward the International Space Station — on the far side of which was nothing but infinity and a really unpleasant death for those two astronauts — the layers of redundancy that you build into a spacecraft started to fail. The thrusters, which sit on the outside of the spacecraft, started failing. And as he’s getting closer to the space station, which has astronauts on it — so crashing into that is also a really bad idea — all the levels of redundancy fade away. And I forget if he still had one level left or if they were down to no redundancy at all. And they had, as you can imagine, a very quick huddle about what they were going to do about this.

And the solution was what you and I did with our old Windows 95 laptops when they had the blue screen of death. Effectively, they shut the spacecraft down, waited 30 seconds, and powered it back up. Lo and behold, they got enough redundancy back to dock that spacecraft and get the astronauts onto the space station. NASA said, “You are not bringing those astronauts home.” And then SpaceX went up last fall and got them after eight months on the space station and brought them back.

So there are two fascinating things going on here. One of them is what happens to great American companies that makes them lose their way, that makes them unable literally to complete their missions? We see this over and over. You and I have been around long enough that we used to hold up GE as the exemplar of managerial excellence in the 20th century. We don’t do that anymore. Boeing was an iconic American company.

If you spend enough time with this managerial playbook, I do think things get gummed up. I think you become sclerotic and inward-looking. Pivoting becomes really hard. When you just layer on more steps in the process, more processes, more people involved, when you promiscuously distribute the power to hold things up, the end state is not a happy end state. So that’s question number one. Question number two is the flip side, which is what is allowing these new — I call them geek companies — to come along, do extraordinary things in really capital-intensive, very knowledge-intensive, very clubby industries, and make the incumbents look bad at their own game? Which is what SpaceX has been doing for coming on 25 years now. Think about Netflix becoming a Hollywood studio and almost buying the iconic studio Time Warner last year and earlier this year. There’s a real shift going on.

So in addition to coming up with science-fiction technologies, I think a really underappreciated phenomenon — and the reason I wrote The Geek Way — is that these geeks clustered in Northern California have substantially upgraded a really, really powerful technology called the company: this thing that we use, through the private sector and the economy, to provide goods and services to people. I think that machinery has had a huge upgrade. And I’m going to bring this back to my executive education. I am primarily not teaching the leaders of geek companies. I’m teaching the executives of legacy companies, and in some cases they are going to be confronted by a bunch of geeks showing up in their industry with this new approach to running an organization. But in some cases, they’re not. And I’ve taught execs at several different banks by now.

And I’ve learned not to say, “There are four 23-year-olds with Claude Code installed on their laptops, and they’re going to take away your market share.” I don’t believe that. Unfortunately, the reason I don’t believe that is not because these banks are keeping up with the geeks — it’s because they’re protected by a thicket of regulation. Retail banking in the US is not a thing that the extraordinary investors and entrepreneurs of Silicon Valley are interested in, because you have to go ask permission from so many people to open a bank.

So one of the gating factors — to get back to your question about how quickly this science-fiction toolkit is going to change the business landscape and the leading companies in industry after industry — one of them is going to be just how protected in every sense of the word are the incumbents in that industry. Because some of those barriers have gone way down.

Think, again, about space exploration. That’s extraordinarily knowledge-intensive. Successfully launching rockets and bringing astronauts up and down is honestly one of the hardest things we humans have ever accomplished. It’s knowledge-intensive. It’s extremely capital-intensive. It’s oligopolistic. You have clubby relationships with a very small number of buyers historically. Those were all fairly serious barriers to entry. And now SpaceX is making all the incumbents look bad.

So some of those barriers are going down. However, I do not think that Patrick Collison and Stripe are going to be operating the dominant retail bank in America in five years. I don’t think that’s on their product roadmap. I wish it were, but I don’t think it is.

Lindsey: So let’s talk about what’s happening right now. So your sometimes co-author Erik Brynjolfsson has done some research that claims to find evidence of productivity growth acceleration going on right now thanks to AI. You’ve written sympathetically about his research on your Substack. So tell me a little bit about what he’s found and what you think.

McAfee: One of my favorite economists to talk to, in addition to Erik, is Robert Gordon, Bob Gordon, who’s at Northwestern, phenomenal scholar of productivity and growth. And he’s a pessimist.

Lindsey: And famous curmudgeon.

McAfee: Yeah, he’s great. I love talking to incredibly smart grouches. Erik is not a grouch — we’ve got very different dispositions. Bob’s a curmudgeonly dude, and he thinks that all of this hype around computers is overdone, because he keeps looking at the productivity statistics, which is exactly the right place to look to see if things are living up to the hype. All of us digital geeks think we invented this wondrous toolkit, and Bob keeps asking the right question: why isn’t it showing up in the productivity statistics?

Lindsey: And just as far as what the data show, the internet gave us 10 years of accelerated productivity growth in the United States. Nowhere else. Just us. And just 10 years.

McAfee: 10 years, right? And so from ’95 to about 2004, labor productivity in the US had a real uptick, a quantum leap up. And so Erik and I and all of the other digital cheerleaders are like, “Woo-hoo, here we are. It’s the promised land.” And then Bob keeps on pointing out the uncomfortable truth that productivity growth went back down to these anemic levels and has been there ever since.

So Erik — I love the guy — he said, “Okay, I am so optimistic about the current toolkit of digital,” including the previous generation of AI, machine learning, and now generative AI, that even before generative AI came along, he said to Bob, “Come on, let’s place a bet. Let’s place a public bet about the future of productivity growth in the United States.” And so they set a benchmark of, I believe, 1.8% per year. And Erik said, “I’ll take the over.” Bob said, “I’ll take the under.” And the bet runs until, I believe it’s 2029, but I’m not 100% sure about that.

And Erik just wrote an op-ed in the Financial Times where he said — way too early to call victory, but the productivity numbers, not just for one quarter because you know it’s a very jumpy measure — the productivity growth numbers for the US have been looking up for 2025 and they’ve been staying up quarter after quarter. Erik says, “I think the era of AI-driven productivity is finally here.” As you point out, I wrote a Substack echoing what he’s saying, and I think Erik is going to win his bet with Bob. Bob’s point was it was a long time coming and it is not yet here — you don’t see hyper-charged American productivity across the economy yet at all. And he’s right about that.

Lindsey: So let’s look at the darker side of technological dynamism. Another name for productivity growth is creative destruction. That’s got the word destruction in it. And so in addition to creating new opportunities, you take a lot of people who had a comfortable status quo and you make it very uncomfortable. So it looks like — you tell me — there’s some preliminary research by Erik as well, I think, on possible labor market effects of an AI-induced drop-off in demand for new college grads. Let’s talk a little bit about that, but then let’s widen the lens and look out farther.

So at the most breathless end of projections, you have the idea of rapid technological unemployment, mass layoffs, big spikes in unemployment. I’m skeptical about that, but I can picture decades of the same thing happening to knowledge workers as what happened to factory workers and manual workers over the last half century, which is declining relative demand for your particular set of skills and relatively flat wage growth and anemic prospects for a whole lot of people. That seems to me entirely plausible. But tell me, what does your crystal ball show you?

McAfee: And I see we’re just about out of time, right? Because this is a gigantic question. You’re asking about our shared prosperity — I have trouble thinking of a more fundamental question, and it’s very uncertain. I do think that in this era of generative AI, we are living through a new chapter of that endless struggle for shared prosperity.

So let’s anchor the discussion. At one end, you hear a lot of the alpha geeks of Silicon Valley saying breathless things about the looming era — short-term, very short-term — of massive technological unemployment because we have a tool that is so good at knowledge work, at language tasks, and as we’ve already talked about, not just language tasks. And they’re correct about that.

One of the weird things to me is that some of the people saying the most breathless and alarmist things are running the AI companies. I don’t know if their comms departments love it when they go off on these rants about massive looming technological unemployment, but there are informed people in the Valley who believe pretty deeply that in a year or two, whole swaths of people who are currently doing pretty well-paid white-collar knowledge work are not going to be employed.

I’m not that far out on the spectrum of where I think things are going to land. I keep remembering that in, I think it was 2016 or 2017, Geoff Hinton — who’s absolutely one of the grandparents of modern AI — said that we should stop training radiologists because the technologies that we already had at that time and that we were accumulating were better at the pattern recognition task of looking at an image and diagnosing what’s going on. They were better at that than not just okay human doctors, but almost any human doctor. And given that pattern recognition superiority, Hinton said, “This profession is going to wither and we should stop training people at it.” There are more radiologists in America today than there were when he made that recommendation, even though, as he foresaw, in general, the machine learning technologies are better at diagnosing what’s going on from a medical image than even top human beings are. He was right about that. He was wrong about the employment consequences of that.

Lindsey: And this critical distinction between eliminating work and eliminating jobs.

McAfee: Right. And this gets to a distinction between tasks and jobs. This is something that Erik has talked about, that Daniel Rock has talked about: a job is a bundle of tasks, and just because you automate some of them — say 40% of the tasks — it does not follow that you can eliminate 40% of the people. That’s the wrong way to think about it. It doesn’t even follow that the number of people employed doing that job is going to go down, because the remaining tasks can become more valuable, and as the cost of doing the thing goes down, people might want more of it. And so the people doing that bundle of tasks — even if the bundle shrinks because of technology — can still have plenty of employment opportunities out there. That’s been the pattern with radiologists.

Now, is that going to be the pattern for software engineers, for people who write code for a living? You mentioned this other interesting research that Erik did. He looked at very recent hiring data, relying on work that Daniel Rock and a team at OpenAI did on which jobs are most exposed to generative AI. Full disclosure: Daniel is Erik’s doctoral student, and the three of us, along with a fourth guy, James Milin, are the co-founders of an AI company — so I’m talking my own book here, and we need to keep that in mind. Rock and company did a lovely piece of research where they said, “Here’s how to think about which jobs are most exposed to generative AI.” Erik grabbed that and found that, as you would expect, it’s in the professions that are most exposed where we see the greatest softening in demand for recent college grads. Really interesting finding — lots of controversy, lots of debate about it, as there should be. This is how science and social science work.

But we are seeing some early signs of softening demand right where you’d expect it: in the most exposed professions, and among the newest entrants to the labor force, because those are the people you have to teach the most before you can start getting any actual productivity out of them. You take a CS grad fresh out of college and you have to teach them how business actually works and how work actually happens inside organizations. So there’s an apprenticeship phase, and that’s where demand is softening.

How do we take that finding forward? Man, I have no idea. And the reason I don’t is because this is such a general purpose technology. It can do so many things, it can do them very well. Many parts of the economy are not protected by a thicket of regulation, and that’s where we’re going to see the work of writing software change the fastest.

Lindsey: What we can never get out in front of is that by eliminating some tasks, we then create a whole bunch of new tasks that are now very important.

McAfee: And that’s the point, right? And you brought up Schumpeter’s great phrase of creative destruction. When creative destruction happens, it is always easier to predict the destruction than the creation. And the thing that you and I both believe is that the best way to deal with that is not to try to channel the creative destruction and say, “Okay, you can’t touch those jobs or you can’t touch that industry.” It’s to let the innovators and the entrepreneurs do what they do, which is look around at the landscape of opportunities, figure something out, and spin up a company to take advantage of that opportunity — and that company needs people.

I don’t think we’re heading into that era, though. I hear this hype online these days about the one-person company — just one entrepreneur with a great idea and a whole bunch of Claude Code capturing a big market. There’ll be a little bit of that solo entrepreneurship, but it’s still a group sport. You bring up the other important phenomenon, which is that creative destruction can sometimes lower overall employment in an industry. It happened with agriculture; it happened with longshoremen. You point out it’s happening in the manufacturing sector. I don’t need to tell you: America’s manufacturing output continues to go up year after year, but manufacturing employment keeps declining. We have decoupled output in that industry from employment in that industry.

Are we going to see that with this era of AI? I’m pretty confident we are. How big, how widespread? Wow, is that an open question. It’s absolutely a question that we need to keep our eyes on, but I am really reluctant to engage in proactive restructuring of the economy or trying to channel AI so that it displaces as few current workers as possible.

You know there’s this ongoing debate between two schools of innovation: one called upstream governance, which our colleagues and friends in Europe seem to be really fond of, where they try to channel the innovation so that it does good — but really it’s a harm minimization strategy. The reason I don’t love that is it’s an incumbent protection strategy and I don’t think that’s a great business to be in. The flip side is permissionless innovation, which is the team that I’m on, that says, “Look, you’ve got to let it cook. You’ve got to let the innovators and the entrepreneurs do what they do. There will be dislocations. That’s the destruction part of creative destruction.”

Lindsey: So that’s a perfect segue to the other half of the crisis of dynamism — not the dazzling possibilities, but the frustrating barriers — because there are a whole lot of areas of economic life these days where people aren’t allowed to experiment, figure out what works, and have free run to do trial and error, especially outside of the United States. But even in the United States, we’ve seen this growing sclerosis: it’s become very difficult to accomplish large-scale rearrangements of atoms out there in the real world. There’s this hydra-headed vetocracy with its thousand veto points, the gauntlet that must be run to build anything, to make anything big. And beyond that, there’s a general risk-averse cultural complacency — we know we have all these problems and we know they’re terribly inefficient, but it’s just more trouble than it’s worth to deal with them.

So we’re just so rich and so complacent that we know we’re not close to optimal in a whole bunch of areas, and yet we just throw up our hands and say, “Oh, it’s just impossible to do anything about that.” That’s a deep-seated cultural underpinning of a lot of our bad policies — this ennui, this failure to feel enough urgency about fixing problems, so we let them fester. And this takes place not only in bottling up what private-sector actors are able to do; in those areas where government needs to play a role, its effectiveness these days is hamstrung by the same vetocracy, the same proceduralism run amok.

And here I’m interested in — you wrote The Geek Way aimed at corporate managers and at how businesses are run, but there’s a lot of lessons from The Geek Way about how government ought to be run. You did a post where you looked to a nearby alternative universe where Jen Pahlka’s book was the gospel for an alternative DOGE. So Jen Pahlka’s book, Recoding America — Jen Pahlka, by the way, a senior fellow at the Niskanen Center — is an excellent from-the-trenches report on government dysfunction.

And again and again, it’s this waterfall approach to management. Everything starts at the top and proceeds down to the next level and then proceeds down to the next level. Information never flows back from the front lines back up to the top. It’s just a one-way trip. And then when the rubber meets the road and there’s disaster, there’s no ability to respond, to adapt, to iterate. Whereas the whole MO of the Silicon Valley startup is this move fast and break things, iterate, learn from experience, don’t come in there with a whole bunch of preconceived notions, build and react. So tell me how that’s applicable to how we can get more effective government?

McAfee: And the one other aspect of this geek approach to getting big things done — which, as you point out, is inherently iterative, fast-paced, and involves rapid feedback loops — is that it also involves a person called a DRI, a directly responsible individual. In other words, you’re on the hook for this. And one of the things that Jen points out in Recoding America — which is a great book but a depressing read — and that also comes out in books like, what’s it called, The Unaccountability Machine? What was the book that came out a year or two ago?

Lindsey: Yes. I know what you’re talking about. Dan Davies. Yeah.

McAfee: Yeah. And the author makes a convincing argument that the whole point is to distribute responsibility and accountability so that nobody has it — so that everybody can claim, “I did my job. I did that one little box on the waterfall diagram successfully. I was not responsible for the outcome. So if you’re unhappy with the outcome, I don’t know who you’re supposed to haul before Congress, but it’s not me. I was doing my part.” And Jen describes just the outcomes of that philosophy. She tells stories that I can’t believe. And she and her colleagues got in the trenches and they tried to make the technologies of government work better.

Lindsey: So in her experience with Code for America, which she co-founded, she worked with state and local governments, and then she had experience in the Obama administration at the federal level. So she’s seen it all, right?

McAfee: She’s seen it all. And the stories she tells… I know Jen and she’s amazing. And if somebody else had written the book, I might not have believed them because they’re so farcical. She tells a story about the website that Veterans Affairs stood up to let veterans sign up for health insurance via the VA, which is a thing that they’re entitled to do. And the story she tells was that the only place, apparently, where they could get the signup process to work was the VA headquarters. It had the right combination of this version of Flash installed and this version of Netscape Navigator — which they stopped supporting in 2003 — and just this unreal combination of technologies. And somebody built a signup process that worked with that bundle and nothing else. So it was just impossible for veterans, people who served our country, to get the benefits that they were entitled to.

And it took a veteran getting the attention of a member of Congress and sitting down with them and walking them through this “the thing won’t load” process before any will appeared to make things better. And then sure enough, everybody says, “Yeah, well, my job was just to make sure the Flash animation worked correctly,” or whatever. So this is tooth-grindingly frustrating. And Jen’s book highlights how hard that is to fix in the public sector.

And we have incredibly dogged, tenacious entrepreneurs in the private sector attacking this problem by creative destruction. I’m not sure what the vector is in the public sector because, believe me, I am not advocating treason. I don’t want to disrupt the US government and overthrow it with something better. That’s not what I’m talking about, but I share your frustration that there’s just not more willpower to do some of the moderately heavy lifting to make things faster, better, and to let the geeks of government do what they can do.

Lindsey: And if we think we have problems in the United States, well, we’re world beaters compared to Europe. You’ve written of late about the crazy disparity in startup incubation environments, the contrast between the United States and Europe. What accounts for that and is there any hope for turning things around?

McAfee: As you did, I read the Draghi report on European competitiveness when it came out a couple of years ago. I was expecting a relatively dry document written in high bureaucratese about the isolated things that Europe was doing really well. The Draghi report is not that. It is a very hard-hitting report from a really senior European technocrat saying, essentially — I’m paraphrasing — “Gang, if we don’t fix this problem, Europe cannot do what Europe wants to do.” And by that he means: we cannot continue to have a high standard of living, take better care of the planet, and offer the social programs that we believe are the right set. He said, “If we cannot increase our growth and our competitiveness, we cannot do all of that. We will have to make some very, very difficult choices.”

So he described increasing European growth rates, competitiveness, and productivity as an existential challenge for the EU. Man, I love that. And there was a sentence in the report that I had to read a couple of times. He said there have been no from-scratch companies founded in Europe in the last 50 years with a market cap of at least €100 billion. I said, “What?” That doesn’t include companies like ASML — a really fantastic company, integral to making all the chips that power the AI world — because ASML is not a from-scratch company; it was spun out of Philips.

So Draghi said that, and I said, “Wow, I wonder how bad the problem is.” So we took the threshold down by an order of magnitude, to $10 billion. And then I looked at the numbers, and it turns out there’s only a handful of European from-scratch companies above that bar — on the order of 20. All the EU from-scratch public companies worth more than $10 billion are, combined, worth about half a trillion dollars. In the United States, the equivalent group is worth $35 trillion.

So this is just to underscore your point: Europe is falling behind — not in every area. There’s this endless argument about who’s got a higher standard of living, Europeans or Americans. I don’t want to talk about that. I find it less interesting than the fact that, as Draghi points out, the growth engine is slowing down and the competitiveness engine is very clearly sputtering along and not keeping up. And on top of the challenges Draghi observed, there is now a land war in Europe — and prosperity and innovation really help you win wars. There’s no shortage of real problems out there. And I believe our European friends and colleagues, the other members of Team Liberal Democratic West, are falling behind — their growth engine is falling behind.

There are a bunch of reasons for that. And as you know, there’s another endless debate between hard and soft factors in the magic of entrepreneurship. Cultures are different — I absolutely believe that. I concentrate on the hard factors, though, because I think the soft factors are a cop-out. I talk to plenty of young Europeans — I teach them — who are super tenacious and ambitious. They look at what’s happening in the Valley and find it exciting in all kinds of ways. But they’re proud Europeans — proud French or German or whatever — and they would rather go back to their home country, build their startups, and try to change the world back home.

And I think what’s going on is just this combination of rigidities that you and I have been talking about over and over. Labor laws in Europe are very, very strict in most EU countries. There are different tax regimes that discourage not just entrepreneurs but the very best capital allocators — the very best investors in the world are really good at understanding how the tax landscape affects their returns. And then on top of all of that, the EU just keeps on layering more and more regulation on the digital ecosystem.

They started with GDPR and the research on that is overwhelming. It slowed down innovation and entrepreneurship, it reinforced the advantages of the American incumbents, and it gave very nebulous feel-good privacy benefits. I don’t love the trade-off between nebulous feel-good benefits and real, tangible, very large costs. I think that’s a lousy trade-off. And then we’ve had the Digital Services Act, the Digital Markets Act, the AI Act, which a number of European companies are now petitioning the EU to roll back or pause or whatever, because it is going to tamp down on the growth of a thriving AI ecosystem in Europe. So there are a bunch of own goals happening in Europe.

Lindsey: And there’s a mentality there, in some quarters at least, that Europe’s path to preeminence is being a regulatory superpower.

McAfee: People say that. Great.

Lindsey: I don’t think that’s a good way.

McAfee: And I’ve heard some of the senior people in the EU — the senior people in charge of digital in the EU — make that argument, I think with a straight face, and say, “Look, the way that you become a world-leading hotbed of innovation and entrepreneurship and success with this digital toolkit is by being a regulatory superpower.” And by which they mean—

Lindsey: Making it more difficult to do here than anywhere else in the world.

McAfee: Yeah. By which they mean that superpower is not taking stuff out. That superpower is adding stuff, and it’s just not going to work. Now, again, there are all kinds of debates to be had about whether the European approach or the American approach to different aspects of life is better. I don’t think you can make a credible argument that the European approach to innovation and entrepreneurship, especially in this digital era, is better. I think it’s demonstrably worse. And I don’t see — to your point — the willpower to change things, because changing things involves getting the heck out of the way. And I do think one of the cultural differences, which matters, is that Europeans really don’t seem to want a bunch of arrogant, wise-ass tech billionaires doing the things that some of our brilliant, wise-ass, mouthy tech billionaires do.

Lindsey: Okay. On that sober note, we’ve gone over an hour. I could keep you here for a lot longer, but I think we’ll wind it up here. Andrew McAfee, it was a great pleasure talking with you and thanks so much for being on The Permanent Problem.

McAfee: It’s a real pleasure. Congrats on the book. I hope it makes a difference and good luck with everything.