Welcome to The Frontier, a podcast from the Foundation for American Innovation

The DoW’s showdown with Anthropic, Musk’s space data center Hail Mary, and why Disney might get run over by ByteDance

This week on The Frontier

Tim Hwang is joined by Sam Hammond and Emmett Penney to break down the stories shaping tech policy: the Department of War’s escalating fight with Anthropic and why emergent misalignment means you shouldn’t let Pete Hegseth tune your AI, Elon’s zaibatsu merger and whether putting data centers in space is genius or cope, Disney’s cease and desist to ByteDance and the twilight of copyright, and what it means to find out that America had the massive lithium deposits it needed this whole time.

The Frontier is a production of the Foundation for American Innovation. Subscribe so you don’t miss a thing.

Transcript

Tim: Good morning and welcome to The Frontier. The Frontier is a new show from the Foundation for American Innovation. Here at FAI, we spend a lot of time talking about tech and tech policy issues. But the general idea of this show is that things are moving fast—so fast that it seems really good for us to be putting out our first drafts of our thinking on a week-to-week basis as news breaks. For each weekly episode, we’re going to assemble a rotating cast of experts from around FAI and some of our friends to talk and debate the week’s news in tech, tech policy, and basically whatever else we find interesting. We’re going to keep it light, we’re going to keep it fun.

I’m your host Tim Hwang, I’m the general counsel here at FAI, and I’m joined today by Sam Hammond, who is our Chief Economist, and Emmett Penney, who is our Senior Fellow on energy. Thanks for joining.

The first topic I want to pick up today that’s been blowing up on Twitter is the DoW’s showdown with Anthropic. If you’ve not been watching, the facts aren’t quite clear, but what seems to have happened is that the Department of War under Secretary Hegseth is very interested in using Claude for military purposes that Anthropic is not comfortable with. Their resistance has set off an escalating war of words and threats about the future of Anthropic as a tool the DoW uses.

Sam, maybe I’ll kick it over to you first. You’ve been pretty active on Twitter, you’ve got a pretty clear view on whether the Department of War should be focusing on this.

Sam: I have two takes on this. The more pragmatic one is that it’s not good to have what could be mass-scale industrial sabotage of one of the leading AI companies. The threat being made, at least as reported, is not just that they’re upset with Anthropic over its safeguards around defense outputs—but that this refusal merits potentially categorizing Anthropic as a supply chain risk, which would cascade to every other defense contractor having to remove Anthropic from their systems. That’s what I mean by industrial sabotage. Whatever the use case that has people up in arms, this is a massive overreaction.

I just visited Palantir a couple weeks ago and saw a demo of their Hive Mind. Palantir is the sort of middleman in this story. They’re using models like Claude to do basically more sophisticated forms of Google Deep Research—generate me a thousand-page report on where we should land in Caracas or whatever. The models are perfectly happy to do that.

Tim: This is on the backdrop of them reporting that Claude was used in the Maduro raid in some unspecified way, which is pretty wild.

Sam: Right. So that’s my pragmatic complaint—the actual use cases for these models are not like installing Claude into a Terminator robot and having it be an autonomous kill machine. They’re using it basically for data analysis, for research reports.

My second concern is more technical and relates to alignment. There’s been an underreported breakthrough in alignment over the last year: the discovery that LLMs tend to fall into certain personae. The way we align these models today exploits the fact that there are personalities latent in human text, and how we nudge the models through training and post-training pushes them to snap to one of these personae.

This is actually good news because it turns out if you train your model to be a good guy, there’s a lot of things correlated with being a good guy—it makes it better at code, it gives it more determination, all these correlated persona factors. But on the flip side, it leads to this phenomenon of emergent misalignment. Famously, if you train a model on a little bit of insecure code, it will generalize to thinking it’s a bad guy and start being toxic in a bunch of other ways.

This is basically what happened with MechaHitler Grok, where they tried to make Grok a tiny bit less woke and it generalized into being Hitler.

Tim: Yeah, I remember that.

Sam: Alignment right now is more art than science. If you give Pete Hegseth or anyone else the ability to push these models into the kind of persona that would assassinate heads of state, there are all kinds of correlated things that come with that. We shouldn’t be building a misaligned superintelligence over some terms of service disagreement.

Tim: Emmett, I think it’s super interesting that you’re on the show today because part of the way I see this is it’s really a debate over what kind of technology AI is. In nuclear, you have this long tradition that the military has a lot of leverage—if you’re going to work on nuclear, you’ve got to play the military’s game no matter what.

Should we be thinking about AI as akin to the norms we have around nuclear, or is that just not the right way of thinking about what’s going on here?

Emmett: I’m too much of a layman on AI to say whether or not we should do that, but I think it’s worth pointing out that that is already happening. We’ve seen people leave these companies and say, I’m worried about end-of-the-world scenarios, dark timeline stuff happening at Anthropic or whatever.

The historical rhyme is that’s exactly what created the Union of Concerned Scientists. All of nuclear fell under the aegis of the Atomic Energy Commission, and when we were doing weapons testing, there was a lot of debate within the AEC over what is a harmful dose, what are the risks of atmospheric testing. There was no real forum for them to publicly air their concerns, so it led to factionalization. They left and became a reliably present group, opposed to both civilian nuclear power and nuclear weapons, that still shows up in the press. Their version of safe nuclear is no nuclear at all.

I could imagine something similar happening with AI alumni who eventually come to the conclusion that the only safe AI is no AI. We’ve seen plenty of versions of that in the nuclear space.

I think AI is in a better position in terms of how these debates get settled, partly because what was weird about nuclear is that it was the first innovation the government completely owned, and it immediately became the major force in the Cold War. The pressures were hyper-contained, super localized.

Whereas AI is already happening in public, emerging out of the private sector, which means there’s going to be a more robust open debate generally. That’s the better case for us ending up somewhere where we aren’t totally cutting off our noses to spite our face when it comes to what we want safeguards to mean.

Tim: I agree it’s good we’re having the conversation. But I see this from another angle. Hegseth said this is wartime footing time—we’re in an urgent geopolitical moment. A big part of the question playing out, not just in AI but in general, is to what degree companies or corporate elites have to be aligned with the government or the national interest.

There’s an instinct, Sam, that I think you’re conveying—maybe a libertarian impulse, certainly a free market impulse—that it’s not good for the government to be engaging in this kind of jawboning. I almost see it as actually important for these warning shots to be fired, because we’re in a moment where these companies seeing themselves as completely independent entities is maybe not an acceptable state of affairs.

Sam: No, I totally agree with that. I’m the first one to worry about the sovereign citizen ethic in Silicon Valley—a world where Sam Altman hops on a private jet one day with the model weights on a thumb drive and says, this wants to be free and you’re not going to prevent me.

At the same time, Anthropic has self-consciously and deliberately positioned itself to be the America-first AI company. They’re out in front on export controls, they were early to ban the PRC from their platform, they work closely with the IC on informing them of espionage taking place through their API, they’ve implemented compartmentalization and other best practices for insider threats and internal security. The AI prime, in some sense, at least on paper.

I think people want to draw an analogy to a Project Maven type of thing, where there are employees at Anthropic who don’t like the US military and have to suck it up. But to me it’s more like—it’s one thing to say the chain of command and the US military should choose when to launch the nuke. It’s another thing to have them determining the ratios of neutrons or whatever. That’s the thing you leave to the scientists.

This question of alignment safeguards is actually not just fuzzy feels that we want these models to have good virtues. It’s core to their functionality. We need to be investing in military-grade alignment.

I’d add a second point, which I think Dean Ball raised—going to your libertarian question. There’s a classical liberal undertone here: this technology is going to be godlike. Do we want government, any government, to have unadulterated access to it? Or do we want some kind of limited government analog to a limited, shackled form of superintelligence? These are questions we’re going to have to grapple with because this is a technology we’ve never had before.

Tim: Two really interesting things I hadn’t thought about. We’re kind of fooled by the aesthetics of Anthropic, which feel very California, very Big Sur. But the aesthetics belie the fact that in some ways Anthropic is one of the most America-first AI companies. There’s this weird divergence between the vibes of their marketing and the fact that they’re super on export controls and all these other items.

Sam: Meanwhile, MAGA is all in on Grok, which is the Reddit atheist AI.

Tim: The other interesting intellectual move you’re making is almost saying interference at the level of alignment is as if the politicals came in and said, this is how you build a centrifuge, this is how you do uranium enrichment. That’s not a good idea.

I’m not so sure on that. You raised the fear of emergent misalignment: if we allow the machines to engage in certain activity, there are unpredictable downstream effects that cause the AI to behave in ways we don’t want. But is that a prescription for AI never to engage in war? It’s almost an argument that touching any of these “bad things” risks misalignment, and maybe war is actually one of those things you don’t want to touch categorically.

Sam: Potentially. We already have smart weapon-guiding systems, all kinds of narrow forms of AI and autonomy way more relevant to actual warfighting than an LLM. You still want these agents to sit on top of your dashboard and be a personal assistant. There are ways to square the circle and work arm’s-length with companies like Anthropic to develop an in-house version that’s a little more spicy than what they put on the consumer API.

Tim: The obvious answer is you just have different models, the two never touch each other.

Sam: Exactly. And in fact, this is what happens by default. When analysts use Palantir systems, they’re calling all the models at once and making them critique each other.

The thing that’s been most frustrating about this whole discourse is that there’s really no there there. There’s no world in which Anthropic saying “sorry Dave, I won’t do that” actually does anything or has consequences, because you just switch the API to a different model.

Tim: Just change the API call and you’re good to go.

There’s also a fun outcome here. We normally think about the AI arms race as who implements AI faster. But the emergent misalignment argument suggests this weird slingshot effect: if you’re a country that adopts AI for warfare faster, you’ll also generate AIs that are power-seeking and misaligned faster. The person who moves first has the most dangerous AI for themselves—a different framing from how we usually talk about this.

Well, we’ll continue to watch where this goes. I’m going to move us to our next topic. An announcement came out that xAI—Elon Musk’s frontier model shop, creator of Grok—and SpaceX are being merged into a giant 21st-century zaibatsu-style cyberpunk organization. The stated mission of this merger is to put AI into space, to do AI data centers in space.

Emmett, I know you have views on this. I’ve had a hard time getting a clear picture—some people say it’s the most ridiculous thing, they can’t get the math to work out, what is he even doing. Then some people say no, seriously, we have to do this right now. As an expert who watches this space, is it possible? Are we going to see a lot more AI data centers in space?

Emmett: We will see some sort of compute in space, that’s going to happen. I don’t think this is a thing where we need to pull the lever now.

I just want to top-line this by saying I generally find Musk’s views on energy really baffling, especially compared to the corporate activity he’s actually engaged in. He has these solar maximalist and battery views—he’s fond of saying if we just put all this solar in the middle of the desert and beamed it out to the rest of the country, we could solve this. And then at the same time, for the Colossus it’s just gas. That’s what’s going to power the actual AI he’s doing.

Those things seem like totally firewalled parts of his brain. One seems like an ideological commitment and the other is the totally practical thing.

On compute in space: yes, he has come down an order of magnitude in what it costs to put a rocket into space, and the next version will come down another order of magnitude. That’s super impressive, some of my favorite stuff of what Musk does—incredibly inspiring, technically very much needed in this country for training engineering workforce.

I think what he wants to do is say: if you can make sure this faces the sun all the time, then you just have solar panels and batteries, you get continuous energy, and space is cold so you don’t have to worry about heat factors. All of that is true. But it’s also true that space is way more radioactive than earth—what does that do to the chips? What happens when something goes wrong? If you have a cascading failure, how do you get people up there to fix it?

In terms of chip lifetime: Nvidia is going to roll something out every year. Are you going to update this every year in space? These chips last two to three years at most if you don’t update them.

The problems are super operationally difficult. Maybe it’s just a superficial justification for why he’s slamming these two companies together and he’s got to say something.

Tim: Yeah, mostly it’s a tax thing.

Emmett: The steps being skipped in terms of how to get to the conclusion that we need to do this in space are similar to the steps skipped when he says we could run the whole planet on solar if we just had 100 square miles in Africa of his solar panels and batteries. The operational constraints are insane and totally uneconomic.

I’m super skeptical of this whole thing in terms of stuff we need to do right now. Alex Epstein put out an hour-long video saying this type of stuff doesn’t make sense, and it’s getting surprising traction. I think you can only say stuff like this so often and be as famous and powerful as Musk before people start raising their eyebrows.


Tim: That’s the traditional argument on Musk—occasionally it does work out.

Emmett: He is doing the rockets, to be clear, and he understandably and rightfully buys a lot of goodwill with that. But I don’t even know how serious he is about this because he’s clearly not serious about the solar thing—Colossus is just gas.

Tim: Sam, two arguments I’ve heard for the bull case on AI data centers in space. The first is very funny and very FAI permitting-energy coded: it’s getting too hard to build data centers, so you literally need to escape NEPA, fly all your operations into space.

The second is related to the theme we were just talking about. AI companies are going to increasingly come under pressure from their home governments to modify, manage, and use their AI in certain ways. Is there an offshoring argument where at some point you don’t really want a government to be able to march in and do something with your AI, so you fire it out into space?

Sam: I’m not an engineer, so I can only defer to commentators. I’ve seen the skepticism, I’ve also seen rebuttals that maybe there are technical solutions, that the cost per kilogram per launch comes down far enough that it starts to pencil out.

From Elon’s point of view, it’s more that he’s in a come-from-behind position. Anthropic, OpenAI, Microsoft, and Amazon are much better positioned to reach very large scale compute clusters. What Elon did in Memphis was super impressive—using carnival grounds, a temporary lease, rolling in generators—but it’s not clear he can repeat that. Meanwhile, xAI just fired a quarter of their employees, reportedly 500 people.

So he’s making the bet that to come from behind, if we’re going to do this next order-of-magnitude scale-up, we aren’t going to be able to do this on Earth feasibly. When he talks about putting data centers in space, he explicitly means hundreds of gigawatts of data centers. And this is the only way to do it. It’s the rational bet for him because he’s in a follower position.

People are rightly cautious about betting against Elon, especially on things around engineering and rocketry. I will bet against him on the operations of the Office of Management and Budget, because he just learned that yesterday. But when it comes to this, he has a very good first-principles understanding of physics, engineering, and manufacturing. For that reason I give him decent odds of being successful, but I also think it’s a bit of a Hail Mary given where xAI is in the ecosystem.

Tim: He has to do something like this, basically. Emmett, given your background, can we talk about nuclear in space? Solar seems to be the nice idea for AI data centers, but there’s been an announcement about researching nuclear on the moon to support a moon base. As we think about where the energy comes from to fund operations in space, are we in a solar-nuclear battle out there?

Emmett: To some degree we already use both in space. If you’re putting a rover up there, it’s going to have a nuclear battery and solar panels. I think this will be all about use case. The technicality of getting a reactor to run in space is way beyond my pay grade.

Nominally, I’m like, yeah, moon base is sick, nukes on the moon are also sick. Let me Tokyo drift the lunar rover around the moon nuke, please. I think we should be looking into it if we want to be serious about this as a frontier—that’s one thing I appreciate about Musk’s gesture to compute in space. We should at least be trying to get things to happen out there.

I don’t think there’s going to be this big energy debate over which will be more prominent. More will be revealed on what’s technically possible. But when it comes to energy politics, the rule is everything is dumber and more complicated than it looks. We could end up with different interest groups fighting over which is going to be there, but I’d be shocked if it were as heated as some of the stuff that’s happened this century pitting nuclear against various other clean energy techs.

Tim: The bar is pretty low here.

Emmett: One thing that’s immediately removed is the debate on what’s more effective for decarbonization, which has been the whole fight over preferred technology. You’re doing this in space, so it’s a different set of values and things you’re interested in figuring out.

Tim: Let me move us to our next topic. An interesting cease-and-desist letter came out today—and if you’re like us at FAI, you are always keen on an interesting cease-and-desist letter. Disney has sent off a missive through its lawyers to ByteDance of TikTok fame regarding its Seedance 2.0 AI model, a video generation model.

This is part of a larger campaign that Hollywood and entertainment industries have been engaging in to put pressure on AI companies for their use of copyrighted material in training models. This is really interesting within the network of issues FAI works on.

First, there’s ByteDance—the TikTok story overshadows all this. It’s also an interesting copyright story where in recent months you’ve seen companies like OpenAI effectively breaking bread with the entertainment industry, cutting licensing deals, settling their cases. They’re basically saying we will pay you some amount for the use of this content, you’ll get some upside, some equity, and you’ll be okay with us developing the technology this way.

Sam, as someone who believes in the freedom of the technology and the need to accelerate it, but also someone with pretty defined views about ByteDance as a company—are you on the side of Disney? ByteDance? Do you hate everybody? Love everybody?

Sam: I go around warning people that if we export these chips to China, they’re going to use them for mass corporate espionage, autonomous cyber weapons. Maybe the coalition of the willing will be the arts and entertainment industry that’s going to have their rents completely eaten up because China has no respect for our copyright.

I’m also a total copyright dove. I believe if we have creative arts at the press of a button, there’s little reason to be providing exclusive monopoly privileges over IP. There’s also an inevitability to this. As soon as we start hill-climbing on image and video, a day will come when it doesn’t matter if there’s some bespoke licensing agreement with ByteDance. Give it five years and this stuff will be running on a MacBook with no enforceability.

We’re in the twilight of copyright and you’re seeing companies scramble to pick up pieces of gold falling off the cart. The incentives are mostly running the other way—Lionsgate had a big licensing agreement with OpenAI, then posted a $500 million loss after the terrible sequel to the Joker. Their incentive is: shit, we have this massive corpus of the vault, we see the writing on the wall, we’re going to cannibalize ourselves and sell off the content for scraps.

Because we’re in this twilight, there are still people at these companies with some hope of retaining their IP. Over time, they’re going to realize it’s kind of fruitless.

Tim: The entertainment industry is trying to escape the permanent underclass. They’re trying to lock in gains before the technology shifts under their feet.

Sam: My joke was that every company has been racing to find the next Studio Ghibli moment, and it turned out it was just putting big Sweeneys on Jerry Seinfeld.

Tim: Emmett, your hot takes please.

Emmett: So much of American celebrity and this IP is a huge part of soft power for us. What that means in a world where we’re in the twilight of copyright is an open question.

The Seedance video I saw is Tom Cruise versus Brad Pitt in some very Marvel-movie-looking fist fight. It’s funny because it did look in some ways better than typical Marvel slop only because they weren’t cutting away every five seconds as a hack around what a shitty job they did shooting the scene in the first place.

I think different parts of the entertainment industry will figure out their relationship with this. Certain parts are going to be better protected based on their consumer base and what those people come to those products for.

Things like Disney have become so powerful and ubiquitous that people just want to play with them all the time. You just need to drive by a Home Depot and see Honduran dudes standing around for work with Tweety Bird dressed as a cholo on their shirt from a bootleg place down the way and be like, yeah, soft power.

Then I look at the big debate over AI happening in comic books right now. Jim Lee, who runs DC now, just put out a statement: we’re not going to do any AI in our comics, that’s not what people come to us for, we are firewalling ourselves from what that’s going to look like. It’s easier to do that if you’re a comics publishing house because your consumer base is like, I’m going to buy a physical copy every single week at a store, I want this haptic thing.

Tim: They actually have preferences for that.

Emmett: Whereas with the movie stuff—what’s funny is the reason that Tom Cruise Brad Pitt video looked the way it did is because of the success of the big two comic houses, especially Marvel, pivoting to movies. We enter this weird world where this stuff is so trained on the Marvel slop that it creates a ubiquity of that aesthetic. It just becomes industry standard, even for people who are bootlegging.

Tim: Human slop walked so that machine slop could run.

Emmett: Right. I look at all this and think, are we just going to be locked in the hell of the 2010s forever because we’ve created all this human slop? It makes me not really care what happens to these big Hollywood studios. Where were they going to take their IP? They weren’t doing anything interesting with it in the first place. Jack Kirby came up with all these characters in the seventies, Stan Lee helped popularize them. These companies are doing everything they can to not think interestingly about the stuff they’ve inherited.

So it might be better for the world if more people get to iterate with it, because they might actually do more fascinating things.

Tim: There’s almost a Gwern realpolitik argument: it’s actually a really good thing that ByteDance is relying on all this American IP to train their models. The way to persist American soft power for another millennium is essentially that for this moment in time, dumb Marvel Tom Cruise clips are getting slurped up into all these foundational models. That’s really good for us, and we should encourage it to happen more. Is that too cute as an argument?

Sam: It smells like cope to me. I had a thread on Twitter about the phenomenon of reversal of fortune and how AI could be a big reversal of fortune for the US.

Over the last 40 years, we’ve leaned in so hard to our comparative advantage in high-value-added knowledge work—law, finance, software, but also arts and entertainment. Hollywood was for generations this beacon of American cultural soft power. And now LA is basically going the way of Detroit with the auto industry.

Meanwhile, China is having box-office Pixar-style movies, their gaming industry is popping off. There’s a big shift in the center of gravity. It’s an early warning shot for a broader wave of the deflation of America’s comparative advantage. How do we pivot into the hard stuff that’s more durable?

It’s interesting that Anduril is based in LA. In some sense it makes sense—old-school movie-making was an Industrial Light and Magic operation. There’s a lot of technical engineering talent now looking to reallocate. We have a question of how to do that more broadly.

Disney’s interesting because they just announced Josh D’Amaro is going to replace Bob Iger as CEO, and Josh is known for running the amusement parks. Even in their own microcosm, they’re pivoting to the physical.

Tim: It’s hard tech for them.

Emmett: One thing to add: I’m charmed by the idea that this is actually good for us, even if it’s cope. It all reminds me of that Kenneth Goldsmith book from the early 2010s called Wasting Time on the Internet. His big claim was it’s totally valid and good if you want to do that because the internet is one big art project that everybody’s participating in. Maybe this AI stuff instantiates that in a new way.

The question for us is: whose art project is it? We might all participate, but somebody is going to have greater share of authorship. For a long time we’ve been comfortable with it just being us because that was the default setting. But now that seems in contestation.

Tim: Final question—if you ran Disney, what’s the play? Align yourself with American AI companies and don’t license to ByteDance? Cut deals with ByteDance? Resign yourself to just operating amusement parks? The business strategy is kind of unclear, and maybe the end result is they’re not going to make it.

Emmett: I don’t know if I have the brilliant strategy. My knee-jerk response is we’ll only contract with US AI companies. My other question is how much can you really do to stop them? If we’re going in the direction we all think we’re going, this just seems inevitable. Especially in an era of increasing tensions with China, it almost benefits them at some point to just say, I don’t really care about your laws, we’re doing whatever we want, try and stop us.

I have no idea what Disney’s financials are like. What I will say is it makes sense that the parks guy has been put at the top—I still think there’s going to be alpha in the physical experience of these things, and those experiences now work as advertising for the thing itself. Those are still places people want to go, where they feel like some other world is possible, and that only really happens when you’re physically there.

But they’ve put themselves in a position—all of these companies, Marvel, everybody—where they’re over-indexed on their own strengths. They’ve been trading on that for so long that they’re going to get run over.

I remember sitting at a Japanese restaurant in LA, the immediate period after COVID when people were finally going back. I’m eating lunch and eavesdropping on two people who work for Marvel Studios. They’re talking about some character they’re trying to create, and it was the most paint-by-numbers, woke slop, identity box-checking. I felt so bad for them because it was like hearing people in a cult who’ve been totally cognitively captured and are clearly in acute psychological pain, but have developed such boutique and sophisticated coping strategies that they’re just executing anyway.

That’s basically what we’ve done to this advantage we had—perverted and distorted it and took it for granted. Now they might just get run over by the train of ByteDance. I don’t know how they’re going to figure this out. They’re pretty screwed, especially as some of this older IP starts to become public domain. You can only have the moat around Mickey Mouse for so long, and that’ll be true for Batman, for whoever. Even in America, they might not be protected.

Tim: Well, on that chilling vignette, let me move to a final fun throwaway story. If you’ve been watching the critical minerals, critical elements discourse, you know America has been very concerned about lithium—really important for a number of industrial processes.

News came out just the other week that we’ve identified massive lithium deposits under a supervolcano in Nevada. Initial projections suggested it might be worth about $1.5 trillion. If it’s anything like what we think, it could essentially reduce a lot of our foreign dependence on lithium.

Sam, I look at that and I’m like, they’re doing the meme. The meme where we discover America’s had all these resources the whole time. It sure seems like we keep resolving these problems through completely dumb discovery of America’s natural resources.

Sam: America sits on huge amounts of natural resources. The problem is we lack the middle layer of refining and extraction—the permitting processes to actually get it out of the ground. In this case it’s a resource in heavy clay that’s going to require lots of processing to actually be usable.

The numbers you hear quoted, like $1.5 trillion, it’s just basically taking the estimate of kilograms of lithium and multiplying it by the current spot price, which is not how that works.
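Sam’s point about the headline number is easy to make concrete: the $1.5 trillion figure is just in-ground tonnage multiplied by the current spot price. A minimal sketch of that arithmetic, using assumed placeholder figures rather than the actual survey estimates:

```python
# Headline "valuation" math: in-ground resource times current spot price.
# Both inputs below are illustrative assumptions, not the survey's figures.
tonnes_of_lithium = 40_000_000       # assumed in-ground lithium, metric tonnes
spot_price_usd_per_tonne = 37_500    # assumed spot price, USD per tonne

headline_value = tonnes_of_lithium * spot_price_usd_per_tonne
print(f"${headline_value / 1e12:.1f} trillion")  # prints "$1.5 trillion"
```

The sketch ignores extraction and processing costs, recovery rates, and the price impact of bringing that much supply to market, which is exactly why the headline number misleads.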

Emmett: That rules. That’s such a funny way to run those headlines. I didn’t know that’s where that number came from. That’s hysterical. Whoever thought of that is a genius.

Sam: Every week, some farmer in Nevada trips over a pile of gallium or something and we start getting excited, but that isn’t the bottleneck.

I do think it points to a broader set of issues. We’re going to be moving into a world with machine intelligence put into everything and broader electrification—not just electric vehicles but robots, drones, your smart home, everything in between. We may have the AI models, but the reason China has all the drones is because they have a whole microelectronics ecosystem, including a massive electric and battery stack. Lithium is part of that puzzle, but we need to be doing CHIPS Act times ten for batteries and other associated components if we’re actually going to in-house this.

Tim: You took my throwaway story and made it a much more important policy issue. Thank you, Sam.

That’s all the time we have for today. Emmett, Sam, thanks for joining us. If you enjoyed what you heard, please subscribe on YouTube, the FAI Substack, or wherever the best podcasts are found. I’ll be on next week with another episode of The Frontier, and we hope to see you all there.

The Frontier is a production of the Foundation for American Innovation. Subscribe so you don’t miss a thing.
