NVIDIA's Blackwell architecture and Anthropic's Claude Mythos are the twin pressures reshaping the AI stack right now. Blackwell attacks the core bottleneck: inference compute at scale. Claude Mythos, Anthropic's latest model release, is framed here not as an incremental update but as a step-change toward AGI-class reasoning, with the hosts drawing a direct line between raw hardware throughput and what these models can now do.

The most useful part of this episode is the breakdown of GPU market dynamics and the rise of neoclouds, the specialized infrastructure layer sitting between hyperscalers and end users. If you want to understand why inference cost curves matter more than training benchmarks right now, the segment starting at 19:06 is the one to watch. The hosts also spend real time at 9:41 defining AGI in operational terms, not philosophical ones, which is rarer than it should be.

The ethical implications of Mythos get only surface-level treatment here, but the hardware and market-structure analysis is grounded and specific. The episode is worth watching in full for the neocloud framing alone, a structural shift most coverage ignores entirely. The hosts close on an open question: who will control inference infrastructure? Whoever does will control AI deployment, and that fight is just starting.

[WATCH ON YOUTUBE →]