OpenAI's Codex has 1 million weekly active developers, a fivefold increase since January. The macOS desktop app shipped in early February. Days later, OpenAI released GPT-5.3-Codex, which the company describes as the first model that helped train its own successor. Codex now writes more than 90% of its own code.

This piece goes inside the build with three OpenAI principals: Thibault Sottiaux, head of Codex; researcher Shao-Qian Mah, who trains the models; and Emma Tang, head of data infrastructure, whose team used Codex to build an internal data agent in two months instead of the year-plus it would have taken before. The architecture choices are covered in detail: why Rust, how the agent loop works, and how tiered code review handles a PR volume that is breaking traditional workflows.

The full piece is worth reading for what it reveals about where the cracks are forming: the PR review process is buckling under Codex-generated volume, some engineers are reverting to tab-complete, and OpenAI is internally debating a "30/70 rule" for human versus agent contribution. These are not hypothetical futures; they are happening now, at the company building the tool.

[READ ORIGINAL →]