Trevor Chow

I think about AGI - if you do too, I’d love to meet you!

Previously, I researched inference time scaling at Hazy Research Lab and traded index volatility at Optiver.

I did half of my undergrad at Stanford and half at Cambridge. When I was younger, I was a King’s Scholar and competed in the UK maths and debating circuits.

Outside work, I split my time between ski slopes, skydiving dropzones and the Eras Tour.

Feeling the AGI

Pre-Training Isn't Dead, It's Just Resting (Apr 2025)

Pre-training scaling laws haven’t bent, but the marginal dollar has moved to RL. It’ll come back.

The Intelligence Consolidation (Apr 2025)

Scaling laws reward the consolidation of frontier AI labs, but investors are diversifying anyway. That’s a mistake.

Inference-Time Routing with Smoothie (Dec 2024)

It is possible to select the best LLM output at inference time without labelled data. Accepted to NeurIPS ‘24. Blog post here. Code here.

Strawberries and the Reasoning Models (Oct 2024)

o1 is a bigger deal than ChatGPT, because it marks the start of the reasoning paradigm.

Three Revolutions in Pre-Training (Oct 2024)

The three key insights from the last four years of pre-training LLMs all point to the importance of data.

Incidental Causes of Polysemanticity (Nov 2023)

Polysemantic neurons, which are an obstacle to interpreting AI, can arise incidentally. Accepted to BGPT & Re-Align @ ICLR ‘24. Blog post here. Code here.

Transformative AI and Real Interest Rates (Jan 2023)

The arrival of AGI means an increase in real interest rates. Accepted to Oxford GPR ‘23 and covered by Vox, The Economist, The FT, MR etc. Blog post here.
