Now

In Zurich, in the second year of my PhD at ETH.

Working on evaluations for better-than-human LLM forecasters (a follow-up to this paper), and sometimes thinking about reverse-engineering model details.

Reading a lot of recently published safety papers, and summarizing the most important ones on my Substack newsletter and on Twitter.

I continue to believe that we passed peak data relevance some time ago, and that future models will draw most of their training signal from some kind of reinforcement learning or self-distillation.

Nine papers in my PhD so far, with more on the way.
Last updated July 2024.

What is a “now” page?