Summary: Engage with Ryan Greenblatt and others on what I’m now calling “Seeing Past AGI”. Ryan keeps pointing out that there are important disagreements which won’t manifest until after AGI is reached, and which may be hard to shed light on until that point (which may be too late to be of much use). Working with Ryan and the other usual suspects, I’d be interested in digging into this: clearly characterizing the dramatic things Ryan expects to see happen post-AGI, and identifying precursors which Ryan would agree ought to be visible pre-AGI. This might turn out to fit neatly into the existing AI Watersheds framework, or might turn into a somewhat separate sub-project.

I’d like to push on this and try to come up with ways to forecast past the AGI event horizon – I have a conviction that we can do better than just throwing up our collective hands. My plan for a next step is to engage with, perhaps, Ryan and Sayash to really dig into their models of what happens post-AGI. [Abi: Let’s chat about the different definitions of AGI that Sayash and Ryan have. I wonder whether getting very specific on this question will point to two very different visions here, which are leading to some of the divergence. Also: how we handle this in our approach.]
(Related: per https://blog.ai-futures.org/p/ai-as-profoundly-abnormal-technology, offer to mediate a conversation between Sayash+Arvind and the AI 2027 crew.)

If/when we pursue this:
- Think about Ryan’s comment about takeoff in the phase 2 doc.
- Come up with a plan for drilling in on takeoff models and early indicators.
- Work with Sayash and Ryan to drill in on post-AGI disagreements, and then look for early indicators.

Links:
https://x.com/sayashk/status/1964016339690909847

[Ryan, under “measuring freedom of action”]: It seems really hard to use indicators to distinguish between my perspective and one that predicts way less freedom of action at the point of ~full automation of AI R&D. Maybe the crux is mostly capabilities, but then it comes back to crux 1.

Ryan: As we close in on the relevant milestones, we won’t resolve the first two bullets above [I think this meant the first two cruxes]. The difficult question is not when X gets automated, it’s what happens afterwards. He can’t imagine any measurement that would disambiguate AINT for him. A lot of these metrics do shed light on whether / when we’ll get AGI, or automation of AI R&D. Delaying timelines by decades is a big deal, but doesn’t have a decisive effect on what he ultimately expects to happen. E.g., it’s like a factor of 3 on various things, not a factor of 10-100.
More random takes from Ryan:
- I can imagine updating towards much longer timelines (though this is limited by some exogenous rate of large breakthroughs).
- I can imagine updating away from a software-only singularity based on detailed empirical evidence about returns to compute vs. labor within AI companies (especially if this evidence is coming in as we’re automating). Though idk how big this update would be.
- I can imagine updating towards moving through the human range slower than I currently expect, via a variety of mechanisms.
- I have a hard time imagining updating toward anything like “full cheap automation of cognitive labor would increase GDP growth by <10%”.
- I have a hard time imagining updating against large impacts of ASI in advance. After it happens, I’d update, but that’s too late.