8/6/25

This is a project to advance the conversation regarding when, how suddenly, and how strongly AI will have a transformative impact on the world. In other words: when big things will start to happen, how rapid the onset / escalation will be, and how far the escalation will go. This is related to the concepts of "timeline" and "takeoff speed", but focuses on impact (things actually happening in the world) rather than capabilities.

For this conversation:
- Get his high-level take – review Questions for New Participants.
- Ask what aspects of the question I might be missing, and how he'd like to participate going forward.
- I want his views on timelines / impact – in his case, especially impact outside of the US. How will adoption play out differently in the global south?

His views:
- The thesis he's found most persuasive is AI as Normal Technology.
- He was at a conference called Dialogue – industry leaders engaging with unconventional questions. He should nominate Taren and me.
- He went in as a fast-timelines person. He attended a bear-case session with lots of tech people, including Jonathan Ross (founder of Groq). They spent 90 minutes discussing how timelines could be longer than expected.
- The core argument he came away with: diffusion really takes a long time. E.g. there are Fortune 500 companies still using fax machines. Our economy and decision-making processes are fragmented and messy.
- Trying to answer my crux questions at a societal level is unhelpfully general. More tractable and beneficial: pick some sectors, industries, or types of problems, and come up with ways of measuring change in those, as leading indicators for what might happen in other sectors and industries.
- E.g. software engineering may be one of the more tractable problem spaces for AI; he'd be very interested in tracking diffusion there: hiring practices, how much autonomy is being granted, what level of productivity it's unlocking.
- If CS is a fast example, find a few slow examples, and get super specific and granular about measuring diffusion. Get an idea of the uneven distribution.

In the global south (e.g. sub-Saharan Africa), there are four problems that need to be solved for AI to have a meaningful impact. In order:

1. Hardware
- Mobile phone adoption is rapidly accelerating, but in poor countries it's still mostly dumbphones.
- Connectivity challenges, cost of airtime. Expensive; regulatory barriers.
- It will be 5-10 years before we start to see meaningful levels of hardware adoption that make the basics of AI accessible at the consumer or enterprise level.

2. Software
- Language performance: Google is convinced that within a couple of years, language will be a solved problem – it just needs some fine-tuning. Other leading labs aren't as confident, perhaps because they don't have the training data; for them it might be 5 years.
- Content (didn't get a chance to ask what he meant by this – maybe region-specific knowledge?)

3. Accessibility (will partially be addressed by hardware)
- Multi-modality is important for non-literate populations.
- Connectivity issues interfere with always-on voice mode.
- UX stuff, e.g. support voice memos. This won't get much attention for 10 years; everyone is focused on the land grab in the West.

4. Economic opportunity
- Until basic needs are met, AI isn't very relevant. 800M people aren't in a position to meaningfully benefit, because they don't have the infrastructure and economic opportunities. E.g. medical advice doesn't help much if you don't have access to medicine.

He's doing research on all of these at the moment. At the end of the day, this will be just another trickle-down technology – which doesn't change the impact on the world at large.

What is he most uncertain about? Cruxes that feel important to him:
- He has a prior that labor market disruption is not going to be meaningfully different from other times in history… but he's highly uncertain.
- Evidence that might push him to believe that levels and pace of labor market disruption would be meaningfully different: [?]. The burden of proof is on this time being meaningfully different from the past. If AI gets good at something, we'll focus on something else. He hasn't seen enough evidence to shift his priors.
- Leading indicators currently reinforce his prior: unemployment is low. It's harder to get a job as a junior developer, but not impossible, and that mostly seems to be due to other factors.
- Even if capabilities advance, diffusion challenges will leave room for human workers. Our institutions aren't going to turn everything over to AI.
- How offense vs. defense nets out for various x-risks. Increases in defensive capabilities are often under-appreciated, but he's not confident, and the use cases are all different.
- Frontiers of autonomy: will agents be able to avoid going down unproductive, token-burning rabbit holes? Feels like a solvable problem, but he's not sure on what time frame.
- He thinks our general risk aversion will slow a lot of things down, for better and worse. E.g. we'll forgo applications in mental health or AVs. Things that can have a large impact on society and don't face meaningful regulatory or corporate bottlenecks / need for human process changes: the intersection is fairly small.