Something I really wonder about is whether there is some extent to which advanced performance in any intellectual field converges on a common set of skills. To do serious research mathematics, do you need some sort of high-level judgement about which avenues to pursue, when to back off and look for a new approach, when to invest in building a new methodology, and so on? And does that essentially boil down to the same factor you need to be world-class at coding or biology or corporate strategy or being Ezra Klein or whatever? I wouldn't exactly argue for this, but I don't rule it out either.

From https://thezvi.substack.com/i/148809120/intelligent-design:

What is this thing we call ‘intelligence’? The more I see, the more I am convinced that Intelligence Is Definitely a Thing, for all practical purposes. My view is essentially: Assume any given entity - a human, an AI, a cat, whatever - has a certain amount of ‘raw G’ general intelligence. Then you have things like data, knowledge, time, skill, experience, tools and algorithmic support and all that. That is often necessary or highly helpful as well.

Any given intelligence, no matter how smart, can obviously fail seemingly basic tasks in a fashion that looks quite stupid - all it takes is not having particular skills or tools. That doesn’t mean much, if there is no opportunity for that intelligence to use its intelligence to fix the issue. However, if you try to do a task that, given the tools at hand, requires more G than is available to you, then you fail. Period. And if you have sufficiently high G, then that opens up lots of new possibilities, often ‘as if by magic.’

I say, mostly stop talking about ‘different kinds of intelligence,’ and think of it mostly as a single number. …

https://x.com/IntuitMachine/status/1843238594669932686?t=VsIgmshZYkyf9O-pYedugg&s=03

From https://thezvi.substack.com/p/ai-85-ai-wins-the-nobel-prize – note that if important problems are “AGI Complete” then this approach won’t work:

Is ‘build narrow superhuman AIs’ a viable path?

Roon: it’s obvious now that superhuman machine performance in various domains is clearly possible without superhuman “strategic awareness”. it’s moral, just and good to build these out and create many works of true genius.

Is it physically and theoretically possible to do this, in a way that would preserve human control, choice and agency, and would promote human flourishing? Absolutely.

Is it a natural path? I think no. It is not ‘what the market wants,’ or what the technology wants. The broader strategic awareness, the related elements and the construction thereof, are something one would have to intentionally avoid, despite the humongous economic and social and political and military reasons not to avoid them. Avoiding this would require many active sacrifices, as even what we think of as narrow domains usually require forms of broader strategic behavior in order to perform well, if only to model the humans with which one will be interacting. Anything involving humans making choices is going to get strategic quickly.