# Summary: More Thoughts About AGI

This Notion page compiles extensive notes and reflections on artificial general intelligence (AGI), exploring fundamental questions about AI capabilities, limitations, and future trajectories.

## Intelligence as Compression and Heuristics

A central theme is that intelligence exists only in practice, not theory. Intelligence functions as compression, but no universal compression algorithm exists—any compression must operate within specific domains and distributions. This means AI development and assessment will inherently be "heuristic, experimental, messy, and fumbling." No single technique—whether self-evaluation, multi-agent systems, or memory recording—will serve as a silver bullet. The challenge lies in knowing when and how extensively to apply different tools.
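
A minimal sketch of the distribution point, not taken from the page (the sample strings below are invented): a general-purpose compressor gains only where the input carries structure it can exploit, and gains nothing on an unstructured stream.

```python
import os
import zlib

# Repetitive "in-domain" text: plenty of structure for a general-purpose compressor to exploit.
structured = b"the agent observes, predicts, and compresses its experience. " * 200
# Uniform random bytes of the same length: no exploitable regularity at all.
unstructured = os.urandom(len(structured))

for name, data in [("structured", structured), ("random", unstructured)]:
    compressed = zlib.compress(data, 9)
    ratio = len(data) / len(compressed)
    print(f"{name:>10}: {len(data):>6} -> {len(compressed):>6} bytes (ratio {ratio:.1f}x)")

# The structured stream shrinks dramatically; the random stream does not compress at all.
# The gains come from matching the data's distribution, not from any universal algorithm.
```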

## The "Drop-in Remote Worker" Concept

The document critically examines the idea that AI will function as a "drop-in remote worker" replacing human employees. This framing may be a "faster horses" mistake: projecting current limitations onto future capabilities instead of recognizing that transformative technologies typically reshape work entirely rather than simply accelerating existing patterns. Real capability emerges from relationships, feedback loops, and collective intelligence rather than from isolated individual agents.

## Scaling Laws and Benchmarks

The author questions whether AI scaling laws are as smooth and predictable as commonly believed, noting that learning curves across technologies tend to "wiggle all over the place." Moore's Law succeeded partly as a self-fulfilling prophecy rather than natural law. Current benchmarks may poorly reflect real-world utility, with companies potentially "benchmaxxing" rather than building genuinely useful capabilities.
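
As a hedged illustration of how such curves get summarized (synthetic numbers, not data from the page): fitting a power law in log-log space yields a smooth-looking exponent even when the individual points wobble by several percent around the fit.

```python
import numpy as np

# Synthetic points following an underlying power law L = a * C^(-b),
# perturbed by a few percent to stand in for the "wiggle" in real learning curves.
rng = np.random.default_rng(0)
compute = np.logspace(18, 24, 13)   # training compute in FLOPs (illustrative scale only)
a, b = 300.0, 0.10
loss = a * compute ** (-b) * (1 + 0.05 * rng.standard_normal(compute.size))

# The usual summary: a straight-line fit of log L against log C.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
fit = np.exp(intercept) * compute ** slope

deviation_pct = 100 * np.abs(loss - fit) / fit
print(f"fitted exponent: {-slope:.3f}")
print(f"point-by-point deviation from the smooth fit: up to {deviation_pct.max():.1f}%")
```

The smooth fitted line is what gets reported; the residual scatter is the part the author suspects matters more than the trend suggests.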

## Missing Capabilities in Current AI

Several critical gaps in current AI systems are identified:
- **Continuous learning**: LLMs cannot learn on the job or form new memories after training
- **Long-term coherence**: Models struggle with tasks requiring sustained attention over extended periods
- **Genuine exploration**: Unlike humans, who develop new mental frameworks spontaneously, AI lacks mechanisms for background processing and insight generation
- **Context management**: AI systems struggle with files, lose track of information in conversations, and cannot effectively navigate large codebases

## The Jaggedness of AI Capabilities

AI capabilities are described as highly "jagged"—impressive in some narrow domains while failing at seemingly simple tasks. This jaggedness runs both ways: AI's superhuman strengths in certain areas may compensate for weaknesses, but fundamental limitations in adaptability, memory, and coherent long-horizon planning remain significant barriers.

## Economic and Labor Implications

The document explores when AI might cause labor market disruption, suggesting the tipping point occurs when AI becomes more adaptable than humans at filling new economic niches. Currently, humans still reach emerging opportunities first, but this advantage could erode. The author notes that "diffusion lag" in AI adoption often reflects genuine product-market fit problems rather than mere resistance to change.

## Biological Intelligence and Search Efficiency

Referenced research proposes measuring biological intelligence as "search efficiency in problem spaces"—quantifying how many orders of magnitude more efficient an agent is compared to random search. This framework suggests even simple organisms demonstrate remarkable intelligence when measured against appropriate baselines, supporting arguments for "cognition all the way down" in biological systems.
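
The summary does not reproduce the exact formalism; one plausible reading, sketched below on an invented toy problem, treats efficiency as the log-ratio of the expected cost of random search to the agent's cost (the "higher/lower" agent is purely illustrative).

```python
import math
import random

def informed_search_steps(space_size, target):
    """An agent that can ask 'higher or lower?' -- a stand-in for any exploitable structure."""
    lo, hi, steps = 0, space_size - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if mid == target:
            return steps
        if mid < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

space_size = 1_000_000
target = random.Random(0).randrange(space_size)

# Blind uniform guessing with replacement needs space_size tries on average
# (geometric distribution with p = 1/space_size).
random_cost = space_size
agent_cost = informed_search_steps(space_size, target)

# One reading of "search efficiency": orders of magnitude saved over random search.
efficiency = math.log10(random_cost / agent_cost)
print(f"random search: ~{random_cost:,} evaluations on average")
print(f"informed search: {agent_cost} evaluations")
print(f"efficiency: ~{efficiency:.1f} orders of magnitude better than random")
```

On a metric like this even a trivial heuristic buys several orders of magnitude over blind search, which is the sense in which simple organisms can score as remarkably intelligent against an appropriate baseline.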

## Path Forward

The document suggests AI progress may require fundamentally new approaches, including systems capable of continuous learning, "daydreaming" background processes for spontaneous insight generation, and better integration of tacit knowledge. Current approaches optimizing for benchmarks and immediate utility may be insufficient for achieving genuine artificial general intelligence. The emphasis on reasoning models and extended thinking represents progress, but significant architectural innovations may still be needed.
