Practical concerns about giving AI agents shell access or file system permissions. Users discuss the risks of agents accidentally 'nuking' systems, installing unwanted dependencies, or running dangerous commands, and recommend solutions like running agents in containers, VMs, or using specific sandboxing tools like Leash to limit blast radius.
While some users cautiously rely on manual command approvals to prevent AI agents from accidentally "nuking" their systems or misconfiguring local environments, many argue that true safety requires robust sandboxing via tools like Leash, containers, or dedicated VMs. These isolated setups mitigate the "lethal trifecta" of risks (access to private data, exposure to untrusted content, and the ability to communicate externally) while letting developers embrace a high-productivity "YOLO mode." By containing the blast radius this way, users can run multiple agents in parallel without fear, turning a risky experiment into an efficient, multi-tasking workflow.
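The manual-approval approach mentioned above can be sketched as a tiny allowlist gate that checks a proposed command's binary before it is ever executed. This is a hypothetical illustration, not any specific tool's implementation; the `ALLOWED` set and `is_approved` helper are made-up names:

```python
import shlex

# Hypothetical allowlist: only these binaries may be run without review.
ALLOWED = {"ls", "cat", "grep", "git"}

def is_approved(command: str) -> bool:
    """Return True only if the command's first token is an allowlisted binary."""
    try:
        parts = shlex.split(command)
    except ValueError:
        return False  # reject commands that do not even parse (unbalanced quotes etc.)
    return bool(parts) and parts[0] in ALLOWED

print(is_approved("git status"))  # True: git is allowlisted
print(is_approved("rm -rf /"))    # False: rm is not allowlisted
```

A gate like this only inspects the first token, so it is easily bypassed (e.g. via shell metacharacters or `git` subcommands that run arbitrary code), which is precisely why commenters in the thread prefer container or VM isolation over approval prompts alone.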
15 comments tagged with this topic