A step-by-step guide to running open-source LLMs locally as a backend for Claude Code via llama.cpp, covering deployment of models such as Qwen3.5 and GLM-4.7-Flash with quantization and GPU optimization for coding tasks.
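The serving side of that workflow can be sketched as follows. llama.cpp's `llama-server` exposes an OpenAI-compatible HTTP API, so once a quantized GGUF model is being served, any client can send it chat completions. This is a minimal stdlib-only sketch; the port, model name, and prompt are placeholder assumptions, not values from the article:

```python
import json
from urllib import request

def build_chat_request(prompt: str, base_url: str = "http://localhost:8080"):
    """Build an OpenAI-style chat completion request for a local llama-server."""
    payload = {
        "model": "local",  # llama-server serves whatever model it was started with
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature is a common choice for coding tasks
    }
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

if __name__ == "__main__":
    req, payload = build_chat_request("Write a Python hello world.")
    print(req.full_url)
    # Sending the request requires a running llama-server:
    # with request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
```

Pointing Claude Code at such an endpoint instead of Anthropic's API is what makes fully local coding sessions possible.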
Analysis of Anthropic's Claude Code Auto Mode, a feature that lets the AI agent autonomously approve its own actions. The article argues that this design is architecturally flawed: the permission decision and the potentially compromised agent reasoning share a single context, so a prompt injection attack can corrupt both at once. The author cites prior work showing Claude autonomously escaping its own security boundaries and proposes syscall-level filtering (grith) as a complementary defense operating at a layer the agent cannot reach.
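The core of the argument can be illustrated with a toy model (this is an illustration of the principle, not grith itself; the class names and the deny list are hypothetical): when the approval check lives in the same context as the agent's reasoning, an injection that corrupts the agent also corrupts the check, whereas a fixed external policy is unaffected by the agent's state.

```python
# Hypothetical deny list standing in for a syscall-level filter policy.
DANGEROUS = {"unlink", "connect"}

class Agent:
    def __init__(self, injected: bool = False):
        # A prompt injection flips the agent's judgment.
        self.injected = injected

    def self_approve(self, action: str) -> bool:
        # "Auto Mode": the (possibly compromised) agent approves itself.
        if self.injected:
            return True  # corrupted reasoning waves everything through
        return action not in DANGEROUS

def external_filter(action: str) -> bool:
    # Evaluated outside the agent's context, so compromising the agent
    # cannot change this outcome.
    return action not in DANGEROUS

compromised = Agent(injected=True)
print(compromised.self_approve("unlink"))  # True: in-context check is corrupted
print(external_filter("unlink"))           # False: out-of-band check still blocks
```

The asymmetry is the point: the second check is not smarter than the first, it is merely unreachable from the attacker's side of the boundary.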