LLM
2026
- Local-Splitter: Cutting Cloud LLM Costs by Putting a Small Model in Front · LLM Agents · Apr 15
- LLM-Redactor: What Leaves Your Prompt When You Talk to a Cloud LLM · LLM Agents · Apr 15
- Resilient Write: Giving Coding Agents a Write Path That Doesn't Break · LLM Agents · Apr 12
- What Leaves Your Workstation When You Use an LLM Coding CLI · AI Tool Security · Apr 11
- Ollama-Forge for Security Research: Local Models, Refusal Ablation, and Reproducible Pipelines · AI Tool Security · Feb 16