Security of agentic orchestration
How should memory, tool use, permissions, and long-horizon autonomy be bounded so that agentic systems remain useful without becoming operationally brittle or unsafe?
This page collects brief updates, evolving questions, technical observations, and commentary on directions that are still taking shape.
What changes when sensing, compute, communication, timing, and actuation all become part of the attack surface rather than only the model weights?
Which combinations of robust learning, runtime checks, architectural hardening, and physical countermeasures remain practical under real area, power, latency, and deployment constraints?
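One way to make the first question concrete is a deny-by-default capability envelope: the agent holds an explicit grant for each tool, and every invocation is checked against that grant and a hard call budget before it runs. The sketch below is a minimal illustration under assumed names (`Capability`, `ToolBudget`, `guard` are hypothetical, not an existing library API); a real system would also bound wall-clock time, memory writes, and argument provenance.

```python
from dataclasses import dataclass

@dataclass
class ToolBudget:
    """Bounds long-horizon autonomy with a hard cap on tool invocations."""
    max_calls: int
    calls_used: int = 0

@dataclass
class Capability:
    """A single explicit grant: one tool, a closed set of permitted arguments."""
    tool_name: str
    allowed_args: set

class PolicyViolation(Exception):
    """Raised when a proposed tool call falls outside the granted envelope."""

def guard(capability: Capability, budget: ToolBudget,
          tool_name: str, arg: str) -> None:
    """Deny-by-default check run before every tool call; mutates the budget."""
    if budget.calls_used >= budget.max_calls:
        raise PolicyViolation("tool-call budget exhausted")
    if tool_name != capability.tool_name:
        raise PolicyViolation(f"tool {tool_name!r} not granted")
    if arg not in capability.allowed_args:
        raise PolicyViolation(f"argument {arg!r} outside permitted scope")
    budget.calls_used += 1
```

For example, an agent granted `Capability("read_file", {"/data/report.txt"})` can read that one file until its budget runs out, but a call naming any other tool or path raises `PolicyViolation` before the tool executes. The design point is that the boundary lives outside the model: no prompt-level persuasion can widen the grant.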
Entries here are short: commentary on a recent paper, an industry trend, a benchmark gap, or an open problem.