Structured map of AI security themes
This page is the central entry point to the survey-style sections of the site, grouping AI security topics by layer and by model type.
Software security
Adversarial attacks, poisoning, privacy leakage, model extraction, prompt-layer threats, and robust training.
Hardware security
Side-channel analysis, fault injection, accelerator leakage, system exposure, and implementation-aware trust.
Cloud AI security
Shared infrastructure, data exposure, API abuse, and secure model serving.
Edge AI security
Physical access, constrained devices, embedded accelerators, and device-side defenses.
Predictive AI security
Classifiers, detectors, ranking systems, and forecasting models under attack.
Generative AI security
LLMs, multimodal generation, prompt injection, jailbreaks, and deployment misuse.
Agentic AI security
Planning loops, tools, memory, permissions, and autonomy boundaries.
Physical AI security
Embodied intelligence, robotics, sensing, latency, control, and safety-aware trust.
Countermeasures
Cross-layer defenses spanning algorithms, runtimes, architectures, and hardware.
Readable structure for long technical topics
Each section follows the same logic: overview, threat model, representative attacks, countermeasures, and open challenges. This shared structure makes the site easier to grow over time and easier for readers to navigate without getting lost in dense text.