Security of AI Systems: From Models and Accelerators to Agents and Physical Intelligence
I am Brojogopal Sapui. This website is a technical research portal that combines survey-style explanations, ongoing notes, and cross-layer perspectives on trustworthy AI, spanning software, hardware, cloud, and edge deployments as well as agentic, generative, predictive, and physical AI.
What this website is meant to do
This is not only a personal homepage. It is a structured knowledge hub for AI security where each major theme is broken into readable sections, subsections, open problems, and practical countermeasure discussions.
Software Security
Adversarial manipulation, poisoning, privacy leakage, prompt-layer threats, model extraction, and secure training.
Hardware Security
Side-channel analysis, fault injection, accelerator leakage, implementation gaps, and hardware-aware defenses.
Generative AI
Prompt injection, jailbreaks, misuse, alignment, watermarking, and system-level containment.
Physical AI
Security of embodied systems where sensing, computation, timing, and control are tightly coupled.
Cross-layer security with a hardware-aware lens
My work sits at the boundary of AI security, hardware security, edge intelligence, and trustworthy deployment. A central theme is that real security cannot stop at algorithmic robustness alone. It must also account for implementations, accelerators, physical access, system integration, and real deployment constraints.
- Security of AI models across software and hardware layers
- Side-channel and fault-injection perspectives for edge AI
- Trust, resilience, and countermeasures for physical AI
- Survey-style synthesis of open challenges and future directions
A living technical map
The site is organized so that each theme can grow over time: structured overview pages, short research-watch posts, and diagrams can be added incrementally without redesigning the whole site every few months.
Security categories covered on this website
These pages are written to be accessible to students and collaborators while still being technical enough for researchers and engineers.
Predictive AI Security
Classifiers, detectors, ranking systems, forecasting models, and their attack/defense landscape.
Agentic AI Security
Autonomous planning loops, tools, memory, permissions, orchestration, and operational containment.
Countermeasures
Cross-layer defenses spanning robust learning, runtime monitoring, hardware hardening, and co-design.
Cloud AI Security
Shared infrastructure, API exposure, privacy, access control, and secure serving.
Edge AI Security
Physical exposure, constrained devices, accelerator implementations, and deployment realism.
Full Research Map
Browse all section pages and use this site as a technical survey hub.
Highlighted ongoing directions
These shorter note-style pages comment on new papers, industry trends, benchmarks, and emerging challenges.
Why agentic AI changes the security problem
Security is no longer only about a single model output. It now involves autonomy boundaries, memory, tools, and action sequences.
Edge AI makes implementation details matter
On-device deployment shifts the problem from purely software trust to physical, architectural, and side-channel-aware trust.
Physical AI needs security and safety together
When AI senses and acts in the world, latency, reliability, and security become tightly coupled design goals.
Building out this research portal
Official links, a complete publication list, and short research updates will be added incrementally, one item at a time.