Curated books, papers, tools, and study paths
This page is designed as a practical resource hub rather than a publication list. It collects materials across AI security, edge AI, threat modeling, adversarial machine learning, accelerators, hardware trust, embedded systems, and side-channel / fault-injection security, so the site stays useful both as a research portal and as a working study map.
First month
Threat modeling for AI systems, basic adversarial ML, high-level edge AI pipeline, and GPU / accelerator fundamentals.
3–6 months
Secure deployment, model extraction / poisoning / backdoors, TinyML or Edge AI workflows, secure boot, and architecture-aware security.
6–18 months
Hardware-security depth, side-channel and fault injection, accelerator microarchitecture, device-to-cloud assurance, and cross-layer co-design.
- Weeks 1–2: threat modeling for AI systems, basic adversarial ML, edge AI pipeline, GPU / accelerator basics.
- Weeks 3–6: Hugging Face / Transformers, Edge Impulse or TinyML flow, ChipWhisperer documentation, and 3–5 landmark accelerator papers.
- Months 2–3: model extraction, poisoning, backdoors, secure deployment, secure boot, roots of trust, and memory / dataflow in accelerators.
- Months 3–6: advanced hardware-security texts and small lab reproductions for side-channel, glitching, and robustness evaluation.
| Book | Why it is useful | Best for |
|---|---|---|
| Programming Massively Parallel Processors | Practical GPU architecture, CUDA execution model, and memory behavior. | GPU / AI accelerator |
| Computer Architecture: A Quantitative Approach | Broad architecture foundation covering memory hierarchy, parallelism, and system trade-offs. | Computer architecture |
| Domain-Specific Architectures | Why TPUs, NPUs, and AI accelerators exist and how their design philosophy differs from general-purpose systems. | AI hardware systems |
| Efficient Processing of Deep Neural Networks | The most focused book for DNN accelerator dataflow, energy, sparsity, and mapping. | AI accelerator design |
| Deep Learning | Foundational neural-network concepts for both predictive and generative models. | ML foundations |
| Pattern Recognition and Machine Learning | Strong probabilistic ML grounding, especially useful for predictive AI. | Predictive / classical ML |
| TinyML | Excellent bridge from trained model to microcontroller-class deployment. | Edge AI |
| Computer Systems: A Programmer’s Perspective | Builds the software-hardware mental model needed for systems security and performance. | HW / SW boundary |
| Digital Design and Computer Architecture | Logic-to-processor intuition with clear HDL and architecture context. | Digital design |
| Introduction to Hardware Security and Trust | Broad foundation for ASIC, FPGA, embedded, and hardware trust thinking. | Hardware security |
| Power Analysis Attacks | Classic text for side-channel reasoning and measurement intuition. | Side-channel analysis |
| The Hardware Hacking Handbook | Practical attack mindset for embedded systems, glitching, and board-level exploitation. | Embedded attacks |
- CUDA C++ Programming Guide — official starting point for execution and memory model.
- CUDA Best Practices Guide — optimization rules that map directly to hardware behavior.
- NVIDIA H100 / Hopper resources — useful for current AI-hardware direction.
- NVIDIA Blackwell overview — high-level view of next-generation AI compute framing.
- CUTLASS — understand GEMM tiling and software abstractions close to GPU hardware.
- llm.c — minimal code path to understand LLM internals and training flow.
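The core idea behind CUTLASS, tiling a GEMM so each output tile is accumulated from staged tiles of the inputs, can be sketched in plain NumPy. This is a conceptual sketch only: the tile size and shapes are arbitrary illustrative choices, not CUTLASS defaults, and a real kernel maps the loops onto thread blocks and shared memory.

```python
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Blocked GEMM: accumulate C tile by tile -- the loop structure
    a GPU GEMM library maps onto thread blocks and shared memory."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            for k in range(0, K, tile):
                # Each (i, j) tile of C accumulates partial products
                # from a tile of A and a tile of B; on a GPU these
                # tiles would be staged in fast on-chip memory.
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
                )
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((96, 64))
B = rng.standard_normal((64, 80))
assert np.allclose(tiled_matmul(A, B), A @ B)
```

Reading the CUTLASS documentation after internalizing this loop nest makes its tile/warp/thread hierarchy much easier to follow.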
- Domain-Specific Architectures — strong conceptual framing for accelerator thinking.
- Attention Is All You Need — the transformer paper.
- The Illustrated Transformer — one of the best intuitive explanations.
- Hugging Face LLM Course — practical modern LLM onboarding.
- Transformers documentation — API plus conceptual grounding.
- Karpathy: Let’s build GPT from scratch — concept-to-code bridge.
- nanoGPT — compact GPT training codebase.
- vLLM — memory-efficient serving engine relevant for platform and security thinking.
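The scaled dot-product attention at the heart of "Attention Is All You Need" is small enough to sketch directly. This is a minimal NumPy version with no masking, multi-head splitting, or learned projections:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_q, seq_k) similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 query positions, d_k = 8
K = rng.standard_normal((6, 8))   # 6 key/value positions
V = rng.standard_normal((6, 8))
out, w = attention(Q, K, V)
assert out.shape == (4, 8)
assert np.allclose(w.sum(axis=1), 1.0)
```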
- The Elements of Statistical Learning — essential for classical ML intuition.
- XGBoost paper — still highly relevant for predictive systems.
- ResNet paper — foundational deep predictive architecture.
- Edge Impulse — one of the easiest structured entry points for embedded / edge ML.
- TensorFlow Model Optimization Toolkit — quantization and pruning for deployable models.
- Model Optimization GitHub — code examples for deployment-friendly compression.
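The arithmetic these toolkits automate is simple. Here is a minimal sketch of symmetric per-tensor int8 post-training quantization; real toolkits add per-channel scales, calibration data, and quantization-aware training on top of this.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w is approximated as
    scale * q with q in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
# Worst-case rounding error is half a quantization step.
assert err <= scale / 2 + 1e-6
```

Seeing the scale and rounding step explicitly also clarifies where quantization-related integrity risks sit in an edge deployment pipeline.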
- Explaining and Harnessing Adversarial Examples — classic FGSM entry point.
- Adversarial Examples in Modern Machine Learning: A Review — broad survey.
- OWASP Machine Learning Security Top 10 — practical system attack surface map.
- MITRE ATLAS — tactics and techniques against AI-enabled systems.
- Microsoft Threat Modeling Tool — useful structure for systematic security reasoning.
- CleverHans — adversarial examples library.
- RobustBench — standardized robustness benchmarks and leaderboards.
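FGSM itself fits in a few lines. This sketch uses a toy logistic model so the input gradient is closed-form; the weights and epsilon are illustrative values, not from any real system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """FGSM: step x by eps in the direction of the sign of the input
    gradient of the loss. Toy logistic model, label y in {-1, +1}."""
    margin = y * (x @ w)
    grad_x = -y * sigmoid(-margin) * w   # d(loss) / d(x), closed-form
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.standard_normal(20)
x = w / np.linalg.norm(w)        # a confidently classified input
y = 1.0

x_adv = fgsm(x, y, w, eps=0.2)
p_clean = sigmoid(y * (x @ w))
p_adv = sigmoid(y * (x_adv @ w))
assert p_adv < p_clean           # confidence in the true label drops
```

PGD, used throughout RobustBench-style evaluations, is essentially this step iterated with a projection back into the epsilon ball.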
- ChipWhisperer documentation — primary starting point for power analysis and glitching.
- ChipWhisperer GitHub — examples, notebooks, and lab support.
- NewAE Technology channel — practical demos and setup videos.
- Power Analysis Attacks — classic side-channel text.
- Introduction to Hardware Security and Trust — broad hardware-security foundation.
- The Hardware Hacking Handbook — practical embedded attack mindset.
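Before touching hardware, the correlation step behind power analysis can be rehearsed on simulated traces. This toy sketch assumes Hamming-weight leakage of `plaintext XOR key` with Gaussian noise; there is no S-box and no real measurement, which is exactly what ChipWhisperer labs replace with scope captures.

```python
import numpy as np

# Hamming weight lookup for one byte.
HW = np.array([bin(v).count("1") for v in range(256)])

rng = np.random.default_rng(0)
key = 0x3C
n = 2000
pt = rng.integers(0, 256, n)                          # known plaintexts
# Simulated power samples: leakage model plus measurement noise.
traces = HW[pt ^ key] + 0.5 * rng.standard_normal(n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Correlation analysis: predict the leakage under every key guess and
# correlate it against the traces; the true key scores highest.
scores = [corr(HW[pt ^ g], traces) for g in range(256)]
recovered = int(np.argmax(scores))
assert recovered == key
```

The same ranking loop, run against real captures with an AES S-box in the leakage model, is the core of a first ChipWhisperer CPA lab.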
- Nand2Tetris — one of the cleanest first-principles paths from logic to systems.
- From Nand to Tetris course — structured version for guided learning.
- The RISC-V Reader — compact ISA-focused introduction.
- Digital Design and Computer Architecture — great for datapath / processor intuition.
- ChipVerify UVM tutorials — practical verification reference.
- Practical Binary Analysis — useful where AI software stacks meet firmware and reverse engineering.
- Andrej Karpathy — deep learning and LLM intuition.
- Neural Networks: Zero to Hero — strong concept-first series.
- Hugging Face course resources — practical onboarding for modern LLM tooling.
- NVIDIA GTC — current architecture and systems talks.
- NewAE — side-channel and glitching demos.
- Mini-project 1: implement FGSM / PGD on a small vision model and clearly document assumptions, metrics, and mitigations.
- Mini-project 2: build a threat model for an edge-AI camera including sensor path, preprocessing, model, runtime, firmware, update path, secure boot, and key storage.
- Mini-project 3: read TPU + Eyeriss + one NVIDIA whitepaper and draw your own accelerator block diagram with memory hierarchy and attack surfaces.
- Mini-project 4: deploy a TinyML or Edge Impulse model and explicitly note where model integrity and extraction risks appear.
- Mini-project 5: reproduce a simple ChipWhisperer power analysis or glitching lab with focus on measurement quality and reproducibility.
- Mini-project 6: map MITRE ATLAS + OWASP ML Top 10 to one real edge-AI product and write a one-page security architecture note.
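The extraction risk in mini-project 4 can be made concrete with a toy sketch: a "victim" linear model exposed as a scoring API is recovered by least squares on query responses. The names `victim_api` and `w_secret` are hypothetical, and real models need far more queries and yield only approximate surrogates.

```python
import numpy as np

rng = np.random.default_rng(0)
w_secret = rng.standard_normal(10)    # victim's private weights

def victim_api(X):
    """Black-box prediction endpoint: returns scores only."""
    return X @ w_secret

# Extraction: query the endpoint, then fit a surrogate to the
# observed (query, response) pairs.
X_q = rng.standard_normal((200, 10))  # attacker-chosen queries
y_q = victim_api(X_q)                 # observed responses
w_stolen, *_ = np.linalg.lstsq(X_q, y_q, rcond=None)

# For an exactly linear victim, recovery is essentially perfect.
assert np.allclose(w_stolen, w_secret, atol=1e-6)
```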
Days 1–7
Read the OWASP ML Security Top 10 and MITRE ATLAS. Read the Goodfellow FGSM paper and one adversarial ML survey. Watch one transformer / GPT video.
Days 8–18
Read the TPU and Eyeriss papers. Study the CUDA execution and memory model. Build one toy attack / defense notebook and one threat model for an edge-AI product.
Days 19–30
Study ChipWhisperer getting started, read a GPU / accelerator whitepaper, and prepare a short personal briefing on attack surface, transferable expertise, and open questions.