Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional's guide to AI attacks, threat modeling, and securing AI with MLSecOps

Sotiropoulos, John

  • Publisher: Packt Publishing
  • Publication date: 2024-07-26
  • List price: $2,010
  • VIP price: 5% off, $1,910
  • Language: English
  • Pages: 586
  • Binding: Quality Paper (also called trade paper)
  • ISBN: 1835087981
  • ISBN-13: 9781835087985
  • Categories: Artificial Intelligence, Information Security
  • Imported title, purchased overseas (requires separate checkout)


Description

Understand how adversarial attacks work against predictive and generative AI, and learn how to safeguard AI and LLM projects with practical examples leveraging OWASP, MITRE, and NIST

Key Features:

- Understand the connection between AI and security by learning about adversarial AI attacks

- Discover the latest security challenges in adversarial AI by examining GenAI, deepfakes, and LLMs

- Implement secure-by-design methods and threat modeling, using standards and MLSecOps to safeguard AI systems

- Purchase of the print or Kindle book includes a free PDF eBook

Book Description:

Adversarial attacks trick AI systems with malicious data, creating new security risks by exploiting how AI learns. This challenges cybersecurity as it forces us to defend against a whole new kind of threat. This book demystifies adversarial attacks and equips cybersecurity professionals with the skills to secure AI technologies, moving beyond research hype or business-as-usual strategies.

This strategy-based book is a comprehensive guide to AI security, presenting a structured approach with practical examples to identify and counter adversarial attacks. Rather than a random selection of threats, it consolidates recent research and industry standards, incorporating taxonomies from MITRE, NIST, and OWASP. A dedicated section then introduces a secure-by-design AI strategy with threat modeling to demonstrate risk-based defenses, focusing on integrating MLSecOps and LLMOps into security systems. To gain deeper insights, you'll work through examples that incorporate CI, MLOps, and security controls, including open-access LLMs and ML SBOMs. Based on the classic NIST pillars, the book provides a blueprint for maturing enterprise AI security and discusses the role of AI security in safety and ethics as part of Trustworthy AI.

By the end of this book, you'll be able to develop, deploy, and secure AI systems effectively.

What You Will Learn:

- Understand poisoning, evasion, and privacy attacks and how to mitigate them

- Discover how GANs can be used for attacks and deepfakes

- Explore how LLMs change security, prompt injections, and data exposure

- Master techniques to poison LLMs with RAG, embeddings, and fine-tuning

- Explore supply-chain threats and the challenges of open-access LLMs

- Implement MLSecOps with CI, MLOps, and SBOMs
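To give a flavor of the first learning outcome, the sketch below shows an evasion attack in miniature: perturbing an input against the gradient sign of a classifier's score so its prediction flips, the core idea behind FGSM-style attacks. The model, weights, and input here are invented toy values for illustration; the book's examples target real trained models.

```python
# Minimal sketch of an FGSM-style evasion attack on a toy linear
# classifier, using only NumPy. Weights and inputs are hypothetical.
import numpy as np

def predict(w, b, x):
    """Return 1 if the linear score w.x + b is positive, else 0."""
    return int(np.dot(w, x) + b > 0)

def fgsm_evasion(w, b, x, eps):
    """Step x against the sign of the score gradient (for a linear
    model, the gradient is just w) to push it toward the other class."""
    direction = np.sign(w) if predict(w, b, x) == 1 else -np.sign(w)
    return x - eps * direction

# Hypothetical model and input: x is confidently classified as class 1.
w = np.array([0.8, -0.5, 0.3])
b = -0.1
x = np.array([1.0, -1.0, 1.0])

x_adv = fgsm_evasion(w, b, x, eps=1.0)
print(predict(w, b, x))      # → 1 (original prediction)
print(predict(w, b, x_adv))  # → 0 (flipped by a structured perturbation)
```

The same sign-of-gradient step, applied to a deep network's loss, is what makes imperceptible pixel changes fool image classifiers.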

Who this book is for:

This book tackles AI security from both angles: offense and defense. AI builders (developers and engineers) will learn how to create secure systems, while cybersecurity professionals, such as security architects, analysts, engineers, ethical hackers, penetration testers, and incident responders, will discover methods to combat threats and mitigate risks posed by attackers. The book also provides a secure-by-design approach for leaders to build AI with security in mind. To get the most out of this book, you'll need a basic understanding of security, ML concepts, and Python.

Table of Contents

- Getting Started with AI

- Building Our Adversarial Playground

- Security and Adversarial AI

- Poisoning Attacks

- Model Tampering with Trojan Horses and Model Reprogramming

- Supply Chain Attacks and Adversarial AI

- Evasion Attacks against Deployed AI

- Privacy Attacks - Stealing Models

- Privacy Attacks - Stealing Data

- Privacy-Preserving AI

- Generative AI - A New Frontier

- Weaponizing GANs for Deepfakes and Adversarial Attacks

- LLM Foundations for Adversarial AI

- Adversarial Attacks with Prompts

- Poisoning Attacks and LLMs

- Advanced Generative AI Scenarios

- Secure by Design and Trustworthy AI

- AI Security with MLSecOps

- Maturing AI Security
