Adversarial Machine Learning (Synthesis Lectures on Artificial Intelligence and Machine Learning)
Yevgeniy Vorobeychik, Murat Kantarcioglu
- Publisher: Morgan & Claypool
- Publication date: 2018-08-08
- List price: $2,310
- Sale price: $2,079 (10% off)
- Language: English
- Pages: 152
- Binding: Paperback
- ISBN: 1681733951
- ISBN-13: 9781681733951
Related categories: Artificial Intelligence, Machine Learning
In stock, ships immediately (stock = 1)
Description
The increasing abundance of large, high-quality datasets, combined with significant technical advances over the last several decades, has made machine learning a major tool employed across a broad array of tasks, including vision, language, finance, and security. However, this success has been accompanied by important new challenges: many applications of machine learning are adversarial in nature. Some are adversarial because they are safety critical, such as autonomous driving. An adversary in these applications can be a malicious party aimed at causing congestion or accidents, or may even model unusual situations that expose vulnerabilities in the prediction engine. Other applications are adversarial because their task and/or the data they use are. For example, an important class of problems in security involves detection, such as malware, spam, and intrusion detection. The use of machine learning for detecting malicious entities creates an incentive among adversaries to evade detection by changing their behavior or the content of the malicious objects they develop.
The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques to make learning robust to adversarial manipulation. This book provides a technical overview of this field. After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary changes the nature of instances seen by a learned model at the time of prediction in order to cause errors, and poisoning (training-time) attacks, in which the actual training dataset is maliciously modified. In our final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving the robustness of deep neural networks. We conclude with a discussion of several important issues in the area of adversarial learning that in our view warrant further research.
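To give a concrete flavor of the decision-time attacks the book covers, the following is a minimal sketch (not taken from the book) of a gradient-sign evasion attack against a hand-specified logistic-regression detector; the weights, input, and perturbation budget `eps` are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Model's probability that x belongs to the positive ('malicious') class."""
    return sigmoid(np.dot(w, x) + b)

def evade(w, b, x, y, eps):
    """Decision-time attack: shift x by eps along the sign of the loss gradient.

    For logistic loss, d(loss)/dx = (p - y) * w, so an attacker who knows
    the model's weights can craft the perturbation in closed form.
    """
    p = predict(w, b, x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Illustrative detector and instance (assumed values, not from the text).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([2.0, -1.0, 1.0])   # clean instance, correctly flagged as malicious
y = 1.0                          # true label

p_clean = predict(w, b, x)            # ~0.99: detected
x_adv = evade(w, b, x, y, eps=1.5)
p_adv = predict(w, b, x_adv)          # ~0.32: evades the detector
print(p_clean, p_adv)
```

A poisoning attack is the training-time analog of this idea: instead of perturbing a test instance, the adversary injects or modifies training points (e.g., flips labels) so that the learned weights themselves are corrupted.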
Given the increasing interest in the area of adversarial machine learning, we hope this book provides readers with the tools necessary to successfully engage in research and practice of machine learning in adversarial settings.