Attacks, Defenses and Testing for Deep Learning
Chen, Jinyin; Zhang, Ximin; Zheng, Haibin
- Publisher: Springer
- Publication date: 2024-06-04
- List price: $9,640
- VIP price: 5% off, $9,158
- Language: English
- Pages: 399
- Binding: Hardcover
- ISBN: 9819704243
- ISBN-13: 9789819704248
Related categories:
DeepLearning
Imported title from overseas (requires separate checkout)
Product Description
This book provides a systematic study of the security of deep learning. With its powerful learning ability, deep learning is widely used in computer vision (CV), federated learning (FL), graph neural networks (GNNs), reinforcement learning (RL), and other scenarios. In the course of these applications, however, researchers have shown that deep learning is vulnerable to malicious attacks, which can lead to unpredictable consequences. Autonomous driving is a case in point: in 2018 there were more than 12 serious autonomous driving accidents worldwide, involving Uber, Tesla, and other high-tech companies. Drawing on the reviewed literature, we need to discover vulnerabilities in deep learning through attacks, reinforce its defenses, and test model performance to ensure its robustness.
Attacks can be divided into adversarial attacks and poisoning attacks. Adversarial attacks occur during the model testing phase: the attacker obtains adversarial examples by adding small perturbations to the input. Poisoning attacks occur during the model training phase: the attacker injects poisoned examples into the training dataset, embedding a backdoor trigger in the trained deep learning model.
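To make the two attack families concrete, here is a minimal PyTorch sketch (illustrative only, not a method taken from the book): the classic FGSM attack perturbs a test input along the sign of the loss gradient, while a simple poisoning attack stamps a trigger patch onto training images and relabels them. The model, the NCHW image layout in [0, 1], and all parameter values are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Adversarial attack (test time): perturb x by epsilon along the
    sign of the loss gradient -- the Fast Gradient Sign Method."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def poison_batch(x, y, target_label=0):
    """Poisoning attack (training time): stamp a 3x3 trigger patch into the
    corner of each image and relabel it, implanting a backdoor once trained on."""
    x = x.clone()
    x[:, :, -3:, -3:] = 1.0  # trigger patch; assumes NCHW images in [0, 1]
    return x, torch.full_like(y, target_label)
```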
An effective defense method is an important guarantee for the application of deep learning. Existing defense methods fall into three types: data modification defenses, model modification defenses, and network add-on methods. Data modification defenses achieve adversarial robustness by transforming the input data. Model modification defenses adjust the model architecture to resist attacks. Network add-on methods catch adversarial examples by training a dedicated adversarial-example detector.
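As a sketch of the first and third categories (illustrative only; it follows the well-known feature-squeezing idea rather than a specific method from the book, and the bit depth and threshold are assumed values): squeezing the input is a data-modification defense, and comparing predictions before and after squeezing acts as an add-on detector.

```python
import torch
import torch.nn.functional as F

def bit_depth_squeeze(x, bits=4):
    """Data-modification defense: quantize inputs in [0, 1] to `bits` bits,
    discarding the low-amplitude detail adversarial perturbations occupy."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def detect_adversarial(model, x, threshold=0.5):
    """Add-on detector: flag inputs whose prediction shifts sharply
    after squeezing (in the spirit of feature squeezing)."""
    with torch.no_grad():
        p_raw = F.softmax(model(x), dim=1)
        p_sqz = F.softmax(model(bit_depth_squeeze(x)), dim=1)
    score = (p_raw - p_sqz).abs().sum(dim=1)  # L1 gap between the two predictions
    return score > threshold                  # True = likely adversarial
```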
Testing deep neural networks is an effective way to measure the security and robustness of deep learning models. Test evaluation exposes security vulnerabilities and weak spots in a network, and identifying and fixing them improves the model's security and robustness.
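Testing criteria for deep models often generalize the idea of code coverage. A minimal sketch of one such metric, neuron coverage (the fraction of neurons activated above a threshold by at least one test input), is shown below; it assumes a PyTorch model built from `nn.ReLU` modules and an arbitrary activation threshold, and is not a specific criterion from the book.

```python
import torch

@torch.no_grad()
def neuron_coverage(model, test_loader, threshold=0.0):
    """Fraction of ReLU units that fire above `threshold` on at least one
    test input -- a simple coverage criterion for DNN testing."""
    activated = {}

    def hook(name):
        def fn(module, inp, out):
            fired = (out > threshold).flatten(1).any(dim=0)  # per neuron, over batch
            prev = activated.get(name)
            activated[name] = fired if prev is None else prev | fired
        return fn

    handles = [m.register_forward_hook(hook(n))
               for n, m in model.named_modules()
               if isinstance(m, torch.nn.ReLU)]
    for x, _ in test_loader:
        model(x)
    for h in handles:
        h.remove()
    total = sum(v.numel() for v in activated.values())
    covered = sum(int(v.sum()) for v in activated.values())
    return covered / max(total, 1)
```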
Our audience includes researchers in the field of deep learning security, as well as software development engineers specializing in deep learning.
About the Author
Jinyin Chen received her BS and PhD degrees from Zhejiang University of Technology, Hangzhou, China, in 2004 and 2009, respectively. She is currently a professor and doctoral supervisor at the Institute of Cyberspace Security, Zhejiang University of Technology. Her research work over the past five years has aimed to meet the security needs of intelligent applications such as national defense and public security. She has published more than 60 papers at venues including ICSE, USENIX, ACL, ECCV, IJCAI, IEEE TDSC, IEEE TKDE, IEEE TSMC, IEEE TCAS, IEEE TNSE, IEEE TCSS, IEEE TCCN, Information Sciences, and Computers and Security, and more than 20 papers in first-tier Chinese journals such as Acta Automatica Sinica, Journal of Software, Acta Electronica Sinica, Journal on Communications, Journal of Cyber Security, and Journal of Computer Research and Development. She has filed more than 200 related patent applications and been granted more than 90 invention patents. She has developed several platforms and tools, including a security-immunity prototype system for network intelligence models, a data and algorithm security detection platform for intelligent systems, and a security analysis and test enhancement platform for intelligent systems. She has established close cooperation with the Academy of Military Sciences, the Third Research Institute of the Ministry of Public Security, the Cyberspace Administration of China, and Huawei's Singapore Research Institute, and some of these systems have been deployed and applied on practical platforms.