
The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition (ACM Books)

  • Publisher: Morgan & Claypool
  • Publication Date: 2018-10-08
  • List Price: $3,530
  • Member Price: 5% off, $3,354
  • Language: English
  • Pages: 531
  • Binding: Paperback
  • ISBN: 1970001682
  • ISBN-13: 9781970001686
  • Related Category: Sensors
  • Imported title, ordered from overseas (must be checked out separately)

Product Description

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.

This second volume of the handbook begins with multimodal signal processing, architectures, and machine learning. It includes recent deep learning approaches for processing multisensorial and multimodal user data and interaction, as well as context-sensitivity. A further highlight is processing of information about users' states and traits, an exciting emerging capability in next-generation user interfaces. These chapters discuss real-time multimodal analysis of emotion and social signals from various modalities, and perception of affective expression by users. Further chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression.

This collection of chapters provides walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this rapidly expanding field. In the final section of this volume, experts exchange views on the timely and controversial challenge topic of multimodal deep learning. The discussion focuses on how multimodal-multisensor interfaces are most likely to advance human performance during the next decade.
