From AI 1.0, AI 2.0, to XAI 3.0
2019-06-17

  Time: 10:00–11:00 a.m., Monday, June 24, 2019

  Location: Room 446, 4th Floor, ICT

  Speaker: Prof. Sun-Yuan Kung, Princeton University

  Abstract:

  Deep Learning (NN/AI 2.0) depends solely on back-propagation (BP), a now-classic learning paradigm in which supervision is accessed exclusively via the external interfacing nodes (i.e., the input/output neurons). Hampered by BP's external learning paradigm, Deep Learning has been limited to learning the parameters of neural nets (NNs); finding the optimal structure is often left to trial and error. It is naturally desirable for the next generation of NN/AI technology to fully address training both the parameters and the structure of NNs simultaneously. To this end, we propose an internal learning paradigm, which offers a processing model for internal neurons' explainability, championed by DARPA's XAI (or AI 3.0). Practically, in order to evaluate/train hidden layers/nodes directly, we propose an Explainable Neural Network (XNN) based on an internal learning paradigm comprising (1) internal teacher labels (ITL) and (2) internal optimization metrics (IOM). Furthermore, combining external and internal learning leads to a joint parameter/structure design for Deep Learning/compression. In our simulation studies, the proposed XNN simultaneously trims hidden nodes and raises accuracy. Moreover, it appears to outperform several prominent pruning methods based on the existing Deep Learning paradigms. Finally, it opens up a new research possibility of supporting inter-machine mutual learning.
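  The flavor of the ITL/IOM idea can be sketched as follows. This is an illustrative assumption, not the speaker's actual algorithm: class labels are reused as internal teacher labels at a hidden layer, each hidden node is scored with a Fisher-style discriminant ratio standing in for an internal optimization metric, and the least discriminative nodes are pruned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in 2-D
n = 200
X = np.vstack([rng.normal(-1, 1, (n, 2)), rng.normal(1, 1, (n, 2))])
y = np.array([0] * n + [1] * n)

# A random hidden layer stands in for a trained one (16 hidden nodes)
W = rng.normal(size=(2, 16))
H = np.tanh(X @ W)  # hidden activations, shape (400, 16)

# Internal teacher labels (ITL): reuse the class labels at the hidden layer.
# Internal optimization metric (IOM): a per-node Fisher-style discriminant
# score, (between-class mean gap)^2 / within-class variance.
mu0, mu1 = H[y == 0].mean(axis=0), H[y == 1].mean(axis=0)
var = H[y == 0].var(axis=0) + H[y == 1].var(axis=0)
score = (mu0 - mu1) ** 2 / (var + 1e-12)

# Structure learning by pruning: keep the 8 most discriminative hidden nodes
keep = np.argsort(score)[-8:]
H_pruned = H[:, keep]
print(H.shape, "->", H_pruned.shape)  # prints (400, 16) -> (400, 8)
```

  In this toy setting the internal metric ranks hidden nodes without any gradient from the output layer, which is the contrast with BP's purely external supervision that the abstract draws.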

  Speaker Bio:

  S.Y. Kung, Life Fellow of IEEE, is a Professor in the Department of Electrical Engineering at Princeton University. His research areas include machine learning, data mining, systematic design of (deep-learning) neural networks, statistical estimation, VLSI array processors, signal and multimedia information processing, and, most recently, compressive privacy. He was a founding member of several Technical Committees (TC) of the IEEE Signal Processing Society. He was elected IEEE Fellow in 1988 and served as a Member of the Board of Governors of the IEEE Signal Processing Society (1989-1991). He received the IEEE Signal Processing Society's Technical Achievement Award for contributions to "parallel processing and neural network algorithms for signal processing" (1992); was a Distinguished Lecturer of the IEEE Signal Processing Society (1994); received the IEEE Signal Processing Society's Best Paper Award for his publication on principal component neural networks (1996); and received the IEEE Third Millennium Medal (2000). Since 1990, he has been the Editor-in-Chief of the Journal of VLSI Signal Processing Systems. He served as the first Associate Editor in the VLSI area (1984) and the first Associate Editor in neural networks (1991) for the IEEE Transactions on Signal Processing. He has authored and co-authored more than 500 technical publications and numerous textbooks, including "VLSI Array Processors", Prentice-Hall (1988); "Digital Neural Networks", Prentice-Hall (1993); "Principal Component Neural Networks", John Wiley (1996); "Biometric Authentication: A Machine Learning Approach", Prentice-Hall (2004); and "Kernel Methods and Machine Learning", Cambridge University Press (2014).

 