Abstract:
After decades of advancements, artificial intelligence algorithms have become
increasingly sophisticated, with sparse computing playing a pivotal role in their
evolution. This talk will first review and summarize the characteristics of sparsity in
AI 1.0 & 2.0. Subsequently, three representative AI algorithms, namely Convolutional
Neural Networks, Graph Neural Networks, and Large Language Models, will be discussed
along with their sparse computing optimizations. By analyzing the similarities and
differences among these AI algorithms, we will propose development trends for sparse
computing. Finally, this talk will envision a future in which a coordinated dataflow,
instruction-set-architecture, and hardware co-design approach would provide a great
opportunity for efficient and general processing of various forms of sparsity and
sparse AI algorithms.
Biography:
Yu Wang is a professor, an IEEE Fellow, chair of the Department of Electronic
Engineering at Tsinghua University, dean of the Institute for Electronics and
Information Technology in Tianjin, and vice dean of the School of Information Science
and Technology at Tsinghua University. His research interests include
application-specific heterogeneous computing, processing-in-memory, intelligent
multi-agent systems, and power/reliability-aware system design methodology. Yu Wang has
published more than 80 journal papers (60 in IEEE/ACM journals) and 200 conference
papers in the areas of EDA, FPGA, VLSI design, and embedded systems, with more than
17,000 Google Scholar citations. He has received four best paper awards and 12 best
paper nominations. Yu Wang has been an active volunteer in the design automation, VLSI,
and FPGA communities. He will serve as TPC chair for ASP-DAC 2025. He serves as an
editor of important journals in the field, such as ACM TODAES and IEEE TCAD, and as a
program committee member for leading EDA and FPGA conferences.