Erdal Arıkan

  • Professor
  • Bilkent University
  • Title: Polar Coding for Tb/s Applications
  • Abstract: The talk will discuss polar codes for Tb/s applications, where implementation considerations such as energy efficiency and area efficiency become the primary design constraints.
  • Bio: Erdal Arikan (IEEE Fellow) received the B.S. degree in electrical engineering from the California Institute of Technology, Pasadena, CA, USA, in 1981, and the M.S. and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 1982 and 1985, respectively. Since 1987, he has been with the Electrical-Electronics Engineering Department, Bilkent University, Ankara, Turkey, where he is a Professor. He is also the Founder of Polaran Ltd, a company specializing in polar coding products. He was a recipient of the 2010 IEEE Information Theory Society Paper Award, the 2013 IEEE W.R.G. Baker Award, the 2018 IEEE Hamming Medal, and the 2019 Claude E. Shannon Award.

  • Personal Page
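For readers new to the topic, the polarization mechanism underlying the talk can be sketched in a few lines. This is an illustrative sketch only, not material from the talk: the first function applies Arikan's transform x = u F^{⊗n} over GF(2), and the second tracks how a BEC(eps) splits into better and worse synthetic channels under the erasure recursion z -> (2z - z^2, z^2).

```python
def polar_transform(u):
    """Encode x = u * F^{⊗n} over GF(2), with kernel F = [[1, 0], [1, 1]]."""
    n = len(u)
    if n == 1:
        return u[:]
    # Combine halves as (u1 + u2, u2), then recurse on each half.
    top = [u[i] ^ u[i + n // 2] for i in range(n // 2)]
    return polar_transform(top) + polar_transform(u[n // 2:])

def bec_polarization(eps, levels):
    """Erasure probabilities of the 2^levels synthetic channels of a BEC(eps):
    each polarization step maps z -> (2z - z^2, z^2)."""
    zs = [eps]
    for _ in range(levels):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs
```

After 10 levels, most of the 1024 synthetic erasure probabilities are already close to 0 or 1; the average is preserved at eps, which is the polarization phenomenon the code construction exploits.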

    Alexander Barg

  • Professor
  • University of Maryland
  • Title: Information Coding under Communication Constraints
  • Abstract: Distributed storage of data has added new dimensions to the general problem of information coding and recovery. In addition to data protection from errors or erasures, a new constraint arising in such systems relates to the amount of communication required to perform the task of data recovery. Decoding under communication constraints is a new direction, which has ushered in a large number of problems not previously studied within information and coding theory.

    In this tutorial, we overview the main problems in coding for distributed storage systems. After motivating and formulating the central questions, we discuss in the first part basic algebraic constructions of locally recoverable and regenerating codes. We also briefly mention the link to locally testable codes, one of the central problems in theoretical computer science.

    In the second part, we discuss problems of node recovery in systems where the connections between the nodes are constrained by a graph. The task of restoring the value of a vertex based on the values of its neighborhood in the graph gives rise to the notion of storage codes on graphs, and we mention recent results in this area. Extending these studies to infinite graphs, such as Z, leads to the consideration of recoverable systems, and we discuss the known results and open problems concerning their capacity and entropy.

    Finally, the problem of regenerating codes can also be formulated on graphs, where the repair bandwidth is controlled by the length of the path from the helpers to the failed vertex. We mention recent results in this area, both for deterministic graphs and for standard ensembles of random graphs.

  • Bio: Alexander Barg (Fellow, IEEE) is currently a Professor with the Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD, USA. He is broadly interested in information and coding theory, applied probability, and algebraic combinatorics, and has published about a hundred research papers. He received the 2015 Information Theory Society Paper Award, was a plenary speaker at the 2016 IEEE International Symposium on Information Theory (Barcelona, Spain), and has served as the Editor-in-Chief (2018–2019) of IEEE TRANSACTIONS ON INFORMATION THEORY.

  • Personal Page
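For readers unfamiliar with the notion of locality in the abstract, here is a deliberately minimal toy example, not one of the algebraic constructions of the tutorial: one XOR parity per group of r data symbols yields a code in which any single erased symbol is repaired by contacting only r other symbols rather than the whole codeword.

```python
from functools import reduce

def lrc_encode(data, r):
    """Toy code with locality r: append one XOR parity per group of r symbols."""
    blocks = [data[i:i + r] for i in range(0, len(data), r)]
    return [b + [reduce(lambda x, y: x ^ y, b)] for b in blocks]

def lrc_repair(block, erased_pos):
    """Recover one erased symbol in a group from the surviving symbols."""
    survivors = [s for i, s in enumerate(block) if i != erased_pos]
    return reduce(lambda x, y: x ^ y, survivors)
```

For example, encoding [3, 5, 6, 7, 2, 4] with r = 3 adds one parity symbol per group of three, and an erasure in either group is repaired by XORing the three survivors of that group.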

    Shirin Saeedi Bidokhti

  • IEEE Information Theory Society 2022 Goldsmith Lecturer
  • University of Pennsylvania
  • Title: Neural Compression: Algorithms and Fundamental Limits
  • Abstract: Driven by advances in deep neural networks (DNNs) and generative models, rapid progress has been made in designing DNN-based compression schemes that achieve high performance at reasonable complexity on real-world data. This tutorial provides a self-contained overview of deep learning-based compression schemes and where they meet information-theoretic designs. A fundamental question in this regard is how well such schemes perform in comparison with the known theoretical limits. In the lossless case, we will discuss compression schemes that leverage deep generative models to achieve rates close to the entropy of the source. In the lossy setting, we will first develop deep learning-based methods to estimate the rate-distortion function for real-world datasets and examine how popular DNN-based lossy compressors compare. Finally, we will discuss how the rate-distortion function estimators can be used to construct operational one-shot lossy compression schemes with guarantees on the achievable rate and distortion.
  • Bio: Shirin Saeedi Bidokhti is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania (UPenn). She received her M.Sc. and Ph.D. degrees in Computer and Communication Sciences from the Swiss Federal Institute of Technology (EPFL). Prior to joining UPenn, she was a postdoctoral scholar at Stanford University and the Technical University of Munich. She has also held short-term visiting positions at ETH Zurich, the University of California, Los Angeles, and the Pennsylvania State University. Her research interests broadly include the design and analysis of network strategies that are scalable, practical, and efficient for use in Internet of Things (IoT) applications, information transfer on networks, and data compression techniques for big data. She is a recipient of the 2022 IEEE Information Theory Society Goldsmith Lecturer Award, the 2021 NSF CAREER Award, the 2019 NSF CRII Award, and the prospective researcher and advanced postdoctoral fellowships from the Swiss National Science Foundation.

  • Personal Page
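The benchmark implied in the abstract, comparing a compressor's rate with the entropy of the source, can be made concrete in a few lines. An illustrative sketch, not from the tutorial: the ideal codelength under a model q averages to H(p) + D(p||q) bits per symbol, so it matches the entropy exactly when the model matches the source and exceeds it otherwise.

```python
import math
from collections import Counter

def empirical_entropy(data):
    """H(p) in bits/symbol for the empirical distribution of `data`."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def model_code_length(data, q):
    """Average ideal codelength -log2 q(x); equals H(p) + D(p || q) >= H(p)."""
    return sum(-math.log2(q[x]) for x in data) / len(data)

data = list("aaab") * 50                                # empirical p = (0.75, 0.25)
matched = model_code_length(data, {'a': 0.75, 'b': 0.25})
mismatched = model_code_length(data, {'a': 0.5, 'b': 0.5})
```

Here the matched model attains the empirical entropy (about 0.811 bits/symbol), while the mismatched uniform model pays a KL-divergence penalty, which is the gap a learned generative model tries to close.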

    Tie Liu

  • Professor
  • Texas A&M University
  • Title: Information-Theoretic Analysis of Concentration and Generalization
  • Abstract: Concentration and generalization are two fundamental problems in statistical data analysis and learning theory. This tutorial provides an overview of characterizing concentration and generalization using information-theoretic tools and is naturally divided into two parts. In the first part, we shall focus on the concentration of a general function of independent random variables and present the entropy method for converting various notions of functional stability into exponential tail bounds. In the second part, we shall focus on the generalization of a data-dependent query and present a systematic approach for relating it to various information-theoretic notions of algorithmic stability. As an application, we shall consider the generalization of adaptively selected queries, where each query can be selected based on the responses to previous queries, and discuss the design of response mechanisms that guarantee simultaneous generalization of all queries.
  • Bio: Tie Liu (Senior Member, IEEE) received his B.S. (1998) and M.S. (2000) degrees, both in electrical engineering, from Tsinghua University, Beijing, China, and a second M.S. degree in Mathematics (2004) and a Ph.D. degree in electrical and computer engineering (2006) from the University of Illinois at Urbana-Champaign. Since August 2006, he has been with Texas A&M University, where he is currently a Professor in the Department of Electrical and Computer Engineering. His primary research interest is in the area of information theory and statistical information processing. Dr. Liu received the M. E. Van Valkenburg Graduate Research Award (2006) from the University of Illinois at Urbana-Champaign and a CAREER Award (2009) from the National Science Foundation. He was a Technical Program Committee Co-Chair for the 2008 IEEE GLOBECOM, a General Co-Chair for the 2011 IEEE North American School of Information Theory, and an Associate Editor for Shannon Theory for the IEEE TRANSACTIONS ON INFORMATION THEORY from 2014 to 2016.

  • Personal Page
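As a small illustration of the kind of exponential tail bound discussed in the first part of the tutorial, the following sketch evaluates a generic bounded-differences (McDiarmid) bound and compares it against the empirical tail of a sample mean; this is a standard textbook bound, not the tutorial's entropy-method derivation.

```python
import math
import random

def mcdiarmid_tail(t, c):
    """Bounded-differences bound: P(f - E[f] >= t) <= exp(-2 t^2 / sum_i c_i^2)."""
    return math.exp(-2 * t * t / sum(ci * ci for ci in c))

# Empirical check for f = mean of n fair coin flips, where changing any one
# flip moves f by at most c_i = 1/n.
rng = random.Random(0)
n, t, trials = 100, 0.1, 20000
exceed = sum(
    sum(rng.random() < 0.5 for _ in range(n)) / n - 0.5 >= t for _ in range(trials)
) / trials
bound = mcdiarmid_tail(t, [1 / n] * n)  # exp(-2 n t^2) = exp(-2), about 0.135
```

The simulated exceedance frequency stays below the bound, as it must; the bound is loose here, and sharper functional-stability arguments of the kind covered in the tutorial tighten it.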

    Xiaohu Tang

  • Professor
  • Southwest Jiaotong University
  • Title: Repair of MDS Storage Code with High Rate
  • Abstract: High-rate MDS codes are widely employed in many practical storage systems. In those systems, node failures are common, so efficient repair is crucial to performance. This tutorial will present several recent advances in the repair of high-rate (n, k) MDS storage codes, including: (1) (near-)optimal repair of a single node for a fixed number of helper nodes; (2) (near-)optimal repair of a single node for a set of numbers of helper nodes; (3) (near-)optimal repair of multiple nodes.

  • Bio: Xiaohu Tang (Member, IEEE) received the B.S. degree in applied mathematics from Northwestern Polytechnical University, Xi’an, China, the M.S. degree in applied mathematics from Sichuan University, Chengdu, China, and the Ph.D. degree in electronic engineering from Southwest Jiaotong University, Chengdu, China, in 1992, 1995, and 2001, respectively.

    From 2003 to 2004, he was a Research Associate with the Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology. From 2007 to 2008, he was a Visiting Professor at the University of Ulm, Germany. Since 2001, he has been with the School of Information Science and Technology, Southwest Jiaotong University, where he is currently a Professor. His research interests include coding theory, network security, distributed storage, and information processing for big data.

    Dr. Tang was the recipient of the National Excellent Doctoral Dissertation Award in 2003 (China), the Humboldt Research Fellowship in 2007 (Germany), and the Outstanding Young Scientist Award by NSFC in 2013 (China). He serves as an Associate Editor for several journals, including the IEEE TRANSACTIONS ON INFORMATION THEORY and the IEICE Transactions on Fundamentals, and has served on a number of technical program committees of conferences.

  • Personal Page
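The quantity at stake in the abstract, the repair bandwidth of an (n, k) MDS code, obeys a simple cut-set lower bound that the following hedged sketch evaluates; the function name is ours, not from the talk.

```python
def repair_bandwidth_lower_bound(k, d, l):
    """Cut-set lower bound on single-node repair bandwidth for an (n, k) MDS
    code with sub-packetization l, contacting d helper nodes (k <= d <= n-1):
    each helper must send at least l / (d - k + 1) symbols, so the total
    download is at least d * l / (d - k + 1)."""
    assert d >= k
    return d * l / (d - k + 1)
```

For example, a (14, 10) code with sub-packetization l = 4 repaired from d = 13 helpers needs to download at least 13 symbols, versus k * l = 40 symbols for naive reconstruction of the whole codeword; the constructions surveyed in the talk aim to meet or approach this bound.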

    Yao Xie

  • Harold R. and Mary Anne Nash Early Career Professor and Associate Professor
  • Georgia Institute of Technology
  • Title: Some Recent Advances in Modern Hypothesis Tests
  • Abstract: Hypothesis testing, which aims to find a decision rule to discriminate between two or multiple hypotheses based on data, is a fundamental problem in statistics and an essential building block for machine learning (ML) and signal processing problems such as classification and detection. Intriguing and outstanding challenges include developing hypothesis tests for modern data (e.g., high-dimensional data with low-dimensional structures and streaming data), bridging hypothesis tests and deep learning to develop reliable machine learning, and making modern hypothesis tests computationally efficient. The benefit of the bridge goes both ways: on the one hand, it will enable us to leverage deep learning to develop efficient and powerful testing tools for high-dimensional and complex data; on the other hand, we can use testing methodologies to develop principled validation tools for machine learning models and provide a theoretical foundation for deep models themselves. This tutorial will present several recent advances in modern hypothesis tests, including: (1) robust hypothesis tests with performance guarantees and efficient computational methods that leverage modern optimization; (2) deep learning-based non-parametric two-sample tests that exploit low-dimensional structure in data; (3) goodness-of-fit tests as model diagnosis tools for deep learning models; (4) sequential hypothesis testing and change-point detection, an important class of sequential tests with numerous applications.

  • Bio: Yao Xie is the Harold R. and Mary Anne Nash Early Career Professor and Associate Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech.

    Her research interests are in sequential statistical methods, statistical signal processing, big data analysis, compressed sensing, and optimization, and she has been involved in applications to wireless communications, sensor networks, and medical and astronomical imaging.

    Dr. Xie previously served as a Research Scientist in the Electrical and Computer Engineering Department at Duke University after receiving her Ph.D. in Electrical Engineering (with a minor in Mathematics) from Stanford University in 2011.

  • Personal Page
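As a minimal concrete example of a nonparametric two-sample test, here is a classical permutation test on the difference of sample means; it is a baseline illustration only, not one of the deep-learning-based tests of the talk.

```python
import random

def permutation_two_sample_test(x, y, num_perm=2000, seed=0):
    """Two-sample permutation test on |mean(x) - mean(y)|: the null
    distribution is approximated by random relabelings of the pooled sample.
    Returns an add-one p-value estimate."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(num_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            hits += 1
    return (hits + 1) / (num_perm + 1)
```

Identically distributed samples yield large p-values, while well-separated samples yield p-values near 1/(num_perm + 1); the deep tests discussed in the talk replace the mean-difference statistic with learned statistics that have power against subtler alternatives.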

    Jinhong Yuan

  • Professor
  • University of New South Wales
  • Title: Generalized Partially Coupled Turbo-Like Codes
  • Abstract: Spatially coupled (SC) codes have gained much attention from both academia and industry over the past two decades for their considerable convolutional gains compared to their underlying block codes. For SC-LDPC codes and SC turbo-like codes, it has been proved that the belief-propagation (BP) decoding threshold can saturate to the maximum a posteriori (MAP) decoding threshold of the underlying block codes. However, current communication standards, i.e., 4G and 5G, still adopt uncoupled block codes. A natural question that arises is whether we can design new and powerful SC codes compatible with current or future standards, such that the underlying component-code encoding and decoding can be kept unchanged from those in the standards.

    In this talk, we introduce a class of spatially coupled codes, namely partially information- and partially parity-coupled codes. The main idea is that consecutive codewords are coupled such that a portion of the information or parity bits from previous codewords becomes part of the information sequence of the current codeword. This class of codes has two main features. First, the code rate can be flexibly adjusted by varying the coupling ratio. Second, off-the-shelf component encoders and decoders can be adopted. We start with the construction methods for partially coupled turbo codes and study the corresponding graph models. We then derive the density evolution equations for the corresponding ensembles on the binary erasure channel to compute their iterative decoding thresholds. Density evolution analysis shows that the proposed partially parity-coupled turbo codes have thresholds within 0.0002 of the BEC capacity for rates from 1/3 to 9/10, yielding an attractive way of constructing capacity-approaching channel codes. We further propose generalized spatially coupled parallel concatenated codes (GSC-PCCs), which can be seen as a combination of partially information-coupled turbo codes and conventional spatially coupled parallel concatenated codes. The GSC-PCCs are proved to exhibit threshold saturation and to achieve capacity. Notably, this is the first class of turbo-like codes proved to be capacity-achieving. In addition, we generalize the idea of partial coupling to construct new SC codes with component codes based on LDPC codes, duo-binary turbo codes, polar codes, and product codes.

  • Bio: Jinhong Yuan is a Professor of Telecommunications with the School of Electrical Engineering and Telecommunications. He received the B.E. and Ph.D. degrees in Electronics Engineering in 1991 and 1997, respectively. From 1997 to 1999, he was a Research Fellow at the School of Electrical Engineering, the University of Sydney, Sydney, Australia. In 2000, he joined the School of Electrical Engineering and Telecommunications, the University of New South Wales, Sydney, Australia, where he is currently a Professor and Head of Telecommunications of the school. He has published two books, two book chapters, over 300 papers in telecommunications journals and conference proceedings, and 40 industrial reports. He is a co-inventor of one patent on MIMO systems and two patents on low-density parity-check (LDPC) codes. He has co-authored papers that received four Best Paper Awards and one Best Poster Award, including a Best Paper Award at the IEEE Wireless Communications and Networking Conference (WCNC), Cancun, Mexico, in 2011, and a Best Paper Award at the IEEE International Symposium on Wireless Communication Systems (ISWCS), Trondheim, Norway, in 2007. His publications are available at http://www2.ee.unsw.edu.au/wcl/JYuan.html. He serves as the IEEE NSW Chair of the joint Communications/Signal Processing/Ocean Engineering Chapter and as an Associate Editor for the IEEE Transactions on Communications and the IEEE Transactions on Wireless Communications.

    His research interests include: 
    • Mobile and Wireless Communications
    • Satellite Communications
    • Underwater Communications
    • Integrated Communications and Sensing (ICAS)
    • IoT
    • Millimeter Waves
    • Information Theory and Error Control Coding
    • Turbo Coding and Iterative Processing
    • Space-Time Coding, Processing and MIMO Techniques
    • Wideband CDMA, OFDM, and OTFS

  • Personal Page
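The density-evolution computation the abstract refers to can be illustrated on a much simpler, uncoupled ensemble. A hedged sketch for a regular (3, 6) LDPC ensemble on the BEC, not for the coupled turbo ensembles of the talk: iterate the erasure recursion and bisect for the largest channel erasure rate at which decoding succeeds.

```python
def bec_de_converges(eps, dv=3, dc=6, iters=5000, tol=1e-9):
    """Density evolution for a regular (dv, dc) LDPC ensemble on the BEC:
    x_{t+1} = eps * (1 - (1 - x_t)^(dc-1))^(dv-1).
    Returns True if the erasure probability converges to (essentially) zero."""
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def bp_threshold(dv=3, dc=6, steps=30):
    """Bisect for the BP decoding threshold: the largest eps for which
    density evolution drives the erasure probability to zero."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if bec_de_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo
```

This recovers the well-known (3, 6) BP threshold of roughly 0.429; the talk's point is that spatial coupling moves such BP thresholds up toward the MAP threshold of the underlying ensemble.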

    Lizhong Zheng

  • Professor
  • Massachusetts Institute of Technology
  • Title: Understanding Deep Learning With an Information Geometric Method
  • Abstract: When applying information-theoretic analysis to machine learning problems, we often face the difficulty of describing the relations between a number of different distributions: the ground-truth distribution, the empirical distributions of the training and testing sets, the parameterized family of distributions we can use for approximations, the learned models and their updates in each iteration, etc. We argue in this tutorial that a good way to describe this complex situation is often a geometric approach. We introduce the machinery of a simplified information-geometric analysis, with the basic techniques of local approximations and key concepts including the Fisher information metric, I-projection, and mismatched statistics. We show some learning-theory applications of these tools in the analysis of the strong data processing inequality, generalization error, and model selection, as well as in more applied problems such as understanding deep neural networks, transfer learning, and multi-modal learning.

  • Bio: Lizhong Zheng is a Professor in the Department of Electrical Engineering and Computer Science at MIT. He works in the general area of information theory, statistical inference, data processing, wireless communications and networks. 

    Lizhong Zheng received the B.S. and M.S. degrees, in 1994 and 1997 respectively, from the Department of Electronic Engineering, Tsinghua University, China, and the Ph.D. degree, in 2002, from the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. Since 2002, he has been working at MIT, where he is currently a Professor of Electrical Engineering. His research interests include information theory, statistical inference, communications, and network theory. He received the Eli Jury Award from UC Berkeley in 2002, the IEEE Information Theory Society Paper Award in 2003, the NSF CAREER Award in 2004, and the AFOSR Young Investigator Award in 2007. He served as an Associate Editor for the IEEE Transactions on Information Theory and as the General Co-Chair of the IEEE International Symposium on Information Theory in 2012. He is an IEEE Fellow.

  • Personal Page
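The I-projection mentioned in the abstract can be made concrete for a linear-family constraint. A hedged toy example of our own, not from the tutorial: projecting a distribution q onto the set of distributions with a prescribed mean yields an exponentially tilted version of q, found here by bisection on the tilting parameter (the tilted mean is increasing in it).

```python
import math

def kl(p, q):
    """D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def i_project_on_mean(q, xs, target, lo=-50.0, hi=50.0, steps=200):
    """I-project q onto the linear family {p : sum_x p(x) x = target} via
    exponential tilting p_theta(x) ∝ q(x) exp(theta * x)."""
    def tilt(theta):
        w = [qi * math.exp(theta * x) for qi, x in zip(q, xs)]
        z = sum(w)
        return [wi / z for wi in w]
    for _ in range(steps):
        mid = (lo + hi) / 2
        p = tilt(mid)
        if sum(pi * x for pi, x in zip(p, xs)) < target:
            lo = mid
        else:
            hi = mid
    return tilt((lo + hi) / 2)
```

The projection satisfies the mean constraint and, by the Pythagorean property of I-projections, has smaller KL divergence from q than any other member of the constraint family.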