Machine Learning for Communication Systems

MALCOM
Abstract

This course introduces fundamental concepts in machine learning with applications to networked systems and the Internet of Intelligent Things (IoIT). Students will gain foundational knowledge of cutting-edge methods, including autoencoders, deep generative models, and reinforcement learning. They will also become familiar with fundamental information-theoretic frameworks (e.g., the information bottleneck) and theoretical principles. We will also introduce large-scale distributed and decentralized learning over wireless networks, in particular under constraints on completion time, radio resources, computational efficiency, etc. Finally, we highlight key theoretical and practical challenges, together with emerging topics such as trustworthiness, fairness, and energy efficiency.

Teaching and Learning Methods: Lectures, exercise sessions, and lab sessions. Each lecture starts by summarizing key concepts from the previous one. Part of each lecture is often dedicated to illustrative examples and exercises.

Course Policies

Attendance at the lab sessions is mandatory. Attendance at lectures and exercise sessions is highly recommended.

Bibliography
  • Book:  SHALEV-SHWARTZ S., BEN-DAVID S. Understanding Machine Learning. Cambridge University Press, 2014, 410p.

  • Book: MOHRI M., ROSTAMIZADEH A., TALWALKAR A. Foundations of Machine Learning. MIT Press, 2012, 412p.

Requirements

Basic knowledge of linear algebra, probability, and calculus

Description

1. Machine Learning Techniques

  • Preliminaries & Recap on ML basics
  • Fundamentals of deep learning
  • Autoencoders and End-to-End Communication Systems
  • Deep generative models (VAEs and GANs)
  • Applications to autonomous networked systems and Internet of Intelligent Things (IoIT)
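To illustrate the autoencoder concept covered in this part, the following is a minimal sketch (not course material) of a linear autoencoder trained by gradient descent on toy data; all names, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8-D that actually live on a 2-D subspace.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
X = latent @ mixing

# Linear autoencoder: encoder W_e (8 -> 2), decoder W_d (2 -> 8).
W_e = rng.normal(scale=0.1, size=(8, 2))
W_d = rng.normal(scale=0.1, size=(2, 8))

def loss(X, W_e, W_d):
    """Mean-squared reconstruction error."""
    R = X @ W_e @ W_d
    return np.mean((X - R) ** 2)

lr = 0.01
initial = loss(X, W_e, W_d)
for _ in range(500):
    Z = X @ W_e                      # encode
    R = Z @ W_d                      # decode
    E = R - X                        # reconstruction error
    # Gradients of the reconstruction loss (constant factors absorbed in lr).
    grad_Wd = Z.T @ E / len(X)
    grad_We = X.T @ (E @ W_d.T) / len(X)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

final = loss(X, W_e, W_d)
print(final < initial)  # training reduces the reconstruction error
```

In end-to-end communication systems, the same encoder/decoder pair is interpreted as transmitter and receiver, with a noisy channel inserted between the two matrices.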

2. Theoretical Aspects

  • Information-theoretic measures
  • Statistical distances
  • Information bottleneck and rate-distortion theory
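As a pointer to the kind of formalism covered here, the information bottleneck seeks a compressed representation $T$ of an input $X$ that stays informative about a target $Y$, trading off the two mutual informations:

\[
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y),
\qquad
I(X;T) \;=\; \sum_{x,t} p(x,t) \log \frac{p(x,t)}{p(x)\,p(t)},
\]

where $\beta > 0$ controls the compression–relevance trade-off; for large $\beta$ the representation retains more information about $Y$ at the cost of less compression.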

3. Distributed Machine Learning over Networks

  • Distributed optimization in resource-constrained systems
  • Communication-Efficient Distributed Edge Learning
  • Federated learning
  • Decentralized learning
  • Low-latency and on-device AI
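As a taste of the federated learning topic above, the following is a hedged toy sketch of federated averaging (FedAvg-style): clients take a few local gradient steps and a server averages their models. The linear model, client data, and hyperparameters are all illustrative assumptions, not the course's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each client holds a local dataset for the same linear model y = x @ w_true.
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ w_true
    clients.append((X, y))

def local_sgd(w, X, y, lr=0.05, steps=10):
    """A few local gradient steps on one client's least-squares loss."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

w = np.zeros(2)
for _ in range(20):  # communication rounds
    # Clients train locally in parallel; the server averages the models.
    local_models = [local_sgd(w, X, y) for X, y in clients]
    w = np.mean(local_models, axis=0)

print(np.allclose(w, w_true, atol=1e-2))  # global model recovers w_true
```

The communication-efficiency question studied in this part is visible even in this sketch: only model vectors, never raw data, cross the network, and the number of rounds (here 20) is the quantity to minimize under radio-resource constraints.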

4. Reinforcement Learning

  • Markov decision processes
  • Q-learning and Policy Optimization methods
  • Deep Reinforcement Learning (DRL)
  • Multi-agent systems
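To give a flavor of the tabular Q-learning covered in this part, the following is a minimal sketch on a hypothetical 4-state chain environment (the MDP, reward, and hyperparameters are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny deterministic chain MDP: states 0..3, actions 0 (left) / 1 (right).
# Reaching state 3 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 4, 2, 3

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    done = s2 == GOAL
    return s2, (1.0 if done else 0.0), done

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(300):                # episodes
    s = 0
    for _ in range(20):             # step cap per episode
        # Epsilon-greedy action selection.
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap with the greedy value of the next state.
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)
print(policy[:GOAL])  # greedy policy moves right in every non-goal state
```

Deep reinforcement learning, treated later in this part, replaces the table Q with a neural network so that the same update rule scales to large or continuous state spaces.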

5. Emerging Topics

  • Trustworthiness and Fairness
  • Explainability & Interpretability
  • Sustainable and Green AI

Learning outcomes:

  • Understand the fundamentals of machine learning and deep learning

  • Be able to apply learning algorithms to communication problems and networked systems

  • Understand the communication aspects involved in distributed edge learning

  • Be able to follow recent developments and emerging directions in ML theory and applications.

Nb hours: 42.00

Evaluation:

  • Lab reports (30% of the final grade)
  • Final exam (70% of the final grade), written