ELEC 571V.101 – Computational Control (COCO)
Winter Term, 2025–26. Instructor: Alberto Padoan. Links: Website | Canvas | Piazza.
Lectures
Time: Mondays and Wednesdays, 12:30–14:00.
Room: UBCV | Hector J. MacLeod Building (MCLD) | Floor 3 | Room 3002
Office Hours
Primarily via Piazza; alternatively, after lectures or by email appointment.
Credits
Units: 3. Letter grade.
Course Description
This graduate course offers an introduction to modern computational methods for feedback control of complex dynamical systems, including:
Dynamic programming and linear quadratic optimal control
Model Predictive Control (MPC)
Data-Driven Predictive Control (DDPC)
Markov Decision Processes (MDPs)
Monte Carlo learning and Reinforcement Learning (RL)
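As a small taste of the dynamic-programming and LQR material, the finite-horizon linear quadratic problem for a scalar system can be solved with a backward Riccati recursion. The sketch below is illustrative only (not course-provided code; the system and cost parameters are arbitrary):

```python
# Finite-horizon LQR for a scalar system x[k+1] = a*x[k] + b*u[k]
# with stage cost q*x^2 + r*u^2, solved by the backward Riccati recursion.
# Illustrative sketch only; all parameter values are made up.

a, b = 1.0, 1.0      # dynamics
q, r = 1.0, 1.0      # stage- and terminal-cost weights
N = 50               # horizon length

# Backward pass: P is the cost-to-go weight, K the stage feedback gain.
P = q                # terminal condition P[N] = q
gains = []
for _ in range(N):
    K = a * b * P / (r + b * b * P)                       # optimal gain
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    gains.append(K)
gains.reverse()      # gains[k] now corresponds to time step k

# Forward simulation of the closed loop u[k] = -K[k] * x[k].
x = 5.0
for K in gains:
    u = -K * x
    x = a * x + b * u

print(P)   # approaches the stationary Riccati solution (the golden ratio here)
print(x)   # state driven essentially to the origin
```

For these weights the stationary Riccati equation reduces to P² = 1 + P, so P converges to (1 + √5)/2; the recursion and the closed-loop decay make that easy to check numerically.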
Learning Objectives
Students completing this course should be able to:
Reason about control problems beyond classical methods (e.g., PID)
Formulate control tasks with uncertainty, safety, and performance constraints
Design controllers using optimization-based and data-driven techniques
Implement control algorithms (e.g., MPC, DeePC) in Python
Critically analyze current research in computational control
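To illustrate the flavor of optimization-based controller design mentioned above, here is a minimal receding-horizon (MPC-style) sketch for a scalar system with an input bound, with each open-loop problem solved by projected gradient descent using the adjoint (costate) recursion for the gradient. Everything here (system, weights, step size, horizon) is an invented toy example, not course code:

```python
# Minimal receding-horizon control sketch for x[k+1] = a*x[k] + b*u[k]
# with cost sum(q*x^2 + r*u^2) and input bound |u| <= u_max.
# Each open-loop problem is solved by projected gradient descent;
# the gradient comes from the adjoint/costate recursion. Toy example only.

a, b = 1.0, 1.0
q, r = 1.0, 0.1
u_max = 1.0
N = 10                     # prediction horizon

def plan(x0, iters=300, step=0.005):
    """Return a (constrained) input sequence minimizing the N-step cost."""
    u = [0.0] * N
    for _ in range(iters):
        # Forward rollout of the predicted trajectory.
        x = [x0]
        for k in range(N):
            x.append(a * x[k] + b * u[k])
        # Backward costate recursion gives the gradient of the cost.
        lam = 2 * q * x[N]
        grad = [0.0] * N
        for k in reversed(range(N)):
            grad[k] = 2 * r * u[k] + b * lam
            lam = 2 * q * x[k] + a * lam
        # Projected gradient step: clip inputs into [-u_max, u_max].
        u = [min(u_max, max(-u_max, u[k] - step * grad[k])) for k in range(N)]
    return u

# Receding horizon: apply only the first planned input, then re-plan.
x = 5.0
trajectory = [x]
for _ in range(15):
    u0 = plan(x)[0]
    x = a * x + b * u0
    trajectory.append(x)

print(trajectory)   # input saturates at -u_max while the state is far from zero
```

Far from the origin the input constraint is active, so the early closed-loop steps move the state by exactly u_max per step; near the origin the controller behaves like unconstrained LQR.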
Course Schedule (tentative)
Weeks 1–2: Introduction and overview; recap of convex optimization - Slides
Weeks 2–3: Dynamic programming and linear quadratic optimal control
Weeks 3–5: Model Predictive Control and variants
Weeks 5–6: Elements of subspace system identification
Weeks 6–7: Elements of behavioral system theory
Weeks 7–8: Data-driven predictive control
Weeks 9–10: Markov Decision Processes
Weeks 10–11: Elements of Monte Carlo learning
Weeks 11–12: Elements of Reinforcement Learning
Week 12: Recap week — course projects due
Week 13: Wrap-up
Material & References
All lecture materials — slides, annotated slides, exercises, and notebooks — are available in this online folder. New materials are added before each lecture or shortly afterwards.
In addition to the lecture slides, check out the Resources page and the following references:
S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
J.B. Rawlings, D.Q. Mayne, and M. Diehl, Model Predictive Control: Theory, Computation, and Design. 2nd ed., Nob Hill Publishing, 2017.
R.S. Sutton and A.G. Barto, Reinforcement Learning: An Introduction. 2nd ed., MIT Press, 2018.
Prerequisites
Background in calculus, linear algebra, and control fundamentals is expected.
Prior exposure to convex optimization, linear systems, or Lyapunov theory is helpful but not required.
Familiarity with Python is required (the course project uses Python notebooks).
Assessment
10% – Participation (class and Piazza)
40% – Final project, comprising:
  30% – Written report (mini-paper, due end of term)
  10% – Python notebook (functional, well-documented, reproducible)
50% – Final written exam (closed book, no aids; December, date TBA; no oral exam)
Note: Use of generative AI is permitted as a research tool. However, all submitted work must reflect the student’s own understanding. Work primarily generated by AI will receive a grade of zero.
Course Project
The project consists of a written report and a Python notebook. Students will apply course concepts to a system or problem of their choice. Goals:
Identify a challenging control problem and explain why existing strategies are inadequate.
Propose an advanced control approach inspired by the course material.
Demonstrate its effectiveness and suitability through simulations and/or a paper-style analysis.
Benchmark problems and simulators will be suggested, but original ideas are welcome.
Late policy
Deadlines are firm — late work or missed assessments will not be graded, consistent with the Academic Calendar on Grading Practices.
Disclaimers
The course material is adapted from “Computational Control”, developed by S. Bolognani and colleagues at ETH Zürich.
Lectures and course materials, including presentations, tests, outlines, and similar materials, are licensed under a
Creative Commons Attribution-ShareAlike 4.0 International License.
This is not an official UBC course webpage; it is maintained personally by the instructor. As this is the first offering of the course, please expect occasional adjustments to materials and logistics.
Feedback
If you have suggestions or found the material useful, I would be happy to hear from you. Please use: alberto [DOT] padoan [@] ubc [DOT] ca