Program
Address: 10th floor, Melbourne Law (Building 106), 185 Pelham St, Carlton VIC 3053
- Electricity grids are undergoing an unprecedented transformation towards a more decentralised energy future as many countries transition to a low-emission economy in support of action on climate change. A decentralised yet integrated energy future requires electricity grids to remain responsive in delivering traditional services while enabling new opportunities for sharing and balancing distributed generation, storage and loads. The growth in non-dispatchable electricity generation, particularly from renewable sources, places greater reliance on demand response – the non-disruptive control of loads – for maintaining the balance between generation and demand. Participation in fast-acting demand response can be maximised by aggregating many small electrical loads, such as air conditioners (ACs). The potential of ACs to provide demand-response network services has been actively investigated in recent years. Yet, while the majority of existing methods are based on models of traditional ACs with fixed-speed (on/off) compressor operation, ACs with a variable-speed (inverter-driven) compressor dominate today's market owing to their high efficiency and flexibility. This presentation will provide an overview of recent advances in the modelling and control of aggregate AC load, with a focus on ensembles of variable-speed ACs. It will discuss important peculiarities in the aggregate demand response of variable-speed AC loads compared with that of traditional fixed-speed AC loads. Numerical examples will illustrate the application of MPC strategies to control aggregate AC demand via a common approach based on remote thermostat manipulation, and via implementation of the direct demand-response modes prescribed by the Australian Standard AS4755.
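As a rough illustration of the remote-thermostat approach for fixed-speed units, the following sketch simulates an ensemble of thermostatically controlled ACs and the aggregate demand reduction from a setpoint offset. It is a toy first-order thermal model with parameters of our own choosing; it does not reproduce the variable-speed models or the MPC strategies of the talk.

```python
import numpy as np

def simulate(setpoint, hours=4.0, N=200, seed=0):
    """Ensemble of fixed-speed ACs under thermostat deadband control.

    First-order thermal model per unit (all parameters illustrative,
    not from the talk): theta' = -a*(theta - ambient) - b*m, m in {0,1}.
    """
    rng = np.random.default_rng(seed)
    dt, ambient, db = 60.0, 32.0, 1.0        # step [s], ambient [C], deadband [C]
    a = rng.uniform(1.0, 1.5, N) / 7200.0    # thermal leakage rates [1/s]
    b = rng.uniform(3.5, 4.5, N) / 1000.0    # cooling rates when on [C/s]
    p = rng.uniform(2.0, 3.0, N)             # rated electrical powers [kW]
    theta = setpoint + rng.uniform(-db / 2, db / 2, N)
    m = rng.integers(0, 2, N).astype(float)  # compressor on/off states
    demand = []
    for _ in range(int(hours * 3600 / dt)):
        theta = theta + dt * (-a * (theta - ambient) - b * m)
        m = np.where(theta > setpoint + db / 2, 1.0,
            np.where(theta < setpoint - db / 2, 0.0, m))
        demand.append(float(p @ m))          # aggregate demand [kW]
    return np.array(demand)

base = simulate(setpoint=22.0).mean()     # nominal aggregate demand
raised = simulate(setpoint=24.0).mean()   # demand-response event: setpoints +2 C
```

Raising all setpoints lowers the mean aggregate demand; a setpoint offset of this kind is the handle an aggregator's MPC would manipulate.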
- Control systems are being connected to networks (the internet, for example) in order to benefit from the huge amount of online data and ever-increasing computing capability. This change brings new opportunities as well as new challenges, such as computing optimal control strategies for large-scale systems subject to real-time constraints, and providing security and privacy guarantees while maintaining satisfactory system performance. This work focuses on developing novel distributed optimisation algorithms for these emerging problems. More precisely, we will discuss inexact splitting methods (also known as inexact alternating direction methods in optimisation) and show how to use them to solve distributed model predictive control (MPC) problems with limited computation and communication resources. Furthermore, we will present novel distributed optimisation algorithms with privacy guarantees on local information, and discuss the trade-off between algorithm performance and privacy level.
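For readers unfamiliar with splitting methods, a minimal (exact, not inexact) consensus ADMM iteration for a toy problem looks as follows; the local objectives, penalty parameter and iteration count are our own choices, not from the talk.

```python
import numpy as np

# Consensus ADMM sketch: N agents jointly minimise sum_i (x - a_i)^2,
# whose minimiser is mean(a). Each agent keeps a local copy x_i and a
# dual variable u_i; only x_i + u_i is exchanged for the averaging step.
a = np.array([1.0, 4.0, -2.0, 7.0])   # agents' private data (illustrative)
rho = 1.0                             # ADMM penalty parameter
x = np.zeros_like(a)
u = np.zeros_like(a)
z = 0.0                               # consensus variable
for _ in range(200):
    x = (2 * a + rho * (z - u)) / (2 + rho)   # local proximal steps
    z = np.mean(x + u)                        # averaging (communication)
    u = u + x - z                             # dual updates
# z converges to mean(a) = 2.5
```

Each agent solves only a small local problem; the averaging step is the sole communication, which is what makes the scheme distributed. Inexact variants replace the local step with an approximate solve to save computation.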
- Optimal power flow (OPF) problems are fundamental for power system operations. They are nonconvex and, in future applications, time-varying. We present a first-order proximal primal-dual algorithm and a second-order algorithm for general time-varying nonconvex optimization and bound their tracking performance. We incorporate real-time feedback in our algorithms for applications to time-varying OPF problems, and illustrate their tracking performance numerically.
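The flavour of such tracking guarantees can be seen in a one-dimensional sketch (our own toy problem, not the OPF setting): one gradient step per time instant tracks a slowly moving minimiser with bounded error, rather than solving each instance to optimality.

```python
import numpy as np

# Running (online) gradient tracking for min_x f_t(x) = (x - c_t)^2,
# where the minimiser c_t drifts slowly. A single first-order step is
# taken per time instant, so the iterate tracks the moving optimum with
# an error bounded by the drift rate over the contraction margin.
T, eta = 200, 0.4
x = 0.0
errs = []
for t in range(T):
    c = np.sin(0.05 * t)          # slowly moving minimiser
    x = x - eta * 2 * (x - c)     # one gradient step on f_t
    errs.append(abs(x - c))       # instantaneous tracking error
# after a short transient, errs stays small but nonzero
```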
- The problem of incorporating future information, in the form of estimated future gradients, into online convex optimisation is investigated. This is motivated by demand response in power systems, where forecasts about the current round, built from historical data (e.g., the weather or the loads’ behaviour), can be used to improve performance. Specifically, we introduce an additional predictive step that follows the standard online convex optimisation step when certain conditions on the estimated gradient and descent direction are met. We show that under these conditions, and without any extra assumptions on the predictability of the environment, the predictive update strictly improves on the performance of the standard update. We present two types of predictive update for various families of loss functions, and provide a regret bound for each of the resulting predictive online convex optimisation algorithms. We apply our framework to an example based on demand response, which demonstrates its superior performance over a standard online convex optimisation algorithm.
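A minimal sketch of the idea follows. The alignment test below is a simple stand-in for the paper's conditions, and the "forecast" is simulated noise; losses, step size and horizon are all our own choices.

```python
import numpy as np

# Online gradient descent with an optional predictive step: after the
# standard update, an estimated gradient for the coming round drives a
# second step, taken only when it does not oppose the direction of the
# step just made (a stand-in condition for the ones in the talk).
rng = np.random.default_rng(1)
eta, T = 0.1, 300
theta = np.array([1.0, -2.0])          # hidden target of f_t(x) = ||x - theta||^2
x = np.zeros(2)
for t in range(T):
    g = 2 * (x - theta)                # gradient revealed this round
    x = x - eta * g                    # standard OCO step
    g_hat = g + rng.normal(0, 0.1, 2)  # noisy forecast of the next gradient
    if g_hat @ g > 0:                  # predictive step only when aligned
        x = x - eta * g_hat
# x ends up close to theta
```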
- This talk considers a version of the Wiener filtering problem for equalization of passive linear quantum systems. We demonstrate that taking into consideration the quantum nature of the signals involved leads to features typically not encountered in classical equalization problems. Most significantly, finding a mean-square optimal quantum equalizing filter amounts to solving a nonconvex constrained optimization problem. We discuss two approaches to solving this problem, both involving a relaxation of the constraint. In both cases, unlike classical equalization, there is a threshold on the variance of the noise below which an improvement of the mean-square error cannot be guaranteed.
- A common, sometimes implicit, assumption within the privacy literature is the use of randomization. Differential privacy and identifiability, two popular and related definitions from the privacy literature, assume the use of randomized functions. Information-theoretic privacy often falls back on measures, such as entropy and mutual information, that are only defined for random variables. However, many popular heuristic privacy-preserving methods, such as k-anonymity, are deterministic: they employ deterministic mappings, such as suppression and generalization, and are applied to non-stochastic datasets. Randomized, or stochastic, privacy-preserving policies have been shown to cause problems, such as un-truthfulness and the generation of unreasonable or unrealistic outputs, which can be undesirable in practice. Motivated by these observations, in this talk we develop deterministic privacy frameworks. We first borrow some useful concepts from the sparse literature on non-stochastic information theory and generalize other concepts from the information theory and signal processing literature to the non-stochastic setting. In particular, we use a worst-case measure of uncertainty and the maximin information to measure private information leakage. We pose the problem of determining deterministic privacy-preserving policies as maximization of the measure of privacy subject to a constraint on the worst-case distortion. This way, we can show that the optimal privacy-preserving policy is a piecewise constant function in the form of a quantization operator. We show that we can use our privacy metrics to analyse k-anonymity, proving that it is in fact not privacy-preserving. Finally, we take a slight detour to develop a theory of non-stochastic hypothesis testing, generalizing the concept of identifiability from the privacy literature to the non-stochastic setting.
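The piecewise constant structure of such a policy can be illustrated with a uniform quantizer. The talk derives the optimal, generally non-uniform, cells from the maximin-information objective; everything below is a toy example of our own.

```python
import numpy as np

def quantise(x, lo=0.0, hi=100.0, cells=5):
    """Deterministic privacy-preserving policy as a quantisation operator.

    The reported value is the centre of the cell containing the true
    value, so the map is piecewise constant and the worst-case
    distortion is half the cell width (here (hi - lo)/cells / 2 = 10).
    """
    width = (hi - lo) / cells
    idx = np.clip(np.floor((x - lo) / width), 0, cells - 1)
    return lo + (idx + 0.5) * width   # cell centre

data = np.array([3.0, 41.0, 59.9, 97.0])   # hypothetical private values
out = quantise(data)
# out -> [10.0, 50.0, 50.0, 90.0]; distinct inputs 41.0 and 59.9 become
# indistinguishable, while |x - quantise(x)| never exceeds 10
```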
- In this work, we discuss potential computational privacy risks in distributed computing protocols, which take the form of structured system identification, and then propose and thoroughly analyze a Privacy-Preserving-Summation-Consistent (PPSC) mechanism as a generic privacy-encryption subroutine for consensus-based distributed computations. The central idea is that the consensus manifold is where we can both hide node privacy and achieve computational accuracy. It turns out that conventional deterministic and random gossip algorithms can be used to realize the PPSC mechanism over a given network. We establish concrete privacy-preservation conditions by proving the impossibility of reconstructing the network input from the output of the gossip-based PPSC mechanism for eavesdroppers with full network knowledge, and by showing that the PPSC mechanism can achieve differential privacy at arbitrary privacy levels.
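A toy stand-in for the gossip realisation: pairwise re-randomisations that preserve the network-wide sum, so any consensus computation depending only on the sum is unaffected while individual inputs are masked. This omits the graph structure and the differential-privacy calibration discussed in the talk.

```python
import numpy as np

# Summation-consistent gossip sketch: at each step a random pair of
# nodes re-randomises their two values while keeping their pairwise
# (hence the network-wide) sum fixed.
rng = np.random.default_rng(42)
x = np.array([5.0, 1.0, -3.0, 7.0])    # private node inputs (illustrative)
total = x.sum()
for _ in range(100):
    i, j = rng.choice(len(x), size=2, replace=False)
    r = rng.normal(0, 10.0)            # large pairwise re-randomisation
    x[i] += r
    x[j] -= r
# x no longer resembles the inputs, but x.sum() is unchanged
```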
- It is known that many optimal control problems encoded as dynamic programs can equivalently be characterised through the solution of a linear program (LP). For systems with continuous state and action spaces, the resulting LP involves an infinite number of decision variables (e.g. taking values in the space of real valued functions of the state) and an infinite number of constraints (e.g. one inequality constraint for each state-action pair). Replacing the infinite LP by a finite dimensional counterpart provides a method for approximating the solution of the original optimal control problem, in the spirit of Approximate Dynamic Programming. The question is how good the resulting approximation is. We show how recent results on optimal value approximation in randomised optimisation can be leveraged to derive probabilistic bounds on the value function error incurred in such LP-based ADP approximations. As an initial step in the direction of data driven control, we also discuss how (at least in the case of deterministic systems) the approximation can be carried out using sample paths of the system state evolution, bypassing the need for identifying a model. The talk is based on joint work with Angeliki Kamoutsi, Daniel Kuhn, Peyman Mohajerin, and Tobias Sutter.
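For a finite state and action space the LP characterisation can be written down explicitly; the following toy dynamic program (our own construction, with no approximation needed) recovers the optimal value function as the optimum of a small LP.

```python
import numpy as np
from scipy.optimize import linprog

# LP characterisation of a tiny dynamic program: 3 states, 2 actions,
# deterministic transitions, discount 0.9. Maximise sum_s v(s) subject
# to v(s) <= c(s,a) + gamma * v(s') for every state-action pair; the
# LP optimum equals the optimal value function.
gamma = 0.9
S = [0, 1, 2]
# action "stay": remain in s at cost s; action "move": go to max(s-1, 0) at cost 1
pairs = [(s, s, float(s)) for s in S] + [(s, max(s - 1, 0), 1.0) for s in S]
A, b = [], []
for s, s_next, cost in pairs:
    row = np.zeros(3)
    row[s] += 1.0
    row[s_next] -= gamma       # v(s) - gamma * v(s') <= c(s,a)
    A.append(row)
    b.append(cost)
res = linprog(c=[-1.0, -1.0, -1.0], A_ub=A, b_ub=b, bounds=[(None, None)] * 3)
v = res.x
# v -> approximately [0.0, 1.0, 1.9], matching value iteration by hand:
# V(0)=0, V(1)=1, V(2)=1+0.9*V(1)=1.9
```

The talk concerns the infinite-dimensional analogue of this LP and probabilistic guarantees for its finite restrictions.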
- Change is necessary in any dynamic environment, but there is always a cost incurred when implementing change; one of the most obvious is wear and tear on the physical components in a system. In the optimal control field, the cost of change is almost always ignored, and this can lead to “optimal” control strategies that are volatile and impractical to implement. This talk introduces a class of non-smooth optimal control problems in which the cost of change is incorporated via an objective term that penalizes the total variation of the control signal. We describe a computational method, based on a smoothing transformation and nonlinear programming, for solving this class of problems. We also discuss two applications in fisheries and crane control.
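The smoothing idea can be sketched on a toy tracking problem of our own construction: replace each |d| in the total-variation term with sqrt(d^2 + eps), which is smooth, so a standard NLP solver applies.

```python
import numpy as np
from scipy.optimize import minimize

# Total-variation-penalised control via a smoothing transformation.
# System: x_{k+1} = x_k + u_k, x_0 = 0, tracking a step reference.
T, eps = 30, 1e-6
r = np.where(np.arange(T) < 15, 1.0, -1.0)       # step reference

def cost(u, beta):
    x = np.cumsum(u)                              # state trajectory
    track = np.sum((x - r) ** 2)                  # tracking error
    tv = np.sum(np.sqrt(np.diff(u) ** 2 + eps))   # smoothed total variation
    return track + beta * tv

u0 = np.zeros(T)
u_free = minimize(cost, u0, args=(0.0,)).x        # no cost of change
u_tv = minimize(cost, u0, args=(2.0,)).x          # change penalised
tv = lambda u: np.sum(np.abs(np.diff(u)))         # true (non-smoothed) TV
# tv(u_tv) is substantially smaller than tv(u_free): the penalised
# control trades a little tracking accuracy for far less volatility
```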
- The field of fast MPC, or the use of embedded optimization for high-speed control, is growing rapidly in academia and increasingly in industry. Achieving the required extremely high-speed optimization, often within microseconds, on low-end embedded platforms calls for a wide range of heuristic procedures, both in the control design and in the implementation of the optimization algorithms themselves. This semi-heuristic process leads to complex control laws that can be very efficient, but that are also extremely difficult to tune and design. This talk will introduce a framework for the non-conservative analysis of many of the heuristics used in these controllers via a convex sum-of-squares approach. We will then build on this framework to develop a synthesis procedure by exploiting a link between neural networks and parametric quadratic programs, in order to produce very high-speed embedded optimization-based control laws, and finally give a number of examples.
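The link between parametric quadratic programs and neural networks rests on both defining piecewise affine maps. A one-dimensional illustration of our own:

```python
import numpy as np

# The solution map of a parametric QP is piecewise affine in the
# parameter, which is exactly the function class realised by ReLU
# networks. Here u*(x) = argmin_u 0.5*(u - x)^2 s.t. -1 <= u <= 1
# is clip(x, -1, 1), and the same map is a two-unit ReLU expression.
relu = lambda z: np.maximum(z, 0.0)
qp_solution = lambda x: np.clip(x, -1.0, 1.0)
relu_net = lambda x: x - relu(x - 1.0) + relu(-x - 1.0)

xs = np.linspace(-3, 3, 101)
# qp_solution(xs) and relu_net(xs) coincide everywhere
```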
- Connections between Hamilton’s action principle and optimal control are explored for energy conserving systems evolving on short and long time horizons. On short horizons, convexity of the action functional with respect to momentum trajectories is often guaranteed, ensuring that an associated optimal control problem is well-defined. Consequently, stationary action is achieved as least action, as encapsulated by the corresponding value function, and the associated equations of motion are described by the characteristic system corresponding to the attendant Hamilton-Jacobi-Bellman (HJB) partial differential equation (PDE). For longer horizons, the encapsulation of stationary (least) action within this optimal control formulation is lost, via loss of convexity of the action, leading to finite escape phenomena exhibited by the value function. This motivates the consideration of stationary control problems, as opposed to optimal control problems, whose value can propagate through these finite escape phenomena to longer horizons.
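In symbols (notation ours, one common form of the objects involved), the action functional and its least-action value over trajectories emanating from a given state are:

```latex
% Action functional on horizon [0, T], with kinetic-minus-potential Lagrangian:
\[
  S_T[q] \;=\; \int_0^T L\big(q(t), \dot q(t)\big)\,\mathrm{d}t,
  \qquad
  L(q, \dot q) \;=\; \tfrac{1}{2}\,\dot q^\top M \dot q \;-\; V(q),
\]
% and the associated least-action value function:
\[
  W(T, x) \;=\; \inf_{q(\cdot)\,:\,q(0) = x} \; S_T[q].
\]
% For short horizons T the infimum is attained (stationary action is least
% action) and W is characterised via an HJB PDE; for longer T, convexity of
% S_T with respect to the momentum trajectory can fail, and W may exhibit
% finite escape.
```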
- The control and design of multiple autonomous agents for distributed tasks rely on the structure of the agent-to-agent interaction topology. The topology can be encoded through a graph or network, giving rise to the term networked dynamic systems. Through the design or adaptation of the network topology, the propagation of information can be biased towards certain agents in the network, altering the system performance. Graph optimisation serves as a systematic method to perform this redesign. The talk will explore elements of this problem for agreement-based network dynamics, including distributed and online topology design, and the impact of the network structure on the performance of the optimisation algorithm.
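The agreement dynamics referred to above, and the role of the topology, can be seen in a few lines (the path graph below is an arbitrary choice of ours):

```python
import numpy as np

# Agreement (consensus) dynamics on a graph: x' = -L x, with L the graph
# Laplacian. States converge to the average of the initial values, at a
# rate governed by the algebraic connectivity lambda_2(L) -- the quantity
# that topology (re)design manipulates.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)      # path graph adjacency
L = np.diag(A.sum(1)) - A                # graph Laplacian
x = np.array([4.0, 0.0, -2.0, 6.0])     # initial agent states
avg = x.mean()
dt = 0.1
for _ in range(500):
    x = x - dt * (L @ x)                 # Euler step of x' = -L x
lam2 = np.sort(np.linalg.eigvalsh(L))[1] # algebraic connectivity
# x is now close to [avg, avg, avg, avg]; a larger lam2 (a better
# connected graph) would give faster agreement
```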