
Article Information

  • Title: Optimal PID and Antiwindup Control Design as a Reinforcement Learning Problem
  • Authors: Nathan P. Lawrence; Gregory E. Stewart; Philip D. Loewen
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Year: 2020
  • Volume: 53
  • Issue: 2
  • Pages: 236-241
  • DOI: 10.1016/j.ifacol.2020.12.129
  • Language: English
  • Publisher: Elsevier
  • Abstract: Deep reinforcement learning (DRL) has seen several successful applications to process control. Common methods rely on a deep neural network structure to model the controller or process. With increasingly complicated control structures, the closed-loop stability of such methods becomes less clear. In this work, we focus on the interpretability of DRL control methods. In particular, we view linear fixed-structure controllers as shallow neural networks embedded in the actor-critic framework. PID controllers guide our development due to their simplicity and acceptance in industrial practice. We then consider input saturation, leading to a simple nonlinear control structure. In order to effectively operate within the actuator limits, we then incorporate a tuning parameter for anti-windup compensation. Finally, the simplicity of the controller allows for straightforward initialization. This makes our method inherently stabilizing, both during and after training, and amenable to known operational PID gains. (An illustrative sketch of this controller structure follows the keywords below.)
  • Keywords: neural networks; reinforcement learning; actor-critic networks; process control; PID control; anti-windup compensation
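
The following is a minimal sketch, based only on the abstract above, of the kind of controller structure it describes: a PID law treated as a shallow, saturated "actor" whose gains (kp, ki, kd) and anti-windup parameter (here called rho) would play the role of trainable policy weights. The back-calculation form of anti-windup, the first-order plant, the gain values, and all names are illustrative assumptions, not the authors' implementation.

import numpy as np

class PIDActor:
    """PID law as a shallow 'actor': a saturated linear combination of the
    error, its integral, and its derivative."""
    def __init__(self, kp, ki, kd, rho, u_min=-1.0, u_max=1.0, dt=0.1):
        # (kp, ki, kd, rho) play the role of the trainable policy weights.
        self.kp, self.ki, self.kd, self.rho = kp, ki, kd, rho
        self.u_min, self.u_max, self.dt = u_min, u_max, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def act(self, error):
        derivative = (error - self.prev_error) / self.dt
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        u_sat = float(np.clip(u, self.u_min, self.u_max))  # actuator saturation
        # Back-calculation anti-windup: when u is clipped, rho * (u_sat - u)
        # bleeds the integrator so it does not wind up against the limit.
        self.integral += self.dt * (error + self.rho * (u_sat - u))
        self.prev_error = error
        return u_sat

# Illustrative rollout on an assumed stable first-order plant dx/dt = -x + u,
# tracking a constant setpoint; the gains here are placeholders, not tuned values.
actor = PIDActor(kp=2.0, ki=1.0, kd=0.1, rho=0.5)
x, setpoint = 0.0, 0.5
for _ in range(200):
    u = actor.act(setpoint - x)
    x += actor.dt * (-x + u)
print(f"state after rollout: {x:.3f}")

In the actor-critic setting the abstract describes, the four scalar parameters of such a controller would be the quantities updated during training, while the fixed PID-plus-saturation structure is what keeps the learned policy interpretable and easy to initialize from known operational gains.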