Abstract: In dual control, the manipulated variables are used to both regulate the system and identify unknown parameters. The joint probability distribution of the system state and the parameters is known as the hyperstate. The paper proposes a method to perform dual control using a deep reinforcement learning algorithm in combination with a neural network model trained to represent hyperstate transitions. The hyperstate is compactly represented as the parameters of a mixture model that is fitted to Monte Carlo samples of the hyperstate. This representation is used to train a hyperstate transition model, which a standard reinforcement learning algorithm then uses to find a dual control policy. The method is evaluated on a simple nonlinear system, which illustrates a situation where probing is needed, but the method can also scale to high-dimensional systems. The method is demonstrated to learn a probing strategy that reduces the uncertainty of the hyperstate, resulting in improved control performance.
Keywords: reinforcement learning control; adaptive control; nonlinear adaptive control; adaptive control by neural networks; stochastic optimal control problems; Bayesian methods
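A minimal sketch of the compact hyperstate representation described in the abstract, under assumptions not taken from the paper: a hypothetical scalar system with one unknown gain parameter supplies Monte Carlo samples of the joint (state, parameter) distribution, a Gaussian mixture is fitted to those samples with plain EM, and the flattened mixture parameters (weights, means, covariances) serve as the compact hyperstate vector that a transition model could be trained on.

```python
import numpy as np

def sample_hyperstate(n, rng):
    # Hypothetical example system: Monte Carlo draws of the hyperstate,
    # i.e. joint samples of the state x and an unknown gain parameter a.
    a = rng.normal(1.0, 0.3, size=n)       # posterior samples of the parameter
    x = rng.normal(0.0, 0.5, size=n) * a   # state samples correlated with a
    return np.stack([x, a], axis=1)

def fit_gmm(samples, k=2, iters=50, seed=0):
    # Plain EM for a k-component Gaussian mixture; the fitted parameters
    # are the compact representation of the hyperstate.
    n, d = samples.shape
    rng = np.random.default_rng(seed)
    means = samples[rng.choice(n, size=k, replace=False)].copy()
    covs = np.array([np.cov(samples.T) + 1e-6 * np.eye(d)] * k)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        resp = np.empty((n, k))
        for j in range(k):
            diff = samples - means[j]
            inv = np.linalg.inv(covs[j])
            norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(covs[j]))
            resp[:, j] = weights[j] * norm * np.exp(
                -0.5 * np.sum(diff @ inv * diff, axis=1))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances
        nk = resp.sum(axis=0)
        weights = nk / n
        for j in range(k):
            means[j] = resp[:, j] @ samples / nk[j]
            diff = samples - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return weights, means, covs

rng = np.random.default_rng(42)
samples = sample_hyperstate(2000, rng)
weights, means, covs = fit_gmm(samples, k=2)
# Flattened mixture parameters form the compact hyperstate vector:
# 2 weights + 2x2 means + 2x2x2 covariances = 14 numbers.
compact = np.concatenate([weights, means.ravel(), covs.ravel()])
print(compact.shape)
```

The dimension of `compact` is fixed by the number of mixture components, so the hyperstate transition model sees a fixed-size input regardless of how many Monte Carlo samples were used.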