Abstract: The Dynamic Programming approach to the optimal control problem establishes that a necessary condition for optimality is that the minimum cost function (the Bellman functional) satisfies the Hamilton-Jacobi-Bellman equation. A sufficient condition is that if a functional satisfies the Hamilton-Jacobi-Bellman equation, then it is the minimum cost function. For linear time-delay systems, Krasovskii proposed the Bellman functional, and an optimal structure for the controller was reported. Dynamic Programming combined with functionals of prescribed derivative leads to an iterative procedure that yields a suboptimal control law at each step. Numerical evidence shows that these functionals are equivalent; however, their algebraic structures differ: the Bellman functional has only three terms, whereas the iterative functional is composed of thirteen summands. The algebraic relation between the two functionals is not readily apparent. The present contribution proves this connection by using Fubini's Theorem.
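For orientation only, the classical finite-dimensional Hamilton-Jacobi-Bellman equation is sketched below in LaTeX; this is the standard textbook form for a value function V(t,x) with running cost \ell and dynamics f, stated as an assumption for context, not the delay-functional version treated in the paper.

% Classical HJB equation (standard finite-dimensional form, assumed for illustration):
\[
  -\frac{\partial V}{\partial t}(t,x)
  \;=\;
  \min_{u}\Bigl\{\, \ell(x,u) \;+\; \nabla_x V(t,x)^{\top} f(x,u) \,\Bigr\},
  \qquad V(T,x)=\phi(x),
\]
% where \phi is the terminal cost. In the time-delay setting of the paper,
% V is replaced by a Bellman functional acting on the state's history segment.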
Keywords: Time-delay systems; Dynamic Programming; Bellman functional; Optimal control