Abstract: This article details a threat to the formal verification of neural networks (NNs) that is well known from the formal verification of classical systems: errors in the learned model of an NN can cause the NN to pass formal verification of a property while violating that same property in real life. The solution adopted for classical systems, expert review, is inadequate for NNs because of their lack of explainability. We therefore propose a detection and recovery mechanism to tolerate this threat, based on a mathematical diversification of the system's model and on online verification of the formal safety properties. The mechanism was successfully implemented and validated on an application example which is, to our knowledge, one of the most concrete applications of NN formal verification in the literature: the Adaptive Cruise Control (ACC) function of an autonomous car.
Keywords: Dependability; Neural approximations for optimal control; Estimation; Applications; Fault tolerant control; Reconfigurable control
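To make the detection-and-recovery idea concrete, the following is a minimal sketch, not the authors' implementation: at each control step, a formal ACC safety property (here, a safe following distance) is checked online against the current state, and on violation the NN command is replaced by a fallback braking command. All function names, parameters (`t_react`, `a_brake`), and the stopping-distance formula are illustrative assumptions.

```python
# Hypothetical sketch of online verification of an ACC safety property
# with recovery: keep the NN's command while the property holds, switch
# to a fallback controller (hard braking) the moment it is violated.

def safe_distance(v_ego: float, v_lead: float, t_react: float = 0.5,
                  a_brake: float = 6.0) -> float:
    """Minimal gap (m) so the ego car can stop behind the lead car.

    Assumes a reaction delay t_react (s) for the ego car and the same
    maximal braking deceleration a_brake (m/s^2) for both vehicles.
    """
    d_ego = v_ego * t_react + v_ego ** 2 / (2 * a_brake)  # ego stopping distance
    d_lead = v_lead ** 2 / (2 * a_brake)                  # lead stopping distance
    return max(0.0, d_ego - d_lead)

def acc_step(gap: float, v_ego: float, v_lead: float,
             nn_accel: float, fallback_accel: float = -6.0) -> float:
    """Return the NN command if the safety property holds, else recover."""
    if gap >= safe_distance(v_ego, v_lead):
        return nn_accel       # property verified online: trust the NN output
    return fallback_accel     # detection: property violated -> fallback braking

# Large gap: the NN command passes; tiny gap at speed: fallback kicks in.
print(acc_step(gap=50.0, v_ego=20.0, v_lead=20.0, nn_accel=1.0))  # 1.0
print(acc_step(gap=2.0, v_ego=30.0, v_lead=0.0, nn_accel=1.0))    # -6.0
```

This illustrates only the runtime-monitoring half of the mechanism; the model diversification described in the article would supply independently derived bounds (e.g. a second `safe_distance` computed from a diversified model) so that an error in one model is caught by the other.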