Abstract: Manufacturers are increasingly under pressure to develop dynamic production systems and supply networks that can adjust to climate, political, and social changes anywhere in the world at any time. Adoption of the Industry 4.0 paradigm helps achieve these objectives. Modern production systems require a high level of manufacturing flexibility. At the same time, to keep up with the competition, manufacturers must commit to specified delivery dates. In recent years, there has been growing interest in employing machine learning, particularly reinforcement learning (RL), to solve production scheduling problems of varying complexity. The general approach is to formulate the scheduling problem as a Markov Decision Process (MDP) and then train an RL agent in a simulation that implements the MDP. In this context, this paper presents a dispatching rule based on a deep reinforcement learning (DRL) algorithm and evaluates it in an application environment. The DRL approach uses a Deep Q-Network (DQN) as the training algorithm for the learning agent. The network's task is to identify the position of the job to be executed next. The objective is to present an algorithm that takes both due dates and the state of the production line into account, scheduling jobs so that due dates are met while productivity is increased. A flow shop configuration is considered, and the performance of the proposed method is compared with that of dispatching rules already proposed in the scientific literature. To do so, the settings of the DRL algorithm must be specified, namely the state space, the reward function, and the hyperparameters, whereas the action is the choice of which job to introduce into the production line. The overall objective of this research is to provide a general scheduling tool that can be applied in a variety of situations, including unexpected ones.
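To make the described action structure concrete, the sketch below illustrates, under stated assumptions, how a DQN-style agent could map a production-line state vector to a choice among queued jobs. The network architecture, the state composition (machine utilisation plus slack to due date), and all names and dimensions are illustrative assumptions for this sketch, not the implementation evaluated in the paper.

```python
import torch
import torch.nn as nn


class DispatchQNetwork(nn.Module):
    """Maps a production-line state vector to one Q-value per queue position."""

    def __init__(self, state_dim: int, num_queue_positions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_queue_positions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_job(q_net: DispatchQNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice of which queued job to release into the line."""
    num_actions = q_net.net[-1].out_features
    if torch.rand(1).item() < epsilon:
        return int(torch.randint(0, num_actions, (1,)).item())
    with torch.no_grad():
        return int(q_net(state).argmax().item())


# Hypothetical state: utilisation of three machines followed by the slack to
# the due date of each of the four queued jobs (values are illustrative only).
state = torch.tensor([0.7, 0.4, 0.9, 0.2, 1.5, -0.3, 2.1], dtype=torch.float32)
q_net = DispatchQNetwork(state_dim=7, num_queue_positions=4)
print(select_job(q_net, state, epsilon=0.1))
```

In such a setup, the reward signal would combine due-date adherence and throughput so that the trained policy balances the two objectives described above; the precise reward function and hyperparameters are specified in the body of the paper.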