
Article Information

  • Title: Visual recognition for urban traffic data retrieval and analysis in major events using convolutional neural networks
  • Authors: Yalong Pi; Nick Duffield; Amir H. Behzadan
  • Journal: Computational Urban Science
  • Electronic ISSN: 2730-6852
  • Year: 2022
  • Volume: 2
  • Issue: 1
  • Pages: 1-16
  • DOI: 10.1007/s43762-021-00031-w
  • Language: English
  • Publisher: Springer
  • Abstract: Accurate and prompt traffic data are necessary for the successful management of major events. Computer vision techniques, such as convolutional neural networks (CNNs) applied to video monitoring data, can provide a cost-efficient and timely alternative to traditional data collection and analysis methods. This paper presents a framework designed to take videos as input and output traffic volume counts and intersection turning patterns. The framework first uses a CNN model and an object tracking algorithm to detect and track vehicles in the camera's pixel view. Homographic projection then maps vehicle spatial-temporal information (including unique ID, location, and timestamp) onto an orthogonal real-scale map, from which the traffic counts and turns are computed. Several videos are manually labeled and compared with the framework output; the results show a robust traffic volume count accuracy of up to 96.91%. Moreover, this work investigates performance-influencing factors including lighting condition (over a 24-h period), pixel size, and camera angle. Based on the analysis, it is suggested to place cameras such that the detection pixel size is above 2343 and the view angle is below 22°, for more accurate counts. Next, previous and current traffic reports after Texas A&M home football games are compared with the framework output. Results suggest that the proposed framework is able to reproduce traffic volume change trends for different traffic directions. Lastly, this work also contributes a new intersection turning pattern metric, i.e., counts for each ingress-egress edge pair, with an optimization technique that results in an accuracy between 43% and 72%.
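
The projection step described in the abstract maps tracked vehicle positions from the camera's pixel view onto an orthogonal real-scale map before counts and turns are computed. The sketch below illustrates that idea with OpenCV; the correspondence points, the track record format, and the helper name pixel_track_to_map are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a pixel-to-map homographic projection (illustrative only;
# the point values, track format, and helper name are assumptions, not the
# paper's code).
import numpy as np
import cv2

# Corresponding points: image pixels -> real-scale map coordinates (e.g., metres).
pixel_pts = np.array([[102, 540], [1180, 560], [880, 220], [310, 210]], dtype=np.float32)
map_pts = np.array([[0.0, 0.0], [30.0, 0.0], [30.0, 45.0], [0.0, 45.0]], dtype=np.float32)

# Homography from the camera's pixel view to the orthogonal map.
H, _ = cv2.findHomography(pixel_pts, map_pts)

def pixel_track_to_map(track):
    """Project a tracked vehicle's (track_id, x_pixel, y_pixel, timestamp)
    records onto the real-scale map, preserving ID and timestamp."""
    pts = np.array([[x, y] for _, x, y, _ in track], dtype=np.float32).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    return [(tid, float(mx), float(my), ts)
            for (tid, _, _, ts), (mx, my) in zip(track, mapped)]

# Example: one vehicle's detections in pixel space across three frames.
track = [(7, 420, 380, 0.0), (7, 450, 360, 0.5), (7, 480, 345, 1.0)]
print(pixel_track_to_map(track))
```

In this setting, counts and ingress-egress turning patterns would then be derived from the mapped trajectories rather than from raw pixel coordinates, which is what makes the results scale-consistent across camera placements.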