Journal: International Journal of Grid and Distributed Computing
Print ISSN: 2005-4262
Year: 2015
Volume: 8
Issue: 5
Pages: 287-302
DOI: 10.14257/ijgdc.2015.8.5.29
Publisher: SERSC
Abstract: Effective allocation of limited shared resources is a key problem for chip multiprocessors. As the number of processor cores grows, competition among threads for these limited shared resources becomes more intense, and its impact on system performance becomes more significant. A fair and effective scheduling algorithm for allocating shared resources among threads is therefore important to alleviate this problem. Among the various shared resources, the shared cache and the DRAM system have the largest effect on system performance. There are essential differences between the last-level cache and a first-level cache. A first-level cache is designed to deliver data to the processor quickly, which demands high access speed. The last-level cache, by contrast, aims to keep as much data on chip as possible; its access-speed requirements are less strict, and it is constrained mainly by the number of transistors available on the die. The traditional LRU policy and its approximations used to manage first-level caches are not suitable for a large-capacity last-level cache: they can cause destructive interference between threads and cache thrashing induced by streaming-media programs, which degrades processor performance. This paper analyzes several hot problems in managing a large shared last-level cache on multi-core platforms and proposes corresponding low-cost solutions.
Keywords: Chip multiprocessor; last-level cache; cache partitioning; memory access
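The inter-thread interference the abstract describes can be illustrated with a minimal sketch. The workload, cache sizes, and partitioning scheme below are illustrative assumptions, not taken from the paper: a reuse-heavy thread loops over a small working set while a streaming thread touches each line only once; under a shared LRU cache the stream evicts the hot lines every round, while a simple way partition preserves them.

```python
# Minimal sketch (assumed parameters, not from the paper): a shared LRU cache
# thrashed by a streaming thread, versus a statically partitioned cache.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> None, ordered by recency
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)  # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict the LRU line
            self.lines[addr] = None

def run(shared):
    """Interleave a reuse-heavy thread A with a streaming thread B."""
    if shared:
        cache_a = cache_b = LRUCache(8)              # one shared 8-line cache
    else:
        cache_a, cache_b = LRUCache(4), LRUCache(4)  # 4 lines reserved per thread
    stream_addr = 1000
    for _ in range(100):
        for addr in range(4):      # thread A: reuses the same 4 hot lines
            cache_a.access(addr)
        for _ in range(8):         # thread B: streams, never reuses a line
            cache_b.access(stream_addr)
            stream_addr += 1
    return cache_a.hits            # hits seen by the reuse-heavy thread

# Shared LRU: the stream flushes A's working set each round, so A never hits.
# Partitioned: A's 4 hot lines always fit in its reserved ways and hit steadily.
```

In the shared configuration the streaming thread's eight fresh lines per round evict the other thread's entire four-line working set, so the reuse-heavy thread gets zero hits; with the partition it misses only on the first round. This is the thrashing behavior that motivates cache partitioning over plain LRU for a large shared last-level cache.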