Small-LRU: A Hardware-Efficient Hybrid Replacement Policy

Replacement policy plays a major role in the performance of modern highly associative cache memories. As the demands of data-intensive applications grow, the size of the Last Level Cache (LLC) must grow with them, and increasing the size of the LLC also increases its associativity. Modern LLCs are divided into multiple banks, where each bank is a set-associative cache. The replacement policy implemented on such highly associative banks incurs significant hardware (storage and area) overhead. In addition, Least Recently Used (LRU) based replacement suffers from dead blocks. A block in the cache is called dead if it is not used again before its eviction from the cache. Under LRU, a dead block cannot be removed early; it must first age into the LRU position. We therefore propose a replacement technique that evicts dead blocks early while reducing hardware cost by 77% to 91% in comparison to the baseline techniques. In this policy, random replacement is used for 70% of the ways and LRU is applied to the rest. The early eviction of dead blocks also improves system performance by 5%.

Keywords—Replacement policies; cache memories; last level cache; hardware overheads; dead block


I. INTRODUCTION
Replacement policy plays the most significant role in the performance of highly set-associative cache architectures. In a multi-level cache hierarchy, the first-level cache (L1) is private to each core, whereas the large last-level cache (LLC) is shared by all cores. To reduce access latency, the LLC is also divided into multiple banks, where each bank is a set-associative cache. Data distribution among the banks is governed by the data mapping policy [1]. In this work we consider Static Non-Uniform Cache Access (SNUCA), where each block maps to a fixed bank, called the home bank of the block [1]. Fig. 1 shows a multicore processor with 4 cores. Each core has a private L1 cache and a part of the shared L2 cache. To keep the design simple, we do not divide the L2 into more than 4 banks. The rest of the paper assumes the same multicore processor shown in Fig. 1. Each bank is a set-associative cache, as shown in Fig. 2.
Today's data-intensive applications demand larger and more highly associative caches (especially the LLC). High associativity reduces conflict misses and hence improves system performance, but it also brings hardware overheads. One such overhead comes from maintaining the replacement policy for such highly associative banks. In a set-associative cache (or bank), each set maintains its own replacement state, which is used to select an existing block to replace.
As mentioned above, each set in a set-associative cache maintains its own replacement hardware. To insert a newly arriving block into a set, one of the existing blocks, known as the victim block, must first be evicted. The purpose of the replacement policy is to select this victim and replace it with the recently requested block. One of the most popular and well-known replacement policies is Least Recently Used (LRU). To maintain LRU in a set with N ways, each way must uniquely encode its age relative to the other ways. For example, if there are only 2 ways, each way needs only 1 bit to record its relative age: 0 means old and 1 means new. Similarly, 2 bits per way are required for a 4-way set-associative cache. Hence, for an N-way set-associative cache, log2 N bits per way are required to maintain the relative age of each way in the set. The hardware overhead of maintaining the replacement policy is discussed in detail in Section II.
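As a concrete illustration of this bookkeeping cost, the small sketch below computes the age bits per way and per set. The function names are ours, not from the paper:

```python
from math import ceil, log2

def lru_bits_per_way(n_ways: int) -> int:
    """Bits needed to encode one way's relative age: log2(N), rounded up."""
    return ceil(log2(n_ways))

def lru_bits_per_set(n_ways: int) -> int:
    """Total age bits a set must store: one age field per way."""
    return n_ways * lru_bits_per_way(n_ways)

# The worked examples from the text:
print(lru_bits_per_set(2))   # 2 ways  -> 1 bit per way,  2 bits per set
print(lru_bits_per_set(4))   # 4 ways  -> 2 bits per way, 8 bits per set
print(lru_bits_per_set(16))  # 16 ways -> 4 bits per way, 64 bits per set
```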
The three important operations of any replacement policy are:
• Eviction: deciding which block should be evicted from the cache during replacement. The eviction mechanism of LRU always selects the least recently used block as the victim block.
• Insertion: the newly arriving block replaces the victim block in the cache, but its position with respect to the replacement order is determined by the insertion mechanism. The insertion mechanism of LRU always places the new block in the MRU position.
• Promotion: deciding what to do when a block is accessed in the cache. The promotion mechanism of LRU moves the accessed block to the MRU position. That is, if a block B is present in the bank and a request for it arrives, the access is a hit, and after the access the block is moved to the MRU position.
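The three operations above can be sketched as a small software model of one LRU-managed set. This is an illustrative model only (real hardware stores per-way age bits as described earlier); the class and names are ours:

```python
from collections import OrderedDict

class LRUSet:
    """One set of an N-way cache. The OrderedDict front is the LRU
       position and the back is the MRU position."""

    def __init__(self, n_ways: int):
        self.n_ways = n_ways
        self.ways = OrderedDict()  # tag -> block (payload omitted)

    def access(self, tag) -> bool:
        """Return True on a hit, False on a miss."""
        if tag in self.ways:                   # hit
            self.ways.move_to_end(tag)         # promotion: block becomes MRU
            return True
        if len(self.ways) == self.n_ways:
            self.ways.popitem(last=False)      # eviction: drop the LRU block
        self.ways[tag] = None                  # insertion: new block at MRU
        return False

# Example: in a 4-way set, accessing A,B,C,D then hitting A makes B the LRU
# block, so inserting E evicts B.
s = LRUSet(4)
for t in ["A", "B", "C", "D"]:
    s.access(t)
s.access("A")   # hit: A promoted to MRU
s.access("E")   # miss: B (now LRU) is evicted
print("B" in s.ways)  # -> False
```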
The main reason to use such highly associative caches despite their hardware overheads is performance: higher associativity reduces conflict misses and hence improves system performance. This motivates researchers to reduce the hardware overhead of such highly associative caches. In this work we mainly target the hardware overhead of the replacement policy of such caches.
LRU-based replacement policies are simple but suffer from dead blocks [2], [3], [4], [5], [6]. Since the insertion mechanism of LRU inserts a block at the MRU position and the eviction mechanism selects the victim from the LRU position, it takes a long time in a highly associative cache for a block to age from MRU to LRU. It has been found that some blocks are accessed only once in the cache; although such blocks are never used again, they cannot be removed until they become LRU. These are the dead blocks. More precisely, a block is dead at a particular instant if it will never be used again before being evicted from the cache. Many techniques have been proposed to reduce the presence of such blocks [7]. Most of these dead block prediction policies are costly in terms of storage, as they must maintain additional prediction bits. Our proposed policy evicts dead blocks early with minimal hardware cost. Although it is not as accurate as [7] in predicting dead blocks, it reduces hardware cost significantly. In short, the proposed policy reduces the hardware cost of LRU-based techniques while retaining enough dead block removal ability to improve the hit rate of the memory.

The organization of the paper is as follows. The next section discusses the background and related work. The proposed Small-LRU is described in Section III, Section IV gives the experimental analysis, and Section V concludes the paper.

II. BACKGROUND
The simplest replacement policy is FIFO (First In First Out), which uses a straightforward strategy to select the victim block, while the most widely used traditional policy is LRU (Least Recently Used), which selects the victim based on reference history. Other common policies are MRU (Most Recently Used) and Random. For an N-way set-associative cache, LRU requires N × log2 N bits per set. For example, a 4-way set-associative cache requires log2 4 = 2 bits per way and 4 × 2 = 8 bits per set. Most of these traditional policies have similar hardware costs. The Random policy needs no additional bits to select a victim block, but by itself it is not suitable for general-purpose workloads.
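Scaled up to a whole LLC, this per-set cost becomes substantial. The sketch below extends the N × log2 N formula to a full cache; the geometry used in the example (4 MB LLC, 64 B blocks, 16 ways) is our own assumption for illustration, not a configuration stated in this section:

```python
from math import ceil, log2

def lru_overhead_bytes(cache_bytes: int, block_bytes: int, n_ways: int) -> int:
    """Total LRU age-bit storage for a cache of the given geometry:
       (number of sets) x (N x ceil(log2 N) bits per set), in bytes."""
    n_sets = cache_bytes // (block_bytes * n_ways)
    bits_per_set = n_ways * ceil(log2(n_ways))
    return n_sets * bits_per_set // 8

# Hypothetical geometry: 4 MB LLC, 64 B blocks, 16 ways -> 4096 sets,
# 64 bits of LRU state per set.
print(lru_overhead_bytes(4 * 1024 * 1024, 64, 16))  # -> 32768 bytes (32 KB)
```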
Replacement policy has been a major area of research for the last two decades. Work on replacement policies can be divided into two categories: (a) performance oriented, such as [7], [8], [9], [10], and (b) overhead-reduction oriented, such as [11], [12]. The most efficient replacement policy was proposed in 1965 [13]; it states that the evicted block must be the block that will be reused furthest in the future, and it is considered the optimal replacement policy. Unfortunately, it cannot be implemented in any physical computer, since it requires knowledge of the future. Hence all practical replacement policies try to approach its performance. Note that, in the context of replacement policies, performance means a reduction in the number of cache misses. A large gap still exists between all implementable replacement policies and the optimal replacement policy [7], [11].
Many recent replacement policies have attempted to mimic the optimal replacement policy using modern techniques such as machine learning and artificial intelligence [7], [10], yet the gap remains. In [10] the authors introduce a replacement policy that predicts the future based on past experience; a similar technique is proposed in [14].
Removing dead blocks from the cache is also a major responsibility of a replacement policy. LRU-based replacement policies fail to remove dead blocks early [7]. Since detecting a dead block also requires knowledge of the future, it can only be approximated with an efficient prediction mechanism. Some well-known dead block prediction based replacement policies are [7], [8], [15], [16]. Most of these techniques have high hardware overhead, as they must maintain prediction tables. Some low-overhead dead block predictors are [11], [12]. The technique proposed in this paper is also a low-overhead replacement policy.

www.ijacsa.thesai.org (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 11, No. 9, 2020

III. SMALL-LRU
In this work a highly associative set is divided into two parts: an LRU-Part and a Random-Part. The LRU-Part maintains the LRU replacement policy, while the Random-Part uses random replacement. The LRU-Part is relatively small, holding only 30% of the total ways; hence the technique is called Small-LRU. During the eviction of a block, the Random-Part selects a victim block randomly and moves it to the LRU-Part. To accommodate the block coming from the Random-Part, the LRU-Part removes its least recently used block. An example of Small-LRU is shown in Fig. 3.
The main advantage of Small-LRU is twofold: (a) reduced hardware overhead and (b) reduced presence of dead blocks in the cache. The hardware overhead shrinks because random replacement needs almost zero overhead and manages 70% of the ways. Dead blocks are present in any large cache; since 70% of each set is managed randomly, dead blocks tend to be evicted early. On the other hand, if a heavily used block happens to be randomly picked from the Random-Part, it is given another chance by being placed in the MRU position of the LRU-Part. A further hit on the block moves it back into the Random-Part.
Highly associative sets are used to remove conflict misses from the cache. The proposed design remains equally capable of removing conflict misses, but with significantly reduced hardware overhead. The experimental analysis in Section IV shows that the proposed mechanism performs well for most benchmarks.

A. Replacement Operations
Insertion Policy: A newly arriving block is always placed in the Random-Part. To make room for the incoming block, the insertion policy moves a randomly selected block from the Random-Part to the LRU-Part; to accommodate that migrated block, the LRU-Part removes its LRU block.
Promotion Policy: When a block in the LRU-Part needs to be promoted, it is moved to the Random-Part. This promotion also requires an adjustment: before it takes place, a random block from the Random-Part is moved to the LRU position of the LRU-Part.
Eviction Policy: The evicted block is always the LRU block of the LRU-Part.
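The three policies above can be captured in a small software model of one Small-LRU set. This is our own illustrative sketch, not the authors' implementation; in particular, rounding the LRU-Part to 30% of the ways via `round()` is an assumption:

```python
import random
from collections import OrderedDict

class SmallLRUSet:
    """One set split into a Random-Part (~70% of the ways, no replacement
       metadata) and a small LRU-Part (~30%, front = LRU, back = MRU)."""

    def __init__(self, n_ways: int, lru_fraction: float = 0.3):
        self.lru_ways = max(1, round(n_ways * lru_fraction))
        self.rand_ways = n_ways - self.lru_ways
        self.rand_part = set()          # random replacement: no ordering kept
        self.lru_part = OrderedDict()

    def access(self, tag) -> bool:
        """Return True on a hit, False on a miss."""
        if tag in self.rand_part:       # hit in Random-Part: nothing to update
            return True
        if tag in self.lru_part:        # hit in LRU-Part: promote to Random-Part
            del self.lru_part[tag]
            if len(self.rand_part) == self.rand_ways:
                # adjustment: a random block moves to the LRU position
                demoted = random.choice(tuple(self.rand_part))
                self.rand_part.remove(demoted)
                self.lru_part[demoted] = None
                self.lru_part.move_to_end(demoted, last=False)
            self.rand_part.add(tag)
            return True
        # miss: insert into the Random-Part, demoting a random block if full
        if len(self.rand_part) == self.rand_ways:
            demoted = random.choice(tuple(self.rand_part))
            self.rand_part.remove(demoted)
            if len(self.lru_part) == self.lru_ways:
                self.lru_part.popitem(last=False)  # evict LRU of LRU-Part
            self.lru_part[demoted] = None          # demoted block enters at MRU
        self.rand_part.add(tag)
        return False
```

Note that only the LRU-Part carries ordering metadata; the Random-Part is a plain set, which is exactly where the hardware savings come from.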
B. Advantages of Small-LRU
1) Hardware Cost: As mentioned in Section II, LRU requires N × log2 N bits per set of an N-way set-associative cache. Since Small-LRU uses random replacement on 70% of the ways and LRU on only 30% of the ways, the hardware overhead is reduced by more than 70%. In Small-LRU the number of additional bits required is M × log2 M, where M = 0.3 × N. Table I shows the resulting improvement in hardware cost: Small-LRU reduces the storage cost of LRU by 77% to 91%, depending on the degree of associativity.
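The saving can be checked numerically. The sketch below compares the two bit counts; the exact percentage depends on how M = 0.3 × N is rounded (our choice of `round()` and ceiling the log are assumptions — the paper's Table I reports 77% to 91% across associativities):

```python
from math import ceil, log2

def lru_cost_bits(n_ways: int) -> int:
    """Baseline LRU: N x ceil(log2 N) bits per set."""
    return n_ways * ceil(log2(n_ways))

def small_lru_cost_bits(n_ways: int, lru_fraction: float = 0.3) -> int:
    """Small-LRU: only the LRU-Part (M = 0.3 x N ways) needs age bits."""
    m = max(1, round(n_ways * lru_fraction))
    return m * ceil(log2(m))

def savings_percent(n_ways: int) -> float:
    return 100.0 * (1 - small_lru_cost_bits(n_ways) / lru_cost_bits(n_ways))

# 16 ways: M = 5, so 15 bits instead of 64 per set -> roughly 77% saved.
print(round(savings_percent(16)))
```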
2) Dead Block Prediction: Early prediction of dead blocks is a challenging task, as discussed in Section I, and removing a dead block early from the cache is always preferable. In Small-LRU, the Random-Part randomly moves a block to the MRU position of the LRU-Part. Since the LRU-Part holds only 30% of the total ways, a dead block can be removed early even when the original cache is highly associative. A smarter replacement policy such as DIP [9] or RRIP [15] in the LRU-Part may remove dead blocks even more effectively.

IV. EXPERIMENTAL ANALYSIS
The system architecture is implemented in gem5, a full-system simulator [17]. We simulate a multicore processor (4 cores) with two levels of cache. The upper-level cache (L1) is private to each core and the last-level cache (LLC) is shared among the cores. The baseline replacement policies as well as the proposed policy are implemented in the Ruby module. PARSEC benchmark [18] applications are simulated on the ALPHA system architecture.
To analyze performance, all benchmark applications are executed for 200 million cycles on the target machine with the proposed replacement policy as well as with the baseline replacement policies. The specification of the target machine used to implement Small-LRU is shown in Table II.

A. Result Analysis with Baseline-1
We consider the LRU policy as Baseline-1 for comparison with our proposed policy. A statistical comparison in terms of MPKI (Misses Per Kilo Instructions) is shown in Fig. 4; it shows that Small-LRU reduces MPKI by removing dead blocks early from the cache. Fig. 5 shows the improvement in CPI (Cycles Per Instruction) due to the reduction in cache misses. The proposed policy reduces MPKI by 10% and CPI by 5% on average. Although the performance improvement of Small-LRU is modest, the major advantage of the policy is the reduction of hardware cost without hurting system performance. Fig. 6 depicts the percentage reduction in bits needed to implement Small-LRU compared to Baseline-1, which is between 77% and 91%.

B. Result Analysis with Baseline-2
We also compare Small-LRU with a multicore processor having 16-way associative banks; we call this design Baseline-2. Fig. 7 shows that Small-LRU reduces MPKI by 10% in comparison to Baseline-2. Comparing the CPI of both techniques shows that Small-LRU improves system performance by 5.5% over Baseline-2. Fig. 8 shows the statistical comparison of CPI between Small-LRU and Baseline-2. As with Baseline-1, the CPI improvement of Small-LRU over Baseline-2 is not dramatic, but it is enough to show that Small-LRU reduces hardware overhead without degrading performance.
V. CONCLUSION
Replacement policy plays a major role in the performance of modern highly associative cache memories. As the demands of data-intensive applications grow, the size of the Last Level Cache (LLC) must grow with them, and increasing the size of the LLC also increases its associativity. Modern LLCs are divided into multiple banks, where each bank is a set-associative cache. The replacement policy implemented on such highly associative banks incurs significant hardware (storage and area) overhead. In addition, Least Recently Used (LRU) based replacement suffers from dead blocks: a block is called dead if it is not used again before its eviction from the cache, and removing such a block early is not possible under LRU.
In this paper we proposed the Small-LRU policy, which reduces hardware cost by more than 70% and also improves performance by removing dead blocks early. In this policy, random replacement is used for 70% of the ways and LRU is applied to the rest. A block is always inserted into the Random-Part and evicted from the LRU-Part. The early eviction of dead blocks improves the MPKI and CPI of a system using Small-LRU by 10% and 5.5%, respectively.