Journal Article

Adaptive Management With Request Granularity for DRAM Cache Inside NAND-Based SSDs
Document Type
Periodical
Source
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 42(8):2475-2487, Aug. 2023
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Random access memory
Flash
Tail
Solid state drives
Proposals
Parallel processing
Costs
Adaptive configuration
cache management
performance
request granularity
solid-state drives (SSDs)
time discounting
Language
English
ISSN
0278-0070
1937-4151
Abstract
Most flash-based solid-state drives (SSDs) adopt an onboard dynamic random access memory (DRAM) to buffer hot write data. Write or overwrite operations can then be absorbed by the DRAM cache, provided the applications’ I/O access pattern exhibits sufficient locality, thereby avoiding flushes of write data onto the underlying SSD cells. After analyzing typical real-world workloads on SSDs, we observed that the buffered data of small requests are more likely to be reaccessed than those of large write requests. To efficiently utilize the limited space of the DRAM cache, this article proposes an adaptive, request-granularity-based cache management scheme for SSDs. First, we introduce the request block, corresponding to a write request, as the cache management granularity, and propose a dynamic method for classifying small and large request blocks. Next, we design three-level linked lists that support different promotion routines for small and large request blocks once their data are hit in the cache. Finally, we present a replacement scheme that evicts the request blocks with the minimum cost, taking both access hotness and time discounting into account. Experimental results show that our proposal improves cache hits and overall I/O latency by 21.8% and 14.7% on average, respectively, compared to state-of-the-art cache management schemes inside SSDs.
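To make the mechanism described in the abstract concrete, below is a minimal Python sketch of a cache managed at request-block granularity with three-level lists and cost-based eviction under time discounting. It is an illustrative reconstruction, not the authors' implementation: the size threshold, the decay constant, and the promotion rules (small blocks jump to the top level on a hit, large blocks move up one level) are assumptions chosen for demonstration only.

    # Hypothetical sketch of request-granularity cache management (not the paper's code).
    # Small request blocks are promoted faster than large ones on a hit, and eviction
    # removes the block with the lowest cost = hit count * exponential time discount.
    import math
    from collections import OrderedDict

    SMALL_THRESHOLD = 16   # pages; assumed cutoff between small and large request blocks
    DECAY = 0.01           # assumed time-discount rate

    class RequestBlock:
        def __init__(self, req_id, num_pages, now):
            self.req_id = req_id
            self.num_pages = num_pages
            self.hits = 0
            self.last_access = now

        def is_small(self):
            return self.num_pages <= SMALL_THRESHOLD

        def cost(self, now):
            # Access hotness discounted by the time elapsed since the last access.
            return self.hits * math.exp(-DECAY * (now - self.last_access))

    class ThreeLevelCache:
        def __init__(self, capacity_pages):
            self.capacity = capacity_pages
            self.used = 0
            # levels[0] holds eviction candidates, levels[2] the hottest blocks.
            self.levels = [OrderedDict(), OrderedDict(), OrderedDict()]

        def _find(self, req_id):
            for lvl, blocks in enumerate(self.levels):
                if req_id in blocks:
                    return lvl
            return None

        def access(self, req_id, num_pages, now):
            """Return True on a cache hit, False on a miss (block is then inserted)."""
            lvl = self._find(req_id)
            if lvl is not None:                        # cache hit: promote the block
                blk = self.levels[lvl].pop(req_id)
                blk.hits += 1
                blk.last_access = now
                new_lvl = 2 if blk.is_small() else min(lvl + 1, 2)
                self.levels[new_lvl][req_id] = blk
                return True
            if num_pages > self.capacity:              # request too large: bypass the cache
                return False
            # Cache miss: evict minimum-cost blocks until the new block fits.
            while self.used + num_pages > self.capacity:
                self._evict(now)
            self.levels[0][req_id] = RequestBlock(req_id, num_pages, now)
            self.used += num_pages
            return False

        def _evict(self, now):
            victim_lvl, victim_id, victim_cost = None, None, None
            for lvl, blocks in enumerate(self.levels):
                for req_id, blk in blocks.items():
                    c = blk.cost(now)
                    if victim_cost is None or c < victim_cost:
                        victim_lvl, victim_id, victim_cost = lvl, req_id, c
            blk = self.levels[victim_lvl].pop(victim_id)
            self.used -= blk.num_pages

A typical use would replay a write trace, calling access(request_id, request_size_in_pages, timestamp) per request and counting the True returns to estimate the hit ratio; the paper's actual scheme additionally adapts the small/large classification threshold at runtime, which this sketch fixes as a constant.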