Academic Paper

SieveMem: A Computation-in-Memory Architecture for Fast and Accurate Pre-Alignment
Document Type
Conference
Source
2023 IEEE 34th International Conference on Application-specific Systems, Architectures and Processors (ASAP), pp. 156-164, Jul. 2023
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Signal Processing and Analysis
Filtering
Pipelines
Genomics
Systems architecture
Graphics processing units
Computer architecture
Filtering algorithms
Alignment
Pre-alignment Filter
Computation in Memory
Emerging Memory Technology
Hardware Accelerator
Language
English
ISSN
2160-052X
Abstract
The high execution time of DNA sequence alignment negatively affects many genomic studies that rely on sequence alignment results. Pre-alignment filtering was introduced as a step before alignment to greatly reduce the execution time of short-read sequence alignment. With its success, i.e., achieving high accuracy and thus removing unnecessary alignments, the filtering itself now constitutes the larger portion of the execution time. A significant contributing factor is the movement of sequences from memory to the processing units, even though the majority of these sequences are filtered out because they do not lead to an acceptable alignment. State-of-the-art (SotA) pre-alignment filtering accelerators suffer from the same data-movement overhead. Furthermore, these accelerators lack support for future pre-alignment filtering algorithms that use the same operations and underlying hardware. This paper addresses these shortcomings by introducing SieveMem, an architecture that exploits the Computation-in-Memory paradigm with memristive devices to support the shared kernels of pre-alignment filters and algorithms inside the memory (i.e., preventing data movement). The SieveMem architecture also provides support for future algorithms. SieveMem supports more than 47.6% of the operations shared among the top 5 SotA filters. Moreover, SieveMem includes a hardware-friendly pre-alignment filtering algorithm called BandedKrait, inspired by a combination of the mentioned kernels. Our evaluations show that SieveMem provides up to 331.1× and 446.8× improvement in the execution time of the two most common kernels. Our evaluations also show that BandedKrait provides accuracy at the SotA level. Using BandedKrait on SieveMem, a design we call Mem-BandedKrait, one can improve the execution time of end-to-end sequence alignment irrespective of the dataset, by up to 91.4× compared to the SotA accelerator on GPU.
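For readers unfamiliar with the concept, the sketch below illustrates the general idea of a pre-alignment filter: cheaply reject read/candidate-location pairs whose estimated number of differences already exceeds the edit-distance threshold, so that the costly full alignment is only run on the survivors. This is a generic, minimal Python illustration under assumed names and a simple banded-Hamming heuristic; it is not SieveMem, BandedKrait, or any specific SotA filter described in the paper.

```python
# Minimal, illustrative pre-alignment filter sketch (not the paper's method).
# A pair is kept only if a cheap heuristic estimate of its differences stays
# within the edit-distance threshold E; otherwise it is filtered out before
# the expensive dynamic-programming alignment.

def min_banded_hamming(read: str, ref: str, band: int) -> int:
    """Smallest Hamming-style mismatch count over diagonal shifts within +/- band."""
    best = len(read)
    for shift in range(-band, band + 1):
        mismatches = 0
        for i, base in enumerate(read):
            j = i + shift
            if 0 <= j < len(ref):
                mismatches += base != ref[j]
            else:
                mismatches += 1  # positions falling off the reference count as errors
        best = min(best, mismatches)
    return best

def passes_prealignment_filter(read: str, ref: str, edit_threshold: int) -> bool:
    # Heuristic decision: if even the best diagonal needs more than E
    # substitutions, an alignment within E edits is considered unlikely.
    return min_banded_hamming(read, ref, band=edit_threshold) <= edit_threshold

# Example usage: prints False, i.e., this candidate location is filtered out
# and never reaches the full alignment stage.
print(passes_prealignment_filter("ACGTACGT", "ACGTTTTT", edit_threshold=2))
```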