Journal Article

GEM: Ultra-Efficient Near-Memory Reconfigurable Acceleration for Read Mapping by Dividing and Predictive Scattering
Document Type
Periodical
Source
IEEE Transactions on Parallel and Distributed Systems, 34(12):3059-3072, Dec. 2023
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Genomics
Bioinformatics
Indexes
Random access memory
Computational modeling
Throughput
Data-centric computing
dividing and scattering
genome assembly
near memory computing
read mapping
reconfigurable computing
Language
English
ISSN
1045-9219 (Print)
1558-2183 (Electronic)
2161-9883 (CD-ROM)
Abstract
Read mapping, which maps billions of reads to a reference DNA, poses a significant performance bottleneck in genomic analysis. Current accelerators for read mapping are primarily bound by intensive, random memory accesses to huge datasets. Near-data processing (NDP) infrastructures promise extremely high bandwidth, but existing frameworks fail to reach this potential due to poor locality and high redundancy. Our idea is to introduce prediction, based on the insight that candidate mapping positions become predictable when the reference is organized into coarse-grained slices. We present GEM (Genomic Memory), an ultra-efficient near-memory accelerator for read mapping. GEM adopts a novel data-centric framework, named dividing-and-predictive-scattering (DPS), which synthesizes seed-existence information to predict target mapping locations and thereby reduce memory-access redundancy. During preparation, DPS divides the reference into coarse-grained slices and creates predictive filters that assess the likelihood of a read belonging to each slice. During mapping, DPS predicts and scatters reads to considerably fewer slices than would be accessed without prediction. By employing small, highly accurate on-chip SRAM-based predictors, DPS minimizes unnecessary DRAM accesses and data movement from remote memory. In essence, DPS trades pre-seeding prediction for localized access patterns and low redundancy, achieving high throughput for data-intensive applications. We implement GEM by integrating coarse-grained reconfigurable architectures (CGRAs) into the logic layer of a 3D-stacked DRAM infrastructure, using the massive number of banks as slices. GEM leverages CGRAs for their flexibility in supporting various algorithms tailored to different datasets. Bloom filters are used for slice prediction, providing a false-positive rate below 1%. Evaluation results demonstrate that GEM reduces memory requests by 95% and alignments by 87%, achieving throughput improvements of 15.3× and 11.0× over compute-centric and broadcast-based baselines on the same NDP platform. Overall, GEM achieves a 3.5× throughput improvement and 2.1× better energy efficiency compared to state-of-the-art ASIC accelerators.
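As a rough illustration of the DPS flow summarized in the abstract, the sketch below models the two phases in Python: preparation builds one Bloom filter per reference slice over its k-mer seeds, and mapping scatters each read only to slices whose filters predict seed hits. This is a minimal software analogue, not GEM's hardware design; the seed length `K`, filter size, double-hashing scheme, and `min_hits` threshold are all assumptions chosen for the example.

```python
# Illustrative sketch of dividing-and-predictive-scattering (DPS).
# All names, sizes, and the hash scheme are hypothetical choices for
# this example, not GEM's actual hardware parameters.

import hashlib

K = 16                  # seed (k-mer) length -- assumed
NUM_HASHES = 4          # Bloom filter hash functions -- assumed
FILTER_BITS = 1 << 20   # bits per slice filter -- assumed

def _hashes(kmer: str):
    """Derive NUM_HASHES bit positions from one k-mer via double hashing."""
    digest = hashlib.sha256(kmer.encode()).digest()
    h1 = int.from_bytes(digest[:8], "little")
    h2 = int.from_bytes(digest[8:16], "little") | 1
    return [(h1 + i * h2) % FILTER_BITS for i in range(NUM_HASHES)]

class SliceFilter:
    """Bloom filter summarizing which seeds occur in one reference slice."""
    def __init__(self):
        self.bits = bytearray(FILTER_BITS // 8)

    def add(self, kmer: str):
        for pos in _hashes(kmer):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, kmer: str) -> bool:
        # May return a false positive, never a false negative.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in _hashes(kmer))

def build_filters(reference: str, num_slices: int):
    """Preparation phase: divide the reference and index each slice's seeds."""
    slice_len = -(-len(reference) // num_slices)  # ceiling division
    filters = []
    for s in range(num_slices):
        f = SliceFilter()
        # Overlap by K-1 bases so seeds spanning slice boundaries are indexed.
        chunk = reference[s * slice_len : (s + 1) * slice_len + K - 1]
        for i in range(len(chunk) - K + 1):
            f.add(chunk[i : i + K])
        filters.append(f)
    return filters

def scatter(read: str, filters, min_hits: int = 2):
    """Mapping phase: send the read only to slices predicted to contain it."""
    seeds = [read[i : i + K] for i in range(0, len(read) - K + 1, K)]
    targets = []
    for s, f in enumerate(filters):
        hits = sum(f.maybe_contains(seed) for seed in seeds)
        if hits >= min_hits:  # predicted candidate slice
            targets.append(s)
    return targets
```

A driver would call `build_filters(reference, num_slices)` once during preparation and `scatter(read, filters)` per read, touching memory only for the returned candidate slices; this localization is what the abstract's 95% reduction in memory requests refers to. Prediction accuracy is governed by the standard Bloom-filter false-positive rate, approximately (1 − e^(−kn/m))^k for m bits, n inserted seeds, and k hash functions, which is how small SRAM filters can stay below the 1% error rate cited above.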