Academic Paper

Enhancing Inference Performance through Include only Literal Incorporation in Tsetlin Machine
Document Type
Conference
Source
2023 International Symposium on the Tsetlin Machine (ISTM), pp. 1-8, Aug. 2023
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Signal Processing and Analysis
Memory management
Software algorithms
Inference algorithms
Hardware
Task analysis
Standards
Field programmable gate arrays
Artificial Intelligence
Machine Learning
Interpretability
Tsetlin Machine
Field Programmable Gate Array
Language
English
Abstract
The Tsetlin Machine is a novel and powerful algorithm for pattern recognition and decision-making tasks that has gained significant traction in recent years. Its features make it highly suitable for energy-efficient hardware implementations. This paper presents an FPGA design and implementation of an inference accelerator for a Multi-Class Tsetlin Machine (MCTM). The proposed design exploits the sparseness of the multi-class Tsetlin Machine to optimize the inference algorithm and provide a fast, resource-efficient implementation on the Xilinx Zedboard. We train the multi-class Tsetlin Machine model in software on the MNIST dataset and subsequently port the model to hardware. Further, we demonstrate and evaluate the performance of the proposed design using test images. It is observed that our design uses $21.1 \times$ less memory and is $30.8 \times$ faster compared to a standard design, with slightly higher resource utilization for a given set of parameters.
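
For context, the sparsity the abstract refers to can be illustrated with a minimal software sketch: each clause is represented only by the indices of its included literals, so inference memory and clause evaluation scale with the number of includes rather than with twice the feature count. This is an illustrative assumption, not the authors' hardware design; names such as evaluate_clause, classify, and clauses_per_class are hypothetical.

# Minimal sketch of include-only-literal MCTM inference (illustrative, not the paper's implementation).
def evaluate_clause(included_literals, x):
    # Conjunction over included literals only. Indices 0..n-1 refer to plain
    # features x[k]; indices n..2n-1 refer to negated features (1 - x[k - n]).
    if not included_literals:
        return 0              # empty clauses are commonly ignored at inference time
    n = len(x)
    for lit in included_literals:
        value = x[lit] if lit < n else 1 - x[lit - n]
        if value == 0:
            return 0          # one unsatisfied literal falsifies the clause
    return 1

def classify(clauses_per_class, x):
    # clauses_per_class[c] is a list of (polarity, included_literals) pairs,
    # polarity being +1 or -1. The predicted class has the highest vote sum.
    scores = [sum(p * evaluate_clause(lits, x) for p, lits in clauses)
              for clauses in clauses_per_class]
    return max(range(len(scores)), key=lambda c: scores[c])

# Example: two classes, two clauses each, binary input of 4 features.
clauses_per_class = [
    [(+1, [0, 5]), (-1, [2])],   # class 0: x0 AND NOT x1; a negative clause on x2
    [(+1, [2, 3]), (-1, [0])],   # class 1
]
print(classify(clauses_per_class, [1, 0, 0, 1]))   # -> 0

Storing only the included literal indices (rather than a dense Tsetlin-automaton state array of size 2n per clause) is one plausible reading of how the memory and speed gains reported above arise; exact figures depend on the trained model's sparsity and the hardware mapping described in the paper.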