Academic Article

NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems
Document Type
Working Paper
Author
Yik, Jason; Berghe, Korneel Van den; Blanken, Douwe den; Bouhadjar, Younes; Fabre, Maxime; Hueber, Paul; Kleyko, Denis; Pacik-Nelson, Noah; Sun, Pao-Sheng Vincent; Tang, Guangzhi; Wang, Shenqi; Zhou, Biyan; Ahmed, Soikat Hasan; Joseph, George Vathakkattil; Leto, Benedetto; Micheli, Aurora; Mishra, Anurag Kumar; Lenz, Gregor; Sun, Tao; Ahmed, Zergham; Akl, Mahmoud; Anderson, Brian; Andreou, Andreas G.; Bartolozzi, Chiara; Basu, Arindam; Bogdan, Petrut; Bohte, Sander; Buckley, Sonia; Cauwenberghs, Gert; Chicca, Elisabetta; Corradi, Federico; de Croon, Guido; Danielescu, Andreea; Daram, Anurag; Davies, Mike; Demirag, Yigit; Eshraghian, Jason; Fischer, Tobias; Forest, Jeremy; Fra, Vittorio; Furber, Steve; Furlong, P. Michael; Gilpin, William; Gilra, Aditya; Gonzalez, Hector A.; Indiveri, Giacomo; Joshi, Siddharth; Karia, Vedant; Khacef, Lyes; Knight, James C.; Kriener, Laura; Kubendran, Rajkumar; Kudithipudi, Dhireesha; Liu, Yao-Hong; Liu, Shih-Chii; Ma, Haoyuan; Manohar, Rajit; Margarit-Taulé, Josep Maria; Mayr, Christian; Michmizos, Konstantinos; Muir, Dylan; Neftci, Emre; Nowotny, Thomas; Ottati, Fabrizio; Ozcelikkale, Ayca; Panda, Priyadarshini; Park, Jongkil; Payvand, Melika; Pehle, Christian; Petrovici, Mihai A.; Pierro, Alessandro; Posch, Christoph; Renner, Alpha; Sandamirskaya, Yulia; Schaefer, Clemens JS; van Schaik, André; Schemmel, Johannes; Schmidgall, Samuel; Schuman, Catherine; Seo, Jae-sun; Sheik, Sadique; Shrestha, Sumit Bam; Sifalakis, Manolis; Sironi, Amos; Stewart, Matthew; Stewart, Kenneth; Stewart, Terrence C.; Stratmann, Philipp; Timcheck, Jonathan; Tömen, Nergis; Urgese, Gianvito; Verhelst, Marian; Vineyard, Craig M.; Vogginger, Bernhard; Yousefzadeh, Amirreza; Zohora, Fatima Tuz; Frenkel, Charlotte; Reddi, Vijay Janapa
Source
Subject
Computer Science - Artificial Intelligence
Language
Abstract
Neuromorphic computing shows promise for advancing computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. Prior neuromorphic computing benchmark efforts have not seen widespread adoption due to a lack of inclusive, actionable, and iterative benchmark design and guidelines. To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively designed effort from an open community of nearly 100 co-authors across over 50 institutions in industry and academia, aiming to provide a representative structure for standardizing the evaluation of neuromorphic approaches. The NeuroBench framework introduces a common set of tools and a systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings. In this article, we present initial performance baselines across various model architectures on the algorithm track and outline the system track benchmark tasks and guidelines. NeuroBench is intended to continually expand its benchmarks and features to foster and track the progress made by the research community.
Comment: Updated from whitepaper to full perspective article preprint