Academic Paper

Pooling Techniques in Hybrid Quantum-Classical Convolutional Neural Networks
Document Type
Conference
Source
2023 IEEE International Conference on Quantum Computing and Engineering (QCE), vol. 01, pp. 601-610, Sep. 2023
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Engineering Profession
Training
Machine learning algorithms
Qubit
Quantum mechanics
Training data
Computer architecture
Machine learning
quantum machine learning
quantum pooling layers
quantum convolutional neural networks
medical imaging
Language
English
Abstract
Quantum machine learning has received significant interest in recent years, with theoretical studies showing that quantum variants of classical machine learning algorithms can generalize well from small training data sizes. However, there are as yet no strong theoretical insights into what makes one quantum circuit design better than another, and comparative studies of quantum equivalents have not been carried out for every type of classical layer or technique crucial to classical machine learning. In particular, the pooling layer within convolutional neural networks is a fundamental operation that remains to be explored. Pooling mechanisms significantly improve the performance of classical machine learning algorithms by playing a key role in reducing input dimensionality and extracting clean features from the input data. In this work, an in-depth study of pooling techniques in hybrid quantum-classical convolutional neural networks (QCCNNs) for classifying 2D medical images is performed. The performance of four different quantum and hybrid pooling techniques is studied: mid-circuit measurements, ancilla qubits with controlled gates, modular quantum pooling blocks, and qubit selection with classical postprocessing. We find similar or better performance in comparison to an equivalent classical model and a QCCNN without pooling, and conclude that it is promising to study architectural choices in QCCNNs in more depth for future applications.
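To make one of the abstract's four pooling families concrete, the sketch below illustrates the general idea behind "qubit selection with classical postprocessing": per-qubit expectation values are read out from a small simulated circuit and then pooled classically. This is a minimal NumPy statevector simulation written for illustration only; the encoding, qubit pairing, and max-pooling rule are our assumptions, not the paper's exact construction.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode(features):
    """Angle-encode features into a product state: |psi> = RY(f_0)|0> (x) RY(f_1)|0> (x) ..."""
    state = np.array([1.0])
    for f in features:
        state = np.kron(state, ry(f) @ np.array([1.0, 0.0]))
    return state

def z_expectations(state, n_qubits):
    """Per-qubit <Z> expectation values computed from the full statevector."""
    probs = np.abs(state) ** 2
    exps = []
    for q in range(n_qubits):
        # bit q of each basis-state index determines the Z eigenvalue (+1 or -1);
        # qubit 0 is the most significant bit under the kron ordering above
        signs = np.array([1 if (idx >> (n_qubits - 1 - q)) & 1 == 0 else -1
                          for idx in range(len(probs))])
        exps.append(float(np.dot(signs, probs)))
    return exps

# four input features -> four qubits -> two pooled outputs
features = [0.1, 0.5, 1.0, 2.0]
state = encode(features)
exps = z_expectations(state, 4)
# classical postprocessing step: max-pool over neighbouring qubit pairs
pooled = [max(exps[0], exps[1]), max(exps[2], exps[3])]
```

For this product-state encoding, each qubit's expectation is simply `cos(f_i)`, so the pooling halves the output dimension from four values to two, mirroring the dimensionality-reduction role the abstract attributes to pooling layers. In the actual QCCNN the expectation values would come from a trained parameterized circuit rather than a bare encoding.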