Academic Paper

Deep image captioning using an ensemble of CNN and LSTM based deep neural networks.
Document Type
Article
Source
Journal of Intelligent & Fuzzy Systems. 2021, Vol. 40 Issue 4, p5761-5769. 9p.
Subject
*DEEP learning
Language
English
ISSN
1064-1246
Abstract
This paper is concerned with the problem of image caption generation. Its purpose is to create a deep learning model that generates captions for a given image by decoding the information available in the image. For this purpose, a custom ensemble model was used, consisting of an Inception model and a 2-layer LSTM model, whose outputs were concatenated and followed by dense layers. The CNN part encodes the images, and the LSTM part derives insights from the given captions. For comparative study, GRU and Bi-directional LSTM based models were also used for caption generation, and their results were analyzed and compared. The models were trained on the Flickr8k dataset, and GloVe embeddings were used to generate a word vector for each word in the sequence. After vectorization, images are fed into the trained model and inference is run to produce new auto-generated captions. The results were evaluated using Bleu scores. A Bleu-4 score of 55.8% was obtained in the paper, and scores using the LSTM, GRU, and Bi-directional LSTM models are reported respectively. [ABSTRACT FROM AUTHOR]
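The Bleu metric mentioned in the abstract can be sketched in pure Python. This is a minimal illustration of sentence-level BLEU with uniform n-gram weights and a brevity penalty; the paper's exact evaluation setup (tokenization, smoothing, corpus vs. sentence level) is not specified in the record, so the function below is an assumption for illustration only.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU-4 sketch: clipped n-gram precisions,
    geometric mean with uniform weights, and a brevity penalty."""
    if not candidate:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each n-gram count by its maximum count in any reference.
        max_ref_counts = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], count)
        clipped = sum(min(c, max_ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed: any zero precision zeroes the score
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty against the closest reference length.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(log_avg)
```

In practice, a library implementation such as NLTK's `sentence_bleu` (often with smoothing for short captions) would be used rather than this hand-rolled version.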