Please use this identifier to cite or link to this item:
http://210.212.227.212:8080/xmlui/handle/123456789/326

| Title: | AN EMBRACIVE STUDY OF IMAGE CAPTION GENERATION USING PRE-TRAINED NEURAL NETWORKS |
| Authors: | NANDANA, ANIL; JASMIN, M R |
| Issue Date: | Jul-2022 |
| Series/Report no.: | ;TKM20MCA2024 |
| Abstract: | Automatically generating natural-language descriptions of an image's content is a difficult task: unlike humans, machines cannot do it readily. Implementing this capability, however, would fundamentally change how machines interact with us. Recent advances in object recognition from photographs have produced a paradigm for captioning images based on the relationships among their objects. This research presents several image caption generation models based on pre-trained neural networks, with an emphasis on how different CNN architectures, combined with an LSTM, influence sentence synthesis. A combination of neural networks is well suited to generating a caption from an image. The quality of the generated captions is evaluated using the BLEU metric. |
| URI: | http://210.212.227.212:8080/xmlui/handle/123456789/326 |
| Appears in Collections: | 2022 |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 20MCA424_S4_An embracive study of image caption generation using pre-trained neural networks - Nandana Anil.pdf | | 1.6 MB | Adobe PDF | View/Open |