AN EMBRACIVE STUDY OF IMAGE CAPTION GENERATION USING PRE-TRAINED NEURAL NETWORKS

dc.contributor.author NANDANA, ANIL
dc.contributor.author JASMIN, M R
dc.date.accessioned 2022-12-08T05:31:25Z
dc.date.available 2022-12-08T05:31:25Z
dc.date.issued 2022-07
dc.identifier.uri http://210.212.227.212:8080/xmlui/handle/123456789/326
dc.description.abstract Automatically generating natural language descriptions of an image's content is a difficult task; unlike for humans, it does not come readily to machines. Implementing this capability, however, would fundamentally change how machines interact with us. Recent advances in object recognition from photographs have produced a paradigm for captioning images based on the relationships between their objects. This research presents several image caption generation models built on pre-trained neural networks, with an emphasis on how different CNN architectures paired with an LSTM affect sentence generation (a minimal sketch of this pipeline appears after the record below). A combination of neural networks is better suited to generating a caption from an image than a single network alone. The quality of the generated captions is evaluated using the BLEU metric. en_US
dc.language.iso en en_US
dc.relation.ispartofseries ;TKM20MCA2024
dc.title AN EMBRACIVE STUDY OF IMAGE CAPTION GENERATION USING PRE-TRAINED NEURAL NETWORKS en_US
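
The abstract describes an encoder-decoder pipeline in which a pre-trained CNN encodes the image into a feature vector and an LSTM generates the caption word by word. The following is a minimal sketch of one such model in Python with Keras; the choice of InceptionV3 as the encoder, the 256-unit layer sizes, and the vocabulary and caption-length constants are illustrative assumptions, not details taken from the thesis.

# Minimal CNN-encoder / LSTM-decoder captioning sketch (assumptions noted above).
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Input, Dense, LSTM, Embedding, Dropout, add
from tensorflow.keras.models import Model

vocab_size = 5000   # assumed vocabulary size
max_length = 34     # assumed maximum caption length, in tokens

# Pre-trained encoder: InceptionV3 with its classification head removed,
# leaving the 2048-d pooled feature vector (computed once per image, offline).
cnn = InceptionV3(weights="imagenet")
feature_extractor = Model(cnn.input, cnn.layers[-2].output)

# Image branch: project the CNN features into the decoder's hidden space.
image_input = Input(shape=(2048,))
img = Dropout(0.5)(image_input)
img = Dense(256, activation="relu")(img)

# Text branch: embed the partial caption so far and run it through an LSTM.
caption_input = Input(shape=(max_length,))
txt = Embedding(vocab_size, 256, mask_zero=True)(caption_input)
txt = Dropout(0.5)(txt)
txt = LSTM(256)(txt)

# Merge the two modalities and predict a distribution over the next word.
decoder = add([img, txt])
decoder = Dense(256, activation="relu")(decoder)
output = Dense(vocab_size, activation="softmax")(decoder)

model = Model(inputs=[image_input, caption_input], outputs=output)
model.compile(loss="categorical_crossentropy", optimizer="adam")

At inference time the decoder runs in a loop: start from a start-of-sequence token, predict the next word, append it to the partial caption, and repeat until an end token or max_length is reached. Swapping the encoder for another pre-trained CNN such as VGG16 or ResNet50 mainly changes the feature dimension, which is how a study like this one can compare the influence of different CNN architectures.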
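
The abstract also states that caption quality is evaluated with BLEU. Below is a small sketch of how that score is commonly computed with NLTK; the reference and candidate sentences are invented purely for illustration.

# Hedged BLEU example with NLTK; the sentences are made up for illustration.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["a", "dog", "runs", "on", "the", "beach"]]         # ground-truth caption(s)
candidate = ["a", "dog", "is", "running", "on", "the", "beach"]  # model output

smooth = SmoothingFunction().method1
score = sentence_bleu(reference, candidate,
                      weights=(0.5, 0.5),  # BLEU-2: unigram and bigram precision
                      smoothing_function=smooth)
print(f"BLEU-2: {score:.3f}")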

