Please use this identifier to cite or link to this item: http://210.212.227.212:8080/xmlui/handle/123456789/326
Full metadata record
DC Field | Value | Language
dc.contributor.author | NANDANA, ANIL | -
dc.contributor.author | JASMIN, M R | -
dc.date.accessioned | 2022-12-08T05:31:25Z | -
dc.date.available | 2022-12-08T05:31:25Z | -
dc.date.issued | 2022-07 | -
dc.identifier.uri | http://210.212.227.212:8080/xmlui/handle/123456789/326 | -
dc.description.abstract | Automatically generating natural language descriptions of an image's content is a difficult task: unlike for humans, it does not come readily to machines. Implementing this capability, however, would surely alter how machines interact with us. Recent advances in object recognition from photographs have produced a paradigm for captioning images based on the relationships among their objects. This research presents several image caption generation models based on pre-trained neural networks, with an emphasis on how different CNN architectures, combined with an LSTM, influence sentence synthesis. A combination of neural networks is well suited to generating a caption from an image. The quality of the generated captions is measured using the BLEU metric. | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | ;TKM20MCA2024 | -
dc.title | AN EMBRACIVE STUDY OF IMAGE CAPTION GENERATION USING PRE-TRAINED NEURAL NETWORKS | en_US
Appears in Collections: 2022
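The abstract states that caption quality is scored with the BLEU metric. As a rough illustration only (this is not code from the thesis), the sketch below computes unigram BLEU (BLEU-1): clipped unigram precision against a single reference caption, multiplied by a brevity penalty that discourages overly short candidates. The function name and the example sentences are hypothetical; full BLEU additionally averages precisions over higher-order n-grams.

```python
import math
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """Unigram BLEU against one reference: clipped precision x brevity penalty."""
    cand = candidate.split()
    ref = reference.split()
    if not cand:
        return 0.0
    ref_counts = Counter(ref)
    # Clip each candidate word's count by how often it appears in the reference,
    # so repeating a correct word cannot inflate the score.
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand)
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

print(round(bleu1("a dog runs on grass", "a dog runs on the grass"), 3))
```

With the example above, every candidate word appears in the reference (precision 1.0), but the candidate is one word shorter, so the brevity penalty exp(1 - 6/5) reduces the score to about 0.819.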



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.