Please use this identifier to cite or link to this item: http://10.1.7.192:80/jspui/handle/123456789/9232
Full metadata record
DC Field	Value	Language
dc.contributor.author	Shah, Krishna	-
dc.date.accessioned	2020-07-24T05:38:54Z	-
dc.date.available	2020-07-24T05:38:54Z	-
dc.date.issued	2019-06-01	-
dc.identifier.uri	http://10.1.7.192:80/jspui/handle/123456789/9232	-
dc.description.abstract	The field of audio processing and generation has seen growing interest with the rise of deep learning. This project aims to generate music based on mood using deep learning: the detected mood determines the nature of the output to be generated, making it possible to compose music without working with instruments or artists. The pipeline first predicts the expression of a human face as quickly and accurately as possible, then generates music that matches the detected mood. Face recognition is performed with a CNN. Music composed with a plain RNN suffers from a lack of global structure, since the network cannot keep track of long-range events; the approach discussed here therefore combines an RNN with LSTM units, a mechanism that takes the detected emotion as input and composes music while respecting accuracy and timing constraints.	en_US
dc.publisher	Institute of Technology	en_US
dc.relation.ispartofseries	17MCEC15;	-
dc.subject	Computer 2017	en_US
dc.subject	Project Report 2017	en_US
dc.subject	Computer Project Report	en_US
dc.subject	Project Report	en_US
dc.subject	17MCE	en_US
dc.subject	17MCEC	en_US
dc.subject	17MCEC15	en_US
dc.title	Mood Based Music Generation	en_US
dc.type	Dissertation	en_US
Appears in Collections:Dissertation, CE
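The abstract describes a two-stage pipeline: a detected facial emotion selects the character of the music, and an LSTM-based network composes the note sequence, since a plain RNN cannot keep track of long-range structure. As a minimal illustration of those two ideas, the sketch below (not taken from the report; the emotion-to-music mapping and all parameter values are hypothetical) maps an emotion label to music parameters and implements a single NumPy LSTM step, whose explicit cell state is what carries long-range context across a note sequence:

```python
import numpy as np

# Hypothetical mapping from a detected facial emotion to music parameters
# (labels and values are illustrative, not from the dissertation).
EMOTION_TO_MUSIC = {
    "happy": {"scale": "C major", "tempo_bpm": 140},
    "sad":   {"scale": "A minor", "tempo_bpm": 70},
    "angry": {"scale": "D minor", "tempo_bpm": 160},
}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. The cell state c carries long-range context,
    which is what a plain RNN loses over long note sequences."""
    z = W @ x + U @ h_prev + b      # stacked gate pre-activations (4n,)
    n = h_prev.shape[0]
    i = sigmoid(z[:n])              # input gate
    f = sigmoid(z[n:2 * n])         # forget gate
    o = sigmoid(z[2 * n:3 * n])     # output gate
    g = np.tanh(z[3 * n:])          # candidate cell update
    c = f * c_prev + i * g          # gated update of the cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c
```

In a full system the emotion label would condition the network input (for example, concatenated with each note embedding) and the hidden state would feed a softmax over the next note; here only the recurrent step is shown.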

Files in This Item:
File	Description	Size	Format
17MCEC15.pdf	17MCEC15	2.26 MB	Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.