Please use this identifier to cite or link to this item: http://10.1.7.192:80/jspui/handle/123456789/4820
Full metadata record
DC Field | Value | Language
dc.contributor.author | Parmar, Vikas | -
dc.date.accessioned | 2014-08-14T08:16:06Z | -
dc.date.available | 2014-08-14T08:16:06Z | -
dc.date.issued | 2014-06-01 | -
dc.identifier.uri | http://hdl.handle.net/123456789/4820 | -
dc.description.abstract | A common way to learn a classifier is to train it on an available set of labeled examples with a predefined set of classes. Supervised learning algorithms require labeled data from every class to generate a classification function. One shortcoming of this classical paradigm is that a bulk of labeled examples is needed to learn the function accurately. Accurate results therefore require a pool of labeled data, so what can be done when only a few labeled examples are available? The task is to examine the influence on the recommender's accuracy when it is built using unlabeled examples in addition to the labeled examples. Co-training and self-training allow unlabeled examples to be incorporated while learning a classifier/recommender. The usefulness of these two algorithms is investigated through an experimental study on three datasets: the MovieLens dataset, the Jester dataset, and the Hetrec dataset. Accuracy and F-measure are used as the evaluation measures. | en_US
dc.publisher | Institute of Technology | en_US
dc.relation.ispartofseries | 12MCEC32; | -
dc.subject | Computer 2012 | en_US
dc.subject | Project Report 2012 | en_US
dc.subject | Computer Project Report | en_US
dc.subject | Project Report | en_US
dc.subject | 12MCE | en_US
dc.subject | 12MCEC | en_US
dc.subject | 12MCEC32 | en_US
dc.title | Learning From Labeled And Unlabeled Data | en_US
dc.type | Dissertation | en_US
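The self-training idea described in the abstract, where a learner trained on the labeled pool iteratively labels its most confident predictions on the unlabeled pool and retrains, can be sketched as follows. This is an illustrative outline only, not code from the dissertation: the nearest-centroid base learner and the margin-based confidence score are assumptions chosen to keep the example self-contained.

```python
# Self-training sketch (hypothetical; base learner and confidence
# measure are illustrative assumptions, not the dissertation's method).
import math

def centroid_fit(X, y):
    """Fit a nearest-centroid model: mean feature vector per class."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [s / counts[c] for s in sums[c]] for c in sums}

def centroid_predict(model, x):
    """Return (label, confidence); confidence is the margin between
    the two nearest centroids (larger means more confident)."""
    dists = sorted((math.dist(x, c), label) for label, c in model.items())
    best_d, best_label = dists[0]
    margin = (dists[1][0] - best_d) if len(dists) > 1 else best_d
    return best_label, margin

def self_train(X_lab, y_lab, X_unlab, rounds=5, per_round=1):
    """Each round, move the most confidently predicted unlabeled points
    into the labeled pool with their predicted labels, then refit."""
    X_lab, y_lab = list(X_lab), list(y_lab)
    pool = list(X_unlab)
    for _ in range(rounds):
        if not pool:
            break
        model = centroid_fit(X_lab, y_lab)
        scored = [(centroid_predict(model, x), x) for x in pool]
        scored.sort(key=lambda t: -t[0][1])  # most confident first
        for (label, _), x in scored[:per_round]:
            X_lab.append(x)
            y_lab.append(label)
        taken = {id(x) for (_, _), x in scored[:per_round]}
        pool = [x for x in pool if id(x) not in taken]
    return centroid_fit(X_lab, y_lab)
```

Co-training follows the same pseudo-labeling loop but maintains two learners over two feature views, each labeling examples for the other.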
Appears in Collections:Dissertation, CE

Files in This Item:
File | Description | Size | Format
12MCEC32.pdf | 12MCEC32 | 1.77 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.