Please use this identifier to cite or link to this item: http://10.1.7.192:80/jspui/handle/123456789/11437
Full metadata record
DC Field | Value | Language
dc.contributor.author | Bhavesh | -
dc.date.accessioned | 2023-03-22T08:47:16Z | -
dc.date.available | 2023-03-22T08:47:16Z | -
dc.date.issued | 2021-07 | -
dc.identifier.uri | http://10.1.7.192:80/jspui/handle/123456789/11437 | -
dc.description.abstract | Hickam’s dictum: deep neural networks can have as many parameters and/or configurations as the developer may please. Deep learning frameworks have progressed beyond human recognition capabilities, but deploying those models in real-time applications remains difficult. Despite their effectiveness, these models are built on large-scale data sets and usually have parameters on the billion scale. Various compression techniques and algorithms have been developed, yet no comprehensive approach is available for transferring such models to handheld devices. Now is the opportune moment to either optimize these models or develop architectures for implementation on embedded platforms. Knowledge distillation provides efficient and effective teacher-student learning for a variety of visual recognition tasks, because a lightweight student network can be easily trained under the guidance of high-capacity teacher networks (an illustrative sketch of this teacher-student setup follows this metadata record). Present deep learning architectures support learning capabilities, but they lack the flexibility to apply learned knowledge to tasks in other, unfamiliar domains. A huge gap still exists between developing a sophisticated algorithm for a specific task and deploying that algorithm in an online/offline production run. This work tries to fill this gap with a deep neural network based solution for object classification/detection in unrelated domains, with a focus on a reduced footprint for the developed model. The teacher-student architecture is developed for a binary classification problem, and the knowledge distillation based transfer learning approach shows a 20% improvement in classification accuracy with respect to the baseline teacher model. Various hardware frameworks have their inherent characteristics, which affect algorithm efficiency. The CPU based implementation wins in terms of latency, while the GPU wins in terms of computation. Other types of hardware, like custom DSPs or SoCs, are developed to balance the trade-off; an FPGA based SoC has the advantages of both computation and latency. For performance comparison, the proposed model is run on different platforms. Porting it to various CPU/GPU/FPGA platforms shows that it does not require compute-intensive hardware; hence the Transfer Learning (TL) based method may be used for practical applications on edge devices. A novel heterogeneous architecture is proposed that is easily portable as well as scalable. The scalability is tested with binary, ternary, and multi-class identification, and performance is compared on the basis of inference speed. Multiple domains are also considered to show that, for similar recognition accuracy, the achieved inference speed is about 50 frames per second, or 20 ms per image. This is a realistic figure; thus the architecture can be easily ported to IoT based edge devices with limited compute/power capabilities. Further, this approach can be generalized to the application's requirements with minimal changes, provided the data set formats are compatible. Transferring knowledge to smaller models has good scope for resource-constrained IoT based edge devices. | en_US
dc.language.iso | en_US | en_US
dc.publisher | Institute of Technology | en_US
dc.relation.ispartofseries | TT000122; | -
dc.subject | Theses | en_US
dc.subject | EC Theses | en_US
dc.subject | Theses EC | en_US
dc.subject | Dr. Narendra P. Gajjar | en_US
dc.subject | 14EXTPHDE128 | en_US
dc.subject | TT000122 | en_US
dc.subject | Theses IT | en_US
dc.subject | ITFEC004 | en_US
dc.title | Study and Realization of Heterogeneous Learning Architecture for Knowledge Distillation based Classification | en_US
dc.type | Thesis | en_US
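
As a companion to the abstract above, here is a minimal, self-contained PyTorch sketch of knowledge distillation based teacher-student training: the student is fitted to the teacher's temperature-softened outputs blended with the hard labels. The tiny networks, the temperature T=4.0, the weighting alpha=0.7, and the dummy data are illustrative assumptions only, not the thesis's actual models or hyperparameters.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        """Blend the soft-target KL loss (softened by temperature T)
        with the ordinary hard-label cross-entropy loss."""
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # T^2 rescaling keeps gradient magnitudes comparable
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # Hypothetical high-capacity teacher and lightweight student for a
    # binary classification demo (2 output classes), as in the abstract.
    teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 2)).eval()
    student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    x = torch.randn(16, 64)            # dummy input batch
    y = torch.randint(0, 2, (16,))     # dummy binary labels
    with torch.no_grad():
        t_logits = teacher(x)          # teacher guidance, no gradient needed
    opt.zero_grad()
    loss = distillation_loss(student(x), t_logits, y)
    loss.backward()
    opt.step()

The resulting student keeps a much smaller parameter footprint than the teacher, which is the property the thesis exploits for deployment on resource-constrained edge devices.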
Appears in Collections: Ph.D. Research Reports

Files in This Item:
File | Description | Size | Format
TT000122.pdf | TT000122 | 4.19 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.