Please use this identifier to cite or link to this item: http://10.1.7.192:80/jspui/handle/123456789/9388
Full metadata record
DC Field | Value | Language
dc.contributor.author | Jain, Kushal | -
dc.date.accessioned | 2020-10-05T06:57:24Z | -
dc.date.available | 2020-10-05T06:57:24Z | -
dc.date.issued | 2020-06-01 | -
dc.identifier.uri | http://10.1.7.192:80/jspui/handle/123456789/9388 | -
dc.description.abstract | Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction. An AI accelerator is a class of microprocessor or computer system designed to provide hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision, and machine learning. Neural network processors (NNPs) are a family of neural processors designed by Intel for the acceleration of artificial intelligence workloads. The NNP-I and the NNP-T are intended for two different markets: inference and training. “Training” is the work of creating and teaching a neural network how to process data in the first place; “inference” refers to the task of actually running the now-trained neural network model. A Convolutional Neural Network (CNN) is a type of deep neural network commonly used for object detection and classification. State-of-the-art hardware for training and inference of CNN architectures requires a considerable amount of computation- and memory-intensive resources. Such a complex design requires a very efficient verification environment. The tasks described in this report help improve the quality of the verification environment, reduce verification time, ease the debug process, and improve coverage. As part of this project, multiple activities were completed, such as the creation of an OVM test bench for fine-grained testing of converters in the IP, coverage analysis to find gaps in the tests, and integration of a bridge module at the IP verification level to reduce the turnaround time caused by bugs in the design. After implementing these activities, turnaround time was reduced by 66%, debugging time by 20%, and coverage improved from 42% to 85%. | en_US
dc.publisher | Institute of Technology | en_US
dc.relation.ispartofseries | 18MECV09; | -
dc.subject | EC 2018 | en_US
dc.subject | Project Report 2018 | en_US
dc.subject | EC Project Report | en_US
dc.subject | EC (VLSI) | en_US
dc.subject | VLSI | en_US
dc.subject | VLSI 2018 | en_US
dc.subject | 18MEC | en_US
dc.subject | 18MECV | en_US
dc.subject | 18MECV09 | en_US
dc.title | Validation of Machine Learning Accelerator IP | en_US
dc.type | Dissertation | en_US
Appears in Collections: Dissertation, EC (VLSI)

Files in This Item:
File | Description | Size | Format
18MECV09.pdf | 18MECV09 | 1.77 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.