Please use this identifier to cite or link to this item: http://10.1.7.192:80/jspui/handle/123456789/9552
Full metadata record
DC Field | Value | Language
dc.contributor.author | Kalariya, Nirali | -
dc.date.accessioned | 2021-01-06T04:23:22Z | -
dc.date.available | 2021-01-06T04:23:22Z | -
dc.date.issued | 2020-06-01 | -
dc.identifier.uri | http://10.1.7.192:80/jspui/handle/123456789/9552 | -
dc.description.abstract | PC platform validation requires choosing an optimal set of test cases, prioritizing each test case, and then executing those test cases on a weekly basis to suit the operating system and software requirements. More than three thousand test cases need to be run each week, so filtering out the optimal subset best suited for a given run is a difficult task. It is important to find a simple, standardized, and consistent approach that can be applied across multiple products in order to improve efficiency and scale consistently. We suggest a simple approach to help identify suitable, cost-optimized test cases to run in a continuous environment without sacrificing validation coverage or defect detection. | en_US
dc.publisher | Institute of Technology | en_US
dc.relation.ispartofseries | 18MCEN05; | -
dc.subject | Computer 2018 | en_US
dc.subject | Project Report 2018 | en_US
dc.subject | Computer Project Report | en_US
dc.subject | Project Report | en_US
dc.subject | 18MCEN | en_US
dc.subject | 18MCEN05 | en_US
dc.subject | NT | en_US
dc.subject | NT 2018 | en_US
dc.subject | CE (NT) | en_US
dc.title | Smart Assistant for Intel System Integration and Validation | en_US
dc.type | Dissertation | en_US
Appears in Collections: Dissertation, CE (NT)

Files in This Item:
File | Description | Size | Format
18MCEN05.pdf | 18MCEN05 | 927.35 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.