Please use this identifier to cite or link to this item: http://10.1.7.192:80/jspui/handle/123456789/11909
Full metadata record
DC Field                   | Value                                             | Language
dc.contributor.author      | Shah, Parin                                       | -
dc.date.accessioned        | 2023-08-19T05:18:19Z                              | -
dc.date.available          | 2023-08-19T05:18:19Z                              | -
dc.date.issued             | 2023-06-01                                        | -
dc.identifier.uri          | http://10.1.7.192:80/jspui/handle/123456789/11909 | -
dc.description.abstract    | As Artificial Intelligence (AI) systems become more widely used across applications, there is growing concern about their vulnerability to adversarial attacks: deliberate attempts to manipulate AI models with specially crafted inputs so that they produce incorrect or unintended outputs. While most studies have concentrated on attacking AI models deployed in cloud or server-based setups, deploying AI on embedded hardware poses distinctive challenges and opportunities for adversarial attacks. This thesis explores the landscape of adversarial attacks on AI models deployed on embedded hardware. We provide an overview of different types of adversarial attacks, including evasion attacks and model extraction attacks, and discuss their implications for embedded systems. Evasion attacks aim to deceive AI models during inference, while model extraction attacks compromise the value and integrity of the original AI model by extracting its essential features. In extraction attacks, adversaries attempt to obtain a surrogate or clone of the target model: by recovering information such as weights, architecture, or decision boundaries, attackers can replicate the model's functionality or gain valuable insight into its inner workings. The resulting surrogate model can then be used for malicious purposes such as intellectual property theft, unauthorized use, or launching further attacks. We examine the specific challenges posed by deploying AI on resource-constrained embedded devices, including limited computational power, memory constraints, and real-time processing requirements. Furthermore, we investigate the potential ramifications of adversarial attacks on several widely used, publicly available datasets, including MNIST handwritten digits, CIFAR-10, Gesture Recognition, and others. We emphasize the outcomes a successful attack can produce, such as extraction of important model features and evasion of accurate predictions. Overall, this thesis aims to raise awareness of the vulnerability of AI deployed on embedded hardware to adversarial attacks and provides insight into the challenges, potential impacts, and mitigation strategies. By understanding these issues, stakeholders can make informed decisions to enhance the security and reliability of embedded AI systems in the face of evolving adversarial threats. (Illustrative sketches of both attack classes follow this metadata record.) | en_US
dc.publisher               | Institute of Technology | en_US
dc.relation.ispartofseries | 21MECE10;               | -
dc.subject                 | EC 2021                 | en_US
dc.subject                 | Project Report 2021     | en_US
dc.subject                 | EC Project Report       | en_US
dc.subject                 | EC (ES)                 | en_US
dc.subject                 | Embedded Systems        | en_US
dc.subject                 | Embedded Systems 2021   | en_US
dc.subject                 | 21MEC                   | en_US
dc.subject                 | 21MECE                  | en_US
dc.subject                 | 21MECE10                | en_US
dc.title                   | Embedded AI Security    | en_US
dc.type                    | Dissertation            | en_US
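The abstract above names two attack classes, evasion and model extraction. The two sketches below are illustrative only and do not come from the thesis itself; every model, parameter, and query budget in them is an assumption chosen for demonstration.

A minimal evasion-attack sketch, assuming a PyTorch image classifier and the Fast Gradient Sign Method (FGSM) as the perturbation technique — one of several methods the abstract's "evasion attacks" could refer to:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.1) -> torch.Tensor:
    """Perturb x by one signed-gradient step that raises the loss on label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient, then clip to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage on an untrained MNIST-shaped classifier (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # one fake 28x28 grayscale image
y = torch.tensor([3])          # its assumed true label
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
print("before:", model(x).argmax(dim=1).item(),
      "after:", model(x_adv).argmax(dim=1).item())
```

And a minimal model-extraction sketch, assuming the attacker can query the victim as a black-box oracle and distills a surrogate from its soft outputs — one way to obtain the "surrogate or clone" the abstract describes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

victim = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))     # black box to the attacker
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # attacker's copy

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
query_budget, batch_size = 200, 64   # assumed attacker constraints

for _ in range(query_budget):
    x = torch.rand(batch_size, 1, 28, 28)          # attacker-synthesized queries
    with torch.no_grad():
        soft_labels = F.softmax(victim(x), dim=1)  # oracle responses
    # Distillation: match the surrogate's output distribution to the victim's.
    loss = F.kl_div(F.log_softmax(surrogate(x), dim=1), soft_labels,
                    reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Both sketches run on CPU and need no trained weights; against a real victim an attacker would typically choose more informative query inputs than uniform noise, since the query budget is the binding cost of the attack.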
Appears in Collections: Dissertation, EC (ES)

Files in This Item:
File         | Description | Size    | Format
21MECE10.pdf | 21MECE10    | 7.91 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.