Please use this identifier to cite or link to this item: http://10.1.7.192:80/jspui/handle/123456789/11592
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Adhyaru, D. M.
dc.date.accessioned: 2023-04-20T11:06:01Z
dc.date.available: 2023-04-20T11:06:01Z
dc.date.issued: 2010
dc.identifier.citation: International Symposium on Control, Automation and Robotics (ISCAR-2010) [en]
dc.identifier.uri: http://10.1.7.192:80/jspui/handle/123456789/11592
dc.description.abstract: In this paper, reinforcement learning (RL) based control is implemented with an actor-critic technique. Generation of the reinforcement signal using a binary sign function for actor-critic reinforcement learning has been described in earlier work. In the present paper, two additional discriminant functions are introduced to generate the reinforcement signal. Results show that these functions converge faster than the earlier method of obtaining the reinforcement signal. The dynamics of a 2-link robot manipulator system are used to simulate the results; the system is assumed to have a certain 'canonical' structure. The paper also shows that the reinforcement-learning-based controller achieves faster convergence than a conventional controller based on loop gains. A convergence proof of the proposed algorithm is given using Lyapunov stability theory. We also propose a generalized Lyapunov function that holds for any discriminant function and for systems expressed in canonical form. [en]
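The abstract contrasts a binary sign function with smoother discriminant functions for generating the reinforcement signal, but this record does not state which two functions the paper introduces. The sketch below is therefore only a hypothetical illustration of the idea: it assumes the signal is derived from a scalar tracking error `e`, uses `sign` as the earlier baseline, and stands in `tanh` and a saturating linear map for the smoother alternatives. A smooth discriminant yields a graded signal near zero error, which is the usual intuition for faster convergence.

```python
import math

# Hypothetical discriminant functions for generating a reinforcement
# signal r from a tracking error e.  The binary sign function is the
# baseline from earlier work; tanh and the saturating linear map are
# illustrative stand-ins, NOT the paper's actual functions.

def r_sign(e):
    """Binary reinforcement signal: -1, 0, or +1."""
    return (e > 0) - (e < 0)

def r_tanh(e, k=2.0):
    """Smooth discriminant: graded signal in (-1, 1)."""
    return math.tanh(k * e)

def r_sat(e, limit=0.5):
    """Saturating linear discriminant: proportional near zero error."""
    return max(-1.0, min(1.0, e / limit))

if __name__ == "__main__":
    for e in (-1.0, -0.1, 0.0, 0.1, 1.0):
        print(f"e={e:+.1f}  sign={r_sign(e):+d}  "
              f"tanh={r_tanh(e):+.3f}  sat={r_sat(e):+.3f}")
```

Near zero error the sign function still emits a full-magnitude ±1 signal, while the smooth variants emit a small proportional correction; this graded feedback is what an actor-critic update can exploit for faster convergence.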
dc.relation.ispartofseries: ITFIC002-2 [en]
dc.subject: Reinforcement Learning [en]
dc.subject: Neural Network [en]
dc.subject: Lyapunov Stability [en]
dc.subject: Discriminant Function [en]
dc.subject: IC Faculty Paper [en]
dc.subject: Faculty Paper [en]
dc.subject: ITFIC002 [en]
dc.title: Reinforcement Learning based Control Systems and its Stability Analysis [en]
dc.type: Faculty Papers [en]
Appears in Collections:Faculty Papers, E&I

Files in This Item:
File: ITFIC002-2.pdf
Description: ITFIC002-2
Size: 106.27 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.