Explainable gait recognition with prototyping encoder-decoder
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jucheol Moon | - |
dc.contributor.author | Yong-Min Shin | - |
dc.contributor.author | Jin-Duk Park | - |
dc.contributor.author | Nelson Hebert Minaya | - |
dc.contributor.author | Won-Yong Shin | - |
dc.contributor.author | Sang-Il Choi | - |
dc.date.accessioned | 2023-10-17T07:40:04Z | - |
dc.date.available | 2023-10-17T07:40:04Z | - |
dc.date.issued | 2022-03 | - |
dc.identifier.issn | 1932-6203 | - |
dc.identifier.uri | https://yscholarhub.yonsei.ac.kr/handle/2021.sw.yonsei/6769 | - |
dc.description.abstract | Human gait is a unique behavioral characteristic that can be used to recognize individuals. Collecting gait information widely by means of wearable devices and recognizing individuals from the data has become a topic of research. While most prior studies collected gait information using inertial measurement units, we gather the data from 40 people using insoles equipped with pressure sensors, and precisely identify the gait phases from the long time series using the pressure data. In terms of recognizing people, there have been a few recent studies on neural network-based approaches for solving the open set gait recognition problem using wearable devices. Typically, these approaches determine decision boundaries in the latent space with a limited number of samples. Motivated by the fact that such methods are sensitive to the values of hyper-parameters, as our first contribution, we propose a new network model that is less sensitive to such changes, using a new prototyping encoder-decoder network architecture. As our second contribution, to overcome the inherent limitations due to the lack of transparency and interpretability of neural networks, we propose a new module that enables us to analyze which part of the input is relevant to the overall recognition performance, using explainable tools such as sensitivity analysis (SA) and layerwise relevance propagation (LRP). | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | PUBLIC LIBRARY SCIENCE | - |
dc.title | Explainable gait recognition with prototyping encoder-decoder | - |
dc.title.alternative | Explainable gait recognition with prototyping encoder–decoder | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1371/journal.pone.0264783 | - |
dc.identifier.scopusid | 2-s2.0-85126280286 | - |
dc.identifier.bibliographicCitation | PLOS ONE, v.17, no.3, pp e0264783-1 - e0264783-20 | - |
dc.citation.title | PLOS ONE | - |
dc.citation.volume | 17 | - |
dc.citation.number | 3 | - |
dc.citation.startPage | e0264783-1 | - |
dc.citation.endPage | e0264783-20 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | Gait recognition | - |
dc.subject.keywordAuthor | Smart insole | - |
dc.subject.keywordAuthor | Neural network | - |
dc.subject.keywordAuthor | Prototyping encoder-decoder | - |
dc.subject.keywordAuthor | Interpretability | - |
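
The abstract refers to layerwise relevance propagation (LRP) as one of the explainability tools used. As a rough, self-contained illustration only (this is a generic epsilon-rule sketch on a toy two-layer network, not the paper's implementation; the network shapes and the sensor-feature vector are hypothetical):

```python
import numpy as np

def lrp_linear(a, W, R, eps=1e-9):
    """Epsilon-rule LRP for one linear layer: redistribute the relevance R
    of the layer's outputs back onto its input activations a."""
    z = a @ W                       # pre-activations, shape (out,)
    s = R / (z + eps * np.sign(z))  # relevance per unit of pre-activation
    return a * (W @ s)              # relevance of each input, shape (in,)

rng = np.random.default_rng(0)
a = rng.random(6)                   # e.g. pressure-sensor features (hypothetical)
W1 = rng.standard_normal((6, 4))
W2 = rng.standard_normal((4, 1))

h = np.maximum(a @ W1, 0)           # ReLU hidden layer
y = h @ W2                          # scalar recognition score

R_hidden = lrp_linear(h, W2, y)         # relevance at the hidden layer
R_input = lrp_linear(a, W1, R_hidden)   # relevance of each input feature
```

The epsilon rule approximately conserves relevance: the per-input scores in `R_input` sum back to the network output `y`, which is what lets such scores be read as "how much each sensor reading contributed to the recognition decision."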