Enhancing performance of Gabriel graph-based classifiers by a hardware co-processor for embedded system applications.
Date
2020
Abstract
It is well known that there is increasing interest in edge computing to reduce the distance between cloud and end devices, especially for Machine Learning (ML) methods. However, for latency-sensitive applications, little work can be found in the ML literature on suitable embedded-system implementations. This paper presents new ways to implement the decision rule of a large margin classifier based on Gabriel graphs, as well as an efficient implementation of this rule on an embedded system. The proposed approach uses the nearest-neighbor method as the decision rule; the implementation starts from an RTL pipeline architecture developed for binary large margin classifiers and integrates it into a hardware/software co-design. Results showed that the proposed approach was statistically equivalent to the original classifier and achieved a speedup factor of up to 8x over the classifier executed in software, with performance suitable for latency-sensitive ML applications.
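To make the decision rule concrete, the sketch below shows a plain-C nearest-neighbor classification over the set of structural vertices retained from the Gabriel graph, assuming that set is already available in memory. The names sv_t and nn_classify and the dimension DIM are illustrative, not from the paper, and this software sketch only stands in for the RTL co-processor implementation described above.

    #include <stddef.h>
    #include <float.h>

    #define DIM 8  /* feature dimension; illustrative value, not from the paper */

    /* One structural vertex kept after Gabriel-graph editing:
     * a feature vector and its binary class label (+1 or -1). */
    typedef struct {
        float x[DIM];
        int   label;
    } sv_t;

    /* Nearest-neighbor decision rule: return the label of the structural
     * vertex closest to the query point. Squared Euclidean distance is
     * used, since it preserves the argmin and avoids a square root. */
    int nn_classify(const sv_t *sv, size_t n_sv, const float query[DIM])
    {
        float best  = FLT_MAX;
        int   label = 0;

        for (size_t i = 0; i < n_sv; ++i) {
            float d = 0.0f;
            for (size_t j = 0; j < DIM; ++j) {
                float diff = query[j] - sv[i].x[j];
                d += diff * diff;
            }
            if (d < best) {
                best  = d;
                label = sv[i].label;
            }
        }
        return label;
    }

In a hardware/software co-design such as the one described, the distance loop over the structural vertices is the natural candidate for offloading to the RTL pipeline, which is where the reported speedup would come from.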
Keywords
Machine learning, System on a chip, Large margin, Latency sensitive, IoT
Citation
ARIAS GARCIA, J. et al. Enhancing performance of Gabriel graph-based classifiers by a hardware co-processor for embedded system applications. IEEE Transactions on Industrial Informatics, v. 17, p. 1186-1196, 2021. Available at: https://ieeexplore.ieee.org/document/9072429. Accessed: 29 Apr. 2022.