Convolutional structures and marginal statistics--a study based on K-nearest neighbours


Jugurta Montalvão
Jânio Canuto
Elyson Carvalho

Abstract

This paper addresses statistical tricks found in deep convolutional neural networks. First, the most relevant of these tricks are examined from the perspective of data scarcity; then one of them, directly related to convolution-like structures, is interpreted as a marginalization of random variables. The same kind of marginalization is implemented in an ensemble of K-nearest neighbours cells, where each cell yields scores instead of class labels. The scores are then combined to improve classification accuracy over a conventional K-nearest neighbours classifier, in experiments with two emblematic datasets: MNIST and CIFAR-10. This improvement is regarded as evidence of the effect of variable marginalization on performance, and the potential for transferring further lessons from deep neural networks to KNN-based classifiers, whose advantage is to allow for explainable artificial intelligence, is discussed.
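The core idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: each KNN "cell" sees only a random subset of the input variables (the remaining variables are marginalized out), outputs per-class scores (neighbour vote fractions) instead of a hard label, and the scores are summed across cells before the final decision. All names (`knn_scores`, `ensemble_predict`) and parameter choices are illustrative assumptions.

```python
import math
import random
from collections import Counter

def knn_scores(train, labels, query, k, dims):
    """One cell's per-class scores: the fraction of the k nearest
    neighbours (distance measured only on coordinates in `dims`,
    i.e. with the other variables marginalized out) in each class."""
    dist = lambda x: math.sqrt(sum((x[d] - query[d]) ** 2 for d in dims))
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i]))[:k]
    votes = Counter(labels[i] for i in nearest)
    return {c: votes.get(c, 0) / k for c in set(labels)}

def ensemble_predict(train, labels, query, k=3, n_cells=5, subset=2, seed=0):
    """Combine the scores of several cells, each restricted to a random
    subset of input dimensions, and return the highest-scoring class."""
    rng = random.Random(seed)
    n_dims = len(train[0])
    total = Counter()
    for _ in range(n_cells):
        dims = rng.sample(range(n_dims), subset)  # marginalize the rest
        for c, s in knn_scores(train, labels, query, k, dims).items():
            total[c] += s
    return max(total, key=total.get)

# Toy usage: two well-separated 3-D classes.
train = [(0, 0, 0), (1, 0, 1), (0, 1, 0), (5, 5, 5), (4, 5, 4), (5, 4, 5)]
labels = [0, 0, 0, 1, 1, 1]
print(ensemble_predict(train, labels, (0.5, 0.5, 0.5)))  # → 0
print(ensemble_predict(train, labels, (4.5, 4.5, 5.0)))  # → 1
```

Score-level combination is what distinguishes this from simple majority voting over hard labels: a cell that is only mildly confident contributes proportionally less to the final decision.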

Article Details

How to Cite
Montalvão, J., Canuto, J., & Carvalho, E. (2018). Convolutional structures and marginal statistics--a study based on K-nearest neighbours. Journal of Communication and Information Systems, 33(1). https://doi.org/10.14209/jcis.2018.19
Section
Regular Papers
