This work evaluates whether neural networks trained on single-label data can generalize to produce classification outputs for multi-label data without further training. In this context, we compare the performance of networks using a conventional classification/output layer against networks using the Kasami Orthogonal Classification Layer (KOCL) proposed in [5]. KOCL serves as the output layer of a classification network and, like a conventional output layer, is a fully connected layer, but its weights are fixed (non-trainable) and equal to a set of orthogonal Kasami codes. Evaluation was carried out by training both network types (VGG and WResNet backbones) on the standard single-label MNIST, FashionMNIST, CIFAR10, and SVHN datasets, then testing them on multi-label images synthetically generated by merging two random images from the same dataset. Results show that networks trained on single-label images can generalize to multi-label images and provide classification predictions with high accuracy. Moreover, when networks trained with a conventional classification layer are compared to networks trained with KOCL under the same multi-label test setup, the latter achieve far better multi-label classification accuracy.
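Kasami code generation is beyond the scope of this note, but the mechanism KOCL relies on can be illustrated with any set of mutually orthogonal codes. The sketch below is a minimal toy, not the paper's implementation: it uses Sylvester-Hadamard rows as a stand-in for orthogonal Kasami codes, and a hand-built feature vector (the sum of two class codes) as a stand-in for the features a trained network would produce for a merged two-class image. It shows why a fixed orthogonal output layer decodes superimposed classes cleanly by correlation.

```python
import numpy as np

def sylvester_hadamard(n):
    """Build an n x n Hadamard matrix (n a power of 2) whose rows are
    mutually orthogonal +/-1 codes -- used here as a stand-in for the
    orthogonal Kasami codes of KOCL."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Fixed (non-trainable) output-layer weights: one orthogonal code per class.
codes = sylvester_hadamard(16)[:10]          # 10 classes, code length 16

# Simulated penultimate-layer features for a merged two-class image:
# assume the features align with the codes of both present classes.
features = codes[3] + codes[7]

# Forward pass through the fixed classification layer (a plain matmul).
logits = features @ codes.T

# Orthogonality makes the two present classes stand out cleanly:
# logits are 16 at indices 3 and 7, and 0 everywhere else.
top2 = set(np.argsort(logits)[-2:])
print(top2)                                  # {3, 7}
```

Because the codes are orthogonal, the contribution of each superimposed class projects onto exactly one output unit, which is one intuition for why a fixed orthogonal layer can separate merged inputs better than a trainable layer whose class directions may be correlated.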