Training with Single-Class Images and Generalizing to Multi-Class Images Using a Kasami Orthogonal Classification Layer

Image taken from the paper.

Abstract

This work studies and evaluates extending the generalization of neural networks trained on single-class data to provide classification output for multi-label data without further training. In this context we evaluate and compare the performance of neural networks with a conventional classification/output layer against neural networks using the Kasami Orthogonal Classification Layer (KOCL) proposed in [5]. KOCL serves as the output layer of a classification network and consists of a fully connected layer, like a conventional classification/output layer, but with fixed (non-trainable) weights equal to a set of orthogonal Kasami codes. Evaluation was carried out by training two types of networks, VGG and WResNet, on the standard MNIST, FashionMNIST, CIFAR10, and SVHN single-label datasets, then testing them on multi-label images synthetically generated by merging two random images from the same dataset. Results show that neural networks trained on single-label images can generalize to multi-label images and provide classification predictions with high accuracy. Moreover, when networks trained with a conventional classification layer and networks trained with KOCL are tested for multi-label classification under the same setup, the latter provide far better multi-label classification accuracy.
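As a rough illustration of the idea behind KOCL (this is a minimal sketch, not the paper's implementation; the function names and the choice of a length-15 Kasami small set are my own), the snippet below builds a small set of Kasami sequences from an m-sequence, freezes them as the rows of an output-layer weight matrix, and classifies by correlating features against each code. Because the codes have low cross-correlation, a feature vector that is a superposition of two class codes, a stand-in for the network's response to a merged two-class image, still correlates strongly with both of its constituent classes, which is the behavior the abstract exploits for multi-label prediction without retraining.

```python
# Hedged sketch of the KOCL idea (illustrative names/parameters, not from
# the paper): fixed, non-trainable output weights drawn from a Kasami
# small set, with classification by correlation against the codes.

def m_sequence(n=4):
    """Maximal-length sequence of period 2^n - 1 from the primitive
    polynomial x^4 + x + 1 (recurrence a[k] = a[k-3] XOR a[k-4])."""
    a = [1, 0, 0, 0]                      # any non-zero seed works
    while len(a) < 2 ** n - 1:
        a.append(a[-3] ^ a[-4])
    return a

def kasami_small_set(n=4):
    """Small set of Kasami sequences: u, plus u XOR cyclic shifts of the
    decimation w[t] = u[(2^(n/2) + 1) * t mod (2^n - 1)]."""
    u = m_sequence(n)
    period = 2 ** n - 1
    q = 2 ** (n // 2) + 1
    w = [u[(q * t) % period] for t in range(period)]
    codes = [u]
    for shift in range(2 ** (n // 2) - 1):   # w has period 2^(n/2) - 1
        codes.append([u[t] ^ w[(t + shift) % period] for t in range(period)])
    return codes  # 2^(n/2) binary sequences of length 2^n - 1

def to_bipolar(code):
    return [1 if b else -1 for b in code]

def correlate(features, codes):
    """Dot product of a feature vector with each fixed code row; this is
    what a fully connected layer with frozen code weights computes."""
    return [sum(f * c for f, c in zip(features, row)) for row in codes]

codes = [to_bipolar(c) for c in kasami_small_set(4)]   # 4 classes, dim 15

# Single-label: a feature vector aligned with code 2 correlates most with it.
scores = correlate(codes[2], codes)
print(scores.index(max(scores)))          # -> 2

# Multi-label without retraining: superimpose codes 1 and 3 and take the
# two highest correlations.
mixed = [a + b for a, b in zip(codes[1], codes[3])]
scores = correlate(mixed, codes)
top2 = sorted(sorted(range(4), key=lambda i: scores[i])[-2:])
print(top2)                               # -> [1, 3]
```

Note that length-(2^n - 1) Kasami codes are not strictly orthogonal; their cross-correlations are bounded (here by 2^(n/2) + 1 = 5 versus an autocorrelation of 15), which is what keeps the superimposed classes separable.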

Type
Publication
Appeared as an Extended Abstract at the 3rd Northern Lights Deep Learning Conference (NLDL), held virtually on 9-11 January 2021.
