Classification of Paper Quality Using a
Radial Basis Function Network
A radial basis function (RBF) network will be used on the cardboard paper
data. RBF networks are often not suitable for use in high-dimensional
problems like this one, which has 15 input dimensions. It is, however,
possible to choose to use only a subset of the inputs. Choosing a subset
of inputs can be seen as a simple form of feature extraction. To simplify
the problem, only three inputs will be used and only two of the paper
types will be classified. Due to the random nature of the initialization and training
processes, the results will vary each time the commands are evaluated.
Load the Neural Networks application package and the data.
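A minimal sketch of this step, assuming the package context `NeuralNetworks`` and a data file named paper.dat (the file name is an assumption):

```mathematica
(* Load the Neural Networks application package and the paper data. *)
<< NeuralNetworks`
<< paper.dat;
```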
Select input data from sensors 6, 7, and 8 and class data from classes 4 and 5.
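The selection can be done with Part; the variable names x, y (training data) and xv, yv (validation data) are assumptions about how the data file names its matrices:

```mathematica
(* Keep only sensors 6, 7, and 8 as inputs and two classes as outputs. *)
x  = x[[All, {6, 7, 8}]];
y  = y[[All, {4, 5}]];
xv = xv[[All, {6, 7, 8}]];
yv = yv[[All, {4, 5}]];
```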
For classification, it is advantageous to add a sigmoidal nonlinearity at
the output of the RBF network. This constrains the output to the range 0
to 1. Also, a better-conditioned training problem is often obtained if the
linear submodel is excluded.
Initialize the RBF network with six neurons, no linear submodel, and a
Sigmoid output nonlinearity.
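A sketch of the initialization, assuming the package's InitializeRBFNet function and its LinearPart and OutputNonlinearity options:

```mathematica
(* Six neurons, no linear submodel, sigmoidal output nonlinearity. *)
rbf = InitializeRBFNet[x, y, 6, LinearPart -> False,
  OutputNonlinearity -> Sigmoid]
```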
Train the RBF network for 20 iterations.
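Assuming the package's NeuralFit function, which returns the trained network together with a training record, this step might look like:

```mathematica
(* Train for 20 iterations, monitoring performance on the validation data. *)
{rbf2, fitrecord} = NeuralFit[rbf, x, y, xv, yv, 20];
```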
The trained RBF network can now be used to classify input vectors by
applying the network to them.
Classify the 24th paper data sample from the validation data.
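The trained network object can be applied like a function; with the assumed names above:

```mathematica
(* Apply the trained network to the 24th validation input vector. *)
rbf2[xv[[24]]]
```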
The output of this RBF network is a real-valued function that takes values
between 0 and 1. A crisp classification can be obtained in various ways. A
simple way is to set values larger than 0.5 to 1 and smaller than 0.5 to 0
in the following manner. (Here, True rather than 1 indicates
the sample's class.)
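One way to threshold the outputs at 0.5, producing True for the assigned class (a sketch using the assumed names above):

```mathematica
(* True where the network output exceeds 0.5, False otherwise. *)
Map[# > 0.5 &, rbf2[xv], {2}]
```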
The classification can also be displayed with a bar chart in which the correctly
classified samples lie on the diagonal and the incorrectly classified samples lie off
the diagonal. A crisp classification is obtained by changing the output
nonlinearity from the smooth sigmoid to the discrete step, by giving
the option OutputNonlinearity -> UnitStep.
Plot the classification result on the validation data.
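Assuming the package's NetPlot function and its DataFormat option, the bar chart might be produced as follows:

```mathematica
(* Bar chart of the classification on the validation data;
   UnitStep makes the network output crisp. *)
NetPlot[rbf2, xv, yv, DataFormat -> BarChart,
  OutputNonlinearity -> UnitStep]
```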
The classification result is not particularly impressive; because of the randomness
in the initialization, reevaluating the example may yield a better result.
Use NetPlot to look at the classification performance improvement during
training for each class.
Plot the progress of the classifier on the validation data.
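Assuming NetPlot accepts the training record and a ClassPerformance data format, the progress plot might be obtained with:

```mathematica
(* Classification performance on the validation data, per class,
   as a function of training iteration. *)
NetPlot[fitrecord, xv, yv, DataFormat -> ClassPerformance]
```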
You can repeat the example, selecting different components of the input values
and different classes for the output.
Because the radial basis functions have local support, that is, each is essentially
nonzero only in a region around its center, the training often gets trapped in poor
local minima. Quite frequently, one of the classes is totally misclassified. You will
see this if you reevaluate the example with a different initialization of the RBF
network.
Correct classification is obtained when there is one
basis function at each class center. If any of the class centers has not managed
to attract a basis function during the
training, then this class will not be correctly classified.