Hello, everyone! My name’s Sophie and I’m an intern at Maplesoft. @Samir Khan asked me to develop a couple of demonstration applications using the DeepLearning package – my work is featured on the Application Center.

I thought I’d describe two critical commands used in the applications – DNNClassifier() and DNNRegressor().

DeepLearning:-DNNClassifier

The DNNClassifier calls tf.estimator.DNNClassifier from the TensorFlow Python API. This command builds a feedforward multilayer neural network that is trained with a set of labeled data in order to perform classification on similar, unlabeled data.

Input Data and Feature Columns

The dataset used for training and validating the classifier is a Maple DataFrame. In the prediction of malignant/benign breast mass example, the training set is a DataFrame with 32 columns in total, with column labels “ID Number”, “Diagnosis”, “radius”, “texture”, etc. Note that labeling the columns of the dataset is mandatory, as the neural network later needs to identify which feature column corresponds to which list of values.
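If you are assembling data by hand rather than importing a file, a labeled DataFrame can be constructed directly. The two rows below are purely illustrative values in the same layout (the real application imports the full dataset):

M := Matrix([[842302, "M", 17.99, 10.38],
             [842517, "M", 20.57, 17.77]]):
# column labels are mandatory, since feature columns are matched to data by name
sample := DataFrame(M, columns = ["ID Number", "Diagnosis", "radius", "texture"]);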

Feature columns are what come between the raw input data and the classifier model; they are required by TensorFlow to specify how the input data should be transformed before being given to the model. Maple currently supports three types of feature columns, sketched in the example after this list:

  • NumericColumn, which represents real, numerical figures,
  • CategoricalColumn, which denotes categorical (ordinal) data, and
  • BucketizedColumn, which organizes continuous data into a discrete number of buckets with specified boundaries.
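For instance, the three column types could be constructed along the following lines. Only NumericColumn is used in this application; the option names shown for the other two (vocabulary and boundaries) mirror their TensorFlow counterparts and are assumptions here, so check the corresponding help pages for the exact syntax:

with(DeepLearning):
# a real-valued feature holding one number per record
radius_col := NumericColumn("radius", shape = [1]):
# a categorical feature with two possible values; the vocabulary option name
# follows TensorFlow's categorical_column_with_vocabulary_list and is an assumption
diagnosis_col := CategoricalColumn("Diagnosis", vocabulary = ["B", "M"]):
# split the continuous radius feature into four ranges: <10, 10-15, 15-20, >=20;
# the boundaries option follows TensorFlow's bucketized_column and is an assumption
radius_buckets := BucketizedColumn(radius_col, boundaries = [10.0, 15.0, 20.0]):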

In this application, the input data consists of 30 real, numeric values that represent physical traits of a cell nucleus computed from a digitized image of the breast mass. We create a list of NumericColumns by calling

with(DeepLearning):
fc := [seq(NumericColumn(u,shape=[1]), u in cols[3..])]:

where cols is a list of column labels and shape=[1] indicates that each data input is just a single numeric value.
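Had a feature instead been, say, a vector of four numbers per record, the shape option would reflect that (the column name here is hypothetical):

# hypothetical vector-valued feature: four numbers per record
coords_col := NumericColumn("coords", shape = [4]):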

Building, training and evaluating the neural network

When we create a DNNClassifier, we need to specify the feature columns (input layer), the architecture of the neural network (hidden layers) and the number of classes (output layer). Recall that the DNNClassifier builds a feedforward multilayer neural network, hence when we call the function, we need to indicate how many hidden layers we want and how many nodes there should be on each layer. This is done by passing a list of positive integers as the parameter hidden_units when we call the function. In the example, we did:

classifier := DNNClassifier(fc, hidden_units=[20,40,20],num_classes=2):

where we set up 3 hidden layers with 20, 40 and 20 nodes respectively. In addition, there are 30 input nodes (i.e. the number of feature columns) and 1 output node (i.e. binary classification). The diagram below illustrates a simpler example with an input layer of 3 nodes, 2 hidden layers of 7 and 5 nodes, and an output layer with 1 node.

(Diagram created using NN-SVG: https://github.com/zfrenchee/NN-SVG)

After we have built the model, we can train it by calling

classifier:-Train(train_data[3..32], train_data[2], steps = 256, num_epochs = 3, shuffle = true):

where we

  1. Give the training data (train_data[3..32]) and the corresponding labels (train_data[2]) to the model.
  2. Specify that the entire dataset will be passed to the model three times, with 256 steps in each pass.
  3. Specify that data batches for training will be created by randomly shuffling the tensors.

Now that the training process is complete, we can use the validation set to evaluate the effectiveness of our model.

classifier:-Evaluate(test_data[3..32],test_data[2], steps = 32);

The output indicates an accuracy of ~92.11% in this case. There are more metrics, such as accuracy_baseline, auc and average_loss, that help us decide whether we need to modify the architecture for better performance.
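To work with these metrics programmatically, the result of Evaluate can be stored first. The sketch below assumes the result can be indexed by metric name, as its printed output suggests; that indexing is an assumption, so see the Evaluate help page for the exact return type:

res := classifier:-Evaluate(test_data[3..32], test_data[2], steps = 32):
# assumes the result is indexable by metric name (an assumption)
res["accuracy"], res["auc"], res["average_loss"];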

Making a prediction

We then build a predictor function that takes an arbitrary set of measurements as a DataSeries and returns a prediction generated by the trained DNN classifier.

predictor := proc(ds)
    # convert the DataSeries into a one-row DataFrame (one column per feature),
    # then return the first (and only) prediction
    classifier:-Predict(Transpose(DataFrame(ds)), num_epochs = 1, shuffle = false)[1];
end proc;

Now we can pass a DataSeries with 30 labeled rows to the predictor (recall that cols is the list of column names):

ds := DataSeries([11.49, 14.59, 73.99, 404.9, 0.1046, 8.23E-02, 5.31E-02, 1.97E-02, 0.1779, 6.57E-02, 0.2034, 1.166, 1.567, 14.34, 4.96E-03, 2.11E-02, 4.16E-02, 8.04E-03, 1.84E-02, 3.61E-03, 12.4, 21.9, 82.04, 467.6, 0.1352, 0.201, 0.2596, 7.43E-02, 0.2941, 9.18E-02], labels = cols[3..]); 
predictor(ds);

The output indicates that the probability of this data point belonging to class_id [0] is ~90.79%. In other words, according to our model, the probability of this breast mass cell being benign is ~90.79%.

DeepLearning:-DNNRegressor

The use of the DNNRegressor is very similar (almost identical) to that of the classifier; the only significant difference is that while the classifier predicts discrete labels as classes, the regressor predicts a continuous quantitative result from the provided data (note that CategoricalColumn is still applicable for input features). A minimal sketch of the parallel workflow is shown below. For more details about the basic usage of the DNNRegressor, please refer to Predicting the burnt area of forest fires with DNN Regressor.
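In this sketch, the feature names (temp, RH, wind, rain) are borrowed from the forest-fires application, and the DataFrames fire_train and fire_test are assumptions standing in for data prepared as in the classifier example above:

with(DeepLearning):
# numeric feature columns for four weather measurements (names assumed)
fc := [seq(NumericColumn(u, shape = [1]), u in ["temp", "RH", "wind", "rain"])]:
# no num_classes option: the regressor outputs a single continuous value
regressor := DNNRegressor(fc, hidden_units = [20, 40, 20]):
# columns 1..4 hold the features, column 5 the continuous target (assumed layout)
regressor:-Train(fire_train[1..4], fire_train[5], steps = 256, num_epochs = 3, shuffle = true):
regressor:-Evaluate(fire_test[1..4], fire_test[5], steps = 32);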

 
