Welcome to Anduril

Anduril is a neural network library for Python. It aims to be fast and robust without sacrificing ease of use. The code for Anduril is available on GitHub.

Installation Instructions

Anduril has the following dependencies:


Download the appropriate version of Anduril and extract the tarball to a location of your preference. Then 'cd' into the Anduril directory and execute the following commands:

$> python config.py
$> sudo python setup.py install

Anduril should now be successfully installed.
Supported OS: Linux, Unix

Documentation

Constructor

Anduril(void)

The constructor takes no inputs.

Initializer

Anduril.init(sconfig, classreg = 0, numcores = 1, gradd = 0, costfunc = 0)

This method initializes the neural network. The architecture is specified by the string sconfig as "-"-separated numbers. Anduril automatically detects the sizes of the input and output layers from the input file, so only the architecture of the hidden layers needs to be specified in sconfig. The user must specify the purpose of the neural network, regression or classification: set classreg to 1 for regression and 0 for classification. Using numcores the user can specify the number of cores at their disposal; if the number of cores is greater than 1, Anduril will take advantage of them to perform calculations in parallel. The user can also choose the gradient descent algorithm: set gradd to 0 for full-batch gradient descent and 1 for mini-batch gradient descent. For classification, the cross-entropy cost function can be selected by setting costfunc to 1 (not available yet). By default, classification uses sigmoid activation functions on all layers, and regression uses the tanh(x) + 0.1x function. This can be changed using Anduril.func_arch(flayer).
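To make the sconfig format concrete, the sketch below shows how a string such as "10-20" maps to hidden-layer sizes. This is a hypothetical illustration of the format, not Anduril's actual parser:

```python
# Hypothetical sketch of the sconfig format; Anduril's real parsing
# code is not shown in this document.
def parse_sconfig(sconfig):
    """Split a '-'-separated architecture string into hidden-layer sizes."""
    return [int(units) for units in sconfig.split("-")]

# "10-20" describes two hidden layers: 10 units, then 20 units.
print(parse_sconfig("10-20"))  # [10, 20]
```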


Loading Methods

Anduril.load(filename, mode = 0, sep1 = ",", sep2 = " ")

This method loads a data file into the network. The file name is specified by filename. If mode is 0, the data is split into training and test sets; if mode is 1, the entire file is used for training. The method assumes that the file contains both input and output data, where each component of the input and output vectors is separated by the string specified in sep1, and the input and output vectors are separated by the string specified in sep2. By default, sep1 is a comma (",") and sep2 is a space (" ").
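Under the default separators, a data line such as "0.1,0.2 1.0" holds the input vector [0.1, 0.2] and the output vector [1.0]. The helper below is a hypothetical sketch of that line format, not Anduril's loader:

```python
def parse_line(line, sep1=",", sep2=" "):
    """Split one data line into (input_vector, output_vector).

    sep1 separates components within a vector; sep2 separates the input
    vector from the output vector (hypothetical sketch of the format).
    """
    input_part, output_part = line.strip().split(sep2, 1)
    x = [float(v) for v in input_part.split(sep1)]
    y = [float(v) for v in output_part.split(sep1)]
    return x, y

print(parse_line("0.1,0.2 1.0"))  # ([0.1, 0.2], [1.0])
```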


Training Methods

Anduril.train_net(epoch, lrate, mode = 1, verbose = 0, logfile = " ")

This method trains the neural network using standard backpropagation; the learning rate is specified by the variable lrate, and epoch specifies the number of epochs for which the network is trained. If mode is set to 1, the current accuracy and/or RMSE of the neural net on the given data set is reported. If verbose equals 1, more detailed output is produced, giving the RMSE and accuracy as applicable after each training iteration. If a log file is specified, the output is stored in a file with that name and a ".and" extension. A .dat file containing the RMSE of each epoch is also stored; this file can be opened in a graphing program such as Grace to examine the learning curve.
Note: Log files are only saved if mode = 1.
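The role of lrate can be illustrated with the generic gradient-descent weight update, in which each weight moves against its error gradient scaled by the learning rate. This is a textbook sketch, not Anduril's internals:

```python
def gd_step(weights, grads, lrate):
    """One full-batch gradient-descent update: w <- w - lrate * dE/dw.

    A textbook sketch of the update rule that backpropagation feeds;
    weights and grads are flat lists for illustration.
    """
    return [w - lrate * g for w, g in zip(weights, grads)]

# A large gradient moves a weight further; lrate scales every step.
print(gd_step([0.5, -0.3], [1.0, -2.0], lrate=0.01))
```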


Anduril.train_rprop(epoch, mode = 1, verbose = 0, logfile = " ", tmax = 1.0)

This method trains the neural network using resilient backpropagation. epoch specifies the number of epochs for which the network is trained. If mode is set to 1, the RMSE and/or accuracy is printed as the net is trained. The variable tmax sets an upper bound on the amount by which a particular weight can change. If verbose equals 1, more detailed output is produced, giving the RMSE and accuracy as applicable after each training iteration. If a log file is specified, the output is stored in a file with that name and a ".and" extension. A .dat file containing the RMSE of each epoch is also stored; this file can be opened in a graphing program such as Grace to examine the learning curve.
Note: Log files are only saved if mode = 1.
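Resilient backpropagation maintains a per-weight step size that grows while successive gradients agree in sign and shrinks when the sign flips; tmax caps how large that step may grow. The sketch below shows the textbook step-size rule with hypothetical growth/shrink factors, not Anduril's implementation:

```python
def rprop_step(step, prev_grad, grad, tmax,
               eta_plus=1.2, eta_minus=0.5, step_min=1e-6):
    """Update one weight's step size from the gradient-sign product.

    Textbook RPROP rule (eta_plus, eta_minus, step_min are assumed
    values); growth is capped at tmax as described above.
    """
    if prev_grad * grad > 0:      # same sign: accelerate, but cap at tmax
        step = min(step * eta_plus, tmax)
    elif prev_grad * grad < 0:    # sign flip: back off
        step = max(step * eta_minus, step_min)
    return step

# Acceleration from 0.9 would give 1.08, but tmax = 1.0 caps it.
print(rprop_step(step=0.9, prev_grad=1.0, grad=2.0, tmax=1.0))  # 1.0
```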


Additional Methods

Anduril.func_arch(flayer)

The user can specify a variety of combinations of activation functions for the network by giving their choice as a string. E.g., Anduril.func_arch("030") would use the sigmoid function for the first hidden layer, the tanh + linear function for the second hidden layer, and sigmoid again for the output layer. The list of available activation functions and their corresponding numeric values is:
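The full code table is not reproduced in this document; from the initializer's description, sigmoid is the classification default (and the "030" example suggests code 0), with tanh(x) + 0.1x as the regression default. A sketch of those two activations, with the code assignments treated as assumptions:

```python
import math

def sigmoid(x):
    """Logistic activation, the classification default (assumed code 0)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh_linear(x):
    """tanh(x) + 0.1x, the regression default (assumed code 3)."""
    return math.tanh(x) + 0.1 * x

print(sigmoid(0.0))      # 0.5
print(tanh_linear(0.0))  # 0.0
```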


Anduril.test_file(filename, netname = " ", sep1 = ",", sep2 = " ")

This method allows the user to load a file against which to test the neural network. The variable netname specifies the network against which the file should be tested; if left at the default value, the file is tested against the currently loaded net.


Anduril.test_net(verbose = 0)

This method allows the user to test their neural network on the file loaded with load. If verbose equals 1, more detailed output is produced, giving the RMSE and accuracy as applicable.


Anduril.savenet(netname)

Save the given network under the name specified in netname.


Anduril.loadnet(netname)

Load the neural network saved with the name specified in netname.


Anduril.snets()

View all saved neural networks in the given directory.

Anduril.error_stats(num_bins)

This method can be used to examine the performance of a network on the training and test data sets. It plots a histogram of the differences between the actual and predicted values, along with their mean and standard deviation. Currently this only works for single-variable regression. The num_bins variable specifies the number of histogram bins.
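The statistics behind that plot can be sketched with the standard library alone; the helper below computes the residual mean, standard deviation, and bin counts. It is an illustration of the quantities described above, not Anduril's plotting code:

```python
import statistics

def error_stats(actual, predicted, num_bins):
    """Residual mean, standard deviation, and histogram bin counts
    (a sketch of the statistics described above, not Anduril's plot)."""
    residuals = [a - p for a, p in zip(actual, predicted)]
    lo, hi = min(residuals), max(residuals)
    width = (hi - lo) / num_bins or 1.0   # avoid zero-width bins
    counts = [0] * num_bins
    for r in residuals:
        # Clamp the top edge so max(residuals) falls in the last bin.
        idx = min(int((r - lo) / width), num_bins - 1)
        counts[idx] += 1
    return statistics.mean(residuals), statistics.stdev(residuals), counts
```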

Examples

To start using Anduril just import it as follows:

>>> from Anduril import *

Now we can declare a new Anduril object, initialize it, and load an input file:

>>> N = Anduril()

>>> N.init("10-20",classreg = 1, gradd = 1)

>>> N.load("input.txt")

This initializes a neural network with two hidden layers: 10 units in the first hidden layer and 20 units in the second. It is set up for regression, as specified by the classreg variable, and since gradd is set to 1 it will train with mini-batches.

Now we can train the network for 100 epochs as follows:

>>> N.train_rprop(epoch = 100, logfile = "out.log")

Or

>>> N.train_net(epoch = 100, lrate = 0.01, logfile = "out.log")

Now if we would like to test the network, we can do it as follows:

>>> N.test_net(1)

And if we would like to save it with the name "first_nn":

>>> N.savenet("first_nn")

Publications