Lightweight Neural Network++ documentation

Visit the lwnnplus home page, or download the technical report in English or Italian.


network Class Reference

Class implementing a feed forward neural network with backpropagation learning. More...

#include <network.h>

List of all members.

Public Member Functions

 network (int activ, int no_of_layers,...)
 Constructor for a network.
 network (int activ, vector< int > layers)
 Constructor for a network.
 network (const char *filename, bool binary=true)
 Constructor. Load network from file.
 network (const network &b)
 Copy constructor.
 ~network ()
 Destructor. Free memory allocated for a network.
void randomize (float range)
 Assign random values to all weights in the network.
float get_momentum () const
 Retrieve the momentum of a network.
float get_learning_rate () const
 Retrieve the learning rate of a network.
int get_no_of_inputs () const
 Retrieve the number of inputs of a network.
int get_no_of_outputs () const
 Retrieve the number of outputs of a network.
int get_no_of_layers () const
 Retrieve the number of layers of a network.
int get_no_of_neurons (int l) const
 Retrieve the number of neurons on a layer of a network.
float get_weight (int l, int nl, int nu) const
 Retrieve a weight of a network.
int get_no_of_patterns () const
 Retrieve the number of patterns in batch training.
int get_activation () const
 Retrieve the activation function of network (network::LOGISTIC or network::TANH).
float get_output_error () const
 Retrieve the output error of a network.
float get_max_learning_rate ()
 Retrieve maximum learning rate allowed in SuperSab mode.
float get_min_learning_rate ()
 Retrieve minimum learning rate allowed in SuperSab mode.
float get_ssab_up_factor ()
 Retrieve factor for increasing learning rate in SuperSab mode.
float get_ssab_down_factor ()
 Retrieve factor for decreasing learning rate in SuperSab mode.
void set_learning_rate (float learning_rate)
 Change the learning rate of a network.
void set_activation (int num_func)
 Set activation function of the network.
void set_momentum (float momentum)
 Change the momentum of a network.
void set_max_learning_rate (float max)
 Set maximum learning rate allowed in SuperSab mode.
void set_min_learning_rate (float min)
 Set minimum learning rate allowed in SuperSab mode.
void set_ssab_up_factor (float factor)
 Set factor for increasing learning rate in SuperSab mode.
void set_ssab_down_factor (float factor)
 Set factor for decreasing learning rate in SuperSab mode.
void save (const char *filename) const
 Write a network to a binary file.
void load (const char *filename)
 Read a network from a binary file.
void friendly_print (const bool show=false) const
 Write a network to stdout in a friendly format.
void print () const
 Write a network to a stdout.
void textsave (const char *filename) const
 Write a network to a text file.
void textload (const char *filename)
 Read a network from a text file.
float compute_output_error (const float *target)
 Compute the output error of a network.
float compute_average_error (const float *target) const
 Compute the average error of a network.
float compute_quadratic_error (const float *target) const
 Compute the quadratic error of a network.
float compute_max_error (const float *target) const
 Compute the max error of a network.
void compute (const float *input, float *output)
 Compute outputs of a network for given inputs.
void train ()
 Train a network.
bool is_ssab_active () const
 True if SuperSab mode is active.
int count_weights () const
 Count the number of weights of the network.
int begin_ssab ()
 Begin SuperSab mode, setting the nus to the learning rate of the network.
void train_ssab ()
 Train a network in SuperSab mode.
int reset_ssab ()
 Reset the values of learning rates of the network to learning_rate in SuperSab mode.
void free_ssab ()
 Free the memory used for SuperSab and end SuperSab mode.
bool save_ssab (const char *filename) const
 Write SuperSab learning rates to a binary file.
bool load_ssab (const char *filename)
 Load SuperSab learning rates from a binary file.
int ssab_print_nus () const
 Print learning rates for SuperSab mode.
int ssab_stats (float &average, float &max, float &min, int &n_max, int &n_min)
 Compute some statistics about the learning rates in SuperSab mode.
void begin_batch ()
 Begin training in batch mode.
void train_batch ()
 Train a network in batch mode.
void end_batch ()
 End training in batch mode adjusting weights.
void end_batch_ssab ()
 End training in batch mode adjusting weights with SuperSab.
void jolt (float factor, float range)
 Make small random changes to the weights of a network.
const network & operator= (const network &b)
 Overloaded operator=.

Static Public Attributes

const int LOGISTIC = NET_LOGISTIC
 Public constant for the logistic function.
const int TANH = NET_TANH
 Public constant for the tanh function.

Friends

ostream & operator<< (ostream &, const network &)
 Write a network on a stream.


Detailed Description

Class implementing a feed forward neural network with backpropagation learning.


Constructor & Destructor Documentation

network::network (int activ, int no_of_layers, ...)

Constructor for a network.

Parameters:
activ activation function (network::LOGISTIC or network::TANH)
no_of_layers Integer.
... Sequence of integers.
Allocate memory for a neural network with no_of_layers layers, including the input and output layer. The number of neurons in each layer is given as ..., starting with the input layer and ending with the output layer.

The parameters of the network are set to default values (for example, momentum is 0). You can change them later using the mutator methods.

If no_of_layers < 2, a runtime_error exception is thrown.
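For example, a minimal sketch (the 2-4-1 layer sizes and parameter values are arbitrary choices, not defaults of the library):

    #include <network.h>

    int main()
    {
        // 3 layers: 2 inputs, 4 hidden neurons, 1 output, logistic activation
        network net(network::LOGISTIC, 3, 2, 4, 1);

        net.set_learning_rate(0.3f);   // change defaults via the mutator methods
        net.set_momentum(0.8f);
        net.randomize(0.5f);           // random initial weights in [-0.5, 0.5]
        return 0;
    }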

network::network (int activ, vector< int > layers)

Constructor for a network.

Parameters:
activ activation function (network::LOGISTIC or network::TANH)
layers vector of integers containing the number of neurons of each layer
Allocate memory for a neural network with layers.size() layers, including the input and output layer. The number of neurons in each layer is given in the vector, starting with the input layer and ending with the output layer.

The parameters of the network are set to default values (for example, momentum is 0). You can change them later using the mutator methods.

If layers.size() < 2, a runtime_error exception is thrown.
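For example, a sketch of the same 2-4-1 network built with the vector-based constructor (assuming vector here is std::vector):

    #include <network.h>
    #include <vector>

    int main()
    {
        std::vector<int> layers;
        layers.push_back(2);   // input layer
        layers.push_back(4);   // hidden layer
        layers.push_back(1);   // output layer

        network net(network::TANH, layers);
        return 0;
    }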

network::network (const char * filename, bool binary = true)

Constructor. Load network from file.

Parameters:
filename Pointer to name of file to load
binary If true (the default) the file is binary, otherwise it is a text file.

If filename does not exist, a runtime_error exception is thrown.


Member Function Documentation

void network::randomize (float range)
 

Assign random values to all weights in the network.

Parameters:
range Floating point number.
All weights in the neural network are assigned a random value from the interval [-range, range].

float network::get_momentum () const [inline]
 

Retrieve the momentum of a network.

Returns:
Momentum of the neural network.

float network::get_learning_rate () const [inline]
 

Retrieve the learning rate of a network.

Returns:
Learning rate of the neural network.

int network::get_no_of_inputs () const [inline]
 

Retrieve the number of inputs of a network.

Returns:
Number of neurons in the input layer of the neural network.

int network::get_no_of_outputs () const [inline]
 

Retrieve the number of outputs of a network.

Returns:
Number of neurons in the output layer of the neural network.

int network::get_no_of_layers () const [inline]
 

Retrieve the number of layers of a network.

Returns:
Number of layers, including the input and output layers, of the neural network.

int network::get_no_of_neurons (int l) const
 

Retrieve the number of neurons on a layer of a network.

Parameters:
l layer index ( should be 0 <= l < get_no_of_layers() )
Returns:
number of neurons on layer l

float network::get_weight (int l, int nl, int nu) const

Retrieve a weight of a network.

Parameters:
l Number of lower layer.
nl Number of neuron in the lower layer.
nu Number of neuron in the next layer.
Returns:
Weight connecting the neuron numbered nl in the layer numbered l with the neuron numbered nu in the layer numbered l+1.
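For example, a sketch that dumps the layer-to-layer weights using the accessors above (indices are assumed to be 0-based, as for get_no_of_neurons(); whether bias weights are reachable through get_weight() is not stated here):

    #include <network.h>
    #include <cstdio>

    void dump_weights(const network &net)
    {
        for (int l = 0; l < net.get_no_of_layers() - 1; ++l)               // lower layer
            for (int nl = 0; nl < net.get_no_of_neurons(l); ++nl)          // neuron in layer l
                for (int nu = 0; nu < net.get_no_of_neurons(l + 1); ++nu)  // neuron in layer l+1
                    std::printf("w(%d,%d,%d) = %f\n",
                                l, nl, nu, net.get_weight(l, nl, nu));
    }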

int network::get_no_of_patterns () const [inline]
 

Retrieve the number of patterns in batch training.

Returns:
number of patterns

int network::get_activation () const [inline]
 

Retrieve the activation function of network (network::LOGISTIC or network::TANH).

Returns:
activation function

float network::get_output_error () const [inline]
 

Retrieve the output error of a network.

Returns:
Output error of the neural network.
Before calling this routine, compute() and compute_output_error() should have been called to compute outputs for given inputs and to actually compute the output error. This routine merely returns the output error (which is stored internally in the neural network).

float network::get_max_learning_rate () [inline]
 

Retrieve maximum learning rate allowed in SuperSab mode.

Returns:
float maximum learning rate
Learning rates cannot be greater than this value.

float network::get_min_learning_rate () [inline]
 

Retrieve minimum learning rate allowed in SuperSab mode.

Returns:
float minimum learning rate
Learning rates cannot be less than this value.

float network::get_ssab_up_factor () [inline]
 

Retrieve factor for increasing learning rate in SuperSab mode.

Returns:
float factor for increasing learning rate
In SuperSab mode: if the delta at this step has the same sign as the delta at the previous step, the learning rate of that weight is multiplied by this value.

float network::get_ssab_down_factor () [inline]
 

Retrieve factor for decreasing learning rate in SuperSab mode.

Returns:
float factor for decreasing learning rate
In SuperSab mode: if the delta at this step has the opposite sign of the delta at the previous step, the learning rate of that weight is multiplied by this value.

void network::set_learning_rate (float learning_rate) [inline]
 

Change the learning rate of a network.

Parameters:
learning_rate Floating point number.

void network::set_activation (int num_func)
 

Set activation function of the network.

Parameters:
num_func Number of function (network::LOGISTIC or network::TANH)

void network::set_momentum (float momentum) [inline]
 

Change the momentum of a network.

Parameters:
momentum Floating point number.

void network::set_max_learning_rate (float max)
 

Set maximum learning rate allowed in SuperSab mode.

Parameters:
max maximum learning rate
Learning rates cannot be greater than this value.

If the previous maximum learning rate was greater than the new one and SuperSab mode is active, all the learning rates are adjusted so that none exceeds the new maximum.

So, if you just want to change the default maximum learning rate, call this method before begin_ssab().

void network::set_min_learning_rate (float min)
 

Set minimum learning rate allowed in SuperSab mode.

Parameters:
min minimum learning rate
Learning rates cannot be less than this value.

If the previous minimum learning rate was less than the new one and SuperSab mode is active, all the learning rates are adjusted so that none falls below the new minimum.

So, if you just want to change the default minimum learning rate, call this method before begin_ssab().

void network::set_ssab_up_factor (float factor) [inline]
 

Set factor for increasing learning rate in SuperSab mode.

Parameters:
factor (for increasing learning rate)
In SuperSab mode: if the delta at this step has the same sign as the delta at the previous step, the learning rate of that weight is multiplied by this value (should be factor > 1).

void network::set_ssab_down_factor (float factor) [inline]
 

Set factor for decreasing learning rate in SuperSab mode.

Parameters:
factor (for decreasing learning rate)
In SuperSab mode: if the delta at this step has the opposite sign of the delta at the previous step, the learning rate of that weight is multiplied by this value (should be 0 < factor < 1).

void network::save (const char * filename) const
 

Write a network to a binary file.

Parameters:
filename Pointer to name of file to write to.
If it is impossible to write to the file, a runtime_error exception is thrown.

void network::load (const char * filename)
 

Read a network from a binary file.

Parameters:
filename Pointer to name of file to read from.
If filename does not exist, or the format of the file is wrong, a runtime_error exception is thrown.
It is possible to import files in the old format used by the C library lwneuralnet.
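For example, a sketch of saving and reloading a network in both formats (file names are arbitrary):

    #include <network.h>
    #include <iostream>
    #include <stdexcept>

    void persist(const network &net)
    {
        try {
            net.save("net.dat");        // binary format
            net.textsave("net.txt");    // text format

            network from_binary("net.dat");        // binary = true (default)
            network from_text("net.txt", false);   // binary = false: text file
        } catch (const std::runtime_error &e) {
            std::cerr << "I/O error: " << e.what() << std::endl;
        }
    }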

void network::friendly_print (const bool show = false) const
 

Write a network to stdout in a friendly format.

Parameters:
show If show==true weights are displayed
See also operator<<().

void network::textsave (const char * filename) const
 

Write a network to a text file.

Parameters:
filename Pointer to name of file to write to.
If it is impossible to write to the file, a runtime_error exception is thrown.

void network::textload (const char * filename)
 

Read a network from a text file.

Parameters:
filename Pointer to name of file to read from.
If filename does not exist, a runtime_error exception is thrown.

float network::compute_output_error (const float * target)
 

Compute the output error of a network.

Parameters:
target Pointer to a sequence of floating point numbers.
Returns:
Output error of the neural network.
The return value is the square of the Euclidean distance between the actual output and the target. This routine also prepares the network for backpropagation training by storing (internally in the neural network) the errors associated with each of the outputs.

float network::compute_average_error (const float * target) const
 

Compute the average error of a network.

Parameters:
target Pointer to a sequence of floating point numbers.
Returns:
Average error of the neural network.
The average error is defined as the average value of the absolute differences between outputs and targets.

float network::compute_quadratic_error (const float * target) const
 

Compute the quadratic error of a network.

Parameters:
target Pointer to a sequence of floating point numbers.
Returns:
Quadratic error of the neural network.
The quadratic error is defined as sqrt( sum_j (T_j - O_j)^2 ) / N, where the T_j are the targets and the O_j are the outputs.

float network::compute_max_error (const float * target) const
 

Compute the max error of a network.

Parameters:
target Pointer to a sequence of floating point numbers.
Returns:
Maximum error of the neural network.
The maximum error is defined as the maximum of absolute differences between outputs and targets.

void network::compute (const float * input, float * output)

Compute outputs of a network for given inputs.

Parameters:
input Pointer to sequence of floating point numbers.
output Pointer to sequence of floating point numbers or NULL.
Compute outputs of a neural network for given inputs by forward propagating the inputs through the layers. If output is non-NULL, the outputs are copied to output (otherwise they are only stored internally in the network).
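For example, a sketch of a forward pass on a network with 2 inputs and 1 output:

    #include <network.h>
    #include <cstdio>

    void forward(network &net)
    {
        float input[2]  = { 0.0f, 1.0f };
        float output[1] = { 0.0f };

        net.compute(input, output);   // outputs are copied into 'output'
        std::printf("output = %f\n", output[0]);
    }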

void network::train ()
 

Train a network.

Before calling this routine, compute() and compute_output_error() should have been called to compute outputs for given inputs and to prepare the neural network for training by computing the output error. This routine performs the actual training by backpropagating the output error through the layers.
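For example, a sketch of the on-line training sequence for a 2-input, 1-output network (the XOR patterns are just an illustration):

    #include <network.h>

    void train_online(network &net, int epochs)
    {
        const float in[4][2]  = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        const float tgt[4][1] = { {0},    {1},    {1},    {0}    };

        for (int e = 0; e < epochs; ++e)
            for (int p = 0; p < 4; ++p) {
                net.compute(in[p], 0);              // forward pass
                net.compute_output_error(tgt[p]);   // store the output error
                net.train();                        // backpropagate and adjust weights
            }
    }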

bool network::is_ssab_active () const [inline]
 

True if SuperSab mode is active.

Returns:
true if SuperSab mode is active, false otherwise.

int network::count_weights () const
 

Count the number of weights of the network.

Returns:
number of weights

int network::begin_ssab ()
 

Begin SuperSab mode, setting the nus to the learning rate of the network.

Precondition: (! is_ssab_active()), i.e. begin_ssab() was not called before.

If is_ssab_active() and you want to reset the values of the nus, use reset_ssab(); if you want to free the memory used for SuperSab, use free_ssab().

Returns:
-1 on failure, number of weights of the net otherwise.

void network::train_ssab ()
 

Train a network in SuperSab mode.

Before calling this routine, begin_ssab() should have been called to begin SuperSab training.

Furthermore, for the current input/output pair, compute() and compute_output_error() should have been called to compute outputs for given inputs and to prepare the neural network for training by computing the output error. This routine performs the actual training by backpropagating the output error through the layers and changing the weights.

The best way to use SuperSab is in combination with batch training, using train_batch() for the training and end_batch_ssab() at the end of every epoch.
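For example, a sketch of one on-line SuperSab step for a single input/target pair (for the recommended batch variant see end_batch_ssab()):

    #include <network.h>

    void ssab_step(network &net, const float *input, const float *target)
    {
        if (!net.is_ssab_active())
            net.begin_ssab();               // allocate the per-weight learning rates

        net.compute(input, 0);              // forward pass
        net.compute_output_error(target);   // store the output error
        net.train_ssab();                   // backpropagate with SuperSab
    }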

int network::reset_ssab ()
 

Reset the values of learning rates of the network to learning_rate in SuperSab mode.

Precondition: is_ssab_active()

Returns:
int -1 on failure (SuperSab mode is not active), the number of weights of the network otherwise.

void network::free_ssab ()
 

Free the memory used for SuperSab and end SuperSab mode.

After calling free_ssab(), the values of the learning rates are lost and SuperSab mode is off.

bool network::save_ssab (const char * filename) const
 

Write SuperSab learning rates to a binary file.

Parameters:
filename Pointer to name of file to write to.
Returns:
true on success, false on failure.

bool network::load_ssab (const char * filename)
 

Load SuperSab learning rates from a binary file.

Parameters:
filename Pointer to name of file to read from.
Returns:
true on success, false on failure.

int network::ssab_print_nus () const
 

Print learning rates for SuperSab mode.

Returns:
number of weights in the network, -1 if SuperSab mode is not active

int network::ssab_stats (float & average, float & max, float & min, int & n_max, int & n_min)

Compute some statistics about the learning rates in SuperSab mode.

Returns:
-1 if SuperSab mode is not active, number of weights of the network otherwise
Parameters:
average the average of learning rates
max the max value of learning rates
min the min value of learning rates
n_max number of learning rates equal to max
n_min number of learning rates equal to min
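For example, a sketch that reports the learning-rate statistics after some SuperSab training:

    #include <network.h>
    #include <cstdio>

    void report_ssab(network &net)
    {
        float average, max, min;
        int n_max, n_min;

        if (net.ssab_stats(average, max, min, n_max, n_min) != -1)
            std::printf("nu: avg=%f max=%f (x%d) min=%f (x%d)\n",
                        average, max, n_max, min, n_min);
    }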

void network::train_batch ()
 

Train a network in batch mode.

Before calling this routine, begin_batch() should have been called (at the start of the batch) to begin batch training. Furthermore, for the current input/target pair, compute() and compute_output_error() should have been called to compute outputs for the given inputs and to prepare the neural network for training by computing the output error using the given targets. This routine performs the actual training by backpropagating the output error through the layers, but does not change the weights. The weights will be changed when (at the end of the batch) end_batch() (or end_batch_ssab()) is called.
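For example, a sketch of one epoch of plain batch training (the input/target arrays are assumed to hold n patterns of the right sizes):

    #include <network.h>

    void batch_epoch(network &net,
                     const float *inputs[], const float *targets[], int n)
    {
        net.begin_batch();
        for (int p = 0; p < n; ++p) {
            net.compute(inputs[p], 0);              // forward pass
            net.compute_output_error(targets[p]);   // store the output error
            net.train_batch();                      // accumulate deltas, no weight change
        }
        net.end_batch();                            // adjust weights once per batch
    }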

void network::end_batch ()
 

End training in batch mode adjusting weights.

Adjust the weights in the neural network according to the average delta of all patterns in the batch.

void network::end_batch_ssab ()
 

End training in batch mode adjusting weights with SuperSab.

Adjust the weights in the neural network according to the average delta of all patterns in the batch, using SuperSab to adapt the per-weight learning rates.

To use SuperSab mode in batch training, call begin_ssab() once, then begin_batch() at the beginning of every epoch, train the network with train_batch(), and call end_batch_ssab() at the end of every epoch.
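For example, a sketch of the recommended combination (same assumptions about the input/target arrays as above):

    #include <network.h>

    void train_batch_ssab(network &net,
                          const float *inputs[], const float *targets[],
                          int n, int epochs)
    {
        net.begin_ssab();                           // once, before the epochs
        for (int e = 0; e < epochs; ++e) {
            net.begin_batch();
            for (int p = 0; p < n; ++p) {
                net.compute(inputs[p], 0);
                net.compute_output_error(targets[p]);
                net.train_batch();
            }
            net.end_batch_ssab();                   // adjust weights with SuperSab
        }
        net.free_ssab();                            // optionally end SuperSab mode
    }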

void network::jolt (float factor, float range)

Make small random changes to the weights of a network.

Parameters:
factor Floating point number.
range Floating point number.
All weights in the neural network that are in absolute value smaller than range become a random value from the interval [-range,range]. All other weights get multiplied by a random value from the interval [1-factor,1+factor].
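For example, a small sketch (the factor and range values are arbitrary):

    #include <network.h>

    void shake(network &net)
    {
        // Weights with |w| < 0.001 become random values in [-0.001, 0.001];
        // all other weights are multiplied by a random value in [0.9, 1.1].
        net.jolt(0.1f, 0.001f);
    }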


Friends And Related Function Documentation

ostream & operator<< (ostream &, const network &) [friend]

Write a network on a stream.

Same format as friendly_print(false), i.e. weights are not displayed.

Usage: os << net;


The documentation for this class was generated from the following file:
network.h
Generated on Tue Oct 12 00:32:12 2004 for Lightweight Neural Network ++ by  doxygen 1.3.9