A simple configurable neural network with support for multiple layer types. Just to prove to myself that I'm able to do it.

1) DESCRIPTION

This program is meant for image classification with neural networks.


2) SAMPLE CONFIG

training_input: "<some path>/input.dat"
required_training_output: "<some path>/required_output.dat"
validation_input: "<some path>/validation_input.dat"
required_validation_output: "<some path>/validation_output.dat"
thread_count: 4   #the number of threads you want the network to run on
change_learning_cost: 3
cpulimit: 200
input_row: 32
input_col: 32
input_channel_count: 3   #1 in case of a black and white picture
epochs: 3
minibatch_count: -1
minibatch_len: 16
learning_rate: 0.0003
momentum: 0.9
denominator: 0.00001
regularization_rate: 0
traninig_data_len: 5000
validation_data_len: 1000
dropout_probability: 0.0
cost_function_type: 'log_likelihood'
layers:
    - layer-0:
        layer_type: "convolutional"
        neuron_type: 'leaky_relu'
        weights_row: 5
        weights_col: 5
        horizontal_stride: 1
        vertical_stride: 1
        feature_map_count: 20
    - layer-1:
        layer_type: "convolutional"
        neuron_type: 'leaky_relu'
        weights_row: 5
        weights_col: 5
        horizontal_stride: 1
        vertical_stride: 1
        feature_map_count: 30
    - layer-2:
        layer_type: "convolutional"
        input_from: [ "layer-0", "!layer-1" ]
        neuron_type: 'leaky_relu'
        weights_row: 3
        weights_col: 3
        horizontal_stride: 1
        vertical_stride: 1
        feature_map_count: 40
    - layer-3:
        layer_type: "convolutional"
        neuron_type: 'leaky_relu'
        weights_row: 3
        weights_col: 3
        horizontal_stride: 1
        vertical_stride: 1
        feature_map_count: 40
    - layer-4:
        layer_type: "convolutional"
        input_from: [ "InputLayer", "!layer-3" ]
        neuron_type: 'sigmoid'
        weights_row: 6
        weights_col: 6
        horizontal_stride: 1
        vertical_stride: 1
        feature_map_count: 40
    - layer-4.1:
        layer_type: "convolutional"
        neuron_type: 'sigmoid'
        weights_row: 4
        weights_col: 4
        horizontal_stride: 1
        vertical_stride: 1
        feature_map_count: 40
    - layer-5:
        layer_type: "convolutional"
        input_from: [ "layer-3", "layer-1", "layer-4.1" ]
        neuron_type: 'leaky_relu'
        weights_row: 3
        weights_col: 3
        horizontal_stride: 1
        vertical_stride: 1
        feature_map_count: 60
    - layer-6:
        layer_type: "flatten"
    - layer-7:
        layer_type: 'fully_connected'
        neuron_type: 'tanh'
        weights_row: 100
    - layer-8:
        layer_type: 'softmax'
        weights_row: 10
        

The name of a layer can be anything, but it cannot start with the '!' character.
For simplicity I'd recommend using the "layer-<id>" format.

"input_from" is a list of layers which provide input to the current layer. The layer before the current layer automatically gives input to the current layer. 
If you do not want input from the previous layer, put a '!' character in front of the name of the layer you do not want input from, as seen in layer-4.
The name "InputLayer" is reserved for the input layer of the network.
A flatten layer can serve as an input for only a single layer.
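
To make the layer description above more concrete, here is a minimal sketch of how the layers section of such a config could be walked with libyaml-cpp (the dependency listed in section 4). This is not the project's own loader; the file name config.yaml and the printed output are only for illustration.

#include <yaml-cpp/yaml.h>
#include <iostream>
#include <string>

int main()
{
    // Hypothetical path; point it at your own config file.
    YAML::Node config = YAML::LoadFile("config.yaml");

    // "layers" is a sequence; every element is a one-entry map of
    // <layer name> -> <layer settings>.
    for (const auto &item : config["layers"])
    {
        for (auto it = item.begin(); it != item.end(); ++it)
        {
            std::string name = it->first.as<std::string>();
            YAML::Node settings = it->second;

            std::cout << name << ": " << settings["layer_type"].as<std::string>() << "\n";

            // "input_from" is optional; a leading '!' marks a layer whose
            // input is explicitly excluded.
            if (settings["input_from"])
            {
                for (const auto &src : settings["input_from"])
                {
                    std::string ref = src.as<std::string>();
                    if (!ref.empty() && ref[0] == '!')
                        std::cout << "    excluded input: " << ref.substr(1) << "\n";
                    else
                        std::cout << "    extra input:    " << ref << "\n";
                }
            }
        }
    }
    return 0;
}

Built against libyaml-cpp (e.g. g++ parse_layers.cpp -lyaml-cpp), this prints every layer name, its type, and any extra or excluded inputs.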

Supported layer types are: convolutional, fully_connected, softmax, pooling and flatten
Supported neuron types are: sigmoid, tanh, relu, leaky_relu
Supported cost functions are: quadratic, cross_entropy and log_likelihood

A diagram of the architecture described by the sample config:


 ___
 | |                       ___________
 | |                       |         |
 | |                   |---| layer-1 |--------------------|
 | |    ___________    |   |_________|                    |
 | |    |         |    |                                  |
 | |----| layer-0 |----|                                  |   ___________
 |I|    |_________|    |   ___________   ___________      |___|         |   ___________   ___________   ___________
 |N|                   |   |         |   |         |          |         |   |         |   |         |   |         |
 |P|                   |---| layer-2 |---| layer-3 |----------| layer-5 |---| layer-6 |---| layer-7 |---| layer-8 |
 |U|                       |_________|   |_________|          |         |   |_________|   |_________|   |_________|
 |T|                                                      |---|_________|
 | |                                                      |
 | |     ___________    _____________                     |
 | |     |         |    |           |                     |
 | |-----| layer-4 |----| layer-4.1 |---------------------|
 | |     |_________|    |___________|
 |_|

3) STRUCTURE OF THE TRAINING AND VALIDATION DATA

The input and the required output have to be in separate files.
The current limitation is that all the training data needs to be in one single file. The same limitation applies to the required output.

3.1) The structure of the input data
The channels of an input image are stored one after another. For example, in the case of an RGB image, first all the red pixels are stored, then all the blue and finally all the green.
This is followed by the next image. All the data is stored as doubles.
Reading images in directly from BMP files is planned.
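
To illustrate the layout, below is a minimal, hypothetical C++ sketch that writes a single 32x32 three-channel image into an input file in the order described above. It is not part of the repository; the file name input.dat and the dummy pixel values are placeholders.

#include <cstdio>
#include <vector>

int main()
{
    // Must match input_row, input_col and input_channel_count in the config.
    const int rows = 32, cols = 32, channels = 3;

    // One image stored channel by channel; fill with real pixel values in practice.
    std::vector<double> image(channels * rows * cols, 0.0);

    std::FILE *f = std::fopen("input.dat", "wb");
    if (f == nullptr)
        return 1;

    // Write each channel completely before the next one, each as rows*cols raw
    // doubles. Repeat this for every training image, back to back, so the file
    // holds traninig_data_len images in total.
    for (int c = 0; c < channels; ++c)
        std::fwrite(&image[c * rows * cols], sizeof(double), rows * cols, f);

    std::fclose(f);
    return 0;
}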

3.2) The structure of the required training output
The length of the required output vector is the same as the number of neurons in the final layer.
All the elements of that vector are stored as doubles.
In case of image classification, every value of the required output vector is 0.0 except the one at the index mapped to the category of the input image; that value is 1.0.
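
A matching sketch for the required-output file, under the same assumptions (not shipped with the repo; required_output.dat and the label value are placeholders): one vector of doubles per image, as long as the final layer, with a single 1.0 at the image's class index.

#include <cstdio>
#include <vector>

int main()
{
    // Must match the size of the final layer (weights_row of the softmax layer, 10 here).
    const int class_count = 10;

    // Hypothetical category index of one training image.
    const int label = 7;

    // All 0.0 except the element belonging to the image's class.
    std::vector<double> required(class_count, 0.0);
    required[label] = 1.0;

    std::FILE *f = std::fopen("required_output.dat", "wb");
    if (f == nullptr)
        return 1;

    // Write the vector as raw doubles; repeat for every training image,
    // in the same order as the images in the input file.
    std::fwrite(required.data(), sizeof(double), required.size(), f);

    std::fclose(f);
    return 0;
}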


4) DEPENDENCIES

libyaml-cpp


5) BUILDING THE BINARY

This project was created with Code::Blocks. The project file (neural_networks.cbp) is included in the repo.


6) EXECUTION SAMPLE

The first argument of the program is the configuration file. An example on Unix-based systems:

/path/to/your/binary/neural_networks /path/to/your/config/config.yaml

