Module: Neuronet
- Defined in:
- lib/neuronet.rb,
lib/neuronet/layer.rb,
lib/neuronet/scale.rb,
lib/neuronet/neuron.rb,
lib/neuronet/gaussian.rb,
lib/neuronet/constants.rb,
lib/neuronet/connection.rb,
lib/neuronet/log_normal.rb,
lib/neuronet/feed_forward.rb,
lib/neuronet/scaled_network.rb
Overview
Neuronet module
Defined Under Namespace
Classes: Connection, FeedForward, Gaussian, Layer, LogNormal, Neuron, Scale, ScaledNetwork
Constant Summary
- VERSION =
'7.0.230416'
- FORMAT =
Neuronet allows one to set the format to use for displaying float values, mostly used in the inspect methods. [Docs](docs.ruby-lang.org/en/master/format_specifications_rdoc.html)
'%.13g'
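For instance, the format string is meant for Kernel#format; a minimal illustration using the constant as defined above:

```ruby
format(Neuronet::FORMAT, Math::PI) #=> "3.14159265359"
```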
- SQUASH =
An artificial neural network uses a squash function to determine the activation value of a neuron. The squash function for Neuronet is the [Sigmoid function](docs.ruby-lang.org/en/master/format_specifications_rdoc.html renamed: en.wikipedia.org/wiki/Sigmoid_function) which sets the neuron’s activation value between 0.0 and 1.0. This activation value is often thought of as on/off or true/false. For classification problems, activation values near 1.0 are considered true while activation values near 0.0 are considered false.
In Neuronet I make a distinction between the neuron’s activation value and its representation to the problem. This attribute, activation, need never appear in an implementation of Neuronet, but it is mapped back to its unsquashed value every time the implementation asks for the neuron’s value. One should scale the problem so that most data points fall between -1 and 1, extremes stay within two sigmas, and no outliers lie beyond three sigmas. Standard deviations from the mean are probably a good way to figure the scale of the problem.
->(unsquashed) { 1.0 / (1.0 + Math.exp(-unsquashed)) }
- UNSQUASH =
->(squashed) { Math.log(squashed / (1.0 - squashed)) }
- DERIVATIVE =
->(squash) { squash * (1.0 - squash) }
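Note that DERIVATIVE takes the already-squashed activation, not the raw value: for the sigmoid, ds/dv = s\*(1 - s). A quick sketch, restating the lambdas locally rather than assuming the gem is loaded:

```ruby
squash     = ->(unsquashed) { 1.0 / (1.0 + Math.exp(-unsquashed)) }
unsquash   = ->(squashed) { Math.log(squashed / (1.0 - squashed)) }
derivative = ->(s) { s * (1.0 - s) }

squash[0.0]             #=> 0.5
unsquash[squash[1.5]]   #=> 1.5 (up to float precision)
derivative[squash[0.0]] #=> 0.25, the sigmoid's maximum slope
```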
- BZERO =
I’ll want to have a neuron roughly mirror another neuron later. Let [v] denote the squash of v. Consider:
v = b + w*[v]
There are no constants b and w that satisfy the above equation for all v. But one can satisfy the equation for v in {-1, 0, 1}. Find b and w such that:
A:  0 = b + w*[0]
B:  1 = b + w*[1]
C: -1 = b + w*[-1]
Use A and B to solve for b and w:
A: 0 = b + w*[0]
   b = -w*[0]
B: 1 = b + w*[1]
   1 = -w*[0] + w*[1]
   1 = w*([1] - [0])
   w = 1/([1] - [0])
   b = -[0]/([1] - [0])
Verify A, B, and C:
A: 0 = b + w*[0]
   0 = -[0]/([1] - [0]) + [0]/([1] - [0])
   0 = 0  # OK
B: 1 = b + w*[1]
   1 = -[0]/([1] - [0]) + [1]/([1] - [0])
   1 = ([1] - [0])/([1] - [0])
   1 = 1  # OK
Using the squash function identity, [v] = 1 - [-v]:
C: -1 = b + w*[-1]
   -1 = -[0]/([1] - [0]) + [-1]/([1] - [0])
   -1 = ([-1] - [0])/([1] - [0])
   [0] - [1] = [-1] - [0]
   [0] - [1] = 1 - [1] - [0]  # Identity substitution.
   [0] = 1 - [0]              # OK, by the identity with v = 0 (-0 = 0).
Evaluate given that [0] = 0.5:
b = -[0]/([1] - [0])
b = [0]/([0] - [1])
b = 0.5/(0.5 - [1])

w = 1/([1] - [0])
w = 1/([1] - 0.5)
w = -2 * 0.5/(0.5 - [1])
w = -2 * b
0.5 / (0.5 - SQUASH[1.0])
- WONE =
The mirror weight, w = -2 * b, per the BZERO derivation above.
-2.0 * BZERO
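A numerical check of the mirror, again restating the squash lambda locally: with b = BZERO ≈ -2.1640 and w = WONE ≈ 4.3279, b + w\*[v] recovers v at -1, 0, and 1:

```ruby
squash = ->(u) { 1.0 / (1.0 + Math.exp(-u)) }
bzero  = 0.5 / (0.5 - squash[1.0]) #=> -2.1640...
wone   = -2.0 * bzero              #=>  4.3279...
[-1.0, 0.0, 1.0].map { |v| bzero + wone * squash[v] }
#=> [-1.0, 0.0, 1.0] (up to float precision)
```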
- NOISE =
Although the implementation is free to set all parameters for each neuron, Neuronet by default creates zeroed neurons. Associations between inputs and outputs are trained, and neurons differentiate from each other randomly. Differentiation among neurons is achieved by noise in the back-propagation of errors. This noise is provided by rand + rand, which I chose to give the noise an average value of one and a bell-shaped distribution.
->(error) { error * (rand + rand) }
- NO_NOISE =
One may choose not to have noise.
->(error) { error }
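As a sketch of the noise's behavior: rand + rand has mean 1.0 and a triangular (bell-like) density on (0, 2), so the noisy error averages out to the original error. The lambdas are restated locally here:

```ruby
noise    = ->(error) { error * (rand + rand) }
no_noise = ->(error) { error }

samples = Array.new(100_000) { noise[1.0] }
samples.sum / samples.size #=> approximately 1.0
no_noise[0.25]             #=> 0.25
```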
- MAXW =
Maximum weight
9.0
- MAXB =
Maximum bias
18.0
- MAXV =
Maximum value
36.0
- LEARNING =
Mu learning factor.
1.0
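How these caps are applied is up to the implementation; as a purely hypothetical sketch (the clamping shown here is illustrative, not Neuronet's documented behavior), a maximum would bound the magnitude of a weight, bias, or unsquashed value:

```ruby
# Hypothetical use of the caps; not part of Neuronet's documented API.
weight = 12.3
weight.clamp(-Neuronet::MAXW, Neuronet::MAXW) #=> 9.0
value = -40.0
value.clamp(-Neuronet::MAXV, Neuronet::MAXV)  #=> -36.0
```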
Class Attribute Summary
- .bzero ⇒ Object
  Returns the value of attribute bzero.
- .derivative ⇒ Object
  Returns the value of attribute derivative.
- .format ⇒ Object
  Returns the value of attribute format.
- .learning ⇒ Object
  Returns the value of attribute learning.
- .maxb ⇒ Object
  Returns the value of attribute maxb.
- .maxv ⇒ Object
  Returns the value of attribute maxv.
- .maxw ⇒ Object
  Returns the value of attribute maxw.
- .noise ⇒ Object
  Returns the value of attribute noise.
- .squash ⇒ Object
  Returns the value of attribute squash.
- .unsquash ⇒ Object
  Returns the value of attribute unsquash.
- .wone ⇒ Object
  Returns the value of attribute wone.
Class Attribute Details
.bzero ⇒ Object
Returns the value of attribute bzero.
# File 'lib/neuronet/constants.rb', line 96
def bzero
  @bzero
end
.derivative ⇒ Object
Returns the value of attribute derivative.
# File 'lib/neuronet/constants.rb', line 96
def derivative
  @derivative
end
.format ⇒ Object
Returns the value of attribute format.
# File 'lib/neuronet/constants.rb', line 96
def format
  @format
end
.learning ⇒ Object
Returns the value of attribute learning.
# File 'lib/neuronet/constants.rb', line 96
def learning
  @learning
end
.maxb ⇒ Object
Returns the value of attribute maxb.
# File 'lib/neuronet/constants.rb', line 96
def maxb
  @maxb
end
.maxv ⇒ Object
Returns the value of attribute maxv.
# File 'lib/neuronet/constants.rb', line 96
def maxv
  @maxv
end
.maxw ⇒ Object
Returns the value of attribute maxw.
# File 'lib/neuronet/constants.rb', line 96
def maxw
  @maxw
end
.noise ⇒ Object
Returns the value of attribute noise.
# File 'lib/neuronet/constants.rb', line 96
def noise
  @noise
end
.squash ⇒ Object
Returns the value of attribute squash.
# File 'lib/neuronet/constants.rb', line 96
def squash
  @squash
end
.unsquash ⇒ Object
Returns the value of attribute unsquash.
# File 'lib/neuronet/constants.rb', line 96
def unsquash
  @unsquash
end
.wone ⇒ Object
Returns the value of attribute wone.
# File 'lib/neuronet/constants.rb', line 96
def wone
  @wone
end
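Since these are writable module-level attributes, an implementation can swap the defaults at runtime; for example (assuming the attributes are initialized from the constants above):

```ruby
require 'neuronet'

Neuronet.noise  = Neuronet::NO_NOISE # train without random differentiation
Neuronet.format = '%.6g'             # shorter float display in inspect
```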