Class: Neuronet::Layer

- Inherits: Array (ancestry: Neuronet::Layer < Array < Object)
- Defined in: lib/neuronet/layer.rb
Overview
Layer is an array of neurons.
Instance Method Summary

- #antithesis ⇒ Object
  Doubles up the input, mirroring and anti-mirroring it.
- #average(sign = 1) ⇒ Object
  Set layer to average input.
- #average_mju ⇒ Object
- #connect(layer = Layer.new(length), weights: []) ⇒ Object
  Allows one to fully connect layers.
- #initialize(length) ⇒ Layer (constructor)
  Length is the number of neurons in the layer.
- #inspect ⇒ Object
  Layer inspects as “label:value,…”.
- #mirror(sign = 1) ⇒ Object
  Set layer to mirror input: bias = BZERO, weight = WONE.
- #partial ⇒ Object
  Updates layer with current values of the previous layer.
- #set(inputs) ⇒ Object
  This is where one enters the “real world” inputs.
- #synthesis ⇒ Object
  Sums two corresponding input neurons above each neuron in the layer.
- #to_s ⇒ Object
  Layer puts as “label,…”.
- #train(target, mju = nil) ⇒ Object
  Takes the real-world target for each neuron in this layer and backpropagates the error to each neuron.
- #values ⇒ Object
  Returns the real-world values: [value, …].
Constructor Details
#initialize(length) ⇒ Layer
Length is the number of neurons in the layer.
```ruby
# File 'lib/neuronet/layer.rb', line 8

def initialize(length)
  super(length) { Neuron.new }
end
```
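Because Layer subclasses Array, the constructor is just `Array.new(length)` with a block that supplies a fresh neuron per slot. A minimal self-contained sketch of this pattern, using a hypothetical `Neuron` stand-in (the real `Neuronet::Neuron` also carries bias and connections) together with the `#set`/`#values` round trip documented below:

```ruby
# Hypothetical stand-in: a neuron that only holds a value.
class Neuron
  attr_accessor :value

  def initialize
    @value = 0.0
  end
end

class Layer < Array
  # Array.new(length) { block } fills the layer with fresh neurons.
  def initialize(length)
    super(length) { Neuron.new }
  end

  # Enter "real world" inputs; missing entries default to 0.0.
  def set(inputs)
    0.upto(length - 1) { self[_1].value = inputs[_1] || 0.0 }
    self
  end

  def values
    map(&:value)
  end
end

layer = Layer.new(3).set([0.1, 0.2])
p layer.values # third neuron falls back to 0.0
```

Note that `#set` returns `self`, so construction, input, and readout chain in one expression.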
Instance Method Details
#antithesis ⇒ Object
Doubles up the input, mirroring and anti-mirroring it. The layer should be twice the size of the input.
```ruby
# File 'lib/neuronet/layer.rb', line 48

def antithesis
  sign = 1
  each_with_index do |n, i|
    n.connections[i / 2].weight = sign * Neuronet.wone
    n.bias = sign * Neuronet.bzero
    sign = -sign
  end
end
```
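The integer division `i / 2` plus the alternating sign means input neuron k drives layer neurons 2k and 2k+1, one mirrored and one anti-mirrored. A sketch of just that index/sign pattern, with placeholder constants (`WONE = 1.0` and `BZERO = 0.0` are hypothetical stand-ins; the gem's actual `Neuronet.wone`/`Neuronet.bzero` values differ):

```ruby
WONE = 1.0   # stand-in for Neuronet.wone (hypothetical value)
BZERO = 0.0  # stand-in for Neuronet.bzero (hypothetical value)

# For a layer of 2n neurons over n inputs, record which input each
# neuron reads and with what sign, as #antithesis assigns them.
def antithesis_pattern(layer_size)
  sign = 1
  Array.new(layer_size) do |i|
    entry = { input: i / 2, weight: sign * WONE }
    sign = -sign
    entry
  end
end

p antithesis_pattern(4) # inputs pair up: 0, 0, 1, 1 with alternating sign
```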
#average(sign = 1) ⇒ Object
Set layer to average input.
```ruby
# File 'lib/neuronet/layer.rb', line 71

def average(sign = 1)
  bias = sign * Neuronet.bzero
  each do |n|
    n.bias = bias
    weight = sign * Neuronet.wone / n.connections.length
    n.connections.each { _1.weight = weight }
  end
end
```
#average_mju ⇒ Object
```ruby
# File 'lib/neuronet/layer.rb', line 85

def average_mju
  Neuronet.learning * sum { Neuron.mju(_1) } / length
end
```
#connect(layer = Layer.new(length), weights: []) ⇒ Object
Allows one to fully connect layers.
```ruby
# File 'lib/neuronet/layer.rb', line 24

def connect(layer = Layer.new(length), weights: [])
  # creates the neuron matrix...
  each_with_index do |neuron, i|
    weight = weights[i] || 0.0
    layer.each { neuron.connect(_1, weight:) }
  end
  # The layer is returned for chaining.
  layer
end
```
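Every neuron in the receiver gains a connection to every neuron in the given layer, so the result is a full weight matrix; `weights[i]` seeds all of neuron i's outgoing connections. A self-contained sketch using hypothetical stand-ins for the gem's `Connection` and `Neuron` classes:

```ruby
# Hypothetical stand-ins: a Connection struct and a Neuron that
# merely records its connections.
Connection = Struct.new(:neuron, :weight)

class Neuron
  attr_reader :connections

  def initialize
    @connections = []
  end

  def connect(other, weight: 0.0)
    @connections << Connection.new(other, weight)
  end
end

class Layer < Array
  def initialize(length)
    super(length) { Neuron.new }
  end

  # Fully connect: neuron i here connects to every neuron in `layer`,
  # each connection seeded with weights[i] (default 0.0).
  def connect(layer = Layer.new(length), weights: [])
    each_with_index do |neuron, i|
      weight = weights[i] || 0.0
      layer.each { neuron.connect(_1, weight: weight) }
    end
    layer # returned so layers chain: input.connect(hidden).connect(output)
  end
end

input = Layer.new(3)
input.connect(Layer.new(2), weights: [0.5])
p input.map { _1.connections.length } # each of the 3 neurons holds 2 connections
```

Returning the connected layer (not `self`) is what makes `input.connect(hidden).connect(output)` chains read left to right.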
#inspect ⇒ Object
Layer inspects as “label:value,…”
```ruby
# File 'lib/neuronet/layer.rb', line 102

def inspect
  map(&:inspect).join(',')
end
```
#mirror(sign = 1) ⇒ Object
Set layer to mirror input: bias = BZERO, weight = WONE. Input should be the same size as the layer. Set sign to -1 to anti-mirror, or to a magnitude other than 1 to scale.
```ruby
# File 'lib/neuronet/layer.rb', line 39

def mirror(sign = 1)
  each_with_index do |neuron, index|
    neuron.bias = sign * Neuronet.bzero
    neuron.connections[index].weight = sign * Neuronet.wone
  end
end
```
#partial ⇒ Object
Updates the layer with the current values of the previous layer.
```ruby
# File 'lib/neuronet/layer.rb', line 81

def partial
  each(&:partial)
end
```
#set(inputs) ⇒ Object
This is where one enters the “real world” inputs.
```ruby
# File 'lib/neuronet/layer.rb', line 13

def set(inputs)
  0.upto(length - 1) { self[_1].value = inputs[_1] || 0.0 }
  self
end
```
#synthesis ⇒ Object
Sums two corresponding input neurons above each neuron in the layer. Input should be twice the size of the layer.
```ruby
# File 'lib/neuronet/layer.rb', line 59

def synthesis
  semi = Neuronet.wone / 2
  each_with_index do |n, i|
    j = i * 2
    c = n.connections
    n.bias = Neuronet.bzero
    c[j].weight = semi
    c[j + 1].weight = semi
  end
end
```
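This is the inverse pairing of #antithesis: neuron i reads inputs 2i and 2i+1, each at half weight, summing a mirrored/anti-mirrored pair back into one neuron. A sketch of that index map, with `WONE = 1.0` as a hypothetical stand-in for `Neuronet.wone`:

```ruby
WONE = 1.0 # stand-in for Neuronet.wone (hypothetical value)

# For a layer of n neurons over 2n inputs, list the two input indices
# and the half weight each neuron gets, as #synthesis assigns them.
def synthesis_pattern(layer_size)
  semi = WONE / 2
  Array.new(layer_size) { |i| { inputs: [2 * i, 2 * i + 1], weight: semi } }
end

p synthesis_pattern(2) # neuron 0 sums inputs 0 and 1; neuron 1 sums 2 and 3
```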
#to_s ⇒ Object
Layer puts as “label,…”
```ruby
# File 'lib/neuronet/layer.rb', line 107

def to_s
  map(&:to_s).join(',')
end
```
#train(target, mju = nil) ⇒ Object
Takes the real world target for each neuron in this layer and backpropagates the error to each neuron.
```ruby
# File 'lib/neuronet/layer.rb', line 91

def train(target, mju = nil)
  0.upto(length - 1) do |index|
    neuron = self[index]
    error = (target[index] - neuron.value) /
            (mju || (Neuronet.learning * Neuron.mju(neuron)))
    neuron.backpropagate(error)
  end
  self
end
```
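Each neuron's raw error (target minus value) is divided by a scaling factor before backpropagation: either the caller-supplied mju, or `Neuronet.learning * Neuron.mju(neuron)` computed per neuron. A sketch of just that scaling step, with the per-neuron mju values supplied directly and `LEARNING = 0.1` as a hypothetical stand-in for `Neuronet.learning`:

```ruby
LEARNING = 0.1 # stand-in for Neuronet.learning (hypothetical value)

# Scaled errors as #train computes them, given per-neuron mju
# estimates (here passed in directly instead of via Neuron.mju).
def scaled_errors(targets, values, mjus)
  targets.each_index.map do |i|
    (targets[i] - values[i]) / (LEARNING * mjus[i])
  end
end

p scaled_errors([1.0, 0.0], [0.5, 0.5], [10.0, 10.0])
```

Dividing by a larger mju shrinks the correction, so neurons whose output is more sensitive to weight changes receive proportionally smaller nudges.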
#values ⇒ Object
Returns the real world values: [value, …]
```ruby
# File 'lib/neuronet/layer.rb', line 19

def values
  map(&:value)
end
```