Class: Minimization::NewtonRaphson
- Inherits: Unidimensional (Minimization::NewtonRaphson < Unidimensional < Object)
- Defined in: lib/minimization.rb
Overview
Classic Newton-Raphson minimization method. Requires the first and second derivatives of the function.
Usage
f = lambda {|x| x**2}
fd = lambda {|x| 2*x}
fdd = lambda {|x| 2}
min = Minimization::NewtonRaphson.new(-1000,1000, f,fd,fdd)
min.iterate
min.x_minimum
min.f_minimum
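Under the hood, each step applies the classic Newton-Raphson update x ← x - f'(x)/f''(x) until two successive x values differ by less than epsilon (see #iterate below). A minimal standalone sketch of that update rule, written in plain Ruby and independent of the gem (the starting guess, tolerance, and variable names are illustrative):
# Sketch of the Newton-Raphson update, not the gem's implementation
f_1d = lambda { |x| 2 * x }   # first derivative of x**2
f_2d = lambda { |x| 2 }       # second derivative of x**2

x       = 500.0               # starting guess
epsilon = 1e-9

loop do
  x_next = x - f_1d.call(x) / f_2d.call(x)   # Newton-Raphson step
  break if (x_next - x).abs <= epsilon
  x = x_next
end

x  # => 0.0, the minimizer of x**2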
Constant Summary
Constants inherited from Unidimensional
Unidimensional::EPSILON, Unidimensional::MAX_ITERATIONS
Instance Attribute Summary
Attributes inherited from Unidimensional
#epsilon, #expected, #f_minimum, #iterations, #log, #log_header, #x_minimum
Class Method Summary
- .minimize(*args) ⇒ Object
  Raises an error.

Instance Method Summary
- #initialize(lower, upper, proc, proc_1d, proc_2d) ⇒ NewtonRaphson (constructor)
  Parameters:
  - lower: Lower possible value
  - upper: Higher possible value
  - proc: Original function
  - proc_1d: First derivative
  - proc_2d: Second derivative
- #iterate ⇒ Object
Methods inherited from Unidimensional
Constructor Details
#initialize(lower, upper, proc, proc_1d, proc_2d) ⇒ NewtonRaphson
Parameters:
- lower: Lower possible value
- upper: Higher possible value
- proc: Original function
- proc_1d: First derivative
- proc_2d: Second derivative
# File 'lib/minimization.rb', line 108

def initialize(lower, upper, proc, proc_1d, proc_2d)
  super(lower, upper, proc)
  @proc_1d = proc_1d
  @proc_2d = proc_2d
end
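As a sketch of typical construction (same API as the Usage example above; the shifted parabola and numeric bounds are chosen here only for illustration):
require 'minimization'

f   = lambda { |x| (x - 2)**2 }    # minimum at x = 2
fd  = lambda { |x| 2 * (x - 2) }   # first derivative
fdd = lambda { |x| 2 }             # second derivative

min = Minimization::NewtonRaphson.new(-1000, 1000, f, fd, fdd)
min.iterate
min.x_minimum   # => approximately 2
min.f_minimum   # => approximately 0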
Class Method Details
.minimize(*args) ⇒ Object
Raises an error; use #new and #iterate instead.
# File 'lib/minimization.rb', line 114

def self.minimize(*args)
  raise "You should use #new and #iterate"
end
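Calling the class method therefore raises immediately; a small sketch of the expected behavior (the begin/rescue wrapper and arguments are illustrative only):
begin
  Minimization::NewtonRaphson.minimize(-1000, 1000, lambda { |x| x**2 })
rescue RuntimeError => e
  e.message   # => "You should use #new and #iterate"
end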
Instance Method Details
#iterate ⇒ Object
# File 'lib/minimization.rb', line 117

def iterate
  # First
  x_prev = @lower
  x = @expected
  failed = true
  k = 0
  while (x - x_prev).abs > @epsilon and k < @max_iteration
    k += 1
    x_prev = x
    x = x - (@proc_1d.call(x).quo(@proc_2d.call(x)))
    f_prev = f(x_prev)
    f = f(x)
    x_min, x_max = [x, x_prev].min, [x, x_prev].max
    f_min, f_max = [f, f_prev].min, [f, f_prev].max
    @log << [k, x_min, x_max, f_min, f_max, (x_prev - x).abs, (f - f_prev).abs]
  end
  raise FailedIteration, "Not converged" if k >= @max_iteration
  @x_minimum = x
  @f_minimum = f(x)
end
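After #iterate returns, the rows pushed onto @log can be read back through the inherited #log attribute; a short sketch (column meanings taken from the @log << line above, output formatting is illustrative):
require 'minimization'

f   = lambda { |x| x**2 }
fd  = lambda { |x| 2 * x }
fdd = lambda { |x| 2 }

min = Minimization::NewtonRaphson.new(-1000, 1000, f, fd, fdd)
min.iterate

# Each row: [iteration, x_min, x_max, f_min, f_max, |x_prev - x|, |f - f_prev|]
min.log.each do |k, x_min, x_max, f_min, f_max, dx, df|
  printf("%2d  x in [%g, %g]  dx=%g  df=%g\n", k, x_min, x_max, dx, df)
end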