Class: Rumale::LinearModel::LinearRegression
- Inherits: BaseLinearModel (Object > BaseLinearModel > Rumale::LinearModel::LinearRegression)
- Includes:
- Base::Regressor
- Defined in:
- lib/rumale/linear_model/linear_regression.rb
Overview
LinearRegression is a class that implements ordinary least squares linear regression with mini-batch stochastic gradient descent optimization.
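A minimal usage sketch; the sample matrix and target values below are made-up placeholders, not taken from the library's documentation:

require 'rumale'

# Placeholder training data: four samples with two features each.
x = Numo::DFloat[[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
y = Numo::DFloat[3.0, 2.0, 7.0, 6.0]

estimator = Rumale::LinearModel::LinearRegression.new(
  fit_bias: true, max_iter: 500, batch_size: 4, random_seed: 1)
estimator.fit(x, y)
predicted = estimator.predict(x)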
Instance Attribute Summary
- #bias_term ⇒ Numo::DFloat (readonly): Return the bias term (a.k.a. intercept).
- #rng ⇒ Random (readonly): Return the random generator for random sampling.
- #weight_vec ⇒ Numo::DFloat (readonly): Return the weight vector.
Attributes included from Base::BaseEstimator
Instance Method Summary
- #fit(x, y) ⇒ LinearRegression: Fit the model with given training data.
- #initialize(fit_bias: false, bias_scale: 1.0, max_iter: 1000, batch_size: 10, optimizer: nil, n_jobs: nil, random_seed: nil) ⇒ LinearRegression (constructor): Create a new ordinary least squares linear regressor.
- #marshal_dump ⇒ Hash: Dump marshal data.
- #marshal_load(obj) ⇒ nil: Load marshal data.
- #predict(x) ⇒ Numo::DFloat: Predict values for samples.
Methods included from Base::Regressor
Constructor Details
#initialize(fit_bias: false, bias_scale: 1.0, max_iter: 1000, batch_size: 10, optimizer: nil, n_jobs: nil, random_seed: nil) ⇒ LinearRegression
Create a new ordinary least squares linear regressor.
# File 'lib/rumale/linear_model/linear_regression.rb', line 45

def initialize(fit_bias: false, bias_scale: 1.0, max_iter: 1000, batch_size: 10, optimizer: nil, n_jobs: nil, random_seed: nil)
  check_params_float(bias_scale: bias_scale)
  check_params_integer(max_iter: max_iter, batch_size: batch_size)
  check_params_boolean(fit_bias: fit_bias)
  check_params_type_or_nil(Integer, n_jobs: n_jobs, random_seed: random_seed)
  check_params_positive(max_iter: max_iter, batch_size: batch_size)
  keywd_args = method(:initialize).parameters.map { |_t, arg| [arg, binding.local_variable_get(arg)] }.to_h.merge(reg_param: 0.0)
  super(keywd_args)
end
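As an illustration of the keyword arguments, a hedged configuration sketch; the Rumale::Optimizer::SGD class and its learning_rate keyword are assumptions about the installed Rumale version, not part of this class's documentation:

# Non-default configuration sketch; the optimizer class below is an assumption.
regressor = Rumale::LinearModel::LinearRegression.new(
  fit_bias: true,                                             # also fit an intercept term
  bias_scale: 1.0,                                            # scale of the bias term (used when fit_bias is true)
  max_iter: 2000,                                             # maximum number of iterations
  batch_size: 20,                                             # size of each mini-batch
  optimizer: Rumale::Optimizer::SGD.new(learning_rate: 0.01), # assumed optimizer class
  n_jobs: nil,                                                # no parallel fitting
  random_seed: 42)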
Instance Attribute Details
#bias_term ⇒ Numo::DFloat (readonly)
Return the bias term (a.k.a. intercept).
# File 'lib/rumale/linear_model/linear_regression.rb', line 26

def bias_term
  @bias_term
end
#rng ⇒ Random (readonly)
Return the random generator for random sampling.
# File 'lib/rumale/linear_model/linear_regression.rb', line 30

def rng
  @rng
end
#weight_vec ⇒ Numo::DFloat (readonly)
Return the weight vector.
# File 'lib/rumale/linear_model/linear_regression.rb', line 22

def weight_vec
  @weight_vec
end
Instance Method Details
#fit(x, y) ⇒ LinearRegression
Fit the model with given training data.
# File 'lib/rumale/linear_model/linear_regression.rb', line 61

def fit(x, y)
  check_sample_array(x)
  check_tvalue_array(y)
  check_sample_tvalue_size(x, y)

  n_outputs = y.shape[1].nil? ? 1 : y.shape[1]
  n_features = x.shape[1]

  if n_outputs > 1
    @weight_vec = Numo::DFloat.zeros(n_outputs, n_features)
    @bias_term = Numo::DFloat.zeros(n_outputs)
    if enable_parallel?
      models = parallel_map(n_outputs) { |n| partial_fit(x, y[true, n]) }
      n_outputs.times { |n| @weight_vec[n, true], @bias_term[n] = models[n] }
    else
      n_outputs.times { |n| @weight_vec[n, true], @bias_term[n] = partial_fit(x, y[true, n]) }
    end
  else
    @weight_vec, @bias_term = partial_fit(x, y)
  end

  self
end
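Since fit branches on the number of target columns, it also supports multi-target regression; a sketch with placeholder data:

require 'rumale'

# Two target values per sample: fit learns one weight vector and one bias per output.
x = Numo::DFloat[[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
y_multi = Numo::DFloat[[3.0, 1.0], [2.0, 0.5], [7.0, 2.5], [6.0, 2.0]]

multi_estimator = Rumale::LinearModel::LinearRegression.new(batch_size: 4, random_seed: 1)
multi_estimator.fit(x, y_multi)
multi_estimator.weight_vec.shape  # => [2, 2]  (n_outputs, n_features)
multi_estimator.bias_term.shape   # => [2]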
#marshal_dump ⇒ Hash
Dump marshal data.
# File 'lib/rumale/linear_model/linear_regression.rb', line 96

def marshal_dump
  { params: @params,
    weight_vec: @weight_vec,
    bias_term: @bias_term,
    rng: @rng }
end
#marshal_load(obj) ⇒ nil
Load marshal data.
# File 'lib/rumale/linear_model/linear_regression.rb', line 105

def marshal_load(obj)
  @params = obj[:params]
  @weight_vec = obj[:weight_vec]
  @bias_term = obj[:bias_term]
  @rng = obj[:rng]
  nil
end
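These hooks let Ruby's built-in Marshal serialize a fitted model; a brief sketch, continuing from the overview example where estimator is an already fitted LinearRegression:

# Persist the fitted estimator to disk and restore it later.
File.binwrite('linear_regression.model', Marshal.dump(estimator))
restored = Marshal.load(File.binread('linear_regression.model'))
restored.predict(x)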
#predict(x) ⇒ Numo::DFloat
Predict values for samples.
# File 'lib/rumale/linear_model/linear_regression.rb', line 89

def predict(x)
  check_sample_array(x)
  x.dot(@weight_vec.transpose) + @bias_term
end
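The shape of the returned array mirrors the targets used at fit time; continuing from the single-target and multi-target sketches above:

estimator.predict(x).shape        # => [4]     one value per sample (single target)
multi_estimator.predict(x).shape  # => [4, 2]  one column per output (multi-target)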