Class: Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics

Inherits:
Object
Defined in:
lib/google/cloud/automl/v1beta1/doc/google/cloud/automl/v1beta1/classification.rb

Overview

Model evaluation metrics for classification problems. Note: For Video Classification, these metrics describe only the quality of Video Classification predictions of the "segment_classification" type.

Defined Under Namespace

Classes: ConfidenceMetricsEntry, ConfusionMatrix
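
These metrics arrive as part of a ModelEvaluation resource. A minimal retrieval sketch, assuming the legacy pre-1.0 google-cloud-automl client for v1beta1 and placeholder project, model, and evaluation IDs (later versions of the gem expose a different entry point):

require "google/cloud/automl"

# Build a v1beta1 client. (Assumption: the pre-1.0 google-cloud-automl gem.)
automl = Google::Cloud::AutoML.new version: :v1beta1

# Hypothetical resource name; substitute your own IDs.
name = "projects/my-project/locations/us-central1/models/TCN0123/modelEvaluations/456"

evaluation = automl.get_model_evaluation name

# For classification models, the metrics payload is this class.
metrics = evaluation.classification_evaluation_metrics
puts "AU-PRC:   #{metrics.au_prc}"
puts "AU-ROC:   #{metrics.au_roc}"
puts "Log loss: #{metrics.log_loss}"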

Instance Attribute Summary

Instance Attribute Details

#annotation_spec_id ⇒ Array<String>

Returns:

  • (Array<String>)

    Output only. The annotation spec IDs used for this evaluation.



# File 'lib/google/cloud/automl/v1beta1/doc/google/cloud/automl/v1beta1/classification.rb', line 102

class ClassificationEvaluationMetrics
  # Metrics for a single confidence threshold.
  # @!attribute [rw] confidence_threshold
  #   @return [Float]
  #     Output only. Metrics are computed with the assumption that the model
  #     never returns predictions with a score lower than this value.
  # @!attribute [rw] position_threshold
  #   @return [Integer]
  #     Output only. Metrics are computed with the assumption that the model
  #     always returns at most this many predictions (ordered by their score,
  #     in descending order), but they all still need to meet the
  #     confidence_threshold.
  # @!attribute [rw] recall
  #   @return [Float]
  #     Output only. Recall (True Positive Rate) for the given confidence
  #     threshold.
  # @!attribute [rw] precision
  #   @return [Float]
  #     Output only. Precision for the given confidence threshold.
  # @!attribute [rw] false_positive_rate
  #   @return [Float]
  #     Output only. False Positive Rate for the given confidence threshold.
  # @!attribute [rw] f1_score
  #   @return [Float]
  #     Output only. The harmonic mean of recall and precision.
  # @!attribute [rw] recall_at1
  #   @return [Float]
  #     Output only. The Recall (True Positive Rate) when considering, for each
  #     example, only the label that has the highest prediction score and is
  #     not below the confidence threshold.
  # @!attribute [rw] precision_at1
  #   @return [Float]
  #     Output only. The precision when considering, for each example, only the
  #     label that has the highest prediction score and is not below the
  #     confidence threshold.
  # @!attribute [rw] false_positive_rate_at1
  #   @return [Float]
  #     Output only. The False Positive Rate when considering, for each example,
  #     only the label that has the highest prediction score and is not below
  #     the confidence threshold.
  # @!attribute [rw] f1_score_at1
  #   @return [Float]
  #     Output only. The harmonic mean of {Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry#recall_at1 recall_at1} and {Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry#precision_at1 precision_at1}.
  # @!attribute [rw] true_positive_count
  #   @return [Integer]
  #     Output only. The number of model-created labels that match a ground
  #     truth label.
  # @!attribute [rw] false_positive_count
  #   @return [Integer]
  #     Output only. The number of model-created labels that do not match a
  #     ground truth label.
  # @!attribute [rw] false_negative_count
  #   @return [Integer]
  #     Output only. The number of ground truth labels that are not matched
  #     by a model-created label.
  # @!attribute [rw] true_negative_count
  #   @return [Integer]
  #     Output only. The number of labels that were not created by the model,
  #     but which, had they been created, would not have matched a ground
  #     truth label.
  class ConfidenceMetricsEntry; end

  # Confusion matrix of the model running the classification.
  # @!attribute [rw] annotation_spec_id
  #   @return [Array<String>]
  #     Output only. IDs of the annotation specs used in the confusion matrix.
  #     For Tables CLASSIFICATION
  #     {Google::Cloud::AutoML::V1beta1::TablesModelMetadata#prediction_type prediction_type},
  #     only the list of annotation_spec_display_name values is populated.
  # @!attribute [rw] row
  #   @return [Array<Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfusionMatrix::Row>]
  #     Output only. Rows in the confusion matrix. The number of rows is equal to
  #     the size of `annotation_spec_id`.
  #     `row[i].value[j]` is the number of examples that have ground truth of the
  #     `annotation_spec_id[i]` and are predicted as `annotation_spec_id[j]` by
  #     the model being evaluated.
  class ConfusionMatrix
    # Output only. A row in the confusion matrix.
    # @!attribute [rw] example_count
    #   @return [Array<Integer>]
    #     Output only. The value of a specific cell in the confusion matrix.
    #     The number of values in each row (i.e. the row's length) equals the
    #     length of the `annotation_spec_id` field or, if that one is not
    #     populated, the length of the {Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfusionMatrix#display_name display_name} field.
    class Row; end
  end
end
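
Each ConfidenceMetricsEntry describes one operating point, so a common use of confidence_metrics_entry is scanning for the threshold that maximizes f1_score (the harmonic mean 2*p*r / (p + r) of precision and recall). A minimal sketch, assuming metrics is a populated instance of this class as in the earlier example:

# Restrict to the standard sweep (position_threshold = INT32_MAX_VALUE),
# then pick the entry with the best F1 score.
best = metrics.confidence_metrics_entry
              .select { |e| e.position_threshold == 2**31 - 1 }
              .max_by(&:f1_score)

if best
  puts format("threshold=%.2f precision=%.3f recall=%.3f f1=%.3f",
              best.confidence_threshold, best.precision,
              best.recall, best.f1_score)
end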

#au_prc ⇒ Float

Returns:

  • (Float)

    Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.




#au_roc ⇒ Float

Returns:

  • (Float)

    Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.




#base_au_prc ⇒ Float

Deprecated.

Returns:

  • (Float)

    Output only. The Area Under Precision-Recall Curve metric based on priors. Micro-averaged for the overall evaluation.




#confidence_metrics_entry ⇒ Array<Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry>

Returns:

  • (Array<Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry>)

    Output only. Metrics for each confidence_threshold in 0.00, 0.05, 0.10, ..., 0.95, 0.96, 0.97, 0.98, 0.99 and for position_threshold = INT32_MAX_VALUE. The ROC and precision-recall curves, and other aggregated metrics, are derived from these entries. Confidence metrics entries may also be supplied for additional values of position_threshold, but no aggregated metrics are computed from those.




#confusion_matrix ⇒ Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfusionMatrix

Returns:

  • (Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfusionMatrix)

    Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for the model-level evaluation, not for per-label evaluation.



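
When the matrix is set, it can be read with the row/column convention described in ConfusionMatrix: rows index ground-truth labels and columns index predicted labels, both in the order of annotation_spec_id. A minimal sketch, again assuming metrics holds a populated instance (note that the per-cell counts live in each Row's example_count array):

cm = metrics.confusion_matrix
unless cm.nil?
  labels = cm.annotation_spec_id
  cm.row.each_with_index do |row, i|
    row.example_count.each_with_index do |count, j|
      # count = examples whose ground truth is labels[i] and which the model
      # predicted as labels[j]; diagonal cells (i == j) are correct calls.
      puts "#{labels[i]} -> #{labels[j]}: #{count}"
    end
  end
end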

#log_loss ⇒ Float

Returns:

  • (Float)

    Output only. The Log Loss metric.



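
The service computes log_loss server-side and does not spell out the averaging here; as a reference point, a standard multiclass log loss is the negative mean log-probability that the model assigned to each example's true label. A self-contained sketch of that textbook formula (an assumption for illustration, not the service's exact implementation):

# probs:  array of hashes mapping label => predicted probability
# truths: array of ground-truth labels, parallel to probs
def reference_log_loss(probs, truths)
  eps = 1e-15 # clamp to avoid log(0)
  losses = probs.zip(truths).map do |p, t|
    -Math.log(p.fetch(t, 0.0).clamp(eps, 1.0))
  end
  losses.sum / losses.size
end

reference_log_loss([{ "cat" => 0.9, "dog" => 0.1 }], ["cat"]) # => ~0.105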