Class: Google::Apis::BigqueryV2::JobConfigurationLoad

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/bigquery_v2/classes.rb,
generated/google/apis/bigquery_v2/representations.rb

Instance Attribute Summary

Instance Method Summary

Methods included from Core::JsonObjectSupport

#to_json

Methods included from Core::Hashable

process_value, #to_h

Constructor Details

#initialize(**args) ⇒ JobConfigurationLoad

Returns a new instance of JobConfigurationLoad.



# File 'generated/google/apis/bigquery_v2/classes.rb', line 1320

def initialize(**args)
   update!(**args)
end
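
Because #initialize simply forwards its keyword arguments to #update!, any of the instance attributes documented below can be supplied at construction time. A minimal sketch, assuming the google-api-client gem is installed; the values shown are illustrative:

require 'google/apis/bigquery_v2'

# Keyword names match the Ruby attribute names below, not the JSON property names.
load_config = Google::Apis::BigqueryV2::JobConfigurationLoad.new(
  source_format: 'CSV',
  skip_leading_rows: 1,
  write_disposition: 'WRITE_APPEND'
)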

Instance Attribute Details

#allow_jagged_rows ⇒ Boolean Also known as: allow_jagged_rows?

[Optional] Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats. Corresponds to the JSON property allowJaggedRows

Returns:

  • (Boolean)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1165

def allow_jagged_rows
  @allow_jagged_rows
end

#allow_quoted_newlines ⇒ Boolean Also known as: allow_quoted_newlines?

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false. Corresponds to the JSON property allowQuotedNewlines

Returns:

  • (Boolean)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1172

def allow_quoted_newlines
  @allow_quoted_newlines
end

#autodetect ⇒ Boolean Also known as: autodetect?

[Experimental] Indicates if we should automatically infer the options and schema for CSV and JSON sources. Corresponds to the JSON property autodetect

Returns:

  • (Boolean)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1179

def autodetect
  @autodetect
end

#create_disposition ⇒ String

[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion. Corresponds to the JSON property createDisposition

Returns:

  • (String)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1190

def create_disposition
  @create_disposition
end

#destination_table ⇒ Google::Apis::BigqueryV2::TableReference

[Required] The destination table to load the data into. Corresponds to the JSON property destinationTable



# File 'generated/google/apis/bigquery_v2/classes.rb', line 1195

def destination_table
  @destination_table
end
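
The destination is given as a TableReference object rather than a string. A hedged sketch continuing the load_config object from the constructor example; it assumes TableReference exposes project_id, dataset_id and table_id writers, and the identifiers are illustrative:

load_config.destination_table = Google::Apis::BigqueryV2::TableReference.new(
  project_id: 'my-project',
  dataset_id: 'my_dataset',
  table_id: 'my_table'
)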

#encoding ⇒ String

[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties. Corresponds to the JSON property encoding

Returns:

  • (String)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1203

def encoding
  @encoding
end

#field_delimiter ⇒ String

[Optional] The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF8. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (','). Corresponds to the JSON property fieldDelimiter

Returns:

  • (String)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1213

def field_delimiter
  @field_delimiter
end
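
For a tab-separated file, the delimiter can be set either as a literal tab character or via the "\t" escape sequence mentioned above; a sketch continuing the example:

# A literal tab character (Ruby interprets the escape inside double quotes)...
load_config.field_delimiter = "\t"
# ...or the two-character string '\t', which BigQuery itself interprets as a tab.
load_config.field_delimiter = '\t'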

#ignore_unknown_values ⇒ Boolean Also known as: ignore_unknown_values?

[Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value. CSV: trailing columns. JSON: named values that don't match any column names. Corresponds to the JSON property ignoreUnknownValues

Returns:

  • (Boolean)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1224

def ignore_unknown_values
  @ignore_unknown_values
end

#max_bad_records ⇒ Fixnum

[Optional] The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. Corresponds to the JSON property maxBadRecords

Returns:

  • (Fixnum)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1233

def max_bad_records
  @max_bad_records
end
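
ignore_unknown_values and max_bad_records together control how tolerant the load is of malformed rows; a sketch continuing the example:

# Drop extra columns/values instead of flagging them, and tolerate up to
# 10 otherwise-bad records before the job fails with an 'invalid' error.
load_config.ignore_unknown_values = true
load_config.max_bad_records = 10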

#projection_fields ⇒ Array<String>

[Experimental] If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result. Corresponds to the JSON property projectionFields

Returns:

  • (Array<String>)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1243

def projection_fields
  @projection_fields
end

#quote ⇒ String

[Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true. Corresponds to the JSON property quote

Returns:

  • (String)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1254

def quote
  @quote
end
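
As noted above, data with quoted newline characters also requires allow_quoted_newlines; a sketch continuing the example:

load_config.quote = '"'                   # the default quote character
load_config.allow_quoted_newlines = true  # needed when quoted fields span lines
# If the data contains no quoted sections at all, use an empty string instead:
# load_config.quote = ''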

#schema ⇒ Google::Apis::BigqueryV2::TableSchema

[Optional] The schema for the destination table. The schema can be omitted if the destination table already exists, or if you're loading data from Google Cloud Datastore. Corresponds to the JSON property schema



# File 'generated/google/apis/bigquery_v2/classes.rb', line 1261

def schema
  @schema
end

#schema_inline ⇒ String

[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[, Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT". Corresponds to the JSON property schemaInline

Returns:

  • (String)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1267

def schema_inline
  @schema_inline
end

#schema_inline_format ⇒ String

[Deprecated] The format of the schemaInline property. Corresponds to the JSON property schemaInlineFormat

Returns:

  • (String)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1272

def schema_inline_format
  @schema_inline_format
end

#schema_update_options ⇒ Array<String>

[Experimental] Allows the schema of the destination table to be updated as a side effect of the load job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values may be specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable. Corresponds to the JSON property schemaUpdateOptions

Returns:

  • (Array<String>)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1284

def schema_update_options
  @schema_update_options
end
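
For example, an append job that may both add new nullable columns and relax required columns could be configured as follows (a sketch continuing the example; the option strings come from the description above):

load_config.write_disposition = 'WRITE_APPEND'
load_config.schema_update_options = [
  'ALLOW_FIELD_ADDITION',   # new nullable fields may be added
  'ALLOW_FIELD_RELAXATION'  # REQUIRED fields may be relaxed to NULLABLE
]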

#skip_leading_rows ⇒ Fixnum

[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. Corresponds to the JSON property skipLeadingRows

Returns:

  • (Fixnum)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1291

def skip_leading_rows
  @skip_leading_rows
end

#source_format ⇒ String

[Optional] The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". The default value is CSV. Corresponds to the JSON property sourceFormat

Returns:

  • (String)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1299

def source_format
  @source_format
end
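
For instance, switching to newline-delimited JSON is a matter of changing this one property, keeping in mind that the CSV-specific options above then no longer apply; a sketch continuing the example:

load_config.source_format = 'NEWLINE_DELIMITED_JSON'
# CSV-only settings such as field_delimiter, quote and skip_leading_rows
# have no effect for this format.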

#source_uris ⇒ Array<String>

[Required] The fully-qualified URIs that point to your data in Google Cloud Storage. Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Corresponds to the JSON property sourceUris

Returns:

  • (Array<String>)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1306

def source_uris
  @source_uris
end
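
A sketch continuing the example; the bucket and object names are illustrative, and each URI may carry at most one '*' wildcard placed after the bucket name:

load_config.source_uris = [
  'gs://my-bucket/exports/part-*.csv',      # single wildcard after the bucket
  'gs://my-bucket/exports/manual-fixup.csv' # plain, fully specified object
]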

#write_disposition ⇒ String

[Optional] Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_APPEND. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion. Corresponds to the JSON property writeDisposition

Returns:

  • (String)


# File 'generated/google/apis/bigquery_v2/classes.rb', line 1318

def write_disposition
  @write_disposition
end
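
A load configuration only takes effect once it is wrapped in a Job and submitted to the API. A hedged end-to-end sketch, assuming the generated BigqueryService#insert_job method, the googleauth gem for application-default credentials, and illustrative project, dataset, table and bucket names:

require 'google/apis/bigquery_v2'
require 'googleauth'

service = Google::Apis::BigqueryV2::BigqueryService.new
service.authorization = Google::Auth.get_application_default(
  ['https://www.googleapis.com/auth/bigquery']
)

load_config = Google::Apis::BigqueryV2::JobConfigurationLoad.new(
  destination_table: Google::Apis::BigqueryV2::TableReference.new(
    project_id: 'my-project', dataset_id: 'my_dataset', table_id: 'my_table'
  ),
  source_uris: ['gs://my-bucket/data/*.csv'],
  source_format: 'CSV',
  skip_leading_rows: 1,
  create_disposition: 'CREATE_IF_NEEDED',
  write_disposition: 'WRITE_TRUNCATE'
)

job = Google::Apis::BigqueryV2::Job.new(
  configuration: Google::Apis::BigqueryV2::JobConfiguration.new(load: load_config)
)
service.insert_job('my-project', job)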

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'generated/google/apis/bigquery_v2/classes.rb', line 1325

def update!(**args)
  @allow_jagged_rows = args[:allow_jagged_rows] if args.key?(:allow_jagged_rows)
  @allow_quoted_newlines = args[:allow_quoted_newlines] if args.key?(:allow_quoted_newlines)
  @autodetect = args[:autodetect] if args.key?(:autodetect)
  @create_disposition = args[:create_disposition] if args.key?(:create_disposition)
  @destination_table = args[:destination_table] if args.key?(:destination_table)
  @encoding = args[:encoding] if args.key?(:encoding)
  @field_delimiter = args[:field_delimiter] if args.key?(:field_delimiter)
  @ignore_unknown_values = args[:ignore_unknown_values] if args.key?(:ignore_unknown_values)
  @max_bad_records = args[:max_bad_records] if args.key?(:max_bad_records)
  @projection_fields = args[:projection_fields] if args.key?(:projection_fields)
  @quote = args[:quote] if args.key?(:quote)
  @schema = args[:schema] if args.key?(:schema)
  @schema_inline = args[:schema_inline] if args.key?(:schema_inline)
  @schema_inline_format = args[:schema_inline_format] if args.key?(:schema_inline_format)
  @schema_update_options = args[:schema_update_options] if args.key?(:schema_update_options)
  @skip_leading_rows = args[:skip_leading_rows] if args.key?(:skip_leading_rows)
  @source_format = args[:source_format] if args.key?(:source_format)
  @source_uris = args[:source_uris] if args.key?(:source_uris)
  @write_disposition = args[:write_disposition] if args.key?(:write_disposition)
end
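
Since every assignment is guarded by args.key?, #update! only overwrites the attributes whose keys you pass and leaves the rest unchanged; a sketch continuing the earlier examples:

# Only these two attributes change; all other properties keep their values.
load_config.update!(max_bad_records: 0, autodetect: true)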