Class: Google::Cloud::Bigquery::ExtractJob::Updater

Inherits:
Google::Cloud::Bigquery::ExtractJob
Defined in:
lib/google/cloud/bigquery/extract_job.rb

Overview

Yielded to a block to accumulate changes for an API request.
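
The updater is typically obtained by passing a block to Table#extract_job. A minimal sketch, reusing the placeholder dataset, table, and bucket names from the #location= example below:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

destination = "gs://my-bucket/file-name.csv"
extract_job = table.extract_job destination do |j|
  j.compression = "GZIP"
  j.delimiter = "|"
end

extract_job.wait_until_done!
extract_job.done? #=> true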


Methods inherited from Google::Cloud::Bigquery::ExtractJob

#avro?, #compression?, #csv?, #delimiter, #destinations, #destinations_counts, #destinations_file_counts, #json?, #ml_tf_saved_model?, #ml_xgboost_booster?, #model?, #print_header?, #source, #table?, #use_avro_logical_types?

Methods inherited from Job

#configuration, #created_at, #delete, #done?, #ended_at, #error, #errors, #failed?, #job_id, #labels, #location, #num_child_jobs, #parent_job_id, #pending?, #project_id, #reservation_usage, #running?, #script_statistics, #session_id, #started_at, #state, #statistics, #status, #transaction_id, #user_email

Instance Method Details

#cancel ⇒ Object



# File 'lib/google/cloud/bigquery/extract_job.rb', line 441

def cancel
  raise "not implemented in #{self.class}"
end

#compression=(value) ⇒ Object

Sets the compression type. Not applicable when extracting models.

Parameters:

  • value (String)

    The compression type to use for exported files. Possible values include GZIP and NONE. The default value is NONE.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 348

def compression= value
  @gapi.configuration.extract.compression = value
end
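
For example, a GZIP-compressed export might be requested as follows (a sketch; the dataset, table, and bucket names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table "my_table"

table.extract_job "gs://my-bucket/file-name.csv.gz" do |j|
  j.compression = "GZIP"
end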

#delimiter=(value) ⇒ Object

Sets the field delimiter. Not applicable when extracting models.

Parameters:

  • value (String)

    Delimiter to use between fields in the exported data. The default is "," (a comma).



# File 'lib/google/cloud/bigquery/extract_job.rb', line 359

def delimiter= value
  @gapi.configuration.extract.field_delimiter = value
end
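
For example, a tab-delimited export might look like this (a sketch; the dataset, table, and bucket names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table "my_table"

table.extract_job "gs://my-bucket/file-name.tsv" do |j|
  j.delimiter = "\t" # use tabs instead of the default comma
end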

#format=(new_format) ⇒ Object

Sets the destination file format. The default value for tables is csv. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is ml_tf_saved_model.

Supported values for tables:

  • csv - CSV
  • json - Newline-delimited JSON
  • avro - Avro

Supported values for models:

  • ml_tf_saved_model - TensorFlow SavedModel
  • ml_xgboost_booster - XGBoost Booster

Parameters:

  • new_format (String)

    The new destination file format.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 383

def format= new_format
  @gapi.configuration.extract.update! destination_format: Convert.source_format(new_format)
end
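
For example, exporting a table as newline-delimited JSON might look like this (a sketch; the dataset, table, and bucket names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table "my_table"

table.extract_job "gs://my-bucket/file-name.json" do |j|
  j.format = "json" # newline-delimited JSON
end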

#header=(value) ⇒ Object

Sets whether to print a header row in the exported file. Not applicable when extracting models.

Parameters:

  • value (Boolean)

    Whether to print out a header row in the results. Default is true.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 395

def header= value
  @gapi.configuration.extract.print_header = value
end
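
For example, omitting the header row from a CSV export might look like this (a sketch; the dataset, table, and bucket names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table "my_table"

table.extract_job "gs://my-bucket/file-name.csv" do |j|
  j.header = false # do not write a header row
end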

#labels=(value) ⇒ Object

Sets the labels to use for the job.

Parameters:

  • value (Hash)

    A hash of user-provided labels associated with the job. You can use these to organize and group your jobs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.


# File 'lib/google/cloud/bigquery/extract_job.rb', line 421

def labels= value
  @gapi.configuration.update! labels: value
end
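
For example, labels that satisfy the requirements above might be applied like this (a sketch; the dataset, table, bucket, and label names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table "my_table"

table.extract_job "gs://my-bucket/file-name.csv" do |j|
  j.labels = { "env" => "production", "team" => "analytics" }
end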

#location=(value) ⇒ Object

Sets the geographic location where the job should run. Required except for US and EU.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

destination = "gs://my-bucket/file-name.csv"
extract_job = table.extract_job destination do |j|
  j.location = "EU"
end

extract_job.wait_until_done!
extract_job.done? #=> true

Parameters:

  • value (String)

    A geographic location, such as "US", "EU" or "asia-northeast1". Required except for US and EU.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 331

def location= value
  @gapi.job_reference.location = value
  return unless value.nil?

  # Treat assigning value of nil the same as unsetting the value.
  unset = @gapi.job_reference.instance_variables.include? :@location
  @gapi.job_reference.remove_instance_variable :@location if unset
end

#reload! ⇒ Object Also known as: refresh!



# File 'lib/google/cloud/bigquery/extract_job.rb', line 449

def reload!
  raise "not implemented in #{self.class}"
end

#rerun! ⇒ Object



# File 'lib/google/cloud/bigquery/extract_job.rb', line 445

def rerun!
  raise "not implemented in #{self.class}"
end

#use_avro_logical_types=(value) ⇒ Object

Sets whether to enable extracting applicable column types (such as TIMESTAMP) to their corresponding AVRO logical types (timestamp-micros), instead of only using their raw types (avro-long).

Only used when #format is set to "AVRO" (#avro?).

Parameters:

  • value (Boolean)

    Whether applicable column types will use their corresponding AVRO logical types.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 437

def use_avro_logical_types= value
  @gapi.configuration.extract.use_avro_logical_types = value
end
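
For example, an Avro export using logical types might look like this (a sketch; the dataset, table, and bucket names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
table = bigquery.dataset("my_dataset").table "my_table"

table.extract_job "gs://my-bucket/file-name.avro" do |j|
  j.format = "avro"
  j.use_avro_logical_types = true # e.g. TIMESTAMP -> timestamp-micros
end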

#wait_until_done! ⇒ Object



# File 'lib/google/cloud/bigquery/extract_job.rb', line 454

def wait_until_done!
  raise "not implemented in #{self.class}"
end