Class: Google::Cloud::Bigquery::QueryJob::Updater

Inherits:
Google::Cloud::Bigquery::QueryJob
Defined in:
lib/google/cloud/bigquery/query_job.rb

Overview

Yielded to a block to accumulate changes for a patch request.


Methods inherited from Google::Cloud::Bigquery::QueryJob

#batch?, #bytes_processed, #cache?, #cache_hit?, #clustering?, #clustering_fields, #data, #ddl?, #ddl_operation_performed, #ddl_target_routine, #ddl_target_table, #deleted_row_count, #destination, #dml?, #dryrun?, #encryption, #flatten?, #inserted_row_count, #interactive?, #large_results?, #legacy_sql?, #maximum_billing_tier, #maximum_bytes_billed, #num_dml_affected_rows, #query_plan, #range_partitioning?, #range_partitioning_end, #range_partitioning_field, #range_partitioning_interval, #range_partitioning_start, #standard_sql?, #statement_type, #time_partitioning?, #time_partitioning_expiration, #time_partitioning_field, #time_partitioning_require_filter?, #time_partitioning_type, #udfs, #updated_row_count

Methods inherited from Job

#configuration, #created_at, #delete, #done?, #ended_at, #error, #errors, #failed?, #job_id, #labels, #location, #num_child_jobs, #parent_job_id, #pending?, #project_id, #reservation_usage, #running?, #script_statistics, #session_id, #started_at, #state, #statistics, #status, #transaction_id, #user_email

Instance Method Details

#cache=(value) ⇒ Object

Specifies whether to look in the query cache for results.

Parameters:

  • value (Boolean)

    Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. For more information, see query caching.



# File 'lib/google/cloud/bigquery/query_job.rb', line 865

def cache= value
  @gapi.configuration.query.use_query_cache = value
end
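
A minimal usage sketch (the my_dataset.my_table name is a placeholder):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT COUNT(*) FROM my_dataset.my_table" do |job|
  job.cache = false # always recompute instead of serving a cached result
end
job.wait_until_done!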

#cancel ⇒ Object



# File 'lib/google/cloud/bigquery/query_job.rb', line 1596

def cancel
  raise "not implemented in #{self.class}"
end

#clustering_fields=(fields) ⇒ Object

Sets the list of fields on which data should be clustered.

Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.

BigQuery supports clustering for both partitioned and non-partitioned tables.

See Google::Cloud::Bigquery::QueryJob#clustering_fields, Table#clustering_fields and Table#clustering_fields=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = dataset.query_job "SELECT * FROM my_table" do |job|
  job.table = destination_table
  job.time_partitioning_type = "DAY"
  job.time_partitioning_field = "dob"
  job.clustering_fields = ["last_name", "first_name"]
end

job.wait_until_done!
job.done? #=> true

Parameters:

  • fields (Array<String>)

    The clustering fields. Only top-level, non-repeated, simple-type fields are supported.




# File 'lib/google/cloud/bigquery/query_job.rb', line 1591

def clustering_fields= fields
  @gapi.configuration.query.clustering ||= Google::Apis::BigqueryV2::Clustering.new
  @gapi.configuration.query.clustering.fields = fields
end

#create=(value) ⇒ Object

Sets the create disposition for creating the query results table.

The following values are supported:

  • needed - Create the table if it does not exist.
  • never - The table must already exist. A 'notFound' error is raised if the table does not exist.

Parameters:

  • value (String)

    Specifies whether the job is allowed to create new tables. The default value is needed.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1043

def create= value
  @gapi.configuration.query.create_disposition = Convert.create_disposition value
end
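
A sketch of the create disposition in use (my_dataset and my_destination_table are placeholder names):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |job|
  job.table = dataset.table "my_destination_table", skip_lookup: true
  job.create = "never" # raise 'notFound' unless the destination table exists
end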

#create_session=(value) ⇒ Object

Sets the create_session property. If true, creates a new session, whose session ID will be a server-generated random ID. If false, runs the query with an existing #session_id= if one is set, otherwise runs the query in non-session mode. The default value is false.

Parameters:

  • value (Boolean)

    The create_session property. The default value is false.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1057

def create_session= value
  @gapi.configuration.query.create_session = value
end
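
A sketch of starting a session and reading back its server-generated ID (the temp table query is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "CREATE TEMP TABLE my_temp AS SELECT 17 AS x" do |job|
  job.create_session = true
end
job.wait_until_done!

session_id = job.session_id # pass to later jobs via #session_id=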

#dataset=(value) ⇒ Object

Sets the default dataset of tables referenced in the query.

Parameters:

  • value (Dataset)

    The default dataset to use for unqualified table names in the query.



# File 'lib/google/cloud/bigquery/query_job.rb', line 902

def dataset= value
  @gapi.configuration.query.default_dataset = @service.dataset_ref_from value
end
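
A sketch showing how the default dataset qualifies a bare table name (my_dataset and my_table are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = bigquery.query_job "SELECT * FROM my_table" do |job|
  job.dataset = dataset # my_table now resolves inside my_dataset
end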

#dryrun=(value) ⇒ Object Also known as: dry_run=

Sets the dry run flag for the query job.

Parameters:

  • value (Boolean)

    If set, don't actually run this job. A valid query will return a mostly empty response with some processing statistics, while an invalid query will return the same error it would if it weren't a dry run.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1105

def dryrun= value
  @gapi.configuration.dry_run = value
end
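
A sketch of a dry run used to validate a query and estimate its cost before running it for real (the table name is a placeholder; the byte estimate should appear in the returned job's statistics):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |job|
  job.dryrun = true # validate and estimate only; nothing is executed
end
job.bytes_processed # estimated bytes the real run would scan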

#encryption=(val) ⇒ Object

Sets the encryption configuration of the destination table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d"
encrypt_config = bigquery.encryption kms_key: key_name
job = bigquery.query_job "SELECT 1;" do |job|
  job.table = dataset.table "my_table", skip_lookup: true
  job.encryption = encrypt_config
end

Parameters:

  • val (Google::Cloud::Bigquery::EncryptionConfiguration)

    Custom encryption configuration (e.g., Cloud KMS keys).



# File 'lib/google/cloud/bigquery/query_job.rb', line 1245

def encryption= val
  @gapi.configuration.query.update! destination_encryption_configuration: val.to_gapi
end

#external=(value) ⇒ Object

Sets definitions for external tables used in the query.

Parameters:

  • value (Hash<String|Symbol, External::DataSource>)

    A Hash that represents the mapping of the external tables to the table names used in the SQL query. The hash keys are the table names, and the hash values are the external table objects.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1203

def external= value
  external_table_pairs = value.map { |name, obj| [String(name), obj.to_gapi] }
  external_table_hash = external_table_pairs.to_h
  @gapi.configuration.query.table_definitions = external_table_hash
end
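
A sketch mapping an external CSV source to the table name used in the query (the bucket path and names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_table = bigquery.external "gs://my-bucket/path/to/data.csv" do |csv|
  csv.autodetect = true # infer the schema from the file
end

job = bigquery.query_job "SELECT * FROM my_ext_table" do |job|
  job.external = { my_ext_table: csv_table }
end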

#flatten=(value) ⇒ Object

Flatten nested and repeated fields in legacy SQL queries.

Parameters:

  • value (Boolean)

    This option is specific to Legacy SQL. Flattens all nested and repeated fields in the query results. The default value is true. The large_results parameter must be true if this is set to false.



# File 'lib/google/cloud/bigquery/query_job.rb', line 891

def flatten= value
  @gapi.configuration.query.flatten_results = value
end

#labels=(value) ⇒ Object

Sets the labels to use for the job.

Parameters:

  • value (Hash)

    A hash of user-provided labels associated with the job. You can use these to organize and group your jobs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.


# File 'lib/google/cloud/bigquery/query_job.rb', line 1157

def labels= value
  @gapi.configuration.update! labels: value
end
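
A minimal sketch (the label keys and values are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT 1;" do |job|
  job.labels = { "env" => "staging", "team" => "data-eng" }
end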

#large_results=(value) ⇒ Object

Allow large results for a legacy SQL query.

Parameters:

  • value (Boolean)

    This option is specific to Legacy SQL. If true, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires table parameter to be set.



# File 'lib/google/cloud/bigquery/query_job.rb', line 878

def large_results= value
  @gapi.configuration.query.allow_large_results = value
end
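
A sketch of the two legacy SQL options used together, since flatten can only be disabled when large results are allowed (table names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = bigquery.query_job "SELECT * FROM [my_dataset.my_table]" do |job|
  job.legacy_sql = true
  job.table = dataset.table "my_destination_table", skip_lookup: true
  job.large_results = true
  job.flatten = false # requires large_results = true
end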

#legacy_sql=(value) ⇒ Object

Sets the query syntax to legacy SQL.

Parameters:

  • value (Boolean)

    Specifies whether to use BigQuery's legacy SQL dialect for this query. If set to false, the query will use BigQuery's standard SQL dialect. Optional. The default value is false.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1173

def legacy_sql= value
  @gapi.configuration.query.use_legacy_sql = value
end

#location=(value) ⇒ Object

Sets the geographic location where the job should run. Required except for US and EU.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = bigquery.query_job "SELECT 1;" do |query|
  query.table = dataset.table "my_table", skip_lookup: true
  query.location = "EU"
end

Parameters:

  • value (String)

    A geographic location, such as "US", "EU" or "asia-northeast1". Required except for US and EU.



# File 'lib/google/cloud/bigquery/query_job.rb', line 835

def location= value
  @gapi.job_reference.location = value
  return unless value.nil?

  # Treat assigning value of nil the same as unsetting the value.
  unset = @gapi.job_reference.instance_variables.include? :@location
  @gapi.job_reference.remove_instance_variable :@location if unset
end

#maximum_bytes_billed=(value) ⇒ Object

Sets the maximum bytes billed for the query.

Parameters:

  • value (Integer)

    Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will fail (without incurring a charge). Optional. If unspecified, this will be set to your project default.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1131

def maximum_bytes_billed= value
  @gapi.configuration.query.maximum_bytes_billed = value
end
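
A minimal sketch (the limit shown is arbitrary):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT * FROM my_dataset.big_table" do |job|
  job.maximum_bytes_billed = 1_000_000_000 # fail, without charge, beyond ~1 GB
end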

#params=(params) ⇒ Object

Sets the query parameters. Standard SQL only.

Use #set_params_and_types to set both params and types.

Parameters:

  • params (Array, Hash)

    Standard SQL only. Used to pass query arguments when the query string contains either positional (?) or named (@myparam) query parameters. If the value passed is an array ["foo"], the query must use positional query parameters. If the value passed is a hash { myparam: "foo" }, the query must use named query parameters. When set, legacy_sql will automatically be set to false and standard_sql to true.

    BigQuery types are converted from Ruby types as follows:

    | BigQuery   | Ruby                           | Notes                                             |
    |------------|--------------------------------|---------------------------------------------------|
    | BOOL       | true/false                     |                                                   |
    | INT64      | Integer                        |                                                   |
    | FLOAT64    | Float                          |                                                   |
    | NUMERIC    | BigDecimal                     | BigDecimal values will be rounded to scale 9.     |
    | BIGNUMERIC | BigDecimal                     | NOT AUTOMATIC: Must be mapped using types.        |
    | STRING     | String                         |                                                   |
    | DATETIME   | DateTime                       | DATETIME does not support time zone.              |
    | DATE       | Date                           |                                                   |
    | GEOGRAPHY  | String (WKT or GeoJSON)        | NOT AUTOMATIC: Must be mapped using types.        |
    | JSON       | String (stringified JSON)      | String, as JSON does not have a schema to verify. |
    | TIMESTAMP  | Time                           |                                                   |
    | TIME       | Google::Cloud::Bigquery::Time  |                                                   |
    | BYTES      | File, IO, StringIO, or similar |                                                   |
    | ARRAY      | Array                          | Nested arrays and nil values are not supported.   |
    | STRUCT     | Hash                           | Hash keys may be strings or symbols.              |

    See Data Types for an overview of each BigQuery data type, including allowed values. For the GEOGRAPHY type, see Working with BigQuery GIS data.



# File 'lib/google/cloud/bigquery/query_job.rb', line 942

def params= params
  set_params_and_types params
end
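
A sketch of both parameter styles (dataset, table, and column names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Named parameters: the query references @min_age.
job = bigquery.query_job "SELECT name FROM my_dataset.people WHERE age >= @min_age" do |job|
  job.params = { min_age: 21 }
end

# Positional parameters: the query uses ?.
job = bigquery.query_job "SELECT name FROM my_dataset.people WHERE age >= ?" do |job|
  job.params = [21]
end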

#priority=(value) ⇒ Object

Sets the priority of the query.

Parameters:

  • value (String)

    Specifies a priority for the query. Possible values include INTERACTIVE and BATCH.



# File 'lib/google/cloud/bigquery/query_job.rb', line 851

def priority= value
  @gapi.configuration.query.priority = priority_value value
end
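
A minimal sketch (the table name is a placeholder):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |job|
  job.priority = "BATCH" # queue the query rather than run it interactively
end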

#range_partitioning_end=(range_end) ⇒ Object

Sets the end of range partitioning, exclusive, for the destination table. See Creating and using integer range partitioned tables.

You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.

See #range_partitioning_start=, #range_partitioning_interval= and #range_partitioning_field=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = bigquery.query_job "SELECT num FROM UNNEST(GENERATE_ARRAY(0, 99)) AS num" do |job|
  job.table = destination_table
  job.range_partitioning_field = "num"
  job.range_partitioning_start = 0
  job.range_partitioning_interval = 10
  job.range_partitioning_end = 100
end

job.wait_until_done!
job.done? #=> true

Parameters:

  • range_end (Integer)

    The end of range partitioning, exclusive.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1400

def range_partitioning_end= range_end
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.range.end = range_end
end

#range_partitioning_field=(field) ⇒ Object

Sets the field on which to range partition the table. See Creating and using integer range partitioned tables.

See #range_partitioning_start=, #range_partitioning_interval= and #range_partitioning_end=.

You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = bigquery.query_job "SELECT num FROM UNNEST(GENERATE_ARRAY(0, 99)) AS num" do |job|
  job.table = destination_table
  job.range_partitioning_field = "num"
  job.range_partitioning_start = 0
  job.range_partitioning_interval = 10
  job.range_partitioning_end = 100
end

job.wait_until_done!
job.done? #=> true

Parameters:

  • field (String)

    The range partition field. The destination table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1283

def range_partitioning_field= field
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.field = field
end

#range_partitioning_interval=(range_interval) ⇒ Object

Sets width of each interval for data in range partitions. See Creating and using integer range partitioned tables.

You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.

See #range_partitioning_field=, #range_partitioning_start= and #range_partitioning_end=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = bigquery.query_job "SELECT num FROM UNNEST(GENERATE_ARRAY(0, 99)) AS num" do |job|
  job.table = destination_table
  job.range_partitioning_field = "num"
  job.range_partitioning_start = 0
  job.range_partitioning_interval = 10
  job.range_partitioning_end = 100
end

job.wait_until_done!
job.done? #=> true

Parameters:

  • range_interval (Integer)

    The width of each interval for data in range partitions.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1361

def range_partitioning_interval= range_interval
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.range.interval = range_interval
end

#range_partitioning_start=(range_start) ⇒ Object

Sets the start of range partitioning, inclusive, for the destination table. See Creating and using integer range partitioned tables.

You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.

See #range_partitioning_field=, #range_partitioning_interval= and #range_partitioning_end=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = bigquery.query_job "SELECT num FROM UNNEST(GENERATE_ARRAY(0, 99)) AS num" do |job|
  job.table = destination_table
  job.range_partitioning_field = "num"
  job.range_partitioning_start = 0
  job.range_partitioning_interval = 10
  job.range_partitioning_end = 100
end

job.wait_until_done!
job.done? #=> true

Parameters:

  • range_start (Integer)

    The start of range partitioning, inclusive.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1322

def range_partitioning_start= range_start
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.range.start = range_start
end

#reload! ⇒ Object Also known as: refresh!



# File 'lib/google/cloud/bigquery/query_job.rb', line 1604

def reload!
  raise "not implemented in #{self.class}"
end

#rerun! ⇒ Object



# File 'lib/google/cloud/bigquery/query_job.rb', line 1600

def rerun!
  raise "not implemented in #{self.class}"
end

#session_id=(value) ⇒ Object

Sets the session ID for a query run in session mode. See #create_session=.

Parameters:

  • value (String)

    The session ID. The default value is nil.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1067

def session_id= value
  @gapi.configuration.query.connection_properties ||= []
  prop = @gapi.configuration.query.connection_properties.find { |cp| cp.key == "session_id" }
  if prop
    prop.value = value
  else
    prop = Google::Apis::BigqueryV2::ConnectionProperty.new key: "session_id", value: value
    @gapi.configuration.query.connection_properties << prop
  end
end
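
A sketch that resumes an earlier session (session_id is assumed to come from a prior job created with #create_session=):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT * FROM my_temp" do |job|
  job.session_id = session_id # temp tables from that session are visible here
end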

#set_params_and_types(params, types = nil) ⇒ Object

Sets the query parameters. Standard SQL only.

Parameters:

  • params (Array, Hash)

    Standard SQL only. Used to pass query arguments when the query string contains either positional (?) or named (@myparam) query parameters. If the value passed is an array ["foo"], the query must use positional query parameters. If the value passed is a hash { myparam: "foo" }, the query must use named query parameters. When set, legacy_sql will automatically be set to false and standard_sql to true.

    BigQuery types are converted from Ruby types as follows:

    | BigQuery   | Ruby                           | Notes                                             |
    |------------|--------------------------------|---------------------------------------------------|
    | BOOL       | true/false                     |                                                   |
    | INT64      | Integer                        |                                                   |
    | FLOAT64    | Float                          |                                                   |
    | NUMERIC    | BigDecimal                     | BigDecimal values will be rounded to scale 9.     |
    | BIGNUMERIC | BigDecimal                     | NOT AUTOMATIC: Must be mapped using types.        |
    | STRING     | String                         |                                                   |
    | DATETIME   | DateTime                       | DATETIME does not support time zone.              |
    | DATE       | Date                           |                                                   |
    | GEOGRAPHY  | String (WKT or GeoJSON)        | NOT AUTOMATIC: Must be mapped using types.        |
    | JSON       | String (stringified JSON)      | String, as JSON does not have a schema to verify. |
    | TIMESTAMP  | Time                           |                                                   |
    | TIME       | Google::Cloud::Bigquery::Time  |                                                   |
    | BYTES      | File, IO, StringIO, or similar |                                                   |
    | ARRAY      | Array                          | Nested arrays and nil values are not supported.   |
    | STRUCT     | Hash                           | Hash keys may be strings or symbols.              |

    See Data Types for an overview of each BigQuery data type, including allowed values. For the GEOGRAPHY type, see Working with BigQuery GIS data.

  • types (Array, Hash) (defaults to: nil)

    Standard SQL only. Types of the SQL parameters in params. It is not always possible to infer the right SQL type from a value in params. In these cases, types must be used to specify the SQL type for these values.

    Arguments must match the value type passed to params. This must be an Array when the query uses positional query parameters, and a Hash when the query uses named query parameters. The values should be BigQuery type codes from the following list:

    • :BOOL
    • :INT64
    • :FLOAT64
    • :NUMERIC
    • :BIGNUMERIC
    • :STRING
    • :DATETIME
    • :DATE
    • :GEOGRAPHY
    • :JSON
    • :TIMESTAMP
    • :TIME
    • :BYTES
    • Array - Lists are specified by providing the type code in an array. For example, an array of integers are specified as [:INT64].
    • Hash - Types for STRUCT values (Hash objects) are specified using a Hash object, where the keys match the params hash and the values are the type codes that match the data.

    Types are optional.

Raises:

  • (ArgumentError)


# File 'lib/google/cloud/bigquery/query_job.rb', line 1007

def set_params_and_types params, types = nil
  types ||= params.class.new
  raise ArgumentError, "types must use the same format as params" if types.class != params.class

  case params
  when Array
    @gapi.configuration.query.use_legacy_sql = false
    @gapi.configuration.query.parameter_mode = "POSITIONAL"
    @gapi.configuration.query.query_parameters = params.zip(types).map do |param, type|
      Convert.to_query_param param, type
    end
  when Hash
    @gapi.configuration.query.use_legacy_sql = false
    @gapi.configuration.query.parameter_mode = "NAMED"
    @gapi.configuration.query.query_parameters = params.map do |name, param|
      type = types[name]
      Convert.to_query_param(param, type).tap { |named_param| named_param.name = String name }
    end
  else
    raise ArgumentError, "params must be an Array or a Hash"
  end
end
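
A sketch of supplying an explicit type for a value that cannot be inferred (the WKT string is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT @point AS location" do |job|
  job.set_params_and_types({ point: "POINT(-122 47)" },
                           { point: :GEOGRAPHY }) # a bare String would be sent as STRING
end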

#standard_sql=(value) ⇒ Object

Sets the query syntax to standard SQL.

Parameters:

  • value (Boolean)

    Specifies whether to use BigQuery's standard SQL dialect for this query. If set to true, the query will use standard SQL rather than the legacy SQL dialect. Optional. The default value is true.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1189

def standard_sql= value
  @gapi.configuration.query.use_legacy_sql = !value
end

#table=(value) ⇒ Object

Sets the destination for the query results table.

Parameters:

  • value (Table)

    The destination table where the query results should be stored. If not present, a new table will be created according to the create disposition to store the results.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1118

def table= value
  @gapi.configuration.query.destination_table = table_ref_from value
end

#time_partitioning_expiration=(expiration) ⇒ Object

Sets the partition expiration for the destination table. See Partitioned Tables.

The destination table must also be partitioned. See #time_partitioning_type=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = dataset.query_job "SELECT * FROM UNNEST(" \
                        "GENERATE_TIMESTAMP_ARRAY(" \
                        "'2018-10-01 00:00:00', " \
                        "'2018-10-10 00:00:00', " \
                        "INTERVAL 1 DAY)) AS dob" do |job|
  job.table = destination_table
  job.time_partitioning_type = "DAY"
  job.time_partitioning_expiration = 86_400
end

job.wait_until_done!
job.done? #=> true

Parameters:

  • expiration (Integer)

    An expiration time, in seconds, for data in partitions.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1528

def time_partitioning_expiration= expiration
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! expiration_ms: expiration * 1000
end

#time_partitioning_field=(field) ⇒ Object

Sets the field on which to partition the destination table. If not set, the destination table is partitioned by pseudo column _PARTITIONTIME; if set, the table is partitioned by this field. See Partitioned Tables.

The destination table must also be partitioned. See #time_partitioning_type=.

You can only set the partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = dataset.query_job "SELECT * FROM UNNEST(" \
                        "GENERATE_TIMESTAMP_ARRAY(" \
                        "'2018-10-01 00:00:00', " \
                        "'2018-10-10 00:00:00', " \
                        "INTERVAL 1 DAY)) AS dob" do |job|
  job.table = destination_table
  job.time_partitioning_type  = "DAY"
  job.time_partitioning_field = "dob"
end

job.wait_until_done!
job.done? #=> true

Parameters:

  • field (String)

    The partition field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1489

def time_partitioning_field= field
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! field: field
end

#time_partitioning_require_filter=(val) ⇒ Object

If set to true, queries over the destination table must specify a partition filter that can be used for partition elimination. See Partitioned Tables.

Parameters:

  • val (Boolean)

    Indicates if queries over the destination table will require a partition filter. The default value is false.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1544

def time_partitioning_require_filter= val
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! require_partition_filter: val
end

#time_partitioning_type=(type) ⇒ Object

Sets the partitioning for the destination table. See Partitioned Tables. The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively.

You can only set the partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = dataset.query_job "SELECT * FROM UNNEST(" \
                        "GENERATE_TIMESTAMP_ARRAY(" \
                        "'2018-10-01 00:00:00', " \
                        "'2018-10-10 00:00:00', " \
                        "INTERVAL 1 DAY)) AS dob" do |job|
  job.table = destination_table
  job.time_partitioning_type = "DAY"
end

job.wait_until_done!
job.done? #=> true

Parameters:

  • type (String)

    The partition type. The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1443

def time_partitioning_type= type
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! type: type
end

#udfs=(value) ⇒ Object

Sets user defined functions for the query.

Parameters:

  • value (Array<String>, String)

    User-defined function resources used in the query. May be either a code resource to load from a Google Cloud Storage URI (gs://bucket/path), or an inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code. See User-Defined Functions.



# File 'lib/google/cloud/bigquery/query_job.rb', line 1221

def udfs= value
  @gapi.configuration.query.user_defined_function_resources = udfs_gapi_from value
end
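
A sketch using both UDF resource forms the parameter description allows, a Cloud Storage URI and an inline code resource (the URI, function body, and table name are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT my_func(field) FROM my_dataset.my_table" do |job|
  job.udfs = [
    "gs://my-bucket/my-udf.js", # code resource loaded from Cloud Storage
    "var my_func = function(x) { return x * 2; };" # inline code resource
  ]
end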

#wait_until_done! ⇒ Object



# File 'lib/google/cloud/bigquery/query_job.rb', line 1609

def wait_until_done!
  raise "not implemented in #{self.class}"
end

#write=(value) ⇒ Object

Sets the write disposition for when the query results table exists.

Parameters:

  • value (String)

    Specifies the action that occurs if the destination table already exists. The default value is empty.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - A 'duplicate' error is returned in the job result if the table exists and contains data.


# File 'lib/google/cloud/bigquery/query_job.rb', line 1092

def write= value
  @gapi.configuration.query.write_disposition = Convert.write_disposition value
end
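
A minimal sketch (destination names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |job|
  job.table = dataset.table "my_destination_table", skip_lookup: true
  job.write = "truncate" # replace any existing data in the destination table
end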