Class: Google::Cloud::Bigquery::External::DataSource

Inherits: Object
Defined in:
lib/google/cloud/bigquery/external/data_source.rb

Overview

DataSource

External::DataSource and its subclasses represent an external data source that can be queried directly, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.

The AVRO and Datastore Backup formats use DataSource. See CsvSource, JsonSource, SheetsSource, BigtableSource for the other formats.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

avro_url = "gs://bucket/path/to/*.avro"
avro_table = bigquery.external avro_url do |avro|
  avro.autodetect = true
end

data = bigquery.query "SELECT * FROM my_ext_table",
                      external: { my_ext_table: avro_table }

# Iterate over the first page of results
data.each do |row|
  puts row[:name]
end
# Retrieve the next page of results
data = data.next if data.next?

Hive partitioning options:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix
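
Datastore Backup (a minimal sketch; the bucket path is hypothetical and mirrors the #backup? example below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

backup_url = "gs://bucket/path/to/data.backup_info"
backup_table = bigquery.external backup_url

data = bigquery.query "SELECT * FROM my_ext_table",
                      external: { my_ext_table: backup_table }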


Instance Method Details

#autodetect ⇒ Boolean

Indicates if the schema and format options are detected automatically.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.autodetect = true
end

csv_table.autodetect #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 318

def autodetect
  @gapi.autodetect
end

#autodetect=(new_autodetect) ⇒ Object

Set whether to detect schema and format options automatically. Any option specified explicitly will be honored.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.autodetect = true
end

csv_table.autodetect #=> true

Parameters:

  • new_autodetect (Boolean)

    New autodetect value



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 340

def autodetect= new_autodetect
  frozen_check!
  @gapi.autodetect = new_autodetect
end
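
For instance, a minimal sketch of combining autodetection with an explicitly set option (the bucket path is hypothetical; skip_leading_rows is provided by the CSV subclass, CsvSource):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.autodetect = true
  # Explicitly set options are honored; autodetect fills in the rest.
  csv.skip_leading_rows = 1
end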

#avro? ⇒ Boolean

Whether the data format is "AVRO".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

avro_url = "gs://bucket/path/to/*.avro"
avro_table = bigquery.external avro_url

avro_table.format #=> "AVRO"
avro_table.avro? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 183

def avro?
  @gapi.source_format == "AVRO"
end

#backup? ⇒ Boolean

Whether the data format is "DATASTORE_BACKUP".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

backup_url = "gs://bucket/path/to/data.backup_info"
backup_table = bigquery.external backup_url

backup_table.format #=> "DATASTORE_BACKUP"
backup_table.backup? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 203

def backup?
  @gapi.source_format == "DATASTORE_BACKUP"
end

#bigtable? ⇒ Boolean

Whether the data format is "BIGTABLE".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

bigtable_url = "https://googleapis.com/bigtable/projects/..."
bigtable_table = bigquery.external bigtable_url

bigtable_table.format #=> "BIGTABLE"
bigtable_table.bigtable? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 223

def bigtable?
  @gapi.source_format == "BIGTABLE"
end

#compression ⇒ String

The compression type of the data source. Possible values include "GZIP" and nil. The default value is nil. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.compression = "GZIP"
end

csv_table.compression #=> "GZIP"

Returns:

  • (String)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 364

def compression
  @gapi.compression
end

#compression=(new_compression) ⇒ Object

Set the compression type of the data source. Possible values include "GZIP" and nil. The default value is nil. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.compression = "GZIP"
end

csv_table.compression #=> "GZIP"

Parameters:

  • new_compression (String)

    New compression value



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 388

def compression= new_compression
  frozen_check!
  @gapi.compression = new_compression
end

#csv? ⇒ Boolean

Whether the data format is "CSV".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url

csv_table.format #=> "CSV"
csv_table.csv? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 123

def csv?
  @gapi.source_format == "CSV"
end

#format ⇒ String

The data format. For CSV files, specify "CSV". For Google sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url

csv_table.format #=> "CSV"

Returns:

  • (String)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 103

def format
  @gapi.source_format
end

#hive_partitioning? ⇒ Boolean

Checks if hive partitioning options are set.

Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: avro, csv, json, orc and parquet. If your data is stored in ORC or Parquet on Cloud Storage, see Querying columnar formats on Cloud Storage.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Returns:

  • (Boolean)

    true when hive partitioning options are set, or false otherwise.



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 535

def hive_partitioning?
  !@gapi.hive_partitioning_options.nil?
end

#hive_partitioning_mode ⇒ String?

The mode of hive partitioning to use when reading data. The following modes are supported:

  1. AUTO: automatically infer partition key name(s) and type(s).
  2. STRINGS: automatically infer partition key name(s). All types are interpreted as strings.
  3. CUSTOM: partition key schema is encoded in the source URI prefix.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Returns:

  • (String, nil)

    The mode of hive partitioning, or nil if not set.



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 566

def hive_partitioning_mode
  @gapi.hive_partitioning_options.mode if hive_partitioning?
end

#hive_partitioning_mode=(mode) ⇒ Object

Sets the mode of hive partitioning to use when reading data. The following modes are supported:

  1. auto: automatically infer partition key name(s) and type(s).
  2. strings: automatically infer partition key name(s). All types are interpreted as strings.
  3. custom: partition key schema is encoded in the source URI prefix.

Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: avro, csv, json, orc and parquet. If your data is stored in ORC or Parquet on Cloud Storage, see Querying columnar formats on Cloud Storage.

See #format, #hive_partitioning_require_partition_filter= and #hive_partitioning_source_uri_prefix=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Parameters:

  • mode (String, Symbol)

    The mode of hive partitioning to use when reading data.



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 604

def hive_partitioning_mode= mode
  @gapi.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.hive_partitioning_options.mode = mode.to_s.upcase
end
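
For the custom mode, the partition key schema is encoded in the source URI prefix itself. A minimal sketch (the bucket, layout, and {key:TYPE} prefix encoding shown here are illustrative assumptions):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Hypothetical layout: gs://my-bucket/sales/dt=2019-01-01/country=BR/data.parquet
gcs_uri = "gs://my-bucket/sales/*"
source_uri_prefix = "gs://my-bucket/sales/{dt:DATE}/{country:STRING}"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :custom
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning_mode #=> "CUSTOM"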

#hive_partitioning_require_partition_filter=(require_partition_filter) ⇒ Object

Sets whether queries over the table using this external data source must specify a partition filter that can be used for partition elimination.

See #format, #hive_partitioning_mode= and #hive_partitioning_source_uri_prefix=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Parameters:

  • require_partition_filter (Boolean)

    true if a partition filter must be specified, false otherwise.



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 665

def hive_partitioning_require_partition_filter= require_partition_filter
  @gapi.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.hive_partitioning_options.require_partition_filter = require_partition_filter
end
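
When a partition filter is required, queries must filter on a partition column. A minimal sketch (the partition column name dt is assumed for illustration):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

# Without a filter on a partition column, BigQuery rejects the query.
data = bigquery.query "SELECT * FROM my_ext_table WHERE dt = '2020-01-01'",
                      external: { my_ext_table: external_data }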

#hive_partitioning_require_partition_filter? ⇒ Boolean

Whether queries over the table using this external data source must specify a partition filter that can be used for partition elimination. Note that this field should only be true when creating a permanent external table or querying a temporary external table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Returns:

  • (Boolean)

    true when queries over this table require a partition filter, or false otherwise.



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 634

def hive_partitioning_require_partition_filter?
  return false unless hive_partitioning?
  !@gapi.hive_partitioning_options.require_partition_filter.nil?
end

#hive_partitioning_source_uri_prefix ⇒ String?

The common prefix for all source URIs when hive partition detection is requested. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout:

gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro

When hive partitioning is requested with either AUTO or STRINGS mode, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Returns:

  • (String, nil)

    The common prefix for all source URIs, or nil if not set.



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 703

def hive_partitioning_source_uri_prefix
  @gapi.hive_partitioning_options.source_uri_prefix if hive_partitioning?
end

#hive_partitioning_source_uri_prefix=(source_uri_prefix) ⇒ Object

Sets the common prefix for all source URIs when hive partition detection is requested. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout:

gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro

When hive partitioning is requested with either AUTO or STRINGS mode, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).

See #format, #hive_partitioning_mode= and #hive_partitioning_require_partition_filter=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Parameters:

  • source_uri_prefix (String)

    The common prefix for all source URIs.



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 742

def hive_partitioning_source_uri_prefix= source_uri_prefix
  @gapi.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.hive_partitioning_options.source_uri_prefix = source_uri_prefix
end

#ignore_unknown ⇒ Boolean

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

The format determines what BigQuery treats as an extra value: trailing columns in CSV, and named values that don't match any column names in JSON. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.ignore_unknown = true
end

csv_table.ignore_unknown #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 419

def ignore_unknown
  @gapi.ignore_unknown_values
end

#ignore_unknown=(new_ignore_unknown) ⇒ Object

Set whether BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

The format determines what BigQuery treats as an extra value: trailing columns in CSV, and named values that don't match any column names in JSON. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.ignore_unknown = true
end

csv_table.ignore_unknown #=> true

Parameters:

  • new_ignore_unknown (Boolean)

    New ignore_unknown value



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 449

def ignore_unknown= new_ignore_unknown
  frozen_check!
  @gapi.ignore_unknown_values = new_ignore_unknown
end

#json? ⇒ Boolean

Whether the data format is "NEWLINE_DELIMITED_JSON".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

json_url = "gs://bucket/path/to/data.json"
json_table = bigquery.external json_url

json_table.format #=> "NEWLINE_DELIMITED_JSON"
json_table.json? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 143

def json?
  @gapi.source_format == "NEWLINE_DELIMITED_JSON"
end

#max_bad_records ⇒ Integer

The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.max_bad_records = 10
end

csv_table.max_bad_records #=> 10

Returns:

  • (Integer)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 476

def max_bad_records
  @gapi.max_bad_records
end

#max_bad_records=(new_max_bad_records) ⇒ Object

Set the maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.max_bad_records = 10
end

csv_table.max_bad_records #=> 10

Parameters:

  • new_max_bad_records (Integer)

    New max_bad_records value



# File 'lib/google/cloud/bigquery/external/data_source.rb', line 502

def max_bad_records= new_max_bad_records
  frozen_check!
  @gapi.max_bad_records = new_max_bad_records
end

#orc? ⇒ Boolean

Whether the data format is "ORC".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :orc do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end
external_data.format #=> "ORC"
external_data.orc? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 246

def orc?
  @gapi.source_format == "ORC"
end

#parquet? ⇒ Boolean

Whether the data format is "PARQUET".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end
external_data.format #=> "PARQUET"
external_data.parquet? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 269

def parquet?
  @gapi.source_format == "PARQUET"
end

#sheets? ⇒ Boolean

Whether the data format is "GOOGLE_SHEETS".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

sheets_url = "https://docs.google.com/spreadsheets/d/1234567980"
sheets_table = bigquery.external sheets_url

sheets_table.format #=> "GOOGLE_SHEETS"
sheets_table.sheets? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 163

def sheets?
  @gapi.source_format == "GOOGLE_SHEETS"
end

#urls ⇒ Array<String>

The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified, and it must end with '.backup_info'. Also, the '*' wildcard character is not allowed.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url

csv_table.urls #=> ["gs://bucket/path/to/data.csv"]

Returns:

  • (Array<String>)


# File 'lib/google/cloud/bigquery/external/data_source.rb', line 296

def urls
  @gapi.source_uris
end
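
A minimal sketch with multiple Cloud Storage URIs and a wildcard (the bucket paths are hypothetical, and this assumes Project#external accepts an array of URIs, as External.from_urls does):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_urls = ["gs://bucket/path/to/*.csv", "gs://bucket/other/data.csv"]
csv_table = bigquery.external csv_urls, format: :csv

csv_table.urls #=> ["gs://bucket/path/to/*.csv", "gs://bucket/other/data.csv"]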