Class: Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions

Inherits:
Object
Extended by:
Protobuf::MessageExts::ClassMethods
Includes:
Protobuf::MessageExts
Defined in:
proto_docs/google/cloud/bigquery/storage/v1/stream.rb

Overview

Options dictating how we read a table.

Defined Under Namespace

Modules: ResponseCompressionCodec

Instance Attribute Summary

  • #arrow_serialization_options ⇒ ::Google::Cloud::Bigquery::Storage::V1::ArrowSerializationOptions
  • #avro_serialization_options ⇒ ::Google::Cloud::Bigquery::Storage::V1::AvroSerializationOptions
  • #response_compression_codec ⇒ ::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions::ResponseCompressionCodec
  • #row_restriction ⇒ ::String
  • #sample_percentage ⇒ ::Float
  • #selected_fields ⇒ ::Array<::String>

Instance Attribute Details

#arrow_serialization_options ⇒ ::Google::Cloud::Bigquery::Storage::V1::ArrowSerializationOptions

Returns Optional. Options specific to the Apache Arrow output format.

Returns:

  • (::Google::Cloud::Bigquery::Storage::V1::ArrowSerializationOptions)

# File 'proto_docs/google/cloud/bigquery/storage/v1/stream.rb', line 183

class TableReadOptions
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods

  # Specifies which compression codec to attempt on the entire serialized
  # response payload (either Arrow record batch or Avro rows). This is
  # not to be confused with the Apache Arrow native compression codecs
  # specified in ArrowSerializationOptions. For performance reasons, when
  # creating a read session requesting Arrow responses, setting both native
  # Arrow compression and application-level response compression will not be
  # allowed - choose, at most, one kind of compression.
  module ResponseCompressionCodec
    # Default is no compression.
    RESPONSE_COMPRESSION_CODEC_UNSPECIFIED = 0

    # Use raw LZ4 compression.
    RESPONSE_COMPRESSION_CODEC_LZ4 = 2
  end
end
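
As a rough illustration of the constraint above, the following sketch requests an Arrow-format read session with application-level LZ4 response compression and leaves native Arrow compression unset. It uses this gem's BigQueryRead client; the project, dataset, and table names are placeholders, not values from this documentation.

require "google/cloud/bigquery/storage/v1"

client = Google::Cloud::Bigquery::Storage::V1::BigQueryRead::Client.new

# Request an Arrow-format session with application-level LZ4 compression of
# ReadRows responses. Do not also set native Arrow buffer compression in
# arrow_serialization_options -- at most one kind of compression is allowed.
session = client.create_read_session(
  parent: "projects/my-project",   # placeholder project
  read_session: {
    table: "projects/my-project/datasets/my_dataset/tables/my_table", # placeholder table
    data_format: :ARROW,
    read_options: {
      response_compression_codec: :RESPONSE_COMPRESSION_CODEC_LZ4
    }
  },
  max_stream_count: 1
)

session.streams.each { |stream| puts stream.name }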

#avro_serialization_options ⇒ ::Google::Cloud::Bigquery::Storage::V1::AvroSerializationOptions

Returns Optional. Options specific to the Apache Avro output format.

Returns:

  • (::Google::Cloud::Bigquery::Storage::V1::AvroSerializationOptions)


#response_compression_codec ⇒ ::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions::ResponseCompressionCodec

Returns Optional. Set response_compression_codec when creating a read session to enable application-level compression of ReadRows responses.

Returns:

  • (::Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions::ResponseCompressionCodec)


#row_restriction ⇒ ::String

Returns SQL text filtering statement, similar to a WHERE clause in a query. Aggregates are not supported.

Examples:

  • "int_field > 5"
  • "date_field = CAST('2014-9-27' as DATE)"
  • "nullable_field is not NULL"
  • "st_equals(geo_field, st_geofromtext("POINT(2, 2)"))"
  • "numeric_field BETWEEN 1.0 AND 5.0"

Restricted to a maximum length of 1 MB.

Returns:

  • (::String)
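
For illustration, a TableReadOptions message carrying a row restriction might be built as below; the column names are hypothetical and any WHERE-style predicate without aggregates can be used.

read_options = Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions.new(
  # Hypothetical columns; the filter is applied server-side to each stream.
  row_restriction: "date_field = CAST('2014-9-27' as DATE) AND int_field > 5"
)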




#sample_percentage ⇒ ::Float

Returns Optional. Specifies a table sampling percentage. Specifically, the query planner will use TABLESAMPLE SYSTEM (sample_percentage PERCENT). The sampling percentage is applied at the data block granularity: each data block is randomly chosen to be read in full or skipped. For more details, see https://cloud.google.com/bigquery/docs/table-sampling.

Returns:

  • (::Float)
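
As a sketch, a TableReadOptions requesting roughly a 10% block sample (the 10.0 value is arbitrary) could look like this:

read_options = Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions.new(
  # Roughly 10% of data blocks; every row in a chosen block is read,
  # so the returned row count is only approximate.
  sample_percentage: 10.0
)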




#selected_fields ⇒ ::Array<::String>

Returns Optional. The names of the fields in the table to be returned. If no field names are specified, then all fields in the table are returned.

Nested fields -- the child elements of a STRUCT field -- can be selected individually using their fully-qualified names, and will be returned as record fields containing only the selected nested fields. If a STRUCT field is specified in the selected fields list, all of the child elements will be returned.

As an example, consider a table with the following schema:

  {
    "name": "struct_field",
    "type": "RECORD",
    "mode": "NULLABLE",
    "fields": [
      {
        "name": "string_field1",
        "type": "STRING",
        "mode": "NULLABLE"
      },
      {
        "name": "string_field2",
        "type": "STRING",
        "mode": "NULLABLE"
      }
    ]
  }

Specifying "struct_field" in the selected fields list will result in a read session schema with the following logical structure:

  struct_field {
    string_field1
    string_field2
  }

Specifying "struct_field.string_field1" in the selected fields list will result in a read session schema with the following logical structure:

  struct_field {
    string_field1
  }

The order of the fields in the read session schema is derived from the table schema and does not correspond to the order in which the fields are specified in this list.

Returns:

  • (::Array<::String>)
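
Following the example schema above, selecting a single nested field might look like the sketch below; "other_column" is a hypothetical top-level field, not part of the example schema.

read_options = Google::Cloud::Bigquery::Storage::V1::ReadSession::TableReadOptions.new(
  # "struct_field.string_field1" follows the example schema above;
  # "other_column" stands in for any additional top-level field.
  selected_fields: ["struct_field.string_field1", "other_column"]
)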


