Class: Google::Cloud::Dataproc::V1::GkeNodePoolTarget

Inherits:
Object
Extended by:
Protobuf::MessageExts::ClassMethods
Includes:
Protobuf::MessageExts
Defined in:
proto_docs/google/cloud/dataproc/v1/shared.rb

Overview

GKE node pools that Dataproc workloads run on.

Defined Under Namespace

Modules: Role

Instance Attribute Summary

  • #node_pool ⇒ ::String
  • #node_pool_config ⇒ ::Google::Cloud::Dataproc::V1::GkeNodePoolConfig
  • #roles ⇒ ::Array<::Google::Cloud::Dataproc::V1::GkeNodePoolTarget::Role>

Instance Attribute Details

#node_pool::String

Returns Required. The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'.

Returns:

  • (::String)

    Required. The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'



# File 'proto_docs/google/cloud/dataproc/v1/shared.rb', line 308

class GkeNodePoolTarget
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods

  # `Role` specifies the tasks that will run on the node pool. Roles can be
  # specific to workloads. Exactly one
  # {::Google::Cloud::Dataproc::V1::GkeNodePoolTarget GkeNodePoolTarget} within the
  # virtual cluster must have the `DEFAULT` role, which is used to run all
  # workloads that are not associated with a node pool.
  module Role
    # Role is unspecified.
    ROLE_UNSPECIFIED = 0

    # At least one node pool must have the `DEFAULT` role.
    # Work assigned to a role that is not associated with a node pool
    # is assigned to the node pool with the `DEFAULT` role. For example,
    # work assigned to the `CONTROLLER` role will be assigned to the node pool
    # with the `DEFAULT` role if no node pool has the `CONTROLLER` role.
    DEFAULT = 1

    # Run work associated with the Dataproc control plane (for example,
    # controllers and webhooks). Very low resource requirements.
    CONTROLLER = 2

    # Run work associated with a Spark driver of a job.
    SPARK_DRIVER = 3

    # Run work associated with a Spark executor of a job.
    SPARK_EXECUTOR = 4
  end
end
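
The snippet below is a minimal sketch of constructing this message by hand, assuming the generated classes are loaded via the google-cloud-dataproc-v1 gem; the project, location, cluster, and node pool names are placeholders, not values from this page.

require "google/cloud/dataproc/v1"

# Target an existing node pool and give it the DEFAULT role. Repeated enum
# fields on generated protobuf messages accept symbols.
target = Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
  node_pool: "projects/my-project/locations/us-central1/clusters/my-cluster/nodePools/default-pool",
  roles: [:DEFAULT]
)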

#node_pool_config::Google::Cloud::Dataproc::V1::GkeNodePoolConfig

Returns Input only. The configuration for the GKE node pool.

If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail.

If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values.

This is an input-only field. It will not be returned by the API.

Returns:

  • (::Google::Cloud::Dataproc::V1::GkeNodePoolConfig)

    Input only. The configuration for the GKE node pool.

    If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail.

    If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values.

    This is an input-only field. It will not be returned by the API.
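
As a sketch of supplying an input-only shape, assuming GkeNodePoolConfig exposes a nested GkeNodeConfig through its config field plus a locations list (those fields are defined elsewhere in shared.rb, not on this page); all resource names and the machine type are placeholders.

require "google/cloud/dataproc/v1"

# Ask Dataproc to create (or verify) a node pool with this shape. If a pool
# named "spark-pool" already exists with a different machine type, virtual
# cluster creation fails.
pool_config = Google::Cloud::Dataproc::V1::GkeNodePoolConfig.new(
  config: Google::Cloud::Dataproc::V1::GkeNodeConfig.new(machine_type: "n1-standard-4"),
  locations: ["us-central1-a"]
)

target = Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
  node_pool: "projects/my-project/locations/us-central1/clusters/my-cluster/nodePools/spark-pool",
  roles: [:SPARK_EXECUTOR],
  node_pool_config: pool_config
)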




#roles::Array<::Google::Cloud::Dataproc::V1::GkeNodePoolTarget::Role>

Returns Required. The roles associated with the GKE node pool.

Returns:

  • (::Array<::Google::Cloud::Dataproc::V1::GkeNodePoolTarget::Role>)

    Required. The roles associated with the GKE node pool.

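To illustrate the role semantics documented above, here is a sketch of a set of targets in which exactly one pool carries the DEFAULT role; work for a role with no dedicated pool (here, CONTROLLER) falls back to the DEFAULT pool. All names are placeholders.

require "google/cloud/dataproc/v1"

parent = "projects/my-project/locations/us-central1/clusters/my-cluster/nodePools"

# Exactly one target within the virtual cluster must carry the DEFAULT role.
targets = [
  Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
    node_pool: "#{parent}/system-pool",
    roles: [:DEFAULT]
  ),
  # A single node pool may serve several roles.
  Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
    node_pool: "#{parent}/spark-pool",
    roles: [:SPARK_DRIVER, :SPARK_EXECUTOR]
  )
]
# No target has the CONTROLLER role, so control plane work runs on system-pool.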