Class: Google::Cloud::Dataproc::V1::GkeNodePoolTarget

Inherits:
Object
Extended by:
Protobuf::MessageExts::ClassMethods
Includes:
Protobuf::MessageExts
Defined in:
proto_docs/google/cloud/dataproc/v1/shared.rb

Overview

GKE node pools that Dataproc workloads run on.
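For orientation, the message can be constructed directly with the google-cloud-dataproc-v1 gem. The following is a minimal sketch; the project, location, cluster, and node pool names are placeholders, not values taken from this page.

require "google/cloud/dataproc/v1"

# Placeholder resource name; substitute your own project, location, cluster,
# and node pool identifiers.
node_pool_name = "projects/my-project/locations/us-central1/" \
                 "clusters/my-gke-cluster/nodePools/default-pool"

target = Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
  node_pool: node_pool_name,
  roles:     [:DEFAULT]  # repeated enum field; symbols map to the Role values documented below
)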

Defined Under Namespace

Modules: Role

Instance Attribute Summary

  • #node_pool ⇒ ::String
  • #node_pool_config ⇒ ::Google::Cloud::Dataproc::V1::GkeNodePoolConfig
  • #roles ⇒ ::Array<::Google::Cloud::Dataproc::V1::GkeNodePoolTarget::Role>

Instance Attribute Details

#node_pool ⇒ ::String

Returns Required. The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'.

Returns:

  • (::String)

    Required. The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'.
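As a small illustration of the format above, the resource name can be assembled by interpolating the four identifiers; the values here are placeholders.

# Placeholder identifiers used only to show the documented format.
project      = "my-project"
location     = "us-central1"
cluster      = "my-gke-cluster"
node_pool_id = "default-pool"

node_pool = "projects/#{project}/locations/#{location}/" \
            "clusters/#{cluster}/nodePools/#{node_pool_id}"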



# File 'proto_docs/google/cloud/dataproc/v1/shared.rb', line 367

class GkeNodePoolTarget
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods

  # `Role` specifies the tasks that will run on the node pool. Roles can be
  # specific to workloads. Exactly one
  # {::Google::Cloud::Dataproc::V1::GkeNodePoolTarget GkeNodePoolTarget} within the
  # virtual cluster must have the `DEFAULT` role, which is used to run all
  # workloads that are not associated with a node pool.
  module Role
    # Role is unspecified.
    ROLE_UNSPECIFIED = 0

    # At least one node pool must have the `DEFAULT` role.
    # Work assigned to a role that is not associated with a node pool
    # is assigned to the node pool with the `DEFAULT` role. For example,
    # work assigned to the `CONTROLLER` role will be assigned to the node pool
    # with the `DEFAULT` role if no node pool has the `CONTROLLER` role.
    DEFAULT = 1

    # Run work associated with the Dataproc control plane (for example,
    # controllers and webhooks). Very low resource requirements.
    CONTROLLER = 2

    # Run work associated with a Spark driver of a job.
    SPARK_DRIVER = 3

    # Run work associated with a Spark executor of a job.
    SPARK_EXECUTOR = 4
  end
end
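To illustrate the DEFAULT requirement described in the Role module, the sketch below builds one DEFAULT target plus one workload-specific target. Attaching them through GkeClusterConfig#node_pool_target is an assumption about the surrounding virtual cluster configuration; check the GkeClusterConfig reference for the exact field.

# One target must carry the DEFAULT role; others can be workload-specific.
default_target = Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
  node_pool: "projects/my-project/locations/us-central1/" \
             "clusters/my-gke-cluster/nodePools/default-pool",   # placeholder
  roles:     [:DEFAULT]
)

executor_target = Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
  node_pool: "projects/my-project/locations/us-central1/" \
             "clusters/my-gke-cluster/nodePools/executor-pool",  # placeholder
  roles:     [:SPARK_EXECUTOR]
)

# Assumed wiring: the targets are listed on the virtual cluster's GKE config.
gke_config = Google::Cloud::Dataproc::V1::GkeClusterConfig.new(
  node_pool_target: [default_target, executor_target]
)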

#node_pool_config ⇒ ::Google::Cloud::Dataproc::V1::GkeNodePoolConfig

Returns Input only. The configuration for the GKE node pool.

If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail.

If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values.

This is an input-only field. It will not be returned by the API.

Returns:

  • (::Google::Cloud::Dataproc::V1::GkeNodePoolConfig)

    Input only. The configuration for the GKE node pool.

    If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail.

    If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values.

    This is an input-only field. It will not be returned by the API.
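The sketch below supplies an input-only node_pool_config alongside the target. The GkeNodePoolConfig fields used here (config, locations, autoscaling) and their values are assumptions drawn from the companion GkeNodePoolConfig message, not from this page; nested messages are passed as hashes, which generated protobuf constructors accept.

# Assumed GkeNodePoolConfig fields; see the GkeNodePoolConfig reference for the schema.
pool_config = Google::Cloud::Dataproc::V1::GkeNodePoolConfig.new(
  config:      { machine_type: "e2-standard-8" },          # placeholder machine type
  locations:   ["us-central1-a"],                          # placeholder zone
  autoscaling: { min_node_count: 1, max_node_count: 5 }    # placeholder bounds
)

target = Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
  node_pool:        "projects/my-project/locations/us-central1/" \
                    "clusters/my-gke-cluster/nodePools/executor-pool",  # placeholder
  roles:            [:SPARK_EXECUTOR],
  node_pool_config: pool_config   # input only: used at creation, never returned by the API
)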




#roles ⇒ ::Array<::Google::Cloud::Dataproc::V1::GkeNodePoolTarget::Role>

Returns Required. The roles associated with the GKE node pool.

Returns:

  • (::Array<::Google::Cloud::Dataproc::V1::GkeNodePoolTarget::Role>)

    Required. The roles associated with the GKE node pool.

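As a closing sketch, roles is an ordinary repeated enum field, so (assuming the usual google-protobuf behavior for enum-typed fields) it accepts either symbols or the integer constants from the Role module documented above; the node pool name is a placeholder.

require "google/cloud/dataproc/v1"

target = Google::Cloud::Dataproc::V1::GkeNodePoolTarget.new(
  node_pool: "projects/my-project/locations/us-central1/" \
             "clusters/my-gke-cluster/nodePools/driver-pool"   # placeholder
)

# Symbols and Role constants both resolve to the same enum values.
target.roles << :SPARK_DRIVER
target.roles << Google::Cloud::Dataproc::V1::GkeNodePoolTarget::Role::SPARK_EXECUTOR

puts target.roles.to_a.inspect   # enum values read back as symbols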