Class: Google::Apis::DataprocV1beta2::OrderedJob

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dataproc_v1beta2/classes.rb,
lib/google/apis/dataproc_v1beta2/representations.rb

Overview

A job executed by the workflow.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ OrderedJob

Returns a new instance of OrderedJob.



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2474

def initialize(**args)
  update!(**args)
end
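
Because #initialize simply forwards its keyword arguments to #update!, an OrderedJob can be built by passing attribute names as keywords. A minimal sketch, assuming a hypothetical bucket and query file:

require 'google/apis/dataproc_v1beta2'

# Attributes map directly to keyword arguments; initialize forwards
# them to update!.
job = Google::Apis::DataprocV1beta2::OrderedJob.new(
  step_id: 'count-rows',
  hive_job: Google::Apis::DataprocV1beta2::HiveJob.new(
    query_file_uri: 'gs://my-bucket/queries/count_rows.hql' # hypothetical URI
  )
)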

Instance Attribute Details

#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob

A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). Corresponds to the JSON property hadoopJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2395

def hadoop_job
  @hadoop_job
end

#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob

A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. Corresponds to the JSON property hiveJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2401

def hive_job
  @hive_job
end

#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}. No more than 32 labels can be associated with a given job. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2410

def labels
  @labels
end
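
For example, a labels hash that satisfies these constraints (lowercase keys and values within the length limits, well under the 32-label cap):

# Lowercase keys and values within the documented length limits.
job.labels = {
  'env'  => 'production',
  'team' => 'data-platform'
}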

#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob

A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. Corresponds to the JSON property pigJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2416

def pig_job
  @pig_job
end

#prerequisite_step_ids ⇒ Array<String>

Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow. Corresponds to the JSON property prerequisiteStepIds

Returns:

  • (Array<String>)


# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2422

def prerequisite_step_ids
  @prerequisite_step_ids
end
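
A brief sketch of chaining two steps; the step ids 'ingest' and 'transform' are hypothetical:

# 'transform' runs only after the step with id 'ingest' completes;
# omitting prerequisite_step_ids would start it at the beginning of
# the workflow.
transform = Google::Apis::DataprocV1beta2::OrderedJob.new(
  step_id: 'transform',
  prerequisite_step_ids: ['ingest']
)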

#presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob

A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. Corresponds to the JSON property prestoJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2430

def presto_job
  @presto_job
end

#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob

A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. Corresponds to the JSON property pysparkJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2436

def pyspark_job
  @pyspark_job
end

#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling

Job scheduling options. Corresponds to the JSON property scheduling



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2441

def scheduling
  @scheduling
end

#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob

A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. The specification of the main method to call to drive the job: specify either the jar file that contains the main class or the main class name. To pass both a main jar and a main class in that jar, add the jar to CommonJob.jar_file_uris, and then specify the main class name in main_class. Corresponds to the JSON property sparkJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2450

def spark_job
  @spark_job
end
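
A sketch of the two ways to specify the entry point described above; the bucket path and class name are placeholders:

# Either name the jar that contains the main class...
spark = Google::Apis::DataprocV1beta2::SparkJob.new(
  main_jar_file_uri: 'gs://my-bucket/jars/app.jar'
)

# ...or name the main class and put its jar on the classpath.
spark = Google::Apis::DataprocV1beta2::SparkJob.new(
  main_class: 'com.example.App',
  jar_file_uris: ['gs://my-bucket/jars/app.jar']
)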

#spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob

A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. Corresponds to the JSON property sparkRJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2456

def spark_r_job
  @spark_r_job
end

#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob

A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. Corresponds to the JSON property sparkSqlJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2462

def spark_sql_job
  @spark_sql_job
end

#step_id ⇒ String

Required. The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the goog-dataproc-workflow-step-id job label, and in the prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters. Corresponds to the JSON property stepId

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2472

def step_id
  @step_id
end
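
The documented rules can be collapsed into a single regular expression; this pattern is derived from the prose above, not published by the API:

# 3-50 characters: letters, numbers, underscores, and hyphens,
# with an alphanumeric first and last character.
STEP_ID_RE = /\A[a-zA-Z0-9][a-zA-Z0-9_-]{1,48}[a-zA-Z0-9]\z/

STEP_ID_RE.match?('count-rows')  # => true
STEP_ID_RE.match?('-transform')  # => false (leading hyphen)
STEP_ID_RE.match?('ab')          # => false (shorter than 3 characters)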

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2479

def update!(**args)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @prerequisite_step_ids = args[:prerequisite_step_ids] if args.key?(:prerequisite_step_ids)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @step_id = args[:step_id] if args.key?(:step_id)
end
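
Because #update! assigns only the keys present in args, it can change a subset of attributes while leaving the rest untouched. A brief usage sketch, reusing the job built above:

# Only step_id is reassigned; all other attributes keep their values.
job.update!(step_id: 'count-rows-v2')
job.step_id  # => "count-rows-v2"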