Class: Google::Apis::DataprocV1beta2::Job
- Inherits: Object
  - Object
  - Google::Apis::DataprocV1beta2::Job
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in: lib/google/apis/dataproc_v1beta2/classes.rb,
  lib/google/apis/dataproc_v1beta2/representations.rb
Overview
A Dataproc job resource.
Instance Attribute Summary
- #done ⇒ Boolean (also: #done?)
  Output only.
- #driver_control_files_uri ⇒ String
  Output only.
- #driver_output_resource_uri ⇒ String
  Output only.
- #hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
  A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
  A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #job_uuid ⇒ String
  Output only.
- #labels ⇒ Hash<String,String>
  Optional.
- #pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
  A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #placement ⇒ Google::Apis::DataprocV1beta2::JobPlacement
  Dataproc job config.
- #presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
  A Dataproc job for running Presto (https://prestosql.io/) queries.
- #pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
  A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #reference ⇒ Google::Apis::DataprocV1beta2::JobReference
  Encapsulates the full scoping used to reference a job.
- #scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
  A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
- #spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
  A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
  A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
- #status ⇒ Google::Apis::DataprocV1beta2::JobStatus
  Dataproc job status.
- #status_history ⇒ Array<Google::Apis::DataprocV1beta2::JobStatus>
  Output only.
- #submitted_by ⇒ String
  Output only.
- #yarn_applications ⇒ Array<Google::Apis::DataprocV1beta2::YarnApplication>
  Output only.
Instance Method Summary
- #initialize(**args) ⇒ Job (constructor)
  A new instance of Job.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ Job
Returns a new instance of Job.
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1640

def initialize(**args)
  update!(**args)
end
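Because #initialize forwards its keyword arguments to #update!, a Job can be built directly from the attribute names documented on this page. A minimal sketch (the values are hypothetical):

require 'google/apis/dataproc_v1beta2'

# Keyword names mirror the attributes documented below; values are made up.
job = Google::Apis::DataprocV1beta2::Job.new(
  labels: { 'env' => 'dev' },
  scheduling: Google::Apis::DataprocV1beta2::JobScheduling.new
)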
Instance Attribute Details
#done ⇒ Boolean Also known as: done?
Output only. Indicates whether the job is completed. If the value is false,
the job is still in progress. If true, the job is completed, and the
status.state field will indicate whether it was successful, failed, or
cancelled.
Corresponds to the JSON property done
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1515

def done
  @done
end
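Because done is aliased as done?, a completion check reads naturally. A brief sketch, assuming job is a Job instance returned by the API:

# Per the description above, status.state reports success, failure, or cancellation.
if job.done?
  puts "Job finished with state: #{job.status.state}"
else
  puts 'Job still in progress'
end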
#driver_control_files_uri ⇒ String
Output only. If present, the location of miscellaneous control files which may
be used as part of job setup and handling. If not present, control files may
be placed in the same location as driver_output_uri.
Corresponds to the JSON property driverControlFilesUri
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1523

def driver_control_files_uri
  @driver_control_files_uri
end
#driver_output_resource_uri ⇒ String
Output only. A URI pointing to the location of the stdout of the job's driver
program.
Corresponds to the JSON property driverOutputResourceUri
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1529

def driver_output_resource_uri
  @driver_output_resource_uri
end
#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
A Dataproc job for running Apache Hadoop MapReduce
(https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html)
jobs on Apache Hadoop YARN
(https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1537

def hadoop_job
  @hadoop_job
end
#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on
YARN.
Corresponds to the JSON property hiveJob
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1543

def hive_job
  @hive_job
end
#job_uuid ⇒ String
Output only. A UUID that uniquely identifies a job within the project over
time. This is in contrast to a user-settable reference.job_id that may be
reused over time.
Corresponds to the JSON property jobUuid
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1550

def job_uuid
  @job_uuid
end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must contain 1 to
63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
Label values may be empty, but, if present, must contain 1 to 63 characters
and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more
than 32 labels can be associated with a job.
Corresponds to the JSON property labels
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1559

def labels
  @labels
end
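A sketch of labels that satisfy the documented constraints (keys of 1 to 63 RFC 1035 characters, values optionally empty, at most 32 entries); the key names here are hypothetical:

job.labels = {
  'team'        => 'data-eng',  # key and value each conform to RFC 1035
  'cost-center' => ''           # values may be empty
}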
#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on
YARN.
Corresponds to the JSON property pigJob
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1565

def pig_job
  @pig_job
end
#placement ⇒ Google::Apis::DataprocV1beta2::JobPlacement
Dataproc job config.
Corresponds to the JSON property placement
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1570

def placement
  @placement
end
#presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT:
The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/
concepts/components/presto) must be enabled when the cluster is created to
submit a Presto job to the cluster.
Corresponds to the JSON property prestoJob
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1578

def presto_job
  @presto_job
end
#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
A Dataproc job for running Apache PySpark
(https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
applications on YARN.
Corresponds to the JSON property pysparkJob
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1584

def pyspark_job
  @pyspark_job
end
#reference ⇒ Google::Apis::DataprocV1beta2::JobReference
Encapsulates the full scoping used to reference a job.
Corresponds to the JSON property reference
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1589

def reference
  @reference
end
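The #job_uuid description above contrasts the output-only UUID with the user-settable reference.job_id. A hedged sketch of setting it (the job_id keyword is assumed from that note; the value is hypothetical):

job.reference = Google::Apis::DataprocV1beta2::JobReference.new(
  job_id: 'nightly-etl-0042'  # user-settable and reusable over time, unlike job_uuid
)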
#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1594

def scheduling
  @scheduling
end
#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
A Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN. The specification of the main method to call to drive
the job. Specify either the jar file that contains the main class or the main
class name. To pass both a main jar and a main class in that jar, add the jar
to CommonJob.jar_file_uris, and then specify the main class name in main_class.
Corresponds to the JSON property sparkJob
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1603

def spark_job
  @spark_job
end
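Following the description's guidance to add the jar to jar_file_uris and name the entry point in main_class, a hedged sketch (the SparkJob keyword names are assumed to mirror that description; the URI and class name are hypothetical):

job.spark_job = Google::Apis::DataprocV1beta2::SparkJob.new(
  jar_file_uris: ['gs://my-bucket/my-spark-app.jar'],  # jar containing the main class
  main_class: 'com.example.SparkMain'                  # fully qualified main class name
)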
#spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
A Dataproc job for running Apache SparkR
(https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
Corresponds to the JSON property sparkRJob
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1609

def spark_r_job
  @spark_r_job
end
#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/)
queries.
Corresponds to the JSON property sparkSqlJob
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1615

def spark_sql_job
  @spark_sql_job
end
#status ⇒ Google::Apis::DataprocV1beta2::JobStatus
Dataproc job status.
Corresponds to the JSON property status
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1620

def status
  @status
end
#status_history ⇒ Array<Google::Apis::DataprocV1beta2::JobStatus>
Output only. The previous job status.
Corresponds to the JSON property statusHistory
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1625

def status_history
  @status_history
end
#submitted_by ⇒ String
Output only. The email address of the user submitting the job. For jobs
submitted on the cluster, the address is username@hostname.
Corresponds to the JSON property submittedBy
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1631

def submitted_by
  @submitted_by
end
#yarn_applications ⇒ Array<Google::Apis::DataprocV1beta2::YarnApplication>
Output only. The collection of YARN applications spun up by this job. Beta
Feature: This report is available for testing purposes only. It may be changed
before final release.
Corresponds to the JSON property yarnApplications
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1638

def yarn_applications
  @yarn_applications
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 1645

def update!(**args)
  @done = args[:done] if args.key?(:done)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @submitted_by = args[:submitted_by] if args.key?(:submitted_by)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
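Since update! only reassigns keys that are present in args, it supports partial in-place updates. A brief sketch:

job.update!(labels: { 'env' => 'prod' })  # only @labels changes; other attributes keep their values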