Class: Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse

Inherits: Object
Includes: Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/remotebuildexecution_v2/classes.rb,
lib/google/apis/remotebuildexecution_v2/representations.rb

Overview

A response corresponding to a single blob that the client tried to upload.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse

Returns a new instance of BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse.



# File 'lib/google/apis/remotebuildexecution_v2/classes.rb', line 600

def initialize(**args)
  update!(**args)
end
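
As a hedged sketch of how a client might build one of these per-blob responses directly (for example, in tests): the :digest and :status keywords are forwarded to #update! and map onto the attributes documented below. The GoogleRpcStatus value with code 0 is an illustrative "OK" status, not something mandated by this class.

require 'google/apis/remotebuildexecution_v2'

rbe = Google::Apis::RemotebuildexecutionV2

# Keyword arguments are forwarded to #update!, so :digest and :status
# populate the two attributes documented below. Code 0 is the canonical
# "OK" value in the google.rpc.Status model and is used here for illustration.
ok_status = rbe::GoogleRpcStatus.new(code: 0)

response = rbe::BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse.new(
  digest: rbe::BuildBazelRemoteExecutionV2Digest.new,
  status: ok_status
)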

Instance Attribute Details

#digest ⇒ Google::Apis::RemotebuildexecutionV2::BuildBazelRemoteExecutionV2Digest

A content digest. A digest for a given blob consists of the size of the blob and its hash. The hash algorithm to use is defined by the server.

The size is considered to be an integral part of the digest and cannot be separated. That is, even if the hash field is correctly specified but size_bytes is not, the server MUST reject the request. The reason for including the size in the digest is as follows: in a great many cases, the server needs to know the size of the blob it is about to work with prior to starting an operation with it, such as flattening Merkle tree structures or streaming it to a worker. Technically, the server could implement a separate metadata store, but this results in a significantly more complicated implementation as opposed to having the client specify the size up-front (or storing the size along with the digest in every message where digests are embedded). This does mean that the API leaks some implementation details of (what we consider to be) a reasonable server implementation, but we consider this to be a worthwhile tradeoff.

When a Digest is used to refer to a proto message, it always refers to the message in binary encoded form. To ensure consistent hashing, clients and servers MUST ensure that they serialize messages according to the following rules, even if there are alternate valid encodings for the same message:

* Fields are serialized in tag order.
* There are no unknown fields.
* There are no duplicate fields.
* Fields are serialized according to the default semantics for their type.

Most protocol buffer implementations will always follow these rules when serializing, but care should be taken to avoid shortcuts. For instance, concatenating two messages to merge them may produce duplicate fields.

Corresponds to the JSON property digest



# File 'lib/google/apis/remotebuildexecution_v2/classes.rb', line 588

def digest
  @digest
end
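
The description above defines a digest as the pair of blob hash and blob size, with the hash algorithm chosen by the server. A minimal client-side sketch of computing both components, assuming SHA-256 and an illustrative file path:

require 'digest'

blob = File.binread('path/to/blob')  # illustrative path

# The two components of a Digest described above. The hash algorithm is
# defined by the server; SHA-256 is assumed here for illustration.
blob_hash  = Digest::SHA256.hexdigest(blob)
size_bytes = blob.bytesize

# Both values travel together: per the description above, a request whose
# digest has a correct hash but no size_bytes MUST be rejected by the server.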

#status ⇒ Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide. Corresponds to the JSON property status



# File 'lib/google/apis/remotebuildexecution_v2/classes.rb', line 598

def status
  @status
end
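
Within a batch-update reply, this per-blob status tells the client whether that particular blob was stored. A hedged sketch of checking it, assuming the generated GoogleRpcStatus exposes the code and message accessors named in the description above:

# response is a BuildBazelRemoteExecutionV2BatchUpdateBlobsResponseResponse
# taken from a batch-update reply. A code of 0 is the canonical "OK" in the
# google.rpc.Status model; anything else means this particular blob failed.
if response.status.nil? || response.status.code.to_i.zero?
  puts 'blob stored'
else
  warn "upload failed: #{response.status.code} - #{response.status.message}"
end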

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/remotebuildexecution_v2/classes.rb', line 605

def update!(**args)
  @digest = args[:digest] if args.key?(:digest)
  @status = args[:status] if args.key?(:status)
end
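
Because #initialize simply forwards its keywords here, the same :digest and :status keys can be applied to an existing instance. A minimal sketch (the error code is illustrative):

# Only the keys present in args are assigned; other attributes are untouched.
response.update!(
  status: Google::Apis::RemotebuildexecutionV2::GoogleRpcStatus.new(code: 13)
)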