Class: Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in: lib/google/apis/videointelligence_v1p2beta1/classes.rb,
  lib/google/apis/videointelligence_v1p2beta1/representations.rb
Overview
Annotation results for a single video.
Instance Attribute Summary
-
#error ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleRpcStatus
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs.
-
#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only).
-
#face_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1FaceAnnotation>
Deprecated.
-
#face_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1FaceDetectionAnnotation>
Face detection annotations.
-
#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Label annotations on frame level.
-
#input_uri ⇒ String
Video file location in Cloud Storage.
-
#logo_recognition_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LogoRecognitionAnnotation>
Annotations for the list of logos detected, tracked, and recognized in the video.
-
#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation>
Annotations for the list of objects detected and tracked in the video.
-
#person_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1PersonDetectionAnnotation>
Person detection annotations.
-
#segment ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment
Video segment.
-
#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Topical label annotations on video level or user-specified segment level.
-
#segment_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Presence label annotations on video level or user-specified segment level.
-
#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>
Shot annotations.
-
#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Topical label annotations on shot level.
-
#shot_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Presence label annotations on shot level.
-
#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>
Speech transcription.
-
#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1TextAnnotation>
OCR text detection and tracking.
Instance Method Summary
-
#initialize(**args) ⇒ GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
constructor
A new instance of GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
Returns a new instance of GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults.
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3345

def initialize(**args)
  update!(**args)
end
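The constructor simply forwards its keyword arguments to #update!, so an instance can be populated directly from attribute values. A minimal sketch (the require path matches the files listed above; the Cloud Storage URI is hypothetical):

require 'google/apis/videointelligence_v1p2beta1'

VideoIntelligence = Google::Apis::VideointelligenceV1p2beta1

# Each keyword corresponds to one of the attributes documented below.
results = VideoIntelligence::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults.new(
  input_uri: 'gs://example-bucket/example-video.mp4' # hypothetical URI
)

results.input_uri # => "gs://example-bucket/example-video.mp4"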
Instance Attribute Details
#error ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleRpcStatus
The Status type defines a logical error model that is suitable for different
programming environments, including REST APIs and RPC APIs. It is used by
gRPC. Each Status message contains three pieces of data: error code, error
message, and error details. You can find out more about this error model and
how to work with it in the API Design Guide.
Corresponds to the JSON property error
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3248

def error
  @error
end
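A brief sketch of checking this field before reading the other annotation attributes; it assumes the code and message accessors that the gem generates on GoogleRpcStatus, and reuses the results instance from the constructor example:

# Status carries an error code, an error message, and optional details.
if results.error
  warn "Annotation failed: #{results.error.code} #{results.error.message}"
end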
#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only). If no
explicit content has been detected in a frame, no annotations are present for
that frame.
Corresponds to the JSON property explicitAnnotation
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3255

def explicit_annotation
  @explicit_annotation
end
#face_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1FaceAnnotation>
Deprecated. Please use face_detection_annotations instead.
Corresponds to the JSON property faceAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3260

def face_annotations
  @face_annotations
end
#face_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1FaceDetectionAnnotation>
Face detection annotations.
Corresponds to the JSON property faceDetectionAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3265

def face_detection_annotations
  @face_detection_annotations
end
#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Label annotations on frame level. There is exactly one element for each unique
label.
Corresponds to the JSON property frameLabelAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3271

def frame_label_annotations
  @frame_label_annotations
end
#input_uri ⇒ String
Video file location in Cloud Storage.
Corresponds to the JSON property inputUri
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3276

def input_uri
  @input_uri
end
#logo_recognition_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LogoRecognitionAnnotation>
Annotations for the list of logos detected, tracked, and recognized in the video.
Corresponds to the JSON property logoRecognitionAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3281

def logo_recognition_annotations
  @logo_recognition_annotations
end
#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation>
Annotations for the list of objects detected and tracked in the video.
Corresponds to the JSON property objectAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3286

def object_annotations
  @object_annotations
end
#person_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1PersonDetectionAnnotation>
Person detection annotations.
Corresponds to the JSON property personDetectionAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3291

def person_detection_annotations
  @person_detection_annotations
end
#segment ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment
Video segment.
Corresponds to the JSON property segment
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3296

def segment
  @segment
end
#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Topical label annotations on video level or user-specified segment level.
There is exactly one element for each unique label.
Corresponds to the JSON property segmentLabelAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3302

def segment_label_annotations
  @segment_label_annotations
end
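A sketch of listing segment-level labels; it assumes the entity.description, segments, and confidence accessors that the gem generates on the label classes:

# Print each label with its per-segment confidence score.
(results.segment_label_annotations || []).each do |label|
  description = label.entity&.description
  (label.segments || []).each do |label_segment|
    puts format('%s: %.2f', description, label_segment.confidence.to_f)
  end
end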
#segment_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Presence label annotations on video level or user-specified segment level.
There is exactly one element for each unique label. Compared to the existing
topical segment_label_annotations, this field presents more fine-grained,
segment-level labels detected in video content and is made available only
when the client sets LabelDetectionConfig.model to "builtin/latest" in the
request.
Corresponds to the JSON property segmentPresenceLabelAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3312

def segment_presence_label_annotations
  @segment_presence_label_annotations
end
#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>
Shot annotations. Each shot is represented as a video segment.
Corresponds to the JSON property shotAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3317

def shot_annotations
  @shot_annotations
end
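A sketch of printing shot boundaries; it assumes start_time_offset and end_time_offset are exposed on each video segment as Duration-formatted strings (for example "7.5s"), per the REST JSON mapping:

# Each shot is one video segment with a start and end offset.
(results.shot_annotations || []).each_with_index do |shot, index|
  puts "Shot #{index}: #{shot.start_time_offset} -> #{shot.end_time_offset}"
end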
#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Topical label annotations on shot level. There is exactly one element for each
unique label.
Corresponds to the JSON property shotLabelAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3323

def shot_label_annotations
  @shot_label_annotations
end
#shot_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Presence label annotations on shot level. There is exactly one element for
each unique label. Compared to the existing topical shot_label_annotations,
this field presents more fine-grained, shot-level labels detected in video
content and is made available only when the client sets
LabelDetectionConfig.model to "builtin/latest" in the request.
Corresponds to the JSON property shotPresenceLabelAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3332

def shot_presence_label_annotations
  @shot_presence_label_annotations
end
#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>
Speech transcription.
Corresponds to the JSON property speechTranscriptions
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3337

def speech_transcriptions
  @speech_transcriptions
end
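A sketch of printing the top transcript for each transcription result; it assumes the alternatives, transcript, and confidence accessors that the gem generates on the speech classes:

# Print the highest-ranked alternative for each transcription.
(results.speech_transcriptions || []).each do |transcription|
  best = transcription.alternatives&.first
  next unless best
  puts format('%.2f %s', best.confidence.to_f, best.transcript)
end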
#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p1beta1TextAnnotation>
OCR text detection and tracking. Annotations for the list of detected text
snippets. Each snippet has a list of associated frame information.
Corresponds to the JSON property textAnnotations
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3343

def text_annotations
  @text_annotations
end
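A sketch of listing detected text snippets; it assumes the text and segments accessors that the gem generates on the text annotation class:

# Print each recognized snippet and how many segments it was tracked in.
(results.text_annotations || []).each do |text_annotation|
  segment_count = (text_annotation.segments || []).size
  puts "#{text_annotation.text} (#{segment_count} segment(s))"
end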
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/videointelligence_v1p2beta1/classes.rb', line 3350

def update!(**args)
  @error = args[:error] if args.key?(:error)
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @face_annotations = args[:face_annotations] if args.key?(:face_annotations)
  @face_detection_annotations = args[:face_detection_annotations] if args.key?(:face_detection_annotations)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
  @logo_recognition_annotations = args[:logo_recognition_annotations] if args.key?(:logo_recognition_annotations)
  @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @person_detection_annotations = args[:person_detection_annotations] if args.key?(:person_detection_annotations)
  @segment = args[:segment] if args.key?(:segment)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @segment_presence_label_annotations = args[:segment_presence_label_annotations] if args.key?(:segment_presence_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @shot_presence_label_annotations = args[:shot_presence_label_annotations] if args.key?(:shot_presence_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
end
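Because #update! only assigns attributes whose keys are present in args, it can merge new values into an existing instance without clearing the rest. A minimal sketch (the URI is hypothetical):

# Only :input_uri is replaced; all other attributes keep their values.
results.update!(input_uri: 'gs://example-bucket/replacement.mp4') # hypothetical URI
results.input_uri # => "gs://example-bucket/replacement.mp4"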