Class: AWS::S3::S3Object
- Inherits: Object
- Includes: DataOptions
- Defined in: lib/aws/s3/s3_object.rb
Overview
Represents an object in S3 identified by a key.
object = bucket.objects["key-to-my-object"]
object.key #=> 'key-to-my-object'
See ObjectCollection for more information on finding objects.
Writing and Reading S3Objects
obj = bucket.objects["my-text-object"]
obj.write("MY TEXT")
obj.read
#=> "MY TEXT"
obj.write(File.new("README.txt"))
obj.read
# should equal File.read("README.txt")
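The examples in this overview assume you already have a Bucket handle. A minimal sketch of obtaining one (the credentials and bucket name below are placeholders):
require 'aws-sdk'
s3 = AWS::S3.new(
  :access_key_id => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY')
bucket = s3.buckets['my-bucket'] # placeholder bucket name
obj = bucket.objects['key-to-my-object']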
Instance Attribute Summary
-
#bucket ⇒ Bucket
readonly
The bucket this object is in.
-
#key ⇒ String
readonly
The object’s unique key.
Instance Method Summary
-
#==(other) ⇒ Boolean
(also: #eql?)
Returns true if the other object belongs to the same bucket and has the same key.
-
#acl ⇒ AccessControlList
Returns the object’s access control list.
-
#acl=(acl) ⇒ nil
Sets the object’s access control list.
-
#content_length ⇒ Integer
Size of the object in bytes.
-
#content_type ⇒ String
Returns the content type as reported by S3; defaults to an empty string when not provided during upload.
-
#copy_from(source, options = {}) ⇒ nil
Copies data from one S3 object to another.
-
#copy_to(target, options = {}) ⇒ nil
Copies data from the current object to another object in S3.
-
#delete(options = {}) ⇒ nil
Deletes the object from its S3 bucket.
-
#etag ⇒ String
Returns the object’s ETag.
- #exists? ⇒ Boolean
-
#head(options = {}) ⇒ Object
Performs a HEAD request against this object and returns an object with useful information about it, including metadata, content length, content type, ETag, and server side encryption details.
-
#initialize(bucket, key, opts = {}) ⇒ S3Object
constructor
A new instance of S3Object.
-
#last_modified ⇒ Time
Returns the object’s last modified time.
-
#metadata(options = {}) ⇒ ObjectMetadata
Returns an instance of ObjectMetadata representing the metadata for this object.
-
#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion
Performs a multipart upload.
-
#multipart_uploads ⇒ ObjectUploadCollection
Returns an object representing the collection of uploads that are in progress for this object.
-
#presigned_post(options = {}) ⇒ PresignedPost
Generates fields for a presigned POST to this object.
-
#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a public (not authenticated) URL for the object.
-
#read(options = {}, &blk) ⇒ Object
Fetches the object data from S3.
-
#reduced_redundancy=(value) ⇒ true, false
Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).
-
#server_side_encryption ⇒ Symbol?
Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.
-
#server_side_encryption? ⇒ true, false
Returns true if the object was stored using server side encryption.
-
#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a presigned URL for an operation on this object.
-
#versions ⇒ ObjectVersionCollection
Returns a collection representing all the object versions for this object.
-
#write(options_or_data = nil, options = nil) ⇒ S3Object, ObjectVersion
Writes data to the object in S3.
Constructor Details
#initialize(bucket, key, opts = {}) ⇒ S3Object
Returns a new instance of S3Object.
# File 'lib/aws/s3/s3_object.rb', line 45

def initialize(bucket, key, opts = {})
  super
  @key = key
  @bucket = bucket
end
Instance Attribute Details
#bucket ⇒ Bucket (readonly)
Returns the bucket this object is in.
# File 'lib/aws/s3/s3_object.rb', line 55

def bucket
  @bucket
end
#key ⇒ String (readonly)
Returns the object’s unique key.
# File 'lib/aws/s3/s3_object.rb', line 52

def key
  @key
end
Instance Method Details
#==(other) ⇒ Boolean Also known as: eql?
Returns true if the other object belongs to the same bucket and has the same key.
# File 'lib/aws/s3/s3_object.rb', line 64

def ==(other)
  other.kind_of?(S3Object) and other.bucket == bucket and other.key == key
end
#acl ⇒ AccessControlList
Returns the object’s access control list. This will be an instance of AccessControlList, plus an additional change method:
object.acl.change do |acl|
# remove any grants to someone other than the bucket owner
owner_id = object.bucket.owner.id
acl.grants.reject! do |g|
g.grantee.canonical_user_id != owner_id
end
end
Note that changing the ACL is not an atomic operation; it fetches the current ACL, yields it to the block, and then sets it again. Therefore, it’s possible that you may overwrite a concurrent update to the ACL using this method.
# File 'lib/aws/s3/s3_object.rb', line 673

def acl
  acl = client.get_object_acl(
    :bucket_name => bucket.name,
    :key => key).acl
  acl.extend ACLProxy
  acl.object = self
  acl
end
#acl=(acl) ⇒ nil
Sets the object’s access control list. acl can be:
-
An XML policy as a string (which is passed to S3 uninterpreted)
-
An AccessControlList object
-
Any object that responds to to_xml
-
Any Hash that is acceptable as an argument to AccessControlList#initialize.
# File 'lib/aws/s3/s3_object.rb', line 692

def acl=(acl)
  client.set_object_acl(
    :bucket_name => bucket.name,
    :key => key,
    :acl => acl)
  nil
end
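For illustration, two of the accepted forms (other_object and custom_policy_xml are placeholder names, not part of the SDK):
object.acl = other_object.acl  # an AccessControlList returned by another object
object.acl = custom_policy_xml # a raw XML access control policy string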
#content_length ⇒ Integer
Returns the size of the object in bytes.
# File 'lib/aws/s3/s3_object.rb', line 117

def content_length
  head.content_length
end
#content_type ⇒ String
S3 does not compute the content-type; it reports the content-type that was provided when the object was uploaded.
Returns the content type as reported by S3; defaults to an empty string when not provided during upload.
# File 'lib/aws/s3/s3_object.rb', line 125

def content_type
  head.content_type
end
#copy_from(source, options = {}) ⇒ nil
This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from the source object. If you don’t specify any of these options when copying, the object will have the default values as described below.
Copies data from one S3 object to another.
S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
# File 'lib/aws/s3/s3_object.rb', line 488

def copy_from source, options = {}

  copy_opts = { :bucket_name => bucket.name, :key => key }

  copy_opts[:copy_source] = case source
    when S3Object
      "#{source.bucket.name}/#{source.key}"
    when ObjectVersion
      copy_opts[:version_id] = source.version_id
      "#{source.object.bucket.name}/#{source.object.key}"
    else
      case
      when options[:bucket]      then "#{options[:bucket].name}/#{source}"
      when options[:bucket_name] then "#{options[:bucket_name]}/#{source}"
      else "#{self.bucket.name}/#{source}"
      end
  end

  if options[:metadata]
    copy_opts[:metadata] = options[:metadata]
    copy_opts[:metadata_directive] = 'REPLACE'
  else
    copy_opts[:metadata_directive] = 'COPY'
  end

  copy_opts[:acl] = options[:acl] if options[:acl]
  copy_opts[:version_id] = options[:version_id] if options[:version_id]
  copy_opts[:server_side_encryption] =
    options[:server_side_encryption] if options.key?(:server_side_encryption)

  add_sse_options(copy_opts)

  if options[:reduced_redundancy]
    copy_opts[:storage_class] = 'REDUCED_REDUNDANCY'
  else
    copy_opts[:storage_class] = 'STANDARD'
  end

  client.copy_object(copy_opts)

  nil

end
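A short usage sketch (bucket and key names are placeholders); as the source shows, the source argument may be another S3Object, an ObjectVersion, or a string key optionally qualified with :bucket or :bucket_name:
target = bucket.objects['target-key']
target.copy_from(bucket.objects['source-key'])
target.copy_from('source-key', :bucket_name => 'other-bucket')
target.copy_from('source-key', :metadata => { 'category' => 'archive' })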
#copy_to(target, options = {}) ⇒ nil
This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from this object to the new object. If you don’t specify any of these options when copying, the new object will have the default values as described below.
Copies data from the current object to another object in S3.
S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
# File 'lib/aws/s3/s3_object.rb', line 584

def copy_to target, options = {}

  unless target.is_a?(S3Object)

    bucket = case
      when options[:bucket] then options[:bucket]
      when options[:bucket_name]
        Bucket.new(options[:bucket_name], :config => config)
      else self.bucket
    end

    target = S3Object.new(bucket, target)

  end

  copy_opts = options.dup
  copy_opts.delete(:bucket)
  copy_opts.delete(:bucket_name)

  target.copy_from(self, copy_opts)

end
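For example (key and bucket names are placeholders):
obj.copy_to("backup/#{obj.key}")
obj.copy_to('backup-key', :bucket_name => 'my-backup-bucket')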
#delete(options = {}) ⇒ nil
Deletes the object from its S3 bucket.
# File 'lib/aws/s3/s3_object.rb', line 149

def delete options = {}
  options[:bucket_name] = bucket.name
  options[:key] = key
  client.delete_object(options)
  nil
end
#etag ⇒ String
Returns the object’s ETag.
Generally the ETag is the MD5 of the object. If the object was uploaded using multipart upload, then this is the MD5 of all of the upload-part MD5s.
# File 'lib/aws/s3/s3_object.rb', line 105

def etag
  head.etag
end
#exists? ⇒ Boolean
# File 'lib/aws/s3/s3_object.rb', line 70

def exists?
  head
rescue Errors::NoSuchKey => e
  false
else
  true
end
#head(options = {}) ⇒ Object
Performs a HEAD request against this object and returns an object with useful information about the object, including:
-
metadata (hash of user-supplied key-value pairs)
-
content_length (integer, number of bytes)
-
content_type (as sent to S3 when uploading the object)
-
etag (typically the object’s MD5)
-
server_side_encryption (the algorithm used to encrypt the object on the server side, e.g. :aes256)
# File 'lib/aws/s3/s3_object.rb', line 93

def head options = {}
  client.head_object(options.merge(
    :bucket_name => bucket.name,
    :key => key))
end
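A brief sketch of reading attributes off the HEAD response; these are the same accessors the convenience methods on this class use internally:
resp = obj.head
resp.content_length #=> size in bytes
resp.etag
resp.last_modified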
#last_modified ⇒ Time
Returns the object’s last modified time.
# File 'lib/aws/s3/s3_object.rb', line 112

def last_modified
  head.last_modified
end
#metadata(options = {}) ⇒ ObjectMetadata
Returns an instance of ObjectMetadata representing the metadata for this object.
# File 'lib/aws/s3/s3_object.rb', line 160

def metadata options = {}
  options[:config] = config
  ObjectMetadata.new(self, options)
end
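A minimal sketch, assuming metadata was supplied when the object was written (the "color" key is a placeholder):
obj.write('data', :metadata => { 'color' => 'red' })
obj.metadata['color'] #=> 'red'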
#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion
Performs a multipart upload. Use this if you have specific needs for how the upload is split into parts, or if you want to have more control over how the failure of an individual part upload is handled. Otherwise, #write is much simpler to use.
# File 'lib/aws/s3/s3_object.rb', line 405

def multipart_upload(options = {})

  options = options.dup
  add_sse_options(options)

  upload = multipart_uploads.create(options)

  if block_given?
    result = nil
    begin
      yield(upload)
    ensure
      result = upload.close
    end
    result
  else
    upload
  end

end
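A minimal sketch of the block form; note that S3 requires every part except the last to be at least 5 MB:
obj.multipart_upload do |upload|
  upload.add_part('a' * 5 * 1024 * 1024) # first part, 5 MB minimum
  upload.add_part('remaining data')      # final part may be smaller
end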
#multipart_uploads ⇒ ObjectUploadCollection
Returns an object representing the collection of uploads that are in progress for this object.
# File 'lib/aws/s3/s3_object.rb', line 431

def multipart_uploads
  ObjectUploadCollection.new(self)
end
#presigned_post(options = {}) ⇒ PresignedPost
Generates fields for a presigned POST to this object. This method adds a constraint that the key must match the key of this object. All options are sent to the PresignedPost constructor.
# File 'lib/aws/s3/s3_object.rb', line 798

def presigned_post(options = {})
  PresignedPost.new(bucket, options.merge(:key => key))
end
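A hedged sketch, assuming the returned PresignedPost exposes the target URL and the form fields to embed in an HTML upload form:
post = obj.presigned_post
post.url    # the URL an HTML form should POST to
post.fields # hidden form fields, including the policy and signature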
#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a public (not authenticated) URL for the object.
# File 'lib/aws/s3/s3_object.rb', line 786

def public_url(options = {})
  req = request_for_signing(options)
  build_uri(options[:secure] != false, req)
end
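For example; per the source above, pass :secure => false to get a plain HTTP URL instead of HTTPS:
obj.public_url                   #=> HTTPS URI for the object
obj.public_url(:secure => false) #=> HTTP URI for the object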
#read(options = {}, &blk) ⇒ Object
Fetches the object data from S3.
# File 'lib/aws/s3/s3_object.rb', line 635

def read(options = {}, &blk)
  options[:bucket_name] = bucket.name
  options[:key] = key
  client.get_object(options).data
end
#reduced_redundancy=(value) ⇒ true, false
Changing the storage class of an object incurs a COPY operation.
Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).
# File 'lib/aws/s3/s3_object.rb', line 814

def reduced_redundancy= value
  copy_from(key, :reduced_redundancy => value)
  value
end
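For example, switching an existing object to RRS and back (each assignment performs a copy, as noted above):
obj.reduced_redundancy = true  # copies the object with storage class REDUCED_REDUNDANCY
obj.reduced_redundancy = false # copies it back to STANDARD storage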
#server_side_encryption ⇒ Symbol?
Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.
# File 'lib/aws/s3/s3_object.rb', line 132

def server_side_encryption
  head.server_side_encryption
end
#server_side_encryption? ⇒ true, false
Returns true if the object was stored using server side encryption.
# File 'lib/aws/s3/s3_object.rb', line 138

def server_side_encryption?
  !server_side_encryption.nil?
end
#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a presigned URL for an operation on this object. This URL can be used by a regular HTTP client to perform the desired operation without credentials and without changing the permissions of the object.
# File 'lib/aws/s3/s3_object.rb', line 767

def url_for(method, options = {})
  req = request_for_signing(options)
  method = http_method(method)
  expires = expiration_timestamp(options[:expires])
  req.add_param("AWSAccessKeyId", config.signer.access_key_id)
  req.add_param("Signature", signature(method, expires, req))
  req.add_param("Expires", expires)
  build_uri(options[:secure] != false, req)
end
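A brief sketch; :read is one of the supported operations, and the :expires option is assumed here to accept a number of seconds from now:
url = obj.url_for(:read, :expires => 10 * 60) # presigned GET URL valid for 10 minutes
url.to_s # hand this string to any HTTP client or browser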
#versions ⇒ ObjectVersionCollection
Returns a collection representing all the object versions for this object.
bucket.versioning_enabled? # => true
version = bucket.objects["mykey"].versions.latest
# File 'lib/aws/s3/s3_object.rb', line 172

def versions
  ObjectVersionCollection.new(self)
end
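For illustration, assuming the returned collection is enumerable:
obj.versions.each do |version|
  puts version.version_id
end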
#write(options = {}) ⇒ S3Object, ObjectVersion
#write(data, options = {}) ⇒ S3Object, ObjectVersion
Writes data to the object in S3. This method will attempt to intelligently choose between uploading in one request and using #multipart_upload.
Unless versioning is enabled, any data currently in S3 at #key will be replaced.
You can pass :data or :file as the first argument or as options. Example usage:
obj = s3.buckets.mybucket.objects.mykey
obj.write("HELLO")
obj.write(:data => "HELLO")
obj.write(Pathname.new("myfile"))
obj.write(:file => "myfile")
# writes zero-length data
obj.write(:metadata => { "avg-rating" => "5 stars" })
# File 'lib/aws/s3/s3_object.rb', line 289

def write(options_or_data = nil, options = nil)

  (data_options, put_options) =
    compute_write_options(options_or_data, options)

  add_sse_options(put_options)

  if use_multipart?(data_options, put_options)
    put_options.delete(:multipart_threshold)
    multipart_upload(put_options) do |upload|
      each_part(data_options, put_options) do |part|
        upload.add_part(part)
      end
    end
  else
    opts = { :bucket_name => bucket.name, :key => key }
    resp = client.put_object(opts.merge(put_options).merge(data_options))
    if resp.version_id
      ObjectVersion.new(self, resp.version_id)
    else
      self
    end
  end

end
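A further sketch using the :multipart_threshold option seen in the source above to control when #write switches to a multipart upload (the file name and 100 MB threshold are placeholders):
obj.write(Pathname.new('large-file.bin'),
  :multipart_threshold => 100 * 1024 * 1024)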