Class: AWS::S3::S3Object
- Inherits: Object
- Defined in: lib/aws/s3/s3_object.rb
Overview
Represents an object in S3. Objects live in a bucket and have unique keys.
Getting Objects
You can get an object by its key.
s3 = AWS::S3.new
obj = s3.buckets['my-bucket'].objects['key'] # no request made
You can also get objects by enumerating the objects in a bucket.
bucket.objects.each do |obj|
puts obj.key
end
See ObjectCollection for more information on finding objects.
Creating Objects
You create an object by writing to it. The following two expressions are equivalent.
obj = bucket.objects.create('key', 'data')
obj = bucket.objects['key'].write('data')
Writing Objects
To upload data to S3, you simply need to call #write on an object.
obj.write('Hello World!')
obj.read
#=> 'Hello World!'
Uploading Files
You can upload a file to S3 in a variety of ways. Given a path to a file (as a string) you can do any of the following:
# specify the data as a path to a file
obj.write(Pathname.new(path_to_file))
# also works this way
obj.write(:file => path_to_file)
# Also accepts an open file object
file = File.open(path_to_file, 'r')
obj.write(file)
All three examples above produce the same result. The file will be streamed to S3 in chunks. It will not be loaded entirely into memory.
Streaming Uploads
When you call #write with any IO-like object (must respond to #read and #eof?), it will be streamed to S3 in chunks.
While it is possible to determine the size of many IO objects, you may have to specify the :content_length of your IO object. If the exact size cannot be known, you may provide an :estimated_content_length. Depending on the size (actual or estimated) of your data, it will be uploaded in a single request or in multiple requests via #multipart_upload.
You may also stream uploads to S3 using a block:
obj.write do |buffer, bytes|
# writing fewer than the requested number of bytes to the buffer
# will cause write to stop yielding to the block
end
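The IO duck type that #write accepts can be sketched as a small class; anything responding to #read and #eof? works. The class, its data, and the size estimate below are illustrative, not part of the SDK:

```ruby
# A minimal IO-like source. Anything that responds to #read and #eof?
# can be passed to #write; this class and its sample data are illustrative.
class ChunkedSource
  def initialize(chunks)
    @chunks = chunks
  end

  # #write calls #read repeatedly; hitting #eof? ends the stream.
  # Ignoring the requested byte count is acceptable for a sketch.
  def read(bytes = nil)
    @chunks.shift
  end

  def eof?
    @chunks.empty?
  end
end

source = ChunkedSource.new(["part one, ", "part two"])
# With no known exact size, pass an estimate so the SDK can choose
# between a single request and a multipart upload:
# obj.write(source, :estimated_content_length => 18)
```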
Reading Objects
You can read an object directly using #read. Be warned, this will load the entire object into memory and is not recommended for large objects.
obj.write('abc')
puts obj.read
#=> abc
Streaming Downloads
If you want to stream an object from S3, you can pass a block to #read.
File.open('output', 'w') do |file|
large_object.read do |chunk|
file.write(chunk)
end
end
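The same consumer pattern can be exercised locally; here a StringIO stands in for the S3 response body so the loop can run without credentials (in real use the chunks are yielded by large_object.read):

```ruby
require 'stringio'

# Stand-in for the S3 response body; in real use, chunks arrive via
# obj.read { |chunk| ... } as they are read off the HTTP response.
response_body = StringIO.new("x" * 10_000)

bytes_written = 0
File.open('output', 'wb') do |file|
  while (chunk = response_body.read(4096))
    bytes_written += file.write(chunk)
  end
end
```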
Encryption
Amazon S3 can encrypt objects for you on the server side. You can also use client-side encryption.
Server Side Encryption
Amazon S3 provides server side encryption for an additional cost. You can specify to use server side encryption when writing an object.
obj.write('data', :server_side_encryption => :aes256)
You can also make this the default behavior.
AWS.config(:s3_server_side_encryption => :aes256)
s3 = AWS::S3.new
s3.buckets['name'].objects['key'].write('abc') # will be encrypted
Client Side Encryption
Client-side encryption uses envelope encryption, so your keys are never sent to S3. You can use a symmetric key or an asymmetric key pair.
Symmetric Key Encryption
An AES key is used for symmetric encryption. The key can be 128, 192, or 256 bits. Start by generating a new key or reading a previously generated one.
# generate a new random key
my_key = OpenSSL::Cipher.new("AES-256-ECB").random_key
# read an existing key from disk
my_key = File.read("my_key.der")
Now you can encrypt locally and upload the encrypted data to S3. To do this, you need to provide your key.
obj = bucket.objects["my-text-object"]
# encrypt then upload data
obj.write("MY TEXT", :encryption_key => my_key)
# try to read the object without decrypting, oops
obj.read
#=> '.....'
Lastly, you can download and decrypt by providing the same key.
obj.read(:encryption_key => my_key)
#=> "MY TEXT"
Asymmetric Key Pair
An RSA key pair is used for asymmetric encryption. The public key is used for encryption and the private key is used for decryption. Start by generating a key.
my_key = OpenSSL::PKey::RSA.new(1024)
Provide your key to #write and the data will be encrypted before it is uploaded. Pass the same key to #read to decrypt the data when you download it.
obj = bucket.objects["my-text-object"]
# encrypt and upload the data
obj.write("MY TEXT", :encryption_key => my_key)
# download and decrypt the data
obj.read(:encryption_key => my_key)
#=> "MY TEXT"
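Under the hood, envelope encryption means the RSA key never encrypts your data directly: a one-off AES data key encrypts the object, and only that data key is RSA-wrapped. A simplified local sketch follows; the actual materials format the SDK stores alongside the object differs:

```ruby
require 'openssl'

rsa = OpenSSL::PKey::RSA.new(2048)

# encrypt: a fresh AES data key protects the object data...
cipher = OpenSSL::Cipher.new("AES-256-CBC").encrypt
data_key = cipher.random_key
iv = cipher.random_iv
ciphertext = cipher.update("MY TEXT") + cipher.final

# ...and only the data key is wrapped with the RSA public key
wrapped_key = rsa.public_encrypt(data_key)

# decrypt: unwrap the data key with the private key, then decrypt the data
aes = OpenSSL::Cipher.new("AES-256-CBC").decrypt
aes.key = rsa.private_decrypt(wrapped_key)
aes.iv = iv
plaintext = aes.update(ciphertext) + aes.final
```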
Configuring storage locations
By default, encryption materials are stored in the object metadata. If you prefer, you can store the encryption materials in a separate object in S3. This object will have the same key with ‘.instruction’ appended.
# new object, does not exist yet
obj = bucket.objects["my-text-object"]
# no instruction file present
bucket.objects['my-text-object.instruction'].exists?
#=> false
# store the encryption materials in the instruction file
# instead of obj#metadata
obj.write("MY TEXT",
:encryption_key => MY_KEY,
:encryption_materials_location => :instruction_file)
bucket.objects['my-text-object.instruction'].exists?
#=> true
If you store the encryption materials in an instruction file, you must tell #read this or it will fail to find your encryption materials.
# reading an encrypted file whose materials are stored in an
# instruction file, and not metadata
obj.read(:encryption_key => MY_KEY,
:encryption_materials_location => :instruction_file)
Configuring default behaviors
You can configure a default key so that objects are automatically encrypted and decrypted for you. You can do this globally or for a single S3 interface.
# all objects uploaded/downloaded with this s3 object will be
# encrypted/decrypted
s3 = AWS::S3.new(:s3_encryption_key => "MY_KEY")
# set the key to always encrypt/decrypt
AWS.config(:s3_encryption_key => "MY_KEY")
You can also configure the default storage location for the encryption materials.
AWS.config(:s3_encryption_materials_location => :instruction_file)
Instance Attribute Summary collapse
-
#bucket ⇒ Bucket
readonly
The bucket this object is in.
-
#key ⇒ String
readonly
The object's unique key.
Instance Method Summary collapse
-
#==(other) ⇒ Boolean
(also: #eql?)
Returns true if the other object belongs to the same bucket and has the same key.
-
#acl ⇒ AccessControlList
Returns the object’s access control list.
-
#acl=(acl) ⇒ nil
Sets the objects’s ACL (access control list).
-
#content_length ⇒ Integer
Size of the object in bytes.
-
#content_type ⇒ String
Returns the content type as reported by S3, defaults to an empty string when not provided during upload.
-
#copy_from(source, options = {}) ⇒ nil
Copies data from one S3 object to another.
-
#copy_to(target, options = {}) ⇒ S3Object
Copies data from the current object to another object in S3.
-
#delete(options = {}) ⇒ nil
Deletes the object from its S3 bucket.
-
#etag ⇒ String
Returns the object’s ETag.
-
#exists? ⇒ Boolean
Returns true if the object exists in S3.
-
#expiration_date ⇒ DateTime?
-
#expiration_rule_id ⇒ String?
-
#head(options = {}) ⇒ Object
Performs a HEAD request against this object and returns an object with useful information about the object.
-
#initialize(bucket, key, opts = {}) ⇒ S3Object
constructor
A new instance of S3Object.
-
#last_modified ⇒ Time
Returns the object’s last modified time.
-
#metadata(options = {}) ⇒ ObjectMetadata
Returns an instance of ObjectMetadata representing the metadata for this object.
-
#move_to(target, options = {}) ⇒ S3Object
(also: #rename_to)
Moves an object to a new key.
-
#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion
Performs a multipart upload.
-
#multipart_uploads ⇒ ObjectUploadCollection
Returns an object representing the collection of uploads that are in progress for this object.
-
#presigned_post(options = {}) ⇒ PresignedPost
Generates fields for a presigned POST to this object.
-
#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a public (not authenticated) URL for the object.
-
#read(options = {}, &read_block) ⇒ Object
Fetches the object data from S3.
-
#reduced_redundancy=(value) ⇒ true, false
Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).
-
#server_side_encryption ⇒ Symbol?
Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.
-
#server_side_encryption? ⇒ true, false
Returns true if the object was stored using server side encryption.
-
#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a presigned URL for an operation on this object.
-
#versions ⇒ ObjectVersionCollection
Returns a collection representing all the object versions for this object.
-
#write(data, options = {}) ⇒ S3Object, ObjectVersion
Uploads data to the object in S3.
Constructor Details
#initialize(bucket, key, opts = {}) ⇒ S3Object
Returns a new instance of S3Object.
# File 'lib/aws/s3/s3_object.rb', line 245

def initialize(bucket, key, opts = {})
  super
  @key = key
  @bucket = bucket
end
Instance Attribute Details
#bucket ⇒ Bucket (readonly)
Returns The bucket this object is in.
# File 'lib/aws/s3/s3_object.rb', line 255

def bucket
  @bucket
end
#key ⇒ String (readonly)
Returns the object's unique key.
# File 'lib/aws/s3/s3_object.rb', line 252

def key
  @key
end
Instance Method Details
#==(other) ⇒ Boolean Also known as: eql?
Returns true if the other object belongs to the same bucket and has the same key.
# File 'lib/aws/s3/s3_object.rb', line 264

def == other
  other.kind_of?(S3Object) and
    other.bucket == bucket and
    other.key == key
end
#acl ⇒ AccessControlList
Returns the object’s access control list. This will be an instance of AccessControlList, plus an additional change method:
object.acl.change do |acl|
# remove any grants to someone other than the bucket owner
owner_id = object.bucket.owner.id
acl.grants.reject! do |g|
g.grantee.canonical_user_id != owner_id
end
end
Note that changing the ACL is not an atomic operation; it fetches the current ACL, yields it to the block, and then sets it again. Therefore, it’s possible that you may overwrite a concurrent update to the ACL using this method.
# File 'lib/aws/s3/s3_object.rb', line 1050

def acl

  resp = client.get_object_acl(:bucket_name => bucket.name, :key => key)

  acl = AccessControlList.new(resp.data)
  acl.extend ACLProxy
  acl.object = self
  acl

end
#acl=(acl) ⇒ nil
Sets the objects’s ACL (access control list). You can provide an ACL in a number of different formats.
# File 'lib/aws/s3/s3_object.rb', line 1065

def acl=(acl)

  client_opts = {}
  client_opts[:bucket_name] = bucket.name
  client_opts[:key] = key

  client.put_object_acl(acl_options(acl).merge(client_opts))
  nil

end
#content_length ⇒ Integer
Returns Size of the object in bytes.
# File 'lib/aws/s3/s3_object.rb', line 317

def content_length
  head.content_length
end
#content_type ⇒ String
S3 does not compute the content type. It reports the content type as it was provided when the object was uploaded.
Returns the content type as reported by S3, defaults to an empty string when not provided during upload.
# File 'lib/aws/s3/s3_object.rb', line 325

def content_type
  head.content_type
end
#copy_from(source, options = {}) ⇒ nil
This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from the source object. If you don’t specify any of these options when copying, the object will have the default values as described below.
Copies data from one S3 object to another.
S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
# File 'lib/aws/s3/s3_object.rb', line 788

def copy_from source, options = {}

  copy_opts = { :bucket_name => bucket.name, :key => key }

  copy_opts[:copy_source] = case source
  when S3Object
    "#{source.bucket.name}/#{source.key}"
  when ObjectVersion
    copy_opts[:version_id] = source.version_id
    "#{source.object.bucket.name}/#{source.object.key}"
  else
    case
    when options[:bucket] then "#{options[:bucket].name}/#{source}"
    when options[:bucket_name] then "#{options[:bucket_name]}/#{source}"
    else "#{self.bucket.name}/#{source}"
    end
  end

  copy_opts[:metadata_directive] = 'COPY'

  # Saves client-side encryption headers and copies the instruction file
  copy_cse_materials(source, options) do |cse_materials|
    if options[:metadata]
      copy_opts[:metadata] = options[:metadata].merge(cse_materials)
      copy_opts[:metadata_directive] = 'REPLACE'
    end
  end

  if options[:content_disposition]
    copy_opts[:content_disposition] = options[:content_disposition]
    copy_opts[:metadata_directive] = "REPLACE"
  end

  if options[:content_type]
    copy_opts[:content_type] = options[:content_type]
    copy_opts[:metadata_directive] = "REPLACE"
  end

  if options[:cache_control]
    copy_opts[:cache_control] = options[:cache_control]
    copy_opts[:metadata_directive] = "REPLACE"
  end

  copy_opts[:acl] = options[:acl] if options[:acl]
  copy_opts[:version_id] = options[:version_id] if options[:version_id]
  copy_opts[:server_side_encryption] =
    options[:server_side_encryption] if options.key?(:server_side_encryption)

  add_sse_options(copy_opts)

  if options[:reduced_redundancy]
    copy_opts[:storage_class] = 'REDUCED_REDUNDANCY'
  else
    copy_opts[:storage_class] = 'STANDARD'
  end

  client.copy_object(copy_opts)

  nil

end
#copy_to(target, options = {}) ⇒ S3Object
This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from this object to the new object. If you don’t specify any of these options when copying, the new object will have the default values as described below.
Copies data from the current object to another object in S3.
S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
# File 'lib/aws/s3/s3_object.rb', line 908

def copy_to target, options = {}

  unless target.is_a?(S3Object)

    bucket = case
    when options[:bucket] then options[:bucket]
    when options[:bucket_name]
      Bucket.new(options[:bucket_name], :config => config)
    else self.bucket
    end

    target = S3Object.new(bucket, target)
  end

  copy_opts = options.dup
  copy_opts.delete(:bucket)
  copy_opts.delete(:bucket_name)

  target.copy_from(self, copy_opts)
  target

end
#delete(options = {}) ⇒ nil
Deletes the object from its S3 bucket.
# File 'lib/aws/s3/s3_object.rb', line 369

def delete options = {}

  client.delete_object(options.merge(
    :bucket_name => bucket.name,
    :key => key))

  if options[:delete_instruction_file]
    client.delete_object(
      :bucket_name => bucket.name,
      :key => key + '.instruction')
  end

  nil

end
#etag ⇒ String
Returns the object’s ETag.
Generally the ETag is the MD5 of the object. If the object was uploaded via multipart upload, the ETag is instead derived from the MD5s of the individual parts.
# File 'lib/aws/s3/s3_object.rb', line 305

def etag
  head.etag
end
#exists? ⇒ Boolean
Returns true if the object exists in S3.
# File 'lib/aws/s3/s3_object.rb', line 270

def exists?
  head
rescue Errors::NoSuchKey => e
  false
else
  true
end
#expiration_date ⇒ DateTime?
# File 'lib/aws/s3/s3_object.rb', line 330

def expiration_date
  head.expiration_date
end
#expiration_rule_id ⇒ String?
# File 'lib/aws/s3/s3_object.rb', line 335

def expiration_rule_id
  head.expiration_rule_id
end
#head(options = {}) ⇒ Object
Performs a HEAD request against this object and returns an object with useful information about the object, including:
- metadata (hash of user-supplied key-value pairs)
- content_length (integer, number of bytes)
- content_type (as sent to S3 when uploading the object)
- etag (typically the object’s MD5)
- server_side_encryption (the algorithm used to encrypt the object on the server side, e.g. :aes256)
# File 'lib/aws/s3/s3_object.rb', line 293

def head options = {}
  client.head_object(options.merge(
    :bucket_name => bucket.name, :key => key))
end
#last_modified ⇒ Time
Returns the object’s last modified time.
# File 'lib/aws/s3/s3_object.rb', line 312

def last_modified
  head.last_modified
end
#metadata(options = {}) ⇒ ObjectMetadata
Returns an instance of ObjectMetadata representing the metadata for this object.
# File 'lib/aws/s3/s3_object.rb', line 388

def metadata options = {}
  options[:config] = config
  ObjectMetadata.new(self, options)
end
#move_to(target, options = {}) ⇒ S3Object Also known as: rename_to
Moves an object to a new key.
This works by copying the object to a new key and then deleting the old object. This function returns the new object once this is done.
bucket = s3.buckets['old-bucket']
old_obj = bucket.objects['old-key']
# renaming an object returns a new object
new_obj = old_obj.move_to('new-key')
old_obj.key #=> 'old-key'
old_obj.exists? #=> false
new_obj.key #=> 'new-key'
new_obj.exists? #=> true
If you need to move an object to a different bucket, pass :bucket or :bucket_name.
obj = s3.buckets['old-bucket'].objects['old-key']
obj.move_to('new-key', :bucket_name => 'new_bucket')
If the copy succeeds but the delete fails, an error will be raised.
# File 'lib/aws/s3/s3_object.rb', line 716

def move_to target, options = {}
  copy = copy_to(target, options)
  delete
  copy
end
#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion
Performs a multipart upload. Use this if you have specific needs for how the upload is split into parts, or if you want to have more control over how the failure of an individual part upload is handled. Otherwise, #write is much simpler to use.
# File 'lib/aws/s3/s3_object.rb', line 651

def multipart_upload(options = {})

  options = options.dup
  add_sse_options(options)

  upload = multipart_uploads.create(options)

  if block_given?
    begin
      yield(upload)
      upload.close
    rescue => e
      upload.abort
      raise e
    end
  else
    upload
  end

end
#multipart_uploads ⇒ ObjectUploadCollection
Returns an object representing the collection of uploads that are in progress for this object.
# File 'lib/aws/s3/s3_object.rb', line 677

def multipart_uploads
  ObjectUploadCollection.new(self)
end
#presigned_post(options = {}) ⇒ PresignedPost
Generates fields for a presigned POST to this object. This method adds a constraint that the key must match the key of this object. All options are sent to the PresignedPost constructor.
# File 'lib/aws/s3/s3_object.rb', line 1196

def presigned_post(options = {})
  PresignedPost.new(bucket, options.merge(:key => key))
end
#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a public (not authenticated) URL for the object.
# File 'lib/aws/s3/s3_object.rb', line 1184

def public_url(options = {})
  options[:secure] = config.use_ssl? unless options.key?(:secure)
  build_uri(request_for_signing(options), options)
end
#read(options = {}, &read_block) ⇒ Object
The :range option cannot be used with client-side encryption.
All decryption reads incur at least an extra HEAD operation.
Fetches the object data from S3. If you pass a block to this method, the data will be yielded to the block in chunks as it is read off the HTTP response.
Read an object from S3 in chunks
When downloading large objects it is recommended to pass a block to #read. Data will be yielded to the block as it is read off the HTTP response.
# read an object from S3 to a file
File.open('output.txt', 'w') do |file|
bucket.objects['key'].read do |chunk|
file.write(chunk)
end
end
Reading an object without a block
When you omit the block argument to #read, the entire HTTP response is read and the object data is loaded into memory.
bucket.objects['key'].read
#=> 'object-contents-here'
# File 'lib/aws/s3/s3_object.rb', line 1005

def read options = {}, &read_block

  options[:bucket_name] = bucket.name
  options[:key] = key

  if should_decrypt?(options)
    get_encrypted_object(options, &read_block)
  else
    get_object(options, &read_block)
  end

end
#reduced_redundancy=(value) ⇒ true, false
Changing the storage class of an object incurs a COPY operation.
Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).
# File 'lib/aws/s3/s3_object.rb', line 1212

def reduced_redundancy= value
  copy_from(key, :reduced_redundancy => value)
  value
end
#server_side_encryption ⇒ Symbol?
Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.
# File 'lib/aws/s3/s3_object.rb', line 342

def server_side_encryption
  head.server_side_encryption
end
#server_side_encryption? ⇒ true, false
Returns true if the object was stored using server side encryption.
# File 'lib/aws/s3/s3_object.rb', line 348

def server_side_encryption?
  !server_side_encryption.nil?
end
#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a presigned URL for an operation on this object. This URL can be used by a regular HTTP client to perform the desired operation without credentials and without changing the permissions of the object.
# File 'lib/aws/s3/s3_object.rb', line 1154

def url_for(method, options = {})

  options[:secure] = config.use_ssl? unless options.key?(:secure)

  req = request_for_signing(options)

  method = http_method(method)
  expires = expiration_timestamp(options[:expires])

  req.add_param("AWSAccessKeyId",
    config.credential_provider.access_key_id)
  req.add_param("versionId", options[:version_id]) if options[:version_id]
  req.add_param("Signature", signature(method, expires, req))
  req.add_param("Expires", expires)
  req.add_param("x-amz-security-token",
    config.credential_provider.session_token) if
    config.credential_provider.session_token

  secure = options.fetch(:secure, config.use_ssl?)
  build_uri(req, options)

end
#versions ⇒ ObjectVersionCollection
Returns a collection representing all the object versions for this object.
bucket.versioning_enabled? # => true
version = bucket.objects["mykey"].versions.latest
# File 'lib/aws/s3/s3_object.rb', line 400

def versions
  ObjectVersionCollection.new(self)
end
#write(data, options = {}) ⇒ S3Object, ObjectVersion
Uploads data to the object in S3.
obj = s3.buckets['bucket-name'].objects['key']
# strings
obj.write("HELLO")
# files (by path)
obj.write(Pathname.new('path/to/file.txt'))
# file objects
obj.write(File.open('path/to/file.txt', 'r'))
# IO objects (must respond to #read and #eof?)
obj.write(io)
Multipart Uploads vs Single Uploads
This method will intelligently choose between uploading the file in a single request and using #multipart_upload. You can control this behavior by configuring the thresholds, and you can disable the multipart feature as well.
# always send the file in a single request
obj.write(file, :single_request => true)
# upload the file in parts if the total file size exceeds 100MB
obj.write(file, :multipart_threshold => 100 * 1024 * 1024)
# File 'lib/aws/s3/s3_object.rb', line 543

def write *args, &block

  options = compute_write_options(*args, &block)

  add_storage_class_option(options)
  add_sse_options(options)
  add_cse_options(options)

  if use_multipart?(options)
    write_with_multipart(options)
  else
    write_with_put_object(options)
  end

end