Class: AWS::S3::S3Object
Inherits: Object
Defined in: lib/aws/s3/s3_object.rb
Overview
Represents an object in S3. Objects live in a bucket and have unique keys.
Getting Objects
You can get an object by its key.
s3 = AWS::S3.new
obj = s3.buckets['my-bucket'].objects['key'] # no request made
You can also get objects by enumerating the objects in a bucket.
bucket.objects.each do |obj|
puts obj.key
end
See ObjectCollection for more information on finding objects.
Creating Objects
You create an object by writing to it. The following two expressions are equivalent.
obj = bucket.objects.create('key', 'data')
obj = bucket.objects['key'].write('data')
Writing Objects
To upload data to S3, you simply need to call #write on an object.
obj.write('Hello World!')
obj.read
#=> 'Hello World!'
Uploading Files
You can upload a file to S3 in a variety of ways. Given a path to a file (as a string) you can do any of the following:
# specify the data as a path to a file
obj.write(Pathname.new(path_to_file))
# also works this way
obj.write(:file => path_to_file)
# Also accepts an open file object
file = File.open(path_to_file, 'r')
obj.write(file)
All three examples above produce the same result. The file will be streamed to S3 in chunks. It will not be loaded entirely into memory.
Streaming Uploads
When you call #write with any IO-like object (it must respond to #read and #eof?), it will be streamed to S3 in chunks.
While it is possible to determine the size of many IO objects, you may have to specify the :content_length of your IO object.
If the exact size cannot be known, you may provide an :estimated_content_length. Depending on the size (actual or estimated) of your data, it will be uploaded in a single request or in multiple requests via #multipart_upload.
You may also stream uploads to S3 using a block:
obj.write do |buffer, bytes|
# writing fewer than the requested number of bytes to the buffer
# will cause write to stop yielding to the block
end
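Any data source that responds to #read and #eof? can be streamed this way. As a hedged sketch of that minimal interface (ChunkSource is an illustrative name, not part of the SDK), here is the kind of chunked reading a streaming upload performs:

```ruby
require 'stringio'

# Minimal IO-like object: a streamed upload only requires #read and #eof?.
# ChunkSource is illustrative, not part of the SDK.
class ChunkSource
  def initialize(data)
    @io = StringIO.new(data)
  end

  # returns up to `bytes` bytes, like IO#read
  def read(bytes = nil)
    @io.read(bytes)
  end

  def eof?
    @io.eof?
  end
end

source = ChunkSource.new("a" * 10_000)
chunks = []
chunks << source.read(4096) until source.eof?
chunks.map(&:bytesize)  #=> [4096, 4096, 1808]
```

An object like this could then be handed to #write, e.g. obj.write(source, :content_length => 10_000), since the exact size of a custom IO source may not be discoverable.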
Reading Objects
You can read an object directly using #read. Be warned, this will load the entire object into memory and is not recommended for large objects.
obj.write('abc')
puts obj.read
#=> abc
Streaming Downloads
If you want to stream an object from S3, you can pass a block to #read.
File.open('output', 'w') do |file|
large_object.read do |chunk|
file.write(chunk)
end
end
Encryption
Amazon S3 can encrypt objects for you on the server side. You can also use client-side encryption.
Server Side Encryption
Amazon S3 provides server side encryption for an additional cost. You can specify to use server side encryption when writing an object.
obj.write('data', :server_side_encryption => :aes256)
You can also make this the default behavior.
AWS.config(:s3_server_side_encryption => :aes256)
s3 = AWS::S3.new
s3.buckets['name'].objects['key'].write('abc') # will be encrypted
Client Side Encryption
Client side encryption uses envelope encryption, so that your keys are never sent to S3. You can use a symmetric key or an asymmetric key pair.
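As a hedged sketch of what envelope encryption means in practice (the SDK's internal wire format differs; this only illustrates the idea): the data is encrypted with a one-off data key, and only a wrapped copy of that key travels with the object, so the master key never leaves the client.

```ruby
require 'openssl'

# Illustrative envelope encryption round trip; not the SDK's actual format.
master = OpenSSL::Cipher.new("AES-256-ECB").random_key

# encrypt the data with a freshly generated data key
cipher = OpenSSL::Cipher.new("AES-256-CBC").encrypt
data_key = cipher.random_key
iv = cipher.random_iv
ciphertext = cipher.update("MY TEXT") + cipher.final

# wrap (encrypt) the data key with the master key before storing it
wrap = OpenSSL::Cipher.new("AES-256-ECB").encrypt
wrap.key = master
wrapped_key = wrap.update(data_key) + wrap.final

# decrypt: unwrap the data key with the master key, then decrypt the data
unwrap = OpenSSL::Cipher.new("AES-256-ECB").decrypt
unwrap.key = master
recovered_key = unwrap.update(wrapped_key) + unwrap.final

plain = OpenSSL::Cipher.new("AES-256-CBC").decrypt
plain.key = recovered_key
plain.iv = iv
recovered = plain.update(ciphertext) + plain.final  #=> "MY TEXT"
```

Only `wrapped_key` (and the IV) would be stored alongside the ciphertext; anyone without the master key sees opaque bytes.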
Symmetric Key Encryption
An AES key is used for symmetric encryption. The key can be 128, 192, or 256 bits. Start by generating a new key or reading a previously generated key.
# generate a new random key
my_key = OpenSSL::Cipher.new("AES-256-ECB").random_key
# read an existing key from disk
my_key = File.read("my_key.der")
Now you can encrypt locally and upload the encrypted data to S3. To do this, you need to provide your key.
obj = bucket.objects["my-text-object"]
# encrypt then upload data
obj.write("MY TEXT", :encryption_key => my_key)
# try to read the object without decrypting, oops
obj.read
#=> '.....'
Lastly, you can download and decrypt by providing the same key.
obj.read(:encryption_key => my_key)
#=> "MY TEXT"
Asymmetric Key Pair
An RSA key pair is used for asymmetric encryption. The public key is used for encryption and the private key is used for decryption. Start by generating a key.
my_key = OpenSSL::PKey::RSA.new(1024)
Provide your key to #write and the data will be encrypted before it is uploaded. Pass the same key to #read to decrypt the data when you download it.
obj = bucket.objects["my-text-object"]
# encrypt and upload the data
obj.write("MY TEXT", :encryption_key => my_key)
# download and decrypt the data
obj.read(:encryption_key => my_key)
#=> "MY TEXT"
Configuring storage locations
By default, encryption materials are stored in the object metadata. If you prefer, you can store the encryption materials in a separate object in S3. This object will be stored at the same key with '.instruction' appended.
# new object, does not exist yet
obj = bucket.objects["my-text-object"]
# no instruction file present
bucket.objects['my-text-object.instruction'].exists?
#=> false
# store the encryption materials in the instruction file
# instead of obj#metadata
obj.write("MY TEXT",
:encryption_key => MY_KEY,
:encryption_materials_location => :instruction_file)
bucket.objects['my-text-object.instruction'].exists?
#=> true
If you store the encryption materials in an instruction file, you must tell #read this or it will fail to find your encryption materials.
# reading an encrypted file whose materials are stored in an
# instruction file, and not metadata
obj.read(:encryption_key => MY_KEY,
:encryption_materials_location => :instruction_file)
Configuring default behaviors
You can configure a default key so that objects are automatically encrypted and decrypted for you. You can do this globally or for a single S3 interface.
# all objects uploaded/downloaded with this s3 object will be
# encrypted/decrypted
s3 = AWS::S3.new(:s3_encryption_key => "MY_KEY")
# set the key to always encrypt/decrypt
AWS.config(:s3_encryption_key => "MY_KEY")
You can also configure the default storage location for the encryption materials.
AWS.config(:s3_encryption_materials_location => :instruction_file)
Instance Attribute Summary
-
#bucket ⇒ Bucket
readonly
The bucket this object is in.
-
#key ⇒ String
readonly
The object's unique key.
Instance Method Summary
-
#==(other) ⇒ Boolean
(also: #eql?)
Returns true if the other object belongs to the same bucket and has the same key.
-
#acl ⇒ AccessControlList
Returns the object's access control list.
-
#acl=(acl) ⇒ nil
Sets the object's ACL (access control list).
-
#content_length ⇒ Integer
Size of the object in bytes.
-
#content_type ⇒ String
Returns the content type as reported by S3, defaults to an empty string when not provided during upload.
-
#copy_from(source, options = {}) ⇒ nil
Copies data from one S3 object to another.
-
#copy_to(target, options = {}) ⇒ S3Object
Copies data from the current object to another object in S3.
-
#delete(options = {}) ⇒ nil
Deletes the object from its S3 bucket.
-
#etag ⇒ String
Returns the object's ETag.
-
#exists? ⇒ Boolean
Returns true if the object exists in S3.
-
#expiration_date ⇒ DateTime?
-
#expiration_rule_id ⇒ String?
-
#head(options = {}) ⇒ Object
Performs a HEAD request against this object and returns an object with useful information about the object.
-
#initialize(bucket, key, opts = {}) ⇒ S3Object
constructor
A new instance of S3Object.
-
#last_modified ⇒ Time
Returns the object's last modified time.
-
#metadata(options = {}) ⇒ ObjectMetadata
Returns an instance of ObjectMetadata representing the metadata for this object.
-
#move_to(target, options = {}) ⇒ S3Object
(also: #rename_to)
Moves an object to a new key.
-
#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion
Performs a multipart upload.
-
#multipart_uploads ⇒ ObjectUploadCollection
Returns an object representing the collection of uploads that are in progress for this object.
-
#presigned_post(options = {}) ⇒ PresignedPost
Generates fields for a presigned POST to this object.
-
#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a public (not authenticated) URL for the object.
-
#read(options = {}, &read_block) ⇒ Object
Fetches the object data from S3.
-
#reduced_redundancy=(value) ⇒ true, false
Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).
-
#restore(options = {}) ⇒ Boolean
Restores a temporary copy of an archived object from the Glacier storage tier.
-
#restore_expiration_date ⇒ DateTime?
-
#restore_in_progress? ⇒ Boolean
Whether a #restore operation is currently in progress for the object.
-
#restored_object? ⇒ Boolean
Whether the object is a temporary copy of an archived object in the Glacier storage class.
-
#server_side_encryption ⇒ Symbol?
Returns the algorithm used to encrypt the object on the server side, or nil if SSE was not used when storing the object.
-
#server_side_encryption? ⇒ true, false
Returns true if the object was stored using server side encryption.
-
#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a presigned URL for an operation on this object.
-
#versions ⇒ ObjectVersionCollection
Returns a collection representing all the object versions for this object.
-
#write(data, options = {}) ⇒ S3Object, ObjectVersion
Uploads data to the object in S3.
Constructor Details
#initialize(bucket, key, opts = {}) ⇒ S3Object
Returns a new instance of S3Object.
# File 'lib/aws/s3/s3_object.rb', line 245
def initialize(bucket, key, opts = {})
  super
  @key = key
  @bucket = bucket
end
Instance Attribute Details
#bucket ⇒ Bucket (readonly)
Returns The bucket this object is in.
# File 'lib/aws/s3/s3_object.rb', line 255
def bucket
  @bucket
end
#key ⇒ String (readonly)
Returns the object's unique key.
# File 'lib/aws/s3/s3_object.rb', line 252
def key
  @key
end
Instance Method Details
#==(other) ⇒ Boolean Also known as: eql?
Returns true if the other object belongs to the same bucket and has the same key.
# File 'lib/aws/s3/s3_object.rb', line 264
def == other
  other.kind_of?(S3Object) and
    other.bucket == bucket and
    other.key == key
end
#acl ⇒ AccessControlList
Returns the object's access control list. This will be an
instance of AccessControlList, plus an additional change
method:
object.acl.change do |acl|
# remove any grants to someone other than the bucket owner
owner_id = object.bucket.owner.id
acl.grants.reject! do |g|
g.grantee.canonical_user_id != owner_id
end
end
Note that changing the ACL is not an atomic operation; it fetches the current ACL, yields it to the block, and then sets it again. Therefore, it's possible that you may overwrite a concurrent update to the ACL using this method.
# File 'lib/aws/s3/s3_object.rb', line 1102
def acl
  resp = client.get_object_acl(:bucket_name => bucket.name, :key => key)
  acl = AccessControlList.new(resp.data)
  acl.extend ACLProxy
  acl.object = self
  acl
end
#acl=(acl) ⇒ nil
Sets the objects's ACL (access control list). You can provide an ACL in a number of different formats.
# File 'lib/aws/s3/s3_object.rb', line 1117
def acl=(acl)
  client_opts = {}
  client_opts[:bucket_name] = bucket.name
  client_opts[:key] = key
  client.put_object_acl(acl_options(acl).merge(client_opts))
  nil
end
#content_length ⇒ Integer
Returns Size of the object in bytes.
# File 'lib/aws/s3/s3_object.rb', line 317
def content_length
  head[:content_length]
end
#content_type ⇒ String
S3 does not compute content-type. It reports the content-type as it was reported during the upload.
Returns the content type as reported by S3, defaults to an empty string when not provided during upload.
# File 'lib/aws/s3/s3_object.rb', line 325
def content_type
  head[:content_type]
end
#copy_from(source, options = {}) ⇒ nil
This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from the source object. If you don't specify any of these options when copying, the object will have the default values as described below.
Copies data from one S3 object to another.
S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
# File 'lib/aws/s3/s3_object.rb', line 849
def copy_from source, options = {}

  options = options.dup

  options[:copy_source] =
    case source
    when S3Object
      "#{source.bucket.name}/#{source.key}"
    when ObjectVersion
      options[:version_id] = source.version_id
      "#{source.object.bucket.name}/#{source.object.key}"
    else
      if options[:bucket]
        "#{options.delete(:bucket).name}/#{source}"
      elsif options[:bucket_name]
        "#{options.delete(:bucket_name)}/#{source}"
      else
        "#{self.bucket.name}/#{source}"
      end
    end

  if [:metadata, :content_disposition, :content_type, :cache_control,
  ].any? {|opt| options.key?(opt) }
  then
    options[:metadata_directive] = 'REPLACE'
  else
    options[:metadata_directive] ||= 'COPY'
  end

  # copies client-side encryption materials (from the metadata or
  # instruction file)
  if options.delete(:client_side_encrypted)
    copy_cse_materials(source, options)
  end

  add_sse_options(options)

  options[:storage_class] = options.delete(:reduced_redundancy) ?
    'REDUCED_REDUNDANCY' : 'STANDARD'

  options[:bucket_name] = bucket.name
  options[:key] = key

  client.copy_object(options)

  nil

end
#copy_to(target, options = {}) ⇒ S3Object
This operation does not copy the ACL, storage class (standard vs. reduced redundancy) or server side encryption setting from this object to the new object. If you don't specify any of these options when copying, the new object will have the default values as described below.
Copies data from the current object to another object in S3.
S3 handles the copy so the client does not need to fetch the data and upload it again. You can also change the storage class and metadata of the object when copying.
# File 'lib/aws/s3/s3_object.rb', line 959
def copy_to target, options = {}

  unless target.is_a?(S3Object)

    bucket = case
    when options[:bucket] then options[:bucket]
    when options[:bucket_name]
      Bucket.new(options[:bucket_name], :config => config)
    else self.bucket
    end

    target = S3Object.new(bucket, target)
  end

  copy_opts = options.dup
  copy_opts.delete(:bucket)
  copy_opts.delete(:bucket_name)

  target.copy_from(self, copy_opts)
  target

end
#delete(options = {}) ⇒ nil
Deletes the object from its S3 bucket.
# File 'lib/aws/s3/s3_object.rb', line 394
def delete options = {}
  client.delete_object(options.merge(
    :bucket_name => bucket.name,
    :key => key))

  if options[:delete_instruction_file]
    client.delete_object(
      :bucket_name => bucket.name,
      :key => key + '.instruction')
  end

  nil
end
#etag ⇒ String
Returns the object's ETag.
Generally the ETag is the MD5 of the object. If the object was uploaded using multipart upload, the ETag is instead derived from the MD5s of all the uploaded parts.
# File 'lib/aws/s3/s3_object.rb', line 305
def etag
  head[:etag]
end
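As a hedged illustration of the multipart case (this mirrors commonly observed S3 behavior, not an API of this class): the multipart ETag is the MD5 of the concatenated binary part MD5s, suffixed with the part count.

```ruby
require 'digest'

# Illustrative only: compute a multipart-style ETag for two parts.
parts = ["part one data", "part two data"]
part_md5s = parts.map { |part| Digest::MD5.digest(part) }
etag = "#{Digest::MD5.hexdigest(part_md5s.join)}-#{parts.size}"
# etag is a 32-char hex MD5 followed by "-<part count>", e.g. "....-2"
```

This is why a multipart ETag cannot be compared against the plain MD5 of the whole object.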
#exists? ⇒ Boolean
Returns true if the object exists in S3.
# File 'lib/aws/s3/s3_object.rb', line 270
def exists?
  head
rescue Errors::NoSuchKey => e
  false
else
  true
end
#expiration_date ⇒ DateTime?
# File 'lib/aws/s3/s3_object.rb', line 330
def expiration_date
  head[:expiration_date]
end
#expiration_rule_id ⇒ String?
# File 'lib/aws/s3/s3_object.rb', line 335
def expiration_rule_id
  head[:expiration_rule_id]
end
#head(options = {}) ⇒ Object
Performs a HEAD request against this object and returns an object with useful information about the object, including:
- metadata (hash of user-supplied key-value pairs)
- content_length (integer, number of bytes)
- content_type (as sent to S3 when uploading the object)
- etag (typically the object's MD5)
- server_side_encryption (the algorithm used to encrypt the object on the server side, e.g. :aes256)
# File 'lib/aws/s3/s3_object.rb', line 293
def head options = {}
  client.head_object(options.merge(
    :bucket_name => bucket.name, :key => key))
end
#last_modified ⇒ Time
Returns the object's last modified time.
# File 'lib/aws/s3/s3_object.rb', line 312
def last_modified
  head[:last_modified]
end
#metadata(options = {}) ⇒ ObjectMetadata
Returns an instance of ObjectMetadata representing the metadata for this object.
# File 'lib/aws/s3/s3_object.rb', line 434
def metadata options = {}
  options[:config] = config
  ObjectMetadata.new(self, options)
end
#move_to(target, options = {}) ⇒ S3Object Also known as: rename_to
Moves an object to a new key.
This works by copying the object to a new key and then deleting the old object. This function returns the new object once this is done.
bucket = s3.buckets['old-bucket']
old_obj = bucket.objects['old-key']
# renaming an object returns a new object
new_obj = old_obj.move_to('new-key')
old_obj.key #=> 'old-key'
old_obj.exists? #=> false
new_obj.key #=> 'new-key'
new_obj.exists? #=> true
If you need to move an object to a different bucket, pass :bucket or :bucket_name.
obj = s3.buckets['old-bucket'].objects['old-key']
obj.move_to('new-key', :bucket_name => 'new_bucket')
If the copy succeeds but the subsequent delete fails, an error will be raised.
# File 'lib/aws/s3/s3_object.rb', line 774
def move_to target, options = {}
  copy = copy_to(target, options)
  delete
  copy
end
#multipart_upload(options = {}) {|upload| ... } ⇒ S3Object, ObjectVersion
Performs a multipart upload. Use this if you have specific needs for how the upload is split into parts, or if you want to have more control over how the failure of an individual part upload is handled. Otherwise, #write is much simpler to use.
# File 'lib/aws/s3/s3_object.rb', line 709
def multipart_upload(options = {})

  options = options.dup
  add_sse_options(options)

  upload = multipart_uploads.create(options)

  if block_given?
    begin
      yield(upload)
      upload.close
    rescue => e
      upload.abort
      raise e
    end
  else
    upload
  end

end
#multipart_uploads ⇒ ObjectUploadCollection
Returns an object representing the collection of uploads that are in progress for this object.
# File 'lib/aws/s3/s3_object.rb', line 735
def multipart_uploads
  ObjectUploadCollection.new(self)
end
#presigned_post(options = {}) ⇒ PresignedPost
Generates fields for a presigned POST to this object. This method adds a constraint that the key must match the key of this object. All options are sent to the PresignedPost constructor.
# File 'lib/aws/s3/s3_object.rb', line 1252
def presigned_post(options = {})
  PresignedPost.new(bucket, options.merge(:key => key))
end
#public_url(options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a public (not authenticated) URL for the object.
# File 'lib/aws/s3/s3_object.rb', line 1240
def public_url(options = {})
  options[:secure] = config.use_ssl? unless options.key?(:secure)
  build_uri(request_for_signing(options), options)
end
#read(options = {}, &read_block) ⇒ Object
The :range option cannot be used with client-side encryption. All decryption reads incur at least one extra HEAD operation.
Fetches the object data from S3. If you pass a block to this method, the data will be yielded to the block in chunks as it is read off the HTTP response.
Read an object from S3 in chunks
When downloading large objects it is recommended to pass a block to #read. Data will be yielded to the block as it is read off the HTTP response.
# read an object from S3 to a file
File.open('output.txt', 'w') do |file|
bucket.objects['key'].read do |chunk|
file.write(chunk)
end
end
Reading an object without a block
When you omit the block argument to #read, the entire HTTP response is read and the object data is loaded into memory.
bucket.objects['key'].read
#=> 'object-contents-here'
# File 'lib/aws/s3/s3_object.rb', line 1056
def read options = {}, &read_block

  options[:bucket_name] = bucket.name
  options[:key] = key

  if should_decrypt?(options)
    get_encrypted_object(options, &read_block)
  else
    resp_data = get_object(options, &read_block)
    block_given? ? resp_data : resp_data[:data]
  end

end
#reduced_redundancy=(value) ⇒ true, false
Changing the storage class of an object incurs a COPY operation.
Changes the storage class of the object to enable or disable Reduced Redundancy Storage (RRS).
# File 'lib/aws/s3/s3_object.rb', line 1268
def reduced_redundancy= value
  copy_from(key, :reduced_redundancy => value)
  value
end
#restore(options = {}) ⇒ Boolean
Restores a temporary copy of an archived object from the Glacier storage tier. After the specified number of days, Amazon S3 deletes the temporary copy. Note that the object remains archived; Amazon S3 deletes only the restored copy.
Restoring an object does not occur immediately. Use #restore_in_progress? to check the status of the operation.
# File 'lib/aws/s3/s3_object.rb', line 420
def restore options = {}
  options[:days] ||= 1

  client.restore_object(
    :bucket_name => bucket.name,
    :key => key,
    :days => options[:days])

  true
end
#restore_expiration_date ⇒ DateTime?
# File 'lib/aws/s3/s3_object.rb', line 366
def restore_expiration_date
  head[:restore_expiration_date]
end
#restore_in_progress? ⇒ Boolean
Returns whether a #restore operation is currently in progress for the object.
# File 'lib/aws/s3/s3_object.rb', line 356
def restore_in_progress?
  head[:restore_in_progress]
end
#restored_object? ⇒ Boolean
Returns whether the object is a temporary copy of an archived object in the Glacier storage class.
# File 'lib/aws/s3/s3_object.rb', line 373
def restored_object?
  !!head[:restore_expiration_date]
end
#server_side_encryption ⇒ Symbol?
Returns the algorithm used to encrypt
the object on the server side, or nil
if SSE was not used
when storing the object.
# File 'lib/aws/s3/s3_object.rb', line 342
def server_side_encryption
  head[:server_side_encryption]
end
#server_side_encryption? ⇒ true, false
Returns true if the object was stored using server side encryption.
# File 'lib/aws/s3/s3_object.rb', line 348
def server_side_encryption?
  !server_side_encryption.nil?
end
#url_for(method, options = {}) ⇒ URI::HTTP, URI::HTTPS
Generates a presigned URL for an operation on this object. This URL can be used by a regular HTTP client to perform the desired operation without credentials and without changing the permissions of the object.
# File 'lib/aws/s3/s3_object.rb', line 1210
def url_for(method, options = {})

  options[:secure] = config.use_ssl? unless options.key?(:secure)

  req = request_for_signing(options)

  method = http_method(method)
  expires = expiration_timestamp(options[:expires])

  req.add_param("AWSAccessKeyId",
    config.credential_provider.access_key_id)
  req.add_param("versionId", options[:version_id]) if options[:version_id]
  req.add_param("Signature", signature(method, expires, req))
  req.add_param("Expires", expires)
  req.add_param("x-amz-security-token",
    config.credential_provider.session_token) if
    config.credential_provider.session_token

  secure = options.fetch(:secure, config.use_ssl?)

  build_uri(req, options)

end
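The presigned URL works by signing a string that includes the HTTP method, the expiry timestamp, and the resource path. A hedged sketch of this SigV2-era query-string signing follows; the credentials and exact string-to-sign layout here are illustrative, as the SDK assembles this internally:

```ruby
require 'openssl'
require 'base64'
require 'cgi'

# All values below are fake, for illustration only.
secret  = "EXAMPLE_SECRET_KEY"
expires = 1_893_456_000  # unix timestamp after which the URL stops working

# illustrative string-to-sign: method, (empty) headers, expiry, resource
string_to_sign = "GET\n\n\n#{expires}\n/my-bucket/key"

# HMAC-SHA1 over the string-to-sign, base64-encoded
signature = Base64.strict_encode64(
  OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA1"), secret, string_to_sign))

url = "https://my-bucket.s3.amazonaws.com/key" \
      "?AWSAccessKeyId=AKIDEXAMPLE" \
      "&Expires=#{expires}" \
      "&Signature=#{CGI.escape(signature)}"
```

Because the signature covers the method and expiry, the URL grants exactly one operation for a limited time without sharing credentials.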
#versions ⇒ ObjectVersionCollection
Returns a collection representing all the object versions for this object.
# File 'lib/aws/s3/s3_object.rb', line 448
def versions
  ObjectVersionCollection.new(self)
end
#write(data, options = {}) ⇒ S3Object, ObjectVersion
Uploads data to the object in S3.
obj = s3.buckets['bucket-name'].objects['key']
# strings
obj.write("HELLO")
# files (by path)
obj.write(Pathname.new('path/to/file.txt'))
# file objects
obj.write(File.open('path/to/file.txt', 'r'))
# IO objects (must respond to #read and #eof?)
obj.write(io)
Multipart Uploads vs Single Uploads
This method will intelligently choose between uploading the file in a single request and using #multipart_upload. You can control this behavior by configuring the thresholds, and you can disable the multipart feature as well.
# always send the file in a single request
obj.write(file, :single_request => true)
# upload the file in parts if the total file size exceeds 100MB
obj.write(file, :multipart_threshold => 100 * 1024 * 1024)
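The decision #write makes can be sketched as follows; the method name and the 16 MB default threshold here are assumptions for illustration, not the SDK's actual internals:

```ruby
# Illustrative sketch of the single-request vs multipart decision.
# The default threshold is an assumed value, not the SDK's.
DEFAULT_MULTIPART_THRESHOLD = 16 * 1024 * 1024

def multipart?(content_length, options = {})
  return false if options[:single_request]
  threshold = options[:multipart_threshold] || DEFAULT_MULTIPART_THRESHOLD
  content_length > threshold
end

multipart?(100 * 1024 * 1024)                           #=> true
multipart?(1024, :multipart_threshold => 512)           #=> true
multipart?(100 * 1024 * 1024, :single_request => true)  #=> false
```

The takeaway: :single_request always wins, otherwise the upload size is compared against the (configurable) threshold.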
# File 'lib/aws/s3/s3_object.rb', line 597
def write *args, &block

  options = compute_write_options(*args, &block)

  add_storage_class_option(options)
  add_sse_options(options)
  add_cse_options(options)

  if use_multipart?(options)
    write_with_multipart(options)
  else
    write_with_put_object(options)
  end

end