Class: AWS::S3::S3Object

Inherits: Base
Includes: SelectiveAttributeProxy
Defined in: lib/aws/s3/object.rb,
            lib/aws/s3/response.rb

Overview

S3Objects represent the data you store on S3. They have a key (their name) and a value (their data). All objects belong to a bucket.

You can store an object on S3 by specifying a key, its data and the name of the bucket you want to put it in:

S3Object.store(
  'headshot.jpg', 
  File.open('headshot.jpg'), 
  'photos', 
  :content_type => 'image/jpeg'
)

You can read more about storing files on S3 in the documentation for S3Object.store.

If you just want to fetch an object you’ve stored on S3, specify its name and its bucket:

picture = S3Object.find 'headshot.jpg', 'photos'

N.B. The actual data for the file is not downloaded in either case: not when the object appears in a bucket listing and not when it is fetched directly. You get the data for the file like this:

picture.value

You can fetch just the object’s data directly:

S3Object.value 'headshot.jpg', 'photos'

Or stream it by passing a block to stream:

File.open('song.mp3', 'w') do |file|
  S3Object.stream('song.mp3', 'jukebox') do |chunk|
    file.write chunk
  end
end

The data of the file, once downloaded, is cached, so subsequent calls to value won’t redownload the file unless you tell the object to reload its value:

# Redownloads the file's data
song.value(:reload)

Other functionality includes:

# Copying an object
S3Object.copy 'headshot.jpg', 'headshot2.jpg', 'photos'

# Renaming an object
S3Object.rename 'headshot.jpg', 'portrait.jpg', 'photos'

# Deleting an object
S3Object.delete 'headshot.jpg', 'photos'

More about objects and their metadata

You can find out the content type of your object with the content_type method:

song.content_type
# => "audio/mpeg"

You can change the content type as well if you like:

song.content_type = 'application/octet-stream'
song.store

(Keep in mind that due to limitations in S3’s exposed API, the only way to change things like the content_type is to PUT the object onto S3 again. In the case of large files, this will result in fully re-uploading the file.)

A bevy of information about an object can be had using the about method:

pp song.about
{"last-modified"    => "Sat, 28 Oct 2006 21:29:26 GMT",
 "content-type"     => "binary/octect-stream",
 "etag"             => "\"dc629038ffc674bee6f62eb64ff3a\"",
 "date"             => "Sat, 28 Oct 2006 21:30:41 GMT",
 "x-amz-request-id" => "B7BC68F55495B1C8",
 "server"           => "AmazonS3",
 "content-length"   => "3418766"}

You can get and set metadata for an object:

song.metadata
# => {}
song.metadata[:album] = "A River Ain't Too Much To Love"
# => "A River Ain't Too Much To Love"
song.metadata[:released] = 2005
pp song.metadata
{"x-amz-meta-released" => 2005, 
  "x-amz-meta-album"   => "A River Ain't Too Much To Love"}
song.store

That metadata will be saved in S3 and is henceforth available from that object:

song = S3Object.find('black-flowers.mp3', 'jukebox')
pp song.metadata
{"x-amz-meta-released" => "2005", 
  "x-amz-meta-album"   => "A River Ain't Too Much To Love"}
song.metadata[:released]
# => "2005"
song.metadata[:released] = 2006
pp song.metadata
{"x-amz-meta-released" => 2006, 
 "x-amz-meta-album"    => "A River Ain't Too Much To Love"}

Defined Under Namespace

Classes: About, Metadata, Response, Value


Methods included from SelectiveAttributeProxy

included

Methods inherited from Base

current_bucket, request, set_current_bucket_to

Constructor Details

#initialize(attributes = {}) {|_self| ... } ⇒ S3Object

Initializes a new S3Object.
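
You will usually let S3Object.store and S3Object.find build objects for you. If you do construct one by hand, here is a minimal sketch using the block form (it assumes, as the #bucket documentation below implies, that bucket= takes a Bucket instance):

object = S3Object.new do |o|
  o.key    = 'headshot.jpg'             # a key must be set before storing
  o.value  = File.open('headshot.jpg')  # a string or an I/O stream
  o.bucket = Bucket.find('photos')      # assumed: a Bucket instance
end
object.store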

Yields:

  • (_self)

Yield Parameters:

  • _self (S3Object) - the object that the method was called on

# File 'lib/aws/s3/object.rb', line 373

def initialize(attributes = {}, &block)
  super
  self.value  = attributes.delete(:value) 
  self.bucket = attributes.delete(:bucket)
  yield self if block_given?
end

Dynamic Method Handling

This class handles dynamic methods through the method_missing method in the class AWS::S3::Base.

Instance Attribute Details

#value(options = {}, &block) ⇒ Object

Lazily loads object data.

Force a reload of the data by passing :reload.

object.value(:reload)

When loading the data for the first time you can optionally yield to a block which will allow you to stream the data in segments.

object.value do |segment|
  send_data segment
end

The full list of options is given in the documentation for its class method counterpart, S3Object::value.



# File 'lib/aws/s3/object.rb', line 434

def value(options = {}, &block)
  if options.is_a?(Hash)
    reload = !options.empty?
  else
    reload  = options
    options = {}
  end
  memoize(reload) do
    self.class.stream(key, bucket.name, options, &block)
  end
end

Class Method Details

.about(key, bucket = nil, options = {}) ⇒ Object

Fetch information about the object with the given key from bucket. Information includes content type, content length, last modified time, and others.
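
A short usage sketch; the returned About object behaves like a hash of response headers, and the values shown here are illustrative:

S3Object.about('headshot.jpg', 'photos')['content-type']
# => "image/jpeg"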



# File 'lib/aws/s3/object.rb', line 182

def about(key, bucket = nil, options = {})
  About.new(head(path!(bucket, key, options), options).headers)
end

.copy(key, copy_key, bucket = nil, options = {}) ⇒ Object

Makes a copy of the object with key, giving the copy the key copy_key.



# File 'lib/aws/s3/object.rb', line 167

def copy(key, copy_key, bucket = nil, options = {})
  bucket          = bucket_name(bucket)
  original        = find(key, bucket)
  default_options = {:content_type => original.content_type}
  store(copy_key, original.value, bucket, default_options.merge(options)).success?
end

.create(key, data, bucket = nil, options = {}) ⇒ Object

Alias for S3Object.store. See S3Object.store below for the full documentation and examples.

.delete(key, bucket = nil, options = {}) ⇒ Object

Delete object with key from bucket.



# File 'lib/aws/s3/object.rb', line 187

def delete(key, bucket = nil, options = {})
  # A bit confusing. Calling super actually makes an HTTP DELETE request. The delete method is
  # defined in the Base class. It happens to have the same name.
  super(path!(bucket, key, options), options).success?
end

.find(key, bucket = nil) ⇒ Object

Returns the object with the given key in the specified bucket. If the specified key does not exist, a NoSuchKey exception will be raised.
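
A short sketch of fetching an object and handling the missing-key case (use the fully qualified constant unless your code is already inside the AWS::S3 namespace):

begin
  picture = S3Object.find('headshot.jpg', 'photos')
  picture.content_type
rescue AWS::S3::NoSuchKey
  # the bucket has no object with that key
end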



# File 'lib/aws/s3/object.rb', line 134

def find(key, bucket = nil)
  # N.B. This is arguably a hack. From what the current S3 API exposes, when you retrieve a bucket, it
  # provides a listing of all the files in that bucket (assuming you haven't limited the scope of what it returns).
  # Each file in the listing contains information about that file. It is from this information that an S3Object is built.
  #
  # If you know the specific file that you want, S3 allows you to make a get request for that specific file and it returns
  # the value of that file in its response body. This response body is used to build an S3Object::Value object. 
  # If you want information about that file, you can make a head request and the headers of the response will contain 
  # information about that file. There is no way, though, to say, give me the representation of just this given file the same 
  # way that it would appear in a bucket listing.
  #
  # When fetching a bucket, you can provide options which narrow the scope of what files should be returned in that listing.
  # Of those options, one is <tt>marker</tt> which is a string and instructs the bucket to return only objects whose key comes after
  # the specified marker according to alphabetic order. Another option is <tt>max-keys</tt> which defaults to 1000 but allows you
  # to dictate how many objects should be returned in the listing. With a combination of <tt>marker</tt> and <tt>max-keys</tt> you can
  # *almost* specify exactly which file you'd like it to return, but <tt>marker</tt> is not inclusive. In other words, if there is a bucket
  # which contains three objects whose keys are respectively 'a', 'b' and 'c', then fetching a bucket listing with marker set to 'b' will only
  # return 'c', not 'b'. 
  #
  # Given all that, my hack to fetch a bucket with only one specific file, is to set the marker to the result of calling String#previous on
  # the desired object's key, which functionally makes the key ordered one degree higher than the desired object key according to 
  # alphabetic ordering. This is a hack, but it should work around 99% of the time. I can't think of a scenario where it would return
  # something incorrect.
  bucket = Bucket.find(bucket_name(bucket), :marker => key.previous, :max_keys => 1)
  # If our heuristic failed, trigger a NoSuchKey exception
  if (object = bucket.objects.first) && object.key == key
    object 
  else 
    raise NoSuchKey.new("No such key `#{key}'", bucket)
  end
end

.path!(bucket, name, options = {}) ⇒ Object

:nodoc:



# File 'lib/aws/s3/object.rb', line 254

def path!(bucket, name, options = {}) #:nodoc:
  # We're using the second argument for options
  if bucket.is_a?(Hash)
    options.replace(bucket)
    bucket = nil
  end
  '/' << File.join(bucket_name(bucket), name)
end

.rename(from, to, bucket = nil, options = {}) ⇒ Object

Renames the object whose key is from so that its key becomes to.



# File 'lib/aws/s3/object.rb', line 175

def rename(from, to, bucket = nil, options = {})
  copy(from, to, bucket, options)
  delete(from, bucket)
end

.save(key, data, bucket = nil, options = {}) ⇒ Object

Alias for S3Object.store. See S3Object.store below for the full documentation and examples.

.store(key, data, bucket = nil, options = {}) ⇒ Object

When storing an object on the S3 servers using S3Object.store, the data argument can be a string or an I/O stream. If data is an I/O stream it will be read in segments and written to the socket incrementally. This approach may be desirable for very large files so they are not read into memory all at once.

# Non streamed upload
S3Object.store('simple-text-file.txt', 
               'hello world!', 
               'marcel', 
               :content_type => 'text/plain')

# Streamed upload
S3Object.store('roots.mpeg', 
               File.open('roots.mpeg'), 
               'marcel', 
               :content_type => 'audio/mpeg')


# File 'lib/aws/s3/object.rb', line 208

def store(key, data, bucket = nil, options = {})
  validate_key!(key)
  put(path!(bucket, key, options), options, data) # Don't call .success? on response. We want to get the etag.
end

.stream(key, bucket = nil, options = {}, &block) ⇒ Object
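
The source carries no doc comment for this method; as the code below shows, it wraps S3Object.value and yields the response body to the block in chunks. A minimal sketch that processes an object incrementally instead of holding it all in memory (the MD5 digest is just an example consumer):

require 'digest/md5'

digest = Digest::MD5.new
S3Object.stream('song.mp3', 'jukebox') do |chunk|
  digest << chunk   # each chunk is one segment of the object's data
end
digest.hexdigest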



# File 'lib/aws/s3/object.rb', line 126

def stream(key, bucket = nil, options = {}, &block)
  value(key, bucket, options) do |response|
    response.read_body(&block)
  end
end

.url_for(name, bucket = nil, options = {}) ⇒ Object

All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an authenticated url for an object like this:

S3Object.url_for('beluga_baby.jpg', 'marcel_molina')

By default, authenticated urls expire 5 minutes after they are generated.

Expiration options can be specified either as an absolute time since the epoch with the :expires option, or as a number of seconds relative to now with the :expires_in option:

# Absolute expiration date 
# (Expires January 18th, 2038)
doomsday = Time.mktime(2038, 1, 18).to_i
S3Object.url_for('beluga_baby.jpg', 
                 'marcel', 
                 :expires => doomsday)

# Expiration relative to now specified in seconds 
# (Expires in 3 hours)
S3Object.url_for('beluga_baby.jpg', 
                 'marcel', 
                 :expires_in => 60 * 60 * 3)

You can specify whether the url should go over SSL with the :use_ssl option:

# Url will use https protocol
S3Object.url_for('beluga_baby.jpg', 
                 'marcel', 
                 :use_ssl => true)

By default, the SSL settings for the current connection will be used.

If you have an object handy, you can use its url method with the same options:

song.url(:expires_in => 30)


# File 'lib/aws/s3/object.rb', line 250

def url_for(name, bucket = nil, options = {})
  connection.url_for(path!(bucket, name, options), options) # Do not normalize options
end

.value(key, bucket = nil, options = {}, &block) ⇒ Object

Returns the value of the object with key in the specified bucket.

Conditional GET options

  • :if_modified_since - Return the object only if it has been modified since the specified time, otherwise return a 304 (not modified).

  • :if_unmodified_since - Return the object only if it has not been modified since the specified time, otherwise raise PreconditionFailed.

  • :if_match - Return the object only if its entity tag (ETag) is the same as the one specified, otherwise raise PreconditionFailed.

  • :if_none_match - Return the object only if its entity tag (ETag) is different from the one specified, otherwise return a 304 (not modified).

Other options

  • :range - Return only the bytes of the object in the specified range.
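
A sketch of passing these options; the exact value types each option expects (a Time versus an HTTP date string, a Range versus a raw byte-range string) are assumptions to verify against your gem version:

# Fetch the data only if the object changed within the last day
S3Object.value('song.mp3', 'jukebox',
               :if_modified_since => Time.now.utc - 60 * 60 * 24)

# Fetch only the first kilobyte of the object's data
S3Object.value('song.mp3', 'jukebox', :range => 0..1023)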



# File 'lib/aws/s3/object.rb', line 122

def value(key, bucket = nil, options = {}, &block)
  Value.new(get(path!(bucket, key, options), options, &block))
end

Instance Method Details

#==(s3object) ⇒ Object

:nodoc:



# File 'lib/aws/s3/object.rb', line 539

def ==(s3object) #:nodoc:
  path == s3object.path
end

#about ⇒ Object

Interface to information about the current object. Information is read only, though some of its data can be modified through specific methods, such as content_type and content_type=.

 pp some_object.about
   {"last-modified"    => "Sat, 28 Oct 2006 21:29:26 GMT",
    "x-amz-id-2"       =>  "LdcQRk5qLwxJQiZ8OH50HhoyKuqyWoJ67B6i+rOE5MxpjJTWh1kCkL+I0NQzbVQn",
    "content-type"     => "binary/octect-stream",
    "etag"             => "\"dc629038ffc674bee6f62eb68454ff3a\"",
    "date"             => "Sat, 28 Oct 2006 21:30:41 GMT",
    "x-amz-request-id" => "B7BC68F55495B1C8",
    "server"           => "AmazonS3",
    "content-length"   => "3418766"}

some_object.content_type
# => "binary/octect-stream"
some_object.content_type = 'audio/mpeg'
some_object.content_type
# => 'audio/mpeg'
some_object.store


# File 'lib/aws/s3/object.rb', line 465

def about
  stored? ? self.class.about(key, bucket.name) : About.new
end

#belongs_to_bucket? ⇒ Boolean Also known as: orphan?

Returns true if the current object has been assigned to a bucket yet. Objects must belong to a bucket before they can be saved onto S3.

Returns:

  • (Boolean)


# File 'lib/aws/s3/object.rb', line 394

def belongs_to_bucket?
  !@bucket.nil?
end

#bucket ⇒ Object

The current object’s bucket. If no bucket has been set, a NoBucketSpecified exception will be raised. For cases where you are not sure if the bucket has been set, you can use the belongs_to_bucket? method.
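
A small sketch of the behavior described above (assuming bucket= accepts a Bucket instance):

object = S3Object.new
object.belongs_to_bucket?   # => false
# object.bucket             # would raise NoBucketSpecified at this point
object.bucket = Bucket.find('photos')
object.bucket.name          # => "photos"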



# File 'lib/aws/s3/object.rb', line 382

def bucket
  @bucket or raise NoBucketSpecified
end

#bucket=(bucket) ⇒ Object

Sets the bucket that the object belongs to.



# File 'lib/aws/s3/object.rb', line 387

def bucket=(bucket)
  @bucket = bucket
  self
end

#copy(copy_name, options = {}) ⇒ Object

Copies the current object, giving it the name copy_name. Keep in mind that due to limitations in S3’s API, this operation requires retransmitting the entire object to S3.



# File 'lib/aws/s3/object.rb', line 505

def copy(copy_name, options = {})
  self.class.copy(key, copy_name, bucket.name, options)
end

#delete ⇒ Object

Deletes the current object. Trying to save an object after it has been deleted will raise a DeletedObject exception.



# File 'lib/aws/s3/object.rb', line 497

def delete
  bucket.update(:deleted, self)
  freeze
  self.class.delete(key, bucket.name)
end

#etag(reload = false) ⇒ Object
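
The source carries no doc comment here; from the code below, etag returns nil for an object that has not been stored, and otherwise the object's ETag with its surrounding quotes stripped. A tiny illustrative sketch (the hash value is made up):

song.etag
# => "dc629038ffc674bee6f62eb68454ff3a"
song.etag(true)   # bypass the memoized value and look the ETag up again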



# File 'lib/aws/s3/object.rb', line 515

def etag(reload = false)
  return nil unless stored?
  memoize(reload) do
    reload ? about(reload)['etag'][1...-1] : attributes['e_tag'][1...-1]
  end
end

#inspect ⇒ Object

Don’t dump binary data :)



# File 'lib/aws/s3/object.rb', line 551

def inspect #:nodoc:
  "#<AWS::S3::S3Object:0x#{object_id} '#{path}'>"
end

#key ⇒ Object

Returns the key of the object. If the key is not set, a NoKeySpecified exception will be raised. For cases where you are not sure if the key has been set, you can use the key_set? method. Objects must have a key set to be saved onto S3. Objects which have already been saved onto S3 will always have their key set.



# File 'lib/aws/s3/object.rb', line 402

def key
  attributes['key'] or raise NoKeySpecified
end

#key=(value) ⇒ Object

Sets the key for the current object.



# File 'lib/aws/s3/object.rb', line 407

def key=(value)
  attributes['key'] = value
end

#key_set? ⇒ Boolean

Returns true if the current object has had its key set. Objects which have already been saved will always return true. This method is useful for objects which have not been saved yet: since an object can not be saved unless its key has been set, it tells you whether you still need to set the key.

object.store if object.key_set? && object.belongs_to_bucket?

Returns:

  • (Boolean)


# File 'lib/aws/s3/object.rb', line 416

def key_set?
  !attributes['key'].nil?
end

#metadata ⇒ Object

Interface to viewing and editing metadata for the current object. To be treated like a Hash.

some_object.metadata
# => {}
some_object.metadata[:author] = 'Dave Thomas'
some_object.metadata
# => {"x-amz-meta-author" => "Dave Thomas"}
some_object.metadata[:author]
# => "Dave Thomas"


# File 'lib/aws/s3/object.rb', line 479

def metadata
  about.metadata
end

#owner ⇒ Object

Returns the owner of the current object.



# File 'lib/aws/s3/object.rb', line 523

def owner 
  Owner.new(attributes['owner'])
end

#path ⇒ Object

:nodoc:



# File 'lib/aws/s3/object.rb', line 543

def path #:nodoc:
  self.class.path!(
    belongs_to_bucket? ? bucket.name : '(no bucket)', 
    key_set?           ? key         : '(no key)'
  )
end

#rename(to, options = {}) ⇒ Object

Rename the current object. Keep in mind that due to limitations in S3’s API, this operation requires retransmitting the entire object to S3.



# File 'lib/aws/s3/object.rb', line 511

def rename(to, options = {})
  self.class.rename(key, to, bucket.name, options)
end

#store(options = {}) ⇒ Object Also known as: create, save

Saves the current object with the specified options. Valid options are listed in the documentation for S3Object::store.

Raises:

  • DeletedObject - if the object has already been deleted

# File 'lib/aws/s3/object.rb', line 485

def store(options = {})
  raise DeletedObject if frozen?
  options  = about.to_headers.merge(options) if stored?
  response = self.class.store(key, value, bucket.name, options)
  bucket.update(:stored, self)
  response.success?
end

#stored? ⇒ Boolean

Returns true if the current object has been stored on S3 yet.

Returns:

  • (Boolean)


# File 'lib/aws/s3/object.rb', line 535

def stored?
  !attributes['e_tag'].nil?
end

#url(options = {}) ⇒ Object

Generates an authenticated url for the current object. Accepts the same options as its class method counterpart, S3Object.url_for.



# File 'lib/aws/s3/object.rb', line 530

def url(options = {})
  self.class.url_for(key, bucket.name, options)
end