Class: Mir::Disk::Amazon

Inherits: Object
Defined in: lib/mir/disk/amazon.rb
Constant Summary
- DEFAULT_CHUNK_SIZE = 5*(2**20)
  The default size in bytes at which files are split before being stored on S3. From trial and error, 5 MB seems to be a good chunk size for large files.
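To illustrate the arithmetic behind the default, the snippet below computes how many parts a hypothetical 12 MiB file would be split into at the 5 MiB chunk size (the file size is invented for the example):

```ruby
# DEFAULT_CHUNK_SIZE as defined above: 5 MiB.
chunk_size = 5 * (2**20)    # 5_242_880 bytes
file_size  = 12 * (2**20)   # a hypothetical 12 MiB file

parts = (file_size.to_f / chunk_size).ceil
puts "#{parts} parts"       # => 3 parts (5 MiB + 5 MiB + 2 MiB)
```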
Instance Attribute Summary

- #bucket_name ⇒ Object (readonly)
  Returns the value of attribute bucket_name.
- #connection ⇒ Object (readonly)
  Returns the value of attribute connection.
Class Method Summary

- .s3_key(path) ⇒ String
  Converts a path name to a key that can be stored on S3.
Instance Method Summary

- #initialize(settings = {}) ⇒ Amazon (constructor)
  Attempts to create a new connection to Amazon S3.
- #chunk_size ⇒ Integer
  The size in bytes of each chunk.
- #chunk_size=(n) ⇒ void
  Sets the maximum number of bytes to be used per chunk sent to S3.
- #collections ⇒ Hash
  Returns the buckets available from S3.
- #connected? ⇒ Boolean
  Whether a connection to S3 has been established.
- #copy(from, dest) ⇒ void
  Copies the remote resource to the local filesystem.
- #delete(file_path) ⇒ Boolean
  Deletes the remote version of the file.
- #key_exists?(key) ⇒ Boolean
  Whether the key exists in S3.
- #read(key) ⇒ String
  Retrieves the complete object from S3.
- #volume ⇒ RightAws::S3::Bucket
  Retrieves the bucket from S3.
- #write(file_path) ⇒ void
  Writes a file to Amazon S3.
Constructor Details
#initialize(settings = {}) ⇒ Amazon
Attempts to create a new connection to Amazon S3
# File 'lib/mir/disk/amazon.rb', line 35

def initialize(settings = {})
  @bucket_name = settings[:bucket_name]
  @access_key_id = settings[:access_key_id]
  @secret_access_key = settings[:secret_access_key]
  @chunk_size = settings[:chunk_size] || DEFAULT_CHUNK_SIZE
  @connection = try_connect
end
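A typical construction might look like the sketch below. The bucket name and credential strings are placeholders, not real values, and :chunk_size is optional since it falls back to DEFAULT_CHUNK_SIZE:

```ruby
# Hypothetical settings hash for Mir::Disk::Amazon.new; the
# credential values shown are placeholders.
settings = {
  bucket_name:       "my-backup-bucket",
  access_key_id:     "YOUR_ACCESS_KEY_ID",
  secret_access_key: "YOUR_SECRET_ACCESS_KEY",
  chunk_size:        10 * (2**20)  # optional; defaults to DEFAULT_CHUNK_SIZE
}

# disk = Mir::Disk::Amazon.new(settings)  # opens the connection via try_connect
```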
Instance Attribute Details
#bucket_name ⇒ Object (readonly)
Returns the value of attribute bucket_name.
# File 'lib/mir/disk/amazon.rb', line 14

def bucket_name
  @bucket_name
end
#connection ⇒ Object (readonly)
Returns the value of attribute connection.
# File 'lib/mir/disk/amazon.rb', line 14

def connection
  @connection
end
Class Method Details
.s3_key(path) ⇒ String
Converts a path name to a key that can be stored on s3
# File 'lib/mir/disk/amazon.rb', line 20

def self.s3_key(path)
  if path[0] == File::SEPARATOR
    path[1..-1]
  else
    path
  end
end
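The conversion simply strips a leading path separator so the key is relative. A standalone re-implementation of the same logic, with invented example paths:

```ruby
# Standalone sketch of the s3_key conversion shown above:
# a leading File::SEPARATOR is dropped so the S3 key is relative.
def s3_key(path)
  path.start_with?(File::SEPARATOR) ? path[1..-1] : path
end

puts s3_key("/backups/photos/img.jpg")  # => "backups/photos/img.jpg"
puts s3_key("relative/path.txt")        # => "relative/path.txt"
```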
Instance Method Details
#chunk_size ⇒ Integer
The size in bytes of each chunk
# File 'lib/mir/disk/amazon.rb', line 59

def chunk_size
  @chunk_size
end
#chunk_size=(n) ⇒ void
This method returns an undefined value.
Sets the maximum number of bytes to be used per chunk sent to S3
# File 'lib/mir/disk/amazon.rb', line 52

def chunk_size=(n)
  raise ArgumentError unless n > 0
  @chunk_size = n
end
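The guard raises ArgumentError for any non-positive size. A minimal stand-in class (ChunkedDisk is invented for the example) mirroring that behavior:

```ruby
# Minimal sketch of the chunk_size= guard above, on a stand-in class.
class ChunkedDisk
  attr_reader :chunk_size

  def chunk_size=(n)
    raise ArgumentError unless n > 0
    @chunk_size = n
  end
end

disk = ChunkedDisk.new
disk.chunk_size = 1024          # accepted
begin
  disk.chunk_size = 0           # rejected: not > 0
rescue ArgumentError
  puts "non-positive chunk size rejected"
end
```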
#collections ⇒ Hash
Returns the buckets available from S3
# File 'lib/mir/disk/amazon.rb', line 45

def collections
  @connection.list_bucket.select(:key)
end
#connected? ⇒ Boolean
Whether a connection to S3 has been established
# File 'lib/mir/disk/amazon.rb', line 100

def connected?
  @connection_success
end
#copy(from, dest) ⇒ void
This method returns an undefined value.
Copies the remote resource to the local filesystem
# File 'lib/mir/disk/amazon.rb', line 81

def copy(from, dest)
  open(dest, 'w') do |file|
    key = self.class.s3_key(from)
    remote_file = MultiPartFile.new(self, key)
    remote_file.get(dest)
  end
end
#delete(file_path) ⇒ Boolean
Deletes the remote version of the file
# File 'lib/mir/disk/amazon.rb', line 112

def delete(file_path)
  key = self.class.s3_key(file_path)
  Mir.logger.info "Deleting remote object #{file_path}"

  begin
    remote_file = MultiPartFile.new(self, key)
  rescue Disk::RemoteFileNotFound => e
    Mir.logger.warn "Could not find remote resource '#{key}'"
    return false
  end

  if remote_file.multipart?
    delete_parts(key)
  else
    connection.delete(bucket_name, key)
  end
end
#key_exists?(key) ⇒ Boolean
Whether the key exists in S3
# File 'lib/mir/disk/amazon.rb', line 67

def key_exists?(key)
  begin
    connection.head(bucket_name, key)
  rescue RightAws::AwsError => e
    return false
  end
  true
end
#read(key) ⇒ String
Retrieves the complete object from S3. Note that this method does not stream the object; it returns the entire value stored on S3.
# File 'lib/mir/disk/amazon.rb', line 94

def read(key)
  connection.get_object(bucket_name, key)
end
#volume ⇒ RightAws::S3::Bucket
Retrieves the bucket from S3
# File 'lib/mir/disk/amazon.rb', line 106

def volume
  connection.bucket(bucket_name, true)
end
#write(file_path) ⇒ void
This method returns an undefined value.
Writes a file to Amazon S3. If the file size exceeds the chunk size, the file is written in sequence-named chunks.
# File 'lib/mir/disk/amazon.rb', line 136

def write(file_path)
  key = self.class.s3_key(file_path)

  if File.size(file_path) <= chunk_size
    connection.put(bucket_name, key, open(file_path))
    raise Disk::IncompleteTransmission unless equals?(file_path, key)
  else
    delete_parts(file_path) # clean up remaining part files if any exist
    open(file_path, "rb") do |source|
      part_id = 1
      while part = source.read(chunk_size) do
        part_name = Mir::Utils.filename_with_sequence(key, part_id)
        Mir.logger.debug "Writing part #{part_name}"
        temp_file(part_name) do |tmp|
          tmp.binmode
          tmp.write(part)
          tmp.rewind
          connection.put(bucket_name, part_name, open(tmp.path))
          raise Disk::IncompleteTransmission unless equals?(tmp.path, part_name)
        end
        part_id += 1
      end
    end
  end
  Mir.logger.info "Completed upload #{file_path}"
end
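The multipart branch reads chunk_size bytes at a time and gives each part a sequence-numbered name. The self-contained sketch below mimics that loop on an in-memory string; filename_with_sequence here is a simplified stand-in for Mir::Utils.filename_with_sequence, whose real naming scheme may differ:

```ruby
require "stringio"

# Simplified stand-in for Mir::Utils.filename_with_sequence.
def filename_with_sequence(key, seq)
  "#{key}.#{seq}"
end

chunk_size = 4
source = StringIO.new("abcdefghij")  # 10 bytes -> parts of 4, 4, 2

# Read chunk_size bytes per iteration, as #write does with the file.
parts = []
part_id = 1
while (part = source.read(chunk_size))
  parts << [filename_with_sequence("file.bin", part_id), part.bytesize]
  part_id += 1
end

parts.each { |name, size| puts "#{name}: #{size} bytes" }
# => file.bin.1: 4 bytes
#    file.bin.2: 4 bytes
#    file.bin.3: 2 bytes
```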