Class: Delayed::Job
- Inherits: Object
- Includes: DataMapper::Resource
- Defined in: lib/dm-delayed-job/job.rb
Overview
A job object that is persisted to the database. Contains the work object as a YAML field.
Constant Summary
- MAX_ATTEMPTS = 25
- MAX_RUN_TIME = 4 * 60 * 60
- NextTaskSQL = '(run_at <= ? AND (locked_at IS NULL OR locked_at < ?) OR (locked_by = ?)) AND failed_at IS NULL'
- NextTaskOrder = [:priority.desc, :run_at.asc]
- ParseObjectFromYaml = /\!ruby\/\w+\:([^\s]+)/
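The ParseObjectFromYaml pattern recovers the payload's class name from the serialized handler string without loading the YAML. A minimal sketch of how it behaves (ReportJob is a hypothetical job class used only for illustration):

```ruby
require 'yaml'

# Same pattern as the ParseObjectFromYaml constant above.
PARSE_OBJECT_FROM_YAML = /\!ruby\/\w+\:([^\s]+)/

# Hypothetical job class, not part of the library.
class ReportJob
  def perform; end
end

yaml = ReportJob.new.to_yaml               # e.g. "--- !ruby/object:ReportJob {}\n"
klass_name = yaml[PARSE_OBJECT_FROM_YAML, 1]
puts klass_name                            # => ReportJob
```

The capture group stops at the first whitespace, so it extracts just the class name from tags like `!ruby/object:ReportJob` or `!ruby/struct:ReportJob`.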
Class Method Summary
- .clear_locks! ⇒ Object
  When a worker is exiting, make sure we don’t have any locked jobs.
- .delete_all ⇒ Object
- .enqueue(*args, &block) ⇒ Object
  Add a job to the queue.
- .find_available(limit = 5, max_run_time = MAX_RUN_TIME) ⇒ Object
  Find a few candidate jobs to run (in case some immediately get locked by others).
- .last ⇒ Object
- .logger ⇒ Object
- .reserve_and_run_one_job(max_run_time = MAX_RUN_TIME) ⇒ Object
  Run the next job we can get an exclusive lock on.
- .update_all(with, from) ⇒ Object
- .work_off(num = 100) ⇒ Object
  Do num jobs and return stats on success/failure.
Instance Method Summary
- #failed? ⇒ Boolean (also: #failed)
- #invoke_job ⇒ Object
  Moved into its own method so that new_relic can trace it.
- #lock_exclusively!(max_run_time, worker = worker_name) ⇒ Object
  Lock this job for this worker.
- #log_exception(error) ⇒ Object
  This is a good hook if you need to report job processing errors in additional or different ways.
- #logger ⇒ Object
- #name ⇒ Object
- #payload_object ⇒ Object
- #payload_object=(object) ⇒ Object
- #reschedule(message, backtrace = [], time = nil) ⇒ Object
  Reschedule the job in the future (when a job fails).
- #run_with_lock(max_run_time, worker_name) ⇒ Object
  Try to run one job.
- #unlock ⇒ Object
  Unlock this job (note: not saved to DB).
Class Method Details
.clear_locks! ⇒ Object
When a worker is exiting, make sure we don’t have any locked jobs.
# File 'lib/dm-delayed-job/job.rb', line 75

def self.clear_locks!
  update_all("locked_by = null, locked_at = null", ["locked_by = ?", worker_name])
end
.delete_all ⇒ Object
# File 'lib/dm-delayed-job/job.rb', line 58

def self.delete_all
  all.destroy!
end
.enqueue(*args, &block) ⇒ Object
Add a job to the queue
# File 'lib/dm-delayed-job/job.rb', line 146

def self.enqueue(*args, &block)
  object = block_given? ? EvaledJob.new(&block) : args.shift

  unless object.respond_to?(:perform) || block_given?
    raise ArgumentError, 'Cannot enqueue items which do not respond to perform'
  end

  priority = args.first || 0
  run_at   = args[1]

  Job.create(:payload_object => object, :priority => priority.to_i, :run_at => run_at)
end
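Any object that responds to #perform can be enqueued. A usage sketch (NewsletterJob is a hypothetical work object; the Delayed::Job.enqueue calls are shown as comments because they need a configured DataMapper database):

```ruby
# Hypothetical work object; enqueue only accepts objects responding to #perform.
class NewsletterJob
  def perform
    # do the actual work here
  end
end

job = NewsletterJob.new
# This mirrors the guard inside Job.enqueue:
raise ArgumentError, 'Cannot enqueue items which do not respond to perform' unless job.respond_to?(:perform)

# With dm-delayed-job set up, these would persist jobs to the database:
#   Delayed::Job.enqueue(job)                      # priority 0, run as soon as possible
#   Delayed::Job.enqueue(job, 5)                   # higher priority
#   Delayed::Job.enqueue(job, 5, Time.now + 3600)  # run in an hour
```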
.find_available(limit = 5, max_run_time = MAX_RUN_TIME) ⇒ Object
Find a few candidate jobs to run (in case some immediately get locked by others). Return them in random order to prevent every worker from trying to do the same head job at once.
# File 'lib/dm-delayed-job/job.rb', line 161

def self.find_available(limit = 5, max_run_time = MAX_RUN_TIME)
  time_now = db_time_now

  sql = NextTaskSQL.dup
  conditions = [time_now, time_now - max_run_time, worker_name]

  if self.min_priority
    sql << ' AND (priority >= ?)'
    conditions << min_priority
  end

  if self.max_priority
    sql << ' AND (priority <= ?)'
    conditions << max_priority
  end

  conditions.unshift(sql)

  orig, DataMapper.logger.level = DataMapper.logger.level, :error
  records = all(:conditions => conditions, :order => Delayed::Job::NextTaskOrder, :limit => limit)
  DataMapper.logger.level = orig

  records.sort_by { rand() }
end
.last ⇒ Object
# File 'lib/dm-delayed-job/job.rb', line 62

def self.last
  all.last
end
.logger ⇒ Object
# File 'lib/dm-delayed-job/job.rb', line 70

def self.logger
  DataMapper.logger
end
.reserve_and_run_one_job(max_run_time = MAX_RUN_TIME) ⇒ Object
Run the next job we can get an exclusive lock on. If no jobs are left, we return nil.
# File 'lib/dm-delayed-job/job.rb', line 190

def self.reserve_and_run_one_job(max_run_time = MAX_RUN_TIME)
  # We get up to 5 jobs from the db. In case we cannot get exclusive access to a job we try the next.
  # This leads to a more even distribution of jobs across the worker processes.
  find_available(5, max_run_time).each do |job|
    t = job.run_with_lock(max_run_time, worker_name)
    return t unless t == nil # return if we did work (good or bad)
  end

  nil # we didn't do any work, all 5 were not lockable
end
.update_all(with, from) ⇒ Object
# File 'lib/dm-delayed-job/job.rb', line 54

def self.update_all(with, from)
  repository(:default).adapter.execute(
    "UPDATE #{storage_names[:default]} SET #{Array(with)[0]} WHERE #{Array(from)[0]}",
    *Array(with)[1..-1].concat(Array(from)[1..-1])
  ).affected_rows
end
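update_all expects both arguments as a clause string followed by its bind values, and splices them into one raw UPDATE. A sketch of the string/bind assembly in isolation (the table name 'delayed_jobs' is an assumption here; in the real method it comes from storage_names[:default]):

```ruby
require 'time'

now    = Time.parse('2010-01-01 12:00:00 UTC')
worker = 'host:pid:1234'  # hypothetical worker name
id     = 42

with = ["locked_at = ?, locked_by = ?", now, worker]
from = ["id = ? and locked_by = ?", id, worker]

# First element of each array is the SQL fragment; the rest are bind values.
sql   = "UPDATE delayed_jobs SET #{Array(with)[0]} WHERE #{Array(from)[0]}"
binds = Array(with)[1..-1].concat(Array(from)[1..-1])

puts sql      # => UPDATE delayed_jobs SET locked_at = ?, locked_by = ? WHERE id = ? and locked_by = ?
p binds.size  # => 4
```

The SET binds come first and the WHERE binds second, matching the order of the placeholders in the assembled statement.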
.work_off(num = 100) ⇒ Object
Do num jobs and return stats on success/failure. Exit early if interrupted.
# File 'lib/dm-delayed-job/job.rb', line 237

def self.work_off(num = 100)
  success, failure = 0, 0

  num.times do
    case self.reserve_and_run_one_job
    when true
      success += 1
    when false
      failure += 1
    else
      break # leave if no work could be done
    end
    break if $exit # leave if we're exiting
  end

  return [success, failure]
end
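The success/failure bookkeeping in work_off can be sketched in isolation. Here the outcomes of reserve_and_run_one_job are simulated with a fixed array (true = job succeeded, false = job failed, nil = no lockable job left):

```ruby
# Simulated outcomes; the trailing true is never reached because nil stops the loop.
outcomes = [true, true, false, nil, true]

success, failure = 0, 0
outcomes.each do |result|
  case result
  when true  then success += 1
  when false then failure += 1
  else break                # nil means no work could be done; stop early
  end
end

p [success, failure]  # => [2, 1]
```

A worker loop would typically log these counts, e.g. `success, failure = Delayed::Job.work_off` in an app with the gem loaded.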
Instance Method Details
#failed? ⇒ Boolean Also known as: failed
# File 'lib/dm-delayed-job/job.rb', line 79

def failed?
  failed_at
end
#invoke_job ⇒ Object
Moved into its own method so that new_relic can trace it.
# File 'lib/dm-delayed-job/job.rb', line 256

def invoke_job
  payload_object.perform
end
#lock_exclusively!(max_run_time, worker = worker_name) ⇒ Object
Lock this job for this worker. Returns true if we have the lock, false otherwise.
# File 'lib/dm-delayed-job/job.rb', line 204

def lock_exclusively!(max_run_time, worker = worker_name)
  now = self.class.db_time_now
  affected_rows = if locked_by != worker
    # We don't own this job so we will update the locked_by name and the locked_at
    self.class.update_all(["locked_at = ?, locked_by = ?", now, worker],
                          ["id = ? and (locked_at is null or locked_at < ?)", id, (now - max_run_time.to_i)])
  else
    # We already own this job, this may happen if the job queue crashes.
    # Simply resume and update the locked_at
    self.class.update_all(["locked_at = ?", now], ["id = ? and locked_by = ?", id, worker])
  end
  if affected_rows == 1
    self.locked_at = now
    self.locked_by = worker
    return true
  else
    return false
  end
end
#log_exception(error) ⇒ Object
This is a good hook if you need to report job processing errors in additional or different ways
# File 'lib/dm-delayed-job/job.rb', line 230

def log_exception(error)
  logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
  logger.error(error)
end
#logger ⇒ Object
# File 'lib/dm-delayed-job/job.rb', line 66

def logger
  DataMapper.logger
end
#name ⇒ Object
# File 'lib/dm-delayed-job/job.rb', line 88

def name
  @name ||= begin
    payload = payload_object
    if payload.respond_to?(:display_name)
      payload.display_name
    else
      payload.class.name
    end
  end
end
#payload_object ⇒ Object
# File 'lib/dm-delayed-job/job.rb', line 84

def payload_object
  @payload_object ||= deserialize(self.handler)
end
#payload_object=(object) ⇒ Object
# File 'lib/dm-delayed-job/job.rb', line 99

def payload_object=(object)
  self.handler = object.to_yaml
end
#reschedule(message, backtrace = [], time = nil) ⇒ Object
Reschedule the job in the future (when a job fails). Uses an exponential scale depending on the number of failed attempts.
# File 'lib/dm-delayed-job/job.rb', line 105

def reschedule(message, backtrace = [], time = nil)
  if self.attempts < MAX_ATTEMPTS
    time ||= Job.db_time_now + (attempts ** 4) + 5

    self.attempts  += 1
    self.run_at     = time
    self.last_error = message + "\n" + backtrace.join("\n")
    self.unlock
    save
  else
    logger.info "* [JOB] PERMANENTLY removing #{self.name} because of #{attempts} consecutive failures."
    destroy_failed_jobs ? destroy : update(:failed_at => Delayed::Job.db_time_now)
  end
end
#run_with_lock(max_run_time, worker_name) ⇒ Object
Try to run one job. Returns true/false (work done/work failed) or nil if job can’t be locked.
# File 'lib/dm-delayed-job/job.rb', line 122

def run_with_lock(max_run_time, worker_name)
  logger.info "* [JOB] acquiring lock on #{name}"
  unless lock_exclusively!(max_run_time, worker_name)
    # We did not get the lock, some other worker process must have it
    logger.warn "* [JOB] failed to acquire exclusive lock for #{name}"
    return nil # no work done
  end

  begin
    runtime = Benchmark.realtime do
      invoke_job # TODO: raise error if takes longer than max_run_time
      destroy
    end
    # TODO: warn if runtime > max_run_time ?
    logger.info "* [JOB] #{name} completed after %.4f" % runtime
    return true # did work
  rescue Exception => e
    reschedule e.message, e.backtrace
    log_exception(e)
    return false # work failed
  end
end
#unlock ⇒ Object
Unlock this job (note: not saved to DB)
# File 'lib/dm-delayed-job/job.rb', line 224

def unlock
  self.locked_at = nil
  self.locked_by = nil
end