Module: Sidekiq::Job::ClassMethods
Defined in:
- lib/sidekiq/testing.rb
- lib/sidekiq/job.rb
Overview
The Sidekiq testing infrastructure overrides perform_async so that it does not actually touch the network. Instead it stores the asynchronous jobs in a per-class array so that their presence/absence can be asserted by your tests.
This is similar to ActionMailer’s :test delivery_method and its ActionMailer::Base.deliveries array.
Example:
require 'sidekiq/testing'
assert_equal 0, HardJob.jobs.size
HardJob.perform_async(:something)
assert_equal 1, HardJob.jobs.size
assert_equal :something, HardJob.jobs[0]['args'][0]
You can also clear and drain all job types:
Sidekiq::Job.clear_all # or .drain_all
This can be useful to make sure jobs don’t linger between tests:
RSpec.configure do |config|
config.before(:each) do
Sidekiq::Job.clear_all
end
end
or for acceptance testing, e.g. with Cucumber:
AfterStep do
Sidekiq::Job.drain_all
end
When I sign up as "[email protected]"
Then I should receive a welcome email to "[email protected]"
Instance Method Summary
- #build_client ⇒ Object
  :nodoc:
- #clear ⇒ Object
  Clear all jobs for this worker.
- #client_push(item) ⇒ Object
  :nodoc:
- #delay(*args) ⇒ Object
- #delay_for(*args) ⇒ Object
- #delay_until(*args) ⇒ Object
- #drain ⇒ Object
  Drain and run all jobs for this worker.
- #execute_job(worker, args) ⇒ Object
- #jobs ⇒ Object
  Jobs queued for this worker.
- #perform_async(*args) ⇒ Object
- #perform_bulk(*args, **kwargs) ⇒ Object
  Push a large number of jobs to Redis, while limiting the batch of each job payload to 1,000.
- #perform_in(interval, *args) ⇒ Object (also: #perform_at)
  interval must be a timestamp, numeric or something that acts numeric (like an ActiveSupport time interval).
- #perform_inline(*args) ⇒ Object (also: #perform_sync)
  Inline execution of the job's perform method after passing through Sidekiq.client_middleware and Sidekiq.server_middleware.
- #perform_one ⇒ Object
  Pop out a single job and perform it.
- #process_job(job) ⇒ Object
- #queue ⇒ Object
  Queue for this worker.
- #queue_as(q) ⇒ Object
- #set(options) ⇒ Object
- #sidekiq_options(opts = {}) ⇒ Object
  Allows customization for this type of Job.
Instance Method Details
#build_client ⇒ Object
:nodoc:
# File 'lib/sidekiq/job.rb', line 367

def build_client # :nodoc:
  pool = Thread.current[:sidekiq_redis_pool] || get_sidekiq_options["pool"] || Sidekiq.default_configuration.redis_pool
  client_class = get_sidekiq_options["client_class"] || Sidekiq::Client
  client_class.new(pool: pool)
end
#clear ⇒ Object
Clear all jobs for this worker
# File 'lib/sidekiq/testing.rb', line 245

def clear
  Queues.clear_for(queue, to_s)
end
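For example, a test can reset a single job class rather than the whole suite (HardJob is the hypothetical job class from the Overview):

HardJob.perform_async(1)
assert_equal 1, HardJob.jobs.size
HardJob.clear
assert_equal 0, HardJob.jobs.size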
#client_push(item) ⇒ Object
:nodoc:
# File 'lib/sidekiq/job.rb', line 352

def client_push(item) # :nodoc:
  raise ArgumentError, "Job payloads should contain no Symbols: #{item}" if item.any? { |k, v| k.is_a?(::Symbol) }

  # allow the user to dynamically re-target jobs to another shard using the "pool" attribute
  #   FooJob.set(pool: SOME_POOL).perform_async
  old = Thread.current[:sidekiq_redis_pool]
  pool = item.delete("pool")
  Thread.current[:sidekiq_redis_pool] = pool if pool
  begin
    build_client.push(item)
  ensure
    Thread.current[:sidekiq_redis_pool] = old
  end
end
#delay(*args) ⇒ Object
# File 'lib/sidekiq/job.rb', line 265

def delay(*args)
  raise ArgumentError, "Do not call .delay on a Sidekiq::Job class, call .perform_async"
end
#delay_for(*args) ⇒ Object
# File 'lib/sidekiq/job.rb', line 269

def delay_for(*args)
  raise ArgumentError, "Do not call .delay_for on a Sidekiq::Job class, call .perform_in"
end
#delay_until(*args) ⇒ Object
# File 'lib/sidekiq/job.rb', line 273

def delay_until(*args)
  raise ArgumentError, "Do not call .delay_until on a Sidekiq::Job class, call .perform_at"
end
#drain ⇒ Object
Drain and run all jobs for this worker
# File 'lib/sidekiq/testing.rb', line 250

def drain
  while jobs.any?
    next_job = jobs.first
    Queues.delete_for(next_job["jid"], next_job["queue"], to_s)
    process_job(next_job)
  end
end
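A sketch of typical usage in the fake testing mode, again with the hypothetical HardJob: enqueue a few jobs, then drain to run them all through the test server middleware.

HardJob.perform_async('alice')
HardJob.perform_async('bob')
assert_equal 2, HardJob.jobs.size
HardJob.drain   # runs both jobs via process_job
assert_equal 0, HardJob.jobs.size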
#execute_job(worker, args) ⇒ Object
# File 'lib/sidekiq/testing.rb', line 275

def execute_job(worker, args)
  worker.perform(*args)
end
#jobs ⇒ Object
Jobs queued for this worker
# File 'lib/sidekiq/testing.rb', line 240

def jobs
  Queues.jobs_by_class[to_s]
end
#perform_async(*args) ⇒ Object
# File 'lib/sidekiq/job.rb', line 285

def perform_async(*args)
  Setter.new(self, {}).perform_async(*args)
end
#perform_bulk(*args, **kwargs) ⇒ Object
Push a large number of jobs to Redis at once, batching the payloads so that each round trip carries at most 1,000 jobs. This cuts down on the number of round trips to Redis, which can substantially speed up enqueueing large numbers of jobs.
items must be an Array of Arrays, one inner Array of arguments per job.
For finer-grained control, use `Sidekiq::Client.push_bulk` directly.
Example (3 Redis round trips):
SomeJob.perform_async(1)
SomeJob.perform_async(2)
SomeJob.perform_async(3)
Would instead become (1 Redis round trip):
SomeJob.perform_bulk([[1], [2], [3]])
# File 'lib/sidekiq/job.rb', line 315

def perform_bulk(*args, **kwargs)
  Setter.new(self, {}).perform_bulk(*args, **kwargs)
end
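A usage sketch with the hypothetical SomeJob from the example above; the batch_size: keyword on the second call is an assumption that the underlying Setter#perform_bulk accepts it to override the 1,000-payload default.

# enqueue 10,000 jobs, one argument each, batched into Redis round trips
args = (1..10_000).map { |i| [i] }   # an Array of Arrays, one inner Array of arguments per job
SomeJob.perform_bulk(args)

# assumed batch_size: override, forwarded through **kwargs
SomeJob.perform_bulk(args, batch_size: 500)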
#perform_in(interval, *args) ⇒ Object Also known as: perform_at
interval must be a timestamp, numeric or something that acts numeric (like an ActiveSupport time interval).
# File 'lib/sidekiq/job.rb', line 321

def perform_in(interval, *args)
  int = interval.to_f
  now = Time.now.to_f
  ts = ((int < 1_000_000_000) ? now + int : int)

  item = {"class" => self, "args" => args}

  # Optimization to enqueue something now that is scheduled to go out now or in the past
  item["at"] = ts if ts > now

  client_push(item)
end
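A usage sketch with a hypothetical SomeJob; because of the ts > now check above, an interval at or before now is pushed without an "at" key and runs as soon as possible.

SomeJob.perform_in(300, 'user_42')                    # roughly five minutes from now
SomeJob.perform_at(Time.now.to_f + 3600, 'user_42')   # alias; a numeric timestamp also works
SomeJob.perform_in(0, 'user_42')                      # not scheduled, enqueued for immediate execution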
#perform_inline(*args) ⇒ Object Also known as: perform_sync
Inline execution of the job's perform method after passing through Sidekiq.client_middleware and Sidekiq.server_middleware
# File 'lib/sidekiq/job.rb', line 290

def perform_inline(*args)
  Setter.new(self, {}).perform_inline(*args)
end
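A sketch with a hypothetical SomeJob, useful in a console or a test that wants the job to run in the current process:

SomeJob.perform_inline(123)   # runs SomeJob#perform(123) now, through both middleware chains
SomeJob.perform_sync(123)     # alias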
#perform_one ⇒ Object
Pop out a single job and perform it
# File 'lib/sidekiq/testing.rb', line 259

def perform_one
  raise(EmptyQueueError, "perform_one called with empty job queue") if jobs.empty?
  next_job = jobs.first
  Queues.delete_for(next_job["jid"], queue, to_s)
  process_job(next_job)
end
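A sketch of stepping through queued jobs one at a time in fake mode, asserting on side effects between runs (HardJob is hypothetical):

HardJob.perform_async(1)
HardJob.perform_async(2)
HardJob.perform_one           # runs only the first job
assert_equal 1, HardJob.jobs.size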
#process_job(job) ⇒ Object
# File 'lib/sidekiq/testing.rb', line 266

def process_job(job)
  inst = new
  inst.jid = job["jid"]
  inst.bid = job["bid"] if inst.respond_to?(:bid=)
  Sidekiq::Testing.server_middleware.invoke(inst, job, job["queue"]) do
    execute_job(inst, job["args"])
  end
end
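Because process_job invokes the job through Sidekiq::Testing.server_middleware, any custom server middleware your tests depend on must be registered on that test chain; a sketch with a hypothetical MyMiddleware:

Sidekiq::Testing.server_middleware do |chain|
  chain.add MyMiddleware
end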
#queue ⇒ Object
Queue for this worker
# File 'lib/sidekiq/testing.rb', line 235

def queue
  get_sidekiq_options["queue"]
end
#queue_as(q) ⇒ Object
# File 'lib/sidekiq/job.rb', line 277

def queue_as(q)
  sidekiq_options("queue" => q.to_s)
end
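A sketch of routing a hypothetical ReportJob to a named queue with queue_as, which per the source above is equivalent to sidekiq_options("queue" => "low"):

class ReportJob
  include Sidekiq::Job
  queue_as :low

  def perform(report_id)
    # ...
  end
end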
#set(options) ⇒ Object
# File 'lib/sidekiq/job.rb', line 281

def set(options)
  Setter.new(self, options)
end
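set returns a Setter, so per-call options can be chained ahead of the push; a sketch with a hypothetical SomeJob (SOME_POOL is the placeholder pool from the #client_push comment):

SomeJob.set(queue: 'critical').perform_async(42)   # override the queue for this push only
SomeJob.set(pool: SOME_POOL).perform_async(42)     # re-target this push to another Redis shard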
#sidekiq_options(opts = {}) ⇒ Object
Allows customization for this type of Job. Legal options:
- queue - use a named queue for this Job, default 'default'
- retry - enable the RetryJobs middleware for this Job; *true* to use the default or an *Integer* retry count
- backtrace - whether to save any error backtrace in the retry payload to display in the Web UI; can be true, false or an integer number of lines to save, default *false*
- pool - use the given Redis connection pool to push this type of job to a given shard
In practice, any option is allowed. This is the main mechanism to configure the options for a specific job.
# File 'lib/sidekiq/job.rb', line 348

def sidekiq_options(opts = {})
  super
end
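A sketch of the options listed above applied to a hypothetical job class:

class HardJob
  include Sidekiq::Job
  sidekiq_options queue: 'critical', retry: 5, backtrace: 20

  def perform(name)
    # ...
  end
end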