Class: Sidekiq::Client
Inherits: Object
Includes: JobUtil, TestingClient
Defined in: lib/sidekiq/client.rb
Constant Summary
Constants included from JobUtil
Instance Attribute Summary
- #redis_pool ⇒ Object
  Returns the value of attribute redis_pool.
Class Method Summary
- .enqueue(klass, *args) ⇒ Object
  Resque compatibility helpers.
- .enqueue_in(interval, klass, *args) ⇒ Object
  Example usage: Sidekiq::Client.enqueue_in(3.minutes, MyJob, 'foo', 1, :bat => 'bar')
- .enqueue_to(queue, klass, *args) ⇒ Object
  Example usage: Sidekiq::Client.enqueue_to(:queue_name, MyJob, 'foo', 1, :bat => 'bar')
- .enqueue_to_in(queue, interval, klass, *args) ⇒ Object
  Example usage: Sidekiq::Client.enqueue_to_in(:queue_name, 3.minutes, MyJob, 'foo', 1, :bat => 'bar')
- .push(item) ⇒ Object
- .push_bulk ⇒ Object
- .via(pool) ⇒ Object
  Allows sharding of jobs across any number of Redis instances.
Instance Method Summary
- #cancel!(jid) ⇒ Object
  Cancel the IterableJob with the given JID.
- #initialize(*args, **kwargs) ⇒ Client (constructor)
  Sidekiq::Client is responsible for pushing job payloads to Redis.
- #middleware(&block) ⇒ Object
  Define client-side middleware.
- #push(item) ⇒ Object
  The main method used to push a job to Redis.
- #push_bulk(items) ⇒ Object
  Push a large number of jobs to Redis.
Methods included from JobUtil
#normalize_item, #normalized_hash, #validate, #verify_json
Constructor Details
#initialize(*args, **kwargs) ⇒ Client
# File 'lib/sidekiq/client.rb', line 45

def initialize(*args, **kwargs)
  if args.size == 1 && kwargs.size == 0
    warn "Sidekiq::Client.new(pool) is deprecated, please use Sidekiq::Client.new(pool: pool), #{caller(0..3)}"
    # old calling method, accept 1 pool argument
    @redis_pool = args[0]
    @chain = Sidekiq.default_configuration.client_middleware
    @config = Sidekiq.default_configuration
  else
    # new calling method: keyword arguments
    @config = kwargs[:config] || Sidekiq.default_configuration
    @redis_pool = kwargs[:pool] || Thread.current[:sidekiq_redis_pool] || @config&.redis_pool
    @chain = kwargs[:chain] || @config&.client_middleware
    raise ArgumentError, "No Redis pool available for Sidekiq::Client" unless @redis_pool
  end
end
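The two calling conventions above can be sketched in isolation. `build_client` below is a hypothetical stand-in that mirrors only the dispatch logic; it is not part of Sidekiq's API.

```ruby
# Hypothetical stand-in mirroring the constructor's dispatch between the
# deprecated positional form and the keyword form shown above.
def build_client(*args, **kwargs)
  if args.size == 1 && kwargs.empty?
    # old form: Sidekiq::Client.new(pool) -- deprecated
    {pool: args[0], deprecated: true}
  else
    # new form: Sidekiq::Client.new(pool: some_pool)
    pool = kwargs[:pool]
    raise ArgumentError, "No Redis pool available" unless pool
    {pool: pool, deprecated: false}
  end
end

build_client(:legacy_pool)    # => {pool: :legacy_pool, deprecated: true}
build_client(pool: :my_pool)  # => {pool: :my_pool, deprecated: false}
```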
Instance Attribute Details
#redis_pool ⇒ Object
Returns the value of attribute redis_pool.
# File 'lib/sidekiq/client.rb', line 31

def redis_pool
  @redis_pool
end
Class Method Details
.enqueue(klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 206

def enqueue(klass, *args)
  klass.client_push("class" => klass, "args" => args)
end
.enqueue_in(interval, klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 234

def enqueue_in(interval, klass, *args)
  klass.perform_in(interval, *args)
end
.enqueue_to(queue, klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 213

def enqueue_to(queue, klass, *args)
  klass.client_push("queue" => queue, "class" => klass, "args" => args)
end
.enqueue_to_in(queue, interval, klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 220

def enqueue_to_in(queue, interval, klass, *args)
  int = interval.to_f
  now = Time.now.to_f
  ts = ((int < 1_000_000_000) ? now + int : int)

  item = {"class" => klass, "args" => args, "at" => ts, "queue" => queue}
  item.delete("at") if ts <= now

  klass.client_push(item)
end
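Note that the interval argument does double duty: values below 1_000_000_000 (an epoch value in September 2001) are treated as relative offsets from now, anything larger as an absolute Unix timestamp. A standalone sketch of that heuristic, extracted from the method above:

```ruby
# Standalone sketch of the 'at' resolution used above: small numbers are
# relative intervals, epoch-sized numbers are absolute timestamps.
def resolve_at(interval, now = Time.now.to_f)
  int = interval.to_f
  (int < 1_000_000_000) ? now + int : int
end

now = 1_700_000_000.0
resolve_at(180, now)            # => 1700000180.0 (3 minutes from now)
resolve_at(1_700_000_500, now)  # => 1700000500.0 (already an epoch timestamp)
```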
.push(item) ⇒ Object
# File 'lib/sidekiq/client.rb', line 190

def push(item)
  new.push(item)
end
.push_bulk ⇒ Object
# File 'lib/sidekiq/client.rb', line 194

def push_bulk(...)
  new.push_bulk(...)
end
.via(pool) ⇒ Object
Allows sharding of jobs across any number of Redis instances. All jobs defined within the block will use the given Redis connection pool.
pool = ConnectionPool.new { Redis.new }
Sidekiq::Client.via(pool) do
SomeJob.perform_async(1,2,3)
SomeOtherJob.perform_async(1,2,3)
end
Generally this is only needed for very large Sidekiq installs processing thousands of jobs per second. I do not recommend sharding unless you cannot scale any other way (e.g. splitting your app into smaller apps).
# File 'lib/sidekiq/client.rb', line 180

def self.via(pool)
  raise ArgumentError, "No pool given" if pool.nil?
  current_sidekiq_pool = Thread.current[:sidekiq_redis_pool]
  Thread.current[:sidekiq_redis_pool] = pool
  yield
ensure
  Thread.current[:sidekiq_redis_pool] = current_sidekiq_pool
end
Instance Method Details
#cancel!(jid) ⇒ Object
Cancel the IterableJob with the given JID. **NB: Cancellation is asynchronous.** Iteration checks every five seconds so this will not immediately stop the given job.
# File 'lib/sidekiq/client.rb', line 64

def cancel!(jid)
  key = "it-#{jid}"
  _, result, _ = Sidekiq.redis do |c|
    c.pipelined do |p|
      p.hsetnx(key, "cancelled", Time.now.to_i)
      p.hget(key, "cancelled")
      p.expire(key, Sidekiq::Job::Iterable::STATE_TTL)
      # TODO When Redis 7.2 is required:
      # p.expire(key, Sidekiq::Job::Iterable::STATE_TTL, "nx")
    end
  end
  result.to_i
end
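The HSETNX/HGET pair makes cancellation idempotent: only the first cancel! records a timestamp, and every later call returns that original value. A pure-Ruby sketch of those Redis semantics, with a plain Hash standing in for the Redis hash at "it-<jid>" (no real connection involved):

```ruby
# Pure-Ruby mimic of the HSETNX + HGET pipeline above; a Hash stands in
# for the Redis hash stored at key "it-<jid>".
def cancel_sketch(state, now)
  state["cancelled"] ||= now  # HSETNX: set only if the field is absent
  state["cancelled"].to_i     # HGET: always returns the first recorded value
end

state = {}
cancel_sketch(state, 1_700_000_000)  # => 1700000000
cancel_sketch(state, 1_700_000_999)  # => 1700000000 (first value wins)
```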
#middleware(&block) ⇒ Object
# File 'lib/sidekiq/client.rb', line 23

def middleware(&block)
  if block
    @chain = @chain.dup
    yield @chain
  end
  @chain
end
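As a usage sketch: a client middleware is any object responding to call(job_class, job, queue, redis_pool) that yields to continue the chain (returning a falsy value without yielding stops the push). TenantMiddleware below is a hypothetical example; invoking it directly shows the contract without a running Redis.

```ruby
# Hypothetical client middleware: annotates every payload before it is pushed.
class TenantMiddleware
  def call(job_class, job, queue, redis_pool)
    job["tenant"] = "acme"  # mutate the payload hash in place
    yield                   # continue the chain; skip the yield to stop the push
  end
end

# Registration would look like:
#   client.middleware { |chain| chain.add TenantMiddleware }
job = {"class" => "MyJob", "args" => [1]}
TenantMiddleware.new.call("MyJob", job, "default", nil) { job }
job["tenant"]  # => "acme"
```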
#push(item) ⇒ Object
The main method used to push a job to Redis. Accepts a number of options:
queue - the named queue to use, default 'default'
class - the job class to call, required
args - an array of simple arguments to the perform method, must be JSON-serializable
at - timestamp to schedule the job (optional), must be Numeric (e.g. Time.now.to_f)
retry - whether to retry this job if it fails, default true or an integer number of retries
retry_for - relative amount of time to retry this job if it fails, default nil
backtrace - whether to save any error backtrace, default false
If class is set to the class name (a String), the job's options will be based on Sidekiq's default job options. Otherwise, they will be based on the job class's options.
Any options valid for a job class's sidekiq_options are also available here.
All keys must be strings, not symbols. NB: because we are serializing to JSON, all symbols in 'args' will be converted to strings. Note that backtrace: true can take quite a bit of space in Redis; a large volume of failing jobs can start Redis swapping if you aren't careful.
Returns a unique Job ID. If middleware stops the job, nil will be returned instead.
Example:
push('queue' => 'my_queue', 'class' => MyJob, 'args' => ['foo', 1, :bat => 'bar'])
# File 'lib/sidekiq/client.rb', line 103

def push(item)
  normed = normalize_item(item)
  payload = middleware.invoke(item["class"], normed, normed["queue"], @redis_pool) do
    normed
  end
  if payload
    verify_json(payload)
    raw_push([payload])
    payload["jid"]
  end
end
#push_bulk(items) ⇒ Object
Push a large number of jobs to Redis. This method cuts out the Redis network round trip latency. It pushes jobs in batches if more than :batch_size (1000 by default) jobs are passed. I wouldn't recommend making :batch_size larger than 1000, but YMMV based on network quality, size of job args, etc. A large number of jobs can cause a bit of Redis command processing latency.
Takes the same arguments as #push except that args is expected to be an Array of Arrays. All other keys are duplicated for each job. Each job is run through the client middleware pipeline and each job gets its own Job ID as normal.
Returns an array of the pushed jobs' jids; it may contain nils if any client middleware prevented a job push.
Example (pushing jobs in batches):
push_bulk('class' => MyJob, 'args' => (1..100_000).to_a, batch_size: 1_000)
# File 'lib/sidekiq/client.rb', line 133

def push_bulk(items)
  batch_size = items.delete(:batch_size) || items.delete("batch_size") || 1_000
  args = items["args"]
  at = items.delete("at")
  raise ArgumentError, "Job 'at' must be a Numeric or an Array of Numeric timestamps" if at && (Array(at).empty? || !Array(at).all? { |entry| entry.is_a?(Numeric) })
  raise ArgumentError, "Job 'at' Array must have same size as 'args' Array" if at.is_a?(Array) && at.size != args.size

  jid = items.delete("jid")
  raise ArgumentError, "Explicitly passing 'jid' when pushing more than one job is not supported" if jid && args.size > 1

  normed = normalize_item(items)
  slice_index = 0
  result = args.each_slice(batch_size).flat_map do |slice|
    raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" unless slice.is_a?(Array) && slice.all?(Array)
    break [] if slice.empty? # no jobs to push

    payloads = slice.map.with_index { |job_args, index|
      copy = normed.merge("args" => job_args, "jid" => SecureRandom.hex(12))
      copy["at"] = (at.is_a?(Array) ? at[slice_index + index] : at) if at
      result = middleware.invoke(items["class"], copy, copy["queue"], @redis_pool) do
        verify_json(copy)
        copy
      end
      result || nil
    }
    slice_index += batch_size

    to_push = payloads.compact
    raw_push(to_push) unless to_push.empty?
    payloads.map { |payload| payload&.[]("jid") }
  end
  result.is_a?(Enumerator::Lazy) ? result.force : result
end
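Since 'at' may be an Array matched index-for-index with 'args', a large bulk push can be spread over time rather than landing all at once. A sketch of building such a staggered schedule; the timestamp math is plain Ruby, while MyJob and the final commented-out push_bulk call are illustrative assumptions:

```ruby
# Build one args entry and one timestamp per job, staggering scheduled
# pushes 30 seconds apart.
args = (1..5).map { |i| [i] }              # Array of Arrays, as push_bulk expects
base = Time.now.to_f + 60                  # first job runs a minute from now
ats  = args.each_index.map { |i| base + i * 30 }

ats.size == args.size  # => true, required when 'at' is an Array
# Sidekiq::Client.new.push_bulk("class" => MyJob, "args" => args, "at" => ats)
```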