Class: SecApi::Middleware::RateLimiter

Inherits:
Faraday::Middleware
  • Object
Defined in:
lib/sec_api/middleware/rate_limiter.rb

Overview

Faraday middleware that extracts rate limit headers from API responses, proactively throttles requests when approaching the rate limit, and queues requests when the rate limit is exhausted.

This middleware parses X-RateLimit-* headers from sec-api.io responses and updates a shared state store with the current rate limit information. When the remaining quota drops below a configurable threshold, the middleware will sleep until the rate limit window resets to avoid hitting 429 errors. When remaining reaches 0 (exhausted), requests are queued until reset.
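As a rough illustration of that throttle rule (the helper names below are hypothetical, not the gem's private API):

# Illustrative sketch only: the proactive-throttle rule described above,
# not the gem's private implementation. Names mirror the X-RateLimit-* headers.
def should_throttle?(limit:, remaining:, threshold:)
  return false if limit.to_i <= 0
  remaining.to_f / limit < threshold
end

def throttle_delay(reset_at, now = Time.now.to_i)
  [reset_at.to_i - now, 0].max # seconds until the window resets, never negative
end

should_throttle?(limit: 100, remaining: 8, threshold: 0.1)  # => true  (8% < 10%)
should_throttle?(limit: 100, remaining: 25, threshold: 0.1) # => false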

Headers parsed:

  • X-RateLimit-Limit: Total requests allowed per time window

  • X-RateLimit-Remaining: Requests remaining in current window

  • X-RateLimit-Reset: Unix timestamp when the limit resets

Position in middleware stack: after Retry, before ErrorHandler. This ensures we capture headers from the final response (after retries) and can extract rate limit info even from error responses (429).

Examples:

Middleware stack integration

Faraday.new(url: base_url) do |conn|
  conn.request :retry, retry_options
  conn.use SecApi::Middleware::RateLimiter,
    state_store: tracker,
    threshold: 0.1  # Throttle when < 10% remaining
  conn.use SecApi::Middleware::ErrorHandler
  conn.adapter Faraday.default_adapter
end

Constant Summary

LIMIT_HEADER = "x-ratelimit-limit"

Header name for total requests allowed per time window.

Returns:

  • (String)

    lowercase header name

REMAINING_HEADER = "x-ratelimit-remaining"

Header name for requests remaining in current window.

Returns:

  • (String)

    lowercase header name

RESET_HEADER = "x-ratelimit-reset"

Header name for Unix timestamp when the limit resets.

Returns:

  • (String)

    lowercase header name

DEFAULT_THRESHOLD = 0.1

Default throttle threshold (10% remaining). Rationale: 10% provides a safety buffer to avoid hitting 429 while not being overly conservative. At typical sec-api.io limits (~100 req/min), 10% = 10 requests of buffer, which absorbs small bursts. Lower values risk 429s; higher values waste capacity. (Architecture ADR-4: Rate Limiting Strategy)

DEFAULT_QUEUE_WAIT_WARNING_THRESHOLD = 300

Default warning threshold for excessive wait times (5 minutes). Rationale: 5 minutes is long enough to indicate a potential issue (API outage, misconfigured limits) but short enough to be actionable. It matches typical monitoring alert thresholds for request latency.

DEFAULT_QUEUE_WAIT_SECONDS = 60

Default wait time when the rate limit is exhausted but reset_at is unknown (60 seconds). Rationale: sec-api.io rate limit windows are typically 60 seconds. When the API doesn't send the X-RateLimit-Reset header, this provides a reasonable fallback aligned with the expected window duration without excessive waiting.
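As a rough illustration of how these defaults could combine when the limit is exhausted (wait_for is a hypothetical helper, not part of the gem):

# Illustrative sketch only: how the defaults above could interact when the
# limit is exhausted. wait_for is a hypothetical helper, not part of the gem.
def wait_for(reset_at, now: Time.now.to_i)
  # DEFAULT_QUEUE_WAIT_SECONDS fallback when X-RateLimit-Reset was not sent
  wait = reset_at ? [reset_at - now, 0].max : 60
  # DEFAULT_QUEUE_WAIT_WARNING_THRESHOLD: flag suspiciously long waits
  warn "rate limit wait of #{wait}s looks excessive" if wait > 300
  wait
end

wait_for(nil)                 # => 60 (no reset header, assume one full window)
wait_for(Time.now.to_i + 90)  # => 90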

Instance Method Summary

Constructor Details

#initialize(app, options = {}) ⇒ RateLimiter

Creates a new RateLimiter middleware instance.

Examples:

With custom threshold, callbacks, and logging

tracker = SecApi::RateLimitTracker.new
middleware = SecApi::Middleware::RateLimiter.new(app,
  state_store: tracker,
  threshold: 0.2,  # Throttle at 20% remaining
  on_throttle: ->(info) { puts "Throttling for #{info[:delay]}s" },
  on_queue: ->(info) { puts "Request queued, #{info[:queue_size]} waiting" },
  on_dequeue: ->(info) { puts "Request dequeued after #{info[:waited]}s" },
  on_excessive_wait: ->(info) { puts "Warning: wait time #{info[:wait_time]}s" },
  logger: Rails.logger,
  log_level: :info
)

Parameters:

  • app (#call)

    The next middleware in the stack

  • options (Hash) (defaults to: {})

    Configuration options

Options Hash (options):

  • :state_store (RateLimitTracker)

    The tracker to update with rate limit info

  • :threshold (Float) — default: 0.1

    Throttle when the fraction of the limit remaining drops below this value (0.0-1.0). Default is 0.1 (10%).

  • :on_throttle (Proc, nil)

    Callback invoked when throttling occurs. Receives a hash with :remaining, :limit, :delay, :reset_at, and :request_id keys.

  • :on_queue (Proc, nil)

    Callback invoked when a request is queued due to exhausted rate limit (remaining = 0). Receives a hash with :queue_size, :wait_time, :reset_at, and :request_id keys.

  • :queue_wait_warning_threshold (Integer) — default: 300

    Seconds threshold for warning about excessive wait times. Default is 300 (5 minutes).

  • :on_excessive_wait (Proc, nil)

    Callback invoked when wait time exceeds queue_wait_warning_threshold. Receives a hash with :wait_time, :threshold, :reset_at, and :request_id keys.

  • :on_dequeue (Proc, nil)

    Callback invoked when a request exits the queue (after waiting). Receives a hash with :queue_size, :waited, and :request_id keys.

  • :logger (Logger, nil)

    Logger instance for structured rate limit logging. Set to nil (default) to disable logging.

  • :log_level (Symbol) — default: :info

    Log level for rate limit events.



# File 'lib/sec_api/middleware/rate_limiter.rb', line 109

def initialize(app, options = {})
  super(app)
  @state_store = options[:state_store]
  @threshold = options.fetch(:threshold, DEFAULT_THRESHOLD)
  @on_throttle = options[:on_throttle]
  @on_queue = options[:on_queue]
  @on_dequeue = options[:on_dequeue]
  @on_excessive_wait = options[:on_excessive_wait]
  @queue_wait_warning_threshold = options.fetch(
    :queue_wait_warning_threshold,
    DEFAULT_QUEUE_WAIT_WARNING_THRESHOLD
  )
  @logger = options[:logger]
  @log_level = options.fetch(:log_level, :info)
  # Thread-safety design: Mutex + ConditionVariable pattern for efficient blocking.
  # Why not just sleep? Sleep wastes CPU cycles polling. ConditionVariable allows
  # threads to truly wait (zero CPU) until signaled, crucial for high-concurrency
  # workloads (Sidekiq, Puma) where many threads may be rate-limited simultaneously.
  # Why not atomic counters? We need to coordinate multiple operations (check state,
  # increment queue, wait) atomically, which requires a mutex.
  @mutex = Mutex.new
  @condition = ConditionVariable.new
end

Instance Method Details

#call(env) ⇒ Faraday::Response

Processes the request with rate limit queueing, throttling, and header extraction.

Before sending the request:

  1. Generates a unique request_id (UUID) for tracing across callbacks

  2. If rate limit is exhausted (remaining = 0), queues the request until reset

  3. Otherwise, checks if below threshold and throttles if needed

After the response, extracts rate limit headers to update state.

Parameters:

  • env (Faraday::Env)

    The request/response environment

Returns:

  • (Faraday::Response)

    The response



# File 'lib/sec_api/middleware/rate_limiter.rb', line 154

def call(env)
  # Generate unique request_id for tracing across all callbacks
  request_id = env[:request_id] ||= SecureRandom.uuid

  wait_if_exhausted(request_id)
  throttle_if_needed(request_id)

  @app.call(env).on_complete do |response_env|
    extract_rate_limit_headers(response_env)
    signal_waiting_threads
  end
end
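The private helpers referenced above (wait_if_exhausted, signal_waiting_threads) are not shown in this section; the sketch below is only a plausible shape for them, based on the Mutex + ConditionVariable design documented in #initialize, with exhausted? and reset_at assumed as methods on the state store:

# Illustrative sketch only: one plausible shape for the private
# wait_if_exhausted / signal_waiting_threads pair, whose bodies are not shown
# in this section. exhausted? and reset_at are assumed methods on the tracker.
def wait_if_exhausted(request_id)
  # request_id would be passed to the on_queue/on_dequeue callbacks (omitted here).
  @mutex.synchronize do
    while @state_store&.exhausted?
      timeout = (@state_store.reset_at || Time.now + 60) - Time.now
      break if timeout <= 0
      # Blocks without spinning; wakes when signaled or when the timeout elapses.
      @condition.wait(@mutex, timeout)
    end
  end
end

def signal_waiting_threads
  # Wake every queued request so each can re-check the rate limit state.
  @mutex.synchronize { @condition.broadcast }
end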

#queued_count ⇒ Integer

Returns the current count of queued (waiting) requests.

Delegates to the state store if available, otherwise returns 0.

Returns:

  • (Integer)

    Number of requests currently waiting for rate limit reset



# File 'lib/sec_api/middleware/rate_limiter.rb', line 138

def queued_count
  @state_store&.queued_count || 0
end