Class: MinifluxSanity
- Inherits: Object
- Defined in: lib/miniflux_sanity.rb
Instance Method Summary
- #fetch_entries ⇒ Object
- #filter_before_cutoff(entries:) ⇒ Object
- #initialize(token:, host:, days:) ⇒ MinifluxSanity (constructor): A new instance of MinifluxSanity.
- #is_older_than_cutoff?(published_at:) ⇒ Boolean
- #last_fetched_today? ⇒ Boolean
- #mark_entries_as_read ⇒ Object
Constructor Details
#initialize(token:, host:, days:) ⇒ MinifluxSanity
Returns a new instance of MinifluxSanity.
# File 'lib/miniflux_sanity.rb', line 7

def initialize(token:, host:, days:)
  # Configuration object
  # TODO: Is there a way to pass this cleanly? We're passing everything we
  # receive, with the exact same argument names as well.
  @@config = Config.new token: token, host: host, days: days

  # Set up Miniflux and cache clients
  @@miniflux_client = MinifluxApi.new host: host, token: @@config.auth[:token]
  @@cache_client = Cache.new path: "cache.json"
end
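As a quick usage sketch, an instance is driven by calling #fetch_entries and then #mark_entries_as_read; the require path, token, host, and day count below are placeholders, not values taken from the library itself.

require "miniflux_sanity"  # assumed require path for lib/miniflux_sanity.rb

# Placeholder credentials; mark unread entries older than 30 days as read.
client = MinifluxSanity.new token: "YOUR_API_TOKEN",
                            host: "https://miniflux.example.com",
                            days: 30

client.fetch_entries        # caches unread entries published before the cutoff
client.mark_entries_as_read # marks the cached entries as read, 10 at a time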
Instance Method Details
#fetch_entries ⇒ Object
# File 'lib/miniflux_sanity.rb', line 33

def fetch_entries
  if self.last_fetched_today?
    puts "Last run was today, skipping fetch."
    exit
  else
    puts "Now collecting all unread entries before the specified date."
  end

  # We get these in blocks of 250.
  # When we hit <250, we stop because that is the last call to make!
  size = 0
  limit = 250
  count = limit

  until count < limit do
    # The accessor on @@config is cut off in the original listing; it is assumed
    # to be the cutoff Unix timestamp derived from the configured number of days.
    entries = @@miniflux_client.get_entries before: @@config.cutoff, offset: size, limit: limit

    if entries.length < 1
      puts "No more new entries"
      exit true
    end

    # TODO: This *should* be a bang-style method based on how Ruby is written; it
    # should modify the entries list itself. If we modelled an Entry and Entries,
    # we could add this as a method on the Entries model.
    entries = self.filter_before_cutoff entries: entries
    count = entries.count
    size = size + count

    puts "Fetched #{size} entries."

    @@cache_client.last_fetched = Date.today.to_s
    @@cache_client.size = size
    @@cache_client.add_entries_to_file data: entries

    unless count < limit
      puts "Fetching more..."
    end
  end
end
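For reference, #filter_before_cutoff reads each entry's "published_at" and #mark_entries_as_read later reads the cache back via cached_data["data"] and each entry's "id", so the cache file written here appears to hold roughly the following shape. The values are illustrative only, and entries may carry whatever other fields the Miniflux API returns.

# Illustrative cache.json contents; values are made up.
{
  "data" => [
    { "id" => 101, "published_at" => "2021-01-15T09:30:00Z" },
    { "id" => 102, "published_at" => "2021-02-02T18:00:00Z" }
  ]
}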
#filter_before_cutoff(entries:) ⇒ Object
# File 'lib/miniflux_sanity.rb', line 72

def filter_before_cutoff(entries:)
  # Just for some extra resilience, we check the published_at date ourselves before
  # keeping an entry. This would be helpful where the Miniflux API itself has a bug
  # with its before filter, for example.
  entries.filter { |entry| is_older_than_cutoff? published_at: entry["published_at"] }
end
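A hedged usage sketch, assuming a client configured with days: 30 as in the constructor example above; the entry hashes are made up, and in practice they come from the Miniflux API via #fetch_entries.

entries = [
  { "id" => 1, "published_at" => "2019-06-01T00:00:00Z" }, # well before a 30-day cutoff
  { "id" => 2, "published_at" => Time.now.to_s }           # today, after the cutoff
]

client.filter_before_cutoff entries: entries
# => [{ "id" => 1, "published_at" => "2019-06-01T00:00:00Z" }]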
#is_older_than_cutoff?(published_at:) ⇒ Boolean
# File 'lib/miniflux_sanity.rb', line 25

def is_older_than_cutoff?(published_at:)
  # As above, the @@config accessor is cut off in the original listing and is assumed
  # to be the cutoff Unix timestamp derived from the configured number of days.
  if Date.parse(published_at).to_time.to_i > @@config.cutoff
    false
  else
    true
  end
end
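In other words, the parsed timestamp is compared against the configured cutoff; for example, with a cutoff lying a few days in the past:

client.is_older_than_cutoff? published_at: "2019-06-01T00:00:00Z" # => true
client.is_older_than_cutoff? published_at: Time.now.to_s          # => false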
#last_fetched_today? ⇒ Boolean
# File 'lib/miniflux_sanity.rb', line 17

def last_fetched_today?
  if @@cache_client.last_fetched.nil?
    false
  else
    Date.parse(@@cache_client.last_fetched.to_s) == Date.today
  end
end
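This is the guard #fetch_entries uses to avoid re-fetching on the same day; a minimal sketch of its behaviour, assuming the cache described above:

client.last_fetched_today? # => false on a fresh cache (last_fetched is nil)
client.fetch_entries       # records Date.today in the cache when entries are fetched
client.last_fetched_today? # => true for the rest of the day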
#mark_entries_as_read ⇒ Object
# File 'lib/miniflux_sanity.rb', line 77

def mark_entries_as_read
  start = 0
  interval = 10
  cached_data = @@cache_client.read_from_file

  while @@cache_client.size != 0 do
    stop = start + interval

    # For every 10 entries, mark as read.
    # Reduce size and remove entries accordingly in our file.
    filtered_data = cached_data["data"][start...stop]
    ids_to_mark_read = filtered_data.map { |entry| entry["id"] }

    @@miniflux_client.mark_entries_read ids: ids_to_mark_read

    @@cache_client.size -= interval
    @@cache_client.remove_entries_from_file ids: ids_to_mark_read

    start += interval

    puts "#{@@cache_client.size} entries left to be marked as read."
  end
end
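A sketch of a run, assuming #fetch_entries previously cached 30 entries: the loop issues three batched calls of 10 ids each to the Miniflux API, shrinking the cached size after every batch.

client.mark_entries_as_read
# "20 entries left to be marked as read."
# "10 entries left to be marked as read."
# "0 entries left to be marked as read."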