Class: CraigScrape

Inherits: Object
Defined in:
lib/libcraigscrape.rb,
lib/geo_listings.rb

Overview

A base class encapsulating the various libcraigscrape objects and providing most of the craigslist interaction methods. Currently, the old class methods are supported in a legacy-compatibility mode, but they are marked for deprecation. Instead, create an instance of the CraigScrape object and use its public instance methods. See the README for easy-to-follow examples.

Defined Under Namespace

Classes: GeoListings, Listings, Posting, Scraper

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(*args) ⇒ CraigScrape

Takes a variable number of site/path specifiers (strings) as arguments. This list gets flattened and passed to CraigScrape::GeoListings.find_sites. See that method's rdoc for a complete set of rules on what arguments are allowed here.



# File 'lib/libcraigscrape.rb', line 38

def initialize(*args)
  @sites_specs = args.flatten
end
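The splat-plus-flatten argument handling can be sketched in plain Ruby; `collect_site_specs` and the specifier strings below are hypothetical stand-ins:

```ruby
# Stand-in for the argument handling above: a variable argument list is
# flattened into a single flat array of site/path specifiers, so nested
# arrays of specifiers are accepted alongside bare strings.
def collect_site_specs(*args)
  args.flatten
end

specs = collect_site_specs('us/fl/miami', ['us/ny/newyork', ['us/ca']])
# specs == ["us/fl/miami", "us/ny/newyork", "us/ca"]
```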

Class Method Details

.scrape_full_post(post_url) ⇒ Object

This method is for legacy compatibility and is not recommended for use by new projects. Instead, consider using CraigScrape::Posting.new

Scrapes a single post url and returns a Posting object representing its contents. Mostly here to preserve backwards compatibility with the older api; CraigScrape::Posting.new "post_url" does the same thing.



# File 'lib/libcraigscrape.rb', line 159

def scrape_full_post(post_url)
  CraigScrape::Posting.new post_url
end

.scrape_listing(listing_url) ⇒ Object

This method is for legacy compatibility and is not recommended for use by new projects. Instead, consider using CraigScrape::Listings.new

Scrapes a single listing url and returns a Listings object representing the contents. Mostly here to preserve backwards compatibility with the older api; CraigScrape::Listings.new "listing_url" does the same thing.



# File 'lib/libcraigscrape.rb', line 127

def scrape_listing(listing_url)
  CraigScrape::Listings.new listing_url
end

.scrape_posts(listing_url, count) ⇒ Object

This method is for legacy compatibility and is not recommended for use by new projects. Instead, consider using the CraigScrape#each_post instance method.

Continually scrapes listings, using the supplied url as a starting point, until 'count' summaries have been retrieved or no more 'next page' links are available to be clicked on. Returns an array of PostSummary objects.



# File 'lib/libcraigscrape.rb', line 168

def scrape_posts(listing_url, count)
  count_so_far = 0
  self.scrape_until(listing_url) {|post| count_so_far+=1; count < count_so_far }
end
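The counting block above can be traced with plain Ruby (the array below stands in for the posts that scrape_until would yield):

```ruby
# Trace of scrape_posts' stop-condition: count_so_far increments once per
# post, and `count < count_so_far` first becomes true on post count+1,
# which is when scrape_until stops (before collecting that post).
count = 3
count_so_far = 0
collected = []
%w[a b c d e].each do |post|      # stand-in for yielded posts
  count_so_far += 1
  break if count < count_so_far   # same condition scrape_posts supplies
  collected << post
end
# collected == ["a", "b", "c"]
```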

.scrape_posts_since(listing_url, newer_then) ⇒ Object

This method is for legacy compatibility and is not recommended for use by new projects. Instead, consider using the CraigScrape#posts_since instance method.

Continually scrapes listings until the date newer_then has been reached, or no more 'next page' links are available to be clicked on. Returns an array of PostSummary objects. Dates are based on the Month/Day 'datestamps' reported in the listing summaries; as such, time-based cutoffs are not supported here. The scrape_until method, together with the SummaryPost.full_post method, could achieve time-based cutoffs, at the expense of retrieving every post in full during enumeration.

Note: The results will not include post summaries having the newer_then date themselves.



# File 'lib/libcraigscrape.rb', line 182

def scrape_posts_since(listing_url, newer_then)
  self.scrape_until(listing_url) {|post| post.post_date <= newer_then}
end

.scrape_until(listing_url, &post_condition) ⇒ Object

This method is for legacy compatibility and is not recommended for use by new projects. Instead, consider using the CraigScrape#each_post instance method.

Continually scrapes listings, using the supplied url as a starting point, until the supplied block returns true or there are no more 'next page' links available to click on.



# File 'lib/libcraigscrape.rb', line 136

def scrape_until(listing_url, &post_condition)
  ret = []

  listings = CraigScrape::Listings.new listing_url
  catch :ScrapeBreak do
    while listings do
      listings.posts.each do |post|
        throw :ScrapeBreak if post_condition.call(post)
        ret << post
      end

      listings = listings.next_page
    end
  end

  ret
end
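The catch/throw early exit can be illustrated with plain Ruby, using nested arrays as hypothetical pages of posts. A symbol tag is used in the sketch because, on Ruby 1.9 and later, catch/throw tags are matched by object identity, so two separate string literals would not match:

```ruby
# The early-exit pattern from scrape_until: throw unwinds out of both the
# per-post loop and the per-page loop at once, preserving whatever was
# collected so far. The numbers are stand-ins for post objects.
pages = [[1, 2], [3, 4], [5, 6]]   # hypothetical pages of posts
ret = []
catch :scrape_break do
  pages.each do |posts|
    posts.each do |post|
      throw :scrape_break if post >= 4   # stand-in for post_condition
      ret << post
    end
  end
end
# ret == [1, 2, 3]
```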

Instance Method Details

#each_listing(*fragments) ⇒ Object

Determines all listings which can be construed by combining the sites specified in the object constructor with the provided url-path fragments.

Passes the first page listing of each of these urls to the provided block.



# File 'lib/libcraigscrape.rb', line 53

def each_listing(*fragments)
  listing_urls_for(fragments).each{|url| yield Listings.new(url) }
end

#each_page_in_each_listing(*fragments) ⇒ Object

Determines all listings which can be construed by combining the sites specified in the object constructor with the provided url-path fragments.

Passes each page on every listing for the passed URLs to the provided block.



# File 'lib/libcraigscrape.rb', line 61

def each_page_in_each_listing(*fragments)
  each_listing(*fragments) do |listing|
    while listing
      yield listing
      listing = listing.next_page
    end
  end
end
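The next_page walk can be sketched with a stand-in Page struct in place of real Listings objects (the page names are hypothetical):

```ruby
# The pagination loop from each_page_in_each_listing: keep yielding the
# current page and following next_page until it returns nil.
Page = Struct.new(:name, :next_page)
third  = Page.new('page3', nil)
second = Page.new('page2', third)
first  = Page.new('page1', second)

visited = []
listing = first
while listing
  visited << listing.name          # stand-in for `yield listing`
  listing = listing.next_page
end
# visited == ["page1", "page2", "page3"]
```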

#each_post(*fragments) ⇒ Object

Determines all listings which can be construed by combining the sites specified in the object constructor with the provided url-path fragments.

Passes all posts from each of these urls to the provided block, in the order they’re parsed (for each listing, newest posts are returned first).



# File 'lib/libcraigscrape.rb', line 83

def each_post(*fragments)
  each_page_in_each_listing(*fragments){ |l| l.posts.each{|p| yield p} }
end

#listings(*fragments) ⇒ Object

Determines all listings which can be construed by combining the sites specified in the object constructor with the provided url-path fragments.

Returns the first page listing of each of these urls, as an array of Listings objects.



# File 'lib/libcraigscrape.rb', line 74

def listings(*fragments)
  listing_urls_for(fragments).collect{|url| Listings.new url }
end

#posts(*fragments) ⇒ Object

Determines all listings which can be construed by combining the sites specified in the object constructor with the provided url-path fragments.

Returns all posts from each of these urls, in the order they’re parsed (newest posts first).



# File 'lib/libcraigscrape.rb', line 92

def posts(*fragments)
  ret = []
  each_page_in_each_listing(*fragments){ |l| ret += l.posts }
  ret
end

#posts_since(newer_then, *fragments) ⇒ Object

Determines all listings which can be construed by combining the sites specified in the object constructor with the provided url-path fragments.

Returns all posts from each of these urls which are newer than the provided newer_then date. (Returns newest posts first.)



# File 'lib/libcraigscrape.rb', line 103

def posts_since(newer_then, *fragments)
  ret = []
  fragments.each do |frag|
    each_post(frag) do |p|
      break if p.post_time <= newer_then
      ret << p
    end
  end

  ret
end
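The cutoff comparison can be traced with hypothetical timestamps standing in for post_time values:

```ruby
require 'time'

# The break condition from posts_since: posts arrive newest-first, and the
# loop stops at the first post at or before newer_then, excluding it.
newer_then = Time.parse('2024-01-02 12:00:00')
post_times = [Time.parse('2024-01-04 09:00:00'),
              Time.parse('2024-01-03 09:00:00'),
              Time.parse('2024-01-01 09:00:00')]   # newest first
kept = []
post_times.each do |t|
  break if t <= newer_then   # same comparison posts_since uses
  kept << t
end
# kept.size == 2
```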

#sites ⇒ Object

Returns which sites are included in any operations performed by this object. This is ascertained directly from the spec list given to the constructor.



# File 'lib/libcraigscrape.rb', line 44

def sites
  @sites ||= GeoListings.find_sites @sites_specs
  @sites
end
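The ||= memoization used here can be shown with a hypothetical stand-in lookup in place of GeoListings.find_sites:

```ruby
# The memoization pattern from #sites: the expensive lookup runs once, and
# later calls return the cached result. `find_sites` is a hypothetical
# stand-in, not the real GeoListings lookup (which hits the network).
calls = 0
find_sites = lambda { |specs| calls += 1; specs.map(&:upcase) }

cached = nil
sites  = lambda { cached ||= find_sites.call(['miami', 'newyork']) }

sites.call
sites.call   # served from the cache; find_sites is not called again
# calls == 1
```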