Module: Irc::Utils
- Defined in:
- lib/rbot/core/utils/utils.rb,
lib/rbot/core/utils/httputil.rb
Overview
Miscellaneous useful functions
Defined Under Namespace
Classes: HttpUtil
Constant Summary
- SEC_PER_MIN = 60
  Seconds per minute
- SEC_PER_HR = SEC_PER_MIN * 60
  Seconds per hour
- SEC_PER_DAY = SEC_PER_HR * 24
  Seconds per day
- SEC_PER_MNTH = SEC_PER_DAY * 30
  Seconds per (30-day) month
- SEC_PER_YR = SEC_PER_MNTH * 12
  Seconds per (30*12 = 360-day) year
- @@bot = nil
- @@safe_save_dir = nil
Class Method Summary
- .bot ⇒ Object
  The bot instance.
- .bot=(b) ⇒ Object
  Set up some Utils routines which depend on the associated bot.
- .check_location(ds, rx) ⇒ Object
  HTML info filters often need to check if the webpage location of a passed DataStream ds matches a given Regexp.
- .decode_html_entities(str) ⇒ Object
  Decode HTML entities in the String str, using HTMLEntities if the package was found, or UNESCAPE_TABLE otherwise.
- .distance_of_time_in_words(minutes) ⇒ Object
  Translates a number of minutes into verbal distances.
- .get_first_pars(urls, count, opts = {}) ⇒ Object
  Get the first pars of the first count urls.
- .get_html_info(doc, opts = {}) ⇒ Object
  This method extracts title, content (first par) and extra information from the given document doc.
- .get_resp_html_info(resp, opts = {}) ⇒ Object
  This method extracts title, content (first par) and extra information from the given Net::HTTPResponse resp.
- .get_string_html_info(text, opts = {}) ⇒ Object
  This method extracts title and content (first par) from the given HTML or XML document text, using standard methods (String#ircify_html_title, Utils.ircify_first_html_par).
- .ircify_first_html_par(xml_org, opts = {}) ⇒ Object
  Try to grab and IRCify the first HTML par (<p> tag) in the given string.
- .ircify_first_html_par_wh(xml_org, opts = {}) ⇒ Object
  HTML first par grabber using hpricot.
- .ircify_first_html_par_woh(xml_org, opts = {}) ⇒ Object
  HTML first par grabber without hpricot.
- .safe_exec(command, *args) ⇒ Object
  Execute an external program, returning a String obtained by redirecting the program's standard error and output.
- .safe_save(file) {|temp| ... } ⇒ Object
  Safely (atomically) save to file, by passing a tempfile to the block and then moving the tempfile to its final location when done.
- .secs_to_short(seconds) ⇒ Object
  Turn a number of seconds into an hours:minutes:seconds string, e.g. 3:18:10.
- .secs_to_string(secs) ⇒ Object
  Turn a number of seconds into a human readable string, e.g. 2 days, 3 hours, 18 minutes and 10 seconds.
- .secs_to_string_case(array, var, string, plural) ⇒ Object
  Auxiliary method needed by Utils.secs_to_string.
- .timeago(time, options = {}) ⇒ Object
  Returns human readable time.
- .try_htmlinfo_filters(ds) ⇒ Object
  This method runs an appropriately-crafted DataStream ds through the filters in the :htmlinfo filter group, in order.
Class Method Details
.bot ⇒ Object
The bot instance
# File 'lib/rbot/core/utils/utils.rb', line 164

def Utils.bot
  @@bot
end
.bot=(b) ⇒ Object
Set up some Utils routines which depend on the associated bot.
# File 'lib/rbot/core/utils/utils.rb', line 169

def Utils.bot=(b)
  debug "initializing utils"
  @@bot = b
  @@safe_save_dir = "#{@@bot.botclass}/safe_save"
end
.check_location(ds, rx) ⇒ Object
HTML info filters often need to check whether the webpage location of a passed DataStream ds matches a given Regexp. This helper returns the Array of matching locations, or nil if there are none.
# File 'lib/rbot/core/utils/utils.rb', line 642

def Utils.check_location(ds, rx)
  debug ds[:headers]
  if h = ds[:headers]
    loc = [h['x-rbot-location'],h['location']].flatten.grep(rx)
  end
  loc ||= []
  debug loc
  return loc.empty? ? nil : loc
end
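The same logic can be exercised standalone. The sketch below is a minimal stand-in that uses a plain Hash in place of rbot's DataStream (the method and test data here are illustrative, not part of the module):

```ruby
# Standalone sketch of the check_location logic; a plain Hash stands in
# for rbot's DataStream, and the debug calls are omitted.
def check_location(ds, rx)
  if h = ds[:headers]
    # headers may be nil or missing; grep uses ===, so nil entries are skipped
    loc = [h['x-rbot-location'], h['location']].flatten.grep(rx)
  end
  loc ||= []
  loc.empty? ? nil : loc
end

ds = { :headers => { 'location' => ['http://example.org/page'] } }
puts check_location(ds, /example\.org/).inspect
```

Note that both the 'x-rbot-location' and the plain 'location' headers are consulted, so redirects recorded by rbot's httputil are matched too.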
.decode_html_entities(str) ⇒ Object
Decode HTML entities in the String str, using HTMLEntities if the package was found, or UNESCAPE_TABLE otherwise.
# File 'lib/rbot/core/utils/utils.rb', line 324

def Utils.decode_html_entities(str)
  if defined? ::HTMLEntities
    return HTMLEntities.decode_entities(str)
  else
    str.gsub(/(&(.+?);)/) {
      symbol = $2
      # remove the 0-padding from unicode integers
      if symbol =~ /^#(\d+)$/
        symbol = $1.to_i.to_s
      end
      # output the symbol's irc-translated character, or a * if it's unknown
      UNESCAPE_TABLE[symbol] || (symbol.match(/^\d+$/) ? [symbol.to_i].pack("U") : '*')
    }
  end
end
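The numeric-entity branch of the fallback path can be shown in isolation. This is a reduced sketch covering only decimal numeric entities; rbot's UNESCAPE_TABLE for named entities is internal to the module and omitted here:

```ruby
# Minimal sketch of the fallback decoding path: decimal numeric HTML
# entities are converted to their UTF-8 characters via Array#pack("U").
def decode_numeric_entities(str)
  str.gsub(/&#(\d+);/) { [$1.to_i].pack("U") }
end

puts decode_numeric_entities("caf&#233;")   # "café"
```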
.distance_of_time_in_words(minutes) ⇒ Object
Translates a number of minutes into verbal distances, e.g. 0.5 => less than a minute, 70 => about one hour.
# File 'lib/rbot/core/utils/utils.rb', line 263

def Utils.distance_of_time_in_words(minutes)
  case
  when minutes < 1
    _("less than a minute")
  when minutes < 50
    _("%{m} minutes") % {:m => minutes}
  when minutes < 90
    _("about one hour")
  when minutes < 1080
    _("%{m} hours") % {:m => (minutes / 60).round}
  when minutes < 1440
    _("one day")
  when minutes < 2880
    _("about one day")
  else
    _("%{m} days") % {:m => (minutes / 1440).round}
  end
end
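The bucketing above can be tried standalone. This sketch replaces the gettext _() calls with plain strings but keeps the same thresholds:

```ruby
# Standalone sketch of the distance bucketing, with _() replaced by
# plain English strings (no gettext dependency).
def distance_of_time_in_words(minutes)
  case
  when minutes < 1    then "less than a minute"
  when minutes < 50   then "#{minutes} minutes"
  when minutes < 90   then "about one hour"
  when minutes < 1080 then "#{(minutes / 60).round} hours"
  when minutes < 1440 then "one day"
  when minutes < 2880 then "about one day"
  else                     "#{(minutes / 1440).round} days"
  end
end

puts distance_of_time_in_words(70)    # "about one hour"
puts distance_of_time_in_words(0.5)   # "less than a minute"
```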
.get_first_pars(urls, count, opts = {}) ⇒ Object
Get the first pars of the first count urls. The pages are downloaded using the bot httputil service. Returns an array of the first paragraphs fetched. If the optional opts :message is given, the paragraphs are also echoed as replies to that IRC message.
# File 'lib/rbot/core/utils/utils.rb', line 688

def Utils.get_first_pars(urls, count, opts={})
  idx = 0
  msg = opts[:message]
  retval = Array.new
  while count > 0 and urls.length > 0
    url = urls.shift
    idx += 1
    begin
      info = Utils.get_html_info(URI.parse(url), opts)
      par = info[:content]
      retval.push(par)
      if par
        msg.reply "[#{idx}] #{par}", :overlong => :truncate if msg
        count -= 1
      end
    rescue
      debug "Unable to retrieve #{url}: #{$!}"
      next
    end
  end
  return retval
end
.get_html_info(doc, opts = {}) ⇒ Object
This method extracts title, content (first par) and extra information from the given document doc.
doc can be a URI, a Net::HTTPResponse or a String.
If doc is a String, only title and content information are retrieved (if possible), using standard methods.
If doc is a URI or a Net::HTTPResponse, additional information is retrieved, and special title/summary extraction routines are used if possible.
# File 'lib/rbot/core/utils/utils.rb', line 557

def Utils.get_html_info(doc, opts={})
  case doc
  when String
    Utils.get_string_html_info(doc, opts)
  when Net::HTTPResponse
    Utils.get_resp_html_info(doc, opts)
  when URI
    ret = DataStream.new
    @@bot.httputil.get_response(doc) { |resp|
      ret.replace Utils.get_resp_html_info(resp, opts)
    }
    return ret
  else
    raise
  end
end
.get_resp_html_info(resp, opts = {}) ⇒ Object
This method extracts title, content (first par) and extra information from the given Net::HTTPResponse resp.
Currently, the only accepted options (in opts) are
- uri_fragment: the URI fragment of the original request
- full_body: get the whole body instead of @@bot.config bytes only
Returns a DataStream with the following keys:
- text: the (partial) body
- title: the title of the document (if any)
- content: the first paragraph of the document (if any)
- headers: the headers of the Net::HTTPResponse. The value is a Hash whose keys are lowercase forms of the HTTP header fields, and whose values are Arrays.
# File 'lib/rbot/core/utils/utils.rb', line 594

def Utils.get_resp_html_info(resp, opts={})
  case resp
  when Net::HTTPSuccess
    loc = URI.parse(resp['x-rbot-location'] || resp['location']) rescue nil
    if loc and loc.fragment and not loc.fragment.empty?
      opts[:uri_fragment] ||= loc.fragment
    end
    ret = DataStream.new(opts.dup)
    ret[:headers] = resp.to_hash
    ret[:text] = partial = opts[:full_body] ? resp.body : resp.partial_body(@@bot.config['http.info_bytes'])
    filtered = Utils.try_htmlinfo_filters(ret)
    if filtered
      return filtered
    elsif resp['content-type'] =~ /^text\/|(?:x|ht)ml/
      ret.merge!(Utils.get_string_html_info(partial, opts))
    end
    return ret
  else
    raise UrlLinkError, "getting link (#{resp.code} - #{resp.message})"
  end
end
.get_string_html_info(text, opts = {}) ⇒ Object
This method extracts title and content (first par) from the given HTML or XML document text, using standard methods (String#ircify_html_title, Utils.ircify_first_html_par)
Currently, the only accepted option (in opts) is
- uri_fragment: the URI fragment of the original request
# File 'lib/rbot/core/utils/utils.rb', line 660

def Utils.get_string_html_info(text, opts={})
  debug "getting string html info"
  txt = text.dup
  title = txt.ircify_html_title
  debug opts
  if frag = opts[:uri_fragment] and not frag.empty?
    fragreg = /<a\s+(?:[^>]+\s+)?(?:name|id)=["']?#{frag}["']?[^>]*>/im
    debug fragreg
    debug txt
    if txt.match(fragreg)
      # grab the post-match
      txt = $'
    end
    debug txt
  end
  c_opts = opts.dup
  c_opts[:strip] ||= title
  content = Utils.ircify_first_html_par(txt, c_opts)
  content = nil if content.empty?
  return {:title => title, :content => content}
end
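The anchor-matching step above (restarting extraction after a named <a> anchor when a uri_fragment is given) can be demonstrated standalone. The helper name and the HTML snippet below are made up for illustration; the regexp is the one used by the method:

```ruby
# Sketch of the uri_fragment handling: when the original request carried a
# fragment, extraction restarts at the text after the matching
# <a name=...> / <a id=...> anchor (Ruby's $' holds the post-match).
def after_fragment(html, frag)
  fragreg = /<a\s+(?:[^>]+\s+)?(?:name|id)=["']?#{frag}["']?[^>]*>/im
  html.match(fragreg) ? $' : html
end

html = '<p>intro</p><a name="sec2"></a><p>target paragraph</p>'
puts after_fragment(html, "sec2")
```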
.ircify_first_html_par(xml_org, opts = {}) ⇒ Object
Try to grab and IRCify the first HTML par (<p> tag) in the given string. If possible, grab the one after the first heading
It is possible to pass some options to determine how the stripping occurs. Currently supported options are
- strip: Regex or String to strip at the beginning of the obtained text
- min_spaces: minimum number of spaces a paragraph should have
# File 'lib/rbot/core/utils/utils.rb', line 350

def Utils.ircify_first_html_par(xml_org, opts={})
  if defined? ::Hpricot
    Utils.ircify_first_html_par_wh(xml_org, opts)
  else
    Utils.ircify_first_html_par_woh(xml_org, opts)
  end
end
.ircify_first_html_par_wh(xml_org, opts = {}) ⇒ Object
HTML first par grabber using hpricot
# File 'lib/rbot/core/utils/utils.rb', line 359

def Utils.ircify_first_html_par_wh(xml_org, opts={})
  doc = Hpricot(xml_org)
  # Strip styles and scripts
  (doc/"style|script").remove
  debug doc
  strip = opts[:strip]
  strip = Regexp.new(/^#{Regexp.escape(strip)}/) if strip.kind_of?(String)
  min_spaces = opts[:min_spaces] || 8
  min_spaces = 0 if min_spaces < 0
  txt = String.new
  pre_h = pars = by_span = nil
  while true
    debug "Minimum number of spaces: #{min_spaces}"
    # Initial attempt: <p> that follows <h\d>
    if pre_h.nil?
      pre_h = Hpricot::Elements[]
      found_h = false
      doc.search("*") { |e|
        next if e.bogusetag?
        case e.pathname
        when /^h\d/
          found_h = true
        when 'p'
          pre_h << e if found_h
        end
      }
      debug "Hx: found: #{pre_h.pretty_inspect}"
    end
    pre_h.each { |p|
      debug p
      txt = p.to_html.ircify_html
      txt.sub!(strip, '') if strip
      debug "(Hx attempt) #{txt.inspect} has #{txt.count(" ")} spaces"
      break unless txt.empty? or txt.count(" ") < min_spaces
    }
    return txt unless txt.empty? or txt.count(" ") < min_spaces

    # Second natural attempt: just get any <p>
    pars = doc/"p" if pars.nil?
    debug "par: found: #{pars.pretty_inspect}"
    pars.each { |p|
      debug p
      txt = p.to_html.ircify_html
      txt.sub!(strip, '') if strip
      debug "(par attempt) #{txt.inspect} has #{txt.count(" ")} spaces"
      break unless txt.empty? or txt.count(" ") < min_spaces
    }
    return txt unless txt.empty? or txt.count(" ") < min_spaces

    # Nothing yet ... let's get drastic: we look for non-par elements too,
    # but only for those that match something that we know is likely to
    # contain text

    # Some blogging and forum platforms use spans or divs with a 'body' or
    # 'message' or 'text' in their class to mark actual text. Since we want
    # the class match to be partial and case insensitive, we collect
    # the common elements that may have this class and then filter out those
    # we don't need. If no divs or spans are found, we'll accept additional
    # elements too (td, tr, tbody, table).
    if by_span.nil?
      by_span = Hpricot::Elements[]
      extra = Hpricot::Elements[]
      doc.search("*") { |el|
        next if el.bogusetag?
        case el.pathname
        when AFTER_PAR_PATH
          by_span.push el if el[:class] =~ AFTER_PAR_CLASS or el[:id] =~ AFTER_PAR_CLASS
        when AFTER_PAR_EX
          extra.push el if el[:class] =~ AFTER_PAR_CLASS or el[:id] =~ AFTER_PAR_CLASS
        end
      }
      if by_span.empty? and not extra.empty?
        by_span.concat extra
      end
      debug "other \#1: found: #{by_span.pretty_inspect}"
    end
    by_span.each { |p|
      debug p
      txt = p.to_html.ircify_html
      txt.sub!(strip, '') if strip
      debug "(other attempt \#1) #{txt.inspect} has #{txt.count(" ")} spaces"
      break unless txt.empty? or txt.count(" ") < min_spaces
    }
    return txt unless txt.empty? or txt.count(" ") < min_spaces

    # At worst, we can try stuff which is comprised between two <br>
    # TODO

    debug "Last candidate #{txt.inspect} has #{txt.count(" ")} spaces"
    return txt unless txt.count(" ") < min_spaces
    break if min_spaces == 0
    min_spaces /= 2
  end
end
.ircify_first_html_par_woh(xml_org, opts = {}) ⇒ Object
HTML first par grabber without hpricot
# File 'lib/rbot/core/utils/utils.rb', line 468

def Utils.ircify_first_html_par_woh(xml_org, opts={})
  xml = xml_org.gsub(/<!--.*?-->/m, '').
        gsub(/<script(?:\s+[^>]*)?>.*?<\/script>/im, "").
        gsub(/<style(?:\s+[^>]*)?>.*?<\/style>/im, "")
  strip = opts[:strip]
  strip = Regexp.new(/^#{Regexp.escape(strip)}/) if strip.kind_of?(String)
  min_spaces = opts[:min_spaces] || 8
  min_spaces = 0 if min_spaces < 0
  txt = String.new
  while true
    debug "Minimum number of spaces: #{min_spaces}"
    header_found = xml.match(HX_REGEX)
    if header_found
      header_found = $'
      while txt.empty? or txt.count(" ") < min_spaces
        candidate = header_found[PAR_REGEX]
        break unless candidate
        txt = candidate.ircify_html
        header_found = $'
        txt.sub!(strip, '') if strip
        debug "(Hx attempt) #{txt.inspect} has #{txt.count(" ")} spaces"
      end
    end
    return txt unless txt.empty? or txt.count(" ") < min_spaces

    # If we haven't found a first par yet, try to get it from the whole
    # document
    header_found = xml
    while txt.empty? or txt.count(" ") < min_spaces
      candidate = header_found[PAR_REGEX]
      break unless candidate
      txt = candidate.ircify_html
      header_found = $'
      txt.sub!(strip, '') if strip
      debug "(par attempt) #{txt.inspect} has #{txt.count(" ")} spaces"
    end
    return txt unless txt.empty? or txt.count(" ") < min_spaces

    # Nothing yet ... let's get drastic: we look for non-par elements too,
    # but only for those that match something that we know is likely to
    # contain text

    # Attempt #1
    header_found = xml
    while txt.empty? or txt.count(" ") < min_spaces
      candidate = header_found[AFTER_PAR1_REGEX]
      break unless candidate
      txt = candidate.ircify_html
      header_found = $'
      txt.sub!(strip, '') if strip
      debug "(other attempt \#1) #{txt.inspect} has #{txt.count(" ")} spaces"
    end
    return txt unless txt.empty? or txt.count(" ") < min_spaces

    # Attempt #2
    header_found = xml
    while txt.empty? or txt.count(" ") < min_spaces
      candidate = header_found[AFTER_PAR2_REGEX]
      break unless candidate
      txt = candidate.ircify_html
      header_found = $'
      txt.sub!(strip, '') if strip
      debug "(other attempt \#2) #{txt.inspect} has #{txt.count(" ")} spaces"
    end

    debug "Last candidate #{txt.inspect} has #{txt.count(" ")} spaces"
    return txt unless txt.count(" ") < min_spaces
    break if min_spaces == 0
    min_spaces /= 2
  end
end
.safe_exec(command, *args) ⇒ Object
Execute an external program, returning a String obtained by redirecting the program's standard error and output.
# File 'lib/rbot/core/utils/utils.rb', line 286

def Utils.safe_exec(command, *args)
  IO.popen("-") { |p|
    if p
      return p.readlines.join("\n")
    else
      begin
        $stderr.reopen($stdout)
        exec(command, *args)
      rescue Exception => e
        puts "exec of #{command} led to exception: #{e.pretty_inspect}"
        Kernel::exit! 0
      end
      puts "exec of #{command} failed"
      Kernel::exit! 0
    end
  }
end
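The fork-and-exec pattern can be exercised on its own. This sketch mirrors the method above but drops the pretty_inspect call so it has no rbot dependency; it requires a Unix-like platform (IO.popen("-") forks), and "echo" is just an illustrative command:

```ruby
# Standalone sketch of safe_exec: IO.popen("-") forks; the block gets nil
# in the child, which redirects stderr into stdout and execs the command,
# so the parent reads both streams from the pipe.
def safe_exec(command, *args)
  IO.popen("-") { |p|
    if p
      # parent: read everything the child wrote
      p.readlines.join("\n")
    else
      # child: merge stderr into stdout, then replace ourselves with command
      begin
        $stderr.reopen($stdout)
        exec(command, *args)
      rescue Exception => e
        puts "exec of #{command} led to exception: #{e}"
        Kernel::exit! 0
      end
    end
  }
end

puts safe_exec("echo", "hello")
```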
.safe_save(file) {|temp| ... } ⇒ Object
Safely (atomically) save to file, by passing a tempfile to the block and then moving the tempfile to its final location when done.
call-seq: Utils.safe_save(file, &block)
# File 'lib/rbot/core/utils/utils.rb', line 310

def Utils.safe_save(file)
  raise 'No safe save directory defined!' if @@safe_save_dir.nil?
  basename = File.basename(file)
  temp = Tempfile.new(basename,@@safe_save_dir)
  temp.binmode
  yield temp if block_given?
  temp.close
  File.rename(temp.path, file)
end
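The atomic-save pattern can be shown without the module's @@safe_save_dir state. In this sketch the caller supplies the scratch directory explicitly; atomicity relies on File.rename being atomic when source and destination are on the same filesystem:

```ruby
require 'tempfile'
require 'tmpdir'

# Standalone sketch of the safe_save pattern: write to a tempfile, then
# rename it over the target. The directory handling is simplified vs.
# rbot's @@safe_save_dir class variable.
def atomic_save(file, tmpdir)
  temp = Tempfile.new(File.basename(file), tmpdir)
  temp.binmode
  yield temp if block_given?
  temp.close
  File.rename(temp.path, file)
end

Dir.mktmpdir do |dir|
  target = File.join(dir, "data.txt")
  atomic_save(target, dir) { |f| f.write("hello") }
  puts File.read(target)   # "hello"
end
```

Because the rename either fully succeeds or fully fails, a crash mid-write can never leave a half-written file at the target path.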
.secs_to_short(seconds) ⇒ Object
Turn a number of seconds into an hours:minutes:seconds string, e.g. 3:18:10, 5'12" or 7s.
# File 'lib/rbot/core/utils/utils.rb', line 226

def Utils.secs_to_short(seconds)
  secs = seconds.to_i # make sure it's an integer
  mins, secs = secs.divmod 60
  hours, mins = mins.divmod 60
  if hours > 0
    return ("%s:%s:%s" % [hours, mins, secs])
  elsif mins > 0
    return ("%s'%s\"" % [mins, secs])
  else
    return ("%ss" % [secs])
  end
end
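The conversion is self-contained and can be run as-is outside the module:

```ruby
# Standalone copy of the secs_to_short conversion: successive divmod calls
# split the total into hours, minutes and seconds, and the format depends
# on the largest nonzero unit.
def secs_to_short(seconds)
  secs = seconds.to_i
  mins, secs = secs.divmod 60
  hours, mins = mins.divmod 60
  if hours > 0
    "%s:%s:%s" % [hours, mins, secs]
  elsif mins > 0
    "%s'%s\"" % [mins, secs]
  else
    "%ss" % [secs]
  end
end

puts secs_to_short(11890)   # "3:18:10"
puts secs_to_short(312)     # 5'12"
puts secs_to_short(7)       # "7s"
```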
.secs_to_string(secs) ⇒ Object
Turn a number of seconds into a human readable string, e.g. 2 days, 3 hours, 18 minutes and 10 seconds.
# File 'lib/rbot/core/utils/utils.rb', line 199

def Utils.secs_to_string(secs)
  ret = []
  years, secs = secs.divmod SEC_PER_YR
  secs_to_string_case(ret, years, _("year"), _("years")) if years > 0
  months, secs = secs.divmod SEC_PER_MNTH
  secs_to_string_case(ret, months, _("month"), _("months")) if months > 0
  days, secs = secs.divmod SEC_PER_DAY
  secs_to_string_case(ret, days, _("day"), _("days")) if days > 0
  hours, secs = secs.divmod SEC_PER_HR
  secs_to_string_case(ret, hours, _("hour"), _("hours")) if hours > 0
  mins, secs = secs.divmod SEC_PER_MIN
  secs_to_string_case(ret, mins, _("minute"), _("minutes")) if mins > 0
  secs = secs.to_i
  secs_to_string_case(ret, secs, _("second"), _("seconds")) if secs > 0 or ret.empty?
  case ret.length
  when 0
    raise "Empty ret array!"
  when 1
    return ret.to_s
  else
    return [ret[0, ret.length-1].join(", ") , ret[-1]].join(_(" and "))
  end
end
.secs_to_string_case(array, var, string, plural) ⇒ Object
Auxiliary method needed by Utils.secs_to_string
# File 'lib/rbot/core/utils/utils.rb', line 188

def Utils.secs_to_string_case(array, var, string, plural)
  case var
  when 1
    array << "1 #{string}"
  else
    array << "#{var} #{plural}"
  end
end
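Both methods can be sketched together standalone. This reduced version inlines the SEC_PER_* constants (days, hours, minutes only, dropping the 30-day month and 360-day year units) and replaces _() with plain strings:

```ruby
# Reduced standalone sketch of secs_to_string and its helper: divmod peels
# off each unit in turn, the helper picks singular or plural, and the
# pieces are joined with commas plus a final "and".
def secs_to_string_case(array, var, string, plural)
  array << (var == 1 ? "1 #{string}" : "#{var} #{plural}")
end

def secs_to_string(secs)
  ret = []
  { "day" => 86400, "hour" => 3600, "minute" => 60 }.each do |unit, size|
    n, secs = secs.divmod size
    secs_to_string_case(ret, n, unit, unit + "s") if n > 0
  end
  secs = secs.to_i
  secs_to_string_case(ret, secs, "second", "seconds") if secs > 0 or ret.empty?
  return ret.first if ret.length == 1
  [ret[0..-2].join(", "), ret[-1]].join(" and ")
end

puts secs_to_string(183790)   # "2 days, 3 hours, 3 minutes and 10 seconds"
```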
.timeago(time, options = {}) ⇒ Object
Returns human readable time, e.g. 5 days ago, about one hour ago.
Options:
- start_date: sets the time to measure against, defaults to now
- date_format: used with to_formatted_s, defaults to :default
# File 'lib/rbot/core/utils/utils.rb', line 245

def Utils.timeago(time, options = {})
  start_date = options.delete(:start_date) || Time.new
  date_format = options.delete(:date_format) || :default
  delta_minutes = (start_date.to_i - time.to_i).floor / 60
  if delta_minutes.abs <= (8724*60) # eight weeks? I'm lazy to count days for longer than that
    distance = Utils.distance_of_time_in_words(delta_minutes)
    if delta_minutes < 0
      _("%{d} from now") % {:d => distance}
    else
      _("%{d} ago") % {:d => distance}
    end
  else
    return _("on %{date}") % {:date => system_date.to_formatted_s(date_format)}
  end
end
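The delta computation can be sketched standalone. This simplified version keeps only the "ago"/"from now" split and a reduced distance vocabulary; the helper names and the absolute-value handling for future times are illustrative choices, not the module's exact behavior:

```ruby
# Simplified standalone sketch of timeago: compute the delta in minutes
# against a reference time, then phrase it as past or future.
def distance_words(minutes)
  case
  when minutes < 1  then "less than a minute"
  when minutes < 50 then "#{minutes} minutes"
  when minutes < 90 then "about one hour"
  else                   "a while"
  end
end

def timeago(time, start_date = Time.now)
  delta_minutes = (start_date.to_i - time.to_i) / 60
  if delta_minutes < 0
    "#{distance_words(-delta_minutes)} from now"
  else
    "#{distance_words(delta_minutes)} ago"
  end
end

t = Time.now
puts timeago(t - 70 * 60, t)   # "about one hour ago"
```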
.try_htmlinfo_filters(ds) ⇒ Object
This method runs an appropriately-crafted DataStream ds through the filters in the :htmlinfo filter group, in order. If one of the filters returns non-nil, its results are merged in ds and returned. Otherwise nil is returned.
The input DataStream should have the downloaded HTML as primary key (:text) and possibly a :headers key holding the response headers.
# File 'lib/rbot/core/utils/utils.rb', line 626

def Utils.try_htmlinfo_filters(ds)
  filters = @@bot.filter_names(:htmlinfo)
  return nil if filters.empty?
  cur = nil
  # TODO filter priority
  filters.each { |n|
    debug "testing filter #{n}"
    cur = @@bot.filter(@@bot.global_filter_name(n, :htmlinfo), ds)
    debug "returned #{cur.pretty_inspect}"
    break if cur
  }
  return ds.merge(cur) if cur
end
# File 'lib/rbot/core/utils/utils.rb', line 626 def Utils.try_htmlinfo_filters(ds) filters = @@bot.filter_names(:htmlinfo) return nil if filters.empty? cur = nil # TODO filter priority filters.each { |n| debug "testing filter #{n}" cur = @@bot.filter(@@bot.global_filter_name(n, :htmlinfo), ds) debug "returned #{cur.pretty_inspect}" break if cur } return ds.merge(cur) if cur end |