Class: Sphinx::Client
- Inherits: Object
- Includes: Constants
- Defined in: lib/sphinx/client.rb
Overview
The Sphinx Client API is used to communicate with the searchd daemon and perform requests.
Constant Summary
Constants included from Constants
Sphinx::Constants::QUERY_FLAGS, Sphinx::Constants::SEARCHD_COMMAND_EXCERPT, Sphinx::Constants::SEARCHD_COMMAND_FLUSHATTRS, Sphinx::Constants::SEARCHD_COMMAND_KEYWORDS, Sphinx::Constants::SEARCHD_COMMAND_PERSIST, Sphinx::Constants::SEARCHD_COMMAND_SEARCH, Sphinx::Constants::SEARCHD_COMMAND_STATUS, Sphinx::Constants::SEARCHD_COMMAND_UPDATE, Sphinx::Constants::SEARCHD_ERROR, Sphinx::Constants::SEARCHD_OK, Sphinx::Constants::SEARCHD_RETRY, Sphinx::Constants::SEARCHD_WARNING, Sphinx::Constants::SPH_ATTR_BIGINT, Sphinx::Constants::SPH_ATTR_BOOL, Sphinx::Constants::SPH_ATTR_FACTORS, Sphinx::Constants::SPH_ATTR_FLOAT, Sphinx::Constants::SPH_ATTR_INTEGER, Sphinx::Constants::SPH_ATTR_MULTI, Sphinx::Constants::SPH_ATTR_MULTI64, Sphinx::Constants::SPH_ATTR_ORDINAL, Sphinx::Constants::SPH_ATTR_STRING, Sphinx::Constants::SPH_ATTR_TIMESTAMP, Sphinx::Constants::SPH_FILTER_FLOATRANGE, Sphinx::Constants::SPH_FILTER_RANGE, Sphinx::Constants::SPH_FILTER_VALUES, Sphinx::Constants::SPH_GROUPBY_ATTR, Sphinx::Constants::SPH_GROUPBY_ATTRPAIR, Sphinx::Constants::SPH_GROUPBY_DAY, Sphinx::Constants::SPH_GROUPBY_MONTH, Sphinx::Constants::SPH_GROUPBY_WEEK, Sphinx::Constants::SPH_GROUPBY_YEAR, Sphinx::Constants::SPH_MATCH_ALL, Sphinx::Constants::SPH_MATCH_ANY, Sphinx::Constants::SPH_MATCH_BOOLEAN, Sphinx::Constants::SPH_MATCH_EXTENDED, Sphinx::Constants::SPH_MATCH_EXTENDED2, Sphinx::Constants::SPH_MATCH_FULLSCAN, Sphinx::Constants::SPH_MATCH_PHRASE, Sphinx::Constants::SPH_RANK_BM25, Sphinx::Constants::SPH_RANK_EXPR, Sphinx::Constants::SPH_RANK_FIELDMASK, Sphinx::Constants::SPH_RANK_MATCHANY, Sphinx::Constants::SPH_RANK_NONE, Sphinx::Constants::SPH_RANK_PROXIMITY, Sphinx::Constants::SPH_RANK_PROXIMITY_BM25, Sphinx::Constants::SPH_RANK_SPH04, Sphinx::Constants::SPH_RANK_WORDCOUNT, Sphinx::Constants::SPH_SORT_ATTR_ASC, Sphinx::Constants::SPH_SORT_ATTR_DESC, Sphinx::Constants::SPH_SORT_EXPR, Sphinx::Constants::SPH_SORT_EXTENDED, Sphinx::Constants::SPH_SORT_RELEVANCE, 
Sphinx::Constants::SPH_SORT_TIME_SEGMENTS, Sphinx::Constants::VER_COMMAND_EXCERPT, Sphinx::Constants::VER_COMMAND_FLUSHATTRS, Sphinx::Constants::VER_COMMAND_KEYWORDS, Sphinx::Constants::VER_COMMAND_PERSIST, Sphinx::Constants::VER_COMMAND_QUERY, Sphinx::Constants::VER_COMMAND_SEARCH, Sphinx::Constants::VER_COMMAND_STATUS, Sphinx::Constants::VER_COMMAND_UPDATE
Instance Attribute Summary
- #logger ⇒ Object (readonly): Log debug/info/warn to the given Logger; defaults to nil.
- #reqretries ⇒ Object (readonly): Number of request retries.
- #reqtimeout ⇒ Object (readonly): Request timeout in seconds.
- #retries ⇒ Object (readonly): Number of connection retries.
- #servers ⇒ Object (readonly): List of searchd servers to connect to.
- #timeout ⇒ Object (readonly): Connection timeout in seconds.
Instance Method Summary
- #add_query(query, index = '*', comment = '', log = true) ⇒ Integer (also: #AddQuery): Adds an additional query with current settings to the multi-query batch.
- #build_excerpts(docs, index, words, opts = {}) ⇒ Array<String>, false (also: #BuildExcerpts): Excerpts (snippets) builder function.
- #build_keywords(query, index, hits) ⇒ Array<Hash> (also: #BuildKeywords): Extracts keywords from the query using tokenizer settings for the given index, optionally with per-keyword occurrence statistics.
- #close ⇒ Boolean (also: #Close): Closes a previously opened persistent connection.
- #connect_error? ⇒ Boolean (also: #IsConnectError): Checks whether the last error was a network error on the API side, or a remote error reported by searchd.
- #escape_string(string) ⇒ String (also: #EscapeString): Escapes characters that are treated as special operators by the query language parser.
- #flush_attributes ⇒ Integer (also: #FlushAttributes, #FlushAttrs, #flush_attrs): Forces attribute flush, and blocks until it completes.
- #initialize(logger = nil) ⇒ Client (constructor): Constructs the Sphinx::Client object and sets options to their default values.
- #inspect ⇒ Object: Returns a string representation of the Sphinx client object.
- #last_error ⇒ String (also: #GetLastError): Returns the last error message, as a string, in human-readable format.
- #last_warning ⇒ String (also: #GetLastWarning): Returns the last warning message, as a string, in human-readable format.
- #open ⇒ Boolean (also: #Open): Opens a persistent connection to the server.
- #query(query, index = '*', comment = '') {|Client| ... } ⇒ Hash, false (also: #Query): Connects to the searchd server, runs the given search query with current settings, obtains and returns the result set.
- #reset_filters ⇒ Sphinx::Client (also: #ResetFilters): Clears all currently set filters.
- #reset_group_by ⇒ Sphinx::Client (also: #ResetGroupBy): Clears all currently set group-by settings, and disables grouping.
- #reset_outer_select ⇒ Object (also: #ResetOuterSelect)
- #reset_overrides ⇒ Sphinx::Client (also: #ResetOverrides): Clears all attribute value overrides (for multi-queries).
- #reset_query_flag ⇒ Object (also: #ResetQueryFlag)
- #run_queries ⇒ Array<Hash> (also: #RunQueries): Connects to searchd, runs the batch of all queries added using #add_query, obtains and returns the result sets.
- #set_connect_timeout(timeout, retries = 1) ⇒ Sphinx::Client (also: #SetConnectTimeout): Sets the time allowed to spend connecting to the server before giving up, and the number of retries to perform.
- #set_field_weights(weights) ⇒ Sphinx::Client (also: #SetFieldWeights): Binds per-field weights by name.
- #set_filter(attribute, values, exclude = false) ⇒ Sphinx::Client (also: #SetFilter): Adds a new integer values-set filter.
- #set_filter_float_range(attribute, min, max, exclude = false) ⇒ Sphinx::Client (also: #SetFilterFloatRange): Adds a new float range filter.
- #set_filter_range(attribute, min, max, exclude = false) ⇒ Sphinx::Client (also: #SetFilterRange): Adds a new integer range filter.
- #set_geo_anchor(attrlat, attrlong, lat, long) ⇒ Sphinx::Client (also: #SetGeoAnchor): Sets the anchor point for geosphere distance (geodistance) calculations, and enables them.
- #set_group_by(attribute, func, groupsort = '@group desc') ⇒ Sphinx::Client (also: #SetGroupBy): Sets the grouping attribute, function, and group sorting mode, and enables grouping (as described in Section 4.6, "Grouping (clustering) search results").
- #set_group_distinct(attribute) ⇒ Sphinx::Client (also: #SetGroupDistinct): Sets the attribute name for per-group distinct values count calculations.
- #set_id_range(min, max) ⇒ Sphinx::Client (also: #SetIDRange): Sets an accepted range of document IDs.
- #set_index_weights(weights) ⇒ Sphinx::Client (also: #SetIndexWeights): Sets per-index weights, and enables weighted summing of match weights across different indexes.
- #set_limits(offset, limit, max = 0, cutoff = 0) ⇒ Sphinx::Client (also: #SetLimits): Sets the offset into the server-side result set (offset) and the number of matches to return to the client starting from that offset (limit).
- #set_match_mode(mode) ⇒ Sphinx::Client (also: #SetMatchMode): Sets the full-text query matching mode.
- #set_max_query_time(max) ⇒ Sphinx::Client (also: #SetMaxQueryTime): Sets the maximum search query time, in milliseconds.
- #set_outer_select(orderby, offset, limit) ⇒ Object (also: #SetOuterSelect)
- #set_override(attribute, attrtype, values) ⇒ Sphinx::Client (also: #SetOverride): Sets temporary (per-query) per-document attribute value overrides.
- #set_query_flag(flag_name, flag_value) ⇒ Sphinx::Client (also: #SetQueryFlag): Allows control over a number of per-query options.
- #set_ranking_mode(ranker, rankexpr = '') ⇒ Sphinx::Client (also: #SetRankingMode): Sets the ranking mode.
- #set_request_timeout(timeout, retries = 1) ⇒ Sphinx::Client (also: #SetRequestTimeout): Sets the time allowed to spend performing a request to the server before giving up, and the number of retries to perform.
- #set_retries(count, delay = 0) ⇒ Sphinx::Client (also: #SetRetries): Sets the distributed retry count and delay.
- #set_select(select) ⇒ Sphinx::Client (also: #SetSelect): Sets the select clause, listing specific attributes to fetch, and expressions to compute and fetch.
- #set_server(host, port = 9312) ⇒ Sphinx::Client (also: #SetServer): Sets the searchd host name and TCP port.
- #set_servers(servers) ⇒ Sphinx::Client (also: #SetServers): Sets the list of searchd servers.
- #set_sort_mode(mode, sortby = '') ⇒ Sphinx::Client (also: #SetSortMode): Sets the matches sorting mode.
- #set_weights(weights) ⇒ Sphinx::Client (also: #SetWeights) (deprecated): Use #set_field_weights instead.
- #status ⇒ Array<Array>, Array<Hash> (also: #Status): Queries searchd status, and returns an array of status variable name and value pairs.
- #update_attributes(index, attrs, values, mva = false, ignore_non_existent = false) ⇒ Integer (also: #UpdateAttributes): Instantly updates given attribute values in given documents.
Constructor Details
#initialize(logger = nil) ⇒ Client
Constructs the Sphinx::Client object and sets options to their default values.

# File 'lib/sphinx/client.rb', line 47

def initialize(logger = nil)
  # per-query settings
  @offset        = 0                  # how many records to seek from result-set start (default is 0)
  @limit         = 20                 # how many records to return from result-set starting at offset (default is 20)
  @mode          = SPH_MATCH_ALL      # query matching mode (default is SPH_MATCH_ALL)
  @weights       = []                 # per-field weights (default is 1 for all fields)
  @sort          = SPH_SORT_RELEVANCE # match sorting mode (default is SPH_SORT_RELEVANCE)
  @sortby        = ''                 # attribute to sort by (default is "")
  @min_id        = 0                  # min ID to match (default is 0, which means no limit)
  @max_id        = 0                  # max ID to match (default is 0, which means no limit)
  @filters       = []                 # search filters
  @groupby       = ''                 # group-by attribute name
  @groupfunc     = SPH_GROUPBY_DAY    # function to pre-process group-by attribute value with
  @groupsort     = '@group desc'      # group-by sorting clause (to sort groups in result set with)
  @groupdistinct = ''                 # group-by count-distinct attribute
  @maxmatches    = 1000               # max matches to retrieve
  @cutoff        = 0                  # cutoff to stop searching at (default is 0)
  @retrycount    = 0                  # distributed retries count
  @retrydelay    = 0                  # distributed retries delay
  @anchor        = []                 # geographical anchor point
  @indexweights  = []                 # per-index weights
  @ranker        = SPH_RANK_PROXIMITY_BM25 # ranking mode (default is SPH_RANK_PROXIMITY_BM25)
  @rankexpr      = ''                 # ranking expression
  @maxquerytime  = 0                  # max query time, milliseconds (default is 0, do not limit)
  @fieldweights  = {}                 # per-field-name weights
  @overrides     = []                 # per-query attribute values overrides
  @select        = '*'                # select-list (attributes or expressions, with optional aliases)
  @query_flags   = 0
  @predictedtime = 0
  @outerorderby  = ''
  @outeroffset   = 0
  @outerlimit    = 0
  @hasouter      = false

  # per-reply fields (for single-query case)
  @error     = ''                     # last error message
  @warning   = ''                     # last warning message
  @connerror = false                  # connection error vs remote error flag

  @reqs       = []                    # requests storage (for multi-query case)
  @mbenc      = ''                    # stored mbstring encoding
  @timeout    = 0                     # connect timeout
  @retries    = 1                     # number of connect retries in case of emergency
  @reqtimeout = 0                     # request timeout
  @reqretries = 1                     # number of request retries in case of emergency

  # per-client-object settings
  # searchd servers list
  @servers = [Sphinx::Server.new(self, 'localhost', 9312, false)].freeze
  @logger = logger

  logger.info { "[sphinx] version: #{VERSION}, #{@servers.inspect}" } if logger
end
Dynamic Method Handling
This class handles dynamic methods through the method_missing method
#method_missing(method_id, *arguments, &block) ⇒ Object (protected)
Enables the ability to skip the set_ prefix for methods called inside a #query block.

# File 'lib/sphinx/client.rb', line 2556

def method_missing(method_id, *arguments, &block)
  if @inside_eval and self.respond_to?("set_#{method_id}")
    self.send("set_#{method_id}", *arguments)
  else
    super
  end
end
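The effect of this hook can be sketched with a small self-contained class (QueryDSL and set_match_mode below are illustrative stand-ins, not part of the library):

```ruby
# Minimal sketch of the set_ prefix dispatch used inside #query blocks.
class QueryDSL
  attr_reader :mode

  def set_match_mode(mode)
    @mode = mode
  end

  private

  # Inside an instance_eval'd block, a bare call like `match_mode :all`
  # is forwarded to `set_match_mode(:all)` when such a setter exists.
  def method_missing(method_id, *arguments, &block)
    if respond_to?("set_#{method_id}")
      send("set_#{method_id}", *arguments)
    else
      super
    end
  end
end

dsl = QueryDSL.new
dsl.instance_eval { match_mode :all }
dsl.mode # => :all
```

This is why, inside a block passed to #query with no arguments, you can write `match_mode :all` instead of `set_match_mode(:all)`.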
Instance Attribute Details
#logger ⇒ Object (readonly)
Log debug/info/warn to the given Logger, defaults to nil.
# File 'lib/sphinx/client.rb', line 39

def logger
  @logger
end
#reqretries ⇒ Object (readonly)
Number of request retries.
# File 'lib/sphinx/client.rb', line 36

def reqretries
  @reqretries
end
#reqtimeout ⇒ Object (readonly)
Request timeout in seconds.
# File 'lib/sphinx/client.rb', line 33

def reqtimeout
  @reqtimeout
end
#retries ⇒ Object (readonly)
Number of connection retries.
# File 'lib/sphinx/client.rb', line 30

def retries
  @retries
end
#servers ⇒ Object (readonly)
List of searchd servers to connect to.
# File 'lib/sphinx/client.rb', line 24

def servers
  @servers
end
#timeout ⇒ Object (readonly)
Connection timeout in seconds.
# File 'lib/sphinx/client.rb', line 27

def timeout
  @timeout
end
Instance Method Details
#add_query(query, index = '*', comment = '', log = true) ⇒ Integer Also known as: AddQuery
Adds an additional query with current settings to the multi-query batch. query is a query string. index is an index name (or names) string. Additionally, if provided, the contents of comment are sent to the query log, marked in square brackets, just before the search terms, which can be very useful for debugging. Currently, this is limited to 128 characters. Returns an index into the results array returned from #run_queries.
Batch queries (or multi-queries) enable searchd to perform internal optimizations if possible. They also reduce network connection overheads and search process creation overheads in all cases. They do not result in any additional overheads compared to simple queries. Thus, if you run several different queries from your web page, you should always consider using multi-queries.
For instance, running the same full-text query but with different sorting or group-by settings will enable searchd to perform expensive full-text search and ranking operation only once, but compute multiple group-by results from its output.
This can be a big saver when you need to display not just plain search results but also some per-category counts, such as the amount of products grouped by vendor. Without multi-query, you would have to run several queries which perform essentially the same search and retrieve the same matches, but create result sets differently. With multi-query, you simply pass all these queries in a single batch and Sphinx optimizes the redundant full-text search internally.
#add_query internally saves full current settings state along with the query, and you can safely change them afterwards for subsequent #add_query calls. Already added queries will not be affected; there's actually no way to change them at all. Here's an example:
sphinx.set_sort_mode(:relevance)
sphinx.add_query("hello world", "documents")
sphinx.set_sort_mode(:attr_desc, :price)
sphinx.add_query("ipod", "products")
sphinx.add_query("harry potter", "books")
results = sphinx.run_queries
With the code above, 1st query will search for “hello world” in “documents” index and sort results by relevance, 2nd query will search for “ipod” in “products” index and sort results by price, and 3rd query will search for “harry potter” in “books” index while still sorting by price. Note that 2nd #set_sort_mode call does not affect the first query (because it's already added) but affects both other subsequent queries.
Additionally, any filters set up before an #add_query will fall through to subsequent queries. So, if #set_filter is called before the first query, the same filter will be in place for the second (and subsequent) queries batched through #add_query unless you call #reset_filters first. Alternatively, you can add additional filters as well.
This would also be true for grouping options and sorting options; no current sorting, filtering, and grouping settings are affected by this call; so subsequent queries will reuse current query settings.
#add_query returns an index into an array of results that will be returned from the #run_queries call. It is simply a sequentially increasing 0-based integer, i.e. the first call will return 0, the second will return 1, and so on. Just a small helper so you won't have to track the indexes manually if you need them.
# File 'lib/sphinx/client.rb', line 1512

def add_query(query, index = '*', comment = '', log = true)
  logger.debug { "[sphinx] add_query('#{query}', '#{index}', '#{comment}'), #{self.inspect}" } if log and logger

  # build request

  # mode and limits
  request = Request.new
  request.put_int @query_flags, @offset, @limit, @mode
  # ranker
  request.put_int @ranker
  request.put_string @rankexpr if @ranker == SPH_RANK_EXPR
  # sorting
  request.put_int @sort
  request.put_string @sortby
  # query itself
  request.put_string query
  # weights
  request.put_int_array @weights
  # indexes
  request.put_string index
  # id64 range marker
  request.put_int 1
  # id64 range
  request.put_int64 @min_id.to_i, @max_id.to_i
  # filters
  request.put_int @filters.length
  @filters.each do |filter|
    request.put_string filter['attr']
    request.put_int filter['type']

    case filter['type']
      when SPH_FILTER_VALUES
        request.put_int64_array filter['values']
      when SPH_FILTER_RANGE
        request.put_int64 filter['min'], filter['max']
      when SPH_FILTER_FLOATRANGE
        request.put_float filter['min'], filter['max']
      else
        raise SphinxInternalError, 'Internal error: unhandled filter type'
    end
    request.put_int filter['exclude'] ? 1 : 0
  end
  # group-by clause, max-matches count, group-sort clause, cutoff count
  request.put_int @groupfunc
  request.put_string @groupby
  request.put_int @maxmatches
  request.put_string @groupsort
  request.put_int @cutoff, @retrycount, @retrydelay
  request.put_string @groupdistinct
  # anchor point
  if @anchor.empty?
    request.put_int 0
  else
    request.put_int 1
    request.put_string @anchor['attrlat'], @anchor['attrlong']
    request.put_float @anchor['lat'], @anchor['long']
  end
  # per-index weights
  request.put_int @indexweights.length
  @indexweights.sort_by { |idx, _| idx }.each do |idx, weight|
    request.put_string idx.to_s
    request.put_int weight
  end
  # max query time
  request.put_int @maxquerytime
  # per-field weights
  request.put_int @fieldweights.length
  @fieldweights.sort_by { |idx, _| idx }.each do |field, weight|
    request.put_string field.to_s
    request.put_int weight
  end
  # comment
  request.put_string comment
  # attribute overrides
  request.put_int @overrides.length
  for entry in @overrides do
    request.put_string entry['attr']
    request.put_int entry['type'], entry['values'].size
    entry['values'].each do |id, val|
      request.put_int64 id
      case entry['type']
        when SPH_ATTR_FLOAT
          request.put_float val.to_f
        when SPH_ATTR_BIGINT
          request.put_int64 val.to_i
        else
          request.put_int val.to_i
      end
    end
  end
  # select-list
  request.put_string @select
  # max_predicted_time
  request.put_int @predictedtime if @predictedtime > 0
  # outer select
  request.put_string @outerorderby
  request.put_int @outeroffset, @outerlimit, (@hasouter ? 1 : 0)

  # store request to requests array
  @reqs << request.to_s
  @reqs.length - 1
end
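The sequential-index behavior described above can be sketched independently of searchd (BatchSketch is an illustrative stand-in, not part of the library):

```ruby
# Minimal sketch: each add_query-style call stores a snapshot of the
# request and returns its 0-based position in the batch, which is also
# the position of its result in the array #run_queries will return.
class BatchSketch
  def initialize
    @reqs = []
  end

  def add_query(request)
    @reqs << request.dup   # snapshot; later settings changes don't affect it
    @reqs.length - 1       # index into the future results array
  end
end

batch = BatchSketch.new
first  = batch.add_query('hello world')  # => 0
second = batch.add_query('ipod')         # => 1
```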
#build_excerpts(docs, index, words, opts = {}) ⇒ Array<String>, false Also known as: BuildExcerpts
Excerpts (snippets) builder function. Connects to searchd, asks it to generate excerpts (snippets) from given documents, and returns the results.
docs is a plain array of strings that carry the documents' contents. index is an index name string; different settings (such as charset, morphology, wordforms) from the given index will be used. words is a string that contains the keywords to highlight. They will be processed with respect to index settings. For instance, if English stemming is enabled in the index, "shoes" will be highlighted even if the keyword is "shoe". Starting with version 0.9.9-rc1, keywords can contain wildcards that work similarly to the star-syntax available in queries.
# File 'lib/sphinx/client.rb', line 1810

def build_excerpts(docs, index, words, opts = {})
  raise ArgumentError, '"docs" argument must be Array'   unless docs.kind_of?(Array)
  raise ArgumentError, '"index" argument must be String' unless index.kind_of?(String) or index.kind_of?(Symbol)
  raise ArgumentError, '"words" argument must be String' unless words.kind_of?(String)
  raise ArgumentError, '"opts" argument must be Hash'    unless opts.kind_of?(Hash)

  docs.each do |doc|
    raise ArgumentError, '"docs" argument must be Array of Strings' unless doc.kind_of?(String)
  end

  # fixup options
  opts = HashWithIndifferentAccess.new(
    :before_match         => '<b>',
    :after_match          => '</b>',
    :chunk_separator      => ' ... ',
    :limit                => 256,
    :limit_passages       => 0,
    :limit_words          => 0,
    :around               => 5,
    :exact_phrase         => false,
    :single_passage       => false,
    :use_boundaries       => false,
    :weight_order         => false,
    :query_mode           => false,
    :force_all_words      => false,
    :start_passage_id     => 1,
    :load_files           => false,
    :html_strip_mode      => 'index',
    :allow_empty          => false,
    :passage_boundary     => 'none',
    :emit_zones           => false,
    :load_files_scattered => false
  ).update(opts)

  # build request
  # v.1.2 req
  flags = 1
  flags |= 2    if opts[:exact_phrase]
  flags |= 4    if opts[:single_passage]
  flags |= 8    if opts[:use_boundaries]
  flags |= 16   if opts[:weight_order]
  flags |= 32   if opts[:query_mode]
  flags |= 64   if opts[:force_all_words]
  flags |= 128  if opts[:load_files]
  flags |= 256  if opts[:allow_empty]
  flags |= 512  if opts[:emit_zones]
  flags |= 1024 if opts[:load_files_scattered]

  request = Request.new
  request.put_int 0, flags # mode=0, flags=1 (remove spaces)
  # req index
  request.put_string index.to_s
  # req words
  request.put_string words

  # options
  request.put_string opts[:before_match], opts[:after_match], opts[:chunk_separator]
  request.put_int opts[:limit].to_i, opts[:around].to_i
  request.put_int opts[:limit_passages].to_i, opts[:limit_words].to_i, opts[:start_passage_id].to_i
  request.put_string opts[:html_strip_mode], opts[:passage_boundary]

  # documents
  request.put_int docs.size
  request.put_string(*docs)

  response = perform_request(:excerpt, request)

  # parse response
  docs.map { response.get_string }
end
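The boolean excerpt options are packed into a single integer bitmask before being sent. A self-contained sketch of that packing, using the flag values from the method body above (excerpt_flags is an illustrative helper, not a library API):

```ruby
# Bit values mirroring the flags |= ... lines in #build_excerpts.
EXCERPT_FLAGS = {
  :exact_phrase         => 2,
  :single_passage       => 4,
  :use_boundaries       => 8,
  :weight_order         => 16,
  :query_mode           => 32,
  :force_all_words      => 64,
  :load_files           => 128,
  :allow_empty          => 256,
  :emit_zones           => 512,
  :load_files_scattered => 1024,
}.freeze

def excerpt_flags(opts)
  # Bit 1 is always set; each enabled option contributes its own bit.
  EXCERPT_FLAGS.inject(1) { |acc, (name, bit)| opts[name] ? acc | bit : acc }
end

excerpt_flags(:exact_phrase => true, :load_files => true) # => 131 (1 | 2 | 128)
```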
#build_keywords(query, index, hits) ⇒ Array<Hash> Also known as: BuildKeywords
Extracts keywords from query using tokenizer settings for given index, optionally with per-keyword occurrence statistics. Returns an array of hashes with per-keyword information.
query is a query to extract keywords from. index is the name of the index to get tokenizing settings and keyword occurrence statistics from. hits is a boolean flag that indicates whether keyword occurrence statistics are required.
The result set consists of Hashes with the following keys and values:
'tokenized' - Tokenized keyword.
'normalized' - Normalized keyword.
'docs' - The number of documents where the keyword is found (if the hits param is true).
'hits' - The number of keyword occurrences among all documents (if the hits param is true).
# File 'lib/sphinx/client.rb', line 1918

def build_keywords(query, index, hits)
  raise ArgumentError, '"query" argument must be String'  unless query.kind_of?(String)
  raise ArgumentError, '"index" argument must be String'  unless index.kind_of?(String) or index.kind_of?(Symbol)
  raise ArgumentError, '"hits" argument must be Boolean'  unless hits.kind_of?(TrueClass) or hits.kind_of?(FalseClass)

  # build request
  request = Request.new
  # v.1.0 req
  request.put_string query # req query
  request.put_string index # req index
  request.put_int hits ? 1 : 0

  response = perform_request(:keywords, request)

  # parse response
  nwords = response.get_int
  (0...nwords).map do
    tokenized  = response.get_string
    normalized = response.get_string

    entry = HashWithIndifferentAccess.new('tokenized' => tokenized, 'normalized' => normalized)
    entry['docs'], entry['hits'] = response.get_ints(2) if hits

    entry
  end
end
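For illustration, a result with hits statistics enabled has this shape (all keyword text and counts below are made up, following the documented keys):

```ruby
# Hypothetical return value of build_keywords('running shoes', 'products', true).
keywords = [
  { 'tokenized' => 'running', 'normalized' => 'run',  'docs' => 42, 'hits' => 57 },
  { 'tokenized' => 'shoes',   'normalized' => 'shoe', 'docs' => 33, 'hits' => 40 },
]

# Map each raw keyword to its normalized (stemmed) form.
summary = keywords.map { |kw| "#{kw['tokenized']}=>#{kw['normalized']}" }
# => ["running=>run", "shoes=>shoe"]
```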
#close ⇒ Boolean Also known as: Close
Closes previously opened persistent connection.
This method can be used only when a single searchd server is configured.

# File 'lib/sphinx/client.rb', line 2207

def close
  if @servers.size > 1
    @error = 'too many servers. persistent socket allowed only for a single server.'
    return false
  end

  unless @servers.first.persistent?
    @error = 'not connected'
    return false
  end

  @servers.first.close_persistent!
end
#connect_error? ⇒ Boolean Also known as: IsConnectError
Checks whether the last error was a network error on API side, or a remote error reported by searchd. Returns true if the last connection attempt to searchd failed on API side, false otherwise (if the error was remote, or there were no connection attempts at all).
# File 'lib/sphinx/client.rb', line 199

def connect_error?
  @connerror || false
end
#escape_string(string) ⇒ String Also known as: EscapeString
Escapes characters that are treated as special operators by the query language parser.
This function might seem redundant because it's trivial to implement in any calling application. However, as the set of special characters might change over time, it makes sense to have an API call that is guaranteed to escape all such characters at all times.
Example:

  escaped = sphinx.escape_string "escaping-sample@query/string"

# File 'lib/sphinx/client.rb', line 2065

def escape_string(string)
  string.to_s.gsub(/([\\()|\-!@~"&\/\^\$=])/, '\\\\\\1')
end
#flush_attributes ⇒ Integer Also known as: FlushAttributes, FlushAttrs, flush_attrs
Force attribute flush, and block until it completes.
# File 'lib/sphinx/client.rb', line 2129

def flush_attributes
  request = Request.new
  response = perform_request(:flushattrs, request)

  # parse response
  begin
    response.get_int
  rescue EOFError
    @error = 'unexpected response length'
    -1
  end
end
#inspect ⇒ Object
Returns a string representation of the sphinx client object.
# File 'lib/sphinx/client.rb', line 103

def inspect
  params = {
    :error           => @error,
    :warning         => @warning,
    :connect_error   => @connerror,
    :servers         => @servers,
    :connect_timeout => { :timeout => @timeout, :retries => @retries },
    :request_timeout => { :timeout => @reqtimeout, :retries => @reqretries },
    :retries         => { :count => @retrycount, :delay => @retrydelay },
    :limits          => { :offset => @offset, :limit => @limit, :max => @maxmatches, :cutoff => @cutoff },
    :max_query_time  => @maxquerytime,
    :overrides       => @overrides,
    :select          => @select,
    :match_mode      => @mode,
    :ranking         => { :mode => @ranker, :expression => @rankexpr },
    :sort_mode       => { :mode => @sort, :sort_by => @sortby },
    :weights         => @weights,
    :field_weights   => @fieldweights,
    :index_weights   => @indexweights,
    :id_range        => { :min => @min_id, :max => @max_id },
    :filters         => @filters,
    :geo_anchor      => @anchor,
    :group_by        => { :attribute => @groupby, :func => @groupfunc, :sort => @groupsort },
    :group_distinct  => @groupdistinct,
    :query_flags     => { :bitset => @query_flags, :predicted_time => @predictedtime },
    :outer_select    => { :has_outer => @hasouter, :sort_by => @outerorderby, :offset => @outeroffset, :limit => @outerlimit },
  }

  "<Sphinx::Client: %d servers, params: %s>" % [@servers.length, params.inspect]
end
#last_error ⇒ String Also known as: GetLastError
Returns last error message, as a string, in human readable format. If there were no errors during the previous API call, empty string is returned.
You should call it when any other function (such as #query) fails (typically, the failing function returns false). The returned string will contain the error description.
The error message is not reset by this call; so you can safely call it several times if needed.
# File 'lib/sphinx/client.rb', line 156

def last_error
  @error
end
#last_warning ⇒ String Also known as: GetLastWarning
Returns last warning message, as a string, in human readable format. If there were no warnings during the previous API call, empty string is returned.
You should call it to verify whether your request (such as #query) was completed, but with warnings. For instance, a search query against a distributed index might complete successfully even if several remote agents timed out. In that case, a warning message would be produced.
The warning message is not reset by this call; so you can safely call it several times if needed.
# File 'lib/sphinx/client.rb', line 180

def last_warning
  @warning
end
#open ⇒ Boolean Also known as: Open
Opens persistent connection to the server.
This method can be used only when a single searchd server is configured.

# File 'lib/sphinx/client.rb', line 2167

def open
  if @servers.size > 1
    @error = 'too many servers. persistent socket allowed only for a single server.'
    return false
  end

  if @servers.first.persistent?
    @error = 'already connected'
    return false
  end

  request = Request.new
  request.put_int(1)

  perform_request(:persist, request, nil) do |server, socket|
    server.make_persistent!(socket)
  end

  true
end
#query(query, index = '*', comment = '') {|Client| ... } ⇒ Hash, false Also known as: Query
Connects to searchd server, runs given search query with current settings, obtains and returns the result set.
query is a query string. index is an index name (or names) string. Returns false and sets the #last_error message on general error. Returns the search result set on success. Additionally, the contents of comment are sent to the query log, marked in square brackets, just before the search terms, which can be very useful for debugging. Currently, the comment is limited to 128 characters.

The default value for index is "*", which means to query all local indexes. Characters allowed in index names include Latin letters (a-z), numbers (0-9), minus sign (-), and underscore (_); everything else is considered a separator. Therefore, all of the following calls are valid and will search the same two indexes:
sphinx.query('test query', 'main delta')
sphinx.query('test query', 'main;delta')
sphinx.query('test query', 'main, delta');
Index specification order matters. If documents with identical IDs are found in two or more indexes, weight and attribute values from the very last matching index will be used for sorting and returning to the client (unless explicitly overridden with #set_index_weights). Therefore, in the example above, matches from the "delta" index will always win over matches from "main".
On success, #query returns a result set that contains some of the found matches (as requested by #set_limits) and additional general per-query statistics. The result set is a Hash with the following keys and values:
"matches"
-
Array with small Hashes containing document weight and attribute values.
"total"
-
Total amount of matches retrieved on server (ie. to the server side result set) by this query. You can retrieve up to this amount of matches from server for this query text with current query settings.
"total_found"
-
Total amount of matching documents in index (that were found and procesed on server).
"words"
-
Hash which maps query keywords (case-folded, stemmed, and otherwise processed) to a small Hash with per-keyword statitics (“docs”, “hits”).
"error"
-
Query error message reported by searchd (string, human readable). Empty if there were no errors.
"warning"
-
Query warning message reported by searchd (string, human readable). Empty if there were no warnings.
Please note: you can use both strings and symbols as Hash keys.
It should be noted that #query carries out the same actions as #add_query and #run_queries without the intermediate steps; it is analogous to a single #add_query call, followed by a corresponding #run_queries, then returning the first array element of the results (from the first, and only, query.)
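This wrapping logic can be sketched in stand-alone form (a minimal illustration, not the gem's code; the `first_result` helper and the stubbed result hashes are hypothetical, and `SEARCHD_ERROR` uses the Sphinx protocol value of 1):

```ruby
SEARCHD_ERROR = 1  # searchd status code for a failed query

# Collapse a #run_queries-style result array into a single #query-style
# result: false on general (network) error, false on per-query error,
# the first result set hash otherwise.
def first_result(results)
  return false unless results.instance_of?(Array)        # network error
  return false if results[0]['status'] == SEARCHD_ERROR  # query error
  results[0]
end

ok  = first_result([{ 'status' => 0, 'matches' => [], 'error' => '' }])
bad = first_result(false)
```

This mirrors why #query returns false in two distinct situations: when the whole request failed, and when the single query inside it failed.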
# File 'lib/sphinx/client.rb', line 1396
def query(query, index = '*', comment = '', &block)
  @reqs = []

  if block_given?
    if block.arity > 0
      yield self
    else
      begin
        @inside_eval = true
        instance_eval(&block)
      ensure
        @inside_eval = false
      end
    end
  end

  logger.debug { "[sphinx] query('#{query}', '#{index}', '#{comment}'), #{self.inspect}" } if logger

  self.add_query(query, index, comment, false)
  results = self.run_queries

  # probably network error; error message should be already filled
  return false unless results.instance_of?(Array)

  @error = results[0]['error']
  @warning = results[0]['warning']

  return false if results[0]['status'] == SEARCHD_ERROR
  return results[0]
end
#reset_filters ⇒ Sphinx::Client Also known as: ResetFilters
Clears all currently set filters.
This call is only normally required when using multi-queries. You might want to set different filters for different queries in the batch. To do that, you should call #reset_filters and add new filters using the respective calls.
# File 'lib/sphinx/client.rb', line 1235
def reset_filters
  @filters = []
  @anchor = []
  self
end
#reset_group_by ⇒ Sphinx::Client Also known as: ResetGroupBy
Clears all current group-by settings, and disables group-by.
This call is only normally required when using multi-queries. You can change individual group-by settings using #set_group_by and #set_group_distinct calls, but you can not disable group-by using those calls. #reset_group_by fully resets previous group-by settings and disables group-by mode in the current state, so that subsequent #add_query calls can perform non-grouping searches.
# File 'lib/sphinx/client.rb', line 1259
def reset_group_by
  @groupby = ''
  @groupfunc = SPH_GROUPBY_DAY
  @groupsort = '@group desc'
  @groupdistinct = ''
  self
end
#reset_outer_select ⇒ Object Also known as: ResetOuterSelect
# File 'lib/sphinx/client.rb', line 1295
def reset_outer_select
  @outerorderby = ''
  @outeroffset = 0
  @outerlimit = 0
  @hasouter = 0
  self
end
#reset_overrides ⇒ Sphinx::Client Also known as: ResetOverrides
Clear all attribute value overrides (for multi-queries).
This call is only normally required when using multi-queries. You might want to set field overrides for different queries in the batch. To do that, you should call #reset_overrides and add new overrides using the respective calls.
# File 'lib/sphinx/client.rb', line 1282
def reset_overrides
  @overrides = []
  self
end
#reset_query_flag ⇒ Object Also known as: ResetQueryFlag
# File 'lib/sphinx/client.rb', line 1288
def reset_query_flag
  @query_flags = 0
  @predictedtime = 0
  self
end
#run_queries ⇒ Array<Hash> Also known as: RunQueries
Connects to searchd, runs a batch of all queries added using #add_query, obtains and returns the result sets. Returns false and sets the #last_error message on general error (such as network I/O failure). Returns a plain array of result sets on success.
Each result set in the returned array is exactly the same as the result set returned from #query.
Note that the batch query request itself almost always succeeds, unless there's a network error, blocking index rotation in progress, or another general failure which prevents the whole request from being processed.
However, individual queries within the batch might very well fail. In this case their respective result sets will contain a non-empty "error" message, but no matches or query statistics. In the extreme case all queries within the batch could fail. There still will be no general error reported, because the API was able to successfully connect to searchd, submit the batch, and receive the results; but every result set will have a specific error message.
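Because per-query failures surface only through each result set's "error" key, a caller typically walks the returned array and separates successful result sets from failed ones. A minimal sketch (the `partition_results` helper and the sample result hashes are hypothetical):

```ruby
# Split a #run_queries-style result array into successful and failed
# result sets. A failed set carries a non-empty "error" message.
def partition_results(results)
  results.partition { |r| r['error'].to_s.empty? }
end

sample = [
  { 'error' => '',                     'matches' => [{ 'id' => 1 }] },
  { 'error' => 'index delta: unknown', 'matches' => [] },
]
good, failed = partition_results(sample)
```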
# File 'lib/sphinx/client.rb', line 1659
def run_queries
  logger.debug { "[sphinx] run_queries(#{@reqs.length} queries)" } if logger
  if @reqs.empty?
    @error = 'No queries defined, issue add_query() first'
    return false
  end

  reqs, nreqs = @reqs.join(''), @reqs.length
  @reqs = []
  response = perform_request(:search, reqs, [0, nreqs])

  # parse response
  (1..nreqs).map do
    result = HashWithIndifferentAccess.new(:error => '', :warning => '')

    # extract status
    status = result[:status] = response.get_int
    if status != SEARCHD_OK
      message = response.get_string
      if status == SEARCHD_WARNING
        result[:warning] = message
      else
        result[:error] = message
        next result
      end
    end

    # read schema
    nfields = response.get_int
    result[:fields] = (1..nfields).map { response.get_string }

    attrs_names_in_order = []
    nattrs = response.get_int
    attrs = nattrs.times.inject(HashWithIndifferentAccess.new) do |hash, idx|
      name, type = response.get_string, response.get_int
      hash[name] = type
      attrs_names_in_order << name
      hash
    end
    result[:attrs] = attrs

    # read match count
    count, id64 = response.get_ints(2)

    # read matches
    result[:matches] = (1..count).map do
      doc = id64 == 0 ? response.get_int : response.get_int64
      weight = response.get_int

      # This is a single result put in the result['matches'] array
      match = HashWithIndifferentAccess.new(:id => doc, :weight => weight)
      match[:attrs] = attrs_names_in_order.inject(HashWithIndifferentAccess.new) do |hash, name|
        hash[name] = case attrs[name]
          when SPH_ATTR_BIGINT
            # handle 64-bit ints
            response.get_int64
          when SPH_ATTR_FLOAT
            # handle floats
            response.get_float
          when SPH_ATTR_STRING
            # handle string
            response.get_string
          when SPH_ATTR_FACTORS
            # ???
            response.get_int
          when SPH_ATTR_MULTI
            # handle array of integers
            val = response.get_int
            response.get_ints(val) if val > 0
          when SPH_ATTR_MULTI64
            # handle array of 64-bit integers
            val = response.get_int
            (val / 2).times.map { response.get_int64 }
          else
            # handle everything else as unsigned ints
            response.get_int
        end
        hash
      end
      match
    end

    result[:total], result[:total_found], msecs = response.get_ints(3)
    result[:time] = '%.3f' % (msecs / 1000.0)

    nwords = response.get_int
    result[:words] = nwords.times.inject({}) do |hash, idx|
      word = response.get_string
      docs, hits = response.get_ints(2)
      hash[word] = HashWithIndifferentAccess.new(:docs => docs, :hits => hits)
      hash
    end

    result
  end
end
#set_connect_timeout(timeout, retries = 1) ⇒ Sphinx::Client Also known as: SetConnectTimeout
Sets the time allowed to spend connecting to the server before giving up and number of retries to perform.
In the event of a failure to connect, an appropriate error code should be returned back to the application in order for application-level error handling to advise the user.
When multiple servers are configured through the #set_servers method and the retries number is greater than 1, the library will try to connect to another server. When a single server is configured, it will try to reconnect retries times.
Please note, this timeout will only be used for connection establishing, not for regular API requests.
# File 'lib/sphinx/client.rb', line 333
def set_connect_timeout(timeout, retries = 1)
  raise ArgumentError, '"timeout" argument must be Integer' unless timeout.kind_of?(Integer)
  raise ArgumentError, '"retries" argument must be Integer' unless retries.kind_of?(Integer)
  raise ArgumentError, '"retries" argument must be greater than 0' unless retries > 0

  @timeout = timeout
  @retries = retries
  self
end
#set_field_weights(weights) ⇒ Sphinx::Client Also known as: SetFieldWeights
Binds per-field weights by name. Parameter must be a Hash mapping string field names to integer weights.
Match ranking can be affected by per-field weights. For instance, see Section 4.4, “Weighting” for an explanation of how phrase proximity ranking is affected. This call lets you specify what non-default weights to assign to different full-text fields.
The weights must be positive 32-bit integers. The final weight will be a 32-bit integer too. Default weight value is 1. Unknown field names will be silently ignored.
There is no enforced limit on the maximum weight value at the moment. However, beware that if you set it too high you can start hitting 32-bit wraparound issues. For instance, if you set a weight of 10,000,000 and search in extended mode, then the maximum possible weight will be equal to 10 million (your weight) multiplied by 1 thousand (internal BM25 scaling factor, see Section 4.4, “Weighting”) multiplied by 1 or more (phrase proximity rank). The result is at least 10 billion, which does not fit in 32 bits and will be wrapped around, producing unexpected results.
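The wraparound described above is easy to see with plain arithmetic (the 10,000,000 weight and the 1,000 BM25 scaling factor come from the text; the modulo by 2**32 models the unsigned 32-bit overflow):

```ruby
weight     = 10_000_000            # per-field weight set by the caller
bm25_scale = 1_000                 # internal BM25 scaling factor

exact   = weight * bm25_scale      # 10 billion, needs more than 32 bits
wrapped = exact % (2**32)          # what a 32-bit unsigned counter stores
```

The wrapped value bears no useful relation to the intended weight, which is why excessively large per-field weights produce unexpected rankings.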
# File 'lib/sphinx/client.rb', line 835
def set_field_weights(weights)
  raise ArgumentError, '"weights" argument must be Hash' unless weights.kind_of?(Hash)
  weights.each do |name, weight|
    unless (name.kind_of?(String) or name.kind_of?(Symbol)) and weight.kind_of?(Integer)
      raise ArgumentError, '"weights" argument must be Hash map of strings to integers'
    end
  end

  @fieldweights = weights
  self
end
#set_filter(attribute, values, exclude = false) ⇒ Sphinx::Client Also known as: SetFilter
Adds new integer values set filter.
On this call, an additional new filter is added to the existing list of filters. attribute must be a string with the attribute name. values must be a plain array containing integer values. exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when exclude is false) or reject them.
Only those documents where the attribute column value stored in the index matches any of the values from the values array will be matched (or rejected, if exclude is true).
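The accept/reject semantics can be emulated in plain Ruby to make the exclude flag concrete (the `values_filter_pass?` helper and the sample attribute values are hypothetical):

```ruby
# Emulate SPH_FILTER_VALUES: keep a document when its attribute value is
# in +values+; with +exclude+ true, reject such documents instead.
def values_filter_pass?(attr_value, values, exclude = false)
  exclude ? !values.include?(attr_value) : values.include?(attr_value)
end

docs = { 1 => 10, 2 => 20, 3 => 30 }  # doc id => attribute value
kept = docs.select { |_id, v| values_filter_pass?(v, [10, 30]) }.keys
```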
# File 'lib/sphinx/client.rb', line 957
def set_filter(attribute, values, exclude = false)
  raise ArgumentError, '"attribute" argument must be String or Symbol' unless attribute.kind_of?(String) or attribute.kind_of?(Symbol)
  values = [values] if values.kind_of?(Integer)
  raise ArgumentError, '"values" argument must be Array' unless values.kind_of?(Array)
  raise ArgumentError, '"values" argument must be Array of Integers' unless values.all? { |v| v.kind_of?(Integer) }
  raise ArgumentError, '"exclude" argument must be Boolean' unless [TrueClass, FalseClass].include?(exclude.class)

  if values.any?
    @filters << { 'type' => SPH_FILTER_VALUES, 'attr' => attribute.to_s, 'exclude' => exclude, 'values' => values }
  end
  self
end
#set_filter_float_range(attribute, min, max, exclude = false) ⇒ Sphinx::Client Also known as: SetFilterFloatRange
Adds new float range filter.
On this call, an additional new filter is added to the existing list of filters. attribute must be a string with the attribute name. min and max must be floats that define the acceptable attribute values range (including the boundaries). exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when exclude is false) or reject them.
Only those documents where the attribute column value stored in the index is between min and max (including values that are exactly equal to min or max) will be matched (or rejected, if exclude is true).
# File 'lib/sphinx/client.rb', line 1046
def set_filter_float_range(attribute, min, max, exclude = false)
  raise ArgumentError, '"attribute" argument must be String or Symbol' unless attribute.kind_of?(String) or attribute.kind_of?(Symbol)
  raise ArgumentError, '"min" argument must be Numeric' unless min.kind_of?(Numeric)
  raise ArgumentError, '"max" argument must be Numeric' unless max.kind_of?(Numeric)
  raise ArgumentError, '"max" argument greater or equal to "min"' unless min <= max
  raise ArgumentError, '"exclude" argument must be Boolean' unless exclude.kind_of?(TrueClass) or exclude.kind_of?(FalseClass)

  @filters << { 'type' => SPH_FILTER_FLOATRANGE, 'attr' => attribute.to_s, 'exclude' => exclude, 'min' => min.to_f, 'max' => max.to_f }
  self
end
#set_filter_range(attribute, min, max, exclude = false) ⇒ Sphinx::Client Also known as: SetFilterRange
Adds new integer range filter.
On this call, an additional new filter is added to the existing list of filters. attribute must be a string with the attribute name. min and max must be integers that define the acceptable attribute values range (including the boundaries). exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when exclude is false) or reject them.
Only those documents where the attribute column value stored in the index is between min and max (including values that are exactly equal to min or max) will be matched (or rejected, if exclude is true).
# File 'lib/sphinx/client.rb', line 1003
def set_filter_range(attribute, min, max, exclude = false)
  raise ArgumentError, '"attribute" argument must be String or Symbol' unless attribute.kind_of?(String) or attribute.kind_of?(Symbol)
  raise ArgumentError, '"min" argument must be Integer' unless min.kind_of?(Integer)
  raise ArgumentError, '"max" argument must be Integer' unless max.kind_of?(Integer)
  raise ArgumentError, '"max" argument greater or equal to "min"' unless min <= max
  raise ArgumentError, '"exclude" argument must be Boolean' unless exclude.kind_of?(TrueClass) or exclude.kind_of?(FalseClass)

  @filters << { 'type' => SPH_FILTER_RANGE, 'attr' => attribute.to_s, 'exclude' => exclude, 'min' => min, 'max' => max }
  self
end
#set_geo_anchor(attrlat, attrlong, lat, long) ⇒ Sphinx::Client Also known as: SetGeoAnchor
Sets the anchor point for geosphere distance (geodistance) calculations, and enables them.
attrlat and attrlong must be strings that contain the names of the latitude and longitude attributes, respectively. lat and long are floats that specify the anchor point latitude and longitude, in radians.
Once an anchor point is set, you can use the magic "@geodist" attribute name in your filters and/or sorting expressions. Sphinx will compute the geosphere distance between the given anchor point and a point specified by the latitude and longitude attributes from each full-text match, and attach this value to the resulting match. The latitude and longitude values, both in #set_geo_anchor and in the index attribute data, are expected to be in radians. The result will be returned in meters, so a geodistance value of 1000.0 means 1 km. 1 mile is approximately 1609.344 meters.
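Two unit details trip people up here: coordinates go in as radians, and @geodist comes back in meters. A small sketch of the conversions (the `to_radians` helper and the sample anchor latitude are hypothetical; the 1609.344 meters-per-mile figure is from the text):

```ruby
# Degrees -> radians, as expected by set_geo_anchor and the index attributes.
def to_radians(degrees)
  degrees * Math::PI / 180.0
end

METERS_PER_MILE = 1609.344

lat_rad = to_radians(55.7558)      # hypothetical anchor latitude in degrees
km      = 1000.0 / 1000.0          # a @geodist of 1000.0 meters is 1 km
miles   = 1000.0 / METERS_PER_MILE # the same distance in miles
```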
# File 'lib/sphinx/client.rb', line 1089
def set_geo_anchor(attrlat, attrlong, lat, long)
  raise ArgumentError, '"attrlat" argument must be String or Symbol' unless attrlat.kind_of?(String) or attrlat.kind_of?(Symbol)
  raise ArgumentError, '"attrlong" argument must be String or Symbol' unless attrlong.kind_of?(String) or attrlong.kind_of?(Symbol)
  raise ArgumentError, '"lat" argument must be Numeric' unless lat.kind_of?(Numeric)
  raise ArgumentError, '"long" argument must be Numeric' unless long.kind_of?(Numeric)

  @anchor = { 'attrlat' => attrlat.to_s, 'attrlong' => attrlong.to_s, 'lat' => lat.to_f, 'long' => long.to_f }
  self
end
#set_group_by(attribute, func, groupsort = '@group desc') ⇒ Sphinx::Client Also known as: SetGroupBy
Sets grouping attribute, function, and groups sorting mode; and enables grouping (as described in Section 4.6, “Grouping (clustering) search results”).
attribute is a string that contains the group-by attribute name. func is a constant that chooses a function applied to the attribute value in order to compute the group-by key. groupsort is a clause that controls how the groups will be sorted. Its syntax is similar to that described in Section 4.5, “SPH_SORT_EXTENDED mode”.
The grouping feature is very similar in nature to the GROUP BY clause from SQL. Results produced by this function call are going to be the same as produced by the following pseudo code:
SELECT ... GROUP BY func(attribute) ORDER BY groupsort
Note that it's groupsort that affects the order of matches in the final result set. Sorting mode (see #set_sort_mode) affects the ordering of matches within a group, ie. what match will be selected as the best one from the group. So you can, for instance, order the groups by matches count and select the most relevant match within each group at the same time.
Starting with version 0.9.9-rc2, aggregate functions (AVG(), MIN(), MAX(), SUM()) are supported through the #set_select API call when using GROUP BY.
You can specify group function and attribute as String (“attr”, “day”, etc), Symbol (:attr, :day, etc), or Fixnum constant (SPH_GROUPBY_ATTR, SPH_GROUPBY_DAY, etc).
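The GROUP BY analogy can be made concrete by emulating the grouping over a few matches in plain Ruby (the sample matches and the category attribute are hypothetical; ordering groups by match count corresponds to a '@count desc' groupsort):

```ruby
matches = [
  { :id => 1, :category => 5 },
  { :id => 2, :category => 7 },
  { :id => 3, :category => 5 },
]

# Roughly: SELECT category, COUNT(*) GROUP BY category ORDER BY @count DESC
groups = matches.group_by { |m| m[:category] }
counts = groups.map { |cat, ms| [cat, ms.length] }.sort_by { |_, n| -n }
```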
# File 'lib/sphinx/client.rb', line 1152
def set_group_by(attribute, func, groupsort = '@group desc')
  raise ArgumentError, '"attribute" argument must be String or Symbol' unless attribute.kind_of?(String) or attribute.kind_of?(Symbol)
  raise ArgumentError, '"groupsort" argument must be String' unless groupsort.kind_of?(String)
  func = parse_sphinx_constant('func', func, :groupby)

  @groupby = attribute.to_s
  @groupfunc = func
  @groupsort = groupsort
  self
end
#set_group_distinct(attribute) ⇒ Sphinx::Client Also known as: SetGroupDistinct
Sets attribute name for per-group distinct values count calculations. Only available for grouping queries.
attribute is a string that contains the attribute name. For each group, all values of this attribute will be stored (as RAM limits permit), then the amount of distinct values will be calculated and returned to the client. This feature is similar to the COUNT(DISTINCT) clause in standard SQL; so these Sphinx calls:
sphinx.set_group_by(:category, :attr, '@count desc')
sphinx.set_group_distinct(:vendor)
can be expressed using the following SQL clauses:
SELECT id, weight, all-attributes,
COUNT(DISTINCT vendor) AS @distinct,
COUNT(*) AS @count
FROM products
GROUP BY category
ORDER BY @count DESC
In the sample pseudo code shown just above, the #set_group_distinct call corresponds to the COUNT(DISTINCT vendor) clause only. The GROUP BY, ORDER BY, and COUNT(*) clauses are all an equivalent of the #set_group_by settings. Both queries will return one matching row for each category. In addition to indexed attributes, matches will also contain the total per-category matches count, and the count of distinct vendor IDs within each category.
# File 'lib/sphinx/client.rb', line 1207
def set_group_distinct(attribute)
  raise ArgumentError, '"attribute" argument must be String or Symbol' unless attribute.kind_of?(String) or attribute.kind_of?(Symbol)
  @groupdistinct = attribute.to_s
  self
end
#set_id_range(min, max) ⇒ Sphinx::Client Also known as: SetIDRange
Sets an accepted range of document IDs. Parameters must be integers. Defaults are 0 and 0; that combination means to not limit by range.
After this call, only those records that have a document ID between min and max (including IDs exactly equal to min or max) will be matched.
# File 'lib/sphinx/client.rb', line 916
def set_id_range(min, max)
  raise ArgumentError, '"min" argument must be Integer' unless min.kind_of?(Integer)
  raise ArgumentError, '"max" argument must be Integer' unless max.kind_of?(Integer)
  raise ArgumentError, '"max" argument greater or equal to "min"' unless min <= max

  @min_id = min
  @max_id = max
  self
end
#set_index_weights(weights) ⇒ Sphinx::Client Also known as: SetIndexWeights
Sets per-index weights, and enables weighted summing of match weights across different indexes. Parameter must be a hash (associative array) mapping string index names to integer weights. Default is an empty hash, which means to disable weighted summing.
When a match with the same document ID is found in several different local indexes, by default Sphinx simply chooses the match from the index specified last in the query. This is to support searching through partially overlapping index partitions.
However, in some cases the indexes are not just partitions, and you might want to sum the weights across the indexes instead of picking one. #set_index_weights lets you do that. With summing enabled, the final match weight in the result set will be computed as a sum of the match weight coming from the given index multiplied by the respective per-index weight specified in this call. Ie. if document 123 is found in index A with a weight of 2, and also in index B with a weight of 3, and you called #set_index_weights with {"A"=>100, "B"=>10}, the final weight returned to the client will be 2*100+3*10 = 230.
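The weighted-summing formula from the paragraph above, as plain arithmetic (document 123 with weight 2 in index A and weight 3 in index B, per the example):

```ruby
index_weights = { 'A' => 100, 'B' => 10 }  # as passed to set_index_weights
per_index     = { 'A' => 2,   'B' => 3 }   # match weights for document 123

# Sum of (match weight in index) * (per-index weight) over all indexes.
final = per_index.sum { |index, w| w * index_weights[index] }
```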
# File 'lib/sphinx/client.rb', line 881
def set_index_weights(weights)
  raise ArgumentError, '"weights" argument must be Hash' unless weights.kind_of?(Hash)
  weights.each do |index, weight|
    unless (index.kind_of?(String) or index.kind_of?(Symbol)) and weight.kind_of?(Integer)
      raise ArgumentError, '"weights" argument must be Hash map of strings to integers'
    end
  end

  @indexweights = weights
  self
end
#set_limits(offset, limit, max = 0, cutoff = 0) ⇒ Sphinx::Client Also known as: SetLimits
Sets the offset into the server-side result set (offset) and the amount of matches to return to the client starting from that offset (limit). Can additionally control the maximum server-side result set size for the current query (max_matches) and the threshold amount of matches to stop searching at (cutoff). All parameters must be non-negative integers.
The first two parameters to #set_limits are identical in behavior to the MySQL LIMIT clause. They instruct searchd to return at most limit matches starting from match number offset. The default offset and limit settings are 0 and 20, that is, to return the first 20 matches.
The max_matches setting controls how many matches searchd will keep in RAM while searching. All matching documents will be normally processed, ranked, filtered, and sorted even if max_matches is set to 1. But only the best N documents are stored in memory at any given moment for performance and RAM usage reasons, and this setting controls that N. Note that there are two places where the max_matches limit is enforced. The per-query limit is controlled by this API call, but there is also a per-server limit controlled by the max_matches setting in the config file. To prevent RAM usage abuse, the server will not allow setting the per-query limit higher than the per-server limit.
You can't retrieve more than max_matches matches to the client application. The default limit is set to 1000. Normally, you should not need to go over this limit. One thousand records is enough to present to the end user. And if you're thinking about pulling the results into the application for further sorting or filtering, that would be much more efficient if performed on the Sphinx side.
The cutoff setting is intended for advanced performance control. It tells searchd to forcibly stop the search query once cutoff matches have been found and processed.
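The offset/limit pair maps directly onto page-style navigation; a common sketch (the `page_to_limits` helper and the page/per_page names are hypothetical application-level conventions, not part of the API):

```ruby
# Translate a 1-based page number into the offset/limit pair for set_limits.
def page_to_limits(page, per_page)
  [(page - 1) * per_page, per_page]
end

offset, limit = page_to_limits(3, 20)
# would then be applied as: sphinx.set_limits(offset, limit)
```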
# File 'lib/sphinx/client.rb', line 459
def set_limits(offset, limit, max = 0, cutoff = 0)
  raise ArgumentError, '"offset" argument must be Integer' unless offset.kind_of?(Integer)
  raise ArgumentError, '"limit" argument must be Integer' unless limit.kind_of?(Integer)
  raise ArgumentError, '"max" argument must be Integer' unless max.kind_of?(Integer)
  raise ArgumentError, '"cutoff" argument must be Integer' unless cutoff.kind_of?(Integer)
  raise ArgumentError, '"offset" argument should be greater or equal to zero' unless offset >= 0
  raise ArgumentError, '"limit" argument should be greater to zero' unless limit > 0
  raise ArgumentError, '"max" argument should be greater or equal to zero' unless max >= 0
  raise ArgumentError, '"cutoff" argument should be greater or equal to zero' unless cutoff >= 0

  @offset = offset
  @limit = limit
  @maxmatches = max if max > 0
  @cutoff = cutoff if cutoff > 0
  self
end
#set_match_mode(mode) ⇒ Sphinx::Client Also known as: SetMatchMode
Sets full-text query matching mode.
Parameter must be a Fixnum constant specifying one of the known modes (SPH_MATCH_ALL, SPH_MATCH_ANY, etc), a String with an identifier ("all", "any", etc), or a Symbol (:all, :any, etc).
# File 'lib/sphinx/client.rb', line 687
def set_match_mode(mode)
  @mode = parse_sphinx_constant('mode', mode, :match)
  self
end
#set_max_query_time(max) ⇒ Sphinx::Client Also known as: SetMaxQueryTime
Sets the maximum search query time, in milliseconds. Parameter must be a non-negative integer. Default value is 0, which means “do not limit”.
Similar to the cutoff setting from #set_limits, but limits elapsed query time instead of processed matches count. Local search queries will be stopped once that much time has elapsed. Note that if you're performing a search which queries several local indexes, this limit applies to each index separately.
# File 'lib/sphinx/client.rb', line 495
def set_max_query_time(max)
  raise ArgumentError, '"max" argument must be Integer' unless max.kind_of?(Integer)
  raise ArgumentError, '"max" argument should be greater or equal to zero' unless max >= 0
  @maxquerytime = max
  self
end
#set_outer_select(orderby, offset, limit) ⇒ Object Also known as: SetOuterSelect
# File 'lib/sphinx/client.rb', line 648
def set_outer_select(orderby, offset, limit)
  raise ArgumentError, '"orderby" argument must be String' unless orderby.kind_of?(String)
  raise ArgumentError, '"offset" argument must be Integer' unless offset.kind_of?(Integer)
  raise ArgumentError, '"limit" argument must be Integer' unless limit.kind_of?(Integer)
  raise ArgumentError, '"offset" argument should be greater or equal to zero' unless offset >= 0
  raise ArgumentError, '"limit" argument should be greater to zero' unless limit > 0

  @outerorderby = orderby
  @outeroffset = offset
  @outerlimit = limit
  @hasouter = true
  self
end
#set_override(attribute, attrtype, values) ⇒ Sphinx::Client Also known as: SetOverride
Sets temporary (per-query) per-document attribute value overrides. Only supports scalar attributes. values must be a Hash that maps document IDs to overridden attribute values.
The override feature lets you temporarily update attribute values for some documents within a single query, leaving all other queries unaffected. This might be useful for personalized data. For example, assume you're implementing a personalized search function that wants to boost the posts that the user's friends recommend. Such data is not just dynamic, but also personal; so you can't simply put it in the index because you don't want everyone's searches affected. Overrides, on the other hand, are local to a single query and invisible to everyone else. So you can, say, set up a "friends_weight" value for every document, defaulting to 0, then temporarily override it with 1 for documents 123, 456 and 789 (recommended by exactly the friends of the current user), and use that value when ranking.
You can specify attribute type as String (“integer”, “float”, etc), Symbol (:integer, :float, etc), or Fixnum constant (SPH_ATTR_INTEGER, SPH_ATTR_FLOAT, etc).
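The effect of an override can be emulated by merging the per-document values over the indexed ones (the friends_weight attribute and document IDs follow the example in the text; document 999 stands in for any non-overridden document):

```ruby
indexed   = { 123 => 0, 456 => 0, 999 => 0 }  # friends_weight as stored in the index
overrides = { 123 => 1, 456 => 1 }            # per-query override values

# The override value wins for listed documents; others keep the indexed value.
effective = indexed.merge(overrides)
```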
# File 'lib/sphinx/client.rb', line 536
def set_override(attribute, attrtype, values)
  raise ArgumentError, '"attribute" argument must be String or Symbol' unless attribute.kind_of?(String) or attribute.kind_of?(Symbol)
  attrtype = parse_sphinx_constant('attrtype', attrtype, :attr)
  raise ArgumentError, '"values" argument must be Hash' unless values.kind_of?(Hash)

  values.each do |id, value|
    raise ArgumentError, '"values" argument must be Hash map of Integer to Integer or Time' unless id.kind_of?(Integer)
    case attrtype
      when SPH_ATTR_TIMESTAMP
        raise ArgumentError, '"values" argument must be Hash map of Integer to Numeric' unless value.kind_of?(Integer) or value.kind_of?(Time)
      when SPH_ATTR_FLOAT
        raise ArgumentError, '"values" argument must be Hash map of Integer to Numeric' unless value.kind_of?(Numeric)
      else
        # SPH_ATTR_INTEGER, SPH_ATTR_ORDINAL, SPH_ATTR_BOOL, SPH_ATTR_BIGINT
        raise ArgumentError, '"values" argument must be Hash map of Integer to Integer' unless value.kind_of?(Integer)
    end
  end

  @overrides << { 'attr' => attribute.to_s, 'type' => attrtype, 'values' => values }
  self
end
#set_query_flag(flag_name, flag_value) ⇒ Sphinx::Client Also known as: SetQueryFlag
Allows controlling a number of per-query options.
Supported options and respectively allowed values are:
- reverse_scan - 0 or 1, lets you control the order in which a full-scan query processes the rows.
- sort_method - "pq" (priority queue, set by default) or "kbuffer" (gives faster sorting for already pre-sorted data, e.g. index data sorted by id). The result set is in both cases the same; picking one option or the other may just improve (or worsen!) performance.
- boolean_simplify - false or true, enables simplifying the query to speed it up.
- idf - either "normalized" (default) or "plain".
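Internally each flag occupies one bit of a flags integer; a sketch of the bit manipulation the implementation relies on (the body of this `set_bit` helper is an assumption, reconstructed from its call site in set_query_flag, not taken from the gem's source):

```ruby
# Set or clear bit +index+ in +flags+ depending on +is_set+.
def set_bit(flags, index, is_set)
  is_set ? flags | (1 << index) : flags & ~(1 << index)
end

flags = 0
flags = set_bit(flags, 0, true)   # turn a flag at bit 0 on
flags = set_bit(flags, 2, true)   # turn a flag at bit 2 on
flags = set_bit(flags, 0, false)  # turn bit 0 back off
```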
# File 'lib/sphinx/client.rb', line 628
def set_query_flag(flag_name, flag_value)
  raise ArgumentError, 'unknown "flag_name" argument value' unless QUERY_FLAGS.has_key?(flag_name)
  flag = QUERY_FLAGS[flag_name]
  values = QUERY_FLAGS[flag_name][:values]

  if flag_name.to_s == 'max_predicted_time'
    raise ArgumentError, "\"flag_value\" should be a positive integer for \"max_predicted_time\" flag" unless flag_value.kind_of?(Integer) and flag_value >= 0
    @predictedtime = flag_value
  elsif !values.include?(flag_value)
    raise ArgumentError, "unknown \"flag_value\", should be one of #{values.inspect}"
  end

  is_set = values.respond_to?(:call) ? values.call(flag_value) : values.index(flag_value) == 1
  @query_flags = set_bit(@query_flags, flag[:index], is_set)
  self
end
#set_ranking_mode(ranker, rankexpr = '') ⇒ Sphinx::Client Also known as: SetRankingMode
Sets ranking mode. Only available in SPH_MATCH_EXTENDED2
matching mode at the time of this writing. Parameter must be a constant specifying one of the known modes.
By default, in the EXTENDED
matching mode Sphinx computes two factors which contribute to the final match weight. The major part is a phrase proximity value between the document text and the query. The minor part is so-called BM25 statistical function, which varies from 0 to 1 depending on the keyword frequency within document (more occurrences yield higher weight) and within the whole index (more rare keywords yield higher weight).
However, in some cases you'd want to compute weight differently - or maybe avoid computing it at all for performance reasons because you're sorting the result set by something else anyway. This can be accomplished by setting the appropriate ranking mode.
You can specify ranking mode as String (“proximity_bm25”, “bm25”, etc), Symbol (:proximity_bm25, :bm25, etc), or Fixnum constant (SPH_RANK_PROXIMITY_BM25, SPH_RANK_BM25, etc).
# File 'lib/sphinx/client.rb', line 731

def set_ranking_mode(ranker, rankexpr = '')
  ranker = parse_sphinx_constant('ranker', ranker, :rank)

  raise ArgumentError, '"rankexpr" argument must be String' unless rankexpr.kind_of?(String)
  raise ArgumentError, '"rankexpr" should not be empty if ranker is SPH_RANK_EXPR' if ranker == SPH_RANK_EXPR and rankexpr.empty?

  @ranker = ranker
  @rankexpr = rankexpr
  self
end
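The three accepted argument forms (String, Symbol, Fixnum constant) can be illustrated with a minimal, self-contained sketch. The RANK_MODES map and normalize_rank helper below are hypothetical stand-ins for the gem's internal parse_sphinx_constant, shown only to make the normalization idea concrete:

```ruby
# Hypothetical stand-in for the internal constant parser: String, Symbol,
# and Integer forms of a ranker all normalize to the same numeric constant.
SPH_RANK_PROXIMITY_BM25 = 0
SPH_RANK_BM25           = 1

RANK_MODES = { :proximity_bm25 => SPH_RANK_PROXIMITY_BM25, :bm25 => SPH_RANK_BM25 }

def normalize_rank(value)
  case value
  when Integer then value
  when String, Symbol
    RANK_MODES.fetch(value.to_sym) { raise ArgumentError, "unknown ranker: #{value.inspect}" }
  else
    raise ArgumentError, '"ranker" must be Integer, String, or Symbol'
  end
end
```

All three of normalize_rank('bm25'), normalize_rank(:bm25), and normalize_rank(SPH_RANK_BM25) resolve to the same constant, which is why the doc lists the forms interchangeably.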
#set_request_timeout(timeout, retries = 1) ⇒ Sphinx::Client Also known as: SetRequestTimeout
Sets the time allowed for performing a request to the server before giving up, and the number of retries to perform.
If the request fails, an appropriate error code is returned to the application so that application-level error handling can advise the user.
When multiple servers are configured through the #set_servers method and retries is greater than 1, the library will retry the request on the same server (with a full reconnect). If the connection fails, behavior depends on the #set_connect_timeout settings.
Note that this timeout is used only for performing the request, not for establishing the connection.
# File 'lib/sphinx/client.rb', line 371

def set_request_timeout(timeout, retries = 1)
  raise ArgumentError, '"timeout" argument must be Integer' unless timeout.kind_of?(Integer)
  raise ArgumentError, '"retries" argument must be Integer' unless retries.kind_of?(Integer)
  raise ArgumentError, '"retries" argument must be greater than 0' unless retries > 0

  @reqtimeout = timeout
  @reqretries = retries
  self
end
#set_retries(count, delay = 0) ⇒ Sphinx::Client Also known as: SetRetries
Sets distributed retry count and delay.
On temporary failures searchd will attempt up to count retries per agent. delay is the delay between the retries, in milliseconds. Retries are disabled by default. Note that this call will not make the API itself retry on temporary failure; it only tells searchd to do so. Currently, the list of temporary failures includes all kinds of connection failures and maxed-out (too busy) remote agents.
# File 'lib/sphinx/client.rb', line 402

def set_retries(count, delay = 0)
  raise ArgumentError, '"count" argument must be Integer' unless count.kind_of?(Integer)
  raise ArgumentError, '"delay" argument must be Integer' unless delay.kind_of?(Integer)

  @retrycount = count
  @retrydelay = delay
  self
end
#set_select(select) ⇒ Sphinx::Client Also known as: SetSelect
Sets the select clause, listing specific attributes to fetch, and expressions to compute and fetch. Clause syntax mimics SQL.
#set_select is very similar to the part of a typical SQL query between SELECT and FROM. It lets you choose what attributes (columns) to fetch, and also what expressions over the columns to compute and fetch. A notable difference from SQL is that expressions must always be aliased to a correct identifier (consisting of letters and digits) using the AS keyword. SQL also lets you do that but does not require it. Sphinx enforces aliases so that the computation results can always be returned under a “normal” name in the result set, used in other clauses, etc.
Everything else is basically identical to SQL. Star ('*') is supported. Functions are supported. An arbitrary number of expressions is supported. Computed expressions can be used for sorting, filtering, and grouping, just like regular attributes.
Starting with version 0.9.9-rc2, aggregate functions (AVG(), MIN(), MAX(), SUM()) are supported when using GROUP BY.
Expression sorting (Section 4.5, “SPH_SORT_EXPR mode”) and geodistance functions (#set_geo_anchor) are now internally implemented using this computed expressions mechanism, using magic names '@expr' and '@geodist' respectively.
# File 'lib/sphinx/client.rb', line 601

def set_select(select)
  raise ArgumentError, '"select" argument must be String' unless select.kind_of?(String)

  @select = select
  self
end
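As an illustration of the clause syntax (the expression itself is hypothetical, the attribute names are not from this gem), a select clause mixing a star with an aliased computed expression might look like:

```ruby
# A star plus a computed expression; note the mandatory AS alias,
# which lets the result be referenced as "myweight" in other clauses.
select = "*, @weight + (user_karma + ln(pageviews)) * 0.1 AS myweight"
# would then be passed as: client.set_select(select)
```

The computed "myweight" column could subsequently be used with #set_sort_mode or filters, just like a stored attribute.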
#set_server(host, port = 9312) ⇒ Sphinx::Client Also known as: SetServer
Sets searchd host name and TCP port. All subsequent requests will use the new host and port settings. Default host and port are 'localhost' and 9312, respectively.
Also, you can specify an absolute path to Sphinx's UNIX socket as host; in this case, pass port as 0 or nil.
# File 'lib/sphinx/client.rb', line 225

def set_server(host, port = 9312)
  raise ArgumentError, '"host" argument must be String' unless host.kind_of?(String)

  path = nil
  # Check if UNIX socket should be used
  if host[0] == ?/
    path = host
  elsif host[0, 7] == 'unix://'
    path = host[7..-1]
  else
    raise ArgumentError, '"port" argument must be Integer' unless port.kind_of?(Integer)
  end
  host = port = nil unless path.nil?

  @servers = [Sphinx::Server.new(self, host, port, path)].freeze
  logger.info { "[sphinx] servers now: #{@servers.inspect}" } if logger
  self
end
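The host/path dispatch above can be restated as a standalone sketch (resolve_endpoint is a hypothetical name, not part of the gem): an absolute path or a unix:// URL selects a socket path, while anything else is treated as a TCP host name.

```ruby
# Hypothetical restatement of the UNIX-socket detection in set_server.
def resolve_endpoint(host, port = 9312)
  if host.start_with?('/')
    { :host => nil, :port => nil, :path => host }
  elsif host.start_with?('unix://')
    { :host => nil, :port => nil, :path => host[7..-1] }  # strip the 7-char scheme
  else
    { :host => host, :port => port, :path => nil }
  end
end

resolve_endpoint('localhost')                # TCP: host 'localhost', port 9312
resolve_endpoint('/var/run/searchd.sock')    # UNIX socket by absolute path
resolve_endpoint('unix:///tmp/searchd.sock') # UNIX socket via unix:// URL
```

This mirrors why the doc says to pass port as 0 or nil for socket paths: once a path is detected, host and port are discarded.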
#set_servers(servers) ⇒ Sphinx::Client Also known as: SetServers
Sets the list of searchd servers. Each subsequent request will use the next server in the list (round-robin). If one server fails, the request can be retried on another server (see #set_connect_timeout and #set_request_timeout).
The method accepts an Array of Hashes; each should have :host and :port (to connect to searchd through the network) or :path (an absolute path to a UNIX socket) specified.
# File 'lib/sphinx/client.rb', line 274

def set_servers(servers)
  raise ArgumentError, '"servers" argument must be Array' unless servers.kind_of?(Array)
  raise ArgumentError, '"servers" argument must be not empty' if servers.empty?

  @servers = servers.map do |server|
    raise ArgumentError, '"servers" argument must be Array of Hashes' unless server.kind_of?(Hash)

    server = server.with_indifferent_access
    host = server[:path] || server[:host]
    port = server[:port] || 9312
    path = nil
    raise ArgumentError, '"host" argument must be String' unless host.kind_of?(String)

    # Check if UNIX socket should be used
    if host[0] == ?/
      path = host
    elsif host[0, 7] == 'unix://'
      path = host[7..-1]
    else
      raise ArgumentError, '"port" argument must be Integer' unless port.kind_of?(Integer)
    end
    host = port = nil unless path.nil?

    Sphinx::Server.new(self, host, port, path)
  end.freeze
  logger.info { "[sphinx] servers now: #{@servers.inspect}" } if logger
  self
end
#set_sort_mode(mode, sortby = '') ⇒ Sphinx::Client Also known as: SetSortMode
Sets matches sorting mode.
You can specify sorting mode as String (“relevance”, “attr_desc”, etc), Symbol (:relevance, :attr_desc, etc), or Fixnum constant (SPH_SORT_RELEVANCE, SPH_SORT_ATTR_DESC, etc).
# File 'lib/sphinx/client.rb', line 765

def set_sort_mode(mode, sortby = '')
  mode = parse_sphinx_constant('mode', mode, :sort)

  raise ArgumentError, '"sortby" argument must be String' unless sortby.kind_of?(String)
  raise ArgumentError, '"sortby" should not be empty unless mode is SPH_SORT_RELEVANCE' unless mode == SPH_SORT_RELEVANCE or !sortby.empty?

  @sort = mode
  @sortby = sortby
  self
end
#set_weights(weights) ⇒ Sphinx::Client Also known as: SetWeights
Binds per-field weights in the order of their appearance in the index.
Deprecated: use #set_field_weights instead.
# File 'lib/sphinx/client.rb', line 790

def set_weights(weights)
  raise ArgumentError, '"weights" argument must be Array' unless weights.kind_of?(Array)
  weights.each do |weight|
    raise ArgumentError, '"weights" argument must be Array of integers' unless weight.kind_of?(Integer)
  end

  @weights = weights
  self
end
#status ⇒ Array<Array>, Array<Hash> Also known as: Status
Queries searchd status, and returns an array of status variable name and value pairs.
# File 'lib/sphinx/client.rb', line 2098

def status
  request = Request.new
  request.put_int(1)

  # parse response
  results = @servers.map do |server|
    begin
      response = perform_request(:status, request, nil, server)
      rows, cols = response.get_ints(2)
      status = (0...rows).map do
        (0...cols).map { response.get_string }
      end
      HashWithIndifferentAccess.new(:server => server.to_s, :status => status)
    rescue SphinxError
      # Re-raise error when a single server configured
      raise if @servers.size == 1
      HashWithIndifferentAccess.new(:server => server.to_s, :error => self.last_error)
    end
  end

  @servers.size > 1 ? results : results.first[:status]
end
#update_attributes(index, attrs, values, mva = false, ignore_non_existent = false) ⇒ Integer Also known as: UpdateAttributes
Instantly updates given attribute values in given documents. Returns the number of actually updated documents (0 or more) on success, or -1 on failure.
index is the name of the index (or indexes) to be updated. attrs is a plain array of string attribute names, listing the attributes that are updated. values is a Hash where each key is a document ID and each value is a plain array of new attribute values.
index can be either a single index name or a list, like in #query. Unlike #query, wildcards are not allowed and all the indexes to update must be specified explicitly. The list of indexes can include distributed index names; updates on distributed indexes will be pushed to all agents.
The updates only work with the docinfo=extern storage strategy. They are very fast because they work entirely in RAM, but they can also be made persistent: updates are saved on disk on a clean searchd shutdown (initiated by a SIGTERM signal). With additional restrictions, updates are also possible on MVA attributes; refer to the mva_updates_pool directive for details.
For example, one statement could update document 1 in index “test1”, setting “group_id” to 456. Another could update documents 1001, 1002 and 1003 in index “products”: for document 1001, setting the new price to 123 and the new amount in stock to 5; for document 1002, price 37 and amount 11; and so on. A third could update document 1 in index “test2”, setting MVA attribute “group_id” to [456, 789].
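Based on the descriptions above, those calls might be sketched like this (client stands for a connected Sphinx::Client; index and attribute names are taken from the prose, and the argument shapes follow the method's own validation):

```ruby
# Sketch of the updates described above; client is assumed to be a
# connected Sphinx::Client, so the calls themselves are shown as comments.
#
# client.update_attributes('test1', ['group_id'], { 1 => [456] })
# client.update_attributes('products', ['price', 'amount_in_stock'],
#                          { 1001 => [123, 5], 1002 => [37, 11] })
# client.update_attributes('test2', ['group_id'], { 1 => [[456, 789]] }, true)

# The argument shapes can be checked without a server: every per-document
# array must have exactly as many elements as there are attribute names.
attrs  = ['price', 'amount_in_stock']
values = { 1001 => [123, 5], 1002 => [37, 11] }
values.each { |id, entry| raise 'shape mismatch' unless entry.length == attrs.length }
```

Note the fourth argument (mva = true) in the last call: with it, each value is an Array of Arrays of Integers rather than an Array of Integers.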
# File 'lib/sphinx/client.rb', line 1994

def update_attributes(index, attrs, values, mva = false, ignore_non_existent = false)
  # verify everything
  raise ArgumentError, '"index" argument must be String' unless index.kind_of?(String) or index.kind_of?(Symbol)
  raise ArgumentError, '"mva" argument must be Boolean' unless mva.kind_of?(TrueClass) or mva.kind_of?(FalseClass)
  raise ArgumentError, '"ignore_non_existent" argument must be Boolean' unless ignore_non_existent.kind_of?(TrueClass) or ignore_non_existent.kind_of?(FalseClass)

  raise ArgumentError, '"attrs" argument must be Array' unless attrs.kind_of?(Array)
  attrs.each do |attr|
    raise ArgumentError, '"attrs" argument must be Array of Strings' unless attr.kind_of?(String) or attr.kind_of?(Symbol)
  end

  raise ArgumentError, '"values" argument must be Hash' unless values.kind_of?(Hash)
  values.each do |id, entry|
    raise ArgumentError, '"values" argument must be Hash map of Integer to Array' unless id.kind_of?(Integer)
    raise ArgumentError, '"values" argument must be Hash map of Integer to Array' unless entry.kind_of?(Array)
    raise ArgumentError, "\"values\" argument Hash values Array must have #{attrs.length} elements" unless entry.length == attrs.length

    entry.each do |v|
      if mva
        raise ArgumentError, '"values" argument must be Hash map of Integer to Array of Arrays' unless v.kind_of?(Array)
        v.each do |vv|
          raise ArgumentError, '"values" argument must be Hash map of Integer to Array of Arrays of Integers' unless vv.kind_of?(Integer)
        end
      else
        raise ArgumentError, '"values" argument must be Hash map of Integer to Array of Integers' unless v.kind_of?(Integer)
      end
    end
  end

  # build request
  request = Request.new
  request.put_string index
  request.put_int attrs.length
  request.put_int ignore_non_existent ? 1 : 0
  for attr in attrs
    request.put_string attr
    request.put_int mva ? 1 : 0
  end
  request.put_int values.length
  values.each do |id, entry|
    request.put_int64 id
    if mva
      entry.each { |v| request.put_int_array v }
    else
      request.put_int(*entry)
    end
  end

  response = perform_request(:update, request)

  # parse response
  response.get_int
end