Class: RubyLexer
- Inherits: Object
- Extended by:
- Forwardable
- Includes:
- Enumerable, NestedContexts
- Defined in:
- lib/rubylexer.rb,
lib/rubylexer/0.6.rb,
lib/rubylexer/0.7.0.rb,
lib/rubylexer/token.rb,
lib/rubylexer/charset.rb,
lib/rubylexer/context.rb,
lib/rubylexer/rulexer.rb,
lib/rubylexer/version.rb,
lib/rubylexer/rubycode.rb,
lib/rubylexer/charhandler.rb,
lib/rubylexer/symboltable.rb,
lib/rubylexer/tokenprinter.rb
Overview
rubylexer - a ruby lexer written in ruby
Copyright (C) 2004,2005, 2011 Caleb Clausen
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Defined Under Namespace
Modules: ErrorToken, NestedContexts, RecursiveRubyLexer, RubyLexer1_9, StillIgnoreToken, TokenPat
Classes: AssignmentRhsListEndToken, AssignmentRhsListStartToken, CharHandler, CharSet, DecoratorToken, EncodingDeclToken, EndHeaderToken, EoiToken, EscNlToken, FileAndLineToken, HereBodyToken, HerePlaceholderToken, IgnoreToken, ImplicitParamListEndToken, ImplicitParamListStartToken, KeepWsTokenPrinter, KeywordToken, KwParamListEndToken, KwParamListStartToken, MethNameToken, Newline, NewlineToken, NoWsToken, NumberToken, OperatorToken, OutlinedHereBodyToken, RenderExactlyStringToken, RubyCode, ShebangToken, SimpleTokenPrinter, StringToken, SubitemToken, SymbolTable, SymbolToken, Token, VarNameToken, WToken, WsToken, ZwToken
Constant Summary
- RUBYUNOPERATORS =
%w{ +@ ~ ~@ -@ ! !@ }
- RUBYBINOPERATORS =
%w{ & | ^ / % == === =~ > >= >> < <= << <=> + - * ** }
- RUBYCOMPOPERATORS =
%w{== === =~ > >= < <= <=>}
- RUBYSYMOPERATORS =
RUBYUNOPERATORS+RUBYBINOPERATORS+%w{ [] []= }
- RUBYNONSYMOPERATORS =
%w{!= !~ = => :: ? : , ; . .. ... || && ||= &&=}+ (RUBYBINOPERATORS-RUBYCOMPOPERATORS).map{|op| op+'='}
- RUBYSYMOPERATORREX =
%r{^([&|^/%]|=(==?)|=~|>[=>]?|<(<|=>?)?|[+~\-]@?|\*\*?|\[\]=?)}
- RUBYNONSYMOPERATORREX =
These are the overridable operators (nasty beastie, eh?). The regex above does not match flow-control operators (|| && ! or and if not), op= forms like += -= ||=, or .. ... ?: ; for those, use:
%r{^([%^/\-+|&]=|(\|\||&&)=?|(<<|>>|\*\*?)=|\.{1,3}|[?:,;]|::|=>?|![=~]?)$}
- RUBYOPERATORREX =
/#{RUBYSYMOPERATORREX}|#{RUBYNONSYMOPERATORREX}/o
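Since these operator regexes are plain Ruby constants, they can be exercised on their own. A minimal sketch, copying RUBYSYMOPERATORREX and RUBYNONSYMOPERATORREX verbatim from the entries above:

```ruby
# Copies of the operator regexes from the constant summary above,
# so they can be tried outside the lexer.
RUBYSYMOPERATORREX =
  %r{^([&|^/%]|=(==?)|=~|>[=>]?|<(<|=>?)?|[+~\-]@?|\*\*?|\[\]=?)}
RUBYNONSYMOPERATORREX =
  %r{^([%^/\-+|&]=|(\|\||&&)=?|(<<|>>|\*\*?)=|\.{1,3}|[?:,;]|::|=>?|![=~]?)$}
RUBYOPERATORREX = /#{RUBYSYMOPERATORREX}|#{RUBYNONSYMOPERATORREX}/o

p RUBYSYMOPERATORREX =~ "<=>"    # "<=>" is an overridable (symbol-capable) op
p RUBYNONSYMOPERATORREX =~ "+="  # "+=" only matches the non-symbol set
```

Note that RUBYSYMOPERATORREX is only anchored at the start, so it is a prefix match; the non-symbol regex is anchored at both ends.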
- UNSYMOPS =
always unary
/^[~!]$/
- UBSYMOPS =
ops that could be unary or binary
/^(?:[*&+-]|::)$/
- WHSPCHARS =
WHSPLF+"\\#"
- OPORBEGINWORDLIST =
%w(if unless while until)
- BEGINWORDLIST =
%w(def class module begin for case do)+OPORBEGINWORDLIST
- OPORBEGINWORDS =
"(?:#{OPORBEGINWORDLIST.join '|'})"
- BEGINWORDS =
/^(?:#{BEGINWORDLIST.join '|'})$/o
- FUNCLIKE_KEYWORDLIST_1_9 =
%w[not]
- FUNCLIKE_KEYWORDLIST =
%w/break next redo return yield retry super BEGIN END/
- FUNCLIKE_KEYWORDS =
/^(?:#{FUNCLIKE_KEYWORDLIST.join '|'})$/
- VARLIKE_KEYWORDLIST_1_9 =
%w[__ENCODING__]
- VARLIKE_KEYWORDLIST =
%w/__FILE__ __LINE__ false nil self true/
- VARLIKE_KEYWORDS =
/^(?:#{VARLIKE_KEYWORDLIST.join '|'})$/
- INNERBOUNDINGWORDLIST =
%w"else elsif ensure in then rescue when"
- INNERBOUNDINGWORDS =
"(?:#{INNERBOUNDINGWORDLIST.join '|'})"
- BINOPWORDLIST =
%w"and or"
- BINOPWORDS =
"(?:#{BINOPWORDLIST.join '|'})"
- RUBYKEYWORDS =
%r{ ^(?:alias|#{BINOPWORDS}|defined\?|not|undef|end| #{VARLIKE_KEYWORDS}|#{FUNCLIKE_KEYWORDS}| #{INNERBOUNDINGWORDS}|#{BEGINWORDS} )$ }xo
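The keyword regexes above are all built the same way: a word list is joined into an anchored alternation. A small sketch using the BEGINWORDLIST/OPORBEGINWORDLIST values copied from the entries above:

```ruby
# Word lists copied from the constant summary above; the regex is built
# exactly as in the BEGINWORDS constant.
OPORBEGINWORDLIST = %w(if unless while until)
BEGINWORDLIST = %w(def class module begin for case do)+OPORBEGINWORDLIST
BEGINWORDS = /^(?:#{BEGINWORDLIST.join '|'})$/o

p BEGINWORDS === "class"   # exact keyword matches
p BEGINWORDS === "classy"  # anchors reject partial matches
```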
- RUBYKEYWORDLIST =
%w{alias defined? not undef end}+ BINOPWORDLIST+ VARLIKE_KEYWORDLIST+FUNCLIKE_KEYWORDLIST+ INNERBOUNDINGWORDLIST+BEGINWORDLIST+ VARLIKE_KEYWORDLIST_1_9
- HIGHASCII =
__END__ should not be in this set; it's handled in start_of_line_directives.
?\x80..?\xFF
- NONASCII =
HIGHASCII
- CHARMAPPINGS =
{
  ?$ => :dollar_identifier,
  ?@ => :at_identifier,
  ?a..?z => :identifier, ?A..?Z => :identifier, ?_ => :identifier,
  ?0..?9 => :number,
  ?" => :double_quote, ?' => :single_quote, ?` => :back_quote,
  WHSP => :whitespace, #includes \r
  ?, => :comma, ?; => :semicolon,
  ?^ => :caret, ?~ => :tilde, ?= => :equals, ?! => :exclam, ?. => :dot,
  #these ones could signal either an op or a term
  ?/ => :regex_or_div,
  "|" => :conjunction_or_goalpost,
  ">" => :quadriop,
  "*&" => :star_or_amp, #could be unary
  "+-" => :plusminus,   #could be unary
  ?< => :lessthan, ?% => :percent,
  ?? => :char_literal_or_op, #single-char int literal
  ?: => :symbol_or_op,
  ?\n => :newline,  #implicitly escaped after op
  #?\r => :newline, #implicitly escaped after op
  ?\\ => :escnewline,
  "[({" => :open_brace, "])}" => :close_brace,
  ?# => :comment,
  ?\x00 => :eof, ?\x04 => :eof, ?\x1a => :eof,
  ?\x01..?\x03 => :illegal_char, ?\x05..?\x08 => :illegal_char,
  ?\x0E..?\x19 => :illegal_char, ?\x1b..?\x1F => :illegal_char,
  ?\x7F => :illegal_char,
}
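CHARMAPPINGS feeds CharHandler, which dispatches on the first character of the input. A hypothetical sketch of that expansion (build_char_table and the handler names are illustrative, not the gem's actual API):

```ruby
# Hypothetical sketch of CharHandler-style dispatch: expand Range and
# String keys into a flat per-character table, with a default handler
# for anything unmapped.
def build_char_table(mappings, default)
  table = Hash.new(default)
  mappings.each do |key, handler|
    case key
    when Range  then key.each      { |c| table[c] = handler }
    when String then key.each_char { |c| table[c] = handler }
    end
  end
  table
end

TABLE = build_char_table(
  { ("a".."z") => :identifier, ("0".."9") => :number,
    '"' => :double_quote, "[({" => :open_brace },
  :unknown
)
```

The lexer can then pick the handler method for any input character with a single hash lookup.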
- UCLETTER =
@@UCLETTER="[A-Z]"
- LCLETTER =
Cheater's way: treats UTF-8 chars as always 1 byte wide; all high-bit chars count as lowercase letters. Works, but strings compare with strict binary identity, not Unicode collation. Works for EUC too, I think (the Ruby spec for UTF-8 support permits this interpretation).
@@LCLETTER="[a-z_\x80-\xFF]"
- LETTER =
@@LETTER="[A-Za-z_\x80-\xFF]"
- LETTER_DIGIT =
@@LETTER_DIGIT="[A-Za-z_0-9\x80-\xFF]"
- NEVERSTARTPARAMLISTWORDS =
/\A(#{OPORBEGINWORDS}|#{INNERBOUNDINGWORDS}|#{BINOPWORDS}|end)((?:(?!#@@LETTER_DIGIT).)|\Z)/om
- NEVERSTARTPARAMLISTFIRST =
chars that begin NEVERSTARTPARAMLIST
- NEVERSTARTPARAMLISTMAXLEN =
max len of a NEVERSTARTPARAMLIST
7
- RAW_ENCODING_ALIASES =
{ #'utf-8'=>'utf8',
  'ascii-8-bit'=>'binary', 'ascii-7-bit'=>'ascii', 'euc-jp'=>'euc',
  'iso-8859-1'=>'binary', 'latin-1'=>'binary',
  #'ascii8bit'=>'binary', #'ascii7bit'=>'ascii', #'eucjp'=>'euc',
  'us-ascii'=>'ascii', 'shift-jis'=>'sjis', 'autodetect'=>'detect',
}
- ENCODING_ALIASES =
- ENCODINGS =
%w[ascii binary utf8 euc sjis]
- NONWORKING_ENCODINGS =
%w[sjis]
- WSCHARS =
same as WHSP
@@WSCHARS= /[\s]/==="\v" ? '\s' : '\s\v'
- WSNONLCHARS =
same as WHSPLF
@@WSNONLCHARS=/(?!\n)[#@@WSCHARS]/o
- NOPARAMLONGOPTIONS =
%w[copyright version verbose debug yydebug help]
- PARAMLONGOPTIONS =
%w[encoding dump]
- DASHPARAMLONGOPTIONS =
%w[enable disable]
- NOPARAMOPTIONS =
"SacdhlnpsvwyU"
- OCTALPARAMOPTIONS =
"0"
- CHARPARAMOPTIONS =
"KTW"
- PARAMSHORTOPTIONS =
"CXFIEeir"
- MAYBEPARAMSHORTOPTIONS =
"x"
- NEWIN1_9OPTIONS =
%w[encoding dump enable disable X U W E]
- LONGOPTIONS =
/ --(#{NOPARAMLONGOPTIONS.join'|'})| --(#{PARAMLONGOPTIONS.join'|'})(=|#@@WSNONLCHARS+)[^#@@WSCHARS]+| --(#{DASHPARAMLONGOPTIONS.join'|'})-[^#@@WSCHARS]+ /ox
- CHAINOPTIONS =
/ [#{NOPARAMOPTIONS}]+| [#{OCTALPARAMOPTIONS}][0-7]{1,3}| [#{CHARPARAMOPTIONS}]. /ox
- PARAMOPTIONS =
/ [#{PARAMSHORTOPTIONS}]#@@WSNONLCHARS*[^#@@WSCHARS]+| [#{MAYBEPARAMSHORTOPTIONS}]#@@WSNONLCHARS*[^#@@WSCHARS]* /ox
- OPTIONS =
/ (#@@WSNONLCHARS*( #{LONGOPTIONS} | --? | -#{CHAINOPTIONS}*( #{PARAMOPTIONS} | #{CHAINOPTIONS} ) ))* /ox
- IMPLICIT_PARENS_BEFORE_ACCESSOR_ASSIGNMENT =
0
- AUTO_UNESCAPE_STRINGS =
false
- EndDefHeaderToken =
EndHeaderToken
- FASTER_STRING_ESCAPES =
true
- WHSP =
" \t\r\v\f"
- WHSPLF =
WHSP+"\n"
- LEGALCHARS =
maybe \r should be in WHSPLF instead
/[!-~#{WHSPLF}\x80-\xFF]/
- PAIRS =
{ '{'=>'}', '['=>']', '('=>')', '<'=>'>'}
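PAIRS maps each opening delimiter to its closer; for non-bracket %-literal delimiters (e.g. %w|a b|) the closer is the opener itself. A sketch of that lookup (closing_delim is an illustrative helper, not part of the gem's API):

```ruby
# PAIRS copied from the constant above; bracket openers close with their
# partner, any other %-literal delimiter closes with itself.
PAIRS = { '{'=>'}', '['=>']', '('=>')', '<'=>'>' }

def closing_delim(open)
  PAIRS[open] || open
end
```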
- VERSION =
'0.8.0'
Instance Attribute Summary
-
#file ⇒ Object
hack.
-
#filename ⇒ Object
readonly
Returns the value of attribute filename.
-
#FUNCLIKE_KEYWORDS ⇒ Object
readonly
Returns the value of attribute FUNCLIKE_KEYWORDS.
-
#in_def ⇒ Object
Returns the value of attribute in_def.
-
#incomplete_here_tokens ⇒ Object
readonly
Returns the value of attribute incomplete_here_tokens.
-
#last_operative_token ⇒ Object
readonly
Returns the value of attribute last_operative_token.
-
#last_token_maybe_implicit ⇒ Object
readonly
Returns the value of attribute last_token_maybe_implicit.
-
#linenum ⇒ Object
readonly
Returns the value of attribute linenum.
-
#localvars_stack ⇒ Object
Returns the value of attribute localvars_stack.
-
#offset_adjust ⇒ Object
readonly
Returns the value of attribute offset_adjust.
-
#original_file ⇒ Object
readonly
Returns the value of attribute original_file.
-
#parsestack ⇒ Object
readonly
Returns the value of attribute parsestack.
-
#pending_here_bodies ⇒ Object
writeonly
Sets the attribute pending_here_bodies.
-
#rubyversion ⇒ Object
readonly
Returns the value of attribute rubyversion.
-
#VARLIKE_KEYWORDS ⇒ Object
readonly
Returns the value of attribute VARLIKE_KEYWORDS.
Instance Method Summary
- #_keyword_funclike(str, offset, result) ⇒ Object
- #_keyword_innerbounding(str, offset, result) ⇒ Object
- #_keyword_varlike(str, offset, result) ⇒ Object
-
#at_identifier(ch = nil) ⇒ Object
-
#balanced_braces? ⇒ Boolean
- #build_method_operators ⇒ Object
-
#dollar_identifier(ch = nil) ⇒ Object
-
#each ⇒ Object
-
#enable_macros! ⇒ Object
-
#encoding_name_normalize(name) ⇒ Object
-
#endoffile_detected(s = '') ⇒ Object
(also: #rulexer_endoffile_detected)
-
#eof? ⇒ Boolean
(also: #rulexer_eof?)
-
#get1token ⇒ Object
(also: #rulexer_get1token)
-
#initialize(filename, file, line, offset_adjust = 0) ⇒ RubyLexer
(also: #rulexer_initialize)
constructor
-
#input_position_raw ⇒ Object
- #is__ENCODING__keyword?(name) ⇒ Boolean
- #keyword___FILE__(str, offset, result) ⇒ Object
- #keyword___LINE__(str, offset, result) ⇒ Object
- #keyword_alias(str, offset, result) ⇒ Object
- #keyword_begin(str, offset, result) ⇒ Object (also: #keyword_case)
- #keyword_class(str, offset, result) ⇒ Object
-
#keyword_def(str, offset, result) ⇒ Object
macros too, if enabled.
- #keyword_do(str, offset, result) ⇒ Object
- #keyword_elsif(str, offset, result) ⇒ Object
- #keyword_end(str, offset, result) ⇒ Object
- #keyword_END(str, offset, result) ⇒ Object
- #keyword_for(str, offset, result) ⇒ Object
-
#keyword_if(str, offset, result) ⇒ Object
(also: #keyword_unless)
could be infix form without end.
- #keyword_in(str, offset, result) ⇒ Object
- #keyword_module(str, offset, result) ⇒ Object
- #keyword_rescue(str, offset, result) ⇒ Object
- #keyword_return(str, offset, result) ⇒ Object (also: #keyword_break, #keyword_next)
- #keyword_then(str, offset, result) ⇒ Object
- #keyword_undef(str, offset, result) ⇒ Object
-
#keyword_when(str, offset, result) ⇒ Object
defined? might have a bare symbol following it; does it need to be handled specially? It would seem not.
-
#keyword_while(str, offset, result) ⇒ Object
(also: #keyword_until)
could be infix form without end.
- #localvars ⇒ Object
-
#no_more? ⇒ Boolean
- #progress_printer ⇒ Object
- #read_encoding_line ⇒ Object
- #read_leading_encoding ⇒ Object
- #rubylexer_modules_init ⇒ Object
-
#semicolon_in_block_param_list? ⇒ Boolean
module RubyLexer1_9.
-
#set_last_token(tok) ⇒ Object
-
#to_s ⇒ Object
(also: #inspect)
irb friendly #inspect/#to_s.
-
#unshift(*tokens) ⇒ Object
Constructor Details
#initialize(filename, file, line, offset_adjust = 0) ⇒ RubyLexer Also known as: rulexer_initialize
# File 'lib/rubylexer.rb', line 180

def initialize(filename,file,linenum=1,offset_adjust=0,options={})
  if file.respond_to? :set_encoding
    file.set_encoding 'binary'
  elsif file.respond_to? :force_encoding
    file=file.dup if file.frozen?
    file.force_encoding 'binary'
  end
  @offset_adjust=@offset_adjust2=0 #set again in next line
  rulexer_initialize(filename,file, linenum,offset_adjust)
  @start_linenum=linenum
  @parsestack=[TopLevelContext.new]
  @incomplete_here_tokens=[] #not used anymore
  @pending_here_bodies=[]
  @localvars_stack=[SymbolTable.new]
  @defining_lvar=nil
  @in_def_name=false
  @last_operative_token=nil
  @last_token_maybe_implicit=nil
  @enable_macro=nil
  @base_file=nil
  @progress_thread=nil
  @rubyversion=options[:rubyversion]||1.8
  @encoding=options[:encoding]||:detect
  @always_binary_chars=CharSet['}]);|>,.=^']
  @unary_or_binary_chars=CharSet['+-%/']
  @FUNCLIKE_KEYWORDS=FUNCLIKE_KEYWORDS
  @VARLIKE_KEYWORDS=VARLIKE_KEYWORDS
  @toptable=CharHandler.new(self, :identifier, CHARMAPPINGS)
  if @rubyversion>=1.9
    extend RubyLexer1_9
  end
  rubylexer_modules_init
  @method_operators=build_method_operators
  if input_position.zero?
    read_leading_encoding
    @encoding=:binary if @rubyversion<=1.8
    start_of_line_directives
  end
  progress_printer
end
Instance Attribute Details
#file ⇒ Object
hack
# File 'lib/rubylexer/rulexer.rb', line 63

def file
  @file
end
#filename ⇒ Object (readonly)
Returns the value of attribute filename.
# File 'lib/rubylexer/rulexer.rb', line 62

def filename
  @filename
end
#FUNCLIKE_KEYWORDS ⇒ Object (readonly)
Returns the value of attribute FUNCLIKE_KEYWORDS.
# File 'lib/rubylexer.rb', line 67

def FUNCLIKE_KEYWORDS
  @FUNCLIKE_KEYWORDS
end
#in_def ⇒ Object
Returns the value of attribute in_def.
# File 'lib/rubylexer.rb', line 358

def in_def
  @in_def
end
#incomplete_here_tokens ⇒ Object (readonly)
Returns the value of attribute incomplete_here_tokens.
# File 'lib/rubylexer.rb', line 145

def incomplete_here_tokens
  @incomplete_here_tokens
end
#last_operative_token ⇒ Object (readonly)
Returns the value of attribute last_operative_token.
# File 'lib/rubylexer/rulexer.rb', line 62

def last_operative_token
  @last_operative_token
end
#last_token_maybe_implicit ⇒ Object (readonly)
Returns the value of attribute last_token_maybe_implicit.
# File 'lib/rubylexer.rb', line 145

def last_token_maybe_implicit
  @last_token_maybe_implicit
end
#linenum ⇒ Object (readonly)
Returns the value of attribute linenum.
# File 'lib/rubylexer/rulexer.rb', line 62

def linenum
  @linenum
end
#localvars_stack ⇒ Object
Returns the value of attribute localvars_stack.
# File 'lib/rubylexer.rb', line 356

def localvars_stack
  @localvars_stack
end
#offset_adjust ⇒ Object (readonly)
Returns the value of attribute offset_adjust.
# File 'lib/rubylexer.rb', line 359

def offset_adjust
  @offset_adjust
end
#original_file ⇒ Object (readonly)
Returns the value of attribute original_file.
# File 'lib/rubylexer/rulexer.rb', line 62

def original_file
  @original_file
end
#parsestack ⇒ Object (readonly)
Returns the value of attribute parsestack.
# File 'lib/rubylexer.rb', line 145

def parsestack
  @parsestack
end
#pending_here_bodies=(value) ⇒ Object (writeonly)
Sets the attribute pending_here_bodies
# File 'lib/rubylexer.rb', line 360

def pending_here_bodies=(value)
  @pending_here_bodies = value
end
#rubyversion ⇒ Object (readonly)
Returns the value of attribute rubyversion.
# File 'lib/rubylexer.rb', line 361

def rubyversion
  @rubyversion
end
#VARLIKE_KEYWORDS ⇒ Object (readonly)
Returns the value of attribute VARLIKE_KEYWORDS.
# File 'lib/rubylexer.rb', line 67

def VARLIKE_KEYWORDS
  @VARLIKE_KEYWORDS
end
Instance Method Details
#_keyword_funclike(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1897

def _keyword_funclike(str,offset,result)
  if @last_operative_token===/^(\.|::)$/
    result=yield MethNameToken.new(str) #should pass a methname token here
  else
    tok=KeywordToken.new(str)
    result=yield tok,tok
  end
  return result
end
#_keyword_innerbounding(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1549

def _keyword_innerbounding(str,offset,result)
  result.unshift(*abort_noparens!(str))
  return result
end
#_keyword_varlike(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1910

def _keyword_varlike(str,offset,result)
  #do nothing
  return result
end
#at_identifier(ch = nil) ⇒ Object
# File 'lib/rubylexer.rb', line 452

def at_identifier(ch=nil)
  result = (eat_next_if(?@) or return nil)
  result << (eat_next_if(?@) or '')
  if t=identifier_as_string(?@)
    result << t
  else
    error= "missing @id name"
  end
  result=VarNameToken.new(result)
  result.in_def=true if inside_method_def?
  return lexerror(result,error)
end
#balanced_braces? ⇒ Boolean
# File 'lib/rubylexer.rb', line 433

def balanced_braces?
  #@parsestack.empty?
  @parsestack.size==1 and TopLevelContext===@parsestack.first
end
#build_method_operators ⇒ Object
# File 'lib/rubylexer.rb', line 244

def build_method_operators
  /#{RUBYSYMOPERATORREX}|\A`/o
end
#dollar_identifier(ch = nil) ⇒ Object
# File 'lib/rubylexer.rb', line 440

def dollar_identifier(ch=nil)
  s=eat_next_if(?$) or return nil
  if t=((identifier_as_string(?$) or special_global))
    s << t
  else
    error= "missing $id name"
  end
  return lexerror(VarNameToken.new(s),error)
end
#each ⇒ Object
# File 'lib/rubylexer/rulexer.rb', line 111

def each
  begin
    yield tok = get1token
  end until tok.is_a? EoiToken
end
#enable_macros! ⇒ Object
# File 'lib/rubylexer.rb', line 1063

def enable_macros! #this whole method should be unnecessary now
  @enable_macro="macro" #shouldn't be needed anymore... should be safe to remove
  class <<self
    alias keyword_macro keyword_def
  end
  @unary_or_binary_chars.add '^'
  @always_binary_chars.remove '^'
end
#encoding_name_normalize(name) ⇒ Object
# File 'lib/rubylexer.rb', line 1608

def encoding_name_normalize name
  name=name.dup
  name.downcase!
  name.tr_s! '-_',''
  name=ENCODING_ALIASES[name] if ENCODING_ALIASES[name]
  return name
end
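A self-contained sketch of the same normalization: lowercase, strip '-' and '_', then map through an alias table. The alias table below is assumed to be RAW_ENCODING_ALIASES with its keys normalized the same way (the real ENCODING_ALIASES value is not shown on this page), and delete stands in for the original's tr_s!:

```ruby
# Assumed alias table: RAW_ENCODING_ALIASES keys with '-'/'_' stripped.
ALIASES = {
  'ascii8bit'=>'binary', 'ascii7bit'=>'ascii', 'eucjp'=>'euc',
  'iso88591'=>'binary', 'latin1'=>'binary', 'usascii'=>'ascii',
  'shiftjis'=>'sjis', 'autodetect'=>'detect',
}

def encoding_name_normalize(name)
  name = name.downcase.delete('-_')  # the original uses tr_s! '-_',''
  ALIASES[name] || name
end
```

With this, spellings like 'Shift-JIS', 'shift_jis', and 'shiftjis' all normalize to the same internal name.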
#endoffile_detected(s = '') ⇒ Object Also known as: rulexer_endoffile_detected
# File 'lib/rubylexer.rb', line 3343

def endoffile_detected(s='')
  @linenum+=1
  #optional_here_bodies expects to be called after a newline was seen and @linenum bumped
  #in this case, there is no newline, but we need to pretend there is. otherwise optional_here_bodies
  #makes tokens with wrong line numbers
  @moretokens.concat optional_here_bodies
  @linenum-=1 #now put it back
  @moretokens.concat abort_noparens!
  @moretokens.push rulexer_endoffile_detected(s)
  if @progress_thread
    @progress_thread.kill
    @progress_thread=nil
  end
  result= @moretokens.shift
  assert @pending_here_bodies.empty?
  balanced_braces? or (lexerror result,"unbalanced braces at eof. parsestack=#{@parsestack.inspect}")
  result
end
#eof? ⇒ Boolean Also known as: rulexer_eof?
# File 'lib/rubylexer.rb', line 418

def eof?
  rulexer_eof? or EoiToken===@last_operative_token
end
#get1token ⇒ Object Also known as: rulexer_get1token
# File 'lib/rubylexer.rb', line 369

def get1token
  result=rulexer_get1token #most of the action's here

  if ENV['PROGRESS']
    @last_cp_pos||=0
    @start_time||=Time.now
    if result.offset-@last_cp_pos>100000
      $stderr.puts "#{result.offset} #{Time.now-@start_time}"
      @last_cp_pos=result.offset
    end
  end

  #now cleanup and housekeeping

  #check for bizarre token types
  case result
  when ImplicitParamListStartToken, ImplicitParamListEndToken
    @last_token_maybe_implicit=result
    result
  when StillIgnoreToken#,nil
    result
  when StringToken
    set_last_token result
    assert !(IgnoreToken===@last_operative_token)
    result.elems.map!{|frag|
      if String===frag
        result.translate_escapes(frag)
      else
        frag
      end
    } if AUTO_UNESCAPE_STRINGS
    result
  when Token#,String
    set_last_token result
    assert !(IgnoreToken===@last_operative_token)
    result
  else
    raise "#{@filename}:#{linenum}:token is a #{result.class}, last is #{@last_operative_token}"
  end
end
#input_position_raw ⇒ Object
# File 'lib/rubylexer.rb', line 428

def input_position_raw
  @file.pos
end
#is__ENCODING__keyword?(name) ⇒ Boolean
# File 'lib/rubylexer.rb', line 1895

def is__ENCODING__keyword?(name); end
#keyword___FILE__(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1596

def keyword___FILE__(str,offset,result)
  result.last.value=@filename
  return result
end
#keyword___LINE__(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1601

def keyword___LINE__(str,offset,result)
  result.last.value=@linenum
  return result
end
#keyword_alias(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1449

def keyword_alias(str,offset,result)
  safe_recurse { |a|
    set_last_token KeywordToken.new( "alias" )#hack
    result.concat ignored_tokens
    res=symbol(eat_next_if(?:),false)
    unless res
      lexerror(result.first,"bad symbol in alias")
    else
      res.ident[0]==?$ and res=VarNameToken.new(res.ident,res.offset)
      result<< res
      set_last_token KeywordToken.new( "alias" )#hack
      result.concat ignored_tokens
      res=symbol(eat_next_if(?:),false)
      unless res
        lexerror(result.first,"bad symbol in alias")
      else
        res.ident[0]==?$ and res=VarNameToken.new(res.ident,res.offset)
        result<< res
      end
    end
  }
  return result
end
#keyword_begin(str, offset, result) ⇒ Object Also known as: keyword_case
# File 'lib/rubylexer.rb', line 1235

def keyword_begin(str,offset,result)
  result.first.has_end!
  @parsestack.push WantsEndContext.new(str,@linenum)
  return result
end
#keyword_class(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1210

def keyword_class(str,offset,result)
  result.first.has_end!
  @parsestack.push ClassContext.new(str,@linenum)
  return result
end
#keyword_def(str, offset, result) ⇒ Object
macros too, if enabled
# File 'lib/rubylexer.rb', line 1284

def keyword_def(str,offset,result)  #macros too, if enabled
  result.first.has_end!
  @parsestack.push ctx=DefContext.new(@linenum)
  ctx.state=:saw_def
  old_moretokens=@moretokens
  @moretokens=[]
  #aa=@moretokens
  #safe_recurse { |aa|
    set_last_token KeywordToken.new(str) #hack
    result.concat ignored_tokens

    #read an expr like a.b.c or a::b::c
    #or (expr).b.c
    if nextchar==?( #look for optional parenthesised head
      old_size=@parsestack.size
      parencount=0
      begin
        tok=get1token
        case tok
        when /^\($/.token_pat ; parencount+=1
        when /^\)$/.token_pat ; parencount-=1
        when EoiToken
          @moretokens= old_moretokens.concat @moretokens
          return result<<lexerror( tok, "eof in def header" )
        end
        result << tok
      end until parencount==0 #@parsestack.size==old_size
      @localvars_stack.push SymbolTable.new
    else #no parentheses, all tail
      set_last_token KeywordToken.new(".") #hack hack
      tokindex=result.size
      tokline=result.last.endline
      result << tok=symbol(false,false)
      name=tok.to_s
      assert !in_lvar_define_state

      #maybe_local really means 'maybe local or constant'
      @maybe_local_pat||=%r{
        ((?!#@@LETTER_DIGIT).$) | ^[@$] |
        (#@VARLIKE_KEYWORDS | #@FUNCLIKE_KEYWORDS) |
        (^#@@LCLETTER) | (^#@@UCLETTER)
      }x
      @maybe_local_pat === name and maybe_local=case
        when $1; maybe_local=false #operator or non-ident
        when $2; ty=KeywordToken #keyword
        when $3; maybe_local=localvars===name #lvar or method
        when $4; is_const=true #constant
        else true
      end
      #maybe_local=ty=KeywordToken if is__ENCODING__keyword?(name) #"__ENCODING__"==name and @rubyversion>=1.9
=begin was
      maybe_local=case name
      when /(?!#@@LETTER_DIGIT).$/o; #do nothing
      when /^[@$]/; true
      when /#@VARLIKE_KEYWORDS|#@FUNCLIKE_KEYWORDS/,("__ENCODING__" if @rubyversion>=1.9); ty=KeywordToken
      when /^#@@LCLETTER/o; localvars===name
      when /^#@@UCLETTER/o; is_const=true #this is the right algorithm for constants...
      end
=end
      result.push( *ignored_tokens(false,false) )
      nc=nextchar
      if !ty and maybe_local
        if nc==?: || nc==?.
          ty=VarNameToken
        end
      end
      if ty.nil? or (ty==KeywordToken and nc!=?: and nc!=?.)
        ty=MethNameToken
        if nc != ?(
          endofs=tok.offset+tok.to_s.length
          newtok=ImplicitParamListStartToken.new(endofs)
          result.insert tokindex+1, newtok
        end
      end
      assert result[tokindex].equal?(tok)
      var=ty.new(tok.to_s,tok.offset)
      if ty==KeywordToken and name[0,2]=="__"
        send("keyword_#{name}",name,tok.offset,[var])
      end
      var.endline=tokline
      var=assign_lvar_type! var
      @localvars_stack.push SymbolTable.new
      var.in_def=true if inside_method_def? and var.respond_to? :in_def=
      result[tokindex]=var

      #if a.b.c.d is seen, a, b and c
      #should be considered maybe varname instead of methnames.
      #the last (d in the example) is always considered a methname;
      #it's what's being defined.
      #b and c should be considered varnames only if
      #they are capitalized and preceded by :: .
      #a could even be a keyword (eg self or block_given?).
    end
    #read tail: .b.c.d etc
    result.reverse_each{|res| break set_last_token( res ) unless StillIgnoreToken===res}
    assert !(IgnoreToken===@last_operative_token)
    state=:expect_op
    @in_def_name=true
    while true
      #look for start of parameter list
      nc=(@moretokens.empty? ? nextchar.chr : @moretokens.first.to_s[0,1])
      if state==:expect_op and /^(?:#@@LETTER|[(&*])/o===nc
        ctx.state=:def_param_list
        ctx.has_parens= '('==nc
        list,listend=def_param_list
        result.concat list
        end_index=result.index(listend)
        ofs=listend.offset
        if endofs
          result.insert end_index,ImplicitParamListEndToken.new(ofs)
        else
          ofs+=listend.to_s.size
        end
        tok=EndHeaderToken.new(ofs)
        tok.endline= result[end_index-1].endline #@linenum
        result.insert end_index+1,tok
        break
      end
      tok=get1token
      result<< tok
      case tok
      when EoiToken
        lexerror tok,'unexpected eof in def header'
        @moretokens= old_moretokens.concat @moretokens
        return result
      when StillIgnoreToken
      when MethNameToken ,VarNameToken # /^#@@LETTER/o.token_pat
        lexerror tok,'expected . or ::' unless state==:expect_name
        state=:expect_op
      when /^(\.|::)$/.token_pat
        lexerror tok,'expected ident' unless state==:expect_op
        if endofs
          result.insert( -2, ImplicitParamListEndToken.new(endofs) )
          endofs=nil
        end
        state=:expect_name
      when /^(;|end)$/.token_pat, NewlineToken #are we done with def name?
        ctx.state=:def_body
        state==:expect_op or lexerror tok,'expected identifier'
        if endofs
          result.insert( -2,ImplicitParamListEndToken.new(tok.offset) )
        end
        ehtok= EndHeaderToken.new(tok.offset)
        #ehtok.endline=tok.endline
        #ehtok.endline-=1 if NewlineToken===tok
        result.insert( -2, ehtok )
        break
      else
        lexerror(tok, "bizarre token in def name: " + "#{tok}:#{tok.class}")
      end
    end
    @in_def_name=false
  #}
  @moretokens= old_moretokens.concat @moretokens
  return result
end
#keyword_do(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1265

def keyword_do(str,offset,result)
  result.unshift(*abort_noparens_for_do!(str))
  ctx=@parsestack.last
  if ExpectDoOrNlContext===ctx
    @parsestack.pop
    assert WantsEndContext===@parsestack.last
    result.last.as=";"
  else
    result.last.has_end!
    if BlockContext===ctx and ctx.wanting_stabby_block_body
      @parsestack[-1]= WantsEndContext.new(str,@linenum)
    else
      @parsestack.push WantsEndContext.new(str,@linenum)
      localvars.start_block
      block_param_list_lookahead
    end
  end
  return result
end
#keyword_elsif(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1229

def keyword_elsif(str,offset,result)
  result.unshift(*abort_noparens!(str))
  @parsestack.push ExpectThenOrNlContext.new(str,@linenum)
  return result
end
#keyword_end(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1146

def keyword_end(str,offset,result)
  result.unshift(*abort_noparens!(str))
  @parsestack.last.see self,:semi #sorta hacky... should make an :end event instead?
=begin not needed?
  if [email protected]
    @parsestack.pop
    assert @parsestack.last.starter[/^(while|until|for)$/]
  end
=end
  WantsEndContext===@parsestack.last or lexerror result.last, 'unbalanced end'
  ctx=@parsestack.pop
  start,line=ctx.starter,ctx.linenum
  BEGINWORDS===start or lexerror result.last, "end does not match #{start or "nil"}"
  /^(do)$/===start and localvars.end_block
  /^(class|module|def)$/===start and @localvars_stack.pop
  return result
end
#keyword_END(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1570

def keyword_END(str,offset,result)
  #END could be treated, lexically, just as if it is an
  #ordinary method, except that local vars created in
  #END blocks are visible to subsequent code. (Why??)
  #That difference forces a custom parsing.
  if @last_operative_token===/^(\.|::)$/
    result=yield MethNameToken.new(str) #should pass a methname token here
  else
    safe_recurse{
      old=result.first
      result=[
        KeywordToken.new(old.ident,old.offset),
        ImplicitParamListStartToken.new(input_position),
        ImplicitParamListEndToken.new(input_position),
        *ignored_tokens
      ]
      getchar=='{' or lexerror(result.first,"expected { after #{str}")
      result.push KeywordToken.new('{',input_position-1)
      result.last.set_infix!
      result.last.as="do"
      @parsestack.push BeginEndContext.new(str,offset)
    }
  end
  return result
end
#keyword_for(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1255

def keyword_for(str,offset,result)
  result.first.has_end!
  result.push KwParamListStartToken.new(offset+str.length)
  # corresponding EndToken emitted leaving ForContext ("in" branch, below)
  @parsestack.push WantsEndContext.new(str,@linenum)
  #expect_do_or_end_or_nl! str #handled by ForSMContext now
  @parsestack.push ForSMContext.new(@linenum)
  return result
end
#keyword_if(str, offset, result) ⇒ Object Also known as: keyword_unless
could be infix form without end
# File 'lib/rubylexer.rb', line 1217

def keyword_if(str,offset,result) #could be infix form without end
  if after_nonid_op?{false} #prefix form
    result.first.has_end!
    @parsestack.push WantsEndContext.new(str,@linenum)
    @parsestack.push ExpectThenOrNlContext.new(str,@linenum)
  else #infix form
    result.unshift(*abort_noparens!(str))
  end
  return result
end
#keyword_in(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1542

def keyword_in(str,offset,result)
  result.unshift KwParamListEndToken.new( offset)
  result.unshift(*abort_noparens!(str))
  @parsestack.last.see self,:in
  return result
end
#keyword_module(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1166

def keyword_module(str,offset,result)
  result.first.has_end!
  @parsestack.push WantsEndContext.new(str,@linenum)
  offset=input_position
  assert @moretokens.empty?
  tokens=[]
  if @file.scan(/\A(#@@WSTOKS)?(#@@UCLETTER#@@LETTER_DIGIT*)(?=[#{WHSP}]+(?:[^(])|[#;\n]|::)/o)
    md=@file.last_match
    all,ws,name=*md
    tokens.concat divide_ws(ws,md.begin(1)) if ws
    tokens.push VarNameToken.new(name,md.begin(2))
  end
  tokens.push( *read_arbitrary_expression{|tok,extra_contexts|
    #@file.check /\A(\n|;|::|end(?!#@@LETTER_DIGIT)|(#@@UCLETTER#@@LETTER_DIGIT*)(?!(#@@WSTOKS)?::))/o
    @file.check( /\A(\n|;|end(?!#@@LETTER_DIGIT))/o ) or
      @file.check("::") &&
      extra_contexts.all?{|ctx| ImplicitParamListContext===ctx } &&
      @moretokens.push(*abort_noparens!)
  } ) if !name #or @file.check /#@@WSTOKS?::/o
  @moretokens[0,0]=tokens
  @localvars_stack.push SymbolTable.new
  while @file.check( /\A::/ )
    #[email protected] or
    #[email protected] && @moretokens.last.ident=="::"
    @file.scan(/\A(#@@WSTOKS)?(::)?(#@@WSTOKS)?(#@@UCLETTER#@@LETTER_DIGIT*)/o) or break #should not allow newline around :: here
    md=@file.last_match
    all,ws1,dc,ws2,name=*md
    if ws1
      @moretokens.concat divide_ws(ws1,md.begin(1))
      incr=ws1.size
    else
      incr=0
    end
    @moretokens.push NoWsToken.new(md.begin(2)) if dc
    @moretokens.push KeywordToken.new('::',md.begin(2)) if dc
    @moretokens.concat divide_ws(ws2,md.begin(3)) if ws2
    @moretokens.push VarNameToken.new(name,md.begin(4))
  end
  @moretokens.push EndHeaderToken.new(input_position)
  return result
end
#keyword_rescue(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1507

def keyword_rescue(str,offset,result)
  unless after_nonid_op? {false}
    result.replace []
    #rescue needs to be treated differently when in operator context...
    #i think no RescueSMContext should be pushed on the stack...
    tok=OperatorToken.new(str,offset)
    tok.unary=false
    #plus, the rescue token should be marked as infix
    if AssignmentRhsContext===@parsestack.last
      tok.as="rescue3"
      @parsestack.pop #end rhs context
      result.push AssignmentRhsListEndToken.new(offset) #end rhs token
    else
      result.concat abort_noparens_for_rescue!(str)
    end
    result.push tok
  else
    result.push KwParamListStartToken.new(offset+str.length)
    #corresponding EndToken emitted by abort_noparens! on leaving rescue context
    @parsestack.push RescueSMContext.new(@linenum)
    # result.unshift(*abort_noparens!(str))
  end
  return result
end
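The two branches correspond to the two faces of `rescue` in Ruby: the infix operator form (the `"rescue3"` marking) and the clause form inside `begin`/`end`, which opens a parameter-list context. A plain-Ruby illustration of both:

```ruby
# Infix ("rescue3") form: rescue used as an operator in an expression.
n = Integer("not a number") rescue -1

# Clause form: rescue introduces a handler, optionally with an
# exception-class/variable list (hence the KwParamListStartToken).
begin
  raise ArgumentError, "boom"
rescue ArgumentError => e
  msg = e.message
end
```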
#keyword_return(str, offset, result) ⇒ Object Also known as: keyword_break, keyword_next
# File 'lib/rubylexer.rb', line 1557

def keyword_return(str,offset,result)
  fail if KeywordToken===@last_operative_token and @last_operative_token===/\A(\.|::)\Z/
  tok=KeywordToken.new(str,offset)
  result=yield tok
  result[0]=tok
  tok.has_no_block!
  return result
end
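`return`, `break`, and `next` share this handler; all three may carry a value but never take an attached block, which is what `has_no_block!` records. A plain-Ruby reminder of the value-carrying forms:

```ruby
# `next` with a value supplies the block's result for that iteration.
doubled = [1, 2, 3].map { |x| next x * 2 }

# `break` with a value becomes the value of the enclosing method call.
first_big = [1, 5, 9].each { |x| break x if x > 4 }
```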
#keyword_then(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1531

def keyword_then(str,offset,result)
  result.unshift(*abort_noparens!(str))
  @parsestack.last.see self,:then
  if ExpectThenOrNlContext===@parsestack.last
    @parsestack.pop
  else
    #error... does anyone care?
  end
  return result
end
#keyword_undef(str, offset, result) ⇒ Object
# File 'lib/rubylexer.rb', line 1472

def keyword_undef(str,offset,result)
  safe_recurse { |a|
    loop do
      set_last_token KeywordToken.new( "," ) #hack
      result.concat ignored_tokens
      tok=symbol(eat_next_if(?:),false)
      tok or lexerror(result.first,"bad symbol in undef")
      result<< tok
      set_last_token tok
      assert !(IgnoreToken===@last_operative_token)
      sawnl=false
      result.concat ignored_tokens(true){|nl| sawnl=true}
      break if sawnl or nextchar != ?,
      tok= single_char_token(?,)
      result<< tok
    end
  }
  return result
end
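The loop above implements `undef`'s grammar: read one method name, then continue only if the next token is a comma. A plain-Ruby example of the comma-separated form (the `UndefDemo` class is illustrative only):

```ruby
class UndefDemo
  def gone; end
  def also_gone; end
  def kept; :kept end
  # One undef, several names -- the comma loop in keyword_undef.
  undef gone, also_gone
end

still_there = UndefDemo.new.kept
removed     = !UndefDemo.method_defined?(:gone)
```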
#keyword_when(str, offset, result) ⇒ Object
`defined?` might have a bare symbol following it; does it need to be handled specially? It would seem not.
# File 'lib/rubylexer.rb', line 1499

def keyword_when(str,offset,result)
  #abort_noparens! emits EndToken on leaving context
  result.unshift(*abort_noparens!(str))
  result.push KwParamListStartToken.new( offset+str.length)
  @parsestack.push WhenParamListContext.new(str,@linenum)
  return result
end
#keyword_while(str, offset, result) ⇒ Object Also known as: keyword_until
could be infix form without end
# File 'lib/rubylexer.rb', line 1242

def keyword_while(str,offset,result)
  #could be infix form without end
  if after_nonid_op?{false} #prefix form
    result.first.has_end!
    @parsestack.push WantsEndContext.new(str,@linenum)
    expect_do_or_end_or_nl! str
  else #infix form
    result.unshift(*abort_noparens!(str))
  end
  return result
end
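The `after_nonid_op?` test distinguishes the two forms of `while`/`until` the comment mentions; in plain Ruby they look like this:

```ruby
# Prefix form: opens a body and requires a matching `end`
# (WantsEndContext above).
i = 0
while i < 3
  i += 1
end

# Infix (statement-modifier) form: no `end` follows.
j = 0
j += 1 while j < 3
```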
#localvars ⇒ Object
# File 'lib/rubylexer.rb', line 352

def localvars;
  @localvars_stack.last
end
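`localvars` returns the top of a stack of symbol tables, one per scope (`keyword_module` above pushes a fresh `SymbolTable`). A minimal sketch of that scope-stack discipline using plain hashes; the real `SymbolTable` class is richer, this only mirrors the push/pop behavior:

```ruby
scopes = [{}]                    # outermost scope
scopes.last["x"] = true          # declare x at top level

scopes.push({})                  # enter a def/module body
scopes.last["y"] = true
inner_sees_x = scopes.last.key?("x")  # bodies start with a clean table

scopes.pop                       # leave the body; y is forgotten
outer_sees_y = scopes.last.key?("y")
```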
#no_more? ⇒ Boolean
# File 'lib/rubylexer/rulexer.rb', line 105

def no_more?
  @moretokens.each{|t| FileAndLineToken===t or return false }
  return true
end
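The `FileAndLineToken===t` test is ordinary Ruby case equality: `Class === object` is true when the object is an instance of that class. A self-contained illustration with stand-in token classes (the `*Demo` names are not RubyLexer's):

```ruby
class FileAndLineTokenDemo; end
class KeywordTokenDemo; end

# no_more? is true while only file/line bookkeeping tokens remain.
queue = [FileAndLineTokenDemo.new, FileAndLineTokenDemo.new]
only_bookkeeping = queue.all? { |t| FileAndLineTokenDemo === t }

queue << KeywordTokenDemo.new
still_only_bookkeeping = queue.all? { |t| FileAndLineTokenDemo === t }
```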
#progress_printer ⇒ Object
# File 'lib/rubylexer.rb', line 341

def progress_printer
  return unless ENV['RL_PROGRESS']
  $stderr.puts 'printing progresses'
  @progress_thread=Thread.new do
    until EoiToken===@last_operative_token
      sleep 10
      $stderr.puts @file.pos
    end
  end
end
#read_encoding_line ⇒ Object
# File 'lib/rubylexer.rb', line 338

def read_encoding_line
end
#read_leading_encoding ⇒ Object
# File 'lib/rubylexer.rb', line 303

def read_leading_encoding
  @encoding=nil if @encoding==:detect
  if enc=@file.scan( "\xEF\xBB\xBF" ) #bom
    encpos=0
    @encoding||=:utf8
  elsif @file.skip( /\A#!/ )
    lastpos=@file.pos
    loop do
      til_charset( /[#@@WSCHARS]/o )
      assert @file.pos > lastpos
      break if @file.match( /^\n|#@@WSNONLCHARS([^-#@@WSCHARS])/o,4 )
      if @file.skip( /.-#{CHAINOPTIONS}*K#@@WSNONLCHARS*([a-zA-Z0-9])/o )
        case @file.last_match[1]
        when 'u','U'; @encoding||=:utf8
        when 'e','E'; @encoding||=:euc
        when 's','S'; @encoding||=:sjis
        end
      elsif @file.skip( /.#{LONGOPTIONS}/o )
      end
      getchar
      lastpos=@file.pos
    end
    til_charset( /[\n]/ )
    @moretokens<<ShebangToken.new(@file[0...@file.pos])
    pos=input_position
    @moretokens<<EscNlToken.new(readnl,pos,@filename,2)
    @moretokens<<FileAndLineToken.new(@filename,2,input_position)
  end
  encpos=input_position unless enc
  enc||=read_encoding_line
ensure
  @moretokens<<EncodingDeclToken.new(enc||'',@encoding,enc ? encpos : input_position) if @encoding
  @encoding||=:ascii
end
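The shebang branch above looks for a `-K<letter>` interpreter option (MRI's source-encoding switch) and maps it to an encoding symbol. A greatly simplified stand-in using the stdlib `StringScanner`, assuming only the `-Ku`/`-Ke`/`-Ks` convention; the helper name is hypothetical:

```ruby
require 'strscan'

# Sketch: detect a -K<letter> option in a shebang line and map it to
# the same symbols read_leading_encoding uses.
def sketch_shebang_encoding(line)
  s = StringScanner.new(line)
  return nil unless s.skip(/#!/)      # only shebang lines qualify
  if s.scan_until(/-K([uUeEsS])/)
    case s[1].downcase
    when 'u' then :utf8
    when 'e' then :euc
    when 's' then :sjis
    end
  end
end
```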
#rubylexer_modules_init ⇒ Object
# File 'lib/rubylexer.rb', line 227

def rubylexer_modules_init
end
#semicolon_in_block_param_list? ⇒ Boolean
(Defined in module RubyLexer1_9.)

# File 'lib/rubylexer.rb', line 1894

def semicolon_in_block_param_list?; end
#set_last_token(tok) ⇒ Object
# File 'lib/rubylexer.rb', line 364

def set_last_token(tok)
  @last_operative_token=@last_token_maybe_implicit=tok
end
#to_s ⇒ Object Also known as: inspect
irb-friendly #inspect/#to_s.
# File 'lib/rubylexer.rb', line 234

def to_s
  mods=class<<self;self end.ancestors-self.class.ancestors
  mods=mods.map{|mod| mod.name }.join('+')
  mods="+"<<mods unless mods.empty?
  "#<#{self.class.name}#{mods}: [#{@file.inspect}]>"
end
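The first line of `to_s` uses a standard Ruby trick: modules mixed into one object via `extend` appear in the singleton class's ancestry but not in the class's own, so subtracting the two ancestor lists leaves exactly the per-object extensions. A self-contained demonstration (the `ExtraDemo` module is illustrative only):

```ruby
module ExtraDemo; end

obj = Object.new
obj.extend ExtraDemo

# Singleton ancestors minus class ancestors = per-object extensions.
singleton_only = (class << obj; self end).ancestors - obj.class.ancestors
```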
#unshift(*tokens) ⇒ Object
# File 'lib/rubylexer.rb', line 413

def unshift(*tokens)
  @moretokens.unshift(*tokens)
end
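This is the lexer's pushback mechanism: synthesized tokens are placed at the front of the pending queue so they are returned before anything newly lexed. The pattern in plain Ruby, with symbols standing in for token objects:

```ruby
pending = [:newly_lexed]

# Push two synthesized tokens to the front, in order.
pending.unshift(:implicit_paren_start, :var_name)

next_token = pending.shift   # synthesized tokens come out first
```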