Class: Puppet::Pops::Parser::Lexer2

Inherits:
Object
Includes:
EppSupport, HeredocSupport, InterpolationSupport, LexerSupport, SlurpSupport
Defined in:
lib/puppet/pops/parser/lexer2.rb

Constant Summary

TOKEN_LBRACK =

All tokens have three slots: the token name (a Symbol), the token text (a String), and the token text length. All operator and punctuation tokens reuse singleton arrays; tokens that require unique values create a unique array per token.

PERFORMANCE NOTES: This construct reduces the number of objects that need to be created for operators and punctuation. The length is pre-calculated for all singleton tokens and is used both to signal the length of the token and to advance the scanner position (without having to advance it with a scan(regexp)).

[:LBRACK,       '[',   1].freeze
TOKEN_LISTSTART =
[:LISTSTART,    '[',   1].freeze
TOKEN_RBRACK =
[:RBRACK,       ']',   1].freeze
TOKEN_LBRACE =
[:LBRACE,       '{',   1].freeze
TOKEN_RBRACE =
[:RBRACE,       '}',   1].freeze
TOKEN_SELBRACE =
[:SELBRACE,     '{',   1].freeze
TOKEN_LPAREN =
[:LPAREN,       '(',   1].freeze
TOKEN_WSLPAREN =
[:WSLPAREN,     '(',   1].freeze
TOKEN_RPAREN =
[:RPAREN,       ')',   1].freeze
TOKEN_EQUALS =
[:EQUALS,       '=',   1].freeze
TOKEN_APPENDS =
[:APPENDS,      '+=',  2].freeze
TOKEN_DELETES =
[:DELETES,      '-=',  2].freeze
TOKEN_ISEQUAL =
[:ISEQUAL,      '==',  2].freeze
TOKEN_NOTEQUAL =
[:NOTEQUAL,     '!=',  2].freeze
TOKEN_MATCH =
[:MATCH,        '=~',  2].freeze
TOKEN_NOMATCH =
[:NOMATCH,      '!~',  2].freeze
TOKEN_GREATEREQUAL =
[:GREATEREQUAL, '>=',  2].freeze
TOKEN_GREATERTHAN =
[:GREATERTHAN,  '>',   1].freeze
TOKEN_LESSEQUAL =
[:LESSEQUAL,    '<=',  2].freeze
TOKEN_LESSTHAN =
[:LESSTHAN,     '<',   1].freeze
TOKEN_FARROW =
[:FARROW,       '=>',  2].freeze
TOKEN_PARROW =
[:PARROW,       '+>',  2].freeze
TOKEN_LSHIFT =
[:LSHIFT,       '<<',  2].freeze
TOKEN_LLCOLLECT =
[:LLCOLLECT,    '<<|', 3].freeze
TOKEN_LCOLLECT =
[:LCOLLECT,     '<|',  2].freeze
TOKEN_RSHIFT =
[:RSHIFT,       '>>',  2].freeze
TOKEN_RRCOLLECT =
[:RRCOLLECT,    '|>>', 3].freeze
TOKEN_RCOLLECT =
[:RCOLLECT,     '|>',  2].freeze
TOKEN_PLUS =
[:PLUS,         '+',   1].freeze
TOKEN_MINUS =
[:MINUS,        '-',   1].freeze
TOKEN_DIV =
[:DIV,          '/',   1].freeze
TOKEN_TIMES =
[:TIMES,        '*',   1].freeze
TOKEN_MODULO =
[:MODULO,       '%',   1].freeze
TOKEN_NOT =
[:NOT,          '!',   1].freeze
TOKEN_DOT =
[:DOT,          '.',   1].freeze
TOKEN_PIPE =
[:PIPE,         '|',   1].freeze
TOKEN_AT =
[:AT,           '@',   1].freeze
TOKEN_ATAT =
[:ATAT,         '@@',  2].freeze
TOKEN_COLON =
[:COLON,        ':',   1].freeze
TOKEN_COMMA =
[:COMMA,        ',',   1].freeze
TOKEN_SEMIC =
[:SEMIC,        ';',   1].freeze
TOKEN_QMARK =
[:QMARK,        '?',   1].freeze
TOKEN_TILDE =

Lexed, but not an operator in Puppet.

[:TILDE,        '~',   1].freeze
TOKEN_REGEXP =
[:REGEXP,       nil,   0].freeze
TOKEN_IN_EDGE =
[:IN_EDGE,      '->',  2].freeze
TOKEN_IN_EDGE_SUB =
[:IN_EDGE_SUB,  '~>',  2].freeze
TOKEN_OUT_EDGE =
[:OUT_EDGE,     '<-',  2].freeze
TOKEN_OUT_EDGE_SUB =
[:OUT_EDGE_SUB, '<~',  2].freeze
TOKEN_STRING =

Tokens that are always unique to what has been lexed.

[:STRING,      nil,  0].freeze
TOKEN_WORD =
[:WORD,        nil,  0].freeze
TOKEN_DQPRE =
[:DQPRE,       nil,  0].freeze
TOKEN_DQMID =
[:DQPRE,       nil,  0].freeze
TOKEN_DQPOS =
[:DQPRE,       nil,  0].freeze
TOKEN_NUMBER =
[:NUMBER,      nil,  0].freeze
TOKEN_VARIABLE =
[:VARIABLE,    nil,  1].freeze
TOKEN_VARIABLE_EMPTY =
[:VARIABLE,    '',   1].freeze
TOKEN_HEREDOC =

HEREDOC has syntax as an argument.

[:HEREDOC,     nil,  0].freeze
TOKEN_EPPSTART =

EPP_START is currently a marker token; it may later get syntax.

[:EPP_START,      nil,  0].freeze
TOKEN_EPPEND =
[:EPP_END,       '%>',  2].freeze
TOKEN_EPPEND_TRIM =
[:EPP_END_TRIM, '-%>',  3].freeze
TOKEN_OTHER =

This is used for unrecognized tokens; the value will always be a single character. This particular instance is not used, but is kept here for documentation purposes.

[:OTHER, nil, 0]
KEYWORDS =

Keywords are all singleton tokens with pre-calculated lengths. Booleans are pre-calculated (rather than evaluating the strings 'false' and 'true' repeatedly).

{
  'case' => [:CASE, 'case', 4],
  'class' => [:CLASS, 'class', 5],
  'default' => [:DEFAULT, 'default', 7],
  'define' => [:DEFINE, 'define', 6],
  'if' => [:IF, 'if', 2],
  'elsif' => [:ELSIF, 'elsif', 5],
  'else' => [:ELSE, 'else', 4],
  'inherits' => [:INHERITS, 'inherits', 8],
  'node' => [:NODE, 'node', 4],
  'and' => [:AND, 'and', 3],
  'or' => [:OR, 'or', 2],
  'undef' => [:UNDEF, 'undef', 5],
  'false' => [:BOOLEAN, false, 5],
  'true' => [:BOOLEAN, true, 4],
  'in' => [:IN, 'in', 2],
  'unless' => [:UNLESS, 'unless', 6],
  'function' => [:FUNCTION, 'function', 8],
  'type' => [:TYPE, 'type', 4],
  'attr' => [:ATTR, 'attr', 4],
  'private' => [:PRIVATE, 'private', 7],
}
KEYWORD_NAMES =

Reverse lookup of keyword name to string

{}
PATTERN_WS =
/[[:blank:]\r]+/
PATTERN_NON_WS =
/\w+\b?/
PATTERN_COMMENT =

The single line comment includes the line ending.

/#.*\r?/
PATTERN_MLCOMMENT =
%r{/\*(.*?)\*/}m
PATTERN_REGEX =
%r{/[^/]*/}
PATTERN_REGEX_END =
%r{/}
PATTERN_REGEX_A =

For replacement to '' (strips the leading '/').

%r{\A/}
PATTERN_REGEX_Z =

For replacement to '' (strips the trailing '/').

%r{/\Z}
PATTERN_REGEX_ESC =

For replacement to '/' (unescapes '\/').

%r{\\/}
PATTERN_CLASSREF =

The NAME and CLASSREF in 4.x are strict. Each segment must start with a letter a-z and may not contain dashes (\w includes letters, digits and _).

/((::){0,1}[A-Z]\w*)+/
PATTERN_NAME =
/^((::)?[a-z]\w*)(::[a-z]\w*)*$/
PATTERN_BARE_WORD =
/((?:::){0,1}(?:[a-z_](?:[\w-]*\w)?))+/
PATTERN_DOLLAR_VAR =
/\$(::)?(\w+::)*\w+/
PATTERN_NUMBER =
/\b(?:0[xX][0-9A-Fa-f]+|0?\d+(?:\.\d+)?(?:[eE]-?\d+)?)\b/
STRING_BSLASH_SLASH =

PERFORMANCE NOTE: Comparison against a frozen string is faster (than unfrozen).

'\/'
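
As a minimal illustration of the token-triple layout (the constant names are real; the inspection session below is hypothetical):

require 'puppet'
require 'puppet/pops'

token = Puppet::Pops::Parser::Lexer2::TOKEN_FARROW
token[0] # => :FARROW  (token name)
token[1] # => '=>'     (token text)
token[2] # => 2        (pre-calculated length, used to advance the scanner)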

Constants included from EppSupport

EppSupport::TOKEN_RENDER_EXPR, EppSupport::TOKEN_RENDER_STRING

Constants included from SlurpSupport

SlurpSupport::DQ_ESCAPES, SlurpSupport::SLURP_ALL_PATTERN, SlurpSupport::SLURP_DQ_PATTERN, SlurpSupport::SLURP_SQ_PATTERN, SlurpSupport::SLURP_UQNE_PATTERN, SlurpSupport::SLURP_UQ_PATTERN, SlurpSupport::SQ_ESCAPES, SlurpSupport::UQ_ESCAPES

Constants included from LexerSupport

Puppet::Pops::Parser::LexerSupport::BOM_BOCU, Puppet::Pops::Parser::LexerSupport::BOM_GB_18030, Puppet::Pops::Parser::LexerSupport::BOM_SCSU, Puppet::Pops::Parser::LexerSupport::BOM_UTF_1, Puppet::Pops::Parser::LexerSupport::BOM_UTF_16_1, Puppet::Pops::Parser::LexerSupport::BOM_UTF_16_2, Puppet::Pops::Parser::LexerSupport::BOM_UTF_32_1, Puppet::Pops::Parser::LexerSupport::BOM_UTF_32_2, Puppet::Pops::Parser::LexerSupport::BOM_UTF_8, Puppet::Pops::Parser::LexerSupport::BOM_UTF_EBCDIC, Puppet::Pops::Parser::LexerSupport::LONGEST_BOM, Puppet::Pops::Parser::LexerSupport::MM, Puppet::Pops::Parser::LexerSupport::MM_ANY

Constants included from InterpolationSupport

InterpolationSupport::PATTERN_VARIABLE

Constants included from HeredocSupport

HeredocSupport::PATTERN_HEREDOC

Instance Attribute Summary

Instance Method Summary

Methods included from EppSupport

#fullscan_epp, #interpolate_epp, #scan_epp

Methods included from SlurpSupport

#slurp, #slurp_dqstring, #slurp_sqstring, #slurp_uqstring

Methods included from LexerSupport

#assert_not_bom, #assert_numeric, #create_lex_error, #filename, #followed_by, #format_quote, #get_bom, #lex_error, #lex_error_without_pos, #lex_warning, #line, #position

Methods included from InterpolationSupport

#enqueue_until, #interpolate_dq, #interpolate_tail_dq, #interpolate_tail_uq, #interpolate_uq, #interpolate_uq_to, #transform_to_variable

Methods included from HeredocSupport

#heredoc, #heredoc_text

Constructor Details

#initialize ⇒ Lexer2

Returns a new instance of Lexer2.



# File 'lib/puppet/pops/parser/lexer2.rb', line 188

def initialize
  @selector = {
    '.' => -> { emit(TOKEN_DOT, @scanner.pos) },
    ',' => -> { emit(TOKEN_COMMA, @scanner.pos) },
    '[' => lambda do
      before = @scanner.pos
      # Must check the preceding character to see if it is whitespace.
      # The fastest thing to do is to simply byteslice to get the string ending at the offset before
      # and then check what the last character is. (This is the same as what a locator.char_offset needs
      # to compute, but with less overhead of trying to find out the global offset from a local offset in the
      # case when this is sublocated in a heredoc).
      if before == 0 || @scanner.string.byteslice(0, before)[-1] =~ /[[:blank:]\r\n]+/
        emit(TOKEN_LISTSTART, before)
      else
        emit(TOKEN_LBRACK, before)
      end
    end,
    ']' => -> { emit(TOKEN_RBRACK, @scanner.pos) },
    '(' => lambda do
      before = @scanner.pos
      # If first on a line, or only whitespace between start of line and '('
      # then the token is special to avoid being taken as start of a call.
      line_start = @lexing_context[:line_lexical_start]
      if before == line_start || @scanner.string.byteslice(line_start, before - line_start) =~ /\A[[:blank:]\r]+\Z/
        emit(TOKEN_WSLPAREN, before)
      else
        emit(TOKEN_LPAREN, before)
      end
    end,
    ')' => -> { emit(TOKEN_RPAREN, @scanner.pos) },
    ';' => -> { emit(TOKEN_SEMIC, @scanner.pos) },
    '?' => -> { emit(TOKEN_QMARK, @scanner.pos) },
    '*' => -> { emit(TOKEN_TIMES, @scanner.pos) },
    '%' => lambda do
      scn = @scanner
      before = scn.pos
      la = scn.peek(2)
      if la[1] == '>' && @lexing_context[:epp_mode]
        scn.pos += 2
        if @lexing_context[:epp_mode] == :expr
          enqueue_completed(TOKEN_EPPEND, before)
        end
        @lexing_context[:epp_mode] = :text
        interpolate_epp
      else
        emit(TOKEN_MODULO, before)
      end
    end,
    '{' => lambda do
      # The lexer needs to help the parser since the technology used cannot deal with
      # lookahead of same token with different precedence. This is solved by making left brace
      # after ? into a separate token.
      #
      @lexing_context[:brace_count] += 1
      emit(if @lexing_context[:after] == :QMARK
             TOKEN_SELBRACE
           else
             TOKEN_LBRACE
           end, @scanner.pos)
    end,
    '}' => lambda do
      @lexing_context[:brace_count] -= 1
      emit(TOKEN_RBRACE, @scanner.pos)
    end,

    # TOKENS @, @@, @(
    '@' => lambda do
      scn = @scanner
      la = scn.peek(2)
      case la[1]
      when '@'
        emit(TOKEN_ATAT, scn.pos) # TODO; Check if this is good for the grammar
      when '('
        heredoc
      else
        emit(TOKEN_AT, scn.pos)
      end
    end,

    # TOKENS |, |>, |>>
    '|' => lambda do
      scn = @scanner
      la = scn.peek(3)
      emit(if la[1] == '>'
             la[2] == '>' ? TOKEN_RRCOLLECT : TOKEN_RCOLLECT
           else
             TOKEN_PIPE
           end, scn.pos)
    end,

    # TOKENS =, =>, ==, =~
    '=' => lambda do
      scn = @scanner
      la = scn.peek(2)
      emit(case la[1]
           when '='
             TOKEN_ISEQUAL
           when '>'
             TOKEN_FARROW
           when '~'
             TOKEN_MATCH
           else
             TOKEN_EQUALS
           end, scn.pos)
    end,

    # TOKENS '+', '+=', and '+>'
    '+' => lambda do
      scn = @scanner
      la = scn.peek(2)
      emit(case la[1]
           when '='
             TOKEN_APPENDS
           when '>'
             TOKEN_PARROW
           else
             TOKEN_PLUS
           end, scn.pos)
    end,

    # TOKENS '-', '->', and epp '-%>' (end of interpolation with trim)
    '-' => lambda do
      scn = @scanner
      la = scn.peek(3)
      before = scn.pos
      if @lexing_context[:epp_mode] && la[1] == '%' && la[2] == '>'
        scn.pos += 3
        if @lexing_context[:epp_mode] == :expr
          enqueue_completed(TOKEN_EPPEND_TRIM, before)
        end
        interpolate_epp(:with_trim)
      else
        emit(case la[1]
             when '>'
               TOKEN_IN_EDGE
             when '='
               TOKEN_DELETES
             else
               TOKEN_MINUS
             end, before)
      end
    end,

    # TOKENS !, !=, !~
    '!' => lambda do
      scn = @scanner
      la = scn.peek(2)
      emit(case la[1]
           when '='
             TOKEN_NOTEQUAL
           when '~'
             TOKEN_NOMATCH
           else
             TOKEN_NOT
           end, scn.pos)
    end,

    # TOKENS ~>, ~
    '~' => lambda do
      scn = @scanner
      la = scn.peek(2)
      emit(la[1] == '>' ? TOKEN_IN_EDGE_SUB : TOKEN_TILDE, scn.pos)
    end,

    '#' => -> { @scanner.skip(PATTERN_COMMENT); nil },

    # TOKENS '/', '/*' and '/ regexp /'
    '/' => lambda do
      scn = @scanner
      la = scn.peek(2)
      if la[1] == '*'
        lex_error(Issues::UNCLOSED_MLCOMMENT) if scn.skip(PATTERN_MLCOMMENT).nil?
        nil
      else
        before = scn.pos
        # regexp position is a regexp, else a div
        value = scn.scan(PATTERN_REGEX) if regexp_acceptable?
        if value
          # Ensure an escaped / was not matched
          while escaped_end(value)
            more = scn.scan_until(PATTERN_REGEX_END)
            return emit(TOKEN_DIV, before) unless more

            value << more
          end
          regex = value.sub(PATTERN_REGEX_A, '').sub(PATTERN_REGEX_Z, '').gsub(PATTERN_REGEX_ESC, '/')
          emit_completed([:REGEX, Regexp.new(regex), scn.pos - before], before)
        else
          emit(TOKEN_DIV, before)
        end
      end
    end,

    # TOKENS <, <=, <|, <<|, <<, <-, <~
    '<' => lambda do
      scn = @scanner
      la = scn.peek(3)
      emit(case la[1]
           when '<'
             if la[2] == '|'
               TOKEN_LLCOLLECT
             else
               TOKEN_LSHIFT
             end
           when '='
             TOKEN_LESSEQUAL
           when '|'
             TOKEN_LCOLLECT
           when '-'
             TOKEN_OUT_EDGE
           when '~'
             TOKEN_OUT_EDGE_SUB
           else
             TOKEN_LESSTHAN
           end, scn.pos)
    end,

    # TOKENS >, >=, >>
    '>' => lambda do
      scn = @scanner
      la = scn.peek(2)
      emit(case la[1]
           when '>'
             TOKEN_RSHIFT
           when '='
             TOKEN_GREATEREQUAL
           else
             TOKEN_GREATERTHAN
           end, scn.pos)
    end,

    # TOKENS :, ::CLASSREF, ::NAME
    ':' => lambda do
      scn = @scanner
      la = scn.peek(3)
      before = scn.pos
      if la[1] == ':'
        # PERFORMANCE NOTE: This could potentially be speeded up by using a case/when listing all
        # upper case letters. Alternatively, the 'A', and 'Z' comparisons may be faster if they are
        # frozen.
        #
        la2 = la[2]
        if la2 >= 'A' && la2 <= 'Z'
          # CLASSREF or error
          value = scn.scan(PATTERN_CLASSREF)
          if value && scn.peek(2) != '::'
            after = scn.pos
            emit_completed([:CLASSREF, value.freeze, after - before], before)
          else
            # move to faulty position ('::<uc-letter>' was ok)
            scn.pos = scn.pos + 3
            lex_error(Issues::ILLEGAL_FULLY_QUALIFIED_CLASS_REFERENCE)
          end
        else
          value = scn.scan(PATTERN_BARE_WORD)
          if value
            if value =~ PATTERN_NAME
              emit_completed([:NAME, value.freeze, scn.pos - before], before)
            else
              emit_completed([:WORD, value.freeze, scn.pos - before], before)
            end
          else
            # move to faulty position ('::' was ok)
            scn.pos = scn.pos + 2
            lex_error(Issues::ILLEGAL_FULLY_QUALIFIED_NAME)
          end
        end
      else
        emit(TOKEN_COLON, before)
      end
    end,

    '$' => lambda do
      scn = @scanner
      before = scn.pos
      value = scn.scan(PATTERN_DOLLAR_VAR)
      if value
        emit_completed([:VARIABLE, value[1..].freeze, scn.pos - before], before)
      else
        # consume the $ and let higher layer complain about the error instead of getting a syntax error
        emit(TOKEN_VARIABLE_EMPTY, before)
      end
    end,

    '"' => lambda do
      # Recursive string interpolation, 'interpolate' either returns a STRING token, or
      # a DQPRE with the rest of the string's tokens placed in the @token_queue
      interpolate_dq
    end,

    "'" => lambda do
      scn = @scanner
      before = scn.pos
      emit_completed([:STRING, slurp_sqstring.freeze, scn.pos - before], before)
    end,

    "\n" => lambda do
      # If heredoc_cont is in effect there are heredoc text lines to skip over
      # otherwise just skip the newline.
      #
      ctx = @lexing_context
      if ctx[:newline_jump]
        @scanner.pos = ctx[:newline_jump]
        ctx[:newline_jump] = nil
      else
        @scanner.pos += 1
      end
      ctx[:line_lexical_start] = @scanner.pos
      nil
    end,
    '' => -> { nil } # when the peek(1) returns empty
  }

  [' ', "\t", "\r"].each { |c| @selector[c] = -> { @scanner.skip(PATTERN_WS); nil } }

  ('0'..'9').each do |c|
    @selector[c] = lambda do
      scn = @scanner
      before = scn.pos
      value = scn.scan(PATTERN_NUMBER)
      if value
        length = scn.pos - before
        assert_numeric(value, before)
        emit_completed([:NUMBER, value.freeze, length], before)
      else
        invalid_number = scn.scan_until(PATTERN_NON_WS)
        if before > 1
          after = scn.pos
          scn.pos = before - 1
          if scn.peek(1) == '.'
            # preceded by a dot. Is this a bad decimal number then?
            scn.pos = before - 2
            while scn.peek(1) =~ /^\d$/
              invalid_number = nil
              before = scn.pos
              break if before == 0

              scn.pos = scn.pos - 1
            end
          end
          scn.pos = before
          invalid_number ||= scn.peek(after - before)
        end
        assert_numeric(invalid_number, before)
        scn.pos = before + 1
        lex_error(Issues::ILLEGAL_NUMBER, { :value => invalid_number })
      end
    end
  end
  ('a'..'z').to_a.push('_').each do |c|
    @selector[c] = lambda do
      scn = @scanner
      before = scn.pos
      value = scn.scan(PATTERN_BARE_WORD)
      if value && value =~ PATTERN_NAME
        emit_completed(KEYWORDS[value] || @taskm_keywords[value] || [:NAME, value.freeze, scn.pos - before], before)
      elsif value
        emit_completed([:WORD, value.freeze, scn.pos - before], before)
      else
        # move to faulty position ([a-z_] was ok)
        scn.pos = scn.pos + 1
        fully_qualified = scn.match?(/::/)
        if fully_qualified
          lex_error(Issues::ILLEGAL_FULLY_QUALIFIED_NAME)
        else
          lex_error(Issues::ILLEGAL_NAME_OR_BARE_WORD)
        end
      end
    end
  end

  ('A'..'Z').each do |c|
    @selector[c] = lambda do
      scn = @scanner
      before = scn.pos
      value = @scanner.scan(PATTERN_CLASSREF)
      if value && @scanner.peek(2) != '::'
        emit_completed([:CLASSREF, value.freeze, scn.pos - before], before)
      else
        # move to faulty position ([A-Z] was ok)
        scn.pos = scn.pos + 1
        lex_error(Issues::ILLEGAL_CLASS_REFERENCE)
      end
    end
  end

  @selector.default = lambda do
    # In case of unicode spaces of various kinds that are captured by a regexp, but not by the
    # simpler case expression above (not worth handling those special cases with better performance).
    scn = @scanner
    if scn.skip(PATTERN_WS)
      nil
    else
      # "unrecognized char"
      emit([:OTHER, scn.peek(0), 1], scn.pos)
    end
  end
  @selector.each { |k, _v| k.freeze }
  @selector.freeze
end

Instance Attribute Details

#locator ⇒ Object (readonly)



# File 'lib/puppet/pops/parser/lexer2.rb', line 186

def locator
  @locator
end

Instance Method Details

#clear ⇒ Object

Clears the lexer state. It is not required to call this, since the state will be garbage collected and the next lex call (lex_string, lex_file) resets the internal state.



# File 'lib/puppet/pops/parser/lexer2.rb', line 607

def clear
  # not really needed, but if someone wants to ensure garbage is collected as early as possible
  @scanner = nil
  @locator = nil
  @lexing_context = nil
end

#emit(token, byte_offset) ⇒ Object

Emits (produces) a token [:tokensymbol, TokenValue] and moves the scanner's position past the token.



# File 'lib/puppet/pops/parser/lexer2.rb', line 738

def emit(token, byte_offset)
  @scanner.pos = byte_offset + token[2]
  [token[0], TokenValue.new(token, byte_offset, @locator)]
end

#emit_completed(token, byte_offset) ⇒ Object

Emits the completed token on the form [:tokensymbol, TokenValue]. This method does not alter the scanner's position.



# File 'lib/puppet/pops/parser/lexer2.rb', line 746

def emit_completed(token, byte_offset)
  [token[0], TokenValue.new(token, byte_offset, @locator)]
end

#enqueue(emitted_token) ⇒ Object

Allows subprocessors (for heredoc, etc.) to enqueue tokens that are tokenized by a different lexer instance.



# File 'lib/puppet/pops/parser/lexer2.rb', line 757

def enqueue(emitted_token)
  @token_queue << emitted_token
end

#enqueue_completed(token, byte_offset) ⇒ Object

Enqueues a completed token at the given offset.



# File 'lib/puppet/pops/parser/lexer2.rb', line 751

def enqueue_completed(token, byte_offset)
  @token_queue << emit_completed(token, byte_offset)
end

#escaped_end(value) ⇒ Object

Determines if the last char of value is escaped by a backslash.



# File 'lib/puppet/pops/parser/lexer2.rb', line 590

def escaped_end(value)
  escaped = false
  if value.end_with?(STRING_BSLASH_SLASH)
    value[1...-1].each_codepoint do |cp|
      if cp == 0x5c # backslash
        escaped = !escaped
      else
        escaped = false
      end
    end
  end
  escaped
end
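
A hedged illustration of the intended results (values are Ruby double-quoted literals, so "\\" is a single backslash):

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.escaped_end("/foo\\/")   # => true  (the terminating '/' is escaped; the regexp is not closed)
lexer.escaped_end("/foo\\\\/") # => false (the backslash is itself escaped, so '/' closes the regexp)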

#file ⇒ Object

TODO: This method should not be used; callers should get the locator, since it is most likely required to compute line, position, etc. given offsets.



# File 'lib/puppet/pops/parser/lexer2.rb', line 659

def file
  @locator ? @locator.file : nil
end

#file=(file) ⇒ Object

Convenience method for compatibility with the older lexer. Use lex_file instead. (It is bad form to overload the assignment operator for something that is not really an assignment.)



# File 'lib/puppet/pops/parser/lexer2.rb', line 652

def file=(file)
  lex_file(file)
end

#fullscan ⇒ Object

Scans all of the content and returns it in an array. Note that the terminating [false, false] token is included in the result.



# File 'lib/puppet/pops/parser/lexer2.rb', line 688

def fullscan
  result = []
  scan { |token| result.push(token) }
  result
end
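
A hedged usage sketch (assuming the puppet gem is loadable; the token symbols follow from the selector logic shown in the constructor):

require 'puppet'
require 'puppet/pops'

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.lex_string('$x = 10')
lexer.fullscan.map(&:first)
# => [:VARIABLE, :EQUALS, :NUMBER, false]  (the trailing false comes from the [false, false] end token)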

#initvars ⇒ Object



# File 'lib/puppet/pops/parser/lexer2.rb', line 673

def initvars
  @token_queue = []
  # NOTE: additional keys are used; :escapes, :uq_slurp_pattern, :newline_jump, :epp_*
  @lexing_context = {
    :brace_count => 0,
    :after => nil,
    :line_lexical_start => 0
  }
  # Use of --tasks introduces the new keyword 'plan'
  @taskm_keywords = Puppet[:tasks] ? { 'plan' => [:PLAN, 'plan', 4], 'apply' => [:APPLY, 'apply', 5] }.freeze : EMPTY_HASH
end

#lex_file(file) ⇒ Object

Initializes lexing of the content of the given file. An empty string is used if the file does not exist.



# File 'lib/puppet/pops/parser/lexer2.rb', line 665

def lex_file(file)
  initvars
  contents = Puppet::FileSystem.exist?(file) ? Puppet::FileSystem.read(file, :mode => 'rb', :encoding => 'utf-8') : ''
  assert_not_bom(contents)
  @scanner = StringScanner.new(contents.freeze)
  @locator = Locator.locator(contents, file)
end

#lex_string(string, path = nil) ⇒ Object



# File 'lib/puppet/pops/parser/lexer2.rb', line 623

def lex_string(string, path = nil)
  initvars
  assert_not_bom(string)
  @scanner = StringScanner.new(string)
  @locator = Locator.locator(string, path)
end

#lex_token ⇒ Object

This lexes one token at the current position of the scanner. PERFORMANCE NOTE: Any change to this logic should be performance measured.



# File 'lib/puppet/pops/parser/lexer2.rb', line 732

def lex_token
  @selector[@scanner.peek(1)].call
end

#lex_unquoted_string(string, locator, escapes, interpolate) ⇒ Object

Lexes an unquoted string.

Parameters:

  • string (String)

    the string to lex

  • locator (Locator)

    the locator to use (a default is used if nil is given)

  • escapes (Array<String>)

    array of character strings representing the escape sequences to transform

  • interpolate (Boolean)

    whether interpolation of expressions should be performed or not.



# File 'lib/puppet/pops/parser/lexer2.rb', line 636

def lex_unquoted_string(string, locator, escapes, interpolate)
  initvars
  assert_not_bom(string)
  @scanner = StringScanner.new(string)
  @locator = locator || Locator.locator(string, '')
  @lexing_context[:escapes] = escapes || UQ_ESCAPES
  @lexing_context[:uq_slurp_pattern] = if interpolate
                                         escapes.include?('$') ? SLURP_UQ_PATTERN : SLURP_UQNE_PATTERN
                                       else
                                         SLURP_ALL_PATTERN
                                       end
end

#regexp_acceptable? ⇒ Boolean

Answers after which tokens it is acceptable to lex a regular expression. PERFORMANCE NOTE: It may be beneficial to turn this into a hash with a default value of true for missing entries. A case expression with literal values will, however, create a hash internally. Since a reference is always needed to the hash, this access is almost as costly as a method call.

Returns:

  • (Boolean)


# File 'lib/puppet/pops/parser/lexer2.rb', line 767

def regexp_acceptable?
  case @lexing_context[:after]

  # Ends of (potential) R-value generating expressions
  when :RPAREN, :RBRACK, :RRCOLLECT, :RCOLLECT
    false

  # End of (potential) R-value - but must be allowed because of case expressions
  # Called out here to not be mistaken for a bug.
  when :RBRACE
    true

  # Operands that can be followed by DIV (even if illegal in grammar)
  when :NAME, :CLASSREF, :NUMBER, :STRING, :BOOLEAN, :DQPRE, :DQMID, :DQPOST, :HEREDOC, :REGEX, :VARIABLE, :WORD
    false

  else
    true
  end
end
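
To make the context-sensitivity concrete, a hedged sketch: a '/' following a NUMBER operand lexes as division, while a '/' in expression position starts a regular expression:

lexer = Puppet::Pops::Parser::Lexer2.new

lexer.lex_string('$x = /foo/')
lexer.fullscan.map(&:first) # => [:VARIABLE, :EQUALS, :REGEX, false]

lexer.lex_string('$x = 10 / 2')
lexer.fullscan.map(&:first) # => [:VARIABLE, :EQUALS, :NUMBER, :DIV, :NUMBER, false]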

#scan {|[false, false]| ... } ⇒ Object

A block must be passed to scan. It will be called with two arguments: a symbol for the token, and an instance of LexerSupport::TokenValue. PERFORMANCE NOTE: The TokenValue is designed to reduce the amount of garbage / temporary data and to only convert the lexer's internal tokens on demand. It is slightly more costly to create an instance of a class defined in Ruby than an Array or Hash, but the gain is much bigger since transformation logic is avoided for many of its members (most are never used, e.g. line/pos information, which is in general only of value for error messages and for some expressions which the lexer does not know about).

Yields:

  • ([false, false])


# File 'lib/puppet/pops/parser/lexer2.rb', line 702

def scan
  # PERFORMANCE note: it is faster to access local variables than instance variables.
  # This makes a small but notable difference since instance member access is avoided for
  # every token in the lexed content.
  #
  scn = @scanner
  lex_error_without_pos(Issues::NO_INPUT_TO_LEXER) unless scn

  ctx   = @lexing_context
  queue = @token_queue
  selector = @selector

  scn.skip(PATTERN_WS)

  # This is the lexer's main loop
  until queue.empty? && scn.eos?
    token = queue.shift || selector[scn.peek(1)].call
    if token
      ctx[:after] = token[0]
      yield token
    end
  end

  # Signals end of input
  yield [false, false]
end
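
A hedged usage sketch; the block receives the token symbol and its TokenValue, with the final [false, false] pair signalling end of input:

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.lex_string('if true { }')
lexer.scan do |sym, _value|
  next unless sym # the final [false, false] signals end of input
  puts sym        # prints IF, BOOLEAN, LBRACE, RBRACE in turn
end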

#string=(string) ⇒ Object

Convenience method for compatibility with the older lexer. Use lex_string instead, which allows passing the path without first having to call file= (which reads the file if it exists). (It is bad form to overload the assignment operator for something that is not really an assignment; overloading = also does not allow passing more than one argument.)



# File 'lib/puppet/pops/parser/lexer2.rb', line 619

def string=(string)
  lex_string(string, nil)
end