Attached is a fix for a problem with lexing Integer Dot (it also eliminates the unused octal and hex digit definitions). The problem is that "1. " gets two thirds of the way through "Integer Dot Integer" as a Decimal but fails to complete, so the lexer has no token to return and returns a token of kind 0. This was worked around by a special INTEGER_RANGE_START for "Integer.." and various NUMERIC_OPERATION rules for "Integer.->" etc. The attached patch ensures that "Integer Dot" produces the INTEGER_LITERAL, DOT token sequence, which removes all the parser grammar workarounds. [It is now possible to remove start/endOffset since there are no longer any tokens that fail to represent sensible ranges.]
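To illustrate the fix (not the actual LPG-generated OCL lexer; names and token kinds here are hypothetical), a minimal sketch: a real literal only matches when a digit follows the dot, so "1." lexes as INTEGER_LITERAL then DOT instead of dying partway through a Decimal:

```python
import re

# Hypothetical token specs; REAL_LITERAL requires digits AFTER the dot,
# so "1." never half-matches and falls through to INTEGER_LITERAL.
TOKEN_SPECS = [
    ("REAL_LITERAL", r"\d+\.\d+"),   # Integer Dot Integer
    ("INTEGER_LITERAL", r"\d+"),
    ("DOTDOT", r"\.\."),             # range operator ".."
    ("DOT", r"\."),
    ("NAME", r"[A-Za-z_]\w*"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("WS", r"\s+"),
]

MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPECS))

def tokenize(text):
    """Return (kind, lexeme) pairs, skipping whitespace."""
    tokens, pos = [], 0
    while pos < len(text):
        m = MASTER.match(text, pos)
        if m is None:
            raise SyntaxError(f"no token at offset {pos}")
        if m.lastgroup != "WS":
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

# "1. oclIsTypeOf(Integer)" now lexes cleanly: INTEGER_LITERAL, DOT, NAME, ...
print(tokenize("1. oclIsTypeOf(Integer)"))
# "1..5" lexes as INTEGER_LITERAL, DOTDOT, INTEGER_LITERAL with no
# INTEGER_RANGE_START special case.
print(tokenize("1..5"))
```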
Created attachment 146476 [details]
Elimination of parser/lexer workarounds

Try again.
AbstractOCLParser.createRangeStart should also be deleted.
Ping. This has been waiting for review for over a month. It is a very simple cleanup of a grammar misunderstanding. It still applies successfully to HEAD.
+1. Trivial: is there any real need to create a dotToken/dotDotToken? If so, why not exploit it in the remaining lexer grammar? Cheers, Adolfo.
Ed, your patch is extremely important. Not only does it remove the ugly workarounds, but it also seems to solve the problem below (though I didn't verify): `1. oclIsTypeOf(Integer)` could not be parsed, since there was no workaround for numeric operations preceded by whitespace. We seem to have had a problem no one had noticed before. For any other type, e.g. Boolean, both `true.oclIsTypeOf(Boolean)` and `true. oclIsTypeOf(Boolean)` parsed successfully. But although `1.oclIsTypeOf(Integer)` parsed fine, the variant with whitespace before the operation name produced a lexer error. My +1.
Changes committed to HEAD.

Re: comment #4
> Trivial: is there any real need to create a dotToken/dotDotToken? If so, why not exploit it in the remaining lexer grammar?

There is no dotToken; DotDotToken is used as in

  Token ::= IntegerLiteral DotDotToken /.$NoAction ./

The location of the makeAction is slightly unusual because lexer actions cannot produce two tokens from one reduction. Once moved, other reductions become difficult, which perhaps explains why the original author resorted to workarounds rather than solving the underlying problem.
Resolved
Closing after over 18 months in resolved state.