Maybe this does not cause problems for how it is currently used, but the token stream changes the token source and removes the unused tokens inside setTokenSource():

    if (p < tokens.size()) {
        int rIndex = p > 0 ? ((CommonToken) tokens.get(p - 1)).getStopIndex() : 0;
        tokenSource.rewindToIndex(rIndex + 1);
        for (int i = tokens.size() - 1; i >= p; i--) {
            tokens.remove(i);
        }
    }

The tokens are removed, but the line number is not decreased when necessary. When the token source is "popped" and the stream starts reading again, the tokens carry incorrect line numbers, so at the end of a 30-line file you could have tokens claiming to be at line 50 or so.

One fix could be:

    if (p < tokens.size()) {
        int rIndex = p > 0 ? ((CommonToken) tokens.get(p - 1)).getStopIndex() : 0;
        tokenSource.rewindToIndex(rIndex + 1);
        // Compile the pattern once, outside the loop. It is named "pattern"
        // rather than "p" to avoid shadowing the stream's index variable p.
        Pattern pattern = Pattern.compile("\\n");
        for (int i = tokens.size() - 1; i >= p; i--) {
            Token removed = tokens.remove(i);
            // If the token source has read new lines and we are removing
            // those tokens, update the line number: rewind one line for
            // each newline character in the removed token's text.
            Matcher m = pattern.matcher(removed.getText());
            while (m.find()) {
                tokenSource.rewindLine();
            }
        }
    }

and the rewindLine() method:

    public void rewindLine() {
        ANTLRStringStream stream = (ANTLRStringStream) input;
        int line = stream.getLine() - 1;
        if (line >= 1) {
            stream.setLine(line);
        }
    }

That seems to work well, unless there are better options.
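The core of the fix, rewinding one line per newline character in the removed token's text while never going below line 1, can be sketched independently of ANTLR. This is a standalone illustration, not the library's API; `countNewlines` and `rewindLines` are hypothetical names for this sketch:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LineRewind {
    private static final Pattern NEWLINE = Pattern.compile("\n");

    // Count newline characters in a removed token's text; each one
    // means the underlying stream advanced a line that must be undone.
    static int countNewlines(String text) {
        Matcher m = NEWLINE.matcher(text);
        int count = 0;
        while (m.find()) {
            count++;
        }
        return count;
    }

    // Simulated stream line counter: rewind once per newline, but never
    // below line 1, mirroring the guard in the proposed rewindLine().
    static int rewindLines(int currentLine, String removedText) {
        int line = currentLine - countNewlines(removedText);
        return Math.max(line, 1);
    }

    public static void main(String[] args) {
        // A removed token containing two newlines pulls the line back by 2.
        System.out.println(rewindLines(10, "foo\nbar\n")); // 8
        // Rewinding never goes below line 1.
        System.out.println(rewindLines(2, "\n\n\n"));      // 1
    }
}
```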