I understand that the token parser or the validation step will choke if the file isn't terminated with the exact NUL, CR, LF sequence, or whatever it is. But why is that?
Why don't you do something like this after reading tokens, before validating them?
token = string.match(token, '[%a%d%+]+') .. '\0'
There may be slightly more to it than that, but in principle this would let it work as long as a valid token is in there somewhere, regardless of stray NUL bytes, line endings, whitespace, or other magic unprintable characters surrounding it.
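To spell that idea out a bit more defensively (a sketch only; `sanitize` is a hypothetical helper name, and I'm assuming tokens are runs of letters, digits, and `+`, and that the parser wants a single trailing NUL):

```lua
-- Hypothetical sanitizer: pull out the first run of valid token
-- characters (letters, digits, '+') and re-terminate it with one
-- NUL byte, discarding whatever junk surrounded it.
local function sanitize(raw)
  local token = string.match(raw, '[%a%d%+]+')
  if not token then
    return nil  -- no valid token anywhere in the input
  end
  return token .. '\0'
end

-- CR/LF, spaces, and stray NULs around the token all get dropped:
print(sanitize('ABC123+\r\n'))
print(sanitize('\0  ABC123+  \0\r\n'))
```

Handling the "no token found" case explicitly also avoids the error you'd otherwise get from concatenating `nil` when `string.match` finds nothing.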
It's astonishing to watch people fall on their face over this month after month when a fix seems so trivial!