How does one do lexical analysis (tokenization) of Bash/Perl-style strings, where interpolation in the middle is possible?
Say we have:

    print "Now is {get "time of day"} of {get "current date"}..."
    print 'have a nice day!'

And {…} inserts a value in the middle of a string.
How would the lexer know which double quote closes the string and which is part of the string?
Python uses format-like routines, presumably because parsing such strings is hard.
Name: Anonymous 2013-07-15 20:11
Hmm, let's see: you write a state machine that walks through each character of the string. But a flat QUOTE_COUNT variable isn't enough here; keep a stack of modes instead. In code mode, a double quote pushes string mode. Inside a string, a { pushes code mode again, and a } pops back to the string; a double quote seen in string mode pops the string. That way the quote after the inner "get" opens a new string rather than closing the outer one, so the lexer always knows what the next quote means.
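A minimal sketch of that mode-stack idea in Python (the function name, token tags, and the toy grammar are made up for illustration, not any real lexer's API):

```python
def tokenize(src):
    """Toy lexer for a language where "..." strings may contain {code}
    interpolations, which may themselves contain more strings.
    A stack of modes ('code' / 'string') decides what each quote means."""
    tokens = []
    modes = ['code']              # start outside any string
    buf = []                      # accumulates string characters
    i = 0
    while i < len(src):
        c = src[i]
        if modes[-1] == 'string':
            if c == '"':          # this quote CLOSES the current string
                tokens.append(('STR', ''.join(buf)))
                buf = []
                modes.pop()
            elif c == '{':        # interpolation: switch back to code mode
                tokens.append(('STR', ''.join(buf)))
                buf = []
                tokens.append(('LBRACE', c))
                modes.append('code')
            else:
                buf.append(c)
        else:                     # code mode
            if c == '"':          # this quote OPENS a new string
                modes.append('string')
            elif c == '}':        # end of interpolation: resume the string
                tokens.append(('RBRACE', c))
                modes.pop()
            elif c.isalnum() or c == '_':
                j = i
                while j < len(src) and (src[j].isalnum() or src[j] == '_'):
                    j += 1
                tokens.append(('IDENT', src[i:j]))
                i = j
                continue
            elif not c.isspace():
                tokens.append(('PUNCT', c))
        i += 1
    return tokens
```

Running it on the example above, the quote before "time of day" is seen in code mode (we are inside the braces), so it starts a new string instead of ending the outer one; no counting of quotes is needed, only the current top of the stack.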