I am working on scanning and parsing the OPEN statement, and I think I want to keep the tokenization of the code tightly coupled to the parsing step. I think this might make things easier when I get to evaluation.
OPEN '','SITE-FILE' TO SITE.FILE ELSE NULL
Each part outside of the keywords OPEN, TO and ELSE can be an expression, and those expressions can contain various break characters. I want to simply pull out the different parts of the OPEN statement so I can evaluate them individually. If I generalized the tokenization, I'd have trouble marrying the tokens back up to the statement later.
Possibly. {It sounds right in my head, so we'll see where this takes me} [I don't actually think it's right, and my gut feeling is that I'm making a mistake].
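Roughly, the split I have in mind looks like this. It's only a sketch with made-up variable names, not the real code, and it cheats by assuming the keywords and the separating comma never appear inside the quoted expressions:

STMT = "OPEN '','SITE-FILE' TO SITE.FILE ELSE NULL"
*
* Find the keywords, then slice out the raw text between them. Each piece
* gets handed to the expression evaluator later, untouched.
POS.TO = INDEX(STMT, ' TO ', 1)
POS.ELSE = INDEX(STMT, ' ELSE ', 1)
*
EXPR.PART = TRIM(STMT[6, POS.TO - 6])                    ;* "'','SITE-FILE'"
FILE.VAR = TRIM(STMT[POS.TO + 4, POS.ELSE - POS.TO - 4]) ;* "SITE.FILE"
ELSE.PART = TRIM(STMT[POS.ELSE + 6, LEN(STMT)])          ;* "NULL"
*
* The dictionary and filename expressions are themselves comma separated.
DICT.EXPR = TRIM(FIELD(EXPR.PART, ',', 1))               ;* "''"
NAME.EXPR = TRIM(FIELD(EXPR.PART, ',', 2))               ;* "'SITE-FILE'"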
I've finished writing up the OPEN statement handling. I added the ENVIRONMENT and FILES variables to manage the file handles. Now I can work on the READ statement. I will need to do a LOCATE to get the file number, and then I can get the file handle from the FILES variable.
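Something like this is the shape of it (a rough sketch only, not the real variable names, and give or take flavour differences in the LOCATE syntax):

* OPEN.NAMES keeps the names of the files opened so far, one per attribute,
* and FILES is a dimensioned array holding the handles in the matching slots.
DIM FILES(100)
OPEN.NAMES = ''
*
* At OPEN time, stash the handle and remember which slot it went into.
OPEN '','SITE-FILE' TO FILES(1) ELSE STOP
OPEN.NAMES<-1> = 'SITE-FILE'
*
* At READ time, LOCATE gives the file number, which indexes into FILES.
LOCATE 'SITE-FILE' IN OPEN.NAMES<1> SETTING FNO THEN
   READ REC FROM FILES(FNO), 'SOME-ID' THEN
      CRT REC
   END ELSE
      CRT 'Record not on file'
   END
END ELSE
   CRT 'File was never opened'
END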
File handles can be stored in a dimensioned array, but they can't be stored in a dynamic array. This makes sense, as dimensioned arrays actually create real variables in the symbol table, whereas dynamic arrays are really just strings.
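For example (names made up, purely to illustrate the difference):

* Each element of a dimensioned array is a real variable, so it can hold a handle.
DIM HANDLES(10)
OPEN '','SITE-FILE' TO HANDLES(1) ELSE STOP
*
* A dynamic array is just a string under the covers, so there is nowhere
* for a handle to live in one of its attributes.
OPEN '','SITE-FILE' TO SITE.FILE ELSE STOP
DYN = ''
DYN<1> = SITE.FILE   ;* the handle doesn't survive being squeezed into a string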
I haven't decided if I want to use my hashmap functions yet. They would definitely speed things up, but I also think {swapping them in} [As you can tell, I really don't want to have dependencies] later, after starting with the LOCATE statement, should be simple.