As my wiki parser grew larger, accumulating feature after feature, the initial closure-based hack outlived its usefulness.
I evolved the parser into a full-blown AST (Abstract Syntax Tree) model: the parser produces AST nodes, building up a tree. At the end, the tree is "folded" down to produce the resulting HTML, along with any model transformations collected along the way.
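To make the idea concrete, here is a minimal sketch of such a model, not the actual WAST code: node and context names (WAst, FoldCtx, SText, SBold, SSeq) are illustrative. The parser builds the tree; fold() then walks it with a context (data that is only known at fold time, not at parse time) and emits HTML.

```scala
// Context passed to fold(): carries data not available while parsing
case class FoldCtx(realm: String)

sealed trait WAst {
  def fold(ctx: FoldCtx): String
}

// Plain text folds to itself
case class SText(s: String) extends WAst {
  def fold(ctx: FoldCtx) = s
}

// Bold wraps its child's folded output
case class SBold(inner: WAst) extends WAst {
  def fold(ctx: FoldCtx) = s"<b>${inner.fold(ctx)}</b>"
}

// A sequence folds each child in order and concatenates
case class SSeq(children: List[WAst]) extends WAst {
  def fold(ctx: FoldCtx) = children.map(_.fold(ctx)).mkString
}

// Parsing something like "hello **world**" might yield this tree:
val tree = SSeq(List(SText("hello "), SBold(SText("world"))))
val html = tree.fold(FoldCtx("wiki"))
// html == "hello <b>world</b>"
```

The nice property is that each node type owns its own rendering, so extensions can add node types without touching the core fold.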
You can see the AST model here: WAST.scala.
The interesting AST nodes are:
The interesting method is fold(). As you'll notice, folding receives a context with more data than is available at parse time.
This model is quite flexible: you can easily plug in your own parser extensions (I'm still cleaning up the edges there), which can create their own funky AST nodes, if need be, implementing their own funky folds and transformations.
I chose composition as the pattern for extending the parser: you implement your extensions as traits and mix them into your final parser. For instance, here's mine:
class WikiParserCls(val realm: String) extends WikiParserT
    with DslParser with AdParser with WikiDomainParser with WikiParserNotes {

  // each extension registers its rules with the base parser
  withWikiProp(dslWikiProps)
  withWikiProp(adWikiProps)
  withDotProp(notesDotProps)
  withBlocks(domainBlocks)
}
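Here is a self-contained sketch of that composition pattern: a base parser keeps a registry of rules, and each extension trait registers its own when mixed in. All names below (withWikiProp taking a name and a handler, renderProp, the extension traits) are illustrative assumptions, not the real codebase's signatures.

```scala
trait WikiParserBase {
  // registered "wikiProp" handlers: name -> (value => rendered HTML)
  private var wikiProps = Map.empty[String, String => String]

  def withWikiProp(name: String, f: String => String): Unit =
    wikiProps += (name -> f)

  // render a known prop, or echo unknown ones back unparsed
  def renderProp(name: String, value: String): String =
    wikiProps.get(name).map(_(value)).getOrElse(s"{{$name:$value}}")
}

// Extensions register their rules in the trait body,
// which runs when the final class is constructed
trait AdParser extends WikiParserBase {
  withWikiProp("ad", v => s"""<div class="ad">$v</div>""")
}

trait NotesParser extends WikiParserBase {
  withWikiProp("note", v => s"<aside>$v</aside>")
}

// the final parser mixes in whichever extensions it needs
class MyWikiParser extends WikiParserBase with AdParser with NotesParser

val p = new MyWikiParser
val note = p.renderProp("note", "remember this")
// note == "<aside>remember this</aside>"
```

Trait linearization guarantees the base's registry is initialized before any extension trait's body runs, so registration in trait bodies is safe.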
Because of the grammar and syntax rules, I couldn't let you combine arbitrary rules into the final parser (using plain combinators); instead, you can extend only specific sections: wikiProps, dotProps and blocks, hence the funky little model above.
Take a look at WikiParserBase.scala and WikiParser.scala to see how the composition of parser rules works (e.g. moreWikiProp).
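The rule-chaining idea can be sketched roughly like this; the real code uses Scala parser combinators, but the same effect can be shown with partial functions, where each extension appends an alternative and later-registered rules act as fallbacks. The Rule type and the moreWikiProp shape here are assumptions for illustration.

```scala
// A rule either matches its input (and produces output) or defers
type Rule = PartialFunction[String, String]

// the base parser's built-in rule: wiki links
var rule: Rule = {
  case s if s.startsWith("[[") => s"link:${s.drop(2).dropRight(2)}"
}

// extensions append alternatives; earlier rules win on overlap
def moreWikiProp(r: Rule): Unit =
  rule = rule.orElse(r)

// an extension registers its own rule for {{...}} props
moreWikiProp {
  case s if s.startsWith("{{") => s"prop:${s.drop(2).dropRight(2)}"
}

val link = rule("[[Home]]")   // "link:Home"
val prop = rule("{{ad:x}}")   // "prop:ad:x"
```

This mirrors how parser-combinator alternation (`|`) composes: the base holds an open-ended alternative that extensions keep growing.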