Move these two flags from HTML renderer's flags to extensions. Implement
both since they were not yet implemented in the AST rewrite. Add tests.
Note: the expected test strings differ very slightly from v1. The HTML
produced by v2 has a few extra newlines compared to the old one, but
it's now uniform with other sections of the generated document. If the
newline placement gets cleaned up in the future, this will get fixed
automatically, since the renderer is agnostic about the TOC list.
Separate Smartypants somewhat from the HTML renderer. Move its flags
from HtmlFlags to Extensions (they should probably get their own set of
flags eventually, but not now). With that done, do a separate walk of
the tree and either run the Smartypants processor, if it's enabled, or
simply escape the text nodes.
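For illustration, that separate walk could look roughly like this
(simplified stand-in types and substitutions, not the real Smartypants
processor or the actual node API):

package sketch

import (
    "bytes"
    "html"
    "strings"
)

// textNode is a stand-in for a text node in the parsed tree.
type textNode struct{ literal string }

// renderText is the separate pass over text nodes: run a smartypants-style
// substitution when the extension is enabled, otherwise just escape the
// literal text.
func renderText(nodes []*textNode, smartypants bool, out *bytes.Buffer) {
    replacer := strings.NewReplacer("--", "\u2013", "...", "\u2026")
    for _, n := range nodes {
        escaped := html.EscapeString(n.literal)
        if smartypants {
            escaped = replacer.Replace(escaped)
        }
        out.WriteString(escaped)
    }
}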
Build a partial tree by adding block nodes. The block nodes will then be
traversed and inline markdown parsed inside each of them. Tests are
broken at this point until the full tree is constructed.
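A rough sketch of that two-phase shape, with illustrative types rather
than the parser's real ones:

package sketch

// node is a simplified stand-in: block nodes keep their raw content until
// the inline pass fills in children.
type node struct {
    kind     string
    content  []byte
    children []*node
}

// parseBlocks builds the partial tree: block-level nodes only, with the
// raw bytes of each block kept for later (block scanning elided).
func parseBlocks(input []byte) []*node {
    return []*node{{kind: "paragraph", content: input}}
}

// parseInlines traverses the finished block nodes and parses inline
// markdown inside each of them (here it just wraps the content in a text
// node).
func parseInlines(blocks []*node) {
    for _, b := range blocks {
        b.children = append(b.children, &node{kind: "text", content: b.content})
    }
}

// parse runs both phases: blocks first, then inlines within each block.
func parse(input []byte) []*node {
    blocks := parseBlocks(input)
    parseInlines(blocks)
    return blocks
}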
Remove the 'out' parameter. Also, instead of returning and passing
around the position of the TOC, use CopyWrites to capture the contents
of the header and pass that captured buffer instead.
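For illustration, the call site could end up looking something like this
(renderHeadingText and appendTOCEntry are hypothetical names; CopyWrites
itself is sketched under the next message):

// instead of remembering where in 'out' the header started, capture a
// copy of the header's output as it is written:
headerContent := r.CopyWrites(func() {
    r.renderHeadingText(node) // hypothetical heading-rendering call
})
r.appendTOCEntry(headerContent) // hypothetical TOC helper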
Add a structure to collect output in a buffer (replaces what used to be
the 'out' parameter all over the place).
Notable things about this struct are the captureBuff and copyBuff
buffers. They're intended to redirect all the output (captureBuff) or
make a copy of all the output (copyBuff) while they're set to non-nil.
Here's an example of their intended use:
// what used to be a temp buffer as an 'out' parameter
// var cellWork bytes.Buffer
// p.inline(&cellWork, data[cellStart:cellEnd])
// can now be captured like this:
cellWork := p.r.CaptureWrites(func() {
    p.inline(data[cellStart:cellEnd])
})
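A minimal, self-contained sketch of how such a struct could behave; the
buffer names follow the description above, but the struct name and the
exact implementation here are assumptions:

package main

import (
    "bytes"
    "fmt"
    "io"
)

// outputHolder collects all output in buf. While captureBuff is non-nil,
// writes are redirected into it instead of buf; while copyBuff is
// non-nil, writes go to buf and are also copied into copyBuff.
type outputHolder struct {
    buf         bytes.Buffer
    captureBuff *bytes.Buffer
    copyBuff    *bytes.Buffer
}

func (o *outputHolder) Write(p []byte) (int, error) {
    if o.captureBuff != nil {
        return o.captureBuff.Write(p)
    }
    if o.copyBuff != nil {
        o.copyBuff.Write(p)
    }
    return o.buf.Write(p)
}

// CaptureWrites runs fn and returns everything it wrote, keeping it out
// of the main buffer entirely.
func (o *outputHolder) CaptureWrites(fn func()) []byte {
    saved := o.captureBuff
    o.captureBuff = new(bytes.Buffer)
    fn()
    captured := o.captureBuff.Bytes()
    o.captureBuff = saved
    return captured
}

// CopyWrites runs fn and returns a copy of everything it wrote, while
// the writes still reach the main buffer as usual.
func (o *outputHolder) CopyWrites(fn func()) []byte {
    saved := o.copyBuff
    o.copyBuff = new(bytes.Buffer)
    fn()
    copied := o.copyBuff.Bytes()
    o.copyBuff = saved
    return copied
}

func main() {
    o := &outputHolder{}
    io.WriteString(o, "visible ")
    hidden := o.CaptureWrites(func() { io.WriteString(o, "captured") })
    copied := o.CopyWrites(func() { io.WriteString(o, "copied") })
    fmt.Printf("main %q, captured %q, copied %q\n", o.buf.String(), hidden, copied)
    // prints: main "visible copied", captured "captured", copied "copied"
}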
Change the way maybeLineBreak gets called to avoid breaking up
stretches of unprocessed characters that smartypants expects. This
inline processing is getting a bit out of hand; something needs to be
done about it.
Autolink detection used to be triggered by a colon, and the preceding
protocol name then had to be rewound over. Instead of doing that,
trigger autolink processing on [hmfHMF] and see if what follows looks
like a link.
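The [hmfHMF] characters presumably cover the first letters of the
schemes being detected (http, mailto, ftp). A hedged sketch of what the
check amounts to (the prefix list and helper name are assumptions):

package sketch

import "bytes"

var linkPrefixes = [][]byte{
    []byte("http://"),
    []byte("https://"),
    []byte("ftp://"),
    []byte("mailto:"),
}

// looksLikeLink reports whether data[offset:] starts with something that
// could be autolinked.
func looksLikeLink(data []byte, offset int) bool {
    rest := data[offset:]
    for _, prefix := range linkPrefixes {
        if len(rest) >= len(prefix) && bytes.EqualFold(rest[:len(prefix)], prefix) {
            return true
        }
    }
    return false
}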
Replace output truncation with appropriate inline callbacks. lineBreak()
is now only responsible for handling HardLineBreak. BackslashLineBreak
is handled in escape() and trailing whitespace is considered in
maybeLineBreak().
The link parser used to truncate output in two cases: when parsing
image links and when parsing inline footnotes. To avoid this
truncation, introduce a separate callback for each of these cases and
avoid writing the extra characters in the first place, instead of
truncating them after the fact.
The callbacks used to return bools, but none of the actual
implementations ever return false; they always return true. So, to make
further refactoring simpler, make the interface reflect the inner
workings: no more return values, no more conditionals.
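Roughly, the change amounts to this kind of interface diff (the method
name and signature are made up for illustration):

package sketch

// Before: every callback returned a bool that, in practice, was always
// true, so every call site carried a pointless conditional.
type callbacksBefore interface {
    Emphasis(text []byte) bool
}

// After: the interface reflects how the implementations actually behave;
// nothing to check, no conditionals at the call sites.
type callbacksAfter interface {
    Emphasis(text []byte)
}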
This is a better style for a set, since each element can only be
present or absent. With bool as the value type, each key may be absent,
true, or false, and it also uses slightly more memory.
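For illustration (the key type is arbitrary here):

package main

import "fmt"

func main() {
    // struct{} values: a key is either present or absent, and the values
    // take up no space.
    set := map[string]struct{}{"a": {}}
    _, present := set["a"]
    fmt.Println("present:", present)

    // bool values: a key may be absent, true, or false, each entry stores
    // an extra byte, and absent and false look the same on lookup.
    flags := map[string]bool{"a": true}
    fmt.Println("flag:", flags["a"], "missing:", flags["b"])
}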
This is both nasty and neat at the same time. All the code could
handle nested footnotes just fine; the only place that wasn't working
was the final loop that printed the list. The loop was in range form,
which couldn't account for another footnote being inserted while
processing the existing ones. Changing the loop to the iterative form
solves that.
Closes #193.
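A small illustration of the loop change, with a plain string slice
standing in for the footnote list:

package main

import "fmt"

func main() {
    notes := []string{"outer"}

    // A range loop snapshots the slice header up front, so a nested
    // footnote appended during the loop would never get printed:
    //   for _, n := range notes { ... }

    // The iterative form re-checks len(notes) on every pass, so entries
    // appended mid-loop are picked up as well.
    for i := 0; i < len(notes); i++ {
        fmt.Println("printing footnote:", notes[i])
        if notes[i] == "outer" {
            notes = append(notes, "nested") // discovered while printing
        }
    }
}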