
medium rare

Is it crazy that a Medium post about javascript bloat would itself have megabytes of javascript and stylesheets? I wouldn’t know, since I didn’t see it. I have a little proxy-like service running that rewrites its HTML. This particular service was an experiment to replace some python code with go, to evaluate suitability for future hacks.

I’ve been using the python lxml library for HTML parsing for ages. Seems to work pretty well. There’s actually a bunch of little one off scripts that share a similar skeleton, which is modified as needed. After all, the best code isn’t reusable, it’s reeditable. A little while ago that turned into a script to download Medium posts after I read them and save the important parts, so that sometime later when I want to read about the Riemann Hypothesis, it’s all still there in a place I can find it.

Back to the task at hand. After reading comments about the irony of bloat complaints, I thought I might take my medium download script, tune it up a bit, translate it to go, then run it as a service, thus sparing all my computers. Not especially interesting or challenging, but a quick experiment. Would it be easy (fast) to write? And would it run fast?

func main() {
        sort.Strings(permittedtags)
        mux := http.NewServeMux()
        mux.HandleFunc("/", refetch)
        log.Fatal(http.ListenAndServe(":8090", mux))
}

Super basic main function. The permitted tags get sorted up front so we can binary search them later; then we listen for anything and refetch it.

func refetch(w http.ResponseWriter, r *http.Request) {
        remurl := "https://medium.com" + r.URL.Path
        log.Println("fetching", remurl)
        resp, err := http.Get(remurl)
        if err != nil {
                log.Println("err", err)
                return
        }
        defer resp.Body.Close()
        if strings.HasPrefix(resp.Header.Get("Content-Type"), "text/html") {
                doc, err := html.Parse(resp.Body)
                if err != nil {
                        log.Println("parse err", err)
                        return
                }
                rerender(w, doc)
        } else {
                w.Header().Set("Cache-Control", "max-age=360000")
                io.Copy(w, resp.Body)
        }
}

Even that’s pretty simple. Whatever URL we are asked for, we ask Medium for the same thing. Curiously, Medium does user agent sniffing, but impersonating a browser at this point actually results in HTML that’s less fun to work with. We’re only interested in rewriting HTML pages. (Some other requests, like favicon, will also pass this way.)
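If you did want the browser flavored HTML, overriding the user agent is only a few more lines inside refetch. A sketch, untested against Medium, with the UA string being whatever you fancy:

        // claim to be a browser; plain http.Get sends go's default User-Agent
        req, err := http.NewRequest("GET", remurl, nil)
        if err != nil {
                return
        }
        req.Header.Set("User-Agent", "Mozilla/5.0 (compatible)")
        resp, err := http.DefaultClient.Do(req)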

var prolog = []byte(
`<!doctype html><html><meta charset="utf-8">
<style>
body {
        max-width: 940px;
        margin: auto;
        line-height: 1.5;
}
img {
        max-width: 900px;
}
</style>
<body>
`)

func rerender(w io.Writer, root *html.Node) {
        sel := cascadia.MustCompile("div.section-content")
        divs := sel.MatchAll(root)
        w.Write(prolog)
        for _, div := range divs {
                clean(w, div)
        }
}

Medium posts keep all the good stuff under section-content divs. If we wanted to be generic, maybe this algorithm or any of a dozen others would do. But I was making an apples to apples comparison with a bespoke python script, so it’s done simply.

We provide just enough styling to keep the page readable. Customize to taste. The meat of the matter, which we’re finally getting to, is in the clean function.

func clean(w io.Writer, node *html.Node) {
        switch node.Type {
        case html.ElementNode:
                tag := node.Data
                if tag == "a" {
                        w.Write([]byte(fmt.Sprintf(`<a href="%s">`,
                                html.EscapeString(getattr(node, "href")))))
                } else if tag == "img" {
                        w.Write([]byte(fmt.Sprintf(`<img src="%s">`,
                                html.EscapeString(getattr(node, "src")))))
                } else if permitted(tag) {
                        w.Write([]byte(fmt.Sprintf("<%s>", tag)))
                }
        case html.TextNode:
                w.Write([]byte(html.EscapeString(node.Data)))
        }
        for c := node.FirstChild; c != nil; c = c.NextSibling {
                clean(w, c)
        }
        if node.Type == html.ElementNode {
                tag := node.Data
                if tag == "a" || permitted(tag) {
                        w.Write([]byte(fmt.Sprintf("</%s>", tag)))
                }
                if tag == "p" || tag == "div" {
                        w.Write([]byte("\n"))
                }
        }
}

Recursively descend through the tree, printing out the permitted tags. These are generally very simple, only requiring special handling for a and img tags.

Unlike lxml, go’s html package makes both elements and text into nodes. The text isn’t attached as a field on the element nodes, so walking the tree is a little different. Also, there’s no easy way to get an attribute?
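For example, collecting all the text beneath a node takes a short walk, where lxml has text_content(). A minimal sketch; textContent is my name, not the library’s:

// textContent concatenates every text node under n, roughly
// what lxml's text_content() hands back in one call.
func textContent(n *html.Node) string {
        var buf bytes.Buffer
        var walk func(*html.Node)
        walk = func(n *html.Node) {
                if n.Type == html.TextNode {
                        buf.WriteString(n.Data)
                }
                for c := n.FirstChild; c != nil; c = c.NextSibling {
                        walk(c)
                }
        }
        walk(n)
        return buf.String()
}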

var permittedtags = []string{"div", "h1", "h2", "table", "tr", "td", "p", "span", "pre",
        "blockquote", "strong", "em", "b", "i", "ol", "ul", "li"}

func permitted(tag string) bool {
        idx := sort.SearchStrings(permittedtags, tag)
        return idx < len(permittedtags) && permittedtags[idx] == tag
}

func getattr(node *html.Node, attr string) string {
        for _, a := range node.Attr {
                if a.Key == attr {
                        return a.Val
                }
        }
        return ""
}

There’s really no better way to find an attribute than to loop through an array? A little weird, but maybe the space overhead of using a map is too much. With only a few attributes per node, the linear scan is probably faster anyway.
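If the same node were queried repeatedly, one could flatten the slice into a map first. A sketch, with attrmap being my own name for it:

// attrmap converts a node's attribute slice into a map.
// Only worth the allocation if the node is queried many times.
func attrmap(node *html.Node) map[string]string {
        m := make(map[string]string, len(node.Attr))
        for _, a := range node.Attr {
                m[a.Key] = a.Val
        }
        return m
}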

The fact that sort.Search returns an index even when nothing is found is one of those less is more, worse is better go quirks. It’s not hard for me to test the index, but since the implementation already knew whether it was a match and threw that information away, it feels wasteful.

Satisfied it works on localhost, you can tune things up with a few lines of nginx.conf and some DNS trickery to always point browsers at the proxy, but that’s getting into “you’ll shoot your eye out, kid” territory.
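Roughly like so, though these lines are a guess at a setup I’m not recommending: an /etc/hosts entry pointing medium.com at localhost, a locally trusted certificate, and nginx forwarding to the rewriter.

# hypothetical nginx.conf fragment; assumes the rewriter on 8090
# and a locally trusted cert minted for medium.com
server {
        listen 443 ssl;
        server_name medium.com;
        ssl_certificate /etc/ssl/medium-local.crt;
        ssl_certificate_key /etc/ssl/medium-local.key;
        location / {
                proxy_pass http://127.0.0.1:8090;
        }
}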

As for results, it’s pretty good. Just the HTML for a post shrinks by a factor of six, to say nothing of the js and css left out. Where do the savings come from? The code above doesn’t copy attributes for most tags, so elements won’t carry class information as in <em class="markup--em markup--p-em">. I haven’t spelunked the stylesheet, though I do find it a little curious that we’ve taken to replicating selectors in class names. You will also miss out on such gems as <div style="padding-bottom: 48.199999999999996%;">, resulting in some layout differences. A lot of page weight remains in the form of images, although for bonus points one can make them click to load.
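For those bonus points, one approach is to swap the img branch of clean for a plain link, so the bytes only move when the reader asks. A sketch; writeimg is a hypothetical helper, not in the script above:

// writeimg emits a link in place of an inline image; nothing
// downloads until the reader follows it.
func writeimg(w io.Writer, node *html.Node) {
        src := html.EscapeString(getattr(node, "src"))
        fmt.Fprintf(w, `<a href="%s">[image]</a>`, src)
}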

I mentioned performance. Some benchmark code.

func main() {
        data, _ := ioutil.ReadFile("riemann.html")
        for i := 0; i < 100; i++ {
                buf := bytes.NewBuffer(data)
                root, _ := html.Parse(buf)
                sel := cascadia.MustCompile("div.section-content")
                divs := sel.MatchAll(root)
                fmt.Printf("%d found %d divs\n", i, len(divs))
        }
}

vs

from lxml.html import document_fromstring

fd = open("riemann.html", "rb")
data = fd.read()
i = 0
while i < 100:
        root = document_fromstring(data)
        divs = root.cssselect("div.section-content")
        print("%d found %d divs" % (i, len(divs)))
        i += 1

The go version runs about six times faster. I’ve never had major problems with lxml’s performance, but that’s primarily as a batch user. A fire-and-forget script that takes a second to run is fine. But for an inline rewriter, getting results fast is more important.

Python and lxml are still very handy for poking through a document interactively, but now that I’ve got my HTML rewriting skeleton translated to go, I’ll probably use that for all future final versions.

mediumrare.go source.
