I Accidentally Built a Web Browser

Posted: November 10, 2025
tdoc viewing this blog post

Sometimes I get a little too excited about a small side project. I spend the weekend repeatedly opening the laptop for a few minutes to “just finish something real quick.” Apparently, this happened over Halloween weekend, and my wife asked me what I was working on. I replied, “A web browser.” “A web browser? Why?” she said. It took me a bit to phrase an answer, and I tried to explain how a few long-running projects had finally overlapped—I may have leaned into the crossover joke because she’s an Avengers fan. ¯\_(ツ)_/¯

This is a story about yak shaving, rewriting in Rust, ADHD, and three small, independent hobby projects that had no business being related. Each scratched a different itch and lived in its own little bubble, until they finally converged and something unexpected emerged from the collision.

Think of them as parallel universes that kept drifting closer:

  • Universe 1: A Personal Wiki
    I kept tinkering with my knowledge base, trying every other language and GUI toolkit in search of something that felt right.

  • Universe 2: A Streaming Parser
    Existing XML libraries kept mangling documents, so I built a streaming parser that wouldn’t.

  • Universe 3: The Email Client
    Who isn’t always on the lookout for a better terminal email experience?

Universe 1: A Personal Wiki

Born on the web (2002-2004)

In the very early 2000s, around the time I graduated from school, I created a real-time collaborative wiki called LIve wiKI, or just “Liki”. It was a simple web application built with PHP and MySQL, but it leaned heavily on XMLHttpRequest to provide live updates.

Users were able to see edits from others in real time without refreshing the page. My friends used it to collect links to interesting articles, as offline storage for IRC discussions, and obviously, for porn.

Liki featured a custom markup language (similar to MediaWiki syntax) that I designed to be easy to type and read.

It was a fun experiment in pushing the boundaries of web interactivity at the time, and it taught me a lot about atomic updates, conflict resolution, and user experience design.

The road to the desktop (2004-2017)

Around 2004, I added basic authentication and versioning, and started hosting a private instance to use as my personal knowledge base: somewhere to keep notes, document projects, and organize thoughts. And while the web was a great platform for collaboration, for personal notes I wanted something that integrated better with my desktop workflow.

What followed was more than a decade of experimentation across an absurd variety of languages and GUI toolkits:

  • fxliki (2006): First attempt at a desktop client, written in C++ with the FOX toolkit
  • cmdliki (2006–2008): A Ruby command-line client for quick edits using $EDITOR
  • fliki (2006–2008): Another C++ version, this time built with FLTK and a custom text buffer implementation for live markup previews
  • likipp (2006): A short-lived experiment with the Ultimate++ framework
  • emacs-liki (2008): 20 lines of Emacs Lisp for quick integration
  • cliki (2008–2011): Objective-C with Cocoa for native macOS integration
  • qliki (2010): C++ with Qt, adding syntax highlighting and Subversion integration
  • liki (2011–2017): The most mature version—a cross-platform C# implementation with GTK and Cocoa frontends, multiple storage backends (plain text files, Git, web), and a plugin system

Each implementation had different traits, but the only thing that really stuck after ten years was that I wanted UNIX-like simplicity: notes had to be written in a human-readable markup language, stored locally in an easy-to-navigate directory structure, and I wanted to manage history and sync between machines using Git.

Piki, the personal wiki (2020+)

After Microsoft acquired Xamarin (the company behind Mono) in 2016, maintaining the Mono-based implementation had gotten increasingly harder—especially the Cocoa integration on the Mac.

So, in 2020, I rewrote only the basic functionality in Go and was pretty happy with it. I decided to give the project a more fitting name, and called it Piki. This version stripped away the GUI entirely—just a terminal interface, local Markdown files, and Git integration. I used it together with Visual Studio Code a lot.

But the itch remained. I experimented with adding a GUI using webview, explored Go GUI toolkits, and even built Spot—a reactive GUI framework for Go—trying to find the right approach. Each attempt revealed the same fundamental issue: with Go, a lightweight but featureful native GUI is essentially out of reach.

That note-taking universe would remain unfinished until 2025, when it finally met the other two. But first, here’s how those universes started.

Universe 2: A Streaming Parser

In 2015, I was working on a completely unrelated Go project that involved processing large numbers of XML files and making light modifications to them. I ran into serious issues with Go’s standard XML library at the time: it over-escaped text data, had broken namespace handling, and couldn’t parse DOCTYPEs with subsets. Worse, if you parsed an XML file and wrote it back out, the structure would change—making diffs useless.

So I built gockl, a streaming XML processor with one core principle: don’t fuck with the markup. It could parse XML/HTML and re-encode it without changing the original structure of the file. Perfect for making minimal modifications to documents while keeping them diffable.
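The core idea can be sketched in a few lines of Rust. This is a simplified illustration of the principle, not gockl’s actual API: a tokenizer that hands out raw slices of the input, so that concatenating the tokens reproduces the document byte-for-byte.

```rust
/// A minimal, lossless tokenizer sketch: it splits markup into raw
/// text and tag slices without interpreting or re-encoding anything,
/// so concatenating all tokens reproduces the input exactly.
/// (Illustrative only; real-world parsing also needs to handle
/// comments, CDATA, and quoted attribute values containing '>'.)
fn tokenize(input: &str) -> Vec<&str> {
    let mut tokens = Vec::new();
    let mut rest = input;
    while !rest.is_empty() {
        if rest.starts_with('<') {
            // A tag token runs up to and including the next '>'.
            match rest.find('>') {
                Some(i) => {
                    tokens.push(&rest[..=i]);
                    rest = &rest[i + 1..];
                }
                None => {
                    // Unterminated tag: keep it verbatim anyway.
                    tokens.push(rest);
                    break;
                }
            }
        } else {
            // A text token runs up to the next '<' (or the end).
            let end = rest.find('<').unwrap_or(rest.len());
            tokens.push(&rest[..end]);
            rest = &rest[end..];
        }
    }
    tokens
}

fn main() {
    let doc = "<p>Hello <b>world</b> &amp; friends</p>";
    let tokens = tokenize(doc);
    // Round trip: re-joining the tokens changes nothing.
    assert_eq!(tokens.concat(), doc);
    println!("{:?}", tokens);
}
```

Because nothing is ever decoded and re-encoded, a tool built on this approach can rewrite one token and leave every other byte of the document untouched, which is exactly what keeps diffs minimal.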

It was—and still is—a niche tool for a specific problem. I had no idea it would later become crucial for handling the messy reality of HTML on the web.

Universe 3: The Email Client

In early 2018, I started working on ELMA (ELectronic Mail Agent)—a terminal-based email client. I knew most of the available options, and there are plenty, but I was not really happy with any of them: I had been a Mutt user on and off for years, but the hoops I had to jump through to set it up correctly every time I dug into it again were a pain. What I wanted was a stable, fast, and easy-to-use email application that rendered mail nicely and consistently and worked strictly client-side.

Also, I no longer really subscribed to the “emails have to be text/plain and wrap at 72 characters” camp. With the multitude of devices that people use to read and write mail nowadays, HTML was the pragmatic solution, and none of the terminal clients had a good implementation. Tools like lynx and w3m could convert HTML to text, but the output was clunky and poorly integrated with the MUA. And, as far as I know, none of them had a nice interface for producing HTML mail.

I set out to change this, and that evolution is still very much in progress. :)

To represent mail content, I needed a document format that could capture the structure of HTML emails—paragraphs, headers, lists, basic formatting—without all the complexity. Something designed specifically for terminal rendering. Something that could serve as a clean intermediate format between messy HTML and readable terminal output.

FTML: Born from necessity

That need led to FTML (Formatted Text Markup Language)—a document format designed to be the absolute minimum needed for rich text. Not Markdown with its multiple flavors and ambiguities. Not HTML with its complexity. Just a strict subset of HTML5 containing only the essentials:

  • Three levels of headers (<h1>, <h2>, <h3>)
  • Text paragraphs (<p>)
  • Lists, blockquotes, code blocks
  • Basic inline styles: bold, italic, underline, highlight, strike, code, links

The beauty of FTML is in what it doesn’t have. No tables. No images. No colors or fonts. No classes or IDs. Just structure and basic formatting. It’s designed to be easily diffable, unambiguous, and processable by simple tools.
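To make this concrete, here is a hypothetical sketch of what an FTML document could look like, using only the elements listed above (the exact tag set shown and the way a document is wrapped are assumptions on my part, not FTML’s actual specification):

```html
<h1>Trip notes</h1>
<p>Things to remember for the <b>Berlin</b> trip:</p>
<ul>
  <li>Book the <a href="https://example.com/train">train tickets</a></li>
  <li>Pack the <i>good</i> camera</li>
</ul>
<blockquote>
  <p>The best time to book was yesterday.</p>
</blockquote>
```

Since every element is also valid HTML5, any FTML document can be fed to an ordinary browser unchanged; the restriction only goes the other way.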

ELMA would convert HTML emails to FTML, then render FTML to the terminal. When replying to a mail, the integrated editor would understand FTML and allow the user to directly edit the content using basic formatting. The finished mail would then be sent as HTML again. Clean separation of concerns. But to make this work, I needed to solve the HTML import problem—converting messy, real-world HTML into clean FTML.

This is where the parser work made its first appearance in the mail client. I remembered gockl and its streaming parser philosophy. Creating a simple, forgiving HTML-to-FTML converter turned out to be easier than expected, and soon I had a working pipeline: HTML email → FTML → ANSI terminal rendering.

Two little universes—XML parsing and email reading—had finally bumped into one another. But the wiki universe was still doing its own thing, completely unaware.

The Convergence

For twenty years, these three projects lived in separate repositories, separate contexts, separate problem domains.

In 2025, I finally decided to rewrite Piki in Rust and built what I’d been chasing for two decades: a personal wiki that keeps everything in plain files on your filesystem. It has both an extensible CLI (with custom aliases, like the Go version, and plugins, like the C# version) and a GUI built on FLTK featuring a custom rich-text editor (which was initially based on a rewrite of the C++ version from 2006). Local-first, git-friendly, no cloud services, no lock-in. The wiki universe was finally in full swing.

But the CLI’s view command needed to render documents to the terminal with proper formatting. I could have built yet another Markdown renderer, but I remembered FTML. The mail client work nudged its way back in.

So I built tdoc as a Rust rewrite of my Go FTML library—a document Swiss Army knife for handling FTML, Markdown, and HTML. It could parse them, convert between formats, and render to the terminal with nice ANSI formatting.

For the terminal rendering, I built a pager that properly handled text wrapping, ANSI escape codes for bold and italic, headers with visual hierarchy, lists, blockquotes, and code blocks. Perfect for Piki’s needs.
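The wrapping part can be sketched with a simple greedy algorithm. This is a simplified, std-only illustration, not tdoc’s actual code; it counts characters, whereas a real pager should measure display cells (e.g. with the unicode-width crate) so CJK text and emoji wrap correctly:

```rust
/// Greedy word wrapping to a maximum column width. Words are packed
/// onto a line until adding the next one would exceed `width`; an
/// overlong single word is left unbroken on its own line.
fn wrap(text: &str, width: usize) -> Vec<String> {
    let mut lines = Vec::new();
    let mut line = String::new();
    for word in text.split_whitespace() {
        let needed = if line.is_empty() {
            word.chars().count()
        } else {
            line.chars().count() + 1 + word.chars().count()
        };
        if needed > width && !line.is_empty() {
            // Current line is full: emit it and start a fresh one.
            lines.push(std::mem::take(&mut line));
        }
        if !line.is_empty() {
            line.push(' ');
        }
        line.push_str(word);
    }
    if !line.is_empty() {
        lines.push(line);
    }
    lines
}

fn main() {
    for l in wrap("the quick brown fox jumps over the lazy dog", 15) {
        println!("{}", l);
    }
}
```

ANSI escape codes complicate this in practice: the escape bytes occupy no columns on screen, so width measurement has to skip them, which is one reason rendering and wrapping are worth keeping in the same component.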

Then, while viewing documents in the pager, I thought: “Wouldn’t it be nice to open links I might have stored in my notes with the browser?” So I added clickable hyperlinks using OSC 8 terminal escape sequences. Modern terminals like iTerm2, GNOME Terminal, and Windows Terminal support them—you can literally click on links in the terminal and they open.
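The OSC 8 sequence itself is tiny. A minimal Rust sketch (a hypothetical helper, not tdoc’s actual code) wraps link text in the escapes; the format is `ESC ] 8 ; ; URL ST text ESC ] 8 ; ; ST`, where ST is the string terminator `ESC \`:

```rust
/// Wrap `text` in an OSC 8 hyperlink escape sequence so that
/// supporting terminals (iTerm2, GNOME Terminal, Windows Terminal,
/// among others) render it as a clickable link.
fn osc8_link(url: &str, text: &str) -> String {
    format!("\x1b]8;;{}\x1b\\{}\x1b]8;;\x1b\\", url, text)
}

fn main() {
    println!(
        "Read {} for details.",
        osc8_link("https://example.com", "the docs")
    );
}
```

Terminals that do not understand OSC 8 simply ignore the escape sequences and print the plain link text, so the output degrades gracefully.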

But in order to count as a full-blown rewrite of the Go implementation, tdoc needed to handle HTML import, too. And that’s when the parser work rejoined: I ported gockl’s streaming parser philosophy to Rust. Real-world HTML is a mess of missing closing tags, broken nesting, and invalid attributes. But gockl-style parsing handles it gracefully, extracting the structure we need without choking on malformed markup.

While testing the HTML import I got tired of downloading pages by hand, so I taught tdoc to fetch HTTP(S) URLs and render them directly. Suddenly it could parse messy HTML, turn it into FTML, and show the result in the pager—plus open links in a regular browser thanks to the OSC 8 work.

Three long-running universes had finally merged into a single tool.

The Unplanned Consequence

Then I wondered: instead of handing links off, why not let the pager follow them itself? The code already knew how to fetch and render HTML. Now, when a document came from the web, I could tab to a link, hit Enter (or simply click it with the mouse), and tdoc would fetch the target, convert it to FTML, and display it.

I had built a web browser.

Not on purpose. Not by design. But by combining:

  • A wiki system that needed terminal document viewing
  • A streaming parser that handles messy markup
  • A document format designed for terminal rendering

It felt like three long-running side projects had quietly lined up. A minimal web browser that fetches pages, renders them readably, and follows links. Text-only, no JavaScript, no CSS, no images. But that’s… that’s a web browser.

Want to read the news? tdoc https://lite.cnn.com and navigate around by following links. Technical documentation? Blog posts? Any text-heavy website works surprisingly well.

Browsing the news with tdoc

What I learned

After 25 years of reimplementing essentially the same thing across languages and platforms, a few lessons crystallized:

Constraints are liberating. A text-only browser sounds limited, but it’s perfect for documentation, technical articles, and long-form reading. No tracking scripts, no ads, no autoplaying videos. Just content. Sometimes less really is more.

Simplicity compounds. FTML’s minimal feature set meant I could implement a complete parser, renderer, and converter in a weekend. If I’d tried to support full HTML or all of Markdown’s variants, I’d still be working on edge cases, table support, CSS basics, and whatnot.

Solve your own problems. I built these tools because I wanted them. Piki manages my personal notes. tdoc helps me convert my documents. The “browser” feature wasn’t planned—it emerged naturally from solving the smaller problems in front of me.

Why Rust?

I really do not want to overstate Rust’s role here and say that this would have been impossible without it. But what I can say is that Rust made it fun. The type system, ownership model, the crate ecosystem, and fantastic tooling make it easy to compose libraries like the following without worrying about memory leaks, dangling pointers, or platform-specific quirks:

  • crossterm: Cross-platform terminal manipulation made simple
  • insta: Snapshot testing for HTML input, terminal output, even the GUI renderer
  • reqwest: HTTP client with TLS support, following redirects automatically
  • unicode-width: Correctly handle wide characters for proper text wrapping

Compare that to my earlier attempts:

  • The C++ FLTK version needed libcurl configured correctly, manual HTTP header parsing, careful memory management
  • The Objective-C version was macOS-only and required tedious setup of Xcode projects and linking against system libraries
  • The C# version worked but would always break on dependency or framework updates
  • The Go version had snapshot tests and even fuzz testing, a nice HTTP client, and good Unicode support, but saw no further development because I could not find a nice way to integrate it with native GUI toolkits

With Rust and its ecosystem, I spent my time building features instead of fighting the platform. Things usually just work.

Try it yourself

If you want to experiment with accidental web browsing:

# Install tdoc
cargo install tdoc

# Browse some website
tdoc https://roblillack.net/i-accidentally-built-a-web-browser

The pager supports vim-style navigation (j/k for scrolling, Enter to follow links), and renders everything as clean, readable text.

It’s probably not going to replace Firefox for you. But for reading technical documentation, RFCs, or text-heavy websites? It’s surprisingly capable.

Twenty-five years is a long time to accidentally build a web browser. But that’s what happens when you let three side projects run in parallel for long enough—eventually, they collide. I got a personal wiki system, a document format, a streaming parser, and a web browser I never meant to build.


Most of the projects mentioned are open source and available on GitHub: piki, tdoc, ftml, gockl. ELMA: coming soon.