Publishing to the Permanent Web
When I hit publish on this blog, something unusual happens. The words you’re reading don’t live on a server somewhere—at least, not in the traditional sense. They exist as content-addressed data scattered across a distributed network, pinned to nodes around the world, accessible through pathways that no single entity controls. This is IPFS, and it’s how I publish everything.
The honest answer is that I didn’t need to do this. A VPS running nginx works fine for a personal blog. Nobody’s trying to take down my posts about amateur radio and Solidity contracts. But there’s something philosophically satisfying about publishing to infrastructure that doesn’t require permission. My content exists because the math says it does, not because I’m paying someone to keep a server running. The same post you’re reading at benwoodall.com is also available at benwoodall.eth.limo—two completely different infrastructures, same content, same hash. That’s the magic of content addressing. The data is the address.
The Pipeline
When I’m ready to publish, I run a deploy script. Here’s what happens:
┌──────────┐      ┌──────────┐      ┌───────────────┐
│  Jekyll  │─────▶│  Pinata  │─────▶│  VPS (nginx)  │
│  build   │      │   pin    │      │   IPFS node   │
└──────────┘      └──────────┘      └───────────────┘
                       │                    │
                       ▼                    ▼
                 ┌──────────┐        ┌─────────────┐
                 │   IPFS   │        │ benwoodall  │
                 │  network │        │    .com     │
                 └──────────┘        └─────────────┘
                       │
                       ▼
                 ┌──────────┐
                 │   ENS    │─────▶ benwoodall.eth.limo
                 └──────────┘
Jekyll builds static HTML, which gets uploaded to Pinata. Pinata returns a CID—a cryptographic fingerprint of everything in that folder. Same content always produces the same CID. Change a single character and you get an entirely different hash. That CID then gets pinned to my own IPFS node on a VPS for redundancy, and nginx gets updated to proxy requests to it. When you visit benwoodall.com, nginx fetches from the local IPFS node and serves it like any normal website. You’d never know the difference unless you looked under the hood.
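The "same content, same CID" property is the whole trick, and it's easy to see with an ordinary hash function. Real CIDs involve chunking, multihash, and multibase encoding on top of this, so the sketch below only illustrates the determinism, not IPFS's actual CID construction:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Illustrative content address. Real IPFS CIDs layer chunking,
    multihash, and multibase encoding on top of a hash like this."""
    return hashlib.sha256(content).hexdigest()

page = b"<html><body>hello, permanent web</body></html>"
same = b"<html><body>hello, permanent web</body></html>"
edited = b"<html><body>Hello, permanent web</body></html>"  # one character changed

assert fingerprint(page) == fingerprint(same)    # identical bytes, identical address
assert fingerprint(page) != fingerprint(edited)  # one byte flips the whole hash
```

Because the address is derived from the bytes, anyone holding the CID can verify the content they received is exactly what was published.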
Here’s something I learned the hard way: deploying to IPFS is immutable, but your nginx config isn’t. If something goes wrong mid-deploy, you can end up with your domain pointing at a broken CID. So the script captures the current working CID before doing anything else. If any step fails, it rolls back automatically. The broken content still exists on IPFS—that’s the immutability—but nobody sees it because the domain points elsewhere. One of those “obvious in retrospect” things.
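The capture-then-rollback pattern is simple enough to sketch. The three step functions below are hypothetical stand-ins for the real script's commands (which would shell out to the pinning API and an nginx reload); injecting them keeps the shape of the logic visible without any infrastructure:

```python
def deploy(get_current_cid, pin_new_cid, point_nginx_at):
    """Capture-then-rollback deploy. Step functions are hypothetical
    placeholders for the real upload/pin/nginx-reload commands."""
    last_good = get_current_cid()      # snapshot the working CID before anything else
    try:
        new_cid = pin_new_cid()        # build + upload + pin
        point_nginx_at(new_cid)        # flip the proxy to the new content
        return new_cid
    except Exception:
        point_nginx_at(last_good)      # the domain goes back to the last good CID;
        raise                          # the broken CID stays on IPFS, just unreferenced

# Simulate a failed pin: the domain must end up where it started.
def failing_pin():
    raise RuntimeError("pin failed")

state = {"cid": "bafy-old"}  # placeholder CID
try:
    deploy(lambda: state["cid"], failing_pin, lambda cid: state.update(cid=cid))
except RuntimeError:
    pass
assert state["cid"] == "bafy-old"  # rolled back: visitors never saw the breakage
```

The key ordering detail is that the snapshot happens before any mutating step, so there is always a known-good target to fall back to.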
Making It Fast
IPFS has a reputation for being slow, and it can be. The solution is boring but effective: run your own node and proxy through it with nginx. Adding a swarm connect to Pinata’s node right after upload—basically saying “hey, connect directly to this node that has my content”—took propagation from minutes to seconds.
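The post-upload step looks roughly like the commands below. The peer multiaddr is a placeholder—the pinning service publishes its real addresses in its own docs—so this is a fragment showing the shape, not a copy-paste recipe:

```shell
# Dial the pinning service's node directly so the fresh content doesn't
# have to be discovered through the DHT. <PINATA-PEER-MULTIADDR> is a
# placeholder for the multiaddr the service publishes.
ipfs swarm connect <PINATA-PEER-MULTIADDR>

# Then pin the new build on the local node for redundancy.
ipfs pin add <NEW-CID>
```

Once the direct connection exists, the local node fetches blocks straight from the peer that already has them instead of waiting on peer discovery—hence minutes down to seconds.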
The piece that ties it together is ENS. The benwoodall.eth name carries a contenthash record pointing at the IPFS CID. If my VPS dies, the content is still pinned on Pinata and reachable through ENS gateways. If Pinata dies, my VPS has everything pinned. The content persists independent of any single point of failure.
Is it worth it? For practical purposes, probably overkill for a blog. But I write about digital sovereignty and censorship resistance; it would feel hollow to host that on infrastructure that could disappear with a terms-of-service violation. Plus, every time I deploy, I get a little hit of satisfaction watching the CID propagate. My words, addressed by their content, retrievable by anyone who knows the hash. That’s pretty cool.