Summary
- Use OpenZIM/Kiwix to archive and browse entire websites offline via .zim files.
- Use Zimit (web or Docker) to ‘print’ sites into ZIMs; the Docker version runs locally, is faster, and can handle multiple jobs.
- OpenZIM offers ready ZIMs (Wikipedia, Gutenberg) plus tools to build offline copies of any site.
When trying to save a website for offline use, your first instinct might be to hit the ‘save webpage’ button in your browser. That works great, but only for single web pages. To save an entire website that way, you’d have to open every single webpage, save each one manually, and then (whenever you want to access the site) dig through individual HTML files and open them one by one. There’s a better way to do this.
1. Meet the OpenZIM project
OpenZIM is an open-source project designed to archive any website so you can surf it offline. To that end, the developers created a new file format called ‘.zim,’ which is a highly compressed version of a website. You can read these ZIM archives offline using apps like Kiwix, which works like a web browser, except it only browses these offline copies of websites rather than the live web.
The project has a library where you can find and download pre-built ZIM archives of popular wikis and knowledge portals. For example, you can download Wikipedia, keep it on your computer, and browse it without the internet using Kiwix. The entire English Wikipedia ZIM weighs around 100GB. There’s a truncated ‘mini’ Wikipedia too, which is around 11GB. You can even download and browse the whole Project Gutenberg collection from the same library.
There are smaller ZIMs available too. For example, I downloaded the Doom Wiki website as a ZIM file. Then I installed the Kiwix reader, and loaded the Doom Wiki ZIM into it. It was just like browsing the Doom Wiki in a normal browser, except there was no lag or pinwheels.
The best part is that you’re not limited to the Kiwix library. The OpenZIM project provides tools that let you create ZIM archives out of any website URL. The community calls it ‘printing’ a website. I’ll show you two ways to do that.
2. The easy way to ‘print’ a website
The easiest way to print a website and download its ‘.zim’ version is to use the Kiwix portal. It’s a web app called Zimit. It asks for the target website’s URL and your email address (Zimit will send a download link to this address). Once you’ve supplied both, you can start the job. You can close the tab at this point if you want.
Zimit will package the target website into a ZIM file (you can see the progress bar fill up in real-time) and send you a download link via email. You can then download the ZIM file and open it with Kiwix. Kiwix will let you browse it just like a regular website, except completely offline.
While this method is the easiest, it has some caveats. You can only run one job at a time, and the queue can take up to 24 hours to deliver your ZIM file, so it’s pretty slow. Then you have to download the ZIM itself, which can take a long time too, depending on the file size.
If you don’t want to wait, or if you want to ‘print’ multiple websites at a time, it’s better to run this entire process on your machine. It’ll be much, much faster to write the file directly onto your storage, and you won’t have to download the ZIM from the internet afterward.
3. The better way to ‘print’ a website with ZIMs
You can run the Zimit app on your device using Docker. Docker is an open-source platform that lets you run apps in ‘containerized’ environments locally. The Docker community provides ‘images’ for specific apps, which makes it quick and painless to ‘containerize’ those apps and run them on your machine. Since Zimit has an official Docker image, launching this app and printing ZIMs using it is super easy. Let me show you how.
First, you’ll need to install Docker on your device. On Windows, you can install Docker Desktop from the Microsoft Store or grab the installer package from the official website. Install it on your device like you would any other program.
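If you’re on Linux, Docker Engine is usually available straight from your distribution’s repositories. On Debian or Ubuntu, for instance, this one-liner should do it (package names vary by distro, so check yours first):
sudo apt install docker.io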
Now let’s open the terminal and confirm that Docker is running properly. We’ll run a test container to make sure.
docker run hello-world
The command to create a ZIM file from a URL looks like this. I’m trying to ‘print’ the Legiblenews website, so I’ve added its URL after the ‘--seeds’ flag and given the archive a name with the ‘--name’ flag.
docker run -v $PWD:/output ghcr.io/openzim/zimit zimit --seeds https://legiblenews.com --name tinynews
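A quick note on that ‘-v $PWD:/output’ part: it mounts your current directory into the container at /output, which is where Zimit writes the finished archive. The ‘$PWD’ shorthand works in PowerShell and Unix-style shells; if you’re in the classic Windows Command Prompt instead, swap it for ‘%cd%’:
docker run -v %cd%:/output ghcr.io/openzim/zimit zimit --seeds https://legiblenews.com --name tinynews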
Running this command will create an archive out of every single Legiblenews webpage. However, to save space, you can limit the archival to a specific number of pages too. All you need to do is add the ‘--pageLimit’ and ‘--depth’ flags at the end of the command. The ‘--depth’ flag limits how many link levels past the seed URL the crawler goes.
docker run -v $PWD:/output ghcr.io/openzim/zimit zimit --seeds https://legiblenews.com --name tinynews --pageLimit 20 --depth 1
That’s it. The ZIM file will be saved in whatever directory you ran the command in. To find out where the ZIM file ended up, run the following command and open that location in File Explorer.
pwd
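Zimit names the file after the ‘--name’ value, usually with a date stamp appended (the exact naming pattern depends on the Zimit version), so a quick listing in that directory will turn it up:
ls *.zim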
You can now read this ZIM with Kiwix. Download the Kiwix package from the official website, extract it somewhere, and click the ‘kiwix-desktop’ application launcher. When the Kiwix browser launches, click the folder icon to load the .zim file. The website should load instantly in the reader.
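If you’d rather read the archive in your regular browser instead, Kiwix also makes a kiwix-serve tool (shipped in the separate kiwix-tools package) that serves a ZIM over HTTP on your machine. The filename below is just illustrative; substitute whatever your ZIM ended up being called:
kiwix-serve --port 8080 tinynews_2024-05.zim
Then point any browser at http://localhost:8080 and browse away.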
You can archive and hoard any website and create a personal library of knowledge you care about. The possibilities are as limitless as the internet itself. Even when you don’t have the internet, you’ll still be able to browse a snapshot of your favorite website.