Over the last few days, I've been working on setting up a new computer for my dad. As he's a mechanic, one of the things he'll be using it for is to look up information on different makes and models of cars and trucks. He's been using Alldata for some time now, but I tipped him off to charm.li and he was interested.
Now, I could've just given him the URL and stopped there, but since the data itself has kindly been provided by the creator of the site, I wanted to see if I could self-host it. While looking into how I could make this work, I came across a thread in r/mechanic about the site recently being down for an extended period. These things happen, but with the ability to obtain the data, I figured I'd take a shot at running this myself.
I currently have my version hosted at https://manuals.haroldsauto.com/.
Before diving into the technical setup, a note: I'm doing all of this on Ubuntu Server 24.04 LTS. This will work on other distributions, but may need to be adapted accordingly. With this in mind, let's understand what we're dealing with:
charm.li is a Node.js application that serves content from a Lightning Memory-Mapped Database (LMDB). The database itself is packaged in a squashfs file, a compressed, read-only filesystem format that's commonly used in Linux distributions.
There are essentially three components needed to make this work properly:
While there aren't any official setup instructions that I'm aware of, someone took a crack at this and added it to GitHub. The instructions are fairly straightforward:
However, making this run in Docker and persistent across reboots requires additional effort.
One thing to understand before attempting this is the challenge of obtaining the data. In total, it's slightly over 700GB, which can be prohibitive without a dedicated storage medium to host it. For my needs, I'm hosting it on my NAS, so some things going forward will need to be adapted to your own environment should you decide to proceed.
I posted the link earlier in the thread, but in case it was missed, here it is again - https://charm.li/operation-charm.torrent
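Once the download finishes, it's worth sanity-checking that what you ended up with really is a squashfs image before trying to mount it. A squashfs v4 file begins with the magic bytes `hsqs`, so a minimal check looks like this (the path is an assumption matching my layout described below; adjust to yours):

```shell
# Sanity-check the downloaded image before mounting: a squashfs v4 file
# begins with the magic bytes "hsqs". The path here is the one used in
# this article; adjust to your environment.
is_squashfs() {
    [ -f "$1" ] && [ "$(head -c 4 "$1")" = "hsqs" ]
}

if is_squashfs /mnt/backup/operation-charm/lmdb-pages.sqsh; then
    echo "image looks valid"
else
    echo "image missing or not squashfs"
fi
```

If the check fails on a file you know finished downloading, re-verify the torrent before going any further.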
As I mentioned before, my setup involves hosting the data on my NAS, with the charm.li files stored at /mnt/backup/operation-charm. Adjust the paths to match your setup. Setting up this mount is outside the scope of this article, but here is my /etc/fstab entry for reference:
```
# NAS Directory Mount
192.168.1.6:/volume1/Files/ /mnt/backup nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0
```
First, we need to ensure the squashfs file gets properly mounted:
```shell
# Create the mount point if it doesn't exist
sudo mkdir -p /mnt/backup/operation-charm/lmdb-pages

# Mount the squashfs file
sudo mount -o loop -t squashfs /mnt/backup/operation-charm/lmdb-pages.sqsh /mnt/backup/operation-charm/lmdb-pages
```
This mounts the compressed data, but it's important to note that this mount won't survive a system restart. We'll get into this later.
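The fix for that, which we'll wire into systemd later, boils down to "mount only if not already mounted." The core logic can be sketched as a small shell function; the paths are the ones from this article, and the actual mount line is commented out so the sketch is safe to dry-run:

```shell
# Idempotent mount check: only attempt the loop mount when the target
# directory is not already a mountpoint. This mirrors the logic the
# systemd service uses; the real mount line is commented out here.
mount_if_needed() {
    target="$1"
    sqsh="$2"
    if mountpoint -q "$target"; then
        echo "already mounted: $target"
    else
        echo "needs mounting: $target"
        # sudo mount -o loop -t squashfs "$sqsh" "$target"
    fi
}

mount_if_needed /mnt/backup/operation-charm/lmdb-pages \
                /mnt/backup/operation-charm/lmdb-pages.sqsh
```

Running the mount unconditionally instead would fail (or stack mounts) on a second invocation, which is why the mountpoint check matters.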
The GitHub post I referenced earlier gets this going with Node.js version 18, which has now officially reached end of life. When I first got this working, I ran it with Node.js 18 and it worked fine, but it doesn't make sense to do that now when much newer LTS versions are available.
I decided to use Node.js 22, which is an LTS version supported until April 2027. Assuming you already have a docker-compose.yml file (create one if you don't), add the following to it:
```yaml
version: '3'

services:
  charm-li:
    image: node:22
    container_name: charm-li
    working_dir: /app
    command: >
      sh -c "npm install &&
             sed -i 's/127.0.0.1/0.0.0.0/g' server.js &&
             npm start / 8080"
    ports:
      - "28080:8080"
    volumes:
      - /mnt/backup/operation-charm:/app
    restart: unless-stopped
```
This configuration does several important things:

- Runs the official node:22 image with the charm.li data directory mounted at /app.
- Installs the app's dependencies with npm install at container startup.
- Modifies the server.js file to listen on all interfaces (0.0.0.0) instead of just localhost.
- Maps port 8080 inside the container to 28080 on the host, and restarts the container automatically unless it's explicitly stopped.

As I mentioned earlier, if you simply mount the squashfs file and reboot, your mount disappears and charm.li stops working. There are a few different ways you can make this survive a reboot, but what I ended up doing was creating a systemd service that ensures the mount persists:
```shell
# Create a systemd service file
sudo nano /etc/systemd/system/mount-charm.service
```
Add this service definition:
```ini
[Unit]
Description=Mount charm.li squashfs file
After=network.target remote-fs.target
RequiresMountsFor=/mnt/backup

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/bash -c 'if ! mountpoint -q /mnt/backup/operation-charm/lmdb-pages; then mount -o loop -t squashfs /mnt/backup/operation-charm/lmdb-pages.sqsh /mnt/backup/operation-charm/lmdb-pages; fi'
ExecStop=/bin/bash -c 'if mountpoint -q /mnt/backup/operation-charm/lmdb-pages; then umount /mnt/backup/operation-charm/lmdb-pages; fi'

[Install]
WantedBy=multi-user.target
```
There's a lot going on here, so let me explain it a bit:
- The After= and RequiresMountsFor= directives make sure the NFS share from the NAS is mounted before the service tries to loop-mount the squashfs file inside it.
- The service is Type=oneshot with RemainAfterExit, which works well for mount operations: the command runs once, and systemd treats the unit as active afterward.
- ExecStart only mounts if the target isn't already a mountpoint, and ExecStop unmounts cleanly on shutdown.

Enable and start the service:
```shell
sudo systemctl daemon-reload
sudo systemctl enable mount-charm.service
sudo systemctl start mount-charm.service
```
With the persistent mount ready to go, start the Docker container:
```shell
cd /path/to/compose-directory   # the directory containing docker-compose.yml
docker-compose up -d
```
After a moment, charm.li will be available at http://your-server-ip:28080.
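Once the container is up, a quick way to confirm the app is actually answering is to fetch the front page and look at the HTTP status code. A small helper like this works (the localhost URL is an assumption; substitute your server's address and port):

```shell
# Smoke-test the charm.li instance: fetch the front page and report the
# HTTP status code. "200" means it's serving; "000" means no connection.
check_charm() {
    code=$(curl -s -o /dev/null --max-time 5 -w '%{http_code}' "$1" || true)
    echo "$1 -> HTTP $code"
}

check_charm http://localhost:28080/
```

If you get 000 here, check `docker logs charm-li` and make sure the squashfs mount is in place before digging further.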
While the setup described above works great for local network access, I wanted to make this available from anywhere without opening ports on my home network to accomplish it. Cloudflare Tunnel provides an elegant solution to this problem.
Setting up Cloudflare Tunnel is well outside the scope of this article, but if this is of interest to you, the following is a really great guide to getting it going:
I think it's worth mentioning that during my testing, I discovered that Node.js version compatibility can be tricky. charm.li relies on node-lmdb, a native module that needs to be compiled specifically for your Node.js version.
While I originally tested Node.js 18 and got it to work reliably, I wanted to move to the newer Node.js 22. However, after changing the version number and rebuilding the container, I encountered this error:
```
Error: The module '/app/node_modules/node-lmdb/build/Release/node-lmdb.node'
was compiled against a different Node.js version using
NODE_MODULE_VERSION 108. This version of Node.js requires
NODE_MODULE_VERSION 127.
```
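The NODE_MODULE_VERSION numbers in that error map directly to Node.js release lines, which makes the message easy to decode. A small lookup, using the official ABI numbers for recent LTS lines, shows exactly what happened:

```shell
# Translate NODE_MODULE_VERSION (the native-module ABI number) into the
# Node.js release line it belongs to. 108 is Node 18, 115 is Node 20,
# and 127 is Node 22, so the error above means: node-lmdb was compiled
# on Node 18, but the container is now running Node 22.
abi_to_node() {
    case "$1" in
        108) echo "Node.js 18" ;;
        115) echo "Node.js 20" ;;
        127) echo "Node.js 22" ;;
        *)   echo "unknown ABI: $1" ;;
    esac
}

abi_to_node 108   # what node-lmdb was compiled against
abi_to_node 127   # what the node:22 container runs
```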
To solve this, you need to rebuild the native modules. This can be done with a temporary modification to the 'command' in the docker-compose file:
```yaml
command: >
  sh -c "apt-get update && apt-get install -y python3 make g++ &&
         npm install &&
         npm rebuild node-lmdb &&
         sed -i 's/127.0.0.1/0.0.0.0/g' server.js &&
         npm start / 8080"
```
This adds the necessary build tools and rebuilds node-lmdb for your specific Node.js version. Once it's rebuilt and working, revert to the earlier command:
```yaml
command: >
  sh -c "npm install &&
         sed -i 's/127.0.0.1/0.0.0.0/g' server.js &&
         npm start / 8080"
```
The same rebuild process can be used any time you upgrade to a new Node.js version.
While this is by no means the only way to get this going, I found this approach has several advantages:

- The host system handles the squashfs mounting (where it's most reliable), while Docker handles the application runtime.
- The systemd service ensures the mount remains available even after system restarts.

This was a quick and dirty afternoon project, and I learned a lot in doing it. Running it in Docker Compose seemed like a complex task at first glance, but breaking it down into manageable steps made it accessible.
What started as a simple idea to help my dad access repair manuals grew into an interesting challenge. Probably the most valuable takeaway from this project is how to handle applications with specialized storage requirements in Docker. While Docker generally leans toward complete isolation, there are legitimate cases where the host system needs to handle certain tasks (like mounting specialized filesystems) while the container focuses on application execution.
If you're considering implementing this for yourself, remember that the ~700GB data requirement is substantial, but the payoff is worth it for anyone who regularly needs access to automotive repair information. The setup process takes time, but the result is robust and requires minimal maintenance once configured.