Creating systemd service files for docker compose

I’ve recently been moving a few of my services from bare metal installations over to docker containers. Normally I use ansible to deploy everything in the right place (and you should be doing that too), but I have a “playground” to try out stuff before promoting it to “properly deployed on a different VM with ansible”.

The following script came in handy to simplify the process of creating systemd service files for the docker services.

It assumes that you are in a directory containing a docker-compose.yml, and that the directory name is used as the service name, e.g. if you are in /opt/watchtower/ and there is a docker-compose.yml there, the service name will be watchtower.
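Such a script essentially just writes a unit file and enables it. A minimal sketch of the idea (assuming it is run as root and that the docker compose v2 plugin is available; swap in docker-compose if that is what you use) could look like this:

#!/bin/bash
# Sketch: create and enable a systemd unit for the docker compose project
# in the current directory. The unit is named after the directory.
set -euo pipefail

SERVICE_NAME="$(basename "$PWD")"
UNIT_FILE="/etc/systemd/system/${SERVICE_NAME}.service"

if [ ! -f docker-compose.yml ]; then
    echo "No docker-compose.yml in $PWD" >&2
    exit 1
fi

cat > "$UNIT_FILE" <<EOF
[Unit]
Description=${SERVICE_NAME} (docker compose)
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=$PWD
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now "${SERVICE_NAME}.service"

Running it in /opt/watchtower/ would then create and start watchtower.service.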

 

Using paperless-ngx to manage your paper documents digitally.

Paperless is a document management system that helps you manage digital scans of your documents. I’ve been using it for a while, but as with many projects, the developer(s) lost the motivation/time to keep it up to date. A fork was made (paperless-ng), which in time also died off. Now a few developers have gotten together, worked through the backlog of issues, and forked the next generation as “paperless-ngx”.

There is a Changelog for the specific changes.

Short version: I have my scanner set up to store scans on a network share as PDFs, and paperless monitors that share for new documents. If it finds a new file, it performs OCR on it (saving the PDF as well as the text) and runs user-defined rules against it (e.g. detecting the date of the document, the correspondent, or what kind of document it is, …). It makes managing physical documents very easy: I just pop them in the scanner, select the destination share, and that’s it, everything else happens automatically. So if I need to find the documents for my taxes, I can do it digitally and don’t have to go through folders of physical paper.

It can also ingest images or office documents: just toss them into the directory the application monitors for ingesting documents (or use the “Upload” button in the application).

I’m not a big fan of the default dark theme, but a normal light theme is still available.

All in all, it’s pretty nifty, so have a look if this is something that you might find useful.

Visualizing Exim logs with Graylog

I spent some time over the last few days tweaking my mail server settings, since there has been an annoying rise in spam lately. Nothing special, mostly spring-cleaning of the blocklists and SpamAssassin settings. But as I was going over my config, I realized I didn’t have any way to measure “success”. I don’t really know which blocklists work well for me and which don’t.

I use Graylog to collect logs from my systems and applications. But as far as my Exim logs were concerned, my setup was pretty barebones (i.e. not parsing any fields, just dumping them as they were into Graylog). So I spent some time setting up proper extractors for my Exim logs to store everything useful in fields. A lot of the Exim log lines use a straightforward key=value structure, making them easy to parse.
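To give an idea of what that looks like, here is a made-up arrival line (“<=”) and a few regular-expression extractors of the kind used here, each with a single capture group (the field names and exact patterns are illustrative, not a verbatim copy of the content pack):

2022-04-02 13:37:00 1naBcD-000123-Ef <= sender@example.org H=mail.example.org [198.51.100.23] P=esmtps X=TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256 CV=no S=4213

H=\S+ \[([0-9a-fA-F.:]+)\]   -> field remote_ip
\bS=(\d+)                    -> field message_size
\bX=(\S+)                    -> field tls_cipher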

(spoiler: I bundled up everything here as a Graylog Content pack in case anyone wants to use it. Link at the bottom of the post)

Once the logs were properly parsed, I moved on to dashboards to visualize the data. I started out with a visualization of the score SpamAssassin assigns to incoming emails (negative is good, positive is bad; it’s been years since I’ve seen anything above a score of 5.0 that wasn’t spam). This gives me an indication of the quality of the mail making it through the filters to my mailbox.
Then I added a little overview of incoming and outgoing mail, and how much is discarded by SPF and DNSBL checks.

This dashboard is the most interesting one when it comes to deciding which DNSBL lists are useful and which aren’t. It shows which lists are catching spambots, globally as well as over time.
All my dashboards also have a widget with the relevant logs underneath, for easy access to the raw log entries.

Since I had the data anyway, I also created a dashboard to show transport encryption information. About 60% of mail servers seem to support transport encryption, which is a lot lower than I would have expected (since it is easy to configure). I didn’t dig deeper into this, but I wouldn’t be surprised if the 40% sending email over plain unencrypted connections are mostly spammers running very simple bots.

 

This dashboard is technically not about spam either; it shows bots trying to brute-force user accounts on my mail server in order to abuse them for sending more spam. Fairly aggressive fail2ban settings take care of that, though.
It’s interesting to see that the botnets aren’t used solely for sending spam; they are also used to try to compromise mail server accounts to increase the volume of mail they can send.
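For reference, “fairly aggressive” means something along these lines in jail.local (fail2ban ships with an exim filter; the exact values here are illustrative and a matter of taste):

[exim]
enabled  = true
port     = smtp,465,submission
maxretry = 3
findtime = 3600
bantime  = 86400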

Link: Graylog content pack for Exim

How to fix Mono crashing on Odroid XU4

Recently I’ve been noticing my Sonarr and Radarr applications behaving erratically (sometimes not responsive, sometimes not performing tasks, not searching or adding content, but at other times behaving totally fine). A quick look at the logs told me the applications were crashing and being restarted by systemd; after a few crashes they seemed to stabilize.
Today I had some time to dig deeper into the issue. I had already searched the Internet for general issues, but it didn’t seem to be a widespread problem. I assumed it might have to do with the ARM architecture, but Raspberry Pi users didn’t seem to be having these issues.

In the past I had difficulty reproducing the issue, but today I was in luck, every time I tried to kick off the “Process Monitored Downloads” task in Radarr, it would start working on the task and then crash and restart. The core issue turned out to be oddly specific to the Odroid XU4 hardware.

The XU4 has eight CPU cores, four A7 cores running at 1.4GHz, and four A15 cores running at 2GHz.

 

Whenever the mono process moved from an A7 to an A15 core (or vice versa), it crashed.
Since both Sonarr and Radarr are Mono applications, they were both affected. Pinning the applications to either the A15 or the A7 CPUs resolved the problem.
taskset --cpu-list can be used to change the CPU affinity of a process.

First look up the systemd service files for Radarr and Sonarr (e.g. via systemctl status radarr). Edit the service files by prefixing the ExecStart command with /usr/bin/taskset -c 0-3 (for the A7 cores) or /usr/bin/taskset -c 4-7 (for the A15 cores).
Then reload the systemd unit files (systemctl daemon-reload) and restart the service.

Example:
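The exact ExecStart line depends on how the package starts mono, so the paths below are only illustrative. Pinned to the A7 cores it could look like this:

# /etc/systemd/system/radarr.service (excerpt)
[Service]
ExecStart=/usr/bin/taskset -c 0-3 /usr/bin/mono --debug /opt/Radarr/Radarr.exe -nobrowser -data=/var/lib/radarr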

If the service files are managed by the package manager, you may want to create a systemd override instead of editing the service file directly, so that the package manager doesn’t overwrite your changes:
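For example (again with an illustrative ExecStart), systemctl edit radarr creates an override file in which ExecStart is first cleared with an empty assignment and then redefined:

# systemctl edit radarr
# -> /etc/systemd/system/radarr.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/taskset -c 4-7 /usr/bin/mono --debug /opt/Radarr/Radarr.exe -nobrowser -data=/var/lib/radarr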

 

Now you probably want to verify that it worked, i.e. check which core a process is running on. There are a few options:

htop: launch htop. Press <F2>, go to Columns, and add PROCESSOR from Available Columns. The CPUs are numbered 1-8 in htop (as opposed to 0-7 by the system).

ps: the PSR column can display the core a process is on. The CPUs are numbered 0-7 in ps.

taskset: can display the current affinity of a process (see the examples below).
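For example (the PID is just a placeholder):

# core each mono process is currently on (PSR column)
ps -o pid,psr,args -C mono

# allowed CPU list for a given PID
taskset -cp 12345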

 

Statically served WordPress content

I’m currently still evaluating Hugo and Jekyll, along with themes and plugins, as an alternative to the current WordPress site. Until I decide which route to eventually go, I had a look at WordPress plugins that generate a static version of a site.

Simply Static looked fine, so I gave it a spin. It can easily crawl through the site, and you can provide additional files/URLs/directories to add to the static version (as well as exclusions).

The static version of the website is created regularly and stored locally, so I added a few ansible tasks to set up a periodic rsync of the files to my webserver that serves static content.
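The sync itself is little more than a cron-managed rsync along these lines (the hostname and paths are made up):

rsync -az --delete /var/www/simply-static/ static-web.example.com:/var/www/blog/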

I have a HAProxy load balancer in front of my webservers, configured to serve the static version of the website first and fall back to the WordPress server as a backup (which also gives me nice redundancy, so I can update and reboot servers without causing downtime).
HAProxy is also configured to always send certain requests (admin interface, search) to the WordPress server, since they require PHP. This all happens transparently to the user.
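Stripped down to the relevant bits, the HAProxy side looks something like this (names, addresses, and the exact ACLs are illustrative; the search routing is left out here):

frontend blog
    bind :80
    # requests that need PHP always go to the WordPress server
    acl needs_wp path_beg /wp-admin /wp-login.php /wp-json
    use_backend wordpress if needs_wp
    default_backend blog_static

backend blog_static
    # static mirror first, WordPress only as a fallback
    server static 10.0.0.10:80 check
    server wordpress 10.0.0.20:80 check backup

backend wordpress
    server wordpress 10.0.0.20:80 check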

I’m not going to bore you with the details since it was all pretty standard stuff. It’s nothing fancy, but it looks reliable and does what it should.

I have this blog entry scheduled to go live in a few days, so we’ll see if all the automation works and the static version of the page gets generated and synced to the webserver.