How to use ansible to scan for Spectre/Meltdown vulnerable hosts

First of all head on over to github and download a version of spectre-meltdown-checker that supports JSON output. Now all we need is an ansible playbook that calls that script:

The important part is adjusting the path to spectre-meltdown-checker.sh in the script: task (the path is relative to wherever your playbook file is). Adapt it to your needs however you want. It is basically just feeding the output of the script into the from_json filter, storing it in a variable, and then iterating over the result via with_items.
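A minimal playbook along those lines could look like this (a sketch, assuming the checker's --batch json output format with its NAME/CVE/VULNERABLE fields):

```yaml
- hosts: all
  become: true
  tasks:
    # run the local copy of the checker on each remote host
    - name: run spectre-meltdown-checker
      script: spectre-meltdown-checker.sh --batch json
      register: checker
      changed_when: false

    # parse the JSON output and print one line per vulnerable CVE
    - name: report vulnerability status
      debug:
        msg: "{{ item.NAME }} ({{ item.CVE }}): VULNERABLE"
      when: item.VULNERABLE
      with_items: "{{ checker.stdout | from_json }}"
```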

Example output:


(vulnerable to  CVE-2017-5715 since Intel retracted their microcode updates and haven’t released new ones yet)

How to colorize manpages

I’m surprised I’ve never posted this here before. Turning manpages from monochrome to color is super easy.

There are a few LESS_TERMCAP_* environment variables you can adjust. The most useful ones to change are md (bold text, used for headings), us (underlined text, used for arguments), and so (standout text, used for the status bar), along with me, ue, and se, which reset the respective formatting.

I prefer to only set them for man, so I put this little function in my ~/.bashrc 
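The function boils down to setting the variables just for the man invocation. The exact colors here are my own choice, pick whatever you like:

```shell
# wrap man so the LESS_TERMCAP_* variables only apply to manpages
man() {
  LESS_TERMCAP_mb=$'\e[1;31m' \
  LESS_TERMCAP_md=$'\e[1;36m' \
  LESS_TERMCAP_me=$'\e[0m' \
  LESS_TERMCAP_se=$'\e[0m' \
  LESS_TERMCAP_so=$'\e[1;44;33m' \
  LESS_TERMCAP_ue=$'\e[0m' \
  LESS_TERMCAP_us=$'\e[1;32m' \
  command man "$@"
}
```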

 

Bash function for easily watching logs and colorizing the output

Another useful bash function I have on my servers. It's a wrapper around tail -F and ccze. It looks for a log file (prepending /var/log/ to the path if it can't find it), and pipes it into ccze to colorize the output. Handy if you find yourself watching logs. I mostly use it for dhcp/tftp/mail where I don't have a huge amount of traffic (i.e. I can watch it in real time) and am expecting an event/log entry.

Usage:
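A sketch of the function and its usage (the name log and the ccze flag are my assumptions, not necessarily what the original script used):

```shell
# tail a log file and pipe it through ccze for color
log() {
  local file="$1"
  # prepend /var/log/ if the argument isn't an existing file
  [[ -f "${file}" ]] || file="/var/log/${file}"
  if [[ ! -f "${file}" ]]; then
    echo "log file not found: $1" >&2
    return 1
  fi
  tail -F "${file}" | ccze -A
}

# Usage:
#   log mail.log        # watches /var/log/mail.log
#   log /tmp/debug.log  # watches /tmp/debug.log
```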

Using regex comparison in bash and BASH_REMATCH

Bash supports regular expressions in comparisons via the =~ operator.  But what is rarely used or documented is that you can use the ${BASH_REMATCH[n]}  array to access successful matches (back-references to capture groups). So if you use parentheses for grouping ()  in your regex, you can access the content of that group.

Here is an example where I am parsing date placeholders in a text with an optional offset (e.g. |YYYY.MM.DD|+2 ). Storing the format and offset in separate groups:
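A trimmed-down sketch of that idea (the surrounding text and variable names are made up for the example):

```shell
text='generate report |YYYY.MM.DD|+2'
# group 1: the date format, group 2: the optional offset
regex='\|([A-Z.]+)\|([+-][0-9]+)?'
if [[ ${text} =~ ${regex} ]]; then
  echo "format: ${BASH_REMATCH[1]}"        # → format: YYYY.MM.DD
  echo "offset: ${BASH_REMATCH[2]:-none}"  # → offset: +2
fi
```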

 

 

 

Multiply floats by 10, 100, … in bash

A short one today. Bash can only handle integers, not floats, so when someone searches the internet for how to do math on floats in bash, the solution they find is usually "use bc" and looks something like this:

Or if they want the result to be an integer:

It’s a fine solution, and readable (which can mean a lot for people maintaining scripts). But if all you want to do is multiply by 10,100,1000, … you can achieve this faster with a bit of string manipulation:
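The trick looks something like this (my reconstruction of the approach):

```shell
value="3.14"
# ×100: remove the decimal point entirely
echo "${value%%.*}${value##*.}"                   # → 314
# ×10: shift the decimal point one place to the right
fraction="${value##*.}"
echo "${value%%.*}${fraction:0:1}.${fraction:1}"  # → 31.4
```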

It just splits the number into two strings and assembles them again with the decimal point shifted. Have a look at substring_removal and substring_expansion for more examples on how to modify strings in bash. I'd highly suggest either sticking this in a separate function or commenting the code, since it isn't necessarily obvious what is going on.

Since it is all pure bash and doesn't need to spawn external commands, it is quicker (not that bc is slow, but if you are doing a lot of calculations, it can add up). I know what you are thinking: "if your goal is speed, you shouldn't be using bash". True, but that doesn't mean we can't write efficient code.

API for Troy Hunt's password list

TL;DR version: https://github.com/ryanschulze/password-check-api

So NIST updated their recommendations on passwords/authentication a few weeks ago. While a lot of the reporting was about how password complexity was dropped in favor of password length, one point I found intriguing was the suggestion to check if a user's password falls into one of these categories:

  • Passwords obtained from previous breach corpuses.
  • Dictionary words.
  • Repetitive or sequential characters (e.g. ‘aaaaaa’, ‘1234abcd’).
  • Context-specific words, such as the name of the service, the username, and derivatives thereof.

Troy Hunt, the guy behind https://haveibeenpwned.com, deals with a lot of data breaches and has made 320 million passwords from breaches available (at the time of this posting) to help people check whether a password was part of a data breach.

I threw together a small API that can make the data from Troy Hunt easily query-able (or any list of SHA1 hashes for that matter). This can be useful if you have multiple systems that want to query the data, or if you want the data on a separate system.
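Querying it could look something like this; note that the endpoint path /check/&lt;sha1&gt; and the hostname are assumed examples, check the repository's README for the actual routes:

```shell
# hash the password locally and ask the (hypothetical) API about it
check_password() {
  local hash
  hash=$(printf '%s' "$1" | sha1sum | awk '{print $1}')
  curl -s "https://pwcheck.example.com/check/${hash}"
}
```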

It’s nothing special, a MySQL backend, a Webserver and an API application using the Slim framework. It’s also stupid fast because there is nothing fancy or special about it. Since it uses a well documented framework it is also easy to change/extend/adjust to your specific requirements.

Bash function to run commands against ansible hosts

I haven't posted anything ansible related in a while, so here is a nifty little function I regularly use when I want to execute something on all (or a subset) of ansible hosts. It's just a wrapper around ansible host -m script -a scriptname.sh, but adds --tree so that the output is stored and can easily be parsed by jq.

Usage example:
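The function is roughly this shape (the name arun and the output location are assumptions on my part):

```shell
# run a script via ansible on a group of hosts, storing output per host
arun() {
  local hosts="$1" script="$2"
  local outdir
  outdir=$(mktemp -d /tmp/ansible-run.XXXXXX)
  ansible "${hosts}" -m script -a "${script}" --tree "${outdir}"
  echo "output stored in: ${outdir}"
}

# Usage:
#   arun webservers check_raid.sh
#   jq '.stdout' /tmp/ansible-run.*/somehost
```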

Keeping track of Jira issues

A Jira search I find useful (it shows tickets you created that haven't been updated in a while), since I often have to track tickets I created across different projects. You can subscribe to searches and have an email sent to you with the results.

reporter = currentUser() AND updated < -90d AND resolution = Unresolved ORDER BY updatedDate DESC 

Fixing Ubuntu 17.04 DNS problems

I recently upgraded my Ubuntu box to 17.04.

Much to my surprise DNS started behaving strangely, so I checked my DNS server … it worked fine if I queried it directly. I checked if DHCP was giving out the wrong DNS IP … nope, that was fine too. I checked /etc/nsswitch.conf, and that also looked fine, so I checked what was ending up in /etc/resolv.conf and was surprised to see that it contained nameserver 127.0.0.53 instead of the "real" DNS server.

After a bit of research I found out that Ubuntu switched over to using systemd-resolved, which shoves itself between user land and the DNS servers and (at least in Ubuntu 17.04) has problems with servers that support DNSSEC. Very frustrating when you know everything is OK and worked in the past, just systemd messing with stuff and breaking it.

My workaround was to turn off DNSSEC validation. Not pretty, but better than no DNS at all until Ubuntu gets these problems sorted out.
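For reference, the workaround boils down to one line in resolved.conf (plus a restart of the service); DNSSEC=allow-downgrade would be a slightly softer alternative:

```
# /etc/systemd/resolved.conf
[Resolve]
DNSSEC=no

# then: systemctl restart systemd-resolved
```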

 

How to fetch IP ranges/entries from SPF records in bash

Recently I needed to fetch IP ranges from SPF records. After looking at different python/ruby/perl modules I came to the conclusion that a fancy module (sometimes with wonky dependencies) was overkill just to parse a simple SPF record. So I threw together a simple bash script that is mainly just fetching the SPF record with dig and grep:
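The core of it is something like this sketch (the full script also handles the other SPF mechanisms):

```shell
# print the ip4:/ip6: entries of a domain's SPF record
fetch_spf() {
  dig +short TXT "$1" \
    | tr ' ' '\n' \
    | grep -E '^"?ip[46]:' \
    | tr -d '"' \
    | sed 's/^ip[46]://' \
    | sort -u
}
```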

It iterates through the options (it currently recognizes a, mx, ip4, ip6, include, and redirect), and then sorts the output by ipv4, then ipv6.

Download URL: fetch_spf.sh

How to compare package version strings in bash

This is a little function I use to compare package version strings. Sometimes they can get complex, with multiple different delimiters or strings in them. I cheated a bit by using sort --version-sort for the actual comparison. If you are looking for a pure bash version to compare simpler strings (e.g. compare 1.2.4 with 1.10.2), I'd suggest this stackoverflow posting.

The function takes three parameters (the version strings and the comparison you want to apply) and uses the return code to signal if the result was valid or not. This gives the function a somewhat natural feel, for example compare_version 3.2.0-113.155 “<” 3.2.0-130.145 would return true. Aside from < and > you can also use a few words like bigger/smaller, older/newer or higher/lower for comparing the strings.

List of return codes and meanings:
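A sketch of such a function, using the convention 0 = comparison holds, 1 = comparison fails, 2 = unknown operator (my reconstruction, not the original script):

```shell
compare_version() {
  local a="$1" op="$2" b="$3" first
  # the string that sorts first is the older version
  first=$(printf '%s\n%s\n' "${a}" "${b}" | sort --version-sort | head -n1)
  case "${op}" in
    '<'|smaller|older|lower)
      [[ "${a}" != "${b}" && "${first}" == "${a}" ]] ;;
    '>'|bigger|newer|higher)
      [[ "${a}" != "${b}" && "${first}" == "${b}" ]] ;;
    *)
      return 2 ;;
  esac
}

# compare_version 3.2.0-113.155 "<" 3.2.0-130.145 && echo true   # → true
```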

 

 

Temporary theme

A recent update broke my WordPress theme. I’ve used the same theme for almost 10 years and it was starting to be a pain to keep updated and working with newer WordPress versions. So I decided to put up this simple theme until I get a new theme picked out and up and running.

Setting up multidomain DKIM with Exim

I was recently setting up SPF, DKIM and DMARC for multiple domains and was having trouble getting Exim to sign emails for the different domains. I found an article here explaining the steps. But I kept getting the following error in my exim logs:

failed to expand dkim_private_key: missing or misplaced { or }

The suggested configuration was the following:

I'm not quite sure why, but Exim was having trouble expanding macros nested inside other macros, so I ended up changing it to the following snippet instead. If you don't use DKIM_FILE you can omit it. Also, you might want to set DKIM_STRICT to true if you published a DMARC policy that rejects or quarantines email failing the DKIM tests (unset, or "false", tells Exim to send the message unsigned if it runs into problems signing the email). The default setting for DKIM_CANON is "relaxed", so it can also be omitted.
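What ended up working for me looked roughly like this (the selector x and the key path layout are assumptions, adjust them to your setup):

```
DKIM_DOMAIN = ${lc:${domain:$h_from:}}
DKIM_FILE = /etc/exim/dkim/${lc:${domain:$h_from:}}.pem
DKIM_PRIVATE_KEY = ${if exists{DKIM_FILE}{DKIM_FILE}{0}}

remote_smtp:
  driver = smtp
  dkim_domain = DKIM_DOMAIN
  dkim_selector = x
  dkim_private_key = DKIM_PRIVATE_KEY
```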

Other than that, just make sure the exim process has permissions to access the dkim directory and certificate files and everything should work nicely.

OpenVAS: Using PostgreSQL instead of sqlite

When using OpenVAS in larger environments (e.g. lots of tasks and/or lots of slaves) you may have noticed the manager controlling all the slaves/scans can get sluggish or unresponsive at times. In my experience it is often due to the different processes waiting for an exclusive lock on the sqlite database. Fortunately OpenVAS 8 and above also supports using PostgreSQL as a database backend instead of sqlite. I think OpenVAS 7 also had support built-in, but it was still considered experimental.

Documentation on how to use PostgreSQL as the backend is in the OpenVAS svn repository. In a nutshell it is mainly adding -DBACKEND=POSTGRESQL to your cmake when you compile the manager (my cmake line is cmake -DCMAKE_INSTALL_PREFIX=/ -DCMAKE_BUILD_TYPE=Release -DBACKEND=POSTGRESQL ..). I generally only compile the master with PostgreSQL support and leave the slaves to use sqlite (since they don't have as many concurrent accesses to their database). The documentation also steps you through the permissions you need to set up in PostgreSQL so it can be used by OpenVAS. Don't forget to make the system aware of your OpenVAS libraries: in my case, since I install OpenVAS to /, I put /lib64/ in my /etc/ld.so.conf.d/openvas.conf file and then execute ldconfig.
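Condensed into commands, the build steps described above look like this in my setup (paths are mine, adjust the prefix and library path to yours):

```
cd openvas-manager/build
cmake -DCMAKE_INSTALL_PREFIX=/ -DCMAKE_BUILD_TYPE=Release -DBACKEND=POSTGRESQL ..
make && make install
echo '/lib64/' > /etc/ld.so.conf.d/openvas.conf
ldconfig
```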

One issue you may run into is migrating data from sqlite to PostgreSQL. There is a migration script in svn that can migrate the data, but it only works for a few older database versions. I assume OpenVAS 9 will contain an updated version of the script when it is released, but until then I wrote a script that uses the OMP protocol to export/store/import some of the settings. Since it only uses OMP to communicate with the master it is backend agnostic. You can use it to export the sqlite data and import it back into a manager using the PostgreSQL backend. It also means that it can only access data you can export via OMP (so no credential passwords/keys). The script will keep references intact (which task uses which target/schedule/…). The list of what it exactly imports/exports is on the github page: github.com/ryanschulze/openvas-tools

 

How to easily switch between ansible versions

Lately I've run into issues with different versions of ansible (1.9 handling async better, 2.x having more modules and handling IPv6 better) and having to test playbooks and roles against different versions to make sure they work. To make life easier I put this little function in my .bashrc to switch back and forth between ansible versions. It checks out the specified version from github if it needs to, and switches over to it (just for that terminal, not the whole system). Usage is straightforward: ansible_switch <branch>, e.g. ansible_switch 2.1 (or whatever branch you want, here is a list of all branches).
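A sketch of the function (the checkout directory under ~/.ansible-versions is my assumption):

```shell
ansible_switch() {
  local branch="stable-$1"
  local dir="${HOME}/.ansible-versions/${branch}"
  # clone the branch from github if we don't have it yet
  if [[ ! -d "${dir}" ]]; then
    git clone --branch "${branch}" --depth 1 \
      https://github.com/ansible/ansible.git "${dir}" || return 1
  fi
  # env-setup adjusts PATH/PYTHONPATH for this shell only
  source "${dir}/hacking/env-setup" >/dev/null
}
```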

It is currently limited to stable branches, but you can change line 6 from stable- to whatever you want (or remove the prefix completely). If you have a github account you also may want to change from https to ssh by using the git@github.com:ansible/ansible.git checkout URL.

 

Ansible tasks to reboot a server if required

A quick one today.  The following ansible tasks check if a server needs to be rebooted, reboots it, and then waits for it to come back online. Easy to fire off during a maintenance after updating packages.
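The tasks amount to something like this sketch (assuming a Debian/Ubuntu system, where a pending reboot is flagged via /var/run/reboot-required):

```yaml
- name: check if a reboot is required
  stat:
    path: /var/run/reboot-required
  register: reboot_required

- name: reboot the server
  shell: sleep 2 && shutdown -r now "rebooting for updates"
  async: 1
  poll: 0
  when: reboot_required.stat.exists

- name: wait for the server to come back online
  wait_for_connection:
    delay: 30
    timeout: 300
  when: reboot_required.stat.exists
```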

 

How to check if you are vulnerable to the DROWN attack (CVE-2016-0800)

CVE-2016-0800, also known as the DROWN attack, is an attack against servers that still support the old SSLv2 protocol. The only reason a server would still offer SSLv2 is possible compatibility with 20-year-old PCs (in other words: there is no reason to use or offer SSLv2 any more). On the configuration side you can disable the v2 protocol by adding -SSLv2 to the list of protocols being used.

Where and how you configure this depends on the software, but using all -SSLv2 -SSLv3 is fine with most modern servers and clients, Mozilla has a fantastic overview for configuring SSL and TLS.

If you want to check a bunch of your hosts remotely, you can use the sslv2 script included with nmap like this:
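The invocation is along these lines (port 443 is just an example, scan whatever SSL ports you use):

```
nmap --script sslv2 -p 443 hostname
```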

Where hostname is either a FQDN, an IP, or an IP range. You can swap out sslv2 with ssl-enum-ciphers to see all SSL/TLS ciphers and protocols the server offers.

 

Granular OSSEC Emails

Occasionally I see questions on the OSSEC mailing list about how to send certain alerts only to a specific email address. A typical use case would be different departments being responsible for different groups of servers and having alerts go only to them. OSSEC has a few options for sending alerts to specific email addresses, but it only adds those email addresses to the alert (meaning it always also goes to the global email address). Sometimes this isn't desirable.

A workaround is setting the global email recipient to a blackhole email address (something that is aliased to /dev/null on the mail server) and only using the granular settings for delivering mail.

You can then use attributes like the rule ID, group names, or event locations to split up alerts between different recipients. The downside is that by doing this, you will miss alerts that have <options>alert_by_email</options> and a low level, unless you add a few granular email alerts. Rule 1002 (catch-all $BAD_WORDS) is a good candidate you will want to keep receiving. Rules 501-504 (OSSEC agent/master status alerts) could also be interesting; either add an <email_alerts> entry for each rule individually, or overwrite the rules adding <group>ossec,</group> to them, so you can add one <email_alerts> entry for the whole group of rules.

We use this system pretty extensively, assigning alerts to email groups by <event_location> and/or <group>.

An example for the email block could look like this:
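A sketch of how such a block could look (the addresses and patterns are made-up examples):

```xml
<global>
  <!-- blackhole address, aliased to /dev/null on the mailserver -->
  <email_to>blackhole@example.com</email_to>
</global>

<email_alerts>
  <email_to>dbteam@example.com</email_to>
  <event_location>^db-</event_location>
  <format>full</format>
</email_alerts>

<email_alerts>
  <email_to>webteam@example.com</email_to>
  <group>web</group>
  <format>full</format>
</email_alerts>
```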

 

A script to diff files/directories on two different servers

Ok, short one today. This is a straightforward script that simplifies comparing directories on different servers. There is no magic in it: it just rsyncs the directories to a local temp directory and runs diff against them (then deletes the directory afterwards). It's mainly intended for config files, I wouldn't recommend trying to diff gigabytes of binaries with it.
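A sketch of such a script as a function (the name and argument order are my choice):

```shell
# usage: remote_diff host1 host2 /etc/nginx/
remote_diff() {
  local host1="$1" host2="$2" path="$3"
  local tmp
  tmp=$(mktemp -d) || return 1
  rsync -a "${host1}:${path}" "${tmp}/a/"
  rsync -a "${host2}:${path}" "${tmp}/b/"
  diff -r "${tmp}/a" "${tmp}/b"
  rm -rf "${tmp}"
}
```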

 

Renewing “Let’s Encrypt” SSL certificates

Let’s Encrypt provides free DV SSL certificates for everyone and is now in the open beta phase. I’m not going to go into the details of which of the clients are best, since that depends entirely on your use case (I use acme-tiny and a rule in varnish to intercept all calls to /.well-known/acme-challenge/).

Since the certificates are only valid for 90 days, I often see people suggesting to just renew them via cronjob every 2 months. I find this to be really awful advice: if that renewal fails for any reason (network problems, local problems, problems with Let's Encrypt), the next renewal attempt is a month after the certificate expired. It is also pretty inflexible (what if you would rather renew them after 80 days?).

I use openssl to check daily how long the certificate is still valid, and if a threshold has been reached it tries to renew the certificate (I believe the official client has this functionality too). If the certificate still isn't renewed by a 2nd threshold, it sends an email alerting the admin of the problem (to manually intervene and fix whatever went wrong).

At the end of this posting I'll add the complete script, but the quickest way to check how long a certificate is still valid is openssl x509 -in <certificate> -checkend <seconds>. It returns 0 if the file is still valid in x seconds, and 1 if the certificate doesn't exist or will be expired by then. Just multiply the number of days by 86400 and check if the certificate is still valid:
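Wrapped into a small helper, the check looks like this:

```shell
# returns 0 if the certificate is still valid in <days> days,
# 1 if it will have expired by then (or the file doesn't exist)
cert_valid_for_days() {
  local certfile="$1" days="$2"
  openssl x509 -in "${certfile}" -noout -checkend $(( days * 86400 )) >/dev/null 2>&1
}

# Usage: cert_valid_for_days /etc/ssl/example.pem 10 || try_renewal
```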

The openssl binary has a few nice options for looking at certificates (both local files and remotely connecting to a server and looking at the provided certificate)
Show information about a local certificate file: openssl x509 -text -noout -in
Connect to a remote server and display the certificate provided: openssl s_client -showcerts -servername foo.bar -connect IP:PORT | openssl x509 -text -noout  (servername foo.bar is only required if you are connecting to a server and need to use SNI to request a cert for a specific domain, i.e. a webserver providing multiple domains on port 443 via SNI. It can of course be omitted if you don’t need it.)

This is the full script I use for checking and renewing certs. It basically just loops through a list of domains, checks if any of the date thresholds are met, and then renews certificates / sends emails.