Posted in Security, Server, Tech

Setting up multidomain DKIM with Exim

I was recently setting up SPF, DKIM, and DMARC for multiple domains and had trouble getting Exim to sign emails for the different domains. I found an article here explaining the steps, but I kept getting the following error in my Exim logs:

failed to expand dkim_private_key: missing or misplaced { or }

The suggested configuration was the following:

I’m not quite sure why, but Exim was having trouble expanding the nested macros, so I ended up changing it to the following snippet instead. If you don’t use DKIM_FILE you can omit it. You might also want to set DKIM_STRICT to true if you published a DMARC policy that rejects or quarantines email failing the DKIM tests (unset or “false” tells Exim to send the message unsigned if it runs into problems signing it). The default setting for DKIM_CANON is “relaxed”, so it can also be omitted.
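A sketch of what the resulting configuration could look like, with each macro spelled out instead of referencing another macro (the selector, key directory, and file naming are assumptions; adjust them to your setup):

```
# Keys are assumed to be stored as /etc/exim/dkim/<domain>.pem
DKIM_DOMAIN = ${lc:${domain:$h_from:}}
DKIM_FILE = /etc/exim/dkim/${lc:${domain:$h_from:}}.pem
DKIM_PRIVATE_KEY = ${if exists{DKIM_FILE}{DKIM_FILE}{0}}

remote_smtp:
  driver = smtp
  dkim_domain = DKIM_DOMAIN
  dkim_selector = 20160301
  dkim_private_key = DKIM_PRIVATE_KEY
  dkim_canon = relaxed
  dkim_strict = false
```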

Other than that, just make sure the Exim process has permission to access the DKIM directory and key files, and everything should work nicely.

Posted in Security, Tech

OpenVAS: Using PostgreSQL instead of sqlite

When using OpenVAS in larger environments (e.g. lots of tasks and/or lots of slaves) you may have noticed the manager controlling all the slaves/scans can get sluggish or unresponsive at times. In my experience it is often due to the different processes waiting for an exclusive lock on the sqlite database. Fortunately OpenVAS 8 and above also supports using PostgreSQL as a database backend instead of sqlite. I think OpenVAS 7 also had support built-in, but it was still considered experimental.

Documentation on how to use PostgreSQL as the backend is in the OpenVAS svn repository. In a nutshell, it mainly comes down to adding -DBACKEND=POSTGRESQL to your cmake invocation when you compile the manager (my cmake line is cmake -DCMAKE_INSTALL_PREFIX=/ -DCMAKE_BUILD_TYPE=Release -DBACKEND=POSTGRESQL ..). I generally only compile the master with PostgreSQL support and leave the slaves on sqlite (since they don’t have as many concurrent accesses to their database). The documentation also steps you through the permissions you need to set up in PostgreSQL so it can be used by OpenVAS. Don’t forget to make the system aware of your OpenVAS libraries: in my case, since I install OpenVAS to /, I put /lib64/ into my /etc/ld.so.conf.d/openvas.conf file and then execute ldconfig.
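The steps above boil down to roughly the following (the source directory is an assumption; paths match my install prefix of /):

```shell
# Build the OpenVAS manager with the PostgreSQL backend
cd openvas-manager/build
cmake -DCMAKE_INSTALL_PREFIX=/ -DCMAKE_BUILD_TYPE=Release -DBACKEND=POSTGRESQL ..
make && make install

# Make the dynamic linker aware of the OpenVAS libraries
echo '/lib64/' > /etc/ld.so.conf.d/openvas.conf
ldconfig
```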

One issue you may run into is migrating data from sqlite to PostgreSQL. There is a migration script in svn that can migrate the data, but it only works for a few older database versions. I assume OpenVAS 9 will contain an updated version of the script when it is released, but until then I wrote a script that uses the OMP protocol to export/store/import some of the settings. Since it only uses OMP to communicate with the master, it is backend agnostic: you can use it to export the sqlite data and import it back into a manager using the PostgreSQL backend. It also means that it can only access data you can export via OMP (so no credential passwords/keys). The script keeps references intact (which task uses which target/schedule/…). The exact list of what it imports/exports is on the GitHub page: github.com/ryanschulze/openvas-tools

Posted in Security, Tech

How to check if you are vulnerable to the DROWN attack (CVE-2016-0800)

CVE-2016-0800, also known as the DROWN attack, is an attack against servers that still support the old SSLv2 protocol. The only reason a server would still offer SSLv2 is compatibility with 20-year-old PCs (in other words: there is no reason to use or offer SSLv2 any more). On the configuration side you can disable the v2 protocol by adding -SSLv2 to the list of protocols being used.

Where and how you configure this depends on the software, but using all -SSLv2 -SSLv3 is fine with most modern servers and clients. Mozilla has a fantastic overview for configuring SSL and TLS.
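For example, the protocol settings could look like this in Apache (mod_ssl) and nginx (a sketch; check Mozilla's generator for a full configuration):

```
# Apache (mod_ssl)
SSLProtocol all -SSLv2 -SSLv3

# nginx
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```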

If you want to check a bunch of your hosts remotely, you can use the sslv2 script included with nmap like this:
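The invocation could look like this (the port list is an assumption; add whatever TLS-enabled ports you use):

```shell
nmap -p 443,465,993,995 --script sslv2 hostname
```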

Where hostname is either an FQDN, an IP, or an IP range. You can swap out sslv2 for ssl-enum-ciphers to see all SSL/TLS ciphers and protocols the server offers.


Posted in Security, Tech

Granular OSSEC Emails

Occasionally I see questions on the OSSEC mailing list about how to send a subset of alerts only to a specific email address. A typical use case would be different departments being responsible for different groups of servers, with alerts only going to them. OSSEC has a few options for sending alerts to specific email addresses, but it only adds those addresses to the alert (meaning the alert still always goes to the global email address). Sometimes this isn’t desirable.

A workaround is setting the global email recipient to a blackhole email address (something that is aliased to /dev/null on the mail server) and only using the granular settings for delivering mail.
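On the mail server side, the blackhole address could be set up like this (the alias name is an assumption):

```
# /etc/aliases
blackhole: /dev/null
```

Remember to run newaliases (or your MTA's equivalent) afterwards.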

You can then use attributes like the rule ID, group names, or event locations to split up alerts between recipients. The downside is that you will miss alerts with <options>alert_by_email</options> and a low level, unless you add a few granular email alerts for them. Rule 1002 (catch-all $BAD_WORDS) is a good candidate you will want to keep receiving. Rules 501-504 (OSSEC agent/master status alerts) could also be interesting; either add an <email_alert> for each rule individually, or overwrite the rules adding <group>ossec,</group> to them, so you can add one <email_alert> for the whole group of rules.

We use this setup pretty extensively, assigning alerts to email recipients by <event_location> and/or <group>.

An example for the email block could look like this:
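A sketch of what that could look like in ossec.conf (the addresses and patterns are assumptions):

```xml
<global>
  <!-- blackhole address, aliased to /dev/null on the mail server -->
  <email_to>blackhole@example.com</email_to>
</global>

<!-- everything from the web servers goes to the web team -->
<email_alerts>
  <email_to>webteam@example.com</email_to>
  <event_location>^web-</event_location>
</email_alerts>

<!-- database alerts go to the DBAs -->
<email_alerts>
  <email_to>dba@example.com</email_to>
  <group>mysql</group>
</email_alerts>
```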


Posted in Security, Tech

What was and wasn’t fixed in bash after the Shellshock vulnerability (CVE-2014-6271)

By now there are a few websites/blog posts online that explain CVE-2014-6271 (code inserted after a function definition in an environment variable being executed) and CVE-2014-7169 (a parser error that also led to code being executed). Both CVEs have been patched. Check out “Quick notes about the bash bug, its impact, and the fixes so far“, “Shellshock proof of concept – Reverse shell“, or “Everything you need to know about the Shellshock Bash bug” if you want more details on how CVE-2014-6271 worked.

Most websites focus on the remote exploitability of the vulnerability via CGI in web servers (understandably, since that is the most dangerous aspect, given how request headers are passed to scripts), and it didn’t take long after the announcement on the oss-security mailing list for requests to start hitting my webservers trying to exploit the vulnerability (from what I saw, either people checking “how many servers does this really affect” or the more malicious “add your server to my DDoS botnet”).

What I would like to focus on is the functionality of exporting function definitions that CVE-2014-6271 has drawn into the spotlight. You can define a function and make it accessible to the script/child shell that you invoke. This may sound really nifty (and it is … to a certain degree). The problem is that the script has no control over which functions are imposed upon it. It is therefore possible to override any existing command or function, even builtins. No matter how hard you try to keep a sane environment within your script, anyone will be able to manipulate it from the outside (e.g. by overriding unset, set, …).

A small example script that initializes a variable, prints some output and then exits.
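A minimal sketch of such a script (the filename test.sh and the exact wording of the output are my assumptions):

```shell
#!/bin/bash
# test.sh - initialize a variable, set a trap, print some output, exit
i=0
trap 'exit 1' TERM
echo "i is ${i}"
if [ "${i}" -eq 10 ]; then
    echo "i is now 10 - this should never happen"
fi
exit 0
```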

If we execute the script we get the following output:

Looks good; the if condition will never be true. Unless, of course, we start overriding functions …
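One way to do that (assuming the example script is saved as test.sh) is to export a function that shadows the trap builtin before invoking the script:

```shell
# Imported functions take precedence over builtins in the child shell,
# so the script's own "trap ... TERM" call runs this instead
trap() { i=10; }
export -f trap
./test.sh
```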

Sacrificed the trap function to set i=10. Or slightly more elaborate:
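The more elaborate variant could look like this (again assuming the script is saved as test.sh):

```shell
echo() {
    unset -f echo   # remove the override so later calls hit the builtin
    echo "$@"       # pass the original arguments through
    i=10            # ... and manipulate the script's state on the side
}
export -f echo
./test.sh
```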

When echo is called we delete our function (so subsequent calls go to the original echo, to keep the code intact), execute an echo with whatever arguments were passed, and set i=10.

Obviously this is a simple example, and you could override things like set, declare, test, …
Another problem is that this also affects binaries that use system() to call a bash script. The following is a simple compiled C binary that uses system() to execute the bash script from the beginning of this post.
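A minimal sketch of such a program (the script path is an assumption):

```c
/* Run the example bash script via system(); the caller's environment,
   including any exported functions, is inherited by the child shell. */
#include <stdlib.h>

int main(void)
{
    return system("./test.sh");
}
```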

As you can see, the environment gets passed along to the shell script, and in my opinion this is where it can start to get ugly. There are probably more bash scripts in your $PATH than you are aware of; do a quick
for dir in ${PATH//:/ }; do egrep "^#\!.*bash" ${dir}/* ; done
and have a look for yourself.
If any of these scripts are called by a setuid/setgid binary that doesn’t sanitize and clean up the environment beforehand, you might have a serious problem on your hands.

Obviously, if you have a program that calls any kind of script, it is your job to make sure the environment is in a sane condition. At the same time, I feel that a more robust and secure way to implement exported functions would take a lot of pressure off the parent process, which has to ensure sane conditions it may itself have inherited.

The topic is being extensively discussed by security experts, and I expect the problem to be addressed shortly. In my opinion it should never be possible to override builtin functions externally (if a script itself wants to override a builtin, that’s fine with me; or have a switch to explicitly allow it). But since any solution will break the feature of exported functions in existing scripts, it is a delicate problem to solve.

Posted in Internet Stuff, Security, Tech

Script to start minion in tmux

Minion is a security project from Mozilla (link). It provides a user-friendly web interface to various security scanner tools. There is a webcast demonstrating the software (link).

The software requires a few services to run, and since I like having one script take care of starting everything with the right parameters, I threw together a simple shell script that sets up a tmux session with each service started in a window named after it.
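The script is roughly along these lines (the service names and start commands are placeholders; substitute however you installed and start minion's services):

```shell
#!/bin/bash
# Start each minion service in its own tmux window named after the service.
SESSION="minion"
declare -A SERVICES=(
    [backend]="minion-backend"      # placeholder start command
    [frontend]="minion-frontend"    # placeholder start command
)

# Reuse the session if it already exists
if ! tmux has-session -t "${SESSION}" 2>/dev/null; then
    tmux new-session -d -s "${SESSION}" -n shell
    for name in "${!SERVICES[@]}"; do
        tmux new-window -t "${SESSION}" -n "${name}" "${SERVICES[$name]}"
    done
fi
tmux attach-session -t "${SESSION}"
```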

Posted in Internet Stuff, Security

Updated OSSEC Web UI 0.3 files for OSSEC 2.6

OSSEC is an open source HIDS (host-based intrusion detection system), and a pretty darn good one too. It also has a simple web front-end to view what’s going on, search through alerts, and stuff like that (called OSSEC Web UI; I’ll just call it “WUI” here). Unfortunately the code is a bit outdated (the last official update was in 2008, as far as I can tell) and it doesn’t support newer features of OSSEC like polling data from a database. Something I’d like to tackle if I find the time 😉

The latest version of OSSEC is 2.6, and due to some small changes to the log format, WUI no longer works out of the box. I had a look at the code this weekend and am providing patches and downloads of the changed files needed to get everything running again with OSSEC 2.6.

List of changes:

  • Works with the OSSEC 2.6 alert log file format
  • Changed the rule ID link to work better with the new documentation wiki
  • Added a “user” field to the alert output
  • Widened the layout by a few pixels (to 1000px) and changed the CSS/alert layout to make individual alerts easier to read
  • Moved some of the hardcoded formatting to CSS

Download changed files
Download patch


Posted in Internet Stuff, Security

What plugins is that website running?

While having a look at nikto yesterday I stumbled across cms-explorer. It’s an interesting little program that checks the themes/modules/plugins installed in common CMSs (Drupal, WordPress, Joomla! and Mambo), with automatic exploration for Drupal and WordPress. It also has some nice bonus features, like providing a list of known issues for plugins it finds by querying the OSVDB.org database.
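A run against a WordPress site could look something like this (flags from memory; double-check them against the tool's own help output):

```shell
# Enumerate plugins/themes and look up known issues in OSVDB
./cms-explorer.pl -url http://www.example.com/ -type wordpress -osvdb
```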

Example output:

Running it against my own webspace revealed a possible SQL injection I was unaware of. *) Fixed that; I will probably replace that plugin completely this week, since anything with something so obviously bad in it is generally not all too sane.

*) I normally look at plugins before I install them; I must have missed this one. @ PHP programmers: anyone who passes the content of a $_REQUEST directly into a SQL query without any sanity checking deserves to be flogged with their own code.