Posted in Security, Tech

What was and wasn’t fixed in bash after the Shellshock vulnerability (CVE-2014-6271)

By now there are a few websites and blog posts that explain CVE-2014-6271 (code inserted after an exported function definition being executed) and CVE-2014-7169 (a parser error that also led to code execution). Both CVEs have been patched. Check out “Quick notes about the bash bug, its impact, and the fixes so far”, “Shellshock proof of concept – Reverse shell”, or “Everything you need to know about the Shellshock Bash bug” if you want more information about the details and how CVE-2014-6271 worked.

Most websites focus on the remote exploitability of the vulnerability via CGI in web servers (understandably, since this is the most dangerous aspect, given how request headers are passed to scripts), and it didn’t take long after the announcement on the oss-security mailing list for requests to start hitting my webservers trying to exploit the vulnerability (from what I saw, either people checking “how many servers does this really affect?” or the more malicious “add your server to my DDoS botnet”).

What I would like to focus on is the functionality of exporting function definitions, which has been drawn into the spotlight by CVE-2014-6271. You can define a function and make it accessible to the script/child shell that you invoke. This may sound really nifty (and it is … to a certain degree). The problem is that the script has no control over which functions are imposed upon it. It is therefore possible to override any existing command or function, even shell builtins. No matter how hard you try to keep a sane environment within your script, anyone will be able to manipulate it from the outside (e.g. by overriding unset, set, …).

A small example script that initializes a variable, prints some output and then exits.

If we execute the script we get the following output:
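A minimal sketch of what such a script and its run might look like (the path /tmp/test.sh, the variable i and its value, and the trap call are all my assumptions, chosen to match the overrides discussed below):

```shell
# Write the sketch script to a file and run it
cat > /tmp/test.sh <<'SCRIPT'
#!/bin/bash
i=5
trap "echo 'script done'" EXIT
echo "i is ${i}"
if [ "${i}" -eq 10 ]; then
    echo "this should never be printed"
fi
SCRIPT
chmod +x /tmp/test.sh

/tmp/test.sh
# i is 5
# script done
```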

Looks good; the if condition will never be true. Unless, of course, we start overriding functions …
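One way to do that is to export a function whose name matches a builtin the script uses. A sketch (the victim script, its path, and its values are illustrative; it is recreated inline so the example is self-contained):

```shell
# Victim script, the same sketch used throughout this post:
cat > /tmp/test.sh <<'SCRIPT'
#!/bin/bash
i=5
trap "echo 'script done'" EXIT
echo "i is ${i}"
if [ "${i}" -eq 10 ]; then
    echo "this should never be printed"
fi
SCRIPT
chmod +x /tmp/test.sh

# Export a function named "trap" that shadows the builtin inside
# the child shell and sets i=10 instead of registering the handler:
trap() { i=10; }
export -f trap
/tmp/test.sh
# i is 10
# this should never be printed
```

Since functions take precedence over builtins in bash's command lookup, the script's own trap call now runs the attacker's function.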

Here we sacrificed the trap builtin to set i=10. Or, slightly more elaborately:
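A sketch of such an override, again with the illustrative victim script recreated inline:

```shell
cat > /tmp/test.sh <<'SCRIPT'
#!/bin/bash
i=5
trap "echo 'script done'" EXIT
echo "i is ${i}"
if [ "${i}" -eq 10 ]; then
    echo "this should never be printed"
fi
SCRIPT
chmod +x /tmp/test.sh

# Shadow echo: on the first call, remove the override, forward the
# arguments to the real echo, and set i=10 as a side effect:
echo() {
    unset -f echo
    echo "$@"
    i=10
}
export -f echo
/tmp/test.sh
# i is 5
# this should never be printed
# script done
```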

When echo is called, we delete our function (so subsequent calls go to the original echo, keeping the script intact), execute an echo with whatever arguments were passed, and set i=10.

Obviously this is a simple example, and you could override stuff like set, declare, test, …
Another problem is that this also affects binaries that call system() to run a bash script. The following is a simple compiled C binary that uses system() to execute the bash script from the beginning of this posting.

As you can see, the environment gets passed along to the shell script. And in my opinion this is where it can start to get ugly. There are probably more bash scripts in your $PATH than you are aware of. Do a quick
for dir in ${PATH//:/ }; do egrep "^#\!.*bash" ${dir}/* ; done
and have a look for yourself.
If any of these scripts are called by a setuid/setgid binary that doesn’t sanitize and clean up the environment beforehand, you might have a serious problem on your hands.

Obviously, if you have a program that calls any kind of script, it is your job to make sure the environment is in a sane condition. At the same time, I feel that a more robust and secure way to implement exported functions would take away a lot of the pressure on the parent process to ensure sane conditions, since it may itself have inherited a manipulated environment.

The topic is being discussed extensively by security experts, and I expect the problem to be addressed shortly. In my opinion it should never be possible to override builtins externally (if a script itself wants to override a builtin, that’s fine with me; or have a switch to explicitly allow it). But since any solution will break the feature of exported functions in existing scripts, it is a delicate problem to solve.