Passing arrays to bash functions

Short version: yes, you can pass an array to a Bash function. You can also manipulate the array contents within the function to pass information back. It is easy and has been supported since Bash version 4.3, I believe.

Long version: I recently watched a talk by some very knowledgeable people about Bash, where they delved deep into its internals and quirks. At one point, they discussed passing information to and from functions without creating subshells. The solution became quite convoluted, and I was surprised because the whole time I was thinking, “just use nameref”.

Out of curiosity, I searched online, but unfortunately, the internet is full of responses like “doesn’t work,” “Bash can’t do that” and many variations of “just pass all the values of the array to the function as arguments and piece them together again inside the function” (which is a terrible solution since you lose the keys). There are a few posts here and there suggesting local -n as a solution, but they are rare, and especially on sites like Stack Overflow, they are not the top answers.

In a nutshell, what we are going to do is pass a reference to an array to a function (think “pointers” or “symlinks” if that helps).

Relevant parts of the bash man page for declare and local:

declare -n

Give each name the nameref attribute, making it a name reference to another variable. That other variable is defined by the value of name. All references, assignments, and attribute modifications to name, except for those using or changing the -n attribute itself, are performed on the variable referenced by name’s value. The nameref attribute cannot be applied to array variables.

local

For each argument, a local variable named name is created, and assigned value. The option can be any of the options accepted by declare. local can only be used within a function; it makes the variable name have a visible scope restricted to that function and its children.

So, in summary, we can use local to define variables with a scope limited to the function they are defined in, and local accepts all the options that declare supports.

It seems the “The nameref attribute cannot be applied to array variables.” part of the declare definition causes a lot of confusion or deters people from trying to use it for referencing arrays.
What it means is that you can’t do a local -n my_array=() (i.e. applying the nameref attribute to an array), but local -n my_array is fine (where my_array is a variable with the nameref attribute which can also point to an array).
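
To make that distinction concrete, here is a tiny sketch (the function and variable names are my own, not from the man page):

    show_first() {
      # What the man page forbids is making the nameref itself an array,
      # e.g.  local -n my_ref=()  -- that is rejected.
      # A plain nameref that merely points at an array is fine:
      local -n my_ref=$1
      echo "first element: ${my_ref[0]}"
    }

    fruits=(apple banana cherry)
    show_first fruits    # -> first element: apple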

Enough theory, let’s get down to practical examples.

We will create a function called do_stuff that:

  • takes the name of an array as argument $1
  • reads the length key from the array
  • adds a random key to the array, containing a random number whose length matches the length value read in the previous step

Then we will create an array outside of the function with some keys/values, pass it to the do_stuff function, and then output the contents with declare -p (sketched below).
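
A minimal sketch along those lines, assuming an associative array (the length and random keys follow the description above; everything else is my own choosing):

    #!/usr/bin/env bash

    do_stuff() {
      # $1 holds the *name* of the caller's array; the nameref makes every
      # read and write below act on that array directly, no subshell needed.
      local -n data=$1
      local length=${data[length]}

      # build a random number with ${length} digits and store it under "random"
      local random="" i
      for ((i = 0; i < length; i++)); do
        random+=$((RANDOM % 10))
      done
      data[random]=$random
    }

    declare -A my_array=([foo]="bar" [length]=6)
    do_stuff my_array
    declare -p my_array
    # prints something like:
    # declare -A my_array=([random]="428519" [foo]="bar" [length]="6" )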

So this shows us that the do_stuff function could read from the array (the length value) and write to the array (adding the random number key/value), and that the changes were applied to the array outside of the function (where we did the declare -p). Bonus points for not needing a subshell.

Using this “trick” allows us to pass more complex information to a function, and especially receive more complex information from a function.

There is one caveat: you can't use the same name for the array inside the function as outside. I wouldn't advise doing this anyway for readability reasons, as the variable's scope can become confusing. If you try it, Bash complains about a circular name reference.
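
A sketch of that failure mode (the exact warning text depends on the Bash version):

    do_stuff() {
      # The nameref has the same name as the array it is supposed to reference.
      local -n my_array=$1
      my_array[random]=42
    }

    declare -A my_array=([length]=6)
    do_stuff my_array
    # bash warns along the lines of:
    #   warning: my_array: circular name reference
    # and the write does not reach the outer my_array.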

Oddly, I have often noticed the following statement on Stack Overflow about nameref:

“This only works if the array is defined as a global.”
Nope, it works just as well when you pass an array that is locally scoped to one function into another function this way.

(I prefer using a main() function, like in the sketch below, to avoid global variables unless they are explicitly defined.)
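
Something along these lines demonstrates it; the variable and function names are mine, not from the original example:

    do_stuff() {
      local -n data=$1
      data[random]=$RANDOM
    }

    main() {
      # The array is local to main(), not a global, and do_stuff can
      # still write to it through the nameref.
      local -A settings=([length]=6)
      do_stuff settings
      declare -p settings   # shows the added "random" key
    }

    main "$@"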

So, there you have it: an easy way to pass an array to a function in bash, no weird looping over values. And a better way to receive information from a function than the single-byte return code or parsing its output.

A case for the Pimoroni Tiny 2040

In my last post, I mentioned the Pimoroni Tiny 2040. While it probably won't die from just dangling off the end of a USB cable, or from tossing the naked device in your bag/pocket, I prefer to have a small case around it for some protection and to make handling easier.

This also has the benefit of looking more professional when using it at work, compared to the “uh, are you sure this is a good idea” look I get when plugging PCBs directly into USB ports.

The design itself is pretty basic:

  • as small as possible
  • a top and bottom half that snap together securely when assembled
  • a slight recess on the bottom to accommodate the parts on the underside of the PCB
  • holes for the two buttons
  • a thin layer above the LED so it is protected, but still can be seen/used

One reason I like using this case is that I can print a few in different colors and switch them out based on the payload (e.g. red for dangerous, green/blue/yellow for safe, testing, or informational).

I uploaded the design to Thingiverse for everyone to access: https://www.thingiverse.com/thing:5994359

Using the Pimoroni Tiny 2040 as a USB rubber ducky

Last year I bought a Pimoroni Tiny 2040 that I really enjoy playing around with. It’s a fun little device that runs Python. It’s about the size of a thumbnail, has an LED, and you can use the boot select button for user input.

(Image credit: Pimoroni)

I mainly use it as a cheap USB rubber ducky with a non-malicious script at work (if plugged into a PC, it registers as a keyboard and starts typing: it opens notepad, writes some text about the importance of locking your PC, and then locks the PC).
To do this, install CircuitPython, and follow the instructions of this repository: pico-ducky

Once installed, you can easily write your own rubber ducky scripts and drop them on the device or use existing scripts found here: hak5/usbrubberducky-payloads

I have a small git repository that I use as a template to start off with; it includes all the required libraries and a slimmed-down, modified rubber ducky parser: ryanschulze/rubber-pico-duck

The LED on the Tiny 2040 glows dim blue when it has completed initialization and is ready. If you press the boot select button, the LED turns red and the payload is executed; when it completes, the LED flashes green briefly before returning to the ready state (dim blue).

Creating systemd service files for docker compose

I’ve recently been moving a few of my services from bare metal installations over to docker containers. Normally I use ansible to deploy everything in the right place (and you should be doing that too), but I have a “playground” to try out stuff before promoting it to “properly deployed on a different VM with ansible”.

The following script came in handy to simplify the process of creating systemd service files for the docker services.

It assumes that you are in a directory with a docker-compose.yml and that the directory name will be the service name, e.g. if you are in /opt/watchtower/ and there is a docker-compose.yml there, the service name will be watchtower.
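
A minimal sketch of such a script; the unit options (Restart, the dependency on docker.service) and paths are my assumptions, so adjust them to your setup:

    #!/usr/bin/env bash
    # Create and enable a systemd unit for the docker compose project in the
    # current directory; the directory name doubles as the service name.
    set -euo pipefail

    service=$(basename "$PWD")
    unit_file="/etc/systemd/system/${service}.service"

    printf '%s\n' \
      "[Unit]" \
      "Description=${service} (docker compose)" \
      "Requires=docker.service" \
      "After=docker.service" \
      "" \
      "[Service]" \
      "WorkingDirectory=${PWD}" \
      "ExecStart=/usr/bin/docker compose up --remove-orphans" \
      "ExecStop=/usr/bin/docker compose down" \
      "Restart=always" \
      "" \
      "[Install]" \
      "WantedBy=multi-user.target" \
      > "${unit_file}"

    systemctl daemon-reload
    systemctl enable --now "${service}.service"

Running docker compose up in the foreground (rather than with -d) lets systemd supervise the container lifecycle directly.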

Using paperless-ngx to manage your paper documents digitally

Paperless is a document management system that helps you manage digital scans of your documents. I've been using it for a while, but as with many projects, the developer(s) lost the motivation/time to keep the project up to date; a fork was made (paperless-ng), which after some time also died off. Now a few developers have gotten together, worked through the backlog of issues, and forked the next generation as “paperless-ngx”.

There is a Changelog for the specific changes.

Short version: I have my scanner set up to store scans on a network share as PDFs, and paperless monitors the same share for new documents. If it finds a new file, it performs OCR on it (saving the PDF as well as the text) and runs user-defined rules against it (e.g. detecting the date of the document, the correspondent, or what kind of document it is, …). It actually makes it very easy to manage physical documents: I just pop them in the scanner, select the destination share, and that's it, everything else happens automatically. So if I need to search for documents to do my taxes, I can do it digitally and don't have to go through folders of physical paper.

It can also ingest images or office documents; just toss them into the directory the application monitors for ingesting documents (or use the “Upload” button in the application).

I’m not a big fan of the default dark theme, but a normal light theme is still available.

All in all, it’s pretty nifty, so have a look if this is something that you might find useful.