Protect Pillar Data with GPG

Pillar is a great method of storing static data and making it available to Salt states, but the individual items that constitute Pillar data are secure only by virtue of the fact that they're made available solely to the minion or minions defined in the /srv/pillar/top.sls file. This approach is acceptable if you're working in a fully trusted environment, but what happens if someone gains unauthorized access to your Salt Master and can access all data stored in Pillar? Such a scenario is a perfect use case for GPG renderers and we're going to look at configuring them in this post.

A note on virtualization and randomness

If you are running a Salt Master on a VM (regardless of hypervisor) you may experience long delays during key generation due to the lack of physical hardware, such as a keyboard and mouse, that is commonly used to generate entropy. You can install the rng-tools package using your distribution's package manager and seed the entropy pool with the following command to generate sufficient entropy for a strong key-pair:
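A minimal example, assuming rng-tools is already installed and you are running as root:

    # Feed the kernel entropy pool from /dev/urandom so key generation doesn't stall
    rngd -r /dev/urandom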

Generating a Key-Pair

We need to generate a key-pair so we have something to encrypt Pillar data with. First, though, we need to create a directory to hold the keys:
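A sketch of the directory creation and key generation, assuming the conventional /etc/salt/gpgkeys location used in the Salt documentation:

    # Create a dedicated GPG home directory for the Salt Master and lock down its permissions
    mkdir -p /etc/salt/gpgkeys
    chmod 0700 /etc/salt/gpgkeys

    # Generate the key-pair inside that directory
    gpg --gen-key --homedir /etc/salt/gpgkeys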

Follow the prompts providing information appropriate for your organization and do not provide a password for the key-pair.

(NOTE: If you encounter an error when executing the GPG command above please see this post.)

Export the Public Key

With key creation complete, we can export the public key, which will be used to encrypt Pillar items, and import it on our local machine. Note that you can also import the public key on the Salt Master itself if you plan to encrypt sensitive Pillar data directly on it. Export the public key with the following command:
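One way to do this, again assuming the /etc/salt/gpgkeys homedir (the output file name is arbitrary):

    # Export the public key in ASCII-armored form so it can be copied to another machine
    gpg --homedir /etc/salt/gpgkeys --armor --export > exported_pubkey.gpg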

Import the Public Key on your Local Machine
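Copy the exported key to your workstation, then import it into your local keyring, for example:

    # Import the Salt Master's public key so pillar data can be encrypted locally
    gpg --import exported_pubkey.gpg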

Encrypt your sensitive pillar data

Now that we have all the pieces in place we can start encrypting sensitive information before placing it in Pillars. For example, to encrypt the string “super_secret_server_stuff” we would use the following command:
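A sketch of the command, where "Salt Master" stands in for whatever name or e-mail address you gave the key when you generated it:

    # Encrypt the string for the Salt Master's key and print ASCII-armored output
    echo -n "super_secret_server_stuff" | gpg --armor --encrypt -r "Salt Master"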

The above command will generate the following PGP “message”
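The exact ciphertext will differ for every key and every run, but it will have this general shape:

    -----BEGIN PGP MESSAGE-----

    (several lines of base64-encoded ciphertext)
    -----END PGP MESSAGE-----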

The above message can be placed into a Pillar using the typical key-value pairing with one minor change: #!yaml|gpg must be added to the top of the file for Salt to recognize that the Pillar contains GPG encrypted data. For example:
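A minimal sketch of such a pillar file, where secret_stuff is just a placeholder key name:

    #!yaml|gpg

    secret_stuff: |
      -----BEGIN PGP MESSAGE-----

      (the armored ciphertext produced above)
      -----END PGP MESSAGE-----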

Test your Encrypted Pillar

Once you have successfully created an encrypted Pillar you should test to ensure the Pillar is properly decrypted by the Minion or Minions it’s visible to. First, sync the newly created Pillar with the following command:
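Something like the following, run from the Salt Master (narrow the '*' target if you only want to refresh specific minions):

    # Tell the targeted minions to refresh their pillar data from the master
    salt '*' saltutil.refresh_pillar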

Then query a minion the Pillar should be visible to and ensure the clear text equivalent of the cipher text is returned:
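For example, using the placeholder key name from above and a hypothetical minion ID of fileserver01:

    # Ask the minion for the decrypted value of the encrypted pillar key
    salt 'fileserver01' pillar.item secret_stuff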

For instance, the clear text version of the enciphered Pillar data we created earlier would look like:
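Assuming the same placeholder names, the output should resemble:

    fileserver01:
        ----------
        secret_stuff:
            super_secret_server_stuff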

That’s all there is to it! If the minion correctly responded with the clear text equivalent of the cipher text we created above you have successfully configured GPG renderers. Note that you can mix and match GPG encrypted Pillar items and plain text Pillar items in a single file as long as the #!yaml|gpg header is in place. If you have any questions or encounter any issues please don’t hesitate to leave a comment.

//J

Quick Tip – Fix gpg: can’t connect to the agent: IPC connect call failed error

I was configuring GPG renderers on our Salt Master a few weeks ago and I ran into the following error while generating the PGP key-pair that would be used to encrypt secrets before adding them to their respective pillars:
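The error in question was the one in this post's title:

    gpg: can't connect to the agent: IPC connect call failed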

Eventually, I determined this error was being caused by a GPG agent that was already running under CentOS 7, which my GPG command was unable to access. To fix this error, kill the running agent with the following command:
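One way to do this (any method of killing the gpg-agent process will do):

    # Kill any gpg-agent processes that are already running
    pkill gpg-agent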

Next, restart the agent with the following command:
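For example:

    # Start a fresh gpg-agent in daemon mode
    gpg-agent --daemon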

You should now be able to re-run the GPG command you’re using to generate the key-pair and connect to the curses version of pinentry to input your passphrase. Next week we’ll expand on this error a bit and discuss the entire process of enabling GPG renderers in SaltStack.

//J

Force a VMware VM to remove a stuck SCSI disk without a reboot

I was adding additional storage to a virtual file server a couple of weeks ago and I accidentally added a new disk using the wrong interface type. Upon realizing my mistake, I removed the disk in VMware vDirector, but, for whatever reason, the virtual machine, the OS or some combination of the two simply wouldn’t realize the disk had been removed and was no longer available on the SCSI bus. The block device ID consistently showed up in fdisk output even though it no longer existed and the SCSI bus had been scanned multiple times in vain. This could’ve probably been resolved with a simple reboot, but this VM was providing shares for production systems and couldn’t go down during the business day. A little poking around in the /sys/class/block directory pointed me in the right direction and I finally figured out I could trigger a rescan for only the offending drive with the following command:
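A sketch of the command, where sdb stands in for whatever block device is stuck on your system (run as root):

    # Ask the kernel to rescan just this one SCSI device
    echo 1 > /sys/class/block/sdb/device/rescan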

After rescanning for only the stuck drive, it finally disappeared from the output of fdisk and I was able to add the drive again using the correct interface type. I realize there are much cleaner ways to do this, but most, if not all of them, require a reboot and I had to keep the system online while dealing with this issue. If anyone knows of another way to force a stuck block device to be dropped without a reboot please feel free to comment below.

–J

Quick Tip: Flush the Postfix Mail Queue

Due to vacation schedules I ended up being the lone System Administrator for my entire company last week and, in between fighting fires, I found a useful command I wanted to share. (I also haven't had time to write a long-form post this week. Sorry!) Early in the week I noticed an interruption in mail flow and traced it to a misconfiguration in Postfix which was causing mail to loop endlessly and never actually deliver. The loop, combined with the amount of mail we process on a given day, caused a backlog of 585 messages. Needless to say, I didn't want to wait for Postfix to requeue those messages for delivery itself. So, I did a little googling and learned about the -f (flush) option of the postqueue command. To flush the Postfix mail queue and immediately requeue any messages waiting to be delivered, simply execute:
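    # Flush the queue and attempt immediate delivery of every queued message
    postqueue -f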

–J