Deploy a LAMP Stack with Salt, Salt-Cloud, and the AWS CLI: Part 1 – Introduction and Architecture

Introduction

Over the next few weeks we’ll be learning how to use Salt, Salt-Cloud, and the AWS CLI to provision and configure a complete LAMP (Linux, Apache, MySQL, PHP) stack on top of Amazon Web Services (AWS). The infrastructure will consist of the following AWS components:

  • Three (3) t2.small Amazon EC2 instances
  • One (1) Auto Scaling group with an associated launch configuration for additional instances created in response to infrastructure events (high CPU load, high RAM utilization, a sudden spike in requests, etc.)
  • One (1) Elastic Load Balancer (ELB) instance
  • One (1) Amazon Simple Storage Service (S3) bucket for storage of static site assets
  • One (1) Amazon Virtual Private Cloud (VPC)
  • One (1) hosted zone in Amazon’s Route 53 DNS service
  • One (1) Elastic File System (EFS) volume with mount targets in two Availability Zones
  • One (1) Amazon Relational Database Service (RDS) instance configured for Multi-AZ replication
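
As a taste of the Salt-Cloud side of the build, here is a minimal, hypothetical profile that could launch the three t2.small instances listed above. The provider name, AMI ID, key name, and security group are placeholders rather than values from this series:

    # /etc/salt/cloud.profiles.d/lamp.conf (hypothetical example)
    lamp_web_t2small:
      provider: my-ec2-provider      # EC2 provider defined in cloud.providers.d
      image: ami-xxxxxxxx            # placeholder AMI ID
      size: t2.small
      ssh_username: centos           # adjust for your chosen AMI
      keyname: my-keypair            # placeholder EC2 key pair
      securitygroup: lamp-web-sg     # placeholder security group

With a profile like that in place, salt-cloud -p lamp_web_t2small web01 web02 web03 would bring up all three instances in one shot.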

Protect Pillar Data with GPG

Pillar is a great method of storing static data and making it available to Salt states, but individual Pillar items are secure only by virtue of being made available exclusively to the minion or minions targeted in the /srv/pillar/top.sls file. This approach is acceptable if you’re working in a fully trusted environment, but what happens if someone gains unauthorized access to your Salt Master and can read everything stored in Pillar? Such a scenario is a perfect use case for GPG renderers, and we’re going to look at configuring them in this post.

A note on virtualization and randomness

If you are running a Salt Master on a VM (regardless of hypervisor) you may experience long delays during key generation due to the lack of physical hardware, such as a keyboard and mouse, that is commonly used to generate entropy. You can install the rng-tools package using your distribution’s package manager and run its rngd daemon with the following command to provide enough entropy for a strong key-pair:
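
A commonly used approach (assuming a CentOS/RHEL-style system; adjust the package manager for your distribution) looks like this:

    # install rng-tools, which provides the rngd daemon
    yum install -y rng-tools

    # feed the kernel entropy pool from /dev/urandom so key generation doesn't stall
    rngd -r /dev/urandom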

Generating a Key-Pair

We first need to generate a key-pair so we have something to encrypt Pillar data with. Before we can do that, though, we need to create a directory to hold the keys:
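
The commands below follow the layout suggested in the Salt documentation, using /etc/salt/gpgkeys as the GPG home directory; adjust the path if your setup differs:

    # create a dedicated GPG home directory for the Salt Master and lock down its permissions
    mkdir -p /etc/salt/gpgkeys
    chmod 0700 /etc/salt/gpgkeys

    # generate the key-pair inside that directory
    gpg --homedir /etc/salt/gpgkeys --gen-key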

Follow the prompts, providing information appropriate for your organization, and do not set a passphrase for the key-pair.

(NOTE: If you encounter an error when executing the GPG command above, please see this post.)

Export the Public Key

With key creation complete, we can export the public key, which will be used to encrypt Pillar items, and import it on our local machine. Note that you can also reimport the public key on the Salt Master itself if you plan to encrypt sensitive Pillar data directly on it. Export the public key with the following command:
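
Assuming the /etc/salt/gpgkeys home directory from earlier and an arbitrary output file name of exported_pubkey.gpg, the export looks something like this:

    gpg --homedir /etc/salt/gpgkeys --armor --export > exported_pubkey.gpg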

Import the Public Key on your Local Machine
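
Copy the exported file to your workstation and import it into your local keyring. The host name and path below are placeholders; the file name matches the hypothetical export above:

    # copy the exported public key from the Salt Master (host and path are placeholders)
    scp root@saltmaster.example.com:/root/exported_pubkey.gpg .

    # import it into your local GPG keyring
    gpg --import exported_pubkey.gpg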

Encrypt your sensitive pillar data

Now that we have all the pieces in place we can start encrypting sensitive information before placing it in Pillars. For example, to encrypt the string "super_secret_server_stuff" we would use the following command:
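
A typical invocation pipes the string into gpg and encrypts it to the key generated earlier. The recipient name "Salt Master" below is a placeholder for whatever name you gave your key during generation:

    echo -n "super_secret_server_stuff" | gpg --armor --batch --trust-model always --encrypt -r "Salt Master"

The -n flag keeps echo from appending a newline, and --armor produces ASCII output that can be pasted straight into a Pillar file.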

The above command will generate the following PGP “message”:
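
The exact ciphertext depends on your key, but the output will have the familiar ASCII-armored shape:

    -----BEGIN PGP MESSAGE-----
    Version: GnuPG v2

    (base64-encoded ciphertext elided)
    -----END PGP MESSAGE-----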

The above message can be placed into a Pillar using the typical key-value pairing, with one minor change: #!yaml|gpg must be added to the top of the file so Salt recognizes that the Pillar contains GPG-encrypted data. For example:
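
Here is a hypothetical pillar file; the key name secret_stuff is arbitrary, and the encrypted block is indented under it as a YAML multi-line string:

    #!yaml|gpg

    secret_stuff: |
      -----BEGIN PGP MESSAGE-----
      Version: GnuPG v2

      (base64-encoded ciphertext elided)
      -----END PGP MESSAGE-----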

Test your Encrypted Pillar

Once you have successfully created an encrypted Pillar, you should test that it is properly decrypted by the Minion or Minions it’s visible to. First, refresh the newly created Pillar with the following command:
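
From the Salt Master (targeting all Minions here for simplicity):

    salt '*' saltutil.refresh_pillar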

Then query a Minion the Pillar should be visible to and ensure the clear-text equivalent of the cipher text is returned:
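
Using the hypothetical key name from the example above and a placeholder Minion name:

    salt 'minion1.example.com' pillar.item secret_stuff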

For instance, the clear-text version of the enciphered Pillar data we created earlier would look like:
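
(The Minion name below is a placeholder carried over from the query above.)

    minion1.example.com:
        ----------
        secret_stuff:
            super_secret_server_stuff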

That’s all there is to it! If the Minion correctly responded with the clear-text equivalent of the cipher text we created above, you have successfully configured GPG renderers. Note that you can mix and match GPG-encrypted Pillar items and plain-text Pillar items in a single file as long as the #!yaml|gpg header is in place. If you have any questions or encounter any issues, please don’t hesitate to leave a comment.

//J

Quick Tip – Fix gpg: can’t connect to the agent: IPC connect call failed error

I was configuring GPG renderers on our Salt Master a few weeks ago and ran into the following error while generating the PGP key-pair that would be used to encrypt secrets before adding them to their respective Pillars:
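
The failing gpg --gen-key run bailed out with the error from this post’s title:

    gpg: can't connect to the agent: IPC connect call failed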

Eventually, I determined this error was being caused by a GPG agent that was already running under CentOS 7 and that my GPG command was unable to access. To fix the error, kill the running agent with the following command:
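
There are several ways to do this; a simple one is:

    pkill gpg-agent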

Next, restart the agent with the following command:
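
One way to bring it back up is to start it directly as a daemon:

    gpg-agent --daemon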

You should now be able to re-run the GPG command you’re using to generate the key-pair and connect to the curses version of pinentry to input your passphrase. Next week we’ll expand on this error a bit and discuss the entire process of enabling GPG renderers in SaltStack.

//J

Send a Slack Message from a SaltStack Reactor State

My current side project at work is more than a little ambitious. In the past, we’ve had servers randomly “fall off” of our FreeIPA environment and stop authenticating for one reason or another and, after fixing what I honestly believe to be every possible client-side issue a FreeIPA-joined server can experience, I decided there had to be a better way. I’ve always been a huge fan of automation so I wanted a way to automatically detect a server that was failing to authenticate and fix it without any intervention on my part. I started with the quintessential Linux-based automation technology: shell scripting, but I ultimately ended up designing a solution based on SaltStack and SaltStack Reactors. While not yet complete, a few useful learnings have come out of my development work thus far. The learning we’re discussing today is, of course, sending a Slack message from a SaltStack Reactor State.

We aren’t technically a DevOps shop, but we’re steadily introducing more and more DevOps-oriented tools and techniques, not the least of which is Slack. We love Slack and use it for team communication as well as alerting from Rundeck jobs, so it made sense to send alerts from our Salt states to Slack as well. In my use case I wanted to send three different alerts: a start alert, a success alert, or a failure alert. In today’s post we’ll be looking at how to send the start notification, but the only thing that differs in the other two alerts is the message content. Let’s examine the code that makes this happen:
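
The Reactor State below is reconstructed from the line-by-line walkthrough that follows; the channel name, username, and API key are placeholders you’ll replace with your own values, and the file can live wherever your reactor configuration points (for example, /srv/reactor/notify_slack_start.sls):

    send_start_notification_to_slack_channel:
      local.slack.post_message:
        - tgt: {{ data['id'] }}
        - arg:
          - '#non-prod-alerts'
          - "This is a test message sent by {{ data['id'] }} and brought to you by SaltStack."
          - 'saltstack'
          - '<slack_API_key_goes_here>'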

1. send_start_notification_to_slack_channel: – This line is the ID declaration and can be any unique string you choose.

2. local.slack.post_message: – This line declares the slack.post_message function to run. Notice local at the beginning of the line: local tells the Reactor to run slack.post_message on the targeted Minion(s) through the Master’s LocalClient, the same mechanism the salt command uses.

3. tgt: {{ data['id'] }} – This line sets the target for this call to the ID of the Minion whose event triggered this Reactor.

4. arg: – This line simply begins the list of arguments to the state.

5. #non-prod-alerts – This line indicates the Slack channel name to send the message to. Replace this with a channel in your team.

6. This is a test message sent by {{ data['id'] }} and brought to you by SaltStack. – This line contains the actual message body and can be any string you like. Notice that we use {{ data['id'] }} again in the middle of the string. {{ data['id'] }} will be replaced with the ID of the Minion whose event triggered this Reactor, so if, for example, server1.example.com triggered this Reactor State, the resulting message would read “This is a test message sent by server1.example.com and brought to you by SaltStack.”

7. saltstack – This line defines the username that will appear to be sending these messages. Replace this with a username that suits your needs.

8. <slack_API_key_goes_here> – Much as the description implies, your Slack API key goes here. Replace this placeholder with your Slack API key.

Whenever a Minion sends a correctly tagged event to the Master and the state is triggered, a message similar to the one shown below will be sent to the designated Slack channel.
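
An approximation of what that message looks like in the channel (the original post included a screenshot), assuming a Minion named server1.example.com triggered the Reactor:

    saltstack
    This is a test message sent by server1.example.com and brought to you by SaltStack.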

That’s all there is to it! With a mere eight lines of code, you’ve enabled your Salt Reactor States (and configuration states, with a few minor changes) to notify you or your entire team via Slack. This could prove super useful as a way to notify someone when a long-running state has completed, or even function as a “poor man’s” monitoring system. In a future post I’ll bring the concept of Reactor States full circle and discuss how to trigger them from Minions, but for now you can continue your salty foray into DevOps by reading over the Salt.States.Slack documentation.

— J