Quick Tip: Flush the System Security Services (SSS) Cache on Red Hat Enterprise Linux

After deleting a large number of service accounts from our FreeIPA infrastructure, I found that various hosts in our environment could seemingly still resolve them, which was preventing me from adding them locally. I suspected the issue was caching-related and referenced the SSSD man pages to figure out how to purge its cache.

To flush a specific user entry from the SSS cache, run the following command (with <username> standing in for the account in question):
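[bash]
# <username> is a placeholder for the account that is still being resolved
sss_cache -u <username>
[/bash]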

If that doesn’t do the trick, run the following command to purge everything from the SSS cache:
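[bash]
# -E invalidates the entire cache, not just user entries
sss_cache -E
[/bash]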

Verify the desired user entry has been removed from the SSSD cache and is no longer present on the system with the following command:
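[bash]
# Should report "no such user" now that the deleted account can no longer be resolved
id <username>
# (getent passwd <username> works just as well)
[/bash]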

–J

Deploy a LAMP Stack with Salt, Salt-Cloud, and the AWS CLI: Part 1 – Introduction and Architecture

Introduction

Over the next few weeks we’ll be learning how to use Salt, Salt-Cloud, and the AWS CLI to provision and configure a complete LAMP (Linux, Apache, MySQL, PHP) stack on top of Amazon AWS resources. The infrastructure will consist of the following AWS components (a quick AWS CLI example follows the list):

  • Three (3) t2.small Amazon EC2 instances
  • One (1) Auto Scaling group with an associated launch configuration for additional instances created in response to infrastructure events (high CPU load, high RAM utilization, a sudden spike in requests, etc.)
  • One (1) Elastic Load Balancer (ELB) instance
  • One (1) Amazon Simple Storage Service (S3) bucket for storage of static site assets
  • One (1) Amazon Virtual Private Cloud (VPC)
  • One (1) hosted zone in Amazon’s Route 53 DNS service
  • One (1) Elastic File System (EFS) volume with mount targets in two Availability Zones
  • One (1) Amazon Relational Database Service (RDS) instance configured for Multi-AZ replication
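To give a small taste of the AWS CLI side of the toolchain, the VPC that everything else will live inside can be created with a one-liner along these lines (the CIDR block and region here are example values only):

[bash]
# Example values -- adjust the CIDR block and region for your environment
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region us-east-1
[/bash]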

Continue reading “Deploy a LAMP Stack with Salt, Salt-Cloud, and the AWS CLI: Part 1 – Introduction and Architecture”

Quick Tip: Poll the Postfix mail queue and send a Slack notification if it starts to fill up

A few weeks ago I wrote a short post explaining how to flush the Postfix mail queue if it started filling up, but I didn’t mention how to determine whether it’s filling up in the first place. The following script polls the Postfix mail queue and, if the status is anything other than “Mail queue is empty”, sleeps for two minutes to let the queue finish processing and then polls it again. If the queue still isn’t empty, a notification is sent to a specific Slack channel. This script has already proven quite useful for us a couple of times when our upstream mail relay has experienced issues. Note that to get the most out of it you should run it on a schedule using a job scheduler like Cron or Rundeck, so you and/or your team are notified automatically if your mail queue starts filling up unexpectedly.
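A sketch of what that looks like is below. The Slack piece is the part most likely to differ in your environment; this version posts to an incoming-webhook URL, and both the URL and the channel name are placeholders.

[bash]
#!/bin/bash
# Poll the Postfix mail queue and post to Slack if it stays backed up.
# SLACK_WEBHOOK_URL and CHANNEL are placeholders -- substitute your own.
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/<your_webhook_path>"
CHANNEL="#alerts"

if ! mailq | grep -q "Mail queue is empty"; then
    # Give the queue two minutes to finish processing, then check again
    sleep 120
    if ! mailq | grep -q "Mail queue is empty"; then
        SUMMARY=$(postqueue -p | tail -n 1)
        curl -s -X POST -H 'Content-type: application/json' \
            --data "{\"channel\": \"${CHANNEL}\", \"text\": \"Postfix queue on $(hostname) is not draining: ${SUMMARY}\"}" \
            "${SLACK_WEBHOOK_URL}" > /dev/null
    fi
fi
[/bash]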

Quick Tip: Flush the Postfix Mail Queue

Due to vacation schedules I ended up being the lone system administrator for my entire company last week and, in between fighting fires, I found a useful command I wanted to share (I also haven’t had time to write a long-form post this week. Sorry!). Early in the week I noticed an interruption in mail flow and traced it to a misconfiguration in Postfix that was causing mail to loop endlessly and never actually deliver. The loop, combined with the volume of mail we process on a given day, caused a backlog of 585 messages. Needless to say, I didn’t want to wait for Postfix to requeue those messages for delivery on its own. A little googling later, I learned about the -f (flush) option of the postqueue command, which flushes the Postfix mail queue and immediately requeues the messages for delivery. To flush the Postfix mail queue and immediately requeue any messages waiting to be delivered, simply execute:
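[bash]
# Flush the queue: attempt to deliver every queued message right now
postqueue -f
[/bash]

(This is the Postfix equivalent of the traditional sendmail -q.)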

–J

Dockerize an Existing WordPress Installation

Docker is an unbelievably popular technology right now. It’s extremely flexible and can be used at every stage of an application’s lifecycle; in fact, a Docker container and the application running inside it can move together through the application lifecycle all the way to production. In this post, we’re going to cover some of the core concepts of Docker by looking at how you can migrate an existing WordPress installation into Docker (or “Dockerize” it). For the purposes of this article, I’ll be creating a Dockerized copy of this very blog. Let’s get started!

Continue reading “Dockerize an Existing WordPress Installation”

How-to: Harden SSH Authentication with Google Authenticator

A long, long time ago (in this very galaxy) I wrote a post on how to Harden SSH on a Linux instance with Multi-Factor Authentication, which focused on using a Yubikey to add an extra layer of security to the traditional SSH login process. Today, though, I’d like to follow up on that post and examine the use of OTP (One-Time Password) tokens generated by an app to secure SSH login in much the same way as the Yubikey. The primary benefit of using a software-based token as opposed to the physical Yubikey is that you don’t have to incur the expense of purchasing one. We’ll be using Google Authenticator and CentOS, but the same principles apply to virtually all Linux distributions that use PAM-based authentication. It’s also important to note that other apps such as Authy and 1Password can be used to generate the OTP tokens; we’re using Google Authenticator due to its ease of use and ubiquity. Let’s get started!
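Before diving in, it helps to know that the core of the setup boils down to two settings: wiring the google-authenticator PAM module into sshd and telling sshd to prompt for the code. The sketch below uses the stock CentOS paths; it shows the shape of the configuration, not the full walk-through.

[code]
# /etc/pam.d/sshd -- require a one-time code on top of the usual authentication
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config -- make sshd actually prompt for the code
ChallengeResponseAuthentication yes
[/code]

Each user then runs the google-authenticator command once to generate a secret and scan the resulting QR code into the app of their choice.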

Continue reading “How-to: Harden SSH Authentication with Google Authenticator”

Send a Slack Message from a SaltStack Reactor State

My current side project at work is more than a little ambitious. In the past, we’ve had servers randomly “fall off” of our FreeIPA environment and stop authenticating for one reason or another and, after fixing what I honestly believe to be every possible client-side issue a FreeIPA-joined server can experience, I decided there had to be a better way. I’ve always been a huge fan of automation, so I wanted a way to automatically detect a server that was failing to authenticate and fix it without any intervention on my part. I started with the quintessential Linux-based automation technology, shell scripting, but ultimately ended up designing a solution based on SaltStack and SaltStack Reactors. While not yet complete, a few useful lessons have come out of my development work thus far. The one we’re discussing today is, of course, sending a Slack message from a SaltStack Reactor State.

We aren’t technically a DevOps shop, but we’re steadily introducing more and more DevOps-oriented tools and techniques, not the least of which is Slack. We love Slack and use it for team communication as well as alerting from Rundeck jobs, so it made sense to send alerts from our Salt states to Slack as well. In my use case I wanted to send three different alerts: a start alert, a success alert, and a failure alert. In today’s post we’ll be looking at how to send the start notification; the only thing that differs in the other two alerts is the message content. Let’s examine the code that makes this happen:
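The full state is only eight lines. The channel name, username, and API key shown here are placeholders you’ll want to swap for your own:

[code]
send_start_notification_to_slack_channel:
  local.slack.post_message:
    - tgt: {{ data['id'] }}
    - arg:
      - "#non-prod-alerts"
      - "This is a test message sent by {{ data['id'] }} and brought to you by SaltStack."
      - "saltstack"
      - "<slack_API_key_goes_here>"
[/code]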

1. send_start_notification_to_slack_channel: – This line is the ID declaration and can be any unique string you choose.

2. local.slack.post_message: – This line declares the slack.post_message state. Notice local at the beginning of the line; it instructs the Minion running this Reactor State to run the slack.post_message state locally.

3. tgt: {{ data['id'] }} – This line sets the target for this state as the ID of the Minion running this Reactor.

4. arg: – This line simply begins the list of arguments to the state.

5. #non-prod-alerts – This line indicates the Slack channel name to send the message to. Replace this with a channel in your team.

6. This is a test message sent by {{ data['id'] }} and brought to you by SaltStack. – This line contains the actual message body and can be any string you like. Notice that we use {{ data['id'] }} again in the middle of the string. {{ data['id'] }} will be replaced with the ID of the Minion running this Reactor so if, for example, server1.example.com ran this Reactor State the resulting message would read “This is a test message sent by server1.example.com and brought to you by SaltStack.”

7. saltstack – This line defines the username that will appear to be sending these messages. Replace this with a username that suits your needs.

8. <slack_API_key_goes_here> – Much as the description implies, your Slack API key goes here. Replace this placeholder with your Slack API key.

Whenever a Minion sends a correctly tagged event to the Master and the state is triggered, a message like the test message above will be posted to the designated Slack channel.

That’s all there is to it! With a mere eight lines of code you’ve enabled your Salt Reactor States (and, with a few minor changes, your configuration states) to notify you or your entire team via Slack. This could prove super useful as a way to notify someone when a long-running state has completed, or even function as a “poor man’s” monitoring system. In a future post I’ll bring the concept of Reactor States full circle and discuss how to trigger them from Minions, but for now you can continue your salty foray into DevOps by reading over the Salt.States.Slack documentation.

— J

Quick Tip: Resolve Salt-Cloud EC2 Instance Provisioning Failure during Dependency Installation

I recently began learning SaltStack to complement my knowledge of Puppet. I always like to use a new technology I’m learning to accomplish meaningful tasks related to whatever project I’m working on at the time, and SaltStack, along with its cloud-oriented companion Salt-Cloud, seemed like it would be very helpful when migrating my personal website and blog from Squarespace back to Amazon Web Services. Once I had deployed a salt-master server and configured it with the necessary packages and profiles, I attempted to deploy my first EC2 instance using Salt-Cloud and ran into the following error:

[bash]salt-cloud * ERROR: Failed to run install_amazon_linux_ami_deps()!!![/bash]
Continue reading “Quick Tip: Resolve Salt-Cloud EC2 Instance Provisioning Failure during Dependency Installation”

Harden SSH on a Linux instance with Multi-Factor Authentication

With stories like the recent iCloud hack popping up in mainstream media, security and privacy are becoming increasingly important to the average consumer. These topics, however, are even more important to system administrators, those of us entrusted with safeguarding sensitive data against an ever-changing threat landscape. As technology professionals we have access to a number of tools to assist in defending our networks and protecting private data, but one of the most powerful tools at our disposal is multi-factor authentication. In this post we will take a high-level look at multi-factor authentication and discuss the implementation of one multi-factor authentication solution to secure SSH access on a Linux instance. Continue reading “Harden SSH on a Linux instance with Multi-Factor Authentication”

Building a Scalable Highly Available Web Cluster Part 1: The Load Balancing Tier

In an earlier post we discussed the overall cluster architecture, and now we’re going to begin the actual cluster configuration, starting with the load-balancing tier. Each load balancer will run the following software (sketched briefly below the list):

  • HAProxy – HAProxy will facilitate the actual load balancing of traffic to web nodes behind the load balancers.
  • Keepalived – Keepalived will allow for IP failover in the event one of the load balancers fails.
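Here’s roughly what each piece ends up looking like; the hostnames, IP addresses, and interface name are placeholders, not a working configuration:

[code]
# haproxy.cfg (sketch) -- round-robin traffic across the web nodes
frontend www
    bind *:80
    default_backend web_nodes

backend web_nodes
    balance roundrobin
    server web01 10.0.0.11:80 check
    server web02 10.0.0.12:80 check

# keepalived.conf (sketch) -- float a virtual IP between the two load balancers
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100
    }
}
[/code]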

Continue reading “Building a Scalable Highly Available Web Cluster Part 1: The Load Balancing Tier”