Exported Resources and Evaluation in Puppet

I recently ran into an interesting “gotcha” within Puppet with regards to exported resources and parameter evaluation.

For a quick refresher: exported resources are resources that a node declares but does not apply to itself; instead, they are exported so that another node's catalog can collect them. Take Nagios, for example. You can create several nodes with exported Nagios configurations that are then collected by a Nagios node. For more information, review the Puppet documentation on exported resources.
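To make that concrete, here’s a minimal sketch (the resource values are hypothetical) of a monitored node exporting a Nagios host definition and the Nagios node collecting every exported definition:

# On each monitored node: declare (but don't apply) a nagios_host resource.
# The address and template values below are hypothetical.
@@nagios_host { $facts['networking']['fqdn'] :
  ensure  => present,
  address => $facts['networking']['ip'],
  use     => 'generic-host',
}

# On the Nagios node: collect every exported nagios_host into its catalog.
Nagios_host <<| |>>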

The Gotcha – Exported Resources and their Parameters

I encountered the problem when attempting to collect resources by a parameter value, which is a perfectly valid way to filter a collector.

For example, I built the following resource that I planned to export:

define profiles::gitlab_runner::cloud_app_config_tab::app_env
(
  String $app_name = $title,
  Array $app_env_secrets = [],
  Hash $app_env = {},
)
{ ... }

Note that I set the app_name above to default to the title of the resource. I later attempted to export an instance of that type:

@@profiles::gitlab_runner::cloud_app_config_tab::app_env { 'foobar' :
  app_env_secrets => $secrets,
  app_env         => $env,
}

For the final piece, I built a CI runner configuration that collected specific instances of that exported defined type:

Profiles::Gitlab_runner::Cloud_app_config_tab::App_env <<| app_name == 'foobar' |>>

I ran the Puppet agent on both the exporting and collecting nodes. The agent created no resources. What did I do wrong?

I queried our PuppetDB instance and found the issue: Puppet had created my exported resources; but, the export only stored the app_env_secrets and app_env parameters. Why?
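For reference, the check looked roughly like this; it’s a sketch that assumes PuppetDB is listening on its default HTTP port (8080) on the local host:

# Ask PuppetDB's v4 resources endpoint for instances of the exported type,
# then inspect which parameters were actually stored with each resource.
curl -G http://localhost:8080/pdb/query/v4/resources \
  --data-urlencode 'query=["=", "type", "Profiles::Gitlab_runner::Cloud_app_config_tab::App_env"]'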

It turns out that I’d run into a nuance of evaluation time vs. export time. At export time, default parameter expressions are NOT evaluated; Puppet only stores the values of parameters that were explicitly set, so that the resource can be evaluated later. This creates the problem I saw earlier: Puppet doesn’t assign the value of $title to app_name until it evaluates the resource, and evaluation occurs AFTER collection.

The Solution

I solved the issue by collecting resources based on either their parameter values or the titles (since I expect titles to match the app_name parameter).

Profiles::Gitlab_runner::Cloud_app_config_tab::App_env <<| app_name == 'foobar' or title == 'foobar' |>>
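Another option (sketched here as a variation on the export from earlier) is to set app_name explicitly when exporting. An explicitly set parameter is stored with the exported resource, so the original parameter-based collector matches as intended:

@@profiles::gitlab_runner::cloud_app_config_tab::app_env { 'foobar' :
  app_name        => 'foobar', # explicitly set, so the value is stored at export time
  app_env_secrets => $secrets,
  app_env         => $env,
}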

When utilizing exported resources, try to keep in mind which values will be set and available at evaluation time vs export time. Puppet won’t evaluate or set any parameters to their defaults until after it collects them.


Ruby with Docker Compose as a Non-root User

EDIT 2021-04-01: When using ruby with docker compose, as noted by a commenter, bundle install should be run as the root user. I’d mistakenly set the user before running the bundle install layer. I’ve updated the code below to fix this. If you had problems before, try the updated examples.

I’ve recently begun experimenting with Docker and docker-compose in my Ruby development and am fairly pleased with the results. Building ruby with docker-compose keeps my environment clean while giving me a working rails tool set. I derived most of my workflow from this guide. While that served quite well as a starting place, one major annoyance cropped up again and again: my container ran as root.

The Problem at Hand

First, running as root in production creates an obvious security risk. Containers are supposed to isolate processes. However, history tells us that hackers find creative ways to “crash-out” of containers into the host operating system. If your process runs as root, a successful escape hands the attacker administrative privileges on the host.

Second, running code as root complicates your local development process. Rails creates temporary files owned by root, which makes cleaning up your local copy of the code a chore. Further, you must specify special user parameters in order to run Rails commands, such as generators or migrations. These reasons alone motivated a change in process for me, regardless of security.

Goals

Let’s establish some goals going forward.

First, we will build a Dockerfile which can be shared between production and our development environment. This file will enforce consistency in how we build our production or development images, whether they run locally or in a production cluster.

Second, we will build an environment where containers envelope all of our Ruby and Rails tools. We will not install any Ruby or Rails tooling into our host operating system.

Third, we will run as a non-root user in both production and development. This measure limits our attack surface and helps us achieve our final goal.

Finally, we will set the user running our local development instance to our own user, the one that owns the checked-out code. This ensures that generated files stay consistent with the rest of our code files. Additionally, this user can run Rails generators, migrations, and other commands natively without specifying special UIDs. We’ll run our ruby with docker-compose to set up this development environment.

The Starting Point – An Imperfect Setup

Following the Docker guide on Rails applications led to the creation of the following Dockerfile and docker-compose.yml files.

#Dockerfile
FROM ruby:2.6-alpine

LABEL maintainer="Aaron M. Bond"

ARG APP_PATH=/opt/myapp

RUN apk add --update --no-cache \
        bash \
        build-base \
        nodejs \
        sqlite-dev \
        tzdata \
        mysql-dev && \
      gem install bundler && \
      mkdir $APP_PATH 

COPY docker-entrypoint.sh /usr/bin

RUN chmod +x /usr/bin/docker-entrypoint.sh

WORKDIR $APP_PATH

COPY Gemfile* $APP_PATH/

RUN bundle install

COPY . $APP_PATH/

ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3000

CMD ["rails", "server", "-b", "0.0.0.0"]

As a quick review, this file first pulls an image built for Ruby applications running the 2.6 family of Ruby versions. It then installs the necessary operating system packages and creates an application folder.

Next it copies in an entrypoint script, which will be the default entry for any images created with docker run. (In my case, this command cleans up the Rails server pidfile and runs whatever command is passed.)
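For completeness, here’s a minimal sketch of what such an entrypoint script might look like; the pidfile path is an assumption based on a default Rails layout under /opt/myapp:

#!/bin/bash
# docker-entrypoint.sh (sketch)
set -e

# Remove a stale Rails server pidfile left behind by a previous container run.
rm -f /opt/myapp/tmp/pids/server.pid

# Hand control to whatever command was passed to the container.
exec "$@"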

The build file then copies Gemfile* into the application folder and runs bundle install to install and compile necessary gems.

Finally, the file indicates its entrypoint, notes that port 3000 will be exposed, and sets up a default command argument to simply run rails server -b 0.0.0.0.

With no modifications, all of these steps will execute as root within the built image.

# docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=somesecret
      - MYSQL_DATABASE=myapp
      - MYSQL_USER=myapp_user
      - MYSQL_PASSWORD=devtest
    volumes:
      - datavolume:/var/lib/mysql
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: bash -c "rm -f /tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/opt/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    tty: true
    stdin_open: true

volumes:
  datavolume:

This docker-compose.yml file creates services for use in our development environment. (Another technology will handle production, such as a Kubernetes deployment.)

First, we define a db service (container), which utilizes the MySQL image from Docker Hub and passes some information specific to the image for database creation.

Second, we define a web service (container), which builds an image from our Dockerfile and mounts our local code directory over the top of the previous application directory. This should enable us to see simple changes instantaneously, without rebuilding the image.

The Cracks in the Facade

If you start here with your application and run docker-compose up, you’ll be able to see some of the issues that arise from root execution. The root process within the container will pollute your app’s tmp directory with root-owned files that make cleanup annoying.

Generators demonstrate a bigger problem. Assume we wanted to generate a new controller called Greetings with an action of hello (yes, I blatantly stole this example directly from the Ruby on Rails guide). The following command should create an ephemeral container with our image, run the rails generator, and remove the container (--rm) when complete.

docker-compose run --rm web bundle exec rails generate controller Greetings hello

This appears logical, but the command will result in a mess. The root user would now own all of the files generated by this command within our source code. We can solve this problem by adding a bit of a hack:

docker-compose run --rm --user $(id -u):$(id -g) web bundle exec rails generate controller Greetings hello

This offensive little command runs the id utility of Linux (twice) to get our UID and GID and passes that to the run command. Now, our generators will run using our own user identity. However, the ugliness of this command offends my delicate sensibilities.

Even after we complete our clunky development process, our local system administrator will definitely complain that our Rails server is running as root in our cluster.

Mitigation Step 1 – Adding an App-Specific User

To begin untangling ourselves from root, we must start by creating a non-root user within our image. This user should run our Rails server process and take over when the application-specific portions of the image are built. Take a look at the below, modified version of our Dockerfile to see how we add an app user.

#Dockerfile
FROM ruby:2.6-alpine

LABEL maintainer="Aaron M. Bond"

ARG APP_PATH=/opt/myapp
ARG APP_USER=appuser
ARG APP_GROUP=appgroup

RUN apk add --update --no-cache \
        bash \
        build-base \
        nodejs \
        sqlite-dev \
        tzdata \
        mysql-dev && \
      gem install bundler && \
      addgroup -S $APP_GROUP && \
      adduser -S -s /sbin/nologin -G $APP_GROUP $APP_USER && \
      mkdir $APP_PATH && \
      chown $APP_USER:$APP_GROUP $APP_PATH

COPY docker-entrypoint.sh /usr/bin

RUN chmod +x /usr/bin/docker-entrypoint.sh

WORKDIR $APP_PATH

COPY --chown=$APP_USER:$APP_GROUP Gemfile* $APP_PATH/

RUN bundle install

USER $APP_USER

COPY --chown=$APP_USER:$APP_GROUP . $APP_PATH/

ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3000

CMD ["rails", "server", "-b", "0.0.0.0"]

Here, we’ve added some variables for an app user name and app group name under which we intend to run.

Our initial setup step, which still runs as root, uses addgroup and adduser to create the specified group and user. Additionally, after we’ve created our application path, we change the owner to said user and group.

Once we’ve completed the other root tasks (copying in our entrypoint and running bundle install), the USER directive instructs Docker that all subsequent RUN directives and the container execution itself should run as our app user. We also add our app user and group as the --chown argument to the COPY directives which push our app into the container. If we built an image and ran this container right now, the app would execute as a new, non-root user.

While this is a fantastic first step and secures our application in production, we’ve missed the mark on making our development environment easier to use.

While appuser isn’t root, it’s still some random user within the container which doesn’t match our local machine’s user. Files are still going to be created as a non-matching user in the tmp directories and by any generator commands we run in containers.

Mitigation 2 – Making our App-Specific User Match the Development User

To relieve our development pain, we have to force our containers to act as our own host user when working with our source code. Fortunately for us, Linux sees users and groups only by their IDs.

In our images, we’ll have to explicitly set IDs for the UID and GID that the application (by default) will utilize. Then, in development, we’ll want to override that default with our own UID and GID.

Let’s start by adding more build arguments in the Dockerfile for our two ids and using those arguments in our addgroup and adduser commands.

#Dockerfile
FROM ruby:2.6-alpine

LABEL maintainer="Aaron M. Bond"

ARG APP_PATH=/opt/myapp
ARG APP_USER=appuser
ARG APP_GROUP=appgroup
ARG APP_USER_UID=7084
ARG APP_GROUP_GID=2001

RUN apk add --update --no-cache \
        bash \
        build-base \
        nodejs \
        sqlite-dev \
        tzdata \
        mysql-dev && \
      gem install bundler && \
      addgroup -g $APP_GROUP_GID -S $APP_GROUP && \
      adduser -S -s /sbin/nologin -u $APP_USER_UID -G $APP_GROUP $APP_USER && \
      mkdir $APP_PATH && \
      chown $APP_USER:$APP_GROUP $APP_PATH

COPY docker-entrypoint.sh /usr/bin

RUN chmod +x /usr/bin/docker-entrypoint.sh

WORKDIR $APP_PATH

COPY --chown=$APP_USER:$APP_GROUP Gemfile* $APP_PATH/

RUN bundle install

USER $APP_USER

COPY --chown=$APP_USER:$APP_GROUP . $APP_PATH/

ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3000

CMD ["rails", "server", "-b", "0.0.0.0"]

Setting these IDs up as ARG directives with a default value opens the door to docker-compose.yml to override them. The numbers are not terribly important. You should pick IDs that are in the standard user and group id ranges. Also, by best practice, ensure your different apps have unique IDs from each other.
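As an aside, if you ever build the image directly with docker instead of docker-compose, the same defaults can be overridden on the command line (the image tag below is just an example):

docker build \
  --build-arg APP_USER_UID=$(id -u) \
  --build-arg APP_GROUP_GID=$(id -g) \
  -t myapp:dev .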

Next, we’ll add these arguments to the docker-compose.yml file.

# docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=somesecret
      - MYSQL_DATABASE=myapp
      - MYSQL_USER=myapp_user
      - MYSQL_PASSWORD=devtest
    volumes:
      - datavolume:/var/lib/mysql
  web:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - APP_USER_UID=${APP_USER_UID}
        - APP_GROUP_GID=${APP_GROUP_GID}
    command: bash -c "rm -f /tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/opt/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    tty: true
    stdin_open: true

volumes:
  datavolume:

Note that under the web service definition’s build key, we’ve added an args section referencing our two args. Here, we’re setting them equal to environment variable values of the same name. To give those variables sensible defaults, we can add a special file called .env alongside docker-compose.yml; docker-compose reads it automatically when substituting variables.

#.env
APP_USER_UID=7084
APP_GROUP_GID=2001

As we’ve currently built everything, docker-compose up will still have the undesired behavior of running as a differing UID and GID; but, passing overriding values to those environment variables allows us to run as ourselves.

APP_USER_UID=$(id -u) APP_GROUP_GID=$(id -g) docker-compose up --build

After we’ve run the build a single time, our local development version of the image will execute as a user matching our UID and GID by default. Any docker-compose run commands we run after this step will execute properly.
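As an example, the generator command from earlier now works without any --user gymnastics, and the generated files land in your source tree owned by you:

docker-compose run --rm web bundle exec rails generate controller Greetings hello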

However, I don’t want to have to remember to set those variables every time I rebuild this container image (or build any other container image). So, I will specify in my .bashrc file on my local machine that these two environment variables should always be set to match my own user.

#Added to the bottom of ~/.bashrc
export APP_USER_UID=$(id -u)
export APP_GROUP_GID=$(id -g)

So long as I am consistent in naming these variables in my Dockerfile and docker-compose.yml files of other projects, I will get a consistent environment for every project.

A Quick Aside

I want to highlight one problem that I ran into that, while specific to my environment, might bite someone else. Few people will see this issue, but for completeness, I’m noting it here.

When using the above setup, I ran into build failures when building my Docker image. It turns out that my user on my development machine is an Active Directory user, so its UID is NOT within the range that the container’s addgroup and adduser tools consider sane. The same was true of my group ID.

abond@abondlintab01:~$ id -u
500000001
abond@abondlintab01:~$ id -g
500000003

Since Active Directory used such large IDs, I couldn’t utilize this user’s UID and GID for the container IDs. The build would fail on attempting to run addgroup.

$ APP_USER_UID=$(id -u) APP_GROUP_GID=$(id -g) docker-compose build
db uses an image, skipping
Building web
Step 1/18 : FROM ruby:2.6-alpine

...

Executing busybox-1.30.1-r2.trigger
OK: 268 MiB in 75 packages
Successfully installed bundler-2.1.0
1 gem installed
addgroup: number 500000003 is not in 0..256000 range
ERROR: Service 'web' failed to build: The command '/bin/sh -c apk add --update --no-cache         bash         build-base         nodejs         sqlite-dev         tzdata         mysql-dev         postgresql-dev &&       gem install bundler &&       addgroup -g $APP_GROUP_GID -S $APP_GROUP &&       adduser -S -u $APP_USER_UID -G $APP_GROUP $APP_USER &&       mkdir $APP_PATH &&       chown $APP_USER:$APP_GROUP $APP_PATH' returned a non-zero code: 1

I resolved this by creating another Linux user (not on the domain) with a sane UID which I use to develop Ruby apps. This shouldn’t be necessary for most users.

A Quick Review

Using Ruby with docker-compose can simplify your development processes and keep your environment slim.

However, running containers as root is a bad security practice. The default instructions given by Docker for Rails app development provide a functional setup, but ignore the security of root privileges. Further, running as root in dev complicates your workflow and your environment.

By creating a default app user and group with a specific UID and GID, you eliminate root processes in your container and make your production sysadmins happy.

To take it a step further, you can override that UID and GID on your machine to match YOUR user and simplify your development workflow.

Docker and containers are great tools for development; but, finding the right environment settings and patterns can be difficult. Hopefully, this pattern helps someone out there who is as new to running ruby with docker compose as I was when I started.


Installing Puppet Server on CentOS 7

I do want to write more about the synergy between Puppet and Ansible; but, several people have asked me for more information on getting started with Puppet. The last time I installed Puppet Server, I took extensive notes. I figured I’d share those here, to help anyone else save time who’s just getting started by installing puppet server.

These instructions are written specifically for CentOS 7. However, most of these steps pertain to any Linux-based OS. Additionally, these instructions cover installing Puppet 4.x, since 5.0 is not yet used in Puppet Enterprise.

Escalating to Root

Most of the commands in this document require that you run them as the root user. Using the sudo tool, you can escalate to root for the rest of your session.

[lkanies@puppetlab ~]$ sudo su -
[root@puppetlab ~]#

Setting Up Puppetlabs Repositories

If you are running Red Hat, Debian, Ubuntu or SUSE Linux, Puppetlabs provides repositories to easily install their software. Installing on any other system is a little beyond the scope of this article. If you need to install puppet on another system, read the Puppetlabs documentation for more information.

Start by installing the software collection RPM from Puppetlabs.

[root@puppetlab ~]# rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
Retrieving https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppetlabs-release-pc1-1.1.0-5.el################################# [100%]

Installing Puppet Server

Now that you have installed the repositories, use your package manager to install the following packages: puppetserver, puppetdb and puppetdb-termini. While puppetdb and puppetdb-termini are not required, I recommend you install them unless you already have a separate puppetdb server in place.

[root@puppetlab ~]# yum install puppetserver puppetdb puppetdb-termini

Configuring and Starting Puppet Server

Previously, Puppet master servers ran on Ruby inside a Rack server configuration. Because this required manual configuration and didn’t perform as well under load, Puppetlabs wrote the puppetserver application to run the same code in a Java process. When the puppetserver process starts for the first time, it creates a certificate authority (CA) certificate along with the server’s own host certificate. The CA certificate will be used to sign any agent that connects to the server. The common name of the host certificate derives from the hostname of the server, so check now to make sure the hostname is what you prefer it to be.

[root@puppetlab ~]# hostname
puppetlab.example.com

You must ensure that this matches the FQDN your nodes will be using to contact this server. Before configuring the server, make this change now if you need to.

Next, you edit /etc/puppetlabs/puppet/puppet.conf to set up your server names.

[main]
dns_alt_names = <fqdn of puppet server>
[agent]
server = <fqdn of puppet server>

If you have additional dns names on which this server may be contacted, add them to the dns_alt_names entry (comma-separated). In addition, the server entry informs the puppet agent process what machine to contact as its puppet master. (In this case, the server is its own master.)
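For example, if this server will also be reachable at a hypothetical short alias of puppet.example.com, the configuration would look like this:

[main]
dns_alt_names = puppetlab.example.com,puppet.example.com
[agent]
server = puppetlab.example.com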

Finally, start the puppetserver service.

[root@puppetlab ~]# systemctl start puppetserver

Starting the puppetserver service generates the certificates described above. Use the following command to list them and verify their details. (We’ll need to source the profile that adds the puppet binary to the path first.)

[root@puppetlab puppet]# . /etc/profile.d/puppet-agent.sh
[root@puppetlab puppet]# puppet cert list --all

Adding the First Puppet Agent – the Server Itself

Because puppet can control services, you should configure puppet to keep its own services running. When writing puppet code, you start with one or more manifests: collections of resources that are controlled by puppet. Most of the time, those manifests are organized into modules, which are smaller chunks of code meant to control one particular technology or set of configuration. For any mature technology, it’s very possible that someone else has already done much of the work for you in writing a module. Modules can be shared and downloaded from a central repository called the Puppet Forge.

Because of the limited scope of this post, I’m going to put all of my code directly into a node (client) definition on the manifest. This is generally a bad practice; but, it will serve to demonstrate puppet more succinctly. If you want to read more on modules and best practices, you should check the documentation on Puppetlabs’s website and Gary Larizza’s blog on the roles and profiles pattern.

Your First Manifest

First, we’re going to create a file called /etc/puppetlabs/code/environments/production/manifests/nodes.pp. This is a manifest: a collection of resources that will be applied directly to puppet agents. Note that we’re in a folder structure called environments/production. Puppet allows you to split your code into different environments so that you can test manifest changes without potentially breaking existing servers. We’re going to start with just the default production environment.

Open /etc/puppetlabs/code/environments/production/manifests/nodes.pp in your favorite editor. Below, I’m going to use the hostname puppetlab.example.com; but, you should replace that with the FQDN of your puppet server.

node 'puppetlab.example.com' {
  notify { 'hello world' : }
}

You have just written a node definition. Node definitions describe some resources that should only be applied on the matching node. Typically, the global manifests (any .pp files located in the manifests folder of an environment) contain mostly node definitions. So, to be clear, we’ve just informed our puppet server to print the ‘hello world’ message when it contacts itself as a client. Run puppet agent with the -t flag to see it in action:

[root@puppetlab manifests]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetlab.example.com
Info: Applying configuration version '1512101298'
Notice: hello world
Notice: /Stage[main]/Main/Node[puppetlab.example.com]/Notify[hello world]/message: defined 'message' as 'hello world'
Notice: Applied catalog in 0.03 seconds

Controlling Services

Hello world examples are great and all; but, shouldn’t we do something more meaningful? Most puppet nodes run an agent daemon that checks in with the puppet server from time to time to make sure the configuration hasn’t drifted. Change your node definition manifest to match the one below to ensure this service (daemon) is running.

node 'puppetlab.example.com' {
  service { 'puppet' :
    ensure => 'running',
  }
}

Now, instead of a ‘notify,’ we have a ‘service’ declaration in our node definition. Both notify and service are examples of resources. Puppet code is written in resources which are collected into a catalog to be applied by the agent. Run puppet agent again to see your service start.

[root@testpuppet manifests]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetlab.example.com
Info: Applying configuration version '1512101760'
Notice: /Stage[main]/Main/Node[puppetlab.example.com]/Service[puppet]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Main/Node[puppetlab.example.com]/Service[puppet]: Unscheduling refresh on Service[puppet]
Notice: Applied catalog in 0.12 seconds

For more information on the types of resources you can declare, review the puppet documentation.
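As a taste of what else is available, here’s a hypothetical expansion of the same node definition that manages a file resource alongside the service:

node 'puppetlab.example.com' {
  # Keep the puppet agent daemon running.
  service { 'puppet' :
    ensure => 'running',
  }

  # A file resource: manage the message of the day on this node.
  file { '/etc/motd' :
    ensure  => 'file',
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => "This node is managed by puppet.\n",
  }
}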

Adding Another Node

While getting a server to be its own client is stimulating, it seems like now would be a good time to create another client. We started with a server called puppetlab.example.com, so I’m going to work on another CentOS 7 box called testnode.example.com. Add another node definition to your manifest just like the first, except target your new client node instead of your puppet server.

node 'puppetlab.example.com' {
  service { 'puppet' :
    ensure => 'running',
  }
}

node 'testnode.example.com' {
  service { 'puppet' :
    ensure => 'running',
  }
}

Installing and Configuring the Puppet Agent

After setting up your puppet server manifest, you want to install and configure the puppet agent on your client machine. Log in, escalate to root, and run the following commands to install the agent.

[root@testnode ~]# rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
Retrieving https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppetlabs-release-pc1-1.1.0-5.el################################# [100%]
[root@testnode ~]# yum install puppet-agent

Once puppet is installed, we need to tell the agent the name of the server to whom it should connect. Open /etc/puppetlabs/puppet/puppet.conf in your favorite editor and add the following (note that you should replace puppetlab.example.com with your puppet server’s name).

[agent]
server = puppetlab.example.com

Generating a Certificate Request

Puppet servers and agents authenticate each other with TLS certificates. When you first run puppet agent on a new node, a certificate request is generated and sent to the server. You can then sign that request on the server to generate a client certificate. The next time the agent runs against the server, it will cache its new certificate and use it to authenticate every subsequent puppet run.

Start by running puppet agent on the new client to generate a request (note that you probably will have to source the puppet profile first).

[root@testnode ~]# . /etc/profile.d/puppet-agent.sh
[root@testnode ~]# puppet agent -t
Info: Creating a new SSL key for testnode.example.com
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for testnode.example.com
Info: Certificate Request fingerprint (SHA256): AF:E9:E3:D6:3F:9A:0F:CC:83:01:DD:66:55:87:B9:4B:03:9C:C1:1C:7E:BB:12:CE:8B:21:93:6B:83:B3:E4:33
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled

Note, if you see “no route to host,” the CentOS firewall may be blocking port 8140. Run the following commands on the puppet server to open the port and then try the agent run again.

[root@puppetlab manifests]# firewall-cmd --zone=public --add-port=8140/tcp --permanent
success
[root@puppetlab manifests]# firewall-cmd --reload
success

Your command created a certificate request from the new client and sent that request to the master. Take a note of that fingerprint.

Signing a Certificate Request

On your puppet server, you’ll now want to sign the certificate request for your new node so that the client and server trust each other. Start by running the following command, which prints out all of the certificates that haven’t yet been signed.

[root@puppetlab manifests]# puppet cert list
  "testnode.example.com" (SHA256) AF:E9:E3:D6:3F:9A:0F:CC:83:01:DD:66:55:87:B9:4B:03:9C:C1:1C:7E:BB:12:CE:8B:21:93:6B:83:B3:E4:33

You can see our request from testnode and its fingerprint. Check that fingerprint against the one that the client gave you and make certain they match. If they do, you can sign the request and issue the certificate with the following command.

[root@puppetlab manifests]# puppet cert sign testnode.example.com
Signing Certificate Request for:
  "testnode.eample.com" (SHA256) AF:E9:E3:D6:3F:9A:0F:CC:83:01:DD:66:55:87:B9:4B:03:9C:C1:1C:7E:BB:12:CE:8B:21:93:6B:83:B3:E4:33
Notice: Signed certificate request for testnode.example.com
Notice: Removing file Puppet::SSL::CertificateRequest testnode.example.com at '/etc/puppetlabs/puppet/ssl/ca/requests/testnode.example.com.pem'

Running the Agent for the First Time With a Signed Certificate

Now that our certificate is in order, go back to your test node and run puppet agent one last time. You should see it start the puppet daemon as expected.

[root@testnode ~]# puppet agent -t
Info: Caching certificate for testnode.example.com
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for testnode.example.com
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for testnode.example.com
Info: Applying configuration version '1512104337'
Notice: /Stage[main]/Main/Node[testnode.example.com]/Service[puppet]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Main/Node[testnode.example.com]/Service[puppet]: Unscheduling refresh on Service[puppet]
Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
Notice: Applied catalog in 0.22 seconds

Assuming everything ran properly, you should see puppet start the puppet service as you declared in your node definition.

Aside: Why Did I Need to Generate a Cert Only on the New Node?

It may seem odd that you only created a certificate for your second client and not for the puppet server itself. The reason is that the puppet server generated and signed its own host certificate when the puppetserver service first started. Since this is a valid certificate that matches the client name (puppetlab.example.com) and is signed by the server’s own CA (self-signed, in this case), the server accepts it as trusted.

Further Reading

This is just a base tutorial on how to get bootstrapped with a puppet server in a CentOS environment. Hopefully, working through this has whetted your appetite. If so, I’d suggest reading the puppet documentation, especially the sections on the main manifest(s), environments, module fundamentals, and the puppet language. If you’re looking for more real world examples, you can also review the essential configuration quick start guides for real world scenarios and how puppet helps solve them.

In conclusion, remember that your servers are cattle and not pets. Having a puppet server and agent model in place in your environment will help you avoid configuration problems that are hard to solve in the future.


Ansible or Puppet? Both! – Part 1

Ansible or Puppet? A Great Debate?

Should I install Ansible or Puppet? In short, I feel both have their place.

Anyone who has asked me about work in the last few years knows that I have a passion for automation tools. My favorite for configuration automation has always been Puppet. Puppet is a mature infrastructure-as-code tool that describes a desired state and enforces it. Having used it for years, I can say that Puppet handles most of my management needs. However, there are some tasks that Puppet just doesn’t handle as well. After playing a bit with Ansible, I believe it can be the tool to fill many of those gaps.

What Puppet Gets Right

Puppet’s domain-specific language is powerful while being descriptive. Its agents are portable and cross platform. Its server is mature and stable. It handles building a catalog of configuration quite well and provides a lot of descriptive power. In short, Puppet is adept at defining and enforcing a configuration baseline. Your puppet code describes infrastructure configurations and puppet makes sure that they exist and stay consistent.

Beyond that, puppet features many other benefits:

  • the Puppet Forge, a community of module developers with a strong following
  • a robust ancillary toolset, including r10k configuration manager for advanced deployment of your code
  • hiera, a tool used to separate code from configuration (keeping your code clean and reusable)
  • a slick enterprise edition, if you need supported deployment in a larger environment

Where Puppet Falls Short

While I’ve derived great benefit from puppet and can sing its praises longer than most are comfortable with, there are gaps in puppet’s capabilities.  Here are the gaps and drawbacks I most often run into when using puppet.

Reliance on an Agent

Puppet’s agent is an asset, enabling many of the benefits I’ve listed above.  However, this also adds a slight burden to the configuration.  The first puppet code I write is usually a profile class to manage puppet.  If something happens and the agent breaks, getting back on track is a manual task.
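That first bit of code usually looks something like the following sketch: a small profile class that keeps the agent package and service under puppet’s own control.

# Sketch of a profile class that manages the puppet agent itself.
class profiles::puppet_agent {
  package { 'puppet-agent' :
    ensure => installed,
  }

  service { 'puppet' :
    ensure => running,
    enable => true,
  }
}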

This reliance isn’t strictly a drawback.  As I mentioned above, the agent enables many of puppet’s most powerful features.  It is worth noting that the agent isn’t all sunshine and puppies, though.  That little bit of pain is part of the fee we pay for well managed infrastructure.

Bootstrapping and Orchestration

Related to the agent is the problem of bootstrapping.  While puppet is great at taking a server that’s got a base OS installed and configuring it, it’s not as great at kicking off a task to install the OS or create a VM in the first place.  Additionally, configuring a new puppet master server for your environment is probably a manual process. You may be required to install an agent, pull down some code, and get things configured properly before you can do your first puppet run.

Lack of Procedural Tools or Tasks

Puppet is designed with desired state in mind- that is, you should build your puppet code to describe your desired outcome and let the tool decide how to do the work.  This is awesome.  However, there are times when you want to do tasks or time-based procedures.

Perhaps you want to write a task to bootstrap a new puppet server in your environment.  Maybe you want to kick off a job that will update and reboot all of your nodes in a particular order.  Possibly you want to tell VMWare to build you a new cluster of servers with a given set of IPs.

Puppet by itself cannot solve these problems.  I determined Ansible to be a good fill-in for these gaps.

(As an aside, Puppetlabs, the company that develops puppet, provides a tool to solve this problem called mcollective.  I won’t go into mcollective vs Ansible here; but, for my own uses, I’ve found Ansible to be a better fit.)

How Ansible Helps

Ansible, while also an infrastructure-as-code tool, doesn’t specifically describe desired state.  Instead, it enables the building of playbooks: blocks of code that describe tasks and inter-dependencies to operate on a server and achieve a verified result.  Ansible resolves a number of problems left open by Puppet.

Ansible is Agentless

There are no agents when working with Ansible.  Instead, Ansible relies on SSH (or PowerShell remoting over WinRM for Windows servers). Since SSH is common to most servers, there isn’t anything to install.

Because there are no agents and the underlying communication is a common component in servers, Ansible is a little less brittle than puppet.  A server with an OS installed is ready for Ansible out of the box.

Ansible has Modules for Orchestration

Ansible can build virtual machines.  Tasks can be combined to create a cluster.  Ansible can configure networking relatively easily. While it cannot provide bare-metal bootstrapping (you still need PXE or some other installer to accomplish that), it can build an environment in the cloud from the ground up.

Ansible Runs Tasks

I’m not going to lie to you: at the heart of it, Ansible is a scripting engine.  It generates Python code, ships it to your server, and runs it.  That’s not a bad thing: Ansible executes powerful tasks based on its language.  Because of this and the nature of playbooks, we can write on-demand tasks in Ansible that couldn’t be written in Puppet alone.  I can write a playbook to upgrade my environment.  Ansible can reload my webserver process on a set of machines.  I can execute a source control pull on all of my nodes at once. I don’t want to enforce this type of action every minute. I want to run these tasks at times of my choosing.
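To illustrate, here’s a small hypothetical playbook in that spirit: it reloads a web server process across a group of hosts on demand rather than on a schedule (the group and service names are made up).

# reload_web.yml (sketch)
- hosts: webservers
  become: true
  tasks:
    - name: Reload the nginx service on every host in the group
      service:
        name: nginx
        state: reloaded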

So, Which Tool is Better? Ansible or Puppet?  Both.

Puppet and Ansible compete for market share.  They build similar tools and attempt to differentiate themselves. That said, you can use them together easily.  In the environments I’ve managed, I chose to employ both of these tools.  They complement each other well and can be used in concert without issue.  For example, if you have an existing puppet environment, puppet can create an Ansible configuration for you.

In the rest of this entry, I’ll cover how to start using Ansible by configuring it from Puppet.

Creating an Ansible Configuration In Puppet

To start using Ansible, I leveraged my existing puppet configuration.  Notably, the rest of this blog will make heavy use of the roles and profiles pattern.  A role is a puppet class describing a type of machine, such as a webserver or database server.  A profile, on the other hand, describes a configuration for a specific technology, such as Apache or MySQL.  For more information on using roles and profiles, read Gary Larizza’s blog post on the subject.  He describes it better than I could.

Creating a Profile to Install Ansible

Start by making a very basic profile that installs Ansible.  Below is a good example.

# profile class to install and configure ansible
class profiles::ansible
{
  # ensure_packages is provided by the puppetlabs-stdlib module
  ensure_packages(['ansible'])
}

Creating a Role for Your Ansible Control Machine

Next we want to define the role which employs this profile. Where possible, roles should be named in a way that’s technology agnostic.

# role for an orchestration server
class roles::orchestrator inherits ::roles::base
{
  include ::profiles::ansible
}

Note that we have a “base” role upon which all other roles are built. This provides all of the configuration that every node in our environment should enforce. For now, let’s pretend it’s empty.

# role applied to all nodes
class roles::base
{

}

Applying Your Role to a Server

Finally, we apply the role to a node in our environment. Our role employs the profile that installs Ansible. Therefore, puppet will enforce that the package is installed on the target.

node 'rodrigo.example.com'
{
  include ::roles::orchestrator
}

Deploy this code to your puppet server. Now, we’ll run the puppet agent on the orchestrator machine to see it install the Ansible package.

[root@rodrigo ~]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for rodrigo.example.com
Info: Applying configuration version '1508220476'
Notice: /Stage[main]/Profiles::Ansible/Package[ansible]/ensure: created
Notice: Applied catalog in 32.28 seconds

Using Puppet to Inform Ansible About our Environment

I’ve written 3 code files to do what one command on each server could do. So, why use puppet to do this? Because puppet can also be used to provide context about our environment to Ansible.

Ansible uses a hosts file in /etc/ansible/hosts to determine what servers are available and how to group them. Follow the steps below to create a puppet configuration that auto-generates this file. Ultimately, we want to produce a file that looks like this:

[all_servers]
server1.example.com
server2.example.com
server3.example.com
rodrigo.example.com

Creating a Defined Type for a Host Entry

Since we’re interested in creating multiple hosts in the host configuration file, we’ll create a defined type in puppet to describe a single entry in that file. Defined types are reusable descriptions of resources that we expect to duplicate in puppet classes. (For the below type, I’m using the concat module available on Puppet Forge, which assembles single files from multiple fragments. See their puppet forge page for more information.)

# defined type to define ansible host entry
define profiles::ansible::conf::host_definition
(
  $host = $title, #the name of the host
  $group = 'all_servers', #the group under which the host will fall in the file
)
{
  $hostsfile = '/etc/ansible/hosts'

  ensure_resource('concat', $hostsfile, {
    'owner' => 'root',
    'group' => 'root',
    'mode'  => '0644',
  })

  ensure_resource('concat::fragment', "${hostsfile}_${group}", {
    'target'  => $hostsfile,
    'content' => "\n[${group}]\n",
    'order'   => "${group}",
  })

  ::concat::fragment { "${hostsfile}_${group}_${host}" :
    target  => $hostsfile,
    content => "${host}\n",
    order   => "${group}_${host}",
  }
}

First, we’re creating a single hosts file, but this defined type describes one entry in that file, so we’ll be declaring it many times. Hence, we can’t simply declare the concat resource for the file inside the type or we’d get duplicate resource errors; the ensure_resource function lets us make sure the resource is in the catalog without erroring if it already exists. Second, we only want one line containing each group name in the whole file. That could also create duplicate errors, so we use ensure_resource again. Finally, we put the hostname under the group for which we’ve defined it.

To employ our defined type to create an entry, we can instantiate it in this way:

::profiles::ansible::conf::host_definition{ 'server1.example.com' :
  host  => 'server1.example.com',
  group => 'all_servers',
}

(Note that the host and group params are unnecessary here because they default to the title and 'all_servers,' respectively.)

Exporting Our Resources

While this achieves our goal, it isn’t useful on its own. We’d rather have each node describe itself than define every host by hand within puppet. Therefore, we’ll export these resources from our base role, which is applied to every node.

# role applied to all nodes
class roles::base
{
  @@::profiles::ansible::conf::host_definition{ $::fqdn : }
}

(Because the host and group parameters default sensibly, I’ve removed them from the example here; using the $::fqdn fact as the title means each node exports an entry for itself.) Note the two at signs at the beginning of the declaration. This marks an exported resource. Rather than defining hosts in one place, each host exports its own definition to be collected later.

Collecting the Resources to Create a Hosts File

Finally, we’ll modify the orchestrator role and have it collect the exported resources from every server.

# role for an orchestration server
class roles::orchestrator inherits ::roles::base
{
  include ::profiles::ansible
  Profiles::Ansible::Conf::Host_definition <<| |>>
}

Since we’ve used the spaceship collector (<<| |>>), our orchestrator role collects the exported host resources. As a result, puppet will create /etc/ansible/hosts on our Ansible server with an entry for every host in our environment.

Puppet has now informed its good friend Ansible of the lay of the land.
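As a quick sanity check (hypothetical, but assuming the generated inventory above), you can ask Ansible to ping every host in the collected group from the orchestrator:

ansible all_servers -m ping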

More to Come

In my next blog post, I’ll cover more Ansible usage and how Ansible can be used to run and deploy puppet.
