Exported Resources and Evaluation in Puppet

I recently ran into an interesting “gotcha” in Puppet regarding exported resources and parameter evaluation.

For a quick refresher, exported resources are resources that Puppet does not apply on the node that declares them; instead, they are published for collection into other nodes’ catalogs. Take Nagios, for example. You can create several nodes with exported Nagios configurations that are then collected by a Nagios node. For more information, review the Puppet documentation on exported resources.

The Gotcha – Exported Resources and their Parameters

I encountered the problem when attempting to collect resources by a parameter, which should be a valid way to filter a collection.

For example, I built the following resource that I planned to export:

define profiles::gitlab_runner::cloud_app_config_tab::app_env
(
  String $app_name = $title,
  Array $app_env_secrets = [],
  Hash $app_env = {},
)
{ ... }

Note that I set the app_name above to default to the title of the resource. I later attempted to export an instance of that type:

@@profiles::gitlab_runner::cloud_app_config_tab::app_env { 'foobar' :
  app_env_secrets => $secrets,
  app_env         => $env,
}

For the final piece, I built a CI runner configuration that collected specific instances of that exported defined type:

Profiles::Gitlab_runner::Cloud_app_config_tab::App_env <<| app_name == 'foobar' |>>

I ran the Puppet agent on both the exporting and collecting nodes. The agent created no resources. What did I do wrong?

I queried our PuppetDB instance and found the issue: Puppet had created my exported resources, but the export stored only app_env_secrets and app_env. Why?

It turns out that I’d run into a nuance of evaluation time vs. export time. At export time, Puppet does NOT evaluate default parameter expressions. On the other hand, Puppet stores the values of explicitly set parameters so that the resource can be evaluated later. This creates the problem I saw earlier: Puppet doesn’t assign the value of $title to app_name until it evaluates the resource, and evaluation occurs AFTER collection.

The Solution

I solved the issue by collecting resources based on either their parameter values or the titles (since I expect titles to match the app_name parameter).

Profiles::Gitlab_runner::Cloud_app_config_tab::App_env <<| app_name == 'foobar' or title == 'foobar' |>>
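
Alternatively, if you want to keep collecting strictly by parameter, you can set that parameter explicitly at export time so that PuppetDB stores it. A minimal sketch, reusing the export from above:

@@profiles::gitlab_runner::cloud_app_config_tab::app_env { 'foobar' :
  app_name        => 'foobar',
  app_env_secrets => $secrets,
  app_env         => $env,
}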

When utilizing exported resources, try to keep in mind which values will be set and available at evaluation time vs export time. Puppet won’t evaluate or set any parameters to their defaults until after it collects them.


Ruby with Docker Compose as a Non-root User

EDIT 2021-04-01: When using ruby with docker compose, as noted by a commenter, bundle install should be run as the root user. I’d mistakenly set the user before running the bundle install layer. I’ve updated the code below to fix this. If you had problems before, try the updated examples.

I’ve recently begun experimenting with Docker and docker-compose in my Ruby development and am fairly pleased with the results. Building ruby with docker-compose keeps my environment clean while giving me a working rails tool set. I derived most of my workflow from this guide. While that served quite well as a starting place, one major annoyance cropped up again and again: my container ran as root.

The Problem at Hand

First, running as root in production creates an obvious security risk. Containers are supposed to isolate processes. However, history tells us that hackers find creative ways to “crash out” of containers into the host operating system. If your process runs as root, a successful attacker lands on the host with administrative privileges.

Second, running code as root complicates your local development process. Rails creates temporary files, and when root owns them, cleaning up your local copy of the code becomes a chore. Further, you must specify special user parameters in order to run Rails commands, such as generators or migrations. These reasons alone motivated a change in process for me, regardless of security.

Goals

Let’s establish some goals going forward.

First, we will build a Dockerfile which can be shared between production and our development environment. This file will enforce consistency in how we build our production or development images, whether they run locally or in a production cluster.

Second, we will build an environment where containers encapsulate all of our Ruby and Rails tools. We will not install any Ruby or Rails tooling into our host operating system.

Third, we will run as a non-root user in both production and development. This measure limits our attack surface and helps us achieve our final goal.

Finally, we will set the user running on our local development instance to our own user, the one that owns the checked-out code. This ensures that generated files remain consistent with the rest of our code files. Additionally, this user can run Rails generators, migrations, and other commands naturally, without specifying special UIDs. We’ll run our ruby with docker-compose to set up this development environment.

The Starting Point – An Imperfect Setup

Following the Docker guide on Rails applications led to the creation of the following Dockerfile and docker-compose.yml files.

#Dockerfile
FROM ruby:2.6-alpine

LABEL maintainer="Aaron M. Bond"

ARG APP_PATH=/opt/myapp

RUN apk add --update --no-cache \
        bash \
        build-base \
        nodejs \
        sqlite-dev \
        tzdata \
        mysql-dev && \
      gem install bundler && \
      mkdir $APP_PATH 

COPY docker-entrypoint.sh /usr/bin

RUN chmod +x /usr/bin/docker-entrypoint.sh

WORKDIR $APP_PATH

COPY Gemfile* $APP_PATH/

RUN bundle install

COPY . $APP_PATH/

ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3000

CMD ["rails", "server", "-b", "0.0.0.0"]

As a quick review, this file first pulls an image built for Ruby applications running the 2.6 family of Ruby versions. It then installs necessary operating system packages and creates an application folder.

Next it copies in an entrypoint script, which will be the default entrypoint for any container created from this image with docker run. (In my case, this script cleans up the Rails server pidfile and runs whatever command is passed.)
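
For reference, here is a minimal sketch of what such an entrypoint script might look like (it assumes the app lives at /opt/myapp, matching the APP_PATH above):

#!/bin/bash
# docker-entrypoint.sh (a minimal sketch)
set -e

# Remove a stale Rails server pidfile left over from a previous run.
rm -f /opt/myapp/tmp/pids/server.pid

# Hand off to whatever command was passed to the container.
exec "$@"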

The build file then copies Gemfile* into the application folder and runs bundle install to install and compile necessary gems.

Finally, the file indicates its entrypoint, notes that port 3000 will be exposed, and sets up a default command argument to simply run rails server -b 0.0.0.0.

With no modifications, all of these steps will execute as root within the built image.

# docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=somesecret
      - MYSQL_DATABASE=myapp
      - MYSQL_USER=myapp_user
      - MYSQL_PASSWORD=devtest
    volumes:
      - datavolume:/var/lib/mysql
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/opt/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    tty: true
    stdin_open: true

volumes:
  datavolume:

This docker-compose.yml file creates services for use in our development environment. (Another technology will handle production, such as a Kubernetes deployment.)

First, we define a db service (container), which utilizes the MySQL image from Docker Hub and passes some information specific to the image for database creation.

Second, we define a web service (container), which builds an image from our Dockerfile and mounts our local code directory over the top of the previous application directory. This should enable us to see simple changes instantaneously, without rebuilding the image.

The Cracks in the Facade

If you start here with your application and run docker-compose up, you’ll be able to see some of the issues that arise from root execution. The root process within the container will pollute your app’s tmp directory with root-owned files that make cleanup annoying.

Generators demonstrate a bigger problem. Assume we wanted to generate a new controller called Greetings with an action of hello (yes, I blatantly stole this example directly from the Ruby on Rails guide). The following command should create an ephemeral container from our image, run the rails generator, and remove the container (--rm) when complete.

docker-compose run --rm web bundle exec rails generate controller Greetings hello

This appears logical, but the command will result in a mess. The root user would now own all of the files generated by this command within our source code. We can solve this problem by adding a bit of a hack:

docker-compose run --rm --user $(id -u):$(id -g) web bundle exec rails generate controller Greetings hello

This offensive little command runs the id utility of Linux (twice) to get our UID and GID and passes that to the run command. Now, our generators will run using our own user identity. However, the ugliness of this command offends my delicate sensibilities.

Even after we complete our clunky development process, our local system administrator will definitely complain that our Rails server is running as root in our cluster.

Mitigation Step 1 – Adding an App-Specific User

To begin untangling ourselves from root, we must start by creating a non-root user within our image. This user should run our Rails server process and take over when the application-specific portions of the image are built. Take a look at the below, modified version of our Dockerfile to see how we add an app user.

#Dockerfile
FROM ruby:2.6-alpine

LABEL maintainer="Aaron M. Bond"

ARG APP_PATH=/opt/myapp
ARG APP_USER=appuser
ARG APP_GROUP=appgroup

RUN apk add --update --no-cache \
        bash \
        build-base \
        nodejs \
        sqlite-dev \
        tzdata \
        mysql-dev && \
      gem install bundler && \
      addgroup -S $APP_GROUP && \
      adduser -S -s /sbin/nologin -G $APP_GROUP $APP_USER && \
      mkdir $APP_PATH && \
      chown $APP_USER:$APP_GROUP $APP_PATH

COPY docker-entrypoint.sh /usr/bin

RUN chmod +x /usr/bin/docker-entrypoint.sh

WORKDIR $APP_PATH

COPY --chown=$APP_USER:$APP_GROUP Gemfile* $APP_PATH/

RUN bundle install

USER $APP_USER

COPY --chown=$APP_USER:$APP_GROUP . $APP_PATH/

ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3000

CMD ["rails", "server", "-b", "0.0.0.0"]

Here, we’ve added some variables for an app user name and app group name under which we intend to run.

Our initial setup step, which still runs as root, uses addgroup and adduser to create the specified group and user. Additionally, after we’ve created our application path, we change the owner to said user and group.

Once we’ve completed other root tasks (such as copying in our entrypoint), the USER directive instructs Docker that all subsequent RUN directives and the container execution itself should run as our app user. We also add our app user and group as the --chown argument to the COPY directives which copy our app into the image. If we built an image and ran this container right now, the app would execute as a new, non-root user.

While this is a fantastic first step and secures our application in production, we’ve missed the mark on making our development environment easier to use.

While appuser isn’t root, it’s still some random user within the container which doesn’t match our local machine’s user. Files are still going to be created as a non-matching user in the tmp directories and by any generator commands we run in containers.

Mitigation 2 – Making our App-Specific User Match the Development User

To relieve our development pain, we have to force our containers to act as our own host user when working with our source code. Fortunately for us, Linux sees users and groups only by their IDs.

In our images, we’ll have to explicitly set IDs for the UID and GID that the application (by default) will utilize. Then, in development, we’ll want to override that default with our own UID and GID.

Let’s start by adding more build arguments in the Dockerfile for our two ids and using those arguments in our addgroup and adduser commands.

#Dockerfile
FROM ruby:2.6-alpine

LABEL maintainer="Aaron M. Bond"

ARG APP_PATH=/opt/myapp
ARG APP_USER=appuser
ARG APP_GROUP=appgroup
ARG APP_USER_UID=7084
ARG APP_GROUP_GID=2001

RUN apk add --update --no-cache \
        bash \
        build-base \
        nodejs \
        sqlite-dev \
        tzdata \
        mysql-dev && \
      gem install bundler && \
      addgroup -g $APP_GROUP_GID -S $APP_GROUP && \
      adduser -S -s /sbin/nologin -u $APP_USER_UID -G $APP_GROUP $APP_USER && \
      mkdir $APP_PATH && \
      chown $APP_USER:$APP_GROUP $APP_PATH

COPY docker-entrypoint.sh /usr/bin

RUN chmod +x /usr/bin/docker-entrypoint.sh

WORKDIR $APP_PATH

COPY --chown=$APP_USER:$APP_GROUP Gemfile* $APP_PATH/

RUN bundle install

USER $APP_USER

COPY --chown=$APP_USER:$APP_GROUP . $APP_PATH/

ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3000

CMD ["rails", "server", "-b", "0.0.0.0"]

Setting these IDs up as ARG directives with a default value opens the door to docker-compose.yml to override them. The numbers are not terribly important. You should pick IDs that are in the standard user and group id ranges. Also, by best practice, ensure your different apps have unique IDs from each other.

Next, we’ll add these arguments to the docker-compose.yml file.

# docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=somesecret
      - MYSQL_DATABASE=myapp
      - MYSQL_USER=myapp_user
      - MYSQL_PASSWORD=devtest
    volumes:
      - datavolume:/var/lib/mysql
  web:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - APP_USER_UID=${APP_USER_UID}
        - APP_GROUP_GID=${APP_GROUP_GID}
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/opt/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    tty: true
    stdin_open: true

volumes:
  datavolume:

Note that under the web service definition’s build key, we’ve added an args section referencing our two args. Here, we’re setting them as equal to environment variable values of the same name. Unfortunately, we can’t specify default environment variable values in the docker-compose.yml file; but, we can add a special file called .env that specifies these values.

#.env
APP_USER_UID=7084
APP_GROUP_GID=2001

As we’ve currently built everything, docker-compose up will still have the undesired behavior of running as a differing UID and GID; but, passing overriding values to those environment variables allows us to run as ourselves.

APP_USER_UID=$(id -u) APP_GROUP_GID=$(id -g) docker-compose up --build

After we’ve run the build a single time, our local development version of the image will execute as a user matching our UID and GID by default. Any docker-compose run commands we run after this step will execute properly.
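
For example, the generator command from earlier now runs as your own user, without the --user hack:

docker-compose run --rm web bundle exec rails generate controller Greetings hello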

However, I don’t want to have to remember this every time I rebuild this container image (or build any other container image). So, I will specify in my .bashrc file on my local machine that these two environment variables should always be set to myself.

#Added to the bottom of ~/.bashrc
export APP_USER_UID=$(id -u)
export APP_GROUP_GID=$(id -g)

So long as I am consistent in naming these variables in my Dockerfile and docker-compose.yml files of other projects, I will get a consistent environment for every project.

A Quick Aside

I want to highlight one problem that I ran into that, while specific to my environment, might bite someone else. Few people will see this issue, but for completeness, I’m noting it here.

When using the above setup, I ran into build failures when building my Docker image. It turns out that, since my user on my development machine is an Active Directory user, the UID it utilizes is NOT within the sane range of Linux UIDs. The same was true of my group ID.

abond@abondlintab01:~$ id -u
500000001
abond@abondlintab01:~$ id -g
500000003

Since Active Directory used such large IDs, I couldn’t utilize this user’s UID and GID for the container IDs. The build would fail on attempting to run addgroup.

$ APP_USER_UID=$(id -u) APP_GROUP_GID=$(id -g) docker-compose build
db uses an image, skipping
Building web
Step 1/18 : FROM ruby:2.6-alpine

...

Executing busybox-1.30.1-r2.trigger
OK: 268 MiB in 75 packages
Successfully installed bundler-2.1.0
1 gem installed
addgroup: number 500000003 is not in 0..256000 range
ERROR: Service 'web' failed to build: The command '/bin/sh -c apk add --update --no-cache         bash         build-base         nodejs         sqlite-dev         tzdata         mysql-dev         postgresql-dev &&       gem install bundler &&       addgroup -g $APP_GROUP_GID -S $APP_GROUP &&       adduser -S -u $APP_USER_UID -G $APP_GROUP $APP_USER &&       mkdir $APP_PATH &&       chown $APP_USER:$APP_GROUP $APP_PATH' returned a non-zero code: 1

I resolved this by creating another Linux user (not on the domain) with a sane UID which I use to develop Ruby apps. This shouldn’t be necessary for most users.

A Quick Review

Using Ruby with docker-compose can simplify your development processes and keep your environment slim.

However, running containers as root is a bad security practice. The default instructions given by Docker for Rails app development provide a functional setup, but ignore the security of root privileges. Further, running as root in dev complicates your workflow and your environment.

By creating a default app user and group with a specific UID and GID, you eliminate root processes in your container and make your production sysadmins happy.

To take it a step further, you can override that UID and GID on your machine to match YOUR user and simplify your development workflow.

Docker and containers are great tools for development; but, finding good environment settings and patterns can be difficult. Hopefully, this pattern helps someone out there who is as new to running ruby with docker-compose as I was when I started.


A Garage Door Controller Based on a Raspberry Pi

A few weeks ago, I came home to an unfortunate surprise: my garage door was wide open to the world. While none of my precious junk seemed to be missing, I was embarrassed to have had my mess laid bare for the neighborhood to see. As it turned out, the problem was a failed button connected to my door. I checked online and found that this unit was prone to failure. Worse than that, when these units fail, they open and close your garage door at random. I wanted a better solution in the form of a garage door controller.

I experimented a bit with the wiring and found something surprising. Turns out, to open the garage door, you simply need to short the two opener wires together. I decided to attempt to create my own, wireless enabled garage door controller based on an old Raspberry Pi B I had lying dormant. Here’s how I made it happen.

The Requirements

I wanted a garage door controller which had the following features:

  • Could open and close the garage door (well, duh)
  • Could detect the door state (open or closed)
  • Had a timer that would close the door if opened for a configurable period of time
  • Allowed the user to disable the timer easily, if needed
  • Could send an email alert if it could not shut the door or if the door was open for an extensive period of time

The Hardware

I needed more than just my little computer to make this happen.  My prototype eventually required the following bits and pieces.

  • A Raspberry Pi (I recycled an old B I had lying around; but, any will do)
  • 1 LED for the power indicator (RED)
  • 3 LEDs (preferably a different color from the power LED, YELLOW in my case) for timer status indication
  • 2 buttons, preferably with built in LEDs (I used these Adafruit Mini Arcade Buttons)
  • A magnet switch with a “normally open” option, as typically used by alarms (I used this)
  • A 5V relay module (I used this as it was in stock at my local electronics store; but, you don’t need two relays)
  • 3 10k Ohm resistors to be used as “pull-up” resistors
  • 4 330 Ohm resistors for the power and timer LEDs (note that the buttons I used contain their own LEDs and resistors)
  • ~10 ft of bell wiring to wire the door open sensor switch
  • A spring terminal block like this one to connect my magnet switch

Additionally, for prototyping, I used:

  • A solderless breadboard similar to this one
  • Multiple female-to-male jumper wires (like these)

Finally, my design required a few prototyping printed circuit boards like these.

The Tools

I utilized a very simple soldering iron and some lead based solder (if I end up doing a lot more electronics work, I may switch to lead free).

For the enclosure, I made the design using FreeCAD and printed it using my Monoprice Maker Select V2.1 3D printer.  The printed enclosure isn’t strictly necessary, and a wood enclosure would work just as well if that’s your jam.  More on the enclosure later.

The Software

I built my own controller software using Ruby and the rpi_gpio gem listed here.  I have placed the software on Github.

The software utilizes several threads to “listen” for button presses.  Additionally, the software maintains a log of when the door was last opened.  If the door has been open longer than the current timer setting, the software simulates a button press of the garage door button.  If the door has been open for an excessive amount of time or the door failed to close, the software sends an alert email.

Wiring the Components to the Pi

To simplify my design, I broke my circuitry into two steps: wiring LEDs to indicate state and wiring switches / buttons to control state.  Below, I’ll outline both sets of wiring.

Note that all wiring listed below is using the BCM numbering, not the board numbering.

Timer Status LEDs and Button LEDs

The standalone LEDs I purchased required 330 Ohm resistors.  For any LEDs you purchase, check the requirements of the LED and modify the resistors accordingly.

To start, I wired my RED power LED with a resistor to the 5V rail of the Raspberry Pi.  This LED serves as a sanity check that your Pi is getting and outputting voltage.  The YELLOW status LEDs inform the user which timer setting is currently active (5 minutes if the first is lit, 10 if the second is lit and 15 if the last is lit).  I wired these to GPIO pins 4, 17 and 27, respectively, along with their resistors.

Status LED Wiring

For the button LEDs, the garage door opening button (blue, in my build) should stay on whenever the system is running.  The timer button (white) should stay on when the door is shut and the timer is “armed,” blink when the door is opened and turn off when the timer is disabled.

Button LED Terminals
If you purchased the same retro-arcade style buttons I did, the LEDs are included along with appropriate resistors.  To determine the negative and positive terminals of the LEDs, look at the back of the button.  LED terminals are the ones NOT going into the grey housing in the center.

I wired the timer LED to GPIO pin 2 and the door LED to GPIO pin 3.

Button LED Wiring

Wiring the Door Open Sensor Switch and Buttons

To connect my two buttons and door open sensor switch, I used 10k Ohm resistors to “pull up” the voltage on the GPIO pins.  Each pin sits at a high voltage through its pull-up resistor, and pressing a button (or closing the switch) connects the pin more directly to ground.  So, if I detect that my GPIO pin voltage goes “low,” I know the button is being pressed.  Additionally, I could configure the door open sensor switch in one of two ways: normally open or normally closed.  With the door closed and the magnet near the sensor, the normally open contact is held closed and the normally closed contact is held open.  I chose normally open so that a closed door reads as a closed circuit (pin low); that way, if I ever have a short in my system, the controller won’t mistake the door for “open” and try to open it after the timer resets.

I wired the door sensor switch to GPIO 7, the timer button to GPIO 8 and the door control button to GPIO 25.

Button And Sensor Wiring
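
To illustrate the pull-up logic in code, here is a rough sketch of reading the door sensor with the rpi_gpio gem (treat the exact calls as assumptions based on the gem’s RPi.GPIO-style API):

# A minimal sketch: with the pull-up in place, the pin reads high while
# the circuit is open and low once the switch closes it to ground.
require 'rpi_gpio'

RPi::GPIO.set_numbering :bcm
RPi::GPIO.setup 7, :as => :input

# Normally open sensor on BCM pin 7: low means the magnet is holding the
# contact closed, i.e. the door is closed.
door_closed = RPi::GPIO.low? 7
puts door_closed ? 'Door closed' : 'Door open'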

Wiring the Relay Module

Finally, the heart of the garage door controller: the relay module.  When the Pi drives the relay’s input high, an electromagnet pulls the relay’s contacts closed and shorts our garage door wires together.  The application shorts these wires for about 1 second, as that circuit isn’t meant to be closed for a long period of time.

I wired the GND pin of the relay to a ground pin, the VCC pin to a 5V pin and the IN1 pin to GPIO 24 on the Pi.  On the first relay of the module, I put one of the wires coming from my garage door into the center terminal and the other into the terminal that is indicated as “disconnected” by the diagram.  This configures the relay as “normally open,” meaning the wires are NOT shorted together unless a signal is sent from the Pi (GPIO 24 is set to high).

Relay Module
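
As a rough illustration, the door “press” itself can be as simple as driving that pin high for a second and then releasing it (again, the gem calls are assumptions based on rpi_gpio’s RPi.GPIO-style API):

# A minimal sketch of triggering the relay (BCM pin 24) for ~1 second.
require 'rpi_gpio'

RPi::GPIO.set_numbering :bcm
RPi::GPIO.setup 24, :as => :output

# Energize the relay to short the opener wires, then release it.
RPi::GPIO.set_high 24
sleep 1
RPi::GPIO.set_low 24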

Testing the Setup

I put that all together into my setup, creating quite a Frankenstein mess of wiring.

Solderless Prototype

From here, I built the software and ran daemon_start.rb in my project.  Pressing the blue (door) button produced a satisfying CLICK as the relay closed and then opened again.  Pressing the white (timer) button cycled the yellow LEDs through each setting.  Moving the door sensor magnet away from the switch caused the white (timer) button to blink, indicating an open door.  Finally, after the timeout, I heard the system CLICK again as the timer engaged the relay.  Fantastic.

But how do I make this rat’s nest of wires into a usable garage door controller?

A More Permanent Build

I won’t go much into soldering or other techniques here. However, I’ll provide a brief overview of how I designed a more permanent version of this garage door controller.

First, I split the electronics into two daughter boards.  One of the boards controls the power and timer LEDs. The other controls the buttons, door open switch and button LEDs.

I used 2cm by 8cm blank circuit boards and small gauge wire to make the daughter boards.  These are the boards I used.  On the switch and button control board, I installed a spring terminal to easily connect the bell wire leading to the door open sensor switch.  I also purchased some small push-on terminals for easy connections to the buttons.  Finally, I soldered headers to each of the boards and used female-to-female jumpers to connect the boards to the pins on the Raspberry Pi.

To hold it all together, I designed a case using FreeCAD and printed it using a 3D printer.  If you are interested in 3D printing my design, you can find it on Thingiverse here.

In the top of the case, I installed the LED daughter board with headers facing backwards so that they can easily be jumped to the Raspberry Pi.

The Raspberry Pi, relay and switch daughter board fit into the sections of the bottom portion of the enclosure.

Enclosure Back
The cutouts in the sides allow for the relay terminals, the spring terminals, the SD card, USB ports and ethernet port.

My design leaves a bit to be desired. I used a bit of shipping tape to anchor the pieces together. Still, the overall final look is nice and functional.

Final Product
The final product, connected to the garage door, power and sensor.

And finally, a video of the device in action!


Installing Puppet Server on CentOS 7

I do want to write more about the synergy between Puppet and Ansible; but, several people have asked me for more information on getting started with Puppet. The last time I installed Puppet Server, I took extensive notes. I figured I’d share those here to save time for anyone else who’s just getting started installing Puppet Server.

These instructions are specifically related to CentOS 7. However, most of these steps pertain to any Linux-based OS. Additionally, these instructions cover installing Puppet 4.x, since 5.0 is not yet used in Puppet Enterprise.

Escalating to Root

Most of the commands in this document require that you run them as the root user. Using the sudo tool, you can escalate to root for the rest of your session.

[lkanies@puppetlab ~]$ sudo su -
[root@puppetlab ~]#

Setting Up Puppetlabs Repositories

If you are running Red Hat, Debian, Ubuntu or SuSE Linux, Puppetlabs provides repositories to easily install their software. Installing on any other system is a little beyond the scope of this article. If you need to install puppet on another system, read the Puppetlabs documentation for more information.

Start by installing the software collection RPM from Puppetlabs.

[root@puppetlab ~]# rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
Retrieving https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppetlabs-release-pc1-1.1.0-5.el################################# [100%]

Installing Puppet Server

Now that you have installed the repositories, use your package manager to install the following packages: puppetserver, puppetdb and puppetdb-termini. While puppetdb and puppetdb-termini are not required, I recommend you install them unless you have a separate puppetdb server already in place.

[root@puppetlab ~]# yum install puppetserver puppetdb puppetdb-termini puppetdb-terminus

Configuring and Starting Puppet Server

Previously, Puppet master servers ran on Ruby inside a Rack server configuration. Because this required manual configuration and didn’t perform as well under load, Puppetlabs wrote the puppetserver application to run the same code inside a JVM process. When the puppetserver process starts for the first time, it creates a certificate authority. That CA will be used to sign the certificate requests of any agents that connect to it. The common name of the server’s own certificate derives from the hostname of the server. Check now to make sure the host name is what you prefer it to be.

[root@puppetlab ~]# hostname
puppetlab.example.com

You must ensure that this matches the FQDN your nodes will use to contact this server. If it doesn’t, change it now, before configuring the server.
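
If you do need to change it, something like the following should work on CentOS 7 (substitute your own FQDN):

[root@puppetlab ~]# hostnamectl set-hostname puppetlab.example.com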

Next, you edit /etc/puppetlabs/puppet/puppet.conf to set up your server names.

[main]
dns_alt_names = <fqdn of puppet server>
[agent]
server = <fqdn of puppet server>

If you have additional dns names on which this server may be contacted, add them to the dns_alt_names entry (comma-separated). In addition, the server entry informs the puppet agent process what machine to contact as its puppet master. (In this case, the server is its own master.)

Finally, start the puppetserver service.

[root@puppetlab ~]# systemctl start puppetserver

Starting the puppetserver service generates the server’s certificates. Use the following command to print the certificates and verify their details. (We’ll need to source the profile that adds the puppet binary to the path first.)

[root@puppetlab puppet]# . /etc/profile.d/puppet-agent.sh
[root@puppetlab puppet]# puppet cert list --all

Adding the First Puppet Agent – the Server Itself

Because puppet can control services, you should configure puppet to keep its own services running. When writing puppet code, you start with one or more manifests: collections of resources that are controlled by puppet. Most of the time, those manifests are organized into modules, which are smaller chunks of code meant to control one particular technology or set of configuration. For any mature technology, it’s very possible that someone else has already done much of the work for you in writing a module. Modules can be shared and downloaded from a single repository called the Puppet Forge.
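
For example, installing a module from the Forge is a single command (the module name here is just an illustration):

[root@puppetlab ~]# puppet module install puppetlabs-ntp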

Because of the limited scope of this post, I’m going to put all of my code directly into a node (client) definition on the manifest. This is generally a bad practice; but, it will serve to demonstrate puppet more succinctly. If you want to read more on modules and best practices, you should check the documentation on Puppetlabs’s website and Gary Larizza’s blog on the roles and profiles pattern.

Your First Manifest

First, we’re going to create a file called /etc/puppetlabs/code/environments/production/manifests/nodes.pp. This is a manifest- a collection of resources that will be directly applied to puppet agents. Note that we’re in a folder structure called environments/production. Puppet allows you to split your code into different environments so that you can test manifest changes without potentially breaking existing servers. We’re going to start with just the default production environment.

Open /etc/puppetlabs/code/environments/production/manifests/nodes.pp in your favorite editor. Below, I’m going to use the hostname puppetlab.example.com; but, you should replace that with the FQDN of your puppet server.

node 'puppetlab.example.com' {
  notify { 'hello world' : }
}

You have just written a node definition. Node definitions describe some resources that should only be applied on the matching node. Typically, the global manifests (any .pp files located in the manifests folder of an environment) contain mostly node definitions. So, to be clear, we’ve just informed our puppet server to print the ‘hello world’ message when it contacts itself as a client. Run puppet agent with the -t flag to see it in action:

[root@puppetlab manifests]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetlab.example.com
Info: Applying configuration version '1512101298'
Notice: hello world
Notice: /Stage[main]/Main/Node[puppetlab.example.com]/Notify[hello world]/message: defined 'message' as 'hello world'
Notice: Applied catalog in 0.03 seconds

Controlling Services

Hello world examples are great and all; but, shouldn’t we do something more meaningful? Most puppet nodes run an agent daemon that checks in with the puppet server from time to time to make sure the configuration hasn’t drifted. Change your node definition manifest to match the one below to ensure this service (daemon) is running.

node 'puppetlab.example.com' {
  service { 'puppet' :
    ensure => 'running',
  }
}

Now, instead of a ‘notify,’ we have a ‘service’ declaration in our node definition. Both notify and service are examples of resources. Puppet code is written as resources, which are compiled into a catalog for the agent to apply. Run puppet agent again to see your service start.

[root@testpuppet manifests]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetlab.example.com
Info: Applying configuration version '1512101760'
Notice: /Stage[main]/Main/Node[puppetlab.example.com]/Service[puppet]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Main/Node[puppetlab.example.com]/Service[puppet]: Unscheduling refresh on Service[puppet]
Notice: Applied catalog in 0.12 seconds
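
As an aside, the service resource also accepts an enable attribute if you want the daemon started at boot as well; a small extension of the node definition above:

node 'puppetlab.example.com' {
  service { 'puppet' :
    ensure => 'running',
    enable => true,
  }
}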

For more information on the types of resources you can declare, review the puppet documentation.

Adding Another Node

While getting a server to be its own client is stimulating, it seems like now would be a good time to create another client. We started with a server called puppetlab.example.com, so I’m going to work on another CentOS 7 box called testnode.example.com. Add another node definition to your manifest just like the first, except target your new client node instead of your puppet server.

node 'puppetlab.example.com' {
  service { 'puppet' :
    ensure => 'running',
  }
}

node 'testnode.example.com' {
  service { 'puppet' :
    ensure => 'running',
  }
}

Installing and Configuring the Puppet Agent

After setting up your puppet server manifest, you want to install and configure the puppet agent on your client machine. Log in, escalate to root, and run the following commands to install the agent.

[root@testnode ~]# rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
Retrieving https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppetlabs-release-pc1-1.1.0-5.el################################# [100%]
[root@testnode ~]# yum install puppet-agent

Once puppet is installed, we need to tell the agent the name of the server to which it should connect. Open /etc/puppetlabs/puppet/puppet.conf in your favorite editor and add the following (note that you should replace puppetlab.example.com with your puppet server’s name).

[agent]
server = puppetlab.example.com

Generating a Certificate Request

Puppet agents and servers authenticate each other with TLS certificates. When you first create a new node and run puppet agent, a certificate request is generated and sent to the server. You can then sign the request on the server to issue a client certificate. From then on, the agent caches its certificate and uses it to authenticate every time puppet runs.

Start by running puppet agent on the new client to generate a request (note that you probably will have to source the puppet profile first).

[root@testnode ~]# . /etc/profile.d/puppet-agent.sh
[root@testnode ~]# puppet agent -t
Info: Creating a new SSL key for testnode.example.com
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for testnode.example.com
Info: Certificate Request fingerprint (SHA256): AF:E9:E3:D6:3F:9A:0F:CC:83:01:DD:66:55:87:B9:4B:03:9C:C1:1C:7E:BB:12:CE:8B:21:93:6B:83:B3:E4:33
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled

Note, if you see “no route to host,” the CentOS firewall may be blocking port 8140. Run the following commands on the puppet server to open the port and then try the agent run again.

[root@puppetlab manifests]# firewall-cmd --zone=public --add-port=8140/tcp --permanent
success
[root@puppetlab manifests]# firewall-cmd --reload
success

Your command created a certificate request on the new client and sent that request to the master. Take note of the fingerprint.

Signing a Certificate Request

On your puppet server, you’ll now want to sign the certificate request for your new node so that the client and server trust each other. Start by running the following command, which prints out all of the certificates that haven’t yet been signed.

[root@puppetlab manifests]# puppet cert list
  "testnode.example.com" (SHA256) AF:E9:E3:D6:3F:9A:0F:CC:83:01:DD:66:55:87:B9:4B:03:9C:C1:1C:7E:BB:12:CE:8B:21:93:6B:83:B3:E4:33

You can see our request from testnode and its fingerprint. Check that fingerprint against the one that the client gave you and make certain they match. If they do, you can sign the request and issue the certificate with the following command.

[root@puppetlab manifests]# puppet cert sign testnode.example.com
Signing Certificate Request for:
  "testnode.eample.com" (SHA256) AF:E9:E3:D6:3F:9A:0F:CC:83:01:DD:66:55:87:B9:4B:03:9C:C1:1C:7E:BB:12:CE:8B:21:93:6B:83:B3:E4:33
Notice: Signed certificate request for testnode.example.com
Notice: Removing file Puppet::SSL::CertificateRequest testnode.example.com at '/etc/puppetlabs/puppet/ssl/ca/requests/testnode.example.com.pem'

Running the Agent for the First Time With a Signed Certificate

Now that our certificate is in order, go back to your test node and run puppet agent one last time. You should see it start the puppet daemon as expected.

[root@testnode ~]# puppet agent -t
Info: Caching certificate for testnode.example.com
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for testnode.example.com
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for testnode.example.com
Info: Applying configuration version '1512104337'
Notice: /Stage[main]/Main/Node[testnode.example.com]/Service[puppet]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Main/Node[testnode.example.com]/Service[puppet]: Unscheduling refresh on Service[puppet]
Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
Notice: Applied catalog in 0.22 seconds

Assuming everything ran properly, you should see puppet start the puppet service as you declared in your node definition.

Aside: Why Did I Need to Generate a Cert Only on the New Node

It may seem odd that you only created a certificate for your second client and not for the puppet server itself. The reason is that when the puppetserver service first started, it generated a host certificate for itself and signed it with its own CA. Since this is a valid certificate that matches the client (puppetlab.example.com) and is signed by the CA (self-signed, in this case), the server accepts it as trusted.

Further Reading

This is just a base tutorial on how to get bootstrapped with a puppet server in a CentOS environment. Hopefully, working through this has whetted your appetite. If so, I’d suggest reading the puppet documentation, especially the sections on the main manifest(s), environments, module fundamentals, and the puppet language. If you’re looking for more real world examples, you can also review the essential configuration quick start guides for real world scenarios and how puppet helps solve them.

In conclusion, remember that your servers are cattle and not pets. Having a puppet server and agent model in place in your environment will help you avoid configuration problems that are hard to solve in the future.


Ansible or Puppet? Both! – Part 1

Ansible or Puppet? A Great Debate?

Should I install Ansible or Puppet? In short, I feel both have their place.

Anyone who has asked me about work in the last few years knows that I have a passion for automation tools. My favorite for configuration automation has always been Puppet. Puppet is a mature infrastructure-as-code tool that describes a desired state and enforces it. Having used it for years, I can say that Puppet handles most of my management needs. However, there are some tasks that Puppet just doesn’t handle as well. After playing a bit with Ansible, I believe it can be the tool to fill many of those gaps.

What Puppet Gets Right

Puppet’s domain-specific language is powerful while being descriptive. Its agents are portable and cross-platform. Its server is mature and stable. It handles building a catalog of configuration quite well and provides a lot of descriptive power. In short, Puppet is adept at defining and enforcing a configuration baseline. Your puppet code describes infrastructure configurations and puppet makes sure that they exist and stay consistent.

Beyond that, puppet features many other benefits:

  • the Puppet Forge, a community of module developers with a strong following
  • a robust ancillary toolset, including r10k configuration manager for advanced deployment of your code
  • hiera, a tool used to separate code from configuration (keeping your code clean and reusable)
  • a slick enterprise edition, if you need supported deployment in a larger environment

Where Puppet Falls Short

While I’ve derived great benefit from puppet and can sing its praises longer than most are comfortable with, there are a few gaps in puppet’s capabilities.  Here are a few of the gaps and drawbacks I most often find when using puppet.

Reliance on an Agent

Puppet’s agent is an asset, enabling many of the benefits I’ve listed above.  However, this also adds a slight burden to the configuration.  The first puppet code I write is usually a profile class to manage puppet.  If something happens and the agent breaks, getting back on track is a manual task.

This isn’t strictly a drawback.  As I mentioned above, the agent enables many of puppet’s most powerful features.  It is worth noting that the agent isn’t all sunshine and puppies, though.  That little bit of pain is part of the fee we pay for well managed infrastructure.

Bootstrapping and Orchestration

Related to the agent is the problem of bootstrapping.  While puppet is great at taking a server that’s got a base OS installed and configuring it, it’s not as great at kicking off a task to install the OS or create a VM in the first place.  Additionally, configuring a new puppet master server for your environment is probably a manual process. You may be required to install an agent, pull down some code, and get things configured properly before you can do your first puppet run.

Lack of Procedural Tools or Tasks

Puppet is designed with desired state in mind- that is, you should build your puppet code to describe your desired outcome and let the tool decide how to do the work.  This is awesome.  However, there are times when you want to do tasks or time-based procedures.

Perhaps you want to write a task to bootstrap a new puppet server in your environment.  Maybe you want to kick off a job that will update and reboot all of your nodes in a particular order.  Possibly you want to tell VMWare to build you a new cluster of servers with a given set of IPs.

Puppet by itself cannot solve these problems.  I determined Ansible to be a good fill-in for these gaps.

(As an aside, Puppetlabs, the company that develops puppet, provides a tool to solve this problem called mcollective.  I won’t go into mcollective vs Ansible here; but, for my own uses, I’ve found Ansible to be a better fit.)

How Ansible Helps

Ansible, while also an infrastructure-as-code tool, doesn’t specifically describe desired state.  Instead, it enables the building of playbooks– blocks of code that describe tasks and inter-dependencies to operate on a server and achieve a verified result.  Ansible resolves a number of problems left to us by Puppet.

Ansible is Agentless

There are no agents when working with Ansible.  Instead, Ansible relies on SSH (or PowerShell remoting / WinRM for Windows servers). Since SSH is common to most servers, there isn’t anything to install.

Because there are no agents and the underlying communication is a common component in servers, Ansible is a little less brittle than puppet.  A server with an OS installed is ready for Ansible out of the box.
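
For example, once a host is reachable over SSH and listed in your inventory, you can verify connectivity with an ad-hoc ping, with no agent installation required:

ansible all -m ping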

Ansible has Modules for Orchestration

Ansible can build virtual machines.  Tasks can be combined to create a cluster.  Ansible can configure networking relatively easily. While it cannot provide bare-metal bootstrapping (you still need PXE or some other installer to accomplish that), it can build an environment in the cloud from the ground up.

Ansible Runs Tasks

I’m not going to lie to you: at the heart of it, Ansible is a scripting engine.  It uses Python to write code, ships it to your server, and runs it.  That’s not a bad thing: Ansible executes powerful tasks based on its language.  Because of this and the nature of playbooks, we can write timed tasks in Ansible that couldn’t be written in Puppet alone.  I can write a playbook to upgrade my environment.  Ansible can reload my webserver process on a set of machines.  I can execute a source control pull on all of my nodes at once. I don’t want to enforce this type of action every minute. I want to achieve these goals at times of my choosing.
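
As an illustration, here is a minimal sketch of a playbook for one of those one-off goals (the host group and service name are placeholders):

# reload_web.yml: a sketch of a procedural, run-when-I-say-so task
---
- hosts: webservers
  become: true
  tasks:
    - name: Reload the web server process
      service:
        name: httpd
        state: reloaded

You would kick it off deliberately with ansible-playbook reload_web.yml rather than having it enforced on a schedule.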

So, Which Tool is Better? Ansible or Puppet?  Both.

Puppet and Ansible compete for market share.  They build similar tools and attempt to differentiate themselves. That said, you can use them together easily.  In the environments I’ve managed, I chose to employ both of these tools.  They complement each other well and can be used in concert without issue.  For example, if you have an existing puppet environment, puppet can create an Ansible configuration for you.

In the rest of this entry, I’ll cover how to start using Ansible by configuring it from Puppet.

Creating an Ansible Configuration In Puppet

To start using Ansible, I leveraged my existing puppet configuration.  Notably, the rest of this blog will make heavy use of the roles and profiles pattern.  A role is a puppet class describing a type of machine, such as a webserver or database server.  A profile, on the other hand, describes a configuration for a specific technology, such as Apache or MySQL.  For more information on using roles and profiles, read Gary Larizza’s blog post on the subject.  He describes it better than I could.

Creating a Profile to Install Ansible

First of all, start by making a very basic profile that installs Ansible.  Below is a good example.

# profile class to install and configure ansible
# (ensure_packages requires the puppetlabs/stdlib module)
class profiles::ansible
{
  ensure_packages(['ansible'])
}

Creating a Role for Your Ansible Control Machine

Next we want to define the role which employs this profile. Where possible, roles should be named in a way that’s technology agnostic.

# role for an orchestration server
class roles::orchestrator inherits ::roles::base
{
  include ::profiles::ansible
}

Note that we have a “base” role upon which all other roles are built. This provides all of the configuration that every node in our environment should enforce. For now, let’s pretend it’s empty.

# role applied to all nodes
class roles::base
{

}

Applying Your Role to a Server

Finally, we apply the role to a node in our environment. Our role employs the profile that installs Ansible. Therefore, puppet will enforce that the package is installed on the target.

node 'rodrigo.example.com'
{
  include ::roles::orchestrator
}

Deploy this code to your puppet server. Now, we’ll run the puppet agent on the orchestrator machine to see it install the Ansible package.

[root@rodrigo~]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for rodrigo.example.com
Info: Applying configuration version '1508220476'
Notice: /Stage[main]/Profiles::Ansible/Package[ansible]/ensure: created
Notice: Applied catalog in 32.28 seconds

Using Puppet to Inform Ansible About our Environment

I’ve written 3 code files to do what one command on each server could do. So, why use puppet to do this? Because puppet can also be used to provide context about our environment to Ansible.

Ansible uses a hosts file in /etc/ansible/hosts to determine what servers are available and how to group them. Follow the steps below to create a puppet configuration that auto-generates this file. The goal is to produce a file that looks like this:

[all_servers]
server1.example.com
server2.example.com
server3.example.com
rodrigo.example.com

Creating a Defined Type for a Host Entry

Since we’re interested in creating multiple hosts in the host configuration file, we’ll create a defined type in puppet to describe a single entry in that file. Defined types are reusable descriptions of resources that we expect to duplicate in puppet classes. (For the below type, I’m using the concat module available on Puppet Forge, which assembles single files from multiple fragments. See their puppet forge page for more information.)

# defined type to define ansible host entry
define profiles::ansible::conf::host_definition
(
  $host = $title, #the name of the host
  $group = 'all_servers', #the group under which the host will fall in the file
)
{
  $hostsfile = '/etc/ansible/hosts'

  ensure_resource('concat', $hostsfile, {
    'owner' => 'root',
    'group' => 'root',
    'mode'  => '0644',
  })

  ensure_resource('concat::fragment', "${hostsfile}_${group}", {
    'target'  => $hostsfile,
    'content' => "\n[${group}]\n",
    'order'   => "${group}",
  })

  ::concat::fragment { "${hostsfile}_${group}_${host}" :
    target  => $hostsfile,
    content => "${host}\n",
    order   => "${group}_${host}",
  }
}

First, we’re creating a single hosts file. This is a defined type for every entry in that file. Therefore, we’ll be calling it many times. Hence, we can’t just declare the concat resource for the file or we’ll get duplicate resource entries. The ensure_resource function allows us to ensure that the resource is in the catalog without erroring if it already exists. Second, we’re going to only want one line that contains the group name for the whole file. This could also create duplicate errors, so we use ensure_resource again. Finally, we put the hostname under the group for which we’ve defined it.

To employ our defined type to create an entry, we can instantiate it in this way:

::profiles::ansible::conf::host_definition{ 'server1.example.com' :
  host  => 'server1.example.com',
  group => 'all_servers',
}

(Note that the host and group params are unnecessary here because they default to the title and ‘all_servers,’ respectively.)

Exporting Our Resources

While this achieves our goal, it isn’t useful on its own. We’d rather have each node describe itself than list every server by hand in our puppet code. Therefore, we’ll export these resources from our base role, which is applied by every node.

# role applied to all nodes
class roles::base
{
  @@::profiles::ansible::conf::host_definition { $::fqdn : }
}

(Because we don’t need to specify the host or group, I’ve removed them here; the title is the node’s own FQDN, so each node exports a unique entry.) Note the two at signs at the beginning of the declaration. This is an exported resource. Rather than defining hosts in one place, each host can export its own definition to collect later.

Collecting the Resources to Create a Hosts File

Finally, we’ll modify the orchestrator role to collect the resources exported by the other servers.

# role for an orchestration server
class roles::orchestrator inherits ::roles::base
{
  include ::profiles::ansible
  Profiles::Ansible::Conf::Host_definition <<| |>>
}

Since we’ve used the spaceship operator (<<| |>>), our orchestrator role collects the exported host resources. As a result, puppet creates /etc/ansible/hosts on our Ansible server with an entry for every host in our environment.

Puppet has now informed its good friend Ansible of the lay of the land.

More to Come

In my next blog post, I’ll cover more Ansible usage and how Ansible can be used to run and deploy puppet.


Android RecyclerViews – Advanced UI Interaction

The Road Thus Far

So, previously I covered that the RecyclerView is a replacement for the ListView in Android that increases efficiency by “recycling” the view objects that are currently visible. While that’s great for efficiency, usability becomes more of a challenge as RecyclerViews are more divorced from the data they represent than ListViews were. Further, I demonstrated the ItemViewHolder pattern, which creates a data structure in which your view is contained and modified as data is swapped in and out while the user scrolls.

Android’s RecyclerView and Showing Data Changes

I demonstrated some extremely simple CRUD functionality with a button which allows us to add to our RecyclerView and update the view to display all contained items. However, what happens when one wants to perform the other CRUD operations? What if someone wants to update or delete a record and reflect that in the RecyclerView in real time? One can always modify the underlying dataset and reset the adapter on the RecyclerView. This leads to a bad user experience, though, since the view will reset back to the top. Additionally, how do we interact by touching individual items? Since we don’t have an onItemClickListener (which makes sense, since the “item” in play may change every time the view is recycled), there doesn’t seem to be a convenient way to get the item out to work on it.

For updating, the ViewHolder has all of the access we need; but, deleting requires that we update the view of the whole set, not the one individual item. The solution to this issue is to flip the problem on its head and use an interface to inform some other object that data has changed and the RecyclerView must be updated to reflect it. This other object must have the scope to call methods on the RecyclerView. Our Activity is a good candidate to be the arbiter of this activity.

Updating Using ViewHolder as a Click Listener

For updates, it makes sense that the tapped view’s ViewHolder could handle the click event. It has access to redraw the view based on new data and can modify the data to update it. Let’s implement the View.OnClickListener interface to perform updates on a clicked view’s data from the ViewHolder.

public class SomeModelRecyclerViewAdapter
        extends RecyclerView.Adapter<SomeModelRecyclerViewAdapter.ViewHolder>{

    private List<SomeModel> data;

    public SomeModelRecyclerViewAdapter(List<SomeModel> data) {
        this.data = data;
    }

    @Override
    public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View view = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.model_item, parent, false);
        ViewHolder holder = new ViewHolder(view);
        view.setOnClickListener(holder);
        return holder;
    }

    @Override
    public void onBindViewHolder(ViewHolder holder, int position) {
        holder.someModel = data.get(position);
        holder.bindData();
    }

    @Override
    public int getItemCount() {
        return data.size();
    }

    public static class ViewHolder extends RecyclerView.ViewHolder
    implements View.OnClickListener {

        private static final SimpleDateFormat dateFormat =
                new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");

        SomeModel someModel;

        TextView modelNameLabel;
        TextView modelDateLabel;

        public SomeModel getSomeModel() {
            return someModel;
        }

        public void setSomeModel(SomeModel someModel) {
            this.someModel = someModel;
        }

        public ViewHolder(View itemView) {
            super(itemView);
        }

        public void bindData() {
            if (modelNameLabel == null) {
                modelNameLabel = (TextView) itemView.findViewById(R.id.modelNameLabel);
            }
            if (modelDateLabel == null) {
                modelDateLabel = (TextView) itemView.findViewById(R.id.modelDateLabel);
            }
            modelNameLabel.setText(someModel.name);
            modelDateLabel.setText(dateFormat.format(someModel.addedDate));
        }

        @Override
        public void onClick(View v) {
            // Update the record (set its date to now), persist it, and redraw this view
            someModel.addedDate = new Date();
            someModel.save();
            bindData();
        }
    }
}

So, here’s what has changed. The ViewHolder now implements View.OnClickListener. The implementation updates the record by changing the date to right now, saving the updated data, and re-binding the view.

    public static class ViewHolder extends RecyclerView.ViewHolder
    implements View.OnClickListener {

        ...

        @Override
        public void onClick(View v) {
            someModel.addedDate = new Date();
            someModel.save();
            bindData();
        }
    }

In addition, the RecyclerViewAdapter now sets the ViewHolder as the View‘s OnClickListener after it creates the ViewHolder instance.

public class SomeModelRecyclerViewAdapter
        extends RecyclerView.Adapter<SomeModelRecyclerViewAdapter.ViewHolder>{

    ...

    @Override
    public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View view = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.model_item, parent, false);
        ViewHolder holder = new ViewHolder(view);
        view.setOnClickListener(holder);
        return holder;
    }

    ...

}

Dynamically Showing Deletions in Android’s RecyclerView

Reflecting the update of a single data item wasn’t quite as easy as adding one to the dataset, but deleting an item presents still more challenges. We don’t want to delete the item and reset the whole UI, forcing the user to start back at the top of what might be a very large list. Instead, we need an outside control structure with scope access to our RecyclerView to handle a deletion event and inform the UI to update.

To handle the messaging when an item is deleted, we’ll create a new interface called the SomeModelDeletedListener. This interface will enforce a single method: onSomeModelDeleted, which takes the model that has been deleted and the position in the RecyclerView that it currently occupies (and, hence, must vacate). It will be the responsibility of the assigned instance of the SomeModelDeletedListener to update the UI to reflect the data change.

public interface SomeModelDeletedListener {
    void onSomeModelDeleted(SomeModel model, int position);
}

Since our Android Activity handles our RecyclerView and other UI, it might make sense to delegate this responsibility to it. (Note, if you are using asynchronous calls via AsyncTask or thread handlers, the activity may no longer be the proper place to handle this event. This is just the simplest demonstration possible.) When the Android Activity receives a call that lets us know our item has been deleted, it should update the RecyclerView to illustrate this data change.

To tie it all together, we’ll add an OnLongClickListener to allow our users to long-press to delete data. The ViewHolder can handle the deletion from the database and notify the SomeModelDeletedListener that it has made that change.

Whoo, that’s a lot. Let’s put it into practice.

Let’s start at the long-press, which we’ll again handle in the RecyclerViewAdapter.ViewHolder. Here’s what we add to the class:

   public static class ViewHolder extends RecyclerView.ViewHolder
    implements View.OnClickListener, View.OnLongClickListener {

        ...

        SomeModelDeletedListener someModelDeletedListener;
        public SomeModelDeletedListener getSomeModelDeletedListener() {
            return someModelDeletedListener;
        }
        public void setSomeModelDeletedListener(SomeModelDeletedListener someModelDeletedListener) {
            this.someModelDeletedListener = someModelDeletedListener;
        }

        ...

        public ViewHolder(View itemView) {
            this(itemView, null);
        }

        public ViewHolder(View itemView, SomeModelDeletedListener someModelDeletedListener) {
            super(itemView);
            this.someModelDeletedListener = someModelDeletedListener;
        }

        ...

        @Override
        public boolean onLongClick(View view) {
            if (someModel != null) {
                // Deletion from the database
                someModel.delete();
                if (someModelDeletedListener != null) {
                    someModelDeletedListener.onSomeModelDeleted(someModel, getAdapterPosition());
                }
            }
            return true;
        }
   }

We’ve added an instance of our SomeModelDeletedListener to the class to handle deletions, along with a getter and setter for convenience. We’ve also added a constructor that takes both a View and a SomeModelDeletedListener, and modified the original constructor to delegate to the new one. Lastly, we’ve implemented View.OnLongClickListener’s onLongClick method and added delete logic (including informing the listener, if present) that runs when the user long-presses on the item.

We’ll also have to make some changes to the enclosing adapter to make sure all of this gets wired up and passed through. Here are those changes:

public class SomeModelRecyclerViewAdapter
        extends RecyclerView.Adapter<SomeModelRecyclerViewAdapter.ViewHolder>{

    ...

    private SomeModelDeletedListener someModelDeletedListener;
    public SomeModelDeletedListener getSomeModelDeletedListener() {
        return someModelDeletedListener;
    }
    public void setSomeModelDeletedListener(SomeModelDeletedListener someModelDeletedListener) {
        this.someModelDeletedListener = someModelDeletedListener;
    }

    ...

    public SomeModelRecyclerViewAdapter(List<SomeModel> data) {
        this(data, null);
    }
    public SomeModelRecyclerViewAdapter(List<SomeModel> data,
                                        SomeModelDeletedListener someModelDeletedListener) {
        this.data = data;
        this.someModelDeletedListener = someModelDeletedListener;
    }

    ...

    @Override
    public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View view = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.model_item, parent, false);
        ViewHolder holder = new ViewHolder(view, someModelDeletedListener);
        view.setOnClickListener(holder);
        view.setOnLongClickListener(holder);
        return holder;
    }

    ...
}

Similarly to the ViewHolder, we’ve added an instance of the SomeModelDeletedListener with getters, setters and proper constructors to handle it. Additionally, we’ve added the SomeModelDeletedListener to our constructor call for the ViewHolder. Finally, we’ve also set the ViewHolder as the LongClickListener for the View.

Lastly, we need to change our Activity into a proper SomeModelDeletedListener. Here are those changes:

public class MainActivity extends AppCompatActivity implements SomeModelDeletedListener{

    ...

    @Override
    public void onSomeModelDeleted(SomeModel model, int position) {
        if (modelList != null) {
            SomeModelRecyclerViewAdapter adapter =
                    (SomeModelRecyclerViewAdapter) modelList.getAdapter();
            adapter.notifyItemRemoved(position);
        }
    }
}

Here, we simply implement the interface and override the onSomeModelDeleted method. This method updates the UI by notifying the RecyclerView’s adapter to remove the item at that position.
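One bit of wiring not shown above is how the Activity hands itself to the adapter as the SomeModelDeletedListener. Assuming the same setupRecyclerView helper from my earlier post, a minimal sketch might look like this:

    private void setupRecyclerView() {
        List<SomeModel> allModels = SomeModel.listAll(SomeModel.class);
        // The Activity implements SomeModelDeletedListener, so pass it into the adapter
        SomeModelRecyclerViewAdapter adapter =
                new SomeModelRecyclerViewAdapter(allModels, this);
        modelList.setHasFixedSize(true);
        modelList.setLayoutManager(new LinearLayoutManager(this));
        modelList.setAdapter(adapter);
    }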

Extra Credit – Updating the In-Memory Data When Deleting

If you made all of the changes above, everything should work properly most of the time. However, if Android ever redraws your list without going back to the database to re-fetch, your deleted item will suddenly reappear. What gives?

Remember that your RecyclerViewAdapter has a List of SomeModel objects that it uses to draw. If you never update that list but redraw with the same data, your deleted items will magically reappear. The solution is to write a method which removes the item from the in-memory dataset and call it from your SomeModelDeletedListener. (A hint here: our SomeModelDeletedListener represents something that reacts to data being deleted from the database, not any of the in-memory structures.) Here’s the custom method example:

public class SomeModelRecyclerViewAdapter
        extends RecyclerView.Adapter<SomeModelRecyclerViewAdapter.ViewHolder>{

    ...

    private List<SomeModel> data;

    ...

    public void removeItemAt(int position) {
        data.remove(position);
    }

}

Now, to implement the behavior in the Android Activity, we modify our onSomeModelDeleted implementation:

public class MainActivity extends AppCompatActivity implements SomeModelDeletedListener{
    
    ...

    @Override
    public void onSomeModelDeleted(SomeModel model, int position) {
        if (modelList != null) {
            SomeModelRecyclerViewAdapter adapter =
                    (SomeModelRecyclerViewAdapter) modelList.getAdapter();
            adapter.removeItemAt(position);
            adapter.notifyItemRemoved(position);
        }
    }
}

While all of this code seems like a lot of work, it makes for efficient handling of your views and a clean, responsive UI that reflects what the user expects to see with each action. The additional effort is worth the cost, both because of the added efficiency of RecyclerView and because Android now steers list development toward RecyclerView rather than the old ListView approach.

Continue Reading

Android RecyclerView, ListView Replacement

Android ListView, Inefficient but Convenient

When I first began using Android, back in the Jellybean days, there was a lovely widget for displaying lists of data from a data source: the ListView. This widget was exceedingly convenient, as it provided an onItemClickListener for easy data interaction. The listener came back with the View that was tapped and the position in the data that the view represented, allowing you to easily modify both.

The ListView has a major drawback: for every data item in your list, a View is generated. This won’t be an immediately obvious problem if you have very few pieces of data or your views are lightweight. However, if you have a very large dataset or if you start to use heavier widgets (such as ImageViews), you can easily create a massive and sluggish UI that eats your memory and slows your device while not even being visible on the screen. Luckily, there’s a better way: the RecyclerView.

RecyclerView – The Efficient Alternative

The RecyclerView widget takes a much more practical approach: it only inflates enough View objects to fill the screen (plus or minus a couple for smooth scrolling). As you slide up and down in the list, views are recycled as they exit the screen, re-bound with new data, and pushed back into the other end of the widget where you’ll see them. This solves the problem of massive datasets and heavy widgets crashing your app or making it slow to a crawl; however, this efficiency comes with a price: there’s now a disconnect between the View and the single piece of data (or datum) that it represents. Because of this disconnect, you have to write your own RecyclerViewAdapter to handle the data (instead of relying on the more generic ArrayAdapter, which worked very conveniently with ListView).

Additionally, the framework now enforces a pattern called the ViewHolder Pattern. In this pattern, a special class maintains the state of the View that displays a single piece of data. The ViewHolder class is a static inner class, meaning it carries no implicit reference to its enclosing class and stays memory-light (the “static” portion of that is very important). The ViewHolder can also serve as a utility class, helping you handle interactions that affect pieces of data in the list.

Writing Your Custom RecyclerViewAdapter and ViewHolder Classes

The RecyclerViewAdapter is what provides data to a RecyclerView. Because of the strong adherence to the ViewHolder Pattern, there is a built-in abstract class, RecyclerView.ViewHolder, that you must extend as a static inner class of your adapter to serve as your ViewHolder. Take a look at the following code:

public class SomeModelRecyclerViewAdapter
        extends RecyclerView.Adapter<SomeModelRecyclerViewAdapter.ViewHolder>{
 
    /**
     * A list of data that the recyclerview will display
     */
    private List<SomeModel> data;
 
    /***
     * Constructor to build the adapter
     * @param data the data to be displayed by the views
     */
    public SomeModelRecyclerViewAdapter(List<SomeModel> data) {
        this.data = data;
    }
 
    /***
     * Creating the view holder (only called the first time the view is generated)
     *
     * @param parent
     * @param viewType
     * @return
     */
    @Override
    public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View view = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.model_item, parent, false);
        ViewHolder holder = new ViewHolder(view);
        return holder;
    }
 
    /***
     * "Binding" the data to the view holder
     *
     * This function is what informs a holder that its data has changed (i.e., every
     * time the view is recycled)
     *
     * @param holder
     * @param position
     */
    @Override
    public void onBindViewHolder(ViewHolder holder, int position) {
        holder.someModel = data.get(position);
        holder.bindData();
    }
 
    /***
     * Returns the size of our data list as the item count for the adapter
     *
     * @return
     */
    @Override
    public int getItemCount() {
        return data.size();
    }
 
    /***
     * The ViewHolder is our "Presenter"- it links the View and the data to display
     * and handles how to draw the visual presentation of the data on the View
     */
    public static class ViewHolder extends RecyclerView.ViewHolder{
 
        /***
         * A formatter to make our date readable
         */
        private static final SimpleDateFormat dateFormat =
                new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
 
        /***
         * The model (datum) CURRENTLY to be displayed by the view
         *
         * Note that this should be expected to change and the view will need to update to reflect
         * changed data.
         */
        SomeModel someModel;
 
        /***
         * A label from the view to display some info about the datum
         */
        TextView modelNameLabel;
        /***
         * Another label from the view
         */
        TextView modelDateLabel;
 
        /***
         * Getter for the datum.
         *
         * @return
         */
        public SomeModel getSomeModel() {
            return someModel;
        }
 
        /***
         * Setter for the datum
         *
         * @param someModel
         */
        public void setSomeModel(SomeModel someModel) {
            this.someModel = someModel;
        }
 
        /***
         * ViewHolder constructor takes a view that will be used to display a single datum
         * @param itemView
         */
        public ViewHolder(View itemView) {
            super(itemView);
        }
 
        /***
         * This is a function that takes the piece of data currently stored in someModel
         * and displays it using this ViewHolder's view.
         *
         * This will be called by the onBindViewHolder method of the adapter every time
         * a view is recycled
         */
        public void bindData() {
            if (modelNameLabel == null) {
                modelNameLabel = (TextView) itemView.findViewById(R.id.modelNameLabel);
            }
            if (modelDateLabel == null) {
                modelDateLabel = (TextView) itemView.findViewById(R.id.modelDateLabel);
            }
            modelNameLabel.setText(someModel.name);
            modelDateLabel.setText(dateFormat.format(someModel.addedDate));
        }
    }
}

Here are a few things to notice about this sample. SomeModel is a piece of data (by “model” in this name, I’m referring to a data model as present in the Model, View, Controller pattern). The adapter I’ve created takes a list of these records as the data. In my examples, I’m using SugarOrm behind the scenes, but that shouldn’t be relevant to this demonstration.

R.layout.model_item refers to an XML layout. This layout represents a single listing of a SomeModel record within the RecyclerView.

I’ve also created a ViewHolder (extending RecyclerView.ViewHolder). It takes a View in its constructor that it will use to display some data, and it has a SomeModel property (note the getters and setters) for the piece of data that will be displayed on the View. Note that I don’t take in the data as part of the constructor; this is to reinforce how the RecyclerView works: the View on the ViewHolder will never change, but, as the user scrolls, a new SomeModel instance may be passed as the data to display on the view.

So, what’s happening here?

When you create an instance of this adapter, pass some data and apply the adapter to a RecyclerView, the following happens: the RecyclerView starts generating enough Views to fill the available space on the screen. For each of these, it also calls the adapter’s onCreateViewHolder method to instantiate a ViewHolder. The function inflates the view using the given XML layout in preparation for some data and creates a ViewHolder to wrap the View. Note that the ViewHolder DOES NOT apply the data to the view here.

Next, the RecyclerView iterates through the data (using the getItemCount method to determine the end of the list) and displays the items currently visible by calling onBindViewHolder. In my example, I decided to be slightly less efficient and store the data item itself within the ViewHolder. Another way to handle this could have been to expose the TextView widgets from the ViewHolder and set the text directly in onBindViewHolder rather than within the ViewHolder code itself. As written, the code takes the data from our SomeModel instance and uses the View within the ViewHolder to represent the record visually.
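For reference, that alternative might look roughly like the sketch below, assuming the ViewHolder looks up and exposes its two TextView fields in its constructor and the date formatter is reachable from the adapter:

    @Override
    public void onBindViewHolder(ViewHolder holder, int position) {
        SomeModel model = data.get(position);
        // Bind directly from the adapter instead of storing the model on the holder
        holder.modelNameLabel.setText(model.name);
        holder.modelDateLabel.setText(dateFormat.format(model.addedDate));
    }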

As the user scrolls, Views that are pushed off screen are recycled: they come back re-initialized with new data via the onBindViewHolder method and are placed back into the RecyclerView on the opposite side.

Using the RecyclerViewAdapter in a RecyclerView

To use our newly minted adapter, we just need to create some data, instantiate our adapter, and apply it to a RecyclerView that is set up in our activity layouts. See the below code for an example:

public class MainActivity extends AppCompatActivity {

    EditText modelName;
    Button addModelButton;
    RecyclerView modelList;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        modelName = (EditText) findViewById(R.id.modelName);
        addModelButton = (Button) findViewById(R.id.addModelButton);
        modelList = (RecyclerView) findViewById(R.id.modelList);
        addModelButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                SomeModel newRecord = new SomeModel();
                newRecord.name = modelName.getText().toString();
                newRecord.save();
                setupRecyclerView();
            }
        });
        setupRecyclerView();
    }

    private void setupRecyclerView() {
        List<SomeModel> allModels = SomeModel.listAll(SomeModel.class);
        SomeModelRecyclerViewAdapter adapter = new SomeModelRecyclerViewAdapter(allModels);
        modelList.setHasFixedSize(true);
        modelList.setLayoutManager(new LinearLayoutManager(this));
        modelList.setAdapter(adapter);
    }
}

In the above code, when our activity is created we pull the RecyclerView from the layout with findViewById into the modelList variable. Then, in the setupRecyclerView method, we grab a list of SomeModels, instantiate an adapter with them, and apply that adapter to the modelList object.

Note that we have to inform the RecyclerView whether it has a fixed size, and we have to give it a layout manager. Layout managers are a bit beyond the scope of this post, but one time you might want something other than a LinearLayoutManager is when you want to display your data as a grid. See the Creating Lists and Cards Android documentation for more information on layout managers.
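For example, swapping the list for a two-column grid is just a matter of changing the layout manager; a minimal sketch (using the support library’s GridLayoutManager) would be:

        // Show the same adapter in a two-column grid instead of a vertical list
        modelList.setLayoutManager(new GridLayoutManager(this, 2));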

You’ll notice that I also set up an EditText widget and a Button to create a new instance of SomeModel and then update the RecyclerView. This is the most basic and obvious way to update a RecyclerView: by setting up a new adapter instance and applying it to the view.

The result looks a lot like a ListView, but it performs better and affords more customization long term.

For completeness, here’s my model class SomeModel:

public class SomeModel extends SugarRecord{
    @Column(name="Name")
    public String name;
    @Column(name="AddedDate")
    public Date addedDate = new Date();
}

Additionally, here are my layouts for both the MainActivity (activity_main) and the list item (model_item):

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context="com.aaronmbond.recyclerviewdilemaexample.MainActivity">

    <EditText
        android:id="@+id/modelName"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentStart="true"
        android:layout_alignParentTop="true"
        />

    <Button
        android:id="@+id/addModelButton"
        android:layout_alignParentStart="true"
        android:layout_below="@id/modelName"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/addModelButtonText"
        />

    <android.support.v7.widget.RecyclerView
        android:id="@+id/modelList"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/addModelButton"
        android:layout_alignParentStart="true"
        />

</RelativeLayout>

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <TextView
        android:id="@+id/modelNameLabel"
        android:layout_alignParentStart="true"
        android:layout_alignParentTop="true"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        />
    <TextView
        android:id="@+id/modelDateLabel"
        android:layout_alignParentStart="true"
        android:layout_below="@id/modelNameLabel"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        />

</RelativeLayout>

Conclusion

While the Android ListView widget provided a lot of convenience when displaying a set of data, it did so at great cost to efficiency. The Android RecyclerView solves many of these problems, but requires a bit more care and feeding out of the box.

In my next post, I’ll cover more than just display. I’ll be covering how to update the RecyclerView to the latest data and how to handle users touching individual items within the list.

Continue Reading