Docker has only emerged in the past few years, but Linux containers have been around much longer than that. With the excitement around containers and Docker, the Customer Portal team at Red Hat wanted to look at how we could leverage Docker to deploy our application. We continually iterate on how code gets released to our customers, and containers seemed like a logical next step for speeding up our application delivery.
Our method of defining installed Drupal modules centered around RPM packaging. Bundling Drupal modules inside RPMs made sense for aligning ourselves with the release processes of other workstreams. The issue with this process for Drupal is managing many RPMs (we use over 100 Drupal modules on the Customer Portal). As our developers began to take over more of the operational aspects of Drupal releases, RPM packaging didn't flow well with the rest of the coding work our developers were maintaining. Packaging a Drupal module inside an RPM also adds an extra abstraction layer of versioning, so our developers didn't always know exactly which version of a module, with which patches, was installed.
Along with RPM packaging, there wasn't a unified way in which code was installed per environment. For our close-to-production and production environments, this centered around versioned RPM updates via puppet. For our CI environment, however, in order to speed up application delivery there, contributed Drupal modules had to be installed manually by the developer (mainly to support integration testing, in case a module needed to be removed quickly). Developer environments were different again: they didn't always have a packaged RPM for delivering code; instead, the custom code repository needed to be directly accessible to the developer.
Delivering a Docker image
You can probably see how unwieldy juggling over one hundred modules with custom code can become. Our Drupal team wanted to deliver one artifact that defined the entire release of our application. Docker doesn't solve this simply by shoving all the code inside a container, but it does allow for an agreement to be established as to what our Drupal developers are responsible for delivering. The host that runs this container can have the tools it needs to run Drupal with the appropriate monitoring. The application developer can define the entirety of the Drupal application, all the way from installing Drupal core to supplying our custom code. Imagine drawing a circle around all the parts that existed on the host that define our Drupal app. This is essentially how our Docker image was formed.
Below is the Dockerfile we use to deploy Drupal inside a container. You might notice one important piece that is missing: the database! It is assumed that your database host exists outside this container and can be connected to from this application (it could even be another running container on the host).
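As a sketch of what "the database lives outside the container" can look like, the database could itself be a sibling container on the same host. The image name, credentials, and database name below are illustrative placeholders, not part of our actual setup:

```shell
# Hypothetical example: run MySQL as a sibling container on the same host.
# Image tag, password, and database name are placeholders.
docker run -d --name drupal-db \
  -e MYSQL_ROOT_PASSWORD=CHANGEME \
  -e MYSQL_DATABASE=drupal \
  -p 3306:3306 \
  mysql:5.5
```

The Drupal container would then point its database connection at the host (or at the linked container's address) via settings.php.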
Also note, this is mostly an image used for testing and spinning up environments quickly right now, but as we vet our continuous delivery process the structure of this file would likely change.
This is some basic start configuration. We use the rhel7 Docker image as a base for running Drupal. A list of published Red Hat containers can be found here.
```dockerfile
# Docker - Customer Portal Drupal
#
# VERSION   dev
#
# Use RHEL 7 image as the base image for this build.
# Depends on subscription-manager correctly being setup on the RHEL 7 host VM that is building this image
# With a correctly setup RHEL 7 host with subscriptions, those will be fed into the docker image build and yum repos
# will become available
FROM rhel7:latest
MAINTAINER Ben Pritchett
```
Here we install yum packages and drush. The yum cache is cleared on the same line as the install in order to save space between image snapshots.
```dockerfile
# Install all the necessary packages for Drupal and our application. Immediately yum update and yum clean all in this step
# to save space in the image
RUN yum -y --enablerepo rhel-7-server-optional-rpms install tar wget git httpd php python-setuptools vim php-dom php-gd memcached php-pecl-memcache mc gcc make php-mysql mod_ssl php-soap hostname rsyslog php-mbstring; yum -y update; yum clean all

# Still need drush installed
RUN pear channel-discover pear.drush.org && pear install drush/drush
```
Supervisord can be used to manage several processes at once, since the Docker container goes away when the last foreground process exits. Also we have some file permissions changes to the appropriate process users.
```dockerfile
# Install supervisord (since this image runs without systemd)
RUN easy_install supervisor
RUN chown -R apache:apache /usr/sbin/httpd
RUN chown -R memcached:memcached /usr/bin/memcached
RUN chown -R apache:apache /var/log/httpd
```
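A minimal supervisord.conf for this kind of setup might look like the following. This is a sketch of the idea, not our actual config (which is added later in the Dockerfile via `ADD supervisord /etc/supervisord.conf`); program names and flags here are assumptions:

```ini
; Hypothetical supervisord.conf sketch: run each service in the
; foreground under its appropriate user, restart on failure.
[supervisord]
nodaemon=false

[program:httpd]
command=/usr/sbin/httpd -DFOREGROUND
user=apache
autorestart=true

[program:memcached]
command=/usr/bin/memcached -u memcached
autorestart=true

[program:rsyslog]
command=/usr/sbin/rsyslogd -n
autorestart=true
```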
With Docker, you could run a configuration management tool inside the container. However, this generally increases the scope of what your app needs. These PHP config options are the same for all environments, so they are simply changed in place here. A tool like Augeas would be helpful for making these line edits to configuration files that don't change often.
```dockerfile
# we run Drupal with a memory_limit of 512M
RUN sed -i "s/memory_limit = 128M/memory_limit = 512M/" /etc/php.ini

# we run Drupal with an increased file size upload limit as well
RUN sed -i "s/upload_max_filesize = 2M/upload_max_filesize = 100M/" /etc/php.ini
RUN sed -i "s/post_max_size = 8M/post_max_size = 100M/" /etc/php.ini

# we comment out this rsyslog config because of a known bug (https://bugzilla.redhat.com/1088021)
RUN sed -i "s/$OmitLocalLogging on/#$OmitLocalLogging on/" /etc/rsyslog.conf
```
Here we add the makefile for our Drupal environment. This is the definition of where Drupal needs to be installed and with what modules/themes.
```dockerfile
# Uses the drush make file in this Docker repo to correctly install all the modules we need
# https://www.drupal.org/project/drush_make
ADD drupal.make /tmp/drupal.make
```
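A drush make file declares Drupal core plus each contributed project at a pinned version. A small, hypothetical excerpt of what such a file looks like (the module names, versions, and patch URL below are illustrative, not taken from our drupal.make):

```ini
; Hypothetical drush make excerpt; projects and versions are examples only.
api = 2
core = 7.x

projects[drupal][version] = "7.34"

; Contributed modules, pinned to exact versions
projects[views][version] = "3.10"
projects[ctools][version] = "1.6"

; Patches can also be applied as part of the build
projects[views][patch][] = "https://www.drupal.org/files/issues/example.patch"
```

Pinning exact versions here is what lets the image build act as the single definition of the release.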
Here we add some Drush config, and Registry Rebuild tool (which we have gotten a lot of value out of).
```dockerfile
# Add a drushrc file to point to default site
ADD drushrc.php /etc/drush/drushrc.php

# Install registry rebuild tool. This is helpful when your Drupal registry gets
# broken from moving modules around
RUN drush @none dl registry_rebuild --nocolor
```
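A drushrc.php that points drush at the default site might look like the following. This is a sketch under assumed paths; the actual contents of our drushrc.php aren't shown here:

```php
<?php
// Hypothetical drushrc.php sketch: always operate on the default
// site installed at /var/www/html, without needing --root/--uri flags.
$options['r'] = '/var/www/html';
$options['l'] = 'http://default';
```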
Finally, we install Drupal using drush.make. This takes a bit of time while all the modules are downloaded from Drupal.org.
```dockerfile
# Install Drupal via drush.make.
RUN rm -rf /var/www/html ; drush make /tmp/drupal.make /var/www/html --nocolor;
```
Some file permissions changes occur here. Notice we don't have a settings.php file; that will be added with the running Docker container.
```dockerfile
# Do some miscellaneous cleanup of the Drupal file system. If certain files are volume linked into the container (via -v at runtime)
# some of these files will get overwritten inside the container
RUN chmod 664 /var/www/html/sites/default && mkdir -p /var/www/html/sites/default/files/tmp && mkdir /var/www/html/sites/default/private && chmod 775 /var/www/html/sites/default/files && chmod 775 /var/www/html/sites/default/files/tmp && chmod 775 /var/www/html/sites/default/private
RUN chown -R apache:apache /var/www/html
```
Here we add our custom code, and link it into the appropriate location on the filesystem. Host name is filtered in this example ($INTERNAL_GIT_REPO). Another way to approach this would be to house the Dockerfile with our custom code to centralize things a bit.
```dockerfile
# Pull in our custom code for the Customer Portal Drupal application
RUN git clone $INTERNAL_GIT_REPO /opt/drupal-custom

# Put our custom code in the appropriate places on disk
RUN ln -s /opt/drupal-custom/all/themes/kcs /var/www/html/sites/all/themes/kcs
RUN ln -s /opt/drupal-custom/all/modules/custom /var/www/html/sites/all/modules/custom
RUN ln -s /opt/drupal-custom/all/modules/features /var/www/html/sites/all/modules/features
RUN rm -rf /var/www/html/sites/all/libraries
RUN ln -s /opt/drupal-custom/all/libraries /var/www/html/sites/all/
```
Here's an example of installing a module outside of drush make. I believe this was done in a custom manner due to drush make's issues with git submodules.
```dockerfile
# get version 0.8.0 of raven-php. This is used for integration with Sentry
RUN git clone https://github.com/getsentry/raven-php.git /opt/raven; cd /opt/raven; git checkout d4b741736125f2b892e07903cd40450b53479290
RUN ln -s /opt/raven /var/www/html/sites/all/libraries/raven
```
Next we add the configuration files that would be awkward to edit with one-line changes. Again, these can be baked into the image since they do not vary between environments.
```dockerfile
# Add all our config files from the Docker build repo
ADD supervisord /etc/supervisord.conf
ADD drupal.conf /etc/httpd/conf.d/site.conf
ADD ssl_extras.conf /etc/httpd/conf.d/ssl.conf
ADD docker-start.sh /docker-start.sh
ADD drupal-rsyslog.conf /etc/rsyslog.d/drupal.conf
```
The user for this Docker image is root, but supervisord handles running different processes as their appropriate users (apache, memcached, etc).
The docker-start.sh script handles the database configuration once the Drupal container is running.
```dockerfile
CMD ["/bin/bash", "/docker-start.sh"]
```
Docker start script
Once the Docker image is built that contains all the Drupal code we'll want to run, the container will be started up on the appropriate environment. Notice that environment-specific configuration was left out of the Docker image building process, and for this reason we know that this image should be deployable to any of our environments (assuming it gets the right configuration). With a configuration tool like puppet or ansible, we can provide the correct settings.php file to each host before our Docker container is deployed, and "configure" the container on startup with a command similar to below:
```shell
/usr/bin/docker run -td -p 80:80 -p 11211:11211 \
  -e DRUPAL_START=1 -e DOMAIN=ci \
  -v /var/log/httpd:/var/log/httpd \
  -v /opt/settings.php:/var/www/html/sites/default/settings.php:ro \
  drupal-custom:latest
```
A summary of some of the arguments:
* -p 80:80 -p 11211:11211 (open/map the correct ports for apache and memcached)
* -e DRUPAL_START=1 (apply the database configuration stored in Drupal code, described in the docker-start.sh script)
* -e DOMAIN=ci (let this container know it belongs in the ci domain)
* -v /var/log/httpd:/var/log/httpd (write apache logs inside the container to the host. In this way, we always have our logs stored between container restarts)
* -v /opt/settings.php:/var/www/html/sites/default/settings.php:ro (Let the container know its configuration from what exists on the host)
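The settings.php mounted into the container carries the environment-specific pieces, chiefly the database connection. A hypothetical Drupal 7 fragment of what puppet or ansible might template out per environment (hostname and credentials are placeholders, not our actual values):

```php
<?php
// Hypothetical settings.php fragment; all values are placeholders.
// This is the environment-specific piece kept out of the image build.
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'database' => 'drupal',
  'username' => 'drupal',
  'password' => 'CHANGEME',
  'host'     => 'db.example.com',
  'port'     => '3306',
);
```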
When the container starts up, the assumed action to take is to start up supervisord, and apply database configuration if it was requested with DRUPAL_START. Mainly this involves running some drush helper commands, like features revert, applying module updates to the database, and ensuring our list of installed modules is correct with the Master module.
```bash
#!/bin/bash

if [[ ! -z $DRUPAL_START ]]; then
  supervisord -c /etc/supervisord.conf &
  sleep 20
  cd /var/www/html && drush cc all
  status=$?
  if [[ $status != 0 ]]; then
    drush rr
  fi
  cd /var/www/html && drush en master -y
  cd /var/www/html && drush master-execute --no-uninstall --scope=$DOMAIN -y && drush fra -y && drush updb -y && drush cc all
  status=$?
  if [[ $status != 0 ]]; then
    echo "Drupal release errored out on database config commands, please see output for more detail"
    exit 1
  fi
  echo "Finished applying updates for Drupal database"
  kill $(jobs -p)
  sleep 20
fi

supervisord -n -c /etc/supervisord.conf
```
That's it! On your host, you should now be able to access the Drupal environment over port 80, assuming that the database connection within settings.php is correct. Depending on whether DRUPAL_START was set, the environment may take some time to configure itself against a current database.
The entirety of this example Dockerfile can be found here.
In the next blog post, we'll talk about using this container approach with Jenkins to automate the delivery pipeline.