DevOps On The Desktop: Containers Are Software As A Service

rhn-support-tjay published on 2015-12-23T12:00:00+00:00, last updated 2016-02-23T20:59:51+00:00

It seems that everyone has a metaphor to explain what containers "are". If you want to emphasize the self-contained nature of containers and the way in which they can package a whole operating system's worth of dependencies, you might say that they are like virtual machines. If you want to emphasize the portability of containers and their role as a distribution mechanism, you might say that they are like a platform. If you want to emphasize the dangerous state of container security nowadays, you might say that they are equivalent to root access. Each of these metaphors emphasizes one aspect of what containers "are", and each of these metaphors is correct.

It is not an exaggeration to say that Red Hat employees have spent man-years clarifying the foggy notion invoked by the buzzword "the cloud". We might understand cloudiness as having three dimensions: (1) location independence, (2) external responsibility, and (3) the abstraction of resources. The different kinds of cloud offerings distinguish themselves from one another by their emphasis on these qualities. The location of the resources that comprise the cloud is only one aspect of the cloud metaphor; the abstraction of those resources is another. This understanding was Red Hat's motivation for both its private infrastructure-as-a-service and platform-as-a-service offerings (IaaS and PaaS). Though the hardware is self-hosted and self-administered, developers are still able to think either in terms of pools of generic computational resources that they assign to virtual machines (in the case of IaaS) or in terms of applications (in the case of PaaS).

What do containers and the cloud have in common? Software distribution. Software that is distributed via container or via statically-linked binary is essentially software-as-a-service (SaaS). The implications of this are far-reaching.

Given the three major dimensions of cloudiness, what is software as a service? It is a piece of software hosted and administered externally to you that you access mainly through a network layer (either an API or a web interface). With this definition of software as a service, we can declare that 99% of container-distributed and statically-linked Go software is SaaS that happens to run on your own silicon, powered by your own electricity. Despite running locally, this software is still accessed through a network layer and is still---in practice---administered externally.
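
To make that concrete, here is a minimal sketch of how such a self-contained artifact is typically produced and consumed. The binary name, flag, and endpoint below are hypothetical; the point is that every dependency is resolved at the vendor's build time, frozen into a single file, and then accessed locally the same way a remote service would be.

    # Hypothetical vendor build: compile a pure-Go service into one
    # self-contained executable with no dynamic library dependencies.
    CGO_ENABLED=0 go build -o vendor-service ./cmd/vendor-service

    # The result is opaque to local tooling: nothing for the package
    # manager to track, nothing for ldd to report.
    ldd ./vendor-service    # prints "not a dynamic executable"

    # It is consumed like any remote service: started locally and
    # reached over its network interface (flag and path are illustrative).
    ./vendor-service --listen 127.0.0.1:8080 &
    curl http://127.0.0.1:8080/healthz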

A static binary is a black box. A container is modifiable only if it was constructed as an ersatz VM. Even if the container has been constructed as an ersatz VM, it is only as flexible as (1) the underlying distribution in the container and (2) your familiarity with that distribution. Apart from basic networking, the important parts of administration must be handled by a third party: the originating vendor. For most containers, it is the originating vendor that must take responsibility for issues like Heartbleed that might be present in the software's underlying dependencies.

This trend, which shows no signs of slowing down, is a natural extension of the blurring of the distinction between development and operations. The term for this collaboration is one whose definition is even harder to pin down than "cloud": DevOps. The DevOps movement has seen some traditional administration responsibilities---such as handling dependencies---become shared between operational personnel and developers. We have come to expect operations to consume their own bespoke containers and static binaries in order to ensure consistency and to ensure that needed runtime dependencies are always available. But a new trend is emerging: operational groups are now embedding the self-contained artifacts of other operational groups into their own stacks. Containers and static blobs, as a result, are emerging as a general software distribution method.

The security implications are clear. Self-contained software such as containers and static binaries must be judged as much by their vendor's commitment to security as by their feature set, because it is that vendor who will be acting as the system administrator. As when considering the purchase of a phone, the track record for appropriate, timely, and continuous security updates is as important as any feature matrix.

Some security experts might deride the lack of local control over security that this trend represents. However, that analysis ignores economies of scale and the fact that---by definition---the average system administrator is worse than the best. Just as the semi-centralized hosting of the cloud has allowed smaller businesses to achieve previously impossible reliability for their size, so too does this trend offer the possibility of a better overall security environment.

Of course, just as the unique economic, regulatory, and feature needs of enterprise customers pushed those customers to private clouds, so too must there be offerings of more customizable containers.

Red Hat is committed both to providing "private cloud" flexibility and to helping ISVs leverage the decades of investment that we have made in system administration. We release fresh containers at a regular cadence and at the request of our security team. By curating containers in this way, we strike a balance between containers becoming dangerously out of date and the fragility that naturally occurs when software within a stack updates "too often". Just as important, however, is our commitment to keeping all of our containers updatable in the ways our customers have come to expect from their servers and VMs: yum update for RPM-based content, and zips and patches for content such as our popular JBoss products. This means that if you build a system on a RHEL-based container, you can let "us" administer it by simply keeping up with the latest container releases, or you can take control yourself using tools you already know.
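
As a rough illustration of those two paths, consider the hypothetical sketch below. The registry and image names are illustrative placeholders, not real product locations; the sketch simply assumes an application image built on a RHEL base image whose content is RPM-based.

    # Path 1: let the vendor administer it by pulling the freshly
    # rebuilt image and restarting the application from it.
    docker pull registry.example.com/myapp:latest
    docker run -d --name myapp registry.example.com/myapp:latest

    # Path 2: take control yourself. Because the content is RPM-based,
    # the tooling you already know still works inside the container.
    docker exec myapp yum -y update
    docker commit myapp registry.example.com/myapp:patched

In practice most teams would fold the yum update into the image build itself rather than patch a running container, but either route relies on the same familiar RPM tooling.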

Sadly, 2016 will probably not be the year of the Linux desktop, but it may well be the year of DevOps on the desktop. In the end, that may be much more exciting.

About The Author

rhn-support-tjay

Trevor has more than a decade of experience with multi-node/cloud systems and machine learning. As a security engineer, he specializes in emerging cloud technologies. His current focus includes containers and other multi-layered runtimes.