Chapter 2. OpenShift Enterprise 3.2 Release Notes
2.1. Overview
OpenShift Enterprise by Red Hat is a Platform as a Service (PaaS) that provides developers and IT organizations with a cloud application platform for deploying new applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Enterprise supports a wide selection of programming languages and frameworks, such as Java, Ruby, and PHP.
Built on Red Hat Enterprise Linux and Kubernetes, OpenShift Enterprise provides a secure and scalable multi-tenant operating system for today’s enterprise-class applications, while providing integrated application runtimes and libraries. OpenShift Enterprise brings the OpenShift PaaS platform to customer data centers, enabling organizations to implement a private PaaS that meets security, privacy, compliance, and governance requirements.
2.2. New Features and Enhancements
OpenShift Enterprise version 3.2 is now available. For any release, always review the Installation and Configuration guide for instructions on upgrading your OpenShift Enterprise cluster properly, including any additional steps that may be required for a specific release.
2.2.1. For Administrators
2.2.1.1. Updated Infrastructure Components
- Kubernetes has been updated to v1.2.0-36.
- etcd has been updated to v2.2.5.
2.2.1.2. Configuration and Administration
- A set of admission control plug-ins can now be configured by an administrator to intercept requests to the master API prior to persistence of a resource, but after the request is authenticated and authorized. See Configuring Admission Control Plug-ins for details.
- Multiple web login providers can now be configured at the same time.
- The `oc adm diagnostics` command can now launch a diagnostic pod that reports on more potential issues with pod networking, DNS configuration, and registry authentication.
- The number of projects an individual user can create can be limited via the `ProjectRequestLimit` admission controller. See Limiting Number of Self-Provisioned Projects Per User for details, and the sample configuration after this list.
- A build defaults admission controller can be used to set default environment variables on all builds created, including global proxy settings. See Configuring Global Build Defaults and Overrides for details.
- The `PodNodeConstraints` admission control plug-in has been added, which constrains the use of the `NodeName` field in a pod definition to roles that have the `pods/binding` permission. This allows administrators, via `NodeSelectorLabelBlacklist`, to specify node labels by setting them in the `NodeSelector` field of the pod definition. See Controlling Pod Placement for details.
- Using the `openshift.io/imagestreamtags` and `openshift.io/imagestreamimages` resources, you can restrict the number of unique image references in a project using quota.
- By setting `Max["storage"]` on the `openshift.io/Image` limit type, you can restrict the maximum image size that can be pushed to a project using limit ranges. See Deploying a Docker Registry for details on setting the storage quota.
- Support for security context constraints (SCCs) has been added to `oc describe`.
- The `NO_PROXY` environment variable will now accept a CIDR in a number of places in the code for controlling which IP ranges bypass the default HTTP proxy settings.
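As an illustration of how such a plug-in is wired up, the following master-config.yaml fragment is a sketch that enables the `ProjectRequestLimit` admission controller; the selector label and limit values are hypothetical, not defaults:

```yaml
admissionConfig:
  pluginConfig:
    ProjectRequestLimit:
      configuration:
        apiVersion: v1
        kind: ProjectRequestLimitConfig
        limits:
        # users labeled level=advanced may self-provision up to 10 projects
        - selector:
            level: advanced
          maxProjects: 10
        # everyone else is limited to 2 projects
        - maxProjects: 2
```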
2.2.1.3. Security
- The new `Volumes` field in SCCs allows an administrator full control over which volume plug-ins may be specified (a sample SCC illustrating the new fields follows this list).
  - In order to maintain backwards compatibility, the `AllowHostDirVolumePlugin` field takes precedence over the `Volumes` field for host mounts. You may use `*` to allow all volumes.
  - By default, regular users are now forbidden from directly mounting any of the remote volume types; they must use a persistent volume claim (PVC).
- The new `ReadOnlyRootFilesystem` field in SCCs allows an administrator to force containers to run with a read-only root file system.
  - If set to true, containers are required to run with a read-only root file system by their `SecurityContext`. Containers that do not set this value to true will be defaulted. Containers that explicitly set this value to false will be rejected.
  - If set to false, containers may use a read-only root file system, but they are not forced to run with one.
- By default, the restricted and anyuid SCCs drop Linux capabilities that could be used to escalate container privileges. Administrators can change the list of default or enforced capabilities.
- A constant-time string comparison is now used on webhooks.
- Only users authenticated via OAuth can request projects.
- A GitLab server can now be used as an identity provider. See Configuring Authentication for details.
- The `SETUID` and `SETGID` capabilities have been added back to the anyuid SCC, which ensures that programs that start as root and then drop to a lower permission level will work by default.
- Quota support has been added for `emptyDir`. When the quota is enabled on an XFS system, nodes will limit the amount of space any given project can use on a node to a fixed upper bound. The quota is tied to the `FSGroup` of the project. Administrators can control this value by editing the project directly or by allowing users to set `FSGroup` via SCCs.
- The `DaemonSet` object is now limited to cluster administrators, because pods running under a `DaemonSet` are considered to have higher priority than regular pods, which could be a security issue for regular users on the cluster.
- Administrators can prevent clients from accessing the API by their `User-Agent` header via the new `userAgentMatching` configuration setting.
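A minimal sketch of an SCC exercising the new fields; the name and the allowed volume list are illustrative, not shipped defaults:

```yaml
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: restricted-readonly   # hypothetical SCC name
# force matched containers to run with a read-only root file system
readOnlyRootFilesystem: true
# explicitly enumerate the volume plug-ins pods may use
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- secret
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
```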
2.2.1.4. Integrated Docker Registry
- The integrated Docker registry now supports Azure Blob Storage, OpenStack Swift, and Amazon CloudFront as storage back ends; a configuration sketch follows this list.
- A readiness probe and health check have been added to the integrated registry to ensure new instances do not serve traffic until they are fully initialized.
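The integrated registry consumes a standard docker/distribution configuration file, so selecting a back end is a matter of its storage section. A sketch for OpenStack Swift, with placeholder endpoint and credentials:

```yaml
version: 0.1
storage:
  swift:
    authurl: https://keystone.example.com/v2.0   # placeholder Keystone endpoint
    username: registry-user                      # placeholder credentials
    password: changeme
    container: registry-storage                  # Swift container holding the layers
```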
2.2.1.5. Routes
- You can limit the frequency of router reloads using the `--interval=DURATION` flag or the `RELOAD_INTERVAL` environment variable on the router (see the example after this list). This can minimize the memory and CPU used by the router while reloading, at the cost of delaying when the route is exposed via the router.
- Routers now report status back to the master about whether routes are accepted, rejected, or conflict with other users. The CLI will now display that error information, allowing users to know that the route is not being served.
- Using router sharding, you can specify selection criteria for either namespaces (projects) or labels on routes. This enables you to select the routes a router exposes, and you can use this functionality to distribute routes across a set of routers, or shards.
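For example, assuming the default router deployment configuration is named router, the reload interval could be raised with the environment variable form (the 10s value is arbitrary):

```
$ oc set env dc/router RELOAD_INTERVAL=10s
```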
2.2.1.6. Storage
- The `NoDiskConflict` scheduling predicate can be added to the scheduler configuration to ensure that pods using the same Ceph RBD device are not placed on the same node (a sketch follows this list). See Scheduler for details.
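A sketch of a scheduler policy file (for example, /etc/origin/master/scheduler.json) that includes the predicate; the surrounding entries are a trimmed, illustrative subset of a full configuration:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "MatchNodeSelector"},
    {"name": "NoDiskConflict"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1}
  ]
}
```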
2.2.1.7. Administrator CLI
- The administrative commands are now exposed via `oc adm` so you have access to them in a client context. The `oadm` commands will still work, but will be a symlink to the `openshift` binary.
- The help output of the `oadm policy` command has been improved.
- Service accounts are now supported for the router and registry (see the sketch after this list):
  - The router can now be created without specifying `--credentials`, and it will use the router service account in the current project.
  - The registry will also use a service account if `--credentials` is not provided. Otherwise, it will set the values from the `--credentials` file as environment variables on the generated deployment configuration.
- Administrators can pass the `--all-namespaces` flag to `oc status` to see status information across all namespaces and projects.
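With service accounts in place, both components can be created with no credentials flags at all; a minimal sketch:

```
# uses the router service account in the current project
$ oc adm router

# the registry likewise uses a service account when --credentials is omitted
$ oc adm registry
```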
2.2.1.8. Web Console
- Users can now be presented with a customized, branded page before continuing on to a login identity provider. This allows users to see your branding up front instead of immediately redirecting to identity providers like GitHub and Google. See Customizing the Login Page for details.
- CLI download URLs and documentation URLs are now customizable through web console extensions. See Adding or Changing Links to Download the CLI for details.
2.2.2. For Developers
2.2.2.1. Web Console
The web console uses a brand new theme that changes the look and feel of the navigation, tabs, and other page elements. See Project Overviews for details.

A new About page provides developers with information about the product version, `oc` CLI download locations, and quick access to their current token to log in using `oc login`. See CLI Downloads for details.
You can now add or edit resource constraints for your containers during Add to Project or later from the deployment configuration.

A form-based editor for build configurations has been added for modifying commonly edited fields directly from the web console.

- All Browse resource pages (e.g., viewing a particular pod) now have a tab for Events related to that resource.
- Limits, quotas, and quota scopes are now displayed.
- More error and warning information is now displayed about routes, their configuration, and their use in the system.
- Support has been added for filtering and sorting on all Events pages.
- You can now edit a project’s display name and description from the Settings page.
- Existing persistent volume claims (PVCs) can now be listed and attached to deployments and deployment configurations.
- More detailed pod status is now provided on all pages.
- Better status and alert messages are now provided.
- Improved Dockerfile build keyword highlighting has been added when editing builds.
- More accurate information is now displayed about routes based on which addresses the router exposed them under.
- The layout and display of logs have been improved.
2.2.2.2. Developer CLI
The following commands have been added to `oc create`, allowing more objects to be created directly using the CLI (instead of passing it a file or JSON/YAML):

| Command | Description |
|---|---|
| `namespace` | Create a namespace with the specified name. |
| `secret` | Create a secret using a specific subcommand (`docker-registry` or `generic`). |
| `configmap` | Create a `ConfigMap` from a local file, directory, or literal value. |
| `serviceaccount` | Create a service account with the specified name. |
| `route` | Expose containers externally via secured routes. Use the `edge`, `passthrough`, or `reencrypt` subcommands and specify the secret values to be used for the route. |

- More information is now displayed about the application being created in `oc new-app`, including any display name or description set on the image as a label, or whether the image may require running as root.
- If you have set up the latest tag in an image stream to point to another tag in the same image stream, the `oc new-app` command will follow that reference and create the application using the referenced tag, not latest. This allows administrators to ensure applications are created on stable tags (like php:5.6). The default image streams created in the openshift project follow this pattern.
- You can view the logs of the oldest pod in a deployment or build configuration with `oc logs dc/<name>`.
- The `oc env` and `oc volume` commands have been moved to `oc set env` and `oc set volume`, and future commands that modify aspects of existing resources will be located under `oc set`.
- When a pod is crash-looping, meaning it is starting and exiting repeatedly, an error is now displayed in `oc status` that provides more information about possible causes.
- The new `oc debug` command makes it easy to obtain shell access in a misbehaving pod (see the example after this list). It clones the exact environment of the running deployment configuration, replication controller, or pod, but replaces the run command with a shell.
- The new `oc set trigger` command can be used to update deployment and build configuration triggers.
- More information is displayed about liveness and readiness probes in the `oc status` and `oc describe` commands.
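For instance, to get a shell inside a copy of a misbehaving deployment (the deployment configuration name is hypothetical):

```
# launches a debug pod cloned from the dc, with a shell instead of the run command
$ oc debug dc/myapp
```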
2.2.2.3. Builds and Image Sources
Builds can now be supplied with input files from unrelated images. Previously, all input to a build had to come from the builder image itself, or a Git repository. It is now possible to specify additional images and paths within those images to use as an input to a build for things like external dependencies.
Use the `--source-image=<image>` and `--source-image-path=<source>:<destination>` flags with the `oc new-build` command to specify images.

The example shown below injects the /usr/lib/jenkins/jenkins.war file out of the image currently tagged with jenkins:latest into the installed-apps directory of the build input:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: imagedockerbuild
spec:
  source:
    images:
    - from:
        kind: ImageStreamTag
        name: jenkins:latest
      paths:
      - destinationDir: installed-apps/
        sourcePath: /usr/lib/jenkins/jenkins.war
```

Ensure that you set an image change trigger for jenkins:latest if you want to rebuild every time that image is updated.
- Builds can now be supplied with secrets for use during the build process. Previously, secrets could be used for Git cloning but now secrets can also be made available to the build process itself so that build operations such as Maven packaging can use a secret for credentials.
- Builds now properly use Git submodules when checking out the source repository.
- When a build configuration is deleted (via `oc delete`), all associated builds are now deleted as well. To prevent this behavior, specify `--cascade=false`.
- Custom build configurations can now specify the API version to use. This API version will determine the schema version used for the serialized build configuration supplied to the custom build pod in the `BUILD` environment variable.
- Resource limits are now enforced on the container launched by S2I builds, and also on the operations performed within containers as part of a `docker build` of a Dockerfile. Previously, the resource limit only applied to the build pod itself and not the containers spawned by the build process.
- You can now provide a command to be triggered after a build succeeds but before the push. You can set `script` (to run a shell script), `command`, or `args` to run a command in the working directory of the built image. All S2I builders set the user’s source repository as the working directory, so commands like `bundle exec rake test` should work. See Build Hooks for details; a sample hook follows this list.
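A sketch of such a hook on a build configuration, using the script form (the test command is illustrative):

```yaml
spec:
  postCommit:
    # runs in the working directory of the built image after the build
    # succeeds; a non-zero exit status fails the build and skips the push
    script: "bundle exec rake test"
```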
2.2.2.4. Image Imports
You can now import images from Docker v2 registries that are authenticated via Basic or Token credentials. To import, create a secret in your project based on a .docker/config.json or .dockercfg file:
```
$ oc secrets new hub .dockerconfigjson=$HOME/.docker/config.json
secret/hub

$ oc import-image auth-protected/image-from-dockerhub
The import completed successfully.

Name:     image-from-dockerhub
Created:  Less than a second ago
Tag       Spec                                   Created
latest    default/image-from-dockerhub:latest   Less than a second ago
...
```
When importing, all secrets in your project of those types will be checked. To exclude a secret from being a candidate for importing, set the `openshift.io/image.excludeSecret` annotation to true:

```
$ oc annotate secret/hub openshift.io/image.excludeSecret=true
```
Image stream tags can be set to be automatically imported from remote repositories when they change (public or private). OpenShift Enterprise will periodically query the remote registry and check for updates depending on the configuration the administrator sets. By default, images will be checked every 15 minutes.
To set an image to be imported automatically, use the `--scheduled` flag with the `oc tag` command:

```
$ oc tag --source=docker redis:latest myredis:latest --scheduled

Tag myredis:latest set to import redis:latest periodically.
```
You can see which images are being scheduled using `oc describe is myredis`.

Administrators can control whether scheduling is enabled, the polling interval, and the rate at which images can be imported via the `imagePolicyConfig` section in the /etc/origin/master/master-config.yaml file.
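A sketch of that section; the values shown make the documented 15-minute interval explicit, and the per-minute throttle is an illustrative value:

```yaml
imagePolicyConfig:
  # keep background imports enabled
  disableScheduledImport: false
  # minimum seconds between checks of a scheduled tag (15 minutes)
  scheduledImageImportMinimumIntervalSeconds: 900
  # throttle how many scheduled imports may start per minute
  maxScheduledImageImportsPerMinute: 60
```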
The integrated Docker registry now supports image pullthrough, allowing you to tag a remote image into OpenShift Enterprise and directly pull it from the integrated registry as if it were already pushed to the OpenShift Enterprise registry. If the remote registry is configured to use content-offload (sending back a temporary redirect URL to the actual binary contents), that value will be passed through the OpenShift Enterprise registry and down to the Docker daemon, avoiding the need to proxy the binary contents.

To try pullthrough, tag an image from the DockerHub and then pull it from the integrated registry:
```
$ oc tag --source=docker redis:latest redis:local

$ oc get is redis
NAME      DOCKER REPO                     TAGS      UPDATED
redis     172.30.1.5:5000/default/redis   local     Less than a second ago

# log in to your local Docker registry, then pull through it
$ docker pull 172.30.1.5:5000/default/redis:local
Using default tag: local
Trying to pull repository 172.30.1.5:5000/default/redis ...
latest: Pulling from 172.30.1.5:5000/default/redis
47d44cb6f252: Pull complete
838c1c5c4f83: Pull complete
5764f0a31317: Pull complete
60e65a8e4030: Pull complete
449f8db3c25a: Pull complete
a6b6487c42f6: Pull complete
Digest: sha256:c541c66a86b0715bfbb89c5515929268196b642551beccf8fbd452bb00170cde
Status: Downloaded newer image for 172.30.1.5:5000/default/redis:local
```
You can use pullthrough with private images; the integrated registry will use the same secret you imported the image with to fetch content from the remote registry.
- The `oc describe` command now reports the overall image size for imported images as well as the individual layers and the size of each layer.
- When importing an entire remote repository, only the first five tags are imported by default. OpenShift Enterprise preferentially imports the latest tag and the highest semantically versioned tags (i.e., tags in the form v5, 5.0, or 5.0.1). You can import the remaining tags directly. Lists of tags are sorted with the latest tag on top, followed by the highest major semantic tags, in descending order.
2.2.2.5. Test Deployments
It is now possible to create a "test" deployment that will scale itself down to zero when a deployment is complete. This deployment can be used to verify that an image will be correctly rolled out without requiring the pods to be running all the time. To create a test deployment, use the `--as-test` flag on `oc new-app` or set the `spec.test` field of a deployment configuration to true via `oc edit`.
The deployment triggers like any other deployment configuration, scaling up to the current spec.replicas value when triggered. After the deployment has completed with a success or failure, it is then scaled down to zero. You can use deployment hooks to test or verify the deployment; because hooks run as part of the deployment process, a test suite running in your hook can ensure your application is correct and pass or fail the deployment.
You can add a local database or other test container to the deployment pod template, and have your application code verify itself before passing to the next step.
Scaling a test deployment will only affect the next deployment.
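A sketch of the relevant field on a deployment configuration, with all other fields elided:

```yaml
kind: DeploymentConfig
apiVersion: v1
spec:
  # scale up for the rollout, verify via hooks, then scale back down to zero
  test: true
  replicas: 1
```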
2.2.2.6. Recreate Strategy
- The Recreate deployment strategy now supports `mid` hooks, which run after all old pods have been scaled down and before any new pods are scaled up; use them to run migrations or configuration changes that can only happen while the application is completely shut down (a sketch follows this list).
- The Recreate deployment strategy now has the same behavior as the Rolling strategy, requiring the pod to be "Ready" before continuing with the deployment. A new field, `timeoutSeconds`, was added to the strategy; it is the maximum allowed interval between pods becoming ready and defaults to `120s`.
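A sketch of a Recreate strategy using both additions; the container name and migration command are hypothetical:

```yaml
strategy:
  type: Recreate
  recreateParams:
    # maximum seconds to wait between pods becoming ready
    timeoutSeconds: 120
    mid:
      failurePolicy: Abort
      execNewPod:
        containerName: myapp   # hypothetical container name
        command: ["bundle", "exec", "rake", "db:migrate"]
```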
2.2.2.7. Other Enhancements
- The new Kubernetes 1.2 ConfigMap resource is now usable.
- Pods being pulled or terminating are now distinguished in the pod status output, and the size of images is now shown with other pod information.
- The Jenkins image can now be used as an S2I-compatible build image. See Using Jenkins as a Source-to-Image Builder for details.
2.3. Notable Technical Changes
OpenShift Enterprise 3.2 introduces the following notable technical changes:
2.3.1. For Administrators
2.3.1.1. Services with External IPs Rejected by Default
By default, services with external IPs are now rejected because, in some cases, they can be used to allow services to pretend to act as nodes. The new networkConfig.externalIPNetworkCIDR parameter has been added to the master-config.yaml file to control the allowable values for external IPs. By default, it is empty, which rejects all values. Cluster administrators can set it to 0.0.0.0/0 to emulate the behavior from OpenShift Enterprise 3.1.
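For example, to restore the permissive 3.1 behavior, master-config.yaml would contain something like:

```yaml
networkConfig:
  # allow any external IP to be requested on a service (3.1 behavior)
  externalIPNetworkCIDR: 0.0.0.0/0
```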
2.3.1.2. Build Strategy Permissions Separated into Distinct Roles
Build strategy permissions have been separated into distinct roles. Administrators who have denied access to Docker, Source, or Custom builds must now assign users or groups to those roles by default. See Securing Builds by Strategy for details.
2.3.1.3. FSGroup Enabled by Default for restricted and hostaccess SCCs
FSGroup is now enabled by default in the restricted and hostaccess SCCs. This means that pods matched against those SCCs will now:
- Have the `pod.spec.securityContext.fsGroup` field automatically populated with a namespace-wide allocated value.
- Have their emptyDir-derived volumes (emptyDir, gitRepo, secret, configMap, and downwardAPI) and block device volumes (basically every network volume except Ceph and NFS) owned by the `FSGroup`.
- Run with the `FSGroup` in each container’s list of supplemental groups.
2.3.1.4. Tightened Directory Permissions on Hosts
Permissions on the /etc/origin directory have been tightened to prevent unprivileged users from reading the contents of this directory tree. Administrators should ensure that, if necessary, they have provided other means to access the generated CA certificate.
2.3.1.5. DNS Changes
- By default, new nodes installed with OpenShift Enterprise 3.2 will have Dnsmasq installed and configured as the default nameserver for both the host and pods.
- By default, new masters installed with OpenShift Enterprise 3.2 will run SkyDNS on port 8053 rather than 53. Network access controls must allow nodes to connect to masters on port 8053. This is necessary so that Dnsmasq may be configured on all nodes.
2.3.1.6. New Default Values for Pod Networking
The default values for pod networking have changed:
| master-config.yaml Field | Ansible Variable | Old Value | New Value |
|---|---|---|---|
| `networkConfig.clusterNetworkCIDR` | `osm_cluster_network_cidr` | 10.1.0.0/16 | 10.128.0.0/14 (i.e., 10.128.0.0 - 10.131.255.255) |
| `networkConfig.hostSubnetLength` | `osm_host_subnet_length` | 8 (i.e., /24 subnet) | 9 (i.e., /23 subnet) |
2.3.1.7. API Changes
- Due to a change in the upstream JSON serialization path used in Kubernetes, some fields that were previously accepted case-insensitively are no longer accepted. Please validate that your API objects have the correct case for all attributes.
- When creating a deployment configuration, omitting the `spec.selector` field will default that value to the pod template labels.
- `ImageStreamTag` objects now return the spec tag (`tag`), the current status conditions, and the latest status generation (`generation`), so clients can get an accurate view of the current tag.
- `ImageStreamTag` objects can be updated via `PUT` to set their spec tag in a single call.
- Deployment configuration hooks now default the container name if there is only a single container in the deployment configuration.
2.3.1.8. Other Changes
- The default value for `MaxPodsPerNode` has been increased to `110` to reflect updated capacity.
2.3.2. For Developers
2.3.2.1. Developer CLI
- `oc rsh` now launches `/bin/sh`, not `/bin/bash`. To have the old behavior, run:

```
$ oc rsh <name> -- /bin/bash
```
2.4. Bug Fixes
The following bugs have been fixed:
- Passthrough routes may not be specified with paths. Because passthrough does not decode the route, there is no way for the router to check the path without decoding the request. The `oc status` command will now warn you if you have any such routes.
- The `oc new-app` command now returns more information about errors encountered while searching for matches to user input.
- When using images from registries other than the DockerHub, the `library` prefix is no longer inserted.
- The image ID returned from the `ImageStreamImage` API was not the correct value.
- The router health check was not correct on all systems when using host networking. It now defaults to using localhost.
- OAuth client secrets are now correctly reset in HA master configurations.
- Improved the web console’s performance when displaying many deployments or builds.
- The router unique host check no longer reprocesses routes that did not change.
- Added the `AlwaysPull` admission controller to prevent users from being able to run images that others have already pulled to the node.
- Fixed `oc edit` when editing multiple items in a list form.
- The recycler for persistent volumes now uses a service account and has proper access to restricted content.
- The block profiler in `pprof` is now supported.
- Additional cgroup locations are now handled when constraining builds.
- Scratch images from `oc new-app` are now handled.
- Added support for paged LDAP queries.
- Fixed a performance regression in cAdvisor that resulted in long pauses on Kubelet startup.
- The `oc edit` command was not properly displaying all errors when saving an edited resource failed.
- More information is now shown about persistent volume claims and persistent volumes in a number of places in the CLI and web console.
- Some commands that used the API PATCH verb could fail intermittently when they were executed on the server and another user edited the same resource at the same time.
- Users are now warned when trying to import a non-existent tag in `oc import-image`.
- Singular pods are now shown in `oc status` output.
- Router fixes:
  - More information is now shown from the router reload command in the router logs.
  - Routes that changed at the same time could compete for being exposed if they were in different namespaces. The check for which route gets exposed has been made predictable.
  - The health check is now used when restarting the router to ensure the new process is correctly running before continuing.
- Better errors are displayed in the web console when JavaScript is disabled.
- Failed deployments now update the status of the deployment configuration more rapidly, reducing the time before the old deployment is scaled back up.
- Persistent volume claims (PVCs) are no longer blocked by the default SCC policy for users.
- Host ports are still supported on `oadm router`. Administrators can disable them with `--host-ports=false` when `--host-network=false` is also set.
- Events are now emitted when the cancellation of a deployment fails.
- When invoking a binary build, the build is retried if the input image stream tag does not exist yet (because it may be in the process of being imported).
- Fixed a race condition in Kubernetes where endpoints might be partially updated (only have some pods) when the controller is restarted.
- Docker containers do not allow a CPU quota of less than `10m`, so that minimum value is now enforced.
- `DaemonSet` objects that match all pods are no longer synced.
- The `oc new-build` command no longer fails when creating a binary build on a Git repository that does not have an upstream remote set.
- Fixed a race condition between scaled-up routers where some changes might be ignored.
- Enabled the etcd watch cache for Kubernetes resources, reducing memory use and duplicate watches.
- Changed the `RunOnce` pod duration restrictor to act as a limit instead of an override.
- Partially completed builds are now guaranteed to be cleaned up when cancelled.
- The `claimRef` UID is now checked when processing a recycled persistent volume (PV) to prevent races.
- The `ProjectRequestLimit` plug-in now ignores projects in a terminating state.
- The `ConfigMap` volume is now readable as non-root.
- The system:image-auditor role has been added for managing the image registry.
- Dynamic volume provisioning can now be disabled.
- Deployment pods are now cancelled when deployments are cancelled in all cases.
- The deployer controller now ensures that deployments that are cancelled cannot become completed.
- Concurrent deployer pod creation is now prevented.
- Fixed an issue where a pod would never terminate if the registry it pulls images from was unavailable.
- Fixed the precision of CPU to millicores and memory to Mi in the UI.
- The HAProxy router now obfuscates the pod IP when using cookies for session affinity.
2.5. Technology Preview Features
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Please note the following scope of support on the Red Hat Customer Portal for these features:
Technology Preview Features Support Scope
The following features are in Technology Preview:
- Feature 1
- Feature 2
2.6. Known Issues
- Upgrades from OpenShift Enterprise 3.1 to 3.2 are currently only supported for clusters using the RPM-based installation method. Administrators with clusters using the containerized installation method should not perform an upgrade at this time, as development for this upgrade path is currently in progress. Performing a containerized upgrade at this time would be detrimental to your cluster. An asynchronous errata update will be released shortly to provide the ability to successfully upgrade containerized installations. (BZ#1331097, BZ#1331380, BZ#1326642, BZ#1328950)
- When `OPENSHIFT_DEFAULT_REGISTRY` in /etc/sysconfig/origin-master is set to a DNS name (for example, docker-registry.default.svc.cluster.local), builds cannot push to the internal registry, because the generated secrets for the internal registry only include the registry service IP, not the internal host name(s). A solution is in development.
- Internally-managed images cannot be pulled from an image reference referencing another image stream. See Deploying a Docker Registry for more information.
2.7. Asynchronous Errata Updates
Security, bug fix, and enhancement updates for OpenShift Enterprise 3.2 are released as asynchronous errata through the Red Hat Network. All OpenShift Enterprise 3.2 errata is available on the Red Hat Customer Portal. See the OpenShift Enterprise Life Cycle for more information about asynchronous errata.
Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified via email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Enterprise entitlements in order to receive OpenShift Enterprise errata notification emails.
This section will be updated over time to provide notes on enhancements and bug fixes for any future asynchronous errata releases of OpenShift Enterprise 3.2.
For any release, always review the instructions on upgrading your OpenShift Enterprise cluster properly.
