10.2.2. SSH Host Keys

Another concern with gear migration is that it results in a change of the host with which developers interact for the application. Under standard security practices, every host should generate unique host keys for SSH authentication. This enables SSH verification of the host ID to prevent man-in-the-middle or other attacks before the connection is made. If the host ID changes between sessions, by default SSH generates a warning and refuses the connection.
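This verification mechanism can be observed locally with ssh-keygen. The following is a minimal sketch using a throwaway key in a temporary directory (the paths are illustrative; real host keys live in /etc/ssh/):

```shell
# Generate a throwaway host-style RSA key in a temporary directory
# (illustrative only; real host keys are generated by sshd in /etc/ssh/)
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$KEYDIR/ssh_host_rsa_key" -q

# Print the key's fingerprint -- the value SSH compares against the
# entry recorded in ~/.ssh/known_hosts on each connection
ssh-keygen -lf "$KEYDIR/ssh_host_rsa_key.pub"
```

If the fingerprint presented by a host does not match the recorded entry, SSH refuses the connection, as the sequence below demonstrates.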
Developers may experience issues when an application changes hosts. The following sequence of events demonstrates the problem:
  1. Administrator deploys OpenShift Enterprise.
  2. Developer creates an OpenShift Enterprise account.
  3. Developer creates an application that is deployed to node1.
    • When an application is created, the application's Git repository is cloned using SSH. The application's host name is used in this case, which is a CNAME record pointing to the node host where the gear resides.
    • Developer verifies the host key, either manually or as defined in the SSH configuration. The key is then added to the developer's local ~/.ssh/known_hosts file for verification during future attempts to access the application gear.
  4. Administrator moves the gear to node2, which updates the application CNAME to point to node2.
  5. Developer attempts to connect to the application gear again, either with a Git operation or directly using SSH. However, this time SSH generates a warning message and refuses the connection, as shown in the following example:

    Example 10.3. SSH Warning Message After an Application Gear Moves

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that the RSA host key has just been changed.
    The fingerprint for the RSA key sent by the remote host is
    ab:cd:ef:12:34:cd:11:10:3a:cd:1b:a2:91:cd:e5:1c.
    Please contact your system administrator.
    Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
    Offending key in /home/user/.ssh/known_hosts:1
    RSA host key for app-domain.example.com has changed and you have requested strict checking.
    Host key verification failed.
    
    This is because the host ID of the application has changed, and it no longer matches what is stored in the developer's known_hosts file.
The SSH warning message shows that the host the developer is attempting to connect to has indeed changed, and SSH has no way of distinguishing whether the new host is the correct host, or if there is an attack in progress. However, because all node hosts are presented as part of the same cloud, developers should be shielded from needing to understand resource allocation details, such as gear migrations. The SSH behavior in this particular case serves only to alarm and annoy developers unnecessarily, as there is no present security problem. In the worst case, these false alarms can teach developers to ignore the symptoms of a real attack.
Manage this situation by ensuring that all node hosts have the same SSH host keys. You can either deploy all node hosts using the same base image that includes host keys, or duplicate the SSH host keys on a node host to all other node hosts. The following instructions describe how to duplicate SSH host keys.

Procedure 10.3. To Duplicate SSH Host Keys:

  1. On each node host, back up all /etc/ssh/ssh_host_* files:
    # cd /etc/ssh/
    # mkdir hostkeybackup
    # cp ssh_host_* hostkeybackup/.
  2. From the first node, copy the /etc/ssh/ssh_host_* files to the other nodes:
    # scp /etc/ssh/ssh_host_* node2:/etc/ssh/.
    # scp /etc/ssh/ssh_host_* node3:/etc/ssh/.
    ...
    
    You can also manage this with a configuration management system.
  3. Restart the SSH service on each node host:
    # service sshd restart
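When there are many node hosts, the copy in step 2 can be scripted. The following is a minimal sketch; the push_host_keys helper and the host names passed to it are hypothetical, and it assumes it is run as root on the node whose keys are being propagated:

```shell
# Hypothetical helper: copy this node's SSH host keys to each node
# host named on the command line. After copying, restart sshd on each
# target to activate the keys (step 3).
push_host_keys() {
    for node in "$@"; do
        scp /etc/ssh/ssh_host_* "$node:/etc/ssh/"
    done
}

# Example invocation, run from the first node (host names are placeholders):
# push_host_keys node2.example.com node3.example.com
```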
If you connected to a node host with SSH as an administrator before its host keys were changed, the man-in-the-middle warning message shown in the previous example appears the next time you attempt to SSH to that node host, because its host ID has changed. As a workaround, remove the old host keys for all node hosts from your ~/.ssh/known_hosts file. Because all node hosts now share the same fingerprint, verifying the correct fingerprint at the next connection attempt is straightforward. In fact, you may wish to publish the node host fingerprint prominently so that developers creating applications on your OpenShift Enterprise installation can do the same.
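Stale entries can be removed with ssh-keygen -R, which deletes all known_hosts entries for a given host name. The following is a self-contained sketch; the host name and file paths are placeholders, and the -f option is used only to keep the example away from your real ~/.ssh/known_hosts, which is what ssh-keygen -R edits by default:

```shell
# Simulate a known_hosts file containing a stale entry for a node host
# (host name and paths are placeholders for demonstration)
WORKDIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$WORKDIR/old_host_key" -q
echo "node1.example.com $(cat "$WORKDIR/old_host_key.pub")" > "$WORKDIR/known_hosts"

# Remove the stale entry; SSH will then prompt to verify the new
# host key on the next connection attempt
ssh-keygen -R node1.example.com -f "$WORKDIR/known_hosts"
```

In practice, run `ssh-keygen -R <hostname>` once per node host name recorded in your known_hosts file.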
Duplicating host keys is not essential, and can be skipped if your IT policy mandates unique host keys. In that case, however, you must either educate developers on how to work around this problem or avoid migrating application gears.