Service,Protocol,Dest. Port,Source Object,Dest. Object,Source/Dest Pairs,Dest. Network,ServiceNetMap Parent,Traffic description
ansible,TCP,22,Undercloud,All Roles,Undercloud->All Roles,Control Plane,N/A,Undercloud SSH connections for Ansible Playbooks.
aodh_api,TCP,8042,"Controller, Administrator","VIP, Controller, Telemetry","Administrator->VIP, Controller->Controller, Controller->Telemetry",Internal API,AodhApiNetwork,AODH Alarming Configuration Internal/Admin API.
aodh_api,TCP,13042,"Controller, Administrator","VIP, Controller, Telemetry","Administrator->VIP, Controller->Controller, Controller->Telemetry",External,PublicNetwork,"AODH Alarming Configuration Public API. Allows the Admin to create metric thresholds and alarms."
barbican_api,TCP,9311,Controller,"VIP, Controller","Controller->VIP, Controller->Controller",Internal API,BarbicanApiNetwork,Barbican Internal/Admin API
barbican_api,TCP,13311,"Controller, Users","VIP, Controller","Users->VIP, Controller->VIP, Controller->Controller",External,PublicNetwork,"Barbican Public API (TLS). Defaults to 9311 if not using TLS."
ceph_mon,TCP,6789,"Controller, Compute, Ceph","Ceph, Compute, Controller","Ceph->Ceph, Ceph->Controller, Controller->Controller, Controller->Ceph, Compute->Controller, Compute->Ceph",Storage,CephMonNetwork,Ceph MON
"ceph_{rbdmirror,osd,mgr,mds}",TCP,6800-7300,"Controller, Compute, Ceph","Ceph, Compute, Controller","Ceph->Ceph, Ceph->Controller, Controller->Controller, Controller->Ceph, Compute->Controller, Compute->Ceph",Storage,CephMonNetwork,"Ceph RBD-MIRROR, OSD, MGR, MDS"
"ceph_{rbdmirror,osd}",TCP,6800-7300,Ceph,Ceph,Ceph->Ceph,Storage Management,CephClusterNetwork,"Ceph RBD-MIRROR, OSD"
ceph_nfs,TCP/UDP,2049,Controller,"Controller, Ceph","Controller->Controller, Controller->Ceph",StorageNfs,GaneshaNetwork,Ceph NFS (Ganesha)
ceph_nfs,TCP/UDP,2049,Compute,VIP (on Controller),Compute->VIP,StorageNfs,GaneshaNetwork,Ceph NFS (Ganesha) HA VIP
ceph_rgw,TCP,8080,Controller,"Controller, Ceph","Controller->Controller, Controller->Ceph",Storage,CephRgwNetwork,Ceph RadosGW internal/admin S3/Swift
ceph_rgw,TCP,13808,"Users, Administrator, Compute",VIP (on Controller),"Users->VIP, Compute->VIP",External,PublicNetwork,"Ceph RadosGW public S3/Swift API (TLS). Port 13080 is used by the Nova VNC proxy. Will use 8080 if no TLS."
ceph dashboard,TCP,3100,"Controller, Administrator","VIP, Controller","Administrator->VIP, Controller->Controller",Control Plane,PublicNetwork,"Ceph dashboard. Port 3100: Grafana; port 9092: Prometheus; port 9283: ceph_mgr metrics; port 9093: Alertmanager; port 9100: node_exporter (all nodes)."
cinder,TCP,8776,"Controller, Compute","VIP, Controller","Compute->VIP, Controller->VIP, Controller->Controller",Internal API,CinderApiNetwork,Cinder internal/admin API
cinder,TCP,13776,"Controller, Users",Controller,"Users->VIP, Controller->Controller",External,PublicNetwork,"Cinder public API (TLS). Defaults to 8776 if not using TLS."
collectd,TCP,25826,Compute,"VIP, Controller or Telemetry","Compute->VIP, Controller->Telemetry",Internal API,MetricsQdrNetwork,"Collectd server port. Port may be set with CollectdServerPort."
collectd AMQP,TCP,5666,Compute,Controller or Telemetry,"Compute->Controller, Compute->Telemetry",Internal API,MetricsQdrNetwork,Collectd AMQP
dns,TCP/UDP,53,All Roles,External Servers,All roles->External DNS,"External (Controller), Control Plane (other roles)",N/A,"DNS requests. Traffic will use the default route on the node to access external DNS servers."
docker registry,TCP,8787,All Roles,Undercloud,All roles->Undercloud,Control Plane,DockerRegistryNetwork,"Docker registry for pulling containers. This entry assumes that the Undercloud is used as the Docker registry. If a Red Hat Satellite server is used, or if the containers are pulled straight from registry.redhat.io, then the traffic will flow there instead of the Undercloud."
ec2_api,TCP,8788,Controller,Controller,"Controller->Controller, Controller->VIP",Internal API,Ec2ApiNetwork,EC2 Internal/Admin API
ec2_api,TCP,13788,"Controller, Users",Controller,"Controller->VIP, Users->VIP",External,PublicNetwork,"EC2 Public API (TLS). Ec2ApiExternalNetwork may be set to influence the external network. Port 8788 will be used if no TLS."
etcd,TCP,2379,Controller,Controller,Services->etcd API,Internal API,EtcdNetwork,etcd Client Port
etcd,TCP,2380,Controller,Controller,etcd node<->etcd node (only masters run this),Internal API,EtcdNetwork,etcd Peer Port
glance,TCP,9292,"Controller, Compute",Controller,"Controller->Controller, Controller->VIP, Compute->VIP",Internal API,GlanceApiNetwork,Glance Internal/Admin API
glance,TCP,13292,"Controller, Admin",Controller,"Controller->Controller, Controller->VIP, Admin->Controller",External,PublicNetwork,"Glance Public API (TLS). Port 9292 will be used if no TLS."
gnocchi,TCP,8041,Controller,Controller,"Controller->Controller, Controller->VIP",Internal API,GnocchiApiNetwork,"Gnocchi Internal/Admin API. CollectdGnocchiPort."
gnocchi,TCP,13041,Controller,Controller,"Controller->Controller, Controller->VIP",External,PublicNetwork,"Gnocchi Public API (TLS). Default port 8041 used with no TLS."
gnocchi_statsd,UDP,8125,"Controller, Telemetry","Controller, Telemetry","Controller->Controller, Controller->VIP",Internal API,GnocchiApiNetwork,Network daemon for statistics.
haproxy_stats,TCP,1993,Admin,Controller,"User->Controller, User->VIP",Control Plane,N/A,"HAProxy Statistics Port. Used for troubleshooting/reporting."
Heat Internal/Admin API,TCP,8004,Controller,Controller,Controller->Controller,Internal API,HeatApiNetwork,Heat Internal/Admin API endpoint
Heat Public API,TCP,13004,"Controller, Users",Controller,"Controller->Controller, Users->Controller",External,PublicNetwork,"Heat Public API endpoint (TLS). Default port 8004 used with no TLS."
Heat CloudFormation API,TCP,8000,Controller,Controller,Controller->External Service,Internal API,HeatApiCfnNetwork,Heat AWS CloudFormation Internal/Admin API
horizon,TCP,443,"Controller, Users, Admin",Controller,"Users->VIP, Controller->Controller, Controller->Services, Users->Ceph",External,PublicNetwork,"Dashboard (TLS). Will use port 80 by default if no TLS."
ironic,TCP,6385,"Controller, Admin, Bare Metal Hosts","Controller, Undercloud","Controller->Controller, Controller->VIP, Admin->VIP, Bare Metal->VIP, Admin->Undercloud",Control Plane,IronicApiNetwork,"Ironic internal/admin API. In Director, the Undercloud will be the destination. If Ironic is used in the Overcloud, then the destination will be the Controllers."
ironic,TCP,13385,"Controller, Admin, Bare Metal Hosts","Controller, Undercloud","Controller->Controller, Controller->VIP, Admin->VIP, Bare Metal->VIP, Admin->Undercloud",External,PublicNetwork,"Ironic public API (TLS). Will use port 6385 by default if no TLS. In Director, the Undercloud will be the destination. If Ironic is used in the Overcloud, then the destination will be the Controllers."
Ironic python agent,TCP,9999,"Undercloud, Controller",Bare Metal Hosts,"Undercloud->Bare Metal, Controller->Bare Metal","Control Plane, Ironic Bare Metal Network",,"Ironic Python Agent. Used by Ironic for setting configuration on bare metal hosts during cleaning or deployment."
http_ironic_conductor,TCP,8088,"Controller, Baremetal nodes","Undercloud, Controller","Bare Metal->Undercloud, Bare Metal->VIP, Controller->Controller",Control Plane,IronicNetwork,"HTTP for Ironic PXE boot. Used for inspecting/deploying bare metal. There are potentially two instances: one for the Undercloud/Director, and another on the Controller when using Ironic in the overcloud."
tftp_ironic_conductor,UDP,69,Baremetal nodes,"Undercloud, Controller","Bare Metal->Undercloud, Bare Metal->VIP, Controller->Controller",Control Plane,IronicNetwork,"TFTP for Ironic PXE boot. Used for inspecting/deploying bare metal. There are potentially two instances: one for the Undercloud/Director, and another on the Controller when using Ironic in the overcloud."
ironic_inspector,TCP,5050,"Controller, Baremetal nodes","Undercloud, Controller","Bare Metal->Undercloud, Bare Metal->VIP, Controller->Controller",Control Plane,IronicInspectorNetwork,"Ironic inspector internal/admin API. Used for inspecting bare metal."
ironic_inspector,TCP,13050,"Controller, Users","Undercloud, Controller","Users->VIP, Controller->VIP, Controller->Controller",External,PublicNetwork,"Ironic inspector public API (TLS). Used for launching introspection, etc. Port 5050 will be used if no TLS."
iSCSI (LVM),TCP,3260,Compute,Controller,Compute->Controller,Storage,CinderIscsiNetwork,"Cinder Volume iSCSI Initiator. Used for iSCSI when using LVM volumes."
keystone,TCP,35357,"Undercloud, Controller, Admin",Controller,"Undercloud->VIP, Controller->Controller, Admin->VIP",Control Plane,KeystoneAdminApiNetwork,"Keystone admin API (for Undercloud). The Undercloud contacts it to set up the admin account."
keystone,TCP,5000,All Roles,Controller,"All Roles->VIP, Controller->VIP, Controller->Controller",Internal API,KeystonePublicApiNetwork,Keystone internal API
keystone,TCP,13000,"Controller, Users",Controller,"Users->VIP, Controller->Controller",External,KeystonePublicApiNetwork,"Keystone public API (TLS). Will use port 5000 if no TLS."
manila,TCP,8786,"Controller, Compute",Controller,"Compute->VIP, Controller->VIP, Controller->Controller",Internal API,ManilaApiNetwork,Manila internal/admin API
manila,TCP,13786,"Controller, Compute",Controller,"Compute->VIP, Controller->VIP, Controller->Controller",External,PublicNetwork,"Manila Public API (TLS). Will use port 8786 if no TLS."
memcached,TCP,11211,"Controller, Compute",Controller,"All Roles->Controller, Controller->Controller",Internal API,MemcachedNetwork,"Services use memcached to cache Keystone identity tokens. All roles communicate with the Controllers using memcached."
mistral_api,TCP,8989,Undercloud,Controller,Undercloud only,Internal API,MistralApiNetwork,"Mistral internal/admin API. Used in the undercloud, NOT USED in the overcloud."
mistral_api,TCP,13989,"Controller, Users, Admin",Controller,Undercloud only,External,PublicNetwork,"Mistral Public API (TLS). Uses port 8989 if no TLS."
mysql_galera,TCP,4568,"Controller, Database","Controller, Database","Controller->Controller, Database->Database",Internal API,MysqlNetwork,"Galera Cluster incremental state transfer. Used by a galera server to join a running galera cluster and catch up to cluster state. Depending on the deployment topology, traffic is either Controller->Controller or Database->Database."
mysql_galera,TCP,4567,"Controller, Database","Controller, Database","Controller->Controller, Database->Database",Internal API,MysqlNetwork,"Galera Cluster replication traffic between the galera nodes. Depending on the deployment topology, traffic is either Controller->Controller or Database->Database."
mysql_galera,TCP,9200,"Controller, Database","Controller, Database","Controller->Controller, Database->Database",Internal API,MysqlNetwork,"Galera monitor. Polled by HAProxy (e.g. in the Controller or ControllerOpenStack role) to check whether the locally running galera server is clustered and available for service."
mysql_galera,TCP,3306,"Controller, Networker","Controller, Database","Controller->VIP, Networker->VIP",Internal API,MysqlNetwork,"MySQL DB client access. Octavia running on Controller or Networker roles makes direct connections to the database running on Controller or Database roles. Other OpenStack services usually access the database via HAProxy (e.g. in the Controller or ControllerOpenStack role)."
mysql_galera,TCP,4444,"Controller, Database","Controller, Database","Controller->Controller, Database->Database",Internal API,MysqlNetwork,"MySQL State Snapshot Transfer. Used by a galera server to join a running galera cluster and request a full DB synchronization over rsync. Depending on the deployment topology, traffic is either Controller->Controller or Database->Database."
mysql_galera,TCP,3123,"Controller, Database","Controller, Database","Controller->Controller, Database->Database",Internal API,MysqlNetwork,"Pacemaker MySQL Cluster Control Port. Special pacemaker_remote port dedicated to the containerized galera service. Connection between pacemaker on the controller and the galera container. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-firewalls-haar."
neutron,TCP,9696,"Controller, Compute",Controller,"Controller->Controller, Compute->Controller",Internal API,NeutronApiNetwork,Neutron internal/admin API
neutron,TCP,13696,"Controller, Users, Admin",Controller,"Controller->Controller, Controller->VIP, Users->VIP, Admin->VIP",External,PublicNetwork,"Neutron Public API (TLS). Will use port 9696 if no TLS."
Neutron L3 VRRP,VRRP (mcast),N/A,Controller or Networker,Controller or Networker,"Controller->Controller, Networker->Networker",Provider/tenant Neutron networks (not Overcloud networks),N/A (present on Neutron provider/tenant networks),"VRRP. Used by Neutron L3 HA to provide failover between Controller/Networker nodes."
Neutron Virtual Networks,UDP,4789,"Controller, Compute, Networker","Controller, Compute, Networker","Controller->Controller, Controller->Compute, Compute->Controller, Compute->Compute, Networker->Networker, Compute->Networker, Networker->Compute",Tenant,NeutronTenantNetwork,"VXLAN tunnels. Used by multiple Neutron plugins such as ML2/OVS, OpenDaylight, etc."
DHCP,UDP,67,All roles,Undercloud,All Roles->Undercloud,Control Plane,N/A,"Undercloud provisioning DHCP. DHCP requests for introspection/deployment."
DHCP,UDP,68,Undercloud,All roles,Undercloud->All Roles,Control Plane,N/A,"Undercloud provisioning DHCP. DHCP responses for introspection/deployment."
neutron_gre,GRE,N/A,"Controller, Compute, Networker","Controller, Compute, Networker","Controller->Controller, Controller->Compute, Compute->Controller, Compute->Compute, Networker->Networker, Compute->Networker, Networker->Compute",Tenant,NeutronTenantNetwork,Neutron OVS Agent GRE tunnels
nova,TCP,8774,"Controller, Compute",Controller,"Controller->Controller, Controller->VIP, Compute->VIP, Networker->VIP",Internal API,NovaApiNetwork,Nova internal/admin API
nova,TCP,13774,"Controller, Users, Admin",Controller,"Controller->VIP, Users->VIP",External,PublicNetwork,"Nova public API (TLS). Will use port 8774 if no TLS."
nova_metadata,TCP,8775,"Controller, Networker",Controller,"Controller->Controller, Controller->VIP, Networker->VIP",Internal API,NovaMetadataNetwork,"Nova Metadata. Instances make connections to the Neutron Metadata Proxy, which forwards to the Nova Metadata service on the Controllers."
nova_libvirt_api,TCP,16514,Compute,Compute,Compute->Compute,Internal API,NovaLibvirtNetwork,"Nova libvirt API (TLS). Compute roles listen for libvirt calls when TLS is enabled."
nova_libvirt_migration,TCP,61152-61215,Compute,Compute,Compute->Compute,Internal API,NovaLibvirtNetwork,"Nova live migration. Live migration port range for libvirtd."
nova_vnc_console,TCP,5900-6923,Controller,Compute,Controller->Compute,Internal API,NovaLibvirtNetwork,"Nova VNC console port range. VNC console connections from the VNC proxy to Compute."
nova_vnc_proxy,TCP,6080,Controller,Controller,"Controller->Controller, Controller->VIP",Internal API,NovaVncProxyNetwork,"Nova VNC Proxy internal/admin API. A Nova API call makes an RPC to the compute node to get the console information."
nova_vnc_proxy,TCP,13080,"Controller, Users, Admin",Controller,"Users->VIP, Admin->VIP, Controller->VIP",External,PublicNetwork,"Nova VNC Proxy public API (TLS). Port 6080 will be used if no TLS. Users connect here for the VNC proxy."
nova_live_migration_ssh,TCP,2022,Compute,Compute,Compute->Compute,Internal API,ComputeHostnameResolveNetwork,"Nova live migration over SSH. Port may be set with MigrationSshPort."
nova_cold_migration_ssh,TCP,2022,Compute,Compute,Compute->Compute,Internal API,NovaApiNetwork,"Nova cold migration over SSH. Port may be set with MigrationSshPort."
nova_placement,TCP,8778,Controller,Controller,"Controller->Controller, Controller->VIP, Admin->VIP",Internal API,NovaPlacementNetwork,Nova placement internal/admin API
nova_placement,TCP,13778,"Controller, Users, Admin",Controller,"Controller->Controller, Controller->VIP, Users->VIP",External,PublicNetwork,"Nova placement public API (TLS). Will use port 8778 if no TLS."
ntp,UDP,123,All Roles,External servers,All Roles->NTP,"External (Controller), Control Plane (other roles)",N/A,"NTP. NTP is an external service that all roles must talk to for time sync via the default gateway."
octavia_api,TCP,9876,Controller,Controller,"Controller->Controller, Controller->VIP, Users->VIP, Admin->VIP",Internal API,OctaviaApiNetwork,Octavia internal/admin API
octavia_api,TCP,13876,"Controller, Users, Admin","Controller, VIP","Controller->VIP, Users->VIP, Admin->VIP",External,PublicNetwork,"Octavia public API (TLS). Will use port 9876 if no TLS."
octavia_health_manager,UDP,5555,Compute,"Controller, Networker","Compute->Controller, Compute->Networker",Neutron Tenant network(s),N/A,"Octavia load balancer management network (amphora heartbeats). These heartbeats happen on the same tenant network(s) where the load balancer is handling requests."
ovn_controller,UDP,6081,"Controller, Compute, Networker","Controller, Compute, Networker","Controller->Compute, Controller->Networker, Compute->Controller, Compute->Networker, Networker->Controller, Networker->Compute",Tenant,NeutronTenantNetwork,Neutron Geneve networks (tunnel traffic)
ovn_dbs,TCP,6641,Controller,Controller VIP (pacemaker),Controller->VIP,Internal API,OvnDbsNetwork,"OVN northbound DB server. Port may be set with OVNNorthboundServerPort. Managed by pacemaker (active/passive)."
ovn_dbs,TCP,6642,"Controller, Compute, Networker",Controller VIP (pacemaker),"Controller->VIP, Compute->VIP, Networker->VIP",Internal API,OvnDbsNetwork,"OVN southbound DB server. Port may be set with OVNSouthboundServerPort. Managed by pacemaker (active/passive)."
pacemaker,TCP,3121,Controller,"Controller, Compute, Networker, Database","Controller->Compute, Controller->Networker, Controller->Controller, Controller->Database",,,"Pacemaker remote. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-firewalls-haar."
pacemaker,TCP,2224,"Controller, Compute, Networker","Controller, Compute, Networker, Database",All roles->all roles,,,"pcs - required on all nodes for node-to-node communication. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-firewalls-haar."
pacemaker,UDP,5405,Controller,Controller,Controller->Controller,Internal API,,"corosync - multicast UDP. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-firewalls-haar."
pacemaker,TCP,21064,Controller,Controller,Controller->Controller,,,"dlm - required on all nodes if the cluster contains any resources requiring DLM. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-firewalls-haar."
panko_api,TCP,8977,Controller,Controller,"Controller->Controller, Controller->VIP",Internal API,PankoApiNetwork,"Panko internal/admin API. Internal back-end network for the Panko API."
panko_api,TCP,13977,"Controller, Admin",Controller,"Controller->Controller, Controller->VIP",External,PublicNetwork,"Panko public API (TLS). External access for the Panko API. Will use port 8977 if no TLS."
rabbitmq,TCP,3122,Controller,"Controller, Networker","Controller->Controller, Controller->Networker",Internal API,RabbitmqNetwork,"Pacemaker Rabbitmq Cluster Control. Special pacemaker_remote port dedicated to the containerized rabbitmq service. Connection between pacemaker on the controller and the rabbitmq container. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-firewalls-haar."
rabbitmq,TCP,5672,"Controller, Compute, Networker","Controller, Networker","Controller->Controller, Controller->Networker, Compute->Controller, Compute->Networker",Internal API,RabbitmqNetwork,AMQP message traffic
rabbitmq,TCP,25672,"Controller, Networker","Controller, Networker","Controller->Controller, Networker->Networker",Internal API,RabbitmqNetwork,Erlang distribution protocol (node clustering)
rabbitmq,TCP,4369,"Controller, Networker","Controller, Networker","Controller->Controller, Networker->Networker",Internal API,RabbitmqNetwork,epmd (Erlang port mapper daemon)
redis,TCP,6379,Controller,Controller,Controller->Controller,Internal API,RedisNetwork,"Redis service access and replication. OpenStack services access Redis via HAProxy. The same port is used for Redis cluster replication (between Redis servers)."
redis (TLS),TCP,6379,Controller,Controller,Controller->Controller,Internal API,RedisNetwork,"Redis service access and replication. A socat tunnel exposes a TLS endpoint to HAProxy in front of the Redis server running/listening locally on localhost:6379 (because Redis does not support TLS natively). For replication, the Redis server targets the remote Redis host via another local socat tunnel listening on localhost:[Redis_base_port+offset_for_redis_server_replica]."
redis,TCP,3124,Controller,Controller,Controller->Controller,Internal API,RedisNetwork,"Pacemaker Redis Cluster Control Port. Special pacemaker_remote port dedicated to the containerized redis service. Connection between pacemaker on the controller and the redis container. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-firewalls-haar."
red hat satellite,TCP,80,All Roles,Satellite server,see notes,Control Plane or External,N/A,"Server enrollment/certificate download. Nodes will use the IP default route for access."
red hat satellite,TCP,443,All Roles,Satellite server,see notes,Control Plane or External,N/A,"RPM downloads. Nodes will use the IP default route for access."
sahara,TCP,8386,Controller,Controller,"Controller->Controller, Controller->Compute, Controller->Swift",Internal API,SaharaApiNetwork,Sahara internal/admin API
sahara,TCP,13386,"Controller, Users, Admin",Controller,"Controller->Controller, Controller->Compute, Controller->Swift",External,PublicNetwork,Sahara public API (TLS)
SNMP,UDP,161,Controller,All Roles,"Controller->Controller, Controller->Compute",Control Plane,SnmpdNetwork,"Ceilometer SNMP. SNMP monitoring."
zaqar,TCP,8888,Controller,Controller,Undercloud only,Internal API,ZaqarApiNetwork,"Zaqar internal/admin API. Used in the undercloud, not in the overcloud."
zaqar,TCP,13888,Controller,Controller,Undercloud only,External,PublicNetwork,Zaqar public API (TLS)
zaqar websockets,TCP,9000,"Controller, Users, Admin",Controller,,External,PublicNetwork,"Zaqar websockets public API (TLS). Will use port 8888 if no TLS."
swift,TCP,8080,Controller HAProxy,Controller,"Controller->Controller, Controller->Swift",Internal API,SwiftApiNetwork,"Swift internal endpoint, plaintext HTTP. Keep this firewalled from the Internet; it should only be visible to the LB (HAProxy)."
swift,TCP,6200,"Controller, Storage",Storage,"Controller->Swift, Swift->Swift",Swift,SwiftApiNetwork,"Swift internal, object server. Absolutely keep this firewalled from anything but the Swift proxy and peer Swift nodes; this whole block used to be at 600x and may still be found in OSP10 and older, or in legacy/upgraded clouds."
swift,TCP,6201,"Controller, Storage",Storage,"Controller->Swift, Swift->Swift",Swift,SwiftApiNetwork,"Swift internal, container server. Absolutely keep this firewalled from anything but the Swift proxy and peer Swift nodes."
swift,TCP,6202,"Controller, Storage",Storage,"Controller->Swift, Swift->Swift",Swift,SwiftApiNetwork,"Swift internal, account server. Absolutely keep this firewalled from anything but the Swift proxy and peer Swift nodes."
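
Because the matrix above is plain CSV, it can be consumed directly by tooling when auditing firewall rules. Below is a minimal Python sketch that filters the rows for one destination network and prints an iptables-style ACCEPT rule per protocol and port; the file name network_flows.csv, the script name, and the exact rule format are illustrative assumptions, not part of Director/TripleO itself.

#!/usr/bin/env python3
"""flows_to_rules.py - sketch: emit iptables-style rules from the port matrix CSV."""
import csv
import sys

def rules_for(path, dest_network):
    """Yield one ACCEPT rule per protocol/port for rows matching dest_network."""
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["Dest. Network"].strip() != dest_network:
                continue
            port = row["Dest. Port"].strip()
            if port in ("N/A", ""):
                continue  # VRRP and GRE rows carry no L4 port
            # A range such as 6800-7300 becomes start:end, which --dport accepts.
            dport = port.replace("-", ":")
            # "TCP/UDP" means the service listens on both protocols.
            for proto in row["Protocol"].split("/"):
                p = proto.lower()
                yield (f"-A INPUT -p {p} -m state --state NEW -m {p} "
                       f"--dport {dport} -j ACCEPT"
                       f"  # {row['Service']}: {row['Source/Dest Pairs']}")

if __name__ == "__main__":
    # Example: python flows_to_rules.py network_flows.csv "Internal API"
    for rule in rules_for(sys.argv[1], sys.argv[2]):
        print(rule)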