Unable to access openshift docker registry externally

My organization is currently evaluating OpenShift Enterprise, using the AWS Quick Start CloudFormation templates that create a VPC. The version is:

$ oc version
oc v3.7.52
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO

Although we understand the platform has build capabilities of its own, our preferred CI/CD workflow is to build Docker images outside of OpenShift using an external Jenkins. Pushing those images to the OpenShift docker registry then seems like the natural next step, after which we could initiate deployment activity. The install documentation implies that the internal docker registry should already be secured and exposed during the installation process:

https://docs.openshift.com/container-platform/3.7/install_config/registry/securing_and_exposing_registry.html

"By default, the OpenShift Container Registry is secured during cluster installation so that it serves traffic via TLS. A passthrough route is also created by default to expose the service externally."

And indeed it does seem that a passthrough route exists for the docker-registry service, and it appears to be secured. However, when I attempt to access the docker-registry externally with:

curl -kv https://docker-registry-default.<my domain>:443/v2/

I receive a 503:

The application is currently not serving requests at this endpoint. It may not have been started or is still starting.

along with the usual list of suspects (no running pod, etc.). I take this to mean that DNS and so on are working well enough for the router to receive the request and respond with the 503 itself.
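To rule those out I ran checks along these lines (exact output omitted):

# confirm a registry pod is running, the service has an endpoint, and the route was admitted
$ oc get pods -n default | grep docker-registry
$ oc get svc docker-registry -n default
$ oc get endpoints docker-registry -n default
$ oc get route docker-registry -n default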

There is in fact a running pod, and I have verified that I can curl the registry service from each of the three router pods using the pod IP address listed in haproxy.config on the router. Here is the relevant config from the router:

$ grep docker-registry *
haproxy.config:backend be_tcp:default:docker-registry
haproxy.config:  server pod:docker-registry-4-jpswr:docker-registry:10.129.0.13:5000 10.129.0.13:5000 weight 256
os_route_http_redirect.map:^docker-registry-default\.<my domain>(:[0-9]+)?(/.*)?$ default:docker-registry
os_sni_passthrough.map:^docker-registry-default\.<my domain>(:[0-9]+)?(/.*)?$ 1
os_tcp_be.map:^docker-registry-default\.<my domain>(:[0-9]+)?(/.*)?$ default:docker-registry

and the curl command that succeeds (in the sense that it requires authentication) is:

curl -kv https://10.129.0.13:5000/v2/

which returns:

{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}

I understand I will need to authenticate for full functionality, but I believe the "authentication required" message indicates that the routers can reach the docker-registry.
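For completeness, the way I ran that check from each router pod was along these lines (the router pod name here is just an example):

# list the router pods, then curl the registry pod IP from inside each one
$ oc get pods -n default -o wide | grep router
$ oc exec router-1-abcde -n default -- curl -ks https://10.129.0.13:5000/v2/

and all three return the same UNAUTHORIZED response.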

To me this implies an issue with the router configuration for the docker-registry in the "out of the box" 3.7 AWS quickstart setup. I am wondering if anyone else has seen this issue. I am new to OpenShift and we are just evaluating it, so we don't have a support contract in place.

Thanks,

Scott Hasse (on behalf of Dave Hannon)

Responses

Also, the docker-registry route YAML is:

apiVersion: v1
kind: Route
metadata:
  creationTimestamp: '2018-06-15T15:33:20Z'
  name: docker-registry
  namespace: default
  resourceVersion: '642510'
  selfLink: /oapi/v1/namespaces/default/routes/docker-registry
  uid: 6e979471-70b1-11e8-9e39-0e04a8ee0858
spec:
  host: docker-registry-default.<my domain>
  port:
    targetPort: 5000-tcp
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: passthrough
  to:
    kind: Service
    name: docker-registry
    weight: 100
  wildcardPolicy: None
status:
  ingress:
    - conditions:
        - lastTransitionTime: '2018-06-15T15:33:20Z'
          status: 'True'
          type: Admitted
      host: docker-registry-default.<my domain>
      routerName: router
      wildcardPolicy: None

My understanding is that the route needs to use passthrough termination to work properly. This one might be slightly modified from the stock route, with the addition of an insecure-traffic redirect.
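In case the redirect turns out to matter, one way to get back to something closer to the stock route would be to recreate it as plain passthrough, roughly like this (I have not actually tried this yet):

# delete the existing route and recreate it with passthrough termination and no insecure policy
$ oc delete route docker-registry -n default
$ oc create route passthrough docker-registry --service=docker-registry --hostname=docker-registry-default.<my domain> -n default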

Thanks,

Scott

I have enabled haproxy debug logging via syslogd, as described at: https://access.redhat.com/discussions/3489031

and during the failures I see the following:

Jun 18 14:45:59 ip-10-0-9-79.ec2.internal haproxy[177]: 10.0.146.25:60591 [18/Jun/2018:14:45:58.202] fe_no_sni~ openshift_default/<NOSRV> 1242/-1/-1/-1/1242 503 3278 - - SC-- 1/0/0/0/0 0/0 "GET /favicon.ico HTTP/1.1"
Jun 18 14:45:59 ip-10-0-9-79.ec2.internal haproxy[177]: 10.0.146.25:60591 [18/Jun/2018:14:45:58.202] public_ssl be_no_sni/fe_no_sni 1/0/1278 6766 -- 0/0/0/0/0 0/0
Jun 18 14:46:03 ip-10-0-9-79.ec2.internal haproxy[177]: 10.0.146.25:60605 [18/Jun/2018:14:46:02.876] fe_no_sni~ openshift_default/<NOSRV> 189/-1/-1/-1/189 503 3278 - - SC-- 1/0/0/0/0 0/0 "GET /v2/ HTTP/1.1"
Jun 18 14:46:03 ip-10-0-9-79.ec2.internal haproxy[177]: 10.0.146.25:60605 [18/Jun/2018:14:46:02.875] public_ssl be_no_sni/fe_no_sni 1/0/230 3475 -- 0/0/0/0/0 0/0
Jun 18 14:46:33 ip-10-0-9-79.ec2.internal haproxy[177]: 10.0.146.25:60629 [18/Jun/2018:14:46:31.478] fe_no_sni~ openshift_default/<NOSRV> 2099/-1/-1/-1/2099 503 3278 - - SC-- 1/0/0/0/0 0/0 "GET /v2/ HTTP/1.1"
Jun 18 14:46:33 ip-10-0-9-79.ec2.internal haproxy[177]: 10.0.146.25:60629 [18/Jun/2018:14:46:31.478] public_ssl be_no_sni/fe_no_sni 1/0/2139 3475 -- 0/0/0/0/0 0/0
Jun 18 14:46:43 ip-10-0-9-79.ec2.internal haproxy[177]: 10.0.146.25:60635 [18/Jun/2018:14:46:33.671] fe_no_sni~ fe_no_sni/<NOSRV> -1/-1/-1/-1/10004 408 212 - - cR-- 1/0/0/0/0 0/0 "<BADREQ>"
Jun 18 14:46:43 ip-10-0-9-79.ec2.internal haproxy[177]: 10.0.146.25:60635 [18/Jun/2018:14:46:33.671] public_ssl be_no_sni/fe_no_sni 1/0/10116 409 -- 0/0/0/0/0 0/0

but I am not sure how to interpret that information. I can see the 503 errors corresponding to my requests in the router's log output. Any advice would be greatly appreciated.
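For anyone following along, the gist of that setup, as I understand the linked discussion, is to point the router at a syslog address and raise the log level via environment variables on the router deployment config, something like (the address here is just an example; the linked discussion has the full steps):

# send haproxy logs to a local syslogd and turn up verbosity
$ oc set env dc/router -n default ROUTER_SYSLOG_ADDRESS=127.0.0.1 ROUTER_LOG_LEVEL=debug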

Thanks,

Scott

I am also trying to get insight into this issue via haproxy statistics, but "show errors" returns nothing:

echo 'show errors' | socat - UNIX-CONNECT:/var/lib/haproxy/run/haproxy.sock
Total events captured on [18/Jun/2018:15:29:43.865] : 0

'show stat' does return information:

$ echo 'show stat' | socat - UNIX-CONNECT:/var/lib/haproxy/run/haproxy.sock
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
public,FRONTEND,,,0,2,20000,557,78718,96982,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,2,,,,0,552,0,0,5,0,,0,2,557,,,0,0,0,0,,,,,,,,
public_ssl,FRONTEND,,,0,2,20000,11,9915,42166,0,0,0,,,,,OPEN,,,,,,,,,1,3,0,,,,0,0,0,2,,,,,,,,,,,0,0,0,,,0,0,0,0,,,,,,,,
be_sni,fe_sni,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,1,1,0,,,,,,1,4,1,,0,,2,0,,0,,,,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
be_sni,BACKEND,0,0,0,0,2000,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,2761,0,,1,4,0,,0,,1,0,,0,,,,,,,,,,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
fe_sni,FRONTEND,,,0,0,20000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,5,0,,,,0,0,0,0,,,,0,0,0,0,0,0,,0,0,0,,,0,0,0,0,,,,,,,,
be_no_sni,fe_no_sni,0,0,0,2,,11,9915,42166,,0,,0,0,0,0,no check,1,1,0,,,,,,1,6,1,,11,,2,0,,2,,,,,,,,,,0,,,,0,0,,,,,22,,,0,0,0,49,
be_no_sni,BACKEND,0,0,0,2,2000,11,9915,42166,0,0,,0,0,0,0,UP,1,1,0,,0,2761,0,,1,6,0,,11,,1,0,,2,,,,,,,,,,,,,,0,0,0,0,0,0,22,,,0,0,0,49,
fe_no_sni,FRONTEND,,,0,2,20000,11,3973,26835,0,0,3,,,,,OPEN,,,,,,,,,1,7,0,,,,0,0,0,2,,,,0,0,0,3,8,0,,0,2,11,,,0,0,0,0,,,,,,,,
openshift_default,BACKEND,0,0,0,1,6000,13,4307,42614,0,0,,13,0,0,0,UP,0,0,0,,0,2761,0,,1,8,0,,0,,1,0,,2,,,,0,0,0,0,13,0,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
be_secure:aws-service-broker:aws-asb-1338,pod:aws-asb-2-rwrc9:aws-asb:10.129.0.9:1338,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,256,1,0,,,,,,1,9,1,,0,,2,0,,0,,,,0,0,0,0,0,0,0,,,,0,0,,,,,-1,,,0,0,0,0,
be_secure:aws-service-broker:aws-asb-1338,BACKEND,0,0,0,0,1,0,0,0,0,0,,0,0,0,0,UP,256,1,0,,0,2761,0,,1,9,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
be_tcp:default:docker-registry,pod:docker-registry-4-jpswr:docker-registry:10.129.0.13:5000,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,256,1,0,,,,,,1,10,1,,0,,2,0,,0,,,,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
be_tcp:default:docker-registry,BACKEND,0,0,0,0,1,0,0,0,0,0,,0,0,0,0,UP,256,1,0,,0,2761,0,,1,10,0,,0,,1,0,,0,,,,,,,,,,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
be_http:default:nodejs-basic,pod:nodejs-basic-1-tbhfd:nodejs-basic:10.128.2.6:8080,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,256,1,0,,,,,,1,11,1,,0,,2,0,,0,,,,0,0,0,0,0,0,0,,,,0,0,,,,,-1,,,0,0,0,0,
be_http:default:nodejs-basic,BACKEND,0,0,0,0,1,0,0,0,0,0,,0,0,0,0,UP,256,1,0,,0,2761,0,,1,11,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
be_tcp:default:registry-console,pod:registry-console-1-fkmms:registry-console:10.129.0.3:9090,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,256,1,0,,,,,,1,12,1,,0,,2,0,,0,,,,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
be_tcp:default:registry-console,BACKEND,0,0,0,0,1,0,0,0,0,0,,0,0,0,0,UP,256,1,0,,0,2761,0,,1,12,0,,0,,1,0,,0,,,,,,,,,,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
be_tcp:kube-service-catalog:apiserver,pod:apiserver-clphv:apiserver:10.130.0.2:6443,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,256,1,0,,,,,,1,13,1,,0,,2,0,,0,,,,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
be_tcp:kube-service-catalog:apiserver,BACKEND,0,0,0,0,1,0,0,0,0,0,,0,0,0,0,UP,256,1,0,,0,2761,0,,1,13,0,,0,,1,0,,0,,,,,,,,,,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
be_secure:openshift-ansible-service-broker:asb-1338,pod:asb-1-8lkbv:asb:10.128.0.3:1338,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,256,1,0,,,,,,1,14,1,,0,,2,0,,0,,,,0,0,0,0,0,0,0,,,,0,0,,,,,-1,,,0,0,0,0,
be_secure:openshift-ansible-service-broker:asb-1338,BACKEND,0,0,0,0,1,0,0,0,0,0,,0,0,0,0,UP,256,1,0,,0,2761,0,,1,14,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
be_secure:openshift-infra:hawkular-metrics,pod:hawkular-metrics-p2n28:hawkular-metrics:10.131.0.2:8443,0,0,0,0,,0,0,0,,0,,0,0,0,0,no check,256,1,0,,,,,,1,15,1,,0,,2,0,,0,,,,0,0,0,0,0,0,0,,,,0,0,,,,,-1,,,0,0,0,0,
be_secure:openshift-infra:hawkular-metrics,BACKEND,0,0,0,0,1,0,0,0,0,0,,0,0,0,0,UP,256,1,0,,0,2761,0,,1,15,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,

Anyone know why 'show errors' would be empty when there do seem to be errors?
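For what it's worth, a quick way to pull just the rows of interest out of that CSV (the header line gives the column names, e.g. hrsp_5xx) is something like:

# show only the registry backend and the default backend rows
$ echo 'show stat' | socat - UNIX-CONNECT:/var/lib/haproxy/run/haproxy.sock | grep -E 'docker-registry|openshift_default'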

Thanks,

Scott

Also, there seems to be a fairly significant discrepancy between the docs at:

https://docs.openshift.com/enterprise/3.2/install_config/install/docker_registry.html#access-logging-in-to-the-registry

and this KB article:

https://access.redhat.com/solutions/2451111

Can anyone clarify what approach they have used to authenticate to the OpenShift docker registry once it can actually be accessed?
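For reference, the approach I was expecting to use once the route works, based on my reading of those pages, is to log in to the registry with an OpenShift token, roughly (the master URL and user are placeholders):

# log in to OpenShift, then use the session token as the registry password
$ oc login https://<master>:8443 -u <user>
$ docker login -u <user> -p $(oc whoami -t) docker-registry-default.<my domain>

after which docker tag/push against the route hostname should work, assuming the user has a role that allows pushing images.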

Thanks,

Scott

Does anyone have input or thoughts on this issue? Is there an active OpenShift community here, or perhaps a more appropriate forum where someone doing an evaluation can touch base with actual users?

Thanks,

Scott

Just a note that we have shut down our evaluation environment without getting this working. It is certainly an unfortunate outcome; we were hoping OpenShift would be a good fit for this use case.

Scott

Yes, we are also facing the same issue.
