
hostname can not resolve #4745

Closed
zhuhongxu opened this issue Sep 10, 2023 · 1 comment
@zhuhongxu

Environmental Info:
RKE2 Version: latest

Node(s) CPU architecture, OS, and Version:

root@k8s-master1:~/project/hello-image# uname -a
Linux k8s-master1 5.15.0-83-generic #92-Ubuntu SMP Mon Aug 14 09:30:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
root@k8s-master1:~/project/hello-image# cat /proc/version
Linux version 5.15.0-83-generic (buildd@lcy02-amd64-027) (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #92-Ubuntu SMP Mon Aug 14 09:30:42 UTC 2023
root@k8s-master1:~/project/hello-image# 

Cluster Configuration: default (only a master node) with a private image registry

Describe the bug: I created a deployment and a service for it. I can use the service cluster IP to access my pod, but I cannot use the service name; I cannot even resolve the built-in kubernetes service:

root@k8s-master1:~/project/hello-image# nslookup kubernetes
Server:		127.0.0.53
Address:	127.0.0.53#53

** server can't find kubernetes: SERVFAIL

root@k8s-master1:~/project/hello-image# nslookup kubernetes.default
Server:		127.0.0.53
Address:	127.0.0.53#53

** server can't find kubernetes.default: NXDOMAIN

Steps To Reproduce:

  • Installed RKE2:
curl -sfL https://rancher-mirror.rancher.cn/rke2/install.sh | INSTALL_RKE2_MIRROR=cn sh -
systemctl enable rke2-server.service
systemctl start rke2-server.service

It starts successfully. Then I added the registries.yaml below:

root@k8s-master1:~/project/hello-image# cat /etc/rancher/rke2/registries.yaml 
mirrors:
  docker.io:
    endpoint:
      - "https://s27w6kze.mirror.aliyuncs.com"
  registry.cn-hangzhou.aliyuncs.com:
    endpoint:
      - "https://registry.cn-hangzhou.aliyuncs.com"
configs:
  "registry.cn-hangzhou.aliyuncs.com":
    auth:
      username: [email protected]
      password: zhx19951115
    tls:
      insecure_skip_verify: true

and
systemctl start rke2-server.service
It restarts successfully. Then I added the deployment and service YAML below:

root@k8s-master1:~/project/hello-image# cat hello-image-backend-depolyment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-image-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-image-backend
  template:
    metadata:
      labels:
        app: hello-image-backend
    spec:
      containers:
        - name: hello-image-backend
          image: registry.cn-hangzhou.aliyuncs.com/zhuhongxu/hello-image:v3
          ports:
          - containerPort: 30000
root@k8s-master1:~/project/hello-image# cat hello-image-backend-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: hello-image-service1
  labels:
    app: hello-image-service1
spec:
  ports:
    - port: 30000
      targetPort: 30000
  selector:
    app: hello-image-backend

Then I applied them with kubectl, and both were created successfully. At this point:

root@k8s-master1:~/project/hello-image# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
hello-image-backend-564ff54964-7tkfl   1/1     Running   0          16h   10.42.0.30   k8s-master1   <none>           <none>
root@k8s-master1:~/project/hello-image# curl 10.42.0.30:30000/hello
hello, I will build docker image by use this project, my version is 1 
root@k8s-master1:~/project/hello-image# kubectl get svc
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
hello-image-service1   ClusterIP   10.43.55.219   <none>        30000/TCP   16h
kubernetes             ClusterIP   10.43.0.1      <none>        443/TCP     27h
root@k8s-master1:~/project/hello-image# curl 10.43.55.219:30000/hello
hello, I will build docker image by use this project, my version is 1 

It looks like I can use the pod IP and the service cluster IP to reach my application endpoint, but I cannot reach it by service name:

root@k8s-master1:~/project/hello-image# curl hello-image-service1:30000/hello
curl: (6) Could not resolve host: hello-image-service1
root@k8s-master1:~/project/hello-image# curl -v hello-image-service1:30000/hello
* Could not resolve host: hello-image-service1
* Closing connection 0
curl: (6) Could not resolve host: hello-image-service1

At this point, I tried a few lookups:

root@k8s-master1:~/project/hello-image# nslookup hello-image-service1
Server:		127.0.0.53
Address:	127.0.0.53#53

** server can't find hello-image-service1: SERVFAIL

root@k8s-master1:~/project/hello-image# nslookup hello-image-service1.default
Server:		127.0.0.53
Address:	127.0.0.53#53

** server can't find hello-image-service1.default: NXDOMAIN

root@k8s-master1:~/project/hello-image# nslookup hello-image-service1.default.svc.cluster.local
Server:		127.0.0.53
Address:	127.0.0.53#53

** server can't find hello-image-service1.default.svc.cluster.local: SERVFAIL

root@k8s-master1:~/project/hello-image# kubectl get svc -n kube-system | grep dns
rke2-coredns-rke2-coredns                 ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP   27h
root@k8s-master1:~/project/hello-image# nslookup hello-image-service1.default.svc.cluster.local 10.43.0.10
Server:		10.43.0.10
Address:	10.43.0.10#53

Name:	hello-image-service1.default.svc.cluster.local
Address: 10.43.55.219

root@k8s-master1:~/project/hello-image# cat /etc/resolv.conf 
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad
search .
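The resolv.conf above shows where the host-side lookups are going: the host's queries hit systemd-resolved's local stub at 127.0.0.53, which only knows its uplink DNS servers and has no knowledge of the cluster.local zone. A quick way to inspect those uplinks (a sketch, assuming systemd-resolved is in use, as it is by default on Ubuntu 22.04):

```shell
# Show systemd-resolved's uplink DNS servers and search domains;
# cluster.local will not appear here, so host-side lookups of
# Kubernetes service names are expected to fail.
resolvectl status
```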
root@k8s-master1:~/project/hello-image# nslookup kubernetes.default
Server:		127.0.0.53
Address:	127.0.0.53#53

** server can't find kubernetes.default: NXDOMAIN

root@k8s-master1:~/project/hello-image# 
root@k8s-master1:~/project/hello-image# nslookup kubernetes.default 10.43.0.10
Server:		10.43.0.10
Address:	10.43.0.10#53

** server can't find kubernetes.default: NXDOMAIN

root@k8s-master1:~/project/hello-image# kubectl get pods -n kube-system
NAME                                                    READY   STATUS      RESTARTS      AGE
cloud-controller-manager-k8s-master1                    1/1     Running     3 (23h ago)   27h
etcd-k8s-master1                                        1/1     Running     1 (23h ago)   27h
helm-install-rke2-canal-qzqj6                           0/1     Completed   0             27h
helm-install-rke2-coredns-7fnq4                         0/1     Completed   0             27h
helm-install-rke2-ingress-nginx-x524p                   0/1     Completed   0             27h
helm-install-rke2-metrics-server-msnkl                  0/1     Completed   0             27h
helm-install-rke2-snapshot-controller-crd-bbczk         0/1     Completed   0             27h
helm-install-rke2-snapshot-controller-hgmnw             0/1     Completed   1             27h
helm-install-rke2-snapshot-validation-webhook-rww5k     0/1     Completed   0             27h
kube-apiserver-k8s-master1                              1/1     Running     1 (23h ago)   27h
kube-controller-manager-k8s-master1                     1/1     Running     3 (23h ago)   27h
kube-proxy-k8s-master1                                  1/1     Running     0             20h
kube-scheduler-k8s-master1                              1/1     Running     1 (23h ago)   27h
rke2-canal-6kd24                                        2/2     Running     2 (23h ago)   27h
rke2-coredns-rke2-coredns-546587f99c-tqs89              1/1     Running     1 (23h ago)   27h
rke2-coredns-rke2-coredns-autoscaler-797c865dbd-vm4pl   1/1     Running     1 (23h ago)   27h
rke2-ingress-nginx-controller-lwhrv                     1/1     Running     1 (23h ago)   27h
rke2-metrics-server-78b84fff48-brrlx                    1/1     Running     1 (23h ago)   27h
rke2-snapshot-controller-849d69c748-5mzs5               1/1     Running     1 (23h ago)   27h
rke2-snapshot-validation-webhook-7f955488ff-2sv9c       1/1     Running     1 (23h ago)   27h
root@k8s-master1:~/project/hello-image# ps -ef | grep proxy
root        6084    5975  4 Sep09 ?        01:06:25 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key 
--service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
root      193077  193019  0 Sep09 ?        00:00:21 kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k8s-master1 --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig --proxy-mode=iptables
root     1659636    1151  0 17:40 pts/0    00:00:00 grep --color=auto proxy

root@k8s-master1:~/project/hello-image# kubectl logs rke2-coredns-rke2-coredns-546587f99c-tqs89 -n kube-system
.:53
[INFO] plugin/reload: Running configuration SHA512 = c18591e7950724fe7f26bd172b7e98b6d72581b4a8fc4e5fc4cfd08229eea58f4ad043c9fd3dbd1110a11499c4aa3164cdd63ca0dd5ee59651d61756c4f671b7
CoreDNS-1.10.1
linux/amd64, go1.20.3 X:boringcrypto, 055b2c31
[ERROR] plugin/errors: 2 4662075983105844097.6607539516083766913. HINFO: read udp 10.42.0.15:55428->8.8.8.8:53: i/o timeout
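As an aside, the [ERROR] line above suggests CoreDNS cannot reach its upstream forwarder (8.8.8.8) for external names; that is separate from in-cluster service resolution, but it can be checked by dumping the Corefile (the ConfigMap name here is an assumption based on the default RKE2 CoreDNS Helm chart; adjust to your release):

```shell
# Dump the CoreDNS Corefile to see the 'forward' plugin's upstream servers.
# The ConfigMap name is assumed from the default RKE2 CoreDNS chart.
kubectl -n kube-system get configmap rke2-coredns-rke2-coredns -o yaml
```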

As described above, I cannot reach my application endpoint by its service name.
Expected behavior: to be able to reach my application endpoint by service name.

Actual behavior: cannot reach my application endpoint by service name.

Additional context / logs:

@brandond
Member

I created a deployment and a service for it. I can use the service cluster IP to access my pod, but I cannot use the service name; I cannot even resolve the built-in kubernetes service:

root@k8s-master1:~/project/hello-image# nslookup kubernetes
Server:		127.0.0.53
Address:	127.0.0.53#53

** server can't find kubernetes: SERVFAIL

root@k8s-master1:~/project/hello-image# nslookup kubernetes.default
Server:		127.0.0.53
Address:	127.0.0.53#53

** server can't find kubernetes.default: NXDOMAIN

Please read https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ . Note that DNS records for services and pods can only be resolved from WITHIN the cluster - in a pod. It is not expected that you would be able to resolve these records directly on the host.
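A quick way to confirm this is to run the same lookup from inside a throwaway pod, where /etc/resolv.conf points at the cluster DNS service rather than the host's stub resolver (a sketch; the pod name and busybox image tag are arbitrary choices, not from this thread):

```shell
# Launch a one-off pod and resolve the service name from inside the cluster.
# Inside the pod, resolv.conf points at the cluster DNS (10.43.0.10 here)
# with search domains under cluster.local, so the lookup should succeed.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup hello-image-service1.default.svc.cluster.local
```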
