【K8S】poststarthook/rbac/bootstrap-roles failed: not finished
The token-id and token-secret can be generated at random and written into bootstrap.secret.yaml; once the secret is recreated, the kubelet's CSR goes through and the cluster status can be queried again.
1. Querying the resource status returns no results
[root@K8S1 work]# kubectl get csr
No resources found
-- Check the logs
journalctl -u kube-apiserver.service --no-pager > 1.log
journalctl -u kubelet.service --no-pager > 2.log
vi 2.log
Jul 20 17:26:39 K8S1 kubelet[59786]: I0720 17:26:39.360809 59786 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jul 20 17:26:39 K8S1 kubelet[59786]: E0720 17:26:39.364491 59786 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Unauthorized
Jul 20 17:26:40 K8S1 kubelet[59786]: E0720 17:26:40.167353 59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 20 17:26:45 K8S1 kubelet[59786]: E0720 17:26:45.169147 59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 20 17:26:50 K8S1 kubelet[59786]: E0720 17:26:50.170266 59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 20 17:26:55 K8S1 kubelet[59786]: E0720 17:26:55.171148 59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
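The line that matters in 2.log is the Unauthorized error from certificate_manager.go; the repeated "cni plugin not initialized" messages are only a downstream symptom of the node never finishing its bootstrap. A quick way to pull out the relevant lines (plain grep, nothing cluster-specific assumed):
grep -E 'Unauthorized|certificate_manager' 2.log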
vi 1.log
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.687771 49383 healthz.go:257] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.729020 49383 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.736063 49383 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.736091 49383 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.783838 49383 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
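The apiserver does finish creating the system priority classes, but the rbac/bootstrap-roles hook keeps being reported as unfinished. The readiness endpoint lists exactly which post-start hooks are still failing and can be queried straight through kubectl (standard API health endpoint, no extra tooling assumed):
kubectl get --raw='/readyz?verbose'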
2. Cause analysis
The kubelet's certificate signing request is rejected as Unauthorized because the bootstrap token it presents does not match any valid bootstrap token secret in the kube-system namespace, so the token needs to be regenerated and the secret recreated.
vi bootstrap.secret.yaml
The token-id and token-secret can simply be generated at random and written into bootstrap.secret.yaml:
# Generate a 6-character token-id (characters drawn from the hex dump, matching [a-z0-9]{6})
TOKEN_ID=$(head -c 30 /dev/urandom | od -An -t x | tr -dc a-f3-9 | cut -c 3-8)
# Generate a 16-character hexadecimal token-secret
TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)
echo $TOKEN_ID $TOKEN_SECRET
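The two values are then written into bootstrap.secret.yaml. Below is a minimal sketch of that secret, assuming the conventional bootstrap-token layout from the TLS bootstrapping docs; the auth-extra-groups value is an assumption and must match the groups your ClusterRoleBindings actually grant, so adapt it to your existing file rather than copying it blindly:
cat > bootstrap.secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  # The name must be bootstrap-token-<token-id>
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "${TOKEN_ID}"
  token-secret: "${TOKEN_SECRET}"
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Assumed group; it must start with system:bootstrappers: and match your RBAC bindings
  auth-extra-groups: system:bootstrappers:default-node-token
EOF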
3. Recreate the bootstrap secret
kubectl delete -f bootstrap.secret.yaml
kubectl create -f bootstrap.secret.yaml
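Recreating the secret only fixes the server side. The kubelet authenticates with the token embedded in its bootstrap kubeconfig, so if that file still carries the old token it has to be updated to ${TOKEN_ID}.${TOKEN_SECRET} and the kubelet restarted. A sketch under the assumption that the bootstrap kubeconfig lives at /etc/kubernetes/bootstrap-kubelet.kubeconfig with a user named tls-bootstrap-token-user (both names are assumptions; use whatever your install scripts created):
# Point the bootstrap credentials at the new token (path and user name are assumptions)
kubectl config set-credentials tls-bootstrap-token-user \
  --token=${TOKEN_ID}.${TOKEN_SECRET} \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# Restart the kubelet so it retries the CSR with the new token
systemctl restart kubelet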
4. Re-check the cluster status
[root@K8S1 work]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-lsj5m   0s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bab759   <none>              Approved,Issued
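If the CSR shows up as Pending instead of Approved,Issued (for example when auto-approval RBAC is not in place), it can be approved by hand and the node should then register; standard kubectl commands:
kubectl certificate approve csr-lsj5m   # substitute the actual CSR name
kubectl get nodes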