Kubernetes Study Notes

This article uses Kubernetes version 1.23.6.

1. Introduction

Kubernetes is an open-source system for automatically deploying, scaling, and managing containerized applications; it supports automated deployment and large-scale elasticity.

2. Architecture


2.1 Control Plane

Makes global decisions for the cluster.

Controller Manager

The component that runs the controllers on the control-plane node, including:

  • Node Controller
  • Job Controller
  • Endpoints Controller
  • Service Account & Token Controllers

Etcd

The backing key-value store that holds all Kubernetes cluster data.

Scheduler

Watches newly created Pods with no node assigned and selects a node for them to run on.

API Server

Exposes the Kubernetes API.

2.2 Node

kubelet

The agent that runs on each node.

kube-proxy

The network proxy that runs on each node.
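
Once the cluster is up (see the installation sections below), most of these components are visible as Pods in the kube-system namespace; a quick way to look at them, assuming kubectl is already configured:

# control-plane components and kube-proxy run as Pods in kube-system
kubectl get pods -n kube-system -o wide
# the kubelet itself runs as a systemd service on each node, not as a Pod
systemctl status kubelet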

3. Installation

Node plan

Hostname     Host IP
k-master     124.223.4.217
k-cluster2   124.222.59.241
k-cluster3   150.158.24.200

3.1 Install Docker

Check the available Docker versions:

yum list docker-ce --showduplicates
#!/bin/bash
# remove old docker
yum remove docker \
        docker-client \
        docker-client-latest \
        docker-common \
        docker-latest \
        docker-latest-logrotate \
        docker-logrotate \
        docker-engine

# install dependents
yum install -y yum-utils

# set yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# install docker
yum -y install docker-ce-20.10.9-3.el7 docker-ce-cli-20.10.9-3.el7 containerd.io

# start
systemctl enable docker --now

# docker config (the registry mirror config below no longer works)
# sudo mkdir -p /etc/docker
# sudo tee /etc/docker/daemon.json <<-'EOF'
# {
#   "registry-mirrors": ["https://12sotewv.mirror.aliyuncs.com"],
#   "exec-opts": ["native.cgroupdriver=systemd"],
#   "log-driver": "json-file",
#   "log-opts": {
#     "max-size": "100m"
#   },
#   "storage-driver": "overlay2"
# }
# EOF

sudo systemctl daemon-reload
sudo systemctl enable docker --now

3.2 Installation prep

#!/bin/bash
# set SELinux permissive(disable)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# close swap
swapoff -a  
sed -ri 's/.*swap.*/#&/' /etc/fstab
# permit iptables
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
echo "1" > /proc/sys/net/ipv4/ip_forward
# flush
sudo sysctl --system
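# quick sanity check that the settings above took effect:
# the br_netfilter module should be listed
lsmod | grep br_netfilter
# both values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# swap should print nothing
swapon --show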

3.3 Install Kubernetes

#!/bin/bash
# set Kubernetes repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# install Kubernetes
sudo yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6 --disableexcludes=kubernetes
systemctl enable kubelet --now

4. Uninstall

yum -y remove kubelet kubeadm kubectl
sudo kubeadm reset -f
sudo rm -rvf $HOME/.kube
sudo rm -rvf ~/.kube/
sudo rm -rvf /etc/kubernetes/
sudo rm -rvf /etc/systemd/system/kubelet.service.d
sudo rm -rvf /etc/systemd/system/kubelet.service
sudo rm -rvf /usr/bin/kube*
sudo rm -rvf /etc/cni
sudo rm -rvf /opt/cni
sudo rm -rvf /var/lib/etcd
sudo rm -rvf /var/etcd

5. Initialize the control-plane node (master only)

Node plan

Hostname     Host IP
k-master     124.223.4.217
k-cluster2   124.222.59.241
k-cluster3   150.158.24.200

5.1 Initialization

kubeadm init \
--apiserver-advertise-address=10.0.4.6 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

5.2 Common initialization errors

[root@k-master ~]# kubeadm init \
> --apiserver-advertise-address=10.0.4.6 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.23.6 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
	[WARNING Hostname]: hostname "k-master" could not be reached
	[WARNING Hostname]: hostname "k-master": lookup k-master on 183.60.83.19:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

In that case, run the following command:

echo "1" > /proc/sys/net/ipv4/ip_forward

5.3 Successful initialization

[root@k-master ~]# kubeadm init \
> --apiserver-advertise-address=10.0.4.6 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.23.6 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
	[WARNING Hostname]: hostname "k-master" could not be reached
	[WARNING Hostname]: hostname "k-master": lookup k-master on 183.60.83.19:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 150.158.187.211]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k-master localhost] and IPs [150.158.187.211 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k-master localhost] and IPs [150.158.187.211 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.502637 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: llukay.o7amg6bstg9abts3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 150.158.187.211:6443 --token llukay.o7amg6bstg9abts3 \
    --discovery-token-ca-cert-hash sha256:2f6c42689f5d5189947239997224916c94003cf9ed92220487ace5032206b4b9

5.4 Follow-up steps

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

5.5 Install the network component (Calico)

curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
kubectl apply -f calico.yaml
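# wait for the Calico Pods to become Ready; the nodes should then report Ready as well
kubectl get pods -n kube-system | grep calico
kubectl get nodes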

5.6 Notes

If you forget the token, create a new one with the command below:

# print a fresh join command
kubeadm token create --print-join-command

After any failed init or join, reset before retrying:

kubeadm reset

5.7 Core file locations

/etc/kubernetes

[root@k-master ~]# cd /etc/kubernetes/
[root@k-master kubernetes]# ls
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
[root@k-master kubernetes]# cd manifests/
[root@k-master manifests]# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

5.8 Configure command auto-completion

yum -y install bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
source /usr/share/bash-completion/bash_completion

5.9 Allow scheduling on the master node

By default, Kubernetes does not schedule Pods onto the master node.

kubectl taint node k-master node-role.kubernetes.io/master:NoSchedule-

6. Joining nodes

[root@k-cluster1 ~]# kubeadm join 150.158.187.211:6443 --token llukay.o7amg6bstg9abts3 \
>     --discovery-token-ca-cert-hash sha256:2f6c42689f5d5189947239997224916c94003cf9ed92220487ace5032206b4b9
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
	[WARNING Hostname]: hostname "k-cluster1" could not be reached
	[WARNING Hostname]: hostname "k-cluster1": lookup k-cluster1 on 183.60.83.19:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

7. Removing a node

# mark the node unschedulable
kubectl cordon nodeName
# remove the node from the cluster
kubectl delete node nodeName
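# in practice it is safer to evict the workloads first; drain cordons the node and evicts its Pods
# (flags as of kubectl 1.23; --delete-emptydir-data discards emptyDir contents)
kubectl drain nodeName --ignore-daemonsets --delete-emptydir-data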

A removed node must reset kubeadm before it can rejoin the cluster:

kubeadm reset

8. Enable IPVS mode

kubectl edit cm kube-proxy -n kube-system

Change mode to "ipvs":

mode: "ipvs"

Then delete the existing kube-proxy Pods so they restart with the new mode:

[root@VM-4-6-centos ~]# kubectl delete pod kube-proxy-frlpp kube-proxy-r4kpw kube-proxy-xk2ss -n kube-system
pod "kube-proxy-frlpp" deleted
pod "kube-proxy-r4kpw" deleted
pod "kube-proxy-xk2ss" deleted

The logs should then look like this:

[root@k-master ~]# kubectl logs kube-proxy-wbgnj -n kube-system
I0417 15:28:54.174178       1 node.go:172] Successfully retrieved node IP: 124.222.59.241
I0417 15:28:54.174245       1 server_others.go:142] kube-proxy node IP is an IPv4 address (124.222.59.241), assume IPv4 operation
I0417 15:28:54.205700       1 server_others.go:258] Using ipvs Proxier.
E0417 15:28:54.205937       1 proxier.go:389] can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1
W0417 15:28:54.206077       1 proxier.go:445] IPVS scheduler not specified, use rr by default
I0417 15:28:54.206272       1 server.go:650] Version: v1.20.9
I0417 15:28:54.206603       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0417 15:28:54.206889       1 config.go:315] Starting service config controller
I0417 15:28:54.206909       1 shared_informer.go:240] Waiting for caches to sync for service config
I0417 15:28:54.206940       1 config.go:224] Starting endpoint slice config controller
I0417 15:28:54.206949       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0417 15:28:54.307063       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0417 15:28:54.307152       1 shared_informer.go:247] Caches are synced for service config

9. kubectl commands

9.1 Get resources

1. Get node info
# get node info (only works on the master node for now)
[root@VM-4-6-centos ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k-cluster1   Ready    <none>                 5m40s   v1.20.9
k-cluster2   Ready    <none>                 5m20s   v1.20.9
k-master     Ready    control-plane,master   6m59s   v1.20.9
# change a node's Role label
kubectl label node k-cluster1 node-role.kubernetes.io/cluster1=
[root@VM-4-6-centos ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k-cluster1   Ready    cluster1               6m40s   v1.20.9
k-cluster2   Ready    cluster2               6m20s   v1.20.9
k-master     Ready    control-plane,master   7m59s   v1.20.9
2. Get Pod info
# get Pod info for the whole cluster
[root@VM-4-6-centos ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-577f77cb5c-8gmjp   1/1     Running   0          6m45s
kube-system   calico-node-lxzpz                          1/1     Running   0          5m44s
kube-system   calico-node-qxvfm                          1/1     Running   0          6m4s
kube-system   calico-node-vvmgh                          1/1     Running   0          6m46s
kube-system   coredns-7f89b7bc75-cj6rt                   1/1     Running   0          7m6s
kube-system   coredns-7f89b7bc75-ctb8l                   1/1     Running   0          7m6s
kube-system   etcd-k-master                              1/1     Running   0          7m14s
kube-system   kube-apiserver-k-master                    1/1     Running   0          7m14s
kube-system   kube-controller-manager-k-master           1/1     Running   0          7m14s
kube-system   kube-proxy-fc64c                           1/1     Running   0          3m10s
kube-system   kube-proxy-g2sj2                           1/1     Running   0          3m9s
kube-system   kube-proxy-gcjfc                           1/1     Running   0          3m9s
kube-system   kube-scheduler-k-master                    1/1     Running   0          7m14s
##################################### Flags ################################################
-A  list Pods in all namespaces, including kube-system
3. Get Deployment info
[root@k-master ~]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           11m

9.2 Create resources

1. Create a Deployment

Deployments are self-healing, and a Deployment's Pod IPs are reachable from any node in the cluster.

[root@VM-4-6-centos ~]# kubectl create deploy nginx --image=nginx
deployment.apps/nginx created
[root@VM-4-6-centos ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-cjmsq   1/1     Running   0          9s
2. Create a Pod

A Pod started this way has no self-healing.

kubectl run nginx1 --image=nginx

9.3 Delete resources

1. Delete a Deployment
kubectl delete deploy deployName
2. Delete a Pod
kubectl delete pod podName -n nameSpace

9.4 Resource logs / description

Pod logs / description

# view a Pod's logs
kubectl logs podName -n nameSpace
# describe a Pod (spec and events)
kubectl describe pod podName -n nameSpace

9.5 Exec into a Pod

[root@k-master ~]# kubectl exec -it nginx-6799fc88d8-cjmsq -- /bin/bash
root@nginx-6799fc88d8-cjmsq:/# ls
bin  boot  dev	docker-entrypoint.d  docker-entrypoint.sh  etc	home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@nginx-6799fc88d8-cjmsq:/# exit
exit

Common issue 1

Sometimes you may hit the following error:

[root@k-cluster2 ~]# kubectl exec -it nginx-6799fc88d8-cjmsq -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?

In that case, copy /etc/kubernetes/admin.conf from the master node to the same path on the worker node:

scp admin.conf root@150.158.24.200:/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

After that, exec-ing into the container no longer errors:

[root@k-cluster2 ~]# kubectl exec -it nginx-6799fc88d8-cjmsq -- bash
root@nginx-6799fc88d8-cjmsq:/# ls
bin  boot  dev	docker-entrypoint.d  docker-entrypoint.sh  etc	home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@nginx-6799fc88d8-cjmsq:/# exit
exit

9.6 Scaling

1. Scale up
[root@k-cluster2 ~]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           19h
[root@k-cluster2 ~]# kubectl scale --replicas=3 deploy nginx
deployment.apps/nginx scaled
[root@k-cluster2 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-cjmsq   1/1     Running   0          19h
nginx-6799fc88d8-xg7rr   1/1     Running   0          20s
nginx-6799fc88d8-zdd9c   1/1     Running   0          20s
2. Scale down
[root@k-cluster2 ~]# kubectl scale --replicas=1 deploy nginx
deployment.apps/nginx scaled
[root@k-cluster2 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-cjmsq   1/1     Running   0          19h

9.7 Labels

1. Add a label
[root@k-cluster2 ~]# kubectl get pods --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   LABELS
nginx-6799fc88d8-cjmsq   1/1     Running   0          20h   app=nginx,pod-template-hash=6799fc88d8
nginx-6799fc88d8-d2p54   1/1     Running   0          31m   app=nginx,pod-template-hash=6799fc88d8
nginx-6799fc88d8-vwrxp   1/1     Running   0          31m   app=nginx,pod-template-hash=6799fc88d8
[root@k-cluster2 ~]# kubectl label pod nginx-6799fc88d8-cjmsq hello=666
pod/nginx-6799fc88d8-cjmsq labeled
[root@k-cluster2 ~]# kubectl get pods --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   LABELS
nginx-6799fc88d8-cjmsq   1/1     Running   0          20h   app=nginx,hello=666,pod-template-hash=6799fc88d8
nginx-6799fc88d8-d2p54   1/1     Running   0          34m   app=nginx,pod-template-hash=6799fc88d8
nginx-6799fc88d8-vwrxp   1/1     Running   0          34m   app=nginx,pod-template-hash=6799fc88d8
2. Overwrite a label
[root@k-cluster2 ~]# kubectl label pod nginx-6799fc88d8-cjmsq --overwrite hello=6699
pod/nginx-6799fc88d8-cjmsq labeled
[root@k-cluster2 ~]# kubectl get pods --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   LABELS
nginx-6799fc88d8-cjmsq   1/1     Running   0          20h   app=nginx,hello=6699,pod-template-hash=6799fc88d8
nginx-6799fc88d8-d2p54   1/1     Running   0          38m   app=nginx,pod-template-hash=6799fc88d8
nginx-6799fc88d8-vwrxp   1/1     Running   0          38m   app=nginx,pod-template-hash=6799fc88d8
3. Remove a label
[root@k-cluster2 ~]# kubectl label pod nginx-6799fc88d8-cjmsq hello-
pod/nginx-6799fc88d8-cjmsq labeled
[root@k-cluster2 ~]# kubectl get pods --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   LABELS
nginx-6799fc88d8-cjmsq   1/1     Running   0          20h   app=nginx,pod-template-hash=6799fc88d8
nginx-6799fc88d8-d2p54   1/1     Running   0          36m   app=nginx,pod-template-hash=6799fc88d8
nginx-6799fc88d8-vwrxp   1/1     Running   0          36m   app=nginx,pod-template-hash=6799fc88d8

9.8 Service

Expose a Deployment as a Service; the Service load-balances across the Pods automatically (round robin).

[root@k-cluster2 ~]# kubectl expose deploy nginx --port=8080 --target-port=80 --type=NodePort
service/nginx exposed
[root@k-cluster2 ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          20h
nginx        NodePort    10.96.156.44   <none>        8080:30724/TCP   10s
[root@k-cluster2 ~]# kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-cjmsq   1/1     Running   0          19h
pod/nginx-6799fc88d8-d2p54   1/1     Running   0          7m19s
pod/nginx-6799fc88d8-vwrxp   1/1     Running   0          7m19s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          20h
service/nginx        NodePort    10.96.156.44   <none>        8080:30724/TCP   93s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   3/3     3            3           19h

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-6799fc88d8   3         3         3       19h
###################################### Flags #########################################
# --type=ClusterIP: allocate a virtual IP reachable only inside the cluster
# --type=NodePort: open a port on every Node as an external entry point

http://150.158.24.200:30724/ (over the public IP the load balancing is sometimes hard to notice, apparently due to HTTP keep-alive; refresh again after a while)

9.9 Rolling update (gradual update)

The target Deployment is updated one Pod at a time (start a new one, remove an old one), repeating until all Pods are replaced.

[root@k-master ~]# kubectl set image deploy nginx nginx=nginx:1.24.0 --record
deployment.apps/nginx image updated
[root@k-master ~]# kubectl get pods --show-labels
NAME                    READY   STATUS    RESTARTS   AGE   LABELS
nginx-c48b5bb67-ncwpw   1/1     Running   0          28s   app=nginx,pod-template-hash=c48b5bb67
nginx-c48b5bb67-snkvf   1/1     Running   0          9s    app=nginx,pod-template-hash=c48b5bb67
nginx-c48b5bb67-x469l   1/1     Running   0          61s   app=nginx,pod-template-hash=c48b5bb67
#################################### Flags ########################################
# --record: record this command as the change cause

Get the rollout history:

[root@k-master ~]# kubectl rollout history deployment nginx
deployment.apps/nginx 
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         kubectl set image deploy nginx nginx=nginx:1.23.0 --record=true

9.10 Rollback

[root@k-master ~]# kubectl rollout history deployment nginx
deployment.apps/nginx 
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deploy nginx nginx=nginx:1.23.0 --record=true
5         kubectl set image deploy nginx nginx=nginx:1.24.0 --record=true

[root@k-master ~]# kubectl rollout undo deployment nginx 
deployment.apps/nginx rolled back
[root@k-master ~]# kubectl rollout history deployment nginx
deployment.apps/nginx 
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl set image deploy nginx nginx=nginx:1.24.0 --record=true
6         kubectl set image deploy nginx nginx=nginx:1.23.0 --record=true
#################################### Flags ########################################
# --to-revision: roll back to the given revision number
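# for example, roll back to revision 3 explicitly
kubectl rollout undo deployment nginx --to-revision=3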

9.11 Taints / cordoning

# mark the node unschedulable (adds the node.kubernetes.io/unschedulable:NoSchedule taint)
kubectl cordon nodeName
# make the node schedulable again (removes that taint)
kubectl uncordon nodeName
# cordon the node and also evict the Pods already running on it
kubectl drain nodeName

10. Install the dashboard

10.1 Apply the manifest

[root@k-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k-master ~]# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-577f77cb5c-5jwch     1/1     Running   0          17m
kube-system            calico-node-9wphc                            1/1     Running   0          17m
kube-system            calico-node-gkjpt                            0/1     Running   0          17m
kube-system            calico-node-qv47h                            1/1     Running   0          17m
kube-system            calico-node-rvcrh                            1/1     Running   0          17m
kube-system            coredns-7f89b7bc75-bbkfz                     1/1     Running   0          28m
kube-system            coredns-7f89b7bc75-m46t5                     1/1     Running   0          28m
kube-system            etcd-k-master                                1/1     Running   0          28m
kube-system            kube-apiserver-k-master                      1/1     Running   0          28m
kube-system            kube-controller-manager-k-master             1/1     Running   0          28m
kube-system            kube-proxy-87hgc                             1/1     Running   0          26m
kube-system            kube-proxy-ksk4m                             1/1     Running   0          28m
kube-system            kube-proxy-nkmsl                             1/1     Running   0          26m
kube-system            kube-proxy-w4qdj                             1/1     Running   0          26m
kube-system            kube-scheduler-k-master                      1/1     Running   0          28m
kubernetes-dashboard   dashboard-metrics-scraper-79c5968bdc-mdhxx   1/1     Running   0          55s
kubernetes-dashboard   kubernetes-dashboard-658485d5c7-k2ds5        1/1     Running   0          55s

Set the access port:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort.

10.2 Create an access user

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

10.3 Get the access token

kubectl -n kubernetes-dashboard describe secret
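# assumption: the manifest from 10.2 was saved as dashboard-admin.yaml (file name is illustrative)
kubectl apply -f dashboard-admin.yaml
# print only the admin-user token instead of describing every secret
# (works on 1.23, where ServiceAccount token secrets are still auto-created)
kubectl -n kubernetes-dashboard get secret \
    $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
    -o go-template='{{.data.token | base64decode}}'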

10.4 Use the dashboard

Check the dashboard port:

[root@k-master ~]# kubectl get svc -A |grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.96.215.168   <none>        8000/TCP                 3m34s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.96.102.122   <none>        443:32121/TCP            3m35s

https://124.223.4.217:32121/

11. Pod

11.0 Common commands

# dump a resource as YAML
kubectl get xxx -oyaml
# explain a given field
kubectl explain xxx.xxx

You can install the Kubernetes plugin for IntelliJ IDEA: File -> Settings -> Plugins -> Marketplace.


11.1 Pod manifest explained

# API version; can be looked up with kubectl api-resources
apiVersion: v1
# resource type, here Pod
kind: Pod
# metadata
metadata:
  # Pod name
  name: multi-pods
  # Pod labels
  labels:
    # label (key: value)
    app: multi-pods
# desired state
spec:
  # the Pod's hostname; combined with a headless Service it is reachable as hostname.subdomain.namespace
  hostname: busybox-1
  # must match the Service name
  subdomain: default-subdomain
  # image pull secrets (a Pod-level field, not per container)
  imagePullSecrets:
    - name: aliyun-docker
  # volumes
  volumes:
      # volume name
    - name: nginx-data
      # volume type (emptyDir is a temporary directory that lives as long as the Pod)
      emptyDir: {}
  # containers
  containers:
      # container name
    - name: nginx-pod
      # container image
      image: nginx:1.22.0
      # image pull policy
      imagePullPolicy: IfNotPresent
      # environment variables
      env:
          # variable name and value
        - name: hello
          value: "world"
      # volume mounts
      volumeMounts:
          # mount path inside the container
        - mountPath: /usr/share/nginx/html
          # name of the volume to mount
          name: nginx-data
      # container lifecycle hooks (each hook takes exactly one handler: exec or httpGet)
      lifecycle:
        # after the container starts
        postStart:
          # command handler
          exec:
            # the command
            command: ["/bin/sh", "-c", "echo helloworld"]
          # HTTP handler (shown for reference only; a hook may define either exec or httpGet, not both)
          # httpGet:
          #   # protocol
          #   scheme: HTTP
          #   # host IP
          #   host: 127.0.0.1
          #   # port
          #   port: 80
          #   # path
          #   path: /abc.html
        # before the container stops
        preStop:
          # command handler
          exec:
            # the command
            command: ["/bin/sh", "-c", "echo helloworld"]
      # container resources
      resources:
        # resources requested at scheduling time
        requests:
          # memory request
          memory: "200Mi"
          # CPU request
          cpu: "700m"
        # maximum allowed resources
        limits:
          # memory limit
          memory: "200Mi"
          # CPU limit
          cpu: "700m"
      # startup probe (likewise exactly one handler: exec, httpGet or tcpSocket)
      startupProbe:
        # HTTP handler
        httpGet:
          # protocol
          scheme: HTTP
          # host IP
          host: 127.0.0.1
          # port
          port: 80
          # path
          path: /abc.html
      # liveness probe
      livenessProbe:
        # command handler
        exec:
          # the command
          command: ["/bin/sh", "-c", "echo helloworld"]
        # delay before the first probe (s)
        initialDelaySeconds: 2
        # probe interval (s)
        periodSeconds: 5
        # probe timeout (s)
        timeoutSeconds: 5
        # success threshold (must be 1 for liveness and startup probes)
        successThreshold: 1
        # failure threshold
        failureThreshold: 5
      # readiness probe (same fields as livenessProbe)
      readinessProbe:
        httpGet:
          port: 80
          path: /
      # container name
    - name: alpine-pod
      # container image
      image: alpine
      # volume mounts
      volumeMounts:
          # mount path inside the container
        - mountPath: /app
          # name of the volume to mount
          name: nginx-data
      # container startup command
      command: ["/bin/sh", "-c", "while true; do sleep 1; date > /app/index.html; done;"]
  # restart policy
  restartPolicy: Always

11.2 Image pull secret

Secrets are not shared across namespaces.

kubectl create secret docker-registry aliyun-docker \
    --docker-server=registry.cn-hangzhou.aliyuncs.com \
    --docker-username=ialso \
    --docker-password=xumeng2233. \
    --docker-email=2750955630@qq.com
[root@k-master ~]# kubectl create secret docker-registry aliyun-docker \
>     --docker-server=registry.cn-hangzhou.aliyuncs.com \
>     --docker-username=ialso \
>     --docker-password=xumeng2233. \
>     --docker-email=2750955630@qq.com
secret/aliyun-docker created
[root@k-master ~]# kubectl get secrets 
NAME                  TYPE                                  DATA   AGE
aliyun-docker         kubernetes.io/dockerconfigjson        1      4s
default-token-chtr2   kubernetes.io/service-account-token   3      3d20h

Test the secret:

vim vue-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vue-pod
  labels:
    app: vue
spec:
  containers:
    - name: vue-demo
      image: registry.cn-hangzhou.aliyuncs.com/ialso/vue-demo
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
  imagePullSecrets:
    - name: aliyun-docker
[root@k-master ~]# kubectl apply -f vue-pod.yaml 
pod/vue-pod created
[root@k-master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5c8bc489f8-fs684   1/1     Running   0          33m
nginx-5c8bc489f8-t7vbh   1/1     Running   0          33m
nginx-5c8bc489f8-wm8pp   1/1     Running   0          34m
vue-pod                  1/1     Running   0          8s
[root@k-master ~]# kubectl describe pod vue-pod 
Name:         vue-pod
Namespace:    default
Priority:     0
Node:         k-cluster2/10.0.4.12
Start Time:   Sat, 22 Apr 2023 00:07:15 +0800
Labels:       app=vue
Annotations:  cni.projectcalico.org/containerID: c665663968023e7fc6c30b29d6add8393865452035d4e1e70f5641a03f7d1cb9
              cni.projectcalico.org/podIP: 192.168.2.217/32
              cni.projectcalico.org/podIPs: 192.168.2.217/32
Status:       Running
IP:           192.168.2.217
IPs:
  IP:  192.168.2.217
Containers:
  vue-demo:
    Container ID:   docker://ed88adae5082fef2104251f05e43a785a47dde99ba73f99b0bdc64cbf135a8db
    Image:          registry.cn-hangzhou.aliyuncs.com/ialso/vue-demo
    Image ID:       docker-pullable://registry.cn-hangzhou.aliyuncs.com/ialso/vue-demo@sha256:7747ada548c0ae3efcaa167c9ef1337414d1005232fb41c649b121fe8524f264
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 22 Apr 2023 00:07:17 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-chtr2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-chtr2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-chtr2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  30s   default-scheduler  Successfully assigned default/vue-pod to k-cluster2
  Normal  Pulling    29s   kubelet            Pulling image "registry.cn-hangzhou.aliyuncs.com/ialso/vue-demo"
  Normal  Pulled     28s   kubelet            Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/ialso/vue-demo" in 904.180724ms
  Normal  Created    28s   kubelet            Created container vue-demo
  Normal  Started    28s   kubelet            Started container vue-demo

11.3 Environment variables

vim mysql-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
  labels:
    app: mysql-pod
spec:
  containers:
    - name: mysql-pod
      image: mysql:5.7
      imagePullPolicy: IfNotPresent
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
  restartPolicy: Always
[root@k-master ~]# kubectl apply -f mysql-pod.yaml 
pod/mysql-pod created
[root@k-master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mysql-pod                1/1     Running   0          45s
nginx-5c8bc489f8-fs684   1/1     Running   0          49m
nginx-5c8bc489f8-t7vbh   1/1     Running   0          49m
nginx-5c8bc489f8-wm8pp   1/1     Running   0          49m
vue-pod                  1/1     Running   0          15m
[root@k-master ~]# kubectl describe pod mysql-pod 
Name:         mysql-pod
Namespace:    default
Priority:     0
Node:         k-cluster2/10.0.4.12
Start Time:   Sat, 22 Apr 2023 00:22:16 +0800
Labels:       app=mysql-pod
Annotations:  cni.projectcalico.org/containerID: 361923ab5ad4dd7f40021cf69c4e36d963e346081f6339adc4080d76166da09d
              cni.projectcalico.org/podIP: 192.168.2.218/32
              cni.projectcalico.org/podIPs: 192.168.2.218/32
Status:       Running
IP:           192.168.2.218
IPs:
  IP:  192.168.2.218
Containers:
  mysql-pod:
    Container ID:   docker://a3d39d09d02ce0e71680d7947d70e929c0de985b3f858832b9211cca6b8506cd
    Image:          mysql:5.7
    Image ID:       docker-pullable://mysql@sha256:f2ad209efe9c67104167fc609cca6973c8422939491c9345270175a300419f94
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 22 Apr 2023 00:22:50 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      MYSQL_ROOT_PASSWORD:  root
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-chtr2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-chtr2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-chtr2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  39s   default-scheduler  Successfully assigned default/mysql-pod to k-cluster2
  Normal  Pulling    38s   kubelet            Pulling image "mysql:5.7"
  Normal  Pulled     6s    kubelet            Successfully pulled image "mysql:5.7" in 32.202053959s
  Normal  Created    5s    kubelet            Created container mysql-pod
  Normal  Started    5s    kubelet            Started container mysql-pod
[root@k-master ~]# kubectl exec -it mysql-pod -- bash
root@mysql-pod:/# mysql -uroot -proot
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit
Bye
root@mysql-pod:/# exit
exit

11.4 Container startup command

vim nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-pod
spec:
  containers:
    - name: nginx-pod
      image: nginx:1.22.0
      imagePullPolicy: IfNotPresent
      env:
        - name: message
          value: "hello world"
      command:
        - /bin/bash
        - -c
        - "echo $(message);sleep 1000"
  restartPolicy: Always
[root@k-master ~]# kubectl apply -f nginx-pod.yaml 
pod/nginx-pod created
[root@k-master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mysql-pod                1/1     Running   0          15m
nginx-pod                1/1     Running   0          65s
nginx-5c8bc489f8-fs684   1/1     Running   0          64m
nginx-5c8bc489f8-t7vbh   1/1     Running   0          64m
nginx-5c8bc489f8-wm8pp   1/1     Running   0          64m
vue-pod                  1/1     Running   0          30m
[root@k-master ~]# kubectl logs nginx-pod
hello world

11.5 Lifecycle hooks (container)

vim nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-pod
spec:
  containers:
    - name: nginx-pod
      image: nginx:1.22.0
      imagePullPolicy: IfNotPresent
      lifecycle:
        postStart:
          exec:
            command:
              - "/bin/bash"
              - "-c"
              - "echo hello postStart"
        preStop:
          exec:
            command: ["/bin/bash", "-c", "echo hello preStop"]
  restartPolicy: Always

11.6 Lifecycle (Pod): init containers

Init containers (initContainers) run to completion, one after another, before the app containers start; a minimal sketch follows.
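
A minimal sketch (image and command here are illustrative): the init container must exit successfully before the nginx container is started.

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  # init containers run first, in order, and must complete successfully
  initContainers:
    - name: wait-a-bit
      image: busybox:1.28
      command: ["/bin/sh", "-c", "echo init done; sleep 5"]
  # app containers start only after all init containers have finished
  containers:
    - name: nginx
      image: nginx:1.22.0
  restartPolicy: Always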

11.7 Probes (container)

Containers support startupProbe, livenessProbe and readinessProbe (see the fields in 11.1); a minimal sketch follows.
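
A minimal sketch against nginx (paths and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: nginx
      image: nginx:1.22.0
      # restart the container when this check keeps failing
      livenessProbe:
        httpGet:
          port: 80
          path: /
        initialDelaySeconds: 3
        periodSeconds: 5
      # only route Service traffic to the Pod while this check passes
      readinessProbe:
        httpGet:
          port: 80
          path: /
        periodSeconds: 5
  restartPolicy: Always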

11.8 Resource limits

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-pod
spec:
  containers:
    - name: nginx-pod
      image: nginx:1.22.0
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          memory: "200Mi"
          cpu: "700m"
        limits:
          memory: "500Mi"
          cpu: "1000m"
  restartPolicy: Always

11.9 Multiple containers

apiVersion: v1
kind: Pod
metadata:
  name: multi-pods
  labels:
    app: multi-pods
spec:
  volumes:
    - name: nginx-data
      emptyDir: {}
  containers:
    - name: nginx-pod
      image: nginx:1.22.0
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
    - name: alpine-pod
      image: alpine
      volumeMounts:
        - mountPath: /app
          name: nginx-data
      command: ["/bin/sh", "-c", "while true; do sleep 1; date > /app/index.html; done;"]
  restartPolicy: Always


[root@k-master ~]# kubectl apply -f multi-pod.yaml 
pod/multi-pods created
[root@k-master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
multi-pods               2/2     Running   0          44s     192.168.2.225   k-cluster2   <none>           <none>
mysql-pod                1/1     Running   0          13h     192.168.2.218   k-cluster2   <none>           <none>
nginx-5c8bc489f8-fs684   1/1     Running   0          14h     192.168.2.216   k-cluster2   <none>           <none>
nginx-5c8bc489f8-t7vbh   1/1     Running   0          14h     192.168.2.215   k-cluster2   <none>           <none>
nginx-5c8bc489f8-wm8pp   1/1     Running   0          14h     192.168.2.214   k-cluster2   <none>           <none>
nginx-pod                1/1     Running   0          4h15m   192.168.2.223   k-cluster2   <none>           <none>
[root@k-master ~]# curl 192.168.2.225
Sat Apr 22 06:20:10 UTC 2023
[root@k-master ~]# curl 192.168.2.225
Sat Apr 22 06:20:11 UTC 2023
[root@k-master ~]# curl 192.168.2.225
Sat Apr 22 06:20:12 UTC 2023

With multiple containers, use -c to choose which container to exec into:

[root@k-master ~]# kubectl exec -it multi-pods -c alpine-pod -- sh
/ # cat /app/index.html 
Sat Apr 22 06:41:45 UTC 2023
/ # exit

11.10 Static Pods

Any manifest placed under /etc/kubernetes/manifests is started automatically by the kubelet on that node; even if the Pod is deleted, a new one is started again.
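
For example, dropping a manifest like this onto a node (the file name is illustrative) makes the kubelet run it as a static Pod; it then shows up in kubectl get pods with the node name appended, e.g. static-nginx-k-cluster2.

# /etc/kubernetes/manifests/static-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.22.0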

12. Deployment

12.1 Deployment manifest explained

# API version; can be looked up with kubectl api-resources
apiVersion: apps/v1
# resource type, here Deployment
kind: Deployment
# metadata
metadata:
  # Deployment name
  name: deployment1
  # Deployment labels
  labels:
    # label (key: value)
    app: deployment1
# desired state
spec:
  # whether rollouts are paused
  # kubectl rollout pause deployment deployment1 pauses updates; kubectl rollout resume deployment deployment1 resumes them
  paused: false
  # number of replicas
  replicas: 2
  # how many old revisions to keep for rollbacks
  revisionHistoryLimit: 10
  # update strategy
  strategy:
    # Recreate (stop everything, then start new Pods) or RollingUpdate (rolling update)
    type: RollingUpdate
    rollingUpdate:
      # how many extra Pods may exist above the desired count during the update (number or percentage)
      maxSurge: 1
      # how many Pods may be unavailable during the update (number or percentage)
      maxUnavailable: 1
  # template for the Pods managed by this Deployment
  template:
    # metadata
    metadata:
      # Pod name
      name: deployment1
      # Pod labels
      labels:
        # label (key: value)
        app: deployment1
    # desired state
    spec:
      # containers
      containers:
          # container name
        - name: deployment1
          # container image
          image: nginx:alpine
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
  # selector: must match the labels of the Pod template above
  selector:
    # simple selector
    matchLabels:
      app: deployment1
    # expression selector (shown for reference; note the field is "values", and every
    # requirement must still match the template labels, so it is commented out here)
    # matchExpressions:
    #   - key: podName
    #     operator: In
    #     values: [aaa, bbb]
[root@k-master ~]# kubectl apply -f deployment1.yaml 
deployment.apps/deployment1 created
[root@k-master ~]# kubectl get all
NAME                               READY   STATUS    RESTARTS   AGE
pod/deployment1-58484bd895-pgtvr   1/1     Running   0          64s
pod/deployment1-58484bd895-s4q8h   1/1     Running   0          64s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d16h

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment1   2/2     2            2           64s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/deployment1-58484bd895   2         2         2       64s

12.2 Dynamic scaling (HPA)

Download the YAML:

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml

Modify the YAML:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                k8s-app: metrics-server
            namespaces:
            - kube-system
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls  # added flag: skip kubelet TLS verification
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.3 # changed to a reachable mirror registry
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: metrics-server
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Apply the manifest:

[root@VM-4-6-centos ~]# kubectl apply -f high-availability-1.21+.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
poddisruptionbudget.policy/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
# node resource usage
[root@VM-4-6-centos ~]# kubectl top nodes
W0422 22:15:12.979484   30366 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k-cluster1   115m         5%     1038Mi          54%       
k-cluster2   121m         1%     1514Mi          9%        
k-master     286m         14%    1296Mi          68%
# Pod resource usage
[root@VM-4-6-centos ~]# kubectl top pods -n kube-system
W0422 22:15:32.246417   30695 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME                                       CPU(cores)   MEMORY(bytes)   
calico-kube-controllers-594649bd75-sldhx   2m           23Mi            
calico-node-652kq                          30m          85Mi            
calico-node-874nz                          28m          66Mi            
calico-node-slsx5                          31m          131Mi           
coredns-545d6fc579-w67ms                   2m           19Mi            
coredns-545d6fc579-zlpv7                   3m           16Mi            
etcd-k-master                              15m          46Mi            
kube-apiserver-k-master                    114m         395Mi           
kube-controller-manager-k-master           18m          88Mi            
kube-proxy-2kkkx                           8m           26Mi            
kube-proxy-75gq5                           9m           24Mi            
kube-proxy-q4694                           4m           19Mi            
kube-scheduler-k-master                    4m           37Mi            
metrics-server-74b8bdc985-4g62c            2m           20Mi            
metrics-server-74b8bdc985-h84gz            3m           17Mi

Autoscaling configuration:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hap1
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deployment1
  targetCPUUtilizationPercentage: 50
[root@VM-4-6-centos ~]# kubectl apply -f hap1.yaml 
horizontalpodautoscaler.autoscaling/hap1 created
[root@VM-4-6-centos ~]# kubectl get HorizontalPodAutoscaler
NAME   REFERENCE                TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
hap1   Deployment/deployment1   <unknown>/50%   1         10        2          27s
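# Note: TARGETS stays <unknown> until metrics arrive, and it remains <unknown> if the target
# Deployment's containers declare no CPU request to compute utilization against.
# A minimal fix (sketch, for deployment1 from 12.1) is to add a request, e.g.:
#   resources:
#     requests:
#       cpu: 100m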

12.3 Canary deployment

A rolling update gives no control over how long the transition lasts; for a canary release, put several Deployments behind the same Service, as sketched below.
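
A minimal sketch, assuming an existing stable Deployment whose Pods carry the label app: nginx-canary-demo (all names here are illustrative). The canary Deployment reuses the same label with fewer replicas, so the Service spreads traffic roughly in proportion to replica counts; promoting or rolling back the canary is then just scaling or deleting this Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
spec:
  # a small replica count keeps the canary's share of traffic small
  replicas: 1
  selector:
    matchLabels:
      app: nginx-canary-demo
  template:
    metadata:
      labels:
        # same label as the stable Deployment, so the Service selects both
        app: nginx-canary-demo
        track: canary
    spec:
      containers:
        - name: nginx
          # the new version being tested
          image: nginx:1.24.0
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-canary-demo
spec:
  # selects the stable and the canary Pods alike
  selector:
    app: nginx-canary-demo
  ports:
    - port: 80
      targetPort: 80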

12.4 DaemonSet

Runs one Pod replica on every node (the master is excluded by default because of its NoSchedule taint); commonly used for monitoring and log collection.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset1
  labels:
    app: daemonset1
spec:
  template:
    metadata:
      name: daemonset1
      labels:
        app: daemonset-nginx
    spec:
      containers:
        - name: nginx-pod
          image: nginx:alpine
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
  selector:
    matchLabels:
      app: daemonset-nginx
[root@k-master ~]# kubectl apply -f daemonset1.yaml 
daemonset.apps/daemonset1 created
[root@k-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
daemonset1-chzp6               1/1     Running   0          34s
daemonset1-wkj7r               1/1     Running   0          34s
deployment1-58484bd895-qt9nd   1/1     Running   0          23h
deployment1-58484bd895-zlb4c   1/1     Running   0          23h

12.5 StatefulSet

A Deployment is for stateless applications (network, storage and ordering may change; typically business workloads).

A StatefulSet is for stateful applications (stable network identity, stable storage, stable ordering; typically databases and middleware).

apiVersion: v1
kind: Service
metadata:
  name: statefulset-serves
spec:
  selector:
    app: statefulset
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: statefulset
  serviceName: statefulset-serves
  template:
    metadata:
      labels:
        app: statefulset
    spec:
      containers:
        - name: nginx
          image: nginx:1.24.0
[root@k-master ~]# kubectl apply -f statefulset.yaml 
service/statefulset-serves created
statefulset.apps/statefulset created
[root@k-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
daemonset1-chzp6               1/1     Running   0          42m
daemonset1-wkj7r               1/1     Running   0          42m
deployment1-58484bd895-qt9nd   1/1     Running   0          23h
deployment1-58484bd895-zlb4c   1/1     Running   0          23h
statefulset-0                  1/1     Running   0          109s
statefulset-1                  1/1     Running   0          107s
statefulset-2                  1/1     Running   0          105s
[root@k-master ~]# curl statefulset-0.statefulset-serves
curl: (6) Could not resolve host: statefulset-0.statefulset-serves; Unknown error
[root@k-master ~]# kubectl exec -it daemonset1-chzp6 -- /bin/sh
/ # curl statefulset-0.statefulset-serves
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ # exit

12.6 Job

apiVersion: batch/v1
kind: Job
metadata:
  name: job-test
  labels:
    app: job-test
spec:
  completions: 2
  template:
    metadata:
      name: job-pod
      labels:
        app: job1
    spec:
      containers:
        - name: job-container
          image: alpine
      restartPolicy: Never
[root@k-master ~]# kubectl apply -f job1.yaml 
job.batch/job-test created
[root@k-master ~]# kubectl get pods
NAME                           READY   STATUS      RESTARTS   AGE
daemonset1-chzp6               1/1     Running     0          84m
daemonset1-wkj7r               1/1     Running     0          84m
deployment1-58484bd895-qt9nd   1/1     Running     0          24h
deployment1-58484bd895-zlb4c   1/1     Running     0          24h
job-test-mt6ds                 0/1     Completed   0          80s
job-test-prxqt                 0/1     Completed   0          62s
statefulset-0                  1/1     Running     0          43m
statefulset-1                  1/1     Running     0          43m
statefulset-2                  1/1     Running     0          43m

12.7 CronJob

A CronJob creates Jobs on a schedule.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob
spec:
  jobTemplate:
    metadata:
      name: job-test
      labels:
        app: job-test
    spec:
      completions: 2
      template:
        metadata:
          name: job-pod
          labels:
            app: job1
        spec:
          containers:
            - name: job-container
              image: alpine
              command: ["/bin/sh", "-c", "echo hello world"]
          restartPolicy: Never
  # every minute
  schedule: "*/1 * * * *"
[root@k-master ~]# kubectl apply -f cronjob1.yaml 
cronjob.batch/cronjob created
[root@k-master ~]# kubectl get cronjobs
NAME      SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob   */1 * * * *   False     0        <none>          28s
[root@k-master ~]# kubectl get pods
NAME                           READY   STATUS      RESTARTS   AGE
cronjob-28037753-9frvk         0/1     Completed   0          36s
cronjob-28037753-qbmwv         0/1     Completed   0          20s
daemonset1-chzp6               1/1     Running     0          115m
daemonset1-wkj7r               1/1     Running     0          115m
deployment1-58484bd895-qt9nd   1/1     Running     0          25h
deployment1-58484bd895-zlb4c   1/1     Running     0          25h
job-test-mt6ds                 0/1     Completed   0          32m
job-test-prxqt                 0/1     Completed   0          32m
statefulset-0                  1/1     Running     0          74m
statefulset-1                  1/1     Running     0          74m
statefulset-2                  1/1     Running     0          74m
[root@k-master ~]# kubectl logs cronjob-28037753-9frvk
hello world
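
如果需要临时暂停定时任务,可以把CronJob的suspend字段置为true(以下命令为示意,CronJob名称沿用上文的cronjob):

# 暂停
kubectl patch cronjob cronjob -p '{"spec":{"suspend":true}}'
# 恢复
kubectl patch cronjob cronjob -p '{"spec":{"suspend":false}}'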

13、Service

13.1、ClusterIP

apiVersion: v1
kind: Service
metadata:
  name: kser1
spec:
  selector:
    app: nginx-pod
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
  # 不指定则自动从service网段中分配,为None则为无头服务
  clusterIP: 10.96.80.80
  # clusterIPs与clusterIP对应,第一个元素必须与clusterIP一致;只有双栈(IPv4/IPv6)场景才能写多个
  clusterIPs:
    - 10.96.80.80
  # 内部流量策略,Cluster在所有Pod间负载均衡,Local只转发到本节点上的Pod
  internalTrafficPolicy: Cluster
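
可以用下面的命令查看分配到的ClusterIP并验证(示意命令;注意示例中targetPort为8080,如果后端nginx实际监听80端口,需要把targetPort改成80才能访问通):

kubectl get svc kser1
# 查看Service实际关联到的Pod端点
kubectl describe svc kser1 | grep Endpoints
# 在集群节点或Pod内通过ClusterIP访问
curl 10.96.80.80:80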

13.2、NodePort

apiVersion: v1
kind: Service
metadata:
  name: kser1
spec:
  selector:
    app: nginx-pod
  ports:
    - port: 80
      targetPort: 8080
      # 不指定则自动分配,默认可用范围为30000-32767
      nodePort: 30099
  type: NodePort
  # 外部流量策略,Cluster可转发到任意节点上的Pod(可能SNAT隐藏源IP),Local只转发到本节点Pod并保留源IP
  externalTrafficPolicy: Cluster
  # 额外暴露Service的外部IP(到达该IP:port的流量会被转发到Service后端)
  externalIPs:
    # 已路由到集群节点的外部IP
    - 10.0.4.14
  # session亲和
  sessionAffinity: "ClientIP"
  # session亲和配置
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 300
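
NodePort会在每个节点上开放同一个端口,集群外可以通过任意节点IP访问(示意命令,节点IP请替换为自己的环境):

kubectl get svc kser1
# 集群外通过任意节点IP:nodePort访问
curl 124.222.59.241:30099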

13.3、headless

apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysql
  labels:
    app: mysql
spec:
  selector:
    # 匹配带有app: mysql标签的pod
    app: mysql
  clusterIP: None
  ports:
    - name: mysql
      port: 3306
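
无头服务不会分配ClusterIP,DNS解析时直接返回各个Pod的IP,可以在集群内临时起一个Pod验证(示意命令,busybox镜像仅作演示):

kubectl run dns-test -n mysql --rm -it --image=busybox:1.28 -- nslookup mysql.mysql.svc.cluster.local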

13.4、代理外部服务(ep)

apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app: nginx-svc-external
  name: nginx-svc-external
subsets:
  - addresses:
    - ip: 150.158.187.211
    ports:
      - name: http
        port: 80
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-svc-external
  # Service名称必须与Endpoints名称一致,二者才能关联
  name: nginx-svc-external
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  type: ClusterIP
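
没有selector的Service是通过名称与Endpoints关联的,可以用下面的命令确认是否关联成功(示意命令,daemonset1-chzp6沿用上文已存在的Pod):

kubectl get svc,endpoints nginx-svc-external
# 在集群内Pod中访问,请求会被转发到150.158.187.211:80
kubectl exec -it daemonset1-chzp6 -- curl -s nginx-svc-external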

13.5、代理外部服务(ExternalName)

apiVersion: v1
kind: Service
metadata:
  # 名称需符合DNS规范(小写)
  name: externalname1
spec:
  type: ExternalName
  externalName: baidu.com
[root@k-master ~]# kubectl exec -it deployment1-58484bd895-qt9nd -- /bin/sh
/ # curl externalname1
curl: (56) Recv failure: Connection reset by peer
/ # ping externalname1
PING externalname1 (39.156.66.10): 56 data bytes
64 bytes from 39.156.66.10: seq=0 ttl=248 time=30.950 ms
64 bytes from 39.156.66.10: seq=1 ttl=248 time=30.944 ms
64 bytes from 39.156.66.10: seq=2 ttl=248 time=30.930 ms
^C
--- externalname1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 30.930/30.941/30.950 ms
/ # exit

14、强制删除

14.1、强删Pod

kubectl delete pod <pod-name> -n <name-space> --force --grace-period=0

14.2、强制删除pv、pvc

kubectl patch pv xxx -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc xxx -p '{"metadata":{"finalizers":null}}'

14.3、强制删除ns

kubectl delete ns <terminating-namespace> --force --grace-period=0
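
如果名称空间长时间卡在Terminating状态,仅用--force往往删不掉,常见做法是清空它的finalizers后调用finalize接口(以下为示意操作,请谨慎使用):

# 导出命名空间定义
kubectl get ns <terminating-namespace> -o json > tmp.json
# 手动编辑tmp.json,把spec.finalizers改为[]
# 调用finalize子资源完成删除
kubectl replace --raw "/api/v1/namespaces/<terminating-namespace>/finalize" -f tmp.json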

15、Ingress

Service使用NodePort向集群外暴露端口,性能较低且不够安全,也无法实现限流、按域名/路径转发等能力,所以需要再向上抽象一层

Ingress将作为整个集群唯一的入口,完成路由转发,限流等功能

在这里插入图片描述

15.1、安装

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resourceNames:
      - ingress-controller-leader
    resources:
      - configmaps
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - coordination.k8s.io
    resourceNames:
      - ingress-controller-leader
    resources:
      - leases
    verbs:
      - get
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - appProtocol: http
      name: http
      port: 80
      protocol: TCP
      targetPort: http
    - appProtocol: https
      name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
    - appProtocol: https
      name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
# kind: Deployment
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
        - args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          # image: registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: controller
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
            - containerPort: 8443
              name: webhook
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            runAsUser: 101
          volumeMounts:
            - mountPath: /usr/local/certificates/
              name: webhook-cert
              readOnly: true
      # dnsPolicy: ClusterFirst
      dnsPolicy: ClusterFirstWithHostNet
      # 使用宿主机网络,controller直接占用所在节点的80/443端口
      hostNetwork: true
      nodeSelector:
        # 选择节点角色为ingress的
        # kubernetes.io/os: linux
        node-role: ingress
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-create
    spec:
      containers:
        - args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
          imagePullPolicy: IfNotPresent
          name: create
          securityContext:
            allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-patch
    spec:
      containers:
        - args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
          imagePullPolicy: IfNotPresent
          name: patch
          securityContext:
            allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
webhooks:
  - admissionReviewVersions:
      - v1
    clientConfig:
      service:
        name: ingress-nginx-controller-admission
        namespace: ingress-nginx
        path: /networking/v1/ingresses
    failurePolicy: Fail
    matchPolicy: Equivalent
    name: validate.nginx.ingress.kubernetes.io
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    sideEffects: None
[root@k-master ~]# kubectl apply -f ingress-nginx.yaml 
service/ingress-nginx created
[root@k-master ~]# kubectl get all -n ingress-nginx 
NAME                                       READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-2hrfl   0/1     Completed   0          4m43s
pod/ingress-nginx-admission-patch-lh8ph    0/1     Completed   0          4m43s

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.96.33.14     <none>        80:31926/TCP,443:32352/TCP   4m43s
service/ingress-nginx-controller-admission   ClusterIP   10.96.222.198   <none>        443/TCP                      4m43s

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR       AGE
daemonset.apps/ingress-nginx-controller   0         0         0       0            0           node-role=ingress   4m43s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           36s        4m43s
job.batch/ingress-nginx-admission-patch    1/1           44s        4m43s
# 为node节点打标签
[root@k-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k-cluster1   Ready    <none>                 3d    v1.22.0
k-cluster2   Ready    <none>                 3d    v1.22.0
k-master     Ready    control-plane,master   3d    v1.22.0
[root@k-master ~]# kubectl label node k-cluster1 node-role=ingress
node/k-cluster1 labeled
[root@k-master ~]# kubectl label node k-cluster2 node-role=ingress
node/k-cluster2 labeled
[root@k-master ~]# kubectl get DaemonSet -n ingress-nginx 
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR       AGE
ingress-nginx-controller   2         2         2       2            2           node-role=ingress   30s
[root@k-master ~]# kubectl get pods -n ingress-nginx 
NAME                                   READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-2hrfl   0/1     Completed   0          8m3s
ingress-nginx-admission-patch-lh8ph    0/1     Completed   0          8m3s
ingress-nginx-controller-h7xgp         1/1     Running     0          43s
ingress-nginx-controller-kltkt         1/1     Running     0          48s

ingress测试service

# api版本,可以使用kubectl api-resources查询
apiVersion: apps/v1
# 资源类型,这里是Deployment
kind: Deployment
# 元数据
metadata:
  # Deployment名称
  name: deploy-nginx
  # Deployment标签
  labels:
    # 标签名(key:value)
    app: deploy-nginx
# 目标状态
spec:
  # 副本数
  replicas: 4
  # Deployment中Pod信息
  template:
    # 元数据
    metadata:
      # Pod名称
      name: nginx-pod
      # Pod标签
      labels:
        # 标签名(key:value)
        app: nginx-pod
    # 目标状态
    spec:
      # 容器信息
      containers:
        # 容器名称
        - name: nginx
          # 容器镜像
          image: nginx:alpine
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
  # 选择器
  selector:
    # 简单选择器
    matchLabels:
      app: nginx-pod
---
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
spec:
  selector:
    app: nginx-pod
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP

15.2、ingress文件解释

ingress规则

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  # 指定IngressClass(较新版本的ingress-nginx需要;也可以改用spec.ingressClassName: nginx)
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  # 默认转发(未匹配到其他规则的)
  defaultBackend:
    # 服务信息
    service:
      # 服务名称
      name: service-nginx
      # 服务端口
      port:
        number: 80
  # 转发规则
  rules:
      # host
    - host: ialso.cn
      http:
        # 路径
        paths:
            # 对于ialso.cn下所有请求转发至service-nginx:80
          - path: /
            # 匹配规则,Prefix前缀匹配,Exact精确匹配,ImplementationSpecific自定义
            pathType: Prefix
            # 后台服务
            backend:
              # 服务信息
              service:
                # 服务名称
                name: service-nginx
                # 服务端口
                port:
                  number: 80

使用SwitchHosts(或直接修改hosts文件)将ialso.cn解析到124.222.59.241,尝试访问http://ialso.cn

15.3、全局配置

kubectl edit cm ingress-nginx-controller -n ingress-nginx

# 修改全局配置,参考 https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
# 修改后需要重启pod
data:
  allow-snippet-annotations: "true"
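
本文controller以DaemonSet方式部署,修改该ConfigMap后可以这样重启使配置生效(示意命令):

kubectl -n ingress-nginx rollout restart daemonset ingress-nginx-controller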

15.4、限流

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # 限流 每秒最大2个请求
    nginx.ingress.kubernetes.io/limit-rps: "2"
spec:
  rules:
    - host: ialso.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-nginx
                port:
                  number: 80

使用SwitchHosts(或直接修改hosts文件)将ialso.cn解析到124.222.59.241,尝试访问http://ialso.cn
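
可以用一个简单的循环观察限流效果,超出限制(含burst突发额度)后的请求会返回503(示意命令):

for i in $(seq 1 50); do curl -s -o /dev/null -w "%{http_code}\n" http://ialso.cn; done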

15.5、路径重写

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rewrite
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # 路径重写,$2值为下面path中的(.*)
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: ialso.cn
      http:
        paths:
            # 匹配/api下所有,$表示以当前字符串结尾
          - path: /api(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: service-nginx
                port:
                  number: 80

使用SwitchHosts(或直接修改hosts文件)将ialso.cn解析到124.222.59.241,尝试访问http://ialso.cn/api/hello

15.6、会话保持

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-session
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # 会话保持
    nginx.ingress.kubernetes.io/affinity: "cookie"
    # cookie名称
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    # cookie有效期
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
    - host: ialso.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-nginx
                port:
                  number: 80

使用SwitchHosts(或直接修改hosts文件)将ialso.cn解析到124.222.59.241,尝试访问http://ialso.cn/

15.7、SSL配置(k8s)

首先需要拿到SSL证书的两个文件:xxx.key 和 xxx.crt

1、创建密钥
[root@k-master ~]# kubectl create secret tls ialso-ssl --key ialso.cn.key --cert ialso.cn_bundle.crt 
secret/ialso-ssl created
[root@k-master ~]# kubectl get secrets 
NAME                  TYPE                                  DATA   AGE
default-token-m278l   kubernetes.io/service-account-token   3      4d2h
ialso-ssl             kubernetes.io/tls                     2      45s
2、配置ingress规则
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
      - ialso.cn
      secretName: ialso-ssl
  rules:
    - host: ialso.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-nginx
                port:
                  number: 80

配置好证书,访问域名,会默认跳转到https
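
可以用curl验证证书与跳转(示意命令,-k表示跳过本地证书校验):

# http请求会被重定向到https
curl -I http://ialso.cn
# 查看https握手与证书信息
curl -vk https://ialso.cn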

15.8、SSL配置(nginx)

因为要对多个节点上的ingress-nginx做负载均衡,所以在集群外额外加了一层nginx

docker run -d \
	-p 80:80 \
	-p 443:443 \
	-v nginx-data:/etc/nginx \
	--name nginx \
	nginx
# 用户
user  nginx;
# 工作进程数(一般设置为CPU核心数,或设为auto)
worker_processes  4;
# 日志记录
error_log  /var/log/nginx/error.log notice;
# 进程ID
pid        /var/run/nginx.pid;

# 并发优化
events {
    # 工作模型
    use epoll;
    worker_connections  1024;
}


http {
    # 设置文件的mime文件类型
    include       /etc/nginx/mime.types;
    # 设置默认类型为二进制流
    default_type  application/octet-stream;

    # 日志格式
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    # 日志路径,日志格式为 main 定义的格式
    access_log  /var/log/nginx/access.log  main;

    # 启用 sendfile(),在磁盘和 TCP 套接字之间直接复制数据,减少用户态拷贝
    sendfile        on;
    #tcp_nopush     on;

    # 指定了与客户端的 keep-alive 链接的超时时间
    keepalive_timeout  65;

    # 客户端请求体大小上限(影响文件上传),默认1m
    client_max_body_size 100m;

    # 开启 gzip 压缩模块
    gzip  on;

    # nginx 的 http 块配置文件
    include /etc/nginx/conf.d/*.conf;

    # 负载均衡信息
    upstream ialso_kubernetes_ingress {
        # 负载均衡规则,采用ip_hash
        ip_hash;
        # k-cluster1
        server 124.222.59.241;
        # k-cluster2
        server 150.158.24.200;
    }

    # 配置http->https
    server {
        # 监听端口
        listen 80;
        # 监听域名
        server_name ialso.cn;
        # http->https
        rewrite ^(.*)$ https://${server_name}$1 permanent;
    }

    server {
        # 监听端口
        listen 443 ssl;
        # 监听域名
        server_name  ialso.cn;
        # 证书信息(放在nginx数据卷下ssl目录,例如:/var/lib/docker/volumes/nginx-data/_data/ssl)
        ssl_certificate      /etc/nginx/ssl/ialso.cn_bundle.pem;
        ssl_certificate_key  /etc/nginx/ssl/ialso.cn.key;
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;

        # 转发请求到ialso_kubernetes_ingress
        location / {
            proxy_pass http://ialso_kubernetes_ingress;
	    	# 转发时保留原始请求域名
	    	proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
# 更改配置文件后需要重新加载配置文件
docker exec -it nginx /bin/sh
nginx -s reload

15.9、Canary(金丝雀)部署

基于Service实现的金丝雀部署只能按Pod副本数比例分流,不能自定义灰度逻辑;而Ingress可以自定义灰度规则

可以基于canary-by-header、canary-by-cookie、canary-weight进行灰度,可同时设置多个,优先级依次降低

# canary-by-header
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # 基于canary-by-header设置金丝雀部署
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "is-canary"
    nginx.ingress.kubernetes.io/canary-by-header-value: "yes"
spec:
  rules:
    - host: ialso.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-nginx
                port:
                  number: 80
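
带上指定请求头的流量会命中灰度版本,不带则走正常版本(示意命令):

# 命中灰度
curl -H "is-canary: yes" http://ialso.cn
# 正常流量
curl http://ialso.cn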

在这里插入图片描述

在这里插入图片描述

# canary-weight
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # 基于canary-weight设置金丝雀部署,设置此项后host: ialso.cn可以重复
    nginx.ingress.kubernetes.io/canary: "true"
    # 10%流量会进入灰度
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
    - host: ialso.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-nginx
                port:
                  number: 80

16、NetworkPolicy

网络策略。默认情况下所有Pod之间网络是互通的,有时需要对某些Pod做网络隔离

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-policy
  labels:
    app: network-policy
spec:
  podSelector:
    matchLabels:
      # 选中的Pod将被隔离起来,包含service
      app: nginx-pod-canary
  policyTypes:
    # 一般只配置入站规则
    - Ingress
  # 定义入站规则
  ingress:
    # 入站白名单
    - from:
        # 名称空间选择器
        - namespaceSelector:
            matchLabels:
              # default名称空间的Pod可以访问
              kubernetes.io/metadata.name: default
    - from:
        # pod选择器
        - podSelector:
            matchLabels:
              app: nginx-pod-http
    - from:
        # cidr选择器
        - ipBlock:
            # 允许访问的cidr
            cidr: 10.0.0.0/16
            # 排除的cidr
            except:
              - 10.0.1.0/24
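
可以分别从白名单内和白名单外访问被隔离的Pod来验证策略是否生效(示意命令,Pod IP为占位符):

# default命名空间内的Pod可以访问
kubectl run np-test --rm -it --image=busybox:1.28 -- wget -qO- -T 3 <被隔离Pod的IP>
# 不在白名单内的命名空间访问应当超时
kubectl run np-test -n kube-public --rm -it --image=busybox:1.28 -- wget -qO- -T 3 <被隔离Pod的IP>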

17、存储

在kubernetes存储中

  • 容器中声明卷挂载volumeMounts(要挂载哪些卷、挂载到哪个路径)
  • Pod中声明卷详情volumes(卷来自哪里)

17.0、卷类型

1、配置信息

Secret、ConfigMap

2、临时存储

emptyDir、HostPath

3、持久化存储

GlusterFS、CephFS、NFS

apiVersion: v1
kind: Pod
metadata:
  name: volume-pods
  labels:
    app: volume-pods
spec:
  # 数据卷信息
  volumes:
      # 挂载名称
    - name: nginx-data
      # 卷类型,emptyDir为临时卷
      emptyDir: {}
  containers:
    - name: alpine-pod
      image: alpine
      # 挂载信息
      volumeMounts:
        - mountPath: /app
          name: nginx-data

17.1、Secret

| 内置类型 | 用法 |
| --- | --- |
| Opaque | 用户定义的任意数据 |
| kubernetes.io/service-account-token | 服务账号令牌 |
| kubernetes.io/dockercfg | ~/.dockercfg 文件的序列化形式 |
| kubernetes.io/dockerconfigjson | ~/.docker/config.json 文件的序列化形式 |
| kubernetes.io/basic-auth | 用于基本身份认证的凭据 |
| kubernetes.io/ssh-auth | 用于 SSH 身份认证的凭据 |
| kubernetes.io/tls | 用于 TLS 客户端或者服务器端的数据 |
| bootstrap.kubernetes.io/token | 启动引导令牌数据 |
1、Opaque

命令行方式

kubectl create secret generic secret-opaque \
  --from-literal=username=xumeng \
  --from-literal=password=lianxing

yaml方式

apiVersion: v1
kind: Secret
metadata:
  name: secret-opaque
# 不指定默认也为Opaque
type: Opaque
data:
  password: bGlhbnhpbmc=
  username: eHVtZW5n
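
data中的值是base64编码,可以这样生成和解码(示意命令):

echo -n xumeng | base64      # eHVtZW5n
echo -n lianxing | base64    # bGlhbnhpbmc=
# 查看并解码
kubectl get secret secret-opaque -o jsonpath='{.data.username}' | base64 -d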

使用-Env

secret修改后不会热更新

apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
  labels:
    app: secret-pod
spec:
  containers:
    - name: alpine-pod
      image: alpine
      resources:
        limits:
          cpu: 2000m
      env:
        # 环境变量
        - name: Secret
          # 变量来源
          valueFrom:
            # 来自secret
            secretKeyRef:
              # secret名字
              name: secret-opaque
              # secret中哪个字段
              key: username
        # 环境变量
        - name: Field
          # 变量来源
          valueFrom:
            # 来自field(当前yaml)
            fieldRef:
              # field路径,从Pod对象的根开始写,如metadata.name、spec.nodeName
              fieldPath: metadata.name
        # 环境变量
        - name: Resource
          # 变量来源
          valueFrom:
            # 来源资源(spec.containers.resources)
            resourceFieldRef:
              # 容器名字
              containerName: alpine-pod
              # resource路径(从spec.containers.resources开始写)
              resource: limits.cpu
      command: ["/bin/sh", "-c", "sleep 36000"]
[root@k-master ~]# kubectl apply -f secret-pod.yaml 
pod/secret-pod created
[root@k-master ~]# kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
deploy-nginx-canary-cb5cbc59f-74tt4   1/1     Running   0          34h
deploy-nginx-http-5b4d86c-4vj62       1/1     Running   0          36h
deploy-nginx-http-5b4d86c-5tcpc       1/1     Running   0          36h
deploy-nginx-http-5b4d86c-brbzn       1/1     Running   0          36h
deploy-nginx-http-5b4d86c-vdln8       1/1     Running   0          36h
secret-pod                            1/1     Running   0          40s
[root@k-master ~]# kubectl exec -it secret-pod -- /bin/sh
/ # echo $Secret
xumeng
/ # echo $Field
secret-pod
/ # echo $Resource
2
/ # exit

使用-挂载

secret修改后会热更新

apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
  labels:
    app: secret-pod
spec:
  volumes:
    - name: secret
      # 挂载的secret信息
      secret:
        # 挂载的secret名称
        secretName: secret-opaque
        # 指定挂载secret中的哪些key,不写items则挂载所有内容
        items:
            # secret中的key
          - key: username
            # 挂载出来的文件名
            path: un
  containers:
    - name: alpine-pod
      image: alpine
      command: ["/bin/sh", "-c", "sleep 36000"]
      volumeMounts:
        - mountPath: /app
          name: secret
[root@k-master ~]# kubectl apply -f secret-pod.yaml 
pod/secret-pod created
[root@k-master ~]# kubectl exec -it secret-pod -- /bin/sh
/ # cd /app
/app # ls
password  username
/app # cat username 
xumeng
/app # cat password
lianxing
/app # exit
2、service-account-token
3、dockercfg
4、basic-auth
5、ssh-auth
6、tls
7、token
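
以上类型本文暂未展开,以拉取私有镜像常用的dockerconfigjson为例(示意命令,仓库地址与账号均为占位):

kubectl create secret docker-registry registry-secret \
  --docker-server=registry.example.com \
  --docker-username=<用户名> \
  --docker-password=<密码>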

17.3、ConfigMap

命令行方式

kubectl create configmap configmap-db \
	--from-literal=key=value

yaml方式

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config2
data:
  key1: hello
  key2: world
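
除了--from-literal,也可以直接从文件创建ConfigMap(示意命令,文件名为占位):

kubectl create configmap nginx-conf --from-file=nginx.conf
# 查看内容
kubectl get configmap nginx-conf -o yaml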

使用-Env

apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
  labels:
    app: configmap-pod
spec:
  containers:
    - name: alpine-pod
      image: alpine
      env:
        # 环境变量
        - name: Configmap
          # 变量来源
          valueFrom:
            # 来自configmap
            configMapKeyRef:
              # configmap名字
              name: my-config2
              # configmap中哪个key
              key: key1
      command: ["/bin/sh", "-c", "sleep 36000"]
[root@k-master ~]# kubectl exec -it configmap-pod -- /bin/sh
/ # echo $Configmap
hello
/ # exit

使用-挂载

apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
  labels:
    app: configmap-pod
spec:
  volumes:
    - name: configmap
      # 挂载的ConfigMap信息
      configMap:
        # 挂载的ConfigMap名称
        name: my-config2
  containers:
    - name: alpine-pod
      image: alpine
      command: ["/bin/sh", "-c", "sleep 36000"]
      volumeMounts:
        - mountPath: /app
          name: configmap
[root@k-master ~]# kubectl exec -it configmap-pod -- /bin/sh
/ # cd /app
/app # ls
key1  key2
/app # cat key1
hello
/app # cat key2
world
/app # exit

17.4、临时存储

1、emptyDir

类似于docker匿名挂载

Pod中的容器重启并不会导致emptyDir卷中的数据丢失,但Pod被删除重建后数据会丢失,emptyDir卷的生命周期与Pod相同

apiVersion: v1
kind: Pod
metadata:
  name: multi-pods
  labels:
    app: multi-pods
spec:
  volumes:
    - name: nginx-data
      emptyDir: {}
  containers:
    - name: nginx-pod
      image: nginx:1.22.0
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
    - name: alpine-pod
      image: alpine
      volumeMounts:
        - mountPath: /app
          name: nginx-data
      command: ["/bin/sh", "-c", "while true; do sleep 1; date > /app/index.html; done;"]
  restartPolicy: Always
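
两个容器挂载的是同一个emptyDir卷,alpine容器写入的index.html在nginx容器中也能看到(示意命令):

kubectl exec -it multi-pods -c nginx-pod -- cat /usr/share/nginx/html/index.html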
2、hostPath

未挂载时区文件时(容器内默认为UTC时间)

apiVersion: v1
kind: Pod
metadata:
  name: pod-no-time
  labels:
    app: pod-no-time
spec:
  containers:
    - name: pod-no-time
      image: alpine
      imagePullPolicy: IfNotPresent
      command: ["/bin/sh", "-c", "sleep 60000"]
  restartPolicy: Always
[root@k-master ~]# kubectl exec -it pod-no-time -- /bin/sh
/ # date
Thu May 4 08:12:17 UTC 2023
/ # exit

通过hostPath挂载宿主机时区文件后(容器内为东八区时间)

apiVersion: v1
kind: Pod
metadata:
  name: pod-time
  labels:
    app: pod-time
spec:
  containers:
    - name: pod-time
      image: alpine
      imagePullPolicy: IfNotPresent
      command: ["/bin/sh", "-c", "sleep 60000"]
      volumeMounts:
        - mountPath: /etc/localtime
          name: localtime
  volumes:
    - name: localtime
      hostPath:
        path: /usr/share/zoneinfo/Asia/Shanghai
  restartPolicy: Always
[root@k-master ~]# kubectl exec -it pod-time -- /bin/sh
/ # date
Thu May 4 16:19:22 CST 2023
/ # exit

17.5、持久化存储(nfs)

0、卷阶段

Available(可用)-- 卷是一个空闲资源,尚未绑定

Bound(已绑定)-- 该卷已经绑定

Released(已释放)-- 绑定已被删除,但资源尚未被回收,无法再次绑定

Failed(失败)-- 卷的自动回收操作失败

1、安装nfs
# 所有机器
yum install -y nfs-utils
mkdir -p /nfs/data
# nfs主节点
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r
#检查配置是否生效
exportfs
2、nfs使用
[root@k-master ~]# cd /nfs/data/
[root@k-master data]# clear
[root@k-master data]# echo 111 > 1.txt
# 查看远程主机共享目录
[root@k-cluster1 ~]# showmount -e 10.0.4.6
Export list for 10.0.4.6:
/nfs/data *
# 挂载
[root@k-cluster1 ~]# mount -t nfs 10.0.4.6:/nfs/data /nfs/data
[root@k-cluster1 ~]# cd /nfs/data/
[root@k-cluster1 data]# ls
1.txt
[root@k-cluster1 data]# cat 1.txt 
111
3、PersistentVolume(PV)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  # storageClassName可以当作pv的分组(类似命名空间的作用)
  storageClassName: nfs-storage
  # 持久化卷大小
  capacity:
    storage: 100Mi
  # 持久化卷访问模式
  #   ReadWriteOnce:可以被一个节点以读写方式挂载;
  #	  ReadOnlyMany:可以被多个节点以只读方式挂载;
  #   ReadWriteMany:可以被多个节点以读写方式挂载;
  #   ReadWriteOncePod:可以被单个 Pod 以读写方式挂载;
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfs/data/pv001
    server: 10.0.4.6
[root@k-master pv001]# kubectl apply -f pv001.yaml 
persistentvolume/pv001 created
[root@k-master pv001]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   100Mi      RWX            Retain           Available           nfs-storage             5s
4、PersistentVolumeClaim(PVC)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  # storageClassName
  storageClassName: nfs-storage
  # 持久化卷访问模式
  accessModes:
    - ReadWriteMany
  # 所需资源
  resources:
    requests:
      storage: 100Mi
[root@k-master /]# kubectl apply -f pvc001.yaml 
persistentvolumeclaim/pvc001 created
[root@k-master /]# kubectl get pv,pvc
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
persistentvolume/pv001   100Mi      RWX            Retain           Bound    default/pvc001   nfs-storage             4m41s

NAME                           STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc001   Bound    pv001    100Mi      RWX            nfs-storage    8s
5、deployment中挂载pvc
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
spec:
  # storageClassName可以当作pv的分组(类似命名空间的作用)
  storageClassName: nfs-storage
  # 持久化卷大小
  capacity:
    storage: 100Mi
  # 持久化卷访问模式
  #   ReadWriteOnce:可以被一个节点以读写方式挂载;
  #	  ReadOnlyMany:可以被多个节点以只读方式挂载;
  #   ReadWriteMany:可以被多个节点以读写方式挂载;
  #   ReadWriteOncePod:可以被单个 Pod 以读写方式挂载;
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfs/data/pv002
    server: 10.0.4.6
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pvc
  labels:
    app: nginx-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pvc
  template:
    metadata:
      name: nginx-pvc
      labels:
        app: nginx-pvc
    spec:
      containers:
        - name: nginx-pvc
          image: nginx
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /usr/share/nginx/html/
              name: nfs-pvc
      restartPolicy: Always
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: pvc002
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc002
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 70Mi
[root@k-master ~]# kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
nginx-pvc-5bb94866f9-lsl9q   1/1     Running   0          58s   192.168.163.69   k-cluster1   <none>           <none>
nginx-pvc-5bb94866f9-qvm6z   1/1     Running   0          58s   192.168.2.198    k-cluster2   <none>           <none>
[root@k-master ~]# curl 192.168.163.69
Hello NFS
[root@k-master ~]# curl 192.168.2.198
Hello NFS
6、回收策略

当pv绑定的pvc被删除时,pv的回收策略

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
spec:
  # storageClassName可以当作pv的分组(类似命名空间的作用)
  storageClassName: nfs-storage
  # 持久化卷大小
  capacity:
    storage: 100Mi
  # 持久化卷访问模式
  #   ReadWriteOnce:可以被一个节点以读写方式挂载;
  #	  ReadOnlyMany:可以被多个节点以只读方式挂载;
  #   ReadWriteMany:可以被多个节点以读写方式挂载;
  #   ReadWriteOncePod:可以被单个 Pod 以读写方式挂载;
  accessModes:
    - ReadWriteMany
  # 回收策略 Retain:手动回收;Delete:自动清除;Recycle:已废弃;
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/data/pv003
    server: 10.0.4.6
7、动态PV

当创建pvc时会去寻找对应的StorageClass,然后根据StorageClass对应的供应商来创建PV

nfs供应商:https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner(deploy)

创建StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
# or choose another name, must match deployment's env PROVISIONER_NAME'
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner 
parameters:
  # 删除pv时是否要备份
  archiveOnDelete: "false"

创建Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: registry.cn-hangzhou.aliyuncs.com/ialso/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.0.4.6
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.4.6
            path: /nfs/data

创建rbac

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

完整yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  # 删除pv时是否要备份
  archiveOnDelete: "false"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # 更换镜像地址
          # image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: registry.cn-hangzhou.aliyuncs.com/ialso/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              # 修改为自己的nfs服务器
              value: 10.0.4.6
            - name: NFS_PATH
              # 修改为自己的nfs路径
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            # 修改为自己的nfs服务器
            server: 10.0.4.6
            # 修改为自己的nfs路径
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k-master ~]# kubectl apply -f nfs-provisioner.yaml 
storageclass.storage.k8s.io/nfs-client created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k-master ~]# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-b758b784d-dzkpk   1/1     Running   0          6s
nginx-pvc-5bb94866f9-lsl9q               1/1     Running   0          49m
nginx-pvc-5bb94866f9-qvm6z               1/1     Running   0          49m

测试

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pvc-provision
  labels:
    app: nginx-pvc-provision
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pvc-provision
  template:
    metadata:
      name: nginx-pvc-provision
      labels:
        app: nginx-pvc-provision
    spec:
      containers:
        - name: nginx-pvc-provision
          image: nginx
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /usr/share/nginx/html/
              name: nfs-pvc
      restartPolicy: Always
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: pvc003
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc003
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 75Mi
[root@k-master ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pvc-5ae5f602-c7c9-414d-8a2b-aab53150493f   75Mi       RWX            Delete           Bound    default/pvc003   nfs-client              88s
# 编写测试index.html
[root@k-cluster1 data]# ls
1.txt  2.txt  default-pvc003-pvc-5ae5f602-c7c9-414d-8a2b-aab53150493f  pv001  pv002
[root@k-cluster1 data]# cd default-pvc003-pvc-5ae5f602-c7c9-414d-8a2b-aab53150493f/
[root@k-cluster1 default-pvc003-pvc-5ae5f602-c7c9-414d-8a2b-aab53150493f]# ls
[root@k-cluster1 default-pvc003-pvc-5ae5f602-c7c9-414d-8a2b-aab53150493f]# echo "Hello Provisioner" > index.html
# 尝试访问
[root@k-master ~]# kubectl get pods -owide
NAME                                     READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
nfs-client-provisioner-f79b7bf8d-689qn   1/1     Running   0          2m50s   192.168.2.202    k-cluster2   <none>           <none>
nginx-pvc-provision-9db75b74-24xjj       1/1     Running   0          18m     192.168.2.203    k-cluster2   <none>           <none>
nginx-pvc-provision-9db75b74-k2rs4       1/1     Running   0          18m     192.168.163.70   k-cluster1   <none>           <none>
[root@k-master ~]# curl 192.168.2.203
Hello Provisioner

指定默认storageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"nfs-client"},"parameters":{"archiveOnDelete":"false"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner"}
    # 加上这句
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2023-05-05T16:53:18Z"
  name: nfs-client
  resourceVersion: "15939"
  uid: 66da0acd-f88f-4969-bf57-4a6f015227fc
parameters:
  archiveOnDelete: "false"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
[root@k-master ~]# kubectl edit sc nfs-client 
error: storageclasses.storage.k8s.io "nfs-client" is invalid
storageclass.storage.k8s.io/nfs-client edited
[root@k-master ~]# kubectl get sc
NAME                   PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  27m

或者

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
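
设置默认StorageClass后,不写storageClassName的PVC也会自动使用它动态创建PV,例如(示意):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-default-sc
spec:
  # 不写storageClassName,将使用默认StorageClass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi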

18、资源配额

可以使用ResourceQuota对每个名称空间进行资源限制

18.1、可限制资源

1、计算资源
| 资源名称 | 描述 |
| --- | --- |
| cpu | 所有非中止Pod,CPU需求总量 |
| requests.cpu | 所有非中止Pod,CPU需求总量 |
| limits.cpu | 所有非中止Pod,CPU限额 |
| memory | 所有非中止Pod,Memory需求总量 |
| requests.memory | 所有非中止Pod,Memory需求总量 |
| limits.memory | 所有非中止Pod,Memory限额 |
2、存储资源
| 资源名称 | 描述 |
| --- | --- |
| requests.storage | 命名空间PVC存储总量 |
| persistentvolumeclaims | 命名空间的PVC总量 |
| storageClassname.storageclass.storage.k8s.io/requests.storage | storageClassname下的PVC存储总量 |
| storageClassname.storageclass.storage.k8s.io/persistentvolumeclaims | storageClassname下的PVC总量 |
3、对象资源
| 资源名称 | 描述 |
| --- | --- |
| configmaps | 命名空间ConfigMap上限 |
| persistentvolumeclaims | 命名空间PVC上限 |
| pods | 命名空间Pod上限 |
| replicationcontrollers | 命名空间Replicationcontroller上限 |
| resourcequotas | 命名空间ResourceQuota上限 |
| services | 命名空间Service上限 |
| services.loadbalancers | 命名空间LoadBalancer类型的Service上限 |
| services.nodeports | 命名空间NodePort类型的Service上限 |
| secrets | 命名空间Secret上限 |
4、示例
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-mem
  namespace: xumeng
spec:
  hard:
    pods: "2"
    requests.cpu: 200m
    requests.memory: 500Mi
    limits.cpu: 500m
    limits.memory: 1Gi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-resource-quota
  namespace: xumeng
  labels:
    app: nginx-resource-quota
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-resource-quota
  template:
    metadata:
      name: nginx-resource-quota
      labels:
        app: nginx-resource-quota
    spec:
      containers:
        - name: nginx-resource-quota
          image: nginx
          imagePullPolicy: IfNotPresent
          # Deployment设置了两个副本,但第二个副本启动会超出ResourceQuota限制,因此只会有一个Pod在运行
          resources:
            requests:
              cpu: 120m
              memory: 260Mi
            limits:
              cpu: 260m
              memory: 510Mi
      restartPolicy: Always
[root@k-master ~]# kubectl apply -f resourcequota-test.yaml 
deployment.apps/nginx-resource-quota created
[root@k-master ~]# kubectl get pods -n xumeng
NAME                                    READY   STATUS    RESTARTS   AGE
nginx-resource-quota-64b7bf7967-lp6rc   1/1     Running   0          14s
[root@k-master ~]# kubectl get deployments -n xumeng
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
nginx-resource-quota   1/2     1            1           33s
[root@k-master ~]# kubectl get resourcequotas -n xumeng
NAME      AGE     REQUEST                                                            LIMIT
cpu-mem   9m11s   pods: 1/2, requests.cpu: 120m/200m, requests.memory: 260Mi/500Mi   limits.cpu: 260m/500m, limits.memory: 510Mi/1Gi

18.2、Multiple Resource Quotas

apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: pods-high
    spec:
      hard:
        cpu: 1000m
        memory: 1Gi
        pods: "10"
      scopeSelector:
        matchExpressions:
          - operator: In
            scopeName: PriorityClass
            values: ["high"]
  - apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: pods-medium
    spec:
      hard:
        cpu: 800m
        memory: 800Mi
        pods: "8"
      scopeSelector:
        matchExpressions:
          - operator: In
            scopeName: PriorityClass
            values: ["medium"]
  - apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: pods-low
    spec:
      hard:
        cpu: 500m
        memory: 500Mi
        pods: "5"
      scopeSelector:
        matchExpressions:
          - operator: In
            scopeName: PriorityClass
            values: ["low"]
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-pod
spec:
  containers:
    - name: nginx-pod
      image: nginx:1.22.0
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          memory: "200Mi"
          cpu: "700m"
        limits:
          memory: "500Mi"
          cpu: "1000m"
  restartPolicy: Always
  # Assign the "high" PriorityClass so the Pod is counted against the matching quota (pods-high)
  priorityClassName: high
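The Pod above references priorityClassName: high, so a PriorityClass with that name must exist in the cluster. A minimal sketch (the value below is illustrative):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high
value: 1000000          # higher value = higher scheduling priority
globalDefault: false
description: "Priority class matched by the pods-high quota"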

18.3、LimitRange

Sometimes you do not want a single Pod to consume all of a namespace's resources; a LimitRange can be used to constrain individual Pods and containers.

In addition, once a LimitRange defines default CPU and memory limits, Pods can omit their own resource settings and the defaults are applied automatically (see the sketch at the end of this subsection).

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range
  namespace: xumeng
spec:
  limits:
    - type: Pod
      max:
        cpu: 200m
        memory: 1Gi
      min:
        cpu: 50m
        memory: 50Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-limit-range
  namespace: xumeng
  labels:
    app: nginx-limit-range
spec:
  containers:
    - name: nginx-limit-range
      image: nginx
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          cpu: 50m
          memory: 50Mi
        limits:
          cpu: 50m
          memory: 100Mi
  restartPolicy: Always
[root@k-master ~]# kubectl apply -f limit-range-pod.yaml 
pod/nginx-limit-range created
[root@k-master ~]# kubectl get all -nxumeng
NAME                    READY   STATUS    RESTARTS   AGE
pod/nginx-limit-range   1/1     Running   0          10s
[root@k-master ~]# kubectl get resourcequotas -nxumeng
NAME      AGE   REQUEST                                                          LIMIT
cpu-mem   16m   pods: 1/2, requests.cpu: 50m/200m, requests.memory: 50Mi/500Mi   limits.cpu: 50m/500m, limits.memory: 100Mi/1Gi
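As noted above, a LimitRange can also inject default requests and limits so that Pods may omit them entirely. A minimal sketch using type Container with default and defaultRequest (the name and values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-defaults
  namespace: xumeng
spec:
  limits:
    - type: Container
      # Used as the container's limits when none are declared
      default:
        cpu: 100m
        memory: 128Mi
      # Used as the container's requests when none are declared
      defaultRequest:
        cpu: 50m
        memory: 64Mi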

19、Scheduling

19.1、nodeName

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: schedule-node-selector
  labels:
    app: schedule-node-selector
spec:
  selector:
    matchLabels:
      app: schedule-node-selector
  template:
    metadata:
      name: schedule-node-selector
      labels:
        app: schedule-node-selector
    spec:
      containers:
        - name: schedule-node-selector
          image: nginx
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      # Pin the Pods to the node with this name
      nodeName: k-cluster1

19.2、nodeSelector

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: schedule-node-selector
  labels:
    app: schedule-node-selector
spec:
  selector:
    matchLabels:
      app: schedule-node-selector
  template:
    metadata:
      name: schedule-node-selector
      labels:
        app: schedule-node-selector
    spec:
      containers:
        - name: schedule-node-selector
          image: nginx
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      # Schedule only onto nodes that carry the label node-role=schedule
      nodeSelector:
        node-role: schedule
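For the nodeSelector above to match anything, at least one node must carry the node-role=schedule label. A quick sketch (k-cluster2 is just an example node name):

# Add the label to a node
kubectl label node k-cluster2 node-role=schedule
# Verify which nodes carry the label
kubectl get nodes -l node-role=schedule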

19.3、Affinity

apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
  labels:
    app: pod-affinity
spec:
  containers:
    - name: pod-affinity
      image: nginx
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
  # Affinity rules
  affinity:
    # nodeAffinity = node affinity; podAffinity = Pod-to-Pod affinity
    nodeAffinity:
      # Hard requirement
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          # Match node labels
          - matchExpressions:
              - key: disktype
                # In: value is in the list; NotIn: value is not in the list; Exists: the key exists; DoesNotExist: the key does not exist
                operator: In
                values:
                  - ssd
                  - hdd
      # Soft preference
      preferredDuringSchedulingIgnoredDuringExecution:
        - preference:
            matchExpressions:
              - key: disktype
                # In: value is in the list; NotIn: value is not in the list; Exists: the key exists; DoesNotExist: the key does not exist
                operator: In
                values:
                  - ssd
          # Weight
          weight: 100
        - preference:
            matchExpressions:
              - key: disktype
                # In: value is in the list; NotIn: value is not in the list; Exists: the key exists; DoesNotExist: the key does not exist
                operator: In
                values:
                  - hdd
          # Weight
          weight: 80
    # Pod anti-affinity
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
          # Topology key that defines the scheduling domain
        - topologyKey: "kubernetes.io/hostname"
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - redis
                  - mysql
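The nodeAffinity rules above match the disktype label, so nodes need to be labeled accordingly. A quick sketch (the node names are examples):

kubectl label node k-cluster2 disktype=ssd
kubectl label node k-cluster3 disktype=hdd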

19.4、Taints

kubectl taint node nodeName taintKey=taintValue:taintEffect

Evict all Pods from the node (e.g. for maintenance downtime):

kubectl taint node k-cluster1 taintKey=taintValue:NoExecute

New Pods can no longer be scheduled onto the node:

kubectl taint node k-cluster1 taintKey=taintValue:NoSchedule
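Once a node is tainted, only Pods that tolerate the taint can be scheduled there (or keep running, in the NoExecute case). A minimal toleration sketch matching the taintKey=taintValue:NoSchedule example above (the Pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
  # Tolerate the taint so this Pod may still be scheduled onto the tainted node
  tolerations:
    - key: taintKey
      operator: Equal
      value: taintValue
      effect: NoSchedule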

Remove the taint:

kubectl taint node k-cluster1 taintKey:NoExecute-

20、Roles

20.1、RBAC

RBAC (Role-Based Access Control): access control based on roles, i.e. permissions are granted by binding roles to users.

Kubernetes has two kinds of roles (see the sketch after this list):

  • Role (namespace-scoped role): operates on resources within a namespace
    • RoleBinding: binds a Role to users
  • ClusterRole (cluster-scoped role): operates on cluster-level resources
    • ClusterRoleBinding: binds a ClusterRole to users
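A minimal sketch of a namespace-scoped Role plus its RoleBinding (the pod-reader role, the xumeng namespace, and the user jane are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: xumeng
rules:
  - apiGroups: [""]          # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: xumeng
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io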

21、Building Your Own Visualization Platform

The Kubernetes API reference and the official Java client library can serve as the basis for a custom management/visualization platform:

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/

https://github.com/kubernetes-client/java

22、Helm

Chart: a Helm package

Repository: a place where charts are stored and shared

Release: a running instance of a chart in the cluster

22.1、Installation

# Download the release tarball
curl -O https://get.helm.sh/helm-v3.12.0-linux-amd64.tar.gz
tar -zxvf helm-v3.12.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
# Verify the installation
helm version

22.2、Configuration

# Enable bash command completion
echo "source <(helm completion bash)" >>  ~/.bashrc && source ~/.bashrc
helm completion bash > /usr/share/bash-completion/completions/helm
# Add chart repositories
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

22.3、Usage

1、Repositories
# Add a repository
helm repo add stable https://charts.helm.sh/stable
# Remove a repository
helm repo remove aliyun
# Update repositories
helm repo update
# List configured repositories
helm repo list
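Once repositories are configured, charts can be searched before installing (mysql below is just an example keyword):

# Search the configured repositories for a chart
helm search repo mysql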
2、Charts (Packages)
# Chart directory layout
[root@kubernetes1 ~]# helm create ihelm
Creating ihelm
[root@kubernetes1 ~]# cd ihelm/
[root@kubernetes1 ihelm]# tree
.
|-- charts	# charts this chart depends on
|-- Chart.yaml	# basic information about the chart
|--		apiVersion	# apiVersion of the chart, currently v2 by default
|--	    name	# name of the chart
|--	    type	# type of the chart [optional]
|--	    version	# version of the chart itself
|--	    appVersion	# version of the application packaged in the chart [optional]
|--	    description	# description of the chart [optional]
|-- templates	# template directory
|   |-- deployment.yaml
|   |-- _helpers.tpl	# custom templates and helper functions
|   |-- hpa.yaml
|   |-- ingress.yaml
|   |-- NOTES.txt	# notes shown after the chart is installed
|   |-- serviceaccount.yaml
|   |-- service.yaml
|   `-- tests	# test directory
|       `-- test-connection.yaml
`-- values.yaml	# default values / global variables
# Pull (download) a chart from a repository
helm pull bitnami/mysql
# Install a chart (a release name is required; "mysql" here is just an example)
helm install mysql bitnami/mysql
# Create a new chart scaffold locally (takes a chart name, e.g. mychart)
helm create mychart
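After installing, a few commands cover the rest of the release lifecycle (mysql is a placeholder release name):

# List installed releases
helm list
# Show the status of a release
helm status mysql
# Uninstall a release
helm uninstall mysql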