Docker Installation

First, prepare three virtual machines. For how to install a virtual machine, see my earlier post Linux (1): CentOS7/RedHat7 installation under VMware Workstation 12.

The IP addresses of the three virtual machines are as follows:

VM IP              VM hostname
192.168.189.145    k8s-master
192.168.189.144    k8s-node1
192.168.189.146    k8s-node2

Write these IP-to-hostname mappings into /etc/hosts on each of the three machines. Also make sure each VM has at least 2 CPU cores; otherwise a later step (the kubeadm preflight check) will fail with a minimum-requirements error.
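
A minimal sketch of writing those mappings (taken from the table above) into /etc/hosts, run as root on all three machines:

# Append the cluster hostname mappings to /etc/hosts (run on all three VMs)
cat >> /etc/hosts <<'EOF'
192.168.189.145 k8s-master
192.168.189.144 k8s-node1
192.168.189.146 k8s-node2
EOF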

This guide installs Kubernetes 1.15, for which Docker CE 18.09 is the recommended version.

The Docker installation process follows this article on the official site.

Remove old versions of Docker

# Run on both the master node and the worker nodes
sudo yum remove -y docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-selinux \
    docker-engine-selinux \
    docker-engine

Since this was a fresh install, the old-version removal step was not actually needed. Also, the Docker installation steps below are a record from about a month ago, when I first set this up; you only need to follow the important parts, but at minimum the "configure the Docker daemon options" step cannot be skipped.

All of the following commands need to be run as root.

Set up the Docker repository

Before installing Docker for the first time on a new host, you need to set up the Docker repository. Afterwards, you can install and update Docker from that repository.

Install the required packages

yum-utils provides the yum-config-manager utility, and the devicemapper storage driver requires device-mapper-persistent-data and lvm2:

yum install -y yum-utils device-mapper-persistent-data lvm2

Set up the stable repository:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

(Optional) Enable the nightly or test repositories.

These repositories are included in the docker.repo file above but are disabled by default. They can be enabled alongside the stable repository.

The following command enables the nightly repository:

yum-config-manager --enable docker-ce-nightly

To enable the test channel, run:

yum-config-manager --enable docker-ce-test

You can disable the nightly or test repository by running the yum-config-manager command with the --disable flag. To re-enable a repository, use --enable. The following command disables the nightly repository:

yum-config-manager --disable docker-ce-nightly

The nightly and test channels are described in the Docker documentation.

Install Docker Engine - Community

Install the latest version of Docker Engine - Community and containerd, or skip to the next step to install a specific version:

yum install docker-ce docker-ce-cli containerd.io

To install a specific version, first list the available versions in the repo, then select and install one.

List and sort the versions available in your repo.

This example sorts the results by version number, highest to lowest, and the output is truncated:

[root@ tanqiwei]# yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror, langpacks
Available Packages
 * updates: mirrors.cn99.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.cn99.com
docker-ce.x86_64    3:19.03.1-3.el7                            docker-ce-test   
docker-ce.x86_64    3:19.03.1-3.el7                            docker-ce-stable 
docker-ce.x86_64    3:19.03.0-3.el7                            docker-ce-test   
docker-ce.x86_64    3:19.03.0-3.el7                            docker-ce-stable 
docker-ce.x86_64    3:19.03.0-2.3.rc3.el7                      docker-ce-test   
docker-ce.x86_64    3:19.03.0-2.2.rc2.el7                      docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.5.beta5.el7                    docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.4.beta4.el7                    docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.3.beta3.el7                    docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.2.beta2.el7                    docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.1.beta1.el7                    docker-ce-test   
docker-ce.x86_64    3:18.09.8-3.el7                            docker-ce-test   
docker-ce.x86_64    3:18.09.8-3.el7                            docker-ce-stable 
docker-ce.x86_64    3:18.09.7-3.el7                            docker-ce-test   
docker-ce.x86_64    3:18.09.7-3.el7                            docker-ce-stable 
docker-ce.x86_64    3:18.09.7-2.1.rc1.el7                      docker-ce-test   
...
 * base: mirrors.cn99.com

The list returned depends on which repositories are enabled, and it is specific to your version of CentOS (indicated by the .el7 suffix in this example).

Install a specific version by its fully qualified package name: the package name (docker-ce) plus the version string (2nd column), starting at the first colon (:) up to the first hyphen, joined by a hyphen (-). For example: docker-ce-18.09.1.

Install Docker 18.09.7:

yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io

The download started fast, then slowed down painfully, but it finally finished installing.

Docker is installed but not yet started. The docker group has been created, but no users have been added to it.

Start Docker

Start Docker by running:

systemctl start docker

Verify the installation

First check whether the service started successfully:

service docker status

Verify that Docker Engine - Community is installed correctly by running the hello-world image:

docker run hello-world

This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.

Docker Engine - Community is now installed and running. If you are working as a non-root user, you currently need sudo to run Docker commands.
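
Optionally, a non-root user can be added to the docker group so sudo is no longer required (a sketch; youruser is a placeholder, and the user must log out and back in for the group change to take effect):

# Optional: let a non-root user run docker without sudo (youruser is a placeholder)
usermod -aG docker youruser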

Check that the version is correct:

docker version

Set Docker to start on boot

systemctl enable docker && systemctl restart docker && service docker status

Configure the Docker daemon options

Perform the following on each of the three virtual machines:

vim /etc/docker/daemon.json

and write:

{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

systemctl daemon-reload
systemctl restart docker
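
The native.cgroupdriver=systemd setting matters because kubeadm expects the kubelet and the container runtime to use the same cgroup driver, and systemd is the recommended one. A quick check that the daemon picked up the new setting:

# Should print: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"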

Test pulling an image from Docker Hub:

docker pull kubeguide/redis-master

Building the K8S Cluster

CentOS 7 enables the firewall service (firewalld) by default, and there is heavy network communication between the Kubernetes master and worker nodes. The secure approach is to open the ports each component needs for inter-communication on the firewall.

Since this is a hands-on test in a trusted internal network, we can simply disable the firewall service:

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
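
For reference, in a production network where firewalld stays enabled, a sketch of opening the main control-plane ports instead (port list per the Kubernetes 1.15 docs; workers mainly need 10250/tcp plus the NodePort range 30000-32767/tcp):

# Production alternative: open the required ports instead of disabling firewalld
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
firewall-cmd --reload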


It is recommended to disable SELinux on the hosts so that containers can read the host filesystem. Either run the commands below, or edit /etc/sysconfig/selinux, change SELINUX=enforcing to SELINUX=disabled, and then reboot Linux.

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
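
To confirm the change took effect (an extra check, not in the original steps), getenforce should report Permissive immediately after setenforce 0, and Disabled after the reboot:

getenforce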

Reboot, and after the reboot remember to check that Docker is running (we enabled it to start on boot earlier).

Disable swap

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
grep -v swap /etc/fstab_bak > /etc/fstab
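
kubeadm's preflight checks fail when swap is enabled, which is why this step is required. To confirm swap is fully off (a quick extra check):

# The Swap line should show 0 total, 0 used, 0 free
free -m | grep -i swap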

Configure iptables to handle bridged IPv4/IPv6 traffic

vim /etc/sysctl.d/k8s.conf

and write:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the configuration:

sysctl --system
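
If sysctl complains that these bridge keys do not exist, the br_netfilter kernel module is probably not loaded; a common fix (an assumption about your kernel state, not a step from the original run):

modprobe br_netfilter
sysctl --system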

Install the kubeadm toolset

Edit the repo file:

vim /etc/yum.repos.d/kubernetes.repo

and write:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Install the packages:

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

Then run:

systemctl enable kubelet
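
An optional sanity check that the pinned 1.15.0 packages were actually installed:

kubelet --version
kubeadm version -o short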

Initialize the cluster

kubeadm init --apiserver-advertise-address=192.168.189.145 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

Note that --apiserver-advertise-address in the command above is the master node's IP address; run this command on the master node.
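
As the init output below also mentions ('kubeadm config images pull'), the control-plane images can be pulled in advance, which makes init itself faster and failures easier to diagnose:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0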

The output is:

[root@k8s-master tanqiwei]# kubeadm init --apiserver-advertise-address=192.168.189.145 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.189.145]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.189.145 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.189.145 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 47.005142 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4vlw30.paeiwou9nmcslgjb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.189.145:6443 --token 4vlw30.paeiwou9nmcslgjb \
    --discovery-token-ca-cert-hash sha256:7d468fa1c5b477ae33689abc26bb0aef47293fd29348cf5a54070559f21751cb 

Take note of the token and the discovery-token-ca-cert-hash in this output.
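
By default the bootstrap token expires after 24 hours. If it is lost or has expired by the time you add a node, a fresh join command can be generated on the master:

kubeadm token create --print-join-command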

Next, switch to a regular user and run the following commands, taken from the output above:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Also note that you need to deploy a pod network to the cluster. There are multiple options; see the addons page linked in the output above.
Here we choose flannel.

https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml

In fact, you can also use the copy at this URL:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The two files are identical; if accessing github.com is difficult, choose the second one.

Deploy flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
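
You can then watch the flannel and CoreDNS pods come up (an optional check, not in the original run):

kubectl get pods -n kube-system -o wide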

Join nodes to the cluster

Then, on the other two nodes, run with root privileges:

kubeadm join 192.168.189.145:6443 --token 4vlw30.paeiwou9nmcslgjb  --discovery-token-ca-cert-hash sha256:7d468fa1c5b477ae33689abc26bb0aef47293fd29348cf5a54070559f21751cb 

Verify the cluster

On the master node, run:

kubectl get nodes
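
The output should look roughly like this once everything settles (the ages are illustrative):

NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   20m   v1.15.0
k8s-node1    Ready    <none>   2m    v1.15.0
k8s-node2    Ready    <none>   2m    v1.15.0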

If all three nodes show up (and eventually reach Ready status), the join succeeded.

Install the K8S Dashboard

The dashboard's GitHub repository is linked here. The current version is v1.10.1; a 2.0.0 beta already exists, but we stay with the stable release for now. All of the commands below are run on the master node.

First:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Change the image value in the file to:

lizhenliang/kubernetes-dashboard-amd64:v1.10.1
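
One way to make that substitution in place with sed (a sketch; it assumes the v1.10.1 manifest's default image is k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1):

sed -i 's#k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1#lizhenliang/kubernetes-dashboard-amd64:v1.10.1#' kubernetes-dashboard.yaml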

Then run:

kubectl apply -f kubernetes-dashboard.yaml

Then check the dashboard pod and service:

kubectl get pod -A -o wide |grep dash
kubectl get svc -A -o wide |grep dash

Check the pod details:

kubectl -n kube-system describe pod

The output is long:

[root@k8s-master Documents]# kubectl -n kube-system describe pod
Name:                 coredns-bccdc95cf-qklvg
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 k8s-master/192.168.189.145
Start Time:           Fri, 27 Sep 2019 13:50:19 +0800
Labels:               k8s-app=kube-dns
                      pod-template-hash=bccdc95cf
Annotations:          <none>
Status:               Running
IP:                   10.244.0.5
Controlled By:        ReplicaSet/coredns-bccdc95cf
Containers:
.....
Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  2m41s  default-scheduler   Successfully assigned kube-system/kubernetes-dashboard-79ddd5-hhff6 to k8s-node2
  Normal  Pulling    2m40s  kubelet, k8s-node2  Pulling image "lizhenliang/kubernetes-dashboard-amd64:v1.10.1"
  Normal  Pulled     2m11s  kubelet, k8s-node2  Successfully pulled image "lizhenliang/kubernetes-dashboard-amd64:v1.10.1"
  Normal  Created    2m11s  kubelet, k8s-node2  Created container kubernetes-dashboard
  Normal  Started    2m11s  kubelet, k8s-node2  Started container kubernetes-dashboard

Because there is too much output in the middle, those lines are replaced with an ellipsis.

Check what Docker is running:

docker ps

Create a login account

Because no user has been created yet, you cannot log in. User creation follows this reference.

You can create one with the following two commands:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

Next, use the following command to retrieve the login token:

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

The output is as follows:

[root@k8s-master Documents]# kubectl get pod -A -o wide |grep dash
kube-system   kubernetes-dashboard-79ddd5-hhff6    1/1     Running   0          7m27s   10.244.2.8        k8s-node2    <none>           <none>
[root@k8s-master Documents]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-9v4f2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7a49b752-5b4c-47b7-91a9-5f2da7ee62a7

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOXY0ZjIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2E0OWI3NTItNWI0Yy00N2I3LTkxYTktNWYyZGE3ZWU2MmE3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.JjZnMwBia9XXZf8YXqpmP0EgZgJBCm48FUscls1v_5dACQ1yTfTb7eC9iaV3C4oAr23_qSi8gslbVp4TYDMxa_g-7jk1qEM5KIPtxpEMRbiY7X3yr2PZZLCyPn8LFc6WEASeUkCrPVVCYEw_lk45nnnseS-WG3FA4o9DM3Yba9Z7I7WpzINYl55mWY3m2uqL2l_Rl-CGQzFWLxUw-DDIAuz-IFtD4YF23zDGH7l9yNcbsFOmNmfRTt0jPEraCUdqOcmh0DqgrfX8iTRhCQ2gC4oLe23vuqZV_q18QagtpTEzR54Cca28uDnYC1zCEy-25Y3z4pSzP73EYvKd6oxgag

Then start a local proxy to the API server:

kubectl proxy

Login URL (kubectl proxy listens only on localhost, so open this in a browser on the master node):

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

At this point, the installation is complete.
