Overview
There are several ways to set up a Kubernetes cluster. In an earlier article, 《利用K8S技术栈打造个人私有云(连载之:K8S集群搭建)》, I used the binary installation method. That approach is good for understanding how a k8s cluster fits together, but it is tedious. kubeadm is the official Kubernetes tool for quickly bootstrapping a cluster; it has matured considerably, and deploying a cluster with it is easy to pick up and much simpler to operate, so this article walks through the process in detail.
Note: this article was first published on my personal blog, CodeSheep·程序羊. You are welcome to visit.
Node Planning
This article deploys a three-node Kubernetes cluster with one master and two worker nodes, laid out as shown in the following table:
| Hostname | IP | Role |
| --- | --- | --- |
| k8s-master | 192.168.39.79 | k8s master node |
| k8s-node-1 | 192.168.39.77 | k8s worker node |
| k8s-node-2 | 192.168.39.78 | k8s worker node |
The software versions on each node are:
- Operating system: CentOS-7.4-64Bit
- Docker: 1.13.1
- Kubernetes: 1.13.1
All nodes need the following components installed:
- Docker: needs no introduction
- kubelet: runs on every node and is responsible for starting containers and Pods
- kubeadm: bootstraps and initializes the cluster
- kubectl: the k8s command-line tool, used to deploy and manage applications and to perform CRUD operations on all kinds of resources
Preparation
Disable the firewall on all nodes:

```bash
systemctl disable firewalld.service
systemctl stop firewalld.service
```

Disable SELinux:

```bash
setenforce 0
vi /etc/selinux/config
# set the following in the file:
SELINUX=disabled
```

Set the hostname (run the matching command on the corresponding machine):

```bash
hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-node-1
hostnamectl --static set-hostname k8s-node-2
```

Edit the /etc/hosts file on every node and add:

```
192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
```
Component Installation
0x01. Install Docker (all nodes)
Not covered in detail here; a minimal sketch follows for reference.
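The sketch below assumes the Docker 1.13.1 package that CentOS 7 ships in its extras repository, which matches the Docker version listed above; if you install Docker some other way, adjust accordingly:

```bash
yum install -y docker          # CentOS 7 extras repo provides Docker 1.13.1
systemctl enable docker
systemctl start docker
docker version                 # verify that client and server report 1.13.1
```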
0x02. Install kubelet, kubeadm, and kubectl (all nodes)
First add a yum repository for Kubernetes (the Aliyun mirror is used here):

```bash
cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
```

Then disable SELinux and install the components:

```bash
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
```
(Figure: installing kubelet, kubeadm, and kubectl)
Master Node Configuration
0x01. Initialize the k8s cluster
Because the required images are difficult to reach from networks inside China, we pull them from mirrors ahead of time and re-tag them:
```bash
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
```
(Figure: the prepared images)
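Equivalently, the pull/tag/cleanup sequence for the k8s.gcr.io images can be written as a small loop over the same image list (a sketch; coredns and flannel come from different source registries and are handled exactly as in the block above):

```bash
images=(
  kube-apiserver:v1.13.1
  kube-controller-manager:v1.13.1
  kube-scheduler:v1.13.1
  kube-proxy:v1.13.1
  pause:3.1
  etcd:3.2.24
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
  docker rmi  "mirrorgooglecontainers/${img}"
done
```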
Then run the following command on the Master node to initialize the k8s cluster:
```bash
kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16
```
- --kubernetes-version: specifies the k8s version to install
- --apiserver-advertise-address: specifies which of the Master's network interfaces the API server advertises on; if omitted, kubeadm picks the interface with the default gateway
- --pod-network-cidr: specifies the Pod network address range; the value depends on the network add-on you choose, and this article uses the classic flannel add-on
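The same settings can also be supplied through a configuration file, which is what the console log below was actually produced with (kubeadm init --config kubeadm-config.yaml). A minimal sketch of such a file, assuming the v1beta1 kubeadm API used by Kubernetes 1.13, might look like this:

```bash
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: "10.244.0.0/16"
EOF
kubeadm init --config kubeadm-config.yaml
```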
After the command is run, the console prints a detailed log of the cluster initialization process:
```bash
[root@localhost ~]# kubeadm init --config kubeadm-config.yaml
W1224 11:01:25.408209 10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0 podSubnet"
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005638 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123

[root@localhost ~]#
```
0x02. Configure kubectl
On the Master, run the following as root to configure kubectl:
```bash
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG
```
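Alternatively, for a regular (non-root) user, the kubeadm init output above already gives the equivalent setup:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```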
0x03. Install the Pod network
A Pod network is required for Pods to communicate with one another. k8s supports many network add-ons; here we again choose the classic flannel.
First make sure bridged traffic is passed to iptables:

```bash
sysctl net.bridge.bridge-nf-call-iptables=1
```
Then apply the flannel manifest:

```bash
kubectl apply -f kube-flannel.yaml
```

The kube-flannel.yaml file is available here.
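If that file is not at hand, the flannel project publishes an equivalent manifest in its GitHub repository; a sketch is shown below, but the exact URL and tag are an assumption and should be checked against the flannel release matching the v0.10.0-amd64 image tagged earlier:

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
```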
Once the Pod network is installed, run the following command to check whether the CoreDNS Pods are running; once they are, you can move on to the next steps:
```bash
kubectl get pods --all-namespaces -o wide
```
(Figure: checking that all Pods have started normally)
We can also see that the master node is now ready by running kubectl get nodes.
(Figure: master node status)
Adding the Slave Nodes
Run the following command on each of the two Slave nodes to join them to the k8s cluster that is already up on the Master:
```bash
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
```
If you have forgotten the token, you can retrieve it on the Master with the following command:
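For example, using kubeadm's standard token subcommands (kubeadm token create --print-join-command prints a complete, ready-to-paste join command):

```bash
kubeadm token list
# or generate a fresh token together with the full join command:
kubeadm token create --print-join-command
```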
The output of the kubeadm join command above looks like this:
```bash
[root@localhost ~]# kubeadm join 192.168.39.79:6443 --token yndddp.oamgloerxuune80q --discovery-token-ca-cert-hash sha256:7a45c40b5302aba7d8b9cbd3afc6d25c6bb8536dd6317aebcd2909b0427677c8
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.39.79:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.39.79:6443"
[discovery] Requesting info from "https://192.168.39.79:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.39.79:6443"
[discovery] Successfully established connection with API Server "192.168.39.79:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
Verification
(Figure: node status)
```bash
kubectl get pods --all-namespaces -o wide
```
(Figure: status of all Pods)
The cluster is now up and running. Next, let's look at how to tear it down cleanly.
Tearing Down the Cluster
First drain and remove each node:
```bash
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
```
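For example, for the first worker node in this cluster (node name taken from the planning table above):

```bash
kubectl drain k8s-node-1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node-1
```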
Once the nodes have been removed, you can run the following command to reset the cluster:
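The command referenced here is kubeadm's standard reset subcommand, run on each node being wiped (a minimal sketch):

```bash
kubeadm reset
```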
Installing the Dashboard
Just as you would give Elasticsearch a visual management tool, it is best to give the k8s cluster one as well, to make managing it easier.
So next we install kubernetes-dashboard v1.10.0 for visual cluster management.
```bash
docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
```
Then deploy the dashboard:

```bash
kubectl create -f dashboard.yaml
```

The dashboard.yaml file is available here.
- Check whether the dashboard Pod has started normally; if it has, the installation succeeded:
```bash
kubectl get pods --namespace=kube-system
```
```bash
[root@k8s-master ~]# kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-4rds2                1/1     Running   0          81m
coredns-86c58d9df4-rhtgq                1/1     Running   0          81m
etcd-k8s-master                         1/1     Running   0          80m
kube-apiserver-k8s-master               1/1     Running   0          80m
kube-controller-manager-k8s-master      1/1     Running   0          80m
kube-flannel-ds-amd64-8qzpx             1/1     Running   0          78m
kube-flannel-ds-amd64-jvp59             1/1     Running   0          77m
kube-flannel-ds-amd64-wztbk             1/1     Running   0          78m
kube-proxy-crr7k                        1/1     Running   0          81m
kube-proxy-gk5vf                        1/1     Running   0          78m
kube-proxy-ktr27                        1/1     Running   0          77m
kube-scheduler-k8s-master               1/1     Running   0          80m
kubernetes-dashboard-79ff88449c-v2jnc   1/1     Running   0          21s
```
- Check the dashboard Service as well:

```bash
kubectl get service --namespace=kube-system
```
```bash
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   5h38m
kubernetes-dashboard   NodePort    10.99.242.186   <none>        443:31234/TCP   14
```
- Generate a private key and certificate signing request for the dashboard:

```bash
openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr   # if prompted for input, just press Enter through the prompts
```
- Then generate the self-signed SSL certificate:

```bash
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
```
- Then place the generated dashboard.key and dashboard.crt under /home/share/certs (see the example below); this path is referenced by the dashboard-user-role.yaml file used in the next step.
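For example, using the path exactly as described above:

```bash
mkdir -p /home/share/certs
cp dashboard.key dashboard.crt /home/share/certs/
```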
- Create the dashboard user and role binding:

```bash
kubectl create -f dashboard-user-role.yaml
```

The dashboard-user-role.yaml file is available here.
- Retrieve the login token:

```bash
kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
```
```bash
[root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name:         admin-token-9d4vl
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw
```
Now that the token has been generated, open a browser and use it to log into the cluster management page:
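Given the NodePort shown earlier (443:31234), the dashboard is reached at an address of the following form (the IP here is this cluster's master node; substitute any node's IP, accept the self-signed certificate warning, then choose Token login and paste the token above):

```
https://192.168.39.79:31234
```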
(Figure: logging into the cluster management page)
(Figure: cluster overview)