Deploying a Kubernetes 1.13.1 Cluster with kubeadm: A Hands-On Record | CodeSheep · 程序羊

Published: 2020-02-21 11:24
Source: https://www.codesheep.cn

Overview

There are several ways to set up a Kubernetes cluster. For example, in my earlier article 《利用K8S技术栈打造个人私有云(连载之:K8S集群搭建)》 I used the binary installation method. That approach helps you understand how a k8s cluster fits together, but it is quite tedious. kubeadm, on the other hand, is the tool officially provided by Kubernetes for quickly deploying a cluster; it has matured considerably over time, it is very easy to get started with, and it makes the whole operation much simpler, so this article walks through it in detail.

Note: This article was first published on my personal blog, CodeSheep · 程序羊; you are welcome to drop by.


Node Planning

This article deploys a three-node Kubernetes cluster with one master and two workers; the overall node plan is shown in the table below:

Hostname IP Role
k8s-master 192.168.39.79 k8s master node
k8s-node-1 192.168.39.77 k8s worker node
k8s-node-2 192.168.39.78 k8s worker node

The software versions on each node are as follows:

  • Operating system: CentOS-7.4-64Bit
  • Docker version: 1.13.1
  • Kubernetes version: 1.13.1

The following components need to be installed on all nodes:

  • Docker: needs no introduction
  • kubelet: runs on every Node and is responsible for starting containers and Pods
  • kubeadm: responsible for initializing the cluster
  • kubectl: the k8s command-line tool, used to deploy and manage applications and to CRUD all kinds of resources

Preparation

  • Disable the firewall on all nodes
systemctl disable firewalld.service
systemctl stop firewalld.service
  • Disable SELinux
setenforce 0

vi /etc/selinux/config
SELINUX=disabled
  • Turn off swap on all nodes (see the persistence sketch after this list)
swapoff -a
  • Set the hostname on each node
hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-node-1
hostnamectl --static set-hostname k8s-node-2
  • Add hostname/IP entries to hosts resolution on all nodes

Edit the /etc/hosts file and add the following entries:

192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
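Note that swapoff -a only disables swap until the next reboot. A minimal sketch of making it permanent, assuming swap is mounted through /etc/fstab as on a stock CentOS 7 install:

# comment out any swap entries so swap stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab
# confirm that no swap is active
free -h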

Component Installation

0x01. Install Docker (all nodes)

Not covered in detail here!
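For completeness, a minimal sketch of installing Docker from the stock CentOS repositories; this installation route is my assumption, since the article leaves it open, and the base/extras repo happens to ship the Docker 1.13.1 listed in the version table above:

yum install -y docker
systemctl enable docker.service && systemctl start docker.service
docker version   # should report 1.13.1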

0x02. Install kubelet, kubeadm and kubectl (all nodes)

  • First prepare the yum repo
cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
  • Then run the following commands to install them
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

(Figure: installing kubelet, kubeadm and kubectl)
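One caveat worth adding: the Aliyun mirror serves the newest packages, so a plain yum install may pull a release newer than 1.13.1. If you want to match this article exactly, the packages can be pinned; a sketch, assuming the 1.13.1 builds are still available in the mirror:

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1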


Master Node Configuration

0x01. Initialize the k8s cluster

To work around connectivity issues (k8s.gcr.io is not reachable from networks inside China), we have to pull the required images from mirrors in advance and re-tag them:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1     
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1           
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1  
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1           
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1               
docker rmi mirrorgooglecontainers/pause:3.1                        
docker rmi mirrorgooglecontainers/etcd:3.2.24                      
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

(Figure: the prepared images)
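If you want to double-check which images your kubeadm binary actually expects before pulling anything, kubeadm itself can print the list (the subcommand exists in 1.13):

kubeadm config images list --kubernetes-version v1.13.1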

Then run the following command on the Master node to initialize the k8s cluster:

kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16
  • --kubernetes-version: specifies the k8s version to deploy
  • --apiserver-advertise-address: specifies which of the Master's network interfaces to use for communication; if omitted, kubeadm automatically picks the interface that has the default gateway
  • --pod-network-cidr: specifies the Pod network range. The required value depends on the network add-on you use; this article uses the classic flannel solution.
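As a side note, the console log below was actually captured from the equivalent --config form of the command. A minimal sketch of such a kubeadm-config.yaml, assuming the v1beta1 config API shipped with kubeadm 1.13, might look like this:

cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.79
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 10.244.0.0/16
EOF

kubeadm init --config kubeadm-config.yaml

The warning in the log below about the unknown field "\u00a0 podSubnet" appears to be caused by a stray non-breaking space in front of the field name in the author's config file; with ordinary spaces, podSubnet is accepted.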

After the command runs, the console prints the detailed cluster initialization log shown below:

[root@localhost ~]# kubeadm init --config kubeadm-config.yaml
W1224 11:01:25.408209 10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0 podSubnet"
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005638 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123

[root@localhost ~]#

0x02. Configure kubectl

Run the following commands as the root user on the Master to configure kubectl:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG
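A quick sanity check (not part of the original steps) confirms that kubectl can reach the API server:

kubectl cluster-info
kubectl get componentstatuses   # etcd, scheduler and controller-manager should all report Healthy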

0x03. Install the Pod network

A Pod network is the prerequisite for Pods to communicate with one another. k8s supports many network solutions; here we once again choose the classic flannel.

  • First set the kernel parameter (a persistence note follows after this list):
sysctl net.bridge.bridge-nf-call-iptables=1
  • Then run the following command on the Master node:
kubectl apply -f kube-flannel.yaml

kube-flannel.yaml: the file is available here.
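The sysctl command above only affects the running kernel. A small sketch for making the bridge settings survive a reboot, using the standard /etc/sysctl.d mechanism on CentOS 7:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # reload all sysctl configuration files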

Once the Pod network is installed, run the following command to check whether the CoreDNS Pod is up and running; once it is, you can continue with the remaining steps.

kubectl get pods --all-namespaces -o wide

(Figure: checking that all Pods have started properly)

We can also see that the master node is now ready: kubectl get nodes

(Figure: master node status)


Adding the Slave Nodes

Run the following command on each of the two Slave nodes to join them to the k8s cluster that is already up on the Master:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

If you have forgotten the token, run the following on the Master to retrieve it:

kubeadm token list
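Two small caveats, not covered in the original: bootstrap tokens expire after 24 hours by default, and kubeadm token list does not show the CA cert hash. A sketch of regenerating both on the Master:

# print a fresh token together with a complete, ready-to-run join command
kubeadm token create --print-join-command

# or compute the discovery-token-ca-cert-hash by hand from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'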

The output of the kubeadm join command above looks like this:

[root@localhost ~]# kubeadm join 192.168.39.79:6443 --token yndddp.oamgloerxuune80q --discovery-token-ca-cert-hash sha256:7a45c40b5302aba7d8b9cbd3afc6d25c6bb8536dd6317aebcd2909b0427677c8
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.39.79:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.39.79:6443"
[discovery] Requesting info from "https://192.168.39.79:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.39.79:6443"
[discovery] Successfully established connection with API Server "192.168.39.79:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Verification

  • Check the node status
kubectl get nodes

(Figure: node status)

  • Check the status of all Pods
kubectl get pods --all-namespaces -o wide

(Figure: status of all Pods)

The cluster is now up and running. Next, let's look at how to tear it down cleanly.


Tearing Down the Cluster

First drain and delete each node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Once a node has been removed, you can run the following command on it to reset it:

kubeadm reset
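One detail the original does not mention: kubeadm reset does not remove the iptables/IPVS rules that kube-proxy and flannel have created. If you plan to reuse the machines, a rough cleanup sketch is:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C   # only needed if kube-proxy was running in IPVS mode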

Installing the Dashboard

Just as you would pair elasticsearch with a visual management tool, it is a good idea to give the k8s cluster one as well, to make the cluster easier to manage.

So next we install kubernetes-dashboard v1.10.0 for visual cluster management.

  • First pull the image manually and re-tag it (all nodes):
docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
  • Install the dashboard:
kubectl create -f dashboard.yaml

dashboard.yaml: the file is available here.

  • Check whether the dashboard Pod has started properly; if it has, the installation succeeded:
kubectl get pods --namespace=kube-system
[root@k8s-master ~]# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-4rds2 1/1 Running 0 81m
coredns-86c58d9df4-rhtgq 1/1 Running 0 81m
etcd-k8s-master 1/1 Running 0 80m
kube-apiserver-k8s-master 1/1 Running 0 80m
kube-controller-manager-k8s-master 1/1 Running 0 80m
kube-flannel-ds-amd64-8qzpx 1/1 Running 0 78m
kube-flannel-ds-amd64-jvp59 1/1 Running 0 77m
kube-flannel-ds-amd64-wztbk 1/1 Running 0 78m
kube-proxy-crr7k 1/1 Running 0 81m
kube-proxy-gk5vf 1/1 Running 0 78m
kube-proxy-ktr27 1/1 Running 0 77m
kube-scheduler-k8s-master 1/1 Running 0 80m
kubernetes-dashboard-79ff88449c-v2jnc 1/1 Running 0 21s
  • Check the port the dashboard exposes externally
kubectl get service --namespace=kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE     
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5h38m
kubernetes-dashboard NodePort 10.99.242.186 <none> 443:31234/TCP 14
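From the PORT(S) column above, the dashboard is exposed as a NodePort on 31234, so it should be reachable in a browser at https://<any-node-ip>:31234 (for example https://192.168.39.79:31234). A sketch of reading the port programmatically, assuming the service name shown above:

kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'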
  • Generate the private key and certificate signing request:
openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr   # just press Enter through any prompts
  • Generate the SSL certificate:
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
  • Then place the generated dashboard.key and dashboard.crt under /home/share/certs; this path is referenced in the dashboard-user-role.yaml file used in the next step.
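A trivial sketch of putting the files in place; the /home/share/certs path only has to match whatever dashboard-user-role.yaml below expects:

mkdir -p /home/share/certs
cp dashboard.key dashboard.crt /home/share/certs/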

  • Create the dashboard user
kubectl create -f dashboard-user-role.yaml

dashboard-user-role.yaml: the file is available here.

  • Obtain the login token
kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
[root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name: admin-token-9d4vl
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw

With the token generated, we can now open a browser and paste in the token to log in to the cluster management page:

(Figure: logging in to the cluster management page)

(Figure: cluster overview)
