
Installation Environment
Hardware Requirements
Memory: 3 GB of RAM or more
CPU: 2 cores or more
Disk: 20 GB or more
Environment used in this guide:
**Operating System:** CentOS 10
master: 192.168.23.133
node01: 192.168.23.135
node02: 192.168.23.134
Environment Preparation
Disable the firewall and SELinux
Disable the firewall
systemctl stop firewalld && systemctl disable firewalld && iptables -F
Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0
Disable the swap partition
Disable temporarily
swapoff -a
Disable swap permanently
The permanent change only takes effect after a reboot, so also run the temporary command above.
sed -ri 's/.*swap.*/#&/' /etc/fstab
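As a quick sanity check, the Swap line in free's output should now read all zeros:
free -h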
Edit the hosts file
Set the hostnames
This step is optional, but each hostname must be unique and must match the entries added to the hosts file below.
On master:
hostnamectl set-hostname master.local
On node01:
hostnamectl set-hostname node01.local
On node02:
hostnamectl set-hostname node02.local
Edit the local hosts file
vi /etc/hosts
Add the following entries:
192.168.23.133 master.local
192.168.23.135 node01.local
192.168.23.134 node02.local
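One way to verify the names resolve as expected is to query each of them through the system resolver on every machine:
for h in master.local node01.local node02.local; do getent hosts $h; done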
Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
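Each value can be queried back as a sanity check (expected output: 1). Note that the net.bridge.* keys only exist once the br_netfilter module is loaded, which the next step does:
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables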
Load the ip_vs kernel modules
These modules are mandatory only when kube-proxy runs in ipvs mode; this guide uses iptables mode. Note that on kernels 4.19 and later, nf_conntrack replaces the old nf_conntrack_ipv4 module.
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
Load the modules automatically at boot
cat > /etc/modules-load.d/ip_vs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
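A quick way to confirm the modules are actually loaded:
lsmod | grep -e ip_vs -e nf_conntrack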
Install Docker
Configure the yum repository
Aliyun's mirror is used here.
yum install wget -y
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
Install the Docker packages
yum install docker-ce docker-ce-cli -y
Edit the Docker configuration file
Edit /etc/docker/daemon.json:
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://gqs7xcfd.mirror.aliyuncs.com","https://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
Start the Docker service
systemctl daemon-reload && systemctl enable docker && systemctl start docker
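To confirm Docker is up and picked up the systemd cgroup driver configured above:
docker info | grep -i 'cgroup driver'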
Install Kubernetes
Configure the yum repository
Aliyun's mirror is used here.
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl
List all available versions
yum list kubelet --showduplicates
Install a specific version
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2 --nogpgcheck
For a regular (latest-version) install, also add --nogpgcheck to skip GPG signature verification:
yum install -y kubelet kubeadm kubectl --nogpgcheck
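To check which versions actually got installed:
kubeadm version -o short
kubectl version --client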
Enable kubelet at boot
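The original lists no command for this step; the standard one is below (an assumption here). Do not add --now: kubelet will crash-loop until kubeadm init or kubeadm join writes its configuration, which is expected.
systemctl enable kubelet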
Configure the CRI
Fix the containerd CRI issue
Make sure the configuration has been properly overwritten, then edit it:
containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml
Make sure both of the following changes are in place:
[plugins."io.containerd.grpc.v1.cri"] sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9" [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] SystemdCgroup = true
|
Restart containerd:
systemctl daemon-reexec
systemctl restart containerd
systemctl enable containerd
Verify that the CRI plugin is enabled:
ctr plugins list | grep cri
The output should include:
io.containerd.grpc.v1.cri - ok
Make sure containerd started successfully and is listening on the CRI socket
Confirm that the following file exists:
ls /run/containerd/containerd.sock
If it exists, try running:
crictl --runtime-endpoint=unix:///run/containerd/containerd.sock ps -a
If this prints a (possibly empty) container list, containerd is running correctly.
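To avoid passing --runtime-endpoint on every crictl invocation, the endpoint can be persisted in crictl's standard config file:
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF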
Deploy the Kubernetes Master Node
Initialize the master node
If a previous initialization attempt failed, run the following commands to clean up before re-initializing:
kubeadm reset -f
rm -rf ~/.kube
rm -rf /etc/cni/net.d /opt/cni
If this is the first run, execute the following:
kubeadm init \
  --kubernetes-version=1.28.2 \
  --apiserver-advertise-address=192.168.23.133 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --ignore-preflight-errors=Mem
| Parameter | Type | Meaning | Example |
| --- | --- | --- | --- |
| --kubernetes-version | string | Kubernetes version to initialize. Controls the component versions and must match a version available in the image repository. | 1.28.2 |
| --apiserver-advertise-address | IP address | IP address the master advertises for external communication. 0.0.0.0 means listen on all available IPs (single-node or debugging scenarios). | 192.168.23.133 or 0.0.0.0 |
| --service-cidr | CIDR range | Virtual IP range used by cluster Service objects; must not overlap with the host or Pod networks. | 10.96.0.0/16 (default) |
| --pod-network-cidr | CIDR range | IP range allocated to Pods; must match the network plugin (e.g. Calico, Flannel) configuration. | 10.244.0.0/16 |
| --image-repository | registry address | Registry to pull Kubernetes images from. The Aliyun mirror is recommended in mainland China, since gcr.io is blocked. | registry.aliyuncs.com/google_containers |
| --ignore-preflight-errors=Mem | string list | Skips the preflight minimum-memory check. Kubeadm requires ≥ 1700 MB of RAM; below that the check must be ignored explicitly. | Mem |
Next, wait for the images to be pulled; a successful run ends with output like the following:
root@master:/home/ascotbe# kubeadm init \
  --kubernetes-version=1.28.2 \
  --apiserver-advertise-address=192.168.23.133 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --ignore-preflight-errors=Mem
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.local] and IPs [10.96.0.1 192.168.23.133]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master.local] and IPs [192.168.23.133 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master.local] and IPs [192.168.23.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 3.501959 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: cgu3ya.j0j6a5c7iuwlrwo3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.23.133:6443 --token cgu3ya.j0j6a5c7iuwlrwo3 \
        --discovery-token-ca-cert-hash sha256:4bee84018e33091640c109ff15b204140c88777a0db943ddbee47ddb93260b3c
Configure kubectl
These are simply the three commands printed after a successful init:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
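To confirm kubectl can now reach the API server:
kubectl cluster-info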
View node information
root@master:/home/ascotbe# kubectl get nodes
NAME           STATUS     ROLES           AGE   VERSION
master.local   NotReady   control-plane   46s   v1.28.2
At this point only the master node is listed; the others will appear once they join the cluster.
Install flannel
root@master:/home/ascotbe# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
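The master switches from NotReady to Ready once flannel is running. You can watch the flannel pods come up in the kube-flannel namespace created above:
kubectl get pods -n kube-flannel -w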
Join the Worker Nodes to the Cluster
Run the following command on both node01 and node02:
kubeadm join 192.168.23.133:6443 --token cgu3ya.j0j6a5c7iuwlrwo3 \
  --discovery-token-ca-cert-hash sha256:4bee84018e33091640c109ff15b204140c88777a0db943ddbee47ddb93260b3c
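Bootstrap tokens expire after 24 hours by default. If the token above has expired, a fresh join command can be generated on the master:
kubeadm token create --print-join-command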
View the cluster nodes from the master
root@master:/home/ascotbe# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
master.local   Ready    control-plane   26m   v1.28.2
node01.local   Ready    <none>          13m   v1.28.2
node02.local   Ready    <none>          33s   v1.28.2
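The <none> role shown for the workers is purely cosmetic. If desired, a worker role label can be added on the master (an optional step, not part of the original walkthrough):
kubectl label node node01.local node-role.kubernetes.io/worker=
kubectl label node node02.local node-role.kubernetes.io/worker=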