Deploying a Highly Available Kubernetes Cluster from Binaries

By 雨中笑 · k8s

Overview: hands-on notes from deploying a Kubernetes cluster with three master nodes and one worker node.

1. Environment Planning

  • Three master nodes and one worker node
  • SSH port 22 is blocked; port 2222 is used instead
Check the OS release and kernel version:
cat /etc/redhat-release ; uname -r
CentOS Linux release 7.9.2009 (Core)
3.10.0-1160.71.1.el7.x86_64

2. Setting Up the Three Masters

Unless a step says otherwise, run it on every master.

2.1. Set the hostname (run the matching line on its node)

hostnamectl set-hostname k8s-master-site-01
hostnamectl set-hostname k8s-master-site-02
hostnamectl set-hostname k8s-master-site-03
bash

2.2. Update /etc/hosts

cat >> /etc/hosts <<EOF
172.21.74.3 k8s-master-site-01
172.21.74.4 k8s-master-site-02
172.21.74.5 k8s-master-site-03
EOF
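The three master hostnames recur in many commands below (ssh-copy-id, scp). As a convenience not in the original steps, they can be kept in one shell array and looped over; `MASTERS` is a name chosen here for illustration:

```shell
# Hypothetical helper array; loop over it instead of repeating hostnames.
MASTERS=(k8s-master-site-01 k8s-master-site-02 k8s-master-site-03)
for h in "${MASTERS[@]}"; do
  echo "$h"
done
```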

2.3. Disable SELinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

2.4. Disable swap

swapoff -a
sed --in-place=.bak 's/.*swap.*/#&/g' /etc/fstab
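The sed call comments out every line containing "swap" and keeps a .bak backup. A dry run against a sample fstab under /tmp (paths here are illustrative) shows the effect without touching the real file:

```shell
# Build a two-line sample fstab, apply the same substitution, and inspect it.
cat > /tmp/fstab.sample << 'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed --in-place=.bak 's/.*swap.*/#&/g' /tmp/fstab.sample
grep '^#' /tmp/fstab.sample   # only the swap line is commented out
```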
2.5. Disable firewalld and NetworkManager

systemctl stop firewalld ; systemctl disable firewalld
systemctl stop NetworkManager; systemctl disable NetworkManager

2.6. Passwordless SSH between the hosts

Only k8s-master-site-01 needs trust to the three nodes.
· Generate an RSA key pair
ssh-keygen -t rsa
· Push the public key for passwordless login
ssh-copy-id k8s-master-site-01
ssh-copy-id k8s-master-site-02
ssh-copy-id k8s-master-site-03

Since port 22 is blocked here, use the non-standard port instead:
ssh-copy-id -i /root/.ssh/id_rsa.pub -p2222 root@k8s-master-site-01
ssh-copy-id -i /root/.ssh/id_rsa.pub -p2222 root@k8s-master-site-02
ssh-copy-id -i /root/.ssh/id_rsa.pub -p2222 root@k8s-master-site-03
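Alternatively, the non-standard port can be recorded once in the SSH client config, after which the -p/-P 2222 flags on the later ssh/scp commands become unnecessary. A sketch, assuming the hostnames used throughout this guide:

```shell
# Persist port 2222 for all cluster hosts in root's ssh client config.
cat >> /root/.ssh/config << 'EOF'
Host k8s-master-site-0? k8s-worker-site-0?
    Port 2222
    User root
EOF
chmod 600 /root/.ssh/config
```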

2.7. Raise resource limits

cat >> /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

2.8. Switch the yum repositories

yum install -y wget
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bak
wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
yum -y update

2.9. Configure time synchronization

yum install -y chrony

Write /etc/chrony.conf:
cat > /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
logchange 0.5
logdir /var/log/chrony
EOF

Restart the service:
systemctl restart chronyd
systemctl status chronyd

2.10. Upgrade the kernel (optional)

### Update the existing packages first
### Download the kernel (4.19+ recommended; the stock kernel also works)
### Install the kernel (the directory should contain only these two rpm packages)
### Make the new kernel the default boot entry
### Check that the newest kernel is the default
cd /tmp
yum update --exclude=kernel* -y
curl -o kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
curl -o kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
yum localinstall -y *.rpm
grub2-set-default 0
grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
### Reboot the server
reboot

2.11. Install base tools

yum install -y device-mapper-persistent-data net-tools nfs-utils jq psmisc git lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet ipset sysstat libseccomp

2.12. Configure kernel modules and parameters

cat > /etc/modules-load.d/k8s.conf << EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
overlay
br_netfilter
EOF
## Load the modules automatically at boot
systemctl enable systemd-modules-load.service --now
## Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
## Apply the settings
sysctl --system
### Reboot, then confirm the modules loaded
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
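Long sysctl fragments like the one above are easy to get wrong, because the last occurrence of a duplicated key silently wins. A small awk pass (shown here against a sample file under /tmp) lists any key that appears more than once:

```shell
# Sample fragment with one intentional repeat.
cat > /tmp/sysctl.sample.conf << 'EOF'
net.ipv4.tcp_max_syn_backlog = 16384
net.core.somaxconn = 16384
net.ipv4.tcp_max_syn_backlog = 16384
EOF
# Print every key that occurs more than once; run it against
# /etc/sysctl.d/k8s.conf the same way.
awk -F'=' '{key = $1; gsub(/[ \t]/, "", key); seen[key]++}
           END {for (k in seen) if (seen[k] > 1) print k}' /tmp/sysctl.sample.conf
```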

2.13. Install the container runtime

### Install containerd
## no longer available: yum install -y containerd.io-1.6.6

Add the Alibaba Cloud yum repository:
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list | grep containerd
yum -y install containerd.io

### Generate the default config file
cp /etc/containerd/config.toml /etc/containerd/config.toml.init.bak
containerd config default > /etc/containerd/config.toml
### Adjust the config file
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#g' /etc/containerd/config.toml
### Configure a registry mirror
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#g' /etc/containerd/config.toml
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://registry-1.docker.io"
[host."https://xpd691zc.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve", "push"]
EOF
## Enable and start
systemctl daemon-reload ; systemctl enable containerd --now
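One pitfall with the sandbox_image substitution: sed only fires if the generated config contains that exact default string, and newer containerd releases ship a different default (registry.k8s.io/pause). The sketch below reproduces the edit on a sample file under /tmp; grep the real /etc/containerd/config.toml the same way to confirm the edit took:

```shell
# Reproduce the substitution on a one-line sample and count the matches.
cat > /tmp/config.sample.toml << 'EOF'
    sandbox_image = "k8s.gcr.io/pause:3.6"
EOF
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#g' /tmp/config.sample.toml
grep -c 'registry.aliyuncs.com' /tmp/config.sample.toml   # 1 when the edit matched
```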

----
`Install crictl`
### Download the binary package
cd /tmp
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-amd64.tar.gz
### Extract
tar -xf crictl-v1.25.0-linux-amd64.tar.gz
### Move into the PATH
mv crictl /usr/local/bin/
### Configure
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
### Restart to pick up the config
systemctl restart containerd

2.14. Install the HA components: keepalived and nginx

Install the packages

yum install nginx keepalived nginx-mod-stream -y
## Back up and empty the stock configs
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.init.bak
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.init.bak
cat /dev/null > /etc/nginx/nginx.conf
cat /dev/null > /etc/keepalived/keepalived.conf
Write the config files

Identical on all three nodes

/etc/nginx/nginx.conf:

worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream kube-apiserver {
        server k8s-master-site-01:6443 weight=5 max_fails=3 fail_timeout=30s;
        server k8s-master-site-02:6443 weight=5 max_fails=3 fail_timeout=30s;
        server k8s-master-site-03:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 16443;
        proxy_pass kube-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    default_type application/octet-stream;

    include /etc/nginx/mime.types;

    server {
        listen 80 default_server;
        server_name _;
        location / {}
    }
}

/etc/keepalived/keepalived.conf (the values you must change are marked):

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }

    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32      # change to your network interface
    virtual_router_id 51
    priority 100         # 100 on the master; 90 and 80 on the other two nodes
    advert_int 1

    authentication {
        auth_type PASS
        auth_pass 9KStn9nr
    }

    virtual_ipaddress {
        172.21.74.20     # your virtual IP
    }

    track_script {
        check_nginx
    }
}
/etc/keepalived/check_nginx.sh:
#!/bin/bash
counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")

if [ "$counter" -eq 0 ]; then
    service nginx start
    sleep 2
    counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
    if [ "$counter" -eq 0 ]; then
        service keepalived stop
    fi
fi
## Make it executable
chmod +x /etc/keepalived/check_nginx.sh

nginx -t
systemctl daemon-reload
systemctl enable nginx keepalived --now
systemctl stop nginx keepalived  ## stop them on the master and check that the VIP fails over
ip -4 a
systemctl start nginx keepalived ## expected: the VIP floats to another node and returns to the first node once it recovers
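During the failover test, knowing which node currently holds the VIP is just a grep over `ip -4 a` output. A sketch against canned output (the VIP is the 172.21.74.20 configured in keepalived.conf; on a real node, pipe `ip -4 a` instead of the sample text):

```shell
VIP=172.21.74.20
# Canned `ip -4 addr` output for illustration only.
sample='inet 172.21.74.3/24 brd 172.21.74.255 scope global ens32
inet 172.21.74.20/32 scope global ens32'
printf '%s\n' "$sample" | grep -q "inet ${VIP}/" && echo "VIP present on this node"
```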

2.15. Install the cfssl tools

## Install on k8s-master-site-01 only
# Download page: https://github.com/cloudflare/cfssl/releases
# Three binaries are needed: cfssl_linux-amd64, cfssljson_linux-amd64, cfssl-certinfo_linux-amd64
# The steps below use the latest release, 1.6.4
mkdir /root/cfssl
cd /root/cfssl
Upload the downloaded binaries (zipped) to this directory, then:
chmod +x cfssl*
mv cfssl_1.6.4_linux_amd64 /usr/local/bin/cfssl
mv cfssl-certinfo_1.6.4_linux_amd64 /usr/local/bin/cfssl-certinfo
mv cfssljson_1.6.4_linux_amd64 /usr/local/bin/cfssljson

2.16. Set Up the CA

## On k8s-master-site-01 only
# Create the CA certificate signing request
mkdir -p /root/cfssl/pki
cd /root/cfssl/pki/
vim /root/cfssl/pki/ca-csr.json
{"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Gudong","L": "Foshan","O": "k8s","OU": "system"}],"ca": {"expiry": "87600h"}}
# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# Create the CA signing config
vim /root/cfssl/pki/ca-config.json
{"signing": {"default": {"expiry": "87600h"},"profiles": {"kubernetes": {"usages": ["signing","key encipherment","server auth","client auth"],"expiry": "87600h"}}}}

# Generate the etcd certificate
## Create the CSR file
vim /root/cfssl/pki/etcd-csr.json
{"CN": "etcd","hosts": ["127.0.0.1","172.21.74.3","172.21.74.4","172.21.74.5"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Sichuan","L": "Chengdu","O": "k8s","OU": "system"}]}
## Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

# Generate the apiserver certificate
## Create the token.csv file
cd /root/cfssl/pki/
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
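The token is 16 random bytes rendered as 32 hex characters, and the kubelet bootstrap step later extracts it back out of token.csv with awk. A self-contained round trip under /tmp (so the real /etc/kubernetes files are untouched):

```shell
# Generate a token, write a sample token.csv line, and extract the first field.
token=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${token},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /tmp/token.sample.csv
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /tmp/token.sample.csv)
echo "${#BOOTSTRAP_TOKEN}"   # 32
```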
## Create the CSR file (authorizes the master IPs, the first service IP 10.20.0.1, and the cluster DNS names; add the VIP 172.21.74.20 as well if clients will reach the apiserver through it)
vim /root/cfssl/pki/kube-apiserver-csr.json
{"CN":"kubernetes","hosts":["127.0.0.1","172.21.74.3","172.21.74.4","172.21.74.5","10.20.0.1","kubernetes","kubernetes.default","kubernetes.default.svc","kubernetes.default.svc.cluster","kubernetes.default.svc.cluster.local"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Gudong","L": "Foshan","O": "k8s","OU": "system"}]}
## Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

# Generate the kubectl (admin) certificate
## Create the CSR file
vim /root/cfssl/pki/admin-csr.json
{"CN": "admin","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Gudong","L": "Foshan","O": "system:masters","OU": "system"}]}
## Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Generate the controller-manager certificate
## Create the CSR file
vim /root/cfssl/pki/kube-controller-manager-csr.json
{"CN": "system:kube-controller-manager","key": {"algo": "rsa","size": 2048},"hosts": ["127.0.0.1","172.21.74.3","172.21.74.4","172.21.74.5"],"names": [{"C": "CN","ST": "Gudong","L": "Foshan","O": "system:kube-controller-manager","OU": "system"}]}
## Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

# Generate the scheduler certificate
## Create the CSR file
vim /root/cfssl/pki/kube-scheduler-csr.json
{"CN": "system:kube-scheduler","hosts": ["127.0.0.1","172.21.74.3","172.21.74.4","172.21.74.5"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Gudong","L": "Foshan","O": "system:kube-scheduler","OU": "system"}]}
## Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

# Generate the kube-proxy certificate
## Create the CSR file
vim /root/cfssl/pki/kube-proxy-csr.json
{"CN": "system:kube-proxy","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Gudong","L": "Foshan","O": "k8s","OU": "system"}]}
## Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

2.17. Install the etcd HA Cluster

Download the etcd binary package
# Releases: https://github.com/etcd-io/etcd/releases
# Kubernetes 1.25.9 pairs with etcd v3.5.5; the steps below were actually run with v3.4.27
# Resulting tarball: etcd-v3.4.27-linux-amd64.tar.gz
mkdir /tmp/etcd
cd /tmp/etcd
tar -xf etcd-v3.4.27-linux-amd64.tar.gz
cp -ar etcd-v3.4.27-linux-amd64/etcd* /usr/local/bin/
chmod +x /usr/local/bin/etcd*
# Copy to the other masters
scp -P 2222 /usr/local/bin/etcd* k8s-master-site-02:/usr/local/bin/
scp -P 2222 /usr/local/bin/etcd* k8s-master-site-03:/usr/local/bin/
Create the etcd config files
On k8s-master-site-01:
mkdir -p /etc/etcd/
vim /etc/etcd/etcd.conf

#[Member1]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.21.74.3:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.21.74.3:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.74.3:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.21.74.3:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.21.74.3:2380,etcd2=https://172.21.74.4:2380,etcd3=https://172.21.74.5:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
On k8s-master-site-02:
mkdir -p /etc/etcd/
vim /etc/etcd/etcd.conf

#[Member2]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.21.74.4:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.21.74.4:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.74.4:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.21.74.4:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.21.74.3:2380,etcd2=https://172.21.74.4:2380,etcd3=https://172.21.74.5:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
On k8s-master-site-03:
mkdir -p /etc/etcd/
vim /etc/etcd/etcd.conf

#[Member3]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.21.74.5:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.21.74.5:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.74.5:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.21.74.5:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.21.74.3:2380,etcd2=https://172.21.74.4:2380,etcd3=https://172.21.74.5:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
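The three etcd.conf files above differ only in the member name and IP, so they can be generated from a single template rather than typed three times. A sketch that writes them under /tmp for review (node IPs as planned in section 1):

```shell
# Generate etcd-1.conf .. etcd-3.conf from one template.
i=0
for ip in 172.21.74.3 172.21.74.4 172.21.74.5; do
  i=$((i + 1))
  cat > "/tmp/etcd-${i}.conf" << EOF
#[Member${i}]
ETCD_NAME="etcd${i}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ip}:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ip}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ip}:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.21.74.3:2380,etcd2=https://172.21.74.4:2380,etcd3=https://172.21.74.5:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
done
grep -h ETCD_NAME /tmp/etcd-?.conf
```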
Create the etcd service unit (on every node)
/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem --trusted-ca-file=/etc/etcd/ssl/ca.pem --peer-cert-file=/etc/etcd/ssl/etcd.pem --peer-key-file=/etc/etcd/ssl/etcd-key.pem --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem --peer-client-cert-auth --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the certificate files
## On every node:
mkdir -p /etc/etcd/ssl/
## On k8s-master-site-01:
cd /root/cfssl/pki
cp ca*.pem etcd*.pem /etc/etcd/ssl/
scp -P 2222 -rp ca*.pem etcd*.pem k8s-master-site-02:/etc/etcd/ssl/
scp -P 2222 -rp ca*.pem etcd*.pem k8s-master-site-03:/etc/etcd/ssl/
Start the etcd cluster
### On every node
rm -rf /var/lib/etcd/default.etcd/* && mkdir -p /var/lib/etcd/default.etcd
systemctl daemon-reload
systemctl enable etcd --now

Check the cluster status

/usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://172.21.74.3:2379,https://172.21.74.4:2379,https://172.21.74.5:2379 endpoint status --cluster

2.18. Install the Kubernetes Binaries

Download on the first node only: https://www.downloadkubernetes.com/
Binaries needed: kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, kubectl, kubelet
On k8s-master-site-01, copy the binaries into place and out to the other nodes:
chmod +x kube*
cp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy /usr/local/bin/
scp -P 2222 -r kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy k8s-master-site-02:/usr/local/bin/
scp -P 2222 -r kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy k8s-master-site-03:/usr/local/bin/

2.19. Install kube-apiserver

Create the data directories
mkdir -p /etc/kubernetes/ssl
mkdir /var/log/kubernetes
Create the service config file
  • k8s-master-site-01
vim /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --anonymous-auth=false --bind-address=172.21.74.3 --secure-port=6443 --advertise-address=172.21.74.3  --authorization-mode=Node,RBAC --runtime-config=api/all=true --enable-bootstrap-token-auth --service-cluster-ip-range=10.20.0.0/16 --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  --service-account-issuer=https://kubernetes.default.svc.cluster.local --etcd-cafile=/etc/etcd/ssl/ca.pem --etcd-certfile=/etc/etcd/ssl/etcd.pem --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem --etcd-servers=https://172.21.74.3:2379,https://172.21.74.4:2379,https://172.21.74.5:2379 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/kube-apiserver-audit.log --event-ttl=1h  --v=4"

(the flags --alsologtostderr=true --logtostderr=false --log-dir=/var/log/kubernetes were dropped)

  • k8s-master-site-02
vim /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --anonymous-auth=false --bind-address=172.21.74.4 --secure-port=6443 --advertise-address=172.21.74.4  --authorization-mode=Node,RBAC --runtime-config=api/all=true --enable-bootstrap-token-auth --service-cluster-ip-range=10.20.0.0/16 --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  --service-account-issuer=https://kubernetes.default.svc.cluster.local --etcd-cafile=/etc/etcd/ssl/ca.pem --etcd-certfile=/etc/etcd/ssl/etcd.pem --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem --etcd-servers=https://172.21.74.3:2379,https://172.21.74.4:2379,https://172.21.74.5:2379 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/kube-apiserver-audit.log --event-ttl=1h  --v=4"
  • k8s-master-site-03
vim /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --anonymous-auth=false --bind-address=172.21.74.5 --secure-port=6443 --advertise-address=172.21.74.5  --authorization-mode=Node,RBAC --runtime-config=api/all=true --enable-bootstrap-token-auth --service-cluster-ip-range=10.20.0.0/16 --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  --service-account-issuer=https://kubernetes.default.svc.cluster.local --etcd-cafile=/etc/etcd/ssl/ca.pem --etcd-certfile=/etc/etcd/ssl/etcd.pem --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem --etcd-servers=https://172.21.74.3:2379,https://172.21.74.4:2379,https://172.21.74.5:2379 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/kube-apiserver-audit.log --event-ttl=1h  --v=4"
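The three kube-apiserver.conf files likewise differ only in --bind-address and --advertise-address. A global replace of 172.21.74.3 would also corrupt the --etcd-servers list, so the sed below anchors on the two flags; it is shown on a trimmed sample line under /tmp:

```shell
NODE_IP=172.21.74.4   # the node being generated
echo 'KUBE_APISERVER_OPTS="--bind-address=172.21.74.3 --advertise-address=172.21.74.3 --etcd-servers=https://172.21.74.3:2379,https://172.21.74.4:2379,https://172.21.74.5:2379"' > /tmp/kube-apiserver.conf.sample
# Rewrite only the two address flags; the etcd-servers list keeps all three IPs.
sed -E "s/(--(bind|advertise)-address=)172\.21\.74\.3/\1${NODE_IP}/g" /tmp/kube-apiserver.conf.sample
```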
Create the service unit on every node
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the certificates
### On k8s-master-site-01
cd /root/cfssl/pki
cp -rp ca*.pem /etc/kubernetes/ssl
cp -rp kube-apiserver*.pem /etc/kubernetes/ssl/
cp token.csv /etc/kubernetes/
scp -P 2222 -r ca*.pem kube-apiserver*.pem k8s-master-site-02:/etc/kubernetes/ssl/
scp -P 2222 -r ca*.pem kube-apiserver*.pem k8s-master-site-03:/etc/kubernetes/ssl/
scp -P 2222 token.csv k8s-master-site-02:/etc/kubernetes/
scp -P 2222 token.csv k8s-master-site-03:/etc/kubernetes/
Start the service
systemctl daemon-reload
systemctl enable kube-apiserver --now
systemctl status kube-apiserver
#service kube-apiserver status -l

2.20. Install kubectl

Cluster resources only need to be managed from one host; here kubectl is configured on k8s-master-site-01.
# Copy the certificates
cd /root/cfssl/pki
cp admin*.pem /etc/kubernetes/ssl/

# Build the kubeconfig (https://172.21.74.3:16443 goes through the nginx on node 01; pointing it at the VIP https://172.21.74.20:16443 instead keeps kubectl working if node 01 goes down)
### Set the cluster entry
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.21.74.3:16443 --kubeconfig=kube.config
### Set the client credentials
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
### Set the context
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
### Switch to the context
kubectl config use-context kubernetes --kubeconfig=kube.config
### Install the kubeconfig
mkdir /root/.kube
cp kube.config /root/.kube/config
cp kube.config /etc/kubernetes/admin.conf

# Grant the kubernetes cert user access to the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
# Check the cluster info
kubectl cluster-info
# Check component status
kubectl get componentstatuses

2.21. Install kube-controller-manager

Build the kubeconfig (on k8s-master-site-01, in /root/cfssl/pki)
### Set the cluster entry
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.21.74.3:16443 --kubeconfig=kube-controller-manager.kubeconfig
### Set the client credentials
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
### Set the context
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
### Switch to the context
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Create the config file (identical on all three nodes)
/etc/kubernetes/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS=" --secure-port=10257 --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig --service-cluster-ip-range=10.20.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --allocate-node-cidrs=true --cluster-cidr=10.20.0.0/16 --root-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --controllers=*,bootstrapsigner,tokencleaner --horizontal-pod-autoscaler-sync-period=10s --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem --use-service-account-credentials=true  --v=2"
/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Copy the certificates
On k8s-master-site-01:
cd /root/cfssl/pki
scp -P 2222 kube-controller-manager*.pem k8s-master-site-01:/etc/kubernetes/ssl/
scp -P 2222 kube-controller-manager*.pem k8s-master-site-02:/etc/kubernetes/ssl/
scp -P 2222 kube-controller-manager*.pem k8s-master-site-03:/etc/kubernetes/ssl/
scp -P 2222 kube-controller-manager.kubeconfig k8s-master-site-01:/etc/kubernetes/
scp -P 2222 kube-controller-manager.kubeconfig k8s-master-site-02:/etc/kubernetes/
scp -P 2222 kube-controller-manager.kubeconfig k8s-master-site-03:/etc/kubernetes/
Start the service (on every node)
systemctl daemon-reload
systemctl enable kube-controller-manager --now
service kube-controller-manager status

2.22. Install kube-scheduler

Build the kubeconfig
### On k8s-master-site-01
cd /root/cfssl/pki
### Set the cluster entry
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.21.74.3:16443 --kubeconfig=kube-scheduler.kubeconfig
### Set the client credentials
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
### Set the context
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
### Switch to the context
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Create the config file
vim /etc/kubernetes/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--v=2"
Create the service unit
 /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Copy the files
### On k8s-master-site-01
cd /root/cfssl/pki
scp -P 2222 kube-scheduler*.pem k8s-master-site-01:/etc/kubernetes/ssl/
scp -P 2222 kube-scheduler*.pem k8s-master-site-02:/etc/kubernetes/ssl/
scp -P 2222 kube-scheduler*.pem k8s-master-site-03:/etc/kubernetes/ssl/
scp -P 2222 kube-scheduler.kubeconfig k8s-master-site-01:/etc/kubernetes/
scp -P 2222 kube-scheduler.kubeconfig k8s-master-site-02:/etc/kubernetes/
scp -P 2222 kube-scheduler.kubeconfig k8s-master-site-03:/etc/kubernetes/
Start the service (on every node)
systemctl daemon-reload
systemctl enable kube-scheduler --now
service kube-scheduler status
kubectl get componentstatuses

3. Worker Node Setup

This walkthrough has a single worker node; repeat these steps for each additional worker.

3.1. Set the hostname

hostnamectl set-hostname k8s-worker-site-01

3.2. Update /etc/hosts

cat >> /etc/hosts <<EOF
172.21.74.3 k8s-master-site-01
172.21.74.4 k8s-master-site-02
172.21.74.5 k8s-master-site-03
172.21.74.6 k8s-worker-site-01
EOF
3.3. Disable SELinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

3.4. Disable swap

swapoff -a
sed --in-place=.bak 's/.*swap.*/#&/g' /etc/fstab

3.5. Disable firewalld and NetworkManager

systemctl stop firewalld ; systemctl disable firewalld
systemctl stop NetworkManager; systemctl disable NetworkManager

3.6. Passwordless SSH

On k8s-master-site-01:
· Generate an RSA key pair (skip if it already exists)
ssh-keygen -t rsa
· Push the public key for passwordless login
## run on k8s-master-site-01
ssh-copy-id -i /root/.ssh/id_rsa.pub -p2222 root@k8s-worker-site-01

3.7. Raise resource limits

cat >> /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

3.8. Switch the yum repositories

yum install -y wget
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bak
wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
yum -y update

3.9. Configure time synchronization

yum install -y chrony

Write /etc/chrony.conf:
cat > /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
logchange 0.5
logdir /var/log/chrony
EOF

Restart the service:
systemctl restart chronyd
systemctl status chronyd

3.10. Install base tools

yum install -y device-mapper-persistent-data net-tools nfs-utils jq psmisc git lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet ipset sysstat libseccomp

3.11. Configure kernel modules and parameters

cat > /etc/modules-load.d/k8s.conf << EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
overlay
br_netfilter
EOF
## Load the modules automatically at boot
systemctl enable systemd-modules-load.service --now
## Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
## Apply the settings
sysctl --system
### Reboot, then confirm the modules loaded
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

3.12. Install the container runtime

## Install containerd
## no longer available: yum install -y containerd.io-1.6.6

Add the Alibaba Cloud yum repository:
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list | grep containerd
yum -y install containerd.io

### Generate the default config file
cp /etc/containerd/config.toml /etc/containerd/config.toml.init.bak
containerd config default > /etc/containerd/config.toml
### Adjust the config file
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#g' /etc/containerd/config.toml
### Configure a registry mirror
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#g' /etc/containerd/config.toml
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://registry-1.docker.io"
[host."https://xpd691zc.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve", "push"]
EOF
## Enable and start
systemctl daemon-reload ; systemctl enable containerd --now
systemctl restart containerd
systemctl status containerd
----
`Install crictl`
### Download the binary package
cd /tmp
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-amd64.tar.gz
### Extract
tar -xf crictl-v1.25.0-linux-amd64.tar.gz
### Move into the PATH
mv crictl /usr/local/bin/
### Configure
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
### Restart to pick up the config
systemctl restart containerd

3.13. Install kubelet

## On k8s-master-site-01 (skip anything that already exists)
# Create kubelet-bootstrap.kubeconfig
cd /root/cfssl/pki
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.21.74.3:16443 --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
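The `BOOTSTRAP_TOKEN` line above assumes token.csv is the comma-separated file created earlier, with the token in the first field. A minimal sketch of the extraction against a throwaway copy (the token value and file contents here are placeholders, not a real cluster token):

```shell
# Demonstrate the awk extraction against a sample token.csv.
# The token below is a made-up placeholder.
tmpcsv=$(mktemp)
cat > "$tmpcsv" << EOF
0123456789abcdef,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' "$tmpcsv")
echo "$BOOTSTRAP_TOKEN"   # → 0123456789abcdef
rm -f "$tmpcsv"
```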

# Create the kubelet config file
### On k8s-worker-site-01
vi /etc/kubernetes/kubelet.json

{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "172.21.74.6",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.20.0.2"]
}

# Create the systemd unit file
### On k8s-worker-site-01
vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig --cert-dir=/etc/kubernetes/ssl --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/kubelet.json --container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7 --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

(Removed --alsologtostderr=true --logtostderr=false --log-dir=/var/log/kubernetes)

# Copy the files over
### Run on k8s-master-site-01
mkdir -p /etc/kubernetes/ssl
cd /root/cfssl/pki
scp -P 2222 kubelet-bootstrap.kubeconfig k8s-worker-site-01:/etc/kubernetes/
scp -P 2222 ca.pem k8s-worker-site-01:/etc/kubernetes/ssl
cd /usr/local/bin/
scp -P 2222 -r kubelet kube-proxy k8s-worker-site-01:/usr/local/bin/

# Start the service
### On k8s-worker-site-01
mkdir /var/lib/kubelet
systemctl daemon-reload
systemctl enable kubelet --now
service kubelet status

# Approve the node
### Run on k8s-master-site-01
### List the pending CSRs
kubectl get csr
### Approve (replace xxx with the CSR name from the previous command)
kubectl certificate approve xxx
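With several nodes bootstrapping at once, approving CSRs one by one gets tedious. A sketch of a batch variant, exercised here against a mocked line of `kubectl get csr --no-headers` output (the CSR name is fabricated):

```shell
# Filter Pending CSR names from `kubectl get csr --no-headers` output;
# the CONDITION column comes last, so $NF selects it.
sample='node-csr-abc123   2m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>   Pending'
pending=$(printf '%s\n' "$sample" | awk '$NF == "Pending" {print $1}')
echo "$pending"   # → node-csr-abc123
# On the master you would then run:
#   kubectl get csr --no-headers | awk '$NF == "Pending" {print $1}' | xargs -r kubectl certificate approve
```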

3.13、Install kube-proxy

# Create the kubeconfig (on k8s-master-site-01)
cd /root/cfssl/pki
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.21.74.3:16443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Create the config file
### On k8s-worker-site-01
vi /etc/kubernetes/kube-proxy.yaml

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.21.74.6
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.21.0.0/16
healthzBindAddress: 172.21.74.6:10256
kind: KubeProxyConfiguration
metricsBindAddress: 172.21.74.6:10249
mode: "ipvs"



# Create the systemd unit file
### On k8s-worker-site-01
vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.yaml --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

(Removed --alsologtostderr=true --logtostderr=false --log-dir=/var/log/kubernetes)

# Copy the files over
### Run on k8s-master-site-01
cd /root/cfssl/pki
scp -P 2222 kube-proxy.kubeconfig k8s-worker-site-01:/etc/kubernetes/

# Start the service
### Run on each node
mkdir -p /var/lib/kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy --now
service kube-proxy status
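A quick way to confirm the ipvs mode actually took effect is kube-proxy's `/proxyMode` endpoint on the metrics port (10249, per the `metricsBindAddress` in kube-proxy.yaml). A minimal sketch, guarded so it is safe to run on any host; substitute the worker's address for 127.0.0.1 as needed:

```shell
# Query kube-proxy's active proxy mode; falls back to a message if unreachable.
mode=$(curl -s --max-time 2 http://127.0.0.1:10249/proxyMode 2>/dev/null) || mode="endpoint not reachable"
echo "kube-proxy mode: ${mode}"
# `ipvsadm -Ln` should additionally list the virtual servers once Services exist.
```

On a healthy worker this prints `kube-proxy mode: ipvs`.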

3.14、Install the network plugin (calico)

### Run on k8s-master-site-01 only
mkdir /opt/k8s-yaml
cd /opt/k8s-yaml
### The image archive (calico-node.tar.gz) and manifest (calico.yaml) were prepared in advance
### Copy the image archive to the other nodes
cd /data/download/
scp -P 2222 calico-node.tar.gz k8s-worker-site-01:/data/download

### Import the image at the same path on each node (the archive imports as docker.io/calico/node:v3.26.1)
ctr -n=k8s.io images import calico-node.tar.gz
### Apply the manifest on k8s-master-site-01
kubectl apply -f calico.yaml

3.15、Install coredns

### Run on k8s-master-site-01 only
cd /opt/k8s-yaml
### coredns.yaml was prepared in advance
### Make sure its clusterIP matches the clusterDNS set in kubelet.json: clusterIP: 10.20.0.2
kubectl apply -f coredns.yaml
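A standard smoke test once coredns is Running: resolve a cluster-internal name from a one-shot pod. busybox:1.28 is chosen deliberately, since nslookup is broken in newer busybox images. The sketch is guarded so it only attempts the lookup when a cluster is reachable from this host:

```shell
# DNS smoke test: resolve kubernetes.default from a throwaway pod.
# Skips cleanly when no cluster is reachable.
if command -v kubectl >/dev/null 2>&1 && kubectl get ns kube-system >/dev/null 2>&1; then
  result=$(kubectl run -i dns-test --image=busybox:1.28 --rm --restart=Never -- \
    nslookup kubernetes.default 2>&1)
else
  result="skipping DNS test: no reachable cluster"
fi
echo "$result"
```

A successful lookup should show the query answered by 10.20.0.2, the clusterIP configured above.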

# To tear it down again:
#kubectl delete -f coredns.yaml --grace-period=0 --force -n kube-system


Published on November 15, 2023