0. Preface
Overview of the cluster components:
Master node:
A Master node runs four components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.
APIServer: exposes the RESTful Kubernetes API and is the single entry point for management operations. Every create, delete, update, or read of a resource goes through the APIServer before being persisted to etcd. As shown in the architecture diagram, kubectl (the client tool shipped with Kubernetes, which internally just calls the Kubernetes API) talks directly to the APIServer.
scheduler: places Pods onto suitable Nodes. Viewed as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with a default scheduling algorithm and also keeps the interface open so users can plug in a scheduler of their own.
controller manager: if the APIServer is the front office, the controller manager is the back office. Every resource type has a controller, and the controller manager runs all of them. For example, when a Pod is created through the APIServer, the APIServer's job is done once the object is stored; the controllers then drive it toward the desired state.
etcd: a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what backs the RESTful API.
Node:
Each Node mainly runs kubelet and kube-proxy (plus Docker and flannel, see the cluster plan below).
kube-proxy: implements service discovery and reverse proxying inside Kubernetes. It forwards TCP and UDP connections and by default uses a Round Robin algorithm to spread client traffic across the backend Pods of a Service. For service discovery it uses the watch mechanism to track changes to Service and Endpoint objects in the cluster and maintains a Service-to-Endpoint mapping, so backend Pod IP changes are invisible to callers. kube-proxy also supports session affinity.
kubelet: the Master's agent on each Node and the most important component on the Node. It maintains and manages all containers on that Node, but leaves alone containers that were not created through Kubernetes. In essence, it keeps each Pod's actual running state in line with its desired state.
1. Cluster Plan
Kubernetes role | Node type | Node IPs |
---|---|---|
kube-apiserver | Master | 192.168.91.18/19/20 |
kube-controller-manager | Master | 192.168.91.18/19/20 |
kube-scheduler | Master | 192.168.91.18/19/20 |
etcd | Master | 192.168.91.18/19/20 |
kubelet | Node | 192.168.91.21/22 |
kube-proxy | Node | 192.168.91.21/22 |
docker | Node | 192.168.91.21/22 |
flannel | Node | 192.168.91.21/22 |
- Architecture diagram
2. Base Environment Preparation
# On all nodes
$ yum install net-tools vim wget lrzsz git telnet -y
$ systemctl stop firewalld
$ systemctl disable firewalld
$ setenforce 0
$ sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Set the time zone
$ \cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime -rf
# Disable swap
$ swapoff -a
$ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Set up time synchronization
$ yum install -y ntpdate
$ ntpdate -u ntp.api.bz
$ echo "*/5 * * * * ntpdate time7.aliyun.com >/dev/null 2>&1" >> /etc/crontab
$ systemctl restart crond
$ systemctl enable crond
# On all nodes
# Configure host name resolution
$ cat > /etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.91.18 master-18
192.168.91.19 master-19
192.168.91.20 master-20
192.168.91.21 node-21
192.168.91.22 node-22
EOF
# Kernel tuning
$ cat >/etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
fs.file-max=52706963
fs.nr_open=52706963
EOF
$ sysctl --system
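The two bridge-nf-call keys only exist once the br_netfilter kernel module is loaded; if sysctl reported them as missing, load the module and re-apply (an extra step assumed for a stock CentOS install, not part of the original list):
$ modprobe br_netfilter
$ sysctl --system
$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables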
# Distribute the SSH key from any master to all other nodes (the other masters and the Nodes); here master-18 is used as the distribution host.
$ yum install -y expect
$ ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
$ export mypass=123456
$ name=(master-18 master-19 master-20 node-21 node-22)
$ for i in ${name[@]};do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
expect {
\"*yes/no*\" {send \"yes\r\"; exp_continue}
\"*password*\" {send \"$mypass\r\"; exp_continue}
\"*Password*\" {send \"$mypass\r\";}
}"
done;
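Optionally verify that passwordless SSH now works to every host (reuses the name array defined above):
$ for i in ${name[@]};do ssh $i hostname;done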
3. Deploy keepalived on the Master Nodes
$ yum install -y keepalived
# On the other master nodes, change state to BACKUP (and typically use a lower priority)
$ cat >/etc/keepalived/keepalived.conf <<EOL
global_defs {
router_id KUB_LVS
}
vrrp_script CheckMaster {
script "curl -k https://192.168.91.254:6443"
interval 3
timeout 9
fall 2
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 61
priority 100
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 111111
}
virtual_ipaddress {
192.168.91.254/24 dev eth0
}
track_script {
CheckMaster
}
}
EOL
$ systemctl enable keepalived && systemctl start keepalived
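A quick way to confirm the VIP landed on the current MASTER node (assumes the interface is eth0, as in the config above):
$ ip addr show eth0 | grep 192.168.91.254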
4. Configure Certificates
4.1 Download the cfssl tools
# Run on the distribution host (master-18)
$ mkdir /soft && cd /soft
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
$ cp cfssl_linux-amd64 /usr/local/bin/cfssl
$ cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ cp cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
4.2 Generate the etcd certificates
$ mkdir /root/etcd && cd /root/etcd
- CA configuration
$ cat << EOF | tee ca-config.json
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"www": {
"expiry": "876000h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
- Create the CA certificate signing request (CSR) file
$ cat << EOF | tee ca-csr.json
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "ShenZhen",
"ST": "GuangDong"
}
]
}
EOF
- Create the etcd server CSR file
$ cat << EOF | tee server-csr.json
{
"CN": "etcd",
"hosts": [
"master-18",
"master-19",
"master-20",
"192.168.91.18",
"192.168.91.19",
"192.168.91.20"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "ShenZhen",
"ST": "GuangDong"
}
]
}
EOF
- Generate the etcd CA certificate and the etcd key pair
$ cd /root/etcd/
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
$ ll
-rw-r--r-- 1 root root 253 2020-07-12 16:07:18 ca-config.json    # CA config file
-rw-r--r-- 1 root root 960 2020-07-12 16:13:01 ca.csr            # CA certificate signing request
-rw-r--r-- 1 root root 198 2020-07-12 16:12:39 ca-csr.json       # CA CSR definition
-rw------- 1 root root 1.7K 2020-07-12 16:13:01 ca-key.pem       # CA private key
-rw-r--r-- 1 root root 1.3K 2020-07-12 16:13:01 ca.pem           # CA certificate
-rw-r--r-- 1 root root 344 2020-07-12 16:11:19 server-csr.json
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
$ ll
-rw-r--r-- 1 root root 253 2020-07-12 16:07:18 ca-config.json
-rw-r--r-- 1 root root 960 2020-07-12 16:13:01 ca.csr
-rw-r--r-- 1 root root 198 2020-07-12 16:12:39 ca-csr.json
-rw------- 1 root root 1.7K 2020-07-12 16:13:01 ca-key.pem
-rw-r--r-- 1 root root 1.3K 2020-07-12 16:13:01 ca.pem
-rw-r--r-- 1 root root 1.1K 2020-07-12 16:17:14 server.csr
-rw-r--r-- 1 root root 344 2020-07-12 16:11:19 server-csr.json
-rw------- 1 root root 1.7K 2020-07-12 16:17:14 server-key.pem    # used by etcd clients
-rw-r--r-- 1 root root 1.4K 2020-07-12 16:17:14 server.pem
4.3 Create the Kubernetes certificates
$ mkdir /root/kubernetes/ && cd /root/kubernetes/
- CA configuration
$ cat << EOF | tee ca-config.json
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"expiry": "876000h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
- Create the CA CSR file
$ cat << EOF | tee ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "ShenZhen",
"ST": "GuangDong",
"O": "k8s",
"OU": "System"
}
]
}
EOF
- Create the API server CSR file
$ cat << EOF | tee server-csr.json
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"10.0.0.2",
"192.168.91.18",
"192.168.91.19",
"192.168.91.20",
"192.168.91.21",
"192.168.91.22",
"192.168.91.254",
"master-18",
"master-19",
"master-20",
"node-21",
"node-22",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "ShenZhen",
"ST": "GuangDong",
"O": "k8s",
"OU": "System"
}
]
}
EOF
- Create the kube-proxy CSR file
$ cat << EOF | tee kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "ShenZhen",
"ST": "GuangDong",
"O": "k8s",
"OU": "System"
}
]
}
EOF
- Generate the Kubernetes CA certificate and key pairs
# Generate the CA certificate
$ cd /root/kubernetes/
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the api-server certificate
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# Generate the kube-proxy certificate
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
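Optionally inspect a generated certificate, for example to confirm the SAN list of the apiserver certificate (cfssl-certinfo was installed in step 4.1):
$ cfssl-certinfo -cert server.pem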
5. Deploy etcd
5.1 Download and configure etcd
# Download on the distribution host
$ cd /soft
$ wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
$ tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
$ cd etcd-v3.3.10-linux-amd64/
$ cp etcd etcdctl /usr/local/bin/
# Configure etcd; the config differs on each node
$ mkdir -p /etc/etcd/{cfg,ssl}
$ cat >/etc/etcd/cfg/etcd.conf<<EOFL
#[Member]
ETCD_NAME="master-18"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.91.18:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.91.18:2379,http://192.168.91.18:2390"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.91.18:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.91.18:2379"
ETCD_INITIAL_CLUSTER="master-18=https://192.168.91.18:2380,master-19=https://192.168.91.19:2380,master-20=https://192.168.91.20:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOFL
Parameter notes:
ETCD_NAME                         node name; each node must use its own name
ETCD_DATA_DIR                     data directory
ETCD_LISTEN_PEER_URLS             listen address for peer (cluster) traffic
ETCD_LISTEN_CLIENT_URLS           listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS  peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS        client address advertised to the cluster
ETCD_INITIAL_CLUSTER              all cluster members, comma separated, e.g.
ETCD_INITIAL_CLUSTER="master1=https://192.168.91.200:2380,master2=https://192.168.91.201:2380,master3=https://192.168.91.202:2380"
ETCD_INITIAL_CLUSTER_TOKEN        cluster token
ETCD_INITIAL_CLUSTER_STATE        join state: new for a new cluster, existing to join an existing one
On master-19 and master-20 only ETCD_NAME and the four URL variables change; see the sketch below for one way to render the per-node files.
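A minimal sketch (not part of the original steps) for rendering and copying the per-node config to master-19 and master-20 over SSH; it assumes the /etc/hosts entries created earlier and, for simplicity, omits the extra http listener shown above:
$ for n in master-19 master-20; do
    ip=$(getent hosts $n | awk '{print $1}')
    ssh $n "mkdir -p /etc/etcd/cfg"
    ssh $n "cat > /etc/etcd/cfg/etcd.conf" <<EOF
#[Member]
ETCD_NAME="$n"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ip:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ip:2379"
ETCD_INITIAL_CLUSTER="master-18=https://192.168.91.18:2380,master-19=https://192.168.91.19:2380,master-20=https://192.168.91.20:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
  done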
5.2 Create the etcd systemd service
$ cat > /usr/lib/systemd/system/etcd.service<<EOFL
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/etcd/cfg/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--peer-cert-file=/etc/etcd/ssl/server.pem \
--peer-key-file=/etc/etcd/ssl/server-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOFL
5.3 Copy the etcd certificates into place
The target directory must match the paths used in the etcd unit file above.
With multiple master nodes, the certificates have to be copied to every master.
$ mkdir -p /etc/etcd/ssl/
$ \cp /root/etcd/*pem /etc/etcd/ssl/ -rf
# Copy the etcd certificates to every node
$ for i in master-19 master-20 node-21 node-22;do ssh $i mkdir -p /etc/etcd/{cfg,ssl};done
$ for i in master-19 master-20 node-21 node-22;do scp /etc/etcd/ssl/* $i:/etc/etcd/ssl/;done
$ for i in master-19 master-20 node-21 node-22;do ssh $i ls /etc/etcd/ssl;done
5.4 Start etcd and check the cluster health
# Start etcd
$ systemctl daemon-reload    # reload systemd unit files
$ systemctl enable etcd
$ systemctl start etcd
$ systemctl status etcd
# Check that the etcd cluster is healthy
$ etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.91.18:2379" cluster-health
member bcef4c3b581e1d2e is healthy: got healthy result from https://192.168.91.18:2379
member d99a26304cec5ace is healthy: got healthy result from https://192.168.91.19:2379
member fc4e801f28271758 is healthy: got healthy result from https://192.168.91.20:2379
cluster is healthy
$ etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.91.18:2379" member list
bcef4c3b581e1d2e: name=master-18 peerURLs=https://192.168.91.18:2380 clientURLs=https://192.168.91.18:2379 isLeader=true
d99a26304cec5ace: name=master-19 peerURLs=https://192.168.91.19:2380 clientURLs=https://192.168.91.19:2379 isLeader=false
fc4e801f28271758: name=master-20 peerURLs=https://192.168.91.20:2380 clientURLs=https://192.168.91.20:2379 isLeader=false
6. Create the Pod Network Range for Docker
Write the cluster Pod network configuration into etcd.
172.17.0.0/16 is the Kubernetes Pod IP range.
This range must match the --cluster-cidr value used by kube-controller-manager.
$ etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.91.18:2379,https://192.168.91.19:2379,https://192.168.91.20:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
# Verify that the key was written
$ etcdctl --endpoints=https://192.168.91.18:2379,https://192.168.91.19:2379,https://192.168.91.20:2379 --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem get /coreos.com/network/config
# Expected result: { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
7. Install Docker on the Node Hosts
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ yum makecache fast
$ yum -y install docker-ce
# Configure a registry mirror to speed up image pulls
$ mkdir -p /etc/docker
$ tee /etc/docker/daemon.json<<EOF
{
"registry-mirrors": ["https://plqjafsr.mirror.aliyuncs.com"]
}
EOF
# Reload systemd and start Docker
$ systemctl daemon-reload
$ systemctl start docker
$ systemctl enable docker
8. Deploy Flannel
8.1 Download flannel
Flannel must be deployed on every node.
# Run on the distribution host master-18
$ cd /soft
$ wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
$ tar xvf flannel-v0.11.0-linux-amd64.tar.gz
$ mv flanneld mk-docker-opts.sh /usr/local/bin/
# Copy flanneld to all other nodes
$ for i in master-19 master-20 node-21 node-22;do scp /usr/local/bin/flanneld $i:/usr/local/bin/;done
$ for i in master-19 master-20 node-21 node-22;do scp /usr/local/bin/mk-docker-opts.sh $i:/usr/local/bin/;done
8.2 Configure Flannel
Every node needs this configuration.
$ mkdir -p /etc/flannel
$ cat > /etc/flannel/flannel.cfg<<EOF
FLANNEL_OPTIONS="-etcd-endpoints=https://192.168.91.18:2379,https://192.168.91.19:2379,https://192.168.91.20:2379 -etcd-cafile=/etc/etcd/ssl/ca.pem -etcd-certfile=/etc/etcd/ssl/server.pem -etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
8.3 Create the flanneld systemd service
$ cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/flannel/flannel.cfg
ExecStart=/usr/local/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Notes on the unit file:
mk-docker-opts.sh writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env; when Docker starts later, it reads the environment variable in that file to configure the docker0 bridge.
flanneld talks to the other nodes over the interface of the system default route; on hosts with multiple interfaces (e.g. private and public), use the -iface flag to pick the interface explicitly, e.g. -iface=eth0.
8.4 Start Flannel
$ systemctl enable flanneld
$ systemctl start flanneld
$ systemctl status flanneld
# Every node should now have an address from the 172.17.0.0/16 range on flannel.1
$ ip a |grep flannel.1
inet 172.17.73.0/32 scope global flannel.1
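To see what flanneld and mk-docker-opts.sh actually produced on a node, inspect the generated environment file (the exact subnet differs per node; the file should contain a DOCKER_NETWORK_OPTIONS line with --bip/--mtu flags derived from the assigned subnet):
$ cat /run/flannel/subnet.env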
8.5 Adjust the Docker startup configuration (Node)
# Stop flanneld on the Node
$ systemctl stop flanneld.service
# Rewrite the Docker unit file
$ cat >/usr/lib/systemd/system/docker.service<<EOFL
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOFL
8.6 Restart Docker and check the docker0 subnet
$ systemctl daemon-reload
$ systemctl restart docker
# Check the addresses: docker0 and flannel.1 should be in the same subnet
$ ip a
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:d8:11:68:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.57.1/24 brd 172.17.57.255 scope global docker0
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 8a:a8:db:4d:2b:b1 brd ff:ff:ff:ff:ff:ff
inet 172.17.57.0/32 scope global flannel.1
# From any node, verify that the Node's docker0 address is reachable
[root@master-18 ~]# ping 172.17.57.1
PING 172.17.57.1 (172.17.57.1) 56(84) bytes of data.
64 bytes from 172.17.57.1: icmp_seq=1 ttl=64 time=0.463 ms
64 bytes from 172.17.57.1: icmp_seq=2 ttl=64 time=0.373 ms
# Restart flanneld on the Node
$ systemctl start flanneld
9. Deploy the Master Components
9.1 Deploy kube-apiserver
- Unpack the Kubernetes server binaries (v1.15.1) (master-18)
$ cd /soft
$ tar xvf kubernetes-server-linux-amd64.tar.gz
$ cd kubernetes/server/bin/
$ cp kube-scheduler kube-apiserver kube-controller-manager kubectl /usr/local/bin/
# Copy the binaries to the other master nodes
$ for i in master-19 master-20;do scp /usr/local/bin/kube* $i:/usr/local/bin/;done
- Put the Kubernetes certificates in place
# The Kubernetes components authenticate to each other with certificates; copy them to every master node (run on master-18)
$ mkdir -p /etc/kubernetes/{cfg,ssl}
$ cp /root/kubernetes/*.pem /etc/kubernetes/ssl/
# Copy to the other nodes
$ for i in master-19 master-20 node-21 node-22;do ssh $i mkdir -p /etc/kubernetes/{cfg,ssl};done
$ for i in master-19 master-20 node-21 node-22;do scp /etc/kubernetes/ssl/* $i:/etc/kubernetes/ssl/;done
$ for i in master-19 master-20 node-21 node-22;do echo $i "<========>"; ssh $i ls /etc/kubernetes/ssl;done
- Create the TLS Bootstrapping token
TLS bootstrapping lets the kubelet first connect to the apiserver as a predefined low-privilege user and then request its own certificate, which the apiserver signs dynamically.
The token can be any string containing 128 bits of entropy and can be produced with a secure random number generator.
$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
3d76504c3adc64b3762d35c93dd8e439
- Edit the token file (master-18)
3d76504c3adc64b3762d35c93dd8e439: the random string generated above;
kubelet-bootstrap: user name; 10001: UID; system:kubelet-bootstrap: group
$ vim /etc/kubernetes/cfg/token.csv
3d76504c3adc64b3762d35c93dd8e439,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# Copy it to the other master nodes
$ for i in master-19 master-20;do scp /etc/kubernetes/cfg/token.csv $i:/etc/kubernetes/cfg/token.csv;done
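If you prefer to generate the token and the file in one go, a small convenience sketch equivalent to the manual steps above:
$ BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
$ echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /etc/kubernetes/cfg/token.csv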
- Create the apiserver config file (all master nodes)
The file is essentially identical on every master; with different addresses you would only need to adjust the IPs (not necessary in this setup).
$ cat >/etc/kubernetes/cfg/kube-apiserver.cfg <<EOFL
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--etcd-servers=https://192.168.91.18:2379,https://192.168.91.19:2379,https://192.168.91.20:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=0.0.0.0 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/server.pem \
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/server.pem \
--etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOFL
Parameter notes
--logtostderr                   log to standard error
--v                             log level
--etcd-servers                  etcd cluster endpoints
--bind-address                  listen address
--secure-port                   https secure port
--advertise-address             address advertised to the cluster
--allow-privileged              allow privileged containers
--service-cluster-ip-range      Service virtual IP range
--enable-admission-plugins      admission control plugins
--authorization-mode            authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth   enable the TLS bootstrap token mechanism
--token-auth-file               token file
--service-node-port-range       port range assigned to NodePort Services
- Create the kube-apiserver systemd unit (all master nodes)
$ cat >/usr/lib/systemd/system/kube-apiserver.service<<EOFL
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.cfg
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOFL
- Start the kube-apiserver service
$ systemctl enable kube-apiserver
$ systemctl start kube-apiserver
$ systemctl status kube-apiserver
# Check that the secure port is listening (master nodes)
$ netstat -anltup | grep 6443
tcp6 0 0 :::6443 :::* LISTEN 30373/kube-apiserve
tcp6 0 0 ::1:6443 ::1:48226 ESTABLISHED 30373/kube-apiserve
tcp6 0 0 ::1:48226 ::1:6443 ESTABLISHED 30373/kube-apiserve
# Check that the secure port is reachable through the VIP (Node hosts)
$ telnet 192.168.91.254 6443
Trying 192.168.91.254...
Connected to 192.168.91.254.
Escape character is '^]'.
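Optionally probe the VIP over HTTPS as well; depending on the default RBAC rules this returns either the version JSON or an authorization error, but any TLS response confirms the apiserver answers behind keepalived:
$ curl -k https://192.168.91.254:6443/version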
9.2 Deploy kube-scheduler
- Create the kube-scheduler config file (all master nodes)
$ cat >/etc/kubernetes/cfg/kube-scheduler.cfg<<EOFL
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --bind-address=0.0.0.0 --master=127.0.0.1:8080 --leader-elect"
EOFL
# Review the config file
$ cat /etc/kubernetes/cfg/kube-scheduler.cfg
Parameter notes
--bind-address=0.0.0.0   bind address
--master                 address of the local apiserver (insecure port)
--leader-elect=true      enable leader election for HA; the elected leader does the work while the other instances stay on standby
- Create the kube-scheduler systemd unit (all master nodes)
$ cat >/usr/lib/systemd/system/kube-scheduler.service<<EOFL
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-scheduler.cfg
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOFL
- Start the kube-scheduler service (all master nodes)
$ systemctl enable kube-scheduler
$ systemctl start kube-scheduler
$ systemctl status kube-scheduler
- Check the master component status (on any master)
$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
9.3 Deploy kube-controller-manager
- Create the kube-controller-manager config file (all master nodes)
$ cat >/etc/kubernetes/cfg/kube-controller-manager.cfg<<EOFL
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=0.0.0.0 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem"
EOFL
Parameter notes
--master=127.0.0.1:8080       address of the local apiserver
--leader-elect                leader election; the elected leader does the work while the other instances stay on standby
--service-cluster-ip-range    IP range for Kubernetes Services
- Create the kube-controller-manager systemd unit (all master nodes)
$ cat >/usr/lib/systemd/system/kube-controller-manager.service<<EOFL
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.cfg
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOFL
- Start the kube-controller-manager service
$ systemctl enable kube-controller-manager
$ systemctl start kube-controller-manager
$ systemctl status kube-controller-manager
- Check the master component status
$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
10. Deploy the Node Components
Components required on each Node: kubelet, kube-proxy, flannel, docker.
10.1 Deploy kubelet
kubelet runs on every Node; it receives requests from kube-apiserver, manages the Pod containers, and executes interactive commands such as exec, run, and logs.
At startup, kubelet registers the node with kube-apiserver, and its built-in cAdvisor collects and reports the node's resource usage.
- Copy the Kubernetes binaries from the master to the Nodes
$ cd /soft
$ scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy node-21:/usr/local/bin/
$ scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy node-22:/usr/local/bin/
- Create the kubelet bootstrap.kubeconfig file (master-18)
A kubeconfig file holds the information needed to access a cluster. With TLS enabled, every interaction with the cluster has to be authenticated; in production this is usually certificate based, and the required credentials are stored in the kubeconfig file. Note that the BOOTSTRAP_TOKEN below must match the token written to /etc/kubernetes/cfg/token.csv.
$ mkdir /root/config && cd /root/config
$ cat >environment.sh<<EOFL
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=f89a76f197526a0d4bc2bf9c86e871c3
KUBE_APISERVER="https://192.168.91.254:6443"
# Set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=\${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set the client credentials
kubectl config set-credentials kubelet-bootstrap \
--token=\${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Running environment.sh produces the bootstrap.kubeconfig file.
EOFL
# Run the script
$ sh environment.sh
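An optional sanity check on the generated file: the cluster, user, and context entries should all be present.
$ kubectl config view --kubeconfig=bootstrap.kubeconfig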
- Create the kube-proxy kubeconfig file (master-18)
$ cd /root/config
$ cat >env_proxy.sh<<EOF
# Create the kube-proxy kubeconfig
BOOTSTRAP_TOKEN=f89a76f197526a0d4bc2bf9c86e871c3
KUBE_APISERVER="https://192.168.91.254:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=\${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
EOF
# Run the script
$ sh env_proxy.sh
- Copy the kubeconfig files and certificates to all Node hosts (master-18)
Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to every Node.
$ cd /root/config
# Create the directories remotely (master-18)
$ ssh node-21 "mkdir -p /etc/kubernetes/{cfg,ssl}"
$ ssh node-22 "mkdir -p /etc/kubernetes/{cfg,ssl}"
# Copy the ssl certificate files (master-18)
$ for i in node-21 node-22;do scp /etc/kubernetes/ssl/* $i:/etc/kubernetes/ssl/;done
# Copy the kubeconfig files (master-18)
$ for i in node-21 node-22;do scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig $i:/etc/kubernetes/cfg/;done
- Create the kubelet parameter file (Node hosts)
Each Node uses its own IP address here (run on the Node hosts).
$ cat >/etc/kubernetes/cfg/kubelet.config<<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.91.21
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
EOF
- Create the kubelet config file (Node hosts)
Each Node uses its own IP address here.
/etc/kubernetes/cfg/kubelet.kubeconfig is generated automatically during bootstrap.
$ cat >/etc/kubernetes/cfg/kubelet<<EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.91.21 \
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/cfg/bootstrap.kubeconfig \
--config=/etc/kubernetes/cfg/kubelet.config \
--cert-dir=/etc/kubernetes/ssl \
--pod-infra-container-image=docker.io/kubernetes/pause:latest"
EOF
- Create the kubelet systemd unit (Node hosts)
$ cat >/usr/lib/systemd/system/kubelet.service<<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
- Bind the kubelet-bootstrap user to the system cluster role (master-18)
$ kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
- Start the kubelet service (Node hosts)
$ systemctl start kubelet
$ systemctl enable kubelet
$ systemctl status kubelet
- Approve and inspect the CSR requests on the server side (master-18)
# List the CSR requests
# Run on master-18
$ kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-06GrnsQALK4sya09u8ETHyeEjmLWZU4iPsJwDkFEKjY 73s kubelet-bootstrap Pending
node-csr-LFA5XKZfyBKFTOp6toxiSKolLvpPBNWjqEFk1ZHICAk 73s kubelet-bootstrap Pending
# Approve the requests (both must be approved)
# Run on a master node
$ kubectl certificate approve node-csr-06GrnsQALK4sya09u8ETHyeEjmLWZU4iPsJwDkFEKjY
certificatesigningrequest.certificates.k8s.io/node-csr-06GrnsQALK4sya09u8ETHyeEjmLWZU4iPsJwDkFEKjY approved
$ kubectl certificate approve node-csr-LFA5XKZfyBKFTOp6toxiSKolLvpPBNWjqEFk1ZHICAk
certificatesigningrequest.certificates.k8s.io/node-csr-LFA5XKZfyBKFTOp6toxiSKolLvpPBNWjqEFk1ZHICAk approved
$ kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-06GrnsQALK4sya09u8ETHyeEjmLWZU4iPsJwDkFEKjY 4m34s kubelet-bootstrap Approved,Issued
node-csr-LFA5XKZfyBKFTOp6toxiSKolLvpPBNWjqEFk1ZHICAk 4m34s kubelet-bootstrap Approved,Issued
- Check the node status
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.91.21 Ready <none> 72s v1.15.1
192.168.91.22 Ready <none> 85s v1.15.1
# Label the node roles
$ kubectl label node 192.168.91.21 node-role.kubernetes.io/node-21=
node/192.168.91.21 labeled
$ kubectl label node 192.168.91.22 node-role.kubernetes.io/node-22=
node/192.168.91.22 labeled
$ kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.91.21 Ready node-21 6m35s v1.15.1
192.168.91.22 Ready node-22 6m48s v1.15.1
$ kubectl get no
NAME STATUS ROLES AGE VERSION
192.168.91.21 Ready node-21 7m41s v1.15.1
192.168.91.22 Ready node-22 7m54s v1.15.1
10.2 Deploy kube-proxy (Node hosts)
kube-proxy runs on all Node hosts; it watches the apiserver for changes to Services and Endpoints and programs routing rules to load-balance the traffic.
- Create the kube-proxy config file
Remember to adjust hostname-override; it differs per node.
$ cat >/etc/kubernetes/cfg/kube-proxy<<EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--metrics-bind-address=0.0.0.0 \
--hostname-override=192.168.91.21 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/etc/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
- Create the kube-proxy systemd unit
$ cat >/usr/lib/systemd/system/kube-proxy.service<<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
- Start the kube-proxy service
$ systemctl enable kube-proxy
$ systemctl start kube-proxy
$ systemctl status kube-proxy
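Since no --proxy-mode is set, kube-proxy should be running in the default iptables mode; an optional check that its chains are being programmed (rules for individual Services appear once the first Service exists):
$ iptables-save | grep KUBE-SERVICES | head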
11. Run a Demo Application
$ kubectl run nginx --image=nginx --replicas=2
$ kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
- Check the pods
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-7bb7cd8db5-5g6r4 0/1 ContainerCreating 0 40s
nginx-7bb7cd8db5-cddjv 0/1 ContainerCreating 0 40s
- Check the service
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 22h
nginx NodePort 10.0.0.191 <none> 88:37712/TCP 34s
# For the NodePort Service: 88 is the Service (cluster) port, 80 is the container port, and 37712 is the NodePort opened on every node.
- Access the web service
$ curl -I http://192.168.91.21:37712
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Mon, 13 Jul 2020 16:16:44 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 07 Jul 2020 15:52:25 GMT
Connection: keep-alive
ETag: "5f049a39-264"
Accept-Ranges: bytes
12. Deploy the Dashboard (Master-18)
- Create the Dashboard certificate
$ mkdir /certs && cd /certs
$ kubectl create namespace kubernetes-dashboard
# Generate the dashboard private key
$ openssl genrsa -out dashboard.key 2048
# Create the certificate signing request
$ openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# Create the self-signed certificate
$ openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# Create the kubernetes-dashboard-certs secret
$ kubectl delete secrets kubernetes-dashboard-certs -n kubernetes-dashboard
$ kubectl create secret generic kubernetes-dashboard-certs --from-file=/certs -n kubernetes-dashboard
# Check the secret
$ kubectl get secret -n kubernetes-dashboard
- Install the Dashboard
$ mkdir /root/dashboard && cd /root/dashboard
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
# Edit the downloaded manifest so that it looks like the following:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
#apiVersion: v1
#kind: Secret
#metadata:
# labels:
# k8s-app: kubernetes-dashboard
# name: kubernetes-dashboard-certs
# namespace: kubernetes-dashboard
#type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0-beta4
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Apply the manifest
$ kubectl apply -f recommended.yaml
# Check the result
$ kubectl get pod -A -o wide| grep kubernetes
$ kubectl get svc -A | grep kubernetes
- Create a Dashboard admin account
# Create the ServiceAccount
$ kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# Bind it to the cluster-admin role
$ kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# Get the login token
$ kubectl describe secrets $(kubectl get secrets -n kubernetes-dashboard | awk '/dashboard-admin-token/{print $1}' ) -n kubernetes-dashboard |sed -n '/token:.*/p'
# In a browser, open https://Node_IP:svc_nodeport and log in with the token.
13. Deploy Ingress
In Kubernetes an Ingress plays the role that an Nginx reverse proxy plays in a traditional architecture: it routes traffic to a Service, and the Service load-balances across its backend Pods. Here Traefik is used as the ingress controller, so we deploy Traefik.
$ mkdir /root/ingress && cd /root/ingress
$ kubectl apply -f traefik-crd.yaml                              # create the Traefik CRDs (cluster scoped)
$ kubectl apply -f traefik-rbac.yaml -n kube-system
$ kubectl apply -f traefik-config.yaml -n kube-system
$ kubectl apply -f traefik-deploy.yaml -n kube-system
$ kubectl apply -f traefik-dashboard-route.yaml -n kube-system   # create the dashboard route
# After creation, add a hosts entry on your workstation (NodeIP plus the domain configured in the route) and open that domain in a browser.
- traefik-crd.yaml
## IngressRoute
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressroutes.traefik.containo.us
spec:
scope: Namespaced
group: traefik.containo.us
version: v1alpha1
names:
kind: IngressRoute
plural: ingressroutes
singular: ingressroute
---
## IngressRouteTCP
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressroutetcps.traefik.containo.us
spec:
scope: Namespaced
group: traefik.containo.us
version: v1alpha1
names:
kind: IngressRouteTCP
plural: ingressroutetcps
singular: ingressroutetcp
---
## Middleware
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: middlewares.traefik.containo.us
spec:
scope: Namespaced
group: traefik.containo.us
version: v1alpha1
names:
kind: Middleware
plural: middlewares
singular: middleware
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tlsoptions.traefik.containo.us
spec:
scope: Namespaced
group: traefik.containo.us
version: v1alpha1
names:
kind: TLSOption
plural: tlsoptions
singular: tlsoption
- traefik-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: traefik-ingress-controller
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups: [""]
resources: ["services","endpoints","secrets"]
verbs: ["get","list","watch"]
- apiGroups: ["extensions"]
resources: ["ingresses"]
verbs: ["get","list","watch"]
- apiGroups: ["extensions"]
resources: ["ingresses/status"]
verbs: ["update"]
- apiGroups: ["traefik.containo.us"]
resources: ["middlewares"]
verbs: ["get","list","watch"]
- apiGroups: ["traefik.containo.us"]
resources: ["ingressroutes"]
verbs: ["get","list","watch"]
- apiGroups: ["traefik.containo.us"]
resources: ["ingressroutetcps"]
verbs: ["get","list","watch"]
- apiGroups: ["traefik.containo.us"]
resources: ["tlsoptions"]
verbs: ["get","list","watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
- traefik-config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: traefik-config
data:
traefik.yaml: |-
serversTransport:
insecureSkipVerify: true
api:
insecure: true
dashboard: true
debug: true
metrics:
prometheus: ""
entryPoints:
web:
address: ":80"
websecure:
address: ":443"
redistcp:
address: ":6379"
providers:
kubernetesCRD: ""
log:
filePath: ""
level: error
format: json
accessLog:
filePath: ""
format: json
bufferingSize: 0
filters:
retryAttempts: true
minDuration: 20
fields:
defaultMode: keep
names:
ClientUsername: drop
headers:
defaultMode: keep
names:
User-Agent: redact
Authorization: drop
Content-Type: keep
- traefik-deploy.yaml
apiVersion: v1
kind: Service
metadata:
name: traefik
labels:
app: traefik-metrics
spec:
ports:
- name: web
port: 80
- name: websecure
port: 443
- name: admin
port: 8080
selector:
app: traefik
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: traefik-ingress-controller
labels:
app: traefik
spec:
selector:
matchLabels:
app: traefik
template:
metadata:
name: traefik
labels:
app: traefik
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 1
containers:
#- image: traefik:latest
- image: traefik:2.0.5
name: traefik-ingress-lb
ports:
- name: web
containerPort: 80
hostPort: 80
- name: websecure
containerPort: 443
hostPort: 443
- name: admin
containerPort: 8080
- name: redistcp
containerPort: 6379
hostPort: 6379
resources:
limits:
cpu: 200m
memory: 300Mi
requests:
cpu: 100m
memory: 256Mi
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --configfile=/config/traefik.yaml
volumeMounts:
- mountPath: "/config"
name: "config"
volumes:
- name: config
configMap:
name: traefik-config
      tolerations:              # tolerate all taints so tainted nodes do not block scheduling
- operator: "Exists"
      nodeSelector:             # node selector: run only on nodes carrying this label
IngressProxy: "true"
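Note that the DaemonSet above only schedules onto nodes labelled IngressProxy=true, so label the node(s) that should run Traefik before (or after) applying it, for example:
$ kubectl label node 192.168.91.21 IngressProxy=true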
- traefik-dashboard-route.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: traefik-dashboard-route
namespace: kube-system
spec:
entryPoints:
- web
routes:
- match: Host(`ingress.abcd.com`)
kind: Rule
services:
- name: traefik
port: 8080
14. Expose Services through Ingress
14.1 An HTTP service
- Create an Nginx service
# Create the Pod (Deployment)
$ kubectl run nginx-ingress-demo --image=nginx --replicas=1 -n kube-system
# Create the Service
$ kubectl expose deployment nginx-ingress-demo --port=1099 --target-port=80 -n kube-system
- Create the Nginx route
$ vim nginx-ingress-demo-route.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: traefik-nginx-demo-route
namespace: kube-system
spec:
entryPoints:
- web
routes:
- match: Host(`nginx.abcd.com`)
kind: Rule
services:
- name: nginx-ingress-demo
port: 1099
$ kubectl apply -f nginx-ingress-demo-route.yaml
- Check the route and bind the domain
$ kubectl get ingressroute -A
NAMESPACE NAME AGE
kube-system traefik-dashboard-route 3h16m
kube-system traefik-nginx-demo-route 73s
# Add to the hosts file on the local Windows workstation
192.168.91.21 nginx.abcd.com
14.2 An HTTPS service
- Proxy the Dashboard HTTPS service
# Create a self-signed certificate
$ openssl req -x509 -nodes -days 36500 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=cloud.abcd.com"
# Store the certificate in Kubernetes as a TLS secret
$ kubectl create secret tls dashboard-tls --key=tls.key --cert=tls.crt -n kube-system
$ kubectl get secret -A|grep dashboard-tls
kube-system dashboard-tls kubernetes.io/tls 2 83s
# Create the Kubernetes Dashboard route
$ vim kubernetes-dashboard-route.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: kubernetes-dashboard-route
namespace: kubernetes-dashboard
spec:
entryPoints:
- websecure
tls:
secretName: dashboard-tls
routes:
- match: Host(`cloud.abcd.com`)
kind: Rule
services:
- name: kubernetes-dashboard
port: 443
# Apply the route
$ kubectl apply -f kubernetes-dashboard-route.yaml
# Check the routes
$ kubectl get ingressroute -A
NAMESPACE NAME AGE
kube-system traefik-dashboard-route 3h39m
kube-system traefik-nginx-demo-route 24m
kubernetes-dashboard kubernetes-dashboard-route 11s
# Add the hosts entry below and open https://cloud.abcd.com in a browser
192.168.91.21 cloud.abcd.com