Connecting a K8s Cluster to an External Ceph Cluster

1. Environment Setup

① Install the Ceph yum repository on all K8s nodes

$ rpm -ivh https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm

$ yum install epel-release -y

② Install ceph-common on all K8s nodes

$ yum -y install ceph-common
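
As a quick sanity check, you can confirm the client tools are present on each node (output will vary with the package version pulled from the mirror):

$ rpm -q ceph-common
$ ceph --version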

③ Configure hosts resolution

Name resolution makes it convenient to use hostnames when configuring dynamic volume parameters.

[root@ceph-admin ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.59 ceph-admin
192.168.0.55 ceph-stor01 stor01 mon01 mds01
192.168.0.56 ceph-stor02 stor02 mon02 mgr01
192.168.0.57 ceph-stor03 stor03 mon03 mgr02
192.168.0.107 master01
192.168.0.108 master02
192.168.0.109 master03
192.168.0.236 master-lb
192.168.0.110 node01
192.168.0.111 node02

[root@ceph-admin ~]# for host in master01 master02 master03 node01 node02;do scp /etc/hosts $host:/etc/;done

2. Configure K8s Access to the External Ceph Cluster

① Create a storage pool for k8s

$ ceph osd pool create k8s 64
$ ceph osd pool application enable k8s rbd
$ rbd pool init -p k8s
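
Optionally, verify that the pool exists and is tagged for the rbd application before creating any images:

$ ceph osd pool ls detail | grep k8s
$ ceph osd pool application get k8s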

# Create a few test images
[cephadm@ceph-admin ~/ceph-cluster]$ rbd create --size 2G k8s/img01
[cephadm@ceph-admin ~/ceph-cluster]$ rbd create --size 2G k8s/img02
[cephadm@ceph-admin ~/ceph-cluster]$ rbd ls -p k8s
img01
img02

② Create a user for K8s to communicate with the Ceph cluster

$ ceph auth get-or-create client.k8s mon 'allow r' osd 'allow * pool=k8s'

# Export the keyring
$ ceph auth get client.k8s -o ceph.client.k8s.keyring
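
Optionally, review the capabilities granted to the new user before distributing the keyring:

$ ceph auth get client.k8s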

③ Distribute the Ceph configuration files to the k8s cluster

$ for host in master01 master02 master03 node01 node02;do scp ceph.client.k8s.keyring ceph.conf root@${host}:/etc/ceph;done

# From the k8s cluster, test whether the k8s user can query the Ceph cluster
[root@master01 ~]# rbd --user=k8s ls -p k8s
img01
img02

④ Retrieve the keys and store them in k8s

  • admin user information
# Get the admin user's key and base64-encode it
[cephadm@ceph-admin ~/ceph-cluster]$ ceph auth get-key client.admin|base64
QVFBb3krWmd5eTB2SEJBQXJQcVFGWlZvUGVUck1aa0hRb3MwZVE9PQ==
  • k8s user information
# Get the k8s user's key and base64-encode it
[cephadm@ceph-admin ~/ceph-cluster]$ ceph auth get-key client.k8s|base64
QVFDSXlPcGc2YzJyRVJBQVRaSmd3SHJ1VEdhSVJRb1dLandPSUE9PQ==
  • Create Secrets to store the Ceph admin and regular user access credentials
$ cat ceph-secret.yaml 

---
apiVersion: v1
data:
  key: QVFBb3krWmd5eTB2SEJBQXJQcVFGWlZvUGVUck1aa0hRb3MwZVE9PQ==
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: kubernetes.io/rbd
---
apiVersion: v1
data:
  key: QVFDSXlPcGc2YzJyRVJBQVRaSmd3SHJ1VEdhSVJRb1dLandPSUE9PQ== 
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: default
type: kubernetes.io/rbd
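
Apply the manifest to create both Secrets in their respective namespaces; a minimal check might look like this:

$ kubectl apply -f ceph-secret.yaml
$ kubectl get secret ceph-user-secret
$ kubectl get secret ceph-admin-secret -n kube-system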

⑤ Use k8s dynamic volume provisioning with Ceph

$ cat dynamic-sc-ceph-rbd.yaml 

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-dynamic
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.0.55:6789,192.168.0.56:6789,192.168.0.57:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: k8s
  userId: k8s
  userSecretName: ceph-user-secret
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
reclaimPolicy: Retain
allowVolumeExpansion: true
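
Apply the StorageClass; because the is-default-class annotation is set to "true", it also becomes the cluster-wide default:

$ kubectl apply -f dynamic-sc-ceph-rbd.yaml
$ kubectl get sc rbd-dynamic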

⑥ Create a PVC bound to the StorageClass

$ cat rbd-pvc-sc.yaml 

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rbd-dynamic
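
Applying the claim triggers the provisioner to create an RBD image in the k8s pool and bind a matching PV:

$ kubectl apply -f rbd-pvc-sc.yaml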
  • Check the status
$ kubectl get secret | grep ceph
ceph-user-secret      kubernetes.io/rbd                     1      20m
$ kubectl get secret -n kube-system | grep ceph
ceph-admin-secret      kubernetes.io/rbd                     1      20m
$ kubectl get sc
NAME                    PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rbd-dynamic (default)   kubernetes.io/rbd   Retain          Immediate           true                   11

$  kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-claim   Bound    pvc-a3a5245e-eca2-4aad-a4ed-026293ffa9f3   2Gi        RWO            rbd-dynamic    4m21s

$ rbd --user=k8s -p k8s ls
img01
img02
kubernetes-dynamic-pvc-f127c45d-b60c-4df2-977a-0a2e712946cd
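
To confirm the volume is usable from a workload, a minimal test Pod can mount the claim. This is only a sketch: the Pod name, image, file name, and mount path below are illustrative assumptions, not part of the original setup.

$ cat rbd-test-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod          # hypothetical name for a quick mount test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data        # the dynamically provisioned RBD image is mounted here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: rbd-claim    # the PVC created above

$ kubectl apply -f rbd-test-pod.yaml
$ kubectl exec rbd-test-pod -- df -h /data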