《蹲坑学K8S》Part 20-3: Ceph Storage


Ceph is a unified distributed storage system that offers good performance, reliability, and scalability. Ceph's primary design goal is a POSIX-compatible distributed file system with no single point of failure, in which data is replicated seamlessly for fault tolerance.


I. Prepare the Servers

1. Passwordless SSH login

[root@ceph-1 ~]# ssh-keygen
[root@ceph-1 ~]# ssh-copy-id root@ceph-2
[root@ceph-1 ~]# ssh-copy-id root@ceph-3

2. Time synchronization

[root@ceph-1 ~]# vim /etc/chrony.conf
Add:
server 20.0.0.202 iburst

[root@ceph-1 ~]# scp /etc/chrony.conf root@ceph-2:/etc/
[root@ceph-1 ~]# scp /etc/chrony.conf root@ceph-3:/etc/

Note: restart the chronyd service on ceph-1, ceph-2 and ceph-3.

[root@ceph-1 ~]# systemctl restart chronyd
[root@ceph-1 ~]# chronyc sources -v

3. Add disks

(1) Add one disk to each of the ceph-1, ceph-2 and ceph-3 servers.

[root@ceph-1 ~]# lsblk
[root@ceph-2 ~]# lsblk
[root@ceph-3 ~]# lsblk

(2) Format the disks

[root@ceph-1 ~]# mkfs.xfs /dev/sdb
[root@ceph-2 ~]# mkfs.xfs /dev/sdb
[root@ceph-3 ~]# mkfs.xfs /dev/sdb

Note: cephadm only treats completely empty devices (no partitions or file system) as available for OSDs, so if you format the disk here you may need to wipe it again (e.g. with ceph orch device zap <host> /dev/sdb --force) before adding it as an OSD later.

II. Deploy Ceph

Ceph currently provides three official ways to deploy a cluster: ceph-deploy, cephadm, and manual installation (a quick comparison of the first two is sketched after the list):

  • ceph-deploy: an automated cluster deployment tool. It has been around a long time, is mature and stable, and is integrated by many automation tools; it can be used for production deployments.
  • cephadm: a new deployment tool available since Octopus, which supports adding nodes through a graphical or command-line interface. It is not yet recommended for production, but is worth trying.
  • manual: deploy the Ceph cluster step by step. This supports heavy customization and is harder to do, but gives you a clear grasp of every deployment detail.
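As a rough comparison of the first two options (a hedged sketch; flags vary by version, and this article uses cephadm below):

[root@ceph-1 ~]# ceph-deploy new ceph-1                      # ceph-deploy: start a new cluster definition (legacy tool)
[root@ceph-1 ~]# cephadm bootstrap --mon-ip 192.168.1.201    # cephadm: bootstrap a containerized cluster (used in this article)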

1. Install the Ceph packages on all nodes

(1) Prepare the yum repository

[root@ceph-1 ~]# vim /etc/yum.repos.d/ceph.repo
Add:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[root@ceph-1 ~]# scp /etc/yum.repos.d/ceph.repo root@ceph-2:/etc/yum.repos.d/
[root@ceph-1 ~]# scp /etc/yum.repos.d/ceph.repo root@ceph-3:/etc/yum.repos.d/

[root@ceph-1 ~]# yum install epel-release -y
[root@ceph-1 ~]# yum clean all && yum makecache

(2) Install the ceph packages on all nodes

[root@ceph-1 ~]# yum -y install ceph
[root@ceph-1 ~]# yum install -y ceph-common

[root@ceph-1 ~]# ceph --version

2. Create the admin node

(1) Create the admin node on ceph-1

[root@ceph-1 ~]# yum install -y cephadm
[root@ceph-1 ~]# cd /etc/ceph/
[root@ceph-1 ceph]# cephadm bootstrap --mon-ip 192.168.1.201

(2) Access the Ceph web UI. Note: on first login you must change the password and re-authenticate.

[root@ceph-1 ~]# podman images
[root@ceph-1 ~]# podman ps -a

The cephadm shell command starts a bash shell inside a container that has all the Ceph packages installed:

[root@ceph-1 ~]# cephadm shell
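cephadm shell can also run a single command in that container without an interactive session, for example:

[root@ceph-1 ~]# cephadm shell -- ceph -s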
[root@ceph-1 ~]# cephadm install ceph-common
or
[root@ceph-1 ~]# yum install -y ceph-common
[root@ceph-1 ~]# ceph -v
[root@ceph-1 ~]# ceph status
[root@ceph-1 ~]# ceph health

3. Build the Ceph cluster

[root@ceph-1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-2
[root@ceph-1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-3
[root@ceph-1 ~]# ceph orch host add ceph-2
[root@ceph-1 ~]# ceph orch host add ceph-3
[root@ceph-1 ~]# ceph status

4. Deploy the monitors

(1) Set the public network so clients can access the cluster

[root@ceph-1 ~]# ceph config set mon public_network 192.168.1.0/24
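To confirm the setting, read it back with ceph config get:

[root@ceph-1 ~]# ceph config get mon public_network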

(2) Label the monitor nodes

[root@ceph-1 ~]# ceph orch host label add ceph-1 mon
[root@ceph-1 ~]# ceph orch host label add ceph-2 mon
[root@ceph-1 ~]# ceph orch host label add ceph-3 mon
[root@ceph-1 ~]# ceph orch host ls

(3) Pull the images and start the containers on each node

[root@ceph-1 ~]# ceph orch apply mon label:mon

Check the images and running containers on ceph-2 and ceph-3 (e.g. with podman images and podman ps -a on each node).

(4) Check the cluster health status

[root@ceph-1 ~]# ceph -s

5. Deploy OSDs

(1) View the available disks

[root@ceph-1 ~]# ceph orch device ls

(2) Add the disks

[root@ceph-1 ceph]# ceph orch daemon add osd ceph-1:/dev/sdb
[root@ceph-1 ceph]# ceph orch daemon add osd ceph-2:/dev/sdb
[root@ceph-1 ceph]# ceph orch daemon add osd ceph-3:/dev/sdb
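Alternatively, cephadm can create OSDs on every eligible (empty) device in one step; use with care, since it consumes all available disks:

[root@ceph-1 ceph]# ceph orch apply osd --all-available-devices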

(3) Check the result of adding the disks

[root@ceph-1 ceph]# ceph osd df

(4) Check the cluster health

[root@ceph-1 ceph]# ceph -s

6. Deploy MDS

[root@ceph-1 ceph]# ceph orch apply mds ceph-cluster --placement=3
[root@ceph-1 ceph]# ceph -s
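To inspect the individual MDS daemons rather than the cluster summary, ceph orch ps can filter by daemon type:

[root@ceph-1 ceph]# ceph orch ps --daemon-type mds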

7. Deploy RGW

(1) Create a realm

[root@ceph-1 ceph]# radosgw-admin realm create --rgw-realm=my-rgw --default

(2) Create a zonegroup

[root@ceph-1 ~]# radosgw-admin zonegroup create --rgw-zonegroup=my-rgwgroup --master --default

(3) Create a zone

[root@ceph-1 ceph]# radosgw-admin zone create --rgw-zonegroup=my-rgwgroup --rgw-zone=my-zone-1 --master --default
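If the realm/zonegroup/zone layout is changed later, the multisite period generally has to be committed before the change takes effect (a hedged extra step):

[root@ceph-1 ceph]# radosgw-admin period update --commit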

(4) Deploy a group of radosgw daemons

[root@ceph-1 ceph]# ceph orch apply rgw my-rgw my-zone-1 --placement="2 ceph-2 ceph-3"

(5) Check the RGW deployment

[root@ceph-1 ceph]# ceph -s

(6) Create an admin user to enable the dashboard for RGW

[root@ceph-1 ceph]# radosgw-admin user create --uid=admin --display-name=admin --system

(7) Set the dashboard credentials

Note: use the keys of the RGW admin user created above.

[root@ceph-1 ceph]# ceph dashboard set-rgw-api-access-key VKIC8KNA7Q75P1PT8JVA
[root@ceph-1 ceph]# ceph dashboard set-rgw-api-secret-key EEqJMI7zEgK1MPSWSAM4vZJRbLkmaQZs8czJez9B

Additional dashboard settings:

[root@ceph-1 ceph]# ceph dashboard set-rgw-api-ssl-verify False
[root@ceph-1 ceph]# ceph dashboard set-rgw-api-scheme http
[root@ceph-1 ceph]# ceph dashboard set-rgw-api-host <rgw-host-ip>    ## the IP of an RGW host (ceph-2 or ceph-3), not a subnet
[root@ceph-1 ceph]# ceph dashboard set-rgw-api-port 80
[root@ceph-1 ceph]# ceph dashboard set-rgw-api-user-id admin

(8) Restart RGW

[root@ceph-1 ceph]# ceph orch restart rgw

(9) Edit the Ceph configuration file

[root@ceph-1 ~]# vim /etc/ceph/ceph.conf
Add:
public network = 192.168.1.0/24
cluster network = 192.168.1.0/24
osd_pool_default_size = 2
mon_clock_drift_allowed = 2                   ## tolerate a larger clock drift between mons

[mon]
mon allow pool delete = true                  ## allow pools to be deleted

Check the pools and usage:

[root@ceph-1 ~]# rados df
[root@ceph-1 ~]# ceph osd lspools

III. Create a Ceph File System

1. Create the storage pools

[root@ceph-1 ceph]# ceph fs ls

A Ceph file system requires at least two RADOS pools, one for data and one for metadata. When configuring these pools, consider:

Using a higher replication level for the metadata pool, because losing any data in this pool can render the whole file system unusable.

Using lower-latency storage such as SSDs for the metadata pool, because metadata latency directly affects the latency of client operations.

[root@ceph-1 ceph]# ceph osd pool create cephfs_data 128
[root@ceph-1 ceph]# ceph osd pool create cephfs_metadata 128

Note on creating pools:
You must choose a pg_num value explicitly, because it cannot be calculated automatically. Commonly used values (an example of applying them follows this note):
* With fewer than 5 OSDs, set pg_num to 128
* With 5 to 10 OSDs, set pg_num to 512
* With 10 to 50 OSDs, set pg_num to 4096
* With more than 50 OSDs, you need to understand the trade-offs and calculate pg_num yourself
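Following the advice above, a small sketch of tuning the two pools just created (values are illustrative):

[root@ceph-1 ceph]# ceph osd pool set cephfs_metadata size 3    # higher replication for the metadata pool
[root@ceph-1 ceph]# ceph osd pool get cephfs_metadata size      # verify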

Deleting a pool

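A sketch of the deletion itself (the pool name is a placeholder; deletion only works when mon_allow_pool_delete is enabled, as configured in ceph.conf above):

[root@ceph-1 ceph]# ceph config set mon mon_allow_pool_delete true
[root@ceph-1 ceph]# ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it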

2. Create the file system

[root@ceph-1 ceph]# ceph fs new cephfs_dodo cephfs_metadata cephfs_data

View the newly created CephFS:

[root@ceph-1 ceph]# ceph fs ls

Check the MDS node status (e.g. with ceph mds stat).

View the pools:

[root@ceph-1 ceph]# rados lspools
or
[root@ceph-1 ceph]# ceph osd lspools
[root@ceph-1 ceph]# rados df

3. Mount the Ceph file system

(1) Create a mount point

[root@client ~]# mkdir /dodo

(2) Mount CephFS with the kernel driver

Prepare the key:

[root@ceph-1 ceph]# cat /etc/ceph/ceph.client.admin.keyring

Copy the key from the Ceph server to the client.

[root@client ~]# mkdir /etc/ceph/
[root@client ~]# vim /etc/ceph/admin.secret
Add:
AQAc879eIBHNIhAAhmLPJSe758tL9lyqNL5aZw==

Mount:
[root@client ~]# mount -t ceph 192.168.1.201:6789:/ /dodo/ -o name=admin,secret=AQAc879eIBHNIhAAhmLPJSe758tL9lyqNL5aZw==
or
[root@client ~]# mount -t ceph 192.168.1.201:6789:/ /dodo -o name=admin,secretfile=/etc/ceph/admin.secret

[root@client ~]# mount | grep ceph
[root@client ~]# df -h
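To make the mount persist across reboots, an /etc/fstab entry along these lines can be used (a sketch; _netdev delays mounting until the network is up):

192.168.1.201:6789:/  /dodo  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0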

Unmount:

[root@client ~]# umount /dodo

(3) Mount CephFS in user space (ceph-fuse)

Install ceph-fuse:

[root@client ~]# yum install epel-release -y
[root@client ~]# yum install -y ceph-fuse

Copy the Ceph configuration and key files:

[root@ceph-1 ~]# scp /etc/ceph/ceph.*  client:/etc/ceph/

Note: set the permissions of the Ceph configuration and key files to 644.

[root@client ~]# chmod 644 /etc/ceph/*

Mount:

[root@client ~]# ceph-fuse -m 192.168.1.201:6789 /dodo
[root@client ~]# df -h

Unmount:

[root@client ~]# fusermount -u /dodo

IV. Using Ceph from Kubernetes

(A) Prepare the Ceph storage

1. Create a pool

[root@ceph-1 ceph]# ceph osd pool create dodo-cephfs 64

2. Prepare a K8S client account on Ceph

[root@ceph-1 ceph]# ceph auth get-or-create client.dodo mon 'allow r' osd 'allow rwx pool=dodo-cephfs' -o dodo-cephfs.k8s.keyring
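You can verify the new account and its capabilities before handing the key to Kubernetes:

[root@ceph-1 ceph]# ceph auth get client.dodo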

3. Get the account's key

[root@ceph-1 ceph]# ceph auth get-key client.admin | base64
The key is: QVFBYzg3OWVJQkhOSWhBQWhtTFBKU2U3NTh0TDlseXFOTDVhWnc9PQ==

(B) Install ceph on the clients (k8s-node-1, k8s-node-2 and k8s-master)

Note: the client version should match the version of the Ceph cluster.

1. Install ceph

[root@k8s-master ~]# vim /etc/yum.repos.d/ceph.repo
Add:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[root@k8s-master ~]# scp /etc/yum.repos.d/ceph.repo 192.168.1.1:/etc/yum.repos.d/
[root@k8s-master ~]# scp /etc/yum.repos.d/ceph.repo 192.168.1.2:/etc/yum.repos.d/

[root@k8s-master ~]# yum install epel-release -y
[root@k8s-master ~]# yum -y install ceph-common
[root@k8s-master ~]# ceph --version

2. Copy the cluster configuration file ceph.conf to /etc/ceph on all nodes

[root@ceph-1 ~]# scp /etc/ceph/ceph.conf 192.168.1.1:/etc/ceph/
[root@ceph-1 ~]# scp /etc/ceph/ceph.conf 192.168.1.2:/etc/ceph/
[root@ceph-1 ~]# scp /etc/ceph/ceph.conf 192.168.1.3:/etc/ceph/

3. Copy the cluster's dodo-cephfs.k8s.keyring file (created above) to /etc/ceph on the K8S nodes

[root@ceph-1 ~]# scp /etc/ceph/dodo-cephfs.k8s.keyring 192.168.1.1:/etc/ceph/
[root@ceph-1 ~]# scp /etc/ceph/dodo-cephfs.k8s.keyring 192.168.1.2:/etc/ceph/
[root@ceph-1 ~]# scp /etc/ceph/dodo-cephfs.k8s.keyring 192.168.1.3:/etc/ceph/

4. Generate the base64-encoded key

[root@ceph-1 ~]# grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
QVFBYzg3OWVJQkhOSWhBQWhtTFBKU2U3NTh0TDlseXFOTDVhWnc9PQ==

5. Create the Ceph secret

[root@k8s-master ~]# vim ceph-secret.yaml
Add:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFBYzg3OWVJQkhOSWhBQWhtTFBKU2U3NTh0TDlseXFOTDVhWnc9PQ==
[root@k8s-master ~]# kubectl apply -f ceph-secret.yaml
[root@k8s-master ~]# kubectl get secrets ceph-secret

6. Create the StorageClass

[root@k8s-master ~]# vim ceph-class.yaml
Add:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.201:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: dodo-cephfs
  userId: admin
  userSecretName: ceph-secret
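Apply and verify the StorageClass (standard kubectl usage):

[root@k8s-master ~]# kubectl apply -f ceph-class.yaml
[root@k8s-master ~]# kubectl get storageclass test-ceph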

7. Use the Ceph storage

[root@k8s-master ~]# vim ceph-web.yaml
Add:
apiVersion: v1
kind: Service
metadata:
  name: ceph-web
  labels:
    app: ceph-web
spec:
  type: NodePort
  selector:
    app: ceph-web
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ceph-web
spec:
  serviceName: "ceph-web"
  replicas: 2
  selector:
    matchLabels:
      app: ceph-web
  template:
    metadata:
      labels:
        app: ceph-web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: test-storage
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: test-storage
    spec:
      storageClassName: test-ceph
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 3Gi
[root@k8s-master ~]# kubectl apply -f ceph-web.yaml
[root@k8s-master ~]# kubectl get pod -o wide
[root@k8s-master ~]# kubectl get pv
[root@k8s-master ~]# kubectl get pvc

Exec into the containers to check the mount status (e.g. with df -hT inside the Pod).

Add test files:

[root@k8s-master ~]# kubectl exec -it ceph-web-0 -- /bin/bash
root@ceph-web-0:/# echo "welcome to POD:ceph-web-0!!!" > /usr/share/nginx/html/index.html
[root@k8s-master ~]# kubectl exec -it ceph-web-1 -- /bin/bash
root@ceph-web-1:/# df -hT
[root@k8s-master ~]# kubectl exec -it ceph-web-1 -- /bin/bash
root@ceph-web-1:/# echo "welcome to POD:ceph-web-1!!!" > /usr/share/nginx/html/index.html

Access the service:


Note: the pages are served in round-robin fashion across the Pods.
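A quick command-line check of the round-robin behavior (a sketch; the NodePort is assigned by Kubernetes, so look it up first, and <nodeport> below is a placeholder):

[root@k8s-master ~]# kubectl get svc ceph-web             # note the assigned NodePort
[root@k8s-master ~]# curl http://192.168.1.1:<nodeport>   # repeat; the two index pages alternate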



Copyright notice: this is an original article by weixin_29982021, released under the CC 4.0 BY-SA license; when reposting, please include a link to the original source and this notice.