CentOS RBD on a CentOS 7 Server


Using Ceph block storage (RBD): creating and mounting an RBD image as an ordinary user

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create rbd1-data 32 32

pool 'rbd1-data' created

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool ls

device_health_metrics

mypool

.rgw.root

default.rgw.log

default.rgw.control

default.rgw.meta

myrbd1

cephfs-metadata

cephfs-data

rbd1-data

Enable the rbd application on the pool:

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable rbd1-data rbd

enabled application 'rbd' on pool 'rbd1-data'

Initialize the pool:

ceph@ceph-deploy:~/ceph-cluster$ rbd pool init -p rbd1-data

Create images in the pool:

Images are managed with the rbd command, which supports creating, listing, and deleting images, as well as creating, listing, and deleting snapshots, rolling back to a snapshot, and cloning images.
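The snapshot and clone operations mentioned above can be sketched as follows, using the rbd1-data/data-img1 image created below (the snapshot and clone names are hypothetical):

```shell
# Create and list snapshots of an image
rbd snap create rbd1-data/data-img1@snap1
rbd snap ls rbd1-data/data-img1

# Roll the image back to a snapshot (unmount/unmap on clients first)
rbd snap rollback rbd1-data/data-img1@snap1

# Cloning requires the source snapshot to be protected
rbd snap protect rbd1-data/data-img1@snap1
rbd clone rbd1-data/data-img1@snap1 rbd1-data/data-img1-clone

# Clean up: remove the clone, unprotect, then delete the snapshot
rbd rm rbd1-data/data-img1-clone
rbd snap unprotect rbd1-data/data-img1@snap1
rbd snap rm rbd1-data/data-img1@snap1
```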

ceph@ceph-deploy:~/ceph-cluster$ rbd create data-img1 --size 3G --pool rbd1-data --image-format 2 --image-feature layering

ceph@ceph-deploy:~/ceph-cluster$ rbd create data-img2 --size 5G --pool rbd1-data --image-format 2 --image-feature layering

List the images in the pool:

ceph@ceph-deploy:~/ceph-cluster$ rbd list --pool rbd1-data

data-img1

data-img2

List more details about the images:

ceph@ceph-deploy:~/ceph-cluster$ rbd list --pool rbd1-data -l

NAME    SIZE  PARENT  FMT  PROT  LOCK

data-img1  3 GiB       2      

data-img2  5 GiB       2

ceph@ceph-deploy:~/ceph-cluster$ rbd --image data-img1 --pool rbd1-data info

rbd image 'data-img1':

size 3 GiB in 768 objects

order 22 (4 MiB objects)

snapshot_count: 0

id: 3ab91c6a62f5

block_name_prefix: rbd_data.3ab91c6a62f5

format: 2

features: layering

op_features:

flags:

create_timestamp: Thu Sep  2 06:48:11 2021

access_timestamp: Thu Sep  2 06:48:11 2021

modify_timestamp: Thu Sep  2 06:48:11 2021

ceph@ceph-deploy:~/ceph-cluster$ rbd --image data-img1 --pool rbd1-data info --format json --pretty-format

{

  "name": "data-img1",

  "id": "3ab91c6a62f5",

  "size": 3221225472,

  "objects": 768,

  "order": 22,

  "object_size": 4194304,

  "snapshot_count": 0,

  "block_name_prefix": "rbd_data.3ab91c6a62f5",

  "format": 2,

  "features": [

    "layering"

  ],

  "op_features": [],

  "flags": [],

  "create_timestamp": "Thu Sep  2 06:48:11 2021",

  "access_timestamp": "Thu Sep  2 06:48:11 2021",

  "modify_timestamp": "Thu Sep  2 06:48:11 2021"

}

Enabling and disabling image features

The available features are:

layering: layered (copy-on-write) snapshots, enabled by default

striping: striping of data across multiple objects

exclusive-lock: exclusive locking, enabled by default

object-map: object map, which speeds up data import/export and used-space accounting, enabled by default

fast-diff: fast calculation of differences between objects and snapshots, enabled by default

deep-flatten: flattening of snapshots, enabled by default

journaling: whether writes are journaled

Enable:

ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable object-map --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable fast-diff --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable exclusive-lock --pool rbd1-data --image data-img1

Disable:

ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable object-map --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable fast-diff --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable exclusive-lock --pool rbd1-data --image data-img1

Using the block device from a client:

First install ceph-common and set up the credentials.

[root@ceph-client1 ceph_data]# yum install -y ceph-common

Copy the credentials to the client:

ceph@ceph-deploy:/etc/ceph$ sudo -i

root@ceph-deploy:~# cd /etc/ceph/

root@ceph-deploy:/etc/ceph# scp ceph.conf ceph.client.admin.keyring root@192.168.241.21:/etc/ceph

On Ubuntu:

root@ceph-client2:/var/lib/ceph# apt install -y ceph-common

root@ceph-deploy:/etc/ceph# sudo scp ceph.conf ceph.client.admin.keyring ceph@192.168.241.22:/tmp

ceph@192.168.241.22's password:

ceph.conf                                                          100%  270  117.7KB/s  00:00  

ceph.client.admin.keyring

root@ceph-client2:/var/lib/ceph# cd /etc/ceph/

root@ceph-client2:/etc/ceph# cp /tmp/ceph.c* /etc/ceph/

root@ceph-client2:/etc/ceph# ll /etc/ceph/

total 20

drwxr-xr-x  2 root root 4096 Aug 26 07:58 ./

drwxr-xr-x 84 root root 4096 Aug 26 07:49 ../

-rw-------  1 root root  151 Sep  2 07:24 ceph.client.admin.keyring

-rw-r--r--  1 root root  270 Sep  2 07:24 ceph.conf

-rw-r--r--  1 root root  92 Jul  8 07:17 rbdmap

-rw-------  1 root root   0 Aug 26 07:58 tmpmhFvZ7

Map the image on the client

root@ceph-client2:/etc/ceph# rbd -p rbd1-data map data-img1

rbd: sysfs write failed

RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd1-data/data-img1 object-map fast-diff".

In some cases useful info is found in syslog - try "dmesg | tail".

rbd: map failed: (6) No such device or address

root@ceph-client2:/etc/ceph# rbd feature disable rbd1-data/data-img1 object-map fast-diff

root@ceph-client2:/etc/ceph# rbd -p rbd1-data map data-img1

/dev/rbd0

root@ceph-client2:/etc/ceph# rbd -p rbd1-data map data-img2
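To see which local device each image was assigned, and to release a mapping again, the standard rbd subcommands can be used (a small sketch, not from the original transcript):

```shell
# List mapped RBD images and their local /dev/rbdX devices
rbd showmapped

# Unmap a device when it is no longer needed (unmount it first)
rbd unmap /dev/rbd0
```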

Format the block devices (images mapped with the admin keyring)

Check the block devices:

root@ceph-client2:/etc/ceph# lsblk

NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda    8:0   0  20G  0 disk

└─sda1  8:1   0  20G  0 part /

sr0   11:0   1 1024M  0 rom 

rbd0  252:0   0   3G  0 disk

rbd1  252:16  0   5G  0 disk

root@ceph-client2:/etc/ceph# mkfs.ext4 /dev/rbd1

mke2fs 1.44.1 (24-Mar-2018)

Discarding device blocks: done              

Creating filesystem with 1310720 4k blocks and 327680 inodes

Filesystem UUID: 168b99e6-a3d7-4dc6-9c69-76ce8b42f636

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done              

Writing inode tables: done              

Creating journal (16384 blocks): done

Writing superblocks and filesystem accounting information: done

Mount the block device

root@ceph-client2:/etc/ceph# mkdir -p /data/data1

root@ceph-client2:/etc/ceph# mount /dev/rbd1 /data/data1/

Verify by writing data:

root@ceph-client2:/etc/ceph# cd /data/data1/

root@ceph-client2:/data/data1# cp -r /var/log/. .

root@ceph-client2:/data/data1# ceph df

--- RAW STORAGE ---

CLASS   SIZE   AVAIL   USED  RAW USED %RAW USED

hdd   220 GiB  213 GiB  7.4 GiB  7.4 GiB    3.37

TOTAL  220 GiB  213 GiB  7.4 GiB  7.4 GiB    3.37

--- POOLS ---

POOL          ID  PGS  STORED  OBJECTS   USED %USED  MAX AVAIL

device_health_metrics  1   1    0 B     0    0 B    0   66 GiB

mypool          2  32  1.2 MiB     1  3.5 MiB    0   66 GiB

.rgw.root        3  32  1.3 KiB     4  48 KiB    0   66 GiB

default.rgw.log     4  32  3.6 KiB    209  408 KiB    0   66 GiB

default.rgw.control   5  32    0 B     8    0 B    0   66 GiB

default.rgw.meta     6   8    0 B     0    0 B    0   66 GiB

myrbd1          7  64  829 MiB    223  2.4 GiB  1.20   66 GiB

cephfs-metadata     8  32  563 KiB    23  1.7 MiB    0   66 GiB

cephfs-data       9  64  455 MiB    129  1.3 GiB  0.66   66 GiB

rbd1-data        10  32  124 MiB    51  373 MiB  0.18   66 GiB

Create an ordinary user and grant it access

root@ceph-deploy:/etc/ceph# ceph auth add client.huahualin mon "allow rw" osd "allow rwx pool=rbd1-data"

added key for client.huahualin

root@ceph-deploy:/etc/ceph# ceph-authtool --create-keyring ceph.client.huahualin.keyring

creating ceph.client.huahualin.keyring

root@ceph-deploy:/etc/ceph# ceph auth get client.huahualin -o ceph.client.huahualin.keyring

exported keyring for client.huahualin
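The three steps above (add the user, create a keyring, export the key into it) can also be collapsed into a single command with `ceph auth get-or-create` (an alternative sketch, not the method used in the transcript):

```shell
# Create the user if it does not exist and write its keyring in one step
ceph auth get-or-create client.huahualin \
    mon 'allow rw' osd 'allow rwx pool=rbd1-data' \
    -o ceph.client.huahualin.keyring
```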

Use the ordinary user to access the RBD: copy the config and the user's keyring to the client

root@ceph-deploy:/etc/ceph# scp ceph.conf ceph.client.huahualin.keyring  root@192.168.241.21:/etc/ceph/

Map the image as the ordinary user

[root@ceph-client1 ~]# rbd --user huahualin --pool rbd1-data map data-img2

/dev/rbd0

Mount the RBD as the ordinary user

[root@ceph-client1 ~]# mkfs.ext4 /dev/rbd0

[root@ceph-client1 ~]# fdisk -l /dev/rbd0

[root@ceph-client1 ~]# mkdir /data

[root@ceph-client1 ~]# mount /dev/rbd0 /data

[root@ceph-client1 ~]# df -Th

Filesystem               Type      Size  Used Avail Use% Mounted on

devtmpfs                 devtmpfs  475M     0  475M   0% /dev

tmpfs                    tmpfs     487M     0  487M   0% /dev/shm

tmpfs                    tmpfs     487M  7.7M  479M   2% /run

tmpfs                    tmpfs     487M     0  487M   0% /sys/fs/cgroup

/dev/mapper/centos-root  xfs        37G  1.7G   36G   5% /

/dev/sda1                xfs      1014M  138M  877M  14% /boot

tmpfs                    tmpfs      98M     0   98M   0% /run/user/0

192.168.241.12:6789:/    ceph       67G  456M   67G   1% /ceph_data

/dev/rbd0                ext4      4.8G   20M  4.6G   1% /data

Mapping an RBD automatically loads the libceph.ko kernel module:

[root@ceph-client1 ~]# lsmod | grep ceph

ceph          363016  1

libceph        306750  2 rbd,ceph

dns_resolver      13140  1 libceph

libcrc32c        12644  4 xfs,libceph,nf_nat,nf_conntrack

[root@ceph-client1 ~]# modinfo libceph

filename:   /lib/modules/3.10.0-1160.el7.x86_64/kernel/net/ceph/libceph.ko.xz

license:     GPL

description:   Ceph core library

author:     Patience Warnick <patience@newdream.net>

author:     Yehuda Sadeh <yehuda@hq.newdream.net>

author:     Sage Weil <sage@newdream.net>

retpoline:    Y

rhelversion:   7.9

srcversion:   D4ABB648AE8130ECF90AA3F

depends:     libcrc32c,dns_resolver

intree:     Y

vermagic:    3.10.0-1160.el7.x86_64 SMP mod_unload modversions

signer:     CentOS Linux kernel signing key

sig_key:     E1:FD:B0:E2:A7:E8:61:A1:D1:CA:80:A2:3D:CF:0D:BA:3A:A4:AD:F5

sig_hashalgo:  sha256

If an image runs out of space, it can be resized to a larger size; shrinking is generally not recommended.

List the images in the rbd1-data pool:

[root@ceph-client1 ~]# rbd ls -p rbd1-data -l

NAME    SIZE  PARENT  FMT  PROT  LOCK

data-img1  3 GiB       2      

data-img2  5 GiB       2 

For example, if data-img2 runs short of space, extend it to 8G:

[root@ceph-client1 ~]# rbd resize --pool rbd1-data --image data-img2 --size 8G

Resizing image: 100% complete...done.

The new image size is visible with fdisk -l, but df -h still reports the old size, because the filesystem has not yet been grown:

[root@ceph-client1 ~]# lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda               8:0    0   40G  0 disk

├─sda1            8:1    0    1G  0 part /boot

└─sda2            8:2    0   39G  0 part

  ├─centos-root 253:0    0   37G  0 lvm  /

  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]

sr0              11:0    1 1024M  0 rom

rbd0            252:0    0    8G  0 disk /data

[root@ceph-client1 ~]# fdisk -l /dev/rbd0

Disk /dev/rbd0: 8589 MB, 8589934592 bytes, 16777216 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
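As noted above, the kernel now sees the 8 GB device, but df -h keeps showing the old size until the ext4 filesystem is grown to fill it. A minimal sketch (resize2fs can grow a mounted ext4 filesystem online; an XFS filesystem would need xfs_growfs instead):

```shell
# Grow the ext4 filesystem on the mapped device to fill the resized image
resize2fs /dev/rbd0

# df should now report the enlarged filesystem
df -h /data
```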

Make the mapping and mount persist across reboots

[root@ceph-client1 ~]# vi /etc/rc.d/rc.local

rbd --user huahualin --pool rbd1-data map data-img2

mount /dev/rbd0 /data

[root@ceph-client1 ~]# chmod a+x /etc/rc.d/rc.local

[root@ceph-client1 ~]# reboot
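An alternative to rc.local, not used in the article, is the rbdmap service that ships with ceph-common (the rbdmap file was visible in /etc/ceph earlier): list the image in /etc/ceph/rbdmap, add an fstab entry, and enable the unit. A sketch assuming the same user and mount point:

```shell
# /etc/ceph/rbdmap -- pool/image plus the credentials to map it with:
#   rbd1-data/data-img2 id=huahualin,keyring=/etc/ceph/ceph.client.huahualin.keyring

# /etc/fstab -- udev creates /dev/rbd/<pool>/<image> symlinks for mapped images:
#   /dev/rbd/rbd1-data/data-img2  /data  ext4  noauto,_netdev  0 0

# Enable the service so images are mapped and mounted at boot
systemctl enable rbdmap
```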

How to build an OpenStack image

This section walks through building an OpenStack image, using CentOS 7.2 as the example. Building an image by hand is tedious: download the installation ISO (a local mirror is recommended for speed), create a virtual machine, install the operating system, and configure it. For example, download the x86_64 Minimal ISO from the official isos directory, create a 10 GB qcow2 root disk, then boot the VM from an installation script and run the OS install.

The automation tool DIB (diskimage-builder) simplifies this: you only specify elements on the command line (such as installing cloud-init and configuring yum sources), avoiding the repetitive manual steps. The build host (Ubuntu 14.04 here) must support VT and have the virtualization management tools installed.
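A DIB invocation for the workflow described here might look like the following (a sketch only; the element names and output name are assumptions, so check the diskimage-builder documentation for your version):

```shell
# Build a CentOS 7 cloud image; "centos7" and "vm" are diskimage-builder elements
disk-image-create centos7 vm -o centos7-cloud
```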

Inside the VM, choose Minimal Install, set up the root partition manually, configure SSH to allow root remote login, and make sure the acpid service is running so that soft reboot and clean shutdown work. Install cloud-init so the instance can fetch its configuration from the metadata service.

During the build, also disable the zeroconf service to avoid conflicts with the metadata service, and use growpart so the image's root partition can be grown dynamically. Finally, remove host-specific information and delete the VM to finish the image.

To upload the image you can use the glance command, or import it through Ceph's rbd import, which is more efficient for large images. Also be sure to set the qemu-guest-agent property so that Nova's dynamic password change feature works.
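The rbd upload path mentioned above can be sketched as follows (the file names and the pool name "images" are assumptions; an RBD-backed Glance stores images in raw format):

```shell
# Convert the qcow2 image to raw, since RBD stores raw data
qemu-img convert -f qcow2 -O raw centos72.qcow2 centos72.raw

# Import the raw image directly into the Glance pool
rbd import centos72.raw --pool images --image-format 2
```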

The DIB project automates image building through chroot and element scripts, simplifying image customization and maintenance. With DIB you can easily derive variants from a base image, such as an Ubuntu 14.04 image or one with trove-guest-agent and percona installed.

Verification covers password and key injection, dynamic disk resizing, and dynamic password changes. Comparing the two approaches, the manual process and DIB each have advantages, but OpenStack images are best built with DIB.
