
Ceph Keyring Locations on Proxmox

Reference CephFS documentation

https://docs.ceph.com/docs/master/cephfs/

Ceph FS architecture
CephFS Structure

Standard CephFS commands

ceph fs ls
ceph mds stat

ceph health
ceph df
ceph auth ls


Ceph Users (Red Hat / SUSE documentation)

To restrict clients to only mount and work within a certain directory, use path-based MDS authentication capabilities.

For example, to restrict the MDS daemon to write metadata only to a particular directory, specify that directory while creating the client capabilities.

The following example command restricts the CephFS client ‘user1’ to read and write only under the /mnt/pve/cephfs/ directory, and limits it to the cephfs pool on the OSDs:

ceph auth get client.admin

ceph auth get-or-create client.user1 mon 'allow r' mds 'allow r, allow rw path=/mnt/pve/cephfs' osd 'allow rw pool=cephfs'
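As a hedged follow-up sketch (the monitor name, mount point, and secretfile path below are assumptions, not taken from this cluster), the restricted client can then mount only the permitted subtree:

```shell
# Verify the caps on the new client (run on a cluster node):
ceph auth get client.user1

# Mount as the restricted client; it can only operate under the path
# granted in its MDS caps. mon1, /mnt/user1 and the secretfile are hypothetical.
sudo mount -t ceph mon1:6789:/mnt/pve/cephfs /mnt/user1 \
     -o name=user1,secretfile=/etc/ceph/user1.secret
```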

Location of cephfs folder

df -h
MDS1,MDS2,MDS3,MDS4:/  360G     0  360G   0% /mnt/pve/cephfs

root@hp0XXXX:~# mkdir /mnt/pve/cephfs/phy_backups

root@hp0XXXX:~# tree /mnt/pve/cephfs/
/mnt/pve/cephfs/
├── dump
├── phy_backups
└── template
    ├── cache
    └── iso

Set a file size limit on the shared folder


Note – If you want to set quotas on a directory, use ceph-fuse when mounting. So far it’s the only way I’ve been able to get quotas to work.
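A hedged ceph-fuse mount sketch (the monitor address, keyring path, and mount point are assumptions):

```shell
# ceph-fuse enforces CephFS quotas on the client side.
# All addresses and paths below are placeholders.
sudo mkdir -p /mnt/ceph_fuse
sudo ceph-fuse -n client.admin -k /etc/ceph/ceph.client.admin.keyring \
     -m 192.168.yy.xx1:6789 /mnt/ceph_fuse
```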

setfattr -n ceph.quota.max_bytes -v 107300000000 /mnt/pve/cephfs/phy_backups
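The byte value can be derived rather than hand-typed; a small sketch (the 100 GiB figure is an assumption for illustration, close to the ~107.3 GB used above):

```shell
# Compute a byte value for ceph.quota.max_bytes (illustrative only).
quota_gib=100
quota_bytes=$((quota_gib * 1024 * 1024 * 1024))
echo "$quota_bytes"   # 107374182400 (100 GiB)

# Then apply and read back the quota (requires a ceph-fuse mount):
# setfattr -n ceph.quota.max_bytes -v "$quota_bytes" /mnt/pve/cephfs/phy_backups
# getfattr -n ceph.quota.max_bytes /mnt/pve/cephfs/phy_backups
```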

Location of keyring Files in Proxmox


root@hp0XXXX:~# cat /etc/pve/ceph.conf 
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 192.168.yy.xx/24
         fsid = 09fc106c-xxxx-xxxx-xxx-xxxxxxxxxxxxxxxxx
         mon_allow_pool_delete = true
         mon_host = 192.168.yy.xx1 192.168.yy.xx2 192.168.yy.xx3 192.168.yy.xx4
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 192.168.yy.zz/24

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
         keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.dell04]
         host = dell04
         mds_standby_for_name = pve

[mds.hp0XXXX]
         host = hp0105blade07duplicate
         mds_standby_for_name = pve

[mds.dell07]
         host = dell07
         mds_standby_for_name = pve



root@hp0XXXX:~#  tree /etc/pve/priv/
/etc/pve/priv/
├── authkey.key
├── authorized_keys
├── ceph
│   ├── cephfs.secret
│   └── cephpool1.keyring
├── ceph.client.admin.keyring
├── ceph.mon.keyring
├── known_hosts
├── lock
│   ├── ha_agent_dellXXXX01_lock
│   ├── ha_agent_dellXXXX02_lock
│   ├── ha_agent_dellXXXX04_lock
│   ├── ha_agent_delXXXX10_lock
│   ├── ha_agent_hp0XXXX_lock
│   └── ha_manager_lock
├── pve-root-ca.key
├── pve-root-ca.srl
└── shadow.cfg


root@hp0XXXX:~# tree /var/lib/ceph/
/var/lib/ceph/
├── bootstrap-mds
├── bootstrap-mgr
├── bootstrap-osd
│   └── ceph.keyring
├── bootstrap-rbd
├── bootstrap-rbd-mirror
├── bootstrap-rgw
├── crash
│   └── posted
├── mds
│   └── ceph-hp0XXXX
│       └── keyring
├── mgr
│   └── ceph-hp0XXXX
│       └── keyring
├── mon
│   └── ceph-hp0XXXX
│       ├── keyring
│       ├── kv_backend
│       ├── min_mon_release
│       └── store.db
│           ├── 078850.log
│           ├── 078852.sst
│           ├── CURRENT
│           ├── IDENTITY
│           ├── LOCK
│           ├── MANIFEST-072471
│           ├── OPTIONS-039000
│           └── OPTIONS-072474
├── osd
│   └── ceph-3
│       ├── block -> /dev/ceph-6a2068a6-XXXX-4461-9bb2-XXXXXX/osd-block-XXXXXXXfd55-XXXX-XXXXXX
│       ├── ceph_fsid
│       ├── fsid
│       ├── keyring
│       ├── ready
│       ├── type
│       └── whoami
└── tmp

Copying the Keyring file for admin

cat /etc/pve/priv/ceph.client.admin.keyring 
[client.admin]
        key = AXXXXXXkoRdXXXXXEXX7XXXXXXGrX0XXXXXXvcNXXXXXXw==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
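A mount secretfile needs only the base64 key, not the whole keyring. A hedged sketch that extracts it with awk (the keyring below is a placeholder written to /tmp for demonstration; the key is fake):

```shell
# Write a placeholder keyring (format matches the real file; key is fake).
cat > /tmp/demo.keyring <<'EOF'
[client.admin]
        key = AQDplaceholderonly==
        caps mon = "allow *"
EOF

# Pull out just the key for use as a mount.ceph secretfile.
awk '$1 == "key" {print $3}' /tmp/demo.keyring > /tmp/admin.secret
cat /tmp/admin.secret   # AQDplaceholderonly==
```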

Mount CephFS on client [https://www.suse.com/media/report/Discover_CephFS_Technical_Report.pdf]

On client Computer



sudo apt install ceph-common ceph-fuse

scp root@192.168.yy.yyy:/etc/pve/priv/ceph.client.admin.keyring .
sudo mkdir /etc/ceph
sudo touch /etc/ceph/admin.secret
sudo nano /etc/ceph/admin.secret
[ADD THE KEY HERE]

sudo mkdir /mnt/ceph_fs1

sudo mount -t ceph ceph_monitor1:6789:/ /mnt/ceph_fs1 -o name=admin,secretfile=/etc/ceph/admin.secret
# where /mnt/ceph_fs1 is the mount point, ceph_monitor1 is a monitor host for the Ceph cluster, admin is the user, and /etc/ceph/admin.secret is the secret key file.

sudo df -hT

The monitor host is a system that holds a map of the underlying cluster. The CephFS client will obtain the CRUSH map from the monitor host and thus obtain the information necessary to interface with the cluster. The Ceph monitor host listens on port 6789 by default.

If your Ceph cluster has more than one monitor host, you can specify multiple monitors in the mount command. Use a comma-separated list of monitor addresses:

sudo mount -t ceph ceph_monitor1,ceph_monitor2,ceph_monitor3:6789:/ /mnt/ceph_fs1 -o name=admin,secretfile=/etc/ceph/admin.secret

Specifying multiple monitors provides failover in case one monitor system is down.

Linux views CephFS as a regular filesystem, so you can use all the standard mounting techniques used with other Linux filesystems. For instance, you can add your CephFS filesystem to the /etc/fstab file to mount the filesystem at system startup.
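For instance, a hedged /etc/fstab entry (the monitor addresses, mount point, and secretfile path are placeholders, matching the mount options used above):

```
192.168.yy.xx1:6789,192.168.yy.xx2:6789:/  /mnt/ceph_fs1  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0
```

The _netdev option makes the mount wait until networking is up at boot.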

CephFS comes with a collection of command-line tools for configuring and managing CephFS filesystems.

The ceph fs command is a general-purpose configuration tool with several options for managing file layout and location.
