NFS backend for OpenStack Glance/Cinder/instance store

In this post, let’s go through how to configure NFS as a unified storage backend for OpenStack Glance, Cinder, and a shared instance store, and then look at how it works under the hood.

Setup: one controller and two compute nodes. The controller also acts as the NFS server.
OS + OpenStack: RHEL 7 + Juno

Controller: 192.168.255.1 HPDL36
Compute:  192.168.255.2 HPDL37
Compute:  192.168.255.3 HPDL38

Set up the NFS server on the controller node

Create three directories as shared sources for the instance store, Glance, and Cinder, and grant sufficient access rights:
mkdir /nfsshare; chmod 777 /nfsshare  
mkdir /nfsshare_glance; chmod 777 /nfsshare_glance  
mkdir /nfsshare_cinder; chmod 777 /nfsshare_cinder  
Create /etc/exports
/nfsshare   *(rw,no_root_squash)  
/nfsshare_cinder *(rw,no_root_squash)  
/nfsshare_glance *(rw,no_root_squash)
Start the NFS server
systemctl start rpcbind  
systemctl start nfs  
systemctl start nfslock  
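
To make this survive a reboot and to verify the exports, something like the following should work (standard RHEL 7 commands; nfs-server is the real unit name behind the nfs alias):

# enable the services at boot
systemctl enable rpcbind nfs-server
# (re)export everything in /etc/exports and check what is exported
exportfs -ra
showmount -e localhost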

Set up the NFS clients

Glance

Mount the NFS share for Glance on the controller node:

mount HPDL36:/nfsshare_glance /var/lib/glance/images  

Nova instance-store

Mount the NFS share for the shared instance store on both compute nodes

mount HPDL36:/nfsshare /var/lib/nova/instances  
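
These manual mounts will not survive a reboot; one way to make them persistent is to add /etc/fstab entries on the respective nodes (a sketch; adjust the mount options to your environment):

# on the controller, for Glance
HPDL36:/nfsshare_glance  /var/lib/glance/images   nfs  defaults,_netdev  0 0
# on each compute node, for the shared instance store
HPDL36:/nfsshare         /var/lib/nova/instances  nfs  defaults,_netdev  0 0
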
Cinder

The cinder-volume service handles the mounting itself, so no manual mount is needed here.

Set up OpenStack

Since Glance and Nova use the NFS-mounted directories just like a local filesystem, the default OpenStack configuration works. Only Cinder needs special configuration for the NFS backend:


Create the NFS share entries in a file, /etc/cinder/nfsshare
HPDL36:/nfsshare_cinder  
Change the ownership and permissions of the file:
chown root:cinder /etc/cinder/nfsshare  
chmod 0640 /etc/cinder/nfsshare  
Configure /etc/cinder/cinder.conf
nfs_shares_config=/etc/cinder/nfsshare  
volume_driver=cinder.volume.drivers.nfs.NfsDriver  
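
Both settings go in the [DEFAULT] section on Juno. The NFS driver has a few more knobs worth knowing about; the snippet below shows the standard option names with illustrative values:

[DEFAULT]
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfsshare
# where Cinder mounts the shares (this is the default)
nfs_mount_point_base=/var/lib/cinder/mnt
# extra mount options, e.g. to pin the NFS version
nfs_mount_options=vers=4.1
# create volumes as sparse files (the default) instead of preallocating
nfs_sparsed_volumes=true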
Restart the Cinder services
systemctl restart openstack-cinder-api  
systemctl restart openstack-cinder-scheduler  
systemctl restart openstack-cinder-volume  
Check the mounted Cinder NFS share
[root@HPDL36 ~(keystone_admin)]# mount | grep cinder  
HPDL36:/nfsshare_cinder on /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82 type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.255.1,local_lock=none,addr=192.168.255.1)

Testing

Create a Glance image
[root@HPDL36 ~(keystone_admin)]# glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public true --file cirros-0.3.1-x86_64-disk.qcow2

We can see the image is created and stored under /var/lib/glance/images

[root@HPDL36 ~(keystone_admin)]# glance image-list  
+--------------------------------------+--------+-------------+------------------+----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------+-------------+------------------+----------+--------+
| d3fd5cb6-1a88-4da8-a0af-d83f7728e76b | cirros | qcow2       | bare             | 13147648 | active |
+--------------------------------------+--------+-------------+------------------+----------+--------+

[root@HPDL36 ~(keystone_admin)]# ls -lah /var/lib/glance/images/  
total 13M  
drwxrwxrwx 2 root   root    49 Feb 11 23:29 .  
drwxr-xr-x 3 glance nobody  19 Feb 11 13:38 ..  
-rw-r----- 1 glance glance 13M Feb 11 23:29 d3fd5cb6-1a88-4da8-a0af-d83f7728e76b
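
Since the filesystem store keeps images verbatim, the file on disk is just the uploaded qcow2. A quick sanity check with qemu-img (output shown is illustrative):

[root@HPDL36 ~(keystone_admin)]# qemu-img info /var/lib/glance/images/d3fd5cb6-1a88-4da8-a0af-d83f7728e76b
image: /var/lib/glance/images/d3fd5cb6-1a88-4da8-a0af-d83f7728e76b
file format: qcow2
virtual size: 39M (41126400 bytes)
disk size: 13M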
Launch a VM:
[root@HPDL36 ~(keystone_admin)]# nova boot --flavor m1.tiny --image cirros --nic net-id=8a7032de-e041-4e5b-a282-51534b38b15f testvm

[root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
+--------------------------------------+--------+--------+-------------+--------+---------------------+
| ID                                   | Name   | Status | Power State | Host   | Networks            |
+--------------------------------------+--------+--------+-------------+--------+---------------------+
| f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm | ACTIVE | Running     | HPDL37 | network=192.168.0.7 |
+--------------------------------------+--------+--------+-------------+--------+---------------------+

On the compute node HPDL37, we can see the VM-related files created under /var/lib/nova/instances

[root@HPDL37 ~]# virsh list  
setlocale: No such file or directory  
 Id    Name                           State
----------------------------------------------------
 7     instance-0000005d              running

[root@HPDL37 ~]# ls -lah /var/lib/nova/instances/  
total 4.0K  
drwxrwxrwx 5 root      root      129 Feb 11 23:47 .  
drwxr-xr-x 9 nova      nova       93 Feb 11 15:35 ..  
drwxr-xr-x 2 nova      nova      100 Feb 11 23:47 _base  
-rw-r--r-- 1 nova      nova       57 Feb 11 23:43 compute_nodes  
drwxr-xr-x 2 nova      nova       69 Feb 11 23:47 f17ecb86-04de-44c9-9466-47ff6577b7d8  
-rw-r--r-- 1 nfsnobody nfsnobody   0 Feb 11 13:22 glance-touch  
drwxr-xr-x 2 nova      nova      143 Feb 11 23:47 locks  
-rw-r--r-- 1 nfsnobody nfsnobody   0 Feb 11 13:42 nova-touch
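
The instance disk itself is a qcow2 overlay whose backing file lives in the shared _base directory (Nova names cached base images by the SHA-1 hash of the image ID), which is why any compute node that mounts the share can run the VM. A qemu-img check along these lines should confirm it (output illustrative; hash placeholder not filled in):

[root@HPDL37 ~]# qemu-img info /var/lib/nova/instances/f17ecb86-04de-44c9-9466-47ff6577b7d8/disk
image: /var/lib/nova/instances/f17ecb86-04de-44c9-9466-47ff6577b7d8/disk
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
backing file: /var/lib/nova/instances/_base/<sha1-of-image-id>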
Live-Migration

Since we have a shared instance store, let’s try a live migration:

[root@HPDL36 ~(keystone_admin)]# nova live-migration testvm  
[root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
+--------------------------------------+--------+--------+-------------+--------+---------------------+
| ID                                   | Name   | Status | Power State | Host   | Networks            |
+--------------------------------------+--------+--------+-------------+--------+---------------------+
| f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm | ACTIVE | Running     | HPDL38 | network=192.168.0.7 |
+--------------------------------------+--------+--------+-------------+--------+---------------------+

It works; the VM has been live-migrated to the HPDL38 compute node.

We can also measure how fast the migration is when the VM has no load. From the controller, I ping the VM every 1 ms, 10000 times in total (about 10 s); during the ping I run the live migration, and afterwards we check how many packets were lost:

[root@HPDL36 ~(keystone_admin)]# ip netns exec qrouter-02ca3bdc-999a-4d3a-8485-c7ffd4600ebc ping 192.168.0.7 -i 0.001 -c 10000 -W 0.001  
...
--- 192.168.0.7 ping statistics ---
10000 packets transmitted, 9942 received, 0% packet loss, time 10526ms
rtt min/avg/max/mdev = 0.113/0.167/1.649/0.040 ms

We actually lost 58 packets (ping rounds the 0.58% loss down to 0% in its summary). At one packet per millisecond, that corresponds to only about 58 ms of downtime during the live migration!

Create a Cinder volume
[root@HPDL36 ~(keystone_admin)]# cinder create --display-name 5gb 5  
[root@HPDL36 ~(keystone_admin)]# cinder list  
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 6e408336-43a8-453a-a5e5-928c12cdd3a1 | available | 5gb          | 5    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@HPDL36 ~(keystone_admin)]# ls -lah /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/  
total 0  
drwxrwxrwx 2 root   root     56 Feb 12 00:16 .  
drwxr-xr-x 4 cinder cinder   84 Feb 11 16:16 ..  
-rw-rw-rw- 1 root   root   5.0G Feb 12 00:16 volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

We can see the 5 GB volume file stored on the mounted Cinder NFS share.
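
Note the "total 0" in the listing: by default the NFS driver (nfs_sparsed_volumes=true) creates volumes as sparse files, so the 5 GB file consumes almost no space on the share until data is written. One way to check is to compare the apparent size with the blocks actually allocated:

du -h --apparent-size /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1
du -h /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1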

Attach volume to an instance
[root@HPDL36 ~(keystone_admin)]# nova volume-attach testvm 6e408336-43a8-453a-a5e5-928c12cdd3a1  
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 6e408336-43a8-453a-a5e5-928c12cdd3a1 |
| serverId | f17ecb86-04de-44c9-9466-47ff6577b7d8 |
| volumeId | 6e408336-43a8-453a-a5e5-928c12cdd3a1 |
+----------+--------------------------------------+

Let’s check on the compute node:

[root@HPDL38 ~]# virsh list  
setlocale: No such file or directory  
 Id    Name                           State
----------------------------------------------------
 7     instance-0000005d              running

[root@HPDL38 ~]# virsh domblklist 7  
setlocale: No such file or directory  
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/f17ecb86-04de-44c9-9466-47ff6577b7d8/disk
vdb        /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

We see the VM has the volume attached as vdb, and the source is /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1, which is the volume file on the Cinder NFS share.

[root@HPDL38 ~]# mount |grep cinder  
HPDL36:/nfsshare_cinder on /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82 type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.255.3,local_lock=none,addr=192.168.255.1)

[root@HPDL38 ~]# ls -lah /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82  
total 0  
-rw-rw-rw- 1 qemu qemu 5.0G Feb 12 00:16 volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

So the compute node also mounts the Cinder NFS share, at /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82, and then exposes the volume file directly to KVM.

Now we know how it works: the compute node mounts the Cinder NFS share and accesses the volume file directly, unlike with the LVM Cinder backend, where the cinder-volume service exposes volumes via iSCSI targets.
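
We can confirm this from the libvirt domain XML, where the Cinder volume appears as a plain file-backed disk rather than an iSCSI block device (output trimmed and illustrative):

[root@HPDL38 ~]# virsh dumpxml instance-0000005d | grep -A3 "disk type='file'"
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1'/>
      <target dev='vdb' bus='virtio'/>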

Live-migration with a volume attached

Does live migration work for a VM with a volume attached?

[root@HPDL36 ~(keystone_admin)]# nova live-migration testvm  
[root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
+--------------------------------------+--------+--------+-------------+--------+---------------------+
| ID                                   | Name   | Status | Power State | Host   | Networks            |
+--------------------------------------+--------+--------+-------------+--------+---------------------+
| f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm | ACTIVE | Running     | HPDL37 | network=192.168.0.7 |
+--------------------------------------+--------+--------+-------------+--------+---------------------+

The answer is YES!

Let’s check the compute node HPDL37.

[root@HPDL37 ~]# virsh list  
setlocale: No such file or directory  
 Id    Name                           State
----------------------------------------------------
 11    instance-0000005d              running

[root@HPDL37 ~]# virsh domblklist 11  
setlocale: No such file or directory  
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/f17ecb86-04de-44c9-9466-47ff6577b7d8/disk
vdb        /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

[root@HPDL37 ~]# mount | grep cinder  
HPDL36:/nfsshare_cinder on /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82 type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.255.2,local_lock=none,addr=192.168.255.1)

[root@HPDL37 ~]# ls -lah /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82  
total 0  
-rw-rw-rw- 1 qemu qemu 5.0G Feb 12 00:16 volume-6e408336-43a8-453a-a5e5-928c12cdd3a1

Advanced Cinder feature testing

Create a volume from a Glance image
[root@HPDL36 ~(keystone_admin)]# cinder create --image-id d3fd5cb6-1a88-4da8-a0af-d83f7728e76b --display-name vol-from-image 1  
[root@HPDL36 ~(keystone_admin)]# cinder list  
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Display Name   | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
| 209840f0-0559-4de5-ab64-bd4a8249ffd4 | available | 1gb            | 1    | None        | false    |                                      |
| 6e408336-43a8-453a-a5e5-928c12cdd3a1 | in-use    | 5gb            | 5    | None        | false    | f17ecb86-04de-44c9-9466-47ff6577b7d8 |
| 6fda4ea7-8f97-4c62-8df0-f04a36860d30 | available | vol-from-image | 1    | None        | true     |                                      |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
[root@HPDL36 ~(keystone_admin)]# ls -lh /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/  
total 18M  
-rw-rw-rw- 1 root root 1.0G Feb 12 00:43 volume-209840f0-0559-4de5-ab64-bd4a8249ffd4  
-rw-rw-rw- 1 qemu qemu 5.0G Feb 12 00:16 volume-6e408336-43a8-453a-a5e5-928c12cdd3a1  
-rw-rw-rw- 1 root root 1.0G Feb 12 00:45 volume-6fda4ea7-8f97-4c62-8df0-f04a36860d30
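
When a volume is created from a qcow2 image, Cinder converts it to raw on the share (the NFS driver's volume files are always raw). A qemu-img check along these lines should confirm it (output illustrative):

[root@HPDL36 ~(keystone_admin)]# qemu-img info /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6fda4ea7-8f97-4c62-8df0-f04a36860d30
image: /var/lib/cinder/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6fda4ea7-8f97-4c62-8df0-f04a36860d30
file format: raw
virtual size: 1.0G (1073741824 bytes)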
Create a Glance image from a volume
[root@HPDL36 ~(keystone_admin)]# cinder upload-to-image 209840f0-0559-4de5-ab64-bd4a8249ffd4 image-from-vol  
[root@HPDL36 ~(keystone_admin)]# glance image-list  
+--------------------------------------+----------------+-------------+------------------+------------+--------+
| ID                                   | Name           | Disk Format | Container Format | Size       | Status |
+--------------------------------------+----------------+-------------+------------------+------------+--------+
| d3fd5cb6-1a88-4da8-a0af-d83f7728e76b | cirros         | qcow2       | bare             | 13147648   | active |
| 85d4ddb0-6159-40f6-b7ce-653eabea7142 | image-from-vol | raw         | bare             | 1073741824 | active |
+--------------------------------------+----------------+-------------+------------------+------------+--------+
[root@HPDL36 ~(keystone_admin)]# ls -lh /var/lib/glance/images/  
total 1.1G  
-rw-r----- 1 glance glance 1.0G Feb 12 01:05 85d4ddb0-6159-40f6-b7ce-653eabea7142  
-rw-r----- 1 glance glance  13M Feb 11 23:29 d3fd5cb6-1a88-4da8-a0af-d83f7728e76b
Boot an instance from a bootable volume
[root@HPDL36 ~(keystone_admin)]# nova boot --flavor m1.small --block-device-mapping vda=6fda4ea7-8f97-4c62-8df0-f04a36860d30:::0 vm-boot-from-vol

(The mapping syntax is <device>=<volume-id>:<type>:<size>:<delete-on-terminate>; the empty fields take their defaults, and the trailing 0 means the volume is preserved when the instance is deleted.)

[root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
+--------------------------------------+------------------+--------+-------------+--------+---------------------+
| ID                                   | Name             | Status | Power State | Host   | Networks            |
+--------------------------------------+------------------+--------+-------------+--------+---------------------+
| f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm           | ACTIVE | Running     | HPDL37 | network=192.168.0.7 |
| 01c20971-7876-474c-8a38-93d39b78cc98 | vm-boot-from-vol | ACTIVE | Running     | HPDL38 | network=192.168.0.8 |
+--------------------------------------+------------------+--------+-------------+--------+---------------------+

[root@HPDL38 ~]# virsh list  
setlocale: No such file or directory  
 Id    Name                           State
----------------------------------------------------
 8     instance-0000005e              running

[root@HPDL38 ~]# virsh domblklist 8  
setlocale: No such file or directory  
Target     Source
------------------------------------------------
vda        /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-6fda4ea7-8f97-4c62-8df0-f04a36860d30
Boot an instance from an image (creating a new volume)
[root@HPDL36 ~(keystone_admin)]# nova boot --flavor m1.tiny --block-device source=image,id=d3fd5cb6-1a88-4da8-a0af-d83f7728e76b,dest=volume,size=6,shutdown=preserve,bootindex=0 vm-boot-from-image-create-new-vol

(Here source=image,dest=volume tells Nova to create a new 6 GB volume from the image, and shutdown=preserve keeps the volume when the instance is deleted.)

[root@HPDL36 ~(keystone_admin)]# nova list --fields name,status,power_state,host,networks  
+--------------------------------------+-----------------------------------+--------+-------------+--------+---------------------+
| ID                                   | Name                              | Status | Power State | Host   | Networks            |
+--------------------------------------+-----------------------------------+--------+-------------+--------+---------------------+
| f17ecb86-04de-44c9-9466-47ff6577b7d8 | testvm                            | ACTIVE | Running     | HPDL37 | network=192.168.0.7 |
| 6cacf835-2adb-4730-ac11-cceacf1d0915 | vm-boot-from-image-create-new-vol | ACTIVE | Running     | HPDL38 | network=192.168.0.9 |
| 01c20971-7876-474c-8a38-93d39b78cc98 | vm-boot-from-vol                  | ACTIVE | Running     | HPDL37 | network=192.168.0.8 |
+--------------------------------------+-----------------------------------+--------+-------------+--------+---------------------+

[root@HPDL36 ~(keystone_admin)]# cinder list  
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Display Name   | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
| 209840f0-0559-4de5-ab64-bd4a8249ffd4 | available | 1gb            | 1    | None        | false    |                                      |
| 6e408336-43a8-453a-a5e5-928c12cdd3a1 | in-use    | 5gb            | 5    | None        | false    | f17ecb86-04de-44c9-9466-47ff6577b7d8 |
| 6fda4ea7-8f97-4c62-8df0-f04a36860d30 | in-use    | vol-from-image | 1    | None        | true     | 01c20971-7876-474c-8a38-93d39b78cc98 |
| 7d8a4fc5-5f75-4273-a2ac-3e36521be37c | in-use    |                | 6    | None        | true     | 6cacf835-2adb-4730-ac11-cceacf1d0915 |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+

[root@HPDL38 ~]# virsh list  
setlocale: No such file or directory  
 Id    Name                           State
----------------------------------------------------
 9     instance-0000005f              running

[root@HPDL38 ~]# virsh domblklist 9  
setlocale: No such file or directory  
Target     Source
------------------------------------------------
vda        /var/lib/nova/mnt/2bc8688d1bab3cab3b9a974b3f99cb82/volume-7d8a4fc5-5f75-4273-a2ac-3e36521be37c
Volume snapshot-related features

Not supported at the moment; support is planned for Kilo: https://blueprints.launchpad.net/cinder/+spec/nfs-snapshots

Volume clone

Not supported at the moment.