OpenStack Grizzly Quantum Advanced Features

Quantum advanced features

Environment: OpenStack Grizzly running on top of RHEL 6.4

1. Namespace

Here we enable network namespaces so that Quantum can support overlapping IP subnets across tenant networks, and so that a single L3 agent can host more than one router. Enabling namespaces requires updating the kernel and the iproute2 utility from the RDO repository.
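A quick way to check whether the running kernel and iproute2 already provide namespace support is sketched below; on a stock RHEL 6.4 installation the ip netns sub-command is expected to fail until the RDO update described next is applied.

#Sketch: verify namespace support of the current kernel and iproute2
ip netns add test-ns                  #create a scratch namespace
ip netns list                         #test-ns should be listed
ip netns exec test-ns ip link show    #run a command inside the namespace
ip netns delete test-ns               #clean up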

  • Add RDO repository in /etc/yum.repos.d/rhel-source.repo

[grizzly]
name=grizzly
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/
gpgcheck=0
- Update the packages, and reboot the server so that the new kernel takes effect

yum clean all
yum update
reboot
- Enable overlapping IPs in /etc/quantum/quantum.conf on the hosts running quantum-server, quantum-l3-agent and quantum-dhcp-agent

allow_overlapping_ips = True
- Enable namespace and veth usage in /etc/quantum/dhcp_agent.ini

use_namespaces = True
ovs_use_veth = True
- Enable namespace and veth usage in /etc/quantum/l3_agent.ini

use_namespaces = True
ovs_use_veth = True
- Restart quantum-server, quantum-l3-agent and quantum-dhcp-agent services
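A minimal sketch of the restarts, assuming the init scripts installed by the RDO packages follow the service names used above (the /etc/init.d style is also used later in this document):

/etc/init.d/quantum-server restart      #on the quantum-server host
/etc/init.d/quantum-l3-agent restart    #on the L3 agent host
/etc/init.d/quantum-dhcp-agent restart  #on the DHCP agent host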

Now let’s verify this by creating 2 tenant networks with the same IP subnet and 2 tenant routers that share one external uplink.
- Assume we have tenants “admin” and “ncep”. Create a net and a subnet for each tenant, with both subnets using the same overlapping IP range

#Get tenant list
[root@gateway-1 ~]# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 8ad295a5fac84759b5770fef059861a6 |  admin  |   True  |
| 45f69c2ba1e34f61ab7e86a81605589d |   ncep  |   True  |
| ef1716b3b580460ba0402da01bac8243 | service |   True  |
+----------------------------------+---------+---------+
 
#Create admin-net and ncep-net for admin and ncep tenant
[root@gateway-1 ~]# quantum net-create --tenant-id  8ad295a5fac84759b5770fef059861a6  admin-net
[root@gateway-1 ~]# quantum net-create --tenant-id  45f69c2ba1e34f61ab7e86a81605589d  ncep-net

#Create admin-subnet and ncep-subnet, both use overlapping ip subnet 192.168.0.0/24
[root@gateway-1 ~]# quantum subnet-create --tenant-id  8ad295a5fac84759b5770fef059861a6 --name admin-subnet admin-net 192.168.0.0/24
[root@gateway-1 ~]# quantum subnet-create --tenant-id 45f69c2ba1e34f61ab7e86a81605589d  --name ncep-subnet ncep-net 192.168.0.0/24
- Create a router for each tenant, link subnets to routers accordingly

#Create admin-router and ncep-router for each tenant
[root@gateway-1 ~]# quantum router-create --tenant-id  8ad295a5fac84759b5770fef059861a6 admin-router
[root@gateway-1 ~]# quantum router-create --tenant-id 45f69c2ba1e34f61ab7e86a81605589d   ncep-router
#Link admin-subnet to admin-router, ncep-subnet to ncep-router
[root@gateway-1 ~]# quantum  router-interface-add  admin-router admin-subnet
Added interface to router admin-router
[root@gateway-1 ~]# quantum  router-interface-add  ncep-router ncep-subnet
Added interface to router ncep-router
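Both routers also need an uplink to the shared external network mentioned above. A sketch of attaching them, assuming an existing external network named ext_net (the same name used later when creating floating IPs):

quantum router-gateway-set admin-router ext_net
quantum router-gateway-set ncep-router ext_net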
- Launch one VM from admin tenant, another VM from ncep tenant

#Launch one VM with admin-net network in admin tenant, one VM with ncep-net network in ncep tenant
[root@controller-1 ~(keystone_admin)]# nova image-list
+--------------------------------------+---------------------------+--------+--------+
| ID                                   | Name                      | Status | Server |
+--------------------------------------+---------------------------+--------+--------+
| 91739ebd-2c41-47d7-aceb-6c66a2e999b4 | F18-x86_64-cfntools.qcow2 | ACTIVE |        |
| f1e5e50a-2668-4627-bcd4-769a0dbe28d3 | rhel6.3-raw               | ACTIVE |        |
+--------------------------------------+---------------------------+--------+--------+
[root@controller-1 ~(keystone_admin)]# nova net-list
+--------------------------------------+-----------+------+
| ID                                   | Label     | CIDR |
+--------------------------------------+-----------+------+
| 05d18b38-5ed6-4249-bc9f-53d5c64e5783 | ncep-net  | None |
| d19079f2-bdc5-414e-b703-6e8cd3685854 | admin-net | None |
+--------------------------------------+-----------+------+
[root@controller-1 ~(keystone_admin)]# . adminrc
[root@controller-1 ~(keystone_admin)]# nova boot --flavor m1.small --image  f1e5e50a-2668-4627-bcd4-769a0dbe28d3 --nic net-id=d19079f2-bdc5-414e-b703-6e8cd3685854  vm-from-admin-tenant
[root@controller-1 ~(keystone_admin)]# nova list
+--------------------------------------+----------------------+--------+-----------------------+
| ID                                   | Name                 | Status | Networks              |
+--------------------------------------+----------------------+--------+-----------------------+
| 371ca40f-1f24-477b-bd1d-04e5c79b66de | vm-from-admin-tenant | ACTIVE | admin-net=192.168.0.2 |
+--------------------------------------+----------------------+--------+-----------------------+
[root@controller-1 ~(keystone_admin)]# . nceprc
[root@controller-1 ~(keystone_ncep)]# nova boot --flavor m1.small --image  f1e5e50a-2668-4627-bcd4-769a0dbe28d3 --nic net-id=05d18b38-5ed6-4249-bc9f-53d5c64e5783  vm-from-ncep-tenant
[root@controller-1 ~(keystone_ncep)]# nova list
+--------------------------------------+---------------------+--------+----------------------+
| ID                                   | Name                | Status | Networks             |
+--------------------------------------+---------------------+--------+----------------------+
| 456c087d-621e-4e9d-b84d-52ef620d7405 | vm-from-ncep-tenant | ACTIVE | ncep-net=192.168.0.2 |
+--------------------------------------+---------------------+--------+----------------------+
We can see that both VMs get the same IP, 192.168.0.2, allocated. Let’s check how this works on the quantum l3-agent and dhcp-agent node.

[root@gateway-1 ~]# ip netns list
qdhcp-d19079f2-bdc5-414e-b703-6e8cd3685854
qrouter-665dd6d1-8b5b-4430-9fec-d99797ec12cc
qrouter-5ec0f82d-8597-4530-b35a-e02b746ec493
qdhcp-05d18b38-5ed6-4249-bc9f-53d5c64e5783
We can see that 4 namespaces have been created: one dhcp+router namespace pair serves the admin tenant network, and the other dhcp+router pair serves the ncep tenant network.
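The mapping is easy to confirm, since the namespace names embed the quantum resource IDs; a quick sketch:

quantum net-list       #network IDs match the qdhcp-<network-id> namespaces
quantum router-list    #router IDs match the qrouter-<router-id> namespaces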

Let’s see whether each VM is reachable from its own router namespace.

[root@gateway-1 ~]# ip netns exec qrouter-5ec0f82d-8597-4530-b35a-e02b746ec493 ip addr
78: qr-2e5c23a7-f4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.0.1/24 brd 192.168.0.255 scope global qr-2e5c23a7-f4
[root@gateway-1 ~]# ip netns exec qrouter-5ec0f82d-8597-4530-b35a-e02b746ec493 arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.0.2              ether   fa:16:3e:8e:dc:cc   C                     qr-2e5c23a7-f4
[root@gateway-1 ~]# ip netns exec qrouter-5ec0f82d-8597-4530-b35a-e02b746ec493 ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.983 ms
64 bytes from 192.168.0.2: icmp_seq=2 ttl=64 time=0.470 ms
[root@gateway-1 ~]# ip netns exec qrouter-665dd6d1-8b5b-4430-9fec-d99797ec12cc ip addr
75: qr-184bdc43-b8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.0.1/24 brd 192.168.0.255 scope global qr-184bdc43-b8
[root@gateway-1 ~]# ip netns exec qrouter-665dd6d1-8b5b-4430-9fec-d99797ec12cc arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.0.2              ether   fa:16:3e:d1:b7:3d   C                     qr-184bdc43-b8
[root@gateway-1 ~]# ip netns exec qrouter-665dd6d1-8b5b-4430-9fec-d99797ec12cc ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=1.08 ms
64 bytes from 192.168.0.2: icmp_seq=2 ttl=64 time=0.474 ms
In both router namespaces there is a gateway IP, 192.168.0.1; from each namespace the corresponding VM’s MAC appears in the ARP table, and ping to each VM works fine.

Now we have an idea of how namespaces handle the overlapping IP subnet scenario and multi-router management.

2. Customized VM MAC address

Quantum supports customizing the MAC address of a VM’s port.

#Create a port with customized MAC on desired tenant network
[root@controller-1 ~(keystone_admin)]# quantum port-create --mac-address 00:11:22:33:44:55 admin-net
Created a new port:
+----------------------+------------------------------------------------------------------------------------+
| Field                | Value                                                                              |
+----------------------+------------------------------------------------------------------------------------+
| admin_state_up       | True                                                                               |
| binding:capabilities | {"port_filter": false}                                                             |
| binding:vif_type     | ovs                                                                                |
| device_id            |                                                                                    |
| device_owner         |                                                                                    |
| fixed_ips            | {"subnet_id": "78a7f776-e465-41bd-86ef-937363bd09a1", "ip_address": "192.168.0.4"} |
| id                   | ce1a590b-02ab-475e-90f7-a0004281ce2c                                               |
| mac_address          | 00:11:22:33:44:55                                                                  |
| name                 |                                                                                    |
| network_id           | d19079f2-bdc5-414e-b703-6e8cd3685854                                               |
| status               | DOWN                                                                               |
| tenant_id            | 8ad295a5fac84759b5770fef059861a6                                                   |
+----------------------+------------------------------------------------------------------------------------+
 
#Launch a VM from CLI by explicitly linking vNIC to this port
[root@controller-1 ~(keystone_admin)]# nova boot --flavor m1.small --image f1e5e50a-2668-4627-bcd4-769a0dbe28d3 --nic port-id=ce1a590b-02ab-475e-90f7-a0004281ce2c vm-with-custormized-mac
 
#From gateway node, let's check if the VM got the customized MAC
[root@gateway-1 ~]# ip netns exec qrouter-665dd6d1-8b5b-4430-9fec-d99797ec12cc arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.0.2              ether   fa:16:3e:d1:b7:3d   C                     qr-184bdc43-b8
192.168.0.4              ether   00:11:22:33:44:55   C                     qr-184bdc43-b8
[root@gateway-1 ~]# ip netns exec qrouter-665dd6d1-8b5b-4430-9fec-d99797ec12cc ssh 192.168.0.4 ifconfig eth0
root@192.168.0.4's password:
eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.168.0.4  Bcast:192.168.0.255  Mask:255.255.255.0
We can see the customized MAC is successfully configured in the VM.
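The same information can be cross-checked from the quantum side; a sketch using the port ID created above (after the VM boots, the device_id and device_owner fields should be populated with the instance binding):

quantum port-show ce1a590b-02ab-475e-90f7-a0004281ce2c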

3. Multiple VM NICs support

OpenStack supports multiple NICs in a VM; each NIC can connect to one tenant network.

The VM guest OS image should be pre-configured to support multiple NICs; for RHEL, this means the ifcfg-ethX (0, 1, 2, …) files should be pre-configured and set to DHCP, as sketched below.
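For illustration, a sketch of such a pre-configured interface file inside the RHEL guest image (repeated for eth1, eth2, and so on; the exact contents are an assumption, not taken from the image used here):

#/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=yes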

#Let's create 2 more tenant networks with different IP subnets
[root@gateway-1 ~]# quantum net-create  admin-net-2
[root@gateway-1 ~]# quantum subnet-create --name admin-subnet-2 --no-gateway  admin-net-2 10.10.10.0/24
[root@gateway-1 ~]# quantum net-create  admin-net-3
[root@gateway-1 ~]# quantum subnet-create --name admin-subnet-3 --no-gateway  admin-net-3 172.28.0.0/24
#List all networks
[root@controller-1 ~(keystone_admin)]# quantum net-list
+--------------------------------------+-------------+-----------------------------------------------------+
| id                                   | name        | subnets                                             |
+--------------------------------------+-------------+-----------------------------------------------------+
| 05aef494-d1e9-41f6-b5cf-bfb65981831c | admin-net-2 | 240d04c7-8b1b-4ec7-b41e-ebc5db9385e8 10.10.10.0/24  |
| c2a56b46-29a7-458e-afab-bbcf0fbbb998 | admin-net-3 | 856d9459-b76d-4ba4-bcdf-061272e9d334 172.28.0.0/24  |
| d19079f2-bdc5-414e-b703-6e8cd3685854 | admin-net   | 78a7f776-e465-41bd-86ef-937363bd09a1 192.168.0.0/24 |
+--------------------------------------+-------------+-----------------------------------------------------+
 
#Launch a VM with 3 NICs, each NIC linking to one network above
[root@controller-1 ~(keystone_admin)]# nova boot --flavor m1.small --image  f1e5e50a-2668-4627-bcd4-769a0dbe28d3 --nic net-id=d19079f2-bdc5-414e-b703-6e8cd3685854 --nic net-id=05aef494-d1e9-41f6-b5cf-bfb65981831c --nic net-id=c2a56b46-29a7-458e-afab-bbcf0fbbb998  vm-with-3-nics
[root@controller-1 ~(keystone_admin)]# nova list
+--------------------------------------+-------------------------+--------+-----------------------------------------------------------------------+
| ID                                   | Name                    | Status | Networks                                                              |
+--------------------------------------+-------------------------+--------+-----------------------------------------------------------------------+
| 85ea89c8-2e52-4abe-a8ce-1d9fc236f04f | vm-with-3-nics          | ACTIVE | admin-net=192.168.0.5; admin-net-3=172.28.0.2; admin-net-2=10.10.10.2 |
+--------------------------------------+-------------------------+--------+-----------------------------------------------------------------------+

#From gateway node, login into VM to check NICs status
[root@gateway-1 ~]# ip netns exec qrouter-665dd6d1-8b5b-4430-9fec-d99797ec12cc ssh 192.168.0.5 ip addr
root@192.168.0.5's password:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:13:5e:f0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.5/24 brd 192.168.0.255 scope global eth0
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:5a:14:e0 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.2/24 brd 10.10.10.255 scope global eth1
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:58:57:47 brd ff:ff:ff:ff:ff:ff
    inet 172.28.0.2/24 brd 172.28.0.255 scope global eth2
In the VM, we can see that all 3 NICs are active, and each one gets an IP from its linked tenant network via DHCP.

4. Meta-data service

The meta-data service is provided by quantum-metadata-agent; it uses iptables to redirect requests to 169.254.169.254:80 towards the nova API server’s metadata service on port 8775. This service should be deployed on the same host as the l3-agent.
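The redirection can be inspected inside a router namespace; a sketch, reusing one of the qrouter namespaces from section 1 (the exact rule target depends on how the metadata proxy is configured):

ip netns exec qrouter-665dd6d1-8b5b-4430-9fec-d99797ec12cc iptables -t nat -L -n | grep 169.254.169.254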

  • SSH public key retrieval to enable password-free SSH login

#Add a keypair, save to a file, change mode to 400
[root@controller-1 ~(keystone_admin)]# nova keypair-add adminkey > adminkey.pem
[root@controller-1 ~(keystone_admin)]# chmod 400 adminkey.pem

#Launch a VM with the keypair just created
[root@controller-1 ~(keystone_admin)]#  nova boot --flavor m1.small --image  f1e5e50a-2668-4627-bcd4-769a0dbe28d3 --nic net-id=d19079f2-bdc5-414e-b703-6e8cd3685854 --key-name adminkey vm-with-adminkey

#Inside the VM, retrieve the ssh public key from the meta-data service and save it to /root/.ssh/authorized_keys
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /root/.ssh/authorized_keys

#Try password-free ssh login to the VM with the keypair file
[root@gateway-1 ~]# ip netns exec qrouter-665dd6d1-8b5b-4430-9fec-d99797ec12cc ssh 192.168.0.6 -i adminkey.pem
Last login: Tue Jul 23 13:46:01 2013 from 192.168.0.1
[root@192-168-0-6 ~]#
- User-data retrieval

#Create a user-data file with some content you want to inject to VM
echo "Hello, I am user-data" > user-data

#Launch a VM with user-data
[root@controller-1 ~(keystone_admin)]#  nova boot --flavor m1.small --image  f1e5e50a-2668-4627-bcd4-769a0dbe28d3 --nic net-id=d19079f2-bdc5-414e-b703-6e8cd3685854 --user-data ./user-data vm-with-userdata

#Retrieve the user-data from VM
[root@192-168-0-7 ~]# curl  http://169.254.169.254/latest/user-data
Hello, I am user-data
A cloud-init script can be pre-installed in the image to automate meta-data related tasks during VM boot-up, such as ssh key and user-data injection.
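For illustration only (the example above used plain-text user-data), a minimal cloud-config style user-data file that a cloud-init enabled image could consume at boot; the key and command below are placeholders:

#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... extra-key                              #placeholder public key
runcmd:
  - echo "configured by cloud-init" > /tmp/cloud-init-done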

5. Load Balancing as a Service (LBaaS)

In Grizzly, quantum introduces a new service called quantum-lbaas-agent, which provides load balancing for VMs at the cloud infrastructure level.

It supports:
- Load balancing for several protocols (TCP, HTTP)
- Session persistence
- Monitoring the health of the application services

It introduces new resources:
- vip: the primary load balancing configuration object, which specifies the virtual IP address and port on which client traffic is received.
- pool: a logical set of members, such as web servers, grouped together to receive and process traffic.
- member: a representation of an application running on a backend server.
- health_monitor: used to determine whether or not back-end members of the pool are usable for processing traffic.
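Each of these resources has corresponding quantum CLI commands; a sketch of the list commands (assuming the standard lb-* command set of the Grizzly quantum client), which are used below to verify what gets created:

quantum lb-pool-list
quantum lb-vip-list
quantum lb-member-list
quantum lb-healthmonitor-list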

  • Enable LBaaS agent service

#Add the following line in /etc/quantum/quantum.conf on the quantum-server host, then restart quantum-server
service_plugins = quantum.plugins.services.agent_loadbalancer.plugin.LoadBalancerPlugin

[root@controller-1 ~(keystone_admin)]# /etc/init.d/quantum-server restart

#Update the following line in /etc/openstack-dashboard/local_settings on the horizon host, and restart httpd; this enables LBaaS operations in the Dashboard WebUI
OPENSTACK_QUANTUM_NETWORK = {
    'enable_lb': True
}

[root@controller-1 ~(keystone_admin)]# /etc/init.d/httpd restart

#Update /etc/quantum/lbaas_agent.ini on the gateway node, then restart the quantum-lbaas-agent service
ovs_use_veth = True
use_namespaces = True
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
device_driver = quantum.plugins.services.agent_loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
user_group = haproxy
[root@gateway-1 ~]# /etc/init.d/quantum-lbaas-agent restart
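Before creating LBaaS resources, it is worth confirming that the agent has registered with quantum-server; a sketch, assuming the agent management extension is enabled:

quantum agent-list    #the LBaaS agent on the gateway node should be listed as alive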
- Create a pool for web servers with ROUND_ROBIN load balancing method.

#Get the subnet list and choose the one the LB pool will be created on; here we use admin-subnet
[root@controller-1 ~(keystone_admin)]# quantum subnet-list
+--------------------------------------+----------------+----------------+----------------------------------------------------+
| id                                   | name           | cidr           | allocation_pools                                   |
+--------------------------------------+----------------+----------------+----------------------------------------------------+
| 240d04c7-8b1b-4ec7-b41e-ebc5db9385e8 | admin-subnet-2 | 10.10.10.0/24  | {"start": "10.10.10.2", "end": "10.10.10.254"}     |
| 78a7f776-e465-41bd-86ef-937363bd09a1 | admin-subnet   | 192.168.0.0/24 | {"start": "192.168.0.2", "end": "192.168.0.254"}   |
| cfa7243a-1d02-429f-95fa-d384d6112c04 | subext_net     | 10.68.124.0/24 | {"start": "10.68.124.100", "end": "10.68.124.200"} |
+--------------------------------------+----------------+----------------+----------------------------------------------------+

#Create pool
[root@gateway-1 ~]# quantum  lb-pool-create  --lb-method ROUND_ROBIN --name webpool --protocol HTTP --subnet-id 78a7f776-e465-41bd-86ef-937363bd09a1
- Launch 2 VMs in the admin-subnet and set them up as web servers with different index.html contents

#Launch 2 VMs
nova boot --flavor m1.small --image  f1e5e50a-2668-4627-bcd4-769a0dbe28d3 --nic net-id=d19079f2-bdc5-414e-b703-6e8cd3685854 webserver-1
nova boot --flavor m1.small --image  f1e5e50a-2668-4627-bcd4-769a0dbe28d3 --nic net-id=d19079f2-bdc5-414e-b703-6e8cd3685854 webserver-2

#List VMs, record their IPs
[root@controller-1 ~(keystone_admin)]# nova list
+--------------------------------------+-------------------------+--------+-----------------------------------------------------------------------+
| ID                                   | Name                    | Status | Networks                                                              |
+--------------------------------------+-------------------------+--------+-----------------------------------------------------------------------+
| 88b66170-f216-4f75-8b85-21921fe1360d | webserver-1             | ACTIVE | admin-net=192.168.0.8                                                 |
| 52af3bfe-31a5-495f-87bd-2ebeb8102d54 | webserver-2             | ACTIVE | admin-net=192.168.0.9                                                 |
+--------------------------------------+-------------------------+--------+-----------------------------------------------------------------------+

#Log into webserver-1, enable httpd, set index.html to "I am webserver-1"
[root@192-168-0-8 ~]# service httpd start
[root@192-168-0-8 ~]# echo "I am webserver-1" > /var/www/html/index.html

#Log into webserver-2, enable httpd, set index.html to "I am webserver-2"
[root@192-168-0-9 ~]# service httpd start
[root@192-168-0-9 ~]# echo "I am webserver-2" > /var/www/html/index.html
- Add 2 members to the webpool (using the IPs of webserver-1 and webserver-2)

[root@gateway-1 ~]# quantum  lb-member-create --address 192.168.0.8 --protocol-port 80 webpool
[root@gateway-1 ~]# quantum  lb-member-create --address 192.168.0.9 --protocol-port 80 webpool
- Create a health monitor and associate it with the webpool

[root@gateway-1 ~]# quantum lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
[root@gateway-1 ~]# quantum lb-healthmonitor-list
+--------------------------------------+------+----------------+----------------+
| id                                   | type | admin_state_up | status         |
+--------------------------------------+------+----------------+----------------+
| 521618bf-8448-47ea-b02f-3aaa7940efa8 | HTTP | True           | PENDING_CREATE |
+--------------------------------------+------+----------------+----------------+
[root@gateway-1 ~]# quantum lb-healthmonitor-associate 521618bf-8448-47ea-b02f-3aaa7940efa8 webpool
Associated health monitor 521618bf-8448-47ea-b02f-3aaa7940efa8
- Create a VIP for the webpool

[root@gateway-1 ~]# quantum lb-vip-create --name webvip --protocol-port 80 --protocol HTTP --subnet-id 78a7f776-e465-41bd-86ef-937363bd09a1 webpool

#List the VIP created
[root@gateway-1 ~]# quantum lb-vip-list
+--------------------------------------+--------+--------------+----------+----------------+--------+
| id                                   | name   | address      | protocol | admin_state_up | status |
+--------------------------------------+--------+--------------+----------+----------------+--------+
| 161cba5c-784f-44ff-a6d2-991b43236452 | webvip | 192.168.0.10 | HTTP     | True           | ACTIVE |
+--------------------------------------+--------+--------------+----------+----------------+--------+

#Check HAproxy created
[root@gateway-1 ~]# ps -ef | grep haproxy
nobody   14730     1  0 15:13 ?        00:00:00 haproxy -f /var/lib/quantum/lbaas/2abc3d96-9183-47f2-9c93-7d08bdd0a675/conf -p /var/lib/quantum/lbaas/2abc3d96-9183-47f2-9c93-7d08bdd0a675/pid

#Check HAproxy configuration
[root@gateway-1 ~]# cat /var/lib/quantum/lbaas/2abc3d96-9183-47f2-9c93-7d08bdd0a675/conf
global
        daemon
        user nobody
        group haproxy
        log /dev/log local0
        log /dev/log local1 notice
        stats socket /var/lib/quantum/lbaas/2abc3d96-9183-47f2-9c93-7d08bdd0a675/sock mode 0666 level user
defaults
        log global
        retries 3
        option redispatch
        timeout connect 5000
        timeout client 50000
        timeout server 50000
frontend 161cba5c-784f-44ff-a6d2-991b43236452
        option tcplog
        bind 192.168.0.10:80
        mode http
        default_backend 2abc3d96-9183-47f2-9c93-7d08bdd0a675
        option forwardfor
backend 2abc3d96-9183-47f2-9c93-7d08bdd0a675
        mode http
        balance roundrobin
        option forwardfor
        timeout check 3s
        option httpchk GET /
        http-check expect rstatus 200
        server 8628f260-1ee2-4816-b2d9-8fc1068c2f56 192.168.0.9:80 weight 1 check inter 3s fall 3
        server c4a8c9ca-bd41-4dbb-a9ae-4a23caefc4d0 192.168.0.8:80 weight 1 check inter 3s fall 3
- Test load balancing

#Try to connect to the VIP of the web servers with curl 10 times
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-1
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-2
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-1
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-2
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-1
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-2
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-1
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-2
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-1
[root@gateway-1 ~]# ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
I am webserver-2
We can see that the traffic is evenly balanced between webserver-1 and webserver-2.
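The health monitor can also be exercised; a sketch, stopping httpd on one member and repeating the test (after the monitor's retries expire, every response is expected to come from the remaining member):

service httpd stop    #run inside webserver-1 to take it out of service
ip netns exec qlbaas-2abc3d96-9183-47f2-9c93-7d08bdd0a675 curl http://192.168.0.10
#expected: "I am webserver-2" for every request until httpd on webserver-1 is started again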

  • Associate a public IP with the LB VIP, and test web access again

#Create a floating IP from external network
[root@gateway-1 ~]# quantum floatingip-create ext_net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.68.124.102                        |
| floating_network_id | 72ee69d6-8014-4a64-af67-854fac687ca7 |
| id                  | 558ae586-2266-46f9-a648-f6c76f43554b |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 8ad295a5fac84759b5770fef059861a6     |
+---------------------+--------------------------------------+

#Check port id of LB VIP
[root@gateway-1 ~]# quantum port-list
+--------------------------------------+------------------------------------------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name                                     | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------------------------------------------+-------------------+--------------------------------------------------------------------------------------+
| bdb48677-9202-4c2f-98fb-54c00a043807 | vip-161cba5c-784f-44ff-a6d2-991b43236452 | fa:16:3e:58:91:35 | {"subnet_id": "78a7f776-e465-41bd-86ef-937363bd09a1", "ip_address": "192.168.0.10"} |
+--------------------------------------+------------------------------------------+-------------------+--------------------------------------------------------------------------------------+

#Associate floating IP with the LB VIP
[root@gateway-1 ~]# quantum floatingip-associate 558ae586-2266-46f9-a648-f6c76f43554b bdb48677-9202-4c2f-98fb-54c00a043807
Associated floatingip 558ae586-2266-46f9-a648-f6c76f43554b

#From external network, try to connect the floating IP by curl 10 times
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-1
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-2
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-1
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-2
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-1
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-2
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-1
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-2
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-1
[root@KS-Server ~]# curl http://10.68.124.102
I am webserver-2
The LB still works through the floating IP.