
SDN@EDGE: Installing a FROG 3.0 compute node

by Fabio Mignini last modified Jun 10, 2015 09:13 PM

Installation guide for a FROG 3.0 compute node

Objective of this document

This document provides the instructions to install a compute node connected to the OpenStack controller running at POLITO premises. The network-aware scheduler implemented in this custom version of OpenStack guarantees that, when a tenant asks for a service (e.g., the deployment of a set of VMs or an NFV service graph), all the machines are instantiated on the compute node installed at the tenant premises.
Please note that the compute node (i.e., the server you are installing at your premises) runs an unmodified version of OpenStack. The novel algorithm that executes optimized NFV functions resides on the controller node, running at POLITO.

 

 

Required hardware and software

  • Standard Intel server with at least two network interfaces. Since the node has to host many virtual machines, we suggest at least 8 GB of memory.
  • Ubuntu 14.04 LTS (64 bit).

 

OpenStack compute node installation

This section lists the steps required to install a standard OpenStack Icehouse compute node, which will be connected to the OpenStack controller running at POLITO premises. For your convenience, we report here all the required steps (taken from http://docs.openstack.org/icehouse/install-guide/install/apt/content/), keeping only the components that are needed in our use case (e.g., the Neutron OpenDaylight plugin instead of the Neutron Open vSwitch plugin).

  1. Configure name resolution:

    • Set the hostname for the node in /etc/hostname
    • Replace COMPUTE_HOSTNAME with the hostname provided by the FROG administrator at the POLITO domain.

      COMPUTE_HOSTNAME
    • Edit the /etc/hosts file to contain the following:
    • Replace COMPUTE_HOSTNAME with the hostname provided by the FROG administrator at the POLITO domain.

      127.0.0.1    localhost
      127.0.1.1    COMPUTE_HOSTNAME
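    • A quick sanity check (a sketch; COMPUTE_HOSTNAME stands for the hostname you configured above) is to verify that the name is set and resolves locally:

      # Both commands should report the hostname assigned by the FROG administrator
      hostname
      getent hosts COMPUTE_HOSTNAME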
  2. Install the Compute packages:

  3. sudo apt-get install nova-compute-kvm


  4. Edit the /etc/nova/nova.conf configuration file and add these lines to the appropriate sections:

  5. Replace NOVA_PASS and NOVA_DBPASS with the password provided by the FROG administrator at the POLITO domain.

    [DEFAULT]
    ...
    auth_strategy = keystone
    ...
    [database]
    # The SQLAlchemy connection string used to connect to the database
    connection = mysql://nova:NOVA_DBPASS@controller.ipv6.polito.it/nova
    [keystone_authtoken]
    auth_uri = http://controller.ipv6.polito.it:5000
    auth_host = controller.ipv6.polito.it
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = nova
    admin_password = NOVA_PASS


  6. Configure the Compute service to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] configuration group of the /etc/nova/nova.conf file:

  7. Replace RABBIT_PASS with the password provided by the FROG administrator at the POLITO domain.

    [DEFAULT]
    ...
    rpc_backend = rabbit
    rabbit_host = controller.ipv6.polito.it
    rabbit_password = RABBIT_PASS
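
    Before proceeding, you may want to verify that the controller services are reachable from your compute node. This is only a sketch and assumes netcat is installed (sudo apt-get install netcat):

    nc -zv controller.ipv6.polito.it 3306   # MySQL database
    nc -zv controller.ipv6.polito.it 5000   # Keystone public endpoint
    nc -zv controller.ipv6.polito.it 35357  # Keystone admin endpoint
    nc -zv controller.ipv6.polito.it 5672   # RabbitMQ message broker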


  8. Configure Compute to provide remote console access to instances.

    • Edit /etc/nova/nova.conf and add the following keys under the [DEFAULT] section:

    Replace YOUR_IP_ADDRESS with your public IP address.

    [DEFAULT]
    ...
    my_ip = YOUR_IP_ADDRESS
    vnc_enabled = True
    vncserver_listen = 0.0.0.0
    vncserver_proxyclient_address = YOUR_IP_ADDRESS
    novncproxy_base_url = http://controller.ipv6.polito.it:6080/vnc_auto.html


  9. Specify the host that runs the Image Service. Edit the /etc/nova/nova.conf file and add these lines to the [DEFAULT] section:

  10. [DEFAULT]
    ...
    glance_host = controller.ipv6.polito.it


  11. You must determine whether your system's processor and/or hypervisor support hardware acceleration for virtual machines.
    Run the following command:

  12. sudo egrep -c '(vmx|svm)' /proc/cpuinfo

    If this command returns a value of one or greater, your system supports hardware acceleration, which typically requires no additional configuration.

    If this command returns a value of zero, your system does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

    • Edit the [libvirt] section in the /etc/nova/nova-compute.conf file to modify this key:

    • [libvirt]
      ...
      virt_type = qemu
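
    • As a quick reminder of the decision above, the following one-liner (a sketch based on the same CPU-flag check) prints which virt_type you should use:

      # Prints the recommended setting: kvm if hardware acceleration is available, qemu otherwise
      [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -gt 0 ] && echo "virt_type = kvm" || echo "virt_type = qemu"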


  13. Remove the SQLite database created by the packages:

  14. sudo rm /var/lib/nova/nova.sqlite


  15. Restart the Compute service:

  16. sudo service nova-compute restart
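
    After the restart, you can check that the service came back up correctly; the log path below is the default used by the Ubuntu nova packages:

    sudo service nova-compute status
    sudo tail -n 20 /var/log/nova/nova-compute.log   # look for errors about the database, RabbitMQ or Keystone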


  17. Edit /etc/sysctl.conf to contain the following:

  18. net.ipv4.conf.all.rp_filter=0
    net.ipv4.conf.default.rp_filter=0
    net.bridge.bridge-nf-call-arptables=1 
    net.bridge.bridge-nf-call-iptables=1 
    net.bridge.bridge-nf-call-ip6tables=1


  19. Implement the changes:

  20. sudo sysctl -p
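
    You can verify that the new values are active (note that the net.bridge.* keys are only available once the bridge/openvswitch kernel modules are loaded):

    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables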


  21. To install the Networking components:

  22. sudo apt-get install neutron-common neutron-plugin-ml2  \
    openvswitch-datapath-dkms openvswitch-switch


  23. To configure the Networking common components:

  24. The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
    • Configure Networking to use the Identity service for authentication:
      • Edit the /etc/neutron/neutron.conf file and add the following key to the [DEFAULT] section:
      • [DEFAULT]
        ...
        auth_strategy = keystone
    • Add the following keys to the [keystone_authtoken] section:
    • Replace NEUTRON_PASS with the password provided by the FROG administrator at the POLITO domain.

      [keystone_authtoken]
      ...
      auth_uri = http://controller.ipv6.polito.it:5000
      auth_host = controller.ipv6.polito.it
      auth_protocol = http
      auth_port = 35357
      admin_tenant_name = service
      admin_user = neutron
      admin_password = NEUTRON_PASS
    • Configure Networking to use the message broker:
      • Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:
      • Replace RABBIT_PASS with the password provided by the FROG administrator at the POLITO domain.

        [DEFAULT]
        ...
        rpc_backend = neutron.openstack.common.rpc.impl_kombu
        rabbit_host = controller.ipv6.polito.it
        rabbit_password = RABBIT_PASS
    • Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
      • Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:
      • [DEFAULT]
        ...
        core_plugin = ml2
        service_plugins = router
        allow_overlapping_ips = True

     

  25. To configure the Modular Layer 2 (ML2) plug-in:

  26. The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
    • Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:
      • Add the following keys to the [ml2] section:
      • [ml2]
        ...
        type_drivers = gre
        tenant_network_types = gre
        mechanism_drivers = opendaylight

       

      • Add the following keys to the [ml2_type_gre] section:
      • [ml2_type_gre]
        ...
        tunnel_id_ranges = 1:1000

       

      • Add the [securitygroup] section and the following keys to it:
      • [securitygroup]
        ...
        firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
        enable_security_group = True

       

      • Add the [ml2_odl] and [odl] sections:
      • Replace ODL_PASS with the password provided by the FROG administrator at the POLITO domain.

        [ml2_odl]
        url = http://odl.ipv6.polito.it:8080/controller/nb/v2/neutron
        username = admin
        password = ODL_PASS
        timeout = 10
        session_timeout = 30
        [odl]
        network_vlan_ranges = 1:4095
        tunnel_id_ranges = 1:1000
        tun_peer_patch_port = patch-int
        int_peer_patch_port = patch-tun
        tunnel_bridge = br-tun
        integration_bridge = br-int
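
      • As a quick check that the compute node can reach the OpenDaylight northbound API with these credentials (a sketch, assuming curl is installed; ODL_PASS is the password above), the request below should return a JSON document rather than an authentication error:

        curl -u admin:ODL_PASS http://odl.ipv6.polito.it:8080/controller/nb/v2/neutron/networks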

     

    • Configure Compute to use Networking. By default, most distributions configure Compute to use legacy networking, so you must reconfigure Compute to manage networks through Networking.
      • Edit the /etc/nova/nova.conf and add the following keys to the [DEFAULT] section:
      • Replace NEUTRON_PASS with the password provided by the FROG administrator at the POLITO domain.

        [DEFAULT]
        ...
        network_api_class = nova.network.neutronv2.api.API
        neutron_url = http://controller.ipv6.polito.it:9696
        neutron_auth_strategy = keystone
        neutron_admin_tenant_name = service
        neutron_admin_username = neutron
        neutron_admin_password = NEUTRON_PASS
        neutron_admin_auth_url = http://controller.ipv6.polito.it:35357/v2.0
        linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
        firewall_driver = nova.virt.firewall.NoopFirewallDriver
        security_group_api = neutron

     

  27. Configure the availability zone

  28. The availability zone is a temporary mechanism used to place the virtual machines of a user on the compute node the user is connected to.

    • Set the availability zone in /etc/nova/nova.conf:

    Replace AVAILABILITY_ZONE_NAME with the availability zone provided by the FROG administrator at the POLITO domain.

    [DEFAULT]
    ...
    default_availability_zone = AVAILABILITY_ZONE_NAME

     

  29. Set the IP address of the instance tunnels network interface on this compute node.

    • Create a script 'odl_os_ovs.sh':
    • #!/usr/bin/env bash
      
      # odl_os_ovs.sh : stands for OpenDaylight_OpenStack_OpenvSwitch.sh (can't be more Open than this ;) )
      
      if [ `whoami` != "root" ]; then
          echo "Please execute this script as superuser or with sudo privileges."
          exit 1
      fi
      
      if [ "$#" -ne 1 ]; then
        echo "Usage: odl_ovs_os.sh " >&2
        echo "        is same as the local-ip configuration done for ovs-neutron-agent in ovs_quantum_plugin.ini"
        exit 1
      fi
      
      read ovstbl <<< $(ovs-vsctl get Open_vSwitch . _uuid)
      ovs-vsctl set Open_vSwitch $ovstbl other_config={"local_ip"="$1"}
      ovs-vsctl list Open_vSwitch .
    • Run the script:
    • Replace YOUR_IP_ADDRESS with your public IP address.

      sudo chmod +x odl_os_ovs.sh
      sudo ./odl_os_ovs.sh YOUR_IP_ADDRESS
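
    • You can verify that the tunnel endpoint address was stored in the Open vSwitch database:

      sudo ovs-vsctl get Open_vSwitch . other_config
      # Expected output similar to: {local_ip="YOUR_IP_ADDRESS"}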

     

  30. Set the manager for OVSDB:

  31. Note that in this case we need the IP address of the SDN controller, because the hostname is not resolved here. Replace OPENDAYLIGHT_ADDRESS with the address obtained from the command 'nslookup odl.ipv6.polito.it'

    sudo ovs-vsctl set-manager tcp:OPENDAYLIGHT_ADDRESS:6640
    • Check that, after this step, the output of 'ovs-vsctl show' contains the bridges br-int and br-tun. Otherwise, execute the following commands.
    • sudo ovs-vsctl del-manager
      sudo ovs-vsctl set-manager tcp:OPENDAYLIGHT_ADDRESS:6640
      sudo ovs-vsctl add-br br-int
      sudo ovs-vsctl add-br br-tun
      sudo ovs-vsctl set-fail-mode br-int secure
      sudo ovs-vsctl set-fail-mode br-tun secure
      sudo ovs-vsctl set-controller br-int tcp:OPENDAYLIGHT_ADDRESS:6633
      sudo ovs-vsctl set-controller br-tun tcp:OPENDAYLIGHT_ADDRESS:6633
      sudo ovs-vsctl add-port br-int patch-tun
      sudo ovs-vsctl add-port br-tun patch-int
      sudo ovs-vsctl set interface patch-tun type=patch
      sudo ovs-vsctl set interface patch-int type=patch
      sudo ovs-vsctl set interface patch-tun options:peer=patch-int
      sudo ovs-vsctl set interface patch-int options:peer=patch-tun
      • Now you should see the needed bridges when executing 'ovs-vsctl show'.
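      • Alternatively, this quick check (a sketch) reports whether both bridges exist:

        sudo ovs-vsctl br-exists br-int && echo "br-int present" || echo "br-int missing"
        sudo ovs-vsctl br-exists br-tun && echo "br-tun present" || echo "br-tun missing"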
  32. Set the controller of the integration bridge in "out of band" mode:

  33. sudo ovs-vsctl set controller br-int connection-mode=out-of-band
    sudo ovs-vsctl set bridge br-int other-config:disable-in-band=true
    • WARNING: if you delete the controller from br-int and then set it again, remember to configure it in out-of-band mode again.
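    • You can check the current setting at any time (a sketch; the first command should print "true", and the br-int controller record should show connection_mode=out-of-band):

      sudo ovs-vsctl get bridge br-int other-config:disable-in-band
      sudo ovs-vsctl --columns=target,connection_mode list controller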


  34. Set your preferred RAM allocation ratio (i.e., the oversubscription rate for the memory) in /etc/nova/nova.conf:

  35. [DEFAULT]
    ...
    ram_allocation_ratio = 5


  36. To finalize the installation, restart the Compute service:

  37. sudo service nova-compute restart

     

  38. Now, please contact the FROG administrator at the POLITO domain to finalize your availability zone setup.

  39.  

  40. Verify the OpenStack standard installation.

    • After you have contacted the FROG administrator, export the following global variables:
    • Replace USERNAME, TENANT and PASSWORD with those provided by the FROG administrator at the POLITO domain.
      export OS_USERNAME=USERNAME
      export OS_PASSWORD=PASSWORD
      export OS_TENANT_NAME=TENANT
      export OS_AUTH_URL=http://controller.ipv6.polito.it:35357/v2.0
    • Create a network and associate a Neutron subnet to it:
    • Replace TENANT_NETWORK_CIDR with the IP address and mask (e.g., 10.0.0.0/24) associated with that Neutron network.

      neutron net-create demo-net
      neutron subnet-create demo-net --name demo-subnet TENANT_NETWORK_CIDR
    • Take the network ID of the net you have created:
    • neutron net-list
    • Boot a virtual machine:
    • Replace DEMO_NET_ID with the network ID from the previous step and replace AVAILABILITY_ZONE_NAME with the availability zone provided by the FROG administrator at the POLITO domain.

      nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=DEMO_NET_ID --availability-zone AVAILABILITY_ZONE_NAME demo-instance1
    • Check the status of the newly created instance:
    • nova list

      If the status of the instance converges to 'ACTIVE', the installation of the OpenStack standard compute node is correct and you can continue to follow the guide.
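
      Optionally, once the check succeeds you can remove the test instance and network; the commands below are the standard Icehouse-era CLI calls and assume the same credentials are still exported:

      nova delete demo-instance1
      neutron subnet-delete demo-subnet
      neutron net-delete demo-net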

 

 

Prototype configuration

  1. Configure the external bridge

    • Add an L2 bridge that manages the outgoing traffic (it is needed to deliver the traffic coming from the Internet to the NF-FG of the correct user when multiple users are connected to your compute node):
    sudo ovs-vsctl add-br br-ex

    • Add a physical network interface, connected to the Internet, to the external bridge, so the traffic of all NF-FGs will exit through that interface:
    Replace INTERFACE_NAME with the actual interface name. For example, eth0 or em0.
    sudo ovs-vsctl add-port br-ex INTERFACE_NAME
    WARNING: at this point, if you were connected to the node through the interface you have just bridged to br-ex, you are no longer able to reach the node. If possible, use two different interfaces: one for management and the other for the outgoing traffic of NF-FGs.
    If no additional interfaces are available, restore the connection by performing the following steps to assign the IP address to the bridge:
    • Remove the IP address from the interface
    • sudo ifconfig INTERFACE_NAME 0 

    • Configure the interface and the bridge in /etc/network/interfaces
    • auto INTERFACE_NAME
      iface INTERFACE_NAME inet manual
      auto br-ex
      iface br-ex inet dhcp

       

    • Restart the configuration of br-ex:
    • sudo ifdown br-ex
      sudo ifup br-ex

       

    • Remove the controller from br-ex (this bridge does not need to be controlled by OpenDaylight):
    • sudo ovs-vsctl del-controller br-ex

       

      WARNING: if the interface that you connected to the virtual switch is a physical interface, the bridge takes the same MAC address as that interface (hence it obtains the same IP address).
      Otherwise, you should edit the /etc/nova/nova.conf file according to the new IP address and restart nova-compute.
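
      As a final check for this step (a sketch), verify that br-ex obtained an IP address and that your physical interface is attached to it:

      ip addr show br-ex
      sudo ovs-vsctl list-ports br-ex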

       

  2. Add the ingress bridge:

    • Add an L2 bridge that manages the user traffic:
    • sudo ovs-vsctl add-br br-usr
    • Add a port, to which you will connect the devices that use the prototype, to the ingress bridge (all the ports bridged to this bridge will be called "LAN" ports):
    • Replace INTERFACE_NAME with the actual interface name, for example eth0 or wlan0. More than one interface can be connected to this bridge. By connecting a device to those ports, you are able to reach your service.
      sudo ovs-vsctl add-port br-usr INTERFACE_NAME
    • Configure the ingress interfaces in /etc/network/interfaces 
    • auto INTERFACE_NAME
      iface INTERFACE_NAME inet manual
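
    • You can verify that the ingress bridge and its ports are in place (a sketch):

      sudo ovs-vsctl list-ports br-usr
      # Each "LAN" interface you added should appear in the output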

  

Verify the compute node installation

Right now your compute node has no running VNFs. In order to instantiate a service, you need to carry out the following operations (in sequence):

  • associate a different service graph to each one of your users, following the guide available here (currently, user USERNAME (provided by the FROG administrator at the POLITO domain) is associated with a graph that contains a firewall blocking www.polito.it, while user AUTH_USERNAME (provided by the FROG administrator at the POLITO domain) is associated with an authentication graph; you can skip this step if you are fine with those graphs)
  • contact the FROG service layer and specify which service graph (which translates into a specific user) has to be instantiated on your node
  • connect a laptop on the "LAN" interface of your compute node, and browse the Internet.

When you want to switch to another service, you have to tell the FROG service layer to instantiate the graph associated with another user (and, optionally, you may log in to the FROG dashboard to change the graph associated with that user).

  1. Instantiate your profile

    • Create a script 'service_graph.py' that contacts the service layer to instantiate a user's service graph.
    • import json, requests, sys, re
      
      keystone_authentication_url = "http://controller.ipv6.polito.it:35357/v2.0/tokens"
      
      if len(sys.argv) != 6 and len(sys.argv) != 5:
          print "Usage: service_graph.py    [] "
          sys.exit()
      elif len(sys.argv) == 6: 
          matchObj = re.match( r'^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$', sys.argv[4])
          if not matchObj:
              print "Not valid mac address"
              sys.exit()
          mac = sys.argv[4]
          method = sys.argv[5]
      else:
          mac = None
          method = sys.argv[4]
          
      username = sys.argv[1]
      tenant = sys.argv[2]
      password = sys.argv[3]
      
      
      if method != "PUT" and method != "DELETE":
          print "wrong param!"
          print "method: [PUT|DELETE]"
          sys.exit()
      if method == "PUT":
          print "Intantiating graph for device "+sys.argv[4]
      else:
          print "Deleting graph for device "+sys.argv[4]
      
      authenticationData = {"auth": {"tenantName": tenant, "passwordCredentials": {"username": username, "password": password}}}
      
      
      headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
      resp = requests.post(keystone_authentication_url, data=json.dumps(authenticationData), headers=headers)
      resp.raise_for_status()
      tokendata = json.loads(resp.text)
      
      if method == "PUT":
          orchestrator = "http://orchestrator.ipv6.polito.it:8000/orchestrator"
      else:
          if mac == None:
              orchestrator = "http://orchestrator.ipv6.polito.it:8000/orchestrator"
          else:
              orchestrator = "http://orchestrator.ipv6.polito.it:8000/orchestrator/"+mac
      
      if mac != None:
          request_body = {"session":{"session_param" : {"mac": mac}}} 
      else:
          request_body = {"session":{"session_param" : {}}} 
          
      headers_graph = {'X-Auth-Token': tokendata['access']['token']['id']}
      if method == "PUT":
          resp = requests.request(method, url=orchestrator, data=json.dumps(request_body), headers=headers_graph)
      else:
          resp = requests.request(method, url=orchestrator, headers=headers_graph)
      try:
          print "Response code by service layer: "+str(resp.status_code)
          resp.raise_for_status()
      except:
          if method == "PUT":
              print "ERROR - User "+username+" is NOT correctly instantiated"
          else:
              print "ERROR - User "+username+" is NOT correctly deleted"
          sys.exit()
      if method == "PUT":
          print "User "+username+" is now correctly instantiated"
      else:
          print "User "+username+" is now correctly deleted"
    • Instantiate the service graph associated to your user:
    • Replace USERNAME, TENANT and PASSWORD with those provided by the FROG administrator at the POLITO domain, and replace MAC_ADDRESS with the MAC address of the device you will use as a client.

      python service_graph.py USERNAME TENANT PASSWORD MAC_ADDRESS PUT
    • Check the progress of the NF-FG instantiation on the OpenStack dashboard:
    • Use the USERNAME and PASSWORD used to instantiate the graph to log in to the dashboard:

      http://controller.ipv6.polito.it:8888/horizon
      • Click on the tab Orchestration->Stack and check the status of the NF-FG. When the status is 'COMPLETE', the system is ready and your service graph has been instantiated.
      • Now you can connect a laptop to the "LAN" port of your compute node and start browsing the Internet.
  2. Delete your profile

    • Delete the service graph associated to your user:
    • Replace USERNAME, TENANT and PASSWORD with those provided by the FROG administrator at the POLITO domain, and replace MAC_ADDRESS with the MAC address of the device you will use as a client.

      python service_graph.py USERNAME TENANT PASSWORD MAC_ADDRESS DELETE
  3. Authentication graph

    • You can now instantiate the authentication graph associated with the user authentication_[partner]. The graph provided is capable of redirecting the HTTP traffic to a captive portal where you are asked to authenticate. A successful authentication triggers the instantiation of the service graph associated with the logged-in user.
    • Instantiate the authentication graph:

      Replace AUTH_USERNAME, AUTH_TENANT and AUTH_PASSWORD with those provided by the FROG administrator at the POLITO domain.

      python service_graph.py AUTH_USERNAME AUTH_TENANT AUTH_PASSWORD PUT
    • Once this graph is running, you can connect a laptop to the "LAN" port of your server and start browsing the Internet. The initial traffic will be redirected to a captive portal where you are asked to authenticate.

    • Delete the authentication service graph:

      Replace AUTH_USERNAME, AUTH_TENANT and AUTH_PASSWORD with those provided by the FROG administrator at the POLITO domain.

      python service_graph.py AUTH_USERNAME AUTH_TENANT AUTH_PASSWORD DELETE