Home Lab 2015 Edition: OpenStack Ansible Deployment

I’ve written a few blog posts on my home lab before, and since then it has evolved slightly to account for new and improved ways of deploying OpenStack – specifically how Rackspace deploy OpenStack using the Ansible OpenStack Deployment Playbooks. Today, my lab consists of the following:

  • 7 HP MicroServers (6GB–8GB RAM, with SSDs), each with 2 NICs in use.
  • 1 Server (Shuttle i5, 16GB RAM, 2TB disk) used to run virtual environments with Vagrant, and as the Ansible deployment host for my OpenStack environment. This also has 2 NICs.
  • 1 x 24-Port Managed Switch

The environment looks like this:

[Diagram: Home Lab Environment 2015]

In the lab environment I allocate the 7 servers to OpenStack as follows:

  • openstack1 – openstack3: Nova Computes (3 Hypervisors running KVM)
  • openstack4 – openstack6: Controllers
  • openstack7: Cinder + HAProxy

This environment was the test lab for many of the chapters of the OpenStack Cloud Computing Cookbook.

With this environment, to install OpenStack using the Ansible Playbooks, I essentially do the following steps:

  1. PXE Boot Ubuntu 14.04 across my 7 OpenStack servers
  2. Configure the networking to add all needed bridges, using Ansible
  3. Configure the OpenStack Ansible Deployment (OSAD) by grabbing the pre-built configuration files
  4. Run the OSAD Playbooks

From PXE boot to OpenStack, the lab gets deployed in about 2 hours.

Network Setup

In my lab I use the following subnets on my network:

  • Host network: 192.168.1.0/24
  • Container-to-container network: 172.29.236.0/24
  • VXLAN Neutron Tunnel Network: 172.29.240.0/24

The hosts on my network are on the following IP addresses:

  • OpenStack Servers (openstackX.lab.openstackcookbook.com) (HP MicroServers)
    • br-host (em1): 192.168.1.101 – 192.168.1.107
    • br-mgmt (p2p1.236): 172.29.236.101 – 172.29.236.107
    • br-vxlan (p2p1.240): 172.29.240.101 – 172.29.240.107
  • Deployment Host (Shuttle)
    • 192.168.1.20
    • 172.29.236.20
    • 172.29.240.20

The OpenStack Servers have their addresses laid out on the interfaces as follows:

  • em1 – host network untagged/native VLAN
    • Each server is configured so that the onboard interface, em1 (in the case of the HP MicroServers running Ubuntu 14.04), is untagged on my home network on 192.168.1.0/24.
  • p2p1 – interface used by the OpenStack environment
    • VLAN 236
      • Container to Container network
      • 172.29.236.0/24
    • VLAN 240
      • VXLAN Tunnel Network
      • 172.29.240.0/24
    • Untagged/Native VLAN
      • As we want to create VLAN-type networks in OpenStack, Neutron uses this untagged interface to add the extra VLAN tags itself

Step 1: PXE Boot OpenStack Servers to Ubuntu 14.04 with a single interface on host network

I’m not going to explain PXE booting; there are plenty of guides on the internet for setting it up. The result should be Ubuntu 14.04 with a single interface configured. This network is the host network when referring to the OpenStack environment.

Step 2: Configure Ansible on the Deployment Host (Shuttle) [if not already done]

We’ll be using Ansible to configure and deploy the OpenStack lab environment. First of all, to ensure we’ve got everything we need to run Ansible, and subsequently the OpenStack Ansible Deployment (OSAD), we check out the OSAD Playbooks and run the bootstrap script they provide. For this we’re using a Kilo release, denoted by the tag 11.0.4 (K is the 11th letter of the alphabet):

  cd /opt
  git clone -b 11.0.4 https://github.com/stackforge/os-ansible-deployment.git
  cd /opt/os-ansible-deployment
  scripts/bootstrap-ansible.sh
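
As a quick sanity check (my own suggestion rather than part of the official process), confirm the bootstrap completed and left you with a working Ansible and the openstack-ansible wrapper:

    # Verify Ansible is installed and the wrapper script is on the PATH
    ansible --version
    which openstack-ansible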

Step 3: Configure Networking

Each of the OpenStack servers (openstack1–openstack7) and the deployment host are all configured with the same network configuration. (Note: technically the Cinder/HAProxy host and the deployment host don’t need access to the networks used by Neutron, br-vxlan and br-vlan, but for simplicity in my lab all hosts are configured identically.) To do this I use an Ansible Playbook to set up the /etc/network/interfaces files to give the following:

  • br-host – 192.168.1.0/24
    • Bridge ports: em1
    • I move the interface on the host network into a bridge, and move the IP that was assigned to the interface onto the bridge. This is so I can use my host network as a flat external provider network in my lab. This is not required for many OpenStack environments, but it is useful for mine.
  • br-mgmt – 172.29.236.0/24
    • Bridge ports: p2p1.236
    • OSAD uses LXC containers for deployment of the services. This network is used for communication between the containers (such as an OpenStack service in one container talking to a service in another), and between the hosts and the containers. To install OpenStack using the OSAD Playbooks, the deployment host needs access to this network too.
  • br-vxlan – 172.29.240.0/24
    • Bridge ports: p2p1.240
    • VXLAN is an overlay network; Neutron creates a point-to-point mesh using endpoints on this network to build the VXLAN networks.
  • br-vlan – address unassigned, untagged
    • Bridge ports: p2p1
    • This interface is completely managed by Neutron, which uses it to create further VLAN networks in our environment.

Ansible requires a persistent connection to the servers while it is executing Playbooks, so it isn’t possible to use Ansible to move the IP from the existing em1 interface to br-host: we would be cutting off the very network Ansible is using. We therefore do this step manually, before creating the other bridges.

To move the IP address off em1, and move em1 into a bridge called br-host, carry out the following:

  1. Let’s first ensure we have the right tools available for Ubuntu so it can create bridges:
      sudo apt-get update
      sudo apt-get install bridge-utils
  2. Comment out em1 in /etc/network/interfaces:
    # auto em1
    # iface em1 inet dhcp
  3. Create the bridge in the interfaces.d includes directory as /etc/network/interfaces.d/ifcfg-br-host, putting em1 in as the bridge port and assigning the bridge the same IP that was previously on em1:
    auto br-host
    iface br-host inet static
    bridge_ports em1
    address 192.168.1.101
    netmask 255.255.255.0
    gateway 192.168.1.254
    bridge_stp off
  4. We then tell Ubuntu to source this directory when setting up the interfaces, by adding the following to /etc/network/interfaces:
    source /etc/network/interfaces.d/*
  5. Repeat for all OpenStack hosts, making sure the IP address in step 3 is updated accordingly, and reboot each host to pick up the change.
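
For reference, after these steps the relevant files on openstack1 should look something like the following sketch (adjust the address per host):

    # /etc/network/interfaces (relevant lines)
    source /etc/network/interfaces.d/*
    # auto em1
    # iface em1 inet dhcp

    # /etc/network/interfaces.d/ifcfg-br-host
    auto br-host
    iface br-host inet static
    bridge_ports em1
    address 192.168.1.101
    netmask 255.255.255.0
    gateway 192.168.1.254
    bridge_stp off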

With the environment up and running, when we have a look at what we have set up, we should see the following:

ip a
4: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br-host state UP group default qlen 1000
    link/ether 9c:b6:54:04:50:94 brd ff:ff:ff:ff:ff:ff
5: br-host: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 9c:b6:54:04:50:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.101/24 brd 192.168.1.255 scope global br-host
       valid_lft forever preferred_lft forever
    inet6 fe80::9eb6:54ff:fe04:5094/64 scope link 
       valid_lft forever preferred_lft forever
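
You can also confirm that em1 has been enslaved to the bridge using bridge-utils (output illustrative, based on the MAC address above):

    brctl show br-host
    # bridge name  bridge id          STP enabled  interfaces
    # br-host      8000.9cb654045094  no           em1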

At this stage we can configure an Ansible Playbook to set up the networking on our OpenStack hosts.

  1. First checkout the following set of Playbooks:
      cd /opt
      git clone https://github.com/uksysadmin/openstack-lab-setup.git
      cd openstack-lab-setup
  2. This set of Playbooks is based on https://github.com/bennojoy/network_interface, with some additions to help me set up OSAD. To configure it, first create a file (and directory) /etc/ansible/hosts with the following contents:
    [openstack-servers]
    openstack[1:7]
  3. This tells Ansible that I have 7 servers with hostnames accessible on my network: openstack1, openstack2, … openstack7. The next step is to configure these in the /opt/openstack-lab-setup/host_vars directory of the Playbooks checked out in step 1. This directory has files that match the host names specified in step 2, so we have 7 files in host_vars named openstack1, openstack2, all the way to openstack7. Edit host_vars/openstack1 with the following contents:
    roles:
      - role: network_interface
    network_vlan_interfaces:
      - device: p2p1
        vlan: 236
        bootproto: manual
      - device: p2p1
        vlan: 240
        bootproto: manual
    network_bridge_interfaces:
      - device: br-mgmt
        type: bridge
        address: 172.29.236.101
        netmask: 255.255.255.0
        bootproto: static
        stp: "off"
        ports: [p2p1.236]
      - device: br-vxlan
        type: bridge
        address: 172.29.240.101
        netmask: 255.255.255.0
        bootproto: static
        stp: "off"
        ports: [p2p1.240]
      - device: br-vlan
        type: bridge
        bootproto: manual
        stp: "off"
        ports: [p2p1]
  4. As you can see, we’re describing the bridges and interfaces, as well as the IP addresses that will be used on the hosts. The only difference between each of these files will be the IP address, so we now need to update the rest of the files, host_vars/openstack2 through host_vars/openstack7, with the correct IP addresses:
      for a in {2..7}
      do 
        sed "s/101/10${a}/g" host_vars/openstack1 > host_vars/openstack${a}
      done
  5. With the IP addresses all configured we can run the Playbook to configure these on each of our OpenStack hosts as follows:
    cd /opt/openstack-lab-setup
    ansible-playbook setup-network.yml
  6. Once this completes with no errors, check that all is OK (and that your deployment host has the correct networking set up too) by using fping to ping all the networks we will be using in our OpenStack environment:
    fping -g 192.168.1.101 192.168.1.107
    fping -g 172.29.236.101 172.29.236.107
    fping -g 172.29.240.101 172.29.240.107
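
In addition to fping, a quick Ansible ad-hoc ping against the [openstack-servers] group (a suggested extra check, not part of the Playbooks) confirms that SSH and Python are working on every host:

    ansible openstack-servers -m ping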

Step 4: Configure OpenStack Ansible Deployment

With the networking set up, we can now configure the deployment so we’re ready to run the scripts to install OpenStack with no further involvement.

  1. If you’ve not already grabbed the OSAD Playbooks, do so now:
      cd /opt
      git clone -b 11.0.4 https://github.com/stackforge/os-ansible-deployment.git
      cd /opt/os-ansible-deployment
  2. We need to copy the configuration files to /etc/openstack_deploy:
      cd etc
      cp -R openstack_deploy /etc
  3. At this point we would normally edit these files to match the lab environment and control how OpenStack gets deployed. I simply grab the required files that I’ve pre-configured for my lab:
      # If you've not checked this out already to do the Networking section
      cd /opt
      git clone https://github.com/uksysadmin/openstack-lab-setup.git
      # The pre-configured files are in here
      cd /opt/openstack-lab-setup
  4. Copy the files openstack_user_config.yml, user_group_vars.yml and user_variables.yml to /etc/openstack_deploy:
      cp openstack_user_config.yml user_group_vars.yml user_variables.yml /etc/openstack_deploy
  5. We now need to generate the random passwords for the OpenStack services. These get written to the file /etc/openstack_deploy/user_secrets.yml:
      cd /opt/os-ansible-deployment
      scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml

That’s it for configuration. Feel free to edit /etc/openstack_deploy/*.yml files to suit your environment.
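
For orientation, the Kilo-era openstack_user_config.yml has roughly the following shape. This is a heavily abridged sketch: the key names follow the upstream example file, but the values shown are illustrative for this lab rather than my exact configuration.

    cidr_networks:
      container: 172.29.236.0/24
      tunnel: 172.29.240.0/24

    used_ips:
      - 172.29.236.1,172.29.236.100

    global_overrides:
      internal_lb_vip_address: 172.29.236.107
      external_lb_vip_address: 192.168.1.107
      management_bridge: "br-mgmt"
      tunnel_bridge: "br-vxlan"

    # Controllers (openstack4-6), computes (openstack1-3), storage (openstack7)
    shared-infra_hosts:
      openstack4:
        ip: 172.29.236.104
    compute_hosts:
      openstack1:
        ip: 172.29.236.101
    storage_hosts:
      openstack7:
        ip: 172.29.236.107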

(Optional) Step 5: Create a local repo mirror + proxy environment {WARNING!}

The file /etc/openstack_deploy/user_group_vars.yml has entries in it that are specific to my lab environment:

#openstack_upstream_domain: "rpc-repo.rackspace.com"
openstack_upstream_domain: "internal-repo.lab.openstackcookbook.com"

Edit this file so it is applicable to your environment. If you do not plan on creating a local repo, use rpc-repo.rackspace.com, which is available to everyone.

OSAD pulls down specific versions of code, from servers hosted at rackspace.com, to ensure consistency with releases. Rather than doing this on every deployment, which would send unnecessary traffic over a relatively slow internet connection, I create this repo once. I create it on the deployment server (my Shuttle on 192.168.1.20), as that remains constant regardless of how many times I tear down and spin up my lab.

To create this, carry out the following:

  1. First ensure you’ve enough space on your deployment host. The repo is 9.6G in size when doing a full sync.
  2. I create the mirror in /openstack/mirror and do the sync using rsync as follows:
     mkdir -p /openstack/mirror
     rsync -avzlHAX --exclude=/repos --exclude=/mirror --exclude=/rpcgit \
        --exclude=/openstackgit --exclude=/python_packages \
        rpc-repo.rackspace.com::openstack_mirror /openstack/mirror/
  3. This will take a while depending on your bandwidth, so grab a coffee or go for a sleep.
  4. Ensure this directory, /openstack/mirror, is served by a web server such as Nginx or Apache, with a hostname matching {{ openstack_upstream_domain }}. When you browse http://rpc-repo.rackspace.com/ it should look the same as your internal version (e.g. http://internal-repo.lab.openstackcookbook.com). Edit DNS and your web server configuration to suit; see the Nginx sketch below.
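
If you’re using Nginx, a minimal server block along these lines is enough to serve the mirror with directory listings (a sketch; the hostname is assumed to match your openstack_upstream_domain):

    server {
        listen 80;
        server_name internal-repo.lab.openstackcookbook.com;
        root /openstack/mirror;
        autoindex on;   # show directory listings, like rpc-repo.rackspace.com
    }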

I also make one other amendment, found in the file /etc/openstack_deploy/user_variables.yml. This has entries in it that are specific to my lab environment, which uses apt-cacher:

## Example environment variable setup:
proxy_env_url: http://apt-cacher:3142/
no_proxy_env: "localhost,127.0.0.1,{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"

## global_environment_variables:
HTTP_PROXY: "{{ proxy_env_url }}"
HTTPS_PROXY: "{{ proxy_env_url }}"
NO_PROXY: "{{ no_proxy_env }}"

If you’re not using apt-cacher, or are using something else, edit this to suit, or ensure these lines are commented out so that no proxy is used.

Step 6: Configure Cinder Storage

Cinder runs on openstack7 in my environment. On here I have a number of disks, and one of them is used for Cinder.

To configure a disk for Cinder, we create a volume group called cinder-volumes as follows:

fdisk /dev/sdb

Create a partition, /dev/sdb1, of type 8e (Linux LVM), then:

pvcreate /dev/sdb1
vgcreate cinder-volumes /dev/sdb1
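
To verify the volume group is ready for Cinder (output illustrative):

    vgs cinder-volumes
    #   VG              #PV #LV #SN Attr   VSize VFree
    #   cinder-volumes    1   0   0 wz--n- 1.82t 1.82t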

Step 7: Running the OpenStack Ansible Deployment Playbooks

After the environment preparation has been done, we can simply run the Playbooks that perform all the tasks for setting up our multi-node lab. Note that these Playbooks are designed for any environment and not just for labs.

Verify that all of the configuration files in /etc/openstack_deploy, as described in Step 4, are suitable for your environment.

To install OpenStack, simply carry out the following:

cd /opt/os-ansible-deployment
scripts/run-playbooks.sh

This wrapper script executes a number of playbooks, with a certain amount of retries built in to help with any transient failures. The Playbooks it runs can be seen in the script. Ultimately, the following gets run:

cd /opt/os-ansible-deployment
openstack-ansible playbooks/setup-hosts.yml
openstack-ansible playbooks/install-haproxy.yml
openstack-ansible playbooks/setup-infrastructure.yml
openstack-ansible playbooks/setup-openstack.yml

openstack-ansible is a wrapper script that tells Ansible where to find its environment files, so you don’t have to specify them on the command line yourself.
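
Conceptually (this is a sketch of the idea, not the actual wrapper script, and the inventory path is from memory), running one of the Playbooks via openstack-ansible is roughly equivalent to:

    cd /opt/os-ansible-deployment/playbooks
    ansible-playbook -i inventory/dynamic_inventory.py \
        -e @/etc/openstack_deploy/user_variables.yml \
        setup-hosts.yml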

After running run-playbooks.sh you will be presented with a summary of what was installed, how long it took, and how many times each action was retried:

[Screenshot: run-playbooks.sh completion summary]

Step 8: Configure Glance Containers to use NFS

I configure Glance to use the local filesystem for its image location. This would be fine if there were only one Glance service, but there are 3 in this cluster. Each individual Glance service points at its local /var/lib/glance directory, which means that if I uploaded an image, it would only exist on one of the three servers.

To get around this, I use NFS and mount /var/lib/glance from an NFS server. I’ve configured my NFS server (my QNAP NAS) to give me a share called “/glance”, which allows me to use Glance as if it were local. To do this, carry out the following steps:

  1. Each of the services runs in a container on our controller servers. The controller servers in my lab are openstack4, openstack5 and openstack6. Log into one of these and execute the following as root:
    lxc-ls -f
  2. This will produce a list of containers running on that server. One of them will be labelled something like controller-01_glance_YYYYY (where YYYYY is the UUID associated with the container). Attach yourself to this as follows:
    lxc-attach -n controller-01_glance_YYYYY
  3. Edit /etc/fstab to mount /var/lib/glance from my NFS server:
    192.168.1.2:/glance /var/lib/glance nfs rw,defaults 0 0
  4. Edit to suit your NFS server. Mount this as follows:
    mount -t nfs -a
  5. Ensure that this area is writeable by glance:glance as follows (you only need to do this once):
    chown -R glance:glance /var/lib/glance
  6. The directory structure in here should be as follows (for verification):
    /var/lib/glance/images
    /var/lib/glance/cache
    /var/lib/glance/scrubber
  7. Exit, and then repeat on the two other Glance containers found on the other two controller nodes.
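
On the NFS server side, my QNAP configures the export through its UI, but on a plain Linux NFS server the equivalent /etc/exports entry would look something like this sketch:

    # /etc/exports: export /glance to the host network
    /glance 192.168.1.0/24(rw,sync,no_subtree_check)
    # then reload the exports: exportfs -ra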

Step 9: Logging in to the OpenStack Environment

In my environment, I log into a controller node, which houses a “utility” container. This has all the tools needed to operate the OpenStack environment. It also has the randomly generated admin credentials found in /root/openrc so I can also log in using Horizon. To do this:

 ssh openstack4 # as root, one of my controllers
 lxc-ls -f # look for the utility container
 lxc-attach -n controller-01_utility_container_{random_n}

Once on the container, I can see the file /root/openrc. View this file, and then use the same credentials to log into Horizon.
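
For example, sourcing the credentials and running a couple of Kilo-era client commands (illustrative):

    source /root/openrc
    nova list            # list running instances
    keystone tenant-list # list tenants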



4 thoughts on “Home Lab 2015 Edition: OpenStack Ansible Deployment”

  1. Stuart Taylor July 21, 2015 at 10:46 pm Reply

    Nice, now let me see you do it with Windows Server. 😏 Yeah, it’s a PITA, but we’ve got to make that leap. Nice write-up nonetheless 🙌

    • Kevin Jackson July 22, 2015 at 9:18 am Reply

      OpenStack doesn’t run on Windows Server (well, there are ways) – but that’s like running Doom on a printer. Just because you can, doesn’t mean you should.

