OpenStack Books: January sale discounts!


Get up to 30% off a copy of the OpenStack Cloud Computing Cookbook (3rd Edition) and Learning OpenStack Networking (Neutron) 2nd Edition using the codes below!

OpenStack Cloud Computing Cookbook, 3rd Edition

30% off eBook with code OSCCC3E
25% off print with code OSCCC3P

Learning OpenStack Networking (Neutron), 2nd Edition
30% off eBook with code LOSN2E
25% off print with code LOSN2P

Offer valid from Friday 15th January to 31st January 2016

I have an OpenStack environment, now what? Creating Networks with Neutron #OpenStack 101

With some images loaded, it’s about time we set about doing something useful with the environment, like firing up some instances. But before we do, those instances need to live somewhere. In this post I describe how to create an external provider network, a router, and a private tenant network.

Creating Neutron Networks

Neutron networks can be external or internal/private, and can be shared or kept within the bounds of a tenant (project). When a network is created with the external flag (a feature only available to ‘admin’ users), it can be set as the gateway network of a router. This mechanism gives us the ability to hand out Floating IP addresses from the external network so we can access our instances from outside of OpenStack.

Creating Neutron networks is done in two stages:

  1. Create the Network (this stage is like configuring a switch and cabling)
  2. Create the Subnet(s) on this network (assigning the CIDR, gateways, DHCP information and DNS)

This is the same process for any networks we create in OpenStack using Neutron.

1. Creating the External Provider Network (as ‘admin’ user)

In the lab environment, my external provider network is going to be my host network. In most environments (especially real ones, rather than ones with the word ‘lab’ in the title), this provider network would be a routable network that is not your host network, for the simple reason of security: putting instances on the same network as the hosts is considered a security risk.

With that aside, my configuration allows me to do this in my lab, and since I want to access instances from my laptop on my host network, this is the desired state. To do this, carry out the following:

  1. First create the network as follows. We’ll call this ‘flatNet’.
    neutron net-create \
     --provider:physical_network=flat \
     --provider:network_type=flat \
     --router:external=true \
     --shared \
     flatNet

    This should produce output like the following:

    [screenshot: neutron net-create output]

  2. Next we allocate a subnet to this network as follows:
    neutron subnet-create \
     --name flatSubnet \
     --allocation-pool start=192.168.1.150,end=192.168.1.170 \
     --dns-nameserver 192.168.1.1 \
     --gateway 192.168.1.254 \
     flatNet 192.168.1.0/24

    This produces output like the following:

    [screenshot: neutron subnet-create output]

This is all we need at this stage. We’ve created a network and marked it as an ‘external’ provider network. We marked it as shared so that any tenants (projects) we create in our environment can use this external network. We then added a subnet that is part of my 192.168.1.0/24 CIDR, but restricted which IP addresses Neutron can hand out using the start and end of the allocation pool (so as not to conflict with other clients on the same network: it would be terribad if one of my instances claimed the IP of the device running Netflix in this household!).
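If you want to double-check what was created before moving on, the standard show commands will display the external and shared flags along with the allocation pool (the names are the ones used above):

neutron net-list
neutron net-show flatNet
neutron subnet-show flatSubnet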

2. Create a router

The external network allows me to create instances on that network. This gives me limited functionality in my environment, given that what I created was a single, flat network. To produce multi-tier environments, and to access resources outside of my OpenStack environment, we need to introduce a router. This router will allow me to create private tenant networks, create instances on them, and have routed access to the internet and the other networks I create. It also enables Floating IP addresses for my instances, so I can selectively assign IPs to the instances I want to be ‘externally’ accessible (in this case, accessible from my host (home) network).

Creating a router is as simple as the following:

neutron router-create myRouter

This isn’t an ‘admin’ function: any user in a tenant can create routers. As I was using my ‘admin’ user’s credentials, the router was created in the ‘admin’ tenant. If I was working in another tenant, called development for example, I’d have to create a router in that tenant too. (An admin user, however, can create routers in other tenants by specifying the tenant when creating the router.)
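A sketch of that admin-only case is below. The ‘development’ tenant name is just an example, and the exact flag name (shown here as --tenant-id) may vary between client versions, so check neutron help router-create:

# As admin, look up the tenant ID and create a router inside that tenant
TENANT_ID=$(keystone tenant-list | awk '/ development / {print $2}')
neutron router-create --tenant-id ${TENANT_ID} devRouter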

3. Assign the gateway of the router to the external provider network

A router is not much use if it can’t route somewhere. The intention of this router is to provide inbound and outbound access for my instances, and to do this I must set a gateway. The networks that can be chosen as a gateway are those created with the --router:external=true flag, as we did in Step 1 (Creating the External Provider Network).

To set a gateway on the router, carry out the following

neutron router-gateway-set myRouter flatNet

(If you have multiple networks of the same name, or even routers of the same name, use the UUID of the resources instead.)
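For example, a quick way to look the UUIDs up by name and use them instead (assuming the names used above):

# Grab the UUIDs of the router and network, then set the gateway using them
ROUTER_ID=$(neutron router-list | awk '/ myRouter / {print $2}')
NET_ID=$(neutron net-list | awk '/ flatNet / {print $2}')
neutron router-gateway-set ${ROUTER_ID} ${NET_ID}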

4. Create a private tenant network

At this stage, we have an external provider network, and a router. We can now create a private network for our instances to live on. In the OSAD installation, tenant networks will be created as VXLAN networks and creating them follows the same pattern as before: create the network, then create the subnet on that network.

To create a private tenant network with a CIDR of 10.0.1.0/24, carry out the following:

  1. First create the network (note the lack of options required to make this simple network)
    neutron net-create privateNet

    This creates output like the following:

    [screenshot: neutron net-create output for privateNet]

  2. Secondly create the subnet details
    neutron subnet-create \
     --name privateSubnet \
     --dns-nameserver 192.168.1.1 \
     privateNet 10.0.1.0/24

    This creates output like the following:

    [screenshot: neutron subnet-create output for privateSubnet]

And that’s it. We carried out the same steps as before: we created a privateNet network, and then we created a subnet on it. I’ve kept the defaults for the gateway and DHCP range, which means that all the IP addresses in the CIDR are available apart from two: the gateway IP (the .1 address of the range) and the DHCP server address.
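If you did want to set these yourself rather than rely on the defaults, the same subnet could be created with an explicit gateway and allocation pool; the values below are just an example:

neutron subnet-create \
 --name privateSubnet \
 --gateway 10.0.1.1 \
 --allocation-pool start=10.0.1.10,end=10.0.1.200 \
 --dns-nameserver 192.168.1.1 \
 privateNet 10.0.1.0/24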

5. Connect the private tenant network to the router

Finally, we connect the private tenant network to the router. This allows us to be able to connect our instances to the outside world via the router, as well as be able to assign Floating IP addresses to our instances. To do this, carry out the following:

neutron router-interface-add myRouter privateSubnet

(as before, if you have subnets or routers with the same names, use the UUIDs instead).
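To verify that the router now has both its gateway on flatNet and an interface on privateSubnet, you can show its details and list its ports:

neutron router-show myRouter
neutron router-port-list myRouter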

With this done, we end up with the following:

[screenshot: network topology shown in Horizon]

We can now spin up instances on these networks.
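As a quick preview (launching instances is covered properly in another post), booting a test instance on privateNet and giving it a Floating IP from flatNet might look like the sketch below. The cirros-image name and m1.tiny flavor are assumptions from my environment, so substitute whatever exists in yours:

# Boot a test instance on the private tenant network
NET_ID=$(neutron net-list | awk '/ privateNet / {print $2}')
nova boot --flavor m1.tiny --image cirros-image --nic net-id=${NET_ID} testInstance

# Allocate a Floating IP from the external network and attach it
FIP=$(neutron floatingip-create flatNet | awk '/ floating_ip_address / {print $4}')
nova floating-ip-associate testInstance ${FIP}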

I have an OpenStack environment, now what? Loading Images into Glance #OpenStack 101

With an OpenStack environment up and running based on an OpenStack Ansible Deployment, now what?

Using Horizon with OSAD

First, we can log into Horizon (point your web browser at your load balancer address, the one labelled external_lb_vip_address in /etc/openstack_deploy/openstack_user_config.yml):

global_overrides:
  internal_lb_vip_address: 172.29.236.107
  external_lb_vip_address: 192.168.1.107
  lb_name: haproxy

Where are the username/password credentials for Horizon?

In step 4.5 of https://openstackr.wordpress.com/2015/07/19/home-lab-2015-edition-openstack-ansible-deployment/ we randomly generated all passwords used by OpenStack. This also generated a random password for the ‘admin‘ user. This user is the equivalent of ‘root’ on a Linux system, so generating a strong password is highly recommended. But to get that password, we need to get it out of a file.

The easiest place to find this password is to look on the deployment host itself, as that is where we wrote out the passwords. Take a look in the /etc/openstack_deploy/user_secrets.yml file and find the line that says 'keystone_auth_admin_password'. This random string of characters is the 'admin' user's password that you can use for Horizon.
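A quick way to pull that line out on the deployment host is with grep, which returns something like the line shown below:

grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml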

keystone_auth_admin_password: bfbbb99316ae0a4292f8d07cd4db5eda2578c5253dabfa0

[screenshot: logging into Horizon as admin]

The Utility Container and openrc credentials file

Alternatively, you can grab the ‘openrc‘ file from a ‘utility’ container which is found on a controller node. To do this, carry out the following:

  1. Log into a controller node and change to root. In my case I can choose either openstack4, openstack5 or openstack6. From there I can list the containers running on it as follows:
    lxc-ls -f

    This brings back output like the following:
    [screenshot: lxc-ls output on openstack4]

  2. Locate the name of the utility container and attach to it as follows
    lxc-attach -n controller-01_utility_container-71cceb47
  3. Here you will find the admin user’s credentials in the /root/openrc file:
    cat openrc
    # Do not edit, changes will be overwritten
    # COMMON CINDER ENVS
    export CINDER_ENDPOINT_TYPE=internalURL
    # COMMON NOVA ENVS
    export NOVA_ENDPOINT_TYPE=internalURL
    # COMMON OPENSTACK ENVS
    export OS_ENDPOINT_TYPE=internalURL
    export OS_USERNAME=admin
    export OS_PASSWORD=bfbbb99316ae0a4292f8d07cd4db5eda2578c5253dabfa0
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://172.29.236.107:5000/v2.0
    export OS_NO_CACHE=1
  4. To use this, we simply source this into our environment as follows:
    . openrc

    or

    source openrc
  5. And now we can use the command line tools such as nova, glance, cinder, keystone, neutron and heat. A few quick checks are shown below.
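As a sanity check that the sourced credentials work, these read-only commands from the Juno/Kilo-era clients should all return without errors:

keystone tenant-list
nova service-list
neutron agent-list
glance image-list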

Loading images into Glance

Glance is the Image Service. It provides the list of available images you can use to launch instances in OpenStack. To load and manage images, we use the glance command line tool.

There are plenty of public images available for OpenStack. You essentially grab them from the internet, and load them into Glance for your use. A list of places for OpenStack images can be found below:

CirrOS test image (can use username/password to log in): http://download.cirros-cloud.net/

Ubuntu images: http://cloud-images.ubuntu.com/

Windows 2012 R2: http://www.cloudbase.it/

CentOS 7: http://cloud.centos.org/centos/7/images/

Fedora: https://getfedora.org/en/cloud/download/

To load these, log into a Utility container as described above and load them into the environment as follows.

Note that you can either grab the files from the website, save them locally and upload them to Glance, or have Glance fetch the files directly from the site. I’ll describe both, as you will have to load the Windows image from a locally saved file due to having to accept an EULA before gaining access.

CirrOS

glance image-create \
  --name "cirros-image" \
  --disk-format qcow2 \
  --container-format bare \
  --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img \
  --is-public True \
  --progress

You can use a username and password to log into CirrOS, which makes this tiny just-enough-OS great for testing and troubleshooting. Username: cirros, password: cubswin:)

Ubuntu 14.04

glance image-create \
  --name "trusty-image" \
  --disk-format qcow2 \
  --container-format bare \
  --copy-from http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img \
  --is-public True \
  --progress

You’d specify a keypair to use when launching this image, as there is no default username or password on these cloud images (that would be a disastrous security fail if there were). The username to log into these is ‘ubuntu’, and the private key matching the public key specified at launch gets you access.
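A minimal sketch of that flow, assuming you haven’t created a keypair yet (the key name here is just an example):

# Create a keypair and keep the private key safe
nova keypair-add trustyKey > trustyKey.pem
chmod 600 trustyKey.pem

# At launch time, pass --key-name trustyKey to nova boot; then, once the
# instance has a reachable IP, log in with the matching private key:
ssh -i trustyKey.pem ubuntu@<instance-ip>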

Windows 2012 R2

For Windows, you can download an evaluation copy of Windows 2012 R2 and to do so you need to accept a license. Head over to http://www.cloudbase.it/ and follow the instructions to download the image.

Once downloaded, you need to get this image into OpenStack. As we’re using the Utility container for our access, the image needs to be accessible from there. There are alternatives, such as installing the OpenStack client tools on your own machine (which is ultimately how you’d use OpenStack), but for now we will copy the image to the Utility container.

  1. Copy the Windows image to the Utility container. All of the containers have an IP on the container ‘management’ network (172.29.236.0/24 in my lab). View the IP address of the Utility container and use this IP. This network is reachable from my deployment host, so I simply secure-copy the image over to the container:

    (performed as root on my deployment host as that has SSH access using keypairs to the containers)

    scp Windows2012R2.qcow2 root@172.29.236.85:
  2. We can then upload this to Glance as follows (note the use of --file instead of --copy-from):
    glance image-create \
      --name "windows-image" \
      --disk-format qcow2 \
      --container-format bare \
      --file ./Windows2012R2.qcow2 \
      --is-public True \
      --progress

    This will take a while as the Windows images are naturally bigger than Linux ones. Once uploaded it will be available for our use.

Access to Windows instances will be by RDP. Although SSH keypairs are not used by this Windows image for RDP access, a keypair is still required to retrieve the randomly generated ‘Administrator’ password, so specify one when launching the Windows instance.

Access to the Administrator password is then carried out using the following once you’ve launched an instance:

nova get-password myWindowsInstance .ssh/id_rsa
Launching instances will be covered in a later topic!

Home Lab 2015 Edition: OpenStack Ansible Deployment

I’ve written a few blog posts on my home lab before, and since then it has evolved slightly to account for new and improved ways of deploying OpenStack – specifically how Rackspace deploy OpenStack using the Ansible OpenStack Deployment Playbooks. Today, my lab consists of the following:

  • 7 HP MicroServers (between 6GB and 8GB RAM, with SSDs), each with 2 NICs in use.
  • 1 server (Shuttle i5, 16GB RAM, 2TB disk) used to run virtual environments with Vagrant and as the Ansible deployment host for my OpenStack environment. This also has 2 NICs.
  • 1 x 24-Port Managed Switch

The environment looks like this

[diagram: home lab environment, 2015 edition]

In the lab environment I allocate the 7 servers to OpenStack as follows

  • openstack1 – openstack3: Nova Computes (3 Hypervisors running KVM)
  • openstack4 – openstack6: Controllers
  • openstack7: Cinder + HA Proxy

This environment was the test lab for many of the chapters of the OpenStack Cloud Computing Cookbook.

With this environment, to install OpenStack using the Ansible Playbooks, I essentially do the following steps:

  1. PXE Boot Ubuntu 14.04 across my 7 OpenStack servers
  2. Configure the networking to add all needed bridges, using Ansible
  3. Configure OSAD deployment by grabbing the pre-built configuration files
  4. Run the OSAD Playbooks

From PXE boot to OpenStack, the lab gets deployed in about 2 hours.

Network Setup

In my lab I use the following subnets on my network

  • Host network: 192.168.1.0/24
  • Container-to-container network: 172.29.236.0/24
  • VXLAN Neutron Tunnel Network: 172.29.240.0/24

The hosts on my network are on the following IP addresses

  • OpenStack Servers (openstackX.lab.openstackcookbook.com) (HP MicroServers)
    • br-host (em1): 192.168.1.101 – 192.168.1.107
    • br-mgmt (p2p1.236): 172.29.236.101 – 172.29.236.107
    • br-vxlan (p2p1.240): 172.29.240.101 – 172.29.240.107
  • Deployment Host (Shuttle)
    • 192.168.1.20
    • 172.29.236.20
    • 172.29.240.20

The OpenStack Servers have their addresses laid out on the interfaces as follows:

  • em1 – host network untagged/native VLAN
    • Each server is configured so that the onboard interface, em1 (in the case of the HP MicroServers running Ubuntu 14.04), is untagged on my home network on 192.168.1.0/24.
  • p2p1 – interface used by the OpenStack environment
    • VLAN 236
      • Container to Container network
      • 172.29.236.0/24
    • VLAN 240
      • VXLAN Tunnel Network
      • 172.29.240.0/24
    • Untagged/Native VLAN
      • As we want to create VLAN type networks in OpenStack, we use this to add those extra tags to this interface

Step 1: PXE Boot OpenStack Servers to Ubuntu 14.04 with a single interface on host network

I’m not going to explain PXE booting; there are plenty of guides on the internet for setting it up. The result should be Ubuntu 14.04 with a single interface configured. This interface is on the host network when referring to the OpenStack environment.

Step 2: Configure Ansible on the Deployment Host (Shuttle) [If not already]

We’ll be using Ansible to configure and deploy the OpenStack lab environment. To ensure we’ve got everything we need to run Ansible and, subsequently, the OpenStack Ansible Deployment (OSAD), we’ll check out the OSAD Playbooks and run the bootstrap script they provide. For this we’re using a Kilo release (denoted by the tag 11.0.4, as K is the 11th letter of the alphabet):

  cd /opt
  git clone -b 11.0.4 https://github.com/stackforge/os-ansible-deployment.git
  cd /opt/os-ansible-deployment
  scripts/bootstrap-ansible.sh

Step 3: Configure Networking

Each of the OpenStack servers (openstack1-7) and the deployment host are configured with the same network configuration. (Note: technically the Cinder/HAProxy host and the deployment host don’t need access to the networks used by Neutron (br-vxlan and br-vlan), but for simplicity in my lab all hosts are configured the same.) To do this I use an Ansible Playbook to set up the /etc/network/interfaces files to give the following:

  • br-host – 192.168.1.0/24
    • Bridge ports: em1
    • I move the host-network interface into a bridge and move its IP onto the bridge. This is so I can use my host network as a flat external provider network in my lab. This is not required for many OpenStack environments, but it is useful for my lab.
  • br-mgmt – 172.29.236.0/24
    • Bridge ports: p2p1.236
    • OSAD uses LXC containers to deploy the services. This network is used for communication between the containers (OpenStack services in one container talking to services in another, on the same server or a different one), and between the hosts and containers. To install OpenStack using the OSAD Playbooks, the deployment host needs access to this network too.
  • br-vxlan – 172.29.240.0/24
    • Bridge ports: p2p1.240
    • VXLAN is an overlay network; Neutron creates a point-to-point mesh between endpoints on this network to build the VXLAN tenant networks.
  • br-vlan – address unassigned, untagged
    • Bridge ports: p2p1
    • This interface is completely managed by Neutron. It uses this interface to assign further VLAN networks in our environment.

Ansible requires a persistent connection to the servers while it is executing the Playbooks, so it isn’t possible to use Ansible to move the IP from the existing em1 interface to br-host: we’d be pulling the network out from under it. We therefore do this step manually before creating the other bridges.

To move the IP address off em1 and put em1 into a bridge called br-host, carry out the following:

  1. Let’s first ensure we have the right tools available for Ubuntu so it can create bridges:
      sudo apt-get update
      sudo apt-get install bridge-utils
  2. Comment out em1 in /etc/network/interfaces:
    # auto em1
    # iface em1 inet dhcp
  3. Create the bridge in the interfaces.d includes directory as /etc/network/interfaces.d/ifcfg-br-host, with em1 as the bridge port, and assign the bridge the same IP that was previously on em1:
    auto br-host
    iface br-host inet static
    bridge_ports em1
    address 192.168.1.101
    netmask 255.255.255.0
    gateway 192.168.1.254
    bridge_stp off
  4. We then tell Ubuntu to source in this directory when setting up the interfaces by adding the following to /etc/network/interfaces:
    source /etc/network/interfaces.d/*
  5. Repeat for all OpenStack hosts, making sure the IP address in step 3 is updated accordingly, and reboot each host to pick up the change.

With the host back up and running, a look at what we have set up should show the following:

ip a
4: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br-host state UP group default qlen 1000
    link/ether 9c:b6:54:04:50:94 brd ff:ff:ff:ff:ff:ff
5: br-host: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 9c:b6:54:04:50:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.101/24 brd 192.168.1.255 scope global br-host
       valid_lft forever preferred_lft forever
    inet6 fe80::9eb6:54ff:fe04:5094/64 scope link 
       valid_lft forever preferred_lft forever

At this stage we can configure an Ansible Playbook to set up the networking on our OpenStack hosts.

  1. First checkout the following set of Playbooks:
      cd /opt
      git clone https://github.com/uksysadmin/openstack-lab-setup.git
      cd openstack-lab-setup
  2. This set of Playbooks is based on https://github.com/bennojoy/network_interface and other things to help me set up OSAD. To configure it, first create a file (and directory) /etc/ansible/hosts with the following contents:
    [openstack-servers]
    openstack[1:7]
  3. This tells Ansible that I have 7 servers accessible on my network with hostnames openstack1, openstack2 … openstack7. The next step is to configure these in the /opt/openstack-lab-setup/host_vars directory of the Playbooks checked out in step 1. This directory has files that match the hostnames specified in step 2, which means we have 7 files in host_vars named openstack1, openstack2, all the way to openstack7. Edit host_vars/openstack1 with the following contents:
    roles:
      - role: network_interface
    network_vlan_interfaces:
      - device: p2p1
        vlan: 236
        bootproto: manual
      - device: p2p1
        vlan: 240
        bootproto: manual
    network_bridge_interfaces:
      - device: br-mgmt
        type: bridge
        address: 172.29.236.101
        netmask: 255.255.255.0
        bootproto: static
        stp: "off"
        ports: [p2p1.236]
      - device: br-vxlan
        type: bridge
        address: 172.29.240.101
        netmask: 255.255.255.0
        bootproto: static
        stp: "off"
        ports: [p2p1.240]
      - device: br-vlan
        type: bridge
        bootproto: manual
        stp: "off"
        ports: [p2p1]
  4. As you can see, we’re describing the bridges and interfaces, as well as the IP addresses that will be used on the hosts. The only difference between each of these files is the IP addresses, so we now need to update the rest of the files, host_vars/openstack2 through host_vars/openstack7, with the correct addresses:
      for a in {2..7}
      do 
        sed "s/101/10${a}/g" host_vars/openstack1 > host_vars/openstack${a}
      done
  5. With the IP addresses all configured we can run the Playbook to configure these on each of our OpenStack hosts as follows:
    cd /opt/openstack-lab-setup
    ansible-playbook setup-network.yml
  6. Once this completes with no errors, check that all is OK (and that your deployment host has the correct networking set up too) by using fping to ping all the addresses we will be using in our OpenStack environment:
    fping -g 192.168.1.101 192.168.1.107
    fping -g 172.29.236.101 172.29.236.107
    fping -g 172.29.240.101 172.29.240.107

Step 4: Configure OpenStack Ansible Deployment

With the networking set up, we can now configure the deployment so we’re ready to run the scripts to install OpenStack with no further involvement.

  1. If you’ve not grabbed the OSAD Playbooks, do so now
      cd /opt
      git clone -b 11.0.4 https://github.com/stackforge/os-ansible-deployment.git
      cd /opt/os-ansible-deployment
  2. We need to copy the configuration files to /etc/openstack_deploy
      cd etc
      cp -R openstack_deploy /etc
  3. At this point we would configure the files to describe how OpenStack will get deployed in the lab environment. I simply grab the required files that I’ve pre-configured for my lab:
      # If you've not checked this out already to do the Networking section
      cd /opt
      git clone https://github.com/uksysadmin/openstack-lab-setup.git
      # The pre-configured files are in here
      cd /opt/openstack-lab-setup
  4. Copy the files openstack_user_config.yml, user_group_vars.yml, user_variables.yml to /etc/openstack_deploy
    
      cp openstack_user_config.yml user_group_vars.yml user_variables.yml /etc/openstack_deploy
  5. We now need to generate the random passwords for the OpenStack services. These get written to the file /etc/openstack_deploy/user_secrets.yml:
      cd /opt/os-ansible-deployment
      scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml

That’s it for configuration. Feel free to edit /etc/openstack_deploy/*.yml files to suit your environment.

(Optional) Step 5: Create a local repo mirror + proxy environment {WARNING!}

The file /etc/openstack_deploy/user_group_vars.yml has entries in it that are applicable to my lab environment:

#openstack_upstream_domain: "rpc-repo.rackspace.com"
openstack_upstream_domain: "internal-repo.lab.openstackcookbook.com"

Edit this file so it is applicable to your environment. If you do not plan on creating a local repo, use rpc-repo.rackspace.com which is available to everyone.

OSAD pulls down specific versions of code from servers hosted at rackspace.com to ensure consistency between releases. Rather than doing this on every deployment, which would cause unnecessary traffic over a relatively slow internet connection, I recreate this repo locally once. I create it on the deployment server (my Shuttle on 192.168.1.20), as that remains constant regardless of how many times I tear down and spin up my lab.

To create this carry out the following:

  1. First ensure you’ve enough space on your deployment host. The repo is 9.6G in size when doing a full sync.
  2. I create the mirror in /openstack/mirror and do the sync using rsync as follows:
     mkdir -p /openstack/mirror
     rsync -avzlHAX --exclude=/repos --exclude=/mirror --exclude=/rpcgit \
        --exclude=/openstackgit --exclude=/python_packages \
        rpc-repo.rackspace.com::openstack_mirror /openstack/mirror/
  3. This will take a while depending on your bandwidth, so grab a coffee or go for a sleep.
  4. Ensure this directory, /openstack/mirror, is available via a web server such as Nginx or Apache, and that the hostname matches {{ openstack_upstream_domain }}. When you view your internal version in a web browser (e.g. http://internal-repo.lab.openstackcookbook.com), it should look the same as http://rpc-repo.rackspace.com/. Edit DNS and your web server configuration to suit; a minimal example is sketched below.
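A minimal sketch of serving the mirror with Nginx. The server_name here assumes the internal-repo.lab.openstackcookbook.com hostname from my user_group_vars.yml; adjust it to whatever you set openstack_upstream_domain to:

# Install nginx and publish /openstack/mirror over plain HTTP
apt-get install -y nginx
cat > /etc/nginx/sites-available/openstack-mirror <<'EOF'
server {
    listen 80;
    server_name internal-repo.lab.openstackcookbook.com;
    root /openstack/mirror;
    autoindex on;
}
EOF
ln -s /etc/nginx/sites-available/openstack-mirror /etc/nginx/sites-enabled/openstack-mirror
service nginx reload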

I also have another amendment in the file /etc/openstack_deploy/user_variables.yml. This has entries that are applicable to my lab environment, which uses apt-cacher:

## Example environment variable setup:
proxy_env_url: http://apt-cacher:3142/
no_proxy_env: "localhost,127.0.0.1,{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"

## global_environment_variables:
HTTP_PROXY: "{{ proxy_env_url }}"
HTTPS_PROXY: "{{ proxy_env_url }}"
NO_PROXY: "{{ no_proxy_env }}"

If you’re not using apt-cacher, or are using something else, edit this to suit or ensure these entries are commented out so no proxy is used.

Step 6: Configure Cinder Storage

Cinder runs on openstack7 in my environment. On here I have a number of disks, and one of them is used for Cinder.

To configure a disk for Cinder, we create a volume group called cinder-volumes as follows:

fdisk /dev/sdb

Create a partition, /dev/sdb1 of type 8e (Linux LVM)

pvcreate /dev/sdb1
vgcreate cinder-volumes /dev/sdb1
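If you prefer to script the partitioning rather than drive fdisk interactively, a sketch using parted would look like the following (double-check the device name before running anything destructive):

# Label the disk, create one LVM partition spanning it, then build the VG
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 100%
parted -s /dev/sdb set 1 lvm on
pvcreate /dev/sdb1
vgcreate cinder-volumes /dev/sdb1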

Step 7: Running the OpenStack Ansible Deployment Playbooks

After the environment preparation has been done, we can simply run the Playbooks that perform all the tasks for setting up our multi-node lab. Note that these Playbooks are designed for any environment and not just for labs.

Verify that all of the configuration files in /etc/openstack_deploy, as described in Step 4, are suitable for your environment.

To install OpenStack, simply carry out the following

cd /opt/os-ansible-deployment
scripts/run-playbooks.sh

This wrapper script executes a number of playbooks, with a certain amount of retries built in to help with any transient failures. The Playbooks it runs can be seen in the script. Ultimately, the following gets run:

cd /opt/os-ansible-deployment
openstack-ansible playbooks/setup-hosts.yml
openstack-ansible playbooks/install-haproxy.yml
openstack-ansible playbooks/setup-infrastructure.yml
openstack-ansible playbooks/setup-openstack.yml

openstack-ansible is a wrapper script that tells Ansible where to find its environment files, so you don’t have to specify this on the command line yourself.

After running the run-playbooks.sh you will be presented with a summary of what was installed, how long it took, and how many times the action was retried:

[screenshot: run-playbooks.sh completion summary]

Step 8: Configure Glance Containers to use NFS

I configure Glance to use the local filesystem for its image store. This would be fine if there were only one Glance service, but there are three in this cluster. Each Glance service points at its own local /var/lib/glance directory, which means that if I uploaded an image it would only exist on one of the three servers.

To get around this, I use NFS and mount /var/lib/glance from an NFS server. I’ve configured my NFS server (my QNAP NAS) to give me a share called "/glance", which allows me to use Glance as if it were local. To do this, carry out the following steps:

  1. Each of the services runs in a container on our controller servers. The controller servers in my lab are openstack4, openstack5 and openstack6. Log into one of them and execute the following as root:
    lxc-ls -f
  2. This will produce a list of containers running on that server. One of them will be labelled something like controller-01_glance_YYYYY (where YYYYY is the ID associated with the container). Attach to it as follows:
    lxc-attach -n controller-01_glance_YYYYY
  3. Edit /etc/fstab to mount /var/lib/glance from my NFS server:
    192.168.1.2:/glance /var/lib/glance nfs rw,defaults 0 0
  4. Edit to suit your NFS server. Mount this as follows:
    mount -t nfs -a
  5. Ensure that this area is writeable by glance:glance as follows (only need to do this once):
    chown -R glance:glance /var/lib/glance
  6. For verification, the directory structure in here should be:
    /var/lib/glance/images
    /var/lib/glance/cache
    /var/lib/glance/scrubber
  7. Exit and then repeat on the two other Glance containers found on the other two controller nodes; a quick verification sketch follows after these steps.
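To confirm from a controller that a Glance container really is using the NFS share, something like the following can be run (the container name is an example; use whatever lxc-ls shows you):

# Check the mount and the image store inside the glance container
lxc-attach -n controller-01_glance_YYYYY -- df -h /var/lib/glance
lxc-attach -n controller-01_glance_YYYYY -- ls /var/lib/glance/images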

Step 9: Logging in to the OpenStack Environment

In my environment, I log into a controller node, which houses a “utility” container. This has all the tools needed to operate the OpenStack environment. It also has the randomly generated admin credentials found in /root/openrc so I can also log in using Horizon. To do this:

 ssh openstack4 # as root, one of my controllers
 lxc-ls -f # look for the utility container
 lxc-attach -n controller-01_utility_container_{random_n}

Once on the container, I can see the /root/openrc file. View this file and then use the same credentials to log into Horizon.

OpenStack Cloud Computing Cookbook @OpenStackBook 3rd Edition Progress

Hopefully by now you’re familiar with the extremely popular book for learning to use and deploy OpenStack, the OpenStack Cloud Computing Cookbook. This book presents you with a number of chapters relevant to running an OpenStack environment and the first edition was published at the beginning of the Grizzly release. In the 2nd Edition I picked up a co-author by the name of Cody Bunch and, after deciding to do a 3rd Edition, we picked up another co-author by the name of Egle Sigler. All of us work in the Professional Services arm of Rackspace on either side of the Atlantic so we know a thing or two about deploying and operating OpenStack. We’re now mid-cycle in writing the 3rd edition, which will be based on Juno – and we’re aiming to set our pens down for a publication by May 2015. It’s going to be a fight to reach this date, but as this is a book on OpenStack – we’re no strangers to challenges!

It’s now well into January of 2015 and great progress has been made whilst people have been thanking each other, giving presents and getting blind drunk because of a digit change in the year. One of our early challenges was getting a solid working OpenStack development base to work from that allowed us to work as a group on our assigned chapters of the book. Thankfully, a lot of the heavy lifting was done during the 2nd Edition, but a move to Juno, improving the security and defaults of our reference installation, and incorporating new features caused a few hiccups along the way. These have been resolved, and the result is a Vagrant environment that has one purpose: to help educate people who want to run and deploy OpenStack environments. This environment can be found at https://github.com/OpenStackCookbook/OpenStackCookbook.

This environment consists of a single Controller, up to 2 Compute hosts running Qemu/KVM, a Network node, a Cinder node, and up to 2 Swift hosts. It works hand-in-hand with the recipes in the OpenStack Cloud Computing Cookbook, allowing the reader to follow along and learn from the configurations presented and the reference environment. If you’re running the full complement of virtual machines as depicted, I highly recommend at least 16GB RAM. The environment I use to develop the scripts for the book is detailed here.

So what’s new in the OpenStack Cloud Computing Cookbook, 3rd Edition?

We’re still writing, but along with the basics you’d expect in our Cookbook, we have information on securing Keystone with SSL, Neutron with Distributed Virtual Routers (DVR), running multiple Swift environments and Container Synchronization, using AZs, Host Aggregates, Live-Migration, working with the Scheduler, a whole new chapter with recipes on using features such as LBaaS, FWaaS, Telemetry, Cloud-init/config, Heat, Automating installs, and a whole host of useful recipes for production use.

We hope you enjoy it as much as you have the first two editions, and we’ll update you nearer to finishing when we have a confirmed publication date.

Vagrant OpenStack Plugin 101: vagrant up –provider=openstack

Now that we can spin up a multi-node OpenStack environment very easily using Vagrant, we can take this further and use Vagrant to spin up OpenStack instances too, using the Vagrant OpenStack Plugin. To see how easy this is, follow the instructions below:

git clone https://github.com/mat128/vagrant-openstack.git
cd vagrant-openstack/
gem build vagrant-openstack.gemspec
vagrant plugin install vagrant-openstack-*.gem
vagrant box add dummy https://github.com/mat128/vagrant-openstack/raw/master/dummy.box
sudo apt-get install python-novaclient

With the plug-in installed for use with Vagrant, we can now configure the Vagrantfile. Remember the Vagrantfile is just a configuration file that lives in the working directory where any artifacts related to the virtual environment are kept. The following Vagrantfile contents can be used against the OpenStack Cloud Computing Cookbook Juno demo environment:

require 'vagrant-openstack'
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"
  config.vm.provider :openstack do |os|    # e.g.
    os.username = "admin"          # "#{ENV['OS_USERNAME']}"
    os.api_key  = "openstack"      # "#{ENV['OS_PASSWORD']}"
    os.flavor   = /m1.tiny/
    os.image    = /trusty/
    os.endpoint = "http://172.16.0.200:5000/v2.0/tokens" # "#{ENV['OS_AUTH_URL']}/tokens"
    os.keypair_name = "demokey"
    os.ssh_username = "ubuntu"
    os.public_network_name = "ext_net"
    os.networks = %w(ext_net)
    os.tenant = "cookbook"
    os.region = "regionOne"
  end
end

Once created, we can simply bring up this instance with the following command:

vagrant up --provider=openstack

Note: the network chosen must be a routable “public” network that is accessible from the Vagrant client; this is a limitation of creating instances this way at the moment. Also note that vagrant-openstack seems to get stuck at “Waiting for SSH to become available”; Ctrl+C at this point will drop you back to the shell.

Remote OpenStack Vagrant Environment

To coincide with the development of the 3rd Edition of the OpenStack Cloud Computing Cookbook, I decided to move my vagranting from the ever-increasing temperatures of my MBP to a Shuttle capable of spinning up multi-node OpenStack environments in minutes. I’ve found this very useful for convenience and speed, so I’m sharing the small number of steps to help you quickly get up to speed too.

The spec of the Shuttle is:

  • Shuttle XPC SH87R6
  • Intel i5 3.3GHz i5-4590
  • 2 x Crucial 8GB 1600MHz DDR3 Ballistix Sport
  • 1 x Seagate Desktop SSHD 1000GB 64MB Cache SATA 6 Gb/s 8GB SSD Cache Hybrid HDD

Running on here is Ubuntu 14.04 LTS, along with VirtualBox 4.3 and VMware Workstation 10. I decided to give one of those hybrid HDDs a spin, and can say the performance is pretty damn good for the price. All in all, this is a quiet little workhorse sitting under my desk.

To have this as part of my work environment (read: my MBP), I connect to this using SSH and X11 courtesy of XQuartz. XQuartz, once installed on the Mac, allows me to access my remote GUI on my Shuttle as you’d expect from X11 (ssh -X …). This is useful when running the GUI of VMware Workstation and VirtualBox – as well as giving me a hop into my virtual environment running OpenStack (that exists only within my Shuttle) by allowing me to run remote web browsers that have the necessary network access to my Virtual OpenStack environment.

[screenshot: Firefox running over X11 forwarding from the Shuttle]

With this all installed and accessible on my network, I grab the OpenStack Cloud Computing Cookbook scripts (that we’re updating for Juno and the in-progress 3rd Edition) from GitHub and can bring up a multi-node Juno OpenStack environment running in either VirtualBox or VMware Workstation in just over 15 minutes.

Once OpenStack is up and running, I can then run the demo.sh script that we provide to launch 2 networks (one private, one public), with an L3 router providing floating IPs, and an instance that I’m able to access from a shell on my Shuttle. Despite the Shuttle being remote, I can browse the OpenStack Dashboard with no issues, and without VirtualBox or VMware consuming resources on my trusty MBP.