Category Archives: Ubuntu

I have an OpenStack environment, now what? Loading Images into Glance #OpenStack 101

With an OpenStack environment up and running based on an OpenStack Ansible Deployment, now what?

Using Horizon with OSAD

First, we can log into Horizon (point your web browser at your load balancer pool address, the one labelled external_lb_vip_address in /etc/openstack_deploy/openstack_user_config.yml):

global_overrides:
  internal_lb_vip_address: 172.29.236.107
  external_lb_vip_address: 192.168.1.107
  lb_name: haproxy

Where are the username/password credentials for Horizon?

In step 4.5 of https://openstackr.wordpress.com/2015/07/19/home-lab-2015-edition-openstack-ansible-deployment/ we randomly generated all passwords used by OpenStack. This also generated a random password for the ‘admin‘ user. This user is the equivalent of ‘root’ on a Linux system, so generating a strong password is highly recommended. But to get that password, we need to get it out of a file.

The easiest place to find this password is on the deployment host itself, as that is where we wrote out the passwords. Take a look in the /etc/openstack_deploy/user_secrets.yml file and find the line that says ‘keystone_auth_admin_password‘. This random string of characters is the ‘admin‘ user’s password that you can use for Horizon:

keystone_auth_admin_password: bfbbb99316ae0a4292f8d07cd4db5eda2578c5253dabfa0
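If you just want that one line on the deployment host, a quick grep should pull it out:

grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml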

[Screenshot: logging into Horizon as the admin user]

The Utility Container and openrc credentials file

Alternatively, you can grab the ‘openrc‘ file from a ‘utility’ container which is found on a controller node. To do this, carry out the following:

  1. Log into a controller node and change to root. In my case I can choose either openstack4, openstack5 or openstack6. I can list the containers running on it as follows:
    lxc-ls -f

    This brings back output like the following:
    [Screenshot: lxc-ls -f output on openstack4 (click to enlarge)]

  2. Locate the name of the utility container and attach to it as follows
    lxc-attach -n controller-01_utility_container-71cceb47
  3. Here you will find the admin user’s credentials in the /root/openrc file:
    cat openrc
    
    
    
    # Do not edit, changes will be overwritten
    # COMMON CINDER ENVS
    export CINDER_ENDPOINT_TYPE=internalURL
    # COMMON NOVA ENVS
    export NOVA_ENDPOINT_TYPE=internalURL
    # COMMON OPENSTACK ENVS
    export OS_ENDPOINT_TYPE=internalURL
    export OS_USERNAME=admin
    export OS_PASSWORD=bfbbb99316ae0a4292f8d07cd4db5eda2578c5253dabfa0
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://172.29.236.107:5000/v2.0
    export OS_NO_CACHE=1
  4. To use this, we simply source this into our environment as follows:
    . openrc

    or

    source openrc
  5. And now we can use the command line tools such as nova, glance, cinder, keystone, neutron and heat.
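For example, a few quick checks that the sourced credentials work (just a sketch; the exact output will depend on your deployment):

keystone token-get     # confirms authentication against Keystone works
nova service-list      # shows the compute services and their state
glance image-list      # lists any images currently registered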

Loading images into Glance

Glance is the Image Service. This service provides you with a list of available images you can use to launch instances in OpenStack. To do this, we use the Glance command line tool.

There are plenty of public images available for OpenStack. You essentially grab them from the internet, and load them into Glance for your use. A list of places for OpenStack images can be found below:

CirrOS test image (can use username/password to log in): http://download.cirros-cloud.net/

Ubuntu images: http://cloud-images.ubuntu.com/

Windows 2012 R2: http://www.cloudbase.it/

CentOS 7: http://cloud.centos.org/centos/7/images/

Fedora: https://getfedora.org/en/cloud/download/

To load these, log into a utility container as described above and load them into the environment as follows.

Note that you can either grab the files from the website, save them locally and upload to Glance, or have Glance grab the files and load into the environment direct from the site. I’ll describe both as you will have to load from a locally saved file for Windows due to having to accept an EULA before gaining access.

CirrOS

glance image-create \
  --name "cirros-image" \
  --disk-format qcow2 \
  --container-format bare \
  --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img \
  --is-public True \
  --progress

You can use a username and password to log into CirrOS. This makes this tiny just-enough-OS great for testing and troubleshooting. Username: cirros, Password: cubswin:)
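Once the copy-from above completes, the image should show as active; a quick check (the image name matches the one used above):

glance image-list | grep cirros-image
glance image-show cirros-image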

Ubuntu 14.04

glance image-create \
  --name "trusty-image" \
  --disk-format qcow2 \
  --container-format bare \
  --copy-from http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img \
  --is-public True \
  --progress

You’d specify a keypair to use when launching this image as there is no default password on these cloud images [that would be a disastrous security fail if there were]. The username to log into these is ‘ubuntu‘, and the private key that matches the public key specified at launch gets you access.

Windows 2012 R2

For Windows, you can download an evaluation copy of Windows 2012 R2 and to do so you need to accept a license. Head over to http://www.cloudbase.it/ and follow the instructions to download the image.

Once downloaded, you need to get this to OpenStack. As we’re using the utility container for our access, we need to put the image somewhere accessible from there. There are alternatives, such as installing the OpenStack client tools on your own machine, which is ultimately how you’d use OpenStack. For now though, we will copy it to the utility container.

  1. Copy the Windows image to the Utility Container. All of the containers have an IP on the container ‘management’ network (172.29.236.0/24 in my lab). View the IP address of the Utility container and use this IP. This network is available via my deployment host so I simply secure copy this over to the container:

    (performed as root on my deployment host as that has SSH access using keypairs to the containers)

    scp Windows2012R2.qcow2 root@172.29.236.85:
  2. We can then upload this to Glance as follows, note the use of --file instead of --copy-from:
    glance image-create \
      --name "windows-image" \
      --disk-format qcow2 \
      --container-format bare \
      --file ./Windows2012R2.qcow2 \
      --is-public True \
      --progress

    This will take a while as the Windows images are naturally bigger than Linux ones. Once uploaded it will be available for our use.

Access to Windows instances is by RDP. Although SSH keypairs are not used by this Windows image for RDP access, a keypair is still required to retrieve the randomly generated ‘Administrator’ password, so specify one when launching the Windows instance.

Access to the Administrator password is then carried out using the following once you’ve launched an instance:

nova get-password myWindowsInstance .ssh/id_rsa
Launching instances will be covered in a later topic!

Home Lab 2015 Edition: OpenStack Ansible Deployment

I’ve written a few blog posts on my home lab before, and since then it has evolved slightly to account for new and improved ways of deploying OpenStack – specifically how Rackspace deploy OpenStack using the Ansible OpenStack Deployment Playbooks. Today, my lab consists of the following:

  • 7 HP MicroServers (between 6GB and 8GB RAM, with SSDs), each with 2 NICs in use.
  • 1 Server (Shuttle i5, 16GB RAM, 2TB disk) as a host to run virtual environments using Vagrant and as the Ansible deployment host for my OpenStack environment. This also has 2 NICs.
  • 1 x 24-Port Managed Switch

The environment looks like this:

[Diagram: home lab environment, 2015 edition (click to enlarge)]

In the lab environment I allocate the 7 servers to OpenStack as follows

  • openstack1 – openstack3: Nova Computes (3 Hypervisors running KVM)
  • openstack4 – openstack6: Controllers
  • openstack7: Cinder + HA Proxy

This environment was the test lab for many of the chapters of the OpenStack Cloud Computing Cookbook.

With this environment, to install OpenStack using the Ansible Playbooks, I essentially do the following steps:

  1. PXE Boot Ubuntu 14.04 across my 7 OpenStack servers
  2. Configure the networking to add all needed bridges, using Ansible
  3. Configure OSAD deployment by grabbing the pre-built configuration files
  4. Run the OSAD Playbooks

From PXE Boot-to-OpenStack, the lab gets deployed in about 2 hours.

Network Setup

In my lab I use the following subnets on my network

  • Host network: 192.168.1.0/24
  • Container-to-container network: 172.29.236.0/24
  • VXLAN Neutron Tunnel Network: 172.29.240.0/24

The hosts on my network are on the following IP addresses

  • OpenStack Servers (openstackX.lab.openstackcookbook.com) (HP MicroServers)
    • br-host (em1): 192.168.1.101 – 192.168.1.107
    • br-mgmt (p2p1.236): 172.29.236.101 – 172.29.236.107
    • br-vxlan (p2p1.240): 172.29.240.101 – 172.29.240.107
  • Deployment Host (Shuttle)
    • 192.168.1.20
    • 172.29.236.20
    • 172.29.240.20

The OpenStack Servers have their addresses laid out on the interfaces as follows:

  • em1 – host network untagged/native VLAN
    • Each server is configured so that the onboard interface, em1 (in the case of the HP MicroServers running Ubuntu 14.04), is untagged on my home network on 192.168.1.0/24.
  • p2p1 – interface used by the OpenStack environment
    • VLAN 236
      • Container to Container network
      • 172.29.236.0/24
    • VLAN 240
      • VXLAN Tunnel Network
      • 172.29.240.0/24
    • Untagged/Native VLAN
      • As we want to create VLAN-type networks in OpenStack, Neutron uses this interface to add those extra VLAN tags

Step 1: PXE Boot OpenStack Servers to Ubuntu 14.04 with a single interface on host network

I’m not going to explain PXE booting; there are plenty of guides on the internet for setting it up. The result should be Ubuntu 14.04 with a single interface configured. This network is the host network when referring to the OpenStack environment.

Step 2: Configure Ansible on the Deployment Host (Shuttle) [If not already]

We’ll be using Ansible to configure and deploy the OpenStack lab environment. First of all, to ensure we’ve got everything we need to run Ansible and subsequently the OpenStack Ansible Deployment (OSAD), we check out the OSAD Playbooks and run the Ansible bootstrap script they provide. For this we’re using a Kilo release (as denoted by the tag 11.0.4; K is the 11th letter of the alphabet).

  cd /opt
  git clone -b 11.0.4 https://github.com/stackforge/os-ansible-deployment.git
  cd /opt/os-ansible-deployment
  scripts/bootstrap-ansible.sh

Step 3: Configure Networking

Each of the OpenStack servers (openstack1-7) and deployment host are all configured with the same network configuration (Note: technically the Cinder/HA Proxy host and Deployment host don’t need access to the networks used by Neutron (br-vxlan and br-vlan), but for simplicity in my lab, all hosts are always configured the same). To do this I use an Ansible Playbook to set up the /etc/network/interfaces files to give the following:

  • br-host – 192.168.1.0/24
    • Bridge ports: em1
    • I move the interface on the host network into a bridge, and move its IP address onto that bridge. This is so I can use my host network as a Flat external provider network in my lab. This is not required for many OpenStack environments but is useful for my lab.
  • br-mgmt – 172.29.236.0/24
    • Bridge ports: p2p1.236
    • OSAD uses LXC Containers for deployment of the services. This network is used for inter-communication between the containers (such as OpenStack services on one server or another container communicating with other OpenStack services), and between the host and containers. To install OpenStack using the OSAD Playbooks, the deployment host needs access on this network too.
  • br-vxlan – 172.29.240.0/24
    • Bridge ports: p2p1.240
    • VXLAN is an overlay network and Neutron creates a point-to-point mesh network using endpoints on this address to create the VXLAN networks.
  • br-vlan – address unassigned, untagged
    • Bridge ports: p2p1
    • This interface is completely managed by Neutron. It uses this interface to assign further VLAN networks in our environment.

Ansible requires a persistent connection to the servers while it executes the Playbooks, so it isn’t possible to use Ansible to move the IP from the existing em1 interface to br-host: we’re using that very network to make the connection. We therefore do this manually before creating the other bridges.

To move the IP address from em1, and move em1 to a bridge called br-host carry out the following:

  1. Let’s first ensure we have the right tools available for Ubuntu so it can create bridges:
      sudo apt-get update
      sudo apt-get install bridge-utils
  2. Comment out em1 in /etc/network/interfaces:
    # auto em1
    # iface em1 inet dhcp
  3. Create the bridge in the interfaces.d includes directory, /etc/network/interfaces.d/ifcfg-br-host, with em1 as the bridge port and the IP that was previously assigned to em1 now set on the bridge itself:
    auto br-host
    iface br-host inet static
    bridge_ports em1
    address 192.168.1.101
    netmask 255.255.255.0
    gateway 192.168.1.254
    bridge_stp off
  4. We then tell Ubuntu to source in this directory when setting up the interfaces by adding the following to /etc/network/interfaces:
    source /etc/network/interfaces.d/*
  5. Repeat for all OpenStack hosts, making sure the IP address in step 3 is updated accordingly, and reboot each host to pick up the change.

With the environment up and running, when we have a look at what we have set up we should see the following:

ip a
4: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br-host state UP group default qlen 1000
    link/ether 9c:b6:54:04:50:94 brd ff:ff:ff:ff:ff:ff
5: br-host: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 9c:b6:54:04:50:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.101/24 brd 192.168.1.255 scope global br-host
       valid_lft forever preferred_lft forever
    inet6 fe80::9eb6:54ff:fe04:5094/64 scope link 
       valid_lft forever preferred_lft forever
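brctl, from the bridge-utils package installed earlier, gives another quick view of the bridge membership (the output shown as comments is illustrative, taken from the ip output above):

brctl show br-host
# bridge name   bridge id            STP enabled  interfaces
# br-host       8000.9cb654045094    no           em1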

At this stage we can configure an Ansible Playbook to set up the networking on our OpenStack hosts.

  1. First checkout the following set of Playbooks:
      cd /opt
      git clone https://github.com/uksysadmin/openstack-lab-setup.git
      cd openstack-lab-setup
  2. This set of Playbooks is based on https://github.com/bennojoy/network_interface and other things to help me set up OSAD. To configure it, first create a file (and directory) /etc/ansible/hosts with the following contents:
    [openstack-servers]
    openstack[1:7]
  3. This tells Ansible that I have 7 servers with hostnames accessible on my network called openstack1, openstack2… openstack7. The next step is to configure these in the /opt/openstack-lab-setup/host_vars directory of the Playbooks checked out in step 1. This directory has files that match the host names specified in step 2, meaning we have 7 files in host_vars named openstack1, openstack2 all the way to openstack7. Edit host_vars/openstack1 with the following contents:
    roles:
      - role: network_interface
    network_vlan_interfaces:
      - device: p2p1
        vlan: 236
        bootproto: manual
      - device: p2p1
        vlan: 240
        bootproto: manual
    network_bridge_interfaces:
      - device: br-mgmt
        type: bridge
        address: 172.29.236.101
        netmask: 255.255.255.0
        bootproto: static
        stp: "off"
        ports: [p2p1.236]
      - device: br-vxlan
        type: bridge
        address: 172.29.240.101
        netmask: 255.255.255.0
        bootproto: static
        stp: "off"
        ports: [p2p1.240]
      - device: br-vlan
        type: bridge
        bootproto: manual
        stp: "off"
        ports: [p2p1]
  4. As you can see, we’re describing the bridges and interfaces, as well as the IP addresses that will be used on the hosts. The only difference between each of these files will be the IP addresses, so we now need to update the rest of the files, host_vars/openstack2 through host_vars/openstack7, with the correct IP addresses.
      for a in {2..7}
      do 
        sed "s/101/10${a}/g" host_vars/openstack1 > host_vars/openstack${a}
      done
  5. With the IP addresses all configured we can run the Playbook to configure these on each of our OpenStack hosts as follows:
    cd /opt/openstack-lab-setup
    ansible-playbook setup-network.yml
  6. Once this completes with no errors, check that all is OK (and that your deployment host has the correct networking set up too) by using fping to ping across all the networks that we will be using in our OpenStack environment:
    fping -g 192.168.1.101 192.168.1.107
    fping -g 172.29.236.101 172.29.236.107
    fping -g 172.29.240.101 172.29.240.107
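As well as fping, Ansible itself can confirm it can reach every host (a minimal check, assuming the /etc/ansible/hosts inventory from step 2 and root SSH keys are already in place):

ansible openstack-servers -m ping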

Step 4: Configure OpenStack Ansible Deployment

With the networking set up, we can now configure the deployment so we’re ready to run the scripts to install OpenStack with no further involvement.

  1. If you’ve not grabbed the OSAD Playbooks, do so now
      cd /opt
      git clone -b 11.0.4 https://github.com/stackforge/os-ansible-deployment.git
      cd /opt/os-ansible-deployment
  2. We need to copy the configuration files to /etc/openstack_deploy
      cd etc
      cp -R openstack_deploy /etc
  3. At this point, we would configure the files to describe how OpenStack will get deployed in the lab environment. I simply grab the required files that I’ve pre-configured for my lab.
      # If you've not checked this out already to do the Networking section
      cd /opt
      git clone https://github.com/uksysadmin/openstack-lab-setup.git
      # The pre-configured files are in here
      cd /opt/openstack-lab-setup
  4. Copy the files openstack_user_config.yml, user_group_vars.yml, user_variables.yml to /etc/openstack_deploy
    
      cp openstack_user_config.yml user_group_vars.yml user_variables.yml /etc/openstack_deploy
  5. We now need to generate the random passwords for the OpenStack services. These get written to the file /etc/openstack_deploy/user_secrets.yml
      cd /opt/os-ansible-deployment
      scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
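A quick sanity check that every password field was filled in; this is just a sketch, and it should print nothing if generation succeeded:

grep ': *$' /etc/openstack_deploy/user_secrets.yml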

That’s it for configuration. Feel free to edit /etc/openstack_deploy/*.yml files to suit your environment.

(Optional) Step 5: Create a local repo mirror + proxy environment {WARNING!}

The file /etc/openstack_deploy/user_group_vars.yml has entries in it that are applicable to my lab environment:

#openstack_upstream_domain: "rpc-repo.rackspace.com"
openstack_upstream_domain: "internal-repo.lab.openstackcookbook.com"

Edit this file so it is applicable to your environment. If you do not plan on creating a local repo, use rpc-repo.rackspace.com which is available to everyone.

OSAD pulls down specific versions of code, to ensure consistency with releases, from servers hosted at rackspace.com. Rather than do this each time, which would cause unnecessary traffic over a relatively slow internet connection, I create this repo mirror once. I create it on the deployment server (my Shuttle on 192.168.1.20) as that remains constant regardless of how many times I tear down and spin up my lab.

To create this carry out the following:

  1. First ensure you’ve enough space on your deployment host. The repo is 9.6G in size when doing a full sync.
  2. I create the mirror in /openstack/mirror and do the sync using rsync as follows:
     mkdir -p /openstack/mirror
     rsync -avzlHAX --exclude=/repos --exclude=/mirror --exclude=/rpcgit \
        --exclude=/openstackgit --exclude=/python_packages \
        rpc-repo.rackspace.com::openstack_mirror /openstack/mirror/
  3. This will take a while depending on your bandwidth, so grab a coffee or go for a sleep.
  4. Ensure this directory, /openstack/mirror, is served by a web server such as Nginx or Apache, and that the hostname matches {{ openstack_upstream_domain }}. When you use a web browser to view http://rpc-repo.rackspace.com/ it should look the same as your internal version (e.g. http://internal-repo.lab.openstackcookbook.com). Edit DNS and your web server configuration to suit; a quick comparison check is sketched below.
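A quick comparison of the upstream and internal repos once the web server is in place (hostnames are the ones from my lab; substitute your own):

curl -s http://rpc-repo.rackspace.com/ | head
curl -s http://internal-repo.lab.openstackcookbook.com/ | head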

I also have another amendment, found in the file /etc/openstack_deploy/user_variables.yml. This has entries that are applicable to my lab environment, which uses apt-cacher:

## Example environment variable setup:
proxy_env_url: http://apt-cacher:3142/
no_proxy_env: "localhost,127.0.0.1,{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"

## global_environment_variables:
HTTP_PROXY: "{{ proxy_env_url }}"
HTTPS_PROXY: "{{ proxy_env_url }}"
NO_PROXY: "{{ no_proxy_env }}"

If you’re not using apt-cacher, or are using something else, edit this to suit or ensure these lines are commented out so that no proxy is used.
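If you are using apt-cacher, a quick reachability check from one of the hosts confirms the proxy is listening (the hostname and port are the ones from my settings above):

curl -sI http://apt-cacher:3142/ | head -n1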

Step 6: Configure Cinder Storage

Cinder runs on openstack7 in my environment. On here I have a number of disks, and one of them is used for Cinder.

To configure a disk for Cinder, we create a volume group called cinder-volumes as follows:

fdisk /dev/sdb

Create a partition, /dev/sdb1 of type 8e (Linux LVM)

pvcreate /dev/sdb1
vgcreate cinder-volumes /dev/sdb1
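If you prefer a non-interactive version of the fdisk step, something like the following also works (a sketch only; it assumes /dev/sdb is blank and safe to repartition):

parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 100%
parted -s /dev/sdb set 1 lvm on
pvcreate /dev/sdb1
vgcreate cinder-volumes /dev/sdb1
vgs cinder-volumes    # confirm the volume group exists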

Step 7: Running the OpenStack Ansible Deployment Playbooks

After the environment preparation has been done, we can simply run the Playbooks that perform all the tasks for setting up our multi-node lab. Note that these Playbooks are designed for any environment and not just for labs.

Verify that all of the configuration files in /etc/openstack_deploy, as described in Step 4, are suitable for your environment.

To install OpenStack, simply carry out the following

cd /opt/os-ansible-deployment
scripts/run-playbooks.sh

This wrapper script executes a number of playbooks, with a certain amount of retries built in to help with any transient failures. The Playbooks it runs can be seen in the script. Ultimately, the following gets run:

cd /opt/os-ansible-deployment
openstack-ansible playbooks/setup-hosts.yml
openstack-ansible playbooks/install-haproxy.yml
openstack-ansible playbooks/setup-infrastructure.yml
openstack-ansible playbooks/setup-openstack.yml

openstack-ansible is a wrapper script that tells Ansible where to find its environment files, so you don’t have to specify this on the command line yourself.

After running the run-playbooks.sh you will be presented with a summary of what was installed, how long it took, and how many times the action was retried:

[Screenshot: run-playbooks.sh completion summary (click for a bigger version)]

Step 8: Configure Glance Containers to use NFS

I configure Glance to use the local filesystem for its image location. This would be fine if there were only one Glance service, but there are three in this cluster. Each individual Glance service points to its local /var/lib/glance directory, which means that if I uploaded an image it would only exist on one of the three servers.

To get around this, I use NFS and mount /var/lib/glance from an NFS server. I’ve configured my NFS server (my QNAP NAS) to give me a share called “/glance” which allows me to use Glance as if it were local. To do this, carry out the following steps:

  1. Each of the services runs in containers on our controller servers. The controller servers in my lab are openstack4, openstack5 and openstack6. Log into one of these and execute the following as root:
    lxc-ls -f
  2. This will produce a list of containers running on that server. One of them will be labelled like controller-01_glance_YYYYY (where YYYYY is the UUID associated with the container). Attach yourself to this as follows:
    lxc-attach -n controller-01_glance_YYYYY
  3. Edit /etc/fstab to mount /var/lib/glance from my NFS server:
    192.168.1.2:/glance /var/lib/glance nfs rw,defaults 0 0
  4. Edit to suit your NFS server. Mount this as follows:
    mount -t nfs -a
  5. Ensure that this area is writeable by glance:glance as follows (only need to do this once):
    chown -R glance:glance /var/lib/glance
  6. The directory structure in here should be (for verification)
    /var/lib/glance/images
    /var/lib/glance/cache
    /var/lib/glance/scrubber
  7. Exit and then repeat on the 2 other Glance containers found on the other two controller nodes.
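To confirm the share is mounted and writeable without attaching interactively, something like this works from the controller (the container name is an example; use the one reported by lxc-ls -f):

lxc-attach -n controller-01_glance_YYYYY -- df -h /var/lib/glance
lxc-attach -n controller-01_glance_YYYYY -- su -s /bin/sh -c 'touch /var/lib/glance/images/.rw-test' glance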

Step 9: Logging in to the OpenStack Environment

In my environment, I log into a controller node, which houses a “utility” container. This has all the tools needed to operate the OpenStack environment. It also has the randomly generated admin credentials found in /root/openrc so I can also log in using Horizon. To do this:

 ssh openstack4 # as root, one of my controllers
 lxc-ls -f # look for the utility container
 lxc-attach -n controller-01_utility_container_{random_n}

Once on the container, there is a file here, /root/openrc. View this file, and then use the same credentials to log into Horizon.

OpenStack Cloud Computing Cookbook @OpenStackBook 3rd Edition Progress

Hopefully by now you’re familiar with the extremely popular book for learning to use and deploy OpenStack, the OpenStack Cloud Computing Cookbook. This book presents you with a number of chapters relevant to running an OpenStack environment and the first edition was published at the beginning of the Grizzly release. In the 2nd Edition I picked up a co-author by the name of Cody Bunch and, after deciding to do a 3rd Edition, we picked up another co-author by the name of Egle Sigler. All of us work in the Professional Services arm of Rackspace on either side of the Atlantic so we know a thing or two about deploying and operating OpenStack. We’re now mid-cycle in writing the 3rd edition, which will be based on Juno – and we’re aiming to set our pens down for a publication by May 2015. It’s going to be a fight to reach this date, but as this is a book on OpenStack – we’re no strangers to challenges!

It’s now well into January of 2015 and great progress has been made whilst people have been thanking each other, giving presents and getting blind drunk because of a digit change in the year. One of our early challenges was getting a solid working OpenStack development base to work from that allowed us to work as a group on our assigned chapters of the book. Thankfully, a lot of the heavy lifting was done during the 2nd Edition, but a move to Juno, improving the security and defaults of our reference installation, and incorporating new features caused a few hiccups along the way. These have been resolved, and the result is a Vagrant environment that has one purpose: to help educate people who want to run and deploy OpenStack environments. This environment can be found at https://github.com/OpenStackCookbook/OpenStackCookbook.

This environment consists of a single Controller, up to 2 Compute hosts running Qemu/KVM, a Network node, a Cinder node, and up to 2 Swift hosts. This environment works hand-in-hand with the recipes in the OpenStack Cloud Computing Cookbook – allowing the reader to follow along and learn from the configurations presented and reference environment. If you’re running the full complement of virtual machines as depicted, I highly recommend at least 16Gb RAM. The environment I use to develop the scripts for the book is detailed here.

So what’s new in the OpenStack Cloud Computing Cookbook, 3rd Edition?

We’re still writing, but along with the basics you’d expect in our Cookbook, we have information on securing Keystone with SSL, Neutron with Distributed Virtual Routers (DVR), running multiple Swift environments and Container Synchronization, using AZs, Host Aggregates, Live-Migration, working with the Scheduler, a whole new chapter with recipes on using features such as LBaaS, FWaaS, Telemetry, Cloud-init/config, Heat, Automating installs, and a whole host of useful recipes for production use.

We hope you enjoy it as much as you have with the first two editions, and we’ll update you near to finishing when we have a confirmed publication date.

Vagrant OpenStack Plugin 101: vagrant up --provider=openstack

Now that we have a multi-node OpenStack environment spun up very easily using Vagrant, we can take this further by using Vagrant to spin up OpenStack instances too, using the Vagrant OpenStack Plugin. To see how easy this is, follow the instructions below:

git clone https://github.com/mat128/vagrant-openstack.git
cd vagrant-openstack/
gem build vagrant-openstack.gemspec
vagrant plugin install vagrant-openstack-*.gem
vagrant box add dummy https://github.com/mat128/vagrant-openstack/raw/master/dummy.box
sudo apt-get install python-novaclient

With the plug-in installed for use with Vagrant, we can now configure the Vagrantfile. Remember that the Vagrantfile is just a configuration file that lives in the directory of the working environment, where any artifacts related to the virtual environment are kept. The following Vagrantfile contents can be used against the OpenStack Cloud Computing Cookbook Juno demo environment:

require 'vagrant-openstack'
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"
  config.vm.provider :openstack do |os|    # e.g.
    os.username = "admin"          # "#{ENV['OS_USERNAME']}"
    os.api_key  = "openstack"      # "#{ENV['OS_PASSWORD']}"
    os.flavor   = /m1.tiny/
    os.image    = /trusty/
    os.endpoint = "http://172.16.0.200:5000/v2.0/tokens" # "#{ENV['OS_AUTH_URL']}/tokens"
    os.keypair_name = "demokey"
    os.ssh_username = "ubuntu"
    os.public_network_name = "ext_net"
    os.networks = %w(ext_net)
    os.tenant = "cookbook"
    os.region = "regionOne"
  end
end

Once created, we can simply bring up this instance with the following command:

vagrant up --provider=openstack

Note: the network chosen must be a routable “public” network that is accessible from the Vagrant client; this is a limitation for creating these instances at this time. Also note that vagrant-openstack seems to get stuck at “Waiting for SSH to become available”. Ctrl + C at this point will drop you back to the shell.
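Even if the SSH wait hangs, the instance itself should be up. A quick way to confirm from both sides (assuming the cookbook environment’s credentials are sourced in your shell):

nova list          # the Vagrant-created instance should show as ACTIVE
vagrant status     # Vagrant's view of the machine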

Remote OpenStack Vagrant Environment

To coincide with the development of the 3rd Edition of the OpenStack Cloud Computing Cookbook, I decided to move my vagranting from the ever-increasing temperatures of my MBP to a Shuttle capable of spinning up multi-node OpenStack environments in minutes. I’ve found this very useful for convenience and speed, so I’m sharing the small number of steps to help you quickly get up to speed too.

The spec of the Shuttle is:

  • Shuttle XPC SH87R6
  • Intel i5 3.3GHz i5-4590
  • 2 x Crucial 8Gb 1600MHz DDR3 Ballistix Sport
  • 1 x Seagate Desktop SSHD 1000GB 64MB Cache SATA 6 Gb/s 8GB SSD Cache Hybrid HDD

Running on here is Ubuntu 14.04 LTS, along with VirtualBox 4.3 and VMware Workstation 10. I decided to give one of those hybrid HDDs a spin, and can say the performance is pretty damn good for the price. All in all, this is a quiet little workhorse sitting under my desk.

To have this as part of my work environment (read: my MBP), I connect to this using SSH and X11 courtesy of XQuartz. XQuartz, once installed on the Mac, allows me to access my remote GUI on my Shuttle as you’d expect from X11 (ssh -X …). This is useful when running the GUI of VMware Workstation and VirtualBox – as well as giving me a hop into my virtual environment running OpenStack (that exists only within my Shuttle) by allowing me to run remote web browsers that have the necessary network access to my Virtual OpenStack environment.

[Screenshot: Firefox running remotely over X11 forwarding]

With this all installed and accessible on my network, I grab the OpenStack Cloud Computing Cookbook scripts (that we’re updating for Juno and the in-progress 3rd Edition) from GitHub and can bring up a multi-node Juno OpenStack environment running in either VirtualBox or VMware Workstation in just over 15 minutes.

Once OpenStack is up and running, I can then run the demo.sh script that we provide to launch 2 networks (one private, one public), with an L3 router providing floating IPs, and an instance that I’m able to access from a shell on my Shuttle. Despite the Shuttle being remote, I can browse the OpenStack Dashboard with no issues, and without VirtualBox or VMware consuming resources on my trusty MBP.

Bandwidth monitoring with Neutron and Ceilometer

OpenStack Telemetry (aka Ceilometer) gives you access to a wealth of information – and a particularly interesting stat I wanted access to was outgoing bandwidth of my Neutron networks. Out of the box, Ceilometer gives you a rolled up cumulative stat for this with the following:

ceilometer sample-list --meter network.outgoing.bytes

This produces output like the following:

[Screenshot: ceilometer sample-list output showing cumulative samples]

This is fine, but when you’re trying to break down the network stats for something useful, like billing since the beginning of the month, this makes it tricky – even though I’m sure I shouldn’t have to do this! I followed this excellent post http://cjchand.wordpress.com/2014/01/16/transforming-cumulative-ceilometer-stats-to-gauges/ which describes this use-case brilliantly (and goes further with Logstash and Elasticsearch), showing how you can use Ceilometer’s pipeline to transform data into a format that suits your use case. The key pieces of information from this article were as follows:

1. Edit the /etc/ceilometer/pipeline.yaml of your Controller and Compute hosts and add in the following lines

-
    name: network_stats
    interval: 10
    meters:
        - "network.incoming.bytes"
        - "network.outgoing.bytes"
    transformers:
        - name: "rate_of_change"
          parameters:
              target:
                  type: "gauge"
                  scale: "1"
    publishers:
        - rpc://

2. Restart the ceilometer-agent-compute on each host

restart ceilometer-agent-compute

That’s it. We now have a “gauge” for our incoming and outgoing Neutron traffic – which means the samples are fixed pieces of data applicable to the time range associated with them (e.g. the number of bytes in the last 10-second sample set).

[Screenshot: ceilometer sample-list output showing gauge samples]

I wanted to see if I could answer the following question with this data: how much outgoing bandwidth was used for any given network over a period of time (i.e. since the start of the month)? It could be that I misunderstand the cumulative stats (which tease me), but this was a useful exercise nonetheless! Now here is my ask of you: my bash fu below can be improved somewhat and I’d love to see it! My limited knowledge of Ceilometer, plus the lure of Bash, awk and sed, provided me with a script that outputs the following:

[Screenshot: per-network outgoing bandwidth report produced by the script]

The (OMFG WTF has Kev written now) script, linked below, loops through each of your Neutron networks and outputs the number of outgoing bytes per Neutron network. Just make sure you’ve got ‘bc’ installed to calculate the floating-point numbers. If I’ve used and abused Ceilometer and awk too much, let me know!

https://raw.githubusercontent.com/OpenStackCookbook/OpenStackCookbook/icehouse/bandwidth_report.sh
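If you only need a rough total rather than the per-network breakdown, Ceilometer’s built-in statistics call gets part of the way there (a sketch; adjust the timestamp to suit, and add a resource query to narrow it to one network):

ceilometer statistics -m network.outgoing.bytes -q 'timestamp>2015-01-01T00:00:00'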

Home Rackspace Private Cloud / OpenStack Lab: Bare-Metal to 7 Node Lab

Over the past few weeks I’ve been writing a series of blog posts explaining my home Rackspace Private Cloud powered by OpenStack lab – the posts are indexed below:

More to come! Subscribe to my blog and follow me at @itarchitectkev for updates