Monthly Archives: February 2014

Home Rackspace Private Cloud / OpenStack Lab: Bare-Metal to 7 Node Lab

Over the past few weeks I’ve been writing a series of blog posts explaining my home Rackspace Private Cloud powered by OpenStack lab – the posts are indexed below:

More to come! Subscribe to my blog and follow me at @itarchitectkev for updates


Home Rackspace Private Cloud / OpenStack Lab: Part 5

Adding Extra Compute Nodes to Rackspace Private Cloud

The first four of these posts covered setup and installation of my home lab, including the networking, PXE booting Ubuntu and installation of Rackspace Private Cloud. I ended up with 2 Controllers in HA, and 3 Computes.

In this post I show how easy it is to add 2 extra Compute nodes to the lab.

The extra nodes are HP N54L MicroServers. They’re 2.2GHz AMD Turion II machines that come with 250Gb HDD and 2Gb RAM. I add an Integral 4Gb DIMM to each as well as an extra TP-Link NIC.

The first thing to do is prep my network services so I can PXE boot. This includes adding the new servers to DNS and DHCP (static IP assignment from MAC). As I use my QNAP TS-210 (192.168.1.1) as my DNS and DHCP service (courtesy of Dnsmasq), I add the following to /etc/hosts on it:

192.168.1.106 openstack6.home.mydomain.co.uk openstack6
192.168.1.107 openstack7.home.mydomain.co.uk openstack7

I then open up /opt/etc/dnsmasq.conf and add in the static MAC-to-IP assignments:

dhcp-host=64:70:02:10:88:66,192.168.1.106,infinite
dhcp-host=64:70:02:10:88:99,192.168.1.107,infinite

After reloading the dnsmasq service (/opt/etc/init.d/S56dnsmasq restart) I’m ready to PXE boot the servers. See this post for details of my PXE Boot setup using the QNAP NAS boxes.
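Before kicking off the PXE boot, a quick sanity check is worth doing. This is just a minimal sketch, assuming nslookup is available on a LAN client and using the lease file path from my Dnsmasq configuration in Part 2:

# Confirm the new hostnames resolve against the NAS (192.168.1.1)
nslookup openstack6.home.mydomain.co.uk 192.168.1.1
nslookup openstack7.home.mydomain.co.uk 192.168.1.1

# On the NAS, watch the lease file while the new servers PXE boot to
# confirm they pick up the static assignments added above
tail -f /opt/var/lib/misc/dnsmasq.leases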

Now that Ubuntu is installed on the two new servers, openstack6 (192.168.1.106) and openstack7 (192.168.1.107), and root's SSH key is in place, I first check that the networking is set up correctly on the new servers. The /etc/network/interfaces file should have the following contents:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback

# Host/Management
auto eth0
iface eth0 inet dhcp

# Neutron Provider interface
auto eth1
iface eth1 inet manual
  up ip link set $IFACE up
  down ip link set $IFACE down

Bootstrap, assign role, chef-client, done!

With that in place, I can now bootstrap them with the Chef Client and assign the relevant roles, which makes them part of my OpenStack Compute lab. To do this I log onto my Chef server (running on openstack1) as root and issue the following:

knife bootstrap -E rpcs -r role[single-compute] 192.168.1.106
knife bootstrap -E rpcs -r role[single-compute] 192.168.1.107

knife ssh "hostname:openstack6" "ovs-vsctl add-port br-eth1 eth1"
knife ssh "hostname:openstack7" "ovs-vsctl add-port br-eth1 eth1"
knife ssh "role:single-compute" chef-client

And it is that easy!

I execute chef-client on all my computes and not just the new ones. This isn’t strictly necessary to add these new nodes, but it’s good practice to run it to ensure that my computes are consistent.

I can verify that my hypervisors (the compute nodes) are correctly available by issuing the following:

. openrc
nova hypervisor-list

This will produce the following output for my lab:

root@openstack1:~# nova hypervisor-list
+----+--------------------------------+
| ID | Hypervisor hostname            |
+----+--------------------------------+
| 1  | openstack5.home.mydomain.co.uk |
| 3  | openstack3.home.mydomain.co.uk |
| 5  | openstack4.home.mydomain.co.uk |
| 7  | openstack6.home.mydomain.co.uk |
| 8  | openstack7.home.mydomain.co.uk |
+----+--------------------------------+
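As a further check, I can also confirm that the nova-compute services on openstack6 and openstack7 report as up. A quick sketch, using the same openrc credentials:

. openrc
nova service-list | grep nova-compute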

Home Rackspace Private Cloud / OpenStack Lab: Part 4

After following the first three posts, we now have a Rackspace Private Cloud powered by OpenStack running with 2 Controllers (HA) and 3 Computes. So now what? The first thing we need to do is get our hands dirty with the OpenStack Networking component, Neutron, and create a network that our instances can be spun up on. For the home lab I have dumb, unmanaged switches, and I take advantage of that by creating a Flat Network that gives my instances access out through my home LAN on the 192.168.1.0/24 subnet.

Logging on to the environment

We first need to get into the OpenStack lab environment, and there are a couple of routes. We can use the web dashboard, Horizon, which lives on the "API_VIP" IP I created when I set up my environment (see step 9 in Part 3): https://192.168.1.243/ (answer yes to the SSL warning, as it uses a self-signed certificate). Alternatively, we can use the command line (CLI). The easiest way to use the CLI is to ssh to the first controller, openstack1 (192.168.1.101), change to the root user, then source the environment file (/root/openrc) that was created during installation; it sets the environment variables that allow you to communicate with OpenStack.

To use the CLI on the first controller, issue the following:

ssh openstack1
sudo -i
. openrc

The /root/openrc file contains the following:

# This file autogenerated by Chef
# Do not edit, changes will be overwritten
# COMMON OPENSTACK ENVS
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.253:5000/v2.0
export OS_AUTH_STRATEGY=keystone
export OS_NO_CACHE=1
# LEGACY NOVA ENVS
export NOVA_USERNAME=${OS_USERNAME}
export NOVA_PROJECT_ID=${OS_TENANT_NAME}
export NOVA_PASSWORD=${OS_PASSWORD}
export NOVA_API_KEY=${OS_PASSWORD}
export NOVA_URL=${OS_AUTH_URL}
export NOVA_VERSION=1.1
export NOVA_REGION_NAME=RegionOne
# EUCA2OOLs ENV VARIABLES
export EC2_ACCESS_KEY=b8bab4a938c340bbbf3e27fe9527b9a0
export EC2_SECRET_KEY=787de1a83efe4ff289f82f8ea1ccc9ee
export EC2_URL=http://192.168.1.253:8773/services/Cloud

These details match up to the admin user’s details specified in the environment file that was created in Step 9 in Part 3.

With this loaded into our environment we can now use the command line clients to control our OpenStack cloud. These include:

  • nova for launching and controlling instances
  • neutron for managing networking
  • glance for manipulating the images used to create our instances
  • keystone for managing users and tenants (projects)
There are, of course, others such as cinder, heat, swift, etc., but they're not configured in the lab yet. To get an idea of what you can do with the CLI, head over to @eglute's blog for a very handy, quick "cheat sheet" of commands.
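As a quick sanity check that the openrc credentials are loaded, a few read-only commands along these lines can be run (a sketch; the output obviously depends on what already exists in your cloud):

nova list              # instances in the current tenant
neutron net-list       # Neutron networks
glance image-list      # images registered in Glance
keystone tenant-list   # tenants/projects (requires admin credentials)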

Creating the Home Lab Flat Network

The 5 OpenStack servers, and everything else on my network, hang off a single subnet: 192.168.1.0/24. Each and every one of those devices gets an IP from that range and is configured to use a default gateway of 192.168.1.254, which is great: it means they get internet access.

I want my instances running under OpenStack to also have internet access, and be accessible from the network (from my laptop, or through PAT on my firewall/router to expose some services such as a webserver running on one of my OpenStack instances). To do this I create a Flat Network, where I allocate a small DHCP range so as not to conflict with any other IPs or ranges currently in use.

For more information on Flat Networking, view @jimmdenton‘s blog post

To create this Flat Network to co-exist on the home LAN subnet of 192.168.1.0/24, I do the following:

1. First create the network (network_type=flat):

neutron net-create \
    --provider:physical_network=ph-eth1 \
    --provider:network_type=flat \
    --router:external=true \
    --shared flatNet

2. Next create the subnet:

neutron subnet-create \
    --name flatSubnet \
    --no-gateway \
    --host-route destination=0.0.0.0/0,nexthop=192.168.1.254 \
    --allocation-pool start=192.168.1.150,end=192.168.1.170 \
    --dns-nameserver 192.168.1.1 \
    flatNet 192.168.1.0/24

Now what’s going on in that subnet-create command is as follows:

--no-gateway specifies no default gateway, but...

--host-route destination=0.0.0.0/0,nexthop=192.168.1.254 looks suspiciously like a default route. It is.

The effect of --no-gateway is that something extra has to happen for an instance to access the Metadata service (where it goes to get cloud-init details, ssh keys, etc.). As it can't rely on a gateway address (it doesn't exist) to reach the 169.254.0.0/16 network where the Metadata service lives, a route to it is injected into the instance's routing table instead.

That's Metadata sorted, but what about access to anything other than 192.168.1.0/24, i.e. everything else? This is taken care of by the host route above, which has the same effect as setting a gateway because of the values used (destination=0.0.0.0/0,nexthop=192.168.1.254). That nexthop address is the default gateway on my LAN, so the instance gets internet access. With Neutron we can add a number of routes, and these are automatically created in the instance's routing table. Very handy.

--allocation-pool is the DHCP address pool range. I run DHCP on my network for everything else, but I deliberately keep the two ranges separate so they never conflict. I set this range to be between 192.168.1.150 and 192.168.1.170.

--dns-nameserver 192.168.1.1 sets the resolver to my NAS (192.168.1.1), which runs Dnsmasq and performs DNS resolution for all my LAN clients. As the instance gets an IP on this network, it can reach my DNS server.
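To double-check what Neutron has actually created, something like the following can be used (a minimal sketch; the names match the net-create and subnet-create commands above):

neutron net-show flatNet        # confirms the flat network type and ph-eth1 mapping
neutron subnet-show flatSubnet  # confirms the allocation pool, host routes and DNS server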

Now that we have a network in place, I can spin up an instance – but before that, there are a couple of other housekeeping items that need to be performed first: creating/importing an SSH keypair and setting security group rules to allow me access (SSH) to the instance.

Creating/Importing Keypairs

Keypairs are SSH Public/Private keypairs that you would create when wanting passwordless (or key-based) access to Linux instances. They are the same thing in OpenStack. What happens though is that OpenStack has a copy of the Public key of your keypair, and when you specify that key when booting an instance, it squirts it into a user’s .ssh/authorized_keys file – meaning that you can ssh to it using the Private portion of your key.

To create a keypair for use with OpenStack, issue the following command:

nova keypair-add demo > demo.pem
chmod 0600 demo.pem

demo.pem will then be the private key, and the name of the key when booting an instance will be called “demo”. Keep the private key safe and ensure it is only readable by you.

If creating a whole new keypair isn’t suitable (it’s an extra key to carry around) you can always use the one you’ve been using for years by importing it into OpenStack.  To import a key, you’re effectively copying the public key into the database so that it can be used by OpenStack when you boot an instance. To do this issue the following:

nova keypair-add --pub-key .ssh/id_rsa.pub myKey

What this does is take a copy of .ssh/id_rsa.pub and assigns the name myKey to it. You can now use your key to access new instances you spin up.

Creating Default security group rules

Before we can spin up an instance (although technically this step could also be done after you have booted one up) we need to allow access to it: by default, no traffic can flow in to it, not even a ping.

To allow pings (ICMP) and SSH to the Default group (I tend to not allow any more than that in this group) issue the following commands:

neutron security-group-rule-create --protocol ICMP --direction ingress default
neutron security-group-rule-create --protocol tcp --direction ingress --port-range-min 22 --port-range-max 22 default
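If you want to confirm the rules landed in the right place, the security groups and their rules can be listed. A quick sketch:

neutron security-group-list
neutron security-group-rule-list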

Adjusting m1.tiny so Ubuntu can boot with 512Mb

My servers don't have much memory. The Computes (the hypervisors, where the instances actually spawn) only have 4Gb RAM, so I value the m1.tiny flavor, especially when I need to spin up a lot of instances. The problem is that, by default in Havana, m1.tiny specifies 1Gb for the disk of an instance; an Ubuntu image requires more than 1Gb, and OpenStack is unable to "shrink" an instance's disk below the size the image was created with. To fix this we amend m1.tiny so that the disk becomes "unlimited" (a size of 0) again, just as it was in Grizzly and before. To do this we issue the following:

nova flavor-delete 1
nova flavor-create m1.tiny 1 512 0 1

Havana's nova command is unable to amend flavors, so we delete and recreate to mimic this behaviour (Horizon does the same when you edit a flavor).
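A quick check that the recreated flavor looks right (a sketch; the disk column should now show 0):

nova flavor-list
nova flavor-show m1.tiny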

We’re now ready to boot an instance, or are we?

Loading Images

A Rackspace Private Cloud can automatically pull down and upload images into Glance by setting the following in the environment JSON file and running chef-client:

"glance": {
  "images": [
    "cirros",
    "precise"
  ],
  "image" : {
    "cirros": "https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img",
    "precise": "http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img"
  },
  "image_upload": true
 },

This is handy, but it can also add to RPC installation times, so tune it to suit. What I tend to do is download the images, store them on my web server (NAS2, 192.168.1.2), which is connected to the same switch as my OpenStack environment, and change the image URLs to point to the NAS2 web server instead. If you do this, wget those URLs above and store the files in /share/Web; when you enable the default Web Server service on the QNAP, they become available on the network. Change the above JSON snippet for Chef to the following:

"glance": {
  "images": [
    "cirros",
    "precise"
  ],
  "image" : {
    "cirros": "http://192.168.1.2/cirros-0.3.0-x86_64-disk.img",
    "precise": "http://192.168.1.2/precise-server-cloudimg-amd64-disk1.img"
  },
  "image_upload": true
 },

To load images using the glance client, issue the following:

glance image-create \
    --name='precise-image' \
    --disk-format=qcow2 \
    --container-format=bare \
    --public < precise-server-cloudimg-amd64-disk1.img

What this will do is load that image precise-server-cloudimg-amd64-disk1.img that you have downloaded manually into Glance. If you don’t have that image downloaded, you can get Glance to fetch it for you – saving you that extra step:

glance image-create \
    --name='precise-image' \
    --disk-format=qcow2 \
    --container-format=bare \
    --public \
    --location http://192.168.1.2/precise-server-cloudimg-amd64-disk1.img
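Either way, a quick check that the image has registered correctly can be done as follows (a sketch):

glance image-list
glance image-show precise-image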

Booting an instance

Now that we have an OpenStack lab, a network, a keypair and security group rules allowing access to our instances, we can boot an instance. To do this we first list the available images:

nova image-list

And then we can use one of those images for our instance.

The next thing to do is list the Neutron networks available:

neutron net-list

Now that we have these pieces of information, along with the name of the keypair (nova keypair-list), we can boot our instance. I grab the UUID of "flatNet" and store it in a variable so I can automate this step when I first spin up an instance:

flatNetId=$(neutron net-list | awk '/flatNet/ {print $2}')

nova boot myInstance \
    --image precise-image \
    --flavor m1.tiny \
    --key_name demo \
    --security_groups default \
    --nic net-id=$flatNetId

You can watch this boot up by viewing the console output with the following command:

nova console-log myInstance

When this has booted up, I'll have an instance with an address from the allocation pool (192.168.1.150-170) that is accessible from my network. Check this by viewing the nova list output:

nova list

This will show you the IP that the instance has been assigned. As this is on my home LAN, I can SSH to this instance as if it were a server connected to my switch:

root@openstack1:~# ssh -i demo.pem root@192.168.1.150
Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-57-virtual x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Fri Feb 14 16:05:59 UTC 2014
System load: 0.05 Processes: 62
 Usage of /: 39.1% of 1.97GB Users logged in: 0
 Memory usage: 8% IP address for eth0: 192.168.1.150
 Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
root@myinstance:~#

Now the basics are out of the way, the next blog post will look at more advanced uses of OpenStack and the instances!

Home Rackspace Private Cloud / OpenStack Lab: Part 3

In the first two posts I covered the basics: what hardware is involved and the basic network services that form the basis of my Rackspace Private Cloud install. In this post I set up Rackspace Private Cloud to give an OpenStack environment consisting of a highly available pair of controllers (running the OpenStack APIs, Neutron, Glance and Keystone) and 3 compute servers, giving me the flexibility to do some testing.

Home Lab Network Diagram

To recap, this is my environment showing the Rackspace Private Cloud.

To install Rackspace Private Cloud using the Rackspace Chef Cookbooks, we first need to have Chef installed. In this environment, to make the most of the hardware, I'll be installing Chef onto the first of my hosts, openstack1, which we will refer to as a controller, to facilitate the installation of RPC via the Chef Cookbooks. If you already have Chef installed in your environment, you can skip these steps and head straight to step 7 below. In Rackspace Private Cloud, Controllers are the servers that run the APIs and OpenStack services such as Glance, Keystone and Neutron; we also run MySQL and RabbitMQ on these. Through the use of the cookbooks, when we utilise two of these Controllers we end up with an HA pair, giving highly available Controller services.

Rackspace Private Cloud provides scripts to make installation of an OpenStack powered private cloud very easy – including the initial setup of Chef and underlying services. Head over to the Rackspace Private Cloud website for more information on this. This blog post pulls out those steps to show complete transparency and give greater control over my installation.

Installation of RabbitMQ

As we will be running Chef alongside other OpenStack services, we need to do some initial setup and configuration of services such as RabbitMQ to ensure common services and ports don’t conflict and operate seamlessly with one another.

1. We first install some prerequisite packages:

apt-get update
apt-get install -y python-dev python-pip git erlang erlang-nox erlang-dev curl lvm2

2. Then we install and set up RabbitMQ on openstack1 (192.168.1.101).  Both Chef and OpenStack will be set up to use the same RabbitMQ service.

# Ensure our Rabbit environment doesn't lose its settings later on
mkdir -p /var/lib/rabbitmq
echo -n "ANYRANDOMSTRING" > /var/lib/rabbitmq/.erlang.cookie
chmod 600 /var/lib/rabbitmq/.erlang.cookie
RABBIT_URL="http://www.rabbitmq.com"
RABBITMQ_KEY="${RABBIT_URL}/rabbitmq-signing-key-public.asc"
wget -O /tmp/rabbitmq.asc ${RABBITMQ_KEY};
apt-key add /tmp/rabbitmq.asc
RABBITMQ="${RABBIT_URL}/releases/rabbitmq-server/v3.1.5/rabbitmq-server_3.1.5-1_all.deb"
wget -O /tmp/rabbitmq.deb ${RABBITMQ}
dpkg -i /tmp/rabbitmq.deb

3. Now that we have RabbitMQ installed, we need to configure it so that Chef can utilise it. To do this we create a vhost and an appropriate user as follows:

CHEF_RMQ_PW="rand0mStr1ng"
rabbitmqctl add_vhost /chef
rabbitmqctl add_user chef $CHEF_RMQ_PW
rabbitmqctl set_permissions -p /chef chef '.*' '.*' '.*'
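To confirm the vhost, user and permissions were created as intended, the following read-only checks can be run (a sketch):

rabbitmqctl list_vhosts
rabbitmqctl list_users
rabbitmqctl list_permissions -p /chef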

Installation of Chef Server

4. Now that we have RabbitMQ setup on one of our nodes (openstack1) we can install Chef Server. To do this grab the package for Ubuntu 12.04 from the opscode.com website as shown below:

CHEF="https://www.opscode.com/chef/download-server?p=ubuntu&pv=12.04&m=x86_64"
wget -O /tmp/chef_server.deb ${CHEF}
dpkg -i /tmp/chef_server.deb

5. We can then configure Chef Server for our environment, where we set various configuration items such as the ports to run on and where RabbitMQ is, along with the password we created for the chef user in step 3. This is done by running the following commands:

RMQ_IP="192.168.1.101"    # openstack1

mkdir -p /etc/chef-server
cat > /etc/chef-server/chef-server.rb <<EOF
erchef["s3_url_ttl"] = 3600
nginx["ssl_port"] = 4000
nginx["non_ssl_port"] = 4080
nginx["enable_non_ssl"] = true
rabbitmq["enable"] = false
rabbitmq["password"] = "${CHEF_RMQ_PW}"
rabbitmq["vip"] = "${RMQ_IP}"
rabbitmq['node_ip_address'] = "${RMQ_IP}"
chef_server_webui["web_ui_admin_default_password"] = "openstack"
bookshelf["url"] = "https://#{node['ipaddress']}:4000"
EOF

chef-server-ctl reconfigure
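Once the reconfigure completes, a quick status check confirms the Chef Server services came up (a sketch; the exact service names vary by Chef Server version):

chef-server-ctl status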

Installation of Chef Client

6. After this has pulled down the packages and dependencies, we can configure the Chef client as shown below:

# Make sure knife can be found
ln -sf /opt/chef-server/embedded/bin/knife /usr/bin/knife 

SYS_IP=$(ohai ipaddress | awk '/^ / {gsub(/ *\"/, ""); print; exit}')
export CHEF_SERVER_URL=https://${SYS_IP}:4000
# Configure Knife
mkdir -p /root/.chef
cat > /root/.chef/knife.rb <<EOF
log_level :info
log_location STDOUT
node_name 'admin'
client_key '/etc/chef-server/admin.pem'
validation_client_name 'chef-validator'
validation_key '/etc/chef-server/chef-validator.pem'
chef_server_url "https://${SYS_IP}:4000"
cache_options( :path => '/root/.chef/checksums' )
cookbook_path [ '/opt/chef-cookbooks/cookbooks' ]
EOF
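At this point knife should be able to talk to the Chef Server over port 4000. A quick sketch of a sanity check:

knife client list    # expect to see the built-in clients (e.g. chef-validator)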

Upload Rackspace Private Cloud Cookbooks to Chef Server

7. With Chef Server running and the Chef client configured, we can grab the Rackspace Private Cloud Cookbooks from GitHub and upload them to our Chef Server. We do this as follows:

COOKBOOK_VERSION="v4.2.1"     # Check the rcbops/chef-cookbooks releases on GitHub for available versions
mkdir -p /opt/
if [ -d "/opt/chef-cookbooks" ];then
    rm -rf /opt/chef-cookbooks
fi
git clone https://github.com/rcbops/chef-cookbooks.git /opt/chef-cookbooks
pushd /opt/chef-cookbooks
git checkout ${COOKBOOK_VERSION}
git submodule init
git submodule sync
git submodule update
# Upload all of the RCBOPS Cookbooks
knife cookbook upload -o /opt/chef-cookbooks/cookbooks -a
popd

knife role from file /opt/chef-cookbooks/roles/*.rb
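A quick check that the cookbooks and roles made it into the Chef Server (a sketch):

knife cookbook list
knife role list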

Configuration of Environment for Rackspace Private Cloud

With the cookbooks uploaded into Chef, we can now configure our environment ready for installation. For this home lab we have 2 Controllers (openstack1 and openstack2) and 3 Computes (openstack3, openstack4 and openstack5).  Each of the servers has 2 NICs:

eth0 is on the LAN Subnet of 192.168.1.0/24 and is our Management network.

eth1 has not been assigned an IP and will be used for Neutron and is known as the Provider network.

This means we will configure our environment so that eth1 will be used for Neutron, and when we get to use our environment (GUI or CLI), we will be accessing the environment using a 192.168.1.X/24 address – just like any other server or computer on this LAN.

The /etc/network/interfaces file on these servers has the following contents:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback

# Host/Management
auto eth0
iface eth0 inet dhcp
# Neutron Provider interface
auto eth1
iface eth1 inet manual
  up ip link set $IFACE up
  down ip link set $IFACE down

8. After this we can create our Environment JSON file which describes our complete setup for Rackspace Private Cloud:

VIP_PREFIX="192.168.1"    # Home lab network is 192.168.1.0/24
API_VIP="243"
MYSQL_VIP="242"
AMQP_VIP="241"

cat > /opt/base.env.json <<EOF
{
  "name": "rpcs",
  "description": "Environment for Openstack Private Cloud",
  "cookbook_versions": {
  },
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "default_attributes": {
  },
  "override_attributes": {
  "monitoring": {
  "procmon_provider": "monit",
  "metric_provider": "collectd"
  },
  "enable_monit": true,
  "osops_networks": {
    "management": "${VIP_PREFIX}.0/24",
    "swift": "${VIP_PREFIX}.0/24",
    "public": "${VIP_PREFIX}.0/24",
    "nova": "${VIP_PREFIX}.0/24"
  },
  "rabbitmq": {
    "open_file_limit": 4096,
    "use_distro_version": false
  },
  "nova": {
    "config": {
      "use_single_default_gateway": false,
      "ram_allocation_ratio": 1.5,
      "disk_allocation_ratio": 1.0,
      "cpu_allocation_ratio": 8.0,
      "resume_guests_state_on_host_boot": false
    },
    "network": {
      "provider": "neutron"
    },
    "scheduler": {
      "default_filters": [
        "AvailabilityZoneFilter",
        "ComputeFilter",
        "RetryFilter"
      ]
    },
    "libvirt": {
      "vncserver_listen": "0.0.0.0",
      "virt_type": "kvm"
    }
  },
  "keystone": {
    "pki": {
      "enabled": false
    },
    "admin_user": "admin",
    "tenants": [
      "service",
      "admin",
      "demo",
      "demo2"
    ],
    "users": {
      "admin": {
        "password": "secrete",
        "roles": {
          "admin": [
            "admin"
          ]
        }
      },
      "demo": {
        "password": "secrete",
        "default_tenant": "demo",
        "roles": {
          "Member": [
            "demo2",
            "demo"
          ]
        }
      },
      "demo2": {
        "password": "secrete",
        "default_tenant": "demo2",
        "roles": {
          "Member": [
            "demo2",
            "demo"
          ]
        }
      }
    }
  },
  "neutron": {
    "ovs": {
      "external_bridge": "",
      "network_type": "gre",
      "provider_networks": [
        {
          "bridge": "br-eth1",
          "vlans": "1024:1024",
          "label": "ph-eth1"
        }
      ]
    }
  },
  "mysql": {
    "tunable": {
      "log_queries_not_using_index": false
    },
    "allow_remote_root": true,
    "root_network_acl": "127.0.0.1"
  },
  "vips": {
    "horizon-dash": "${VIP_PREFIX}.${API_VIP}",
    "keystone-service-api": "${VIP_PREFIX}.${API_VIP}",
    "nova-xvpvnc-proxy": "${VIP_PREFIX}.${API_VIP}",
    "nova-api": "${VIP_PREFIX}.${API_VIP}",
    "nova-metadata-api": "${VIP_PREFIX}.${API_VIP}",
    "cinder-api": "${VIP_PREFIX}.${API_VIP}",
    "nova-ec2-public": "${VIP_PREFIX}.${API_VIP}",
    "config": {
      "${VIP_PREFIX}.${API_VIP}": {
        "vrid": 12,
        "network": "public"
      },
      "${VIP_PREFIX}.${MYSQL_VIP}": {
        "vrid": 10,
        "network": "public"
      },
      "${VIP_PREFIX}.${AMQP_VIP}": {
        "vrid": 11,
        "network": "public"
      }
    },
    "rabbitmq-queue": "${VIP_PREFIX}.${AMQP_VIP}",
    "nova-novnc-proxy": "${VIP_PREFIX}.${API_VIP}",
    "mysql-db": "${VIP_PREFIX}.${MYSQL_VIP}",
    "glance-api": "${VIP_PREFIX}.${API_VIP}",
    "keystone-internal-api": "${VIP_PREFIX}.${API_VIP}",
    "horizon-dash_ssl": "${VIP_PREFIX}.${API_VIP}",
    "glance-registry": "${VIP_PREFIX}.${API_VIP}",
    "neutron-api": "${VIP_PREFIX}.${API_VIP}",
    "ceilometer-api": "${VIP_PREFIX}.${API_VIP}",
    "ceilometer-central-agent": "${VIP_PREFIX}.${API_VIP}",
    "heat-api": "${VIP_PREFIX}.${API_VIP}",
    "heat-api-cfn": "${VIP_PREFIX}.${API_VIP}",
    "heat-api-cloudwatch": "${VIP_PREFIX}.${API_VIP}",
    "keystone-admin-api": "${VIP_PREFIX}.${API_VIP}"
  },
  "glance": {
    "images": [
    ],
    "image": {
    },
    "image_upload": false
  },
  "osops": {
    "do_package_upgrades": false,
    "apply_patches": false
  },
  "developer_mode": false
  }
}
EOF

9. When we execute the above command, we end up with a file called /opt/base.env.json applicable to the home lab environment. We then load this into Chef as follows:

knife environment from file /opt/base.env.json
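The loaded environment can then be inspected to confirm the override attributes and VIPs look right. A quick sketch:

knife environment list
knife environment show rpcs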

10. With the environment loaded, we can simply install Rackspace Private Cloud with just a few commands and a cup of coffee. In my environment I check that my proxy server is configured in /etc/apt/apt.conf.d/01squid as follows (as we’ll be installing a lot of the same packages on all the hosts):

Acquire::http { Proxy "http://192.168.1.2:3128"; };

Bootstrapping the Controllers (Installing Rackspace Private Cloud)

11. We are now ready to bootstrap the nodes (install Chef on each node), assign roles to them and install Rackspace Private Cloud.  The first servers to do this on are the Controllers (openstack1 and openstack2). We assign the role ha-controller1 to openstack1 and ha-controller2 to openstack2, as well as the role single-network-node to each, which preps them for the roles assigned.  We do this as follows:

CONTROLLER1=192.168.1.101
CONTROLLER2=192.168.1.102

knife bootstrap -E rpcs -r role[ha-controller1],role[single-network-node] ${CONTROLLER1}
knife bootstrap -E rpcs -r role[ha-controller2],role[single-network-node] ${CONTROLLER2}

12. This will fetch and install Chef Client and configure their roles within Chef Server. We can then run chef-client to apply the roles to the servers. As we are running an HA pair, we run chef-client in the following order: first on openstack1, then on openstack2, then finally on openstack1 again. This is because of the preparation and work needed to sync MySQL and RabbitMQ between the two so they can operate in an HA (failover) capacity:

# On openstack1
chef-client
# Configure Rabbit HA Policy
knife ssh -C1 -a ipaddress 'role:*controller*' \
    "rabbitmqctl set_policy ha-all '.*' '{\"ha-mode\":\"all\", \"ha-sync-mode\":\"automatic\"}' 0"

# SSH to openstack2 and run chef-client
ssh openstack2 chef-client

# Back on openstack1
chef-client
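Before moving on, it's worth verifying that the HA pieces are in place. This is just a minimal sketch of the checks I would run, assuming the VIPs from step 8 (.241, .242, .243) are bound by keepalived to eth0 on whichever controller is active:

# On either controller: confirm the RabbitMQ cluster has formed and the ha-all policy exists
rabbitmqctl cluster_status
rabbitmqctl list_policies

# Confirm the VIPs have come up on one of the controllers
ip addr show eth0 | grep -E '192\.168\.1\.(241|242|243)'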

Bootstrapping the Computes

13. For the Computes it's even simpler. We simply assign the role single-compute to them, then execute chef-client across each of them as follows:

for node in {103..105}; do
 knife bootstrap -E rpcs -r role[single-compute] 192.168.1.${node}
done
knife ssh "role:single-compute" chef-client

Finalising the install

14. Lastly we add the eth1 device to the br-eth1 bridge on each node, then reboot the cluster so everything starts up as expected:

knife ssh "role:*" "ovs-vsctl add-port br-eth1 eth1&&reboot"

Tip: I’ve seen RabbitMQ cause a problem when first rebooting. If it fails to die due to multiple status checks running, kill it with:
ps -ef | awk '/rabbitmq/ {print $2}' | while read R; do kill -9 $R; done

Congratulations! You now have a home lab ready for testing Rackspace Private Cloud! 

I tend to wrap all these commands into a single script and execute it from my first node, openstack1, where Chef is to be installed (recall that I pull down a shell script, install-openstack.sh, as part of the Ubuntu installation process). This runs through all these steps fully automated. On my network and slow ADSL (and the N40L isn't the fastest server on the planet!), I tend to have Rackspace Private Cloud up and running in about 90 minutes from boot.

In the next post we’ll look at configuring some basic Neutron Networking which I set up after each installation of RPC on my home lab.  This includes private networking and a Flat Network to allow me to access my instances from laptops on my home network.

Thanks to @cloudnull (Kevin Carter, Rackspace) and @IRTermite (Dale Bracey) for help fine-tuning these steps!

Home Rackspace Private Cloud / OpenStack Lab: Part 2

In the first part of this series I introduced the kit that makes up my home lab.  There’s nothing unusual or special in the kit list, but it certainly is affordable and makes entry into an OpenStack world very accessible.

This post explains some more basics of my networking: DHCP, DNS, Proxy and TFTP.

Rather than settle for the DHCP and DNS services provided by my broadband provider's ADSL router, and given that I want to do more than simply turn on a laptop or tablet and surf the 'net, setting up a separate DHCP and DNS service is important and offers the most flexibility on your network. This becomes important later on when setting up Rackspace Private Cloud using Chef: its requirement for consistent FQDNs for its nodes is not something you want to leave to chance.

DHCP and DNS: Dnsmasq

To provide DHCP and DNS on my network, I utilise Dnsmasq on my QNAP TS-210 NAS (nas.home.mydomain.co.uk / 192.168.1.1).  Installation on my QNAP is as simple as enabling the Optware plugin, which allows me to install Dnsmasq with the following:

ipkg update
ipkg install dnsmasq

Configuration of Dnsmasq is then done in /opt/etc/dnsmasq.conf (/opt is where the QNAP places optional packages).

The uncommented sections of the file (i.e. the non-defaults) are:

# Use an alternative resolv.conf
# I use this to point to a real external DNS service (i.e. don't point to itself!)
resolv-file=/opt/etc/resolv.conf

# Ensure the areas that Dnsmasq wants to read/write to/from are set correctly
user=admin
group=everyone 

# for queries of hosts with short names, add on the domain name
expand-hosts

# my domain name to ensure FQDN queries all work as expected
domain=home.mydomain.co.uk

# DHCP Setup
# First the dynamic range for those that join my network
dhcp-range=192.168.1.20,192.168.1.50,12h

# Some static entries (i.e. the servers openstack1-5)
dhcp-host=64:70:02:10:4e:00,192.168.1.101,infinite
dhcp-host=64:70:02:10:6d:66,192.168.1.102,infinite
dhcp-host=64:70:02:10:48:11,192.168.1.103,infinite
dhcp-host=64:70:02:10:4e:22,192.168.1.104,infinite
dhcp-host=64:70:02:10:88:33,192.168.1.105,infinite

# A laptop that has a LAN and Wifi adapter - give it the same IP regardless
dhcp-host=5c:96:aa:aa:aa:aa,a8:20:bb:bb:bb:bb,192.168.1.19

# Some options for setting the gateway (which is my ADSL router IP)
# Some clients don't understand some DHCP options, so present both ways
dhcp-option=3,192.168.1.254
dhcp-option=option:router,192.168.1.254

# Ensure this is writeable by Dnsmasq
dhcp-leasefile=/opt/var/lib/misc/dnsmasq.leases

# When we're PXE Booting, where's the TFTP service (and filename to boot when found)
dhcp-boot=pxeboot/pxelinux.0,nas2,192.168.1.2
# And finally, this is the authoritative DHCP server on network
dhcp-authoritative

The above basically tweaks a default Dnsmasq setup to work on my network.  I have a small DHCP range for any device I don't care too much about (tablets, laptops, phones, Blu-ray, TV, etc.). I then have static entries that match my servers: when they boot up they will always get the same IP.  And I have a line that gives a laptop the same IP whether it connects via the LAN or WiFi, which allows me to do some basic iptables filtering on hosts based on IP, trusting that IP if need be.

For DNS, Dnsmasq relies on /etc/hosts, which it reads on startup.  The hosts file on my QNAP is as follows:

127.0.0.1 localhost localhost
192.168.1.1 NAS NAS
192.168.1.2 NAS2 NAS2
192.168.1.101 openstack1.home.mydomain.co.uk openstack1
192.168.1.102 openstack2.home.mydomain.co.uk openstack2
192.168.1.103 openstack3.home.mydomain.co.uk openstack3
192.168.1.104 openstack4.home.mydomain.co.uk openstack4
192.168.1.105 openstack5.home.mydomain.co.uk openstack5

For the vast majority of devices on my network, I don't care enough to provide entries.  The devices that I rely on for services and OpenStack go in here.  The result is a very basic Dnsmasq setup that performs an important, fundamental job on my network.
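A couple of quick checks, sketched here, confirm it behaves as expected (short names resolve thanks to expand-hosts, and leases end up in the file configured above):

# From any LAN client: resolve a short name and an FQDN against the NAS
nslookup openstack1 192.168.1.1
nslookup openstack1.home.mydomain.co.uk 192.168.1.1

# On the NAS: see which DHCP leases have been handed out
cat /opt/var/lib/misc/dnsmasq.leases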

You will also note the dhcp-boot line pointing to my second NAS device.  This instructs any machine that is PXE booting to look at that IP for the boot image.

TFTP from Dnsmasq + Kickstart

Now that we have the basic services running on the network, we can take advantage of another feature of Dnsmasq: TFTP.  This allows me to PXE/network boot my servers with Ubuntu 12.04 and, by running some post-install commands once Ubuntu has been laid down, set up my environment ready for a Rackspace Private Cloud installation.

Because my main network services and my servers are separated by a WiFi link, I opted to run TFTP on another QNAP NAS (my QNAP TS-121, nas2.home.mydomain.co.uk / 192.168.1.2), which is connected to the same Gigabit switch as my servers.  Dnsmasq was installed the same way on this second QNAP (via the Optware plugin and the ipkg command).  The contents of /opt/etc/dnsmasq.conf on this second NAS are as follows (everything else is commented-out defaults, as before):

# Ensure the areas that Dnsmasq wants to read/write to/from are set correctly
user=admin
group=everyone

# Enable TFTP service
enable-tftp

# Where to put the pxeboot images (our Ubuntu netboot image)
tftp-root=/share/HDA_DATA/Public

We now have a TFTP service but it’s no good without any images to boot from.  As I’m interested in running RPC on Ubuntu 12.04, I head over to grab an Ubuntu 12.04 ISO and copy the /images/netboot/pxeboot directory to /share/HDA_DATA/Public.  If you don’t have an Ubuntu 12.04 ISO handy, you can grab this directory and files from here http://archive.ubuntu.com/ubuntu/dists/precise-updates/main/installer-amd64/current/images/netboot/

You will end up with a directory called /share/HDA_DATA/Public/pxeboot with the following files in:

pxelinux.0 -> ubuntu-installer/amd64/pxelinux.0 
pxelinux.cfg -> ubuntu-installer/amd64/pxelinux.cfg/
ubuntu-installer/
version.info
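Before rebooting a server to test it, the TFTP service can be checked from any LAN machine with a tftp client installed. A sketch, assuming a tftp-hpa style client:

tftp 192.168.1.2 -c get pxeboot/pxelinux.0
ls -l pxelinux.0    # a non-zero file means TFTP is serving the boot loader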

Booting this as it stands will give you a graphical installer, allowing you to install Ubuntu interactively.  This is fine, but we prefer to automate all the things. To do this we edit the ubuntu-installer/amd64/boot-screens/txt.cfg file to add a menu option that performs an automated (kickstart) install of Ubuntu.  Mine is as follows:

default ks
label ks
  menu label ^Kickstart Install
  menu default
  kernel ubuntu-installer/amd64/linux
  append vga=788 initrd=ubuntu-installer/amd64/initrd.gz http_proxy=http://192.168.1.2:3128/ ks=ftp://192.168.1.2/Public/ubuntu/ks.cfg -- quiet
label install
  menu label ^Install
  kernel ubuntu-installer/amd64/linux
  append vga=788 initrd=ubuntu-installer/amd64/initrd.gz -- quiet

Be careful with that first append line under the Kickstart Install stanza: it must all be on a single line, even though it may wrap on screen here.

With this in place I now get a couple of options when I PXE boot a server (pressing F12 during the N40L boot process), one of which is the Kickstart Install. This option specifies a few things on the boot "append" line, particularly the http_proxy option and the ks option. The http_proxy option is fairly obvious: we're passing the address and port of a proxy server to use as part of the installation. I'm specifying the proxy running on my second NAS (NAS2, 192.168.1.2), where Squid has been set up to cache large objects so that deb packages get cached. The ks option specifies a kickstart configuration file to run as part of the installation, automating the installation options, from package selection to disk formatting. This kickstart file is served from an anonymous FTP share on the same NAS2 device and looks like the following:

lang en_GB
langsupport en_GB
keyboard gb
timezone Etc/UTC
text
install
skipx
reboot
url --url http://gb.archive.ubuntu.com/ubuntu/ --proxy http://192.168.1.2:3128/
rootpw --disabled
user openstack --fullname "openstack" --password openstack
authconfig --enableshadow --enablemd5
clearpart --all --initlabel
zerombr yes
part /boot --fstype=ext2 --size=64
part swap --size=1024
part / --fstype=ext4 --size=1 --grow
bootloader --location=mbr
firewall --disabled
%packages
ubuntu-minimal
openssh-server
screen
curl
wget
sshpass
git
%post
apt-get update
apt-get upgrade -y
# setup locales
locale-gen en_GB.UTF-8
update-locale LANG="en_GB.UTF-8"
echo 'LANG=en_GB.UTF-8' >> /etc/environment
echo 'LC_ALL=en_GB.UTF-8' >> /etc/environment
# Set up root keys (all use same root key - NOT FOR PROD!)
wget -O /tmp/root-ssh-key.tar.gz ftp://192.168.1.2/Public/ubuntu/root-ssh-key.tar.gz
cd /target
tar zxf /tmp/root-ssh-key.tar.gz
# Pull down installer for later use
wget -O /root/install-openstack.sh ftp://192.168.1.2/Public/ubuntu/install-openstack.sh
chmod 0700 /root/install-openstack.sh
# Convenient script to power off all my machines
wget -O /root/all-off.sh ftp://192.168.1.2/Public/ubuntu/all-off.sh
chmod 0700 /root/all-off.sh

A few things: this is clearly intended for a lab, so feel free to edit and improve! The result should be something that is convenient to you through automation. I set up a user called openstack, as Ubuntu requires a user other than just root. I also pull down some scripts and items useful for my setup, one of which is a tarball of the /root/.ssh/ directory. This has keys and .ssh/authorized_keys already set up, because the Rackspace Private Cloud installation uses Chef, which relies on being able to ssh amongst the machines.
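For reference, one way such a tarball might be put together (a hypothetical sketch, since the exact keys are yours; the paths are relative so the %post extract from /target drops them into /root/.ssh on the installed system):

# Run somewhere that holds the shared lab keypair - NOT for production use
mkdir -p root/.ssh
cp ~/.ssh/id_rsa root/.ssh/id_rsa
cp ~/.ssh/id_rsa.pub root/.ssh/id_rsa.pub
cp ~/.ssh/id_rsa.pub root/.ssh/authorized_keys
chmod 700 root/.ssh && chmod 600 root/.ssh/*
tar czf root-ssh-key.tar.gz root
# then copy root-ssh-key.tar.gz to the FTP share on nas2 (Public/ubuntu/)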

At this point, after PXE booting the 5 servers in my home lab setup, I have Ubuntu 12.04 installed and root keys in place, ready for Rackspace Private Cloud to be installed. On my ADSL setup, kicking the 5 machines takes about 15 minutes, which is an acceptable time to rebuild the lab and allows me to try various installation runs; speed and automation are important for any lab.

In the next post I’ll cover off the steps to get an HA Rackspace Private Cloud installed using the Rackspace Chef cookbooks.

Home Rackspace Private Cloud / OpenStack Lab: Part 1

Over the past year I've been using a home lab for quick, hands-on testing of OpenStack and Rackspace Private Cloud, and a number of people have requested information on the setup.  Over the next few blog posts I'm going to explain it, which serves two purposes: documenting my own setup, and hopefully providing information that other people might find useful (and not everything is about OpenStack).

This first post is about the tech involved and how it is set up.  In subsequent posts I'll go into further detail and finally cover the installation of Rackspace Private Cloud.

The servers

5 x HP MicroServer N40L

The N40L is an incredibly cheap, low-power server with 4 SATA bays (plus a CD-ROM bay) and a supplied 250Gb SATA drive. It has a single AMD Turion II CPU with 2 cores that supports hardware virtualisation. It has been superseded by the HP MicroServer N54L and is often found with cashback deals, meaning these usually come in under £130.

There seems to be some caution needed when choosing memory for these things, with the documentation reporting that they support up to 8Gb.  I've read of people successfully running 16Gb, and through my own trial (I grabbed the cheapest memory I could get) I found it worked.

When choosing PCIe NICs and other cards, be aware that you need to use low-profile ones.  The NICs I added to mine are low-profile, but the metal backing plate isn't.  A quick email to TP-Link customer services will get you some low-profile backing plates free of charge.

Networked Attached Storage

I have 2 QNAP NAS devices.  One functions as my main network storage (nas / 192.168.1.1) with 2 drives in, running DHCP for my home subnet, DNS for all connected devices and a proxy (primarily used to compensate for the slow 6-7Mbps ADSL speed when installing packages on my servers).  The second (nas2 / 192.168.1.2) acts as a TFTP server and proxy for my servers, as well as providing replication/backup for my primary NAS.  The reason I run a proxy and TFTP next to my servers, rather than on the main NAS, is the wireless link between my servers and my router.  Although WiFi speeds are OK, this is a much more efficient setup (and I have 2 floors between my servers and WiFi router).  Powerline adapters? I tried them, but the RCD (Residual Current Device) in my home made them useless.

  • nas
    • QNAP TS-210 (QTS 4.0.2)
    • 2 x Western Digital Caviar Green 500GB SATA 6Gb/s 64MB Cache – OEM (WD5000AZRX)
    • Raid 1 EXT4
    • eth0 (onboard) 192.168.1.1
    • DHCP (Dnsmasq)
    • DNS (Dnsmasq)
    • Proxy (Squid)
  • nas2
    • QNAP TS-121 (QTS 4.0.2)
    • 1 x Western Digital Caviar Green 500GB SATA 6Gb/s 64MB Cache – OEM (WD5000AZRX)
    • EXT4
    • eth0 (onboard) 192.168.1.2
    • TFTP (Dnsmasq)
    • Proxy (Squid)

Network Kit

Essentially there are 2 parts to my network, separated by 2 floors of the house and connected using WiFi bridging, all on a single 192.168.1.0/24 subnet.  I have unmanaged switches connecting the servers and NAS, so there's nothing highly exciting here, but it's presented for clarity and completeness (and it's useful if you're thinking of WiFi-bridging 2 parts of your network together).

  • TP-Link TL-WA901ND Wireless-N PoE Access Point (300Mbps)
    • Bridge Mode
    • Connects LAN based servers to wireless network/clients
  • TP-Link TD-W8980 N600 Dual Band Wireless ADSL Modem Router
    • WiFi Router (2.4GHz + 5GHz)
    • ADSL Router disabled (for now)
    • DHCP/DNS disabled (Dnsmasq used instead)
  • TP-Link TL-SG1024 24-port Ethernet Switch
    • 24-Port Switch connecting servers to NAS and Wifi Bridge (TL-WA901ND)

(I think I should get sponsorship from TP-Link for this post!)

Overall, my network looks like the diagram below. Hopefully, having this detailed background info will aid you in setting up your own OpenStack environment, big or small.

Home Lab Network Diagram

In the next blog post I’ll cover QNAP Dnsmasq Configuration providing DHCP, DNS and TFTP for my network allowing me to PXE boot my N40L servers to kick Ubuntu and Rackspace Private Cloud.