Vagrant OpenStack Plugin 101: vagrant up --provider=openstack

Now that we have a multi-node OpenStack environment spun up very easily using Vagrant, we can take this a step further by using Vagrant to spin up OpenStack instances too, using the Vagrant OpenStack Plugin. To see how easy this is, follow the instructions below:

git clone https://github.com/mat128/vagrant-openstack.git
cd vagrant-openstack/
gem build vagrant-openstack.gemspec
vagrant plugin install vagrant-openstack-*.gem
vagrant box add dummy https://github.com/mat128/vagrant-openstack/raw/master/dummy.box
sudo apt-get install python-novaclient
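
To confirm the plugin registered correctly, you can list the installed Vagrant plugins (vagrant-openstack should appear):

vagrant plugin list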

With the plugin installed for use with Vagrant, we can now configure the Vagrantfile. Remember, the Vagrantfile is just a configuration file that lives in the directory of the working environment, where any artifacts related to the virtual environment are kept. The following Vagrantfile contents can be used against the OpenStack Cloud Computing Cookbook Juno Demo environment:

require 'vagrant-openstack'
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"
  config.vm.provider :openstack do |os|    # e.g.
    os.username = "admin"          # "#{ENV['OS_USERNAME']}"
    os.api_key  = "openstack"      # "#{ENV['OS_PASSWORD']}"
    os.flavor   = /m1.tiny/
    os.image    = /trusty/
    os.endpoint = "http://172.16.0.200:5000/v2.0/tokens" # "#{ENV['OS_AUTH_URL']}/tokens"
    os.keypair_name = "demokey"
    os.ssh_username = "ubuntu"
    os.public_network_name = "ext_net"
    os.networks = %w(ext_net)
    os.tenant = "cookbook"
    os.region = "regionOne"
  end
end

Once created, we can simply bring up this instance with the following command:

vagrant up --provider=openstack
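
As the commented-out values in the Vagrantfile above hint, you can keep credentials out of the file by exporting the usual OS_* environment variables before running vagrant up. A minimal sketch, using the demo environment's values (adjust for your own):

# Export credentials so the Vagrantfile can read them via ENV
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://172.16.0.200:5000/v2.0

vagrant up --provider=openstack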

Note: The network chosen must be a routable “public” network that is accessible from the Vagrant client – a limitation of the plugin for creating these instances at this time. Also note that vagrant-openstack seems to get stuck at “Waiting for SSH to become available”; Ctrl + C at this point will drop you back to the shell.

Remote OpenStack Vagrant Environment

To coincide with the development of the 3rd Edition of the OpenStack Cloud Computing Cookbook, I decided to move my vagranting from the ever-increasing temperatures of my MBP to a Shuttle capable of spinning up multi-node OpenStack environments in minutes. I’ve found this very useful for convenience and speed, so I’m sharing the small number of steps to help you quickly get up to speed too.

The spec of the Shuttle is:

  • Shuttle XPC SH87R6
  • Intel Core i5-4590 3.3GHz
  • 2 x Crucial 8GB 1600MHz DDR3 Ballistix Sport
  • 1 x Seagate Desktop SSHD 1000GB 64MB Cache SATA 6 Gb/s 8GB SSD Cache Hybrid HDD

This runs Ubuntu 14.04 LTS, along with VirtualBox 4.3 and VMware Workstation 10. I decided to give one of those hybrid HDDs a spin, and can say the performance is pretty damn good for the price. All in all, this is a quiet little workhorse sitting under my desk.

To make this part of my work environment (read: my MBP), I connect to it using SSH and X11, courtesy of XQuartz. XQuartz, once installed on the Mac, lets me access the GUI on my remote Shuttle as you’d expect from X11 (ssh -X …). This is useful for running the VMware Workstation and VirtualBox GUIs – and it also gives me a hop into the virtual OpenStack environment that exists only within my Shuttle, by letting me run remote web browsers that have the necessary network access to it.
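
For example, a typical session looks something like this (the hostname “shuttle” and the username are placeholders for your own):

# From the MBP, with XQuartz running: forward X11 over SSH
ssh -X user@shuttle

# On the Shuttle, launch a browser; the window displays back on the Mac
firefox &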

[Screenshot: Firefox running over X11 from the Shuttle]

With this all installed and accessible on my network, I grab the OpenStack Cloud Computing Cookbook scripts (that we’re updating for Juno and the in-progress 3rd Edition) from GitHub and can bring up a multi-node Juno OpenStack environment running in either VirtualBox or VMware Workstation in just over 15 minutes.

Once OpenStack is up and running, I can then run the demo.sh script that we provide to launch 2 networks (one private, one public), with an L3 router providing floating IPs, and an instance that I’m able to access from a shell on my Shuttle. Despite the Shuttle being remote, I can browse the OpenStack Dashboard with no issues, and without VirtualBox or VMware consuming resources on my trusty MBP.

OpenStack Summit Packt Book Discount Codes – 30% Off

If you fancy grabbing a copy of the OpenStack Cloud Computing Cookbook, by Kevin Jackson and Cody Bunch, you can use the following 30% discount codes at http://www.packtpub.com when you make your purchase:

30% off Printed Book Discount Code: nieX72Mn7U

30% off eBook Discount Code: k1QxrwyMvD

You can also get 30% off James Denton’s Learning OpenStack Networking (Neutron) with the following codes:

30% off Printed Book Discount Code: luyLRpSQ

30% off eBook Discount Code: IXQ1swn2

As a bonus, James will be book signing at the Rackspace Booth Monday to Wednesday! Stop by the booth for more information!

The OpenStack Architecture Design Guide Book Sprint

It’s been over a week since we were locked up, and held against our will*, writing a book in 5 days on designing and architecting OpenStack installations. I hope, by now, you have managed to get your hands on our hard work? If not, you can grab a copy of the OpenStack Architecture Design Guide here.

I admit, I was cynical of the process. My previous experience of writing books stemmed from the many weeks and months of hammering out the OpenStack Cloud Computing Cookbooks: a few days thinking about TOCs, laying out the chapters and doing research, then the many hours spent writing each chapter – followed by the arduous back-and-forth edits between first drafts and final copies. Writing a book in 5 days is, I know, feasible (it has been done before, twice over, for OpenStack!) – but there’s a difference between being able to write a book in 5 days and writing a useful book in 5 days. With that in mind, I headed over to sunny California, where I met many new faces, all excited and ready for what lay ahead. Challenge accepted.

Day one was an important day. The skeleton of the book gets created, setting the foundation and template for the week’s work. And, like any new situation, understanding people’s intentions, characters and sense of humour is an important part of the process. But we’re good – they seemed normal enough (not quite sure about that Steve Gordon fella… ;) ).

We made very good progress on day one, which put us in a very good position for the rest of the week. With groups formed according to familiarity and expertise, the initial chapters began to take shape, and through the course of the day we ticked off the tasks that had been assigned. By the end of this first day – the shortest, finishing at 6pm – the number of words put to the page, and the generally brilliant attitude from everyone, was beginning to change my cynical mind. Not only were we on track (even at this stage) to fill in the chapters we set out to write, but the content was actually very credible and readable, even at this very early stage.

Day two continued with the purpose laid down the day before. We were rattling through each section, with mumblings of word counts whispering through the camp. Sure, we had some setbacks that sucked – like when Sean lost 2 hours of writing because he simply clicked on a link in a browser – and, as a team, we all felt how painful that was. But we had a lot more successes than the odd heated moment, and the rest of the week saw people ticking off tasks on the board and asking what else needed doing. Despite the very long 12-13 hour days, they seemed to go quickly.

The remaining days saw the start of the edits – the many passes taking the early written text to near what would become the final book. Fuelled by Mountain Dew and delirium – and the lovely supplied kosher food and lunches – the team started focussing on specific tasks rather than on OpenStack topics. Editing by Nick, Beth, Alex, Scott and Sean happened in tandem with diagrams being drawn and interpreted by the very knowledgeable Vinny, whilst the rest of us fixed up content and proofread from high above – literally, in my case, as I had to short-change the system by leaving a day early. Thank you, USA and your airlines, for providing internet access, allowing me to help edit and read the nearly finished book from 31,000ft in the clouds – a fitting end to the week.

So we did it. We started as 12, with a blank sheet of paper, a clean whiteboard and definitely no food at the table.

This book is Open Source. You can edit it and contribute to it like any of the OpenStack documentation. I hope you find it as useful as I found it enjoyable to help write. A huge thank you to the rest of the team – who are just damn clever people in the OpenStack community – as well as Adam and Faith from www.booksprints.net, who make these ridiculous rules, like writing a book in 5 days, possible, and who somehow kept us from going insane and killing each other (or them!).

Thanks to Anne Gentle (Rackspace) + Ken Hui (EMC) for organising this and kicking this event off, Scott Lowe (VMware) – and to VMware for hosting us, Nick Chase (Mirantis), Maish Saidel-Keesing (Cisco), Alexandra Settle (Rackspace), Sean Winn (Cloudscaling), Sean Collins and Anthony Veiga (Comcast), Beth Cohen (Verizon), Steve Gordon, Vinny Valdez and Sebastian Gutierrez (Red Hat).

* it’s all lies. We weren’t locked up.

OpenStack Design Guide Book Sprint: July 7th – July 11th

Thanks to efforts by Ken Hui and Anne Gentle, and help from BookSprints.net, following on the heels of the excellent OpenStack Operations Guide and OpenStack Security Guide book sprints, the OpenStack Foundation has commissioned another, in the form of a Design Guide. A bunch of seasoned architects, developers, operators and authors are heading to sunny Palo Alto in July to take up refuge in VMware’s HQ (thanks Scott and VMware!) whilst we put pen to post-it notes and crayons to walls as we bash out a book that helps OpenStack soothsayers plan and architect their installations.

Your esteemed list of contributors and brains who are up for the challenge:

Ken Hui of Rackspace (soon to be of EMC), kicking things off for us; Scott Lowe of VMware; Nick Chase of Mirantis; Maish Saidel-Keesing of Cisco; Alexandra Settle and myself from Rackspace; Sean Winn from Cloudscaling; Sean Collins and Anthony Veiga of Comcast; Beth Cohen from Verizon; and Steve Gordon and Vinny Valdez of Red Hat.

That’s some pretty top representation from across the OpenStack community (I know, I somehow crept in there – possibly just for the entertainment), and I’m looking forward to meeting these great folk in a few weeks’ time.

Home Rackspace Private Cloud / OpenStack Lab: Part 7 – LBaaS

With a useful OpenStack lab up and running, it’s time to take advantage of some more advanced features. The first I want to look at is adding OpenStack Networking LBaaS (Load Balancing as a Service) to my Rackspace Private Cloud. This is currently a Tech Preview and an unsupported feature of Rackspace Private Cloud v4.2, and not considered for production use at this time. To add it to RPC we simply make a change to the environment and run chef-client across the nodes.

More information on LBaaS and Rackspace Private Cloud can be found here.

Adding LBaaS to Rackspace Private Cloud Lab

1. Edit /opt/base.env.json (or the name of the JSON file describing your environment) and add the following:

 "neutron": {
  "ovs": {
    ...
  },
  "lbaas": {
    "enabled": true
  }
},
...

"horizon": {
  "neutron": {
    "enable_lb": "True"
  }
},

2. Save the file, then load it into the environment:

knife environment from file /opt/base.env.json

3. Now run chef-client on the controllers:

# openstack1
chef-client

# openstack2
knife ssh "role:ha-controller2" chef-client

That’s it, LBaaS is now enabled in our RPC Lab!
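
As an optional sanity check – assuming your neutron CLI is already configured for the environment – the LBaaS agent should now appear in the agent list:

neutron agent-list    # look for a "Loadbalancer agent" entry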

Creating a Load Balancer

The first thing we do is create a load balancer pool.

1. To do this, get the UUID of the private subnet we want our load balancer to live on:

neutron subnet-list

2. Next we create a pool:

neutron lb-pool-create \
    --lb-method ROUND_ROBIN \
    --name mypool \
    --protocol HTTP \
    --subnet-id 19ab172a-87af-4e0f-82e8-3d275c9430ca

3. We can now add members to this pool. For this I’m using two instances running Apache, whose addresses we can find with nova list:

nova list

neutron lb-member-create --address 192.168.1.152 --protocol-port 80 mypool
neutron lb-member-create --address 192.168.1.153 --protocol-port 80 mypool

4. We can now create a health monitor and associate it with the pool. The health monitor tests each member’s availability and controls whether traffic is sent to that member. Note the UUID returned by the create command, as we use it in the associate step:

neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3

neutron lb-healthmonitor-associate 5479a729-ab81-4665-bfb8-992aab8d4aaf mypool

5. With that in place, we now create the load-balanced VIP using the subnet UUID and the name of the pool (mypool). The VIP is the address of the load-balanced pool.

neutron lb-vip-create \
    --name myvip \
    --protocol-port 80 \
    --protocol HTTP \
    --subnet-id 19ab172a-87af-4e0f-82e8-3d275c9430ca mypool

This has taken an IP of 192.168.1.154 from the subnet that we created the load balancer pool on, and we can now use this address to access our load-balanced web pool consisting of the two Apache instances – for example: http://192.168.1.154/
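
To sanity-check the round-robin behaviour, a few repeated requests against the VIP should alternate between the two members (this assumes each instance serves a page that identifies itself):

# Hit the VIP a few times; responses should alternate between members
for i in 1 2 3 4; do curl -s http://192.168.1.154/; done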

Viewing details about the load balancers

To list the load balancer pools, issue the following:

neutron lb-pool-list

To see information about a pool and list its members, issue the following:

neutron lb-pool-show mypool

Horizon

In Horizon, this looks like the following: [screenshots of the Load Balancer Pools, Members and Monitors tabs]

For more information on OpenStack Networking LBaaS, visit here.

Bandwidth monitoring with Neutron and Ceilometer

OpenStack Telemetry (aka Ceilometer) gives you access to a wealth of information – and a particularly interesting stat I wanted access to was the outgoing bandwidth of my Neutron networks. Out of the box, Ceilometer gives you a rolled-up cumulative stat for this with the following:

ceilometer sample-list --meter network.outgoing.bytes

This produces output like the following: [screenshot of the cumulative sample list]

This is fine, but when you’re trying to break the network stats down for something useful, like billing since the beginning of the month, it makes things tricky – even though I’m sure I shouldn’t have to do this! I followed this excellent post http://cjchand.wordpress.com/2014/01/16/transforming-cumulative-ceilometer-stats-to-gauges/ which describes this use case brilliantly (and goes further with Logstash and Elasticsearch) – you can use Ceilometer’s pipeline to transform data into a format that suits your use case. The key pieces of information from that article were as follows:

1. Edit the /etc/ceilometer/pipeline.yaml on your Controller and Compute hosts and add the following lines:

- name: network_stats
  interval: 10
  meters:
      - "network.incoming.bytes"
      - "network.outgoing.bytes"
  transformers:
      - name: "rate_of_change"
        parameters:
            target:
                type: "gauge"
                scale: "1"
  publishers:
      - rpc://

2. Restart the ceilometer-agent-compute on each host:

restart ceilometer-agent-compute

That’s it. We now have a “gauge” for our incoming and outgoing Neutron traffic – meaning each sample is a piece of fixed data applicable to the time range associated with it (e.g. the number of bytes in the last 10-second sample set).

[screenshot of the gauge sample list]

I wanted to see if I could answer this question with the data: how much outgoing bandwidth was used by any given network over a period of time (i.e. since the start of the month)? It could be that I misunderstand the cumulative stats (which tease me), but this was a useful exercise nonetheless! Now here is my ask of you: my bash-fu below can be improved somewhat, and I’d love to see it! My limited knowledge of Ceilometer, plus the lure of Bash, awk and sed, produced a script that outputs the following:

[screenshot of the per-network bandwidth report]

The (OMFG, WTF has Kev written now) script, linked below, loops through each of your Neutron networks and outputs the number of outgoing bytes per network. Just make sure you’ve got ‘bc’ installed to calculate the floating-point numbers. If I’ve used and abused Ceilometer and awk too much, let me know!
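
For illustration only, here’s a minimal sketch of the core idea for a single meter (it assumes the gauge pipeline above and the default table layout of the ceilometer client, where Volume is the fifth pipe-separated column – the full script below does this properly, per network):

# Sum the gauge samples for outgoing bytes since an example start date
ceilometer sample-list --meter network.outgoing.bytes \
    -q 'timestamp>2014-06-01T00:00:00' \
  | awk -F'|' '$4 ~ /gauge/ {sum += $5} END {printf "%.0f bytes\n", sum}'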

https://raw.githubusercontent.com/OpenStackCookbook/OpenStackCookbook/icehouse/bandwidth_report.sh