Monthly Archives: May 2014

Bandwidth monitoring with Neutron and Ceilometer

OpenStack Telemetry (aka Ceilometer) gives you access to a wealth of information – and a particularly interesting stat I wanted access to was outgoing bandwidth of my Neutron networks. Out of the box, Ceilometer gives you a rolled up cumulative stat for this with the following:

ceilometer sample-list --meter network.outgoing.bytes

This produces a table of samples, each row showing a cumulative byte count for a resource.
This is fine, but it makes things tricky when you’re trying to break down the network stats for something useful, like billing since the beginning of the month – even though I’m sure I shouldn’t have to do this! I followed this excellent post, which describes this use case brilliantly (and goes further with Logstash and Elasticsearch) – it shows how you can use Ceilometer’s pipeline to transform data into a format that suits your use case. The key pieces of information from that article were as follows:

1. Edit the /etc/ceilometer/pipeline.yaml of your Controller and Compute hosts and add in the following pipeline (the indentation matters):

    -
        name: network_stats
        interval: 10
        meters:
            - "network.incoming.bytes"
            - "network.outgoing.bytes"
        transformers:
            - name: "rate_of_change"
              parameters:
                  target:
                      type: "gauge"
                      scale: "1"
        publishers:
            - rpc://

2. Restart the ceilometer-agent-compute on each host

restart ceilometer-agent-compute

That’s it. We now have a “gauge” for our incoming and outgoing Neutron traffic – which means each sample is a piece of fixed data applicable to the time range associated with it (e.g. the number of bytes in the last 10-second sample set).
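Once the samples are gauges, Ceilometer can total them over an arbitrary window. A minimal sketch, assuming the Icehouse-era ceilometer client and its timestamp query syntax (the date arithmetic is just a convenience):

```shell
# Start of the current month, UTC.
START=$(date -u +%Y-%m-01T00:00:00)

# Sum the 10-second outgoing-byte gauges since then. Guarded so the
# snippet is harmless on a machine without the ceilometer client.
if command -v ceilometer >/dev/null 2>&1; then
    ceilometer statistics -m network.outgoing.bytes \
        -q "timestamp>=${START}"
fi
```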



I wanted to see if I could answer this question with the data: how much outgoing bandwidth was used by any given network over a period of time (i.e. since the start of the month)? It could be that I misunderstand the cumulative stats (which tease me), but this was a useful exercise nonetheless! Now here is my ask of you: my bash-fu below can be improved somewhat, and I would love to see it! My limited knowledge of Ceilometer, plus the lure of Bash, awk and sed, produced a script that outputs the following:


The (OMFG WTF has Kev written now) script, link below, loops through each of your Neutron networks and outputs the number of outgoing bytes per Neutron network. Just make sure you’ve got ‘bc’ installed to calculate the floating-point numbers. If I’ve used and abused Ceilometer and awk too much, let me know!


OpenStack Cloud Computing Cookbook – Summit Special 50% Off!

While you are enjoying the OpenStack Summit in Atlanta, why not treat your bookcase to a new copy of the OpenStack Cloud Computing Cookbook by Kevin Jackson and Cody Bunch?

The publisher, Packt, is offering 50% off the DRM-free ebook and the printed copy with the following codes at their website:

50% off ebook, use promo code: AtlantaE50

50% off printed, use promo code: AtlantaP50 

We also have a re-publication of the 1st chapter, which fixes a few issues; you can find it here for free [PDF].

Whilst this book was penned for Grizzly, we maintain the latest code so you can reference the book and still have the latest and greatest from OpenStack in Icehouse. To get the latest supporting scripts, based on Icehouse, head over to

More information on the book can be found at

Hope you are enjoying the OpenStack Summit in Atlanta, and I’ll hopefully get to meet more of you in Paris in November this year for the K Summit!

OpenStack Cloud Computing Cookbook: The Icehouse Scripts!

So you may have realised by now that Cody and I wrote a book to help you all get up to speed with the ubiquitous Open Source Cloud Computing platform, OpenStack. Like any good tech book, it’s full of “At the time of writing…” which covers our ass when people type in things on newer versions. For OpenStack, this happens quite often and the recent release of Icehouse has seen some changes to what we presented in the book. To overcome some of the shortcomings of publishing we maintain scripts that follow the processes and commands in the book as much as possible. You can find these at our Git Repo @


What you can do is checkout the supporting scripts and run a multi-node OpenStack setup using Git, Vagrant and VirtualBox.

Ensure you have Git, Vagrant (1.5 or later) and VirtualBox installed.

Also ensure you’ve something decent to run this all on. A PC or Mac with 8GB of RAM should be enough – but my Mac has 16GB and gets toasty, so YMMV.

The Icehouse OpenStack Cookbook scripts also give you a suggestion to install vagrant-cachier. We use this to speed up installations of the VMs as we pull down a lot of packages when installing OpenStack.

To install vagrant-cachier (ensure you have Vagrant 1.5+) run the following:

vagrant plugin install vagrant-cachier

The Icehouse Vagrant scripts have been updated to use Ubuntu’s latest and greatest, Trusty Tahr 14.04. Icehouse is default in this release.


When you run these scripts you will end up with the following:

4 x VMs: Controller, Network, Compute and Cinder.

Host network (on eth1): Controller | Compute | Network | Cinder

Provider (Neutron) network (on eth2)

External network (on eth3): Controller | Compute | Network | Cinder

You’d interact with the OpenStack API on the host network, and when you create an external floating network, you’d create it on the external network.

Icehouse is here!

After some hacking and slashing, and a little bit of cleanup here and there, the scripts have now been updated to work for Icehouse. You want some? Follow the steps below:

  1. Grab the code from Github
    git clone
  2. Checkout the Icehouse version
    cd OpenStackCookbook
    git checkout icehouse
  3. Run all the things with a simple single liner
    vagrant up
  4. Vagrant-Cachier will prompt you for your system admin password as it modifies /etc/exports to allow the cached areas to be presented to VirtualBox. Once done sit back for 20 mins, head over to Amazon and purchase a book, then come back and play with OpenStack!
  5. Once done you will end up with 4 machines – check they’re running
    vagrant status

    The output should look like below:

    Current machine states:
    controller running (virtualbox)
    network running (virtualbox)
    compute running (virtualbox)
    cinder running (virtualbox)
    This environment represents multiple VMs. The VMs are all listed
    above with their current state. For more information about a specific
    VM, run `vagrant status NAME`.
  6. Connect to the controller
    vagrant ssh controller
  7. The credentials have been written to your /vagrant directory [which is the directory of your git clone on your host presented to all your VMs]. Source them in with
    . /vagrant/openrc

    (Credentials: username=admin, password=openstack, tenant=cookbook)

  8. Now you can use OpenStack as it was intended. To quickly create some networking and spin up an instance, run the following (check out the script to see how you can use OpenStack once it’s running)
  9. You can always visit the OpenStack Dashboard, logging in with the admin/openstack credentials.
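The script in step 8 isn’t reproduced here, but a hedged sketch of the kind of commands it runs looks like this – the names (cookbook_net, test1) and the CIDR are invented for illustration, and the Icehouse-era neutron/nova clients are assumed:

```shell
# Source the credentials written by the Vagrant provisioning
# (guarded so the sketch is harmless outside the controller VM).
if [ -f /vagrant/openrc ]; then
    . /vagrant/openrc
fi

# Illustrative names; the book's script may differ.
NET=cookbook_net
CIDR=10.200.0.0/24

if command -v neutron >/dev/null 2>&1; then
    # Create a tenant network and a subnet on it...
    neutron net-create "$NET"
    neutron subnet-create "$NET" "$CIDR" --name "${NET}_subnet"

    # ...grab the network's ID from the listing table...
    NET_ID=$(neutron net-list | awk -v n="$NET" '$0 ~ n { print $2 }')

    # ...then boot a tiny instance attached to it.
    nova boot --flavor m1.tiny --image cirros \
        --nic net-id="$NET_ID" test1
fi
```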

Any problems – leave us a note! Happy OpenStacking.