Category Archives: Operating Systems

I have an OpenStack environment, now what? Loading Images into Glance #OpenStack 101

With an OpenStack environment up and running based on an OpenStack Ansible Deployment, now what?

Using Horizon with OSAD

First, we can log into Horizon (point your web browser at your load balancer address, the one labelled external_lb_vip_address in /etc/openstack_deploy/openstack_user_config.yml):

global_overrides:
  internal_lb_vip_address: 172.29.236.107
  external_lb_vip_address: 192.168.1.107
  lb_name: haproxy

Where are the username/password credentials for Horizon?

In step 4.5 of https://openstackr.wordpress.com/2015/07/19/home-lab-2015-edition-openstack-ansible-deployment/ we randomly generated all the passwords used by OpenStack. This also generated a random password for the 'admin' user. This user is the equivalent of 'root' on a Linux system, so generating a strong password is highly recommended. But to use that password, we first need to retrieve it from a file.

The easiest place to find this password is on the deployment host itself, as that is where we wrote out the passwords. Take a look in the /etc/openstack_deploy/user_secrets.yml file and find the line that says 'keystone_auth_admin_password'. This random string of characters is the 'admin' user's password that you can use for Horizon:

keystone_auth_admin_password: bfbbb99316ae0a4292f8d07cd4db5eda2578c5253dabfa0
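
A quick way to pull that line out on the deployment host is a simple grep:

grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml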

(Screenshot: logging into Horizon as the admin user)

The Utility Container and openrc credentials file

Alternatively, you can grab the ‘openrc‘ file from a ‘utility’ container which is found on a controller node. To do this, carry out the following:

  1. Log into a controller node and change to root. In my case I can choose either openstack4, openstack5 or openstack6. Once on the node, I can list the containers running on it as follows:
    lxc-ls -f

    This brings back output like the following:
    (Screenshot: lxc-ls -f output on openstack4)

  2. Locate the name of the utility container and attach to it as follows:
    lxc-attach -n controller-01_utility_container-71cceb47
  3. Here you will find the admin user’s credentials in the /root/openrc file:
    cat openrc
    
    
    
    # Do not edit, changes will be overwritten
    # COMMON CINDER ENVS
    export CINDER_ENDPOINT_TYPE=internalURL
    # COMMON NOVA ENVS
    export NOVA_ENDPOINT_TYPE=internalURL
    # COMMON OPENSTACK ENVS
    export OS_ENDPOINT_TYPE=internalURL
    export OS_USERNAME=admin
    export OS_PASSWORD=bfbbb99316ae0a4292f8d07cd4db5eda2578c5253dabfa0
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://172.29.236.107:5000/v2.0
    export OS_NO_CACHE=1
  4. To use this, we simply source this into our environment as follows:
    . openrc

    or

    source openrc
  5. And now we can use the command line tools such as nova, glance, cinder, keystone, neutron and heat.
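
With the openrc file sourced, a few read-only commands make a handy sanity check that the credentials and services are working (glance image-list will be empty until we load images in below):

keystone tenant-list
nova service-list
glance image-list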

Loading images into Glance

Glance is the Image Service. This service provides you with a list of available images you can use to launch instances in OpenStack. To do this, we use the Glance command line tool.

There are plenty of public images available for OpenStack. You essentially grab them from the internet, and load them into Glance for your use. A list of places for OpenStack images can be found below:

CirrOS test image (can use username/password to log in): http://download.cirros-cloud.net/

Ubuntu images: http://cloud-images.ubuntu.com/

Windows 2012 R2: http://www.cloudbase.it/

CentOS 7: http://cloud.centos.org/centos/7/images/

Fedora: https://getfedora.org/en/cloud/download/

To load these, log into a Utility container as described above and load them into the environment as follows.

Note that you can either grab the files from the website, save them locally and upload them to Glance, or have Glance grab the files and load them into the environment directly from the site. I'll describe both, as you will have to load from a locally saved file for Windows, since you must accept an EULA before gaining access to it.

CirrOS

glance image-create \
  --name "cirros-image" \
  --disk-format qcow2 \
  --container-format bare \
  --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img \
  --is-public True \
  --progress

You can use a username and password to log into CirrOS, which makes this tiny just-enough-OS great for testing and troubleshooting. Username: cirros, Password: cubswin:)

Ubuntu 14.04

glance image-create \
  --name "trusty-image" \
  --disk-format qcow2 \
  --container-format bare \
  --copy-from http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img \
  --is-public True \
  --progress

You’d specify a keypair to use when launching this image, as there is no default username or password on these cloud images (that would be a disastrous security fail). The username to log into these cloud images is 'ubuntu', and the private key matching the public key specified at launch gets you access.
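
If you don't already have a keypair, creating one and using it against an instance booted from this image looks roughly like this (the key name and instance IP are just examples; launching instances is covered in a later topic):

nova keypair-add trusty-key > trusty-key.pem
chmod 0600 trusty-key.pem
# after launching an instance with --key-name trusty-key:
ssh -i trusty-key.pem ubuntu@<instance-ip>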

Windows 2012 R2

For Windows, you can download an evaluation copy of Windows 2012 R2 and to do so you need to accept a license. Head over to http://www.cloudbase.it/ and follow the instructions to download the image.

Once downloaded, you need to get this to OpenStack. As we’re using the Utility container for our access we need to get the image so it is accessible from there. There are alternative ways such as installing the OpenStack Client tools on your client which is ultimately how you’d use OpenStack. For now though, we will copy to the Utility container.

  1. Copy the Windows image to the Utility Container. All of the containers have an IP on the container ‘management’ network (172.29.236.0/24 in my lab). View the IP address of the Utility container and use this IP. This network is available via my deployment host so I simply secure copy this over to the container:

    (performed as root on my deployment host as that has SSH access using keypairs to the containers)

    scp Windows2012R2.qcow2 root@172.29.236.85:
  2. We can then upload this to Glance as follows, note the use of --file instead of --copy-from:
    glance image-create \
      --name "windows-image" \
      --disk-format qcow2 \
      --container-format bare \
      --file ./Windows2012R2.qcow2 \
      --is-public True \
      --progress

    This will take a while as the Windows images are naturally bigger than Linux ones. Once uploaded it will be available for our use.
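
A quick check that the upload completed and the image is active:

glance image-list
glance image-show windows-image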

Access to Windows instances will be by RDP. Although SSH keypairs are not used by this Windows image for RDP access, a keypair is still required to retrieve the randomly generated 'Administrator' password, so specify a keypair when launching the Windows instance.

Access to the Administrator password is then retrieved using the following once you've launched an instance (passing the private key that matches the keypair specified at launch):

nova get-password myWindowsInstance .ssh/id_rsa
Launching instances will be covered in a later topic!

Remote OpenStack Vagrant Environment

To coincide with the development of the 3rd Edition of the OpenStack Cloud Computing Cookbook, I decided to move my vagranting from the ever-increasing temperatures of my MBP to a Shuttle capable of spinning up multi-node OpenStack environments in minutes. I've found this very useful for convenience and speed, so I'm sharing the small number of steps to help you quickly get up to speed too.

The spec of the Shuttle is:

  • Shuttle XPC SH87R6
  • Intel i5 3.3GHz i5-4590
  • 2 x Crucial 8GB 1600MHz DDR3 Ballistix Sport
  • 1 x Seagate Desktop SSHD 1000GB 64MB Cache SATA 6 Gb/s 8GB SSD Cache Hybrid HDD

Running on here is Ubuntu 14.04 LTS, along with VirtualBox 4.3 and VMware Workstation 10. I decided to give one of those hybrid HDDs a spin, and can say the performance is pretty damn good for the price. All in all, this is a quiet little workhorse sitting under my desk.

To have this as part of my work environment (read: my MBP), I connect to it using SSH and X11, courtesy of XQuartz. XQuartz, once installed on the Mac, lets me access the remote GUI on my Shuttle as you'd expect from X11 (ssh -X …). This is useful for running the GUIs of VMware Workstation and VirtualBox, and it also gives me a hop into the virtual OpenStack environment that exists only within my Shuttle, by letting me run remote web browsers that have the necessary network access to it.
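
In practice there's nothing exotic about it; the hostname and username below are just placeholders for my setup:

ssh -X user@shuttle
# then, inside that session:
firefox &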

(Screenshot: Firefox running remotely over X11)

With this all installed and accessible on my network, I grab the OpenStack Cloud Computing Cookbook scripts (that we’re updating for Juno and the in-progress 3rd Edition) from GitHub and can bring up a multi-node Juno OpenStack environment running in either VirtualBox or VMware Workstation in just over 15 minutes.
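
The workflow is roughly the following (the repository URL here is indicative of the cookbook's supporting scripts rather than taken from this post; use whichever branch you're following):

git clone https://github.com/OpenStackCookbook/OpenStackCookbook.git
cd OpenStackCookbook
vagrant up                                # VirtualBox is the default provider
vagrant up --provider=vmware_workstation  # or VMware Workstation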

Once OpenStack is up and running, I can then run the demo.sh script that we provide to launch 2 networks (one private, one public), with an L3 router providing floating IPs, and an instance that I’m able to access from a shell on my Shuttle. Despite the Shuttle being remote, I can browse the OpenStack Dashboard with no issues, and without VirtualBox or VMware consuming resources on my trusty MBP.

Home Rackspace Private Cloud / OpenStack Lab: Part 2

In the first part of this series I introduced the kit that makes up my home lab.  There’s nothing unusual or special in the kit list, but it certainly is affordable and makes entry into an OpenStack world very accessible.

This post explains some more basics of my networking: DHCP, DNS, Proxy and TFTP.

Rather than settle for the DHCP and DNS services provided by my broadband provider's ADSL router, and given that I want to do more than simply turn on a laptop or tablet and surf the 'net, setting up a separate DHCP and DNS service is important and offers the most flexibility on your network. This becomes important later on when setting up Rackspace Private Cloud using Chef: its requirement for consistent FQDNs for its nodes is not something you want to leave to chance.

DHCP and DNS: Dnsmasq

To provide DHCP and DNS on my network, I utilise Dnsmasq on my QNAP TS-210 NAS (nas.home.mydomain.co.uk / 192.168.1.1).  Installation on the QNAP is as simple as enabling the Optware plugin, which allows me to install Dnsmasq using the following:

ipkg update
ipkg install dnsmasq

Configuration of Dnsmasq is then done in /opt/etc/dnsmasq.conf (/opt is where the QNAP places optional packages).

The uncommented sections of the file (i.e. non-defaults):

# Use an alternative resolv.conf
# I use this to point at a real external DNS service (i.e. don't point to itself!)
resolv-file=/opt/etc/resolv.conf

# Ensure the areas that Dnsmasq wants to read/write to/from is set correctly
user=admin
group=everyone 

# for queries of hosts with short names, add on the domain name
expand-hosts

# my domain name to ensure FQDN queries all work as expected
domain=home.mydomain.co.uk

# DHCP Setup
# First the dynamic range for those that join my network
dhcp-range=192.168.1.20,192.168.1.50,12h

# Some static entries (i.e. the servers openstack1-5)
dhcp-host=64:70:02:10:4e:00,192.168.1.101,infinite
dhcp-host=64:70:02:10:6d:66,192.168.1.102,infinite
dhcp-host=64:70:02:10:48:11,192.168.1.103,infinite
dhcp-host=64:70:02:10:4e:22,192.168.1.104,infinite
dhcp-host=64:70:02:10:88:33,192.168.1.105,infinite

# A laptop that has a LAN and Wifi adapter - give it the same IP regardless
dhcp-host=5c:96:aa:aa:aa:aa,a8:20:bb:bb:bb:bb,192.168.1.19

# Some options for setting the gateway (which is my ADSL router IP)
# Some clients don't understand some DHCP options, so present both ways
dhcp-option=3,192.168.1.254
dhcp-option=option:router,192.168.1.254

# Ensure this is writeable by Dnsmasq
dhcp-leasefile=/opt/var/lib/misc/dnsmasq.leases

# When we're PXE Booting, where's the TFTP service (and filename to boot when found)
dhcp-boot=pxeboot/pxelinux.0,nas2,192.168.1.2
# And finally, this is the authoritative DHCP server on network
dhcp-authoritative

The above basically tweaks a default Dnsmasq setup to work on my network.  I have a small DHCP range for any device I don't care too much about (tablets, laptops, phones, Blu-Ray, TV, etc.). I then have a static block that matches my servers, so when they boot up they will always get the same IP.  And I have a line that gives a laptop the same IP whether I connect via the LAN or WiFi.  This allows me to do some basic iptables filtering on hosts based on IP, trusting a particular IP if need be.

For DNS, Dnsmasq relies on /etc/hosts and reads it on startup.  My hosts file on my QNAP is as follows:

127.0.0.1 localhost localhost
192.168.1.1 NAS NAS
192.168.1.2 NAS2 NAS2
192.168.1.101 openstack1.home.mydomain.co.uk openstack1
192.168.1.102 openstack2.home.mydomain.co.uk openstack2
192.168.1.103 openstack3.home.mydomain.co.uk openstack3
192.168.1.104 openstack4.home.mydomain.co.uk openstack4
192.168.1.105 openstack5.home.mydomain.co.uk openstack5

For the vast majority of devices on my network, I don't care enough to provide any entries.  For the devices that I rely on for services and OpenStack, I add entries here.  The result is a very basic Dnsmasq setup that performs an important, fundamental job on my network.
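
With Dnsmasq restarted after any change, a couple of quick checks confirm that both short and fully-qualified names resolve and that leases are being handed out (the last command is run on the NAS itself):

dig +short openstack1 @192.168.1.1
dig +short openstack1.home.mydomain.co.uk @192.168.1.1
cat /opt/var/lib/misc/dnsmasq.leases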

You will also note a line pointing to my second NAS device, denoted by the dhcp-boot line.  This instructs any machine that is PXE booting to look at that IP for the boot image.

TFTP from Dnsmasq + Kickstart

Now we have the basic services running on the network, we can take advantage of another feature of Dnsmasq: TFTP.  This allows me to PXE/network boot my servers with Ubuntu 12.04 and, by running some post-install commands once Ubuntu has been laid down, have the environment ready for a Rackspace Private Cloud installation.

Due to the proximity of my network services to my servers, and because a WiFi link is involved, I opted to run TFTP on another QNAP NAS (my QNAP TS-121, nas2.home.mydomain.co.uk / 192.168.1.2), which is connected to the same Gigabit switch as my servers.  Dnsmasq was installed the same way on this second QNAP (by way of the Optware plugin and installation of the dnsmasq package using the ipkg command).  The contents of /opt/etc/dnsmasq.conf on this second NAS is as follows (everything else is commented-out defaults, as before):

# Ensure the areas that Dnsmasq wants to read/write to/from is set correctly
user=admin
group=everyone

# Enable TFTP service
enable-tftp

# Where to put the pxeboot images (our Ubuntu netboot image)
tftp-root=/share/HDA_DATA/Public

We now have a TFTP service, but it's no good without any images to boot from.  As I'm interested in running RPC on Ubuntu 12.04, I grab an Ubuntu 12.04 ISO and copy the /images/netboot/pxeboot directory to /share/HDA_DATA/Public.  If you don't have an Ubuntu 12.04 ISO handy, you can grab this directory and its files from http://archive.ubuntu.com/ubuntu/dists/precise-updates/main/installer-amd64/current/images/netboot/
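
Alternatively, if you'd rather pull the files straight down onto the NAS rather than copy them from an ISO, something like the following gives the same layout (netboot.tar.gz contains the pxelinux files and the ubuntu-installer directory):

cd /share/HDA_DATA/Public
mkdir -p pxeboot && cd pxeboot
wget http://archive.ubuntu.com/ubuntu/dists/precise-updates/main/installer-amd64/current/images/netboot/netboot.tar.gz
tar zxf netboot.tar.gz
rm netboot.tar.gz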

You will end up with a directory called /share/HDA_DATA/Public/pxeboot with the following files in:

pxelinux.0 -> ubuntu-installer/amd64/pxelinux.0 
pxelinux.cfg -> ubuntu-installer/amd64/pxelinux.cfg/
ubuntu-installer/
version.info

Booting this as it stands will give you a graphical installer, allowing you to install Ubuntu interactively.  This is fine, but we prefer to automate all the things. To do this we edit the ubuntu-installer/amd64/boot-screens/txt.cfg file to add a menu choice that performs an automated install of Ubuntu.  Mine is as follows:

default ks
 label ks
 menu label ^Kickstart Install
 menu default
 kernel ubuntu-installer/amd64/linux
 append vga=788 initrd=ubuntu-installer/amd64/initrd.gz
 http_proxy=http://192.168.1.2:3128/ ks=ftp://192.168.1.2/Public/ubuntu/ks.cfg
 -- quiet
 label install
 menu label ^Install
 kernel ubuntu-installer/amd64/linux
 append vga=788 initrd=ubuntu-installer/amd64/initrd.gz -- quiet

Be careful with that first append line under the Kickstart Install stanza – it’s a single line, line-wrapped for this blog.

With this in place I now get a couple of options when I PXE boot a server (pressing F12 during the N40L boot process), one of which is the Kickstart Install. This option specifies a few things on the boot "append" line, particularly the http_proxy option and the ks option.  The http_proxy option is fairly obvious: we're passing the address and port of a proxy server to use as part of the installation.  I'm specifying the proxy server running on my second NAS (nas2) at 192.168.1.2, where Squid has been set up to cache large objects so that it can cache deb packages. The ks option specifies a kickstart configuration file to run as part of the installation, automating everything from package selection to disk formatting. This kickstart file is served from an anonymous FTP share on the same nas2 device and looks like the following:

lang en_GB
langsupport en_GB
keyboard gb
timezone Etc/UTC
text
install
skipx
reboot
url --url http://gb.archive.ubuntu.com/ubuntu/ --proxy http://192.168.1.2:3128/
rootpw --disabled
user openstack --fullname "openstack" --password openstack
authconfig --enableshadow --enablemd5
clearpart --all --initlabel
zerombr yes
part /boot --fstype=ext2 --size=64
part swap --size=1024
part / --fstype=ext4 --size=1 --grow
bootloader --location=mbr
firewall --disabled
%packages
ubuntu-minimal
openssh-server
screen
curl
wget
sshpass
git
%post
apt-get update
apt-get upgrade -y
# setup locales
locale-gen en_GB.UTF-8
update-locale LANG="en_GB.UTF-8"
echo 'LANG=en_GB.UTF-8' >> /etc/environment
echo 'LC_ALL=en_GB.UTF-8' >> /etc/environment
# Set up root keys (all use same root key - NOT FOR PROD!)
wget -O /tmp/root-ssh-key.tar.gz ftp://192.168.1.2/Public/ubuntu/root-ssh-key.tar.gz
cd /target
tar zxf /tmp/root-ssh-key.tar.gz
# Pull down installer for later use
wget -O /root/install-openstack.sh ftp://192.168.1.2/Public/ubuntu/install-openstack.sh
chmod 0700 /root/install-openstack.sh
# Convenient script to power off all my machines
wget -O /root/all-off.sh ftp://192.168.1.2/Public/ubuntu/all-off.sh
chmod 0700 /root/all-off.sh
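
Before kicking a server, it's worth checking from a machine on the server LAN that both the Squid proxy and the FTP-hosted kickstart file respond; something along these lines will do:

# Does Squid proxy requests through to the Ubuntu archive?
curl -x http://192.168.1.2:3128 -I http://gb.archive.ubuntu.com/ubuntu/
# Is the kickstart file reachable over anonymous FTP?
curl ftp://192.168.1.2/Public/ubuntu/ks.cfg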

A few things: this is clearly intended for a lab, so feel free to edit and improve! The result should be something that is convenient to you through automation. I set up a user called openstack, as Ubuntu requires a user other than just root to be created. I also pull down some scripts and items useful for my setup, one of which is a tarballed /root/.ssh/ directory. This has keys and .ssh/authorized_keys already set up, ready for when we get to the Rackspace Private Cloud installation, which uses Chef and relies on being able to SSH between the machines.
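
For completeness, that tarball is nothing more than a pre-built root/.ssh directory; building one looks roughly like this (a single shared key, lab use only, with the result copied to the FTP share on nas2):

mkdir -p root/.ssh
ssh-keygen -t rsa -N "" -f root/.ssh/id_rsa
cp root/.ssh/id_rsa.pub root/.ssh/authorized_keys
chmod 700 root/.ssh
chmod 600 root/.ssh/id_rsa root/.ssh/authorized_keys
tar zcf root-ssh-key.tar.gz root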

At this point, after PXE booting the 5 servers in my home lab, I have Ubuntu 12.04 installed and root keys in place, ready for Rackspace Private Cloud to be installed. On my ADSL setup, kicking the 5 machines takes about 15 minutes, which is an acceptable time to rebuild my lab and allows me to try various installation runs; speed and automation are important for any lab.
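
The all-off.sh script pulled down in the %post section above is equally trivial; a sketch of it, using the hostnames from my DNS setup, looks like this:

#!/bin/bash
# Power off all lab nodes over SSH (shared root keys, see above)
for host in openstack1 openstack2 openstack3 openstack4 openstack5; do
  ssh -o StrictHostKeyChecking=no root@${host} poweroff &
done
wait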

In the next post I’ll cover off the steps to get an HA Rackspace Private Cloud installed using the Rackspace Chef cookbooks.

Home Rackspace Private Cloud / OpenStack Lab: Part 1

Over the past year I've been using a home lab for quick, hands-on testing of OpenStack and Rackspace Private Cloud, and a number of people have requested information on the setup.  Over the next few blog posts I'm going to explain what I've got. This serves two purposes: documenting my own setup, and hopefully providing information that other people might find useful (and not everything is about OpenStack).

This first post is about the kit involved and how it is set up.  In subsequent posts I'll go into further detail and, finally, the installation of Rackspace Private Cloud.

The servers

5 x HP MicroServer N40L

The N40L is an incredibly cheap, low-power server with 4 SATA bays (plus a CD-ROM bay) and a supplied 250GB SATA drive. It has a single dual-core AMD Turion II processor that supports hardware virtualisation.  It has been superseded by the HP MicroServer N54L and is often found with cashback deals, meaning these usually come in under £130.

There seems to be some caution needed when choosing memory for these, with the documentation reporting they support up to 8GB.  I've read of people successfully running 16GB, and through my own trial (I grabbed the cheapest memory I could get) I found it worked.

When choosing NICs and other expansion cards, be aware that you need to use low-profile ones.  The NICs I added to mine are low-profile, but the metal backing plate isn't.  A quick email to TP-Link customer services will get you some low-profile backing plates free of charge.

Network Attached Storage

I have 2 QNAP NAS devices.  One functions as my main network storage (nas / 192.168.1.1) with 2 drives in, running DHCP for my home subnet, DNS for all connected devices, and a proxy (primarily used to compensate for the slow 6-7Mbps ADSL speed I get when installing packages on my servers).  The second (nas2 / 192.168.1.2) acts as a TFTP server and proxy for my servers, as well as providing replication/backup for my primary NAS.  The reason I run a proxy and TFTP next to my servers, rather than on the main NAS, is the wireless link between my servers and my router.  Although WiFi speeds are OK, it's a much more efficient setup (and I have 2 floors between my servers and WiFi router).  Powerline adapters? I tried them, but because my home has an RCD (Residual Current Device), they proved useless.

  • nas
    • QNAP TS-210 (QTS 4.0.2)
    • 2 x Western Digital Caviar Green 500GB SATA 6Gb/s 64MB Cache – OEM (WD5000AZRX)
    • Raid 1 EXT4
    • eth0 (onboard) 192.168.1.1
    • DHCP (Dnsmasq)
    • DNS (Dnsmasq)
    • Proxy (Squid)
  • nas2
    • QNAP TS-121 (QTS 4.0.2)
    • 1 x Western Digital Caviar Green 500GB SATA 6Gb/s 64MB Cache – OEM (WD5000AZRX)
    • EXT4
    • eth0 (onboard) 192.168.1.2
    • TFTP (Dnsmasq)
    • Proxy (Squid)

Network Kit

Essentially I have 2 parts to my network, separated by 2 floors of the house and connected using WiFi bridging, all on a 192.168.1.0/24 subnet.  I have unmanaged switches connecting the servers and NAS, so there's nothing here that's highly exciting, but it's presented for clarity and completeness (and is useful if you're thinking of WiFi-bridging 2 parts of your network together).

  • TP-Link TL-WA901ND Wireless-N PoE Access Point (300Mbps)
    • Bridge Mode
    • Connects LAN based servers to wireless network/clients
  • TP-Link TD-W8980 N600 Dual Band Wireless ADSL Modem Router
    • WiFi Router (2.4GHz + 5GHz)
    • ADSL Router disabled (for now)
    • DHCP/DNS disabled (Dnsmasq used instead)
  • TP-Link TL-SG1024 24-port Ethernet Switch
    • 24-Port Switch connecting servers to NAS and Wifi Bridge (TL-WA901ND)

(I think I should get sponsorship from TP-Link for this post!)

Overall, the network looks like the diagram below. Hopefully, having this detailed background info will aid you in setting up your own OpenStack environment, big or small.

Home Lab Network Diagram

In the next blog post I’ll cover QNAP Dnsmasq Configuration providing DHCP, DNS and TFTP for my network allowing me to PXE boot my N40L servers to kick Ubuntu and Rackspace Private Cloud.

OpenStack Diablo, updates and work in progress!

It has been a while since I blogged, and in that time OpenStack has come on leaps and bounds, with Diablo being the latest official release. This will change as I work pretty much full-time on testing OpenStack as an end-user (with a day job as an architect), based on Diablo. This will also help with some book projects that are in the pipeline, for which I'm very humbled and excited. I'll blog my experiences as I go along; after all, learning from my experiences of using OpenStack is the reason you've stumbled upon this corner of the internet in the first place.
The project I'm working on will be based on Ubuntu running the latest release of OpenStack, Diablo (2011.3). I'll be investigating Crowbar from Dell to see how remote bare-metal provisioning of OpenStack is coming along: a crucial element for adoption in established enterprises, where rolling out enterprise-class software this way is the norm. I'll try to squeeze in Juju too. Most important, though, is catching up on the raft of projects flowing through OpenStack, from Keystone for authentication to Quantum (although probably more relevant to Essex as it develops), as well as on where Swift, Glance and the Dashboard are.

Running OpenStack under VirtualBox – A Complete Guide (Part 1)

UPDATE: I’ve been working on a new version of the script which can be used to create an OpenStack host running on Ubuntu 12.04 Precise Pangolin and the Essex release.
I've now got a video to accompany this, which is recommended over this guide.
Head over to http://uksysadmin.wordpress.com/2012/03/28/screencast-video-of-an-install-of-openstack-essex-on-ubuntu-12-04-under-virtualbox/

Running OpenStack under VirtualBox allows you to have a complete multi-node cluster that you can access and manage from the computer running VirtualBox as if you’re accessing a region on Amazon.
This is a complete guide to setting up a VirtualBox VM running Ubuntu, with OpenStack running on this guest and an OpenStack instance running, accessible from your host.

Part 1 – OpenStack on a single VirtualBox VM with OpenStack instances accessible from host

The environment used for this guide

  • A 64-bit Intel Core i7 laptop, 8GB RAM.
  • Ubuntu 10.10 Maverick AMD64 (The “host”)
  • VirtualBox 4
  • Access from host running VirtualBox only (so useful for development/proof of concept)

The proposed environment

  • OpenStack “Public” Network: 172.241.0.0/25
  • OpenStack “Private” Network: 10.0.0.0/8
  • Host has access to its own LAN, separate to this on 192.168.0.0/16 and not used for this guide

The Guide

  • Download and install VirtualBox from http://www.virtualbox.org/
  • Under Preferences… Network…
  • Add/Edit Host-only network so you have vboxnet0. This will serve as the “Public interface” to your cloud environment
    • Configure this as follows
      • Adapter
        • IPv4 Address: 172.241.0.100
        • IPv4 Network Mask: 255.255.255.128
      • DHCP Server
        • Disable Server
    • On your Linux host running VirtualBox, you will see an interface created called ‘vboxnet0’ with the address specified as 172.241.0.100. This will be the IP address your OpenStack instances will see when you access them.
    • Create a new Guest
      • Name: Cloud1
        • OS Type: Linux
        • Version: Ubuntu (64-Bit)
      • 1024Mb Ram
      • Boot Hard Disk
        • Dynamically Expanding Storage
        • 8.0Gb
      • After this initial set up, continue to configure the guest
        • Storage:
          • Edit the CD-ROM so that it boots Ubuntu 10.10 Live or Server ISO
          • Ensure that the SATA controller has Host I/O Cache Enabled (recommended by VirtualBox for EXT4 filesystems)
        • Network:
          • Adapter 1
            • Host-only Adapter
            • Name: vboxnet0
          • Adapter 2
            • NAT
            • This will provide the default route to allow the VM to access the internet to get the updates, OpenStack scripts and software
        • Audio:
          • Disable (just not required)
    • Power the guest on and install Ubuntu
    • For this guide I’ve statically assigned the guest with the IP: 172.241.0.101 for eth0 and netmask 255.255.255.128.  This will be the IP address that you will use to access the guest from your host box, as well as the IP address you can use to SSH/SCP files around.
    • Once installed, run an update (sudo apt-get update && sudo apt-get upgrade) then reboot
    • If you’re running a desktop, install the Guest Additions (Device… Install Guest Additions, then click on Places and select the VBoxGuestAdditions CD and follow the Autorun script), then Reboot
    • Install openssh-server
      • sudo apt-get -y install openssh-server
    • Grab this script to install OpenStack
      • This will set up a repository (ppa:nova/trunk) and install MySQL server where the information regarding your cloud will be stored
      • The options specified on the command line match the environment described above
      • wget https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
    • Run the script (as root/through sudo)
      • sudo bash ./OSinstall.sh -A $(whoami)
    • Run the post-configuration steps
      • ADMIN=$(whoami)
        sudo nova-manage user admin ${ADMIN}
        sudo nova-manage role add ${ADMIN} cloudadmin
        sudo nova-manage project create myproject ${ADMIN}
        sudo nova-manage project zipfile myproject ${ADMIN}
        mkdir -p cloud/creds
        cd cloud/creds
        unzip ~/nova.zip
        . novarc
        cd
        euca-add-keypair openstack > ~/cloud/creds/openstack.pem
        chmod 0600 cloud/creds/*

    Congratulations, you now have a working Cloud environment waiting for its first image and instances to run, with a user you specified on the command line (yourusername), the credentials to access the cloud and a project called ‘myproject’ to host the instances.

    • You now need to ensure that you can access any instances that you launch via SSH as a minimum (as well as being able to ping them), but I also add in access to a web service on port 80 and to port 8080 for this environment in my “default” security group.
      • euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
        euca-authorize default -P tcp -p 80 -s 0.0.0.0/0
        euca-authorize default -P tcp -p 8080 -s 0.0.0.0/0
        euca-authorize default -P icmp -t -1:-1
    • Next you need to load a UEC image into your cloud so that instances can be launched from it
      • image="ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz"
        wget http://smoser.brickies.net/ubuntu/ttylinux-uec/$image
        uec-publish-tarball $image mybucket
    • Once the uec-publish-tarball command has been run, it will present you with a line containing emi=, eri= and eki=, specifying the Image, Ramdisk and Kernel as shown below. Highlight this, then copy and paste it back into your shell:
      Thu Feb 24 09:55:19 GMT 2011: ====== extracting image ======
      kernel : ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz
      ramdisk: ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd
      image  : ttylinux-uec-amd64-12.1_2.6.35-22_1.img
      Thu Feb 24 09:55:19 GMT 2011: ====== bundle/upload kernel ======
      Thu Feb 24 09:55:21 GMT 2011: ====== bundle/upload ramdisk ======
      Thu Feb 24 09:55:22 GMT 2011: ====== bundle/upload image ======
      Thu Feb 24 09:55:25 GMT 2011: ====== done ======
      emi="ami-fnlidlmq"; eri="ami-dqliu15n"; eki="ami-66rz6vbs";
    • To launch an instance
      • euca-run-instances $emi -k openstack -t m1.tiny
    • To check its running
      • euca-describe-instances
      • You will see the Private IP that has been assigned to this instance, for example 10.0.0.3
    • To access this via SSH
      • ssh -i cloud/creds/openstack.pem root@10.0.0.3
      • (To log out of ttylinux, type: logout)
    • Congratulations, you now have an OpenStack instance running under OpenStack Nova, running under a VirtualBox VM!
    • To access this outside of the VirtualBox environment (i.e. back on your real computer, the host) you need to assign it a “public” IP
      • Associate this to the instance id (get from euca-describe-instances and will be of the format i-00000000)
        • euca-allocate-address
        • This will return an IP address that has been assigned to your project so that you can now associate to your instance, e.g. 172.241.0.3
        • euca-associate-address -i i-00000001 172.241.0.3
      • Now back on your host (so outside of VirtualBox), grab a copy of cloud/creds directory
        • scp -r user@172.241.0.101:cloud/creds .
      • You can now access that host using the Public address you associated to it above
        • ssh -i cloud/creds/openstack.pem root@172.241.0.3

    CONGRATULATIONS! You have now created a complete cloud environment under VirtualBox that you can manage from your computer (host) as if you’re managing services on Amazon. To demonstrate this you can terminate that instance you created from your computer (host)

    • sudo apt-get install euca2ools
      . cloud/creds/novarc
      euca-describe-instances
      euca-terminate-instances i-00000001
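
    To tidy up fully, you can also return the floating IP you associated earlier to the pool (again from the host):

    • euca-disassociate-address 172.241.0.3
      euca-release-address 172.241.0.3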

    Credits

    This guide is based on Thierry Carrez’ blog @ http://fnords.wordpress.com/2010/12/02/bleeding-edge-openstack-nova-on-maverick/

  • Next: Part 2 – OpenStack on multiple VirtualBox VMs with OpenStack instances accessible from host