Tag Archives: Glance

I have an OpenStack environment, now what? Loading Images into Glance #OpenStack 101

With an OpenStack environment up and running based on an OpenStack Ansible Deployment, now what?

Using Horizon with OSAD

First, we can log into Horizon (point your web browser at your load balancer pool address, the one labelled external_lb_vip_address in /etc/openstack_deploy/openstack_user_config.yml):

global_overrides:
  internal_lb_vip_address: 172.29.236.107
  external_lb_vip_address: 192.168.1.107
  lb_name: haproxy

Where are the username/password credentials for Horizon?

In step 4.5 of https://openstackr.wordpress.com/2015/07/19/home-lab-2015-edition-openstack-ansible-deployment/ we randomly generated all of the passwords used by OpenStack. This also generated a random password for the ‘admin‘ user. This user is the equivalent of ‘root’ on a Linux system, so generating a strong password is highly recommended. To use that password, though, we first need to retrieve it from a file.

The easiest place to find this password is the deployment host itself, as that is where we wrote out the passwords. Take a look in the /etc/openstack_deploy/user_secrets.yml file and find the line that says ‘keystone_auth_admin_password‘. This random string of characters is the ‘admin‘ user’s password that you can use for Horizon:

keystone_auth_admin_password: bfbbb99316ae0a4292f8d07cd4db5eda2578c5253dabfa0
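If you prefer to pull the value out on the command line, a quick grep on the deployment host does the job (assuming the standard OSAD path shown above):

# Run on the deployment host; prints the admin user's password
grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml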

(Screenshot: logging into Horizon as the admin user)

The Utility Container and openrc credentials file

Alternatively, you can grab the ‘openrc‘ file from a ‘utility’ container which is found on a controller node. To do this, carry out the following:

  1. Log into a controller node and change to root. In my case I can choose openstack4, openstack5 or openstack6. From there I can list the containers running on that node as follows:
    lxc-ls -f

    This brings back output like the following:
    (Screenshot: output of lxc-ls -f on openstack4)

  2. Locate the name of the utility container and attach to it as follows:
    lxc-attach -n controller-01_utility_container-71cceb47
  3. Here you will find the admin user’s credentials in the /root/openrc file:
    cat openrc
    # Do not edit, changes will be overwritten
    # COMMON CINDER ENVS
    export CINDER_ENDPOINT_TYPE=internalURL
    # COMMON NOVA ENVS
    export NOVA_ENDPOINT_TYPE=internalURL
    # COMMON OPENSTACK ENVS
    export OS_ENDPOINT_TYPE=internalURL
    export OS_USERNAME=admin
    export OS_PASSWORD=bfbbb99316ae0a4292f8d07cd4db5eda2578c5253dabfa0
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://172.29.236.107:5000/v2.0
    export OS_NO_CACHE=1
  4. To use these credentials, we simply source the file into our environment as follows:
    . openrc

    or

    source openrc
  5. And now we can use the command line tools such as nova, glance, cinder, keystone, neutron and heat (a quick smoke test is shown below).
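As a quick smoke test (assuming the openrc file above has been sourced), each client should return a listing, even if it is empty, rather than an authentication error:

# One read-only call per service to confirm the CLI clients can authenticate
nova list
glance image-list
cinder list
neutron net-list
keystone tenant-list
heat stack-list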

Loading images into Glance

Glance is the Image Service. It provides the catalogue of images you can use to launch instances in OpenStack. To load and manage those images, we use the glance command line tool.

There are plenty of public images available for OpenStack. You essentially grab them from the internet, and load them into Glance for your use. A list of places for OpenStack images can be found below:

CirrOS test image (can use username/password to log in): http://download.cirros-cloud.net/

Ubuntu images: http://cloud-images.ubuntu.com/

Windows 2012 R2: http://www.cloudbase.it/

CentOS 7: http://cloud.centos.org/centos/7/images/

Fedora: https://getfedora.org/en/cloud/download/

To load these, log into a utility container as described above and run the following commands.

Note that you can either grab the files from the website, save them locally and upload them to Glance, or have Glance fetch the files directly from the site and load them into the environment. I’ll describe both, as the Windows image has to be loaded from a locally saved file because you must accept an EULA before gaining access to it.

CirrOS

glance image-create \
  --name "cirros-image" \
  --disk-format qcow2 \
  --container-format bare \
  --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img \
  --is-public True \
  --progress

You can use a username and password to log into CirrOS, which makes this tiny, just-enough OS great for testing and troubleshooting. Username: cirros, Password: cubswin:)
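Once the image goes active, a quick check (using the name given above) confirms it is registered and available:

# Confirm the CirrOS image finished uploading and is active
glance image-list
glance image-show cirros-image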

Ubuntu 14.04

glance image-create \
  --name "trusty-image" \
  --disk-format qcow2 \
  --container-format bare \
  --copy-from http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img \
  --is-public True \
  --progress

You’d specify a keypair to use when launching this image, as there is no default password on these cloud images [that would be a disastrous security fail if there were]. The default user on the Ubuntu cloud images is ‘ubuntu‘, and the private key matching the public key specified at launch gets you access.
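If you don’t yet have a keypair registered, a minimal sketch looks like this (the name ‘lab-key’ is just an example):

# Create a keypair, keep the private half safe, and confirm it is registered
nova keypair-add lab-key > lab-key.pem
chmod 600 lab-key.pem
nova keypair-list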

Windows 2012 R2

For Windows, you can download an evaluation copy of Windows 2012 R2 and to do so you need to accept a license. Head over to http://www.cloudbase.it/ and follow the instructions to download the image.

Once downloaded, you need to get the image somewhere OpenStack can reach it. As we’re using the utility container for our access, we need to make the image accessible from there. There are alternatives, such as installing the OpenStack client tools on your own machine, which is ultimately how you’d use OpenStack day to day. For now, though, we will copy the image to the utility container.

  1. Copy the Windows image to the utility container. All of the containers have an IP on the container ‘management’ network (172.29.236.0/24 in my lab). Find the IP address of the utility container and use that. This network is reachable from my deployment host, so I simply secure copy the image over to the container:

    (performed as root on my deployment host as that has SSH access using keypairs to the containers)

    scp Windows2012R2.qcow2 root@172.29.236.85:
  2. We can then upload this to Glance as follows, note the use of --file instead of --copy-from:
    glance image-create \
      --name "windows-image" \
      --disk-format qcow2 \
      --container-format bare \
      --file ./Windows2012R2.qcow2 \
      --is-public True \
      --progress

    This will take a while as the Windows images are naturally bigger than Linux ones. Once uploaded it will be available for our use.
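As with the other images, it’s worth confirming the upload completed before trying to boot from it (assuming the image name used above):

# Check the Windows image status once the upload finishes
glance image-show windows-image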

Access to Windows instances is by RDP. Although SSH keypairs are not used by this Windows image for RDP access, a keypair is still required to retrieve the randomly generated ‘Administrator’ password, so specify a keypair when launching the Windows instance.

Once you’ve launched an instance, you retrieve the Administrator password with the following:

nova get-password myWindowsInstance .ssh/id_rsa
Launching instances will be covered in a later topic!

Home Rackspace Private Cloud / OpenStack Lab: Part 4

So after following the first three posts, we now have a Rackspace Private Cloud powered by OpenStack running with 2 Controllers (HA) and 3 Computes. So now what? Well the first thing we need to do is get our hands dirty with the OpenStack Networking component, Neutron, and create a network that our instances can be spun up on. For the home lab, I have dumb unmanaged switches – and I take advantage of that by creating a Flat Network that allows my instances access out through my home LAN on the 192.168.1.0/24 subnet.

Logging on to the environment

We first need to get to the OpenStack lab environment, and there are a couple of routes in. We can use the web dashboard, Horizon, which lives on the “API_VIP” IP I created when I set up my environment (see step 9 in Part 3), which is https://192.168.1.243/ (answering yes to the SSL warning due to it being a self-signed certificate), or we can use the command line (CLI). The easiest way to use the CLI is to ssh to the first controller, openstack1 (192.168.1.101), change to the root user, then source the environment file (/root/openrc) that sets up the various environment variables needed to communicate with OpenStack.

To use the CLI on the first controller, issue the following:

ssh openstack1
sudo -i
. openrc

The /root/openrc file contains the following:

# This file autogenerated by Chef
# Do not edit, changes will be overwritten
# COMMON OPENSTACK ENVS
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.253:5000/v2.0
export OS_AUTH_STRATEGY=keystone
export OS_NO_CACHE=1
# LEGACY NOVA ENVS
export NOVA_USERNAME=${OS_USERNAME}
export NOVA_PROJECT_ID=${OS_TENANT_NAME}
export NOVA_PASSWORD=${OS_PASSWORD}
export NOVA_API_KEY=${OS_PASSWORD}
export NOVA_URL=${OS_AUTH_URL}
export NOVA_VERSION=1.1
export NOVA_REGION_NAME=RegionOne
# EUCA2OOLs ENV VARIABLES
export EC2_ACCESS_KEY=b8bab4a938c340bbbf3e27fe9527b9a0
export EC2_SECRET_KEY=787de1a83efe4ff289f82f8ea1ccc9ee
export EC2_URL=http://192.168.1.253:8773/services/Cloud

These details match up to the admin user’s details specified in the environment file that was created in Step 9 in Part 3.

With this loaded into our environment we can now use the command line clients to control our OpenStack cloud. These include:

nova for launching and controlling instances

neutron for managing networking

glance for manipulation of images that are used to create our instances

keystone for managing users and tenants (projects)

There are, of course, others such as cinder, heat, swift, etc., but they’re not configured in the lab yet. To get an idea of what you can do with the CLI, head over to @eglute‘s blog for a very handy, quick “cheat sheet” of commands.
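For example, a few read-only commands (run after sourcing /root/openrc) give a quick feel for each client:

# One harmless command per client to explore the environment
nova list           # running instances
neutron net-list    # networks
glance image-list   # images available to boot from
keystone user-list  # users (requires admin)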

Creating the Home Lab Flat Network

The 5 OpenStack servers, and everything else on my network, hang off a single subnet: 192.168.1.0/24. Each and every one of those devices gets an IP from that range and is configured to use a default gateway of 192.168.1.254 – which is great, it means they get internet access.

I want my instances running under OpenStack to also have internet access, and be accessible from the network (from my laptop, or through PAT on my firewall/router to expose some services such as a webserver running on one of my OpenStack instances). To do this I create a Flat Network, where I allocate a small DHCP range so as not to conflict with any other IPs or ranges currently in use.

For more information on Flat Networking, view @jimmdenton‘s blog post

To create this Flat Network to co-exist on the home LAN subnet of 192.168.1.0/24 I do the following:

1. First, create the network (network_type=flat):

neutron net-create \
    --provider:physical_network=ph-eth1 \
    --provider:network_type=flat \
    --router:external=true \
    --shared flatNet

2. Next, create the subnet:

neutron subnet-create \
    --name flatSubnet \
    --no-gateway \
    --host-route destination=0.0.0.0/0,nexthop=192.168.1.254 \
    --allocation-pool start=192.168.1.150,end=192.168.1.170 \
    --dns-nameserver 192.168.1.1 \
    flatNet 192.168.1.0/24

Now what’s going on in that subnet-create command is as follows:

--no-gateway specifies no default gateway, but…

--host-route destination=0.0.0.0/0,nexthop=192.168.1.254 looks suspiciously like a default route – it is.

The effect of --no-gateway is that something extra has to happen for an instance to reach the Metadata service (where it goes to get cloud-init details, ssh keys, etc.). As it can’t rely on a gateway address (there isn’t one) to reach the 169.254.0.0/16 network where the Metadata service lives, a route is injected into the instance’s routing table instead.

But that’s Metadata sorted – what about access to anything other than 192.168.1.0/24, i.e. everything else? This is taken care of by the --host-route option, which here has the same effect as setting a gateway because of the values used (destination=0.0.0.0/0,nexthop=192.168.1.254). That nexthop address is the default gateway on my LAN, so the instance gets internet access. With Neutron we can add a number of routes – and these are automatically created in the instance’s routing table. Very handy.

--allocation-pool is the DHCP address pool range. I run DHCP on my network for everything else, but I deliberately specify non-overlapping ranges in both so they never conflict. I set this range to be between 192.168.1.150 and 192.168.1.170.

--dns-nameserver 192.168.1.1 sets the resolver to my NAS (192.168.1.1), which is running Dnsmasq and performs DNS resolution for all my LAN clients. As the instance gets an IP on this network, it can reach my DNS server.
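To double-check what Neutron actually created (using the names above), the show commands print the allocation pool, host routes and DNS server back to you:

# Verify the flat network and its subnet settings
neutron net-show flatNet
neutron subnet-show flatSubnet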

Now that we have a network in place, I can spin up an instance – but before that, there are a couple of other housekeeping items that need to be performed first: creating/importing an SSH keypair and setting security group rules to allow me access (SSH) to the instance.

Creating/Importing Keypairs

Keypairs are SSH Public/Private keypairs that you would create when wanting passwordless (or key-based) access to Linux instances. They are the same thing in OpenStack. What happens though is that OpenStack has a copy of the Public key of your keypair, and when you specify that key when booting an instance, it squirts it into a user’s .ssh/authorized_keys file – meaning that you can ssh to it using the Private portion of your key.

To create a keypair for use with OpenStack, issue the following command:

nova keypair-add demo > demo.pem
chmod 0600 demo.pem

demo.pem will then be the private key, and the keypair will be referred to as “demo” when booting an instance. Keep the private key safe and ensure it is only readable by you.

If creating a whole new keypair isn’t suitable (it’s an extra key to carry around) you can always use the one you’ve been using for years by importing it into OpenStack.  To import a key, you’re effectively copying the public key into the database so that it can be used by OpenStack when you boot an instance. To do this issue the following:

nova keypair-add --pub-key .ssh/id_rsa.pub myKey

What this does is take a copy of .ssh/id_rsa.pub and assign the name myKey to it. You can now use your existing key to access new instances you spin up.
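Either way, a quick listing confirms OpenStack now knows about the key (named as created or imported above):

# List registered keypairs and their fingerprints
nova keypair-list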

Creating Default security group rules

Before we can spin up an instance (although technically this step could also be done after you have booted one up), we need to allow access to it because, by default, no traffic can flow in to it – not even a ping.

To allow pings (ICMP) and SSH to the Default group (I tend to not allow any more than that in this group) issue the following commands:

neutron security-group-rule-create --protocol ICMP --direction ingress default
neutron security-group-rule-create --protocol tcp --direction ingress --port-range-min 22 --port-range-max 22 default
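You can confirm the rules landed in the default group as follows:

# Show the rules now attached to the default security group
neutron security-group-show default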

Adjusting m1.tiny so Ubuntu can boot with 512MB

My servers don’t have much memory – the Computes (hypervisors, where the instances actually spawn) only have 4GB of RAM, so I value the m1.tiny flavor, especially when I need to spin up a lot of instances. The problem is that, by default in Havana, m1.tiny specifies 1GB for the disk of an instance. An Ubuntu image requires more than 1GB, and OpenStack is unable to “shrink” an instance’s disk smaller than the image it was created from. To fix this we amend m1.tiny so that the disk becomes “unlimited” again, just like it was in Grizzly and before. To do this we issue the following:

nova flavor-delete 1
nova flavor-create m1.tiny 1 512 0 1

Havana’s nova command is unable to amend flavors, so we delete and recreate the flavor to mimic this behaviour. (Horizon does the same thing when you edit a flavor.)
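A quick check that the recreated flavor looks right before booting from it:

# Confirm m1.tiny now has 512MB RAM and a 0 (unlimited) root disk
nova flavor-list
nova flavor-show m1.tiny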

We’re now ready to boot an instance, or are we?

Loading Images

A Rackspace Private Cloud can automatically pull down and upload images into Glance by setting the following in the environment JSON file and running chef-client:

"glance": {
  "images": [
    "cirros",
    "precise"
  ],
  "image" : {
    "cirros": "https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img",
    "precise": "http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img"
  },
  "image_upload": true
 },

This is handy at installation time, but it can also add to RPC installation times, so tune it to suit. What I tend to do is download the images, store them on my web server (NAS2, 192.168.1.2), which is connected to the same switch as my OpenStack environment, and change the image URLs to point to the NAS2 web server instead. If you do this, wget those URLs above and store them in /share/Web (see the wget example after the JSON snippet below). When you enable the default Webserver service under QNAP it becomes available on the network. Change the above JSON snippet for Chef to the following:

"glance": {
  "images": [
    "cirros",
    "precise"
  ],
  "image" : {
    "cirros": "http://192.168.1.2/cirros-0.3.0-x86_64-disk.img",
    "precise": "http://192.168.1.2/precise-server-cloudimg-amd64-disk1.img"
  },
  "image_upload": true
 },
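For reference, mirroring the two images onto the NAS is just a couple of wget calls (run on the NAS, or wherever /share/Web – my QNAP’s web root – is reachable):

# Download the images into the QNAP web root so Chef/Glance can fetch them locally
wget -P /share/Web https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
wget -P /share/Web http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img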

To load images using the glance client, issue the following:

glance image-create \
    --name='precise-image' \
    --disk-format=qcow2 \
    --container-format=bare \
    --public < precise-server-cloudimg-amd64-disk1.img

What this does is load the image precise-server-cloudimg-amd64-disk1.img, which you have downloaded manually, into Glance. If you don’t have the image downloaded, you can get Glance to fetch it for you – saving you that extra step:

glance image-create \
    --name='precise-image' \
    --disk-format=qcow2 \
    --container-format=bare \
    --public \
    --location http://192.168.1.2/precise-server-cloudimg-amd64-disk1.img

Booting an instance

Now that we have an OpenStack lab and a network, have set up our keypair and have opened up access to our instances, we can boot an instance. To do this we first list the available images:

nova image-list

And then we can use one of those images for our instance.

The next thing to do is list the Neutron networks available:

neutron net-list

Now that we have these pieces of information, along with the name of the keypair (nova keypair-list), we can boot our instance (I grab the UUID of “flatNet” and store it in a variable to automate this step when I first spin up an instance):

flatNetId=$(neutron net-list | awk '/flatNet/ {print $2}')

nova boot myInstance \
    --image precise-image \
    --flavor m1.tiny \
    --key_name demo \
    --security_groups default \
    --nic net-id=$flatNetId

You can watch this boot up by viewing the console output with the following command:

nova console-log myInstance

When this has booted up, I’ll have an instance with an IP from the 192.168.1.150–170 allocation pool that is accessible from my network – check this by viewing the nova list output:

nova list

This shows the IP that the instance has been assigned. As this is on my home LAN, I can SSH to the instance as if it were a server connected to my switch:

root@openstack1:~# ssh -i demo.pem root@192.168.1.150
Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-57-virtual x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Fri Feb 14 16:05:59 UTC 2014
System load: 0.05 Processes: 62
 Usage of /: 39.1% of 1.97GB Users logged in: 0
 Memory usage: 8% IP address for eth0: 192.168.1.150
 Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
root@myinstance:~#

Now that the basics are out of the way, the next blog post will look at more advanced uses of OpenStack and the instances!

Updated OpenStackInstaller script for Precise and Essex installs

I’ve updated the OpenStackInstaller script, which now gives you a development (trunk) OpenStack Essex installation on Ubuntu Precise (currently Alpha 2) with the following:

Nova Compute (and associated services)
Keystone
Glance

This setup allows you to use the nova client tools to launch instances.

Install Ubuntu Precise
apt-get update
apt-get dist-upgrade
reboot

(as root)

  1. git clone https://github.com/uksysadmin/OpenStackInstaller.git
  2. cd OpenStackInstaller
  3. git checkout essex
  4. ./OSinstall.sh

A lot of this wouldn’t be possible without the help of people in #openstack on Freenode.
For an equally awesome scripted installation of the Diablo release, see these scripts: https://github.com/managedit/openstack-setup

OpenStack Diablo, updates and work in progress!

It has been a while since I blogged, and in that time OpenStack has come on leaps and bounds, with Diablo being the latest official release. This will change as I work pretty much full-time on testing OpenStack as an end user (with my day job as an architect), based on Diablo. This will also help with some book projects that are in the pipeline, which I’m very humbled and excited about. I’ll blog my experiences as I go along – after all, it’s the reason you’ve stumbled upon this corner of the internet in the first place: to learn from my experiences in using OpenStack.
The project I’m working on will be based on Ubuntu running the latest release of OpenStack, Diablo (2011.3). I’ll be investigating Crowbar from Dell to see how remote bare-metal provisioning of OpenStack is coming along – a crucial element for adoption in established enterprises, where rolling out enterprise-class software this way is the norm. I’ll try to squeeze in Juju too. Most importantly, though, I’ll be playing catch-up on the raft of projects flowing through OpenStack, from Keystone for authentication to Quantum (although probably more relevant to Essex as it develops), as well as catching up on where Swift, Glance and the Dashboard are.