Setting Up a Single Node OpenStack Nova Server

We’re pleased to publish this step-by-step guide for installing OpenStack Compute (aka Nova to the cool kids) on a single server. This tutorial is for developers who would like to set up a compute machine on a local server before deploying to the public cloud. While a lot of people are comfortable using the public cloud for testing and development, some developers like setting up a simple environment on their own local servers first. We organized the steps into one guide for your convenience and added insights from our experience. At the end of this tutorial, you’ll have a local OpenStack Compute machine on your network that you can use to create users and start and stop virtual machine (VM) guests. Your local compute machine will have the same core API interface as HP Public Cloud, and you can use this interface for API testing.


What It Is 

OpenStack Compute, aka Nova, is a Python-based VM management system. It can manage a wide variety of hypervisors, but in this tutorial we'll be sticking with the default, Linux's KVM. Nova manages creating new VMs, setting up their networks, and handling scheduling and other requests. Rather than being a single monolithic codebase, Nova takes a 'small pieces loosely joined' approach to cloud computing, utilizing most SQL backends (SQLite, MySQL, MariaDB, etc.) for data storage, RabbitMQ for its message queue and iptables for network management. The API server processes that clients connect to are Python-based, as is all the glue code that connects the various components together.

We're going to be installing the Diablo Final version of Nova. We won't be touching on configuring Nova with OpenStack's new Keystone unified authentication system in this walkthrough, since that's a bit beyond our scope.

What You'll Need 

You can set up Nova on nearly anything capable of running KVM. You can run it virtualized, but I heavily recommend against that for serious use, since virtualizing virtualization is a recipe for madness. If you just want to test the API, though, running Nova inside of a VM is fine. Anything that would help improve VM guest performance will help us with Nova; in order of priority I'd say:

* Lots of RAM (so you can run multiple guests at once)

* An i5 or better CPU (for hyperthreading)

* Fast Disks (RAID or an SSD, but remember that each guest takes up space)
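Before committing hardware, it's worth confirming that your CPU actually exposes hardware virtualization; KVM needs either the vmx (Intel VT-x) or svm (AMD-V) CPU flag. This quick check is a sketch of mine, not part of the original walkthrough:

```shell
# Succeeds if the given cpuinfo file advertises hardware virtualization
# (vmx = Intel VT-x, svm = AMD-V). KVM needs one of the two.
has_hw_virt() {
    grep -Eq '(vmx|svm)' "$1"
}

# On the real host you'd run: has_hw_virt /proc/cpuinfo && echo "KVM-capable"
```

If the check fails, KVM will fall back to slow software emulation (qemu), which is fine for API testing but not much else.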

You'll need a copy of Ubuntu 11.04 Server for this; you can get the ISOs from Ubuntu's download site.

If you're running Nova inside of a VM you'll want to set the network to a bridged mode so you can access the server and its VMs from your network. If you're going to use your Nova server for any serious length of time you'll probably want to set the server to use a static IP address as well.
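For reference, a static address on Ubuntu 11.04 is configured in /etc/network/interfaces. This is just a sketch with placeholder addresses for a home network (a server at behind a router at; substitute your own:

```
auto eth0
iface eth0 inet static
    dns-nameservers
```

After editing the file, restart networking or reboot for the change to take effect.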

System Installation

Install Ubuntu 11.04 Server on your machine and configure the network. I'd suggest installing OpenSSH during setup so you can remotely administer your server. At this point you should be able to SSH into your server from another machine on the network. If you can do that, we should be ready to go.

First, let's update our package lists and make sure we have any available security updates installed. All of the following should be done as root, so: 

sudo bash

And then:

apt-get update
apt-get upgrade -y

We'll also need python-software-properties to talk to the PPA servers that Nova's distributed from:

apt-get install -y python-software-properties

We'll also add the PPA (Personal Package Archive) for Nova and install it. In this tutorial we're using the ppa:openstack-release/2011.3 PPA, which should have Diablo Final on it.

add-apt-repository ppa:openstack-release/2011.3
apt-get update

Let's set up some defaults for MySQL so the package installer won't prompt us, and pick passwords for the MySQL root user and the nova database user. If you'd like different passwords, change them here:

MYSQL_PASS=nova
NOVA_PASS=notnova

cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password $MYSQL_PASS
mysql-server-5.1 mysql-server/root_password_again password $MYSQL_PASS
mysql-server-5.1 mysql-server/start_on_boot boolean true
MYSQL_PRESEED

Now we'll install our packages. RabbitMQ is the queue server for our scheduling service; then we have a bunch of Nova packages and MySQL 5.1 for our datastore. Glance is the OpenStack Image Service, euca2ools is a handy command-line client for the EC2 API, and unzip is handy because the nova-manage command spits out a zipfile with credentials in it that we need to unzip:

apt-get install -y rabbitmq-server nova-volume nova-api nova-vncproxy nova-ajax-console-proxy nova-doc nova-scheduler nova-objectstore nova-network nova-compute mysql-server glance euca2ools unzip

And let's run our MySQL commands. These will create a nova database, grant privileges to the nova user, and set the nova user's password:

mysql -u root -p$MYSQL_PASS -e 'CREATE DATABASE nova;'
mysql -u root -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON *.* TO 'nova'@'%' WITH GRANT OPTION;"
mysql -u root -p$MYSQL_PASS -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('$NOVA_PASS');"

We should be able to verify now, with something like this:

mysql -u nova -p$NOVA_PASS nova -e 'show databases;'

Next let's go edit /etc/nova/nova.conf. FYI, you can't have comments in nova.conf, because the entire thing gets passed in as command-line options to some apps.

Note: if you're installing Nova inside of a VM you'll want to set '--libvirt_type=qemu' in /etc/nova/nova-compute.conf, since you'll be using the qemu software virtualizer.

A minimal configuration looks something like the following; adjust the interface, IP-related values and database password to match your environment:

cat >/etc/nova/nova.conf <<EOF
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose
--sql_connection=mysql://nova:$NOVA_PASS@localhost/nova
--network_manager=nova.network.manager.VlanManager
--vlan_interface=eth0
EOF

Now let's sync Nova's default data into mysql:

nova-manage db sync

And restart our services:

restart libvirt-bin; restart nova-network; restart nova-compute; restart nova-api; restart nova-objectstore; restart nova-scheduler

Configuring Nova Network

Now let's create our private IP addresses. The arguments for this in Diablo Final are: nova-manage network create <network name> <CIDR of the entire network> <number of networks> <number of IPs per network>

In the default VLAN configuration of OpenStack Nova, each project gets its own network once a server is created, and those networks are pulled from the pool created with this command. This command creates 32 networks of 32 IP addresses each. If you plan on writing code that constantly creates new projects and VMs, you'll want to create a lot more.

nova-manage network create private 32 32
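As a sanity check on those two numbers, a quick bit of shell arithmetic shows how much address space the pool consumes; this just restates the 32-networks-of-32-IPs example:

```shell
networks=32
ips_per_network=32

# Total addresses the pool consumes: 32 * 32 = 1024, the size of a /22.
total=$((networks * ips_per_network))
echo "total addresses: $total"

# Each 32-address network needs 5 host bits (2^5 = 32), i.e. a /27.
echo "per-network prefix: /$((32 - 5))"
```

So the CIDR you hand to nova-manage has to be big enough to hold number-of-networks times IPs-per-network addresses.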

Once you've run this, check out the networks table in your MySQL nova database. Since we haven't created any projects with VMs yet, the project_id column is NULL. Once the first project successfully spins up a VM, its project will show up in the first row.

mysql -u nova -p$NOVA_PASS nova -e 'select id,cidr,project_id from networks;'

Next let's create some floating IPs. These are IPs that are assigned to VMs for their public interface, so they need to be real IP addresses on your network. If you're setting this up at home your router may be on or, and its DHCP range may be something like to We want to create these IPs in a space we know won't conflict with anyone else. Since my router lives at and its DHCP range is to .200, I'm going to allocate through

nova-manage floating create --ip_range=

Which we can see in the floating_ips table in our database:

mysql -u nova -p$NOVA_PASS nova -e 'select id,address,project_id,host from floating_ips;'

Create A Nova Project

Now let's create a user, create a project for that user, and assign the sysadmin and netadmin roles to that user and to the user/project pair:

nova-manage user create testuser
nova-manage project create testproject testuser
nova-manage role add testuser sysadmin
nova-manage role add testuser netadmin
nova-manage role add testuser sysadmin testproject
nova-manage role add testuser netadmin testproject

The first command spat out some EC2-style access keys, but let's get the entire set of credentials in a zipfile with the nova-manage project zipfile command, then unzip it:

mkdir testuser-testproject; cd testuser-testproject
nova-manage project zipfile --user=testuser --project=testproject

Now we should have a couple of PEM files and a novarc file in this directory:

ls -l

Let's source that novarc file, which will set a bunch of environment variables that the nova CLI and euca2ools use:

. novarc

And let's add ping and SSH to this user's default security group, so once we have a server we can get to it:

euca-authorize -P icmp -t -1:-1 default 
euca-authorize -P tcp -p 22 default

Start Your First TTYLinux VM

Scott Moser has a great TTYLinux image that we can test our install with. It's only 24 MB, so it's much easier than grabbing an entire Ubuntu image. Download and unpack it (the URL here is where the image was published at the time; substitute a mirror if it has moved):

wget http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz
tar fxvz ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz

Now let's add that image into Glance, which reads the image data from stdin. First the kernel image:

glance add is_public=true disk_format=aki container_format=aki name=ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz < ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz

Which will spit out an ID, probably 1. We can see the public glance images with:

glance index

Next, upload our disk image referencing our kernel ID:

glance add is_public=true disk_format=ami container_format=ami kernel_id=1 name=ttylinux-uec-amd64-12.1_2.6.35-22_1.img < ttylinux-uec-amd64-12.1_2.6.35-22_1.img

We should be able to see the images with 'glance index', and they'll also show up if we access the EC2 API with euca2ools:

euca-describe-images
Our server image is probably ami-00000002, so let's create a new server with the EC2 API. Even the m1.tiny OpenStack Nova preset will allocate a gigabyte of RAM, so if you're on a RAM-limited machine, be aware.

euca-run-instances ami-00000002 -t m1.tiny

We can see our server running with euca2ools, which uses the EC2 API (the i-00000001 part is the instance identifier, which will be important later):

euca-describe-instances
And with the nova CLI, which uses the OpenStack API:

nova list

Since we added SSH and ping to our default security group, we should be able to ping the machine now. If we did everything right, it'll probably be on, an early address in the first private network. If this is the second or third VM you created, it may have a different IP (the previous two commands should list the address):

ping -c 5

In the /var/lib/nova/instances/ directory there should be a directory for our instance, and inside of that a console.log file. Assuming this is our first instance, it will be:

cat /var/lib/nova/instances/instance-00000001/console.log 

We should be able to SSH into the server from another machine on the network. The username for the ttylinux image is 'root' with a password of 'password'. If you're successful, the server should bestow on you some zen wisdom, and then you can log out.

Now let's assign a public IP address to our instance. We can do that with euca-allocate-address from euca2ools:

euca-allocate-address

This will return an IP address; if you added the range, it will probably be Once allocated, you can associate the address with your running instance, like this (if your instance has a different ID or you got a different IP, swap that in):

euca-associate-address -i i-00000001

It'll now show in 'nova list' and 'euca-describe-instances'. You should now be able to ping that floating IP and ssh into it from, say, another computer on the network. You should be able to see the floating IP as allocated in the Nova database as well:

mysql -u nova -p$NOVA_PASS nova -e 'select id,address,project_id from floating_ips;'

To remove the address from the server, you can run:

euca-disassociate-address
To terminate a server, you can use 'euca-terminate-instances':

euca-terminate-instances i-00000001

Creating a Keypair

While the ttylinux image uses usernames and passwords, Ubuntu guest images use SSH keypairs to log in. To create a keypair we can use euca-add-keypair. The single argument to euca-add-keypair is the name of the keypair, and the output is the private half of the SSH keypair. We also have to tighten the keypair file's permissions, or ssh will complain when we try to use it.

euca-add-keypair mykeypair > mykeypair.pem
chmod 400 mykeypair.pem
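If ssh later refuses the key with a permissions warning, you can confirm the file mode like this; the touch below just stands in for the real key so the snippet is self-contained:

```shell
# Stand-in for the private key written by euca-add-keypair above.
touch mykeypair.pem
chmod 400 mykeypair.pem

# GNU stat prints the octal mode; ssh wants the key unreadable by others.
stat -c '%a' mykeypair.pem   # prints 400
```

Anything looser than 400 or 600 (for example 644) will make ssh reject the key with an "UNPROTECTED PRIVATE KEY FILE" warning.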

Running an Ubuntu 11.04 Guest

Let's grab the Ubuntu 11.04 UEC image (this was the published Ubuntu cloud image location at the time; substitute a mirror if it has moved):

wget http://uec-images.ubuntu.com/natty/current/natty-server-cloudimg-amd64.tar.gz
And decompress it:

tar fxvz natty-server-cloudimg-amd64.tar.gz

We'll use the same process to add it to our Glance image service as we did with ttylinux:

glance add is_public=true disk_format=aki container_format=aki name=natty-server-cloudimg-amd64-vmlinuz-virtual < natty-server-cloudimg-amd64-vmlinuz-virtual

Our kernel image's ID will probably be 3; whatever it is needs to go into kernel_id in the next command:

glance add is_public=true disk_format=ami container_format=ami kernel_id=3 name=natty-server-cloudimg-amd64.img < natty-server-cloudimg-amd64.img

We should now see this image in 'glance index' and 'euca-describe-images':

glance index

Assuming the image is ami-00000004 and the keypair we created earlier was called 'mykeypair', let's start a new server with that keypair:

euca-run-instances -k mykeypair ami-00000004 -t m1.tiny

We should soon see it in 'euca-describe-instances':

euca-describe-instances
It may take a minute or so for Ubuntu to boot (it will go from 'pending' to 'running'), but once it does, you should be able to SSH into it using the keypair file. Assuming the server is on, the command for that will look like:

ssh -i mykeypair.pem root@

Once you're done poking around your ubuntu server, you can terminate it with:

euca-terminate-instances i-00000002

You should now have Nova running and reboot-ready (though your VMs won't auto-restart if you reboot the host; you'll have to restart them manually). The OpenStack Nova API will be available on port 8774, the EC2 compatibility API on port 8773, and Glance on port 9191.

Tips and Tricks

A few quick commonly used commands.

Here are sample commands for creating a new user. Replace USER with your username and PROJECT with your project name:

nova-manage user create USER # This creates our user account
nova-manage project create PROJECT USER # This creates our project and associates it with the user account
nova-manage role add USER sysadmin # The sysadmin role lets you create new servers
nova-manage role add USER netadmin # The netadmin role lets you allocate IP addresses
nova-manage role add USER sysadmin PROJECT # And the same for your project
nova-manage role add USER netadmin PROJECT

The command 'nova-manage project zipfile' will export a zipfile with your credentials inside. Unzip the file and take a peek at the novarc file for the juicy details:

nova-manage project zipfile --user=USER --project=PROJECT

Some particularly relevant settings in the novarc file are NOVA_USERNAME, NOVA_PROJECT_ID and NOVA_API_KEY. You can use them to talk to the OpenStack API directly. Your keys and IPs will probably be different, but you should be able to substitute them as appropriate:

cat novarc
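As a sketch of what sourcing novarc does, here's a stand-in file using the variable names mentioned above (the values are the example credentials from this guide, not real ones; the real novarc contains more variables than this):

```shell
# Stand-in for the novarc generated by 'nova-manage project zipfile'.
cat > novarc.example <<'EOF'
export NOVA_USERNAME="testuser"
export NOVA_PROJECT_ID="testproject"
export NOVA_API_KEY="34916990-1296-4ab3-9280-f7a40eb09550"
EOF

# Sourcing it puts the credentials into the environment,
# where the nova CLI and euca2ools pick them up.
. ./novarc.example
echo "$NOVA_USERNAME/$NOVA_PROJECT_ID"   # prints testuser/testproject
```

This is exactly what the earlier '. novarc' step did; the CLIs read these variables rather than taking credentials as arguments.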

With these you can construct a curl request against the OpenStack API port (8774) on your Nova server; substitute your server's address for

curl -D - -H "X-Auth-Key: 34916990-1296-4ab3-9280-f7a40eb09550" -H "X-Auth-User: testuser"

This should return an X-Server-Management-Url and X-Auth-Token you can use to make a follow-up request, such as listing your servers:

curl -D - -H "X-Auth-Token: 7880397c6255c4d28f4801f2518ff9b9aa2849ac"
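If you want to script that two-step flow, you need to pull the two headers out of what 'curl -D -' prints. A small sketch, run here against canned headers rather than a live server (the URL and token are placeholders taken from the examples above):

```shell
# Canned response headers standing in for the output of the auth request.
headers='HTTP/1.1 204 No Content
X-Auth-Token: 7880397c6255c4d28f4801f2518ff9b9aa2849ac
X-Server-Management-Url: http://nova-host:8774/v1.1/testproject'

# awk splits on whitespace, so $2 is the header's value.
token=$(printf '%s\n' "$headers" | awk '/^X-Auth-Token:/ {print $2}')
mgmt=$(printf '%s\n' "$headers" | awk '/^X-Server-Management-Url:/ {print $2}')

echo "token: $token"
echo "url:   $mgmt"
```

In practice you'd pipe the live curl output into the same awk commands instead of using the canned string.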

The following commands list our currently active servers. The euca2ools command gets its information from the EC2 API and the nova command from the OpenStack Nova API, but the database they're talking to is the same.

euca-describe-instances
nova list

List your available images:

nova image-list

Create and save a new keypair (the nova OpenStack client as of Diablo Final doesn't include this functionality, so you have to use euca2ools or the API):

euca-add-keypair keypairname > keypairname.pem
chmod 400 keypairname.pem

List the sizes of servers available:

nova flavor-list

Start a new server (the keypair part applies if you're starting an Ubuntu guest):

euca-run-instances -k keypairname -t server.size image-identifier
nova boot --flavor 1 --image 2 myserver

Allocate a floating IP address:

euca-allocate-address
Associate an IP address with a guest:

euca-associate-address -i instance-identifier ip-address

Release an IP address back into the pool:

euca-release-address ip-address

Shutdown and delete a server:

euca-terminate-instances instance-identifier
nova delete instance_id

Also see our setup tutorial for OpenStack Swift.