
Installing OpenStack
The steps in this section document the installation of OpenStack services, including Keystone, Glance, Nova Compute, and Horizon on a single controller and two compute nodes. Neutron, the OpenStack Networking service, will be installed in the next chapter.
Installing and configuring the MySQL database server
On the controller node, use apt-get to install the MySQL database service and related Python packages:
# apt-get install mariadb-server python-mysqldb
If prompted, set the password to openstack.
Note
Insecure passwords are used throughout the book to simplify the configuration and demonstration of concepts and are not recommended for production environments. Visit http://www.strongpasswordgenerator.org to generate strong passwords for your environment.
Once installed, set the IP address that MySQL will bind to by editing the /etc/mysql/conf.d/mysqld_openstack.cnf configuration file and adding the bind-address definition. Doing so will allow connectivity to MySQL from other hosts in the environment. The value for bind-address should be the management IP of the controller node:
[mysqld]
...
bind-address = 10.254.254.100
In addition to adding the bind-address definition, add the options shown here to the [mysqld] section:
[mysqld]
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = "SET NAMES utf8"
character-set-server = utf8
Save and close the file. Then, restart the mysql service:
# service mysql restart
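If you would like to confirm that MySQL picked up the new bind address, check the listener from the controller node. This is an optional sanity check, assuming the ss utility from the iproute2 package is available:
# ss -lnt | grep 3306
The output should show a listener on 10.254.254.100:3306 rather than only on 127.0.0.1.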
The MySQL secure installation utility is used to build the default MySQL database and set a password for the MySQL root user. The following command will begin the MySQL installation and configuration process:
# mysql_secure_installation
During the MySQL installation process, you will be prompted to enter a password and change various settings. For this installation, the chosen root password is openstack. A more secure password suitable for your environment is highly recommended.
Answer [Y]es to the remaining questions to exit the configuration process. At this point, the MySQL server has been successfully installed on the controller node.
Installing and configuring the messaging server
Advanced Message Queuing Protocol (AMQP) is the messaging technology chosen for use with an OpenStack-based cloud. Components such as Nova, Cinder, and Neutron communicate internally and with one another using a message bus. The following instructions install RabbitMQ, an AMQP broker.
On the controller node, install the messaging server:
# apt-get install rabbitmq-server
Add a user named openstack to RabbitMQ with the password rabbit, as shown in the following command:
# rabbitmqctl add_user openstack rabbit
Set RabbitMQ permissions to allow configuration, read, and write access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
At this point, the installation and configuration of RabbitMQ is complete.
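To optionally confirm that the user and permissions were applied, list them with rabbitmqctl. The openstack user should appear with configure, write, and read permissions of ".*":
# rabbitmqctl list_users
# rabbitmqctl list_permissions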
Installing and configuring the identity service
Keystone is the identity service for OpenStack, and is used to authenticate and authorize users and services in the OpenStack cloud. Keystone is installed only on the controller node; its installation and configuration are covered in the following sections.
Installing Keystone
In the Kilo release of OpenStack, the Keystone project uses a WSGI server instead of Eventlet. An Apache HTTP server using mod_wsgi will serve Keystone requests on ports 5000 and 35357 rather than the Keystone service itself. To avoid issues, execute the following command to prevent the keystone service from starting on the controller node once it is installed:
# echo "manual" > /etc/init/keystone.override
Run the following command to install the Keystone packages on the controller node:
# apt-get install keystone python-openstackclient apache2 libapache2-mod-wsgi memcached python-memcache
Update the [database] section in the /etc/keystone/keystone.conf file to configure Keystone to use MySQL as its database. In this installation, the username and password will be keystone. You will need to overwrite the existing connection string with the following value:
[database]
...
connection = mysql://keystone:keystone@controller01/keystone
Update the [memcache] section in the /etc/keystone/keystone.conf file to configure Keystone to use the local memcache service:
[memcache]
...
servers = localhost:11211
Configuring the database
Keystone comes equipped with a SQLite database by default. Remove the database with the following command:
# rm -f /var/lib/keystone/keystone.db
Using the mysql client, create the Keystone database and associated user. When prompted for the root password, use openstack:
# mysql -u root -p
Enter the following SQL statements at the MariaDB [(none)] > prompt:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
quit;
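As an optional check, verify that the grants are in place by connecting to the database as the keystone user from the controller node. This is a quick sketch that assumes the anonymous accounts were removed during mysql_secure_installation and that passing the password inline is acceptable in your environment:
# mysql -u keystone -pkeystone -h controller01 -e "SHOW DATABASES;"
The keystone database should appear in the output.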
Configuring tokens and drivers
Before an administrative user has been configured in Keystone, an authorization token can be used as a shared secret between Keystone and other OpenStack services. When defined, the authorization token, admin_token, can be used to make changes to Keystone if an administrative user has not been configured or the password has been forgotten. Clients making calls to Keystone can pass the authorization token, which is then validated by Keystone before actions are taken.
Update the [DEFAULT] section in the /etc/keystone/keystone.conf file to set a simple admin token:
[DEFAULT]
...
admin_token = insecuretoken123
Keystone supports customizable token providers that can be defined within the [token] section of the configuration file. Keystone provides both UUID and PKI token providers. In this installation, the UUID token provider will be used. Update the [token] and [revoke] sections in the /etc/keystone/keystone.conf file accordingly:
[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.memcache.Token

[revoke]
...
driver = keystone.contrib.revoke.backends.sql.Revoke
Populate the Keystone database using the keystone-manage utility:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
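The db_sync operation builds the Keystone schema in the database created earlier. As an optional check, assuming the keystone credentials defined above, list the tables that were created:
# mysql -u keystone -pkeystone keystone -e "SHOW TABLES;"
A populated list of tables, which should include entries such as user, role, and endpoint, indicates the sync completed successfully.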
Configuring the Apache HTTP server
Keystone now uses an HTTP server to process the Keystone API requests rather than a dedicated service. As a result, Apache must be configured accordingly.
Add the ServerName option to the Apache configuration file that references the short name of the controller node:
# sed -i '1s/^/ServerName controller01\n&/' /etc/apache2/apache2.conf
Next, create a file named /etc/apache2/sites-available/wsgi-keystone.conf that includes virtual host definitions for the WSGI server:
# cat >> /etc/apache2/sites-available/wsgi-keystone.conf <<EOF
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    LogLevel info
    ErrorLog /var/log/apache2/keystone-error.log
    CustomLog /var/log/apache2/keystone-access.log combined
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    LogLevel info
    ErrorLog /var/log/apache2/keystone-error.log
    CustomLog /var/log/apache2/keystone-access.log combined
</VirtualHost>
EOF
Once complete, enable the virtual hosts for the Identity service with the following command:
# ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled
Download WSGI components
Create a directory structure for the WSGI components:
# mkdir -p /var/www/cgi-bin/keystone
Then, copy the WSGI components from the upstream Kilo repository to the new directory using curl:
# curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin
Note
The curl statement should be on one line, but is line-wrapped in this example.
Adjust the ownership and permissions of the files just created:
# chown -R keystone:keystone /var/www/cgi-bin/keystone
# chmod 755 /var/www/cgi-bin/keystone/*
Finally, restart the Apache web service for the changes to take effect:
# service apache2 restart
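Before registering services, it can be helpful to confirm that Apache is serving the Keystone API on both ports. As an optional check from the controller node, each of the following requests should return a short JSON document describing the v2.0 API:
# curl http://controller01:5000/v2.0
# curl http://controller01:35357/v2.0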
Define services and API endpoints in Keystone
Each OpenStack service that is installed should be registered with Keystone so that its location on the network can be tracked. There are two commands involved in registering a service:
- openstack service create: This describes the service that is being created
- openstack endpoint create: This associates API endpoints with the service
Typically, a username and password are used to authenticate against Keystone. As users have not yet been created, it is necessary to use the authorization token created earlier. The token can be passed using the --os-token option of the openstack command or by setting the OS_TOKEN environment variable. We will use both the OS_TOKEN and OS_URL environment variables to provide the authorization token and to specify where the Keystone service is running.
Use the export command to export the variables and their values to your environment. OS_TOKEN should be set to the admin token value determined earlier:
# export OS_TOKEN=insecuretoken123
# export OS_URL=http://controller01:35357/v2.0
Keystone itself is among the services that must be registered. You can create a service entry for Keystone with the following command:
# openstack service create --name keystone --description "OpenStack Identity" identity
The resulting output is as follows:

Figure 2.2
Next, specify an API endpoint for the Identity service. When specifying an endpoint, you must provide URLs for the public API, internal API, and admin API. The three URLs can potentially be on three different IP networks, depending on your network setup, and can have different hostnames. The short name of the controller will be used to populate the URLs. Each host can reference the others by hostname via DNS or the local /etc/hosts entries created earlier:
# openstack endpoint create \
  --publicurl http://controller01:5000/v2.0 \
  --internalurl http://controller01:5000/v2.0 \
  --adminurl http://controller01:35357/v2.0 \
  --region RegionOne \
  identity
The resulting output is as follows:

Figure 2.3
Note
IDs of various resources are unique and will vary between environments.
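If you would like to confirm the registration before moving on, the service and endpoint catalogs can be listed while the OS_TOKEN and OS_URL variables are still set. At this stage, each command should return a single row for the identity service:
# openstack service list
# openstack endpoint list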
Defining users, tenants, and roles in Keystone
Once the installation of Keystone is complete, it is necessary to set up domains, users, projects (tenants), roles, and endpoints that will be used by various OpenStack services.
Note
In this installation, the default domain will be used.
In Keystone, a project or tenant represents a logical group of users to which resources are assigned. The terms "project" and "tenant" are used interchangeably throughout various OpenStack services. Resources are assigned to projects and not directly to users. Create an admin project for the administrative user, a demo project for regular users, and a service project for other OpenStack services to use:
# openstack project create --description "Admin Project" admin
# openstack project create --description "Service Project" service
# openstack project create --description "Demo Project" demo
Next, create an administrative user called admin. Specify a secure password for the admin user:
# openstack user create admin --password=secrete
Once the admin user has been created, create a role for administrative tasks called admin:
# openstack role create admin
Any roles that are created should map to roles specified in the policy.json files of the corresponding OpenStack services. The default policy files use the admin role to allow access to services.
Note
For more information on user management in Keystone, refer to http://docs.openstack.org/admin-guide-cloud/content/keystone-user-management.html.
Associate the admin role with the admin user in the admin project:
# openstack role add --project admin --user admin admin
Create a regular user called demo. Specify a secure password for the demo user:
# openstack user create demo --password=demo
Create the user role:
# openstack role create user
Finally, add the user role to the demo project and user:
# openstack role add --project demo --user demo user
Verifying the Keystone installation
To verify that Keystone was installed and configured properly, use the unset command to unset the OS_TOKEN and OS_URL environment variables. These variables are only needed to bootstrap the administrative user and to register the Keystone service:
# unset OS_TOKEN OS_URL
Once the environment variables are unset, it should be possible to use username-based authentication. Request an authentication token using the admin user and the password specified earlier:
# openstack --os-auth-url http://controller01:35357 --os-project-name admin --os-username admin --os-password secrete token issue
Keystone should respond with a token that is paired with the specified user ID. This verifies that the user account is established in Keystone with the expected credentials:

Figure 2.4
As the admin user, request a list of projects to verify that the admin user can execute admin-only CLI commands and that the Identity service contains all of the projects created earlier in this chapter:
# openstack --os-auth-url http://controller01:35357 \
  --os-project-name admin --os-username admin \
  --os-password secrete project list
The command will result in the following output:

Figure 2.5
You should receive a list of projects containing the admin, demo, and service projects created earlier in the chapter.
Setting environment variables
To avoid having to provide credentials every time you run an OpenStack command, create a file containing environment variables that can be loaded at any time. The following commands will create a file named adminrc containing environment variables for the admin user:
# cat >> ~/adminrc <<EOF
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_AUTH_URL=http://controller01:35357/v3
EOF
The following commands will create a file named demorc containing environment variables for the demo user:
# cat >> ~/demorc <<EOF
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller01:5000/v3
EOF
Use the source command to load the environment variables from the file. To test Keystone, issue the following commands:
# source ~/adminrc
# openstack user list
As the admin user, Keystone should return the user list as requested:

Figure 2.6
As the demo user, access is denied:

Figure 2.7
Depending on the command, non-admin users may not have appropriate access.
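To reproduce the behavior shown in Figure 2.7, source the demorc file and repeat the command. Keystone should reject the request, typically with an HTTP 403 error, since listing users is an admin-only operation under the default policy:
# source ~/demorc
# openstack user list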
Installing and configuring the image service
Glance is the image service for OpenStack. It is responsible for storing images and snapshots of instances, and for providing images to compute nodes when instances are created.
To install Glance, run the following command from the controller node:
# apt-get install glance python-glanceclient
Configuring the database
Glance comes equipped with a SQLite database by default. Remove the database with the following command:
# rm -f /var/lib/glance/glance.sqlite
Using the mysql client, create the Glance database and associated user. When prompted for the root password, use openstack:
# mysql -u root -p
Enter the following SQL statements at the MariaDB [(none)] > prompt:
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
quit;
Update the [database] connection string in the glance-api configuration file found at /etc/glance/glance-api.conf to use the previously defined MySQL database:
[database]
...
connection = mysql://glance:glance@controller01/glance
Repeat the process for the glance-registry configuration file found at /etc/glance/glance-registry.conf:
[database]
...
connection = mysql://glance:glance@controller01/glance
Add the glance user to Keystone and create the appropriate role:
# openstack user create --password glance glance
# openstack role add --project service --user glance admin
Configuring authentication settings
Both the glance-api and glance-registry service configuration files must be updated with the appropriate authentication settings for the services to operate.
Update the [keystone_authtoken] settings in the glance-api configuration file found at /etc/glance/glance-api.conf:
[keystone_authtoken]
...
auth_uri = http://controller01:5000/v2.0
auth_url = http://controller01:35357
auth_plugin = password
user_domain_id = default
project_domain_id = default
project_name = service
username = glance
password = glance
Repeat the process for the glance-registry configuration file found at /etc/glance/glance-registry.conf:
[keystone_authtoken]
...
auth_uri = http://controller01:5000/v2.0
auth_url = http://controller01:35357
auth_plugin = password
user_domain_id = default
project_domain_id = default
project_name = service
username = glance
password = glance
Configuring additional settings
Update the glance-api configuration file found at /etc/glance/glance-api.conf with the following additional settings:
[paste_deploy]
...
flavor = keystone

[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images

[DEFAULT]
...
notification_driver = noop
Update the glance-registry configuration file found at /etc/glance/glance-registry.conf with the following additional settings:
[paste_deploy]
...
flavor = keystone

[DEFAULT]
...
notification_driver = noop
Populate the Glance database using the glance-manage utility:
# su -s /bin/sh -c "glance-manage db_sync" glance
Restart the Glance services with the following commands:
# service glance-registry restart
# service glance-api restart
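Before registering the service in Keystone, you can optionally confirm that both Glance services came back up after the restart:
# service glance-api status
# service glance-registry status
Each command should report the service as running.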
Defining the Glance service and API endpoints in Keystone
Like other OpenStack services, Glance should be added to the Keystone database using the openstack service create and openstack endpoint create commands:
# openstack service create --name glance \
  --description "OpenStack Image service" image
The resulting output can be seen as follows:

Figure 2.8
Create the Glance API endpoints with the following command:
# openstack endpoint create \
  --publicurl http://controller01:9292 \
  --internalurl http://controller01:9292 \
  --adminurl http://controller01:9292 \
  --region RegionOne \
  image
The resulting output is as follows:

Figure 2.9
Verifying the Glance image service installation
Update each environment script created earlier in the chapter to define an environment variable that instructs the OpenStack client to use v2 of the Glance API:
# echo "export OS_IMAGE_API_VERSION=2" | tee -a ~/adminrc ~/demorc
Source the adminrc script to set or update the environment variables:
# source ~/adminrc
To verify that Glance was installed and configured properly, download a test image from the Internet and verify that it can be uploaded to the image server:
# mkdir /tmp/images
# wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
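Optionally, verify the integrity of the downloaded file before uploading it. Compare the resulting checksum against the value published alongside the image at download.cirros-cloud.net:
# md5sum /tmp/images/cirros-0.3.4-x86_64-disk.img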
Upload the image to Glance using the following command:
# glance image-create --name "cirros-0.3.4-x86_64" \
  --file /tmp/images/cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public --progress
Verify that the image exists in Glance using the openstack image list or glance image-list command:

Figure 2.10
Installing additional images
The CirrOS image is limited in functionality and is recommended only for testing network connectivity and operational Compute functionality. Multiple vendors provide cloud-ready images for use with OpenStack:
- Ubuntu Cloud Images at http://cloud-images.ubuntu.com/
- Red Hat-Based Cloud Images at https://www.rdoproject.org/resources/image-resources/
To install the Ubuntu 14.04 LTS image, download the file to /tmp/images and upload it to Glance:
# wget -P /tmp/images https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
Use the Glance image-create command to upload the new image:
# glance image-create --name "Ubuntu 14.04 LTS Cloud Image" \
  --file /tmp/images/trusty-server-cloudimg-amd64-disk1.img \
  --disk-format qcow2 --container-format bare \
  --visibility public --progress
Another look at the image list shows that the new Ubuntu image is available for use:

Figure 2.11
Installing and configuring the Compute service
OpenStack Compute is a collection of services that enable cloud operators and tenants to launch virtual machine instances. Most services run on the controller node. The only exception is the nova-compute service, which runs on the compute nodes and is responsible for launching the virtual machine instances on those nodes.
Installing and configuring controller node components
Execute the following command on the controller node to install the various Nova Compute services used by the controller:
# apt-get install nova-api nova-cert nova-conductor \
  nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
Configuring the database
Nova comes equipped with a SQLite database by default. Remove the database with the following command:
# rm -f /var/lib/nova/nova.sqlite
Using the mysql client, create the Nova database and associated user. When prompted for the root password, use openstack:
# mysql -u root -p
Enter the following SQL statements at the MariaDB [(none)] > prompt:
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
quit;
Update the [database] section of the Nova configuration file found at /etc/nova/nova.conf to set the connection string to use the previously configured MySQL database:
[database]
...
connection = mysql://nova:nova@controller01/nova
Note
The [database] and other sections referenced here may not exist in a new installation and can be safely created.
Update the [DEFAULT] and [oslo_messaging_rabbit] sections of the Nova configuration file to configure Nova to use the RabbitMQ message broker:
[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = rabbit
VNC Proxy is an OpenStack component that allows users to access their instances through VNC clients. VNC stands for Virtual Network Computing and is a graphical desktop-sharing system that uses the Remote Frame Buffer protocol to control another computer over a network. The controller must be able to communicate with compute nodes for VNC services to work properly through the Horizon dashboard or other VNC clients.
Update the [DEFAULT] section of the Nova configuration file to configure the appropriate VNC settings for the controller node:
[DEFAULT]
...
my_ip = 10.254.254.100
vncserver_listen = 10.254.254.100
vncserver_proxyclient_address = 10.254.254.100
Configuring authentication settings
Create a user called nova in Keystone. The Nova service will use this user for authentication. After this, associate the user with the service project and give the user the admin role:
# openstack user create --password nova nova
# openstack role add --project service --user nova admin
Update the Nova configuration file at /etc/nova/nova.conf with the following Keystone-related attributes:
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller01:5000
auth_url = http://controller01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
You must then register Nova with the Identity service so that other OpenStack services can locate it. Register the service and specify the endpoint:
# openstack service create --name nova --description "OpenStack Compute" compute
The resulting output should resemble the output shown here:

Figure 2.12
Create the Nova endpoint with the following command:
# openstack endpoint create \
  --publicurl http://controller01:8774/v2/%\(tenant_id\)s \
  --internalurl http://controller01:8774/v2/%\(tenant_id\)s \
  --adminurl http://controller01:8774/v2/%\(tenant_id\)s \
  --region RegionOne \
  compute
The output should resemble the one shown here:

Figure 2.13
Additional controller tasks
Update the Nova configuration file at /etc/nova/nova.conf to specify the controller node as the Glance host:
[glance]
...
host = controller01
Update the Nova configuration file to set the lock file path for Nova services:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
Populate the Nova database using the nova-manage utility:
# su -s /bin/sh -c "nova-manage db sync" nova
Restart the controller-based Nova services for the changes to take effect:
# service nova-api restart
# service nova-cert restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart
Installing and configuring compute node components
Once the controller-based Nova services have been configured on the controller node, at least one other host must be configured as a compute node. The compute node receives requests from the controller node to host virtual machine instances. Separating the services onto dedicated compute nodes means that Nova can be scaled horizontally by adding compute nodes once the existing resources have been fully utilized.
On the compute nodes, install the nova-compute package and related packages. These packages provide virtualization support services to the compute node:
# apt-get install nova-compute sysfsutils
Update the Nova configuration file at /etc/nova/nova.conf on the compute nodes with the following Keystone-related attributes:
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller01:5000
auth_url = http://controller01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
Next, update the [DEFAULT] and [oslo_messaging_rabbit] sections of the Nova configuration file to configure Nova to use the RabbitMQ message broker:
[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = rabbit
Then, update the Nova configuration file to provide remote console access to instances through a proxy on the controller node. The remote console is accessible through the Horizon dashboard. The IP configured as my_ip should be the respective management IP of each compute node:
Compute01:
[DEFAULT]
...
my_ip = 10.254.254.101
vncserver_proxyclient_address = 10.254.254.101
vnc_enabled = True
vncserver_listen = 0.0.0.0
novncproxy_base_url = http://controller01:6080/vnc_auto.html
Compute02:
[DEFAULT]
...
my_ip = 10.254.254.102
vncserver_proxyclient_address = 10.254.254.102
vnc_enabled = True
vncserver_listen = 0.0.0.0
novncproxy_base_url = http://controller01:6080/vnc_auto.html
Additional compute tasks
Update the Nova configuration file at /etc/nova/nova.conf to specify the controller node as the Glance host:
[glance]
...
host = controller01
Update the Nova configuration file to set the lock file path for Nova services:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
Restart the nova-compute service on all compute nodes:
# service nova-compute restart
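If a compute node fails to check in with the controller, its log file is the first place to look. On Ubuntu, the nova-compute log is typically written to the path shown here:
# tail -n 20 /var/log/nova/nova-compute.log
Authentication or messaging errors in the log usually point to a typo in the [keystone_authtoken] or [oslo_messaging_rabbit] settings above.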
Verifying communication between services
To check the status of Nova services throughout the environment, use the nova service-list command on the controller node as follows:
# nova service-list
The command should return statuses on all Nova services that have checked in:

Figure 2.14
In the preceding output, the state of the services on both the controller and compute nodes is reflected under the Status column. The nova service-list command can be run on any node in the environment, but requires proper authentication credentials. If there are inconsistencies in the output among multiple nodes, it's worth ensuring that Network Time Protocol (NTP) is synchronized properly on all nodes.
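A quick way to compare clock synchronization, assuming the ntp package is installed on each node, is to query the NTP peers on every host:
# ntpq -p
An asterisk next to a peer indicates the server that node is currently synchronized with; large offsets between nodes suggest a time problem.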
Installing the OpenStack dashboard
The OpenStack dashboard, also known as Horizon, provides a web-based user interface to OpenStack services, including Compute, Networking, Storage, and Identity, among others.
To install Horizon, install the following package on the controller node:
# apt-get install openstack-dashboard
Identifying the Keystone server
Edit the /etc/openstack-dashboard/local_settings.py file to set the hostname of the Identity server. In this installation, the Keystone services are running on the controller node. Change the OPENSTACK_HOST value from its default to the following:
OPENSTACK_HOST = "controller01"
Configuring a default role
The OPENSTACK_KEYSTONE_DEFAULT_ROLE setting in the /etc/openstack-dashboard/local_settings.py file must also be modified before the dashboard can be used. Change the OPENSTACK_KEYSTONE_DEFAULT_ROLE value from its default to the following:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
Save and close the file.
Reload Apache
Once the preceding changes have been made, reload the Apache web server configuration using the following command:
# service apache2 reload
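Before moving to a browser, you can optionally confirm from the command line that the dashboard is being served. A request to the Horizon URL should return an HTTP response, typically a 200 or a redirect to the login page:
# curl -I http://controller01/horizon/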
Uninstalling the default Ubuntu theme (optional)
By default, installations of the OpenStack dashboard on Ubuntu include a theme that has been customized by Canonical. To remove the theme, execute the following command:
# apt-get remove openstack-dashboard-ubuntu-theme
The examples in this book assume that the custom theme has been uninstalled.
Testing connectivity to the dashboard
From a machine that has access to the management network of the controller node, open http://controller01/horizon/ in a web browser.
The API network is reachable from my workstation, and the /etc/hosts file on my client workstation has been updated to include the same hostname-to-IP mappings configured earlier in this chapter. The following screenshot demonstrates a successful connection to the dashboard. The username and password were created in the Defining users, tenants, and roles in Keystone section earlier in this chapter. In this installation, the username is admin and the password is secrete:

Figure 2.15
Once you have successfully logged in, the dashboard defaults to the Admin tab. From here, information about the environment is provided in a graphical format. Looking at the following screenshot, the System Information panel provides the user with information about the environment, including Services and Compute Services. The services listed in the following screenshot are services that were installed earlier in this chapter:

Figure 2.16
To view the status of Nova Compute services, click on the Compute Services tab. This will return output similar to that of nova service-list in the CLI:

Figure 2.17