
Configuring Neutron services
The neutron-server service exposes the Neutron API to users and passes all calls to the configured Neutron plugins for processing. By default, Neutron is configured to listen for API calls on all configured addresses, as seen by the default bind_host option in the Neutron configuration file:
bind_host = 0.0.0.0
As an additional security measure, it is possible to expose the API only on the management or API network. On the controller node, update the bind_host value in the [DEFAULT] section of the Neutron configuration file located at /etc/neutron/neutron.conf with the management address of the controller node:
[DEFAULT]
...
bind_host = 10.254.254.100
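Edits like this can also be scripted rather than made by hand. The following sketch uses sed to update the bind_host value; for illustration it operates on a temporary copy rather than the live /etc/neutron/neutron.conf, and it assumes the file contains exactly one bind_host line:

```shell
#!/bin/sh
# Sketch: update bind_host non-interactively with sed.
# Works on a temporary copy for safety; point CONF at
# /etc/neutron/neutron.conf for real use.
CONF=$(mktemp)
printf '[DEFAULT]\nbind_host = 0.0.0.0\n' > "$CONF"

NEW_ADDR=10.254.254.100
sed -i "s/^bind_host *=.*/bind_host = ${NEW_ADDR}/" "$CONF"

# Show the result of the edit
grep '^bind_host' "$CONF"
```

The same pattern applies to the other option changes in this chapter; always verify the result with grep before restarting services.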
Other configuration options that may require tweaking include:
- core_plugin
- service_plugins
- dhcp_lease_duration
Some of these settings apply to all nodes, while others apply only to the network or controller node. The core_plugin configuration option instructs Neutron to use the specified networking plugin. Beginning with the Icehouse release, the ML2 plugin supersedes both the LinuxBridge and Open vSwitch monolithic plugins.
On all nodes, update the core_plugin value in the [DEFAULT] section of the Neutron configuration file located at /etc/neutron/neutron.conf and specify the ML2 plugin:
[DEFAULT]
...
core_plugin = ml2
The service_plugins configuration option defines the plugins that Neutron loads for additional functionality. Examples of plugins include router, firewall, lbaas, vpnaas, and metering. This option should only be configured on the controller node or any other node running the neutron-server service. Specific plugins will be defined in later chapters.
Tip
Due to a bug in Horizon, the router plugin must be defined before users can create and manage networks within the dashboard. On the controller node, update the service_plugins configuration option accordingly:
[DEFAULT]
service_plugins = router
The dhcp_lease_duration configuration option specifies the duration of an IP address lease obtained by an instance. The default value is 86400 seconds, or 24 hours. If the value is set too low, the network may be flooded with broadcast traffic due to short leases and frequent renewal attempts. The DHCP client on the instance itself is responsible for renewing the lease, and this behavior varies between operating systems; it is not uncommon for instances to attempt to renew their lease well before the lease duration expires. However, the value set for dhcp_lease_duration does not dictate how long an IP address stays associated with an instance. Once an IP address has been allocated to an instance by Neutron, it remains associated with the instance until the instance or the port is deleted, even if the instance is shut off. Instances typically rely on DHCP to obtain their address, though, which is why this configuration option is important.
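If your environment does call for a different lease time, the option lives in the same [DEFAULT] section of /etc/neutron/neutron.conf. The value below is simply the default, shown for illustration; there is no need to change it for this installation:

```ini
[DEFAULT]
...
# Lease duration in seconds; 86400 (24 hours) is the default.
dhcp_lease_duration = 86400
```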
Starting neutron-server
Before the neutron-server service can be started, the Neutron database must be updated based on the options configured earlier in this chapter. Use the neutron-db-manage command on the controller node to update the database accordingly:
# su -s /bin/sh -c "neutron-db-manage \
--config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
upgrade head" neutron
Restart the Nova services on the controller node:
# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart
Restart the Nova compute service on the compute nodes:
# service nova-compute restart
Finally, restart the neutron-server service on the controller node:
# service neutron-server restart
Configuring the Neutron DHCP agent
Neutron utilizes dnsmasq, a free, lightweight DNS forwarder and DHCP server, to provide DHCP services to networks. The neutron-dhcp-agent service is responsible for spawning and configuring dnsmasq and metadata processes for each network that leverages DHCP.
The DHCP driver is specified in the dhcp_agent.ini configuration file found in the /etc/neutron directory. The agent can be configured to use other DHCP drivers, but dnsmasq support is built in and requires no additional setup. The default dhcp_driver value is neutron.agent.linux.dhcp.Dnsmasq and can be left unmodified.
Other notable configuration options found in the dhcp_agent.ini configuration file include:
- interface_driver
- use_namespaces
- enable_isolated_metadata
- enable_metadata_network
- dhcp_domain
- dhcp_delete_namespaces
The interface_driver configuration option should be configured appropriately based on the network mechanism driver chosen for your environment:
- LinuxBridge: neutron.agent.linux.interface.BridgeInterfaceDriver
- Open vSwitch: neutron.agent.linux.interface.OVSInterfaceDriver
Both LinuxBridge and Open vSwitch will be discussed in further detail in Chapter 4, Building a Virtual Switching Infrastructure. For now, update the interface_driver value in the [DEFAULT] section of the DHCP agent configuration file located at /etc/neutron/dhcp_agent.ini on the controller node to specify the OVS driver:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
Note
Only one interface_driver value can be configured at a time per agent.
The use_namespaces configuration option instructs Neutron to enable or disable the use of network namespaces for DHCP. When True, every network scheduled to a DHCP agent will have a namespace named qdhcp-<Network UUID>, where <Network UUID> is the unique UUID associated with the network. By default, use_namespaces is set to True. When set to False, overlapping networks between tenants are not allowed. Not all distributions and kernels support network namespaces, which may limit how tenant networks are built out. The operating system and kernel recommended in Chapter 2, Installing OpenStack, do support network namespaces. For this installation, leave the value set to True.
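The namespace name can be derived directly from the network's UUID, which is handy when troubleshooting DHCP for a specific network. A quick sketch, using a made-up UUID (on a node running the DHCP agent, ip netns would list the real ones):

```shell
#!/bin/sh
# Derive the DHCP namespace name for a network.
# The UUID below is made up for illustration.
NETWORK_UUID=9d9c7c5a-1f2e-4f6b-8c3d-0a1b2c3d4e5f
NAMESPACE="qdhcp-${NETWORK_UUID}"
echo "$NAMESPACE"

# On a node running the DHCP agent, the namespace could then be inspected:
# ip netns list | grep qdhcp
# ip netns exec "$NAMESPACE" ip addr show
```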
The enable_isolated_metadata configuration option is useful in cases where a physical network device, such as a firewall or router, serves as the default gateway for instances, but Neutron is still required to provide metadata services to instances. When the L3 agent is used, an instance reaches the metadata service through the Neutron router that serves as its default gateway. An isolated network is assumed to be one in which a Neutron router is not serving as the gateway, but Neutron handles DHCP requests for the instances. Often, this is the case when instances leverage flat or VLAN networks and the L3 agent is not used. The default value for enable_isolated_metadata is False. When set to True, Neutron can provide instances with a static route to the metadata service via DHCP in certain cases. More information on the use of metadata and this configuration can be found in Chapter 5, Creating Networks with Neutron. On the controller node, update the enable_isolated_metadata option in the DHCP agent configuration file located at /etc/neutron/dhcp_agent.ini to True:
[DEFAULT]
...
enable_isolated_metadata = True
The enable_metadata_network configuration option is useful in cases where the L3 agent may be used, but the metadata agent is not on the same host as the router. By setting enable_metadata_network to True, Neutron networks whose subnet CIDR is included in 169.254.0.0/16 will be regarded as metadata networks. When connected to a Neutron router, a metadata proxy is spawned on the node hosting the router, granting metadata access to all the networks connected to the router.
The dhcp_domain configuration option specifies the DNS search domain that is provided to instances via DHCP when they obtain a lease. The default value is openstacklocal and can be changed to whatever fits your organization. For the purpose of this installation, on the controller node, update the dhcp_domain option in the DHCP agent configuration file located at /etc/neutron/dhcp_agent.ini from openstacklocal to learningneutron.com:
[DEFAULT]
...
dhcp_domain = learningneutron.com
The dhcp_delete_namespaces configuration option, when set to true, allows Neutron to automatically delete DHCP namespaces from the host when a DHCP server is disabled on a network. It is set to false by default and should be set to true for most modern operating systems, including Ubuntu 14.04 LTS. Update the dhcp_delete_namespaces option in the DHCP agent configuration file from false to true:
[DEFAULT]
...
dhcp_delete_namespaces = true
Configuration options not mentioned here have sufficient default values and should not be changed unless your environment requires it.
Restarting the Neutron DHCP agent
Use the following command to restart the neutron-dhcp-agent service on the controller node:
# service neutron-dhcp-agent restart
Confirm the status of the neutron-dhcp-agent service as follows:

Figure 3.4
The agent should be in a running status. Using the neutron agent-list command, verify that the service has checked in:

Figure 3.5
A smiley face under the alive column means the agent is properly communicating with the Neutron service.
Note
The metadata agent may have checked in prior to its configuration due to base settings in the configuration file. The base configuration will be replaced in the following section.
Configuring the Neutron metadata agent
OpenStack provides metadata services that enable users to retrieve information about their instances that can be used to configure or manage the running instance. Metadata includes information such as the hostname, fixed and floating IPs, public keys, and more. In addition to metadata, users can access user data, such as scripts, that are provided during the launching of an instance and are executed during the boot process.
Instances typically access the metadata service over HTTP at http://169.254.169.254 during the boot process. This mechanism is implemented by cloud-init, a utility found on most cloud-ready images and available at https://launchpad.net/cloud-init.
The following diagram provides a high-level overview of the retrieval of metadata from an instance when the controller node hosts Neutron networking services:

Figure 3.6
In the preceding diagram, the following actions take place when an instance makes a request to the metadata service:
- An instance sends a request for metadata to 169.254.169.254 via HTTP at boot.
- The metadata request hits either the router or DHCP namespace, depending on the route in the instance.
- The metadata proxy service in the namespace sends the request to the Neutron metadata agent service via a Unix socket.
- The Neutron metadata agent service forwards the request to the Nova metadata API service.
- The Nova metadata API service responds to the request and forwards the response back to the Neutron metadata agent service.
- The Neutron metadata agent service sends the response back to the metadata proxy service in the namespace.
- The metadata proxy service forwards the HTTP response to the instance.
- The instance receives the metadata and/or the user data and continues the boot process.
For proper operation of metadata services, both Neutron and Nova must be configured with a shared secret. Neutron uses this secret to sign the Instance-ID header of the metadata request to prevent spoofing. On the controller node, update the following metadata options in the [neutron] section of the Nova configuration file located at /etc/nova/nova.conf:
[neutron]
...
metadata_proxy_shared_secret = metadatasecret123
service_metadata_proxy = true
Next, update the [DEFAULT] section of the metadata agent configuration file located at /etc/neutron/metadata_agent.ini with the Neutron authentication details and the metadata proxy shared secret:
[DEFAULT]
...
auth_url = http://controller01:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
nova_metadata_ip = controller01
metadata_proxy_shared_secret = metadatasecret123
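Behind the scenes, the metadata proxy adds an X-Instance-ID-Signature header, an HMAC-SHA256 of the instance ID keyed with the shared secret, which Nova recomputes to verify the request. The signing step can be reproduced with openssl; the instance UUID below is made up, and the secret matches the configuration above:

```shell
#!/bin/sh
# Reproduce the metadata proxy's instance-ID signature (sketch).
# The instance UUID is made up for illustration.
SECRET=metadatasecret123
INSTANCE_ID=5c9dc995-6e2c-4d9f-8b2e-3a4f5b6c7d8e

# HMAC-SHA256 of the instance ID, keyed with the shared secret;
# awk strips openssl's "(stdin)= " prefix, leaving the hex digest.
SIGNATURE=$(printf '%s' "$INSTANCE_ID" \
  | openssl dgst -sha256 -hmac "$SECRET" \
  | awk '{print $NF}')
echo "$SIGNATURE"
```

If the secrets in nova.conf and metadata_agent.ini do not match, these signatures will disagree and metadata requests will fail with an error.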
Configuration options not mentioned here have sufficient default values and should not be changed unless your environment requires it.
Restarting the Neutron metadata agent
Use the following commands to restart the neutron-metadata-agent and nova-api services on the controller node for the changes to take effect:
# service nova-api restart
# service neutron-metadata-agent restart
Confirm the status of neutron-metadata-agent as follows:

Figure 3.7
The agent should be in a running status. Using the neutron agent-list command, verify that the service has checked in:

Figure 3.8
A smiley face under the alive column means the agent is properly communicating with the Neutron service. If the services do not appear or have XXX under the alive column, check the Neutron logs found at /var/log/neutron for assistance in troubleshooting. More information on the use of metadata can be found in Chapter 5, Creating Networks with Neutron, and later chapters.
Configuring the Neutron L3 agent
OpenStack Networking includes an extension that provides users with the ability to dynamically provision and configure virtual routers using the API. These routers interconnect L2 networks and provide floating IP functionality that makes instances on private networks externally accessible. The Neutron L3 agent uses the Linux IP stack and iptables to perform both L3 forwarding and network address translation, or NAT. In order to support multiple routers with potentially overlapping IP networks, the Neutron L3 agent defaults to using network namespaces to provide isolated forwarding contexts. More information on creating and managing routers in Neutron begins with Chapter 7, Creating Standalone Routers with Neutron.
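As an illustration of the NAT portion, the following sketch builds the kind of iptables rule the L3 agent programs inside a router namespace to source-NAT outbound tenant traffic. The addresses and namespace name are made up, and the agent manages these rules itself, so this is for reading rather than for running against a live router:

```shell
#!/bin/sh
# Illustrative only: an SNAT rule of the sort the L3 agent installs
# inside a qrouter-<UUID> namespace. Addresses are made up.
TENANT_CIDR=192.168.100.0/24
EXTERNAL_IP=203.0.113.10

RULE="-t nat -A POSTROUTING -s ${TENANT_CIDR} -j SNAT --to-source ${EXTERNAL_IP}"
echo "iptables $RULE"

# On a live network node, the agent applies rules like this within
# the router namespace:
# ip netns exec qrouter-<router-uuid> iptables $RULE
```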
Configuring the Neutron LBaaS agent
OpenStack Networking includes an extension that provides users the ability to dynamically provision and configure virtual load balancers using the API. Neutron includes a reference implementation for LBaaS that utilizes the HAProxy software load balancer. Network namespaces are used to provide isolated load balancing contexts per virtual IP, or VIP, in version 1.0 of the LBaaS API. More information on creating and managing virtual load balancers in Neutron can be found in Chapter 10, Load Balancing Traffic to Instances.
Using the Neutron command-line interface
Neutron provides a command-line client to interface with its API. Neutron commands can be run directly from the Linux command line, or the Neutron shell can be invoked by issuing the neutron command:

Figure 3.9
The neutron shell provides commands that can be used to create, read, update, and delete the networking configuration within the OpenStack cloud. Typing a question mark or help within the Neutron shell lists the available commands. Additionally, running neutron help from the Linux command line provides a brief description of each command's function.
Many of the commands listed will be covered in subsequent chapters of this book. Commands outside the scope of basic Neutron functionality, such as those relying on third-party plugins, can be found in Appendix A, Additional Neutron Commands.