Deploy the components that are necessary to create a cloud
environment with KVM or
QEMU compute nodes using advanced configuration.
Before you begin
- Ensure that you completed the steps in Deploying prerequisites.
- For the node systems, the network configuration and hypervisor
type limit the type of networks that can be defined.
Important: The following example uses defaults of eth0 for
the management network, eth1 for the virtual machine
data network, and eth2 for the external network.
The interface names for the networks in your environment might be different. Ensure that you update all occurrences of eth0, eth1, and eth2 in your environment file to match your network configuration.
Table 1. Supported network configuration

1 NIC per node:
- eth0 is dedicated as the management network.
Network type: Local, VXLAN, GRE
Hypervisor type: KVM only. Note: If one or more of the compute hypervisors is not KVM, then GRE and VXLAN cannot be used.

2 NICs per node (default configuration):
- eth0 is the management network and the external network.
- eth1 is dedicated as the virtual machine data network.
Network type: Local, Flat, VLAN, VXLAN, GRE
Hypervisor type: KVM only. Note: If one or more of the compute hypervisors is not KVM, then GRE and VXLAN cannot be used.

3 or more NICs per node (recommended configuration):
- eth0 is dedicated as the management network.
- eth1 is dedicated as the virtual machine data network.
- eth2 is dedicated as the external network.
Network type: Local, Flat, VLAN, VXLAN, GRE
Hypervisor type: KVM only. Note: If one or more of the compute hypervisors is not KVM, then GRE and VXLAN cannot be used.
Note:
- The virtual machine data network must be on a dedicated
interface. Communication to the node must be done through the management
network or another interface on the node.
- The local network type can be configured; however,
the network traffic is limited to the current node. The minimum topology
uses the local network option, by default.
For more information about network
considerations, see Network considerations.
Use the following procedure to deploy the topology to
your node systems.
Procedure
- Log in to the deployment system as the root
user.
This is the system where IBM Cloud
Manager with OpenStack was installed.
- Create a directory to store the files for
the topology that you deploy. Change your-deployment-name to
the name for your deployment.
$ mkdir your-deployment-name
$ chmod 600 your-deployment-name
$ cd your-deployment-name
- Copy the example environment for the topology
that you deploy. Change your-environment-name to
the name for your environment.
$ knife environment show example-ibm-os-single-controller-n-compute -d -Fjson > your-environment-name.json
- Change the following JSON attributes in your environment
file, your-environment-name.json:
- Name: Set to your environment name: your-environment-name.
- Description: Set to the description for your
environment.
- openstack.region: (Optional)
Customize the region name for your cloud. The region name must not
contain spaces or special characters.
- openstack.endpoints.host, openstack.endpoints.bind-host, openstack.endpoints.mq.host,
and openstack.endpoints.db.host: Change from 127.0.0.1 to
the IP address of the controller node system for the topology.
- ibm-sce.self-service.bind_interface: If ibm-sce.service.enabled is
set to true, change from 127.0.0.1 to
the IP address of the controller node system for the topology.
- openstack.compute.libvirt.virt_type: Set
to the hypervisor type, kvm or qemu, for the topology.
- (Single network interface card or no virtual machine data network):
If you are using a GRE or VXLAN network
with a single network interface card on the nodes (or no virtual machine
data network), you must change the following default values in the
environment:
openstack.network.openvswitch.tenant_network_type = "gre"
openstack.network.openvswitch.bridge_mappings = ""
openstack.network.openvswitch.network_vlan_ranges = ""
openstack.network.openvswitch.bridge_mapping_interface = ""
openstack.network.ml2.tenant_network_types = "gre"
openstack.network.ml2.network_vlan_ranges = ""
openstack.network.ml2.flat_networks = ""
Note: If you are using VXLAN, then replace gre in
the previous example with vxlan.
If the
management network interface of the nodes is not eth0,
then update all occurrences of eth0 in the environment
file to match your network configuration on the nodes.
(Recommended network configuration) If the management network interface of the nodes is not eth0, if the virtual machine data network interface is not eth1, or if both apply, then update all occurrences of eth0, eth1, or both in the environment file to match your network configuration.
The following list displays some of the networking properties and
their default values (from the example environment) that you might
need to change. In most cases, these default values should be sufficient
and do not need to be changed.
- openstack.network.core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin.
In the example environment, the openstack.network.core_plugin property is set to "neutron.plugins.ml2.plugin.Ml2Plugin" and the openstack.network.ml2.mechanism_drivers property is set to "openvswitch". The ML2-specific properties (properties that begin with openstack.network.ml2.*) must be kept consistent and in sync with the properties of the mechanism driver (openstack.network.openvswitch.*). The example environment is set up with these two sets of properties in sync.
- openstack.network.openvswitch.bridge_mapping_interface: "br-eth1:eth1".
The bridge_mapping_interface property is used
to control the creation of the data network OVS bridge on the nodes.
If Open vSwitch is installed and the data network bridge is already configured on the nodes, this property is not necessary and you can set it to "". If a specific network configuration is needed for the data network (for example, bonding), you must set this property to "" and complete the setup manually before or after the node is converged.
- openstack.network.openvswitch.bridge_mappings: "default:br-eth1".
The bridge_mappings property controls which OVS
bridge is used for flat and VLAN network traffic from the node. If
this OVS bridge does not exist, the Open vSwitch agent does not start.
This bridge can be automatically created by setting the bridge_mapping_interface property.
- openstack.network.openvswitch.network_vlan_ranges
and openstack.network.ml2.network_vlan_ranges:
These two properties define the default VLAN range that is used when a tenant network is created. Both properties default to default:1:4094. This VLAN range might need to be adjusted based on the VLAN configuration of the physical switches and hypervisors in your environment. The values of both properties must be the same. A sketch of how several of these attributes appear in the environment file follows this list.
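For orientation only, here is a minimal sketch of how several of the attributes described above might look in the default_attributes section of your-environment-name.json. The nesting, the placeholder controller IP address 192.0.2.10, and the example values shown are illustrative; your generated environment file contains additional keys and many more attributes, so edit the values in place rather than copying this fragment.
{
  "name": "your-environment-name",
  "description": "Example description",
  "default_attributes": {
    "openstack": {
      "region": "RegionOne",
      "endpoints": {
        "host": "192.0.2.10",
        "bind-host": "192.0.2.10",
        "mq": { "host": "192.0.2.10" },
        "db": { "host": "192.0.2.10" }
      },
      "compute": {
        "libvirt": { "virt_type": "kvm" }
      },
      "network": {
        "openvswitch": {
          "bridge_mappings": "default:br-eth1",
          "bridge_mapping_interface": "br-eth1:eth1"
        }
      }
    }
  }
}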
- Copy the following example topology to a file, your-topology-name.json.
Change your-topology-name to the name for your
topology.
Here is an example topology with
KVM or
QEMU compute nodes.
{
"name":"CHANGEME",
"description":"CHANGEME",
"environment":"CHANGEME",
"secret_file":"CHANGEME",
"run_sequentially":false,
"nodes": [
{
"fqdn":"CHANGEME",
"password":"CHANGEME",
"identity_file":"CHANGEME",
"quit_on_error":true,
"run_order_number":1,
"runlist": [
"role[ibm-os-single-controller-node]",
"role[ibm-os-prs-ego-master]",
"role[ibm-os-prs-controller-node]",
"role[ibm-sce-node]"
]
},
{
"fqdn":"CHANGEME",
"password":"CHANGEME",
"identity_file":"CHANGEME",
"quit_on_error":true,
"run_order_number":2,
"runlist": [
"role[ibm-os-compute-node-kvm]",
"role[ibm-os-prs-compute-node]"
]
}
]
}
- Customize the topology file.
- The first node in your topology file is your single controller
node. The second node in your topology file is for a compute node.
If your topology requires extra compute nodes, copy the compute node
section as many times as needed. Ensure that additional compute node
sections are comma-separated.
- Change the following JSON attributes in your topology file, your-topology-name.json:
- Name: Set to your topology name: your-topology-name.
- Description: Set to the description for your
topology.
- Environment: Set to the environment for your
topology: your-environment-name.
- nodes.fqdn: For each node, set this attribute to the fully qualified domain name of the node system. The deployment system must be able to connect to the node over SSH by using this value. You can also set it to the public IP address, private IP address, or host name of the node. It is recommended that the value corresponds to the management network interface of the node.
- nodes.password or nodes.identity_file: For each node, set the appropriate SSH root user authentication for the node system. Either a password or an SSH identity file can be used for authentication. Remove the unused attribute for each node, as shown in the sketch that follows this list.
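For illustration only, here is a minimal sketch of the controller node entry after customization, assuming that an SSH identity file is used instead of a password. The host name controller.example.com and the path /root/.ssh/id_rsa are placeholders; use the values for your own environment.
{
  "fqdn": "controller.example.com",
  "identity_file": "/root/.ssh/id_rsa",
  "quit_on_error": true,
  "run_order_number": 1,
  "runlist": [
    "role[ibm-os-single-controller-node]",
    "role[ibm-os-prs-ego-master]",
    "role[ibm-os-prs-controller-node]",
    "role[ibm-sce-node]"
  ]
}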
- (Optional) Create node-specific attribute files. This step is only required when one or more nodes in your topology require different attributes from those that are defined in your environment file, your-environment-name.json.
- Create a node-specific attribute file that is similar to the following format. For example, a node might not have an eth0 network interface, which is the default value for some attributes. The following example node attribute file changes the default eth0 network interface on a compute node to eth2.
{
  "openstack": {
    "endpoints": {
      "network-openvswitch": {
        "bind_interface": "eth2"
      },
      "compute-vnc-bind": {
        "bind_interface": "eth2"
      }
    }
  }
}
After you create the node-specific attribute files, add the nodes.attribute_file JSON attribute to your topology file, your-topology-name.json:
- nodes.attribute_file: For each node, set this attribute to the node attribute JSON file, which overrides the attributes in the default_attributes section of the environment file, as shown in the sketch that follows.
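For illustration only, a compute node entry that references a node-specific attribute file might look like the following sketch. The host name compute2.example.com and the file name compute2-attributes.json are placeholders.
{
  "fqdn": "compute2.example.com",
  "password": "CHANGEME",
  "quit_on_error": true,
  "run_order_number": 2,
  "attribute_file": "compute2-attributes.json",
  "runlist": [
    "role[ibm-os-compute-node-kvm]",
    "role[ibm-os-prs-compute-node]"
  ]
}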
- Customize the passwords and secrets before deploying.
For instructions, see Customizing passwords and secrets.
- Configure the OpenStack block
storage (cinder) driver.
By default, the environment is configured to use the LVM iSCSI cinder driver. You can change the following JSON attributes in your environment file, your-environment-name.json, to customize the LVM iSCSI cinder driver configuration. A sketch of these attributes in JSON form follows the list.
- openstack.block-storage.volume.create_volume_group:
If set to true, then the cinder-volumes volume
group is created on the controller node with a size determined by openstack.block-storage.volume.volume_group_size.
If set to false (default), then you can create
the volume group manually using physical disks. For more information,
see Creating an LVM volume group using physical disks.
- openstack.block-storage.volume.volume_group_size:
The amount of storage you use must be smaller than the size available.
If necessary, you can set the value to your free disk size. The default
value is 40 GB. This attribute is used only if openstack.block-storage.volume.create_volume_group is
set to true.
- openstack.block-storage.volume.iscsi_ip_address:
Change from 127.0.0.1 to the management IP address
of the controller node.
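For orientation only, here is a minimal sketch of how these block storage attributes might appear within the default_attributes section of the environment file. The nesting, the value types, and the placeholder management IP address 192.0.2.10 are illustrative; edit the corresponding values in your generated environment file rather than copying this fragment.
{
  "openstack": {
    "block-storage": {
      "volume": {
        "create_volume_group": true,
        "volume_group_size": 40,
        "iscsi_ip_address": "192.0.2.10"
      }
    }
  }
}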
To customize your environment for a different cinder driver,
see Configuring Cinder drivers.
- (Optional) Complete any optional
customizations.
For options, see
Deployment customization options.
Note: Some customization
options might not be supported for all hypervisor types and some cannot
be configured after you deploy your cloud environment.
- Deploy your topology.
- Upload the environment for your deployment.
$ knife environment from file your-environment-name.json
- Deploy the topology.
$ knife os manage deploy topology your-topology-name.json
- (Optional) Check the detailed status of
the IBM Cloud
Manager with OpenStack services
that are deployed.
$ knife os manage services status --topology-file your-topology-name.json
- After the deployment is complete, the IBM Cloud
Manager with OpenStack services are
ready to use. The IBM Cloud
Manager - Dashboard is
available at https://controller.fqdn.com/, where controller.fqdn.com is
the fully qualified domain name of the controller node in your topology.
- (Optional) Verify the Open vSwitch (OVS) configuration
for your network.
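As a quick check on a node, assuming the default br-eth1 bridge mapping from the example environment, you can list the Open vSwitch bridges and their ports. The integration bridge br-int, and br-eth1 if the bridge mapping interface was configured, are expected to appear in the output.
$ ovs-vsctl list-br
$ ovs-vsctl show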
What to do next
You are ready to start using your cloud environment.
To continue, see Using your cloud environment.