Deploying an advanced configuration with KVM for IBM z Systems

Deploy the components that are necessary to create a cloud environment with KVM for IBM z Systems compute nodes by using an advanced configuration.

Before you begin

Before you begin, ensure that you have completed the Deploying prerequisites steps.

Use the following procedure to deploy the topology to your node systems.

Procedure

  1. Log in to the deployment system as the root user.
    This is the system where IBM Cloud Manager with OpenStack was installed.
  2. Create a directory to store the files for the topology that you deploy. Change your-deployment-name to the name for your deployment.
    $ mkdir your-deployment-name
    $ chmod 600 your-deployment-name
    $ cd your-deployment-name
  3. Copy the example environment for the topology that you deploy. Change your-environment-name to the name for your environment.
    $ knife environment show example-ibm-os-single-controller-n-compute -d -Fjson > your-environment-name.json
  4. Change the following JSON attributes in your environment file, your-environment-name.json (an example snippet follows this list):
    • Name: Set to your environment name: your-environment-name.
    • Description: Set to the description for your environment.
    • openstack.region: (Optional) Customize the region name for your cloud. The region name must not contain spaces or special characters.
    • openstack.endpoints.host, openstack.endpoints.bind-host, openstack.endpoints.mq.host, and openstack.endpoints.db.host: Change from 127.0.0.1 to the IP address of the controller node system for the topology.
    • ibm-sce.self-service.bind_interface: If ibm-sce.service.enabled is set to true, change from 127.0.0.1 to the IP address of the controller node system for the topology.
    • openstack.compute.libvirt.virt_type: Set to the hypervisor type, kvm, for the topology.
    • (Single network interface card or no virtual machine data network): If you are using a GRE or VXLAN network with a single network interface card on the nodes (or no virtual machine data network), you must change the following default values in the environment:
      openstack.network.openvswitch.tenant_network_type = "gre"
      openstack.network.openvswitch.bridge_mappings = ""
      openstack.network.openvswitch.network_vlan_ranges = ""
      openstack.network.openvswitch.bridge_mapping_interface = ""
      openstack.network.ml2.tenant_network_types = "gre"
      openstack.network.ml2.network_vlan_ranges = ""
      openstack.network.ml2.flat_networks = ""
      Note: If you are using VXLAN, then replace gre in the previous example with vxlan.

      If the management network interface of the nodes is not eth0, then update all occurrences of eth0 in the environment file to match your network configuration on the nodes.

      (Recommended network configuration) If the management network interface of the nodes is not eth0, or if the virtual machine data network interface is not eth1, update all occurrences of eth0, eth1, or both in the environment file to match your network configuration on the nodes. The following list shows some of the networking properties and their default values (from the example environment) that you might need to change. In most cases, the default values are sufficient and do not need to be changed.
      • openstack.network.core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin.

        In the example environment, the openstack.network.core_plugin property is set to "neutron.plugins.ml2.plugin.Ml2Plugin" and the openstack.network.ml2.mechanism_drivers property is set to "openvswitch". The ML2-specific properties (properties that begin with openstack.network.ml2.*) must be kept in sync with the properties of the mechanism driver (openstack.network.openvswitch.*). The example environment already has these two sets of properties in sync.

      • openstack.network.openvswitch.bridge_mapping_interface: "br-eth1:eth1". The bridge_mapping_interface property controls the creation of the data network OVS bridge on the nodes. If Open vSwitch is installed and the data network bridge is already configured on the nodes, this property is not needed and you can set it to "". If a specific network configuration is needed for the data network (for example, bonding), you must set this property to "" and complete the setup manually before or after the node is converged.
      • openstack.network.openvswitch.bridge_mappings: "default:br-eth1". The bridge_mappings property controls which OVS bridge is used for flat and VLAN network traffic from the node. If this OVS bridge does not exist, the Open vSwitch agent does not start. This bridge can be automatically created by setting the bridge_mapping_interface property.
      • openstack.network.openvswitch.network_vlan_ranges and openstack.network.ml2.network_vlan_ranges: These two properties define the default VLAN range that is used when a tenant network is created. Both properties default to default:1:4094. This VLAN range might need to be adjusted based on the VLAN configuration of the physical switches and hypervisors in your environment. The values of both properties must be the same.
      • openstack.endpoints.compute-serial-proxy.host: If you use a domain name, not the IP address, to access IBM Cloud Manager - Dashboard, add this attribute and set it to the domain name of the controller, for example, controller.fqdn.com.
      • override_attributes.openstack.compute.cert and override_attributes.openstack.compute.key: These two properties set up the serial console proxy with SSL on the controller node. The values are the certificate paths on the controller node. You must also ensure that the certificates are trusted by the browser that you use to access IBM Cloud Manager - Dashboard. For more information about obtaining certificates, see note 1a in Customizing for a more secure cloud.
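
    For example, after you customize the attributes in this list, the attributes section of your-environment-name.json (default_attributes or override_attributes, depending on where the example environment defines each attribute) might contain entries that are similar to the following snippet. The IP address 192.0.2.10 is a placeholder for the controller node IP address, and the nesting follows the dotted attribute names in this list; adjust the values to match your own environment.
      "openstack": {
        "endpoints": {
          "host": "192.0.2.10",
          "bind-host": "192.0.2.10",
          "mq": { "host": "192.0.2.10" },
          "db": { "host": "192.0.2.10" }
        },
        "compute": {
          "libvirt": { "virt_type": "kvm" }
        }
      }
    To find every occurrence of eth0 or eth1 that might need updating, you can search the file, for example with grep -n "eth" your-environment-name.json.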
  5. Copy the following example topology to a file, your-topology-name.json. Change your-topology-name to the name for your topology.
    Here is an example topology with KVM for IBM z Systems compute nodes.
    {
      "name":"CHANGEME",
      "description":"CHANGEME",
      "environment":"CHANGEME",
      "secret_file":"CHANGEME",
      "run_sequentially":false,
      "nodes": [
        {
          "fqdn":"CHANGEME",
          "password":"CHANGEME",
          "identity_file":"CHANGEME",
          "quit_on_error":true,
          "run_order_number":1,
           "runlist": [
             "role[ibm-os-single-controller-node]",
             "role[ibm-os-prs-ego-master]",
             "role[ibm-os-prs-controller-node]",
             "role[ibm-sce-node]"
          ]
        },
        {
          "fqdn":"CHANGEME",
          "password":"CHANGEME",
          "identity_file":"CHANGEME",
          "quit_on_error":true,
          "run_order_number":2,
          "runlist": [
            "role[ibm-os-compute-node-kvmibm]",
            "role[ibm-os-prs-compute-node]"
          ]
        }  
      ]
    }
  6. Customize the topology file.
    1. The first node in your topology file is your single controller node. The second node in your topology file is for a compute node. If your topology requires extra compute nodes, copy the compute node section as many times as needed. Ensure that additional compute node sections are comma-separated.
    2. Change the following JSON attributes in your topology file, your-topology-name.json:
      • Name: Set to your topology name: your-topology-name.
      • Description: Set to the description for your topology.
      • Environment: Set to the environment for your topology: your-environment-name.
      • nodes.fqdn: For each node, set to the fully qualified domain name of the node system. The deployment system must be able to connect to the node over SSH by using this value. You can also set it to the public IP address, private IP address, or host name of the node. It is recommended that the value correspond to the management network interface for the node. A simple connectivity check follows this list.
      • nodes.password or nodes.identity_file: For each node, set to the appropriate SSH root user authentication for the node system. Either a password or an SSH identity file can be used for authentication. Remove the unused attribute for each node.
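      Before you continue, you can confirm that the deployment system can reach each node system over SSH with the root user authentication that you specified. The host name in the following check is a placeholder; substitute the fqdn value of each node in your topology.
        $ ssh root@compute1.example.com hostname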
    3. (Optional) Create node-specific attribute files. This step is required only when one or more nodes in your topology require attributes that differ from those defined in your environment file, your-environment-name.json.
      1. Create a node-specific attribute file that is similar to the following format. For example, a node might not have an eth0 network interface, which is the default value for some attributes. The following example node attribute file changes the default eth0 network interface on a compute node to eth2.
        {
          "openstack": {
            "endpoints": {
              "network-openvswitch": {
                "bind_interface": "eth2"
              },
              "compute-vnc-bind": {
                "bind_interface": "eth2"
              }, 
              "compute-vnc-proxy-bind": {
                "bind_interface": "eth2"
              },
              "compute-serial-console-bind": {
                "bind_interface": "eth2"
              }
            }
          }
        }
      After you create the node-specific attribute files, add the nodes.attribute_file JSON attribute to your topology file, your-topology-name.json:
      • nodes.attribute_file: For each node, set to the attribute JSON file that overrides the attributes in the default_attributes section of the environment file. An example node entry follows.
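      For example, a compute node entry in your-topology-name.json that uses a node-specific attribute file might look like the following snippet. The file name compute-node-eth2.json is a placeholder for the attribute file that you created in the previous step.
        {
          "fqdn":"CHANGEME",
          "password":"CHANGEME",
          "quit_on_error":true,
          "run_order_number":2,
          "attribute_file":"compute-node-eth2.json",
          "runlist": [
            "role[ibm-os-compute-node-kvmibm]",
            "role[ibm-os-prs-compute-node]"
          ]
        }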
  7. Customize the passwords and secrets before deploying. For instructions, see Customizing passwords and secrets.
  8. Configure the OpenStack block storage (cinder) driver.
    By default, the environment is configured to use the LVM iSCSI cinder driver. You can change the following JSON attributes in your environment file, your-environment-name.json, to customize the LVM iSCSI cinder driver configuration. An example snippet follows this list.
    1. openstack.block-storage.volume.create_volume_group: If set to true, then the cinder-volumes volume group is created on the controller node with a size determined by openstack.block-storage.volume.volume_group_size. If set to false (default), then you can create the volume group manually using physical disks. For more information, see Creating an LVM volume group using physical disks.
    2. openstack.block-storage.volume.volume_group_size: The size of the cinder-volumes volume group. The amount of storage that you specify must be smaller than the available disk space; if necessary, set the value to your free disk size. The default value is 40 GB. This attribute is used only if openstack.block-storage.volume.create_volume_group is set to true.
    3. openstack.block-storage.volume.iscsi_ip_address: Change from 127.0.0.1 to the management IP address of the controller node.
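    For example, to have the deployment create a 100 GB cinder-volumes volume group on the controller node, the block-storage attributes in your environment file might look like the following snippet. The size and the IP address are placeholders (and this sketch assumes that the size is specified as a number of gigabytes); use a size that fits the free disk space on your controller node and the management IP address of your controller node.
      "openstack": {
        "block-storage": {
          "volume": {
            "create_volume_group": true,
            "volume_group_size": 100,
            "iscsi_ip_address": "192.0.2.10"
          }
        }
      }
    If you leave create_volume_group set to false, you can create the cinder-volumes volume group yourself with standard LVM commands (pvcreate and vgcreate), as described in Creating an LVM volume group using physical disks.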

    To customize your environment for a different cinder driver, see Configuring Cinder drivers.

  9. (Optional) Complete any optional customizations.
    For options, see Deployment customization options.
    Note: Some customization options might not be supported for all hypervisor types and some cannot be configured after you deploy your cloud environment.
  10. Deploy your topology.
    1. Upload the environment for your deployment.
      $ knife environment from file your-environment-name.json
    2. Deploy the topology.
      $ knife os manage deploy topology your-topology-name.json 
    3. (Optional) Check the detailed status of the IBM Cloud Manager with OpenStack services that are deployed.
      $ knife os manage services status --topology-file your-topology-name.json
  11. After the deployment is complete, the IBM Cloud Manager with OpenStack services are ready to use. The IBM Cloud Manager - Dashboard is available at https://controller.fqdn.com/, where controller.fqdn.com is the fully qualified domain name of the controller node in your topology.
    You can log in as the admin user with the password that you customized in step 7.

    For more information about managing IBM Cloud Manager with OpenStack services, see Managing IBM Cloud Manager with OpenStack services.

  12. (Optional) Verify the Open vSwitch (OVS) configuration for your network.
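    One way to spot-check the OVS configuration on a node is to list the OVS bridges and their ports and confirm that the data network bridge and interface from your bridge_mappings and bridge_mapping_interface settings are present. The following commands assume the default br-eth1 bridge from the example environment.
      $ ovs-vsctl list-br
      $ ovs-vsctl list-ports br-eth1
      $ ovs-vsctl show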

What to do next

You are ready to start using your cloud environment. To continue, see Using your cloud environment.