Deploying with z/VM compute nodes

Deploy the components that are necessary to create a cloud environment with z/VM compute nodes.

Before you begin

  • Ensure you completed the Deploying prerequisites steps.
  • To use the z/VM hypervisor, you must install the compute and network drivers on one or more x86_64 Red Hat Enterprise Linux nodes that manage the z/VM hypervisor. You can run several compute and network agent services (for several z/VM hypervisors) on a single x86_64 Red Hat Enterprise Linux node. For more information about configuring the z/VM hypervisor, see the z/VM OpenStack user manual.

About this task

Use the following procedure to deploy the topology to your node systems.

Procedure

  1. Log in to the deployment system as the root user.
    This is the system where IBM Cloud Manager with OpenStack was installed.
  2. Create a directory to store the files for the topology that you deploy. Change your-deployment-name to the name for your deployment.
    $ mkdir your-deployment-name
    $ chmod 600 your-deployment-name
    $ cd your-deployment-name
  3. Copy the example environment for the topology that you deploy. Change your-environment-name to the name for your environment.
    $ knife environment show example-ibm-os-single-controller-n-compute -d -Fjson > your-environment-name.json
  4. Change the following JSON attributes in your environment file, your-environment-name.json:
      • Name: Set to your environment name: your-environment-name.
      • Description: Set to the description for your environment.
      • openstack.region: (Optional) Customize the region name for your cloud. The region name must not contain spaces or special characters.
      • openstack.endpoints.host, openstack.endpoints.bind-host, and openstack.endpoints.mq.host: Change from 127.0.0.1 to the IP address of the controller node system for the topology.
      • openstack.compute.instance_name_template: Change the default value to a template that expands to a name of no more than 8 characters. For example, abc%05x.
        Note: z/VM does not support instance names longer than 8 characters. The compute node should use the same instance_name_template as the controller node.
      • (Recommended network configuration) If the management network interface of the nodes is not eth0, if the virtual machine data network interface is not eth1, or if both apply, update all occurrences of eth0, eth1, or both in the environment file to match your network configuration.
      • Update the following attributes for the controller node in your environment file (a sketch of how these settings fit into the environment file follows this list).
        • openstack.network.openvswitch.bridge_mapping_interface: Set the value to nil.
        • openstack.network.ml2.mechanism_drivers: Add zvm to the comma-separated list of mechanism drivers.
        • openstack.network.ml2.flat_networks: Add your z/VM flat networks. You must include all flat networks of managed hosts, separated by a comma. For example, xcatvsw1flat,xcatvsw2flat.
        • openstack.network.ml2.network_vlan_ranges: Add your z/VM network VLAN range. You must include all VLAN networks of managed hosts, separated by a comma. For example, xcatvsw1vlan:10:100,xcatvsw2vlan:10:100.
        • openstack.network.service_plugins: Set the value to [] because no service plugins are required for the z/VM driver.
        • ibm-openstack.network.l3.enable: Set the value to false. The l3-agent is not supported in z/VM.
        • ibm-openstack.network.ipmovement.enable: Set the value to false. The parameter is not required in z/VM.
    1. z/VM compute node configurations: You can update the following attributes for the z/VM compute nodes in the environment file, or you can specify them in the node-specific attribute file for each node, as described in step 6, which explains how to customize the attribute file. For more information about the description and usage of the following attributes, see Enabling z/VM for OpenStack, Chapter 5, OpenStack Configuration.

      • ibm-openstack.zvm-driver.hosts: Specify all z/VM hypervisors to be managed.
      • ibm-openstack.zvm-driver.#host: Specify each managed z/VM hypervisor. The value must match one of the values in ibm-openstack.zvm-driver.hosts. For example, if you set ibm-openstack.zvm-driver.hosts to ["server1","server2"], then you have the attributes ibm-openstack.zvm-driver.server1 and ibm-openstack.zvm-driver.server2. This is also the top-level attribute for each managed z/VM hypervisor; the sub-attributes of each ibm-openstack.zvm-driver.#host describe the properties of the corresponding z/VM hypervisor.
      • ibm-openstack.zvm-driver.#host.xcat.server: Specify the xCAT MN IP address or host name.
      • ibm-openstack.zvm-driver.#host.xcat.username: Specify the xCAT REST API user name.
      • ibm-openstack.zvm-driver.#host.xcat.zhcp_nodename: Specify the zHCP node name in xCAT.
      • ibm-openstack.zvm-driver.#host.xcat.master: Specify the xCAT master node (the node name in the xCAT definition).
      • ibm-openstack.zvm-driver.#host.xcat.mnadmin: Specify the xCAT management user that can SSH into the xCAT MN. If you do not set this user, the default value is mnadmin.
      • ibm-openstack.zvm-driver.#host.xcat.mgt_ip: Specify the first IP address of the management network.
        Note: Remember the xCAT management interface IP address. xCAT uses this IP address to connect to a newly deployed instance.
      • ibm-openstack.zvm-driver.#host.xcat.mgt_mask: Specify the network mask of the xCAT management network. For example: 255.255.255.0.
      • ibm-openstack.zvm-driver.#host.xcat.connection_timeout: Specify the timeout value for reading the xCAT response in seconds.
      • ibm-openstack.zvm-driver.#host.xcat.image_clean_period: Specify how long an unused xCAT image is kept before it is purged. The default is 30 days.
      • ibm-openstack.zvm-driver.#host.xcat.free_space_threshold: Specify the free disk space threshold for the xCAT MN. The default is 50G. When the disk space threshold is met, a purge operation starts.
      • ibm-openstack.zvm-driver.#host.xcat.timeout: Specify the number of seconds the agent waits for an xCAT MN response. The recommended value is 300.
      • ibm-openstack.zvm-driver.#host.config.ram_allocation_ratio: Specify the memory overcommit ratio for the z/VM Driver. The recommended value is 3.
      • ibm-openstack.zvm-driver.#host.image.tmp_path: Specify the path that images are stored (snapshot, deploy, and so on).
      • ibm-openstack.zvm-driver.#host.image.cache_manager_interval: This value is not z/VM specific. Set it to the default of 86400 seconds (24 hours).
      • ibm-openstack.zvm-driver.#host.rpc_response_timeout: Specify the RPC response timeout, in seconds. The default is 180 seconds. If this timeout is exceeded, live migration does not succeed.
      • ibm-openstack.zvm-driver.#host.reachable_timeout: Specify the timeout, in seconds, for a newly deployed instance to become reachable. If the timeout is exceeded, the deployment reports the error 'Failed to power on instance'.
      • ibm-openstack.zvm-driver.#host.polling_interval: The Neutron z/VM agent's polling interval, in seconds.
      • ibm-openstack.zvm-driver.#host.config_drive.format: The config drive format. This value must be tgz.
      • ibm-openstack.zvm-driver.#host.config_drive.inject_password: Defines whether to inject the password into the config drive. If it is set to True, the default OS root password for the newly booted virtual machine is the random value of the adminPass property that is shown in the output of the nova boot command.
      • ibm-openstack.zvm-driver.#host.diskpool: Specify the disk pool name from where xCAT allocates disks for new servers. The disk pool name is the name of the storage 'group' defined in the Directory Manager.
      • ibm-openstack.zvm-driver.#host.diskpool_type: Specify the disk pool type, either FBA or ECKD.
      • ibm-openstack.zvm-driver.#host.zvm_host: Specify the xCAT node name of the z/VM hypervisor. This property is case sensitive and should match the value specified in XCAT_zvmsysid in the DMSSICNF COPY file.
      • ibm-openstack.zvm-driver.#host.host: Specify the host that is used to distinguish different Nova compute hosts. It can be the same as zvm_host.
      • ibm-openstack.zvm-driver.#host.user_profile: Specify the default template of the user directory for new servers. Do not use lnxdflt but define your own profile.
      • ibm-openstack.zvm-driver.#host.config_drive.inject_password: Defines whether to place the password in the config drive. If inject_password is set to False, the default OS root password of the newly booted virtual machine is the password in the data bag user_passwords.zlinuxroot. If inject_password is set to True, the default OS root password can be set by using Nova user-data. If you do not specify the password in Nova user-data, the default OS root password is the random value of the adminPass property that is shown in the output of the virtual machine boot console.
      • ibm-openstack.zvm-driver.#host.scsi_pool: Specify the name of the xCAT SCSI pool. You can specify any name. xCAT creates and manages it.
      • ibm-openstack.zvm-driver.#host.fcp_list: Specify the list of FCPs used by instances. Each instance needs one FCP to attach a volume to itself. These FCPs must be planned and brought online before OpenStack can use them. OpenStack does not check their status, so if they are not ready, you might receive errors. The format of this variable is 'min1-max1;min2-max2;min3-max3'. Contact your z/VM system manager if you do not know which FCPs you can use.
      • ibm-openstack.zvm-driver.#host.zhcp_fcp_list: Specify the list of FCPs used only by the xCAT HCP node. It must be different from fcp_list, or you receive errors. The format of this variable is 'min1-max1;min2-max2;min3-max3'. Specify only one FCP for the HCP to avoid wasting resources. Contact your z/VM system manager if you do not know which FCPs you can use.
      • ibm-openstack.zvm-driver.#host.external_vswitch_mappings: Set the OSA configuration for each of the virtual switches. These configurations are required if the virtual switch connects outside of z/VM. The format of this variable is 'xcatvsw2:6243,6245;xcatvsw3:6343', where xcatvsw2 and xcatvsw3 are the virtual switches and 6243, 6245, and 6343 are the RDEV addresses of the OSA cards that are connected to the virtual switch.
      • ibm-openstack.zvm-driver.#host.ml2.flat_networks: Add your z/VM flat networks. For example, xcatvsw1flat.
      • ibm-openstack.zvm-driver.#host.ml2.network_vlan_ranges: Add your z/VM network VLAN range. For example, xcatvsw1vlan:10:100.
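    The following is a minimal sketch of how the controller-node settings from this step might fit into your-environment-name.json. It is illustrative only: the exact nesting and the surrounding attributes come from the example environment that you copied in step 3, so merge these values into the existing default_attributes section rather than replacing it, and keep whatever mechanism drivers the copied example already lists before adding zvm. The IP address 192.0.2.10, the region name, and the network names are placeholder values, and nil is expressed as null in JSON.
    {
      "name": "your-environment-name",
      "description": "Environment for z/VM compute nodes",
      "default_attributes": {
        "openstack": {
          "region": "RegionOne",
          "endpoints": {
            "host": "192.0.2.10",
            "bind-host": "192.0.2.10",
            "mq": {
              "host": "192.0.2.10"
            }
          },
          "compute": {
            "instance_name_template": "abc%05x"
          },
          "network": {
            "service_plugins": [],
            "openvswitch": {
              "bridge_mapping_interface": null
            },
            "ml2": {
              "mechanism_drivers": "openvswitch,zvm",
              "flat_networks": "xcatvsw1flat,xcatvsw2flat",
              "network_vlan_ranges": "xcatvsw1vlan:10:100,xcatvsw2vlan:10:100"
            }
          }
        },
        "ibm-openstack": {
          "network": {
            "l3": {
              "enable": false
            },
            "ipmovement": {
              "enable": false
            }
          }
        }
      }
    }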
  5. Copy the following example topology to a file, your-topology-name.json. Change your-topology-name to the name for your topology.
    Here is an example topology with z/VM compute nodes.
    {
      "name":"CHANGEME",
      "description":"CHANGEME",
      "environment":"CHANGEME",
      "secret_file":"CHANGEME",
      "run_sequentially":false,
      "nodes": [
        {
          "fqdn":"CHANGEME",
          "password":"CHANGEME",
          "identity_file":"CHANGEME",
          "quit_on_error":true,
          "run_order_number":1,
          "runlist": [
             "role[ibm-os-single-controller-node]",
             "role[ibm-os-prs-ego-master]",
             "role[ibm-os-prs-controller-node]",
             "role[ibm-sce-node]"
         ]
        },
        {
          "fqdn":"CHANGEME",
          "password":"CHANGEME",
          "identity_file":"CHANGEME",
          "quit_on_error":true,
          "run_order_number":2,
          "runlist": [
            "role[ibm-os-zvm-driver-node]", 
            "role[ibm-os-prs-compute-node]"
          ],
            "attribute_file":"CHANGEME"
        }
      ]
    }
    Notes:
    • If you deploy a distributed topology, you must add recipe[ibm-openstack-zvm-driver::neutron-server-configure] to the controller node runlist (see the sketch that follows these notes).
    • If you deploy an all-in-one environment, use the same node information for both node entries.
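    For a distributed topology, the controller node runlist might then look like the following sketch. The role names are those from the example topology in this step; the recipe is shown appended at the end of the runlist, so confirm the required position for your release.
          "runlist": [
            "role[ibm-os-single-controller-node]",
            "role[ibm-os-prs-ego-master]",
            "role[ibm-os-prs-controller-node]",
            "role[ibm-sce-node]",
            "recipe[ibm-openstack-zvm-driver::neutron-server-configure]"
          ]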
  6. Customize the topology file.
    1. The first node in your topology file is your single controller node. The second node in your topology file is for a compute node. If your topology requires extra compute nodes, copy the compute node section as many times as needed. Ensure that additional compute node sections are comma-separated.
    2. Change the following JSON attributes in your topology file, your-topology-name.json:
      • Name: Set to your topology name: your-topology-name.
      • Description: Set to the description for your topology.
      • Environment: Set to the environment for your topology: your-environment-name.
      • nodes.fqdn: For each node, set this to the fully qualified domain name of the node system. The deployment system must be able to SSH to the node by using this fully qualified domain name. You can also set it to the public IP address, private IP address, or host name. It is recommended that the value correspond to the management network interface for the node.
      • nodes.password or nodes.identity_file: For each node, set to the appropriate SSH root user authentication for the node system. Either a password or an SSH identity file can be used for authentication. Remove the unused attribute for each node (see the example node entry that follows).
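      For illustration, a compute node entry that authenticates with an SSH identity file (and therefore omits the password attribute) might look like the following. The fully qualified domain name, the key path, and the attribute file name are placeholder values only.
        {
          "fqdn": "compute1.example.com",
          "identity_file": "/root/.ssh/id_rsa",
          "quit_on_error": true,
          "run_order_number": 2,
          "runlist": [
            "role[ibm-os-zvm-driver-node]",
            "role[ibm-os-prs-compute-node]"
          ],
          "attribute_file": "zvm-compute-attributes.json"
        }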
    3. Create node-specific attribute files. You can create an attribute file for each node that you deploy. The following example shows a node-specific attribute file for a z/VM compute node. The example file creates two compute services on the node. See step 4 for a description of each attribute.

      In addition, you must update occurrences of CHANGEME to the actual value. You can also change the default values and hosts in the examples.

      {
        "ibm-openstack": {
           "zvm-driver" :{
               "hosts" : ["zvm1","zvm2"],
               "zvm1" : {
                  "xcat": {
                     "username": "CHANGEME",
                     "server": "CHANGEME",
                     "zhcp_nodename": "CHANGEME",
                     "master": "CHANGEME",
                     "mgt_ip": "CHANGEME",
                     "mgt_mask": "CHANGEME"
                  },
                  "ml2": {
                     "type_drivers": "local,flat,vlan,gre",
                     "tenant_network_types": "vlan",
                     "flat_networks": "CHANGEME",
                     "network_vlan_ranges": "CHANGEME"
                   },
                  "config": {
                     "ram_allocation_ratio": "3"
                  },
                  "image": {
                     "tmp_path": "/var/lib/nova/images",
                     "cache_manager_interval": "86400"
                  },
                  "config_drive": {
                     "format": "tgz",
                     "inject_password": "false"
                  },
                  "diskpool" : "CHANGEME",
                  "diskpool_type" : "CHANGEME",
                  "zvm_host" : "CHANGEME",
                  "host" : "CHANGEME",
                  "user_profile" : "CHANGEME",
                  "scsi_pool" : "CHANGEME",
                  "fcp_list" : "CHANGEME",
                  "zhcp_fcp_list" : "CHANGEME",
                  "external_vswitch_mappings": "CHANGEME"
               },
             "zvm2" : {
                  "xcat": {
                     "username": "CHANGEME",
                     "server": "CHANGEME",
                     "zhcp_nodename": "CHANGEME",
                     "master": "CHANGEME",
                     "mgt_ip": "CHANGEME",
                     "mgt_mask": "CHANGEME"
                  },
                  "ml2": {
                     "type_drivers": "local,flat,vlan,gre",
                     "tenant_network_types": "vlan",
                     "flat_networks": "CHANGEME",
                     "network_vlan_ranges": "CHANGEME"
                  },
                  "config": {
                     "ram_allocation_ratio": "3"
                  },
                  "image": {
                     "tmp_path": "/var/lib/nova/imagesCHANGEME",
                     "cache_manager_interval": "86400"
                  },
                  "config_drive": {
                     "format": "tgz",
                     "inject_password": "false"
                  },
                  "diskpool" : "CHANGEME",
                  "diskpool_type" : "CHANGEME",
                  "zvm_host" : "CHANGEME",
                  "host" : "CHANGEME",
                  "user_profile" : "CHANGEME",
                  "scsi_pool" : "CHANGEME",
                  "fcp_list" : "CHANGEME",
                  "zhcp_fcp_list" : "CHANGEME",
                  "external_vswitch_mappings": "CHANGEME"
             }
           }
         }
      }
      After creating the node-specific attribute files, add the nodes.attribute_file JSON attributes in your topology file, your-topology-name.json:
      • nodes.attribute_file: For each node, set to the attribute JSON file that overrides the attributes in the default_attributes section of the environment file.
  7. Customize the passwords and secrets before deploying. You must change the passwords in the user_passwords data bag for the xCAT administrator (xcat), the xCAT mnadmin user (xcatmnadmin), and the root user of instances that are created by z/VM (zlinuxroot). For instructions, see Customizing passwords and secrets.
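    The exact procedure is in Customizing passwords and secrets. As a minimal sketch, assuming that the encrypted data bag and its item are both named user_passwords and that your encryption key is the secret_file referenced in your topology file, the standard Chef knife data bag commands can show and edit these passwords:
    $ knife data bag show user_passwords user_passwords --secret-file /path/to/your-secret-file -Fjson
    $ knife data bag edit user_passwords user_passwords --secret-file /path/to/your-secret-file
    The edit command decrypts the item, opens it in your editor, and re-encrypts it when you save.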
  8. (Optional) Complete any optional customizations.
    For options, see Deployment customization options.
    Note: Some customization options might not be supported for all hypervisor types and some cannot be configured after you deploy your cloud environment.
  9. Deploy your topology.
    1. Upload the environment for your deployment.
      $ knife environment from file your-environment-name.json
    2. Deploy the topology.
      $ knife os manage deploy topology your-topology-name.json 
    3. (Optional) Check the detailed status of the IBM Cloud Manager with OpenStack services that are deployed.
      $ knife os manage services status --topology-file your-topology-name.json
  10. After the deployment is complete, the IBM Cloud Manager with OpenStack services are ready to use. The IBM Cloud Manager with OpenStack dashboard is available at https://controller.fqdn.com/, where controller.fqdn.com is the fully qualified domain name of the controller node in your topology.
    The web interface for IBM Cloud Manager with OpenStack self-service portal is available at https://controller.fqdn.com:18443/cloud/web/login.html. You can log in to either interface as the admin user with the password that you customized in step 7.

    For more information about managing IBM Cloud Manager with OpenStack services, see Managing IBM Cloud Manager with OpenStack services.

What to do next

You are ready to start using your cloud environment. To continue, see Using your cloud environment.