Deploying an all-in-one node with n VMware compute services

Deploy the components that are necessary to create a cloud environment with VMware nodes.

Before you begin

Before you begin, verify that you completed the following prerequisites.
  • Ensure that you completed the Deploying prerequisites steps.
  • Ensure that your managed-to vCenter matches the requirements in Verifying VMware driver and DVS mechanism driver requirements.
  • To use the VMware hypervisor, one or more x86_64 Red Hat Enterprise Linux system nodes must be used to install the compute, network, block-storage, and image drivers that manage the VMware hypervisor. You can run several compute services on one x86_64 Red Hat Enterprise Linux system node.
  • You must have at least one managed-to vCenter server ready.
    Note: To use multiple vCenter servers, you must also use multiple regions.

About this task

Use the following procedure to deploy the topology to your node systems.

Procedure

  1. Log in to the deployment system as the root user.
    This is the system where IBM Cloud Manager with OpenStack was installed.
  2. Create a directory to store the files for the topology that you deploy. Change your-deployment-name to the name for your deployment.
    $ mkdir your-deployment-name
    $ chmod 600 your-deployment-name
    $ cd your-deployment-name
  3. Copy the example environment for the topology that you deploy. Change your-environment-name to the name for your environment.
    $ knife environment show example-ibm-os-single-controller-n-compute -d -Fjson > your-environment-name.json
  4. Change the following JSON attributes in your environment file, your-environment-name.json:
      • Name: Set to your environment name: your-environment-name.
      • Description: Set to the description for your environment.
      • openstack.region: (Optional) Customize the region name for your cloud. The region name must not contain spaces or special characters.
      • openstack.endpoints.host, openstack.endpoints.bind-host, openstack.endpoints.mq.host, and openstack.endpoints.db.host: Change from 127.0.0.1 to the IP address of the controller node system for the topology.
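      For example, if the IP address of the controller node is 192.0.2.20 (a placeholder documentation address), the corresponding section of your-environment-name.json might look like the following sketch. The exact nesting follows the dotted attribute names; check the example environment file for the precise structure:

      ```json
      {
        "openstack": {
          "region": "RegionOne",
          "endpoints": {
            "host": "192.0.2.20",
            "bind-host": "192.0.2.20",
            "mq": { "host": "192.0.2.20" },
            "db": { "host": "192.0.2.20" }
          }
        }
      }
      ```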
    1. VMware node configurations: You can update the following attributes for the VMware driver nodes in the environment file. Alternatively, you can specify the attributes in the attributes file for each node, as described in step 6, which describes how to customize the attributes file.
      • ibm-openstack.vmware-driver.vcenter_connection.host_ip: Specify the IP address of the managed-to vCenter server.
      • ibm-openstack.vmware-driver.vcenter_connection.host_username: Specify the login user name of the managed-to vCenter server.
      • ibm-openstack.vmware-driver.vcenter_connection.wsdl_location: Specify the location of the WSDL file that is used to set up the vSphere API session. This attribute is optional and can be left blank.
      • ibm-openstack.vmware-driver.vcenter_connection.api_retry_count: Specify the API retry count that is used to set up the vSphere API session.
      • ibm-openstack.vmware-driver.vcenter_connection.task_poll_interval: Specify the task poll interval that is used to set up the vSphere API session.
      • ibm-openstack.vmware-driver.compute.services: Specify the compute service list to be set up. For example, ["compute0", "compute1"]. You must configure specific options for each service. The following examples are for "compute0".
      • ibm-openstack.vmware-driver.compute.compute0.compute_type: Specify the type of the compute resource to be managed. The valid values are "cluster", "cluster_resource_pool", "host_resource_pool", and "esxi".
        Table 1. Compute type options

        Type: cluster
        Mandatory options:
          • ibm-openstack.vmware-driver.compute.compute0.cluster_name: The cluster name to be managed (for example, ['cluster01']).

        Type: cluster_resource_pool
        Mandatory options:
          • ibm-openstack.vmware-driver.compute.compute0.cluster_name: The cluster name to be managed (for example, ['cluster01']).
          • ibm-openstack.vmware-driver.compute.compute0.resource_pool: The cluster resource pool path to be managed (for example, 'cluster01:res1').

        Type: host_resource_pool
        Mandatory options:
          • ibm-openstack.vmware-driver.compute.compute0.resource_pool: The host resource pool path to be managed (for example, 'x.x.x.x:res1', where x.x.x.x is the IP address of the host).

        Type: esxi
        Mandatory options:
          • ibm-openstack.vmware-driver.compute.compute0.esx_host_name: The VMware ESX host to be managed (for example, x.x.x.x, where x.x.x.x is the IP address of the host).
      • ibm-openstack.vmware-driver.compute.compute0.compute_monitors: Specify the compute monitors of the vCenter server. The default value is ['VMwareCPUMonitor'].

        There are additional common options that apply to all four compute types. The optional options use the default values that are provided in the environment file, unless you specifically change them. Other options are mandatory and you must configure them. The following list shows only the mandatory parameters in this section; it is not an all-inclusive list. Optional parameters and their default values are in the environment file.

      • ibm-openstack.vmware-driver.compute.compute0.datastore_regex: Specify the regular expression pattern that is used to search data stores. For example, "datastore*".
        Note: The data store names on the vSphere side can only contain numbers, lowercase and uppercase letters, white spaces, an underscore ('_'), or a hyphen ('-').
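      Taken together, a minimal sketch of the VMware driver section of your-environment-name.json might look like the following, for a managed cluster. The IP address, user name, and cluster name are placeholder values, and the nesting follows the dotted attribute names; check the example environment file for the exact structure and defaults:

      ```json
      {
        "ibm-openstack": {
          "vmware-driver": {
            "vcenter_connection": {
              "host_ip": "192.0.2.10",
              "host_username": "administrator@vsphere.local",
              "wsdl_location": "",
              "api_retry_count": 10,
              "task_poll_interval": 5.0
            },
            "compute": {
              "services": ["compute0"],
              "compute0": {
                "compute_type": "cluster",
                "cluster_name": ["cluster01"],
                "compute_monitors": ["VMwareCPUMonitor"],
                "datastore_regex": "datastore*"
              }
            }
          }
        }
      }
      ```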
    2. Discovery service configuration:
      At least one of the following three options must be configured.
      • ibm-openstack.vmware-driver.discovery.common.clusters: Specify clusters to discover from. The default value is [].
      • ibm-openstack.vmware-driver.discovery.common.host_resource_pools: Specify the host resource pool to discover from. The default value is [].
      • ibm-openstack.vmware-driver.discovery.common.cluster_resource_pools: Specify the cluster resource pool to discover from. The default value is [].
      The following option is also mandatory.
      • ibm-openstack.vmware-driver.discovery.network.physical_network_mappings: Specify the physical network mappings that are used for port group discovery. The value format is "physnet0:vSwitch0, physnet1:vSwitch1,...". The default value is 'physnet1:vSwitch0'. In this example, 'physnet1' is the physical network that discovered networks are created on; the value must match the ML2 configurations that are mentioned in section 4e. 'vSwitch0' is the name of the vSphere virtual switch to be discovered. Change the values based on your environment.
      If OpenStack was configured to use increased security, configure the following attributes:
      • ibm-openstack.vmware-driver.discovery.auth.http_insecure: Set to false to verify the client CA certificate when an HTTPS connection is used. If you set this value to true, the certificate is not verified.
      • ibm-openstack.vmware-driver.discovery.auth.connection_cacert: Required when http_insecure is false. Specify the certificate authority (CA) certificate file that is used for increased security. This value must be a valid file path.
      For information on how to generate the CA file, see Customizing for a more secure cloud.
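      A sketch of the discovery section with one cluster configured and increased security enabled might look like the following. The cluster name and CA file path are placeholder values for illustration:

      ```json
      {
        "ibm-openstack": {
          "vmware-driver": {
            "discovery": {
              "common": {
                "clusters": ["cluster01"]
              },
              "network": {
                "physical_network_mappings": "physnet1:vSwitch0"
              },
              "auth": {
                "http_insecure": false,
                "connection_cacert": "/etc/pki/tls/certs/vcenter-cacert.pem"
              }
            }
          }
        }
      }
      ```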
    3. Distributed virtual switch (DVS) with Neutron ML2 driver configuration:
      • ibm-openstack.vmware-driver.network.use_dvs: Specify whether to configure the DVS mechanism driver in this deployment. The default value is true. If the value is false, the remaining attributes that are listed in this section are not needed.
      • ibm-openstack.vmware-driver.network.network_maps: Specify the physical network maps that are used for creating the distributed virtual port group. The default value is 'physnet1:dvSwitch'. 'physnet1' is the physical network that matches the ML2 network_vlan_ranges or flat_networks values in section 4e. 'dvSwitch' is the name of the distributed virtual switch that you want to create the network on.
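      As a sketch, the DVS section of the environment file with the default values spelled out might look like this; 'physnet1' and 'dvSwitch' should be replaced with the names from your environment:

      ```json
      {
        "ibm-openstack": {
          "vmware-driver": {
            "network": {
              "use_dvs": true,
              "network_maps": "physnet1:dvSwitch"
            }
          }
        }
      }
      ```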
    4. Common ML2 configurations are shown below:
      • openstack.network.ml2.flat_networks: Specify the physical networks that the flat networks will be created on. The value should be like 'physnet1', which is the same as the physical network in "ibm-openstack.vmware-driver.network.network_maps", and "ibm-openstack.vmware-driver.discovery.network.physical_network_mappings".
      • openstack.network.ml2.network_vlan_ranges: Specify the physical networks and VLAN ranges of the VLAN networks. The value should be like "physnet2:1000:2999", where "physnet2" is the same as the physical network in "ibm-openstack.vmware-driver.network.network_maps" and "ibm-openstack.vmware-driver.discovery.network.physical_network_mappings", and "1000:2999" is the VLAN range.
      • openstack.network.ml2.tenant_network_types: Specify the tenant networks. The value should be like "vlan,flat", because the DVS ML2 driver only supports these two types.
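      Using the example values above, the ML2 section might look like the following sketch. The physical network names and VLAN range are placeholders that must match your network_maps and physical_network_mappings values:

      ```json
      {
        "openstack": {
          "network": {
            "ml2": {
              "flat_networks": "physnet1",
              "network_vlan_ranges": "physnet2:1000:2999",
              "tenant_network_types": "vlan,flat"
            }
          }
        }
      }
      ```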
    5. Recommended network configuration:
      If the management network interface of the nodes is not eth0, or the virtual machine data network interface is not eth1, update all occurrences of eth0 and eth1 in the environment file to match your network configuration. The following list displays some of the networking properties and their default values (from the example environment) that you might need to change. In most cases, these default values are sufficient and do not need to be changed:
      openstack.network.openvswitch.bridge_mapping_interface:"br-eth1:eth1"
          
      The bridge_mapping_interface property is used to control the creation of the data network OVS bridge on the nodes. If the node system does not have a second network interface card that can be used as a data network, then set to "" or remove this value. Do not set to the same value as the management network. Also, do not set to a network interface card that provides an alternative management network or an external network for the node, for example, a private or public IP address. A data network is required to use VLAN or flat networks in your cloud:
      openstack.network.openvswitch.bridge_mappings:"default:br-eth1"
          

      The bridge_mappings property controls which OVS bridge is used for flat and VLAN network traffic from the node. If this OVS bridge does not exist, the Open vSwitch agent does not start. This bridge can be automatically created by setting the bridge_mapping_interface property. If the node system does not have a second network interface card that can be used as a data network, then set to "" or remove this value.
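      The two properties change together. For example, if your data network interface were eth2 instead of eth1 (a hypothetical layout), the pair might look like this sketch:

      ```json
      {
        "openstack": {
          "network": {
            "openvswitch": {
              "bridge_mapping_interface": "br-eth2:eth2",
              "bridge_mappings": "default:br-eth2"
            }
          }
        }
      }
      ```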

  5. Copy the following example topology to a file, your-topology-name.json. Change your-topology-name to the name for your topology.
    Here is an example topology with VMware nodes.
    {
      "name":"CHANGEME",
      "description":"CHANGEME",
      "environment":"CHANGEME",
      "secret_file":"CHANGEME",
      "run_sequentially":false,
      "nodes": [
        {
          "fqdn":"CHANGEME",
          "password":"CHANGEME",
          "identity_file":"CHANGEME",
          "quit_on_error":true,
          "run_order_number":1,
          "runlist": [
            "role[ibm-os-single-controller-vmware-driver]"
          ]
        }
      ]
    }
  6. Customize the topology file.
    Change the following JSON attributes in your topology file, your-topology-name.json:
    • Name: Set to your topology name: your-topology-name.
    • Description: Set to the description for your topology.
    • Environment: Set to the environment for your topology: your-environment-name.
    • nodes.fqdn: For each node, set to the fully qualified domain name of the node system. The deployment system must be able to connect by SSH using this value. You can also set it to the public IP address, private IP address, or host name. It is recommended that the value corresponds to the management network interface for the node.
    • nodes.password or nodes.identity_file: For each node, set to the appropriate SSH root user authentication for the node system. Either a password or an SSH identity file can be used for authentication. Remove the unused attribute for each node.
  7. Customize the passwords and secrets before deploying.
    You must change the password of the vCenter connection, which is stored in secrets/openstack_vmware_secret_name. For more information, see Customizing passwords and secrets.
  8. Upload the environment file for your deployment.
    $ knife environment from file your-environment-name.json
  9. Deploy the topology.
    $ knife os manage deploy topology your-topology-name.json 

What to do next

To check the detailed status of the services you deployed, run the $ knife os manage services status --topology-file your-topology-name.json command.

After the deployment is complete, your IBM Cloud Manager with OpenStack services are ready to use, and several Nova compute services with a VMware hypervisor are running.