Deploying prerequisites

Before deploying an IBM® Cloud Manager with OpenStack topology, you need to complete the prerequisites.

Procedure

  1. Install the latest fix packs for IBM Cloud Manager with OpenStack before you proceed.
    For information about fix packs, see Applying fixes and updates.
  2. Identify the node systems for the topology.
    Validate and collect basic information about a node system. Change node-fqdn to the fully qualified domain name of the node system. The deployment system must be able to SSH to the node by using the fully qualified domain name. Alternatively, you can set node-fqdn to the public IP address, private IP address, or host name of the node. You are prompted for the SSH root user password for the node. To use an SSH identity file instead of a password, specify the -i node-ssh-identity option, where node-ssh-identity is the SSH identity file for the node.
    knife os manage validate node node-fqdn --node-type node-type
    where node-type can be the following values:
    • controller - to run validation on a controller node
    • kvm - to run validation on a KVM compute node (all KVM platforms)
    • db2 - to run validation on a DB2® database node
    • any - to run a basic validation on any type of node, for example, on a block storage node. This is the default if the --node-type option is not specified.

    The command performs basic validation on the node system and displays the results to the terminal. The command also stores the results in a JSON file, validate_node_node-fqdn.json, and creates a cloud configuration YAML snippet file, node_node-fqdn.yml, in the current directory. The YAML snippet file contains network information that is collected for the node, including the recommended management network to use for the node. The file does not contain a recommended data network for the node, even if a data network exists. In addition, the file might not identify networks that have unique configurations. The node’s network information is used when you deploy your cloud environment.

    1. Verify that the time on the node systems is within 15 minutes of the time that is shown on the deployment server.
      Consider synchronizing the system clock of the deployment server and node systems with a network time protocol (NTP) server.
      Note: You can configure the deployment server as the network time protocol (NTP) server. When you follow the deployment process, customize the topology to use the NTP server that you configured. Look for the customization step.
    2. Record the IP addresses for each node.
    3. Record the fully qualified domain names for each node.
    4. Record the root user login information (either password or SSH identity file) for each node.
    5. Record the number and name of each network interface card on each node.
      • Management network: Defaults to eth0. It is used for OpenStack communication between nodes.
      • Virtual machine data network: Defaults to eth1 (optional). It is used for virtual machine data communication within the cloud environment and is required only if you are using VLAN or flat networks. Do not use a management or external network as the virtual machine data network.
      • External (L3) network: Defaults to eth0. It can be shared with the management network, which is the default configuration.
      If the deployment nodes do not have an eth0 or eth1 interface, or if the management, external, and virtual machine data networks are on interfaces other than eth0 and eth1, you must change the default environment settings when you configure the deployment.
      Note:
      • The network interface cards must not connect to the same network at the same time. For example, eth0 and eth1 must not connect to Network A at the same time.
      • The example environment assumes that the network configuration is identical across all of the deployment nodes.
      • The usable network interfaces on a node can be found by logging in to the node and running the 'ip address' command.
    6. The network configuration and hypervisor type limit the type of networks that can be defined. For more information about network considerations, see Network considerations.
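    The 15-minute clock check in step 2.a can be sketched as a small shell helper. The helper function and the commented-out ssh call are illustrative assumptions, not product commands; only the 15-minute (900-second) limit comes from the procedure above.

    ```shell
    #!/bin/sh
    # Sketch: verify that a node's clock is within 15 minutes (900 seconds)
    # of the deployment server. node-fqdn is a placeholder, as in the
    # validation step earlier in this procedure.

    # Return 0 when two epoch timestamps differ by no more than $3 seconds.
    within_skew() {
      local_ts=$1; node_ts=$2; max=$3
      diff=$((local_ts - node_ts))
      [ "${diff#-}" -le "$max" ]
    }

    # To gather real values (assumes root SSH access to the node):
    #   local_ts=$(date +%s)
    #   node_ts=$(ssh root@node-fqdn date +%s)
    # Example with fixed timestamps 5 minutes apart:
    if within_skew 1700000300 1700000000 900; then
      echo "clock skew OK"
    else
      echo "clock skew exceeds 15 minutes; synchronize with NTP"
    fi
    ```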
  3. Verify that the OpenStack controller node system meets the controller node prerequisites.
  4. If applicable, verify that the PowerVC environment that you want to manage meets the IBM Power Virtualization Center prerequisites.
  5. If applicable, verify that the KVM or QEMU on x86_64 compute node system meets the following criteria:
    • See KVM or QEMU prerequisites for details.
    • To use the KVM hypervisor type on a node system, the node must support KVM acceleration. If the node system does not support KVM acceleration, then you must use the QEMU hypervisor type. To use the QEMU hypervisor type, set the openstack.compute.libvirt.virt_type attribute to qemu in the default_attributes section of your environment when you deploy your cloud environment. The Minimal topology uses the QEMU hypervisor type by default. Note that the QEMU hypervisor type is not recommended for a production deployment. For details, see the OpenStack documentation.
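    For example, assuming a standard Chef JSON environment file, the attribute path named above might be set in the default_attributes section as follows. The environment name is a placeholder and the snippet is not a complete environment file:

    ```json
    {
      "name": "my-cloud-environment",
      "default_attributes": {
        "openstack": {
          "compute": {
            "libvirt": {
              "virt_type": "qemu"
            }
          }
        }
      }
    }
    ```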
  6. If applicable, verify the KVM for z Systems® compute node system meets the KVM for IBM z Systems prerequisites.
  7. If applicable, verify that the z/VM® compute node system meets the following criteria:
    • See z/VM prerequisites for details.
    • To use the z/VM hypervisor, use one or more x86_64 Red Hat Enterprise Linux nodes to install the compute and network drivers that manage the z/VM hypervisor. One x86_64 Red Hat Enterprise Linux node is supported for each z/VM node. For more information about configuring the z/VM hypervisor, see the Enabling z/VM for OpenStack user manual.
  8. If applicable, verify that the PowerKVM compute node system meets the following criteria:
    • See PowerKVM prerequisites for details.
  9. If applicable, verify that the Hyper-V compute node system meets the Hyper-V prerequisites.
  10. Linux node systems must have access to a yum repository that contains base operating system packages for your node systems.
    Many of the OpenStack components depend on operating system packages that are installed automatically on the node system during the deployment process. To determine whether a node has access to a yum repository, run the yum list libvirt command on the node. If the command fails, a valid yum repository does not exist on the node. If you do not have a yum repository configured on the node system, you can configure the Deployment Server to create the yum repositories on the nodes automatically when OpenStack is deployed. For configuration information, see Configuring operating system yum repositories for nodes using Red Hat Enterprise Linux.
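    The repository check in the step above can be wrapped in a small shell helper. This is a sketch under the assumption that the node uses yum as described; the function name has_base_repo is illustrative, not part of the product.

    ```shell
    #!/bin/sh
    # Sketch: step 10's check. `yum list libvirt` succeeds only when a
    # configured repository provides the libvirt package that the
    # deployment depends on.
    has_base_repo() {
      if yum list libvirt >/dev/null 2>&1; then
        echo "yum repository OK"
      else
        echo "no valid yum repository on this node; configure one before deploying"
      fi
    }

    has_base_repo
    ```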
  11. Consider this restriction before you deploy IBM Cloud Manager with OpenStack, in case you must undo a deployment.
    • Uninstalling a deployed IBM Cloud Manager with OpenStack topology is not supported. You must reinstall or reset the node to its pre-deployment state. Do not attempt to redeploy to the same managed system without first resetting the node to its pre-deployment state. Back up your node system before deployment by using existing snapshot or capture capabilities that are provided by the underlying virtualization manager, or by other backup methods.
    • The node must also be deleted from the chef server. For more information, see Cleaning up a node for redeployment.
  12. To ensure that the IP address moves from the interface to the corresponding OVS bridge, verify that the following preconditions are met.
    1. Each interface must have an ifcfg-ethX file in the /etc/sysconfig/network-scripts/ directory whose file name and DEVICE name strictly match the interface name that you specify later in the environment file.
    2. If BOOTPROTO is static, the ifcfg-ethX file must contain the IPADDR and DEVICE attributes, and either PREFIX or NETMASK.
    3. The controller node and compute node have a default gateway. Ensure that the gateway is valid and that you can ping the gateway. If you do not have a valid gateway in your environment, see Customizing IP movement attributes for more information.
    4. Before you deploy, make sure that your network service can restart successfully.
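    As an illustration of these preconditions, the following sketch writes a hypothetical static ifcfg-eth0 file and checks it for the required attributes. The device name, address, and temporary-file handling are illustrative assumptions, not product commands.

    ```shell
    #!/bin/sh
    # Sketch: validate that an ifcfg file carries the attributes that
    # step 12 requires: IPADDR, DEVICE, and PREFIX or NETMASK when
    # BOOTPROTO is static. Values below are examples only.
    cfg=$(mktemp)
    cat > "$cfg" <<'EOF'
    DEVICE=eth0
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.0.2.10
    PREFIX=24
    EOF

    check_ifcfg() {
      f=$1
      if grep -q 'BOOTPROTO=static' "$f"; then
        grep -q 'IPADDR=' "$f"  || { echo "missing IPADDR"; return 1; }
        grep -q 'DEVICE=' "$f"  || { echo "missing DEVICE"; return 1; }
        grep -Eq '(PREFIX|NETMASK)=' "$f" || { echo "missing PREFIX or NETMASK"; return 1; }
      fi
      echo "ifcfg OK"
    }

    check_ifcfg "$cfg"   # prints "ifcfg OK" for the example file
    rm -f "$cfg"
    ```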
  13. If the controller node has several CPUs, deploying the topology might fail because of excessive database connections.
    The troubleshooting topics that are related to deploying topologies can help you identify and correct this problem, if it is relevant for your environment. A correct configuration for this problem depends on several factors, including the database engine, the number of CPUs, physical memory, and swap space.
  14. Consider whether to encrypt passwords during the deployment process.
    When you enter passwords into files during the deployment process (such as in the cloud YAML, passwords JSON, or topology JSON files), you can use a command to encrypt the password. The encryption avoids having clear text passwords in those files.
    knife os manage encrypt password [PASSWORD] 

    The command takes a clear text password and returns an encrypted password that can be used by the other IBM Cloud Manager with OpenStack commands when processing a deployment.