KVM for IBM z Systems prerequisites

View the prerequisites for using IBM® Cloud Manager with OpenStack with KVM for IBM z Systems®.

IBM Cloud Manager with OpenStack supports KVM for IBM z Systems compute nodes. KVM for IBM z Systems compute nodes must run in a z Systems logical partition. Running KVM for IBM z Systems compute nodes in a z/VM® guest or in a KVM for IBM z Systems guest is not supported.

The KVM for IBM z Systems compute node must satisfy the following requirements:
  • Operating system: KVM for IBM z Systems version 1.1
  • Hardware: zEC12/zBC12 or higher

For more information about KVM for IBM z Systems, see http://www.ibm.com/systems/z/solutions/virtualization/kvm/ for general information and http://www.ibm.com/support/knowledgecenter/SSNW54_1.1.0 for documentation.

Supported guest operating systems

The following operating systems are supported as guests of this hypervisor:
Table 1. Supported guest operating systems for KVM for IBM z Systems
Operating System                Version
SUSE Linux Enterprise Server    12 SP1

Considerations for Fibre Channel attached storage

When using FCP-attached storage with KVM for IBM z Systems, the system and the operating system need to be prepared as follows.
Note: The term Fibre Channel refers to the lower-level protocol on top of which the higher-level protocols FCP (for SCSI SANs) and FICON® (for ECKD storage) are used. The names of the higher-level protocols are used whenever the distinction is important.
  • An FCP device must be configured for each z Systems logical partition that is running a compute node or block storage node. An FCP device represents a virtual FCP host bus adapter (vHBA) and is backed by an FCP channel in the z Systems I/O configuration. The FCP channels that are used for logical partitions running compute or block storage nodes must be in NPIV mode. An FCP channel represents a physical port on the FCP adapter. FCP channels can be shared between logical partitions.
    Note: Enterprise environments typically use multipathing for FCP. For more information about setting up multipathing, see the section about preparing FC-attached SCSI disks in KVM Virtual Server Management, SC34-2752-00.
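    As a minimal check, you can verify whether NPIV is in effect by inspecting the port type that the vHBA reports in sysfs. The host number host0 below is only an illustration and depends on your configuration; an NPIV-enabled FCP device reports an NPIV virtual port:
    cat /sys/class/fc_host/host0/port_type
    NPIV VPORT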
  • Configure the vHBA to be automatically online when the operating system is booted:
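    One way to do this, assuming the chzdev command from s390-tools is available on your system, is to enable the FCP device in both the active and the persistent configuration. The device bus ID 0.0.1940 is a hypothetical example; replace it with the bus ID of your FCP device:
    chzdev zfcp-host 0.0.1940 -e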

    For more information, see the section about persistent configuration in KVM Virtual Server Management, SC34-2752-00.

  • KVM for IBM z Systems can be configured to perform FCP port rescanning and LUN scanning in an automated fashion. Turning on automated port rescanning can impact the performance of the system in certain situations by causing additional traffic in the storage network. Complete the following steps to optimize performance:
    • Disable automated port rescanning if the SAN does not implement single initiator zoning.
    • Disable automated LUN scanning if the SAN does not implement single initiator zoning and LUN masking.
    You can check the current settings for automatic LUN scanning and port rescanning on the current operating system by using the following commands. The output shown reflects the recommended settings:
    cat /sys/module/zfcp/parameters/allow_lun_scan
    N
    cat /sys/module/zfcp/parameters/no_auto_port_rescan
    Y

    You can change these settings permanently for the current operating system by changing the zfcp.allow_lun_scan and zfcp.no_auto_port_rescan parameters in /etc/zipl.conf. Then run the zipl command to activate the changed settings.
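    For example, assuming an existing parameters line in /etc/zipl.conf, the zfcp settings can be appended as shown below; the ellipsis stands for the boot parameters that are already present in your configuration:
    parameters = "... zfcp.allow_lun_scan=0 zfcp.no_auto_port_rescan=1"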

    Note: When automated port rescanning is disabled and the storage array is connected to the z Systems machine after the KVM for IBM z Systems operating system has been started, you must trigger a manual port rescan by using the following command:
    echo 1 > /sys/bus/ccw/drivers/zfcp/fcp-device-bus-id/port_rescan
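    For example, assuming an FCP device with bus ID 0.0.1940 (a hypothetical value; use the bus ID of your own FCP device):
    echo 1 > /sys/bus/ccw/drivers/zfcp/0.0.1940/port_rescan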