Administering cloud groups

Use this menu to administer cloud group resources.

About this task

Cloud groups are logical groupings of compute nodes and IP groups. You can create and administer three types of cloud groups in the system:
  • Dedicated
  • Average
  • Virtual manager
The virtual manager cloud group type is not used for deployments. The compute nodes in these cloud groups are reserved for use by external applications that are not managed by the system. For details about configuring external application access with virtual manager cloud groups, see Configuring external application access. The cloud group high availability option does not apply to this type of cloud group.

Dedicated and average cloud group types define how resources are allocated to a virtual machine during deployment.

Dedicated cloud groups are used for workloads that require an entire CPU core to be dedicated to each virtual CPU in the virtual machine. This configuration is useful for CPU-intensive workloads, and it ensures that the workload's performance is not affected if there is contention for CPU.

Average cloud groups allow the CPU resources in a cloud group to be over-committed. You can select the over-commit ratio that you want to use (the default is 8 vCPUs per physical core), thus allowing a larger number of workloads to run on a compute node. However, because the CPU resources are over-committed, you might experience contention for CPU.
Table 1. Defined cloud groups

Type        Physical cores per compute node   vCPUs per compute node   Maximum vCPUs per virtual machine   Memory
Dedicated   16                                16                       16                                  No overcommit
Average     16                                128                      32                                  No overcommit
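
As a rough illustration of how the over-commit ratio determines the vCPU capacity shown in Table 1, consider the following sketch. The helper name and constant are illustrative only and are not part of the system:

    # Illustrative arithmetic only; this helper is not part of the system's API.

    CORES_PER_COMPUTE_NODE = 16  # physical cores per compute node (Table 1)

    def vcpu_capacity(physical_cores: int, overcommit_ratio: int) -> int:
        """vCPUs a compute node can host at a given over-commit ratio."""
        return physical_cores * overcommit_ratio

    print(vcpu_capacity(CORES_PER_COMPUTE_NODE, 1))  # 16  (dedicated: 1 vCPU per core)
    print(vcpu_capacity(CORES_PER_COMPUTE_NODE, 8))  # 128 (average: default 8 vCPUs per core)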
The following table shows how a virtual machine's CPU and memory reservations are mapped to hardware resources:
Note: VMware CPU overhead is amortized over each physical CPU. A 10% overhead is reserved on each pCPU for ESX, leaving 0.9 of a core per vCPU for a dedicated cloud group and 0.1125 (0.9 / 8) per vCPU for an average cloud group.
Table 2. Mapping of CPU and memory reservations to hardware resources

Type        CPU reserved per vCPU   Physical memory per virtual MB
Dedicated   0.9 pCPU                1 MB
Average     0.1125 pCPU             1 MB
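
The reservation values in Table 2 follow directly from the 10% ESX overhead and the over-commit ratio. A minimal sketch of that arithmetic, using illustrative names that are not a system API:

    # Illustrative arithmetic only; not part of the system's API.

    ESX_OVERHEAD = 0.10  # 10% of each pCPU is reserved for ESX

    def pcpu_reserved_per_vcpu(overcommit_ratio: int) -> float:
        """Physical CPU reserved for each vCPU after the ESX overhead."""
        usable = 1.0 - ESX_OVERHEAD      # 0.9 of each core remains usable
        return usable / overcommit_ratio

    print(pcpu_reserved_per_vcpu(1))  # 0.9    (dedicated)
    print(pcpu_reserved_per_vcpu(8))  # 0.1125 (average, default ratio of 8)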

Optionally, a cloud group can be set to reserve resources for high availability. This option reserves resources (CPU and memory) within the cloud group equivalent to one compute node. The reserved capacity in a cloud group containing N compute nodes is 1 / N of the resources (CPU and memory) on each compute node.
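
For example, the 1 / N reservation can be sketched as follows (the function name is illustrative, not a system API). In a cloud group of four compute nodes, 25% of each node is reserved, which totals one node's worth of capacity:

    # Illustrative arithmetic only; not part of the system's API.

    def ha_reserved_fraction(compute_nodes: int) -> float:
        """Fraction of each compute node's CPU and memory reserved for HA.

        Reserving 1/N on each of N nodes sets aside capacity equal to one
        whole compute node across the cloud group.
        """
        return 1.0 / compute_nodes

    print(ha_reserved_fraction(4))  # 0.25 -> 4 * 0.25 = capacity of one node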

You might not want certain workloads deployed in the cloud group to be highly available. When Reserve resources for availability is set to None, workloads deployed in the cloud group are not highly available. For example, if the cloud group is dedicated to development or test, high availability for the workloads might not be required. Conversely, if you have a cloud group that contains production virtual machines, you might want the workloads in the cloud group to be highly available. In this case, be sure to set the Reserve resources for availability parameter to Cloud group or System.

Cloud group high availability options are enhanced by the system-level high availability option. System-level high availability designates one or more compute nodes as spares for the system, and compute nodes designated in this way cannot be added to a cloud group by a user.

Consider the following factors when you decide to implement high availability at the system level versus the cloud group level:
  • System-level high availability maintains high availability for a cloud group as long as there are one or more spare compute nodes for the system. Cloud group high availability takes a best-effort approach, and there are times when high availability becomes inactive.
  • To maintain high availability, system-level high availability requires the same or fewer compute resources than cloud group high availability. System-level high availability requires one spare compute node to maintain high availability for the entire system, whereas cloud group high availability requires one compute node per cloud group to maintain high availability for that cloud group.
  • System-level high availability maintains high availability while applying maintenance to a cloud group, as long as there are two or more spare compute nodes.
  • System-level high availability provides superior protection against failure. Based on your tolerance for failure, you can designate more spare compute nodes: a single spare compute node is all that is required to handle a single compute node failure, and two or three spare compute nodes can handle two or three compute node failures. With cloud group high availability, if failures occur, you must carefully monitor and control the virtual machines that are being deployed.
  • High availability at the cloud group level has a lower Recovery Time Objective (RTO) than system-level high availability. System-level high availability must add the spare compute node to the cloud group where the hardware failure occurred before it can recover the virtual machines on the failed compute node. This takes additional time that cloud group high availability does not require.
For information on system-level high availability for cloud groups, see the Related tasks section.