Published on 01-Dec-2010
Validated on 03 Dec 2013
Banking IT Service Center
Computer Services, Banking
Business Integration, Business Resiliency, Enabling Business Flexibility, High Availability, Information Infrastructure, Optimizing IT, Virtualization, Virtualization - Server
IBM Business Partner:
c.a.r.u.s. IT GmbH Hannover
About this paper
This technical paper describes the server architecture and virtualization configuration chosen by the shared IT service center of a major German banking group to support a wide range of SAP environments for customers in different business units across the group. The paper explores the history and current state of the IT infrastructure, and also explains how the shared service center deals with operational considerations such as maintaining high availability, providing data protection and disaster recovery, simplifying maintenance, monitoring performance, planning capacity, and ensuring appropriate charge-back for the services provided.
Customer objectives included:
- Create a reliable, scalable, high-performance infrastructure capable of hosting SAP environments for the entire group
- Boost performance for specific SAP processes within SAP Bank Analyzer and SAP NetWeaver Business Warehouse
- Protect data and ensure rapid disaster recovery within customer-specified service-level agreements
- Simplify maintenance and ensure high availability even during planned maintenance windows
- Monitor performance and utilization of systems to ensure efficient operations and enable capacity planning
- Simplify accounting processes and enable accurate charge-back for IT services.
The solution included:
- Two IBM Power 595 servers, each with:
- 32 active IBM POWER6 processors running at 5GHz
- 32 additional processors that can be activated on demand
- 2 TB total physical memory, of which 1,100 GB is active
- Two IBM Power 570 servers, each with:
- 16 active IBM POWER6 processors
- 384 GB active physical memory
- IBM AIX 6.1 to run the SAP applications and Oracle databases
- IBM PowerVM including IBM Virtual I/O Server 2.1 for management of virtual servers
- IBM AIX Logical Volume Manager for data mirroring between two data center locations.
Customer benefits included:
- Reduced processing time for critical batch jobs such as SAP Bank Analyzer by 50 percent
- Improved I/O throughput rates for individual LPARs up to 80,000 IOPS
- Increased bandwidth for SAP NetWeaver Business Warehouse InfoCube processing to up to 1,600 MB/s
- Reduced total cost of ownership – four separate independent studies confirmed that the Power architecture would be more cost-effective than comparable Linux/x86-based architectures
- Reduced administration workload with Virtual I/O Server, Shared Ethernet and vSCSI technologies
- Usage of Live Partition Mobility to enable maintenance of hardware with continuous availability of the application and to optimize workload distribution
- Increased business resiliency: in case of disaster at one site, systems can be rapidly transferred to standby LPARs running on servers at the other data center
- Improved monitoring of hardware and software components with SAP Solution Manager
- Enhanced management of performance, capacity and availability, making it easier to meet service level requirements
- Enabled accurate charge-back by using monitoring data to assess the level of service delivered to internal clients.
- Outstanding CPU over-commitment ratios of up to 3.9.
Virtualization has arrived in many SAP IT landscapes and delivers tangible benefits in flexibility and cost reduction. In this specific case, IBM PowerVM is used to an unusually complete degree, making the implementation a leading example for others.
In December 2005 the experiences with a fully virtualized SAP system landscape on AIX were documented in a whitepaper called ‘SAP systems in a customer datacenter on a virtualized IBM POWER5 environment’. This technical paper is the second edition of this document, describing the major changes and developments that have taken place since December 2005. The IBM customer involved is a large organization in the banking sector in Germany. The company prefers to stay anonymous and will be referred to as ‘Banking IT Service Center’.
The scale of the landscape can be summarized as follows:
- Overall approximately 35,000 named SAP users
- Monthly transaction volume across all productive systems: approximately 10,000,000 dialog steps, 8,355,000 RFC calls, and 5,055,000 batch jobs
- Overall capacity of approximately 500,000 SAPS
- Four POWER6 systems with almost 100 cores active and 2.8 TB of main memory
- 100 AIX LPARs
- 120 TB (gross) of managed disk space
- 870 TB of tape space
IT services are delivered to internal users, subsidiaries and customers under service level agreements. The Banking IT Service Center team covers SAP Basis administration, AIX, server management, and database administration with approximately 10 full-time employees (FTEs) for the whole SAP landscape.
SAP Business Applications
The ‘Banking IT Service Center’ (BISC) generally runs a classic SAP ERP system landscape setup, including systems for development (DEV), quality assurance (QAS) and production (PRD). Approximately 90 to 100 SAP systems are managed in total.
The Banking IT Service Center prefers to run two-tier client/server setups (database and central instance in one operating system image) throughout its landscape, which reduces the number of system images to administer and the communication overhead between SAP applications and databases. Tables 1 and 2 show the SAP application components, the system ID, the SAP release and the role of each system.
Table 3 shows the evolution of the server infrastructure. At the start of 2010 the POWER5 systems were completely replaced, and the large POWER6 systems were scaled up to double capacity. Two data center locations in a campus setup, 700 m apart, host the virtualized server and storage hardware.
Evolution of the company’s requirements
The company traditionally seeks ways to further standardize and consolidate its IT infrastructure. This involves introducing new technologies and architectures, bringing challenges that call for further investment in POWER technology and the expansion of the server infrastructure with high-end systems.
With growing database sizes and higher demand for fast backup and recovery (service level agreements are in place to point-in-time recover a database within four hours) the company needs an alternative to LAN-based backup solutions. Using IBM PowerVM and virtual networks (high speed communication between LPARs within a physical server system) a solution based on Tivoli Storage Manager (TSM) has been implemented that provides a centralized and consolidated LAN-free data flow for backup and recovery purposes.
For SAP Bank Analyzer there was a standing demand to shorten the elapsed time of selected batch jobs. In 2007, the requirement was to halve the processing time. Testing was carried out to validate whether this could be achieved with a scale-out approach using Oracle RAC in a 3-tier SAP architecture on x86/Linux. At the time, the x86/Linux solution with RAC did not deliver a sufficient performance gain to halve the processing time.
Towards the end of 2007, the decision was made to continue with the scale-up approach for SAP Bank Analyzer, and two IBM Power 570 systems (with the fastest processor speed available at the time, 4.7 GHz) were purchased and implemented. On one of these servers, 14 out of 16 CPUs were exclusively dedicated to the SAP Bank Analyzer production environment.
In 2009 the company initiated a project exploring ways to maintain performance in the Bank Analyzer system while introducing additional application modules, and further improve the run duration for existing workloads. One of the key questions faced was whether the scale-up approach would be sufficient to handle this kind of workload.
The specific performance demands pushed the POWER5 and POWER5+ processors to their capacity limits. In mid-2009, the SAP Bank Analyzer and SAP NetWeaver Business Warehouse production environments were running on “dedicated” server systems, which meant inefficient virtualization, higher costs, performance constraints in the event of hardware failure, and restrictions on optimal workload distribution.
Besides the capacity-related problems, it was also necessary to improve the per-core performance for single-threaded operations in some systems (mainly Business Warehouse), and to significantly improve the disk I/O performance for both Bank Analyzer and Business Warehouse.
All these issues could be solved by introducing the two IBM Power 595 high-end server systems and starting a consolidation project to replace the POWER5 based servers.
To support SAP Bank Analyzer, the company found that combining the scale-up approach with a balanced improvement of the I/O infrastructure could fulfill the near-linear scaling requirements of this application. As a result, the computing performance of the production system has been doubled to 26 virtual CPUs (vCPUs) and 320 GB RAM. On the I/O side, the improved configuration delivered throughput rates for an individual LPAR of up to 80,000 IOPS and bandwidth (Business Warehouse InfoCube processing) of up to 1,600 MB/s.
Temporary x86/Linux Based SAP Landscapes
Like most other IT Service Centers, BISC evaluated x86/Linux based IT infrastructure solutions to take advantage of the expected or promised lower acquisition costs of commodity hardware. In 2005, the internal SAP Business Warehouse was migrated from POWER/AIX to Linux, and in 2008 a customer’s SAP CRM Landscape was set up on x86/Linux.
Mainly because of CPU-based license costs (for the Oracle database, IBM Tivoli Storage Manager and related software), the cost benefits of using low-priced x86 hardware were not as high as expected. This was confirmed by four separate studies carried out by different companies. The results showed that POWER-based landscapes deliver better TCO in the long term, supporting BISC in its platform decision.
On the other hand, there are many advantages in using a homogenous IT infrastructure architecture based on the IBM POWER Architecture with PowerVM and AIX. For the customer with the SAP CRM landscape on x86/Linux, it was the POWER platform’s flexibility and scalability that led to the decision to migrate the CRM landscape to the IBM Power 595 server, running IBM AIX.
Software Release Combination
BISC is using the following software release combination:
- Operating system: AIX 6.1
- VIO Server: release 2.1
- Database: Oracle 10.2.0.4
PowerVM Setup
Based on the experience gained since the implementation of the IBM Power 595 servers, the company made some architectural changes to the virtualization implementation. The main goal was to further improve and standardize the setup for each client LPAR. The architecture concept for processor virtualization was left unchanged: all defined LPARs share the available CPU resources from one pool on the physical systems, and the two Virtual I/O Servers also run in shared-processor LPARs. All LPARs, even very small ones, run in uncapped mode, so they are ready, for example, to perform the nightly backup very quickly. The weighting of the LPARs and the entitlement ensure the appropriate assignment (in effect, prioritization) of resources.
Generally, a newly defined LPAR gets a desired entitlement of at least 0.1 physical CPU and 1 virtual CPU. The production LPARs get a minimum of 0.2 CPU entitlement for each virtual CPU. This ensures that the productive LPARs get more guaranteed capacity than the other (non-production) LPARs.
The desired CPU entitlement is selected as low as possible in order to provide maximum flexibility for the hypervisor to assign granular portions of each physical CPU to the various LPARs according to workload.
At the upper end, BISC follows a rule of thumb to not assign more than 75 percent of the CPU capacity of the physical system to one LPAR. For example, for a 32-core system, the largest LPAR should not have more than 24 virtual CPUs (soft capping). This helps to ensure an efficient method for over-commitment, because it leaves room for other applications to leverage the excessive capacity.
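Taken together, the entitlement minimums and the 75 percent rule of thumb can be sketched as a small helper. This is an illustration only: the function name is invented, and it returns the minimum desired values implied by the rules above.

```python
def lpar_cpu_sizing(requested_vcpus, production, host_cores):
    """Minimum desired CPU settings for a new shared-processor LPAR,
    following BISC's rules of thumb:
    - soft cap: no LPAR larger than 75 percent of the host's cores
    - every LPAR: at least 1 vCPU and 0.1 processing units entitlement
    - production LPARs: 0.2 processing units entitlement per vCPU
    """
    max_vcpus = int(host_cores * 0.75)            # e.g. 24 vCPUs on a 32-core system
    vcpus = max(1, min(requested_vcpus, max_vcpus))
    entitlement = round(0.2 * vcpus, 1) if production else 0.1
    return vcpus, entitlement
```

For example, a production LPAR requesting 30 vCPUs on a 32-core system would be soft-capped at 24 vCPUs with a guaranteed entitlement of 4.8 processing units.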
The memory assignment happens once during basic configuration. After that, operations staff need to monitor the memory consumption and manually adjust the LPAR’s memory configuration.
The following four tables show the placement of the LPARs across the dual data center landscape. The virtualization (or over-commitment) factor, as defined by number of virtual CPUs divided by number of physical CPUs for the four Power Systems goes up to 3.9.
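The factor itself is a simple ratio; the vCPU counts below are made-up values chosen only to illustrate a result close to the quoted 3.9:

```python
def overcommit_factor(lpar_vcpus, pool_cpus):
    """Virtualization (over-commitment) factor: total virtual CPUs
    defined across all LPARs divided by the physical CPUs in the pool."""
    return sum(lpar_vcpus) / pool_cpus

# five hypothetical LPARs totalling 125 vCPUs on 32 physical cores
factor = overcommit_factor([26, 24, 24, 24, 27], 32)
```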
Virtual I/O Server (VIOS) sizing and configuration
Just as in the initial system setup, two VIOS are used on each physical server. For a time, four VIOS per IBM Power 570 server were implemented to separate the disk I/O from the network I/O; however, BISC has not seen any benefit from this configuration.
Each VIOS LPAR has four vCPUs, an entitlement of 0.8 processing units and 4 GB RAM. The weighting factor has been set to 255 (the maximum). Performance monitoring shows that disk I/O can be handled with less than one physical CPU, but network I/O using Shared Ethernet takes up to two CPUs or even more on each VIOS to handle the 10 Gbit bandwidth (used mainly for backup and recovery or for fast data transfer between physical systems).
To handle the higher demand on disk and network I/O each VIOS is equipped with the following adapter configuration:
- 16x 8 Gbps FC for SAN disk access
- 4x 1 Gbps Ethernet for customer network access (LPAR traffic)
- 2x 1 Gbps Ethernet for customer network access (VIOS traffic)
- 2x 1 Gbps Ethernet for administrative network access (LPAR traffic)
- 2x 1 Gbps Ethernet for administrative network access (VIOS traffic)
- 1x 10 Gbps Ethernet for backup/recovery network access (LPAR and VIOS traffic)
Both IBM Power 595 systems have spare disk and network adapters available to support future growth.
Storage Access via Fibre Channel
All data is stored on SAN attached disk subsystem from Hitachi Data Systems. This is being managed by a different organizational unit in the Banking IT Service Center. It is set up in a redundant fashion across the two data center locations. For fault tolerance, host-based logical volume mirroring with IBM AIX Logical Volume Manager (LVM) is used.
For PowerVM disk virtualization, the company had to decide between the two available architectures: N-Port ID Virtualization (NPIV) and Virtual SCSI (vSCSI). The argument for NPIV is that storage administration can be handled by the storage administration team, and no further administrative tasks are required in the operating system. Each client uses a standardized multi-pathing solution (provided by the storage vendor) for redundancy and load balancing.
From a virtualization point of view, the vSCSI model for disk virtualization has a few key benefits that led to the decision to continue with this architecture. vSCSI establishes an abstraction layer that separates the disk access on a VIO client LPAR from the physical attached storage on the VIO server.
Storage multi-pathing software is only required on the VIO server. Therefore many changes for physical disk attachment can be done without intervention on each LPAR. Also, the installation and maintenance of the multipath software on each LPAR is no longer required, significantly reducing administration effort and downtime.
At BISC there are FC adapter groups on the VIOS to separate the traffic for both the data center locations and for individual application groups.
The main drawback of the vSCSI architecture model is that it requires an additional administrative task: when storage is provisioned to a group of VIOS, the disks have to be assigned to the individual LPARs. At BISC, a script-based solution has been developed to automate this task very efficiently. For example, the SAP Bank Analyzer landscape was prepared to benefit from the separation between the LPAR layer and the storage technology underneath the VIOS. Any maintenance in case of a failure, exchange of SAN technology components or changes in bandwidth can be handled transparently to the LPARs on which the SAP application components are running.
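BISC's actual script is not published; at its core, such a script only has to emit the VIOS `mkvdev` mappings from an inventory of assignments. A minimal sketch (the disk, vhost and virtual device names are hypothetical):

```python
def vscsi_mappings(assignments):
    """Generate the VIOS mkvdev commands that map backing hdisks to the
    virtual SCSI server adapters (vhosts) of the client LPARs.

    assignments: list of (hdisk, vhost, virtual_target_device) tuples.
    """
    return [
        f"mkvdev -vdev {hdisk} -vadapter {vhost} -dev {vtd}"
        for hdisk, vhost, vtd in assignments
    ]

# example: map one LUN to the vhost serving a hypothetical SAP LPAR
cmds = vscsi_mappings([("hdisk12", "vhost3", "vtscsi_sap01")])
```

Each mapping ties a physical disk seen by the VIOS to one client LPAR, so storage changes below the VIOS stay invisible to the LPAR.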
To simplify the configuration and completely decouple (i.e. virtualize) the client LPAR from the physical infrastructure and failover implementation, BISC uses the PowerVM Shared Ethernet Adapter feature, which includes Shared Ethernet Failover.
This architecture ensures that each client LPAR is configured consistently with three virtual network adapters – one for each of the three different networks. No further configuration steps on the client LPARs are required to establish redundancy.
On the VIOS for each network there is an Etherchannel implemented to establish redundancy and/or load balancing. The access to the customer production network is realized using an Etherchannel configuration with Link Aggregation Protocol. Four 1 Gbit Ports compose a single “virtual” adapter with aggregated bandwidth and fault tolerance.
SAP Solution Manager
Historically the application monitoring was done for the entire landscape using a central SAP CCMS system. This functionality was transferred to SAP Solution Manager, in which the ABAP functionality is available in the same fashion.
Central Performance History (CPH)
Access to the Central Performance History is very helpful for administrators.
In the future, BISC plans to monitor and report the CPU utilization of the shared processor pool via the CPH to support reliable capacity management. The data recorded from transaction OS06N comprises “available capacity”, “available capacity consumed” and “physical CPU consumed”.
The customers will have a 5 percent grace area within the SLA to cover unexpected peaks. So, for example, if the customer has a guaranteed 100,000 SAPS, he can consume 105,000 SAPS (i.e. 5,000 extra SAPS) in certain peak-load timeframes without any extra cost. To make this possible, it is necessary to closely monitor the utilization data of the entire environment and the performance experience of the customers. All data for the landscape needs to be consolidated and correlated.
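The grace-area rule translates directly into a billing check. A minimal sketch, assuming extra consumption is billed only above guaranteed capacity plus the 5 percent grace area (the function name is invented):

```python
def extra_saps_billed(consumed_saps, guaranteed_saps, grace=0.05):
    """SAPS consumption above the guaranteed level plus the grace area
    counts as extra; anything below is covered by the SLA."""
    allowance = guaranteed_saps * (1 + grace)
    return max(0, consumed_saps - allowance)

# e.g. 100,000 SAPS guaranteed: 105,000 consumed is free of extra cost,
# 110,000 consumed leaves 5,000 SAPS to be billed as extra
```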
Some of the Banking IT Service Center’s customers require higher service levels. For example, one customer requires 7 x 24 SAP application availability. Downtime management is highly important and as a result, so is reporting on system availability.
The downtime management process, designed together with the SAP development team, comprises the following functionality:
- “Four-eye” principle
- IT calendar
- Task book
- Notification (e.g. via system message or SMS or email)
The integration of SAP NetWeaver Business Warehouse reporting of the SAP Solution Manager data is currently being implemented.
Change Request Management (ChaRM)
Change Request Management (ChaRM) helps to support the SAP implementation project by managing changes within the SAP landscape. It integrates with the Service Desk for change requests and cProjects for project planning. It is currently operational at the Banking IT Service Center.
For example, many changes were made to the Java components of a BI system over two years. ChaRM makes it possible to drill down to the detail of what happened at a certain point in time: when different behavior is observed, the administrator can find out what changed between releases. This is especially useful for BISC when a customer handles changes (e.g. SAP transports) independently, without informing BISC.
A further functionality within change management is security reporting.
For example, BISC provides an SLA that commits to the creation of a security report once a year. Some parts of the security reporting process are done manually, but other tasks are generated automatically using templates. These tasks include checking all security-relevant profile parameters for the SAP systems in the landscape, and verifying that all security-relevant SAP Notes are implemented and working correctly. Another example is checking the settings of standard users such as SAP*. The current values for the compliance report are read from the CCDB (which is delivered with, and filled by, Solution Manager Root Cause Analysis).
When a new SAP system is created either by installing from scratch or copying an existing system, the following steps are performed:
- Maintain the thresholds for the alert management via a transport from a template (in case of a new install) or from the source system (in case of a system copy).
- Establish the connection of the agents
- Maintain the new system in the monitoring tree
- Assign the central alert management settings according to standard methods
This whole process is completed in 30-45 minutes by an administrator with a few manual steps which are designed to minimize errors.
Load Balancing / Scalability
BISC aims to optimize utilization of the existing hardware resources by mixing and matching different system types (e.g. Development, Quality Assurance, Production or Sandbox) and different application components (e.g. BW, ERP, Bank Analyzer, etc.) on a single physical server.
At the same time, there should be room for scalability on every physical system.
The experience over the years has been very positive. The flexibility of changing virtualization settings on the fly and adjusting parameters without disruption to the business helps to consistently meet the service level agreements.
For example, one of BISC’s customers requires the adjustment of CPU and memory resources to be done in a single day – a condition that can easily be fulfilled. In case of a necessary migration or a release upgrade, it is easy to dynamically assign more virtual CPUs to an LPAR to shorten the time for that process.
BISC is able to make new system landscapes available without going through a purchasing process or having to work on cabling in the data center. The systems are sometimes only required for a few weeks and hence the usage can easily be accounted for: at the end of the project, the customer’s application department receives a bill, the systems are decommissioned, and the resources are made available for another project afterwards. This process was used for a large Bank Analyzer test project, where the production system was cloned into a new LPAR to test further functionality and determine the necessary configuration for the LPAR.
As mentioned above, all systems at BISC run in a two-tier client/server setup. The large Bank Analyzer system with 26 virtual CPUs is also configured as a central system with 240 work processes in one instance. The benefit of a two-tier setup is that it enables automatic load balancing within the system itself: depending on the phase of the business process, the load profile switches between high I/O throughput (for database transactions) and high CPU capacity (for calculations).
Hardware changes and microcode updates can be covered by Live Partition Mobility (LPM) with no disruption to the users of the systems in the application departments.
Live Partition Mobility has been extensively tested over a long period for several systems (from dedicated test systems through small sandbox systems to quality assurance and production systems). Migrations can be performed without disruption or performance impact during normal business hours.
Recently, one of the IBM Power 595 systems had a hardware defect that required downtime for the repair. Using LPM and Capacity on Demand, the entire server was “evacuated”, and all LPARs were moved to the other server without taking any applications offline. The largest LPAR in this case included SAP Bank Analyzer, configured with 26 virtual CPUs and 320 GB memory. The movement of this LPAR happened with a memory transfer rate of about 4 GB per minute.
The process includes the LPM verification check, setup of the target LPAR shell, the initial memory transfer, the shift-over, and the transfer of the remaining (changed) memory pages. All of these steps, including the synchronization of the remaining memory pages, happen without impact on end users.
LPM can be used in parallel for several partitions, as long as care is taken to ensure that there is no resource constraint. Using the implementation at BISC, four LPM operations can be done in parallel without performance impact for a single operation (distributing the operations on the two VIOS pairs and using two different network interfaces on each VIOS).
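The quoted transfer rate allows a rough planning estimate for such an evacuation. This assumes the roughly 4 GB per minute observed at BISC, which will vary with workload and network configuration:

```python
def lpm_transfer_minutes(memory_gb, rate_gb_per_min=4.0):
    """Estimated memory-transfer time for one Live Partition Mobility move,
    assuming a constant transfer rate (~4 GB/min observed at BISC)."""
    return memory_gb / rate_gb_per_min

# the 320 GB Bank Analyzer LPAR: about 80 minutes of memory transfer
```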
BISC’s disaster recovery scheme is realized through the use of two data centers, with hot standby LPARs for the production LPARs only. When a new SAP system landscape is set up for production purposes, the production and development LPARs are placed in one data center, and the quality assurance and idle standby LPARs in the other.
The service level agreements allow a manual take-over after problem analysis. This is sufficient for most of the production systems.
The idle standby LPAR is up and running, prepared with AIX at the corresponding maintenance level. In case of a disaster, a few tasks need to be executed. These are well documented and prepared as shell scripts that are readily available on the idle LPAR.
The basic tasks are:
- Make sure that the normal environment is completely down and cleaned up (deactivate the network interface, unmount the file systems, varyoffvg)
- Assign the virtual hostname (and IP address) to the new physical host on the standby LPAR
- Import the volume group (optionally forced) and run varyonvg
- Mount all necessary file systems
- Start the system on the standby LPAR
- Perform post-processing steps
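The shell scripts themselves are site-specific; the middle steps can nevertheless be illustrated by generating the AIX command sequence they would wrap. The volume group, disk, file system and address names below are invented, and the shutdown of the failed environment and the SAP start are omitted:

```python
def takeover_commands(volume_group, hdisk, filesystems, service_ip, interface="en0"):
    """Ordered AIX commands for a manual take-over on the idle standby LPAR:
    re-point the service IP, import and activate the mirrored volume group,
    then mount the file systems."""
    cmds = [
        f"ifconfig {interface} alias {service_ip}",   # take over the virtual IP
        f"importvg -y {volume_group} {hdisk}",        # import the volume group
        f"varyonvg {volume_group}",                   # activate it
    ]
    cmds += [f"mount {fs}" for fs in filesystems]     # mount SAP/DB file systems
    return cmds

# hypothetical example for one SAP system
cmds = takeover_commands("sapvg", "hdisk4", ["/oracle", "/usr/sap"], "10.1.1.50")
```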
According to the service level requirements of customers, it is possible to set up the SAP production systems in a high availability cluster with IBM PowerHA – this has been successfully tested.
Data Protection (Backup / Recovery)
Backup and Recovery is provided by standard procedures based on IBM Tivoli Storage Manager (TSM). The TSM server is running in a shared processor LPAR and clustered using IBM PowerHA. It is maintained by another organizational unit and provided as a service to the SAP Basis Team.
To meet the requirements and service level agreements regarding timeframes for backup and recovery, an alternative to LAN-based backup and recovery had to be found. The idea was to use LAN-free architecture concepts and combine them with the POWER virtualization technology. Instead of having direct tape attachment for selected systems, one LPAR has been set up to serve as a “proxy” server. Communication flows from the LPAR over virtual Ethernet (hypervisor, memory-only communication) to the proxy LPAR, and from there directly to the tape-based storage pool.
Using four parallel sessions for backup or recovery and LTO4 tape technology, a bandwidth of over 2,000 GB/h can be achieved. Bandwidth can be further scaled by using more tapes in parallel, or implementing faster tape technology.
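The figures above imply roughly 500 GB/h per LTO4 session. A rough estimate of the backup or restore window under that assumption (the per-session split is inferred from the totals, not stated in the text):

```python
def backup_hours(database_gb, sessions=4, gb_per_hour_per_session=500.0):
    """Rough backup/restore window: about 2,000 GB/h was reached with four
    parallel LTO4 sessions; throughput scales with the number of
    parallel tapes (or faster tape technology)."""
    return database_gb / (sessions * gb_per_hour_per_session)

# e.g. an 8 TB (8,000 GB) database with four sessions: about 4 hours
```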
One of the main benefits is that this technology can be used for each LPAR regardless of the database size. Administration tasks (tape configuration, software maintenance, etc.) only need to be performed on a single LPAR on each physical server.
For the largest SAP database systems, even this concept of LAN-free technology with bandwidth over 2,000 GB/h is not sufficient to fulfill the service level requirements regarding recovery time. To solve this problem, an alternative backup architecture based on disk snapshot technology has been designed and is in the implementation phase.
Performance-Monitoring and Capacity Planning
Right from the beginning, AIX tools (sar) have been set up to monitor the system performance and utilization for each LPAR. Using a monitoring interval of 5 minutes, this data was aggregated for each hour and stored in the SAP Computing Center Management System (CCMS) database.
Using CCMS and customized queries, this data can be viewed for individual LPARs or as summary for an entire physical server system.
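The aggregation step itself is straightforward: twelve 5-minute samples are averaged into one hourly value before being stored in CCMS. A minimal sketch of that step:

```python
def hourly_averages(samples_5min):
    """Aggregate 5-minute utilization samples (as collected via sar) into
    hourly averages, 12 samples per hour; a trailing partial hour is dropped."""
    usable = len(samples_5min) - len(samples_5min) % 12
    return [
        sum(samples_5min[i:i + 12]) / 12
        for i in range(0, usable, 12)
    ]

# e.g. two hours of CPU-busy percentages collapse to two hourly means
```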
nmon (short for Nigel’s Monitor) is a popular system monitor tool for AIX and Linux operating systems used by systems administrators and performance tuning specialists. This is delivered today as a part of AIX. More details are available at: http://www.ibm.com/developerworks/aix/library/au-analyze_aix/
In addition to the monitoring framework mentioned in the previous chapter, nmon has been set up in the BISC landscape on each LPAR to record detailed performance data. This information is mainly used for on demand performance analysis to check a system in detail and identify possible bottlenecks.
Experience showed that the CCMS-based solution is not widely accepted and only rarely used. As a result, there was a need for a “cockpit” view of the current performance of individual LPARs and of physical server utilization. Trend analysis and reporting were also required. As a first step, Ganglia has been implemented to provide a “cockpit” view with trend analysis for capacity planning.
Ganglia (open source) is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids.
It is based on a hierarchical design targeted at federations of clusters and leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. Using carefully engineered data structures and algorithms, Ganglia achieves very low per-node overheads and high concurrency. More details are available at http://ganglia.sourceforge.net/
To extend the Ganglia monitoring framework for PowerVM, the extensions by Michael Perzl, available at http://www.perzl.org/ganglia/, can be used.
So far, no perfect reporting solution has been found, and several options are being evaluated. SAP Solution Manager with Business Warehouse functionality, IBM Tivoli Monitoring with common reporting, or freeware-based implementations could all be viable options.
Accounting is handled centrally based on the monitoring data. CPU consumption, disk space and tape space consumption are also calculated. Using information from CCMS and AIX, it is possible to perform accounting in a virtualized environment. The customers of the Banking IT Service Center will get a regular bill (e.g. monthly), and then a balance at certain points in time (e.g. quarterly or yearly).
The reliable, scalable, high-performance infrastructure chosen by BISC has proved capable of successfully and efficiently hosting SAP environments for the entire group. Leveraging the advantages of virtualization delivers tangible benefits in flexibility and cost reduction. As mentioned above, the project demonstrates the value of making extensive use of IBM PowerVM, and represents an example that other implementations would do well to follow.
Finally, the paper demonstrates how BISC deals with operational considerations such as maintaining high availability, providing data protection and disaster recovery, simplifying maintenance, monitoring performance, planning capacity, and ensuring appropriate charge-back for the services provided using the combined IBM and SAP solution.
Products and services used
© Copyright IBM Corp. 2010 All Rights Reserved. IBM Deutschland GmbH, D-71137 Ehningen. ibm.com. Produced in Germany.
IBM, the IBM logo, ibm.com, i5/OS, DB2, Domino, FlashCopy, Lotus, Notes, POWER, POWER4, POWER5, POWER6, System i, System x, and Tivoli are trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of other IBM trademarks is available on the Web at: http://www.ibm.com/legal/copytrade.shtml
UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product or service names may be trademarks or service marks of others.
This brochure illustrates how IBM customers may be using IBM and/or IBM Business Partner technologies/services. Many factors have contributed to the results and benefits described. IBM does not guarantee comparable results. All information contained herein was provided by the featured customer/s and/or IBM Business Partner/s. IBM does not attest to its accuracy. All customer examples cited represent how some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication is for general guidance only. Photographs may show design models.