IBM General Parallel File System for Linux on POWER, V3 offers Information Lifecycle Management and enhanced multicluster support

IBM United States Software Announcement 206-100
April 25, 2006

 



 
At a glance

IBM General Parallel File System (GPFS) for Linux on POWER, V3.1 offers:

  • Information Lifecycle Management, to allow more flexible support of different disk classes
  • Improved multicluster support and use of multiple networks
  • Administration usability and flexibility improvements
  • Support for mixed clusters of IBM eServer servers running Linux or AIX 5L™ and sharing file system services

For ordering, contact:

Your IBM representative, an IBM Business Partner, or IBM Americas Call Centers at 800-IBM-CALL (Reference: RE001).
 
 

Overview

IBM General Parallel File System (GPFS) for Linux™ on POWER™ is ideal for high-performance parallel file transfer and parallel I/O to single or multiple files. The key strengths of GPFS are multicluster support, superior scalability and performance, ability to support extremely large files, failure recovery, and ease of administration.

Version 3.1 enhancements:

  • Information Lifecycle Management support with the introduction of storage pools, policy-based file management, and filesets. GPFS is designed to help you to achieve data lifecycle management efficiencies through policy-driven automation and tiered storage management. User-defined policies provide the ability to better match the cost of your storage resources to the value of your data.
    • Storage pools allow you to manage your file system's storage in groups. You may now partition your storage based on such factors as performance, locality, and reliability.
    • Filesets provide for partitioning of a file system, allowing administrative operations at a finer granularity than the entire file system.
    • New commands and APIs support storage pools, policies, and filesets.
  • Elimination of the need for communication between nodes in different remote clusters.
  • Enhancements to multicluster file access. You can specify the use of multiple networks for a node in your cluster, thus allowing both the use of internal networks within a cluster and the use of external addresses for remote clusters.
  • Enhanced file system mounting and Network Shared Disk functions.
  • Improved performance and scalability via option to distribute the token manager.

 
 
Key prerequisites
  • One of the following servers: IBM System p5™, IBM eServer pSeries®, IBM eServer p5, IBM eServer BladeCenter®, or IBM eServer OpenPower™
  • One of the following operating systems: SUSE Linux Enterprise Server 9, or Red Hat Enterprise Linux 4 (AS, ES, or WS)


 
 

Planned availability date

April 28, 2006


 
 

Description

IBM General Parallel File System (GPFS) provides file system services to parallel and serial applications. GPFS allows parallel applications simultaneous access to the same files, or different files, from any node that has the GPFS file system mounted, while maintaining a high level of control over all file system operations. GPFS is particularly appropriate in an environment where the aggregate peak need for data bandwidth exceeds the capability of a distributed file system server.

GPFS allows users shared file access within a single GPFS cluster and across multiple GPFS clusters. A GPFS cluster can consist of Linux nodes, AIX 5L V5.3 nodes, or a combination thereof, and network shared disks (NSDs) created and maintained by the NSD component of GPFS.

GPFS for Linux on POWER V3.1 enhancements:

  • GPFS V3 provides for Information Lifecycle Management with the introduction of storage pools, policy-based file management, and filesets. GPFS is designed to help you to achieve data lifecycle management efficiencies through policy-driven automation and tiered storage management. User-defined policies provide the ability to better match the cost of your storage resources to the value of your data.
    • Storage pools allow you to manage your file system's storage in groups. You may partition your storage based on such factors as performance, locality, and reliability. Files are assigned to a storage pool based on defined policies. Storage policies provide for:
      • File placement to a specific storage pool when it is created
      • Migration of a file from one storage pool to another
      • Deletion of a file based on characteristics of the file

    • Filesets provide a method for partitioning a file system and allow administrative operations at a finer granularity than the entire file system. For example, filesets allow you to:
      • Define data block and inode quotas at the fileset level
      • Apply policy rules to specific filesets
    • New commands provide enhanced support of storage pools, policies, and filesets.
    • New APIs in support of storage pools and filesets include gpfs_igetstoragepool and gpfs_igetfilesetname.
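    As an illustration only, the placement, migration, and deletion policies described above can be sketched in Python. This is a simplified, hypothetical model of the idea, not the GPFS policy language; the pool names and thresholds are invented:

```python
# Hypothetical, simplified model of GPFS-style lifecycle rules: each
# rule inspects file attributes and either places the file in a pool,
# migrates it to a cheaper pool, or deletes it. Pool names and the
# 30/365-day thresholds are invented for illustration.

def place(file):
    """Initial placement: pick a storage pool when the file is created."""
    if file["name"].endswith(".tmp"):
        return "scratch"          # low-cost pool for temporary data
    return "system"               # default high-performance pool

def evaluate(file, days_old):
    """Lifecycle step: delete expired files, migrate cold ones."""
    if days_old > 365:
        return ("delete", None)
    if days_old > 30 and file["pool"] == "system":
        return ("migrate", "archive")   # move to cheaper storage
    return ("keep", file["pool"])

f = {"name": "results.dat", "pool": place({"name": "results.dat"})}
print(place({"name": "x.tmp"}))      # scratch
print(evaluate(f, 90))               # ('migrate', 'archive')
```

    In GPFS itself such decisions are expressed with user-defined policy rules installed via the new policy commands mentioned above; the sketch only shows the decision flow.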
  • This release eliminates the need for communication between nodes in different remote clusters by distributing the token server load over multiple manager nodes in a home cluster and by letting the token server, rather than a client in a remote cluster, handle revoke requests for tokens. This eliminates the token traffic previously required among these different remote clusters.
  • GPFS provides improved multiple-cluster network access and support. You may specify the use of multiple networks for a node in your cluster, allowing both the use of internal networks within a cluster and the use of external addresses for remote mounts. The mmchconfig command has been enhanced so you may specify a list of private IP addresses used to communicate between nodes in a GPFS cluster.
    • Multiple security levels, up to one for each authorized cluster, may be specified by the cluster granting access.
    • The local cluster no longer needs to be shut down prior to changing security keys. GPFS aims to provide a highly available service while allowing for required periodic changing of keys:
      • To keep connection rate performance acceptable in large clusters, the security keys used for authentication cannot be very large. As a result, it may be necessary to change security keys to prevent a given key from being compromised while it is still in use.
      • As a matter of policy, some institutions may require periodic changing of security keys.
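    The multiple-network idea above can be sketched as follows. The function name and address values are hypothetical; the point is only that peers in the same cluster use a node's private address while remote clusters use its public one:

```python
# Hypothetical sketch of address selection in a multicluster setup:
# a node advertises both an internal (private) and an external
# (public) address; same-cluster peers use the internal network,
# remote clusters the external address. GPFS configures this via
# mmchconfig; all names and addresses below are invented.

NODE_ADDRS = {
    "node1": {"internal": "10.0.0.1", "external": "198.51.100.1"},
}

def contact_address(node, requester_cluster, home_cluster):
    addrs = NODE_ADDRS[node]
    if requester_cluster == home_cluster:
        return addrs["internal"]   # stay on the private cluster network
    return addrs["external"]       # remote clusters use the public address

print(contact_address("node1", "clusterA", "clusterA"))  # 10.0.0.1
print(contact_address("node1", "clusterB", "clusterA"))  # 198.51.100.1
```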

    Refer to the GPFS FAQ for information on the latest support for GPFS interoperability, at

  • Enhanced file system mounting is supported by three new commands:
    • The mmmount and mmumount commands provide cluster-wide mounting and unmounting of file systems, relieving the administrator of the need to issue the dsh command.
    • The mmlsmount command displays the IP addresses and names of the nodes (local and remote) that have a particular file system mounted.
  • Enhanced Network Shared Disk functions are provided:
    • An option to allow or restrict failover from local to remote access is provided on the mmchfs, mmmount, and mmremotefs commands.
    • In prior releases of GPFS, once a connection had been repaired and access through an NSD server was no longer desired, the only way to fail back to local disk access was to remount the file system. In this release, GPFS discovers when the path has been repaired and, if so, falls back to local disk access.
    • Improved NSD access information is provided by the mmlsnsd command:
      • The availability of the disk is displayed.
      • The device type of the disk is available when specifying the -X option.
  • The mmpmon performance monitoring tool provides:
    • Use of a named socket — the mmpmon command no longer uses ports.
    • Ability to include a list of nodes, in the local cluster, to report on instead of just the node from which the command is issued.
  • Mount support is provided for clusters utilizing NSD servers.
  • You may specify how long to wait for an NSD server to come online before allowing the mount to fail. The mmchconfig command has been enhanced, allowing you to specify a wait time when either bringing a cluster online or bringing an NSD server online when client nodes are already active. The wait applies when either:
    • The cluster formation time is at most nsdServerWaitTimeWindowOnMount seconds before the current time, or
    • The last failed time is at most nsdServerWaitTimeWindowOnMount seconds before the current time.

    The number of seconds to wait for the server to come up before declaring the mount a failure is specified by the nsdServerWaitTimeForMount option.
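    The wait-time behavior might be modeled as follows. The decision logic is a simplified reading of the two option names quoted above, with invented default values:

```python
# Simplified, hypothetical model of the NSD mount-wait decision: the
# client waits up to nsdServerWaitTimeForMount seconds for an NSD
# server, but only when the cluster just formed or the server just
# failed (within nsdServerWaitTimeWindowOnMount seconds). Parameter
# names come from the announcement; defaults and logic are invented.

def mount_wait_seconds(now, cluster_formed_at, last_failed_at,
                       wait_for_mount=300, wait_window=600):
    recently_formed = (now - cluster_formed_at) <= wait_window
    recently_failed = (last_failed_at is not None
                       and (now - last_failed_at) <= wait_window)
    if recently_formed or recently_failed:
        return wait_for_mount   # give the NSD server time to come up
    return 0                    # otherwise, fail the mount immediately

print(mount_wait_seconds(1000, 900, None))    # 300 (cluster just formed)
print(mount_wait_seconds(10000, 0, None))     # 0
```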
  • You may specify different networks for GPFS daemon communication and for GPFS administration command usage within your cluster. The node descriptor for the mmchcluster command now allows you to specify separate node interfaces for these uses, for each node within your cluster. You may choose to use this capability when considering cluster security or performance.
  • An auto-configuration tool is provided for GPFS for Linux installations.

    GPFS Linux installations require that a set of kernel extensions be built to match a customer's specific hardware, Linux distribution, and kernel. Prior to GPFS V3.1, this was a manual operation (refer to GPFS Concepts, Planning, and Installation Guide and search on the GPFS portability layer). In addition to this manual method, an auto-configuration tool is now available to query your system configuration and generate a working configuration file.

  • The use of a single port removes conflicts with existing network services.

    As of November 2004, port number 1191 was officially registered with the Internet Assigned Numbers Authority (IANA) for handling GPFS-specific traffic. With this release, all GPFS-related traffic has been consolidated on port 1191.

  • Improved usability and command consistency are provided via the -N flag to indicate which nodes are to participate in execution of the command.
  • Enhanced error messages reporting is provided in the mmfs log.

    Messages that previously provided no explanation now include the proper text.

  • You may experience performance improvements when issuing either the mmlsnsd -M or the mmlsnsd -m command due to a redesign of the command.
  • Quorum semantics are improved when utilizing node quorum with tiebreaker disks. The maximum number of quorum nodes has been increased from two to eight.
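    A hedged sketch of the tiebreaker idea: as commonly described for GPFS node quorum with tiebreaker disks, the cluster can remain online as long as at least one quorum node is up and can reach a majority of the tiebreaker disks. The model below is an assumption-laden simplification, not the actual quorum algorithm:

```python
# Hypothetical, simplified model of node quorum with tiebreaker
# disks: quorum holds when at least one quorum node survives and a
# majority of the tiebreaker disks remain reachable. This is an
# assumed reading of the semantics, for illustration only.

def cluster_has_quorum(quorum_nodes_up, tiebreaker_disks_reachable,
                       tiebreaker_disks_total):
    if quorum_nodes_up < 1:
        return False
    majority = tiebreaker_disks_total // 2 + 1
    return tiebreaker_disks_reachable >= majority

print(cluster_has_quorum(1, 2, 3))   # True: one node, 2 of 3 disks
print(cluster_has_quorum(1, 1, 3))   # False: disk majority lost
```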
  • Enhanced quota information is provided in support of filesets.
  • GPFS V3 provides the ability to distribute the token management function among file system manager nodes in a cluster, reducing possible bottlenecks.

    You may choose to distribute the token server load among the nodes in your cluster that have been designated as file system manager nodes. The mmchconfig command has been enhanced with the distributedTokenServer option, allowing you to turn the distributed token management function on and off.

    Refer to Concepts, Planning, and Installation Guide for further information on the roles of the file system manager, and to General Parallel File System: Advanced Administration for details on distributed token managers.
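    One way to picture distributing the token server load (purely illustrative; the real GPFS assignment algorithm is internal): hash each file to one of the designated file system manager nodes so that no single token server becomes a bottleneck:

```python
# Hypothetical illustration of distributed token management: hash
# each file name onto one of the designated file system manager
# nodes. This mirrors the idea behind the distributedTokenServer
# option, not GPFS internals; node names are invented.

import zlib

MANAGER_NODES = ["mgr1", "mgr2", "mgr3"]

def token_server_for(filename):
    # Stable hash so every node picks the same manager for a file.
    h = zlib.crc32(filename.encode())
    return MANAGER_NODES[h % len(MANAGER_NODES)]

# Tokens for many files spread across the manager nodes:
servers = {token_server_for(f"file{i}") for i in range(100)}
print(sorted(servers))
```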

  • Performance and security enhancements have been made to the mmauth command. You may experience performance improvements as GPFS V3 uses a multithreaded receiver for authentication, encryption, and decryption.

Other functional changes in GPFS V3.1:

  • The mmpmon command no longer uses ports.

Note: Refer to the GPFS FAQ available from the following Web site for complete information

Accessibility by people with disabilities

A U.S. Section 508 Voluntary Product Accessibility Template (VPAT) containing details on the product's accessibility compliance can be requested via IBM's Web site:


 
 
Product positioning

GPFS can be ideal for high-performance parallel file transfer and parallel I/O to single or multiple files.

The key strengths of GPFS are:

  • Information Lifecycle Management
  • Shared file system access among GPFS clusters
  • Improved system performance
  • Superior scalability and the ability to support extremely large files
  • File consistency
  • High recoverability and increased data availability
  • Enhanced system flexibility
  • Simplified storage management
  • Simplified administration
  • Multiple platform support

GPFS for Linux is also positioned as a complementary offering to GPFS for AIX 5L.
 
 

Reference information
  • Software Announcement 206-095 , dated April 25, 2006 (GPFS for Linux Multiplatform)
  • Software Announcement 206-101 , dated April 25, 2006 (GPFS for AIX 5L)
  • Withdrawal Announcement 906-059 , dated March 28, 2006 (HPC SW product service end dates)
  • Software Announcement 204-298 , dated December 7, 2004 (On/Off Capacity on Demand)
  • Withdrawal Announcement 906-089 , dated April 25, 2006 (SW product withdrawals from marketing)

Business Partner information

If you are a Direct Reseller - System Reseller acquiring products from IBM, you may link directly to Business Partner information for this announcement. A PartnerWorld ID and password are required (use IBM ID).

BP Attachment for Announcement Letter 206-100

Trademarks

 
POWER, System p5, and AIX 5L are trademarks of International Business Machines Corporation in the United States or other countries or both.
 
pSeries and BladeCenter are registered trademarks of International Business Machines Corporation in the United States or other countries or both.
 
Linux is a trademark of Linus Torvalds in the United States, other countries or both.
 
Other company, product, and service names may be trademarks or service marks of others.

GPFS for Linux™ on POWER™ V2.3 will remain available via current 5765-G20 billing feature numbers and AIX 5L™ SPO (5692-A5L) supply feature numbers until further notice.

Program warranty service for GPFS V2.2 will end on April 30, 2007, as previously announced.

Program warranty service for GPFS V2.3 will continue until further notice.

For additional GPFS product information, go to


 
 
Offering Information

Product information is available via the Offering Information Web site


 
 
Publications

No publications are shipped with this program; however, the following publications are available from the Web:

IBM General Parallel File System:

  • Concepts, Planning and Installation Guide (GA76-0413)
  • Administration and Programming Reference (SA23-2221)
  • Problem Determination Guide (GA76-0415)
  • Data Management API Guide (GA76-0414)
  • Advanced Administration Guide (SC23-5182)

These displayable manuals are viewable in PDF format from

The manuals are also available by selecting the "Library" link off of the IBM eServer Cluster Information Center at

IBM Publications Center

The Publications Center is a worldwide central repository for IBM product publications and marketing material with a catalog of 70,000 items. Extensive search facilities are provided. Payment options for orders are via credit card (in the U.S.) or customer number for 50 countries. A large number of publications are available online in various file formats, and they can all be downloaded free of charge in all countries.
 
 

Technical information

Specified operating environment

Hardware requirements

Any of the following IBM servers:

  • IBM System p5™ servers
  • IBM eServer pSeries® servers
  • IBM eServer Cluster 1600 (with supported network interconnect)
  • IBM eServer p5 servers
  • IBM eServer BladeCenter® servers
  • IBM eServer OpenPower™

Any of the following connectivity and storage:

  • 100 Mb Ethernet, 1 Gb Ethernet, or Myrinet (IP only)
  • IBM TotalStorage® DS4000 disk (DS4100, DS4300, DS4400, DS4500, or DS4800)
  • IBM TotalStorage ESS (2105-F20 or 2105-800)

Note: Refer to the GPFS FAQ available from the following Web site for an updated list of formally qualified disk subsystems and servers:

Software requirements

Compatibility: In any cluster or group of clusters that mount the same file systems, GPFS requires that all nodes run the same release of code. Nodes running different service or update levels may coexist as long as the different levels do not depend on certain bug fixes for correct execution.

Refer to the online documentation on migration from the "Library" link off of the IBM eServer Cluster Information Center at

Limitations: For complete and up-to-date information regarding GPFS specifications and capacities, refer to the GPFS for Linux on POWER (5765-G67) Sales Manual, and to the GPFS FAQ available from the following Web site

For additional information, refer to usage restrictions in the Terms and conditions section of this announcement, or to the License Information document that is available on the IBM Software License Agreement Web site at

Planning information

Packaging: This program is distributed as a single package available via CD-ROM media. A product README file and license information are shipped with the product. Publications are available online.

Security, auditability, and control

GPFS uses the security and auditability features of the Linux operating system, OpenSSH, and OpenSSL. The customer is responsible for evaluation, selection, and implementation of security features, administrative procedures, and appropriate controls in application systems and communication facilities.
 
 

Ordering information

Charge metric

                        Part number
Program name            or PID number      Charge metric
 
GPFS for Linux          5765-G67           Per processor --
 on POWER V3            5660-GLP            small, medium, large
                        5661-GLP
                        5662-GLP
                        5663-GLP
                        5664-GLP

GPFS V3.1 has one charge unit: number of processors active on either a small, medium, or large server running GPFS for the supported operating environment.

The order quantity is the integer number of processors on which GPFS will be run or installed.

Note: Ordering and billing are per physical processor, regardless of processor sharing ("sub-CPU LPARs").

A new order for GPFS includes one-year IBM Software Maintenance. Upgrades may be acquired via the IBM Software Maintenance agreement up to the current level of use authorized for the qualifying program.

Sub-capacity processor

The required authorized use level for eligible programs is based upon the highest utilization of the partitions where the program or a component of the program executes. The customer agrees to periodically report program use to IBM using an IBM license management tool.

  1. The number of authorizations you must acquire is the smaller value of either of the following methods:
    1. The total number of activated (available for use) processors in the machine, or
    2. The sum of (i) and (ii) as follows (any remaining fraction of a processor must be rounded up to a full processor in the final aggregation)

      (i) When the program is run in partitions with dedicated processors, the sum of the processing units of those partitions, and

      (ii) When the program is run in partitions that are members of a shared processing pool, the smaller of:

      1. The number of processors assigned to the pool, or
      2. The sum of the virtual processors of each uncapped partition plus the processing units in each capped partition running a program.
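The counting rule above can be restated as a small calculation. This is a simplified reading of the announcement's wording, not a substitute for the license terms:

```python
# Simplified sketch of the sub-capacity counting rule: acquire the
# smaller of (a) all activated processors in the machine and (b) the
# dedicated-partition processing units plus the shared-pool term,
# rounding any remaining fraction up at the end. Example figures
# below are invented.

import math

def authorizations(activated, dedicated_units, pool_size,
                   uncapped_virtual, capped_units):
    # (ii): the smaller of the pool size and the sum of uncapped
    # virtual processors plus capped processing units.
    pool_term = min(pool_size, uncapped_virtual + capped_units)
    # (i) + (ii), rounded up to a full processor in the aggregation.
    method_b = math.ceil(dedicated_units + pool_term)
    return min(activated, method_b)

# 16 activated processors; 2.5 units in dedicated partitions; an
# 8-processor shared pool with 4 uncapped virtual processors and
# 1.5 capped processing units running the program:
print(authorizations(16, 2.5, 8, 4, 1.5))   # 8
```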

New users of GPFS for Linux on POWER, V3.1 should specify type 5765, model G67, and the one-time charge (OTC) feature number in the appropriate server-size column with a quantity equal to the total number of processors active in that server running the supported operating system in which GPFS will be installed. If a processor is active and GPFS will not be running on that processor, an order for GPFS is not required. For CD-ROM media, specify type 5692, model A5L, and feature number 1486.

Note: Throughout the following tables:

  • Small means small capacity servers (processor groups C5, D5, and E5)
  • Medium means medium capacity servers (processor groups F5 and G5)
  • Large means large capacity servers (processor groups H5 and P5)
  • OTC means one-time charge
  • OOCoD means On/Off Capacity on Demand

5765-G67, General Parallel File System for Linux on POWER, V3.1

Basic license one-time charge

                                 Small       Medium      Large
                                 OTC         OTC         OTC
                    Program      feature     feature     feature
Description         number       number      number      number
 
Per processor       5765-G67     0001        0003        0005
Block of 250        5765-G67     0002        0004        0006
 processors(1)
1
The feature for a block of 250 active processors is provided for convenience with large orders. A quantity of one for this feature will order a license for 250 processors.
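For example, a large order might combine the two features like this (an illustrative calculation, not ordering guidance):

```python
# Illustrative arithmetic for combining the block-of-250 feature
# with the per-processor feature on a large order. The combination
# shown is an assumption about typical use of the features.

def order_features(processors, block_size=250):
    blocks, singles = divmod(processors, block_size)
    return blocks, singles

print(order_features(520))   # (2, 20): 2 blocks of 250 + 20 per-processor
```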

                                                         Media
                                                         supply
                         Program                         feature
Program name             number           Media          number
 
GPFS for Linux           5692-A5L         CD-ROM         1486
 on POWER, V3.1

This software license includes Software Maintenance, previously referred to as Software Subscription and Technical Support.

You may elect to extend coverage for a total of three years from the date of acquisition. Order the program number, feature number, and quantity to extend coverage for your software licenses. If maintenance has expired, specify the after-license feature number.

New Software Maintenance products and features for General Parallel File System (GPFS) for Linux on POWER V3 (5765-G67) and GPFS for Linux on POWER V2.3 (5765-G20) are effective April 25, 2006.

                                                         OTC
                                                         Billing
                                                         feature
Feature description                                      number
 
5660-GLP GPFS Software Maintenance                       5809
 Agreement
5661-GLP GPFS Software Maintenance                       5809
 Agreement
5662-GLP GPFS Software Maintenance                       5809
 Agreement
5664-GLP GPFS Software Maintenance                       5809
 Agreement
 
5660-GLP Maintenance No-Charge 1-year Registration
 
Small per processor                                      0824
Small per block of 250 processors                        0825
Medium per processor                                     0828
Medium per block of 250 processors                       0829
Large per processor                                      0832
Large per block of 250 processors                        0833
 
5660-GLP Maintenance 1-year 24 x 7 support
 
Small per processor                                      0826
Small per block of 250 processors                        0827
Medium per processor                                     0830
Medium per block of 250 processors                       0831
Large per processor                                      0834
Large per block of 250 processors                        0835
 
5661-GLP 1-year Software Maintenance After License
 
Small per processor                                      0108
Small per block of 250 processors                        0109
Medium per processor                                     0112
Medium per block of 250 processors                       0113
Large per processor                                      0116
Large per block of 250 processors                        0117
 
5661-GLP After License 1-year 24 x 7 support
 
Small per processor                                      0110
Small per block of 250 processors                        0111
Medium per processor                                     0114
Medium per block of 250 processors                       0115
Large per processor                                      0118
Large per block of 250 processors                        0119
 
5662-GLP Maintenance 3-year Registration (2-year uplift)
 
Small per processor                                      0168
Small per block of 250 processors                        0169
Medium per processor                                     0172
Medium per block of 250 processors                       0173
Large per processor                                      0176
Large per block of 250 processors                        0177
 
5662-GLP Maintenance 3-year 24 x 7 support
 
Small per processor                                      0170
Small per block of 250 processors                        0171
Medium per processor                                     0174
Medium per block of 250 processors                       0175
Large per processor                                      0178
Large per block of 250 processors                        0179
 
5664-GLP 3-year Software Maintenance After License
 
Small per processor                                      0001
Small per block of 250 processors                        0002
Medium per processor                                     0005
Medium per block of 250 processors                       0006
Large per processor                                      0009
Large per block of 250 processors                        0010
 
5664-GLP After License 3-year 24 x 7 support
 
Small per processor                                      0003
Small per block of 250 processors                        0004
Medium per processor                                     0007
Medium per block of 250 processors                       0008
Large per processor                                      0011
Large per block of 250 processors                        0012

Withdrawal of SWMA program numbers

For these licensed programs, the Software Maintenance (SWMA) structure and type numbers are changing from 577x to 566x. GPFS Software Maintenance is now available via the new 566x program numbers announced above. Customers with current software maintenance will be renewed under the new SWMA product numbers at their normal renewal date.

The following program numbers are withdrawn as of April 25, 2006:

                                                        Program
Description                                             number
 
GPFS for Linux on POWER                                 5771-GLP
 1-Year Software Maintenance
 
GPFS for Linux on POWER                                 5773-GLP
 3-Year Software Maintenance

Refer to Withdrawal Announcement 906-089 , dated April 25, 2006 (pSeries SW withdrawal).

System Program Order (SPO) (5692-A5L)

A 5692-A5L SPO is mandatory for shipments of program distribution and publications. The individual licensed program orders (for example, 5765-G67) are for registration and billing purposes only. No shipment occurs under these orders.

Machine-readable materials are only available on CD-ROM. To receive shipment of machine-readable materials under SPO (5692-A5L), order the CD-ROM media.

Specify feature code 3410. Billing for the media is generated under the SPO. To prevent additional billing expenses, place only one SPO order per machine.

Select one of the following (5692-A5L) features for the licensed program's entitled hardcopy publications.

Program
number            Program name                            Number
 
5692-A5L          GPFS for Linux on POWER, V3.1           1486

Under SPO 5692-A5L, feature number 3470 can be used to suppress hardcopy documentation. To order entitled hardcopy documentation only, order feature number 3430.

On/Off Capacity on Demand

IBM offers a daily price option to enable you to dynamically change your available server capacity. On/Off Capacity on Demand (OOCoD) allows you to dynamically increase and decrease server capacity based on changing workload demands. To take advantage of OOCoD, you must have activated the OOCoD hardware offering for pSeries and have the applicable base software license on order or in inventory.

This offering allows you to pay a fee to enable and use temporary server capacity of GPFS on a per-day basis using the features below.

Customers must complete and sign contract Z125-6907 — Amendment for iSeries™ and pSeries Software On/Off Capacity on Demand.

For specific terms, refer to Software Announcement 204-298 , dated December 7, 2004.

                                 Small       Medium      Large
                                 OTC         OTC         OTC
                    Program      feature     feature     feature
Description         number       number      number      number
 
Per Processor       5765-G67     0007        0009        0011
 Day OOCoD
 Temp Use Chrg
Block of 250        5765-G67     0008        0010        0012
 processors(1)
1
The feature for a block of 250 active processors is provided for convenience with large orders. A quantity of one for this feature will order a license for 250 processors.

 
 
Terms and conditions

Licensing: IBM International Program License Agreement and License Information document. Proofs of Entitlement (PoE) are required for all authorized use.

This software license includes Software Maintenance, previously referred to as Software Subscription and Technical Support.

The following agreement applies for maintenance and does not require customer signatures: IBM Agreement for Acquisition of Software Maintenance (Z125-6011).

Limited warranty applies: Yes

Warranty: This program includes a warranty for one year from acquisition from IBM or an authorized IBM Business Partner. For one year from acquisition of the program, this warranty provides the customer with access to databases containing program information and FAQs, including any known fixes to defects, which the customer can download or otherwise obtain and install.

Program technical support: Technical support of a program product will be available for a minimum of three years from the general availability date, as long as your Software Maintenance is in effect. This technical support allows you to obtain assistance (via telephone or electronic means) from IBM for product-specific, task-oriented questions regarding the installation and operation of the program product. Software Maintenance also provides you with access to updates, releases, and versions of the program. Customers will be notified, via announcement letter, of discontinuance of support with 12 months' notice. If you require additional technical support from IBM, including an extension of support beyond the discontinuance date, contact your IBM representative or IBM Business Partner. This extension may be available for a fee.

Money-back guarantee: If for any reason you are dissatisfied with the program and you are the original licensee, return it within 30 days from the invoice date, to the party (either IBM or its reseller) from whom you acquired it, for a refund. For clarification, note that for programs acquired under any of IBM's On/Off Capacity on Demand (On/Off CoD) software offerings, this term does not apply since these offerings apply to programs already acquired and in use by the customer.

Copy and use on home/portable computer: No

Volume orders (IVO): Yes. Contact your IBM representative.

Passport Advantage® applies: No

Usage restriction: Yes

For additional information, refer to the License Information document that is available on the IBM Software License Agreement Web site

Software Maintenance applies: Yes

All distributed software licenses include Software Maintenance (Software Subscription and Technical Support) for a period of 12 months from the date of acquisition, providing a streamlined way to acquire IBM software and to assure technical support coverage for all licenses. You may elect to extend coverage for a total of three years from the date of acquisition.

While your Software Maintenance is in effect, IBM provides you with assistance for your routine, short-duration installation and usage (how-to) questions and code-related questions. IBM provides assistance via telephone and, if available, electronic access, only to your information systems (IS) technical support personnel during the normal business hours (published prime shift hours) of your IBM support center. (This assistance is not available to your end users.) IBM provides Severity 1 assistance 24 hours a day, every day of the year. For additional details, consult your IBM Software Support Guide.

Software Maintenance does not include assistance for the design and development of applications, your use of programs in other than their specified operating environment, or failures caused by products for which IBM is not responsible under this agreement.

Acquisition of GPFS V3 via SWMA does not change the start or end dates of existing SWMA contracts.

IBM Operational Support Services — SoftwareXcel: No

iSeries Software Maintenance applies: No

Variable charges apply: Yes

Educational allowance available: Yes. A 15% education allowance applies to qualified education institution customers.

On/Off Capacity on Demand

To be eligible for On/Off Capacity on Demand pricing, customers must be enabled for temporary capacity on the corresponding hardware, and the required contract — Z125-6907, Amendment for iSeries and pSeries Temporary Capacity On Demand — Software — must be signed prior to use.


 
 

Prices

The prices provided in this announcement are suggested retail prices for the U.S. only and are provided for your information only. Dealer prices may vary, and prices may also vary by country. Prices are subject to change without notice. For additional information and current prices, contact your local IBM representative.

Program name: 5765-G67 General Parallel File System for Linux on POWER, V3.1

                                                    OTC
                                                    Billing  One-
                                                    feature  time
Feature description                                 number   charge
 
Base product
 
 Small per processor                                0001     $    650
 Small per block of 250 processors                  0002      162,500
 Medium per processor                               0003          857
 Medium per block of 250 processors                 0004      214,250
 Large per processor                                0005        1,050
 Large per block of 250 processors                  0006      262,500
 
Per processor day OOCoD temp use charge
 
 Small per processor                                0007            7
 Small per block of 250 processors                  0008        1,750
 Medium per processor                               0009            9
 Medium per block of 250 processors                 0010        2,250
 Large per processor                                0011           11
 Large per block of 250 processors                  0012        2,750
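The On/Off CoD temporary use charge above is a simple product: the per-processor-day rate for the pricing tier, times the number of processors temporarily enabled, times the number of days of use. A minimal sketch of that arithmetic, using the published per-processor-day rates (the function name and structure are illustrative, not part of any IBM tooling):

```python
# Published On/Off CoD per-processor-day rates (U.S. dollars) by tier.
OOCOD_RATE = {"small": 7, "medium": 9, "large": 11}

def oocod_charge(tier: str, processors: int, days: int) -> int:
    """Temporary-use charge for one billing period:
    rate x processors x days."""
    return OOCOD_RATE[tier] * processors * days

# Example: 8 Medium-tier processors enabled for 5 days
print(oocod_charge("medium", 8, 5))  # 9 * 8 * 5 = 360
```

Note that the per-block-of-250 rates in the table are exactly 250 times the per-processor rates (for example, Small: 250 x $7 = $1,750 per day), so the block features change only the ordering granularity, not the effective unit price.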

New Software Maintenance Program Identification (PID) and features — GPFS for Linux on POWER, V3.1

                                                    OTC
                                                    Billing  One-
                                                    feature  time
Feature description                                 number   charge
 
5660-GLP GPFS Software Maintenance Agreement        5809           NC
5661-GLP GPFS Software Maintenance Agreement        5809
5662-GLP GPFS Software Maintenance Agreement        5809
5664-GLP GPFS Software Maintenance Agreement        5809
 
5660-GLP Maintenance No-Charge 1-year Registration
 
Small per processor                                 0824           NC
Small per block of 250 processors                   0825
Medium per processor                                0828
Medium per block of 250 processors                  0829
Large per processor                                 0832
Large per block of 250 processors                   0833
 
5660-GLP Maintenance 1-year 24 x 7 support
 
Small per processor                                 0826     $      9
Small per block of 250 processors                   0827        2,250
Medium per processor                                0830           12
Medium per block of 250 processors                  0831        3,000
Large per processor                                 0834           15
Large per block of 250 processors                   0835        3,750
 
5661-GLP 1-year Software Maintenance After License
 
Small per processor                                 0108          195
Small per block of 250 processors                   0109       48,750
Medium per processor                                0112          257
Medium per block of 250 processors                  0113       64,250
Large per processor                                 0116          315
Large per block of 250 processors                   0117       78,750
 
5661-GLP After License 1-year 24 x 7 support
 
Small per processor                                 0110            9
Small per block of 250 processors                   0111        2,250
Medium per processor                                0114           12
Medium per block of 250 processors                  0115        3,000
Large per processor                                 0118           15
Large per block of 250 processors                   0119        3,750
 
5662-GLP Maintenance 3-year Registration (2-year uplift)
 
Small per processor                                 0168          221
Small per block of 250 processors                   0169       55,250
Medium per processor                                0172          291
Medium per block of 250 processors                  0173       72,750
Large per processor                                 0176          357
Large per block of 250 processors                   0177       89,250
 
5662-GLP Maintenance 3-year 24 x 7 support
 
Small per processor                                 0170           25
Small per block of 250 processors                   0171        6,250
Medium per processor                                0174           32
Medium per block of 250 processors                  0175        8,000
Large per processor                                 0178           40
Large per block of 250 processors                   0179       10,000
 
5664-GLP 3-year Software Maintenance After License
 
Small per processor                                 0001          415
Small per block of 250 processors                   0002      103,750
Medium per processor                                0005          547
Medium per block of 250 processors                  0006      136,750
Large per processor                                 0009          671
Large per block of 250 processors                   0010      167,750
 
5664-GLP After License 3-year 24 x 7 support
 
Small per processor                                 0003           25
Small per block of 250 processors                   0004        6,250
Medium per processor                                0007           32
Medium per block of 250 processors                  0008        8,000
Large per processor                                 0011           40
Large per block of 250 processors                   0012       10,000

Variable charges

The applicable processor-based one-time charge will be based on the group of the designated machine on which the program is licensed for use. If the program is designated to a processor in a group for which no charge is listed, the charge of the next higher group listed applies. For movement to a machine in a higher group, an upgrade charge equal to the difference in the then-current charges between the two groups will apply. For movement to a machine in a lower group, there will be no adjustment or refund of charges paid.
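The group-based charging rule above reduces to a one-line calculation: moving to a higher group costs the difference between the two groups' current charges, and moving to a lower group yields no refund. A hedged sketch, where the group names and dollar amounts are illustrative placeholders only (not published IBM prices):

```python
# Illustrative processor-group charges; placeholders, not published prices.
GROUP_CHARGE = {"D5": 1000, "E5": 2000, "F5": 4000}

def upgrade_charge(old_group: str, new_group: str) -> int:
    """Charge due when the designated machine moves between groups:
    the difference in then-current charges, floored at zero (no refund
    or adjustment for movement to a lower group)."""
    diff = GROUP_CHARGE[new_group] - GROUP_CHARGE[old_group]
    return max(diff, 0)

print(upgrade_charge("D5", "F5"))  # moving up: 4000 - 1000 = 3000
print(upgrade_charge("F5", "D5"))  # moving down: no refund, 0
```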
 
 

Order now

To order, contact the Americas Call Centers, your local IBM representative, or your IBM Business Partner.

To identify your local IBM representative or IBM Business Partner, call 800-IBM-4YOU (426-4968).

 Phone:     800-IBM-CALL (426-2255)
 Fax:       800-2IBM-FAX (242-6329)
 Internet:  callserv@ca.ibm.com
 Mail:      IBM Americas Call Centers
            Dept. Teleweb Customer Support, 9th floor
            105 Moatfield Drive
            North York, Ontario
            Canada M3B 3R1
 
 Reference: RE001

The Americas Call Centers, our national direct marketing organization, can add your name to the mailing list for catalogs of IBM products.

Note: Shipments will begin after the planned availability date.

Trademarks

 
POWER, AIX 5L, System p5, OpenPower, and iSeries are trademarks of International Business Machines Corporation in the United States or other countries or both.
 
eServer, pSeries, BladeCenter, TotalStorage, and Passport Advantage are registered trademarks of International Business Machines Corporation in the United States or other countries or both.
 
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
 
Other company, product, and service names may be trademarks or service marks of others.
