
An Overview of Hiperspace Caching for PDSE

Question & Answer


Question

How does Hiperspace Caching work for PDSE?

Answer

PDSE Hiperspace Caching gives PDSEs the opportunity to use central storage as a substitute for I/O operations, at the cost of increased CPU utilization. This document provides an introductory overview of the efficacy and efficiency of the Hiperspace, along with usage information.


How does Hiperspace caching work?
PDSE Hiperspace is a caching function implemented to improve PDSE performance in cases where the same member (or set of members) is accessed repeatedly. When members are opened and are eligible for Hiperspace caching, the member pages are placed into the Hiperspace. The Hiperspace gives applications the opportunity to use central storage as a substitute for I/O operations, in much the same way that LLA/VLF stores program objects. The primary goal of these forms of caching is to greatly increase application efficiency. The BMF (Buffer Management Facility)/Hiperspace caching order of operations is as follows. (NOTE: For additional information on the BMF, refer to the PDSE Usage Guide Redbook listed in the following Related Information section.)

When members are requested from a PDSE, the request is routed through the BMF/Hiperspace cache in order to cut down on I/O processing. If the member is present in the Hiperspace, it is returned without the need for I/O processing. However, if the member is not present, I/O processing is entered and the requested member is fetched and sent to both the caller and the Hiperspace cache. The member pages are placed into the Hiperspace cache so that I/O processing can be avoided on the next look-up. It is important to note that if the member pages have not been requested within the LRU time constraint, the pages are removed from the Hiperspace.

It is important to recognize the performance tradeoffs associated with Hiperspace caching, where reduced DASD I/O costs are offset by increased CPU and real storage usage. The CPU cost of Hiperspace caching is primarily due to the LRU, which periodically evaluates and identifies pages in the cache that are eligible for reuse. Another drawback when using the Hiperspace cache is that Hiperspace pages are the last to be stolen in a real storage-constrained environment. For this reason, the size of the Hiperspace might have to be limited when real storage is constrained.

BMF/Hiperspace and LLA/VLF
LLA/VLF enables the caching of load module directories as well as of the load modules themselves. LLA/VLF control is specified at a library level. With the inclusion of the BMF/Hiperspace cache, extra care should be taken to avoid unnecessary slowdowns. Any program object that is cached in Hiperspace and LLA/VLF concurrently will eventually be dropped from the Hiperspace cache due to inactivity. For this reason, it is better to prevent these program objects from going into the Hiperspace in the first place.

BMF/Hiperspace Caching closely parallels LLA/VLF in concept; both are used to offset the cost of I/O processing, and both have a maximum capacity of 2 GB. Hiperspace Caching is dynamic, using the LRU to purge member pages so that recent or more frequently used member pages are accessible without I/O operations. Since the LRU is constantly checking the status of member pages in the Hiperspace, CPU utilization is increased, though I/O processing is decreased. Striking a balance between these two factors is key to employing the Hiperspace efficiently and effectively. VLF always outperforms Hiperspace caching for program objects that VLF has the ability to manage. The Hiperspace advantage is that it can cache any data that can be placed in a PDSE member, whereas there are restrictions on which program objects VLF can manage.
VLF, at the OA45127 maintenance level, has the ability to cache program objects with one deferred segment. One deferred segment is a characteristic of all COBOL 5 program objects. Prior to this maintenance level, VLF requested PDSE Hiperspace caching for program objects with one deferred segment or multiple segments (RMODE=SPLIT) if the Hiperspace was active.
The correcting PTF for z/OS 2.2 (JBB7790) is UA76715, and this maintenance is included in the base of all subsequent releases.
It is important to note that the VLF RMODE=SPLIT restriction still exists at the OA45127 maintenance level.

How to make a PDSE eligible for Hiperspace Caching
Only SMS-managed PDSE data sets are eligible for Hiperspace caching, as eligibility is determined by the value of a STORCLAS Direct MSR parameter. For BMF/Hiperspace caching, it is only the must cache and do not cache flags set by SMS based on the MSR value that are of interest. PDSE does not process the MSR value directly.

An MSR value of less than 10 indicates must cache, which turns on the associated flag required by PDSE. An MSR value between 10 and 998 implies may cache; however, PDSE caches only if the must cache flag is on. Finally, an MSR value of 999 explicitly sets the do not cache flag. Be aware that changing the MSR value can affect other components that rely on it. To be sure of getting the expected cache activity, ensure that SMS-managed PDSEs are associated with a storage class that has appropriate MSR settings. PDSE data sets delivered as part of the operating system (or applications such as DB2) are generally not SMS-managed.

Caching can also occur regardless of the MSR value if LLA determines that a member cannot be cached in VLF but would otherwise be eligible for caching in the Hiperspace. In this case, LLA directs the Hiperspace to cache the member regardless of the must cache flag status, assuming the Hiperspace is enabled and has sufficient space.
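
As an illustration, one way to make a PDSE eligible is to assign it a storage class whose Direct MSR value turns on the must cache flag. The following is a minimal sketch only: the storage class name SCPDSEH, the data set filter, and the MSR value are hypothetical, and it assumes that SCPDSEH has already been defined through ISMF with a Direct millisecond response of 5 (must cache).

  • /* Hypothetical storage class ACS routine fragment            */
    /* SCPDSEH is assumed to be defined with Direct MSR = 5       */
    PROC STORCLAS
      FILTLIST HSPPDSE INCLUDE(PROD.PDSE.**)
      IF &DSN = &HSPPDSE THEN
        SET &STORCLAS = 'SCPDSEH'
    END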

Which members get cached?
It is not entirely accurate to ask which members get cached in the Hiperspace. In the Hiperspace, it is member pages that are cached when they are accessed or created. To better understand this concept, consider the Hiperspace process. When the DFSMSdfp buffer manager processes a request to read a PDSE member page that is eligible for caching, it first checks the Hiperspace to see whether it has the page. If the page is not found, the buffer manager retrieves the page from disk and copies it into the Hiperspace after reading it into the user's work area. Conversely, when the buffer manager writes a PDSE member page that is eligible for caching, it copies the page into the Hiperspace as it writes it. Because of these operations, the Hiperspace always holds a current copy, and any updates are made to the copy on disk at the same time. It is important to note that when the Hiperspace becomes full, new member pages are not cached until unreferenced or invalidated pages are removed.

When does the Hiperspace remove members?
The buffer manager uses the LRU to remove the oldest pages from the Hiperspace. When a member page has not been referenced within the LRU time-cycles, it is purged from the cache. Cached pages can also be purged when all connections to a PDSE are closed. When more than one program has a PDSE open for input, they can share the member pages in the Hiperspace. When the last program closes the data set, all of the pages are purged from the Hiperspace. This behavior can be overridden by using the IGDSMSxx PARMLIB parameter PDSE(1)_BUFFER_BEYOND_CLOSE, which retains the cached member pages of the data set until they complete their LRU time-cycles. This is useful for PDSEs that are frequently opened and closed.
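
For installations that want to keep pages cached across close, a minimal IGDSMSxx sketch for the PDSE1 address space follows; it assumes the parameter accepts a YES|NO value.

  • PDSE1_BUFFER_BEYOND_CLOSE(YES)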

Hiperspace caching IGDSMSxx parameters
To enable Hiperspace caching, there are a few IGDSMSxx PARMLIB parameters that must be set alongside the MSR. The first of these is PDSE_HSP_SIZE (PDSE1_HSP_SIZE).

  • PDSE_HSP_SIZE = x
    PDSE1_HSP_SIZE = x

This parameter can be used to request up to 2047 MB for the PDSE Hiperspace. To activate Hiperspace caching, PDSE(1)_HSP_SIZE must be set to a value greater than 0 MB. The default value is 0 MB, so Hiperspace caching is disabled by default. Once set at IPL, PDSE_HSP_SIZE cannot be altered. The PDSE1 Hiperspace can, however, be modified by using the SETSMS PDSE1_HSP_SIZE(xxxx) command and restarting the address space. This works only for the PDSE1 address space, not the PDSE address space, because the PDSE1 address space is restartable. (NOTE: APAR OA46328 fixes an issue where the Hiperspace cache failed to expand past 2 MB regardless of the PDSE(1)_HSP_SIZE parameter. This issue applies to z/OS 2.1 only.)
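
As an illustration (the sizes shown are hypothetical), the first line below is an IGDSMSxx entry that requests a 512 MB PDSE1 Hiperspace at IPL; the second is the SETSMS command that changes the requested size; and the third is the operator command, assumed to be available at your z/OS level, that restarts the PDSE1 address space so the new size takes effect.

  • PDSE1_HSP_SIZE(512)
    SETSMS PDSE1_HSP_SIZE(1024)
    V SMS,PDSE1,RESTART

The next parameter required is PDSE_LRUCYCLES (PDSE1_LRUCYCLES).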

  • PDSE_LRUCYCLES = x
    PDSE1_LRUCYCLES = x

This parameter controls the length of time, in cycles, that an unreferenced page may stay in the Hiperspace. The range of allowable values for LRUCYCLES is 5 to 240. The default value is 15 cycles. Lastly, the PDSE_LRUTIME (PDSE1_LRUTIME) parameter is required as well.

  • PDSE_LRUTIME = x
    PDSE1_LRUTIME = x

This parameter controls the frequency, in seconds, with which the checks related to LRUCYCLES are carried out. The range of allowable values for this parameter is 5 to 60. The default value is 60 seconds. Therefore, if PDSE(1)_LRUCYCLES and PDSE(1)_LRUTIME are not specified, or are left at their default values, the system removes member pages from the Hiperspace after 15 one-minute cycles have passed since their last use. Unlike PDSE(1)_HSP_SIZE, PDSE(1)_LRUCYCLES and PDSE(1)_LRUTIME may be adjusted without an IPL or restart by using the SETSMS command. It is also important to remember that the LRUCYCLES and LRUTIME parameters affect not only the Hiperspace but also the BMF cache; the LRU cannot be controlled separately for the BMF and the Hiperspace.
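
As an illustration with hypothetical values, the first two lines below are IGDSMSxx entries under which the LRU check runs every 30 seconds and unreferenced pages are purged after 60 cycles (roughly 30 minutes); the last two lines are the equivalent SETSMS commands for changing the values dynamically.

  • PDSE1_LRUTIME(30)
    PDSE1_LRUCYCLES(60)
    SETSMS PDSE1_LRUTIME(30)
    SETSMS PDSE1_LRUCYCLES(60)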

Verifying BMF/Hiperspace functionality
The storage-class summary section of the SMF type 42 subtype 1 record (written by DFSMS) gives an indication of BMF/Hiperspace cache efficacy. Subtype 1 records are created on a timed interval specified in the IGDSMSxx PARMLIB member. Subtype 1 summarizes the buffer manager hits (the number of page-read requests handled by BMF). These hits are instances where the BMF/Hiperspace cache returned the member page instead of I/O processing being required. The SMF type 14/15 subtype 6 record also gives an indication of member cache eligibility on a per-PDSE basis. More information on record type 42 subtype 1 and record type 14/15 subtype 6 can be found in the PDSE Usage Guide Redbook, Chapter 9, sections 6 and 8 (SG24-6106-01), as well as in the links in the following Related Information section. Running the command D SMS,PDSE[1],HSPSTATS can also provide information regarding BMF/Hiperspace usage. The output of this command is an IGW048I message in the following format(s).

  • HiperSpace Size: #### MB
    LRUTime : ### Seconds LRUCycles: ### Cycles
    BMF Time interval #### Seconds
    ---------data set name------------Cache--Always--DoNot
                                      Elig---Cache---Cache

    Pdsedataset1                        x      x       x

    Pdsedataset2                        x      x       x

         :              (x has a value of Y or N)

    PdsedatasetN                        x      x       x

In the message output above, the columns specifying Hiperspace usage are Cache Eligible, Always Cache, and Do Not Cache. The command determines the status of these three caching fields by looking at the flags set by SMS. The Always Cache field is marked YES when the must cache flag is on. The Cache Eligible field is marked YES when both the must cache flag is on AND the Hiperspace is enabled. Finally, the Do Not Cache field is marked YES if and only if the do not cache flag is on. It is important to note that non-SMS-managed PDSEs have NO in all three columns and are not eligible for Hiperspace caching, with the exception of members that LLA requests to be cached. If there are no active PDSEs at the time the command is issued, the message has the following format.

  • HiperSpace Size: #### MB
    LRUTime : ### Seconds LRUCycles: ### Cycles
    BMF Time interval #### Seconds

    ++ no PDSE datasets found

[{"Product":{"code":"SWG90","label":"z\/OS"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Component":"5695DF115 - DFSMS\/MVS PDSE AND FAMS","Platform":[{"code":"PF035","label":"z\/OS"}],"Version":"1.13;2.1;2.2","Edition":"","Line of Business":{"code":"LOB56","label":"Z HW"}},{"Product":{"code":"SWG90","label":"z\/OS"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Component":" ","Platform":[{"code":"","label":""}],"Version":"","Edition":"","Line of Business":{"code":"LOB56","label":"Z HW"}}]

Document Information

Modified date:
03 September 2021

UID

isg3T1022058