POSIX programs read and write files or read directories within
an open-close bracket, whereas VFS servers do this directly from the
vnodes that they have looked up or created.
The LFS inserts a single open-close bracket around the operations
that are issued by a VFS server against regular files. Operations
that affect a file's attributes or read a directory may or may not
be preceded by an open, and a PFS has to be prepared for either case.
In particular, a file's size may be changed with the truncate() function,
which results in a call to vn_setattr without a preceding vn_open.
The PFS must perform two main functions to support reading and
writing, both of which tend to be done only once:
- Physically prepare to do the I/O. This may involve getting buffers
ready or using lower-layer protocols for a device or access method.
- Perform access checking.
Note that, for performance, as few access checks as possible should
be performed when a particular end user accesses a particular file.
Serialization: Both vn_open and vn_close are invoked under
an exclusive vnode latch.
The PFS is expected to do the following:
- During vn_open:
- Perform access checks. This must be done here for POSIX users.
- Prepare for I/O, if necessary.
- Increment an open counter in the inode for regular files.
- During reading or writing:
Perform access checks if the Check Access bit is on in the UIO.
- During vn_close:
- Perform any I/O that is necessary, instead of deferring it to
the vn_inactive call. Examples include saving the contents of data
buffers to disk and updating access times. This allows I/O to be charged
back to the end user, whereas I/O that is done during vn_inactive
is charged to z/OS UNIX.
- Decrement the inode's open counter for regular files. If this
goes to zero and the file's link count is zero, the file's data blocks
are deleted and their space is reclaimed before the return from vn_close.
For best performance, a
PFS that reclaims space on the last vn_close of a deleted file should
set the immeddel bit in the PFSI during initialization; otherwise,
the LFS issues vn_trunc unnecessarily.
- Perform the minimum amount of other cleanup. It is better to
defer cleanup to vn_inactive processing. Even when no one still refers
to the file (something that is not apparent to the PFS), performance is
better if the PFS lets LFS file caching reuse a closed file with
minimal overhead.
- During vn_inactive, or vfs_inactive if the PFS supports batch
inactive:
Perform
final cleanup for the file or directory inode. This operation runs
on a z/OS UNIX system
task with the containing file system locked, so the PFS should complete
this cleanup as quickly as possible, avoiding waits and I/O.
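The open-counter bookkeeping described above can be sketched as follows. All names (pfs_inode, pfs_vn_open, pfs_vn_close) are invented, not the real LFS vnode-operation signatures, and access checking and buffer handling are elided:

```c
struct pfs_inode {
    int open_count;   /* number of outstanding vn_opens */
    int link_count;   /* directory links to this file */
    int data_blocks;  /* nonzero while the file's data is allocated */
};

/* vn_open: check access and prepare for I/O (both elided here),
 * then count the open. */
void pfs_vn_open(struct pfs_inode *ip)
{
    ip->open_count++;
}

/* vn_close: flush dirty buffers and update access times here (elided)
 * so the I/O is charged to the end user, then count the close.  On the
 * last close of a file whose link count is zero, reclaim the data
 * blocks before returning. */
void pfs_vn_close(struct pfs_inode *ip)
{
    ip->open_count--;
    if (ip->open_count == 0 && ip->link_count == 0)
        ip->data_blocks = 0;   /* space reclaimed on last close */
}
```

Note that only the combination of open_count reaching zero and link_count already being zero triggers reclamation; a vn_close while other opens are outstanding must leave the data intact.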
If this process is followed, the access credentials of POSIX users
are checked only during their open() call. A VFS
server that maintains state information requests access checking for
the first reference by a particular end user to a particular file,
but not for subsequent references. A VFS server without this state
knowledge must pay the price of access checks on every reference.
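The Check Access decision during reading and writing can be sketched as follows. The UIO layout, flag name, and access-check stand-in are all invented; a real PFS would consult the actual UIO bit and call SAF:

```c
#define UIO_CHKACC 0x01           /* stand-in for the UIO Check Access bit */

struct uio { int flags; };        /* invented, minimal UIO */

int user_has_access;              /* stand-in for a real SAF/RACF check */

/* vn_rdwr: check access only when the LFS asks for it.  POSIX users
 * were checked at vn_open, so their requests arrive with the bit off;
 * a stateless VFS server causes the bit to be on for every request. */
int pfs_vn_rdwr(const struct uio *u)
{
    if ((u->flags & UIO_CHKACC) && !user_has_access)
        return -1;                /* would be EACCES */
    return 0;                     /* proceed with the I/O (elided) */
}
```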
The LFS builds and manages the file descriptors that are used by
POSIX programs.
The vn_open-vn_close pair has the following characteristics:
- There may be many vn_opens issued for the same file or directory,
and any number may be outstanding at a given time.
- The LFS may share a single vn_open with many users, because of
forking or VFS server usage. This sharing is not apparent, nor is
it of concern, to the PFS.
- For every vn_open that is seen by the PFS, there is a corresponding
vn_close. Because many vn_opens may be active, receiving a vn_close
does not mean that the file is no longer in use. The
PFS gets no indication that a particular vn_close is the "last
close", so it needs to maintain an "open counter" to control
the deletion of data blocks for removed regular files.
- If the PFS needs to maintain an open context for file operations,
it can return an 8-byte Open_token from vn_open; the
LFS passes this token back to the PFS on vnode operations that
are invoked from within that open context. See PFS Open Context and the Open_token for more information.
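One common way to implement such a token (purely illustrative; the interface only requires that the PFS return 8 bytes from vn_open and receive them back) is to use the address of a per-open context block:

```c
#include <stdint.h>

struct open_ctx {                 /* invented per-open state */
    long     cursor;              /* e.g. a directory read position */
    uint32_t flags;
};

/* vn_open: hand the LFS the context address as the 8-byte Open_token. */
uint64_t make_open_token(struct open_ctx *ctx)
{
    return (uint64_t)(uintptr_t)ctx;
}

/* Any vnode operation in this open context: recover the per-open
 * state from the token that the LFS passed back. */
struct open_ctx *ctx_from_token(uint64_t token)
{
    return (struct open_ctx *)(uintptr_t)token;
}
```

Because the token is opaque to the LFS, the PFS is free to store an index, a generation-checked handle, or any other 8-byte value instead of a raw address.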