Troubleshooting
Problem
In a multi-cluster environment with remotely mounted file systems, where both the owning cluster and the accessing cluster are at the 5.0.0.x or 5.0.1.x code level with File Audit Logging enabled: if the owning cluster is upgraded first to 5.0.2.x and "mmchconfig --release=LATEST" is run on it, the file systems that are remotely mounted on the accessing clusters will panic and will no longer mount.
Symptom
When trying to mount the file system on the accessing cluster, the following error output is seen:
# mmmount /dev/fs0 -N node-vm1,node-vm2,node-vm3
Wed Sep 26 13:46:26 MST 2018: mmmount: Mounting file systems ...
node-vm1.gpfs.net: mmremote: Remount failed for file system /dev/fs0. Error code 19.
mmdsh: node-vm1.gpfs.net remote shell process had return code 19
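Note that error code 19 typically corresponds to errno ENODEV ("No such device") on Linux; in this case it reflects the daemon panicking the file system rather than a missing device.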
Cause
The accessing cluster at 5.0.0.x or 5.0.1.x cannot mount the file system owned by the cluster at 5.0.2.x because the down-level accessing cluster does not understand the new File Audit Logging policy code.
Diagnosing The Problem
If the owning cluster is upgraded from 5.0.0.x or 5.0.1.x to 5.0.2.x while the accessing cluster remains at 5.0.0.x or 5.0.1.x, the file system can no longer be remotely mounted on the accessing cluster once "mmchconfig --release=LATEST" has been run on the owning cluster.
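To confirm the code-level mismatch, compare the committed release level and daemon version on a node in each cluster. A minimal check (exact output varies by release):

# mmlsconfig minReleaseLevel     <- committed release level of the cluster; the owning cluster reports a 5.0.2.x level after "mmchconfig --release=LATEST"
# mmdiag --version               <- GPFS daemon version on the local node

If the owning cluster's minReleaseLevel is at 5.0.2.x while the accessing cluster's nodes are still running 5.0.0.x or 5.0.1.x daemons, this problem applies.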
Resolving The Problem
If this happens, either upgrade the accessing cluster to the 5.0.2.x code level, or disable File Audit Logging on the owning cluster until the accessing cluster (the cluster where the file system is remotely mounted) can be upgraded to the 5.0.2.x code stream.
To disable File Audit Logging on the owning cluster, run:
mmaudit <device> disable
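Once the accessing cluster has been upgraded to 5.0.2.x, File Audit Logging can be re-enabled on the owning cluster:

# mmaudit <device> enable

On 5.0.2.x, "mmaudit all list" can be used to check which file systems currently have File Audit Logging enabled.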
Document Information
Modified date:
28 October 2018
UID
ibm10734629