Security Bulletin: A vulnerability has been identified in IBM Spectrum Scale that could allow a local unprivileged user access to information located in dump files. User data could be sent to IBM during service engagements (CVE-2017-1654)

Summary

A vulnerability has been identified in IBM Spectrum Scale that could allow a local unprivileged user access to information located in dump files. User data could be sent to IBM during service engagements (CVE-2017-1654).

Vulnerability Details

CVEID: CVE-2017-1654
DESCRIPTION: IBM Spectrum Scale 4.1.1, 4.2.0, 4.2.1, 4.2.2, 4.2.3, and 5.0.0 could allow a local unprivileged user access to information located in dump files. User data could be sent to IBM during service engagements.
CVSS Base Score: 4.3
CVSS Temporal Score: See https://exchange.xforce.ibmcloud.com/vulnerabilities/133378 for the current score
CVSS Environmental Score*: Undefined
CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:C/C:L/I:N/A:N)

Affected Products and Versions

IBM Spectrum Scale V5.0.0

IBM Spectrum Scale V4.2.3.0 thru V4.2.3.6

IBM Spectrum Scale V4.2.2.0 thru V4.2.2.3

IBM Spectrum Scale V4.2.1.0 thru V4.2.1.2

IBM Spectrum Scale V4.2.0.0 thru V4.2.0.4

IBM Spectrum Scale V4.1.1.0 thru V4.1.1.18

IBM General Parallel File System V4.1.0.0 thru V4.1.0.8

Remediation/Fixes

For IBM Spectrum Scale V5.0.0.0, apply V5.0.0.1 available from FixCentral at
https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=Software%20defined%20storage&product=ibm/StorageSoftware/IBM+Spectrum+Scale&release=5.0.0&platform=All&function=all

For IBM Spectrum Scale V4.2.0.0 thru V4.2.3.6, apply V4.2.3.7 available from FixCentral at
https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=Software%20defined%20storage&product=ibm/StorageSoftware/IBM+Spectrum+Scale&release=4.2.3&platform=All&function=all

For IBM Spectrum Scale V4.1.0.0 (GPFS) thru V4.1.1.18, apply V4.1.1.19 available from FixCentral at
https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=Software%2Bdefined%2Bstorage&product=ibm/StorageSoftware/IBM+Spectrum+Scale&release=4.1.1&platform=All&function=all

Once the appropriate PTF is installed on a node, issue the mmstartup command on the node to restart Spectrum Scale and enable the fix.
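
A typical per-node sequence might look like the following (a sketch only; the PTF installation step itself depends on your platform and packaging):

# mmshutdown
  ... install the PTF on the node ...
# mmstartup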

This fix addresses file permissions of Spectrum Scale dump and trace files.

Spectrum Scale dump and trace files are generally created in the directory specified by the dataStructureDump configuration attribute of the mmchconfig command. If dataStructureDump is not explicitly set to a value, dump and trace files are created in /tmp/mmfs.
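
To display the current setting, the mmlsconfig command can be used, for example:

# mmlsconfig dataStructureDump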

Spectrum Scale dump and trace files might exist on any node in a Spectrum Scale cluster. Whether or not such files exist on a particular node depends on the history of the node. The dump directory might be empty if tracing was never started on the node and no event triggering the collection of problem determination data has ever occurred on the node. The dump directory might also be empty if it has been purged of aged files, perhaps directly by an administrator or through a cron job.

Note that if the dataStructureDump configuration attribute has been changed, dump and trace files might exist in both the former and current dump directories.

Following are some common file names that might be found in Spectrum Scale dump directories. This is not an exhaustive list:

internaldump.* - Internal state of Spectrum Scale
kthreads.* - Kernel thread stacks
extra.* - Additional system state
logdump.* - Dump of a Spectrum Scale recovery log (binary)
trcrpt.* - Formatted Spectrum Scale trace records
trcfile.* - Unformatted (binary) Spectrum Scale trace records
lxtrace.trc.* - Unformatted (binary) Spectrum Scale trace records (Linux only)
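
For example, a listing of a node's dump directory might show a mix of these files (illustrative names only; timestamps and node names will differ):

# ls /tmp/mmfs
internaldump.2018-03-05_15.38.44.1979.c40bbc3xn4
kthreads.2018-03-05_15.38.44.1979.c40bbc3xn4
trcfile.2018-03-05_15.40.01.5661.c40bbc3xn4
trcrpt.2018-03-05_15.40.01.5661.c40bbc3xn4.gz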

Non-privileged User Access to Dump and Trace Files
After the fix is enabled on a node, certain dump files and trace files that Spectrum Scale creates in the dump directory are created with restricted permissions. The permissions set for such a file grant the file's user owner and group owner read access to the file content, and deny other users any access to the file content. On UNIX systems, the file's user owner is typically root (user ID 0), and the file's group owner is typically the primary group of the root user. Different ownership is possible if the sticky bit is set on the dump directory, if the dump directory is configured to be within a remote file system, or if the node is running Windows.
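
For example, a dump file created after the fix is enabled might look like this (illustrative output; as noted above, the exact mode and ownership can vary):

# ls -l /tmp/mmfs/internaldump.2018-03-05_15.38.44.1979.c40bbc3xn4
-r--r----- 1 root root 2735596 Mar  5 15:38 /tmp/mmfs/internaldump.2018-03-05_15.38.44.1979.c40bbc3xn4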

Dump and trace files created before the application of the fix may have permissions that allow any user access to the file's content. Application of the fix affects the permissions of subsequently created files, but does not affect the permissions of already created dump and trace files.

As noted earlier, Spectrum Scale dump and trace files are generally created in the directory specified by the dataStructureDump configuration attribute of the mmchconfig command; if dataStructureDump is not explicitly set, they are created in /tmp/mmfs.

An administrator who wants to restrict access to existing dump and trace files can do so by changing the permissions of individual dump and trace files, or by changing the permissions of the dump directory.

When changing the permissions of an individual file, it is recommended that other users be given no access to the file. Here are some sample invocations of the chmod command that deny access for other users:

# chmod o= FILE...
# chmod o-rwx FILE...

Here are some additional sample invocations of the chmod command that also explicitly set read permissions for the owning user and group:

# chmod ug=r,o= FILE...
# chmod 440 FILE...
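
To apply such permissions to all existing files in the dump directory at once, a find invocation along these lines might be used (a sketch, assuming the default /tmp/mmfs dump directory):

# find /tmp/mmfs -type f -exec chmod o= {} +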

If changing the permissions of the dump directory, the simplest approach is to remove all access permissions to the directory for other users. Note that removing execute (x) access to the directory is what prevents a user from reaching file content through the directory; removing only read (r) access is not sufficient. Some examples:

# chmod o-rwx /tmp/mmfs
# chmod o-wx /tmp/mmfs

Transmission of User Data to IBM during Service Engagements
On a node on which a Spectrum Scale file system is mounted, file system updates originating from the node may be logged to allow caching of updates in memory while ensuring file system consistency in the event of node failure. Traditionally, the recovery log only contains information related to file system metadata. However, if highly-available write cache (HAWC) is enabled for the file system, user data may be written to the recovery log. If the node fails, the file system manager performs log recovery; it replays file system updates described in the recovery log.

Before the application of this fix, if log recovery fails, the file system manager node dumps the contents of the recovery log into a file in the dump directory. The file name's pattern is logdump.fsName.*, where fsName is the name of the file system. If HAWC is currently enabled for the file system, or if it has been enabled in the past, the logdump.fsName.* file could contain user data. If you do not want this data transmitted to IBM during a service engagement, remove the logdump.* files from the dump directory of each cluster node before running the gpfs.snap command.

After the application of this fix, if log recovery fails, the file system manager node by default does not dump the contents of any recovery log. However, existing logdump.* files created before the application of this fix might exist in the dump directory. As noted above, if you do not want this data transmitted to IBM during a service engagement, remove the logdump.* files from the dump directory of each cluster node before running the gpfs.snap command.
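
For example, on each cluster node (assuming the default /tmp/mmfs dump directory):

# rm -f /tmp/mmfs/logdump.*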

After the application of this fix, if you do want the file system manager to dump the contents of a recovery log for which recovery has failed, use the mmchconfig command to change the value of the allowUserDataDump configuration attribute to yes. The mmchconfig command option -i is supported for allowUserDataDump, putting the change in effect immediately.
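
For example, to put the change into effect immediately:

# mmchconfig allowUserDataDump=yes -i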

Note that if HAWC has never been enabled for any of the cluster's file systems, logdump.* files will not contain user data, whether they were created before the application of this fix, or were allowed to be created after the application of this fix by the value of the allowUserDataDump configuration attribute.
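
If you are unsure whether HAWC is currently enabled for a file system, one way to check is to query its write cache threshold, which is nonzero when HAWC is in use (a sketch; the --write-cache-threshold option of mmlsfs and the device name gpfs1 are assumptions for illustration):

# mmlsfs gpfs1 --write-cache-threshold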

In a Spectrum Scale internal dump file (a file in the dump directory with a name matching the pattern internaldump.*), the "dump files" section contains information about currently and recently opened files. The information for each file is labeled OpenFile. Internal dump files generated by some releases of Spectrum Scale might include data-in-inode data in the OpenFile entries. The phrase data-in-inode refers to a technique in which a file's data is placed within the file's inode, instead of in data blocks referenced by the inode. Placement of data within the inode can occur if the entire file content is small enough to fit within the inode.

For additional information on data-in-inode, see the Knowledge Center section entitled Use of disk storage and file structure within a GPFS File System at https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1ins_fstruct.htm (this is a subsection under the GPFS Architecture section of the Product Overview). While the data-in-inode feature has existed in Spectrum Scale file systems with format 3.5.0.0 or later, it was first described in the referenced section in the IBM Spectrum Scale V5.0.0 documentation.

Internal dump files generated after the application of this fix will not include data-in-inode data in OpenFile entries. However, internal dump files generated before the application of this fix might include data-in-inode data. Data-in-inode data might be placed in internal dump files generated by IBM Spectrum Scale V4.1.0.0 thru V4.1.0.3, V4.2.0.0 thru V4.2.0.4, V4.2.1.0 thru V4.2.1.2, or V4.2.2.0 thru V4.2.2.3.

You may want to identify existing internal dump files containing data-in-inode data so that you can prevent that data from being sent to IBM during a service engagement. You can delete the internaldump.* files containing data-in-inode data before any subsequent runs of the gpfs.snap command, or remove the data-in-inode data from the internal dump files using an editor or a shell script.
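
For example, a rough way to flag candidate files is to search for the line introducing the data-in-inode section (a sketch only; the findDataInInodeDump script described below is the supported and more thorough approach):

# grep -l 'Data \[' /tmp/mmfs/internaldump.*
# zgrep -l 'Data \[' /tmp/mmfs/internaldump.*.gz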

Here is an example of what an OpenFile entry might look like in an internaldump.* file:

OpenFile:  AAA5EE58A8C05C8C:000000000003C7BC:0000000000000000 @ 0x1801A1461D0
  cach 1 ref 1 hc 1 tc 6 mtx 0x1801A146208
  Inode: valid eff token rs @ 0x1801A146560, ctMode rs seq 3
  Mnode: valid eff token ro @ 0x1801A146650, ctMode ro Flags 0x30 (pfro+pfxw) seq 3
  DMAPI: invalid eff token nl @ 0x1801A146470, ctMode nl seq 2
  SMBOpen: valid eff token (A:R   D:   ) @ 0x1801A146260, ctMode (A:R   D:   ) seq 3
    lock state [ R(1) D: ] x [] flags [ ]
  SMBOpLk: valid eff token wf @ 0x1801A146370, ctMode wf Flags 0x30 (pfro+pfxw) seq 3
  BR: @ 0x1801A146750, ctMode nl Flags 0x10 (pfro) seq 3
    treeP 0x18018FF8000 C btFastTrack 0 1 ranges mode RO/XW:
    BLK [0,INF] mode RO node <0>
  Fcntl: @ 0x1801A146780, ctMode nl Flags 0x10 (pfro) seq 3
    treeP 0x18018FF8058 C btFastTrack 0 1 ranges mode RO/XW:
    BLK [0,INF] mode RO node <0>
  inode 247740 snap 0 USERFILE nlink 1 genNum 0xF9897BD mode 0200100644: -rw-r--r--
  tmmgr node <c0n1> (other)
  metanode <c0n2> (other) fail+panic count 0 flags 0x2, remoteStart 0 remoteCnt 0 localCnt 1 lastFrom 65535 switchCnt 0
  vfsReference 1
  dioCount 0 dioFlushNeeded 1 dioSkipCounter 0 dioReentryThreshold 0.000000
  openInstCount 1
  bufferListCount 1 bufferListChangeCount 1
  dirty status: clean
  SMB oplock state: nReaders 1
  inodeValid 1
  objectVersion 1
  flushVersion 1
  block size code 5 (32 subblocksPerFileBlock)
  dataBytesPerFileBlock 262144
  fileSize 61 synchedFileSize 61 indirectionLevel 0
  atime 1513037448.674232000
  mtime 1513105331.609254000
  ctime 1513105331.609232544
  crtime 1513037448.674232000
  last data block num 0, num blocks allocated 0
  lastBlockSubblocks 0, fragmentChanged 0 (count 0), fragmentSubblocks 0
  replicas: maxMeta 2 curMeta 1 maxData 2 curData 1
  indBlockHashKey1 0x0000000000000001, indAccessFlag 0
  inodeDAm: recordNum 247740 DiskAddrs: 1:292757472
  UpdateLogger 0x1801A146948 is clean
  fileModifiedSnapId 2 committed snap 2
  inodeCopiedSnapId 2 isUnknown dataCopiedSnapId 2
  lrocBufP 0x0 hdrComp civPtrsInvalid (1) priority 0
  Winattrs: a 1
  Data [4040]:
0000000000000000: 32303137 2F31322F 31322D31 343A3032  *2017/12/12-14:02*
0000000000000010: 3A31312E 36303838 31353934 343A204D  *:11.608815944:.M*
0000000000000020: 7920686F 76657263 72616674 20697320  *y.hovercraft.is.*
0000000000000030: 66756C6C 206F6620 65656C73 0A000000  *full.of.eels....*
0000000000000040: 00000000 00000000 00000000 00000000  *................*
 ...
0000000000000FC0: 00000000 00000000 D0A51019 00C9FFFF  *................*

The data-in-inode data starts with the line Data [4040]: and, in this instance, continues through the line labeled 0000000000000FC0:. This data shows file content, in hexadecimal and interpreted as ASCII; it is this data that should not exist in the internal dump. In this section, lines containing only " ..." indicate subsequent portions of data that repeat. The actual length of the dumped data depends on the size of inodes in the file system, so the number in brackets on the Data line introducing the section can vary.

If you write your own script to remove data-in-inode data, it is important to realize that the data-in-inode section can be immediately followed by additional information for the OpenFile entry.
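
A minimal awk sketch along these lines removes the Data line and the dump lines that follow it, while preserving whatever OpenFile information comes immediately after the section (assumptions: the input file is uncompressed and the dump lines match the formats shown above; internaldump.example is a placeholder name, and the output should be verified before the original is discarded):

# awk '/ Data \[[0-9]+\]:$/ {skip=1; next} skip && (/^[0-9A-Fa-f]+: / || /^ *\.\.\.$/) {next} {skip=0; print}' internaldump.example > internaldump.example.cleaned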

The internaldump.* files might or might not be compressed. If they are compressed, the file suffix suggests the method of compression. Files ending with .gz were compressed using the gzip command. Files ending with .Z were compressed with the compress command (on AIX).

A shell script named findDataInInodeDump, which identifies internal dump files that include data-in-inode data and can optionally remove that data, is available from IBM Service. To contact IBM Service, see http://www.ibm.com/planetwide/.

The findDataInInodeDump script takes as input a list of directories in which internaldump.* files might be found. When invoked without the --fix option, the script reports information about internaldump.* files found in the specified directories that include data-in-inode data. For example:

# findDataInInodeDump /tmp/mmfs
/tmp/mmfs/internaldump.2018-03-05_15.35.06.19131.c40bbc3xn4.gz has 102 instance(s) of dumped data-in-inodes
/tmp/mmfs/internaldump.2018-03-05_15.35.27.signal.17254.c40bbc3xn4.gz has 102 instance(s) of dumped data-in-inodes
/tmp/mmfs/internaldump.2018-03-05_15.38.44.1979.c40bbc3xn4 has 9 instance(s) of dumped data-in-inodes
/tmp/mmfs/internaldump.2018-03-05_15.39.45.5661.c40bbc3xn4 has 9 instance(s) of dumped data-in-inodes
Summary: 12 internaldump files found, 4 with dumped data-in-inodes

When invoked with the --fix option, the script removes the data-in-inode data found in internaldump.* files in the specified directories. For example:

# findDataInInodeDump --fix /tmp/mmfs
/tmp/mmfs/internaldump.2018-03-05_15.35.06.19131.c40bbc3xn4.gz has 102 instance(s) of dumped data-in-inodes ... fixed
/tmp/mmfs/internaldump.2018-03-05_15.35.27.signal.17254.c40bbc3xn4.gz has 102 instance(s) of dumped data-in-inodes ... fixed
/tmp/mmfs/internaldump.2018-03-05_15.38.44.1979.c40bbc3xn4 has 9 instance(s) of dumped data-in-inodes ... fixed
/tmp/mmfs/internaldump.2018-03-05_15.39.45.5661.c40bbc3xn4 has 9 instance(s) of dumped data-in-inodes ... fixed
Summary: 12 internaldump files found, 4 with dumped data-in-inodes

If you cannot apply the latest level of service, contact IBM Service for an efix:

  • IBM Spectrum Scale 5.0.0.0, reference APAR IJ03165
  • IBM Spectrum Scale 4.2.0.0 thru 4.2.3.6, reference APAR IJ03164
  • IBM Spectrum Scale 4.1.0.0 thru 4.1.1.18, reference APAR IJ03140


To contact IBM Service, see http://www.ibm.com/planetwide/

Workarounds and Mitigations


Non-privileged User Access to Dump and Trace Files
Before the application of this fix, an administrator might consider applying permissions to the dump directory (specified by the dataStructureDump configuration attribute; /tmp/mmfs, by default) to deny non-privileged users access to the files contained therein.

For example, on Linux:

# chmod o= /tmp/mmfs

# ls -ld /tmp/mmfs
drwxrwx--- 2 root root 4423680 Feb  6 14:10 /tmp/mmfs

Transmission of User Data to IBM during Service Engagements
On a node on which a Spectrum Scale file system is mounted, file system updates originating from the node may be logged to allow caching of updates in memory while ensuring file system consistency in the event of node failure. Traditionally, the recovery log only contains information related to file system metadata. However, if HAWC is enabled for the file system, user data may be written to the recovery log. If the node fails, the file system manager performs log recovery; it replays file system updates described in the recovery log.

Before the application of this fix, if log recovery fails, the file system manager node dumps the contents of the recovery log into a file in the dump directory. The file name's pattern is logdump.fsName.*, where fsName is the name of the file system. If HAWC is currently enabled for the file system, or if it has been enabled in the past, the logdump.fsName.* file could contain user data. If you do not want this data transmitted to IBM during a service engagement, remove the logdump.* files from the dump directory of each cluster node before running the gpfs.snap command.

Before the application of this fix, a generated Spectrum Scale internal dump file might include data-in-inode data in OpenFile entries. This is described in detail in the Remediation/Fixes section of this bulletin. If you do not want this data transmitted to IBM during a service engagement, remove the data-in-inode data from existing internaldump.* files in the dump directory of each cluster node, perhaps with the findDataInInodeDump script, before running the gpfs.snap command. However, without this fix, the gpfs.snap command might, under some circumstances, generate a new internal dump file that includes data-in-inode data, and include the newly generated internal dump file in the tar file to be sent to IBM Service. For example, this can occur if the --deadlock option is specified without the --quick option when invoking the gpfs.snap command. Application of the fix is the surest way to ensure data-in-inode data is not transmitted to IBM during a service engagement.
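
For example, a cleanup-then-collect sequence might look like the following (a sketch, assuming the default dump directory; the findDataInInodeDump script is obtained from IBM Service as described in the Remediation/Fixes section):

# rm -f /tmp/mmfs/logdump.*
# findDataInInodeDump --fix /tmp/mmfs
# gpfs.snap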

Data-in-inode data might be placed in internal dump files generated by IBM Spectrum Scale V4.1.0.0 thru V4.1.0.3, V4.2.0.0 thru V4.2.0.4, V4.2.1.0 thru V4.2.1.2, or V4.2.2.0 thru V4.2.2.3.

References


Change History

3 May 2018: Updated Version Published
26 Feb 2018: Original Version Published

*The CVSS Environment Score is customer environment specific and will ultimately impact the Overall CVSS Score. Customers can evaluate the impact of this vulnerability in their environments by accessing the links in the Reference section of this Security Bulletin.

Disclaimer

Review the IBM security bulletin disclaimer and definitions regarding your responsibilities for assessing potential impact of security vulnerabilities to your environment.

[{"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"STXKQY","label":"IBM Spectrum Scale"},"Component":"--","Platform":[{"code":"PF002","label":"AIX"},{"code":"PF016","label":"Linux"},{"code":"PF033","label":"Windows"}],"Version":"4.1.1;4.2.0;4.2.1;4.2.2;4.2.3;5.0.0","Edition":"Advanced;Standard","Line of Business":{"code":"LOB26","label":"Storage"}}]

Document Information

Modified date:
01 August 2018

UID

ssg1S1010869