To maintain strict file system consistency, GFS Shared File System processing can slow down significantly during the following operations:
File access
Opening and closing a file for each successive read or write, instead of grouping all I/O within a single open and close.
Sequential writes with a small record size (less than 1 megabyte), which can fragment the file data.
Deleting 1000 or more open files at once.
If one or more of the above situations occurs, you can improve file system performance by changing the open/close frequency or timing of the target file, or by optimizing the I/O size, as in the sketch below.
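The following C sketch illustrates the first two points. It keeps a single open()/close() pair for all records and coalesces small application records into roughly 1 MB writes. The mount point /mnt/gfs, the file name, and the record and buffer sizes are illustrative assumptions, not values taken from this manual.

/* Sketch: write many small records through one open()/close() pair,
 * issuing the actual write(2) calls in large (~1 MB) units.
 * Path, record size, and buffer size are assumed for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define RECORDS  10000
#define RECSIZE  512                 /* small application record        */
#define IOSIZE   (1024 * 1024)       /* issue writes in ~1 MB units     */

int main(void)
{
    static char buf[IOSIZE];
    char record[RECSIZE];
    size_t used = 0;

    int fd = open("/mnt/gfs/data.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (int i = 0; i < RECORDS; i++) {
        memset(record, 'a' + (i % 26), RECSIZE);   /* build one record        */
        memcpy(buf + used, record, RECSIZE);       /* stage it in the buffer  */
        used += RECSIZE;
        if (used == IOSIZE) {                      /* flush a full 1 MB chunk */
            if (write(fd, buf, used) != (ssize_t)used) { perror("write"); return 1; }
            used = 0;
        }
    }
    if (used > 0 && write(fd, buf, used) != (ssize_t)used) { perror("write"); return 1; }

    close(fd);                                     /* one close for all I/O   */
    return 0;
}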
Conflict between nodes, and directory access
Frequently referencing from other nodes a file that is frequently updated from one or more nodes.
Frequently updating the same block in the same file from two or more nodes.
Frequently creating or deleting files or directories in the same directory from multiple nodes, and repeatedly monitoring directory updates with readdir(2) or stat(2).
Issuing ls(1) with the -l option and a command that requires the attributes of files in the directory, such as cp(1), du(1), or tar(1), at the same time in a directory containing more than 10,000 files.
If one or more of the above situations occurs, you can improve file system performance by changing the monitoring or access frequency, or by dividing the files into separate directories, as in the sketch below.
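As one way to divide files into separate directories, the C sketch below gives each node its own working subdirectory so that file creation and deletion from different nodes do not contend on a single directory. The base path /mnt/gfs/work and the per-hostname naming scheme are assumptions for illustration only.

/* Sketch: create this node's files under /mnt/gfs/work/<hostname>/
 * instead of in one directory shared by all nodes.
 * Base path and naming scheme are assumed for illustration. */
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    char host[256];
    char dir[PATH_MAX];
    char path[PATH_MAX];

    if (gethostname(host, sizeof(host)) != 0) { perror("gethostname"); return 1; }

    /* Per-node subdirectory: creation/deletion stays local to this directory. */
    snprintf(dir, sizeof(dir), "/mnt/gfs/work/%s", host);
    if (mkdir(dir, 0755) != 0 && errno != EEXIST) { perror("mkdir"); return 1; }

    for (int i = 0; i < 100; i++) {
        snprintf(path, sizeof(path), "%s/job.%d", dir, i);
        FILE *fp = fopen(path, "w");
        if (fp == NULL) { perror("fopen"); return 1; }
        fprintf(fp, "work item %d\n", i);
        fclose(fp);
    }
    return 0;
}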
CPU load or I/O load may be concentrated on the node where the MDS that manages the file system meta-data operates. A heavy load there indicates that operations requiring updates to the file system meta-data, such as file creation, deletion, or extension, are being performed frequently. In such cases, file system throughput may improve by optimizing the layout of the meta-data area and the update log area. An alternative is to increase the CPU throughput of the node where the MDS operates.
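On the application side, one way to reduce how often file extension generates meta-data updates is to reserve a file's expected final size before writing. The C sketch below uses posix_fallocate(3) for this; whether it helps depends on the workload, and the path and size shown are assumptions, not recommendations from this manual.

/* Sketch: reserve a file's expected final size up front so that later
 * writes extend the file less often, reducing extension-driven
 * meta-data updates. Path and size are assumed for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    const off_t expected_size = 256L * 1024 * 1024;   /* 256 MB, assumed */

    int fd = open("/mnt/gfs/bulk.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Allocate the space once instead of growing the file write by write. */
    int err = posix_fallocate(fd, 0, expected_size);
    if (err != 0) {
        fprintf(stderr, "posix_fallocate: error %d\n", err);
        close(fd);
        return 1;
    }

    /* ... application writes within the reserved range ... */

    close(fd);
    return 0;
}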